Video Compression With CNN-Based Postprocessing

Fan Zhang, Di Ma, Chen Feng, David R. Bull

Research output: Contribution to journal › Article (Academic Journal) › peer-review

39 Citations (Scopus)

Abstract

In recent years, video compression techniques have been significantly challenged by the rapidly increasing demand for high-quality and immersive video content. Among various compression tools, postprocessing can be applied to reconstructed video content to mitigate visible compression artefacts and to enhance overall perceptual quality. Inspired by advances in deep learning, we propose a new convolutional neural network (CNN) based postprocessing approach, which has been integrated with two state-of-the-art coding standards: Versatile Video Coding (VVC) and AOMedia Video 1 (AV1). The results show consistent coding gains on all tested sequences at various spatial resolutions, with average bit rate savings of 4.0% and 5.8% against original VVC and AV1, respectively (based on the assessment of peak signal-to-noise ratio). The network has also been trained with perceptually inspired loss functions, which further improve reconstruction quality under perceptual quality assessment (VMAF), with average coding gains of 13.9% over VVC and 10.5% over AV1.
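The objective coding gains above are reported against PSNR (peak signal-to-noise ratio), the standard fidelity metric for comparing a decoded (and here, postprocessed) frame with its uncompressed original. As a minimal illustrative sketch (not the paper's evaluation code), PSNR for 8-bit frames can be computed as follows; the example arrays are hypothetical:

```python
import numpy as np

def psnr(ref, rec, max_val=255.0):
    """PSNR in dB between a reference frame and a reconstructed frame."""
    mse = np.mean((ref.astype(np.float64) - rec.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10((max_val ** 2) / mse)

# Hypothetical 4x4 8-bit luma patches: reconstruction is uniformly off by 10
ref = np.full((4, 4), 100, dtype=np.uint8)
rec = np.full((4, 4), 110, dtype=np.uint8)
print(round(psnr(ref, rec), 2))  # MSE = 100 -> about 28.13 dB
```

A postprocessing network that reduces the reconstruction error raises this value; averaging the resulting rate-distortion differences over the test sequences yields the bit rate savings quoted in the abstract.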
Original language: English
Pages (from-to): 74-83
Number of pages: 10
Journal: IEEE MultiMedia
Volume: 28
Issue number: 4
Early online date: 18 Jan 2021
Publication status: Published - 1 Oct 2021

