Intelligent Resampling Methods for Video Compression

  • Mariana Fernandez Afonso

Student thesis: Doctoral Thesis, Doctor of Philosophy (PhD)


Video compression is the core process that allows efficient storage and transmission of digital video. The main concept is to exploit redundancies in the video signal, with the goal of achieving the best possible trade-off between bitrate and quality. This very active research field has seen great improvements in the last 30 years, from the simple methods of the early 1990s to the more complex but highly optimized systems of modern compression formats. At the same time, improvements in consumer devices and the availability of faster Internet connections have driven the growing popularity of digital video. Nonetheless, with consumers demanding ever better quality and more immersive experiences, the pressure for improved compression algorithms persists. Since current systems are already highly optimized, the challenge is to find innovative techniques that can provide further enhancements.
This thesis explores novel intelligent resampling-based methods for future video compression systems, from texture synthesis to spatial resolution adaptation. Texture synthesis has previously been proposed for the coding of difficult scenes, especially those featuring dynamic textures. However, the classification of dynamic textures is usually too broad and not well understood. For this reason, an extensive analysis of encoding statistics is presented, based on a new homogeneous texture dataset. In addition, limitations of prior dynamic texture synthesis approaches are identified and a new method is proposed. Moreover, we explore another type of resampling approach, known as spatial resolution adaptation. Our proposed framework dynamically adapts the spatial resolution of the input video based on a prediction that uses low-level visual features, and later reconstructs the original resolution video at the decoder. Several methods are studied to improve the quality of the reconstructed video, including the use of Convolutional Neural Networks (CNNs) trained on a large dataset of videos using both pixel-based loss functions and a perceptually-inspired framework.
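The core idea of spatial resolution adaptation can be illustrated with a toy sketch: the encoder downsamples a frame (so fewer samples are coded), and the decoder reconstructs the original resolution. The sketch below is purely illustrative and is not the thesis implementation; it uses 2x2 block averaging for downsampling and nearest-neighbour replication for reconstruction, whereas the proposed framework uses feature-based resolution prediction and CNN-based reconstruction.

```python
# Illustrative sketch (not the thesis method) of spatial resolution
# adaptation: downsample before encoding, reconstruct at the decoder.

def downsample_2x(frame):
    """Halve each dimension by averaging 2x2 blocks of pixel values."""
    h, w = len(frame), len(frame[0])
    return [
        [
            (frame[2 * i][2 * j] + frame[2 * i][2 * j + 1] +
             frame[2 * i + 1][2 * j] + frame[2 * i + 1][2 * j + 1]) / 4.0
            for j in range(w // 2)
        ]
        for i in range(h // 2)
    ]

def upsample_2x(frame):
    """Reconstruct full resolution by nearest-neighbour replication
    (the thesis replaces this step with a trained CNN)."""
    return [
        [frame[i // 2][j // 2] for j in range(2 * len(frame[0]))]
        for i in range(2 * len(frame))
    ]

# A tiny 4x4 luma frame as a stand-in for real video data.
frame = [[float(10 * i + j) for j in range(4)] for i in range(4)]

low = downsample_2x(frame)   # 2x2: only a quarter of the samples to encode
rec = upsample_2x(low)       # back to 4x4 at the decoder

# Mean squared reconstruction error: the quality cost traded for bitrate.
mse = sum((a - b) ** 2
          for row_f, row_r in zip(frame, rec)
          for a, b in zip(row_f, row_r)) / 16.0
```

The bitrate/quality trade-off is visible even in this toy: the encoder handles a quarter of the samples, at the cost of a nonzero reconstruction error that a learned upsampler aims to minimize.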
Experimental results show that the proposed spatial resolution adaptation system achieves significant gains over HEVC using objective quality metrics, visual comparisons and subjective tests. In addition, an early version of the proposed approach was submitted to the Video Compression Technology Grand Challenge at ICIP 2017, and won first prize. For these reasons, we believe that this work provides a valuable contribution and offers significant potential for enhancing video coding systems.
Date of Award: 25 Jun 2019
Original language: English
Awarding Institution:
  • The University of Bristol
Supervisors: Dimitris Agrafiotis (Supervisor) & David R Bull (Supervisor)


Keywords:
  • Video compression
  • Video coding
  • Convolutional Neural Networks
  • Deep learning
  • Spatial resolution adaptation
  • HEVC
  • Perceptual video coding
