Temporal interpolation has the potential to be a powerful tool for video compression. Existing frame interpolation methods do not discriminate between video textures and generally employ a single general model intended to interpolate a wide range of video content. However, past work on video texture analysis and synthesis has shown that different textures exhibit vastly different motion characteristics and can be divided into three classes (static, dynamic continuous, and dynamic discrete). In this work, we study the impact of video textures on video frame interpolation and propose a novel framework in which, given an interpolation algorithm, separate models are trained on different textures. Our study shows that video texture has a significant impact on the performance of frame interpolation models, and that it is beneficial to have separate models specifically adapted to these texture classes rather than a single model that tries to learn generic motion. Our results demonstrate that models fine-tuned using our framework achieve, on average, a 0.3 dB gain in PSNR on the test set used.
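The framework described above can be sketched as a dispatch step: classify the texture of an input frame pair, then route it to the model trained for that texture class. The classifier and the per-class "models" below are illustrative placeholders (simple statistics and linear blending), not the paper's actual networks; all thresholds and function names are assumptions.

```python
# Hedged sketch of texture-aware dispatch: route a frame pair to one of
# three per-texture interpolation models. Everything here is a toy
# stand-in for the learned components described in the abstract.

TEXTURE_CLASSES = ("static", "dynamic_continuous", "dynamic_discrete")

def classify_texture(frame_a, frame_b, motion_thresh=1.0, regularity_thresh=0.5):
    """Toy classifier (assumption, not the paper's method): mean absolute
    temporal difference separates static from dynamic; the variance of
    the differences splits continuous (regular) from discrete (irregular)."""
    diffs = [abs(a - b) for a, b in zip(frame_a, frame_b)]
    mean_motion = sum(diffs) / len(diffs)
    if mean_motion < motion_thresh:
        return "static"
    var_motion = sum((d - mean_motion) ** 2 for d in diffs) / len(diffs)
    return "dynamic_continuous" if var_motion < regularity_thresh else "dynamic_discrete"

def interpolate(frame_a, frame_b, models):
    """Pick the per-texture model and synthesize the middle frame."""
    texture_class = classify_texture(frame_a, frame_b)
    return models[texture_class](frame_a, frame_b)

# Stand-in per-class models: each is just pixel-wise linear blending here;
# in the paper these would be separately fine-tuned interpolation networks.
def blend(frame_a, frame_b):
    return [(x + y) / 2 for x, y in zip(frame_a, frame_b)]

models = {cls: blend for cls in TEXTURE_CLASSES}
```

In a real system the dictionary would map each class to a separately fine-tuned copy of the base interpolation network, which is the mechanism the abstract credits with the average 0.3 dB PSNR gain.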
Publication status: Published - Jun 2021
Event: IEEE Picture Coding Symposium - Bristol, United Kingdom
Duration: 29 Jun 2021 → 2 Jul 2021
Bibliographical note: This work was supported by the China Scholarship Council - University of Bristol Scholarship (Grant No. 202008060038).
© 2021 IEEE.
- Video Frame Interpolation
- Video Super-Resolution