Texture-aware Video Frame Interpolation

Research output: Contribution to conference › Conference Paper › peer-review

3 Citations (Scopus)


Temporal interpolation has the potential to be a powerful tool for video compression. Existing methods for frame interpolation do not discriminate between video textures and generally invoke a single general model capable of interpolating a wide range of video content. However, past work on video texture analysis and synthesis has shown that different textures exhibit vastly different motion characteristics and can be divided into three classes (static, dynamic continuous and dynamic discrete). In this work, we study the impact of video textures on video frame interpolation and propose a novel framework in which, given an interpolation algorithm, separate models are trained on different textures. Our study shows that video texture has a significant impact on the performance of frame interpolation models, and that it is beneficial to have separate models specifically adapted to these texture classes rather than a single model that tries to learn generic motion. Our results demonstrate that models fine-tuned using our framework achieve, on average, a 0.3 dB gain in PSNR on the test set used.
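The classify-then-dispatch idea described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the class names follow the three texture classes named above, but `classify_texture`, `make_model` and the routing table are hypothetical placeholders for a real texture classifier and fine-tuned interpolation networks.

```python
# Hypothetical sketch: route each frame pair to a model fine-tuned for
# its texture class. All function and variable names are illustrative.

TEXTURE_CLASSES = ("static", "dynamic_continuous", "dynamic_discrete")

def classify_texture(frame_pair):
    # Placeholder classifier: a real system would analyse the motion and
    # texture statistics of the two input frames. Here we just read a label.
    return frame_pair["texture"]

def make_model(name):
    # Stand-in for an interpolation network fine-tuned on one texture class.
    def interpolate(frame_pair):
        return f"interpolated by {name} model"
    return interpolate

# One model per texture class, instead of a single generic model.
MODELS = {cls: make_model(cls) for cls in TEXTURE_CLASSES}

def interpolate_texture_aware(frame_pair):
    # Classify first, then dispatch to the class-specific model.
    return MODELS[classify_texture(frame_pair)](frame_pair)

print(interpolate_texture_aware({"texture": "static"}))
# → interpolated by static model
```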
Original language: English
Publication status: Published - Jun 2021
Event: IEEE Picture Coding Symposium - Bristol, United Kingdom
Duration: 29 Jun 2021 - 2 Jul 2021


Conference: IEEE Picture Coding Symposium
Abbreviated title: PCS
Country/Territory: United Kingdom

Bibliographical note

Funding Information:
This work was supported by the China Scholarship Council - University of Bristol Scholarship, Grant No. 202008060038.

Publisher Copyright:
© 2021 IEEE.


  • Video Frame Interpolation
  • Video Super-Resolution
