Compressing Deep Image Super-resolution Models

Yuxuan Jiang*, Jakub T Nawala, Fan Zhang, David R Bull

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference Contribution (Conference Proceeding)


Deep learning techniques have been applied in the context of image super-resolution (SR), achieving remarkable advances in terms of reconstruction performance. Existing techniques typically employ highly complex model structures which result in large model sizes and slow inference speeds. This often leads to high energy consumption and restricts their adoption for practical applications. To address this issue, this work employs a three-stage workflow for compressing deep SR models which significantly reduces their memory requirement. Restoration performance has been maintained through teacher–student knowledge distillation using a newly designed distillation loss. We have applied this approach to two popular image super-resolution networks, SwinIR and EDSR, to demonstrate its effectiveness. The resulting compact models, SwinIRmini and EDSRmini, attain an 89% and 96% reduction in both model size and floating-point operations (FLOPs) respectively, compared to their original versions. They also retain competitive super-resolution performance compared to their original models and other commonly used SR approaches. The source code and pre-trained models for these two lightweight SR approaches are released at
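The abstract describes maintaining restoration quality through teacher–student knowledge distillation with a newly designed distillation loss. The paper's actual loss formulation is not given here, so the sketch below is only an illustrative example of the general teacher–student idea: a student loss that blends a reconstruction term (student vs. ground truth) with a mimicking term (student vs. teacher output), weighted by a hypothetical balancing factor `alpha`. All names and the L1 formulation are assumptions, not the authors' design.

```python
def l1_loss(a, b):
    """Mean absolute error between two equal-length sequences of pixel values."""
    assert len(a) == len(b)
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def distillation_loss(student_out, teacher_out, ground_truth, alpha=0.5):
    """Generic teacher-student distillation loss (illustrative only).

    Combines a reconstruction term (student vs. ground truth) with a
    teacher-mimicking term (student vs. teacher output). `alpha` is a
    hypothetical weight; the paper's own loss design may differ.
    """
    recon = l1_loss(student_out, ground_truth)   # supervise with ground truth
    mimic = l1_loss(student_out, teacher_out)    # transfer teacher knowledge
    return (1 - alpha) * recon + alpha * mimic

# Toy usage: a perfect student incurs zero loss.
perfect = distillation_loss([1.0, 2.0], [1.0, 2.0], [1.0, 2.0])
```

In a real training loop the student (e.g. SwinIRmini or EDSRmini) would minimise this loss while the pre-trained teacher (SwinIR or EDSR) is kept frozen.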
Original language: English
Title of host publication: 2024 Picture Coding Symposium (PCS)
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Publication status: Accepted/In press - 5 Feb 2024
Event: Picture Coding Symposium 2024 - Millennium Hotel Taichung, Taichung, Taiwan
Duration: 12 Jun 2024 – 14 Jun 2024


Conference: Picture Coding Symposium 2024
Abbreviated title: PCS 2024


  • image super-resolution
  • complexity reduction
  • model compression
  • knowledge distillation


