The quality of images is a major factor affecting the performance of computer vision systems. Low-resolution images degrade both human visual perception and the accuracy of computer vision systems. Efficiently improving image resolution and image quality is therefore of great importance in military security, medical health, and industrial production. Increasing image resolution through hardware is costly, whereas restoring low-resolution images with software algorithms is more efficient, cheaper, and more convenient. In recent years, deep learning-based image super-resolution (SR) methods have made great progress and substantially improved the effectiveness of SR algorithms. However, these methods require large amounts of storage and computation, which makes them difficult to deploy on resource-limited devices. Some works have proposed model compression methods to address this problem, but shortcomings remain. First, most existing methods achieve model compression by designing compact network structures, which rely heavily on expert experience and are difficult to generalize to other models. Second, existing methods based on model quantization only reduce model storage, neglecting both model computation and runtime.
Third, existing methods consider the problem from a static perspective, without accounting for the characteristics of different inputs and adapting the model to them. In this paper, we address these problems through research on image super-resolution model compression and dynamic inference. The main contributions of this paper are as follows:

(1) Using self-distillation and contrastive learning, this paper proposes a framework that can simultaneously compress and accelerate various off-the-shelf SR models. The method first constructs a weight-sharing self-distillation model and introduces a contrastive loss to explicitly strengthen knowledge transfer within each "teacher-student" pair, which effectively improves the performance of the "student" network. The method is plug-and-play and can be flexibly applied to any CNN-based super-resolution network.

(2) To improve the model's performance-computation balance, the number of branches in the model is further extended. Considering the interdependence among branches as their number grows, this paper proposes a progressive knowledge distillation method that gradually distills the knowledge of the "teacher" branch into the narrower branches, further improving overall model performance.

(3) Considering that input images differ in restoration difficulty, this paper proposes a difficulty-aware dynamic inference method for image super-resolution tasks. By exploiting the characteristics of the input image, the best performance-computation balance can be achieved.

In summary, this paper uses self-distillation, contrastive learning, and dynamic inference to improve and innovate on model compression algorithms for image super-resolution models. Extensive experiments on several publicly available datasets and backbone networks verify the superiority of the proposed methods.
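The contrastive distillation idea in contribution (1) can be illustrated with a minimal numpy sketch. The function name `contrastive_distill_loss` and the choice of a degraded reference (e.g. a bicubic upsample) as the negative example are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def contrastive_distill_loss(student_out, teacher_out, negative_out, eps=1e-8):
    # Pull the student's output toward the teacher's (the positive) while
    # pushing it away from a degraded reference (the negative); all three
    # arrays share the same shape, e.g. (H, W, C).
    pos = np.mean(np.abs(student_out - teacher_out))   # distance to positive
    neg = np.mean(np.abs(student_out - negative_out))  # distance to negative
    return pos / (neg + eps)  # smaller is better: close to teacher, far from negative
```

A ratio-style loss like this rewards the student both for matching the teacher and for staying away from the low-quality reference, rather than only minimizing the distance to the teacher.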
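The progressive distillation in contribution (2) can be sketched as pairing each narrower branch with its immediately wider neighbour as its distillation target, so knowledge flows down one step at a time. The helper names below are hypothetical, and a real implementation would stop gradients on the teacher side of each pair:

```python
import numpy as np

def progressive_distill_pairs(branch_outputs):
    # branch_outputs is ordered from the widest ("teacher") branch to the
    # narrowest; each narrower branch is distilled from its wider neighbour.
    return [(branch_outputs[i - 1], branch_outputs[i])
            for i in range(1, len(branch_outputs))]

def progressive_distill_loss(branch_outputs):
    # Sum of L1 gaps over all (wider-teacher, narrower-student) pairs.
    return sum(np.mean(np.abs(teacher - student))
               for teacher, student in progressive_distill_pairs(branch_outputs))
```

Chaining neighbours this way avoids forcing the narrowest branch to match the widest one directly, which is the stated motivation for distilling gradually rather than in a single step.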
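The difficulty-aware routing in contribution (3) can be sketched by scoring each input patch and sending easy (flat) patches to a cheap branch and hard (textured) patches to the full model. The gradient-energy score and the fixed threshold below are illustrative assumptions, not the paper's actual difficulty estimator:

```python
import numpy as np

def patch_difficulty(patch):
    # Local gradient energy as a simple proxy for restoration difficulty:
    # flat patches are "easy", textured patches are "hard".
    gy, gx = np.gradient(patch.astype(np.float64))
    return float(np.mean(gx ** 2 + gy ** 2))

def dynamic_infer(patch, cheap_branch, full_branch, threshold=0.01):
    # Route the patch to a branch based on its estimated difficulty,
    # spending computation only where it is likely to pay off.
    if patch_difficulty(patch) < threshold:
        return cheap_branch(patch)
    return full_branch(patch)
```

Because the routing decision is made per input at inference time, the average cost adapts to the content of each image instead of being fixed in advance.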