With the rapid development of the Internet, images have become one of the most common and convenient media for transmitting information in our lives. During image acquisition, compression, transmission, and reception, noise is inevitably introduced by the imaging system, the transmission medium, and hardware defects, causing image distortion. It is therefore of great significance to develop algorithms that enable computers to evaluate image quality as humans do. Deep learning has achieved great success in many fields, and deep-learning-based methods have come to dominate no-reference image quality assessment (NR-IQA). However, such methods usually require large-scale labeled data as support. Because manual annotation is expensive, no large-scale labeled dataset exists for image quality assessment, so trained models often suffer from overfitting and poor generalization. At the same time, the scale of the model parameters makes deployment on edge computing devices difficult. To address these problems, three new image quality assessment methods based on knowledge distillation and self-supervised learning are proposed. The main work and innovations are as follows:

1. We propose an image quality assessment method based on knowledge distillation and self-supervised learning. The method first performs self-supervised learning by predicting soft labels generated by the teacher network; the student network is then jointly trained with both the soft labels and the manually annotated labels via knowledge distillation; finally, the trained student model is used for image quality prediction. Extensive experiments on public databases show that the proposed method outperforms current state-of-the-art quality assessment methods, while the model is much smaller than the teacher and can be deployed on edge devices for smooth inference.

2. Unlike the method above, this method uses intermediate feature vectors of the teacher network to guide the student model's learning. First, self-supervised learning is performed with soft labels generated by the teacher network; second, feature vectors are extracted from intermediate layers of the teacher network and used to guide the feature vectors output at the corresponding positions of the student network. After the knowledge distillation process, the student is fine-tuned on a dataset with labeled values. Extensive experiments on public databases show that the distillation process achieves the goal of transferring knowledge from the teacher network to the student network: the student network performs prediction well across multiple datasets and outperforms current state-of-the-art image quality assessment methods.

3. We train the student network with relation-based knowledge distillation and self-supervised learning. Deep layers of the student network are supervised by feature vectors output from multiple shallow layers of the teacher network. Because the features the student learns span several levels of the teacher network, features learned in the shallow layers are revisited more than once during training, a process called "knowledge review". Tests on multiple datasets show that distillation based on knowledge review effectively supervises the learning process of the student network, and the algorithm achieves good evaluation results that are consistent with human subjective scores.
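As a concrete illustration of the joint training objective in the first method, the sketch below combines a distillation term (student predictions against the teacher's soft labels) with a supervised term (student predictions against human labels). The function name, the mean-squared-error form of each term, and the weighting parameter `alpha` are illustrative assumptions; the abstract does not specify the exact loss.

```python
import numpy as np

def joint_distillation_loss(student_pred, teacher_soft, human_label, alpha=0.5):
    """Weighted sum of a distillation term (match the teacher's soft
    quality scores) and a supervised term (match the human labels).
    alpha balances the two terms; its value here is an assumed default."""
    student_pred = np.asarray(student_pred, dtype=float)
    distill_term = np.mean((student_pred - np.asarray(teacher_soft)) ** 2)
    supervised_term = np.mean((student_pred - np.asarray(human_label)) ** 2)
    return float(alpha * distill_term + (1 - alpha) * supervised_term)
```

Lowering `alpha` shifts the emphasis from the teacher's soft labels toward the human annotations, which matches the intuition of relying more on ground truth once distillation has done its work.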
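The intermediate-feature guidance in the second method can be sketched as matching the student's feature vector to the teacher's at a corresponding layer, with a linear projection to reconcile differing feature dimensions. The projection matrix and the L2 matching loss are common choices in feature distillation, not details given in the abstract, so treat them as assumptions.

```python
import numpy as np

def feature_match_loss(student_feat, teacher_feat, projection):
    """L2 distance between the projected student feature and the teacher
    feature at the corresponding layer. `projection` is an assumed
    (teacher_dim x student_dim) adapter, learnable in a real model."""
    projected = np.asarray(projection) @ np.asarray(student_feat)
    return float(np.mean((projected - np.asarray(teacher_feat)) ** 2))
```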
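The "knowledge review" scheme of the third method, in which each student feature is supervised by feature vectors from multiple shallow teacher layers, can be sketched as averaging a matching loss over all teacher levels at or below the student's level, so shallow teacher features are revisited repeatedly. The pairing rule and the averaged MSE are illustrative assumptions about how such a review loss could be aggregated.

```python
import numpy as np

def knowledge_review_loss(student_feats, teacher_feats):
    """Match the student feature at level i against every teacher feature
    at levels <= i; shallow teacher features therefore contribute to the
    loss more than once ("knowledge review"). All vectors share one
    dimension here for simplicity; a real model would insert adapters."""
    total, terms = 0.0, 0
    for i, s in enumerate(student_feats):
        for t in teacher_feats[: i + 1]:
            total += np.mean((np.asarray(s) - np.asarray(t)) ** 2)
            terms += 1
    return float(total / terms)
```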