Face super-resolution refers to the process of transforming coarse low-resolution facial images into clear high-resolution ones. It enhances the clarity and detail of facial images and supports subsequent tasks such as face detection, face recognition, and emotion analysis. Compared with natural images, face images have distinctive characteristics, such as a fixed facial structure and a specific identity, which can serve as prior information to guide face super-resolution reconstruction. Facial priors have been widely applied in face super-resolution tasks to help models generate high-quality facial images. However, existing methods often ignore crucial facial identity information during reconstruction, introducing a potential identity bias between the reconstructed image and the original identity and ultimately degrading the accuracy of subsequent tasks. To address these issues, this thesis proposes facial image super-resolution reconstruction methods constrained to preserve facial identity, aiming to enhance the quality of low-resolution images while maintaining their original identities. The research comprises three primary components:

(1) Real-world facial images suffer severe degradation, which makes it difficult to extract effective facial priors from them directly. To address this issue, we propose a facial super-resolution reconstruction method based on a generative prior and identity preservation. The method pretrains a face generation model to capture the real distribution of facial images and incorporates it as a facial generative prior into the face super-resolution task, effectively improving the generation quality of the model. In addition, the method incorporates a face recognition network and defines and optimizes an identity-preserving loss function to ensure that the reconstructed image remains consistent with the original identity.

(2) Due to the unstable 
training process of face generation models, the generated images often exhibit perceptually unpleasant artifacts alongside realistic details. To address this problem, we propose a facial super-resolution reconstruction method based on a multi-level feature fusion network and identity preservation. The method incorporates a multi-level feature fusion network that integrates the spatial information of the input face image into the reconstruction process, achieving a good balance between realism and fidelity. To suppress the artifacts produced by the generative model, the method employs locally discriminative learning to regulate the adversarial training process and introduces an artifact loss function that suppresses artifact regions while preserving realistic details. In addition, the method computes pixel losses at multiple levels and aggregates the loss at each resolution as intermediate supervision, effectively avoiding the loss of crucial details caused by image degradation.

(3) Although convolutional neural networks are known for their powerful feature extraction capability, their local receptive fields and fixed weight sharing make it difficult to capture long-range dependencies in face images. To address this problem, we propose a facial super-resolution reconstruction method based on the Transformer and identity preservation. The Transformer offers powerful feature representations that focus on long-distance dependencies in data, and its self-attention mechanism can process sequences of arbitrary length. By combining the advantages of CNNs and Transformers, the method enhances the feature extraction capability of the model. As a result, the approach improves image clarity and realism by capturing both local and global information in images while preserving important facial details. To further improve performance, an identity-preserving loss is introduced to train a 
more accurate and faithful face super-resolution model.

Extensive experiments and analyses are conducted for each of the above methods to validate the effectiveness of the proposed algorithms. The results fully demonstrate the superiority of the proposed methods.
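As an illustration of the identity-preserving constraint shared by all three components, such a loss is commonly formulated as the cosine distance between face-embedding vectors. The sketch below is a minimal NumPy version under that assumption; the embeddings are assumed to come from a pretrained face recognition network, which is not shown, and the function name is illustrative rather than the thesis implementation.

```python
import numpy as np

def identity_preserving_loss(emb_sr, emb_hr, eps=1e-8):
    """Cosine-distance identity loss between two face embeddings.

    emb_sr: embedding of the super-resolved image (from a hypothetical
            pretrained face recognition network).
    emb_hr: embedding of the ground-truth high-resolution image.
    Returns 1 - cos(emb_sr, emb_hr); near 0 when the identities align.
    """
    num = float(np.dot(emb_sr, emb_hr))
    den = float(np.linalg.norm(emb_sr) * np.linalg.norm(emb_hr) + eps)
    return 1.0 - num / den

# Identical embeddings (same identity) give a loss close to zero.
e = np.array([0.5, -1.0, 2.0])
print(round(identity_preserving_loss(e, e), 6))  # 0.0
```

Minimizing this term during training pulls the recognition-network embedding of the reconstructed face toward that of the ground truth, which is what keeps the restored image on the original identity.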
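The multi-resolution intermediate supervision described in component (2) can be sketched as computing a pixel loss at each scale of a downsampled pyramid and summing the per-resolution losses. The NumPy sketch below assumes single-channel images and 2x average-pooling; all names are illustrative, not the thesis implementation.

```python
import numpy as np

def avg_pool2(img):
    """2x average-pool downsampling; H and W must be even."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def multi_level_pixel_loss(sr, hr, levels=3):
    """Sum of mean-absolute pixel losses over several resolutions,
    serving as intermediate supervision across the pyramid."""
    total = 0.0
    for _ in range(levels):
        total += float(np.abs(sr - hr).mean())  # loss at current scale
        sr, hr = avg_pool2(sr), avg_pool2(hr)   # move one level down
    return total
```

Because every pyramid level contributes to the aggregate loss, coarse structure and fine detail are both supervised, which is the stated motivation for this term in the abstract.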
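To illustrate how the self-attention mechanism in component (3) captures long-range dependencies, a minimal scaled dot-product self-attention over a flattened feature sequence might look as follows. Learned query/key/value projections are omitted for brevity; this is a sketch of the generic mechanism, not the thesis architecture.

```python
import numpy as np

def self_attention(x):
    """Scaled dot-product self-attention over a sequence x of shape
    (n, d). Every output token is a weighted mixture of all positions,
    which is how long-range dependencies are modeled."""
    d = x.shape[-1]
    q, k, v = x, x, x                     # identity projections for brevity
    scores = q @ k.T / np.sqrt(d)          # (n, n) pairwise similarities
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)     # softmax: rows sum to 1
    return w @ v
```

Since the attention weights span the entire sequence, each position attends to all others regardless of distance, in contrast to a convolution's fixed local receptive field.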