
Lightweight Low-resolution Face Recognition Method Based On Knowledge Distillation

Posted on: 2024-03-13
Degree: Master
Type: Thesis
Country: China
Candidate: C Wang
Full Text: PDF
GTID: 2568307100966249
Subject: Engineering
Abstract/Summary:
In recent years, with the rapid development of deep learning, face recognition has achieved very high accuracy in many application scenarios, further promoting its wide adoption in civil and industrial fields. Current face recognition technology achieves high recognition rates in short-distance, constrained scenes. However, in many outdoor applications, interference factors such as low image resolution, pose deflection, occlusion, and lighting changes often appear simultaneously in the same scene, making it difficult to obtain satisfactory accuracy. At the same time, the high complexity and high training cost of deep network models make it difficult for existing deep learning techniques to solve low-resolution face recognition efficiently in unconstrained scenarios. To address these problems, starting from model compression, faster training, and improved low-resolution recognition rates, this thesis completes the following tasks:

(1) Experiments in this thesis show that existing classical knowledge distillation methods struggle to achieve high recognition rates under unbalanced training samples and low-resolution scenarios. Aiming at the huge number of parameters in existing models, various face recognition experiments based on knowledge distillation were designed. The results show that general knowledge distillation can significantly reduce model size while retaining a high face recognition rate, effectively shortening training and recognition time. They also confirm, across a large number of experiments, that classical distillation algorithms have difficulty reaching a high recognition rate when training samples are unbalanced and image resolution is low. The follow-up work of this thesis therefore focuses on these problems of knowledge distillation.

(2) Aiming at the high complexity of face recognition network models, uneven training samples, and high training cost, a lightweight and efficient face recognition method based on Sample Balance Distillation (SBD) is proposed. Knowledge distillation is used to transfer the knowledge of a complex, high-performance face recognition model to a simple, small one; the focal loss function then addresses sample imbalance by increasing the weight of sparse samples, raising the network's attention to them and allowing the network to achieve higher recognition accuracy. The knowledge distillation loss and the focal loss are weighted and fused to obtain the sample-balanced distillation loss. Experimental results show that, with unbalanced training samples, the proposed algorithm still achieves very high recognition accuracy even though the student model's memory footprint is only one-seventh that of the teacher network.

(3) Since existing deep learning techniques struggle to solve low-resolution face recognition efficiently in unconstrained scenes, a lightweight and efficient low-resolution face recognition method based on attention similarity distillation is proposed. Building on sample-balanced distillation, an attention mechanism directs the network's attention to informative regions (e.g., eyes, ears, nose, and lips) to improve recognition performance and increase the similarity of attention maps with the high-resolution network. Well-structured attention-map information is distilled into the low-resolution model, guiding the low-resolution network to attend to the face details captured by the high-resolution network. Experimental results show that the model is small yet achieves a good recognition rate in low-resolution and sample-imbalanced scenes.
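The classical knowledge distillation recipe examined in task (1), where a small student network is trained to match the softened outputs of a large teacher, can be sketched as follows. The temperature `T`, weight `alpha`, and function names are illustrative assumptions rather than the thesis's settings:

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax; a higher T softens the distribution.
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Classical distillation loss: a weighted sum of the KL divergence
    between softened teacher and student outputs and the cross-entropy
    with the hard labels. T and alpha here are illustrative values."""
    p_t = softmax(teacher_logits, T)  # soft teacher targets
    p_s = softmax(student_logits, T)  # soft student predictions
    # KL term, scaled by T^2 to keep gradient magnitudes comparable across T
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1)
    # Standard cross-entropy against the ground-truth class indices
    p_hard = softmax(student_logits, 1.0)
    ce = -np.log(p_hard[np.arange(len(labels)), labels] + 1e-12)
    return float(np.mean(alpha * T * T * kl + (1 - alpha) * ce))
```

When the student already matches the teacher, the KL term vanishes and only the hard-label term remains, which is why distillation mainly shapes early training of the small model.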
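The sample-balanced distillation loss of task (2) fuses the distillation term with the focal loss, which down-weights easy samples so sparse classes contribute more. A minimal sketch of that weighted fusion follows; the fusion weight `lam`, focusing parameter `gamma`, and temperature `T` are hypothetical, since the abstract does not state the thesis's values:

```python
import numpy as np

def softmax(z, T=1.0):
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def focal_loss(logits, labels, gamma=2.0):
    """Focal loss: the (1 - p)^gamma factor shrinks the loss of
    well-classified samples, raising the weight of sparse/hard ones."""
    p = softmax(logits)[np.arange(len(labels)), labels]
    return -((1 - p) ** gamma) * np.log(p + 1e-12)

def sbd_loss(student_logits, teacher_logits, labels, T=4.0, lam=0.5, gamma=2.0):
    """Sketch of a sample-balanced distillation loss: a weighted fusion of
    the soft-label distillation term and the focal loss, assumed here to be
    a simple linear combination."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kd = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1) * T * T
    fl = focal_loss(student_logits, labels, gamma)
    return float(np.mean(lam * kd + (1 - lam) * fl))
```

With `gamma = 0` the focal term reduces to plain cross-entropy, so the fusion degenerates to the classical distillation loss above.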
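The attention similarity distillation of task (3) can be illustrated with spatial attention maps computed from feature tensors, in the style of attention-transfer distillation; the channel-wise squared-sum map and the L2 penalty below are assumptions about the exact formulation, not the thesis's definition:

```python
import numpy as np

def attention_map(feat):
    """Collapse a (C, H, W) feature tensor into a spatial attention map by
    summing squared activations over channels, then L2-normalizing it."""
    amap = np.sum(np.asarray(feat, dtype=float) ** 2, axis=0).ravel()
    return amap / (np.linalg.norm(amap) + 1e-12)

def attention_similarity_loss(student_feat, teacher_feat):
    """L2 distance between normalized attention maps. Minimizing it pushes
    the low-resolution student to attend to the same regions (eyes, nose,
    lips) that the high-resolution teacher highlights."""
    diff = attention_map(student_feat) - attention_map(teacher_feat)
    return float(np.sum(diff ** 2))
```

Because both maps are normalized, the loss compares where the networks look rather than how strongly they activate, which matters when student and teacher feature magnitudes differ.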
Keywords/Search Tags: Face recognition, Low resolution, Knowledge distillation, Lightweight, Attention mechanism