People's subjective impressions of others' gender, identity, race, age, and expression come mainly from human faces, and acquiring this information computationally depends on research into face-analysis algorithms. Age plays a distinctive role in entertainment, social interaction, security monitoring, and product recommendation, so face age estimation based on big data and deep learning has become a hot topic in face research. However, the changes in facial appearance brought by aging are highly uncertain, and face images captured in real scenes are complex; both factors adversely affect age estimation. Building a better face age estimation model is therefore of significant research value.

This paper studies face age estimation models based on deep learning. It mainly uses the ResNet family of networks to build the age estimation model, and applies knowledge distillation, a model compression method, to compress the complex model and reduce its complexity. The specific research contents are as follows.

First, this paper uses ResNet-50 to build a baseline face age estimation model; experiments show that it performs well on several public face age datasets. Next, a feature enhancement module and a contextual feature module are introduced into ResNet, yielding the DM-ResNet age estimation model, which exploits the enhancement ability of the former and the expressive ability of the latter to estimate age from face images. Local-feature extraction networks for facial regions are then attached alongside DM-ResNet, together forming an age estimation model based on multi-feature fusion. The new model fuses the global age features extracted from the whole face with the local age features from facial regions into fused features that carry richer age information; experiments show that the new model further reduces the MAE on the face age datasets and improves the CS under different thresholds.

Finally, to address the redundant parameters and large memory footprint of the trained multi-feature-fusion model, this paper first explains the basic principle, network framework, and training strategy of knowledge distillation. It then adopts EfficientNet-B0, which balances accuracy and speed well, as the Student model in the Teacher-Student knowledge distillation framework, with the multi-feature-fusion age estimation model as the Teacher model. In addition, a Non-Local module is introduced between the feature layers of the two models so that the Student model can better learn the Teacher model's knowledge. After distillation training, experiments show that the Student model not only achieves age estimation performance similar to the Teacher model, but also has a smaller memory size, fewer parameters, and less computation, and its age estimation time is also reduced.
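The MAE and CS figures referred to above are the two standard metrics for face age estimation. As a minimal sketch (the ages below are hypothetical illustration values, not results from this thesis), they can be computed as:

```python
import numpy as np

def mae(pred_ages, true_ages):
    """Mean Absolute Error: average absolute gap between predicted and true ages."""
    pred = np.asarray(pred_ages, dtype=float)
    true = np.asarray(true_ages, dtype=float)
    return np.mean(np.abs(pred - true))

def cumulative_score(pred_ages, true_ages, threshold):
    """CS@threshold: fraction of samples whose absolute age error is <= threshold."""
    errors = np.abs(np.asarray(pred_ages, dtype=float) - np.asarray(true_ages, dtype=float))
    return np.mean(errors <= threshold)

# Toy example: five hypothetical predictions against ground-truth ages.
pred = [23, 31, 45, 18, 60]
true = [25, 30, 40, 18, 55]
print(mae(pred, true))                  # (2+1+5+0+5)/5 = 2.6
print(cumulative_score(pred, true, 3))  # 3 of 5 errors are <= 3, so 0.6
```

A lower MAE means more accurate age predictions on average, while a higher CS at a given threshold means more predictions fall within that many years of the true age.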
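The knowledge distillation training described above typically combines a soft loss, matching the Student's temperature-softened outputs to the Teacher's, with a hard loss on the ground-truth label. The following is a minimal numpy sketch of that standard combined loss; the temperature `T`, weight `alpha`, and the toy logits are illustrative assumptions, not values from this thesis:

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; a larger T yields a softer distribution."""
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, true_label, T=4.0, alpha=0.7):
    """Weighted sum of the soft term, KL(teacher || student) at temperature T
    (scaled by T^2, as is conventional), and the hard cross-entropy term."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    soft = np.sum(p_t * (np.log(p_t) - np.log(p_s))) * T * T
    hard = -np.log(softmax(student_logits)[true_label])
    return alpha * soft + (1 - alpha) * hard

# Toy example: a 5-way age-group problem with hypothetical logits.
teacher = [2.0, 1.0, 0.1, -1.0, -2.0]
student = [1.5, 1.2, 0.0, -0.5, -1.5]
loss = distillation_loss(student, teacher, true_label=0)
print(loss)
```

When the Student's logits exactly match the Teacher's, the soft KL term vanishes, so training pushes the Student's softened output distribution toward the Teacher's while the hard term keeps it anchored to the ground-truth ages.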