Facial motion tracking and animation driving are widely used in film, television, games, and entertainment. These applications typically rely on hardware devices to capture and extract facial motions, including head pose, facial expressions, and gaze direction, and map the resulting motion parameters onto a 3D model to drive facial animation. The film and television industry demands very high accuracy and therefore depends on complex image acquisition equipment, such as multi-camera rigs or depth cameras, together with manual pre- and post-processing. In contrast, the entertainment industry demands real-time performance and extracts only a few facial motion features with simplified methods; real-time operation is achieved, but animation details are handled coarsely.

To address these problems, this paper proposes a lightweight, real-time facial animation driving method based on motion capture. Using a single monocular camera, it simultaneously extracts three facial motion features, namely head pose, facial expression, and gaze direction, and drives a 3D face model in real time. The method balances accuracy, real-time performance, stability, and model size, and achieves real-time animation on consumer-level devices.

First, this paper proposes a multi-task learning method based on an attention mechanism that learns head pose, facial expression, and gaze direction simultaneously. A shared layer performs feature extraction, and each subtask uses attention to select the features relevant to it. A dynamic multi-task weight allocation method is also proposed, which recomputes the loss weight of each subtask during training and balances the accuracy and speed of feature extraction.

Second, to improve model inference speed and animation driving, a new lightweight network module is proposed that effectively reduces the number of model parameters; when driving the 3D face model, a compute shader is used to accelerate the blendshape computation.

Third, to address the poor stability of eye tracking, a video-based gaze correction model is proposed that effectively enforces consistency of gaze direction across adjacent video frames.

Based on these three innovations, this paper constructs RDLFA, a lightweight real-time facial animation driving framework that provides a complete solution for motion-capture-based real-time facial animation. A real-time facial animation driving system is designed and implemented on top of this framework. System tests show that it performs facial motion tracking and animation driving in real time, accurately and stably, and has strong value for consumer-level and entertainment applications.
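The abstract mentions dynamically allocating the loss weight of each subtask but does not give the formula. One established scheme consistent with that description is Dynamic Weight Averaging (DWA), sketched below as a minimal reference; the function name, the temperature value, and the task ordering are illustrative assumptions, not the paper's actual method.

```python
import math

def dynamic_task_weights(loss_history, temperature=2.0):
    """DWA-style dynamic loss weights for K subtasks (illustrative, not the paper's scheme).

    loss_history: per-epoch loss lists, oldest first, e.g. [[pose, expr, gaze], ...]
    Returns K weights summing to K; tasks whose loss falls more slowly get larger weights.
    """
    k = len(loss_history[-1])
    if len(loss_history) < 2:
        return [1.0] * k  # warm-up: equal weights until two epochs are available
    prev, prev2 = loss_history[-1], loss_history[-2]
    # relative descent rate per task: close to 1.0 means little recent improvement
    rates = [p / max(q, 1e-12) for p, q in zip(prev, prev2)]
    exps = [math.exp(r / temperature) for r in rates]
    total = sum(exps)
    return [k * e / total for e in exps]
```

The total training loss is then the weighted sum of the per-task losses, recomputed each epoch so that a lagging subtask (e.g. gaze) is not drowned out by faster-converging ones.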
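Driving the 3D face reduces to a linear blendshape combination, v = base + Σᵢ wᵢ·Δᵢ, which the paper accelerates with a compute shader by evaluating the sum per vertex on the GPU. A CPU reference of the same arithmetic, with hypothetical array shapes, might look like:

```python
import numpy as np

def apply_blendshapes(base, deltas, weights):
    """Linear blendshape model: v = base + sum_i w_i * delta_i (CPU reference).

    base:    (V, 3) neutral-face vertex positions
    deltas:  (K, V, 3) per-blendshape vertex offsets from the base mesh
    weights: (K,) driving coefficients, typically in [0, 1], produced by the tracker
    """
    # contract the K axis of weights against the K axis of deltas -> (V, 3)
    return base + np.tensordot(weights, deltas, axes=1)
```

A compute shader performs the same per-vertex sum in parallel, one invocation per vertex, which is what makes real-time driving feasible on consumer-level GPUs.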