
Research On Face Expression Transfer Method In 3D Space

Posted on: 2024-06-12
Degree: Master
Type: Thesis
Country: China
Candidate: L Ao
Full Text: PDF
GTID: 2568307184955929
Subject: Computer Science and Technology
Abstract/Summary:
Facial expression transfer is an important research topic in computer vision. The task is to transfer the expression of a source facial image to a target facial image while preserving the identity and pose of the target face. Although performing expression transfer directly in two-dimensional space seems intuitive, the loss of depth information has always limited the quality of the results. An effective way to address this problem is to map the source and target faces into three-dimensional space and perform the expression transfer there. However, mapping delicate expressions onto a three-dimensional face is still difficult. In addition, decoupling the expression and identity information of the three-dimensional face, and reducing the problems caused by large pose differences between the source and target faces, are further challenges in three-dimensional facial expression transfer. The main work of this thesis is as follows:

First, a three-dimensional facial expression transfer method that considers the whole head is proposed, with the head represented by the FLAME model. After the source and target heads are mapped into three-dimensional space, each head is represented as a three-dimensional mesh and expression transfer is performed on this mesh. To map expression information more accurately, an expression consistency constraint is introduced: an expression recognizer predicts the expression category, arousal, and valence of both the input image and the rendered image of the three-dimensional head, and the difference between the two predictions is minimized, which adds delicate expressions to the three-dimensional head and improves the expression mapping. To decouple the expression and identity information of the three-dimensional head, depth-wise separable spiral convolutions are used to extract mesh features: a content encoder extracts identity features from the target head, and a style encoder extracts expression features from the source head. Reconstruction, cycle consistency, and style reconstruction constraints combine the expression of the source head with the identity of the target head on the three-dimensional mesh in a self-supervised manner. Specifically, the expression features are mapped to instance normalization parameters that modulate the decoding of the identity features into the transferred three-dimensional head, achieving expression transfer in the feature space. Experiments show that this method captures more expression information and transfers it to the three-dimensional head.

Second, a mesh-guided facial expression transfer method is proposed to reduce the problems caused by large pose differences between the source and target faces. The method uses the reconstructed target three-dimensional head and the transferred three-dimensional head as guidance, providing the initial head shape, to predict dense optical flow maps describing the motion before and after the expression transfer. A skip-connection network then encodes and decodes the target image; during decoding, the optical flow maps warp the feature maps of the target image to achieve expression transfer. Because the facial motion before and after an expression change is complex, occlusion masks are predicted at multiple scales so that, when decoding the target features, the network attends to different scales of the target feature map and restores more realistically the regions distorted or lost during the optical flow warping. Experiments show that this method completes the facial expression transfer task while maintaining the pose and identity of the target face.
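The two methods above rest on two reusable technical components: injecting the source expression into the target identity via instance-normalization (AdaIN-style) parameters in mesh feature space, and warping target image features with a predicted dense optical flow under occlusion masks. The sketches below illustrate these general ideas only, assuming PyTorch; the vertex and feature sizes, the plain MLP encoders (standing in for the depth-wise separable spiral convolutions), the single-scale occlusion handling, and all class and function names are illustrative assumptions rather than the thesis implementation.

```python
# Minimal sketch of AdaIN-style expression transfer in mesh feature space.
# Vertex count, feature size, and MLP encoders are illustrative assumptions.
import torch
import torch.nn as nn

N_VERTS = 5023   # FLAME mesh vertex count (assumed)
FEAT = 128       # latent feature size (assumed)

class ContentEncoder(nn.Module):
    """Encodes the target head mesh into per-vertex identity (content) features."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, FEAT))
    def forward(self, verts):              # verts: (B, N_VERTS, 3)
        return self.net(verts)             # (B, N_VERTS, FEAT)

class StyleEncoder(nn.Module):
    """Encodes the source head mesh into a global expression (style) code."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, FEAT))
    def forward(self, verts):
        return self.net(verts).mean(dim=1)  # (B, FEAT), pooled over vertices

class AdaINDecoder(nn.Module):
    """Decodes identity features back to vertices, modulated by the expression
    code through instance-normalization scale and shift parameters."""
    def __init__(self):
        super().__init__()
        self.to_scale = nn.Linear(FEAT, FEAT)
        self.to_shift = nn.Linear(FEAT, FEAT)
        self.out = nn.Linear(FEAT, 3)
    def forward(self, content, style):
        # Instance-normalize the content features over the vertex dimension.
        mu = content.mean(dim=1, keepdim=True)
        sigma = content.std(dim=1, keepdim=True) + 1e-6
        normed = (content - mu) / sigma
        # The expression code supplies the new scale and shift (AdaIN).
        scale = self.to_scale(style).unsqueeze(1)
        shift = self.to_shift(style).unsqueeze(1)
        return self.out(normed * scale + shift)   # (B, N_VERTS, 3)

# Usage: combine the target identity with the source expression.
content_enc, style_enc, decoder = ContentEncoder(), StyleEncoder(), AdaINDecoder()
target_verts = torch.randn(1, N_VERTS, 3)
source_verts = torch.randn(1, N_VERTS, 3)
transferred = decoder(content_enc(target_verts), style_enc(source_verts))
```

In the thesis, such a decoder would be trained with the reconstruction, cycle consistency, and style reconstruction constraints described above; the sketch only shows where the instance-normalization modulation sits. The second sketch shows, again under simplifying assumptions, how a target feature map can be warped by a dense optical flow and weighted by an occlusion mask so that occluded regions are left for the decoder to inpaint.

```python
# Minimal sketch of flow-based warping with an occlusion mask (single scale).
import torch
import torch.nn.functional as F

def warp_with_mask(feat, flow, occlusion):
    """feat: (B, C, H, W) target features; flow: (B, 2, H, W) pixel offsets;
    occlusion: (B, 1, H, W) soft mask in [0, 1]."""
    B, _, H, W = feat.shape
    # Build a base sampling grid and add the predicted flow.
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().unsqueeze(0).to(feat)  # (1, 2, H, W)
    coords = grid + flow
    # Normalize coordinates to [-1, 1] as required by grid_sample.
    coords_x = 2.0 * coords[:, 0] / (W - 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / (H - 1) - 1.0
    sample_grid = torch.stack((coords_x, coords_y), dim=-1)             # (B, H, W, 2)
    warped = F.grid_sample(feat, sample_grid, align_corners=True)
    # Down-weight occluded regions so the decoder can restore them.
    return warped * occlusion
```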
Keywords/Search Tags: Computer vision, Facial expression transfer, 3D reconstruction, Style transfer