
Study On Video-Based Face Expression Modeling

Posted on: 2004-03-01    Degree: Doctor    Type: Dissertation
Country: China    Candidate: J Wang    Full Text: PDF
GTID: 1100360095961720    Subject: Applied Mathematics
Abstract/Summary:
At present, virtual face modeling and expression animation is one of the research hotspots in computer graphics, image processing, and computer vision, with broad applications in teleconferencing, artificial life, wireless presence, and the like; face detection in images and video is also of particular significance in biometric authentication. This thesis presents a prototype system for video-based, performance-driven facial expression animation and describes in detail the related topics of face detection, feature extraction, feature tracking, person-specific face modeling, expression animation, and so on. After analyzing each technique in detail, we propose new ideas and improvements for every key step:

1. An improved parameterized face model, CANDIDE-4, whose parameters form a subset of MPEG-4. To meet the requirements of performance-driven facial expression animation, we modify the Action Units (AUs), vertices, and Shape Units (SUs) according to the FAPs in MPEG-4 (a deformation sketch follows this abstract).

2. Three methods, based on a single image, two images, and multiple images, for creating a person-specific face model. Using the idea of Shape from Shading (SFS) together with facial constraint information, we reconstruct the face model from a single frontal face image. Using the orthogonal-image method, we generate an individualized face model by adjusting the parameters of CANDIDE-4. We also realize an algorithm based on a minimal set of features for rapid face modeling from video, which tracks feature points, calibrates the exterior camera parameters, and estimates the 3D locations of the feature points (see the triangulation sketch below).

3. Two constraint-based texture mapping methods using RBF and harmonic-model interpolation respectively, both expressed in explicit form and satisfying C1 or C2 continuity (see the RBF sketch below).

4. A new parameterized algorithm, combining wavelet decomposition with the Expression Ratio Image (ERI), for transferring facial expression details. The algorithm preserves the basic illumination and key characteristics of the source image while transferring the texture details of the target expression image; in addition, the degree of expression exaggeration can be controlled as a function of the FAPs (see the ERI sketch below).

5. Expression feature detection and tracking. We adopt statistical training: we build a sample database of face images for each kind of expression, construct the matrix of differences between each sample and the average image, reduce its dimensionality by PCA, and then decrease the dependence among the principal components by ICA, thereby obtaining the face feature subspace (see the PCA/ICA sketch below). For detection, we perturb the principal components of the model to match the specific facial image, a procedure called the whole-optimization method in this thesis. To improve precision, we draw on the idea of LFA (Local Feature Analysis) and give a model called ML-IDAM (ICA-based Multi-Layer Direct Appearance Model), which yields more precise experimental results.
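The following is a minimal sketch, not taken from the thesis, of how a CANDIDE-style parameterized model combines shape units (static, person-specific) and action units (dynamic, expression-driven) to deform a mesh; the function name, array layout, and dimensions are illustrative assumptions.

    import numpy as np

    def deform_candide(base_vertices, shape_units, action_units, sigma, alpha):
        """Deform a CANDIDE-style parameterized face mesh.

        base_vertices : (N, 3) mean face mesh
        shape_units   : (K, N, 3) shape-unit displacement fields (person-specific)
        action_units  : (M, N, 3) action-unit displacement fields (expression)
        sigma         : (K,) shape-unit coefficients
        alpha         : (M,) action-unit coefficients (driven by tracked FAPs)
        """
        g = base_vertices.copy()
        g += np.tensordot(sigma, shape_units, axes=1)   # person-specific shape
        g += np.tensordot(alpha, action_units, axes=1)  # expression deformation
        return g

    # toy example: a 4-vertex "mesh" with one shape unit and one action unit
    base = np.zeros((4, 3))
    S = np.random.randn(1, 4, 3)
    A = np.random.randn(1, 4, 3)
    print(deform_candide(base, S, A, sigma=np.array([0.5]), alpha=np.array([0.2])).shape)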
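The abstract does not specify the thesis's minimum-feature modeling-from-video algorithm; as a generic stand-in for its 3D-location step, the sketch below shows linear (DLT) triangulation of one tracked feature point from two calibrated views. The projection matrices and variable names are assumptions.

    import numpy as np

    def triangulate_point(P1, P2, x1, x2):
        """Linear (DLT) triangulation of one tracked feature point.

        P1, P2 : (3, 4) camera projection matrices (intrinsics * [R|t])
        x1, x2 : (2,) pixel coordinates of the feature in each view
        Returns the 3D point in non-homogeneous coordinates.
        """
        A = np.vstack([
            x1[0] * P1[2] - P1[0],
            x1[1] * P1[2] - P1[1],
            x2[0] * P2[2] - P2[0],
            x2[1] * P2[2] - P2[1],
        ])
        _, _, Vt = np.linalg.svd(A)   # null-space solution minimizes |AX|
        X = Vt[-1]
        return X[:3] / X[3]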
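For the constraint-based texture mapping step, a hedged sketch of RBF scattered-data interpolation is given below: the warp reproduces the feature-point correspondences exactly and interpolates smoothly elsewhere. The thin-plate kernel and the affine term are assumptions, not necessarily the basis functions used in the thesis.

    import numpy as np

    def rbf_warp(src_pts, dst_pts, query_pts, eps=1e-9):
        """RBF warp of 2D texture coordinates that maps each src feature
        point exactly onto its dst counterpart.

        src_pts, dst_pts : (K, 2) constraint correspondences
        query_pts        : (Q, 2) points to warp
        """
        def phi(r):
            # thin-plate spline kernel r^2 * log(r), with r = 0 handled
            return np.where(r > 0, r * r * np.log(r + eps), 0.0)

        K = src_pts.shape[0]
        d = np.linalg.norm(src_pts[:, None, :] - src_pts[None, :, :], axis=-1)
        P = np.hstack([np.ones((K, 1)), src_pts])          # affine part
        A = np.block([[phi(d), P], [P.T, np.zeros((3, 3))]])
        b = np.vstack([dst_pts, np.zeros((3, 2))])
        coef = np.linalg.solve(A, b)                        # (K+3, 2)

        dq = np.linalg.norm(query_pts[:, None, :] - src_pts[None, :, :], axis=-1)
        Pq = np.hstack([np.ones((query_pts.shape[0], 1)), query_pts])
        return np.hstack([phi(dq), Pq]) @ coef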
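A minimal sketch of the core Expression Ratio Image operation follows, assuming aligned float images in [0, 1]; the wavelet-based detail handling and the FAP-driven exaggeration control described in the thesis are replaced here by a simple gain exponent for illustration.

    import numpy as np

    def apply_eri(source_neutral, source_expr, target_neutral, gain=1.0, eps=1e-6):
        """Transfer expression details with an Expression Ratio Image.

        All images are float arrays of the same aligned shape.
        gain > 1 exaggerates the transferred details, gain < 1 attenuates them
        (an illustrative stand-in for the FAP-controlled exaggeration in the thesis).
        """
        ratio = (source_expr + eps) / (source_neutral + eps)   # the ERI
        ratio = np.power(ratio, gain)                          # control exaggeration
        return np.clip(target_neutral * ratio, 0.0, 1.0)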
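The sketch below illustrates the described subspace construction: differences from the mean image, PCA for dimensionality reduction, then ICA to reduce the dependence among components. scikit-learn's PCA and FastICA are used as stand-ins for the thesis's own implementation; the component count and function names are assumptions.

    import numpy as np
    from sklearn.decomposition import PCA, FastICA

    def build_expression_subspace(images, n_components=20):
        """Build a face/expression feature subspace from training images.

        images : (num_samples, height*width) flattened, aligned face images
        Returns (mean_image, pca, ica) so new faces can be projected into
        the subspace and matched against the model.
        """
        mean_image = images.mean(axis=0)
        diffs = images - mean_image                 # difference-from-mean matrix
        pca = PCA(n_components=n_components)
        pc_scores = pca.fit_transform(diffs)        # dimensionality reduction
        ica = FastICA(n_components=n_components, max_iter=1000)
        ica.fit(pc_scores)                          # reduce statistical dependence
        return mean_image, pca, ica

    def project(face, mean_image, pca, ica):
        """Project a flattened face image into the ICA subspace."""
        return ica.transform(pca.transform((face - mean_image)[None, :]))[0]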
Keywords/Search Tags: Face detection, Face modeling, Face animation, Texture synthesis and mapping