Since the 1970s, motion capture has been one of the key techniques for obtaining real posture data, and it has been widely applied to driving character animation in virtual environments such as cartoons, games, and films in order to make them more lifelike. However, the large amount of unlabeled and style-limited data reduces the reuse rate of motion sequences, so effective classification and splicing methods are essential for data reuse. To address these two problems, we draw on the success of deep learning models in computer vision and establish spatio-temporal models for human skeletal motion capture data based on the theory of the Restricted Boltzmann Machine (RBM). The main contributions are manifested in three aspects:

1. Behavior semantic recognition. We stack a discriminative RBM on a factored spatio-temporal model to construct a semi-supervised model. The key idea is that the lower-level model is a three-way generative model that extracts abstract features from the original skeletal motion sequences, which in turn helps the higher-level discriminative model distinguish the style of each input frame block. The style of a whole motion sequence can then be determined easily by voting over its frame blocks.

2. Motion transition. Aiming to put the motion capture corpus into interactive applications, we propose a framework based on the procedure of constructing a motion graph. Because rotations in the global space are unconstrained, all motion sequences are first split into inhomogeneous pieces according to deflection during movement, which raises the reuse rate. We then use a 3D convolutional RBM to model the linkage among body joints as well as temporal smoothness, so that the information of the candidate frames held in the graph nodes can be assigned by this unsupervised model. Rules are then proposed to filter out frame pairs that would lead to unnatural transition frames when generated by spherical linear interpolation. Finally, the motion graph is constructed and optimized to satisfy different input trajectories and style transitions.

3. Parameterized motion generation. Considering the special structure of motion capture data, we compare the quality of the motions produced by different kinds of deep learning models when central-distance and exponential-map representations are used as input vectors. We then analyze how various preprocessing procedures affect the models in terms of spatial and temporal reconstruction ability. Based on the experimental results, we present the potential problems in designing a deep learning model for motion generation tasks and discuss the corresponding solutions.
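As a point of reference for the motion-transition contribution, transition frames between graph nodes are said to be generated by spherical linear interpolation of joint rotations. The following is a minimal sketch of that interpolation for a single joint, assuming quaternion-encoded rotations and NumPy; the helper name `slerp` and the example quaternions are illustrative assumptions, not the thesis implementation.

```python
# Minimal sketch of spherical linear interpolation (slerp) between two
# quaternion joint rotations, as used conceptually when blending the last
# frames of one clip into the first frames of the next clip in a motion graph.
# Names and example values are illustrative, not taken from the thesis.
import numpy as np

def slerp(q0, q1, t):
    """Interpolate between unit quaternions q0 and q1 at parameter t in [0, 1]."""
    q0 = q0 / np.linalg.norm(q0)
    q1 = q1 / np.linalg.norm(q1)
    dot = np.dot(q0, q1)
    # Take the shorter arc on the quaternion hypersphere.
    if dot < 0.0:
        q1, dot = -q1, -dot
    if dot > 0.9995:                       # nearly parallel: fall back to lerp
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(dot)                 # angle between the two rotations
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

# Example: blend one joint's rotation halfway between two candidate frames.
q_a = np.array([1.0, 0.0, 0.0, 0.0])                          # identity rotation
q_b = np.array([np.cos(np.pi / 8), np.sin(np.pi / 8), 0, 0])  # 45 deg about x-axis
print(slerp(q_a, q_b, 0.5))
```

Filtering frame pairs before interpolation, as described above, matters because slerp blends each joint independently and cannot repair transitions between poses that are too dissimilar to begin with.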