
Semantic Gesture Recognition Based On Cognitive Behavior Model Algorithm And Application Research

Posted on: 2015-05-02 | Degree: Master | Type: Thesis
Country: China | Candidate: T F Zhang | Full Text: PDF
GTID: 2298330431978606 | Subject: Computer Science and Technology
Abstract/Summary:
In recent years, human-computer interaction has become a hot research topic both in China and abroad, and the interaction between users and virtual systems is a major focus of that research. As an important mode of human communication, gestures have naturally attracted scientific interest, and conveying hand-command information to a virtual system is an important research subject. Traditional human-computer interaction often relies on external electronic devices such as data gloves or gesture-tracking locators. These devices are cumbersome and restrict the user's freedom of movement, violating the wish for natural interaction.

Vision-based gesture interaction performs the interaction task between gestures and the virtual system by capturing the motion information of natural gestures. Users do not need to wear any electronic device; instead, one or more cameras record the gesture information, freeing users from the constraints of wearable hardware. Research on vision-based gesture interaction generally comprises four parts: gesture segmentation, gesture tracking, feature extraction, and gesture recognition. Gesture recognition occupies a very important position in human-computer interaction, and accurate recognition ensures that the interaction proceeds smoothly.

This work is supported by the National Natural Science Foundation of China (No. 61173079), "3-D human-computer interaction interface research based on the cognitive mechanism and gestures animation", and by the Key Project of the Natural Science Foundation of Shandong Province (ZR2011FZ003), "The key issues research on natural hand tracking oriented 3D human-computer interaction interface". This thesis studies gestures captured with a single monocular camera.
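The segmentation stage of the pipeline above is typically driven by a statistical skin-color model (the thesis uses a Gaussian model in RGB space). A minimal sketch in plain Python, assuming an independent per-channel Gaussian fitted to sampled skin pixels; all function names and the threshold are hypothetical illustrations, not the thesis's implementation:

```python
import math

def fit_gaussian(samples):
    """Fit an independent per-channel Gaussian to skin-pixel samples.
    samples: list of (r, g, b) tuples taken from labeled hand regions."""
    n = len(samples)
    mean = [sum(p[c] for p in samples) / n for c in range(3)]
    var = [sum((p[c] - mean[c]) ** 2 for p in samples) / n for c in range(3)]
    return mean, var

def skin_likelihood(pixel, mean, var):
    """Product of the per-channel Gaussian densities for one RGB pixel."""
    p = 1.0
    for c in range(3):
        p *= (math.exp(-(pixel[c] - mean[c]) ** 2 / (2 * var[c]))
              / math.sqrt(2 * math.pi * var[c]))
    return p

def segment(image, mean, var, threshold):
    """Return a binary mask: True where the pixel looks like skin."""
    return [[skin_likelihood(px, mean, var) >= threshold for px in row]
            for row in image]
```

In practice the threshold is tuned on training images, and the resulting binary mask feeds the later feature-extraction stage.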
The concrete research content is as follows:

(1) A virtual assembly platform based on a monocular camera is established, using OpenGL, OpenCV, and 3DMAX to construct the assembly environment. The camera captures images of the hand, the virtual system receives and analyzes the hand-image information, and, based on the analysis results, the system performs the corresponding interactions.

(2) Gesture segmentation and feature extraction are important prerequisites for gesture recognition. This thesis models the hand's color characteristics in RGB space with a Gaussian model and then uses the trained Gaussian model for gesture segmentation. In the feature-extraction stage, feature parameters are extracted with a density distribution feature (DDF) computed on the binary image and with a contour-point detection method. The density distribution feature is invariant to translation, scale, and rotation, while contour detection effectively locates fingertips, yielding precise gesture characteristic parameters.

(3) Two simple methods for semantic gesture recognition are proposed: an interval proportion algorithm and an image fusion method. First, the concept of semantic gestures is introduced, and the semantic gestures are divided into four categories. The interval proportion algorithm obtains effective features by partitioning the gesture image into intervals and recognizes gestures using Euclidean distance. The image fusion method merges all dynamic images of a semantic gesture into a single static image and performs recognition on that image. Both methods give good results for semantic gesture recognition.

(4) During the study of gesture recognition, this thesis incorporates relevant theory from cognitive psychology and applies it in the virtual scene, proposing a cognitive behavioral model to recognize semantic gestures.
First, the snowman virtual assembly system is analyzed, gesture-recognition sites are set, and cognitive information is collected from many users; the transition probability matrix of the interaction scenarios is then trained until it becomes stable. Second, the semantic gestures are repeated in each scenario to obtain their DDF characteristics, and an HMM is built for each kind of semantic gesture based on these DDF features, thereby establishing the cognitive behavioral model of the interactive scenario. Third, semantic gestures are tested on the established cognitive behavioral model. The experimental results show that semantic gesture recognition based on the cognitive behavior model adapts well to a specific interaction environment and achieves a high recognition rate with good time efficiency.
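The two ingredients described above, a scenario-level transition prior and a per-gesture HMM over discrete DDF observations, can be combined by scoring each gesture as (scenario prior for that gesture) x (HMM likelihood of the observed sequence). The sketch below is an illustrative reconstruction in plain Python, not the thesis's implementation; the gesture names, scenario names, and toy probabilities are all hypothetical:

```python
def forward_likelihood(obs, start, trans, emit):
    """HMM forward algorithm: P(obs | model) for a discrete symbol sequence.
    start[i]: initial state probability; trans[i][j]: state transition;
    emit[i][o]: probability of emitting symbol o in state i."""
    n = len(start)
    alpha = [start[i] * emit[i][obs[0]] for i in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * trans[i][j] for i in range(n)) * emit[j][o]
                 for j in range(n)]
    return sum(alpha)

def recognize(obs, scene, scene_prior, hmms):
    """Pick the semantic gesture maximizing scene prior x HMM likelihood.
    scene_prior[scene][g]: learned probability of gesture g in this scenario
    (from the trained transition matrix); hmms[g]: (start, trans, emit)
    model trained on DDF symbol sequences for gesture g."""
    scores = {g: scene_prior[scene][g] * forward_likelihood(obs, *m)
              for g, m in hmms.items()}
    return max(scores, key=scores.get)
```

A usage sketch: with two toy gesture models, one biased toward symbol 0 ("grab") and one toward symbol 1 ("release"), the observation sequence [0, 0, 0] in an "assemble" scenario with a uniform prior would be recognized as "grab".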
Keywords/Search Tags: Human-computer interaction, semantic gesture recognition, virtual reality, cognitive behavioral model