Deaf people are an important part of society, and sign language is their primary means of communication. With the rapid development of computer and network technology, automatic sign language recognition has become a research hotspot. It can improve the quality of life of deaf people, and it has broad application prospects and academic value as well.

There are normally two approaches to sign language recognition: data-glove based and vision based. Vision-based technology is the future trend because it better fits the theory of harmonious human-computer interaction. Building on the current state of domestic and international research, this thesis introduces an approach to detect, track and recognize sign language gestures. The detailed research work is as follows:

1. An algorithm to detect the palm gesture. 200 positive sample pictures of the palm gesture and 1000 negative sample pictures are prepared and selected, and a palm-gesture classifier is trained based on Haar features and the AdaBoost algorithm. After the classifier runs, a single-Gaussian skin-color model and Otsu binarization are applied to filter the candidate palm regions. This method detects the palm gesture precisely.

2. An analysis of the shortcomings of the traditional Camshift algorithm: (1) it is only semi-automatic; (2) it performs no motion prediction during tracking; (3) it can track only a single target.
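The detection stage above combines a skin-color model with Otsu binarization to filter the classifier's candidate regions. As a minimal numpy-only sketch (the thesis itself uses OpenCV; the skin mean and covariance below are placeholder values that would normally be learned from labeled skin pixels), a single-Gaussian model scores pixels by their distance to the skin distribution, and Otsu's method picks the threshold that maximizes between-class variance:

```python
import numpy as np

def otsu_threshold(gray):
    """Return the threshold in 0..255 maximizing between-class variance."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    probs = hist / hist.sum()
    omega = np.cumsum(probs)                    # class-0 probability up to t
    mu = np.cumsum(probs * np.arange(256))      # cumulative mean up to t
    mu_total = mu[-1]
    # Between-class variance for every candidate threshold t.
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_total * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b[np.isnan(sigma_b)] = 0.0
    return int(np.argmax(sigma_b))

def skin_likelihood(pixels, mean, cov):
    """Single-Gaussian skin model: unnormalized likelihood of color pixels
    (e.g. CbCr pairs) under N(mean, cov)."""
    d = pixels - mean
    mahal = np.einsum("...i,ij,...j->...", d, np.linalg.inv(cov), d)
    return np.exp(-0.5 * mahal)
```

In the full pipeline, the skin likelihood map would be thresholded with `otsu_threshold` to produce the binary mask used to verify candidate palm regions.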
3. An improved algorithm that resolves the three shortcomings of Camshift: (1) the detected palm gesture is used to initialize Camshift instead of selecting the target manually, making Camshift fully automatic; (2) a Kalman filter is combined with Camshift so that the gesture's motion can be predicted, preventing target loss under interference such as large areas of similar color or fast motion; (3) Camshift and the Kalman filter are encapsulated as a class following object-oriented principles, so that multiple tracker instances can run independently, solving the single-target limitation.

4. A method to extract gesture features, comprising an ellipse feature and an EOH (Edge Orientation Histogram) feature. The ellipse feature contains the coordinates of the palm center, the lengths of the major and minor axes of the fitted ellipse, and the rotation angle; the palm center is expressed in coordinates relative to the face center rather than absolute coordinates, to reduce the influence of different people performing the gestures. For the EOH feature, the Canny operator extracts the palm edges, the tangent angle at each edge point is computed, and the angles are accumulated into a histogram of 8 groups over the angle space. The two features are normalized and concatenated into a 13x1 feature vector that effectively distinguishes different gestures.

5. An approach to recognize gestures based on feature template matching and Euclidean distance: the Euclidean distance between the extracted gesture feature and each feature template is computed, and the template with the minimum distance determines the recognition result. If even the minimum distance exceeds a preset threshold, the gesture is rejected. The computational cost and time complexity of this approach are low while its accuracy is high.

6. A sign language gesture recognition system implemented with OpenCV 2.4.5, MFC and Visual Studio 2012, which runs effectively in real time and clearly displays the processing details.
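The Kalman-filter improvement to Camshift can be illustrated with a minimal constant-velocity filter over the palm center; between frames, `predict` supplies Camshift's search window even when the measurement is unreliable. This is an illustrative sketch, not the thesis's implementation: the process and measurement noise covariances below are assumed values.

```python
import numpy as np

class KalmanTracker:
    """Constant-velocity Kalman filter for a 2-D point: state = [x, y, vx, vy]."""

    def __init__(self, x, y, dt=1.0):
        self.x = np.array([x, y, 0.0, 0.0])
        self.P = np.eye(4) * 100.0                   # high initial uncertainty
        self.F = np.eye(4)
        self.F[0, 2] = self.F[1, 3] = dt             # constant-velocity motion model
        self.H = np.eye(2, 4)                        # we observe position only
        self.Q = np.eye(4) * 0.01                    # process noise (assumed value)
        self.R = np.eye(2) * 1.0                     # measurement noise (assumed value)

    def predict(self):
        """Advance the state one frame; returns the predicted (x, y) center."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, zx, zy):
        """Correct the state with a measured center, e.g. Camshift's output."""
        y = np.array([zx, zy]) - self.H @ self.x     # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)     # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
```

Fed a sequence of palm centers moving at roughly constant velocity, the filter's prediction stays close to the true next position, which is what lets tracking survive brief interference from similar-colored regions.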
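The 8-group angle histogram of the EOH feature can be sketched as follows. This is a simplified illustration: the thesis bins tangent angles at Canny edge points, whereas the sketch bins gradient orientations (perpendicular to the tangent, a fixed 90-degree shift) and takes the per-point gradients as given input rather than computing Canny edges.

```python
import numpy as np

def eoh_feature(edge_gradients, bins=8):
    """Edge Orientation Histogram: bin edge-point orientations into `bins`
    groups over [0, pi) and normalize the histogram to unit sum.

    edge_gradients: (N, 2) array of (dx, dy) gradients at edge points.
    """
    dx, dy = edge_gradients[:, 0], edge_gradients[:, 1]
    angles = np.mod(np.arctan2(dy, dx), np.pi)   # orientation, not direction
    hist, _ = np.histogram(angles, bins=bins, range=(0.0, np.pi))
    return hist / max(hist.sum(), 1)             # normalized 8-D histogram
```

Concatenating this 8-dimensional histogram with the 5 ellipse parameters (relative center x and y, major and minor axis lengths, rotation angle) gives exactly the 13x1 feature vector described above.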
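The recognition step reduces to a nearest-template search under Euclidean distance with a rejection threshold, which can be sketched directly (the template vectors and threshold value in the usage below are placeholders, not the thesis's trained templates):

```python
import numpy as np

def recognize(feature, templates, reject_threshold):
    """Match a 13-D gesture feature vector against labeled templates.

    templates: dict mapping gesture label -> 13-D template vector.
    Returns the label of the template with minimum Euclidean distance,
    or None (rejection) if even the best match exceeds the threshold.
    """
    best_label, best_dist = None, np.inf
    for label, template in templates.items():
        dist = np.linalg.norm(feature - template)
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label if best_dist <= reject_threshold else None
```

Because each query is a handful of 13-dimensional distance computations, both the computational cost and the time complexity are low, matching the claim above.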