As wearable devices develop rapidly, human-computer interaction technology based on wearable vision has received extensive attention. It involves visual interaction technology, computer vision, wearable computing and many other areas; its main research content is human-centered visual technology and modes of perception, with the aim of establishing natural and efficient interaction between people, devices and the environment. This paper mainly studies static hand gesture recognition based on wearable vision, and explores solutions to hand gesture segmentation and gesture modelling in static hand gesture recognition.

In a wearable setting, static gesture recognition faces many problems and challenges. Hand segmentation plays a very important role in static hand gesture recognition, but under variations in illumination, complex backgrounds, camera shake, changes in shooting angle and other factors, it is difficult to obtain the desired segmentation results. To address this problem, this paper proposes an adaptive gesture segmentation method that combines a Gaussian skin-color model with a multi-layer perceptron. First, the paper analyzes the robustness of skin-color clustering in the YCbCr color space to changes in light intensity, and establishes the skin-color model in the YCbCr space. Median filtering is then applied as the main image preprocessing step, since it preserves the good clustering of skin color. Next, combining a fixed threshold, a single Gaussian model and a Gaussian mixture model, an adaptive Gaussian model is built from the skin model and used to compute a skin-likelihood map of the gesture image. Finally, the skin-likelihood image is divided into background and foreground (the gesture); the two parts are fed to a multi-layer perceptron, and the gesture segmentation result is obtained.
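The skin-likelihood step described above can be sketched as follows. This is a minimal single-Gaussian sketch in the (Cb, Cr) plane using numpy; the function name, the mean/covariance values and the synthetic image are illustrative assumptions, not the thesis's actual adaptive model or fitted parameters.

```python
import numpy as np

def skin_likelihood_ycbcr(img_ycbcr, mean, cov):
    """Per-pixel skin likelihood under one Gaussian in (Cb, Cr).

    img_ycbcr: H x W x 3 array with channels (Y, Cb, Cr).
    mean, cov: hypothetical Gaussian parameters that would be fitted
    from labelled skin pixels; Y is ignored to reduce sensitivity
    to light intensity, as the YCbCr analysis above suggests.
    """
    cbcr = img_ycbcr[..., 1:3].reshape(-1, 2).astype(float)
    diff = cbcr - mean
    inv = np.linalg.inv(cov)
    # Squared Mahalanobis distance of every pixel to the skin mean
    d2 = np.einsum('ij,jk,ik->i', diff, inv, diff)
    # Unnormalized likelihood in [0, 1]; 1 means "exactly skin-colored"
    lik = np.exp(-0.5 * d2)
    return lik.reshape(img_ycbcr.shape[:2])

# Tiny synthetic example: one skin-colored pixel, one far-off pixel
mean = np.array([120.0, 150.0])          # assumed skin (Cb, Cr) mean
cov = np.eye(2) * 100.0                  # assumed covariance
img = np.zeros((2, 2, 3))
img[0, 0] = [100.0, 120.0, 150.0]        # matches the skin mean
img[1, 1] = [100.0, 60.0, 60.0]          # far from skin color
likelihood = skin_likelihood_ycbcr(img, mean, cov)
```

Thresholding this map (or, as in the thesis, feeding background and foreground into a multi-layer perceptron) then yields the segmentation mask.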
The experimental results show that the method is robust to complex backgrounds, illumination changes and camera shake.

In a wearable setting, hand gesture segmentation inevitably contains some errors, so the gesture model must be robust to them. To address this problem, this paper presents an improved shape-context gesture model based on geometric normalization. First, geometric normalization removes the interference of the arm, and the main direction of the hand region is calculated. Then the inner distance is used in the shape context instead of the Euclidean distance, which improves robustness to the motion of non-rigid objects. Finally, shape-context descriptors are computed using the palm and fingertips as reference points, aligned with the main direction of the hand region, and the descriptors are classified by a multi-layer perceptron to obtain the recognition result. The experimental results show that the proposed method is robust to hand-segmentation errors and non-rigid motion, and solves the problem that the traditional shape context uses too many reference points, thereby accelerating gesture recognition.
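As one illustration of the geometric-normalization step, the main direction of a binary hand mask can be estimated from second-order central image moments. This is a common technique, not necessarily the exact formula used in the thesis; the function name and the synthetic masks are assumptions for the sketch.

```python
import numpy as np

def main_direction(mask):
    """Principal orientation (radians) of a binary mask.

    Uses second-order central image moments: the angle of the
    axis of least inertia, one standard way to realise the
    'main direction of the hand region' described above.
    """
    ys, xs = np.nonzero(mask)
    x0, y0 = xs.mean(), ys.mean()          # centroid
    mu20 = ((xs - x0) ** 2).mean()         # spread along x
    mu02 = ((ys - y0) ** 2).mean()         # spread along y
    mu11 = ((xs - x0) * (ys - y0)).mean()  # x-y covariance
    return 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)

# Synthetic masks: a horizontal bar and a vertical bar
horizontal = np.zeros((10, 30)); horizontal[4:6, :] = 1
vertical = np.zeros((30, 10)); vertical[:, 4:6] = 1
angle_h = main_direction(horizontal)   # near 0 rad
angle_v = main_direction(vertical)     # near +/- pi/2 rad
```

Rotating the mask by the negative of this angle before computing shape-context descriptors makes the descriptors comparable across differently oriented hands.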