
Research Of Sign Language And Gesture Recognition Based On Multiple Sensors Information Detection And Fusion

Posted on: 2011-03-06
Degree: Master
Type: Thesis
Country: China
Candidate: W H Wang
Full Text: PDF
GTID: 2178360308455464
Subject: Biomedical engineering
Abstract/Summary:
Sign language expresses specific information through hand shape, position, motion, orientation, facial expression, and other channels. It is one of the main means of everyday communication among deaf people and between deaf and hearing people, and research on sign language recognition can help the deaf communicate with the outside world. As computer performance improves, natural human-computer interaction is becoming ever more important in daily life. A natural human-computer interaction system should let users interact with the computer through voice, facial expression, gesture, and other natural human languages, much as people interact with each other. Research on sign language and gesture recognition is therefore significant for improving computers' comprehension of human language and for developing multimodal human-computer interaction.

Sign language and gestures carry rich information about hand shape, arm motion, orientation, and position. Different types of sensors capture this motion information from different aspects. Surface electromyography (SEMG) electrodes, accelerometers (ACC), and cameras are three small, convenient, and low-cost sensors, and their complementary advantages can be combined to describe gestures more adequately. This dissertation proposes a sign language and gesture recognition method based on multi-sensor information detection and fusion, aiming to improve recognition accuracy and enlarge the recognizable vocabulary.

The main work and achievements of the dissertation are as follows:

(1) Based on the changing amplitude of the SEMG signal, active segments of the SEMG, ACC, and video streams are detected synchronously using a 64-point moving-average algorithm and a double-threshold method, alleviating the difficulty of temporally segmenting continuous input streams.

(2) Gestures are analyzed concretely from the viewpoint of spatial morphology. A dynamic gesture is decomposed into smaller recognition units: dynamic and static elements are recognized with multi-stream HMMs instead of the entire gesture, and the element-level results are then integrated to determine the dynamic gesture class. This shortens training and recognition time and improves the recognition rate.

(3) A multi-level classification and fusion strategy based on multi-sensor information detection and fusion is proposed, synthesizing the advantages of the different sensors in gesture information detection. The sign language vocabulary is first divided into several smaller subsets according to the SEMG amplitude and the connected regions of the gesture image, which reduces the candidate set for each sign. Decision-level fusion is then performed on the local matching results of multiple classifiers with the Sugeno fuzzy integral to improve classification performance. For 201 high-frequency sign words from three signers, the recognition accuracies are all above 99%.

(4) An acquisition system based on SEMG, ACC, and vision is built with Visual C++ and OpenCV. A multi-threading scheme lets the acquisition, display, and saving sub-threads run synchronously, and the system's gesture-tracking module uses the CamShift algorithm for hand tracking.
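The active-segment detection in (1) can be sketched as follows. This is a minimal Python illustration, not the dissertation's implementation: the relative thresholds and the minimum-segment length are assumed values, and only the 64-point moving average and the high/low double-threshold rule come from the text.

```python
import numpy as np

def detect_active_segments(semg, win=64, on_th=0.15, off_th=0.08, min_len=100):
    """Mark active gesture segments in a raw SEMG stream.

    A 64-point moving average smooths the rectified signal; a double
    threshold (a high threshold opens a segment, a lower one closes it)
    suppresses spurious on/off flicker. Thresholds are taken relative
    to the envelope's peak, which is an assumption of this sketch.
    """
    env = np.convolve(np.abs(semg), np.ones(win) / win, mode="same")
    hi = on_th * env.max()
    lo = off_th * env.max()
    segments, start = [], None
    for i, v in enumerate(env):
        if start is None and v > hi:        # onset: cross the high threshold
            start = i
        elif start is not None and v < lo:  # offset: fall below the low threshold
            if i - start >= min_len:        # reject very short bursts
                segments.append((start, i))
            start = None
    if start is not None and len(env) - start >= min_len:
        segments.append((start, len(env)))
    return segments
```

The segment boundaries found on the SEMG channel would then be applied synchronously to the ACC and video streams.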
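The element-level decision step in (2) can be sketched like this, assuming each stream's HMMs have already produced per-element log-likelihoods for each active segment. The stream names, weights, and the gesture label are hypothetical; the point is only the weighted multi-stream combination and the mapping from element sequence to gesture class.

```python
import numpy as np

def recognize_gesture(stream_loglik, weights, element_to_gesture):
    """Combine per-stream HMM scores and map elements to a gesture.

    stream_loglik: dict mapping stream name -> array of shape
                   (n_segments, n_element_classes) of log-likelihoods.
    weights:       dict mapping stream name -> stream weight.
    element_to_gesture: dict mapping a tuple of element labels to a
                   dynamic-gesture class.
    """
    # Weighted sum of log-likelihoods across streams (multi-stream HMM score)
    combined = sum(w * stream_loglik[s] for s, w in weights.items())
    # Best element class for each segment
    elements = tuple(int(np.argmax(row)) for row in combined)
    # Integrate element results into a gesture class (None if unknown)
    return element_to_gesture.get(elements)

# Hypothetical two-segment gesture scored by two streams:
stream_loglik = {
    "semg": np.log(np.array([[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]])),
    "acc":  np.log(np.array([[0.6, 0.3, 0.1], [0.2, 0.2, 0.6]])),
}
weights = {"semg": 0.5, "acc": 0.5}
element_to_gesture = {(0, 2): "drink"}  # "drink" is an invented label
result = recognize_gesture(stream_loglik, weights, element_to_gesture)
```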
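The decision-level fusion in (3) uses the Sugeno fuzzy integral. A minimal sketch of that fusion step, under the common assumption of a lambda-fuzzy measure built from per-classifier densities (the density values in the example are illustrative, not from the dissertation):

```python
import numpy as np

def _solve_lambda(g, iters=200):
    """Solve prod(1 + lam*g_i) = 1 + lam for the nonzero root lam > -1."""
    s = g.sum()
    if abs(s - 1.0) < 1e-9:
        return 0.0                      # additive measure: lam = 0
    f = lambda lam: np.prod(1.0 + lam * g) - 1.0 - lam
    # The unique nonzero root is negative if densities sum > 1, else positive.
    lo, hi = (-1.0 + 1e-9, -1e-9) if s > 1.0 else (1e-9, 1e6)
    for _ in range(iters):              # simple bisection
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def sugeno_integral(h, densities):
    """Fuse per-classifier confidences h for one class.

    h[i] is classifier i's confidence for the class; densities[i] is
    classifier i's fuzzy density (its individual reliability).
    Returns max_k min(h_(k), g(A_k)) with h sorted in descending order.
    """
    h = np.asarray(h, float)
    g = np.asarray(densities, float)
    lam = _solve_lambda(g)
    G, best = 0.0, 0.0
    for i in np.argsort(h)[::-1]:        # descending confidence
        G = G + g[i] + lam * G * g[i]    # g(A_k) via the lambda rule
        best = max(best, min(h[i], G))
    return best
```

The fused score would be computed per candidate sign, and the sign with the largest integral wins.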
Keywords/Search Tags: sign language recognition, multiple sensor fusion, hidden Markov models, Sugeno fuzzy integral