
Hand Gesture And Speech Fusion Algorithm For Virtual Experiment

Posted on: 2021-02-21    Degree: Master    Type: Thesis
Country: China    Candidate: J Li    Full Text: PDF
GTID: 2427330605960605    Subject: Computer Science and Technology
Abstract/Summary:
A virtual experiment system uses virtual reality and visualization technology to present relevant theoretical knowledge and operation scenes visually, thereby avoiding the dangers of real operation, reducing the cost of experiments, and allowing the experimental process to run "unattended". Human-computer interaction underpins all functions of a virtual experiment system. However, current interaction designs for virtual experiment systems usually focus on simulation fidelity and functionality: the system cannot actively perceive the user's intention in order to guide and assist the operation, and the human interaction experience is neglected. This paper studies two existing methods of natural interaction, gesture interaction and voice interaction, together with their multimodal fusion, in order to establish a natural and harmonious human-computer interaction mode for virtual experiment systems, improve the intelligence of such systems, and reduce the operational and cognitive load during interaction.

The main goal of this paper is to explore a realization mechanism for multimodal fusion interaction. By constructing an interaction algorithm framework that fuses gesture and voice, user intention reasoning is realized and the intelligence of the virtual experiment system is improved. By designing the hardware structure and sensors of an intelligent microscope, the system gives users a realistic sense of operation while also sensing their operation intention, highlighting the advantages of multimodal natural interaction. The main innovations are as follows:

(1) Most virtual experiment systems cannot perceive the user's interaction intention. This paper proposes a multimodal fusion framework and its key algorithms for understanding user intention in virtual experiments; the framework establishes a unified semantic expression for multimodal data and realizes reasoning about the user's interaction intention.

(2) Most traditional virtual experiment systems rely on single-modality interaction: when misrecognition occurs, the system's interaction behavior also goes wrong, and the user cannot correct it in time. Different from this traditional interaction mode, this paper proposes a human-computer cooperation mode based on a gesture and voice fusion interaction algorithm. Users can grasp and manipulate virtual objects by hand in a natural way; at the same time, the virtual experiment system infers the user's current operation intention from the user's actions and voice input, actively monitors whether the user is having difficulty, and assists the user through active scene switching or voice feedback (an illustrative sketch of this fusion idea follows the abstract).

(3) Existing physical microscopes require teachers to guide students' experiments one-on-one during operation, which increases the teachers' burden, while virtual digital microscopes operated by mouse and keyboard weaken the students' sense of presence. The underlying reason is that existing technology falls short in the "naturalness" and "intelligence" of interaction. Therefore, this paper first designs the hardware structure of an intelligent microscope so that students can obtain a realistic operating experience; on the basis of multimodal fusion, a navigation interaction algorithm is then proposed and implemented. As a result, the system can not only accurately perceive students' operating behavior, but also perceive their operation intention in real time, and can therefore provide real-time, accurate navigation and active guidance throughout the students' experimental process.
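The following Python sketch illustrates one possible way to organize the gesture and voice fusion described in contributions (1) and (2): each recognizer emits a unified semantic record (action, target, confidence), and a simple fusion rule either infers the current operation intention or falls back to active assistance. The data structure, function names, and fusion rule here are hypothetical illustrations, not the algorithm actually used in the thesis.

    # Minimal sketch of gesture-speech fusion for intention inference.
    # All names and the fusion rule are illustrative assumptions.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class ModalInput:
        """Unified semantic record for one modality's recognition result."""
        action: str                 # e.g. "grasp", "rotate", "focus"
        target: Optional[str]       # e.g. "slide", "fine_focus_knob"
        confidence: float           # recognizer confidence in [0, 1]

    def fuse_intention(gesture: Optional[ModalInput],
                       speech: Optional[ModalInput],
                       threshold: float = 0.5) -> Optional[dict]:
        """Infer the user's current operation intention from two modalities.

        If both modalities agree on the action, their evidence is combined;
        if they conflict, the higher-confidence modality wins; if neither
        passes the threshold, no intention is reported and the system can
        fall back to active guidance (scene switching or voice feedback).
        """
        candidates = [m for m in (gesture, speech)
                      if m is not None and m.confidence >= threshold]
        if not candidates:
            return None  # unclear input -> trigger assistance instead of acting

        if len(candidates) == 2 and candidates[0].action == candidates[1].action:
            g, s = candidates
            return {
                "action": g.action,
                # speech often names the target more precisely than the hand pose
                "target": s.target or g.target,
                "confidence": round(1 - (1 - g.confidence) * (1 - s.confidence), 3),
            }

        best = max(candidates, key=lambda m: m.confidence)
        return {"action": best.action, "target": best.target,
                "confidence": best.confidence}

    # Example: a grasping gesture plus the utterance "pick up the slide"
    intent = fuse_intention(ModalInput("grasp", None, 0.8),
                            ModalInput("grasp", "slide", 0.7))
    print(intent)  # -> action "grasp", target "slide", combined confidence ~0.94

When the two modalities disagree or neither is confident enough, the sketch returns either the stronger single-modality reading or nothing at all; the latter case is where, per the abstract, the system would switch to active guidance rather than executing an uncertain action.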
Keywords/Search Tags: virtual experiment, human-computer interaction, gesture interaction, speech interaction, multimodal fusion