
Research And Application Of Multimodal Fusion Method In Augmented Reality

Posted on: 2021-01-16    Degree: Master    Type: Thesis
Country: China    Candidate: M T Xiao    Full Text: PDF
GTID: 2428330605460583    Subject: Computer technology
Abstract/Summary:
As a current research hotspot in the field of human-computer interaction, augmented reality technology has added new vitality to fields such as education, medicine, and home life. In education in particular, researchers use augmented reality to simulate experimental classrooms, which not only enriches students' experience but also strengthens their hands-on ability. However, most existing work uses card markers to simulate the experimental process, which greatly weakens the user's sense of real operation. Moreover, most systems simulate experiments in the visual modality alone; this single mode of interaction overloads the user and reduces interaction efficiency.

This thesis is supported by the National Key R&D Program of China projects "Cloud Fusion Natural Interactive Equipment and Tools" and "Multimodal Natural Interactive Virtual and Real Fusion Open Experimental Teaching Environment". Against the application background of the virtual experiment classroom, and with multimodal natural interaction as its research goal, the thesis proposes a multimodal interaction method fusing the auditory, visual, and tactile modalities in an augmented reality environment, and designs and implements an intelligent chemistry experiment system. The aim is to let students safely experience and learn about dangerous phenomena such as explosion and corrosion, as well as phenomena invisible to the naked eye, through interaction across different modalities in chemical experiments. The innovations of this thesis are mainly reflected in the following three aspects:

(1) An intelligent device with a new structure and functions is designed. To address the problems that chemical reagents cannot be reused, laboratory equipment is expensive, and existing intelligent devices cannot detect user-specific behaviors, this thesis designs a new type of 3D-printed intelligent device. Multiple sensors placed at different locations on the device detect user behavior, so that it can sense specific actions such as picking up, pouring, moving, and rotating, and can provide vibration feedback when the user grabs a virtual model. The device improves the user's immersion and sense of realism in virtual experiments.

(2) A gesture recognition and interaction algorithm based on augmented reality is proposed. Existing AR systems generally use card markers to achieve virtual-real fusion, but this approach harms the fluency and naturalness of the experimental interaction. This thesis therefore replaces cards with natural gestures and proposes a gesture recognition and interaction algorithm for augmented reality. First, preprocessed depth maps of six gesture types are obtained, and gesture recognition is implemented by training a convolutional neural network recognition model. Then, to solve the consistency problem of virtual-real fusion, the mapping between the hand joint coordinates and the virtual scene is determined in the augmented reality scene, establishing a coordinate consistency model between gestures and virtual models.

(3) An intelligent navigation interaction method for multimodal fusion is proposed. To address the operation overload and low interaction efficiency of single-modality interaction, this thesis proposes a decision-level fusion interaction method for gesture, speech, and sensor information. First, data sets and intention analyses are established for the three modalities, and functions for multimodal information intersection and information independence are built. Then, a multimodal information fusion interaction strategy based on a directed graph is proposed to realize understanding of the user's intention. Finally, a navigation interaction algorithm based on multimodal intent understanding is proposed to guide and correct the user's operation behavior, solving the problem of difficult operation. Experimental results show that at normal operating speed the average success rate is 92%, and the average operation load of multimodal interaction is reduced by 36%, improving user interaction efficiency.

On this basis, with the multimodal interactive virtual experiment classroom as its goal, this thesis designs and builds an intelligent chemistry experiment system based on augmented reality using the multimodal fusion interaction method. With the help of gestures, speech, the intelligent device, and the virtual models in the scene, users can easily complete experimental operations, effectively reducing the user's cognitive load and improving interaction efficiency. The purpose of this research and application is to provide better educational resources for middle school students, which is of great significance for education in remote mountainous areas of China where teaching and experimental conditions are severely lacking.
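As a rough illustration of the decision-level fusion idea summarized above, the sketch below resolves gesture, speech, and sensor events into a single user intent within a short time window. All class names, event labels, and fusion rules here are hypothetical placeholders; the thesis does not publish its implementation, and its directed-graph strategy is approximated by a simple lookup table of label pairs.

```python
from dataclasses import dataclass, field
import time

@dataclass
class ModalEvent:
    modality: str    # "gesture", "speech", or "sensor" (hypothetical tags)
    label: str       # recognized label within that modality
    confidence: float
    timestamp: float = field(default_factory=time.time)

# Hypothetical fusion table: pairs of modal labels that jointly
# signal one experimental intent (decision-level fusion).
FUSION_RULES = {
    ("grab", "pick_up"): "pick_up_beaker",        # gesture + sensor
    ("tilt", "pour_acid"): "pour_reagent",        # sensor + speech
    ("point", "select_beaker"): "select_object",  # gesture + speech
}

def fuse(events, window=1.5):
    """Fuse recent modal events into one intent.

    Events older than `window` seconds are discarded. Among the rest,
    any ordered label pair matching a fusion rule wins, scored by the
    mean confidence of its two events; with no match, fall back to the
    single most confident event's label.
    """
    now = time.time()
    recent = [e for e in events if now - e.timestamp <= window]
    if not recent:
        return None
    best, best_score = None, 0.0
    for a in recent:
        for b in recent:
            if a is b:
                continue
            intent = FUSION_RULES.get((a.label, b.label))
            score = (a.confidence + b.confidence) / 2
            if intent and score > best_score:
                best, best_score = intent, score
    if best:
        return best
    return max(recent, key=lambda e: e.confidence).label
```

In a real system the fusion step would also have to handle conflicting modalities and drive the navigation prompts that guide the user's next operation; this sketch only shows the windowed, confidence-weighted combination step.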
Keywords/Search Tags:augmented reality, gesture recognition, multimodal fusion, intelligent navigation interaction, intelligent chemistry experiment system