Since human society has evolved into an information society, there is an urgent need for robots that can provide a variety of services and assistance in people's daily lives. Typically, a robot needs a map in order to operate in people's daily living and indoor working environments, so a robot must be capable of exploring and mapping unknown environments. Moreover, the common metric map built from robot sensor data cannot capture the semantic information of an indoor environment; a semantic map, which encodes this information in a form that robots can understand, therefore needs to be created. To this end, this thesis first focuses on unknown environment exploration and mapping, and then presents an approach to building a semantic map of an indoor three-dimensional (3D) environment based on Human-Robot Interaction (HRI). Two kinds of mobile robot platform are used to study unknown environment exploration and mapping indoors. Furthermore, with a wearable motion sensor network and a motion capture system, the technical problems related to semantic mapping of a 3D environment are studied within the framework of HRI. The main research contents are as follows:

Firstly, an iRobot mobile robot, a Pioneer mobile robot, laser range finders, four cameras, microcomputers and the Robot Operating System (ROS) are used to build two multi-purpose experimental platforms. The platform based on the iRobot robot mainly uses a laser range finder and several types of camera to realize two-dimensional mapping, unknown environment exploration and multi-robot cooperative localization. The platform based on the Pioneer mobile robot mainly uses a Kinect camera, a motion capture system and a wearable wireless motion sensor network to build a 3D environment map and to recognize gestures, respectively. The combination of these two platforms can be flexibly applied to different experimental tasks. Additionally, to support HRI in an indoor environment, a wearable wireless motion sensor for activity and gesture recognition is designed in this thesis; it consists of an orientation sensor module, a wireless communication module and a power management module. An energy management algorithm is also proposed to prolong its operating time.

Secondly, during Simultaneous Localization And Mapping (SLAM), failures in data association are mainly caused by accumulated errors, so an error correction algorithm is proposed to reduce them. For the problem of Simultaneous Planning, Localization And Mapping (SPLAM) in unknown environment exploration, a utility function based on information entropy theory is constructed to achieve mapping of an unknown environment together with autonomous path planning, as sketched below. Additionally, a data fusion strategy is presented for multi-robot cooperative localization, and the accuracy and validity of the proposed methods are verified with the help of a motion capture system.
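The exact form of the utility function is given in the body of the thesis rather than in this summary. The following is only a minimal sketch of the underlying idea, scoring a candidate exploration target by the expected reduction in map entropy traded off against travel cost; the function names, the cost weight and the candidate representation are illustrative assumptions, not the thesis's implementation.

```python
import numpy as np

def map_entropy(occupancy_probs):
    """Shannon entropy of an occupancy grid; each cell holds an
    occupancy probability, clipped away from 0 and 1 for stability."""
    p = np.clip(occupancy_probs, 1e-6, 1 - 1e-6)
    return float(np.sum(-p * np.log2(p) - (1 - p) * np.log2(1 - p)))

def candidate_utility(expected_entropy_after, current_entropy,
                      path_length, cost_weight=0.1):
    """Utility of a candidate viewpoint: expected information gain
    (entropy reduction) minus a weighted travel cost.
    cost_weight is a hypothetical tuning parameter."""
    info_gain = current_entropy - expected_entropy_after
    return info_gain - cost_weight * path_length

def select_candidate(candidates, current_entropy):
    """candidates: list of (expected_entropy_after, path_length) pairs.
    Returns the index of the highest-utility candidate."""
    scores = [candidate_utility(h, current_entropy, d) for h, d in candidates]
    return int(np.argmax(scores))
```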
Thirdly, the limited field of view of the Kinect camera and the changes in camera pose caused by the motion of the mobile robot are two main problems in Visual Simultaneous Localization And Mapping (VSLAM), and they prevent point cloud data from being matched in a common reference frame. This thesis proposes a method that fuses the pose information of the Kinect camera with data from multiple frames, and a Multilevel Iterative Closest Point (MICP) algorithm is proposed for constructing a 3D environment map.

Fourthly, to avoid the high computational complexity of traditional vision-based gesture recognition methods, this thesis instead uses a wearable wireless motion sensor, and an approach based on Multilayer Hidden Markov Models (MHMMs) is proposed for continuous gesture recognition. First, a three-layer feed-forward neural network is used to detect gesture signals; second, Low-level Hidden Markov Models (LHMMs) are used to recognize individual gestures in the continuous signal; finally, a Bayesian filter with context constraints from High-level Hidden Markov Models (HHMMs) is used to correct the final recognition result.

Finally, a method that fuses human motion information with human location information is proposed for building a semantic 3D map. Three of the wireless motion sensors designed in this thesis are worn on the thigh, waist and wrist on the same side of a tester's body, forming a body sensor network for simultaneous recognition of human activities and gestures. Meanwhile, a motion capture system is used to obtain the tester's location. A three-layer Dynamic Bayesian Network (DBN) is used to model the constraints among the human's position, physical activities and gestures, and a Bayesian filter together with an improved Viterbi algorithm is then used to estimate the activities and gestures. Finally, the recognized activities are used to infer furniture types, and this furniture information is embedded into the 3D map to accomplish indoor 3D semantic mapping.
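The improved Viterbi algorithm and the exact DBN structure are described in the thesis itself. As an illustration of the estimation step only, the sketch below runs a standard Viterbi decoder over a small, hypothetical activity model and maps each decoded activity to a furniture label; the activity set, the furniture associations and all model parameters are assumptions made for the example.

```python
import numpy as np

# Hypothetical activity labels and furniture associations (for illustration only).
ACTIVITIES = ["sitting", "lying", "standing"]
FURNITURE = {"sitting": "chair", "lying": "bed", "standing": None}

def viterbi(log_trans, log_emit, log_prior):
    """Standard Viterbi decoding of the most likely activity sequence.
    log_trans: (S, S) log transition matrix
    log_emit:  (T, S) per-frame log-likelihoods from the sensor model
    log_prior: (S,)   log initial state distribution"""
    T, S = log_emit.shape
    delta = log_prior + log_emit[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_trans          # scores[i, j]: i -> j
        back[t] = np.argmax(scores, axis=0)
        delta = scores[back[t], np.arange(S)] + log_emit[t]
    path = [int(np.argmax(delta))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

def furniture_labels(state_path):
    """Map each decoded activity state to a furniture type, which can then
    be attached to the tester's location in the 3D map."""
    return [FURNITURE[ACTIVITIES[s]] for s in state_path]
```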