Perception and localization are fundamental problems for autonomous, intelligent mobile robots. Owing to the loss of GPS signals and the severe drift of low-cost IMUs, localizing a robot in indoor environments is a challenging problem that has been studied intensively over the last decade. Vision-based three-dimensional ego-motion estimation and environment modeling therefore have significant research and application value. However, most traditional visual navigation and environment modeling methods work only in static environments: moving objects in the scene become interference in the images and seriously degrade both ego-motion estimation and environment modeling. This thesis therefore studies how to estimate the position and orientation of a robot and build a globally consistent static map, free of moving elements, in a three-dimensional dynamic environment using visual information alone. The main contributions of this thesis are summarized as follows.

First, the information preprocessing techniques for the RGB-D sensor are studied in depth. The internal parameters of the color camera and the infrared camera are calibrated separately, and the two image streams are spatially registered and temporally synchronized so that a depth value is obtained for each pixel of the color image. The depth measurement principle of the RGB-D camera is then analyzed and, combined with a Gaussian mixture model, the measurement uncertainty of the sensor is modeled to obtain the uncertainty of each pixel's depth and three-dimensional position in the camera coordinate frame.

Second, a novel RGB-D visual odometry based on feature-region segmentation is proposed to address three-dimensional ego-motion estimation in dynamic environments. Features are extracted from the keyframe and the current frame and matched, and the transformation is computed from the matched features. To eliminate the disturbance of moving objects in the scene, the features are divided into static and dynamic regions based on the invariance of the distance between adjacent feature points under camera motion; only the matched features in the static region are then used to estimate the camera's ego-motion. Experimental results show that the proposed visual odometry obtains accurate ego-motion estimates in both static and dynamic scenes, and its accuracy surpasses that of other state-of-the-art visual odometry methods in large-scale dynamic environments.

Third, a new method of building a static map is proposed for three-dimensional environment modeling in dynamic environments. A simultaneous localization and mapping system is first constructed by combining the proposed visual odometry with loop closure and graph optimization to obtain globally consistent keyframe poses. During map building, the intensity and depth of each matched pixel between adjacent keyframes are checked against their relative transformation in order to remove moving objects, and only the pixels that are consistent between adjacent keyframes are used to build the global point cloud map. Experimental results show that the proposed static-map-building method removes moving objects thoroughly from the point cloud while preserving the information of the static regions, thus maintaining the integrity and consistency of the whole environment map.
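The preprocessing step that recovers each pixel's three-dimensional position from its registered depth value can be illustrated with a standard pinhole back-projection. This is a minimal sketch, not the thesis's implementation; the intrinsic values in the usage example are hypothetical, and the Gaussian-mixture uncertainty model described above is not reproduced here.

```python
import numpy as np

def backproject(depth, K):
    """Back-project a registered depth image into 3-D points in the
    camera coordinate frame, assuming a pinhole model.

    depth : (H, W) array of metric depths (0 marks invalid pixels)
    K     : (3, 3) intrinsic matrix [[fx, 0, cx], [0, fy, cy], [0, 0, 1]]
    returns an (H, W, 3) array of XYZ points
    """
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    h, w = depth.shape
    # pixel coordinate grids: u runs along columns, v along rows
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)

# usage with hypothetical intrinsics: the principal-point pixel of a
# flat 2 m depth image maps to (0, 0, 2) in the camera frame
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
pts = backproject(np.full((480, 640), 2.0), K)
```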
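The static/dynamic segmentation in the second contribution rests on a rigidity argument: under pure camera motion the distance between two static 3-D points is unchanged, while a point on a moving object changes its distance to most static points. The following sketch realizes that idea with a simple majority vote over pairwise distance consistency; the voting rule and the tolerance `eps` are assumptions for illustration, not the thesis's exact algorithm.

```python
import numpy as np

def segment_static(pts_kf, pts_cur, eps=0.05):
    """Label matched 3-D feature points as static or dynamic.

    pts_kf, pts_cur : (N, 3) matched feature points in the keyframe and
                      the current frame (camera coordinates)
    eps             : tolerance on distance change, in metres (assumed)
    returns a boolean mask of length N, True = static
    """
    # pairwise inter-point distances in each frame
    d_kf = np.linalg.norm(pts_kf[:, None] - pts_kf[None, :], axis=-1)
    d_cur = np.linalg.norm(pts_cur[:, None] - pts_cur[None, :], axis=-1)
    # a pair (i, j) is consistent if its distance is preserved
    consistent = np.abs(d_kf - d_cur) < eps
    # each point votes for the partners whose distance it preserved;
    # subtract 1 to ignore the trivial self-pair
    votes = consistent.sum(axis=1) - 1
    # points consistent with a majority of the others are called static
    return votes >= (len(pts_kf) - 1) / 2

# usage: five points undergo the same rigid motion (a translation seen
# by the camera), the sixth also moves on its own and is flagged dynamic
kf = np.array([[0, 0, 1], [1, 0, 1], [0, 1, 1],
               [1, 1, 1], [0, 0, 2], [2, 2, 2]], dtype=float)
cur = kf + np.array([0.1, 0.0, 0.0])
cur[5] += np.array([1.0, 0.0, 0.0])   # independent object motion
mask = segment_static(kf, cur)
```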
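The consistency check used in the third contribution can be sketched as a warp-and-compare test: each pixel of one keyframe is back-projected, transformed by the relative keyframe pose, and re-projected into the adjacent keyframe; pixels whose predicted depth disagrees with the depth actually observed there (e.g. because a moving object passed through) are dropped from the point cloud. This is a simplified depth-only sketch under a pinhole model; the thesis also checks intensity, and the tolerance `depth_tol` is an assumed value.

```python
import numpy as np

def consistent_mask(depth_a, depth_b, T_ba, K, depth_tol=0.05):
    """Flag pixels of keyframe A whose geometry reappears in keyframe B.

    depth_a, depth_b : (H, W) metric depth images of the two keyframes
    T_ba             : (4, 4) pose of A's camera expressed in B's frame
    K                : (3, 3) shared pinhole intrinsic matrix
    returns a boolean (H, W) mask, True = depth-consistent (kept in map)
    """
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    h, w = depth_a.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # back-project A's pixels into homogeneous 3-D points
    x = (u - cx) * depth_a / fx
    y = (v - cy) * depth_a / fy
    pts = np.stack([x, y, depth_a, np.ones_like(depth_a)], axis=-1)
    # transform into B's camera frame and re-project
    pts_b = pts @ T_ba.T
    z_b = pts_b[..., 2]
    z_safe = np.where(z_b > 1e-6, z_b, 1.0)   # avoid divide-by-zero; filtered below
    ub = np.round(pts_b[..., 0] / z_safe * fx + cx).astype(int)
    vb = np.round(pts_b[..., 1] / z_safe * fy + cy).astype(int)
    ok = (depth_a > 0) & (z_b > 1e-6) & \
         (ub >= 0) & (ub < w) & (vb >= 0) & (vb < h)
    mask = np.zeros_like(ok)
    # compare predicted depth with B's measurement at the warped pixel
    mask[ok] = np.abs(depth_b[vb[ok], ub[ok]] - z_b[ok]) < depth_tol
    return mask

# usage with hypothetical data: identical keyframes except for a patch
# where the depth changed, as a moving object would cause
K = np.array([[100.0, 0.0, 8.0], [0.0, 100.0, 8.0], [0.0, 0.0, 1.0]])
da = np.full((16, 16), 2.0)
db = da.copy()
db[2:4, 2:4] = 3.0                    # "moving object" in keyframe B
m = consistent_mask(da, db, np.eye(4), K)
```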