
Research On Mapping Of Indoor Mobile Robot Based On IMU And RGB-D Camera Fusion

Posted on: 2020-04-02 | Degree: Master | Type: Thesis
Country: China | Candidate: P Lin | Full Text: PDF
GTID: 2428330623451378 | Subject: Control engineering
Abstract/Summary:
An accurate map is a prerequisite for mobile robot navigation. Accurately dividing a scene into passable and impassable areas, and providing the locations of obstacles, remain major difficulties in mapping. Although a 3D point cloud map can reproduce the scene, it lacks this information and therefore cannot meet the needs of obstacle avoidance and path planning. Mapping relies mainly on the sensors carried by the robot to perceive the surrounding environment. Visual sensors are advantageous here because of their light structure, small size, low power consumption, low price, and rich information; in particular, the RGB-D camera, which simultaneously captures color images and corresponding depth images, has attracted great attention in 3D reconstruction and mapping. However, pose estimation accuracy suffers when the scene lacks texture or the camera moves too fast; moreover, when the image changes it is unclear whether the scene changed or the camera itself moved, so dynamic scenes are difficult to handle with visual information alone. An inertial sensor (IMU) measures its own angular velocity and acceleration and can provide an initial pose estimate when visual information is unreliable, making it complementary to the camera. This thesis therefore fuses the data of an RGB-D camera and an inertial sensor for pose estimation and mapping of indoor mobile robots.

In data processing, the feature extraction method is improved: the number of extracted feature points is adjusted adaptively according to the magnitude of the image gray gradient, which improves robustness in texture-poor scenes, and a quadtree is used to distribute the feature points evenly over the image, making full use of the image information. A visual re-projection error is then constructed from the matched feature points. In data fusion, an IMU pre-integration measurement model between adjacent key frames is built by converting the IMU data between the two frames into a single relative measurement constraint, and the pose of the mobile robot is obtained by jointly optimizing the IMU pre-integration measurement error and the visual re-projection error. During pose estimation and optimization, a sliding window is introduced to limit the scale of the problem: only the robot poses inside the window are optimized, which speeds up computation and reduces the demands on the processor. Loop detection then yields a globally consistent optimized pose, reducing the accumulated error.

In mapping, the 3D point cloud map is segmented into ground and non-ground areas to distinguish the passable area from the obstacle area in the scene. Since the mobile robot moves only on the ground, the point cloud map is then converted into a two-dimensional navigable map to facilitate navigation. The experimental results demonstrate the feasibility of the proposed algorithm.
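The quadtree step described above — subdividing the image so that each occupied cell keeps only its strongest feature point — can be sketched as follows. This is an illustrative reconstruction, not the thesis's actual implementation; the function name `distribute_quadtree`, its parameters, and the termination guards are assumptions.

```python
import numpy as np

def distribute_quadtree(keypoints, responses, bounds, target):
    """Spread keypoints evenly over the image via quadtree subdivision.

    keypoints : (N, 2) array of (x, y) pixel coordinates
    responses : (N,) detector response scores
    bounds    : (x_min, y_min, x_max, y_max) of the image region
    target    : desired number of keypoints to keep
    """
    nodes = [(bounds, np.arange(len(keypoints)))]
    # Repeatedly split the most populated node until enough cells exist.
    while len(nodes) < target:
        nodes.sort(key=lambda n: len(n[1]), reverse=True)
        (x0, y0, x1, y1), idx = nodes.pop(0)
        if len(idx) <= 1 or (x1 - x0) < 1e-6:   # cannot split further
            nodes.append(((x0, y0, x1, y1), idx))
            break
        cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
        pts = keypoints[idx]
        for qx0, qy0, qx1, qy1 in ((x0, y0, cx, cy), (cx, y0, x1, cy),
                                   (x0, cy, cx, y1), (cx, cy, x1, y1)):
            mask = ((pts[:, 0] >= qx0) & (pts[:, 0] < qx1) &
                    (pts[:, 1] >= qy0) & (pts[:, 1] < qy1))
            if mask.any():                       # keep only occupied quadrants
                nodes.append(((qx0, qy0, qx1, qy1), idx[mask]))
    # In each cell, retain the keypoint with the highest response.
    keep = [idx[np.argmax(responses[idx])] for _, idx in nodes if len(idx)]
    return keypoints[keep[:target]]
```

Because each cell contributes at most one point, clusters of strong responses in textured regions no longer crowd out features elsewhere in the image.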
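The final ground-segmentation and projection step can likewise be sketched. Assuming the point cloud has already been aligned so the floor lies at z = 0, a minimal version — in which the function name, the height thresholds, and the cell encoding are all illustrative assumptions, not the thesis's method — is:

```python
import numpy as np

def pointcloud_to_gridmap(points, resolution=0.05, ground_tol=0.05,
                          robot_height=0.5):
    """Project a ground-aligned 3-D point cloud to a 2-D navigable grid.

    points : (N, 3) array with the z axis perpendicular to the floor (z = 0).
    Returns (grid, origin) where grid is uint8:
    0 = unknown, 1 = passable ground, 2 = obstacle.
    """
    xy, z = points[:, :2], points[:, 2]
    origin = xy.min(axis=0)
    cells = np.floor((xy - origin) / resolution).astype(int)
    grid = np.zeros(cells.max(axis=0) + 1, dtype=np.uint8)

    ground = np.abs(z) <= ground_tol                    # passable floor
    obstacle = (z > ground_tol) & (z <= robot_height)   # blocks the robot
    # Points above robot_height (e.g. the ceiling) are ignored, since the
    # robot moves only on the ground. Obstacles overwrite ground cells.
    grid[cells[ground, 0], cells[ground, 1]] = 1
    grid[cells[obstacle, 0], cells[obstacle, 1]] = 2
    return grid, origin
```

The resulting grid directly separates the passable area (ground cells) from the obstacle area, in the form a 2-D path planner expects.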
Keywords/Search Tags: RGB-D Camera, Inertial Sensor, IMU Preintegration, Graph Optimization, Mapping