Simultaneous Localization and Mapping (SLAM) is a technology that enables a mobile robot to estimate its own pose and build a map of the surrounding environment using on-board sensors in an unknown environment. It is the foundation for mobile robots to perform other tasks and therefore has significant research value and practical importance. In prior research and applications, cameras and LiDAR have been the two most commonly used sensors in SLAM systems. However, SLAM schemes that rely on a LiDAR or a camera alone have limitations. Since LiDAR and cameras can complement each other, this paper focuses on SLAM technology that integrates LiDAR and vision, constructing a fused LiDAR-visual SLAM system that leverages the complementary advantages of the two sensors to improve positioning accuracy and mapping consistency in challenging scenarios. The main contributions of this paper are summarized as follows:

To address the inaccurate pose estimation caused by degraded point cloud registration in pure LiDAR SLAM solutions in indoor structured scenes, this paper proposes a tightly coupled LiDAR-Visual Odometry (LVO) system that fuses LiDAR and vision by introducing visual constraints and building a unified cost function. The proposed LVO system uses visual information to correct the inaccurate pose estimates that pure LiDAR SLAM produces when point cloud registration degrades in long-corridor environments. Experimental results demonstrate that the LVO fusion algorithm effectively overcomes this registration degradation and obtains more accurate pose estimates.

We investigated how the visual reprojection error changes when point cloud registration degrades, and demonstrated through theoretical and experimental analysis that poses estimated during degraded point cloud frame registration yield significantly larger
visual reprojection errors. This indicates that the visual reprojection error reflects, to some extent, the degree of point cloud registration degradation. Building on this, we designed a dynamic weight allocation method for the laser and visual terms, in which the weights are determined jointly by the visual reprojection error and the number of visual feature points. On a public dataset, the designed LVO system reduced the relative position error by 52% compared with a pure laser SLAM system on the KITTI 02 sequence, which contains long-corridor scenes.

We also conducted experiments with the LVO algorithm on a self-collected indoor dataset. The results show that, compared with the pure laser SLAM scheme, LVO effectively overcomes the inaccurate pose estimation and poor mapping consistency caused by point cloud registration degradation in scenes with sparse structural features. In long-corridor scenes with few structural features, LVO achieves better localization accuracy and mapping consistency than the pure laser SLAM scheme, validating the effectiveness of the LVO algorithm.
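The degradation cue and dynamic weighting described above can be sketched as follows. This is a minimal illustration, not the paper's actual formulation: the saturation thresholds `err0` and `n0`, the function names, and the specific weighting form are all hypothetical assumptions. The only source-backed ideas are that a large reprojection error under the LiDAR-estimated pose signals registration degradation, and that the visual weight depends on both the reprojection error and the feature count.

```python
import numpy as np

def reprojection_error(K, T_cw, pts_3d, pts_2d):
    """Mean reprojection error (pixels) of 3D landmarks under pose T_cw.

    K: 3x3 camera intrinsics; T_cw: 4x4 world-to-camera transform;
    pts_3d: (N, 3) world points; pts_2d: (N, 2) observed pixel locations.
    """
    pts_h = np.hstack([pts_3d, np.ones((len(pts_3d), 1))])  # homogeneous coords
    cam = (T_cw @ pts_h.T).T[:, :3]                          # points in camera frame
    proj = (K @ cam.T).T
    proj = proj[:, :2] / proj[:, 2:3]                        # pinhole projection
    return float(np.mean(np.linalg.norm(proj - pts_2d, axis=1)))

def visual_weight(err_px, n_feats, err0=2.0, n0=50):
    """Hypothetical dynamic visual weight in [0, 1].

    Grows with the reprojection error (a large error under the LiDAR pose
    suggests registration degradation, so vision should be trusted more)
    and with the number of tracked features (more features -> more reliable
    visual constraints). The laser term then receives weight 1 - w.
    """
    w_err = min(err_px / err0, 1.0)   # saturate at err0 pixels (assumed threshold)
    w_n = min(n_feats / n0, 1.0)      # saturate at n0 features (assumed threshold)
    return w_err * w_n
```

In the actual tightly coupled system, such weights would scale the visual and LiDAR residual blocks inside the unified cost function before joint optimization; the sketch only shows how the two cues could be combined into a single scalar.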