Mobile robots are widely used today, and demand for "non-contact" robots has grown further since the outbreak of the epidemic. SLAM (Simultaneous Localization and Mapping) is the fundamental technology by which a robot localizes itself and builds a map, and it is the prerequisite for autonomous robot motion. SLAM algorithms that rely on a single sensor suffer from poor robustness and low positioning accuracy, and they drift when the mobile robot operates for a long time, so long-term stable operation in varied environments cannot be guaranteed. To improve the robustness and accuracy of the algorithm, this work studies a SLAM framework that fuses images with laser point clouds: building on the visual SLAM and laser SLAM algorithms, laser point cloud data is used to enhance visual SLAM for localization and mapping. The specific research contents are as follows:

(1) A joint calibration method based on nonlinear optimization is proposed for the extrinsic calibration of the camera and the lidar. First, the intrinsic parameters of the monocular camera are obtained with a checkerboard calibration board, and the features of the board are then detected simultaneously in the point cloud and in the image. An objective function is constructed from the error of projecting the feature points detected in the point cloud onto the image, which converts the extrinsic calibration into a least-squares problem. Finally, the optimal extrinsic parameters are solved iteratively with a Levenberg-Marquardt nonlinear optimization algorithm.

(2) Vision-based SLAM algorithms are studied in depth. First, the modules of a visual SLAM system and the working principle of each module are described. The front end extracts features from images, uses the RANSAC algorithm to eliminate false matches during feature matching, and estimates camera motion according to the three feature-point matching cases (2D-2D, 3D-2D, and 3D-3D). The back-end optimization strategy is then described: the objective function is constructed from the reprojection error, the camera pose is optimized with a nonlinear least-squares method, and keyframes are selected. For loop-closure detection, we study how to construct a bag-of-words model and compute image similarity to identify loops. Finally, the SLAM algorithm is run on the TUM dataset to verify the optimization and loop-closure effects.

(3) To address the problems of motion estimation in visual SLAM, an improved scheme that incorporates the lidar is proposed. The laser point cloud supplies additional depth information for image features, assisting the monocular camera in motion estimation. In inter-frame pose estimation, the epipolar error is introduced, and each feature's residual is computed according to whether the feature point has depth information. The objective function is rebuilt from these feature-point residuals and the pose is solved iteratively, which strengthens the stability of the SLAM front end. Second, to reduce the complexity of the optimization problem, a keyframe-based method is used to optimize the pose locally. Finally, experiments are carried out on the KITTI dataset and in outdoor scenes. Compared with the purely visual SLAM algorithm, the trajectory error of the mobile robot is reduced by 52.7%. The experimental results show that the improved algorithm adapts well to different environments and achieves high-precision pose estimation and dense mapping.

There are 40 figures, 7 tables, and 72 references in this paper.
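
The joint calibration in (1) reduces to a 6-DoF least-squares problem solved with Levenberg-Marquardt. Below is a minimal sketch of that formulation, assuming the board feature points have already been extracted from the point cloud (`pts_lidar`, in lidar coordinates) and from the image (`pts_img`, in pixels); the function names and the axis-angle parametrization are illustrative, not the thesis's actual implementation.

```python
import numpy as np
import cv2
from scipy.optimize import least_squares

def reproj_residuals(xi, pts_lidar, pts_img, K):
    # xi = [rx, ry, rz, tx, ty, tz]: axis-angle rotation + translation
    R, _ = cv2.Rodrigues(xi[:3].reshape(3, 1))
    p_cam = pts_lidar @ R.T + xi[3:]   # lidar points into the camera frame
    uv = p_cam @ K.T                   # apply intrinsics K from checkerboard calibration
    uv = uv[:, :2] / uv[:, 2:3]        # perspective division -> pixel coordinates
    return (uv - pts_img).ravel()      # stacked projection errors

def calibrate_extrinsics(pts_lidar, pts_img, K, xi0=None):
    # Levenberg-Marquardt iteratively refines the 6-DoF extrinsics
    xi0 = np.zeros(6) if xi0 is None else xi0
    sol = least_squares(reproj_residuals, xi0, method='lm',
                        args=(pts_lidar, pts_img, K))
    R, _ = cv2.Rodrigues(sol.x[:3].reshape(3, 1))
    return R, sol.x[3:]                # rotation matrix and translation vector
```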
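For the 2D-2D case of the front end in (2), the following sketch shows feature matching with RANSAC outlier rejection and essential-matrix motion recovery using standard OpenCV calls. ORB is an assumed feature choice; the abstract does not name the detector.

```python
import numpy as np
import cv2

def estimate_motion(img1, img2, K):
    # Detect and describe features in both frames
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    # RANSAC rejects false matches while fitting the essential matrix
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    # Decompose E into relative rotation R and unit-scale translation t
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t
```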
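Loop-closure detection in (2) scores image similarity over a bag-of-words model. The sketch below is a simplified stand-in for DBoW-style scoring, assuming a visual vocabulary `vocab` of float descriptor cluster centers and precomputed per-word IDF weights; real systems typically use hierarchical vocabularies and binary descriptors.

```python
import numpy as np

def bow_vector(des, vocab, idf):
    # Quantize each descriptor to its nearest visual word, then TF-IDF weight
    dists = np.linalg.norm(des[:, None, :] - vocab[None, :, :], axis=2)
    hist = np.bincount(dists.argmin(axis=1), minlength=len(vocab))
    v = hist * idf
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def similarity(va, vb):
    # Cosine similarity between normalized BoW vectors; a high score
    # between non-adjacent frames flags a loop-closure candidate
    return float(va @ vb)
```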
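The improved front end in (3) mixes two residual types depending on whether a feature has lidar depth. The sketch below shows one plausible formulation of that rule (a reprojection residual when depth is available, an epipolar residual otherwise); the exact weighting and parametrization in the thesis may differ. `p_ref` and `p_cur` are matched pixel coordinates, and `(R, t)` maps reference-frame points into the current frame.

```python
import numpy as np

def skew(t):
    # Cross-product matrix [t]x used in the essential matrix E = [t]x R
    return np.array([[0, -t[2], t[1]],
                     [t[2], 0, -t[0]],
                     [-t[1], t[0], 0]])

def feature_residual(p_ref, p_cur, depth, R, t, K):
    Kinv = np.linalg.inv(K)
    x_ref = Kinv @ np.array([p_ref[0], p_ref[1], 1.0])  # normalized ray, reference frame
    x_cur = Kinv @ np.array([p_cur[0], p_cur[1], 1.0])  # normalized ray, current frame
    if depth is not None:
        # With lidar depth: back-project, transform by (R, t), reproject (3D-2D)
        q = K @ (R @ (depth * x_ref) + t)
        return q[:2] / q[2] - p_cur                     # reprojection residual
    # Without depth: epipolar error x_cur^T [t]x R x_ref (2D-2D)
    return np.array([x_cur @ skew(t) @ R @ x_ref])
```

Stacking these residuals over all matched features yields the objective that is then minimized iteratively for the inter-frame pose.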