In recent years, with the continuous development of simultaneous localization and mapping (SLAM) technology, intelligent positioning applications such as VR and autonomous driving have entered everyday life, and SLAM has attracted increasing attention from researchers. Vision provides rich texture information, lidar provides accurate depth measurements, and inertial sensors provide accurate attitude information, so a SLAM algorithm that fuses inertial, visual, and lidar data can greatly improve positioning accuracy. However, the errors of multi-sensor fusion SLAM vary in time-varying dynamic scenes, and how to weight those errors to achieve optimal fusion remains a difficult problem. At the same time, when a SLAM system maps a dynamic environment, the shadow artifacts left by moving objects hinder good real-time positioning and navigation performance and pollute the subsequent map, so shadow removal and map reconstruction are also major difficulties for SLAM algorithms.

To address these problems, and to improve the positioning accuracy of an inertial/visual/lidar fusion SLAM system in specific experimental scenes while avoiding the influence of shadows caused by object motion on its pose estimation accuracy and map cleanliness, the research content of this thesis is as follows:

(1) To improve positioning in the inertial/lidar/visual SLAM system, a variance-covariance estimation method is proposed to optimize the system. The position estimates computed by the inertial/lidar subsystem and by the inertial/visual subsystem are fused with the Helmert variance component estimation method, improving the position estimation of the whole system by about 10%.

(2) A method based on residual images is proposed for high-speed moving objects that affect the pose estimation and map cleanliness of the inertial/lidar/visual SLAM system during operation. Consecutive lidar point cloud frames are transformed into a common frame and a residual image is computed, which is fed into an existing semantic segmentation convolutional network so that the system can reject the point clouds of moving objects in real time. This improves the recognition accuracy for fast-moving objects, reduces their influence on pose estimation, and improves the robustness of the system.

(3) A non-real-time map recovery method is proposed for slow-moving objects. By comparing map frames before and after, each point is first judged to be dynamic or static, and the map is then updated at progressively reduced resolution, completing the detection of slow-moving objects, the removal of the shadows they generate, and the reconstruction of the map.

(4) To verify the robustness and accuracy of the proposed algorithms in real scenes, the Liren Building and the Innovation Experiment Center on campus were localized and mapped, and the shadows arising during mapping were removed. In the final experiments, the trajectory error between the start and end points in the X, Y, and Z directions was reduced, and the IoU of dynamic object recognition increased by 38%, demonstrating that the proposed algorithms are accurate and effective in real scenes.

This study effectively improves the accuracy of the SLAM system, enabling more accurate positioning and a more faithful reflection of the real surrounding environment, and better meets the requirements of SLAM systems for positioning accuracy and map cleanliness.
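The Helmert variance component estimation in contribution (1) can be illustrated with a minimal sketch: two subsystems each contribute a group of direct position observations with an initial covariance, and the unit-weight variance of each group is iteratively re-estimated from its residuals and used to rescale that group's weight before re-fusing. The function name, the direct-observation model (design matrix = identity), and the simplified redundancy formula are assumptions for illustration, not the thesis implementation.

```python
import numpy as np

def helmert_fuse(obs_a, cov_a, obs_b, cov_b, iters=10):
    """Fuse two groups of direct 3-D position observations by a
    simplified Helmert variance component estimation.
    obs_a, obs_b: (n, 3) position observations from the two subsystems.
    cov_a, cov_b: initial 3x3 observation covariances per group."""
    Pa = np.linalg.inv(cov_a)          # initial weight matrices
    Pb = np.linalg.inv(cov_b)
    for _ in range(iters):
        na, nb = len(obs_a), len(obs_b)
        # normal equations with A_i = I per observation
        Na, Nb = na * Pa, nb * Pb
        N = Na + Nb
        u = Pa @ obs_a.sum(axis=0) + Pb @ obs_b.sum(axis=0)
        x = np.linalg.solve(N, u)      # fused position estimate
        va = obs_a - x                 # per-group residuals
        vb = obs_b - x
        # redundancy contributions r_i = 3*n_i - tr(N^-1 N_i)
        Ninv = np.linalg.inv(N)
        ra = 3 * na - np.trace(Ninv @ Na)
        rb = 3 * nb - np.trace(Ninv @ Nb)
        # unit-weight variance estimate of each group: v^T P v / r
        sa = np.einsum('ij,jk,ik->', va, Pa, va) / ra
        sb = np.einsum('ij,jk,ik->', vb, Pb, vb) / rb
        # rescale weights so both groups approach unit variance
        Pa, Pb = Pa / sa, Pb / sb
        if abs(sa - sb) < 1e-6:        # variance components agree
            break
    return x
```

With equal initial covariances, the iteration automatically downweights the noisier subsystem, which is the practical point of Helmert VCE in multi-sensor fusion.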
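The residual image in contribution (2) can be sketched as a per-pixel range difference between the current lidar scan and the previous scan re-projected into the current sensor frame, the usual input to range-image-based moving-object segmentation networks. The spherical projection parameters, function names, and the assumption that the previous scan is already transformed into the current frame are all illustrative, not taken from the thesis.

```python
import numpy as np

def range_image(points, h=32, w=360, fov_up=15.0, fov_down=-15.0):
    """Project an (n, 3) point cloud into an h x w range image by
    spherical projection; pixels with no return stay 0."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)
    pitch = np.arcsin(np.clip(z / np.maximum(r, 1e-9), -1.0, 1.0))
    u = ((0.5 * (1.0 - yaw / np.pi)) * w).astype(int) % w
    fov = np.radians(fov_up - fov_down)
    v = np.clip(((np.radians(fov_up) - pitch) / fov * h).astype(int), 0, h - 1)
    img = np.zeros((h, w))
    img[v, u] = r
    return img

def residual_image(curr_pts, prev_pts_in_curr_frame, eps=1e-9):
    """Normalized range residual |r_prev - r_curr| / r_curr where both
    scans have returns; large values indicate likely moving objects."""
    cur = range_image(curr_pts)
    prev = range_image(prev_pts_in_curr_frame)
    valid = (cur > 0) & (prev > 0)
    res = np.zeros_like(cur)
    res[valid] = np.abs(prev[valid] - cur[valid]) / (cur[valid] + eps)
    return res
```

A static scene yields a near-zero residual image, while a point whose range changes between scans produces a strong response at its pixel; that response map, stacked with the range image, is what a segmentation network can consume.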