At present, unmanned systems such as drones and mobile robots with autonomous navigation capabilities are being applied in a growing range of fields. Among the key enabling technologies, Simultaneous Localization and Mapping (SLAM) is the foundation and core of autonomous navigation for unmanned systems. Visual SLAM estimates the pose of an unmanned system in an unknown environment and builds a map of that environment from the image sequence captured by a vision sensor. However, existing visual SLAM methods suffer from poor robustness and a limited range of applicable scenarios: positioning accuracy degrades under fast motion and in environments with sparse features, which makes it difficult to meet the needs of autonomous navigation. To address these problems, this thesis studies a SLAM method for unmanned systems that fuses visual and inertial information to improve positioning accuracy and robustness. The main research contents and innovations are as follows.

First, to address the unstable iteration counts and low efficiency of existing visual SLAM methods when eliminating mismatched feature points, this thesis proposes a visual SLAM method that removes mismatches with the Progressive Sample Consensus (PROSAC) algorithm. ORB features are extracted from the image sequence and matched; PROSAC then ranks the matched feature points by quality, estimates the data model from the highest-quality points, rejects the outliers that do not fit the model, and retains the correct feature points. Based on the filtered matches, the pose of the unmanned system is estimated, realizing the positioning function, and the environment map is updated at the same time. Experimental results show that the PROSAC algorithm significantly improves computational efficiency, leaving a time budget for the subsequent fusion of visual and inertial information.

Second, to address the low positioning accuracy and poor robustness of purely visual SLAM under fast motion and sparse environmental features, this thesis proposes a SLAM method that fuses visual and inertial information. The IMU measurements between two adjacent keyframes are pre-integrated to provide IMU constraints for the subsequent visual-inertial fusion; in particular, to improve the accuracy of the pre-integration stage, the fourth-order Runge-Kutta algorithm is used to iteratively compute the pre-integrated value at the next time step. The IMU pre-integration measurement then serves as the initial estimate of the system pose at the current frame, and the pose of the unmanned system is optimized by jointly minimizing the visual reprojection error term and the IMU pre-integration error term while recovering the three-dimensional coordinates of the map points. Finally, a sliding window is maintained over the local map to further optimize the system poses and map-point positions inside the window.

The accuracy and robustness of the proposed SLAM method are verified on the EuRoC dataset. Experimental results show that, compared with a purely visual SLAM system and existing visual-inertial fusion SLAM systems, the proposed system achieves higher positioning accuracy. In addition, its robustness is improved to some extent under conditions such as rapid motion and a lack of environmental features.
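
As a concrete illustration of the mismatch-elimination step, the sketch below matches ORB features between two frames and rejects outliers with PROSAC. It is a minimal sketch, not the thesis implementation: it assumes OpenCV 4.5 or later (which exposes the USAC_PROSAC flag for findFundamentalMat), and the image file names are placeholders.

```python
# Minimal sketch: ORB matching followed by PROSAC-based outlier rejection.
# Assumes OpenCV >= 4.5 (USAC_PROSAC flag); "frame0.png"/"frame1.png"
# are placeholder inputs.
import cv2
import numpy as np

img0 = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)
img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)
kp0, des0 = orb.detectAndCompute(img0, None)
kp1, des1 = orb.detectAndCompute(img1, None)

# Hamming distance for binary ORB descriptors; crossCheck keeps only
# mutually consistent matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des0, des1), key=lambda m: m.distance)

pts0 = np.float32([kp0[m.queryIdx].pt for m in matches])
pts1 = np.float32([kp1[m.trainIdx].pt for m in matches])

# PROSAC draws its hypothesis sets from the best-ranked matches first
# (here: sorted by descriptor distance), so the correspondences must be
# passed in quality order.
F, inlier_mask = cv2.findFundamentalMat(pts0, pts1, cv2.USAC_PROSAC,
                                        ransacReprojThreshold=1.0,
                                        confidence=0.999)
inliers = [m for m, keep in zip(matches, inlier_mask.ravel()) if keep]
```

Because PROSAC exploits the quality ranking instead of sampling uniformly, the number of iterations needed to find an all-inlier sample is far less sensitive to the outlier ratio than in plain RANSAC, which is the source of the efficiency gain the abstract refers to.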
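
To make the fourth-order Runge-Kutta pre-integration step concrete, the following sketch propagates a single IMU state (position, velocity, orientation quaternion) through one classical RK4 step. It is an illustrative simplification under stated assumptions: the gyroscope and accelerometer samples are treated as constant over the step, and bias, noise, and gravity terms are omitted.

```python
# Minimal sketch of one RK4 step of IMU state propagation, illustrating
# the fourth-order integration used in the pre-integration stage.
# State layout: [p (3), v (3), q (4, Hamilton convention w,x,y,z)].
# Biases, noise, and gravity are omitted for brevity.
import numpy as np

def quat_mul(a, b):
    """Hamilton product of quaternions [w, x, y, z]."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def quat_rotate(q, v):
    """Rotate vector v by unit quaternion q."""
    qv = np.concatenate(([0.0], v))
    q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
    return quat_mul(quat_mul(q, qv), q_conj)[1:]

def deriv(state, gyro, accel):
    """Continuous-time kinematics: dp = v, dv = R(q) a, dq = 0.5 q*[0, w]."""
    v, q = state[3:6], state[6:10]
    dq = 0.5 * quat_mul(q, np.concatenate(([0.0], gyro)))
    dv = quat_rotate(q, accel)
    return np.concatenate((v, dv, dq))

def rk4_step(state, gyro, accel, dt):
    """One classical fourth-order Runge-Kutta step over interval dt."""
    k1 = deriv(state, gyro, accel)
    k2 = deriv(state + 0.5 * dt * k1, gyro, accel)
    k3 = deriv(state + 0.5 * dt * k2, gyro, accel)
    k4 = deriv(state + dt * k3, gyro, accel)
    new = state + dt / 6.0 * (k1 + 2.0*k2 + 2.0*k3 + k4)
    new[6:10] /= np.linalg.norm(new[6:10])  # re-normalize quaternion
    return new
```

Compared with the first-order (Euler) update commonly used in pre-integration, the RK4 update evaluates the kinematics at four points inside the interval, which is what motivates its use here for higher accuracy between keyframes.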
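
The tightly coupled optimization described above minimizes a joint cost over the sliding window. The sketch below shows hypothetical residual functions for the two error terms; the variable layout, the weights, and the omission of rotation and bias residuals are illustrative assumptions rather than the thesis formulation.

```python
# Minimal sketch of the joint visual-inertial objective over the sliding
# window. Residual definitions, weights, and names are illustrative
# assumptions; full pre-integration also carries rotation/bias residuals.
import numpy as np

def reprojection_error(K, R_cw, t_cw, point_w, observed_uv):
    """Pixel residual between a projected map point and its observation.
    K: 3x3 intrinsics; (R_cw, t_cw): world-to-camera pose."""
    p_cam = R_cw @ point_w + t_cw
    uv_hom = K @ p_cam
    return uv_hom[:2] / uv_hom[2] - observed_uv

def imu_preintegration_residual(R_i, p_i, v_i, p_j, v_j,
                                dp_meas, dv_meas, g, dt):
    """Position/velocity residual between the motion predicted from the
    states at keyframes i, j and the pre-integrated measurements."""
    r_p = R_i.T @ (p_j - p_i - v_i * dt - 0.5 * g * dt**2) - dp_meas
    r_v = R_i.T @ (v_j - v_i - g * dt) - dv_meas
    return np.concatenate((r_p, r_v))

def window_cost(visual_residuals, imu_residuals, w_vis=1.0, w_imu=1.0):
    """Weighted least-squares cost the sliding-window optimizer minimizes."""
    return (w_vis * sum(float(r @ r) for r in visual_residuals)
            + w_imu * sum(float(r @ r) for r in imu_residuals))
```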