
Research On SLAM Algorithm Based On Vision And IMU Fusion

Posted on: 2024-05-05    Degree: Master    Type: Thesis
Country: China    Candidate: T Li    Full Text: PDF
GTID: 2568307157971479    Subject: Mechanical engineering (degree)
Abstract/Summary:
In recent years, mobile robot technology has developed rapidly, and an increasing number of robots have appeared in people's daily lives. Simultaneous Localization and Mapping (SLAM) is a key technology in the field of mobile robots: a robot uses the sensors mounted on its own carrier to localize itself in real time while building a map of the surrounding environment as it moves. With the continuous development of computer vision algorithms, vision-based SLAM methods have become a research hotspot in recent years, replacing LiDAR-based SLAM methods. However, pure visual SLAM is prone to tracking failure under fast motion and in texture-less environments. An Inertial Measurement Unit (IMU) can provide high-precision positioning data over short periods and during fast motion, and is clearly complementary to vision. Therefore, this thesis proposes a SLAM scheme based on the fusion of vision and IMU. The main content includes the following parts:

First, a visual SLAM algorithm that combines the feature point method and the direct method is proposed on the basis of the ORB-SLAM2 framework. The front-end tracking thread of ORB-SLAM2 is modified and a specific tracking process is given, abandoning ORB-SLAM2's operation of extracting ORB feature points for every frame of the image. For each image, time, space, and error conditions determine whether the direct method or the feature point method is used for pose estimation, as illustrated by the sketch following this abstract. Finally, the proposed algorithm is compared experimentally with ORB-SLAM2 on a public dataset. The absolute trajectory error of the improved visual SLAM algorithm is about 14% higher than that of ORB-SLAM2, and the relative trajectory error is about 12% lower. The robustness of the system is improved and the running speed is increased by approximately 2.5 times, while the positioning accuracy is roughly equivalent to that of ORB-SLAM2.

Second, a visual-inertial SLAM algorithm is proposed by fusing an IMU into the above visual SLAM framework. Visual-inertial joint initialization is performed first, followed by back-end optimization of the visual-inertial system. Sliding-window marginalization is used to convert the old keyframes and IMU information that are to be removed into prior information, which is added back to the window. A positioning-accuracy comparison is conducted between the proposed visual-inertial SLAM algorithm and two mainstream VI-SLAM algorithms, OKVIS and VINS-Mono. The absolute trajectory error of the proposed algorithm is reduced by about 59% compared with VINS-Mono and about 78% compared with OKVIS, and the relative trajectory error is reduced by about 20% compared with VINS-Mono and about 64% compared with OKVIS. The results indicate that the positioning accuracy of the proposed visual-inertial SLAM algorithm is better than that of the other two mainstream algorithms.

Finally, a mapping system based on ROS is proposed. Using ROS, the system is divided into three nodes: a data driver node, a pose estimation node, and a dense mapping node. In real scenes, a dense point cloud map with rich visual depth information and an octree map that can be used for robot navigation and obstacle avoidance are constructed respectively.
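The per-frame choice between the direct method and the feature point method described in the first part can be summarized as a simple decision rule. The following Python sketch is purely illustrative: the function name choose_tracking_method, the threshold values, and the input quantities are assumptions made here for clarity and are not taken from the thesis.

```python
# Hypothetical sketch of a time/space/error-gated choice between direct and
# feature-point tracking. All names and thresholds are illustrative assumptions,
# not the thesis's actual implementation.

def choose_tracking_method(dt_since_keyframe, parallax_px, photometric_error,
                           max_dt=0.5, max_parallax=30.0, max_error=50.0):
    """Return 'direct' or 'feature' for the current frame.

    dt_since_keyframe -- seconds since the reference keyframe (time condition)
    parallax_px       -- mean pixel parallax w.r.t. the keyframe (space condition)
    photometric_error -- residual of a trial direct alignment (error condition)
    """
    if (dt_since_keyframe < max_dt
            and parallax_px < max_parallax
            and photometric_error < max_error):
        # All conditions hold: skip ORB extraction and align on image intensities.
        return "direct"
    # Otherwise extract ORB features, match them, and estimate the pose.
    return "feature"


if __name__ == "__main__":
    print(choose_tracking_method(0.1, 12.0, 18.0))   # small motion, low error -> 'direct'
    print(choose_tracking_method(0.8, 45.0, 90.0))   # long gap, large motion -> 'feature'
```

The point of such a gate is that the cheap direct alignment handles the common small-baseline case, while the more expensive but more robust feature-point pipeline is invoked only when the photometric assumptions break down, which is consistent with the reported speed-up at roughly unchanged accuracy.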
Keywords/Search Tags: Mobile robot, Simultaneous localization and mapping, Visual-inertial fusion, Feature point method, Direct method