
Research On SLAM Based On Fusion Of Lidar Point Cloud And Visual Image

Posted on: 2023-07-27
Degree: Master
Type: Thesis
Country: China
Candidate: Y C Gong
Full Text: PDF
GTID: 2568307178478874
Subject: Engineering
Abstract/Summary:
Simultaneous Localization and Mapping (SLAM) enables a mobile robot to determine its own position and orientation within a map while simultaneously building that map. SLAM is one of the key enabling technologies for mobile robots, whether applied in daily life or in industrial production environments, and has received extensive attention in both academic research and industry. In recent years, advances in hardware have accelerated development in this field and produced remarkable progress, yet many problems in SLAM-based applications remain open.

Since the 1980s, mobile robots have relied on odometers, Inertial Measurement Units (IMU), Light Detection and Ranging (LiDAR), cameras, and other sensors to improve self-localization and mapping accuracy. However, SLAM algorithms that rely on a single sensor still suffer from many problems. Laser SLAM offers high measurement accuracy, wide range, fast speed, and strong anti-interference ability, and can quickly and accurately capture the geometric information of the surroundings in ordinary environments. However, the point cloud acquired by LiDAR lacks the color and texture of the environment, and laser beams easily pass through transparent glass without returning valid point cloud data, so the localization accuracy and mapping quality of laser SLAM are vulnerable to the geometry and materials of the environment. Visual SLAM typically extracts feature points from images with algorithms such as ORB (Oriented FAST and Rotated BRIEF) and triangulation, and recovers rotation and translation through feature matching, but camera imaging is vulnerable to ambient lighting conditions.

To address these problems and challenges, this thesis studies the fusion of laser point clouds and visual images to realize multi-sensor-fusion SLAM for mobile robots, avoiding the drawback that the localization accuracy and mapping quality of single-sensor SLAM are easily affected by the environment. The main research contents and achievements are as follows:

(1) To address the problem that registering laser point clouds with visual images requires joint calibration in advance and is sensitive to the mechanical structure, this thesis proposes a registration algorithm for laser point clouds and panoramic images based on grayscale similarity, achieving automatic registration between the two modalities. The multi-line LiDAR point cloud is projected onto a two-dimensional plane through a cylindrical coordinate transformation to form a point cloud depth map. The method divides the depth map and the panoramic image into pixel-block regions of equal horizontal and vertical size and cyclically compares the gray values of corresponding regions; the region alignment with the smallest variance of gray-value ratios is taken as the registration result for reconstructing the multi-line LiDAR point cloud and the panoramic image. Experimental results show that the grayscale-similarity registration algorithm registers the heterogeneous information of the multimodal sensors well without manual intervention, with a registration error within two pixels.

(2) To address the pose drift that traditional visual SLAM accumulates over long runs, this thesis combines laser point clouds with visual images. A traditional laser SLAM algorithm builds a laser prior map, and plane segmentation extracts line geometric features from it. Then, on top of the standard VINS (Visual-Inertial Navigation System) VIO framework, line features are extracted from images online and matched against the laser prior map to add global map constraints. The registration results are added to the sliding window as global constraints to update the estimated state in real time, suppressing the pose drift that tends to occur when a SLAM algorithm runs for a long time and reducing the accumulated error.

(3) To address the low reusability and high cost of laser-prior-map-aided localization, this thesis proposes a visual-prior-map-aided localization algorithm based on geometric structure, which corrects the pose online while the algorithm runs. The method first constructs a visual prior map with geometric structure by adding a line-feature extraction framework to a traditional visual SLAM algorithm and filtering out stable line features through the combination of point and line features; this facilitates the later addition of global map constraints and broadens the applicability of the algorithm. Subsequently, the depth values of spatial lines in the visual prior map serve as constraints to filter correct matches between lines in the real-time camera image and lines in the prior map. Experimental results show that the geometric-structure-based visual-prior-map-aided localization method improves the accuracy of initial camera pose estimation without reducing localization accuracy.

For the problems above and the improved algorithms proposed in this thesis, experiments are conducted both in simulation on open datasets and in real scenes. The results show that the proposed theoretical basis and algorithm framework are feasible and effective. The research results help improve the accuracy of autonomous localization and mapping of mobile robots, improve the robustness, localization accuracy, and mapping quality of multi-sensor-fusion SLAM, and promote its practical application in complex real environments.
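The cylindrical projection and block-wise grayscale comparison described in contribution (1) can be sketched roughly as follows. This is a minimal NumPy illustration, not the thesis implementation: the image dimensions, block size, vertical field of view, and the restriction to horizontal (azimuth) shifts are all simplifying assumptions, and the function names are hypothetical.

```python
import numpy as np

def project_to_cylindrical(points, intensities, h=64, w=1024,
                           fov_up=np.deg2rad(15.0), fov_down=np.deg2rad(-15.0)):
    """Project 3-D LiDAR points onto a 2-D cylindrical image.

    Rows index elevation (laser rings), columns index azimuth; each pixel
    stores the point's intensity as a gray value. fov_up/fov_down are
    assumed sensor limits, not values from the thesis.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)                                  # azimuth in [-pi, pi]
    pitch = np.arcsin(np.clip(z / np.maximum(r, 1e-9), -1.0, 1.0))
    u = ((yaw + np.pi) / (2 * np.pi) * w).astype(int) % w   # column index
    v = np.clip((fov_up - pitch) / (fov_up - fov_down) * h, 0, h - 1).astype(int)
    img = np.zeros((h, w))
    img[v, u] = intensities
    return img

def best_horizontal_alignment(depth_img, pano_gray, block=32):
    """Find the column shift of the panorama that best matches the depth
    image, by minimizing the variance of block-wise gray-value ratios."""
    h, w = depth_img.shape
    best_shift, best_var = 0, np.inf
    for shift in range(0, pano_gray.shape[1], block):
        pano = np.roll(pano_gray, -shift, axis=1)[:h, :w]
        # mean gray value of each equally sized block in both images
        db = depth_img.reshape(h // block, block, w // block, block).mean(axis=(1, 3))
        pb = pano.reshape(h // block, block, w // block, block).mean(axis=(1, 3))
        var = (db / (pb + 1e-6)).var()
        if var < best_var:
            best_shift, best_var = shift, var
    return best_shift
```

Using the ratio variance rather than a direct gray-value difference makes the comparison tolerant to a global brightness scale between the LiDAR intensity image and the camera image, which matches the "smallest ratio variance" criterion stated in the abstract.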
Keywords/Search Tags: simultaneous localization and mapping, mobile robot, pose estimation, multi-sensor fusion, LiDAR, visual camera