
Research on Localization in Park Environments with Vehicle-Mounted Multi-Sensor Fusion

Posted on: 2022-10-17
Degree: Master
Type: Thesis
Country: China
Candidate: W H Guan
Full Text: PDF
GTID: 2492306572451274
Subject: Control Science and Engineering

Abstract/Summary:
With the continuous progress and innovation of science and technology, more and more people enjoy the convenience that intelligent robots bring to everyday life. As a foundational capability for robot navigation, simultaneous localization and mapping (SLAM) plays an important role in the fields of mobile robotics and autonomous driving. Over the past 20 years, great success has been achieved using SLAM for real-time state estimation in challenging settings with a single perceptual sensor. However, single-sensor methods have inherent limitations and struggle to adapt to complex, changeable environments: camera-based methods are sensitive to illumination changes, while lidar-based methods degenerate in structure-less environments. Building on a passenger-vehicle platform and combining the advantages of a monocular camera, a 3D lidar, and an inertial measurement unit (IMU), we propose CIL-SLAM, a tightly coupled visual-lidar-inertial SLAM framework that achieves real-time state estimation and map building with high accuracy and robustness. The main research contents of this paper are as follows:

1. Extract features from the raw measurements of both the visual and lidar sensors. Artificial environments contain abundant line features, so on the visual side both 2D point and line features are extracted, with LK optical flow and LBD descriptor matching used for their data association, respectively. On the lidar side, the motion distortion of the raw point cloud is removed with the aid of the high-frequency IMU; after ground segmentation of the undistorted cloud, geometric feature points are extracted from the non-ground points based on local surface properties (a de-skewing sketch follows this list).

2. Perform data association between the 2D visual features and the lidar point cloud. The accurate 3D measurements of the local lidar map are used to directly complete the depth information of the visual features: the depth of visual point features is recovered in the camera frame, and for visual line features the corresponding Plücker coordinates in the camera frame are computed (a depth-completion sketch follows this list).

3. Construct a tightly coupled visual-lidar-inertial odometry on top of a factor graph. The IMU pre-integration technique handles the high-frequency IMU measurements and adds inertial constraints to the factor graph. Visual constraints between two keyframes are constructed by minimizing the reprojection errors of 3D feature points and lines with bundle adjustment. For the lidar, a multi-metric linear least-squares ICP registration that considers distance, category, direction, and intensity constructs lidar constraints between the current lidar frame and the local map. With these three kinds of constraints in place, the factor graph is updated incrementally and smoothly using the iSAM2 algorithm (an incremental-update sketch follows this list).

4. Construct a loop-closure detection module that combines odometry geometry with visual appearance. The positions of historical keyframes identify nearby loop-closure candidates, and a bag-of-words model identifies visual loop closures. Constraints between the current frame and the loop-closure frame are added to the factor graph via point-cloud registration, and several validation strategies prevent false-positive loop constraints from entering the graph (a candidate-search sketch follows this list).
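The lidar de-skewing in step 1 can be pictured with a short sketch. The snippet below is a minimal, rotation-only illustration assuming per-point timestamps and IMU attitudes at the sweep boundaries; the function name deskew_scan and all parameters are hypothetical, and the thesis's actual correction (which would also account for translation over the sweep) is not specified in the abstract.

```python
# Minimal sketch of rotation-only lidar motion de-skewing with IMU attitude.
# All names are illustrative assumptions, not from the thesis.
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def deskew_scan(points, timestamps, rot_start, rot_end, t_start, t_end):
    """Re-express every lidar point in the sweep-start frame.

    points     : (N, 3) raw lidar points in the sensor frame
    timestamps : (N,) per-point capture times within the sweep
    rot_start, rot_end : IMU attitude (scipy Rotation) at sweep start/end
    """
    # Interpolate the IMU attitude at each point's capture time.
    slerp = Slerp([t_start, t_end], Rotation.concatenate([rot_start, rot_end]))
    per_point_rot = slerp(np.clip(timestamps, t_start, t_end))
    # p0 = R_start^-1 * R(t) * p : undo the rotation accumulated since
    # sweep start (translation during the sweep is ignored for brevity).
    return (rot_start.inv() * per_point_rot).apply(points)
```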
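Step 2's depth completion can likewise be sketched. The following is a minimal nearest-projection version assuming a pinhole camera with intrinsics K and local-map points already transformed into the camera frame; the names are illustrative, and a real implementation would more likely interpolate depth from several neighboring projections (for example, a local plane fit) rather than copy the single nearest point.

```python
# Minimal sketch of visual-feature depth completion from the lidar local
# map. Function and variable names are illustrative assumptions.
import numpy as np

def complete_feature_depth(feature_uv, lidar_pts_cam, K, max_pixel_dist=3.0):
    """Assign each 2D feature the depth of the nearest projected lidar point.

    feature_uv    : (M, 2) pixel coordinates of visual point features
    lidar_pts_cam : (N, 3) local-map points in the camera frame
    K             : (3, 3) camera intrinsic matrix
    """
    # Keep only points in front of the camera and project them to pixels.
    pts = lidar_pts_cam[lidar_pts_cam[:, 2] > 0.1]
    uvw = (K @ pts.T).T
    uv = uvw[:, :2] / uvw[:, 2:3]
    depths = pts[:, 2]

    completed = np.full(len(feature_uv), np.nan)
    for i, f in enumerate(feature_uv):
        d2 = np.sum((uv - f) ** 2, axis=1)
        j = np.argmin(d2)
        if d2[j] < max_pixel_dist ** 2:   # accept only close projections
            completed[i] = depths[j]
    return completed                      # NaN where no depth was found
```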
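For step 3, the abstract names iSAM2, which is available through the GTSAM library. The sketch below shows the incremental-update pattern with a single between factor standing in for the three constraint types (inertial, visual, lidar) described above; the noise values and function names are illustrative assumptions, not the thesis's implementation.

```python
# Sketch of incremental factor-graph odometry with GTSAM's iSAM2. A single
# BetweenFactorPose3 stands in for the IMU pre-integration, visual
# reprojection, and lidar ICP factors described in the abstract.
import numpy as np
import gtsam
from gtsam.symbol_shorthand import X

isam = gtsam.ISAM2()
graph = gtsam.NonlinearFactorGraph()
initial = gtsam.Values()

# Anchor the first keyframe pose with a prior factor.
prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.full(6, 1e-3))
graph.add(gtsam.PriorFactorPose3(X(0), gtsam.Pose3(), prior_noise))
initial.insert(X(0), gtsam.Pose3())
isam.update(graph, initial)

def add_keyframe(k, relative_pose, odom_noise):
    """Add one constraint X(k-1) -> X(k) and update the graph incrementally."""
    g = gtsam.NonlinearFactorGraph()
    v = gtsam.Values()
    g.add(gtsam.BetweenFactorPose3(X(k - 1), X(k), relative_pose, odom_noise))
    # Initialize the new pose by composing the previous estimate.
    prev = isam.calculateEstimate().atPose3(X(k - 1))
    v.insert(X(k), prev.compose(relative_pose))
    isam.update(g, v)                     # incremental, smooth re-linearization
    return isam.calculateEstimate().atPose3(X(k))
```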
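Finally, the geometric half of step 4's loop-closure detection amounts to a radius search over historical keyframe positions. The sketch below assumes a simple KD-tree query with an index gap to exclude recent neighbors; the thresholds and names are illustrative, and each returned candidate would still have to pass the bag-of-words and point-cloud-registration checks the abstract describes.

```python
# Sketch of geometry-based loop-candidate search over keyframe positions.
# Thresholds and names are illustrative assumptions.
import numpy as np
from scipy.spatial import cKDTree

def find_loop_candidates(keyframe_positions, current_idx,
                         search_radius=10.0, min_index_gap=50):
    """Return indices of historical keyframes within search_radius metres.

    keyframe_positions : (K, 3) estimated positions of all keyframes
    current_idx        : index of the current keyframe
    min_index_gap      : exclude temporally adjacent keyframes
    """
    tree = cKDTree(keyframe_positions[:current_idx])
    nearby = tree.query_ball_point(keyframe_positions[current_idx],
                                   r=search_radius)
    # Only keyframes visited long ago can be genuine loop closures.
    return [i for i in nearby if current_idx - i >= min_index_gap]
```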
Keywords/Search Tags:Multi-Sensor Fusion, Visual-Lidar-Inertial Odometry, Point and Line Features, Driverless Cars