
Research On Key Technologies Of Visual Localization For Autonomous Driving

Posted on: 2020-10-24    Degree: Master    Type: Thesis
Country: China    Candidate: Z Chen    Full Text: PDF
GTID: 2392330575977681    Subject: Computer application technology
Abstract/Summary:
In recent years, artificial intelligence has developed rapidly. As the combination of artificial intelligence and the automobile industry, autonomous driving has received wide attention. An autonomous vehicle is a complex system whose key technologies include perception, localization, decision-making, planning, and control; among these, localization is an important basis for autonomous navigation, decision-making, and planning. Traditional localization methods usually rely on GNSS, IMU, and other sensors. GNSS suffers from signal shielding, which makes it difficult to achieve high-precision localization in urban environments, while IMU-based methods accumulate drift, so their localization precision degrades over time.

In recent years, the development and application of LiDARs and cameras have inspired new localization research. On the one hand, localization and mapping can be accomplished simultaneously through SLAM; on the other hand, localization can be performed against existing maps. The 3D maps used for localization are mostly acquired by terrestrial laser scanners together with GNSS, IMU, or other localization equipment. Terrestrial laser scanners are expensive, which sets a high barrier for users. A 3D real-time LiDAR is cheaper than a terrestrial laser scanner and can acquire accurate 3D information about the environment with little influence from ambient light; however, its point cloud is noticeably sparser than that of a terrestrial laser scanner. A camera acquires images containing rich detail such as color and luminance, but images are strongly affected by environmental conditions, and it is difficult to extract distance information directly from a monocular camera.

The research in this paper is based on a project of the robot research group of Jilin University, in which 3D real-time LiDARs and cameras are used for mapping and localization. The main research contents are as follows.

(1) To deal with the sparsity of 3D real-time LiDAR point clouds, the calibration of multiple LiDARs is studied. A new multi-LiDAR calibration algorithm is proposed by treating calibration as a point cloud registration problem: a point-to-plane cost function is constructed, and Particle Swarm Optimization is applied to search for the optimal transformation between the LiDARs (a code sketch follows this abstract). The calibration results are used to merge the point clouds of the different LiDARs and thereby improve point cloud resolution.

(2) To fuse point clouds with images and exploit the advantages of both sensors, the calibration of a LiDAR and a camera is studied. A calibration algorithm based on the Perspective-n-Point (PnP) problem is implemented with the assistance of a calibration target, which completes the spatial alignment of point clouds and images (a code sketch follows this abstract).

(3) For 3D mapping, LiDAR-based SLAM is studied, and a SLAM algorithm using the 3D LiDAR is implemented. By applying the calibration results, a colorized 3D map can be produced.

(4) For localization, a feature map generation method and the Monte Carlo Localization algorithm are studied. A hierarchical raster map generation method based on the 3D map is proposed in order to make full use of the information at different heights of the 3D map. By combining Monte Carlo Localization with point cloud registration, 3D localization is completed, and the localization results from the different hierarchical maps are fused to obtain a better result (a code sketch follows this abstract).
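The multi-LiDAR calibration idea in contribution (1) can be illustrated with a minimal sketch: a point-to-plane cost between the cloud of a reference LiDAR and the transformed cloud of a second LiDAR, minimized by a plain Particle Swarm Optimization loop. This is not the thesis implementation; the function names, the Z-Y-X Euler parameterization, and the PSO hyperparameters are illustrative assumptions.

```python
# Sketch: multi-LiDAR extrinsic calibration via a point-to-plane cost minimized
# with Particle Swarm Optimization (PSO). Illustrative only, not the thesis code.
# Requires numpy and scipy.
import numpy as np
from scipy.spatial import cKDTree

def rotation_matrix(roll, pitch, yaw):
    """Z-Y-X Euler angles -> 3x3 rotation matrix."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def estimate_normals(points, tree, k=10):
    """Estimate a unit normal for each target point from its k neighbours (local PCA)."""
    _, idx = tree.query(points, k=k)
    normals = np.empty_like(points)
    for i, nbrs in enumerate(idx):
        nb = points[nbrs] - points[nbrs].mean(axis=0)
        # Direction of smallest variance of the neighbourhood = plane normal.
        _, _, vt = np.linalg.svd(nb, full_matrices=False)
        normals[i] = vt[-1]
    return normals

def point_to_plane_cost(params, source, target, target_tree, target_normals):
    """Mean squared point-to-plane residual after transforming the source cloud."""
    x, y, z, roll, pitch, yaw = params
    transformed = source @ rotation_matrix(roll, pitch, yaw).T + np.array([x, y, z])
    _, idx = target_tree.query(transformed)          # nearest target point per source point
    diff = transformed - target[idx]
    residuals = np.einsum('ij,ij->i', diff, target_normals[idx])
    return float(np.mean(residuals ** 2))

def pso_calibrate(source, target, bounds, n_particles=40, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Search the 6-DoF extrinsic parameters (x, y, z, roll, pitch, yaw) with plain PSO."""
    rng = np.random.default_rng(0)
    lo, hi = bounds[:, 0], bounds[:, 1]
    pos = rng.uniform(lo, hi, size=(n_particles, 6))
    vel = np.zeros_like(pos)
    tree = cKDTree(target)
    normals = estimate_normals(target, tree)
    cost = lambda p: point_to_plane_cost(p, source, target, tree, normals)
    pbest = pos.copy()
    pbest_val = np.array([cost(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, 6))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([cost(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, float(pbest_val.min())
```

Here `bounds` would be a (6, 2) array of search limits around a rough initial guess of the mounting offsets, e.g. a few centimetres in translation and a few degrees in rotation.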
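The LiDAR-camera calibration in contribution (2) can likewise be sketched as a standard PnP problem, assuming N ≥ 4 correspondences between calibration-target points measured in the LiDAR frame and their pixel locations in the image. The sketch uses OpenCV's solvePnP and projectPoints; the function names and the colorization step are illustrative, not taken from the thesis.

```python
# Sketch: LiDAR-camera extrinsic calibration posed as a PnP problem, plus a
# simple point cloud colorization step. Illustrative only; requires numpy and
# opencv-python.
import numpy as np
import cv2

def calibrate_lidar_camera(lidar_points, image_points, camera_matrix, dist_coeffs):
    """Solve for the rigid transform that maps LiDAR coordinates into the camera frame."""
    ok, rvec, tvec = cv2.solvePnP(
        lidar_points.astype(np.float64),   # Nx3 target corners in the LiDAR frame
        image_points.astype(np.float64),   # Nx2 matching pixel coordinates
        camera_matrix, dist_coeffs,
        flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        raise RuntimeError("PnP failed; check the correspondences")
    R, _ = cv2.Rodrigues(rvec)             # rotation vector -> 3x3 matrix
    return R, tvec.reshape(3)

def colorize_cloud(cloud, image, R, t, camera_matrix, dist_coeffs):
    """Attach an image colour to every LiDAR point that projects inside the image."""
    rvec, _ = cv2.Rodrigues(R)
    pixels, _ = cv2.projectPoints(cloud.astype(np.float64), rvec, t,
                                  camera_matrix, dist_coeffs)
    pixels = pixels.reshape(-1, 2)
    in_front = (cloud @ R.T + t)[:, 2] > 0          # keep points in front of the camera
    h, w = image.shape[:2]
    u = pixels[:, 0].round().astype(int)
    v = pixels[:, 1].round().astype(int)
    valid = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    colors = np.zeros((cloud.shape[0], 3), dtype=np.uint8)
    colors[valid] = image[v[valid], u[valid]]       # BGR pixels for the valid points
    return colors, valid
```

The same projection is what allows the SLAM map of contribution (3) to be colorized once the extrinsic parameters are known.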
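The map-based localization in contribution (4) can be sketched as a particle filter over a stack of raster layers. This is a simplified 2D-pose (x, y, yaw) version and an assumption-laden illustration: the layer layout, the scoring rule, and the noise levels are not from the thesis.

```python
# Sketch: Monte Carlo Localization against hierarchical raster (occupancy) maps
# built from height slices of a 3D map. Illustrative only; requires numpy.
import numpy as np

class MonteCarloLocalizer:
    def __init__(self, layers, resolution, origin, n_particles=500, seed=0):
        self.layers = layers                  # list of 2D occupancy grids, one per height band
        self.res = resolution                 # metres per grid cell
        self.origin = np.asarray(origin)      # world (x, y) of grid cell (0, 0)
        self.rng = np.random.default_rng(seed)
        self.particles = np.zeros((n_particles, 3))   # columns: x, y, yaw
        self.weights = np.full(n_particles, 1.0 / n_particles)

    def predict(self, dx, dy, dyaw, noise=(0.05, 0.05, 0.01)):
        """Propagate particles with an odometry increment plus Gaussian noise."""
        c, s = np.cos(self.particles[:, 2]), np.sin(self.particles[:, 2])
        n = len(self.particles)
        self.particles[:, 0] += c * dx - s * dy + self.rng.normal(0, noise[0], n)
        self.particles[:, 1] += s * dx + c * dy + self.rng.normal(0, noise[1], n)
        self.particles[:, 2] += dyaw + self.rng.normal(0, noise[2], n)

    def _layer_score(self, layer, scan_xy, pose):
        """Mean occupancy hit by the scan points of one height band, for one pose."""
        x, y, yaw = pose
        c, s = np.cos(yaw), np.sin(yaw)
        wx = x + c * scan_xy[:, 0] - s * scan_xy[:, 1]
        wy = y + s * scan_xy[:, 0] + c * scan_xy[:, 1]
        col = ((wx - self.origin[0]) / self.res).astype(int)
        row = ((wy - self.origin[1]) / self.res).astype(int)
        ok = (row >= 0) & (row < layer.shape[0]) & (col >= 0) & (col < layer.shape[1])
        if not ok.any():
            return 1e-6
        return layer[row[ok], col[ok]].mean() + 1e-6

    def update(self, scans_per_layer):
        """Fuse the evidence of all hierarchical layers per particle, then resample."""
        for i, pose in enumerate(self.particles):
            score = 1.0
            for layer, scan in zip(self.layers, scans_per_layer):
                score *= self._layer_score(layer, scan, pose)
            self.weights[i] = score
        self.weights /= self.weights.sum()
        idx = self.rng.choice(len(self.particles), len(self.particles), p=self.weights)
        self.particles = self.particles[idx]
        self.weights[:] = 1.0 / len(self.particles)

    def estimate(self):
        """Weighted mean pose of the particle set."""
        return np.average(self.particles, axis=0, weights=self.weights)
```

Here `scans_per_layer` would be the current LiDAR scan sliced into the same height bands as the map layers; in the thesis the coarse particle-filter estimate is further refined by point cloud registration.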
Keywords/Search Tags:calibration of multiple LiDARs, calibration of LiDAR and camera, point cloud registration, SLAM, mapping, raster map, 3D localization