
Research On Key Technologies Of Distributed LiDAR And Visual Information Fusion

Posted on: 2021-01-12
Degree: Doctor
Type: Dissertation
Country: China
Candidate: L Yin
Full Text: PDF
GTID: 1482306461964159
Subject: Photogrammetry and Remote Sensing
Abstract/Summary:
For high-level autonomous driving, whether to improve performance, stability, and ease of use, or to add redundant observations for safety, it is necessary to fuse vision and LiDAR information to complete critical tasks such as environmental perception, obstacle detection, mapping, and localization. With the continuous development of unmanned driving technology, mass-production applications have become an inevitable trend. In this context, both software and hardware solutions must be closely integrated with the vehicle production process. Considering cost, vehicle regulations, safety, and installation requirements, low-cost sensors with narrow overlapping fields of view, embedded and distributed around the vehicle, will become the mainstream choice. These new scenarios impose additional requirements on sensor configuration and installation, which in turn challenge traditional camera-LiDAR fusion research. How to achieve fast, robust, and accurate sensor calibration and fusion from sparse, high-noise data with narrow or even no overlapping field of view has become an urgent problem.

This dissertation studies several key technical issues in information fusion for distributed LiDARs and cameras, including extrinsic optimization between multiple LiDARs with narrow or no overlapping field of view, extrinsic calibration between camera and LiDAR, and environment modeling and localization based on the fused information. The main results and innovations are as follows:

(1) We systematically illustrate the trends in technical development, sensor selection, and deployment during the move to mass-production unmanned driving, which bring new requirements and challenges for multi-sensor calibration and fusion.

(2) A method is proposed for extrinsic calibration of multiple multi-beam LiDARs with narrow overlapping fields of view, based on ground constraints and registration of distance transforms of occupancy grid maps. The method neither relies on GNSS/INS equipment nor needs to extract specific features to establish association constraints. Experimental results show that direct matching based on the point-cloud density distribution achieves higher accuracy and robustness.

(3) A LiDAR odometry method based on GP (Gaussian process) regression is proposed for extrinsic calibration of multiple LiDARs with no overlapping field of view. The GP is innovatively applied to estimate the SDF (signed distance function) distribution of the 2D environment. Using the poses and explicit surface points output by the LiDAR odometry, initial parameters are estimated from relative-motion constraints; the high-precision result is then refined by matching implicit surface points against the SDF distribution.

(4) An end-to-end optimization framework based on the CoMask (corresponding mask) is proposed for extrinsic calibration between camera and LiDAR. A checkerboard in the calibration field enables accurate and robust offline calibration; within the same framework, planar targets with semantic information in natural scenes enable online monitoring and tuning of the extrinsic parameters.

(5) A SLAM method is proposed that fuses monocular point-line visual features with LiDAR point clouds. The depth of each visual feature is fitted from the corresponding LiDAR points, and a loss function combining visual and LiDAR features is constructed to estimate more accurate poses. We build a fused map of sparse visual features and dense LiDAR points at the same spatial scale, and cross-validate loop-closure detection using bag of words (BoW) together with laser point-cloud matching to improve recall and precision. Finally, fast global positioning is performed based on the fused feature registration.

The research achievements of this dissertation serve several unmanned driving applications of our research team, including the unmanned truck deployed on the Wuhan-Shanghai high-speed line, the driverless Robotaxi test platform in urban environments, the Dongfeng self-driving minibus Sharing-VAN plus, and the multi-function autonomous guided vehicle (AGV) Sharing-smart.
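To make contribution (2) concrete, the following is a minimal 2D sketch of registering one LiDAR's scan against another via the distance transform of an occupancy grid: the reference scan is rasterized, a brute-force Euclidean distance transform is computed, and a candidate extrinsic (dx, dy, yaw) is scored by the mean distance-transform value at the transformed query points. The grid size, resolution, and brute-force transform are illustrative assumptions, not the dissertation's implementation.

```python
import numpy as np

def occupancy_grid(points, res=0.5, size=40):
    """Rasterize 2D points (metres) into a size x size grid centred on the origin."""
    grid = np.zeros((size, size), dtype=bool)
    idx = np.floor(points / res).astype(int) + size // 2   # (x, y) cell indices
    ok = (idx >= 0).all(axis=1) & (idx < size).all(axis=1)
    grid[idx[ok, 1], idx[ok, 0]] = True                    # grid is [row=y, col=x]
    return grid

def distance_transform(grid, res=0.5):
    """Brute-force Euclidean distance (metres) from every cell to the nearest occupied cell."""
    occ = np.argwhere(grid)                                # occupied (y, x) cells
    ys, xs = np.mgrid[0:grid.shape[0], 0:grid.shape[1]]
    cells = np.stack([ys.ravel(), xs.ravel()], axis=1)
    d = np.sqrt(((cells[:, None, :] - occ[None, :, :]) ** 2).sum(-1)).min(axis=1)
    return d.reshape(grid.shape) * res

def registration_cost(dt, points, dx, dy, yaw, res=0.5, size=40):
    """Mean distance-transform value at the transformed points; lower = better alignment."""
    c, s = np.cos(yaw), np.sin(yaw)
    p = points @ np.array([[c, -s], [s, c]]).T + np.array([dx, dy])
    idx = np.clip(np.floor(p / res).astype(int) + size // 2, 0, size - 1)
    return dt[idx[:, 1], idx[:, 0]].mean()
```

A coarse grid search (or gradient descent, since the distance transform is smooth away from surfaces) over (dx, dy, yaw) then recovers the inter-LiDAR extrinsics; ground constraints would fix the remaining roll, pitch, and height, which this 2D sketch omits.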
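Contribution (3) rests on Gaussian process regression of a signed distance field: surface hits are training points with SDF 0, while free-space and interior samples carry positive and negative signed distances. Below is a minimal GP posterior-mean sketch with an RBF kernel; the kernel, hyperparameters, and hand-picked training targets are illustrative assumptions rather than the dissertation's exact formulation.

```python
import numpy as np

def rbf(a, b, ell=1.0, sf=1.0):
    """Squared-exponential (RBF) kernel between point sets a (n,d) and b (m,d)."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return sf ** 2 * np.exp(-0.5 * d2 / ell ** 2)

def gp_sdf(train_x, train_y, query_x, noise=1e-3, ell=1.0):
    """GP posterior mean of the SDF at query_x, from (point, signed distance) training pairs."""
    K = rbf(train_x, train_x, ell) + noise * np.eye(len(train_x))
    Ks = rbf(query_x, train_x, ell)
    return Ks @ np.linalg.solve(K, train_y)
```

In the calibration pipeline, the refined extrinsics are those that place the second LiDAR's implicit surface points at (near-)zero predicted SDF, which turns the no-overlap problem into continuous residual minimization against the field.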
Keywords/Search Tags: Multi-LiDAR Extrinsic Calibration, Camera-LiDAR Extrinsic Calibration, Occupancy Grid Map Distance Transformation, Gaussian Process Implicit Surface, Corresponding Mask End-to-End Optimization, Visual-LiDAR SLAM