
Research On Road Environment Perception Based On Lidar Point Cloud And Image Information Fusion

Posted on: 2024-08-16    Degree: Master    Type: Thesis
Country: China    Candidate: S X Liu
GTID: 2542307151452914    Subject: Power electronics and electric drive
Abstract/Summary:
Vehicle intelligence is the current development trend of the automobile industry, and accurate perception of the driving environment is the primary prerequisite for the safe operation of intelligent vehicles. As the number of on-board sensors grows, overcoming the limitations of any single sensor and effectively fusing multi-sensor information to achieve accurate perception of the road environment has become a research hotspot in this field. Focusing on the environmental perception needs of intelligent driving, this thesis studies the joint calibration of lidar and camera, 3D target detection from point clouds, and fusion detection with multi-source heterogeneous sensors. The main research results are as follows:

First, to address the low constraint accuracy, instability, and poor convergence of traditional lidar and binocular camera joint calibration algorithms, a joint calibration algorithm based on point-line feature constraints is proposed. A coarse segmentation method extracts the point cloud on a self-made calibration board, and plane and edge-line fitting then yields the spatial coordinates of the board's corner points and edge lines in the lidar coordinate system. These are matched one-to-one with the corresponding camera points in the image, and point-line constraint equations are established to solve the extrinsic parameter matrix, realizing the spatial calibration of the lidar and the camera. Experiments verify that the extrinsic parameters obtained by the proposed algorithm yield an average relative pixel error of 0.26% on the u-axis and 0.42% on the v-axis. The results show that the algorithm reduces the computational error caused by noise points and improves the convergence and accuracy of the joint calibration results.

Second, to address the fact that the raw point cloud produced by the lidar is huge and contains many noise points, which slows down subsequent computation, a low-iteration plane-fitting ground segmentation algorithm based on spatial grid division is proposed. The algorithm first divides the point cloud into a spatial grid and computes the average height of the points in each cell, then screens the candidate ground points for plane fitting, and finally computes, in each cell, the angle between the fitted plane normal and the ground reference normal to filter out abnormal planes such as car bodies and building walls, obtaining an accurate segmentation result quickly. Experiments show that, compared with traditional methods, the proposed method improves both segmentation accuracy and computational efficiency.
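The abstract does not give implementation details, so the following is only a minimal sketch of the grid-based ground segmentation idea described above, written in Python with NumPy. The function name, cell size, height quantile, 15° normal-angle threshold, and 0.15 m plane-distance threshold are illustrative assumptions, not values taken from the thesis.

```python
import numpy as np

def segment_ground(points, cell=1.0, height_quantile=0.2, max_angle_deg=15.0):
    """Sketch of grid-based ground segmentation.

    points: (N, 3) array of lidar points (x, y, z) in meters.
    Returns a boolean mask marking the points accepted as ground.
    """
    # 1. Assign every point to an x-y grid cell.
    ij = np.floor(points[:, :2] / cell).astype(np.int64)
    keys = ij[:, 0] * 100000 + ij[:, 1]   # simple cell hash; fine for modest scene extents
    ground_mask = np.zeros(len(points), dtype=bool)
    z_ref = np.array([0.0, 0.0, 1.0])     # ground reference normal (upward axis)

    for key in np.unique(keys):
        idx = np.where(keys == key)[0]
        cell_pts = points[idx]
        # 2. Keep only the lowest points of the cell as ground candidates.
        z_cut = np.quantile(cell_pts[:, 2], height_quantile)
        cand = cell_pts[cell_pts[:, 2] <= z_cut + 0.2]
        if len(cand) < 3:
            continue
        # 3. Fit a plane to the candidates by least squares (SVD of centered points).
        centroid = cand.mean(axis=0)
        _, _, vt = np.linalg.svd(cand - centroid)
        normal = vt[-1]
        # 4. Reject cells whose plane normal deviates too far from vertical
        #    (car bodies, building walls, and other vertical structure).
        angle = np.degrees(np.arccos(np.clip(abs(normal @ z_ref), 0.0, 1.0)))
        if angle > max_angle_deg:
            continue
        # 5. Accept points of this cell that lie close to the fitted plane.
        dist = np.abs((points[idx] - centroid) @ normal)
        ground_mask[idx] = dist < 0.15
    return ground_mask
```

Fitting one plane per cell and filtering by normal angle keeps the number of fitting iterations small, which is the efficiency argument made in the abstract; the specific thresholds above would need tuning to the sensor and scene.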
Finally, to address the problems of targets being occluded by other objects and background points remaining inside the target box after data fusion, an image and point cloud fusion detection algorithm based on the intersection-over-union (IoU) of two-dimensional projected regions is proposed. Using the joint calibration results of the binocular camera and the lidar, the 3D point cloud is projected into the 2D image coordinate system, the IoU between each projected point cloud cluster and the 2D target box is computed, the cluster with the maximum IoU is selected as the true point cloud of the target, and its cluster center is taken as the spatial position of the target for accurate localization. Experimental results show that the missed detection rate and false detection rate of the data fusion algorithm are both within 10%, the average azimuth error within 50 m is less than 2.5°, and the average distance error is less than 85 mm, which meets the target detection and positioning accuracy requirements of road environment perception.
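As a rough illustration of the projection-and-IoU matching step, the sketch below projects lidar clusters into the image with an assumed intrinsic matrix K and extrinsics (R, t), scores each cluster against a 2D detection box by the IoU of their axis-aligned extents, and returns the centroid of the best-matching cluster. All function names and the bounding-box formulation are hypothetical; the thesis's actual definition of a cluster's projected region may differ.

```python
import numpy as np

def project_points(points_lidar, K, R, t):
    """Project (N, 3) lidar points into the image using extrinsics (R, t) and intrinsics K."""
    cam = (R @ points_lidar.T + t.reshape(3, 1)).T   # lidar frame -> camera frame
    cam = cam[cam[:, 2] > 0]                         # keep points in front of the camera
    uv = (K @ cam.T).T
    return uv[:, :2] / uv[:, 2:3]                    # pixel coordinates (u, v)

def box_iou_with_cluster(uv, box):
    """IoU between the axis-aligned extent of projected cluster points and a 2D box (x1, y1, x2, y2)."""
    cx1, cy1, cx2, cy2 = uv[:, 0].min(), uv[:, 1].min(), uv[:, 0].max(), uv[:, 1].max()
    bx1, by1, bx2, by2 = box
    iw = max(0.0, min(cx2, bx2) - max(cx1, bx1))
    ih = max(0.0, min(cy2, by2) - max(cy1, by1))
    inter = iw * ih
    union = (cx2 - cx1) * (cy2 - cy1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

def match_cluster_to_box(clusters, box, K, R, t):
    """Select the cluster with the highest projected IoU; its centroid approximates the target position."""
    best, best_iou = None, 0.0
    for pts in clusters:                             # each cluster: (M, 3) lidar points
        uv = project_points(pts, K, R, t)
        if len(uv) == 0:
            continue
        iou = box_iou_with_cluster(uv, box)
        if iou > best_iou:
            best, best_iou = pts, iou
    if best is None:
        return None, 0.0
    return best.mean(axis=0), best_iou               # cluster centroid in the lidar frame
```

Using the axis-aligned bounding box of the projected points is the simplest choice for the projected region; a convex hull or per-pixel mask would give a tighter estimate at some extra cost.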
Keywords/Search Tags:Road environment perception, Information fusion, Lidar, Binocular camera, Point cloud segmentation, Target recognition