Self-driving technology removes the most unpredictable factor in vehicle operation, the human driver; it conforms to the trend toward intelligent, precise vehicles and has attracted worldwide attention. Environmental perception is an essential part of driverless technology, and sensors are the primary medium through which a vehicle perceives its surroundings. Because a single sensor provides limited information and adapts poorly to changing conditions, it is difficult to obtain comprehensive road-condition information from one sensor alone. This thesis therefore studies an environment perception method for unmanned vehicles based on the fusion of 3D LiDAR and camera information.

For visual perception, an object detection model based on YOLOv3 is constructed. Training and test sets are prepared from KITTI, an authoritative dataset in the field of autonomous driving. The YOLOv3 algorithm is improved in three ways: a K-IGA algorithm optimizes the anchor (prior box) values, the batch-normalization (BN) layers are merged into the preceding convolution layers, and the loss function is reformulated using the GIoU idea. After configuring the data-processing platform, the improved YOLOv3 model is trained on the self-made dataset, and the influence of training-set balance, input image size, and the number and values of the anchors on performance is analyzed, yielding an optimally configured detection model that identifies vehicles, pedestrians, and cyclists in road-scene images. Experiments show that the model effectively detects all three classes; overall accuracy on the test set reaches 0.8713, and the processing speed meets the camera's real-time requirements.

For LiDAR perception, a point cloud processing pipeline is constructed for target detection. A Kd-tree encodes the topological structure of the point cloud to enable fast neighborhood search. Region-of-interest division and voxel-grid downsampling reduce the number of data points, and statistical filtering removes outliers. The ground is segmented with the random sample consensus (RANSAC) algorithm, and the remaining points are clustered with a distance-based, multi-threshold Euclidean clustering method. An optimized L-shape algorithm fits a bounding box to each cluster, yielding the three-dimensional position of each detected target. A data-collection platform built on an unmanned vehicle gathers point cloud data of low- and medium-speed road scenes as input, and the whole pipeline runs on the data-processing platform. Results show that the detection model extracts obstacle targets from the scene and processes a single frame in less than 100 ms.

For LiDAR-camera fusion, a multi-dimensional perception algorithm based on decision-level fusion is constructed. It takes the image and point cloud detection results as input; after time synchronization and spatial registration, the point cloud bounding box is projected onto the image, and the degree of coincidence between the image detection box and the projected point cloud box is computed. Whether to fuse is then decided from this degree of coincidence. When fusion succeeds, the target's spatial and category information is output as a three-dimensional bounding box; targets that fail the fusion criterion output spatial information only. Experiments show that the fusion model projects the point cloud data onto the image accurately, with fusion success rates of 88.9%, 81.1%, and 80.56% for vehicles, pedestrians, and cyclists, respectively. The fusion algorithm meets the detection accuracy and real-time requirements of low- and medium-speed road conditions, realizes environmental perception based on multi-dimensional information, and improves the safety and reliability of unmanned driving technology.
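The GIoU idea used to improve the YOLOv3 loss function can be illustrated with a minimal sketch. This is a generic illustration of GIoU for axis-aligned 2D boxes, not the thesis's exact implementation; the `[x1, y1, x2, y2]` box format is an assumption:

```python
def giou(box_a, box_b):
    """Generalized IoU for axis-aligned boxes in [x1, y1, x2, y2] format."""
    # Intersection rectangle
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    # Union area
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    iou = inter / union
    # Smallest enclosing box of the pair
    cx1, cy1 = min(box_a[0], box_b[0]), min(box_a[1], box_b[1])
    cx2, cy2 = max(box_a[2], box_b[2]), max(box_a[3], box_b[3])
    c_area = (cx2 - cx1) * (cy2 - cy1)
    # GIoU penalizes the enclosing area not covered by the union
    return iou - (c_area - union) / c_area

def giou_loss(box_a, box_b):
    # Loss in [0, 2]; unlike 1 - IoU it still varies for disjoint boxes
    return 1.0 - giou(box_a, box_b)
```

Because GIoU remains informative when two boxes do not overlap (plain IoU is zero there), a GIoU-based loss gives the regressor a useful gradient even for poorly initialized predictions.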
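The RANSAC-based ground segmentation step can be sketched as follows. This is a minimal, pure-Python illustration of the general RANSAC plane-fitting idea, not the thesis's implementation; the iteration count, distance threshold, and point format are assumptions:

```python
import random

def fit_plane(p1, p2, p3):
    """Plane through three 3D points, returned as (unit normal, d) with n·x + d = 0."""
    ux, uy, uz = (p2[i] - p1[i] for i in range(3))
    vx, vy, vz = (p3[i] - p1[i] for i in range(3))
    n = (uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx)
    norm = (n[0] ** 2 + n[1] ** 2 + n[2] ** 2) ** 0.5
    if norm == 0:          # degenerate (collinear) sample
        return None
    n = (n[0] / norm, n[1] / norm, n[2] / norm)
    return n, -(n[0] * p1[0] + n[1] * p1[1] + n[2] * p1[2])

def ransac_ground(points, iters=100, dist_thresh=0.2, seed=0):
    """Split points into (ground, non_ground) by the best-supported plane."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        model = fit_plane(*rng.sample(points, 3))
        if model is None:
            continue
        n, d = model
        # Inliers lie within dist_thresh of the candidate plane
        inliers = [p for p in points
                   if abs(n[0] * p[0] + n[1] * p[1] + n[2] * p[2] + d) < dist_thresh]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    ground = set(map(tuple, best_inliers))
    return best_inliers, [p for p in points if tuple(p) not in ground]
```

The points classified as non-ground would then feed the Euclidean clustering stage, which groups them into per-obstacle clusters.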
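The decision-level fusion step can be sketched in the same spirit: given image detection boxes and point cloud boxes already projected into the image plane, the overlap between each pair decides whether to fuse. This is an illustrative sketch under assumed data structures (`box`, `proj`, `box3d`, `label` fields) and an assumed coincidence threshold of 0.5; the camera projection itself is omitted:

```python
def overlap_ratio(det, proj):
    """Intersection over union of an image detection box and a projected
    point cloud box, both as (x1, y1, x2, y2)."""
    ix1, iy1 = max(det[0], proj[0]), max(det[1], proj[1])
    ix2, iy2 = min(det[2], proj[2]), min(det[3], proj[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    union = area(det) + area(proj) - inter
    return inter / union if union > 0 else 0.0

def fuse(image_dets, cloud_boxes, thresh=0.5):
    """Decision-level fusion: pair each projected point cloud box with the
    best-overlapping image detection.  Fused targets carry the 3D box plus
    the image class label; unmatched targets keep spatial info only."""
    fused, spatial_only = [], []
    for cloud in cloud_boxes:
        best = max(image_dets,
                   key=lambda d: overlap_ratio(d["box"], cloud["proj"]),
                   default=None)
        if best and overlap_ratio(best["box"], cloud["proj"]) >= thresh:
            fused.append({"box3d": cloud["box3d"], "label": best["label"]})
        else:
            spatial_only.append({"box3d": cloud["box3d"]})
    return fused, spatial_only
```

Outputting the unmatched point cloud targets with spatial information only, as in the `spatial_only` branch, preserves obstacles that the camera missed, which is the safety rationale behind decision-level fusion.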