
Study On Multi-sensor Environmental Perception System For Autonomous Driving Scenarios

Posted on: 2024-03-05
Degree: Master
Type: Thesis
Country: China
Candidate: L Q Zhu
Full Text: PDF
GTID: 2542307118479024
Subject: Electronic information
Abstract/Summary:
The autonomous driving industry is developing rapidly, and the application of autonomous driving technology is gradually changing how people travel and how the transportation system operates. Numerous large technology companies and automobile manufacturers worldwide are actively investing in research, development, and testing. The key technologies required for autonomous driving are diverse, including environmental perception, high-precision mapping, decision-making and planning, control execution, vehicle-to-vehicle communication, and the related testing and verification technologies. Among them, environmental perception serves as the "eyes" of an autonomous vehicle: sensor devices such as cameras, LiDAR, and millimeter-wave radar acquire real-time environmental information around the vehicle, and the data are analyzed and recognized by corresponding processing techniques to help the vehicle accurately understand the surrounding road conditions.

This thesis uses camera and LiDAR sensors to collect environmental data and focuses on the problems of lane detection and 3D object detection in autonomous driving scenarios, based on deep learning theories and methods. The main research contents are as follows:

(1) For the lane detection task, a detection method based on image and point cloud fusion is proposed. Building on a 2D image segmentation model, the detection result is mapped into the point cloud to obtain richer depth information. A lightweight convolutional network is used for feature extraction to improve efficiency, and a cross-channel joint attention module together with an improved dilated spatial convolution and pooling module is added to strengthen the extraction of local detail features. The depth information of the lane is obtained by fusing the image lane detection results with the pose transformation between the camera and the LiDAR.

(2) For the 3D object detection task, a multi-modal feature-fusion 3D object detection algorithm is proposed to overcome the insufficient representation of single-modal data. The image and point cloud data are preprocessed together as the input of the algorithm model; feature extraction and information fusion are then performed to automatically extract correlated information across the modalities, highlighting effective information and suppressing irrelevant information.

(3) A multi-sensor fusion perception system based on ROS is designed, which uses the message communication mechanism of ROS to handle communication and data sharing between different sensors and processes. The perception algorithm model is deployed on the experimental hardware platform with TensorRT to improve inference speed while maintaining accuracy; with FP16 precision, overall inference speed increased by 127% compared to FP32. Finally, the algorithm is tested and validated on both an open-source autonomous driving dataset and a dataset collected from the actual vehicle: the F1-score of the lane line segmentation task reached 75.6, and the average precision of the obstacle detection task reached 62.99.
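The lane-depth recovery step in (1) — projecting LiDAR points into the image with the camera–LiDAR pose transformation and keeping those that land on lane pixels — can be sketched in NumPy as follows. The function name, matrix shapes, and calibration parameters (intrinsics K, extrinsics R, t) are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

def lane_points_3d(points_lidar, lane_mask, K, R, t):
    """Recover 3D lane points by projecting LiDAR points into the image
    and keeping those that fall on lane-segmentation pixels.
    points_lidar: (N, 3) XYZ in the LiDAR frame
    lane_mask:    (H, W) boolean lane-segmentation mask
    K: (3, 3) camera intrinsics; R, t: LiDAR-to-camera extrinsics
    """
    pts_cam = points_lidar @ R.T + t          # LiDAR frame -> camera frame
    pts_cam = pts_cam[pts_cam[:, 2] > 0]      # keep points in front of the camera
    uv = pts_cam @ K.T                        # pinhole projection
    uv = uv[:, :2] / uv[:, 2:3]               # perspective divide -> pixel coords
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    H, W = lane_mask.shape
    valid = (u >= 0) & (u < W) & (v >= 0) & (v < H)
    on_lane = np.zeros(len(pts_cam), dtype=bool)
    on_lane[valid] = lane_mask[v[valid], u[valid]]
    return pts_cam[on_lane]                   # 3D lane points (camera frame)
```

In this formulation the image model supplies the lane mask and the point cloud supplies depth, which is the fusion direction the abstract describes.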
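The fusion idea in (2) — weighting modalities so that effective information is highlighted and irrelevant information is suppressed — can be illustrated with a minimal gated fusion of two feature vectors. This is a generic sketch of attention-style modality weighting; the gate parameters `w_img` and `w_pts` are hypothetical stand-ins for learned weights, not the thesis network.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_fusion(img_feat, pts_feat, w_img, w_pts):
    """Fuse image and point-cloud feature vectors with soft modality gates.
    A scalar score per modality is normalized with softmax and used to
    weight the modalities before summation, so the more informative
    modality dominates the fused feature."""
    scores = np.array([img_feat @ w_img, pts_feat @ w_pts])
    gates = softmax(scores)               # (2,) modality weights, sum to 1
    return gates[0] * img_feat + gates[1] * pts_feat
```

In a real network the gates would be produced by learned layers over feature maps rather than fixed dot products, but the weighting-and-sum structure is the same.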
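For the deployment result in (3), a 127% overall speed increase means FP16 throughput is 2.27x the FP32 baseline. The baseline frame rate below is hypothetical, since the abstract reports only the relative gain:

```python
# Hypothetical FP32 baseline throughput; the thesis gives only the relative gain.
fp32_fps = 20.0
fp16_fps = fp32_fps * (1 + 1.27)   # a 127% increase => 2.27x throughput
print(fp16_fps)                    # 45.4 FPS under this assumed baseline
```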
Keywords/Search Tags: environment perception, multi-sensor fusion, lane detection, 3D object detection