
Research And Realization Of Vision And Millimeter Wave Radar Fusion Perception And Positioning Method For Autonomous Driving

Posted on: 2022-04-09
Degree: Master
Type: Thesis
Country: China
Candidate: J J Wu
Full Text: PDF
GTID: 2492306524979549
Subject: Control Science and Engineering

Abstract/Summary:
Perception and positioning of surrounding targets is one of the key technologies of autonomous driving systems. Current systems mainly use two kinds of sensors: cameras and millimeter-wave radar. Cameras struggle to measure target position and speed accurately, while millimeter-wave radar cannot distinguish target categories. Multi-sensor fusion can achieve complementary advantages, but it faces two major problems: calibration accuracy, and false and missed detections. In this thesis, a radial basis function (RBF) neural network is used to fit the projection function, which improves calibration accuracy. A dynamic-confidence positioning method combining target detection and tracking is proposed to reduce the influence of noise, missed detections, and false detections. Corresponding fusion strategies are formulated for various complex situations to further improve target positioning accuracy after fusion. A real-vehicle experimental platform with multiple sets of cameras and millimeter-wave radars is built step by step, and the surrounding-target perception and positioning capability is verified on it. The research content mainly includes the following parts:

1. Sensor calibration: improving the extrinsic calibration accuracy of the cameras and millimeter-wave radars. An RBF neural network replaces the traditional pinhole imaging model to complete the extrinsic calibration of multiple sensors. First, the intrinsic parameters of 4 cameras and 4 millimeter-wave radars are calibrated. Then the RBF network model is used to calibrate the extrinsic parameters between camera and radar, solving the local misalignment at the left and right image edges after point cloud projection. Finally, all sensors are calibrated relative to the center position of the vehicle.

2. Fusion of vision and radar for target detection, tracking, and positioning: a confidence is assigned to each target to reduce the impact of missed and false detections, and fusion strategies are formulated for various situations. Multi-target obstacle detection is performed with a converted YOLO-v3 model. Using the calibration relationship, the millimeter-wave radar point cloud is projected onto the image plane, and the point cloud frustum corresponding to each image target is extracted. The size, position, category, and other attributes of each target's image box are estimated from the image and point cloud, and the RBF neural network is used to initially match each target with its radar point. Finally, the multi-target tracking results based on target confidence are merged with the projection detection results to complete fused perception and positioning of target obstacles.

3. Real-vehicle platform construction and verification. The MDC300 computing platform is deployed on an actual vehicle, the above sensors and related hardware are installed, data are collected on the vehicle, and intrinsic and extrinsic calibration is completed after synchronization. Finally, five different real-vehicle experiments are carried out on campus roads of the University of Electronic Science and Technology of China to verify the feasibility of the proposed target perception and positioning method based on the fusion of vision and millimeter-wave radar.
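The RBF-based calibration described in part 1 can be illustrated as fitting a smooth mapping from radar coordinates to image pixels from a set of matched correspondences. The sketch below uses SciPy's `RBFInterpolator` on synthetic data; the correspondence points, the "ground truth" projection with a distortion term, and all numeric values are illustrative assumptions, not the thesis's actual calibration data or network.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical correspondences: radar points (x forward, y left, z up) in the
# vehicle frame, matched by hand to pixel locations (u, v) in the image.
rng = np.random.default_rng(0)
radar_pts = rng.uniform([2.0, -10.0, -1.0], [60.0, 10.0, 2.0], size=(50, 3))

# Synthetic "true" projection with a mild nonlinearity standing in for the
# lens distortion that a plain pinhole model would fail to capture.
u = 640.0 + 800.0 * radar_pts[:, 1] / radar_pts[:, 0]
v = 360.0 - 800.0 * (radar_pts[:, 2] - 0.5) / radar_pts[:, 0]
u = u + 0.002 * (u - 640.0) ** 2 / 100.0   # distortion-like term
pixels = np.stack([u, v], axis=1)

# Fit an RBF mapping radar coordinates -> pixel coordinates.
proj = RBFInterpolator(radar_pts, pixels, kernel="thin_plate_spline")

# Project a new radar detection onto the image plane.
query = np.array([[20.0, 1.5, 0.0]])
print(proj(query))
```

With zero smoothing (the default), the thin-plate-spline fit interpolates the training correspondences exactly, so residual projection error at the calibration points vanishes; generalization away from them is what distinguishes it from a rigid pinhole model.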
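The point-cloud projection and frustum interception in part 2 can be sketched in a few lines: project each radar point into the image with the calibrated transform, then keep the points that land inside a given detection box. The intrinsics `K`, extrinsics `(R, t)`, box coordinates, and the toy points below are all illustrative assumptions; the thesis uses its RBF-fitted projection rather than this plain pinhole model.

```python
import numpy as np

def project_points(K, R, t, pts_radar):
    """Project radar points (N, 3) into pixel coordinates with a pinhole
    model: intrinsics K, extrinsics (R, t). Returns pixels and depths."""
    cam = (R @ pts_radar.T + t.reshape(3, 1)).T      # radar -> camera frame
    uvw = (K @ cam.T).T
    return uvw[:, :2] / uvw[:, 2:3], cam[:, 2]

def points_in_box(uv, depth, box):
    """Mask of projected points inside a detection box (u1, v1, u2, v2)
    with positive depth -- the 'point cloud frustum' for that target."""
    u1, v1, u2, v2 = box
    return ((depth > 0)
            & (uv[:, 0] >= u1) & (uv[:, 0] <= u2)
            & (uv[:, 1] >= v1) & (uv[:, 1] <= v2))

# Toy setup: identity extrinsics, simple intrinsics, three radar points.
K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0,   0.0,   1.0]])
R, t = np.eye(3), np.zeros(3)
pts = np.array([[0.0, 0.0, 20.0],    # straight ahead, 20 m
                [2.0, 0.0, 20.0],    # offset to the side
                [0.0, 0.0, -5.0]])   # behind the camera
uv, depth = project_points(K, R, t, pts)
mask = points_in_box(uv, depth, box=(600, 300, 700, 420))
print(uv[mask])   # only the point projecting near the box survives
```

The surviving frustum points carry the radar's range and velocity measurements for that image target, which is what the subsequent confidence-based matching and fusion operate on.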
Keywords/Search Tags: autonomous driving, camera and millimeter-wave radar, intrinsic and extrinsic calibration, target detection, fusion positioning