
Research On 3D Vehicle Detection Algorithm Based On Multi-sensor Fusion

Posted on: 2024-09-28  Degree: Master  Type: Thesis
Country: China  Candidate: Z Q Liu  Full Text: PDF
GTID: 2542307115977819  Subject: Mechanical engineering
Abstract/Summary:
Autonomous vehicles need to be aware of their driving environment to navigate safely and reliably. In complex traffic scenes it is difficult for a single sensor to perceive robustly, so mature perception methods fuse data from multiple sensors to obtain sufficient information about the external environment and ensure the active safety of the vehicle. To this end, this paper uses multi-sensor fusion to explore how sensor data in different spaces can improve vehicle detection in the driving environment. Three studies were conducted on different aspects of multi-sensor fusion: joint calibration of camera and lidar sensors, point cloud density and semantic enhancement, and 3D vehicle detection with multimodal feature fusion. The specific work and contributions are summarized as follows:

(1) Joint calibration of camera and lidar. This paper studies a 3D vehicle detection algorithm that fuses a monocular camera and lidar. It analyzes the independent calibration of the camera sensor and the principle of joint camera-lidar calibration; designs a series of calibration schemes and builds an experimental system; uses the OpenCV calibration toolkit, Autoware, RViz, and other tools to solve for the intrinsic and extrinsic parameter matrices of the jointly calibrated camera and lidar; and completes the spatial alignment of the two sensors, achieving a good calibration result.

(2) 3D vehicle detection with point cloud density and semantic enhancement. To address the low detection accuracy of distant and small targets in 3D vehicle detection, this paper proposes a point cloud density and semantic enhancement method, D-S Augmentation. The method has two stages: the first stage uses instance segmentation, point cloud projection, and a global N-nearest-neighbor data association to densify the point cloud; the second stage assigns the category label and segmentation score produced by instance segmentation to each projection point, then maps the projected points with these one-dimensional features back into point cloud space to complete the semantic enhancement. Extensive experiments on the nuScenes and KITTI datasets demonstrate the effectiveness and efficiency of D-S Augmentation. Ablation experiments analyze the contribution of the fusion module to the baseline detector and show improved performance.

(3) Multimodal feature fusion of images and point clouds. To address the impact of illumination changes, target occlusion, and detection distance on detection accuracy, this paper proposes a multimodal feature fusion network, MFF-Net. First, a spatial transformation projection algorithm resolves the spatial inconsistency that arises when merging image features with point cloud features. Then an adaptive expressiveness-enhancement fusion network is constructed to increase the weight of important features while improving their directivity. Finally, a one-dimensional threshold method is proposed to reduce the false and missed detections caused by non-maximum suppression. Experimental results on the KITTI and nuScenes datasets show that the proposed method outperforms previous state-of-the-art multimodal feature fusion networks in both detection accuracy and real-time performance.
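The spatial alignment in (1) ultimately amounts to projecting lidar points into the image with the calibrated extrinsic rotation R, translation t, and intrinsic matrix K. A minimal NumPy sketch of that projection, using hypothetical placeholder matrices (not the thesis's calibration results; R here is the common lidar-x-forward to camera-z-forward axis permutation):

```python
import numpy as np

# Hypothetical intrinsics (fx, fy, cx, cy) -- placeholder values only.
K = np.array([[700.0,   0.0, 320.0],
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Hypothetical extrinsics: lidar frame -> camera frame.
R = np.array([[0.0, -1.0,  0.0],   # camera x = -lidar y
              [0.0,  0.0, -1.0],   # camera y = -lidar z
              [1.0,  0.0,  0.0]])  # camera z =  lidar x (forward)
t = np.array([0.0, -0.1, 0.2])     # translation in meters

def project_lidar_to_image(points_lidar):
    """Project Nx3 lidar points to pixel coordinates.

    Returns (uv, depth, mask): pixel coords and camera-frame depth
    for the points in front of the camera, plus the boolean mask.
    """
    pts_cam = points_lidar @ R.T + t      # transform into camera frame
    mask = pts_cam[:, 2] > 0              # keep points in front of camera
    uvw = pts_cam[mask] @ K.T             # pinhole projection
    uv = uvw[:, :2] / uvw[:, 2:3]         # perspective divide
    return uv, pts_cam[mask, 2], mask
```

A point 10 m straight ahead of the lidar lands near the principal point (cx, cy), while points behind the camera are filtered out by the mask.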
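The second stage of D-S Augmentation attaches the instance segmentation output to each projected point before mapping it back to 3D. A hedged sketch of that bookkeeping, with hypothetical per-pixel label and score maps standing in for the segmentation network's output (the thesis's exact feature encoding is not specified in this abstract):

```python
import numpy as np

def attach_semantics(points, uv, label_map, score_map):
    """Append per-point (class_id, seg_score) sampled from the
    instance-segmentation maps at each projected pixel.

    points: Nx3 lidar points, already projected to pixels uv (Nx2).
    label_map, score_map: HxW arrays from an instance segmenter.
    Returns an Nx5 array [x, y, z, class_id, score]; points that
    project outside the image get class_id -1 and score 0.
    """
    h, w = label_map.shape
    px = np.round(uv).astype(int)
    inside = ((px[:, 0] >= 0) & (px[:, 0] < w) &
              (px[:, 1] >= 0) & (px[:, 1] < h))
    cls = np.full(len(points), -1.0)
    score = np.zeros(len(points))
    cls[inside] = label_map[px[inside, 1], px[inside, 0]]
    score[inside] = score_map[px[inside, 1], px[inside, 0]]
    return np.hstack([points, cls[:, None], score[:, None]])
```

The one-dimensional semantic features (label, score) simply ride along with the xyz coordinates, so any downstream point cloud detector can consume the enriched points unchanged.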
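The one-dimensional threshold method in (3) modifies non-maximum suppression; its details are not given in this abstract, but for context the standard greedy NMS it builds on can be sketched as follows (plain NumPy, axis-aligned 2D boxes):

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Standard greedy NMS over axis-aligned boxes [x1, y1, x2, y2].

    Keeps the highest-scoring box, suppresses boxes whose IoU with it
    exceeds iou_thresh, and repeats. Returns indices of kept boxes.
    """
    order = np.argsort(scores)[::-1]       # indices by descending score
    keep = []
    while order.size:
        i = order[0]
        keep.append(int(i))
        if order.size == 1:
            break
        rest = order[1:]
        # Intersection of box i with every remaining box.
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou <= iou_thresh]    # suppress strong overlaps
    return keep
```

The fixed IoU cutoff is exactly where a hard threshold can wrongly discard a true detection (missed detection) or keep a duplicate (false detection), which is the failure mode the thesis's method targets.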
Keywords/Search Tags: autonomous driving, multi-sensor joint calibration, 3D vehicle detection, point cloud augmentation, feature fusion