
Low-observable Targets Detection Method For Autonomous Vehicles Based On Deep Learning

Posted on: 2021-02-08
Degree: Master
Type: Thesis
Country: China
Candidate: W Zou
Full Text: PDF
GTID: 2492306473498974
Subject: Vehicle Engineering
Abstract/Summary
Environment perception is the fundamental technology that autonomous vehicles rely on for safe and reliable driving. A great deal of research has addressed perception in ideal environments, but much less work has considered low-observable targets, whose features may be faint in complex environments. In real scenes, however, autonomous vehicles inevitably encounter complex conditions such as rain, snow, night, and haze, under which target features in RGB images become indistinct and various sensors are also strongly affected. As a result, detection models trained on images with salient features fail to detect low-observable targets, and the autonomous vehicle is effectively "blind". Studies have shown that fusing multi-modal image data can improve the performance of object detection algorithms in some complex scenarios, and that other image modalities can often provide information beyond what a traditional monocular RGB camera captures. Aiming at the problem of low-observable target detection for autonomous vehicles under real driving conditions, this thesis studies efficient and intelligent detection algorithms for low-observable targets in complex environments and explores deep-learning-based multi-modal image feature fusion in the intelligent perception system, so as to improve the visual perception capability of autonomous vehicles in real driving environments. The main research contents are as follows:

(1) A multi-modal image synchronous shooting and registration system is constructed, which overcomes the difficulty of synchronous shooting caused by differences in the exposure modes, trigger mechanisms, and frame rates of RGB and infrared cameras. Registration between the different modal images is achieved by estimating the homography matrix between the cameras, so that the multi-modal acquisition system can capture, in real time, image pairs with the same perspective and overlapping fields of view.

(2) A multi-modal image-pair dataset of low-observable targets for autonomous vehicles is established for training and testing multi-modal target detection networks. The dataset contains dual-modal and 3-modal image data covering pedestrian and vehicle targets across a variety of scenes and environmental variables, including season, weather, temperature, light intensity, and visibility.

(3) An object detection method based on multi-modal feature fusion is proposed to solve the problem of low-observable target detection under real driving conditions. To improve detection of low-observable targets, multi-modal (dual-modal/3-modal) deep convolutional neural networks are designed, based on Faster R-CNN, to fuse the features of RGB, polarized, and infrared images. The experimental results indicate that both the dual-modal and 3-modal detection algorithms based on multi-modal fusion outperform the traditional single-modal method in detecting and recognizing low-observable targets in complex environments.

(4) A ROS-based multi-modal real-time target detection system is designed, and real-time communication of the multi-modal detection algorithm under ROS is realized, paving the way for deploying detection models on embedded computing devices. On-vehicle testing shows that, on an image-processing workstation equipped with a high-performance GPU, a forward pass takes approximately 0.07 seconds per frame for the 3-modal network and 0.05 seconds for the dual-modal network.
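The homography-based registration described in (1) can be sketched as follows. This is a minimal illustration, not the thesis implementation: the point coordinates and the synthetic infrared frame are placeholders standing in for real corresponding points between the two cameras.

```python
import numpy as np
import cv2

# Four or more corresponding points observed by each camera
# (coordinates here are illustrative placeholders, not measured data).
pts_ir  = np.float32([[80, 100], [500,  95], [515, 385], [70, 395]])
pts_rgb = np.float32([[100, 120], [520, 110], [530, 400], [90, 410]])

# Estimate the 3x3 homography mapping infrared pixels onto the RGB frame.
H, _ = cv2.findHomography(pts_ir, pts_rgb, cv2.RANSAC, 5.0)

# Warp a (stand-in) infrared frame into the RGB camera's perspective,
# yielding a registered image pair with overlapping fields of view.
ir_frame = np.zeros((480, 640, 3), dtype=np.uint8)
registered_ir = cv2.warpPerspective(ir_frame, H, (640, 480))
```

Once the homography is computed offline for a fixed camera rig, applying it per frame is a single warp, which is cheap enough for real-time acquisition.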
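The multi-modal fusion idea in (3) can be illustrated with a toy PyTorch sketch. This is not the thesis network: the tiny per-modality backbones and the 1x1 fusion convolution are hypothetical stand-ins for the Faster R-CNN backbone, showing only the channel-wise concatenation of per-modality feature maps before a shared head.

```python
import torch
import torch.nn as nn

class MultiModalFusion(nn.Module):
    """Toy mid-level fusion: one backbone per modality, concat, 1x1 fuse."""

    def __init__(self, n_modalities=3, feat_ch=64):
        super().__init__()
        # One small convolutional backbone per modality (RGB, polarized, IR).
        self.backbones = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(3, feat_ch, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(feat_ch, feat_ch, 3, stride=2, padding=1), nn.ReLU(),
            )
            for _ in range(n_modalities)
        )
        # 1x1 conv fuses the concatenated maps back to feat_ch channels;
        # in a Faster R-CNN setup this would feed the RPN / detection head.
        self.fuse = nn.Conv2d(n_modalities * feat_ch, feat_ch, 1)

    def forward(self, images):  # images: list of (B, 3, H, W) tensors
        feats = [bb(x) for bb, x in zip(self.backbones, images)]
        return self.fuse(torch.cat(feats, dim=1))

rgb = torch.randn(1, 3, 128, 128)
pol = torch.randn(1, 3, 128, 128)
ir  = torch.randn(1, 3, 128, 128)
fused = MultiModalFusion()([rgb, pol, ir])  # shape (1, 64, 32, 32)
```

Dropping to two modalities (`n_modalities=2`) gives the dual-modal variant; the fusion layer simply sees fewer concatenated channels.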
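The per-frame latencies reported in (4) translate directly into throughput:

```python
# Converting the reported per-frame forward-pass latencies to throughput.
lat_3modal = 0.07   # seconds per frame, 3-modal network (reported)
lat_dual   = 0.05   # seconds per frame, dual-modal network (reported)

fps_3modal = 1.0 / lat_3modal   # ~14.3 frames per second
fps_dual   = 1.0 / lat_dual     # ~20 frames per second
```

Both rates are above the roughly 10 FPS often taken as a lower bound for usable on-vehicle perception, though deployment on embedded hardware would be slower than on the workstation GPU used here.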
Keywords/Search Tags:Autonomous driving, Multi-modal feature fusion, Deep convolutional neural network, Low-observable targets(LOT), Intelligent sensing, Multi-modal image pairs