
Research On Vehicle Detection Method Based On Data Fusion Of Lidar Points And Image

Posted on: 2020-08-31    Degree: Master    Type: Thesis
Country: China    Candidate: J S Liu    Full Text: PDF
GTID: 2392330572984492    Subject: Vehicle Engineering
Abstract/Summary:
Driverless technology has vital strategic value in many fields, such as future transportation, future commerce, and smart cities. The most fundamental task in realizing such technology is to develop a reliable perception system, and multi-source information fusion makes this possible by collecting more redundant and precise environmental data. During the development of fusion technology, high-performance sensors were often selected to ensure precise and reliable detection; as a result, the hardware cost became too high for commercial application. To overcome the limited perception of a single-sensor system while avoiding excessive hardware cost, a detection method based on the fusion of a camera and a 4-line lidar is proposed. The corresponding research includes the following.

First, a suitable computing environment was deployed to run a convolutional neural network (CNN) model, Mask R-CNN, for image detection, and data fusion was studied in both the temporal and spatial domains. For temporal fusion, the sensors' samples are unified to a common timestamp, aligned to the sensor with the longest sampling period. For spatial fusion, the camera is calibrated and a spatial transformation model between the sensors is established, so that data from the same frame are mapped into the same coordinate space.

Second, to compute an object's three-dimensional position from lidar points, the detection boxes must be matched with their corresponding points. A brute-force traversal cannot guarantee efficient matching, so a modified R-Tree algorithm was proposed as a substitute and verified to be faster and more stable. After matching, each object's spatial position is obtained by averaging the ranges of all points inside the corresponding box. Meanwhile, a method to correct the detection confidence based on lidar points was proposed, adopting the sigmoid as the correction function, and the fusion algorithm was tested on datasets collected from real scenes. Finally, we verified that the
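The box–point matching and confidence-correction steps above can be sketched in Python. The containment scan shown here is the brute-force traversal baseline that the thesis replaces with a modified R-Tree, and the sigmoid parameters `k` and `x0` are illustrative assumptions, not values from the thesis:

```python
import math

def match_points_to_boxes(points, boxes):
    """Brute-force traversal: assign each projected lidar point (u, v, range)
    to every detection box (x1, y1, x2, y2) that contains it. This
    O(points x boxes) scan is the baseline the thesis speeds up with a
    modified R-Tree index over the boxes."""
    matches = {i: [] for i in range(len(boxes))}
    for u, v, rng in points:
        for i, (x1, y1, x2, y2) in enumerate(boxes):
            if x1 <= u <= x2 and y1 <= v <= y2:
                matches[i].append(rng)
    return matches

def object_range(ranges):
    """Object position along the viewing ray: the average range of all
    lidar points that fell inside its detection box."""
    return sum(ranges) / len(ranges) if ranges else None

def corrected_confidence(cnn_conf, n_points, k=0.5, x0=5.0):
    """Sigmoid-based confidence correction (illustrative parameters):
    boxes supported by more lidar points retain more of the CNN's
    confidence; sparsely supported boxes are down-weighted."""
    support = 1.0 / (1.0 + math.exp(-k * (n_points - x0)))
    return cnn_conf * support
```

In this sketch a box containing no lidar points yields no range estimate and a heavily suppressed confidence, which is one plausible way such a fusion rule reduces false detections.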
proposed fusion algorithm reduces the missed-detection rate from 14.86% to 8.03%.

Finally, to improve ranging precision, we analyzed the causes of the low precision and proposed a corresponding measurement method, named joint ranging, based on image and lidar data. The image-based distance measurement, one part of the joint ranging method, works through the camera's calibration parameters and the geometric relationships of the imaging model; testing showed its applicable range to be within 20 meters. Within this working range, it is used to eliminate abnormal lidar points; the remaining points are then averaged, and the resulting value represents the object's position. The joint ranging method indeed improves ranging accuracy and stability: the average error drops from 2.36 meters to 0.37 meters within 20 meters. In addition, to reduce false detections, we drew on research results in CNN transfer learning and proposed several suggestions for retraining the CNN model.

In summary, compared with a single-sensor system, the proposed fusion method was verified to practically improve detection ability in terms of both the missed-detection rate and ranging precision. Moreover, its cost advantage in hardware brings it closer to practical application.
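The joint ranging idea can be illustrated with a common pinhole ground-plane formulation; the focal length, camera height, principal point, and outlier threshold below are assumed values for illustration, and the thesis's exact imaging-model geometry and rejection rule may differ:

```python
def image_range(v_bottom, f_pixels, cam_height, cy):
    """Pinhole ground-plane ranging (one common geometric model): a point
    on the ground seen at image row v_bottom lies at distance
    Z = f * H / (v_bottom - cy), with f in pixels and H the camera height."""
    dv = v_bottom - cy
    if dv <= 0:
        return None  # at or above the horizon: not on the ground plane ahead
    return f_pixels * cam_height / dv

def joint_range(lidar_ranges, img_estimate, tol=1.5):
    """Joint ranging sketch: within the image method's ~20 m working range,
    drop lidar points deviating more than `tol` meters (assumed threshold)
    from the image estimate, then average the remaining points."""
    kept = [r for r in lidar_ranges if abs(r - img_estimate) <= tol]
    if not kept:
        return img_estimate  # fall back when every lidar point is rejected
    return sum(kept) / len(kept)
```

Here the coarse image estimate acts only as a gate that rejects abnormal lidar returns (e.g. points from the ground or a background object), while the final distance still comes from the more precise lidar measurements, matching the division of labor described above.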
Keywords/Search Tags: multi-sensor fusion, object detection, CNN, R-Tree, joint ranging