
Research On Real-Time Vision Location Algorithm Of Intelligent Vehicle Based On Deep Learning

Posted on: 2021-02-13    Degree: Master    Type: Thesis
Country: China    Candidate: D X Lian    Full Text: PDF
GTID: 2492306572467284    Subject: Vehicle Engineering
Abstract/Summary:
With the development of artificial intelligence and people's pursuit of a higher quality of life, autonomous driving has become a research hotspot in the automotive field. A key requirement is that real-time position and attitude (pose) information be available throughout driving, so that autonomous navigation decisions can be made from these data. The front-end visual odometry of VSLAM (Visual Simultaneous Localization and Mapping) obtains image sequences from a camera and applies the camera imaging model and the principles of multi-view geometry to compute the vehicle's pose, laying the foundation for subsequent decision making in driverless vehicles. Based on the principles of the VSLAM front end, this thesis focuses on the key algorithms of each part of the visual odometry module: extraction of feature points from images, matching of feature points, and pose estimation from the matched feature point pairs. The algorithms are verified experimentally with the visual odometry system proposed in this thesis.

For the feature point extraction and matching module, a deep learning method is adopted: a shallow convolutional network is designed that simultaneously extracts feature points and descriptors. To improve the accuracy of the trained model, a global orthogonal regularization term is added to the original triplet (three-branch) margin loss function, and data augmentation is applied during training to improve robustness. With the trained model, the performance of the proposed algorithm and of the two traditional algorithms, ORB and SIFT, is analyzed under scale, rotation, viewpoint, and illumination changes. After feature matching is refined with standard RANSAC, the pose estimation error of the proposed method is smaller than that of ORB, and across the different conditions the pose results are similar to or better than those of SIFT. In terms of real-time performance, the algorithm extracts feature points and descriptors in real time on GPU hardware, meeting the real-time requirements of autonomous driving.

For the pose estimation module, where traditional visual odometry uses the standard RANSAC (Random Sample Consensus) algorithm to purify the matched feature samples and eliminate outliers, this thesis substitutes the Graph-Cut RANSAC algorithm. The two are compared under the same scale, rotation, viewpoint, and illumination changes. Experimental results show that pose estimates refined by Graph-Cut RANSAC are more accurate than those refined by standard RANSAC.

Finally, a visual odometry system is constructed from the improved feature extraction and matching module and the improved pose optimization module. Compared on the datasets against a visual odometry pipeline based on the ORB algorithm, the proposed system's pose estimates are closer to the ground-truth trajectory: the average translation error is reduced by 33.04% and the average rotation error by 18.78%, indicating higher accuracy.
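The descriptor training objective sketched above — a triplet margin loss augmented with a global orthogonal regularization (GOR) term that pushes non-matching descriptors toward the statistics of random unit vectors (mean dot product 0, second moment 1/d) — can be illustrated in a few lines. This is a minimal NumPy sketch under stated assumptions, not the thesis's implementation; the function name, margin, and weight `alpha` are illustrative choices.

```python
import numpy as np

def triplet_gor_loss(anchor, positive, negative, margin=1.0, alpha=1.0):
    """Triplet margin loss plus a global orthogonal regularization term.

    anchor, positive, negative: (N, d) arrays of L2-normalized descriptors.
    """
    # Standard triplet margin loss on Euclidean distances:
    # pull anchor-positive together, push anchor-negative apart.
    d_pos = np.linalg.norm(anchor - positive, axis=1)
    d_neg = np.linalg.norm(anchor - negative, axis=1)
    triplet = np.maximum(0.0, margin + d_pos - d_neg).mean()

    # GOR term: dot products of non-matching descriptor pairs should
    # behave like those of random unit vectors in d dimensions,
    # i.e. mean near 0 and second moment near 1/d ("spread-out" descriptors).
    d = anchor.shape[1]
    dots = np.sum(anchor * negative, axis=1)
    m1 = dots.mean()
    m2 = (dots ** 2).mean()
    gor = m1 ** 2 + max(0.0, m2 - 1.0 / d)

    return triplet + alpha * gor
```

In a real pipeline this loss would be written in an autodiff framework and backpropagated through the shallow descriptor network; the sketch only shows how the regularizer combines with the margin term.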
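The outlier purification step both RANSAC variants perform is the classic hypothesize-and-verify loop: fit a model to a minimal random sample, count the points consistent with it, and keep the hypothesis with the largest consensus set. A toy NumPy sketch of standard RANSAC on 2-D line fitting (standing in for essential-matrix estimation from matched feature points; all names and parameters are illustrative):

```python
import numpy as np

def ransac_line(points, n_iters=200, threshold=0.05, rng=None):
    """Standard RANSAC: fit a line to minimal samples (2 points) and
    return the inlier mask of the hypothesis with the most support.

    points: (N, 2) array of 2-D points, possibly containing outliers.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        # Minimal sample: two distinct points define a candidate line.
        i, j = rng.choice(len(points), size=2, replace=False)
        p, q = points[i], points[j]
        d = q - p
        n = np.array([-d[1], d[0]])  # normal of the line through p and q
        norm = np.linalg.norm(n)
        if norm < 1e-12:
            continue  # degenerate sample
        n = n / norm
        # Verification: point-to-line distances decide the consensus set.
        dist = np.abs((points - p) @ n)
        inliers = dist < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers
```

Graph-Cut RANSAC keeps the same sampling loop but replaces the independent per-point threshold test with a spatially coherent inlier/outlier labeling computed by a graph cut, which is the refinement the thesis credits for the more accurate pose estimates.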
Keywords/Search Tags: deep learning, visual odometry, feature point extraction and matching, pose estimation