
Research On Vehicle-road Environment Perception Technology Based On Deep Learning

Posted on: 2022-11-06
Degree: Master
Type: Thesis
Country: China
Candidate: C Q Guo
Full Text: PDF
GTID: 2492306758999629
Subject: Automation Technology
Abstract/Summary:
In recent years, with the rapid development of the economy and the continuous improvement of the road network, the number of vehicles on the road keeps growing, which brings convenience to people's lives but also some adverse effects: roads become more congested, traffic accidents occur more frequently, and air pollution caused by vehicle exhaust grows more serious. Traditional traffic engineering cannot solve these problems well. Automatic driving technology can assist drivers, or even remove the need for a human to control the vehicle, and can therefore effectively reduce traffic accidents caused by human factors. Intelligent connected-vehicle technology enables vehicles to obtain road information in advance, and V2X technology enables vehicles to interact with traffic lights to alleviate congestion; the development of 5G and deep learning has also cleared many obstacles for the deployment of autonomous driving. People's desire for intelligent travel and the progress of artificial intelligence have driven the rapid development of autonomous driving technology.

Vehicle-road environment perception based on target detection is a core technology of autonomous driving. Addressing the long detection times caused by the large amount of image information, the large number of training parameters in existing target detection models, and the large volume of raw point cloud data collected by LiDAR, this paper designs a vehicle-road environment perception system based on deep learning. The system extracts effective image features and uses an SVM classifier for real-time vehicle detection, performs multi-target detection with a deep learning detection network, and uses the detection boxes produced by the image-feature pipeline and the deep learning network to select LiDAR data, realizing collision detection through point cloud fusion. The main work of this paper is summarized as follows:

(1) Aiming at the long detection time caused by the large amount of image information, this paper studies image feature extraction. First, gradient-space features are extracted with different operators, and color-space features are extracted from color channels such as RGB, HSV and HLS. The color-space and gradient-space features are fused, corner features are extracted from the fused image with operators such as Harris and FAST, and the extracted corners are introduced into an improved HOG algorithm. Finally, vehicle detection is performed with an SVM classifier on the extracted HOG features and the other effective image features (a minimal sketch of this pipeline is given below).
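As an illustration of step (1), the following is a minimal sketch of a hand-crafted-feature vehicle classifier. It assumes OpenCV, scikit-learn and NumPy as stand-in libraries; the thesis does not name its implementation, the 64x64 window size, HOG parameters and 16-bin color histograms are illustrative choices, and the corner-guided improvement to HOG described in the thesis is not reproduced here.

```python
# Sketch of step (1): fused color/gradient features + SVM vehicle classifier.
# Assumptions: OpenCV, scikit-learn, NumPy; 64x64 BGR windows; all parameter
# values are illustrative and not taken from the thesis.
import cv2
import numpy as np
from sklearn.svm import LinearSVC

# HOG descriptor over 64x64 windows (gradient-space features).
hog = cv2.HOGDescriptor((64, 64), (16, 16), (8, 8), (8, 8), 9)

def window_features(bgr_window):
    """Concatenate HOG (gradient-space) and HSV-histogram (color-space) features."""
    gray = cv2.cvtColor(bgr_window, cv2.COLOR_BGR2GRAY)
    hog_feat = hog.compute(gray).ravel()
    hsv = cv2.cvtColor(bgr_window, cv2.COLOR_BGR2HSV)
    color_feat = np.concatenate(
        [np.histogram(hsv[:, :, c], bins=16, range=(0, 256))[0] for c in range(3)]
    ).astype(np.float32)
    return np.concatenate([hog_feat, color_feat])

def train_vehicle_classifier(vehicle_windows, background_windows):
    """Fit a linear SVM on labelled 64x64 image windows (lists of BGR arrays)."""
    X = np.array([window_features(w) for w in vehicle_windows + background_windows])
    y = np.array([1] * len(vehicle_windows) + [0] * len(background_windows))
    return LinearSVC(C=1.0).fit(X, y)

def is_vehicle(clf, bgr_window):
    """Classify a single 64x64 window as vehicle (True) or background (False)."""
    return clf.predict(window_features(bgr_window)[None, :])[0] == 1
```

In practice the trained classifier would be applied to sliding windows or region proposals over each frame to obtain the vehicle detection boxes.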
(2) Aiming at the complexity of existing target detection models, this paper proposes a lightweight YOLOv4-MobileNetV3 detection algorithm based on the existing YOLO family, which greatly reduces the number of training parameters. First, a PyTorch-YOLOv4 network is built and trained, and its detection results are output. The backbone feature extraction network of PyTorch-YOLOv4 is then replaced with MobileNetV3 to obtain the new PyTorch-YOLOv4-MobileNetV3 network, which is trained and evaluated in the same way. Comparing the two networks on the data set used in this paper, the mAP of the two models differs by 0.25%, while the overall number of parameters of the PyTorch-YOLOv4-MobileNetV3 model is 41.3% lower than that of PyTorch-YOLOv4 (the backbone replacement is sketched below).

(3) In view of the large amount of LiDAR data generated while a vehicle is driving, this paper obtains LiDAR data from the KITTI data set and fuses it with the camera images. According to the prediction boxes output by the SVM detector and the PyTorch-YOLOv4-MobileNetV3 network, a region of interest (ROI) is determined to discard invalid LiDAR points. Rotation-invariant features of the image inside the ROI are extracted and matched with the SIFT and SURF operators, and a constant-acceleration motion model is used to estimate the vehicle collision time (also sketched below).

The urban road environment is complex, the environment perception technology used in autonomous driving is not yet mature, and unmanned driving is still difficult to deploy on a large scale. Target detection is a key component of autonomous driving perception, and vehicles and pedestrians are the two most common targets on the road. This paper mainly studies the detection of vehicles and traffic signals based on deep learning and improves an existing deep model to reduce its training parameters; the study therefore has certain theoretical and practical significance.
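To illustrate the backbone replacement described in (2), the sketch below wires a generic detector around an interchangeable feature extractor and counts its trainable parameters. torchvision's MobileNetV3-Large trunk stands in for the thesis's MobileNetV3 backbone, and the TinyHead module is a hypothetical placeholder rather than the YOLOv4 head actually used.

```python
# Sketch of step (2): swapping a detector's backbone for MobileNetV3 and
# counting trainable parameters.  torchvision's mobilenet_v3_large stands in
# for the thesis's backbone; TinyHead is a hypothetical placeholder, not the
# YOLOv4 head used in the thesis.
import torch.nn as nn
from torchvision.models import mobilenet_v3_large

class TinyHead(nn.Module):
    """Minimal stand-in for a YOLO-style head: one 1x1 conv predicting
    (4 box coords + 1 objectness + num_classes) per anchor."""
    def __init__(self, in_channels, num_classes=3, num_anchors=3):
        super().__init__()
        self.pred = nn.Conv2d(in_channels, num_anchors * (5 + num_classes), 1)

    def forward(self, feat):
        return self.pred(feat)

class Detector(nn.Module):
    """Generic detector = interchangeable backbone + detection head."""
    def __init__(self, backbone, backbone_out_channels, num_classes=3):
        super().__init__()
        self.backbone = backbone
        self.head = TinyHead(backbone_out_channels, num_classes)

    def forward(self, x):
        return self.head(self.backbone(x))

def count_params(model):
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

# MobileNetV3-Large's convolutional trunk ends with 960 channels.
mnv3_backbone = mobilenet_v3_large(weights=None).features
light_detector = Detector(mnv3_backbone, backbone_out_channels=960)
print(f"MobileNetV3-backboned detector: {count_params(light_detector):,} parameters")
# Applying the same count to the original PyTorch-YOLOv4 model would reproduce
# the thesis's comparison (a reported 41.3% parameter reduction at a 0.25% mAP gap).
```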
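The collision-time estimate in (3) uses a constant-acceleration motion model. The sketch below solves for the time at which the relative distance to the leading target reaches zero; in the thesis the distance, relative velocity and relative acceleration would be derived from the LiDAR points and matched ROI features, whereas the function and example values here are an illustrative reconstruction, not the author's code.

```python
# Sketch of step (3): time-to-collision (TTC) under a constant-acceleration model.
# Relative gap: d(t) = d0 + v_rel*t + 0.5*a_rel*t^2 ; TTC is the smallest positive root.
# Inputs would come from the LiDAR points and matched ROI features; values are illustrative.
import math

def time_to_collision(d0, v_rel, a_rel, eps=1e-9):
    """Smallest positive t with d0 + v_rel*t + 0.5*a_rel*t^2 == 0, or math.inf
    if the gap never closes.
    d0    : current relative distance (m), d0 > 0
    v_rel : relative velocity (m/s), negative when closing
    a_rel : relative acceleration (m/s^2), assumed constant
    """
    if abs(a_rel) < eps:                        # uniform-velocity fallback
        return -d0 / v_rel if v_rel < -eps else math.inf
    disc = v_rel * v_rel - 2.0 * a_rel * d0     # discriminant of 0.5*a*t^2 + v*t + d0 = 0
    if disc < 0:
        return math.inf                         # the gap never reaches zero
    sqrt_disc = math.sqrt(disc)
    roots = [(-v_rel - sqrt_disc) / a_rel, (-v_rel + sqrt_disc) / a_rel]
    positive = [t for t in roots if t > eps]
    return min(positive) if positive else math.inf

# Example: 20 m gap, closing at 5 m/s, with the closing speed growing by 1 m/s^2.
print(f"TTC = {time_to_collision(20.0, -5.0, -1.0):.2f} s")   # about 3.06 s
```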
Keywords/Search Tags:Deep learning, Automatic driving, Feature extraction, Target detection, Point cloud filtering, Feature matching