Autonomous driving systems are an important research area in intelligent vehicles; they will change the way people travel and improve both the traffic environment and driver comfort and safety. As a key component of an autonomous driving system, environment perception generally adopts a multi-sensor configuration of millimeter-wave radar, cameras, lidar, and other sensors to understand the traffic environment. Environment perception based on machine vision has become a research hotspot. Within vision-based environment perception, how to use low-cost vision sensors to obtain the class, position, and speed of the targets in front of the vehicle in real time and with high accuracy is an important research topic. To address this problem, this thesis applies a convolutional neural network and spatio-temporal feature matching to the images collected by a binocular camera, so as to realize real-time detection and motion-state estimation of the targets in front of the vehicle. The main contents of this thesis are as follows:

To implement real-time target detection on an embedded device, this thesis designs a target detection network based on MobileNet-YOLOv3, which uses MobileNet as the backbone network combined with the YOLOv3 object detection algorithm. The detection model is trained under the Caffe framework, then optimized and deployed to the embedded device with TensorRT, so as to realize real-time detection of targets in front of the vehicle.

This thesis discusses binocular vision and its ranging principle, and proposes a binocular camera extrinsic calibration method based on ArUco markers to address the complexity of the binocular extrinsic calibration process and the difficulty of extending it to multi-camera systems. The calibration result is used to calculate the three-dimensional coordinates of a point
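The ranging principle mentioned above can be illustrated with a minimal sketch of rectified-stereo triangulation. The camera parameters below (focal length, baseline, principal point) are illustrative placeholders, not the thesis's calibration results:

```python
# Minimal sketch of pinhole stereo triangulation for a rectified binocular
# camera. f is the focal length in pixels, b the baseline in metres, and
# (cx, cy) the principal point; all values here are assumed for illustration.

def triangulate(u_left, v, disparity, f=700.0, b=0.12, cx=320.0, cy=240.0):
    """Recover the 3D coordinates, in the left-camera frame, of a point
    observed at pixel (u_left, v) with the given disparity (in pixels)."""
    if disparity <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    z = f * b / disparity          # depth from the classic relation Z = f*B/d
    x = (u_left - cx) * z / f      # back-project the pixel offset to metres
    y = (v - cy) * z / f
    return x, y, z
```

For example, a point at the principal point with a 7-pixel disparity lies on the optical axis at depth f·b/d = 700 × 0.12 / 7 = 12 m.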
in space. This thesis also designs a scene flow computation method based on ORB features. The spatial coordinates and velocities of the feature points in the scene in front of the vehicle are calculated by matching, in both space and time, the ORB features extracted from the left and right images. Combining the target detection results with the DBSCAN clustering algorithm, the feature scene flow in non-target regions is eliminated to obtain the spatial coordinates and speed of each target's cluster center. An estimate of the target motion state is then obtained with Kalman filtering.

To verify the real-time performance and accuracy of the algorithm, it was tested on the KITTI dataset and on a vehicle experiment platform. The experimental results show that the proposed algorithm runs at 32 FPS on the NVIDIA Jetson Xavier, the correct detection rate for targets within 40 meters exceeds 80%, and the localization and speed-measurement errors are both within 10%.
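The motion-state estimation step described above can be sketched as a constant-velocity Kalman filter over one coordinate of a target's cluster center. This is a hedged stand-in for the thesis's estimator, not its actual implementation; the noise covariances Q and R are assumed values, and the frame period follows the reported 32 FPS:

```python
import numpy as np

# Constant-velocity Kalman filter for one coordinate of a target's cluster
# center. State = [position, speed]; only position is measured each frame.

dt = 1.0 / 32.0                        # frame period at ~32 FPS
F = np.array([[1.0, dt], [0.0, 1.0]])  # state transition model
H = np.array([[1.0, 0.0]])             # measurement model: position only
Q = np.diag([1e-4, 1e-2])              # process noise covariance (assumed)
R = np.array([[0.05]])                 # measurement noise covariance (assumed)

def kalman_step(x, P, z):
    """One predict/update cycle: x is the state, P its covariance, z a measurement."""
    x = F @ x                          # predict the state forward one frame
    P = F @ P @ F.T + Q                # predict the covariance
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ (z - H @ x)            # correct with the measurement residual
    P = (np.eye(2) - K @ H) @ P        # update the covariance
    return x, P
```

Feeding successive cluster-center positions through `kalman_step` yields a smoothed position and an implicit speed estimate in the second state component, which is the role Kalman filtering plays in the pipeline above.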