Environmental perception technology is an important foundation for the development of intelligent vehicles. The function of the perception module is to obtain detailed information about surrounding objects, including category, distance, and so on. Among the commonly used sensors, the camera is good at recognizing object categories, while the lidar readily measures 3D position but provides limited category information, so combining the two achieves complementary advantages. The object information produced by the perception module is the input of the decision and planning module, so the perception algorithm must be both accurate and fast. At present, fusion perception algorithms often suffer from poor real-time performance and are difficult to deploy on embedded devices, so improving the real-time performance and accuracy of fusion perception is worth studying. The accuracy of fusion perception depends on reliable sensor extrinsic parameters, but existing lidar-camera extrinsic calibration algorithms rely on an accurate initial estimate of the extrinsics, so their calibration results are not accurate enough. The real-time performance of fusion perception can be improved by accelerating object detection. To improve both accuracy and real-time performance, this paper proposes a lidar-camera fusion object detection and tracking method, which consists of two parts: joint lidar-camera calibration, and fusion object detection and tracking. The main research contents of this paper are as follows:

(1) The extrinsic calibration scheme is improved and redesigned with the aim of increasing the accuracy of the extrinsic parameters. The original scheme solves a nonlinear optimization based on the reprojection error of 2D-3D point pairs, which depends heavily on the initial extrinsic estimate, and this initial value is difficult to measure or compute. To solve this problem, this paper redesigns the calibration scheme: 2D points are first converted into 3D points, and the extrinsics are then solved linearly with ICP on 3D-3D point pairs. This scheme does not require a good initial value, because when the matching point pairs are known the local optimum is also the global optimum, which avoids estimating or measuring the initial extrinsics. To realize the conversion from 2D points to 3D points, the calibration-board coordinate system is introduced as an intermediate coordinate system, and the PnP algorithm, combined with the known size of the calibration board, is used to compute the transformation. To overcome the drawback of manually extracting point-cloud corner points, the corner points are obtained by extracting the edge points of scan lines, fitting lines to them, and taking the line intersections, which improves the degree of automation of the algorithm. Comparison experiments show that the accuracy of both the extrinsic calibration and the data fusion is improved.
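The closed-form step of this calibration scheme can be illustrated as follows. The sketch below is an illustration rather than the thesis implementation: it solves the rigid transform between matched 3D-3D point pairs (for example, calibration-board corners expressed in the camera frame via PnP and in the lidar frame via scan-line intersection fitting) using the SVD-based Kabsch solution, which is globally optimal when the correspondences are known; the function name and array shapes are assumptions.

```python
# Minimal sketch: closed-form 3D-3D alignment with known correspondences.
import numpy as np

def align_3d_3d(src, dst):
    """Solve R, t minimizing ||R @ src_i + t - dst_i|| over matched point pairs.

    src, dst: (N, 3) arrays of corresponding 3D points (e.g. board corners in
    the camera frame and in the lidar frame). No initial guess is needed.
    """
    src_c = src - src.mean(axis=0)          # center both point sets
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                     # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
    R = Vt.T @ D @ U.T                      # optimal rotation (Kabsch solution)
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```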
(2) An object detection and tracking method based on lidar and camera is designed, which offers high real-time performance and is easy to deploy on embedded devices. To improve real-time performance, object detection is accelerated. Image detection is based on the YOLOv3 algorithm. Because frame-by-frame detection wastes time when adjacent frames are highly similar, the KCF algorithm is introduced so that a certain number of frames are handled by local tracking instead of detection, which reduces the processing time (sketched below). Point cloud detection is based on Euclidean clustering. Because traversing the point cloud for neighbor search during clustering is too costly, a KD-tree structure with built-in neighbor queries is introduced to manage the point cloud and accelerate the search. To realize time synchronization, given the difficulty of hardware synchronization, an adaptive approximate time synchronization algorithm is adopted that pairs data captured at the same moment by the two sensors by minimizing the time difference with respect to the most recent message. To prevent many-to-one matching, the Hungarian algorithm with a center-point-distance cost is used to achieve one-to-one matching (also sketched below) and to fuse, at the result level, the perception outputs that correspond to the same object. After detection, target tracking is carried out in two steps, multi-frame multi-target matching based on a key-point matching count followed by Kalman filtering, so as to obtain continuous motion trajectories of the multiple targets around the vehicle.

(3) The algorithms are verified on real-vehicle data. The results show that the object detection results are correct and complete, the data fusion accuracy is good, each improved component performs better than its baseline, the Kalman-filtered trajectories in the tracking stage are close to the ground truth, and the trajectories of multiple objects are clearly separated and stable. The average longitudinal position error is 2.08%, and the average processing time per frame is 84 ms. Compared with other existing methods, the proposed method maintains reasonable accuracy, greatly improves the real-time performance of fusion perception, and is convenient to deploy on embedded devices, which verifies its effectiveness.
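For the detection-tracking alternation described in (2), a minimal sketch is given below. It assumes opencv-contrib-python (on some OpenCV 4 builds the factory is cv2.legacy.TrackerKCF_create); `detect_yolo` and the interval `DETECT_EVERY` are placeholders for illustration, not the thesis code.

```python
# Sketch: run the expensive detector only every DETECT_EVERY frames and bridge
# the intermediate frames with per-object KCF trackers.
import cv2

DETECT_EVERY = 5  # assumed interval; the thesis tunes how many frames are tracked

def detect_and_track(frames, detect_yolo):
    """frames: iterable of BGR images; detect_yolo: placeholder YOLOv3 detector
    returning a list of (x, y, w, h) boxes. Yields (frame_index, boxes)."""
    trackers = []
    for idx, frame in enumerate(frames):
        if idx % DETECT_EVERY == 0:
            boxes = detect_yolo(frame)                 # full detection frame
            trackers = []
            for box in boxes:                          # re-seed one KCF tracker per box
                trk = cv2.TrackerKCF_create()          # cv2.legacy.TrackerKCF_create on some builds
                trk.init(frame, tuple(int(v) for v in box))
                trackers.append(trk)
        else:
            boxes = []                                 # cheap local tracking instead of detection
            for trk in trackers:
                ok, box = trk.update(frame)
                if ok:
                    boxes.append(box)
        yield idx, boxes
```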
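The one-to-one camera/lidar association in (2) can likewise be sketched with the Hungarian algorithm on a center-point-distance cost; the function name, array shapes, and gating threshold here are illustrative assumptions rather than the thesis implementation.

```python
# Sketch: one-to-one association of camera and lidar detections via the
# Hungarian algorithm, using the distance between detection centers as cost.
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_detections(cam_centers, lidar_centers, max_dist=50.0):
    """cam_centers: (N, 2) image-box centers; lidar_centers: (M, 2) cluster
    centers projected into the image with the calibrated extrinsics.
    Returns index pairs (i, j) of detections judged to be the same object."""
    cost = np.linalg.norm(cam_centers[:, None, :] - lidar_centers[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)           # globally optimal one-to-one matching
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] < max_dist]
```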