Driverless driving is a comprehensive technology that encompasses environmental perception, positioning and navigation, path planning, and decision and control. Environmental perception is one of its most fundamental and important components, and improving its accuracy has significant theoretical and practical engineering value. At present, research on environmental perception focuses mainly on structured road and parking environments and seldom on unstructured environments such as the field environment. When driving in the field, a vehicle must be able to detect a variety of complex obstacles, so accurately perceiving the field environment is a difficult but necessary problem for intelligent vehicles. Using a self-developed driverless vehicle as the research platform, this paper studies several key technologies for environmental perception based on binocular machine vision and 3D lidar. The main research contents are as follows:

1. Feature point extraction for machine vision images. To address the clustering and overlap of feature points caused by a fixed detection threshold, an adaptive local-threshold feature point extraction method is proposed, which computes the threshold automatically from the local pixel brightness and thus improves the accuracy of feature point extraction. Experiments show that the improved method adapts well to brightness changes and improves both computation speed and extraction accuracy; it can detect convex obstacles in the field environment and obstacles in unmapped areas.

2. Feature point pair matching for machine vision images. To address the long running time and low precision of traditional matching methods, an improved ORB-PROSAC method is proposed that raises both the matching accuracy and the running speed of the algorithm. A quality factor is introduced to rank the sample points, which reduces the number of iterations and the time consumed, and the model is estimated from high-quality inliers, avoiding the random uncertainty of the initial sample selection and thereby improving accuracy. Experimental results show that the improved ORB-PROSAC method reduces time consumption and improves matching accuracy (a minimal matching sketch is given after the research contents below).

3. Machine vision positioning. Building on image feature point matching, a feature point tracking method is proposed to localize the vehicle. The method finds the feature points common to four adjacent frames, matches them, computes the rotation and translation between frames, and accumulates these transforms to obtain the continuous motion trajectory of the vehicle. Field experiments and tests on widely used datasets show a small positioning error, greatly improved tracking accuracy, and good adaptability to the environment, demonstrating practical value.

4. Information processing of lidar point clouds. NDT registration is used in the coarse registration stage, and curvature-based ICP registration is used in the fine registration stage. Because the search for corresponding points during registration usually produces mismatches, the PROSAC algorithm is used to eliminate wrong point pairs by setting a count threshold. This overcomes the large residual point distances left by NDT registration and the tendency of point-to-point ICP to fall into local optima.
5. Joint calibration of machine vision images and lidar point clouds. To increase the measurement accuracy of machine vision, the lidar and the binocular camera are calibrated jointly so that the lidar point cloud data can be fused into the machine vision image. The lidar coordinate system is transformed into the image coordinate system by a rigid-body transformation followed by a perspective projection, and the point cloud data in the lidar coordinate system can likewise be inverse-transformed back to three-dimensional coordinates. This establishes a point-to-point correspondence between lidar points and the visual image and realizes spatial-level data association among the measured object, the lidar point cloud, and the visual image (a projection sketch is given below).
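As an illustration of the ORB-PROSAC matching step in item 2, the following is a minimal sketch rather than the implementation used in this work: it relies on OpenCV's built-in ORB detector and its USAC_PROSAC robust estimator (available from OpenCV 4.5 onward), estimates a homography purely as an example model, and the image file names are placeholders. PROSAC draws its first samples from the best-ranked correspondences, so the matches are sorted by descriptor distance before estimation.

```python
# Minimal ORB + PROSAC matching sketch; assumes OpenCV >= 4.5 for cv2.USAC_PROSAC.
# "left.png" and "right.png" are placeholder image names, not files from this work.
import cv2
import numpy as np

img1 = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force Hamming matching with cross-check as a first filter.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des1, des2)

# PROSAC samples the highest-quality correspondences first, so rank the
# matches by descriptor distance (smaller distance = better quality).
matches = sorted(matches, key=lambda m: m.distance)

src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# Robust model estimation; the mask marks the accepted (inlier) point pairs.
H, inlier_mask = cv2.findHomography(src, dst, cv2.USAC_PROSAC, 3.0)
print("matches:", len(matches), "inliers:", int(inlier_mask.sum()))
```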
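To make the joint calibration in item 5 concrete, the sketch below projects lidar points onto the image plane with an assumed extrinsic rigid-body transform (R, t) and intrinsic matrix K. All numerical values are placeholders rather than calibration results from this work, and lens distortion is ignored.

```python
# Sketch of projecting a lidar point cloud onto the camera image after joint
# calibration. R, t and K are placeholder values, not real calibration results.
import numpy as np

# Extrinsics: rigid-body transform from the lidar frame (x forward, y left,
# z up) to the camera frame (x right, y down, z forward).
R = np.array([[0.0, -1.0, 0.0],
              [0.0,  0.0, -1.0],
              [1.0,  0.0, 0.0]])
t = np.array([0.0, -0.1, 0.2])  # metres

# Intrinsics: pinhole camera matrix of the left camera.
K = np.array([[700.0,   0.0, 640.0],
              [  0.0, 700.0, 360.0],
              [  0.0,   0.0,   1.0]])

def project_lidar_to_image(points_lidar):
    """Map Nx3 lidar points to Nx2 pixel coordinates, dropping points behind the camera."""
    points_cam = points_lidar @ R.T + t            # rigid-body transformation
    points_cam = points_cam[points_cam[:, 2] > 0.1]
    pixels = (K @ points_cam.T).T                  # perspective projection
    return pixels[:, :2] / pixels[:, 2:3]

# Example: a few synthetic lidar points roughly 10 m ahead of the sensor.
cloud = np.array([[10.0,  0.0, 0.0],
                  [10.0,  1.0, 0.5],
                  [12.0, -1.0, 0.2]])
print(project_lidar_to_image(cloud))
```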