With the rapid development of intelligent transportation technology, the Intelligent Vehicle-Infrastructure System (IVIS) has gradually become an effective approach to solving traffic problems under new conditions. Within the IVIS, accurate analysis of traffic-environment information is essential for realizing the highly collaborative "people-vehicle-road-cloud" functions, and it is particularly important to accurately perceive the spatial position, quantity, and other attributes of vehicles, the main participants in traffic. Existing roadside vehicle detection technologies mainly rely on induction coils, vision sensors, and other sensing techniques, which struggle to simultaneously meet the requirements of high accuracy, environmental adaptability, mobility, real-time operation, and precise spatial information. With the development of artificial intelligence, vehicle recognition algorithms based on deep convolutional neural networks, which use sensors such as vision sensors and Lidar as information sources, have gradually become the mainstream methods in this field and are widely applied to environmental perception, where they offer substantial performance advantages. From the perspective of environmental perception, roadside traffic scenes are complex, containing multi-scale vehicle targets and variable illumination, so it is difficult to recognize vehicles accurately with only a single type of sensor. Therefore, how to improve the recognition accuracy of multi-scale vehicle targets through multi-sensor information fusion and obtain accurate vehicle position information while maintaining real-time performance is a challenging problem in urgent need of a solution. To address these problems, this paper studies vehicle recognition technology for intelligent roadside terminals based on Lidar and vision sensors. On this basis, the work is organized into the following three parts.

Part I: Research and design of a sensor calibration method for the roadside terminal environmental perception system based on
plane features. This method takes the plane features of the calibration board in the camera coordinate system and the Lidar coordinate system as constraints. First, the plane parameters are extracted, and the constraint relationship is used to linearly optimize the distance and angle difference functions between the two planes, yielding an initial value for the joint calibration parameters. Then, taking the distance between the point-cloud plane and the image plane as the objective function, the Levenberg-Marquardt (LM) algorithm is applied iteratively for nonlinear optimization to obtain the rotation matrix and translation vector. Experimental verification shows that the designed calibration method meets the accuracy requirements of joint inter-sensor calibration and lays the foundation for the subsequent acquisition of accurate vehicle position information.

Part II: Research and design of a roadside vehicle recognition method based on vision sensors (YOLO-AF). This method aims to improve the recognition accuracy of multi-scale vehicle targets. It improves the YOLOv3 network in two ways: introducing a residual attention module and adding a feature-selective anchor-free module. The residual attention module highlights the effective information in the multi-scale fusion process while suppressing invalid interference; the feature-selective anchor-free module compensates for the shortcomings of the anchor-box mechanism and further improves recognition of small-scale vehicle targets. Experiments show that the YOLO-AF network significantly improves vehicle recognition accuracy, especially for distant vehicles.

Part III: Research and design of a roadside vehicle recognition method based on the fusion of vision and Lidar sensors (CBYOLO-AF). This method aims to improve robustness to illumination variation by deeply integrating the Lidar and the vision sensor at the data and feature levels. On the basis of
the YOLO-AF network model, we add a sub-branch feature-extraction network that takes the fused point-cloud and image data as input, and use an adjacent high-level feature fusion method to enhance the features of the main network with the sub-network's radar data and higher-order image color features. This method effectively improves vehicle recognition accuracy under a variety of lighting conditions while retaining good real-time performance. Experiments show that the roadside vehicle recognition method based on the fusion of Lidar and vision sensors designed in this paper enhances data reliability and environmental adaptability, and can further improve the performance of the intelligent roadside terminal environment perception system.
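As a concrete illustration of the nonlinear refinement step in Part I, the following is a minimal sketch (not the thesis implementation; all names and the solver choice are assumptions): given calibration-board point clouds in the Lidar frame and the corresponding board plane parameters in the camera frame, it refines an initial extrinsic estimate by minimizing signed point-to-plane distances with a Levenberg-Marquardt solver.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def point_to_plane_residuals(params, observations):
    """Signed point-to-plane distances over all board poses.

    params: [rx, ry, rz, tx, ty, tz] -- rotation vector and translation
    observations: list of (lidar_points (N,3), plane_normal (3,), plane_d),
                  where the board plane satisfies n.p + d = 0 in the camera frame.
    """
    R = Rotation.from_rotvec(params[:3]).as_matrix()
    t = params[3:]
    residuals = []
    for pts, n, d in observations:
        cam_pts = pts @ R.T + t            # transform Lidar points into the camera frame
        residuals.append(cam_pts @ n + d)  # distance of each point to the board plane
    return np.concatenate(residuals)

def refine_extrinsics(init_params, observations):
    # Levenberg-Marquardt refinement of the joint calibration parameters
    return least_squares(point_to_plane_residuals, init_params,
                         method="lm", args=(observations,)).x
```

With three or more non-parallel board poses the six extrinsic parameters are fully constrained; in practice the initial value would come from the linear optimization step described in Part I.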
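The residual attention idea in Part II can likewise be illustrated with a minimal channel-attention sketch (purely illustrative, with no learned parameters; it is not the module used in YOLO-AF): per-channel weights obtained by global average pooling gate the feature map, and a skip connection adds the input back, so attention acts as a refinement rather than a replacement of the features.

```python
import numpy as np

def channel_attention(feat):
    # feat: (C, H, W) feature map. Squeeze each channel by global average
    # pooling, then gate it with a sigmoid weight (illustrative only).
    weights = 1.0 / (1.0 + np.exp(-feat.mean(axis=(1, 2))))
    return feat * weights[:, None, None]

def residual_attention(feat):
    # Residual connection: attended features are added to the input, so the
    # module re-weights channels without discarding the original information.
    return feat + channel_attention(feat)
```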