With the continuous development of the modern vehicle industry, vehicle intelligence is advancing rapidly and new smart cars are being launched continuously. Assisted driving and even autonomous driving are increasingly becoming core capabilities of smart cars, and are the fundamental means by which smart cars can avoid traffic accidents. As a prerequisite for intelligent vehicles to realize autonomous driving, environment perception is the most basic of the three core modules of intelligent driving: environment perception, path planning, and decision control. At the same time, whether the environment perception module can accurately obtain information about the environment directly determines the safety of the intelligent driving function, and is the key to whether smart cars equipped with such a function can actually participate in traffic. In this paper, camera and LiDAR sensors are used to detect and identify targets in the environment. Both sensors have inherent limitations imposed by their physical hardware; in particular, the low quality of images acquired by the camera in complex environments such as rain and night reduces target recognition accuracy. By designing a decision-level fusion method for the two sensors that adapts to different environments, their inherent deficiencies are compensated and the accuracy of information acquisition in different environments is improved. The main research of this paper is as follows:

(1) Design of an image-based target detection algorithm. An attention-based target detection algorithm for different environments is designed. First, an image enhancement algorithm improves the quality of low-quality images acquired in complex environments such as rain and night. Then, the target detection algorithm is designed on the basis of YOLOv5, covering the backbone feature extraction network, the enhanced feature extraction network, the attention mechanism, and the loss function. Finally, anchor box sizes suited to the detection targets are obtained by clustering the annotated boxes of the nuScenes dataset, and the detector is trained and tested on nuScenes.

(2) Design of a target detection algorithm based on the LiDAR point cloud. First, region-of-interest extraction, point cloud filtering, and ground point removal are applied to pre-process the raw point cloud, removing a large number of useless and noise points. Second, the point cloud detection network is designed on the basis of PV-RCNN, covering the overall network structure, the point cloud feature extraction and region proposal network, region-of-interest grid pooling, the output detection layer, and the loss function. Finally, the network is trained and tested on the nuScenes dataset.

(3) Design of a multi-sensor decision-level fusion algorithm. First, the coordinate transformation between the sensors is obtained through joint calibration of the camera and LiDAR, and the 3D detection boxes produced by the LiDAR-based detector are projected onto the image and converted into 2D detection boxes. Next, the KM (Kuhn-Munkres) matching algorithm optimally matches the detection results of the two sensors to targets. Then, depending on whether the environment is good or complex, different confidence fusion correction methods are adopted to obtain the final target confidence. Finally, for each matched target, the final detection box is fused and output according to the intersection-over-union of the two boxes produced by the camera and LiDAR detectors.
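The matching step in (3) pairs camera and LiDAR detections by the overlap of their 2D boxes. A minimal sketch of this idea, using a brute-force optimal assignment in place of a full Kuhn-Munkres (KM) implementation (the `(x1, y1, x2, y2)` box format and the IoU threshold are illustrative assumptions, not the thesis's actual parameters):

```python
# Sketch: match camera and LiDAR 2D detection boxes by maximizing total IoU.
# Brute-force assignment stands in for the KM algorithm; equivalent for small n.
from itertools import permutations

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def match_boxes(cam_boxes, lidar_boxes, iou_thresh=0.3):
    """Return (cam_index, lidar_index) pairs of the IoU-optimal assignment.

    Assumes len(lidar_boxes) >= len(cam_boxes) for brevity; a full KM
    implementation handles arbitrary sizes in polynomial time.
    """
    n = len(cam_boxes)
    best_pairs, best_score = [], -1.0
    for perm in permutations(range(len(lidar_boxes)), n):
        score = sum(iou(cam_boxes[i], lidar_boxes[j])
                    for i, j in enumerate(perm))
        if score > best_score:
            best_score, best_pairs = score, list(enumerate(perm))
    # Discard matches whose overlap is too small to be the same target.
    return [(i, j) for i, j in best_pairs
            if iou(cam_boxes[i], lidar_boxes[j]) >= iou_thresh]
```

After matching, the thesis fuses the confidences and boxes of each pair; here the threshold simply rejects pairings that the optimal assignment produced but that do not plausibly cover the same target.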