In recent years, traffic accidents have occurred frequently. To help ensure travel safety, autonomous driving has become one of the research hotspots in the field of road traffic safety, driven by advances in deep learning and computer technologies. Environment perception and 3D reconstruction of driving scenes are both essential to the safety of autonomous driving, so this paper focuses on these two tasks and includes the following research.

We first implement a fusion model for 2D object detection and (road) semantic segmentation (referred to below as the fusion model). The fusion model contains three submodules: a feature extraction module (a residual network), a 2D object detection module, and a (road) semantic segmentation module. An alternating training strategy is proposed for the fusion model, so that the two sub-tasks of 2D object detection and semantic segmentation can borrow features from each other, improving the robustness of the model. Experiments show that the alternating training strategy learns the data distribution using low-cost 2D object detection data as an auxiliary source, reducing the need for semantic segmentation data with its high labeling cost; it can therefore be treated as a data augmentation method. Detection data can be recruited to improve segmentation accuracy, and segmentation data in turn raise the confidence of the detection predictions. The fusion model is verified on the KITTI autonomous driving dataset.

The paper then implements a 3D object detection model based on deep learning. The model first preprocesses the laser point cloud into BEV (Bird's Eye View) images and then extracts a feature map with a convolutional neural network. The feature map is fed into a revised YOLO detection layer, which predicts each object's class, position in the BEV, and pose, where the pose is the angle between the object's orientation and the X-axis of the camera coordinate system. The detection results are then mapped back to three-dimensional space to obtain the 3D object detections. Compared with other models, ours improves detection speed while maintaining accuracy.

Finally, a driving-scene 3D reconstruction platform is built with a data-driven method. The object detection model is used to obtain the positions and poses of the objects in the driving environment; prefabs are then read from a 3D model gallery and displayed in the 3D virtual scene, while the road area is rendered according to the road semantic segmentation result, thereby achieving the 3D reconstruction of the driving scene. This solution can quickly reproduce real driving scenarios and can be used for the virtual simulation of autonomous vehicles, which is of great significance for reducing the cost of road tests and ensuring driving safety.
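The alternating training strategy described above can be sketched roughly as follows: a shared backbone with two task heads, updated by alternating one batch from the detection dataset with one batch from the segmentation dataset, so both tasks refine the shared features. All function and module names here are illustrative assumptions, not the thesis's actual code.

```python
def alternate_train(backbone, det_head, seg_head,
                    det_batches, seg_batches, step_fn, epochs=1):
    """Alternate one detection step with one segmentation step.

    step_fn(modules, batch, task) is assumed to run the forward/backward
    pass and an optimizer update for the given task ('det' or 'seg').
    Returns the list of per-step results in the order they were run.
    """
    history = []
    for _ in range(epochs):
        # Pair batches one-to-one; alternation stops when the shorter
        # dataset is exhausted for the epoch.
        for det_b, seg_b in zip(det_batches, seg_batches):
            # Detection step: the shared backbone is updated through the
            # detection head, so cheap detection labels also shape the
            # features the segmentation head will reuse.
            history.append(step_fn((backbone, det_head), det_b, "det"))
            # Segmentation step: the same backbone is updated again,
            # letting segmentation borrow detection-trained features.
            history.append(step_fn((backbone, seg_head), seg_b, "seg"))
    return history
```

Under this sketch, the auxiliary-data effect falls out of the shared backbone: every detection step is effectively extra training signal for the segmentation path, which is why the strategy can reduce the amount of expensive segmentation annotation needed.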