
Research On Visual Environmental Perception For Complex Environmental Autonomous Driving

Posted on: 2024-02-02    Degree: Doctor    Type: Dissertation
Country: China    Candidate: Q Xu    Full Text: PDF
GTID: 1522307064973699    Subject: Computer application technology
Abstract/Summary:
With the development of high-performance sensors and computing hardware, advances in artificial intelligence, and the new "carbon neutrality" requirements for environmental protection, electric vehicles equipped with autonomous driving functions are becoming the direction of future vehicle development. The key technologies of autonomous vehicles include environment perception, mapping and localization, decision-making and planning, and control and execution. Among them, environment perception is the foundation of the other technologies and the key to realizing autonomous driving. Within this field, visual environment perception achieves a good balance between accuracy and cost, making it one of the most active research directions. Vision-based methods typically use monocular or multi-camera setups to collect environmental images and apply computer vision techniques to analyze them, completing perception tasks such as object recognition, object detection, object tracking, semantic segmentation, three-dimensional reconstruction, and distance measurement. With the modernization of the military and people's growing desire to explore nature, the demand for autonomous-driving environment perception in complex environments is becoming increasingly urgent. However, current research on visual environment perception for autonomous driving still focuses mainly on typical structured scenes such as urban roads, while research on unstructured scenes such as off-road terrain remains scarce, and algorithms developed for structured scenes suffer from reduced accuracy and robustness when applied to unstructured scenes. Based on specific laboratory projects, this dissertation targets the requirements of autonomous driving for intelligent unmanned vehicles on complex off-road terrain. In view of the insufficient accuracy and robustness of existing algorithms, visual environment perception for autonomous driving in complex environments is studied in depth. The main innovations of this dissertation are as follows:

(1) An improved region proposal network, the Boosted Region Proposal Network (BRPN), is introduced to address the limited exploration space of the standard Region Proposal Network (RPN). First, a novel enhanced pooling network is designed to improve BRPN's adaptability to objects of different shapes. Second, the BRPN loss function is improved to strengthen learning from negative samples and reduce missed detections. In addition, the grey wolf optimization (GWO) algorithm is employed to optimize the parameters of the improved loss function, further enhancing performance. Finally, a new GA-SVM classifier is proposed to strengthen the model's classification ability. The effectiveness of BRPN is demonstrated on the PASCAL VOC 2007, PASCAL VOC 2012, and KITTI datasets.

(2) A novel domain-adaptive detection method, the Skip-Layer Network with Optimization (SLNO), is proposed. First, SLNO adopts a multi-level feature fusion method that feeds convolutional features from different layers into the domain classifier, enhancing its feature sampling capability. Second, a multi-level domain adaptation approach aligns both image-level and instance-level distributions, applying a domain classifier to each to improve the model's domain-adaptive capability. Finally, the cuckoo search (CS) optimization method is employed to automatically search for optimal SLNO coefficients, further improving domain alignment. SLNO is evaluated on the Cityscapes, Foggy Cityscapes, SIM10K, and KITTI datasets, and the experimental results demonstrate its effectiveness for domain-adaptive detection.
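To make the multi-level alignment idea in (2) more concrete, the following is a minimal PyTorch-style sketch of image-level and instance-level domain classifiers trained through a gradient reversal layer, a common way to realize this kind of adversarial alignment. It is an illustration under assumed shapes and module names (GradReverse, DomainClassifier, domain_losses), not the dissertation's implementation, and it omits the multi-layer feature fusion and the cuckoo-search weighting of the loss terms.

import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    # Identity in the forward pass; reverses (and scales) gradients in the backward pass.
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lam * grad_out, None

class DomainClassifier(nn.Module):
    # Predicts source (0) vs. target (1) domain from a feature vector.
    def __init__(self, in_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, 1))
    def forward(self, feat, lam=1.0):
        return self.net(GradReverse.apply(feat, lam))

def domain_losses(img_feat, inst_feats, img_clf, inst_clf, domain_label, lam=1.0):
    # img_feat: backbone map (B, C, H, W); inst_feats: per-proposal RoI features (N, D).
    bce = nn.BCEWithLogitsLoss()
    img_vec = img_feat.mean(dim=(2, 3))                 # image level: global average pooling
    img_tgt = torch.full((img_vec.size(0), 1), float(domain_label), device=img_vec.device)
    inst_tgt = torch.full((inst_feats.size(0), 1), float(domain_label), device=inst_feats.device)
    loss_img = bce(img_clf(img_vec, lam), img_tgt)      # align the image-level distribution
    loss_inst = bce(inst_clf(inst_feats, lam), inst_tgt)  # align the instance-level distribution
    return loss_img + loss_inst

During training, source-domain and target-domain batches would both pass through this loss with domain_label 0 and 1 respectively, while only labeled source batches contribute to the detection loss.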
(3) A real-time image semantic segmentation framework based on deep learning, RT-SegNet (Real-Time Segmentation Network), is presented. The framework consists of three stages: encoding, dimension reduction, and decoding. First, in the encoding stage, a feature map skip superposition (FMSS) method is introduced to enhance the extraction of image features. Second, a dimension reduction (DR) module is designed, which connects the encoder to the decoder layer by layer to improve decoder performance. Finally, a lightweight decoder (LD) structure is proposed for the decoding stage, effectively reducing the number of convolutional layers and speeding up model training and inference (a minimal sketch of this stage layout is given below). RT-SegNet outperforms the original SegNet on public datasets including CamVid, KITTI, Cityscapes, and SUN RGB-D, as well as on a self-annotated dataset, JLUData.

(4) To meet the demands of autonomous driving in complex environments, the improved algorithms above are integrated into a modular visual environment perception method, VEP-CE (Visual Environment Perception for Complex Environments). The method has the following characteristics. First, it uses the BRPN algorithm for candidate box selection, which expands the target search range and improves detection accuracy. Second, it employs the SLNO method for domain-adaptive learning and training tailored to complex environments, enhancing the algorithm's adaptability to such scenes. Finally, VEP-CE combines the real-time semantic segmentation results from RT-SegNet to determine the drivability of the current environment in real time (see the pipeline sketch below). The algorithm was tested on a custom-built mobile platform in real-world environments, and the experimental results show that VEP-CE provides effective support for autonomous driving in complex environments, especially in situations that require real-time perception and decision-making.

In conclusion, the methods proposed in this dissertation provide effective support for applying autonomous driving technology in complex environments. The research is of significant value for advancing autonomous driving technology, offering useful exploration and reference for achieving efficient, accurate, and safe environment perception in autonomous driving systems. The findings are expected to be applicable in real-world road environments and to provide new ideas and approaches for further research and development in related fields.
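To make the three-stage layout in (3) more concrete, here is a minimal PyTorch-style sketch of an encoder / dimension-reduction / decoder arrangement in the spirit of RT-SegNet: 1x1 "DR" convolutions pass reduced encoder features to a lightweight decoder, and skip features are superposed by elementwise addition. The channel counts, depths, class count, and the exact form of the FMSS skip superposition are assumptions for illustration, not the dissertation's architecture.

import torch.nn as nn
import torch.nn.functional as F

def conv_bn_relu(cin, cout, stride=1):
    # 3x3 convolution + batch norm + ReLU, the basic encoder/decoder unit used here.
    return nn.Sequential(nn.Conv2d(cin, cout, 3, stride, 1, bias=False),
                         nn.BatchNorm2d(cout), nn.ReLU(inplace=True))

class TinySegNet(nn.Module):
    def __init__(self, num_classes=2):               # e.g. drivable vs. non-drivable
        super().__init__()
        self.enc1 = conv_bn_relu(3, 32, stride=2)     # encoder, 1/2 resolution
        self.enc2 = conv_bn_relu(32, 64, stride=2)    # 1/4
        self.enc3 = conv_bn_relu(64, 128, stride=2)   # 1/8
        self.dr2 = nn.Conv2d(64, 32, 1)               # dimension-reduction (DR) skips
        self.dr3 = nn.Conv2d(128, 32, 1)
        self.dec = conv_bn_relu(32, 32)               # lightweight decoder block
        self.head = nn.Conv2d(32, num_classes, 1)     # per-pixel class scores

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(e1)
        e3 = self.enc3(e2)
        d = F.interpolate(self.dr3(e3), size=e2.shape[2:], mode='bilinear', align_corners=False)
        d = d + self.dr2(e2)                          # skip superposition by addition
        logits = self.head(self.dec(d))               # predictions at 1/4 resolution
        return F.interpolate(logits, size=x.shape[2:], mode='bilinear', align_corners=False)

A two-class head is shown only because drivability is the downstream question in this work; the actual RT-SegNet is evaluated on multi-class benchmarks such as CamVid and Cityscapes.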
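The following sketch illustrates how a modular pipeline like VEP-CE in (4) could chain a BRPN-based, SLNO-adapted detector with RT-SegNet segmentation into a per-frame drivability decision. All function names, thresholds, class ids, and the region-of-interest heuristic are placeholder assumptions, not the dissertation's API or decision rule.

import numpy as np

DRIVABLE_CLASS = 1          # assumed label id for traversable ground
MIN_DRIVABLE_RATIO = 0.30   # assumed minimum fraction of drivable pixels ahead

def in_path(box, w, h):
    # Crude check: does a detection overlap the central lower corridor of the image?
    x1, y1, x2, y2 = box
    return x2 > 0.3 * w and x1 < 0.7 * w and y2 > 0.5 * h

def perceive_frame(image, detector, segmenter):
    # detector(image) -> list of (box, score, cls); segmenter(image) -> HxW label map.
    detections = detector(image)                     # obstacles / objects of interest
    labels = segmenter(image)                        # per-pixel semantic classes
    h, w = labels.shape
    roi = labels[h // 2:, :]                         # lower half approximates the path ahead
    drivable_ratio = float(np.mean(roi == DRIVABLE_CLASS))
    blocked = any(score > 0.5 and in_path(box, w, h) for box, score, cls in detections)
    return {
        "detections": detections,
        "drivable_ratio": drivable_ratio,
        "drivable": drivable_ratio >= MIN_DRIVABLE_RATIO and not blocked,
    }

In a real system the drivability test would of course be richer (scene geometry, temporal smoothing, planner constraints); the point here is only the modular hand-off from detection and segmentation to a real-time decision.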
Keywords/Search Tags: Autonomous driving, Complex environmental perception, Object detection, Semantic segmentation, Deep learning