With the rapid progress of artificial intelligence, it has been increasingly applied across a multitude of industries, and autonomous driving has emerged as a fundamental research area within the field. Existing autonomous driving algorithms typically adopt hierarchical strategies: they divide the driving task into perception, decision-making, and control; apply deep learning, imitation learning, and related techniques separately to each stage; and fit human-like driving strategies to massive datasets. However, this hierarchical approach is prone to errors in information propagation and requires large datasets to support model learning, increasing the data-processing workload.

This study proposes a novel autonomous driving algorithm grounded in deep reinforcement learning, aimed at acquiring more capable and intelligent driving strategies. The algorithm interacts with a simulation environment and uses an end-to-end learning strategy to learn the vehicle's decision commands directly from its Bird's Eye View (BEV); these are then transformed into control commands. The algorithm requires neither a large dataset nor a complex hierarchical pipeline, handles data-scarce situations better, and avoids the error-propagation problem of hierarchical strategies.

Specifically, the key research components of this study are as follows. First, a high-precision-map BEV sensing method based on vehicle-road-cloud cooperation is proposed. The method collects the real-time state of vehicles through vehicle-cloud cooperation and supplements positioning through object detection via road-cloud cooperation, providing vehicles with a real-time, dynamic high-precision-map BEV. Second, an end-to-end autonomous driving algorithm based on deep reinforcement learning is proposed, together with an improved DQN algorithm for training the reinforcement learning model. The algorithm outputs the vehicle's decision directly from its high-precision-map BEV and, by further combining GNSS and IMU sensors with high-level prior information, converts that decision into control of the vehicle's throttle, steering, and brakes. Next, an experimental platform with the OpenAI Gym universal interface is built on top of the highly realistic urban simulator CARLA, and multiple autonomous driving scenarios are designed with the open-source component Scenario Runner to facilitate the training and evaluation of reinforcement learning algorithms. Finally, both ablation experiments on the DQN improvements and comparative experiments on overall autonomous driving performance are designed to quantify performance in the training and testing stages.

Experimental results show that, in the training stage, the proposed improved DQN significantly outperforms the baseline algorithm, converging faster and reaching better final performance. In the testing stage, the proposed autonomous driving algorithm learns correct driving strategies and completes the designated autonomous driving tasks in complex traffic scenarios with outstanding performance.
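The abstract does not specify which improvements the modified DQN incorporates. As an illustrative sketch only, one common improvement in this family is the Double-DQN target, which decouples action selection (online network) from action evaluation (target network) to reduce value over-estimation; the function name and signature below are hypothetical, not taken from the paper:

```python
import numpy as np

def double_dqn_targets(rewards, dones, next_q_online, next_q_target, gamma=0.99):
    """Compute Double-DQN regression targets for a batch of transitions.

    next_q_online / next_q_target are (batch, num_actions) arrays of
    Q-values for the next states under the online and target networks.
    """
    # The online network picks the greedy next action for each state...
    next_actions = np.argmax(next_q_online, axis=1)
    # ...and the slowly-updated target network evaluates that choice,
    # which is what reduces vanilla DQN's over-estimation bias.
    next_values = next_q_target[np.arange(len(next_actions)), next_actions]
    # Standard one-step TD target; terminal transitions bootstrap to zero.
    return rewards + gamma * (1.0 - dones) * next_values
```

These targets would then be regressed against the online network's Q-values for the taken actions with an ordinary TD loss.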
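The experimental platform exposes CARLA through the OpenAI Gym universal interface. A minimal sketch of what such a wrapper's surface looks like is shown below; the class name, BEV shape, and discrete action set are illustrative assumptions (the real platform would render BEV observations from CARLA and compute rewards from scenario outcomes):

```python
import numpy as np

class BEVDrivingEnv:
    """Gym-style interface sketch for a BEV-based driving task (illustrative)."""

    # Hypothetical high-level decisions, later mapped to throttle/steer/brake.
    ACTIONS = ("keep_lane", "turn_left", "turn_right", "accelerate", "brake")

    def __init__(self, bev_shape=(96, 96, 3), max_steps=1000):
        self.bev_shape = bev_shape
        self.max_steps = max_steps
        self._steps = 0

    def reset(self):
        """Start a new episode and return the initial BEV observation."""
        self._steps = 0
        return np.zeros(self.bev_shape, dtype=np.float32)  # placeholder frame

    def step(self, action):
        """Apply one discrete decision; return (obs, reward, done, info)."""
        assert 0 <= action < len(self.ACTIONS)
        self._steps += 1
        obs = np.zeros(self.bev_shape, dtype=np.float32)
        reward = 0.0  # a real platform would reward progress, penalize collisions
        done = self._steps >= self.max_steps
        info = {"action_name": self.ACTIONS[action]}
        return obs, reward, done, info
```

Conforming to the `reset`/`step` contract is what lets standard reinforcement learning training loops drive CARLA scenarios without CARLA-specific code.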