
Research On Path Planning Method Of Unmanned Vehicle Based On Deep Reinforcement Learning

Posted on: 2021-10-09
Degree: Master
Type: Thesis
Country: China
Candidate: R L Li
Full Text: PDF
GTID: 2492306572466244
Subject: Control Engineering
Abstract/Summary:
Path planning plays a very important role in an unmanned vehicle system when the vehicle carries out exploration activities, and it is the prerequisite for performing exploration tasks safely. To accomplish scientific research tasks safely, the ability of unmanned mobile devices to adapt to dynamic environments clearly needs to be enhanced. In traditional planning approaches, researchers model the map in advance, and the action strategy of the unmanned equipment is also specified by people. However, the working environment of an unmanned vehicle is sometimes not completely known, so enhancing the vehicle's ability to adapt to its environment has become a research direction. Deep reinforcement learning has developed rapidly in recent years and provides a new method for path planning: instead of requiring people to design a detailed path planning algorithm, it lets the unmanned equipment learn independently and generate an end-to-end model.

Firstly, some theories of deep reinforcement learning are discussed and studied. A CNN and an LSTM are used to process the environmental perception information of the unmanned vehicle. By fusing the advantages of several deep reinforcement learning algorithms and optimizing the network training, an improved DQN algorithm is obtained. Secondly, the simulation environment is built and the training parameters of the neural network are set; the RRT algorithm is also introduced and improved. Finally, the path planning performance of the improved DQN algorithm in unknown environments is verified by experiments, and its effectiveness is verified in three environments in turn. The first is an obstacle-free environment, in which the performance of the improved DQN algorithm is tested and compared with other deep reinforcement learning algorithms; the improved DQN algorithm performs better in path planning. The second is a static obstacle environment, in which the improved DQN algorithm is trained and tested, its performance changes are analyzed, and it is compared with the improved RRT algorithm. The third is a dynamic obstacle environment, which reuses the network model trained in the second environment; the model shows strong generalization ability in path planning. The improved DQN algorithm can avoid obstacles independently in unknown dynamic environments.
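To make the described approach concrete, the sketch below shows a minimal DQN-style Q-network that combines a CNN encoder with an LSTM over a short observation history, in the general spirit of the CNN/LSTM perception pipeline mentioned in the abstract. This is an illustrative example only, not the thesis's actual network: the 84x84 single-channel observation, the 4-action discrete action space, and all layer sizes are hypothetical choices, and the thesis's specific DQN improvements (fusing advantages of several algorithms and optimizing training) are not reproduced here.

```python
# Illustrative sketch only: a recurrent Q-network (CNN encoder + LSTM) with
# epsilon-greedy action selection, as used in standard DQN-style training.
# Observation shape, action count, and layer sizes are hypothetical.

import random
import torch
import torch.nn as nn


class RecurrentQNetwork(nn.Module):
    def __init__(self, num_actions: int = 4, hidden_size: int = 128):
        super().__init__()
        # CNN encoder for a single-channel local observation image (84x84).
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        # LSTM aggregates a short history of encoded observations.
        self.lstm = nn.LSTM(input_size=32 * 9 * 9, hidden_size=hidden_size,
                            batch_first=True)
        self.q_head = nn.Linear(hidden_size, num_actions)

    def forward(self, obs_seq: torch.Tensor) -> torch.Tensor:
        # obs_seq: (batch, seq_len, 1, 84, 84)
        b, t = obs_seq.shape[:2]
        feats = self.encoder(obs_seq.reshape(b * t, *obs_seq.shape[2:]))
        feats = feats.reshape(b, t, -1)
        out, _ = self.lstm(feats)
        return self.q_head(out[:, -1])  # Q-values from the last time step


def select_action(q_net: RecurrentQNetwork, obs_seq: torch.Tensor,
                  epsilon: float, num_actions: int = 4) -> int:
    """Epsilon-greedy action selection over the predicted Q-values."""
    if random.random() < epsilon:
        return random.randrange(num_actions)
    with torch.no_grad():
        return int(q_net(obs_seq).argmax(dim=1).item())


if __name__ == "__main__":
    net = RecurrentQNetwork()
    dummy = torch.zeros(1, 4, 1, 84, 84)  # one 4-step observation history
    print(select_action(net, dummy, epsilon=0.1))
```

In a full training loop such a network would be paired with a replay buffer and a periodically updated target network, which is where the abstract's "DQN improvement" and training optimizations would apply.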
Keywords/Search Tags:Unmanned Vehicle, End to End, Deep Reinforcement Learning, DQN Improvement, Path Planning