
Research On Control System And Path Planning Algorithm Of Unmanned Combat Mobile Platform

Posted on: 2019-05-27
Degree: Master
Type: Thesis
Country: China
Candidate: P F Dong
Full Text: PDF
GTID: 2432330551961443
Subject: Mechanical and electrical engineering
Abstract/Summary:
With the development of science and technology, unmanned mobile platforms are used ever more widely in the military, industrial, service and other fields, especially in military operations. Motion control and autonomous navigation are the key technologies in these applications, but they still face many difficulties. This research designs an omni-directional mobile platform based on Mecanum wheels and applies reinforcement learning algorithms from the field of artificial intelligence to the path planning of the mobile platform. The algorithms are studied and analyzed in detail through software simulation and through autonomous navigation experiments in an indoor environment based on the ROS framework.

(1) First, the development trends of unmanned combat mobile platforms at home and abroad are analyzed, together with the key technical difficulties. On this basis, Mecanum wheels are used to build an omnidirectional mobile platform, and the overall structure of the platform and the general scheme of the control system are designed: an STM32F4 serves as the lower-level control core, implementing chassis motion control and odometry data transmission, while a TX1 high-performance embedded computer serves as the upper-level control core, running ROS under Ubuntu to implement the autonomous navigation algorithms. The two controllers exchange data over a serial port.

(2) Based on the Mecanum wheel kinematic model, the transformation between the platform velocity and the motor speeds is derived. An improved PID control algorithm is designed to achieve precise control of the chassis motor speeds, and high-precision Hall sensors are used to record motor rotation and compute the odometry information.

(3) To address the slow convergence of the reinforcement learning algorithm Q-learning, a gravitational potential field and trap search are used to obtain prior information about the environment, and the global path is planned in a static environment built with pygame. It is shown that the improved algorithm effectively accelerates training convergence and improves the quality of the planned trajectory.

(4) A deep reinforcement learning algorithm (DDQN: double deep Q-network) is applied to local dynamic path planning. Taking the lidar data and the local target point as input, the moving direction of the agent is output directly by a convolutional neural network. To address the training instability of deep reinforcement learning, an improved training method is proposed to deal with the sparse-reward problem. Dynamic-environment simulations based on pygame and TensorFlow verify that DDQN has strong local planning ability in dynamic environments: the agent effectively avoids both static and dynamic obstacles and reaches the target point.

(5) The reinforcement learning algorithms are ported into the autonomous navigation package under the ROS framework. Indoor environments are selected for the experiments; the improved Q-learning is used for static global path planning and DDQN for local path planning in unknown environments, verifying the feasibility of the algorithms.
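The platform-velocity-to-wheel-speed transformation described in point (2) can be sketched as follows. This is a minimal illustration of standard Mecanum inverse kinematics, not the thesis's implementation; the geometry parameters `L`, `W`, and `r` are assumed placeholder values:

```python
import numpy as np

def mecanum_wheel_speeds(vx, vy, wz, L=0.2, W=0.15, r=0.05):
    """Inverse kinematics of a four-Mecanum-wheel chassis.

    vx, vy : platform linear velocity (m/s); wz : yaw rate (rad/s).
    L, W   : half wheelbase and half track width (m), assumed values.
    r      : wheel radius (m), assumed value.
    Returns the four wheel angular velocities (rad/s) in the order
    front-left, front-right, rear-left, rear-right.
    """
    k = L + W
    # Each row maps (vx, vy, wz) to one wheel's angular velocity.
    J = np.array([[1, -1, -k],
                  [1,  1,  k],
                  [1,  1, -k],
                  [1, -1,  k]]) / r
    return J @ np.array([vx, vy, wz])
```

For pure forward motion all four wheels turn at the same speed; a nonzero lateral velocity or yaw rate produces the differential wheel pattern that gives the Mecanum platform its omnidirectional motion.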
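The Q-learning improvement in point (3), using a gravitational potential field toward the goal as prior environmental knowledge, might look like the following grid-world sketch. The field strength, distance metric, and learning parameters are illustrative assumptions, not the thesis's values:

```python
import numpy as np

def potential_init_q(grid_h, grid_w, goal, n_actions=4, strength=1.0):
    """Initialize the Q-table from an attractive potential field:
    states nearer the goal start with higher values, biasing early
    exploration toward the goal instead of leaving it uniform."""
    q = np.zeros((grid_h, grid_w, n_actions))
    for i in range(grid_h):
        for j in range(grid_w):
            dist = abs(i - goal[0]) + abs(j - goal[1])  # Manhattan distance
            q[i, j, :] = strength / (1.0 + dist)        # attractive prior
    return q

def q_update(q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """Standard one-step Q-learning update on the (pre-shaped) table."""
    td_target = r + gamma * q[s_next].max()
    q[s][a] += alpha * (td_target - q[s][a])
    return q
```

Because the initial Q-values already slope toward the goal, a greedy or epsilon-greedy policy wastes fewer episodes on uninformed exploration, which is the mechanism behind the faster convergence the abstract reports.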
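The double deep Q-network (DDQN) of point (4) decouples action selection (online network) from action evaluation (target network). A NumPy sketch of the target computation follows; the network outputs are stand-in arrays, not the thesis's lidar-driven CNN:

```python
import numpy as np

def ddqn_targets(q_online_next, q_target_next, rewards, dones, gamma=0.99):
    """Double-DQN targets: the online net picks the best next action,
    the target net evaluates it, reducing the overestimation bias of
    vanilla DQN.

    q_online_next, q_target_next : (batch, n_actions) Q-values at s'.
    rewards, dones               : (batch,) float arrays.
    """
    best_a = q_online_next.argmax(axis=1)                   # select with online net
    idx = np.arange(len(best_a))
    q_eval = q_target_next[idx, best_a]                     # evaluate with target net
    return rewards + gamma * q_eval * (1.0 - dones)         # zero bootstrap at episode end
```

In training, these targets would be regressed against the online network's Q-values for the taken actions; the separation of the two networks is what stabilizes learning relative to plain Q-learning with function approximation.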
Keywords/Search Tags: Omni-directional mobile platform, improved PID control algorithm, ROS autonomous navigation, Q-learning, Double Deep Q-network, global path planning, local path planning