
Priority Dispatch Of Emergency Vehicles Based On The Combination Of Reinforcement Learning And Computer Simulation

Posted on: 2022-08-12
Degree: Master
Type: Thesis
Country: China
Candidate: J X Li
GTID: 2507306509989009
Subject: Applied Statistics
Abstract/Summary:
With the growth of per-capita car ownership, urban traffic congestion has become increasingly severe, making it difficult for emergency vehicles to reach the scene quickly after an incident occurs. Research on the priority dispatch of emergency vehicles is therefore particularly important. In most cities, signal priority for emergency vehicles receives little attention; at best, priority is granted under fixed-time traffic signals. In practice, however, the high complexity and uncertainty of road conditions call for a more intelligent signal-control strategy to achieve priority dispatch of emergency vehicles.

In this thesis, the situation in which emergency vehicles and ordinary vehicles are on the road at the same time is called multi-mode traffic, and a deep reinforcement learning algorithm for multi-mode traffic is proposed. The DQN algorithm is used to control the intersection signal lights when an emergency vehicle appears, so as to give emergency vehicles priority while also reducing the average travel time of ordinary vehicles. Because Q-learning is a model-free reinforcement learning algorithm in which the agent learns through dynamic interaction with the environment, the state space and the structure of the reward function are extended so that the agent can learn under different traffic conditions, achieving integrated control of multi-mode traffic. At the same time, to reduce the correlation between Q-value computation and network iteration, two neural networks with identical structure are trained iteratively, and an experience pool is introduced to reduce the correlation of the training data and improve the accuracy of the results.

First, a single-intersection DQN model is established: the state space, action space, and reward function are defined, where the action space consists of the two signal phases of the intersection, and the number of vehicles queued in front of the emergency vehicle is chosen as the congestion indicator for the experiments. Second, a multi-intersection DQN model is established, with the state space, action space, and reward function defined for each agent. To make action selection more flexible, the action space is changed to the duration of the green light in the north-south direction of the intersection, and the saturation of the intersection is used to characterize road congestion. Finally, simulation experiments are carried out with SUMO and Python. The cumulative reward and the average number of queued vehicles are used to analyze the effectiveness of the DQN algorithm, and the travel time of emergency vehicles and the average travel time of ordinary vehicles are used to compare the strategies. The experiments show that the DQN algorithm with the improved state space and reward function achieves good results at both single and multiple intersections. In congested conditions in particular, the DQN strategy outperforms the simple priority strategy in reducing both the travel time of emergency vehicles and the average travel time of ordinary vehicles.
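The single-intersection scheme described above can be sketched in miniature. The following is not the thesis's implementation: it is a hypothetical Python sketch in which a linear Q-approximator stands in for the deep network, the action space is the two signal phases, the state is an assumed vector of queue lengths, and the reward penalizes congestion with extra weight on vehicles queued ahead of the emergency vehicle. It does illustrate the two mechanisms the abstract names: a second, periodically synchronized target network, and an experience pool sampled at random to de-correlate training data.

```python
import random
from collections import deque
import numpy as np

class DQNSignalAgent:
    """Sketch of a DQN signal-control agent (hypothetical shapes):
    online + target 'networks' (here linear weights, same structure)
    and an experience-replay pool, as described in the abstract."""

    def __init__(self, state_dim, n_actions=2, gamma=0.95, lr=0.01,
                 buffer_size=2000, batch_size=32, sync_every=50):
        self.gamma, self.lr = gamma, lr
        self.batch_size, self.sync_every = batch_size, sync_every
        # Two function approximators with identical structure.
        self.w_online = np.zeros((n_actions, state_dim))
        self.w_target = self.w_online.copy()
        self.replay = deque(maxlen=buffer_size)  # experience pool
        self.steps = 0

    def act(self, state, epsilon=0.1):
        # Epsilon-greedy choice between the two phases
        # (e.g. 0 = north-south green, 1 = east-west green).
        if random.random() < epsilon:
            return random.randrange(self.w_online.shape[0])
        return int(np.argmax(self.w_online @ state))

    def remember(self, s, a, r, s_next):
        self.replay.append((s, a, r, s_next))

    def train_step(self):
        if len(self.replay) < self.batch_size:
            return
        # Random minibatch from the pool reduces data correlation.
        for s, a, r, s_next in random.sample(self.replay, self.batch_size):
            # Q-target computed from the frozen target network.
            target = r + self.gamma * np.max(self.w_target @ s_next)
            td_error = target - self.w_online[a] @ s
            self.w_online[a] += self.lr * td_error * s
        self.steps += 1
        if self.steps % self.sync_every == 0:
            # Periodically copy online weights into the target network.
            self.w_target = self.w_online.copy()

def reward(queue_lengths, queue_ahead_of_ev, w_ev=5.0):
    """Assumed reward shape: negative total queue length, with an extra
    penalty proportional to vehicles queued ahead of the emergency vehicle."""
    return -float(np.sum(queue_lengths)) - w_ev * queue_ahead_of_ev
```

In a full experiment the environment side of this loop would come from SUMO via its TraCI interface (reading queue lengths each step and applying the chosen phase); the agent side above is independent of that detail.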
Keywords: Emergency Vehicles, Computer Simulation, Deep Reinforcement Learning, Intelligent Traffic Lights