
Traffic Signal Control Optimization Based On Predicted Traffic Flow

Posted on: 2024-01-06
Degree: Master
Type: Thesis
Country: China
Candidate: J Q Zhan
Full Text: PDF
GTID: 2542307091497144
Subject: Software engineering

Abstract/Summary:
In recent years, with the rapid growth of China's economy and the continuous improvement of living standards, private cars have become increasingly common in households, and private car ownership has risen sharply. As urbanization advances, this growth is concentrated in cities, and the construction and upgrading of existing urban road infrastructure cannot keep pace with the resulting increase in traffic on the urban road network. Congestion and inefficient traffic flow have become the norm in large cities, and traffic congestion is now one of the problems that must be solved in China's urbanization process. In-depth analysis of congestion in major cities at home and abroad, together with more efficient traffic-control strategies that improve the efficiency of vehicles on the network, has therefore become a popular research topic in the transportation field.

As the tool that controls vehicle flow at intersections, traffic signals play a very important role in overall road traffic capacity. In recent years, research on traffic signal control at road intersections using Deep Reinforcement Learning (DRL) has received increasing attention. Compared with traditional signal control based on fixed timing or manual settings, DRL-based control is adaptive, operates in real time, and achieves better results.

This thesis proposes a DRL traffic signal control algorithm that incorporates predicted traffic flow states. It combines a Long Short-Term Memory (LSTM) neural network with a Double Deep Q-Network (Double DQN) to build an adaptive traffic signal control model. The LSTM predicts the future traffic flow state at each intersection, and the predicted state is combined with Q-learning to improve the Q-value calculation of the target network in Double DQN. The model also adds a dynamic ε action selection strategy to the original Double DQN: instead of the linear function used by traditional DQN-style algorithms to regulate action selection, a piecewise function composed of a linear segment and a Sigmoid segment regulates the randomness of optimal-action selection, yielding a smoother schedule than the traditional method. These improvements mitigate the Q-value over-estimation problem of the traditional DQN target network and improve the reliability of the trained model.

Simulation experiments are conducted on the simulation engine provided by the KDD City Brain Competition, using road network data that simulates a real traffic environment. On a road network formed by intersections and the roads connecting them, the objective is to increase the maximum number of vehicles the network can carry and to minimize the average vehicle delay on the network. Comparison with several other improved DQN traffic signal control methods shows that the proposed algorithm effectively optimizes the traffic signal control strategy and improves road traffic efficiency and safety under different road conditions and traffic flow situations.
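The abstract only outlines how the predicted state enters the target-network update, so the PyTorch sketch below illustrates the general idea rather than the thesis's exact model: an LSTM (here named FlowLSTM) forecasts the next traffic-flow state, and the Double DQN target blends the Q-value of the observed next state with the Q-value of the predicted state. The class and function names, the blending weight beta, and the convex-combination form are all assumptions introduced for illustration.

```python
import torch
import torch.nn as nn

class FlowLSTM(nn.Module):
    """Predicts the next traffic-flow state of an intersection from a short history."""
    def __init__(self, state_dim: int, hidden_dim: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(state_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, state_dim)

    def forward(self, history: torch.Tensor) -> torch.Tensor:
        # history: (batch, seq_len, state_dim) -> predicted next state (batch, state_dim)
        out, _ = self.lstm(history)
        return self.head(out[:, -1])


def double_dqn_target(reward, next_state, predicted_state, done,
                      online_net, target_net, gamma=0.99, beta=0.5):
    """Double DQN target blended with a target computed on the LSTM-predicted state.

    The blending weight `beta` and the simple convex combination are assumptions;
    the thesis only states that the predicted state enters the target-Q calculation.
    """
    with torch.no_grad():
        # Standard Double DQN: action chosen by the online net, valued by the target net.
        next_actions = online_net(next_state).argmax(dim=1, keepdim=True)
        q_next = target_net(next_state).gather(1, next_actions).squeeze(1)

        # Same decoupled evaluation, applied to the LSTM-predicted future state.
        pred_actions = online_net(predicted_state).argmax(dim=1, keepdim=True)
        q_pred = target_net(predicted_state).gather(1, pred_actions).squeeze(1)

        q_blend = beta * q_next + (1.0 - beta) * q_pred
        return reward + gamma * (1.0 - done) * q_blend
```

In a training loop, the predictor would be fit on recent flow histories (alongside or before Q-learning), and its output would replace `predicted_state` in the target computation above.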
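Likewise, the dynamic ε strategy is described only as a piecewise schedule built from a linear function and a Sigmoid function. The sketch below shows one plausible form under stated assumptions; the breakpoint, decay range, and sigmoid steepness are illustrative and not values reported in the thesis.

```python
import math

def dynamic_epsilon(step: int,
                    eps_start: float = 1.0,
                    eps_end: float = 0.05,
                    switch_step: int = 10_000,
                    total_steps: int = 50_000) -> float:
    """Piecewise exploration rate: linear decay first, sigmoid-shaped tail after.

    All constants (breakpoint, slope, sigmoid steepness) are illustrative
    assumptions, not parameters taken from the thesis.
    """
    eps_mid = 0.5 * (eps_start + eps_end)          # value near the segment boundary
    if step < switch_step:
        # Linear segment: eps_start -> eps_mid over the first switch_step steps.
        return eps_start - (step / switch_step) * (eps_start - eps_mid)
    # Sigmoid segment: smooth decay from roughly eps_mid down to eps_end.
    progress = (step - switch_step) / max(total_steps - switch_step, 1)
    s = 1.0 / (1.0 + math.exp(10.0 * (progress - 0.5)))   # decreasing on [0, 1]
    return eps_end + (eps_mid - eps_end) * s
```

Compared with a purely linear decay, the sigmoid tail flattens the transition into the low-exploration regime, which matches the abstract's claim that the schedule is smoother than the traditional method.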
Keywords/Search Tags:Deep reinforcement learning, Traffic signal, Dynamic Strategy, LSTM, DQN