Traffic congestion is a common problem in cities around the world, causing environmental and economic issues, particularly in large-scale road networks. It can hinder the development of smart cities and sustainable growth. Traffic signal control plays a critical role in alleviating congestion, and numerous methods have been proposed for it. Recently, deep reinforcement learning has emerged as a cutting-edge solution due to its ability to extract information from large-scale road networks and optimize traffic signal control. However, challenges remain, such as deadlocked control strategies caused by a lack of coordination between traffic lights, and the exponential growth of the action space, which hinders optimization efficiency. This thesis proposes a deep reinforcement learning-based framework to optimize traffic signal control in large-scale road networks by increasing throughput and reducing vehicle delay. Firstly, the framework extracts features with convolutional, recurrent, and graph attention networks to efficiently exploit spatial-temporal relations for subsequent inference. Then, this thesis develops a mathematical model of the traffic signal control problem and formulates it as a Markov decision process. Based on this modeling, a deep reinforcement learning algorithm is proposed to centrally control signals in large-scale road networks. Furthermore, optimization methods are applied to address characteristics such as large network size and complex road conditions. Finally, this thesis implements an algorithm with multiple sub-region intelligent agents and a centralized global agent that coordinates their updates: each sub-region agent trains its own model on a small region, while the centralized global agent aggregates information from the sub-region agents and coordinates them to update collaboratively. Experimental results show that the proposed framework outperforms all benchmark methods, reduces the number of waiting vehicles by 25% on average, and scales effectively to different network sizes.
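
To make the sub-region/global coordination described above more concrete, the following is a minimal sketch rather than the thesis's actual implementation: tabular Q-learning on a toy queue model stands in for the deep networks, and all names (SubRegionAgent, GlobalCoordinator, toy_queue_step) are hypothetical. Each sub-region agent trains its own model locally, and a centralized global agent periodically aggregates the learned parameters and pushes them back so the agents update collaboratively.

```python
# Minimal sketch (assumptions: tabular Q-learning instead of deep networks,
# a toy single-intersection queue model per sub-region, parameter averaging
# as the global coordination step; all names are hypothetical).
import numpy as np

N_STATES, N_PHASES = 10, 2          # discretized queue level, two signal phases

def toy_queue_step(state, phase, rng):
    """Toy sub-region dynamics: the chosen phase may drain the queue, new cars arrive."""
    drained = rng.integers(0, 3) if phase == state % 2 else 0   # phase 'matches' demand
    arrivals = rng.integers(0, 2)
    next_state = int(np.clip(state - drained + arrivals, 0, N_STATES - 1))
    reward = -next_state                                        # fewer waiting vehicles is better
    return next_state, reward

class SubRegionAgent:
    """Learns a local Q-table for the intersections inside one sub-region."""
    def __init__(self, seed, lr=0.1, gamma=0.95, eps=0.1):
        self.q = np.zeros((N_STATES, N_PHASES))
        self.rng = np.random.default_rng(seed)
        self.lr, self.gamma, self.eps = lr, gamma, eps
        self.state = 0

    def act(self):
        # epsilon-greedy choice of the next signal phase
        if self.rng.random() < self.eps:
            return int(self.rng.integers(N_PHASES))
        return int(np.argmax(self.q[self.state]))

    def train(self, steps):
        # independent local training inside this sub-region
        for _ in range(steps):
            phase = self.act()
            nxt, r = toy_queue_step(self.state, phase, self.rng)
            td = r + self.gamma * self.q[nxt].max() - self.q[self.state, phase]
            self.q[self.state, phase] += self.lr * td
            self.state = nxt

class GlobalCoordinator:
    """Aggregates sub-region models and broadcasts the coordinated update."""
    def __init__(self, agents):
        self.agents = agents

    def round(self, local_steps):
        for a in self.agents:
            a.train(local_steps)                     # each agent trains its own model
        avg_q = np.mean([a.q for a in self.agents], axis=0)
        for a in self.agents:
            a.q = avg_q.copy()                       # collaborative (averaged) update

if __name__ == "__main__":
    agents = [SubRegionAgent(seed=i) for i in range(4)]
    coordinator = GlobalCoordinator(agents)
    for _ in range(20):
        coordinator.round(local_steps=200)
    print("Greedy value per queue state:", np.round(agents[0].q.max(axis=1), 2))
```

In this toy setting the averaging step plays the role of the centralized global agent: it pools the information learned in each small region and redistributes it, which is one simple way to realize the coordinated update the abstract refers to; the thesis's actual aggregation mechanism may differ.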