
Deep Reinforcement Learning Based Wireless Resource Allocation For Vehicle-to-Everything Networks

Posted on: 2023-06-16    Degree: Master    Type: Thesis
Country: China    Candidate: J H Li    Full Text: PDF
GTID: 2532306845990289    Subject: Communication Engineering (including broadband network, mobile communication, etc.) (Professional Degree)
Abstract/Summary:
Vehicle-to-Everything (V2X) applications are evolving from basic services such as road safety, traffic efficiency, and information services toward enhanced applications such as autonomous driving and intelligent transportation, which impose diverse communication requirements. Supporting this wide range of V2X applications therefore confronts V2X communication technology with several challenges, including fast time-varying channels in highly dynamic vehicular scenarios, massive volumes of access data, and limited spectrum resources. This thesis studies wireless resource allocation in two types of V2X systems and applies data-driven deep reinforcement learning (DRL) to solve the formulated resource management problems, enabling smarter resource allocation and supporting the further development of intelligent urban transportation systems.

First, to improve the spectrum utilization efficiency of a typical V2X system, we assume that Vehicle-to-Infrastructure (V2I) links occupy pre-allocated orthogonal spectrum with fixed transmit power, and design a joint spectrum sharing and transmit power configuration mechanism for Vehicle-to-Vehicle (V2V) links that meets the Quality of Service (QoS) requirements of high capacity for V2I links and high reliability for V2V links. On this basis, the resource allocation problem is modeled and a DRL-based resource allocation algorithm is proposed. Specifically, each V2V transmitter is modeled as an agent, and the system iteratively selects one agent at a time to interact with the environment. Each V2V agent selects actions in its current state according to a Q-network and uses the feedback from the environment to refine its subsequent action selection, yielding the final resource allocation policy. Simulation results analyze the impact of the neural network hyper-parameters on the algorithm's performance and show the superiority of the proposed DRL scheme over the baseline scheme.

Second, we consider a larger-scale heterogeneous V2X system. Because this system contains more user types and communication links, the interference caused by spectrum sharing increases accordingly. To address resource allocation in this setting, we extend the previous work with an improved multi-agent reinforcement learning (MARL) algorithm that meets the QoS requirements of the different links. Specifically, all V2V agents interact with the environment and select their actions simultaneously. To combat the environment non-stationarity inherent in multi-agent systems, we transform the state space of each agent and introduce low-dimensional fingerprint information that represents the agent's policy. Finally, the action space and the reward function of the algorithm are redesigned. Simulation results demonstrate the superiority and robustness of the proposed MARL algorithm compared with other algorithms, and further show that the proposed scheme effectively encourages cooperation among multiple agents to complete the transmission task.

Finally, the thesis identifies the shortcomings of the current work and outlines directions for future improvement. The thesis contains 23 figures, 10 tables, and 50 references.
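To make the single-agent formulation above concrete, the sketch below shows how one V2V transmitter could be modeled as a DQN agent that jointly selects a spectrum sub-band and a discrete transmit-power level. This is an illustrative sketch, not the thesis implementation: the state dimension, the numbers of sub-bands and power levels, the network architecture, and all hyper-parameters are assumptions made for the example.

```python
# A minimal sketch (not the thesis implementation) of the single-agent setup:
# one V2V transmitter modeled as a DQN agent that jointly selects a spectrum
# sub-band and a discrete transmit-power level. STATE_DIM, N_SUBBANDS,
# N_POWER_LEVELS, network sizes and hyper-parameters are assumptions.
import random
from collections import deque

import numpy as np
import torch
import torch.nn as nn

N_SUBBANDS = 4        # assumed number of orthogonal sub-bands pre-allocated to V2I links
N_POWER_LEVELS = 3    # assumed number of discrete V2V transmit-power levels
N_ACTIONS = N_SUBBANDS * N_POWER_LEVELS
STATE_DIM = 16        # assumed local observation size (channel gains, interference, QoS margins, ...)


class QNetwork(nn.Module):
    """Maps a local observation to Q-values over (sub-band, power-level) pairs."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, N_ACTIONS),
        )

    def forward(self, x):
        return self.net(x)


class V2VAgent:
    """One V2V transmitter acting as an epsilon-greedy DQN agent."""

    def __init__(self, lr=1e-3, gamma=0.95):
        self.q_net = QNetwork()
        self.target_net = QNetwork()
        self.target_net.load_state_dict(self.q_net.state_dict())
        self.optimizer = torch.optim.Adam(self.q_net.parameters(), lr=lr)
        self.replay = deque(maxlen=10_000)
        self.gamma = gamma

    def act(self, state, epsilon):
        # Epsilon-greedy choice; the flat action index encodes the pair:
        #   sub_band, power_level = divmod(action, N_POWER_LEVELS)
        if random.random() < epsilon:
            return random.randrange(N_ACTIONS)
        with torch.no_grad():
            q = self.q_net(torch.as_tensor(state, dtype=torch.float32))
        return int(q.argmax())

    def remember(self, s, a, r, s_next):
        self.replay.append((s, a, r, s_next))

    def train_step(self, batch_size=64):
        if len(self.replay) < batch_size:
            return
        s, a, r, s_next = zip(*random.sample(list(self.replay), batch_size))
        s = torch.as_tensor(np.asarray(s), dtype=torch.float32)
        a = torch.as_tensor(a, dtype=torch.int64)
        r = torch.as_tensor(r, dtype=torch.float32)
        s_next = torch.as_tensor(np.asarray(s_next), dtype=torch.float32)

        # One-step temporal-difference target with a separate target network.
        q_sa = self.q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
        with torch.no_grad():
            target = r + self.gamma * self.target_net(s_next).max(dim=1).values
        loss = nn.functional.smooth_l1_loss(q_sa, target)
        self.optimizer.zero_grad()
        loss.backward()
        self.optimizer.step()

    def sync_target(self):
        # Periodically copy online weights into the target network.
        self.target_net.load_state_dict(self.q_net.state_dict())
```

For the multi-agent extension described above, one common way to realize the fingerprint idea (an assumption here, not a statement of the thesis design) is to concatenate each agent's observation with a low-dimensional vector such as the current exploration rate and training-iteration index, so that part of the other agents' evolving policies is reflected in the state; the (sub-band, power-level) action decoding would remain unchanged.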
Keywords/Search Tags:V2X communication, deep reinforcement learning, resource allocation, quality of service