
Research On Task Scheduling Algorithms For Vehicle Edge Computing Based On Reinforcement Learning

Posted on: 2024-03-02
Degree: Master
Type: Thesis
Country: China
Candidate: Y H Yan
Full Text: PDF
GTID: 2542307133991719
Subject: Communication Engineering (including broadband network, mobile communication, etc.) (Professional Degree)
Abstract/Summary:
With the significant increase in 5G communication rates and the rapid development of urbanization, a large number of latency-sensitive Internet of Vehicles applications have emerged. These applications place high demands on computing resources, yet the limited computing resources of vehicle terminals cannot process tasks in real time. Vehicle Edge Computing (VEC) is an effective solution to this problem: VEC deploys servers at the roadside, close to vehicles, to provide the computing resources vehicles require. However, network operators have limited computing resources for deploying VEC servers, and during peak commuting periods the number of tasks offloaded from vehicle terminals increases sharply, pushing the server computing load close to saturation. This leads to a significant increase in task processing latency and creates traffic safety hazards. Implementing an efficient task scheduling strategy on the VEC server is therefore a feasible way to improve task processing efficiency. Accordingly, this thesis aims to reduce task processing latency and improve the task processing success probability, and investigates how Deep Reinforcement Learning (DRL) can be used to solve the VEC server task scheduling problem. The specific work is as follows.

(1) To address the decline in task processing efficiency of vehicle edge computing networks during peak traffic hours, this thesis proposes a VEC-based task processing framework, constructs a multi-agent reinforcement learning model of the task scheduling problem, and solves for the scheduling policy with the Multi-Agent Deep Q-Networks (MADQN) algorithm, improving real-time processing efficiency and reducing the processing delay of VEC tasks. The MADQN scheduling strategy embeds a reward mechanism based on thread service time to guarantee fast convergence of the deep neural networks, and adopts centralized training with distributed execution to reduce the computational dimensionality and the training time of the networks, adapting them to the real-time task processing environment of VEC. Experimental results show that the proposed MADQN task scheduling algorithm achieves lower task processing latency and a higher task processing success probability than other task scheduling algorithms.

(2) To further improve task processing efficiency, the proposed MADQN algorithm is optimized with the Multi-Agent Double Deep Q-Networks (MADDQN) algorithm. MADQN suffers from an overestimation problem when computing Q-values; MADDQN addresses this by using a double Q-network to compute the Q-values, which effectively reduces the estimation bias of the algorithm and thereby further improves its performance. Experimental results show that the MADDQN task scheduling algorithm achieves a further performance improvement over the MADQN task scheduling algorithm.
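The difference between the two algorithms above comes down to how the bootstrap target is formed. A minimal numerical sketch, using toy Q-tables in place of the thesis's neural networks (all names, sizes, and the illustrative negative-service-time reward are assumptions for illustration, not the thesis's actual implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Q-tables standing in for the online and target networks:
# rows = agent states, columns = scheduling actions
# (e.g. which server thread a task is dispatched to).
n_states, n_actions = 4, 3
q_online = rng.normal(size=(n_states, n_actions))
q_target = rng.normal(size=(n_states, n_actions))

gamma = 0.9    # discount factor (assumed value)
s_next = 2     # illustrative next state after a scheduling decision
reward = -0.5  # illustrative reward, e.g. negative thread service time

# Standard (MA)DQN target: the max is taken over the same network that
# evaluates the action, which systematically overestimates noisy Q-values.
dqn_target = reward + gamma * q_target[s_next].max()

# Double DQN target (as in MADDQN): the online network *selects* the
# greedy action, the target network *evaluates* it. Decoupling selection
# from evaluation reduces the upward estimation bias.
a_star = int(q_online[s_next].argmax())
ddqn_target = reward + gamma * q_target[s_next, a_star]
```

Because the double estimate evaluates a possibly non-greedy action under the target network, it can never exceed the max-based DQN target for the same tables, which is the mechanism behind the bias reduction claimed above.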
Keywords/Search Tags: task scheduling, MADQN, MADDQN, vehicle edge computing, Internet of Vehicles