With the rapid development of edge computing, vehicular ad hoc networks that rely on edge computing are also growing rapidly. In the foreseeable future, the data communication and computing demands of automated vehicles will inevitably surge. Given limited communication and computing resources, how to schedule those resources effectively so that the network achieves the best utilization is a fundamental research problem. Building on existing research results, this thesis constructs a task scheduling model for vehicular edge computing. Under delay constraints, and with the goal of minimizing the energy consumption cost within the system, a task scheduling scheme covering communication, computing, caching, and collaborative computing is established. In view of the delay-sensitive tasks, large data volumes, and heavy computing-resource demands that characterize vehicular edge scenarios, and considering the scenario constraints from different perspectives, the scheduling scheme is established, a decision model for vehicular task scheduling is obtained through training, and comparative experiments are carried out on evaluation metrics of the model, such as cost loss and average resource utilization. The specific work is as follows:

(1) Aiming at the task scheduling problem in a centralized vehicular edge computing scenario, a task scheduling scheme based on a deep reinforcement learning algorithm is proposed to minimize the energy consumption cost under delay and computing-power constraints. The vehicular edge computing scenario is treated as a single-agent environment, and a loss function that minimizes the energy consumption cost is given. The optimal scheduling strategy is derived through a Markov decision process, and the problem is formulated as a deep reinforcement learning problem (a minimal code sketch is given after item (2) below). Experimental results show that the vehicular edge task scheduling model established in this thesis incurs a smaller energy-cost loss at each scheduling stage; compared with machine learning baselines, its average resource utilization is higher and its average task failure rate is lower.

(2) Aiming at the task scheduling problem in a distributed vehicular edge computing scenario, and considering cooperation between roadside units (RSUs), a task scheduling scheme based on a multi-agent reinforcement learning algorithm is proposed. Each RSU is treated as an agent that can cooperate with other agents within its communication range, which establishes a multi-agent environment (see the second sketch below). The scheduling problem in this environment is abstracted into the problem of minimizing the energy consumption cost under delay constraints, and the optimal scheduling strategy is derived through a Markov game. Deep Q-network learning is used to construct the optimal scheduling strategy, and a multi-agent reinforcement learning algorithm is used to train the model. Simulation results show that, compared with the single-agent reinforcement learning algorithm, convergence is faster, the return rate increases by 21.9%, and the system energy consumption cost is reduced by 22.8%.
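To make the single-agent formulation in (1) concrete, the following is a minimal PyTorch sketch of a DQN-style scheduler. The state features, the three offloading targets, and the penalty-based reward are illustrative assumptions rather than the exact design used in the thesis, and a full implementation would also use a replay buffer and a target network.

```python
# Minimal sketch of a DQN scheduler for contribution (1).
# State features, actions, and the cost model are illustrative assumptions.
import random
import torch
import torch.nn as nn

ACTIONS = ["local", "rsu", "cloud"]  # hypothetical offloading targets

class QNet(nn.Module):
    """Maps a state vector to one Q-value per scheduling action."""
    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )
    def forward(self, s):
        return self.net(s)

def reward(energy_cost: float, latency: float, deadline: float) -> float:
    # Negative energy cost, with a large penalty when the delay constraint is violated.
    return -energy_cost - (100.0 if latency > deadline else 0.0)

def select_action(qnet: QNet, state: torch.Tensor, eps: float) -> int:
    # Epsilon-greedy exploration over the offloading actions.
    if random.random() < eps:
        return random.randrange(len(ACTIONS))
    with torch.no_grad():
        return int(qnet(state).argmax().item())

def td_update(qnet, opt, s, a, r, s_next, gamma=0.99):
    # One temporal-difference update on a single transition (no replay buffer
    # or target network, which a full DQN would add for stability).
    q = qnet(s)[a]
    with torch.no_grad():
        target = r + gamma * qnet(s_next).max()
    loss = (q - target) ** 2
    opt.zero_grad(); loss.backward(); opt.step()
    return float(loss)

qnet = QNet(state_dim=4, n_actions=len(ACTIONS))
opt = torch.optim.Adam(qnet.parameters(), lr=1e-3)
s = torch.tensor([1.0, 0.5, 0.2, 0.8])  # e.g. task size, deadline, queue loads (assumed)
a = select_action(qnet, s, eps=0.1)
r = reward(energy_cost=0.3, latency=0.05, deadline=0.1)
td_update(qnet, opt, s, a, r, s_next=torch.tensor([0.9, 0.4, 0.3, 0.7]))
```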
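For the cooperative setting in (2), one common way to realize "each RSU as an agent" is to let every RSU condition its decisions on what it can observe from neighbors within communication range, consistent with a Markov game where agents act on partial, local views. The sketch below illustrates only that neighborhood and action-selection logic; the positions, communication range, action names, and the table-based stand-in for a per-agent Q-network are all assumptions for illustration.

```python
# Sketch of the multi-agent view in (2): each RSU is an agent that augments
# its own observation with those of neighbors in communication range.
from dataclasses import dataclass, field
import math
import random

@dataclass
class RSUAgent:
    rsu_id: int
    x: float
    y: float
    comm_range: float = 300.0  # metres; assumed value
    q_table: dict = field(default_factory=dict)  # stand-in for a per-agent Q-network

    def neighbors(self, agents):
        """Agents this RSU can cooperate with (within communication range)."""
        return [a for a in agents
                if a.rsu_id != self.rsu_id
                and math.hypot(a.x - self.x, a.y - self.y) <= self.comm_range]

    def act(self, local_obs, agents, eps=0.1):
        # Joint observation = own state plus visible neighbor ids, as in a
        # Markov game where each agent conditions on what it can see.
        joint_obs = (local_obs, tuple(sorted(a.rsu_id for a in self.neighbors(agents))))
        if random.random() < eps or joint_obs not in self.q_table:
            return random.choice(["serve_locally", "hand_off_to_neighbor"])
        return max(self.q_table[joint_obs], key=self.q_table[joint_obs].get)

agents = [RSUAgent(i, x=250.0 * i, y=0.0) for i in range(4)]
print([len(a.neighbors(agents)) for a in agents])  # RSUs 250 m apart: [1, 2, 2, 1]
```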
(3) A vehicular edge computing simulation system with front-end interaction is designed and implemented. The system consists of three modules: a front-end interaction module, a simulation module, and a background service module. The front-end interaction module provides users with a unified parameter input function and displays the results to them. The simulation module simulates the task scheduling process in the vehicular edge computing scenario and records the data generated during simulation in the form of logs. The background service module is responsible for receiving front-end data and starting the simulation service (a sketch of this workflow follows). Test results show that the system is practical and operable.
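As one illustration of the workflow in (3), the sketch below shows how the background service module might accept parameters from the front end, drive a simulation run, and record per-step data as logs. The parameter names, the random stand-ins for the trained policy and cost model, and the log format are all assumptions, not the system's actual interface.

```python
# Sketch of the background service module in (3): accept front-end parameters,
# run the simulation, and record per-step data as logs.
import json
import logging
import random

logging.basicConfig(filename="simulation.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")

def run_simulation(params: dict) -> dict:
    """Simulate `num_steps` scheduling rounds and log each decision."""
    random.seed(params.get("seed", 0))
    total_cost = 0.0
    for step in range(params["num_steps"]):
        action = random.choice(["local", "rsu", "cloud"])  # stand-in for the trained policy
        cost = random.uniform(0.1, 1.0)                    # stand-in energy-cost model
        total_cost += cost
        logging.info(json.dumps({"step": step, "action": action, "cost": round(cost, 3)}))
    return {"total_cost": round(total_cost, 3), "steps": params["num_steps"]}

# The front-end interaction module would submit something like this:
print(run_simulation({"num_steps": 5, "seed": 42}))
```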