In recent years, with the rapid development of intelligent connected vehicles, the degree of vehicle informatization and intelligence has continuously improved, placing strict requirements on in-vehicle computing power; insufficient computing power has become a key problem restricting further development. Vehicle edge computing is a new computing paradigm proposed to meet the exponentially growing demand for computing resources in intelligent connected vehicles. It allows vehicles to offload computing tasks to edge servers, extending computation, communication, and storage to the network edge closer to the vehicle, thereby effectively relieving the pressure on vehicle resources and reducing latency and bandwidth consumption. However, how to make efficient task-offloading decisions and allocate computing resources remains a key open problem and has become a research hotspot. This paper takes vehicle edge computing as the research background and focuses on problem modeling and solving for offloading decisions and computing resource allocation. The main research work of this paper is as follows:

(1) Existing studies have given little consideration to the rapid change of network topology caused by high-speed vehicle motion, the task offloading of moving vehicles in continuous time, and the randomness of vehicle task arrival. To address these problems, a continuous-time vehicle edge computing model is developed and formulated as a Markov decision process whose state comprises seven components and whose action comprises two components, and a deep reinforcement learning model is built to solve it. In addition, to address the poor results caused by the discrete-continuous hybrid decision problem, a split-order deep reinforcement learning algorithm is proposed in which a first-order decision network is nested at the input layer. Simulation experiments show that the algorithm keeps energy consumption low and has significant advantages in terms of task completion rate, latency, and reward.

(2) High-speed vehicle movement causes rapid changes in network topology and makes vehicles aggregate from time to time. Bursts of task-offloading requests in vehicle-dense areas can exhaust the computational resources of edge servers, causing vehicle task-processing latency to soar. To this end, collaborative computing among edge servers is introduced, and a two-tier task offloading and resource allocation algorithm based on deep reinforcement learning is explored. To better balance the edge server load, an improved two-tier resource-constrained deep reinforcement learning offloading algorithm is proposed. Simulation experiments show that the algorithm guarantees the task execution success rate and achieves better results in terms of latency and energy consumption.

(3) To address the difficulty of building a universal environment model in vehicle edge computing research, a multi-time-slot vehicle edge computing task offloading and resource allocation model is constructed. Continuous time is abstracted into a sequence of time slots, and the vehicle motion state, computational resources, and computational tasks are dynamically pushed into per-slot queues to form continuous vehicle motion, task, and computation models. Based on this model, a generic dynamic vehicle edge computing simulation system for task offloading and resource allocation is designed. It can simulate task offloading and resource allocation according to user-designed algorithms, display the process in real time through a graphical interface, and finally output detailed simulation results with an intuitive graphical display.
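The split-order decision in (1), where a discrete offloading choice is made first and the continuous resource-allocation action is conditioned on it, can be illustrated with a minimal sketch. The network sizes, the random (untrained) weights, and the three offloading targets below are illustrative assumptions, not the thesis's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

STATE_DIM = 7   # the seven-component MDP state
N_TARGETS = 3   # hypothetical: local execution or one of two edge servers

# First-order network: scores each discrete offloading target.
W1 = rng.normal(size=(STATE_DIM, N_TARGETS))

# Second-order network: conditioned on the chosen target, outputs a
# continuous resource-allocation fraction in (0, 1).
W2 = rng.normal(size=STATE_DIM + N_TARGETS)

def split_order_decision(state):
    """Return (target, fraction): the discrete choice is made first,
    then the continuous allocation is computed conditioned on it."""
    scores = state @ W1
    target = int(np.argmax(scores))            # first order: discrete
    one_hot = np.eye(N_TARGETS)[target]
    z = np.concatenate([state, one_hot]) @ W2
    fraction = float(1.0 / (1.0 + np.exp(-z))) # second order: continuous
    return target, fraction

state = rng.normal(size=STATE_DIM)
target, fraction = split_order_decision(state)
print(target, fraction)
```

Conditioning the second stage on the first-stage output is what keeps the hybrid action consistent: the continuous allocation is always produced for the target that was actually chosen.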
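The two-tier, resource-constrained offloading idea in (2) can be sketched with a simple rule-based stand-in (the thesis's actual policy is learned by deep reinforcement learning); the server names, capacities, and reserve threshold here are hypothetical:

```python
# Hypothetical remaining capacities (CPU cycles available this slot).
servers = {"MEC-A": 2.0e9, "MEC-B": 6.0e9, "MEC-C": 5.0e9}

def two_tier_offload(task_cycles, nearest, servers, reserve=1.0e9):
    """Tier 1: try the vehicle's nearest edge server. Tier 2: if accepting
    the task would push that server below a reserve threshold, forward the
    task to the least-loaded collaborating server instead."""
    if servers[nearest] - task_cycles >= reserve:
        chosen = nearest
    else:
        # Least loaded = most remaining capacity.
        chosen = max(servers, key=servers.get)
    servers[chosen] -= task_cycles
    return chosen

# MEC-A would drop below the reserve, so the task is forwarded.
chosen = two_tier_offload(task_cycles=1.5e9, nearest="MEC-A", servers=servers)
print(chosen)
```

The reserve threshold is what prevents the resource exhaustion described above: a nearly saturated server stops accepting new tasks before its capacity is fully depleted.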
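The multi-time-slot abstraction in (3), with the motion state and randomly arriving tasks pushed into per-slot queues, might look like the following sketch; the slot length, vehicle speed, arrival probability, and task size are assumed values, and the motion model is simplified to constant speed:

```python
import random
from collections import deque

random.seed(1)

SLOT = 0.1      # assumed slot length, seconds
N_SLOTS = 5

position_q = deque()   # vehicle motion state, one entry per slot
task_q = deque()       # tasks pushed in as they randomly arrive

pos, speed = 0.0, 20.0  # metres, metres/second (assumed constant)
for t in range(N_SLOTS):
    pos += speed * SLOT                  # advance the motion model one slot
    position_q.append((t, round(pos, 2)))
    if random.random() < 0.5:            # random task arrival
        task_q.append({"slot": t, "cycles": 1e8})

print(list(position_q), len(task_q))
```

Discretizing continuous time this way is what lets a user-supplied offloading algorithm be evaluated slot by slot against the same queues, which is the interface the simulation system described above builds on.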