The rapid development of vehicular network technology has popularized many new vehicular applications with stringent delay requirements. Vehicular Edge Computing (VEC), a promising approach to guaranteeing the quality of vehicular services, accelerates task execution through computation offloading by integrating edge computing functionalities. Delay and energy are the key metrics for evaluating computation offloading performance in edge computing. However, in a VEC system, vehicle movement causes handovers between edge servers, which incurs extra delay and degrades user experience. In addition, the varying available computational capacity and network topology make it harder to achieve optimal delay and energy performance in this scenario. This thesis focuses on delay and energy optimization for task offloading under different VEC scenarios, covering the following three aspects.

First, to tackle the longer delay caused by vehicle mobility in the VEC scenario, a temporal-spatial task offloading scheme is proposed. Based on an analysis of the uploading, computing, transferring, and receiving stages of task execution during vehicle movement, an expression for the total task completion delay is derived, and the energy-constrained delay minimization problem is formulated as non-linear mixed-integer programming. To handle its non-convexity, the original problem is decomposed into two sub-problems by analyzing the temporal-spatial correlation of the VEC system; the sub-problems are then solved with a two-stage decision tree algorithm and dynamic programming. Simulation results show that the proposed temporal-spatial scheme further reduces delay compared with the optimal spatial offloading and nearest-server offloading schemes.

Second, to address the extra migration costs incurred when the sojourn time within the coverage of an edge
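The temporal-spatial idea can be illustrated with a toy model; the sketch below is not the thesis's formulation, and every server count, delay, and energy value in it is hypothetical. The vehicle uploads its task at one edge server along its route and receives the result at a later one, and the scheme searches for the pair that minimizes completion delay under an energy budget.

```python
# Toy model of temporal-spatial offloading (all values hypothetical):
# the vehicle uploads its task at edge server i and drives on, so the
# result must be transferred (j - i) hops to the server j where it is
# received. Completion delay = upload + compute + per-hop transfer +
# download, and the upload energy must respect a battery budget.

def best_offloading_pair(t_upload, t_compute, t_hop, t_download,
                         e_upload, e_budget):
    """Return (delay, i, j): the minimum-delay upload/receive server
    pair among those satisfying the energy constraint."""
    n = len(t_upload)
    best = None
    for i in range(n):                    # spatial choice: where to upload
        if e_upload[i] > e_budget:
            continue                      # violates the energy constraint
        for j in range(i, n):             # temporal choice: where to receive
            delay = (t_upload[i] + t_compute[i]
                     + t_hop * (j - i)    # result migrates j - i hops
                     + t_download[j])
            if best is None or delay < best[0]:
                best = (delay, i, j)
    return best
```

Enumerating the feasible pairs and keeping the cheapest mirrors the two coupled decisions (where to upload, where to receive) that the thesis's decomposition separates.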
server coverage is insufficient for offloading to complete successfully, a deep reinforcement learning task offloading scheme is proposed. Based on probability theory, the migration costs and corresponding probabilities of different offloading selections are derived, and a sequential decision-making problem is formulated to minimize the overall cost of delay and energy consumption. Exploiting the sequential structure of the decisions, a deep Q-network algorithm combined with Bayesian inference is proposed, which guarantees the accuracy of the input states and offloading decisions. Simulation results demonstrate that the proposed scheme effectively reduces task execution costs compared with traditional offloading schemes, and verify the convergence of the learning algorithm.

Third, to handle the uncertainty introduced by changes in network topology and computing resources during task offloading in the VEC scenario, a distributed learning task offloading scheme is proposed. Based on an analysis of the execution process of tasks offloaded to servers on different sides, the task completion delay expressions are derived and an average delay minimization problem is established. Under the condition that the vehicle lacks information about the surrounding environment, a utility function is designed from a series of historical observations based on Multi-Armed Bandit (MAB) theory, and offloading decisions are made according to the output probability distribution. Simulation results show that, compared with typical MAB-based offloading schemes, the proposed scheme achieves superior delay performance and better adapts to the dynamic environment.
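As a rough, dependency-free illustration of the sequential decision-making formulation, the sketch below substitutes tabular Q-learning for the thesis's deep Q network and omits the Bayesian inference component entirely; the states, actions, and costs are all made up. At each road segment the vehicle either offloads to the current server or defers to the next one, trading a small waiting cost against a cheaper server later.

```python
import random

# Tabular Q-learning stand-in for a deep Q network (hypothetical costs):
# at each road segment the vehicle either offloads to the current edge
# server (terminal, cost offload_cost[s]) or defers to the next server
# (cost wait_cost); at the last server it must offload.

def learn_offloading_policy(offload_cost, wait_cost=0.2, episodes=2000,
                            alpha=0.3, gamma=1.0, eps=0.2, seed=0):
    n = len(offload_cost)
    Q = [[0.0, 0.0] for _ in range(n)]    # actions: 0 = offload, 1 = defer
    rng = random.Random(seed)
    for _ in range(episodes):
        s = 0
        while True:
            # epsilon-greedy action selection over the cost-to-go table
            a = rng.randrange(2) if rng.random() < eps \
                else min((0, 1), key=lambda x: Q[s][x])
            if a == 0 or s == n - 1:      # offloading ends the episode
                cost, done, s2 = offload_cost[s], True, s
            else:
                cost, done, s2 = wait_cost, False, s + 1
            target = cost if done else cost + gamma * min(Q[s2])
            Q[s][a] += alpha * (target - Q[s][a])
            if done:
                break
            s = s2
    for s in range(n):                    # greedy policy: first offload point
        if s == n - 1 or Q[s][0] <= Q[s][1]:
            return s, Q
```

With costs `[1.0, 0.4, 0.8]` the learned greedy policy defers past the first server and offloads at the second, since 0.2 + 0.4 beats offloading immediately at cost 1.0.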
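The MAB-based selection can likewise be sketched with a standard lower-confidence-bound rule; the thesis's actual utility function and output probability distribution are not reproduced here, and the function names and server delays below are illustrative. Each candidate server is an arm, and the utility combines the empirical mean delay with an exploration bonus built from the history of observations.

```python
import math

# Lower-confidence-bound server selection (made-up delays): pick the arm
# whose empirical mean delay, discounted by an exploration bonus that
# shrinks with the number of past observations, is lowest.

def lcb_offload(observe_delay, n_arms, rounds, c=2.0):
    counts = [0] * n_arms                 # pulls per candidate server
    means = [0.0] * n_arms                # empirical mean delay per server
    for t in range(rounds):
        if t < n_arms:
            arm = t                       # sample every server once
        else:
            arm = min(range(n_arms),      # utility from observation history
                      key=lambda a: means[a]
                      - math.sqrt(c * math.log(t) / counts[a]))
        d = observe_delay(arm)
        counts[arm] += 1
        means[arm] += (d - means[arm]) / counts[arm]
    return counts, means
```

Because the bonus term grows only logarithmically while the pull count grows linearly for the best server, the selection concentrates on the lowest-delay server while still probing the others as the environment is learned.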