
Research On Offloading Method Of Vehicular Computing Task Based On Deep Reinforcement Learning Strategy

Posted on: 2024-03-19
Degree: Master
Type: Thesis
Country: China
Candidate: C Yang
Full Text: PDF
GTID: 2542307127960699
Subject: Computer technology
Abstract/Summary:
The Internet of Vehicles (IoV) can provide services for vehicles and pedestrians through wireless communication technology. In the IoV, vehicles must communicate frequently with surrounding servers, which incurs significant delay, and task transmission imposes strict requirements on latency and energy consumption. By introducing Mobile Edge Computing (MEC) technology, tasks can be offloaded to roadside servers for execution, meeting the vehicle's demand for low latency. Computation offloading has therefore attracted much attention as one of the key technologies of MEC. This thesis studies the computation offloading problem in the IoV scenario. Using reinforcement learning and Multi-Armed Bandit (MAB) theory as tools, two computation offloading strategies are proposed, and simulation experiments demonstrate their superiority over existing offloading methods. The two offloading methods proposed in this thesis are as follows:

(1) In the IoV scenario, the high-speed mobility of a vehicle means it may interact with multiple MEC servers within a short period of time, so the MEC server to which the vehicle currently offloads a task may differ from the server the vehicle ultimately travels to. A traditional computation offloading method must transmit the computation results from the currently connected server to the vehicle's final server over the backhaul link, and this transmission incurs substantial delay. For this scenario, this thesis designs an Offloading Method of Vehicular Computing Task Based on Deep Reinforcement Learning Strategy (OBDRLS). The vehicle collects MEC status information through an SDN server so as to find the least-loaded server, and computation results are relayed between vehicles instead of passing through the infrastructure, saving task transmission delay; a deep reinforcement learning algorithm is applied to the computation offloading problem. Finally, simulation experiments and real-scene tests demonstrate the superiority of the proposed algorithm.

(2) This thesis investigates the task offloading problem in Vehicular Edge Computing (VEC) systems. Because the vehicular task offloading environment changes constantly, the network topology changes rapidly, and these uncertainties pose additional challenges for task offloading. This thesis proposes an Adaptive Offloading Method of Vehicular Computing Task Based on MAB Theory (AOMBMT). The approach enables vehicles to learn the latency performance of neighboring vehicles while offloading tasks, without frequently exchanging state information. Considering the time-varying nature of task loads and candidate services, the existing MAB algorithm is improved to be input-aware and event-aware, so that AOMBMT can adapt to the dynamic vehicular task offloading environment. The average latency and energy consumption of AOMBMT are evaluated in simulation scenarios and real highway scenarios, and the results show that this method achieves lower latency and energy consumption than existing offloading methods.
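The bandit-based selection in (2) can be illustrated with a small sketch: each neighboring vehicle is an arm, the observed task delay is the cost, and the arm with the lowest delay estimate minus an exploration bonus is chosen. This is a minimal illustration only, not the thesis's exact formulation; the names, the scaling of the exploration term by task size (the "input-aware" part), and the reset of statistics when the candidate set changes (the "event-aware" part) are assumptions made for the example.

```python
import math
import random

class InputAwareUCB:
    """Sketch of an input-aware, event-aware bandit for choosing which
    neighboring vehicle to offload a task to (lower delay is better)."""

    def __init__(self, beta=1.0):
        self.beta = beta    # exploration weight (assumed tunable)
        self.counts = {}    # arm -> number of times selected
        self.means = {}     # arm -> empirical mean delay
        self.t = 0          # total selections so far

    def update_candidates(self, arms):
        # Event-aware: newly appeared arms start unexplored,
        # vanished arms are dropped from the statistics.
        self.counts = {a: self.counts.get(a, 0) for a in arms}
        self.means = {a: self.means.get(a, 0.0) for a in arms}

    def select(self, task_size):
        self.t += 1
        # Try every unexplored candidate once.
        for arm, n in self.counts.items():
            if n == 0:
                return arm
        # Input-aware lower-confidence-bound rule: larger tasks
        # weight the exploration bonus more heavily.
        def score(arm):
            bonus = self.beta * task_size * math.sqrt(
                2 * math.log(self.t) / self.counts[arm])
            return self.means[arm] - bonus
        return min(self.counts, key=score)

    def observe(self, arm, delay):
        # Incremental update of the arm's mean observed delay.
        self.counts[arm] += 1
        self.means[arm] += (delay - self.means[arm]) / self.counts[arm]
```

In use, the vehicle would call `update_candidates` whenever its neighbor set changes, `select` before each offload, and `observe` with the measured delay afterwards, so that no explicit state exchange with neighbors is needed.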
Keywords/Search Tags: Internet of Vehicles, Computation offloading, Mobile edge computing, Deep reinforcement learning, MAB theory