
Research On Online Task Offloading In MEC Based On Deep Reinforcement Learning

Posted on: 2022-04-26
Degree: Master
Type: Thesis
Country: China
Candidate: S S Liang
Full Text: PDF
GTID: 2518306524984369
Subject: Electronics and Communications Engineering
Abstract/Summary:
Nowadays, applications running on mobile devices are increasingly complex and demand ever-greater computing power. Mobile edge computing (MEC) provides an effective way to handle such tasks, and task offloading and resource allocation are its key problems: making the right offloading decision in real time and allocating network and computing resources reasonably improves MEC service performance and the user experience. In recent years, reinforcement learning has developed rapidly, and its application in MEC has attracted much attention. Since the MEC environment contains many uncertain factors, reinforcement learning, which interacts with the environment and receives feedback and rewards without prior knowledge of the environment, can learn a good dynamic decision scheme. Deep learning can use powerful neural networks to extract features from complex environments. Deep reinforcement learning, a combination of the two, joins the powerful perception ability of deep learning with the exploration and interaction ability of reinforcement learning, and can be used to solve problems in relatively complex environments.

This thesis mainly adopts deep reinforcement learning to solve the offloading and resource allocation problems of online tasks in MEC. The research work is summarized as follows:

1) For non-preemptive computing servers, the thesis proposes an offloading decision and non-preemptive computing resource allocation scheme for online tasks based on Deep Q-Network (DQN). At the same time, tasks are scheduled to meet their own deadlines. The simulation results show that the DQN-based Reservation of Future Resources (RFR) algorithm achieves a task success rate 12% higher than that of the heuristic comparison algorithm in the specific non-uniform arrival pattern where large tasks follow small tasks. In the uniform-arrival pattern, setting appropriate reward function parameters allows more tasks to meet their deadlines at relatively low energy consumption.

2) For mobile users with renewable energy harvesting devices and tasks with different hard deadlines, a DDPG-based Dynamic Allocation algorithm (DA-DDPG) for hard deadlines is proposed. The simulation results show that DA-DDPG makes reasonable task decisions and allocates power so as to reduce the task discard rate and average completion time when tasks have different, relatively tight deadlines and the reward feedback is only briefly delayed. In the soft-deadline scenario, the performance advantage of DA-DDPG is more pronounced under low battery levels or poor channel conditions.
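To make the value-based offloading decision in contribution 1) concrete, the sketch below trains a tabular Q-learning agent on a toy two-action (local vs. offload) environment. Everything here is invented for illustration: the state features, delay model, and reward are not the thesis's simulator, and the thesis replaces the Q-table with a deep network (DQN) plus deadline-aware resource reservation (RFR).

```python
import random

# Toy online-offloading environment (hypothetical; invented for illustration,
# not the simulator used in the thesis).
# State:  (queue_len, channel)  -- server queue length 0..4, channel quality 0..2
# Action: 0 = execute locally, 1 = offload to the MEC server
# Reward: negative task delay, so the agent learns to minimise delay.

class ToyMECEnv:
    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.queue = self.rng.randint(0, 4)
        self.channel = self.rng.randint(0, 2)

    def state(self):
        return (self.queue, self.channel)

    def step(self, action):
        if action == 0:
            delay = 5.0                                    # fixed local-execution delay
        else:
            delay = (3 - self.channel) * 2.0 + self.queue  # transmission + queueing delay
        # Next task arrives; queue and channel evolve randomly.
        self.queue = min(4, max(0, self.queue + self.rng.choice((-1, 0, 1))))
        self.channel = self.rng.randint(0, 2)
        return self.state(), -delay

def train_q_learning(steps=5000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning stand-in for the DQN offloading agent."""
    env = ToyMECEnv(seed)
    rng = random.Random(seed + 1)
    Q = {}                                                 # (state, action) -> value
    s = env.state()
    for _ in range(steps):
        if rng.random() < eps:                             # epsilon-greedy exploration
            a = rng.randint(0, 1)
        else:
            a = max((0, 1), key=lambda x: Q.get((s, x), 0.0))
        s_next, r = env.step(a)
        best_next = max(Q.get((s_next, x), 0.0) for x in (0, 1))
        old = Q.get((s, a), 0.0)
        Q[(s, a)] = old + alpha * (r + gamma * best_next - old)  # TD update
        s = s_next
    return Q
```

Under these toy dynamics the learned greedy policy should offload when the queue is empty and the channel is good, and execute locally when the queue is full and the channel is poor; the thesis's RFR scheme additionally schedules each offloaded task against its deadline.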
Keywords/Search Tags: Mobile edge computing (MEC), Computation offloading, Online scheduling, Renewable energy harvesting, Resource allocation, Deep reinforcement learning