
Reinforcement Learning For Computation Offloading And Content Caching In Mobile Edge Computing

Posted on: 2022-04-30
Degree: Master
Type: Thesis
Country: China
Candidate: K Jiang
Full Text: PDF
GTID: 2518306521455724
Subject: Computer technology
Abstract/Summary:
In recent years, with the popularization of intelligent terminals and advances in wireless access technology in 5G networks, emerging mobile applications have developed at an unprecedented pace. As a supplement and extension of the cloud computing model, Mobile Edge Computing (MEC) extends network functions such as storage, communication, computing, control, and management from the centralized cloud to the edge of the network, effectively alleviating the high latency and unreliability caused by long transmission distances in cloud computing. However, the resources of MEC servers are still limited. How to reasonably allocate the computing, storage, and communication resources in the network to process massive edge data and realize the dynamic deployment of tasks and data is a major challenge. Therefore, this dissertation considers the dynamics and specific constraints of the network and designs effective computation offloading and edge caching strategies using reinforcement learning-based methods. The main research is divided into two parts according to the service characteristics:

(1) The joint optimization of computation offloading and resource allocation in dynamic multi-user MEC systems is investigated. First, from the perspective of the system as a whole, an optimization problem is formulated to minimize the energy consumption of the entire MEC system, taking into account the heterogeneity of devices, the uncertainty of resource demand, the limited resource capacity, and the delay sensitivity of computing tasks in a dynamic network. This energy-minimization problem is modeled as a Markov decision process, and the state space, action space, and reward function of the reinforcement learning agent are defined in detail. Then, Q-learning, a reinforcement learning method based on value iteration, is used to determine the computation offloading and resource allocation strategy. To avoid the dimensionality explosion that tabular Q-learning suffers from in large state spaces, a computation offloading and resource allocation method based on Double Deep Q-Network (DDQN) is further proposed (a toy sketch of the underlying Q-learning update appears after this abstract). Simulation results show that the proposed method not only effectively reduces the system energy consumption under different scenarios but also achieves a desirable average delay over all tasks.

(2) The edge caching strategy under a multi-layer edge caching architecture is studied. First, an optimization problem is formulated to minimize the long-term content delivery cost of the system under specific constraints. Then, the Markov decision process of single-agent reinforcement learning is extended to a multi-agent setting, and distributed Multi-Agent Reinforcement Learning (MARL) is used to solve the corresponding combinatorial multi-armed bandit problem under the framework of a stochastic game (a simplified sketch follows the abstract). In this method, each agent adaptively learns its best caching behavior in coordination with the other agents. To further reduce the computational complexity, parameter approximation is introduced to improve the distributed MARL-based edge caching method. Simulation results show that the proposed distributed edge caching method effectively improves the edge cache hit ratio in different scenarios, thereby reducing the long-term cost and average delay of content delivery in the system.
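The following is a minimal, illustrative Python sketch of the value-iteration-style Q-learning update used in part (1), applied to a toy offloading decision (local execution versus offloading to the MEC server). The state discretization, energy model, and i.i.d. task arrivals are assumptions made purely for illustration and do not reproduce the dissertation's system model or simulation environment.

```python
# Hedged sketch: tabular Q-learning for a toy computation offloading decision.
# States encode a (task size level, channel quality level) pair; actions choose
# between local execution and offloading. The reward is the negative energy
# cost, so maximizing reward corresponds to minimizing energy consumption.
import random

N_STATES = 16        # 4 task-size levels x 4 channel-quality levels (assumed)
N_ACTIONS = 2        # 0 = execute locally, 1 = offload to the MEC server
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def step(state, action):
    """Toy transition model: returns (next_state, reward)."""
    if action == 0:                              # local execution
        energy = 0.5 + 0.5 * (state % 4)         # grows with task size
    else:                                        # offloading
        energy = 0.8 + 0.4 * (state // 4)        # grows as channel quality degrades
    next_state = random.randrange(N_STATES)      # i.i.d. task arrivals (assumed)
    return next_state, -energy

state = random.randrange(N_STATES)
for _ in range(5000):
    # epsilon-greedy action selection
    if random.random() < EPS:
        action = random.randrange(N_ACTIONS)
    else:
        action = max(range(N_ACTIONS), key=lambda a: Q[state][a])
    next_state, reward = step(state, action)
    # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    td_target = reward + GAMMA * max(Q[next_state])
    Q[state][action] += ALPHA * (td_target - Q[state][action])
    state = next_state

print("greedy offloading decision per state:",
      [max(range(N_ACTIONS), key=lambda a: Q[s][a]) for s in range(N_STATES)])
```

The DDQN method mentioned in the abstract would replace the table with a neural network and decouple action selection (online network) from action evaluation (target network), which avoids the table's dimensionality problem and reduces Q-value overestimation; that extension is not shown here.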
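Part (2) casts distributed caching as a combinatorial multi-armed bandit under a stochastic-game framework. The sketch below uses independent epsilon-greedy learners at each edge node as a simplified stand-in for the distributed MARL method; the content catalogue size, cache capacity, Zipf-like request pattern, and hit-count reward are all illustrative assumptions rather than the dissertation's system model.

```python
# Hedged sketch: each edge node learns which contents to cache by keeping a
# per-content hit-value estimate and caching the top-C contents, with occasional
# random exploration (a simplified combinatorial bandit, not the full MARL scheme).
import random
from collections import defaultdict

N_CONTENTS, CACHE_SIZE, N_AGENTS, EPS = 20, 4, 3, 0.1

class CachingAgent:
    def __init__(self):
        self.value = defaultdict(float)   # estimated hits per content
        self.count = defaultdict(int)

    def choose_cache(self):
        # Greedily cache the C highest-valued contents, occasionally swapping
        # one slot for a random content to keep exploring.
        ranked = sorted(range(N_CONTENTS), key=lambda c: self.value[c], reverse=True)
        cache = ranked[:CACHE_SIZE]
        if random.random() < EPS:
            cache[random.randrange(CACHE_SIZE)] = random.randrange(N_CONTENTS)
        return set(cache)

    def update(self, cache, requests):
        # Reward for each cached content: how many requests it served (hits).
        for c in cache:
            hits = requests.count(c)
            self.count[c] += 1
            self.value[c] += (hits - self.value[c]) / self.count[c]

agents = [CachingAgent() for _ in range(N_AGENTS)]
popularity = [1.0 / (rank + 1) for rank in range(N_CONTENTS)]   # Zipf-like demand

for _ in range(2000):
    requests = random.choices(range(N_CONTENTS), weights=popularity, k=10)
    for agent in agents:
        cache = agent.choose_cache()
        agent.update(cache, requests)

for i, agent in enumerate(agents):
    greedy = sorted(range(N_CONTENTS), key=lambda c: agent.value[c], reverse=True)
    print(f"edge node {i} caches contents:", sorted(greedy[:CACHE_SIZE]))
```

The dissertation's method additionally models the interaction among agents as a stochastic game and uses parameter approximation to reduce the computational complexity; neither aspect is captured by these independent learners.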
Keywords/Search Tags: mobile edge computing, reinforcement learning, computation offloading, resource allocation, edge caching