
Research On Joint Optimization Of Resources In Mobile Edge Networks

Posted on: 2023-01-20
Degree: Doctor
Type: Dissertation
Country: China
Candidate: Z Y Wang
Full Text: PDF
GTID: 1528306911495184
Subject: Electronic Science and Technology
Abstract/Summary:
With the advancement of Internet of Things technology and the advent of rich and diverse new services, the massive amount of data and the varied requirements of these services have driven the emergence and development of Mobile Edge Computing (MEC), a new computing paradigm. The combination of MEC and wireless networks allows multi-dimensional resources, such as communication, computation, and caching resources, to coexist, which challenges resource management in mobile edge networks. It is therefore of key importance to study the joint optimization of resources in mobile edge networks in order to guarantee the quality of service (QoS), improve the quality of experience, and enhance resource utilization.

In the edge network, the wireless channel state changes dynamically, service requirements are diverse, and multi-dimensional resources are unevenly distributed. However, most existing work assumes a complete system model in which the dynamics of the environment can be obtained accurately. The first key problem in mobile edge networks is therefore how to make dynamic, adaptive resource-optimization decisions that optimize long-term system performance. Secondly, existing work seldom considers improving the communication performance between users and edge computing nodes, which may prevent the advantages of the MEC system from being fully exploited. Hence, from the communication perspective, it is imperative to enhance the communication link by improving the channel conditions. Finally, MEC can provide multi-dimensional resources at the network edge for different services in network slicing, meeting diverse service requirements while guaranteeing the QoS of users; however, existing research on resource allocation for network slicing seldom considers the joint optimization of communication, computing, and caching resources. It is therefore crucial to study multi-dimensional resource collaboration in multi-service scenarios in order to improve the performance of edge network slicing.

This dissertation studies the above problems in the current work, and the specific research is as follows:

(1) Research on the Deep Q-Network based computation offloading and resource allocation scheme. This work studies the computation offloading and resource allocation problem in a multi-user, single-MEC-server scenario, with the objective of minimizing the weighted sum of latency and energy consumption (the overhead) over all users. Since the wireless channel state changes dynamically, service requirements are diverse, and multi-dimensional resources are unevenly distributed, this work applies the model-free reinforcement learning (RL) framework to formulate and solve the problem. The agent obtains rewards through interaction with the environment, estimates the performance of the learned policy in the form of a value function, and then chooses the computation offloading and resource allocation action with the least overhead given its current state. Because tabular value-function estimation cannot efficiently handle high-dimensional state spaces, this work adopts a deep reinforcement learning (DRL) algorithm that uses deep neural networks to approximate the value function. Simulation results verify the effectiveness of the Deep Q-Network (DQN) based algorithm and compare its performance with benchmark algorithms under different parameter settings.

(2) Research on the joint optimization of communication and computation resources for an intelligent reflecting surface assisted edge network. This work proposes an edge heterogeneous network assisted by an intelligent reflecting surface (IRS), composed of a macro base station and small base stations equipped with MEC servers. In view of the unstable communication links between users and edge computing nodes, an IRS with low cost and low energy consumption is used to provide auxiliary links for users, and the communication performance between users and edge computing nodes is enhanced by intelligently adjusting the channel state. The user association, computation offloading, resource allocation, and IRS phase-shift design are jointly optimized to minimize the long-term energy consumption while guaranteeing the QoS of users. The challenge of this optimization problem is that user association is updated on a different timescale from the other decisions; hence, this work proposes a two-timescale mechanism. For the long-timescale user association decision, low-complexity matching theory is applied to perform one-to-many matching. On the short timescale, the computation offloading, resource allocation, and IRS phase-shift design strategy is learned by the DQN algorithm to adapt to the dynamic nature of the wireless environment. Simulation results verify the convergence of the proposed two-timescale algorithm, and comparisons of energy consumption with benchmark algorithms in different simulation environments show that the proposed algorithm effectively reduces the energy consumption of the edge network.

(3) Research on the joint optimization of communication, computation, and caching resources for network slicing. This work studies the joint optimization of communication, computation, and caching resources in edge network slicing. The objective of the two-level resource allocation problem, involving the slice level and the user level, is to maximize the utility obtained by mobile virtual network operators while guaranteeing the QoS of each slice. A dynamic, adaptive resource allocation scheme is realized using a deep reinforcement learning approach. Specifically, this work proposes a novel DRL approach named Twin-Actor Deep Deterministic Policy Gradient (Twin-Actor DDPG). Since the action space is continuous, DDPG is adopted, where the actor generates the deterministic policy while the critic evaluates the policy and guides the actor to improve it. A novel twin-actor structure replaces the single actor of DDPG, so that the slice-level and user-level resource allocation actions can be generated separately. Compared with the traditional DDPG method, numerical simulations verify the convergence and high training efficiency of the proposed Twin-Actor DDPG algorithm. Furthermore, the proposed algorithm obtains the policy with the highest utility compared with discrete-action-space DRL algorithms and the benchmark algorithm.
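The DQN-style value approximation described in (1) can be sketched as follows. This is a minimal illustration only: the state features (channel gain, task size, server load), network sizes, actions, and reward are assumptions for the sketch, not the dissertation's actual system model, and the training loop is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

STATE_DIM = 3    # assumed features: channel gain, task size, server load
N_ACTIONS = 2    # 0 = compute locally, 1 = offload to the MEC server
HIDDEN = 16

# Two-layer Q-network with random (untrained) weights.
W1 = rng.normal(0, 0.1, (STATE_DIM, HIDDEN))
W2 = rng.normal(0, 0.1, (HIDDEN, N_ACTIONS))

def q_values(state):
    """Approximate Q(s, a) for every action with a small MLP."""
    h = np.maximum(state @ W1, 0.0)      # ReLU hidden layer
    return h @ W2

def select_action(state, epsilon=0.1):
    """Epsilon-greedy policy over the estimated Q-values."""
    if rng.random() < epsilon:
        return int(rng.integers(N_ACTIONS))
    return int(np.argmax(q_values(state)))

def td_target(reward, next_state, gamma=0.99):
    """One-step TD target the Q-network would be trained toward."""
    return reward + gamma * float(np.max(q_values(next_state)))

state = np.array([0.8, 0.5, 0.2])        # assumed normalized state
action = select_action(state, epsilon=0.0)   # greedy choice
target = td_target(reward=-1.0, next_state=state)
```

In a full DQN the weights would be updated by gradient descent on the squared error between `q_values(state)[action]` and `target`, typically with experience replay and a separate target network.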
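The one-to-many matching used for long-timescale user association in (2) can be sketched with a deferred-acceptance style procedure: users propose to small base stations in preference order, and each station keeps at most `quota` users, rejecting the least preferred. The preference lists and quota below are illustrative placeholders; the dissertation's actual preference metric is not specified here.

```python
def one_to_many_matching(user_prefs, station_prefs, quota):
    """Deferred-acceptance style user-to-station association.

    user_prefs:    dict user -> list of stations, most preferred first
    station_prefs: dict station -> list of users, most preferred first
    quota:         max users each station may serve
    """
    matched = {s: [] for s in station_prefs}
    next_choice = {u: 0 for u in user_prefs}
    free = list(user_prefs)
    while free:
        u = free.pop()
        if next_choice[u] >= len(user_prefs[u]):
            continue                      # user exhausted all stations
        s = user_prefs[u][next_choice[u]]
        next_choice[u] += 1
        matched[s].append(u)
        if len(matched[s]) > quota:
            rank = station_prefs[s].index
            worst = max(matched[s], key=rank)   # least preferred user
            matched[s].remove(worst)
            free.append(worst)            # rejected user proposes again
    return matched

# Hypothetical example: three users, two stations, quota of two.
user_prefs = {"u1": ["s1", "s2"], "u2": ["s1", "s2"], "u3": ["s1", "s2"]}
station_prefs = {"s1": ["u1", "u2", "u3"], "s2": ["u1", "u2", "u3"]}
matching = one_to_many_matching(user_prefs, station_prefs, quota=2)
```

The resulting matching is stable with respect to the given preference lists, which is why matching theory offers a low-complexity alternative to exhaustively searching user-association combinations.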
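The twin-actor structure in (3) can be sketched as two actor heads sharing one critic: one actor emits the slice-level allocation and the other the user-level allocation, while the critic scores the joint action. The dimensions, softmax normalization of allocations, and network shapes below are assumptions for illustration, not the dissertation's exact design.

```python
import numpy as np

rng = np.random.default_rng(1)

STATE_DIM, N_SLICES, N_USERS = 4, 3, 5   # assumed problem sizes

def mlp(in_dim, out_dim, hidden=16):
    """Random two-layer network parameters (untrained)."""
    return (rng.normal(0, 0.1, (in_dim, hidden)),
            rng.normal(0, 0.1, (hidden, out_dim)))

def forward(params, x):
    w1, w2 = params
    return np.maximum(x @ w1, 0.0) @ w2

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

slice_actor = mlp(STATE_DIM, N_SLICES)   # slice-level allocation head
user_actor = mlp(STATE_DIM, N_USERS)     # user-level allocation head
critic = mlp(STATE_DIM + N_SLICES + N_USERS, 1)

def act(state):
    """Deterministic policy: each actor emits a resource-fraction vector."""
    a_slice = softmax(forward(slice_actor, state))
    a_user = softmax(forward(user_actor, state))
    return a_slice, a_user

def q_value(state, a_slice, a_user):
    """A single critic evaluates the joint two-level action."""
    return float(forward(critic, np.concatenate([state, a_slice, a_user]))[0])

state = np.zeros(STATE_DIM)
a_slice, a_user = act(state)
q = q_value(state, a_slice, a_user)
```

Because both heads are differentiable, a DDPG-style update can backpropagate the critic's gradient through each actor separately, which is presumably what allows the slice-level and user-level policies to be learned jointly yet generated independently.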
Keywords/Search Tags:mobile edge network, resource optimization, network slicing, intelligent reflecting surface, deep reinforcement learning