
Research On Joint Resource Management Technology For Edge Intelligence

Posted on: 2024-07-30
Degree: Doctor
Type: Dissertation
Country: China
Candidate: Y Su
Full Text: PDF
GTID: 1528306944956789
Subject: Electronic Science and Technology
Abstract/Summary:
In recent years, thanks to the continuous progress of chips, computing power, and big data, as well as the rapid innovation of machine learning technology represented by deep learning, the development of artificial intelligence has once again reached a climax. At the same time, the popularity of the Internet of Things has connected hundreds of millions of terminal devices to the Internet, generating massive amounts of data at the edge of the network and giving birth to a new computing paradigm called edge computing. Under this trend, the combination of edge computing and artificial intelligence has become inevitable, resulting in a new cross-disciplinary field: edge intelligence. Today, edge intelligence has become a paradigm for optimizing machine learning model training and inference performance by fully utilizing the data and resources available in the device-edge-cloud continuum. However, many practical factors limit the improvement of edge intelligence and resource utilization efficiency, such as: (1) the difference and complexity of data distributions; (2) the limitation and diversity of edge resources; (3) divergent individual interests and competition. Considering these realistic factors, many studies have addressed machine learning model training and model inference in edge computing systems. However, key issues still need to be solved: the trade-off between training effect and training cost during model training, the trade-off between service revenue and service quality during model inference, and the trade-off between inference delay and inference energy consumption during model inference. To this end, this paper takes four key technologies of edge intelligence, namely federated learning, aggregation control, incentive mechanisms, and model partition, as its starting point, and conducts research on four joint resource management technologies for edge intelligence. Combining theoretical analysis and experimental verification, multiple joint resource management algorithms involving aggregation control, task offloading, service pricing, model partition, and computing and wireless resource allocation are proposed, improving the efficiency of machine learning model training and model inference. The specific contributions and achievements of this paper are summarized as follows.

1. This paper studies the joint optimization of aggregation frequency and resource allocation for resource-constrained hierarchical federated learning systems. Specifically, considering the training time and energy consumption constraints of devices, this paper establishes the computing and communication models of the hierarchical federated learning training process and formulates an optimization problem that minimizes the final global model loss function. An adaptive learning process suitable for hierarchical federated learning is designed to dynamically optimize aggregation frequency and resource allocation. Under different training time constraints, the proposed adaptive learning process reduces the final global model loss by 29.90% and improves model testing accuracy by 14.63% compared with the existing hierarchical federated learning resource allocation optimization scheme.

2. This paper studies the design of a distributed incentive mechanism based on game theory. Specifically, considering multi-cloud and multi-edge environments, this paper models the interaction between service providers and devices as a multi-leader multi-follower Stackelberg game and analyzes the existence of Nash equilibria in both the leader non-cooperative game among service providers and the follower non-cooperative game among devices. A distributed iterative proximal offloading algorithm and an iterative Stackelberg game pricing algorithm are designed. Compared with traditional cloud-based and edge-based machine learning task offloading schemes, the proposed distributed task offloading algorithm is closer to the socially optimal offloading scheme, and the price of anarchy (PoA) is always less than 200%. Furthermore, the proposed distributed pricing algorithm boosts the average revenue of service providers by 100% after a limited number of iterations, while increasing the average disutility of terminals by only 10%. Compared with existing centralized incentive mechanisms, the proposed incentive mechanism requires less private information and has higher execution efficiency.

3. This paper studies the design of an incentive mechanism based on a truthful combinatorial auction. Specifically, this paper considers personalized service and differentiated bidding: each edge server can allocate different computing and wireless resources to each device according to the device's delay and energy consumption constraints, and each device submits a different bid to each edge server according to the resources allocated. This paper proposes a truthful combinatorial auction mechanism consisting of a service cost optimization algorithm, a buyer-seller matching algorithm combining optimal matching and heuristic matching, and a corresponding payment determination algorithm. The proposed auction mechanism satisfies three desirable properties: individual rationality, incentive compatibility, and computational efficiency. Compared with existing auction mechanisms based on heuristic buyer-seller matching, the proposed truthful combinatorial auction mechanism improves social welfare by up to 75%.

4. This paper studies the joint optimization of model partition and resource allocation for energy-constrained hierarchical edge computing systems. Specifically, considering the energy budgets of the edge server and the cloud computing center in a hierarchical edge computing system, this paper establishes a model inference delay model based on queuing theory and formulates an optimization problem that minimizes the long-term average model inference delay. Based on Lyapunov optimization, this paper transforms the original optimization problem. Furthermore, based on the Deep Deterministic Policy Gradient (DDPG) algorithm, this paper designs a joint optimization algorithm of model partition and resource allocation that dynamically optimizes both. Compared with traditional cloud-based and edge-based model inference schemes, the proposed algorithm reduces the long-term average model inference delay by up to 11.19% and 7.79%, respectively. In addition, compared with existing model partition and resource allocation schemes that do not consider the energy budget, the proposed algorithm increases the long-term average model inference delay by only 3.08% while meeting the energy budget (the long-term average energy consumption of the edge server is reduced by 42.40%).
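To make the hierarchical federated learning structure in contribution 1 concrete, the following is a minimal sketch of two-level (device-edge-cloud) federated averaging on a toy linear-regression task. All quantities here (the aggregation frequencies K1 and K2, the learning rate, the device data) are illustrative assumptions, not the dissertation's actual settings; the dissertation's contribution is choosing these frequencies and the resource allocation adaptively, which this fixed-frequency sketch does not attempt.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters: K1 local gradient steps per edge aggregation,
# K2 edge aggregations per cloud (global) aggregation.
K1, K2, GLOBAL_ROUNDS, LR = 4, 3, 10, 0.1
true_w = np.array([2.0, -1.0])  # ground-truth model shared by all devices

def make_device():
    # Each device holds a small private regression dataset.
    X = rng.normal(size=(32, 2))
    y = X @ true_w + 0.01 * rng.normal(size=32)
    return X, y

edges = [[make_device() for _ in range(3)] for _ in range(2)]  # 2 edges x 3 devices
w_global = np.zeros(2)

def local_steps(w, X, y, steps):
    # Plain gradient descent on the device's local least-squares loss.
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w = w - LR * grad
    return w

for _ in range(GLOBAL_ROUNDS):
    edge_models = []
    for devices in edges:
        w_edge = w_global.copy()
        for _ in range(K2):                       # K2 edge-level rounds
            locals_ = [local_steps(w_edge, X, y, K1) for X, y in devices]
            w_edge = np.mean(locals_, axis=0)     # edge-level averaging
        edge_models.append(w_edge)
    w_global = np.mean(edge_models, axis=0)       # cloud-level averaging
```

Under the time and energy constraints the dissertation considers, K1 and K2 become decision variables rather than the constants used here.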
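The leader-follower structure of the Stackelberg game in contribution 2 can be illustrated with a deliberately simplified single-provider instance. The quadratic device utilities, the valuation parameters in `a`, and the grid search all are assumptions for illustration; the dissertation's setting has multiple competing leaders and a distributed iterative pricing algorithm, which this sketch does not reproduce.

```python
# Hypothetical quadratic utility: device i chooses demand d_i to maximize
#   a_i * d_i - d_i**2 / 2 - p * d_i,
# giving the best response d_i = max(a_i - p, 0).
a = [3.0, 4.0, 5.0]  # assumed device valuation parameters

def follower_best_response(p):
    # Followers react optimally to the announced price p.
    return [max(ai - p, 0.0) for ai in a]

def provider_revenue(p):
    # The leader anticipates the followers' reaction when evaluating p.
    return p * sum(follower_best_response(p))

# Leader: grid search over candidate prices. With every device active,
# revenue is p * (sum(a) - n*p), maximized analytically at p* = sum(a)/(2n) = 2.
prices = [i / 100 for i in range(1, 300)]
p_star = max(prices, key=provider_revenue)
d_star = follower_best_response(p_star)
```

The equilibrium found by this backward-induction search is the Stackelberg equilibrium of the toy game: neither the leader nor any follower can improve unilaterally.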
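The individual rationality and incentive compatibility properties claimed in contribution 3 can be seen in a much simpler stand-in mechanism: a uniform-price auction in which one edge server sells a few identical resource slots to unit-demand devices, and every winner pays the highest losing bid (its critical value). This is a special case of VCG, not the dissertation's combinatorial mechanism with differentiated bids and buyer-seller matching; the function and bid values below are illustrative.

```python
def auction(bids, capacity):
    # Allocate `capacity` identical slots to the highest bidders.
    order = sorted(range(len(bids)), key=lambda i: -bids[i])
    winners = order[:capacity]
    # Each winner pays the highest losing bid (0 if none): the critical
    # value below which it would have lost. Truthful bidding is dominant.
    price = bids[order[capacity]] if len(bids) > capacity else 0.0
    return winners, price

bids = [5.0, 8.0, 3.0, 6.0]       # devices' true valuations, bid truthfully
winners, price = auction(bids, capacity=2)
```

Individual rationality holds because every winner's bid is at least the price; incentive compatibility can be spot-checked by letting the losing device 0 (value 5.0) overbid at 7.0: it then wins but pays 6.0, ending up with negative utility.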
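For contribution 4, the Lyapunov part of the design can be sketched with a virtual energy-deficit queue and a per-slot drift-plus-penalty rule: cut the model at the layer minimizing V * delay + Q * energy, then update the queue against the energy budget. The layer profiles, budget, and V below are invented numbers, and a simple exhaustive search over cut points stands in for the DDPG agent the dissertation actually uses.

```python
# Hypothetical per-layer profile of a 6-layer DNN: cutting after layer k
# runs layers 1..k on the edge and the rest in the cloud (k = 0..6).
edge_delay  = [0, 5, 9, 14, 20, 27, 35]   # ms, cumulative edge compute
cloud_delay = [30, 26, 21, 17, 12, 6, 0]  # ms, remaining cloud compute
tx_delay    = [40, 18, 12, 10, 8, 8, 25]  # ms, intermediate-feature upload
edge_energy = [0, 2, 4, 7, 11, 16, 22]    # J consumed on the edge server

E_BUDGET = 8.0   # assumed per-slot edge energy budget (J)
V = 5.0          # Lyapunov trade-off parameter (delay vs. energy)
Q = 0.0          # virtual energy-deficit queue

energy_log = []
for t in range(200):
    # Drift-plus-penalty: pick the cut minimizing V*delay + Q*energy.
    k = min(range(7),
            key=lambda k: V * (edge_delay[k] + tx_delay[k] + cloud_delay[k])
                          + Q * edge_energy[k])
    energy_log.append(edge_energy[k])
    # Queue grows when the slot overspends the budget, shrinks otherwise.
    Q = max(Q + edge_energy[k] - E_BUDGET, 0.0)

avg_energy = sum(energy_log) / len(energy_log)
```

When Q is small the rule favors the low-delay cut even if it is energy-hungry; as Q grows it shifts to cheaper cuts, which is how the long-term average energy is steered below the budget without a hard per-slot cap.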
Keywords/Search Tags: edge computing, edge intelligence, model training, model inference