
Convergence And Optimization Of Communication,Computation And Learning In Wireless Edge Networks

Posted on: 2023-11-28    Degree: Doctor    Type: Dissertation
Country: China    Candidate: J K Ren    Full Text: PDF
GTID: 1528306809495764    Subject: Information and Communication Engineering
Abstract/Summary:
In the past decade, the revolution of wireless communications and the renaissance of artificial intelligence (AI) have brought profound changes to human life. The convergence of these two technologies is driving the further evolution of mobile communication systems, which will shift from connecting people and connecting things to connecting intelligence. However, the traditional cloud-computing-based network architecture cannot meet future requirements on latency, energy consumption, privacy, and security. Driven by this, edge computing and edge intelligence are envisioned by both academia and industry as two key technologies for realizing the next generation of mobile communication systems. Edge computing aims to migrate the cloud computing platform to the edge of the radio access network in order to provide end users with elastic computing services. Meanwhile, edge intelligence aims to embed the model training and inference capabilities of AI into the network edge, realizing the seamless integration of communication, computation, and intelligence. Despite their great potential, edge computing and edge intelligence still face several challenges, mainly: 1) the collaboration of the cloud server, edge servers, and mobile devices; 2) the management of statistical heterogeneity and system heterogeneity; and 3) the communication bottleneck. This thesis conducts in-depth research on these three issues.

First, this thesis studies the task offloading and resource allocation problem in a cloud-edge collaboration system. To meet the latency requirements of mobile users, we propose a hierarchical network architecture in which a task can be partially processed at the edge node and partially at the cloud server in a distributed way. Targeting minimum system latency, an optimal task-splitting strategy is designed as a function of the normalized backhaul communication capacity and the normalized cloud computation capacity. The performance of four special scenarios, i.e., the communication-limited, computation-limited, edge-dominated, and cloud-dominated systems, is also discussed, which further reveals the impact of the cloud computing capability and the backhaul communication capability on the task-splitting strategy. The optimal computation resource allocation strategy is then derived in closed form. To further improve system resource utilization, a criterion is defined to judge whether a task should be processed at the edge node only, without offloading to the cloud server.

Next, this thesis investigates joint communication and computation resource allocation in an edge computing system. Traditional edge computing systems adopt binary offloading, i.e., a task is executed as a whole either locally at the device or offloaded to the edge server, which results in two modes: local computing and edge computing. In this context, we first derive the end-to-end latency as well as the minimum system latency in each of the two modes. Based on this, we propose a new partial-offloading computing mode, which integrates the advantages of the two traditional modes and enables parallel computing. Then, an optimal task-splitting strategy is devised to minimize the system latency. Using piecewise optimization tools, we further develop a joint communication and computation resource allocation algorithm. In addition, closed-form task-splitting and resource allocation strategies are derived for two special scenarios, i.e., the communication-limited system and the computation-limited system, respectively.
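As a rough illustration of the parallel task-splitting principle shared by the two designs above, consider the following minimal sketch; the notation (split ratio λ, workload L, edge computing rate f_e, backhaul rate r_b, cloud computing rate f_c) is hypothetical and not taken from the thesis. If a fraction λ of the task is processed at the edge node while the remaining 1−λ is forwarded over the backhaul and processed at the cloud, and the two branches run in parallel, the end-to-end latency is

    T(\lambda) = \max\left\{ \frac{\lambda L}{f_e},\ \frac{(1-\lambda)L}{r_b} + \frac{(1-\lambda)L}{f_c} \right\}.

Since the first term increases in λ while the second decreases, the latency-minimizing split equalizes the two branch latencies, so the optimal λ depends only on the normalized backhaul and cloud capacities, in line with the characterization described above.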
Then, this thesis considers the heterogeneous convergence of communication and learning in a federated edge learning (FEEL) system. Traditional mobile communication systems aim at maximizing system throughput without considering the heterogeneous computing capabilities across devices. On the other hand, communication is regarded as a mere "data pipeline" in traditional distributed learning systems, where the heterogeneous communication attributes are not emphasized. Toward this end, this thesis proposes a joint communication and learning optimization framework. By analyzing the FEEL mechanism, we derive closed-form expressions for the global loss decay and the end-to-end latency in each communication round. Based on this, we define a new performance metric, namely learning efficiency, which reflects the rate of global loss decrease over the training duration. This metric integrates the hyper-parameter (batch size) and the communication resource (time slot) into a unified framework for joint optimization. In addition, the optimal batch-size selection and communication resource allocation policies are developed for both CPU and GPU scenarios. The results theoretically demonstrate that the batch size should dynamically adapt to the wireless channel conditions to achieve the desired learning performance.

Finally, this thesis discusses the device scheduling problem in FEEL. Different from traditional random, round-robin, and proportional fair scheduling policies, a new probabilistic scheduling framework is developed to yield unbiased update aggregation in FEEL (sketched below). We then formulate a scheduling optimization problem to balance the learning improvement against the communication cost. An importance- and channel-aware scheduling policy is developed to exploit the trade-off between multiuser channel diversity and update diversity. Compared with traditional scheduling policies, the proposed policy can be implemented by a central controller without incurring additional communication overhead. Moreover, the optimal scheduling probability increases linearly with both the data unbalance indicator and the local gradient norm, whereas it decreases sublinearly, with an exponent of −1/2, when the local gradient upload latency is large. The corresponding model convergence rate and bandwidth allocation are also derived.

The first two works realize the goal of "collaborative computing," while the latter two achieve the target of "intelligent convergence" in wireless edge networks. Together, they provide a theoretical basis and technical solutions for building an intelligence-native edge network.
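To make the unbiasedness of the probabilistic scheduling framework concrete, the following is a minimal sketch under hypothetical notation (n_k local samples at device k out of N in total, scheduling probability p_k, local update Δ_k), not the thesis's exact formulation. If device k is scheduled independently with probability p_k and its update is scaled by n_k/(N p_k) whenever it is selected, then

    \mathbb{E}\left[ \sum_{k} \mathbf{1}\{k\ \text{scheduled}\} \cdot \frac{n_k}{N p_k}\, \Delta_k \right] = \sum_{k} \frac{n_k}{N}\, \Delta_k,

i.e., the aggregated update equals the full-participation aggregate in expectation, so aggregation remains unbiased while the probabilities p_k can be optimized to balance update importance against channel conditions and upload latency.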
Keywords/Search Tags: Edge computing, edge intelligence, edge learning, cellular network, collaborative computing, heterogeneous management, device scheduling, resource allocation