
Distributed Online Optimization Algorithms In Time-Varying Networks

Posted on: 2024-06-18    Degree: Master    Type: Thesis
Country: China    Candidate: R Y Fang    Full Text: PDF
GTID: 2530307127972119    Subject: Mathematics
Abstract/Summary:
With the rapid development of Internet communication technology, the Internet of Things has become widespread in many fields, such as transportation, medical health, and intelligent manufacturing. The resulting massive volume of real-time streaming data has greatly increased the scale of computation. Traditional centralized batch processing struggles with such large-scale optimization problems because a single device faces the dual bottleneck of storage and computation. Distributed online optimization algorithms based on multi-machine collaboration overcome the limited computing power of a single device: they allocate a complex optimization problem to multiple machines for collaborative processing, which greatly improves the efficiency of data processing. Moreover, distributed online optimization algorithms adjust the model flexibly as the data change. Since there is no need to first store the data and then process it in one batch, these algorithms also reduce storage consumption.

Most existing distributed online optimization algorithms are built on fixed undirected networks; that is, the communication network formed by the nodes is required to remain unchanged at all times, and communication between any two connected nodes is bidirectional. In real scenarios, however, most network structures are time-varying, and unexpected events such as machine breakdowns and line failures may directly disrupt the balance of the network. To enhance the practicality of such algorithms, this thesis studies the design and analysis of distributed online optimization algorithms on time-varying networks. The main contents and contributions are as follows.

1. A distributed online gradient push-sum algorithm with heavy-ball acceleration on time-varying directed networks (Acc-DOGPS). The algorithm uses the amount of information sent by each agent, which corresponds to the out-degree of each node in the directed network, to construct a column-stochastic weight matrix. The core of Acc-DOGPS is the push-sum protocol, which eliminates the imbalance of the directed network by introducing an auxiliary variable so that each node sends weighted states to its neighbors. In addition, heavy-ball momentum acceleration lets each agent incorporate historical information into the update of its current variable to correct the update magnitude, thereby accelerating convergence. A minimal sketch of one round of this mechanism is given below.
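The following is a minimal sketch, not the thesis's exact Acc-DOGPS pseudocode, of one synchronous round of push-sum mixing combined with a heavy-ball gradient step on a time-varying directed graph. The names push_sum_heavy_ball_step, out_neighbors, grad_fn, alpha, and beta are illustrative assumptions.

import numpy as np

def push_sum_heavy_ball_step(x, x_prev, y, out_neighbors, grad_fn, alpha, beta):
    """One synchronous round for n agents on the current directed graph.

    x, x_prev        : (n, d) current and previous agent states
    y                : (n,)   push-sum weights, initialized to 1 for every agent
    out_neighbors[i] : nodes that agent i sends to at this time step (includes i)
    grad_fn(i, z)    : local online (sub)gradient of agent i evaluated at z
    """
    n, _ = x.shape
    x_mix = np.zeros_like(x)
    y_mix = np.zeros_like(y)

    # Push step: each agent splits its state evenly over its current out-degree,
    # so the implied mixing matrix is column stochastic even on a directed graph.
    for i in range(n):
        share = 1.0 / len(out_neighbors[i])
        for j in out_neighbors[i]:
            x_mix[j] += share * x[i]
            y_mix[j] += share * y[i]

    # De-bias by the push-sum weight to obtain each agent's consensus estimate.
    z = x_mix / y_mix[:, None]

    # Heavy-ball update: a gradient step plus a momentum term built from the
    # previous iterate, which corrects the update magnitude and speeds convergence.
    x_new = np.array([x_mix[i] - alpha * grad_fn(i, z[i]) + beta * (x[i] - x_prev[i])
                      for i in range(n)])
    return x_new, x, y_mix, z

Calling this function once per time step, with out_neighbors rebuilt from the current graph, reproduces the time-varying setting; dividing by the push-sum weight y removes the bias introduced by the column-stochastic mixing.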
2. A distributed online adaptive gradient algorithm with long-term memory and a dynamic bound on the learning rate over time-varying networks (DADAXBOUND). The algorithm redesigns the second-order momentum update of the adaptive algorithm Adam, replacing the exponential moving average with an exponential long-term memory. In other words, during training the second-order momentum exponentially accumulates past gradient information over the long term and reduces its dependence on the current gradient. When the iterates approach the optimal solution, the second-order momentum remains relatively stable, so the learning rate can also remain stable. Meanwhile, DADAXBOUND clips the learning rate, confining it to a dynamic bounded range. Through the redesigned second-order momentum and the learning-rate clipping, the learning rate is kept fluctuating smoothly within a reasonable range, which resolves the poor generalization of the distributed online adaptive optimization algorithm DADAM caused by unstable and extreme learning rates. DADAXBOUND achieves fast convergence in the early training stage and converges smoothly to the optimal solution in the late training stage.
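As a rough illustration of these two mechanisms, the sketch below shows a single-agent adaptive step with an exponential long-term-memory second moment and a dynamic clipping band around the learning rate; the consensus step over the time-varying network is omitted. The specific recursions, the bound schedules, and names such as adaptive_step and final_lr are assumptions for illustration, not the exact update rules of DADAXBOUND.

import numpy as np

def adaptive_step(theta, grad, m, v, t, alpha=0.01, beta1=0.9, beta2=1e-4, final_lr=0.1):
    """One illustrative adaptive update for a single agent (t starts at 1)."""
    # First moment: the usual exponential moving average of the gradient.
    m = beta1 * m + (1.0 - beta1) * grad

    # Second moment with long-term memory: past squared gradients are accumulated
    # (and slowly amplified) instead of being exponentially forgotten, so it
    # stabilizes as the iterates approach the optimum.
    v = (1.0 + beta2) * v + beta2 * grad ** 2
    v_hat = v / ((1.0 + beta2) ** t - 1.0)

    # Raw per-coordinate step size, then clip it into a dynamic band that
    # narrows toward final_lr as t grows, ruling out extreme learning rates.
    step = alpha / (np.sqrt(v_hat) + 1e-8)
    lower = final_lr * (1.0 - 1.0 / (beta2 * t + 1.0))
    upper = final_lr * (1.0 + 1.0 / (beta2 * t))
    step = np.clip(step, lower, upper)

    theta = theta - step * m
    return theta, m, v

In the distributed setting, each agent would typically also average its state with its time-varying neighbors around this local step, in the spirit of the consensus schemes discussed above.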
Keywords/Search Tags: multi-agent systems, distributed optimization, online learning, time-varying networks, acceleration of convergence, generalization performance