
Research On Distributed Algorithm With Fixed-time Convergence

Posted on: 2022-06-07
Degree: Master
Type: Thesis
Country: China
Candidate: S L Li
Full Text: PDF
GTID: 2517306521952399
Subject: Statistics
Abstract/Summary:
With the widespread use of computers and the Internet, data volumes have exploded and the era of big data has arrived. How to collect, store, and compute over data quickly and effectively has become an urgent problem in every industry. Because the data are so varied and so large in scale, traditional centralized processing runs into several difficulties: a single machine cannot store and compute over all the data centrally, training times become excessive, and data may be leaked. To overcome these shortcomings, multi-machine distributed methods have emerged. However, most existing distributed algorithms achieve only asymptotic or exponential convergence, which can consume considerable time and waste communication resources, so it is worthwhile to study distributed algorithms that reduce time consumption. This thesis studies distributed algorithms that introduce fixed-time stability theory and reduce the waste of computing and communication resources by accelerating convergence.

First, for the problem of solving linear algebraic equations, a distributed zero-gradient-sum least-squares algorithm with fixed-time convergence is proposed. Many engineering problems can be viewed as finding solutions to a set of linear equations, but these problems may be physically, geographically, or logically distributed, so a solution must be found in a distributed environment. Under a fixed undirected network topology, the least-squares problem for the linear equations is first transformed into a form suited to distributed solution by a multi-agent system. Fixed-time stability theory is then introduced into the zero-gradient-sum algorithm, yielding a distributed zero-gradient-sum algorithm with fixed-time convergence whose settling time depends only on the chosen parameters rather than on the initial conditions. The convergence of the proposed algorithm is proved by the Lyapunov method, and its effectiveness is verified through simulation experiments.

Second, for the output-weight learning problem of the stochastic configuration network, two distributed learning algorithms are proposed. The first combines finite-time consensus theory with Newton's method to obtain a distributed learning algorithm with finite-time convergence. The second introduces fixed-time stability theory to obtain a distributed learning algorithm with fixed-time convergence. Large data sets may be distributed across multiple stochastic configuration networks, and in the absence of any fusion center a distributed algorithm is needed that can obtain the optimal weights efficiently and safely within a certain time. The core idea of both algorithms is to randomly assign input weights and biases using the stochastic configuration scheme, and then to compute locally and exchange information with neighboring agents so that all agents reach the consistent optimal output weights within a certain time. An advantage is that the settling time does not depend on the nodes' initial values. Finally, the validity of the two algorithms is verified by simulation.
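To make the first contribution concrete, here is a minimal numerical sketch of a fixed-time zero-gradient-sum flow for distributed least squares. It is not the thesis's exact algorithm: the ring topology, the gains c1 and c2, the exponents alpha and beta, the forward-Euler step size, and the problem data are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 4 agents on a ring graph, each holding a slice
# (A_i, b_i) of an overdetermined linear system A x = b.
n_agents, dim = 4, 3
A = [rng.standard_normal((6, dim)) for _ in range(n_agents)]
b = [rng.standard_normal(6) for _ in range(n_agents)]
neighbors = {i: [(i - 1) % n_agents, (i + 1) % n_agents]
             for i in range(n_agents)}

def sig(z, p):
    """Elementwise signed power sign(z) * |z|**p used in fixed-time protocols."""
    return np.sign(z) * np.abs(z) ** p

# Zero-gradient-sum initialization: each agent starts at its own local
# minimizer, so the local gradients sum to zero at t = 0 and stay there.
H = [Ai.T @ Ai for Ai in A]     # local Hessians (assumed invertible)
x = [np.linalg.solve(H[i], A[i].T @ b[i]) for i in range(n_agents)]

# Illustrative gains and exponents (0 < alpha < 1 < beta) and Euler step.
c1, c2, alpha, beta, dt = 1.0, 1.0, 0.5, 1.5, 1e-3

for _ in range(20000):
    u = [sum(c1 * sig(x[j] - x[i], alpha) + c2 * sig(x[j] - x[i], beta)
             for j in neighbors[i]) for i in range(n_agents)]
    # Flow x_i' = H_i^{-1} u_i: the odd coupling keeps sum_i grad f_i(x_i) = 0,
    # so reaching consensus forces every x_i onto the global minimizer.
    x = [x[i] + dt * np.linalg.solve(H[i], u[i]) for i in range(n_agents)]

# All agents should agree on the global least-squares solution.
x_star = np.linalg.lstsq(np.vstack(A), np.concatenate(b), rcond=None)[0]
print("max distance to least-squares solution:",
      max(np.linalg.norm(xi - x_star) for xi in x))
```

The two signed-power terms are what distinguish the fixed-time protocol from the plain zero-gradient-sum flow: the exponent below one dominates near consensus and the exponent above one dominates far from it, which is the standard mechanism behind initial-condition-independent settling-time bounds.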
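Similarly, a minimal sketch of the idea behind the second contribution, assuming the following structure: agents share the same randomly configured hidden layer, run a fixed-time average-consensus protocol on their local sufficient statistics, and each then recovers the common optimal output weights with one Newton step (exact here, since the output-weight objective is quadratic). The network sizes, data, gains, and consensus dynamics below are illustrative assumptions, not the thesis's algorithms.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical SCN layer: every agent shares the same randomly assigned input
# weights W and biases v (the stochastic configuration step), but each agent
# only sees a local shard (X_i, T_i) of the training data.
n_agents, n_hidden, n_in = 4, 10, 2
W = rng.uniform(-1.0, 1.0, (n_in, n_hidden))
v = rng.uniform(-1.0, 1.0, n_hidden)
X = [rng.standard_normal((50, n_in)) for _ in range(n_agents)]
T = [np.sin(Xi.sum(axis=1, keepdims=True)) for Xi in X]
Hs = [np.tanh(Xi @ W + v) for Xi in X]      # local hidden-layer outputs

# Local sufficient statistics of the global output-weight least squares.
G = [Hi.T @ Hi for Hi in Hs]                # Gram matrices
r = [Hi.T @ Ti for Hi, Ti in zip(Hs, T)]    # cross-correlations

def sig(z, p):
    return np.sign(z) * np.abs(z) ** p

# Fixed-time average consensus on the stacked statistics; the odd coupling
# preserves the network average, so all agents converge to the true mean.
neighbors = {i: [(i - 1) % n_agents, (i + 1) % n_agents]
             for i in range(n_agents)}
s = [np.concatenate([G[i].ravel(), r[i].ravel()]) for i in range(n_agents)]
c1, c2, alpha, beta, dt = 2.0, 2.0, 0.5, 1.5, 1e-3
for _ in range(10000):
    s = [s[i] + dt * sum(c1 * sig(s[j] - s[i], alpha)
                         + c2 * sig(s[j] - s[i], beta)
                         for j in neighbors[i])
         for i in range(n_agents)]

# Each agent recovers the same optimal output weights: one Newton step for
# the quadratic objective, using the agreed-upon average statistics.
k = n_hidden * n_hidden
G_bar = s[0][:k].reshape(n_hidden, n_hidden)
r_bar = s[0][k:].reshape(n_hidden, 1)
beta_out = np.linalg.solve(G_bar, r_bar)
print("training residual:",
      np.linalg.norm(np.vstack(Hs) @ beta_out - np.vstack(T)))
```

No fusion center is needed: only the small consensus vectors travel between neighbors, never the raw data shards, which reflects the privacy and communication motivation stated in the abstract.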
Keywords/Search Tags:Fixed-time convergence, distributed optimization, least-squares algorithm, distributed learning