
Finite-time Convergent Distributed Optimization Algorithm And Its Applications

Posted on: 2018-12-02
Degree: Master
Type: Thesis
Country: China
Candidate: Y F Song
Full Text: PDF
GTID: 2310330542952385
Subject: Operational Research and Cybernetics
Abstract/Summary:
In recent years, with the extensive application of distributed systems, many problems have arisen that require appropriate strategies for solving the corresponding distributed optimization problems. This paper focuses on the convergence rate of distributed convex optimization algorithms. Inspired by existing distributed convex optimization algorithms and multi-agent consensus protocols, a finite-time convergent distributed optimization algorithm is proposed. The algorithm is then extended to machine learning, yielding fast convergent distributed cooperative learning (DCL) algorithms. The main work of the paper can be summarized in the following two parts:

· In the first part, a finite-time convergent distributed continuous-time algorithm is proposed to solve a network optimization problem in which the global cost function is the sum of strictly convex local cost functions over an undirected network with fixed topology. The algorithm is inspired by finite-time consensus protocols and continuous-time zero-gradient-sum (ZGS) algorithms. In contrast to the exponential convergence of existing works, finite-time convergence is guaranteed via the Lyapunov method. A numerical simulation example illustrates the effectiveness of the developed algorithm (see the first sketch after this summary).

· The second part designs a fast convergent DCL algorithm for linear parametric feedforward neural networks over undirected, connected networks. First, a continuous-time fast convergent DCL algorithm is proposed whose finite-time convergence is guaranteed via the Lyapunov method. Second, the algorithm is extended to a discrete-time form using the fourth-order Runge-Kutta method (see the second sketch after this summary). For high-order neural networks, the continuous-time DCL algorithm is used, and a simple simulation example of function approximation is given. For feedforward neural networks with random weights, the discrete-time DCL algorithm is used, and four simulation examples are given: an artificial data set generated from the "SinC" function and three common UCI data sets (Housing, Skin, and Handwritten). Compared with the distributed alternating direction method of multipliers (ADMM) and the zero-gradient-sum-based (ZGS-based) algorithm, the proposed algorithm achieves higher learning capability and faster convergence. Simulation results further demonstrate that the convergence rate can be adjusted by properly selecting the tuning parameters.
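To make the first contribution concrete, the following is a minimal sketch of a finite-time zero-gradient-sum flow, not the thesis's exact algorithm. It assumes scalar quadratic local costs f_i(x) = (x - b_i)^2 on an undirected ring, forward-Euler integration, and the signed-power coupling sig(z)^α = sign(z)|z|^α with 0 < α < 1 that is the standard finite-time-consensus ingredient; the gains, exponents, and network in the thesis may differ.

```python
import numpy as np

def sig(z, alpha):
    """Signed power sig(z)^alpha = sign(z) * |z|^alpha."""
    return np.sign(z) * np.abs(z) ** alpha

n, alpha, dt, steps = 6, 0.5, 1e-3, 40000
b = np.array([1.0, -2.0, 0.5, 3.0, -1.5, 2.0])  # local minimizers of f_i
x = b.copy()                                    # ZGS initialization: x_i(0) = argmin f_i
ring = [((i - 1) % n, (i + 1) % n) for i in range(n)]  # undirected ring topology

for _ in range(steps):                          # forward-Euler integration of the flow
    xdot = np.zeros(n)
    for i, (j, k) in enumerate(ring):
        coupling = sig(x[j] - x[i], alpha) + sig(x[k] - x[i], alpha)
        xdot[i] = coupling / 2.0                # scale by inverse Hessian of f_i (= 2)
    x = x + dt * xdot

print(x)   # all entries near 0.5 = mean(b) = argmin of the global cost
```

Because each agent starts at its own local minimizer and the coupling is odd over an undirected graph, the sum of local gradients stays at zero along the flow; the consensus value is therefore the minimizer of the global cost, which is the defining ZGS property.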
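For the second contribution, the abstract states that the continuous-time DCL flow is discretized with the fourth-order Runge-Kutta method. Below is a sketch of a classical RK4 step applied to a placeholder consensus dynamics on the agents' weight vectors; the actual DCL right-hand side, step size, and network from the thesis are not reproduced here.

```python
import numpy as np

def rk4_step(f, x, h):
    """One classical fourth-order Runge-Kutta step for x' = f(x)."""
    k1 = f(x)
    k2 = f(x + 0.5 * h * k1)
    k3 = f(x + 0.5 * h * k2)
    k4 = f(x + h * k3)
    return x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

L = np.array([[1.0, -1.0], [-1.0, 1.0]])   # Laplacian of a 2-node graph
flow = lambda W: -L @ W                    # placeholder for the DCL dynamics
W = np.array([[1.0, 2.0], [3.0, -1.0]])    # rows: per-agent weight vectors
for _ in range(100):
    W = rk4_step(flow, W, 0.1)
print(W)                                   # rows converge to their common mean
```

RK4 keeps the discrete iterates close to the continuous-time trajectory at fourth-order accuracy in the step size, which is the usual motivation for using it to turn a continuous-time learning flow into an implementable discrete-time algorithm.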
Keywords/Search Tags:finite-time convergence, distributed optimization, Lyapunov method, distributed cooperative learning, linear parametric neural network