Distributed optimization theory and its applications have become an important research direction in systems and control science. Research on optimization theory centers on the performance of optimization algorithms, including their computational complexity and convergence behavior. Distributed optimization problems fall into two categories: optimization of a performance index function, and optimization of a system's dynamic process. Most theoretical work addresses the first category, in which each agent holds its own cost function and the cost function of the whole network is the sum of the agents' functions. The goal of the network is to minimize this global cost through local information exchange between agents.

As networks grow in scale, traditional centralized control and optimization techniques struggle to solve complex network optimization problems. The distributed optimization framework requires no centralized data collection and offers advantages in privacy protection, network scalability, and robustness. In practice, undirected network topologies are difficult to realize, so fast optimization algorithms over directed multi-agent networks are of great significance. However, most distributed algorithms over directed networks require the weight matrix to be doubly stochastic, which cannot be constructed on an arbitrary directed graph. Moreover, although the classical distributed gradient descent (DGD) method is guaranteed to converge to the optimal solution, it places strong requirements on the network, and its convergence is slowed by the decreasing step size it requires.

This paper studies a distributed optimization problem over a multi-agent network in which the agents cooperate to minimize the sum of all local objective functions, assuming the topology between agents is directed and strongly connected. The proposed algorithm requires the weight matrix to be only row stochastic, meaning that each agent needs to know only its in-neighbors rather than its out-neighbors, which is more practical. To overcome the slow convergence caused by DGD's decreasing step size, a gradient tracking mechanism is introduced; the algorithm also permits uncoordinated step sizes across agents. Under the conditions that each objective function is strongly convex with Lipschitz continuous gradient, and that the chosen step sizes do not exceed an explicit upper bound, we prove that the algorithm converges linearly to the global optimal solution, faster than comparable algorithms. Numerical experiments confirm the theoretical analysis.
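To make the setting concrete, the following is a minimal numerical sketch of a row-stochastic gradient-tracking iteration of the kind described above (in the style of FROST-type methods, which combine a row-stochastic weight matrix, Perron-eigenvector estimation, uncoordinated step sizes, and gradient tracking). The network, the local quadratic costs, and the step-size values are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Problem setup (illustrative assumption, not the paper's data) ---
# n agents, each with a strongly convex quadratic
# f_i(x) = 0.5 * x^T Q_i x - b_i^T x, so grad f_i(x) = Q_i x - b_i.
n, d = 6, 3
Q = [np.diag(rng.uniform(1.0, 3.0, d)) for _ in range(n)]  # local PD Hessians
b = [rng.normal(size=d) for _ in range(n)]

def grad(i, x):
    """Gradient of the i-th local quadratic cost."""
    return Q[i] @ x - b[i]

# --- Directed, strongly connected topology with a ROW-stochastic matrix A ---
# Directed ring plus self-loops: agent i receives only from its in-neighbor
# (i-1) mod n; each row sums to 1, but columns generally do not.
A = np.zeros((n, n))
for i in range(n):
    A[i, i] = 0.5
    A[i, (i - 1) % n] = 0.5

# --- Row-stochastic gradient tracking with uncoordinated step sizes ---
alpha = rng.uniform(0.02, 0.05, n)   # one (uncoordinated) step size per agent
x = np.zeros((n, d))                 # local estimates, one row per agent
V = np.eye(n)                        # eigenvector estimation: v_{i,0} = e_i
z = np.array([grad(i, x[i]) for i in range(n)])  # gradient trackers

for k in range(3000):
    x_new = A @ x - alpha[:, None] * z   # consensus step + local descent
    V_new = A @ V                        # V_k = A^k, so V[i, i] -> pi_i,
                                         # the i-th left Perron-vector entry
    # Gradient tracking with eigenvector correction: each agent rescales its
    # local gradient by its own Perron-entry estimate to undo the imbalance
    # that a merely row-stochastic A introduces.
    g_new = np.array([grad(i, x_new[i]) / V_new[i, i] for i in range(n)])
    g_old = np.array([grad(i, x[i]) / V[i, i] for i in range(n)])
    z = A @ z + g_new - g_old
    x, V = x_new, V_new

# Global optimum of sum_i f_i solves (sum_i Q_i) x* = sum_i b_i.
x_star = np.linalg.solve(sum(Q), sum(b))
print("max agent error:", np.max(np.linalg.norm(x - x_star, axis=1)))
```

In this sketch, each agent uses only information from its in-neighbors (the nonzero entries of its own row of A), matching the row-stochastic assumption; the tracker z replaces the decreasing step size of DGD, which is what yields linear rather than sublinear convergence under strong convexity.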