
Stochastic Optimization Algorithms In Distributed Networks

Posted on: 2019-06-19    Degree: Master    Type: Thesis
Country: China    Candidate: H Zhang    Full Text: PDF
GTID: 2370330545498913    Subject: Control Science and Engineering
Abstract/Summary:
This thesis studies optimization problems in distributed networks and solves them effectively with stochastic optimization algorithms. Specifically, each node in the network has a local objective function and local constraints, and the goal is to optimize the global objective function so as to maximize the utility of the entire network. According to how information is processed, distributed computation can be divided into two categories, centralized distributed computing and decentralized distributed computing; each has advantages and disadvantages in terms of robustness, running time, communication cost, storage cost, and so on. This thesis studies both settings. The main contributions are as follows.

First, the decentralized distributed consensus optimization problem is studied, in which each node carries a complex constraint, namely the intersection of multiple simple constraints. Building on the centralized gradient projection method and the distributed stochastic gradient projection methods proposed by previous researchers, we propose two distributed primal-dual splitting projection algorithms. They differ in how constraints are handled: one is deterministic, processing all constraints in every iteration; the other is randomized, processing a single randomly selected constraint in each iteration. Both algorithms use splitting projections to overcome the difficulty that projecting onto a complex constraint set is hard to compute, while the primal-dual scheme improves the convergence accuracy and speed of the algorithm. The deterministic algorithm is analyzed theoretically and its convergence is proved, and numerical experiments demonstrate the effectiveness of both algorithms.

Second, this thesis studies centralized distributed computing, applied to link prediction in signed social networks. A low-rank matrix factorization model is used to reduce the number of parameters effectively and to scale to large datasets. We propose two distributed asynchronous stochastic gradient descent algorithms for the prediction task, which differ in the degree of asynchrony. One is partially asynchronous: each node asynchronously processes a portion of the objective terms and then synchronizes. The other is fully asynchronous: each node runs without synchronization and processes one randomly selected objective term at a time. We validate the algorithms on two real-world datasets; the experimental results confirm their effectiveness and show near-linear speedups.
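For orientation, a standard way to write the constrained consensus problem described above is sketched below in LaTeX; the symbols (n nodes, local objectives f_i, simple sets X_{i,j}) are generic placeholders and the exact formulation in the thesis may differ.

\[
  \min_{x \in \mathbb{R}^d} \; \sum_{i=1}^{n} f_i(x)
  \qquad \text{s.t.} \qquad
  x \in \bigcap_{j=1}^{m_i} X_{i,j}, \quad i = 1, \dots, n,
\]

where projecting onto each simple set \(X_{i,j}\) is easy but projecting onto the intersection is not. The splitting projection idea replaces the hard projection onto \(\bigcap_j X_{i,j}\) with projections onto the individual \(X_{i,j}\): all of them in every iteration in the deterministic variant, or one randomly chosen set per iteration in the randomized variant.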
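The following is a minimal, hypothetical sketch of fully asynchronous stochastic gradient descent for low-rank factorization of a signed network, not the thesis's implementation. It assumes a squared loss on observed signed edges, a bilinear score u_i^T v_j, and Hogwild-style lock-free updates of shared factors; the function name async_factorize and all parameters are illustrative, and Python threads are used only to show the structure (they do not give true parallel speedups because of the GIL).

```python
# Sketch only: fully asynchronous (lock-free) SGD for low-rank factorization
# of a signed adjacency matrix. Observed edges are (i, j, s) with s in {+1, -1},
# modeled by the score u_i^T v_j; workers update shared factors without locks.
import threading
import numpy as np

def async_factorize(edges, n, rank=10, lr=0.01, reg=0.1,
                    epochs=5, n_workers=4, seed=0):
    rng = np.random.default_rng(seed)
    U = 0.1 * rng.standard_normal((n, rank))   # "source" factors, one row per node
    V = 0.1 * rng.standard_normal((n, rank))   # "target" factors, one row per node

    def worker(wid):
        local_rng = np.random.default_rng(seed + wid)
        for _ in range(epochs * len(edges) // n_workers):
            # Each worker repeatedly picks one randomly selected term of the
            # objective and applies an SGD step with no synchronization.
            i, j, s = edges[local_rng.integers(len(edges))]
            err = U[i] @ V[j] - s                 # squared-loss residual
            gi = err * V[j] + reg * U[i]          # gradient w.r.t. u_i
            gj = err * U[i] + reg * V[j]          # gradient w.r.t. v_j
            U[i] -= lr * gi
            V[j] -= lr * gj

    threads = [threading.Thread(target=worker, args=(w,)) for w in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return U, V

# Toy usage: predict the sign of an unseen edge from the recovered low-rank score.
if __name__ == "__main__":
    toy_edges = [(0, 1, +1), (1, 2, -1), (2, 0, +1), (0, 2, -1)]
    U, V = async_factorize(toy_edges, n=3, rank=2)
    print("predicted sign of edge (1, 0):", np.sign(U[1] @ V[0]))
```

The partially asynchronous variant described in the abstract would instead have each worker process a fixed chunk of terms and then hit a synchronization barrier before the next round.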
Keywords/Search Tags: Distributed Network, Consensus Optimization, Primal-Dual Algorithm, Low-Rank Matrix Factorization, Stochastic Gradient Descent, Asynchronous Algorithm