
Research On Distributed Optimization And Privacy Protection Of Networked System

Posted on: 2022-03-08
Degree: Doctor
Type: Dissertation
Country: China
Candidate: Q G Lv
Full Text: PDF
GTID: 1488306530492864
Subject: Computational intelligence and information processing

Abstract/Summary:
In recent years, the rapid development of sensor technology, communication networks, and digital systems has driven the emergence of large-scale networked systems such as sensor networks, mobile networks, and the Internet. These networked systems have, in turn, given rise to new applications such as smart power networks, social and economic networks, epidemic networks, and robot networks. These applications usually require a variety of optimization techniques to solve the core optimization problems that arise within the network itself, including resource allocation, parameter estimation, and machine learning. New techniques are therefore needed to solve the optimization problems of large-scale networked systems. Because of the distributed nature of networked systems, traditional centralized methods are not suitable for these large-scale optimization problems. On the one hand, the centralized framework suffers from performance limitations, such as high communication and computation requirements, a single point of failure, and limited flexibility and scalability. On the other hand, transmitting data collected in a distributed manner to a central node is too costly and may leak sensitive information (privacy). It is therefore necessary to study distributed optimization algorithms for large-scale networked systems. In addition, the emergence of big-data applications further stimulates researchers' growing interest in distributed optimization. Aiming at the distributed optimization problems of current networked systems, this dissertation investigates efficient distributed optimization algorithms from the perspectives of efficiency (communication and computation) and privacy. The main contents of this dissertation are as follows:

(1) The problem of composite constrained convex optimization with a sum of smooth and non-smooth functions over networked systems is studied. Motivated by modern large-scale information-processing problems in machine learning, where the samples of a training dataset are randomly distributed across multiple computing nodes, each smooth objective function is considered to be the average of several constituent functions. To solve this problem in a distributed manner, a novel computation-efficient distributed stochastic gradient algorithm is proposed, which leverages a variance-reduction technique and the distributed projection method. Theoretical analysis indicates that if the constant step size is less than an explicitly estimated upper bound, the proposed algorithm converges in expectation to the exact optimal solution when each smooth constituent function is strongly convex. Compared with existing distributed methods, the proposed algorithm is not only suitable for composite optimization problems with general constraints but also has a low computation cost in terms of the total number of local gradient evaluations. Finally, simulation results verify the good performance of the proposed algorithm.
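The abstract does not state the update rule itself; the following is only a minimal sketch, under stated assumptions, of the kind of iteration described in (1): one node combines a consensus (mixing) step, an SVRG-style variance-reduced stochastic gradient, and a projection onto its local constraint set. It is not the dissertation's algorithm; the mixing matrix W, the projection proj_C, the snapshot quantities, and the step size alpha are illustrative placeholders.

```python
# Minimal sketch (assumptions, not the dissertation's exact method): one node's update in a
# distributed projected stochastic gradient scheme with SVRG-style variance reduction.
import numpy as np

def svrg_projected_step(i, X, W, grad_f, full_grad_snapshot, x_snapshot,
                        num_samples, proj_C, alpha, rng):
    """One iteration at node i.

    X                  : (n, d) current local estimates of all n nodes
    W                  : (n, n) mixing matrix of the network (e.g., doubly stochastic)
    grad_f(i, j, x)    : gradient of the j-th constituent function at node i
    full_grad_snapshot : full local gradient evaluated at the last snapshot point
    x_snapshot         : snapshot point used by the variance-reduction correction
    proj_C             : Euclidean projection onto the local constraint set
    alpha              : constant step size (kept below a theoretical upper bound)
    """
    # 1) Consensus step: mix the local estimate with neighbors' estimates.
    x_mix = W[i] @ X

    # 2) Variance-reduced stochastic gradient: sample one constituent function and
    #    correct it with the snapshot gradients (SVRG-style estimator).
    j = rng.integers(num_samples)
    g = grad_f(i, j, X[i]) - grad_f(i, j, x_snapshot) + full_grad_snapshot

    # 3) Projected gradient step onto the local constraint set.
    return proj_C(x_mix - alpha * g)
```

In estimators of this kind the correction term keeps the stochastic gradient unbiased while its variance shrinks as the iterates approach the snapshot point, which is what makes a constant step size workable.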
(2) The problem of distributed machine-learning optimization over networked systems is studied. Distributed optimization plays an important role in many large-scale machine-learning tasks in which the samples of a training dataset are randomly distributed across multiple computing nodes, and each objective function is considered to be the average of several constituent functions. Owing to several real-world challenges, distributed accelerated methods that are efficient in both communication and computation have not yet been investigated. By utilizing Nesterov's acceleration mechanism and the gradient-tracking technique, a doubly efficient stochastic distributed accelerated algorithm is proposed, which leverages an event-triggering strategy for communication efficiency and a variance-reduction technique for computation efficiency. Theoretical analysis shows that, with a suitably chosen constant step size, the proposed algorithm converges linearly in expectation to the exact optimal solution when each constituent function is strongly convex and smooth. Under certain conditions, it is proved that, for each node, the time interval between two successive triggering instants is larger than the iteration interval. Finally, simulation results verify the good performance of the algorithm.

(3) The privacy-protection problem of distributed online optimization over time-varying unbalanced directed networked systems is studied. The nodes aim to cooperatively minimize the sum of all locally known, time-varying convex objective functions while keeping these local objective functions well protected. To solve this problem, a differentially private distributed stochastic subgradient-push algorithm is proposed. Unlike existing distributed algorithms that do not consider privacy issues, the proposed algorithm protects the privacy of participating nodes through a differential-privacy strategy, which makes it more practical in applications involving sensitive information, such as military affairs or medical treatment. An important feature of the proposed algorithm is that it tackles distributed online optimization problems over time-varying unbalanced directed networks. Theoretical analysis indicates that the proposed algorithm not only effectively guarantees differential privacy but also attains sublinear regret, which further reveals the trade-off between the privacy level and the accuracy of the algorithm. In addition, the robustness of the proposed algorithm to transmission delays in the communication links is discussed. Finally, simulation results verify the good performance of the algorithm.

(4) The privacy-protection problem of distributed resource allocation over unbalanced directed networked systems is studied. Distributed optimization for resource allocation has attracted significant attention owing to its advantages in scalability, robustness, and flexibility. In distributed resource allocation, each node communicates only with its in-neighbors and attempts to minimize its own objective function while satisfying network-wide resource constraints as well as local capacity limits. Growing concerns about data security and the demands of complex computation have led to a resurgence of activity in this area. To address distributed resource-allocation problems while accounting for privacy security and computation efficiency, a privacy-protecting distributed random-sleep algorithm suitable for unbalanced directed networks is proposed. On the one hand, the algorithm protects sensitive information effectively by adding conditional noise to the exchanged states. On the other hand, a random-sleep strategy is applied in the algorithm design, which effectively improves computation efficiency. Theoretical analysis shows that the proposed algorithm achieves the optimal allocation of resources while protecting sensitive information. Finally, simulation results verify the good performance of the algorithm.
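As a rough illustration of the two generic mechanisms referred to in (3) and (4), the sketch below shows Laplace-noise masking of the local state before it is sent to out-neighbors (a differential-privacy-style perturbation) and a Bernoulli random-sleep test that skips the local computation with some probability. It is only an assumption-based sketch, not the dissertation's algorithm; the names epsilon, sensitivity, p_wake, and local_update are illustrative.

```python
# Minimal sketch (illustrative assumptions only): privacy-masked state exchange and a
# random-sleep rule for computation efficiency, as the generic ingredients behind (3)/(4).
import numpy as np

def private_broadcast(x_i, epsilon, sensitivity, rng):
    """Perturb the local state with Laplace noise before sending it to out-neighbors."""
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon, size=x_i.shape)
    return x_i + noise

def random_sleep_update(x_i, neighbor_states, p_wake, local_update, rng):
    """With probability p_wake perform the costly local update, otherwise only mix."""
    x_mixed = np.mean(np.vstack([x_i] + list(neighbor_states)), axis=0)
    if rng.random() < p_wake:   # node is "awake": carry out the local computation
        return local_update(x_mixed)
    return x_mixed              # node "sleeps": skip the local computation this round
```

In this pattern a smaller epsilon (more noise) gives stronger privacy but degrades accuracy, which is consistent with the privacy-accuracy trade-off discussed above.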
Keywords/Search Tags:Networked system, Distributed optimization, Computation efficiency, Communication efficiency, Privacy protection