
Research On Several Nonconvex Optimization Problems Based On Neurodynamic Optimization Algorithms

Posted on: 2022-02-01  Degree: Doctor  Type: Dissertation
Country: China  Candidate: N Liu  Full Text: PDF
GTID: 1480306569487394  Subject: Mathematics
Abstract/Summary:
Nonconvex optimization problems are optimization problems with a nonconvex objective function or a nonconvex constraint set. Many important practical problems in machine learning, compressed sensing, data mining, and other fields can be modeled as nonconvex optimization problems. However, the lack of convexity poses challenges for the algorithm design and convergence analysis of such problems. In recent years, algorithms for nonconvex optimization have attracted extensive attention from scholars. Owing to their capacity for massively parallel computing, neurodynamic optimization algorithms are well suited to real-time solution. This dissertation studies four classes of nonconvex optimization problems and proposes a corresponding neurodynamic optimization algorithm for each. The specific contents are as follows.

1. A nonautonomous neural network with an auxiliary function is proposed for solving a class of nonsmooth nonconvex optimization problems with affine equality and convex inequality constraints. Based on the concrete structure of the inequality constraint set, an auxiliary function is constructed. By virtue of its good properties, the assumptions made in many existing references, namely that the inequality constraint set is bounded and that the objective function is bounded below over the equality constraint set, are removed. Moreover, it is proved that the state of the proposed neural network converges from any initial point to the set of stationary points of the considered nonconvex optimization problem. In particular, if the objective function is pseudoconvex, the state converges globally to an optimal solution of the related pseudoconvex optimization problem.

2. A neural network based on a smoothing technique is proposed for solving a class of nonsmooth nonconvex optimization problems with nonconvex inequality constraints. By smoothing the objective function, the requirement in many existing references that the objective function be smooth or regular (in the nonsmooth sense) is eliminated. In addition, a hard-limiter function is introduced to handle the difficulties caused by the nonconvex constraints. On this basis, it is proved that any accumulation point of the proposed neural network is a stationary point of the nonconvex optimization problem under consideration. Furthermore, the neural network can find an optimal solution to some generalized convex optimization problems. Compared with related neural networks, this model contains no penalty parameters that must be estimated in advance, and several additional assumptions are dropped, for example, coercivity of the objective function and Slater's condition.

3. A complex-valued neural network is proposed for solving a class of nonsmooth constrained complex-variable pseudoconvex optimization problems. The structure of the complex domain makes complex-variable optimization problems difficult to solve, and most existing algorithms are only suitable for smooth convex complex-variable problems. Based on CR calculus and nonsmooth theory, this dissertation develops a nonsmooth analysis of real-valued functions of complex variables. It is then proved that the state of the neural network converges from any initial point to an optimal solution of the considered optimization problem. Comparisons with existing complex-valued neural networks show that the proposed network has wider applicability and lower computational complexity.

4. A neural network based on a partial p-power reformulation is proposed for solving a class of constrained distributed nonconvex optimization problems. First, to eliminate the duality gap of the considered distributed nonconvex optimization problem, a p-power transformation of the inequality constraints is applied. Then, a distributed neural network is proposed to solve the transformed equivalent problem. It is proved that the states of all agents reach output consensus and converge to a strict local optimal solution of the constrained distributed nonconvex optimization problem. Notably, the algorithm does not require agents to exchange private information, such as their objective and constraint functions, which helps protect privacy.
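As a purely illustrative sketch of the general idea behind such neurodynamic models, the following Python code integrates a projected gradient flow, dx/dt = P(x - ∇f_μ(x)) - x, by the forward Euler method, where f_μ is a smoothed surrogate of a nonsmooth objective and P is the projection onto a convex feasible set. The smoothing √(x² + μ²) of |x|, the box constraint, the toy problem, and all function names are assumptions chosen for this example; they are not the dissertation's actual models.

```python
import numpy as np

def smoothed_abs_grad(x, mu):
    # Gradient of sqrt(x^2 + mu^2), a standard smoothing of |x|
    # (illustrative choice of smoothing function).
    return x / np.sqrt(x ** 2 + mu ** 2)

def project_box(x, lo, hi):
    # Euclidean projection onto the box [lo, hi].
    return np.clip(x, lo, hi)

def neurodynamic_solve(x0, lo, hi, mu=1e-2, dt=1e-2, steps=20000):
    # Forward-Euler integration of the projected gradient flow
    #   dx/dt = P(x - grad f_mu(x)) - x,
    # a common template for projection-type neurodynamic models.
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        g = smoothed_abs_grad(x, mu)
        x = x + dt * (project_box(x - g, lo, hi) - x)
    return x

# Toy problem: minimize f(x) = |x1| + |x2| over the box
# [0.5, 3] x [-3, 3]; the minimizer is (0.5, 0).
x_star = neurodynamic_solve([2.0, -1.5],
                            lo=np.array([0.5, -3.0]),
                            hi=np.array([3.0, 3.0]))
print(x_star)  # approximately [0.5, 0.0]
```

The smoothing parameter mu trades accuracy for smoothness: a smaller mu approximates |x| more closely but makes the flow stiffer, so dt must shrink accordingly for the Euler discretization to remain stable.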
Keywords/Search Tags: nonsmooth nonconvex optimization, complex-variable pseudoconvex optimization, distributed nonconvex optimization, neurodynamic optimization algorithm, convergence analysis