
Research On Nonsmooth Nonconvex Optimization Problems With Recurrent Neural Networks

Posted on: 2019-06-29
Degree: Master
Type: Thesis
Country: China
Candidate: Z R Chen
Full Text: PDF
GTID: 2370330545967619
Subject: Computer software and theory
Abstract/Summary:
Optimization problems arise widely in military science, natural science, engineering management, and many other disciplines. With the development of science and technology, many core engineering problems ultimately reduce to optimization problems. Traditional optimization methods, such as the gradient descent method, Newton's method, and the Lagrange multiplier method, have computation times that depend heavily on the scale and complexity of the problem, so they are difficult to apply to engineering optimization in real time. For this reason, real-time optimization with artificial neural networks has been widely studied. Based on differential inclusion theory and an improved Lagrange multiplier method, this thesis proposes two different recurrent neural network models and proves their effectiveness. The main results are as follows:

1. An augmented Lagrange neural network is studied for solving nonconvex nonsmooth problems. Based on the KKT conditions and the convergence theory of penalty functions, a Lagrange neural network model with equality and inequality constraints is proposed. Compared with the traditional Lagrange neural network, the proposed model contains two augmented functions, which greatly improves the convergence speed of the network. Finally, the effectiveness of the proposed model is verified by simulation experiments. (A hedged numerical sketch of this type of primal-dual dynamics is given below.)

2. A recurrent neural network based on differential inclusions is studied for solving nonconvex nonsmooth optimization problems. First, based on differential inclusion theory, a new recurrent neural network model with equality and inequality constraints is constructed. The advantages of the proposed model are:
1) Compared with traditional neural network models based on penalty functions, the proposed model does not need to compute a penalty factor.
2) Many existing models require the initial point to be selected within a bounded ball, whereas the initial point of the proposed model can be selected arbitrarily.
3) Most existing models can only solve optimization problems with convex objective functions, whereas the proposed model can solve a class of problems whose objective function is nonconvex.
It is proved that when the objective function is bounded below, the state of the neural network converges to the feasible region in finite time. Meanwhile, the solution trajectory of the neural network converges to the optimal solution set of the corresponding optimization problem and ultimately to its critical point set. (A second sketch, illustrating a penalty-factor-free dynamic, follows the first one below.)
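The abstract does not reproduce the network equations of result 1, so the following is only a minimal sketch under stated assumptions: a standard augmented Lagrangian primal-dual ODE for min f(x) subject to h(x) = 0 and g(x) <= 0, integrated by forward Euler. The toy problem, the augmentation parameter rho, and the use of a single augmented term are illustrative choices, not the thesis's model (which uses two augmented functions).

```python
# Minimal sketch (NOT the thesis's exact model): an augmented
# Lagrangian neural network written as a primal-dual ODE and
# integrated by forward Euler, for  min f(x)  s.t.  h(x)=0, g(x)<=0.
import numpy as np

rho = 10.0   # augmentation parameter -- an assumed, illustrative value
dt = 1e-3    # Euler step size
T = 20000    # number of integration steps

# Toy nonsmooth problem (illustrative only): f(x) = |x1-1| + x2^2.
f_sub = lambda x: np.array([np.sign(x[0] - 1.0), 2.0 * x[1]])  # a subgradient of f
h = lambda x: np.array([x[0] + x[1] - 1.0])                    # equality constraint h(x)=0
Jh = lambda x: np.array([[1.0, 1.0]])                          # Jacobian of h
g = lambda x: np.array([x[1] - 0.3])                           # inequality constraint g(x)<=0
Jg = lambda x: np.array([[0.0, 1.0]])                          # Jacobian of g

x = np.array([2.0, 2.0])   # initial state
lam = np.zeros(1)          # multiplier for h
mu = np.zeros(1)           # multiplier for g (kept nonnegative by the dynamics)

for _ in range(T):
    mu_plus = np.maximum(0.0, mu + rho * g(x))   # shifted, projected multiplier
    # x' = -( subgrad f + Jh^T (lam + rho*h) + Jg^T mu_plus )
    dx = -(f_sub(x) + Jh(x).T @ (lam + rho * h(x)) + Jg(x).T @ mu_plus)
    dlam = h(x)              # gradient ascent on the equality multiplier
    dmu = mu_plus - mu       # projection-type update encoding complementarity
    x, lam, mu = x + dt * dx, lam + dt * dlam, mu + dt * dmu

print("x* ~", x, " h(x*) ~", h(x), " g(x*) ~", g(x))
```

For this toy problem the trajectory settles near x* = (1, 0), where the KKT conditions hold; the augmented terms rho*h and mu_plus are what speed up convergence relative to a plain Lagrangian flow, which is the qualitative effect the thesis attributes to its two augmented functions.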
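Likewise hedged, the next sketch illustrates the kind of penalty-factor-free behavior claimed in result 2, without reproducing the thesis's actual differential inclusion: the state follows a subgradient of an exact l1 constraint-violation measure p(x) while infeasible (which drives it to the feasible region in finite time under standard constraint qualifications), and the objective subgradient is switched on once feasible. The switching rule, the tolerance eps, and the toy problem are assumptions for illustration.

```python
# Minimal sketch (NOT the thesis's exact model): a single selection
# from a penalty-factor-free differential inclusion. No penalty
# parameter is tuned; the constraint term is the exact l1 violation
# measure p(x), whose subgradients have norm bounded below off the
# feasible set, giving finite-time feasibility.
import numpy as np

dt, T, eps = 1e-3, 30000, 1e-3   # step size, steps, feasibility tolerance (assumed)

f_sub = lambda x: np.array([np.sign(x[0] - 1.0), 2.0 * x[1]])  # a subgradient of |x1-1| + x2^2
h = lambda x: np.array([x[0] + x[1] - 1.0])
Jh = lambda x: np.array([[1.0, 1.0]])
g = lambda x: np.array([x[1] - 0.3])
Jg = lambda x: np.array([[0.0, 1.0]])

def p(x):
    # exact l1 measure of constraint violation; p(x)=0 iff x is feasible
    return np.abs(h(x)).sum() + np.maximum(0.0, g(x)).sum()

def p_sub(x):
    # a subgradient of p at x
    return Jh(x).T @ np.sign(h(x)) + Jg(x).T @ (g(x) > 0).astype(float)

x = np.array([5.0, 4.0])   # initial point chosen arbitrarily (no bounded-ball restriction)
for _ in range(T):
    feasible = p(x) < eps
    # right-hand side of the inclusion: constraint term always active,
    # objective term switched on only once the state is feasible
    dx = -(float(feasible) * f_sub(x) + p_sub(x))
    x = x + dt * dx

print("x* ~", x, " violation p(x*) ~", p(x))
```

Starting from the infeasible point (5, 4), the trajectory first reaches the feasible region in finite time and then slides along the constraint toward the critical point (1, 0); the design choice to watch for is that no penalty factor appears anywhere, matching advantage 1) of result 2.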
Keywords/Search Tags: optimization problem, recurrent neural network, Lagrange, penalty factor, nonconvex