
Research On Convex Optimization Problem Based On One-layer Neural Network

Posted on: 2017-04-08
Degree: Master
Type: Thesis
Country: China
Candidate: J H Song
Full Text: PDF
GTID: 2180330509956861
Subject: Applied Mathematics

Abstract/Summary:
Neurodynamic optimization theory has been widely studied because it can solve optimization problems efficiently in real time. Various neural networks have been proposed for solving nonlinear programming problems, especially convex optimization problems. According to the domain of the variables, these problems can be divided into two cases: constrained optimization in complex variables ("the complex-variables optimization problem" for short) and constrained optimization in real variables ("the real-variables optimization problem" for short).

For the complex-variables optimization problem, the traditional method converts the problem into a real-valued one by splitting the complex variables into their real and imaginary parts. This method suffers from some disadvantages, however, such as enlarging the dimension of the original problem and breaking its data structure. It is well known that the derivative plays an important role in optimization, but a real-valued function of complex variables is nonanalytic, which means that a large number of real-valued optimization methods cannot be applied directly to the complex-variables optimization problem. To overcome this difficulty, this thesis proposes a one-layer recurrent neural network for solving the complex-variables optimization problem, based on the CR-calculus and the penalty method. It is proved that, for any initial state chosen from a given sphere, the state of the proposed neural network reaches the feasible region in finite time and finally converges to an optimal solution. Some numerical examples are presented to substantiate the effectiveness of the proposed neural network.

Recently, most neural networks for solving nonsmooth real-variables convex optimization problems have been proposed based on penalty methods. Moreover, the convergence of the state depends on suitable parameters and some additional assumptions.
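To make the penalty-based idea concrete, a generic real-variables sketch (not the thesis's exact model) follows: the network state flows along the negative (sub)gradient of f(x) + σp(x), where p penalizes constraint violation and σ is the penalty parameter. The problem instance, step size, and value of σ below are illustrative assumptions only.

```python
import numpy as np

# Example problem: minimize f(x) = ||x - (2, 2)||^2
# subject to x1 + x2 <= 1; the constrained optimum is (0.5, 0.5).
def grad_f(x):
    return 2.0 * (x - np.array([2.0, 2.0]))

def subgrad_penalty(x):
    # p(x) = max(0, x1 + x2 - 1); a subgradient is (1, 1) when violated.
    return np.array([1.0, 1.0]) if x[0] + x[1] > 1.0 else np.zeros(2)

def penalty_network(x0, sigma=10.0, dt=1e-3, steps=20000):
    """Euler discretization of dx/dt = -(grad f(x) + sigma * subgrad p(x))."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x - dt * (grad_f(x) + sigma * subgrad_penalty(x))
    return x

x = penalty_network([0.0, 0.0])  # converges near (0.5, 0.5)
```

Note that the exact-penalty property only holds when σ exceeds a bound tied to the problem data (here any σ ≥ 3 suffices); choosing such a σ in general is exactly the difficulty discussed next.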
However, it is not easy to determine a suitable penalty parameter. To avoid this difficulty, we propose a new one-layer neural network without any penalty parameters. Under mild assumptions, it is proved that the state starting from any initial point reaches the feasible region in finite time and finally converges to an optimal solution.
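For comparison, one well-known family of penalty-parameter-free models (a generic sketch, not the network proposed in this thesis) uses projection dynamics such as dx/dt = P_Ω(x − ∇f(x)) − x, where P_Ω is the projection onto the feasible set Ω; no penalty parameter appears anywhere. The scalar box-constrained instance below is an illustrative assumption.

```python
import numpy as np

# Example: minimize f(x) = (x - 3)^2 over the box Omega = [0, 1];
# the constrained optimum is x = 1.
def grad_f(x):
    return 2.0 * (x - 3.0)

def project(x, lo=0.0, hi=1.0):
    # Projection onto the box [lo, hi].
    return np.clip(x, lo, hi)

def projection_network(x0, dt=0.01, steps=2000):
    """Euler discretization of dx/dt = P_Omega(x - grad f(x)) - x."""
    x = float(x0)
    for _ in range(steps):
        x = x + dt * (project(x - grad_f(x)) - x)
    return x

x_star = projection_network(0.0)  # converges to 1.0
```

Because the projection enforces feasibility directly, convergence here does not hinge on tuning any parameter, only on the step size of the discretization.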
Keywords/Search Tags: one-layer neural network, complex-variables convex optimization, real-variables convex optimization, convergence