
Research on Theory and Algorithms of Neural Optimization Based on Differential Equations and Differential Inclusions

Posted on: 2010-11-23    Degree: Doctor    Type: Dissertation
Country: China    Candidate: W Bian    Full Text: PDF
GTID: 1100360302965556    Subject: Basic mathematics
Abstract/Summary:
Optimization problems arise in a broad variety of scientific and engineering applications, and real-time solutions are often required. One promising approach to real-time optimization is to employ artificial neural networks amenable to circuit implementation. Based on differential inclusions, the Lyapunov method, matrix theory, nonsmooth analysis, and variational theorems, this thesis studies four classes of optimization problems, proposing corresponding neural networks and establishing global existence, uniqueness, stability, convergence, and exactness of the solutions to these networks. The main results are as follows:

1. We study two classes of important degenerate quadratic optimization problems in R^n. First, based on the Lagrange function method, we establish a neural network that solves a class of degenerate quadratic convex minimization problems with general linear constraints. The proposed network has good properties such as complete stability and finite-time convergence; as for the convergence rate, the part of the output trajectory associated with the nonsingular part of Q converges exponentially. We also offer a criterion to determine whether the objective function can still attain its minimum over R^n under these constraints. Next, through careful analysis and transformation, we convert a class of degenerate quadratic saddle point problems with mixed linear constraints into a class of degenerate quadratic minimization problems. Building on the theory and methods above, we introduce a projection neural network for solving this class of saddle point problems and prove its complete stability, finite-time convergence, and exponential convergence. In addition, we propose a simpler network for solving the global quadratic convex saddle point problem.

2. We study two classes of important nonsmooth convex optimization problems in R^n.
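As a rough illustration of the Lagrangian gradient-flow networks described in Part 1, consider the following Euler-discretized dynamics for a small equality-constrained quadratic program. The matrices Q, A and vectors c, b below are hypothetical, and the dynamics are a generic Lagrangian flow, not the thesis' exact model:

```python
import numpy as np

# Illustrative sketch: a Lagrangian gradient-flow "neural network" for
#   minimize (1/2) x^T Q x + c^T x   subject to  A x = b,
# integrated by forward Euler:
#   dx/dt   = -(Q x + c + A^T lam)
#   dlam/dt =   A x - b

Q = np.array([[2.0, 0.0], [0.0, 2.0]])   # positive definite here for simplicity
c = np.array([-2.0, -4.0])
A = np.array([[1.0, 1.0]])               # constraint x1 + x2 = 1
b = np.array([1.0])

x = np.zeros(2)        # network state
lam = np.zeros(1)      # Lagrange multiplier state
h = 0.01               # Euler step size

for _ in range(20000):
    dx = -(Q @ x + c + A.T @ lam)
    dlam = A @ x - b
    x, lam = x + h * dx, lam + h * dlam

print(x)    # approaches the KKT point (0, 1) with multiplier lam = 2
```

The fixed point of this iteration is exactly the KKT pair of the quadratic program, which is why such flows are natural candidates for circuit implementation.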
First, based on the exact penalty function method, we present a neural network modeled by a differential inclusion for solving a class of nonsmooth convex minimization problems with both affine equality constraints and convex inequality constraints. By controlling the two parameters respectively, we show that the trajectory of the network reaches the feasible region in finite time and stays there thereafter. Then, arguing by contradiction, we prove that the trajectory converges to the equilibrium point set, and we give a condition that ensures finite-time convergence to this set. Furthermore, the exactness of the network illustrates its superiority. Next, we present another network modeled by a differential inclusion for solving a class of nonsmooth convex saddle point problems with mixed constraints; on the basis of the theory and techniques of the preceding part, we establish properties of this network such as global existence, uniqueness, convergence, and exactness of its solution.

3. We study two classes of important nonsmooth nonconvex optimization problems in R^n. First, based on the penalty function method, we establish a network modeled by a differential inclusion with only one parameter for solving a class of nonsmooth nonconvex minimization problems with both affine equality constraints and convex inequality constraints. Imposing suitable conditions on the feasible region and the parameter, we obtain global existence of solutions to the network; then, by introducing the weaker one-sided Lipschitz condition, we obtain uniqueness of the solution. By controlling the single parameter, we force the trajectory of the proposed network to reach the feasible region in finite time and stay there thereafter. The final convergence and exactness of the network are also obtained.
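A minimal sketch of an exact-penalty differential-inclusion network of the kind described in Part 2, on a hypothetical one-constraint problem. The objective, constraint, penalty weight sigma, and subgradient selection are all assumptions made for illustration, not the thesis' model:

```python
import numpy as np

# Illustrative sketch: exact-penalty subgradient flow for the nonsmooth program
#   minimize |x1| + |x2|   subject to  x1 + x2 >= 1,
# via the penalty  E(x) = |x1| + |x2| + sigma * max(0, 1 - x1 - x2)
# and the inclusion  dx/dt in -∂E(x), discretized by forward Euler
# with one concrete subgradient selection.

sigma = 2.0            # penalty weight; must exceed the exact-penalty threshold
h = 1e-3               # Euler step size
x = np.array([2.0, 2.0])

for _ in range(3000):
    v = np.sign(x)                       # a subgradient of |x1| + |x2|
    if 1.0 - x[0] - x[1] > 0.0:          # constraint violated: add penalty term
        v = v + sigma * np.array([-1.0, -1.0])
    x = x - h * v

print(x)   # chatters near a constrained minimizer, here (0.5, 0.5)
```

The trajectory reaches the boundary x1 + x2 = 1 in finite time and then oscillates in a band of width O(h) around it, which mirrors the finite-time feasibility and invariance results stated above.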
To improve the implementability of the network, we provide conditions that guarantee the coincidence between the solution of the network and its slow solution; without such results, the proposed network could not be implemented by circuits, MATLAB, or other mathematical software. Next, we propose another network modeled by a differential inclusion for solving a class of nonsmooth nonconvex saddle point problems with mixed constraints; the convergence and exactness of this network are proved, and a geometric interpretation is given.

In the above three parts, illustrative examples are given to demonstrate the validity of the theories and algorithms obtained and to explain the implementation process of the algorithms.

4. We study a class of nonsmooth convex optimization problems in Hilbert space. We propose a neural network modeled by a differential inclusion for solving a class of nonsmooth convex minimization problems with nonsmooth convex inequality constraints in Hilbert space. Under suitable assumptions on the objective function, the feasible region, and the penalty parameter, we obtain global existence and uniqueness of the solution to the differential inclusion, finite-time convergence to the feasible region together with its invariance, exactness, and several convergence results. In particular, when the subgradient of the objective function is strongly monotone, or the interior of the optimal solution set is nonempty, we obtain convergence of the trajectory with respect to the strong topology. Finally, we present an asymptotic control result for the proposed network.
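The slow solution mentioned in Part 3 is the trajectory that, at each state, selects the minimal-norm element of the inclusion's right-hand side. A one-dimensional sketch with the hypothetical objective f(x) = |x| + (x - 1)^2 (an assumption for illustration, not an example from the thesis):

```python
# Illustrative 1-D sketch of the "slow solution" of dx/dt in -∂f(x):
# at each state the dynamics select the minimal-norm element of ∂f(x).
# Here f(x) = |x| + (x - 1)^2, so ∂f(x) is the interval
#   [2(x-1) - 1, 2(x-1) + 1]  at the kink x = 0, and a singleton otherwise.

def subdiff(x):
    """Return ∂f(x) as an interval (lo, hi) for f(x) = |x| + (x - 1)**2."""
    g = 2.0 * (x - 1.0)
    if x > 0.0:
        return (g + 1.0, g + 1.0)
    if x < 0.0:
        return (g - 1.0, g - 1.0)
    return (g - 1.0, g + 1.0)           # full interval at the kink x = 0

def min_norm(lo, hi):
    """Minimal-norm element of [lo, hi]: the projection of 0 onto it."""
    return min(max(0.0, lo), hi)

x, h = -2.0, 0.01                       # initial state, Euler step size
for _ in range(2000):
    lo, hi = subdiff(x)
    x = x - h * min_norm(lo, hi)        # slow-solution (minimal-norm) selection

print(x)   # approaches the minimizer x = 0.5
```

Because the minimal-norm selection is single-valued and computable, a network whose solution coincides with its slow solution can actually be simulated numerically, which is the practical point of the coincidence conditions above.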
Keywords/Search Tags:optimization problem, differential equation, differential inclusion, finite time convergence, exactness