
Neural Networks For Two Kinds Of Nonlinear Variational Inequalities

Posted on: 2008-02-15    Degree: Master    Type: Thesis
Country: China    Candidate: H M Bi    Full Text: PDF
GTID: 2120360215999780    Subject: Computational Mathematics
Abstract/Summary:
Variational inequalities (VI) provide a unified framework for optimization problems, equilibrium problems, and related problems. They arise in a wide range of fields, including signal processing, system identification, filter design, robot control, economics, transportation science, operations research, and nonlinear analysis. In particular, many problems in mathematics, physics, and engineering can be formulated as variational inequalities.

In many scientific and engineering applications, real-time solutions of variational inequalities are desired. However, traditional algorithms are not suitable for real-time implementation on a computer, since the computing time required for a solution depends heavily on the dimension and structure of the problem and on the complexity of the algorithm used. One promising approach for problems with high dimension and dense structure is to employ circuit implementations based on artificial neural networks. Because of their dynamic nature and potential for electronic implementation, neural networks can be realized physically by dedicated hardware, such as application-specific integrated circuits, in which the computational procedure is truly distributed and parallel. The neural network approach can therefore solve optimization problems with running times orders of magnitude faster than conventional optimization algorithms executed on general-purpose digital computers, and it is of great practical interest to develop neural network models that provide real-time solutions to variational inequality problems.

Based on optimization theory and projection theory, we present neural networks for solving two kinds of variational inequalities. The asymptotic behavior of these networks, including stability, convergence, and exponential stability, is rigorously proved using the stability theory of ordinary differential equations and LaSalle's invariance principle.
Several illustrative examples demonstrate the performance of the proposed networks.

The thesis is divided into three parts. The first part is the introduction, which covers the significance and development of VI together with the fundamental theories used later, such as optimization theory, projection theory, the stability theory of ordinary differential equations, and LaSalle's invariance principle.

In the second part, we consider the following system of nonlinear variational inequalities (SNVI): find x*, y* ∈ K satisfying the system of inequalities, where H is a real Hilbert space, K is a nonempty closed convex subset of H, and T : K → H is any mapping. Based on the necessary and sufficient conditions for a solution, a new neural network with a scaling parameter λ > 0 is defined. The proposed neural network is shown to be globally convergent, globally asymptotically stable, and globally exponentially stable under mild conditions. The new model has a simple structure and can be implemented in hardware.

In the third part, we consider a class of nonlinear implicit variational inequality (NIVI) problems: find x* ∈ K such that 〈T(x*, x*), x − x*〉 ≥ 0 for all x ∈ K, where H is a real Hilbert space, K is a nonempty closed convex subset of H, and T : K × K → H is any mapping. Based on properties of the problem, we present a neural network to solve it. We then establish the relationship between the equilibrium points of the network and the solutions of the problem, and prove the stability and convergence of the proposed neural network.
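The necessary and sufficient condition underlying such projection-type networks is the standard fixed-point characterization: x* solves the inequality 〈T(x*, x*), x − x*〉 ≥ 0 for all x ∈ K if and only if x* = P_K(x* − αT(x*, x*)) for any α > 0, which suggests the dynamics dx/dt = λ(P_K(x − αT(x, x)) − x). The following Python snippet is a minimal sketch of this idea, not the thesis's actual model: the set K (a box), the affine mapping T, and all parameter values are illustrative assumptions.

```python
import numpy as np

# Illustrative (assumed) problem data: K = [0, 1]^2 and T(u, v) = A @ u + b.
# Here T ignores its second argument, so the implicit VI reduces to an
# explicit one with a known solution, making convergence easy to check.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([-1.0, -1.0])

def T(u, v):
    return A @ u + b

def project_K(x):
    # Projection onto the box K = [0, 1]^2 is a componentwise clamp.
    return np.clip(x, 0.0, 1.0)

def solve_vi(x0, lam=1.0, alpha=0.5, dt=0.01, steps=10000):
    """Forward-Euler integration of the projection dynamics
    dx/dt = lam * (P_K(x - alpha * T(x, x)) - x)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x + dt * lam * (project_K(x - alpha * T(x, x)) - x)
    return x

x_star = solve_vi([0.5, 0.5])
# At an equilibrium, x* = P_K(x* - alpha * T(x*, x*)); the residual of this
# fixed-point equation measures how close x_star is to a VI solution.
residual = np.linalg.norm(x_star - project_K(x_star - 0.5 * T(x_star, x_star)))
```

For this symmetric positive-definite choice of A, the VI is equivalent to minimizing (1/2)xᵀAx + bᵀx over the box, whose minimizer (0.2, 0.4) lies in the interior, so the trajectory settles at that point and the fixed-point residual vanishes.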
Keywords/Search Tags: Variational inequality, Neural network, Convergence, Stability, Exponential stability