
Study On Algorithms For Variational Inequality And Unconstrained Optimization Problems

Posted on: 2012-04-06
Degree: Doctor
Type: Dissertation
Country: China
Candidate: X Y Zheng
Full Text: PDF
GTID: 1110330338950234
Subject: Applied Mathematics
Abstract/Summary:
The variational inequality problem is an important research direction in applied mathematics and is widely used to study various equilibrium models in economics, engineering science, transportation, and other fields. Moreover, many optimization problems can be transformed into variational inequalities and solved in that form. The study of variational inequality problems is therefore of great theoretical significance and practical value.

The unconstrained optimization problem is one of the main branches of optimization, and constrained optimization problems can be transformed into unconstrained ones. In particular, under certain conditions, variational inequality problems can also be reformulated as unconstrained problems. In recent years, the theory of unconstrained optimization has developed considerably, and many methods have been proposed for this class of problems. Among them, the conjugate gradient method is a very effective approach; it became popular in the 1960s and 1970s, and with the emergence of large-scale practical problems it has once again become a hot topic.

This dissertation mainly studies algorithms for variational inequality problems and unconstrained optimization problems. The main work is as follows:

1. A new smoothing Newton algorithm is presented for complementarity problems. Smoothing functions play an important role in smoothing algorithms for complementarity problems. We present a new smoothing function and study some of its properties. Based on this smoothing function, we reformulate the nonlinear complementarity problem as a system of smooth equations and present a smoothing Newton-type algorithm for the nonlinear complementarity problem. The algorithm can start from any initial point, and each iteration solves only one linear system and performs one line search.
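The dissertation's new smoothing function is not reproduced in this abstract, but the general scheme can be illustrated with a classical member of the same family: the smoothed Fischer-Burmeister function phi_mu(a, b) = a + b - sqrt(a^2 + b^2 + 2*mu^2), whose limit at mu = 0 vanishes exactly when a >= 0, b >= 0, and a*b = 0. The sketch below is a hypothetical illustration, not the author's algorithm; it applies damped Newton steps to the smoothed residual of a small linear complementarity problem while driving mu to zero:

```python
import numpy as np

def smoothing_newton_lcp(M, q, mu=1.0, tol=1e-8, max_iter=200):
    """Illustrative smoothing Newton sketch for the LCP
    x >= 0, Mx + q >= 0, x'(Mx + q) = 0, using the smoothed
    Fischer-Burmeister reformulation (not the dissertation's function)."""
    def residual(x, mu):
        F = M @ x + q
        return x + F - np.sqrt(x * x + F * F + 2.0 * mu * mu), F

    x = np.zeros(len(q))
    for _ in range(max_iter):
        Phi, F = residual(x, mu)
        if np.linalg.norm(Phi) < tol and mu < tol:
            break
        r = np.sqrt(x * x + F * F + 2.0 * mu * mu)
        # Jacobian of Phi(x): diag(1 - x/r) + diag(1 - F/r) @ M
        J = np.diag(1.0 - x / r) + np.diag(1.0 - F / r) @ M
        dx = np.linalg.solve(J, -Phi)      # one linear system per iteration
        t = 1.0                            # simple backtracking on ||Phi||
        while np.linalg.norm(residual(x + t * dx, mu)[0]) > (1 - 1e-4 * t) * np.linalg.norm(Phi):
            t *= 0.5
            if t < 1e-12:
                break
        x = x + t * dx
        mu *= 0.5                          # drive the smoothing parameter to zero
    return x
```

For example, with M = [[2, 1], [1, 2]] and q = [-1, 1] the iterates approach the complementary solution x* = (0.5, 0).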
Under certain assumptions, we prove that the algorithm is globally convergent and, without strict complementarity conditions, quadratically convergent.

2. For the variational inequality problem, an inexact smoothing Newton method is proposed. Because it is difficult for smoothing algorithms to solve the linear subproblems exactly, we give a method for solving them inexactly, which reduces the computational load. Based on the ideas of the smoothing Newton method and semismooth theory, we use a smoothing function to reformulate the variational inequality problem as a system of smooth nonlinear equations and then solve these equations approximately. In this way, we obtain a smoothing inexact Newton method for computing an approximate solution of variational inequalities. Numerical results show that the algorithm is feasible and effective.

3. A new projection algorithm is presented for the variational inequality problem. The amount of projection computation per iteration is very small, so the method is well suited to large-scale problems: each step requires only a projection onto the feasible set and a few function evaluations. By improving the step length and the search direction, we present a new projection-type algorithm for variational inequality problems which guarantees that the step length at each iteration has a positive lower bound. Under the condition that the underlying function is pseudomonotone, we prove that the algorithm is globally convergent. Numerical experiments verify the feasibility and effectiveness of the new algorithm.

4. For large-scale unconstrained optimization problems, two modified nonlinear conjugate gradient-type methods are proposed. The modified methods have sufficient descent directions that depend neither on the line search used nor on the convexity of the underlying function. Moreover, the modified methods are globally convergent under the Wolfe conditions.
Numerical experiments on a set of large-scale problems show that these methods are promising.
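The two modified methods themselves are not specified in this abstract, but the key property claimed for them, sufficient descent independent of the line search, can be illustrated with the well-known three-term PRP direction of Zhang, Zhou, and Li, for which g_k @ d_k = -||g_k||^2 holds by construction at every iteration regardless of the step sizes returned. The sketch below is an illustrative implementation of that idea, not the dissertation's method, and it uses simple Armijo backtracking rather than the Wolfe conditions assumed in the convergence analysis:

```python
import numpy as np

def three_term_prp(f, grad, x0, tol=1e-8, max_iter=5000):
    """Three-term PRP conjugate gradient sketch: the search direction
    satisfies g @ d = -||g||**2 exactly, independent of the line search."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        t, fx = 1.0, f(x)          # Armijo backtracking (illustrative only;
        while f(x + t * d) > fx + 1e-4 * t * (g @ d):  # the analysis assumes Wolfe)
            t *= 0.5
            if t < 1e-14:
                break
        x_new = x + t * d
        g_new = grad(x_new)
        y = g_new - g
        gg = g @ g
        beta = (g_new @ y) / gg              # PRP parameter
        theta = (g_new @ d) / gg
        d = -g_new + beta * d - theta * y    # third term restores exact descent
        x, g = x_new, g_new
    return x
```

On a strictly convex quadratic f(x) = 0.5 x'Ax - b'x the iterates converge to the solution of Ax = b; the identity g @ d = -||g||^2 follows because the beta and theta terms cancel when multiplied by g_new.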
Keywords/Search Tags: variational inequality, complementarity problem, unconstrained optimization, global convergence, local superlinear/quadratic convergence