
Algorithm For Solving Linear Constrained Optimization Problems

Posted on: 2010-06-10    Degree: Master    Type: Thesis
Country: China    Candidate: N Zhang    Full Text: PDF
GTID: 2190360275464988    Subject: Applied Mathematics
Abstract/Summary:
Optimization is a subject with strong practical applications. With the development of computer technology and the demands of real-world problems, large-scale optimization has attracted more and more attention, and fast, efficient algorithms have become a very active topic in recent research. Optimization comprises two branches: unconstrained optimization and constrained optimization. Since optimization is a central problem in operations research, many researchers have proposed improvements and innovations of algorithms within the framework of classical optimization theory. For constrained optimization problems, many results have been obtained, such as quadratic programming, penalty function methods, sequential quadratic programming, feasible direction methods, and trust region methods. In this thesis we mainly study constrained optimization problems; we make some improvements and innovations on classical sequential quadratic programming and the trust region method and obtain new algorithms. Numerical examples show that the algorithms are effective.

In Chapter 1, we first introduce the development of optimization. We then review the basic ideas of sequential quadratic programming and the trust region method for constrained optimization problems, together with related research work.

In Chapter 2, we study linearly equality-constrained optimization. For LC^1 convex constrained problems, we reformulate the KKT system as a system of nonsmooth equations and give a modified BFGS-SQP method for these problems. The new BFGS update formula guarantees y_k^T d_k > 0 and preserves positive definiteness of the update matrix from one iteration to the next. Under some conditions, we prove that the algorithm possesses global and superlinear convergence properties. Numerical examples show that the algorithm is effective.

In Chapter 3, we study linearly inequality-constrained optimization and present a new trust region method with a conic model for solving it. Unlike the traditional trust region method, our algorithm obtains the next point by a Wolfe line search at each iteration, whether or not the trial step is accepted. The new algorithm does not need to re-solve the subproblem, satisfies the quasi-Newton condition at each iteration, and maintains a positive-definite approximation to the Hessian of the objective function. Under mild conditions, we prove global convergence of the algorithm. Numerical examples show that the algorithm is effective.
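The thesis's exact modified BFGS formula is not reproduced in this abstract, but the stated property, that the update guarantees a positive curvature quantity and so preserves positive definiteness of the approximate Hessian, is the same idea behind the standard damped (Powell-modified) BFGS update. The following is a minimal sketch of that conventional technique, not the thesis's own formula; the function name and the damping threshold `theta_min` are illustrative choices.

```python
import numpy as np

def damped_bfgs_update(B, s, y, theta_min=0.2):
    """One damped BFGS update (Powell's modification).

    B : current symmetric positive-definite Hessian approximation
    s : step,             s = x_{k+1} - x_k
    y : gradient change,  y = g_{k+1} - g_k

    The damping replaces y by a convex combination y_bar of y and B s,
    chosen so that s^T y_bar > 0 always holds; the BFGS update with
    y_bar then keeps B positive definite.
    """
    Bs = B @ s
    sBs = s @ Bs
    sy = s @ y
    # Damping factor: theta = 1 (plain BFGS) when curvature is good enough.
    if sy >= theta_min * sBs:
        theta = 1.0
    else:
        theta = (1.0 - theta_min) * sBs / (sBs - sy)
    y_bar = theta * y + (1.0 - theta) * Bs  # guarantees s^T y_bar >= theta_min * sBs > 0
    # Standard BFGS rank-two update with the damped y_bar.
    return B - np.outer(Bs, Bs) / sBs + np.outer(y_bar, y_bar) / (s @ y_bar)
```

Even when s^T y < 0, where plain BFGS would destroy positive definiteness, the damped update returns a symmetric positive-definite matrix.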
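Chapter 3's algorithm takes the next iterate from a Wolfe line search even when the trust-region trial step is rejected. The abstract does not spell out the line search procedure, so the sketch below is a generic bisection-style search for a step length satisfying the weak Wolfe conditions (sufficient decrease plus curvature); the function name and the constants c1, c2 are hypothetical defaults, not the thesis's parameters.

```python
import numpy as np

def wolfe_line_search(f, grad, x, d, c1=1e-4, c2=0.9, alpha=1.0, max_iter=50):
    """Find a step length alpha along descent direction d satisfying
    the weak Wolfe conditions:
      f(x + a d) <= f(x) + c1 * a * grad(x)^T d   (sufficient decrease)
      grad(x + a d)^T d >= c2 * grad(x)^T d       (curvature)
    Uses simple interval bisection/expansion; a generic sketch only.
    """
    lo, hi = 0.0, np.inf
    f0 = f(x)
    g0 = grad(x) @ d  # directional derivative at x; must be negative
    for _ in range(max_iter):
        if f(x + alpha * d) > f0 + c1 * alpha * g0:
            # Sufficient decrease fails: step too long, shrink.
            hi = alpha
            alpha = 0.5 * (lo + hi)
        elif grad(x + alpha * d) @ d < c2 * g0:
            # Curvature fails: step too short, grow (or bisect if bracketed).
            lo = alpha
            alpha = 2.0 * lo if np.isinf(hi) else 0.5 * (lo + hi)
        else:
            return alpha
    return alpha
```

In a trust-region framework of the kind described above, such a search would supply the next point when the ratio test rejects the trial step, so that every iteration still makes measurable progress.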
Keywords/Search Tags: Constrained optimization, Sequential quadratic programming, Trust region method, Conic model, Line search, Convergence