
Sequential quadratic programming methods based on indefinite Hessian approximations

Posted on: 2000-12-28
Degree: Ph.D.
Type: Dissertation
University: Stanford University
Candidate: Goldsmith, Meredith Jo
Full Text: PDF
GTID: 1460390014462408
Subject: Operations Research
Abstract/Summary:
We address the nonlinearly constrained optimization problem: find a local minimizer of an objective function subject to nonlinear inequality constraints, where all functions are twice continuously differentiable. A key challenge in solving this problem is finding an accurate approximation to the Hessian of the Lagrangian, $\nabla^2_{xx} L(x,\lambda)$. In this dissertation we develop a new technique for forming a quasi-Newton approximation to the Hessian of the Lagrangian and apply it to a sequential quadratic programming (SQP) method.

The common approach to approximating $\nabla^2_{xx} L(x,\lambda)$ has been to follow the quasi-Newton techniques used for unconstrained optimization: a single positive-definite approximation matrix is maintained using the BFGS update, and the dependence of the Lagrangian on the multipliers is ignored. This direct approximation may be poor even on ideal problems. We aim to improve the quality of the quadratic model and the convergence rate of the SQP algorithm with the following alternative approach: we approximate the Hessian of the objective function and the Hessian of each constraint function using the symmetric rank-one (SR1) update, and then combine these individual estimates to obtain an estimate of $\nabla^2_{xx} L(x,\lambda)$. This disaggregated approximation can be applied efficiently to large, sparse problems.

The disaggregated approach can result in indefinite approximations, leading to nonconvex QP subproblems. When the direct approximation is used, the QP subproblems are strictly convex and have unique solutions, and hence the solution of each QP is a descent direction for various merit functions. When the QP is not convex, obtaining a descent direction may require modifying the QP subproblems or terminating them (even convex ones) at the first stationary point. Our objective is to construct a suitable search direction without modifying the subproblem unnecessarily. We develop rules and Hessian modifications that eliminate the need to stop at the first stationary point.
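The disaggregated approach described above can be sketched as follows. This is an illustrative Python sketch, not the dissertation's MATLAB code: the function names (`sr1_update`, `disaggregated_lagrangian_hessian`), the skipping tolerance, and the sign convention $L(x,\lambda) = f(x) + \sum_i \lambda_i c_i(x)$ are assumptions made for the example.

```python
import numpy as np

def sr1_update(H, s, y, tol=1e-8):
    """Symmetric rank-one (SR1) update of a Hessian approximation H.

    s : step in the variables, x_new - x_old
    y : corresponding change in the gradient being approximated
    The update H + rr^T/(r^T s), with r = y - Hs, preserves symmetry but
    not positive definiteness, so the result may be indefinite.
    """
    r = y - H @ s
    denom = r @ s
    # Standard SR1 safeguard: skip the update when the denominator is
    # too small relative to ||r|| * ||s|| (assumed tolerance rule).
    if abs(denom) < tol * np.linalg.norm(r) * np.linalg.norm(s):
        return H
    return H + np.outer(r, r) / denom

def disaggregated_lagrangian_hessian(H_f, H_c, lam):
    """Combine separate estimates of the objective Hessian H_f and the
    constraint Hessians H_c[i] into an estimate of the Lagrangian Hessian,
    H_f + sum_i lam[i] * H_c[i].  (The sign attached to the multipliers
    here is an assumption; conventions vary.)
    """
    H = H_f.copy()
    for Hi, li in zip(H_c, lam):
        H += li * Hi
    return H
```

After one SR1 update the new approximation satisfies the secant condition `H_new @ s == y`; for a quadratic function, updates along linearly independent steps recover the exact Hessian in finitely many steps.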
If the QP is strictly convex, the rules enable the minimizer to be determined without any modification to the subproblem.

A MATLAB implementation of this SQP algorithm with the disaggregated Hessian approximation and the SR1 quasi-Newton update achieves superior performance compared with the direct BFGS Hessian approximation.
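A combined Hessian estimate can be indefinite, in which case the QP subproblem is nonconvex. The generic eigenvalue-shift modification below illustrates one simple way to restore strict convexity; it is a common textbook device shown only for illustration, not the specific rules or modifications developed in the dissertation.

```python
import numpy as np

def shift_to_positive_definite(H, delta=1e-6):
    """Make H positive definite by adding tau * I.

    The QP subproblem min 0.5 p^T H p + g^T p is strictly convex exactly
    when H is positive definite, so we choose the smallest shift tau >= 0
    that raises the minimum eigenvalue of H to at least delta.
    (delta and the eigenvalue-based shift are assumptions for this sketch.)
    """
    lam_min = np.linalg.eigvalsh(H).min()
    tau = 0.0 if lam_min >= delta else delta - lam_min
    return H + tau * np.eye(H.shape[0])
```

A dense eigenvalue computation is used here for clarity; for the large, sparse problems targeted by the disaggregated approach, a factorization-based test would be more appropriate.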
Keywords/Search Tags: Hessian, Approximation, QP subproblems, Quadratic