
Trust-Region Methods Combining a Filter Line Search Technique for Nonlinear Constrained Optimization

Posted on: 2015-03-30    Degree: Doctor    Type: Dissertation
Country: China    Candidate: Y G Pei    Full Text: PDF
GTID: 1220330431966215    Subject: Computational Mathematics
Abstract/Summary:
Both trust-region methods and line search techniques can ensure the global convergence of the corresponding algorithms for solving nonlinear optimization, and each has its own advantages. Trust-region methods guarantee very nice convergence properties of the corresponding algorithms, while line search techniques require very little computation to determine a new trial point. Noting the advantages and disadvantages of these two approaches, Nocedal and Yuan presented the idea of combining them so that the algorithm retains the good convergence properties of trust-region methods and is more economical to implement. The filter technique, initially proposed by Fletcher and Leyffer, is usually used for solving constrained optimization problems; its main idea is that trial points are accepted if they improve the objective function or reduce the constraint violation. Instead of the traditional penalty function method, the filter technique can be embedded in a trust-region framework as well as a line search framework to ensure the global convergence of the algorithm. In this thesis, trust-region methods which employ backtracking line search with the filter technique are presented for solving nonlinear equality constrained programming. Global convergence and the local convergence rate are analyzed, and the numerical results verify the efficiency of the proposed algorithms. Furthermore, in association with an affine scaling interior method, the algorithms are extended to nonlinear equality constrained optimization with nonnegativity constraints on the variables.

For nonlinear equality constrained optimization, two algorithms are presented using the idea of combining the trust-region method with the filter line search technique. The first is based on the trust-region sequential quadratic programming framework. The trial step is decomposed into a normal step and a tangential step. The normal step is required to satisfy the linearized constraints and should not be too large; otherwise it is impossible for the tangential step to supply sufficient descent for the model of the objective function. If such a normal step does not exist, the algorithm turns to a feasibility restoration phase, whose purpose is to find, by trying to decrease the constraint violation, a new iterate that provides sufficient reduction of the constraint violation or the objective function and is also acceptable to the current filter. Having obtained the normal step, the tangential step is computed by solving a trust-region subproblem so as to obtain sufficient reduction of the model of the objective function. The step size is decided by backtracking line search together with the filter technique, and hence the next iteration point is determined. The ratio of the actual reduction to the predicted reduction is used only to adjust the trust-region radius. The proof of global convergence is presented under some reasonable assumptions, and the reported numerical results show that the new algorithm is feasible and effective. In addition, introducing second-order correction steps overcomes the Maratos effect, and the proposed algorithm achieves superlinear local convergence.
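As a concrete illustration of the acceptance mechanism just described, the following minimal sketch combines a Fletcher-Leyffer-style filter test with backtracking line search. It is written in Python under stated assumptions: the margins gamma_theta and gamma_f, the backtracking factor tau, and the helper names f_fun, theta_fun and filter_pairs are illustrative, not the thesis's exact rules or parameters.

    import numpy as np

    def acceptable_to_filter(theta, f, filter_pairs, gamma_theta=1e-5, gamma_f=1e-5):
        """A trial pair (theta, f) is acceptable if, for every pair (theta_j, f_j)
        already stored in the filter, it sufficiently reduces either the constraint
        violation theta or the objective value f (illustrative decrease margins)."""
        return all(theta <= (1.0 - gamma_theta) * theta_j or f <= f_j - gamma_f * theta_j
                   for theta_j, f_j in filter_pairs)

    def filter_backtracking_search(x, d, f_fun, theta_fun, filter_pairs,
                                   alpha_init=1.0, tau=0.5, alpha_min=1e-10):
        """Backtrack along the combined trial step d = n + t until the trial point
        is acceptable to the current filter; return None to signal that the
        feasibility restoration phase should be invoked instead."""
        alpha = alpha_init
        while alpha >= alpha_min:
            x_trial = x + alpha * d          # x and d are NumPy arrays
            if acceptable_to_filter(theta_fun(x_trial), f_fun(x_trial), filter_pairs):
                return alpha, x_trial        # accepted trial point
            alpha *= tau                     # shrink the step and try again
        return None                          # no acceptable step length was found

In this sketch the filter plays the role of the traditional penalty function: a trial point only has to improve one of the two measures, up to small margins, with respect to every pair already recorded in the filter.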
The second algorithm is based on the Lagrange function. Different from the first algorithm, at the current iteration the general full trust-region subproblem is decomposed into a pair of trust-region subproblems in the normal and tangential subspaces, respectively, and the trial step is given by solving these two subproblems. The step size is decided by backtracking line search together with the filter technique, where the objective function is replaced by the Lagrange function. The new method retains global convergence to first-order critical points under some reasonable assumptions without using a penalty function. Meanwhile, by using the Lagrange function in the filter, superlinear local convergence is achieved without second-order correction steps. Preliminary numerical results are reported to show the effectiveness of the proposed algorithm.

For nonlinear equality constrained optimization with nonnegativity constraints on the variables, an affine scaling interior trust-region algorithm which employs backtracking line search with the filter technique is presented. According to the character of the optimality conditions, an affine scaling matrix is introduced appropriately, and the corresponding affine scaling trust-region subproblem is established to generate the search direction. The step size is decided by backtracking line search in association with the filter technique to determine the next iteration point. Furthermore, the iteration points must remain strictly positive because the problem imposes nonnegativity constraints on the variables. To this end, the initial trial step length of the backtracking line search is modified to ensure strict positivity whenever the initial point is strictly positive, as sketched below. The global convergence analysis is presented and numerical results are reported.

Finally, we summarize the main results of this thesis and outline some problems that deserve further study in future work.
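The strict-positivity safeguard mentioned above can be illustrated by a short sketch, assuming a standard fraction-to-boundary rule; the function name max_positive_step and the factor fraction=0.995 are illustrative assumptions rather than the thesis's exact choices.

    import numpy as np

    def max_positive_step(x, d, fraction=0.995):
        """Largest safeguarded initial step length alpha0 <= 1 such that
        x + alpha0 * d stays strictly positive when x > 0 componentwise
        (fraction-to-boundary rule with an illustrative fraction)."""
        neg = d < 0              # only components moving toward zero restrict alpha
        if not np.any(neg):
            return 1.0
        return min(1.0, fraction * np.min(-x[neg] / d[neg]))

The backtracking line search of the affine scaling algorithm would then start from alpha0 = max_positive_step(x, d) instead of 1, so every backtracked trial point inherits strict positivity from the current iterate.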
Keywords/Search Tags:Nonlinear optimization, Trust-region, Line search, Filter technique, Global convergence, Local convergence rate, Maratos effect