
Affine Scaling Interior Trust Region Methods For Solving Convex Constrained Nonlinear Systems

Posted on: 2009-06-08  Degree: Doctor  Type: Dissertation
Country: China  Candidate: C X Jia  Full Text: PDF
GTID: 1100360272987377  Subject: Computational Mathematics
Abstract/Summary:
Optimization problems, which study how to find the optimal solution among many feasible plans, are widely applied in fields such as finance, trade, management, and scientific research. Both the line search technique and the trust region strategy are well-accepted methods in optimization for ensuring global convergence. In this thesis, we propose several families of algorithms, based on the line search technique and the trust region strategy, for solving convex constrained optimization problems.

The modern trust region approach minimizes an appropriate quadratic model that approximates the objective function in a region centered at the current iterate. By updating the trust region radius, we obtain an acceptable step. Coleman and Li [6] presented a double trust region affine scaling interior point algorithm for minimization subject only to linear inequality constraints. By using affine scaling to formulate an appropriate quadratic function and trust region subproblem, the difficulties imposed by the constraints are overcome. For unconstrained minimization, Nocedal and Yuan [30] suggested a combination of the trust region and line search methods, called backtracking, for solving optimization problems.

The first algorithm we propose is a new affine scaling trust region interior algorithm with a nonmonotonic line search technique for solving nonlinear equality systems subject to bounds on the variables. The trust region subproblem in the proposed algorithm is defined by minimizing a squared Euclidean norm reformulation of the linear model subject to an interior ellipsoidal constraint. In the general trust region subproblem, each iterate switches to a strictly feasible interior point to generate a new trial step. The global convergence and fast local convergence rate of the proposed algorithm are established under some reasonable conditions. A nonmonotonic criterion should speed up the convergence process in some ill-conditioned cases.
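To make the affine scaling trust region idea concrete, the following is a minimal sketch of one step in the Coleman-Li style, not the thesis's exact algorithm: the Gauss-Newton model of the residual is minimized, the step is truncated to an ellipsoidal trust region defined by a bound-distance scaling, and a step-back factor keeps the iterate strictly interior. The function names and the simple scaling rule are illustrative assumptions.

```python
import numpy as np

def coleman_li_scaling(x, g, lower, upper):
    """Coleman-Li style scaling vector: distance to the bound that the
    negative gradient direction points toward (1 if that bound is infinite)."""
    v = np.ones_like(x)
    for i in range(len(x)):
        if g[i] < 0 and np.isfinite(upper[i]):
            v[i] = upper[i] - x[i]
        elif g[i] >= 0 and np.isfinite(lower[i]):
            v[i] = x[i] - lower[i]
    return v

def affine_scaling_tr_step(F, J, x, lower, upper, delta=1.0, theta=0.995):
    """One trust-region step for min 0.5*||F(x)||^2 subject to l <= x <= u.

    Minimizes the Gauss-Newton model ||F + J d||^2, truncates the step to the
    ellipsoid ||D^{-1} d|| <= delta with D = diag(sqrt(v)), then pulls the
    step back by a factor theta so the new point stays strictly interior.
    """
    Fx, Jx = F(x), J(x)
    g = Jx.T @ Fx                          # gradient of the merit function
    v = coleman_li_scaling(x, g, lower, upper)
    D = np.sqrt(v)
    # Gauss-Newton step in scaled variables: d = D * dhat
    dhat, *_ = np.linalg.lstsq(Jx * D, -Fx, rcond=None)
    nrm = np.linalg.norm(dhat)
    if nrm > delta:                        # truncate to the trust region
        dhat *= delta / nrm
    d = D * dhat
    # step-back so that x + alpha*d remains strictly feasible
    alpha = 1.0
    for i in range(len(x)):
        if d[i] > 0 and np.isfinite(upper[i]):
            alpha = min(alpha, theta * (upper[i] - x[i]) / d[i])
        elif d[i] < 0 and np.isfinite(lower[i]):
            alpha = min(alpha, theta * (lower[i] - x[i]) / d[i])
    return x + alpha * d
```

The ellipsoidal constraint shrinks the step in components close to an active bound, which is how the affine scaling overcomes the difficulty imposed by the constraints without an explicit active-set strategy.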
The results of numerical experiments are reported to show the effectiveness of the proposed algorithm.

The conjugate gradient method is among the most popular methods in optimization. Because it is easy to program and compute, it has become an important method for solving large-scale problems. Conjugacy is the essential characteristic property of the conjugate gradient method. Bulteau and Vial formed a conjugate gradient path for unconstrained optimization. This curvilinear path is defined as a linear combination of a sequence of conjugate directions, obtained by applying the standard conjugate direction method to an approximate quadratic function of the unconstrained problem. In this thesis, by employing the affine scaling conjugate gradient path search strategy, we obtain an iterative direction by solving the linearized model reformulated from the nonlinear equality system. By using the line search technique, we find an acceptable trial step length along this direction that is strictly feasible and makes the objective function nonmonotonically decreasing. The proposed method not only has global convergence but also a local superlinear convergence rate. Numerical results indicate that the algorithm is useful and effective in practice.

In this thesis, we also combine the Lanczos method with the conjugate gradient method and construct a new path that has both the properties of Lanczos vectors and the properties of the conjugate gradient path. The main idea is that, by employing the Lanczos method alongside the conjugate gradient method on the approximate quadratic model, we obtain sequences of Lanczos vectors and conjugate gradient vectors and then construct the Lanczos path from them. The properties of this path are similar to those of the conjugate gradient path. We also analyze the global convergence and the local superlinear convergence of the algorithm.

There are many methods involving the projected gradient for solving optimization problems.
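The conjugate gradient path described above can be sketched as follows: running linear CG on the quadratic model q(x) = 0.5 x^T A x - b^T x produces a sequence of A-conjugate directions, and the piecewise-linear interpolation of the iterates is the curvilinear path in the sense of Bulteau and Vial. This is a generic textbook CG, not the thesis's affine-scaled variant; the function name is an assumption.

```python
import numpy as np

def conjugate_gradient_path(A, b, x0, steps=None):
    """Iterates of linear CG on q(x) = 0.5*x^T A x - b^T x, A symmetric
    positive definite. Each segment of the returned path follows a new
    A-conjugate direction; interpolating the iterates gives the
    conjugate gradient path."""
    steps = len(b) if steps is None else steps
    x = x0.copy()
    r = b - A @ x              # residual = negative gradient of q
    p = r.copy()               # first search direction
    path = [x.copy()]
    for _ in range(steps):
        rr = r @ r
        if rr < 1e-14:         # converged: gradient essentially zero
            break
        Ap = A @ p
        alpha = rr / (p @ Ap)  # exact line minimizer along p
        x = x + alpha * p
        r = r - alpha * Ap
        beta = (r @ r) / rr    # update keeps successive p's A-conjugate
        p = r + beta * p
        path.append(x.copy())
    return path
```

The key property the path search strategy exploits is that q decreases monotonically along the path, so a step length can be chosen anywhere on it by a line search rather than by solving a full trust region subproblem.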
Although they can converge fast, they need to compute the projection several times per iteration, which is not what we want. An algorithm that needs only one projection calculation per iteration is given in [9]. A nonsingularity condition is often used when discussing the local convergence rate. In [24, 40], an error bound assumption, which is much weaker than the standard nonsingularity condition, is used instead. Motivated by these aspects, we propose a projected trust region algorithm for solving bound-constrained smooth systems of equations. Based on a regularized problem, we obtain a Newton-like step, which generates the projected Newton step. The global convergence and fast local convergence rate of the proposed algorithm are established under some reasonable conditions without a nonsingularity assumption. A nonmonotonic criterion should speed up convergence when the contours of the merit function have large curvature. The results of numerical experiments are reported to show the effectiveness of the proposed algorithms.

For solving nonlinear equality systems with convex constraints, we propose a nonmonotone trust region algorithm. As in [9], we obtain the iterative direction, and the step length can be obtained by solving a simple trust region subproblem. Motivated by [4, 42], a projected gradient trust region algorithm for solving nonlinear equality systems with convex constraints is considered. The global convergence results are developed in a very general setting of computing trial directions by a projected gradient trust region method combined with the line search technique. Close to the solution set, the projected gradient trust region method is shown to converge locally Q-superlinearly under an error bound assumption that is much weaker than the standard nonsingularity condition.

The last chapter summarizes the main results of this thesis and proposes some further research directions for our work.
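A minimal sketch of the one-projection-per-iteration idea for a box constraint: a regularized Gauss-Newton system gives a Newton-like step that is well defined even when the Jacobian is singular, and a single Euclidean projection maps the trial point back onto the feasible set. The residual-scaled regularization is an illustrative stand-in for the error-bound-based regularization discussed in the text; function names and the choice of `mu` are assumptions.

```python
import numpy as np

def project_box(x, lower, upper):
    """Euclidean projection onto the box [lower, upper] (the convex set)."""
    return np.clip(x, lower, upper)

def projected_newton_step(F, J, x, lower, upper, mu=1e-8):
    """One projected Newton-like step for F(x) = 0 with x in a box.

    Solves the regularized Gauss-Newton system
        (J^T J + reg*I) d = -J^T F,
    which needs no nonsingularity assumption on J, then applies exactly
    one projection per iteration to recover feasibility.
    """
    Fx, Jx = F(x), J(x)
    g = Jx.T @ Fx
    # regularization shrinks with the residual, so near a solution the
    # step approaches the pure Gauss-Newton step (fast local convergence)
    reg = mu + np.linalg.norm(Fx)
    d = np.linalg.solve(Jx.T @ Jx + reg * np.eye(len(x)), -g)
    return project_box(x + d, lower, upper)   # one projection per iteration
```

Because the regularization vanishes with the residual, the step length is not damped near the solution set, which is what permits the fast local rate without a nonsingularity assumption.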
Keywords/Search Tags: Nonlinear constrained programming, Affine scaling, Interior point method, Indefinite dogleg path, Global convergence, Local convergence rate