
Research On Regularized Newton Type Method For Nonlinear Programming

Posted on: 2019-06-21
Degree: Doctor
Type: Dissertation
Country: China
Candidate: H Zhang
Full Text: PDF
GTID: 1360330590466630
Subject: Operational Research and Cybernetics
Abstract/Summary:
In this thesis, we study regularized Newton type methods and their application to equality constrained optimization. Two algorithms for unconstrained optimization and three algorithms for equality constrained optimization are proposed. We analyze the convergence properties of all algorithms, and numerical experiments show that all of them are effective. In Chapters I and II, we introduce the research background, the necessary preliminaries, the current state of research, and our contributions. In Chapters III and IV, we propose two regularized Newton type methods for unconstrained optimization. In Chapters V and VI, we propose three regularized Newton type methods for equality constrained optimization.

For unconstrained problems, we propose two adaptive regularized Newton type algorithms. In both algorithms, the trial step is computed by solving an unconstrained subproblem at each iteration, and its length is controlled by adaptively adjusting a regularization parameter. In the first algorithm, the regularization parameter is adjusted according to the ratio between the actual reduction and the predicted reduction. Based on the properties of an approximate solution of the subproblem, we establish global convergence under the assumptions that the objective is continuously differentiable and bounded below; based on the exact solution of the subproblem, we show that the algorithm is locally superlinearly convergent. The second method targets large-scale unconstrained optimization. At each iteration, the search direction is obtained by solving a regularized quasi-Newton equation, and a nonmonotone line search then determines the step size. Here the regularization parameter is determined by the step size obtained in the line search rather than by the ratio between the actual and predicted reductions, and a limited-memory quasi-Newton matrix is used so that large-scale problems can be handled. If the objective is bounded below and its gradient is Lipschitz continuous, the algorithm is globally convergent; if, in addition, the objective is twice continuously differentiable, it is locally superlinearly convergent. Numerical results show that these two algorithms are effective and outperform existing regularized Newton methods.
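The abstract does not spell out the update rule for the regularization parameter. As a rough illustration only, a ratio-based adaptive regularized Newton iteration could be sketched as follows; the subproblem form (adding a (sigma/2)*||d||^2 term to the quadratic model), the thresholds 0.25/0.75, and the update factors are assumptions of this sketch, not the thesis's actual choices.

```python
import numpy as np

def regularized_newton_step(g, H, sigma):
    # Trial step from the regularized quadratic subproblem
    #   minimize  g^T d + 0.5 d^T H d + 0.5*sigma*||d||^2,
    # whose stationarity condition is (H + sigma*I) d = -g.
    return np.linalg.solve(H + sigma * np.eye(len(g)), -g)

def adaptive_regularized_newton(f, grad, hess, x, sigma=1.0, tol=1e-8, max_iter=200):
    # Illustrative ratio-based update of the regularization parameter;
    # the constants 0.25, 0.75, 4.0 and 0.5 are assumptions.
    for _ in range(max_iter):
        g, H = grad(x), hess(x)
        if np.linalg.norm(g) <= tol:
            break
        d = regularized_newton_step(g, H, sigma)
        pred = -(g @ d + 0.5 * d @ (H @ d))   # predicted reduction of the model
        ared = f(x) - f(x + d)                # actual reduction of the objective
        rho = ared / pred if pred > 0 else -1.0
        if rho >= 0.25:                       # enough actual decrease: accept the trial step
            x = x + d
        if rho < 0.25:                        # poor agreement: increase regularization (shorter step)
            sigma *= 4.0
        elif rho > 0.75:                      # good agreement: decrease regularization (longer step)
            sigma = max(0.5 * sigma, 1e-12)
    return x
```

Increasing sigma plays the same role as shrinking a trust-region radius: it shortens the trial step whenever the quadratic model predicts the actual reduction poorly.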
We also apply the regularized Newton type method to equality constrained optimization and propose two regularized augmented Lagrangian algorithms. In the first algorithm, we embed the regularized Newton method (RNM) in the classical augmented Lagrangian framework, using it to minimize the augmented Lagrangian subproblem. In the second algorithm, we construct an unconstrained subproblem by adding an adaptive quadratic term to the quadratic model of the augmented Lagrangian function; at each iteration, we solve this subproblem to obtain the trial step, and the augmented Lagrangian function also serves as a merit function to decide whether the trial step is accepted. Under some reasonable assumptions, both regularized augmented Lagrangian algorithms converge to a KKT point or to an infeasible stationary point. We compare the two proposed algorithms with ALGENCAN, and numerical results show that both methods outperform it.

In Chapter VI, we propose a sequential quadratic programming method for equality constrained optimization. At each iteration, the trial step is computed by solving two regularized quadratic subproblems: the first aims to decrease the constraint violation, and the second to decrease the objective. Global convergence is analyzed under some reasonable assumptions, and numerical experiments show that the method is effective for equality constrained optimization.
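The abstract does not state the two regularized quadratic subproblems explicitly. One reading that is consistent with composite-step SQP frameworks (an illustrative sketch only; the normal/tangential decomposition and all symbols below are assumptions, not taken from the thesis) is as follows: for $\min_x f(x)$ subject to $c(x)=0$, with Jacobian $A_k = \nabla c(x_k)^{\top}$, first compute a normal step $v_k$ that decreases the constraint violation,
$$\min_{v}\ \tfrac{1}{2}\,\|c(x_k) + A_k v\|^2 + \tfrac{\sigma_k}{2}\,\|v\|^2,$$
and then a tangential step $d_k$ that decreases a regularized quadratic model of the objective while retaining the predicted constraint reduction,
$$\min_{d}\ \nabla f(x_k)^{\top} d + \tfrac{1}{2}\, d^{\top} B_k d + \tfrac{\mu_k}{2}\,\|d\|^2 \quad \text{s.t.}\quad A_k d = A_k v_k,$$
where $B_k$ approximates the Hessian of the Lagrangian and $\sigma_k,\ \mu_k$ are regularization parameters; the trial step is then $x_k + d_k$.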
Keywords/Search Tags: regularized method, Newton type method, unconstrained optimization, equality constrained optimization