
Adaptive Regularisation With Cubic For Solving Nonlinear Equality Constrained Optimization Problem

Posted on: 2022-08-28
Degree: Master
Type: Thesis
Country: China
Candidate: W Y Kong
Full Text: PDF
GTID: 2480306491950429
Subject: Control Science and Engineering
Abstract/Summary:
Nonlinear constrained optimization algorithms are widely applied in fields such as science and technology, military affairs, engineering, finance, industry, and economics. Constructing and analyzing efficient computational methods for nonlinear constrained optimization problems is therefore of great theoretical significance and practical value. Line search methods and trust region methods are two classical approaches for ensuring the convergence of constrained optimization algorithms. The recently developed adaptive regularization with cubics (ARC) method differs from both: it uses an adaptive estimate of the local Lipschitz constant and, at each iteration, approximately minimizes a cubic-regularized local model. The ARC framework guarantees global convergence, performs well numerically, and achieves optimal worst-case complexity among second-order methods; it has therefore quickly attracted many researchers in the field of optimization.

This thesis studies nonlinear equality-constrained optimization problems. Combining the ARC framework with the penalty function method, we construct a class of adaptive regularization with cubics methods for solving such problems. Under appropriate assumptions, the global convergence and fast local convergence of the algorithms are analyzed, and preliminary numerical results are reported.

In Chapter 1, the application background and mathematical models of nonlinear optimization are summarized, related work is reviewed, and the main contributions of this thesis are introduced.

In Chapter 2, since the standard ARC method is not directly applicable to nonlinear equality-constrained optimization problems, new subproblems are constructed. In each iteration, the trial step is decomposed into the sum of a normal step and a tangential step: the normal step is chosen to reduce the constraint violation and is required to satisfy the linearized constraints, while the tangential step is chosen to reduce the model of the objective. Once the trial step is obtained, the ratio of the penalty function reduction to the model reduction is computed to decide whether the trial point is accepted. Finally, the global convergence of the algorithm is proved under appropriate assumptions, and numerical results are given.

In Chapter 3, it is observed that the ARC method proposed in Chapter 2 may suffer from the Maratos effect, which can prevent fast local convergence. A second-order correction step is therefore introduced to account for the constraint curvature and overcome the Maratos effect. The superlinear and quadratic convergence of the algorithm are analyzed, and it is proved that, under appropriate assumptions, the algorithm converges to a second-order critical point.
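The iteration pattern described above, minimizing a cubic-regularized model and then comparing the predicted reduction with the actual reduction to accept or reject the trial step, can be sketched in its standard unconstrained form. The following is a minimal illustrative implementation of the classical ARC framework on a toy one-dimensional objective; the test function, parameter values, and update constants are assumptions for illustration, and this is not the constrained algorithm developed in the thesis.

```python
import math

# Minimal 1-D sketch of the adaptive regularization with cubics (ARC)
# framework. This is the standard unconstrained scheme, NOT the
# equality-constrained method of the thesis; f, eta, and the sigma
# update rules are illustrative assumptions.

def f(x):   return x**4          # toy objective (minimizer at 0)
def df(x):  return 4 * x**3      # first derivative
def d2f(x): return 12 * x**2     # second derivative (>= 0 here)

def cubic_model_step(g, h, sigma):
    """Global minimizer s of m(s) = g*s + h*s**2/2 + sigma*|s|**3/3,
    assuming h >= 0. Setting m'(s) = 0 on the side opposite to the
    sign of g reduces to a quadratic equation in |s|."""
    if g == 0.0:
        return 0.0
    return -math.copysign(
        (math.sqrt(h * h + 4.0 * sigma * abs(g)) - h) / (2.0 * sigma), g)

def arc_minimize(x, sigma=1.0, eta=0.1, tol=1e-8, max_iter=200):
    for _ in range(max_iter):
        g = df(x)
        if abs(g) < tol:          # first-order stationarity reached
            break
        h = d2f(x)
        s = cubic_model_step(g, h, sigma)
        # Predicted reduction: m(0) - m(s) > 0 at the model minimizer.
        model_decrease = -(g * s + 0.5 * h * s * s
                           + sigma * abs(s)**3 / 3.0)
        rho = (f(x) - f(x + s)) / model_decrease  # actual vs predicted
        if rho >= eta:            # successful step: accept, relax sigma
            x = x + s
            sigma = max(0.5 * sigma, 1e-3)
        else:                     # unsuccessful: reject, tighten sigma
            sigma *= 2.0
    return x
```

Note the role of the regularization weight sigma: it plays the part that the trust-region radius plays in trust region methods, but it is updated multiplicatively from the same acceptance ratio, so no explicit trust-region constraint is needed.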
Keywords/Search Tags:nonlinear constrained optimization, adaptive regularisation with cubic, merit function, second-order correction step, convergence analysis