
Theory and Application of Subspace-Based Inexact (Gauss-)Newton Methods for (Un)constrained Optimization Problems

Posted on: 2017-04-30    Degree: Doctor    Type: Dissertation
Country: China    Candidate: J Y Wang    Full Text: PDF
GTID: 1100330485966809    Subject: Operational Research and Cybernetics
Abstract/Summary:
Optimization theory and methods form a subject that is widely and increasingly used in science, engineering, economics, management, industry, and other areas. It deals with selecting the best of many possible decisions in real-life environments, constructing computational methods to find optimal solutions, exploring their theoretical properties, and studying the computational performance of the numerical algorithms that implement these methods. With the rapid development of high-performance computers and the progress of computational methods, more and more large-scale optimization problems have been studied and solved.

Derivative-free optimization is an area with a long history and rapid current growth. Powell proposed the UOBYQA (unconstrained optimization by quadratic approximation) method [65], which constructs quadratic interpolation models of the objective function; the models are based on Lagrange functions, and their parameters are updated whenever an interpolation point is changed. Wild and Shoemaker [79] extended the work of Conn, Scheinberg and Vicente [29] to fully linear models that include a nonlinear term and analyzed globally convergent derivative-free trust-region algorithms relying on radial basis function interpolation models.

To reduce the computational cost of Newton's method, Dembo, Eisenstat and Steihaug proposed and studied the inexact Newton method [76], a generalization of Newton's method. At each step of the inexact Newton method, the Newton equation only needs to be solved approximately by an efficient iterative solver for systems of linear equations, such as the classical splitting methods or the modern Krylov subspace methods. With the development of Krylov subspace projection methods, hybrid globally convergent modifications of inexact Newton methods have been considered to enhance convergence from arbitrary starting points.

In this thesis, we propose inexact Newton (Gauss-Newton) methods via Krylov subspaces, combined with derivative-free techniques, for solving (constrained) nonlinear equations and optimization problems. We are concerned with inexact Newton methods via the GMRES, Lanczos and CG algorithms, where these Krylov subspace methods serve as the inner iterative solvers that solve the linear systems approximately. These methods extend Newton-Krylov methods without backtracking techniques to (constrained) nonlinear equations and optimization problems. The globalization strategy of the proposed methods depends on the properties of the Krylov subspace iteration and on an acceptance rule for the search direction. The Krylov subspace iteration guarantees that the residual norm of the corresponding (Gauss-)Newton equation of the objective function is nonincreasing at each iteration. Furthermore, we derive the local convergence condition of the inexact Newton method for each search direction generated by the Krylov subspace iteration. Meanwhile, by combining the acceptance rule for the search direction with a predicted reduction satisfying the sufficient decrease condition, we obtain a sufficient decrease of the actual reduction of the objective function at each iteration. Thus, we obtain a global convergence condition equivalent to the condition in [62]. We also point out that our methods admit an efficient matrix-free implementation.

The thesis consists of eight parts. In Chapter 1, we give a summary of the basic concepts of optimization.
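The inexact Newton condition only asks the inner Krylov solver to reduce the residual of the Newton equation to a prescribed fraction of the current nonlinear residual, and a matrix-free implementation obtains the Jacobian-vector products needed by the Krylov solver from finite differences of F alone. The following is a minimal sketch of that generic idea, not of the algorithms developed in this thesis; the function names, the fixed forcing term eta, the finite-difference step eps, and the use of SciPy's GMRES (assuming SciPy >= 1.12 for the rtol keyword) are assumptions made for illustration only.

```python
# Minimal illustrative sketch of an inexact Newton-Krylov iteration for F(x) = 0.
# Not the thesis's method: forcing term, step sizes, and solver choice are assumptions.
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def jac_vec(F, x, v, eps=1e-7):
    """Derivative-free, matrix-free approximation of F'(x) v by forward differences."""
    return (F(x + eps * v) - F(x)) / eps

def inexact_newton_gmres(F, x0, eta=0.1, tol=1e-8, max_iter=50):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) <= tol:
            break
        n = x.size
        # The Jacobian is never formed; GMRES only needs products F'(x) v.
        J = LinearOperator((n, n), matvec=lambda v: jac_vec(F, x, v), dtype=float)
        # Inner Krylov solve of the Newton equation, stopped once the inexact
        # Newton condition ||F(x) + F'(x) d|| <= eta * ||F(x)|| is met.
        d, _ = gmres(J, -Fx, rtol=eta, atol=0.0)
        x = x + d
    return x

# Toy example: a small nonlinear system with solution (1, 2).
F = lambda x: np.array([x[0]**2 + x[1] - 3.0, x[0] + x[1]**2 - 5.0])
print(inexact_newton_gmres(F, np.array([1.0, 1.0])))
```

In this sketch the single forcing term eta acts as the relative stopping tolerance of the inner GMRES solve, so the Newton equation is solved only approximately and the Jacobian never has to be assembled.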
From Chapter 2 to Chapter 6, we propose inexact Newton (Gauss-Newton) methods via Krylov subspaces, with derivative-free techniques, for solving (constrained) nonlinear equations and optimization problems. To obtain global convergence of these methods, which do not rely on traditional backtracking line searches or trust-region techniques, we construct an appropriate acceptance rule for the search direction. In Chapter 7, we propose a derivative-free restrictively preconditioned conjugate gradient path method without line search for solving linear equality constrained optimization; this method originates from the classical conjugate gradient method and its restrictively preconditioned variant. The global convergence and local superlinear convergence rate of the proposed algorithms are established under reasonable conditions, and numerical results are reported to show their effectiveness. Finally, conclusions and directions for further research are given in Chapter 8.
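As background for the classical conjugate gradient idea that the equality-constrained method builds on, the sketch below minimizes a quadratic model subject to linear equality constraints by running CG on the null space of the constraints. It is only a simplified illustration of that classical reduced-space approach, not the thesis's derivative-free restrictively preconditioned conjugate gradient path method; the helper reduced_cg_step and the toy data are hypothetical.

```python
# Illustrative null-space CG for: minimize 0.5 x^T H x + g^T x  subject to  A x = b.
# A simplified classical sketch, not the restrictively preconditioned CG path method.
import numpy as np
from scipy.linalg import null_space
from scipy.sparse.linalg import cg

def reduced_cg_step(H, g, A, b):
    """Solve the equality-constrained quadratic model by CG on the reduced space."""
    x_p = np.linalg.lstsq(A, b, rcond=None)[0]   # particular solution of A x = b
    Z = null_space(A)                            # orthonormal basis of the null space of A
    # Any feasible point is x = x_p + Z y, so minimize the reduced quadratic in y.
    Hr = Z.T @ H @ Z
    gr = Z.T @ (g + H @ x_p)
    y, _ = cg(Hr, -gr)
    return x_p + Z @ y

# Tiny example: H = I, g = -[1, 2, 3], single constraint x1 + x2 + x3 = 1.
H = np.eye(3)
g = -np.array([1.0, 2.0, 3.0])
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])
print(reduced_cg_step(H, g, A, b))   # expected minimizer [-2/3, 1/3, 4/3]
```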
Keywords/Search Tags: Optimization, Inexact Newton methods, Gauss-Newton methods, Derivative-free technique, Interpolation polynomial, Finite differences, Krylov subspace methods, Global convergence, Local convergence rate