
A Class of Penalty Function Methods and Applications of Error Bound Theory for Constrained Optimization Problems

Posted on: 2009-07-02
Degree: Doctor
Type: Dissertation
Country: China
Candidate: W L Zhao
Full Text: PDF
GTID: 1100360242984611
Subject: Operational Research and Cybernetics

Abstract/Summary:
This dissertation studies the convergence and finite termination of a smooth penalty function method, defines merit functions via trust-region and SQP subproblems, and presents error bounds for the projected gradient and for the distance from a feasible solution to the optimal solution set in constrained optimization problems. Using these error bounds, it then analyzes the convergence and finite termination of sequences of feasible points. The main results obtained in this dissertation may be summarized as follows:

1. In Chapter 2, based on a class of asymptotic l₁ exact penalty functions, a smooth penalty function method for solving constrained optimization problems is proposed (an illustrative smoothing is sketched below, after item 2). Its main feature is that the penalty subproblem need not be solved to global optimality at each iteration. Global convergence is obtained in the absence of any constraint qualification: every accumulation point of the sequence generated by the algorithm is a solution to the primal problem (NP). We further show that the limit of the objective function values generated by the method exists and equals the value of the perturbation function at zero. Several interesting consequences follow from this fact. In particular, lower semicontinuity of the perturbation function at zero is necessary and sufficient for the sequence of objective function values generated by the method to converge to the optimal value of the primal problem. Since the perturbation function depends only on the primal problem, these results make it possible to verify convergence of the algorithm in advance. Furthermore, it is shown that all iterates remain feasible after a finite number of iterations, and a necessary condition for this is developed under the Mangasarian-Fromovitz constraint qualification. Finally, under the assumption that the solution set S⁰ is non-degenerate or weakly sharp, it is proved that after a finite number of iterations the projected gradient point ḡₖ = P(xₖ − ∇f(xₖ) | S⁰) at the iterates {xₖ} is exactly an optimal solution. Numerical examples are given to show the effectiveness of these methods.

2. In Chapter 3, two merit functions Φ(x,Δ) and Φ̂(x,Δ) are defined in the trust-region method for the constrained optimization problem. Both differ from the regularized gap function (another kind of merit function) discussed in the previous literature: here Φ(x,Δ) and Φ̂(x,Δ) are generated over trust regions of the polyhedral sets obtained by linearizing the constraint functions and the active constraint functions, respectively, at the given point x ∈ S, rather than over S itself (an illustrative construction is sketched below). Some properties of these merit functions are given.

To ensure boundedness of the level sets of merit functions, previous studies usually require the corresponding mapping to be strongly monotone. Recently, strong coerciveness, a condition weaker than strong monotonicity, was used to guarantee bounded level sets for the natural residual function of the variational inequality problem. For the merit functions Φ(·,Δ) and Φ̂(·,Δ) in this dissertation, however, weak coerciveness already suffices.
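For concreteness, one common way to realize such an asymptotic l₁ smoothing (a minimal sketch only; the penalty family actually used in the dissertation may differ in form) is, assuming inequality constraints gᵢ(x) ≤ 0, to replace each violation term max{0, gᵢ(x)} of the l₁ exact penalty by a smooth function pε:

\[
F_\varepsilon(x,\rho) \;=\; f(x) + \rho \sum_{i=1}^{m} p_\varepsilon\bigl(g_i(x)\bigr),
\qquad
p_\varepsilon(t) \;=\;
\begin{cases}
0, & t \le 0,\\
t^{2}/(2\varepsilon), & 0 < t \le \varepsilon,\\
t - \varepsilon/2, & t > \varepsilon.
\end{cases}
\]

Here pε is continuously differentiable, 0 ≤ max{0,t} − pε(t) ≤ ε/2, and pε(t) → max{0,t} uniformly as ε → 0⁺, so Fε approaches the exact l₁ penalty in the limit, which is one natural reading of the asymptotic exactness referred to above.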
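As an illustration of a merit function generated over a linearized trust region (a schematic form assuming inequality constraints gᵢ(x) ≤ 0 and a fixed norm; the dissertation's precise definitions of Φ and Φ̂ may differ in details), one may take

\[
\Phi(x,\Delta) \;=\; \max_{d}\,\bigl\{\, -\nabla f(x)^{T} d \;:\; g_i(x) + \nabla g_i(x)^{T} d \le 0,\ i = 1,\dots,m,\ \|d\| \le \Delta \,\bigr\},
\]

the largest first-order decrease of f available over the intersection of the trust region with the polyhedral linearization of S at x; Φ̂(x,Δ) would be built analogously, linearizing only the constraints active at x. Under such a definition, d = 0 is feasible for every x ∈ S, so Φ(x,Δ) ≥ 0 on S, and Φ(x,Δ) = 0 exactly at points admitting no linearized descent direction, which is what qualifies Φ as a merit function.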
3. Chapter 4 utilizes the merit functions to provide error bounds. In particular, a global error bound for the projected gradient with respect to S and a local error bound for the distance from a feasible solution to the optimal solution set are obtained by means of Φ̂(x,Δ). Under the condition that ∇f is strongly monotone or monotone, a global or a local error bound, respectively, for the distance from a feasible solution to the optimal solution set is presented by means of Φ(x,Δ) (the generic shape of such bounds is sketched after item 5).

4. In Chapter 5, the convergence and finite termination of the sequence {xₖ, Δₖ} are analyzed, where xₖ ∈ S and Δₖ is the trust-region radius of the subproblem QP(xₖ,Δₖ) or Q̂P(xₖ,Δₖ) at xₖ. More specifically, two characteristic functions are presented:

Ψ(x,Δ) = max{ Φ̂(x,Δ)^(1/2), Φ(x,Δ)/Δ }  and  Ψ̂(x,Δ) = max{ Φ(x,Δ)^(1/2), Φ̂(x,Δ)/Δ }.

When Ψ̂(xₖ,Δₖ) or Ψ(xₖ,Δₖ) converges to zero, an accumulation point of {xₖ,Δₖ} is proved to be a stationary point, a K-T point, or an optimal solution, depending on the case; conversely, when {xₖ,Δₖ} converges to a K-T point, Ψ(xₖ,Δₖ) converges to zero. Finite termination has been studied extensively for convex optimization problems, but previous results are given under the condition that the optimal solution set is weakly sharp minimal or non-degenerate. In contrast, this chapter first generalizes these two conditions so that they can be used to study the finite termination of a sequence of feasible solutions for general optimization problems (the classical weak sharp minimum condition is recalled below). Subsequently, it is proved that, for a sequence of feasible solutions, the convergence of Ψ̂(xₖ,Δₖ) to zero is necessary and sufficient for the sequence to terminate finitely at a K-T point when the solution set is generalized non-degenerate, and sufficient for the sequence to terminate finitely at a stationary point when the solution set is weakly sharp minimal. In particular, a corollary of these results improves and simplifies the corresponding results in the previous literature.

5. In Chapter 6, a merit function Φ̂(x) is defined via the sequential quadratic programming (SQP) subproblem; it has properties and applications similar to those of Φ̂(·,Δ). The results of this chapter can also be applied to variational inequality problems.
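To fix ideas, the error bounds of Chapter 4 have the following generic shape (a schematic statement for orientation only; the constants, neighborhoods, and exponents are as actually established in the dissertation, and the exponent 1/2 here is merely suggested by the square roots in the characteristic functions):

\[
\operatorname{dist}\bigl(x, S^{0}\bigr) \;\le\; \tau\, \hat{\Phi}(x,\Delta)^{1/2}
\qquad \text{for all } x \in S \text{ (globally, or only near } S^{0} \text{ in the local case)},
\]

where S⁰ denotes the optimal solution set and τ > 0 is a constant; as stated above, strong monotonicity of ∇f yields the global version, while plain monotonicity yields only a local one.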
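For reference, the classical weak sharp minimum condition that Chapter 5 generalizes can be written as follows (the standard convex-case form; the dissertation's generalized non-degeneracy and generalized weak sharpness extend such conditions to general problems):

\[
f(x) \;\ge\; f^{*} + \alpha \operatorname{dist}\bigl(x, S^{0}\bigr) \qquad \text{for all } x \in S,
\]

for some modulus α > 0, with f* the optimal value of (NP). Under a condition of this kind the objective grows at least linearly with the distance to S⁰, which is the mechanism underlying finite-termination results of this kind.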
Keywords/Search Tags: Smooth Penalty Function Method, Merit Function, Error Bound, Global Convergence, Finite Termination