
Some New Algorithms For Constrained Optimization Problems

Posted on: 2020-12-31 | Degree: Master | Type: Thesis
Country: China | Candidate: H X Zhang | Full Text: PDF
GTID: 2370330578457622 | Subject: Mathematics
Abstract/Summary:
Constrained optimization problems are widely used to build mathematical models in mathematical programming, mechanics, engineering, economics, transportation, control, and finance. The main approach to solving a constrained optimization problem is to transform it into an unconstrained one and then apply an unconstrained optimization algorithm. Among such algorithms, the conjugate gradient method is a practical choice with good convergence properties, a simple structure, and low computational cost. In addition, some special constrained problems, such as split feasibility problems, fixed point problems, and Nash equilibrium problems, play an important role in applied modeling; these models can often be transformed into variational inequality problems (VIPs) by optimization tools, so it is necessary to design algorithms for solving variational inequality problems.

Chapter 1 briefly introduces the research background, the state of the art, and the preliminary knowledge behind the new algorithms for constrained optimization proposed in this thesis.

Chapter 2 proposes a new penalty-conjugate gradient method for constrained optimization, which combines a nonlinear conjugate gradient method with a penalty function. The constrained problem is first transformed into a penalty function, which is then minimized by a new conjugate gradient method; every search direction generated by the algorithm is a descent direction. Global convergence is proved under the Wolfe line search.

Chapter 3 presents a Newton projection algorithm for variational inequality problems. At each iteration the algorithm first considers the Newton projection direction, switches to a line search direction when the Newton step fails a certain restrictive condition, and finally performs a projection step that reduces the distance from the iterate to the solution set of the problem. Global convergence is proved and numerical experiments are reported.

Chapter 4 introduces the notion of augmented weak sharpness of the solution set of a VIP. Augmented weak sharpness extends both the weak sharpness and the strong non-degeneracy of a solution set, and it yields weaker sufficient conditions than either notion for the finite convergence of the relevant algorithms.
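
To make the Chapter 2 scheme concrete, here is a minimal sketch assuming a quadratic penalty for inequality constraints and a PRP+ conjugate parameter with a descent safeguard. The abstract does not specify the thesis's penalty form, conjugate parameter, or exact Wolfe conditions, so those choices (and the names penalty_cg, f, cons) are illustrative, not the thesis's method.

```python
import numpy as np

def penalty_cg(f, grad_f, cons, grad_cons, x0, rho=10.0,
               tol=1e-6, max_iter=500):
    """Minimize f(x) s.t. c_i(x) <= 0 via a quadratic penalty
    minimized by nonlinear conjugate gradients (hypothetical
    sketch; the thesis's specific formulas are not given)."""
    def P(x):   # quadratic penalty of the constrained problem
        return f(x) + rho * sum(max(0.0, c(x))**2 for c in cons)

    def gP(x):  # gradient of the penalty function
        g = np.asarray(grad_f(x), dtype=float).copy()
        for c, gc in zip(cons, grad_cons):
            v = c(x)
            if v > 0.0:
                g += 2.0 * rho * v * gc(x)
        return g

    x = np.asarray(x0, dtype=float)
    g = gP(x)
    d = -g                       # first direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Armijo backtracking stands in for the full Wolfe search
        # that the thesis assumes for its convergence proof.
        t, c1 = 1.0, 1e-4
        while P(x + t * d) > P(x) + c1 * t * (g @ d):
            t *= 0.5
        x_new = x + t * d
        g_new = gP(x_new)
        # PRP+ coefficient, truncated at zero
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))
        d = -g_new + beta * d
        if g_new @ d >= 0.0:     # safeguard: restart on non-descent
            d = -g_new
        x, g = x_new, g_new
    return x
```

In practice rho is increased over an outer loop so the iterates approach feasibility; the descent safeguard mirrors the abstract's claim that every selected direction is a descent direction.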
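For Chapter 3, the abstract does not specify the Newton direction or the restrictive condition, so the sketch below illustrates only the final ingredient it describes, a projection step that reduces the distance to the solution set, using a standard Solodov-Svaiter-type hyperplane projection; it is a stand-in under those assumptions, not the thesis's algorithm.

```python
import numpy as np

def vip_projection(F, proj_C, x0, lam=0.5, sigma=0.5,
                   tol=1e-8, max_iter=1000):
    """Solve VI(F, C): find x* in C with <F(x*), x - x*> >= 0
    for all x in C, by a hyperplane-projection method."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        y = proj_C(x - lam * F(x))        # trial point
        r = x - y                         # natural residual
        if np.linalg.norm(r) < tol:
            break                         # approximate solution
        # Armijo-type line search along the segment [x, y]
        t = 1.0
        while F(x - t * r) @ r < sigma * (r @ r):
            t *= 0.5
        z = x - t * r
        Fz = F(z)
        denom = Fz @ Fz
        if denom == 0.0:                  # z itself solves the VIP
            return z
        # Project x onto the hyperplane {u : <F(z), u - z> = 0},
        # then back onto C; this step is known to shrink the
        # distance from the iterate to the solution set.
        x = proj_C(x - (Fz @ (x - z)) / denom * Fz)
    return x
```

Here proj_C is the metric projection onto the feasible set C (for a box or a ball it has a closed form), and the hyperplane step plays the role of the distance-reducing projection step the abstract describes.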
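For context on Chapter 4, the classical notion being generalized can be written out. The statement below follows the standard Marcotte-Zhu form from the literature; the thesis's augmented variant is its own contribution and is not reproduced here.

```latex
% Variational inequality VI(F, C): find x^* \in C with
\[
  \langle F(x^{*}),\, x - x^{*} \rangle \;\ge\; 0
  \qquad \forall\, x \in C .
\]
% Classical weak sharpness of the solution set S (Marcotte--Zhu):
\[
  -F(x^{*}) \;\in\; \operatorname{int}\Bigl(
      \bigcap_{x \in S} \bigl[\, T_C(x) \cap N_S(x) \,\bigr]^{\circ}
  \Bigr)
  \qquad \forall\, x^{*} \in S ,
\]
% where T_C(x) is the tangent cone to C at x, N_S(x) the normal
% cone to S at x, and K^{\circ} the polar of a cone K. Weak
% sharpness is the standard sufficient condition for finite
% convergence of projection-type methods; the augmented notion
% of Chapter 4 weakens this requirement further.
```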
Keywords/Search Tags:Penalty function, Penalty-conjugate gradient method, Variational inequalities, Newton-type projection, Augmented weak sharpness