An optimization model usually involves parameters associated with the decision variables in the objective function or in the constraint set. The forward optimization problem is to find optimal solutions to the model when the parameter values are known. In practice, however, there are many instances in which we only have estimates of the parameter values, yet we know certain optimal solutions from experience, observation, or experiment. An inverse optimization problem is to find parameter values that are close to the given estimates and that make the known solutions optimal. This paper is therefore devoted to the study of two classes of inverse problems in continuous optimization: inverse linear programming under the ℓ1 measure and inverse quadratic programming under the ℓ2 measure.

First, we consider inverse linear programming (LP) under the ℓ1 measure. We formulate the problem as a nonsmooth optimization problem with linear complementarity constraints. We then introduce a perturbation method that reformulates the problem as a series of smooth problems and establish its global convergence. A smoothing Newton method, which is proved to be globally convergent with a local quadratic rate, is used to solve the smooth problems, and numerical experiments are reported.

Next, we consider inverse quadratic programming (QP) under the ℓ2 measure. The problem can be formulated as a cone constrained optimization problem with a complementarity constraint. Based on duality theory, we reformulate it as a linear complementarity constrained nonsmooth optimization problem with fewer variables than the original one. We introduce a perturbation approach to solve the reformulated problem and establish its global convergence. A Newton method is constructed to solve the perturbed problem, with global convergence and a local quadratic rate, and a Newton method is also applied to solve the subproblem.
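The perturbation idea underlying both parts can be illustrated on a toy complementarity condition. The sketch below is a minimal illustration, not the algorithm of this paper: the scalar map F(x) = x² − 1 and the CHKS-type smoothing function are our own choices for demonstration. It replaces the nonsmooth condition 0 ≤ a ⟂ b ≥ 0 by the smooth equation a + b − sqrt((a − b)² + 4μ²) = 0 and applies Newton's method while driving the perturbation μ to zero.

```python
import math

def solve_smoothed_complementarity(F, dF, x0, mus):
    """Solve 0 <= x  _|_  F(x) >= 0 by smoothing the complementarity
    condition with phi_mu(a, b) = a + b - sqrt((a - b)^2 + 4*mu^2),
    which vanishes exactly when a, b > 0 and a*b = mu^2."""
    x = x0
    for mu in mus:                      # gradually remove the perturbation
        for _ in range(50):             # Newton iterations on phi_mu(x) = 0
            a, b = x, F(x)
            r = math.sqrt((a - b) ** 2 + 4 * mu ** 2)
            phi = a + b - r
            if abs(phi) < 1e-14:
                break
            dphi = 1 + dF(x) - (a - b) * (1 - dF(x)) / r
            x -= phi / dphi             # warm-start the next, smaller mu
    return x

# Toy problem: 0 <= x _|_ x^2 - 1 >= 0, whose solution is x = 1.
x = solve_smoothed_complementarity(
    F=lambda x: x * x - 1,
    dF=lambda x: 2 * x,
    x0=0.5,
    mus=[10.0 ** (-k) for k in range(1, 9)],
)
print(abs(x - 1.0) < 1e-6)  # prints True: smoothed solutions converge to 1
```

Each smoothed equation is solved to high accuracy before μ is reduced, mirroring the "series of smooth problems" strategy described above.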
Finally, our numerical experiments show that this method is effective.
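To make the inverse LP setting concrete, the script below solves a tiny instance with SciPy; the data A, b, c0 and the observed solution x0 are invented for illustration and are not taken from the experiments above. For the forward LP min cᵀx s.t. Ax ≥ b, x ≥ 0, the observed x0 is optimal for c exactly when some dual y ≥ 0 satisfies Aᵀy ≤ c and cᵀx0 = bᵀy; writing c = c0 + p − q with p, q ≥ 0 turns min ‖c − c0‖₁ into a single LP.

```python
import numpy as np
from scipy.optimize import linprog

# Forward LP (data invented for illustration):
#   min c^T x  s.t.  x1 + x2 >= 2,  x >= 0
A = np.array([[1.0, 1.0]])   # one inequality constraint
b = np.array([2.0])
c0 = np.array([1.0, 3.0])    # estimated cost vector
x0 = np.array([0.0, 2.0])    # observed solution we want to make optimal

n, m = 2, 1
# Variable layout: z = [c (n), y (m), p (n), q (n)], with c = c0 + p - q.
f = np.concatenate([np.zeros(n + m), np.ones(2 * n)])  # minimize sum(p + q)

# Dual feasibility A^T y <= c, written as -c + A^T y <= 0.
A_ub = np.hstack([-np.eye(n), A.T, np.zeros((n, 2 * n))])
b_ub = np.zeros(n)

# Zero duality gap c^T x0 = b^T y, plus the split c - p + q = c0.
A_eq = np.vstack([
    np.concatenate([x0, -b, np.zeros(2 * n)]),
    np.hstack([np.eye(n), np.zeros((n, m)), -np.eye(n), np.eye(n)]),
])
b_eq = np.concatenate([[0.0], c0])

bounds = [(None, None)] * n + [(0, None)] * (m + 2 * n)  # c free; y, p, q >= 0
res = linprog(f, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.fun)   # minimal l1 adjustment ||c - c0||_1 (here 2: c2 must drop to c1)
```

These same optimality conditions, with the zero-duality-gap equation coupling the primal and dual variables, are what give rise to the complementarity constraints treated by the perturbation method of this paper.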