
Penalized Subgradient Methods For Convex Optimization And Stochastic Optimization

Posted on: 2022-02-08 | Degree: Master | Type: Thesis
Country: China | Candidate: X Liu | Full Text: PDF
GTID: 2480306509485114 | Subject: Operational Research and Cybernetics

Abstract/Summary:
We first present a penalized subgradient method for solving convex optimization problems and analyze its global convergence. Unlike classical methods for nonlinear programming, such as the augmented Lagrangian method and the sequential quadratic programming method, which require solving a subproblem at each iteration, the penalized subgradient method is easier to implement. Second, we extend the algorithm to the stochastic setting and propose a stochastic penalized subgradient algorithm for optimization problems in which both the objective function and the constraint function are expectation functions. We prove that the penalized subgradient method converges almost surely to an optimal solution and also establish a nonasymptotic convergence rate. Finally, we consider the method for solving stochastic optimization problems with multiple expectation constraints; at each iteration, only one constraint is randomly selected for computation.
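The abstract describes the general idea rather than the exact algorithm, but a standard penalized subgradient iteration of the kind it refers to can be sketched as follows: replace the constrained problem min f(x) s.t. g(x) ≤ 0 with the exact penalty f(x) + ρ·max(0, g(x)) and apply subgradient steps with a diminishing step size. This is an illustrative sketch only; the penalty parameter ρ, the step-size rule, and the test problem below are assumptions, not taken from the thesis.

```python
import numpy as np

def penalized_subgradient(f_sub, g, g_sub, x0, rho=2.0, alpha=0.1, steps=2000):
    """Minimize f(x) subject to g(x) <= 0 by subgradient descent on the
    exact penalty f(x) + rho * max(0, g(x)).

    f_sub(x) -- a subgradient of f at x
    g(x)     -- constraint value at x
    g_sub(x) -- a subgradient of g at x
    rho      -- penalty parameter (must exceed the optimal multiplier
                for the penalty to be exact; assumed here)
    """
    x = np.asarray(x0, dtype=float)
    for k in range(1, steps + 1):
        d = f_sub(x)
        if g(x) > 0:            # constraint violated: penalty term is active
            d = d + rho * g_sub(x)
        x = x - (alpha / np.sqrt(k)) * d   # diminishing step size alpha/sqrt(k)
    return x

# Toy example (assumed for illustration): min ||x||^2 s.t. x1 + x2 >= 1,
# written as g(x) = 1 - x1 - x2 <= 0; the optimum is x* = (0.5, 0.5).
x_star = penalized_subgradient(
    f_sub=lambda x: 2.0 * x,
    g=lambda x: 1.0 - x[0] - x[1],
    g_sub=lambda x: np.array([-1.0, -1.0]),
    x0=np.zeros(2),
)
```

Because no subproblem is solved per iteration, each step costs only one (sub)gradient evaluation, which is the implementation advantage over augmented Lagrangian or SQP methods that the abstract highlights.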
Keywords/Search Tags:Convex Optimization, Stochastic Optimization, Expected Constraints, Penalized Subgradient Method, Convergence Analysis