We first present a penalized subgradient method for solving convex optimization problems and analyze its global convergence. Unlike classical methods for nonlinear programming, such as the augmented Lagrangian method and the sequential quadratic programming method, which require solving a subproblem at each iteration, the penalized subgradient method is easier to implement. Second, we extend the algorithm to the stochastic setting and propose a random penalized subgradient algorithm for problems in which both the objective function and the constraint function are given in expectation form. We prove that the method converges almost surely to an optimal solution and also establish a nonasymptotic convergence rate. Finally, we consider the method for stochastic optimization with multiple expectation constraints; at each iteration, only one constraint is selected at random for computation.
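To illustrate the mechanism described above, the following is a minimal sketch of a penalized subgradient iteration with random constraint sampling. All names, the exact penalty form, and the step-size rule are assumptions for illustration, not the paper's precise algorithm: it takes a subgradient step on the penalized objective f(x) + rho * max(0, g_i(x)), where the constraint index i is drawn uniformly at random each iteration.

```python
import random


def penalized_subgradient(grad_f, constraints, x0, rho=10.0, steps=2000):
    """Illustrative penalized subgradient method (names/penalty assumed).

    Minimizes f(x) subject to g_i(x) <= 0 by stepping along a subgradient
    of the penalty f(x) + rho * max(0, g_i(x)), sampling one constraint i
    per iteration rather than evaluating all of them.
    """
    x = x0
    for k in range(1, steps + 1):
        alpha = 1.0 / k ** 0.5                   # diminishing step size
        g, grad_g = random.choice(constraints)   # sample one constraint
        step = grad_f(x)
        if g(x) > 0:                             # penalize only if violated
            step += rho * grad_g(x)
        x -= alpha * step
    return x


# Toy example: minimize (x - 3)^2 subject to x <= 1, i.e. g(x) = x - 1.
# The constrained minimizer is x = 1.
x_opt = penalized_subgradient(
    grad_f=lambda x: 2 * (x - 3),
    constraints=[(lambda x: x - 1, lambda x: 1.0)],
    x0=0.0,
)
```

With a diminishing step size the iterates oscillate around the constrained minimizer in a band that shrinks over time, so after enough iterations `x_opt` lands close to 1; no per-iteration subproblem is solved, which is the ease-of-implementation point made above.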