
Analysis On A Class Of Stochastic Optimization Algorithms

Posted on: 2023-04-21
Degree: Master
Type: Thesis
Country: China
Candidate: Y J Zhou
Full Text: PDF
GTID: 2530306794977489
Subject: Mathematics

Abstract/Summary:
With the development of science and technology and the advent of the era of big data, machine learning plays an increasingly significant role in our lives. Stochastic optimization methods are an important theoretical basis and indispensable tool of machine learning, widely applied in statistical learning, data classification, transportation, and economic management. The quasi-Newton method and the conjugate gradient method are well-known approaches for solving unconstrained optimization problems and can be used to solve nonlinear equations as well as smooth and nonsmooth optimization problems. This thesis therefore adapts them to stochastic optimization in order to obtain global convergence, improved complexity results, and better convergence rates. A class of stochastic optimization algorithms is proposed, comprising a stochastic L-BFGS method and a stochastic conjugate gradient method. These methods produce more accurate search directions and appropriate step sizes, which improves algorithmic efficiency.

First, a modified stochastic L-BFGS method is presented for stochastic optimization. The algorithm adopts a vanishing step-size technique. In forming the search direction, the original gradient difference in the L-BFGS update is replaced by a perturbed gradient difference ȳ_k containing a disturbance term, which automatically preserves the positive definiteness of the approximate matrix. For this modified stochastic L-BFGS algorithm, global convergence, improved complexity results, and excellent numerical performance are obtained.

Second, a stochastic conjugate gradient method with Armijo line search is proposed. The method obtains the step size by an Armijo line search on the subsampled function, and the search direction by a three-term conjugate gradient formula. Under suitable assumptions, global convergence is established and a complexity bound of O(ε⁻¹) is derived; a linear convergence rate is achieved under strong convexity. Numerical experiments on nonconvex and strongly convex problems show that the proposed method is efficient.
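To make the first algorithm concrete, here is a minimal sketch of a stochastic L-BFGS loop with a vanishing step size and a perturbed curvature pair. The function names (grad_fn, batch_fn), the schedule alpha0/(k+1), and the specific perturbation ȳ_k = y_k + r_k s_k with r_k = max(0, δ − y_kᵀs_k/‖s_k‖²) are illustrative assumptions; the thesis's exact disturbance term is not recoverable from the abstract.

```python
import numpy as np

def modified_stochastic_lbfgs(grad_fn, x0, batch_fn, n_iters=100,
                              memory=5, alpha0=0.5, delta=1e-4):
    """Sketch: stochastic L-BFGS with a perturbed curvature pair.

    grad_fn(x, batch) returns a stochastic gradient on a mini-batch
    drawn by batch_fn(). Hypothetical interface, for illustration only.
    """
    x = x0.copy()
    s_list, y_list = [], []              # curvature-pair memory
    g = grad_fn(x, batch_fn())
    for k in range(n_iters):
        # Two-loop recursion: apply the implicit inverse-Hessian approximation.
        q = g.copy()
        alphas = []
        for s, y in zip(reversed(s_list), reversed(y_list)):
            a = (s @ q) / (y @ s)
            q -= a * y
            alphas.append(a)
        if y_list:
            y = y_list[-1]
            q *= (s_list[-1] @ y) / (y @ y)  # standard initial scaling H0
        for (s, y), a in zip(zip(s_list, y_list), reversed(alphas)):
            b = (y @ q) / (y @ s)
            q += (a - b) * s
        d = -q                               # search direction

        step = alpha0 / (k + 1)              # vanishing step size
        x_new = x + step * d
        # Simplification: the gradient difference is formed across two
        # batches; many stochastic L-BFGS variants use a common sample.
        g_new = grad_fn(x_new, batch_fn())

        # Perturbed gradient difference: enforce ȳ_kᵀ s_k >= delta ||s_k||²,
        # which keeps the BFGS approximation positive definite.
        s_k = x_new - x
        y_k = g_new - g
        if s_k @ s_k > 1e-12:
            r = max(0.0, delta - (y_k @ s_k) / (s_k @ s_k))
            s_list.append(s_k)
            y_list.append(y_k + r * s_k)
            if len(s_list) > memory:
                s_list.pop(0)
                y_list.pop(0)
        x, g = x_new, g_new
    return x
```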
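The second algorithm can be sketched along the same lines. The three-term PRP-type direction below, d_{k+1} = −g_{k+1} + β_k d_k − θ_k y_k, is one standard choice that automatically yields sufficient descent (d_{k+1}ᵀ g_{k+1} = −‖g_{k+1}‖²); the thesis's exact formula may differ. The Armijo search is carried out on the subsampled function, as the abstract describes.

```python
import numpy as np

def stochastic_three_term_cg(f_fn, grad_fn, x0, batch_fn, n_iters=100,
                             alpha_hat=1.0, rho=0.5, c=1e-4):
    """Sketch: stochastic three-term CG with Armijo line search.

    f_fn(x, batch) and grad_fn(x, batch) evaluate the subsampled
    objective and its gradient. Hypothetical interface, for illustration.
    """
    x = x0.copy()
    batch = batch_fn()
    g = grad_fn(x, batch)
    d = -g
    for _ in range(n_iters):
        # Armijo backtracking on the subsampled function.
        f0 = f_fn(x, batch)
        gd = g @ d                        # directional derivative (< 0)
        step = alpha_hat
        while f_fn(x + step * d, batch) > f0 + c * step * gd and step > 1e-12:
            step *= rho
        x_new = x + step * d

        batch = batch_fn()                # fresh mini-batch
        g_new = grad_fn(x_new, batch)
        denom = g @ g
        if denom < 1e-12:                 # gradient vanished: stop
            return x_new
        # Three-term PRP-type direction: guarantees d'g = -||g||².
        y = g_new - g
        beta = (g_new @ y) / denom
        theta = (g_new @ d) / denom
        d = -g_new + beta * d - theta * y
        x, g = x_new, g_new
    return x
```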
Keywords/Search Tags:Stochastic optimization problems, Stochastic L-BFGS method, Stochastic conjugate gradient method, Global convergence, Machine learning