
Accelerated Regularized Tensor Methods via Random Sampling

Posted on: 2022-04-01
Degree: Master
Type: Thesis
Country: China
Candidate: N Wang
Full Text: PDF
GTID: 2480306764994929
Subject: Macro-economic Management and Sustainable Development

Abstract/Summary:
In recent years, higher-order optimization algorithms have become an important research direction in the field of optimization, and the study of their convergence-complexity bounds in particular has attracted the attention of many experts and scholars. In essence, higher-order optimization algorithms use higher-order derivative information of the objective function to construct a Taylor expansion model. Compared with first-order gradient-type algorithms, algorithms of this kind can usually achieve better convergence complexity. In view of this convergence advantage, higher-order methods have been combined with stochastic techniques to study unconstrained optimization problems whose objective is a finite sum, leading to the stochastic cubic-regularized Newton method. However, the convergence complexity of that algorithm is still relatively high.

To obtain better convergence complexity, this thesis proposes an accelerated stochastic regularized tensor optimization algorithm based on the stochastic cubic-regularized Newton method. The algorithm is designed for general unconstrained optimization problems whose objective is a finite sum: each individual component may be non-convex, but the overall objective, obtained by summing all individual functions, is assumed to be a smooth convex function. In solving the problem, the approximate Hessian matrix and third-order tensor of the objective function are first computed by subsampling, replacing the exact derivative information and thus reducing the computational cost. A local Taylor model of the optimization problem is then constructed from this approximate derivative information and a regularization term is added; minimizing the regularized Taylor model yields the update step.

Moreover, because the Hessian and third-order tensor information is inexact, a direct extension of the accelerated regularized tensor method cannot theoretically guarantee convergence of the algorithm. This thesis therefore proposes a two-stage scheme to accelerate the method and gives a theoretical proof of its convergence, showing that an approximate minimizer of the problem can be found within at most $O(\epsilon^{-1/4})$ iterations, where $\epsilon$ is the prescribed accuracy. Finally, the classical logistic regression problem in machine learning is used as an example to verify the numerical performance of the algorithm. The numerical results are compared with existing methods, which demonstrates the advantages of the proposed algorithm and further confirms the correctness of the theoretical analysis.
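To make the subsampled model-minimization step concrete, below is a minimal Python sketch of one such step on the logistic regression example mentioned in the numerical experiments. It builds a regularized third-order Taylor model m(s) = g^T s + 1/2 s^T H_S s + 1/6 T_S[s,s,s] + (sigma/4)||s||^4 from a subsampled Hessian H_S and third-order tensor T_S, and minimizes it with a crude gradient loop. The sample sizes, the value of sigma, the fourth-power regularizer, and the inner solver are illustrative assumptions, not the thesis's actual implementation or its two-stage acceleration scheme.

```python
# Illustrative sketch only: one subsampled regularized third-order step for logistic regression.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data for f(w) = (1/n) * sum_i log(1 + exp(-y_i * x_i^T w)).
n, d = 1000, 10
X = rng.standard_normal((n, d))
y = np.sign(X @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n))

def grad(w, idx):
    """Gradient of the logistic loss averaged over the samples in `idx`."""
    z = y[idx] * (X[idx] @ w)
    return X[idx].T @ (-y[idx] / (1.0 + np.exp(z))) / len(idx)

def hessian(w, idx):
    """Subsampled Hessian: (1/|S|) * sum_i p_i (1 - p_i) x_i x_i^T, p_i = sigmoid(x_i^T w)."""
    p = 1.0 / (1.0 + np.exp(-(X[idx] @ w)))
    return (X[idx] * (p * (1.0 - p))[:, None]).T @ X[idx] / len(idx)

def tensor_quad(w, idx, s):
    """Action T_S[s, s] of the subsampled third-order tensor on a direction s.
    For the logistic loss the per-sample coefficient is p(1 - p)(1 - 2p)."""
    p = 1.0 / (1.0 + np.exp(-(X[idx] @ w)))
    c = p * (1.0 - p) * (1.0 - 2.0 * p)
    Xs = X[idx] @ s
    return X[idx].T @ (c * Xs**2) / len(idx)

def model_grad(s, g, H, w, idx_T, sigma):
    """Gradient of m(s) = g^T s + 1/2 s^T H s + 1/6 T_S[s,s,s] + (sigma/4) * ||s||^4."""
    return g + H @ s + 0.5 * tensor_quad(w, idx_T, s) + sigma * np.dot(s, s) * s

# --- one outer step -------------------------------------------------------
w = np.zeros(d)
idx_H = rng.choice(n, size=200, replace=False)   # sample size for the Hessian (assumed)
idx_T = rng.choice(n, size=100, replace=False)   # sample size for the third-order tensor (assumed)
g = grad(w, np.arange(n))                        # exact gradient, kept for simplicity
H = hessian(w, idx_H)
sigma = 1.0                                      # regularization parameter (assumed)

s = np.zeros(d)
for _ in range(500):                             # crude inner solver: gradient descent on the model
    s -= 0.1 * model_grad(s, g, H, w, idx_T, sigma)

w_next = w + s
print("||grad f|| before:", np.linalg.norm(g))
print("||grad f|| after :", np.linalg.norm(grad(w_next, np.arange(n))))
```

In a complete method, the inner model would be minimized to the accuracy required by the convergence analysis, and this step would be embedded in the two-stage accelerated scheme described above rather than applied in isolation.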
Keywords/Search Tags:stochastic optimization, acceleration, quadratically regularized tensor model, two-phase scheme, complexity