
Research On Algorithms For Solving Minimax Problems In Machine Learning

Posted on: 2024-01-15
Degree: Master
Type: Thesis
Country: China
Candidate: Y Y Qin
Full Text: PDF
GTID: 2568306914974919
Subject: Operational Research and Cybernetics

Abstract/Summary:
In recent years, nonconvex minimax problems have been widely applied in fields such as machine learning, signal processing, and deep neural networks, becoming a research hotspot in artificial intelligence, optimization, and related areas. Many works have designed new single-loop algorithms based on the gradient descent ascent (GDA) algorithm to solve nonconvex minimax problems; however, the convergence analysis of such algorithms in the stochastic setting requires further study. In this paper, a stochastic alternating gradient projection (SAGP) algorithm is proposed for solving nonconvex minimax problems. It builds on a variant of the GDA algorithm, the alternating gradient projection (AGP) algorithm, and incorporates stochastic gradients in the spirit of the stochastic gradient descent ascent (SGDA) algorithm. The convergence of the proposed algorithm is proved, and its iteration complexity is analyzed in the nonconvex-strongly concave, nonconvex-concave, strongly convex-nonconcave, and convex-nonconcave settings. Finally, numerical experiments are carried out on Gaussian-distributed data sets, and the effectiveness of the new algorithm is verified by analyzing and comparing the experimental results. The main research work of this paper is as follows:

1. In the stochastic setting, for nonconvex-(strongly) concave minimax problems, this paper proves that the SAGP algorithm achieves an improved convergence rate under appropriate assumptions. In the nonconvex-strongly concave case, the algorithm obtains an ε-approximate first-order stationary point of f(x, y) with an iteration complexity of O(ε⁻²); in the nonconvex-concave case, the iteration complexity is O(ε⁻⁴).

2. In the stochastic setting, for (strongly) convex-nonconcave minimax problems, this paper proves that the SAGP algorithm achieves an improved convergence rate under appropriate assumptions. In the strongly convex-nonconcave case, the algorithm obtains an ε-approximate first-order stationary point of f(x, y) with an iteration complexity of O(ε⁻²); in the convex-nonconcave case, the iteration complexity is O(ε⁻⁴). The SAGP algorithm is also applied to a toy WGAN problem with a linear generator; numerical experiments on Gaussian-distributed data verify the effectiveness of the algorithm.
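The alternating update underlying stochastic gradient projection schemes of this kind — a projected stochastic descent step in x, followed by a projected stochastic ascent step in y that already uses the updated x — can be sketched as follows. This is a minimal illustrative sketch, not the thesis's actual SAGP algorithm: the test function f(x, y) = 0.1x² + xy − 0.5y² (convex in x, strongly concave in y, saddle point at the origin), the box constraint [−1, 1], the diminishing step-size schedule, and the Gaussian gradient noise are all assumptions made for the example.

```python
import random

def project(z, lo=-1.0, hi=1.0):
    """Euclidean projection of a scalar onto the interval [lo, hi]."""
    return max(lo, min(hi, z))

def grad_x(x, y):
    # ∂f/∂x for the assumed test function f(x, y) = 0.1 x^2 + x y - 0.5 y^2
    return 0.2 * x + y

def grad_y(x, y):
    # ∂f/∂y for the same test function
    return x - y

def sagp_sketch(x0=0.9, y0=-0.8, iters=5000, noise=0.1, seed=0):
    """Alternating projected stochastic gradient descent-ascent on min_x max_y f."""
    rng = random.Random(seed)
    x, y = x0, y0
    for t in range(1, iters + 1):
        lr = 0.5 / (t ** 0.5)                       # diminishing step size
        gx = grad_x(x, y) + rng.gauss(0.0, noise)   # noisy gradient in x
        x = project(x - lr * gx)                    # projected descent step
        gy = grad_y(x, y) + rng.gauss(0.0, noise)   # ascent step sees the NEW x
        y = project(y + lr * gy)                    # projected ascent step
    return x, y
```

The alternating order (updating y with the already-updated x, rather than simultaneously) is what distinguishes this scheme from plain SGDA; on this toy problem the iterates drift toward the saddle point at (0, 0) despite the injected noise.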
Keywords: machine learning, nonconvex optimization, GDA algorithm, stochastic gradient, stochastic alternating gradient projection algorithm