
A Recurrent Neural Network With MMSE Loss And Its Truncated Condensed Adam Learning

Posted on: 2023-06-01    Degree: Master    Type: Thesis
Country: China    Candidate: Z Liu    Full Text: PDF
GTID: 2568306827975879    Subject: Financial Mathematics and Actuarial
Abstract/Summary:
The minimax problem is a classic nonsmooth optimization problem. Its study has a long history, spans a wide range of fields, and has produced many methods and rich results; moreover, problems in many areas can be reformulated as minimax problems. The condensing (smoothing) method for minimax problems has received extensive attention and study since it was proposed. Although many methods exist in this area, with the advent of the big-data era, solving large-scale minimax problems has become an important topic, and research on it still needs to be expanded.

Time series forecasting is common in the stock market, in industrial production indicators, and elsewhere. With the arrival of big data and the wide application of neural network models, neural networks have achieved good results on many problems. Recurrent neural networks were subsequently proposed and applied to time series analysis and forecasting, where they have shown remarkable effectiveness and potential.

This thesis presents a recurrent neural network model with a maximum mean square error (MMSE) loss. We combine the maximum-value function with the MSE loss to construct the MMSE loss in max form and use it as the loss function of the network, so that training the network is transformed into solving a finite minimax problem. A truncated condensing smoothing Adam method for training the model is then given. Following a truncation strategy, we construct a truncated condensing function that smoothly approximates the MMSE loss: as the condensing parameter p increases, the truncated condensing function approaches the MMSE loss. We therefore replace the MMSE loss with its truncated condensing smooth approximation as the loss function of the recurrent neural network model, and train the model with the Adam optimization method.

During the implementation of the algorithm, if the condensing parameter p increases too fast, the loss function approaches negative infinity and the optimization fails; if p increases too slowly, the approximation error is large or the optimization is slow. To reduce the ill-conditioning caused by an overly large condensing parameter during the smoothing process, we adopt an adjustment criterion that adaptively tunes p and keeps it at an appropriate size, thereby reducing the occurrence of ill-conditioned behavior. The truncated condensing algorithm, the Adam algorithm, and the convergence guarantee of the overall method are then described.

Finally, three stocks are selected for numerical experiments. Using time series of the opening, highest, lowest, and closing prices, a stock-price prediction method is given based on the proposed recurrent neural network model and the truncated condensing algorithm. Compared with other loss functions, including the MSE loss, MAE loss, log-cosh loss, and smooth L1 loss, the efficiency and accuracy of neural networks trained with this method are significantly improved. The advantages and disadvantages of the proposed model and algorithm are also analyzed.
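The condensing smoothing of a maximum described above is commonly realized with a log-sum-exp-type aggregate function, which tends to the true maximum as the condensing parameter p grows. The sketch below illustrates this idea for an MMSE-style loss (the maximum of per-series MSE values); the function names and the way predictions are grouped are illustrative assumptions, not the thesis's actual code.

```python
import numpy as np

def condensing_max(values, p):
    """Condensing (log-sum-exp) smooth approximation of max(values):
    (1/p) * log(sum_i exp(p * v_i)).
    The result always lies in [max(v), max(v) + log(n)/p], so it
    approaches max(values) from above as p -> infinity."""
    v = np.asarray(values, dtype=float)
    m = v.max()  # shift by the max for numerical stability
    return m + np.log(np.exp(p * (v - m)).sum()) / p

def mmse_loss_smooth(preds, targets, p=50.0):
    """Illustrative MMSE loss: the maximum of per-series MSE values,
    replaced by its condensing smooth approximation."""
    mses = [np.mean((y_hat - y) ** 2) for y_hat, y in zip(preds, targets)]
    return condensing_max(mses, p)
```

Because the smoothed value is differentiable in the predictions, it can be used directly as a training loss for a gradient-based optimizer such as Adam, whereas the raw maximum is nonsmooth at points where two MSE terms tie.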
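The truncation strategy and the adaptive adjustment of the condensing parameter p can be sketched as follows. The truncation drops exponentially small terms before summing, which keeps the aggregate well-conditioned when p is large; the adjustment rule shown (grow p only when progress on the current smoothed problem stalls) is a hypothetical criterion for illustration, as the thesis's exact rule is not reproduced here.

```python
import numpy as np

def truncated_condensing_max(values, p, tol=1e-12):
    """Truncated condensing function: terms whose relative weight
    exp(p * (v_i - max)) falls below `tol` are dropped before the
    log-sum, avoiding underflow and ill-conditioning for large p."""
    v = np.asarray(values, dtype=float)
    m = v.max()
    w = np.exp(p * (v - m))
    w = w[w >= tol]          # truncation step: keep only active terms
    return m + np.log(w.sum()) / p

def adjust_p(p, grad_norm, growth=1.5, p_max=1e4):
    """Hypothetical adaptive update of the condensing parameter:
    increase p geometrically only when the smoothed problem looks
    nearly solved (small gradient norm), so the approximation is
    tightened gradually rather than all at once."""
    if grad_norm < 1e-2:
        return min(p * growth, p_max)
    return p
```

In a training loop one would alternate Adam steps on the smoothed loss with calls to an update rule like `adjust_p`, so that p stays small while far from a solution and grows as the iterates settle.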
Keywords/Search Tags: Recurrent neural networks, Minimax problem, Condensation function, Truncated condensation