
Convergence Of A Batch Method For Gray Neural Network GNNM(1, 1)

Posted on: 2010-04-22
Degree: Master
Type: Thesis
Country: China
Candidate: Y W Sun
Full Text: PDF
GTID: 2120360275458140
Subject: Computational Mathematics
Abstract/Summary:
The gray neural network GNNM(1,1) is a coupled model that combines a gray system with a neural network; their effective integration not only improves the efficiency and accuracy of the model but also strengthens the system's parallel computing power. From the study of artificial neural networks, it is known that adding a penalty term to the error function can control the size of the weights and improve the generalization ability of the network, and that adding a momentum term to the weight update formula can reduce oscillation and accelerate convergence (a generic form of this update is sketched below). In this thesis, we combine these two techniques and prove the convergence of a batch gradient method for the improved gray neural network GNNM(1,1). Numerical examples are provided to support our theoretical findings.

This thesis is organized as follows. In Chapter 1, background information on neural networks and gray systems is reviewed. In Chapter 2, we introduce the improvement to the gray neural network GNNM(1,1) and prove the convergence of a batch gradient method for the improved model. In Chapter 3, we support our theoretical findings with numerical examples and compare the prediction capability of the GM model and the BP neural network with that of the gray neural network GNNM(1,1).
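For reference, a minimal sketch of such a scheme, assuming the standard quadratic penalty and momentum formulation (this sketch and the symbols \tilde{E}, w, \lambda, \eta, \tau are our own notation, not quoted from the thesis):

    E(w) = \tilde{E}(w) + \lambda \|w\|^2,
    \Delta w^{k+1} = -\eta \nabla E(w^k) + \tau \Delta w^k,
    w^{k+1} = w^k + \Delta w^{k+1},

where \tilde{E}(w) is the batch error summed over all training samples, \lambda > 0 is the penalty coefficient, \eta > 0 is the learning rate, and \tau \in [0,1) is the momentum coefficient. Monotonicity and convergence results of the kind announced in the abstract are typically proved for iterations of this general form.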
Keywords/Search Tags:GNNM(1,1), Batch Gradient Method, Penalty, Momentum, Monotonicity, Convergence