
Successive Approximation Training Algorithm With Increasing Hidden Units For Feedforward Neural Networks

Posted on: 2010-10-03
Degree: Master
Type: Thesis
Country: China
Candidate: T Zhang
Full Text: PDF
GTID: 2178360275957973
Subject: Computational Mathematics
Abstract/Summary:
This dissertation introduces the basic concepts and development of neural networks, describes the structure and learning method of BP networks in detail, and reviews the basic idea and development of neural network ensembles. A successive approximation training algorithm for feedforward neural networks, based on the decrease of the error, is introduced. By analyzing the proof of this algorithm, some of its disadvantages are identified. A new successive approximation training algorithm with increasing hidden units for feedforward neural networks is then presented to improve on the earlier algorithm. Finally, the learning and generalization capabilities of the new algorithm for BP networks are investigated through numerical experiments: an experiment on the N-bit parity problem shows that the new algorithm converges and that the old algorithm exhibits some disadvantages, while experiments on the two-spiral problem and on nonlinear system prediction illustrate that the new algorithm outperforms standard BP networks in generalization capability and prediction stability, respectively.
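The general idea of a successive-approximation scheme with increasing hidden units can be illustrated with a small sketch: the network is grown one sigmoid hidden unit at a time, and each new unit is trained to approximate the current residual error, so the overall error decreases stage by stage. This is a hypothetical minimal illustration of the constructive principle, not the specific algorithm proposed in the thesis; all function names, the learning rate, and the stopping tolerance are assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_incremental(X, y, max_units=20, tol=1e-3, epochs=2000, lr=0.5, seed=0):
    """Grow a one-hidden-layer network one sigmoid unit at a time.

    Each new unit (input weights w, bias b, output weight v) is trained
    by gradient descent to fit the current residual y - f(X); training
    stops once the mean squared error falls below `tol` or `max_units`
    is reached. Hypothetical sketch, not the thesis algorithm itself.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    pred = np.zeros(n)
    units = []  # list of (w, b, v) for each hidden unit added so far
    for _ in range(max_units):
        r = y - pred                      # residual to be approximated next
        if np.mean(r ** 2) < tol:
            break
        w = rng.normal(size=d)
        b = rng.normal()
        v = 0.0                           # new unit starts with zero output weight
        for _ in range(epochs):
            h = sigmoid(X @ w + b)
            e = v * h - r                 # this unit's error against the residual
            gh = e * v * h * (1.0 - h)    # backprop through the sigmoid
            v -= lr * np.mean(e * h)
            w -= lr * (X.T @ gh) / n
            b -= lr * np.mean(gh)
        units.append((w, b, v))
        pred = pred + v * sigmoid(X @ w + b)
    return units, pred
```

Because each unit's output weight starts at zero and is trained against the residual alone, adding a unit can only refine the current approximation, which mirrors the error-decrease property the abstract attributes to the successive-approximation construction.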
Keywords/Search Tags: Neural Networks, BP Networks, Increasing Hidden Units, Learning Capabilities, Generalization Capabilities