Convergence Of Gradient Learning Algorithm For Two Kinds Of Feedforward Neural Networks

Posted on: 2007-12-04
Degree: Master
Type: Thesis
Country: China
Candidate: H F Lu
Full Text: PDF
GTID: 2120360182461108
Subject: Computational Mathematics

Abstract/Summary:
BP networks are among the most widely used neural networks. The cases of a two-layer network with a penalty term and a three-layer network without a penalty term have been studied in [1] and [2], respectively. In this thesis, we present and discuss an online gradient method with a penalty term for three-layer BP neural networks. The input training examples are reshuffled stochastically before each training cycle, so that the learning process can more easily escape from local minima. The monotonicity of the error function and the deterministic convergence of the method are proved.

Higher-order neural networks (HONN) were developed to enhance the nonlinear approximation capacity of feedforward multilayer perceptron networks. Pi-Sigma neural networks (PSNN), an efficient class of HONN for pattern classification and approximation problems, retain the powerful learning capability of multilayer HONN while avoiding, to a certain degree, the combinatorial growth in the number of weights and hidden units as the dimension of the input vectors increases (see, e.g., [3, 4]). The numerical tests in [3, 5] indicate that fairly complex approximation and classification problems can be tackled by PSNN using only three or four summing units. However, to the best of our knowledge, no theoretical convergence analysis for PSNN has been published. The second aim of this thesis is to fill this gap by providing convergence results for the gradient descent learning algorithm for PSNN.
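For orientation, the standard form of a gradient update with a penalty term (our notation, not quoted from the thesis) is

$$w^{k+1} = w^k - \eta\,\nabla E(w^k) - \eta\lambda\,w^k,$$

where $E$ is the error function, $\eta > 0$ the learning rate, and $\lambda > 0$ the coefficient of the penalty term $\tfrac{\lambda}{2}\|w\|^2$ added to $E$. Likewise, a Pi-Sigma network with $N$ summing units typically maps an input vector $x$ to

$$y = g\Big(\prod_{j=1}^{N}\big(w_j \cdot x + \theta_j\big)\Big),$$

with activation function $g$, trainable summing-unit weights $w_j$ and biases $\theta_j$, and a product unit with fixed weights; only the summing layer is trained, which keeps the number of adjustable parameters linear in the input dimension.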
Keywords/Search Tags: BP neural networks, online gradient method, penalty term, stochastic inputs, Pi-Sigma Neural Network, gradient descent algorithm, convergence