
Convergence Of BP Algorithms With Penalty For FNN Training

Posted on: 2007-02-20
Degree: Doctor
Type: Dissertation
Country: China
Candidate: H M Shao
GTID: 1100360212457659
Subject: Computational Mathematics
Abstract/Summary:
Feedforward neural networks (FNNs) have been widely used in many applications. Generalization, i.e., the capacity of a network to predict correct outputs for untrained data outside the training set, is an important criterion of its performance. Many research results have shown that the smallest network that fits the training samples adequately generalizes better [1-7].

Network pruning is an effective way to obtain such a small network; it includes the direct pruning method and the penalty method. Direct pruning starts with a network that is larger than necessary and then removes unimportant or insensitive connections and nodes, either selectively or in a fixed order, after training has been completed [3,8,9]. Unfortunately, this method destroys the topology of the network and results in longer training time. The penalty method is an indirect pruning strategy whose principle is to add a "model complexity term" to the conventional error function. Applied to the weight updating rule, this term acts as a brute force that prevents the weights from growing too large and drives unnecessary weights toward zero during training. The trained network therefore behaves like a small one, even though the weights that are close to zero, and thus unlikely to influence the output much, are not actually removed from the network. The penalty method thus achieves the goal of network pruning automatically while leaving the network structure intact, which makes it an important strategy for obtaining better generalization.

Many different penalty terms have been discussed in the literature [1-3, 6, 10-13], but most of these studies are experimental, and theoretical guarantees are still lacking. In this thesis, the suppressive effect of some penalties on the network weights is analyzed theoretically, and the above observation is confirmed. The backpropagation (BP) algorithm is a simple and popular learning algorithm for FNN training, and it can be implemented in two different ways: online mode and batch mode. The main work of this thesis is to prove that, in both cases, the corresponding BP algorithms with weight-decay and inner-product penalties are deterministically convergent. The boundedness of the network weights during training is also established, which is an important benefit of adding the penalty. This thesis is organized as follows: ...
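The weight-decay penalty discussed above can be illustrated with a short sketch. The Python fragment below is not the thesis's exact formulation: the one-hidden-layer architecture, the learning rate eta, and the penalty coefficient lam are illustrative assumptions. It trains a small FNN in batch mode on an error function of the form E(w) = 1/2 * sum_j ||y_j - f(x_j, w)||^2 + (lam/2) * ||w||^2, where the penalty term simply adds lam * w to each gradient and thereby shrinks unnecessary weights toward zero.

    # Minimal sketch (assumed setup, not the thesis's formulation): batch-mode BP
    # with a weight-decay penalty for a one-hidden-layer sigmoid network.
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def train_bp_weight_decay(X, y, hidden=4, eta=0.5, lam=1e-3, epochs=2000, seed=0):
        rng = np.random.default_rng(seed)
        n, d = X.shape
        W1 = rng.normal(scale=0.5, size=(d, hidden))   # input-to-hidden weights
        W2 = rng.normal(scale=0.5, size=(hidden, 1))   # hidden-to-output weights
        for _ in range(epochs):
            # Forward pass over the whole training set (batch mode).
            H = sigmoid(X @ W1)        # hidden activations, shape (n, hidden)
            out = sigmoid(H @ W2)      # network output, shape (n, 1)
            # Backward pass: gradients of the squared-error term.
            err = out - y                            # (n, 1)
            delta2 = err * out * (1 - out)           # output-layer delta
            grad_W2 = H.T @ delta2
            delta1 = (delta2 @ W2.T) * H * (1 - H)   # hidden-layer delta
            grad_W1 = X.T @ delta1
            # Weight-decay penalty: add lam * w to each gradient, so the update
            # both reduces the training error and drives unneeded weights to zero.
            W1 -= eta * (grad_W1 + lam * W1)
            W2 -= eta * (grad_W2 + lam * W2)
        return W1, W2

    # Usage: fit the XOR problem with the penalized batch-mode BP sketch.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)
    W1, W2 = train_bp_weight_decay(X, y)

The online-mode variant analyzed in the thesis would update the weights after each individual sample instead of after the whole batch; the penalty term enters the update rule in the same way.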
Keywords/Search Tags: Feedforward neural networks, Penalty, BP algorithm, Convergence, Boundedness