
Convergence Results Of Learning Algorithms For Fuzzy Neural Networks

Posted on: 2013-12-30
Degree: Doctor
Type: Dissertation
Country: China
Candidate: Y Liu
Full Text: PDF
GTID: 1220330395499243
Subject: Computational Mathematics
Abstract/Summary:
Both neural networks and fuzzy systems simulate aspects of human thinking, and they can solve many nonlinear, uncertain, and complex problems that traditional techniques cannot, so they play an important role in machine learning. Each has advantages and disadvantages: a neural network can learn, but its weight vectors are difficult to interpret or explain qualitatively, while a fuzzy system is good at exploiting human experience, but how to extract, generate, and optimize the fuzzy rules and membership-function parameters of a fuzzy reasoning system remains an open problem. It is therefore important and meaningful to unify these two mutually complementary fields. Fuzzy neural networks, also called neuro-fuzzy systems, arose from this need: they give fuzzy systems the learning ability of adaptive systems and enable neural networks to handle fuzzy rules. Fuzzy neural networks have been widely applied to nonlinear system identification, intelligent control, pattern recognition, and related problems. The main results of this thesis are as follows.

1. Feedforward neural networks with hidden layers are probably the most widely used neural networks, and they are usually trained by the back-propagation (BP) algorithm based on gradient descent. When a feedforward network is trained with the BP algorithm, performance depends on several factors, such as the choice of learning parameters, the cost function, and the network topology. An important and commonly adopted strategy is to choose initial weights that are small in magnitude, in order to prevent premature saturation during training. The aim here is to point out the other side of the story: in some cases the gradient of the error function vanishes not only for infinitely large weights but also for zero weights, and slow convergence at the beginning of training is often the result of initial weights that are too small. We therefore suggest that, in these cases, the initial weights should be neither too large nor too small; for instance, a typical choice is an interval that excludes zero, rather than an interval that includes zero as the usual strategy suggests.
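To make the suggested initialization concrete, here is a minimal sketch in Python. It is not code from the thesis, and the endpoints a and b are illustrative assumptions; it simply draws each initial weight uniformly from [-b, -a] ∪ [a, b] with 0 < a < b, so that no weight starts at or near the zero stationary point.

```python
import numpy as np

def init_weights(shape, a=0.1, b=0.5, seed=0):
    """Draw weights uniformly from [-b, -a] U [a, b], an interval that
    excludes zero, instead of the usual [-b, b] around zero.
    The endpoints a and b are illustrative, not values from the thesis."""
    rng = np.random.default_rng(seed)
    magnitudes = rng.uniform(a, b, size=shape)   # |w| is at least a > 0
    signs = rng.choice([-1.0, 1.0], size=shape)  # random sign for each weight
    return signs * magnitudes

W = init_weights((10, 5))
assert np.all(np.abs(W) >= 0.1)  # no weight starts at the zero stationary point
```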
2. The main function of a fuzzy perceptron is to solve classification problems through self-learning. This thesis presents an algorithm for a recurrent fuzzy perceptron based on fuzzy logic, whose structure is similar to that of a conditional perceptron based on Addition-Production operations. With the initial weights set to the constant zero, and under some conditions, its finite convergence is proved when the training examples are separable, i.e., the training procedure for the network weights stops within finitely many steps. Since this convergence condition is restrictive, a fuzzy δ-rule training algorithm is then proposed for a fuzzy perceptron in which the training pattern pairs are supplied in a stochastic order. Moreover, it is proved that if the training pattern pairs are fuzzily separable and the learning rate η is small enough, then the algorithm converges in finitely many steps under less rigorous conditions.

3. High-order networks distinguish themselves from ordinary feedforward networks by the presence of both summation and product units, and they have been shown to have impressive computational capabilities. By combining the benefits of high-order networks and the Takagi-Sugeno inference system, the Pi-Sigma network can handle nonlinear systems more efficiently: it has a simple structure and fast computational speed (a minimal forward-pass sketch is given after point 4 below). This thesis presents a Pi-Sigma network to identify a first-order Takagi-Sugeno (T-S) fuzzy inference system and proposes a modified gradient-based neuro-fuzzy learning algorithm that simplifies the computation. A comprehensive study of the weak and strong convergence results for the learning method is given. Simulation results support the theoretical findings and show that the modified learning algorithm is effective.

4. A popular and feasible approach to determining the appropriate size of a neural network is to remove unnecessary connections from an oversized network. The advantage of L1/2 regularization for sparse modeling is well recognized; however, the nonsmoothness of the L1/2 regularizer may cause oscillation during training. This thesis proposes an approach with smoothing L1/2 regularization to improve learning efficiency and to promote sparsity of Takagi-Sugeno (T-S) fuzzy models. Weak and strong convergence results are presented for zero-order T-S fuzzy neural networks, and a relationship between the learning rate parameter and the penalty parameter is given to guarantee convergence. Simulation results support the theoretical findings and show the superiority of the smoothing L1/2 regularization over the original L1/2 regularization.
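For the Pi-Sigma network of point 3, the defining structure is a layer of summation units feeding a single product unit. The sketch below shows that forward pass only; the input size and the number of summation units are illustrative assumptions, not the thesis's identification model.

```python
import numpy as np

def pi_sigma_forward(x, W, b):
    """Pi-Sigma forward pass: K summation units h_j = w_j . x + b_j
    (the Sigma layer) feed one product unit y = prod_j h_j (the Pi layer)."""
    h = W @ x + b        # Sigma layer: K linear sums of the inputs
    return np.prod(h)    # Pi layer: single product over the K sums

# Illustrative sizes: 3 inputs, K = 2 summation units.
x = np.array([0.2, -0.4, 0.7])
W = np.array([[0.5, -0.3, 0.1], [0.2, 0.4, -0.6]])
b = np.zeros(2)
y = pi_sigma_forward(x, W, b)
```

For the smoothing L1/2 regularization of point 4, the difficulty is that the penalty λ Σ_i |w_i|^(1/2) is nonsmooth at w_i = 0, which is what causes the oscillation. The sketch below uses one polynomial smoothing of |w| that is common in the smoothing-L1/2 literature; whether the thesis uses exactly this polynomial, and the values of λ and c, are assumptions here.

```python
import numpy as np

def smoothed_abs(w, c=0.05):
    """Smooth surrogate for |w|: equals |w| for |w| >= c and is a
    polynomial on (-c, c), so it is differentiable at w = 0 and its
    minimum value is 3c/8 > 0. One common choice in the smoothing-L1/2
    literature; the thesis's exact smoothing may differ."""
    inner = -w**4 / (8 * c**3) + 3 * w**2 / (4 * c) + 3 * c / 8
    return np.where(np.abs(w) >= c, np.abs(w), inner)

def smoothing_l_half_penalty(w, lam=1e-3, c=0.05):
    """Smoothing L1/2 penalty: lam * sum_i f(w_i)^(1/2). Since f > 0
    everywhere, the square root and its gradient are well defined even
    at w = 0, unlike the original |w|^(1/2) penalty."""
    return lam * np.sum(np.sqrt(smoothed_abs(w, c)))
```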
Keywords/Search Tags: Fuzzy neural networks, Fuzzy perceptron, Convergence, Takagi-Sugeno inference system, Pi-Sigma network, L1/2 regularization