
Neural Network Approaches For Computing Eigenvalues Of Symmetric Matrix

Posted on: 2007-02-03    Degree: Doctor    Type: Dissertation
Country: China    Candidate: L J Liu    Full Text: PDF
GTID: 1100360182482397    Subject: Computational Mathematics
Abstract/Summary:
Theory and application of recurrent neural networks (RNNs), represented by Hopfield neural networks and cellular neural networks, have recently become a new research focus. RNNs are characterized by the interconnection of a large number of neural units, which forms a highly nonlinear dynamical system. Since each neural unit can be realized by a linear or nonlinear analog circuit, RNNs are well suited to VLSI implementation and large-scale parallel computing, which resolves the timing and synchronization problems that arise in discrete-time implementations. Researchers in modern scientific and engineering computing have therefore been actively searching for fast, efficient, and robust algorithms realized by RNNs, and have achieved considerable success. This research area covers many optimization problems as well as many classical problems in numerical computation. The computation of matrix eigenvalues has been an important problem ever since it was posed in different fields, and it remains so today, with wide applications in data compression, signal processing, pattern recognition, and other areas. This thesis mainly deals with the computation of eigenvalue and generalized eigenvalue problems by a recurrent neural network approach; it also considers the convergence analysis of Madaline I feedforward neural networks. Specifically, the thesis includes the following contents:

1. Chapter 2 deals with the eigenvalue problem of a symmetric matrix. A novel RNN model is proposed based on an invariance of the B-norm. Sufficient conditions are established for the computation of the largest eigenvalue. By simply changing the sign of the given symmetric matrix, an algorithm for computing the smallest eigenvalue is also obtained. Building on the computation of the largest and smallest eigenvalues, a scheme for computing all the eigenvalues is given, together with supporting numerical experiments. In view of the theory of stochastic approximation, adaptive learning algorithms are discussed for extracting the principal and minor components of stochastic signals. (A sketch of a typical flow of this kind appears after the abstract.)

2. The generalized eigenvalue problem Ax = λBx is considered in Chapter 3. Two RNN models are presented for computing the largest and smallest generalized eigenvalues. Under the assumptions that A is symmetric and B is symmetric positive definite, convergence results are established for each model. An application to linear discriminant analysis is briefly discussed. (See the second sketch below.)

3. The last chapter covers the convergence analysis of Madaline I feedforward neural networks. When the training patterns are linearly separable, a finite convergence result is obtained. (The third sketch below illustrates this kind of guarantee.)
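To make the first item concrete, the following is a minimal sketch of a norm-invariant neural flow for the largest eigenvalue of a symmetric matrix, using the classical Oja/Rayleigh dynamics dx/dt = Ax - (x'Ax)x, which keeps ||x|| = 1 invariant. The thesis's B-norm-invariant model is in the same spirit but is not reproduced here; all function and variable names below are illustrative.

    # Sketch: continuous-time flow for the dominant eigenpair of a
    # symmetric matrix A. Equilibria of dx/dt = Ax - (x'Ax)x on the
    # unit sphere are eigenvectors; the flow generically converges to
    # the one with the largest eigenvalue.
    import numpy as np
    from scipy.integrate import solve_ivp

    def largest_eigenpair(A, t_final=50.0, seed=0):
        """Integrate the flow and return (lambda_max, eigenvector)."""
        n = A.shape[0]
        rng = np.random.default_rng(seed)
        x0 = rng.standard_normal(n)
        x0 /= np.linalg.norm(x0)            # start on the unit sphere

        def flow(_, x):
            return A @ x - (x @ A @ x) * x  # dx/dt = Ax - (x'Ax)x

        sol = solve_ivp(flow, (0.0, t_final), x0, rtol=1e-9, atol=1e-9)
        x = sol.y[:, -1]
        x /= np.linalg.norm(x)              # guard against numerical drift
        return x @ A @ x, x                 # Rayleigh quotient = eigenvalue

    if __name__ == "__main__":
        A = np.array([[4.0, 1.0, 0.0],
                      [1.0, 3.0, 1.0],
                      [0.0, 1.0, 2.0]])
        lam, x = largest_eigenpair(A)
        print(lam, np.linalg.eigvalsh(A)[-1])   # the two values should agree
        # The smallest eigenvalue follows by running the flow on -A,
        # exactly the sign-change trick described in the abstract:
        lam_min = -largest_eigenpair(-A)[0]
        print(lam_min, np.linalg.eigvalsh(A)[0])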
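For the second item, one natural B-norm-invariant flow for Ax = λBx (A symmetric, B symmetric positive definite) is dx/dt = B^{-1}Ax - (x'Ax / x'Bx)x, which keeps x'Bx constant and whose stable equilibria are dominant generalized eigenvectors. This illustrates the idea of the chapter but is not necessarily the thesis's exact model.

    # Sketch: B-norm-invariant flow for the largest generalized
    # eigenvalue of Ax = lambda Bx. B is factored once by Cholesky
    # and reused inside the flow.
    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.linalg import cho_factor, cho_solve, eigh

    def largest_generalized_eigenpair(A, B, t_final=50.0, seed=0):
        n = A.shape[0]
        rng = np.random.default_rng(seed)
        x0 = rng.standard_normal(n)
        Bfac = cho_factor(B)                # Cholesky factor of B

        def flow(_, x):
            r = (x @ A @ x) / (x @ B @ x)   # generalized Rayleigh quotient
            return cho_solve(Bfac, A @ x) - r * x

        sol = solve_ivp(flow, (0.0, t_final), x0, rtol=1e-9, atol=1e-9)
        x = sol.y[:, -1]
        return (x @ A @ x) / (x @ B @ x), x

    if __name__ == "__main__":
        A = np.array([[2.0, 1.0], [1.0, 3.0]])
        B = np.array([[2.0, 0.5], [0.5, 1.0]])
        lam, _ = largest_generalized_eigenpair(A, B)
        print(lam, eigh(A, B, eigvals_only=True)[-1])  # should agree

Along this flow the generalized Rayleigh quotient increases monotonically (the flow is a scaled gradient ascent of r in the B-metric), which is why convergence to the largest generalized eigenvalue can be expected for generic starting points.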
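The third item concerns finite convergence of Madaline I training on linearly separable patterns. Madaline I's update rule is not reproduced here; the classical perceptron loop below illustrates the same kind of finite-convergence guarantee for a single linear threshold unit, with illustrative names throughout.

    # Sketch: perceptron training on linearly separable data. By the
    # perceptron convergence theorem, the number of updates is finite,
    # so the loop terminates with a separating hyperplane.
    import numpy as np

    def perceptron(X, y, max_epochs=1000):
        """X: patterns of shape (m, n); y: labels in {-1, +1}."""
        m, n = X.shape
        w, b = np.zeros(n), 0.0
        for _ in range(max_epochs):
            mistakes = 0
            for xi, yi in zip(X, y):
                if yi * (w @ xi + b) <= 0:   # misclassified pattern: update
                    w += yi * xi
                    b += yi
                    mistakes += 1
            if mistakes == 0:                # separable data => finite stop
                return w, b
        return w, b

    if __name__ == "__main__":
        X = np.array([[2.0, 1.0], [1.0, 2.0], [-1.0, -1.5], [-2.0, -0.5]])
        y = np.array([1, 1, -1, -1])
        w, b = perceptron(X, y)
        print(np.sign(X @ w + b))            # all four patterns classified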
Keywords/Search Tags: Recurrent Neural Networks, Symmetric Matrix, Eigenvalue, Generalized Eigenvalue, Convergence, Principal Components Analysis, Linear Discriminant Analysis