
Krylov Subspace Methods For Computing Matrix Functions Acting On A Vector

Posted on: 2015-08-31  Degree: Doctor  Type: Dissertation
Country: China  Candidate: H Lv  Full Text: PDF
GTID: 1220330452969387  Subject: Mathematics
Abstract/Summary:
Krylov subspace methods for approximating a matrix function f(A) times a vector v, which are further analyzed in this thesis, have been studied intensively over the years and constitute one of the main topics in numerical linear algebra. Unlike other linear algebra problems, such as linear systems and eigenvalue problems, this problem is not naturally equipped with an a posteriori error representation or a clear residual notion that can be used for determining convergence and designing reliable stopping criteria for a Krylov subspace approximation to f(A)v. For the Arnoldi approximation to e^A v, Saad established an error expansion and empirically claimed that the first term of this expansion can serve as a reliable a posteriori error estimate; however, a theoretical justification had not been given hitherto. In this thesis, reliable a posteriori error estimates are derived from new bounds and a generalized error expansion that we establish, and Saad's empirical claim is thereby justified theoretically. Moreover, we establish the error expansion of the Krylov-like approximation for sufficiently smooth f(z), which generalizes Saad's result on the Arnoldi approximation to e^{τA}v. Similarly, it is shown that the first term of the generalized error expansion can be used as a reliable a posteriori estimate for the Krylov-like approximation to f(A)v for sufficiently smooth f(z). Numerical examples are reported to demonstrate the effectiveness of these a posteriori error estimates for the Krylov-like approximations to e^{τA}v, cos(A)v and sin(A)v.

The errors of the Krylov subspace approximations to f(A)v are closely related to the residuals of their counterparts for solving linear systems. Based on the principle underlying the GMRES method, we propose a new method, called the harmonic Ritz method, for approximating f(A)v. The method is so named because it is associated with the harmonic Ritz values of A with respect to a certain target ξ. We analyze its convergence and propose a restarted harmonic Ritz algorithm. Compared with the standard Arnoldi approximation, the new method has two remarkable advantages: first, it converges more smoothly; second, the restarted algorithm always converges for a suitable ξ. The second advantage is the more important one and is of great practical value. We consider the crucial issue of selecting ξ and show how to determine it robustly and cheaply. Importantly, we justify, both theoretically and numerically, that the harmonic Ritz method is not sensitive to ξ, demonstrating that the new method is robust and generally applicable.

The choice of the restarting vector has a great influence on the convergence of restarted algorithms and can be dealt with elegantly by the implicit restarting approach. We adapt the implicit restarting technique to the Arnoldi method for approximating f(A)v. Furthermore, we prove that the implicitly restarted Arnoldi method with exact shifts is mathematically equivalent to the restarted Arnoldi method with deflation.
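To make the objects discussed above concrete, the following NumPy/SciPy sketch illustrates the standard Arnoldi approximation beta * V_m * exp(H_m) * e_1 to exp(A)v, together with a first-term error estimate in the spirit of Saad's expansion, beta * h_{m+1,m} * |[phi_1(H_m)]_{m,1}| with phi_1(z) = (e^z - 1)/z. This is a minimal illustration of the general idea only, not the algorithms or estimates developed in the thesis; the function names (arnoldi, expm_arnoldi), the breakdown tolerance, and the test matrix are illustrative choices of our own.

```python
import numpy as np
from scipy.linalg import expm


def arnoldi(A, v, m):
    # Arnoldi process with modified Gram-Schmidt: returns V (n x (k+1)) with
    # orthonormal columns, the (k+1) x k upper Hessenberg matrix H satisfying
    # A V[:, :k] = V H, and beta = ||v||, where k <= m (k < m on breakdown).
    n = v.shape[0]
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(v)
    V[:, 0] = v / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] <= 1e-14 * beta:   # happy breakdown: invariant subspace found
            return V[:, :j + 2], H[:j + 2, :j + 1], beta
        V[:, j + 1] = w / H[j + 1, j]
    return V, H, beta


def expm_arnoldi(A, v, m):
    # Arnoldi approximation beta * V_k * exp(H_k) * e_1 to exp(A) v, with the
    # first-term a posteriori error estimate
    #     beta * h_{k+1,k} * |[phi_1(H_k)]_{k,1}|,  phi_1(z) = (e^z - 1) / z.
    V, H, beta = arnoldi(A, v, m)
    k = H.shape[1]
    Hk = H[:k, :k]
    approx = beta * V[:, :k] @ expm(Hk)[:, 0]
    # phi_1(H_k) e_1 via the augmented-matrix identity
    #     expm([[H, e_1], [0, 0]]) = [[exp(H), phi_1(H) e_1], [0, 1]]
    Haug = np.zeros((k + 1, k + 1))
    Haug[:k, :k] = Hk
    Haug[0, k] = 1.0
    phi1_e1 = expm(Haug)[:k, k]
    err_est = beta * H[k, k - 1] * abs(phi1_e1[k - 1])
    return approx, err_est


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 400
    A = -np.diag(np.arange(1.0, n + 1)) + 0.1 * rng.standard_normal((n, n))
    v = rng.standard_normal(n)
    x, est = expm_arnoldi(A, v, 30)
    print("true error :", np.linalg.norm(expm(A) @ v - x))
    print("estimate   :", est)
```

In this sketch the estimate costs only one extra small matrix exponential of size (k+1) x (k+1), so it can be evaluated at every step and used as a stopping criterion without touching the large matrix A again.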
Keywords/Search Tags: matrix function, Krylov subspace method, a posteriori error estimate, harmonic Ritz value, restarted method