
Convergence to steady-state of weighted ENO schemes, norm preserving Runge-Kutta methods and a modified conjugate gradient method

Posted on: 1999-07-04
Degree: Ph.D
Type: Dissertation
University: Brown University
Candidate: Gottlieb, Sigal
GTID: 1460390014473198
Subject: Mathematics
Abstract/Summary:
This dissertation explores three numerical methods which arise in the solution of partial differential equations.

In the first part, we explore methods of enhancing convergence to steady state of hyperbolic conservation laws, where the spatial derivatives are discretized by the Weighted Essentially Non-Oscillatory (WENO) method. We examine the usefulness of Newton iteration, a Jacobian-based method. This method is not time dependent and is equivalent to solving the steady-state problem directly. For scalar problems it shows much promise, but when used in the solution of a one-dimensional system it exhibits difficulties in converging to steady state. We then examine preconditioning, implicit residual smoothing, and filtering as ways of speeding up the convergence of a Runge-Kutta time-stepping scheme, and conclude that preconditioning, when used alone, speeds up convergence to steady state best.

In the second part we further study a class of high-order norm-preserving Runge-Kutta time discretizations introduced by Shu and Osher, suitable for solving hyperbolic conservation laws with stable spatial discretizations. We illustrate with numerical examples that a non-norm-preserving but linearly stable Runge-Kutta time discretization can generate oscillations even for a TVD (total variation diminishing) spatial discretization, verifying the claim that norm-preserving Runge-Kutta methods are important for such applications. We explore optimal norm-preserving Runge-Kutta methods of second, third, and fourth order, and Runge-Kutta methods of any order for linear spatial discretizations.
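To make the norm-preserving idea concrete, the following is a minimal sketch (not taken from the dissertation) of the classical third-order Shu-Osher scheme applied to linear advection with a first-order upwind discretization; the grid size, CFL number, and square-wave initial data are illustrative choices. Each stage is a convex combination of forward-Euler steps, which is why the TVD property of forward Euler carries over to the full step.

```python
import numpy as np

def upwind(u, dx):
    # First-order upwind operator L(u) = -u_x for u_t + u_x = 0
    # with periodic boundaries; a TVD spatial discretization.
    return -(u - np.roll(u, 1)) / dx

def ssp_rk3(u, dt, dx):
    # Shu-Osher third-order norm-preserving (TVD/SSP) Runge-Kutta step:
    # every stage is a convex combination of forward-Euler steps.
    u1 = u + dt * upwind(u, dx)
    u2 = 0.75 * u + 0.25 * (u1 + dt * upwind(u1, dx))
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * upwind(u2, dx))

def total_variation(u):
    # TV over the periodic grid, including the wrap-around jump.
    return np.sum(np.abs(np.diff(np.append(u, u[0]))))

n = 200
dx = 1.0 / n
x = np.arange(n) * dx
u = np.where((x > 0.3) & (x < 0.6), 1.0, 0.0)  # square wave
dt = 0.5 * dx  # CFL = 0.5, within the forward-Euler TVD restriction

tv0 = total_variation(u)
for _ in range(100):
    u = ssp_rk3(u, dt, dx)
print(total_variation(u) <= tv0 + 1e-12)  # total variation never grows
```

Replacing `ssp_rk3` with a linearly stable but non-norm-preserving scheme can allow the total variation to grow, which is the oscillation phenomenon the numerical examples in this part illustrate.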
We then study the norm-preserving properties of low-storage Runge-Kutta methods and multistep Runge-Kutta methods.

In the third and final part, we examine a modified conjugate gradient procedure for solving $Ax = b$ in which the approximation space is based upon the Krylov space $\mathcal{K}^{k}_{\sqrt{A},\,\underline{b}}$ associated with $\sqrt{A}$ and $\underline{b}$. We show that, given initial vectors $\underline{b}$ and $\sqrt{A}\,\underline{b}$ (possibly computed at some expense), the best-fit solution in $\mathcal{K}^{k}_{\sqrt{A},\,\underline{b}}$ can be computed using a finite-term recurrence requiring only one multiplication by $A$ per iteration. The initial convergence rate appears, as expected, to be twice as fast as that of the standard conjugate gradient method, but stability problems cause the convergence to be degraded.
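For reference, the standard conjugate gradient method against which the modified procedure is compared can be sketched as follows. This is the textbook algorithm, not the modified $\sqrt{A}$-based procedure from the dissertation; the tridiagonal test matrix is an arbitrary symmetric positive definite example chosen for illustration.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    # Textbook CG for symmetric positive definite A. Each iteration
    # requires one multiplication by A, mirroring the per-iteration
    # cost of the modified procedure.
    x = np.zeros_like(b)
    r = b - A @ x          # initial residual
    p = r.copy()           # initial search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)      # step length
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:  # converged
            break
        p = r + (rs_new / rs) * p  # new A-conjugate direction
        rs = rs_new
    return x

# Illustrative SPD system: the 1-D discrete Laplacian.
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = conjugate_gradient(A, b)
print(np.linalg.norm(A @ x - b) < 1e-8)  # residual is small
```

The modified procedure builds its approximation space from $\sqrt{A}$ rather than $A$, which is what yields the roughly doubled initial convergence rate noted above.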
Keywords/Search Tags: Method, Conjugate gradient, Convergence, Steady-state