## Krylov Solvers for Linear Algebraic Systems

The first four chapters of this book give a comprehensive and unified theory of Krylov methods. Many of these methods are shown to be particular instances of the block conjugate-gradient algorithm, and it is this observation that permits the unification of the theory. The two major sub-classes of these methods, the Lanczos and the Hestenes-Stiefel, are developed in parallel as natural generalisations of the OrthoDir (GCR) and OrthoMin algorithms. These are themselves based on Arnoldi's algorithm and a generalised Gram-Schmidt algorithm, and their properties, in particular their stability properties, are determined by the two matrices that define the block conjugate-gradient algorithm: the matrix of coefficients and the preconditioning matrix.

In Chapter 5 the "transpose-free" algorithms based on the conjugate-gradient-squared algorithm are presented, while Chapter 6 examines the various ways in which the QMR technique has been exploited. Look-ahead methods and general block methods are dealt with in Chapters 7 and 8, while Chapter 9 is devoted to an error analysis of two basic algorithms. In Chapter 10 the results of numerical testing of the more important algorithms in their basic forms (i.e. without look-ahead or preconditioning) are presented and related to the structure of the algorithms and the general theory. Graphs illustrating the performance of the various algorithm/problem combinations are supplied on a CD-ROM. Chapter 11, by far the longest, gives a survey of preconditioning techniques, ranging from the older ideas of polynomial, SOR and ILU preconditioning to methods such as SPAI, AINV and multigrid that were developed specifically for use with parallel computers. Chapter 12 is devoted to dual algorithms such as OrthoRes and the reverse algorithms of Hegedus.
Finally, certain ancillary matters, such as reduction to Hessenberg form, Chebychev polynomials and the companion matrix, are described in a series of appendices.

- comprehensive and unified approach
- up-to-date chapter on preconditioners
- complete theory of stability
- includes dual and reverse methods
- comparison of algorithms on CD-ROM
- objective assessment of algorithms
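As a minimal illustration of the kind of Krylov method the book surveys, here is a textbook (unpreconditioned) conjugate-gradient iteration for a symmetric positive definite system. This is a generic sketch of the standard Hestenes-Stiefel recurrence, not code from the book itself; the function name and the small test matrix are my own choices.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Textbook conjugate gradient for symmetric positive definite A."""
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x              # initial residual
    p = r.copy()               # initial search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)       # step length along p
        x += alpha * p
        r -= alpha * Ap                 # recurrence for the residual
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p   # new A-conjugate direction
        rs_old = rs_new
    return x

# Example: a small SPD tridiagonal system
A = np.diag([4.0] * 5) + np.diag([-1.0] * 4, 1) + np.diag([-1.0] * 4, -1)
b = np.ones(5)
x = conjugate_gradient(A, b)
```

In exact arithmetic each direction `p` is A-conjugate to all previous ones, which is the short-recurrence property discussed in Chapter 3; a preconditioner, the subject of Chapter 11, would replace `r` in the direction update with a preconditioned residual.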


### Contents

1 | |

21 | |

Chapter 3 The short recurrences | 43 |

Chapter 4 The Krylov aspects | 77 |

Chapter 5 Transpose-free methods | 105 |

Chapter 6 More on QMR | 117 |

Chapter 7 Look-ahead methods | 133 |

Chapter 8 General block methods | 151 |

Chapter 9 Some numerical considerations | 163 |

Chapter 10 And in practice? | 173 |

Chapter 11 Preconditioning | 193 |

Chapter 12 Duality | 279 |

Appendix A Reduction of upper Hessenberg matrix to upper triangular form | 287 |

Appendix B Schur complements | 293 |

Appendix C The Jordan form | 295 |

Appendix D Chebychev polynomials | 297 |

Appendix E The companion matrix | 299 |

Appendix F The algorithms | 301 |

Appendix G Guide to the graphs | 313 |

315 | |

327 | |



### References to this book

*Nonlinear Optimization with Engineering Applications*, Michael Bartholomew-Biggs, 2008