Introduction to Parallel and Vector Solution of Linear Systems

Although the origins of parallel computing go back to the last century, it was only in the 1970s that parallel and vector computers became available to the scientific community. The first of these machines, the 64-processor Illiac IV and the vector computers built by Texas Instruments, Control Data Corporation, and then Cray Research, had a somewhat limited impact: they were few in number and available mostly to workers in a few government laboratories. By now, however, the trickle has become a flood. There are over 200 large-scale vector computers now installed, not only in government laboratories but also in universities and in an increasing diversity of industries. Moreover, the National Science Foundation's Supercomputing Centers have made large vector computers widely available to the academic community. In addition, smaller, very cost-effective vector computers are being manufactured by a number of companies.

Parallelism in computers has also progressed rapidly. The largest supercomputers now consist of several vector processors working in parallel. Although the number of processors in such machines is still relatively small (up to 8), it is expected that an increasing number will be added in the near future (to a total of 16 or 32). Moreover, there are myriad research projects to build machines with hundreds, thousands, or even more processors. Indeed, several companies are now selling parallel machines, some with as many as hundreds, or even tens of thousands, of processors.
Contents

1  Introduction
1.2  Basic Concepts of Parallelism and Vectorization
1.3  Matrix Multiplication
2  Direct Methods for Linear Equations
2.2  Direct Methods for Parallel Computers
2.3  Banded Systems
3  Iterative Methods for Linear Equations
3.2  The Gauss-Seidel and SOR Iterations
3.3  Minimization Methods
3.4  The Preconditioned Conjugate Gradient Method
The ijk Forms of LU and Choleski Decomposition
Convergence of Iterative Methods
The Conjugate Gradient Algorithm
Basic Linear Algebra