Introduction to Parallel and Vector Solution of Linear Systems
Although the origins of parallel computing go back to the last century, it was only in the 1970s that parallel and vector computers became available to the scientific community. The first of these machines (the 64-processor Illiac IV and the vector computers built by Texas Instruments, Control Data Corporation, and then Cray Research) had a somewhat limited impact. They were few in number and available mostly to workers in a few government laboratories. By now, however, the trickle has become a flood. Over 200 large-scale vector computers are now installed, not only in government laboratories but also in universities and in an increasing diversity of industries. Moreover, the National Science Foundation's Supercomputing Centers have made large vector computers widely available to the academic community. In addition, smaller, very cost-effective vector computers are being manufactured by a number of companies. Parallelism in computers has also progressed rapidly. The largest supercomputers now consist of several vector processors working in parallel. Although the number of processors in such machines is still relatively small (up to 8), it is expected that an increasing number of processors will be added in the near future (to a total of 16 or 32). Moreover, there are myriad research projects to build machines with hundreds, thousands, or even more processors. Indeed, several companies are now selling parallel machines, some with as many as hundreds, or even tens of thousands, of processors.
1.2 Basic Concepts of Parallelism and Vectorization
1.3 Matrix Multiplication
Direct Methods for Linear Equations
2.2 Direct Methods for Parallel Computers
2.3 Banded Systems
Iterative Methods for Linear Equations
3.2 The Gauss-Seidel and SOR Iterations
3.3 Minimization Methods