Distributed and Parallel Computing

The state of the art in high-performance concurrent computing -- theory and practice.
-- Detailed coverage of the growing integration between parallel and distributed computing.
-- Advanced approaches for programming distributed, parallel systems, and for adapting traditional sequential software.
-- Creating a Parallel Virtual Machine (PVM) from networked, heterogeneous systems.

This is the most up-to-date, comprehensive guide to the rapidly changing field of distributed and parallel systems.

The book begins with an introductory survey of distributed and parallel computing: its rationale and evolution. It compares and contrasts a wide variety of approaches to parallelism, from distributed computer networks, to parallelism within processors (such as Intel's MMX), to massively parallel systems. The book introduces state-of-the-art methods for programming parallel systems, including approaches to reverse engineering traditional sequential software. It includes detailed coverage of the critical scheduling problem, compares multiple programming languages and environments, and shows how to measure the performance of parallel systems. Finally, the book introduces the Parallel Virtual Machine (PVM) system for writing programs that run on a network of heterogeneous systems, the new Message Passing Interface (MPI-2) standard, and the growing role of Java in writing distributed and parallel applications.
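To give a flavor of the supervisor/worker, message-passing style of programming the book covers, here is a minimal MPI sketch in C (illustrative only, not an example from the book) in which each worker process sends its rank to a supervisor process running as rank 0:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, size;
    MPI_Init(&argc, &argv);                 /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */

    if (rank != 0) {
        /* workers: send own rank to the supervisor (rank 0) */
        MPI_Send(&rank, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    } else {
        /* supervisor: receive one message from each worker */
        int value;
        for (int i = 1; i < size; i++) {
            MPI_Recv(&value, 1, MPI_INT, MPI_ANY_SOURCE, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("supervisor received rank %d\n", value);
        }
    }
    MPI_Finalize();
    return 0;
}

Compiled with mpicc and launched with a command such as mpirun -np 4, every copy runs the same program (the SPMD pattern) and branches on its rank, which is the same model that underlies the book's MPI-2 and PVM material.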
Contents
What is distributed and parallel computing? | 1
Performance measures | 42
Distributed and parallel processors | 82
11 other sections not shown
Common terms and phrases
allocation Amdahl's Law applets application architecture array assigned benchmark binary blocking buff buff+ cache called chapter client client/server cluster communication delay Complexity analysis components compound document configuration connection CORBA create data types database datatype define distributed and parallel distributed computing distributed program distributed system dynamic example execution following function Gantt chart hypercube identifier implementation initial input integer interconnection network interface interval order iteration Java loop matrix multiplication message-passing method MIMD MPI_COMM_WORLD node nonblocking NP-complete null number of processors object operation optimal paradigm Parallaxis parallel algorithm parallel computing parallel program parameters partition performance PRAM rank send buffer sequential server shared-memory shown in figure SIMD single processor socket solve spawned specified speed speedup SPMD statement static step superscalar supervisor synchronization task graph tellers thread total number tree update vector variable virtual processors workers workstations