Parallel Computer Architecture: A Hardware/Software Approach

Morgan Kaufmann Publishers, 1999 - Computers - 1025 pages

The most exciting development in parallel computer architecture is the convergence of traditionally disparate approaches on a common machine structure. This book explains the forces behind this convergence of shared-memory, message-passing, data-parallel, and data-driven computing architectures. It then examines the design issues that are critical to all parallel architectures across the full range of modern designs, covering data access, communication performance, coordination of cooperative work, and correct implementation of useful semantics. It not only describes the hardware and software techniques for addressing each of these issues but also explores how these techniques interact in the same system. Examining architecture from an application-driven perspective, it provides comprehensive discussions of parallel programming for high performance and of workload-driven evaluation, based on an understanding of hardware-software interactions.



* synthesizes a decade of research and development for practicing engineers, graduate students, and researchers in parallel computer architecture, system software, and applications development

* presents in-depth application case studies from computer graphics, computational science and engineering, and data mining to demonstrate sound quantitative evaluation of design trade-offs

* describes the process of programming for performance, including both the architecture-independent and architecture-dependent aspects, with examples and case studies

* illustrates bus-based and network-based parallel systems with case studies of more than a dozen important commercial designs


About the authors (1999)

David Culler led the Berkeley Network of Workstations (NOW) project, which sparked the current commercial revolution in high-performance clusters. Anoop Gupta co-led the Stanford DASH multiprocessor project, which developed the shared-memory technology increasingly used in commercial machines.

Jaswinder Pal Singh led the development of the SPLASH and SPLASH-2 suites of parallel programs, which have defined the workloads and methodology used to drive decisions and evaluate trade-offs in shared-memory parallel architecture.

