How to Write Parallel Programs: A First Course
In the not-too-distant future, every programmer, software engineer, and computer scientist will need to understand parallelism, a powerful and proven way to run programs fast. The authors of this straightforward tutorial explain why this is so and provide the instruction that will transform ordinary programmers into parallel programmers.
How to Write Parallel Programs focuses on programming techniques for the largest class of parallel machines - general purpose asynchronous or MIMD machines. It outlines the basic parallel algorithm classes and the three basic programming paradigms, takes up the implementation techniques for these paradigms, and presents a series of case studies explaining code and discussing its measured performance. Because parallel programming requires both a computing language and a coordination language, the authors use C and Linda (a language they developed) as a combination that can be simply and efficiently implemented on a wide range of machines. The techniques discussed, however, can be applied in any comparable language environment.
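To give a flavor of the coordination model the book is built around: in Linda, processes cooperate by depositing tuples into, and withdrawing tuples from, a shared tuple space. The following is a minimal illustrative sketch (not the book's C-Linda API) that simulates the master-worker pattern in Python, using a thread-safe queue as a stand-in for a tuple space; the names `TupleSpace`, `out`, and `in_` are hypothetical analogues of Linda's operations.

```python
import queue
import threading

class TupleSpace:
    """Toy stand-in for a Linda tuple space (illustrative only)."""
    def __init__(self):
        self._q = queue.Queue()

    def out(self, tup):
        # Deposit a tuple into the space.
        self._q.put(tup)

    def in_(self):
        # Withdraw a tuple from the space (blocks if none available).
        return self._q.get()

def worker(ts, results):
    # Master-worker ("agenda") parallelism: each worker repeatedly
    # grabs a task tuple until it encounters the stop marker.
    while True:
        tag, value = ts.in_()
        if tag == "stop":
            ts.out(("stop", None))  # re-deposit so the other workers stop too
            break
        results.put(("result", value * value))

ts = TupleSpace()
results = queue.Queue()

# The master deposits task tuples, then a stop marker.
for n in range(1, 5):
    ts.out(("task", n))
ts.out(("stop", None))

threads = [threading.Thread(target=worker, args=(ts, results)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

squares = sorted(results.get()[1] for _ in range(4))
print(squares)  # [1, 4, 9, 16]
```

In the book's actual C-Linda, the analogous operations are `out`, `in`, `rd`, and `eval`, with tuples matched by field patterns rather than drawn from a FIFO queue; the sketch above captures only the master-worker structure.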
Nicholas Carriero is Associate Research Scientist and David Gelernter is Associate Professor in the Department of Computer Science at Yale University.
Contents: Introduction. The Three Basic Models of Parallelism. Programming Techniques for the Three Basic Models. A Simple Problem, in Detail. Case Studies. From Parallelism to Coordination. Conclusions. Appendix: Linda User's Manual.