How to Write Parallel Programs: A First Course

In the not-too-distant future every programmer, software engineer, and computer scientist will need to understand parallelism, a powerful and proven way to run programs fast. The authors of this straightforward tutorial explain why this is so and provide the instruction that will transform ordinary programmers into parallel programmers.

How to Write Parallel Programs focuses on programming techniques for the largest class of parallel machines: general-purpose asynchronous, or MIMD, machines. It outlines the basic parallel algorithm classes and the three basic programming paradigms, takes up the implementation techniques for these paradigms, and presents a series of case studies explaining code and discussing its measured performance. Because parallel programming requires both a computing language and a coordination language, the authors use C and Linda (a coordination language they developed) as a combination that can be implemented simply and efficiently on a wide range of machines. The techniques discussed, however, can be applied in any comparable language environment.

Nicholas Carriero is Associate Research Scientist and David Gelernter is Associate Professor in the Department of Computer Science at Yale University.

Contents: Introduction; The Three Basic Models of Parallelism; Programming Techniques for the Three Basic Models; A Simple Problem, in Detail; Case Studies; From Parallelism to Coordination; Conclusions; Appendix: Linda User's Manual.
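The computing-plus-coordination split described above is concrete enough to sketch. Below is a minimal master/worker (agenda-parallel) example in C-Linda, built only from the four standard Linda operations: out() deposits a tuple in tuple space, in() blocks until a matching tuple can be withdrawn (fields marked with ? are filled in from the match), rd() reads without removing, and eval() spawns a live tuple, i.e. a new process. This is an illustrative sketch, not code from the book: it assumes a C-Linda compiler, the real_main entry point follows the usual C-Linda convention, and NUM_TASKS, NUM_WORKERS, and compute() are hypothetical placeholders.

    #include <stdio.h>

    /* Minimal agenda-parallelism sketch in C-Linda (illustrative only).
       NUM_TASKS, NUM_WORKERS, and compute() are hypothetical names. */

    #define NUM_TASKS   100
    #define NUM_WORKERS 4

    /* Hypothetical per-task work function. */
    int compute(int task_id) { return task_id * task_id; }

    int worker(void)
    {
        int task_id;
        for (;;) {
            in("task", ? task_id);              /* withdraw one task tuple */
            if (task_id < 0)                     /* poison pill: shut down */
                break;
            out("result", task_id, compute(task_id));
        }
        return 0;
    }

    real_main(int argc, char *argv[])
    {
        int i, id, value, sum = 0;

        for (i = 0; i < NUM_WORKERS; i++)
            eval("worker", worker());            /* spawn worker processes */
        for (i = 0; i < NUM_TASKS; i++)
            out("task", i);                      /* fill the bag of tasks */
        for (i = 0; i < NUM_TASKS; i++) {
            in("result", ? id, ? value);         /* collect, in any order */
            sum += value;
        }
        for (i = 0; i < NUM_WORKERS; i++)
            out("task", -1);                     /* one pill per worker */
        printf("sum = %d\n", sum);
        return 0;
    }

Note how the tuple space itself carries the coordination: workers never name the master or each other, and because each in("task", ...) blocks until a task tuple is available, faster workers simply withdraw more tasks, giving load balance without any explicit scheduling code.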