Introduction to Concurrency in Programming Languages

Illustrating the effect of concurrency on programs written in familiar languages, this text focuses on novel language abstractions that truly bring concurrency into the language and aid analysis and compilation tools in generating efficient, correct programs. It also explains the complexity involved in taking advantage of concurrency with regard to program correctness and performance. The book describes the historical development of current programming languages and the common threads that exist among them. It also contains several chapters on design patterns for parallel programming and includes quick reference guides to OpenMP, Erlang, and Cilk. Ancillary materials are available on the book's website.

Contents

Introduction  1
Concepts in Concurrency  17
Concurrency Control  43
The State of the Art  65
High-Level Language Constructs  85
Historical Context and Evolution of Languages  109
Modern Languages and Concurrency Constructs  149
Performance Considerations and Modern Systems  175
Introduction to Parallel Algorithms  197
Pattern: Task Parallelism  207
Pattern: Data Parallelism  233
Pattern: Recursive Algorithms  247
Pattern: Pipelined Algorithms  263
OpenMP Quick Reference  279
Erlang Quick Reference  295
Cilk Quick Reference  305
References  315
Index  323

About the authors (2009)

Matthew J. Sottile is a research associate and adjunct assistant professor in the Department of Computer and Information Sciences at the University of Oregon. He has a significant publication record in both high performance computing and scientific programming. Dr. Sottile is currently working on research in concurrent programming languages and parallel algorithms for signal and image processing in neuroscience and medical applications.

Timothy G. Mattson is a principal engineer at Intel Corporation. Dr. Mattson's noteworthy projects include the world's first TFLOP computer, OpenMP, the first generally programmable TFLOP chip (Intel's 80 core research chip), OpenCL, and pioneering work on design patterns for parallel programming.

Craig E. Rasmussen is a staff member in the Advanced Computing Laboratory at Los Alamos National Laboratory (LANL). Along with extensive publications in computer science, space plasma, and medical physics, Dr. Rasmussen is the principal developer of PetaVision, a massively parallel, spiking neuron model of visual cortex that ran at 1.14 Petaflops on LANL's Roadrunner computer in 2008.
