High Performance Compilers for Parallel Computing

By the author of the classic 1989 monograph Optimizing Supercompilers for Supercomputers, this book covers the knowledge and skills necessary to build a competitive, advanced compiler for parallel or high-performance computers. Starting with a review of basic terms and algorithms used, such as graphs, trees, and matrix algebra, Wolfe shares the lessons of his 20 years' experience developing compiler products such as KAP, the capstone product of Kuck and Associates, Inc., of Champaign, Illinois.
Contents
BASIC GRAPH CONCEPTS
REVIEW OF LINEAR ALGEBRA
Copyright