High Performance Compilers for Parallel Computing
This work covers everything necessary to build a competitive, advanced compiler for parallel or high-performance computers. It begins with a review of basic terms and algorithms such as graphs, trees, and matrix algebra. The methods focus on analysis and synthesis: analysis extracts information from the source program, and synthesis uses that information to restructure the program for the target machine. The book also examines the restrictions and problems posed by the languages commonly used on such machines.
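One core piece of the analysis the blurb alludes to is data dependence testing. As a minimal sketch (my own illustration, not code from the book), consider a single-subscript pair in a loop: a write to A[i + a] and a read of A[i + b]. The dependence distance d = a - b tells whether the dependence is carried by the loop (d != 0), which is what blocks naive parallelization.

```python
# Hypothetical helper names; a sketch of single-subscript
# dependence-distance testing, not the book's algorithm.

def dependence_distance(write_offset, read_offset):
    """Distance d such that iteration i+d reads what iteration i wrote,
    for a write to A[i + write_offset] and a read of A[i + read_offset]."""
    return write_offset - read_offset

def loop_carried(write_offset, read_offset):
    """A nonzero distance means the dependence crosses iterations."""
    return dependence_distance(write_offset, read_offset) != 0

# A[i] = A[i-1] + c : write offset 0, read offset -1 -> distance 1 (carried)
assert loop_carried(0, -1)
# A[i] = A[i] * 2   : distance 0 -> not loop-carried, loop may run in parallel
assert not loop_carried(0, 0)
```

A distance of 1, as in the first case, means each iteration consumes the previous iteration's result, so the loop must run sequentially unless it is otherwise transformed.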
HIGH PERFORMANCE SYSTEMS
PROGRAMMING LANGUAGE FEATURES
BASIC GRAPH CONCEPTS
Common terms and phrases: active layers, algorithm, aliases, aligned, allocated, analysis, array assignment, array element, bounds, cache coherence, cache line, coefficient, column, communication, compiler, control dependence, control flow graph, data access, data dependence relations, dependence cycle, dependence distance, dependence equation, dependence graph, dimension, direction vector, distance vector, distributed, dominance frontier, dopar, edge, emit, endfor, endforall, endif, endloop, execution, fetch, forall, Fortran 90, FUD chains, global, inequalities, inner loop, integer solutions, interchanging, interprocedural, iteration space, iteration vectors, loop fission, loop limits, nested loops, node, optimizations, outer loop, parallel loop, pointer, points-to, postdominator, procedure, region, reordering, reuse factor, scalar, scheduling, sequential loop, SIMD, single statement, strip-mining, strongly connected components, template offset, tile, transformation, trip count, unimodular matrix, use-def chains, vector code, vector register
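Several of the terms above (strip-mining, tile, trip count) name loop restructurings. As a hedged illustration (my own sketch, not the book's code), strip-mining splits one loop into an outer loop over fixed-size strips and an inner loop within each strip; since the iteration order is unchanged, the transformation is always legal, and the inner strip becomes a natural unit for vectorization or tiling.

```python
# Sketch of strip-mining; function names are hypothetical.

def original(a):
    # Single loop over all n iterations.
    for i in range(len(a)):
        a[i] = a[i] + 1
    return a

def strip_mined(a, strip=4):
    n = len(a)
    for lo in range(0, n, strip):                 # outer loop over strips
        for i in range(lo, min(lo + strip, n)):   # inner loop within a strip
            a[i] = a[i] + 1
    return a

# Same result in the same order, so the transformation is always safe.
assert original(list(range(10))) == strip_mined(list(range(10)))
```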