Compiling with Continuations

Cambridge University Press, 2006 - Computers - 272 pages
This book shows how continuation-passing style is used as an intermediate representation to perform optimizations and program transformations. Continuations can be used to compile most programming languages. The method is illustrated in a compiler for the programming language Standard ML. Prior knowledge of ML, however, is not necessary, as the author carefully explains each concept as it arises. This is the first book to show how concepts from the theory of programming languages can be applied to the production of practical optimizing compilers for modern languages like ML. All the details of compiling are covered, including the interface to a runtime system and garbage collector.
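To give a flavor of the technique the book is built around (this sketch is not taken from the book itself), continuation-passing style rewrites direct-style code so that every function receives an explicit continuation argument and never "returns" in the ordinary sense; the function names below are invented for illustration. A minimal sketch in Python:

```python
# Direct style: intermediate results flow back through implicit returns,
# and the host language's call stack sequences the computation.
def add(a, b):
    return a + b

def square(x):
    return x * x

def pythagoras(a, b):
    return add(square(a), square(b))

# Continuation-passing style: each function takes an extra argument `k`
# (the continuation) and hands its result to it. Evaluation order and
# intermediate values (a2, b2) become explicit names, which is what makes
# CPS convenient as a compiler intermediate representation.
def add_cps(a, b, k):
    return k(a + b)

def square_cps(x, k):
    return k(x * x)

def pythagoras_cps(a, b, k):
    return square_cps(a, lambda a2:
           square_cps(b, lambda b2:
           add_cps(a2, b2, k)))

print(pythagoras(3, 4))                   # 25
print(pythagoras_cps(3, 4, lambda r: r))  # 25
```

Passing the identity function as the initial continuation recovers the direct-style answer; a real CPS-based compiler instead treats these explicit continuations as the program's control flow and optimizes them directly.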
  

Contents

Overview  1
1.2 Advantages of CPS  4
1.3 What is ML?  6
1.4 Compiler organization  9
Continuation-passing style  11
2.2 Functions that escape  16
2.3 Scope rules  17
2.4 Closure conversion  18
2.5 Spilling  21
Semantics of the CPS  23
ML-specific optimizations  37
4.2 Pattern matching  43
4.3 Equality  45
4.4 Unboxed updates  46
4.6 Exception declarations  49
4.7 The lambda language  50
4.8 The module system  52
Conversion into CPS  55
5.2 Records and selection  56
5.3 Primitive arithmetic operators  57
5.4 Function calls  59
5.5 Mutually recursive functions  60
5.7 Case statements  61
5.8 Exception handling  63
5.9 Call with current continuation  64
Optimization of the CPS  67
6.1 Constant folding and β-contraction  68
6.2 Eta reduction and uncurrying  76
6.3 Cascading optimizations  78
6.4 Implementation  80
Beta expansion  83
7.1 When to do inline expansion  87
7.2 Estimating the savings  89
7.3 Runaway expansion  92
Hoisting  93
8.2 Rules for hoisting  95
8.3 Hoisting optimizations  96
Common subexpressions  99
Closure conversion  103
10.1 A simple example  104
10.2 A bigger example  106
10.3 Closure-passing style  109
10.5 Closure representation  112
10.6 Callee-save registers  114
10.7 Callee-save continuation closures  119
10.8 Stack allocation of closures  122
10.9 Lifting function definitions to top level  124
Register spilling  125
11.1 Rearranging the expression  128
Space complexity  133
12.1 Axioms for analyzing space  136
12.2 Preserving space complexity  137
12.3 Closure representations  142
12.4 When to initiate garbage collection  144
The abstract machine  147
13.2 Interface with the garbage collector  148
13.3 Position-independent code  150
13.4 Special-purpose registers  151
13.5 Pseudo-operations  154
13.6 Instructions of the continuation machine  155
13.7 Register assignment  158
13.8 Branch prediction  160
13.9 Generation of abstract-machine instructions  161
13.11 Unboxed floating-point values  162
Machine-code generation  165
14.1.1 Span-dependent instructions  167
14.2 Translation for the MC68020  168
14.3 Translation for the MIPS and SPARC  169
14.3.1 PC-relative addressing  170
14.3.3 Anti-aliasing  171
14.3.4 Alternating temporaries  173
14.4 An example  174
Performance evaluation  179
15.1 Hardware  181
15.2 Measurements of individual optimizations  183
15.3 Tuning the parameters  187
15.5 Compile time  198
15.6 Comparison with other compilers  200
15.7 Conclusions  201
The runtime system  205
16.2 Breadth-first copying  206
16.3 Generational garbage collection  207
16.4 Runtime data formats  210
16.5 Big bags of pages  211
16.6 Asynchronous interrupts  212
Parallel programming  215
17.1 Coroutines and semaphores  216
17.2 Better programming models  219
17.3 Multiple processors  220
17.4 Multiprocessor garbage collection  221
Future directions  223
18.2 Type information  225
18.4 Garbage collection  227
18.5 Static single-assignment form  228
Introduction to ML  229
A.1 Expressions  231
A.2 Patterns  233
A.3 Declarations  235
A.4 Some examples  236
Semantics of the CPS  239
Obtaining Standard ML of New Jersey  245
Readings  247
Bibliography  249
Index  256
Copyright

About the author (2006)

Andrew W. Appel is the Eugene Higgins Professor and Chairman of the Department of Computer Science at Princeton University.
