Type-Directed Partial Evaluation
Olivier Danvy *
Computer Science Department
Aarhus University †
(danvy@daimi.aau.dk)
Abstract
We present a strikingly simple partial evaluator, that is type-directed and reifies a compiled program into the text of a residual, specialized program. Our partial evaluator is concise (a few lines) and it handles the flagship examples of offline monovariant partial evaluation. Its source programs are constrained in two ways: they must be closed and monomorphically typable. Thus dynamic free variables need to be factored out in a "dynamic initial environment". Type-directed partial evaluation uses no symbolic evaluation for specialization, and naturally processes static computational effects.
Our partial evaluator is the part of an offline partial evaluator that residualizes static values in dynamic contexts. Its restriction to the simply typed lambda-calculus coincides with Berger and Schwichtenberg's "inverse of the evaluation functional" (LICS '91), which is an instance of normalization in a logical setting. As such, type-directed partial evaluation essentially achieves lambda-calculus normalization. We extend it to produce specialized programs that are recursive and that use disjoint sums and computational effects. We also analyze its limitations: foremost, it does not handle inductive types.
This paper therefore bridges partial evaluation and λ-calculus normalization through higher-order abstract syntax, and touches upon parametricity, proof theory, and type theory (including subtyping and coercions), compiler optimization, and run-time code generation (including decompilation). It also offers a simple solution to denotational semantics-based compilation and compiler generation.
1 Background and Introduction
Given a source program and parts of its input, a partial evaluator reduces static expressions and reconstructs dynamic expressions, producing a residual, specialized program [15, 36]. To this end, a partial evaluator needs some method for inserting (lifting) arbitrary statically-calculated

* Supported by BRICS (Basic Research in Computer Science, Centre of the Danish National Research Foundation).
† Ny Munkegade, Building 540, DK-8000 Aarhus C, Denmark. Home page: http://www.daimi.aau.dk/~danvy
Permission to make digital/hard copies of all or part of this material for personal or classroom use is granted without fee provided that the copies are not made or distributed for profit or commercial advantage, the copyright notice, the title of the publication and its date appear, and notice is given that copyright is by permission of the ACM, Inc. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires specific permission and/or fee.
POPL '96, St. Petersburg, FL, USA
© 1996 ACM 0-89791-769-3/95/01..$3.50
values into the residual program, i.e., for residualizing them (Section 1.1). We present such a method (Section 1.2); it is type-directed and we express it using Nielson and Nielson's two-level λ-calculus [44]. After a first assessment (Section 1.3), we formalize it (Section 1.4) and outline a first application: given a compiled normal form and its type, we can recover its text (Section 1.5). We implement type-directed residualization in Scheme [10] and illustrate it (Section 1.6). The restriction of type-directed residualization to the simply typed λ-calculus actually coincides with Berger and Schwichtenberg's normalization algorithm as presented in the proceedings of LICS '91 [3], and is closely related to Pfenning's normalization algorithm in Elf [46] (Section 1.7). Residualization also exhibits a normalization effect. Moving beyond the pure λ-calculus, we observe that this effect appears for constants and their operators as well (Section 1.8). We harness it to achieve partial evaluation of compiled programs (Section 2). Because this form of partial evaluation is completely directed by type information, we refer to it as type-directed partial evaluation.

Section 3 extends type-directed partial evaluation to disjoint sums. Section 4 analyzes the limitations of type-directed partial evaluation. Section 5 reviews related work, and Section 6 concludes.
1.1 The problem
Let us consider the example term

(λf. g @ (f @ d) @ f) @ (λa. a)

where the infix operator @ denotes function application (and associates, as usual, to the left). Both g and d are unknown, i.e., dynamic. d has type b1 and g has type b1 → (b1 → b1) → b2, where b1 and b2 are dynamic base types. f occurs twice in this term: as a function in an application (where its denotation λa.a could be reduced) and as an argument in a dynamic application (where its denotation λa.a should be residualized). The question is: what are the binding times of this term?
This paper addresses closed terms. Thus let us close the term above by abstracting its two free variables. For added clarity, let us also declare its types.

We want to decorate each λ-abstraction and application with static annotations (overlines) and dynamic annotations (underlines) in such a way that static reduction of the decorated term "does not go wrong" and yields a completely dynamic term. These are the usual rules of binding-time analysis, which is otherwise abundantly described in the literature [4, 6, 11, 15, 36, 37, 41, 44]. In the rest of this paper, we use Nielson and Nielson's two-level λ-calculus, which is summarized in Appendix A.
Before considering three solutions to analyzing the term above, let us mention a non-solution and why it is a non-solution.

Non-solution:

This annotation is appealing because the application of f is static (and thus will be statically reduced away), but it is incorrect because the type of g is not entirely dynamic. Thus after static reduction, the residual term is not entirely dynamic either (the inner λa : b1 . a is still static):

λg : b1 → (b1 → b1) → b2 . λd : b1 . g @ d @ (λa : b1 . a)
In Scheme, the residual program would contain a closure:
> (let* ([g (gensym! “g”)]
[d (gensym! “d’’)])
‘(lambda (,g)
(lambda (,d)
,((lsnbda (f) ‘((, g ,(f d))
,f))
(lambda (a) a)))))
(lambda (g15)
(lambda (d16)
((g15 d16) #<procedure>)))
>
In summary, the annotation is incorrect because λa : b1 . a can only be classified to be static (i.e., of type b1 → b1 with a static arrow) if f is always applied. Thus it should be classified to be dynamic (i.e., of type b1 → b1 with a dynamic arrow), as done in Solution 1.
Solution 1:

This solution is correct, but it does not give a satisfactory result: static reduction unfolds the outer call, duplicates the denotation of f, and creates an inner β-redex:

λg : b1 → (b1 → b1) → b2 . λd : b1 . g @ ((λa : b1 . a) @ d) @ (λa : b1 . a)

To remedy this shortcoming, the source program needs a binding-time improvement [36, Chapter 12], i.e., a modification of the source program to make the binding-time analysis yield better results. The particular binding-time improvement needed here is eta-expansion, as done in Solution 2.
Solution 2: Eta-expanding the second occurrence of f makes it always occur in position of application. Therefore λa : b1 . a can be classified to be static.

This annotation is correct. Static reduction yields the following term:

λg : b1 → (b1 → b1) → b2 . λd : b1 . g @ d @ (λx : b1 . x)

The result is optimal. It required, however, a binding-time improvement, i.e., human intervention on the source term.
Recent work by Malmkjær, Palsberg, and the author [18, 19] shows that binding-time improvements compensate for the lack of binding-time coercions in existing binding-time analyses. Source eta-expansion, for example, provides a syntactic representation for a binding-time coercion between a higher-order static value and a dynamic context, or conversely between a dynamic value and a static higher-order context.
Binding-time analyses therefore should produce annotated terms that include binding-time coercions, as done in Solution 3. We use a down arrow to represent the coercion of a static (overlined) value into a dynamic (underlined) expression.

Solution 3: In this solution, the coercion of f from b1 → b1 (static) to b1 → b1 (dynamic) is written ↓^{b1 → b1} f.

One possibility is to represent the coercion directly with a two-level eta-redex, to make the type structure of the term syntactically apparent [18, 19]. The result of binding-time analysis is then the same as for Solution 2.

Another possibility is to produce the binding-time coercion as such, without committing to its representation, and to leave it to the static reducer to treat this coercion appropriately. This treatment is the topic of the present paper.
In Scheme:

> (let* ([g (gensym! "g")] [d (gensym! "d")])
    `(lambda (,g)
       (lambda (,d)
         ,((lambda (f) `((,g ,(f d))
                         ,(residualize f '(a -> a))))
           (lambda (a) a)))))
(lambda (g23)
  (lambda (d24)
    ((g23 d24) (lambda (x25) x25))))
>
Specifically, this paper is concerned with the residualization of static values in dynamic contexts, in a type-directed fashion. The solution to this seemingly insignificant problem turns out to be widely applicable.
1.2 Type-directed residualization

We want to map a static value into its dynamic counterpart, given its type. The leaves of a type are dynamic and ground. In the rest of the paper, types where all constructors are static (resp. dynamic) are said to be "completely static" (resp. "completely dynamic").

At base type, residualization acts as the identity function:

↓^b v = v

Residualizing a value of product type amounts to residualizing each of its components and then reconstructing the product:

↓^{t1 × t2} v = pair(↓^{t1} fst v, ↓^{t2} snd v)

One can infer from Solution 3 that

↓^{b1 → b1} v = λx1. v @ x1

(where x1 is fresh) and more generally that

↓^{b → t} v = λx. ↓^t (v @ x).
At higher types, the fresh variable needs to be coerced from dynamic to static. A bit of practice with two-level eta-expansion [16, 17, 18] makes it clear that, for example:

↓^{(b1 → b2) → b3} v = λx1. v @ (λx3. x1 @ x3)

It is therefore natural to define a function ↑ that is symmetric to ↓, i.e., that coerces its argument from dynamic to static, and to define the residualization of functions as follows:

↓^{t1 → t2} v = λx1. ↓^{t2} (v @ (↑^{t1} x1))

The functions ↓ and ↑ essentially match the insertion of two-level eta-redexes for binding-time improvements [18, 19]. Figure 1 displays the complete definition of residualization.

Type-directed residualization maps a completely static two-level λ-term into a completely dynamic one. First, reify (↓) and reflect (↑) fully eta-expand a static two-level λ-term with two-level eta-redexes. Then, static reduction evaluates all the static parts and reconstructs all the dynamic parts, yielding the residual term.
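The two coercions above can be sketched directly in Python (a hedged translation of the paper's Scheme code; the list-based representation of residual terms and the helpers fresh and residualize are our own assumptions, not the paper's):

```python
import itertools

# Fresh-name supply, playing the role of the paper's gensym!.
_names = itertools.count()

def fresh():
    return f"x{next(_names)}"

def reify(t, v):
    """The paper's down arrow: map a static value v of type t to residual syntax."""
    tag = t[0]
    if tag == "base":
        return v                                    # at base type: identity
    if tag == "func":
        _, t1, t2 = t
        x = fresh()                                 # build a dynamic lambda
        return ["lambda", x, reify(t2, v(reflect(t1, x)))]
    if tag == "prod":
        _, t1, t2 = t                               # residualize both components
        return ["cons", reify(t1, v[0]), reify(t2, v[1])]

def reflect(t, e):
    """The paper's up arrow: map residual syntax e to a static value of type t."""
    tag = t[0]
    if tag == "base":
        return e
    if tag == "func":
        _, t1, t2 = t                               # wrap e in a static function
        return lambda v1: reflect(t2, [e, reify(t1, v1)])
    if tag == "prod":
        _, t1, t2 = t
        return (reflect(t1, ["car", e]), reflect(t2, ["cdr", e]))

def residualize(v, t):
    global _names
    _names = itertools.count()                      # reset the gensym counter
    return reify(t, v)

b = ("base", "b")
result = residualize(lambda x: x, ("func", b, b))
print(result)   # ['lambda', 'x0', 'x0']
```

Static reduction here is ordinary Python application (the call v(...)), just as static reduction in the paper is ordinary Scheme application of compiled closures.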
1.3 Assessment

So far, we have seen (1) the need for binding-time coercions at higher types in a partial evaluator; (2) the fact that these binding-time coercions can be represented as two-level eta-redexes; and (3) an interpreter for these binding-time coercions, i.e., type-directed residualization.

Let us formalize type-directed residualization, and then describe a first application.
1.4 Formalization

Proposition 1 In Figure 1, reify maps a simply typed completely static λ-term into a well-typed two-level λ-term.

Proof: by structural induction on the types (see Appendix A for the notion of well-typing).

Property 1 In the simply typed case, static reduction in the two-level λ-calculus enjoys both strong normalization and subject reduction [44].

Corollary 1 Static reduction after reification (see Figure 1) does not go wrong and yields completely dynamic terms.
t ∈ Type  ::= b | t1 → t2 | t1 × t2
v ∈ Value ::= c | x | λx.v | v0 @ v1 | pair(v1, v2) | fst v | snd v
e ∈ Expr  ::= c | x | λx.e | e0 @ e1 | pair(e1, e2) | fst e | snd e

reify = λt. λv : t. ↓^t v                          : Type → Value → TLT
  ↓^b v = v
  ↓^{t1 → t2} v = λx1. ↓^{t2} (v @ (↑^{t1} x1))    where x1 is fresh
  ↓^{t1 × t2} v = pair(↓^{t1} fst v, ↓^{t2} snd v)

reflect = λt. λe : t. ↑^t e                        : Type → Expr → TLT
  ↑^b e = e
  ↑^{t1 → t2} e = λv1. ↑^{t2} (e @ (↓^{t1} v1))
  ↑^{t1 × t2} e = pair(↑^{t1} fst e, ↑^{t2} snd e)

residualize = statically-reduce ∘ reify            : Type → Value → Expr

Since the definition of type-directed residualization is based solely on the structure of types, we have omitted type annotations. The domains Value and Expr are defined inductively, following the structure of types, and starting from the same set of (dynamic) base types. TLT is the domain of (well-typed) two-level terms; it contains both Value and Expr.

The down arrow is read reify: it maps a static value and its type into a two-level λ-term that statically reduces to the dynamic counterpart of this static value. Conversely, the up arrow is read reflect: it maps a dynamic expression into a two-level λ-term representing the static counterpart of this dynamic expression.

In residualize, reify (resp. reflect) is applied to types occurring positively (resp. negatively) in the source type.

N.B. One could be tempted to define

eval = dynamically-reduce ∘ reflect

by symmetry, but the result is not an evaluation functional because base types are dynamic.

Figure 1: Type-directed residualization
Corollary 2 The type of a source term and the type of a residual term have the same shape (i.e., erasing their annotations yields the same simple type).

This last property extends to terms that are in long βη-normal form [30], i.e., to normal forms that are completely eta-expanded. By Proposition 1, we already know that static reduction of a completely static term in long βη-normal form yields a completely dynamic term in long βη-normal form. We have independently proven the following proposition.
(define-record (Base base-type))
(define-record (Func domain range))
(define-record (Prod type type))

(define residualize
  (lambda (v t)
    (letrec ([reify (lambda (t v)
                      (case-record t
                        [(Base -)
                         v]
                        [(Func t1 t2)
                         (let ([x1 (gensym!)])
                           `(lambda (,x1)
                              ,(reify t2 (v (reflect t1 x1)))))]
                        [(Prod t1 t2)
                         `(cons ,(reify t1 (car v)) ,(reify t2 (cdr v)))]))]
             [reflect (lambda (t e)
                        (case-record t
                          [(Base -)
                           e]
                          [(Func t1 t2)
                           (lambda (v1)
                             (reflect t2 `(,e ,(reify t1 v1))))]
                          [(Prod t1 t2)
                           (cons (reflect t1 `(car ,e))
                                 (reflect t2 `(cdr ,e)))]))])
      (begin
        (reset-gensym!)
        (reify (parse-type t) v)))))

Figure 2: Type-directed partial evaluation in Scheme
Proposition 2 Residualizing a completely static term in long βη-normal form yields a term with the same shape (i.e., erasing the annotations of both terms yields the same simply typed λ-term, modulo α-renaming).

In other words, residualization preserves both the shape of types and the shape of expressions that are in long βη-normal form.

1.5 Application

We can represent a completely static expression with a compiled representation of this expression, and a completely dynamic expression with a (compiled) program constructing the textual representation of this expression.¹ In Consel's partial evaluator Schism, for example, this representation is used to optimize program specialization [13]. Since (1) reification amounts to mapping this static expression into a two-level term and (2) static reduction amounts to running both the static and the dynamic components of this two-level term, type-directed residualization constructs the textual representation of the original static expression. Therefore, in principle, we can map a compiled program back into its text under the restrictions that (1) this program terminates, and (2) it has a type.

The following section illustrates this application.

1.6 Type-directed residualization in Scheme

Figure 2 displays an implementation of type-directed residualization in Scheme, using a syntactic extension for declaring and using records [10]. Procedure parse-type maps the concrete syntactic representation of a type (an S-expression) into the corresponding abstract syntactic representation (a nested record structure). For example, '((A * B) -> C) is mapped into (make-Func (make-Prod (make-Base 'A) (make-Base 'B)) (make-Base 'C)).

¹The same situation occurs with interpreted instead of compiled representations, i.e., if one uses an interpreter instead of a compiler.

The following Scheme session illustrates this implementation:
> (define S (lambda (f)
              (lambda (g)
                (lambda (x)
                  ((f x) (g x))))))
> S
#<procedure S>
> (residualize S '((A -> B -> C) -> (A -> B) -> A -> C))
(lambda (x0)
  (lambda (x1)
    (lambda (x2)
      ((x0 x2) (x1 x2)))))
> (define I*K (cons (lambda (x) x)
                    (lambda (y) (lambda (z) y))))
> I*K
(#<procedure> #<procedure>)
> (residualize I*K '((A -> A) * (B -> C -> B)))
(cons (lambda (x0) x0)
      (lambda (x1) (lambda (x2) x1)))
>

S and I*K denote values that we wish to residualize. We know their type. Procedure residualize maps these (static, compiled) values and a representation of their type into the corresponding (dynamic, textual) representation of these values.
At this point, a legitimate question arises: how does this really work? Let us consider the completely static expression λx.x together with the type b → b. This expression is mapped into the following eta-expanded term:

λx. (λx. x) @ x

Static β-reduction yields the completely dynamic residual term

λx. x

which constructs the text of the static expression we started with.
Similarly, the S combinator is mapped into the term

λa. λb. λc. S @ (λd. λe. (a @ d) @ e) @ (λf. b @ f) @ c

which statically β-reduces to the completely dynamic residual term

λa. λb. λc. (a @ c) @ (b @ c).
Let us conclude with a remark: because residual terms are eta-expanded, refining the type parameter yields different residual programs, as in the following examples.

> (residualize (lambda (x) x) '((a * b) -> a * b))
(lambda (x0) (cons (car x0) (cdr x0)))
> (residualize (lambda (x) x)
               '(((A -> B) -> C) -> (A -> B) -> C))
(lambda (x0) (lambda (x1) (x0 (lambda (x2) (x1 x2)))))
>
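This type-refinement effect is easy to replay in a small Python sketch (our own list-based syntax and helper names, not the paper's code; reify and reflect follow Figure 1 restricted to arrows and products):

```python
_counter = 0

def fresh():
    # fresh residual variable names, like the paper's gensym!
    global _counter
    name = f"x{_counter}"
    _counter += 1
    return name

def reify(t, v):
    # t is ("base", name), ("func", t1, t2), or ("prod", t1, t2)
    if t[0] == "base":
        return v
    if t[0] == "func":
        x = fresh()
        return ["lambda", x, reify(t[2], v(reflect(t[1], x)))]
    return ["cons", reify(t[1], v[0]), reify(t[2], v[1])]

def reflect(t, e):
    if t[0] == "base":
        return e
    if t[0] == "func":
        return lambda v1: reflect(t[2], [e, reify(t[1], v1)])
    return (reflect(t[1], ["car", e]), reflect(t[2], ["cdr", e]))

def residualize(v, t):
    global _counter
    _counter = 0
    return reify(t, v)

a, b = ("base", "a"), ("base", "b")
ab = ("func", a, b)
c = ("base", "c")

# The same identity function, residualized at two refined types:
r1 = residualize(lambda v: v, ("func", ("prod", a, b), ("prod", a, b)))
r2 = residualize(lambda v: v, ("func", ("func", ab, c), ("func", ab, c)))
print(r1)   # ['lambda', 'x0', ['cons', ['car', 'x0'], ['cdr', 'x0']]]
print(r2)   # ['lambda', 'x0', ['lambda', 'x1', ['x0', ['lambda', 'x2', ['x1', 'x2']]]]]
```

The identity is eta-expanded according to the type it is residualized at, mirroring the two Scheme answers above.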
1.7 Strong normalization

The algorithm of type-directed residualization is actually well known. In their paper "An Inverse of the Evaluation Functional for Typed λ-Calculus" [3], Berger and Schwichtenberg present a normalization algorithm for the simply typed λ-calculus. It is used for normalizing proofs as programs.

Berger and Schwichtenberg's algorithm coincides with the restriction of type-directed residualization to the simply typed λ-calculus. Reify maps a (semantic, meta-level) value and its type into a (syntactic, object-level) representation of this value ("syntactic" in the sense of "abstract-syntax tree"), and conversely, reflect maps a syntactic representation into the corresponding semantic value. Disregarding the dynamic base types, reflect thus acts as an evaluation functional, and reify acts as its inverse, hence probably the title of Berger and Schwichtenberg's paper [3].

In the implementation of his Elf logic programming language [46], Pfenning uses a similar normalization algorithm to test extensional equality, though with no static/dynamic notion and also with the following difference.
When processing an arrow type whose co-domain is itself an arrow type, the function is processed en bloc with all its arguments:

↓^{t1 → t2 → … → tn+1} v = λx1. λx2. … λxn. ↓^{tn+1} (v @ (↑^{t1} x1) @ (↑^{t2} x2) @ … @ (↑^{tn} xn))

where tn+1 is not an arrow type and x1, …, xn are fresh. (The algorithm actually postpones reflection until reification reaches a base type.)
Residualization also exhibits a normalization effect, as illustrated below: we residualize the result of applying a procedure to an argument. This program contains an application and this application is performed at residualization time.²

> (define foo (lambda (f) (lambda (x) (f x))))
> (residualize (foo (lambda (z) z)) '(A -> A))
(lambda (x0) x0)
>

²Or, viewing residualization as a form of decompilation (an analogy due to Goldberg [27]), "at decompile time".
The same legitimate question as before arises: how does this really work? Let us consider the completely static expression

(λf. λx. f @ x) @ (λz. z)

This expression is mapped into the following eta-expanded term:

λy. ((λf. λx. f @ x) @ (λz. z)) @ y

Static β-reductions yield the completely dynamic residual term

λy. y.
As an exercise, the curious reader might want to run the residualizer on the term S @ K @ K with respect to the type b → b. The combinators S and K are defined as usual [2, Definition 5.1.8, Item (i)].
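For readers who want to check the exercise, here is a minimal Python rendition restricted to arrow types (the list-based syntax and helper names are our own): residualizing S K K at type b → b indeed normalizes it to the identity.

```python
_counter = 0

def fresh():
    global _counter
    name = f"x{_counter}"
    _counter += 1
    return name

def reify(t, v):
    # a type is "b" (base) or a pair (t1, t2) standing for t1 -> t2
    if t == "b":
        return v
    t1, t2 = t
    x = fresh()
    return ["lambda", x, reify(t2, v(reflect(t1, x)))]

def reflect(t, e):
    if t == "b":
        return e
    t1, t2 = t
    return lambda v: reflect(t2, [e, reify(t1, v)])

# S and K as ordinary compiled Python closures:
S = lambda f: lambda g: lambda x: (f(x))(g(x))
K = lambda x: lambda y: x

result = reify(("b", "b"), S(K)(K))
print(result)   # ['lambda', 'x0', 'x0'] -- S K K has normalized to the identity
```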
1.8 Beyond the pure λ-calculus

Moving beyond the pure λ-calculus, let us reiterate this last experiment: we residualize the result of applying a procedure to an argument, and a multiplication is computed at residualization time.

> (define bar (lambda (x) (lambda (k) (k (* x 5)))))
> (residualize (bar 100) '((Int -> Ans) -> Ans))
(lambda (x0) (x0 500))
>
The usual legitimate question arises: how does this really work? Let us consider the completely static expression

(λx. λk. k @ (x × 5)) @ 100

This expression is mapped into the following eta-expanded term:

λa. ((λx. λk. k @ (x × 5)) @ 100) @ (λn. a @ n)

Static β-reduction leads to

λa. a @ (100 × 5)

which is statically δ-reduced to the residual term λa. a @ 500.
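The same computation can be replayed in a few lines of Python (a sketch with our own list-based syntax; bar and the type follow the session above): the multiplication is carried out while the residual term is being built.

```python
_counter = 0

def fresh():
    global _counter
    name = f"x{_counter}"
    _counter += 1
    return name

def reify(t, v):
    # a type is a base-type name (a string) or a pair (t1, t2) meaning t1 -> t2
    if isinstance(t, str):
        return v
    t1, t2 = t
    x = fresh()
    return ["lambda", x, reify(t2, v(reflect(t1, x)))]

def reflect(t, e):
    if isinstance(t, str):
        return e
    t1, t2 = t
    return lambda v: reflect(t2, [e, reify(t1, v)])

bar = lambda x: lambda k: k(x * 5)    # the session's bar, as a Python closure

# residualize (bar 100) at type (Int -> Ans) -> Ans:
result = reify((("Int", "Ans"), "Ans"), bar(100))
print(result)   # ['lambda', 'x0', ['x0', 500]] -- 100 * 5 was computed statically
```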
Remark:
Introducing a static fixed-point operator does not
compromise the subject-reduction property, so the second
part of Corollary 1 in Section 1.4 can be rephrased with the
proviso “if static reduction terminates”.
1.9 This paper

The fact that arbitrary static reductions can occur at residualization time suggests that residualization can be used as a full-fledged partial evaluator for closed compiled programs, given their type. In the following section, we apply it to various examples that have been presented as typical or even significant achievements of partial evaluation in the literature [15, 33, 36]. These examples include the power and the format source programs, and interpreters for Paulson's imperative language Tiny and for the λ-calculus.

The presentation of each example is structured as follows:

- we consider interpreter-like programs, i.e., programs where one argument determines a part of the control flow (Abelson, [24, Foreword]);
> (define power
    (lambda (x n)
      (letrec ([loop (lambda (n)
                       (cond [(zero? n) 1]
                             [(odd? n) (* x (loop (1- n)))]
                             [else (sqr (loop (/ n 2)))]))])
        (loop n))))
> (define sqr (lambda (x) (* x x)))
> (power 2 10)
1024
> (define power-abstracted
    ;;; Int -> (Int -> Int) * (Int * Int => Int) => Int -> Int
    (lambda (n)
      (lambda (sqr *)
        (lambda (x)
          (letrec ([loop (lambda (n)
                           (cond [(zero? n) 1]
                                 [(odd? n) (* x (loop (1- n)))]
                                 [else (sqr (loop (/ n 2)))]))])
            (loop n))))))
> (((power-abstracted 10) sqr *) 2)
1024
> (residualize (power-abstracted 10)
               '((Int -> Int) * (Int * Int => Int) => Int -> Int))
(lambda (x0 x1) (lambda (x2) (x0 (x1 x2 (x0 (x0 (x1 x2 1)))))))
> (((lambda (x0 x1) (lambda (x2) (x0 (x1 x2 (x0 (x0 (x1 x2 1))))))) sqr *) 2)
1024
>

The residualized code reads better after α-renaming. It is the specialized version of power when n is set to 10:

(lambda (sqr *)
  (lambda (x)
    (sqr (* x (sqr (sqr (* x 1)))))))

N.B. For convenience, our implementation of residualize, unlike the simpler version shown in Figure 2, handles Scheme-style uncurried n-ary procedures. Their types are indicated in type expressions by "=>" preceded by the n-ary product of the argument types.

Figure 3: Type-directed partial evaluation of power (an interactive session with Scheme)
- we residualize the result of applying these (separately compiled) programs to the corresponding argument.

Because residualization is type-directed, we need to know the type of the free variables in the residual program. We will routinely abstract them in the source program, as a form of "initial runtime environment", hence making the residual program a closed λ-term.

2 Type-Directed Partial Evaluation

The following examples illustrate that residualization yields specialized programs, under the condition that the residual program is a simply typed combinator, i.e., with no free variables and with a simple type. The static parts of the source program, however, are less constrained than when using a partial evaluator: they can be untyped and impure. In that sense it is symmetric to a partial evaluator such as Gomard and Jones's λ-Mix [35, 36] that allows dynamic computations to be untyped but requires static computations to be typed.³ In any case, residualization produces the same result as conventional partial evaluation (i.e., a specialized program) but is naturally more efficient since no program analysis other than type inference and no symbolic interpretation take place.

³λ-Mix and type-directed partial evaluation both consider closed source programs. They work alike for typed source programs whose binding times have been improved by source eta-expansion.

2.1 Power

Figure 3 displays the usual definition of the power procedure in Scheme, and its abstracted counterpart where we have factored out the residual operators sqr and *. The figure illustrates that residualizing the partial application of power to an exponent yields the specialized version of power with respect to this exponent.
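A curried Python analogue of Figure 3 makes the specialization concrete (a sketch only: the paper's version is uncurried Scheme with "=>" types, whereas here sqr and mul are curried dynamic operators and residual terms are nested lists of our own devising):

```python
_counter = 0

def fresh():
    global _counter
    name = f"x{_counter}"
    _counter += 1
    return name

def reify(t, v):
    # a type is "Int" or a pair (t1, t2) meaning t1 -> t2
    if isinstance(t, str):
        return v
    t1, t2 = t
    x = fresh()
    return ["lambda", x, reify(t2, v(reflect(t1, x)))]

def reflect(t, e):
    if isinstance(t, str):
        return e
    t1, t2 = t
    return lambda v: reflect(t2, [e, reify(t1, v)])

def power_abstracted(n0):
    # sqr : Int -> Int and mul : Int -> Int -> Int are dynamic operators
    def with_sqr(sqr):
        def with_mul(mul):
            def body(x):
                def loop(n):                 # recursion on the static exponent
                    if n == 0:
                        return 1
                    if n % 2 == 1:
                        return mul(x)(loop(n - 1))
                    return sqr(loop(n // 2))
                return loop(n0)
            return body
        return with_mul
    return with_sqr

i = "Int"
t = ((i, i), ((i, (i, i)), (i, i)))   # (Int->Int) -> (Int->Int->Int) -> Int -> Int
result = reify(t, power_abstracted(10))
print(result)
# ['lambda', 'x0', ['lambda', 'x1', ['lambda', 'x2',
#   ['x0', [['x1', 'x2'], ['x0', ['x0', [['x1', 'x2'], 1]]]]]]]]
```

Reading x0 as sqr, x1 as mul, and x2 as x, the residual term is exactly the paper's (sqr (* x (sqr (sqr (* x 1))))): the loop over the exponent 10 has been fully unfolded at residualization time.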
2.2 Format

For lack of space, we omit the classical example of partial evaluation: formatting strings. Its source code can be found in Figure 1 of Consel and Danvy's tutorial notes on partial evaluation at POPL '93 [15]. Type-directed partial evaluation yields the same residual code as the one presented in the tutorial notes (modulo of course the factorization of the residual operators write-string, write-number, write-newline, and list-ref).
2.3 Definitional interpreter for Paulson's Tiny language

Recursive procedures can be defined with fixed-point operators. This makes it simple to residualize recursive procedures by abstracting their (typed) fixed-point operator.

As an example, let us consider Paulson's Tiny language [45], which is a classical example in partial evaluation [6, 8, 9, 11, 14, 35, 36, 37, 41, 48]:
block res, val, aux
in val := read ; aux := 1 ;
   while val > 0 do
     aux := aux * val ; val := val - 1
   end ;
   res := aux
end

Figure 4: Source factorial program
(lambda (add sub mul eq gt read fix true? lookup update)
  (lambda (k8)
    (lambda (s9)
      (read (lambda (v10)
        (update 1 v10 s9 (lambda (s11)
          (update 2 1 s11 (lambda (s12)
            ((fix (lambda (while)
                    (lambda (s14)
                      (lookup 1 s14 (lambda (v15)
                        (gt v15 0 (lambda (v16)
                          (true? v16
                            (lambda (s17)
                              (lookup 2 s17 (lambda (v18)
                                (lookup 1 s17 (lambda (v19)
                                  (mul v18 v19 (lambda (v20)
                                    (update 2 v20 s17 (lambda (s21)
                                      (lookup 1 s21 (lambda (v22)
                                        (sub v22 1 (lambda (v23)
                                          (update 1 v23 s21 (lambda (s24)
                                            (while s24)))))))))))))))))
                            (lambda (s25)
                              (lookup 2 s25 (lambda (v26)
                                (update 0 v26 s25 (lambda (s27)
                                  (k8 s27))))))
                            s14))))))))
             s12))))))))))

This residual program is a specialized version of the Tiny interpreter (Figures 9 and 10) with respect to the source program of Figure 4. As can be observed, it is a continuation-passing Scheme program threading the store throughout. The while loop of Figure 4 has been mapped into a fixed-point declaration. All the location offsets have been computed at partial-evaluation time.

Figure 5: Residual factorial program (after α-renaming and "pretty" printing)
(define instantiate-type
  (lambda (t)
    `(((() => Exp) -> Exp) *          ;;; reset-gensym-c
      ((Str -> Exp) -> Exp) *         ;;; gensym-c
      (Exp -> Exp) *                  ;;; unparse-expression
      (Str -> Var) *                  ;;; make-Var
      (Str * Exp => Exp) *            ;;; make-Lam
      (Exp * Exp => Exp) *            ;;; make-App
      (Exp * Exp => Exp) *            ;;; make-Pair
      (Exp -> Exp) *                  ;;; make-Fst
      (Exp -> Exp)                    ;;; make-Snd
      => ,t
      -> Exp)))

Figure 6: Type construction for self-application
(pgm) ::= (name)* (cmd)
(cmd) ::= skip | (cmd) ; (cmd) | (ide) := (exp) |
          if (exp) then (cmd) else (cmd) |
          while (exp) do (cmd) end
(exp) ::= (int) | (ide) | (exp) (op) (exp) | read
(op)  ::= + | - | * | = | >
It is a simple exercise (see Figures 9 and 10 in appendix) to write the corresponding definitional interpreter, to apply it to, e.g., the factorial program (Figure 4), and to residualize the result (Figure 5).

Essentially, type-directed partial evaluation of the Tiny interpreter acts as a front-end compiler that maps the abstract syntax of a source program into a λ-expression representing the dynamic semantics of this program [14]. This λ-expression is in continuation-passing style [49], i.e., in three-address code.

We have extended the definitional interpreter described in this section to richer languages, including typed higher-order procedures, block structure, and subtyping, à la Reynolds [47]. Thus this technique of "type-directed compilation" scales up in practice. In that sense, type-directed partial evaluation provides a simple and effective solution to (denotational) semantics-directed compilation in the λ-calculus [32, 43].
2.4 Residualizing the residualizer

To visualize the effect of residualization, one can residualize the residualizer with respect to a type. As a first approximation, given a type t, we want to evaluate

(residualize
  (lambda (v) (residualize v t))
  t)

To this end, we first need to define an abstracted version of the residualizer (with no free variables). We need to factor out all the abstract-syntax constructors, the unparser,⁴ and the gensym paraphernalia, which we make continuation-passing to ensure that new symbols will be generated correctly at run time. To be precise:
(define abstract-residualize
  (lambda (t)
    (lambda (reset-gensym-c
             gensym-c
             unparse-expression
             make-Var
             make-Lam make-App
             make-Pair make-Fst make-Snd)
      (lambda (v)
        (letrec ([reify ...]      ;;; as in
                 [reflect ...])   ;;; Figure 2
          (reset-gensym-c
            (lambda ()
              (unparse-expression
                (reify (parse-type t) v)))))))))

The type of abstract-residualize is a dependent type in that the value of t denotes a representation of the type of v. Applying abstract-residualize to a representation of a type, however, yields a simply typed value. We can then write a

⁴Figure 2 uses quasiquote and unquote for readability, thus avoiding the need for an unparser.
> (define meaning-expr-cps-cbv
    (lambda (e)
      (letrec ([meaning
                (lambda (e r)
                  (lambda (k)
                    (case-record e
                      [(Var i)
                       (k (r i))]
                      [(Lam i e)
                       (k (lambda (v)
                            (meaning e (lambda (j)
                                         (if (equal? i j) v (r j))))))]
                      [(App e0 e1)
                       ((meaning e0 r)
                        (lambda (v0)
                          ((meaning e1 r)
                           (lambda (v1)
                             ((v0 v1) k)))))])))])
        (meaning (parse-expression e)
                 (lambda (i)
                   (error 'init-env "undeclared identifier: ~s" i))))))
> (define meaning-type-cps-cbv
    (lambda (t)
      (letrec ([computation
                (lambda (t)
                  (make-Func (make-Func (value t) (make-Base 'Ans))
                             (make-Base 'Ans)))]
               [value
                (lambda (t)
                  (case-record t
                    [(Base -)
                     t]
                    [(Func t1 t2)
                     (make-Func (value t1) (computation t2))]))])
        (unparse-type (computation (parse-type t))))))
> (residualize (meaning-expr-cps-cbv '(lambda (x) x))
               (meaning-type-cps-cbv '(a -> a)))
(lambda (x0) (x0 (lambda (x1) (lambda (x2) (x2 x1)))))
>
N.B. The interpreter is untyped and thus we can only residualize interpreted terms that are closed and simply typed. Untyped or polymorphically typed terms, for example, are out of reach.

Figure 7: Type-directed partial evaluation of a call-by-value CPS λ-interpreter
procedure instantiate-type that maps the representation of the input type to a representation of the type of that simply typed value (see Figure 6).
We are now ready for self-application with respect to a type t:

(residualize (abstract-residualize t) (instantiate-type t))
The result is the text of a Scheme procedure. Applying this procedure to the initial environment of the residualizer (i.e., the abstract-syntax constructors, etc.) and then to a compiled version of an expression of type t yields the text of that expression.
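Schematically, this use of the generated text can be rendered as the following hypothetical session, where the name gen is illustrative: we assume that the text produced by self-application at the type '(a -> a) has been read back in as the procedure gen.

```scheme
;; gen denotes the text produced by self-application at '(a -> a)
;; (a hypothetical name, for illustration): applying it to the
;; initial environment of the residualizer and then to a compiled
;; value of that type yields the text of that value.
((gen reset-gensym-c gensym-c unparse-expression
      make-Var make-Lam make-App make-Pair make-Fst make-Snd)
 (lambda (x) x))
;; yields the text (lambda (x0) x0)
```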
Self-application eliminates the overhead of interpreting
the type of a source program.
For example, let us consider the S combinator of Section 1.6. Residualizing the residualizer with respect to its type essentially yields the eta-expanded two-level version we wrote in Section 1.6 to visualize the residualization of S.
For another example, we can consider the Tiny interpreter of Section 2.3. Residualizing the residualizer with respect to its type (see Figure 9) yields the text of a Tiny compiler (whose run-time support includes the Tiny interpreter).
2.5 The art of the λ-interpreter
We consider various λ-interpreters and residualize their application to a λ-term. The running question is as follows: which type should drive residualization?

Direct style: For a direct-style interpreter, the type is the same as the type of the interpreted λ-term and the residual term is structurally equivalent to the interpreted λ-term [36, Section 7.4].
Continuation-passing style: For a continuation-passing interpreter, the type is the CPS counterpart of the type of the interpreted λ-term and the residual term is the CPS counterpart of the interpreted λ-term for each possible continuation-passing style [28]. Figure 7 illustrates the point for left-to-right call-by-value.

Other passing styles: The same technique applies for store-passing, etc., interpreters, be they direct or continuation-passing, and in particular for interpreters that simulate lazy evaluation with thunks.
"Monadic" style: We cannot, however, specialize a "monadic" interpreter with respect to a source program, because the residual program is parameterized with polymorphic functions [42, 50] and these polymorphic functions do not have a simple type. Thus monadic interpreters provide an example where traditional partial evaluation wins over type-directed partial evaluation.
2.6 Static computational effects
It is simple to construct a program that uses computational effects (assignments, I/O, or call/cc) statically, and that type-directed partial evaluation specializes successfully, something that comes for free here but that (for better or for worse) no previous partial evaluator does. We come back to this point in Section 4.4.
3 Disjoint Sums
Let us extend the language of Figure 1 with disjoint sums and booleans. (Booleans are included for pedagogical value.)
Reifying a disjoint-sum value is trivial: a static case dispatch on the value selects the corresponding residual injection (see Figure 8).

Reflecting upon a disjoint-sum expression is more challenging. By symmetry, we would like to write

  ↑_{t1+t2} e = case e of inleft(x1)  ⇒ inleft(↑_{t1} x1)
                        | inright(x2) ⇒ inright(↑_{t2} x2)
                end

(where x1 and x2 are fresh) but this would yield ill-typed two-level λ-terms, as in the non-solution of Section 1.1. Static values would occur in conditional branches and dynamic conditional expressions would occur in static contexts: a clash at higher types.

The symmetric definition requires us to supply the context of reflection (which is expecting a static value) both with an appropriate left value and an appropriate right value, and then to construct the corresponding residual case expression. Unless the source term is tail-recursive, we thus need to abstract and to relocate this context.
Context abstraction is achieved with a control operator. This context, however, needs to be delimited, which rules out call/cc [10] but invites one to use shift and reset [16, 17] (though of course any other delimited control operator could do as well [21]).5 The extended residualizer is displayed in Figure 8.
The following Scheme session illustrates this extension.

> (residualize (lambda (x) x) '((A + B) -> (A + B)))
(lambda (x0) (case-record x0
               [(Left x1) (make-Left x1)]
               [(Right x2) (make-Right x2)]))
> (residualize (lambda (x) 42) '(Bool -> Int))
(lambda (x0) (if x0 42 42))
> (residualize
    (lambda (call/cc fix
             null? zero? * car cdr)
      (lambda (xs)
        (call/cc
          (lambda (k)
            ((fix
               (lambda (m)
                 (lambda (xs)
                   (if (null? xs)
                       1
                       (if (zero? (car xs))
                           (k 0)
                           (* (car xs)
                              (m (cdr xs))))))))
             xs)))))
5 An overview of shift and reset can be found in Appendix B.
t ∈ Type  ::= b | t1 → t2 | t1 × t2 | t1 + t2 | Bool
v ∈ Value ::= c | x | λx:t. v | v0 @ v1 |
              pair(v1, v2) | fst v | snd v |
              inleft(v) | inright(v) |
              case v of inleft(x1)  ⇒ v1
                      | inright(x2) ⇒ v2
              end
e ∈ Expr  ::= c | x | λx:t. e | e0 @ e1 |
              pair(e1, e2) | fst e | snd e |
              inleft(e) | inright(e) |
              case e of inleft(x1)  ⇒ e1
                      | inright(x2) ⇒ e2
              end

reify = λt. λv:t. ↓^t v   :  Type → Value → TLT

  ↓^{t1+t2} v = case v of inleft(v1)  ⇒ inleft(↓^{t1} v1)
                        | inright(v2) ⇒ inright(↓^{t2} v2)
                end
  ↓^{Bool} v  = if v then true else false

reflect = λt'. λt. λe:t. ↑_t^{t'} e   :  Type → Type → Expr → TLT

  ↑_{t1×t2}^{t'} e = pair(↑_{t1}^{t'} fst e, ↑_{t2}^{t'} snd e)
  ↑_{t1+t2}^{t'} e = shift κ : t1+t2 → t' in
                     case e of inleft(x1)  ⇒ reset_{t'} (κ @ inleft(↑_{t1}^{t'} x1))
                             | inright(x2) ⇒ reset_{t'} (κ @ inright(↑_{t2}^{t'} x2))
                     end
    where x1 and x2 are fresh.
  ↑_{Bool}^{t'} e = shift κ : Bool → t' in
                    if e then reset_{t'} (κ @ true) else reset_{t'} (κ @ false)

reset and reflect are annotated with the type of the value expected by the delimited context.

residualize = statically-reduce ∘ reify   :  Type → Value → Expr

Figure 8: Type-directed residualization
    '((((Num -> Num) -> Num) -> Num) *
      (((LNum -> Num) -> LNum -> Num) -> LNum -> Num) *
      (LNum -> Bool) * (Num -> Bool) *
      (Num * Num -> Num) *
      (LNum -> Num) *
      (LNum -> LNum) -> LNum -> Num))
(lambda (x0 x1 x2 x3 x4 x5 x6)
  (lambda (x7)
    (x0 (lambda (x8)
          ((x1 (lambda (x9)
                 (lambda (x10)
                   (if (x2 x10)
                       1
                       (if (x3 (x5 x10))
                           (x8 0)
                           (x4 (x5 x10)
                               (x9 (x6 x10))))))))
           x7)))))
>
In the first interaction, the identity procedure over a disjoint sum is residualized. In the second interaction, a constant procedure is residualized. The third interaction features a standard example in the continuations community: a procedure that multiplies the numbers in a list, and escapes if it encounters zero. Residualization requires both the type of fix (to traverse the list) and of call/cc (to escape).
The same legitimate question as in Section 1 arises: how does this really work? Let us residualize the static application f @ g with respect to the type Bool → Int, where

  f = λh.λx. 1 + (h @ x)
  g = λb. if b then 2 else 3

We want to perform the addition in f statically. This requires us to reduce the conditional expression in g, even though g's argument is unknown. During residualization, the delimited context [f @ g @ [.]] is abstracted and relocated in both branches of a dynamic conditional expression:

  ↓^{Bool→Int} (f @ g)
    = λb. ↓^{Int} (f @ g @ (↑_{Bool} b))
    = λb. if b then reset_{Int} (f @ g @ true) else reset_{Int} (f @ g @ false)

which statically reduces to λb. if b then 3 else 4.
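Concretely, and assuming (as in Section 4.5) that + can be applied statically, the example reads as the following hypothetical Scheme session:

```scheme
> (define f (lambda (h) (lambda (x) (+ 1 (h x)))))
> (define g (lambda (b) (if b 2 3)))
> (residualize (f g) '(Bool -> Int))
(lambda (x0) (if x0 3 4))
```

The dynamic test on x0 is duplicated into both branches, where the static additions 1 + 2 and 1 + 3 are then performed.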
4 Limitations
Our full type-directed partial evaluator is not formally proven. Only its restriction to the simply typed λ-calculus has been proven correct, because it coincides with Berger and Schwichtenberg's algorithm [3]. (The two-level λ-calculus, though, provides a more convenient format for proving, e.g., that static reduction does not go wrong and yields a completely dynamic term.)
This section addresses the practical limitations of type-
directed partial evaluation.
4.1 Static errors and non-termination may occur
As soon as we move beyond the simply typed λ-calculus, nothing a priori guarantees that type-directed partial evaluation yields no static errors or even terminates. (As usual in partial evaluation, one cannot solve the halting problem.) For example, given the looping thunk loop, the expression

(residualize (lambda (dummy) ((loop) (/ 1 0)))
             '(Dummy -> Whatever))

may either diverge or yield a "division by zero" error, depending on the Scheme processor at hand, since the order in which sub-expressions are evaluated, in an application, is undetermined [10].
4.2 Residual programs must have a type
We must know the type of every residual program, since it is this type that directs residualization. (The static parts of a source program, however, need not be typed.)

Residual programs can be polymorphically typed at base type (names of base types do not matter), but they must be monomorphically typed at higher types. Overcoming this limitation would require one to pass type tags to polymorphic functions, and to enumerate possible occurrences of type tags at the application site of polymorphic functions (an F2 analogue of control-flow analysis / closure analysis for higher-order programs).

Examples include definitional interpreters for programming languages with recursive definitions that depend on the type of the source program. Unless one can enumerate all possible instances of such recursive definitions (and thus abstract all the corresponding fixpoint operators in the definitional interpreter), these interpreters cannot be residualized.
Inductive types are out of reach as well because eta-expanding a term of type t that includes an inductive type t' does not terminate if t' occurs in negative position within t. For example, we can consider lists.

  ↑_{List(t)}^{t'} e = shift κ : List(t) → t' in
                       case e of nil        ⇒ reset_{t'} (κ @ nil)
                               | cons(x, y) ⇒ reset_{t'} (κ @ cons(↑^{t} x, ↑_{List(t)}^{t'} y))
                       end

where x and y are fresh. Since reflecting upon the tail y again reflects at type List(t), reflecting upon a list-typed expression diverges.
The problem here is closely related to coding fixed-point
operators in a call-by-value language. A similar solution can
be devised and gives rise to a notion of lazy insertion of
coercions.
4.3 Single-threading and computation duplication must be addressed
Type-directed partial evaluation does not escape the problem of computation duplication: a program such as

(lambda (f g x) ((lambda (y) (f y y)) (g x)))

is residualized as

(lambda (f g x) (f (g x) (g x)))

Fortunately, the Similix solution applies: a residual let expression should be inserted [8]. The residual term above then reads:

(lambda (f g x) (let ([y (g x)]) (f y y)))
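In two-level notation, a sketch of this let insertion (for reflection at function types, under the conventions of Figure 2) replaces

  ↑^{t1→t2} e = λv. ↑^{t2} (e @ ↓^{t1} v)

with a variant that names each dynamic application once:

  ↑^{t1→t2} e = λv. shift κ in let x = e @ ↓^{t1} v in reset (κ @ ↑^{t2} x)

where x is fresh. The captured context κ is resumed with the reflection of the residual variable x, so the dynamic application e @ ↓^{t1} v occurs, and thus is residualized, exactly once.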
We have implemented type-directed partial evaluation in such a way. This makes it possible to specialize a direct-style version of the Tiny interpreter in Section 2.3. The corresponding residual programs (see Figure 5) are in direct style as well. Essentially they use let expressions "let v = f @ x in e" instead of CPS "f @ x @ λv.e".
4.4 Side effects
Unless side-effecting procedures can be performed statically, they need to be factored out and, in the absence of let insertion, be made continuation-passing.

At first, this can be seen as a shortcoming, until one considers the contemporary treatment of side effects in partial evaluators. Since Similix [8], all I/O-like side effects are residualized, which on the one hand is safe but on the other hand prevents, e.g., the non-trivial specialization of an interpreter which finds its source program in a file. Ditto for specializing an interpreter that uses I/O to issue compile-time messages: they all are delayed until run time. Similar heuristics can be devised for other kinds of computational effects.
In summary, the treatment of side effects in partial evaluators is not clear cut. Type-directed partial evaluation at least offers a simple testbed for experiments.
4.5 Primitive procedures must be either static or dynamic
The following problem appears as soon as we move beyond the pure λ-calculus.

During residualization, a primitive procedure cannot be used both statically and dynamically. Thus for purposes of residualization, in a source expression such as

((lambda (x) (lambda (y) (+ (+ x 10) y))) 100)

the two instances of + must be segregated. The outer occurrence must be declared in the initial run-time environment:

(lambda (add)
  ((lambda (x) (lambda (y) (add (+ x 10) y))) 100))

This limitation may remind one of the need for binding-time separation in some partial evaluators [36, 41].
A simple solution, however, exists that prevents segregation. Rather than binding a factorized primitive operator such as + to the offline procedure

(lambda (<fresh-name>)
  (lambda (a1 a2)
    `(,<fresh-name> ,a1 ,a2)))

one could bind it to an online procedure that probes its arguments for static-reduction opportunities.
(lambda (<fresh-name>)
  (lambda (a1 a2)
    (if (number? a1)
        (if (number? a2)
            (+ a1 a2)
            (if (zero? a1)
                a2
                `(,<fresh-name> ,a1 ,a2)))
        (if (and (number? a2) (zero? a2))
            a1
            `(,<fresh-name> ,a1 ,a2)))))
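With such an online binding, the two occurrences of + in the example above no longer need to be segregated: both can be bound to the same abstracted operator. A hypothetical session (the type annotation for + is an assumption about how it would be declared):

```scheme
> (residualize
    (lambda (add)
      ((lambda (x) (lambda (y) (add (add x 10) y))) 100))
    '((Num * Num -> Num) -> Num -> Num))
```

Here the inner application (add 100 10) is performed statically, yielding 110, while the outer application, whose second argument is a residual variable, residualizes.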
4.6 Type-directed partial evaluation is monovariant
Partial evaluation derives much power from polyvariance (the generation of mutually recursive specialized versions of source program points). Polyvariance makes it possible, e.g., to derive linear string matchers out of a quadratic one and to compile pattern matching and regular expressions efficiently [15, 36]. We are currently working on making type-directed partial evaluation polyvariant.
4.7 Residual programs are hard to decipher
A pretty-printer proves very useful to read residual programs. We are currently experimenting with the ability to attach residual-name stubs to type declarations, as in Elf. This mechanism would liberate us from renaming by hand, as in the residual program of Figure 5.
5 Related Work
5.1 λ-calculus normalization and Gödelization
Normalization is traditionally understood as rewriting until a normal form is reached. In that context, (one-level) type-directed eta-expansion is a necessary step towards long βη-normal forms [31]. A recent trend, embodied by partial evaluation, amounts to staging normalization in two steps: a translation into an annotated language, followed by a symbolic evaluation. This technique of normalization by translation appears to be spreading [39]. Follow-up work on Berger and Schwichtenberg's algorithm includes Altenkirch, Hofmann, and Streicher's categorical reconstruction of this algorithm [1].6 This reconstruction formalizes the environment of fresh identifiers generated by the reification of λ-abstractions as a categorical fibration. Berger and Schwichtenberg also dedicate a significant part of their paper to formalizing the generation of fresh identifiers (they represent abstract-syntax trees as base types).
In the presence of disjoint sums, the existence of a normalization algorithm in the simply typed lambda-calculus is not known. Therefore our naive extension in Section 3 needs to be studied more closely. The call-by-value nature of our implementation, for example, makes it possible to distinguish terms that are indistinguishable under call-by-name.
> (residualize
    (lambda (f) (lambda (x) 42))
    '((a -> (b + c)) -> a -> Int))
(lambda (x0) (lambda (x1) 42))
> (residualize
    (lambda (f) (lambda (x) ((lambda (y) 42) (f x))))
    '((a -> (b + c)) -> a -> Int))
(lambda (x0) (lambda (x1) (case-record (x0 x1)
                            [(Left x2) 42]
                            [(Right x3) 42])))
>
In his PhD thesis [27], Goldberg investigates Gödelization, i.e., the encoding of a value from one language into another. He identifies Berger and Schwichtenberg's algorithm as one instance of Gödelization, and presents a Gödelizer for proper combinators in the untyped λ-calculus.

An implementation of Berger and Schwichtenberg's algorithm in Standard ML can be found in Filinski's PhD

6 In that work, reify is "quote" and reflect is "unquote".
thesis [23]. This implementation handles most of the examples displayed in the present paper, in ML. It is ingenious because, as expressed in Figures 1 and 2, type-directed partial evaluation requires dependent types. It can be easily translated into Haskell (excluding disjoint sums, of course, for lack of computational effects).
5.2 Binding-time analysis
All of Nielson and Nielson's binding-time analyses dynamize functions in dynamic contexts because of the difficulties of handling contravariance in program analysis [44]. So do all other binding-time analyses [36], with the exception of Consel's [12] and Heintze's [40]. These analyses are polyvariant and thus they spawn another variant instead of dynamizing.
In practice, type-directed partial evaluation needs a simple form of binding-time analysis: a type inference where all base types are duplicated into a static version and a dynamic version. Whenever a primitive operation is applied to an expression which is not entirely static, it is factored out into the initial run-time environment. The other occurrences of this primitive operation do not need to be factored out, however (thus enabling a small form of polyvariance).
To use this binding-time analysis, and more generally type-directed partial evaluation, the simplest approach is to define source programs as closed λ-terms, by abstracting all free occurrences of variables. (To enable the small form of polyvariance mentioned in the last paragraph, each occurrence of a primitive operator can be abstracted separately.) One can then curry this program with the static variables first, and then residualize the application of this curried program to the static values, with respect to the type of the result. This simple use matches the statement of Kleene's S-m-n theorem.
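As a sketch of this recipe (all names are illustrative assumptions): given a closed, curried program with its static argument first, a static value s, and a result of type '(D -> R), one residualizes the partial application:

```scheme
;; p-closed is the closed, curried source program, with its static
;; argument first; s is the static value, and '(D -> R) is the type
;; of the residual program (hypothetical names, for illustration).
(residualize (p-closed s) '(D -> R))
```

This is the pattern of the interpreter example in Figure 7, where the static value is the interpreted term.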
5.3 Partial evaluation
Type-directed partial evaluation radically departs from all other partial evaluators (and optimizing compilers) because it has no interpretive dimension whatsoever: its source programs are compiled. If anything, it is closest to run-time code generation (the output syntax need not be Scheme's).

The last ten years have seen two flavors of partial evaluation emerge: online and offline. Offline partial evaluation is staged into two components: program analysis and program specialization. Online partial evaluation is more monolithic. Extensive work on both sides [6, 11, 15, 34, 36, 41, 48, 51] has led to the conclusions of both the usefulness of program analysis and the need for online partial evaluation in a program specializer (as illustrated in Section 4.5). Because it relies on one piece of static information, the type of the residual program, type-directed partial evaluation appears as an extreme form of offline partial evaluation.
In the spring of 1989, higher-order partial evaluation was blooming at DIKU [34]. In parallel with Bondorf (then visiting Dortmund), the author developed a version of Similix [8] that did not dynamize higher-order values in dynamic contexts. In this unreleased version, instead, the specializer kept track of the arity of static closures and eta-expanded them when they occurred in dynamic contexts. Type-directed partial evaluation stems from this unpublished work. The idea, however, did not take. Despite the analogy with call unfolding, which is central to partial evaluation but unsafe without let insertion (under call by value), "Similix [...] refuses to lift higher-order values into residual expressions: lifting higher-order values and data structures is in general unsafe since it may lead to residual code that exponentially duplicates data structure and closure allocations" [9, page 327]. All the later higher-order partial evaluators developed at DIKU have adopted the same conservative strategy, a choice that Henglein questions from a type-theoretical standpoint [29]. In practice, this decision created the need for source binding-time improvements in offline partial evaluation [36, Chapter 12]. In contrast, binding-time coercions "improve binding times without explicit eta-conversion", to paraphrase the title of Bondorf's LFP'92 paper [7], a property which should prove crucial for multi-level binding-time analyses since it eliminates the need for (unfathomed) multi-level binding-time improvements [26].
Thus Mix-like partial evaluation [36] and type-directed partial evaluation fundamentally contrast when it comes to dynamic computations: Mix-like partial evaluators do not associate any structure to the binding time "dynamic", whereas we rely on the type structure of dynamic computations in an essential way. As a consequence, and modulo the abstraction of the initial run-time environment (which must be defined in a partial evaluator anyway), type-directed partial evaluation needs a restricted form of binding-time analysis (see Section 5.2) but it needs no specialization by symbolic interpretation. It is, however, monovariant.

We are currently integrating the residualization algorithm in Pell-Mell, our partial evaluator for ML [40]. This algorithm fulfills our need for binding-time coercions at higher types. Type-directed partial evaluation also formalizes and clarifies a number of earlier pragmatic decisions in the system. For example, our treatment of inductive data structures can be seen as lazy insertion of coercions (Section 4.2).
5.4 Self-application
Self-application is the best-known way to optimize partial evaluation [6, 11, 15, 25, 36, 37, 41, 48]: rather than running the partial evaluator on a source program, with all the symbolic interpretive overhead this entails, one could instead

1. generate a specializer dedicated to this program (a.k.a. "a generating extension"), and
2. run this dedicated specializer on the available input.

To be efficient, self-application requires a good binding-time analysis and a good binding-time division in the partial evaluator [36, Section 7.3].

Type-directed partial evaluation is based on type inference, needs no particular binding-time division, and, as illustrated in Section 2.4, is self-applicable as well.
5.5 Partial evaluation of definitional interpreters
In the particular cases where the source program p is a definitional interpreter, or where the source program is the partial evaluator PE itself and the static input is a definitional interpreter or PE itself, the partial-evaluation equations

  run PE (p, (s, _)) = p_(s,_)
  run p (s, d)       = run p_(s,_) (d)

are known as the "Futamura projections" [15, 25, 36]. As illustrated in Section 2.3 and (to a lesser extent) in Section 2.4, type-directed partial evaluation enables their implementation in a strikingly simple way. (The third Futamura projection, however, is out of reach because of the polymorphic type of PE(PE, _).)
6 Conclusion
To produce a residual program, a partial evaluator needs to residualize static values in dynamic contexts. Considering higher-order values introduces a new challenge for residualization. Most partial evaluators dodge this challenge by disallowing static higher-order values to occur in dynamic contexts, i.e., in practice, by dynamizing both, and more generally by restricting residualized values to be of base type [4, 6, 9, 36, 37]. Only recently, some light has been shed on the residualization of values at higher types, given information about these types [18, 19].
We have presented an algorithm that residualizes a closed typed static value in a dynamic context, by eta-expanding the value with two-level eta-redexes and then reducing all static redexes. For the simply typed λ-calculus, the algorithm coincides with Berger and Schwichtenberg's "inverse of the evaluation functional" [3]. It is also interesting in its own right in that it can be used as a partial evaluator for closed compiled programs, given their type. This partial evaluator, in several respects, outperforms all previous partial evaluators, e.g., in simplicity and in efficiency. It also provides a simple and effective solution to (denotational) semantics-directed compilation in the λ-calculus.

Future work includes formalizing type-directed partial evaluation, extending it to a richer type language (e.g., polymorphic or linear), making it more user-friendly, programming more substantial examples, and obtaining polyvariance.

An implementation is available on the author's home page.
Acknowledgements
Various parts of this work have benefited from the kind attention and encouragement of Thorsten Altenkirch, David Basin, Luca Cardelli, Craig Chambers, Charles Consel, Jim des Rivières, Dirk Dussart, John Hatcliff, Martin Hofmann, Thomas Jensen, Peter Mosses, Flemming Nielson, Dino Oliva, Bob Paige, Frank Pfenning, Jon Riecke, Amr Sabry, Michael Schwartzbach, Helmut Schwichtenberg, Thomas Streicher, Tommy Thorn, and Glynn Winskel. Thanks are due to the referees for insightful comments, and to Guy L. Steele Jr. for shepherding this paper. Particular thanks to Rowan Davies, Mayer Goldberg, Julia Lawall, Nevin Heintze, Karoline Malmkjær, Jens Palsberg, and René Vestergaard for discussions and comments. Hearty thanks to Andrzej Filinski for substantial e-mail interaction and comments.

All of the examples were run with R. Kent Dybvig's Chez Scheme system and with Aubrey Jaffer's SCM system. Most were type-checked using Standard ML of New Jersey.

The author gratefully acknowledges the support of the DART project (Design, Analysis and Reasoning about Tools) of the Danish Research Councils during 1995.
References

7 Davies and Pfenning's logical formalization of binding-time analysis and offline partial evaluation opens a promising avenue [20].

[1] Thorsten Altenkirch, Martin Hofmann, and Thomas Streicher. Categorical reconstruction of a reduction-free normalisation proof. In Peter Dybjer and Randy Pollack, editors, Informal Proceedings of the Joint CLICS-TYPES Workshop on Categories and Type Theory, Göteborg, Sweden, May 1995. Report 85, Programming Methodology Group, Chalmers University and the University of Göteborg.

[2] Henk Barendregt. The Lambda Calculus: Its Syntax and Semantics. North-Holland, 1984.

[3] Ulrich Berger and Helmut Schwichtenberg. An inverse of the evaluation functional for typed λ-calculus. In Proceedings of the Sixth Annual IEEE Symposium on Logic in Computer Science, pages 203-211, Amsterdam, The Netherlands, July 1991. IEEE Computer Society Press.

[4] Lars Birkedal and Morten Welinder. Partial evaluation of Standard ML. Master's thesis, DIKU, Computer Science Department, University of Copenhagen, August 1993. DIKU report 93/22.

[5] Hans-J. Boehm, editor. Proceedings of the Twenty-First Annual ACM Symposium on Principles of Programming Languages, Portland, Oregon, January 1994. ACM Press.

[6] Anders Bondorf. Self-Applicable Partial Evaluation. PhD thesis, DIKU, Computer Science Department, University of Copenhagen, Copenhagen, Denmark, 1990. DIKU Report 90-17.

[7] Anders Bondorf. Improving binding times without explicit cps-conversion. In William Clinger, editor, Proceedings of the 1992 ACM Conference on Lisp and Functional Programming, LISP Pointers, Vol. V, No. 1, pages 1-10, San Francisco, California, June 1992. ACM Press.

[8] Anders Bondorf and Olivier Danvy. Automatic autoprojection of recursive equations with global variables and abstract data types. Science of Computer Programming, 16:151-195, 1991.

[9] Anders Bondorf and Jesper Jørgensen. Efficient analyses for realistic off-line partial evaluation. Journal of Functional Programming, 3(3):315-346, 1993.

[10] William Clinger and Jonathan Rees (editors). Revised4 report on the algorithmic language Scheme. LISP Pointers, IV(3):1-55, July-September 1991.

[11] Charles Consel. Analyse de Programmes, Evaluation Partielle et Génération de Compilateurs. PhD thesis, Université Pierre et Marie Curie (Paris VI), Paris, France, June 1989.

[12] Charles Consel. Polyvariant binding-time analysis for applicative languages. In David A. Schmidt, editor, Proceedings of the Second ACM SIGPLAN Symposium on Partial Evaluation and Semantics-Based Program Manipulation, pages 66-77, Copenhagen, Denmark, June 1993. ACM Press.

[13] Charles Consel and Olivier Danvy. From interpreting to compiling binding times. In Neil D. Jones, editor, Proceedings of the Third European Symposium on Programming, number 432 in Lecture Notes in Computer Science, pages 88-105, Copenhagen, Denmark, May 1990.
[14] Charles Consel and Olivier Danvy. Static and dynamic semantics processing. In Robert (Corky) Cartwright, editor, Proceedings of the Eighteenth Annual ACM Symposium on Principles of Programming Languages, pages 14-24, Orlando, Florida, January 1991. ACM Press.

[15] Charles Consel and Olivier Danvy. Tutorial notes on partial evaluation. In Susan L. Graham, editor, Proceedings of the Twentieth Annual ACM Symposium on Principles of Programming Languages, pages 493-501, Charleston, South Carolina, January 1993. ACM Press.

[16] Olivier Danvy and Andrzej Filinski. Abstracting control. In Mitchell Wand, editor, Proceedings of the 1990 ACM Conference on Lisp and Functional Programming, pages 151-160, Nice, France, June 1990. ACM Press.

[17] Olivier Danvy and Andrzej Filinski. Representing control, a study of the CPS transformation. Mathematical Structures in Computer Science, 2(4):361-391, December 1992.

[18] Olivier Danvy, Karoline Malmkjær, and Jens Palsberg. The essence of eta-expansion in partial evaluation. LISP and Symbolic Computation, 8(3):209-227, 1995. An earlier version appeared in the proceedings of the 1994 ACM SIGPLAN Workshop on Partial Evaluation and Semantics-Based Program Manipulation.

[19] Olivier Danvy, Karoline Malmkjær, and Jens Palsberg. Eta-expansion does The Trick. Technical report BRICS RS-95-41, DAIMI, Computer Science Department, Aarhus University, Aarhus, Denmark, August 1995.

[20] Rowan Davies and Frank Pfenning. A modal analysis of staged computation. In Guy L. Steele Jr., editor, Proceedings of the Twenty-Third Annual ACM Symposium on Principles of Programming Languages, St. Petersburg Beach, Florida, January 1996. ACM Press.

[21] Matthias Felleisen. The theory and practice of first-class prompts. In Jeanne Ferrante and Peter Mager, editors, Proceedings of the Fifteenth Annual ACM Symposium on Principles of Programming Languages, pages 180-190, San Diego, California, January 1988.
[22] Andrzej Filinski. Representing monads. In Boehm [5], pages 446-457.

[23] Andrzej Filinski. Controlling Effects. PhD thesis, School of Computer Science, Carnegie Mellon University, Pittsburgh, Pennsylvania, 1995.

[24] Daniel P. Friedman, Mitchell Wand, and Christopher T. Haynes. Essentials of Programming Languages. The MIT Press and McGraw-Hill, 1991.

[25] Yoshihito Futamura. Partial evaluation of computation process: an approach to a compiler-compiler. Systems, Computers, Controls, 2(5):45-50, 1971.

[26] Robert Glück and Jesper Jørgensen. Efficient multi-level generating extensions for program specialization. In S. D. Swierstra and M. Hermenegildo, editors, Seventh International Symposium on Programming Language Implementation and Logic Programming, number 982 in Lecture Notes in Computer Science, pages 259-278, Utrecht, The Netherlands, September 1995.

[27] Mayer Goldberg. Recursive Application Survival in the λ-Calculus. PhD thesis, Computer Science Department, Indiana University, Bloomington, Indiana, 1996. Forthcoming.

[28] John Hatcliff and Olivier Danvy. A generic account of continuation-passing styles. In Boehm [5], pages 458-471.

[29] Fritz Henglein. Dynamic typing: Syntax and proof theory. Science of Computer Programming, 22(3):197-230, 1993. Special Issue on ESOP'92, the Fourth European Symposium on Programming, Rennes, February 1992.

[30] Gérard Huet. Résolution d'équations dans les langages d'ordre 1, 2, ..., ω. Thèse d'État, Université de Paris VII, Paris, France, 1976.

[31] C. Barry Jay and Neil Ghani. The virtues of eta-expansion. Journal of Functional Programming, 5(3):135-154, 1995.
[32] Neil D. Jones, editor.
Semantzcs-Dwected Compder
Generation, number 94 in Lecture Notes in Computer
Science, Aarhus, Denmark, 1980.
[33] Neil D. Jones. Challenging problems in partial evalu-
ation and mixed computation. In Partial
Evaluation
and Mzxed Computation, pages 1–14. North-Holland,
1988.
[34] Neil D. Jones. Mix ten years after. In William L. Scherlis, editor, Proceedings of the ACM SIGPLAN Symposium on Partial Evaluation and Semantics-Based Program Manipulation, pages 24–38, La Jolla, California, June 1995.
[35] Neil D. Jones, Carsten K. Gomard, Anders Bondorf, Olivier Danvy, and Torben Æ. Mogensen. A self-applicable partial evaluator for the lambda calculus. In K. C. Tai and Alexander L. Wolf, editors, Proceedings of the 1990 IEEE International Conference on Computer Languages, pages 49–58, New Orleans, Louisiana, March 1990.
[36] Neil D. Jones, Carsten K. Gomard, and Peter Sestoft.
Partial Evaluation and Automatic Program Generation.
Prentice Hall International Series in Computer Science.
Prentice-Hall, 1993.
[37] John Launchbury. Projection Factorisations in Partial Evaluation. Distinguished Dissertations in Computer Science. Cambridge University Press, 1991.
[38] Julia L. Lawall and Olivier Danvy. Continuation-based partial evaluation. In Carolyn L. Talcott, editor, Proceedings of the 1994 ACM Conference on Lisp and Functional Programming, LISP Pointers, Vol. VII, No. 3, Orlando, Florida, June 1994. ACM Press.
[39] Ralph Loader. Normalisation by translation. Technical report, Computing Laboratory, Oxford University, April 1995.
[40] Karoline Malmkjær, Nevin Heintze, and Olivier Danvy. ML partial evaluation using set-based analysis. In John Reppy, editor, Record of the 1994 ACM SIGPLAN Workshop on ML and its Applications, Rapport de recherche No 2265, INRIA, pages 112–119, Orlando, Florida, June 1994. Also appears as Technical report CMU-CS-94-129.
[41] Torben Æ. Mogensen. Binding Time Aspects of Partial Evaluation. PhD thesis, DIKU, Computer Science Department, University of Copenhagen, Copenhagen, Denmark, March 1989.
[42] Eugenio Moggi. Notions of computation and monads. Information and Computation, 93:55–92, 1991.
[43] Peter D. Mosses. SIS semantics implementation system, reference manual and user guide. Technical Report MD-30, DAIMI, Computer Science Department, Aarhus University, Aarhus, Denmark, 1979.
[44] Flemming Nielson and Hanne Riis Nielson. Two-Level Functional Languages, volume 34 of Cambridge Tracts in Theoretical Computer Science. Cambridge University Press, 1992.
[45] Larry Paulson. Compiler generation from denotational semantics. In Bernard Lorho, editor, Methods and Tools for Compiler Construction, pages 219–250. Cambridge University Press, 1984.
[46] Frank Pfenning. Logic programming in the LF logical framework. In Gérard Huet and Gordon Plotkin, editors, Logical Frameworks, pages 149–181. Cambridge University Press, 1991.
[47] John C. Reynolds. The essence of Algol. In van Vliet, editor, International Symposium on Algorithmic Languages, pages 345–372, Amsterdam, 1982. North-Holland.
[48] Erik Ruf. Topics in Online Partial Evaluation. PhD thesis, Stanford University, Stanford, California, February 1993. Technical report CSL-TR-93-563.
[49] Guy L. Steele Jr. Rabbit: A compiler for Scheme. Technical Report AI-TR-474, Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, Massachusetts, May 1978.
[50] Philip Wadler. The essence of functional programming (tutorial). In Andrew W. Appel, editor, Proceedings of the Nineteenth Annual ACM Symposium on Principles of Programming Languages, pages 1–14, Albuquerque, New Mexico, January 1992. ACM Press.
[51] Daniel Weise, Roland Conybeare, Erik Ruf, and Scott Seligman. Automatic online partial evaluation. In John Hughes, editor, Proceedings of the Fifth ACM Conference on Functional Programming and Computer Architecture, number 523 in Lecture Notes in Computer Science, pages 165–191, Cambridge, Massachusetts, August 1991.
A Nielson and Nielson's Two-Level λ-Calculus

In its most concise form, the two-level simply typed λ-calculus [44] duplicates all the constructs of the simply typed λ-calculus (λ, application (hereby noted @), pair, fst, snd) into static constructs (the overlined λ, @, pair, fst, snd) and dynamic constructs (the underlined λ, @, pair, fst, snd).
(define-record (Program names command))
(define-record (Skip))
(define-record (Sequence command command))
(define-record (Assign identifier expression))
(define-record (Conditional expression command command))
(define-record (While expression command))
(define-record (Literal constant))
(define-record (Boolean constant))
(define-record (Identifier name))
(define-record (Primop op expression expression))
(define-record (Read))
(define meaning-type
  '((Int * Int * (Int -> Ans) => Ans) *                 ;;; add
    (Int * Int * (Int -> Ans) => Ans) *                 ;;; sub
    (Int * Int * (Int -> Ans) => Ans) *                 ;;; mul
    (Int * Int * (Int -> Ans) => Ans) *                 ;;; eq
    (Int * Int * (Int -> Ans) => Ans) *                 ;;; gt
    ((Int -> Ans) -> Ans) *                             ;;; read
    (((Sto -> Ans) -> Sto -> Ans) -> Sto -> Ans) *      ;;; fix
    (Int * (Sto -> Ans) * (Sto -> Ans) * Sto => Ans) *  ;;; true?
    (Nat * Sto * (Int -> Ans) => Ans) *                 ;;; lookup
    (Nat * Int * Sto * (Sto -> Ans) => Ans) =>          ;;; update
    (Sto -> Ans) *                                      ;;; continuation
    Sto =>                                              ;;; store
    Ans))
Figure 9: Scheme interpreter for Tiny (abstract syntax and semantic algebras)
A simply typed λ-term is mapped into a two-level λ-term by a binding-time analysis. The intention is to formalize the following idea:

Statically reducing a two-level λ-term, erasing the annotations of the residual term, and reducing this unannotated term should yield the same result (normal form) as reducing the original term.
The two-level λ-calculus thus provides an ideal medium for staged evaluation with more than one binding time. To this end, it makes use of three properties that are captured in its typing discipline:

* static reduction preserves well-typing;
* static reduction strongly normalizes;
* static reduction yields normal forms that are completely dynamic.
Static reduction can be implemented directly in a functional language: static (overlined) constructs are treated as ordinary constructs of the language, and dynamic (underlined) constructs as syntax-building functions. It can also be implemented with quasiquote and unquote in Scheme (as done in Section 1) and thus can be seen as macro-expansion in a simply typed setting.
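For instance, static reduction with quasiquote and unquote can be sketched as follows (a minimal illustration of the idea; the name specialize and the example term are ours, not from Section 1). The static λ-abstraction is an ordinary Scheme lambda, applied at specialization time; the dynamic λ-abstraction is a quasiquoted template whose hole is filled with unquote.

    ;; static-f maps residual syntax to residual syntax;
    ;; the quasiquoted lambda is the dynamic construct.
    (define specialize
      (lambda (static-f)
        `(lambda (y) ,(static-f 'y))))

    (specialize (lambda (x) `(+ ,x 1)))
    ;; => (lambda (y) (+ y 1))

The static redex disappears at specialization time, and only the dynamic constructs survive in the residual term, as the third property above requires.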
B Abstracting Control with Shift and Reset

This section provides some intuition about the effect of shift and reset.

Shift and reset were introduced to capture composition and identity over continuations [16, 17]. Reset delimits a context, and is identical to Felleisen's prompt; shift abstracts a delimited context, and is similar (though not in general
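As a first taste, consider the following two expressions (assuming a Scheme that provides shift and reset, e.g., through Filinski's monadic encoding [22] or a native control library):

    ;; shift captures the context up to the nearest reset
    ;; as a composable function k.
    (reset (+ 1 (shift k (k (k 10)))))
    ;; here k = (lambda (v) (+ 1 v)), so the result is (k (k 10)) = 12

    (+ 100 (reset (+ 1 (shift k 10))))
    ;; the captured context (+ 1 [ ]) is discarded: result 110

In the first expression the delimited context is applied twice; in the second it is never invoked, so reset simply returns the body of the shift.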
(define meaning
  (lambda (p)
    (lambda (add sub mul eq gt read fix true? lookup update)
      (lambda (k s)
        (letrec ([meaning-program
                  (lambda (p k s)
                    (case-record p
                      [(Program vs c)
                       (meaning-declaration vs 0 (lambda (r) (meaning-command c r k s)))]))]
                 [meaning-declaration
                  (lambda (d offset k)
                    (if (null? d)
                        (k (lambda (i) (error 'lookup "undeclared identifier ~s" i)))
                        (meaning-declaration
                          (cdr d)
                          (add1 offset)
                          (lambda (r) (k (lambda (i) (if (eq? (car d) i) offset (r i))))))))]
                 [meaning-command
                  (lambda (c r k s)
                    (case-record c
                      [(Skip) (k s)]
                      [(Sequence c1 c2)
                       (meaning-command c1 r (lambda (s) (meaning-command c2 r k s)) s)]
                      [(Assign i e)
                       (meaning-expression e r (lambda (v) (update (r i) v s k)) s)]
                      [(Conditional e c-then c-else)
                       (meaning-expression e r (lambda (v)
                                                 (true? v
                                                        (lambda (s) (meaning-command c-then r k s))
                                                        (lambda (s) (meaning-command c-else r k s))
                                                        s)) s)]
                      [(While e c)
                       ((fix (lambda (while)
                               (lambda (s)
                                 (meaning-expression e r (lambda (v)
                                                           (true? v
                                                                  (lambda (s) (meaning-command c r while s))
                                                                  k