Lambda-lifting and CPS conversion in an imperative
language
Gabriel Kerneis Juliusz Chroboczek
Université Paris Diderot, PPS, Paris, France
February 2012
Abstract
This paper is a companion technical report to the article “Continuation-Passing C: from
threads to events through continuations”. It contains the complete version of the proofs of
correctness of lambda-lifting and CPS-conversion presented in the article.
Contents

1 Introduction

2 Lambda-lifting in an imperative language
  2.1 Definitions
    2.1.1 Naive reduction rules
    2.1.2 Lambda-lifting
    2.1.3 Correctness condition
  2.2 Optimised reduction rules
    2.2.1 Minimal stores
    2.2.2 Compact closures
    2.2.3 Optimised reduction rules
  2.3 Equivalence of optimised and naive reduction rules
    2.3.1 Optimised and intermediate reduction rules equivalence
    2.3.2 Intermediate and naive reduction rules equivalence
  2.4 Correctness of lambda-lifting
    2.4.1 Strengthened hypotheses
    2.4.2 Overview of the proof
    2.4.3 Rewriting lemmas
    2.4.4 Aliasing lemmas
    2.4.5 Proof of correctness

3 CPS conversion
  3.1 CPS-convertible form
  3.2 Early evaluation
  3.3 Small-step reduction
  3.4 CPS terms
  3.5 Correctness of the CPS-conversion
1 Introduction
This paper is a companion technical report to the article “Continuation-Passing C: from threads
to events through continuations” [4]. It contains the complete version of the proofs presented in
the article. It does not, however, give any background or motivation for our work: please refer
to the original article.
2 Lambda-lifting in an imperative language
To prove the correctness of lambda-lifting in an imperative, call-by-value language when functions
are called in tail position, we do not reason directly on CPC programs, because the semantics of
C is too broad and complex for our purposes. The CPC translator leaves most parts of converted
programs intact, transforming only control structures and function calls. Therefore, we define a
simple language with restricted values, expressions and terms, that captures the features we are
most interested in (Section 2.1).
The reduction rules for this language (Section 2.1.1) use a simplified memory model without
pointers and enforce that local variables are not accessed outside of their scope, as ensured by
our boxing pass. This is necessary since lambda-lifting is not correct in general in the presence
of extruded variables.
It turns out that the “naive” reduction rules defined in Section 2.1.1 do not provide strong
enough invariants to prove this correctness theorem by induction, mostly because we represent
memory with a store that is not invariant with respect to lambda-lifting. Therefore, in Section 2.2,
we define an equivalent, “optimised” set of reduction rules which enforces more regular stores
and closures.
The proof of correctness is then carried out in Section 2.4 using these optimised rules. We first
define the invariants needed for the proof and formulate a strengthened version of the correctness
theorem (Theorem 2.28, Section 2.4.1). A comprehensive overview of the proof is then given in
Section 2.4.2. The proof is fully detailed in Section 2.4.5, with the help of a number of lemmas
to keep the main proof shorter (Sections 2.4.3 and 2.4.4).
The main limitation of this proof is that Theorems 2.9 and 2.28 are implications, not equiva-
lences: we do not prove that if a term does not reduce, it will not reduce once lifted. For instance,
this proof does not ensure that lambda-lifting does not break infinite loops.
2.1 Definitions
In this section, we define the terms (Definition 2.1), the reduction rules (Section 2.1.1) and the
lambda-lifting transformation itself (Section 2.1.2) for our small imperative language. With these
preliminary definitions, we are then able to characterise liftable parameters (Definition 2.8) and
state the main correctness theorem (Theorem 2.9, Section 2.1.3).
Definition 2.1 (Values, expressions and terms). Values are either boolean and integer constants, or 1, a special value for functions returning void.

  v ::= 1 | true | false | n ∈ ℕ

Expressions are either values or variables. We deliberately omit arithmetic and boolean operators, with the sole concern of avoiding boring cases in the proofs.

  e ::= v | x | ...

Terms consist of assignments, conditionals, sequences, recursive function definitions and calls.

  T ::= e | x := T | if T then T else T | T ; T
      | letrec f(x1 ... xn) = T in T | f(T, ..., T)
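One possible OCaml rendering of this grammar, as a minimal sketch of ours rather than anything defined in the report (constructor names are illustrative; expressions are folded into the term type as the Val and Var cases):

    type value = Unit | Bool of bool | Int of int      (* 1, true, false, n ∈ N *)

    type term =
      | Val of value
      | Var of string
      | Assign of string * term                        (* x := T *)
      | If of term * term * term                       (* if T then T else T *)
      | Seq of term * term                             (* T ; T *)
      | Letrec of string * string list * term * term   (* letrec f(x1 ... xn) = T in T *)
      | Call of string * term list                     (* f(T, ..., T) *)

Here Unit stands for the special value 1 returned by functions returning void.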
Our language focuses on the essential details affected by the transformations: recursive func-
tions, conditionals and memory accesses. Loops, for instance, are ignored because they can be
expressed in terms of recursive calls and conditional jumps — and that is, in fact, how the split-
ting pass translates them. Since lambda-lifting happens after the splitting pass, our language
needs to include inner functions (although they are not part of the C language), but it can safely
exclude goto statements.
2.1.1 Naive reduction rules
Environments and stores Handling inner functions requires explicit closures in the reduction
rules. We need environments, written ρ, to bind variables to locations, and a store, written s, to
bind locations to values.
Environments and stores are partial functions, equipped with a single operator which extends and modifies a partial function: · + {· ↦ ·}.

Definition 2.2. The modification (or extension) f′ of a partial function f, written f′ = f + {x ↦ y}, is defined as follows:

  f′(t) = y     when t = x
  f′(t) = f(t)  otherwise
  dom(f′) = dom(f) ∪ {x}
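As a minimal sketch, partial functions can be represented by association lists, and the extension operator then overrides any previous binding; the helper name extend is our own:

    (* f + {x ↦ y}: any previous binding of x is dropped, so dom(f') = dom(f) ∪ {x} *)
    let extend (f : ('a * 'b) list) (x : 'a) (y : 'b) : ('a * 'b) list =
      (x, y) :: List.remove_assoc x f

    let _ = extend [("x", 1); ("y", 2)] "x" 3   (* = [("x", 3); ("y", 2)] *)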
Definition 2.3 (Environments of variables and functions). Environments of variables are defined inductively by

  ρ ::= ε | (x, l) · ρ,

i.e. the empty domain function and ρ + {x ↦ l} (respectively).
Environments of functions associate function names to closures:

  F : {f, g, h, ...} → {[λx1 ... xn.T, ρ, F]}.
Note that although we have a notion of locations, which correspond roughly to memory
addresses in C, there is no way to copy, change or otherwise manipulate a location directly in
the syntax of our language. This is deliberate: adding such a possibility would make lambda-lifting incorrect. It reflects the fact, ensured by the boxing pass in the CPC translator, that there are no extruded variables in the lifted terms.
Reduction rules We use classical big-step reduction rules for our language (Figure 1, p. 4).
In the (call) rule, we need to introduce fresh locations for the parameters of the called
function. This means that we must choose locations that are not already in use, in particular in
the environments ρ and F. To express this choice, we define two ancillary functions, Env and
Loc, to extract the environments and locations contained in the closures of a given environment
of functions F.
Judgments are written M ⋆ s →[ρ, F] v ⋆ s′: under the environment of variables ρ and the environment of functions F, the term M evaluated in the store s reduces to the value v and the store s′.

(val)                        v ⋆ s →[ρ, F] v ⋆ s

(var)      ρ x = l ∈ dom(s)
           ---------------------------------
           x ⋆ s →[ρ, F] s l ⋆ s

(assign)   a ⋆ s →[ρ, F] v ⋆ s′        ρ x = l ∈ dom(s′)
           ----------------------------------------------
           x := a ⋆ s →[ρ, F] 1 ⋆ s′ + {l ↦ v}

(seq)      a ⋆ s →[ρ, F] v ⋆ s′        b ⋆ s′ →[ρ, F] v′ ⋆ s″
           --------------------------------------------------
           a ; b ⋆ s →[ρ, F] v′ ⋆ s″

(if-true)  a ⋆ s →[ρ, F] true ⋆ s′     b ⋆ s′ →[ρ, F] v ⋆ s″
           --------------------------------------------------
           if a then b else c ⋆ s →[ρ, F] v ⋆ s″

(if-false) a ⋆ s →[ρ, F] false ⋆ s′    c ⋆ s′ →[ρ, F] v ⋆ s″
           --------------------------------------------------
           if a then b else c ⋆ s →[ρ, F] v ⋆ s″

(letrec)   b ⋆ s →[ρ, F′] v ⋆ s′        F′ = F + {f ↦ [λx1 ... xn.a, ρ, F]}
           ----------------------------------------------------------------
           letrec f(x1 ... xn) = a in b ⋆ s →[ρ, F] v ⋆ s′

(call)     F f = [λx1 ... xn.b, ρ′, F′]     ρ″ = (x1, l1) · ... · (xn, ln)     li fresh and distinct
           ∀i, ai ⋆ si →[ρ, F] vi ⋆ si+1     b ⋆ sn+1 + {li ↦ vi} →[ρ″ · ρ′, F′ + {f ↦ F f}] v ⋆ s′
           -----------------------------------------------------------------------------------------
           f(a1 ... an) ⋆ s1 →[ρ, F] v ⋆ s′

Figure 1: “Naive” reduction rules
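The rules of Figure 1 can be read as the following OCaml interpreter, a minimal sketch that reuses the term and value types and the extend helper from the sketches above; locations are integers, and fresh_loc only inspects the store, so the second freshness condition of Definition 2.5 (on F) is not enforced here. All names are illustrative.

    type loc = int
    type venv = (string * loc) list                  (* ρ : variables → locations *)
    type store = (loc * value) list                  (* s : locations → values *)
    type closure = Clo of string list * term * venv * fenv
    and fenv = (string * closure) list               (* F : function names → closures *)

    (* a location unused in the store (first condition of Definition 2.5 only) *)
    let fresh_loc (s : store) : loc =
      1 + List.fold_left (fun m (l, _) -> max m l) 0 s

    let rec eval (m : term) (s : store) (rho : venv) (fs : fenv) : value * store =
      match m with
      | Val v -> (v, s)                                          (* (val)    *)
      | Var x -> (List.assoc (List.assoc x rho) s, s)            (* (var)    *)
      | Assign (x, a) ->                                         (* (assign) *)
          let v, s' = eval a s rho fs in
          (Unit, extend s' (List.assoc x rho) v)
      | Seq (a, b) ->                                            (* (seq)    *)
          let _, s' = eval a s rho fs in
          eval b s' rho fs
      | If (a, b, c) ->                                          (* (if-true), (if-false) *)
          (match eval a s rho fs with
           | Bool true, s' -> eval b s' rho fs
           | Bool false, s' -> eval c s' rho fs
           | _ -> failwith "condition is not a boolean")
      | Letrec (f, xs, a, b) ->                                  (* (letrec) *)
          eval b s rho (extend fs f (Clo (xs, a, rho, fs)))
      | Call (f, args) ->                                        (* (call)   *)
          let Clo (xs, body, rho', fs') = List.assoc f fs in
          (* evaluate the arguments left to right, threading the store *)
          let vs, s' =
            List.fold_left
              (fun (vs, s) a -> let v, s = eval a s rho fs in (vs @ [v], s))
              ([], s) args in
          (* bind fresh, distinct locations to the parameters x1 ... xn *)
          let ls, s'' =
            List.fold_left
              (fun (ls, s) v -> let l = fresh_loc s in (ls @ [l], extend s l v))
              ([], s') vs in
          eval body s'' (List.combine xs ls @ rho')
            (extend fs' f (Clo (xs, body, rho', fs')))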
Definition 2.4 (Set of environments, set of locations).

  Env(F) = ⋃ {ρ, ρ′ | [λx1 ... xn.M, ρ, F′] ∈ Im(F), ρ′ ∈ Env(F′)}
  Loc(F) = ⋃ {Im(ρ) | ρ ∈ Env(F)}

A location l is said to appear in F iff l ∈ Loc(F).
These functions allow us to define fresh locations.
Definition 2.5 (Fresh location). In the (call) rule, a location l is fresh when:
– l ∉ dom(sn+1), i.e. l is not already used in the store before the body of f is evaluated, and
– l does not appear in F′ + {f ↦ F f}, i.e. l will not interfere with locations captured in the environment of functions.
Note that the second condition implies in particular that l appears in neither F′ nor ρ′.
2.1.2 Lambda-lifting
Lambda-lifting can be split into two parts: parameter lifting and block floating [2]. We will focus only on the first part here, since the second one is trivial. Parameter lifting consists in adding a free variable as a parameter of every inner function where it appears free. This step is repeated until every variable is bound in every function, and closed functions can safely be floated to top-level. Note that although the transformation is called lambda-lifting, we do not focus on a single function and try to lift all of its free variables; on the contrary, we define the lifting of a single free parameter x in every possible function.
Smart lambda-lifting algorithms strive to minimize the number of lifted variables. Such is not
our concern in this proof: parameters are lifted in every function where they might potentially
be free.
Definition 2.6 (Parameter lifting in a term). Assume that x is defined as a parameter of a given function g, and that every inner function in g is called hi (for some i ∈ ℕ). Also assume that function parameters are unique before lambda-lifting.
Then the lifted form (M)* of the term M with respect to x is defined inductively as follows:

  (1)* = 1        (n)* = n
  (true)* = true  (false)* = false
  (y)* = y   and   (y := a)* = y := (a)*   (even if y = x)
  (a ; b)* = (a)* ; (b)*
  (if a then b else c)* = if (a)* then (b)* else (c)*
  (letrec f(x1 ... xn) = a in b)* = letrec f(x1 ... xn x) = (a)* in (b)*   if f = hi for some i
                                    letrec f(x1 ... xn) = (a)* in (b)*     otherwise
  (f(a1 ... an))* = f((a1)*, ..., (an)*, x)   if f = hi for some i
                    f((a1)*, ..., (an)*)      otherwise
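A minimal sketch of this transformation on the term type of the earlier AST sketch; lifted stands for the set of inner functions hi of the function g defining x, and both parameter names are our own:

    let rec lift_param (x : string) (lifted : string list) (m : term) : term =
      let lift = lift_param x lifted in
      match m with
      | Val _ | Var _ -> m                        (* (1)* = 1, (n)* = n, (y)* = y *)
      | Assign (y, a) -> Assign (y, lift a)       (* (y := a)* = y := (a)*, even if y = x *)
      | Seq (a, b) -> Seq (lift a, lift b)
      | If (a, b, c) -> If (lift a, lift b, lift c)
      | Letrec (f, xs, a, b) when List.mem f lifted ->
          Letrec (f, xs @ [x], lift a, lift b)    (* add x as a parameter of h_i *)
      | Letrec (f, xs, a, b) -> Letrec (f, xs, lift a, lift b)
      | Call (f, args) when List.mem f lifted ->
          Call (f, List.map lift args @ [Var x])  (* pass x at every call to h_i *)
      | Call (f, args) -> Call (f, List.map lift args)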
2.1.3 Correctness condition
We show that parameter lifting is correct for variables defined in functions whose inner functions
are called exclusively in tail position. We call these variables liftable parameters.
We first define tail positions as usual [1]:
Definition 2.7 (Tail position). Tail positions are defined inductively as follows:
1. M and N are in tail position in if P then M else N.
2. N is in tail position in N, M ; N and letrec f(x1 ... xn) = M in N.
A parameter x defined in a function g is liftable if every inner function in g is called exclusively in tail position.

Definition 2.8 (Liftable parameter). A parameter x is liftable in M when:
– x is defined as the parameter of a function g,
– inner functions in g, named hi, are called exclusively in tail position in g or in one of the hi.
Our main theorem states that performing parameter-lifting on a liftable parameter preserves
the reduction:
Theorem 2.9 (Correctness of lambda-lifting). If x is a liftable parameter in M, then

  ∀t, M ⋆ ε →[ε, ε] v ⋆ t   implies   ∃t′, (M)* ⋆ ε →[ε, ε] v ⋆ t′.

Note that the resulting store t′ changes because lambda-lifting introduces new variables, hence new locations in the store, and changes the values associated with lifted variables; Section 2.4 is devoted to the proof of this theorem. To maintain invariants during the proof, we need to use an equivalent, “optimised” set of reduction rules; it is introduced in the next section.
2.2 Optimised reduction rules
The naive reduction rules (Section 2.1.1) are not well-suited to prove the correctness of lambda-
lifting. Indeed, the proof is by induction and requires a number of invariants on the structure
of stores and environments. Rather than having a dozen lemmas to ensure these invariants during the proof of correctness, we encode them as constraints in the reduction rules.
To this end, we introduce two optimisations — minimal stores (Section 2.2.1) and compact
closures (Section 2.2.2) — which lead to the definition of an optimised set of reduction rules
(Figure 2, Section 2.2.3). The equivalence between optimised and naive reduction rules is shown
in Section 2.3.
2.2.1 Minimal stores
In the naive reduction rules, the store grows faster when reducing lifted terms, because each
function call adds to the store as many locations as it has function parameters. This yields stores
of different sizes when reducing the original and the lifted term, and that difference cannot be
accounted for locally, at the rule level.
Consider for instance the simplest possible case of lambda-lifting:
  letrec g(x) = (letrec h() = x in h()) in g(1)     (original)
  letrec g(x) = (letrec h(y) = y in h(x)) in g(1)   (lifted)

At the end of the reduction, the store for the original term is {lx ↦ 1} whereas the store for the lifted term is {lx ↦ 1; ly ↦ 1}. More complex terms would yield even larger stores, with many out-of-date copies of lifted variables.
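For concreteness, the running example can be built and lifted with the illustrative sketches above; the transformation reuses the name x for the added parameter, so the result is the lifted term shown here up to renaming of y:

    let original =
      Letrec ("g", ["x"],
              Letrec ("h", [], Var "x", Call ("h", [])),
              Call ("g", [Val (Int 1)]))

    let lifted = lift_param "x" ["h"] original
    (* letrec g(x) = (letrec h(x) = x in h(x)) in g(1) *)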
To keep the store under control, we need to get rid of useless variables as soon as possible
during the reduction. It is safe to remove a variable x from the store once we are certain that it will never be used again, i.e. as soon as the term in tail position in the function which defines x has been evaluated. This mechanism is analogous to the deallocation of a stack frame when a
function returns.
To track the variables whose location can be safely reclaimed after the reduction of some term M, we introduce split environments. Split environments are written ρT|ρ, where ρT is called the tail environment and ρ the non-tail one; only the variables belonging to the tail environment may be safely reclaimed. The reduction rules build environments so that a variable x belongs to ρT if and only if the term M is in tail position in the current function f and x is a parameter of f. In that case, it is safe to discard the locations associated to all of the parameters of f, including x, after M has been reduced, because we are sure that the evaluation of f is completed (and there are no first-class functions in the language to keep references on variables beyond their scope of definition).
We also define a cleaning operator, · \ ·, to remove a set of variables from the store.

Definition 2.10 (Cleaning of a store). The store s cleaned with respect to the variables in ρ, written s \ ρ, is defined as s \ ρ = s|dom(s)\Im(ρ).
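A minimal sketch of this operator on the association-list stores used in the earlier sketches:

    (* s \ ρ : keep only the locations of s that are not in the image of ρ *)
    let clean (s : (int * 'v) list) (rho : (string * int) list) : (int * 'v) list =
      List.filter (fun (l, _) -> not (List.exists (fun (_, l') -> l' = l) rho)) s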
2.2.2 Compact closures
Another source of complexity with the naive reduction rules is the inclusion of useless variables in closures. It is safe to remove, from the environments of variables contained in closures, the variables that are also parameters of the function: when the function is called, and the environment restored, these variables will be hidden by the freshly instantiated parameters.
This is typically what happens to lifted parameters: they are free variables, captured in
the closure when the function is defined, but these captured values will never be used since
calling the function adds fresh parameters with the same names. We introduce compact closures
in the optimised reduction rules to avoid dealing with this hiding mechanism in the proof of
lambda-lifting.
A compact closure is a closure that does not capture any variable which would be hidden
when the closure is called because of function parameters having the same name.
Definition 2.11 (Compact closure and environment). A closure [λx1 ... xn.M, ρ, F] is compact if ∀i, xi ∉ dom(ρ) and F is compact. An environment is compact if it contains only compact closures.

We define a canonical mapping from any environment F to a compact environment F*, restricting the domains of every closure in F.

Definition 2.12 (Canonical compact environment). The canonical compact environment F* is the unique environment with the same domain as F such that

  ∀f ∈ dom(F),  F f = [λx1 ... xn.M, ρ, F′]
  implies       F* f = [λx1 ... xn.M, ρ|dom(ρ)\{x1...xn}, F′*].
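A minimal sketch of this canonical compaction, assuming the closure and fenv types of the interpreter sketch above:

    (* F* : in every closure, recursively drop from the captured environment the
       variables hidden by the closure's own parameters (Definition 2.12) *)
    let rec compact (fs : fenv) : fenv =
      List.map
        (fun (f, Clo (xs, body, rho, fs')) ->
           let rho' = List.filter (fun (y, _) -> not (List.mem y xs)) rho in
           (f, Clo (xs, body, rho', compact fs')))
        fs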
2.2.3 Optimised reduction rules
Combining both optimisations yields the optimised reduction rules (Figure 2, p. 8), used in Section 2.4 for the proof of lambda-lifting. We ensure minimal stores by cleaning them in the (val),
(var) and (assign) rules, which correspond to tail positions; split environments are introduced in
the (call) rule to distinguish fresh parameters, to be cleaned, from captured variables, which are
preserved. Tail positions are tracked in every rule through split environments, to avoid cleaning
variables too early, in a non-tail branch.
We also build compact closures in the (letrec) rule by removing the parameters of f from the captured environment ρ′.
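A minimal sketch of how these rules thread split environments, reusing the types of the naive interpreter sketch and the clean helper above; only the cases whose handling of ρT|ρ is characteristic are shown, the remaining ones follow Figure 2 in the same way:

    let rec eval_opt (m : term) (s : store) (rho_t : venv) (rho : venv) (fs : fenv)
      : value * store =
      match m with
      | Val v -> (v, clean s rho_t)                  (* (val): tail position, clean the store *)
      | Var x -> (List.assoc (List.assoc x (rho_t @ rho)) s, clean s rho_t)
      | Seq (a, b) ->                                (* (seq): a is not in tail position *)
          let _, s' = eval_opt a s [] (rho_t @ rho) fs in
          eval_opt b s' rho_t rho fs
      | _ -> failwith "remaining cases omitted from this sketch"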
Theorem 2.13 (Equivalence between naive and optimised reduction rules). Optimised and naive reduction rules are equivalent: every reduction in one set of rules yields the same result in the other. It is necessary, however, to take care of locations left in the store by the naive reduction:

  M ⋆ ε ⇒[ε|ε, ε] v ⋆ ε   iff   ∃s, M ⋆ ε →[ε, ε] v ⋆ s.

We prove this theorem in Section 2.3.
2.3 Equivalence of optimised and naive reduction rules
This section is devoted to the proof of equivalence between the optimised and naive reduction rules (Theorem 2.13).
To clarify the proof, we introduce intermediate reduction rules (Figure 3, p. 9), with only one of the two optimisations: minimal stores, but not compact closures.
The proof then consists in proving that optimised and intermediate rules are equivalent (Lemma 2.15 and Lemma 2.16, Section 2.3.1), then that naive and intermediate rules are equivalent (Lemma 2.21 and Lemma 2.22, Section 2.3.2).

  Naive rules  ⟷ (Lemmas 2.21 and 2.22)  Intermediate rules  ⟷ (Lemmas 2.15 and 2.16)  Optimised rules
Judgments carry a split environment ρT|ρ (tail and non-tail) and are written with a double arrow: M ⋆ s ⇒[ρT|ρ, F] v ⋆ s′. We write ε for the empty environment.

(val)                        v ⋆ s ⇒[ρT|ρ, F] v ⋆ s\ρT

(var)      ρT·ρ x = l ∈ dom(s)
           ------------------------------------
           x ⋆ s ⇒[ρT|ρ, F] s l ⋆ s\ρT

(assign)   a ⋆ s ⇒[ε|ρT·ρ, F] v ⋆ s′        ρT·ρ x = l ∈ dom(s′)
           ------------------------------------------------------
           x := a ⋆ s ⇒[ρT|ρ, F] 1 ⋆ (s′ + {l ↦ v})\ρT

(seq)      a ⋆ s ⇒[ε|ρT·ρ, F] v ⋆ s′        b ⋆ s′ ⇒[ρT|ρ, F] v′ ⋆ s″
           -----------------------------------------------------------
           a ; b ⋆ s ⇒[ρT|ρ, F] v′ ⋆ s″

(if-true)  a ⋆ s ⇒[ε|ρT·ρ, F] true ⋆ s′     b ⋆ s′ ⇒[ρT|ρ, F] v ⋆ s″
           -----------------------------------------------------------
           if a then b else c ⋆ s ⇒[ρT|ρ, F] v ⋆ s″

(if-false) a ⋆ s ⇒[ε|ρT·ρ, F] false ⋆ s′    c ⋆ s′ ⇒[ρT|ρ, F] v ⋆ s″
           -----------------------------------------------------------
           if a then b else c ⋆ s ⇒[ρT|ρ, F] v ⋆ s″

(letrec)   b ⋆ s ⇒[ρT|ρ, F′] v ⋆ s′
           ρ′ = (ρT·ρ)|dom(ρT·ρ)\{x1...xn}        F′ = F + {f ↦ [λx1 ... xn.a, ρ′, F]}
           ----------------------------------------------------------------------------
           letrec f(x1 ... xn) = a in b ⋆ s ⇒[ρT|ρ, F] v ⋆ s′

(call)     F f = [λx1 ... xn.b, ρ′, F′]     ρ″ = (x1, l1) · ... · (xn, ln)     li fresh and distinct
           ∀i, ai ⋆ si ⇒[ε|ρT·ρ, F] vi ⋆ si+1
           b ⋆ sn+1 + {li ↦ vi} ⇒[ρ″|ρ′, F′ + {f ↦ F f}] v ⋆ s′
           -----------------------------------------------------------------------------------------
           f(a1 ... an) ⋆ s1 ⇒[ρT|ρ, F] v ⋆ s′\ρT

Figure 2: Optimised reduction rules
2.3.1 Optimised and intermediate reduction rules equivalence
In this section, we show that optimised and intermediate reduction rules are equivalent:

  Intermediate rules  ⟷ (Lemmas 2.15 and 2.16)  Optimised rules

We must therefore show that it is correct to use compact closures in the optimised reduction rules.
Compact closures carry the implicit idea that some variables can be safely discarded from the environments when we know for sure that they will be hidden. The following lemma formalises this intuition.
Lemma 2.14 (Hidden variables elimination).

  ∀l, l′,  M ⋆ s →[ρT·(x,l)|ρ, F] v ⋆ s′   iff   M ⋆ s →[ρT·(x,l)|(x,l′)·ρ, F] v ⋆ s′
  ∀l, l′,  M ⋆ s ⇒[ρT·(x,l)|ρ, F] v ⋆ s′   iff   M ⋆ s ⇒[ρT·(x,l)|(x,l′)·ρ, F] v ⋆ s′

Moreover, both derivations have the same height.

Proof. The exact same proof holds for both intermediate and optimised reduction rules.
By induction on the structure of the derivation. The proof relies solely on the fact that ρT·(x, l)·ρ = ρT·(x, l)·(x, l′)·ρ.
Judgments have the same split-environment form as in Figure 2, but are written with a single arrow, M ⋆ s →[ρT|ρ, F] v ⋆ s′; the (letrec) rule does not compact the captured environment.

(val)                        v ⋆ s →[ρT|ρ, F] v ⋆ s\ρT

(var)      ρT·ρ x = l ∈ dom(s)
           ------------------------------------
           x ⋆ s →[ρT|ρ, F] s l ⋆ s\ρT

(assign)   a ⋆ s →[ε|ρT·ρ, F] v ⋆ s′        ρT·ρ x = l ∈ dom(s′)
           ------------------------------------------------------
           x := a ⋆ s →[ρT|ρ, F] 1 ⋆ (s′ + {l ↦ v})\ρT

(seq)      a ⋆ s →[ε|ρT·ρ, F] v ⋆ s′        b ⋆ s′ →[ρT|ρ, F] v′ ⋆ s″
           -----------------------------------------------------------
           a ; b ⋆ s →[ρT|ρ, F] v′ ⋆ s″

(if-true)  a ⋆ s →[ε|ρT·ρ, F] true ⋆ s′     b ⋆ s′ →[ρT|ρ, F] v ⋆ s″
           -----------------------------------------------------------
           if a then b else c ⋆ s →[ρT|ρ, F] v ⋆ s″

(if-false) a ⋆ s →[ε|ρT·ρ, F] false ⋆ s′    c ⋆ s′ →[ρT|ρ, F] v ⋆ s″
           -----------------------------------------------------------
           if a then b else c ⋆ s →[ρT|ρ, F] v ⋆ s″

(letrec)   b ⋆ s →[ρT|ρ, F′] v ⋆ s′
           ρ′ = ρT·ρ        F′ = F + {f ↦ [λx1 ... xn.a, ρ′, F]}
           -------------------------------------------------------
           letrec f(x1 ... xn) = a in b ⋆ s →[ρT|ρ, F] v ⋆ s′

(call)     F f = [λx1 ... xn.b, ρ′, F′]     ρ″ = (x1, l1) · ... · (xn, ln)     li fresh and distinct
           ∀i, ai ⋆ si →[ε|ρT·ρ, F] vi ⋆ si+1
           b ⋆ sn+1 + {li ↦ vi} →[ρ″|ρ′, F′ + {f ↦ F f}] v ⋆ s′
           -----------------------------------------------------------------------------------------
           f(a1 ... an) ⋆ s1 →[ρT|ρ, F] v ⋆ s′\ρT

Figure 3: Intermediate reduction rules
(seq) ρT·(x, l)·ρ = ρT·(x, l)·(x, l′)·ρ. So,

  a ⋆ s →[ε|ρT·(x,l)·(x,l′)·ρ, F] v ⋆ s′   iff   a ⋆ s →[ε|ρT·(x,l)·ρ, F] v ⋆ s′.

Moreover, by the induction hypotheses,

  b ⋆ s′ →[ρT·(x,l)|(x,l′)·ρ, F] v′ ⋆ s″   iff   b ⋆ s′ →[ρT·(x,l)|ρ, F] v′ ⋆ s″.

Hence,

  a ; b ⋆ s →[ρT·(x,l)|(x,l′)·ρ, F] v′ ⋆ s″   iff   a ; b ⋆ s →[ρT·(x,l)|ρ, F] v′ ⋆ s″.

The other cases are similar.

(val)  v ⋆ s →[ρT·(x,l)|ρ, F] v ⋆ s\(ρT·(x,l))   iff   v ⋆ s →[ρT·(x,l)|(x,l′)·ρ, F] v ⋆ s\(ρT·(x,l)).

(var)  ρT·(x, l)·ρ = ρT·(x, l)·(x, l′)·ρ so, with l″ = ρT·(x, l)·ρ y,

  y ⋆ s →[ρT·(x,l)|ρ, F] s l″ ⋆ s\(ρT·(x,l))   iff   y ⋆ s →[ρT·(x,l)|(x,l′)·ρ, F] s l″ ⋆ s\(ρT·(x,l)).

(assign) ρT·(x, l)·ρ = ρT·(x, l)·(x, l′)·ρ. So,

  a ⋆ s →[ε|ρT·(x,l)·(x,l′)·ρ, F] v ⋆ s′   iff   a ⋆ s →[ε|ρT·(x,l)·ρ, F] v ⋆ s′.

Hence, with l″ = ρT·(x, l)·ρ y,

  y := a ⋆ s →[ρT·(x,l)|ρ, F] 1 ⋆ (s′ + {l″ ↦ v})\(ρT·(x,l))   iff
  y := a ⋆ s →[ρT·(x,l)|(x,l′)·ρ, F] 1 ⋆ (s′ + {l″ ↦ v})\(ρT·(x,l)).

(if-true) and (if-false) are proved similarly to (seq).

(letrec) ρT·(x, l)·ρ = ρT·(x, l)·(x, l′)·ρ = ρ′. Moreover, by the induction hypotheses,

  b ⋆ s →[ρT·(x,l)|(x,l′)·ρ, F′] v ⋆ s′   iff   b ⋆ s →[ρT·(x,l)|ρ, F′] v ⋆ s′.

Hence,

  letrec f(x1 ... xn) = a in b ⋆ s →[ρT·(x,l)|(x,l′)·ρ, F] v ⋆ s′   iff
  letrec f(x1 ... xn) = a in b ⋆ s →[ρT·(x,l)|ρ, F] v ⋆ s′.

(call) ρT·(x, l)·ρ = ρT·(x, l)·(x, l′)·ρ. So,

  ∀i, ai ⋆ si →[ε|ρT·(x,l)·(x,l′)·ρ, F] vi ⋆ si+1   iff   ai ⋆ si →[ε|ρT·(x,l)·ρ, F] vi ⋆ si+1.

Hence,

  f(a1 ... an) ⋆ s1 →[ρT·(x,l)|(x,l′)·ρ, F] v ⋆ s′\(ρT·(x,l))   iff
  f(a1 ... an) ⋆ s1 →[ρT·(x,l)|ρ, F] v ⋆ s′\(ρT·(x,l)).
Now we can show the required lemmas and prove the equivalence between the intermediate and optimised reduction rules.

Lemma 2.15 (Intermediate implies optimised).
If M ⋆ s →[ρT|ρ, F] v ⋆ s′ then M ⋆ s ⇒[ρT|ρ, F*] v ⋆ s′.
Proof. By induction on the structure of the derivation. The interesting cases are (letrec) and (call), where compact environments are respectively built and used.

(letrec) By the induction hypotheses,

  b ⋆ s ⇒[ρT|ρ, F′*] v ⋆ s′.

Since we defined canonical compact environments so as to match exactly the way compact environments are built in the optimised reduction rules, the constraints of the (letrec) rule are fulfilled:

  F′* = F* + {f ↦ [λx1 ... xn.a, ρ′, F*]},

hence:

  letrec f(x1 ... xn) = a in b ⋆ s ⇒[ρT|ρ, F*] v ⋆ s′.

(call) By the induction hypotheses,

  ∀i, ai ⋆ si ⇒[ε|ρT·ρ, F*] vi ⋆ si+1

and

  b ⋆ sn+1 + {li ↦ vi} ⇒[ρ″|ρ′, (F′ + {f ↦ F f})*] v ⋆ s′.

Lemma 2.14 allows us to remove hidden variables, which leads to

  b ⋆ sn+1 + {li ↦ vi} ⇒[ρ″ | ρ′|dom(ρ′)\{x1...xn}, (F′ + {f ↦ F f})*] v ⋆ s′.

Besides,

  F* f = [λx1 ... xn.b, ρ′|dom(ρ′)\{x1...xn}, F′*]

and

  (F′ + {f ↦ F f})* = F′* + {f ↦ F* f}.

Hence

  f(a1 ... an) ⋆ s1 ⇒[ρT|ρ, F*] v ⋆ s′\ρT.

(val)  v ⋆ s ⇒[ρT|ρ, F*] v ⋆ s\ρT.

(var)  x ⋆ s ⇒[ρT|ρ, F*] s l ⋆ s\ρT.

(assign) By the induction hypotheses, a ⋆ s ⇒[ε|ρT·ρ, F*] v ⋆ s′. Hence,

  x := a ⋆ s ⇒[ρT|ρ, F*] 1 ⋆ (s′ + {l ↦ v})\ρT.

(seq) By the induction hypotheses,

  a ⋆ s ⇒[ε|ρT·ρ, F*] v ⋆ s′   and   b ⋆ s′ ⇒[ρT|ρ, F*] v′ ⋆ s″.

Hence,

  a ; b ⋆ s ⇒[ρT|ρ, F*] v′ ⋆ s″.

(if-true) and (if-false) are proved similarly to (seq).
Lemma 2.16 (Optimised implies intermediate).
If M ⋆ s ⇒[ρT|ρ, F] v ⋆ s′ then, for all G such that G* = F, M ⋆ s →[ρT|ρ, G] v ⋆ s′.
Proof. First note that, since G* = F, F is necessarily compact.
By induction on the structure of the derivation. The interesting cases are (letrec) and (call), where non-compact environments are respectively built and used.

(letrec) Let G be such that G* = F. Remember that ρ′ = (ρT·ρ)|dom(ρT·ρ)\{x1...xn}. Let

  G′ = G + {f ↦ [λx1 ... xn.a, ρT·ρ, G]},

which leads, since F is compact (F* = F), to

  G′* = F* + {f ↦ [λx1 ... xn.a, ρ′, F*]}
      = F′.

By the induction hypotheses,

  b ⋆ s →[ρT|ρ, G′] v ⋆ s′.

Hence,

  letrec f(x1 ... xn) = a in b ⋆ s →[ρT|ρ, G] v ⋆ s′.

(call) Let G be such that G* = F. By the induction hypotheses,

  ∀i, ai ⋆ si →[ε|ρT·ρ, G] vi ⋆ si+1.

Moreover, since (G f)* = F f,

  G f = [λx1 ... xn.b, (xi, li)·...·(xj, lj)·ρ′, G′]

where G′* = F′, and the li are some locations stripped out when compacting G to get F. By the induction hypotheses,

  b ⋆ sn+1 + {li ↦ vi} →[ρ″|ρ′, G′ + {f ↦ G f}] v ⋆ s′.

Lemma 2.14 leads to

  b ⋆ sn+1 + {li ↦ vi} →[ρ″|(xi, li)·...·(xj, lj)·ρ′, G′ + {f ↦ G f}] v ⋆ s′.

Hence,

  f(a1 ... an) ⋆ s1 →[ρT|ρ, G] v ⋆ s′\ρT.

(val)  For all G such that G* = F, v ⋆ s →[ρT|ρ, G] v ⋆ s\ρT.

(var)  For all G such that G* = F, x ⋆ s →[ρT|ρ, G] s l ⋆ s\ρT.

(assign) Let G be such that G* = F. By the induction hypotheses, a ⋆ s →[ε|ρT·ρ, G] v ⋆ s′. Hence,

  x := a ⋆ s →[ρT|ρ, G] 1 ⋆ (s′ + {l ↦ v})\ρT.

(seq) Let G be such that G* = F. By the induction hypotheses,

  a ⋆ s →[ε|ρT·ρ, G] v ⋆ s′   and   b ⋆ s′ →[ρT|ρ, G] v′ ⋆ s″.

Hence,

  a ; b ⋆ s →[ρT|ρ, G] v′ ⋆ s″.

(if-true) and (if-false) are proved similarly to (seq).
2.3.2 Intermediate and naive reduction rules equivalence
In this section, we show that the naive and intermediate reduction rules are equivalent:

  Naive rules  ⟷ (Lemmas 2.21 and 2.22)  Intermediate rules

We must therefore show that it is correct to use minimal stores in the intermediate reduction rules. We first define a partial order on stores:
Definition 2.17 (Store extension).

  s′ ⊒ s   iff   s′|dom(s) = s.
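A minimal sketch of this order on association-list stores:

    (* s' ⊒ s : s' agrees with s on every location of dom(s) *)
    let extends (s' : (int * 'v) list) (s : (int * 'v) list) : bool =
      List.for_all (fun (l, v) -> List.assoc_opt l s' = Some v) s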
Property 2.18. Store extension (⊒) is a partial order over stores. The following operations preserve this order: · \ ρ and · + {l ↦ v}, for some given ρ, l and v.

Proof. Immediate when considering the stores as function graphs: ⊒ is the inclusion, · \ ρ a relative complement, and · + {l ↦ v} a disjoint union (preceded by · \ (l, v′) when l is already bound to some v′).
Before we prove that using minimal stores is equivalent to using full stores, we need an alpha-
conversion lemma, which allows us to rename locations in the store, provided the new location
does not already appear in the store or the environments. It is used when choosing a fresh
location for the (call) rule in proofs by induction.
Lemma 2.19 (Alpha-conversion). If M ⋆ s →[ρT|ρ, F] v ⋆ s′ then, for all l, for all l′ appearing neither in s nor in F nor in ρ·ρT,

  M ⋆ s[l′/l] →[ρT[l′/l] | ρ[l′/l], F[l′/l]] v ⋆ s′[l′/l].

Moreover, both derivations have the same height.
Proof. By induction on the height of the derivation. For the (call) case, we must ensure that the fresh locations li do not clash with l′. In case they do, we conclude by applying the induction hypotheses twice: first to rename the clashing li into a fresh l′i, then to rename l into l′.
Two preliminary elementary remarks. First, provided l′ appears neither in ρ or ρT, nor in s,

  (s \ ρ)[l′/l] = (s[l′/l]) \ (ρ[l′/l])

and

  (ρT·ρ)[l′/l] = ρT[l′/l]·ρ[l′/l].

Moreover, if M ⋆ s →[ρT|ρ, F] v ⋆ s′, then dom(s′) = dom(s \ ρT) (straightforward by induction). This leads to: ρT = ε implies dom(s′) = dom(s).
We reason by induction on the height of the derivation, because the induction hypothesis must be applied twice in the case of the (call) rule.

(call) ∀i, dom(si) = dom(si+1). Thus, ∀i, l′ ∉ dom(si). This leads, by the induction hypotheses, to

  ∀i, ai ⋆ si[l′/l] →[ε | (ρT·ρ)[l′/l], F[l′/l]] vi ⋆ si+1[l′/l].

Moreover, F′ is part of F. As a result, since l′ does not appear in F, it does not appear in F′, nor in F′ + {f ↦ F f}. It does not appear in ρ′ either (since ρ′ is part of F). On the other hand, there might be some j such that lj = l′, so l′ might appear in ρ″. In that case, we apply the induction hypotheses a first time to rename lj into some l′j ≠ l′. One can choose l′j such that it does not appear in sn+1, F′ + {f ↦ F f} nor in ρ″·ρ′. As a result, l′j is fresh. Since lj is fresh too, and does not appear in dom(s′) (because of our preliminary remarks), this leads to a mere substitution in ρ″:

  b ⋆ sn+1 + {li[l′j/lj] ↦ vi} →[ρ″[l′j/lj] | ρ′, F′ + {f ↦ F f}] v ⋆ s′.

Once this (potentially) disturbing lj has been renamed (we ignore it in the rest of the proof), we apply the induction hypotheses a second time to rename l into l′:

  b ⋆ (sn+1 + {li ↦ vi})[l′/l] →[ρ″[l′/l] | ρ′[l′/l], (F′ + {f ↦ F f})[l′/l]] v ⋆ s′[l′/l].

Now, (sn+1 + {li ↦ vi})[l′/l] = sn+1[l′/l] + {li ↦ vi}. Moreover,

  F[l′/l] f = [λx1 ... xn.b, ρ′[l′/l], F′[l′/l]]

and

  (F′ + {f ↦ F f})[l′/l] = F′[l′/l] + {f ↦ F[l′/l] f}.

Finally, ρ″[l′/l] = ρ″. Hence:

  f(a1 ... an) ⋆ s1[l′/l] →[ρT[l′/l] | ρ[l′/l], F[l′/l]] v ⋆ s′[l′/l] \ ρT[l′/l].

(val)  v ⋆ s[l′/l] →[ρT[l′/l] | ρ[l′/l], F[l′/l]] v ⋆ s[l′/l] \ ρT[l′/l].

(var)  s[l′/l] (ρT[l′/l]·ρ[l′/l] x) = s (ρT·ρ x) = v implies

  x ⋆ s[l′/l] →[ρT[l′/l] | ρ[l′/l], F[l′/l]] v ⋆ s[l′/l] \ ρT[l′/l].

(assign) By the induction hypotheses,

  a ⋆ s[l′/l] →[ε | (ρT·ρ)[l′/l], F[l′/l]] v ⋆ s′[l′/l].

Let s″ = s′ + {ρT·ρ x ↦ v}. Then,

  s′[l′/l] + {(ρT·ρ)[l′/l] x ↦ v} = s″[l′/l].

Hence

  x := a ⋆ s[l′/l] →[ρT[l′/l] | ρ[l′/l], F[l′/l]] 1 ⋆ s″[l′/l] \ ρT[l′/l].

(seq) By the induction hypotheses,

  a ⋆ s[l′/l] →[ε | (ρT·ρ)[l′/l], F[l′/l]] v ⋆ s′[l′/l].

Besides, dom(s′) = dom(s), therefore l′ ∉ dom(s′). Then, by the induction hypotheses,

  b ⋆ s′[l′/l] →[ρT[l′/l] | ρ[l′/l], F[l′/l]] v′ ⋆ s″[l′/l].

Hence

  a ; b ⋆ s[l′/l] →[ρT[l′/l] | ρ[l′/l], F[l′/l]] v′ ⋆ s″[l′/l].

(if-true) and (if-false) are proved similarly to (seq).

(letrec) Since l′ appears neither in ρ′ nor in F, it does not appear in F′ either. By the induction hypotheses,

  b ⋆ s[l′/l] →[ρT[l′/l] | ρ[l′/l], F′[l′/l]] v ⋆ s′[l′/l].

Moreover,

  F′[l′/l] = F[l′/l] + {f ↦ [λx1 ... xn.a, ρ′[l′/l], F[l′/l]]}.

Hence

  letrec f(x1 ... xn) = a in b ⋆ s[l′/l] →[ρT[l′/l] | ρ[l′/l], F[l′/l]] v ⋆ s′[l′/l].
To prove that using minimal stores is correct, we need to extend them so as to recover the
full stores of naive reduction. The following lemma shows that extending a store before an
(intermediate) reduction extends the resulting store too:
Lemma 2.20 (Extending a store in a derivation).
Given the reduction M ⋆ s →[ρT|ρ, F] v ⋆ s′, then ∀t ⊒ s, ∃t′ ⊒ s′, M ⋆ t →[ρT|ρ, F] v ⋆ t′.
Moreover, both derivations have the same height.
Proof. By induction on the height of the derivation. The most interesting case is (call), which requires alpha-converting a location (hence the induction on the height rather than the structure of the derivation).
(var), (val) and (assign) are straightforward by the induction hypotheses and Property 2.18; (seq), (if-true), (if-false) and (letrec) are straightforward by the induction hypotheses.

(call) Let t1 ⊒ s1. By the induction hypotheses, applied successively to each argument,

  ∃t2 ⊒ s2, a1 ⋆ t1 →[ε|ρT·ρ, F] v1 ⋆ t2
  ∃ti+1 ⊒ si+1, ai ⋆ ti →[ε|ρT·ρ, F] vi ⋆ ti+1
  ∃tn+1 ⊒ sn+1, an ⋆ tn →[ε|ρT·ρ, F] vn ⋆ tn+1.

The locations li might belong to dom(tn+1) and thus not be fresh. By alpha-conversion (Lemma 2.19), we choose fresh l′i (not in Im(ρ′) and dom(s′)) such that

  b ⋆ sn+1 + {l′i ↦ vi} →[(x1, l′1) · ... · (xn, l′n) | ρ′, F′ + {f ↦ F f}] v ⋆ s′.

By Property 2.18, tn+1 + {l′i ↦ vi} ⊒ sn+1 + {l′i ↦ vi}. By the induction hypotheses,

  ∃t′ ⊒ s′, b ⋆ tn+1 + {l′i ↦ vi} →[(x1, l′1) · ... · (xn, l′n) | ρ′, F′ + {f ↦ F f}] v ⋆ t′.

Moreover, t′ \ ρT ⊒ s′ \ ρT. Hence,

  f(a1 ... an) ⋆ t1 →[ρT|ρ, F] v ⋆ t′ \ ρT.

(val) Let t ⊒ s. v ⋆ t →[ρT|ρ, F] v ⋆ t \ ρT and t′ = t \ ρT ⊒ s \ ρT = s′ (Property 2.18).

(var) Let t ⊒ s. x ⋆ t →[ρT|ρ, F] t l ⋆ t \ ρT and t′ = t \ ρT ⊒ s \ ρT = s′ (Property 2.18). Moreover, t l = s l because l ∈ dom(s) and t|dom(s) = s.

(assign) Let t ⊒ s. By the induction hypotheses,

  ∃t′ ⊒ s′, a ⋆ t →[ε|ρT·ρ, F] v ⋆ t′.

Hence,

  x := a ⋆ t →[ρT|ρ, F] 1 ⋆ (t′ + {l ↦ v}) \ ρT

concludes, since (t′ + {l ↦ v}) \ ρT ⊒ (s′ + {l ↦ v}) \ ρT (Property 2.18).

(seq) Let t ⊒ s. By the induction hypotheses,

  ∃t′ ⊒ s′, a ⋆ t →[ε|ρT·ρ, F] v ⋆ t′
  ∃t″ ⊒ s″, b ⋆ t′ →[ρT|ρ, F] v′ ⋆ t″.

Hence,

  ∃t″ ⊒ s″, a ; b ⋆ t →[ρT|ρ, F] v′ ⋆ t″.

(if-true) and (if-false) are proved similarly to (seq).

(letrec) Let t ⊒ s. By the induction hypotheses,

  ∃t′ ⊒ s′, b ⋆ t →[ρT|ρ, F′] v ⋆ t′.

Hence,

  ∃t′ ⊒ s′, letrec f(x1 ... xn) = a in b ⋆ t →[ρT|ρ, F] v ⋆ t′.
Now we can show the required lemmas and prove the equivalence between the intermediate
and naive reduction rules.
Lemma 2.21 (Intermediate implies naive).
If M ⋆ s →[ρT|ρ, F] v ⋆ s′ then ∃t ⊒ s′, M ⋆ s →[ρT·ρ, F] v ⋆ t.
Proof. By induction on the height of the derivation, because some stores are modified during the proof. The interesting cases are (seq) and (call), where Lemma 2.20 is used to extend intermediary stores. Other cases are straightforward by Property 2.18 and the induction hypotheses.

(seq) By the induction hypotheses,

  ∃t ⊒ s′, a ⋆ s →[ρT·ρ, F] v ⋆ t.

Moreover,

  b ⋆ s′ →[ρT|ρ, F] v′ ⋆ s″.

Since t ⊒ s′, Lemma 2.20 leads to:

  ∃t′ ⊒ s″, b ⋆ t →[ρT|ρ, F] v′ ⋆ t′

and the height of the derivation is preserved. By the induction hypotheses,

  ∃t″ ⊒ t′, b ⋆ t →[ρT·ρ, F] v′ ⋆ t″.

Hence, since ⊒ is transitive (Property 2.18),

  ∃t″ ⊒ s″, a ; b ⋆ s →[ρT·ρ, F] v′ ⋆ t″.

(call) Similarly to the (seq) case, we apply the induction hypotheses and Lemma 2.20 to each argument in turn:

  ∃t2 ⊒ s2, a1 ⋆ s1 →[ρT·ρ, F] v1 ⋆ t2                 (Induction)
  ∃t′i+1 ⊒ si+1, ai ⋆ ti →[ε|ρT·ρ, F] vi ⋆ t′i+1        (Lemma 2.20)
  ∃ti+1 ⊒ t′i+1 ⊒ si+1, ai ⋆ ti →[ρT·ρ, F] vi ⋆ ti+1    (Induction)
  ∃t′n+1 ⊒ sn+1, an ⋆ tn →[ε|ρT·ρ, F] vn ⋆ t′n+1        (Lemma 2.20)
  ∃tn+1 ⊒ t′n+1 ⊒ sn+1, an ⋆ tn →[ρT·ρ, F] vn ⋆ tn+1    (Induction)

The locations li might belong to dom(tn+1) and thus not be fresh. By alpha-conversion (Lemma 2.19), we choose a set of fresh l′i (not in Im(ρ′) and dom(s′)) such that

  b ⋆ sn+1 + {l′i ↦ vi} →[(x1, l′1) · ... · (xn, l′n) | ρ′, F′ + {f ↦ F f}] v ⋆ s′.

By Property 2.18, tn+1 + {l′i ↦ vi} ⊒ sn+1 + {l′i ↦ vi}. Lemma 2.20 leads to

  ∃t ⊒ s′, b ⋆ tn+1 + {l′i ↦ vi} →[(x1, l′1) · ... · (xn, l′n) | ρ′, F′ + {f ↦ F f}] v ⋆ t.

By the induction hypotheses,

  ∃t′ ⊒ t ⊒ s′, b ⋆ tn+1 + {l′i ↦ vi} →[(x1, l′1) · ... · (xn, l′n) · ρ′, F′ + {f ↦ F f}] v ⋆ t′.

Moreover, t′ ⊒ s′ ⊒ s′ \ ρT. Hence,

  f(a1 ... an) ⋆ s1 →[ρT·ρ, F] v ⋆ t′.

(val) v ⋆ s →[ρT·ρ, F] v ⋆ t with t = s ⊒ s \ ρT = s′.

(var) x ⋆ s →[ρT·ρ, F] s l ⋆ t with t = s ⊒ s \ ρT = s′.

(assign) By the induction hypotheses,

  ∃t ⊒ s′, a ⋆ s →[ρT·ρ, F] v ⋆ t.

Hence,

  x := a ⋆ s →[ρT·ρ, F] 1 ⋆ t + {l ↦ v}

concludes, since t + {l ↦ v} ⊒ s′ + {l ↦ v} ⊒ (s′ + {l ↦ v}) \ ρT (Property 2.18).

(if-true) and (if-false) are proved similarly to (seq).

(letrec) By the induction hypotheses,

  ∃t ⊒ s′, b ⋆ s →[ρT·ρ, F′] v ⋆ t.

Hence,

  ∃t ⊒ s′, letrec f(x1 ... xn) = a in b ⋆ s →[ρT·ρ, F] v ⋆ t.
The proof of the converse property — i.e. if a term reduces in the naive reduction rules, it reduces in the intermediate reduction rules too — is more complex because the naive reduction rules provide very weak invariants about stores and environments. For that reason, we add a hypothesis to ensure that every location appearing in the environments ρ, ρT and F also appears in the store s:

  Im(ρT·ρ) ∪ Loc(F) ⊂ dom(s).

Moreover, since stores are often larger in the naive reduction rules than in the intermediate ones, we need to generalise the induction hypothesis.
Lemma 2.22 (Naive implies intermediate). Assume Im(ρT·ρ) ∪ Loc(F) ⊂ dom(s). Then,

  M ⋆ s →[ρT·ρ, F] v ⋆ s′

implies

  ∀t ⊑ s such that Im(ρT·ρ) ∪ Loc(F) ⊂ dom(t),   M ⋆ t →[ρT|ρ, F] v ⋆ s′|dom(t)\Im(ρT).
Proof. By induction on the structure of the derivation.

(val) Let t ⊑ s. Then

  t \ ρT = s|dom(t)\Im(ρT)    because s|dom(t) = t
         = s′|dom(t)\Im(ρT)   because s′ = s.

Hence,

  v ⋆ t →[ρT|ρ, F] v ⋆ t \ ρT.

(var) Let t ⊑ s such that Im(ρT·ρ) ∪ Loc(F) ⊂ dom(t). Note that l ∈ Im(ρT·ρ) ⊂ dom(t) implies t l = s l. Then,

  t \ ρT = s|dom(t)\Im(ρT)    because s|dom(t) = t
         = s′|dom(t)\Im(ρT)   because s′ = s.

Hence,

  x ⋆ t →[ρT|ρ, F] t l ⋆ t \ ρT.

(assign) Let t ⊑ s such that Im(ρT·ρ) ∪ Loc(F) ⊂ dom(t). By the induction hypotheses, since Im(ε) = ∅,

  a ⋆ t →[ε|ρT·ρ, F] v ⋆ s′|dom(t).

Note that l ∈ Im(ρT·ρ) ⊂ dom(t) implies l ∈ dom(s′|dom(t)). Then

  (s′|dom(t) + {l ↦ v}) \ ρT = (s′ + {l ↦ v})|dom(t) \ ρT    because l ∈ dom(s′|dom(t))
                             = (s′ + {l ↦ v})|dom(t)\Im(ρT).

Hence,

  x := a ⋆ t →[ρT|ρ, F] 1 ⋆ (s′|dom(t) + {l ↦ v}) \ ρT.

(seq) Let t ⊑ s such that Im(ρT·ρ) ∪ Loc(F) ⊂ dom(t). By the induction hypotheses, since Im(ε) = ∅,

  a ⋆ t →[ε|ρT·ρ, F] v ⋆ s′|dom(t).

Moreover, s′|dom(t) ⊑ s′ and Im(ρT·ρ) ∪ Loc(F) ⊂ dom(s′|dom(t)) = dom(t). By the induction hypotheses, this leads to:

  b ⋆ s′|dom(t) →[ρT|ρ, F] v′ ⋆ s″|dom(s′|dom(t))\Im(ρT).

Hence, with dom(s′|dom(t)) = dom(t),

  a ; b ⋆ t →[ρT|ρ, F] v′ ⋆ s″|dom(t)\Im(ρT).

(if-true) and (if-false) are proved similarly to (seq).

(letrec) Let t ⊑ s such that Im(ρT·ρ) ∪ Loc(F) ⊂ dom(t).
Loc(F′) = Loc(F) ∪ Im(ρT·ρ) implies Im(ρT·ρ) ∪ Loc(F′) ⊂ dom(t).
Then, by the induction hypotheses,

  b ⋆ t →[ρT|ρ, F′] v ⋆ s′|dom(t)\Im(ρT).

Hence,

  letrec f(x1 ... xn) = a in b ⋆ t →[ρT|ρ, F] v ⋆ s′|dom(t)\Im(ρT).

(call) Let t ⊑ s1 such that Im(ρT·ρ) ∪ Loc(F) ⊂ dom(t). Note the following equalities:

  s1|dom(t) = t
  s2|dom(t) ⊑ s2
  Im(ρT·ρ) ∪ Loc(F) ⊂ dom(s2|dom(t)) = dom(t)
  s3|dom(s2|dom(t)) = s3|dom(t)

By the induction hypotheses, they yield:

  a1 ⋆ t →[ε|ρT·ρ, F] v1 ⋆ s2|dom(t)
  a2 ⋆ s2|dom(t) →[ε|ρT·ρ, F] v2 ⋆ s3|dom(t)
  ∀i, ai ⋆ si|dom(t) →[ε|ρT·ρ, F] vi ⋆ si+1|dom(t).

Moreover, sn+1|dom(t) ⊑ sn+1 implies sn+1|dom(t) + {li ↦ vi} ⊑ sn+1 + {li ↦ vi} (Property 2.18) and:

  Im(ρ″·ρ′) ∪ Loc(F′ + {f ↦ F f}) = Im(ρ″) ∪ (Im(ρ′) ∪ Loc(F′))
                                   ⊂ {li} ∪ Loc(F)
                                   ⊂ {li} ∪ dom(t)
                                   ⊂ dom(sn+1|dom(t) + {li ↦ vi}).

Then, by the induction hypotheses,

  b ⋆ sn+1|dom(t) + {li ↦ vi} →[ρ″|ρ′, F′ + {f ↦ F f}] v ⋆ s′|dom(sn+1|dom(t)+{li↦vi})\Im(ρ″).

Finally,

  (s′|dom(sn+1|dom(t)+{li↦vi})\Im(ρ″)) \ ρT = s′|(dom(t)∪{li})\{li} \ ρT = s′|dom(t) \ ρT
                                            = (s′ \ ρT)|dom(t)\Im(ρT)    (by definition of · \ ·).

Hence,

  f(a1 ... an) ⋆ t →[ρT|ρ, F] v ⋆ (s′ \ ρT)|dom(t)\Im(ρT).
2.4 Correctness of lambda-lifting
In this section, we prove the correctness of lambda-lifting (Theorem 2.9, p. 5) by induction on
the height of the optimised reduction.
Section 2.4.1 defines stronger invariants and rewords the correctness theorem with them.
Section 2.4.2 gives an overview of the proof. Sections 2.4.3 and 2.4.4 prove a few lemmas needed
for the proof. Section 2.4.5 contains the actual proof of correctness.
2.4.1 Strengthened hypotheses
We need strong induction hypotheses to ensure that key invariants about stores and environments
hold at every step. For that purpose, we define aliasing-free environments, in which locations may
not be referenced by more than one variable, and local positions. They yield a strengthened ver-
sion of liftable parameters (Definition 2.25). We then define lifted environments (Definition 2.26)
to mirror the effect of lambda-lifting in lifted terms captured in closures, and finally reformulate
the correctness of lambda-lifting in Theorem 2.28 with hypotheses strong enough to be provable
directly by induction.
Definition 2.23 (Aliasing). A set of environments E is aliasing-free when:

  ∀ρ, ρ′ ∈ E, ∀x ∈ dom(ρ), ∀y ∈ dom(ρ′),  ρ x = ρ′ y  ⟹  x = y.

By extension, an environment of functions F is aliasing-free when Env(F) is aliasing-free.
The notion of aliasing-free environments is not an artifact of our small language, but reflects a fundamental property of the C semantics: distinct function parameters or local variables are always bound to distinct memory locations (Section 6.2.2, paragraph 6 in ISO/IEC 9899 [3]).
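A minimal sketch of this check on association-list environments:

    (* a set of environments is aliasing-free when no location is bound under
       two distinct variable names anywhere in the set (Definition 2.23) *)
    let aliasing_free (envs : (string * int) list list) : bool =
      let bindings = List.concat envs in
      List.for_all
        (fun (x, l) -> List.for_all (fun (y, l') -> l <> l' || x = y) bindings)
        bindings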
A local position is any position in a term except inner functions. Local positions are used to
distinguish functions defined directly in a term from deeper nested functions, because we need
to enforce Invariant 3 (Definition 2.25) on the former only.
Definition 2.24 (Local position). Local positions are defined inductively as follows:
1. M is in local position in M, x := M, M ; M, if M then M else M and f(M, ..., M).
2. N is in local position in letrec f(x1 ... xn) = M in N.
We extend the notion of liftable parameter (Definition 2.8, p. 5) to enforce invariants on
stores and environments.
Definition 2.25 (Extended liftability). The parameter x is liftable in (M, F, ρT, ρ) when:
1. x is defined as the parameter of a function g, either in M or in F,
2. in both M and F, inner functions in g, named hi, are defined and called exclusively:
(a) in tail position in g, or
(b) in tail position in some hj (with possibly i = j), or
(c) in tail position in M,
3. for all f defined in local position in M, x ∈ dom(ρT·ρ) ⇔ ∃i, f = hi,
4. moreover, if hi is called in tail position in M, then x ∈ dom(ρT),
5. in F, x appears necessarily and exclusively in the environments of the hi's closures,
6. F contains only compact closures and Env(F) ∪ {ρ, ρT} is aliasing-free.
We also extend the definition of lambda-lifting (Definition 2.6, p. 5) to environments, in order
to reflect changes in lambda-lifted parameters captured in closures.
Definition 2.26 (Lifted form of an environment).
If F f = [λx1 ... xn.b, ρ, F′] then

  (F)* f = [λx1 ... xn x.(b)*, ρ|dom(ρ)\{x}, (F′)*]   when f = hi for some i
  (F)* f = [λx1 ... xn.(b)*, ρ, (F′)*]                otherwise.
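A minimal sketch of this lifted environment, assuming the fenv type of the interpreter sketch and the lift_param function above:

    (* (F)* : closures of the lifted functions h_i gain x as a parameter and lose x
       from their captured environment; every body is lifted (Definition 2.26) *)
    let rec lift_fenv (x : string) (lifted : string list) (fs : fenv) : fenv =
      List.map
        (fun (f, Clo (xs, body, rho, fs')) ->
           let body' = lift_param x lifted body in
           if List.mem f lifted then
             (f, Clo (xs @ [x], body', List.remove_assoc x rho, lift_fenv x lifted fs'))
           else
             (f, Clo (xs, body', rho, lift_fenv x lifted fs')))
        fs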
Lifted environments are defined such that a liftable parameter never appears in them. This
property will be useful during the proof of correctness.
Lemma 2.27. If x is a liftable parameter in (M, F, ρT, ρ), then x does not appear in (F)*.

Proof. Since x is liftable in (M, F, ρT, ρ), it appears exclusively in the environments of the hi. By definition, it is removed when building (F)*.
These invariants and definitions lead to a correctness theorem with stronger hypotheses.

Theorem 2.28 (Correctness of lambda-lifting). If x is a liftable parameter in (M, F, ρT, ρ), then

  M ⋆ s ⇒[ρT|ρ, F] v ⋆ s′   implies   (M)* ⋆ s ⇒[ρT|ρ, (F)*] v ⋆ s′.

Since naive and optimised reduction rules are equivalent (Theorem 2.13, p. 7), the proof of Theorem 2.9 (p. 5) is a direct corollary of this theorem.

Corollary 2.29. If x is a liftable parameter in M, then

  ∀t, M ⋆ ε →[ε, ε] v ⋆ t   implies   ∃t′, (M)* ⋆ ε →[ε, ε] v ⋆ t′.
2.4.2 Overview of the proof
With the enhanced liftability definition, we have invariants strong enough to perform a proof by
induction of the correctness theorem. This proof is detailed in Section 2.4.5.
The proof is not by structural induction but by induction on the height of the derivation.
This is necessary because, even with the stronger invariants, we cannot apply the induction
hypotheses directly to the premises in the case of the (call) rule: we have to change the stores
and environments, which means rewriting the whole derivation tree, before using the induction
hypotheses.
To deal with this most difficult case, we distinguish between calling one of the lifted functions (f = hi) and calling another function (either g, where x is defined, or any other function outside of g). Only the former requires rewriting; the latter follows directly from the induction hypotheses.
In the (call) rule with f = hi, issues arise when reducing the body b of the lifted function. During this reduction, indeed, the store contains a new location l′ bound by the environment to the lifted variable x, but also contains the location l which contains the original value of x. Our
goal is to show that the reduction of b implies the reduction of (b)*, with store and environments fulfilling the constraints of the (call) rule.
To obtain the reduction of the lifted body (b)*, we modify the reduction of b in a series of steps, using several lemmas:
– the location l of the free variable x is moved to the tail environment (Lemma 2.30);
– the resulting reduction meets the induction hypotheses, which we apply to obtain the reduction of the lifted body (b)*;
– however, this reduction does not meet the constraints of the optimised reduction rules because the location l is not fresh: we rename it to a fresh location l′ to hold the lifted variable (Lemma 2.31);
– finally, since we renamed l to l′, we need to reintroduce a location l to hold the original value of x (Lemmas 2.32 and 2.33).
The rewriting lemmas used in the (call) case are shown in Section 2.4.3.
For every other case, the proof consists in checking thoroughly that the induction hypotheses
apply, in particular that x is liftable in the premises. These verifications consist in checking
Invariants 3 to 6 of the extended liftability definition (Definition 2.25) — Invariants 1 and 2
are obvious enough not to be detailed. To keep the main proof as compact as possible, the
most difficult cases of liftability, related to aliasing, are proven in some preliminary lemmas
(Section 2.4.4).
One last issue arises during the induction when one of the premises does not contain the lifted
variable x. In that case, the invariants do not hold, since they assume the presence of x. But it
turns out that in this very case, the lifting function is the identity (since there is no variable to
lift) and lambda-lifting is trivially correct.
2.4.3 Rewriting lemmas
Calling a lifted function has an impact on the resulting store: new locations are introduced for the lifted parameters and the earlier locations, which are not modified anymore, are hidden. Because of these changes, the induction hypotheses do not apply directly in the case of the (call) rule for a lifted function hi. We use the following four lemmas to obtain, through several rewriting steps, a reduction of lifted terms meeting the induction hypotheses.
– Lemma 2.30 shows that moving a variable from the non-tail environment ρ to the tail environment ρT does not change the result, but restricts the domain of the store. It is used to transform the original free variable x (in the non-tail environment) into its lifted copy (which is a parameter of hi, hence in the tail environment).
– Lemma 2.31 handles alpha-conversion in stores and is used when choosing a fresh location.
– Lemmas 2.32 and 2.33 finally add into the store and the environment a fresh location, bound to an arbitrary value. They are used to reintroduce the location containing the original value of x, after it has been alpha-converted to l′.
Lemma 2.30 (Switching to tail environment). If M ⋆ s ⇒[ρT|(x,l)·ρ, F] v ⋆ s′ and x ∉ dom(ρT) then

  M ⋆ s ⇒[ρT·(x,l)|ρ, F] v ⋆ s′|dom(s′)\{l}.

Moreover, both derivations have the same height.
Proof. By induction on the structure of the derivation. For the (val), (var), (assign) and (call) cases, we use the fact that s \ (ρT·(x, l)) = s′|dom(s′)\{l} when s′ = s \ ρT.

(val) v ⋆ s ⇒[ρT·(x,l)|ρ, F] v ⋆ s\(ρT·(x,l)) and s\(ρT·(x,l)) = s′|dom(s′)\{l} with s′ = s\ρT.

(var) y ⋆ s ⇒[ρT·(x,l)|ρ, F] s l′ ⋆ s\(ρT·(x,l)) and s\(ρT·(x,l)) = s′|dom(s′)\{l}, with l′ = ρT·(x,l)·ρ y and s′ = s\ρT.

(assign) By hypothesis, a ⋆ s ⇒[ε|ρT·(x,l)·ρ, F] v ⋆ s′ hence y := a ⋆ s ⇒[ρT·(x,l)|ρ, F] 1 ⋆ (s′ + {l′ ↦ v})\(ρT·(x,l)) and (s′ + {l′ ↦ v})\(ρT·(x,l)) = s″|dom(s″)\{l} with l′ = ρT·(x,l)·ρ y and s″ = (s′ + {l′ ↦ v})\ρT.

(seq) By hypothesis, a ⋆ s ⇒[ε|ρT·(x,l)·ρ, F] v ⋆ s′ and, by the induction hypotheses, b ⋆ s′ ⇒[ρT·(x,l)|ρ, F] v′ ⋆ s″|dom(s″)\{l}, hence

  a ; b ⋆ s ⇒[ρT·(x,l)|ρ, F] v′ ⋆ s″|dom(s″)\{l}.

(if-true) and (if-false) are proved similarly to (seq).

(letrec) By the induction hypotheses,

  b ⋆ s ⇒[ρT·(x,l)|ρ, F′] v ⋆ s′|dom(s′)\{l},

hence

  letrec f(x1 ... xn) = a in b ⋆ s ⇒[ρT·(x,l)|ρ, F] v ⋆ s′|dom(s′)\{l}.

(call) The hypotheses do not change, and the conclusion becomes:

  f(a1 ... an) ⋆ s1 ⇒[ρT·(x,l)|ρ, F] v ⋆ s′\(ρT·(x,l)),

as expected, since s′\(ρT·(x,l)) = s″|dom(s″)\{l} with s″ = s′\ρT.
Lemma 2.31 (Alpha-conversion). If M ⋆ s ⇒[ρT|ρ, F] v ⋆ s′ then, for all l, for all l′ appearing neither in s nor in F nor in ρ·ρT,

  M ⋆ s[l′/l] ⇒[ρT[l′/l] | ρ[l′/l], F[l′/l]] v ⋆ s′[l′/l].

Moreover, both derivations have the same height.

Proof. See Lemma 2.19.
Lemma 2.32 (Spurious location in store). If M ⋆ s ⇒[ρT|ρ, F] v ⋆ s′ and k does not appear in either s, F or ρT·ρ, then, for all value u,

  M ⋆ s + {k ↦ u} ⇒[ρT|ρ, F] v ⋆ s′ + {k ↦ u}.

Moreover, both derivations have the same height.

Proof. By induction on the height of the derivation. The key idea is to add (k, u) to every store in the derivation tree. A collision might occur in the (call) rule, if there is some j such that lj = k. In that case, we need to rename lj to some fresh location l′j ≠ k (by alpha-conversion) before applying the induction hypotheses.
(call) By the induction hypotheses,

  ∀i, ai ⋆ si + {k ↦ u} ⇒[ε|ρT·ρ, F] vi ⋆ si+1 + {k ↦ u}.

Because k does not appear in F,

  k ∉ Loc(F′ + {f ↦ F f}) ⊂ Loc(F).

For the same reason, it does not appear in ρ′. On the other hand, there might be a j such that lj = k, so k might appear in ρ″. In that case, we rename lj into some fresh l′j ≠ k, appearing in neither sn+1, nor F′ or ρ″·ρ′ (Lemma 2.31). After this alpha-conversion, k does not appear in either ρ″·ρ′, F′ + {f ↦ F f}, or sn+1 + {li ↦ vi}. By the induction hypotheses,

  b ⋆ sn+1 + {li ↦ vi} + {k ↦ u} ⇒[ρ″|ρ′, F′ + {f ↦ F f}] v ⋆ s′ + {k ↦ u}.

Moreover, (s′ + {k ↦ u}) \ ρT = s′ \ ρT + {k ↦ u} (since k does not appear in ρT). Hence

  f(a1 ... an) ⋆ s1 + {k ↦ u} ⇒[ρT|ρ, F] v ⋆ s′ \ ρT + {k ↦ u}.

(val) v ⋆ s + {k ↦ u} ⇒[ρT|ρ, F] v ⋆ (s + {k ↦ u}) \ ρT and (s + {k ↦ u}) \ ρT = s \ ρT + {k ↦ u} since k does not appear in ρT.

(var) x ⋆ s + {k ↦ u} ⇒[ρT|ρ, F] (s + {k ↦ u}) l ⋆ (s + {k ↦ u}) \ ρT, with (s + {k ↦ u}) \ ρT = s \ ρT + {k ↦ u} since k does not appear in ρT, and (s + {k ↦ u}) l = s l since k ≠ l (k does not appear in s).

(assign) By the induction hypotheses, a ⋆ s + {k ↦ u} ⇒[ε|ρT·ρ, F] v ⋆ s′ + {k ↦ u}. And k ≠ l (since k does not appear in s′), then s′ + {k ↦ u} + {l ↦ v} = s′ + {l ↦ v} + {k ↦ u}. Moreover, k does not appear in ρT, then (s′ + {l ↦ v} + {k ↦ u}) \ ρT = (s′ + {l ↦ v}) \ ρT + {k ↦ u}. Hence

  x := a ⋆ s + {k ↦ u} ⇒[ρT|ρ, F] 1 ⋆ (s′ + {l ↦ v}) \ ρT + {k ↦ u}.

(seq) By the induction hypotheses,

  a ⋆ s + {k ↦ u} ⇒[ε|ρT·ρ, F] v ⋆ s′ + {k ↦ u}
  b ⋆ s′ + {k ↦ u} ⇒[ρT|ρ, F] v′ ⋆ s″ + {k ↦ u}.

Hence

  a ; b ⋆ s + {k ↦ u} ⇒[ρT|ρ, F] v′ ⋆ s″ + {k ↦ u}.

(if-true) and (if-false) are proved similarly to (seq).

(letrec) The location k does not appear in F′, because it does not appear in either F or ρ′ ⊂ ρT·ρ (F′ = F + {f ↦ [λx1 ... xn.a, ρ′, F]}). Then, by the induction hypotheses,

  b ⋆ s + {k ↦ u} ⇒[ρT|ρ, F′] v ⋆ s′ + {k ↦ u}.

Hence

  letrec f(x1 ... xn) = a in b ⋆ s + {k ↦ u} ⇒[ρT|ρ, F] v ⋆ s′ + {k ↦ u}.
Lemma 2.33 (Spurious variable in environments).

  ∀l, l′,  M ⋆ s ⇒[ρT·(x,l)|ρ, F] v ⋆ s′   iff   M ⋆ s ⇒[ρT·(x,l)|(x,l′)·ρ, F] v ⋆ s′.

Moreover, both derivations have the same height.

Proof. See Lemma 2.14.
2.4.4 Aliasing lemmas
We need three lemmas to show that environments remain aliasing-free during the proof by in-
duction in Section 2.4.5. The first lemma states that concatenating two environments in an
aliasing-free set yields an aliasing-free set. The other two prove that the aliasing invariant (In-
variant 6, Definition 2.25) holds in the context of the (call) and (letrec) rules, respectively.
Lemma 2.34 (Concatenation). If E ∪ {ρ, ρ′} is aliasing-free then E ∪ {ρ·ρ′} is aliasing-free.

Proof. By exhaustive check of cases. We want to prove

  ∀ρ1, ρ2 ∈ E ∪ {ρ·ρ′}, ∀x ∈ dom(ρ1), ∀y ∈ dom(ρ2), ρ1 x = ρ2 y ⟹ x = y,

given that

  ∀ρ1, ρ2 ∈ E ∪ {ρ, ρ′}, ∀x ∈ dom(ρ1), ∀y ∈ dom(ρ2), ρ1 x = ρ2 y ⟹ x = y.

If ρ1 ∈ E and ρ2 ∈ E, immediate. If ρ1 ∈ {ρ·ρ′}, ρ1 x = ρ x or ρ′ x. This is the same for ρ2. Then ρ1 x = ρ2 y is equivalent to ρ x = ρ′ y (or some other combination, depending on x, y, ρ1 and ρ2), which leads to the expected result.
Lemma 2.35 (Aliasing in (call) rule). Assume that, in a (call) rule,
– F f = [λx1 ... xn.b, ρ′, F′],
– Env(F) is aliasing-free, and
– ρ″ = (x1, l1)·...·(xn, ln), with fresh and distinct locations li.
Then Env(F′ + {f ↦ F f}) ∪ {ρ′, ρ″} is also aliasing-free.

Proof. Let E = Env(F′ + {f ↦ F f}) ∪ {ρ′}. We know that E ⊂ Env(F), so E is aliasing-free. We want to show that adding fresh and distinct locations from ρ″ preserves aliasing-freedom. More precisely, we want to show that

  ∀ρ1, ρ2 ∈ E ∪ {ρ″}, ∀x ∈ dom(ρ1), ∀y ∈ dom(ρ2), ρ1 x = ρ2 y ⟹ x = y,

given that

  ∀ρ1, ρ2 ∈ E, ∀x ∈ dom(ρ1), ∀y ∈ dom(ρ2), ρ1 x = ρ2 y ⟹ x = y.

We reason by checking all cases. If ρ1 ∈ E and ρ2 ∈ E, immediate. If ρ1 = ρ2 = ρ″ then ρ″ x = ρ″ y ⟹ x = y holds because the locations of ρ″ are distinct. If ρ1 = ρ″ and ρ2 ∈ E then ρ1 x = ρ2 y ⟹ x = y holds because ρ1 x ≠ ρ2 y (by freshness hypothesis).
Lemma 2.36 (Aliasing in (letrec) rule). If Env(F) ∪ {ρ, ρT} is aliasing-free, then, for all xi,

  Env(F) ∪ {ρ, ρT} ∪ {(ρT·ρ)|dom(ρT·ρ)\{x1...xn}}

is aliasing-free.

Proof. Let E = Env(F) ∪ {ρ, ρT} and ρ″ = (ρT·ρ)|dom(ρT·ρ)\{x1...xn}. Adding ρ″, a restricted concatenation of ρT and ρ, to E preserves aliasing-freedom, as in the proof of Lemma 2.34. If ρ1 ∈ E and ρ2 ∈ E, immediate. If ρ1 ∈ {ρ″}, ρ1 x = ρT x or ρ x. This is the same for ρ2. Then ρ1 x = ρ2 y is equivalent to ρT x = ρ y (or some other combination, depending on x, y, ρ1 and ρ2), which leads to the expected result.
2.4.5 Proof of correctness
We finally show Theorem 2.28.
Theorem 2.28. If x is a liftable parameter in (M, F, ρT, ρ), then

  M ⋆ s ⇒[ρT|ρ, F] v ⋆ s′   implies   (M)* ⋆ s ⇒[ρT|ρ, (F)*] v ⋆ s′.

Assume that x is a liftable parameter in (M, F, ρT, ρ). The proof is by induction on the height of the reduction of M ⋆ s ⇒[ρT|ρ, F] v ⋆ s′. To keep the proof readable, we detail only the non-trivial cases when checking the invariants of Definition 2.25 to ensure that the induction hypotheses hold.
(call) — first case First, we consider the most interesting case where there exists i such that f = hi. The variable x is a liftable parameter in (hi(a1 ... an), F, ρT, ρ), hence in (ai, F, ε, ρT·ρ) too.
Indeed, the invariants of Definition 2.25 hold:
– Invariant 3: By definition of a local position, every f defined in local position in ai is in local position in hi(a1 ... an), hence the expected property by the induction hypotheses.
– Invariant 4: Immediate since the premise does not hold: since the ai are not in tail position in hi(a1 ... an), they cannot feature calls to hi (by Invariant 2).
– Invariant 6: Lemma 2.34, p. 26.
The other invariants hold trivially.
By the induction hypotheses, we get

  (ai)* ⋆ si ⇒[ε|ρT·ρ, (F)*] vi ⋆ si+1.

By definition of lifting, (hi(a1 ... an))* = hi((a1)*, ..., (an)*, x). But x is not a liftable parameter in (b, F, ρ″, ρ′) since Invariant 4 might be broken: x ∉ dom(ρ″) (x is not a parameter of hi) but hj might appear in tail position in b.
On the other hand, we have x ∈ dom(ρ′): since, by hypothesis, x is a liftable parameter in (hi(a1 ... an), F, ρT, ρ), it appears necessarily in the environments of the closures of the hi, such as ρ′. This allows us to split ρ′ into two parts: ρ′ = (x, l)·ρ‴. It is then possible to move (x, l) to the tail environment, according to Lemma 2.30:

  b ⋆ sn+1 + {li ↦ vi} ⇒[ρ″·(x, l)|ρ‴, F′ + {f ↦ F f}] v ⋆ s′|dom(s′)\{l}.

This rewriting ensures that x is a liftable parameter in (b, F′ + {f ↦ F f}, ρ″·(x, l), ρ‴).
Indeed, the invariants of Definition 2.25 hold:
– Invariant 3: Every function defined in local position in b is an inner function in hi so, by Invariant 2, it is one of the hi and x ∈ dom(ρ″·(x, l)·ρ‴).
– Invariant 4: Immediate since x ∈ dom(ρ″·(x, l)).
– Invariant 5: Immediate since F′ is included in F.
– Invariant 6: Immediate for the compact closures. Aliasing freedom is guaranteed by Lemma 2.35 (p. 26).
The other invariants hold trivially.
By the induction hypotheses,

  (b)* ⋆ sn+1 + {li ↦ vi} ⇒[ρ″·(x, l)|ρ‴, (F′ + {f ↦ F f})*] v ⋆ s′|dom(s′)\{l}.

The location l is not fresh: it must be rewritten into a fresh location, since x is now a parameter of hi. Let l′ be a location appearing in neither (F′ + {f ↦ F f})*, nor sn+1 + {li ↦ vi}, nor ρ″·(x, l)·ρ‴. Then l′ is a fresh location, which is to act as l in the reduction of (b)*.
We will show that, after the reduction, l′ is not in the store (just like l before the lambda-lifting). In the meantime, the value associated to l does not change (since l′ is modified instead of l).
Lemma 2.27 implies that x does not appear in the environments of (F)*, so it does not appear in the environments of (F′ + {f ↦ F f})* ⊂ (F)* either. As a consequence, lack of aliasing implies, by Definition 2.23, that the location l, associated to x, does not appear in (F′ + {f ↦ F f})* either, so

  (F′ + {f ↦ F f})*[l′/l] = (F′ + {f ↦ F f})*.

Moreover, l does not appear in s′|dom(s′)\{l}. By alpha-conversion (Lemma 2.31), since l′ does not appear in the store or the environments of the reduction, we rename l into l′:

  (b)* ⋆ sn+1[l′/l] + {li ↦ vi} ⇒[ρ″·(x, l′)|ρ‴, (F′ + {f ↦ F f})*] v ⋆ s′|dom(s′)\{l}.

We now want to reintroduce l. Let vx = sn+1 l. The location l does not appear in sn+1[l′/l] + {li ↦ vi}, (F′ + {f ↦ F f})*, or ρ″·(x, l′)·ρ‴. Thus, by Lemma 2.32,

  (b)* ⋆ sn+1[l′/l] + {li ↦ vi} + {l ↦ vx} ⇒[ρ″·(x, l′)|ρ‴, (F′ + {f ↦ F f})*] v ⋆ s′|dom(s′)\{l} + {l ↦ vx}.

Since

  sn+1[l′/l] + {li ↦ vi} + {l ↦ vx} = sn+1[l′/l] + {l ↦ vx} + {li ↦ vi}    because ∀i, l ≠ li
                                    = sn+1 + {l′ ↦ vx} + {li ↦ vi}         because vx = sn+1 l
                                    = sn+1 + {li ↦ vi} + {l′ ↦ vx}         because ∀i, l′ ≠ li

and s′|dom(s′)\{l} + {l ↦ vx} = s′ + {l ↦ vx}, we finish the rewriting by Lemma 2.33,

  (b)* ⋆ sn+1 + {li ↦ vi} + {l′ ↦ vx} ⇒[ρ″·(x, l′)|(x, l)·ρ‴, (F′ + {f ↦ F f})*] v ⋆ s′ + {l ↦ vx}.

Hence the result:

(call)   (F)* hi = [λx1 ... xn x.(b)*, ρ′, (F′)*]
         ρ″ = (x1, l1) · ... · (xn, ln) · (x, l′)        l′ and the li fresh and distinct
         ∀i, (ai)* ⋆ si ⇒[ε|ρT·ρ, (F)*] vi ⋆ si+1