Article

Efficient Self-Interpretation in Lambda Calculus

Authors:
Torben Æ. Mogensen

Abstract

We start by giving a compact representation schema for λ-terms and show how this leads to an exceedingly small and elegant self-interpreter. We then define the notion of a self-reducer, and show how this too can be written as a small λ-term. Both the self-interpreter and the self-reducer are proved correct. We finally give a constructive proof for the second fixed point theorem for the representation schema. All the constructions have been implemented on a computer, and experiments verify their correctness. Timings show that the self-interpreter and self-reducer are quite efficient, being about 35 and 50 times slower than direct execution using a call-by-need reduction strategy.

1 Preliminaries

The set of λ-terms, Λ, is defined by the abstract syntax

Λ ::= V | Λ Λ | λV. Λ

where V is a countably infinite set of distinct variables. (Possibly subscripted) lower case letters a, b, x, y, ... are used for variables, and capital letters M, N, E, ... for λ-terms. We will assume familiarity with the rul...
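
As a point of reference, the following is a minimal Haskell sketch of the construction the abstract alludes to: Mogensen's higher-order representation, commonly quoted as ⌈x⌉ = λa b c. a x, ⌈M N⌉ = λa b c. b ⌈M⌉ ⌈N⌉ and ⌈λx.M⌉ = λa b c. c (λx. ⌈M⌉), together with a self-interpreter E = Y (λe m. m (λx.x) (λm n. (e m) (e n)) (λm v. e (m v))) satisfying E ⌈M⌉ = M. The Haskell domain V, the Base constructor used only to observe results, and all identifiers are illustrative additions, not anything taken from the paper.

    -- A small universal domain in which to run untyped λ-terms; Base exists
    -- only so results can be observed and is not part of the pure calculus.
    data V = Fun (V -> V) | Base String

    app :: V -> V -> V
    app (Fun f) x  = f x
    app (Base s) _ = Base s

    lam :: (V -> V) -> V
    lam = Fun

    -- E ⌈M⌉ = M: the three arguments of a representation select the handler
    -- for variables, applications and abstractions, respectively.
    selfInterp :: V -> V
    selfInterp m = app (app (app m varCase) appCase) absCase
      where
        varCase = lam (\x -> x)                                              -- λx. x
        appCase = lam (\p -> lam (\q -> app (selfInterp p) (selfInterp q)))  -- λm n. (E m) (E n)
        absCase = lam (\f -> lam (\v -> selfInterp (app f v)))               -- λm v. E (m v)

    -- ⌈λx.x⌉ built by hand from the schema: λa b c. c (λx. λa b c. a x)
    repIdentity :: V
    repIdentity =
      lam (\_a -> lam (\_b -> lam (\c ->
        app c (lam (\x -> lam (\a -> lam (\_b' -> lam (\_c' -> app a x))))))))

    main :: IO ()
    main = case app (selfInterp repIdentity) (Base "ok") of
             Base s -> putStrLn s          -- prints "ok": E ⌈λx.x⌉ behaves like λx.x
             Fun _  -> putStrLn "unexpected"

Here Haskell's own recursion plays the role of the fixed-point combinator Y in the λ-term above.
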


... To demonstrate the application of PITS, we build a simple surface language Fun that extends PITS with algebraic datatypes using a Scott encoding of datatypes (Mogensen, 1992). We also implement a prototype interpreter and compiler for Fun, which can run all examples shown in this paper. ...
... Table 1 shows a summary of encodable features in Fun, including algebraic datatypes (Section 3.1), higher-kinded types (Section 3.2), datatype promotion (Section 3.2), higher-order abstract syntax (Section 3.2) and object encodings (Section 3.3). The encoding of algebraic datatypes in Fun uses Scott encodings (Mogensen, 1992). The encoding itself uses casts, but the use of casts is completely transparent to programmers. ...
... λI specifies the PITS triple (see Section 2.1) as Sort = {★}, A = {(★, ★)} and R = {(★, ★, ★)}. Algebraic datatypes and pattern matching in Fun are implemented using Scott encodings (Mogensen, 1992), which can later be desugared into PITS (λI) terms. For demonstration, we implemented a prototype interpreter and compiler for Fun, both written in GHC Haskell (Marlow, 2010). ...
Article
Traditional designs for functional languages (such as Haskell or ML) have separate sorts of syntax for terms and types. In contrast, many dependently typed languages use a unified syntax that accounts for both terms and types. Unified syntax has some interesting advantages over separate syntax, including less duplication of concepts, and added expressiveness. However, integrating unrestricted general recursion in calculi with unified syntax is challenging when some level of type-level computation is present, since properties such as decidable type-checking are easily lost. This paper presents a family of calculi called pure iso-type systems (PITSs), which employs unified syntax, supports general recursion and preserves decidable type-checking. PITS is comparable in simplicity to pure type systems (PTSs), and is useful to serve as a foundation for functional languages that stand in-between traditional ML-like languages and fully blown dependently typed languages. In PITS, recursion and recursive types are completely unrestricted and type equality is simply based on alpha-equality, just like traditional ML-style languages. However, like most dependently typed languages, PITS uses unified syntax, naturally supporting many advanced type system features. Instead of implicit type conversion, PITS provides a generalization of iso-recursive types called iso-types . Iso-types replace the conversion rule typically used in dependently typed calculus and make every type-level computation explicit via cast operators. Iso-types avoid the complexity of explicit equality proofs employed in other approaches with casts. We study three variants of PITS that differ on the reduction strategy employed by the cast operators: call-by-name , call-by-value and parallel reduction . One key finding is that while using call-by-value or call-by-name reduction in casts loses some expressive power, it allows those variants of PITS to have simple and direct operational semantics and proofs. In contrast, the variant of PITS with parallel reduction retains the expressive power of PTS conversion, at the cost of a more complex metatheory.
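
As a rough illustration of the Scott encoding of algebraic datatypes mentioned in the surrounding contexts (where Fun desugars datatypes and pattern matching into plain λ-terms), here is a hypothetical Haskell sketch of a Scott-encoded option type; the names OptS, none, some and matchOpt are illustrative only. The point is that a case expression becomes an ordinary application of the scrutinee to one continuation per constructor.

    {-# LANGUAGE RankNTypes #-}

    -- Scott encoding: a value is its own one-step case analysis.
    newtype OptS a = OptS (forall r. r -> (a -> r) -> r)

    none :: OptS a
    none = OptS (\n _ -> n)

    some :: a -> OptS a
    some x = OptS (\_ s -> s x)

    -- "case v of { None -> d; Some x -> f x }" desugars to an application.
    matchOpt :: OptS a -> r -> (a -> r) -> r
    matchOpt (OptS o) = o

    example :: OptS Int -> Int
    example v = matchOpt v 0 (+ 1)    -- None ↦ 0, Some x ↦ x + 1
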
... Torben Mogensen [36] was inspired by the construction of Peter de Bruin and came up with what is called a higher order encoding of λ-terms, see [38], in which a λ is interpreted by itself. ...
... 6.5. Definition (Mogensen [36]). An open lambda term M can be interpreted as an open lambda term with the same free variables as follows. ...
... This can be seen as first using three unspecified constructors var, app, abs ∈ Λ∅ as follows ... In [4] it is proved that for closed terms equality discrimination on coded terms M, N is lambda definable. (iii) In Mogensen [36] it is also proved that there is a normalizer acting on coded terms. ...
Preprint
Full-text available
The main scientific heritage of Corrado Böhm is about computing, both concerning concrete algorithms as well as concerning models of computability. Discussed will be the following. 1. A compiler that can compile itself. 2. Structured programming, eliminating the 'goto' statement. 3. Functional programming and an early implementation. 4. Separability in λ-calculus. 5. Compiling combinators without parsing. 6. Self-evaluation in λ-calculus.
... An interesting direction, which we do not pursue here, would be to use typing information to decide the convertibility of two terms more quickly, for example with the help of "free theorems" (Wadler, 1989). ... Böhm and Berarducci (1985). Then a representation of a (closed) term of the λ-calculus within the λ-calculus is given by (Mogensen, 1992): ... (Reynolds, 1985). ...
... By observing that normalization by evaluation is a Mogensen-style self-reduction with a slightly adapted representation schema, we arrived at a very natural self-reducer that is more efficient than Mogensen's original self-reducer (1992). We then showed that this self-reducer, an untyped normalization-by-evaluation algorithm, generalizes very naturally to a reduction relation obtained by adding rewrite rules to (→β), and just as naturally accommodates the addition, in the object language, of case-analysis constructs and fixed-point operators with guard conditions. ...
... Going under a binder frees a variable, for which a name must then be chosen. We now have all the ingredients of a normalizer for closed terms of the pure λ-calculus: nf M = ↓₀ (eval M) = ↓₀ ⟦M⟧. By injecting the normal form obtained by reification back into the model, one obtains a self-reducer, that is, a function from representation to representation. This self-reducer is more efficient than Mogensen's original self-reducer (1992), which builds the normal form using representations paired with their semantics, a choice essentially forced by the fact that Mogensen's interpretation does not distinguish free variables from bound ones (unlike our B and F). ...
Article
In recent years, the emergence of feature rich and mature interactive proof assistants has enabled large formalization efforts of high-profile conjectures and results previously established only by pen and paper. A medley of incompatible and philosophically diverging logics are at the core of all these proof assistants. Cousineau and Dowek (2007) have proposed the λΠ-calculus modulo as a universal target framework for other front-end proof languages and environments. We explain in this thesis how this particularly simple formalism allows for a small, modular and efficient proof checker upon which the consistency of entire systems can be made to rely. Proofs increasingly rely on computation both in the large, as exemplified by the proof of the four colour theorem by Gonthier (2007), and in the small following the SSReflect methodology and supporting tools. Encoding proofs from other systems in the λΠ-calculus modulo bakes yet more computation into the proof terms. We show how to make the proof checking problem manageable by turning entire proof terms into functional programs and compiling them in one go using off-the-shelf compilers for standard programming languages. We use untyped normalization by evaluation (NbE) as an enabling technology and show how to optimize previous instances of it found in the literature. Through a single change to the interpretation of proof terms, we arrive at a representation of proof terms using higher order abstract syntax (HOAS) allowing for a proof checking algorithm devoid of any explicit typing context for all Pure Type Systems (PTS). We observe that this novel algorithm is a generalization to dependent types of a type checking algorithm found in the HOL proof assistants enabling on-the-fly checking of proofs. We thus arrive at a purely functional system with no explicit state, where all proofs are checked by construction. We formally verify in Coq the correspondence of the type system on higher order terms lying behind this algorithm with respect to the standard typing rules for PTS. This line of work can be seen as connecting two historic strands of proof assistants: LCF and its descendants, where proofs of untyped or simply typed formulae are checked by construction, versus Automath and its descendants, where proofs of dependently typed terms are checked a posteriori. The algorithms presented in this thesis are at the core of a new proof checker called Dedukti and in some cases have been transferred to the more mature platform that is Coq. In joint work with Denes, we show how to extend the untyped NbE algorithm to the syntax and reduction rules of the Calculus of Inductive Constructions (CIC). In joint work with Burel, we generalize previous work by Cousineau and Dowek (2007) on the embedding into the λΠ-calculus modulo of a large class of PTS to inductive types, pattern matching and fixpoint operators.
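
Since the contexts above present untyped normalization by evaluation as a Mogensen-style self-reducer with an adapted representation schema, a generic sketch may help fix ideas. The following hypothetical Haskell code is the textbook untyped NbE normalizer for closed terms (de Bruijn indices, a semantic domain with neutral values, and a read-back function playing the role of ↓₀); it is not the thesis's algorithm, only the standard construction it refines.

    -- Syntax with de Bruijn indices, and a semantic domain with neutral terms.
    data Term    = Var Int | Lam Term | App Term Term  deriving (Eq, Show)
    data Sem     = SLam (Sem -> Sem) | SNeu Neutral
    data Neutral = NVar Int | NApp Neutral Sem

    eval :: [Sem] -> Term -> Sem
    eval env (Var i)   = env !! i
    eval env (Lam b)   = SLam (\v -> eval (v : env) b)
    eval env (App f a) = apply (eval env f) (eval env a)

    apply :: Sem -> Sem -> Sem
    apply (SLam f) a = f a
    apply (SNeu n) a = SNeu (NApp n a)

    -- Read-back ("reification"): k counts the binders passed so far.
    reify :: Int -> Sem -> Term
    reify k (SLam f) = Lam (reify (k + 1) (f (SNeu (NVar k))))
    reify k (SNeu n) = reifyNeu k n

    reifyNeu :: Int -> Neutral -> Term
    reifyNeu k (NVar j)   = Var (k - j - 1)     -- de Bruijn level to index
    reifyNeu k (NApp n v) = App (reifyNeu k n) (reify k v)

    -- nf M = ↓₀ ⟦M⟧ for closed M
    nf :: Term -> Term
    nf t = reify 0 (eval [] t)

    -- e.g. nf (App (Lam (Var 0)) (Lam (Lam (Var 1)))) == Lam (Lam (Var 1))
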
... Various representations can be used for this, see, e.g., [7], [8] and [9]. In general there is a choice between "Church-style" and "standard style" representations, named after Church numerals and standard numerals, see [1], section 6.4. ...
... In [7], a different representation of lambda terms was used. It was based on higher-order abstract syntax, but used a standard-style representation where recursion over the syntax is not encoded in the term itself (see Appendix A). ...
... In [7], a standard-style higher order representation was used: ...
Article
We show that linear-time self-interpretation of the pure untyped lambda calculus is possible, in the sense that interpretation has a constant overhead compared to direct execution under various execution models. The present paper shows this result for reduction to weak head normal form under call-by-name, call-by-value and call-by-need. We use a self-interpreter based on previous work on self-interpretation and partial evaluation of the pure untyped lambda calculus. We use operational semantics to define each reduction strategy. For each of these we show a simulation lemma that states that each inference step in the evaluation of a term by the operational semantics is simulated by a sequence of steps in evaluation of the self-interpreter applied to the term (using the same operational semantics). By assigning costs to the inference rules in the operational semantics, we can compare the cost of normal evaluation and self-interpretation. Three different cost-measures are used: number of beta-reductions, cost of a substitution-based implementation (similar to graph reduction) and cost of an environment-based implementation. For call-by-need we use a non-deterministic semantics, which simplifies the proof considerably.
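
To make the idea of assigning costs to inference rules concrete, here is a small, hypothetical Haskell sketch of call-by-name reduction to weak head normal form that simply counts β-steps. It is not the paper's operational semantics or any of its three cost measures, only an illustration of instrumenting an evaluator with a cost.

    -- de Bruijn terms; whnf returns the weak head normal form and the β-count.
    data Term = Var Int | Lam Term | App Term Term  deriving (Eq, Show)

    shift :: Int -> Int -> Term -> Term
    shift d c (Var i)   = Var (if i >= c then i + d else i)
    shift d c (Lam b)   = Lam (shift d (c + 1) b)
    shift d c (App f a) = App (shift d c f) (shift d c a)

    -- Substitute s for index j, removing the binder (indices above j drop by one).
    subst :: Int -> Term -> Term -> Term
    subst j s (Var i)
      | i == j    = s
      | i > j     = Var (i - 1)
      | otherwise = Var i
    subst j s (Lam b)   = Lam (subst (j + 1) (shift 1 0 s) b)
    subst j s (App f a) = App (subst j s f) (subst j s a)

    whnf :: Term -> (Term, Int)
    whnf (App f a) =
      case whnf f of
        (Lam b, n) -> let (r, m) = whnf (subst 0 a b) in (r, n + m + 1)
        (f',    n) -> (App f' a, n)
    whnf t = (t, 0)

    -- e.g. whnf (App (Lam (Var 0)) (Lam (Var 0))) == (Lam (Var 0), 1)
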
... Definition 6.5 [Mog94]. An open lambda term M can be interpreted as an open lambda term with the same free variables as follows. ...
... (3) In [Mog94] it is also proved that there is a normalizer acting on coded terms. ...
Article
Full-text available
The main scientific heritage of Corrado Böhm consists of ideas about computing, concerning concrete algorithms, as well as models of computability. The following will be presented. 1. A compiler that can compile itself. 2. Structured programming, eliminating the 'goto' statement. 3. Functional programming and an early implementation. 4. Separability in λ-calculus. 5. Compiling combinators without parsing. 6. Self-evaluation in λ-calculus.
... In 1994 Torben Mogensen [26] introduced a method of self-representing and interpreting terms of the lambda calculus. We will analyze this method and demonstrate how the self-interpretation of lambda calculus is an instance of a replete syntax framework. ...
... Let D_A be the domain of values of this inductive type. Then R_A = (D_A, V_A) is a syntax representation of Λ. Mogensen [26] suggests a different syntax representation of lambda calculus. Let ⌈·⌉ : Λ → NF_Λ be a representation schema for lambda calculus such that: ...
Article
Full-text available
It is often useful, if not necessary, to reason about the syntactic structure of an expression in an interpreted language (i.e., a language with a semantics). This paper introduces a mathematical structure called a syntax framework that is intended to be an abstract model of a system for reasoning about the syntax of an interpreted language. Like many concrete systems for reasoning about syntax, a syntax framework contains a mapping of expressions in the interpreted language to syntactic values that represent the syntactic structures of the expressions; a language for reasoning about the syntactic values; a mechanism called quotation to refer to the syntactic value of an expression; and a mechanism called evaluation to refer to the value of the expression represented by a syntactic value. A syntax framework provides a basis for integrating reasoning about the syntax of the expressions with reasoning about what the expressions mean. The notion of a syntax framework is used to discuss how quotation and evaluation can be built into a language and to define what quasiquotation is. Several examples of syntax frameworks are presented.
... Using the fact that inductive types can be directly represented in the lambda calculus, Torben Æ. Mogensen in [92] represents the inductive type of lambda terms in lambda calculus itself as well as defines a global evaluation operator in the lambda calculus. (See Henk Barendregt's survey paper [6] on the impact of the lambda calculus for a nice description of this work.) ...
Preprint
CTT_qe is a version of Church's type theory that includes quotation and evaluation operators that are similar to quote and eval in the Lisp programming language. With quotation and evaluation it is possible to reason in CTT_qe about the interplay of the syntax and semantics of expressions and, as a result, to formalize syntax-based mathematical algorithms. We present the syntax and semantics of CTT_qe as well as a proof system for CTT_qe. The proof system is shown to be sound for all formulas and complete for formulas that do not contain evaluations. We give several examples that illustrate the usefulness of having quotation and evaluation in CTT_qe.
... To gain some insight into the construction defined above, one can observe that it is related to an encoding of data attributed to Scott 3 (and thus commonly referred to in the literature as the Scott encoding), which has subsequently been developed by others (e.g. [30,27,16,31]). In the more familiar 'standard' encoding of functions, a fixed-point combinator is used to solve any recursion in the definition. This has the effect of making recursion explicit, and thus the representations of recursive functions have infinite expansions consisting of a 'list' of distinct instances of the function body, one for each recursive call that may be made. ...
Preprint
Full-text available
Jay and Given-Wilson have recently introduced the Factorisation (or SF-) calculus as a minimal fundamental model of intensional computation. It is a combinatory calculus containing a special combinator, F, which is able to examine the internal structure of its first argument. The calculus is significant in that as well as being combinatorially complete it also exhibits the property of structural completeness, i.e. it is able to represent any function on terms definable using pattern matching on arbitrary normal forms. In particular, it admits a term that can decide the structural equality of any two arbitrary normal forms. Since SF-calculus is combinatorially complete, it is clearly at least as powerful as the more familiar and paradigmatic Turing-powerful computational models of Lambda Calculus and Combinatory Logic. Its relationship to these models in the converse direction is less obvious, however. Jay and Given-Wilson have suggested that SF-calculus is strictly more powerful than the aforementioned models, but a detailed study of the connections between these models is yet to be undertaken. This paper begins to bridge that gap by presenting a faithful encoding of the Factorisation Calculus into the Lambda Calculus preserving both reduction and strong normalisation. The existence of such an encoding is a new result. It also suggests that there is, in some sense, an equivalence between the former model and the latter. We discuss to what extent our result constitutes an equivalence by considering it in the context of some previously defined frameworks for comparing computational power and expressiveness.
... Encodings. The Church (Church, 1941; Böhm & Berarducci, 1985) and Scott (Mogensen, 1992) encodings encode ADTs using functions. The encoding derived in this paper has a close connection to the Scott encoding. ...
Article
The three-continuation approach to coroutine pipelines efficiently represents a large number of connected components. Previous work in this area introduces this alternative encoding but does not shed much light on the underlying principles for deriving this encoding from its specification. This paper gives this missing insight by deriving the three-continuation encoding based on eliminating the mutual recursion in the definition of the connect operation. Using the same derivation steps, we are able to derive a similar encoding for a more general setting, namely bidirectional pipes. Additionally, we evaluate the encoding in an advertisement analytics benchmark where it is as performant as pipes , conduit , and streamly , which are other common Haskell stream processing libraries.
... option), is none other than nat. This is the idea behind Equation (19) in Section 7. We have used two encodings of the X option data type: Böhm & Berarducci (1985) in Equation (5) and Scott-Mogensen (Mogensen, 1992; Abadi et al., 1993) in Equation (18). ...
Article
From the outset, lambda calculus represented natural numbers through iterated application. The successor hence adds one more application, and the predecessor removes. In effect, the predecessor un-applies a term—which seemed impossible, even to Church. It took Kleene a rather oblique glance to sight a related representation of numbers, with an easier predecessor. Let us see what we can do if we look at this old problem with today’s eyes. We discern the systematic ways to derive more predecessors—smaller, faster, and sharper—while keeping all teeth.
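
The following hypothetical Haskell sketch contrasts the two numeral styles discussed in this line of work: Church numerals, where the predecessor needs Kleene's pair trick and must traverse the whole numeral, and Scott numerals (the numeral instance of the Scott-Mogensen encoding), where the predecessor is a single application. All names are illustrative.

    {-# LANGUAGE RankNTypes #-}

    -- Church numerals: a number is its own iterator.
    type Church = forall a. (a -> a) -> a -> a

    czero :: Church
    czero _ z = z

    csucc :: Church -> Church
    csucc n s z = s (n s z)

    -- Kleene's trick: iterate on pairs (one step behind, current), then project.
    cpred :: Church -> Church
    cpred n s z = fst (n (\(_, y) -> (y, s y)) (z, z))

    -- Scott numerals: a number is its own case analysis; pred needs no iteration.
    newtype Scott = Scott (forall r. r -> (Scott -> r) -> r)

    szero :: Scott
    szero = Scott (\z _ -> z)

    ssucc :: Scott -> Scott
    ssucc n = Scott (\_ s -> s n)

    spred :: Scott -> Scott
    spred (Scott n) = n szero id      -- constant time

    -- e.g. cpred (csucc (csucc czero)) (+ 1) (0 :: Int) == 1
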
... Theorem 2 (Wand). M ≅ N if and only if M ≡α N. Wand's definition of observation is termination at a weak head normal form, and his quoting function ⌈·⌉ is a Scott-Mogensen encoding: see [Polonsky, 2011, Mogensen, 1992]. However, the result is strong and general, and puts the last nail in the coffin: there can be no semantic study of functional programming languages that are so strongly reflective that they can internally define quoting. ...
Article
Intensionality is a phenomenon that occurs in logic and computation. In the most general sense, a function is intensional if it operates at a level finer than (extensional) equality. This is a familiar setting for computer scientists, who often study different programs or processes that are interchangeable, i.e. extensionally equal, even though they are not implemented in the same way, so intensionally distinct. Concomitant with intensionality is the phenomenon of intensional recursion, which refers to the ability of a program to have access to its own code. In computability theory, intensional recursion is enabled by Kleene's Second Recursion Theorem. This thesis is concerned with the crafting of a logical toolkit through which these phenomena can be studied. Our main contribution is a framework in which mathematical and computational constructions can be considered either extensionally, i.e. as abstract values, or intensionally, i.e. as fine-grained descriptions of their construction. Once this is achieved, it may be used to analyse intensional recursion.
... Using the fact that inductive types can be directly represented in the lambda calculus, Torben Æ. Mogensen in [73] represents the inductive type of lambda terms in lambda calculus itself as well as defines a global evaluation operator in the lambda calculus. (See Henk Barendregt's survey paper [4] on the impact of the lambda calculus for a nice description of this work.) ...
Article
Full-text available
CTT_qe is a version of Church's type theory that includes quotation and evaluation operators that are similar to quote and eval in the Lisp programming language. With quotation and evaluation it is possible to reason in CTT_qe about the interplay of the syntax and semantics of expressions and, as a result, to formalize syntax-based mathematical algorithms. We present the syntax and semantics of CTT_qe as well as a proof system for CTT_qe. The proof system is shown to be sound for all formulas and complete for formulas that do not contain evaluations. We give several examples that illustrate the usefulness of having quotation and evaluation in CTT_qe.
... To gain some insight into the construction defined above, one can observe that it is related to an encoding of data attributed to Scott 3 (and thus commonly referred to in the literature as the Scott encoding), which has subsequently been developed by others (e.g. [30,27,16,31]). In the more familiar 'standard' encoding of functions, a fixed-point combinator is used to solve any recursion in the definition. This has the effect of making recursion explicit, and thus the representations of recursive functions have infinite expansions consisting of a 'list' of distinct instances of the function body, one for each recursive call that may be made. ...
Article
Full-text available
Jay and Given-Wilson have recently introduced the Factorisation (or SF-) calculus as a minimal fundamental model of intensional computation. It is a combinatory calculus containing a special combinator, F, which is able to examine the internal structure of its first argument. The calculus is significant in that as well as being combinatorially complete it also exhibits the property of structural completeness, i.e. it is able to represent any function on terms definable using pattern matching on arbitrary normal forms. In particular, it admits a term that can decide the structural equality of any two arbitrary normal forms. Since SF-calculus is combinatorially complete, it is clearly at least as powerful as the more familiar and paradigmatic Turing-powerful computational models of Lambda Calculus and Combinatory Logic. Its relationship to these models in the converse direction is less obvious, however. Jay and Given-Wilson have suggested that SF-calculus is strictly more powerful than the aforementioned models, but a detailed study of the connections between these models is yet to be undertaken. This paper begins to bridge that gap by presenting a faithful encoding of the Factorisation Calculus into the Lambda Calculus preserving both reduction and strong normalisation. The existence of such an encoding is a new result. It also suggests that there is, in some sense, an equivalence between the former model and the latter. We discuss to what extent our result constitutes an equivalence by considering it in the context of some previously defined frameworks for comparing computational power and expressiveness.
... To gain some insight into the construction defined above, one can observe that it is related to an encoding of data attributed to Scott 3 (and thus commonly referred to in the literature as the Scott encoding), which has subsequently been developed by others (e.g. [28,25,14,29]). In the more familiar 'standard' encoding of functions, a fixed-point combinator is used to solve any recursion in the definition. This has the effect of making recursion explicit, and thus the representations of recursive functions have infinite expansions consisting of a 'list' of distinct instances of the function body, one for each recursive call that may be made. ...
Research
Full-text available
Jay and Given-Wilson have recently introduced the Factorisation (or SF-) calculus as a minimal fundamental model of intensional computation. It is a combinatory calculus containing a special combinator, F, which is able to examine the internal structure of its first argument. The calculus is significant in that as well as being combinatorially complete it also exhibits the property of structural completeness, i.e. it is able to represent any function on terms definable using pattern matching on arbitrary normal forms. In particular, it admits a term that can decide the structural equality of any two arbitrary normal forms. Since SF-calculus is combinatorially complete, it is clearly at least as powerful as the more familiar and paradigmatic Turing-powerful computational models of Combinatory Logic and λ-calculus. Its relationship to these models in the converse direction is less obvious, however. Jay and Given-Wilson have suggested that SF-calculus is strictly more powerful than the aforementioned models, but a detailed study of the connections between these models is yet to be undertaken. This paper begins to bridge that gap by presenting a faithful encoding of the Factorisation Calculus into the λ-calculus (and thus also into Combinatory Logic) preserving both reduction and strong normalisation. The existence of such an encoding is a new result. It also suggests that there is, in some sense, an equivalence between the former model and the latter. We discuss to what extent our result constitutes an equivalence by considering it in the context of some previously defined frameworks for comparing computational power and expressiveness.
... Following Barendregt's publication of a mathematically simple self-interpreter for the pure lambda calculus without constants, Mogensen developed a much simpler and more efficient alternative, and a "self-reducer" as well, reported in [83]. Extension of these ideas led to an exceptionally small partial evaluator, reported in [84]. ...
Article
The second DART workshop took place on August 20-21, 1992 in Aalborg. The primary aim of the workshop was to increase the awareness of DART participants for each other's work, and to stimulate cooperation between the various groups. This report contains a brief overview of DART as well as abstracts of the 20 talks given at the workshop and a status report of DART as of summer 1992.
... This encoding is relatively unknown, and independently (re)discovered by several authors (e.g. [9,8,10] and the author of this paper [6]), but originally attributed to Scott in an unpublished lecture which is cited in Curry, Hindley and Seldin ([4], page 504) as: Dana Scott, A system of functional abstraction. Lectures delivered at University of California, Berkeley, Cal., 1962/63. ...
Article
Full-text available
Although the λ-calculus is well known as a universal programming language, it is seldom used for actual programming or expressing algorithms. Here we demonstrate that it is possible to use the λ-calculus as a comprehensive formalism for programming by showing how to convert programs written in functional programming languages like Clean and Haskell to closed λ-expressions. The transformation is based on using the Scott encoding for Algebraic Data Types instead of the more common Church encoding. In this way we not only obtain an encoding that is more comprehensible but that is also more efficient. As a proof of the pudding we provide an implementation of Eratosthenes' prime sieve algorithm as a self-contained, 143-character λ-expression.
... The trick here is to code the lambda with lambda itself; one may speak of an inner model of the lambda calculus in itself. Putting the ideas of Mogensen [1992] and Böhm et al. [1994] together, as done by Berarducci and Böhm [1993], one obtains a very smooth way to create the mechanism of reflection in the lambda calculus. The result was already proved in Kleene [1936]. ...
... Using the fact that inductive types can be directly represented in the lambda calculus, Torben Æ. Mogensen in [47] represents the inductive type of lambda terms in lambda calculus itself as well as defines an evaluation operator in the lambda calculus. He thus shows that the global-internal approach to reasoning about syntax, minus the presence of a built-in quotation operator, can be realized in the lambda calculus. ...
Article
Full-text available
This paper presents a version of simple type theory called Q_0^uqe that is based on Q_0, the elegant formulation of Church's type theory created and extensively studied by Peter B. Andrews. Q_0^uqe directly formalizes the traditional approach to undefinedness in which undefined expressions are treated as legitimate, nondenoting expressions that can be components of meaningful statements. Q_0^uqe is also equipped with a facility for reasoning about the syntax of expressions based on quotation and evaluation. Quotation is used to refer to a syntactic value that represents the syntactic structure of an expression, and evaluation is used to refer to the value of the expression that a syntactic value represents. With quotation and evaluation it is possible to reason in Q_0^uqe about the interplay of the syntax and semantics of expressions and, as a result, to formalize in Q_0^uqe syntax-based mathematical algorithms. The paper gives the syntax and semantics of Q_0^uqe as well as a proof system for Q_0^uqe. The proof system is shown to be sound for all formulas and complete for formulas that do not contain evaluations. The paper also illustrates some applications of Q_0^uqe.
... T. Æ. Mogensen's self-interpretation of lambda calculus [14] and the logic Chiron [7], derived from classical NBG set theory, are two other examples of replete syntax frameworks [9]. ...
Conference Paper
Full-text available
Algorithms like those for differentiating functional expressions manipulate the syntactic structure of mathematical expressions in a mathematically meaningful way. A formalization of such an algorithm should include a specification of its computational behavior, a specification of its mathematical meaning, and a mechanism for applying the algorithm to actual expressions. Achieving these goals requires the ability to integrate reasoning about the syntax of the expressions with reasoning about what the expressions mean. A syntax framework is a mathematical structure that is an abstract model for a syntax reasoning system. It contains a mapping of expressions to syntactic values that represent the syntactic structures of the expressions; a language for reasoning about syntactic values; a quotation mechanism to refer to the syntactic value of an expression; and an evaluation mechanism to refer to the value of the expression represented by a syntactic value. We present and compare two approaches, based on instances of a syntax framework, to formalize a syntax-based mathematical algorithm in a formal theory T. In the first approach the syntactic values for the expressions manipulated by the algorithm are members of an inductive type in T, but quotation and evaluation are functions defined in the metatheory of T. In the second approach every expression in T is represented by a syntactic value, and quotation and evaluation are operators in T itself.
... The encoding we use is relatively unknown, and independently (re)discovered by several authors (e.g. [SM89, Mog94, Stu08]). ... A type consists of one or more alternatives. Each alternative consists of a name, possibly followed by a number of arguments. ...
Thesis
Full-text available
The Internet has become a prominent platform for the deployment of computer applications. Web-browsers are an important interface for e-mail, on-line shopping, and banking applications. Despite this popularity, the development of web applications is a difficult job due to their complex client-server structure. Web applications use client-side processing to speed up their performance. This is often realized by using an interpreter at the browser side. This complicates the development of web applications even more. The programmer has to develop code for both server and client and these parts should co-operate closely to obtain the desired result. Functional programming languages like Haskell and Clean are a promising development platform for web applications. They support higher order functions that enable a high level of compositional programming where irrelevant details can be hidden for the developer. They support generic programming techniques for automatic generation and handling of web forms, interaction with data sources and server-client communication. An important example of this approach is the iTask system. iTask is a declarative domain specific language embedded in Clean, enabling the creation of dynamic workflow applications. iTask workflows consist of a combination of tasks to be performed by humans and/or automated processes. From iTask specifications complete web-based workflow applications are generated. iTask is built on a single, powerful concept: the task. iTask uses combinators to combine tasks into new tasks. With combinators tasks can be executed sequentially or in parallel using or-, and-, or ad-hoc parallelism. The main object of study in this thesis is the extension of iTask with client-side processing while maintaining the declarative nature of the system and the generation of the application from one Clean source. For this a dedicated client-side Clean platform is developed in combination with a mechanism to move processing from server to client. This thesis also contains an initial study of applications of iTask in the domains of Military and Crisis-Management Operations.
Chapter
This handbook with exercises reveals in formalisms, hitherto mainly used for hardware and software design and verification, unexpected mathematical beauty. The lambda calculus forms a prototype universal programming language, which in its untyped version is related to Lisp, and was treated in the first author's classic The Lambda Calculus (1984). The formalism has since been extended with types and used in functional programming (Haskell, Clean) and proof assistants (Coq, Isabelle, HOL), used in designing and verifying IT products and mathematical proofs. In this book, the authors focus on three classes of typing for lambda terms: simple types, recursive types and intersection types. It is in these three formalisms of terms and types that the unexpected mathematical beauty is revealed. The treatment is authoritative and comprehensive, complemented by an exhaustive bibliography, and numerous exercises are provided to deepen the readers' understanding and increase their confidence using types.
Article
A polymorphic subtyping relation, which relates more general types to more specific ones, is at the core of many modern functional languages. As those languages start moving towards dependently typed programming a natural question is how can polymorphic subtyping be adapted to such settings. This paper presents the dependent implicitly polymorphic calculus (λI∀): a simple dependently typed calculus with polymorphic subtyping. The subtyping relation in λI∀ generalizes the well-known polymorphic subtyping relation by Odersky and Läufer (1996). Because λI∀ is dependently typed, integrating subtyping in the calculus is non-trivial. To overcome many of the issues arising from integrating subtyping with dependent types, the calculus employs unified subtyping, which is a technique that unifies typing and subtyping into a single relation. Moreover, λI∀ employs explicit casts instead of a conversion rule, allowing unrestricted recursion to be naturally supported. We prove various non-trivial results, including type soundness and transitivity of unified subtyping. λI∀ and all corresponding proofs are mechanized in the Coq theorem prover.
Article
Full-text available
The MetaCoq project aims to provide a certified meta-programming environment in Coq. It builds on Template-Coq, a plugin for Coq originally implemented by Malecha (Extensible proof engineering in intensional type theory, Harvard University, http://gmalecha.github.io/publication/2015/02/01/extensible-proof-engineering-in-intensional-type-theory.html, 2014), which provided a reifier for Coq terms and global declarations, as represented in the Coq kernel, as well as a denotation command. Recently, it was used in the CertiCoq certified compiler project (Anand et al., in: CoqPL, Paris, France, http://conf.researchr.org/event/CoqPL-2017/main-certicoq-a-verified-compiler-for-coq, 2017), as its front-end language, to derive parametricity properties (Anand and Morrisett, in: CoqPL'18, Los Angeles, CA, USA, 2018). However, the syntax lacked semantics, be it typing semantics or operational semantics, which should reflect, as formal specifications in Coq, the semantics of Coq's type theory itself. The tool was also rather bare bones, providing only rudimentary quoting and unquoting commands. We generalize it to handle the entire polymorphic calculus of cumulative inductive constructions, as implemented by Coq, including the kernel's declaration structures for definitions and inductives, and implement a monad for general manipulation of Coq's logical environment. We demonstrate how this setup allows Coq users to define many kinds of general purpose plugins, whose correctness can be readily proved in the system itself, and that can be run efficiently after extraction. We give a few examples of implemented plugins, including a parametricity translation and a certified extraction to call-by-value λ-calculus. We also advocate the use of MetaCoq as a foundation for higher-level tools.
Article
Full-text available
Programmers can use gradual types to migrate programs to have more precise type annotations and thereby improve their readability, efficiency, and safety. Such migration requires an exploration of the migration space and can benefit from tool support, as shown in previous work. Our goal is to provide a foundation for better tool support by settling decidability questions about migration with gradual types. We present three algorithms and a hardness result for deciding key properties and we explain how they can be useful during an exploration. In particular, we show how to decide whether the migration space is finite, whether it has a top element, and whether it is a singleton. We also show that deciding whether it has a maximal element is NP-hard. Our implementation of our algorithms worked as expected on a suite of microbenchmarks.
Conference Paper
Closure calculus is simpler than pure lambda-calculus as it does not mention free variables or index manipulation, variable renaming, implicit substitution, or any other meta-theory. Further, all programs, even recursive ones, can be expressed as normal forms. Third, there are reduction-preserving translations to calculi built from combinations of operators, in the style of combinatory logic. These improvements are achieved without sacrificing three fundamental properties of lambda-calculus, being a confluent rewriting system, supporting the Turing computable numerical functions, and supporting simple typing.
Chapter
Spivey has recently presented a novel functional representation that supports the efficient composition, or merging, of coroutine pipelines for processing streams of data. This representation was inspired by Shivers and Might’s three-continuation approach and is shown to be equivalent to a simple yet inefficient executable specification. Unfortunately, neither Shivers and Might’s original work nor the equivalence proof sheds much light on the underlying principles allowing the derivation of this efficient representation from its specification. This paper gives the missing insight by reconstructing a systematic derivation in terms of known transformation steps from the simple specification to the efficient representation. This derivation sheds light on the limitations of the representation and on its applicability to other settings. In particular, it has enabled us to obtain a similar representation for pipes featuring two-way communication, similar to the Haskell pipes library. Our benchmarks confirm that this two-way representation retains the same improved performance characteristics.
Article
Full-text available
We formalise a (weak) call-by-value λ-calculus we call L in the constructive type theory of Coq and study it as a minimal functional programming language and as a model of computation. We show key results including (1) semantic properties of procedures are undecidable, (2) the class of total procedures is not recognisable, (3) a class is decidable if it is recognisable, corecognisable, and logically decidable, and (4) a class is recognisable if and only if it is enumerable. Most of the results require a step-indexed self-interpreter. All results are verified formally and constructively, which is the challenge of the project. The verification techniques we use for procedures will apply to call-by-value functional programming languages formalised in Coq in general.
Conference Paper
Recursive programs can now be expressed as normal forms within some rewriting systems, including traditional combinatory logic, a new variant of lambda-calculus called closure calculus, and recent variants of combinatory logic that support queries of internal program structure. In all these settings, partial evaluation of primitive recursive functions, such as addition, can reduce open terms to normal form without fear of non-termination. In those calculi where queries of program structure are supported, program optimisations that are expressed as non-standard rewriting rules can be represented as functions in the calculus, without any need for quotation or other meta-theory.
Article
In this paper, it is shown that induction is derivable in a type-assignment formulation of the second-order dependent type theory λP2, extended with the implicit product type of Miquel, dependent intersection type of Kopylov, and a built-in equality type. The crucial idea is to use dependent intersections to internalize a result of Leivant's showing that Church-encoded data may be seen as realizing their own type correctness statements, under the Curry–Howard isomorphism.
Article
We present partial evaluation by specialization-safe normalization, a novel partial evaluation technique that is Jones-optimal, that can be self-applied to achieve the Futamura projections and that can be type-checked to ensure it always generates code with the correct type. Jones-optimality is the gold-standard for nontrivial partial evaluation and guarantees that a specializer can remove an entire layer of interpretation. We achieve Jones-optimality by using a novel affine-variable static analysis that directs specialization-safe normalization to always decrease a program's runtime. We demonstrate the robustness of our approach by showing Jones-optimality in a variety of settings. We have formally proved that our partial evaluator is Jones-optimal for call-by-value reduction, and we have experimentally shown that it is Jones-optimal for call-by-value, normal-order, and memoized normal-order. Each of our experiments tests Jones-optimality with three different self-interpreters. We implemented our partial evaluator in Fωµi, a recent language for typed self-applicable meta-programming. It is the first Jones-optimal and self-applicable partial evaluator whose type guarantees that it always generates type-correct code.
Conference Paper
We formalise a weak call-by-value λ-calculus we call L in the constructive type theory of Coq and study it as a minimal functional programming language and as a model of computation. We show key results including (1) semantic properties of procedures are undecidable, (2) the class of total procedures is not recognisable, (3) a class is decidable if it is recognisable, corecognisable, and logically decidable, and (4) a class is recognisable if and only if it is enumerable. Most of the results require a step-indexed self-interpreter. All results are verified formally and constructively, which is the challenge of the project. The verification techniques we use for procedures will apply to call-by-value functional programming languages formalised in Coq in general.
Article
Many popular languages have a self-interpreter, that is, an interpreter for the language written in itself. So far, work on polymorphically-typed self-interpreters has concentrated on self-recognizers that merely recover a program from its representation. A larger and until now unsolved challenge is to implement a polymorphically-typed self-evaluator that evaluates the represented program and produces a representation of the result. We present Fωµi, the first λ-calculus that supports a polymorphically-typed self-evaluator. Our calculus extends Fω with recursive types and intensional type functions and has decidable type checking. Our key innovation is a novel implementation of type equality proofs that enables us to define a versatile representation of programs. Our results establish a new category of languages that can support polymorphically-typed self-evaluators.
Article
Modern constructive type theory is based on pure dependently typed lambda calculus, augmented with user-defined datatypes. This paper presents an alternative called the Calculus of Dependent Lambda Eliminations, based on pure lambda encodings with no auxiliary datatype system. New typing constructs are defined that enable induction, as well as large eliminations with lambda encodings. These constructs are constructor-constrained recursive types, and a lifting operation to lift simply typed terms to the type level. Using a lattice-theoretic denotational semantics for types, the language is proved logically consistent. The power of CDLE is demonstrated through several examples, which have been checked with a prototype implementation called Cedille.
Conference Paper
Traditional designs for functional languages (such as Haskell or ML) have separate sorts of syntax for terms and types. In contrast, many dependently typed languages use a unified syntax that accounts for both terms and types. Unified syntax has some interesting advantages over separate syntax, including less duplication of concepts, and added expressiveness. However, integrating unrestricted general recursion in calculi with unified syntax is challenging when some level of type-level computation is present, as decidable type-checking is easily lost. This paper argues that the advantages of unified syntax also apply to traditional functional languages, and there is no need to give up decidable type-checking. We present a dependently typed calculus that uses unified syntax, supports general recursion and has decidable type-checking. The key to retain decidable type-checking is a generalization of iso-recursive types called iso-types. Iso-types replace the conversion rule typically used in dependently typed calculus, and make every computation explicit via cast operators. We study two variants of the calculus that differ on the reduction strategy employed by the cast operators, and give different trade-offs in terms of simplicity and expressiveness.
Conference Paper
In 1991, Pfenning and Lee studied whether System F could support a typed self-interpreter. They concluded that typed self-representation for System F "seems to be impossible", but were able to represent System F in Fω. Further, they found that the representation of Fω requires kind polymorphism, which is outside Fω. In 2009, Rendel, Ostermann and Hofer conjectured that the representation of kind-polymorphic terms would require another, higher form of polymorphism. Is this a case of infinite regress? We show that it is not and present a typed self-representation for Girard's System U, the first for a λ-calculus with decidable type checking. System U extends System Fω with kind polymorphic terms and types. We show that kind polymorphic types (i.e. types that depend on kinds) are sufficient to "tie the knot" -- they enable representations of kind polymorphic terms without introducing another form of polymorphism. Our self-representation supports operations that iterate over a term, each of which can be applied to a representation of itself. We present three typed self-applicable operations: a self-interpreter that recovers a term from its representation, a predicate that tests the intensional structure of a term, and a typed continuation-passing-style (CPS) transformation -- the first typed self-applicable CPS transformation. Our techniques could have applications from verifiably type-preserving metaprograms, to growable typed languages, to more efficient self-interpreters.
Conference Paper
According to conventional wisdom, a self-interpreter for a strongly normalizing lambda-calculus is impossible. We call this the normalization barrier. The normalization barrier stems from a theorem in computability theory that says that a total universal function for the total computable functions is impossible. In this paper we break through the normalization barrier and define a self-interpreter for System F-omega, a strongly normalizing lambda-calculus. After a careful analysis of the classical theorem, we show that static type checking in F-omega can exclude the proof's diagonalization gadget, leaving open the possibility for a self-interpreter. Along with the self-interpreter, we program four other operations in F-omega, including a continuation-passing style transformation. Our operations rely on a new approach to program representation that may be useful in theorem provers and compilers.
Conference Paper
From the λ-calculus it is known how to represent (recursive) data structures by ordinary λ-terms. Based on this idea one can represent algebraic data types in a functional programming language by higher-order functions. Using this encoding, we only need to implement functions in order to obtain an implementation of the functional language with data structures. In this paper we compare the famous Church encoding of data types with the less familiar Scott and Parigot encodings. We show that one can use the encoding of data types by functions in a Hindley-Milner typed language by adding a single constructor for each data type. In an untyped context, like an efficient implementation, this constructor can be omitted. By collecting the basic operations of a data type in a type constructor class and providing instances for the various encodings, these encodings can co-exist in a single program. By changing the instance of this class we can execute the same algorithm in a different encoding. This makes it easier to compare the encodings with each other. We show that in the Church encoding selectors of constructors yielding the recursive type, like the tail of a list, have an undesirable strictness in the spine of the data structure. The Scott and Parigot encodings do not hamper lazy evaluation in any way. The evaluation of the recursive spine by the Church encoding makes these destructors linear time. The same destructors in the Scott and Parigot encodings require only constant time. Moreover, the Church encoding has problems with sharing reduction results of selectors. The Parigot encoding is a combination of the Scott and Church encodings. Hence we might expect that it combines the best of both worlds, but in practice it does not offer any advantage over the Scott encoding.
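As a concrete illustration of the constant-time versus linear-time destructor claim, here is a minimal Haskell sketch (our own names, with one newtype wrapper per encoding in the spirit of the single added constructor the abstract mentions): the Scott-encoded tail is a single pattern match, while the Church-encoded tail must fold over the whole spine.

```haskell
{-# LANGUAGE RankNTypes #-}

-- Church encoding: a list is its own fold.
newtype ChurchList a = CL { foldCL :: forall r. (a -> r -> r) -> r -> r }

cnil :: ChurchList a
cnil = CL (\_ n -> n)

ccons :: a -> ChurchList a -> ChurchList a
ccons x xs = CL (\c n -> c x (foldCL xs c n))

-- tail must rebuild its result by folding over the entire spine: linear time.
ctail :: ChurchList a -> ChurchList a
ctail xs = fst (foldCL xs step (cnil, cnil))
  where step x (_, rest) = (rest, ccons x rest)

-- Scott encoding: a list only knows how to match on its top constructor.
newtype ScottList a = SL { matchSL :: forall r. r -> (a -> ScottList a -> r) -> r }

snil :: ScottList a
snil = SL (\n _ -> n)

scons :: a -> ScottList a -> ScottList a
scons x xs = SL (\_ c -> c x xs)

-- tail is a single match on the outermost constructor: constant time.
stail :: ScottList a -> ScottList a
stail xs = matchSL xs snil (\_ t -> t)
```

The pairing trick needed by `ctail` also hints at the sharing problem the abstract mentions: the Church-encoded tail is rebuilt rather than shared with the original list.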
Article
This handbook with exercises reveals unexpected mathematical beauty in formalisms hitherto used mainly for hardware and software design and verification. The lambda calculus forms a prototype universal programming language, which in its untyped version is related to Lisp, and was treated in the first author's classic The Lambda Calculus (1984). The formalism has since been extended with types and used in functional programming (Haskell, Clean) and in proof assistants (Coq, Isabelle, HOL), which are used to design and verify IT products and mathematical proofs. In this book, the authors focus on three classes of typing for lambda terms: simple types, recursive types and intersection types. It is in these three formalisms of terms and types that the unexpected mathematical beauty is revealed. The treatment is authoritative and comprehensive, complemented by an exhaustive bibliography, and numerous exercises are provided to deepen the readers' understanding and increase their confidence using types.
Article
This millennium has seen a great deal of research into embedded domain-specific languages. Primarily, such languages are simply-typed. Focusing on System F, we demonstrate how to embed polymorphic domain-specific languages in Haskell and OCaml. We exploit recent language extensions including kind polymorphism and first-class modules.
Article
Full-text available
We introduce MetaML, a statically-typed multi-stage programming language extending Nielson and Nielson's two-stage notation to an arbitrary number of stages. MetaML extends previous work by introducing four distinct staging annotations which generalize those published previously [25, 12, 7, 6]. We give a static semantics in which type checking is done once and for all before the first stage, and a dynamic semantics which introduces a new concept of cross-stage persistence, which requires that variables available in any stage are also available in all future stages. We illustrate that staging is a manual form of binding-time analysis. We explain why, even in the presence of automatic binding-time analysis, explicit annotations are useful, especially for programs with more than two stages. A thesis of this paper is that multi-stage languages are useful as programming languages in their own right, and should support features that make it possible for programmers to write staged computations without significantly changing their normal programming style. To illustrate this we provide a simple three-stage example, and an extended two-stage example elaborating a number of practical issues.
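MetaML is its own language, but the flavor of its staging annotations can be sketched in Haskell using GHC's typed Template Haskell as an analogy (a sketch, assuming GHC 9.0 or later where the `Code Q` type is available; this is not MetaML itself). The static exponent is consumed at the first stage, leaving only the multiplications for the next stage.

```haskell
{-# LANGUAGE TemplateHaskell #-}
module StagedPower where

import Language.Haskell.TH.Syntax (Code, Q)

-- The exponent is static (stage 0); the base is a piece of delayed code (stage 1).
-- Brackets [|| .. ||] delay a computation; splices $$(..) combine delayed code.
power :: Int -> Code Q Int -> Code Q Int
power 0 _ = [|| 1 ||]
power n x = [|| $$x * $$(power (n - 1) x) ||]

-- In a separate module:  cube y = $$(power 3 [|| y ||])
-- unfolds at compile time to  y * (y * (y * 1)).
```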
Article
In this thesis we study aspects of specialisation by partial evaluation and compiler generation. After significant research during the last two decades, there are now powerful specialisers for several programming languages, such as LISP, Scheme, ML, and C. But some features of programming languages are still not handled by specialisers. We consider two such features: polymorphic types and modules. In our formalism of the binding-time analyser, we provide a solution to how to treat coercions in the context of polymorphism: coercions are used to make values more dynamic. Furthermore, since the semantics of the partial evaluator affect the binding-time analyser, we have formalised both the binding-time rules and the specialiser. We also discuss practical issues arising from our integration of a polymorphic binding-time analyser with a specialiser: the treatment of fix-points, program points, and bounded static variation. Where modules are concerned, we consider specialising a program consisting of several modules into a specialised version that also consists of several modules. This work relies heavily on a polymorphic binding-time analyser and a compiler generator. The polymorphic binding-time analyser works independently on each module. For each annotated module, we use a compiler generator to create a generating extension. Then we build generating extensions for complete programs, in much the same way as the original modules were put together into complete programs. The result of running all generating extensions is a collection of residual modules, which have a structure derived from the original program. But modules cause another problem. Previous specialisers produced a program consisting of one module, derived from a source program of one module. It is a weakness that the program being specialised imposes a limitation on the structure of the program generated, in this case, the number of modules. In this thesis we remove the restriction. Since we can obtain many modules by specialising one module, we remove an "inherited limit" on the total number of modules. In this work, we need new types of annotation to control the specialisation. We have formalised a binding-time analyser for finding such annotations. We have also given the semantics of the partial evaluator.
Article
We develop a calculus in which the computation steps required to execute a computer program can be separated into discrete stages. The calculus, denoted λ2, is embedded within the pure untyped λ-calculus. The main result of the paper is a characterization of sufficient conditions for confluence for terms in the calculus. The condition can be taken as a correctness criterion for translators that perform reductions in one stage leaving residual redexes over for subsequent computation stages. As an application of the theory, we verify the correctness of a macro expansion algorithm. The expansion algorithm is of some interest in its own right since it solves the problem of desired variable capture using only the familiar capture avoiding substitutions.
Conference Paper
Full-text available
We describe motivation, design, use, and implementation of higher-order abstract syntax as a central representation for programs, formulas, rules, and other syntactic objects in program manipulation and other formal systems where matching and substitution or unification are central operations. Higher-order abstract syntax incorporates name binding information in a uniform and language generic way. Thus it acts as a powerful link integrating diverse tools in such formal environments. We have implemented higher-order abstract syntax, a supporting matching and unification algorithm, and some clients in Common Lisp in the framework of the Ergo project at Carnegie Mellon University.
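The core idea can be sketched in a few lines of Haskell (our own toy example, not the Ergo project's Common Lisp implementation): object-language binders are represented by host-language functions, so substitution is simply function application.

```haskell
-- Higher-order abstract syntax: the Lam constructor carries a Haskell
-- function, so the host language handles binding, renaming and substitution.
data Exp
  = Lam (Exp -> Exp)
  | App Exp Exp
  | Lit Int

-- A small evaluator: substituting into the body of a Lam is just applying it.
eval :: Exp -> Exp
eval (App f a) = case eval f of
  Lam body -> eval (body a)
  f'       -> App f' (eval a)
eval e = e

-- Example: (\x -> x) applied to 42.
example :: Exp
example = eval (App (Lam (\x -> x)) (Lit 42))   -- evaluates to Lit 42
```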
Article
Full-text available
Programming languages which are capable of interpreting themselves have long fascinated computer scientists. Indeed, if this is possible then a ‘strange loop’ (in the sense of Hofstadter, 1979) is involved. Nevertheless, the phenomenon is a direct consequence of the existence of universal languages. Indeed, if all computable functions can be captured by a language, then so can the particular job of interpreting the code of a program of that language. Self-interpretation will be shown here to be possible in lambda calculus. The set of λ-terms, notation Λ, is defined by the abstract syntax Λ ::= V | (Λ Λ) | (λV. Λ), where V is the set {v, v′, v″, v‴, …} of variables. Arbitrary variables are usually denoted by x, y, z, … and λ-terms by M, N, L, …. A redex is a λ-term of the form (λx.M)N; its contractum is M[x := N], that is, the result of substituting N for (the free occurrences of) x in M. Stylistically, it can be said that λ-terms represent functional programs including their input. A reduction machine executes such terms by trying to reduce them to normal form; that is, redexes are continuously replaced by their contracta until hopefully no more redexes are present. If such a normal form can be reached, then this is the output of the functional program; otherwise, the program diverges.
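The reduction machine described here can be sketched directly in Haskell. The sketch below (our own, using de Bruijn indices so that substitution is capture-free) repeatedly contracts the leftmost-outermost redex until no redex remains and, as the abstract notes, may diverge if the term has no normal form.

```haskell
-- Untyped λ-terms in de Bruijn notation and a normal-order reduction machine.
data Term = Var Int | Lam Term | App Term Term
  deriving (Eq, Show)

-- shift d c t: add d to every variable of t whose index is at least c.
shift :: Int -> Int -> Term -> Term
shift d c (Var k)   = Var (if k >= c then k + d else k)
shift d c (Lam t)   = Lam (shift d (c + 1) t)
shift d c (App f a) = App (shift d c f) (shift d c a)

-- subst j s t: replace variable j by s in t, lowering variables above j.
subst :: Int -> Term -> Term -> Term
subst j s (Var k)
  | k == j    = s
  | k >  j    = Var (k - 1)
  | otherwise = Var k
subst j s (Lam t)   = Lam (subst (j + 1) (shift 1 0 s) t)
subst j s (App f a) = App (subst j s f) (subst j s a)

-- One leftmost-outermost reduction step, if any redex exists.
step :: Term -> Maybe Term
step (App (Lam b) a) = Just (subst 0 a b)
step (App f a)       = case step f of
                         Just f' -> Just (App f' a)
                         Nothing -> App f <$> step a
step (Lam t)         = Lam <$> step t
step (Var _)         = Nothing

-- Keep replacing redexes by their contracta until (hopefully) none remain.
normalForm :: Term -> Term
normalForm t = maybe t normalForm (step t)
```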
Conference Paper
Full-text available
A functional p→e (procedure→expression) that inverts the evaluation functional for typed λ-terms in any model of typed λ-calculus containing some basic arithmetic is defined. Combined with the evaluation functional, p→e yields an efficient normalization algorithm. The method is extended to λ-calculi with constants and is used to normalize (the λ-representations of) natural deduction proofs of (higher order) arithmetic. A consequence of theoretical interest is a strong completeness theorem for βη-reduction: if two λ-terms have the same value in some model containing representations of the primitive recursive functions (of level 1), then they are provably equal in the βη-calculus.
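The inverse of the evaluation functional described here is essentially what is now called normalization by evaluation. A minimal untyped Haskell sketch of the idea (our own simplification of the typed construction): evaluate terms into a semantic domain whose functions are host-language functions, then read the resulting values back into syntax.

```haskell
-- Normalization by evaluation, untyped sketch.
data Term = Var Int          -- de Bruijn index
          | Lam Term
          | App Term Term
          deriving Show

data Value = VLam (Value -> Value)   -- functions represented by host functions
           | VNeutral Neutral        -- stuck terms

data Neutral = NVar Int              -- de Bruijn *level*
             | NApp Neutral Value

eval :: [Value] -> Term -> Value
eval env (Var i)   = env !! i
eval env (Lam b)   = VLam (\v -> eval (v : env) b)
eval env (App f a) = apply (eval env f) (eval env a)

apply :: Value -> Value -> Value
apply (VLam f)     v = f v
apply (VNeutral n) v = VNeutral (NApp n v)

-- reify: the "p -> e" direction, turning a semantic value back into a term.
reify :: Int -> Value -> Term
reify depth (VLam f)     = Lam (reify (depth + 1) (f (VNeutral (NVar depth))))
reify depth (VNeutral n) = reifyN depth n

reifyN :: Int -> Neutral -> Term
reifyN depth (NVar lvl) = Var (depth - lvl - 1)   -- convert level to index
reifyN depth (NApp n v) = App (reifyN depth n) (reify depth v)

normalize :: Term -> Term
normalize t = reify 0 (eval [] t)
```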
Book
A survey of the untyped lambda calculus that was reasonably complete as of 1984.
Article
A systematic representation of objects grouped into types by constructions similar to the composition of sets in mathematics is proposed. The representation is by lambda expressions, which supports the representation of objects from function spaces. The representation is related to a rather conventional language of type descriptions in a way that is believed to be new. Ordinary control-expressions (i.e., case- and let-expressions) are derived from the proposed representation.
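A tiny Haskell sketch of how case- and let-expressions fall out of such a representation (our own example with natural numbers, not the paper's construction): a represented value is its own case-expression, and a let-expression is just an applied abstraction.

```haskell
{-# LANGUAGE RankNTypes #-}

-- A represented value is a function that, given one branch per constructor,
-- selects the right one: the representation *is* its case-expression.
newtype Nat = Nat { caseNat :: forall r. r -> (Nat -> r) -> r }

zeroN :: Nat
zeroN = Nat (\z _ -> z)

succN :: Nat -> Nat
succN n = Nat (\_ s -> s n)

-- "case n of { Zero -> True ; Succ _ -> False }" is just an application of n.
isZero :: Nat -> Bool
isZero n = caseNat n True (\_ -> False)

-- Likewise, "let x = e in body" can be read as the application (\x -> body) e.
letIn :: a -> (a -> b) -> b
letIn e body = body e
```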
Article
We consider the question of whether a useful notion of metacircularity exists for the polymorphic λ-calculus. Even though complete metacircularity seems to be impossible, we obtain a close approximation to a metacircular interpreter. We begin by presenting an encoding for the Girard-Reynolds second-order polymorphic λ-calculus in the third-order polymorphic λ-calculus. The encoding makes use of representations in which abstractions are represented by abstractions, thus eliminating the need for the explicit representation of environments. We then extend this construction to encompass all of the ω-order polymorphic λ-calculus (Fω). The representation has the property that evaluation is definable, and furthermore that only well-typed terms can be represented and thus type inference does not have to be explicitly defined. Unfortunately, this metacircularity result seems to fall short of providing a useful framework for typed metaprogramming. We speculate on the reasons for this failure and the prospects for overcoming it in the future. In addition, we briefly describe our efforts in designing a practical programming language based on Fω.
Conference Paper
We examine three disparate views of the type structure of programming languages: Milner's type deduction system and polymorphic let construct, the theory of subtypes and generic operators, and the polymorphic or second-order typed lambda calculus. These approaches are illustrated with a functional language including product, sum and list constructors. The syntactic behavior of types is formalized with type inference rules, but their semantics is treated intuitively.