Content uploaded by Torben Mogensen

Author content

All content in this area was uploaded by Torben Mogensen on Sep 04, 2013

Content may be subject to copyright.

We start by giving a compact representation schema for λ-terms and show how this leads to an exceedingly small and elegant self-interpreter. We then define the notion of a self-reducer, and show how this too can be written as a small λ-term. Both the self-interpreter and the self-reducer are proved correct. We finally give a constructive proof for the second fixed point theorem for the representation schema. All the constructions have been implemented on a computer, and experiments verify their correctness. Timings show that the self-interpreter and self-reducer are quite efficient, being about 35 and 50 times slower than direct execution using a call-by-need reduction strategy. 1 Preliminaries The set of λ-terms, Λ, is defined by the abstract syntax: Λ = V | Λ Λ | λV.Λ, where V is a countably infinite set of distinct variables. (Possibly subscripted) lower case letters a, b, x, y, … are used for variables, and capital letters M, N, E, … for λ-terms. We will assume familiarity with the rul...
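The representation schema and self-interpreter from this abstract can be sketched in Python closures. This is our own transcription, assuming the higher-order encoding from Mogensen (1992): ⌜x⌝ = λabc. a x, ⌜M N⌝ = λabc. b ⌜M⌝ ⌜N⌝, ⌜λx.M⌝ = λabc. c (λx.⌜M⌝). The names VAR, APP, LAM, and E are ours, not the paper's.

```python
# Representation constructors: a coded term takes three continuations,
# one per syntactic case.
def VAR(x):
    return lambda a: lambda b: lambda c: a(x)

def APP(m, n):
    return lambda a: lambda b: lambda c: b(m)(n)

def LAM(f):
    # f maps the (eventual) value of the bound variable to the body's code,
    # with bound occurrences wrapped in VAR, e.g. LAM(lambda x: VAR(x)).
    return lambda a: lambda b: lambda c: c(f)

def E(m):
    """Self-interpreter sketch: E(code of M) behaves like M, for closed M."""
    on_var = lambda x: x                          # unquote a variable's value
    on_app = lambda p: lambda q: E(p)(E(q))       # interpret both sides, apply
    on_lam = lambda f: lambda v: E(f(v))          # bind v, interpret the body
    return m(on_var)(on_app)(on_lam)
```

For example, `E(LAM(lambda x: VAR(x)))` behaves like the identity function. Because binders are represented by host-language functions, the interpreter needs no explicit environment.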

... To demonstrate the application of PITS, we build a simple surface language Fun that extends PITS with algebraic datatypes using a Scott encoding of datatypes (Mogensen, 1992). We also implement a prototype interpreter and compiler for Fun, which can run all examples shown in this paper. ...

... Table 1 shows a summary of encodable features in Fun, including algebraic datatypes (Section 3.1), higher-kinded types (Section 3.2), datatype promotion (Section 3.2), higher-order abstract syntax (Section 3.2) and object encodings (Section 3.3). The encoding of algebraic datatypes in Fun uses Scott encodings (Mogensen, 1992). The encoding itself uses casts, but the use of casts is completely transparent to programmers. ...

... λI specifies the PITS triple (see Section 2.1) as Sort = {★}, A = {(★, ★)} and R = {(★, ★, ★)}. Algebraic datatypes and pattern matching in Fun are implemented using Scott encodings (Mogensen, 1992), which can later be desugared into PITS (λI) terms. For demonstration, we implemented a prototype interpreter and compiler for Fun, both written in GHC Haskell (Marlow, 2010). ...
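The Scott encoding of algebraic datatypes mentioned in these snippets can be illustrated in a few lines of Python (illustrative names, not Fun's syntax): a Scott-encoded value is a function that pattern-matches on itself, taking one continuation per constructor.

```python
# Scott-encoded natural numbers: n(on_zero)(on_succ) is a case analysis on n.
zero = lambda z: lambda s: z
succ = lambda n: lambda z: lambda s: s(n)

# Pattern matching is just application, so predecessor is constant-time:
#   case n of 0 -> 0 | succ m -> m
pred = lambda n: n(zero)(lambda m: m)

def to_int(n):
    # Unlike Church numerals, Scott numerals carry no iteration of their own,
    # so the host language supplies the recursion.
    return n(0)(lambda m: 1 + to_int(m))
```

The constant-time predecessor is the classic advantage of the Scott encoding over the Church encoding, where predecessor takes linear time.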

Traditional designs for functional languages (such as Haskell or ML) have separate sorts of syntax for terms and types. In contrast, many dependently typed languages use a unified syntax that accounts for both terms and types. Unified syntax has some interesting advantages over separate syntax, including less duplication of concepts and added expressiveness. However, integrating unrestricted general recursion in calculi with unified syntax is challenging when some level of type-level computation is present, since properties such as decidable type-checking are easily lost. This paper presents a family of calculi called pure iso-type systems (PITSs), which employs unified syntax, supports general recursion and preserves decidable type-checking. PITS is comparable in simplicity to pure type systems (PTSs), and is useful as a foundation for functional languages that stand in between traditional ML-like languages and full-blown dependently typed languages. In PITS, recursion and recursive types are completely unrestricted and type equality is simply based on alpha-equality, just like traditional ML-style languages. However, like most dependently typed languages, PITS uses unified syntax, naturally supporting many advanced type system features. Instead of implicit type conversion, PITS provides a generalization of iso-recursive types called iso-types. Iso-types replace the conversion rule typically used in dependently typed calculi and make every type-level computation explicit via cast operators. Iso-types avoid the complexity of explicit equality proofs employed in other approaches with casts. We study three variants of PITS that differ in the reduction strategy employed by the cast operators: call-by-name, call-by-value and parallel reduction. One key finding is that while using call-by-value or call-by-name reduction in casts loses some expressive power, it allows those variants of PITS to have simple and direct operational semantics and proofs. In contrast, the variant of PITS with parallel reduction retains the expressive power of PTS conversion, at the cost of a more complex metatheory.

... Definition 6.5 [Mog94]. An open lambda term M can be interpreted as an open lambda term with the same free variables as follows. ...

... (3) In [Mog94] it is also proved that there is a normalizer acting on coded terms. ...

The main scientific heritage of Corrado Böhm consists of ideas about computing, concerning concrete algorithms as well as models of computability. The following will be presented. 1. A compiler that can compile itself. 2. Structured programming, eliminating the 'goto' statement. 3. Functional programming and an early implementation. 4. Separability in λ-calculus. 5. Compiling combinators without parsing. 6. Self-evaluation in λ-calculus.

... Encodings. The Church (Church, 1941; Böhm & Berarducci, 1985) and Scott (Mogensen, 1992) encodings encode ADTs using functions. The encoding derived in this paper has a close connection to the Scott encoding. ...

The three-continuation approach to coroutine pipelines efficiently represents a large number of connected components. Previous work in this area introduces this alternative encoding but does not shed much light on the underlying principles for deriving this encoding from its specification. This paper gives this missing insight by deriving the three-continuation encoding based on eliminating the mutual recursion in the definition of the connect operation. Using the same derivation steps, we are able to derive a similar encoding for a more general setting, namely bidirectional pipes. Additionally, we evaluate the encoding in an advertisement analytics benchmark where it is as performant as pipes, conduit, and streamly, which are other common Haskell stream processing libraries.
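The mutual recursion in the connect operation that this derivation eliminates can be seen in a toy first-order model of pipes (our own simplification in Python, not the paper's three-continuation encoding): a pipe is `('yield', v, rest)`, `('await', k)`, or `('done', r)`.

```python
def connect(up, down):
    """Fuse producer `up` into consumer `down`.

    Note the two mutually recursive phases: while `down` is in control we
    recurse on `down`; as soon as it awaits, control flips and we recurse
    on `up` instead.
    """
    if down[0] == 'done':
        return down
    if down[0] == 'yield':
        # Downstream output passes through untouched.
        return ('yield', down[1], connect(up, down[2]))
    # down awaits a value: drive the upstream.
    if up[0] == 'done':
        return up
    if up[0] == 'yield':
        # Feed the produced value to the waiting consumer continuation.
        return connect(up[2], down[1](up[1]))
    # Upstream awaits too: propagate the request further up.
    return ('await', lambda v: connect(up[1](v), down))
```

For example, connecting a producer of 1 and 2 to a consumer that awaits twice and sums yields `('done', 3)`.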

... option), is none other than nat. This is the idea behind Equation (19) in Section 7. We have used two encodings of the X option data type: Böhm & Berarducci (1985) in Equation (5) and Scott-Mogensen (Mogensen, 1992; Abadi et al., 1993) in Equation (18). ...

From the outset, lambda calculus represented natural numbers through iterated application. The successor hence adds one more application, and the predecessor removes. In effect, the predecessor un-applies a term—which seemed impossible, even to Church. It took Kleene a rather oblique glance to sight a related representation of numbers, with an easier predecessor. Let us see what we can do if we look at this old problem with today’s eyes. We discern the systematic ways to derive more predecessors—smaller, faster, and sharper—while keeping all teeth.
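Kleene's "oblique glance" mentioned above was to thread a pair through the iteration, so that after n steps the first component lags one step behind. A standard sketch in Python (all names are ours):

```python
# Church numerals: a number n iterates a function n times.
zero = lambda s: lambda z: z
succ = lambda n: lambda s: lambda z: s(n(s)(z))
to_int = lambda n: n(lambda k: k + 1)(0)

# Church pairs.
pair = lambda a: lambda b: lambda f: f(a)(b)
fst  = lambda p: p(lambda a: lambda b: a)
snd  = lambda p: p(lambda a: lambda b: b)

# Kleene's predecessor: iterate (a, b) -> (b, b+1) n times from (0, 0),
# then take the first component, which trails one behind.
pred = lambda n: fst(n(lambda p: pair(snd(p))(succ(snd(p))))(pair(zero)(zero)))
```

The cost is linear in n, which is exactly what the paper's "smaller, faster, and sharper" derivations set out to improve.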

This handbook with exercises reveals in formalisms, hitherto mainly used for hardware and software design and verification, unexpected mathematical beauty. The lambda calculus forms a prototype universal programming language, which in its untyped version is related to Lisp, and was treated in the first author's classic The Lambda Calculus (1984). The formalism has since been extended with types and used in functional programming (Haskell, Clean) and proof assistants (Coq, Isabelle, HOL), used in designing and verifying IT products and mathematical proofs. In this book, the authors focus on three classes of typing for lambda terms: simple types, recursive types and intersection types. It is in these three formalisms of terms and types that the unexpected mathematical beauty is revealed. The treatment is authoritative and comprehensive, complemented by an exhaustive bibliography, and numerous exercises are provided to deepen the readers' understanding and increase their confidence using types.

A polymorphic subtyping relation, which relates more general types to more specific ones, is at the core of many modern functional languages. As those languages start moving towards dependently typed programming, a natural question is how polymorphic subtyping can be adapted to such settings.
This paper presents the dependent implicitly polymorphic calculus (λI∀): a simple dependently typed calculus with polymorphic subtyping. The subtyping relation in λI∀ generalizes the well-known polymorphic subtyping relation by Odersky and Läufer (1996). Because λI∀ is dependently typed, integrating subtyping in the calculus is non-trivial. To overcome many of the issues arising from integrating subtyping with dependent types, the calculus employs unified subtyping, which is a technique that unifies typing and subtyping into a single relation. Moreover, λI∀ employs explicit casts instead of a conversion rule, allowing unrestricted recursion to be naturally supported. We prove various non-trivial results, including type soundness and transitivity of unified subtyping. λI∀ and all corresponding proofs are mechanized in the Coq theorem prover.

The MetaCoq project aims to provide a certified meta-programming environment in Coq. It builds on Template-Coq, a plugin for Coq originally implemented by Malecha (Extensible proof engineering in intensional type theory, Harvard University, http://gmalecha.github.io/publication/2015/02/01/extensible-proof-engineering-in-intensional-type-theory.html, 2014), which provided a reifier for Coq terms and global declarations, as represented in the Coq kernel, as well as a denotation command. Recently, it was used in the CertiCoq certified compiler project (Anand et al., in: CoqPL, Paris, France, http://conf.researchr.org/event/CoqPL-2017/main-certicoq-a-verified-compiler-for-coq, 2017), as its front-end language, to derive parametricity properties (Anand and Morrisett, in: CoqPL'18, Los Angeles, CA, USA, 2018). However, the syntax lacked semantics, be it typing semantics or operational semantics, which should reflect, as formal specifications in Coq, the semantics of Coq's type theory itself. The tool was also rather bare bones, providing only rudimentary quoting and unquoting commands. We generalize it to handle the entire polymorphic calculus of cumulative inductive constructions, as implemented by Coq, including the kernel's declaration structures for definitions and inductives, and implement a monad for general manipulation of Coq's logical environment. We demonstrate how this setup allows Coq users to define many kinds of general purpose plugins, whose correctness can be readily proved in the system itself, and that can be run efficiently after extraction. We give a few examples of implemented plugins, including a parametricity translation and a certified extraction to call-by-value λ-calculus. We also advocate the use of MetaCoq as a foundation for higher-level tools.

Programmers can use gradual types to migrate programs to have more precise type annotations and thereby improve their readability, efficiency, and safety. Such migration requires an exploration of the migration space and can benefit from tool support, as shown in previous work. Our goal is to provide a foundation for better tool support by settling decidability questions about migration with gradual types. We present three algorithms and a hardness result for deciding key properties and we explain how they can be useful during an exploration. In particular, we show how to decide whether the migration space is finite, whether it has a top element, and whether it is a singleton. We also show that deciding whether it has a maximal element is NP-hard. Our implementation of our algorithms worked as expected on a suite of microbenchmarks.

We describe motivation, design, use, and implementation of higher-order abstract syntax as a central representation for programs, formulas, rules, and other syntactic objects in program manipulation and other formal systems where matching and substitution or unification are central operations. Higher-order abstract syntax incorporates name binding information in a uniform and language generic way. Thus it acts as a powerful link integrating diverse tools in such formal environments. We have implemented higher-order abstract syntax, a supporting matching and unification algorithm, and some clients in Common Lisp in the framework of the Ergo project at Carnegie Mellon University.

Programming languages which are capable of interpreting themselves have long fascinated computer scientists. Indeed, if this is possible then a 'strange loop' (in the sense of Hofstadter, 1979) is involved. Nevertheless, the phenomenon is a direct consequence of the existence of universal languages. Indeed, if all computable functions can be captured by a language, then so can the particular job of interpreting the code of a program of that language. Self-interpretation will be shown here to be possible in lambda calculus.
The set of λ-terms, notation Λ, is defined by the abstract syntax Λ ::= V | Λ Λ | λV.Λ, where V is the set {v, v′, v″, v‴, …} of variables. Arbitrary variables are usually denoted by x, y, z, … and λ-terms by M, N, L, …. A redex is a λ-term of the form (λx.M)N; contracting it yields M[x := N], that is, the result of substituting N for (the free occurrences of) x in M. Stylistically, it can be said that λ-terms represent functional programs including their input. A reduction machine executes such terms by trying to reduce them to normal form; that is, redexes are continuously replaced by their contracta until hopefully no more redexes are present. If such a normal form can be reached, then this is the output of the functional program; otherwise, the program diverges.
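The reduction machine just described can be sketched as a short first-order program. This is our own illustrative code (not Barendregt's), using naive substitution that assumes all binder names are distinct, so capture-avoidance is ignored.

```python
# Terms: ('var', x) | ('app', M, N) | ('lam', x, M)

def subst(t, x, s):
    """Substitute s for free occurrences of variable x in t (capture-naive)."""
    if t[0] == 'var':
        return s if t[1] == x else t
    if t[0] == 'app':
        return ('app', subst(t[1], x, s), subst(t[2], x, s))
    if t[1] == x:          # the binder shadows x
        return t
    return ('lam', t[1], subst(t[2], x, s))

def step(t):
    """One leftmost-outermost reduction step; None if t is in normal form."""
    if t[0] == 'app':
        if t[1][0] == 'lam':                      # a redex (λx.M)N ...
            return subst(t[1][2], t[1][1], t[2])  # ... becomes M[x := N]
        r = step(t[1])
        if r is not None:
            return ('app', r, t[2])
        r = step(t[2])
        return None if r is None else ('app', t[1], r)
    if t[0] == 'lam':
        r = step(t[2])
        return None if r is None else ('lam', t[1], r)
    return None                                   # a variable: normal form

def normalize(t, fuel=1000):
    """Replace redexes until none remain; the term may instead diverge."""
    for _ in range(fuel):
        r = step(t)
        if r is None:
            return t
        t = r
    raise RuntimeError('no normal form reached within the fuel limit')
```

For example, `normalize(('app', ('lam', 'x', ('var', 'x')), ('var', 'z')))` contracts the single redex and returns `('var', 'z')`.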

A functional p→e (procedure→expression) that inverts the evaluation functional for typed λ-terms in any model of typed λ-calculus containing some basic arithmetic is defined. Combined with the evaluation functional, p→e yields an efficient normalization algorithm. The method is extended to λ-calculi with constants and is used to normalize (the λ-representations of) natural deduction proofs of (higher order) arithmetic. A consequence of theoretical interest is a strong completeness theorem for βη-reduction: if two λ-terms have the same value in some model containing representations of the primitive recursive functions (of level 1), then they are provably equal in the βη-calculus.
A reasonably complete survey, as of 1984, of the untyped lambda calculus.

A systematic representation of objects grouped into types by constructions similar to the composition of sets in mathematics is proposed. The representation is by lambda expressions, which supports the representation of objects from function spaces. The representation is related to a rather conventional language of type descriptions in a way that is believed to be new. Ordinary control-expressions (i.e., case- and let-expressions) are derived from the proposed representation.
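The derivation of case- and let-expressions from such a representation can be sketched in Python (our own illustrative names, in the Scott style): a represented value takes one continuation per constructor, so a case-expression is just an application.

```python
# A Scott-style option type: each value selects among its constructors' continuations.
none = lambda n: lambda s: n
some = lambda x: lambda n: lambda s: s(x)

# "case opt of None -> default | Some x -> x" is simply opt(default)(identity):
def get_or(opt, default):
    return opt(default)(lambda x: x)

# A let-expression is the degenerate one-constructor case: let x = E in B is (λx.B) E.
let = lambda e, body: body(e)
```

No dedicated control syntax is needed; both constructs reduce to function application.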

We consider the question of whether a useful notion of metacircularity exists for the polymorphic λ-calculus. Even though complete metacircularity seems to be impossible, we obtain a close approximation to a metacircular interpreter. We begin by presenting an encoding for the Girard-Reynolds second-order polymorphic λ-calculus in the third-order polymorphic λ-calculus. The encoding makes use of representations in which abstractions are represented by abstractions, thus eliminating the need for the explicit representation of environments. We then extend this construction to encompass all of the ω-order polymorphic λ-calculus (Fω). The representation has the property that evaluation is definable, and furthermore that only well-typed terms can be represented and thus type inference does not have to be explicitly defined. Unfortunately, this metacircularity result seems to fall short of providing a useful framework for typed metaprogramming. We speculate on the reasons for this failure and the prospects for overcoming it in the future. In addition, we briefly describe our efforts in designing a practical programming language based on Fω.

We examine three disparate views of the type structure of programming languages: Milner's type deduction system and polymorphic let construct, the theory of subtypes and generic operators, and the polymorphic or second-order typed lambda calculus. These approaches are illustrated with a functional language including product, sum and list constructors. The syntactic behavior of types is formalized with type inference rules, but their semantics is treated intuitively.