Rod M. Burstall’s research while affiliated with University of Edinburgh and other places
A small set of constructs can simulate a wide variety of apparently distinct features in modern programming languages. Using typed lambda calculus with bindings, declarations, and types as first-class values, we show how to build modules, interfaces and implementations, abstract data types, generic types, recursive types, and unions. The language has a concise operational semantics given by inference rules.
A later version of this paper appeared as: Pebble: A kernel language for modules and abstract data types, Information and Computation 76, 2/3 (Feb./Mar. 1988), pp. 278-346.
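To give a feel for the idea in the Pebble abstract above, here is a minimal sketch in OCaml rather than Pebble syntax; the module name Counter and its contents are illustrative assumptions, not taken from the paper. A signature plays the role of an interface, a module plays the role of an implementation, and packing the module as a first-class value echoes the treatment of bindings, declarations, and types as values.

(* Hedged OCaml sketch: interface, implementation, and abstract data type
   expressed with first-class modules.  Names are illustrative. *)

module type COUNTER = sig
  type t                        (* abstract type: representation hidden *)
  val zero : t
  val incr : t -> t
  val read : t -> int
end

module Counter_impl : COUNTER = struct
  type t = int
  let zero = 0
  let incr n = n + 1
  let read n = n
end

(* The implementation packed as a first-class value: it can be bound,
   passed to functions, and selected at run time. *)
let counter : (module COUNTER) = (module Counter_impl)

(* A generic client: it works for any implementation of the interface. *)
let count_twice (module C : COUNTER) = C.read (C.incr (C.incr C.zero))

let () = Printf.printf "count_twice = %d\n" (count_twice counter)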
There is a population explosion among the logical systems being used in computer science. Examples include first order logic (with and without equality), equational logic, Horn clause logic, second order logic, higher order logic, infinitary logic, dynamic logic, process logic, temporal logic, and modal logic; moreover, there is a tendency for each theorem prover to have its own idiosyncratic logical system. Yet it is usual to give many of the same results and applications for each logical system; of course, this is natural in so far as there are basic results in computer science that are independent of the logical system in which they happen to be expressed. But we should not have to do the same things over and over again; instead, we should generalize, and do the essential things once and for all! Also, we should ask what are the relationships among all these different logical systems. This paper shows how some parts of computer science can be done in any suitable logical system, by introducing the notion of an institution as a precise generalization of the informal notion of a "logical system." A first main result shows that if an institution is such that interface declarations expressed in it can be glued together, then theories (which are just sets of sentences) in that institution can also be glued together. A second main result gives conditions under which a theorem prover for one institution can be validly used on theories from another; this uses the notion of an institution morphism. A third main result shows that institutions admitting free models can be extended to institutions whose theories may include, in addition to the original sentences, various kinds of constraints upon interpretations; such constraints are useful for defining abstract data types, and include so-called "data," "hierarchy," and "generating" constraints. Further results show how to define institutions that mix sentences from one institution with constraints from another, and even mix sentences and (various kinds of) constraints from several different institutions. It is noted that general results about institutions apply to such "multiplex" institutions, including the result mentioned above about gluing together theories. Finally, this paper discusses some applications of these results to specification languages, showing that much of that subject is in fact independent of the institution used.
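As a rough illustration of the definition sketched in this abstract, the following OCaml module type records the data of an institution in a deliberately simplified form: signatures and signature morphisms appear as ordinary values rather than a category, and the satisfaction condition is only stated in a comment. All of the names are assumptions of this page, not drawn from the paper.

(* Hedged sketch of the data of an institution, simplified. *)

module type INSTITUTION = sig
  type signature
  type morphism                    (* a signature morphism s1 -> s2 *)
  type sentence                    (* sentences over some signature *)
  type model                       (* models of some signature *)

  val translate_sentence : morphism -> sentence -> sentence
  val reduct_model       : morphism -> model -> model
                                   (* note the contravariant direction *)
  val satisfies          : model -> sentence -> bool

  (* Satisfaction condition (informally): for any morphism phi : s1 -> s2,
     model m2 of s2, and sentence e over s1,
       satisfies m2 (translate_sentence phi e)
         = satisfies (reduct_model phi m2) e                              *)
end

(* A theory is just a set of sentences over one signature; gluing theories
   together is then union after translating along signature morphisms. *)
module Theory (I : INSTITUTION) = struct
  type t = I.sentence list
  let union (t1 : t) (t2 : t) : t = t1 @ t2
  let translate (phi : I.morphism) (t : t) : t =
    List.map (I.translate_sentence phi) t
end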
An extension to the Edinburgh LCF interactive theorem-proving system is described which provides new ways of constructing theories, drawing upon ideas from the Clear specification language. A new theory can be built from an existing theory in two new ways: by renaming its types and constants, or by abstraction (forgetting some types and constants and perhaps renaming the rest). A way of providing parameterised theories is described. These theory-building operations -- together with operations for forming a primitive theory and for taking the union of theories -- allow large theories to be built in a flexible and well-structured fashion. Inference rules and strategies for proof in structured theories are also discussed.
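To make the theory-building operations concrete, here is a small OCaml sketch of theories as collections of type names, constant names, and axioms, with rename, abstraction, and union combinators, and a parameterised theory as a function on theories. The representation and the function names are assumptions for illustration, not the LCF extension's actual interface.

(* Hedged sketch of Clear-style theory-building operations. *)

type theory = {
  types     : string list;        (* names of types introduced *)
  constants : string list;        (* names of constants introduced *)
  axioms    : string list;        (* axioms, kept as plain strings here *)
}

(* Forming a primitive theory. *)
let primitive ~types ~constants ~axioms = { types; constants; axioms }

(* Renaming types and constants according to a renaming function. *)
let rename f thy = {
  types     = List.map f thy.types;
  constants = List.map f thy.constants;
  axioms    = thy.axioms;
}

(* Abstraction: forget the types and constants not listed in [keep];
   the rest can be renamed afterwards with [rename]. *)
let abstract ~keep thy =
  let kept xs = List.filter (fun x -> List.mem x keep) xs in
  { thy with types = kept thy.types; constants = kept thy.constants }

(* Union of two theories. *)
let union t1 t2 = {
  types     = t1.types @ t2.types;
  constants = t1.constants @ t2.constants;
  axioms    = t1.axioms @ t2.axioms;
}

(* A parameterised theory is simply a function from theories to theories. *)
let list_of (elem : theory) : theory =
  union elem
    (primitive ~types:["list"] ~constants:["nil"; "cons"]
       ~axioms:["cons is injective"])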
Using the program transformation technique we derive some algorithms for evaluating linear recurrence relations in logarithmic time. The particular case of the Fibonacci function is considered first, and a comparison with the conventional matrix exponentiation algorithm is made. This comparison also allows us to contrast the transformation technique with the stepwise refinement technique, highlighting some interesting features of the former. Through the examples given we also explain why those features matter for a useful and reliable program-construction methodology.
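The abstract does not reproduce the derived program, but the flavour of such a result can be conveyed with the standard pair-doubling identities F(2k) = F(k)(2F(k+1) - F(k)) and F(2k+1) = F(k)^2 + F(k+1)^2. The OCaml sketch below is an illustration under that assumption, not the paper's own derivation: it evaluates the Fibonacci function with a logarithmic number of recursive calls.

(* The "initially very simple" program: exponential time. *)
let rec fib_slow n =
  if n < 2 then n else fib_slow (n - 1) + fib_slow (n - 2)

(* Logarithmic-time version via the pair-doubling identities:
     F(2k)   = F(k) * (2*F(k+1) - F(k))
     F(2k+1) = F(k)^2 + F(k+1)^2
   The helper returns the pair (F n, F (n+1)). *)
let fib_fast n =
  let rec pair n =
    if n = 0 then (0, 1)
    else
      let a, b = pair (n / 2) in    (* a = F(k), b = F(k+1), k = n/2 *)
      let c = a * (2 * b - a) in    (* F(2k)   *)
      let d = a * a + b * b in      (* F(2k+1) *)
      if n mod 2 = 0 then (c, d) else (d, c + d)
  in
  fst (pair n)

let () =
  assert (List.init 20 fib_slow = List.init 20 fib_fast);
  Printf.printf "fib 50 = %d\n" (fib_fast 50)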
In the last ten years or so a lot of algebraic ideas have wormed their way into Computer Science, particularly in work connected with correctness of compilers, with abstract data types and with specification. We have been among those responsible [Burstall and Goguen 1980, 1981]. Most papers begin with a compressed section of definitions, but it is difficult for the well-disposed outsider to make much of these. Reference to books for algebraists, such as Graetzer [1979], or even to those angled towards automata theory [Arbib and Manes 1975] may not be encouraging. So it is perhaps worthwhile to present some of the key algebraic ideas in a leisurely and, we hope, intuitive form, emphasising the Computer Science connection. Some water has flowed under the bridge since one of us was last involved in an attempt to do this [Goguen, Thatcher, Wagner and Wright 1975].
A system of rules for transforming programs is described, with the programs in the form of recursion equations. An initially very simple, lucid, and hopefully correct program is transformed into a more efficient one by altering the recursion structure. Illustrative examples of program transformations are given, and a tentative implementation is described. Alternative structures for programs are shown, and a possible initial phase for an automatic or semiautomatic program-manipulation system is indicated.
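As an illustration of the kind of rewriting such a system performs (this particular example and its function names are assumptions of this page, not taken from the paper), the unfold/fold style can turn a clear two-pass definition of the average of a list into a single-pass recursion by tupling the two recursive calls.

(* Initial, lucid version: two traversals of the list. *)
let rec sum = function [] -> 0. | x :: xs -> x +. sum xs
let rec len = function [] -> 0. | _ :: xs -> 1. +. len xs

let average_slow xs = sum xs /. len xs

(* Transformed version: define sumlen xs = (sum xs, len xs), unfold the
   definitions of sum and len, then fold the pair of recursive calls back
   into a single recursive call of sumlen.  One traversal instead of two. *)
let average_fast xs =
  let rec sumlen = function
    | [] -> (0., 0.)
    | x :: rest ->
        let s, n = sumlen rest in   (* folded: one recursive call *)
        (x +. s, n +. 1.)
  in
  let s, n = sumlen xs in
  s /. n

let () =
  let xs = [1.; 2.; 3.; 4.] in
  assert (average_slow xs = average_fast xs);
  Printf.printf "average = %g\n" (average_fast xs)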
... We should mention as the sources of the descriptive view on image analysis and consequently of DA the works of 1970-1990 by F. Ambler, G. Barrow, R. Burstall [1], T. ...
... The mechanism for referring to non-local variables which is built into the BCPL language is inadequate to deal naturally with this situation. (So, for that matter, are those in ALGOL 60, ALGOL 68 and PL/1; two languages which are sufficiently powerful are PAL (Evans, 1968) and POP-2 (Burstall, Collins, and Popplestone, 1968).) This means that we must make special provisions to preserve the information ourselves, which we do by keeping it all in the vector S. The length of S may therefore vary from stream to stream, but the first few elements are always reserved for the basic functions and routines. ...
... The preceding section described an example of arity-generic programming. We consider now a type-generic task: proving the no confusion property [4] of datatype constructors (that is, they are injective and disjoint) for a closed universe of strictly positive types. For defining a universe of datatypes, the idea (described in more detail by Dagand and McBride [9]) is to define a type whose elements are interpreted as codes for datatype signatures and combine this with a type-level least fixed-point operator. ...
... The Poplog system developed at Sussex University in the early 1980s was an implementation of the Pop-11 language (a descendant of Pop-2 [Burstall and Popplestone 1968]) that included support for Prolog (hence the name). Subsequently, support for Common Lisp and later Standard ML was added to the implementation; the SML implementation was due to Robert Duncan and Simon Nichols and corresponded roughly to the 1990 version of the language. ...
... Following this general approach, the first part of the paper develops the notion of derived structures and models. Burstall and Goguen [20,48] introduced the notion of derivor, implemented as part of the mechanisms available in the programming language Clear [20] to synthesize operations from a set of basic (and available) ones [47,48]. ...
... The graph correspondence problem between G_a and G_t then reduces to finding the maximum clique of the correspondence graph C [2]. This allows us to obtain matches between the 2D tree positions of the aerial and terrestrial clouds, used to solve for the relative planar pose using Umeyama's method [27]. ...
... Freddy II was a "hand-eye" robot that could assemble toy wooden models from a heap of pieces. It used vision to identify and locate the parts, and was able to rearrange them to enable identification when they were obscured by other parts [1]. Given the state-of-the-art at the time, this required not only building the robot, but also designing and building the vision system, and a programming environment for controlling the various subsystems. ...
... We have been highly influenced by the early work of Burstall and Goguen [BG77,BG79], Doug Smith's Specware [Smi93,Smi99], and the work of Kapur, Musser and Stepanov on Tecton [KMS82,KM92]. They gave us the basic operational ideas, and some of the semantic tools we needed. ...