Ewen Maclean's research while affiliated with the University of Edinburgh and other places

What is this page?


This page lists works of an author who doesn't have a ResearchGate profile or hasn't added the works to their profile yet. It is automatically generated from public (personal) data to further our legitimate goal of comprehensive and accurate scientific recordkeeping. If you are this author and want this page removed, please let us know.

Publications (10)


Automating Event-B invariant proofs by rippling and proof patching
  • Article
  • Full-text available

January 2019 · 73 Reads · 8 Citations

Formal Aspects of Computing

Ewen Maclean

The use of formal method techniques can contribute to the production of more reliable and dependable systems. However, a common bottleneck for industrial adoption of such techniques is the need for interactive proofs. We use a popular formal method, called Event-B, as our working domain, and set invariant preservation (INV) proofs as targets, because INV proofs can account for a significant proportion of the proofs requiring human interaction. We apply an inductive theorem proving technique, called rippling, for Event-B INV proofs. Rippling automates proofs using meta-level guidance. The guidance is particularly useful for developing proof patches to recover from failed proof attempts. We are interested in the case when a missing lemma is required. We combine a scheme-based theory-exploration system, called IsaScheme [MRMDB10], with rippling to develop a proof patch via lemma discovery. We also develop two new proof patches to unfold operator definitions and to suggest case-splits, respectively. The combined use of rippling with these three proof patches as a proof method significantly improves proof automation for our evaluation set.
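The control flow this abstract describes, rippling first and proof patches on failure, can be sketched roughly as follows. This is a minimal illustration only: the Goal type, the patch functions and their behaviour are hypothetical placeholders, not the paper's IsaPlanner/Isabelle implementation.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Goal:
    statement: str          # an Event-B INV proof obligation, as text
    hypotheses: list[str]

def try_rippling(goal: Goal) -> bool:
    """Stub: attempt a rippling proof of the goal; True on success."""
    return False  # pretend rippling got stuck in this toy run

def patch_lemma_discovery(goal: Goal) -> Optional[Goal]:
    """Stub for the IsaScheme-based patch: conjecture a missing lemma."""
    return Goal(goal.statement, goal.hypotheses + ["<conjectured lemma>"])

def patch_unfold_definition(goal: Goal) -> Optional[Goal]:
    """Stub: unfold an operator definition that blocks the ripple."""
    return None

def patch_case_split(goal: Goal) -> Optional[Goal]:
    """Stub: suggest a case split on a blocking condition."""
    return None

PATCHES: list[Callable[[Goal], Optional[Goal]]] = [
    patch_lemma_discovery,
    patch_unfold_definition,
    patch_case_split,
]

def prove(goal: Goal, depth: int = 2) -> bool:
    """Try rippling first; on failure, apply each proof patch and retry."""
    if try_rippling(goal):
        return True
    if depth == 0:
        return False
    for patch in PATCHES:
        patched = patch(goal)
        if patched is not None and prove(patched, depth - 1):
            return True
    return False
```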


Concept Invention: Foundations, Implementation, Social Aspects and Applications

January 2018 · 23 Reads · 14 Citations


This book introduces a computationally feasible, cognitively inspired formal model of concept invention, drawing on Fauconnier and Turner's theory of conceptual blending, a fundamental cognitive operation. The chapters present the mathematical and computational foundations of concept invention, discuss cognitive and social aspects, and further describe concrete implementations and applications in the fields of musical and mathematical creativity. Featuring contributions from leading researchers in formal systems, cognitive science, artificial intelligence, computational creativity, mathematical reasoning and cognitive musicology, the book will appeal to readers interested in how conceptual blending can be precisely characterized and implemented for the development of creative computational systems.


[...] by Arthur Koestler in his book The Act of Creation in 1964 [46]. Based on these basic intuitions, within the cognitive sciences these ideas have been further developed into more concrete approaches to producing novel ideas (which may be concepts, theories, solutions to problems, works of art, etc.). One particular such approach, known as the theory of conceptual blending or conceptual integration, has been proposed by Fauconnier and Turner [26] as a kind of primitive or fundamental cognitive operation underlying much of everyday thought and language. The process by which two concepts are blended into a novel idea is seen as a complex event in which particular elements and their relations pertaining to the initial two concepts are combined selectively into a new whole, which is understood to be structurally richer, in a sense we will make precise below, than the mere commonalities of the two concepts.

Fauconnier's view of concepts is prior to the notion of blending, and his Mental Spaces Theory is a highly influential cognitive theory of meaning construction, developed in [24] and [25]. According to Fauconnier, meaning construction involves two processes: (1) the building of mental spaces; and (2) the establishment of mappings between those mental spaces. Moreover, the mapping relations are guided by the local discourse context, which means that meaning construction is always situated or context-dependent.

Fauconnier and Turner [27] describe several constitutive elements of conceptual blending. These are (i) the input spaces that are to be blended, (ii) a partial cross-space mapping that connects counterparts in the input mental spaces, (iii) a generic space that is an abstraction from what the input spaces have in common, (iv) a blending operation that produces a blend of the input spaces and into which the structure of the input spaces is selectively projected, and (v) an emergent structure, i.e., structure that is a synergistic gain over the naive sum of the structure of the input spaces. These constitutive elements can be organised in a conceptual integration network, i.e., the network of all input spaces, generic spaces and blend spaces together with the selective projections that model a particular blending process. Finally, Fauconnier and Turner propose certain optimality principles that govern the blending process, and that can be taken as a way to assess the quality of a blend. Let us briefly review these constitutive elements and optimality principles as put forth in Fauconnier and Turner's model, using the houseboat blend depicted in Fig. 1 as an illustrative example.

Fig. 1. The 'houseboat' blend, adapted from [33].
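The five constitutive elements enumerated above can be made concrete with a small sketch. The encoding below, concepts as finite sets of (relation, value) facts, is a deliberate simplification invented for illustration; it is not the formal model developed in the book.

```python
# Toy conceptual integration network for the 'houseboat' blend.
house = {("medium", "land"), ("inhabitant", "resident"), ("function", "dwelling")}
boat = {("medium", "water"), ("inhabitant", "passenger"), ("function", "transport")}

# (iii) generic space: the shared relations, abstracted from their values.
generic = {rel for rel, _ in house} & {rel for rel, _ in boat}

# (ii) cross-space mapping: counterpart values paired by shared relation,
# e.g. resident <-> passenger, dwelling <-> transport.
mapping = {rel: (dict(house)[rel], dict(boat)[rel]) for rel in generic}

# (iv) the blend projects selectively: the houseboat takes the boat's
# medium but the house's inhabitant and function; (v) it also contains
# emergent structure found in neither input, e.g. being moored.
houseboat = {
    ("medium", "water"),
    ("inhabitant", "resident"),
    ("function", "dwelling"),
    ("mooring", "waterfront"),
}
emergent = houseboat - (house | boat)
assert emergent == {("mooring", "waterfront")}
```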
A Computational Framework for Conceptual Blending

December 2017 · 767 Reads · 73 Citations

Artificial Intelligence

We present a computational framework for conceptual blending, a concept invention method that is advocated in cognitive science as a fundamental and uniquely human engine for creative thinking. Our framework treats a crucial part of the blending process, namely the generalisation of input concepts, as a search problem that is solved by means of modern answer set programming methods to find commonalities among input concepts. We also address the problem of pruning the space of possible blends by introducing metrics that capture most of the so-called optimality principles, described in the cognitive science literature as guidelines to produce meaningful and serendipitous blends. As a proof of concept, we demonstrate how our system invents novel concepts and theories in domains where creativity is crucial, namely mathematics and music.
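As a rough sketch of the pruning step described in this abstract, candidate blends can be scored against proxy metrics and only the best kept. The scoring formulas below are invented stand-ins, not the paper's actual optimality-principle metrics.

```python
# Prune candidate blends (modelled as frozensets of facts) with proxy
# metrics loosely echoing the optimality principles; the concrete
# formulas are made up for illustration.
def score(blend: frozenset, a: frozenset, b: frozenset) -> float:
    topology = len(blend & (a | b)) / max(len(blend), 1)     # preserve input structure
    unpacking = (len(blend & a) > 0) + (len(blend & b) > 0)  # both inputs recognisable
    compression = 1.0 / (1 + abs(len(blend) - len(a)))       # stay roughly input-sized
    return topology + unpacking + compression

def prune(candidates: list, a: frozenset, b: frozenset, k: int = 5) -> list:
    """Keep the k best-scoring candidate blends."""
    return sorted(candidates, key=lambda c: score(c, a, b), reverse=True)[:k]
```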


Fig. 1: Amalgamation workflow  
ASP, Amalgamation, and the Conceptual Blending Workflow

September 2015 · 120 Reads · 12 Citations

Lecture Notes in Computer Science

We present a framework for conceptual blending – a concept invention method that is advocated in cognitive science as a fundamental and uniquely human engine for creative thinking. Herein, we employ the search capabilities of ASP to find commonalities among input concepts as part of the blending process, and we show how our approach fits within a generalised conceptual blending workflow. Specifically, we orchestrate ASP with imperative Python programming to query external tools for theorem proving and colimit computation. We illustrate our approach with an example of creativity in mathematics.
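A minimal sketch of the amalgamation loop this abstract describes: generalise the inputs until their combination is consistent. Every function body below is a toy placeholder; in the real workflow an ASP solver proposes generalisations and external tools perform the theorem-proving and colimit steps.

```python
# Generalise-then-combine (amalgamation) loop with stand-in externals.
def asp_generalise(concept: set[str]) -> list[set[str]]:
    """Stand-in for the ASP search step: drop one axiom at a time."""
    return [concept - {axiom} for axiom in concept]

def consistent(theory: set[str]) -> bool:
    """Stand-in for the external theorem-prover call."""
    return "contradiction" not in theory

def colimit(a: set[str], b: set[str]) -> set[str]:
    """Stand-in for the colimit computation: here simply the union."""
    return a | b

def amalgamate(a: set[str], b: set[str]) -> set[str] | None:
    """Blend a and b, generalising the inputs until the blend is consistent."""
    frontier = [(a, b)]
    while frontier:
        ga, gb = frontier.pop()
        blend = colimit(ga, gb)
        if consistent(blend):
            return blend
        # Inconsistent blend: generalise either input and keep searching.
        frontier += [(ga2, gb) for ga2 in asp_generalise(ga)]
        frontier += [(ga, gb2) for gb2 in asp_generalise(gb)]
    return None
```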


Figure 1: Conceptual blending between the perfect and Phrygian cadence gives rise to the tritone substitution progression and the backdoor progression.

The second application of chord blending is to 'cross-fade' chord progressions of different keys or idioms in a smooth manner by means of a transition chord, which is the result of blending. Assume that the input sequences are precomposed by another system, e.g. the constrained HMM (cHMM) by [Kaliakatsos and Cambouropoulos, 2014]. Let us suppose that a chord sequence starts in C major, such as C-Dmin-G7-C-F, and after its ending an intermediate G♭7-C♭ chord progression is introduced (having thus a very remote modulation from C major to C♭ major). The cHMM system will not find any transition from the available diatonic chords in C major to the G♭7 chord, and will terminate or give a random continuation. However, if we perform blending on F ([5,9,0]) – the last [...]

Table 1: Cadence fusion results generated by our system.

Figure 2: Blending as interleaved generalisation, combination and completion process.
Computational Invention of Cadences and Chord Progressions by Conceptual Chord-blending

January 2015 · 1,128 Reads · 34 Citations

We present a computational framework for chord invention based on a cognitive-theoretic perspective on conceptual blending. The framework builds on algebraic specifications and solves two musicological problems. It automatically finds transitions between chord progressions of different keys or idioms, and it substitutes chords in a chord progression with other chords of a similar function, as a means to create novel variations. The approach is demonstrated with several examples where jazz cadences are invented by blending chords in cadences from earlier idioms, and where novel chord progressions are generated by inventing transition chords.
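The tritone-substitution example from the Figure 1 caption above can be approximated in a few lines, treating chords as pitch-class sets. The choice of "salient" notes is hand-coded here for this single example, whereas the paper derives blends from algebraic specifications and optimality criteria.

```python
# Chords as pitch-class sets (C=0, C#/Db=1, ..., B=11). Blending the
# perfect cadence's dominant (G7) with the Phrygian cadence's Db chord,
# keeping G7's characteristic tritone and Db's root and fifth, yields
# Db7: the tritone substitution.
g7 = {7, 11, 2, 5}      # G  B  D  F   (perfect cadence penultimate chord)
db_major = {1, 5, 8}    # Db F  Ab     (Phrygian cadence penultimate chord)

salient_g7 = {11, 5}    # leading note and seventh: the tritone of G7
salient_db = {1, 8}     # root and fifth of Db

blend = salient_g7 | salient_db
assert blend == {1, 5, 8, 11}   # Db F Ab Cb: a Db7 chord
```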


Proof automation for functional correctness in separation logic

January 2014 · 13 Reads · 4 Citations

Journal of Logic and Computation

We describe an approach to automatically prove the functional correctness of pointer programs that involve iteration and recursion. Building upon separation logic, our approach has been implemented as a tightly integrated tool chain incorporating a novel combination of proof planning and invariant generation. Starting from shape analysis, performed by the Smallfoot static analyser, we have developed a proof strategy that combines shape and functional aspects of the verification task. By focusing on both iterative and recursive code, we have had to address two related invariant generation tasks, i.e. loop and frame invariants. We deal with both tasks uniformly using an automatic technique called term synthesis, in combination with the IsaPlanner/Isabelle theorem prover. In addition, where verification fails, we attempt to overcome failure by automatically generating missing preconditions. We present in detail our experimental results. Our approach has been evaluated on a range of examples, drawn in part from a functional extension to the Smallfoot corpus.
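The idea of term synthesis, generating candidate terms and filtering them against evidence, can be illustrated on a toy integer loop. This sketch is not the paper's technique (which works on separation-logic loop and frame invariants inside IsaPlanner/Isabelle); the candidate grammar and trace-based filter are invented for illustration.

```python
# Toy term synthesis for a loop invariant of: s = 0; for x in xs: s += x
# Candidates are tested against states recorded at every loop head.
def traces(xs):
    """Record (s, prefix-consumed-so-far) at every loop head."""
    s, out = 0, []
    for i, x in enumerate(xs):
        out.append((s, xs[:i]))
        s += x
    out.append((s, xs))
    return out

CANDIDATES = {
    "s == sum(prefix)": lambda s, prefix: s == sum(prefix),
    "s == len(prefix)": lambda s, prefix: s == len(prefix),
    "s >= 0":           lambda s, prefix: s >= 0,
}

def synthesise(test_inputs):
    """Keep the candidate invariants that hold on all observed states."""
    states = [p for xs in test_inputs for p in traces(xs)]
    return [name for name, inv in CANDIDATES.items()
            if all(inv(s, prefix) for s, prefix in states)]

print(synthesise([[1, 2, 3], [5], []]))  # ['s == sum(prefix)', 's >= 0']
```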


Fig. 3. Term tree and adjacency matrix for Theorem fact-fact-tail; assuming the root-to-leaf direction of edges.
Fig. 5. Left: the source theorem and lemma, along with the target theorem for which we seek an analogous lemma. Right: overview of the process of generating analogical lemmas, where iteration is the iteration step and mappings counts the analogy mappings.
Proof-Pattern Recognition and Lemma Discovery in ACL2

August 2013 · 133 Reads · 37 Citations

Lecture Notes in Computer Science

We present a novel technique for combining statistical machine learning for proof-pattern recognition with symbolic methods for lemma discovery. The resulting tool, ACL2(ml), gathers proof statistics, uses statistical pattern recognition to pre-process data from libraries, and then suggests auxiliary lemmas in new proofs by analogy with already seen examples. This paper presents the implementation of ACL2(ml) alongside theoretical descriptions of the proof-pattern recognition and lemma discovery methods involved.
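The statistical half of the approach can be caricatured in a few lines: featurise theorem statements, then suggest the auxiliary lemmas that helped the nearest already-proved theorem. The bag-of-symbols featurisation and the toy library below are stand-ins, not ML4PG's actual proof statistics.

```python
from collections import Counter
from math import sqrt

def features(stmt: str) -> Counter:
    """Crude featurisation: bag of symbols in an ACL2-style statement."""
    return Counter(stmt.replace("(", " ").replace(")", " ").split())

def distance(a: Counter, b: Counter) -> float:
    """Euclidean distance between two feature bags."""
    return sqrt(sum((a[k] - b[k]) ** 2 for k in set(a) | set(b)))

LIBRARY = {  # already-proved theorem -> auxiliary lemmas its proof used
    "(equal (rev (rev x)) x)": ["rev-rev-helper"],
    "(equal (len (append x y)) (+ (len x) (len y)))": ["len-append"],
}

def suggest(new_stmt: str) -> list[str]:
    """Suggest the lemmas of the nearest already-seen theorem."""
    nearest = min(LIBRARY, key=lambda t: distance(features(t), features(new_stmt)))
    return LIBRARY[nearest]

print(suggest("(equal (rev (append x y)) (append (rev y) (rev x)))"))
```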


Towards Automated Proof Strategy Generalisation

March 2013 · 15 Reads · 2 Citations

The ability to automatically generalise (interactive) proofs and use such generalisations to discharge related conjectures is a very hard problem which remains unsolved. Here, we develop a notion of goal types to capture key properties of goals, which enables abstractions over the specific order and number of sub-goals arising when composing tactics. We show that the goal types form a lattice, and utilise this property in the techniques we develop to automatically generalise proof strategies so that they can be reused for proofs of related conjectures. We illustrate our approach with an example.
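A minimal sketch of the lattice structure mentioned in the abstract, under the simplifying assumption that a goal type is just a finite set of goal properties ordered by inclusion; the property names are hypothetical.

```python
# Goal types as finite sets of properties: meet is intersection, join is
# union, and a tactic that requires properties T applies to any goal
# whose type contains T.
GoalType = frozenset

def meet(a: GoalType, b: GoalType) -> GoalType:
    return a & b

def join(a: GoalType, b: GoalType) -> GoalType:
    return a | b

equational = frozenset({"equational"})
inductive = frozenset({"equational", "has_induction_variable"})

assert equational <= inductive                  # inductive goals are more specific
assert meet(inductive, equational) == equational
assert join(inductive, equational) == inductive
```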


The CORE system: Animation and functional correctness of pointer programs

November 2011 · 15 Reads · 5 Citations

Pointers are a powerful and widely used programming mechanism, but developing and maintaining correct pointer programs is notoriously hard. Here we describe the CORE system, which supports the development of provably correct pointer programs. The CORE system combines data structure animation with functional correctness. While the animation component allows a programmer to explore and debug their algorithms, the functional correctness component provides a stronger guarantee via formal proof. CORE builds upon two external reasoning tools, i.e. the Smallfoot family of static analysers and the IsaPlanner proof planner. At the heart of the CORE functional correctness capability lies an integration of planning and term synthesis. The planning subsystem bridges the gap between shape analysis, provided by Smallfoot, and functional correctness. Key to this process is the generation of functional invariants, i.e. both loop and frame invariants. We use term synthesis, coupled with IsaPlanner's capability for reasoning about inductively defined structures, to automate the generation of functional invariants. The formal guarantees constructed by the CORE system take the form of proof tactics.
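The animation component's role, letting a programmer watch a pointer structure change, can be suggested with a toy renderer for singly linked cells. This is an illustration of the idea only, not the CORE system's animation engine.

```python
# Toy data-structure animation: render a heap of singly linked cells as
# text after each pointer update. Heap maps address -> (value, next).
heap = {1: ("a", 2), 2: ("b", 3), 3: ("c", None)}

def render(heap, root):
    """Print the list reachable from root, cell by cell."""
    cells, addr = [], root
    while addr is not None:
        value, nxt = heap[addr]
        cells.append(f"[{addr}|{value}]")
        addr = nxt
    print(" -> ".join(cells) + " -> nil")

render(heap, 1)        # [1|a] -> [2|b] -> [3|c] -> nil
heap[2] = ("b", None)  # mutate: cut the list after cell 2
render(heap, 1)        # [1|a] -> [2|b] -> nil
```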


Mutation in Linked Data Structures

October 2011 · 6 Reads · 4 Citations

Lecture Notes in Computer Science

Separation logic was developed as an extension to Hoare logic with the aim of simplifying pointer program proofs. A key feature of the logic is that it focuses the reasoning effort on only those parts of the heap that are relevant to a program: so-called local reasoning. Underpinning this local reasoning are the separating conjunction and separating implication operators. Here we present an automated reasoning technique called mutation that provides guidance for separation logic proofs. Specifically, given two heap structures specified within separation logic, mutation attempts to construct an equivalence proof using a difference reduction strategy. Pivotal to this strategy is a generalised decomposition operator, which is essential when matching heap structures. We show how mutation provides an effective strategy for proving the functional correctness of iterative and recursive programs within the context of weakest precondition analysis. Currently, mutation is implemented as a proof plan within our CORE program verification system. CORE combines results from shape analysis with our work on invariant generation and proof planning. We present our results for mutation within the context of the CORE system.
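The decomposition rewrite at the heart of the strategy can be sketched on symbolic heaps represented as multisets of spatial atoms: unfolding a list segment ls(x, z) into a points-to cell plus a shorter segment removes the difference between two heaps. The encoding below is invented for illustration and is not the CORE proof plan.

```python
from collections import Counter

def decompose(heap: Counter, x: str, y: str, z: str) -> Counter:
    """Rewrite ls(x, z) => x |-> y * ls(y, z), introducing the witness y."""
    out = heap.copy()
    out[("ls", x, z)] -= 1
    out[("pt", x, y)] += 1
    out[("ls", y, z)] += 1
    return +out  # unary + drops atoms whose count fell to zero

left = Counter({("pt", "x", "y"): 1, ("ls", "y", "nil"): 1})
right = Counter({("ls", "x", "nil"): 1})

# One decomposition step on the right-hand heap removes the difference:
assert decompose(right, "x", "y", "nil") == left
```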

Citations (9)


... An emotion is preceded by a stimulus that can be a situation or an event such as seeing, feeling, hearing, listening, thinking and touching. Emotions play a key role in social interactions [10][11][12] and carry regulatory and useful functions between the human body and brain [43,44,47]. A recent study highlights the necessity of human emotions in perception and human cognition ([42] and [3]). ...

Reference:

Real and fake emotion detection using enhanced boosted support vector machine algorithm
Concept Invention: Foundations, Implementation, Social Aspects and Applications
  • Citing Book
  • January 2018

... Much of this work relied on so-called symbolic AI, but struggled because GOFAI (good, old-fashioned AI) could not provide the analogous reasoning skills and extensive general knowledge required to process metaphorical language. In the context of conceptual blending specifically, there have been tentative attempts to model the underlying processes involved computationally [20]. ...

A Computational Framework for Conceptual Blending

Artificial Intelligence

... In formal CB theory, however, the generic space plays the crucial role of computing which properties and relations need to be transferred as-they-are in the blend. Additionally, in the COINVENT implementation of CB, the generic space plays also a methodologically crucial role, since it provides the generalization limits for the amalgamation process [13,14]. In the concluding section, the potential role of the generic space in the high-level framework is discussed in the context of future work. ...

ASP, Amalgamation, and the Conceptual Blending Workflow

Lecture Notes in Computer Science

... The probability value for each activated transition in the anti-diagonal blocks is the average of the two probabilities in the initial matrices. At a second stage, all pairs of the most common transitions in the initial idioms are blended (Eppe et al., 2015), giving rise to new chord transitions that might potentially incorporate new chords, in the sense that these chords do not belong to any of the learned idioms. Such chords are appended to the augmented matrix (a new line and a new column are added for any new chord), while the probability assigned to the transitions generated by blending is the average probability of the input transitions (for more information please refer to Kaliakatsos-Papakostas et al., 2017). ...

Computational Invention of Cadences and Chord Progressions by Conceptual Chord-blending

... Having −∗ is a desirable feature, since many algorithms/programs are verified using this connective, especially when expressing tail-recursive operations [26], iterators [23], septraction in rely/guarantee [37], etc. Moreover, −∗ is useful in the weakest precondition calculus for SL, which introduces −∗ "in each statement in the program being analysed" [25]. See the introduction of [24] and [36] for other examples requiring −∗. ...

Proof automation for functional correctness in separation logic
  • Citing Article
  • January 2014

Journal of Logic and Computation

... Instead of computing a hitting ratio based on a fixed formula, we provide SeLFiE as a language, so that Isabelle experts can encode their expertise as assertions based on the syntactic structure of the inductive problem at hand and the semantics of its relevant constants. Heras et al. used the ML4PG learning method to find patterns to generalize and transfer inductive proofs from one domain to another in ACL2 [13]. ...

Proof-Pattern Recognition and Lemma Discovery in ACL2

Lecture Notes in Computer Science

... Even if the proofs are not identical, they are still captured by the same proof strategy. In fact, the proofs can be described as a simple version of the mutation proof strategy developed to reason about functional properties in separation logic [19]. Here, we assume the existence of a hypothesis H (i.e. ...

Mutation in Linked Data Structures
  • Citing Conference Paper
  • October 2011

Lecture Notes in Computer Science