About
49 Publications
2,035 Reads
635 Citations
Introduction
Current institution
Additional affiliations
December 2008 - present
Publications (49)
The efficacy of address space layout randomization has been formally demonstrated in a shared-memory model by Abadi et al., contingent on specific assumptions about victim programs. However, modern operating systems, implementing layout randomization in the kernel, diverge from these assumptions and operate on a separate memory model with communica...
The paper extends the expectation transformer based analysis of higher-order probabilistic programs to the quantum higher-order setting. The quantum language we are considering can be seen as an extension of PCF, featuring unbounded recursion. The language admits classical and quantum data, as well as a tick operator to account for costs. Our quant...
We introduce eRHL, a program logic for reasoning about relational expectation properties of pairs of probabilistic programs. eRHL is quantitative, i.e., its pre- and post-conditions take values in the extended non-negative reals. Thanks to its quantitative assertions, eRHL overcomes randomness alignment restrictions from prior logics, including pRH...
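The role of couplings can be illustrated with a toy computation (a sketch of mine in Haskell, not eRHL's proof rules): the same relational post-expectation takes different values depending on how the randomness of the two programs is coupled, which is exactly where alignment restrictions bite.

  -- finite distributions as weighted lists
  type Dist a = [(a, Rational)]

  coin :: Dist Int
  coin = [(0, 1/2), (1, 1/2)]

  -- product coupling: the two programs sample independently
  productC :: Dist a -> Dist b -> Dist (a, b)
  productC d e = [((x, y), p * q) | (x, p) <- d, (y, q) <- e]

  -- identity coupling: both programs observe the same coin
  identityC :: Dist a -> Dist (a, a)
  identityC d = [((x, x), p) | (x, p) <- d]

  -- expected value of a relational post-expectation over a coupling
  expect :: Dist (a, b) -> ((a, b) -> Rational) -> Rational
  expect d f = sum [p * f xy | (xy, p) <- d]

  main :: IO ()
  main = do
    let post (x, y) = abs (fromIntegral (x - y))   -- post-expectation |x1 - x2|
    print (expect (productC coin coin) post)       -- 1 % 2
    print (expect (identityC coin) post)           -- 0 % 1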
We propose, implement, and evaluate a hopping proof approach for proving expectation-based properties of probabilistic programs. Our approach combines EHL, a syntax-directed proof system for reducing proof goals of a program to proof goals of simpler programs, with a "hopping" proof rule for reducing proof goals of an original program to proof goal...
In this paper, we study quantitative properties of quantum programs. Properties of interest include (positive) almost-sure termination, expected runtime or expected cost, that is, for example, the expected number of applications of a given quantum gate, etc. After studying the completeness of these problems in the arithmetical hierarchy over the Cl...
In this work, we study the fully automated inference of expected result values of probabilistic programs in the presence of natural programming constructs such as procedures, local variables and recursion. While crucial, capturing these constructs becomes highly non-trivial. The key contribution is the definition of a term representation, denoted a...
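For a concrete instance of the analysis target, consider a toy recursive probabilistic program and its expected result value, worked out by hand and approximated by depth-bounded unfolding (the program and the truncation scheme are illustrative sketches of mine, not the paper's term representation):

  -- geo: flip a fair coin; on heads return 0, on tails return 1 + geo.
  -- Its expected result E satisfies E = 1/2 * 0 + 1/2 * (1 + E), hence E = 1.

  -- depth-bounded unfolding of that expectation, converging to 1 from below
  expGeo :: Int -> Double
  expGeo 0 = 0
  expGeo n = 0.5 * 0 + 0.5 * (1 + expGeo (n - 1))

  main :: IO ()
  main = mapM_ (print . expGeo) [1, 5, 20, 60]   -- 0.5, 0.96875, ..., approaching 1.0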
We introduce a new kind of expectation transformer for a mixed classical-quantum programming language. Our semantic approach relies on a new notion of a cost structure, which we introduce and which can be seen as a specialisation of the Kegelspitzen of Keimel and Plotkin. We show that our weakest precondition analysis is both sound and adequate wit...
We define a continuation-passing style (CPS) translation for a typed λ-calculus with probabilistic choice, unbounded recursion, and a tick operator — for modeling cost. The target language is a (non-probabilistic) λ-calculus, enriched with a type of extended positive reals and a fixpoint operator. We then show that applying the CPS transform of an...
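The flavour of computing expected cost in continuation-passing style can be sketched as follows (the encoding, the names and the fuel-bounded loop are my own simplifications, not the paper's typed translation):

  type R = Double            -- standing in for the extended positive reals
  type M a = (a -> R) -> R   -- CPS: a program maps a continuation to an expected value

  ret :: a -> M a
  ret x k = k x

  bind :: M a -> (a -> M b) -> M b
  bind m f k = m (\x -> f x k)

  tick :: M ()               -- one unit of cost
  tick k = 1 + k ()

  flipC :: M Bool            -- fair probabilistic choice
  flipC k = 0.5 * k True + 0.5 * k False

  -- fuel-bounded geometric loop: pay one tick, stop on heads, otherwise retry
  geo :: Int -> M ()
  geo 0 = ret ()
  geo n = bind tick (\_ -> bind flipC (\b -> if b then ret () else geo (n - 1)))

  main :: IO ()
  main = print (geo 50 (const 0))   -- close to the exact expected cost 2.0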
We present a novel methodology for the automated resource analysis of non-deterministic, probabilistic imperative programs, which gives rise to a modular approach. Program fragments are analysed in full independence. Moreover, the established results allow us to incorporate sampling from dynamic distributions, making our analysis applicable to a...
We study the termination problem for probabilistic term rewrite systems. We prove that the interpretation method is sound and complete for a strengthening of positive almost sure termination, when abstract reduction systems and term rewrite systems are considered. Two instances of the interpretation method—polynomial and matrix interpretations—are...
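A minimal worked instance of the interpretation method on a toy system (my own example, not one taken from the paper): interpret 0 as 0, s(x) as x + 1 and f(x) as 2x over the naturals. For the probabilistic rule

  f(s(x)) -> {1/2 : f(x), 1/2 : f(s(x))}

the expected interpretation of the right-hand side is 1/2 * 2x + 1/2 * (2x + 2) = 2x + 1, strictly below the left-hand side's 2x + 2. Every step thus decreases the interpretation by at least 1 in expectation, which bounds the expected derivation length from f(s^n(0)) by its interpretation 2n.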
We are concerned with the average case runtime complexity analysis of a prototypical imperative language endowed with primitives for sampling and probabilistic choice. Taking inspiration from known approaches to the modular resource analysis of non-probabilistic programs, we investigate how a modular runtime analysis is obtained for probabilis...
This paper introduces a new methodology for the complexity analysis of higher-order functional programs, which is based on three ingredients: a powerful type system for size analysis and a sound type inference procedure for it, a ticking monadic transformation and constraint solving. Noticeably, the presented methodology can be fully automated, and...
This paper introduces a new methodology for the complexity analysis of higher-order functional programs, which is based on three components: a powerful type system for size analysis and a sound type inference procedure for it, a ticking monadic transformation and a concrete tool for constraint solving. Noticeably, the presented methodology can be f...
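The ticking idea can be conveyed by a minimal hand-rolled sketch (a toy monad of mine, not HoSA's actual implementation): the program is rewritten to thread an explicit step counter, so a size analysis of the transformed program bounds the cost of the original.

  -- a cost-counting monad: a result paired with the number of ticks spent
  newtype Tick a = Tick { runTick :: (a, Int) }

  instance Functor Tick where
    fmap f (Tick (a, n)) = Tick (f a, n)
  instance Applicative Tick where
    pure a = Tick (a, 0)
    Tick (f, m) <*> Tick (a, n) = Tick (f a, m + n)
  instance Monad Tick where
    Tick (a, m) >>= k = let Tick (b, n) = k a in Tick (b, m + n)

  tick :: Tick ()                    -- account for one evaluation step
  tick = Tick ((), 1)

  -- ticked insertion into a sorted list: one tick per comparison
  insertT :: Int -> [Int] -> Tick [Int]
  insertT x []       = return [x]
  insertT x (y : ys)
    | x <= y    = tick >> return (x : y : ys)
    | otherwise = tick >> ((y :) <$> insertT x ys)

  main :: IO ()
  main = print (runTick (insertT 7 [1, 3, 5, 9]))   -- ([1,3,5,7,9],4)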
In this extended abstract we present the GUBS Upper Bound Solver. GUBS is a dedicated constraint solver over the naturals for inequalities formed over uninterpreted function symbols and standard arithmetic operations. GUBS now forms the backbone of HoSA, a tool for analysing space and time complexity of higher-order functional programs automaticall...
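For intuition, a hypothetical constraint set of the kind such a solver handles (the syntax is illustrative, not GUBS's concrete input format):

  f(x) >= x + 1
  f(f(x)) >= g(x) + 1
  g(x) >= x

One satisfying model over the naturals is f(x) = x + 1 and g(x) = x + 1: the first and third inequalities hold immediately, and f(f(x)) = x + 2 >= g(x) + 1 = x + 2.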
In this paper we present TcT v3.0, the latest version of our fully automated complexity analyser. TcT implements our framework for automated complexity analysis and focuses on extensibility and automation. TcT is open with respect to the input problem under investigation and the resource metric in question. It is the most powerful tool in the realm of automate...
We show how the complexity of higher-order functional programs can be analysed automatically by applying program transformations to defunctionalized versions of them, and feeding the result to existing tools for the complexity analysis of first-order term rewrite systems. This is done while carefully analysing complexity preservation and reflecti...
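The first transformation step can be pictured on a textbook example (a sketch under my own naming, not the paper's actual pipeline): the higher-order argument of map is replaced by a first-order data type plus an explicit apply function, yielding a program that first-order rewriting tools can analyse.

  data Fun = Dbl | Square            -- one constructor per function that is passed around

  apply :: Fun -> Int -> Int         -- explicit, first-order application
  apply Dbl    n = 2 * n
  apply Square n = n * n

  mapFO :: Fun -> [Int] -> [Int]     -- first-order counterpart of map
  mapFO _ []       = []
  mapFO f (x : xs) = apply f x : mapFO f xs

  main :: IO ()
  main = print (mapFO Square [1, 2, 3])   -- [1,4,9]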
We study how the adoption of an evaluation mechanism with sharing and memoization impacts the class of functions which can be computed in polynomial time. We first show how a natural cost model in which lookup for an already computed value has no cost is indeed invariant. As a corollary, we then prove that the most general notion of ramified recurr...
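The cost model can be illustrated with a small memoised function in which only first-time evaluations are charged and cache lookups are free (a sketch of mine, not the machinery used in the paper):

  import qualified Data.Map as M

  -- memoised Fibonacci that also counts the charged (non-lookup) evaluations
  fibM :: Int -> M.Map Int Integer -> (Integer, M.Map Int Integer, Int)
  fibM n memo
    | Just v <- M.lookup n memo = (v, memo, 0)                       -- cache hit: free
    | n < 2                     = (toInteger n, M.insert n (toInteger n) memo, 1)
    | otherwise =
        let (a, m1, c1) = fibM (n - 1) memo
            (b, m2, c2) = fibM (n - 2) m1
            v           = a + b
        in (v, M.insert n v m2, 1 + c1 + c2)

  main :: IO ()
  main = let (v, _, cost) = fibM 30 M.empty
         in print (v, cost)   -- (832040,31): linearly many charged steps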
Adopting earlier term rewriting characterisations of polytime and exponential-time computable functions, we introduce a new reduction order, the Path Order for ETIME (POE* for short), which is sound and complete for the ETIME computable functions. The proposed reduction order for ETIME makes the contrast with related complexity classes clear.
This paper is concerned with the complexity analysis of constructor term rewrite systems and its ramification in implicit computational complexity. We introduce a path order with multiset status, the polynomial path order POP*, that is applicable in two related, but distinct contexts. On the one hand POP* induces polynomial innermost runtime comple...
We present a Haskell library for first-order term rewriting covering basic operations on positions, terms, contexts, substitutions and rewrite rules. This effort is motivated by the increasing number of term rewriting tools that are written in Haskell.
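As a rough picture of the data such a library manipulates (the types and function names below are illustrative and not the library's actual API):

  import qualified Data.Map as M

  -- first-order terms over function symbols f and variables v
  data Term f v = Var v | Fun f [Term f v] deriving (Show, Eq)

  type Subst f v = M.Map v (Term f v)

  -- homomorphic application of a substitution to a term
  applySubst :: Ord v => Subst f v -> Term f v -> Term f v
  applySubst s t@(Var x)  = M.findWithDefault t x s
  applySubst s (Fun f ts) = Fun f (map (applySubst s) ts)

  main :: IO ()
  main = do
    -- apply {x -> s(0)} to plus(x, y)
    let sigma = M.fromList [("x", Fun "s" [Fun "0" []])]
        term  = Fun "plus" [Var "x", Var "y"]
    print (applySubst sigma term)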
In this paper we present a combination framework for polynomial complexity analysis of term rewrite systems. The framework covers both derivational and runtime complexity analysis. We present generalisations of powerful complexity techniques, notably a generalisation of complexity pairs and (weak) dependency pairs. Finally, we also present a novel...
The Tyrolean Complexity Tool, TCT for short, is an open source complexity analyser for term rewrite systems. Our tool TCT features a majority of the known techniques for the automated characterisation of polynomial complexity of rewrite systems and can investigate derivational and runtime complexity, for full and innermost rewriting. This system de...
This paper is concerned with the automated complexity analysis of term rewrite systems (TRSs for short) and the ramification of these in implicit computational complexity theory (ICC for short). We introduce a novel path order with multiset status, the polynomial path order POP*. Essentially relying on the principle of predicative recursion as prop...
We propose a new order, the small polynomial path order (sPOP* for short). The order sPOP* provides a characterisation of the class of polynomial time computable functions via term rewrite systems. Any polynomial time computable function gives rise to a rewrite system that is compatible with sPOP*. On the other hand, any function defined by a rewrite...
In this paper we present a new path order for rewrite systems, the exponential path order EPO*. If a term rewrite system is compatible with EPO*, then the runtime complexity of this rewrite system is bounded from above by an exponential function. Furthermore, the class of functions computed by a rewrite system compatible with EPO* equa...
Recently, many techniques have been introduced that allow the (automated) classification of the runtime complexity of term rewrite systems (TRSs for short). In earlier work, the authors have shown that for confluent TRSs, innermost polynomial runtime complexity induces polytime computability of the functions defined. In this paper, we generalise th...
In earlier work, we have shown that for confluent TRSs, innermost polynomial runtime complexity induces polytime computability of the functions defined. In this paper, we generalise this result to full rewriting, for which we exploit graph rewriting. We give a new proof of the adequacy of graph rewriting for full rewriting that allows for a precise...
We show how polynomial path orders can be employed efficiently in conjunction with weak innermost dependency pairs to automatically certify polynomial runtime complexity of term rewrite systems and the polytime computability of the functions computed. The established techniques have been implemented and we provide ample experimental data to assess...
The polynomial path order (POP* for short) is a termination method that induces polynomial bounds on the innermost runtime complexity of term rewrite systems (TRSs for short). Semantic labeling is a transformation technique used for proving termination. In this paper, we propose an efficient implementation of POP* together with finite semantic la...
Recent studies have provided many characterisations of the class of polynomial time computable functions through term rewriting techniques. In this paper we describe a (fully automatic and command-line based) system that implements the majority of these techniques and present experimental findings to simplify comparisons.
In this paper we introduce a restrictive version of the multiset path order, called polynomial path order. This recursive path order induces polynomial bounds on the maximal number of innermost rewrite steps. This result opens the way to automatically verify for a given program, written in an eager functional programming language, that the maxi...
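A standard textbook system of the kind such orders are aimed at (not an example from the paper):

  plus(0, y) -> y
  plus(s(x), y) -> s(plus(x, y))

Starting from plus(s^n(0), s^m(0)), every innermost derivation takes exactly n + 1 steps, so the number of innermost rewrite steps is linear, and in particular polynomial, in the size of the input.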
In this note we present the Exponential Path Order EPO. Inspired by a novel term rewriting characterisation of the exponential time functions FEXP, this order is carefully trimmed so that compatibility of TRSs implies exponentially bounded runtime complexity. Moreover, the order is complete in the sense that every exponential time function can be...
In this paper we study the termination behaviour and the runtime complexity of Scheme programs by a translation into rewrite systems. We employ simple termination methods like recursive path orders to show termination of Scheme programs in many cases. All transformations are complexity preserving, i.e., analysing the complexity of the resulting r...
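To illustrate the spirit of such a translation (my own toy example, not the paper's exact encoding), a Scheme function computing the length of a list could be mapped to the rewrite rules

  len(nil) -> 0
  len(cons(x, xs)) -> s(len(xs))

A recursive path order with precedence len > s and len > 0 orients both rules, so the system terminates, and a derivation from len(l) takes one step per list element plus one final step, matching the linear running time of the original Scheme function.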