“Carbon Credits” for Resource-Bounded Computations Using Amortised Analysis

DOI: 10.1007/978-3-642-05089-3_23 In book: FM 2009: Formal Methods, pp.354-369
Source: DBLP


Bounding resource usage is important for a number of areas, notably real-time embedded systems and safety-critical systems.
In this paper, we present a fully automatic static type-based analysis for inferring upper bounds on resource usage for programs
involving general algebraic datatypes and full recursion. Our method can easily be used to bound any countable resource, without
needing to revisit proofs. We apply the analysis to the important metrics of worst-case execution time, stack- and heap-space
usage. Our results from several realistic embedded control applications demonstrate good matches between our inferred bounds
and measured worst-case costs for heap and stack usage. For time usage we infer good bounds for one application. Where we
obtain less tight bounds, this is due to the use of software floating-point libraries.
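The "credits" metaphor behind the amortised approach can be illustrated with a classic credit argument. The sketch below is our own illustration, not the paper's formalism: it instruments a two-list functional queue so that the credits paid at each operation (a tariff of 2 units per enqueue, chosen here by hand) always cover the worst-case work performed, which is the shape of guarantee the type-based analysis infers statically.

```python
# Illustrative sketch of amortised accounting (not the paper's analysis):
# each enqueue pays 2 units, 1 for the cons itself and 1 banked as
# potential to cover that cell's later move during a reversal.

class Queue:
    def __init__(self):
        self.front, self.back = [], []
        self.actual_cost = 0   # real work performed (list-cell operations)
        self.credits = 0       # amortised budget paid by callers

    def enqueue(self, x):
        self.credits += 2      # cost of the cons + one banked credit
        self.actual_cost += 1  # one fresh cell allocated
        self.back.append(x)

    def dequeue(self):
        if not self.front:
            # Reversal: each moved cell is paid for by its banked credit.
            self.actual_cost += len(self.back)
            self.front = list(reversed(self.back))
            self.back = []
        self.credits += 1      # cost of removing the head cell
        self.actual_cost += 1
        return self.front.pop()

q = Queue()
for i in range(10):
    q.enqueue(i)
out = [q.dequeue() for _ in range(10)]
assert out == list(range(10))
assert q.actual_cost <= q.credits  # the credit total bounds measured cost
```

Because every cell moved during a reversal was prepaid at enqueue time, the measured cost never exceeds the credit total, no matter how the operations interleave.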

Available from: Norman Scaife
  • Source
    • "This allows us to recast innermost rewriting into an operational big-step semantics instrumented with resource counters, cf. Figure 1. The semantics closely resembles similar definitions given in the literature on amortised resource analysis (see for example [15] [13] [10]). Let σ be a (normalised) substitution and let f(x_1, . . . "
    ABSTRACT: We introduce a novel resource analysis for typed term rewrite systems based on a potential-based type system. This type system gives rise to polynomial bounds on the innermost runtime complexity. We relate the thus obtained amortised resource analysis to polynomial interpretations and obtain the perhaps surprising result that whenever a rewrite system R can be well-typed, then there exists a polynomial interpretation that orients R. For this we adequately adapt the standard notion of polynomial interpretations to the typed setting.
    Preview · Article · Feb 2014
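The connection to polynomial interpretations described above can be made concrete on a toy rewrite system. The rules and the hand-picked interpretation below are our own illustration (and sampling points is only evidence, not the monotonicity proof the technique requires): addition on Peano numerals, oriented by interpreting each symbol as a polynomial so that every rewrite step strictly decreases the interpreted value.

```python
# Illustrative sketch: a polynomial interpretation orienting the rules
#   add(s(x), y) -> s(add(x, y))     add(0, y) -> y
# Interpretations (chosen by hand for this example):
def I_zero():      return 1
def I_succ(x):     return x + 1
def I_add(x, y):   return 2 * x + y

def oriented():
    """Check strict decrease of both rules on a grid of sample points."""
    for x in range(20):
        for y in range(20):
            if not I_add(I_succ(x), y) > I_succ(I_add(x, y)):
                return False          # rule 1 not strictly decreasing
            if not I_add(I_zero(), y) > y:
                return False          # rule 2 not strictly decreasing
    return True

assert oriented()
```

Here rule 1 decreases because 2(x+1) + y = 2x + y + 2 > 2x + y + 1, uniformly in x and y; the quoted result says such an interpretation exists whenever the system is well-typed in the amortised sense.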
  • Source
    • "For simplicity, but without loss of generality, we choose to use a uniform cost model: evaluation will cost one (heap) unit for each fresh heap location that is needed during evaluation. Other cost models are also possible [28], modelling the usage of other countable resources such as execution time, or stack usage, for example. The WHNF ⇓ rule for weak-head normal forms (λexpressions and constructors) incurs no cost. "
    ABSTRACT: This paper describes the first successful attempt, of which we are aware, to define an automatic, type-based static analysis of resource bounds for lazy functional programs. Our analysis uses the automatic amortisation approach developed by Hofmann and Jost, which was previously restricted to eager evaluation. In this paper, we extend this work to a lazy setting by capturing the costs of unevaluated expressions in type annotations and by amortising the payment of these costs using a notion of lazy potential. We present our analysis as a proof system for predicting heap allocations of a minimal functional language (including higher-order functions and recursive data types) and define a formal cost model based on Launchbury's natural semantics for lazy evaluation. We prove the soundness of our analysis with respect to the cost model. Our approach is illustrated by a number of representative and non-trivial examples that have been analysed using a prototype implementation of our analysis.
    Full-text · Article · Oct 2012 · ACM SIGPLAN Notices
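The uniform cost model quoted above (one heap unit per fresh location, zero cost for values already in weak-head normal form) can be sketched with a toy instrumented heap. The representation below is our own illustration, not the paper's formal semantics based on Launchbury's natural semantics.

```python
# Illustrative sketch of the uniform cost model: allocation of a fresh
# heap location costs one unit; inspecting a value already in WHNF is free.

heap = {}
cost = 0

def alloc(value):
    """Allocate a fresh heap location: costs one heap unit."""
    global cost
    loc = len(heap)
    heap[loc] = value
    cost += 1
    return loc

def whnf(loc):
    """Inspect a value already in weak-head normal form: no cost."""
    return heap[loc]

# Build the list [1, 2, 3] as cons cells: each cell is one fresh location.
nil = alloc(("nil",))
l3  = alloc(("cons", 3, nil))
l2  = alloc(("cons", 2, l3))
l1  = alloc(("cons", 1, l2))

assert cost == 4           # three cons cells plus nil, one unit each
assert whnf(l1)[1] == 1    # inspection leaves the counter unchanged
assert cost == 4
```

In the lazy setting of the paper, the type annotations additionally account for the as-yet-unpaid allocations hidden inside unevaluated thunks.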
  • Source
    • "It is parametric in the resource of interest and can measure every quantity whose usage in a single evaluation step can be bounded by a constant. The actual constants for a step on a specific system architecture can be derived by analyzing the translation of the step in the compiler implementation for that architecture [23]. "
    ABSTRACT: We study the problem of automatically analyzing the worst-case resource usage of procedures with several arguments. Existing automatic analyses based on amortization, or sized types bound the resource usage or result size of such a procedure by a sum of unary functions of the sizes of the arguments. In this paper we generalize this to arbitrary multivariate polynomial functions, thus allowing bounds of the form m·n, which had to be grossly overestimated by m² + n² before. Our framework even encompasses bounds like ∑_{i,j≤n} m_i·m_j, where the m_i are the sizes of the entries of a list of length n. This allows us for the first time to derive useful resource bounds for operations on matrices that are represented as lists of lists and to considerably improve bounds on other super-linear operations on lists such as longest common subsequence and removal of duplicates from lists of lists. Furthermore, resource bounds are now closed under composition, which improves accuracy of the analysis of composed programs when some or all of the components exhibit super-linear resource or size behavior. The analysis is based on a novel multivariate amortized resource analysis. We present it in form of a type system for a simple first-order functional language with lists and trees, prove soundness, and describe automatic type inference based on linear programming. We have experimentally validated the automatic analysis on a wide range of examples from functional programming with lists and trees. The obtained bounds were compared with actual resource consumption. All bounds were asymptotically tight, and the constants were close or even identical to the optimal ones.
    Preview · Conference Paper · Jan 2011
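The kind of multivariate bound discussed above can be checked empirically on longest common subsequence, one of the examples the abstract mentions. The step counter and the m·n comparison below are our own illustration; the constants are not ones inferred by the described analysis.

```python
# Illustrative sketch: counting the table-filling steps of LCS and
# checking them against a multivariate bound of the shape m*n.

def lcs_length(xs, ys):
    m, n = len(xs), len(ys)
    steps = 0
    table = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            steps += 1   # one table cell filled per inner iteration
            if xs[i - 1] == ys[j - 1]:
                table[i][j] = table[i - 1][j - 1] + 1
            else:
                table[i][j] = max(table[i - 1][j], table[i][j - 1])
    return table[m][n], steps

length, steps = lcs_length("amortised", "amortized")
assert length == 8   # "amortied" is a longest common subsequence
assert steps == len("amortised") * len("amortized")   # exactly m*n cells
```

A unary analysis must bound such a cost by something like m² + n², since it cannot express the product of two argument sizes; the multivariate analysis infers the tight m·n shape directly.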