Chapter

“Carbon Credits” for Resource-Bounded Computations Using Amortised Analysis

DOI: 10.1007/978-3-642-05089-3_23
Source: DBLP

ABSTRACT Bounding resource usage is important for a number of areas, notably real-time embedded systems and safety-critical systems.
In this paper, we present a fully automatic static type-based analysis for inferring upper bounds on resource usage for programs
involving general algebraic datatypes and full recursion. Our method can easily be used to bound any countable resource, without
needing to revisit proofs. We apply the analysis to the important metrics of worst-case execution time, stack- and heap-space
usage. Our results from several realistic embedded control applications demonstrate good matches between our inferred bounds
and measured worst-case costs for heap and stack usage. For time usage we infer good bounds for one application. Where we
obtain less tight bounds, this is due to the use of software floating-point libraries.
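The paper's title alludes to the amortised-analysis view in which data carries "credits" (potential) that pay for resource use as a program runs. The following is a minimal sketch of that accounting idea only, under an assumed cost model of one resource unit per allocated cons cell; the names heapCost, potential and coversCost are invented for this sketch, and the check here is by testing on sample inputs rather than by the static, type-based inference the paper describes.

```haskell
-- Minimal sketch of the "credit" idea, under an assumed cost model:
-- one resource unit per cons cell allocated for the result.
-- heapCost, potential and coversCost are illustrative names only.

type Cost = Int   -- abstract resource units (e.g. heap cells)

-- Assumed cost model: cells allocated for the result of f on xs.
heapCost :: [a] -> ([a] -> [a]) -> Cost
heapCost xs f = length (f xs)

-- A linear potential: k credits per input element plus a constant c.
potential :: Int -> Int -> [a] -> Cost
potential k c xs = k * length xs + c

-- A static analysis would *infer* k and c; here we only check that a
-- candidate annotation covers the measured cost on some sample inputs.
coversCost :: Int -> Int -> ([Int] -> [Int]) -> [Int] -> Bool
coversCost k c f xs = potential k c xs >= heapCost xs f

main :: IO ()
main = do
  let rev = reverse :: [Int] -> [Int]
  -- One credit per element suffices for reverse under this cost model ...
  print (all (coversCost 1 0 rev) [[], [1 .. 5], [1 .. 100]])   -- True
  -- ... whereas zero credits per element does not.
  print (coversCost 0 0 rev [1 .. 5])                            -- False
```

Roughly, the pair (k, c) stands in for the numeric annotations such an analysis attaches to types; soundness then means that the initial potential of the input always covers the actual resource usage.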

  • ABSTRACT: We study the problem of automatically analyzing the worst-case resource usage of procedures with several arguments. Existing automatic analyses based on amortization or sized types bound the resource usage or result size of such a procedure by a sum of unary functions of the sizes of the arguments. In this paper we generalize this to arbitrary multivariate polynomial functions, thus allowing bounds of the form m·n, which previously had to be grossly overestimated by m² + n². Our framework even encompasses bounds like ∑_{i,j ≤ n} m_i·m_j, where the m_i are the sizes of the entries of a list of length n. This allows us for the first time to derive useful resource bounds for operations on matrices that are represented as lists of lists, and to considerably improve bounds on other super-linear operations on lists such as longest common subsequence and removal of duplicates from lists of lists. Furthermore, resource bounds are now closed under composition, which improves the accuracy of the analysis of composed programs when some or all of the components exhibit super-linear resource or size behavior. The analysis is based on a novel multivariate amortized resource analysis. We present it in the form of a type system for a simple first-order functional language with lists and trees, prove soundness, and describe automatic type inference based on linear programming. We have experimentally validated the automatic analysis on a wide range of examples from functional programming with lists and trees. The obtained bounds were compared with actual resource consumption. All bounds were asymptotically tight, and the constants were close or even identical to the optimal ones. (The m·n versus m² + n² contrast is illustrated by a small sketch after this list.)
    Proceedings of the 38th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, POPL 2011, Austin, TX, USA, January 26-28, 2011; 01/2011
  • ABSTRACT: This paper describes the first successful attempt, of which we are aware, to define an automatic, type-based static analysis of resource bounds for lazy functional programs. Our analysis uses the automatic amortisation approach developed by Hofmann and Jost, which was previously restricted to eager evaluation. In this paper, we extend this work to a lazy setting by capturing the costs of unevaluated expressions in type annotations and by amortising the payment of these costs using a notion of lazy potential. We present our analysis as a proof system for predicting heap allocations of a minimal functional language (including higher-order functions and recursive data types) and define a formal cost model based on Launchbury's natural semantics for lazy evaluation. We prove the soundness of our analysis with respect to the cost model. Our approach is illustrated by a number of representative and non-trivial examples that have been analysed using a prototype implementation of our analysis. (See the thunk-cost sketch after this list.)
    01/2012
  • ABSTRACT: In the era of information explosion, programs need to be scalable. Scalability analysis has therefore become very important in software verification and validation. However, current approaches to empirical scalability analysis have limitations related to the number of supported models and to performance. In this paper, we propose a runtime approach for estimating a program's resource usage with two aims: evaluating the program's scalability and revealing potential errors. In this approach, the resource usage of a program is first observed when it is executed on inputs of different scales; the observed results are then fitted to a model of the usage as a function of the program's input. Compared to other approaches, ours supports diverse models of resource usage, e.g., linear-log, power-law, and polynomial. We currently focus on computation cost and stack-frame usage as two representative resources, but the approach can be extended to other kinds of resources. Experimental results show that our approach achieves more precise estimates and better performance than other state-of-the-art approaches. (The model-fitting step is sketched, in simplified form, after this list.)
    Proceedings of the Fourth Symposium on Information and Communication Technology; 12/2013
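To make the first item's point about multivariate bounds concrete, the toy example below (an assumption of this write-up, not code from the cited paper) counts the allocations of a simple pairing function. Its cost is exactly m·n for argument lengths m and n, while any bound built from a sum of unary functions of m and n, such as m² + n², must overshoot it.

```haskell
-- Toy illustration of the multivariate-bound point: the allocation count of
-- a pairing function depends on the *product* of its argument sizes.
-- The cost model (one unit per result pair) is an assumption for this sketch.

pairsOf :: [a] -> [b] -> [(a, b)]
pairsOf xs ys = [ (x, y) | x <- xs, y <- ys ]

-- Assumed cost: number of pairs allocated for the result.
cost :: [a] -> [b] -> Int
cost xs ys = length (pairsOf xs ys)

-- The multivariate bound m*n is exact here ...
multivariateBound :: Int -> Int -> Int
multivariateBound m n = m * n

-- ... whereas a sum of unary functions, such as m^2 + n^2, must overshoot
-- (the abstract's example of the earlier, unary-only bounds).
unaryBound :: Int -> Int -> Int
unaryBound m n = m * m + n * n

main :: IO ()
main = do
  let (m, n) = (3, 100)
      c      = cost [1 .. m] [1 .. n]
  print (c, multivariateBound m n, unaryBound m n)   -- (300,300,10009)
```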
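The second item's notion of lazy potential attaches the cost of an unevaluated expression to the thunk that suspends it, so the cost is paid at most once, when (and only if) the thunk is forced. The toy simulation below captures only that intuition and is not the cited paper's formal system; mkThunk, the stored cost, and the explicit counter are assumptions of this sketch.

```haskell
-- Toy model of "lazy potential": a thunk stores the cost of the computation
-- it suspends; forcing it pays that cost once and memoises the result, so
-- repeated demands and never-demanded thunks cost nothing extra.

import Data.IORef

data Thunk a = Thunk { costOf :: Int, forceThunk :: IO a }

-- mkThunk paid c x: suspend the value x with an assumed evaluation cost c,
-- charging the shared counter 'paid' the first time the thunk is forced.
mkThunk :: IORef Int -> Int -> a -> IO (Thunk a)
mkThunk paid c x = do
  cache <- newIORef Nothing
  let run = do
        memo <- readIORef cache
        case memo of
          Just v  -> return v                 -- already evaluated: free
          Nothing -> do
            modifyIORef' paid (+ c)           -- pay the stored cost once
            writeIORef cache (Just x)
            return x
  return (Thunk c run)

main :: IO ()
main = do
  paid <- newIORef 0
  a  <- mkThunk paid 5   (42 :: Int)
  b  <- mkThunk paid 7   (43 :: Int)
  _c <- mkThunk paid 100 (44 :: Int)   -- never demanded below
  _ <- forceThunk a
  _ <- forceThunk a                    -- second force is free
  _ <- forceThunk b
  total <- readIORef paid
  -- Total paid (12) is covered by the potential of the demanded thunks.
  print (total, total <= costOf a + costOf b)   -- (12,True)
```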
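The last item's empirical approach can be approximated in a few lines: run the program on inputs of increasing scale, record a resource measure, and fit a model to the observations. The sketch below fits a power law cost(n) ≈ a·n^b by least squares on log-log data; the cost measure (comparison count of an insertion sort on reverse-sorted input) and all function names are illustrative assumptions, not the cited tool.

```haskell
-- Simplified sketch of empirical scalability analysis: measure a cost at
-- several input scales and fit a power-law model by log-log regression.

-- Example "resource": approximate comparison count of an insertion sort.
insertSortCost :: [Int] -> Int
insertSortCost = snd . foldr ins ([], 0)
  where
    ins x (sorted, c) =
      let (smaller, rest) = span (< x) sorted
      in (smaller ++ x : rest, c + length smaller + 1)

-- Least-squares line y = slope*x + intercept.
fitLine :: [(Double, Double)] -> (Double, Double)
fitLine pts = (slope, intercept)
  where
    n         = fromIntegral (length pts)
    (xs, ys)  = unzip pts
    sx        = sum xs
    sy        = sum ys
    sxy       = sum (zipWith (*) xs ys)
    sxx       = sum (map (^ 2) xs)
    slope     = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n

main :: IO ()
main = do
  let sizes   = [100, 200, 400, 800, 1600] :: [Int]
      worst n = [n, n - 1 .. 1]                     -- reverse-sorted input
      samples = [ (fromIntegral n, fromIntegral (insertSortCost (worst n)))
                | n <- sizes ]
      (b, la) = fitLine [ (log x, log y) | (x, y) <- samples ]
  -- The fitted exponent b should be close to 2 for this quadratic example.
  putStrLn ("fitted model: cost(n) ~ " ++ show (exp la) ++ " * n^" ++ show b)
```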
