Chapter

“Carbon Credits” for Resource-Bounded Computations Using Amortised Analysis

DOI: 10.1007/978-3-642-05089-3_23
Source: DBLP

ABSTRACT Bounding resource usage is important for a number of areas, notably real-time embedded systems and safety-critical systems.
In this paper, we present a fully automatic static type-based analysis for inferring upper bounds on resource usage for programs
involving general algebraic datatypes and full recursion. Our method can easily be used to bound any countable resource, without
needing to revisit proofs. We apply the analysis to the important metrics of worst-case execution time, stack- and heap-space
usage. Our results from several realistic embedded control applications demonstrate good matches between our inferred bounds
and measured worst-case costs for heap and stack usage. For time usage we infer good bounds for one application. Where we
obtain less tight bounds, this is due to the use of software floating-point libraries.
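A rough way to picture the amortised "credit" scheme the title alludes to (a minimal Python sketch under assumptions made here, not the paper's type-based inference; the per-cell annotation and costs are hypothetical):

    def check_credits(xs, credits_per_cell, cost_per_step):
        """Simulate a structural recursion over a list: matching each cell
        releases `credits_per_cell` credits, and each recursive step consumes
        `cost_per_step` units of the resource (heap cells, stack words, ticks).
        The annotation is a sound bound for this input if the balance never
        goes negative."""
        balance = 0
        for _ in xs:
            balance += credits_per_cell   # potential released by the cell
            balance -= cost_per_step      # resource paid for from the credits
            if balance < 0:
                return False
        return True

    # A map-like traversal that allocates one cell per element is covered by
    # an annotation of one credit per input cell; half a credit is not.
    assert check_credits(range(1000), credits_per_cell=1, cost_per_step=1)
    assert not check_credits(range(1000), credits_per_cell=0.5, cost_per_step=1)

The analysis described in the abstract infers such per-constructor annotations automatically, so the total credit assigned to an input by its type is an upper bound on the resource the program consumes.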

  • ABSTRACT: Verified compilers guarantee the preservation of semantic properties and thus enable formal verification of programs at the source level. However, important quantitative properties such as memory and time usage still have to be verified at the machine level, where interactive proofs tend to be more tedious and automation is more challenging. This article describes a framework that enables the formal verification of stack-space bounds of compiled machine code at the C level. It consists of a verified CompCert-based compiler that preserves quantitative properties, a verified quantitative program logic for interactive stack-bound development, and a verified stack analyzer that automatically derives stack bounds during compilation. The framework is based on event traces that record function calls and returns. The source language is CompCert Clight and the target language is x86 assembly. The compiler is implemented in the Coq Proof Assistant and it is proved that crucial properties of event traces are preserved during compilation. A novel quantitative Hoare logic is developed to verify stack-space bounds at the CompCert Clight level. The quantitative logic is implemented in Coq and proved sound with respect to event traces generated by the small-step semantics of CompCert Clight. Stack-space bounds can be proved at the source level without taking into account low-level details that depend on the implementation of the compiler. The compiler fills in these low-level details during compilation and generates a concrete stack-space bound that applies to the produced machine code. The verified stack analyzer is guaranteed to automatically derive bounds for code with non-recursive functions. It generates a derivation in the quantitative logic to ensure soundness as well as interoperability with interactively developed stack bounds. In an experimental evaluation, the developed framework is used to obtain verified stack-space bounds for micro benchmarks as well as real system code. The examples include the verified operating-system kernel CertiKOS, parts of the MiBench embedded benchmark suite, and programs from the CompCert benchmarks. The derived bounds are close to the measured stack-space usage of executions of the compiled programs on a Linux x86 system. (A simplified call-graph stack-bound sketch appears after this list.)
    Programming Language Design and Implementation; 06/2014
  • ABSTRACT: Predicting the resources that are consumed by a program component is crucial for many parallel or distributed systems. In this context, the main resources of interest are execution time, space and communication/synchronisation costs. There has recently been significant progress in resource analysis technology, notably in type-based analyses and abstract interpretation. At the same time, parallel and distributed computing are becoming increasingly important. This paper synthesises progress in both areas to survey the state-of-the-art in resource analysis for parallel and distributed computing. We articulate a general model of resource analysis and describe parallel/distributed resource analysis together with its relationship to sequential analysis. We use three parallel or distributed resource analyses as examples and provide a critical evaluation of each. We investigate why each analysis is effective for its application and identify general principles governing when resource analysis is effective.
    Concurrency and Computation: Practice and Experience 03/2013; 25(3). DOI: 10.1002/cpe.1898
  • ABSTRACT: In the era of information explosion, programs need to be scalable, so scalability analysis has become important in software verification and validation. However, current approaches to empirical scalability analysis are limited in the range of models they support and in their performance. In this paper, we propose a runtime approach to estimating a program's resource usage with two aims: evaluating the program's scalability and revealing potential errors. In this approach, the resource usage of a program is first observed as it is executed on inputs of different scales; the observations are then fitted to a model of usage as a function of the program's input. Compared to other approaches, ours supports diverse models of resource usage, e.g., linear-log, power-law, and polynomial. We currently focus on computation cost and stack-frame usage as two representative resources, but the approach can be extended to other kinds of resources. Experimental results show that our approach achieves more precise estimates and better performance than other state-of-the-art approaches. (A minimal model-fitting sketch appears after this list.)
    Proceedings of the Fourth Symposium on Information and Communication Technology; 12/2013
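For the verified stack analyzer described in the first item above, the core calculation for non-recursive code can be pictured as a longest-path computation over the call graph (a simplified Python sketch with hypothetical frame sizes and call edges, not the Coq-verified CompCert analyzer):

    from functools import lru_cache

    # Hypothetical per-function frame sizes (bytes) and an acyclic call graph.
    frame_size = {"main": 64, "read_sensor": 32, "filter": 96, "log": 48}
    calls = {"main": ["read_sensor", "filter"], "filter": ["log"],
             "read_sensor": [], "log": []}

    @lru_cache(maxsize=None)
    def stack_bound(f):
        """Worst-case stack usage of f: its own frame plus the largest bound
        among its callees (well-defined because there is no recursion)."""
        return frame_size[f] + max((stack_bound(g) for g in calls[f]), default=0)

    print(stack_bound("main"))  # 64 + max(32, 96 + 48) = 208 bytes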
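For the runtime scalability estimation described in the last item, one of the model fits can be sketched as an ordinary least-squares power-law fit in log-log space (a minimal Python sketch; the measurements below are made up for illustration):

    import math

    # Hypothetical measurements: (input size, observed resource usage),
    # e.g. instruction counts or peak stack frames recorded at runtime.
    samples = [(100, 2.1e3), (1_000, 2.0e5), (10_000, 1.9e7), (100_000, 2.1e9)]

    # Fit usage ~= a * n**b by least squares on log(usage) vs log(n);
    # power-law is just one of the model families such a tool would try.
    xs = [math.log(n) for n, _ in samples]
    ys = [math.log(u) for _, u in samples]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    a = math.exp(my - b * mx)

    print(f"usage ~= {a:.2g} * n^{b:.2f}")  # roughly quadratic for these samples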