Fig 2 - uploaded by Manuel Fähndrich


Source publication

We motivate, define and design a simple static analysis to check that comparisons of floating point values use compatible bit widths and thus compatible precision ranges. Precision mismatches arise due to the difference in bit widths of processor internal floating point registers (typically 80 or 64 bits) and their corresponding widths when stored...

## Contexts in source publication

**Context 1**

... a postcondition violation in the method Deposit (cf. Fig. 2). What is wrong here? Let's first rule out causes that are not the ...

**Context 2**

... floating point values in programs can introduce subtle errors due to precision mismatches of the compared values. Precision mismatches arise as a result of truncating the typically larger register-internal floating point widths (80 or 64 bits) to the floating point width used when storing the value into main memory (64 or 32 bits). Such mismatches may produce unexpected program behavior, resulting in programmer confusion and, if ignored, unsound static program analysis. We introduce the problem with the code snippet in Fig. 1, extracted from the "classical" bank account example annotated with contracts [8]. In this paper, we use C# as our language and the .NET runtime. However, the general problem addressed in this paper is present in numerous programming languages and runtimes. We address these other contexts in Sect. 5.

The class Account represents a bank account. The method Deposit updates the balance by a given non-negative amount. The postcondition for Deposit states that on method exit the balance has been correctly updated. The current balance is stored in an instance field of type float. The ECMA standard requires .NET implementations of floating point types to follow the IEC 60559:1989 standard. At first glance, one expects the postcondition to hold and any static analyzer to easily prove it. In fact, a simple reasoning by symbolic propagation (writing balance₀ for the value of the field at method entry) could be: after the assignment, this.balance = balance₀ + amount, so the postcondition this.balance == balance₀ + amount trivially holds. Unfortunately, a static analyzer for .NET performing this reasoning would be unsound! For instance, two lines of C# code cause a postcondition violation in the method Deposit (cf. Fig. 2). What is wrong here? Let's first rule out causes that are not the problem:

- Overflow can be excluded, as floating point numbers cannot overflow (at worst, operations result in the special values ±∞ or NaN).
- Non-determinism is ruled out by the IEEE 754 standard, and by the fact that the code in the example is single-threaded.
- Cancellation is ruled out too: e.g., the numerical quantities are positive and of the same order of magnitude.
- Floating point addition is commutative, so this is not the cause of the problem either.
- Addition is not necessarily associative, but we do not need associativity here (we are adding only two numbers).

The real culprit here is the equality test. In general, all comparisons of floating point values are problematic. However, it is still unclear at first sight why the comparison is a source of problems here: after all, we are adding up the same two quantities and then comparing them for equality. If some rounding error occurs, then the same error should occur in both additions, or should it not? The reason for the unexpected behavior is found deeper in the specification of the Common Language Runtime (Partition I, Sect. 12.1.3 of [1]): the standard allows exploiting the maximum precision available from the floating point hardware for operations on values in registers, regardless of their nominal type, provided that on memory stores the internal value is truncated to the nominal size.

It is now easy to see why we get the postcondition violation at runtime. The result of evaluating the expression this.balance + amount is internally stored at the maximum precision available from the hardware (80-bit registers on Intel processors, 64 bits on ARM architectures). In the example, the result of the addition is 9.42477822, a value that cannot be represented precisely in 32 bits. The subsequent field store forces the value to be truncated to 32 bits, thereby changing the value: 9.42477822 is coerced to a float, and the resulting loss of precision stores the value 9.424778 in the field this.balance. When the postcondition is evaluated, the truncated value of balance is reloaded from memory, but the addition in the postcondition is recomputed with the internal precision.
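The truncation effect can be reproduced outside the CLR. The sketch below simulates the 32-bit field store in Python (whose native floats are 64-bit) by round-tripping values through a 4-byte encoding; the constants are chosen so that the wide sum matches the 9.42477822 value discussed above. This is a minimal illustration of the mechanism, not the paper's code:

```python
import struct

def store_as_float32(x):
    """Narrow a 64-bit float to 32 bits, as a store to a float field does."""
    return struct.unpack('<f', struct.pack('<f', x))[0]

balance = store_as_float32(6.28318548)   # 2*pi rounded to 32 bits
amount  = store_as_float32(3.14159274)   # pi rounded to 32 bits

wide   = balance + amount       # computed at the wider (64-bit) precision
stored = store_as_float32(wide) # truncated by the store into the float field

print(f"{wide:.8f}")    # 9.42477822 (register-internal result)
print(f"{stored:.8f}")  # 9.42477798 (after the 32-bit store)
print(stored == wide)   # False: the reloaded field differs from the wide sum
```

Comparing the reloaded, truncated value against the recomputed wide sum is exactly the postcondition check that fails at runtime.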
Comparing these two values causes the postcondition to fail, since 9.424778 ≠ 9.42477822.

We present a simple static analysis to check that floating point comparisons (equalities, inequalities) use operands of compatible types. When they are not compatible, the analysis reports a warning message to the user, so that all successive validations should be understood as conditional. We fully implemented the analysis in Clousot, our static contract checker based on abstract interpretation for .NET [2]. We validated the analysis by running it on the base class library of .NET, where it emitted 5 real warnings.

We illustrate our analysis on a minimalistic bytecode language. We make some simplifying assumptions. There are two kinds of variables: store variables (f, a ∈ S) and locals (l, p ∈ L). Store variables are instance fields, static fields and arrays. Local variables are locals, parameters, and the return value. Variables belong to the set Vars = S ∪ L. Aliasing is not allowed. The language has only two nominal floating point types (float32, float64 ∈ T_N) and one internal floating point type (floatX) such that 64 ≤ X. On x86, floatX is float80, allowing extended floating point precision. Please note that the .NET standard does not include a long double as, for instance, C does [3], so application programmers have no access to floatX types. All variables have a nominal floating point type. At runtime, the nominal floating point type of locals may be widened, but not that of store variables. We say "may be widened", as it depends on register allocation choices by the compiler. We think it is reasonable to force the code to compute values independently of floating point register allocation choices.

The simplified bytecode language is presented in Fig. 3. Floating point constants are loaded into locals (load const). Admissible constant values are only those admitted by the nominal types, i.e. floating point constants in 32 or 64 bits, including the special values ±∞ and NaN (Not-a-Number). Values can be copied to locals, retaining their internal value (copy). Casting is allowed only to nominal types, with values narrowed or widened as needed (cast). In general, it is not true that if l1 and l2 have the same nominal type then (cast) is semantically equivalent to (copy), as their internal types may differ. Binary operations are the usual floating point arithmetic operations (+, −, ∗, /) and (unordered) comparison operations (==, <, ≤, ...) (binary op). The result of a comparison is 0.0 if the comparison is false, 1.0 otherwise.

Values are loaded from and stored to fields ([load/store] field). We do not distinguish between static and instance fields. Fields only contain values of nominal types: therefore, when storing a local into a field, its value is automatically narrowed to the field's nominal type. If the value of l is too large or too small, it is approximated to ±∞ or to 0. Similarly, values read from arrays have a nominal type, and values written into arrays are narrowed to the nominal type of the array. Arrays are indexed by local values, and in addition to the usual out-of-bounds checking, we assume that the computation also stops when l_i is a floating point number with a non-zero decimal part or is NaN.

Example 2.1 The compilation to simplified bytecode of the body of method Deposit (without contracts) of Fig. 1 is in Fig. 4. Please note that the store and load field operations are now made explicit in the bytecode.

The abstract domain T we use captures the potential runtime floating point width a variable may have, which may be more precise than its nominal type. Therefore, the elements of T belong to the set Vars → T_X, where 64 ≤ X and T_X is the abstract domain of internal widths. If X = 64, i.e. the hardware does not provide any wider floating point register, then float64 and floatX coincide.
This is the case on ARM architectures, but not on x86 architectures, which provide extra-precision registers. The operations of the abstract domain T (order, join, meet) are the pointwise functional extensions of those on the lattice above. No widening is required, as the lattice is of finite height. The abstract semantics ⟦·⟧ ∈ P × T → T statically determines, at each program point, an internal type for each local variable. Store variables are known, by the ECMA standard, to have their nominal type coincide with their internal type.

The abstract transfer function is defined in Fig. 5. The only constant values that can be explicitly represented are those admissible as float32 or float64 values: the internal type of a local after a load constant is its nominal type. Variable copy retains the internal type. The ECMA standard guarantees that casting a value v to a type truncates v to a value in that type's range. If v is too large or too small for the type, it is rounded to ±∞ or 0. The result of a binary operation is a value of maximum hardware precision, which we denote by floatX. Reading from a field or an array location provides a value of the nominal type (no extra precision can be stored in fields). Writing into a field or an array location causes the truncation of the value to the corresponding nominal ...
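The transfer rules just described can be sketched in a few lines. Below is a hedged Python illustration of the width lattice and per-instruction transfer function; the instruction encoding, variable names, and warning format are invented for this sketch and are not the paper's implementation:

```python
# Lattice of internal widths: float32 ⊑ float64 ⊑ floatX (finite height)
RANK = {"float32": 0, "float64": 1, "floatX": 2}

def join(t1, t2):
    """Least upper bound on the width lattice."""
    return t1 if RANK[t1] >= RANK[t2] else t2

def transfer(instr, env, warnings):
    """One step of the abstract semantics: env maps locals to internal widths.
    Store variables always keep their nominal width, per the ECMA standard."""
    op, *args = instr
    if op == "load_const":        # l := c : internal type is the nominal type
        l, nominal = args
        env[l] = nominal
    elif op == "copy":            # l1 := l2 : retains the internal type
        l1, l2 = args
        env[l1] = env[l2]
    elif op == "cast":            # l1 := (t) l2 : truncated to nominal type t
        l1, t, _l2 = args
        env[l1] = t
    elif op == "binop":           # l := l1 op l2 : maximum hardware precision
        l, binop, l1, l2 = args
        if binop in ("==", "<", "<=") and env[l1] != env[l2]:
            warnings.append(f"comparison {l1} {binop} {l2} mixes widths "
                            f"{env[l1]} and {env[l2]}")
        env[l] = "floatX"
    elif op == "load_field":      # l := f : fields hold nominal-width values
        l, nominal = args
        env[l] = nominal
    elif op == "store_field":     # f := l : store truncates; env unchanged
        pass
    return env

# Abstract run of a Deposit-like body followed by its postcondition check.
env, warnings = {}, []
prog = [
    ("load_field", "l_bal", "float32"),
    ("load_const", "l_amt", "float32"),
    ("binop", "l_sum", "+", "l_bal", "l_amt"),   # l_sum gets width floatX
    ("store_field", "balance", "l_sum"),
    ("load_field", "l_bal2", "float32"),         # reloaded field: float32
    ("binop", "l_eq", "==", "l_bal2", "l_sum"),  # float32 vs floatX: warn
]
for instr in prog:
    transfer(instr, env, warnings)
print(warnings)  # one warning about the incompatible comparison
```

On this abstract run the analysis flags exactly the problematic comparison: the reloaded field has width float32 while the recomputed sum has width floatX.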

## Similar publications

The Alphasat satellite was launched on 25 July 2013. The Aldo Paraboni technology demonstration payload, funded by ASI under ESA’s ARTES Programme, was embarked as a hosted payload on Alphasat. This Technology Demonstration Payload (identified as TDP5 and recently renamed "Aldo Paraboni") was implemented under an ESA contract awarded in co-contrac...

## Citations

We introduce Verification Modulo Versions (VMV), a new static analysis technique for reducing the number of alarms reported by static verifiers while providing sound semantic guarantees. First, VMV extracts semantic environment conditions from a base program P. Environmental conditions can either be sufficient conditions (implying the safety of P) or necessary conditions (implied by the safety of P). Then, VMV instruments a new version of the program, P', with the inferred conditions. We prove that we can use (i) sufficient conditions to identify abstract regressions of P' w.r.t. P; and (ii) necessary conditions to prove the relative correctness of P' w.r.t. P. We show that the extraction of environmental conditions can be performed at a hierarchy of abstraction levels (history, state, or call conditions) with each subsequent level requiring a less sophisticated matching of the syntactic changes between P' and P. Call conditions are particularly useful because they only require the syntactic matching of entry points and callee names across program versions. We have implemented VMV in a widely used static analysis and verification tool. We report our experience on two large code bases and demonstrate a substantial reduction in alarms while additionally providing relative correctness guarantees.

We study the problem of suggesting code repairs at design time, based on the warnings issued by modular program verifiers. We introduce the concept of a verified repair, a change to a program's source that removes bad execution traces while increasing the number of good traces, where the bad/good traces form a partition of all the traces of a program. Repairs are property-specific. We demonstrate our framework in the context of warnings produced by the modular cccheck (a.k.a. Clousot) abstract interpreter, and generate repairs for missing contracts, incorrect locals and objects initialization, wrong conditionals, buffer overruns, arithmetic overflow and incorrect floating point comparisons. We report our experience with automatically generating repairs for the .NET framework libraries, generating verified repairs for over 80% of the warnings generated by cccheck.

Safety verification of a plant together with its controller is an important part of controller design. If the controller is implemented in software, then a formal model such as hybrid automata is needed to model the composite system. However, classic hybrid automata scale poorly for complex software controllers due to their eager representation of discrete states. In this paper we present safety verification for software controllers without constructing hybrid automata. Our approach targets a common class of software controllers, where the plant is periodically sampled and actuated by the controller. The resulting systems exhibit a regular alternation of discrete steps and fixed length continuous-time evolution. We show that these systems can be verified by a combination of SMT solving and Taylor models. SMT formulas accurately capture control software in a compact form, and Taylor models accurately capture continuous trajectories up to guaranteed error bounds.

In this tutorial I will report our experience with CodeContracts [5], and in particular with its static checker (cccheck/clousot) [6].
CodeContracts are a language-agnostic solution to the specification problem. Preconditions, postconditions and object invariants are expressed with opportune method calls acting as specification markers [4]. The CodeContracts API is part of the core .NET standard. The CodeContracts tools have been downloaded more than 50 000 times, and they are currently used in many projects by professional programmers.