Conference Paper

Formal Methods for the Analysis of Critical Control Systems Models: Combining Non-Linear and Linear Analyses


Abstract

Critical control systems are often built as a combination of a control core with safety mechanisms that allow recovery from failures, for example a PID controller used with triplicated inputs. Typically, such systems are designed at the model level in a synchronous language like Lustre or Simulink, and their code is automatically generated from those models. At a previous SAE symposium, we addressed the formal analysis of such systems, focusing on the safety parts, using a combination of formal techniques, i.e., k-induction and abstract interpretation. The approach developed here extends the analysis of the system to the control core. We present a new analysis framework combining the analysis of open-loop stable controllers with those safety constructs. We introduce the basic analysis approaches: abstract interpretation synthesizing quadratic invariants and backward analysis based on quantifier elimination. We then apply them to a simple but representative example that no other available state-of-the-art technique is able to analyze. This contribution is another step towards the early use of formal methods for critical embedded software such as that of the aerospace industry.
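
As a rough illustration of the first ingredient, the sketch below synthesizes a quadratic invariant x^T P x <= c for a small open-loop stable linear system by solving a discrete Lyapunov equation. The matrices, the input bound, and the sampling-based level search are illustrative stand-ins for the paper's symbolic analysis, not its implementation.

```python
# Illustrative matrices for a hypothetical open-loop stable controller
# x' = A x + B u with bounded input |u| <= u_max. Not the paper's system.
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

A = np.array([[0.9, 0.1],
              [-0.2, 0.8]])          # spectral radius < 1
B = np.array([[1.0], [0.5]])
u_max = 1.0

# Quadratic shape from the Lyapunov equation A^T P A - P = -I.
P = solve_discrete_lyapunov(A.T, np.eye(2))

def step_ok(c, rng=np.random.default_rng(0), samples=5000):
    """Sampling check (not a sound proof!) that the ellipsoid
    {x | x^T P x <= c} maps into itself under worst-case inputs."""
    L = np.linalg.cholesky(np.linalg.inv(P))   # maps unit sphere to level set
    for _ in range(samples):
        d = rng.normal(size=2)
        d /= np.linalg.norm(d)
        x = np.sqrt(c) * (L @ d)               # boundary point: x^T P x = c
        for u in (-u_max, u_max):
            x2 = A @ x + (B * u).ravel()
            if x2 @ P @ x2 > c:
                return False
    return True

c = 1.0
while not step_ok(c):    # grow the level until the ellipsoid looks inductive
    c *= 1.5
print("candidate invariant: x^T P x <=", round(c, 3))
```
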


... • verification of models against their formal specification [16,17,23,55,12,6] makes it possible to prove that the specification is respected; ...
... • verification of models against high-level specifications [16,17,23,55,12,6] proves mathematically that the design of the system respects its specification; ...
... [garbled excerpt of the HullQe pseudocode listing] ... if merging the ECH of sources {s1, s2, s3} with that of sources {s4, s5} fails, then it is not necessary to try merging, say, the ECH of sources {s3, s4} with that of sources {s1, s2, s5}. ...
Thesis
Full-text available
This work deals with the verification of software components of avionics critical embedded systems. Failure of such systems has catastrophic consequences; it is thus essential to make sure they are consistent with their specification. Formal verification consists in proving this consistency when it holds, or producing a counterexample when it does not. Unfortunately, current methods are unable to address the verification challenges stemming from realistic critical systems because of the combinatorial explosion of the state space. This calls for the discovery of additional information (invariants) on the system to reduce the search space and hopefully strengthen the proof objective, i.e., discover enough information for methods to conclude "easily". We define a parallel architecture allowing the cooperation of invariant discovery methods around a k-induction engine. In this context we propose a new potential-invariant generation heuristic based on pre-image calculus by quantifier elimination and convex hulls, called HullQe. We show that HullQe is able to automatically strengthen proof objectives corresponding to safety properties on common avionics design patterns which, to the best of our knowledge, elude the capabilities of current verification methods. We detail our improvements to Monniaux's SMT-based quantifier elimination algorithm so that the pre-image calculus scales up to our systems. Our prototype formal framework Stuff implements this parallel architecture and features an implementation of HullQe, a template-based invariant discovery technique, and a generalization of PDR to arithmetic.
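
A minimal sketch of the k-induction scheme the thesis builds its architecture around, on a hypothetical toy transition system (this is not the Stuff framework; it only shows the base and step checks):

```python
# Minimal k-induction loop with the Z3 Python API (pip install z3-solver).
# Toy transition system: a saturating counter; property: x <= 10.
from z3 import Int, And, Or, Not, Solver, sat, unsat

def init(x):
    return x == 0

def trans(x, x2):
    return Or(And(x < 10, x2 == x + 1), And(x >= 10, x2 == x))

def prop(x):
    return x <= 10

def k_induction(max_k=10):
    for k in range(1, max_k + 1):
        xs = [Int('x_%d' % i) for i in range(k + 1)]
        path = And([trans(xs[i], xs[i + 1]) for i in range(k)])
        # Base case: no counterexample of length <= k from the initial states.
        base = Solver()
        base.add(init(xs[0]), path, Or([Not(prop(x)) for x in xs]))
        if base.check() == sat:
            return 'falsified at depth %d' % k
        # Inductive step: k consecutive safe states imply a safe successor.
        step = Solver()
        step.add(And([prop(x) for x in xs[:-1]]), path, Not(prop(xs[-1])))
        if step.check() == unsat:
            return 'proved by %d-induction' % k
    return 'unknown'

print(k_induction())   # proved by 1-induction
```
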
... These numerical analyses work on the model level representation of the systems, i.e., without complex pointers and memory issues, but they consider floating-point semantics. A more detailed explanation of these interactions between solvers is presented in [6]. ...
Conference Paper
Full-text available
Robustness analyses play a major role in the synthesis and analysis of controllers. For control systems, robustness is a measure of the maximum tolerable model inaccuracies or perturbations that do not destabilize the system. The robustness of a closed-loop system can be analyzed with multiple approaches: gain and phase margin computation for single-input single-output (SISO) linear systems, mu analysis, IQC computations, etc. However, none of these techniques considers the actual code in its analysis. The approach presented here relies on an invariant computation on the discrete system dynamics. Using semi-definite programming (SDP) solvers, a Lyapunov-based function is synthesized that captures the vector margins of the closed-loop linear system considered. This numerical invariant, expressed over the state variables of the system, is compatible with code analysis and enables its validation on the code artifact. This automatic analysis extends verification techniques focused on controller implementation, addressing the validation of robustness at both model and code level. It has been implemented in a tool that analyzes discrete SISO systems and generates over-approximations of phase and gain margins. The analysis will be integrated into our toolchain for the autocoding and formal analysis of Simulink and Lustre models.
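
A sketch of the Lyapunov-synthesis step via SDP, assuming a made-up closed-loop matrix and using cvxpy as the SDP front end; the margin computation of the paper is not reproduced here.

```python
# Sketch: use an SDP solver (via cvxpy, pip install cvxpy) to find a
# Lyapunov certificate P > 0 with A^T P A - P < 0 for x' = A x.
import numpy as np
import cvxpy as cp

A = np.array([[0.8, 0.3],
              [-0.1, 0.7]])   # hypothetical closed-loop dynamics
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),
               A.T @ P @ A - P << -eps * np.eye(n)]
prob = cp.Problem(cp.Minimize(cp.trace(P)), constraints)
prob.solve()

if prob.status == cp.OPTIMAL:
    # V(x) = x^T P x strictly decreases along trajectories,
    # certifying asymptotic stability of the closed loop.
    print("Lyapunov matrix P =\n", P.value)
```
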
... Formal verification techniques have already been used for the certification of aircraft avionics software [32] and a lot of work is underway in this field [34]. Specific work on the verification of stability and safety properties of flight control software could be of special interest for UAV [10]. ...
Chapter
Full-text available
We introduce an approach for designing foolproof control software for Cyber-Physical Systems (CPS) by using formal descriptions of the design requirements and, at the same time, automating the development and deployment phases. Symbolic Control is first introduced as an approach for automated synthesis of controllers for CPS in which finite abstractions of CPS are constructed and then controllers are algorithmically synthesized for them. We then identify some issues with symbolic control that hinder applying it to real-world CPS. More specifically, the following issues are considered: (1) the computational complexity of symbolic control algorithms that increases exponentially with the size of systems, (2) the lack of support for practical specifications in current state-of-the-art tools, and (3) the absence of formal deployments of the synthesized controllers. Then, we introduce parallel scalable algorithms of symbolic control and show that they lead to significant reductions in the computational complexity allowing for real-time online implementations. An approach that allows symbolic control to handle more complex and practical design requirements given as Linear Temporal Logic (LTL) formulae or as automata on infinite strings is also presented. Finally, we introduce formal representations of the designed symbolic controllers and an automated approach for their deployments using code-generation.
Conference Paper
Synchronous languages have long been the standard formalism for modeling and implementing embedded control software in critical domains like avionics, automotive or railway system development. Those languages are equipped with qualified compilers that generate the final embedded code. An extensively used technique to define the expected behavior is the use of synchronous observers. Those observers are typically used for simulation and testing purposes. However, the information contained in those observers is lost during the compilation process. This makes the verification of expected behavior at code level difficult, since it requires the re-specification of the observer. In this paper, we propose an integrated process in which functional properties expressed at the model level through synchronous observers are compiled as code-level contracts. We also show how these specifications, both at model level and at code level, can be analyzed via SMT-based model checking, static analysis and runtime verification. We have implemented these techniques in a tool chain targeting embedded systems modeled in Simulink.
Article
Full-text available
Embedded system control often relies on linear systems, which admit quadratic invariants. The parts of the code that host linear system implementations need dedicated analysis tools, since interval or linear abstract domains will give imprecise results, if any at all, on these systems. Previous work by Feret proposes a specific abstraction for digital filters that addresses this issue on a specific class of controllers. This paper aims at generalizing the idea. It works directly on the system representation, relying on existing methods from control theory to automatically generate quadratic invariants for linear time-invariant systems whose stability is provable. This class encompasses n-th order digital filters and, in general, controllers embedded in critical systems. While control theorists only focus on the existence of such invariants, this paper proposes a method to effectively compute tight ones. The method has been implemented and applied to some benchmark systems, giving good results. It also considers floating-point issues and validates the soundness of the computed invariants.
Article
Full-text available
This paper addresses the issue of lemma generation in a k-induction-based formal analysis of transition systems, in the linear real/integer arithmetic fragment. A backward analysis, powered by quantifier elimination, is used to output preimages of the negation of the proof objective, viewed as unauthorized states, or gray states. Two heuristics are proposed to take advantage of this source of information. First, a thorough exploration of the possible partitionings of the gray state space discovers new relations between state variables, representing potential invariants. Second, an inexact exploration regroups and over-approximates disjoint areas of the gray state space, also to discover new relations between state variables. k-induction is used to isolate the invariants and check if they strengthen the proof objective. These heuristics can be used on the first preimage of the backward exploration, and each time a new one is output, refining the information on the gray states. In our context of critical avionics embedded systems, we show that our approach is able to outperform other academic or commercial tools on examples of interest in our application field. The method is introduced and motivated through two main examples, one of which was provided by Rockwell Collins, in a collaborative formal verification framework.
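
The backward step can be illustrated with Z3's quantifier-elimination tactic standing in for the paper's QE engine; the transition relation below is a hypothetical toy, not the Rockwell Collins model.

```python
# One backward step of the 'gray state' exploration:
# Pre(Bad) = { x | exists x'. T(x, x') and Bad(x') },
# computed with Z3's 'qe' tactic over linear real arithmetic.
from z3 import Reals, Exists, And, Tactic

x, x2 = Reals('x x2')
T = And(x2 == x + 1, x >= 0)        # hypothetical transition relation
bad = x2 > 10                       # negation of the proof objective

pre = Exists([x2], And(T, bad))     # Pre(Bad) over the current state x
print(Tactic('qe')(pre).as_expr())  # quantifier-free, e.g. And(x >= 0, x > 9)
```
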
Article
Full-text available
We present a generic congruence closure algorithm for deciding ground formulas in the combination of the theory of equality with uninterpreted symbols and an arbitrary built-in solvable theory X. Our algorithm CC(X) is reminiscent of Shostak combination: it maintains a union-find data structure modulo X from which maximal information about implied equalities can be directly used for congruence closure. CC(X) diverges from Shostak's approach by using semantic values for class representatives instead of canonized terms. Using semantic values truly reflects the actual implementation of the decision procedure for X. It also forces a complete redesign of the algorithm, since global canonization, which is at the heart of Shostak combination, is no longer feasible with semantic values. CC(X) has been implemented in OCaml and is at the core of Ergo, a new automated theorem prover dedicated to program verification.
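
A drastically simplified, illustrative cousin of the congruence-closure core (plain union-find over uninterpreted terms, with no built-in theory X):

```python
# Toy congruence closure: terms are plain tuples like ('f', 'a').
class UnionFind:
    def __init__(self):
        self.parent = {}

    def find(self, t):
        self.parent.setdefault(t, t)
        while self.parent[t] != t:
            self.parent[t] = self.parent[self.parent[t]]  # path halving
            t = self.parent[t]
        return t

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def subterms(t, acc):
    acc.add(t)
    if isinstance(t, tuple):
        for s in t[1:]:
            subterms(s, acc)
    return acc

def congruent(uf, s, t):
    return (isinstance(s, tuple) and isinstance(t, tuple)
            and s[0] == t[0] and len(s) == len(t)
            and all(uf.find(a) == uf.find(b) for a, b in zip(s[1:], t[1:])))

def congruence_closure(equations, terms):
    uf, all_terms = UnionFind(), set()
    for t in terms:
        subterms(t, all_terms)
    for a, b in equations:
        uf.union(a, b)
    changed = True
    while changed:   # propagate f(a) = f(b) whenever a = b, until fixpoint
        changed = False
        for s in all_terms:
            for t in all_terms:
                if uf.find(s) != uf.find(t) and congruent(uf, s, t):
                    uf.union(s, t)
                    changed = True
    return uf

# Example: from a = b, conclude f(a) = f(b).
fa, fb = ('f', 'a'), ('f', 'b')
uf = congruence_closure([('a', 'b')], [fa, fb])
print(uf.find(fa) == uf.find(fb))   # True
```
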
Conference Paper
Full-text available
We present the application of real quantifier elimination to formal verification and synthesis of continuous and switched dynamical systems. Through a series of case studies, we show how first-order formulas over the reals arise when formally analyzing models of complex control systems. Existing off-the-shelf quantifier elimination procedures are not successful in eliminating quantifiers from many of our benchmarks. We therefore automatically combine three established software components: virtual-substitution-based quantifier elimination in Reduce/Redlog, cylindrical algebraic decomposition implemented in Qepcad, and the simplifier Slfq implemented on top of Qepcad. We use this combination to successfully analyze various models of systems including adaptive cruise control in automobiles, an adaptive flight control system, and the classical inverted pendulum problem studied in control theory.
Conference Paper
Full-text available
We introduce a new numerical abstract domain able to infer min and max invariants over the program variables, based on max-plus polyhedra. Our abstraction is more precise than octagons, and allows expressing non-convex properties without any disjunctive representation. We have defined sound abstract operators, evaluated their complexity, and implemented them in a static analyzer. It is able to automatically compute precise properties on numerical and memory-manipulating programs such as algorithms on strings and arrays.
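
For intuition, the tropical (max-plus) operations underlying the domain, and a one-constraint example of a non-convex set they capture exactly (toy code, not the analyzer):

```python
# Tropical (max-plus) algebra: 'addition' is max, 'multiplication' is +,
# over R U {-inf}.
NEG_INF = float('-inf')

def t_add(a, b):   # tropical addition
    return max(a, b)

def t_mul(a, b):   # tropical multiplication
    return a + b

print(t_add(2, 5), t_mul(2, 5))   # 5 7

# The set { (x, y) | max(x, y) >= 3 } is the union of x >= 3 and y >= 3:
# non-convex, hence inexpressible by a single classical polyhedron, yet
# captured exactly by one tropical inequality.
def member(x, y):
    return t_add(x, y) >= 3

print(member(5, 0), member(0, 5), member(1, 1))   # True True False
```
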
Conference Paper
Full-text available
We propose a shape analysis that adapts to some of the complex composite data structures found in industrial systems-level programs. Examples of such data structures include "cyclic doubly-linked lists of acyclic singly-linked lists", "singly-linked lists of cyclic doubly-linked lists with back-pointers to head nodes", etc. The analysis introduces the use of generic higher-order inductive predicates describing spatial relationships together with a method of synthesizing new parameterized spatial predicates which can be used in combination with the higher-order predicates. In order to evaluate the proposed approach for realistic programs we have performed experiments on examples drawn from device drivers: the analysis proved safety of the data structure manipulation of several routines belonging to an IEEE 1394 (firewire) driver, and also found several previously unknown memory safety bugs.
Conference Paper
Full-text available
A program denotes computations in some universe of objects. Abstract interpretation of programs consists in using that denotation to describe computations in another universe of abstract objects, so that the results of abstract execution give some information on the actual computations. An intuitive example (which we borrow from Sintzoff [72]) is the rule of signs. The text -1515 * 17 may be understood to denote computations on the abstract universe {(+), (-), (±)} where the semantics of arithmetic operators is defined by the rule of signs. The abstract execution -1515 * 17 → -(+) * (+) → (-) * (+) → (-), proves that -1515 * 17 is a negative number. Abstract interpretation is concerned by a particular underlying structure of the usual universe of computations (the sign, in our example). It gives a summary of some facets of the actual executions of a program. In general this summary is simple to obtain but inaccurate (e.g. -1515 + 17 → -(+) + (+) → (-) + (+) → (±)). Despite its fundamentally incomplete results abstract interpretation allows the programmer or the compiler to answer questions which do not need full knowledge of program executions or which tolerate an imprecise answer, (e.g. partial correctness proofs of programs ignoring the termination problems, type checking, program optimizations which are not carried in the absence of certainty about their feasibility, …).
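
The rule-of-signs example spelled out as a tiny abstract interpreter (using '?' for the (±) element):

```python
# The rule of signs as a tiny abstract interpreter: concrete integers are
# abstracted to '+', '-' or '?' (sign unknown, i.e. the (+-) element above).
def alpha(n):
    return '+' if n > 0 else '-'   # abstraction of non-zero literals

def a_mul(a, b):
    return '?' if '?' in (a, b) else ('+' if a == b else '-')

def a_add(a, b):
    return a if a == b else '?'    # mixed signs: sign unknown

def a_neg(a):
    return {'+': '-', '-': '+', '?': '?'}[a]

# -1515 * 17  abstracts to  (-) * (+) = (-): provably negative.
print(a_mul(a_neg(alpha(1515)), alpha(17)))   # '-'
# -1515 + 17  abstracts to  (-) + (+) = (?): precision is lost.
print(a_add(a_neg(alpha(1515)), alpha(17)))   # '?'
```
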
Conference Paper
Full-text available
We give a simple formulation of Karr's algorithm for computing all affine relationships in affine programs. This simplified algorithm runs in time O(n·k^3) where n is the program size and k is the number of program variables, assuming unit cost for arithmetic operations. This improves upon the original formulation by a factor of k. Moreover, our re-formulation avoids exponential growth of the lengths of intermediately occurring numbers (in binary representation) and uses less complicated elementary operations. We also describe a generalization that determines all polynomial relations up to degree d in time O(n·k^(3d)).
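
The linear-algebra viewpoint can be made concrete: an affine relation holding at a program point is a nullspace vector of the matrix of (affinely extended) states. The snippet below inspects a few hand-made sample states; Karr's algorithm computes the same object symbolically over all executions.

```python
import numpy as np

# Hypothetical states (x, y, z) observed at one program point, generated
# so that the affine relation z = 2*x + y + 3 always holds.
states = np.array([(x, y, 2*x + y + 3) for x, y in
                   [(0, 0), (1, 0), (0, 1), (2, 5), (7, -3)]], dtype=float)
M = np.hstack([np.ones((len(states), 1)), states])   # columns: 1, x, y, z

# Affine relations c0 + cx*x + cy*y + cz*z = 0 are nullspace vectors of M.
_, s, Vt = np.linalg.svd(M)
for sv, v in zip(s, Vt):
    if sv < 1e-9:
        v = v / max(abs(v))              # normalize for readability
        c0, cx, cy, cz = v
        print(f"{c0:+.2f} {cx:+.2f}*x {cy:+.2f}*y {cz:+.2f}*z = 0")
```
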
Conference Paper
Full-text available
We address the problem of model checking hybrid systems which exhibit nontrivial discrete behavior and thus cannot be treated by considering the discrete states one by one, as most currently available verification tools do. Our procedure relies on a deep integration of several techniques and tools. An extension of AND-Inverter-Graphs (AIGs) with first-order constraints serves as a compact representation format for sets of configurations which are composed of continuous regions and discrete states. Boolean reasoning on the AIGs is complemented by first-order reasoning in various forms and on various levels. These include implication checks for simple constraints, test vector generation for fast inequality checks of boolean combinations of constraints, and an exact subsumption check for representations of two configurations. These techniques are integrated within a model checker for universal CTL. Technically, it deals with discrete-time hybrid systems with linear differentials. The paper presents the approach, its prototype implementation, and first experimental data.
Article
Full-text available
Safety-critical embedded software has to satisfy stringent quality requirements. Testing and validation consumes a large and growing fraction of development cost. The last years have seen the emergence of semantics-based static analysis tools in various application areas, from runtime error analysis to worst-case execution time prediction. Their appeal is that they have the potential to reduce testing effort while providing 100% coverage, thus enhancing safety. Static runtime error analysis is applicable to large industry-scale projects and produces a list of definite runtime errors and of potential runtime errors which might be true errors or false alarms. In the past, often only the definite errors were fixed because manually inspecting each alarm was too time-consuming due to a large number of false alarms. Therefore no proof of absence of runtime errors could be given. In this article the parameterizable static analyzer Astrée is presented. By specialization and parametrization Astrée can be adapted to the software under analysis. This enables Astrée to efficiently compute precise results. Astrée has successfully been used to analyze large-scale safety-critical avionics software with zero false alarms.
Article
Full-text available
We propose a method for automatically generating abstract transformers for static analysis by abstract interpretation. The method focuses on linear constraints on programs operating on rational, real or floating-point variables and containing linear assignments and tests. Given the specification of an abstract domain, and a program block, our method automatically outputs an implementation of the corresponding abstract transformer. It is thus a form of program transformation. In addition to loop-free code, the same method also applies for obtaining least fixed points as functions of the precondition, which permits the analysis of loops and recursive functions. The motivation of our work is data-flow synchronous programming languages, used for building control-command embedded systems, but it also applies to imperative and functional programming. Our algorithms are based on quantifier elimination and symbolic manipulation techniques over linear arithmetic formulas. We also give less general results for nonlinear constraints and nonlinear program constructs.
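
For flavor, deriving the tightest interval transformer of one assignment is an optimization problem; below, Z3's Optimize engine is used as a stand-in for the paper's quantifier-elimination approach (the precondition and the assignment are made up).

```python
# Best interval post-condition of 'z = x + 2*y' under interval bounds:
# the output bounds are the optima of two small linear programs.
from z3 import Reals, Optimize, And

x, y = Reals('x y')
pre = And(1 <= x, x <= 3, -2 <= y, y <= 5)   # hypothetical precondition
z = x + 2 * y

def bound(direction):
    opt = Optimize()
    opt.add(pre)
    h = opt.maximize(z) if direction == 'max' else opt.minimize(z)
    opt.check()
    return opt.upper(h) if direction == 'max' else opt.lower(h)

print('z in [', bound('min'), ',', bound('max'), ']')   # [ -3 , 13 ]
```
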
Conference Paper
Full-text available
We propose a quantifier elimination scheme based on nested lazy model enumeration through SMT-solving, and projections. This scheme may be applied to any logic that fulfills certain conditions; we illustrate it for linear real arithmetic. The quantifier elimination problem for linear real arithmetic is doubly exponential in the worst case, and so is our method. We have implemented it and benchmarked it against other methods from the literature.
Conference Paper
Full-text available
Decision procedures for equality in a combination of theories are at the core of a number of verification systems. R.E. Shostak's (J. of the ACM, vol. 31, no. 1, pp. 1-12, 1984) decision procedure for equality in the combination of solvable and canonizable theories has been around for nearly two decades. Variations of this decision procedure have been implemented in a number of specification and verification systems, including STP, EHDM, PVS, STeP and SVC. The algorithm is quite subtle and a correctness argument for it has remained elusive. Shostak's algorithm and all previously published variants of it yield incomplete decision procedures. We describe a variant of Shostak's algorithm, along with proofs of termination, soundness and completeness
Article
Full-text available
The model of abstract interpretation of programs developed by Cousot and Cousot [2nd ISOP, 1976], Cousot and Cousot [POPL 1977] and Cousot [PhD thesis 1978] is applied to the static determination of linear equality or inequality invariant relations among numerical variables of programs.
Conference Paper
Formal analysis tools for system models often require or benefit from the availability of auxiliary system invariants. Abstract interpretation is currently one of the best approaches for discovering useful invariants, in particular numerical ones. However, its application is limited by two orthogonal issues: (i) developing an abstract interpretation is often non-trivial; each transfer function of the system has to be represented at the abstract level, depending on the abstract domain used; (ii) with precise but costly abstract domains, the information computed by the abstract interpreter can be used only once a post-fixpoint has been reached; this may take a long time for large systems or when widening is delayed to improve precision. We propose a new, completely automatic method to build abstract interpreters which, in addition, can provide sound invariants of the system under analysis before reaching the end of the post-fixpoint computation. In effect, such interpreters act as on-the-fly invariant generators and can be used by other tools such as logic-based model checkers. We present some experimental results that provide initial evidence of the practical usefulness of our method.
Article
In this thesis, we define a static analysis by abstract interpretation of memory manipulations. It is based on a new numerical abstract domain, which is able to infer program invariants involving the operators min and max. This domain relies on tropical polyhedra, which are the analogues of convex polyhedra in tropical algebra. Tropical algebra refers to the set ℝ ∪ {−∞} endowed with max as addition and + as multiplication. This abstract domain is provided with sound abstract primitives, which allow automatically computing over-approximations of program semantics by means of tropical polyhedra. Thanks to them, we develop and implement a sound static analysis inferring min- and max-invariants over the program variables, the length of the strings, and the size of the arrays in memory. In order to improve the scalability of the abstract domain, we also study the algorithmics of tropical polyhedra. In particular, a tropical polyhedron can be represented in two different ways, either internally, in terms of extreme points and rays, or externally, in terms of tropically affine inequalities. Passing from the external description of a polyhedron to its internal description, or inversely, is a fundamental computational issue, comparable to the well-known vertex/facet enumeration or convex hull problems in classical algebra. It is also a crucial operation in our numerical abstract domain. For this reason, we develop two original algorithms allowing us to pass from an external description of tropical polyhedra to an internal description, and vice versa. They are based on a tropical analogue of the double description method introduced by Motzkin et al. We show that they outperform the other existing methods, both in theory and in practice. The cornerstone of these algorithms is a new combinatorial characterization of extreme elements in tropical polyhedra defined by means of inequalities: we have proved that the extremality of an element amounts to the existence of a strongly connected component reachable from any node in a directed hypergraph. We also show that the latter property can be checked in almost linear time in the size of the hypergraph. Moreover, in order to have a better understanding of the intrinsic complexity of tropical polyhedra, we study the problem of determining the maximal number of extreme points in a tropical polyhedron. In the classical case, this problem is addressed by McMullen's upper bound theorem. We prove that the maximal number of extreme points in the tropical case is bounded by a similar result. We introduce a class of tropical polyhedra appearing as natural candidates to be maximizing instances. We establish lower and upper bounds on their number of extreme points, and show that the McMullen-type bound is asymptotically tight when the dimension tends to infinity and the number of inequalities defining the polyhedra is fixed. Finally, we experiment with our tropical-polyhedra-based static analyzer on programs manipulating strings and arrays. These experiments show that the analyzer successfully determines precise properties on memory manipulations, and that it scales up to highly disjunctive invariants which could not be computed by the existing methods. The implementation of all the algorithms and abstract domains on tropical polyhedra developed in this work is available in the Tropical Polyhedra Library (TPLib).
Article
We report on work in progress to generalize an algorithm recently introduced in [10] for checking satisfiability of formulas with quantifier alternation. The algorithm uses two auxiliary procedures: a procedure for producing a candidate formula for quantifier elimination and a procedure for eliminating or partially eliminating quantifiers. We also apply the algorithm to Presburger Arithmetic formulas and evaluate it on formulas from a model checker for Duration Calculus [8]. We report on experiments with different variants of the auxiliary procedures. So far, SMT-TEST as proposed in [10] retains an edge, while we found that a simpler approach which just eliminates quantified variables round by round is almost as good. Both approaches offer drastic improvements over applying default quantifier elimination.
Conference Paper
We describe two complementary techniques to aid the automatic verification of safety properties of synchronous systems by model checking. A first technique allows the automatic generation of certain inductive invariants for mode variables. Such invariants are crucial in the verification of safety properties in systems with complex modal behavior. A second technique allows the simultaneous verification of multiple properties incrementally. Specifically, the outcome of a property--valid or invalid--is communicated to the user as soon as it is known. Moreover, each property proven valid is used immediately as an invariant in the model checking procedure to aid the verification of the remaining properties. We have implemented these techniques as new options in the Kind model checker. Experimental evidence shows that these two techniques combine synergistically to increase Kind's precision as well as its speed.
Article
Numerical static program analyses by abstract interpretation, e.g., the problem of inferring bounds for the values of numerical program variables, are faced with the problem that the abstract domains often contain infinite ascending chains. In order to enforce termination within the abstract interpretation framework, a widening/narrowing approach can be applied that trades the guarantee of termination against a potential loss of precision. Alternatively, strategy improvement algorithms have recently been proposed for computing numerical invariants which do not suffer the imprecision incurred by widenings. Previously, strategy improvement algorithms have successfully been applied for solving two-player zero-sum games. In this article we discuss and compare max-strategy and min-strategy improvement algorithms for static program analysis. For that, the algorithms are cast within a common general framework of solving systems of fixpoint constraints x ≥ e, where the right-hand sides e are maxima of finitely many monotone and concave functions. Then we indicate how the general setting can be instantiated for inferring numerical invariants of programs based on non-linear templates.
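
The widening/narrowing trade-off the article contrasts with strategy iteration can be seen on a one-variable interval analysis (illustrative sketch):

```python
# Interval analysis of: x = 0; while x < 1000: x = x + 1
# Plain Kleene iteration would take ~1000 steps; widening jumps to
# [0, +inf] quickly, and one narrowing pass recovers [0, 1000].
INF = float('inf')

def join(a, b):
    return (min(a[0], b[0]), max(a[1], b[1]))

def widen(a, b):   # extrapolate unstable bounds to +/- infinity
    return (a[0] if b[0] >= a[0] else -INF,
            a[1] if b[1] <= a[1] else INF)

def f(itv):
    # abstract effect at the loop head
    entry = (0, 0)
    guarded = (itv[0], min(itv[1], 999))     # filter by the guard x < 1000
    body = (guarded[0] + 1, guarded[1] + 1)  # x = x + 1
    return join(entry, body)

itv = (0, 0)
while True:                         # upward iteration with widening
    nxt = widen(itv, f(itv))
    if nxt == itv:
        break
    itv = nxt
print('after widening:', itv)       # (0, inf): sound but coarse
itv = f(itv)                        # one narrowing (decreasing) step
print('after narrowing:', itv)      # (0, 1000): precision recovered
```
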
Conference Paper
The problem of ensuring that control software properties hold on the actual implementation is rarely tackled. While stability proofs are widely used on models, they are never carried to the code. Using program verification techniques requires expressing these properties at the level of the code, but also having theorem provers that can manipulate the proof elements. We propose to address this challenge in two phases: first, we introduce a way to express stability proofs as C code annotations; second, we propose a PVS linear algebra library that is able to manipulate quadratic invariants, i.e., ellipsoids. Our framework achieves the translation of stability properties expressed on the code to the representation of an associated proof obligation (PO) in PVS. Our library allows us to discharge these POs within PVS.
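
The generic shape of the proof obligation such a translation must discharge, stated as the underlying linear-algebra fact (a textbook statement, not the paper's exact PVS formalization):

```latex
% If the state lies in the ellipsoid E_P = \{ x : x^{\top} P x \le 1 \}
% and the update is x' = A x, the invariant is maintained provided
A^{\top} P A - P \preceq 0
\quad\Longrightarrow\quad
\forall x,\; x^{\top} P x \le 1 \;\Rightarrow\; (Ax)^{\top} P (Ax) \le 1 .
```
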
Conference Paper
Formal methods are being progressively incorporated in the aircraft and spacecraft software design and verification process and are becoming commonplace elements of the aerospace industry. Five aerospace software system experts will present their views on this process and where it is headed. Focusing first on design issues, PETE MANOLIOS (Northeastern University, USA) will discuss design aspects and costs of commercial air transport vehicles, including integrated modular avionics, verification costs, and system integration. He will then discuss how new verification technology is used to algorithmically synthesize an optimal architecture subject to high-level constraints. This work will be illustrated by a case study involving the Boeing 787 Dreamliner. MARC PANTEL (IRIT, France) will then discuss safety requirements as a key aspect of the development of embedded systems in avionics. He will discuss the current regulations linking safety requirements to software design guidelines. He will then discuss novel approaches to model-driven software development, using formal models and verification activities at the various steps of the development cycle. Experiments conducted in relation with European avionics companies will be described. Moving then towards analysis methods, GUILLAUME BRAT (NASA, USA) will discuss sound, complete, precise, and scalable static analysis of flight control systems. He will introduce the IKOS static analysis framework, whose intellectual foundation is abstract interpretation. He will insist on compositional verification, a necessary tool to make formal methods scale up to real avionics systems. He will address the component-based development approach of these systems. ERIC FERON (Georgia Tech, USA) and PIERRE-LOIC GAROCHE will discuss the application of the methods introduced above to control software, a narrow but essential component of any safety-critical software system. They will then describe a possible evolution of the current development process of aircraft control systems towards more formalism (through a combination of formal proof and proof replay). They will discuss the static analysis of the behavior of the controller (stability and other non-linear properties), and the static analysis of the safety architecture of the controller.
Conference Paper
Critical systems are subject to drastic certification constraints (DO-178B for avionics systems, SIL-4 for railway systems, ISO 26262 for the automotive domain), which require system providers to produce strong evidence for the correctness, reliability, or performance of their systems. Today, the early use of formal modeling and verification methods is recognized as favorable by the industry. Formal methods, which started to appear in the 60's, have now reached a maturity level allowing them to be used in an industrial context. The approach of control system modeling as proposed by the MathWorks with MATLAB Simulink, by Esterel Technologies with the SCADE language, or by the academic community with the Lustre language, is extensively used for reactive systems design and often allows the automatic generation of the embedded code. However, despite the existence of a few formal verification tools supporting these languages, few system builders actually rely on formal approaches to demonstrate safety properties of their software products. Among the different formal proof techniques available for such models, the k-induction approach gives nice results but often needs external help in order to conclude non-trivial proofs and scale up. Another method, abstract interpretation, is very efficient at discovering properties over programs or models but is not well developed at model level for verifying user-specified properties. The novelty of our framework is the tight cooperation between the k-induction engine and the abstract interpretation engine on the analysis of safety properties. The cooperation consists in using the abstract interpreter as an oracle to infer and inject numerical invariants into the k-induction analysis, preventing spurious falsifications of the induction scheme. This new collaborative approach seems to be a valuable way to overcome the identified drawbacks of each technique and ease the scalability of formal methods at model level.
Article
This paper attempts to provide an adequate basis for formal definitions of the meanings of programs in appropriately defined programming languages, in such a way that a rigorous standard is established for proofs about computer programs, including proofs of correctness, equivalence, and termination. The basis of our approach is the notion of an interpretation of a program: that is, an association of a proposition with each connection in the flow of control through a program, where the proposition is asserted to hold whenever that connection is taken. To prevent an interpretation from being chosen arbitrarily, a condition is imposed on each command of the program. This condition guarantees that whenever a command is reached by way of a connection whose associated proposition is then true, it will be left (if at all) by a connection whose associated proposition will be true at that time. Then by induction on the number of commands executed, one sees that if a program is entered by a connection whose associated proposition is then true, it will be left (if at all) by a connection whose associated proposition will be true at that time. By this means, we may prove certain properties of programs, particularly properties of the form: 'If the initial values of the program variables satisfy the relation R1, the final values on completion will satisfy the relation R2'.
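
A tiny instance of the inductive-assertion method, with the back-edge verification condition discharged mechanically (a toy program; the use of Z3 here is an editorial illustration, not from the paper):

```python
# Floyd-style inductive assertion for: s = 0; i = 0; while i < n: s += i; i += 1
# Cut-point proposition at the loop head: 2*s == i*(i-1), i.e. s = i(i-1)/2.
from z3 import Ints, Implies, And, prove

s, i, n = Ints('s i n')
inv = 2 * s == i * (i - 1)

# Verification condition on the back edge: if the proposition holds and the
# loop is taken, it holds again after the body (s := s + i; i := i + 1).
prove(Implies(And(inv, i < n),
              2 * (s + i) == (i + 1) * i))   # prints: proved
```
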
Article
As an alternative to methods by which the correctness of given programs can be established a posteriori, this paper proposes to control the process of program generation such as to produce a priori correct programs. An example is treated to show the form that such a control might then take. This example comes from the field of parallel programming; the way in which it is treated is representative of the way in which a whole multiprogramming system has actually been constructed.
Chapter
In 1948, Tarski [18] published a quantifier elimination method for the elementary theory of real closed fields (which he had discovered in 1930). As noted by Tarski, any quantifier elimination method for this theory also provides a decision method, which enables one to decide whether any sentence of the theory is true or false. Since many important and difficult mathematical problems can be expressed in this theory, any computationally feasible quantifier elimination algorithm would be of utmost significance.
Article
 We present a new, “elementary” quantifier elimination method for various special cases of the general quantifier elimination problem for the first-order theory of real numbers. These include the elimination of one existential quantifier ∃x in front of quantifier-free formulas restricted by a non-trivial quadratic equation in x (the case considered also in [7]), and more generally in front of arbitrary quantifier-free formulas involving only polynomials that are quadratic in x. The method generalizes the linear quantifier elimination method by virtual substitution of test terms in [9]. It yields a quantifier elimination method for an arbitrary number of quantifiers in certain formulas involving only linear and quadratic occurrences of the quantified variables. Moreover, for existential formulas ϕ of this kind it yields sample answers to the query represented by ϕ. The method is implemented in REDUCE as part of the REDLOG package (see [4, 5]). Experiments show that the method is applicable to a range of benchmark examples, where it runs in most cases significantly faster than the QEPCAD package of Collins and Hong. An extension of the method to higher degree polynomials using Thom’s lemma is sketched.
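
A worked instance of the quadratic case (a textbook identity within the class of formulas the method handles):

```latex
% Eliminating one existential quantifier from a single monic quadratic
% constraint reduces to the discriminant condition:
\exists x \, \bigl(x^2 + b x + c \le 0\bigr)
\;\Longleftrightarrow\;
b^2 - 4c \ge 0
% since the parabola's minimum, c - b^2/4, is non-positive exactly when
% the discriminant is non-negative; virtual substitution obtains this by
% substituting the symbolic roots x = (-b \pm \sqrt{b^2 - 4c})/2 into the
% constraint, guarded by b^2 - 4c \ge 0.
```
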
Conference Paper
We present an Abstract Interpretation-based framework for automatically analyzing programs containing digital filters. Our framework allows refining existing analyses so that they can handle given classes of digital filters. We only have to design a class of symbolic properties that describe the invariants throughout filter iterations, and to describe how these properties are transformed by filter iterations. Then, the analysis allows both inference and proofs of the properties about the program variables that are tied to any such filter. These properties are gathered into an abstract domain, which propagates them throughout the abstract computations of programs. Our approach is not syntactic, so that loop unrolling, filter reset, boolean control, and trace (or state) partitioning are dealt with for free and any filter of the class (for any setting) is analyzed precisely. Moreover, in case of linear filters, we propose a general approach to build the corresponding class of properties. We first design a rough abstraction, in which at each filter iteration we do not distinguish between the contributions of each input. Then, we design a precise abstraction: using linearity, we split the output between the global contribution of floating-point errors and the contribution of each input.
Conference Paper
In the present paper we compute numerical invariants of programs by abstract interpretation. For that we consider the abstract domain of quadratic zones recently introduced by Adjé et al. [2]. We use a relaxed abstract semantics which is at least as precise as the relaxed abstract semantics of Adjé et al. [2]. For computing our relaxed abstract semantics, we present a practical strategy improvement algorithm for precisely computing least solutions of fixpoint equation systems, whose right-hand sides use order-concave operators and the maximum operator. These fixpoint equation systems strictly generalize the fixpoint equation systems considered by Gawlitza and Seidl [11].
Conference Paper
For several years, Rockwell Collins has been developing and using a verification framework for MATLAB Simulink© and SCADE Suite™ models that can generate input for different proof engines. Recently, we have used this framework to analyze aerospace domain models containing arithmetic computations. In particular, we investigated the properties of a triplex sensor voter, which is a redundancy management unit implemented using linear arithmetic operations as well as conditional expressions (such as saturation). The objective of this analysis was to analyze functional and non-functional properties, but also to parameterize certain parts of the model based on the analysis results of other parts. In this article, we focus on results about the reachable state space of the voter, which prove the bounded-input bounded-output stability of the system, and the absence of arithmetic overflows. We also consider implementations using floating point arithmetic.
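
A sketch of a mid-value voter with saturation, the kind of construct analyzed above (the bound and the voting logic are illustrative simplifications, omitting the real voter's equalization dynamics):

```python
def saturate(v, bound=1.0):          # illustrative saturation bound
    return max(-bound, min(bound, v))

def triplex_vote(a, b, c):
    mid = a + b + c - max(a, b, c) - min(a, b, c)   # median of three
    return saturate(mid)

# A single faulty channel (here the third) cannot drag the output away
# from the two healthy ones -- the intuition behind the voter's
# bounded-input bounded-output behavior.
print(triplex_vote(0.10, 0.12, 99.0))   # 0.12
```
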
Conference Paper
Satisfiability Modulo Theories (SMT) studies methods for checking the (un)satisfiability of first-order formulas with respect to a given logical theory T. Distinguishing features of SMT, as opposed to traditional theorem proving, are that the background theory T need not be finitely or even first-order axiomatizable, and that specialized inference methods are used for each theory of interest. By being theory-specific and restricting their language to certain classes of formulas (such as, typically but not exclusively, quantifier-free formulas), these methods can be implemented into solvers that are more efficient in practice than general-purpose theorem provers.
Article
A method is given for deciding formulas in combinations of unquantified first-order theories. Rather than coupling separate decision procedures for the contributing theories, the method makes use of a single, uniform procedure that minimizes the code needed to accommodate each additional theory. It is applicable to theories whose semantics can be encoded within a certain class of purely equational canonical form theories that is closed under combination. Examples are given from the equational theories of integer and real arithmetic, a subtheory of monadic set theory, the theory of cons, car, and cdr, and others. A discussion of the speed performance of the procedure and a proof of the theorem that underlies its completeness are also given. The procedure has been used extensively as the deductive core of a system for program specification and verification.
Article
A method for combining decision procedures for several theories into a single decision procedure for their combination is described, and a simplifier based on this method is discussed. The simplifier finds a normal form for any expression formed from individual variables, the usual Boolean connectives, the equality predicate =, the conditional function if-then-else, the integers, the arithmetic functions and predicates +, -, and ≤, the Lisp functions and predicates car, cdr, cons, and atom, the functions store and select for storing into and selecting from arrays, and uninterpreted function symbols. If the expression is a theorem it is simplified to the constant true, so the simplifier can be used as a decision procedure for the quantifier-free theory containing these functions and predicates. The simplifier is currently used in the Stanford Pascal Verifier.
Article
Several optimizations of programs can be performed when in certain regions of a program equality relationships hold between a linear combination of the variables of the program and a constant. This paper presents a practical approach to detecting these relationships by considering the problem from the viewpoint of linear algebra. Key to the practicality of this approach is an algorithm for the calculation of the “sum” of linear subspaces.
Article
In this paper an attempt is made to explore the logical foundations of computer programming by use of techniques which were first applied in the study of geometry and have later been extended to other branches of mathematics. This involves the elucidation of sets of axioms and rules of inference which can be used in proofs of the properties of computer programs. Examples are given of such axioms and rules, and a formal proof of a simple theorem is displayed. Finally, it is argued that important advantages, both theoretical and practical, may follow from a pursuance of these topics.
Article
This paper presents the industrial use of a program proof method based on CAVEAT (a C program prover developed by the Commissariat à l'Énergie Atomique) in the verification process of a safety-critical avionics program.
Conference Paper
The article presents a novel numerical abstract domain for static analysis by abstract interpretation. It extends a former numerical abstract domain based on Difference-Bound Matrices and allows us to represent invariants of the form ±x ± y ≤ c, where x and y are program variables and c is a real constant. We focus on giving an efficient representation based on Difference-Bound Matrices with O(n²) memory cost, where n is the number of variables, and graph-based algorithms for all common abstract operators, with O(n³) time cost. This includes a normal-form algorithm to test the equivalence of representations and a widening operator to compute least fixpoint approximations.
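
The machinery can be sketched in a few lines: potential constraints xi − xj ≤ m[i][j] live in a matrix, and the normal form is a Floyd-Warshall shortest-path closure (the O(n³) step mentioned above):

```python
INF = float('inf')

def close(m):
    n = len(m)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                m[i][j] = min(m[i][j], m[i][k] + m[k][j])
    if any(m[i][i] < 0 for i in range(n)):
        return None            # negative cycle: constraints unsatisfiable
    return m

# Variables (x0, x1, x2) with x0 - x1 <= 2 and x1 - x2 <= 3: the closure
# derives the implied constraint x0 - x2 <= 5.
m = [[0, 2, INF],
     [INF, 0, 3],
     [INF, INF, 0]]
print(close(m)[0][2])   # 5
```
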
Article
This article concerns the static analysis of programs that perform destructive updating on heap-allocated storage. It addresses problems that can be looked at---depending on one's point of view---as pointer analysis problems, alias analysis problems, sharing-analysis problems, storage analysis problems (also known as shape analysis problems), or typechecking problems. The information obtained is useful, for instance, for generating efficient sequential or parallel code. Throughout most of the article, we emphasize the application of our work to shape-analysis problems. The goal of shape analysis is to give, for each program point, a conservative, finite characterization of the possible "shapes" that the program's heap-allocated data structures can have at that point. We illustrate our approach by means of a running example.
Article
We present an interprocedural flow-insensitive points-to analysis based on type inference methods with an almost-linear time cost complexity. To our knowledge, this is the asymptotically fastest non-trivial interprocedural points-to analysis algorithm yet described. The algorithm is based on a non-standard type system. The type inferred for any variable represents a set of locations and includes a type which in turn represents a set of locations possibly pointed to by the variable. The type inferred for a function variable represents a set of functions it may point to and includes a type signature for these functions. The results are equivalent to those of a flow-insensitive alias analysis (and control flow analysis) that assumes alias relations are reflexive and transitive.
Article
This paper presents a new numerical abstract domain for static analysis by abstract interpretation. This domain allows us to represent invariants of the form (x − y = c) and (±x = c) , where x and y are variables values and c is an integer or real constant. Abstract elements are represented by Difference-Bound Matrices, widely used by model-checkers, but we had to design new operators to meet the needs of abstract interpretation. The result is a complete lattice of infinite height featuring widening, narrowing and common transfer functions. We focus on giving an efficient O(n 2)re presentation and graph-based O(n 3)algorit hms—where n is the number of variables—and claim that this domain always performs more precisely than the well-known interval domain. To illustrate the precision/cost tradeoff of this domain, we have implemented simple abstract interpreters for toy imperative and parallel languages which allowed us to prove some non-trivial algorithms correct.
Stuff: Stuff is the ultimate formal framework
  • A Champion
  • R Delmas
SMT-AI: SMT abstract interpreter
  • P.-L Garoche
  • P Roux
Discrete-time observers and LQG control. MIT, Dept. of Mechanical Engineering - 2.151 Advanced System Dynamics
  • D Rowell
Formal methods for aerospace applications
  • E Feron
  • G Brat
  • P.-L Garoche
  • P Manolios
  • M Pantel
A polynomial template abstract domain based on Bernstein polynomials
  • P Roux
  • P.-L Garoche