## No full-text available

To read the full-text of this research, you can request a copy directly from the authors.

... This establishes the standard (strong) bisimulation equivalence [4] of the machines under analysis, by generating simple logical expressions characterising the bisimulation relation. The expressions are validated separately using a standard theorem-proving tool; currently we use the PVS proof system [5]. The approach enables abstract high-level verification of designs, and can be applied to systems with infinite state-spaces. ...

... A prototype of the compilation, simulation and verification tools, based on the formal semantic definitions, is implemented in Common Lisp and embedded into a hardware design environment containing text and schematic editors, a design database, and a soft logic analyser for visualising the input and output of simulations, equipped with a sophisticated graphical user interface [11] (footnote 5). The compositional style of the semantic definitions means that a Common Lisp implementation is straightforward, with a close correspondence between the definitions and the resulting code in the Core ELLA-to-IOA compiler. Footnote 5: The environment was developed jointly by the ELLA Project partners, using the LispWorks programmer's environment [12], which is produced by, and is a trademark of, Harlequin Ltd. (Cambridge, UK). Harlequin developed the environment framework and tools, DRA implemented the ELLA-to-Kernel ELLA parsing and compilation tools, and Manchester implemented the Kernel ELLA-to-IOA compiler and the associated IOA analysis tools. ...

... The user must supply the number of steps, N, which is usually related to the latency or testability of the designs. The validity of the generated VCs can be checked using a separate theorem-proving tool; at present the PVS system is used [5]. The task of verification is therefore broken down into two separate operations: generating the set of VCs from the ELLA design descriptions, and then proving their validity. ...

We describe the development of formal verification support tools for the commercial hardware description language ELLA, which are embedded into an industrial-style hardware design system, to be utilised by hardware engineers. A formal semantics for ELLA is given using various semantic representations, including state machines and process algebraic terms, so that different formal analysis methods can be used. In particular, a novel symbolic verification method can be used with the process terms generated from ELLA-text, giving a high level and efficient means of verifying the correctness of ELLA designs.

... In the projects, object-oriented modeling approaches, using object diagrams, data flow diagrams, and state diagrams, are used, as well as the PVS theorem prover [71]. ...

... Formal Methods for Requirements Engineering: Formal methods have become increasingly popular for the requirements engineering process. This section overviews a transition-based approach, Software Cost Reduction (SCR) [37]; a state machine-based approach, Requirements State Machine Language (RSML) [35]; and a reuse-based approach using the Prototype Verification System (PVS) [71]. ...

... to check the verification using the theorem prover PVS from SRI [12, 13, 14, 16]. This paper is organised as follows. ...

... The specification language of PVS is a higher-order typed logic ([13, 14, 16]), with many built-in types including booleans, integers, sequences, lists, etc. For example, upto(i): TYPE = {s: nat | s <= i} is the subtype of the integers less than or equal to i. New types may be added, together with functions, tuples, records, predicate subtypes, and abstract datatypes. ...

We present an algebraic verification of Segall’s propagation of information with feedback algorithm and we report on the verification of the proof using the PVS system. This algorithm serves as a nice benchmark for verification exercises. The verification is based on the methodology presented by J. F. Groote and J. Springintveld [J. Log. Algebr. Program. 49, 31–60 (2001; Zbl 1015.68175)] and demonstrates its suitability to deliver mechanically verifiable correctness proofs of highly nondeterministic distributed algorithms.
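The broadcast/echo pattern at the heart of Segall's propagation-of-information-with-feedback algorithm can be illustrated with a small simulation. This is a sketch over an assumed spanning tree (node names and tree shape are made up), not the process-algebraic model that the paper verifies in PVS:

```python
# A minimal simulation of propagation of information with feedback (PIF)
# on a tree, illustrating the broadcast/echo pattern the algorithm verifies.
# The tree shape and node names are illustrative assumptions.

def pif(tree, root):
    """tree: dict node -> list of children. Returns (informed set, echo order)."""
    informed, echoes = set(), []

    def broadcast(node):
        informed.add(node)            # node receives the information
        for child in tree.get(node, []):
            broadcast(child)          # forward along the spanning tree
        echoes.append(node)           # echo only after all children have echoed

    broadcast(root)
    return informed, echoes

tree = {"r": ["a", "b"], "a": ["c"], "b": []}
informed, echoes = pif(tree, "r")
assert informed == {"r", "a", "b", "c"}   # every node is informed
assert echoes[-1] == "r"                  # the root echoes last: feedback complete
```

The root echoing last is precisely the feedback that signals global termination, which is the kind of highly nondeterministic behaviour the mechanical proof has to account for.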

... Our technique works on linear and non-linear polynomial hybrid systems, that is, the guards on discrete transitions and the continuous flows in all modes can be specified using arbitrary polynomial expressions over the continuous variables. We have a prototype tool in the SAL environment [13] which is built over the theorem prover PVS [19]. The technique promises to scale well to large and complex hybrid systems. ...

... The quantifier elimination decision procedure for the real closed fields is implicitly used to decide the implications over the real numbers. The tool QEPCAD [12], which is built over the symbolic algebra library SACLIB [4], is integrated to the theorem prover PVS [19] for this purpose. We are working on adding more explicit interfaces into SAL to directly construct such abstractions for hybrid systems. ...

We present a technique based on the use of the quantifier elimination decision procedure for real closed fields and simple theorem proving to construct a series of successively finer qualitative abstractions of hybrid automata. The resulting abstractions are always discrete transition systems which can then be used by any traditional analysis tool. The constructed abstractions are conservative and can be used to establish safety properties of the original system. Our technique works on linear and non-linear polynomial hybrid systems, that is, the guards on discrete transitions and the continuous flows in all modes can be specified using arbitrary polynomial expressions over the continuous variables. We have a prototype tool in the SAL environment [13] which is built over the theorem prover PVS [19]. The technique promises to scale well to large and complex hybrid systems.
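The idea of a qualitative abstraction can be sketched as follows: each continuous state is abstracted to the tuple of signs of a chosen set of polynomials, and the abstract system records transitions between sign tuples. The sketch below merely samples one trajectory, so unlike the paper's quantifier-elimination-based construction it is not conservative; the flow and polynomial are illustrative assumptions.

```python
# Toy illustration of a sign-based qualitative abstraction. The paper's
# construction uses quantifier elimination (QEPCAD) to make the abstract
# transition relation conservative; this sketch only observes transitions
# along a sampled trajectory.

def sign(v, eps=1e-3):
    return 0 if abs(v) < eps else (1 if v > 0 else -1)

def abstract(x, polys):
    return tuple(sign(p(x)) for p in polys)

def abstract_transitions(flow, polys, x0, dt=1e-3, steps=10000):
    transitions, x = set(), x0
    state = abstract(x, polys)
    for _ in range(steps):
        x = x + dt * flow(x)               # Euler step of the continuous flow
        nxt = abstract(x, polys)
        if nxt != state:
            transitions.add((state, nxt))  # record a discrete abstract move
            state = nxt
    return transitions

# dx/dt = -x with p(x) = x: the sign can only decay toward 0, never away.
trans = abstract_transitions(lambda x: -x, [lambda x: x], x0=1.0)
assert trans == {((1,), (0,))}
```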

... This can reduce verification costs. To address these challenges, we present a verification toolchain that combines the analytic power of state-of-the-art reverse engineering tools such as radare2 [16] with the rigor of a powerful interactive theorem prover, PVS 7 [17]. We focus on a subset of the ARMv8 A64 ISA as the target architecture. ...

... • Theorem proving [53] consists of formulating the system and the property to be verified as a formal proof problem (a mathematical model) that is then solved using a tool called a proof assistant or theorem prover (e.g. Coq [24], HOL [73] and PVS [155]). This type of verification often requires human interaction to guide the mapping and the resolution. ...

The verification of real-time systems is a complex and delicate task, especially when they are used in safety-critical domains. In this context, formal methods have become one of the advocated techniques in safety-critical software engineering. Yet, they are applied to specific formalisms, such as Petri nets and process algebras, that are based on formal semantics and analyzed by dedicated tools. Indeed, formal verification techniques cannot be applied directly to design languages (such as AADL), since these lack formal semantics. For this reason, a common approach is to transform design models into formal specifications to be verified by analysis tools. In this thesis, we aim at integrating formal methods into an AADL model-based development process. To do this, we provide a mapping of the AADL model into a formal model. We have defined and implemented a model transformation in order to allow the model checking of a set of structural and behavioral properties of real-time systems.

... Some of the more widely used proof assistants are ACL2 [55], Agda [1], Coq [8; 20], HOL [37], HOL Light [38], Isabelle [70], Matita [5], Mizar [6], Nuprl [40], and PVS [74]. This list is not exhaustive. ...

This thesis deals with the formalization of mathematics in the proof assistant Coq with the purpose of verifying numerical methods. We focus in particular on formalizing concepts involved in solving systems of equations, both linear and non-linear.
We analyzed Newton's method, which is a numerical method widely used for approximating solutions of equations or systems of equations. The goal was to formalize Kantorovitch's theorem, which gives the convergence of Newton's method to a solution, the speed of the convergence and the local stability of the method. The formal study of this theorem also demanded a formalization of concepts of multivariate analysis. Based on these classic results on Newton's method, we showed that rounding at each step in Newton's method still yields a convergent process, with an accurate correlation between the precision of the input and that of the result. In a joint work with Nicolas Julien, we studied formally computations with Newton's method in a library of exact real arithmetic.
For linear systems of equations, we analyzed the case where the associated matrix has interval coefficients. For solving such systems, an important issue is to establish whether the associated matrix is regular. We provide a collection of formally verified criteria for regularity of interval matrices.
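The rounded Newton iteration studied in the thesis can be sketched numerically: apply Newton's method but round every iterate to a fixed precision, and observe that the process still converges, with the accuracy of the result tied to the rounding precision. The function f(x) = x² − 2 and the precision are illustrative assumptions, not the thesis's formal setting.

```python
# Sketch of a rounded Newton iteration: each Newton step is followed by
# rounding to a fixed number of digits, and the iteration still converges,
# with the final accuracy limited by the rounding precision.

def rounded_newton(a, digits, iters=50):
    f = lambda x: x * x - a          # solve x^2 = a
    df = lambda x: 2 * x
    x = a                            # crude initial guess
    for _ in range(iters):
        x = round(x - f(x) / df(x), digits)  # Newton step, then rounding
    return x

root = rounded_newton(2.0, digits=6)
assert abs(root * root - 2.0) < 1e-5   # accurate up to the rounding precision
```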

... In addition to symbolic representation by OBDDs, there are other methods for proving equivalence efficiently. Theorem-provers such as PVS [12] typically solve this problem by computing the congruence closure over ground terms and uninterpreted functions (see below) [8]. However, this method is not very efficient in the presence of many disjunctions or If-Then-Else constructs, due to its tendency to split on each such branch. ...

Efficient decision procedures for equality logic (quantifier-free predicate calculus+the equality sign) are of major importance when proving logical equivalence between systems. We introduce an efficient decision procedure for the theory of equality based on finite instantiations. The main idea is to analyze the structure of the formula and compute accordingly a small domain to each variable such that the formula is satisfiable iff it can be satisfied over these domains. We show how the problem of finding these small domains can be reduced to an interesting graph theoretic problem. This method enabled us to verify formulas containing hundreds of integer and floating point variables that could not be efficiently handled with previously known techniques.
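The finite-instantiation idea can be sketched by brute force: for pure equality logic, giving each of the n variables the domain {0, …, n−1} is always sufficient, so satisfiability can be decided by enumeration. The paper's contribution is computing far smaller, formula-specific domains from a graph analysis of the formula's structure, which this sketch omits.

```python
# Brute-force sketch of deciding an equality-logic formula over small finite
# domains. Domains of size |vars| are sufficient for pure equality logic;
# the paper computes much smaller per-variable domains.
from itertools import product

def satisfiable(variables, formula):
    """formula: a function from an assignment dict to bool."""
    n = len(variables)
    for values in product(range(n), repeat=n):   # each variable ranges over 0..n-1
        if formula(dict(zip(variables, values))):
            return True
    return False

# x = y and y = z and x != z is unsatisfiable; x != y is satisfiable.
assert not satisfiable(["x", "y", "z"],
                       lambda a: a["x"] == a["y"] and a["y"] == a["z"]
                                 and a["x"] != a["z"])
assert satisfiable(["x", "y"], lambda a: a["x"] != a["y"])
```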

... Besides, theories can be linked by import and export lists. The theorem prover of PVS [5] is both interactive and highly mechanized: the user chooses each step that is to be applied, and PVS performs it, displays the result, and then waits for the next command. ...

... 1 Inspired by and named for a PVS [6] proof command ...

ACL2 allows users to define predicates whose logical behavior mimics that of universally or existentially quantified formulae. Proof support for such quantification, however, is quite limited. We present an ACL2 framework that employs tables, computed hints and clause processing to identify quantified formulae and to skolemize or instantiate them when possible. We demonstrate how the framework can be used to prove automatically the forall-p-append example presented in the ACL2 documentation.

... (i) Proving the soundness of the permutation rules. This can be accomplished by theorem provers such as PVS [SOR93]. ...

The paper presents approaches to the validation of optimizing compilers. The emphasis is on aggressive and architecture-targeted optimizations which try to obtain the highest performance from modern architectures, in particular EPIC-like micro-processors. Rather than verify the compiler, the approach of translation validation performs a validation check after every run of the compiler, producing a formal proof that the produced target code is a correct implementation of the source code. First we survey the standard approach to validation of optimizations which preserve the loop structure of the code (though they may move code in and out of loops and radically modify individual statements), present a simulation-based general technique for validating such optimizations, and describe a tool, VOC-64, which implements these techniques. For more aggressive optimizations which, typically, alter the loop structure of the code, such as loop distribution and fusion, loop tiling, and loop interchanges, we present a set of permutation rules which establish that the transformed code satisfies all the implied data dependencies necessary for the validity of the considered transformation. We describe the necessary extensions to VOC-64 in order to validate these structure-modifying optimizations. Finally, the paper discusses preliminary work on run-time validation of speculative loop optimizations, which involves using run-time tests to ensure the correctness of loop optimizations whose correctness neither the compiler nor compiler-validation techniques can guarantee. Unlike compiler validation, run-time validation has not only the task of determining when an optimization has generated incorrect code, but also the task of recovering from the optimization without aborting the program or producing an incorrect result.
This technique has been applied to several loop optimizations, including loop interchange, loop tiling, and software pipelining and appears to be quite promising.
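The run-time validation idea can be sketched by executing a source loop nest and its interchanged version and checking that they compute the same result for a given run. This is a dynamic check in the spirit of the paper's run-time validation, not the static permutation-rule proof; the loop bodies are made-up examples with no loop-carried dependence.

```python
# Dynamic validation of a loop interchange: run both versions and compare.
# Because the body a[i][j] = i*n + j carries no dependence between
# iterations, interchanging the i and j loops preserves the result.

def source(n):
    a = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            a[i][j] = i * n + j        # dependence-free loop body
    return a

def interchanged(n):
    a = [[0] * n for _ in range(n)]
    for j in range(n):                 # loops interchanged by the "optimizer"
        for i in range(n):
            a[i][j] = i * n + j
    return a

assert source(4) == interchanged(4)    # the validation check passes for this run
```

A static validator would instead discharge the permutation rule's data-dependence conditions once, for all inputs, rather than checking one run at a time.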

... The core of a functional language is the λ-calculus [10, 11], which has the same expressiveness as Turing machines and allows for the formulation of any operation, and even any datum, in terms of higher-order functions. Languages like Scheme [50], SML [81], Haskell [96] or OCaml [75], and proof assistants such as ACL2 [69], Coq [79], PVS [91], HOL [57], Isabelle [83], and Agda [27], are all based on typed variants or extensions of the λ-calculus. These extensions are easier to use than the original calculus, and all propose a number of basic data types, powerful type construction operators (such as inductive types), and mechanisms for the definition of functions. ...

The goal of this chapter is to give an overview of the different approaches and tools pertaining to formal methods. We do not attempt to be exhaustive, but focus instead on the main approaches (formal specification, formal verification and proofs, transformation, and formal development). A concise introduction to basic logic concepts and methods is also provided. After reading the chapter the reader will be familiar with the terminology of the area, as well as with the most important concepts and techniques.
Moreover, the chapter will allow the reader to contextualise and put into perspective the topics that are covered in detail in the book.

... Basic definitions of safety vs. liveness can be found in [4]. Several model checking tools can be used to prove liveness, invariant and eventuality properties [11][12]. ...

Complex large-scale embedded systems arise in many applications, in particular in the design of automotive systems, controllers and networking protocols. In this paper, we attempt to present a review of salient results in modeling of complex large scale embedded systems, including hybrid systems, and review existing results for composition, analysis, model checking, and verification of safety properties. We then present a library of vehicle models designed for cruise control (and CACC) that attempt to cross the chasm between theory and practice by capturing real-world challenges faced by industry and making the library accessible in a public domain form.

... These intermediate representations are read by the validation system VOC-64 and analysed, and then for every transformation the target code is proved to be a correct implementation of the source code. External theorem provers such as STeP [MAB+94] and PVS [SOR93] are used in proving the correctness of optimisations. Unfortunately, the system has not been fully implemented. ...

... Two tools are used to assist the analysis. The first is the theorem prover PVS (Prototype Verification System) [SOR93], extended with a number of more specialised theories concerning the rank function techniques. The second tool is the RankAnalyser that helps the user to construct a rank function for a given protocol. ...

The research presented in this thesis lies in the area of security protocol analysis, focusing mainly on confidentiality and authentication properties. The formal method used is CSP with the model checker FDR. This approach has proved to be very successful for modelling security protocols, especially when it comes to finding attacks (for example [Low95] and [LR97b]). However, since it can only check a small finite instance of a protocol model, this was incomplete as a method for proving...

... However, they realized that this open-architecture-based approach did not work, due to several complications involved in defining the interface [22]. Other attempts have aimed at a tighter integration, as in the PVS system [73] and the Stanford Pascal Verifier [63]. ...

... The Prototype Verification System (PVS) consists of a specification language (PVS-SL) [9], a type checker, and a theorem prover [10] based on a classical, simply typed higher-order logic. It exploits the synergy between a highly expressive specification language and an interactive theorem prover based on powerful decision procedures. ...

In this paper, we present formal semantics of UML (Unified Modeling Language) sequence diagrams using the PVS (Prototype Verification System) [8] as an underlying semantic foundation. We give a formal definition of a trace-based semantics [5] of UML sequence diagrams; i.e. a sequence diagram is interpreted as a set of traces of events that may occur in the realization of the interaction specified by the sequence diagram. This work is a part of a long-term vision to explore how the PVS tool set could be used to underpin practical tools for analysis of models in UML. It also contributes to the ongoing effort to provide formal semantics of UML, with the aim of clarifying and disambiguating the language as well as supporting the development of semantically based tools.
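A trace-based semantics of this kind can be sketched concretely: each lifeline of a sequence diagram induces a total order on its own events, and the diagram denotes the set of interleavings that preserve every lifeline's order. The sketch below omits message causality (send before receive) for brevity; the event names are illustrative assumptions.

```python
# Minimal sketch of a trace-based semantics for sequence diagrams: the
# denotation is the set of all interleavings of the lifelines' event
# sequences that preserve each lifeline's internal order (a shuffle product).

def traces(lifelines):
    """lifelines: list of event sequences; yields order-preserving interleavings."""
    if all(not seq for seq in lifelines):
        yield ()
        return
    for i, seq in enumerate(lifelines):
        if seq:
            rest = lifelines[:i] + [seq[1:]] + lifelines[i + 1:]
            for tail in traces(rest):
                yield (seq[0],) + tail    # take this lifeline's next event first

ts = set(traces([["a1", "a2"], ["b1"]]))
assert ("a1", "a2", "b1") in ts and ("b1", "a1", "a2") in ts
assert ("a2", "a1", "b1") not in ts       # lifeline order must be preserved
assert len(ts) == 3
```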

... Some of them allow the user to call "oracles" to delegate some computations to other tools, with a possible loss of confidence. Among well-known proof assistants, let us cite Coq [24], HOL [21, 31], Isabelle [25], PVS [30], Lego [28], and NuPRL [13]. As Pnueli points out in his FM'99 conference talk [27], both kinds of tools are bound to cooperate within the framework of verification of reactive systems: for instance, we can use a proof assistant to prove that some property P holds for each reachable state of a system S, and use an automatic tool to compute the set of reachable states of S. The CClair project [10], written in Isabelle/HOL, aims to provide the user with an environment for working with transition systems of various kinds, allowing verification and testing in a single framework. ...

The CClair project is designed to be a general framework for working with various kinds of transition systems, allowing both verification and testing activities. It is structured as a set of theories of Isabelle/HOL, the root being a theory of transition systems and their behaviour. Subtheories define particular families of systems, like constrained and timed automata. Besides the great expressivity of higher-order logic, we show how features like rewriting and existential variables are crucial in this kind of framework.

... Finally, as is most often stated, a formal specification, along with a formal model of an implementation, can be used to increase assurance about system correctness by allowing mathematical proof that the implementation model satisfies the requirement. Some formal verification procedures such as model-checking [18] can be totally automated, and others partially automated, using mechanised proof assistants such as HOL [32] and PVS [52]. One technique that is often proposed is that of refinement, where a series of transformations are carried out from the specification to the implementation, each of which leads to a set of proof obligations or verification conditions, which must be discharged to complete a proof that the implementation satisfies its specification. ...

AORTA has been proposed as an implementable real-time language for concurrent systems where event times, rather than values of data, are critical. In this paper we describe how to use AORTA with a formal data model, allowing integration with a variety of model-based data specification languages. Example definitions are given of time-critical systems with important data attributes. A development technique and supporting software tools for AORTA are also described.

... Proof-development systems like Coq [4], Hol [21], Isabelle [28] and PVS [32] rely on powerful type systems featuring (co-)inductive types. The latter, which capture in a type-theoretical framework the notions of initial algebra or final coalgebra, are extensively used in the formalization of programming languages, reactive and embedded systems, communication and cryptographic protocols. ...

The Calculus of Inductive Constructions (CIC) is a powerful type system, featuring dependent types and inductive definitions, that forms the basis of proof-assistant systems such as Coq and Lego. We extend CIC with constructor subtyping, a basic form of subtyping in which an inductive type sigma is viewed as a subtype of another inductive type tau if tau has more elements than sigma. It is shown that the calculus is well-behaved and provides a suitable basis for formalizing natural semantics in ...

... On the one hand, the users of model checking technology [12] must overcome the "state explosion problem," i.e., how to exhaustively search, either explicitly or implicitly, the large state spaces of practical system specifications. On the other hand, while theorem provers, such as PVS [36], can handle large, even infinite state spaces, they lack automation for formulating and proving the inductive invariants often necessary to complete a proof. Typically, applying a theorem prover requires user ingenuity to formulate the needed invariants, theorem proving skills, and detailed knowledge of a particular prover. ...

This paper describes a compositional proof strategy for verifying properties of requirements specifications. The proof strategy, which may be applied using either a model checker or a theorem prover, uses known state invariants to prove state and transition invariants. Two proof rules are presented: a standard incremental proof rule analogous to Manna and Pnueli's incremental proof rule, and a compositional proof rule. The advantage of applying the compositional rule is that it decomposes a large verification problem into smaller problems which often can be solved more efficiently than the larger problem. The steps needed to implement the compositional rule are described, and the results of applying the proof strategy to two examples, a simple cruise control system and a real-world Navy system, are presented. In the Navy example, compositional verification using either theorem proving or model checking was three times faster than verification based on the standard incremental (noncompositional) rule. In addition to the two above rules for proving invariants, a new compositional proof rule is presented for circular assume-guarantee proofs of invariants. While in principle the strategy and rules described for proving invariants may be applied to any state-based specification with parallel composition of components, the specifications in the paper are expressed in the SCR (Software Cost Reduction) tabular notation, the auxiliary invariants used in the proofs are automatically generated invariants, and the verification is supported by the SCR tools.
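The incremental rule can be sketched over an explicit state space: a new invariant is established by checking it in the initial states and showing it is preserved by every transition taken from states that also satisfy an already-proved invariant. The tiny counter system below is an illustrative assumption, not one of the paper's SCR examples.

```python
# Sketch of an incremental invariant proof rule: prove new_inv by induction,
# strengthened with a known (already-proved) invariant.

def holds_incrementally(states, init, step, known_inv, new_inv):
    if not all(new_inv(s) for s in init):      # base case: initial states
        return False
    for s in states:
        if known_inv(s) and new_inv(s):        # induction hypothesis, strengthened
            if not all(new_inv(t) for t in step(s)):
                return False                   # some successor violates new_inv
    return True

# System: counter mod 4 starting at 0, stepping by +2. Known: state is even.
states = range(4)
step = lambda s: [(s + 2) % 4]
assert holds_incrementally(states, [0], step,
                           known_inv=lambda s: s % 2 == 0,
                           new_inv=lambda s: s in (0, 2))
```

The strengthening matters: without the known invariant, the induction step would also have to consider the unreachable odd states.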

Since their inception, the Perspectives in Logic and Lecture Notes in Logic series have published seminal works by leading logicians. Many of the original books in the series have been unavailable for years, but they are now in print once again. This volume, the fifteenth publication in the Lecture Notes in Logic series, collects papers presented at the symposium 'Reflections on the Foundations of Mathematics', held in celebration of Solomon Feferman's 70th birthday (the 'Feferfest') at Stanford University, California, in 1998. Feferman has shaped the field of foundational research for nearly half a century. These papers reflect his broad interests as well as his approach to foundational research, which emphasizes the solution of mathematical and philosophical problems. There are four sections, covering proof-theoretic analysis, logic and computation, applicative and self-applicative theories, and the philosophy of modern mathematical and logical thought.

Of special interest in formal verification are safety properties, which assert that the system always stays within some allowed region. Proof rules for the verification of safety properties have been developed in the proof-based approach to verification, making verification of safety properties simpler than verification of general properties. In this paper we consider model checking of safety properties. A computation that violates a general linear property reaches a bad cycle, which witnesses the violation of the property. Accordingly, current methods and tools for model checking of linear properties are based on a search for bad cycles. A symbolic implementation of such a search involves the calculation of a nested fixed-point expression over the system's state space, and is often infeasible. Every computation that violates a safety property has a finite prefix along which the property is violated. We use this fact in order to base model checking of safety properties on a search for finite bad prefixes. Such a search can be performed using a simple forward or backward symbolic reachability check. A naive methodology that is based on such a search involves a construction of an automaton (or a tableau) that is doubly exponential in the property. We present an analysis of safety properties that enables us to prevent the doubly-exponential blow up and to use the same automaton used for model checking of general properties, replacing the search for bad cycles by a search for bad prefixes.
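The key observation above — that every violation of a safety property has a finite bad prefix — can be sketched as a simple forward reachability search that returns such a prefix as the counterexample. The example system (a counter) is an illustrative assumption.

```python
# Sketch of safety checking via a search for finite bad prefixes: explore
# successors breadth-first and report a violation as soon as a bad state is
# reached, returning the prefix that witnesses it.
from collections import deque

def check_safety(init, step, bad):
    seen = set(init)
    queue = deque((s, (s,)) for s in init)
    while queue:
        state, prefix = queue.popleft()
        if bad(state):
            return prefix                  # a finite bad prefix: counterexample
        for nxt in step(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, prefix + (nxt,)))
    return None                            # no reachable bad state: system is safe

# Counter that increments up to 3: the property "always < 3" is violated.
prefix = check_safety([0], lambda s: [s + 1] if s < 3 else [], lambda s: s >= 3)
assert prefix == (0, 1, 2, 3)
```

No cycle detection is needed, which is exactly why reducing safety checking to bad-prefix search avoids the nested fixed-point computation required for general linear properties.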

This paper presents a hybrid method for verification and synthesis of parameterized self-stabilizing protocols where algorithmic design and mechanical verification techniques/tools are used hand-in-hand. The core idea behind the proposed method includes the automated synthesis of self-stabilizing protocols in a limited scope (i.e., fixed number of processes) and the use of theorem proving methods for the generalization of the solutions produced by the synthesizer. Specifically, we use the Prototype Verification System (PVS) to mechanically verify an algorithm for the synthesis of weakly self-stabilizing protocols. Then, we reuse the proof of correctness of the synthesis algorithm to establish the correctness of the generalized versions of synthesized protocols for an arbitrary number of processes. We demonstrate the proposed approach in the context of an agreement and a coloring protocol on the ring topology.

... A datatype StatusValue is introduced as a means to achieve mutual exclusion between producer and consumer. StatusValue has three elements: idle, writing and reading. The LOOP tool automatically generates a corresponding PVS data type definition, containing precisely these three elements as constructors. Figure 5.1: Classes of the CCSL specification of PFC_imp. Each class is represented by a rectangle with its name (top), attributes (middle) and methods (bottom). Methods to set attributes are omitted for clarity. The arrows denote inheritance; they point to the class from which the properties are inherited. 5.3 Methods and assertions of ProcessNetwork: Class ProcessNetwork has three methods: write, read and tick. The methods write and read are used to model a communication command. In the model of YAPI, processes perform alternating communication steps and computation steps. Therefore, the read method only has an effect on cons when no "previous" read command is being carried out. In terms of the attributes this means: if nr_to_transmit of cons equals 0, then nr_to_transmit of cons after a read(n) command equals n. The write(n) command is similar to the read(n) command. There is an additional constraint, however: the size of buff should be sufficient for the transfer of n items. The methods write and read are formally described in the assertions set_nr_to_write and set_nr_to_read in the CCSL file in appendix 4. The evolution of the system in time is modelled using the method tick. Three assertions specify the effect of tick: no_write_no_read, no_write_go_read and go_write_no_read. All three are written in the form: pre-condition(x) IMPLIES post-condition(tick(x)). The pre-conditions are mutually exclusive and the disjunction of all three pre-conditions equals TRUE. Ther...

Although a number of mechanical provers have been introduced and applied widely by academic researchers, these provers are rarely used in the practical development of software. For mechanical provers to be used more widely in practice, two major barriers must be overcome. First, the languages provided by the mechanical provers for expressing the required system behavior must be more natural for software developers. Second, the reasoning steps supported by mechanical provers are usually at too low and detailed a level and therefore discourage use of the prover. To help remove these barriers, we are developing a system called TAME, a high-level user interface to PVS for specifying and proving properties of automata models. TAME provides both a standard specification format for automata models and numerous high-level proof steps appropriate for reasoning about automata models. In previous work, we have shown how TAME can be useful in proving properties about systems described as Lynch-Vaandrager Timed Automata models. TAME has the potential to be used as a PVS interface for other specification methods that are specialized to define automata models. This paper first describes recent improvements to TAME, and then presents our initial results in using TAME to provide theorem proving support for the SCR (Software Cost Reduction) requirements method, a method with a wide range of other mechanized support.

Model checking is a proven successful technology for verifying hardware. It works, however, only on finite state machines, and most software systems have infinitely many states. Our approach to applying model checking to software hinges on identifying appropriate abstractions that exploit the nature of both the system, S, and the property, phi, to be verified. We check phi on an abstracted, but finite, model of S. Following this approach we verified three cache coherence protocols used in distributed file systems. These protocols have to satisfy this property: 'If a client believes that a cached file is valid then the authorized server believes that the client's copy is valid.' In our finite model of the system, we need only represent the 'beliefs' that a client and a server have about a cached file; we can abstract from the caches, the files' contents, and even the files themselves. Moreover, by successive application of the generalization rule from predicate logic, we need only consider a model with at most two clients, one server, and one file. We used McMillan's SMV model checker; on our most complicated protocol, SMV took less than 1 second to check over 43,600 reachable states.
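The belief abstraction described above can be sketched as a tiny explicit-state model check: the state records only the two beliefs, mirroring the abstraction away from caches and file contents. The transition relation below is a made-up toy protocol, not one of the three verified protocols, and the check enumerates reachable states directly rather than using SMV.

```python
# Tiny explicit-state sketch of the coherence property: "if a client believes
# its cached file is valid, the server believes the client's copy is valid."
# State = (client_believes_valid, server_believes_valid).

def reachable(init, transitions):
    seen, frontier = {init}, [init]
    while frontier:
        s = frontier.pop()
        for t in transitions(s):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen

def transitions(state):
    client, server = state
    moves = []
    if not client:
        moves.append((True, True))    # fetch: server records the grant first
    else:
        moves.append((False, server)) # client silently drops its copy
    if server:
        moves.append((False, False))  # server invalidates; client complies
    return moves

states = reachable((False, False), transitions)
assert all(server for client, server in states if client)   # property holds
```

The dangerous state (client believes valid, server does not) is unreachable because the server always records a grant before the client starts believing in it.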

This article describes uses of dependencies in tools to maintain big proofs in interactive proof assistants. The aim of this article is to describe the notion of dependency in proof assistants, how it can be used to design tools to maintain large proofs, and how those tools could be integrated in a proof environment.

PROBLEM STATEMENT: As the complexity of software projects for embedded avionics applications increases, it becomes increasingly obvious that new innovative techniques will have to be used to help test those projects. Current methods of testing often require man-in-the-loop operation as well as extensive set-up times. Often, the verification and validation of moderate software changes requires several weeks of man hours to accomplish. ADVANCED AVIONICS VERIFICATION AND VALIDATION (AAV&V) AS AN INNOVATIVE TECHNIQUE: The AAV&V addresses the above problem by providing software developers and testers access to current testing technologies while investigating and proposing new techniques for validation and verification. Current software developers and testers can take advantage of an AAV&V tool which sits on a powerful engineering workstation with an open system architecture and gives them coverage and static analysis capability as well as documentation access and generation. Future software developers and testers will enjoy expansion of language options on the AAV&V tool's front end, as well as access to formal methods and statistical techniques.

The workshop description lists security, safety, survivability, fault tolerance, and real-time operation as included among the critical properties that may be required of high assurance systems. Each of these is an active area of ongoing research. It is a daunting challenge to develop systems that satisfy several of them at once, yet there is a pressing need to do so for many high assurance systems. This position paper suggests the use of a quantitative risk-based model to help in such situations. The aim is to support reasoning and decision making that spans many of these critical properties. The potential benefits include prioritization of the challenges that high assurance system development efforts face, ability to identify solutions to match the challenges, identification of the areas of weakness where further research is warranted, and improved understanding of the strengths and weaknesses of a particular system.

We derive an efficient linear SIMD architecture for the algebraic path problem (APP). For a graph with n nodes, our array has n processors, each with n + 3 memory cells, and computes the result in 2n² + n − 2 steps. Our array is ideally suited for VLSI, since the control is quite simple, the memory is implemented as FIFOs, and I/O is trivial (the array is linear and only the boundary processors communicate with the host). It can be easily adapted to run in multiple passes, and moreover, this version improves work efficiency. The running time with k passes on n/k processors is no more than (k + 1/k)n². The work is no more than (1 + 1/k²)n³ and can be made as close to n³ as desired. Our array is derived through the application of correctness-preserving transformations to an initial specification, and the derivation is verified mechanically by means of an automatic theorem prover.
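The algebraic path problem itself is easy to state generically: compute the closure of an edge-weight matrix over a closed semiring. A minimal sequential sketch follows (not the systolic array, and with a `star` that assumes non-negative cycles in the shortest-path instance):

```python
INF = float("inf")

def algebraic_path(w, plus, times, star, one):
    """Closure of an n x n weight matrix over a closed semiring, computed
    with a Floyd-Warshall-style elimination (sequential, not the array)."""
    n = len(w)
    a = [row[:] for row in w]
    for k in range(n):
        akk = star(a[k][k])
        for i in range(n):
            for j in range(n):
                a[i][j] = plus(a[i][j], times(a[i][k], times(akk, a[k][j])))
    for i in range(n):                 # reflexive closure: empty paths
        a[i][i] = plus(a[i][i], one)
    return a

# Shortest-path instance of the APP: plus = min, times = +, one = 0,
# zero = INF, and star(x) = 0 under the no-negative-cycle assumption.
w = [[INF, 3, 8],
     [INF, INF, 2],
     [4, INF, INF]]
d = algebraic_path(w, min, lambda x, y: x + y, lambda _: 0, 0)
```

Other instances (transitive closure over booleans, regular-language path expressions) are obtained by swapping the semiring operations, which is what makes the APP a natural target for a single derived architecture.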

In this paper generic software development steps of different complexity are represented and verified using the (higher-order, strongly typed) specification and verification system PVS. The transformations considered in this paper include large powerful steps encoding general algorithmic paradigms as well as smaller transformations for the operationalization of a descriptive specification. The application of these transformation patterns is illustrated by means of simple examples. Furthermore, we show how to guide proofs of correctness assertions about development steps. Finally, this work serves as a case study and test for the usefulness of the PVS system.
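One of the "large powerful steps encoding general algorithmic paradigms" can be illustrated outside PVS: a divide-and-conquer scheme parameterized by problem-specific operations, instantiated here to mergesort. The parameter names are illustrative, not the paper's formalization.

```python
def divide_and_conquer(is_trivial, solve_trivial, split, join):
    """Generic divide-and-conquer scheme; instantiating the four
    parameters yields a concrete algorithm."""
    def solve(problem):
        if is_trivial(problem):
            return solve_trivial(problem)
        return join([solve(part) for part in split(problem)])
    return solve

def merge(parts):
    # combine two sorted sub-results into one sorted list
    left, right = parts
    out = []
    while left and right:
        out.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
    return out + left + right

# Instantiating the scheme yields mergesort.
mergesort = divide_and_conquer(
    is_trivial=lambda xs: len(xs) <= 1,
    solve_trivial=list,
    split=lambda xs: (xs[: len(xs) // 2], xs[len(xs) // 2:]),
    join=merge,
)
```

A correctness proof of such a development step obliges one to show that `join` re-establishes the specification from correct sub-solutions; discharging that kind of obligation once, at the level of the scheme, is what makes the paradigm-level transformations powerful.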

One of the main tasks of the mathematical knowledge management community must surely be to enhance access to mathematics on
digital systems. In this paper we present a spectrum of approaches to solving the various problems inherent in this task,
arguing that a variety of approaches is both necessary and useful. The main ideas presented are about the differences between
digitised mathematics, digitally represented mathematics and formalised mathematics. Each has its part to play in managing
mathematical information in a connected world. Digitised material is that which is embodied in a computer file, accessible
and displayable locally or globally. Represented material is digital material in which there is some structure (usually syntactic
in nature) which maps to the mathematics contained in the digitised information. Formalised material is that in which both
the syntax and the semantics of the represented material are automatically accessible. Given the range of mathematical information
to which access is desired, and the limited resources available for managing that information, we must ensure that these resources
are applied to digitise, form representations of, or formalise existing and new mathematical information in such a way as
to extract the most benefit from the least expenditure of resources. We also analyse some of the various social and legal
issues which surround the practical tasks.

We are interested in the verification of Newton’s method. We use a formalization of the convergence and stability of the method
done with the axiomatic real numbers of Coq’s Standard Library in order to validate the computation with Newton’s method done
with a library of exact real arithmetic based on co-inductive streams. The contribution of this work is twofold. Firstly,
based on Newton’s method, we design and prove correct an algorithm on streams for computing the root of a real function in
a lazy manner. Secondly, we prove that rounding at each step in Newton’s method still yields a convergent process with an
accurate correlation between the precision of the input and that of the result. An algorithm including rounding turns out
to be much more efficient.
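The second contribution, that rounding at each step still yields a convergent process, can be sketched numerically (ordinary floating point and decimal rounding here, rather than the paper's Coq development over co-inductive streams of exact reals):

```python
def newton_rounded(f, fprime, x0, digits, steps):
    """Newton iteration whose iterate is rounded at every step; a numeric
    sketch of the convergence-with-rounding result, not the Coq proof."""
    x = x0
    for _ in range(steps):
        x = round(x - f(x) / fprime(x), digits)  # round to `digits` decimals
    return x

# Root of f(x) = x^2 - 2, i.e. an approximation of sqrt(2).
root = newton_rounded(lambda x: x * x - 2, lambda x: 2 * x, 1.0, 6, 8)
```

The accuracy of the result degrades gracefully with the rounding precision, which is the kind of correlation between input precision and result precision that the formal development establishes.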

The method of Invisible Invariants was developed originally in order to verify safety properties of parameterized systems
fully automatically. Roughly speaking, the method is based on a small model property that implies it is sufficient to prove some properties on small instantiations of the system, and on a heuristic that generates
candidate invariants. Liveness properties usually require well-founded ranking, and do not fall within the scope of the small
model theorem. In this paper we develop novel proof rules for liveness properties, all of whose proof obligations are of the
correct form to be handled by the small model theorem. We then develop abstraction and generalization techniques that allow
for fully automatic verification of liveness properties of parameterized systems. We demonstrate the application of the method
on several examples.

Constructor subtyping is a form of subtyping in which an inductive type σ is viewed as a subtype of another inductive type τ if τ has more constructors
than σ. As suggested in [5,12], its (potential) uses include proof assistants and functional programming languages.
In this paper, we introduce and study the properties of a simply typed λ-calculus with record types and datatypes, and which
supports record subtyping and constructor subtyping. In the first part of the paper, we show that the calculus is confluent
and strongly normalizing. In the second part of the paper, we show that the calculus admits a well-behaved theory of canonical
inhabitants, provided one adopts expansive extensionality rules, including η-expansion, surjective pairing, and a suitable expansion rule for datatypes. Finally, in the third part of the paper, we extend
our calculus with unbounded recursion and show that confluence is preserved.
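At the level of signatures, constructor subtyping reduces to an inclusion check between constructor sets. The sketch below is deliberately simplified (argument types are compared syntactically, glossing over the recursive occurrences the calculus handles properly):

```python
# Datatype signatures as dicts from constructor names to argument-type
# tuples: sigma is a constructor subtype of tau when every constructor
# of sigma also belongs to tau with the same argument types.
def is_constructor_subtype(sigma, tau):
    return all(name in tau and tau[name] == args
               for name, args in sigma.items())

# Classic example: the naturals embed into a datatype that adds one
# extra constructor.
NAT = {"zero": (), "succ": ("nat",)}
NAT_WITH_PRED = {"zero": (), "succ": ("nat",), "pred": ("nat",)}
```

The direction of the inclusion is the point: the type with *more* constructors is the supertype, since every value built from the smaller constructor set is also a value of the larger one.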

Safety-critical software systems such as certain nuclear instrumentation and control (NI&C) systems should be developed with thorough verification. This study presents a method of software requirement verification with a case study for a nuclear power plant (NPP) protection system. The verification introduces Colored Petri Nets (CPN) for system modeling and the Prototype Verification System (PVS) for mathematical verification. In order to aid flow-through from modeling by CPN to mathematical proof by PVS, an information extractor from CPN models has been developed in this paper. In order to convert the extracted information to the PVS specification language, a translator has also been developed. This combined method has been applied to the functional requirements of the Wolsong NPP Shut Down System #2 (SDS2); logical properties of the requirements were verified. Through this research, guidelines and tool support for the use of formal methods have been developed for application to NI&C software verification.
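The flow from an extracted CPN transition table to PVS input can be suggested by a toy translator. Everything below is hypothetical: the emitted text is only the skeleton of a PVS theory (`step` is left undeclared, for instance), not the actual output of the tool described in the paper.

```python
# Hypothetical mini-translator in the spirit of the CPN-to-PVS tool chain:
# it turns an extracted transition table into the skeleton of a PVS theory.
def to_pvs(name, states, transitions):
    lines = [f"{name}: THEORY", "BEGIN",
             f"  state: TYPE = {{{', '.join(states)}}}"]
    for pre, guard, post in transitions:
        lines.append(f"  {pre}_to_{post}: AXIOM {guard} IMPLIES step({pre}) = {post}")
    lines.append(f"END {name}")
    return "\n".join(lines)

# Invented example loosely evoking a trip condition of a shutdown system.
theory = to_pvs("sds2_trip", ["normal", "trip"],
                [("normal", "pressure_high", "trip")])
```

The value of such a translator is that the extractor, not the engineer, decides what reaches the prover, so the PVS text stays mechanically in sync with the CPN model.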

We present a method for efficiently providing algebraic correctness proofs for communication systems. It is described in the setting of μCRL [J.F. Groote, A. Ponse, The syntax and semantics of μCRL, in: A. Ponse, C. Verhoef, S.F.M. van Vlijmen (Eds.), Algebra of Communicating Processes, Workshops in Computing, Springer, Berlin, 1994, pp. 26–62] which is, roughly, ACP [J.C.M. Baeten, W.P. Weijland, Process Algebra, Cambridge Tracts in Theoretical Computer Science, vol. 18, Cambridge University Press, Cambridge 1990, J.A. Bergstra, J.W. Klop, The algebra of recursively defined processes and the algebra of regular processes, in: Proceedings of the 11th ICALP, Antwerp, Lecture Notes in Computer Science, vol. 172, Springer, Berlin, 1984, pp. 82–95] extended with a formal treatment of the interaction between data and processes. The method incorporates assertional methods, such as invariants and simulations, in an algebraic framework, and centers around the idea that the state spaces of distributed systems are structured as a number of cones with focus points. As a result, it reduces a large part of algebraic protocol verification to the checking of a number of elementary facts concerning data parameters occurring in implementation and specification. The resulting method has been applied to various non-trivial case studies of which a number have been verified mechanically with the theorem checker PVS. In this paper the strategy is illustrated by several small examples and one larger example, the Concurrent Alternating Bit Protocol (CABP).

The paper presents approaches to the validation of optimizing compilers. The emphasis is on aggressive and architecture-targeted optimizations which try to obtain the highest performance from modern architectures, in particular EPIC-like micro-processors. Rather than verify the compiler, the approach of translation validation performs a validation check after every run of the compiler, producing a formal proof that the produced target code is a correct implementation of the source code. First we survey the standard approach to validation of optimizations which preserve the loop structure of the code (though they may move code in and out of loops and radically modify individual statements), present a simulation-based general technique for validating such optimizations, and describe a tool, VOC-64, which implements these techniques. For more aggressive optimizations which, typically, alter the loop structure of the code, such as loop distribution and fusion, loop tiling, and loop interchange, we present a set of permutation rules which establish that the transformed code satisfies all the implied data dependencies necessary for the validity of the considered transformation. We describe the necessary extensions to VOC-64 in order to validate these structure-modifying optimizations. Finally, the paper discusses preliminary work on run-time validation of speculative loop optimizations, which involves using run-time tests to ensure the correctness of loop optimizations whose correctness neither the compiler nor compiler-validation techniques can guarantee. Unlike compiler validation, run-time validation has not only the task of determining when an optimization has generated incorrect code, but also that of recovering from the optimization without aborting the program or producing an incorrect result.
This technique has been applied to several loop optimizations, including loop interchange, loop tiling, and software pipelining, and appears to be quite promising.
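The run-time flavor of validation, executing source and target and comparing, can be sketched for loop interchange. The kernel is a hypothetical example; the interchange happens to be legal here because the only dependence stays lexicographically positive after permutation.

```python
def source_kernel(n):
    # hypothetical example kernel: original (i outer, j inner) loop nest
    a = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            a[i][j] = j if i == 0 else a[i - 1][j] + 1
    return a

def interchanged_kernel(n):
    # same body with the loops interchanged; the only dependence is
    # a[i][j] on a[i-1][j], distance (1, 0), which becomes (0, 1) after
    # the interchange and stays lexicographically positive, so the
    # permutation satisfies the implied data dependencies
    a = [[0] * n for _ in range(n)]
    for j in range(n):
        for i in range(n):
            a[i][j] = j if i == 0 else a[i - 1][j] + 1
    return a

def validate(n):
    """Run-time check: execute source and target and compare the results."""
    return source_kernel(n) == interchanged_kernel(n)
```

A static permutation rule would establish the dependence argument in the comment once and for all; the run-time check only vouches for the executions it actually observes, which is why recovery machinery is needed when it fails.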

Time Warp is the most common mechanism used for implementing optimistically synchronized Parallel Discrete Event Simulation (PDES). Rollback relaxation is an optimization to Time Warp that reduces the space and time requirements of rollback. Rollback relaxation is applicable to simulation systems that contain memoryless components (i.e., components whose output at any instant of time is determined completely by their inputs at that time). For such components, a complete rollback is not necessary for the correct completion of the simulation. Instead, on the receipt of a straggler message, a rollback-relaxed process merely aligns the input set to send new, and validate already sent, output messages. This optimization has been implemented and has experimentally been shown to enhance the performance of Time Warp simulations. However, no formal proof of the correctness of rollback relaxation exists (although correctness proofs of Time Warp do). In this paper, we formally specify and verify the correctness of rollback relaxation. The problem is specified using the Prototype Verification System (PVS) specification language and proved using the PVS prover.
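The core of rollback relaxation, realigning the input set of a memoryless component instead of rolling back state, can be sketched as follows. This is a toy, not the PVS specification; the message formats are invented.

```python
import bisect

class RollbackRelaxedProcess:
    """Memoryless component under rollback relaxation: output at time t
    depends only on the input at time t, so a straggler needs no state
    rollback; the input set is realigned and the outputs repaired."""
    def __init__(self, fn):
        self.fn = fn       # the memoryless output function
        self.inputs = []   # sorted list of (timestamp, value)
        self.sent = {}     # timestamp -> output value already sent

    def receive(self, t, value):
        bisect.insort(self.inputs, (t, value))   # align the input set
        new_out, old_out = self.fn(value), self.sent.get(t)
        msgs = []
        if old_out is not None and old_out != new_out:
            msgs.append(("anti", t, old_out))    # cancel the stale output
        if old_out != new_out:
            msgs.append(("send", t, new_out))    # emit the (corrected) output
        self.sent[t] = new_out
        return msgs

p = RollbackRelaxedProcess(lambda v: 2 * v)
in_order = p.receive(10, 1)     # normal message
straggler = p.receive(5, 3)     # straggler: outputs after t=5 are untouched
```

The point of the sketch is what does *not* happen on the straggler: no earlier state is restored and no later output is cancelled, because a memoryless component has no state to restore.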

In this paper we show how the specification and verification system PVS (Prototype Verification System) can provide tool support for Abstract State Machines (ASMs), especially oriented towards automatic proof checking and mechanized proving of properties. Useful templates are presented which allow encoding of ASM models into PVS without requiring any additional skill from the user. We prove that the transformation preserves the ASM semantics and provide a framework for an automatic tool, prototypically implemented, which translates ASM specifications into PVS. The ASM specification of the Production Cell given in [4] is taken as a case study to show how to formalize multi-agent ASMs in PVS and prove properties.

We propose a theoretical foundation for proof reuse, based on the novel idea of a computational interpretation of type isomorphisms.
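The idea of a computational interpretation can be suggested in ordinary code: an isomorphism between types induces a transport of programs (and, in a proofs-as-programs reading, of proofs). Here a function on pairs is transported along the swap isomorphism A × B ≅ B × A; all names are illustrative.

```python
def swap(p):
    # the isomorphism A x B -> B x A (it is its own inverse)
    a, b = p
    return (b, a)

def transport(f, iso, iso_inv):
    """Given f : A x B -> A x B, produce the corresponding map on B x A
    by conjugating f with the isomorphism."""
    return lambda q: iso(f(iso_inv(q)))

step = lambda p: (p[0] + 1, 2 * p[1])     # some function on A x B
step_swapped = transport(step, swap, swap)
```

Reuse enters because nothing about `step` had to be rewritten: the isomorphism alone produces the variant on the isomorphic type, which is the mechanism the paper gives a theoretical foundation for.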

A pragmatic approach to algorithm specification and verification is presented. The language AL provides a level of abstraction between a mathematical specification notation and a programming language, supporting compact but expressive algorithm description. Proofs of correctness about algorithms written in AL can be done via an embedding of the semantics of the language in a proof system; implementations of algorithms can be done through translation to standard programming languages. The proofs of correctness are more tractable than direct verification of programming language code; descriptions in AL are more easily related to executable programs than standard mathematical specifications. AL provides an independent, portable description which can be related to different proof systems and different programming languages. Several interfaces have been explored and tools for fully automatic translation of AL specifications into the HOL logic and Standard ML executable code have been implemented. A substantial case study uses AL as the common specification language from which both the formal proofs of correctness and executable code have been produced.

The paper presents a method, called the method of verification by invisible invariants, for the automatic verification of a large class of parameterized systems. The method is based on the automatic calculation
of candidate inductive assertions and checking for their inductiveness, using symbolic model-checking techniques for both
tasks. First, we show how to use model-checking techniques over finite (and small) instances of the parameterized system in
order to derive candidates for invariant assertions. Next, we show that the premises of the standard deductive INV rule for proving invariance properties can be automatically resolved by finite-state (BDD-based) methods with no need for interactive theorem proving. Combining the automatic computation of invariants with the automatic
resolution of the VCs (verification conditions) yields a (necessarily) incomplete but fully automatic sound method for verifying
large classes of parameterized systems. The generated invariants can be transferred to the VC-validation phase without ever
being examined by the user, which explains why we refer to them as “invisible”. The efficacy of the method is demonstrated
by the verification of diverse parameterized systems in a fully automatic and efficient manner.
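The pipeline of the method, project the reachable states of a small instance, generalize to a quantified candidate, then discharge it mechanically, can be sketched explicitly. Everything here is a toy: a hypothetical mutual-exclusion protocol instead of a real parameterized system, and a brute-force check on a larger instance standing in for the BDD-based inductiveness check.

```python
def reachable(n, init, step):
    """Explicit-state search over an n-process instance."""
    seen, frontier = {init(n)}, [init(n)]
    while frontier:
        s = frontier.pop()
        for t in step(n, s):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen

# Toy parameterized protocol: a process may enter 'crit' only when no
# process is in 'crit', and may leave the critical section at any time.
def init(n):
    return ("idle",) * n

def step(n, s):
    succs = []
    for i in range(n):
        if s[i] == "idle" and "crit" not in s:
            succs.append(s[:i] + ("crit",) + s[i + 1:])
        elif s[i] == "crit":
            succs.append(s[:i] + ("idle",) + s[i + 1:])
    return succs

# Project the small instance (n = 3) onto process pairs ...
pairs = {(s[i], s[j])
         for s in reachable(3, init, step)
         for i in range(3) for j in range(3) if i != j}

# ... and generalize to the universally quantified candidate invariant.
def candidate(s):
    return all((s[i], s[j]) in pairs
               for i in range(len(s)) for j in range(len(s)) if i != j)

holds_5 = all(candidate(s) for s in reachable(5, init, step))
```

The candidate is "invisible" in the method's sense: it is computed and checked mechanically, and the user never needs to read the set `pairs` to know that mutual exclusion (no two processes in `crit`) follows from it.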

The paper shows that, by an appropriate choice of a rich assertional language, it is possible to extend the utility of symbolic model checking beyond the realm of BDD-represented finite-state systems into the domain of infinite-state systems, leading to a powerful technique for uniform verification of unbounded (parameterized) process networks.
The main contributions of the paper are a formulation of a general framework for symbolic model checking of infinite-state systems, a demonstration that many individual examples of uniformly verified parameterized designs that appear in the literature are special cases of our general approach, verifying the correctness of the Futurebus+ design for all single-bus configurations, extending the technique to tree architectures, and establishing that the presented method is a precise dual to the top-down invariant generation method used in deductive verification.
