Chapter

Understanding Formal Methods

Authors: Jean-François Monin and Michael G. Hinchey

Abstract

A new problem is always tackled, at the outset, via both intuition and empirical methods. The design of software systems is no exception. The first step is to determine the objects to be realized; we then have to describe them. Most of the time, one employs the usual means of expression to this effect: our mother tongue and explanatory diagrams. Subsequent steps are devoted to code writing, generally using a high-level language. An intuitive understanding of the language constructs is then key. Of course, people involved in this process employ some reasoning: “in that case, such an event happens, then… etc.”


... The earliest signs of AD are cognitive impairments including deficits in memory formation and storage that are caused by disruption of the processes of synaptic plasticity, which are complex in their own right. One avenue toward understanding these complex processes is to represent, simulate, and analyze them using "process algebra," a computational technique that falls under the umbrella of formal methods in computer science (Monin and Hinchey, 2003). The purpose of this study is to develop an initial computational framework for understanding how various chemical compounds might aid memory in AD by using formal methods to simulate the deficits in synaptic plasticity that accompany the disorder. ...
... In computer science, formal methods are computational methods for specifying and analyzing systems (Monin and Hinchey, 2003). In practice, formal methods are implemented by creating a model of a system that takes the form of a computer program written in a declarative programming language. ...
... Maude and MATLAB are fundamentally different. In MATLAB, which is an imperative programming language, statements are commands that execute in strict order, but in Maude, which is a declarative language, statements are descriptions of facts that execute in arbitrary order as they apply (Monin and Hinchey, 2003; Huth and Ryan, 2004; Clavel et al., 2007). To make their processing modes somewhat more similar, the interactions in the MATLAB program were expressed as conditionals within a while loop. ...
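As a rough illustration of the imperative modality described above (a minimal sketch, not the authors' code; the state variables, rule, and thresholds are hypothetical), each interaction becomes a conditional inside a while loop that is re-applied until the model state stops changing:

```python
# Minimal sketch of the imperative style described above: each
# interaction is a conditional inside a while loop, re-applied
# until the state reaches a fixed point. All names and rules
# here are hypothetical, not taken from the cited model.

state = {"a_beta": 3, "plasticity": 5}  # coarse-grained levels

def apply_rules(s):
    """One pass over the interaction rules, in a fixed order."""
    new = dict(s)
    # Hypothetical rule: elevated amyloid-beta lowers plasticity.
    if s["a_beta"] >= 2 and s["plasticity"] > 0:
        new["plasticity"] = s["plasticity"] - 1
    return new

# Imperative modality: iterate the conditionals until quiescence.
while (nxt := apply_rules(state)) != state:
    state = nxt

print(state)  # {'a_beta': 3, 'plasticity': 0}
```

In a declarative setting such as Maude, the same rule would instead be written as a rewrite rule that fires whenever its left-hand side matches, with no loop imposing an order of application.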
Article
Full-text available
The leading hypothesis on Alzheimer Disease (AD) is that it is caused by buildup of the peptide amyloid-β (Aβ), which initially causes dysregulation of synaptic plasticity and eventually causes destruction of synapses and neurons. Pharmacological efforts to limit Aβ buildup have proven ineffective, and this raises the twin challenges of understanding the adverse effects of Aβ on synapses and of suggesting pharmacological means to prevent them. The purpose of this paper is to initiate a computational approach to understanding the dysregulation by Aβ of synaptic plasticity and to offer suggestions whereby combinations of various chemical compounds could be arrayed against it. This data-driven approach confronts the complexity of synaptic plasticity by representing findings from the literature in a coarse-grained manner, and focuses on understanding the aggregate behavior of many molecular interactions. The same set of interactions is modeled by two different computer programs, each written using a different programming modality: one imperative, the other declarative. Both programs compute the same results over an extensive test battery, providing an essential crosscheck. Then the imperative program is used for the computationally intensive purpose of determining the effects on the model of every combination of ten different compounds, while the declarative program is used to analyze model behavior using temporal logic. Together these two model implementations offer new insights into the mechanisms by which Aβ dysregulates synaptic plasticity and suggest many drug combinations that may potentially reduce or prevent it.
... The rework process takes too much time, money, and effort, which ultimately delays deployment of the system. Therefore, there is a pressing need to use formal models [3] for the testing criteria of Safety Critical Systems [4] to ensure test cases' completeness and correctness. Formal methods are equipped with rich mathematical axioms and tool support. ...
... Formal methods: Formal methods [3] are methods that use mathematical techniques as their foundation and are used to develop software systems. They can be applied at any phase of the software development process, but are most highly recommended in the early phases. ...
Article
Full-text available
This paper presents the formal aspects of testing criteria for Safety Critical Systems. A brief review of testing strategies, i.e. white box and black box, is given along with their various criteria. Z notation, a formal specification language, is used to serve the purpose of formalization. Initially, schemas are formed for Statement Coverage (SC), Decision Coverage (DC), Path Coverage (PC), Equivalence Partition Class (EPC), Boundary Value Analysis (BV), and Cause & Effect (C&E). The completeness and correctness of the test schemas are established by verifying them with Z/EVES, a theorem prover for Z specifications.
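For concreteness, a statement-coverage criterion of the kind formalized in the paper could be sketched as a Z schema along the following lines (a hypothetical sketch using fuzz-style Z LaTeX markup, not the paper's actual schema; all names are invented):

```latex
% Hypothetical Z schema: a test suite satisfies Statement
% Coverage when the statements it executes exhaust the
% statements of the program under test.
\begin{schema}{StatementCoverage}
  program  : \power STATEMENT \\
  executed : \power STATEMENT
\where
  executed \subseteq program \\
  executed = program
\end{schema}
```

A theorem prover such as Z/EVES can then be asked to discharge consistency and domain checks over schemas of this kind.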
... Mathematical logic provides a natural framework for precisely constructing and expressing various concepts in computing and lends itself well to formalization [123]. Formal verification of safety-critical systems requires tools and techniques for mechanically checking proofs of correctness guarantees. ...
Thesis
Full-text available
As autonomous vehicular technologies such as self-driving cars and uncrewed aircraft systems (UAS) evolve to become more accessible and cost-efficient, autonomous multi-agent systems composed of such entities will become ubiquitous in the near future. The close operational proximity between such autonomous agents will warrant the need for multi-agent coordination to ensure safe operations. In this thesis, we adopt a formal methods-based approach to investigate multi-agent coordination for safety-critical autonomous multi-agent systems. We explore algorithms that can be used for decentralized multi-agent coordination among autonomous mobile agents communicating over asynchronous vehicle-to-vehicle (V2V) networks that can be prone to agent failures. In particular, we study two types of distributed algorithms that are useful for decentralized coordination: consensus, which can be used by autonomous agents to agree on a set of compatible operations; and knowledge propagation, which can be used to ensure sufficient situational awareness in autonomous multi-agent systems. We develop the first machine-checked proof of eventual progress for the Synod consensus algorithm that does not assume a unique leader. To consider agent failures while reasoning about progress, we introduce a novel Failure-Aware Actor Model (FAM). We then propose a formally verified Two-Phase Acknowledge Protocol (TAP) for knowledge propagation that can establish a safe state of knowledge suitable for autonomous vehicular operations. The non-deterministic and dynamic operating conditions of distributed algorithms deployed over asynchronous V2V networks make it challenging to provide appropriate formal guarantees for the algorithms. To address this, we introduce probabilistic correctness properties that can be developed by stochastically modeling the systems. We present a formal proof library that can be used for reasoning about probabilistic properties of distributed algorithms deployed over V2V networks. We also propose a Dynamic Data-Driven Applications Systems (DDDAS)-based approach for the runtime verification of distributed algorithms. This approach uses parameterized proofs, which can be instantiated at runtime, and progress envelopes, which can divide the operational state space into distinct regions where a proof of progress may or may not hold. To motivate our verification of decentralized coordination, we introduce an autonomous air traffic management (ATM) technique for multi-aircraft systems called Decentralized Admission Control (DAC).
... Model analysis exploits powerful computational methods based on declarative programming that facilitate enumeration of the entire model state space, which for the food-intake control model is the set of all allowed configurations (or network-wide patterns) of the responses of the neural subtypes represented in the model. We then apply tools known as state-space search and temporal-logic model-checking (Monin and Hinchey, 2003; Huth and Ryan, 2004), both to search for response configurations that satisfy certain criteria, and to determine temporal relationships between specific response patterns. This computational analysis is the first of its kind in the neuroscience of food-intake control. ...
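As a generic illustration of the temporal-logic queries that such model checking supports (the atomic propositions below are invented for illustration and are not taken from the food-intake model), one can ask, for example, whether one response pattern is always eventually followed by another:

```latex
% CTL: on all paths from every reachable state, high AgRP
% activity is eventually followed by feeding.
\mathit{AG}\,(\mathit{AgRP\_high} \rightarrow \mathit{AF}\,\mathit{feeding})

% LTL: feeding never co-occurs with high POMC activity.
\mathit{G}\,\neg(\mathit{feeding} \land \mathit{POMC\_high})
```

A model checker either confirms such a property over the entire state space or returns a counterexample trace, which is how anomalous response configurations can be located and characterized.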
Article
Full-text available
Food-intake control is mediated by a heterogeneous network of different neural subtypes, distributed over various hypothalamic nuclei and other brain structures, in which each subtype can release more than one neurotransmitter or neurohormone. The complexity of the interactions of these subtypes poses a challenge to understanding their specific contributions to food-intake control, and apparent consistencies in the dataset can be contradicted by new findings. For example, the growing consensus that arcuate nucleus neurons expressing Agouti-related peptide (AgRP neurons) promote feeding, while those expressing pro-opiomelanocortin (POMC neurons) suppress feeding, is contradicted by findings that low AgRP neuron activity and high POMC neuron activity can be associated with high levels of food intake. Similarly, the growing consensus that GABAergic neurons in the lateral hypothalamus suppress feeding is contradicted by findings suggesting the opposite. Yet the complexity of the food-intake control network admits many different network behaviors. It is possible that anomalous associations between the responses of certain neural subtypes and feeding are actually consistent with known interactions, but their effect on feeding depends on the responses of the other neural subtypes in the network. We explored this possibility through computational analysis. We made a computer model of the interactions between the hypothalamic and other neural subtypes known to be involved in food-intake control, and optimized its parameters so that model behavior matched observed behavior over an extensive test battery. We then used specialized computational techniques to search the entire model state space, where each state represents a different configuration of the responses of the units (model neural subtypes) in the network. We found that the anomalous associations between the responses of certain hypothalamic neural subtypes and feeding are actually consistent with the known structure of the food-intake control network, and we could specify the ways in which the anomalous configurations differed from the expected ones. By analyzing the temporal relationships between different states we identified the conditions under which the anomalous associations can occur, and these stand as model predictions. Computer code and data sets used in this analysis are available at: https://github.com/SigmoidNetmaker/food-intake-neural-network
... One feasible solution to overcome these hurdles is to apply formal methods to ABM development, given the advantages observed in discrete-event modeling domains [13]. Formal methods, or formalisms, have been used to unambiguously specify system behaviors, and this trait has enabled their application to requirement analysis, development, and verification of systems [14]. Moreover, this unambiguity allows formal methods to be considered as methods supporting model reusability. ...
Article
Full-text available
As agent-based models (ABMs) are applied to various domains, the efficiency of model development has become an important issue in its applications. The current practice is that many models are developed from scratch, while they could have been built by reusing existing models. Moreover, when models need reconfiguration, they often need to be rebuilt significantly. These problems reduce development efficiency and ultimately damage the efficacy of ABM. This paper partially resolves the challenges of model reusability through a systems engineering approach. Specifically, we propose a formalism-based ABM development and demonstrate its potential to promote model reuse. Our formalism, named large-scale, dynamic, extensible, and flexible (LDEF) formalism, encourages the building of a larger model by the composition of modularly developed components. Also, LDEF is tailored to ABM contexts to represent agents' action procedures and support dynamic changes in their interactions. This paper shows that LDEF improves model reusability in ABM development through practical examples and theoretical discussions.
... Hoare logic (Hoare, 1969) is an approach belonging to the family of formal methods (Monin, 2002), which aim to prove with mathematical methods that a program is correct with respect to its specification, that is, that it behaves as expected in all the situations in which the program is meant to operate. This approach is superior to testing in the sense that it can provide much stronger guarantees. ...
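To make the idea concrete (a standard textbook example, not drawn from the cited thesis): a Hoare triple {P} C {Q} asserts that if the precondition P holds before the command C runs, then the postcondition Q holds afterwards. For instance:

```latex
% A valid Hoare triple: if x >= 0 holds before the assignment,
% then y > 0 holds after it.
\{\, x \ge 0 \,\}\;\; y := x + 1 \;\;\{\, y > 0 \,\}

% It follows from the assignment axiom, which substitutes the
% assigned expression for the variable in the postcondition:
\{\, Q[e/x] \,\}\;\; x := e \;\;\{\, Q \,\}
```

Testing can only check such a property on the inputs actually exercised, whereas a proof in Hoare logic covers every input satisfying the precondition.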
Thesis
Full-text available
In this thesis, we first present a theoretical system that enables proofs of higher-order programs with side effects. This system consists of three major parts. First, a programming language with a traditional type, effect, and region system, with effect polymorphism. Second, a higher-order specification language that also contains a means to describe modifications of the state. Finally, a weakest precondition calculus that, starting from an annotated program, allows one to obtain proof obligations, that is, formulas whose validity implies the correctness of the program with respect to its specification. We also present two restrictions of the initial system. The first disallows region aliasing, obtaining better modularity of the calculus; the second restricts the system to singleton regions, containing only a single reference. Both restrictions enable important simplifications of the proof obligations, but restrict the set of accepted programs. We also present an implementation of this theoretical system, called Who. This tool relies in particular on translations of the proof obligations to higher-order logic and first-order logic; these translations are detailed in this thesis. The translation to higher-order logic allows using the Coq proof assistant to validate proof obligations, and the translation to first-order logic allows using automated theorem provers. Higher-order programs, found in the standard library of OCaml and elsewhere, have been proved correct using the tool Who, as well as a continuation-based implementation of the Koda-Ruskey algorithm.
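For background on the weakest-precondition calculus mentioned above, these are the classical Dijkstra-style rules for a simple imperative core (the thesis extends such a calculus with effects, regions, and higher-order features):

```latex
% Classical weakest-precondition rules for a simple
% imperative language (Dijkstra), without effects or regions.
\begin{align*}
\mathit{wp}(x := e,\; Q) &= Q[e/x] \\
\mathit{wp}(S_1 ; S_2,\; Q) &= \mathit{wp}(S_1,\, \mathit{wp}(S_2,\, Q)) \\
\mathit{wp}(\mathbf{if}\ b\ \mathbf{then}\ S_1\ \mathbf{else}\ S_2,\; Q)
  &= (b \Rightarrow \mathit{wp}(S_1, Q)) \land (\neg b \Rightarrow \mathit{wp}(S_2, Q))
\end{align*}
```

A program S annotated with specification {P} S {Q} is correct when P implies wp(S, Q); this implication is exactly the proof obligation handed to Coq or to the automated provers.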
... Notably, both programmers and users of some such system would like to know whether its design specifications are actually fulfilled in normal operational environments. Formal methods (Monin and Hinchey 2003) were developed in theoretical computer science to address similar predictive problems on the basis of a formal representation of the hardware or software system under examination (usually referred to as system specification), and a formal representation of the behavioural property one is interested in (usually referred to as property specification). Formal methods prominently include theorem proving and model checking (Clarke and Wing 1996). ...
Article
Full-text available
Model checking, a prominent formal method used to predict and explain the behaviour of software and hardware systems, is examined on the basis of reflective work in the philosophy of science concerning the ontology of scientific theories and model-based reasoning. The empirical theories of computational systems that model checking techniques enable one to build are identified, in the light of the semantic conception of scientific theories, with families of models that are interconnected by simulation relations. And the mappings between these scientific theories and computational systems in their scope are analyzed in terms of suitable specializations of the notions of model of experiment and model of data. Furthermore, the extensively mechanized character of model-based reasoning in model checking is highlighted by a comparison with proof procedures adopted by other formal methods in computer science. Finally, potential epistemic benefits flowing from the application of model checking in other areas of scientific inquiry are emphasized in the context of computer simulation studies of biological information processing.
Article
Full-text available
The application of formal methods to the examination of reactive programs simulating cell systems’ behaviours in current computational biology is taken to shed new light on the simulative approaches in Artificial Intelligence and Artificial Life. First, it is underlined how reactive programs simulating many cell systems’ behaviours are more profitably examined by means of executable models of the simulating program’s executions. Those models turn out to be representations of both the simulating reactive program and of the simulated cell system. Secondly, it is highlighted how discovery processes of significant regular behaviours of the simulated system are carried out performing algorithmic verifications on the formal model representing the biological phenomena of interest. Finally, a distinctive methodological trait of current computational biology is recognized in that the advanced model-based hypotheses are not corroborated or falsified by testing the simulative program, which is not even encoded, but rather by performing wet experiments aiming at the observation of behaviours corresponding to paths in the model either satisfying or violating the hypotheses under evaluation.
Article
Full-text available
This commentary on John Symons’ and Jack Horner’s paper, besides sharing its main argument, challenges the authors’ statement that there is no effective method to evaluate software intensive systems as a distinguishing feature of software intensive science. It is underlined here how analogous methodological limitations characterise the evaluations of empirical systems in non-software intensive sciences. The authors’ claim that formal methods establish the correctness of computational models rather than of the represented program is here compared with the empirical adequacy problem typifying the model-based reasoning approach in physics. And the remark that testing all the paths of a software intensive system is unfeasible is related to the enumerative induction problem in the justification of empirical law-like hypotheses in non-software intensive sciences.
Conference Paper
This paper describes a formal methods approach to process engineering. The approach involves statechart based formal process modeling as well as the use of embedded assertion statecharts to ensure the modeled process adheres to stated requirements. This approach can help the process engineer develop and maintain a process. The formal nature of our approach can also help the process engineer to reason about the process. We apply this approach to the Unified Cross Domain Management Office's cross domain solution workflow process. This is a key process in the development, implementation, and certification and accreditation of cross domain solutions.
Conference Paper
Formal methods are commonly used to model the complex behavior of a system without ambiguities and specification errors. This paper presents a formal model of multi-agent requirements using finite-state automata (FSA). We describe the application of formal methods to model multi-agent systems using the example of the English Auction Protocol (EAP). It is shown that the proposed approach improves behavior handling and semantic characterization. The use of a formal specification language such as Z ensures correctness, reliability, and consistency at the analysis and design stage, because catching errors and inconsistencies at the initial stages greatly reduces the time and cost spent in later stages of the system development lifecycle. We also show that our formal model of the EAP incorporates security properties such as anonymity, traceability, and unforgeability.
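As a generic sketch of what modeling protocol requirements with a finite-state automaton looks like (the states, events, and transitions below are invented for illustration and are not the paper's EAP model):

```python
# Minimal finite-state automaton for an auction-like protocol.
# States, events, and transitions are hypothetical, illustrating
# the FSA modeling style rather than the paper's actual model.

TRANSITIONS = {
    ("open", "bid"): "bidding",
    ("bidding", "bid"): "bidding",       # further bids keep the auction live
    ("bidding", "deadline"): "closed",
    ("open", "deadline"): "failed",      # no bids were received
}
ACCEPTING = {"closed", "failed"}

def run(events, state="open"):
    """Drive the automaton; any event with no transition is a violation."""
    for event in events:
        key = (state, event)
        if key not in TRANSITIONS:
            raise ValueError(f"protocol violation: {event!r} in state {state!r}")
        state = TRANSITIONS[key]
    return state, state in ACCEPTING

print(run(["bid", "bid", "deadline"]))  # -> ('closed', True)
```

Requirements such as "no bid is accepted after the deadline" then become checks that the corresponding event sequences are rejected by the automaton.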