Formal Methods - Science topic

In computer science, specifically software engineering, formal methods are mathematically based techniques for the specification, development, and verification of software and hardware systems.
Questions related to Formal Methods
  • asked a question related to Formal Methods
Question
2 answers
Multi-analysis modeling of a complex system is the act of building a family of models that covers a large spectrum of analysis methods (such as simulation, formal methods, enactment, etc.) which can be performed to derive various properties of this system. The High-Level Language for Systems Specification (HiLLS) has recently been introduced as a graphical language for discrete event simulation, with potential for other types of analysis, such as enactment for rapid system prototyping. HiLLS defines an automata language that also opens the way to formal verification.
Relevant answer
Answer
The first formal model, or one could say high-level language, for system specification is the transition system, introduced in the paper "Formal Verification of Parallel Programs" by Robert M. Keller (Princeton University, 1976). The model was proposed to formally specify parallel programs. Many later formalisms, such as automata theory, timed automata, hybrid automata, and the input languages of model-checking tools, are based on the original labeled transition system (LTS). Newer languages such as HiLLS, or the Business Process Model and Notation from the Object Management Group, and many others essentially evolved from the early LTS.
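To make the LTS idea above concrete, here is a minimal sketch in Python; the state names, labels, and the toy lock example are invented for illustration, not taken from Keller's paper:

```python
# Minimal labeled transition system (LTS) sketch.
class LTS:
    def __init__(self, states, init, transitions):
        self.states = set(states)
        self.init = init
        self.transitions = set(transitions)  # set of (source, label, target)

    def successors(self, state):
        """States reachable from `state` in one labeled step."""
        return {t for (s, _, t) in self.transitions if s == state}

    def reachable(self):
        """All states reachable from the initial state (simple search)."""
        seen, frontier = {self.init}, [self.init]
        while frontier:
            s = frontier.pop()
            for t in self.successors(s):
                if t not in seen:
                    seen.add(t)
                    frontier.append(t)
        return seen

# Toy example: a process acquiring and releasing a shared lock.
lts = LTS(
    states={"idle", "locked", "done"},
    init="idle",
    transitions={("idle", "acquire", "locked"),
                 ("locked", "release", "idle"),
                 ("locked", "finish", "done")},
)
print(sorted(lts.reachable()))  # all three states are reachable
```

Automata, timed automata, and model-checker input languages all elaborate on exactly this structure: a set of states, a transition relation, and labels.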
  • asked a question related to Formal Methods
Question
8 answers
I am currently working on a systematic literature review of model checking. Please help in this case.
Relevant answer
Answer
First, clearly define model checking and its related concepts, since model checking involves a whole suite of notions: the type of model (e.g., automata or Petri nets) and the verification logic languages of a model checker. Then list some model checkers and identify their different parts: model, engine, verification language, and set of verification properties. Study each model checker and each of its parts in detail. See where each has an edge over the others and what types of applications each can model, and compare their performance and applicability. Case studies are helpful for understanding the tools in practice.
  • asked a question related to Formal Methods
Question
3 answers
As is well known, falsification of CPS has already made some progress. But there is a problem: how to select input traces efficiently in order to find a bad output trace. This divides into two parts:
1. What is the input space, and how can it be described?
2. Which optimization algorithms can help select input traces for the falsification of CPS?
Can you suggest some ideas on these two parts, and which direction do you think comes next?
Relevant answer
Answer
Zeashan H. Khan, thank you for your reply. I have already read those papers. What do you think of the role of machine learning or reinforcement learning? Are they effective for counterexample search?
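On part 2 of the question, a common baseline is optimization-guided falsification: minimize a quantitative robustness measure of the specification over a parameterized input space until it goes negative. A minimal sketch, with a toy one-dimensional plant and specification invented for illustration, and plain random search standing in for CMA-ES, simulated annealing, or Bayesian optimization:

```python
import random

# Toy plant: x' = -x + u, Euler-discretized; spec: |x(t)| <= 2 at all times.
def simulate(inputs, x0=0.0, dt=0.1):
    xs, x = [x0], x0
    for u in inputs:
        x = x + dt * (-x + u)
        xs.append(x)
    return xs

def robustness(trace, bound=2.0):
    # Positive iff the spec |x| <= bound holds; value is the minimum slack.
    return min(bound - abs(x) for x in trace)

def falsify(trials=2000, horizon=50, u_max=5.0, seed=0):
    """Random-search falsification: sample input traces, keep the one
    with the smallest robustness, stop once robustness is negative."""
    rng = random.Random(seed)
    best = (float("inf"), None)
    for _ in range(trials):
        inputs = [rng.uniform(-u_max, u_max) for _ in range(horizon)]
        rob = robustness(simulate(inputs))
        if rob < best[0]:
            best = (rob, inputs)
        if best[0] < 0:
            break  # a counterexample (violating input trace) was found
    return best

rob, cex = falsify()
```

The input space here is the set of piecewise-constant signals bounded by `u_max` over a finite horizon; smarter optimizers only change how the next candidate is chosen.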
  • asked a question related to Formal Methods
Question
1 answer
Hi,
Is there a simple real-valued time-series dataset on which a vanilla RNN model can be trained? By "very simple" I mean only two to four real-valued inputs per time step and a single real-valued output per time step.
Background: I am doing research at the intersection of machine learning and formal methods. To test a new technique for formally verifying RNNs, we need to start with a quite simple setup.
Thanks!
Relevant answer
Answer
Here's a simple dataset for RNN training; it contains passenger counts at equally spaced time steps.
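If a ready-made dataset doesn't match the stated shape, a synthetic series with exactly two real inputs and one real output per step is easy to generate; the functional form below is entirely made up for illustration, chosen only so that the target depends on the previous step and thus requires recurrent state:

```python
import math
import random

def make_dataset(n_steps=200, seed=0):
    """Synthetic time series: 2 real-valued inputs and 1 real-valued
    output per time step. The output depends on the current and the
    previous inputs, so a model must carry state across steps."""
    rng = random.Random(seed)
    xs, ys = [], []
    prev = (0.0, 0.0)
    for t in range(n_steps):
        x = (math.sin(0.1 * t), math.cos(0.05 * t))
        # Target: weighted sum of current and previous inputs plus noise.
        y = 0.5 * x[0] + 0.3 * x[1] + 0.2 * prev[0] + rng.gauss(0.0, 0.01)
        xs.append(x)
        ys.append(y)
        prev = x
    return xs, ys

xs, ys = make_dataset()
```

Being synthetic, it also gives you ground truth for what the verified RNN should have learned, which can help when debugging the verification technique itself.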
  • asked a question related to Formal Methods
Question
3 answers
Hello all,
I am aware of "formal methods" as it is used in computer science, to make sure our specifications are mathematically and logically sound (before we put them into a specific language). Isn't there a tool or form of notation so we can do that while we construct legitimate hypotheses for psychological research?
It is common knowledge that the construction of natural-language questions (surveys about thinking and behavior) can be very questionable when it comes to construct validity. For example: did I cover everything? Am I even asking the right question? Trial and error without a formal test of our logic doesn't seem very efficient.
Does anyone know of a formal notation or research tool that lets us test natural language questions for how "sound" they are? After all, whether computer code (if/then we can do this) or human language (if/then we can assume that), it's pretty much the same logic. Thanks for any advice!
Relevant answer
Answer
I think you will need to address the use of formal methods within the engineering community before you can apply the methods more broadly. I am familiar with these methods, and I am quite skeptical of their application as you suggest.
However, I do believe there is merit in developing mathematics for the description of social behaviors on an underlying biophysical basis: a step beyond the statistical analysis and game theory applied so far.
  • asked a question related to Formal Methods
Question
3 answers
  1. Artificial intelligence safety is important if we are to build an AI-enabled world, but I feel somewhat confused about how AI can be verified.
  2. What are the differences between model checking and theorem proving in traditional formal methods, on the one hand, and the methods for verified AI on the other?
  3. For a traditional cyber-physical system (CPS), which new problems should we consider when the CPS carries a neural network?
Relevant answer
Answer
Hi,
Verifying artificial intelligence is important for us; model checking and theorem proving are the traditional formal methods.
Regards
  • asked a question related to Formal Methods
Question
3 answers
As we know, machine learning has already made great progress in cyber-physical systems, e.g., self-driving cars, robots, aircraft. It can help improve their efficiency, but due to its uncertainty we cannot use it in key components. Formal methods, however, offer completeness and accuracy.
How can we combine machine learning with formal methods to guarantee safety without sacrificing efficiency?
And can you provide some material on this from practical research?
Relevant answer
Answer
Modeling, simulation, verification, and validation can guarantee such systems. However, we should state exactly what sort of system we are talking about. As I understand it, you want to analyze the performance of a machine learning algorithm for a specific problem in such systems using formal verification. However, a machine learning algorithm can itself be used for formal verification: it starts by analyzing the system as a black box, and can then find relations in the system to predict its performance. By modeling and simulation, you can:
a. analyze a system at scale;
b. predict the behaviour of the system under different configurations and situations.
Last but not least, at some point it is neither possible to analyze a real system under all configurations, nor is the system always available.
Good luck.
  • asked a question related to Formal Methods
Question
6 answers
Given a model consisting of a set of (first-order) formulas, certain dependencies arise between the free variables of these formulas. For instance, given the assumptions { x <= y }, a choice of value for x constrains the possible values of y. This extends to more complex terms and formulas, e.g. f(x) depending on (x < 0) in the model { f(x) = sgn(x) * x² }.
Are there any results on the nature of such dependencies in logic or any other field? Particularly regarding automatic detection of such dependencies?
I have found dependence/independence logic (logics of imperfect information), which extend first-order logic with formulas specifying such relations explicitly, but nothing on the implicit relationships established by regular first-order logic or on automatic analyses. I thought fields such as constraint solving or SMT might have investigated this topic, but have found nothing so far.
Relevant answer
Answer
Thanks everyone for your answers.
So, a little more background: I am working with some fixed "background" SMT theory T. The dependencies between constants defined in those are indeed rather trivial, but the set of formulas Phi I was talking about corresponds not to the theory itself, but to a problem formulation, e.g. input to an SMT solver. As such, it will indeed contain newly invented variables (or indeed, "constants" does seem to fit better in the SMT context) with non-trivial dependencies.
Furthermore, I am interested not only in the dependencies between such constants, but more complex terms and formulas made up of them as well. The definition easily extends to this case if you take "M(e)" for some (closed?) expression e to mean the semantic value of e in the model M, be it that e is a constant, or a function application, or a quantified formula.
I have been thinking about the details of the definition - so far I had assumed a fixed T-structure A and the "models M" were the variable assignments. As I want to fit the whole into the context of SMT, I am probably going to change that to work on constants and structures, most likely constraining the models in the class C to have the same universe.
But I do think the core idea of the definition is suitable, as I already have some useful results (which should survive the transition from variables to constants).
I will have a look at the book you suggested. Even if I do not find anything on automated analyses, it will at least give me some context and be relevant as related work.
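A purely syntactic first cut, much weaker than the semantic dependence discussed above, is to link constants that co-occur in a formula and take the transitive closure; constants in different clusters are then guaranteed independent, while same-cluster constants may or may not depend on each other. A sketch (formula representation invented: each formula is reduced to its set of free constants):

```python
from itertools import combinations

def dependency_components(formulas):
    """Cluster constants by co-occurrence in formulas, a coarse
    syntactic over-approximation of semantic dependence. Uses a
    small union-find over the constants."""
    parent = {}

    def find(a):
        parent.setdefault(a, a)
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path compression
            a = parent[a]
        return a

    def union(a, b):
        parent[find(a)] = find(b)

    for consts in formulas:
        for c in consts:
            find(c)  # register even isolated constants
        for a, b in combinations(sorted(consts), 2):
            union(a, b)

    clusters = {}
    for c in parent:
        clusters.setdefault(find(c), set()).add(c)
    return set(frozenset(s) for s in clusters.values())

# { x <= y, y <= z, w = 42 }: x, y, z may constrain each other; w cannot.
comps = dependency_components([{"x", "y"}, {"y", "z"}, {"w"}])
```

This is essentially what SMT solvers do when partitioning a problem into independent sub-problems; detecting the finer-grained semantic dependencies within a cluster needs actual reasoning over the theory.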
  • asked a question related to Formal Methods
Question
2 answers
In the area of symbolic control, by which, to be precise, I mean using finite abstractions of dynamical systems together with verification algorithms to construct provably correct controllers satisfying a temporal logic formula, I am interested in ways to tackle environment uncertainties like dynamic obstacles.
In the paper a method is proposed, but it is assumed that the car model is given a priori in the form of a Markov decision process (MDP), while I am interested in research that combines the problem of finding an abstract model of the system with the problem of handling uncertainties in the environment.
Is anybody aware of research that has done so? Or any ideas on how one can combine these two problems? I have done a literature review in the field and have not come up with an idea so far.
Thanks in advance for your help.
Relevant answer
Answer
Abstraction based controller synthesis solves control problems in three steps:
1. Computing an abstraction of the plant and an abstract version of the specification, to obtain an auxiliary control problem (a pair of abstraction and abstract specification).
2. Solving the abstract control problem, to obtain an abstract controller.
3. Refining the abstract controller to obtain a controller that solves the original control problem.
For details, please see the work
[1] Reissig, Weber and Rungger, Feedback Refinement Relations for the Synthesis of Symbolic Controllers, IEEE Trans Autom Control 2017,
That work assumes a discrete-time control problem and allows for static uncertainties of the environment by modelling the plant as a difference inclusion,
x(t+1) \in F(x(t), u(t)), t \in \mathbb{Z}_+.
If the uncertainties are dynamic, their dynamics must be made part of the plant. Feedback refinement relations have, to the best of my knowledge, not yet been extended to that case.
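For intuition, the abstraction step can be caricatured in one dimension: grid the state space, over-approximate the image interval of each cell under each input (including the disturbance bound), and add an abstract transition to every cell the image touches. The dynamics, grid, and parameters below are invented for illustration; real tools use validated over-approximations:

```python
def abstract_transitions(n_cells=10, lo=0.0, hi=1.0,
                         inputs=(-0.2, 0.0, 0.2), w=0.02):
    """Finite abstraction of x+ = x + u + d with |d| <= w on [lo, hi).
    Cells are uniform intervals; a transition (i, u, j) is added whenever
    the image interval of cell i under input u overlaps cell j."""
    width = (hi - lo) / n_cells

    def cell(k):
        return (lo + k * width, lo + (k + 1) * width)

    trans = set()
    for i in range(n_cells):
        a, b = cell(i)
        for u in inputs:
            # Interval image of cell i under x+ = x + u + d, |d| <= w.
            img_lo, img_hi = a + u - w, b + u + w
            for j in range(n_cells):
                c, d = cell(j)
                if img_lo < d and img_hi > c:  # interval overlap
                    trans.add((i, u, j))
    return trans

trans = abstract_transitions()
```

The resulting finite transition system is what game-solving or model-checking algorithms then operate on; the disturbance bound `w` is where static environment uncertainty enters, which is why dynamic uncertainty has to be folded into the plant state instead.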
  • asked a question related to Formal Methods
Question
2 answers
I have introduced a new model for information diffusion based on node behaviors in social networks. I simulated the model and found interesting results. I want to evaluate it with a formal method and have found the Interactive Markov Chain. Can I use it to evaluate my model?
Relevant answer
Answer
I found this paper:
"A Survey on Information Diffusion in Online Social Networks: Models and Methods"
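Whether an interactive Markov chain fits depends on whether the node behaviors mix stochastic timing with nondeterministic interaction. As a sanity baseline before any formal analysis, the classic independent cascade diffusion model is only a few lines; the graph and probabilities below are invented for illustration:

```python
import random

def independent_cascade(edges, seeds, p=0.3, seed=0):
    """One run of the independent cascade model: each newly activated
    node gets a single chance to activate each neighbor, succeeding
    independently with probability p."""
    rng = random.Random(seed)
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return active

edges = [(0, 1), (1, 2), (2, 3), (0, 3)]
spread = independent_cascade(edges, seeds=[0], p=1.0)
```

Comparing your behavior-based model's spread against such a baseline, before moving to a formal IMC analysis, helps separate modeling errors from analysis errors.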
  • asked a question related to Formal Methods
Question
8 answers
The Baier and Katoen textbook references this paper
  • E. M. Clarke and I. A. Draghicescu. Expressibility results for linear-time and branching-time logics, pages 428–437. Springer Berlin Heidelberg, Berlin, Heidelberg, 1989.
to say that, given a CTL formula ϕ, if there exists an equivalent LTL one, it can be obtained by dropping all branch quantifiers (i.e. A and E) from ϕ.
The equivalence definition (from Baier & Katoen): CTL formula Φ and LTL formula φ (both over AP) are equivalent, denoted Φ ≡ φ, if for any transition system TS over AP:
TS |= Φ if and only if TS |= φ.
(Satisfaction |= is CTL or LTL respectively.)
Is there a syntactic criterion that provides a guarantee that if a CTL formula passes the test, then an equivalent LTL formula does exist?
Please note: Just dropping all branch quantifiers is not enough. For an example, consider 'AF AG p', where p is an atomic predicate. The LTL formula 'F G p' obtained by dropping the branch quantifiers is NOT equivalent to 'AF AG p', since it is not expressible in CTL. The question is whether there is a way (sufficient, but not necessary is Ok) of looking at a CTL formula and saying that it does have an equivalent LTL one?
I am emphasizing the need for a syntactic criterion, as opposed to the semantic one: drop the branch quantifiers and check the equivalence with the resulting formula. Something along the lines of: if, after pushing negations down to atomic predicates, all branch quantifiers are universal (A) and <some additional requirement>, then the formula has an equivalent LTL one (which, necessarily, can be obtained by dropping the branch quantifiers).
An additional requirement (or a totally different criterion) is necessary -- see the `AF AG p`.
Same question on CS Theory Stack Exchange (see the link)
Relevant answer
Answer
There is some work on the intersection of CTL and LTL:
  • "The Common Fragment of CTL and LTL", Monika Maidl
  • "The Common Fragment of ACTL and LTL", Mikołaj Bojańczyk
  • asked a question related to Formal Methods
Question
4 answers
While experimenting with nuXmv and NuSMV model checkers, I observed that input variables contribute no state to the entire state-space of any given model.
The user manuals of these tools clearly describe syntactic differences between these kinds of variables. For example, the "IVAR" keyword introduces an input-variables paragraph, while "VAR" introduces state variables.
Is there any science behind this behavior of input variables? In particular, how do these model checkers handle input variables as opposed to state variables? Any reference on this observation would be appreciated.
Relevant answer
Answer
Hi Yasir,
Here are my concerns:
MODULE main
-- this is an IVAR paragraph
  IVAR
   v1 : 0..20;
   v2 : 0..20;
-- this is a VAR paragraph
  VAR
    v3 : 0..100;
ASSIGN
  init(v3) := 0;
  next(v3) := case
    v2 + v1 = 0 : 10;
    TRUE : v2 + v1;
  esac;
LTLSPEC !F(v3 = 0)
But by simply changing this to VAR, we get x = 18081, y = 44541:
MODULE main
-- this is a VAR paragraph
  VAR
   v1 : 0..20;
   v2 : 0..20;
-- this is a VAR paragraph
  VAR
    v3 : 0..100;
ASSIGN
  init(v3) := 0;
  next(v3) := case
    v2 + v1 = 0 : 10;
    TRUE : v2 + v1;
  esac;
LTLSPEC !F(v3 = 0)
The difference is reflected in the values of y. So my question is: what makes IVAR behave differently from VAR?
Try to run my code and compare the complexities; you should then be able to see the difference. I would have modified your code, but the restrictions on IVAR would not let it compile.
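The y value in the answer is consistent with how NuSMV counts states: IVAR variables label transitions rather than states, so only v3 contributes to the state space, whereas declared as VAR, v1 and v2 multiply in. A quick check of the arithmetic, assuming y above denotes the total state count:

```python
# Domain sizes from the SMV model above.
v1 = v2 = 21   # 0..20 has 21 values
v3 = 101       # 0..100 has 101 values

states_with_ivar = v3            # input variables add no state bits
states_with_var = v3 * v1 * v2   # all three become state variables

print(states_with_ivar, states_with_var)
```

This also explains the compilation restrictions: since an IVAR has no state, it cannot appear in `init()` or on the left of `next()`, only in the expressions that label transitions.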
  • asked a question related to Formal Methods
Question
45 answers
Do the formal languages of logic share so many properties with natural languages that it would be nonsense to separate them in more advanced investigations or, on the contrary, are formal languages a sort of ‘crystalline form’ of natural languages so that any further logical investigation into their structure is useless? On the other hand, is it true that humans think in natural languages or rather in a kind of internal ‘language’ (code)? In either of these cases, is it possible to model the processing of natural language information using formal languages or is such modelling useless and we should instead wait until the plausible internal ‘language’ (code) is confirmed and its nature revealed?
The above questions concern therefore the following possibly triangular relationship: (1) formal (symbolic) language vs. natural language, (2) natural language vs. internal ‘language’ (code) and (3) internal ‘language’ (code) vs. formal (symbolic) language. There are different opinions regarding these questions. Let me quote three of them: (1) for some linguists, for whom “language is thought”, there should probably be no room for the hypothesis of two different languages such as the internal ‘language’ (code) and the natural language, (2) for some logicians, natural languages are, in fact, “as formal languages”, (3) for some neurologists, there should exist a “code” in the human brain but we do not yet know what its nature is.
Relevant answer
Answer
To Velina (if I may): Imperatives are used to coordinate actions. The "generation of meaning" is a hypothetical process that not every theory must presuppose.
To Andre (if I may): Of course the receiver's role must be taken into account. The reason why this role has been omitted is the fact that it belongs to the "effect-part" in the basic formula of logical pragmatics: within the communication type C, normally if the sender S emits message P, then the effect E occurs on the side of receiver R. In short, if C, then [S:"P"]ER, where the formula [Cause]Effect is to be understood as a formula of dynamic logic. What is the scope of logical research in this context? Logic usually studies the syntax and semantics of a language in which a message P is formulated. Logical pragmatics studies language in use.
Regarding the claim "Obviously, in order to create artificially such a powerful (having the expressivity of natural language) device we will need to take many cognitive and psychological aspects into consideration": it would be useful to distinguish between the further development of the Leibnizian concept-script and the extension of logical research to pragmatics. In the latter case, not only psychological states must be taken into account, but also the normative or social dimension. Significant contributions have been made to the development of logical pragmatics: illocutionary logic by J. Searle and D. Vanderveken, the dynamic logic of J. van Benthem, and the normative pragmatics of R. Brandom.
  • asked a question related to Formal Methods
Question
1 answer
I am conducting some experiments with a focus on the number of BDD nodes required for the analysis of a given property on various NuSMV or nuXmv models.
It would be appreciated if anyone could provide a guide on the procedure or recommend resources to assist me in conducting the experiments.
Thanks.
  • asked a question related to Formal Methods
Question
5 answers
I need to define some scenarios for smart spaces. For example: 1. lights are turned on when the user enters the environment; 2. the telephone starts to play messages automatically.
Is there any formal template, method or language for scenario definition in this context? 
Relevant answer
Answer
Thank you, everyone. I found a useful paper describing a Scenario Markup Language for driving-behavior studies. It has a good literature review of other scenario-authoring languages for virtual environments. I thought it would be helpful for anyone with the same question.
Gajananan, Kugamoorthy, et al. "An experimental space for conducting controlled driving behavior studies based on a multiuser networked 3D virtual environment and the Scenario Markup Language." Human-Machine Systems, IEEE Transactions on 43.4 (2013): 345-358.
  • asked a question related to Formal Methods
Question
3 answers
I did a search of LOTOS white papers; many contain abstract theorems, but I think there is a lack of papers giving examples of going from requirements to a LOTOS specification, and of the syntax or implementation in a software system.
That would greatly help my understanding of how to apply the LOTOS language to real problem-solving.
Thank you.
Relevant answer
Answer
Hello,
Following Luigi's answer: on the CADP Publications page, under "Papers about compositional verification", you will find recent papers that contain some references you can use, in particular in:
and:
and then, more specifically, case-study papers in:
  • asked a question related to Formal Methods
Question
4 answers
Is there any event-based formal method that can be mapped from SOAML into that method?
Relevant answer
Answer
You are right; Event-B is well suited for event-based system modeling.
  • asked a question related to Formal Methods
Question
10 answers
Which formal method is the most suitable to analyze SOA security patterns?
Relevant answer
Answer
You could also have a look at COBIT 5 for a complete framework for governance including security. There is work on risk assessment there that you might find useful.
Cheers,
Adrian.
  • asked a question related to Formal Methods
Question
10 answers
Exploring the current state of the art in formal systems architecture development and production by requesting information about the current practices associated with design structure matrices.
Relevant answer
Answer
It would appear to me that others may also have thought about using DSM for forming an autonomous organization. I would like to hear from anyone who may have implemented this concept and learn of their experience. Has anyone published anything on it?
And I would like to know of anyone who may be interested in the Explainer. I have used the Explainer to show how Congress can solve some of the problems that cause them to be in gridlock, but I have not seen any evidence that these problems are being solved. If anyone else has come up with such a method, I would like to hear about it. Maybe someone has come up with one and has also had trouble getting people to look below the surface to see how it works and can be used.
Don
  • asked a question related to Formal Methods
Question
8 answers
Looking for current research efforts in system science and system engineering that use interpretive structural modeling as a formal approach.
Relevant answer
Answer
Very interesting question and discussion.
  • asked a question related to Formal Methods
Question
3 answers
.
Relevant answer
Answer
---- Available VDM Tools ----
SpecBox: syntax checking, document printing, semantic analysis
VDM through Pictures (VtP): editing, visual specification
mural: proof support
VDM Domain Compiler: code generation
  • asked a question related to Formal Methods
Question
12 answers
During his work on software in the railway domain, Dines Bjorner started with a list of subdomains into which the railway domain can be divided. His list can be found here (http://euler.fd.cvut.cz/railwaydomain/?page=faq&subpage=railway-domain). From the perspective of a railway domain expert, is this list complete and up to date with respect to the current state of the railway domain? If not, what can be added or removed to refine it?
Relevant answer
Answer
I think it strongly depends on the viewpoint and, above all, on the objective of the analysis. For instance, from the hardware-software control viewpoint we usually divide the system into subsystems like train supervision, traffic management, route management, and train control. Dividing a complex system into domains is a modelling task, and as such it would greatly benefit from appropriate and more formal approaches, like the ones based on UML/SysML. Otherwise it is difficult, if not impossible, to say which division is correct and which is wrong.
  • asked a question related to Formal Methods
Question
4 answers
Dear colleagues,
I have struggled all day with a proof that appears obvious to me using pen and paper, but that is not obvious to the Isabelle theorem prover.
The goal I am trying to solve is:
((∀ s inp s'. pre_p inp s --> rely_p s s' --> pre_p inp s') -->
(∀ s inp s'. pre_p inp s --> rely_p s s' --> pre_p inp s'))
I would like to prove this without expanding pre_p and rely_p. Evidence that this should be possible is that I can prove:
∀ pre_p rely_p . 
((∀ s inp s'. pre_p inp s --> rely_p s s' --> pre_p inp s') -->
(∀ s inp s'. pre_p inp s --> rely_p s s' --> pre_p inp s'))
which is a more generic theorem. I am using Isabelle 2013-2. 
I know how to prove the theorem by expanding pre_p and rely_p, but that would not suit my needs, because my intention is to reuse theorems already proved as lemmas to prove other occurrences of their statements, instead of repeating the proof steps.
To make the discussion shorter, the problem can be reduced to the question in the title (i.e. (∀ s inp . pre_p inp s) --> (∀ s inp. pre_p inp s))
Thanks for any contribution towards a clarification of this issue! 
Relevant answer
Answer
Dear Stefan,
I see the point you are making; say I try to prove:
((1/0) < x) --> ((1/0) < x)
Formally: "((4::int) div 0 < x) --> ((4::int) div 0 < x)"
I would reach a situation where an operation is undefined. However, Isabelle's logic has no natural expression for partial functions, where things could go wrong. We know that division by zero is undefined, but in Isabelle it is defined to be zero.
I believe that in Isabelle's case the problem is the type-inference mechanism. I am using a quantified variable that has some type assigned to it. If I let Isabelle infer the types, the inference system will infer the most generic types possible; thus the quantified variable on the left of --> will be assigned one type, say T1, and the quantified variable on the right of --> another type, say T2. If I explicitly state that the types must be the same, then the theorem follows from the HOL axiom "A --> A".
Thanks for making me aware of strict vs. non-strict evaluation. I had already heard of lazy evaluation, but was not aware of the other terms for this concept.
Thanks for your answer, 
Diego Dias
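For comparison, once both occurrences are forced to talk about the same predicate over the same types, the reduced statement from the question closes by the identity function. A sketch in Lean 4 rather than Isabelle (the type names S and I are invented; pre_p mirrors the post):

```lean
-- (∀ s inp, pre_p inp s) → (∀ s inp, pre_p inp s) is just `id`,
-- provided both sides share one pre_p at one type.
example {S I : Type} (pre_p : I → S → Prop) :
    (∀ (s : S) (inp : I), pre_p inp s) → (∀ (s : S) (inp : I), pre_p inp s) :=
  fun h => h
```

This mirrors Diego's diagnosis: the difficulty is not logical but arises when the two sides are allowed to receive distinct inferred types.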
  • asked a question related to Formal Methods
Question
9 answers
A robot for functional testing and virtuality for structural testing.
Relevant answer
Answer
Gerhard's point about automated self-test suites run after any change to a system is key. This is indeed good practice, although it is difficult to sustain comprehensively in realistic situations where new features (requirements) are introduced frequently and rapidly (agile), which was Gerhard's other point. Because requirements typically evolve, rather than remain "specified", I would argue that automating emerging requirements needs more consideration. On the other hand, there are situations, like your virtual robot, that may indeed have "hard" requirements, i.e., ones that can be specified up front rather than being allowed to evolve. People's needs continually shift, and that shows up as changing requirements in most of the world's systems. How does this argument fit with automated robot testing in your situation? Stories, features, etc. rule a great deal of the world's software development these days partly because requirements evolve.
  • asked a question related to Formal Methods
Question
4 answers
Many of the current methods for assessing risks depend on probabilities and treat events as randomly distributed occurrences. This does not fit intentional acts, which can be persistent, recurring, focused, etc. I am looking for any new work on methods for assessing events that are not randomly distributed and for aggregating such risks. A closely related topic would be any new work on aggregating very-low-probability or rare events. Does anyone have pointers or suggestions for who to look at and where to look?
Relevant answer
Answer
I’m not familiar with your domain, so I’ll make some general comments on dealing with “rare” events that happen too often.
Perhaps the “rare” events occur too often with respect to a particular proposed probability model. A mixture model, entertaining other distributions or parameterizations, would allow various component models in the mixture to address various modes of reality (non-intentional events, intentional events, …)
If you believe events group in time or location (or some other dimension), you might want to entertain models similar to MRFs used in computer vision. An approach to constructing maximum entropy probability models to replicate observed statistical characteristics is discussed in:
Zhu, Song Chun, Yingnian Wu, and David Mumford. "Filters, random fields and maximum entropy (FRAME): Towards a unified theory for texture modeling." International Journal of Computer Vision 27.2 (1998): 107-126.
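The mixture idea above is easy to make concrete: a background process for random occurrences plus a rare "intentional" component that produces bursts. A sketch with invented weights and rates (nothing here is fitted to real data):

```python
import math

def poisson_pmf(k, lam):
    """Probability of observing k events under a Poisson(lam) process."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

def mixture_pmf(k, w=0.95, lam_background=0.1, lam_intentional=4.0):
    """P(k events) under a two-component mixture: with weight w the
    background (rare, random) process, with weight 1-w an intentional
    burst with a much higher rate."""
    return (w * poisson_pmf(k, lam_background)
            + (1 - w) * poisson_pmf(k, lam_intentional))

# A count of 5 events is essentially impossible for the background
# process alone, but plausible once the intentional component is mixed in.
p_background_only = poisson_pmf(5, 0.1)
p_mixture = mixture_pmf(5)
```

Fitting the weights and rates (e.g., by EM) then lets the data say how much of the observed event stream is attributable to the non-random component.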
  • asked a question related to Formal Methods
Question
3 answers
When a decision maker makes a decision under certainty, it is assumed that the decision maker has complete knowledge about the problem and that all potential outcomes are known. Why then do we need a decision support system?
Relevant answer
Answer
Ola Ola,
Sound decision making is dictated by logical deductions. This means that a decision support system should preferably be a binary decision tree, just as a binary plant-determination table is a good example of a binary decision tree developed to decide which plant species one is dealing with.
Actually, when an outcome is "potential", it is by definition attributed a level of uncertainty. I have put this forward in an earlier thread: when you cannot boil a series of questions down to yes or no answers in a decision tree (binary decisions), the question or problem is ill-defined and hence only potentially apt for an absolutely certain decision.
Typically, if you cannot build a decision tree, for example to determine plant species, you will never be able to identify new species in the plant world. The same is true for any other problem area in science. One is never absolutely certain that all solutions for a problem area have been identified. Hence solutions always carry a level of uncertainty, which you have to reduce as much as possible by rigorous evaluation of the problem area, by developing a binary decision tree, which requires thorough study and knowledge acquisition in the problem area. In many ways, sound decision making is the art of reducing the uncertainty of known solutions in a specific problem area as much as possible. As if it were a statistical law: the more solutions found in a problem area, the lower the uncertainty of each of them.
No?
The more plant species are known, the lower the uncertainty of erroneous determinations. Why else do botanists put so much effort into finding and describing new plant species and studying the relationships between the determined "solutions" nature has developed over thousands of millennia?
Cheers,
Frank
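Frank's binary determination key is easy to make concrete: each internal node asks one yes/no question, and leaves name the outcome. A toy dichotomous key over made-up plant features:

```python
def classify(features, tree):
    """Walk a binary determination key: each internal node is a tuple
    (question, yes_subtree, no_subtree); leaves are species names."""
    while isinstance(tree, tuple):
        question, yes_branch, no_branch = tree
        tree = yes_branch if features.get(question, False) else no_branch
    return tree

# Toy key, in the style of a botanical determination table:
key = ("has_needles",
       ("cones_upright", "fir", "spruce"),
       ("leaves_opposite", "maple", "oak"))

species = classify({"has_needles": False, "leaves_opposite": True}, key)
```

Note the structure forces every question to be decidable with yes or no, which is exactly Frank's criterion for a well-defined problem.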
  • asked a question related to Formal Methods
Question
21 answers
Formal methods have mathematical foundations. They are based on algebra or finite state machines. Are they practical? Do they justify their costs?
Is it better to use light-weight implementations of formal methods in industrial projects to reduce costs and increase flexibility and practicality?
Relevant answer
Answer
I'd say it depends on what you are designing. If you look at it from a process perspective, the whole software development process is like a funnel, from vague requirements over a (hopefully) not so vague design to a concrete implementation.
While this is usually fine for systems where there is no "right" or "wrong" solution (the look and feel of a web shop, for example), other systems are much more restrictive. You don't want to be vague about alarm and shutdown criteria in the control software of a nuclear power plant or in aircraft control.
So I think using formal methods will be worthwhile in only some scenarios, while in others a semi-formal notation (such as UML) will suffice.
  • asked a question related to Formal Methods
Question
28 answers
I am looking for tool chains (even a model based engineering methodology) to enable formal verification of ERTMS (railway signalling) systems. Something along the lines of how Prover works with Simulink and SCADE, but preferably a Symbolic tool like NuSMV. or other industrially viable tools with some way to have a formal verification.
Relevant answer
Answer
Scade provides safe state machines to describe state-based behavior. But you're right, everything is finally mapped to an (at least locally) synchronous execution model.
However, when using UPPAAL or another model checker you usually do the design in another framework and build a verification model to prove your properties. That is often non-trivial, since the design model has to be abstracted effectively (e.g. UPPAAL quickly reaches its limits if the system contains many variables and clocks). Second, the abstraction-refinement relation has to be proven property-preserving for the relevant properties; otherwise, verifying something on the verification model may not tell you anything about the design model.
So my first question would be: do you already have some design models for your systems, or do you construct your own verification models only?
  • asked a question related to Formal Methods
Question
13 answers
Agile development is concerned with the rapid development of software. In agile development, customer involvement plays an important role: the software is updated and upgraded according to the customer's requirements.
Formal verification is concerned with mathematical approaches to verifying software. How can we integrate these formal approaches (there are many types of formal techniques) into the agile development process?
Relevant answer
Answer
It's a matter of value. In Scrum, the PO has to be convinced that formal modelling (or another formal technique) has value above cost. If so convinced, the PO can make the corresponding engineering practices part of the Definition of Done. The Product Owner may herself use formal specification techniques if they are made transparent and if it can be shown that the benefit outweighs the cost. Both of these, of course, depend on doing true Scrum, where the PO really is the boss and really has final say over the product vision, final say over the PBIs on the Product Backlog, and the wherewithal to come together with the team on the corresponding components of the Definition of Done.
To disagree slightly with Nick: keep in mind that working code is not the first priority; the first priorities are self-organisation and feedback in Agile, and people and Kaizen mind in Lean and in Scrum. Few formal methods cater well to that agenda.
To disagree slightly with Yasar: Agile says nothing against tedious work.
To disagree with Maria: A system is as weak as its weakest link. Using formal methods on one part to robustify it does not make the system robust; there are many formal proofs around this concept. What she describes takes quite advanced risk analysis techniques; these are the purview of the PO, supported by input from the development team.
  • asked a question related to Formal Methods
Question
2 answers
Concerning model-based testing, I have three models (FSM, EFSM, and LTS for ioco) for the same system. The main objective of these models is to generate test cases for the system and then to conduct conformance testing.
I need to benchmark or validate the LTS-for-ioco model, and hence the testing method. What are the criteria for validation? Or what items of comparison should I use?
Relevant answer
Answer
I second Mohammed Younis: for a case study, measuring the fault detection capability is an important aspect. When mutating the code, it is best to do it according to some fault model.
You could also measure test coverage on the specification, according to some coverage criterion, to get an approximation on how well your tests cover the specification. But there is quite some disagreement whether/when coverage is a good approximation to the fault detection capability.
Since ioco is formal, you could also validate your testing method formally, by proving that it is sound and complete (i.e. exhaustive).
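The mutation-based measurement mentioned above can be sketched in a few lines. This is a minimal, illustrative Python sketch (the function, fault model, and mutants are made up for the example, not taken from any specific tool): run the test suite against hand-written mutants of a reference implementation and count how many are killed.

```python
# Minimal sketch: estimate the fault detection capability of a test suite
# by running it against hand-written mutants of a reference function.
# All names and mutants here are illustrative.

def original(x, y):
    """Reference implementation: maximum of two numbers."""
    return x if x >= y else y

# Mutants injected according to a simple fault model
# (relational-operator replacement, wrong operand returned).
mutants = [
    lambda x, y: x if x > y else y,   # >= replaced by > (equivalent mutant!)
    lambda x, y: x if x >= y else x,  # always returns x
    lambda x, y: y if x >= y else y,  # always returns y
]

tests = [(1, 2), (5, 3), (4, 4)]

def killed(mutant):
    """A mutant is killed if some test distinguishes it from the original."""
    return any(mutant(x, y) != original(x, y) for x, y in tests)

# Mutant 0 is semantically equivalent to the original (for max, > and >=
# only differ when x == y, where both branches yield the same value), so
# no test suite can ever kill it -- a well-known difficulty in practice.
score = sum(killed(m) for m in mutants) / len(mutants)
print(f"mutation score: {score:.2f}")
```

Note how the equivalent mutant caps the achievable score below 1.0; detecting such mutants automatically is undecidable in general, which is one reason mutation scores are only an approximation of fault detection capability.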
  • asked a question related to Formal Methods
Question
5 answers
I'm looking for a journal to submit my paper to, which is about modeling context-aware systems.
Relevant answer
Answer
Depending on the approach of the paper, it could be the International Journal of Advanced Studies in Computers, Science and Engineering, the International Journal of Pervasive Computing and Communications, or the Journal of Universal Computer Science.
If the approach is more of an interaction model, it could be the International Journal of Human-Computer Interaction.
I know most of these journals are rather broadly themed, but I've seen many papers on ubiquitous computing published there.
  • asked a question related to Formal Methods
Question
2 answers
Can anybody using VDM guide me about its tools?
Relevant answer
Answer
Thanks Nick, it is very helpful
  • asked a question related to Formal Methods
Question
1 answer
For verification of Java code I want to use JML, and as JML is tedious for the developer to create, I want to generate it from other artifacts, such as UML models.
Do you know of any work on this? I mean, do you have any personal experience of the possibility of generating useful specifications from those models, or of any alternative way to generate them?
Thanks in advance.
Relevant answer
Answer
I've seen an article about this before: "Generating JML specifications from UML state diagrams", for verification purposes. They proposed a tool called AutoJML.
  • asked a question related to Formal Methods
Question
5 answers
I am looking for a research topic in Specification and Verification using JML. It is for my M.Sc thesis.
Any idea is appreciated.
Relevant answer
Answer
Dear Reza,
The easiest way to find a research topic for the application of JML is to explore some related work that you can find in publishing websites such as: Springer, Elsevier, IEEE and so on.
Anyway, here's a research paper that might be helpful:
  • asked a question related to Formal Methods
Question
8 answers
I am looking for current applications/ideas/challenges of formal methods in MDD, in order to evaluate/analyse them.
I know that, for example, FM tools are used to check a model for correctness. What are the other usages, and specifically, which usages and challenges arise in MDD?
Also, what is missing in the area? I mean, for example, a tool that could be of benefit in the field.
Any idea would be highly appreciated.
Any idea would be highly appreciated.
Relevant answer
Answer
I was involved in a European project called DEPLOY whose objective was to help industry partners integrate formal-methods development techniques and tools into their development process. Conversely, the project also helped research partners fine-tune their formal methods and tools to address industry pilot cases. The main method put forward in the project was Event-B, but others were addressed too.
In the DEPLOY project, my research centre CETIC was involved in collecting evidence related to the achievements made by these industry pilots as well as other efforts.
I suggest you take a look at http://www.fm4industry.org, which resulted from the project and presents various achievements from it.
With regard to formal-methods development, one of the most challenging parts of dramatically improving industry take-up is integrating a complete chain of tools. Currently, very few examples exist where formal models can automatically lead to the generation of executable, proven-correct code. I know of Siemens Transportation Systems, which used B (not Event-B) to generate the controller of the driverless subway line in Paris, among others. This example is presented on the website mentioned above.
The SAP pilot on test-case generation from a UML-like workflow notation is also explained on that website.
The main problem is that if you only use formal methods here and there, then the final result cannot be proven correct and a heavy testing effort remains required. The formal models are then just another complex artifact to maintain throughout the system and software life cycle (development, maintenance, evolution until retirement).
If you undertake industry pilots as part of your research, please consider contributing your pilot results to the wiki above.
  • asked a question related to Formal Methods
Question
9 answers
I would like to implement MDA with a fully formal language instead of UML as the modeling language, but I am out of ideas. My idea is to have a full MDA implementation, transforming a very abstract model into code. I have studied a few customized UML profiles with stereotypes, but those made model development rather unconventional, and UML is also not precise enough.
Any idea would be appreciated.
Relevant answer
Answer
Nick: You are absolutely right regarding the trade-offs between a very abstract model and a fine-grained one. But I presume there surely are tools to transform an abstract activity diagram into code in any target language; in that case, any research on that type of code generation would be useless, even if the transformation came with formal proofs and transformed back and forth between models using side tools.
Actually, I was also thinking of a declarative programming language, like Haskell, as the target language for the code. That way we close some of the gap between the source model and any imperative language, as we focus more on what to do rather than how, and omit some details along the way (like your example, Nick).
Could it compensate for the weaknesses of an abstract source model?
  • asked a question related to Formal Methods
Question
4 answers
Which diagrams/aspects of UML are not formal and need an external formal technique/tool to analyse the produced UML model? I mean external utilities that are not in the existing dominant UML CASE tools like Rational Rose.
Relevant answer
Answer
Yes, it looks doable. You will have to decide which formal analyses you want to perform. Model-checking will probably require abstraction. If you have really huge AD you might want to resort to static analysis.
  • asked a question related to Formal Methods
Question
3 answers
Dominance relationships are used in compilers and in extended finite state machines (EFSMs). In compilers they indicate which statements are necessarily executed before others, while in EFSMs they are used to determine control dependence between transitions. Pre-dominance and post-dominance are the two types of dominance relationship.
I have a question regarding control dependence. In all the articles I have read so far, post-dominance is always the notion mentioned for control dependence in EFSMs. My question is: can we use pre-dominance instead of post-dominance for EFSM control dependence? If yes, what are the conditions for determining the control dependence between two transitions in an EFSM? Thank you in advance.
Relevant answer
Answer
Prof. Peter T. Breuer, thank you very much for your answer.
What I mean by control dependence between transitions is based on many articles, such as "Dependence Analysis in Reduction of Requirement Based Test Suites".
Control dependence captures the notion that one node in a program control graph may affect the execution of another node. This definition is extended in the above article to extended finite state machines (EFSMs).
Control dependence in an EFSM exists between transitions and captures the notion that one transition may affect the traversal of another transition.
The article states that control dependence between transitions is defined similarly to control dependence between nodes of a program control graph, i.e., in terms of the concept of post-dominance:
Node Z post-dominates node Y iff Z is on every path from Y to the exit node.
However, there is another type of dominance, called pre-dominance:
Node Z pre-dominates node Y iff Z is on every path from the initial node to Y.
Therefore, as I understand it, post-dominance is defined relative to the exit node, while pre-dominance is defined relative to the initial node.
My question is: can we use pre-dominance instead of post-dominance for EFSM control dependence? If yes, what are the conditions for determining the control dependence between two transitions in an EFSM? Thank you in advance.
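The post-dominance definition quoted above can be computed with the standard iterative data-flow algorithm. Here is a minimal Python sketch on a small, hypothetical transition graph (the graph and names are made up for illustration); pre-dominance is the mirror image, obtained by running the same fixpoint on the reversed graph from the initial node.

```python
# Sketch: computing post-dominators of a small control graph with the
# standard iterative data-flow algorithm. Node Z post-dominates node Y
# iff Z is on every path from Y to the exit node. The graph below is a
# hypothetical example, not taken from any article.

succ = {                 # successor map; 'exit' is the exit node
    'a': ['b', 'c'],
    'b': ['d'],
    'c': ['d'],
    'd': ['exit'],
    'exit': [],
}
nodes = set(succ)

# Initialise: the exit node post-dominates only itself; all other sets
# start "full" and shrink towards the greatest fixpoint.
postdom = {n: set(nodes) for n in nodes}
postdom['exit'] = {'exit'}

changed = True
while changed:
    changed = False
    for n in nodes - {'exit'}:
        # n is post-dominated by itself plus whatever ALL successors agree on
        new = {n} | set.intersection(*(postdom[s] for s in succ[n]))
        if new != postdom[n]:
            postdom[n] = new
            changed = True

# d lies on every path from a to exit, so postdom['a'] contains a, d, exit.
print(sorted(postdom['a']))
```

Pre-dominators would be computed identically after reversing every edge and replacing `'exit'` with the initial node, which is exactly why the two notions yield different dependence relations: one speaks about what must still happen, the other about what must already have happened.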
  • asked a question related to Formal Methods
Question
9 answers
Attempting to understand the boundary between formal and informal language types.
Relevant answer
Answer
Very interesting.
I looked at a converted PostScript version the first time.
A quick first scan indicated the standard tools of parsing and execution of tokens. Toward the back of the document, the text appeared to apply these techniques to groups of unknown symbols (I could not read the symbols).
I looked at another copy of the work this evening, and I can read all of the pages (all known symbols).
So there must have been a mix-up in the language and symbol set in the first document that I scanned.
Take care and have fun,
Joe
  • asked a question related to Formal Methods
Question
5 answers
I am seeking an interesting topic in that field for my master thesis.
Relevant answer
Answer
I suggest software product line specification and verification -- but there are many others.
  • asked a question related to Formal Methods
Question
8 answers
Partial evaluation is a technique for program specialization.
Relevant answer
Answer
Partial evaluation is a particular application of program specialization. Program specialization: fix a subset of a program's arguments (i.e., a function's parameters) and specialize the code according to the values of the fixed ("static") parameters. The resulting program will be smaller, lighter, faster.
Using this kind of specializer, interpreters can be specialized for programs that must be interpreted quite often; the resulting program is then a (kind of) compiled version of the original program. I think this is what just-in-time compilers (like the one in the JVM) do. You can push this further (these are the Futamura projections): specialize the specializer for a given interpreter (resulting in a compiler), or even specialize the specializer for itself (resulting in a compiler generator).
I am interested in this topic because we work a lot with DSLs (domain-specific languages). Twenty-five years ago this wasn't a use case, because back then we only had lex/yacc for building industry-level DSLs. It would be useful to generate code generators from interpreters, so we could be sure that both have identical semantics. I am currently developing a self-applicable specializer for the Xtend language.
It would be great to find people interested in this very topic... the PE community seems to be a bit inactive at the moment...
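The classic textbook illustration of specialization is fixing the exponent of a generic power function. A minimal, hand-rolled Python sketch (this is what a specializer would produce, not the output of a real one): the static argument `n` is consumed at specialization time, and the residual program contains only the dynamic argument `x`.

```python
# Sketch of program specialization: fix the "static" exponent argument of
# a generic power function and emit residual code specialized for it.
# Hand-rolled for illustration; a real partial evaluator automates this.

def specialize_power(n):
    """Generate source for pow_n(x) = x**n by unfolding the loop over n."""
    body = " * ".join(["x"] * n) if n > 0 else "1"
    src = f"def pow_{n}(x):\n    return {body}\n"
    namespace = {}
    exec(src, namespace)           # compile the residual program
    return namespace[f"pow_{n}"], src

pow_3, src = specialize_power(3)
print(src)          # the residual definition: return x * x * x
print(pow_3(2))     # 8
```

The residual `pow_3` contains no loop and no test on `n`: the interpretation overhead of the static input has been paid once, at specialization time. Applying the same move to an interpreter (the "program" being the static input) is exactly the first Futamura projection.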
  • asked a question related to Formal Methods
Question
15 answers
Looking for a set of specific Problems to work on for my doctoral work
Relevant answer
Answer
Event-B is a rigorous formal method for system specification. Rodin is an Eclipse-based IDE that provides tool support for Event-B. One Rodin plug-in is UML-B which, as its name suggests, combines UML and B. You may want to google Colin Snook (EECS/Univ Southampton), which is the main designer and maintainer of UML-B, I believe. He has also authored several papers on this approach.
  • asked a question related to Formal Methods
Question
2 answers
I'm working on a new article about context-aware modeling and I'm confused on how to do the case study.
Relevant answer
Answer
A sample motivating scenario from the healthcare domain, for the management of patients' medical records, can be found in our paper. There we briefly discuss a context-aware scenario that illustrates the need to incorporate context information into the access-control processes. You can also have a look at the Prototype section.
  • asked a question related to Formal Methods
Question
3 answers
The investigators here are searching for a way of saying that the test matched its requirements, met the expectations of the applicable standards, and was of high quality. They drew the term from hardware testing. All of this obviously overlaps with notions of test validation, test coverage (of many sorts), and other test-effectiveness measures. Have any of you developed a metric on the fit of a test against requirements, standards, etc.? Any references are very welcome.
Relevant answer
Answer
Kristie:
Let's assume that the metric covers the total range of system components.
Further, let's assume the requirements are not static but dynamic, like a dynamic threat environment.
I developed the Secure Adaptive Response Potential (SARP), a system security metric that focuses on all aspects of security requirements.
You may be able to adapt the SARP approach to other areas.
A link to the SARP paper is:
Take care and have fun,
Joe
  • asked a question related to Formal Methods
Question
4 answers
I am not sure this is the right forum for this kind of question, but I ask it here since this topic reflects my research area.
Relevant answer
Answer
There are some really misguided applications of bibliometrics. For instance, a national body responsible for assessing and grading graduate programs in computer science has classified events according to H-index.
First, this is statistically flawed as there is an obvious bias to favor very large conferences against small events.
Second, it is far from simple to collect correctly such a massive amount of data, as the same event might go through name changes, merge with another event, and so forth.
So, even though bibliometrics might be useful in some situations, their widespread and indiscriminate usage makes them a threat to the future of science funding, at least in some parts of the world.
This is best said in the San Francisco Declaration on Research Assessment (DORA) http://am.ascb.org/dora/.
  • asked a question related to Formal Methods
Question
3 answers
I am creating a project that transforms C code into a flow graph. Please suggest any tool or materials. Should I convert the compiler's intermediate code into some graph transformation system specification?
Relevant answer
Answer
Thank you Alexandre Chapoutot...
  • asked a question related to Formal Methods
Question
20 answers
I'm currently writing a short essay on Formal Verification and Program Derivation, and I would like to include an example of an algorithm that seems to be correct under a good number of test cases, but fails for very specific ones. The idea is to present the importance of a logical, deductive approach to infer the properties of an algorithm, in order to avoid such mistakes.
I can't quite think of a good example, though, because I'd like something simple that a layman can understand, but all the examples I came up with are related to NP-complete graph theory problems (usually an algorithm that is correct for a subset of the inputs, but not for the general problem).
Could someone help me think of a simpler example?
Relevant answer
Answer
A folklore example is swapping the values of two (integer) references without using a local variable. Take a programming language with call-by-reference and the following function:
swap(&int a, &int b) {
a := a - b; // value of a is a0 - b0
b := a + b; // value of b is a0 - b0 + b0 = a0
a := b - a; // value of a is a0 - a0 + b0
}
Assume the value of a is a pointer to the value a0, and b to b0. Then after a call to swap(a, b) the values pointed to by a and b are swapped ... except if a and b both point to the same memory address, in which case they point to the value 0.
This is more an example of the problems related to reasoning about programs in the presence of pointers, but I think it qualifies as an example of a tricky algorithm.
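The aliasing failure is easy to reproduce and check. Here is a Python sketch that emulates call-by-reference with one-element lists (Python has no by-reference integer parameters, so the lists are a stand-in for the pointers above):

```python
# Emulate call-by-reference with one-element lists to reproduce the
# aliasing bug in the subtraction-based swap.

def swap(a, b):
    a[0] = a[0] - b[0]
    b[0] = a[0] + b[0]
    a[0] = b[0] - a[0]

x, y = [3], [7]
swap(x, y)
print(x[0], y[0])   # 7 3 -- correct for two distinct references

z = [5]
swap(z, z)          # both parameters alias the same cell
print(z[0])         # 0 -- the value is destroyed, not swapped
```

In the aliased call every assignment reads and writes the same cell: the first line zeroes it, and the remaining two lines can never recover the original value. A test suite that never passes the same reference twice would classify this swap as correct, which is exactly the point of the example.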
  • asked a question related to Formal Methods
Question
10 answers
Relevant answer
Answer
Axiomatic semantics is just one way of providing program semantics, in which the semantics of program statements is defined using sequent-like rules. If I recall correctly, the idea dates back to Floyd and was formalized by Hoare (recipient of a Turing Award for his contributions to programming language definition and semantics). These rules allow one to prove, in a sequent-calculus style, the validity of a Hoare triple {Pre} Program {Post} (all states that satisfy the precondition, and for which the program terminates, must satisfy the postcondition). Dijkstra showed how to translate these assertions into first-order logic formulas using the computation of the weakest precondition.
So, regarding verification, the relationship comes in two flavors: you can PROVE that a Hoare triple is valid by using Hoare's proof calculus (and this implies that the program is correct w.r.t. the provided pre- and postconditions). Alternatively, you can compute the weakest precondition to obtain a first-order formula (under the assumption that loops iterate a bounded number of times) and use proof methods for first-order logic to verify the validity of Hoare triples. The latter approach is much used in automated static analysis resorting to SMT solving, SAT solving, or related techniques.
I know the answer is not self-contained, but includes a few keywords for which abundant information is available on the web.
The book "The Science of Programming" by David Gries is a wonderful source for an introduction to this topic.
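For assignment statements, Dijkstra's weakest precondition is just textual substitution: wp(x := e, Q) = Q[x := e], and a sequence is handled by folding backwards through the statements. A toy Python sketch (real verifiers substitute on abstract syntax trees, not strings; the regex version below is only illustrative and the predicates are made-up examples):

```python
import re

# Toy weakest-precondition calculator for assignments:
#   wp(x := e, Q) = Q with every occurrence of x replaced by (e).
# Real tools work on ASTs; string substitution is only for illustration.

def wp_assign(var, expr, post):
    """Substitute (expr) for whole-word occurrences of var in post."""
    return re.sub(rf"\b{re.escape(var)}\b", f"({expr})", post)

def wp_seq(assignments, post):
    """wp of a sequence of assignments: fold backwards through them."""
    for var, expr in reversed(assignments):
        post = wp_assign(var, expr, post)
    return post

# What must hold before  x := x + 1; y := x * 2  so that  y > 4  after?
print(wp_seq([("x", "x + 1"), ("y", "x * 2")], "y > 4"))
# ((x + 1) * 2) > 4, which simplifies to x > 1
```

The resulting formula is exactly the first-order verification condition mentioned above: discharging `{x > 1} x := x + 1; y := x * 2 {y > 4}` reduces to proving the implication `x > 1 => ((x + 1) * 2) > 4`, which an SMT solver handles immediately.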
  • asked a question related to Formal Methods
Question
3 answers
Thanks all for the answers
Relevant answer
Answer
Hi,
Finding a deadlock, plus a shortest path leading to it, can be done using a breadth-first search of the state graph for a state without outgoing edges. In the worst case (no deadlock) all states must be explored, so the algorithm is trivially linear in the size of the state graph. However, the state graph itself may be exponential in the size of the system description (e.g. a Promela specification). I recommend reading the Spin book, a good read.
Cheers,
Frederic.
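The breadth-first search described above fits in a few lines. A minimal Python sketch (the state graph here is a hand-written toy example, not one generated from a Promela model): BFS guarantees that the first deadlock found is reached by a shortest counterexample trace.

```python
from collections import deque

# Breadth-first search of a state graph for a deadlock (a state with no
# outgoing transitions), returning a shortest path to it.

def find_deadlock(initial, successors):
    parent = {initial: None}        # doubles as the visited set
    queue = deque([initial])
    while queue:
        state = queue.popleft()
        if not successors(state):   # no outgoing edge: deadlock
            path = []
            while state is not None:
                path.append(state)
                state = parent[state]
            return list(reversed(path))   # shortest counterexample trace
        for nxt in successors(state):
            if nxt not in parent:
                parent[nxt] = state
                queue.append(nxt)
    return None                     # no deadlock reachable

# Toy state graph: s3 has no successors, so it is a deadlock state.
graph = {'s0': ['s1', 's2'], 's1': ['s0'], 's2': ['s3'], 's3': []}
print(find_deadlock('s0', lambda s: graph[s]))   # ['s0', 's2', 's3']
```

In a real model checker the `successors` function would compute next states on the fly from the system description rather than from a prebuilt dictionary, which is what makes the worst case linear in the (possibly exponential) state graph rather than in the specification.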