Formal Methods - Science topic
In computer science, specifically software engineering, formal methods are mathematically based techniques for the specification, development, and verification of software and hardware systems.
Questions related to Formal Methods
I have a few questions and I am interested in your thoughts on the subject.
To what extent can the phenomena of consciousness be effectively modeled within the constraints imposed by limitative theorems? Can we devise a computational framework that adequately captures the nuances of conscious experience, or do these theorems suggest an intrinsic limitation to such endeavors?
How do limitative theorems affect our understanding of time within formal logical systems, and what consequences does this have for our subjective experience of time? Can formal models of time ever fully align with the phenomenological aspects of temporal perception?
In light of the constraints identified by limitative theorems, how can we refine or develop new meta-theoretical structures that allow for a more nuanced exploration of time and consciousness? Are there specific informal approaches that could complement formal methods in this context?
What are the broader consequences of these theorems for interdisciplinary research into time and consciousness? How can insights from mathematics and computer science inform philosophical and cognitive science inquiries into these topics, and vice versa?
How do limitative theorems inform the debate between physicalism and dualism in the philosophy of mind? Do they suggest that certain aspects of consciousness are inherently non-computable or beyond the reach of formal systems?
Formal systems are well-known deductive systems that represent aspects of the environment, nature, or thinking, or, more frequently, abstract representations of those subjects.
But, assuming the existence of some syntactically correct representation of the real world, could a complete and consistent set of axioms be inferred from it? Is there any approach to this task? Or at least any clue?
The concept of a formal system and its properties appears frequently in many practical and theoretical components of computer science: methods, tools, theories, etc.
But it is also common to find non-rigorous interpretations of "formal". For example, in several definitions of ontology, "formal" is understood as something that a "computer can understand".
Does a computer science specialist with a BSc need to know this concept? Are it and its properties useful to them?
Multi-analysis modeling of a complex system is the act of building a family of models that covers a large spectrum of analysis methods (such as simulation, formal methods, enactment, etc.) which can be performed to derive various properties of the system. The High-Level Language for Systems Specification (HiLLS) has recently been introduced as a graphical language for discrete event simulation, with potential for other types of analysis, such as enactment for rapid system prototyping. HiLLS defines an automata language that also opens the way to formal verification.
I am currently working on a systematic literature review on model checking. Please help in this case.
As is well known, falsification of cyber-physical systems (CPS) has already made some progress. But one problem remains: how to select input traces efficiently in order to find a bad output trace. The problem divides into two parts:
1. What is the input space, and how can it be described?
2. Which optimization algorithms can help select input traces for the falsification of CPS?
Can you suggest some ideas for these two parts, and which directions do you think are promising? (A toy sketch of the optimization loop appears below.)
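A minimal sketch of part 2, assuming the system under test is available as a black-box simulator. The dynamics, bounds, and safety property below are all invented for illustration; the input space is parameterized as one bounded value per time step, and plain random search stands in for the smarter optimizers (CMA-ES, simulated annealing, Bayesian optimization) used by tools such as S-TaLiRo or Breach:

```python
import random

# Hypothetical black-box simulator: maps an input trace (list of floats)
# to an output trace. In practice this would wrap a Simulink/CPS model.
def simulate(input_trace):
    out, state = [], 0.0
    for u in input_trace:
        state = 0.9 * state + u          # toy first-order dynamics
        out.append(state)
    return out

# Quantitative robustness of the safety property "output always < 2.0":
# positive = satisfied with margin, negative = falsified.
def robustness(output_trace):
    return min(2.0 - y for y in output_trace)

# Input space: 20 time steps, each input bounded in [-1, 1].
HORIZON, LO, HI = 20, -1.0, 1.0

def random_trace():
    return [random.uniform(LO, HI) for _ in range(HORIZON)]

def falsify(budget=1000):
    """Plain random search over the input space; a real falsifier would
    use a global optimizer guided by the robustness value instead."""
    best, best_rob = None, float("inf")
    for _ in range(budget):
        trace = random_trace()
        rob = robustness(simulate(trace))
        if rob < best_rob:
            best, best_rob = trace, rob
        if best_rob < 0:                 # counterexample found
            break
    return best, best_rob

counterexample, rob = falsify()
print("robustness:", rob)  # negative => falsifying input trace found
```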
Hi,
Is there a very simple real-valued time-series dataset on which a vanilla RNN model can be trained? By "very simple" I mean only two to four real-valued inputs per time step and a single real-valued output per time step.
Background: I am doing research at the intersection of machine learning and formal methods. To test a new technique for formally verifying RNNs, we need to start with a quite simple setup.
Thanks!
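If no published dataset fits, a self-generated one may be acceptable for a verification benchmark. The sketch below (all modeling choices arbitrary) produces two real-valued inputs per step and one real-valued output per step, where the output is a noisy weighted sum with a one-step delay on the second input:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_dataset(n_sequences=100, length=50):
    # Two inputs per time step, uniformly sampled in [-1, 1].
    X = rng.uniform(-1.0, 1.0, size=(n_sequences, length, 2))
    y = np.zeros((n_sequences, length, 1))
    for t in range(length):
        prev = X[:, t - 1, 1] if t > 0 else 0.0   # one-step delay
        y[:, t, 0] = 0.7 * X[:, t, 0] + 0.3 * prev
    y += rng.normal(scale=0.01, size=y.shape)     # small observation noise
    return X, y

X, y = make_dataset()
print(X.shape, y.shape)   # (100, 50, 2) (100, 50, 1)
```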
Hello all,
I am aware of "formal methods" as the term is used in computer science: making sure our specifications are mathematically and logically sound before we put them into a specific language. Isn't there a tool or form of notation so we can do the same while constructing legitimate hypotheses for psychological research?
It is common knowledge that the construction of natural language questions (surveys about thinking and behavior) can be very questionable when it comes to construct validity. For example: did I cover everything? Am I even asking the right question? Trial and error without a formal test of our logic doesn't seem very efficient.
Does anyone know of a formal notation or research tool that lets us test natural language questions for how "sound" they are? After all, whether computer code (if/then we can do this) or human language (if/then we can assume that), it's pretty much the same logic. Thanks for any advice!
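One minimal, admittedly crude transfer from computer science: encode each assumption behind a questionnaire as a propositional formula over named claims, then check the whole set for consistency by truth-table enumeration. Everything below (the claims, the rules) is an invented illustration, not a validated psychological model:

```python
from itertools import product

def implies(a, b):
    return (not a) or b

# Hypothetical assumptions behind a questionnaire:
#   1. anxious -> avoids_social
#   2. avoids_social -> low_mood
#   3. anxious and not low_mood   (a respondent profile we expect to see)
assumptions = [
    lambda v: implies(v["anxious"], v["avoids_social"]),
    lambda v: implies(v["avoids_social"], v["low_mood"]),
    lambda v: v["anxious"] and not v["low_mood"],
]

variables = ["anxious", "avoids_social", "low_mood"]

def consistent(formulas):
    # Brute-force search for an assignment satisfying all assumptions.
    for values in product([False, True], repeat=len(variables)):
        v = dict(zip(variables, values))
        if all(f(v) for f in formulas):
            return True, v
    return False, None

ok, witness = consistent(assumptions)
print("consistent:", ok, "witness:", witness)
# Prints "consistent: False": the three assumptions contradict each other,
# flagging a flaw in the question design before any survey is run.
```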
- Artificial intelligence safety is important for building an AI world, but I feel somewhat confused about how to verify AI.
- What are the differences between model checking and theorem proving in traditional formal methods and the methods used for verified AI?
- For a traditional cyber-physical system (CPS), which new problems should we consider when the CPS carries a neural network?
As is well known, machine learning has already made great progress in cyber-physical systems, e.g. self-driving cars, robots, aircraft. It can improve their efficiency, but because of its uncertainty we cannot use it in key components. Formal methods, on the other hand, offer completeness and accuracy.
How can machine learning be combined with formal methods to guarantee safety without sacrificing efficiency? (A sketch of one common pattern follows below.)
Can you point to material on practical research in this direction?
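One widely studied pattern is runtime shielding: the unverified learned controller proposes actions, and a small, formally analyzable monitor overrides any proposal that would leave a proven safe region. The sketch below is purely illustrative; the dynamics, bounds, and both controllers are invented, not taken from any particular paper:

```python
def learned_controller(position, velocity):
    # Stand-in for a neural network policy (untrusted component).
    return -0.5 * position - 0.2 * velocity

def safe_fallback(position, velocity):
    # Simple braking law standing in for a verified fallback controller.
    return -velocity

def next_state(position, velocity, accel, dt=0.1):
    return position + velocity * dt, velocity + accel * dt

def shield(position, velocity, accel):
    """Accept the learned action only if the successor state stays inside
    the (assumed pre-verified) invariant |position| <= 10, |velocity| <= 5."""
    p2, v2 = next_state(position, velocity, accel)
    if abs(p2) <= 10.0 and abs(v2) <= 5.0:
        return accel                           # learned action is safe
    return safe_fallback(position, velocity)   # override with fallback

p, v = 9.5, 4.0
for step in range(20):
    a = shield(p, v, learned_controller(p, v))
    p, v = next_state(p, v, a)
print("final state:", p, v)
```

The point of the design is that only the shield and the fallback need formal verification; the neural network can remain a black box, so efficiency is lost only in the (hopefully rare) override cases.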
Given a model consisting of a set of (first-order) formulas, certain dependencies arise between the free variables of these formulas. For instance, given the assumptions { x <= y }, a choice of value for x constrains the possible values of y. This extends to more complex terms and formulas, e.g. f(x) depending on (x < 0) in the model { f(x) = sgn(x) * x² }.
Are there any results on the nature of such dependencies in logic or any other field? Particularly regarding automatic detection of such dependencies?
I have found dependence/independence logic (logics of imperfect information), which extend first-order logic with formulas specifying such relations explicitly, but nothing on the implicit relationships established by regular first-order logic or on automatic analyses. I thought fields such as constraint solving or SMT might have investigated this topic, but have found nothing so far.
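For what it is worth, one natural formalization of such a dependency is easy to phrase as an SMT query: under a constraint phi, y depends on x iff there exist x1, x2, y such that phi(x1, y) holds but phi(x2, y) does not, i.e. the admissible set of y changes with x. A small sketch with z3's Python API:

```python
from z3 import Ints, Solver, Not, sat

def depends(phi):
    """Is some y admissible for one value of x but not for another?"""
    x1, x2, y = Ints("x1 x2 y")
    s = Solver()
    s.add(phi(x1, y))         # y is admissible when x = x1 ...
    s.add(Not(phi(x2, y)))    # ... but not when x = x2
    return s.check() == sat

print(depends(lambda x, y: x <= y))   # True: the range of y shifts with x
print(depends(lambda x, y: y >= 0))   # False: y is independent of x
```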
In the area of symbolic control (by which I mean, to be precise, using finite abstractions of dynamical systems together with verification algorithms to construct provably correct controllers satisfying a temporal logic formula), I am interested in ways to tackle environment uncertainties such as dynamic obstacles.
In the paper a method is proposed, but it is assumed that the car model is given a priori in the form of a Markov decision process (MDP), while I am interested in research that combines the problem of finding an abstract model of the system with the problem of handling uncertainties in the environment.
Is anybody aware of research that has done so? Or any ideas on how one could combine these two problems? I have done a literature review in the field and have not come up with an answer so far.
Thanks in advance for your help.
I have introduced a new model for information diffusion based on node behaviors in social networks. I simulated the model and found interesting results. I want to evaluate it with a formal method, and I have found Interactive Markov Chains. Can I use them to evaluate my model?
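One thing to keep in mind: a full IMC mixes action-labeled interactive transitions (resolved nondeterministically, so a scheduler is needed) with rate-labeled Markovian ones; Hermanns' monograph "Interactive Markov Chains" is the standard reference. The toy sketch below simulates only the Markovian (CTMC) fragment, on an invented "susceptible -> informed -> inactive" diffusion chain:

```python
import random

RATES = {                     # state -> list of (target, exponential rate)
    "susceptible": [("informed", 1.5)],
    "informed":    [("inactive", 0.5), ("susceptible", 0.2)],
    "inactive":    [],        # absorbing state
}

def simulate(state="susceptible", horizon=10.0):
    t, trace = 0.0, [(0.0, state)]
    while RATES[state]:
        total = sum(r for _, r in RATES[state])
        t += random.expovariate(total)           # exponential sojourn time
        if t > horizon:
            break
        u, acc = random.uniform(0, total), 0.0   # race between transitions
        for target, r in RATES[state]:
            acc += r
            if u <= acc:
                state = target
                break
        trace.append((t, state))
    return trace

random.seed(1)
for t, s in simulate():
    print(f"{t:6.2f}  {s}")
```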
The Baier and Katoen textbook references this paper
- E. M. Clarke and I. A. Draghicescu. Expressibility results for linear-time and branching-time logics, pages 428–437. Springer Berlin Heidelberg, Berlin, Heidelberg, 1989.
to say that, given a CTL formula ϕ, if there exists an equivalent LTL one, it can be obtained by dropping all branch quantifiers (i.e. A and E) from ϕ.
The equivalence definition (from Baier & Katoen): CTL formula Φ and LTL formula φ (both over AP) are equivalent, denoted Φ ≡ φ, if for any transition system TS over AP:
TS |= Φ if and only if TS |= φ.
(Satisfaction |= is CTL or LTL respectively.)
Is there a syntactic criterion that provides a guarantee that if a CTL formula passes the test, then an equivalent LTL formula does exist?
Please note: just dropping all branch quantifiers is not enough. As an example, consider 'AF AG p', where p is an atomic predicate. The LTL formula 'F G p' obtained by dropping the branch quantifiers is NOT equivalent to 'AF AG p', since 'F G p' is not expressible in CTL at all. The question is whether there is a way (sufficient but not necessary is OK) of looking at a CTL formula and saying that it does have an equivalent LTL one.
I am emphasizing the need for a syntactic criterion, as opposed to the semantic one: drop the branch quantifiers and check the equivalence with the resulting formula. Something along the lines of: if, after pushing negations down to atomic predicates, all branch quantifiers are universal (A) and <some additional requirement>, then the formula has an equivalent LTL one (which, necessarily, can be obtained by dropping the branch quantifiers).
An additional requirement (or a totally different criterion) is necessary; see the `AF AG p` example above.
Same question on CS Theory Stack Exchange (see the link)
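If I recall correctly, Maidl's "The common fragment of CTL and LTL" (FOCS 2000) gives a syntactic characterization (the ACTL^det fragment) along exactly these lines and may be the place to look. As a purely mechanical aid, the quantifier-dropping step itself is easy to implement over a formula AST; the sketch below uses an invented nested-tuple encoding:

```python
# Drop every A/E path quantifier from a CTL formula encoded as nested
# tuples. This is only the syntactic half of the Clarke-Draghicescu
# test: whether the resulting LTL formula is actually equivalent still
# has to be checked, and (per the `AF AG p` example) it may not be.
def drop_quantifiers(phi):
    if isinstance(phi, str):              # atomic proposition
        return phi
    op, *args = phi
    if op in ("A", "E"):                  # erase the path quantifier
        return drop_quantifiers(args[0])
    return (op, *(drop_quantifiers(a) for a in args))

# ('A', ('F', ('A', ('G', 'p'))))  encodes  AF AG p
print(drop_quantifiers(('A', ('F', ('A', ('G', 'p'))))))
# -> ('F', ('G', 'p'))  i.e.  F G p   (not equivalent, as noted above)
```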
Do the formal languages of logic share so many properties with natural languages that it would be nonsense to separate them in more advanced investigations or, on the contrary, are formal languages a sort of ‘crystalline form’ of natural languages so that any further logical investigation into their structure is useless? On the other hand, is it true that humans think in natural languages or rather in a kind of internal ‘language’ (code)? In either of these cases, is it possible to model the processing of natural language information using formal languages or is such modelling useless and we should instead wait until the plausible internal ‘language’ (code) is confirmed and its nature revealed?
The above questions concern therefore the following possibly triangular relationship: (1) formal (symbolic) language vs. natural language, (2) natural language vs. internal ‘language’ (code) and (3) internal ‘language’ (code) vs. formal (symbolic) language. There are different opinions regarding these questions. Let me quote three of them: (1) for some linguists, for whom “language is thought”, there should probably be no room for the hypothesis of two different languages such as the internal ‘language’ (code) and the natural language, (2) for some logicians, natural languages are, in fact, “as formal languages”, (3) for some neurologists, there should exist a “code” in the human brain but we do not yet know what its nature is.
While experimenting with the nuXmv and NuSMV model checkers, I observed that input variables contribute no states to the overall state space of a given model.
The user manuals of these tools clearly mention the syntactic differences between these kinds of variables: "IVAR" introduces an input-variables section, while "VAR" introduces state variables.
Is there a rationale behind this behavior of input variables? In particular, how do these model checkers handle input variables versus state variables? Any reference on this observation would be appreciated.
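A small sketch of the usual reading, which matches the observation: valuations of state variables (VAR) form the states, while input-variable (IVAR) valuations only label transitions, so they add no states. Here a one-bit state b reacts to a one-bit input i via next(b) = b xor i:

```python
from itertools import product

STATES = [False, True]          # valuations of the state variable b (VAR)
INPUTS = [False, True]          # valuations of the input variable i (IVAR)

def transitions():
    # Each transition is (state, input label, next state); the inputs
    # appear only on the edges, never in the state space itself.
    for b, i in product(STATES, INPUTS):
        yield (b, i, b != i)    # b xor i

for b, i, b2 in transitions():
    print(f"b={b:d} --[i={i:d}]--> b'={b2:d}")

# The reachable state space is just {0, 1}, regardless of the inputs.
```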
I am conducting experiments that focus on the number of BDD nodes required for the analysis of a given property on various NuSMV or nuXmv models.
It would be appreciated if anyone could provide guidance on the procedure or recommend resources to assist me in conducting these experiments.
Thanks.
I need to define some scenarios for smart spaces, for example: (1) lights are turned on when a user enters the environment; (2) the telephone starts to play messages automatically.
Is there any formal template, method, or language for scenario definition in this context?
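One lightweight option (a suggestion, not a standard): write each scenario as an event-condition-action (ECA) rule. Richer scenario formalisms such as statecharts or live sequence charts exist, but ECA triples already give scenarios a machine-checkable shape:

```python
from dataclasses import dataclass

@dataclass
class Rule:
    event: str        # what happens in the environment
    condition: str    # guard that must hold in the current state
    action: str       # system response

# The two example scenarios from the question, as ECA rules.
SCENARIOS = [
    Rule("user_enters_room", "room_is_dark", "turn_on_lights"),
    Rule("user_enters_room", "has_new_messages", "play_messages"),
]

def react(event, state):
    """Return the actions triggered by `event` in the given state."""
    return [r.action for r in SCENARIOS
            if r.event == event and state.get(r.condition, False)]

print(react("user_enters_room",
            {"room_is_dark": True, "has_new_messages": True}))
# -> ['turn_on_lights', 'play_messages']
```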
I searched LOTOS white papers; many contain abstract theorems, but I think there is a lack of papers that give examples of requirement -> LOTOS specification, together with the syntax or an implementation in a software system.
This would greatly help my understanding of how to apply the LOTOS language to real problem solving.
Thank you.
Is there any event-based formal method that SOAML can be mapped into?
Which formal method is the most suitable for analyzing SOA security patterns?
Exploring the current state of the art in formal systems-architecture development and production; requesting information about current practices associated with design structure matrices.
Looking for current research efforts in systems science and systems engineering that use interpretive structural modeling as a formal approach.
During his work on software in the railway domain, Dines Bjorner started with a list of subdomains into which the railway domain can be divided. His list can be found here (http://euler.fd.cvut.cz/railwaydomain/?page=faq&subpage=railway-domain). From the perspective of a railway domain expert, is this list complete and up to date with respect to the current state of the railway domain? If not, what can be added or removed to refine it?
Dear colleagues,
I have struggled all day with a proof that appears obvious to me using pen and paper, but that is not obvious to the Isabelle theorem prover.
The goal I am trying to solve is:
((∀ s inp s'. pre_p inp s --> rely_p s s' --> pre_p inp s') -->
(∀ s inp s'. pre_p inp s --> rely_p s s' --> pre_p inp s'))
I would like to prove this without expanding pre_p and rely_p. Evidence that this should be possible is that I can prove:
∀ pre_p rely_p .
((∀ s inp s'. pre_p inp s --> rely_p s s' --> pre_p inp s') -->
(∀ s inp s'. pre_p inp s --> rely_p s s' --> pre_p inp s'))
which is a more generic theorem. I am using Isabelle 2013-2.
I know how to prove the theorem by expanding pre_p and rely_p, but this would not suit my needs, because my intention is to reuse theorems already proved as lemmas to prove other occurrences of their statements, instead of repeating the proof steps.
To make the discussion shorter, the problem can be reduced to the question in the title (i.e. (∀ s inp. pre_p inp s) --> (∀ s inp. pre_p inp s)).
Thanks for any contribution towards a clarification of this issue!
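(An observation on the reduced question: the goal is literally an implication of the form Q --> Q, with pre_p and rely_p as free variables, so unless some detail of the surrounding setup interferes it should close without unfolding anything, e.g. by `blast`. This is only a guess from the fragment shown.)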
A robot for functional testing, and virtuality for structural testing.
Many current methods for assessing risk depend on probabilities and treat events as randomly distributed occurrences. This does not fit intentional acts, which can be persistent, recurring, focused, etc. I am looking for any new work on methods for assessing events that are not randomly distributed and for aggregating such risks. A closely related topic would be any new work on aggregating very-low-probability or rare events. Does anyone have pointers or suggestions for whom to read and where to look?
When a decision maker makes a decision under certainty, it is assumed that the decision maker has complete knowledge of the problem and that all potential outcomes are known. Why, then, do we need a decision support system?
Formal methods have mathematical foundations; they are based on algebra or finite state machines. Are they practical? Do they justify their costs?
Is it better to use lightweight implementations of formal methods in industrial projects to reduce costs and increase flexibility and practicality?
I am looking for tool chains (or even a model-based engineering methodology) that enable formal verification of ERTMS (railway signalling) systems. Something along the lines of how Prover works with Simulink and SCADE, but preferably a symbolic tool like NuSMV, or other industrially viable tools with some path to formal verification.
Agile development is concerned with the rapid development of software. In agile development, customer involvement plays an important role, and the software is updated and upgraded according to customer requirements.
Formal verification is concerned with mathematical approaches to verifying software. How can we integrate formal approaches (there are many types of formal techniques) into the agile development process?
Concerning model-based testing: I have three models (FSM, EFSM, and LTS for ioco) of the same system. The main objective of these models is to generate test cases for the system and then conduct a conformance test.
I need to benchmark or validate the LTS-for-ioco model, and hence the testing method. What are the criteria for validation? Or what items of comparison should I use?
I'm looking for a journal to submit my paper to; the paper is about modeling context-aware systems.
Can anybody using VDM guide me about its tools?
For verification of code (in Java) I want to use JML, and since JML is tedious for the developer to write by hand, I want to generate it from other artifacts, such as UML models.
Do you have any experience with such work? I mean, do you know of the possibility of generating useful specifications from those models, or of any alternative way to generate them? (A toy sketch of the idea follows below.)
I appreciate it in advance.
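A toy sketch of the generation direction, assuming the model is already available as plain data (e.g. exported from a UML/OCL tool). The record format and the example operation are invented for illustration; only the JML comment syntax (requires/ensures, \old) is standard:

```python
# Hypothetical model: one operation with pre- and postconditions,
# e.g. extracted from OCL constraints on a UML class diagram.
OPERATIONS = [
    {"name": "withdraw", "params": "int amount",
     "pre": ["amount > 0", "amount <= balance"],
     "post": ["balance == \\old(balance) - amount"]},
]

def to_jml(op):
    """Render one operation record as a JML-annotated method header."""
    lines = ["/*@"]
    lines += [f"  @ requires {p};" for p in op["pre"]]
    lines += [f"  @ ensures {q};" for q in op["post"]]
    lines.append("  @*/")
    lines.append(f"public void {op['name']}({op['params']});")
    return "\n".join(lines)

for op in OPERATIONS:
    print(to_jml(op))
```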
I am looking for a research topic in specification and verification using JML. It is for my M.Sc. thesis.
Any idea is appreciated.
I am looking for current applications, ideas, and challenges of formal methods in model-driven development (MDD), in order to evaluate and analyse them.
I know, for example, that formal-methods tools are used to check a model for correctness. What are other usages, in particular specific usages and challenges in MDD?
Also, what is missing in the area, for example in terms of a tool that could benefit the field?
Any idea would be highly appreciated.
I would like to implement MDA with a fully formal language instead of UML as the modeling language, but I am out of ideas. My goal is a full MDA implementation, transforming a very abstract model into code. I have studied a few customized UML profiles with stereotypes, but those made model development rather unconventional, and UML is not precise enough.
Any idea would be appreciated.
Which diagrams/aspects of UML are not formal and need an external formal technique or tool to analyse the produced model? I mean external utilities that are not in the existing dominant UML CASE tools, such as Rational Rose.
Dominance relationships are used in compilers and in extended finite state machines (EFSMs). In compilers they indicate which statements are necessarily executed before others, while in EFSMs they are used to determine control dependence between transitions. Pre-dominance and post-dominance are the two types of dominance relationship.
I have a question regarding control dependency: in all the articles I have read so far, post-dominance is always the notion used for control dependency in EFSMs. Can we use pre-dominance instead of post-dominance for EFSM control dependency? If yes, what are the conditions for determining the control dependence between two transitions in an EFSM? Thank you in advance.
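For reference, the standard iterative (pre-)dominator computation is sketched below; post-dominators are the same algorithm run on the reversed graph from the exit node, which is one easy way to experiment with both notions. The example graph is invented:

```python
def dominators(graph, entry):
    """graph: node -> list of successors. Returns node -> set of dominators."""
    nodes = set(graph)
    preds = {n: set() for n in nodes}
    for n, succs in graph.items():
        for s in succs:
            preds[s].add(n)
    dom = {n: set(nodes) for n in nodes}   # start from "everything dominates"
    dom[entry] = {entry}
    changed = True
    while changed:                          # iterate to a fixed point
        changed = False
        for n in nodes - {entry}:
            if preds[n]:
                new = set.intersection(*(dom[p] for p in preds[n])) | {n}
            else:
                new = {n}                   # unreachable node
            if new != dom[n]:
                dom[n], changed = new, True
    return dom

# A diamond: a -> {b, c} -> d.  Neither b nor c dominates d.
G = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
print(dominators(G, "a"))      # dom(d) == {a, d}

# Post-dominators of G: run the same algorithm on the reversed graph
# from the exit node "d".
R = {"a": [], "b": ["a"], "c": ["a"], "d": ["b", "c"]}
print(dominators(R, "d"))
```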
Attempting to understand the boundary between formal and informal language types.
I am seeking an interesting topic in that field for my master's thesis.
Partial evaluation is a technique for program specialization. (A one-line illustration follows below.)
Looking for a set of specific problems to work on for my doctoral work.
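The classic one-function illustration: specializing the general power function for a statically known exponent removes the loop and the exponent argument entirely. The residual-program generator here is a hand-written stand-in for what a partial evaluator's mix step would do:

```python
def power(base, exp):
    # General program: both arguments dynamic.
    result = 1
    for _ in range(exp):
        result *= base
    return result

def specialize_power(exp):
    """Residual program for a fixed exponent: fully unrolled multiplication."""
    body = " * ".join(["base"] * exp) if exp > 0 else "1"
    return eval(f"lambda base: {body}")

cube = specialize_power(3)     # residual program: lambda base: base * base * base
print(power(5, 3), cube(5))    # 125 125
```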
I'm working on a new article about context-aware modeling, and I'm unsure how to conduct the case study.
The investigators here are searching for a way of saying that a test matched its requirements, met the expectations of the applicable standards, and was of high quality. They drew the term from hardware testing. All of this obviously overlaps with notions of test validation, test coverage (of many sorts), and other test-effectiveness measures. Have any of you developed a metric for the fit of a test against requirements, standards, etc.? Any references are very welcome.
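A toy sketch of one crude fit metric, assuming a requirements-to-tests traceability matrix is available (all identifiers invented): combine the fraction of requirements exercised by at least one test with the fraction of tests that trace to some requirement (untraced tests suggest scope creep or missing specs):

```python
REQUIREMENTS = {"R1", "R2", "R3", "R4"}
TRACE = {                       # test id -> requirements it exercises
    "T1": {"R1", "R2"},
    "T2": {"R2"},
    "T3": set(),                # traces to nothing
}

def fit_metric(reqs, trace):
    covered = set().union(*trace.values())
    req_coverage = len(covered & reqs) / len(reqs)
    traced_tests = sum(1 for t in trace.values() if t & reqs) / len(trace)
    return {"requirement_coverage": req_coverage,
            "traced_test_fraction": traced_tests}

print(fit_metric(REQUIREMENTS, TRACE))
# -> {'requirement_coverage': 0.5, 'traced_test_fraction': 0.666...}
```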
I am not sure this is the right forum for this kind of question, but I ask it here since the topic reflects my research area.
I am creating a project that transforms C code into a flow graph. Please suggest any tool or materials. Should I translate the compiler's intermediate code into some specification of a graph transformation system? (A minimal sketch appears below.)
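A very small sketch using pycparser (a real C parser for Python, assumed installed) to build a statement-level flow graph for a restricted C subset: straight-line statements plus if/else. Production tools usually work on a compiler IR instead, e.g. LLVM IR obtained via clang -S -emit-llvm:

```python
from pycparser import c_parser, c_ast

SOURCE = """
int f(int x) {
    int y = 0;
    if (x > 0)
        y = 1;
    else
        y = 2;
    return y;
}
"""

edges, counter = [], [0]

def new_node(stmt):
    counter[0] += 1
    return (counter[0], type(stmt).__name__)

def build(stmts, preds):
    """Link each statement to its predecessors; return the exit nodes."""
    for stmt in stmts:
        node = new_node(stmt)
        edges.extend((p, node) for p in preds)
        if isinstance(stmt, c_ast.If):
            # Both branches flow out of the If node and merge afterwards.
            t = build([stmt.iftrue], [node])
            f = build([stmt.iffalse], [node]) if stmt.iffalse else [node]
            preds = t + f
        else:
            preds = [node]
    return preds

ast = c_parser.CParser().parse(SOURCE)
func = ast.ext[0]                       # the FuncDef for f
build(func.body.block_items, [])
for src, dst in edges:
    print(src, "->", dst)
```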
I'm currently writing a short essay on Formal Verification and Program Derivation, and I would like to include an example of an algorithm that seems to be correct under a good number of test cases, but fails for very specific ones. The idea is to present the importance of a logical, deductive approach to infer the properties of an algorithm, in order to avoid such mistakes.
I can't quite think of a good example, though, because I'd like something simple that a layman can understand, but all the examples I have come up with relate to NP-complete graph-theory problems (usually an algorithm that is correct for a subset of the inputs but not for the general problem).
Could someone help me think of a simpler example?
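One layman-friendly candidate (a suggestion): the naive leap-year test, which passes the vast majority of randomly chosen years and fails only on specific century years. (The Bentley/Bloch binary-search midpoint overflow is another classic, for languages with fixed-width integers.)

```python
def is_leap_naive(year):
    return year % 4 == 0            # looks right; wrong for 1900, 2100, ...

def is_leap(year):
    # Gregorian rule: divisible by 4, except centuries not divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

failures = [y for y in range(1800, 2401) if is_leap_naive(y) != is_leap(y)]
print(failures)   # [1800, 1900, 2100, 2200, 2300] -- rare, specific inputs
```

Random testing over a few decades of years would almost never expose the bug, while the correctness argument from the calendar specification finds it immediately, which is exactly the point of the essay.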