Science topic

Formal Languages - Science topic

Explore the latest questions and answers in Formal Languages, and find Formal Languages experts.
Questions related to Formal Languages
  • asked a question related to Formal Languages
Question
17 answers
Right now, in 2022, we can read with perfect understanding mathematical articles and books
written a century ago. It is indeed remarkable how the way we do mathematics has stabilised.
The difference between the mathematics of 1922 and 2022 is small compared to that between the mathematics of 1922 and 1822.
Looking beyond classical ZFC-based mathematics, a tremendous amount of effort has been put
into formalising all areas of mathematics within the framework of program-language implementations (for instance Coq, Agda) of the univalent extension of dependent type theory (homotopy type theory).
But Coq and Agda are complex programs which depend on other programs (OCaml and Haskell) and frameworks (for instance operating systems and C libraries) to function. In the future, if we have new CPU architectures, then Coq and Agda will have to be compiled again, and OCaml and Haskell along with them.
Both software and operating systems are rapidly changing and have always been so. What is here today is deprecated tomorrow.
My question is: what guarantee do we have that the huge libraries of the current formal mathematics projects in Agda, Coq or other languages will still be relevant or even "runnable" (for instance, type-checkable) without having to resort to emulators and computer archaeology 10, 20, 50 or 100 years from now?
10 years from now, will Agda be backwards-compatible enough to still recognise current Agda files?
Have there been any organised efforts to guarantee permanent backward compatibility for all future versions of Agda and Coq? Or OCaml and Haskell?
Perhaps the formal mathematics project should be carried out within a meta-programming language: a simpler, more abstract framework (with a uniform syntax) comprehensible at once to logicians, mathematicians and programmers, and convertible automatically into the latest version of Agda or Coq?
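One reason such a simpler framework seems plausible is that the trusted core of a proof checker can be very small. As a rough illustration only (a hypothetical toy format, nothing like the actual kernels of Agda or Coq), here is a Hilbert-style checker for implicational logic in a few lines of Python:

```python
# Toy proof checker (hypothetical format, not Agda/Coq): formulas are
# nested tuples ("->", A, B) or atom names; a proof is a list of steps.

def is_axiom(f):
    """Instances of K: A->(B->A), and S: (A->(B->C)) -> ((A->B)->(A->C))."""
    if f[0] != "->":
        return False
    _, a, b = f
    if b[0] == "->" and b[2] == a:                       # K
        return True
    return (a[0] == "->" and a[2][0] == "->"             # S
            and b[0] == "->"
            and b[1] == ("->", a[1], a[2][1])
            and b[2] == ("->", a[1], a[2][2]))

def check(proof):
    """Accept a proof iff every step is an axiom or follows by modus ponens."""
    derived = []
    for f in proof:
        if not (is_axiom(f) or
                any(p == ("->", q, f) for p in derived for q in derived)):
            return False
        derived.append(f)
    return True

# The classic five-step derivation of A -> A:
A, AA = "A", ("->", "A", "A")
proof = [
    ("->", A, ("->", AA, A)),                            # K
    ("->", ("->", A, ("->", AA, A)),
           ("->", ("->", A, AA), AA)),                   # S
    ("->", ("->", A, AA), AA),                           # MP
    ("->", A, AA),                                       # K
    AA,                                                  # MP
]
print(check(proof))  # True
```

A kernel this small is trivially portable across architectures; the hard part, of course, is everything the real systems add on top (dependent types, universes, tactics), which is exactly where the longevity question bites.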
Relevant answer
Answer
I just encountered a notification about this article on Mathematical Proofs and the value of Proof Assistants, https://www.ams.org/journals/notices/202401/rnoti-p79.pdf
I resonate most with the positions of Lamport (author of LaTeX and a Turing Award laureate) and Lawrence Paulson (author of ML for the Working Programmer and working very much in proof assistants). I think it will be clear that the situation regarding the presentation of proofs and the incorporation of such proofs in mathematical publication is still very much up in the air.
I also did a ResearchGate search on "Proof Assistant", and the fire-hose of articles confirmed my view that this area is still stabilizing, although there are extensively favored approaches.
Here is the above paper's abstract:
“A proof is one of the most important concepts of mathematics. However, there is a striking difference between how a proof is defined in theory and how it is used in practice. This puts the unique status of mathematics as exact science into peril. Now may be the time to reconcile theory and practice, i.e., precision and intuition, through the advent of computer proof assistants. This used to be a topic for experts in specialized communities. However, mathematical proofs have become increasingly sophisticated, stretching the boundaries of what is humanly comprehensible, so that leading mathematicians have asked for formal verification of their proofs. At the same time, major theorems in mathematics have recently been computer-verified by people from outside of these communities, even by beginning students. This article investigates the different definitions of a proof, the gap between them, and possibilities to build bridges. It is written as a polemic or a collage by different members of the communities in mathematics and computer science at different stages of their careers, challenging well-known preconceptions and exploring new perspectives.”
There is already an objection to this material on the list where I found it. The objection is to this statement: "This puts the unique status of mathematics as exact science into peril.” That statement disturbs me too, but maybe not for the same reason.
  • asked a question related to Formal Languages
Question
3 answers
Formal systems are well-known deductive systems representing some aspects of the environment, nature, or thinking, or, more frequently, abstract representations of those subjects.
But, assuming the existence of some syntactically correct representation of the real world, could a complete and consistent set of axioms be inferred from it? Is there any approach to this task? Or at least any clue?
Relevant answer
Peter Breuer Thank you for your comment; it clarified and pinpointed the precise idea that I had in mind when I asked the question. This general theoretical question arose from a practical problem: the completion of (knowledge) graphs and the need to create an ontology, based on a set of axioms, in order to admit or reject new links (arcs) and objects (nodes). In effect, we are talking about a monotonic extension of the given original set.
In fact, the original set of statements is implicitly assumed to be consistent (nothing says otherwise). But here is a small detail: speaking of "consistency" could be interpreted as assuming the existence of some kind of formal system. The problem should probably be posed as emerging from a situation in which there is some relation (binary, for simplicity), written (of course) in some formalised language (hence in well-formed formulas), representing a set of non-contradictory statements (here, non-contradictory means the absence of a statement together with its semantically understood negation).
And yes, the set of all statements is a (trivial) system of axioms.
As for the assumption of the presence of some kind of logic, it is not assumed. Here it matters what is meant by logic: as far as I know, despite their historical and genetic relationship (in the sense of genesis), logic and formal systems are not synonymous; logic (in the syntactic approach) is (mainly) a kind of formal system.
Well, here I end my ramblings. Thanks for the comment.
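The weak notion of non-contradiction used above (no statement together with its negation) is easy to make executable. A minimal Python sketch, with a hypothetical representation of statements as (atom, polarity) pairs, of the admit-or-reject rule for extending a knowledge graph:

```python
# Minimal sketch: statements as (atom, polarity) pairs; a set is
# "consistent" here iff no atom appears both asserted and negated.

def is_consistent(statements):
    """statements: iterable of (atom, polarity) pairs, polarity in {True, False}."""
    seen = {}
    for atom, polarity in statements:
        if atom in seen and seen[atom] != polarity:
            return False  # found both P and not-P
        seen[atom] = polarity
    return True

# A new link is admitted only if adding it keeps the set consistent
# (a monotonic extension in the sense discussed above).
def admits(statements, new_statement):
    return is_consistent(list(statements) + [new_statement])

facts = [("edge(a,b)", True), ("edge(b,c)", True)]
print(is_consistent(facts))                 # True
print(admits(facts, ("edge(a,b)", False)))  # False: contradicts an existing link
```

This only captures the semantic, negation-pair notion of consistency from the discussion; once a logic with inference rules is assumed, consistency of the deductive closure is a much harder (in general undecidable) problem.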
  • asked a question related to Formal Languages
Question
4 answers
The concept of a formal system and its properties appears frequently in many practical and theoretical components of computer science: methods, tools, theories, etc.
But it is also frequent to find non-rigorous interpretations of "formal". For example, in several definitions of ontology, "formal" is understood as something that "a computer can understand".
Does a computer science specialist at the BSc level need to know this concept? Are it and its properties useful to them?
Relevant answer
Answer
For me, some basic knowledge of formal systems is essential for undergraduate CS students, as it is a necessary part of a scientific education.
Without it, you might still learn a lot about software construction, but you would neither know nor understand its bedrock.
  • asked a question related to Formal Languages
Question
7 answers
Dear all,
I am doing research on the advantages of formal language learning beyond the classroom; are there any references or articles on this topic?
Relevant answer
Answer
Thank you for your kind reply. I was curious to know if there was any specific article about that subject.
Thanks for the cooperation,
Giorgia
  • asked a question related to Formal Languages
Question
14 answers
Hello everyone,
Could you recommend courses, papers, books or websites about modeling language and formalization?
Thank you for your attention and valuable support.
Regards,
Cecilia-Irene Loeza-Mejía
Relevant answer
Answer
Kindly check also the following very good RG link:
  • asked a question related to Formal Languages
Question
6 answers
Hello everyone,
Could you recommend papers, books or websites about mathematical foundations of artificial intelligence?
Thank you for your attention and valuable support.
Regards,
Cecilia-Irene Loeza-Mejía
Relevant answer
Mathematics helps AI scientists solve challenging, deeply abstract problems using traditional methods and techniques that have been known for hundreds of years. Math is needed for AI because computers see the world differently from humans: where a human sees an image, a computer sees a 2D or 3D matrix. With the help of mathematics we can feed these dimensions into a computer, and linear algebra is what processes the resulting data sets.
Here you can find good sources for this:
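The "an image is a matrix" point above can be made concrete in a few lines. A small pure-Python sketch (illustrative values only, no real image data): a grayscale image is just a 2D array of intensities, and a linear operation on it can be written as a matrix-vector product.

```python
# A 2x3 "grayscale image" is nothing but a matrix of intensities (0-255).
image = [
    [0, 128, 255],
    [64, 32, 16],
]

# A plain matrix-vector product, the basic operation of linear algebra.
def matvec(matrix, vector):
    return [sum(m * v for m, v in zip(row, vector)) for row in matrix]

pixels = [p for row in image for p in row]  # flatten to a 6-vector
# Brightness halving written as 0.5 * I, to stress the linear-algebra view.
scale = [[0.5 if i == j else 0.0 for j in range(6)] for i in range(6)]

print(matvec(scale, pixels))  # [0.0, 64.0, 127.5, 32.0, 16.0, 8.0]
```

In practice a library such as NumPy does this at scale, but the underlying idea is exactly this: images become matrices, and learning becomes operations on them.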
  • asked a question related to Formal Languages
Question
5 answers
I am seeking evidence that formal language planning works. Classical instances might be Hebrew and Afrikaans. I would be most grateful for research papers which provide solid evidence of the effective impact of language planning on language ecologies. I am interested in large-scale political interventions rather than changes within micro-environments. The nature and quality of the evidence supporting claims of language-planning efficacy is obviously crucial.
Language plans abound, but I would be most grateful to be pointed in the direction of data which shows that these plans have worked as intended.
Relevant answer
  • asked a question related to Formal Languages
Question
5 answers
I'm working on a model for establishing trust in a cloud-with-blockchain system. I need a formal language to describe my model. So my question is: what is the best formal language to describe a blockchain consensus model?
Relevant answer
Answer
Dear Amirhossein Pourshams,
Please, see the related links:
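As a small illustration of what a formal description buys you: languages such as TLA+ describe a protocol as a state machine plus invariants that a model checker verifies. A rough Python sketch of a hypothetical quorum rule (not any specific real protocol) of the kind such models formalise:

```python
# Toy quorum rule (hypothetical, not any specific protocol): a value is
# committed once strictly more than 2/3 of the validators vote for it.
# A formal language such as TLA+ would state this as an invariant of a
# state machine; here we just evaluate the predicate directly.

def committed(votes, n_validators, value):
    """votes: dict validator -> voted value. Needs a strict 2/3 quorum."""
    support = sum(1 for v in votes.values() if v == value)
    return 3 * support > 2 * n_validators

votes = {"v1": "A", "v2": "A", "v3": "A", "v4": "B"}
print(committed(votes, 4, "A"))  # True: 3 of 4 votes
print(committed(votes, 4, "B"))  # False: 1 of 4 votes

# The safety property a model checker would verify: two distinct values can
# never both be committed, because two >2/3 quorums must overlap.
assert not (committed(votes, 4, "A") and committed(votes, 4, "B"))
```

The value of writing this in a formal language rather than code is that a checker can explore all reachable states, not just the one test run sketched here.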
  • asked a question related to Formal Languages
Question
3 answers
automata, formal languages, computation, complexity, Turing machine, recursive functions, and beyond ...
Relevant answer
Answer
Sorry, I put the same link twice. Here is the second link: https://cla.tcs.uj.edu.pl/
  • asked a question related to Formal Languages
Question
61 answers
We don't have a result yet, but what is your opinion on what it may be? For example, P = NP, P != NP, or that P vs. NP is undecidable? Or, if you are not sure, it is perfectly fine to simply state: I don't know.
Relevant answer
Answer
The answer is P=NP
  • asked a question related to Formal Languages
Question
3 answers
This is the procedure I have tried so far, and I couldn't get it to work.
As per my understanding here some definitions:
- lexical frequencies, that is, the frequencies with which correspondences occur in a dictionary or, as here, in a word list;
- lexical frequency is the frequency with which the correspondence occurs when you count all and only the correspondences in a dictionary.
- text frequencies, that is, the frequencies with which correspondences occur in a large corpus.
- text frequency is the frequency with which a correspondence occurs when you count all the correspondences in a large set of pieces of continuous prose ...;
You will see that lexical frequency produces much lower counts than text frequency, because in lexical frequency each correspondence is counted only once per word in which it occurs, whereas text frequency counts each correspondence multiple times, depending on how often the words in which it appears occur.
When referring to the frequency of occurrence, two different frequencies are used: type and token. Type frequency counts a word once.
So I understand that probably lexical frequencies deal with types counting the words once and text frequencies deal with tokens counting the words multiple times in a corpus, therefore for the last, we need to take into account the word frequency in which those phonemes and graphemes occur.
So far I have managed phoneme frequencies as follows:
Phoneme frequencies:
Lexical frequency: (single count of a phoneme per word / total number of counted phonemes in the word list) * 100 = lexical frequency % of a specific phoneme in the word list.
Text frequency is similar, but then I fail when trying to add in the word frequencies from the word list: (all counts of a phoneme per word / total number of counted phonemes in the word list) * 100, versus (sum of the word frequencies of the targeted words that contain the phoneme / total sum of the frequencies of all the words in the list) = text frequency % of a specific phoneme in the word list.
PLEASE HELP ME TO FIND A FORMULA ON HOW TO CALCULATE THE LEXICAL FREQUENCY AND THE TEXT FREQUENCY of phonemes and graphemes.
Relevant answer
Answer
Hello,
For calculating the lexical frequency of simple or complex units, WordSmith or AntConc is usually used.
Regards
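For what it's worth, the two counts defined in the question above are easy to pin down in code. A small Python sketch with toy, hypothetical data (invented words, phoneme strings and corpus frequencies): lexical frequency counts each word once (types), text frequency weights each word by its corpus frequency (tokens).

```python
# Toy word list: word -> (phonemes, corpus frequency). Hypothetical data.
words = {
    "cat":  (["k", "ae", "t"], 50),
    "tack": (["t", "ae", "k"], 10),
    "at":   (["ae", "t"], 40),
}

def lexical_frequency(phoneme):
    """% of phoneme tokens in the word list, each word counted once (types)."""
    total = sum(len(ph) for ph, _ in words.values())
    count = sum(ph.count(phoneme) for ph, _ in words.values())
    return 100 * count / total

def text_frequency(phoneme):
    """Same count, but each word weighted by its corpus frequency (tokens)."""
    total = sum(len(ph) * f for ph, f in words.values())
    count = sum(ph.count(phoneme) * f for ph, f in words.values())
    return 100 * count / total

print(round(lexical_frequency("t"), 1))  # 37.5: 3 of 8 phoneme slots
print(round(text_frequency("t"), 1))     # 38.5: 100 of 260 weighted slots
```

The same two functions work for graphemes: replace the phoneme lists with grapheme (letter-correspondence) lists and the formulas are unchanged.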
  • asked a question related to Formal Languages
Question
3 answers
Relevant answer
Answer
It will help if you use a suitable and powerful qualitative research software as Atlas.ti (https://atlasti.com/) or equivalent. This software allows you to introduce and research large amounts of text, written or oral, images, videos, etc. Then, you can select diverse research techniques, including frequencies, correlations, modulations, structures, and several other tools.
  • asked a question related to Formal Languages
Question
8 answers
I am currently working on a systematic literature review of model checking. Please help in this case.
Relevant answer
Answer
Start by clearly defining model checking and its related concepts, because model checking involves a suite of ideas: the type of model (e.g., automata or Petri nets), the verification logic language of a model checker, and so on. First, list some model checkers and identify their different parts: model, engine, verification language, and set of verification properties. Study each model checker and each of its parts in detail. See where each has an edge over the others and what types of application each can model, and compare their performance and applicability. Case studies are helpful for understanding the tools practically.
  • asked a question related to Formal Languages
Question
2 answers
In my research I want to prove theoretically that a model transformation is correct; specifically, to verify the transformation of a state-machine model into a fault-tree model. My current idea is to find a formal language to describe the model elements and to use a theorem prover for analysis and verification. But how to determine which formal language and theorem prover I should use to verify the correctness of the model transformation is a problem. I hope that experts in this research field can give me some advice and suggestions. You can follow my ideas or give some new comments, thank you!
Relevant answer
Answer
Hello,
You should describe the source model (state machine), the target model (fault tree) and the transformation itself using a theorem prover like Isabelle/HOL or Coq. After that, you need to define some properties of this transformation and prove their correctness using the theorem prover.
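To make the shape of that proof obligation concrete, here is a rough Python sketch with toy models and hypothetical names: a source model, a target model, the transformation, and one preservation property. An executable check stands in here for what would be a theorem in Isabelle/HOL or Coq.

```python
# Toy source model: a state machine whose failure states are marked.
# Toy target model: a fault "tree" flattened to its set of basic events.
# The correctness property: every failure state becomes a basic event.
# In Isabelle/HOL or Coq this check would be proved as a theorem for ALL
# models; here it is merely tested on one instance.

state_machine = {
    "states": {"ok", "degraded", "failed_pump", "failed_valve"},
    "failures": {"failed_pump", "failed_valve"},
}

def to_fault_tree(sm):
    """Hypothetical transformation: each failure state -> one basic event."""
    return {"top": "system_failure",
            "basic_events": {s + "_event" for s in sm["failures"]}}

def transformation_correct(sm, ft):
    return ft["basic_events"] == {s + "_event" for s in sm["failures"]}

ft = to_fault_tree(state_machine)
print(transformation_correct(state_machine, ft))  # True
```

The real work in a theorem prover is quantifying the property over all well-formed source models, which is exactly what testing cannot do.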
  • asked a question related to Formal Languages
Question
1 answer
In this validation process, the team has tried to make sense of the research by devising a working hypothesis built on scientific bases: the reference models that for years have been the pillars of the language sciences, namely the descriptive method of N. Chomsky; the lexicon-grammar method of M. Gross; the NooJ system, following M. Silberztein's transformational analysis of direct transitives; and Hofmann's probabilistic calculation, according to Probabilistic Latent Semantic Analysis. The results have given very valid, indeed irrefutable, answers, such as: mathematical laws guide and support the linguistic text, because a language, to be elevated to a universal code, must be describable with a rational scientific method. Languages can be converted into a plurality of codes. Formal languages are subject to techniques of fixity and non-compositionality, and are therefore governed by pre-established and hence predictable mathematical laws; a formal language is born of market needs and is built in the laboratory. Natural languages are subject to linguistic techniques of causality; the first kind of communication is fixed, the second is innate, because Homo sapiens transforms the contents of his mental activities into symbols (letters, numbers, etc.) according to the anthropology, sociology and natural laws of his culture, and therefore semantics belongs only to Homo sapiens, and to a particular man in the course of his history. The conjecture we postulate is that mathematical laws guide the mind of Homo sapiens in the structuring of lexies, morphs and dysmorphs in the osmotic, voluntary and innate conjecturing of human semantics.
Translated with www.DeepL.com/Translator
Relevant answer
Answer
Dear Prof. Ritamaria Bucciarelli:
Congratulations and thank you so much for asking this most important question in the context of both Applied and Computational Linguistics! In that regard, as far as the descriptive method is concerned, Chomsky (1956) undertakes highly focused research on several conceptions of linguistic structure to determine whether or not they can provide simple and "revealing" grammars that generate all of the sentences of English and only these. He finds that no finite-state Markov process that produces symbols with transitions from state to state can serve as an English grammar. Furthermore, Chomsky (1956) draws attention to the fact that the particular subclass of such processes that produce n-order statistical approximations to English does not come closer, with increasing n, to matching the output of an English grammar. In this manner, he formalizes the notion of phrase structure and shows that this gives us a method for describing language which is essentially more powerful, though still representable as a rather elementary type of finite-state process. Nevertheless, Chomsky (1956) points out that it is successful only when limited to a small subset of simple sentences. In view of that, he studies the formal properties of a set of grammatical transformations that carry sentences with phrase structure into new sentences with derived phrase structure, showing that transformational grammars are processes of the same elementary type as phrase-structure grammars. Correspondingly, he underlines that the grammar of English is materially simplified if phrase-structure description is limited to a kernel of simple sentences from which all other sentences are constructed by repeated transformations, and that this view of linguistic structure gives a certain insight into the use and understanding of language.
Subsequently, Chomsky (1956) states that general linguistic theory can be viewed as a metatheory concerned with the problem of how to choose such a grammar in the case of each particular language on the basis of a finite corpus of sentences. In particular, as indicated by Chomsky (1956), it will consider and attempt to explicate the relation between the set of grammatical sentences and the set of observed sentences. In other words, this author emphasizes that linguistic theory attempts to explain the ability of a speaker to produce and understand new sentences, and to reject as ungrammatical other new sequences, on the basis of his limited linguistic experience. To this end, Chomsky (1956) gives the following example: suppose that for many languages there are certain clear cases of grammatical sentences and certain clear cases of ungrammatical sequences, e.g., (1) and (2), respectively, in English.
(1) John ate a sandwich
(2) Sandwich a ate John.
In this case, we can test the adequacy of a proposed linguistic theory by determining, for each language, whether or not the clear cases are handled properly by the grammars constructed in accordance with this theory. For example, if a large corpus of English does not happen to contain either (1) or (2), we ask whether the grammar that is determined for this corpus will project the corpus to include (1) and exclude (2). In this sense, Chomsky (1956) points out that even though such clear cases may provide only a weak test of adequacy for the grammar of a given language taken in isolation, they provide a very strong test for any general linguistic theory and for the set of grammars to which it leads, since, as Chomsky (1956) claims, in the case of each language the clear cases must be handled properly in a fixed and predetermined manner. Thus, Chomsky (1956) foregrounds that we can take certain steps towards the construction of an operational characterization of "grammatical sentence" that will provide us with the clear cases required to set the task of linguistics significantly.
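Chomsky's clear cases (1) and (2) can be replayed mechanically with a toy phrase-structure grammar. The fragment below is an illustrative invention, not Chomsky's own formalisation; because it has no recursion, its language is finite and can simply be enumerated.

```python
# Toy phrase-structure grammar (illustrative fragment only):
#   S -> NP VP, NP -> "John" | Det N, VP -> V NP,
#   Det -> "a", N -> "sandwich", V -> "ate"
from itertools import product

rules = {
    "S":   [["NP", "VP"]],
    "NP":  [["John"], ["Det", "N"]],
    "VP":  [["V", "NP"]],
    "Det": [["a"]],
    "N":   [["sandwich"]],
    "V":   [["ate"]],
}

def expand(symbol):
    """All word sequences derivable from a symbol (terminals pass through)."""
    if symbol not in rules:
        return [[symbol]]
    sentences = []
    for rhs in rules[symbol]:
        for parts in product(*(expand(s) for s in rhs)):
            sentences.append([w for part in parts for w in part])
    return sentences

language = {" ".join(s) for s in expand("S")}
print("John ate a sandwich" in language)   # True: Chomsky's (1)
print("Sandwich a ate John" in language)   # False: Chomsky's (2)
```

Even this crude fragment separates the two clear cases, which is the "weak test of adequacy" role Chomsky assigns them; it of course also generates oddities like "John ate John", which is where the real descriptive work begins.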
More to the point, Odlin (1994: 45) asserts that, nevertheless, at the practical level, the Universal Grammar Model by Chomsky suggests that more attention must be paid by teachers to the teaching of specifically syntactic aspects of vocabulary acquisition.
Bibliographical references
  • Chomsky, N. (1956). Three models for the description of language. IRE Transactions on information theory, 2(3), 113-124. Retrieved from: (https://www.princeton.edu/~wbialek/rome/refs/chomsky_3models.pdf). [Accessed August 06, 2019].
  • Odlin, T. (Ed.). (1994). Perspectives on pedagogical grammar. Cambridge University Press.
Best wishes,
Javier.
  • asked a question related to Formal Languages
Question
16 answers
Our language is the origin and the building material of the formal languages of mathematics and physics. Artificial intelligence machines even create their own languages.
Is there research into creating new languages in order to create new science, or to simplify and make the current science more understandable? Or is it just my fantasy? Maybe if a man could see, say, in the infrared range, then he could invent new words? Maybe we should go in this direction?
How would one create a new language that describes our world and is qualitatively different from today's? Maybe we should study other creatures, like dolphins?
Relevant answer
Answer
Yes, we can make science clearer and more powerful with a new language, but we cannot neglect English. Although English is currently well established as the principal language of international scientific communication, researchers continue to publish their work in other languages too. We encourage the research community to try to tackle this issue and to propose approaches both for incorporating non-English scientific knowledge effectively and for enhancing the multilingualism of new and existing knowledge that is available only in English.
  • asked a question related to Formal Languages
Question
31 answers
The unavoidable fatal defect of the "potential infinite vs. actual infinite" confusion in the present classical idea of the infinite inevitably leads to the unceasing production of "paradox events" (different in form but the same in nature) in many infinity-related fields of the present scientific theory system. The self-contradictory (self-refuting) "self and non-self" contents in present set theory (such as T = {x | x ∉ x}) and in mathematical analysis (such as the number-of-non-number variable) are typical examples. It is true that people have been trying very hard to solve those infinity-related paradoxes, but the mistaken working idea has had very little effect. Since antiquity, people have been unaware that these unresolved "infinite paradox events" are in fact an "infinite paradox syndrome", disclosing from different angles exactly the same fundamental defects in the present classical theory of the infinite. They have not seriously studied the consanguineous ties among the paradoxes in this syndrome, nor the consanguineous relations between these paradoxes and the foundations of their related theory systems (such as the number system), nor the fundamental defects in the present classical theory of the infinite disclosed jointly by the different families of infinity paradoxes; they have merely studied, made up and developed all kinds of formal languages, formal operations and formal logics aimed at surface problems. So not only have these "infinite paradox families" never been solved, they keep developing and expanding unceasingly.
Relevant answer
Answer
Thank you dear Mr. Dennis Hamilton!
According to my studies, Zeno's great creation, the "Achilles and the Tortoise" paradox, is not just a simple mistake but a huge paradox family, and its typical modern family member is the newly discovered Harmonic Series Paradox.
Let's look at the following divergence proof of the Harmonic Series, which can still be found in many current higher-mathematics books written in all kinds of languages:
1 + 1/2 + 1/3 + 1/4 + ... + 1/n + ...                            (1)
= 1 + 1/2 + (1/3 + 1/4) + (1/5 + 1/6 + 1/7 + 1/8) + ...          (2)
> 1 + 1/2 + (1/4 + 1/4) + (1/8 + 1/8 + 1/8 + 1/8) + ...          (3)
= 1 + 1/2 + 1/2 + 1/2 + 1/2 + ...  ---> infinity                 (4)
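Whatever one makes of the philosophical claim, the bracketing in (2)-(4) is easy to replay numerically. A short Python check (standard limit-theory reading) confirms that each bracketed block sums to more than 1/2, while the number of terms needed per block doubles each time:

```python
from fractions import Fraction

# Replay the bracketing of (2)-(4): block k groups the terms
# 1/(2**k + 1) .. 1/2**(k+1), so each block has 2**k terms.
blocks = []
for k in range(1, 6):
    start, stop = 2**k + 1, 2**(k + 1)
    s = sum(Fraction(1, n) for n in range(start, stop + 1))
    blocks.append(s)
    print(f"terms 1/{start}..1/{stop} ({stop - start + 1} terms): "
          f"sum = {float(s):.4f}")

print(all(s > Fraction(1, 2) for s in blocks))  # True
```

Exact rational arithmetic (`Fraction`) is used so the comparison with 1/2 carries no floating-point doubt; the doubling block length is precisely the "different runners" feature the answer goes on to discuss.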
Because we do not know what infinitesimals are, the unavoidable practical problem that has been troubling us ever since is: how many items (including infinitesimals, of course) of the infinitely decreasing Harmonic Series can be added up by the "brackets-placing rule" to produce infinitely many numbers, each bigger than 1/2?
This kind of “infinite-infinitesimals paradox” tells us:
1. In the Harmonic Series, we can produce infinitely many numbers, each bigger than 1/2 or 1 or 100 or 100000 or 10000000000 or ..., from the infinitely many infinitesimals in the Harmonic Series by the "brackets-placing rule", thereby changing an infinitely decreasing Harmonic Series with the property Un ---> 0 into any infinite constant series with the property Un ---> constant, or into any infinitely increasing series with the property Un ---> infinity.
2. The "brackets-placing rule" for getting 1/2 or 1 or 100 or 100000 or 10000000000 or ... from the infinitely many items of the Harmonic Series corresponds to different runners with different speeds in Zeno's Paradox, while the items of the Harmonic Series correspond to the steps of the tortoise. So no matter what kind of runner (even one with the speed of a modern jet plane) races the tortoise, he will never catch up with it.
By the way, Robinson's non-standard analysis can do nothing to solve any of those suspended “infinite-infinitesimals paradoxes” either.
Sincerely yours,
Geng
  • asked a question related to Formal Languages
Question
11 answers
The following infinity-related questions have never been answered clearly and scientifically since the concepts of "infinite, potential infinite, actual infinite" entered human science:
Why have the concepts of "potential infinite" and "actual infinite" never been clearly and scientifically defined? Are they important in the present infinity-related science system? If yes, what roles do they play; if not, why have they existed in our science ever since? How can we understand the relationship between "infinity-related mathematical things" and the "potential infinite vs. actual infinite" distinction?
Our thousands-of-years history of infinity-related science has proved that it is impossible to avoid the "potential infinite vs. actual infinite" confusion in infinity-related areas of science. So people are free and arbitrary (depending simply on their likes or dislikes) whenever they treat infinity-related mathematical things, because no one knows scientifically what to do at all. The following two unresolved contradictions in present infinite set theory and mathematical analysis are typical examples:
In present mathematical analysis, on the one hand, anyone can use the "potential infinite vs. actual infinite" confusing formal language and production line to construct all kinds of infinity-related paradoxes; on the other hand, one can use exactly the same formal language and production line to construct all kinds of infinity-related "important mathematical proofs and theorems". The typical example is Zeno's construction (proof) of the "Achilles can never catch the tortoise" paradox and its modern version, the newly discovered Harmonic Series Paradox: bracketing by limit theory creates infinitely many numbers, each greater than 1/2 or 100 or 1000000000000000 or 1000000000000000000000000000000 or ..., from the Un ---> 0 Harmonic Series, and turns the Un ---> 0 Harmonic Series into a "Vn ---> any positive constant" infinite series (with infinitely many items, each bigger than any given positive constant, such as 100000000000000000000000000000). Our studies have proved that both the newly discovered Harmonic Series Paradox and the 300-year-old Berkeley Paradox are different versions of Zeno's Paradox; they are members of the Zeno's Paradox family. The dt ---> 0 increments of infinitesimals are allowed to "be 0 (dt = 0), take the limit (lim dt = 0), take the standard number (dt = 0)" during the process of differentiation, just because we dislike doing it at first and then suddenly change our mind and like doing it at the end of the computation; while the Un ---> 0 infinitesimal items are not allowed to "be 0 (Un = 0), take the limit (lim Un = 0), take the standard number (Un = 0)" during the bracketing process used to prove the divergence of the Harmonic Series, just because we keep disliking it during the whole computation.
The defect of the "likes-dislikes operation on infinitesimals" is the essence of the Second Mathematical Crisis triggered by the Berkeley Paradox; it is impossible to solve at all within the present "potential infinite vs. actual infinite" science and mathematics.
Relevant answer
Answer
Just as Peter, I can't make anything out of
" can we treat those “1/n--->0 standard real numbers in Harmonic Series” in the way of “letting be 0(1/n = 0), taking the limit(lim 1/n =0), taking the standard number(1/n =0)” as what we have been doing in the computing process of differentiation (derivation)? "
  • asked a question related to Formal Languages
Question
63 answers
Do the formal languages of logic share so many properties with natural languages that it would be nonsense to separate them in more advanced investigations or, on the contrary, are formal languages a sort of ‘crystalline form’ of natural languages so that any further logical investigation into their structure is useless? On the other hand, is it true that humans think in natural languages or rather in a kind of internal ‘language’ (code)? In either of these cases, is it possible to model the processing of natural language information using formal languages or is such modelling useless and we should instead wait until the plausible internal ‘language’ (code) is confirmed and its nature revealed?
The above questions therefore concern the following, possibly triangular, relationship: (1) formal (symbolic) language vs. natural language, (2) natural language vs. internal ‘language’ (code) and (3) internal ‘language’ (code) vs. formal (symbolic) language. There are different opinions regarding these questions. Let me quote three of them: (1) for some linguists, for whom “language is thought”, there should probably be no room for the hypothesis of two different languages such as the internal ‘language’ (code) and the natural language, (2) for some logicians, natural languages are, in fact, “as formal languages”, (3) for some neurologists, there should exist a “code” in the human brain but we do not yet know what its nature is.
Relevant answer
Answer
Dear André,
do you remember some of the 2/3 of your thoughts which you thought in internal language? How do you do that?
Can you rethink them and discuss them internally? How do you do that?
Are among those thoughts some which cannot come into mind like a picture (because they are in some way more abstract)? How do you remember them, rethink them, and discuss them internally?
  • asked a question related to Formal Languages
Question
2 answers
A TQBF is a boolean formula with alternating existential and universal quantifiers. The boolean formula here is in conjunctive normal form (CNF).
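For concreteness, a TQBF instance can be evaluated by naive recursion over the quantifier prefix. This is a toy sketch of my own (names and the DIMACS-style signed-integer encoding are assumptions, not part of the question):

```python
def eval_cnf(cnf, assignment):
    """A CNF is a list of clauses; a clause is a list of signed ints
    (DIMACS style: 3 means x3, -3 means NOT x3)."""
    return all(any(assignment[abs(l)] == (l > 0) for l in clause)
               for clause in cnf)

def eval_tqbf(prefix, cnf, assignment=None):
    """prefix: list of ('A', var) / ('E', var) pairs, outermost quantifier first."""
    assignment = dict(assignment or {})
    if not prefix:
        return eval_cnf(cnf, assignment)
    (q, v), rest = prefix[0], prefix[1:]
    branches = (eval_tqbf(rest, cnf, {**assignment, v: b}) for b in (False, True))
    return all(branches) if q == 'A' else any(branches)

# forall x1 exists x2 : (x1 or x2) and (not x1 or not x2)
print(eval_tqbf([('A', 1), ('E', 2)], [[1, 2], [-1, -2]]))  # True
```

Note the exponential blow-up is in the recursion depth over the variables, not in the size of the CNF itself, which is exactly the distinction the answer below turns on.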
Relevant answer
Answer
Yes! My question was about the "exponential" with respect to the number of variables. (The question arose when I was trying a problem reduction from QBF.)
I understood that the question was irrelevant in any case, as the number of disjunctive clauses in the CNF depends on the atoms. I was about to delete the question and then saw your answer. Thank you for considering my question and taking the time to answer it.
  • asked a question related to Formal Languages
Question
11 answers
How can I map one formal specification language, e.g. Alloy, into another formal language, e.g. B?
Relevant answer
Answer
Good grief. It's a wonder anyone wants to do formal specification anymore! High-handed treatment of a young student aspiring to do some work in formal methods is very depressing indeed; we all have to start somewhere. Ashish, it's great that you're asking these questions in this forum. I started in a very similar vein many years ago by developing a simple translation from Z to Prolog for simulation. It can be done, and can provide insight into the meaning (semantics) of a formal language, and also make it "real". Keep at it!
  • asked a question related to Formal Languages
Question
10 answers
Exploring the current state of the art in formal systems architecture development and production by requesting information about the current practices associated with design structure matrices.
Relevant answer
Answer
It would appear to me that others may also have thought about using DSM for forming an autonomous organization. I would like to hear from anyone who may have implemented this concept and learn of their experience. Has anyone published anything on it?
And I would like to know of anyone who may be interested in the Explainer. I have used the Explainer to show how Congress can solve some of the problems that cause them to be in gridlock, but I have not seen any evidence that these problems are being solved. If anyone else has come up with such a method, I would like to hear about it. Maybe someone has come up with such a method and has also had trouble getting people to look below the surface to see how it works and can be used.
Don
  • asked a question related to Formal Languages
Question
6 answers
I'm reading about context-free grammars and I understand how to eliminate left recursion, but I couldn't find out what the problem with left recursion actually is. Can anyone explain?
Thanks in advance
Relevant answer
Answer
Dear Ahmed
the problem with left recursion, from a computational linguistics point of view, is that it leads to infinite recursion, as mentioned in other posts. And, sadly, linguists do tend to write an awful lot of such rules, as the example below shows (a very naive DCG grammar for English relative clauses). If you 'consult' this grammar with SWI-Prolog, all will apparently run smoothly, because SWI-Prolog can deal with such recursive rules appropriately. If you submit the goal "s([the,man,that,he,knows,sleeps],[]).", you'll get "true" as an answer. But if you ask SWI-Prolog to search for more results (";"), then you'll get an "Out of local stack" error because of the left recursion.
The general strategy is "transform your left-recursive rules into right-recursive ones". That means you must tweak your grammar to eliminate such left-recursive rules and transform them into right-recursive ones, with the help of an intermediate non-terminal (see e.g. http://web.cs.wpi.edu/~kal/PLT/PLT4.1.2.html).
From an algorithmic point of view, different approaches have been published for dealing with such left-recursive rules (as said earlier, this is how linguists spontaneously write formal grammars). If you're looking for algorithms, you can have a look at Bob Moore's paper http://research.microsoft.com/pubs/68869/naacl2k-proc-rev.pdf.
s --> np, vp.
np --> det, n.
np --> np, relc. % this is a left-recursive rule
relc --> pror, pro, vt.
vp --> vi.
vp --> vt, np.
det --> [the].
n --> [man].
pro --> [he].
pror --> [that].
vt --> [knows].
vi --> [sleeps].
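To make the "transform left-recursive rules into right-recursive ones" advice concrete outside Prolog, here is a toy recursive-descent sketch of my own for the same grammar (function names and the token list are illustrative assumptions):

```python
TOKENS = ['the', 'man', 'that', 'he', 'knows', 'sleeps']

# Naive descent on the left-recursive rule  np --> np, relc  would call
# np() again without consuming any input:
#
#     def np(pos):
#         return relc(np(pos))   # re-enters np(pos): infinite recursion
#
# The standard fix is right recursion via an intermediate non-terminal:
#     np      --> det, n, np_tail.
#     np_tail --> relc, np_tail.
#     np_tail --> [].            % epsilon

def det(pos):  return pos + 1 if TOKENS[pos:pos + 1] == ['the'] else None
def n(pos):    return pos + 1 if TOKENS[pos:pos + 1] == ['man'] else None

def relc(pos):
    # relc --> pror, pro, vt  ("that he knows")
    if TOKENS[pos:pos + 3] == ['that', 'he', 'knows']:
        return pos + 3
    return None

def np_tail(pos):
    nxt = relc(pos)               # consume as many relative clauses as match
    return np_tail(nxt) if nxt is not None else pos

def np(pos):
    pos = det(pos)
    if pos is None:
        return None
    pos = n(pos)
    if pos is None:
        return None
    return np_tail(pos)

def s(pos):
    pos = np(pos)
    if pos is not None and TOKENS[pos:pos + 1] == ['sleeps']:
        return pos + 1
    return None

print(s(0) == len(TOKENS))  # the whole sentence parses
```

Every recursive call here consumes at least one token before recursing, which is exactly the property the left-recursive rule lacked.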
  • asked a question related to Formal Languages
Question
3 answers
Hello to all. I can't find any contributions to machine translation using pregroup grammars or the Lambek calculus on the net. I am working on this and wanted to know if there is any literature.
Relevant answer
Answer
Dear Muhammad,
please find the file in the attachment.
Hope it helps.
  • asked a question related to Formal Languages
Question
3 answers
.
Relevant answer
Answer
---- Available VDM tools ----
SpecBox: syntax checking, document printing, semantic analysis
VDM through Pictures (VtP): editing, visual specification
mural: proof support
VDM Domain Compiler: code generation
  • asked a question related to Formal Languages
Question
5 answers
With a grammar (in BNF form) and source code as input, ANTLR generates an AST/parse tree with tokens as terminals and "nil" as the parent/root in this case. This is not an appropriate tree. I came to know that the grammar has to be rewritten in order to generate a proper AST/parse tree, but I couldn't find any appropriate rules for rewriting the grammar.
Relevant answer
Answer
I would like to know the steps that are to be followed in rewriting grammar, so that ANTLR generates appropriate parse trees.
  • asked a question related to Formal Languages
Question
22 answers
A word is primitive if it is not a power (with concatenation as multiplication) of another word: 0101 is not primitive, while 01010 is.
For more than 20 years people have been trying to prove that the language consisting of all primitive words over two or more letters is not context-free. Without success. Do you have an idea?
Relevant answer
Answer
Peter, you are right.
From your proposition, we can characterise the primitive words exactly, as follows.
Let us call CP(x) the set of all cyclic permutations of x.
Then the primitive words over an alphabet E, PW(E), will be the set of words x built from E such that the cardinal of CP(x) is equal to the length of x.
E.g
"abc" gives CP(abc) = { bca, cab, abc } (thus |CP(abc)| = 3 = |abc| ) and therefore abc is primitive
whereas
"abab" gives CP(abab)={ baba, abab } and by definition is not primitive
More formally (we exclude the zero length words):
PW(E) = { x in E* such that |CP(x)| = |x| > 0 }
Hope this could help at some point !
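The cyclic-permutation test above is easy to check mechanically; a small sketch (function names are my own):

```python
def cyclic_permutations(x):
    """CP(x): the set of all rotations of the word x."""
    return {x[i:] + x[:i] for i in range(len(x))}

def is_primitive(x):
    """x is primitive iff it is nonempty and has |x| distinct rotations."""
    return len(x) > 0 and len(cyclic_permutations(x)) == len(x)

print(sorted(cyclic_permutations('abc')))  # ['abc', 'bca', 'cab']
print(is_primitive('abc'), is_primitive('abab'), is_primitive('01010'))
```

A word of the form w^k with k > 1 has at most |w| distinct rotations, which is why the cardinality of CP(x) drops below |x| exactly for the non-primitive words.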
  • asked a question related to Formal Languages
Question
28 answers
I am looking for tool chains (even a model based engineering methodology) to enable formal verification of ERTMS (railway signalling) systems. Something along the lines of how Prover works with Simulink and SCADE, but preferably a Symbolic tool like NuSMV. or other industrially viable tools with some way to have a formal verification.
Relevant answer
Answer
Scade provides safe state machines to describe state-based behavior. But you're right: everything is finally mapped to an (at least locally) synchronous execution model.
However, when using UPPAAL or another model checker you usually do the design in another framework and build a verification model to prove your properties. That is often non-trivial, since the design model has to be abstracted effectively (e.g. UPPAAL quickly reaches its limits if the system contains a lot of variables and clocks). Second, the abstraction-refinement relation has to be proven property-preserving for the relevant properties; otherwise, verifying something for the verification model may not tell you anything about the design model.
So my first question would be whether you already have some design models for your systems, or do you construct your own verification models only?
  • asked a question related to Formal Languages
Question
13 answers
Agile development is concerned with the rapid development of software. In agile development, customer involvement plays an important role: the software is updated and upgraded according to the customer's requirements.
Formal verification is concerned with mathematical approaches to verifying software. How can we integrate these formal approaches (there are many types of formal techniques) into the agile development process?
Relevant answer
Answer
It's a matter of value. In Scrum, the PO has to be convinced that formal modelling (or other formal techniques) have value above cost. If so convinced, the PO can make the corresponding engineering practices part of the Definition of Done. The Product Owner may herself use formal specification techniques if they are made transparent and if it can be shown that the benefit outweighs the cost. Both of these, of course, depend on doing true Scrum where the PO really is the boss and really has final say over the product vision, final say over the PBIs on the Product Backlog, and the wherewithal to come together with the team on the corresponding components of the Definition of Done.
To disagree slightly with Nick: Keep in mind that working code is not the first priority — it's self-organisation and feedback in Agile, and it's people and Kaizen mind in Lean and in Scrum. Few formal methods cater well to that agenda.
To disagree slightly with Yasar: Agile says nothing against tedious work.
To disagree with Maria: A system is as weak as its weakest link. Using formal methods on one part to robustify it does not make the system robust; there are many formal proofs around this concept. What she describes takes quite advanced risk analysis techniques; these are the purview of the PO, supported by input from the development team.
  • asked a question related to Formal Languages
Question
11 answers
Colored Petri Nets is a formal technique for specification. Can any developer/researcher guide us about the tools and techniques that we can use to implement Petri-Nets in Industry?
Relevant answer
Answer
Download this tool bro....
  • asked a question related to Formal Languages
Question
9 answers
Attempting to understand the boundary between formal and informal language types.
Relevant answer
Answer
Very interesting.
I looked at a converted postscript version the first time.
A quick first scan indicated the standard tools of parsing and execution of tokens. Toward the back of the document, the text appeared to apply these techniques to groups of unknown symbols (I could not read the symbols).
I looked at another copy of the work this evening, and I can read all of the pages (all known symbols).
So, there must have been a mixup in the language and symbol set in the first document that I scanned.
Take care and have fun,
Joe