Formal Languages - Science topic
Questions related to Formal Languages
Right now, in 2022, we can read with perfect understanding mathematical articles and books
written a century ago. It is indeed remarkable how the way we do mathematics has stabilised.
The difference between the mathematics of 1922 and 2022 is small compared to that between the mathematics of 1922 and 1822.
Looking beyond classical ZFC-based mathematics, a tremendous amount of effort has been put
into formalising all areas of mathematics within the framework of program-language implementations (for instance Coq, Agda) of the univalent extension of dependent type theory (homotopy type theory).
But Coq and Agda are complex programs which depend on other programs (OCaml and Haskell) and frameworks (operating systems and C libraries, for instance) to function. If new CPU architectures appear in the future, Coq and Agda would have to be recompiled, and so would OCaml and Haskell.
Both software and operating systems are rapidly changing and have always been so. What is here today is deprecated tomorrow.
My question is: what guarantee do we have that the huge libraries of the current formal mathematics projects in Agda, Coq or other languages will still be relevant or even "runnable" (for instance type-checkable) without having to resort to emulators and computer archaeology 10, 20, 50 or 100 years from now ?
Ten years from now, will Agda be backwards compatible enough to still recognise current Agda files?
Have there been any organised efforts to guarantee permanent backward compatibility for all future versions of Agda and Coq? Or of OCaml and Haskell?
Perhaps the formal mathematics project should be carried out in a meta-programming language: a simpler, more abstract framework (with a uniform syntax) comprehensible at once to logicians, mathematicians and programmers, and convertible automatically into the latest version of Agda or Coq?
Formal systems are well-known deductive systems representing some aspects of the environment, nature or thinking, or, more frequently, abstract representations of those subjects.
But, assuming the existence of some syntactically correct representation of the real world, could a complete and consistent set of axioms be inferred from it? Is there any approach to this task? Or at least any clue?
The concept of formal system and/or its properties is present frequently in many practical and theoretical components of computer science methods, tools, theories, etc.
But it is also common to find non-rigorous interpretations of "formal". For example, in several definitions of ontology, "formal" is understood as something that "a computer can understand".
Does a computer science specialist with a BSc need to know this concept? Are it and its properties useful to them?
Dear all,
I am doing research on the advantages of formal language learning beyond the classroom; are there any references/articles about this topic?
Hello everyone,
Could you recommend courses, papers, books or websites about modeling language and formalization?
Thank you for your attention and valuable support.
Regards,
Cecilia-Irene Loeza-Mejía
Hello everyone,
Could you recommend papers, books or websites about mathematical foundations of artificial intelligence?
Thank you for your attention and valuable support.
Regards,
Cecilia-Irene Loeza-Mejía
I am seeking evidence that formal language planning works. Classical instances might be Hebrew and Afrikaans. I would be most grateful for research papers which provide solid evidence of the effective impact of language planning on language ecologies. I am interested in large-scale political interventions rather than changes within micro-environments. The nature and quality of the evidence supporting claims of language-planning efficacy is obviously crucial.
Language plans abound, but I would be most grateful to be pointed in the direction of data which shows that these plans have worked as intended.
I'm working on a model for establishing trust in a cloud with a blockchain system, and I need a formal language to describe my model. So my question is: what is the best formal language for describing a blockchain consensus model?
automata, formal languages, computation, complexity, Turing machine, recursive functions, and beyond ...
We don't have a result yet, but what is your opinion on what it may be? For example, P = NP, P != NP, or that P vs. NP is undecidable? Or, if you are not sure, it is perfectly fine to simply state: I don't know.
This is so far the procedure I was trying upon and then I couldn't fix it
As per my understanding here some definitions:
- lexical frequencies, that is, the frequencies with which correspondences occur in a dictionary or, as here, in a word list;
- lexical frequency is the frequency with which the correspondence occurs when you count all and only the correspondences in a dictionary.
- text frequencies, that is, the frequencies with which correspondences occur in a large corpus.
- text frequency is the frequency with which a correspondence occurs when you count all the correspondences in a large set of pieces of continuous prose ...;
You will see that lexical frequency produces much lower counts than text frequency, because in lexical frequency each correspondence is counted only once per word in which it occurs, whereas text frequency counts each correspondence multiple times, depending on how often the words in which it appears occur.
When referring to the frequency of occurrence, two different frequencies are used: type and token. Type frequency counts a word once.
So I understand that lexical frequencies probably deal with types, counting each word once, while text frequencies deal with tokens, counting each word as many times as it occurs in the corpus; for the latter, we therefore need to take into account the frequency of the words in which those phonemes and graphemes occur.
So far I have managed phoneme frequencies as follows.
Phoneme frequencies:
Lexical frequency: (single count of a phoneme per word / total number of counted phonemes in the word list) * 100 = lexical frequency % of a specific phoneme in the word list.
Text frequency is similar, but then I fail when trying to add in the frequencies of the words in the word list: (all counts of a phoneme per word / total number of counted phonemes in the word list) * 100 vs. (sum of the word frequencies of the targeted words that contain the phoneme / total sum of the frequencies of all the words in the list) = text frequency % of a specific phoneme in the word list.
Please help me find a formula for calculating the lexical frequency and the text frequency of phonemes and graphemes.
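One way to pin the two measures down is to compute both from the same toy data. The sketch below is only my reading of the definitions in the question, with made-up words, phoneme decompositions and corpus frequencies (all hypothetical): lexical frequency counts each phoneme slot once per word-list entry, while text frequency weights each word's phonemes by that word's corpus frequency.

```python
# Hypothetical data: word -> (phoneme decomposition, corpus token frequency).
word_list = {
    "cat": (["k", "ae", "t"], 120),
    "kit": (["k", "ih", "t"], 30),
    "at":  (["ae", "t"], 300),
}

def lexical_frequency(phoneme):
    """Type-based: every phoneme slot in the word list counts exactly once."""
    total = sum(len(ph) for ph, _ in word_list.values())
    count = sum(ph.count(phoneme) for ph, _ in word_list.values())
    return 100.0 * count / total

def text_frequency(phoneme):
    """Token-based: each word's phonemes are weighted by the word's corpus frequency."""
    total = sum(len(ph) * f for ph, f in word_list.values())
    count = sum(ph.count(phoneme) * f for ph, f in word_list.values())
    return 100.0 * count / total
```

With these numbers, /k/ occurs in 2 of the 8 phoneme slots (lexical frequency 25%), but its text frequency is lower because the /k/-less word "at" dominates the corpus counts.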
I am currently working on a systematic literature review of model checking. Please help in this case.
In my research I want to theoretically prove that a model transformation is correct. Specifically, I want to verify the transformation of a state machine model into a fault tree model. My current idea is to find a formal language to describe the model elements and to use a theorem prover for analysis and verification. But how to decide which formal language and theorem prover I should use to verify the correctness of the model transformation is a problem. I hope that experts in this research field can give me some advice and suggestions. You can follow my ideas or give some new comments, thank you!
In this validation process, the team has tried to make sense of the research, devising a working hypothesis built on scientific bases, namely the reference models that for years have been the pillars of the language sciences: the descriptive method of N. Chomsky; the Lexicon-Grammar method of M. Gross; the NooJ system, following the transformational analysis of direct transitives by M. Silberzstein; and the probabilistic calculation of Hofmann, following Probabilistic Latent Semantic Analysis. The results have given very valid and irrefutable answers, such as: mathematical laws guide and support the linguistic text, because a language, to be elevated to a universal code, must be describable with a rational scientific method. Languages can be converted into a plurality of codes. Formal languages are subject to techniques of fixity and non-compositionality, and are therefore guided by pre-established and hence predictable mathematical laws; they were born for market needs and are built in the laboratory. Natural languages are subject to linguistic techniques of causality; the first kind of communication is fixed, the second is innate, because ... Homo sapiens transforms the contents of his mental activities into symbols, i.e. letters, numbers, etc., according to the anthropology, sociology and natural laws of his culture, and therefore semantics belongs only to Homo sapiens, and to a certain man in the course of that history. The conjecture we postulate is that mathematical laws guide the mind of Homo sapiens in the structuring of lexies, morphs and dysmorphs in the osmotic, voluntary and innate conjecturing of human semantics.
Our language is the origin and the building material of the formal languages of mathematics and physics. Artificial intelligence machines even create their own languages.
Is there research on creating new languages in order to create new science, or to simplify and make current science more understandable? Or is it just my fantasy? Maybe if a man could see, say, in the infrared range, he could invent new words? Maybe we should go in this direction?
How would one create a new language describing our world that is qualitatively different from today's? Maybe we should study other creatures, like dolphins?
The unavoidable fatal defect of the "'potential infinite--actual infinite' confusion" in the present classical idea of the infinite inevitably leads to the unceasing production of "paradox events" (different in form but the same in nature) in many infinity-related fields of the present scientific theory system; the self-contradictory ("self-refutation mechanism") "self and non-self" contents in present set theory (such as T={x|x∉x}) and in mathematical analysis (such as the number-of-non-number variable) are typical examples. It is true that people have been trying very hard to solve those infinity-related paradoxes, but the mistaken working idea has had very little effect: since antiquity, people have been unaware that these unresolved "infinite paradox events" are in fact an "infinite paradox syndrome" disclosing, from different angles, exactly the same fundamental defects in the present classical theory of the infinite. They have not seriously studied the consanguineous ties among the paradoxes in the syndrome, the consanguineous relations between these paradoxes and the foundations of their related theory systems (such as the number system), or the fundamental defects in the present classical theory of the infinite disclosed jointly by the different families of infinity paradoxes; they have merely studied, made up and developed all kinds of formal languages, formal operations and formal logics aimed at solving surface problems. So not only have these "infinite paradox families" never been solved, they keep developing and expanding unceasingly.
The following infinity-related questions have never been answered clearly and scientifically since the concepts of "infinite, potential infinite, actual infinite" came into human science:
Why have the concepts of "potential infinite" and "actual infinite" never been clearly and scientifically defined? Are they important in the present infinity-related science system? If yes, what roles do they play; if not, why have they existed in our science ever since? How can we understand the relationship between "infinity-related mathematical things" and the "potential infinite--actual infinite"?
Our thousands of years of infinity-related science history have proved that it is impossible to avoid "the 'potential infinite--actual infinite' confusion" in infinity-related areas of science. So it is very free and arbitrary (depending just on one's likes or dislikes) how people treat those infinity-related mathematical things, because no one knows scientifically what to do at all. The following two unresolved contradictions in present infinite set theory and mathematical analysis are typical examples:
In present mathematical analysis, on the one hand, anyone can use "the 'potential infinite--actual infinite' confusing formal language and production line" to construct all kinds of infinity-related paradoxes; on the other hand, one can use exactly the same "formal language and production line" to construct all kinds of infinity-related "important mathematical proofs and theorems". The typical example is Zeno's construction (proof) of the "Achilles can never catch up with the turtle" paradox and its modern version, the newly discovered Harmonic Series Paradox: bracketing, by limit theory, to create infinitely many numbers each greater than 1/2, or 100, or 1000000000000000, or ... from the harmonic series with Un ---> 0, turning it into an infinite series with Vn ---> any positive constant (with infinitely many items each bigger than any positive constant). Our studies have shown that both the newly discovered Harmonic Series Paradox and the 300-year-old Berkeley Paradox are different versions of Zeno's Paradox; they are members of the Zeno's Paradox family. The dt ---> 0 increments of infinitesimals are allowed to "be 0 (dt = 0), take the limit (lim dt = 0), take the standard number (dt = 0)" during the process of differentiation, just because we dislike doing it at first and then suddenly change our mind and like doing it at the end of the computation; while the Un ---> 0 infinitesimal items are not allowed to "be 0 (Un = 0), take the limit (lim Un = 0), take the standard number (Un = 0)" during the process of bracketing to prove the divergence of the harmonic series, just because we keep disliking doing it during the whole computation.
The defect of this "likes--dislikes operation on infinitesimals" is the essence of the Second Mathematical Crisis triggered by the Berkeley Paradox; it cannot be solved at all within the present "potential infinite--actual infinite" related science and mathematics.
Do the formal languages of logic share so many properties with natural languages that it would be nonsense to separate them in more advanced investigations or, on the contrary, are formal languages a sort of ‘crystalline form’ of natural languages so that any further logical investigation into their structure is useless? On the other hand, is it true that humans think in natural languages or rather in a kind of internal ‘language’ (code)? In either of these cases, is it possible to model the processing of natural language information using formal languages or is such modelling useless and we should instead wait until the plausible internal ‘language’ (code) is confirmed and its nature revealed?
The above questions concern therefore the following possibly triangular relationship: (1) formal (symbolic) language vs. natural language, (2) natural language vs. internal ‘language’ (code) and (3) internal ‘language’ (code) vs. formal (symbolic) language. There are different opinions regarding these questions. Let me quote three of them: (1) for some linguists, for whom “language is thought”, there should probably be no room for the hypothesis of two different languages such as the internal ‘language’ (code) and the natural language, (2) for some logicians, natural languages are, in fact, “as formal languages”, (3) for some neurologists, there should exist a “code” in the human brain but we do not yet know what its nature is.
A TQBF (true quantified Boolean formula) is a fully quantified Boolean formula, with alternating existential and universal quantifiers, that evaluates to true. The Boolean formula here (the matrix) is in conjunctive normal form (CNF).
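A small recursive evaluator makes the semantics concrete. This is only an illustrative sketch (not any particular solver): variables are positive integers, literals are signed integers in DIMACS style, and the quantifier prefix is a list of (quantifier, variable) pairs. It runs in exponential time, but in space linear in the number of variables, which is why TQBF is the canonical PSPACE-complete problem.

```python
def eval_qbf(prefix, clauses, assignment=None):
    """Evaluate a fully quantified CNF formula.

    prefix: list of ('E', var) or ('A', var) pairs, e.g. [('E', 1), ('A', 2)]
    clauses: CNF matrix as lists of signed ints (2 means x2, -2 means NOT x2)
    """
    assignment = assignment or {}
    if not prefix:
        # No quantifiers left: check every clause under the full assignment.
        return all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
                   for clause in clauses)
    (q, var), rest = prefix[0], prefix[1:]
    branches = (eval_qbf(rest, clauses, {**assignment, var: value})
                for value in (True, False))
    # An existential quantifier needs some branch to succeed, a universal needs both.
    return any(branches) if q == 'E' else all(branches)
```

For example, the formula ∃x1 ∀x2 . (x1 ∨ x2) ∧ (x1 ∨ ¬x2) is true (choose x1 = true), so `eval_qbf([('E', 1), ('A', 2)], [[1, 2], [1, -2]])` returns `True`.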
How can I map one formal specification language, e.g. Alloy, into another formal language, e.g. B?
Exploring the current state of the art in formal systems architecture development and production by requesting information about the current practices associated with design structure matrices.
I'm reading about context-free grammars and I understood how to eliminate left recursion, but I did not find out what the problem with left recursion actually is. Can anyone explain?
Thanks in advance
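In case a concrete illustration helps: the problem is specific to top-down (recursive-descent / LL) parsing. For a left-recursive rule like E -> E '+' num, a recursive-descent parser's parse_E function would call parse_E again at the same input position before consuming any token, so it recurses forever; bottom-up LR parsers handle left recursion without trouble. Below is a tiny hypothetical parser (all names my own) for the grammar after the standard elimination, E -> num E' and E' -> '+' num E' | epsilon:

```python
# Grammar: E -> num E' ;  E' -> '+' num E' | epsilon
# (the left-recursion-free form of E -> E '+' num | num)

def parse_E(tokens, i=0):
    """Parse an E over a token list like ['1', '+', '2'];
    return (value, index of next unread token)."""
    if i >= len(tokens) or not tokens[i].isdigit():
        raise SyntaxError("expected number")
    return parse_E_prime(tokens, i + 1, int(tokens[i]))

def parse_E_prime(tokens, i, acc):
    """Parse an E'; note it consumes a '+' before recursing,
    so the recursion always makes progress."""
    if i < len(tokens) and tokens[i] == '+':
        if i + 1 >= len(tokens) or not tokens[i + 1].isdigit():
            raise SyntaxError("expected number after '+'")
        return parse_E_prime(tokens, i + 2, acc + int(tokens[i + 1]))
    return acc, i  # epsilon case: no more '+' to consume
```

For instance, `parse_E("1 + 2 + 3".split())` returns the summed value 6 together with the index of the next unread token. The key property is that every recursive call sits after at least one consumed token, which is exactly what the left-recursive original lacked.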
Hello to all. I can't find any contributions on machine translation using pregroup grammars or the Lambek calculus on the net. I am working on this and wanted to know whether there is any literature.
With a grammar (in BNF form) and source code as input, ANTLR is generating an AST/parse tree with tokens as terminals and "nil" as the parent/root in this case. This is not an appropriate tree. I came to know that the grammar has to be rewritten in order to generate a proper AST/parse tree, but I couldn't find any appropriate rules for rewriting the grammar.
A word is primitive if it is not a power (with concatenation as multiplication) of another word: 0101 = (01)^2 is not primitive, while 01010 is.
For more than 20 years people have been trying to prove that the language consisting of all primitive words over two or more letters is not context-free. Without success. Do you have an idea?
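Primitivity of an individual word is easy to test, which makes the open language-theoretic question all the more striking. A well-known string trick, sketched here: w is a proper power of a shorter word exactly when w occurs inside w + w at some position other than 0 and len(w), i.e. as a substring of (w + w) with its first and last characters cut off.

```python
def is_primitive(w):
    """A word w is a proper power u^k (k >= 2) iff w occurs inside
    (w + w) at a position other than 0 and len(w), i.e. iff w is a
    substring of (w + w)[1:-1]. Primitive means it is not such a power."""
    return w not in (w + w)[1:-1]
```

So `is_primitive("0101")` is false (0101 appears at position 1 inside 01010101), while `is_primitive("01010")` is true, matching the examples above.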
I am looking for tool chains (or even a model-based engineering methodology) to enable formal verification of ERTMS (railway signalling) systems. Something along the lines of how Prover works with Simulink and SCADE, but preferably a symbolic tool like NuSMV, or other industrially viable tools that offer some way to do formal verification.
Agile development is concerned with the rapid development of software. In agile development, customer involvement plays an important role: the software is updated and upgraded according to the customer's requirements.
Formal verification is concerned with mathematical approaches to verifying software. How can we integrate these formal approaches (there are many types of formal techniques) into the agile development process?
Colored Petri nets are a formal technique for specification. Can any developer/researcher guide us on the tools and techniques that we can use to apply Petri nets in industry?
Attempting to understand the boundary between formal and informal language types.