
# Automata Theory - Science topic

Explore the latest questions and answers in Automata Theory, and find Automata Theory experts.
Questions related to Automata Theory
• asked a question related to Automata Theory
Question
Please recommend recent papers on the applications of fuzzy languages
Dear Alelsandr,
I suggest that you look at the links and attached files on this topic:
-Journal of Intelligent & Fuzzy Systems - Volume 34, issue 1 - Journals ...
-Fuzzy Automata and Languages: Theory and Applications ...
-Myhill–Nerode type theory for fuzzy languages and automata ...
-Fuzzy automata and languages : theory and applications in ...
-application of fuzzy languages to pattern recognition - Emerald Insight
-Applications of fuzzy languages to intelligent information retrieval ...
Best regards
• asked a question related to Automata Theory
Question
As we know, MC/DC requires at least one predicate consisting of at least two atomic conditions in a program; only then can we compute an MC/DC score for a given set of test data. Now, when a compiler compiles a program, it decomposes each predicate into a simplified form, and this simplified form is in the syntax of low-level or intermediate code. In this code we may have no Boolean operators or compound predicates; everything is reduced to atomic guard conditions. So my point is: can we only compute an MC/DC score for high-level code, not for intermediate code? But is that really what we expect from a test-case generator? A test-case generator or constraint solver may not know the actual program; it tries to explore all paths of the intermediate code. Can we say that the test cases generated by the constraint solver indirectly target the high-level program?
Please share your views! If anyone wants more clarification, let me know and I will explain with an example.
Thanks,
Sangha
MCDC is a metric for measuring the quality of a test suite, often implemented using instrumented source code. MCDC can also be applied to model coverage analysis.
In DO-178C, test cases are based on requirements and not on the source code. The logical expressions of those requirements should exist as text, tables or model elements, regardless of the source code language.
Test cases intended to satisfy MCDC coverage criteria could be designed from the logical expressions implied or specified in the requirements documents or models.
A difficulty is measuring coverage in an assembly language implementation.
Documents to dig further:
• MC/DC, per DO-178C, is discussed in "DO-178C Changes and Improvements ..." (Pothon)
• DO-248C, Supporting Information for DO-178C and DO-278A, includes FAQ #42: "What needs to be considered when performing structural coverage at the object code level? ... The main consideration is to demonstrate that the coverage analysis conducted at the object code level will provide the same level of confidence as that conducted at the Source Code level." ...
• "Formalization and Comparison of MCDC and Object Branch Coverage Criteria" (Comar et al.)
• "Data Flow Model Coverage Analysis: Principles and Practice" (Camus et al.)
• "The Effect of Program and Model Structure on the Effectiveness of MC/DC Test Adequacy Coverage" (Gay et al.)
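As a small illustration of the independence requirement discussed in this question, here is a minimal Python sketch (all function and variable names are my own invention, not from any standard or tool) that checks whether a test set achieves MC/DC for a decision over atomic conditions by searching for independence pairs:

```python
def achieves_mcdc(decision, n_conds, tests):
    """True if, for every condition i, some pair of tests differs only in
    condition i and produces different decision outcomes (the MC/DC
    'independent effect' requirement)."""
    def independent(i):
        return any(
            a[i] != b[i]
            and all(a[j] == b[j] for j in range(n_conds) if j != i)
            and decision(a) != decision(b)
            for a in tests for b in tests
        )
    return all(independent(i) for i in range(n_conds))

# Decision with two atomic conditions: c0 and c1.
dec = lambda t: t[0] and t[1]
full = [(True, True), (True, False), (False, True)]  # 3-test set with MC/DC
weak = [(True, True), (False, False)]                # decision coverage only
# achieves_mcdc(dec, 2, full) holds; achieves_mcdc(dec, 2, weak) does not.
```

Note that this sketch works at the level of logical expressions (as DO-178C's requirement-based view suggests), independent of source or intermediate code.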
• asked a question related to Automata Theory
Question
Consider a grammar G with certain semantic rules provided for its list of productions P. Suppose intermediate code needs to be generated and I follow the DAG method to represent it.
In that regard, what are the other variants of the syntax tree, apart from the DAG?
Hi Rebeka, a DAG is a variant (form) of a syntax tree that adds direction to it. There are no other variants as such. This is all I can suggest, as it is not clear from your question what exactly you are looking for.
• asked a question related to Automata Theory
Question
I have implemented an algorithm for an NFA that takes the adjacency matrix as input, but I would like it to take a structured representation instead.
JFLAP
• asked a question related to Automata Theory
Question
One can certainly view the mechanics and behavior of the ribosome and conclude a correspondence with machines; to paraphrase musician and composer Frank Zappa, mechanism is not dead - it just smells funny.
So, might readers, and especially biologists, give counsel regarding the negatives of such a view?
Thanks for the clarification.
• asked a question related to Automata Theory
Question
How do different types of algebraic structures, such as lattices, integral lattice monoids and others, increase the power of formal languages?
By assigning weights or similar features to rules, one gains means of control over derivations. In a standard grammar, every terminating derivation generates a word. With weights one can filter out certain types of derivations.
For example the language a^n b^n c^n that you mention. After an initial rule S -> AC we have rules
• A -> aAb with weight -3
• C -> Cc with weight +3
• A -> ab with weight +1
• C -> c with weight  +1
where the weights are integers. If we only accept derivations in which the weights of all applied rules total 2, we obtain the language a^n b^n c^n. If your weights come from structures that can do even more complicated things than adding and subtracting, you can also obtain more complicated languages.
Does this example answer your question? Weights are more common for automata than for grammars. But in principle they should be able to do about the same things in one mechanism and in the other.
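The arithmetic in this answer is easy to check directly. A tiny Python sketch (the function name is mine) computes the total weight of a derivation that applies A -> aAb a total of n-1 times, A -> ab once, C -> Cc a total of m-1 times and C -> c once:

```python
def derivation_weight(n, m):
    """Total weight of deriving a^n b^n from A and c^m from C:
    (n-1) applications of A -> aAb at weight -3, one A -> ab at +1,
    (m-1) applications of C -> Cc at +3, one C -> c at +1."""
    return -3 * (n - 1) + 1 + 3 * (m - 1) + 1

# The total simplifies to 3*(m - n) + 2, so the accepting weight 2
# forces m = n, exactly as the answer claims:
# derivation_weight(5, 5) == 2, while derivation_weight(5, 6) == 5.
```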
• asked a question related to Automata Theory
Question
The Cellular Automata Transform, as proposed by Olu Lafe, is useful in image processing and other applications. Could someone also suggest a good tutorial on it, since a free ebook is not available?
• asked a question related to Automata Theory
Question
This question comes under formal languages and automata theory.
So your problem of "programming" is, in your example, how to ensure that the number of insertions of ab and of c is the same, right? Because only then will the resulting word be in a^nb^nc^n (provided additionally that all insertions are done in the correct places).
The key here is contexts. You do not use plain insertion rules, but rules like "insert AB between an A and a B" (using capital letters now). This way you can control the place. For coordinating the numbers of insertions you need to send some kind of message through the string. For example, instead of AB you insert AB#. Now the # must be moved through the block of Bs, for example by the rules "insert # right of B#B" and "delete # left of B#". This way you can arrive at AABB#C. Now insert §C right of # and delete #§.
Maybe some detail does not work, since I constructed this on the fly, but it could be refined to work. Without using contexts you cannot do this unless you use some external control, as in Observer Systems; see the article attached.
• asked a question related to Automata Theory
Question
I read the following example in one of my professors notes.
1) We have an SLR(1) grammar G as follows. We use an SLR(1) parser generator to produce a parse table S for G, and an LALR(1) parser generator to produce a parse table L for G.
S->AB
A->dAa
A-> lambda (lambda is a string with length=0)
B->aAb
Solution: the number of reduce (R) entries in S is greater than in L.
But on one site I read:
2) Suppose T1 and T2 are created with SLR(1) and LALR(1) parser generators for a grammar G. If G is an SLR(1) grammar, which of the following is TRUE?
a) T1 and T2 do not differ at all.
b) the total number of non-error entries in T1 is lower than in T2
c) the total number of error entries in T1 is lower than in T2
Solution:
The LALR(1) algorithm generates exactly the same states as the SLR(1) algorithm, but it can generate different actions; it is capable of resolving more conflicts than the SLR(1) algorithm. However, if the grammar is SLR(1), both algorithms will produce exactly the same machine (a is right).
Could anyone tell me which of them is true?
EDIT: in fact my question is why, for a given SLR(1) grammar, the parse tables of LALR(1) and SLR(1) are exactly the same (the error and non-error entries are equal and the number of reduce entries is equal), yet for the above grammar the number of reduce entries in S is greater than in L.
In particular, take a look at the syntax section.
If you want to do experiments yourself, our jaccie tool plus additional documentation can be found at:
Happy experimenting!
• asked a question related to Automata Theory
Question
Let A be a finite set. Suppose for each natural index i there is a context-free language Ci over the alphabet A, and suppose further that for all indices i we have Ci contained in C{i+1}. The project is to find conditions on {Ci} so that the ascending union of the Ci is still a context-free language over A.
Note that at each stage i a pumping lemma is satisfied, as will be Ogden's lemma, etc. So one might need to work hard to find a good "finiteness" condition that would do the job.
I do not immediately know the answer, but I think you may find leads in Damian Niwinski's 1984 article about fixed points and context-free languages: http://www.sciencedirect.com/science/article/pii/S0019995884800492
• asked a question related to Automata Theory
Question
I'm glad to be asking my third question on my favorite site.
In fact, I ran into a multiple-choice question in a recent exam in a compiler course.
Suppose T1 and T2 are created with SLR(1) and LALR(1) parser generators for a grammar G. If G is an SLR(1) grammar, which of the following is TRUE?
a) only T1 is meaningful for G.
b) T1 and T2 do not differ at all.
c) the total number of non-error entries in T1 is lower than in T2
d) the total number of error entries in T1 is lower than in T2
My solution:
We know the table size and the states of LALR(1) and SLR(1) are the same, but some say the number of reduce entries in LALR(1) is lower than in SLR(1) (the free space in LALR(1) is larger than in SLR(1) because it uses lookaheads instead of FOLLOW sets), and so (d) is correct. But the answer sheet says (b) is correct. Can anyone explain this for us? Which of these is true?
b
• asked a question related to Automata Theory
Question
A word is primitive if it is not a power (with concatenation as multiplication) of another word: 0101 is not primitive, while 01010 is.
For more than 20 years people have been trying to prove that the language consisting of all primitive words over two or more letters is not context-free. Without success. Do you have an idea?
Peter, you are right.
From your proposition, we can define the primitive words exactly, as follows.
Let us call CP(x) the set of all cyclic permutations of x.
Then the primitive words over an alphabet E, PW(E), will be the set of words x built from E such that the cardinal of CP(x) is equal to the length of x.
E.g.
"abc" gives CP(abc) = { bca, cab, abc } (thus |CP(abc)| = 3 = |abc|), and therefore abc is primitive,
whereas
"abab" gives CP(abab) = { baba, abab }, so |CP(abab)| = 2 < 4 = |abab|, and by the definition abab is not primitive.
More formally (excluding the zero-length word):
PW(E) = { x in E* such that |x| = |CP(x)| > 0 }
Hope this could help at some point !
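The cyclic-permutation criterion above is easy to test mechanically. A short Python sketch (the naming is my own):

```python
def cyclic_permutations(x):
    """The set of distinct cyclic rotations of x."""
    return {x[i:] + x[:i] for i in range(len(x))}

def is_primitive(x):
    """x is primitive iff it has exactly |x| distinct cyclic rotations,
    i.e. iff it is not a proper power of a shorter word."""
    return len(x) > 0 and len(cyclic_permutations(x)) == len(x)

# is_primitive("abc") and is_primitive("01010") hold, while
# is_primitive("abab") and is_primitive("0101") do not.
```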
• asked a question related to Automata Theory
Question
Considering finite automata as a set of states with well defined transition function, how will one formally define the element 'state' in an automaton?
It simply is a member of a finite set, nothing more. For simplicity, the natural numbers 0, 1, ..., n-1 are often used.
Informally, a state is a means of conveying information through time. In a synchronous product automaton there are two or more distinct sub-automata, each having its own state, say the first one from a set S1, the second automaton from S2, and so on. The state of the complete product automaton can then be described as a tuple (s1, s2, ..., sn) with s1 a member of S1, s2 a member of S2, ...
In analog computers states are continuous: continuous (real, complex) variables describe the contents of integrators, e.g. the charge of capacitors.
Regards,
Joachim
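The tuple-state description of a synchronous product can be made concrete with a small Python sketch (a hypothetical example of my own, with two modular counters as the sub-automata):

```python
def product_step(deltas, state, symbol):
    """One synchronous step: every sub-automaton reads the same symbol;
    the product state is the tuple of component states."""
    return tuple(d[(s, symbol)] for d, s in zip(deltas, state))

# Two hypothetical sub-automata: a mod-2 and a mod-3 counter on symbol "x".
d2 = {(s, "x"): (s + 1) % 2 for s in range(2)}
d3 = {(s, "x"): (s + 1) % 3 for s in range(3)}

state = (0, 0)
for _ in range(6):
    state = product_step([d2, d3], state, "x")
# After lcm(2, 3) = 6 steps the product state returns to (0, 0):
# the product automaton effectively has 2 * 3 = 6 states.
```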
• asked a question related to Automata Theory
Question
I am working on modelling the interaction between land-use changes and transport, using Metronamica, a cellular-automata-based modelling package. One of the things I have come across in my reading is that CA is not able to handle socio-economic variables. The problem is that in my case socio-economic factors are very important drivers of urban change. Any suggestions on how I can overcome this?
Hello Thamuka Moyo,
Urban Change Processes
For the evaluation of operational urban models, the urban change processes to be modelled are identified. Eight types of major urban subsystem are distinguished. They are ordered by the speed by which they change, from slow to fast processes:
- Very slow change: networks, land use. Urban transport, communications and utility networks are the most permanent elements of the physical structure of cities. Large infrastructure projects require a decade or more, and once in place, are rarely abandoned. The land use distribution is equally stable; it changes only incrementally.
- Slow changes: workplaces, housing. Buildings have a life-span of up to one hundred years and take several years from planning to completion. Workplaces (non-residential buildings) such as factories, warehouses, shopping centres or offices, theatres or universities exist much longer than the firms or institutions that occupy them, just as housing exists longer than the households that live in it.
- Fast change: employment, population. Firms are established or closed down, expanded or relocated; this creates new jobs or makes workers redundant and so affects employment. Households are created, grow or decline and eventually are dissolved, and in each stage in their life cycle adjust their location and motorisation to their changing needs; this determines the distribution of population and car ownership.
- Immediate change: goods transport, travel. The location of human activities in space gives rise to a demand for spatial interaction in the form of goods transport and travel. These interactions are the most flexible phenomena of spatial urban development; they can adjust in minutes or hours to changes in congestion or fluctuations in demand, though in reality adjustment may be retarded by habits, obligations or subscriptions.
There is a ninth subsystem, the urban environment. Its temporal behaviour is more complex. The direct impacts of human activities, such as transport noise and air pollution, are immediate; other effects such as water or soil contamination build up incrementally over time, and still others such as long-term climate effects are so slow that they are hardly observable. All eight other subsystems affect the environment by energy and space consumption, air pollution and noise emission, whereas only the locational choices of housing investors and households, firms and workers are co-determined by environmental quality, or the lack of it. All nine subsystems are partly market-driven and partly subject to policy regulation.
In the 1950s the first efforts were made in the USA to study the interrelationship between transport and the spatial development of cities systematically. Hansen (1959) demonstrated for Washington, DC that locations with good accessibility had a higher chance of being developed, and at a higher density, than remote locations ("How accessibility shapes land use").
The recognition that trip and location decisions co-determine each other and that therefore transport and land use planning needed to be co-ordinated, quickly spread among American planners, and the 'land-use transport feedback cycle' became a commonplace in the American planning literature. The set of relationships implied by this term can be briefly summarised as follows:
Figure 1. The 'land-use transport feedback cycle'.
- The distribution of land uses, such as residential, industrial or commercial, over the urban area determines the locations of human activities such as living, working, shopping, education or leisure.
- The distribution of human activities in space requires spatial interactions or trips in the transport system to overcome the distance between the locations of activities.
- The distribution of infrastructure in the transport system creates opportunities for spatial interactions and can be measured as accessibility.
- The distribution of accessibility in space co-determines location decisions and so results in changes of the land use system.
• asked a question related to Automata Theory
Question
Weighted transducers are finite-state transducers in which each transition carries some weight in addition to the input and output labels. The weights are elements of a semiring (S, ⊕, ⊗, 0, 1).
A group has only one operation, but for non-deterministic machines one wants both to multiply weights along a single computation and to sum over the distinct computations, so two operations are needed. A ring would additionally require additive inverses. One could require this, but it would restrict the candidates for weight structures; since the extra structure usually does not gain us anything, it seems more adequate not to require it and to work in the slightly more general setting of semirings.
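One concrete instance of such a weight structure is the tropical semiring, which is commonly used with weighted transducers (e.g. for shortest-distance computations). A Python sketch; the class and method names are my own, not any library's API:

```python
import math

class Tropical:
    """Tropical semiring (R ∪ {∞}, min, +, ∞, 0): ⊕ = min combines
    alternative computations, ⊗ = + accumulates weights along a single
    computation; ∞ is the zero element and 0 is the one element."""
    zero = math.inf
    one = 0.0

    @staticmethod
    def oplus(a, b):   # "sum" over distinct computations
        return min(a, b)

    @staticmethod
    def otimes(a, b):  # "product" along one computation
        return a + b

# Two alternative paths through a weighted transducer:
path1 = Tropical.otimes(1.0, 2.5)    # transition weights 1.0 then 2.5 -> 3.5
path2 = Tropical.otimes(2.0, 0.5)    # transition weights 2.0 then 0.5 -> 2.5
best = Tropical.oplus(path1, path2)  # weight assigned by the machine: 2.5
```

Note that ⊗ here (ordinary addition) happens to be commutative, but the semiring framework does not require it; e.g. string concatenation under longest-common-prefix is a standard non-commutative example.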
• asked a question related to Automata Theory
Question
If so, what is its complexity?
The answer: there is (almost certainly) no efficient algorithm. The reason comes from complexity theory: most questions about regular expressions are at least PSPACE-hard (e.g., whether a regular expression generates, or an NFA accepts, all strings over its alphabet). BUT such questions are in PTIME when the languages are presented by DFAs.
The point is that these problems for NFAs and regular expressions were proven PSPACE-complete by Meyer and Stockmeyer (and others) in the 1970s, and transforming an NFA into a DFA can blow up the number of states exponentially.
So, unless by a miracle PSPACE = PTIME, there is a difference, and I am not surprised that proposed algorithms only reached halfway: there are deep theoretical reasons (requiring a major breakthrough) why one could not reach all the way.
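The exponential blow-up of determinization can be seen concretely on the classic language family "the k-th symbol from the end is 1". A hedged Python sketch of the subset construction (all names are my own):

```python
def subset_construction(delta, start, alphabet):
    """Determinize an NFA; the DFA states are frozensets of NFA states."""
    start_set = frozenset([start])
    states, stack, dfa = {start_set}, [start_set], {}
    while stack:
        S = stack.pop()
        for a in alphabet:
            T = frozenset(r for q in S for r in delta.get((q, a), ()))
            dfa[(S, a)] = T
            if T not in states:
                states.add(T)
                stack.append(T)
    return states, dfa

# NFA for "the k-th symbol from the end is 1": state 0 loops and guesses
# the marked position; states 1..k count the remaining symbols.
k = 3
delta = {(0, "0"): {0}, (0, "1"): {0, 1}}
for i in range(1, k):
    delta[(i, "0")] = {i + 1}
    delta[(i, "1")] = {i + 1}

states, _ = subset_construction(delta, 0, "01")
# The NFA has k + 1 states; the reachable DFA has 2**k states (8 for k = 3).
```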
• asked a question related to Automata Theory
Question
So, now I propose the topic of computational efficiency, particularly with regard to shustring search.
A review of the relevant literature mentions several concepts, each contributing a portion of the published search algorithms:
- LCP (longest common prefix) array
- suffix array
- suffix tree
and variations on these.
It is quite possible to compute shustrings efficiently (say, with under 300 MB of memory and under 150 seconds of time for a sequence of 31 Mbp) without, I repeat, without the use of any of those crutches; the computation is instead direct, and a simple matter of sorting. The gross character of the machine is also important, such as processor speed and processing-environment overhead: a dedicated processor solves one problem more quickly than a multitasking processor does.
My questions concern the run-time performance of algorithms that implement the concepts listed above. Gross measures are sufficient (amounts of time and memory versus volume of input), but order-of-growth measures are useless to my particular need.
Has a reader any sense for such measures on algorithms for the above listed concepts?
Something interesting to me is that your coauthor Bojian Xu teaches at a university whose campus is about 80km away from my home.
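For calibrating such measurements, a direct brute-force baseline is useful. Here is a Python sketch of shortest unique substrings without a suffix array, suffix tree, or LCP array; this is my own naive baseline (all names invented), not the poster's direct sorting method:

```python
def occurrences(s, sub):
    """Count (possibly overlapping) occurrences of sub in s."""
    return sum(1 for j in range(len(s) - len(sub) + 1) if s.startswith(sub, j))

def shustrings(s):
    """For each start position i, the shortest substring beginning at i
    that occurs exactly once in s.  Quadratic or worse, but direct."""
    result = {}
    for i in range(len(s)):
        for length in range(1, len(s) - i + 1):
            sub = s[i:i + length]
            if occurrences(s, sub) == 1:
                result[i] = sub
                break
    return result

# shustrings("abab") yields {0: "aba", 1: "ba"}; no unique substring
# starts at positions 2 or 3.
```

Such a baseline is far too slow for a 31 Mbp sequence, but it gives ground truth for validating faster implementations on small inputs.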
• asked a question related to Automata Theory
Question
Korshunov in 1976 says: almost all automata with $n$ states, $k$ input symbols and $m$ output symbols have a degree of distinguishability asymptotically equal (as $n$ goes to $\infty$) to $\log_k(\log_m(n))$. Maybe there is an easy proof, given that for almost all automata with $n$ states and $k$ input symbols the diameter is $O(\log_k(n))$.
Thanks Henning, sure.
Korshunov, A. D.
The number of automata, boundedly determined functions and hereditary properties of automata. (English). Kybernetika, vol. 12 (1976), issue 1, pp. (31)-37.
Corollary 4 would contain the proof that interests me.
• asked a question related to Automata Theory
Question