# Automata Theory - Science topic

Explore the latest questions and answers in Automata Theory, and find Automata Theory experts.

Questions related to Automata Theory

Please recommend recent papers on the applications of fuzzy languages

As we know, MC/DC requires at least one predicate consisting of at least two atomic conditions in a program; only then can we compute an MC/DC score for a given set of test data. Now, when a compiler compiles a program, it decomposes each predicate into a simplified form, and this simplified form is in the syntax of low-level or intermediate code. In that code we may have no Boolean operators or compound predicates at all: everything is expressed as atomic guard conditions. So my point is: can we only compute an MC/DC score for high-level code, not for intermediate code? But is that exactly what we expect from a test-case generator? A test-case generator or constraint solver may not know about the original program; it simply tries to explore all paths of the intermediate code. Can we then say that the test cases generated by the constraint solver indirectly correspond to the high-level program?

Please share your views! If anyone wants more clarification, do let me know and I will explain through an example.
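A minimal sketch of the decomposition described above (hypothetical function names, Python used only for illustration): at source level MC/DC is defined over the compound decision, while the lowered form contains only atomic guards.

```python
def high_level(a, b):
    # one decision with two atomic conditions; MC/DC is defined here
    return 1 if (a and b) else 0

def lowered(a, b):
    # what short-circuit lowering produces: only atomic guard conditions,
    # no Boolean operators left
    if not a:
        return 0
    if not b:
        return 0
    return 1

# An MC/DC-adequate test set for (a and b): each condition is shown to
# independently affect the outcome while the other is held fixed.
mcdc_tests = [(True, True), (True, False), (False, True)]
```

Note that the same three tests also cover every feasible path of the lowered form, which is one precise sense in which path-directed test generation on intermediate code "tends toward" the high-level criterion.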

Thanks,

Sangha

Consider a grammar 'G' with certain semantic rules provided for its list of productions 'P'. Suppose intermediate code needs to be generated and I follow the DAG method to represent it.

In that regard, what are the other variants of the syntax tree, apart from the DAG, for the same purpose?
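For concreteness, a minimal sketch (illustrative names) of what distinguishes the DAG from a plain syntax tree: nodes are interned by their (operator, child, child) key, so repeated subexpressions share a single node instead of being duplicated.

```python
nodes, index = [], {}

def node(op, left=None, right=None):
    # reuse an existing node for an identical (op, left, right) key;
    # this sharing is exactly what makes the structure a DAG, not a tree
    key = (op, left, right)
    if key not in index:
        index[key] = len(nodes)
        nodes.append(key)
    return index[key]

# a + a * (b - c) + (b - c): both occurrences of (b - c) share one node
t = node('-', node('b'), node('c'))
root = node('+', node('+', node('a'), node('*', node('a'), t)), t)
```

Here the DAG has 7 nodes where the corresponding syntax tree would have 11, since the leaves a, b, c and the subtree (b - c) are shared rather than copied.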

I have implemented an algorithm for an NFA that takes an adjacency matrix as input, but I would like to represent the NFA as a structure instead.
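One common alternative (a sketch with illustrative names, not tied to any particular implementation): store the transition relation as a dictionary mapping (state, symbol) to a set of successor states. This handles nondeterminism directly and stays sparse, unlike a full adjacency matrix.

```python
class NFA:
    def __init__(self, delta, start, finals):
        self.delta = delta      # (state, symbol) -> set of next states
        self.start = start
        self.finals = finals

    def accepts(self, s):
        current = {self.start}              # subset construction on the fly
        for ch in s:
            current = set().union(*(self.delta.get((q, ch), set())
                                    for q in current))
        return bool(current & self.finals)
```

For example, an NFA over {0, 1} accepting strings that end in "01" needs only three transition entries in this representation.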

One can certainly view the mechanics and behavior of the ribosome and conclude a correspondence with machines; to paraphrase musician and composer Frank Zappa, mechanism is not dead - it just smells funny.

So, might readers, and especially biologists, offer counsel regarding the negatives of such a view?

How do different algebraic structures, such as lattices, integral lattice monoids, and others, increase the power of formal languages?

The Cellular Automata Transform, as proposed by Olu Lafe, is useful in image processing and other applications. Can anyone suggest a good tutorial on it, since a free ebook is not available?
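As background only (this is a minimal sketch of the substrate such transforms are built on, not Lafe's CA Transform itself): one synchronous step of an elementary binary cellular automaton with periodic boundaries, parameterized by its Wolfram rule number.

```python
def step(cells, rule):
    # one synchronous update of a radius-1 binary CA, periodic boundaries;
    # `rule` is the Wolfram rule number (0..255); each cell's new value is
    # the bit of `rule` indexed by its 3-cell neighborhood
    n = len(cells)
    return [(rule >> (4 * cells[i - 1] + 2 * cells[i] + cells[(i + 1) % n])) & 1
            for i in range(n)]
```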

This question falls under formal languages and automata theory.

I read the following example in one of my professor's notes.

1) We have an SLR(1) grammar G as follows. We use an SLR(1) parser generator to produce a parse table S for G, and an LALR(1) parser generator to produce a parse table L for G.

S->AB

A->dAa

A -> lambda (lambda is the empty string, of length 0)

B->aAb

Solution: the number of reduce (R) entries in S is greater than in L.

But on one site I read:

2) Suppose T1 and T2 are the parse tables created by SLR(1) and LALR(1) generators for a grammar G. If G is an SLR(1) grammar, which of the following is TRUE?

a) T1 and T2 have no differences.

b) the total number of non-error entries in T1 is lower than in T2.

c) the total number of error entries in T1 is lower than in T2.

Solution:

The LALR(1) algorithm generates exactly the same states as the SLR(1) algorithm, but it can generate different actions; it is capable of resolving more conflicts than the SLR(1) algorithm. However, if the grammar is SLR(1), both algorithms will produce exactly the same machine (so (a) is right).

Could anyone explain to me which of them is true?

EDIT: in fact my question is why, for a given SLR(1) grammar, the parse tables of LALR(1) and SLR(1) are exactly the same (error and non-error entries are equal, and the number of reduce entries is equal), yet for the grammar above the number of reduce entries in S is greater than in L.

Let A be a finite set. Suppose that for each natural index i there is a context-free language Ci over the alphabet A, and that for all indices i we have Ci contained in C{i+1}. The project is: to find conditions on {Ci} so that the ascending union of the Ci is still a context-free language over A.

Note that at each stage i a pumping lemma is satisfied, as is Ogden's lemma, etc. Note also that some condition is genuinely needed: every language over A is the ascending union of finite (hence context-free) languages, so without restrictions the union can be arbitrary. One might therefore need to work hard to find a good "finiteness" condition that would do the job.

I'm so glad to be asking my third question on my favorite site.

In fact, I ran into this multiple-choice question in a recent exam in a compiler course.

Suppose T1 and T2 are the parse tables created by SLR(1) and LALR(1) generators for a grammar G. If G is an SLR(1) grammar, which of the following is TRUE?

a) only T1 is meaningful for G.

b) T1 and T2 have no differences.

c) the total number of non-error entries in T1 is lower than in T2.

d) the total number of error entries in T1 is lower than in T2.

My solution:

We know that the table size and the states of LALR(1) and SLR(1) are the same, but some say the number of reduce entries in LALR(1) is lower than in SLR(1) (there is more free space in the LALR(1) table than in the SLR(1) table, because it uses lookaheads instead of FOLLOW sets), so (d) would be correct. But the answer sheet says (b) is correct. Can anyone explain this to us? Which of these is true?

A word is primitive if it is not a power (with concatenation as multiplication) of another word: 0101 = (01)^2 is not primitive, while 01010 is.

For more than 20 years people have been trying to prove that the language consisting of all primitive words over two or more letters is not context-free, without success. Do you have an idea?
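A small aside that is handy when experimenting with this language: primitivity itself is cheap to test, because w is a proper power if and only if w reappears inside ww at an offset strictly between 0 and |w| (a standard string-periodicity fact).

```python
def is_primitive(w):
    # w is a power u^k (k >= 2) iff w occurs in w+w at an offset
    # strictly between 0 and len(w)
    return len(w) > 0 and (w + w).find(w, 1) == len(w)
```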

Considering a finite automaton as a set of states with a well-defined transition function, how would one formally define the element 'state' in an automaton?
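One standard answer, sketched in code (illustrative names): in the tuple definition M = (Q, Σ, δ, q0, F), a "state" is just an element of the abstract finite set Q. It has no internal structure of its own; its only role is to index the transition function.

```python
class DFA:
    def __init__(self, delta, start, finals):
        # states need no structure of their own: any hashable labels work,
        # because their only role is to index delta
        self.delta = delta          # (state, symbol) -> state
        self.start = start
        self.finals = finals

    def accepts(self, s):
        q = self.start
        for ch in s:
            q = self.delta[(q, ch)]
        return q in self.finals
```

Renaming the states by any bijection yields an isomorphic automaton accepting the same language, which is the formal sense in which a state is "just a label".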

I am working on modelling the interaction between land-use changes and transport, using Metronamica, a cellular-automata-based modelling package. One thing I have come across in my reading is that CA is not able to handle socio-economic variables. The problem is that in my case socio-economic factors are very important drivers of urban change. Any suggestions on how I can overcome this?

Weighted transducers are finite-state transducers in which each transition carries some weight in addition to the input and output labels. The weights are elements of a semiring (S, ⊕, ⊗, 0, 1).

If so, what is its complexity?
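A minimal sketch (illustrative names) of how such weights compose along a string, instantiated over the tropical semiring (min, +), i.e. ⊕ = min and ⊗ = +. For an acceptor, this single forward pass runs in O(|s| · |transitions|) time.

```python
INF = float("inf")

def string_weight(delta, start, finals, s):
    # delta: (state, symbol) -> list of (next_state, weight)
    # path weight  = ⊗-product (here: sum) of its edge weights
    # string weight = ⊕-sum (here: min) over all accepting paths
    best = {start: 0.0}
    for ch in s:
        nxt = {}
        for q, w in best.items():
            for q2, wt in delta.get((q, ch), []):
                nxt[q2] = min(nxt.get(q2, INF), w + wt)
        best = nxt
    return min((w for q, w in best.items() if q in finals), default=INF)
```

Swapping min/+ for other semiring operations (e.g. +/× for probabilities) changes the interpretation without changing the algorithm, which is the point of the semiring abstraction.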

So, now I propose the topic of computational efficiency, particularly with regard to shustring search.

A review of the relevant literature mentions several concepts, each contributing a portion of the published search algorithms. These concepts are:

LCP (longest common prefix) array

Suffix array

Suffix tree

and variations on these examples.

It is quite possible to compute shustrings efficiently (say, with under 300 MB of memory and under 150 seconds of time for a sequence of 31 Mbp) without, I stress, without the use of any of those crutches; the computation is instead direct, and is simply a matter of sorting. The gross character of the machine is also important, such as processor speed and processing-environment overhead: a dedicated processor solves a single problem more quickly than a multitasking processor does.

My questions concern the run-time performance of algorithms that implement the concepts listed above. Gross measures are sufficient (amounts of time and memory versus volume of input); order-of-growth measures are useless for my particular need.

Does any reader have a sense of such measures for algorithms implementing the concepts listed above?
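For concreteness, a naive sketch (illustrative names, nowhere near the 31 Mbp scale discussed above) of the sorting-based route: sort the suffixes directly, and note that the shortest unique substring starting at a position is one character longer than that suffix's longest common prefix with its sorted neighbours.

```python
def shustrings(s):
    # maps position i -> shortest substring starting at i that occurs
    # exactly once in s (when such a substring exists)
    n = len(s)
    order = sorted(range(n), key=lambda i: s[i:])   # suffix sort

    def lcp(i, j):
        k = 0
        while i + k < n and j + k < n and s[i + k] == s[j + k]:
            k += 1
        return k

    out = {}
    for r, i in enumerate(order):
        l = 0
        if r > 0:
            l = max(l, lcp(i, order[r - 1]))
        if r + 1 < n:
            l = max(l, lcp(i, order[r + 1]))
        if i + l < n:              # a unique extension exists within s
            out[i] = s[i:i + l + 1]
    return out
```

This sketch is O(n^2 log n) in the worst case because of the slicing in the sort key; the point of the question above is precisely how fast practical implementations of this idea can be made.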

Korshunov (1976) says: almost all automata with n states, k input symbols and m output symbols have degree of distinguishability asymptotically (as n → ∞) equal to log_k(log_m(n)). Maybe there is an easy proof, given that for almost all automata with n states and k input symbols the diameter is O(log_k(n)).

I am looking to work on an interesting problem in automata theory, preferably something involving classical concepts such as finite automata, Turing machines, grammars, and regular expressions. So far I have been unable to narrow it down to something specific. Can anyone guide me or highlight a related problem set?

I am creating a project that transforms C code into a flow graph. Please suggest any tools or materials. Should I convert the compiler's intermediate code into the specification of a graph-transformation system?
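One common route is to lower the C through a compiler first (e.g. to LLVM IR or GCC GIMPLE) and graph that. As a sketch of the graphing step itself, over a hypothetical mini-IR (not any specific compiler's format): split the instruction list into basic blocks at leaders, then add fall-through and jump edges.

```python
def build_cfg(instrs):
    # instruction forms (hypothetical IR): ('label', name), ('op', text),
    # ('goto', target), ('if', cond, target), ('ret',)
    leaders, label_at = {0}, {}
    for i, ins in enumerate(instrs):
        if ins[0] == 'label':
            leaders.add(i)
            label_at[ins[1]] = i
        if ins[0] in ('goto', 'if', 'ret') and i + 1 < len(instrs):
            leaders.add(i + 1)            # instruction after a branch
    starts = sorted(leaders)
    block_of = {}
    for b, s in enumerate(starts):
        end = starts[b + 1] if b + 1 < len(starts) else len(instrs)
        for i in range(s, end):
            block_of[i] = b
    edges = set()
    for b, s in enumerate(starts):
        end = starts[b + 1] if b + 1 < len(starts) else len(instrs)
        last = instrs[end - 1]
        if last[0] == 'goto':
            edges.add((b, block_of[label_at[last[1]]]))
        elif last[0] == 'if':
            edges.add((b, block_of[label_at[last[2]]]))
            if end < len(instrs):
                edges.add((b, b + 1))     # fall-through on false
        elif last[0] != 'ret' and end < len(instrs):
            edges.add((b, b + 1))         # plain fall-through
    return len(starts), sorted(edges)
```

The returned (block count, edge list) can then be fed to Graphviz or a graph-transformation tool; only the IR format here is invented for illustration.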