# An Introduction to Formal Languages and Automata

... Regular Expressions (RE) are presented as a way to describe or display regular languages. There are two different models of the Finite Automaton (FA) [19]: the Deterministic Finite Automaton (DFA) and the Non-deterministic Finite Automaton (NFA). In this setting, regular expressions, finite automata, and regular grammars are equivalent [19]. An FA is used to detect patterns in an input string. ...

... There are two different models of the Finite Automaton (FA) [19]: the Deterministic Finite Automaton (DFA) and the Non-deterministic Finite Automaton (NFA). In this setting, regular expressions, finite automata, and regular grammars are equivalent [19]. An FA is used to detect patterns in an input string. An FA has predictable memory use: processing an input string requires one FA state traversal per character. ...
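The constant per-character cost of DFA processing can be sketched in a few lines of Python; the states, alphabet, and transition table below are invented for illustration and are not taken from [19]:

```python
# Minimal DFA simulator: processing a string costs exactly one
# transition-table lookup per input character, so memory use is fixed.
def run_dfa(delta, start, accepting, word):
    state = start
    for ch in word:
        state = delta[(state, ch)]  # one state traversal per character
    return state in accepting

# Toy DFA over {a, b} accepting words with an even number of a's.
delta = {("even", "a"): "odd", ("even", "b"): "even",
         ("odd", "a"): "even", ("odd", "b"): "odd"}

print(run_dfa(delta, "even", {"even"}, "abba"))  # True: two a's
print(run_dfa(delta, "even", {"even"}, "ab"))    # False: one a
```

Because the loop touches each character exactly once, the running time is linear in the input length regardless of how complicated the pattern is, which is why an FA suits high link rates.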

... Therefore, an FA is a suitable method at high network link rates. However, large sets of regular expressions compiled into an FA require a large amount of memory [19]. An FA can also be applied to reduce the number of compositions [20,21]. Thus, the FA can be used to reduce the number of service chain compositions. ...

Service Function Chaining (SFC) is an architecture used to orchestrate network services that assign choices to the network. This architecture is a policy structure that should yield the appropriate service chain. The management of this architecture requires special configurations. Consequently, solutions are needed to offer a suitable configuration adapted to this environment. Therefore, the chains must be fully investigated before launch, which requires the definition of chaining rules. The issues addressed in this architecture include checking the validity of the service chains and reducing the number of service chain compositions. A Finite Automaton (FA) is a simple machine applied to identify patterns within input received from some character set. One of the FA applications is the reduction of the number of compositions. The Internet Engineering Task Force (IETF) is an open standards organization that develops and promotes Internet standards. The IETF SFC Working Group has developed the architectures and scenarios for SFC. In this paper, an FA is designed based on the IETF scenarios, including data center, mobile and security, to solve these issues. Subsequently, evaluation with a Finite Automaton Matcher (FAM) shows that valid service chains are accepted by the proposed FA. In addition, the proposed method reduces the number of service chain compositions and the time complexity.
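As a sketch of how an FA can check service-chain validity, the fragment below encodes a single made-up chaining rule (firewall, then deep packet inspection, then load balancer) as a DFA over service-function symbols. The rule and the function names are hypothetical; they are not the IETF scenarios themselves:

```python
# Hypothetical chaining rule as a DFA: a chain is valid only if the
# automaton walks from "start" to "accept" while reading it.
VALID_NEXT = {
    ("start", "fw"): "fw_done",      # firewall must come first
    ("fw_done", "dpi"): "dpi_done",  # then deep packet inspection
    ("dpi_done", "lb"): "accept",    # then the load balancer
}

def chain_is_valid(chain):
    state = "start"
    for fn in chain:
        if (state, fn) not in VALID_NEXT:
            return False             # no transition: invalid composition
        state = VALID_NEXT[(state, fn)]
    return state == "accept"

print(chain_is_valid(["fw", "dpi", "lb"]))  # True
print(chain_is_valid(["dpi", "fw", "lb"]))  # False
```

Rejecting invalid orderings up front is how the automaton prunes the space of candidate compositions before any optimization runs.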

... Sometimes, NFAs are defined with a set of initial states. An NFA with multiple initial states can be transformed into an NFA with a single initial state [19]. Moreover, non-deterministic processing may be viewed as a kind of parallel computation wherein multiple independent "processes" or "threads" can run concurrently [19] and [20]. ...

... An NFA with multiple initial states can be transformed into an NFA with a single initial state [19]. Moreover, non-deterministic processing may be viewed as a kind of parallel computation wherein multiple independent "processes" or "threads" can run concurrently [19] and [20]. In this view, the NFA splits to follow several choices, corresponding to a process "forking" into several children. ...
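The "forking" view of nondeterminism can be sketched by simulating an NFA with the set of currently active states, which also accommodates a set of initial states directly; the example automaton below is invented:

```python
# NFA simulation: keep the set of all states the "threads" could be in.
# A set of initial states is handled for free, as the starting set.
def run_nfa(delta, starts, accepting, word):
    current = set(starts)
    for ch in word:
        # every active state "forks" into all of its successors
        current = {q for s in current for q in delta.get((s, ch), ())}
    return bool(current & accepting)

# Toy NFA over {a, b} accepting words that end in "ab".
delta = {(0, "a"): {0, 1}, (0, "b"): {0}, (1, "b"): {2}}

print(run_nfa(delta, {0}, {2}, "aab"))  # True
print(run_nfa(delta, {0}, {2}, "aba"))  # False
```

Tracking the state *set* is exactly the idea behind the subset construction that converts an NFA into an equivalent DFA.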

... Moreover, the following theorems and lemmas are important for a clear illustration of the proposed idea [19]. ...

The need for computation speed is ever increasing. A promising solution to this requirement is parallel computing, but the degree of parallelism in electronic computers is limited due to physical and technological barriers. DNA computing offers a fascinating level of parallelism that can be utilized to overcome this problem. This paper presents a new computational model and the corresponding design methodology using the massive parallelism of DNA computing. We propose an automatic design algorithm to synthesize logic functions on DNA strands with the maximum degree of parallelism. In the proposed model, billions of DNA strands are utilized to compute the elements of a Boolean function concurrently to reach an extraordinary level of parallelism. Experimental and analytic results prove the feasibility and efficiency of the proposed method. Moreover, the analyses and results show that the delay of a circuit in this method is independent of the complexity of the function, and each Boolean function can be computed with O(1) time complexity.

... Artificial intelligence systems, including neural networks, are extensively elaborated on in the literature, particularly in the studies [1][2][3][4][5][6][7]. These studies constitute a sufficient compendium of knowledge concerning the principles of functioning of artificial neural networks. ...

... As presented in the studies [1][2][3][4][5][6][7][10], the purpose of the diagnosis of an object is to identify its state in the values of the state-assessment valence logic accepted by the researcher. Therefore, the decision-making process concerning the classification of states, in accordance with the decision threshold accepted in a given network, is realized in the output cells of the network. ...

... The diagnostic state of the object examined in the DIAG system is determined on the basis of an examination and analysis, comparing the image of the set of output diagnostic signals with the image of the reference (nominal) signal (Figure 9) [2][3][4][5][6][7][8][9][10][11]. The diagnostic result in the bivalent (2VL) assessment of states of low-power solar power plant equipment is shown in Figure 10. ...

The article presents the problems of diagnostics of low-power solar power plants with the use of the three-valued (3VL) state assessment {2, 1, 0}. The 3VL diagnostics is developed on the basis of two-valued (2VL) diagnostics, and it is elaborated on. In the 3VL diagnostics, the range of changes in the values of the signals from the 2VL logic was accepted for the serviceability condition: state {1_2VL}. This range of signal value changes was divided, for the 3VL logic, into two sub-ranges of signal values, which were assigned two state values in the 3VL logic: {2_3VL}, the serviceability condition, and {1_3VL}, the incomplete serviceability condition. The failure state is interpreted identically in both state-valence logics applied: the same changes in the values of the diagnostic signals exceed the ranges of their permissible changes. The DIAG 2 intelligent system based on an artificial neural network was used in diagnostic tests. For this purpose, the article presents the structure, algorithm and rules of inference used in the DIAG intelligent diagnostic system. The diagnostic method used in the DIAG 2 system utilizes a method known from the literature to compare diagnostic signal vectors with the assigned reference signal vectors. The result of this vector analysis is the developed metric of the difference vector. The problem of signal analysis and comparison is carried out in the input cells of the neural network. In the output cells of the neural network, in turn, the classification of the states of the object’s elements is realized. Depending on the condition of the individual elements that make up the object, the method is able to indicate whether the elements are in working order, out of order, or require quick repair/replacement.

... CDIF and SDIF methods have received extensive attention since they were proposed, but they are only applicable to radars with simple and fixed PRIs, and their iterative processes greatly degrade the computational efficiency of pulse deinterleaving. To solve this problem, the finite-state automaton (FSA) [19], [20] was introduced in [21] to make use of the cyclically transferring temporal states in such pulse trains, and a parallel pulse deinterleaving method was proposed. The FSA-based method greatly improves the computational efficiency and deinterleaving accuracy over its histogram-based counterparts [21]. ...

... The unknown variables in (16)–(20) are iteratively optimized to minimize the cost function, and the structure and parameters of the prediction model can be finally determined when the optimization process converges. Based on the prediction of the pulse group's embedding vector, the pulse distributions in the next pulse group can be further predicted with the decoder obtained in the previous subsection as follows, ...

... In the process of model training, the pulse groups in each pulse train, i.e., {x_1, x_2, ...}, are first transformed into vectors {e_1, e_2, ...} with the encoder, and the DTOAs between them, i.e., {Δ_2, Δ_3, ...}, are encoded into vectors {f_2, f_3, ...} according to (15); then the sequential RNN states s_1, s_2, ... are obtained according to (16)–(19). After that, the RNN states are exploited to predict the representation vectors ê_2, ê_3, ... of the upcoming pulse groups according to (20). Finally, the distances between these predictions and the observed pulse-group representation vectors are calculated and accumulated according to (21), which is taken as the loss function for iteratively optimizing the parameters in (15)–(20) as follows, ...

... We assume that the reader is familiar with the basic concepts of finite automata and formal languages. For any unexplained concepts she or he is referred to a standard textbook, e.g., [12][13][14][15]. Now, we recall some concepts and fix our notations. ...

... The set of all accepted words forms the accepted language L(A). Finite automata are usually represented by their graphs [12][13][14][15]. ...

Finite automata with translucent letters are finite-state devices that accept a class of languages that is a superset of the regular languages and, moreover, contains some non-context-free languages. The class is closed under union and concatenation; however, it is not closed under intersection with regular sets. There are three linguistically important non-context-free languages: multiple agreement, cross dependencies, and the marked copy. These languages cannot be accepted by finite automata with translucent letters. In this paper, an extension of the model is presented in which the input is preprocessed by a finite-state transducer. The transduced input is given to the finite automaton with translucent letters, which decides on acceptance. We prove that all three mentioned languages are accepted by the deterministic variant of the new model.

... To express graph queries, we can also use standard first-order logic formulae over graphs [32]; these formulae adhere to a fixed grammar. The concepts of left-recursive and right-recursive expressions are closely related to concepts in formal languages [39]. Indeed, all expressions we allow can be mapped to concepts in context-free grammars: node variables map to non-terminals, unions map to individual grammar rules for a non-terminal, and semi-joins map to the compositions within a single grammar rule. ...

... Indeed, all expressions we allow can be mapped to concepts in context-free grammars: node variables map to non-terminals, unions map to individual grammar rules for a non-terminal and semi-joins map to the compositions within a single grammar rule. This is no coincidence: it is well known that a context-free grammar that is left-recursive or right-recursive can always be rewritten into a regular expression (using Kleene-star instead of recursion) [39]. ...

Many graph query languages rely on composition to navigate graphs and select nodes of interest, even though evaluating compositions of relations can be costly. Often, this need for composition can be reduced by rewriting toward queries using semi-joins instead, resulting in a significant reduction of the query evaluation cost. We study techniques to recognize and apply such rewritings. Concretely, we study the relationship between the expressive power of the relation algebras, which heavily rely on composition, and the semi-join algebras, which replace composition in favor of semi-joins. Our main result is that each fragment of the relation algebras where intersection and/or difference is only used on edges (and not on complex compositions) is expressively equivalent to a fragment of the semi-join algebras. This expressive equivalence holds for node queries evaluating to sets of nodes. For practical relevance, we exhibit constructive rules for rewriting relation algebra queries to semi-join algebra queries and prove that they lead to only a well-bounded increase in the number of steps needed to evaluate the rewritten queries. In addition, on sibling-ordered trees, we establish new relationships among the expressive power of Regular XPath, Conditional XPath, FO-logic and the semi-join algebra augmented with restricted fixpoint operators.

... Formally, a CFG G is defined by the 4-tuple G = (V, Σ, S, R), where V is the finite set of non-terminal symbols, Σ is the finite set of terminals, S ∈ V is the start symbol, and R is the set of production rules. The rules take the form α → β, where α ∈ V and β is either the empty string ε or a string of terminals and non-terminals (Linz, 2011). We can represent basic Arabic language grammar using a CFG. ...

... A parse or derivation tree is a method for visualizing the results generated by a CFG. In the derivation tree, the root represents the start symbol, the internal nodes are labeled with non-terminal symbols, and the leaves represent terminal symbols (Linz, 2011). The sentence أكل الرجل التمساح "the man ate the crocodile" is parsed using the CFG production rules (Figure 1a), and its resultant parse tree is shown in Figure 1b. ...
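The 4-tuple definition translates directly into code. The toy grammar below uses an English gloss of the example sentence in place of the paper's Arabic eCFG (which is not reproduced here), keeping the verb-subject-object order of the Arabic original:

```python
# A CFG G = (V, Sigma, S, R) as plain Python data, plus a derivation
# that always expands the leftmost non-terminal with its first rule.
V = {"S", "VP", "NP"}
Sigma = {"ate", "the", "man", "crocodile"}
S = "S"
R = {
    "S": [["VP", "NP"]],                       # verb phrase, then subject
    "VP": [["ate"]],
    "NP": [["the", "man"], ["the", "crocodile"]],
}

def derive(symbols):
    out = []
    for sym in symbols:
        if sym in V:
            out.extend(derive(R[sym][0]))      # expand with the first rule
        else:
            out.append(sym)                    # terminals are emitted as-is
    return out

print(derive([S]))  # ['ate', 'the', 'man']
```

The recursion here mirrors the derivation tree: each call corresponds to an internal node labeled with a non-terminal, and the emitted terminals are its leaves read left to right.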

The end-case analysis of Arabic sentences is one of the keys to their meaning. This process, called i‘raab, is a daunting task for students. The outcome of the analysis is two-fold: (a) placing a proper diacritical marking on the end-cases of individual words, and (b) providing a logical justification. Our objective is to generate a full i‘raab of the sentences, which could be incorporated into a computer-assisted Arabic grammar learning environment, teaching and helping the students with the i‘raab process. Our system comprises four components that work together to generate a proper i‘raab for the sentence. We devise an enhanced context-free grammar (eCFG) that covers all the cases and rules taught in Saudi schools’ grammar textbooks (grades 7–12). Moreover, our eCFG eliminates the need for specialized grammars, e.g., head-driven phrase structure grammar and link grammar, to resolve complex cases involving dependencies. Furthermore, we utilize an ontology to determine the correct semantics. The system was tested on 300 sentences varying in complexity from the Arabic grammar textbooks used in the schools. Our system achieved an overall accuracy of 88.33%.

... In this section, we develop the basic theory surrounding regular languages. The reader may wish to refer to a standard text such as [4] for a more in-depth discussion on these topics. Our aim will be to define the so-called growth series of a language, and to describe a powerful technique (Theorem 3.8) used to compute the growth series of a regular language. ...

... for all v ∈ Lk(σ; τ). We claim that this map satisfies (4). For v ∈ Lk(σ; τ), we have by definition of δ that ...

In this paper, we study geodesic growth of numbered graph products; these are a generalization of right-angled Coxeter groups, defined as graph products of finite cyclic groups. We first define a graph-theoretic condition called link-regularity, as well as a natural equivalence amongst link-regular numbered graphs, and show that numbered graph products associated to link-regular numbered graphs must have the same geodesic growth series. Next, we derive a formula for the geodesic growth of right-angled Coxeter groups associated to link-regular graphs. Finally, we find a system of equations that can be used to solve for the geodesic growth of numbered graph products corresponding to link-regular numbered graphs that contain no triangles and have constant vertex numbering.

... The strings resulting from the splicing system generate a language, via formal language theory, which is called a splicing language [1]. A formal language is a set of words, or strings of symbols, derived from an alphabet [6]. The symbols for the empty string (λ), union (+), concatenation (·), star closure (*) and brackets ({} or ()) are the notations for regular expressions in formal languages that are applied in this research [6]. ...

... A formal language is a set of words, or strings of symbols, derived from an alphabet [6]. The symbols for the empty string (λ), union (+), concatenation (·), star closure (*) and brackets ({} or ()) are the notations for regular expressions in formal languages that are applied in this research [6]. For instance, the language L given by the expression p* · (q + r) is computed as follows: L(p* · (q + r)) = L(p*)L(q + r) = (L(p))*(L(q) ∪ L(r)) = p^n{q, r} where n ≥ 0 = {λ, p, pp, ppp, . . ...
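The expansion above can be cross-checked mechanically. The helper below enumerates p^n{q, r} for small n and verifies each word against Python's regex engine, where the expression p* · (q + r) is written as the pattern `p*[qr]` (a sketch; the helper name is invented):

```python
import re

# Enumerate p^n {q, r} for n = 0 .. max_n: the language of p* . (q + r).
def words_up_to(max_n):
    return ["p" * n + tail for n in range(max_n + 1) for tail in ("q", "r")]

print(words_up_to(2))  # ['q', 'r', 'pq', 'pr', 'ppq', 'ppr']

# Every enumerated word matches the equivalent Python pattern p*[qr].
assert all(re.fullmatch(r"p*[qr]", w) for w in words_up_to(10))
```

Note the correspondence of notation: the formal-language union `+` becomes the character class `[qr]`, while concatenation is simply juxtaposition in both notations.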

The mathematical modelling of DNA splicing systems is developed from the biological process of recombinant DNA, in which DNA molecules are cut and re-associated in the presence of a ligase and restriction enzymes. The molecules resulting from the splicing system generate a language, known as a splicing language, using formal language theory. In previous research, the splicing languages from different splicing systems have been generalised based on the sequences of restriction enzymes. In this research, the molecular aspects of the generalisations of splicing languages are discussed to validate the splicing languages through a wet-lab experiment. The initial string used in this model is taken from bacteriophage lambda. From the model, the predicted molecules resulting from the combination of the initial string and the chosen restriction enzymes are determined. The actual results from the experiment will be compared with the modelled results from the generalisations of splicing languages from DNA splicing systems with palindromic and non-palindromic sequences of restriction enzymes with different crossings.

... Automata theory is the study of abstract machines and automata, and of the computational problems that can be solved using them (Linz, 2012). Fundamentally, it is a branch of theoretical computer science. ...

... Both of these forms can be depicted as a transition graph. The transition graph can be interpreted as a flowchart for an algorithm recognizing a language (Linz, 2012). Based on the previous study in (Yusof et al., 2011), the transition graph was used by eliminating vertices to obtain the inert persistent splicing languages which lie in limit languages. ...

The application of automata theory to DNA splicing systems is growing rapidly. The idea of a splicing system was formalized by Tom Head in 1987. There are three essential parts in splicing system models: the alphabet, the initial strings, and the rules. The alphabet represents the nucleotides of DNA, known as Adenine, Thymine, Guanine, and Cytosine, abbreviated as a, t, g, c following Watson-Crick complementarity. On the other hand, the set of rules represents the restriction enzymes used for the splicing process. In this research, automata theory is used to transform the limit language into a transition graph. The n-th order limit language is then derived from a grammar, shown as an automaton diagram and by transition graphs, which represent the language of transition labels of DNA molecules derived from the respective splicing system.

... To prove their main result, Claesson and Guðmundsson employed the theory of formal languages; we will do the same. We recall the basic notions from this theory, referring the reader to [47] for more information. ...

... A language is regular if it is the set of words accepted by a deterministic finite automaton. The following lemma lists several standard properties of the collection of regular languages; we refer to [47, Chapter 4] for its proof. ...

Let $W$ be an irreducible Coxeter group. We define the Coxeter pop-stack-sorting operator $\mathsf{Pop}:W\to W$ to be the map that fixes the identity element and sends each nonidentity element $w$ to the meet of the elements covered by $w$ in the right weak order. When $W$ is the symmetric group $S_n$, $\mathsf{Pop}$ coincides with the pop-stack-sorting map. Generalizing a theorem about the pop-stack-sorting map due to Ungar, we prove that $\sup\limits_{w\in W}\left|O_{\mathsf{Pop}}(w)\right|=h$, where $O_{\mathsf{Pop}}(w)$ is the forward orbit of $w$ under $\mathsf{Pop}$ and $h$ is the Coxeter number of $W$ (with $h=\infty$ if $W$ is infinite). More generally, we define a map $f:W\to W$ to be compulsive if for every $w\in W$, $f(w)$ is less than or equal to $\mathsf{Pop}(w)$ in the right weak order. We prove that if $f$ is compulsive, then $\sup\limits_{w\in W}|O_f(w)|\leq h$. This result is new even for symmetric groups. We prove that $2$-pop-stack-sortable elements in type $B$ are in bijection with $2$-pop-stack-sortable permutations in type $A$, which were enumerated by Pudwell and Smith. Claesson and Gudmundsson proved that for each fixed nonnegative integer $t$, the generating function that counts $t$-pop-stack-sortable permutations in type $A$ is rational; we establish analogous results in types $B$ and $\widetilde A$.

... That is, such machines translate words over the input alphabet Σ to words over the output alphabet Γ. It can be shown that Moore machines and Mealy machines have the same expressivity; that is, for every Moore machine there exists an equivalent Mealy machine, and vice versa (Linz, 2006). ...
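One direction of this equivalence is easy to sketch: a Moore machine (output attached to states) becomes a Mealy machine (output attached to transitions) by emitting, on each transition, the Moore output of the target state. The toy machine below is invented for the example:

```python
# Moore -> Mealy: the Mealy output on (q, a) is the Moore output of the
# target state delta(q, a), so both machines emit the same word for the
# same input (up to the Moore machine's initial-state output).
def moore_to_mealy(delta, moore_out):
    return {(q, a): (q2, moore_out[q2]) for (q, a), q2 in delta.items()}

# Toy Moore machine over {x, y}: state B outputs "1", state A outputs "0".
delta = {("A", "x"): "B", ("A", "y"): "A",
         ("B", "x"): "B", ("B", "y"): "A"}
moore_out = {"A": "0", "B": "1"}

mealy = moore_to_mealy(delta, moore_out)
print(mealy[("A", "x")])  # ('B', '1')
print(mealy[("B", "y")])  # ('A', '0')
```

The converse direction needs extra states (one per distinct incoming output), which is why the Mealy-to-Moore construction may enlarge the state set while preserving the translation.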

In Markov Decision Processes (MDPs), rewards are assigned according to a function of the last state and action. This is often limiting, when the considered domain is not naturally Markovian, but becomes so after careful engineering of extended state space. The extended states record information from the past that is sufficient to assign rewards by looking just at the last state and action. Non-Markovian Reward Decision Processes (NRMDPs) extend MDPs by allowing for non-Markovian rewards, which depend on the history of states and actions. Non-Markovian rewards can be specified in temporal logics on finite traces such as LTLf/LDLf, with the great advantage of a higher abstraction and succinctness; they can then be automatically compiled into an MDP with an extended state space. We contribute to the techniques to handle temporal rewards and to the solutions to engineer them. We first present an approach to compiling temporal rewards which merges the formula automata into a single transducer, sometimes saving up to an exponential number of states. We then define monitoring rewards, which add a further level of abstraction to temporal rewards by adopting the four-valued conditions of runtime monitoring; we argue that our compilation technique allows for an efficient handling of monitoring rewards. Finally, we discuss application to reinforcement learning.

... Linear languages, introduced by Amar and Putzolu [36], do not belong to the original Chomsky hierarchy. This hierarchy has been extended over time [37]; see Fig. 1, in which the class of linear languages (L_Lin), also known as linear context-free languages, is situated between the classes of regular (L_Reg) and context-free (L_CF) languages [38,39]. ...

In 2001 and 2002, Daowen Qiu established a new theory of L-valued automata based on complete residuated lattices. Moreover, besides finite automata, several other classes of machines, grammars and languages have been generalized using the ideas introduced by Qiu. This paper generalizes the concepts of linear grammars and linear languages in this context; moreover, it presents some operations on L-valued languages and some closure properties of the L-valued linear languages.

... In this study, an IETF-based context-free grammar is defined as a descriptive model to evaluate the correctness of the service function chain structure. Applying the IETF-based CFG reduces the number of compositions by removing invalid SFCs (Linz, 2011), and the Skyline method removes services with low QoS. The Skyline reduces the search space and focuses only on interesting service functions not dominated by any other service. ...

Service function chaining (SFC) is a mechanism that allows service providers to combine various service functions and exploit the available virtual infrastructure. The best selection of virtual services in the network is essential for meeting user requirements and constraints. This paper proposes a novel approach to generating the optimal composition of service functions. To this end, a genetic algorithm based on a context-free grammar (CFG) that adheres to the Internet Engineering Task Force (IETF) standard, combined with Skyline, was developed for use in SFC. The IETF use cases of the data center, security, and mobile network were used to filter out invalid service chains, which reduced the search space. The proposed genetic algorithm found the Skyline service chain instance with the highest quality. The genetic operations were defined to ensure that the service function chains generated during the algorithm's execution were standard. The experimental results showed that the proposed service composition method outperformed the other methods regarding the quality of service (QoS), running time, and time complexity metrics. Ultimately, the proposed CFG could be generalized to other SFC use cases.

... The class of automata capable of doing any computation constitutes the class of "Turing Machine" automata. In this latter class there are automata which require an infinitely long tape and are therefore not accessible to representation with real material components, which can only use a bounded amount of energy; this restriction translates into a class of finite-tape-length automata (Minsky, 1967; Linz, 2012). In what follows, we will restrict ourselves to automata which do not require any infinitely long tapes or unbounded amounts of energy for their operation. ...

Computing with molecules is at the center of complex natural phenomena, where the information contained in ordered sequences of molecules is used to implement functionalities of synthesized materials or to interpret the environment, as in biology. This uses large macromolecules and the hindsight of billions of years of natural evolution. But can one implement computation with small molecules? If so, at what levels in the hierarchy of computing complexity? We review here recent work in this area establishing that all physically realizable computing automata, from Finite Automata (FA) (such as logic gates) to the Linearly Bounded Automaton (LBA, a Turing machine with a finite tape), can be represented/assembled/built in the laboratory using oscillatory chemical reactions. We examine and discuss in depth the fundamental issues involved in this form of computation exclusively done by molecules. We illustrate their implementation with the example of a programmable finite-tape Turing machine which, using the Belousov-Zhabotinsky oscillatory chemistry, is capable of recognizing words in a Context Sensitive Language and rejecting words outside the language. We offer a new interpretation of the recognition of a sequence of chemicals representing words in the machine's language as an illustration of the “Maximum Entropy Production Principle”, concluding that word recognition by the Belousov-Zhabotinsky Turing machine is equivalent to extremal entropy production by the automaton. We end by offering some suggestions to apply the above to problems in computing, polymerization chemistry, and other fields of science.


Z.D. Cupic, A.F. Taylor, D. Horvath, M. Orlik, I.R. Epstein (Eds) 2021
"Advances in Oscillating Reactions". Lausanne: Frontiers Media SA.
This book is a collection of an editorial and 10 chapters by different authors, originally published as individual papers in the Special Issue of "Frontiers in Chemistry" under the joint title "Advances in Oscillating Reactions" (2020/2021).

... // Simplified Linz Ĥ (Linz:1990:319) // Strachey (1965). In the above 14 instructions of the simulation of P(P), we can see that the first 7 instructions of P are repeated. At the end of this sequence of 7 instructions, P calls H with its own machine address as the parameters, as H(P,P). ...

The halting theorem counter-examples present infinitely nested simulation (non-halting) behavior to every simulating halt decider. The pathological self-reference of the conventional halting problem proof counter-examples is overcome. The halt status of these examples is correctly determined. A simulating halt decider remains in pure simulation mode until after it determines that its input will never reach its final state. This eliminates the conventional feedback loop where the behavior of the halt decider affects the behavior of its input.

... Theorem 1 [14]. Let L be an infinite linear language. Then there exists a positive integer m such that any w in L with |w| ≥ m can be decomposed as w = uvxyz with |uvyz| ≤ m and |vy| ≥ 1, such that uv^i xy^i z is in L for all i = 0, 1, 2, .... ...
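The theorem can be exercised on a concrete linear language. For L = {a^n b^n : n ≥ 1} and w = a^m b^m, the decomposition u = ε, v = a, x = a^(m-1) b^(m-1), y = b, z = ε keeps uvyz within the outer m symbols, and pumping v and y together stays inside L (a sketch, with the decomposition chosen by hand):

```python
# Membership test for the linear language L = { a^n b^n : n >= 1 }.
def in_L(w):
    n = len(w) // 2
    return n >= 1 and w == "a" * n + "b" * n

# u v^i x y^i z: the pumped word of the theorem.
def pump(u, v, x, y, z, i):
    return u + v * i + x + y * i + z

m = 4
u, v, x, y, z = "", "a", "a" * (m - 1) + "b" * (m - 1), "b", ""
assert len(u + v + y + z) <= m          # |uvyz| <= m
assert len(v + y) >= 1                  # |vy| >= 1
assert all(in_L(pump(u, v, x, y, z, i)) for i in range(6))
print("pumped words stay in L")
```

The constraint |uvyz| ≤ m is what distinguishes the linear pumping lemma from the general context-free one: v and y must sit near the two ends of the word, matching the single-turn structure of linear derivations.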

In DNA computing, a sticker system is a computing mechanism involving the Watson-Crick complementarity of DNA molecules. The sticker system is known as a language-generating device based on the sticker operation, which is analyzed through the concepts of formal language theory. The grammar of a formal language can be described by determining finite sets of variables, terminal symbols and production rules. Research on grammars which use the Watson-Crick complementarity has been done previously; these are known as Watson-Crick grammars. As an improvement on the Watson-Crick grammars, the static Watson-Crick grammars have been proposed as an analytical counterpart of sticker systems, consisting of regular, linear and context-free grammars. In this research, the closure properties of static Watson-Crick linear and context-free grammars are investigated. The result shows that the families of languages generated by static Watson-Crick linear and context-free grammars are closed under different operations.

... Some fundamentals of the basic notions of formal language theory and splicing systems used in this paper are explained in this section. Detailed information on the preliminaries involving formal language theory and algebra can be found in Turaev et al. [6], Linz [10] and Mateescu et al. [11]. ...

In 1987, Head was the first to introduce the concept of a splicing system as a theoretical model for DNA-based computation using the splicing operation. The splicing operation is a method of cutting and recombining DNA molecules under the influence of restriction enzymes and a ligase. Previous research has proven that splicing systems with finite sets of axioms and rules generate only regular languages. Hence, in order to increase the computational power of the languages generated by splicing systems, several restrictions on the use of rules have been considered. In this paper, simple splicing systems controlled by permutation groups are defined and the computational power of the languages generated by this variant is explored.

... Attention is restricted to subclasses of MDPs by making structural assumptions about which MDP components (rewards or transition probabilities) may change in support of the generation of tasks. While this kind of assumption is reasonable in most real-life situations, there are also scenarios where either the task specification is non-Markovian and thus difficult to express analytically as a reward function, or the sequential tasks cannot be generated from an underlying distribution when expressed logically using formal languages (Linz 2006; Pnueli 1977). For instance, consider a scenario in which an agent has learned the task of "delivering coffee and mail to office". ...

Continuously learning new tasks using high-level ideas or knowledge is a key capability of humans. In this paper, we propose Lifelong reinforcement learning with Sequential linear temporal logic formulas and Reward Machines (LSRM), which enables an agent to leverage previously learned knowledge to speed up the learning of logically specified tasks. For more flexible specification of tasks, we first introduce Sequential Linear Temporal Logic (SLTL), which supplements the existing Linear Temporal Logic (LTL) formal language. We then utilize Reward Machines (RM) to exploit structured reward functions for tasks encoded with high-level events, and propose automatic extension of RM and efficient knowledge transfer across tasks for continual learning over the agent's lifetime. Experimental results show that LSRM outperforms methods that learn the target tasks from scratch, by taking advantage of task decomposition using SLTL and knowledge transfer over RM during the lifelong learning process.
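At its core, a reward machine is a finite state machine over high-level events. The following minimal sketch illustrates the idea for the "deliver coffee and mail" scenario mentioned above; the state names, events and reward values are invented for illustration and are not taken from LSRM:

```python
# Sketch of a reward machine (RM) for the task "deliver coffee, then mail".
# States, events and reward values are illustrative, not taken from LSRM.

# transitions: (state, event) -> (next_state, reward)
RM = {
    ("u0", "coffee"): ("u1", 0.0),  # coffee delivered, no reward yet
    ("u1", "mail"):   ("u2", 1.0),  # mail delivered, task complete
}
TERMINAL = {"u2"}

def run(events):
    """Feed a sequence of high-level events through the RM."""
    state, total = "u0", 0.0
    for e in events:
        state, r = RM.get((state, e), (state, 0.0))  # ignore irrelevant events
        total += r
        if state in TERMINAL:
            break
    return state, total

assert run(["coffee", "mail"]) == ("u2", 1.0)   # correct order: rewarded
assert run(["mail", "coffee"]) == ("u1", 0.0)   # wrong order: no reward yet
```

Because the reward depends on the RM state and not only on the current MDP state, such machines can express non-Markovian task specifications like "first coffee, then mail".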

... A set of languages that share the same properties is called a family. Usually, families of languages are classified according to their generative power, which ranges from unrestricted (Type-0) to regular (Type-3) [1]. This classification by generative power forms the Chomsky hierarchy, giving an overview of the computational capability of each family of languages. ...

Through the years, formal language theory has evolved through continual interdisciplinary work in theoretical computer science, discrete mathematics and molecular biology. The combination of these areas resulted in the birth of DNA computing. Here, language-generating devices that usually operate over arbitrary sets of letters have taken on extra restrictions or modified constructs to simulate the behavior of recombinant DNA. One type of such device is the insertion-deletion system, where the operations of insertion and deletion of a word are combined in a single construct. By appending integers to both sides of the letters in a word, bonded insertion-deletion systems were introduced to accurately depict chemical bonds in chemical compounds. Previously, it has been shown that bonded sequential insertion-deletion systems can generate up to recursively enumerable languages. However, the closure properties of these systems had yet to be determined. In this paper, it is shown that bonded sequential insertion-deletion systems are closed under union, concatenation, concatenation closure, λ-free concatenation closure, substitution and intersection with regular languages. Hence, the family of languages generated by bonded sequential insertion-deletion systems is shown to be a full abstract family of languages.

... A derivation tree has a node that consists of a variable present on the left of the production rule while the child nodes consist of variables present on the right of the same production rule. A derivation tree illustrates how each variable is substituted in the derivation from the root identified with the start symbol, to the leaves or the terminals [1]. ...

As opposed to automata, grammars are used to generate strings instead of identifying them. The use of regular languages and finite automata is simple and restrictive. Context-Free Grammar or CFG is a formal grammar used to produce all possible combinations of strings in a given formal language. Context-Free Grammar consists of a set of grammar rules which are finite and predetermined. In computer science, a CFG is said to be ambiguous if a given string could be generated by the grammar in more than one way. Syntax of high-level programming languages, Parser programs and compiler design can be described using Context-Free Grammar. This paper presents a method to implement the derivations of Context-Free Grammar using Python. By applying an appropriate production rule to the leftmost non-terminal in each step, a leftmost derivation is obtained. On the contrary, by applying the appropriate production rule to the rightmost non-terminal in each step, a rightmost derivation is obtained.
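A leftmost derivation of this kind can be sketched in a few lines of Python. The grammar S → aSb | ε and the bounded recursive search below are illustrative choices, not the paper's actual implementation:

```python
# Sketch: leftmost derivation with a context-free grammar in Python.
# Grammar and representation are illustrative; nonterminals are the
# keys of GRAMMAR, everything else is a terminal.

GRAMMAR = {
    "S": [["a", "S", "b"], []],  # S -> aSb | ε  (generates a^n b^n)
}

def leftmost_derive(target):
    """Derive `target` from S, expanding the leftmost nonterminal each step."""
    def expand(form):
        if "".join(form) == target and all(s not in GRAMMAR for s in form):
            return [form]
        for i, sym in enumerate(form):          # find leftmost nonterminal
            if sym in GRAMMAR:
                for rhs in GRAMMAR[sym]:
                    new = form[:i] + rhs + form[i + 1:]
                    # prune branches that already have too many terminals
                    if len([s for s in new if s not in GRAMMAR]) <= len(target):
                        result = expand(new)
                        if result:
                            return [form] + result
                return []
        return []
    return expand(["S"])

steps = leftmost_derive("aabb")
print([" ".join(f) for f in steps])
# ['S', 'a S b', 'a a S b b', 'a a b b']
```

A rightmost derivation would differ only in scanning `form` from the right for the nonterminal to expand; for this one-nonterminal grammar the two coincide.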

... It contains a finite set of states and the transitions from one state to the next. Consequently, the process of intent refinement can be visually represented as the state diagram of a DFA [14]. ...

... To build the decryption machine we use the technique of tracing, used when converting an NFA into a DFA [6]. That is, we trace the machines L_PRM and λ(K, L_PRM) simultaneously. Each machine is given the appropriate input at each time step. ...
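The tracing referred to here is in the spirit of the standard subset construction. The following minimal sketch traces all reachable sets of NFA states; the NFA (accepting binary strings ending in "01") is invented for illustration:

```python
# Sketch of the subset construction used when converting an NFA to a DFA.
# The NFA below (accepting strings over {0,1} ending in "01") is illustrative.

from itertools import chain

# NFA transitions: state -> symbol -> set of successor states
NFA = {
    "q0": {"0": {"q0", "q1"}, "1": {"q0"}},
    "q1": {"1": {"q2"}},
    "q2": {},
}
START, ACCEPT = "q0", {"q2"}

def nfa_to_dfa():
    """Trace every reachable subset of NFA states (no epsilon moves here)."""
    start = frozenset({START})
    dfa, todo = {}, [start]
    while todo:
        S = todo.pop()
        if S in dfa:
            continue
        dfa[S] = {}
        for sym in "01":
            T = frozenset(chain.from_iterable(
                NFA[q].get(sym, set()) for q in S))
            dfa[S][sym] = T
            todo.append(T)
    accepting = {S for S in dfa if S & ACCEPT}
    return dfa, start, accepting

def accepts(word):
    dfa, S, accepting = nfa_to_dfa()
    for c in word:
        S = dfa[S][c]
    return S in accepting

assert accepts("01") and accepts("1101")
assert not accepts("10")
```

Each DFA state is a frozen set of NFA states, so tracing two machines "simultaneously", as the excerpt describes, amounts to running both on the same input while tracking such sets.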

Putting a watermark into digital circuitry has its own set of challenges. Creating a secure watermark in printed matter usually involves including graphics that are difficult to reproduce. In circuitry, when including additional circuitry that is hard to reproduce, one must contend with the prospect of increasing the power consumption of the circuit, decreasing its speed, and introducing watermark circuitry that is easily copied. In this paper we present a watermark method for sequential circuitry. It allows for several degrees of calibration, letting the user tune the complexity of the watermark to requirements of speed, power, and efficacy. Our method uses an encryption technique to introduce secrets about the watermark circuit at several levels. It also employs a boundary-scan testing protocol as a means to protect the watermark circuitry. Our discussion starts by describing the different tools needed in our watermark scheme. We then discuss the difficulty of the problem associated with cracking our watermark circuit. This analysis shows that, with full implementation, our method can be made quite secure.

... With the widespread usage of contemporary solutions, such as artificial intelligence and intelligent systems, current research supporting the development of expert and advisory systems focuses on challenges connected to improving methods for obtaining the specialized knowledge of a person. Previous studies [10,11] have addressed this issue. Figure 1 graphically demonstrates the difficulty of evaluating the reliability of wind farm equipment while it is in use. ...

This article deals with the importance of simulation studies for the reliability of wind farm (WF) equipment during the operation process. Improvements, upgrades, and the introduction of new solutions that change the reliability, quality, and conditions of use and operation of wind farm equipment present a research problem. Based on this research, it is possible to continuously evaluate the reliability of WF equipment. The topic of reliability testing of complex technical facilities is constantly being developed in the literature. The article assumes that the operation of wind farm equipment is described and modeled based on Markov processes. This assumption justified the use of Kolmogorov–Chapman equations to describe the developed research model. Based on these equations, an analytical model of the wind farm operation process was created and described. As a result of the simulation analysis, the reliability of the wind farm was determined in the form of a probability function (R0(t)) for the WPPs system.

... Here we recall some necessary concepts about the classes of regular and context-free languages and also fix our notation [4-6,11,12]. ...

... The regeneration of the object takes place at the time when it is required. This is ensured by an intelligent diagnostic system of the object which is constructed based on an artificial neural network, especially one that reliably and credibly recognizes the states of the object for which prevention activities need to be performed [10-20]. There are no losses: no costs connected with ineffective use of the object, which may occur during operation when the object is not fit or is in a state of incomplete fitness. ...

The article deals with simulation tests on the reliability of wind farm (WF) equipment in the operation process. The improvement, modernization, and introduction of new solutions that change the reliability, quality, and conditions of use and operation of wind farm equipment require testing. Based on these tests, it is possible to continuously evaluate the reliability of WF equipment. The reliability of wind farm equipment whose operation process is modernized with intelligent systems, DIAG diagnostic systems, and the Wind Power Plant Expert System (WPPES) can only be assessed through simulation. The topic of testing the reliability of complex technical objects is constantly developing in the literature. In this paper, it is assumed that the operation of wind farm equipment is described and modeled based on Markov processes. This assumption justified the use of the Kolmogorov–Chapman equations to describe the developed model. Based on these equations, an analytical model of the wind farm operation process was developed and described. The simulation analysis determines the reliability of the wind farm in terms of the availability factor Kg(t). The simulation tests are performed in two stages using the computer program LabVIEW. In the first stage, the reliability value in the form of the availability factor Kg(t) was investigated as a function of changes in the mean repair time, ranging from 0.3 to 1.0. In the second stage, the reliability value of WF devices was examined as a function of changes in the mean time between successive failures, ranging from 1000 to 3000 (h).

... There is a set of problems that cannot be formulated in a well-defined format for humans, and therefore there is uncertainty as to how we can organize HLI-based agents to face these problems. For example, in [21], it was proved that there are some types of problems for which no solving algorithm exists. In addition, some problems, such as the goals of human creation, are not clear even to humans, and therefore HLI-based agents will not be able to solve these types of problems because they follow the thinking process of humans. ...

In recent years, artificial intelligence has had a tremendous impact on every field, and several definitions of its different types have been provided. In the literature, most articles focus on the extraordinary capabilities of artificial intelligence. Recently, challenges such as security, safety, fairness, robustness, and energy consumption have been reported during the development of intelligent systems. As the usage of intelligent systems increases, the number of new challenges increases. Obviously, during the evolution from artificial narrow intelligence to artificial super intelligence, the viewpoint on challenges such as security will change. In addition, the recent development of human-level intelligence cannot appropriately happen without considering the whole set of challenges in designing intelligent systems. Given this situation, no study in the literature summarizes the challenges in designing artificial intelligence. In this paper, a review of these challenges is presented. Then, some important research questions about the future dynamism of the challenges and their relationships are answered.

... Each rule then defines the set of expressions by which a certain variable, denoted by ⟨·⟩, can be replaced. Starting with the symbol ⟨ ⟩, each expression can be substituted recursively according to the specified rules until it contains either exclusively terminal symbols or the empty string [33]. The resulting derivation tree uniquely represents a multigrid preconditioner on the specified hierarchy of ...

Solving the indefinite Helmholtz equation is not only crucial for the understanding of many physical phenomena but also represents an outstandingly difficult benchmark problem for the successful application of numerical methods. Here we introduce a new approach for evolving efficient preconditioned iterative solvers for Helmholtz problems with multi-objective grammar-guided genetic programming. Our approach is based on a novel context-free grammar, which enables the construction of multigrid preconditioners that employ a tailored sequence of operations on each discretization level. To find solvers that generalize well over the given domain, we propose a custom method of successive problem-difficulty adaptation, in which we evaluate a preconditioner's efficiency on increasingly ill-conditioned problem instances. We demonstrate our approach's effectiveness by evolving multigrid-based preconditioners for a two-dimensional indefinite Helmholtz problem that outperform several human-designed methods for different wavenumbers, up to systems of linear equations with more than a million unknowns.

This chapter is concerned with the Hidden Markov Model as a special type of machine learning algorithm. We describe the discrete Markov model as background for understanding the Hidden Markov Model. In the main section, we describe the Hidden Markov Model and the three cases it handles: computing the probability of a state sequence, generating a state sequence given an observation sequence, and estimating its parameters from a training set. We mention text topic analysis as a typical task to which the Hidden Markov Model is applied. This chapter covers a special machine learning algorithm that, unlike other machine learning algorithms, considers the temporal sequence of input values. This chapter is intended to describe the Hidden Markov Model and its application to text topic analysis.
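As an illustration of the probability-computation case, the forward algorithm sums the probability of an observation sequence over all hidden state paths. The model parameters below are invented for demonstration and are not from the chapter:

```python
# Sketch: the forward algorithm for computing the probability of an
# observation sequence under an HMM. All parameters are illustrative.

states = ["Rainy", "Sunny"]
start_p = {"Rainy": 0.6, "Sunny": 0.4}
trans_p = {"Rainy": {"Rainy": 0.7, "Sunny": 0.3},
           "Sunny": {"Rainy": 0.4, "Sunny": 0.6}}
emit_p = {"Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
          "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1}}

def forward(observations):
    """P(observations | model), summed over all hidden state paths."""
    # alpha[s] = P(observations so far, current state = s)
    alpha = {s: start_p[s] * emit_p[s][observations[0]] for s in states}
    for obs in observations[1:]:
        alpha = {s: emit_p[s][obs] * sum(alpha[p] * trans_p[p][s]
                                         for p in states)
                 for s in states}
    return sum(alpha.values())

p = forward(["walk", "shop", "clean"])
assert 0.0 < p < 1.0
```

The same alpha-recursion underlies parameter estimation (Baum-Welch), while the decoding case replaces the sum with a max (Viterbi).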

In molecular biology, protein is the most significant product of the gene expression process, generated by utilizing the genetic information of the DNA molecules. Based on the Central Dogma of molecular biology, proteins are synthesized by first transcribing a DNA gene, which produces mRNA molecules. This mRNA is then translated to produce the desired protein product. The finite state machine (FSM) is a well-established tool for modeling discrete event systems. In this paper, we introduce an FSM model for protein synthesis. We further incorporate a mechanism for mutation analysis into our FSM model that helps determine effects on protein synthesis. The proposed model has been validated by feeding the FSM model with DNA strands. Results show that the model mimics and produces protein products as described by the steps of the Central Dogma. Furthermore, the FSM model succeeds in discovering gene mutations and consequently determining the mutation type.
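A toy version of such an FSM can be sketched as follows; the complement map, the truncated codon table and the two scanner states are illustrative choices, not the paper's model:

```python
# Sketch: a finite-state view of the Central Dogma. Transcription maps a
# DNA template strand to mRNA; a two-state scanner then reads codons from
# the start codon AUG until a stop codon. Codon table truncated for brevity.

COMPLEMENT = {"A": "U", "T": "A", "G": "C", "C": "G"}
CODONS = {"AUG": "Met", "UUU": "Phe", "GGC": "Gly",
          "UAA": "STOP", "UAG": "STOP", "UGA": "STOP"}

def transcribe(dna_template):
    """Template DNA strand -> mRNA."""
    return "".join(COMPLEMENT[b] for b in dna_template)

def translate(mrna):
    """Scan for AUG (state SEARCH), then consume codons (state READ)."""
    state, protein, i = "SEARCH", [], 0
    while i + 3 <= len(mrna):
        codon = mrna[i:i + 3]
        if state == "SEARCH":
            if codon == "AUG":
                state = "READ"
                protein.append("Met")
                i += 3
            else:
                i += 1           # slide one base while searching
        else:                    # state == "READ"
            aa = CODONS.get(codon, "???")
            if aa == "STOP":
                return protein
            protein.append(aa)
            i += 3
    return protein

mrna = transcribe("TACAAACCGATT")   # -> "AUGUUUGGCUAA"
assert translate(mrna) == ["Met", "Phe", "Gly"]
```

A point mutation in the template strand changes a codon and, through the same machine, visibly changes (or truncates) the protein output, which is the intuition behind the paper's mutation analysis.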

This paper is a summary of my seminars given in the Research Group on Mathematical Linguistics in the year 2005. It is a short survey of automata theory, including finite state automata and tree automata. The transformations (transductions) induced by finite state automata and tree automata are given.

We propose to study the cat and the butterfly by means of an automaton.

The Nottingham group at 2 is the group of (formal) power series t + a_2 t^2 + a_3 t^3 + ⋯ in the variable t with coefficients a_i from the field with two elements, where the group operation is given by composition of power series. The depth of such a series is the largest d ⩾ 1 for which a_2 = … = a_d = 0.
Only a handful of power series of finite order (necessarily a power of 2) are explicitly known through a formula for their coefficients. We argue in this paper that it is advantageous to describe such series in closed computational form through automata, based on effective versions of proofs of Christol's theorem identifying algebraic and automatic series.
Up to conjugation, there are only finitely many series σ of order 2^n with fixed break sequence (i.e. the sequence of depths of σ^(∘2^i)). Starting from Witt vector or Carlitz module constructions, we give an explicit automaton-theoretic description of: (a) representatives up to conjugation for all series of order 4 with break sequence (1, m) for m < 10; (b) representatives up to conjugation for all series of order 8 with minimal break sequence (1, 3, 11); and (c) an embedding of the Klein four-group into the Nottingham group at 2.
We study the complexity of the new examples via the algebro-geometric properties of the equations they satisfy. For this, we generalise the theory of sparseness of power series to a four-step hierarchy of complexity, for which we give both Galois-theoretic and combinatorial descriptions. We identify where our different series fit into this hierarchy. We construct sparse representatives for the conjugacy class of elements of order two and depth 2^μ ± 1 (μ ⩾ 1). Series with small state complexity can end up high in the hierarchy. This is true, for example, for a new automaton we found, representing a series of order 4 with 5 states (the minimal possible number for such a series).

In 2014, Jeandel proved that two dynamical properties of Turing machines, the maximum speed and the topological entropy, can be computed to within any desired error ε > 0. Both results were proved in parallel, using equivalent properties. Those results were unexpected, as most (if not all) dynamical properties are undecidable. Nevertheless, positiveness of topological entropy for reversible and complete Turing machines was soon proved to be undecidable, via a reduction from the halting problem with empty counters in 2-reversible counter machines. Unfortunately, the same proof could not be used to prove undecidability of speed positiveness. In this research, we prove the undecidability of the Homogeneous Tape Reachability Problem for aperiodic and reversible Turing machines, and use it to prove the undecidability of the Speed Positiveness Problem for complete and reversible Turing machines.

Structured Query Language (SQL) remains the standard language used in Relational Database Management Systems (RDBMSs) and has found applications in healthcare (patient registries), businesses (inventories, trend analysis), military, education, etc. Although SQL statements are English-like, the process of writing SQL queries is often problematic for nontechnical end-users in the industry. Similarly, formulating and comprehending written queries can be confusing, especially for undergraduate students. One of the pivotal reasons given for these difficulties lies with the simple syntax of SQL, which is often misleading and hard to understand. An ideal solution is to present these two audiences: undergraduate students and nontechnical end-users with learning and practice tools. These tools are mostly electronic and can be used to aid their understanding, as well as enable them to write correct SQL queries. This work proposes a new approach aimed at understanding and writing correct SQL queries using principles from Formal Language and Automata Theory. We present algorithms based on: regular expressions for the recognition of simple query constructs, context-free grammars for the recognition of nested queries, and a jumping finite automaton for the synthesis of SQL queries from natural language descriptions. As proof of concept, these algorithms were further implemented into interactive software tools aimed at improving SQL comprehension. Evaluation of these tools showed that the majority of participants agreed that the tools were intuitive and aided their understanding of SQL queries. These tools should, therefore, find applications in aiding SQL comprehension at higher learning institutions and assist in the writing of correct queries in data-centered industries.
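As a taste of the first of these algorithms, here is a toy regular expression recognizing one simple query construct. The pattern and helper function are illustrative sketches, not the tools' actual implementation:

```python
# Sketch: a regular expression recognizing a simple SQL SELECT construct.
# The pattern is a toy covering only
#   SELECT <columns> FROM <table> [WHERE <ident> = <value>] [;]
import re

SIMPLE_SELECT = re.compile(
    r"^\s*SELECT\s+(\*|\w+(\s*,\s*\w+)*)"        # column list or *
    r"\s+FROM\s+\w+"                             # single table name
    r"(\s+WHERE\s+\w+\s*=\s*('[^']*'|\d+))?"     # optional WHERE clause
    r"\s*;?\s*$",
    re.IGNORECASE,
)

def is_simple_select(query):
    """True if the query matches the toy SELECT grammar above."""
    return SIMPLE_SELECT.match(query) is not None

assert is_simple_select("SELECT name, age FROM students WHERE id = 42;")
assert is_simple_select("select * from courses")
assert not is_simple_select("SELECT FROM WHERE")    # malformed
assert not is_simple_select("DROP TABLE students")  # not a SELECT
```

Regular expressions suffice only for flat constructs like this; as the abstract notes, nested subqueries require a context-free grammar, since balanced nesting is beyond regular languages.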

The halting theorem counter-examples present infinitely nested simulation (non-halting) behavior to every simulating halt decider. Whenever the pure simulation of the input to simulating halt decider H(x,y) would never stop running unless H aborted its simulation, H correctly aborts this simulation and returns 0 for not halting.

This chapter discusses the design of workflows (or pipelines), which represent solutions that involve more than one algorithm. This is motivated by the fact that many tasks require such solutions. This problem is non-trivial, as the number of possible workflows (and their configurations) can be rather large. This chapter discusses various methods that can be used to restrict the design options and thus reduce the size of the configuration space. These include, for instance, ontologies and context-free grammars. Each of these formalisms has its merits and shortcomings. Many platforms have resorted to planning systems that use operators. These can be designed to be in accordance with the given ontologies or grammars. As the search space may be rather large, it is important to leverage prior experience. This topic is addressed in one of the sections, which discusses rankings of plans that have proved to be useful in the past. The workflows/pipelines that have proved successful in the past can be retrieved and used as plans in future tasks. Thus, it is possible to exploit both planning and metalearning.

The concept of folding (of automata or graphs) has been studied by several researchers over the past years. In parallel computing, folded cube graphs have been employed as a potential network topology alternative to the hypercube. The study of graph folding has applications in various emerging research areas such as chemical, protein and biological structures. Motivated by these observations, the idea of folding a special Hamiltonian circuit graph into a cycle triple is initiated. An infinite graph G is constructed, and the iterative level construction leads to a Hamiltonian circuit graph. Further, the folding process must ensure that the iterative folding yields, in the end, a cycle-triple pseudosymmetric graph as the final folded form. A special kind of tournament is also considered, and the output table of directed edge labeling for it is discussed. Both the Hamiltonian circuit and the tournament graph have been constructed by finite state machines. The main intention is to study the generalization of automata for Hamiltonian circuits.