Please don't be shy about sending even vague pointers to people who may have complete or partial resolutions of the problems mentioned in any of the open questions columns that have appeared as earlier complexity theory columns. Though I don't give a ...


... A Finite Automaton (FA) is a simple idealized machine used to recognize patterns within input taken from some alphabet [18]. An FA consists of four components: a finite set of states, a designated start state, a set of final states, and a set of transitions T from one state to another. ...

... The automaton M accepts the word w = a_1 a_2 ... a_n if there exists a sequence of states r_0, r_1, ..., r_n in Q satisfying three conditions: r_0 = q_0; r_{i+1} ∈ δ(r_i, a_{i+1}) for i = 0, ..., n−1; and r_n ∈ F [18]. The first condition means that the machine starts from the state q_0. ...
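The acceptance condition above can be made concrete with a minimal DFA simulator. The automaton below (binary strings ending in '1') is an illustrative choice of ours, not an example from the cited source:

```python
# A minimal DFA simulator illustrating the acceptance condition:
# r_0 = q_0, r_{i+1} = delta(r_i, a_{i+1}), accept iff r_n is final.

def dfa_accepts(delta, q0, finals, word):
    """Return True iff the run starting at q0 ends in a final state."""
    state = q0
    for symbol in word:
        if (state, symbol) not in delta:
            return False          # no transition defined: reject
        state = delta[(state, symbol)]
    return state in finals

# Example DFA: binary strings that end in '1'
delta = {('q0', '0'): 'q0', ('q0', '1'): 'q1',
         ('q1', '0'): 'q0', ('q1', '1'): 'q1'}

print(dfa_accepts(delta, 'q0', {'q1'}, '10011'))  # True
print(dfa_accepts(delta, 'q0', {'q1'}, '10010'))  # False
```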

... Based on [18], an expression is regular if: ...

The need for computation speed is ever increasing. A promising solution to this requirement is parallel computing, but the degree of parallelism in electronic computers is limited due to physical and technological barriers. DNA computing offers a fascinating level of parallelism that can be exploited to overcome this problem. This paper presents a new computational model and the corresponding design methodology using the massive parallelism of DNA computing. We propose an automatic design algorithm to synthesize logic functions on DNA strands with the maximum degree of parallelism. In the proposed model, billions of DNA strands are utilized to compute the elements of the Boolean function concurrently to reach an extraordinary level of parallelism. Experimental and analytic results prove the feasibility and efficiency of the proposed method. Moreover, the analyses and results show that the delay of a circuit in this method is independent of the complexity of the function, and each Boolean function can be computed with O(1) time complexity.

... Introduction. Computational problems are classified into various complexity classes, based on the best known computational resources required by them [1]. The complexity class PSPACE (Polynomial Space) is the class of all computational problems that can be solved by a classical computer using polynomial memory [2]. ...

... The complexity class PSPACE (Polynomial Space) is the class of all computational problems that can be solved by a classical computer using polynomial memory [2]. This class is one of the larger complexity classes, since the classes P (Polynomial Time: problems solvable by a classical computer in polynomial time [1]), NP (Nondeterministic Polynomial Time: problems verifiable, though not necessarily solvable, by a classical computer in polynomial time [1]), and BQP (Bounded-error Quantum Polynomial Time: problems solvable by a quantum computer in polynomial time with a bounded probability of error [2]) are believed to lie within PSPACE. Note that the class NPSPACE (Nondeterministic Polynomial Space) equals PSPACE, since a deterministic Turing machine can simulate a nondeterministic Turing machine without needing much more space, although it may use much more time [3]. ...

The complexity class PSPACE includes all computational problems that can be solved by a classical computer with polynomial memory. All PSPACE problems are known to be solvable by a quantum computer with polynomial memory as well and are, therefore, in BQPSPACE. Here, we present a polynomial-time quantum algorithm for a PSPACE-complete problem, implying that PSPACE is a subset of the class BQP of all problems solvable by a quantum computer in polynomial time. In particular, we outline a BQP algorithm for the PSPACE-complete problem of evaluating a full binary NAND tree. The best existing approach, based on quantum walks, achieves only a quadratic speedup for this problem and thus still requires time exponential in the problem size. By contrast, we achieve an exponential speedup, allowing the problem to be solved in polynomial time. Our result has many real-world applications, such as strategy games like chess or Go. As an example, in quantum sensing, the problem of quantum illumination, which is treated as one of channel discrimination, is PSPACE-complete. Our work implies that quantum channel discrimination, and therefore quantum illumination, can be performed by a quantum computer in polynomial time.
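For reference, the problem being sped up can be stated very simply in classical terms: the leaves of a full binary tree hold the 2^d input bits, and each internal node computes the NAND of its children. This classical evaluator is only an illustration of the problem, not the quantum algorithm from the paper:

```python
# Classical bottom-up evaluation of a full binary NAND tree.
# `leaves` must have length 2^d for some depth d.

def eval_nand_tree(leaves):
    level = list(leaves)
    while len(level) > 1:
        # NAND(a, b) = 1 - (a AND b) for bits a, b
        level = [1 - (level[i] & level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

print(eval_nand_tree([0, 1, 1, 1]))  # 1
print(eval_nand_tree([0, 0, 0, 0]))  # 0
```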

... setting is testing generalization on sequence prediction problems, where an agent is trained on sequences of length ≤ N and tested on sequences of arbitrary length > N. This problem is of particular importance since it subsumes all computable problems [2][3][4][5][6]. Central to sequence prediction is inductive inference, which consists of deriving a general rule from a finite set of concrete instances and using this rule to make predictions. ...

... In formal language theory, the Chomsky hierarchy [17] classifies such (sequence prediction) problems in order of increasing complexity. This hierarchy is associated with an equivalent hierarchy of models (automata) that are capable of solving different problem classes [2,18]. Lower-level automata have restrictive memory models and can only solve lower-level problem sets, while Turing machines with infinite memory and unrestricted memory access sit at the top of the hierarchy and can solve all computable problems. ...

... Problem setup While we use formal language theory to classify our tasks, learning to recognize formal languages is hard because finding appropriate negative examples is an ill-defined problem. Therefore, we evaluate our neural architectures on sequence prediction problems, since language recognition can be reformulated as such a problem [2][3][4][5][6]. Let z := (z 1 , z 2 , . . . ...

Reliable generalization lies at the heart of safe ML and AI. However, understanding when and how neural networks generalize remains one of the most important unsolved problems in the field. In this work, we conduct an extensive empirical study (2200 models, 16 tasks) to investigate whether insights from the theory of computation can predict the limits of neural network generalization in practice. We demonstrate that grouping tasks according to the Chomsky hierarchy allows us to forecast whether certain architectures will be able to generalize to out-of-distribution inputs. This includes negative results where even extensive amounts of data and training time never led to any non-trivial generalization, despite models having sufficient capacity to perfectly fit the training data. Our results show that, for our subset of tasks, RNNs and Transformers fail to generalize on non-regular tasks, LSTMs can solve regular and counter-language tasks, and only networks augmented with structured memory (such as a stack or memory tape) can successfully generalize on context-free and context-sensitive tasks.
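The length-generalization setup described above can be sketched with two toy task generators from different Chomsky levels. The task names and framing here are our illustration, not the paper's exact benchmark:

```python
# Two sequence-prediction tasks: parity (regular) and sequence reversal
# (requires stack-like memory). Train on short inputs, test on long ones.

import random

def parity_example(length):
    """Regular task: predict the parity of the number of 1s."""
    bits = [random.randint(0, 1) for _ in range(length)]
    return bits, sum(bits) % 2

def reverse_example(length):
    """Stack-style task: predict the reversal of the input sequence."""
    seq = [random.randint(0, 1) for _ in range(length)]
    return seq, seq[::-1]

# Length generalization: train with length <= 10, test with length > 50.
train = [parity_example(random.randint(1, 10)) for _ in range(5)]
test = [parity_example(random.randint(50, 100)) for _ in range(5)]
print(len(train), len(test))
```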

... A proposed formal framework for spiking neural P systems by [3] remarks on the introduction of stochastic features. Computing models leading up to SN P systems that introduced stochasticity, from finite automata [4][5][6], Petri nets [7], and the Turing machine [8,9], to recent constructs like neural networks [10][11][12] and P systems [13][14][15][16][17], all suggest that stochastic spiking neural P systems are also worth investigating. Stochastic spiking neural P systems can contribute to the design of biologically plausible mathematical models of brain cognitive functions [18] and the development of asynchronous SN P systems [19]. ...

... Intuitively, a deterministic process can be interpreted as a single-outcome stochastic process. On the other hand, in automata theory, a nondeterministic process is a generalization of a deterministic process that can be intuitively viewed as a tree of possibilities with branching outcomes [8]. While it can be described that way (it is also similarly interpreted by [2] as random choices with equal probabilities), a nondeterministic process does not correspond, and is not equivalent, to a stochastic process. ...

Spiking neural P (SN P) systems are a class of P systems that incorporate the idea of spiking neurons. A variant called SN P systems with stochastic application of rules (\(\star\)SN P systems) replaces the nondeterministic application of rules of SN P systems with probabilistic selection. It has been proven that \(\star\)SN P systems are universal, albeit using rule application probabilities of 1. This paper investigates the open problem of the universality of \(\star\)SN P systems when the rule application probability of all rules is restricted to \(<1\). The restriction essentially desynchronizes all neurons, which renders the use of time to encode information unreliable, specifically the time interval-based output and, to some degree, spike duplication using intermediary neurons. Given the difficulty of proving the universality of asynchronous SN P systems using standard rules, this paper then considers introducing new features to \(\star\)SN P systems. The following features are investigated to address the desynchronization: extended rules, with the mode of output being the number of spikes sent to the environment, and colored spikes, with the mode of output being the number of spikes in the output neuron. Computational universality is proven for each introduced feature. In the investigation of extended rules, introducing the new feature to \(\star\)SN P systems produced a formal definition suitable for the Extended spiking neural P system (ESNPS) described and implemented in the Optimization spiking neural P system (OSNPS).

... In Lemma 1 and consequently also in Definition 1, one is only interested in lengths which allow a pumping somewhere in the word. We now give a stronger version of the pumping lemma where also the position and length of the pumped word matter (see for instance [8,14,15]). ...

... The paper [4] gives a summary of results. To our knowledge, the minimal pumping constant and the minimal pumping length have not been studied, but they occur in exercises of some textbooks (e.g., [14, Exercise 1.55]). ...

The well-known pumping lemma for regular languages states that, for any regular language L, there is a constant p (depending on L) such that the following holds: if w ∈ L and |w| ≥ p, then there are words x ∈ V*, y ∈ V⁺, and z ∈ V* such that w = xyz and xyᵗz ∈ L for all t ≥ 0. The minimal pumping constant mpc(L) of L is the minimal number p for which the conditions of the pumping lemma are satisfied. We investigate the behaviour of mpc with respect to operations, i.e., for an n-ary regularity-preserving operation ∘, we study the set g∘^mpc(k₁, k₂, …, kₙ) of all numbers k such that there are regular languages L₁, L₂, …, Lₙ with mpc(Lᵢ) = kᵢ for 1 ≤ i ≤ n and mpc(∘(L₁, L₂, …, Lₙ)) = k. With respect to Kleene closure, complement, reversal, prefix- and suffix-closure, circular shift, union, intersection, set subtraction, symmetric difference, and concatenation, we determine g∘^mpc(k₁, k₂, …, kₙ) completely. Furthermore, we give some results with respect to the minimal pumping length where, in addition, |xy| ≤ p has to hold.
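The pumping condition can be checked by brute force for a concrete language. The language below (strings over {a, b} with an even number of a's) and the candidate constant p = 1 are our illustrative choices; the check only tries a few small values of t rather than all t ≥ 0:

```python
# Brute-force check of the pumping property for
# L = { w over {a,b} : w has an even number of a's }.

from itertools import product

def in_L(w):
    return w.count('a') % 2 == 0

def pumpable(w, t_max=4):
    """Does w admit a split w = xyz, y nonempty, with x y^t z in L
    for t = 0..t_max?"""
    n = len(w)
    for i in range(n):                    # x = w[:i]
        for j in range(i + 1, n + 1):     # y = w[i:j], nonempty
            x, y, z = w[:i], w[i:j], w[j:]
            if all(in_L(x + y * t + z) for t in range(t_max + 1)):
                return True
    return False

# Candidate constant p = 1: every w in L with |w| >= 1 must be pumpable.
words = [''.join(c) for k in range(1, 5)
         for c in product('ab', repeat=k) if in_L(''.join(c))]
print(all(pumpable(w) for w in words))  # True
```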

... In computational complexity theory, the two fundamental classes of decision problems are (i) problems that can be decided in polynomial time by a deterministic Turing machine (the class P) and (ii) problems that can be decided in polynomial time by a non-deterministic Turing machine, equivalently, verified in polynomial time (the class NP). A third class of problems within NP is the NP-complete (NPC) problems, whose complexity is related to that of the entire NP class [13]. A decision problem π* is NP-complete iff (i) π* ∈ NP and (ii) ∀π ∈ NP, π can be polynomially transformed (≤_p) to π*. ...

... The verification procedure can be written as:

    for j = 1 to n do
        sum ← 0
        for i = 1 to m do
            if s_{i,j} > a_{i,j} then
                return NO
            else
                sum ← sum + s_{i,j}
            end if
        end for
        if sum ≠ l_j then
            return NO
        end if
    end for
    if F(S) is bounded by the given value then
        return YES
    else
        return NO
    end if

If the objective value is unbounded or unfeasible, the answer will be no for both problems, because the transformed version guarantees the same computed value as the original. ...
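The verification routine above can be rendered as a runnable function. The variable roles follow the pseudocode (s[i][j] units of item j bought in shop i, a[i][j] available units, l[j] required units of item j); the concrete data, the requirement that demand be met exactly, and the objective comparison are our reading of the snippet:

```python
# Polynomial-time verifier for a purchasing plan S, following the
# pseudocode: check availability, per-item totals, and the objective bound.

def verify(s, a, l, objective_value, bound):
    m, n = len(s), len(s[0])
    for j in range(n):
        total = 0
        for i in range(m):
            if s[i][j] > a[i][j]:
                return "NO"        # buying more units than the shop offers
            total += s[i][j]
        if total != l[j]:
            return "NO"            # demand for item j not met exactly
    return "YES" if objective_value <= bound else "NO"

s = [[1, 0], [1, 2]]   # shop i buys s[i][j] units of item j
a = [[1, 1], [2, 2]]   # availability
l = [2, 2]             # required units per item
print(verify(s, a, l, objective_value=10, bound=12))  # YES
```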

In this work, we investigate the variant of the Internet Shopping Optimization Problem (ISHOP) that considers different item units. This variant is more challenging than the original problem. The original ISHOP is already known to be a combinatorial NP-hard problem. In this work, we present a formal proof that the ISHOP variant considering different item units belongs to the NP-hard complexity class. The abovementioned variant is familiar to companies and consumers who need to purchase more than one unit of a specific product to satisfy their requirements. For example, companies buy different quantities of construction materials, medical equipment, office supplies, or chemical components. We propose two new evolutionary operators (crossover and mutation) and an unfeasible solution repair method for the studied ISHOP variant. Furthermore, we produce a new benchmark of 15 synthetic instances where item prices follow a random uniform distribution. Finally, to assess our evolutionary operators, we implemented two Evolutionary Algorithms, a Genetic Algorithm (GA) and a Cellular Genetic Algorithm (CGA), and conducted an experimental evaluation against a Water Cycle Algorithm (WCA) from the state of the art. Experimental results show that our proposed GA performs well with statistical significance.

... Finite automata are seen as the most basic computational model, and they have widely been investigated as language recognizer (e.g., Sipser (2013)). A deterministic finite state automaton (DFA) divides the strings generated on a specified alphabet into two sets: the accepted strings and the rejected strings. ...

... Obviously, the problem is in NL. To prove the hardness, we make a reduction from the ST-connectivity problem that is NL-complete [21]. Given an instance (H, s, t) of the ST-connectivity problem, where H is a directed graph and s, t are two nodes in H, it asks whether there is a directed path from s to t. ...
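The ST-connectivity problem used in the reduction can, of course, be decided in polynomial time by a simple breadth-first search; the NL-hardness argument needs only the problem itself, not this solver. The graph below is a toy example:

```python
# ST-connectivity: is there a directed path from s to t in H?

from collections import deque

def st_connected(adj, s, t):
    seen, queue = {s}, deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            return True
        for v in adj.get(u, []):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return False

H = {'s': ['a'], 'a': ['b'], 'b': ['t'], 'c': ['s']}
print(st_connected(H, 's', 't'))  # True
print(st_connected(H, 't', 's'))  # False
```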

A quantum circuit must be preprocessed before being implemented on NISQ devices due to the connectivity constraint. Quantum circuit mapping (QCM) transforms the circuit into an equivalent one that is compliant with the NISQ device's architecture constraint by adding SWAP gates. The QCM problem asks for the minimal number of auxiliary SWAP gates and is NP-complete. This paper studies the complexity of QCM with fixed parameters. We give an exact algorithm for QCM and show that the algorithm runs in polynomial time if the NISQ device's architecture is fixed. If the number of qubits of the quantum circuit is fixed, we show that the QCM problem is NL-complete by a reduction from the undirected shortest path problem. Moreover, QCM is W[1]-hard when parameterized by the number of qubits of the quantum circuit; we prove this by a reduction from the clique problem. Finally, taking the depth of the quantum circuits and the coupling graphs as parameters, we show that the QCM problem remains NP-complete even for shallow quantum circuits and planar, bipartite, degree-bounded coupling graphs.

... We suppose the reader is familiar with the basics of complexity theory (see e.g. [34]). Complexity theory is a finer theory whose aim is to discuss the resources, such as time or space, that are needed to compute a given function. In the context of functions over the integers, similar to the framework of the previous discussion, the complexity of a function is measured in terms of the length (written in binary) of its arguments. ...

This paper studies the expressive and computational power of discrete Ordinary Differential Equations (ODEs), a.k.a. (Ordinary) Difference Equations. It presents a new framework using these equations as a central tool for computation and algorithm design. We present the general theory of discrete ODEs for computation theory, we illustrate it with various examples of algorithms, and we provide several implicit characterizations of complexity and computability classes. The proposed framework presents an original point of view on complexity and computation classes. It unifies several constructions that have been proposed for characterizing these classes, including classical approaches in implicit complexity using restricted recursion schemes, as well as recent characterizations of computability and complexity by classes of continuous ordinary differential equations. It also helps in understanding the relationships between analog computations and classical discrete models of computation theory. From a more technical point of view, this paper points out the fundamental role of linear (discrete) ODEs and of classical ODE tools, such as changes of variables, for capturing computability and complexity measures, and as tools for programming many algorithms.
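A discrete ODE determines a function from its "derivative" f(x+1) − f(x) and an initial value, so iterating the equation is itself a computation. The linear instance below, f(x+1) − f(x) = f(x) with f(0) = 1, computes f(x) = 2^x; it is our illustrative example of the idea, not one taken from the paper:

```python
# Solve a discrete ODE f(x+1) = f(x) + delta(x, f(x)) by iteration.

def solve_discrete_ode(delta, f0, steps):
    f = f0
    for x in range(steps):
        f = f + delta(x, f)
    return f

# f(x+1) - f(x) = f(x), f(0) = 1  =>  f(x) = 2^x
print(solve_discrete_ode(lambda x, f: f, 1, 10))  # 1024
```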

... We finally make sure that r X is effectively closed as in the following definition. In what follows, we use the definition of Turing machine as in Sipser [42,Definition 3.3]. ...

We construct a short-range potential on a bidimensional full shift and finite alphabet that exhibits zero-temperature chaotic behaviour as introduced by van Enter and Ruszel: a phenomenon where there exists a sequence of temperatures converging to zero for which the whole set of equilibrium measures at these temperatures oscillates between two ground states. Brémont's work shows that the phenomenon of non-convergence does not exist for short-range potentials in dimension one; Leplaideur obtained a different proof of the same fact. Chazottes and Hochman provided the first example of non-convergence in higher dimensions d ≥ 3; we extend their result to d = 2 and highlight the importance of two estimates of a recursive nature that are crucial for this proof: the relative complexity and the reconstruction function of an extension.

... Such a partition may indicate a connection between the identified clusters and the time (NTIME) and space (NSPACE) complexity classes distinguished in the theory of computation (Sipser, 2006). A correspondence between our groups of tasks and complexity classes could make it possible to study the procedural side of problem solving by applying models from the theory of computation (Хопкрофт, 2014). ...

It is known that one of the most capacious ways to describe the carriers of a culture is to describe it in terms of analytic–holistic mentality (Александров, Кирдина, 2012). Differences in mentality types, which appear both between and within a single culture (Апанович и др., 2017), are reflected, among other things, in different ways of solving tasks (Тищенко и др., 2017). Moreover, it has been shown that not only subjects but also tasks can be analytic or holistic. On the basis of content criteria, we have described special groups of tasks in which successful decision making requires either analytic or holistic thinking (ibid.).

Different groups of tasks can be distinguished by considering a task's attributes: subject matter, syntax, and the set of solution methods (Фридман, 2001). A comprehensive consideration of these attributes and their relations to one another is what we understand as the definition of a task. One formal characteristic is the entropy of the task text (Shannon, 1951). The entropy of the task text, corresponding to the "syntax" attribute, is considered by us as a measure of the disorder and complexity of the text's structure. In the present work we hypothesized that (a) analytic and holistic tasks constructed according to content criteria differ from each other in entropy measures; and (b) entropy measures are related to characteristics of solving these tasks.
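The character-level Shannon entropy of a task text (Shannon, 1951), used here as a formal measure of text structure, can be estimated from empirical letter frequencies; a minimal sketch:

```python
# Empirical character-level Shannon entropy: H = -sum p(c) log2 p(c),
# with p(c) estimated from the text's letter frequencies.

from collections import Counter
from math import log2

def text_entropy(text):
    counts = Counter(text)
    n = len(text)
    return -sum(c / n * log2(c / n) for c in counts.values())

print(round(text_entropy("abab"), 3))  # 1.0: two equiprobable symbols
```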

... In computability theory [8], a cellular automaton can be Turing complete if it can be used to simulate any single-taped Turing machine. The term is named after the computer scientist and mathematician Alan Turing. ...
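A classic concrete example of a Turing-complete cellular automaton is the elementary CA Rule 110 (Cook's result). The single-step simulator below is an illustration of the notion, not part of the paper's blockchain construction:

```python
# One synchronous step of elementary cellular automaton Rule 110,
# with periodic boundary conditions.

RULE110 = {(1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
           (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}

def step(cells):
    n = len(cells)
    return [RULE110[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

row = [0, 0, 0, 1, 0, 0, 0]
for _ in range(3):
    print(row)
    row = step(row)
```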

Genetic Algorithms are the elementary particles of a brand-new world of computing. In recent years, technology has evolved exponentially in terms of Hardware, but not in terms of Software. Genetic Algorithms (GAs) are already filling this gap in fields like Big Data mining, Protein Folding predictions, Finance, etc. In this paper we present the possibility of using an "Unbounded Single Taped Turing" medium like a Blockchain to store a Genetic Algorithm that will be able to provide Turing Complete results on any mathematically given problem.

... Then a word problem (regardless of subject area) is not some specific kind of task but a statement fixed in printed form, whose varying degree of formality changes the characteristics of the text. In connection with this feature, it should be noted that the existing line of research on word problems, represented in the field of formal systems (Hinsley et al., 1977; Sipser, 2006), is realized as the problem of the degree of formality of tasks. Strictly speaking, the task is presented not in mathematical but in natural language. ...

We substantiate and verify the assumption that invariants exist in the structure of the text for groups of analytical and holistic tasks, which can ensure their specificity, impose special requirements on the solution process, and introduce restrictions on the existence of intermediate groups. We compared the formal characteristics of the text with entropy measures, and the empirical distribution of letter frequencies in the task text with the theoretical distribution. On the basis of the obtained results, we conclude that two classes of tasks can be clearly identified which have fundamentally different semantic and syntactic characteristics but have the potential to be transformed into one another.

... There are efficient algorithms for deciding whether a bipartite graph has a perfect matching [21]. Furthermore, by standard results that convert a Turing machine deciding an algorithmic problem on inputs of size n in time t(n) into threshold circuits with O(t(n)²) gates solving the problem on every input of length n [30, 33], it follows that there is a network of size polynomial in n that decides, given the incidence matrix of a graph, whether it has a perfect matching. A seminal result by Razborov [29] shows that the monotone complexity of the matching function is not polynomial. ...
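One of the standard efficient algorithms alluded to above is augmenting-path bipartite matching; the graph data below is an illustrative example:

```python
# Decide whether a bipartite graph has a perfect matching using the
# classic augmenting-path (Hungarian-style) algorithm.

def has_perfect_matching(adj, n_left, n_right):
    if n_left != n_right:
        return False
    match = {}                      # right vertex -> matched left vertex

    def augment(u, visited):
        for v in adj[u]:
            if v not in visited:
                visited.add(v)
                if v not in match or augment(match[v], visited):
                    match[v] = u
                    return True
        return False

    return all(augment(u, set()) for u in range(n_left))

adj = {0: [0, 1], 1: [0], 2: [1, 2]}   # left vertex -> right neighbours
print(has_perfect_matching(adj, 3, 3))  # True
```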

Monotone functions and data sets arise in a variety of applications. We study the interpolation problem for monotone data sets: The input is a monotone data set with $n$ points, and the goal is to find a size and depth efficient monotone neural network, with non negative parameters and threshold units, that interpolates the data set. We show that there are monotone data sets that cannot be interpolated by a monotone network of depth $2$. On the other hand, we prove that for every monotone data set with $n$ points in $\mathbb{R}^d$, there exists an interpolating monotone network of depth $4$ and size $O(nd)$. Our interpolation result implies that every monotone function over $[0,1]^d$ can be approximated arbitrarily well by a depth-4 monotone network, improving the previous best-known construction of depth $d+1$. Finally, building on results from Boolean circuit complexity, we show that the inductive bias of having positive parameters can lead to a super-polynomial blow-up in the number of neurons when approximating monotone functions.

... We assume familiarity with the basics of Turing machines, circuits, and automata at the level of standard textbooks, e.g. [3,17]. In particular, it will be helpful to have some fluency with AuxPDAs (but we will recall the definition in Section 5). ...

We consider the cons-free programming language of Neil Jones, a simple pure functional language, which decides exactly the polynomial-time relations and whose tail recursive fragment decides exactly the logarithmic-space relations. We exhibit a close relationship between the running time of cons-free programs and the running time of logspace-bounded auxiliary pushdown automata. As a consequence, we characterize intermediate classes like NC in terms of resource-bounded cons-free computation. In so doing, we provide the first "machine-free" characterizations of certain complexity classes, like P-uniform NC. Furthermore, we show strong polynomial lower bounds on cons-free running time. Namely, for every polynomial p, we exhibit a relation R ∈ Ptime such that any cons-free program deciding R must take time at least p almost everywhere. Our methods use a "subrecursive version" of Blum complexity theory, and raise the possibility of further applications of this technology to the study of the fine structure of Ptime.

... Syntactic parsing accounts for the computational operation of inferring representations of syntactic structure from sequential inputs (Jurafsky & Martin, 2009; Sipser, 2012). For syntactic parsing to be understood as a model of cognition beyond the purely computational level of description (Marr, 1982), it is necessary to account for how processing is implemented at the ...

While theoretical and empirical insights suggest that the capacity to represent and process complex syntax is crucial in language as well as other domains, it is still unclear whether specific parsing mechanisms are also shared across domains. Focusing on the musical domain, we developed a novel behavioral paradigm to investigate whether a phenomenon of syntactic revision occurs in the processing of tonal melodies under analogous conditions as in language. We present the first proof‐of‐existence for syntactic revision in a set of tonally ambiguous melodies, supporting the relevance of syntactic representations and parsing with language‐like characteristics in a nonlinguistic domain. Furthermore, we find no evidence for a modulatory effect of musical training, suggesting that a general cognitive capacity, rather than explicit knowledge and strategies, may underlie the observed phenomenon in music.

... For a general background on universal algebra, see [10]; on pseudovarieties, see [2]; on computation and complexity theory, see [52]. We also refer the reader to the survey [55] on the finite basis problem for finite semigroups. ...

We exhibit a faithful representation of the stylic monoid of every finite rank as a monoid of upper unitriangular matrices over the tropical semiring. Thus, we show that the stylic monoid of finite rank $n$ generates the pseudovariety $\boldsymbol{\mathcal{J}}_n$, which corresponds to the class of all piecewise testable languages of height $n$, in the framework of Eilenberg's correspondence. From this, we obtain the equational theory of the stylic monoids of finite rank, show that they are finitely based if and only if $n \leq 3$, that the varieties they generate have uncountably many subvarieties for $n \geq 3$, and that their identity checking problem is decidable in linearithmic time. We also establish connections between the stylic monoids and other plactic-like monoids.

... We can model this situation nicely using a finite state machine; see [1] for an introduction to finite state machines. The machine in Fig. 3 computes whether a string of symbols gives a legal solution for k = 4. Any legal solution must start in the state corresponding to the empty set (no rectangles in progress, labelled START) and proceed to trace a path in the state machine, eventually returning to the empty state. ...

In this paper we demonstrate a method for counting the number of solutions to various logic puzzles. Specifically, we remove all of the "clues" from the puzzle which help the solver to a unique solution, and instead start from an empty grid. We then count the number of ways to fill in this empty grid to a valid solution. We fix the number of rows k, vary the number of columns n, and then compute the sequence A_k(n), which gives the number of solutions on an empty grid of size k × n.
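Counting closed walks in a state machine of this kind is a transfer-matrix computation: the number of length-n paths from START back to START is the (START, START) entry of the n-th power of the adjacency matrix. The 3-state machine below is a toy example, not the k = 4 machine from the paper:

```python
# Transfer-matrix counting: closed walks of length n from `start` back
# to `start` equal (M^n)[start][start] for adjacency matrix M.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def count_solutions(M, n, start=0):
    P = M
    for _ in range(n - 1):
        P = matmul(P, M)
    return P[start][start]

M = [[1, 1, 0],      # toy state machine: edges 0->0, 0->1,
     [0, 1, 1],      #                    1->1, 1->2,
     [1, 0, 1]]      #                    2->0, 2->2
print([count_solutions(M, n) for n in range(1, 6)])
```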

... Because of their non-linear combinatorial essence, and the size of their dynamic phase space, sustainability problems are also difficult, computationally hard, or intractable (no polynomial-time algorithm exists to solve this kind of problem), meaning that exploring the full combinatorial size of the decision variables' phase space to solve them exactly (e.g., finding the global optimum) would require impractically large amounts of computational power and time, and/or that global optimal solutions may not exist at all [92][93][94]. Hence, the non-dominated optimal solutions for intractable multi-objective sustainability problems are not demonstrably globally optimal, but only good-enough, superior, locally optimal, or efficient [95,96]. ...

The strong and functional couplings among ecological, economic, social, and technological processes explain the complexification of human-made systems, and phenomena such as globalization, climate change, the increased urbanization and inequality of human societies, the power of information, and the COVID-19 syndemic. Among complexification's features are non-decomposability, asynchronous behavior, components with many degrees of freedom, increased likelihood of catastrophic events, irreversibility, nonlinear phase spaces with immense combinatorial sizes, and the impossibility of long-term, detailed prediction. Sustainability for complex systems implies enough efficiency to explore and exploit their dynamic phase spaces and enough flexibility to coevolve with their environments. This, in turn, means solving intractable nonlinear semi-structured dynamic multi-objective optimization problems, with conflicting, incommensurable, non-cooperative objectives and purposes, under dynamic uncertainty, restricted access to materials, energy, and information, and a given time horizon. Given the high stakes; the need for effective, efficient, diverse solutions; their local and global, and present and future effects; and their unforeseen short-, medium-, and long-term impacts, achieving sustainable complex systems implies the need for Sustainability-designed Universal Intelligent Agents (SUIAs). The proposed philosophical and technological SUIAs will be heuristic devices for harnessing the strong functional coupling between human, artificial, and nonhuman biological intelligence in a non-zero-sum game to achieve sustainability.

... A computational problem, or computation, is defined as the task of calculating a mathematical function f(x) of some input data x = x₁, x₂, ..., xₙ. The computational problem is solved by devising an algorithm, a set of logical and mathematical operations that describes how a specific function can be calculated for different input data [Sipser, 2013]. ...
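As a minimal illustration of this definition, Euclid's algorithm is a finite recipe of logical and arithmetic operations that computes the function f(a, b) = gcd(a, b) for any pair of inputs:

```python
def gcd(a, b):
    """Euclid's algorithm: repeatedly replace (a, b) by (b, a mod b)
    until the remainder is zero; the survivor is the gcd."""
    while b:
        a, b = b, a % b
    return a
```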

This thesis treats kernel PCA and the Nyström method. We present a novel incremental algorithm for calculation of kernel PCA, which we extend to incremental calculation of the Nyström approximation. We suggest a new data-dependent method to select the number of data points to include in the Nyström subset, and create a statistical hypothesis test for the same purpose. We further present a cross-validation procedure for kernel PCA to select the number of principal components to retain. Finally, we derive kernel PCA with the Nyström method in line with linear PCA and study its statistical accuracy through a confidence bound.

... The Church-Turing thesis states that the intuitive notion of algorithms (or programs) is equivalent to that of a Turing Machine [7]. The latter is an abstract device that receives an input, performs some computation and produces an output. ...
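Such an abstract device is easy to simulate directly; the sketch below (state names and the transition encoding are our own illustrative choices) runs a one-tape machine given as a transition table and reports whether it reaches its accepting state.

```python
def run_tm(transitions, accept, input_str, blank="_", max_steps=10_000):
    """Simulate a one-tape Turing machine.  `transitions` maps
    (state, read_symbol) -> (new_state, written_symbol, move in {-1, +1})."""
    tape = dict(enumerate(input_str))  # sparse tape; unset cells are blank
    state, head = "q0", 0
    for _ in range(max_steps):
        if state == accept:
            return True
        key = (state, tape.get(head, blank))
        if key not in transitions:  # no applicable rule: halt and reject
            return False
        state, tape[head], move = transitions[key]
        head += move
    return False

# Example machine: accepts strings of a's of even length.
even_as = {
    ("q0", "a"): ("q1", "a", +1),
    ("q1", "a"): ("q0", "a", +1),
    ("q0", "_"): ("acc", "_", +1),
}
```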

Compositionality is a key property for dealing with complexity, which has been studied from many points of view in diverse fields. Particularly, the composition of individual computations (or programs) has been widely studied almost since the inception of computer science. Unlike existing composition theories, this paper presents an algebraic model not for composing individual programs but for inductively composing spaces of sequential and/or parallel constructs. We particularly describe the semantics of the proposed model and present an abstract example to demonstrate its application.

... According to this property, based on the idea of "prefer one" [25], our generation takes the arity k and order n as the input and outputs all the sequences of B(k, n) and its substring→offset mapping table dBMap. To avoid problems such as overflow due to recursion, our generation introduces a data stack dBStack, working as a Finite-state Automaton (FA) [26], shown in Figure 7. The FA is a five-tuple (Q, Σ, δ, q₀, F): 1. ...
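For concreteness, here is the classic recursive FKM construction of a k-ary de Bruijn sequence of order n, together with a substring-to-offset table in the spirit of the snippet's dBMap (the table layout is our assumption; note also that the snippet's generator replaces recursion with an explicit stack, which this sketch does not do):

```python
def de_bruijn(k, n):
    """k-ary de Bruijn sequence of order n via Lyndon-word
    concatenation (the FKM algorithm)."""
    a = [0] * (k * n)
    seq = []
    def db(t, p):
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)
    db(1, 1)
    return seq

def substring_offsets(seq, n):
    """dBMap-style table: each length-n window (taken cyclically)
    maps to the unique offset where it occurs."""
    ext = seq + seq[:n - 1]
    return {tuple(ext[i:i + n]): i for i in range(len(seq))}
```

For example, de_bruijn(2, 3) yields the binary sequence 00010111, in which every 3-bit string occurs exactly once cyclically.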

Fuzzing is one of the most successful software testing techniques used to discover vulnerabilities in programs. Without seeds that fit the input format, existing runtime dependency recognition strategies are limited by incompleteness and high overhead. In this paper, for structured input applications, we propose a fast format-aware fuzzing approach to recognize dependencies from the specified input to the corresponding comparison instruction. We divided the dependencies into Input-to-State (I2S) and indirect dependencies. Our approach has the following advantages compared to existing works: (1) recognizing I2S dependencies more completely and swiftly using the input based on the de Bruijn sequence and its mapping structure; (2) obtaining indirect dependencies with a light dependency existence analysis on the input fragments. We implemented a fast format-aware fuzzing prototype, FFAFuzz, based on our method and evaluated FFAFuzz in real-world structured input applications. The evaluation results showed that FFAFuzz reduced the average time overhead by 76.49% while identifying more completely compared with Redqueen and by 89.10% compared with WEIZZ. FFAFuzz also achieved higher code coverage by 14.53% on average compared to WEIZZ.

... One solution to dramatically improve the scalability of the micro-robot swarm control is to make the robots respond to a temporal sequence of (a small number of) voltage levels rather than to the voltage directly. Finite State Machines (FSMs) can accept a set of input sequences [23] (sequences of control signal levels). Previously, [16] proposed on-board MEMS Physical FSM (PFSM) that upon the acceptance of a unique control signal sequence causes the behavioral change of a microrobot; they can be constructed from several basic modules that are combined together and thus fabricated efficiently. ...

An important problem in microrobotics is how to control a large group of microrobots with a global control signal. This paper focuses on controlling a large-scale swarm of MicroStressBots with on-board physical finite-state machines. We introduce the concept of group-based control, which makes it possible to scale up the swarm size while reducing the complexity both of robot fabrication as well as swarm control. We prove that the group-based control system is locally accessible in terms of the robot positions. We further hypothesize based on extensive simulations that the system is globally controllable. A nonlinear optimization strategy is proposed to control the swarm by minimizing control effort. We also propose a probabilistically complete collision avoidance method that is suitable for online use. The paper concludes with an evaluation of the proposed methods in simulations.
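The idea of an on-board FSM that triggers a behavioral change only upon a unique control-signal sequence can be sketched as a toy software model (names and the reset policy are illustrative only, not the MEMS physical FSM of the paper):

```python
class SequenceFSM:
    """Toy finite-state machine that accepts exactly one 'unlock'
    sequence of control-signal levels."""
    def __init__(self, unlock):
        self.unlock = tuple(unlock)
        self.state = 0  # number of matched prefix symbols so far
    def step(self, signal):
        if signal == self.unlock[self.state]:
            self.state += 1
        else:
            # Restart the match (overlapping prefixes ignored for simplicity).
            self.state = 1 if signal == self.unlock[0] else 0
        if self.state == len(self.unlock):
            self.state = 0
            return True  # sequence accepted: trigger the behavior change
        return False
```

Each robot in a group would carry an FSM with a different unlock sequence, so one global signal stream addresses robots selectively.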

... Parsing depends on breaking down the language of strings according to a grammar and the language's syntax. A grammar defines the acceptable "words" of the language, and a syntax describes how the words are put together, just as in human languages such as English (Sipser, 2006). The TRNSYS grammar and syntax contain redundant information and are difficult to parse, compared to, say, the EnergyPlus modeling language as presented in listing 2, also for a pump. ...

... Combinatorial optimization seeks to find an optimal solution from a large set of discrete solutions, and has ubiquitous application in many areas of science and technology [1,2]. Many such problems are known to be computationally hard for classical algorithms and fall into the NP-hard complexity class [3]. The exploration of quantum speedup for solving these hard problems is one of the major topics in modern quantum information science. ...

Programmable quantum systems based on Rydberg atom arrays have recently been used for hardware-efficient tests of quantum optimization algorithms [Ebadi et al., Science, 376, 1209 (2022)] with hundreds of qubits. In particular, the maximum independent set problem on the so-called unit-disk graphs, was shown to be efficiently encodable in such a quantum system. Here, we extend the classes of problems that can be efficiently encoded in Rydberg arrays by constructing explicit mappings from the original computation problems to maximum weighted independent set problems on unit-disk graphs, with at most a quadratic overhead in the number of qubits. We analyze several examples, including: maximum weighted independent set on graphs with arbitrary connectivity, quadratic unconstrained binary optimization problems with arbitrary or restricted connectivity, and integer factorization. Numerical simulations on small system sizes indicate that the adiabatic time scale for solving the mapped problems is strongly correlated with that of the original problems. Our work provides a blueprint for using Rydberg atom arrays to solve a wide range of combinatorial optimization problems with arbitrary connectivity, beyond the restrictions imposed by the hardware geometry.

... A natural direction of future work is to lift our techniques to tackle learning from positive examples for other finite state machines, e.g., non-deterministic finite automata (Sipser, 1997), and more expressive temporal logics, e.g., linear dynamic logic (LDL) (Giacomo and Vardi, 2013). ...

We consider the problem of explaining the temporal behavior of black-box systems using human-interpretable models. To this end, based on recent research trends, we rely on the fundamental yet interpretable models of deterministic finite automata (DFAs) and linear temporal logic (LTL) formulas. In contrast to most existing works for learning DFAs and LTL formulas, we rely on only positive examples. Our motivation is that negative examples are generally difficult to observe, in particular, from black-box systems. To learn meaningful models from positive examples only, we design algorithms that rely on conciseness and language minimality of models as regularizers. To this end, our algorithms adopt two approaches: a symbolic and a counterexample-guided one. While the symbolic approach exploits an efficient encoding of language minimality as a constraint satisfaction problem, the counterexample-guided one relies on generating suitable negative examples to prune the search. Both the approaches provide us with effective algorithms with theoretical guarantees on the learned models. To assess the effectiveness of our algorithms, we evaluate all of them on synthetic data.

... We would like to comment that this remark can be reinterpreted as a form of the undecidability defined within the frame of Gödel's incompleteness theorem [66]. To prove this statement, it has been shown that the spectral gap problem can be encoded in the halting problem [67] for Turing machines [22,23]. Within this approach, it follows that the spectral gap problem is at least as hard as the halting one. ...

Recently, great attention has been devoted to the problem of the undecidability of specific questions in quantum mechanics. In this context, it has been shown that the problem of the existence of a spectral gap, i.e., the energy difference between the ground state and the first excited state, is algorithmically undecidable. Using this result, we prove herein that the existence of a quantum phase transition, as inferred from specific microscopic approaches, is an undecidable problem, too. Indeed, some methods usually adopted to study quantum phase transitions rely on the existence of a spectral gap. Since there exists no algorithm to determine whether an arbitrary quantum model is gapped or gapless, and there exist models for which the presence or absence of a spectral gap is independent of the axioms of mathematics, it follows that the existence of quantum phase transitions is an undecidable problem.

... We assume familiarity with basic notions in complexity theory (cf. Sipser (1997)) and use the complexity classes P, NP, coNP, and Σ₂ᴾ. For a set S, we write |S| for its cardinality. ...

Logic-based argumentation is a well-established formalism modeling nonmonotonic reasoning. It has been playing a major role in AI for decades now. Informally, a set of formulas is the support for a given claim if it is consistent, subset-minimal, and implies the claim. In such a case, the pair of the support and the claim together is called an argument. In this paper, we study the propositional variants of the following three computational tasks studied in argumentation: ARG (does a support exist for a given claim with respect to a given set of formulas), ARG-Check (is a given set a support for a given claim), and ARG-Rel (like ARG, but additionally requiring a given formula to be contained in the support). ARG-Check is complete for the complexity class DP, and the other two problems are known to be complete for the second level of the polynomial hierarchy and, accordingly, are highly intractable. Analyzing the reason for this intractability, we perform a two-dimensional classification: first, we consider all possible propositional fragments of the problem within Schaefer's framework, and then study different parameterizations for each fragment. We identify a list of reasonable structural parameters (size of the claim, support, knowledge base) that are connected to the aforementioned decision problems. Eventually, we thoroughly draw a fine border of parameterized intractability for each of the problems, showing where the problems are fixed-parameter tractable and where exactly this stops. Surprisingly, several cases are of very high intractability (paraNP and beyond).

Automatic sequences are sequences over a finite alphabet generated by a finite-state machine. This book presents a novel viewpoint on automatic sequences, and more generally on combinatorics on words, by introducing a decision method through which many new results in combinatorics and number theory can be automatically proved or disproved with little or no human intervention. This approach to proving theorems is extremely powerful, allowing long and error-prone case-based arguments to be replaced by simple computations. Readers will learn how to phrase their desired results in first-order logic, using free software to automate the computation process. Results that normally require multipage proofs can emerge in milliseconds, allowing users to engage with mathematical questions that would otherwise be difficult to solve. With more than 150 exercises included, this text is an ideal resource for researchers, graduate students, and advanced undergraduates studying combinatorics, sequences, and number theory.

This paper shows how the use of Structural Operational Semantics (SOS) in the style popularized by the process-algebra community can lead to a succinct and pedagogically satisfying construction for building finite automata from regular expressions. Techniques for converting regular expressions into finite automata have been known for decades, and form the basis for the proofs of one direction of Kleene’s Theorem. The purpose of the construction documented in this paper is, on the one hand, to show students how small automata can be constructed, without the need for empty transitions, and on the other hand to show how the construction method admits closure proofs of regular languages with respect to many operators beyond the standard ones used in regular expressions. These results point to an additional benefit of the process-algebraic approach: besides providing fundamental insights into the nature of concurrent computation, it also can shed new light on long-standing, well-known constructions in automata theory.
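A different classical route from regular expressions to automaton behavior that likewise avoids empty transitions is Brzozowski's derivatives; the sketch below (our own illustration, not the SOS construction of the paper) matches a string by repeatedly differentiating the expression with respect to each input symbol.

```python
# Regular expressions as nested tuples; matching via Brzozowski derivatives.
EMPTY, EPS = ("empty",), ("eps",)

def nullable(r):
    """Does r accept the empty string?"""
    tag = r[0]
    if tag in ("empty", "sym"): return False
    if tag == "eps": return True
    if tag == "alt": return nullable(r[1]) or nullable(r[2])
    if tag == "cat": return nullable(r[1]) and nullable(r[2])
    if tag == "star": return True

def deriv(r, c):
    """Derivative of r with respect to symbol c: the language of
    suffixes w such that c·w is in r."""
    tag = r[0]
    if tag in ("empty", "eps"): return EMPTY
    if tag == "sym": return EPS if r[1] == c else EMPTY
    if tag == "alt": return ("alt", deriv(r[1], c), deriv(r[2], c))
    if tag == "cat":
        d = ("cat", deriv(r[1], c), r[2])
        return ("alt", d, deriv(r[2], c)) if nullable(r[1]) else d
    if tag == "star": return ("cat", deriv(r[1], c), r)

def matches(r, s):
    for c in s:
        r = deriv(r, c)
    return nullable(r)
```

Taking the distinct derivatives of an expression as states yields a DFA directly, with no ε-transitions at any point.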

Monitoring programs for finite-state properties is challenging due to the high memory and execution-time overheads it incurs. Skipping or naturally losing some events can reduce both overheads, but leads to uncertainty about the current monitor state. In this work, we present a theoretical framework to model traces that carry partial information (like the number of events lost), and provide a construction for a monitor capable of monitoring these partial traces without producing false positives while reporting violations. The constructed monitor optimally reports as many violations as possible for the partial traces. We model several loss types of practical relevance using our framework.

Predicting the secondary structure of RNA sequences has proved to be quite a challenging research field in bioinformatics. Predicting structures that encapsulate the pseudoknot motif highlights why it is an NP-complete problem. In this setting, researchers focus on accurately predicting this motif and its variations by leveraging heuristic methodologies that converge while decreasing the prediction time. Even an accurate heuristic adds little value when it involves an extended execution period, particularly for lengthy sequences. In this work, we introduce a novel, time-efficient method that employs grammar attributes, parallel execution, and pruning techniques to create an efficient prediction tool that is helpful for biologists, bioengineers, and biomedical researchers. This version of the proposed framework features a pruning technique to reduce the search space of the grammar. It eliminates trees derived from corner-case conditions to reduce execution time by 33% with respect to the grammar-based methodology and 43% with respect to the brute-force approach, without sacrificing the initial accuracy percentage.

Imperative languages like Java, C++, and Python are mostly used for the implementation of Genetic Algorithms (GA). Other programming paradigms are far from being an object of study. The paper explores the advantages of a new non-mainstream programming paradigm, with declarative and nondeterministic features, in the implementation of GA. Control Network Programming (CNP) is a visual declarative style of programming in which the program is a set of recursive graphs that are graphically visualized and developed. The paper demonstrates how a GA can be implemented in an automatic, i.e., non-procedural (declarative) way, using the built-in CNP inference mechanism and tools for its control. CNP programs are easy to develop and comprehend; thus, CNP can be considered a convenient programming paradigm for efficient teaching and learning of nondeterministic, heuristic, and stochastic algorithms, and in particular GA. The outcomes of using CNP in delivering a course on Advanced Algorithm Design are shown and analyzed, and they strongly support the positive results in teaching when CNP is applied.

We present a Python library for trace analysis named PyContract. PyContract is a shallow internal DSL, in contrast to many trace analysis tools that implement external or deep internal DSLs. The library has been used in a project for analysis of logs from NASA’s Europa Clipper mission. We describe our design choices, explain the API via examples, and present an experiment comparing PyContract against other state-of-the-art tools from the research and industrial communities.

Quantum finite automata (QFA) are basic computational devices that make binary decisions using quantum operations. They are known to be exponentially memory efficient compared to their classical counterparts. Here, we demonstrate an experimental implementation of multi-qubit QFAs using the orbital angular momentum (OAM) of single photons. We implement different high-dimensional QFAs encoded on a single photon, where multiple qubits operate in parallel without the need for complicated multi-partite operations. Using two to eight OAM quantum states to implement up to four parallel qubits, we show that a high-dimensional QFA is able to detect the prime numbers 5 and 11 while outperforming classical finite automata in terms of the required memory. Our work benefits from the ease of encoding, manipulating, and deciphering multi-qubit states encoded in the OAM degree of freedom of single photons, demonstrating the advantages structured photons provide for complex quantum information tasks.
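The flavor of such memory-efficient QFAs can be conveyed by the textbook single-qubit measure-once construction for divisibility by p (an illustration of the general idea, not the OAM experiment itself): each input symbol rotates the qubit by π/p, and the machine accepts with the probability of measuring the initial state at the end.

```python
import math

def moqfa_accept_prob(word_len, p):
    """Acceptance probability of a single-qubit measure-once QFA for the
    unary language { a^n : p divides n }: each symbol rotates the qubit
    by pi/p; accepting means measuring the initial basis state |0>."""
    angle = word_len * math.pi / p
    return math.cos(angle) ** 2
```

Words whose length is a multiple of p are accepted with probability 1, all others with probability strictly below 1, using a single qubit where a classical automaton needs p states.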

The emergence of big data in today's world leads to new challenges for sorting strategies to analyze the data in a better way. In most analysis techniques, sorting is an implicit part of the technique used. The availability of huge data sets has changed the way data is analyzed across industries. Healthcare is one of the notable areas where data analytics is making big changes. An efficient analysis has the potential to reduce costs of treatment and improve the quality of life in general. Healthcare industries are collecting massive amounts of data and looking for the best strategies to use these numbers. This research proposes a novel non-comparison-based approach to sorting large data that can further be utilized by any big data analytical technique for various analyses.
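Counting sort, the classic member of the non-comparison family this work builds on, illustrates why such sorts can beat the O(n log n) comparison lower bound for bounded integer keys (this is the textbook algorithm, not the paper's novel method):

```python
def counting_sort(values, max_value):
    """Non-comparison sort in O(n + max_value) time for non-negative
    integer keys: tally each key, then emit keys in tally order."""
    counts = [0] * (max_value + 1)
    for v in values:
        counts[v] += 1
    out = []
    for v, c in enumerate(counts):
        out.extend([v] * c)
    return out
```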

We introduce two versions of Presburger automata with the Büchi acceptance condition, working over infinite, finitely branching trees. These automata, in addition to the classical ones, allow nodes to check linear inequalities over the labels of their children. We establish tight NP and ExpTime bounds on the complexity of the non-emptiness problem for the presented machines. We demonstrate the usefulness of our automata models by polynomially encoding the two-variable guarded fragment extended with Presburger constraints, improving the existing triply-exponential upper bound to a single exponential.

We present the stellar resolution, a "flexible" tile system based on Robinson's first-order resolution. After establishing formal definitions and basic properties of the stellar resolution, we show its Turing-completeness and to illustrate the model, we exhibit how it naturally represents computation with Horn clauses and automata as well as nondeterministic tiling constructions used in DNA computing. In the second and main part, by using the stellar resolution, we formalise and extend ideas of a new alternative to proof-net theory sketched by Girard in his transcendental syntax programme. In particular, we encode both cut-elimination and logical correctness for the multiplicative fragment of linear logic (MLL). We finally obtain completeness results for both MLL and MLL extended with the so-called MIX rule. By extending the ideas of Girard's geometry of interaction, this suggests a first step towards a new understanding of the interplay between logic and computation where linear logic is seen as a (constructed) way to format computation.

Software-intensive systems constantly evolve. To prevent software changes from unintentionally introducing costly system defects, it is important to understand their impact to reduce risk. However, it is in practice nearly impossible to foresee the full impact of software changes when dealing with huge industrial systems with many configurations and usage scenarios. To assist developers with change impact analysis we introduce a novel multi-level methodology for behavioral comparison of software-intensive systems. Our fully automated methodology is based on comparing state machine models of software behavior. We combine existing complementary comparison methods into a novel approach, guiding users step by step through relevant differences by gradually zooming in on more and more details. We empirically evaluate our work through a qualitative exploratory field study, showing its practical value using multiple case studies at ASML, a leading company in developing lithography systems. Our method shows great potential for preventing regressions in system behavior for software changes. Keywords: Cyber-Physical Systems, Software Behavior, State Machines, Behavioral Comparison, Change Impact Analysis

Cancers are complex adaptive diseases regulated by the nonlinear feedback systems between genetic instabilities, environmental signals, cellular protein flows, and gene regulatory networks. Understanding the cybernetics of cancer requires the integration of information dynamics across multidimensional spatiotemporal scales, including genetic, transcriptional, metabolic, proteomic, epigenetic, and multi-cellular networks. However, the time-series analysis of these complex networks remains vastly absent in cancer research. With longitudinal screening and time-series analysis of cellular dynamics, universally observed causal patterns pertaining to dynamical systems may self-organize in the signaling or gene expression state-space of cancer-triggering processes. A class of these patterns, strange attractors, may be mathematical biomarkers of cancer progression. The emergence of intracellular chaos and chaotic cell population dynamics remains a new paradigm in systems medicine. As such, chaotic and complex dynamics are discussed as mathematical hallmarks of cancer cell fate dynamics herein. Given the assumption that time-resolved single-cell datasets are made available, a survey of interdisciplinary tools and algorithms from complexity theory is hereby reviewed to investigate critical phenomena and chaotic dynamics in cancer ecosystems. To conclude, the perspective cultivates an intuition for computational systems oncology in terms of nonlinear dynamics, information theory, inverse problems, and complexity. We highlight the limitations we see in the area of statistical machine learning, but also the opportunity of combining it with the symbolic computational power offered by the mathematical tools explored.

The aim of this paper was to perform an analysis of state-of-the-art solutions for permissioned blockchain compliance with the General Data Protection Regulation (GDPR), including the implementation of one of the analyzed methods and our own solution. This paper covers the subject of the GDPR and its impact on already existing blockchain databases to determine the domain of the problem, including the necessity to introduce mutability into the data structure to comply with the “right to be forgotten”. The performed analysis made it possible to discuss current research in technical terms as well as in terms of the regulation itself. In the experimental part, attempts were made to research and implement the Reference-based Tree Structure (RBTS), including performance tests. The proposed solution is efficient and easily reproducible. The deletion of unwanted content is quick and requires consent only from the owner of the personal data, thereby eliminating the dependency on the other blockchain network participants.

The concept of “digital literacy” has been much discussed and variously misunderstood in our society. Owing to digital communication technologies, it is often confused with other literacies and skills necessary for utilizing and evaluating digital information. As information and communication is increasingly produced, accessed, and controlled in digital formats, there is significant need to clarify among “information literacies” what “digital literacy” means and demands. In order to accomplish this, the author reviews what is meant by literacies in human society, examines the nature of the digital as a language, describes genuine digital literacy, and elucidates the sociopolitical importance of the growing digital illiteracy in global citizenry and how this might be addressed.

Demand and capacity management (Bedarfs- und Kapazitätsmanagement, BKM) is an elementary component of automobile manufacturers' supply chain management. The task of BKM is to synchronize the resource demand resulting from expected or already realized market demand with the capacities and restrictions of the supply chain and the production system. A major challenge for BKM lies in the uncertainty and volatility of the requirements arising from product variety. Information technology increasingly and successfully supports the complex BKM processes, with all systems depending on an efficient and holistic product representation. The automotive industry currently faces two significant trends. First, the diversification of the powertrain (especially in the context of e-mobility) causes changes in the physical vehicle architecture. Second, the digitalization of the car (e.g., autonomous driving) introduces new and changed dependencies between components (e.g., the compatibility of hardware and software). These new and changed dependencies must be adequately documented when developing a product representation for automotive BKM. The goal of this dissertation is the development of an efficient and flexible product representation for automobile manufacturers' BKM that captures the logistics-relevant information of digitalized vehicles.
The accelerated changes to the automobile require a more flexible and efficient BKM process. Today, however, the BKM process is not based on all the information it requires. To address this problem, the process properties are identified on the basis of the BKM's process variants. Through a literature review and expert interviews, the requirements and product information from the company's divisions that must flow into the product representation are determined. The product information from the divisions is analyzed and segmented. Requirements and, in particular, product information (digital characteristics) that must be integrated into the product representation for a holistic depiction, owing to the increasing digitalization of the automobile, are extracted. The digital characteristics give rise to new dependencies that must be integrated, since they influence logistics. A further literature review is conducted to select the data structure and the concept for an efficient and flexible product representation for the BKM process.
Based on the results, a graph-based ontology for the efficient and flexible product representation is designed, which captures the logistics-relevant information of digitalized vehicles. The efficient and flexible product representation is prototypically implemented and validated using a real use case from a German automobile manufacturer.

(Non-)deterministic finite automata are one of the simplest models of computation studied in automata theory. Here we study them through the lens of succinct data structures. Towards this goal, we design a data structure for any deterministic automaton D having n states over a σ-letter alphabet Σ using (σ − 1)n log n (1 + o(1)) bits, that determines, given a string x, whether D accepts x in optimal O(|x|) time. We also consider the case when there are N < σn non-failure transitions, and obtain various time-space trade-offs. Here some of our results are better than the recent work of Cotumaccio and Prezza (SODA 2021). We also exhibit a data structure for a non-deterministic automaton N using σn² + n bits that takes O(n²|x|) time for string membership checking. Finally, we also provide time- and space-efficient algorithms for performing several standard operations on the languages accepted by finite automata.
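Independent of the succinct encoding, the O(|x|) membership check itself is one table lookup per input symbol; a plain (non-succinct) sketch, with failure transitions modeled as missing table entries:

```python
def dfa_accepts(delta, start, finals, x):
    """Check DFA membership in O(|x|) time: one lookup per symbol.
    `delta` maps (state, symbol) -> state; a missing entry is a
    failure transition, causing immediate rejection."""
    state = start
    for c in x:
        state = delta.get((state, c))
        if state is None:
            return False
    return state in finals

# Example DFA over {a, b}: accepts strings with an even number of a's.
even_a = {(0, "a"): 1, (0, "b"): 0, (1, "a"): 0, (1, "b"): 1}
```

The succinct structures in the paper achieve the same O(|x|) query time while compressing this transition table close to its information-theoretic minimum.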
