Theoretical Computer Science

Published by Elsevier
Print ISSN: 0304-3975
Publications
In this work the number of occurrences of a fixed non-zero digit in the width-w non-adjacent forms of all elements of a lattice in some region (e.g. a ball) is analysed. As bases, expanding endomorphisms with eigenvalues of the same absolute value are allowed. The main result is applied to numeral systems with an algebraic integer as base; these arise from efficient scalar multiplication methods (Frobenius-and-add methods) in hyperelliptic curve cryptography, and the result is needed for analysing the running time of such algorithms. The counting result itself is an asymptotic formula whose main term coincides with the full block length analysis. Its second-order term exhibits a periodic fluctuation. The proof follows Delange's method.
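For a concrete picture of the expansions being counted, here is a minimal Python sketch (ours, covering only the classical rational-integer case rather than the paper's lattice/algebraic-integer bases; function names are our own) of the width-w non-adjacent form and its non-zero-digit count:

```python
def width_w_naf(n, w=2):
    """Width-w non-adjacent form of a non-negative integer n.

    Digits are zero or odd with absolute value < 2**(w-1), least
    significant first; among any w consecutive digits at most one
    is non-zero.
    """
    digits = []
    while n != 0:
        if n % 2 == 1:
            d = n % 2**w
            if d >= 2**(w - 1):   # fold into the symmetric digit set
                d -= 2**w
            n -= d
        else:
            d = 0
        digits.append(d)
        n //= 2
    return digits

def count_nonzero(digits):
    """Number of occurrences of non-zero digits, the quantity analysed."""
    return sum(1 for d in digits if d != 0)

# 7 = 8 - 1 has 2-NAF digits [-1, 0, 0, 1], i.e. two non-zero digits.
print(width_w_naf(7), count_nonzero(width_w_naf(7)))
```

The sparseness of these expansions (few non-zero digits) is exactly what makes the scalar multiplication methods mentioned above efficient.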
 
This paper shows how synchrony conditions can be added to the purely asynchronous model in a way that avoids any reference to message delays and computing step times, as well as system-wide constraints on execution patterns and network topology. Our Asynchronous Bounded-Cycle (ABC) model just bounds the ratio of the number of forward- and backward-oriented messages in certain ("relevant") cycles in the space-time diagram of an asynchronous execution. We show that clock synchronization and lock-step rounds can be implemented and proved correct in the ABC model, even in the presence of Byzantine failures. Furthermore, we prove that any algorithm working correctly in the partially synchronous Θ-Model also works correctly in the ABC model. In our proof, we first apply a novel method for assigning certain message delays to asynchronous executions, which is based on a variant of Farkas' theorem of linear inequalities and a non-standard cycle space of graphs. Using methods from point-set topology, we then prove that the existence of this delay assignment implies model indistinguishability for time-free safety and liveness properties. We also introduce several weaker variants of the ABC model, and relate our model to the existing partially synchronous system models, in particular, the classic models of Dwork, Lynch and Stockmeyer and the query-response model by Mostefaoui, Mourgaya, and Raynal. Finally, we discuss some aspects of the ABC model's applicability in real systems, in particular, in the context of VLSI Systems-on-Chip.
 
Motivated by multiplication algorithms based on redundant number representations, we study representations of an integer n as a sum n = ∑<sub>k</sub>ε<sub>k</sub>U<sub>k</sub>, where the digits ε<sub>k</sub> are taken from a finite alphabet Σ and (U<sub>k</sub>)<sub>k</sub> is a linear recurrent sequence of Pisot type with U<sub>0</sub> = 1. The most prominent example of a base sequence (U<sub>k</sub>)<sub>k</sub> is the sequence of Fibonacci numbers. We prove that the representations of minimal weight ∑<sub>k</sub>|ε<sub>k</sub>| are recognised by a finite automaton and obtain an asymptotic formula for the average number of representations of minimal weight. Furthermore, we relate the maximal number of representations of a given integer to the joint spectral radius of a certain set of matrices.
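To make the objects concrete, a small Python sketch (our own illustration, not the paper's algorithm) of one such representation: the greedy Zeckendorf expansion over the Fibonacci base sequence U<sub>0</sub> = 1, U<sub>1</sub> = 2, U<sub>k</sub> = U<sub>k−1</sub> + U<sub>k−2</sub>, together with its weight ∑<sub>k</sub>|ε<sub>k</sub>|; the paper's minimal-weight representations range over larger digit alphabets.

```python
def fib_base(n):
    """Base sequence U0 = 1, U1 = 2, U_k = U_{k-1} + U_{k-2}."""
    U = [1, 2]
    while U[-1] <= n:
        U.append(U[-1] + U[-2])
    return U

def zeckendorf(n):
    """Greedy digits eps_k in {0, 1} with n = sum(eps_k * U_k);
    no two adjacent digits are both 1. Returns (digits, base)."""
    U = fib_base(n)
    digits = [0] * len(U)
    for k in range(len(U) - 1, -1, -1):
        if U[k] <= n:
            digits[k] = 1
            n -= U[k]
    return digits, U

def weight(digits):
    return sum(abs(d) for d in digits)

digits, U = zeckendorf(50)        # 50 = 34 + 13 + 3, weight 3
print(digits, weight(digits))
```

Allowing signed digits such as {−1, 0, 1} can reduce the weight further, which is the phenomenon the minimal-weight analysis quantifies.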
 
We survey the work on both discrete and continuous-space probabilistic systems as coalgebras, starting with how probabilistic systems are modeled as coalgebras and followed by a discussion of their bisimilarity and behavioral equivalence, mentioning results that follow from the coalgebraic treatment of probabilistic systems. It is interesting to note that, for different reasons, for both discrete and continuous probabilistic systems it may be more convenient to work with behavioral equivalence than with bisimilarity.
 
We introduce a comprehensive hybrid failure model for synchronous distributed systems, which extends a conventional hybrid process failure model by adding communication failures: Every process in the system is allowed to commit up to f<sub>ℓ</sub><sup>s</sup> send link failures and experience up to f<sub>ℓ</sub><sup>r</sup> receive link failures in each round without being considered faulty; up to some f<sub>ℓ</sub><sup>sa</sup> ≤ f<sub>ℓ</sub><sup>s</sup> and f<sub>ℓ</sub><sup>ra</sup> ≤ f<sub>ℓ</sub><sup>r</sup> among those may even cause erroneous messages rather than just omissions. In a companion paper (Schmid et al. (2009) [14]), devoted to a complete suite of related impossibility results and lower bounds, we proved that this model surpasses all existing link failure modeling approaches in terms of the assumption coverage in a simple probabilistic setting. In this paper, we show that several well-known synchronous consensus algorithms can be adapted to work under our failure model, provided that the number of processes required for tolerating process failures is increased by small integer multiples of f<sub>ℓ</sub><sup>s</sup>, f<sub>ℓ</sub><sup>r</sup>, f<sub>ℓ</sub><sup>sa</sup>, f<sub>ℓ</sub><sup>ra</sup>. This is somewhat surprising, given that consensus in the presence of unrestricted link failures and mobile (moving) process omission failures is impossible. We provide detailed formulas for the required number of processes and rounds, which reveal that the lower bounds established in our companion paper are tight. We also explore the power and limitations of authentication in our setting, and consider uniform consensus algorithms, which guarantee their properties also for benign faulty processes.
 
When investigating the complexity of cut-elimination in first-order logic, a natural subproblem is the elimination of quantifier-free cuts. So far, the problem has only been considered in the context of general cut-elimination, and the upper bounds that have been obtained are essentially double exponential. In this note, we observe that a method due to Dale Miller can be applied to obtain an exponential upper bound.
 
Through self-assembly of branched junction molecules many different DNA structures (graphs) can be assembled. We show that every multigraph can be assembled by DNA such that there is a single strand that traces each edge in the graph at least once. This strand corresponds to a boundary component of a two-dimensional orientable surface that has the given graph as a deformation retract. This boundary component traverses every edge at least once, and it defines a circular path in the graph that "preserves the graph structure" and traverses each edge.
 
We study the entropy rate of a hidden Markov process (HMP) defined by observing the output of a binary symmetric channel whose input is a first-order binary Markov process. Despite the simplicity of the models involved, the characterization of this entropy is a long standing open problem. By presenting the probability of a sequence under the model as a product of random matrices, one can see that the entropy rate sought is equal to a top Lyapunov exponent of the product. This offers an explanation for the elusiveness of explicit expressions for the HMP entropy rate, as Lyapunov exponents are notoriously difficult to compute. Consequently, we focus on asymptotic estimates, and apply the same product of random matrices to derive an explicit expression for a Taylor approximation of the entropy rate with respect to the parameter of the binary symmetric channel. The accuracy of the approximation is validated against empirical simulation results. We also extend our results to higher-order Markov processes and to Rényi entropies of any order.
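The entropy rate in question can also be estimated numerically via the standard forward (filtering) recursion, since it equals the expectation of −log<sub>2</sub> of the one-step prediction probability; this is an empirical counterpart of the random-matrix-product formulation. A Monte Carlo sketch (ours, with assumed parameters: symmetric Markov input with flip probability p, binary symmetric channel with crossover probability eps):

```python
import math
import random

def hmp_entropy_rate(p, eps, n=200000, seed=1):
    """Estimate the entropy rate of the output of a BSC(eps) driven by a
    symmetric binary Markov chain (flip probability p), via the forward
    filter: H = E[-log2 P(Y_t | Y_1, ..., Y_{t-1})]."""
    rng = random.Random(seed)
    x = rng.random() < 0.5        # hidden Markov state
    b = 0.5                       # P(X_{t-1} = 1 | observations so far)
    h = 0.0
    for _ in range(n):
        if rng.random() < p:      # Markov transition
            x = not x
        y = (not x) if rng.random() < eps else x   # channel flip
        bp = b * (1 - p) + (1 - b) * p             # predict P(X_t = 1)
        py1 = bp * (1 - eps) + (1 - bp) * eps      # P(Y_t = 1 | past)
        py = py1 if y else 1.0 - py1
        h -= math.log2(py)
        b = bp * ((1 - eps) if y else eps) / py    # filtering update
    return h / n

# eps = 0.5 makes the output i.i.d. uniform, so the rate is exactly 1 bit.
print(hmp_entropy_rate(0.3, 0.5))
```

Sanity checks at the endpoints (eps = 0 recovers the binary entropy of the input chain, eps = 1/2 gives one bit) line up with the Taylor approximation the abstract describes.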
 
We extend the methodology in Baaz and Fermüller (1999) [5] to systematically construct analytic calculi for semi-projective logics, a large family of (propositional) locally finite many-valued logics. Our calculi, defined in the framework of sequents of relations, are proof-search oriented and can be used to settle the computational complexity of the formalized logics. As a case study we derive sequent calculi of relations for Nilpotent Minimum logic and for Hájek's Basic Logic extended with the [Formula: see text]-contraction axiom ([Formula: see text]). The introduced calculi are used to prove that the decision problem in these logics is Co-NP complete.
 
[Figure: a family of 1-player energy parity games in which Player 1 needs memory of size 2 · (n − 1) · W and initial credit (n − 1) · W; edges are labeled by weights, states by priorities.]
Energy parity games are infinite two-player turn-based games played on weighted graphs. The objective of the game combines a (qualitative) parity condition with the (quantitative) requirement that the sum of the weights (i.e., the level of energy in the game) must remain positive. Besides their own interest in the design and synthesis of resource-constrained omega-regular specifications, energy parity games provide one of the simplest models of games with combined qualitative and quantitative objectives. Our main results are as follows: (a) exponential memory is sufficient and may be necessary for winning strategies in energy parity games; (b) the problem of deciding the winner in energy parity games can be solved in NP ∩ coNP; and (c) we give an algorithm to solve energy parity games by reduction to energy games. We also show that the problem of deciding the winner in energy parity games is logspace-equivalent to the problem of deciding the winner in mean-payoff parity games, which can thus be solved in NP ∩ coNP. As a consequence we also obtain a conceptually simple algorithm to solve mean-payoff parity games.
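As a taste of the quantitative half, here is a small Python sketch (ours, not from the paper) computing minimal initial credits in a one-player energy game by iterating the natural lifting operator to its least fixed point; it assumes every state has an outgoing edge and that the energy objective is achievable from every state, since otherwise the iteration would diverge.

```python
def minimal_credits(n, edges):
    """Minimal initial credit c(v) for each state v of a one-player
    energy game: the least c(v) such that some infinite path from v
    keeps c(v) plus the running sum of edge weights non-negative.

    edges: list of (src, dst, weight); states are 0 .. n-1, each with
    at least one outgoing edge. Least fixed point of the lifting
    c(v) = min over edges (v, u, w) of max(0, c(u) - w).
    """
    c = [0] * n
    changed = True
    while changed:
        changed = False
        for v in range(n):
            best = min(max(0, c[u] - w) for s, u, w in edges if s == v)
            if best > c[v]:
                c[v] = best
                changed = True
    return c

# Two-state cycle: A -(-2)-> B -(+2)-> A needs credit 2 at A, 0 at B.
print(minimal_credits(2, [(0, 1, -2), (1, 0, 2)]))
```

The two-player and parity-augmented versions require strategy considerations well beyond this sketch, but the fixed-point flavor is the same.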
 
Range Quickselect, a simple modification of the well-known Quickselect algorithm for selection, can be used to efficiently find an element with rank k in a given range [i..j], out of n given elements. We study basic cost measures of Range Quickselect by computing exact and asymptotic results for the expected number of passes, comparisons and data moves during the execution of this algorithm. The key element appearing in the analysis of Range Quickselect is a trivariate recurrence that we solve in full generality. The general solution of the recurrence proves to be very useful, as it allows us to tackle several related problems, besides the analysis that originally motivated us. In particular, we have been able to carry out a precise analysis of the expected number of moves of the pth element when selecting the jth smallest element with standard Quickselect, where we are able to give both exact and asymptotic results. Moreover, we can apply our general results to obtain exact and asymptotic results for several parameters in binary search trees, namely the expected number of common ancestors of the nodes with rank i and j, the expected size of the subtree rooted at the least common ancestor of the nodes with rank i and j, and the expected distance between the nodes of ranks i and j.
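A compact Python sketch of the algorithm's core idea (ours, assuming distinct elements): plain Quickselect, except that the recursion stops as soon as the pivot's rank lands anywhere in the target range [i..j].

```python
import random

def range_quickselect(a, i, j, seed=0):
    """Return an element of a whose 0-based rank lies in [i..j].

    Works on a copy, using Lomuto partitioning; each pass either
    returns (pivot rank fell inside [i..j]) or discards one side.
    Assumes the elements of a are distinct.
    """
    rng = random.Random(seed)
    a = list(a)
    lo, hi = 0, len(a) - 1
    while True:
        p = rng.randint(lo, hi)
        a[p], a[hi] = a[hi], a[p]
        pivot, store = a[hi], lo
        for k in range(lo, hi):          # partition a[lo..hi]
            if a[k] < pivot:
                a[k], a[store] = a[store], a[k]
                store += 1
        a[store], a[hi] = a[hi], a[store]
        if store < i:
            lo = store + 1               # pivot rank too small: go right
        elif store > j:
            hi = store - 1               # pivot rank too large: go left
        else:
            return a[store]              # rank within [i..j]: done

data = [5, 3, 8, 1, 9, 2, 7]
print(range_quickselect(data, 2, 4))     # one of 3, 5, 7 (ranks 2..4)
```

The wider the range [i..j], the earlier a pass can stop, which is what the pass/comparison/move analysis above quantifies.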
 
Designing algorithms for distributed systems that provide a round abstraction is often simpler than designing for those that do not provide such an abstraction. Further, distributed systems need to tolerate various kinds of failures. The concept of a synchronizer deals with both: It constructs rounds and allows masking of transmission failures. One simple way of dealing with transmission failures is to retransmit a message until it is known that the message was successfully received. We calculate the exact value of the average rate of a retransmission-based synchronizer in environments with probabilistic message loss, within which the synchronizer shows nontrivial timing behavior. We show how to make this calculation efficient, and present analytical results on the convergence speed. The theoretical results, based on Markov chain theory, are backed up with Monte Carlo simulations.
 
For any set S ⊆ ℝ<sup>n</sup>, let χ(S) denote its Euler characteristic. The author shows that any algebraic computation tree or fixed-degree algebraic decision tree must have height Ω(log|χ(S)|) for deciding the membership question of a compact semi-algebraic set S. This extends a result by A. Björner, L. Lovász and A. Yao where it was shown that any linear decision tree for deciding the membership question of a closed polyhedron S must have height at least log<sub>3</sub>|χ(S)|.
 
This paper presents the following algorithms to compute the sum of n d-bit integers on reconfigurable parallel machine models: i) a constant-time algorithm on a reconfigurable mesh of the bit model of size d√n log<sup>O(1)</sup> n × √n, ii) an O(log* n)-time algorithm on a reconfigurable mesh of the bit model of size d√(n/log* n) × √(n/log* n), iii) an O(log d + log* n)-time algorithm on a reconfigurable mesh of the word model of size √(n/(log d + log* n)) × √(n/(log d + log* n)), and iv) an O(log* n)-time algorithm on a VLSI reconfigurable circuit of area O(dn/log* n).
 
In this paper we deal with the parallel approximability of a special class of Quadratic Programming (QP), called Smooth Positive Quadratic Programming. This subclass of QP is obtained by imposing restrictions on the coefficients of the QP instance. The Smoothness condition restricts the magnitudes of the coefficients while the positiveness requires that all the coefficients be non-negative. Interestingly, even with these restrictions several combinatorial problems can be modeled by Smooth QP. We show NC Approximation Schemes for the instances of Smooth Positive QP. This is done by reducing the instance of QP to an instance of Positive Linear Programming, finding in NC an approximate fractional solution to the obtained program, and then rounding the fractional solution to an integer approximate solution for the original problem. Then we show how to extend the result for positive instances of bounded degree to Smooth Integer Programming problems. Finally, we formulate several important combinatorial problems as Positive Quadratic Programs (or Positive Integer Programs) in packing/covering form and show that the techniques presented can be used to obtain NC Approximation Schemes for “dense” instances of such problems.
 
Sorting permutations by operations such as reversals and block-moves has received much interest because of its applications in the study of genome rearrangements. A short block-move is an operation on a permutation that moves an element at most two positions away from its original position. This paper investigates the problem of finding a minimum-length sorting sequence of short block-moves for a given permutation. A (1+ε)-approximation algorithm for this problem is presented, where ε depends on the ratio of the number of elements to the number of inversions in the permutation. We propose a new structure in the permutation graph called an umbrella, which is the basis of the new algorithm and valuable for further study.
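A quick way to see where such bounds come from: a short block-move shifts an element at most two positions, so a single move changes the number of inversions by at most two, giving ⌈inv/2⌉ as a lower bound on the sorting length. A tiny Python sketch (ours) of this inversion bound:

```python
def inversions(perm):
    """Number of pairs i < j with perm[i] > perm[j] (quadratic sketch)."""
    n = len(perm)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if perm[i] > perm[j])

def short_block_move_lower_bound(perm):
    """Each short block-move removes at most two inversions, so at
    least ceil(inv / 2) moves are needed to sort perm."""
    inv = inversions(perm)
    return (inv + 1) // 2

# [3, 1, 2] has 2 inversions; one hop of the 3 past "1 2" sorts it.
print(short_block_move_lower_bound([3, 1, 2]))
```

The approximation algorithm in the paper works against bounds of this kind; the umbrella structure is what lets it get within the stated factor.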
 
We consider the following norm ∥f∥ for Boolean functions f : {0, 1}<sup>n</sup> × {0, 1}<sup>n</sup> → {0, 1}: ∥f∥ = max{|M(f*)v| : v ∈ ℝ<sup>2<sup>n</sup></sup>, |v| = 1}. Here, M(f*) denotes the 2<sup>n</sup> × 2<sup>n</sup> matrix obtained by setting (equation presented). Further, we refer to the usual product of real matrices and vectors and denote by |v| the Euclidean norm of a real vector v. In this paper it will be shown by geometric arguments that for each function f : {0, 1}<sup>n</sup> × {0, 1}<sup>n</sup> → {0, 1} the following is true: (i) the length of each probabilistic communication protocol which computes f with error bounded by 1/2 − 1/s, s ∈ ℕ, can be estimated from below by (1/4)(n − log<sub>2</sub>∥f∥ − log<sub>2</sub>√s − 2); (ii) the number of edges of any threshold circuit of depth two computing f cannot be smaller than 2<sup>n−1</sup>/∥f∥. These results yield better lower bounds on probabilistic communication complexity as well as on the complexity of threshold circuits of depth two. Further, characterizing ∥f∥ by the eigenvalues of M(f*), we obtain a method to construct iteratively functions which are hard to compute in the above models. This implies lower bounds for decision problems to which the previous probabilistic techniques of Yao (1983), Halstenberg and Reischuk (1988) and Hajnal et al. (1987) cannot be applied.
 
We present a general framework for provably safe mobile code. It relies on a formal definition of a safety policy and explicit evidence for compliance with this policy which is attached to a binary. Concrete realizations of this framework are proof-carrying code (PCC), where the evidence for safety is a formal proof generated by a certifying compiler, and typed assembly language (TAL), where the evidence for safety is given via type annotations propagated throughout the compilation process in typed intermediate languages. Validity of the evidence is established via a small trusted type checker, either directly on the binary or indirectly on proof representations in a logical framework (LF).
 
Resource-bounded measure has been defined on the classes E, E<sub>2</sub>, ESPACE, E<sub>2</sub>SPACE, REC, and the class of all languages. It is shown here that if C is any of these classes and X is a set of languages that is closed under finite variations and has outer measure less than 1 in C, then X has measure 0 in C. This result strengthens Lutz's resource-bounded generalization of the classical Kolmogorov zero-one law. It also gives a useful sufficient condition for proving that a set has measure 0 in a complexity class.
 
There have been several papers over the last ten years that consider the number of queries needed to compute a function as a measure of its complexity. The following function has been studied extensively in that light: F<sub>a</sub><sup>A</sup>(x<sub>1</sub>, …, x<sub>a</sub>) = A(x<sub>1</sub>)···A(x<sub>a</sub>). We are interested in the complexity (in terms of the number of queries) of approximating F<sub>a</sub><sup>A</sup>. Let b ≤ a and let f be any function such that F<sub>a</sub><sup>A</sup>(x<sub>1</sub>, …, x<sub>a</sub>) and f(x<sub>1</sub>, …, x<sub>a</sub>) agree on at least b bits. For a general set A we have matching upper and lower bounds that depend on coding theory. These are applied to get exact bounds for the case where A is semirecursive, A is superterse, and (assuming P ≠ NP) A = SAT. We obtain exact bounds when A is the halting problem using different methods.
 
Media access protocols in wireless networks require each contending node to wait for a backoff time chosen randomly from a fixed range, before attempting to transmit on a shared channel. However, nodes acting in their own selfish interest may not follow the protocol. In this paper, we use a game-theoretic approach to study how nodes might be induced to adhere to the protocol. In particular, a static version of the problem is modeled as a strategic game played by non-cooperating, rational players (the nodes). A strategy for a player corresponds to a backoff value in the medium access protocol. We are interested in designing a game which exhibits a unique Nash equilibrium corresponding to a pre-specified full-support distribution profile. In the context of the media access problem, the equilibrium of the game would correspond to nodes following the protocol, viz. choosing backoff times randomly from a given range of values according to the prespecified distribution. Building on results described in earlier work, we identify the exact relationship that must hold between the cardinalities of the players' action sets that would make it possible to design such a game.
 
This paper considers problems of fault-tolerant information diffusion in a network with a cost function. We show that the problem of determining the minimum cost necessary to perform fault-tolerant gossiping among a given set of participants is NP-hard, and we give approximation algorithms (with respect to the cost). We also analyze the communication time of fault-tolerant gossiping algorithms. Finally, we give an optimal-cost fault-tolerant broadcasting algorithm and apply our results to the atomic commitment problem.
 
The characterizations of the class Θ<sub>2</sub><sup>p</sup> of languages polynomial-time truth-table reducible to sets in NP are surveyed, studying the classes obtained when the characterizations are used to define functions instead of languages. It is shown that in this way three function classes are obtained. An overview of the known relationships between these classes, including some original results, is given.
 
We consider the behavior of the error probability of a two-prover one-round interactive protocol repeated n times in parallel. We point out the connection of this problem with the density form of the Hales-Jewett theorem in Ramsey theory. This allows us to show that the error probability converges to 0 as n→∞.
 
Given two strings, a pattern P of length m and a text T of length n, the string-matching problem is to find all occurrences of the pattern P in the text T. We present a simple string-matching algorithm which works in average o(n) time with constant additional space for one-dimensional texts and two-dimensional arrays. This is the first approach to the small-space string-matching problem in which sublinear-time algorithms are delivered. More precisely, we show that all occurrences of one- or two-dimensional patterns can be found in O(n/r) average time with constant memory, where r is the repetition size (size of the longest repeated subword) of P.
 
We describe two simple optimal-work parallel algorithms for sorting a list L=(X<sub>1</sub>,X<sub>2</sub>,...,X<sub>m</sub>) of m strings over an arbitrary alphabet Σ, where Σ<sub>i=1</sub><sup>m</sup>|X<sub>i</sub>|=n. The first algorithm is a deterministic algorithm that runs in O((log<sup>2</sup> m)/(log log m)) time and the second is a randomized algorithm that runs in O(log m) time. Both algorithms use O(m log(m)+n) operations. Compared to the best known parallel algorithms for sorting strings, the algorithms offer the following improvements: the total number of operations used by the algorithms is optimal while all previous parallel algorithms use a non-optimal number of operations; we make no assumption about the alphabet while the previous algorithms assume that the alphabet is restricted to {1,2,..., n<sup>O(1)</sup>}; the computation model assumed by the algorithms is the Common CRCW PRAM unlike the known algorithms that assume the Arbitrary CRCW PRAM; and the presented algorithms use O(m log m+n) space, while previous parallel algorithms use O(n<sup>1+ε</sup>) space, where ε is a positive constant. We also present optimal-work parallel algorithms to construct a digital search tree for a given set of strings and to search for a string in a sorted list of strings. We use the parallel sorting algorithms to solve the problem of determining a minimal starting point of a circular string with respect to lexicographic ordering.
 
A public data structure is required to work correctly in a concurrent environment where many processes may try to access it, possibly at the same time. In implementing such a structure nothing can be assumed in advance about the number or the identities of the processes that might access it. While most of the known concurrent data structures are not public, a few are. Interestingly, these public data structures all deal with various variants of counters, which are data structures that support two operations: increment and read. In this paper we define the notion of a public data structure and investigate several types of public counters. Then we give an optimal construction of public counters which satisfies a weak correctness condition, and show that there is no public counter which satisfies a stronger condition. It is hoped that this work will provide insights into the design of other, more complicated, public data structures.
 
The author develops a theory of disjunctive logic programming, with negations allowed to appear both in the head of a clause and in the body. As such programs can easily contain inconsistent information (with respect to the intuitions of two-valued logic), this means that the formalism allows reasoning about systems that are intuitively inconsistent but yet have models (in nonclassical model theory). Such an ability is important because inconsistencies may occur very easily during the design and development of deductive databases and/or expert systems. The author also develops a theory of disjunctive deductive databases that (perhaps) contain inconsistent information. The author shows how to associate, with any such database, an operator that maps multivalued-model states to multivalued-model states. It is shown that this operator has a least fixed point which is identical to the set of all variable-free disjunctions that are provable from the database under consideration. A procedure to answer queries to such databases is devised. Soundness and completeness results are proved. The techniques introduced are fairly general. However, the results are applicable to databases that are quantitative in nature.
 
We present a domain-theoretic framework for measure theory and integration of bounded real-valued functions with respect to bounded Borel measures on compact metric spaces. The set of normalised Borel measures of the metric space can be embedded into the maximal elements of the normalised probabilistic power domain of its upper space. Any bounded Borel measure on the compact metric space can then be obtained as the least upper bound of an ω-chain of linear combinations of point valuations (simple valuations) on the upper space, thus providing a constructive setup for these measures. We use this setting to develop a theory of integration based on a new notion of integral which generalises and shares all the basic properties of the Riemann integral. The theory provides a new technique for computing the Lebesgue integral. It also leads to a new algorithm for integration over fractals of iterated function systems.
 
In PRAM emulations, universal hashing is a well-known method for distributing the address space among memory modules. However, if the memory access patterns of an application often result in high module congestion, it is necessary to rehash by choosing another hash function and redistributing data on the fly. For the case of linear hash functions h(x) = ax mod m, we present an algorithm to rehash an address space of size m on a p-processor PRAM emulation in time O(m/p + log p). The algorithm requires O(log m) words of local storage per processor.
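The data movement such a rehash performs can be pictured sequentially. A Python sketch (ours, with assumed conventions: addresses are 0..m−1, address y lives in module y mod p, and gcd(a, m) = 1 so that each hash is a bijection):

```python
from math import gcd

def moved_addresses(m, p, a_old, a_new):
    """Count addresses whose memory module changes when rehashing from
    h_old(x) = a_old * x mod m to h_new(x) = a_new * x mod m.
    Assumes address y is stored in module y mod p."""
    assert gcd(a_old, m) == 1 and gcd(a_new, m) == 1   # bijective hashes
    return sum(1 for x in range(m)
               if (a_old * x % m) % p != (a_new * x % m) % p)

# Rehashing a 16-address space over 4 modules from a = 1 to a = 3:
print(moved_addresses(16, 4, 1, 3))   # here exactly the odd addresses move
```

The parallel algorithm of the abstract performs exactly this redistribution, but spread over p processors with only logarithmic local storage.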
 
In this paper we discuss the formal specification of parallel SIMD execution. We outline a vector model to describe SIMD execution which forms the basis of a semantic definition for a simple SIMD language. The model is based upon the notion of atomic parallel SIMD instructions operating on vectors of size Π where Π is the number of PEs on the machine. The vector model for parallel SIMD execution is independent of any specific computing architecture and can define parallel SIMD execution on a real SIMD machine (with a limited number of PEs) or a SIMD simulation. The model enables the formal specification of SIMD languages by providing an underlying mathematical framework for the SIMD paradigm.
 
We study the existence and computation of extremal solutions of a system of inequations defined over lattices. Using the Knaster-Tarski fixed point theorem, we obtain sufficient conditions for the existence of supremal as well as infimal solution of a given system of inequations. Iterative techniques are presented for the computation of the extremal solutions whenever they exist, and conditions under which the termination occurs in a single iteration are provided. These results are then applied for obtaining extremal solutions of various inequations that arise in computation of maximally permissive supervisors in control of logical discrete event systems (DESs). Thus our work presents a unifying approach for computation of supervisors in a variety of situations.
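On finite lattices the iterative technique is easy to picture. A minimal Python sketch (our illustration in the powerset lattice, not the paper's DES machinery): iterating a monotone map downward from the top element yields, by Knaster-Tarski, its greatest fixed point, i.e. the supremal solution; the example computes the largest set of states of a graph from which play can remain inside a safe set forever, the shape of a maximally permissive supervisor.

```python
def greatest_fixpoint(f, top):
    """Iterate a monotone f downward from the top of a finite lattice;
    the limit is the greatest fixed point (Knaster-Tarski)."""
    x = top
    while f(x) != x:
        x = f(x)
    return x

# Largest set of states from which play can stay inside `safe` forever.
states = frozenset({0, 1, 2, 3})
edges = {(0, 1), (1, 0), (2, 3), (3, 3)}
safe = frozenset({0, 1, 2})

def step(S):
    """Monotone map: states in S that are safe and have a successor in S."""
    return frozenset(s for s in S & safe
                     if any((s, t) in edges and t in S for t in states))

print(greatest_fixpoint(step, states))   # states 0 and 1 only
```

State 2 drops out because its only successor, 3, is unsafe; single-iteration termination, as studied in the paper, corresponds to maps for which this loop stabilises after one application.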
 
An atomic snapshot memory is an implementation of a multiple location shared memory that can be atomically read in its entirety without having to prevent concurrent writing. The design of wait-free implementations of atomic snapshot memories has been the subject of extensive theoretical research in recent years. This paper introduces the coordinated-collect algorithm, a novel wait-free atomic snapshot construction which we believe is a first step in taking snapshots from theory to practice. Unlike former algorithms, it uses currently available multiprocessor synchronization operations to provide an algorithm that has only O(1) update complexity and O(n) scan complexity, with very small constants. Empirical evidence collected on a simulated distributed shared-memory multiprocessor shows that coordinated-collect outperforms all known wait-free, lock-free, and locking algorithms in terms of overall throughput and latency.
 
Local search has been widely used in combinatorial optimization (Local Search in Combinatorial Optimization, Wiley, New York, 1997); however, in the case of multicriteria optimization almost no results are known concerning the ability of local search algorithms to generate “good” solutions with a performance guarantee. In this paper, we introduce such an approach for the classical traveling salesman problem (TSP) (Proc. STOC’00, 2000, pp. 126–133). We show that it is possible to obtain, in linear time, a -approximate Pareto curve using an original local search procedure based on the 2-opt neighborhood, for the bicriteria TSP(1,2) problem, where every edge is associated with a pair of distances which are either 1 or 2 (Math. Oper. Res. 18 (1) (1993) 1).
 
The syntactic theories of control and state are conservative extensions of the λ<sub>v</sub>-calculus for equational reasoning about imperative programming facilities in higher-order languages. Unlike the simple λ<sub>v</sub>-calculus, the extended theories are mixtures of equivalence relations and compatible congruence relations on the term language, which significantly complicates the reasoning process. In this paper we develop fully compatible equational theories of the same imperative higher-order programming languages. The new theories subsume the original calculi of control and state and satisfy the usual Church–Rosser and Standardization Theorems. With the new calculi, equational reasoning about imperative programs becomes as simple as reasoning about functional programs.
 
We consider the question of lookahead in the list update problem: What improvement can be achieved in terms of competitiveness if an on-line algorithm sees not only the present request to be served but also some future requests? We introduce two different models of lookahead and study the list update problem using these models. We develop lower bounds on the competitiveness that can be achieved by deterministic on-line algorithms with lookahead. Furthermore, we present on-line algorithms with lookahead that are competitive against static off-line algorithms.
 
It is proved that validity problems for two variants of propositional dynamic logic (PDL) connected with concurrent programming are highly undecidable (Π<sup>1</sup><sub>1</sub>-universal). These variants are an extension of PDL by the asynchronous programming constructs shuffle and iterated shuffle, and a variant of PDL with a partial commutativity relation on primitive programs. In both cases propositional variables are not used.
 
Axel Thue proved that overlapping factors could be avoided in arbitrarily long words on a two-letter alphabet while, on the same alphabet, square factors always occur in words longer than 3. Françoise Dejean stated an analogous result for three-letter alphabets: every long enough word has a factor which is a fractional power with an exponent at least 7/4, and there exist arbitrarily long words in which no factor is a fractional power with an exponent strictly greater than 7/4. The number 7/4 is called the repetition threshold of the three-letter alphabets. Thereafter, she proposed the following conjecture: the repetition threshold of the k-letter alphabets is equal to k/(k−1) except in the particular cases k=3, where this threshold is 7/4, and k=4, where it is 7/5. For k=4, this conjecture was proved by J.J. Pansiot (1984). In this paper, we give a computer-aided proof of Dejean's conjecture for several other values: 5, 6, 7, 8, 9, 10 and 11.
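These statements concern the largest fractional power (the critical exponent) occurring among the factors of a word. A brute-force Python sketch (ours, usable only for short words) that checks Thue's side of the story on the Thue-Morse word, whose overlap-freeness caps the exponent at 2:

```python
from fractions import Fraction

def critical_exponent(word):
    """Max of |w| / p over all factors w of length >= 2, where p is the
    smallest period of w (w[k] == w[k+p] for all valid k). Brute force."""
    best = Fraction(1)
    n = len(word)
    for i in range(n):
        for j in range(i + 2, n + 1):
            w = word[i:j]
            for p in range(1, len(w)):
                if all(w[k] == w[k + p] for k in range(len(w) - p)):
                    best = max(best, Fraction(len(w), p))
                    break                  # smallest period found
    return best

def thue_morse(n):
    """Prefix of the Thue-Morse word, built by repeated complementation."""
    t = [0]
    while len(t) < n:
        t += [1 - b for b in t]
    return t[:n]

print(critical_exponent(thue_morse(32)))   # squares occur, overlaps do not
```

Dejean's conjecture asserts the analogous optimal exponent k/(k−1) for k-letter alphabets; the computer-aided proofs in the paper verify far larger search spaces than this toy checker could.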
 
Infinite sets of terms appear frequently at different places in computer science. On the other hand, several practically oriented parts of logic and computer science require the manipulated objects to be finite or finitely representable. Schematizations present a suitable formalism for finitely manipulating infinite sets of terms. Since schematizations provide a different approach to solving the same kind of problems as constraints do, they can be viewed as a new type of constraints. The paper presents a new recurrent schematization called primal grammars. The main idea behind primal grammars is to use primitive recursion as the generating engine of infinite sets. The evaluation of primal grammars is based on substitution and rewriting, hence no particular semantics for them is necessary. This fact also allows a natural integration of primal grammars into Prolog, into functional languages or into other rewrite-based applications. Primal grammars have a decidable unification problem, and the paper presents a unification algorithm for them that produces finite results. This unification algorithm is proved sound and complete, and it terminates for every input.
 
We provide a generalization of Datalog, obtained by generalizing databases through the addition of integer (gap-)order constraints to relational tuples. For Datalog queries with integer gap-order constraints (denoted Datalog<), we show that there is a closed-form evaluation. We also show that the tuple recognition problem can be solved in PTIME in the size of the generalized database, assuming that the size of the constants in the query is logarithmic in the size of the database. Note that the absence of negation is critical: Datalog¬ queries with integer order constraints can express any Turing-computable function.
 
The classical partial orders on strings (prefix, suffix, subsegment, subsequence, lexical, and dictionary order) can be generalized to the case where the alphabet itself has a partial order. This was done by Higman for the subsequence order, and by Kundu for the prefix order. Higman proved that for any language L, the set MIN(L) of minimal elements in L with respect to the generalized subsequence order is finite. Kundu proved that for any regular language L, the set MIN(L) of minimal elements in L with respect to the generalized prefix order is also regular. Here we extend Kundu's result to the other orders and give upper bounds on the number of states of the finite automata recognizing MIN(L). The main contribution of this paper, however, is the proof of lower bounds. The upper bounds are shown to be tight: in particular, if L is recognized by a deterministic finite automaton with n states, then any deterministic (or even nondeterministic) finite automaton recognizing MIN(L) needs exponentially many states in n; here MIN is taken with respect to a generalized prefix, suffix, or subsegment order (with a partially ordered alphabet of 4 letters whose Hasse diagram contains just one edge), or with respect to the ordinary subsequence order. We also give a new proof of a theorem of Sakoda and Sipser about the complementation of nondeterministic finite automata.
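For a finite language, MIN(L) under the ordinary subsequence order can be computed directly; the following Python sketch (illustrative only, using an unordered alphabet) makes the definition concrete.

```python
def is_subsequence(u, v):
    # True iff u can be obtained from v by deleting letters
    it = iter(v)
    return all(c in it for c in u)

def min_language(lang):
    # Minimal words of lang with respect to the subsequence order:
    # keep w unless some other word of lang embeds into it.
    return {w for w in lang
            if not any(u != w and is_subsequence(u, w) for u in lang)}
```

Higman's theorem guarantees that MIN(L) is finite even for infinite L; the lower bounds in the paper show that automata for MIN(L) may nevertheless be exponentially larger than an automaton for L itself.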
 
In this paper, we describe a quasi-linear time universal cellular automaton. This cellular automaton is not only computation universal (in the sense of simulating any Turing machine), but also intrinsically universal (it is capable of simulating arbitrary one-dimensional cellular automata, even two-way ones). The simulation is based on a novel programming language (the brick language), which simplifies the recursive specification of transition functions. Moreover, we prove that cellular automata form an acceptable programming system for parallel computation, thus providing an S-m-n theorem for cellular automata. This allows us to apply well-known results from the general theory of computation to cellular automata and may give a practical framework for studying the structural complexity of cellular automata computations.
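As background, a one-dimensional cellular automaton updates every cell synchronously from its local neighbourhood. The following Python sketch of an elementary (radius-1, binary) CA step on a cyclic configuration is purely illustrative and is unrelated to the paper's brick language:

```python
def ca_step(cells, rule):
    # One synchronous update of an elementary cellular automaton on a
    # cyclic configuration; `rule` is the Wolfram rule number, whose
    # bit at index (4*left + 2*centre + right) gives the new cell value.
    n = len(cells)
    return tuple(
        (rule >> (cells[(i - 1) % n] * 4
                  + cells[i] * 2
                  + cells[(i + 1) % n])) & 1
        for i in range(n)
    )
```

An intrinsically universal CA must be able to simulate the evolution of any such rule (suitably encoded), not merely a fixed Turing machine.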
 
We examine a class of infinite two-person games on finitely coloured graphs. The main aim is to construct finite-memory winning strategies for both players. This problem is motivated by applications to finite automata on infinite trees. Special attention is given to the exact amount of memory needed by the players for their winning strategies. Based on previous work of Gurevich and Harrington and on subsequent improvements by McNaughton, we propose a uniform framework that allows us to re-establish and improve various results concerning memoryless strategies due to Emerson and Jutla, Mostowski, and Klarlund.
 
We prove three different types of complexity lower bounds for one-way unbounded-error and bounded-error probabilistic communication protocols for Boolean functions. The lower bounds are proved in terms of the deterministic communication complexity of functions and in terms of a notion we define, the "probabilistic communication characteristic". We present Boolean functions with different probabilistic communication characteristics, which demonstrates that each of these lower bounds can be more precise than the others, depending on the probabilistic communication characteristic of the function. Our lower bounds are strong enough to prove that the proper hierarchy of one-way probabilistic communication complexity classes depends on the measure of bounded error. As an application of the lower bounds for probabilistic communication complexity, we prove two different types of complexity lower bounds for one-way bounded-error probabilistic space complexity. These lower bounds suffice to establish proper hierarchies for different one-way probabilistic space complexity classes inside SPACE(n) (namely, for bounded-error probabilistic computation and for errors of probabilistic computation).
 
We present algebraic conditions on constraint languages Γ that ensure the hardness of the constraint satisfaction problem CSP(Γ) for the complexity classes L, NL, P, NP and ModpL. These criteria also yield non-expressibility results for various restrictions of Datalog. Furthermore, we show that if CSP(Γ) is not first-order definable then it is L-hard. Our proofs rely on tame congruence theory and on a fine-grained analysis of the complexity of the reductions used in the algebraic study of CSP(Γ). The results pave the way for a refinement of the dichotomy conjecture stating that each CSP(Γ) lies in P or is NP-complete, and they match the recent classification of [E. Allender, M. Bauland, N. Immerman, H. Schnoor, H. Vollmer, The complexity of satisfiability problems: Refining Schaefer's theorem, in: Proc. 30th Math. Found. of Comp. Sci., MFCS'05, 2005, pp. 71–82] for Boolean CSP(Γ). We also infer a partial classification theorem for the complexity of CSP(Γ) when the associated algebra of Γ is the full idempotent reduct of a preprimal algebra.
 
This short note presents a summary of the scientific contributions of Rainer Kemp (1949–2004) in the area of discrete mathematics, combinatorial enumeration, and analysis of algorithms. A complete bibliography of Kemp's publications is included.
 
An L(2,1)-labeling of a graph is a mapping from its vertex set into the nonnegative integers such that labels assigned to adjacent vertices differ by at least 2, and labels assigned to vertices at distance 2 are different. The span of such a labeling is the maximum label used, and the L(2,1)-span of a graph is the minimum possible span over its L(2,1)-labelings. We show how to compute the L(2,1)-span of a connected graph in time O*(2.6488^n). Previously published exact exponential-time algorithms had gradually improved the base of the exponential function from 4 to the previously best known 3.2361, with 3 seemingly having been the Holy Grail.
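To make the definition concrete, here is a naive brute-force Python sketch (far slower than the paper's exact algorithm, and usable only on tiny graphs) that computes the L(2,1)-span by trying increasing spans:

```python
from itertools import count, product

def l21_span(adj):
    # Exhaustive search for the L(2,1)-span of a small graph.
    # adj: dict mapping each vertex to the set of its neighbours.
    verts = list(adj)

    def valid(lab):
        for v in verts:
            for u in adj[v]:
                if abs(lab[v] - lab[u]) < 2:   # adjacent: differ by >= 2
                    return False
                for w in adj[u]:               # distance 2: labels differ
                    if w != v and lab[w] == lab[v]:
                        return False
        return True

    for span in count():                        # try spans 0, 1, 2, ...
        for labels in product(range(span + 1), repeat=len(verts)):
            if valid(dict(zip(verts, labels))):
                return span
```

For instance, the path a–b–c has span 3 (e.g. labels 2, 0, 3), and the triangle has span 4 (labels 0, 2, 4).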
 
We investigate the runtime of a binary Particle Swarm Optimizer (PSO) for optimizing pseudo-Boolean functions f: {0,1}^n → R. The binary PSO maintains a swarm of particles searching for good solutions. Each particle consists of a current position from {0,1}^n, its own best position, and a velocity vector used in a probabilistic process to update the current position. The velocities of a particle are updated in the direction of its own best position and the position of the best particle in the swarm. We present a lower bound on the time needed to optimize any pseudo-Boolean function with a unique optimum. To prove upper bounds, we transfer a fitness-level argument that is well established for evolutionary algorithms (EAs) to the PSO setting. This method is applied to estimate the expected runtime on the class of unimodal functions. A simple variant of the binary PSO is considered in more detail for the test function OneMax, showing that there the binary PSO is competitive with EAs. An additional experimental comparison reveals further insights.
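A minimal binary PSO in the spirit described above can be sketched in Python; the parameter choices (swarm size, velocity bound) are illustrative assumptions, not the settings analysed in the paper.

```python
import math
import random

def binary_pso_onemax(n, swarm_size=3, vmax=4.0, max_iters=50000, seed=1):
    # Illustrative binary PSO (Kennedy/Eberhart style) on OneMax.
    # Returns the iteration at which the all-ones optimum was found,
    # or None if the budget is exhausted.
    rng = random.Random(seed)
    sig = lambda x: 1.0 / (1.0 + math.exp(-x))
    f = sum  # OneMax: number of one-bits
    pos = [[rng.randint(0, 1) for _ in range(n)] for _ in range(swarm_size)]
    vel = [[0.0] * n for _ in range(swarm_size)]
    own = [p[:] for p in pos]          # each particle's own best position
    gbest = max(own, key=f)[:]         # best position seen by the swarm
    for t in range(max_iters):
        if f(gbest) == n:
            return t
        for i in range(swarm_size):
            for j in range(n):
                # pull the velocity towards own best and global best
                vel[i][j] += rng.random() * (own[i][j] - pos[i][j]) \
                           + rng.random() * (gbest[j] - pos[i][j])
                vel[i][j] = max(-vmax, min(vmax, vel[i][j]))
                # resample the bit with probability sigmoid(velocity)
                pos[i][j] = 1 if rng.random() < sig(vel[i][j]) else 0
            if f(pos[i]) > f(own[i]):
                own[i] = pos[i][:]
                if f(own[i]) > f(gbest):
                    gbest = own[i][:]
    return None
```

Clamping velocities at ±vmax keeps every bit-flip probability bounded away from 0 and 1, the kind of condition that runtime analyses of randomized search heuristics typically rely on.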
 
Stochastic automata are an established formalism for describing and analysing systems according to their qualitative and quantitative behaviour. Equivalence is a basic concept for the analysis, comparison and reduction of untimed automata, whereas equivalence of stochastic automata is less well established. This paper introduces a new equivalence relation for stochastic automata, called exact performance equivalence. It is shown that this relation preserves several important qualitative properties as well as quantitative results. Exact performance equivalence is a congruence with respect to the synchronised product of stochastic automata. For every stochastic automaton, the smallest exactly equivalent automaton exists and can be generated by a partition refinement algorithm.
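Partition refinement in such settings resembles lumping of weighted automata: blocks are split until all states in a block carry equal cumulative weight into every block. The Python sketch below is a naive illustration of that fixed-point scheme under simplifying assumptions, not the paper's exact algorithm for stochastic automata.

```python
def lump(states, weight):
    # Coarsest partition in which equivalent states have equal total
    # weight into every block (naive fixed-point refinement).
    # weight: dict mapping (source, target) pairs to transition weights.
    part = [set(states)]
    changed = True
    while changed:
        changed = False
        new = []
        for block in part:
            sig = {}
            for s in block:
                # signature of s: total outgoing weight into each block
                key = tuple(sum(weight.get((s, t), 0) for t in b)
                            for b in part)
                sig.setdefault(key, set()).add(s)
            new.extend(sig.values())
            if len(sig) > 1:
                changed = True
        part = new
    return part
```

For two symmetric states that each move with weight 1 into a common target, refinement keeps them in one block, yielding a smaller quotient automaton.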
 