# Theory of Computing Systems

Online ISSN: 1433-0490
Print ISSN: 1432-4350
## Recent publications
A classic result of Paul, Pippenger, Szemerédi and Trotter states that $\mathsf{DTIME}(n) \subsetneq \mathsf{NTIME}(n)$. The natural question then arises: could the inclusion $\mathsf{DTIME}(t(n)) \subseteq \mathsf{NTIME}(n)$ hold for some superlinear time-constructible function $t(n)$? If such a function $t(n)$ does exist, then there also exist effective nondeterministic guessing strategies that speed up deterministic computations. In this work, we prove limitations on such strategies by showing that their existence would have unlikely consequences. In particular, we show that if a subpolynomial amount of nondeterministic guessing could be used to speed up deterministic computation by a polynomial factor, then $\mathsf{P} \subsetneq \mathsf{NTIME}(n)$. Furthermore, even achieving a logarithmic speedup at the cost of making every step nondeterministic would show that $\mathsf{SAT} \in \mathsf{NTIME}(n)$ under appropriate encodings.
Of possibly independent interest, under such encodings we also show that SAT can be decided in $O(n \log n)$ steps on a nondeterministic multitape Turing machine, improving on the well-known $O(n (\log n)^c)$ bound for some constant but undetermined exponent $c \geq 1$.

We have introduced and extended the notion of a swarm automaton to analyze computability via swarm movement represented by multiset rewriting. Two kinds of transitions, parallel and sequential, are considered for transforming a configuration of multisets at each step of the swarm automaton. In this paper, we focus on the number of agents composing each configuration and analyze the computing power of swarm automata. For swarm automata without position information, we show that no swarm automaton has universal computing power, even when infinitely many agents are available, under both parallel and sequential rewriting. On the other hand, once position information is added to each agent, the swarm automaton attains universal computability: just four agents in a configuration suffice to simulate any Turing machine.

In the Arc Disjoint Cycle Packing problem, we are given a simple directed graph (digraph) G and a positive integer k, and the task is to decide whether there exist k arc-disjoint cycles. The problem is known to be W[1]-hard on general digraphs when parameterized by the standard parameter k. In this paper we show that the problem admits a polynomial kernel on α-bounded digraphs. That is, we give a polynomial-time algorithm that, given an instance (D,k) of Arc Disjoint Cycle Packing, outputs an equivalent instance $(D^{\prime},k^{\prime})$ of Arc Disjoint Cycle Packing such that $k^{\prime} \leq k$ and the size of $D^{\prime}$ is upper-bounded by a polynomial function of k. For an integer α ≥ 1, the class of α-bounded digraphs, denoted by ${\mathcal D}_{\alpha}$, consists of those digraphs D in which the maximum size of an independent set is at most α. That is, in D, any set of α + 1 vertices contains an arc with both end-points in the set. For α = 1, this corresponds to the well-studied class of tournaments. Our results generalize the recent result by Bessy et al. [MFCS, 2019] on Arc Disjoint Cycle Packing on tournaments.

For a size parameter $s:\mathbb{N}\to\mathbb{N}$, the Minimum Circuit Size Problem (denoted by MCSP[s(n)]) is the problem of deciding whether the minimum circuit size of a given function $f : \{0,1\}^n \to \{0,1\}$ (represented by a string of length $N := 2^n$) is at most a threshold s(n). A recent line of work exhibited “hardness magnification” phenomena for MCSP: a very weak lower bound for MCSP implies a breakthrough result in complexity theory. For example, McKay, Murray, and Williams (STOC 2019) implicitly showed that, for some constant $\mu_1 > 0$, if $\text{MCSP}[2^{\mu_1 \cdot n}]$ cannot be computed by a one-tape Turing machine (with an additional one-way read-only input tape) running in time $N^{1.01}$, then P ≠ NP. In this paper, we present the following new lower bounds against one-tape Turing machines and branching programs: (1) A randomized two-sided-error one-tape Turing machine (with an additional one-way read-only input tape) cannot compute $\text{MCSP}[2^{\mu_2 \cdot n}]$ in time $N^{1.99}$, for some constant $\mu_2 > \mu_1$. (2) A non-deterministic (or parity) branching program of size $o(N^{1.5}/\log N)$ cannot compute MKTP, which is a time-bounded Kolmogorov complexity analogue of MCSP. This is shown by directly applying the Nečiporuk method to MKTP, which previously appeared to be difficult. (3) The size of any non-deterministic, co-non-deterministic, or parity branching program computing MCSP is at least $N^{1.5-o(1)}$. These results are the first non-trivial lower bounds for MCSP and MKTP against one-tape Turing machines and non-deterministic branching programs, and essentially match the best-known lower bounds for any explicit functions against these computational models. The first result is based on recent constructions of pseudorandom generators for read-once oblivious branching programs (ROBPs) and combinatorial rectangles (Forbes and Kelley, FOCS 2018; Viola, Electron. Colloq. Comput. Complexity (ECCC) 26, 51, 2019).
En route, we obtain several related results: (1) There exists a (local) hitting set generator with seed length $\widetilde{O}(\sqrt{N})$ secure against read-once polynomial-size non-deterministic branching programs on N-bit inputs. (2) Any read-once co-non-deterministic branching program computing MCSP must have size at least $2^{\widetilde{\Omega}(N)}$.

Unit disk graphs are the intersection graphs of unit-radius disks in the Euclidean plane. Deciding whether a given graph admits such an embedding, i.e. unit disk graph recognition, is an important geometric problem with many application areas. In general, this problem is known to be $\exists\mathbb{R}$-complete. In some applications, the objects that correspond to unit disks have predefined (geometric) structures on which they must be placed. Hence, many researchers have attacked this problem by restricting the domain of the disk centers. Following the same line, we describe a polynomial-time reduction showing that deciding whether a graph can be realized as unit disks on given straight lines is NP-hard when the given lines are parallel to either the x-axis or the y-axis. Adjusting the reduction, we also show that this problem is NP-complete when the given lines are parallel only to the x-axis. We obtain these results using the idea of the logic engine introduced by Bhatt and Cosmadakis in 1987.

We consider compact representations of collections of similar strings that support random access queries. The collection of strings is given by a rooted tree where edges are labeled by an edit operation (inserting, deleting, or replacing a character) and a node represents the string obtained by applying the sequence of edit operations on the path from the root to the node. The goal is to compactly represent the entire collection while supporting fast random access to any part of a string in the collection. This problem captures natural scenarios such as representing the past history of an edited document or representing highly repetitive collections. Given a tree with n nodes, we show how to represent the corresponding collection in O(n) space and $O(\log n / \log\log n)$ query time. This improves the previous time-space trade-offs for the problem. Additionally, we show a lower bound proving that the query time is optimal for any solution using near-linear space. To achieve our bounds for random access in persistent strings, we show how to reduce the problem to the following natural geometric selection problem on line segments. Consider a set of horizontal line segments in the plane. Given parameters i and j, a segment selection query returns the j-th smallest segment (the segment with the j-th smallest y-coordinate) among the segments crossing the vertical line through x-coordinate i. The segment selection problem is to preprocess a set of horizontal line segments into a compact data structure that supports fast segment selection queries.
We present a solution that uses O(n) space and supports segment selection queries in $O(\log n / \log\log n)$ time, where n is the number of segments. Furthermore, we prove that this query time is also optimal for any solution using near-linear space.
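The segment selection query can be pinned down with a small reference implementation. The following is a naive, hypothetical sketch (a linear scan per query); the data structure in the abstract answers the same queries in $O(\log n / \log\log n)$ time with O(n) space, which this sketch does not attempt to achieve.

```python
def segment_select(segments, i, j):
    """Return the j-th smallest (by y-coordinate) segment among the
    horizontal segments crossing the vertical line x = i (j is 1-indexed).

    Each segment is a tuple (x1, x2, y) with x1 <= x2. This naive scan
    takes O(n log n) per query and is for illustration only.
    """
    crossing = [s for s in segments if s[0] <= i <= s[1]]
    crossing.sort(key=lambda s: s[2])
    if j < 1 or j > len(crossing):
        raise IndexError("fewer than j segments cross x = i")
    return crossing[j - 1]

segs = [(0, 5, 3.0), (2, 8, 1.0), (4, 9, 2.0)]
# all three segments cross x = 4; sorted by y they are 1.0, 2.0, 3.0
assert segment_select(segs, 4, 2) == (4, 9, 2.0)
```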

This survey presents a necessarily incomplete (and biased) overview of results at the intersection of arithmetic circuit complexity, structured matrices and deep learning. Recently there has been some research activity in replacing unstructured weight matrices in neural networks by structured ones (with the aim of reducing the size of the corresponding deep learning models). Most of this work has been experimental and in this survey, we formalize the research question and show how a recent work that combines arithmetic circuit complexity, structured matrices and deep learning essentially answers this question. This survey is targeted at complexity theorists who might enjoy reading about how tools developed in arithmetic circuit complexity helped design (to the best of our knowledge) a new family of structured matrices, which in turn seem well-suited for applications in deep learning. However, we hope that folks primarily interested in deep learning would also appreciate the connections to complexity theory.

Given an undirected graph G on n nodes and m edges in the form of a data stream, we study the problem of finding an Euler tour in G. Our main result is the first one-pass streaming algorithm computing an Euler tour of G in the form of an edge successor function with only $\mathcal{O}(n\log n)$ RAM, which is optimal for this setting (e.g. Sun and Woodruff (2015)). Since the output size can be much larger, we use a write-only tape to gradually output the solution. The previously best-known result for finding Euler tours in data streams is implicitly given by the W-stream algorithm of Demetrescu et al. (2010), which uses $\mathcal{O}(m/n)$ passes under the same RAM limitation. Our approach is to partition the edges into edge-disjoint cycles and to merge the cycles until a single Euler tour is achieved. In the streaming environment such a merging is far from obvious, as the limited RAM allows the processing of only a constant number of cycles at once. This forces us to merge cycles that are partially no longer present in RAM. We solve this problem with a new edge swapping technique, for which storing two designated edges per node is sufficient to merge tours without having all tour edges in RAM. The mathematical key is to model tours and their merging in an algebraic way, where certain equivalence classes represent subtours. This quite general approach might also be of interest in other routing problems.
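The cycle-partition-and-merge idea can be illustrated offline with Hierholzer's classical algorithm, which implicitly decomposes the edge set into cycles and splices them into one tour. This in-RAM sketch is only an illustration; the paper's one-pass streaming algorithm, with its edge swapping technique and $\mathcal{O}(n\log n)$ RAM bound, works quite differently.

```python
from collections import defaultdict

def euler_tour(edges):
    """Hierholzer's algorithm on a connected, even-degree multigraph:
    walk until stuck (closing a cycle), then splice further cycles in
    as the stack unwinds. Returns the tour as a vertex sequence."""
    adj = defaultdict(list)
    for eid, (u, v) in enumerate(edges):
        # store edge ids so each undirected edge is consumed once
        adj[u].append((v, eid))
        adj[v].append((u, eid))
    used = [False] * len(edges)
    stack, tour = [edges[0][0]], []
    while stack:
        v = stack[-1]
        # discard adjacency entries whose edge was traversed already
        while adj[v] and used[adj[v][-1][1]]:
            adj[v].pop()
        if adj[v]:
            w, eid = adj[v].pop()
            used[eid] = True
            stack.append(w)
        else:
            tour.append(stack.pop())
    return tour
```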

This paper studies complexity theoretic aspects of quantum refereed games, which are abstract games between two competing players that send quantum states to a referee, who performs an efficiently implementable joint measurement on the two states to determine which of the players wins. The complexity class QRG(1) contains those decision problems for which one of the players can always win with high probability on yes-instances and the other player can always win with high probability on no-instances, regardless of the opposing player’s strategy. This class trivially contains QMA ∪ co-QMA and is known to be contained in PSPACE. We prove stronger containments on two restricted variants of this class. Specifically, if one of the players is limited to sending a classical (probabilistic) state rather than a quantum state, the resulting complexity class CQRG(1) is contained in ∃⋅PP (the nondeterministic polynomial-time operator applied to PP); while if both players send quantum states but the referee is forced to measure one of the states first, and incorporates the classical outcome of this measurement into a measurement of the second state, the resulting class MQRG(1) is contained in P⋅PP (the unbounded-error probabilistic polynomial-time operator applied to PP).

In this paper, we investigate the complexity of a number of computational problems defined on synchronous boolean finite dynamical systems, where the update functions are chosen from a template set consisting of exclusive-or and its negation. We first show that the reachability and path-intersection problems are solvable in logarithmic-space-uniform $\mathsf{AC}^1$ if the objects execute permutations, while the reachability problem is known to be in P and the path-intersection problem in UP in general. We also explore the case where reachability or intersection is tested on a subset of objects, and show that this increases the complexity of the problems: both become NP-complete, and even ${\Pi}^{p}_{2}$-complete if we further require universality of the intersection. We next consider the exact cycle length problem, that is, determining whether there exists an initial configuration that yields a cycle in the configuration space of exactly a given length, and show that this problem is NP-complete. Lastly, we consider the t-predecessor and t-Garden of Eden problems, and prove that these are solvable in polynomial time even if the value of t is given in binary as part of the instance; the two problems are in logarithmic-space-uniform $\mathsf{NC}^2$ if the value of t is given in unary as part of the instance.
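For concreteness, here is a hypothetical toy simulation of such a system: each object's update is an XOR (optionally negated) of selected objects, configurations are bit tuples, and reachability can always be decided by brute-force walking of the configuration space (at most $2^n$ states). This naive sketch only illustrates the model, not the $\mathsf{AC}^1$ or NP-completeness arguments of the paper.

```python
def step(state, rules):
    """One synchronous update: rules[i] = (inputs, negate) means the
    new value of object i is the XOR of state[j] for j in inputs,
    flipped when negate is True (the exclusive-or / negated
    exclusive-or template set)."""
    return tuple(
        (sum(state[j] for j in inputs) + int(negate)) % 2
        for inputs, negate in rules
    )

def reachable(start, target, rules):
    """Brute-force reachability: follow the deterministic trajectory
    from start until a configuration repeats."""
    seen, state = set(), start
    while state not in seen:
        if state == target:
            return True
        seen.add(state)
        state = step(state, rules)
    return False

# toy system on two objects: x0' = x1, x1' = x0 XOR x1
rules = [((1,), False), ((0, 1), False)]
```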

In k-Digraph Coloring we are given a digraph and are asked to partition its vertices into at most k sets, so that each set induces a DAG. This well-known problem is NP-hard, as it generalizes (undirected) k-Coloring, but becomes trivial if the input digraph is acyclic. This poses the natural parameterized complexity question of what happens when the input is “almost” acyclic. In this paper we study this question using parameters that measure the input’s distance to acyclicity in either the directed or the undirected sense. In the directed sense perhaps the most natural notion of distance to acyclicity is directed feedback vertex set. It is already known that, for all k ≥ 2, k-Digraph Coloring is NP-hard on digraphs of directed feedback vertex set of size at most k + 4. We strengthen this result to show that, for all k ≥ 2, k-Digraph Coloring is already NP-hard for directed feedback vertex set of size exactly k. This immediately provides a dichotomy, as k-Digraph Coloring is trivial if the directed feedback vertex set has size at most k − 1. Refining our reduction, we obtain three further consequences: (i) 2-Digraph Coloring is NP-hard for oriented graphs of directed feedback vertex set of size at most 3; (ii) for all k ≥ 2, k-Digraph Coloring is NP-hard for digraphs of feedback arc set of size at most k²; interestingly, this leads to a second dichotomy, as we show that the problem is FPT by k if the feedback arc set has size at most k² − 1; (iii) k-Digraph Coloring is NP-hard for digraphs of directed feedback vertex set of size k, even if the maximum degree Δ is at most 4k − 1; we show that this is also almost tight, as the problem becomes FPT for digraphs of directed feedback vertex set of size k and Δ ≤ 4k − 3. Since these results imply that the problem is also NP-hard on digraphs of bounded directed treewidth, we then consider parameters that measure the distance from acyclicity of the underlying graph.
On the positive side, we show that k-Digraph Coloring admits an FPT algorithm parameterized by treewidth, whose parameter dependence is $(\mathrm{tw}!)\cdot k^{\mathrm{tw}}$. Since this is considerably worse than the $k^{\mathrm{tw}}$ dependence of (undirected) k-Coloring, we pose the question of whether the $\mathrm{tw}!$ factor can be eliminated. Our main contribution in this part is to settle this question in the negative and show that our algorithm is essentially optimal, even for the much more restricted parameter treedepth and for k = 2. Specifically, we show that an FPT algorithm solving 2-Digraph Coloring with dependence $\mathrm{td}^{o(\mathrm{td})}$ would contradict the ETH. Then, we consider the class of tournaments. It is known that deciding whether a tournament is 2-colorable is NP-complete. We present an algorithm that decides if a tournament is 2-colorable in $O^{*}(\sqrt[3]{6}^{\,n})$ time. Finally, we explain how this algorithm can be modified to decide if a tournament is k-colorable.

It is proved that every LL(k)-linear grammar can be transformed to an equivalent LL(1)-linear grammar. The transformation incurs a blow-up in the number of nonterminal symbols by a factor of $m^{2k-O(1)}$, where m is the size of the alphabet. A close lower bound is established: for certain LL(k)-linear grammars with n nonterminal symbols, every equivalent LL(1)-linear grammar must have at least $n \cdot (m-1)^{2k-O(\log k)}$ nonterminal symbols.

We study the classic load balancing problem on dynamic general graphs, where the graph changes arbitrarily between the computational rounds, remaining connected with no permanent cut. A lower bound of $\Omega(n^2)$ on the running time in the dynamic setting, where n is the number of nodes in the graph, is known even for randomized algorithms. We solve the problem by deterministic distributed algorithms, based on a short local deal-agreement communication of proposal/deal in the neighborhood of each node. Our synchronous load balancing algorithms achieve a discrepancy of $\epsilon$ within time $O(nD\log(nK/\epsilon))$ in the continuous setting, and a discrepancy of at most 2D within time $O(nD\log(nK/D))$ together with a 1-balanced state within additional time $O(nD^2)$ in the discrete setting, where K is the initial discrepancy and D is a bound on the graph diameter. We also study the stability of the achieved 1-balanced state. The above results are extended to the case of unbounded diameter, essentially keeping the time bounds, via a special averaging of the graph diameter over time. Our algorithms can be considered anytime ones, in the sense that they can be stopped at any time during the execution, since they never make loads negative and never worsen the state as the execution progresses. In addition, we describe a version of our algorithms where each node may transfer load to and from several neighbors in each round, as a heuristic for better performance. The algorithms are generalized to the asynchronous distributed model. We also introduce a self-stabilizing version of our asynchronous algorithms.
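As a rough illustration of proposal/deal-style balancing, the following hypothetical round rule has each node offer half of its surplus over its poorest neighbor; with nonnegative loads it never makes a load negative and conserves the total. This generic diffusion-flavored sketch is not the paper's deal-agreement protocol and makes no claim to its $O(nD\log(nK/\epsilon))$ bound.

```python
def balance_round(load, neighbors):
    """One synchronous round: node v offers (load[v] - load[p]) / 2 to
    its poorest neighbor p. Since the deal is at most load[v] / 2, loads
    stay nonnegative; transfers cancel pairwise, so the sum is conserved."""
    transfer = [0.0] * len(load)
    for v, nbrs in enumerate(neighbors):
        if not nbrs:
            continue
        poorest = min(nbrs, key=lambda u: load[u])
        if load[poorest] < load[v]:
            deal = (load[v] - load[poorest]) / 2
            transfer[v] -= deal
            transfer[poorest] += deal
    return [l + t for l, t in zip(load, transfer)]

# path graph 0-1-2: discrepancy shrinks from 4 to 2 in one round
print(balance_round([4.0, 2.0, 0.0], [[1], [0, 2], [1]]))
```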

Computing the directed path-width of a directed graph is an NP-hard problem. Even for digraphs of maximum semi-degree 3 the problem remains hard. We propose a decomposition of an input digraph G = (V,A) into a number k of sequences with entries from V, such that (u,v) ∈ A if and only if in one of the sequences there is an occurrence of u appearing before an occurrence of v. We present several graph-theoretical properties of these digraphs. Among these, we give forbidden subdigraphs of the digraphs which can be defined by k = 1 sequence, a subclass of the semicomplete digraphs. Given such a decomposition of a digraph G, we show an algorithm which computes the directed path-width of G in time $\mathcal{O}(k\cdot (1+N)^{k})$, where N denotes the maximum sequence length. This leads to an XP-algorithm w.r.t. k for the directed path-width problem. Our result improves the algorithms of Kitsunai et al. for digraphs of large directed path-width which can be decomposed by a small number of sequences, and confirms their conjecture that semicompleteness is a useful restriction when considering digraphs.

Every univariate rational series over an algebraically closed field is shown to be realised by some polynomially ambiguous unary weighted automaton. Unary weighted automata over algebraically closed fields thus always admit polynomially ambiguous equivalents. On the other hand, it is shown that this property does not hold over any other field of characteristic zero, generalising a recent observation about unary weighted automata over the field of rational numbers.

We continue the program of proving circuit lower bounds via circuit satisfiability algorithms. So far, this program has yielded several concrete results, proving that functions in $\mathsf{Quasi}\text{-}\mathsf{NP} = \mathsf{NTIME}[n^{(\log n)^{O(1)}}]$ and other complexity classes do not have small circuits (in the worst case and/or on average) from various circuit classes $\mathcal{C}$, by showing that $\mathcal{C}$ admits non-trivial satisfiability and/or #SAT algorithms which beat exhaustive search by a minor amount. In this paper, we present a new strong lower-bound consequence of having a non-trivial #SAT algorithm for a circuit class $\mathcal{C}$. Say that a symmetric Boolean function $f(x_1,\dots,x_n)$ is sparse if it outputs 1 on O(1) values of $\sum_{i} x_{i}$. We show that for every sparse f, and for all “typical” $\mathcal{C}$, faster #SAT algorithms for $\mathcal{C}$ circuits imply lower bounds against the circuit class $f \circ \mathcal{C}$, which may be stronger than $\mathcal{C}$ itself. In particular: #SAT algorithms for $n^k$-size $\mathcal{C}$-circuits running in $2^n/n^k$ time (for all k) imply that NEXP does not have $(f \circ \mathcal{C})$-circuits of polynomial size; #SAT algorithms for $2^{n^{\varepsilon}}$-size $\mathcal{C}$-circuits running in $2^{n-n^{\varepsilon}}$ time (for some ε > 0) imply that Quasi-NP does not have $(f \circ \mathcal{C})$-circuits of polynomial size. Applying #SAT algorithms from the literature, one immediate corollary of our results is that Quasi-NP does not have $\mathsf{EMAJ} \circ \mathsf{ACC}^0 \circ \mathsf{THR}$ circuits of polynomial size, where EMAJ is the “exact majority” function, improving previous lower bounds against $\mathsf{ACC}^0$ [Williams JACM’14] and $\mathsf{ACC}^0 \circ \mathsf{THR}$ [Williams STOC’14], [Murray-Williams STOC’18]. This is the first nontrivial lower bound against such a circuit class.

A discrete Gaussian distribution over the integers is a Gaussian distribution restricted so that its support is the set of all integers. This paper studies the problem of sampling exactly from discrete Gaussian distributions over the integers: integers must be generated according to a given discrete Gaussian distribution without any statistical discrepancy. In 2016, Karney proposed an exact sampling algorithm for discrete Gaussian distributions whose parameters are rational numbers. This algorithm uses rejection sampling, and it is a discretization of his algorithm for sampling exactly from the standard normal distribution. In this paper, we give a rigorous and complete analysis of the rejection rate of this algorithm, which was not given by Karney, and show that it cannot generate integers efficiently when the standard deviation of the distribution is very small (e.g. much smaller than 1/2). We then present an alternative algorithm for this special case, which can sample exactly and efficiently from discrete Gaussian distributions with very small standard deviations.
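To make the setting concrete, here is a hypothetical floating-point sampler for a discrete Gaussian with given mean and standard deviation, obtained by explicitly normalizing weights over a truncated support. Unlike Karney's algorithm, this sketch is not exact: truncation and floating-point rounding introduce precisely the kind of statistical discrepancy that exact sampling rules out.

```python
import math
import random

def discrete_gaussian(mu, sigma, tail=12, rng=None):
    """Sample an integer k with probability proportional to
    exp(-(k - mu)^2 / (2 sigma^2)), restricting to the truncated
    support |k - mu| <= tail * sigma + 1. Seeded by default so the
    sketch is reproducible; NOT an exact sampler."""
    rng = rng or random.Random(0)
    lo = math.floor(mu - tail * sigma) - 1
    hi = math.ceil(mu + tail * sigma) + 1
    ks = list(range(lo, hi + 1))
    weights = [math.exp(-((k - mu) ** 2) / (2 * sigma ** 2)) for k in ks]
    return rng.choices(ks, weights=weights)[0]
```

With a very small standard deviation (e.g. `sigma = 0.1`), nearly all of the mass sits on the integer closest to `mu`, which is the regime where the abstract shows naive rejection becomes inefficient.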

A c-coloring of the grid $G_{N,M} = [N] \times [M]$ is a mapping of $G_{N,M}$ into [c] such that no four corners forming a rectangle have the same color. In 2009 a challenge was proposed to find a 4-coloring of $G_{17,17}$. Though a coloring was eventually produced, finding it proved difficult. This raises the question of whether there is some complexity lower bound. Consider the following problem: given a partial c-coloring of the $G_{N,M}$ grid, can it be extended to a full c-coloring? We show that this problem is NP-complete. We also give a fixed-parameter tractable algorithm for this problem with parameter c.
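The rectangle-freeness condition is easy to verify mechanically: for each pair of rows, a color may appear at most once among the columns on which the two rows agree. A small hypothetical checker (this is only the verification step, and says nothing about the hardness of extending a partial coloring):

```python
from itertools import combinations

def is_rectangle_free(coloring):
    """Check that no four cells (r1,c1), (r1,c2), (r2,c1), (r2,c2)
    share one color. coloring[r][c] is the color of cell (r, c).
    For each pair of rows, record the colors at columns where the two
    rows agree; a repeated color means a monochromatic rectangle."""
    for r1, r2 in combinations(range(len(coloring)), 2):
        agree = set()
        for c in range(len(coloring[0])):
            if coloring[r1][c] == coloring[r2][c]:
                if coloring[r1][c] in agree:
                    return False
                agree.add(coloring[r1][c])
    return True
```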

In the NP-hard Colored (s,t)-Cut problem, the input is a graph G = (V,E) together with an edge-coloring ℓ : E → C, two vertices s and t, and a number k. The question is whether there is a set $S \subseteq C$ of at most k colors such that deleting every edge with a color from S destroys all paths between s and t in G. We continue the study of the parameterized complexity of Colored (s,t)-Cut. First, we consider parameters related to the structure of G. For example, we study parameterization by the number $\xi_i$ of edge deletions that are needed to transform G into a graph with maximum degree i. We show that Colored (s,t)-Cut is W[2]-hard when parameterized by $\xi_3$, but fixed-parameter tractable when parameterized by $\xi_2$. Second, we consider parameters related to the coloring ℓ. We show fixed-parameter tractability for three parameters that are potentially smaller than the total number of colors |C| and provide a linear-size problem kernel for a parameter related to the number of edges with rare edge colors.
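Verifying a candidate color set is straightforward; only choosing it is hard. A hypothetical checker for the question posed above (delete every edge whose color lies in S, then test s-t connectivity by BFS):

```python
from collections import deque

def destroys_all_paths(n, edges, colors_removed, s, t):
    """Return True iff deleting every edge whose color is in
    colors_removed disconnects s from t. edges is a list of
    (u, v, color) triples on vertices 0..n-1."""
    adj = [[] for _ in range(n)]
    for u, v, col in edges:
        if col not in colors_removed:
            adj[u].append(v)
            adj[v].append(u)
    seen, queue = {s}, deque([s])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return t not in seen
```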

Diffusion is a natural phenomenon in many real-world networks: the spreading of ideas and rumors in an online social network, the propagation of viruses and malware in a computer network, and the spreading of diseases in a human contact network are some real-world examples. Diffusion often starts from a set of initial nodes known as seed nodes. A node can be in one of two states: influenced (active) or not influenced (inactive). We assume that a node can change its state from inactive to active, but not vice versa. Only the seed nodes are active initially, and information is disseminated from these seed nodes in discrete time steps. Each node v is associated with a threshold value τ(v), which is a positive integer. A node v becomes influenced at time step $t^{\prime}$ if at least τ(v) nodes in its neighborhood have been activated on or before time step $t^{\prime}-1$. The diffusion stops when no more node activation is possible. Given a simple, undirected graph G with a threshold function $\tau: V(G) \rightarrow \mathbb{N}$, the Target Set Selection (TSS) problem asks for a minimum-cardinality set $S \subseteq V(G)$ such that starting a diffusion process with S as its seed set eventually results in activating all the nodes of G. For any non-negative integer i, we say a set $T \subseteq V(G)$ is a degree-i modulator of G if the degree of every vertex in the graph G − T is at most i. Degree-0 modulators of a graph are precisely its vertex covers. Consider a graph G on n vertices and m edges. We have the following results on the TSS problem. It was shown by Nichterlein et al. (Soc. Netw. Anal. Min. 3(2), 233–256) that it is possible to compute an optimal-sized target set in $O(2^{(2^{t}+1)t} \cdot m)$ time, where t denotes the cardinality of a minimum degree-0 modulator of G.
We improve this result by designing an algorithm running in time $2^{O(t\log t)}n$. We design a $2^{2^{O(t)}}n^{O(1)}$-time algorithm to compute an optimal target set for G, where t is the size of a minimum degree-1 modulator of G. We show that for a graph on n vertices of treewidth s, the TSS problem cannot be solved in $f(s)n^{o\left(\frac{s}{\log s}\right)}$ time unless the Exponential Time Hypothesis fails. This is an improvement over the previously known lower bound of $f(s)n^{o(\sqrt{s})}$ due to Ben-Zwi et al. (Discret. Optim. 8(1), 87–96). In fact, we prove that the same lower bound holds when parameterized by tree-depth or star-deletion number.
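The diffusion process underlying TSS can be sketched directly from the definition: repeatedly activate any inactive node with at least τ(v) active neighbors, and check whether everything activates. This hypothetical simulator only verifies a candidate seed set; finding a minimum one is the hard part the abstract's algorithms address.

```python
def diffuses_to_all(adj, tau, seeds):
    """Simulate monotone threshold diffusion: node v activates once at
    least tau[v] of its neighbors are active, and never deactivates.
    Returns True iff the seed set eventually activates every node,
    i.e. seeds is a valid (not necessarily minimum) target set."""
    active = set(seeds)
    changed = True
    while changed:
        changed = False
        for v in range(len(adj)):
            if v not in active:
                if sum(1 for u in adj[v] if u in active) >= tau[v]:
                    active.add(v)
                    changed = True
    return len(active) == len(adj)
```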

We initiate the study of effective pointwise ergodic theorems in resource-bounded settings. Classically, the convergence of the ergodic averages for integrable functions can be arbitrarily slow (Krengel, Monatshefte für Mathematik 86, 3–6, 1978). In contrast, we show that for a class of PSPACE $L^1$ functions, and a class of PSPACE-computable measure-preserving ergodic transformations, the ergodic average exists and is equal to the space average on every EXP random. We establish a partial converse: PSPACE non-randomness can be characterized as non-convergence of ergodic averages. Further, we prove that there is a class of resource-bounded randoms, viz. SUBEXP-space randoms, on which the corresponding ergodic theorem has an exact converse: a point x is SUBEXP-space random if and only if the corresponding effective ergodic theorem holds for x.

In this paper we study decision tree models with various types of queries. For a given function it is usually not hard to determine the complexity in the standard decision tree model (each query evaluates a variable). However, in more general settings showing tight lower bounds is substantially harder. Threshold functions often have non-trivial complexity in such models and can be used to provide interesting examples. Standard decision trees can be viewed as a computational model in which each query depends on only one input bit. In the first part of the paper we consider a natural generalization of the standard decision tree model: decision trees that are allowed to query any function depending on two input bits. We show the first lower bound of the form n − o(n) for an explicit function (namely, the majority function) in this model. We also show that in the decision tree model with AND and OR queries of arbitrary fan-in, the complexity of the majority function is n − 1. In the second part of the paper we address parity decision trees, which are allowed to query arbitrary parities of input bits. There are various lower bound techniques for parity decision tree complexity, including analytical techniques (degree over $\mathbb{F}_{2}$, Fourier sparsity, granularity) and combinatorial techniques (generalizations of block sensitivity and certificate complexity). These techniques give tight lower bounds for many natural functions. We give a new inductive argument tailored specifically for threshold functions. A combination of this argument with the granularity lower bound allows us to provide a simple example of a function for which none of the previously known lower bounds is tight.

We prove three results on the dimension structure of complexity classes. (1) The Point-to-Set Principle, which has recently been used to prove several new theorems in fractal geometry, has resource-bounded instances. These instances characterize the resource-bounded dimension of a set X of languages in terms of the relativized resource-bounded dimensions of the individual elements of X, provided that the former resource bound is large enough to parametrize the latter. Thus for example, the dimension of a class X of languages in EXP is characterized in terms of the relativized p-dimensions of the individual elements of X. (2) Every language that is $${\leq ^{P}_{m}}$$-reducible to a p-selective set has p-dimension 0, and this fact holds relative to arbitrary oracles. Combined with a resource-bounded instance of the Point-to-Set Principle, this implies that if NP has positive dimension in EXP, then no quasipolynomial time selective language is $${\leq ^{P}_{m}}$$-hard for NP. (3) If the set of all disjoint pairs of NP languages has dimension 1 in the set of all disjoint pairs of EXP languages, then NP has positive dimension in EXP.

We consider 2-player, 2-value cost minimization games where the players’ costs take on two values, a,b, with a < b. The players play mixed strategies and their costs are evaluated by semistrictly quasiconcave cost functions representable by strictly quasiconcave, one-parameter functions $$\mathsf {F}: [0, 1] \rightarrow \mathbb {R}$$. Our main result is an impossibility result stating that: If the maximum of F is obtained in (0,1) and $$\mathsf {F} \left (\frac {1}{2}\right )\ne b$$, then there exists a 2-player, 2-value game without F-equilibrium. The counterexample game used for the impossibility result belongs to a new class of very sparse 2-player, 2-value bimatrix games which we call simple games. In an attempt to investigate the remaining case $$\mathsf {F}\left (\frac {1}{2}\right ) = b$$, we show that: Every simple, n-strategy game has an F-equilibrium when $$\mathsf {F} \left (\frac {1}{2}\right ) = b$$. We present a linear time algorithm for computing such an equilibrium. For 2-player, 2-value, 3-strategy games, we have that if $$\mathsf {F} \left (\frac {1}{2}\right ) \le b$$, then every 2-player, 2-value, 3-strategy game has an F-equilibrium; if $$\mathsf {F} \left (\frac {1}{2}\right ) > b$$, then there exists a simple 2-player, 2-value, 3-strategy game without F-equilibrium. To the best of our knowledge, this work is the first to provide an (almost complete) answer to whether there is, for a given function F, a counterexample game without F-equilibrium.

The avoidability, or unavoidability, of patterns in words over finite alphabets has been studied extensively. The word α over a finite set A is said to be unavoidable for an infinite set B⁺ of nonempty words over a finite set B if, for all but finitely many elements w of B⁺, there exists a semigroup morphism $$\phi : A^{+}\rightarrow B^{+}$$ such that ϕ(α) is a factor of w. We discuss unavoidability in the milieu of various types of complexity. For words that are unavoidable, we provide a constructive upper bound on the lengths of words that can avoid them. We then discuss the relative density of unavoidable words. Subsequently, we investigate computational aspects of unavoidable words, focusing on the computational complexity of determining whether a word is unavoidable. This culminates in a proof that this problem is NP-complete.

We study a group-formation game on an undirected complete graph G with all edge-weights in a set $$\mathcal {W} \subseteq \mathbb {R} \cup \{-\infty \}$$. This work is motivated by a recent information-sharing model for social networks (Kleinberg and Ligett, Games Econ. Behav. 82, 702–716, 2013). Specifically, we consider partitions of the vertex-set of G into groups. The individual utility of any vertex v is the sum of the weights on the edges uv between v and the other vertices u in her group. – Informally, u and v represent social users, and the weight of uv quantifies the extent to which u and v (dis)agree on some fixed topic. – For a fixed integer k ≥ 1, a k-stable partition is a partition in which no coalition of at most k vertices would increase their respective utilities by leaving their groups to join or create another common group. Before our work, it was known that such a partition always exists if k = 1 (Burani and Zwicker, Math. Soc. Sci. 45(1), 27–52, 2003). We focus on the regime k ≥ 2. Our first result is that when all the social users are either friends, enemies or indifferent to each other (i.e., $$\mathcal {W} = \{-\infty ,0,1\}$$), a partition as above always exists if k ≤ 2, but it may not exist if k ≥ 3. This is in sharp contrast with (Kleinberg and Ligett, Games Econ. Behav. 82, 702–716, 2013), who proved that k-stable partitions always exist, for any k, if $$\mathcal {W} = \{-\infty ,1\}$$. We further study the intriguing relationship between the existence of k-stable partitions and the allowed set of edge-weights $$\mathcal {W}$$. Specifically, we give sufficient conditions for the existence or the non-existence of such partitions based on tools from graph theory. Doing so, we obtain for most sets $$\mathcal {W}$$ the largest $$k(\mathcal {W})$$ such that all graphs with edge-weights in $$\mathcal {W}$$ admit a $$k(\mathcal {W})$$-stable partition. From the computational point of view, we prove that for any $$\mathcal {W}$$ containing $$-\infty$$, the problem of deciding whether a k-stable partition exists is NP-complete for any $$k > k(\mathcal {W})$$. Our work hints that the emergence of stable communities in a social network requires a trade-off between the level of collusion between social users and the diversity of their opinions.

Let ([n],d) be a metric space with diameter Δ and with average distance $$\bar {r}$$, where [n] ≡{1,2,…,n}. We show that if $${\Delta }=o(n\bar {r})$$, then $${\sum }_{i=1}^{\lfloor n/2\rfloor } d(\boldsymbol {\pi }(2i-1),\boldsymbol {\pi }(2i))$$ is $$(1/2\pm o(1))n\bar {r}$$ with probability 1 − o(1) over a uniformly random permutation π: [n] → [n]. In particular, a uniformly random perfect matching in ([n],d) has size concentrating around $$n\bar {r}/2$$ when n is even and $${\Delta }=o(n\bar {r})$$. Our result has implications for finding maximum travelling salesman tours in metric spaces.
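A quick numerical illustration of the concentration claim above (this simulation is our own, not from the paper: the point set, metric, and sample sizes are arbitrary choices; points in the unit square with Euclidean distance give Δ = O(1) while $$n\bar {r} = {\varTheta }(n)$$, so the hypothesis $${\Delta }=o(n\bar {r})$$ holds):

```python
import math
import random

# Illustrative metric space: n random points in the unit square under
# the Euclidean metric, so the diameter is at most sqrt(2) while the
# average distance is a constant, i.e. Delta = o(n * r_bar).
random.seed(0)
n = 2000
pts = [(random.random(), random.random()) for _ in range(n)]

def d(i, j):
    return math.dist(pts[i], pts[j])

# Estimate the average distance r_bar by sampling pairs i != j.
samples = [(random.randrange(n), random.randrange(n)) for _ in range(200_000)]
good = [(i, j) for i, j in samples if i != j]
r_bar = sum(d(i, j) for i, j in good) / len(good)

# A uniformly random perfect matching: shuffle and pair consecutive points.
perm = list(range(n))
random.shuffle(perm)
matching_sum = sum(d(perm[2 * i], perm[2 * i + 1]) for i in range(n // 2))

# The theorem predicts matching_sum ~ n * r_bar / 2 with high probability.
ratio = matching_sum / (n * r_bar / 2)
print(round(ratio, 2))  # close to 1.0 for this instance
```

The point of the sketch is only that the matching size concentrates around $$n\bar {r}/2$$; it does not reproduce the paper's proof or its TSP application.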

Motivated by recent research on combinatorial markets with endowed valuations (Babaioff et al., EC 2018; Ezra et al., EC 2020), we introduce a notion of perturbation stability in Combinatorial Auctions (CAs) and study the extent to which stability helps in social welfare maximization and mechanism design. A CA is γ-stable if the optimal solution is resilient to inflation, by a factor of γ ≥ 1, of any bidder’s valuation for any single item. On the positive side, we show how to compute efficiently an optimal allocation for 2-stable subadditive valuations and that a Walrasian equilibrium exists for 2-stable submodular valuations. Moreover, we show that a Parallel 2nd Price Auction (P2A) followed by a demand query for each bidder is truthful for general subadditive valuations and results in the optimal allocation for 2-stable submodular valuations. To highlight the challenges behind optimization and mechanism design for stable CAs, we show that a Walrasian equilibrium may not exist for γ-stable XOS valuations for any γ, that a polynomial-time approximation scheme does not exist for (2 − ε)-stable submodular valuations, and that any DSIC mechanism that computes the optimal allocation for stable CAs and does not use demand queries must use exponentially many value queries. We conclude with analyzing the Price of Anarchy of P2A and Parallel 1st Price Auctions (P1A) for CAs with stable submodular and XOS valuations. Our results indicate that the quality of equilibria of simple non-truthful auctions improves only for γ-stable instances with γ ≥ 3.

Given a graph class $${\mathscr{H}}$$, the task of the $${\mathscr{H}}$$-Square Root problem is to decide whether an input graph G has a square root H from $${\mathscr{H}}$$. We are interested in the parameterized complexity of the problem for classes $${\mathscr{H}}$$ that are composed by the graphs at vertex deletion distance at most k from graphs of maximum degree at most one, that is, we are looking for a square root H such that there is a modulator S of size k such that H − S is the disjoint union of isolated vertices and disjoint edges. We show that different variants of the problems with constraints on the number of isolated vertices and edges in H − S are FPT when parameterized by k by demonstrating algorithms with running time $$2^{2^{\mathcal {O}(k)}}\cdot n^{5}$$. We further show that the running time of our algorithms is asymptotically optimal and it is unlikely that the double-exponential dependence on k could be avoided. In particular, we prove that the VC-kRoot problem, which asks whether an input graph has a square root with vertex cover of size at most k, cannot be solved in time $$2^{2^{o(k)}}\cdot n^{\mathcal {O}(1)}$$ unless the Exponential Time Hypothesis fails. Moreover, we point out that VC-kRoot parameterized by k does not admit a subexponential kernel unless P = NP.

Nonuniformity is a central concept in computational complexity with powerful connections to circuit complexity and randomness. Nonuniform reductions have been used to study the isomorphism conjecture for NP and completeness for larger complexity classes. We study the power of nonuniform reductions for NP-completeness, obtaining both separations and upper bounds for nonuniform completeness vs uniform completeness in NP. Under various hypotheses, we obtain the following separations: There is a set complete for NP under nonuniform many-one reductions, but not under uniform many-one reductions. This is true even with a single bit of nonuniform advice. There is a set complete for NP under nonuniform many-one reductions with polynomial-size advice, but not under uniform Turing reductions. That is, polynomial nonuniformity cannot be replaced by a polynomial number of queries. For any fixed polynomial p(n), there is a set complete for NP under uniform 2-truth-table reductions, but not under nonuniform many-one reductions that use p(n) advice. That is, giving a uniform reduction a second query makes it impossible to simulate by a nonuniform reduction with fixed polynomial advice. There is a set complete for NP under nonuniform many-one reductions with polynomial advice, but not under nonuniform many-one reductions with logarithmic advice. This hierarchy theorem also holds for other reducibilities, such as truth-table and Turing. We also consider uniform upper bounds on nonuniform completeness. Hirahara (2015) showed that unconditionally every set that is complete for NP under nonuniform truth-table reductions that use logarithmic advice is also uniformly Turing-complete. We show that under a derandomization hypothesis, every set that is complete for NP under nonuniform truth-table reductions is also uniformly truth-table complete.

For any positive number k and for any hypergraph H with vertex set V(H) and edge set $$\mathrm {E}(H)\subseteq 2^{\mathrm {V}(H)}$$, we call $$U\subseteq \mathrm {V}(H)$$ a k-antimatching of H if for every matching $$F\subseteq \mathrm {E}(H)$$ it holds that rank A[U,F] ≤ k, where A is the V(H) × E(H) (0,1)-matrix whose (v,e)-entry is 1 if and only if v ∈ e. Consider a finite poset P with a unique maximal element and having a rooted tree as its Hasse diagram. Let H be the hypergraph with V(H) = P and with E(H) being the set of all down-sets of P. Let μ be a submodular function defined on $$2^{\mathrm {V}(H)}$$ such that μ(V(H)) ≥ d + (ℓ − 1)c for a positive integer ℓ and two nonnegative reals d and c. For any nonnegative reals d1,…,dℓ with $${\sum }_{i=1}^{\ell } d_{i}=d$$, we show that either there is a matching {D1,…,Dℓ} of H with μ(Di) ≥ di for all i, or there is a 1-antimatching C of H such that μ(C) ≥ c.
We establish a countable version of this result by assuming further that μ satisfies the weak Fatou property and reverse Fatou property. We propose a conjecture on a possible extension of our result from 1-antimatching to general k-antimatchings.

Static program analysis is in general more precise if it is sensitive to execution contexts (execution paths). But then it is also more expensive in terms of memory consumption. For languages with conditions and iterations, the number of contexts grows exponentially with the program size. This problem is not just a theoretical issue. Several papers evaluating inter-procedural context-sensitive data-flow analysis report severe memory problems, and the path-explosion problem is a major issue in program verification and model checking. In this paper we propose χ-terms as a means to capture and manipulate context-sensitive program information in a data-flow analysis. χ-terms are implemented as directed acyclic graphs without any redundant subgraphs. We introduce the k-approximation and the l-loop-approximation that limit the size of the context-sensitive information at the cost of analysis precision. We prove that every context-insensitive data-flow analysis has a corresponding (k,l)-approximated context-sensitive analysis, and that these analyses are sound and guaranteed to reach a fixed point. We also present detailed algorithms outlining a compact, redundancy-free, and DAG-based implementation of χ-terms.

We study the complexity of the Distributed Constraint Satisfaction Problem (DCSP) on a synchronous, anonymous network from a theoretical standpoint. In this setting, variables and constraints are controlled by agents which communicate with each other by sending messages through fixed communication channels. Our results endorse the well-known fact from classical CSPs that the complexity of fixed-template computational problems depends on the template’s invariance under certain operations. Specifically, we show that DCSP(Γ) is polynomial-time tractable if and only if Γ is invariant under symmetric polymorphisms of all arities. Otherwise, there are no algorithms that solve DCSP(Γ) in finite time. We also show that the same condition holds for the search variant of DCSP. Collaterally, our results unveil a feature of the processes’ neighbourhood in a distributed network, its iterated degree, which plays a major role in the analysis. We explore this notion establishing a tight connection with the basic linear programming relaxation of a CSP.

The study of the complexity of the equation satisfiability problem in finite groups was initiated by Goldmann and Russell (Inf. Comput. 178(1), 253–262, 2002), who showed that this problem is in P for nilpotent groups while it is NP-complete for non-solvable groups. Since then, several results have appeared showing that the problem can be solved in polynomial time in certain solvable groups G having a nilpotent normal subgroup H with nilpotent factor G/H. This paper shows that such a normal subgroup must exist in each finite group with equation satisfiability solvable in polynomial time, unless the Exponential Time Hypothesis fails.

An essential problem in the design of holographic algorithms is to decide whether the required signatures can be realized by matchgates under a suitable basis transformation. For domain size two, Cai and Choudhary (2007, 2009) characterized all functions directly realizable as matchgate signatures without a basis transformation, and Cai and Lu (Theory Comput. Syst. 46(3), 398–415, 2010; J. Comput. Syst. Sci. 77, 41–61, 2011) gave a polynomial time algorithm for the realizability problem for symmetric signatures under basis transformations. We generalize this theory to arbitrary domain size k. Specifically, we give a polynomial time algorithm for the Simultaneous Realizability Problem on domain size k ≥ 3 for signatures realizable by matchgates over a basis of size 1. Using this, one can decide whether suitable signatures for holographic algorithms on domain size k are realizable and, if so, find a suitable linear basis to realize these signatures by an efficient algorithm. We dedicate this paper to the memory of Alan L. Selman, a close personal friend and a former colleague of the second author, and also a former Editor-in-Chief for this journal. The results in this paper are a demonstration of the structure and unity of complexity theory. As this is a persistent theme of Professor Selman in his multifaceted and significant contributions to our field, we dedicate it to Alan as a tribute to his life and his influential work.

We consider the cons-free programming language of Neil Jones, a simple pure functional language, which decides exactly the polynomial-time relations and whose tail recursive fragment decides exactly the logarithmic-space relations. We exhibit a close relationship between the running time of cons-free programs and the running time of logspace-bounded auxiliary pushdown automata. As a consequence, we characterize intermediate classes like NC in terms of resource-bounded cons-free computation. In so doing, we provide the first “machine-free” characterizations of certain complexity classes, like P-uniform NC. Furthermore, we show strong polynomial lower bounds on cons-free running time. Namely, for every polynomial p, we exhibit a relation R ∈Ptime such that any cons-free program deciding R must take time at least p almost everywhere. Our methods use a “subrecursive version” of Blum complexity theory, and raise the possibility of further applications of this technology to the study of the fine structure of Ptime.

Traditionally, finite automata theory has been used as a framework for the representation of possibly infinite sets of strings. In this work, we introduce the notion of second-order finite automata, a formalism that combines finite automata with ordered decision diagrams, with the aim of representing possibly infinite sets of sets of strings. Our main result states that second-order finite automata can be canonized with respect to the second-order languages they represent. Using this canonization result, we show that sets of sets of strings represented by second-order finite automata are closed under the usual Boolean operations, such as union, intersection, difference and even under a suitable notion of complementation. Additionally, emptiness of intersection and inclusion are decidable. We provide two algorithmic applications for second-order automata. First, we show that several width/size minimization problems for deterministic and nondeterministic ODDs are solvable in fixed-parameter tractable time when parameterized by the width of the input ODD. In particular, our results imply FPT algorithms for corresponding width/size minimization problems for ordered binary decision diagrams (OBDDs) with a fixed variable ordering. Previously, only algorithms that take exponential time in the size of the input OBDD were known for width minimization, even for OBDDs of constant width. Second, we show that for each k and w one can count the number of distinct functions computable by ODDs of width at most w and length k in time $$h(|{\Sigma }|,w) \cdot k^{O(1)}$$, for a suitable $$h:\mathbb {N}\times \mathbb {N}\rightarrow \mathbb {N}$$. This improves exponentially on the time necessary to explicitly enumerate all such functions, which is exponential in both the width parameter w and in the length k of the ODDs.

The declining price anomaly states that the price weakly decreases when multiple copies of an item are sold sequentially over time. The anomaly has been observed in a plethora of practical applications. On the theoretical side, Gale and Stegeman (Games and Economic Behavior, 36(1), 74–103, 2001) proved that the anomaly is guaranteed to hold in full-information sequential auctions with exactly two buyers when one item is sold in each time period. We prove that the declining price anomaly is not guaranteed in full-information sequential auctions with three or more buyers. This result applies to both first-price and second-price sequential auctions. Moreover, it applies regardless of the tie-breaking rule used to generate equilibria in these sequential auctions. To prove this result we provide a refined treatment of subgame perfect equilibria that survive the iterative deletion of weakly dominated strategies and use this framework to experimentally generate a very large number of random sequential auction instances. In particular, our experiments produce an instance with three bidders and eight items that, for a specific tie-breaking rule, induces a non-monotonic price trajectory. Theoretical analyses are then applied to show that this instance can be used to prove that for every possible tie-breaking rule there is a sequential auction on which it induces a non-monotonic price trajectory. On the other hand, our experiments show that non-monotonic price trajectories are extremely rare. In over eighteen million experiments only a 0.000183 proportion of the instances violated the declining price anomaly.

We study the problem of finding maximum weakly stable matchings when preference lists are incomplete and contain one-sided ties of bounded length. We show that if the tie length is at most L, then it is possible to achieve an approximation ratio of $$1 + (1 - \frac {1}{L})^{L}$$. We also show that the same ratio is an upper bound on the integrality gap, which matches the known lower bound. In the case where the tie length is at most 2, our result implies an approximation ratio and integrality gap of $$\frac {5}{4}$$, which matches the known UG-hardness result.
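As a small sanity check of the ratio quoted above (our own evaluation, not from the paper), one can tabulate $$1 + (1 - \frac {1}{L})^{L}$$ for small tie lengths L; at L = 2 it is exactly 5/4, and as L grows it increases toward the limit 1 + 1/e ≈ 1.368:

```python
import math

# Evaluate the approximation ratio 1 + (1 - 1/L)^L for small tie lengths L.
for L in range(2, 7):
    ratio = 1 + (1 - 1 / L) ** L
    print(L, round(ratio, 4))  # L = 2 gives 1.25, i.e. the 5/4 bound

# The sequence is increasing in L and bounded above by 1 + 1/e.
limit = 1 + 1 / math.e
```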

We study the computational complexity of decision problems about Nash equilibria in m-player games. Several such problems have recently been shown to be computationally equivalent to the decision problem for the existential theory of the reals, or stated in terms of complexity classes, $${\exists {\mathbb {R}}}$$-complete, when m ≥ 3. We show that, unless they turn into trivial problems, they are $${\exists {\mathbb {R}}}$$-hard even for 3-player zero-sum games. We also obtain new results about several other decision problems. We show that when m ≥ 3 the problems of deciding if a game has a Pareto optimal Nash equilibrium or deciding if a game has a strong Nash equilibrium are $${\exists {\mathbb {R}}}$$-complete. The latter result rectifies a previous claim of NP-completeness in the literature. We show that deciding if a game has an irrational valued Nash equilibrium is $${\exists {\mathbb {R}}}$$-hard, answering a question of Bilò and Mavronicolas, and address also the computational complexity of deciding if a game has a rational valued Nash equilibrium. These results also hold for 3-player zero-sum games. Our proof methodology applies to corresponding decision problems about symmetric Nash equilibria in symmetric games as well, and in particular our new results carry over to the symmetric setting. Finally we show that deciding whether a symmetric m-player game has a non-symmetric Nash equilibrium is $${\exists {\mathbb {R}}}$$-complete when m ≥ 3, answering a question of Garg, Mehta, Vazirani, and Yazdanbod.

Impartial selection has recently received much attention within the multi-agent systems community. The task is, given a directed graph representing nominations to the members of a community by other members, to select a member with the highest number of nominations. This seemingly trivial goal becomes challenging when there is an additional impartiality constraint, requiring that no single member can influence her chance of being selected. Recent progress has identified impartial selection rules with optimal approximation ratios. Moreover, it was noted that worst-case instances are graphs with few vertices. Motivated by this fact, we propose the study of additive approximation, the difference between the highest number of nominations and the number of nominations of the selected member, as an alternative measure of the quality of impartial selection. Our positive results include two randomized impartial selection mechanisms which have additive approximation guarantees of $${\varTheta }(\sqrt {n})$$ and $${\varTheta }(n^{2/3}\ln ^{1/3}n)$$ for the two most studied models in the literature, where n denotes the community size. We complement our positive results by providing negative results for various cases. First, we provide a characterization for the interesting class of strong sample mechanisms, which allows us to obtain lower bounds of n − 2, and of $${\varOmega }(\sqrt {n})$$ for their deterministic and randomized variants respectively. Finally, we present a general lower bound of 3 for all deterministic impartial mechanisms.

We study risk-free bidding strategies in combinatorial auctions with incomplete information. Specifically, what is the maximum profit that a complement-free (subadditive) bidder can guarantee in a multi-item combinatorial auction? Suppose there are n bidders and Bi is the value that bidder i has for the entire set of items. We study the above problem from the perspective of the first bidder, Bidder 1. In this setting, the worst case profit guarantees arise in a duopsony, that is when n = 2, so this problem then corresponds to playing an auction against a budgeted adversary with budget B2. We present worst-case guarantees for two simple and widely-studied combinatorial auctions; namely, the sequential and simultaneous auctions, for both the first-price and second-price case. In the general case of distinct items, our main results are for the class of fractionally subadditive (XOS) bidders, where we show that for both first-price and second-price sequential auctions Bidder 1 has a strategy that guarantees a profit of at least $$(\sqrt {B_{1}}-\sqrt {B_{2}})^{2}$$ when B2 ≤ B1, and this bound is tight. More profitable guarantees can be obtained for simultaneous auctions, where in the first-price case, Bidder 1 has a strategy that guarantees a profit of at least $$\frac {(B_{1}-B_{2})^{2}}{2B_{1}}$$, and in the second-price case, a bound of B1 − B2 is achievable. We also consider the special case of sequential auctions with identical items, for which we provide tight guarantees for bidders with subadditive valuations.
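To make the closed-form guarantees above concrete, here is a tiny evaluation of the three bounds for one hypothetical pair of budgets (the choice B1 = 4, B2 = 1 is ours, purely for illustration; the ordering of the three numbers is for this instance only, not a general claim):

```python
import math

# Hypothetical budgets with B2 <= B1 (our own example values).
B1, B2 = 4.0, 1.0

# Guarantee for first- and second-price sequential auctions (XOS bidders).
seq = (math.sqrt(B1) - math.sqrt(B2)) ** 2

# Guarantee for first-price simultaneous auctions.
sim_first = (B1 - B2) ** 2 / (2 * B1)

# Guarantee for second-price simultaneous auctions.
sim_second = B1 - B2

print(seq, sim_first, sim_second)  # 1.0 1.125 3.0 for these budgets
```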

We study the three-dimensional stable matching problem with cyclic preferences. This model involves three types of agents, with an equal number of agents of each type. The types form a cyclic order such that each agent has a complete preference list over the agents of the next type. We consider the open problem of the existence of three-dimensional matchings in which no triple of agents prefer each other to their partners. Such matchings are said to be weakly stable. We show that contrary to published conjectures, weakly stable three-dimensional matchings need not exist. Furthermore, we show that it is NP-complete to determine whether a weakly stable three-dimensional matching exists. We achieve this by reducing from the variant of the problem where preference lists are allowed to be incomplete. We generalize our results to the k-dimensional stable matching problem with cyclic preferences for k ≥ 3.

Consider the revenue maximization problem of a risk-neutral seller with m heterogeneous items for sale to a single additive buyer, whose values for the items are drawn from known distributions. If the buyer is also risk-neutral, it is known that a simple and natural mechanism, namely the better of selling separately or pricing only the grand bundle, gives a constant-factor approximation to the optimal revenue. In this paper we study revenue maximization without risk-neutral buyers. Specifically, we adopt cumulative prospect theory, a well established generalization of expected utility theory. Our starting observation is that such preferences give rise to a very rich space of mechanisms, allowing the seller to extract arbitrary revenue. Specifically, a seller can construct extreme lotteries that look attractive to a mildly optimistic buyer, but have arbitrarily negative true expectation. Therefore, giving the seller absolute freedom over the design space results in absurd conclusions; competing with the optimal mechanism is hopeless. Instead, in this paper we study four broad classes of mechanisms, each characterized by a distinct use of randomness. Our goal is twofold: to explore the power of randomness when the buyer is not risk-neutral, and to design simple and attitude-agnostic mechanisms—mechanisms that do not depend on details of the buyer’s risk attitude—which are good approximations of the optimal in-class mechanism, tailored to a specific risk attitude. Our main result is that the same simple and risk-agnostic mechanism (the better of selling separately or pricing only the grand bundle) is a good approximation to the optimal non-agnostic mechanism within three of the mechanism classes we study.

The NP-complete Vertex Cover problem asks to cover all edges of a graph by a small (given) number of vertices. It is among the most prominent graph-algorithmic problems. Following a recent trend in studying temporal graphs (a sequence of graphs, so-called layers, over the same vertex set but, over time, changing edge sets), we initiate the study of Multistage Vertex Cover. Herein, given a temporal graph, the goal is to find for each layer of the temporal graph a small vertex cover and to guarantee that two vertex cover sets of every two consecutive layers differ not too much (specified by a given parameter). We show that, different from classic Vertex Cover and some other dynamic or temporal variants of it, Multistage Vertex Cover is computationally hard even in fairly restricted settings. On the positive side, however, we also spot several fixed-parameter tractability results based on some of the most natural parameterizations.
