Conference Paper

Hardness of Approximation in PSPACE and Separation Results for Pebble Games

... Specifically, finding the minimum number of black pebbles needed to pebble a DAG in the standard pebble game is PSPACE-complete [GLT79], and finding the minimum number of black pebbles needed in the one-shot case is NP-complete [Set75]. In addition, finding the minimum number of pebbles in both the black-white and reversible pebble games has recently been shown to be PSPACE-complete [CLNV15,HP10]. However, the result for the black-white pebble game is proven only for unbounded indegree [HP10]. ...

... However, whether it is possible to find good approximate solutions to the minimization problem has barely been studied. In fact, it was not known until this paper whether it is hard to find the minimum number of pebbles to within even a non-constant additive term [CLNV15]. The best known multiplicative approximation factor is the very loose Θ(n/log n), which is simply the pebbling space upper bound [HPV77], leaving much room for improvement. ...
... In this section, we provide an alternative proof of the result presented in [CLNV15] that the standard pebble game is inapproximable to any constant additive factor. Then, we show that our proof technique can be used to show our main result stated in Theorem 1. ...
Preprint
Pebble games are single-player games on DAGs involving placing and moving pebbles on nodes of the graph according to a certain set of rules. The goal is to pebble a set of target nodes using a minimum number of pebbles. In this paper, we present a possibly simpler proof of the result in [CLNV15] and strengthen the result to show that it is PSPACE-hard to determine the minimum number of pebbles to an additive $n^{1/3-\epsilon}$ term for all $\epsilon > 0$, which improves upon the currently known additive constant hardness of approximation [CLNV15] in the standard pebble game. We also introduce a family of explicit, constant-indegree graphs with $n$ nodes where there exists a graph in the family such that using constant $k$ pebbles requires $\Omega(n^k)$ moves to pebble in both the standard and black-white pebble games. This independently answers an open question summarized in [Nor15] of whether a family of DAGs exists that meets the upper bound of $O(n^k)$ moves using constant $k$ pebbles, with a different construction than that presented in [AdRNV17].
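The rules of the standard (black) pebble game described in the abstract can be made concrete with a small validator: a pebble may be placed on a node only if all of its predecessors hold pebbles, and a pebble may be removed at any time. The cost of a strategy is the peak number of pebbles held at once. This is a minimal sketch under those rules; the function and variable names are illustrative, not from the paper.

```python
def pebbling_cost(preds, moves, targets):
    """Check a move sequence for the standard pebble game and return the
    maximum number of pebbles held at once; raise on an illegal move or
    if the target nodes are not pebbled at the end."""
    pebbled, peak = set(), 0
    for op, v in moves:
        if op == "place":
            # Legal only when every predecessor of v currently holds a pebble
            # (sources have no predecessors, so they can always be pebbled).
            if not all(p in pebbled for p in preds.get(v, ())):
                raise ValueError(f"illegal placement on {v}")
            pebbled.add(v)
        else:  # "remove" is always legal
            pebbled.discard(v)
        peak = max(peak, len(pebbled))
    if not targets <= pebbled:
        raise ValueError("targets not pebbled at the end")
    return peak

# A 3-node pyramid: c depends on a and b.
preds = {"c": ["a", "b"]}
moves = [("place", "a"), ("place", "b"), ("place", "c"),
         ("remove", "a"), ("remove", "b")]
print(pebbling_cost(preds, moves, {"c"}))  # → 3
```

Three pebbles is also optimal for this pyramid, since both predecessors of c must be pebbled at the moment c is placed.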
... 11 then yields a correct uncomputation strategy. We note that we avoid solving the general problem of finding an uncomputation strategy for any ancilla dependency, as it is PSPACE-complete [6]. ...
... In the following, we discuss works which only support qfree gates, and define a custom strategy allowing ... Benchmark little-belle is available at https://github.com/epiqc/Benchmarks/blob/master/bench/square-cirq/synthetic/little_belle.py. ...
... Table 3 further shows the exact parameters of each circuit used in our evaluation; see also Table 1. We also report Unqomp results. ...
Preprint
Full-text available
Quantum circuits must run on quantum computers with tight limits on qubit and gate counts. To generate circuits respecting both limits, a promising opportunity is exploiting uncomputation to trade qubits for gates. We present Reqomp, a method to automatically synthesize correct and efficient uncomputation of ancillae while respecting hardware constraints. For a given circuit, Reqomp can offer a wide range of trade-offs between tightly constraining qubit count or gate count. Our evaluation demonstrates that Reqomp can significantly reduce the number of required ancilla qubits by up to 96%. On 80% of our benchmarks, the ancilla qubits required can be reduced by at least 25% while never incurring a gate count increase beyond 28%.
... Related work on optimization of qubit allocation and reclamation in reversible programs dates back to as early as [17], [41], which propose to reduce qubit cost via fine-grained uncomputation at the expense of increased time. Since then, more works [18], [42]–[44] have followed in characterizing the complexity of reclamation for programs with complex modular structures. Recent work in [24], [45] shows that knowing the structure of the operations in U_f can also help identify bits that may be eligible for cleanup early. ...
... But the trade-offs in qubit allocation and reclamation are unique; we introduce them as "recursive recomputation" and "qubit reservation" in Section III-C. Finding the optimal strategy for register allocation, and similarly for qubit reuse, is known to be a hard problem [42], [49]. Luckily, we are able to transfer some general insights from the rich history of classical register allocation optimization to solve the problems in qubit allocation and reclamation. ...
... It has been shown that, for programs with a linear sequential dependency graph, we can use the reversible pebbling game to approach this problem [46]. However, finding the optimal points in a program with hierarchical structure is PSPACE-complete [42]. For a program with $\ell$ levels and $d$ callees per function, there can be as many as $d^\ell$ possible reclamation points in the worst case. ...
Preprint
Compiling high-level quantum programs to machines that are size constrained (i.e. limited number of quantum bits) and time constrained (i.e. limited number of quantum operations) is challenging. In this paper, we present SQUARE (Strategic QUantum Ancilla REuse), a compilation infrastructure that tackles allocation and reclamation of scratch qubits (called ancilla) in modular quantum programs. At its core, SQUARE strategically performs uncomputation to create opportunities for qubit reuse. Current Noisy Intermediate-Scale Quantum (NISQ) computers and forward-looking Fault-Tolerant (FT) quantum computers have fundamentally different constraints such as data locality, instruction parallelism, and communication overhead. Our heuristic-based ancilla-reuse algorithm balances these considerations and fits computations into resource-constrained NISQ or FT quantum machines, throttling parallelism when necessary. To precisely capture the workload of a program, we propose an improved metric, the "active quantum volume," and use this metric to evaluate the effectiveness of our algorithm. Our results show that SQUARE improves the average success rate of NISQ applications by 1.47X. Surprisingly, the additional gates for uncomputation create ancilla with better locality, and result in substantially fewer swap gates and less gate noise overall. SQUARE also achieves an average reduction of 1.5X (and up to 9.6X) in active quantum volume for FT machines.
Conference Paper
Pebble games are single-player games on DAGs involving placing and moving pebbles on nodes of the graph according to a certain set of rules. The goal is to pebble a set of target nodes using a minimum number of pebbles. In this paper, we present a possibly simpler proof of the result in [4] and strengthen the result to show that it is PSPACE-hard to determine the minimum number of pebbles to an additive $n^{1/3-\epsilon}$ term for all $\epsilon > 0$, which improves upon the currently known additive constant hardness of approximation [4] in the standard pebble game. We also introduce a family of explicit, constant-indegree graphs with $n$ nodes where there exists a graph in the family such that using $0 < k < \sqrt{n}$ pebbles requires $\Omega((n/k)^k)$ moves to pebble in both the standard and black-white pebble games. This independently answers an open question summarized in [14] of whether a family of DAGs exists that meets the upper bound of $O(n^k)$ moves using constant $k$ pebbles, with a different construction than that presented in [1].
... If within a component the ancilla variables are not linearly dependent, we abort the current procedure and fall back on an alternative one, Reqomp-Lazy, which we describe in §4.5. We note that we avoid solving the general problem of finding an uncomputation strategy for any ancilla dependency, as it is PSPACE-complete [6]. ...
... This is particularly appealing in our setting, where we encounter many CCX gates, and most of them are uncomputed. As detailed in App. B.2 (Fig. 12), we may force extra steps in the computation to ensure some values are computed at least once for non-ancilla variables. ...
Article
Full-text available
Quantum circuits must run on quantum computers with tight limits on qubit and gate counts. To generate circuits respecting both limits, a promising opportunity is exploiting uncomputation to trade qubits for gates. We present Reqomp, a method to automatically synthesize correct and efficient uncomputation of ancillae while respecting hardware constraints. For a given circuit, Reqomp can offer a wide range of trade-offs between tightly constraining qubit count or gate count. Our evaluation demonstrates that Reqomp can significantly reduce the number of required ancilla qubits by up to 96%. On 80% of our benchmarks, the ancilla qubits required can be reduced by at least 25% while never incurring a gate count increase beyond 28%.
... In a different context, Potechin [Pot10] implicitly used reversible pebbling to obtain lower bounds in monotone space complexity, with the connection made explicit in later works [CP14,FPRC13]. The paper [CLNV15] (to which this overview is indebted) studied the relative power of standard and reversible pebblings with respect to space, and also established PSPACE-hardness results for estimating the minimum space required to pebble graphs (reversibly or not). ...
... Such results for pebbling time versus space are known for the standard pebble game, e.g., in [GLT80]. It is conceivable that a similar idea could be applied to the reversible pebbling reductions in [CLNV15], but it is not obvious whether just adding a small amount of space makes it possible to carry out the reversible pebbling time-efficiently enough. ...
Preprint
Full-text available
We establish an exactly tight relation between reversible pebblings of graphs and Nullstellensatz refutations of pebbling formulas, showing that a graph G can be reversibly pebbled in time t and space s if and only if there is a Nullstellensatz refutation of the pebbling formula over G in size t+1 and degree s (independently of the field in which the Nullstellensatz refutation is made). We use this correspondence to prove a number of strong size-degree trade-offs for Nullstellensatz, which to the best of our knowledge are the first such results for this proof system.
... (3) The player can recolor a blue pebble to red, or a red pebble to blue. (4) The player can delete a pebble from a node at any time. Goal: Pebble all sink nodes with blue pebbles. ...
... Despite somewhat extensive research on upper and lower bounds for optimally pebbling a DAG in pebble games, there are fewer results on the complexity of finding a minimum solution. In fact, it is not yet known whether it is hard to find the minimum number of pebbles within a constant or logarithmic multiplicative approximation factor [4,8]. It turns out that finding a strategy to optimally pebble a graph in the standard pebble game is computationally difficult even when each vertex is allowed to be pebbled only once. ...
Conference Paper
The red-blue pebble game was formulated in the 1980s [HK81] to model the I/O complexity of algorithms on a two-level memory hierarchy. Given a directed acyclic graph representing computations (vertices) and their dependencies (edges), the red-blue pebble game allows sequentially adding, removing, and recoloring red or blue pebbles according to a few rules, where red pebbles represent data in cache (fast memory) and blue pebbles represent data on disk (slow, external memory). Specifically, a vertex can be newly pebbled red if and only if all of its predecessors currently have a red pebble; pebbles can always be removed; and pebbles can be recolored between red and blue (corresponding to reading or writing data between disk and cache, also called I/Os or memory transfers). Given an upper bound on the number of red pebbles at any time (the cache size), the goal is to compute a game execution with the fewest pebble recolorings (memory transfers) that finishes with pebbles on a specified subset of nodes (outputs get computed). In this paper, we investigate the complexity of computing this trade-off between red-pebble limit (cache size) and number of recolorings (memory transfers) in general DAGs. First, we prove this problem PSPACE-complete through an extension of the proof of PSPACE-hardness of black pebbling complexity [GLT80]. Second, we consider a natural restriction on the red-blue pebble game that forbids pebble deletions, or equivalently, forbids discarding data from cache without first writing it to disk. This assumption both simplifies the model and immediately places the trade-off computation problem within NP. Unfortunately, we show that even this restricted version is NP-complete. Finally, we show that the trade-off problem parameterized by the number of transitions is W[1]-hard, meaning that there is likely no algorithm running in time a fixed polynomial whose degree is independent of the number of transitions.
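The red-blue rules above lend themselves to a short move checker that counts recolorings (I/Os) while enforcing the red-pebble limit. This is an illustrative sketch: the move names ("compute", "read", "write", "delete") are invented here, and for simplicity source vertices may be computed red directly, whereas some formulations instead start inputs with blue pebbles.

```python
def count_ios(preds, moves, red_limit, outputs):
    """Validate a red-blue pebbling and return the number of recolorings."""
    color, ios = {}, 0  # node -> "red" or "blue"
    for op, v in moves:
        if op == "compute":      # new red pebble: all predecessors must be red
            assert all(color.get(p) == "red" for p in preds.get(v, ())), v
            color[v] = "red"
        elif op == "read":       # blue -> red: one I/O (load from slow memory)
            assert color.get(v) == "blue", v
            color[v] = "red"
            ios += 1
        elif op == "write":      # red -> blue: one I/O (store to slow memory)
            assert color.get(v) == "red", v
            color[v] = "blue"
            ios += 1
        else:                    # "delete": drop the pebble entirely
            color.pop(v, None)
        # The cache-size constraint: never more than red_limit red pebbles.
        assert sum(c == "red" for c in color.values()) <= red_limit
    assert all(v in color for v in outputs), "outputs not pebbled"
    return ios

# c depends on a and b; with 3 red pebbles the only I/O is writing c out.
moves = [("compute", "a"), ("compute", "b"), ("compute", "c"),
         ("write", "c"), ("delete", "a"), ("delete", "b")]
print(count_ios({"c": ["a", "b"]}, moves, 3, {"c"}))  # → 1
```

With `red_limit=2` the same graph has no valid strategy of this shape, since computing c requires a, b, and c to be red simultaneously.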
... Since the optimisation problem of the original reversible pebble game is known to be PSPACE-complete [Chan et al. 2015], we expect that our pebble game for Qurts is also PSPACE-complete. If that is indeed the case, there could be a reduction from the pebble game to the problem of Quantified Boolean Formulas (QBF), which is a well-known PSPACE-complete problem. ...
Preprint
Uncomputation is a feature in quantum programming that allows the programmer to discard a value without losing quantum information, and that allows the compiler to reuse resources. Whereas quantum information has to be treated linearly by the type system, automatic uncomputation enables the programmer to treat it affinely to some extent. Automatic uncomputation requires a substructural type system between linear and affine, a subtlety that has only been captured by existing languages in an ad hoc way. We extend the Rust type system to the quantum setting to give a uniform framework for automatic uncomputation called Qurts (pronounced quartz). Specifically, we parameterise types by lifetimes, permitting them to be affine during their lifetime, while being restricted to linear use outside their lifetime. We also provide two operational semantics: one based on classical simulation, and one that does not depend on any specific uncomputation strategy.
... The problem complexity has been studied in ref. 43, where the author proves that finding the minimum number of pebbles is PSPACE-complete, as in the case of the non-reversible pebbling game. Moreover, the problem is PSPACE-hard to approximate up to an additive constant (ref. 44). An explicit asymptotic expression for the best time-space product is given in ref. 45. ...
Article
Full-text available
Quantum compilation is the task of translating a high-level description of a quantum algorithm into a sequence of low-level quantum operations. We propose and motivate the use of Xor-And-Inverter Graphs (XAG) to specify Boolean functions for quantum compilation. We present three different XAG-based compilation algorithms to synthesize quantum circuits in the Clifford+T library, hence targeting fault-tolerant quantum computing. The algorithms are designed to minimize relevant cost functions, such as the number of qubits, the T-count, and the T-depth, while allowing the flexibility of exploring different solutions. We present novel resource estimation results for relevant cryptographic and arithmetic benchmarks. The achieved results show a significant reduction in both T-count and T-depth when compared with the state-of-the-art.
... The problem complexity has been studied in [15], where the authors prove that the problem is PSPACE-complete, as is the non-reversible pebbling game. An explicit asymptotic expression for the best time-space product is given in [16], while the asymptotic behavior on trees is studied in [17]. ...
Preprint
Quantum memory management is becoming a pressing problem, especially given the recent research effort to develop new and more complex quantum algorithms. The only existing automatic method for quantum states clean-up relies on the availability of many extra resources. In this work, we propose an automatic tool for quantum memory management. We show how this problem exactly matches the reversible pebbling game. Based on that, we develop a SAT-based algorithm that returns a valid clean-up strategy, taking the limitations of the quantum hardware into account. The developed tool empowers the designer with the flexibility required to explore the trade-off between memory resources and number of operations. We present three show-cases to prove the validity of our approach. First, we apply the algorithm to straight-line programs, widely used in cryptographic applications. Second, we perform a comparison with the existing approach, showing an average improvement of 52.77%. Finally, we show the advantage of using the tool when synthesizing a quantum circuit on a constrained near-term quantum device.
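In the reversible pebble game that this abstract matches to quantum memory management, removals are as constrained as placements: both placing and removing a pebble on a node require all of its predecessors to be pebbled (mirroring uncomputation, which must rerun the computation backwards). A minimal validator, with illustrative names not taken from the paper:

```python
def reversible_peak(preds, moves):
    """Check a reversible pebbling and return the peak pebble count.
    Both "place" and "remove" on v require all predecessors of v pebbled."""
    pebbled, peak = set(), 0
    for op, v in moves:
        if not all(p in pebbled for p in preds.get(v, ())):
            raise ValueError(f"predecessors of {v} not pebbled")
        if op == "place":
            pebbled.add(v)
        else:
            pebbled.remove(v)  # KeyError here would mean removing a missing pebble
        peak = max(peak, len(pebbled))
    return peak

# Path a -> b -> c: b can only be removed while a is still pebbled.
preds = {"b": ["a"], "c": ["b"]}
moves = [("place", "a"), ("place", "b"), ("place", "c"),
         ("remove", "b"), ("remove", "a")]
print(reversible_peak(preds, moves))  # → 3
```

Note that the unconstrained (standard) game could free a before removing b; the reversibility rule is exactly what forces extra space or recomputation.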
Article
Uncomputation is a feature in quantum programming that allows the programmer to discard a value without losing quantum information, and that allows the compiler to reuse resources. Whereas quantum information has to be treated linearly by the type system, automatic uncomputation enables the programmer to treat it affinely to some extent. Automatic uncomputation requires a substructural type system between linear and affine, a subtlety that has only been captured by existing languages in an ad hoc way. We extend the Rust type system to the quantum setting to give a uniform framework for automatic uncomputation called Qurts (pronounced quartz). Specifically, we parameterise types by lifetimes, permitting them to be affine during their lifetime, while being restricted to linear use outside their lifetime. We also provide two operational semantics: one based on classical simulation, and one that does not depend on any specific uncomputation strategy.
Chapter
Pebble games are usually used to study space/time trade-offs. Recently, spooky pebble games were introduced to study classical space/quantum space/time trade-offs for simulation of classical circuits on quantum computers. In this paper, the spooky pebble game framework is applied for the first time to general circuits. Using this framework we prove an upper bound for quantum space in the spooky pebble game. Moreover, we present a solver for the spooky pebble game based on a SAT solver. This spooky pebble game solver is empirically evaluated by calculating optimal classical space/quantum space/time trade-offs. Within limited runtime, the solver could find a strategy reducing quantum space when classical space is taken into account, showing that the spooky pebble model is useful to reduce quantum space. Keywords: Pebble game, Spooky pebble game, Quantum computing, Satisfiability
Chapter
Data movements between different levels of a memory hierarchy (I/Os) are a principal performance bottleneck. This is particularly noticeable in computations that have low complexity but large amounts of input data, often occurring in "big data". Using the red-blue pebble game, we investigate the I/O-complexity of directed acyclic graphs (DAGs) with a large proportion of input vertices. For trees, we show that the number of leaves is a 2-approximation for the optimal number of I/Os. Similar techniques as we use in the proof of the results for trees allow us to find lower and upper bounds of the optimal number of I/Os for general DAGs. The larger the proportion of input vertices, the stronger those bounds become. For families of DAGs with bounded degree and a large proportion of input vertices (meaning that there exists some constant $c > 0$ such that for every DAG G of this family, the proportion p of input vertices satisfies $p > c$), our bounds give constant factor approximations, improving the previous logarithmic approximation factors. For those DAGs, by avoiding certain I/O-inefficiencies, which we will define precisely, a pebbling strategy is guaranteed to satisfy those bounds and asymptotics. We extend the I/O-bounds for trees to a multiprocessor setting with fast individual memories and a slow shared memory.
Article
Full-text available
We show a new connection between the clause space measure in tree-like resolution and the reversible pebble game on graphs. Using this connection, we provide several formula classes for which there is a logarithmic factor separation between the clause space complexity measure in tree-like and general resolution. We also provide upper bounds for tree-like resolution clause space in terms of general resolution clause and variable space. In particular, we show that for any formula F, its tree-like resolution clause space is upper bounded by $\mathrm{space}(\pi) \log(\mathrm{time}(\pi))$, where $\pi$ is any general resolution refutation of F. This holds considering as $\mathrm{space}(\pi)$ the clause space of the refutation as well as its variable space. For the concrete case of Tseitin formulas, we are able to improve this bound to the optimal bound $\mathrm{space}(\pi) \log n$, where n is the number of vertices of the corresponding graph.
Article
Full-text available
We establish an exactly tight relation between reversible pebblings of graphs and Nullstellensatz refutations of pebbling formulas, showing that a graph G can be reversibly pebbled in time t and space s if and only if there is a Nullstellensatz refutation of the pebbling formula over G in size t + 1 and degree s (independently of the field in which the Nullstellensatz refutation is made). We use this correspondence to prove a number of strong size-degree trade-offs for Nullstellensatz, which to the best of our knowledge are the first such results for this proof system.
Chapter
We give a significantly simplified proof of the exponential separation between regular and general resolution of Alekhnovich et al. (2007) as a consequence of a general theorem lifting proof depth to regular proof length in resolution. This simpler proof then allows us to strengthen the separation further, and to construct families of theoretically very easy benchmarks that are surprisingly hard for SAT solvers in practice.
Article
We show the close connection between the enumeration of cliques in a k-clique free graph G, the running time of DPLL-style algorithms for the k-clique problem, and the length of tree-like resolution refutations for the formula Clique(G,k), which claims that G has a k-clique. The length of any such tree-like refutation is within a "fixed-parameter tractable" factor of the number of cliques in the graph. We then proceed to drastically simplify the proofs of the lower bounds for the length of tree-like resolution refutations of Clique(G,k) shown in Beyersdorff et al. 2013 and Lauria et al. 2017, which now reduce to a simple estimate of said quantity.
Conference Paper
Red-blue pebbling is a model of computation that captures the complexity of I/O operations in systems with external memory access. We focus on one-shot pebbling strategies, that is, without re-computation. Prior work on this model has focused on finding upper and lower bounds on the I/O complexity of certain families of graphs. We give a polynomial-time bi-criteria approximation algorithm for this problem for graphs with bounded out-degree. More precisely, given an n-vertex DAG that admits a pebbling strategy with R red pebbles and I/O complexity opt, our algorithm outputs a strategy using O(R · log^{3/2} n) red pebbles and I/O complexity O(opt · log^{3/2} n). We further extend our result to the generalization of red-blue pebble games that corresponds to multi-level memory hierarchies. Finally, we complement our theoretical analysis with an experimental evaluation of our algorithm for red-blue pebbling.
Article
Full-text available
We prove tight size bounds on monotone switching networks for the NP-complete problem of k-clique, and for an explicit monotone problem by analyzing a pyramid structure of height h for the P-complete problem of generation. This gives alternative proofs of the separations of m-NC from m-P and of m-NCⁱ from m-NCⁱ⁺¹, different from Raz-McKenzie (Combinatorica 1999). The enumerative-combinatorial and Fourier analytic techniques in this paper are very different from a large body of work on circuit depth lower bounds, and may be of independent interest.
Conference Paper
Full-text available
An approximate computation of a Boolean function by a circuit or switching network is a computation in which the function is computed correctly on the majority of the inputs (rather than on all inputs). Besides being interesting in their own right, lower bounds for approximate computation have proved useful in many subareas of complexity theory, such as cryptography and derandomization. Lower bounds for approximate computation are also known as correlation bounds or average case hardness. In this paper, we obtain the first average case monotone depth lower bounds for a function in monotone P. We tolerate errors that are asymptotically the best possible for monotone circuits. Specifically, we prove average case exponential lower bounds on the size of monotone switching networks for the GEN function. As a corollary, we separate the monotone NC hierarchy in the case of errors -- a result which was previously only known for exact computations. Our proof extends and simplifies the Fourier analytic technique due to Potechin, and further developed by Chan and Potechin. As a corollary of our main lower bound, we prove that the communication complexity approach for monotone depth lower bounds does not naturally generalize to the average case setting.
Article
Full-text available
Pebble games were extensively studied in the 1970s and 1980s in a number of different contexts. The last decade has seen a revival of interest in pebble games coming from the field of proof complexity. Pebbling has proven to be a useful tool for studying resolution-based proof systems when comparing the strength of different subsystems, showing bounds on proof space, and establishing size-space trade-offs. This is a survey of research in proof complexity drawing on results and tools from pebbling, with a focus on proof space lower bounds and trade-offs between proof size and proof space.
Article
Full-text available
A linear recursive procedure is one each of whose executions activates at most one invocation of itself. When linear recursion cannot be replaced by iteration, it is usually implemented with a stack of size proportional to the depth of recursion. In this paper we analyze implementations of linear recursion which permit large reductions in storage space at the expense of a small increase in computation time. For example, if the depth of recursion is $n$, storage space can be reduced to $\sqrt{n}$ at the cost of a constant factor increase in running time. The problem is treated by abstracting any implementation of linear recursion as the pebbling of a simple graph, and for this abstraction we exhibit the optimal space-time tradeoffs.
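The $\sqrt{n}$-space idea in this abstract can be sketched directly: run the recursion once, keep a checkpoint roughly every $\sqrt{n}$ steps, and recover any intermediate state by recomputing a short suffix from the nearest checkpoint. This is an illustrative reconstruction of the general technique, not the paper's construction; the helper name and the doubling recursion used as an example are invented here.

```python
import math

def checkpointed_states(step, x0, n):
    """Run the linear recursion x_{i+1} = step(x_i) for n steps, keeping
    only every k-th state (k ≈ sqrt(n)). Any state i can then be recovered
    with fewer than k extra steps: sqrt(n) space for a constant-factor
    increase in time."""
    k = max(1, math.isqrt(n))
    ckpts = {0: x0}
    x = x0
    for i in range(1, n + 1):
        x = step(x)
        if i % k == 0:
            ckpts[i] = x          # checkpoint every k-th state

    def state(i):
        j = (i // k) * k          # nearest checkpoint at or below i
        y = ckpts[j]
        for _ in range(i - j):    # recompute the short suffix
            y = step(y)
        return y

    return state

# Doubling recursion: state i is 2**i; only ~sqrt(100) states are stored.
state = checkpointed_states(lambda x: 2 * x, 1, 100)
print(state(37))  # → 137438953472 (= 2**37)
```

Storing checkpoints at a coarser spacing and recursing on the same idea yields the full hierarchy of space-time tradeoffs the paper analyzes via pebbling.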
Article
Full-text available
We prove a general upper bound on the tradeoff between time and space that suffices for the reversible simulation of irreversible computation. Previously, only simulations using exponential time or quadratic space were known. The tradeoff shows for the first time that we can simultaneously achieve subexponential time and subquadratic space. The boundary values are the exponential time with hardly any extra space required by the Lange-McKenzie-Tapp method and the $(\log 3)$th power time with square space required by the Bennett method. We also give the first general lower bound on the extra storage space required by general reversible simulation. This lower bound is optimal in that it is achieved by some reversible simulations. Comment: 11 pages LaTeX, Proc ICALP 2001, Lecture Notes in Computer Science, Vol xxx Springer-Verlag, Berlin, 2001
Conference Paper
Full-text available
The equivalence problem for Kleene's regular expressions has several effective solutions, all of which are computationally inefficient. In [1], we showed that this inefficiency is an inherent property of the problem by showing that the problem of membership in any arbitrary context-sensitive language was easily reducible to the equivalence problem for regular expressions. We also showed that with a squaring abbreviation (writing $(E)^2$ for $E \times E$) the equivalence problem for expressions required computing space exponential in the size of the expressions. In this paper we consider a number of similar decidable word problems from automata theory and logic whose inherent computational complexity can be precisely characterized in terms of time or space requirements on deterministic or nondeterministic Turing machines. The definitions of the word problems and a table summarizing their complexity appears in the next section. More detailed comments and an outline of some of the proofs follows in the remaining sections. Complete proofs will appear in the forthcoming papers [9, 10, 13]. In the final section we describe some open problems.
Article
Full-text available
We study constructions that convert arbitrary deterministic Turing machines to reversible machines; i.e., reversible simulations. Specifically, we study space-efficient simulations; that is, the resulting reversible machine uses O(f(S)) space, where S is the space usage of the original machine and f is very close to linear (say, $n \log n$ or smaller). We generalize the previous results on this reversibility problem by proving a general theorem incorporating two simulations: one is space-efficient ($O(S)$) and is due to Lange, McKenzie, and Tapp [5]; the other is time-efficient ($O(T^{1+\epsilon})$ for any $\epsilon > 0$, where T is the time usage of the original machine) and is ...
Article
Full-text available
We initiate an investigation of probabilistically checkable debate systems (PCDS's), a natural generalization of probabilistically checkable proof systems. A PCDS for a language L consists of a probabilistic polynomial-time verifier V and a debate between player 1, who claims that the input x is in L, and player 0, who claims that the input x is not in L. We show that there is a PCDS for L in which V flips O(log n) random coins and reads O(1) bits of the debate if and only if L is in PSPACE. This characterization of PSPACE is used to show that certain PSPACE-hard functions are as hard to approximate closely as they are to compute exactly.
Article
Full-text available
Future miniaturization and mobilization of computing devices requires energy parsimonious `adiabatic' computation. This is contingent on logical reversibility of computation. An example is the idea of quantum computations which are reversible except for the irreversible observation steps. We propose to study quantitatively the exchange of computational resources like time and space for irreversibility in computations. Reversible simulations of irreversible computations are memory intensive. Such (polynomial time) simulations are analysed here in terms of `reversible' pebble games. We show that Bennett's pebbling strategy uses least additional space for the greatest number of simulated steps. We derive a trade-off for storage space versus irreversible erasure. Next we consider reversible computation itself. An alternative proof is provided for the precise expression of the ultimate irreversibility cost of an otherwise reversible computation without restrictions on time and space use. A time-irreversibility trade-off hierarchy in the exponential time region is exhibited. Finally, extreme time-irreversibility trade-offs for reversible computations in the thoroughly unrealistic range of computable versus noncomputable time-bounds are given. Comment: 30 pages, LaTeX. Lemma 2.3 should be replaced by the slightly better "There is a winning strategy with n+2 pebbles and m-1 erasures for pebble games G with T_G = m·2^n, for all m ≥ 1" with appropriate further changes (as pointed out by Wim van Dam). This and further work on reversible simulations as in Section 2 appears in quant-ph/9703009.
Article
Graphical models, such as Bayesian networks and Markov networks, play an important role in artificial intelligence and machine learning. Inference is a central problem to be solved on these networks. This problem, and others on these graph models, are often known to be hard to solve in general, but tractable on graphs with bounded Treewidth. Therefore, finding or approximating the Treewidth of a graph is a fundamental problem related to inference in graphical models. In this paper, we study the approximability of a number of graph problems: Treewidth and Pathwidth of graphs, Minimum Fill-In, One-Shot Black (and Black-White) pebbling costs of directed acyclic graphs, and a variety of different graph layout problems such as Minimum Cut Linear Arrangement and Interval Graph Completion. We show that, assuming the recently introduced Small Set Expansion Conjecture, all of these problems are NP-hard to approximate to within any constant factor in polynomial time.
Conference Paper
We develop new theoretical tools for proving lower-bounds on the (amortized) complexity of certain functions in models of parallel computation. We apply the tools to construct a class of functions with high amortized memory complexity in the *parallel* Random Oracle Model (pROM); a variant of the standard ROM allowing for batches of *simultaneous* queries. In particular we obtain a new, more robust, type of Memory-Hard Functions (MHF); a security primitive which has recently been gaining acceptance in practice as an effective means of countering brute-force attacks on security relevant functions. Along the way we also demonstrate an important shortcoming of previous definitions of MHFs and give a new definition addressing the problem. The tools we develop represent an adaptation of the powerful pebbling paradigm (initially introduced by Hewitt and Paterson [HP70] and Cook [Coo73]) to a simple and intuitive parallel setting. We define a simple pebbling game Gp over graphs which aims to abstract parallel computation in an intuitive way. As a conceptual contribution we define a measure of pebbling complexity for graphs called *cumulative complexity* (CC) and show how it overcomes a crucial shortcoming (in the parallel setting) exhibited by more traditional complexity measures used in the past. As a main technical contribution we give an explicit construction of a constant in-degree family of graphs whose CC in Gp approaches maximality to within a polylogarithmic factor for any graph of equal size (analogous to the graphs of Tarjan et al. [PTC76, LT82] for sequential pebbling games). Finally, for a given graph G and related function fG, we derive a lower-bound on the amortized memory complexity of fG in the pROM in terms of the CC of G in the game Gp.
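The cumulative-complexity measure described above has a simple sequential analogue. As an illustration (our own sketch, not the paper's pROM formalism), the CC of a pebbling sequence is the sum, over all time steps, of the number of pebbles on the graph:

```python
# Cumulative complexity (CC) of a pebbling sequence: the sum over all
# steps of the number of pebbles currently on the graph. Unlike the
# classical max-pebble measure, a strategy cannot lower its CC merely by
# dropping to few pebbles for a moment, which is what makes such a
# measure meaningful in amortized/parallel settings.

def cumulative_complexity(moves):
    """moves is a list of ('place' | 'remove', vertex) pairs."""
    pebbled, cc = set(), 0
    for op, v in moves:
        if op == 'place':
            pebbled.add(v)
        else:
            pebbled.discard(v)
        cc += len(pebbled)
    return cc

# Pebbling a path a -> b -> c with 2 pebbles:
moves = [('place', 'a'), ('place', 'b'), ('remove', 'a'), ('place', 'c')]
# per-step pebble counts: 1, 2, 1, 2  ->  CC = 6
```

The maximum per-step count here is 2, but the CC of 6 also charges for how long each pebble stays on the graph.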
Article
Propositional proof complexity is the study of the sizes of propositional proofs, and more generally, the resources necessary to certify propositional tautologies. Questions about proof sizes have connections with computational complexity, theories of arithmetic, and satisfiability algorithms. This article includes a broad survey of the field, and a technical exposition of some recently developed techniques for proving lower bounds on proof sizes.
Conference Paper
The two-player pebble game of Dymond-Tompa is identified as a barrier for existing techniques to save space or to speed up parallel algorithms for evaluation problems. Many combinatorial lower bounds to study L versus NL and NC versus P under different restricted settings scale in the same way as the pebbling algorithm of Dymond-Tompa. These lower bounds include: (1) the monotone separation of m-L from m-NL by studying the size of monotone switching networks in Potechin '10; (2) a new semantic separation of NC from P and of NC^i from NC^{i+1} by studying circuit depth, based on the techniques developed for the semantic separation of NC^1 from NC^2 by the universal composition relation in Edmonds-Impagliazzo-Rudich-Sgall '01 and in Hastad-Wigderson '97; and (3) the monotone separation of m-NC from m-P and of m-NC^i from m-NC^{i+1} by studying (a) the depth of monotone circuits in Raz-McKenzie '99; and (b) the size of monotone switching networks in Chan-Potechin '12. This supports the attempt to separate NC from P by focusing on depth complexity, and suggests the study of combinatorial invariants shaped by pebbling for proving lower bounds. An application to proof complexity gives tight bounds for the size and the depth of some refinements of resolution refutations.
Article
The usual general-purpose computing automaton (e.g., a Turing machine) is logically irreversible—its transition function lacks a single-valued inverse. Here it is shown that such machines may be made logically reversible at every step, while retaining their simplicity and their ability to do general computations. This result is of great physical interest because it makes plausible the existence of thermodynamically reversible computers which could perform useful computations at useful speed while dissipating considerably less than kT of energy per logical step. In the first stage of its computation the logically reversible automaton parallels the corresponding irreversible automaton, except that it saves all intermediate results, thereby avoiding the irreversible operation of erasure. The second stage consists of printing out the desired output. The third stage then reversibly disposes of all the undesired intermediate results by retracing the steps of the first stage in backward order (a process which is only possible because the first stage has been carried out reversibly), thereby restoring the machine (except for the now-written output tape) to its original condition. The final machine configuration thus contains the desired output and a reconstructed copy of the input, but no other undesired data. The foregoing results are demonstrated explicitly using a type of three-tape Turing machine. The biosynthesis of messenger RNA is discussed as a physical example of reversible computation.
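Bennett's three-stage scheme (compute while saving history, copy out the result, then retrace) can be caricatured in a few lines. This is a loose illustration with a hypothetical `step` function standing in for one machine transition, not the paper's three-tape Turing-machine construction:

```python
# Caricature of Bennett's reversible three-stage computation.
# Stage 1 runs forward, saving every intermediate configuration (so no
# irreversible erasure happens); stage 2 copies out the result; stage 3
# retraces stage 1 backwards, which is possible precisely because the
# full history was kept, restoring the machine to its input.

def three_stage(step, x, n_steps):
    history = [x]
    for _ in range(n_steps):            # stage 1: compute, saving all IDs
        history.append(step(history[-1]))
    output = history[-1]                # stage 2: copy the desired output
    while len(history) > 1:             # stage 3: uncompute in reverse order
        history.pop()                   # each pop undoes one forward step
    return history[0], output           # input restored alongside the output

# e.g. five increments starting from 0 returns (0, 5):
restored_input, result = three_stage(lambda v: v + 1, 0, 5)
```

The price of this reversibility is the history tape: the space used grows with the number of simulated steps, which is exactly the trade-off the pebble-game papers cited here quantify.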
Article
This paper describes the simulation of an S(n) space-bounded deterministic Turing machine by a reversible Turing machine operating in space S(n). It thus answers a question posed by C. H. Bennett [SIAM J. Comput. 18, No. 4, 766-776 (1989; Zbl 0676.68010)] and refutes the conjecture, made by M. Li and P. Vitányi [Proc. R. Soc. Lond., Ser. A 452, No. 1947, 769-789 (1996; Zbl 0869.68019)], that any reversible simulation of an irreversible computation must obey Bennett’s reversible pebble game rules.
Article
Computer computations are generally irreversible while the laws of physics are reversible. This mismatch is penalized by among other things generating excess thermic entropy in the computation. Computing performance has improved to the extent that efficiency degrades unless all algorithms are executed reversibly, for example by a universal reversible simulation of irreversible computations. All known reversible simulations are either space hungry or time hungry. The leanest method was proposed by Bennett and can be analyzed using a simple ‘reversible’ pebble game. The reachable reversible simulation instantaneous descriptions (pebble configurations) of such pebble games are characterized completely. As a corollary we obtain the reversible simulation by Bennett and, moreover, show that it is a space-optimal pebble game. We also introduce irreversible steps and give a theorem on the tradeoff between the number of allowed irreversible steps and the memory gain in the pebble game. In this resource-bounded setting the limited erasing needs to be performed at precise instants during the simulation. The reversible simulation can be modified so that it is applicable also when the simulated computation time is unknown.
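As a sketch of the `reversible' pebble game referred to above (our own rendering of the commonly stated rules, not the paper's notation): on a chain, a pebble may be placed on or removed from node i only when node i-1 carries a pebble, and a Bennett-style midpoint recursion pebbles the last node while keeping only O(log n) pebbles on the chain at once:

```python
# Reversible pebble game on a chain 1..n: node i may be pebbled OR
# unpebbled only if node i-1 is pebbled (node 1 is unconstrained).
# Bennett-style divide-and-conquer: pebble the midpoint, pebble the
# target from it, then reversibly clear the midpoint helper pebble.

def pebble(lo, hi, pebbled, moves):
    """Place a pebble on node hi, assuming node lo is pebbled (or lo == 0)."""
    if hi == lo + 1:
        pebbled.add(hi); moves.append(('place', hi))
        return
    mid = (lo + hi) // 2
    pebble(lo, mid, pebbled, moves)
    pebble(mid, hi, pebbled, moves)
    unpebble(lo, mid, pebbled, moves)      # clear the helper pebble

def unpebble(lo, hi, pebbled, moves):
    """Remove the pebble on node hi; the exact reverse of pebble(lo, hi)."""
    if hi == lo + 1:
        pebbled.discard(hi); moves.append(('remove', hi))
        return
    mid = (lo + hi) // 2
    pebble(lo, mid, pebbled, moves)        # re-derive the helper pebble
    unpebble(mid, hi, pebbled, moves)
    unpebble(lo, mid, pebbled, moves)

pebbled, moves = set(), []
pebble(0, 8, pebbled, moves)               # pebble node 8 of a chain of 8
```

After the run only node 8 remains pebbled; replaying `moves` shows every move obeys the reversibility rule, at the cost of re-pebbling during removals, which is the time overhead the abstract's trade-off theorem accounts for.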
Article
There are two main results proved here. The first states that a certain set SP of strings (those coding "solvable path systems") has tape complexity (log n)^2 iff every set in P (i.e., of deterministic polynomial time complexity) has tape complexity (log n)^2. The second result gives evidence that SP does not have tape complexity (log n)^k for any k.
Conference Paper
Recent research has investigated time-space tradeoffs for register allocation strategies of certain fixed sets of expressions. This paper is concerned with the time-space tradeoff for register allocation strategies of any set of expressions which compute given functions. Time-space tradeoffs for pebbling superconcentrators and grates are developed. Corollaries which follow include tradeoffs for any straight-line program which computes polynomial multiplication, polynomial convolution, the discrete Fourier transform, oblivious merging, and most sets of linear forms.
Conference Paper
An intriguing question is whether (log n)^2 space is enough to recognize the class of languages recognizable in deterministic polynomial time. This question has earlier been narrowed down to the storage required to recognize a particular language called SP. SP is clearly in P, and it has been shown that if SP has tape complexity (log n)^k, then every member of P has tape complexity (log n)^k. This paper presents further evidence in support of the conjecture that SP cannot be recognized using storage (log n)^k for any k. We have no techniques at present for proving such a statement for Turing machines in general; we prove the result for a suitably restricted device.
Conference Paper
We study a one-person game played by placing pebbles, according to certain rules, on the vertices of a directed graph. In [3] it was shown that for each graph with n vertices and maximum in-degree d, there is a pebbling strategy which requires at most c(d)·n/log n pebbles. Here we show that this bound is tight to within a constant factor. We also analyze a variety of pebbling algorithms, including one which achieves the O(n/log n) bound. Keywords: pebble game, register allocation, space bounds, Turing machines.
Conference Paper
We examine a pebbling problem which has been used to study the storage requirements of various models of computation. Sethi has shown this problem to be NP-hard and Lingas has shown a generalization to be PSPACE-complete. We prove the original problem PSPACE-complete by employing a modification of Lingas's proof. The pebbling problem is one of the few examples of a PSPACE-complete problem not exhibiting any obvious quantifier alternation.
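For reference, the rules of the standard (black) pebble game examined above can be stated as a short validity check; this is an illustrative sketch, not code from any of the cited papers:

```python
# Standard black pebble game on a DAG: a pebble may be placed on v only
# if every predecessor of v is pebbled (so sources are always placeable);
# a pebble may be removed at any time. The cost of a strategy is the
# maximum number of pebbles simultaneously on the graph.

def pebbling_cost(preds, moves):
    """Return the max number of pebbles used, or None if a move is illegal.
    preds[v] lists the predecessors of v; moves are ('place'|'remove', v)."""
    pebbled, peak = set(), 0
    for op, v in moves:
        if op == 'place':
            if any(u not in pebbled for u in preds[v]):
                return None                # illegal placement
            pebbled.add(v)
            peak = max(peak, len(pebbled))
        else:
            pebbled.discard(v)
    return peak

# The path a -> b -> c can be pebbled with only 2 pebbles:
preds = {'a': [], 'b': ['a'], 'c': ['b']}
moves = [('place', 'a'), ('place', 'b'), ('remove', 'a'), ('place', 'c')]
```

The PSPACE-completeness results concern the optimization version: given the DAG, find the smallest budget for which some legal move sequence pebbles the target.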
Conference Paper
This paper presents the new speedups DTIME(T) ⊆ ATIME(T/log T) and DTIME(T) ⊆ PRAM-time(√T). These improve the results of Hopcroft, Paul, and Valiant (J. Assoc. Comput. Mach. 24 (1977), 332-337) that DTIME(T) ⊆ DSPACE(T/log T), and of Paul and Reischuk (Acta Inform. 14 (1980), 391-403) that DTIME(T) ⊆ ATIME((T log log T)/log T). The new approach unifies not only these two previous results, but also the result of Paterson and Valiant (Theoret. Comput. Sci. 2 (1976), 397-400) that Size(T) ⊆ Depth(O(T/log T)).
Conference Paper
We investigate methods for providing easy-to-check proofs of computational effort. Originally intended for discouraging spam, the concept has wide applicability as a method for controlling denial of service attacks. Dwork, Goldberg, and Naor proposed a specific memory-bound function for this purpose and proved an asymptotically tight amortized lower bound on the number of memory accesses any polynomial time bounded adversary must make. Their function requires a large random table which, crucially, cannot be compressed. We answer an open question of Dwork et al. by designing a compact representation for the table. The paradox, compressing an incompressible table, is resolved by embedding a time/space tradeoff into the process for constructing the table from its representation.
Conference Paper
An extension of a result by Grigoryev is used to derive a lower bound on the space-time product required for integer multiplication when realized by straight-line algorithms. If S is the number of temporary storage locations used by a straight-line algorithm on a random-access machine and T is the number of computation steps, then we show that (S+1)T = Ω(n^2) for binary integer multiplication when the basis for the straight-line algorithm is a set of Boolean functions.
Conference Paper
The pebble game on AND/OR dags, a natural generalization of the well-known pebble game on dags, is considered. The following problem is studied: given an AND/OR dag and a number k, decide whether k pebbles are enough to place a pebble on a given vertex. It is shown that the problem is log-space complete for languages accepted in polynomial space.
Article
Since flow of control within a program introduces a level of uncertainty, many of the studies cited have dealt only with straight line programs. This article studies register allocation for straight line programs in the context of a set of graph games. It is shown that several variants of the register allocation problem for straight line programs are polynomial complete.
Article
A reversible Turing machine is one whose transition function is 1:1, so that no instantaneous description (ID) has more than one predecessor. Using a pebbling argument, this paper shows that, for any ε > 0, ordinary multitape Turing machines using time T and space S can be simulated by reversible ones using time O(T^{1+ε}) and space O(S log T), or in linear time and space O(S·T^ε). The former result implies in particular that reversible machines can simulate ordinary ones in quadratic space. These results refer to reversible machines that save their input, thereby insuring a global 1:1 relation between initial and final IDs, even when the function being computed is many-to-one. Reversible machines that instead erase their input can of course compute only 1:1 partial recursive functions and indeed provide a Gödel numbering of such functions. The time/space cost of computing a 1:1 function on such a machine is equal within a small polynomial to the cost of co...
Article
The complexity of the black-white pebbling game has remained an open problem for 30 years. In this paper we show that the black-white pebbling game is PSPACE-complete.
Article
We have already met different types of exponential algorithms. Some of them use only polynomial space, among them in particular the branching algorithms. On the other hand, there are exponential time algorithms needing exponential space, among them in particular the dynamic programming algorithms. In real life applications polynomial space is definitely preferable to exponential space. However, often a “moderate” usage of exponential space can be tolerated if it can be used to speed up the running time. Is it possible by sacrificing a bit of running time to gain in space? In the first section of this chapter we discuss such an interpolation between the two extremes of space complexity for dynamic programming algorithms. In the second section we discuss an opposite technique to gain time by using more space, in particular for branching algorithms.
Article
This paper investigates one possible model of reversible computations, an important paradigm in the context of quantum computing. Introduced by Bennett, a reversible pebble game is an abstraction of reversible computation that allows one to examine the space and time complexity of various classes of problems. We present a technique for proving lower and upper bounds on time and space complexity for several types of graphs. Using this technique we show that the time needed to achieve optimal space for the chain topology is Ω(n lg n) for infinitely many n, and we discuss time-space trade-offs for the chain. Further we show a tight optimal space bound for the binary tree of height h of the form h + Θ(lg* h) and discuss space complexity for the butterfly. These results give evidence that reversible computations need more resources than standard computations. We also show an upper bound on the time and space complexity of the reversible pebble game based on the time and space complexity of the standard pebble game, regardless of the topology of the graph.
Article
We prove that any monotone switching network solving directed connectivity on N vertices must have size at least N^{Ω(log N)}.
Article
While we may have the intuitive idea of one programming language having greater power than another, or of some subset of a language being an adequate 'core' for that language, we find when we try to formalize this notion that there is a serious theoretical difficulty. This lies in the fact that even quite rudimentary languages are nevertheless 'universal' in the following sense. If the language allows us to program with simple arithmetic or list-processing functions then any effective control structure can be simulated, traditionally by encoding a Turing machine computation in some way. In particular, a simple language with some basic arithmetic can express programs for any partial recursive function. Such an encoding is usually quite unnatural and impossibly inefficient. Thus, in order to carry on a practical study of the comparative power of different languages we are led to banish explicit functions and deal instead with abstract, uninterpreted programs or schemas. What follows is a brief report on some preliminary exploration in this area.
Conference Paper
A linear recursive program consists of a set of procedures where each procedure can make at most one recursive call. The conventional stack implementation of recursion requires time and space both proportional to n, the depth of recursion. It is shown that in order to implement linear recursion so as to execute in time n one does not need space proportional to n: n^ε for arbitrarily small ε will do. It is also known that with constant space one can implement linear recursion in time n^2. We show that one can do much better: n^{1+ε} for arbitrarily small ε. We also describe an algorithm that lies between these two: it takes time n·log(n) and space log(n). In this context one can demonstrate a speed-up theorem for linear recursion: given any constant-space program implementing linear recursion, one can effectively find another constant-space program that runs faster almost everywhere.
Article
The performance of the fast Fourier transform algorithm is examined under limitations on computational space and time. It is shown that if the algorithm with n inputs, n a power of two, is implemented with S temporary storage locations where S = o(n/log n), then the computation time T grows faster than n log n. Furthermore, T can grow as fast as n^2 if S = S_min + O(1), where S_min = 1 + log_2 n is the minimum storage necessary. These results are obtained by deriving tight bounds on T versus S and n.
Article
We prove tight lower bounds, of up to n^ε, for the monotone depth of functions in monotone-P. As a result we achieve the separation of the following classes. 1. monotone-NC ≠ monotone-P. 2. For every i ≥ 1, monotone-NC^i ≠ monotone-NC^{i+1}. 3. More generally: For any integer function D(n), up to n^ε (for some ε > 0), we give an explicit example of a monotone Boolean function, that can be computed by polynomial size monotone Boolean circuits of depth D(n), but that cannot be computed by any (fan-in 2) monotone Boolean circuits of depth less than Const · D(n) (for some constant Const). Only a separation of monotone-NC^1 from monotone-NC^2 was previously known. Our argument is more general: we define a new class of communication complexity search problems, referred to below as DART games, and we prove a tight lower bound for the communication complexity of every member of this class. As a result we get lower bounds for the monotone depth of many functions. In...
A new pebble game that characterizes parallel complexity classes
  • H Venkateswaran
  • M Tompa