16th Annual Symposium on Foundations of Computer Science, 1975

Online ISSN: 0272-5428
Publications
Conference Paper
A hierarchy of probabilistic complexity classes generalizing NP has recently emerged in the work of [B], [GMR], and [GS]. The IP hierarchy is defined through the notion of an interactive proof system, in which an all-powerful prover tries to convince a probabilistic polynomial-time verifier that a string x is in a language L. The verifier tosses coins and exchanges messages back and forth with the prover before he decides whether to accept x. This proof system yields "probabilistic" proofs: the verifier may erroneously accept or reject x with small probability. The class IP[f(|x|)] is said to contain L if there exists an interactive proof system with f(|x|) message exchanges (interactions) such that with high probability the verifier accepts x if and only if x ∈ L. Babai [B] showed that all languages recognized by interactive proof systems with a bounded number of interactions can be recognized by interactive proof systems with only two interactions; namely, for every constant k, IP[k] collapses to IP[2]. In this paper, we give evidence that interactive proof systems with an unbounded number of interactions may be more powerful than interactive proof systems with a bounded number of interactions. We show that for any unbounded function f(n) there exists an oracle B such that IP^B[f(n)] ⊄ PH^B. This implies that IP^B[f(n)] ≠ IP^B[2], since IP^B[2] ⊆ Π_2^B for all oracles B. The techniques employed are extensions of the techniques for proving lower bounds on small-depth circuits used in [FSS], [Y] and [H1].
 
Conference Paper
We study the expected behavior of the FFD bin-packing algorithm applied to items whose sizes are distributed in accordance with a Poisson process with rate N on the interval [0,1/2] of item sizes. By viewing the algorithm as a succession of queueing processes we show that the expected wasted space for FFD bin-packing is bounded above by 9.4 bins, independent of N. We extend this upper bound to FFD bin-packing of items generated in accordance with a non-homogeneous Poisson process with a nonincreasing intensity function λ(t) on [0,1/2].
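For concreteness, here is a minimal sketch of the First Fit Decreasing rule itself (sort items by decreasing size, place each into the first bin with room); the item sizes are arbitrary illustrative values, and nothing of the paper's queueing-based analysis is reflected in the code.

```python
def ffd_pack(sizes, capacity=1.0):
    """First Fit Decreasing: sort items by decreasing size, then place
    each item into the first open bin it fits in, opening a new bin
    only when none fits."""
    bins = []  # current load of each open bin
    for s in sorted(sizes, reverse=True):
        for b in range(len(bins)):
            if bins[b] + s <= capacity:
                bins[b] += s
                break
        else:
            bins.append(s)  # no existing bin fits: open a new one
    return bins

print(ffd_pack([0.42, 0.31, 0.05, 0.5, 0.27, 0.18, 0.4]))  # bin loads
```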
 
Conference Paper
We present a family of protocols for flipping a coin over a telephone in a quantum mechanical setting. The family contains protocols with n + 2 messages for all n > 1, and asymptotically achieves a bias of 0.192. The case n = 2 is equivalent to the protocol of Spekkens and Rudolph with bias 0.207, which was the best known protocol. The case n = 3 achieves a bias of 0.199, and n = 8 achieves a bias of 0.193. The analysis of the protocols uses Kitaev's description of coin-flipping as a semidefinite program. We construct an analytical solution to the dual problem which provides an upper bound on the amount that a party can cheat.
 
Conference Paper
We consider worst case time bounds for NP-complete problems including 3-coloring, 3-edge-coloring, and 3-list-coloring. Our algorithms are based on a common generalization of these problems, called symbol-system satisfiability or, briefly, SSS. 3-SAT is equivalent to (2,3)-SSS while the other problems above are special cases of (3,2)-SSS; there is also a natural duality transformation from (a,b)-SSS to (b,a)-SSS. We give a fast algorithm for (3,2)-SSS and use it to improve the time bounds for solving the other problems listed above.
 
Conference Paper
Private information retrieval (PIR) protocols allow a user to retrieve a data item from a database while hiding the identity of the item being retrieved. Specifically, in information-theoretic, k-server PIR protocols the database is replicated among k servers, and each server learns nothing about the item the user retrieves. The cost of such protocols is measured by the communication complexity of retrieving one out of n bits of data. For any fixed k, the complexity of the best protocols prior to our work was O(n^{1/(2k-1)}). Since then several methods were developed in an attempt to beat this bound, but all these methods yielded the same asymptotic bound. In this paper, this barrier is finally broken and the complexity of information-theoretic k-server PIR is improved to n^{O(log log k / (k log k))}. The new PIR protocols can also be used to construct k-query binary locally decodable codes of length exp(n^{O(log log k / (k log k))}), compared to exp(n^{1/(k-1)}) in previous constructions. The improvements presented in this paper apply even for small values of k: the PIR protocols are more efficient than previous ones for every k≥3, and the locally decodable codes are shorter for every k≥4.
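As background for what a PIR protocol looks like, here is the textbook two-server XOR scheme with O(n) communication; it is far weaker than the bounds discussed above and is only meant to illustrate how replication yields information-theoretic privacy. All names are illustrative.

```python
import secrets

def make_queries(n, i):
    """Two-server XOR PIR: query 1 is a uniformly random subset of the
    n positions; query 2 is the same subset with position i toggled.
    Each query alone is uniformly distributed, so neither server learns
    anything about i."""
    q1 = [secrets.randbelow(2) == 1 for _ in range(n)]
    q2 = q1[:]
    q2[i] = not q2[i]  # symmetric difference with {i}
    return q1, q2

def server_answer(db, query):
    """Each server returns the XOR of the database bits its query selects."""
    bit = 0
    for x, selected in zip(db, query):
        if selected:
            bit ^= x
    return bit

db = [1, 0, 1, 1, 0, 0, 1, 0]
i = 5
q1, q2 = make_queries(len(db), i)
# The two subset-XORs differ in exactly position i, so XOR recovers it.
assert server_answer(db, q1) ^ server_answer(db, q2) == db[i]
```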
 
Conference Paper
An algorithm is described that solves the all pairs shortest path problem for a nonnegatively weighted graph. On quite general classes of random graphs, the algorithm has an average time requirement of O(n^2 log n), where n is the number of vertices in the graph.
 
Conference Paper
In this paper we develop a general purpose algorithm that can solve a number of NP-complete problems in time T = O(2^{n/2}) and space S = O(2^{n/4}). The algorithm can be generalized to a family of algorithms whose time and space complexities are related by T·S^2 = O(2^n). The problems it can handle are characterized by a few decomposition axioms, and they include knapsack problems, exact satisfiability problems, set covering problems, etc. The new algorithm has considerable cryptanalytic significance, since it can break the Merkle-Hellman public key cryptosystem whose recommended size is n = 100.
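The space-efficient algorithm itself is more involved; the following sketch shows only the plain meet-in-the-middle baseline for subset sum with T = S = O(2^{n/2}), which illustrates the split-and-combine idea that the paper refines to S = O(2^{n/4}).

```python
from bisect import bisect_left

def subset_sum_mitm(weights, target):
    """Meet-in-the-middle for subset sum: enumerate all subset sums of
    each half, sort one side, and search it for complements of the other."""
    half = len(weights) // 2

    def all_sums(items):
        sums = [0]
        for w in items:
            sums += [s + w for s in sums]  # double the list with each item
        return sums

    right_sums = sorted(all_sums(weights[half:]))
    for s in all_sums(weights[:half]):
        j = bisect_left(right_sums, target - s)
        if j < len(right_sums) and right_sums[j] == target - s:
            return True
    return False

assert subset_sum_mitm([7, 11, 23, 41, 3, 13], 57)   # 41 + 3 + 13
assert not subset_sum_mitm([2, 4, 8], 5)
```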
 
Conference Paper
We prove a lower bound of (5/2)n^2 - 3n for the rank of n×n matrix multiplication over an arbitrary field. Similar bounds hold for the rank of the multiplication in noncommutative division algebras and for the multiplication of upper triangular matrices.
 
Conference Paper
We introduce the following model for generating semi-random 3CNF formulas. First, an adversary is allowed to pick an arbitrary formula with n variables and m clauses. Then, the formula is slightly perturbed at random. Namely, the smoothing operation leaves the variables of the formula unchanged, but flips the polarity of every variable occurrence in the formula independently with probability ε. If the density m/n of a 3CNF formula exceeds a certain threshold value (say, 5ε^{-3}) then the smoothing operation almost surely results in a non-satisfiable formula. We present a randomized polynomial time refutation algorithm that for every sufficiently dense 3CNF formula manages to refute most of its smoothed instantiations. The density requirement for our refutation algorithm is roughly ε^{-2}√(n log log n), which almost matches the density Ω(√n) required by known algorithms for refuting 3CNF formulas that are completely random.
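The smoothing operation is simple enough to state as code; the clause representation below (tuples of signed variable indices) is our own choice, and the density threshold discussed above is of course not captured by it.

```python
import random

def smooth_3cnf(clauses, eps):
    """Smoothing as described above: keep the variables of every clause,
    but flip the polarity of each literal occurrence independently with
    probability eps. A literal is a nonzero int whose sign is its polarity."""
    return [tuple(-lit if random.random() < eps else lit for lit in clause)
            for clause in clauses]

formula = [(1, -2, 3), (-1, 2, -4), (2, 3, 4)]
print(smooth_3cnf(formula, eps=0.1))
```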
 
Conference Paper
We describe a randomized approximation algorithm which takes an instance of MAX 3SAT as input. If the instance (a collection of clauses each of length at most three) is satisfiable, then the expected weight of the assignment found is at least 7/8 of optimal. We provide strong evidence (but not a proof) that the algorithm performs equally well on arbitrary MAX 3SAT instances. Our algorithm uses semidefinite programming and may be seen as a sequel to the MAX CUT algorithm of Goemans and Williamson (1995) and the MAX 2SAT algorithm of Feige and Goemans (1995). Though the algorithm itself is fairly simple, its analysis is quite complicated as it involves the computation of volumes of spherical tetrahedra. Hastad has recently shown that, assuming P≠NP, no polynomial-time algorithm for MAX 3SAT can achieve a performance ratio exceeding 7/8, even when restricted to satisfiable instances of the problem. Our algorithm is therefore optimal in this sense. We also describe a method of obtaining direct semidefinite relaxations of any constraint satisfaction problem of the form MAX CSP(F), where F is a finite family of Boolean functions. Our relaxations are the strongest possible within a natural class of semidefinite relaxations.
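To see where the 7/8 target comes from: a uniformly random assignment satisfies a clause on three distinct variables with probability 1 - (1/2)^3 = 7/8, so it already achieves that ratio in expectation when every clause has exactly three literals. The paper's SDP-based algorithm is needed to handle shorter clauses (and satisfiable instances) as well; the Monte Carlo check below only illustrates the elementary fact.

```python
import random

def random_assignment_sat_fraction(clauses, trials=20000):
    """Estimate the expected fraction of clauses a uniformly random
    assignment satisfies; for exactly-3-literal clauses this is 7/8."""
    n = max(abs(lit) for clause in clauses for lit in clause)
    satisfied = 0
    for _ in range(trials):
        a = [random.random() < 0.5 for _ in range(n + 1)]  # a[v] = value of variable v
        satisfied += sum(any((lit > 0) == a[abs(lit)] for lit in clause)
                         for clause in clauses)
    return satisfied / (trials * len(clauses))

clauses = [(1, 2, 3), (-1, 2, -4), (1, -3, 4), (-2, -3, -4)]
print(random_assignment_sat_fraction(clauses))  # close to 0.875
```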
 
Conference Paper
We give a complexity theoretic classification of homomorphism problems for graphs and, more generally, relational structures obtained by restricting the left hand side structure in a homomorphism. For every class C of structures, let HOM(C, _) be the problem of deciding whether a given structure A ∈ C has a homomorphism to a given (arbitrary) structure B. We prove that, under some complexity theoretic assumption from parameterized complexity theory, HOM(C, _) is in polynomial time if, and only if, the cores of all structures in C have bounded tree-width (as long as the structures in C only contain relations of bounded arity). Due to a well known correspondence between homomorphism problems and constraint satisfaction problems, our classification carries over to the latter.
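For readers unfamiliar with the problem, a brute-force check of HOM for plain graphs (a special case of the relational setting above) makes the definition concrete; it is exponential in the left-hand side structure, which is exactly the side the paper restricts.

```python
from itertools import product

def has_homomorphism(a_nodes, a_edges, b_nodes, b_edges):
    """Does some map h: A -> B send every edge of A to an edge of B?
    Tries all |B|^|A| maps, so this is only feasible for tiny A."""
    b_edge_set = set(b_edges)
    for image in product(b_nodes, repeat=len(a_nodes)):
        h = dict(zip(a_nodes, image))
        if all((h[u], h[v]) in b_edge_set for (u, v) in a_edges):
            return True
    return False

# A triangle maps homomorphically to itself, but not to a single edge
# (that would be a proper 2-coloring of an odd cycle).
tri = [(0, 1), (1, 2), (2, 0), (1, 0), (2, 1), (0, 2)]  # symmetric closure
edge = [(0, 1), (1, 0)]
print(has_homomorphism([0, 1, 2], tri, [0, 1, 2], tri))   # True
print(has_homomorphism([0, 1, 2], tri, [0, 1], edge))     # False
```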
 
Conference Paper
We show that the permutation group membership problem can be solved in depth (log n)^3 on a Monte Carlo Boolean circuit of polynomial size in the restricted case in which the group is abelian. We also show that this restricted problem is NC^1-hard for NSPACE(log n).
 
Conference Paper
We reexamine what it means to compute Nash equilibria and, more generally, what it means to compute a fixed point of a given Brouwer function, and we investigate the complexity of the associated problems. Specifically, we study the complexity of the following problem: given a finite game, Γ, with 3 or more players, and given ε > 0, compute a vector x' (a mixed strategy profile) that is within distance ε (say, in l_∞) of some (exact) Nash equilibrium. We show that approximation of an (actual) Nash equilibrium for games with 3 players, even to within any non-trivial constant additive factor ε < 1/2 in just one desired coordinate, is at least as hard as the long-standing square-root sum problem, as well as more general arithmetic circuit decision problems, and thus that even placing the approximation problem in NP would resolve a major open problem in the complexity of numerical computation. Furthermore, we show that the (exact or approximate) computation of Nash equilibria for 3 or more players is complete for the class of search problems, which we call FIXP, that can be cast as fixed point computation problems for functions represented by algebraic circuits (straight line programs) over the basis {+, *, -, /, max, min}, with rational constants. We show that the linear fragment of FIXP equals PPAD. Many problems in game theory, economics, and probability theory can be cast as fixed point problems for such algebraic functions. We discuss several important such problems: computing the value of Shapley's stochastic games, and the simpler games of Condon; extinction probabilities of branching processes; and termination probabilities of stochastic context-free grammars and of Recursive Markov Chains. We show that for some of them the approximation, or even exact computation, problem can be placed in PPAD, while others are at least as hard as the square-root sum and arithmetic circuit decision problems.
 
Conference Paper
Property testing is a relaxation of classical decision problems which aims at distinguishing between functions having a predetermined property and functions being far from any function having the property. In this paper we present a novel framework for analyzing property testing algorithms with one-sided error. Our framework is based on a connection of property testing and a new class of problems which we call abstract combinatorial programs. We show that if the problem of testing a property can be reduced to an abstract combinatorial program of small dimension, then the property has an efficient tester. We apply our framework to a variety of classical combinatorial problems. Among others, we present efficient property testing algorithms for geometric clustering problems, the reversal distance problem, and graph and hypergraph coloring problems. We also prove that, informally, any hereditary graph property can be efficiently tested if and only if it can be reduced to an abstract combinatorial program of small size. Our framework allows us to analyze all our testers in a unified way and the obtained complexity bounds either match or improve the previously known bounds. We believe that our framework will help to better understand the structure of efficiently testable properties.
 
Conference Paper
We prove that random edge, the simplex algorithm that always chooses a random improving edge to proceed on, can take a mildly exponential number of steps in the model of abstract objective functions (introduced by K. W. Hoke (1998) and by G. Kalai (1988) under different names). We define an abstract objective function on the n-dimensional cube for which the algorithm, started at a random vertex, needs at least exp(const · n^{1/3}) steps with high probability. The best previous lower bound was quadratic. So in order for random edge to succeed in polynomial time, geometry must help.
 
Conference Paper
A denotational semantics is given for a distributed language based on communication (CSP). The semantics uses linear sequences of communications to record computations; for any well formed program segment the semantics is a relation between attainable states and the communication sequences needed to attain these states. In binding two or more processes we match and merge the communication sequences assumed by each process to obtain a sequence and state of the combined process. The approach taken here is distinguished by relatively simple semantic domains and ordering.
 
Conference Paper
We study algebras whose carriers are partially ordered sets and whose operations are monotone, as well as algebras whose carriers are complete partial orders and whose operations are continuous. A quotient construction is provided for both types of algebras. The notion of a variety of algebras is defined and it is shown that the analogue of Birkhoff's variety theorem holds for ordered algebras but not for continuous algebras. The results presented are a good first step towards a theory of ordered data types and a study of families of interpretations of schemas.
 
Conference Paper
From the users' point of view, resource management schemes may be considered as an abstract data type. An abstract specification of such schemes using axioms holding in partial algebras and relatively distributed implementations (expressed as CSP programs) are given and analyzed. Then the idea of probabilistic implementation of guard scheduling is suggested, which allows completely distributed symmetric programs. It frees the designer of an algorithm from looking for specific probabilistic algorithms, by allowing the compiler to generate probabilistic target code from nonprobabilistic source code.
 
Conference Paper
Zero-knowledge proofs are fascinating and extremely useful constructs. Their fascinating nature is due to their seemingly contradictory definition; zero-knowledge proofs are both convincing and yet yield nothing beyond the validity of the assertion being proven. Their applicability in the domain of cryptography is vast; they are typically used to force malicious parties to behave according to a predetermined protocol. In addition to their direct applicability in cryptography, zero-knowledge proofs serve as a good benchmark for the study of various problems regarding cryptographic protocols (e.g., "secure composition of protocols" and the "use of the adversary's program within the proof of security"). We present the basic definitions and results regarding zero-knowledge as well as some recent developments regarding this notion.
 
Conference Paper
This paper gives new results, and presents old ones in a unified formalism, concerning Church-Rosser theorems for rewriting systems. Part 1 gives abstract confluence properties, depending solely on axioms for a binary relation called reduction. Results of Newman and others are presented in a unified formalism. Systematic use of a powerful induction principle makes it possible to generalize results of Sethi on reduction modulo equivalence. Part 2 concerns simplification systems operating on terms of a first-order logic. Results by Rosen and by Knuth and Bendix are extended to give several new criteria for confluence of these systems, using the results of part 1. It is then shown how these results yield efficient methods for the mechanization of equational theories.
 
Conference Paper
We prove the first non-trivial communication complexity lower bound for the problem of estimating the edit distance (aka Levenshtein distance) between two strings. A major feature of our result is that it provides the first setting in which the complexity of computing the edit distance is provably larger than that of Hamming distance. Our lower bound exhibits a trade-off between approximation and communication, asserting, for example, that protocols with O(1) bits of communication can only obtain approximation α ≥ Ω(log d / log log d), where d is the length of the input strings. This case of O(1) communication is of particular importance, since it captures constant-size sketches as well as embeddings into spaces like L_1 and squared-L_2, two prevailing algorithmic approaches for dealing with edit distance. Furthermore, the bound holds not only for strings over the alphabet Σ = {0, 1}, but also for strings that are permutations (called the Ulam metric). Besides being applicable to a much richer class of algorithms than all previous results, our bounds are near-tight in at least one case, namely that of embedding permutations into L_1. The proof uses a new technique that relies on Fourier analysis in a rather elementary way.
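For reference, the metric whose sketching complexity is being bounded is computed by the classical dynamic program below (this is standard background, not part of the paper's contribution).

```python
def edit_distance(s, t):
    """Levenshtein distance via the textbook dynamic program, with a
    rolling one-row table: dp[j] = distance between s[:i] and t[:j]."""
    dp = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        prev, dp[0] = dp[0], i
        for j, ct in enumerate(t, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,          # delete cs
                                     dp[j - 1] + 1,      # insert ct
                                     prev + (cs != ct))  # substitute
    return dp[-1]

assert edit_distance("kitten", "sitting") == 3
```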
 
Conference Paper
In this paper we consider solutions to the static dictionary problem on AC^0 RAMs, i.e. random access machines where the only restriction on the finite instruction set is that all computational instructions are in AC^0. Our main result is a tight upper and lower bound of Θ(√(log n/log log n)) on the time for answering membership queries in a set of size n when reasonable space is used for the data structure storing the set; the upper bound can be obtained using O(n) space, and the lower bound holds even if we allow space 2^{polylog n}. Several variations of this result are also obtained. Among others, we show a tradeoff between time and circuit depth under the unit-cost assumption: any RAM instruction set which permits a linear space, constant query time solution to the static dictionary problem must have an instruction of depth Ω(log w/log log w), where w is the word size of the machine (and the logarithm of the size of the universe). This matches the depth of multiplication and integer division, used in the perfect hashing scheme by M.L. Fredman, J. Komlos and E. Szemeredi (1984).
 
Conference Paper
The acceleration of matrix multiplication (MM) is based on the combination of the method of algebraic field extension due to D. Bini, M. Capovani, G. Lotti, F. Romani and S. Winograd and of trilinear aggregating, uniting and canceling due to the author. A fast algorithm of O(N^2.7378) complexity for N × N matrix multiplication is derived. With A. Schoenhage's theorem about partial and total MM, this approach gives the exponent 2.6054 at the price of a serious increase of the constant.
 
Conference Paper
We give a parallel RAM algorithm for simulating a deterministic auxiliary pushdown machine. If the pushdown machine uses space s(n) ≥ log n and time 2^{O(s(n))} then our parallel simulation algorithm takes time O(s(n)) and requires 2^{O(s(n))} processors. Thus any deterministic context free language is accepted in time O(log n) by our parallel RAM algorithm using a polynomial number of processors. (Our algorithm can easily be extended to also accept the LR(k) languages in time O(log n) with 2^{O(k)} processors.) Our simulation algorithm is near optimal for parallel RAMs, since we show that the language accepted in time T(n) by a parallel RAM is accepted by a deterministic auxiliary pushdown machine with space T(n) and time 2^{O(T(n)^2)}.
 
Conference Paper
We address efficient access to bandwidth in WDM (wavelength division multiplexing) optical networks. We consider tree topologies, ring topologies, as well as trees of rings. These are topologies of concrete practical relevance for which undirected underlying graph models have been studied before by P. Raghavan and E. Upfal (1993). As opposed to previous studies (A. Aggarwal et al., 1993; R. Pankaj, 1992; P. Raghavan and E. Upfal, 1993), we consider directed graph models. Directedness of fiber links is dictated by physical directedness of optical amplifiers. For trees, we give a polynomial time routing algorithm that satisfies requests of maximum load L_max per fiber link using no more than 15L_max/8 ≤ 15OPT/8 optical wavelengths. This improves a 2L_max scheme implicit in the work of P. Raghavan and E. Upfal, obtained by extending their undirected methods to our directed model. Alternatively stated, for fixed W wavelength technology, we can load the network up to L_max = 8W/15 rather than W/2. In engineering terms, this is a so called “6.66% increase of bandwidth” and it is considered substantial. For rings, the approximation factor is 2OPT. For trees of rings, the approximation factor is 15OPT/4. Technically, optical routing requirements give rise to novel coloring paradigms. Our algorithms involve matchings and multicolored alternating cycles, combined with detailed potential and averaging analysis.
 
Conference Paper
A conflict of multiplicity k occurs when k stations transmit simultaneously to a multiple access channel. As a result, all stations receive feedback indicating whether k is 0, 1, or is ≥ 2. If k = 1 the transmission succeeds, whereas if k ≥ 2 all the transmissions fail. In general, no a priori information about k is available. We present and analyze an algorithm that enables the conflicting stations to cooperatively compute a statistical estimate of k, at small cost, as a function of the feedback elicited during its execution. An algorithm to resolve a conflict among two or more stations controls the retransmissions of the conflicting stations so that each eventually transmits singly to the channel. Combining our estimation algorithm with a binary tree algorithm leads to a hybrid algorithm that resolves conflicts faster on average than any other reported to date.
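The binary tree component mentioned above admits a very short simulation; the sketch below counts channel slots for the classical tree algorithm alone, without the paper's multiplicity estimator, and the coin-flip splitting is the standard textbook variant.

```python
import random

def resolve(stations):
    """Binary tree conflict resolution: on a collision, each involved
    station flips a coin; heads retransmit first, tails wait until the
    heads group is fully resolved. Returns the number of slots used
    (an idle slot counts, as does a successful single transmission)."""
    slots = 1
    if len(stations) >= 2:  # collision: split the group and recurse
        heads = {s for s in stations if random.random() < 0.5}
        tails = [s for s in stations if s not in heads]
        slots += resolve(list(heads)) + resolve(tails)
    return slots

print(resolve(list(range(10))))  # slots to resolve a 10-way conflict
```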
 
Conference Paper
We describe a data structure for representing a set of n items from a universe of m items, which uses space n+o(n) and accommodates membership queries in constant time. Both the data structure and the query algorithm are easy to implement.
 
Conference Paper
Consider the finite regular language L_n = {w0 | w ∈ {0,1}*, |w| ≤ n}. A. Ambainis et al. (1999) showed that while this language is accepted by a deterministic finite automaton of size O(n), any one-way quantum finite automaton (QFA) for it has size 2^{Ω(n/log n)}. This was based on the fact that the evolution of a QFA is required to be reversible. When arbitrary intermediate measurements are allowed, this intuition breaks down. Nonetheless, we show a 2^{Ω(n)} lower bound for such QFA for L_n, thus also improving the previous bound. The improved bound is obtained from simple entropy arguments based on A.S. Holevo's (1973) theorem. This method also allows us to obtain an asymptotically optimal (1-H(p))n bound for the dense quantum codes (random access codes) introduced by A. Ambainis et al. We then turn to Holevo's theorem, and show that in typical situations, it may be replaced by a tighter and more transparent in-probability bound.
 
Conference Paper
The contribution of this paper is two-fold. First, we describe two ways to construct multivalued atomic n-writer n-reader registers. The first solution uses atomic 1-writer 1-reader registers and unbounded tags; the other solution uses atomic 1-writer n-reader registers and bounded tags. The second part of the paper develops a general methodology to prove atomicity, by identifying a set of criteria which guarantee an effective construction for the required atomic mapping. We apply the method to prove atomicity of the two implementations of atomic multiwriter multireader registers.
 
Conference Paper
We consider the problem of storing items in a warehouse (modeled as an undirected graph) where a server has to visit items over time, with the goal of minimizing the total distance traversed by the server. Special cases of this problem include the management of a real industrial stacker crane warehouse, automatic robot run warehouses, disk track optimization to minimize access time, managing two dimensional memory (bubble memory and mass storage systems), doubly linked list management, and the process migration problem. The static version of this problem assumes some known probability distribution on the access patterns. We initiate the study of the dynamic version of the problem, where the robot may rearrange the warehouse to deal efficiently with future events. We require no statistical assumptions on the access pattern, and give competitive algorithms that rearrange the warehouse over time to deal efficiently with the true access patterns. We give non-trivial upper bounds for the general problem, along with some interesting lower bounds. In addition, we model realistic data access patterns on disk storage by considering two practically significant scenarios: access to some database via dynamically changing alternative indices and access patterns derived from root to leaf traversals of some (unknown) tree structure. In both cases we give greatly improved competitive ratios.
 
Conference Paper
We consider random access machines with a multiplication operation and the added capability of computing logical operations on registers: a register is considered both as an integer and as a vector of bits, and both arithmetic and Boolean operations may be used on the same register. We prove that, counting one operation as a unit of time and considering the machines as acceptors, deterministic and nondeterministic polynomial time acceptable languages are the same, and are exactly the languages recognizable in polynomial tape by Turing machines. We observe that the same measure on machines without multiplication is polynomially related to Turing machine time; thus the added computational power due to multiplication in random access machines is equivalent to the computational power which polynomially tape-bounded Turing machine computations have over polynomially time-bounded computations. Therefore, in this formulation, it is not harder to multiply than to add if and only if PTAPE = PTIME for Turing machines. We also discuss other instruction sets for random access machines and their computational power.
 
Conference Paper
We study problems in multiobjective optimization, in which solutions to a combinatorial optimization problem are evaluated with respect to several cost criteria, and we are interested in the trade-off between these objectives (the so-called Pareto curve). We point out that, under very general conditions, there is a polynomially succinct curve that ε-approximates the Pareto curve, for any ε>0. We give a necessary and sufficient condition under which this approximate Pareto curve can be constructed in time polynomial in the size of the instance and 1/ε. In the case of multiple linear objectives, we distinguish between two cases: when the underlying feasible region is convex, then we show that approximating the multi-objective problem is equivalent to approximating the single-objective problem. If however the feasible region is discrete, then we point out that the question reduces to an old and recurrent one: how does the complexity of a combinatorial optimization problem change when its feasible region is intersected with a hyperplane with small coefficients? We report some interesting new findings in this domain. Finally, we apply these concepts and techniques to formulate and solve approximately a cost-time-quality trade-off for optimizing access to the World-Wide Web, in a model first studied by Etzioni et al. (1996) (which was actually the original motivation for this work).
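The succinctness observation has a one-screen illustration: snapping every objective value to a power of (1 + ε) and keeping one representative per cell yields an ε-approximate Pareto set of polynomial size. This sketch (minimization in every coordinate, values at least 1) is only the counting argument, not the paper's constructibility result.

```python
import math

def eps_pareto(points, eps):
    """Keep one representative point per cell of the (1+eps)-geometric
    grid. With d objectives taking values in [1, M], at most
    (log M / log(1+eps))^d cells are nonempty, so the result is
    polynomially succinct in the input size and 1/eps."""
    cells = {}
    for p in points:
        cell = tuple(int(math.log(v) // math.log(1 + eps)) for v in p)
        if cell not in cells or p < cells[cell]:
            cells[cell] = p
    return list(cells.values())

print(eps_pareto([(10, 200), (10.4, 195), (80, 40), (82, 41)], eps=0.1))
```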
 
Conference Paper
We consider the problem of designing a minimum cost access network to carry traffic from a set of endnodes to a core network. A set of trunks of K differing types are available for leasing or buying. Some trunk-types have a high initial overhead cost but a low cost per unit bandwidth. Others have a low overhead cost but a high cost per unit bandwidth. When the central core is given, we show how to construct an access network whose cost is within a factor of O(K^2) of optimal, under weak assumptions on the cost structure. In contrast with previous bounds, this bound is independent of the network and the traffic. Typically, the value of K is small. Our approach uses a linear programming relaxation and is motivated by a rounding technique of Shmoys, Tardos and Aardal (1997). Our techniques extend to a more complex situation in which the core is not given a priori. In this case we aim to minimize the switch cost of the core in addition to the trunk cost of the access network. We provide the same performance bound.
 
Conference Paper
The authors consider a synchronous model of distributed computation in which n nodes communicate via point-to-point messages, subject to the following constraints: (i) in a single “step”, a node can only send or receive O(log n) words, and (ii) communication is unreliable in that a constant fraction of all messages are lost at each step due to node and/or link failures. They design and analyze a simple local protocol for providing fast concurrent access to shared objects in this faulty network environment. In the protocol, clients use a hashing-based method to access shared objects. When a large number of clients attempt to read a given object at the same time, the object is rapidly replicated to an appropriate number of servers. Once the necessary level of replication has been achieved, each remaining request for the object is serviced within O(1) expected steps. The protocol has practical potential for supporting high levels of concurrency in distributed file systems over wide area networks.
 
Conference Paper
Two methods are given for obtaining lower bounds on the cost of accessing a sequence of nodes in a symmetrically ordered binary search tree, in a model where rotations can be done on the tree and the entire sequence is known before accessing begins (but the accesses must be done in the order given). For example, it can be proven that the bit-reversal permutation requires Θ(n log n) time to access in this model. It is also shown that the expected cost of accessing random sequences in this model is the same as it is for the case where the tree is static.
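The bit-reversal sequence mentioned above is easy to generate, which makes the lower bound pleasantly concrete: for n = 2^k keys, access key i in the order given by reversing i's k-bit binary representation.

```python
def bit_reversal_sequence(k):
    """Access order for the Theta(n log n) lower bound: reverse the
    k-bit binary representation of each index i in 0..2^k - 1."""
    return [int(format(i, f"0{k}b")[::-1], 2) for i in range(1 << k)]

print(bit_reversal_sequence(3))  # [0, 4, 2, 6, 1, 5, 3, 7]
```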
 
Conference Paper
We provide evidence for the proposition that the computational complexity of individual problems, and of whole complexity classes, hinges on the existence of certain solvable polynomial systems that are unlikely to be encountered other than by systematic explorations for them. We consider a minimalist version of Cook's 3CNF problem, namely that of monotone planar 3CNF formulae where each variable occurs twice. We show that counting the number of solutions of these modulo 2 is ⊕P-complete (hence NP-hard) but counting them modulo 7 is polynomial time computable (sic). We also show a similar dichotomy for a vertex cover problem. To derive completeness results we use a new holographic technique for proving completeness results in ⊕P for problems that are in P. For example, we can show in this way that ⊕2CNF, the parity problem for 2CNF, is ⊕P-complete. To derive efficient algorithms we use computer algebra systems to find appropriate holographic gates. In order to explore the limits of holographic techniques we define the notion of an elementary matchgrid algorithm to capture a natural but restricted use of them. We show that for the NP-complete general 3CNF problem no such elementary matchgrid algorithm can exist. We observe, however, that it remains open for many natural #P-complete problems whether such elementary matchgrid algorithms exist, and for the general CNF problem whether non-elementary matchgrid algorithms exist.
 
Conference Paper
A formal framework is presented in which to explore the complexity issues of data structures which accommodate various types of range queries. Within this framework, a systematic and reasonably tractable method for assessing inherent complexity is developed. Included among the interesting results are the following: the fact that non-linear lower bounds are readily accessible, and the existence of a complexity gap between linear time and n log n time.
 
Conference Paper
The paper deals with achievability of fault tolerant goals in a completely asynchronous distributed system. Fischer, Lynch, and Paterson [FLP] proved that in such a system "nontrivial agreement" cannot be achieved even in the (possible) presence of a single "benign" fault. In contrast, we exhibit two pairs of goals that are achievable even in the presence of up to t < n/2 faulty processors, contradicting the widely held assumption that no nontrivial goals are attainable in such a system. The first pair deals with renaming processors so as to reduce the size of the initial name space. When only uniqueness is required of the new names, we present a lower bound of n + 1 on the size of the new name space, and a renaming algorithm which establishes an upper bound of n + t. In case the new names are required also to preserve the original order, a tight bound of 2^t(n - t + 1) - 1 is obtained. The second pair of goals deals with the multi-slot critical section problem. We present algorithms for controlled access to a critical section. As for the number of slots required, a tight bound of t + 1 is proved in case the slots are identical. In the case of distinct slots the upper bound is 2t + 1.
 
Conference Paper
We consider the problem of preprocessing an edge-weighted tree T in order to quickly answer queries of the following type: does a given edge e belong in the minimum spanning tree of T ∪ {e}? Whereas the offline minimum spanning tree verification problem admits a lovely linear time solution, we demonstrate an inherent inverse-Ackermann type tradeoff in the online MST verification problem. In particular, any scheme that answers queries in t comparisons must invest Ω(n log λ_t(n)) time preprocessing the tree, where λ_t is the inverse of the t-th row of Ackermann's function. This implies a query lower bound of Ω(α(n)) for the case of linear preprocessing time. We also show that our lower bound is tight to within a factor of 2 in the t parameter.
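Unpacking the query: by the cycle property of minimum spanning trees, a new edge (u, v) of weight w belongs in the MST of T ∪ {e} exactly when w beats the maximum edge weight on the tree path from u to v. The naive O(n)-per-query check below defines the problem; the paper's subject is how much preprocessing is needed to answer it in few comparisons.

```python
def belongs_in_mst(tree_adj, u, v, w):
    """True iff w is smaller than the maximum edge weight on the tree
    path from u to v (cycle property). tree_adj maps each node to a
    list of (neighbor, weight) pairs of the tree T."""
    def path_max(cur, target, parent, best):
        if cur == target:
            return best
        for nxt, wt in tree_adj[cur]:
            if nxt != parent:
                found = path_max(nxt, target, cur, max(best, wt))
                if found is not None:
                    return found
        return None  # target not in this subtree

    return w < path_max(u, v, None, 0)

tree = {1: [(2, 5)], 2: [(1, 5), (3, 2)], 3: [(2, 2)]}
print(belongs_in_mst(tree, 1, 3, 4))  # True: 4 < max(5, 2)
```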
 
[Figure captions: Braess's paradox (the addition of an intuitively helpful link can negatively impact all users of a congested network); strings and springs (severing a taut string results in the rise of a heavy weight); a simple bad example; the construction in the proof of Theorem 3.1 of the modified latency function ℓ̄_e from the original latency function ℓ_e and the Nash flow value f_e, with solid lines denoting graphs of functions.]
Conference Paper
We consider the problem of routing traffic to optimize the performance of a congested network. We are given a network, a rate of traffic between each pair of nodes, and a latency function for each edge specifying the time needed to traverse the edge given its congestion; the objective is to route traffic such that the sum of all travel times (the total latency) is minimized. In many settings, including the Internet and other large-scale communication networks, it may be expensive or impossible to regulate network traffic so as to implement an optimal assignment of routes. In the absence of regulation by some central authority, we assume that each network user routes its traffic on the minimum-latency path available to it, given the network congestion caused by the other users. In general such a “selfishly motivated” assignment of traffic to paths will not minimize the total latency; hence, this lack of regulation carries the cost of decreased network performance. We quantify the degradation in network performance due to unregulated traffic. We prove that if the latency of each edge is a linear function of its congestion, then the total latency of the routes chosen by selfish network users is at most 4/3 times the minimum possible total latency (subject to the condition that all traffic must be routed). We also consider the more general setting in which edge latency functions are assumed only to be continuous and non-decreasing in the edge congestion.
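The 4/3 bound is tight already on Pigou's two-link example, which is worth working out: one unit of traffic, one link of constant latency 1 and one of latency equal to its congestion. Selfish users all take the second link (total latency 1), while the optimum splits the traffic. The snippet below just evaluates that calculation; it is an illustration of the bound, not the paper's proof.

```python
# One unit of traffic; a fraction x uses the link with latency l(x) = x,
# the rest uses the constant-latency-1 link. Total latency:
cost = lambda x: x * x + (1 - x)

nash_cost = cost(1.0)                                # everyone on the x link: 1.0
opt_cost = min(cost(i / 1000) for i in range(1001))  # minimized at x = 1/2: 0.75
print(nash_cost / opt_cost)                          # -> 4/3
```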
 
Conference Paper
Most on-line analysis assumes that, at each time step, all relevant information up to that time step is available and a decision has an immediate effect. In many on-line problems, however, the time at which relevant information becomes available and the time at which a decision takes effect may be decoupled. For example, when making an investment, one might not have completely up-to-date information on market prices. Similarly, a buy or sell order might only be executed some time later in the future. We introduce and explore natural delayed models for several well-known on-line problems. Our analyses demonstrate the importance of considering timeliness in determining the competitive ratio of an on-line algorithm. For many problems, we demonstrate that there exist algorithms with small competitive ratios even when large delays affect the timeliness of information and the effect of decisions.
 
Conference Paper
This work applies the theory of knowledge in distributed systems to the design of fault-tolerant protocols for problems involving coordinated simultaneous actions in synchronous systems. We give a simple method for transforming specifications of such problems into high-level protocols programmed using explicit tests of whether certain facts are common knowledge. The resulting protocols are optimal in all runs: for every possible input to the system and pattern of processor failures, they are guaranteed to perform the simultaneous actions as soon as any other protocol can possibly perform them. A careful analysis of when facts become common knowledge shows how to efficiently implement these protocols in many variants of the omissions failure model. In the generalized omissions model, however, it is shown that any protocol that is optimal in this sense must require co-NP-hard computations. The analysis in this paper exposes subtle differences between the failure models, including the precise point at which this gap in complexity occurs.
 
Conference Paper
In this paper we present a new approximation algorithm for the Max Acyclic Subgraph problem. Given an instance where the maximum acyclic subgraph contains a 1/2 + δ fraction of all edges, our algorithm finds an acyclic subgraph with a 1/2 + Ω(δ/log n) fraction of all edges.
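For contrast, the trivial 1/2 guarantee that this result improves upon fits in a few lines: fix any vertex order and keep the larger of the forward and backward edge sets; each set is acyclic and one contains at least half the edges.

```python
def half_acyclic_subgraph(edges):
    """Baseline 1/2-approximation for Max Acyclic Subgraph: with vertices
    ordered by their labels, both the forward and the backward edge sets
    are acyclic, and the larger one has at least half of all edges."""
    forward = [(u, v) for (u, v) in edges if u < v]
    backward = [(u, v) for (u, v) in edges if u > v]
    return max(forward, backward, key=len)

edges = [(0, 1), (1, 2), (2, 0), (3, 1), (2, 3)]
print(half_acyclic_subgraph(edges))  # keeps 3 of the 5 edges
```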
 
Conference Paper
For every ε > 0, we show that the (acyclic) job shop problem cannot be approximated within ratio O(log^{1+ε} lb), unless NP has quasi-polynomial Las Vegas algorithms, where lb denotes a trivial lower bound on the optimal value. This almost matches the best known results for acyclic job shops, since an O(log^{1+ε} lb)-approximate solution can be obtained in polynomial time for every ε > 0. Recently, a PTAS was given for the job shop problem, where the number of machines and the number of operations per job are assumed to be constant. Under P ≠ NP, and when the number μ of operations per job is a constant, we provide an inapproximability result whose value grows with μ to infinity. Moreover, we show that the problem with two machines and the preemptive variant with three machines have no PTAS, unless NP has quasi-polynomial algorithms. These results show that the restrictions on the number of machines and operations per job are necessary to obtain a PTAS. In summary, the presented results close many gaps in our understanding of the hardness of the job shop problem and resolve (negatively) several open problems in the literature.
 
Conference Paper
We show that the problem of evaluating acyclic Boolean database queries is LOGCFL-complete and thus highly parallelizable. We present a parallel database algorithm solving this problem with a logarithmic number of parallel join operations. It follows from our main result that the acyclic versions of the following important database and AI problems are LOGCFL-complete: the query output tuple problem for conjunctive queries, conjunctive query containment, clause subsumption, and constraint satisfaction.
 
Conference Paper
We present a near-optimal reduction from approximately counting the cardinality of a discrete set to approximately sampling elements of the set. An important application of our work is to approximating the partition function Z of a discrete system, such as the Ising model, matchings or colorings of a graph. The standard approach to estimating the partition function Z(β*) at some desired inverse temperature β* is to define a sequence, which we call a cooling schedule, β_0 = 0 < β_1 < ... < β_ℓ = β*, where Z(0) is trivial to compute and the ratios Z(β_{i+1})/Z(β_i) are easy to estimate by sampling from the distribution corresponding to Z(β_i). Previous approaches required a cooling schedule of length O*(ln A), where A = Z(0), thereby ensuring that each ratio Z(β_{i+1})/Z(β_i) is bounded. We present a cooling schedule of length ℓ = O*(√(ln A)). For well-studied problems such as estimating the partition function of the Ising model, or approximating the number of colorings or matchings of a graph, our cooling schedule is of length O*(√n) and the total number of samples required is O*(n). This implies an overall savings of a factor of roughly n in the running time of the approximate counting algorithm compared to the previous best approach. A similar improvement in the length of the cooling schedule was recently obtained by Lovász and Vempala in the context of estimating the volume of convex bodies. While our reduction is inspired by theirs, the discrete analogue of their result turns out to be significantly more difficult. Whereas a fixed schedule suffices in their setting, we prove that in the discrete setting we need an adaptive schedule, i.e., the schedule depends on Z. More precisely, we prove that any non-adaptive cooling schedule has length at least O*(ln A), and we present an algorithm to find an adaptive schedule of length O*(√(ln A)) and a nearly matching lower bound.
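The telescoping identity behind any cooling schedule, Z(β*) = Z(0) · ∏ Z(β_{i+1})/Z(β_i) with each ratio estimated from samples at β_i, can be demonstrated on a toy system; the five energies, the fixed schedule, and the exact Gibbs sampler below are all stand-ins (a real application would use a Markov-chain sampler and, per the result above, an adaptive schedule).

```python
import math, random

H = [0, 1, 1, 2, 3]                                    # energies of a toy 5-state system
exact_Z = lambda b: sum(math.exp(-b * h) for h in H)   # ground truth for comparison

def gibbs_energy(b):
    """Energy of a state drawn from the Gibbs distribution at inverse
    temperature b (exact sampling, feasible only for toy systems)."""
    weights = [math.exp(-b * h) for h in H]
    return random.choices(H, weights=weights)[0]

schedule = [0.0, 0.5, 1.0, 1.5, 2.0]                   # a short fixed cooling schedule
z = float(len(H))                                      # Z(0) = number of states
for b0, b1 in zip(schedule, schedule[1:]):
    # E[exp(-(b1-b0) * H(x))] over x ~ Gibbs(b0) equals Z(b1)/Z(b0).
    samples = [math.exp(-(b1 - b0) * gibbs_energy(b0)) for _ in range(20000)]
    z *= sum(samples) / len(samples)

print(z, exact_Z(2.0))  # estimate vs. exact value of Z(beta* = 2)
```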
 
Conference Paper
We introduce the notion of non-malleable non-interactive zero-knowledge (NIZK) proof systems. We show how to transform any ordinary NIZK proof system into one that has strong non-malleability properties. We then show that the elegant encryption scheme of Naor and Yung (1990) can be made secure against the strongest form of chosen-ciphertext attack by using a non-malleable NIZK proof instead of a standard NIZK proof. Our encryption scheme is simple to describe and works in the standard cryptographic model under general assumptions. The encryption scheme can be realized assuming the existence of trapdoor permutations.
 
Top-cited authors
Oded Goldreich
  • Weizmann Institute of Science
Alexandr Andoni
Dvora Dolev
  • Weizmann Institute of Science
Michael Luby
  • BitRipple Inc.
Russell Impagliazzo
  • University of California, San Diego