October 2016
·
24 Reads
·
47 Citations
October 2013
·
75 Reads
·
9 Citations
IEEE Annual Symposium on Foundations of Computer Science (FOCS)
An approximate computation of a Boolean function by a circuit or switching network is a computation in which the function is computed correctly on the majority of the inputs (rather than on all inputs). Besides being interesting in their own right, lower bounds for approximate computation have proved useful in many subareas of complexity theory, such as cryptography and derandomization. Lower bounds for approximate computation are also known as correlation bounds or average case hardness. In this paper, we obtain the first average case monotone depth lower bounds for a function in monotone P. We tolerate errors that are asymptotically the best possible for monotone circuits. Specifically, we prove average case exponential lower bounds on the size of monotone switching networks for the GEN function. As a corollary, we separate the monotone NC hierarchy in the case of errors, a result which was previously only known for exact computations. Our proof extends and simplifies the Fourier analytic technique due to Potechin and further developed by Chan and Potechin. As a corollary of our main lower bound, we prove that the communication complexity approach for monotone depth lower bounds does not naturally generalize to the average case setting.
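The GEN function referred to above is standard and short to state; the sketch below (my own naming, not taken from the paper) computes it directly: given a set of triples over {1, ..., n}, GEN asks whether n lies in the closure of {1} under the rule "if a and b are generated and (a, b, c) is a triple, then c is generated".

    # Illustrative sketch of the GEN function (standard definition; names are mine).
    def gen(n, triples):
        generated = {1}
        changed = True
        while changed:
            changed = False
            for (a, b, c) in triples:
                if a in generated and b in generated and c not in generated:
                    generated.add(c)
                    changed = True
        return n in generated

    # 1 and 1 generate 2, then 1 and 2 generate 3.
    print(gen(3, [(1, 1, 2), (1, 2, 3)]))  # True

The paper's lower bounds are for monotone switching networks computing this function, which, as the abstract notes, lies in monotone P.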
August 2012
·
128 Reads
·
11 Citations
ACM Transactions on Computation Theory
Subramanian [1990] defined the complexity class CC as the set of problems log-space reducible to the comparator circuit value problem (CCV). He and Mayr showed that NL ⊆ CC ⊆ P, and proved that in addition to CCV several other problems are complete for CC, including the stable marriage problem and finding the lexicographically first maximal matching in a bipartite graph. Although the class has not received much attention since then, we are interested in CC because we conjecture that it is incomparable with the parallel class NC, which also satisfies NL ⊆ NC ⊆ P, and note that this conjecture implies that none of the CC-complete problems has an efficient polylog-time parallel algorithm. We provide evidence for our conjecture by giving oracle settings in which relativized CC and relativized NC are incomparable. We give several alternative definitions of CC, including (among others) the class of problems computed by uniform polynomial-size families of comparator circuits supplied with copies of the input and its negation, the class of problems AC0-reducible to CCV, and the class of problems computed by uniform AC0 circuits with CCV oracle gates. We also give a machine model for CC, which corresponds to its characterization as log-space uniform polynomial-size families of comparator circuits. These various characterizations show that CC is a robust class. Our techniques also show that the corresponding function class FCC is closed under composition. The main technical tool we employ is universal comparator circuits. Other results include a simpler proof of NL ⊆ CC, a more careful analysis showing that the lexicographically first maximal matching problem and its variants are CC-complete under AC0 many-one reductions, and an explanation of the relation between the Gale-Shapley algorithm and Subramanian's algorithm for stable marriage. This article continues the previous work of Cook et al. [2011], which focused on Cook-Nguyen style uniform proof complexity, answering several open questions raised in that article.
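To fix the model in mind (the standard definition of a comparator gate, not a detail specific to the paper): a comparator gate applied to two Boolean wires replaces their values by their AND and OR, and CCV asks for the value of a designated wire after all gates have been applied. A minimal evaluation sketch, with names of my own choosing:

    # Evaluate a comparator circuit on Boolean wires (illustrative sketch).
    def eval_comparator_circuit(inputs, gates):
        # inputs: list of 0/1 wire values.
        # gates: list of (i, j) pairs; each gate writes AND(wires[i], wires[j])
        #        to wire i and OR(wires[i], wires[j]) to wire j.
        wires = list(inputs)
        for (i, j) in gates:
            a, b = wires[i], wires[j]
            wires[i], wires[j] = a & b, a | b
        return wires

    print(eval_comparator_circuit([1, 0, 1], [(0, 1), (1, 2)]))  # [0, 1, 1]

The gate-by-gate sweep mirrors the apparently sequential nature of the known algorithms for CCV and the other CC-complete problems mentioned above.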
June 2012
·
19 Reads
·
12 Citations
Proceedings of the Annual IEEE Conference on Computational Complexity
Kushilevitz [1989] initiated the study of information-theoretic privacy within the context of communication complexity. Unfortunately, it has been shown that most interesting functions are not privately computable [Kushilevitz 1989, Brandt and Sandholm 2008]. The unattainability of perfect privacy for many functions motivated the study of approximate privacy. Feigenbaum et al. [2010a, 2010b] define notions of worst-case as well as average-case approximate privacy and present several interesting upper bounds as well as some open problems for further study. In this article, we obtain asymptotically tight bounds on the trade-offs between both the worst-case and average-case approximate privacy of protocols and their communication cost for Vickrey auctions. Further, we relate the notion of average-case approximate privacy to other measures based on information cost of protocols. This enables us to prove exponential lower bounds on the subjective approximate privacy of protocols for computing the Intersection function, independent of its communication cost. This proves a conjecture of Feigenbaum et al. [2010a].
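For context (a standard fact about Vickrey auctions rather than a contribution of the paper): the function the bidders must jointly compute is "the highest bidder wins and pays the second-highest bid"; approximate privacy then measures how much more than this outcome a protocol's transcript reveals about the bids. A minimal sketch of the outcome function, with hypothetical names:

    # Outcome of a sealed-bid Vickrey (second-price) auction (illustrative sketch).
    def vickrey_outcome(bids):
        winner = max(range(len(bids)), key=lambda i: bids[i])
        price = max(b for i, b in enumerate(bids) if i != winner)
        return winner, price

    print(vickrey_outcome([7, 3]))  # (0, 3): bidder 0 wins and pays 3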
January 2012
·
11 Reads
·
1 Citation
This is a survey talk explaining the connection between the three items mentioned in the title.
June 2011
·
15 Reads
·
1 Citation
Subramanian defined the complexity class CC as the set of problems log-space reducible to the comparator circuit value problem. He proved that several other problems are complete for CC, including the stable marriage problem and finding the lexicographically first maximal matching in a bipartite graph. We suggest alternative definitions of CC based on different reducibilities and introduce a two-sorted theory VCC* based on one of them. We sharpen and simplify Subramanian's completeness proofs for the above two problems and formalize them in VCC*.
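The second of these problems has a one-line greedy description (standard, not specific to the formalization in the paper): scan the edges in lexicographic order and add each edge whose endpoints are both still unmatched. A sketch with names of my own choosing:

    # Lexicographically first maximal matching (illustrative sketch).
    def lex_first_maximal_matching(edges):
        # edges: list of (u, v) pairs, assumed already in lexicographic order.
        matched, matching = set(), []
        for (u, v) in edges:
            if u not in matched and v not in matched:
                matching.append((u, v))
                matched.update((u, v))
        return matching

    # Left vertices 1, 2; right vertices 'a', 'b'.
    print(lex_first_maximal_matching([(1, 'a'), (1, 'b'), (2, 'a')]))  # [(1, 'a')]

The sequential-looking greedy scan is consistent with the conjecture, discussed above, that CC-complete problems have no efficient polylog-time parallel algorithms.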
March 2011
·
34 Reads
·
13 Citations
Proceedings - Symposium on Logic in Computer Science
Using Jeřábek's framework for probabilistic reasoning, we formalize the correctness of two fundamental RNC^2 algorithms for bipartite perfect matching within the theory VPV for polytime reasoning. The first algorithm is for testing if a bipartite graph has a perfect matching, and is based on the Schwartz-Zippel Lemma for polynomial identity testing applied to the Edmonds polynomial of the graph. The second algorithm, due to Mulmuley, Vazirani and Vazirani, is for finding a perfect matching, where the key ingredient of this algorithm is the Isolating Lemma.
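The first algorithm admits a short sketch (this is the standard randomized test; the paper's contribution is formalizing its correctness in VPV, and the names below are mine): form the Edmonds matrix of the bipartite graph, substitute random field elements for its indeterminates, and report a perfect matching if the resulting determinant is nonzero. The Schwartz-Zippel Lemma bounds the probability that a nonzero Edmonds polynomial evaluates to zero.

    import random

    # Determinant modulo a prime p by Gaussian elimination (illustrative helper).
    def det_mod_p(a, p):
        a = [row[:] for row in a]
        n = len(a)
        det = 1
        for col in range(n):
            pivot = next((r for r in range(col, n) if a[r][col] != 0), None)
            if pivot is None:
                return 0
            if pivot != col:
                a[col], a[pivot] = a[pivot], a[col]
                det = (-det) % p
            det = det * a[col][col] % p
            inv = pow(a[col][col], p - 2, p)
            for r in range(col + 1, n):
                factor = a[r][col] * inv % p
                for c in range(col, n):
                    a[r][c] = (a[r][c] - factor * a[col][c]) % p
        return det

    # Randomized test for a perfect matching in a bipartite graph (sketch).
    def probably_has_perfect_matching(n, edges, p=(1 << 31) - 1, trials=10):
        # Edmonds matrix: entry (i, j) is an indeterminate if (i, j) is an edge,
        # and 0 otherwise; its determinant is not identically zero iff the graph
        # has a perfect matching.  We evaluate it at random points modulo p.
        for _ in range(trials):
            a = [[random.randrange(1, p) if (i, j) in edges else 0
                  for j in range(n)] for i in range(n)]
            if det_mod_p(a, p) != 0:
                return True   # a nonzero evaluation certifies a perfect matching
        return False          # likely no perfect matching (errors only on this side)

    print(probably_has_perfect_matching(2, {(0, 0), (1, 1)}))  # True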
January 2011
·
44 Reads
·
6 Citations
Logical Methods in Computer Science
We introduce two-sorted theories in the style of [CN10] for the complexity classes ⊕L and DET, whose complete problems include determinants over Z2 and Z, respectively. We then describe interpretations of Soltys' linear algebra theory LAp over arbitrary integral domains into each of our new theories. The result shows that equivalences of standard theorems of linear algebra over Z2 and Z can be proved in the corresponding theory, but it leaves open the interesting question of whether the theorems themselves can be proved.
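As a tiny concrete instance of the complete problems mentioned (a standard computation, not drawn from the paper): over Z2 the determinant can be computed by Gaussian elimination with XOR row operations, and row swaps do not affect it since -1 = 1 mod 2.

    # Determinant of a 0/1 matrix over Z2 (illustrative sketch).
    def det_mod_2(a):
        a = [row[:] for row in a]
        n = len(a)
        for col in range(n):
            pivot = next((r for r in range(col, n) if a[r][col] == 1), None)
            if pivot is None:
                return 0                             # singular over Z2
            a[col], a[pivot] = a[pivot], a[col]      # sign is irrelevant mod 2
            for r in range(col + 1, n):
                if a[r][col] == 1:
                    a[r] = [x ^ y for x, y in zip(a[r], a[col])]
        return 1

    print(det_mod_2([[1, 1], [0, 1]]))  # 1
    print(det_mod_2([[1, 1], [1, 1]]))  # 0

Over Z, where exact integer arithmetic is needed, the determinant is instead complete for DET, as the abstract notes.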
January 2011
·
6 Reads
·
1 Citation
Subramanian defined the complexity class CC as the set of problems log-space reducible to the comparator circuit value problem. He proved that several other problems are complete for CC, including the stable marriage problem and finding the lexicographically first maximal matching in a bipartite graph. We suggest alternative definitions of CC based on different reducibilities and introduce a two-sorted theory VCC* based on one of them. We sharpen and simplify Subramanian's completeness proofs for the above two problems and show how to formalize them in VCC*.
... More success is possible when restricting to monotone span programs, see e.g.[29] for recent work, but monotone span programs are not the relevant computing model here. ...
October 2016
... This situation is similar to [19], which studies a different model of private two-party computation, and where the best upper and lower bounds are also exponential and linear. In a similar spirit, [1] proves that in a communication model of approximate privacy called PAR (based on [46]), privacy can come at an exponential cost. ...
June 2012
Proceedings of the Annual IEEE Conference on Computational Complexity
... Reversible pebblings of DAGs have been studied in [LV96,Krá04] and have been employed to shed light on time-space trade-offs in reversible simulation of irreversible computation in [LTV98,LMT00,Wil00,BTV01]. In a different line of work Potechin [Pot10] implicitly used the reversible pebble game for proving lower bounds on monotone space complexity, with the connection made explicit in the follow-up works [CP14,FPRC13]. ...
October 2013
IEEE Annual Symposium on Foundations of Computer Science (FOCS)
... On the other hand, comparator circuits appear to be much stronger than formulas, as it is conjectured that polynomial-size comparator circuits are incomparable to NC [2]. Evidence for this conjecture is that polynomial-size comparator circuits can compute problems whose known algorithms are inherently sequential, such as stable marriage and lexicographically first maximal matching [2], and there is an oracle separation between NC and polynomial-size comparator circuits [3]. Moreover, Robere, Pitassi, Rossman and Cook [4] showed that there exists a Boolean function in mNC^2 not computed by polynomial-size monotone comparator circuits. ...
August 2012
ACM Transactions on Computation Theory
... The present paper continues the research initiated in [13,5], in which two of the present authors participated. ...
January 2011
... The present paper continues the research initiated in [13,5], in which two of the present authors participated. ...
June 2011
... For instance, it is known that PV_1 can prove the PCP Theorem [Pic15b], while APC_1 can establish several significant circuit lower bounds [MP20], including monotone circuit lower bounds for k-Clique and bounded-depth circuit lower bounds for the Parity function. Further examples include the explicit construction of expander graphs [BKKK20] and the correctness of randomized polynomial-time matching algorithms [LC11], among many others. ...
March 2011
Proceedings - Symposium on Logic in Computer Science
... Furthermore, the importance of linear algebra in bounded arithmetic and proof complexity has been identified in many works, and it has been conjectured that the determinant identities, and specifically the multiplicativity of the determinant function DET(A) · DET(B) = DET(AB), for two matrices A, B, can be proved in a formal theory that, loosely speaking, reasons with NC^2 concepts (Cook and Nguyen present this specific question in their monograph [CN10]; see also [CF12,BBP95,BP98,Sol01,SC04]). This conjecture is aligned with the intuition that basic properties of many constructions and functions of a given complexity class are provable in logical theories not using concepts beyond that class. ...
January 2011
Logical Methods in Computer Science