This paper investigates which complexity classes inside \( \textit{NC} \) can contain pseudorandom function generators (PRFGs). Under the Decisional Diffie-Hellman assumption (a common cryptographic assumption), \( \textit{TC}^{0}_{4} \) contains PRFGs. No lower complexity classes with this property are currently known. On the other hand, we use effective lower bound arguments to show that some complexity classes cannot contain PRFGs. This provides evidence for the following conjecture: any effective lower bound argument for a complexity class can be turned into an efficient distinguishing algorithm which proves that this class cannot contain PRFGs.


... The moral is that, in order to prove stronger circuit lower bounds, one must avoid the techniques used in proofs that entail such efficient algorithms. The argument applies even to low-level complexity classes such as TC^0 [NR04, KL01, MV12], so any major progress in the future depends on proving un-Natural lower bounds. How should we proceed? ...

... Let C, D be appropriate circuit classes. Roughly speaking, the key lesson of Natural Proofs [RR97,NR04,KL01] is that, if there are D-natural properties useful against C, then there are no pseudorandom functions (PRFs) computable in C that fool D circuits; namely, there is a statistical test T computable in D such that, for every function f ∈ C (armed with an n-bit initial random seed), the test T with query access to f can distinguish f from a uniform random function. Now, if we have a PRF computable in C that can fool D circuits, this PRF can be used to obtain C seeds for randomized D circuits with one-sided error. ...
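As a toy illustration of this connection (our own sketch, not from any of the cited papers): take C to be the trivial class of functions depending on at most one input bit, and let the "natural property" accept exactly the truth tables outside C. The property is constructive, large, and useful against C, and querying f on every input turns it into a statistical test that rejects every f ∈ C while accepting almost all random functions.

```python
import itertools

N = 4  # toy input length

def truth_table(f):
    """Truth table of f: {0,1}^N -> {0,1}, listed in lexicographic order."""
    return tuple(f(x) for x in itertools.product((0, 1), repeat=N))

# Toy circuit class C: functions that depend on at most one input bit
# (the two constants plus each variable and its negation).
C_TABLES = {truth_table(lambda x: 0), truth_table(lambda x: 1)}
for i in range(N):
    C_TABLES.add(truth_table(lambda x, i=i: x[i]))
    C_TABLES.add(truth_table(lambda x, i=i: 1 - x[i]))

# "Natural property" useful against C: accept exactly the tables outside C.
# Constructive (poly-time in the table size), large (a uniformly random table
# is accepted except with probability 10/2^16), useful (rejects all of C).
def natural_property(tt):
    return tt not in C_TABLES

# The induced statistical test T: query f on every input, run the property.
def distinguisher(f):
    return natural_property(truth_table(f))
```

Real natural proofs work against nontrivial classes such as AC^0, but the shape of the argument is exactly this: an efficient test on truth tables becomes, via query access, a distinguisher.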

We study connections between Natural Proofs, derandomization, and the problem
of proving weak circuit lower bounds such as 'NEXP is not contained in TC^0'
which are still wide open.
Natural Proofs have three properties: they are constructive (an efficient
algorithm ALG is embedded in them), have largeness (ALG accepts a large
fraction of strings), and are useful (ALG rejects all strings which are truth
tables of small circuits). Strong circuit lower bounds that are "naturalizing"
would contradict present cryptographic understanding, yet the vast majority of
known circuit lower bound proofs are naturalizing. So it is imperative to
understand how to pursue un-Natural Proofs. Some heuristic arguments say
constructivity should be circumventable. Largeness is inherent in many proof
techniques, and it is probably our presently weak techniques that yield
constructivity. We prove:
* Constructivity is unavoidable, even for NEXP lower bounds. Informally, we
prove for all "typical" non-uniform circuit classes C, NEXP is not contained in
C if and only if there exists a constructive property that is nontrivially
useful against C-circuits.
* There are no P-natural properties useful against C if and only if
randomized exponential time can be "derandomized" using truth tables of
circuits from C as random seeds. Therefore the task of proving there are no
P-natural properties is inherently a derandomization problem, weaker than but
implied by the existence of strong pseudorandom functions.
These characterizations are applied to yield several new results. The two main applications are that NEXP ∩ coNEXP does not have n^{log n}-size ACC circuits, and a mild derandomization result for RP.

... • PRFs in TC^0_4 based on the decisional Diffie-Hellman (DDH) assumption [KL01] (improving on [NR97]), yielding hardness for depth-6 ReLU networks. We can now complete the proof of Theorem 6.1 by using Lemma 3.6 and Theorem 3.3: ...

We give exponential statistical query (SQ) lower bounds for learning two-hidden-layer ReLU networks with respect to Gaussian inputs in the standard (noise-free) model. No general SQ lower bounds were known for learning ReLU networks of any depth in this setting: previous SQ lower bounds held only for adversarial noise models (agnostic learning) or restricted models such as correlational SQ. Prior work hinted at the impossibility of our result: Vempala and Wilmes showed that general SQ lower bounds cannot apply to any real-valued family of functions that satisfies a simple non-degeneracy condition. To circumvent their result, we refine a lifting procedure due to Daniely and Vardi that reduces Boolean PAC learning problems to Gaussian ones. We show how to extend their technique to other learning models and, in many well-studied cases, obtain a more efficient reduction. As such, we also prove new cryptographic hardness results for PAC learning two-hidden-layer ReLU networks, as well as new lower bounds for learning constant-depth ReLU networks from membership queries.

... This problem requires, for some integer k, a function that cannot be computed by a threshold circuit of width poly(d) and depth k, but can be computed by a threshold circuit of width poly(d) and depth k′ > k. Naor and Reingold [2004] and Krause and Lucks [2001] showed a candidate pseudorandom function family computable by threshold circuits of depth 4, width poly(d), and poly(d)-bounded weights. By Razborov and Rudich [1997], this implies that for every k′ > k ≥ 4, there is a natural-proof barrier for showing depth separation between threshold circuits of depth k and depth k′. ...

In studying the expressiveness of neural networks, an important question is whether there are functions which can only be approximated by sufficiently deep networks, assuming their size is bounded. However, for constant depths, existing results are limited to depths $2$ and $3$, and achieving results for higher depths has been an important open question. In this paper, we focus on feedforward ReLU networks, and prove fundamental barriers to proving such results beyond depths $4$, by reduction to open problems and natural-proof barriers in circuit complexity. To show this, we study a seemingly unrelated problem of independent interest: Namely, whether there are polynomially-bounded functions which require super-polynomial weights in order to approximate with constant-depth neural networks. We provide a negative and constructive answer to that question, by showing that if a function can be approximated by a polynomially-sized, constant depth $k$ network with arbitrarily large weights, it can also be approximated by a polynomially-sized, depth $3k+3$ network, whose weights are polynomially bounded.

... The existence of pseudorandom functions follows from the existence of one-way functions ([HILL99, GGM86]), which is essentially the weakest interesting cryptographic assumption. There are even candidate constructions of pseudorandom functions computable by polynomial-size constant-depth threshold circuits (TC^0), as given by Naor and Reingold [NR97], whose security rests on the intractability of discrete-log and factoring-type assumptions (see also Krause and Lucks [KL01]). As such, it is widely believed that there are pseudorandom functions, even ones computationally indistinguishable from random except to adversaries running in exp(λ^Ω(1))-time. ...

We formalize a framework of algebraically natural lower bounds for algebraic circuits. Just as with the natural proofs notion of Razborov and Rudich for boolean circuit lower bounds, our notion of algebraically natural lower bounds captures nearly all lower bound techniques known. However, unlike the boolean setting, there has been no concrete evidence demonstrating that this is a barrier to obtaining super-polynomial lower bounds for general algebraic circuits, as there is little understanding whether algebraic circuits are expressive enough to support "cryptography" secure against algebraic circuits. Following a similar result of Williams in the boolean setting, we show that the existence of an algebraic natural proofs barrier is equivalent to the existence of succinct derandomization of the polynomial identity testing problem. That is, whether the coefficient vectors of polylog(N)-degree polylog(N)-size circuits form a hitting set for the class of poly(N)-degree poly(N)-size circuits. Further, we give an explicit universal construction showing that if such a succinct hitting set exists, then our universal construction suffices. Further, we assess the existing literature constructing hitting sets for restricted classes of algebraic circuits and observe that none of them are succinct as given. Yet, we show how to modify some of these constructions to obtain succinct hitting sets. This constitutes the first evidence supporting the existence of an algebraic natural proofs barrier. Our framework is similar to the Geometric Complexity Theory (GCT) program of Mulmuley and Sohoni, except that here we emphasize constructiveness of the proofs while the GCT program emphasizes symmetry. Nevertheless, our succinct hitting sets have relevance to the GCT program as they imply lower bounds for the complexity of the defining equations of polynomials computed by small circuits.

... Theorem 1 implies that certain simple devices (namely, McCulloch-Pitts dynamical systems) cannot generate pseudorandomness. In the opposite direction, it has been proved that certain simple devices can generate pseudorandomness: examples can be found in [24], [19], [27], [26], [3]. Many examples of generators that appear random to observers with restricted computational powers are known. ...

In a pioneering classic, Warren McCulloch and Walter Pitts proposed a model of the central nervous system. Motivated by EEG recordings of normal brain activity, Chvátal and Goldsmith asked whether these dynamical systems can be engineered to produce trajectories that are irregular, disorderly, and apparently unpredictable. We show that they cannot build weak pseudorandom functions.

... However, if some number-theoretic problems are exponentially hard on average (an assumption believed to be true by many researchers), then there are pseudorandom functions in circuit classes as small as TC^0_4 (Naor and Reingold [NR04], Krause and Lucks [KL01]). As a consequence, such proofs (dubbed natural proofs in [RR97]) are not expected to prove separations for more expressive circuit classes. ...

Different techniques have been used to prove several transference theorems of
the form "nontrivial algorithms for a circuit class C yield circuit lower
bounds against C". In this survey we revisit many of these results. We discuss
how circuit lower bounds can be obtained from derandomization, compression,
learning, and satisfiability algorithms. We also cover the connection between
circuit lower bounds and useful properties, a notion that turns out to be
fundamental in the context of these transference theorems. Along the way, we
obtain a few new results, simplify several proofs, and show connections
involving different frameworks. We hope that our presentation will serve as a
self-contained introduction for those interested in pursuing research in this
area.



In this paper we consider the problem of constructing a small arithmetic circuit for a polynomial for which we have oracle access. Our focus is on n-variate polynomials, over a finite field F, that have depth-3 arithmetic circuits with two multiplication gates of degree d. We obtain the following results: 1. Multilinear case: When the circuit is multilinear (multiplication gates compute multilinear polynomials) we give an algorithm that outputs, with probability 1 − o(1), all the depth-3 circuits with two multiplication gates computing the same polynomial. The running time of the algorithm is poly(n, |F|). 2. General case: When the circuit is not multilinear we give a quasi-polynomial (in n, d, |F|) time algorithm that outputs, with probability 1 − o(1), a succinct representation of the polynomial. In particular, if the depth-3 circuit for the polynomial is not of small depth-3 rank (namely, after removing the g.c.d. of the two multiplication gates, the remaining linear functions span a not too small linear space) then we output the depth-3 circuit itself. In case that the rank is small we output a depth-3 circuit with a quasi-polynomial number of multiplication gates. Our proof technique is new and relies on the factorization algorithm for multivariate black-box polynomials, on lower bounds on the length of linear locally decodable codes with 2 queries, and on a theorem regarding the structure of identically zero depth-3 circuits with four multiplication gates.


In this work we give two new constructions of ε-biased generators. Our first construction significantly extends a result of Mossel et al. (Random Structures and Algorithms 2006, pages 56-81), and our second construction answers an open question of Dodis and Smith (STOC 2005, pages 654-663). In particular we obtain the following results:
1. For every k = o(log n) we construct an ε-biased generator \(G : \{0, 1\}^{m} \rightarrow \{0, 1\}^n\) that is implementable by degree-k polynomials (namely, every output bit of the generator is a degree-k polynomial in the input bits). For any constant k we get that \(n = \Omega(m/\log(1/\epsilon))^k\), which is nearly optimal. Our result also separates degree-k generators from generators in \(NC^0_k\), showing that the stretch of the former can be much larger than the stretch of the latter. The problem of constructing degree-k generators was introduced by Mossel et al., who gave a construction only for the case k = 2.
2. We construct a family of asymptotically good binary codes such that the codes in our family are also ε-biased sets for an exponentially small ε. Our encoding algorithm runs in polynomial time in the block length of the code. Moreover, these codes have a polynomial-time decoding algorithm. This answers an open question of Dodis and Smith.
The paper also contains an appendix by Venkatesan Guruswami that provides an explicit construction of a family of error correcting codes of rate 1/2 that has efficient encoding and decoding algorithms and whose dual codes are also good codes.

A syntactic read-k-times branching program has the restriction that no variable occurs more than k times on any path (whether or not consistent) of the branching program. We first extend the result in [31] to show that the "n/2 clique only function", which is easily seen to be computable by deterministic polynomial-size read-twice programs, cannot be computed by nondeterministic polynomial-size read-once programs, although its complement can be so computed. We then exhibit an explicit Boolean function f such that every nondeterministic syntactic read-k-times branching program for computing f has size \(\exp\big(\Omega\big(\tfrac{n}{k\,2^{k}}\big)\big)\).

Now that “random functions” can be efficiently constructed ([GGM]), we discuss some of their possible applications to cryptography:
1) Distributing unforgeable ID numbers which can be locally verified by stations that contain only a small amount of storage.
2) Dynamic hashing: even if the adversary can change the key distribution depending on the values the hashing function has assigned to previous keys, he still cannot force collisions.
3) Constructing deterministic, memoryless authentication schemes which are provably secure against chosen-message attack.
4) Constructing Identity Friend or Foe systems.

We show how to efficiently construct a pseudorandom invertible permutation generator from a pseudorandom function generator. Goldreich, Goldwasser and Micali ["How to construct random functions," Proc. 25th Annual Symposium on Foundations of Computer Science, October 24–26, 1984.] introduce the notion of a pseudorandom function generator and show how to efficiently construct a pseudorandom function generator from a pseudorandom bit generator. We use some of the ideas behind the design of the Data Encryption Standard for our construction. A practical implication of our result is that any pseudorandom bit generator can be used to construct a block private-key cryptosystem which is secure against chosen plaintext attack, one of the strongest known attacks against a cryptosystem.
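The DES-inspired idea here is the Feistel network: each round keeps the permutation invertible no matter what the round function does. A minimal sketch, with HMAC-SHA256 standing in for the pseudorandom function (our choice for illustration; the construction is generic in the PRF):

```python
import hmac
import hashlib

def prf(key: bytes, msg: bytes) -> bytes:
    # HMAC-SHA256 as a stand-in PRF (illustrative assumption only).
    return hmac.new(key, msg, hashlib.sha256).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def feistel_encrypt(round_keys, block: bytes) -> bytes:
    """One Feistel pass per round key: (L, R) -> (R, L xor PRF_k(R))."""
    half = len(block) // 2
    L, R = block[:half], block[half:]
    for k in round_keys:
        L, R = R, xor(L, prf(k, R)[:half])
    return L + R

def feistel_decrypt(round_keys, block: bytes) -> bytes:
    """Run the rounds backwards; inversion never needs to invert the PRF."""
    half = len(block) // 2
    L, R = block[:half], block[half:]
    for k in reversed(round_keys):
        L, R = xor(R, prf(k, L)[:half]), L
    return L + R
```

Three rounds of this shape are what the paper proves sufficient for a pseudorandom permutation (four for strong pseudorandomness); with fewer rounds the construction is distinguishable.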

We examine a powerful model of parallel computation: polynomial size threshold circuits of bounded depth (the gates compute threshold functions with polynomial weights). Lower bounds are given to separate polynomial size threshold circuits of depth 2 from polynomial size threshold circuits of depth 3 and from probabilistic polynomial size circuits of depth 2. With regard to the unreliability of bounded depth circuits, it is shown that the class of functions computed reliably with bounded depth circuits of unreliable ∨, ∧, ¬ gates is narrow. On the other hand, functions computable by bounded depth, polynomial-size threshold circuits can also be computed by such circuits of unreliable threshold gates. Furthermore we examine to what extent imprecise threshold gates (which behave unpredictably near the threshold value) can compute nontrivial functions in bounded depth and a bound is given for the permissible amount of imprecision. We also discuss threshold quantifiers and prove an undefinability result for graph connectivity.

The analysis of linear threshold Boolean functions has recently attracted the attention of those interested in circuit complexity as well as of those interested in neural networks. Here a generalization of linear threshold functions is defined, namely, polynomial threshold functions, and its relation to the class of linear threshold functions is investigated. A Boolean function is polynomial threshold if it can be represented as the sign function of a polynomial that consists of a polynomial (in the number of variables) number of terms. The main result of this paper is that the class of polynomial threshold functions (called PT1) is strictly contained in the class of Boolean functions that can be computed by a depth-2, unbounded fan-in, polynomial-size circuit of linear threshold gates (called LT2). Harmonic analysis of Boolean functions is used to derive a necessary and sufficient condition for a function to be an S-threshold function for a given set S of monomials. This condition is used to show that the number of different S-threshold functions, for a given S, is at most \(2^{(n+1)|S|}\). Based on the necessary and sufficient condition, a lower bound is derived on the number of terms in a threshold function, expressed in terms of the spectral representation of the Boolean function. It is found that Boolean functions having an exponentially small spectrum are not polynomial threshold. A family of functions with an exponentially small spectrum, called "semibent" functions, is exhibited, and a function that is both semibent and symmetric is constructed to prove that PT1 is properly contained in LT2.

A constructive theory of randomness for functions, based on computational complexity, is developed, and a pseudorandom function generator is presented. This generator is a deterministic polynomial-time algorithm that transforms pairs (g, r), where g is any one-way function and r is a random k-bit string, into polynomial-time computable functions f_r : {1, …, 2^k} → {1, …, 2^k}. These f_r's cannot be distinguished from random functions by any probabilistic polynomial-time algorithm that asks and receives the value of a function at arguments of its choice. The result has applications in cryptography, random constructions, and complexity theory.
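The generator behind this result is the GGM tree: a length-doubling pseudorandom generator is applied repeatedly, branching left or right on each input bit. A compact sketch, with labeled SHA-256 standing in for the length-doubling PRG (an illustrative assumption, not the paper's construction from a generic one-way function):

```python
import hashlib

def prg(seed: bytes):
    """Length-doubling PRG stand-in: 32-byte seed -> (G_0(seed), G_1(seed)),
    implemented here with two domain-separated SHA-256 calls."""
    return (hashlib.sha256(b"0" + seed).digest(),
            hashlib.sha256(b"1" + seed).digest())

def ggm_prf(key: bytes, x: str) -> bytes:
    """f_key(x): walk down the GGM binary tree, branching on each bit of x.
    Evaluation costs |x| PRG calls, one per level of the tree."""
    node = key
    for bit in x:
        g0, g1 = prg(node)
        node = g0 if bit == "0" else g1
    return node
```

Note that the output on input prefix "0" is by construction the left half of the PRG applied to the key; the security proof is a hybrid argument down the tree levels.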

An artificial neural network (ANN) is commonly modeled by a threshold circuit, a network of interconnected processing units called linear threshold gates. It is shown that ANNs can be much more powerful than traditional logic circuits, assuming that each threshold gate can be built with a cost comparable to that of AND/OR logic gates. In particular, the main results indicate that powering and division can be computed by polynomial-size ANNs of depth 4, and multiple products can be computed by polynomial-size ANNs of depth 5. Moreover, using the techniques developed, a previous result is improved by showing that the sorting of n n-bit numbers can be carried out in a depth-3 polynomial-size ANN. Furthermore, it is shown that the sorting network is optimal in depth.

Ordered binary decision diagrams (OBDDs) and their variants are motivated by the need to represent Boolean functions in applications. Research concerning these applications leads also to problems and results interesting from a theoretical point of view. In this paper, methods from communication complexity and information theory are combined to prove that the direct storage access function and the inner product function have the following property. They have linear π-OBDD size for some variable ordering π and, for most variable orderings π′ all functions which approximate them on considerably more than half of the inputs, need exponential π′-OBDD size. These results have implications for the use of OBDDs in experiments with genetic programming.

The learnability of multiplicity automata has attracted a lot of attention, mainly because of its implications on the learnability of several classes of DNF formulae. The authors further study the learnability of multiplicity automata. The starting point is a known theorem from automata theory relating the number of states in a minimal multiplicity automaton for a function f to the rank of a certain matrix F. With this theorem in hand they obtain the following results: a new simple algorithm for learning multiplicity automata with a better query complexity. As a result, they improve the complexity for all classes that use the algorithms of Bergadano and Varricchio (1994) and Ohnishi et al. (1994) and also obtain the best query complexity for several classes known to be learnable by other methods such as decision trees and polynomials over GF(2). They prove the learnability of some new classes that were not known to be learnable before. Most notably, the class of polynomials over finite fields, the class of bounded-degree polynomials over infinite fields, the class of XOR of terms, and a certain class of decision trees. While multiplicity automata were shown to be useful to prove the learnability of some subclasses of DNF formulae and various other classes, they study the limitations of this method. They prove that this method cannot be used to resolve the learnability of some other open problems such as the learnability of general DNF formulae or even k-term DNF for k = ω(log n) or satisfy-s DNF formulae for s = ω(1). These results are proven by exhibiting functions in the above classes that require multiplicity automata with a superpolynomial number of states.

We present a new cryptographic primitive called a pseudo-random synthesizer and show how to use it in order to get a parallel construction of a pseudo-random function. We show an NC<sup>1</sup> implementation of pseudo-random synthesizers based on the RSA or the Diffie-Hellman assumptions. This yields the first parallel (NC<sup>2</sup>) pseudo-random function and the only alternative to the original construction of Goldreich, Goldwasser and Micali (GGM). The security of our constructions is similar to the security of the underlying assumptions. We discuss the connection with problems in computational learning theory.

Boolean functions in AC<sup>0</sup> are studied using the harmonic analysis of the cube. The main result is that an AC<sup>0</sup> Boolean function has almost all of its power spectrum on the low-order coefficients. This result implies the following properties of functions in AC<sup>0</sup>: they have low average sensitivity; they can be approximated well by a real polynomial of low degree; they cannot be pseudorandom function generators; and their correlation with any polylog-wise independent probability distribution is small. An O(n<sup>polylog(n)</sup>)-time algorithm for learning functions in AC<sup>0</sup> is obtained. The algorithm observes the behavior of an AC<sup>0</sup> function on O(n<sup>polylog(n)</sup>) randomly chosen inputs and derives a good approximation to the Fourier transform of the function. This allows it to predict with high probability the value of the function on other randomly chosen inputs.
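A toy version of the learning step described above (our own illustrative sketch, using the ±1 Fourier convention and a degree-1 truncation far below the polylog(n) degree the actual result needs): estimate each low-degree Fourier coefficient by its empirical average over random examples, then predict with the sign of the truncated expansion.

```python
import itertools
import random

N = 4        # number of variables (inputs are in {-1, +1}^N)
DEGREE = 1   # keep Fourier coefficients of sets S with |S| <= DEGREE

def chi(S, x):
    """Fourier character chi_S(x) = prod_{i in S} x_i."""
    out = 1
    for i in S:
        out *= x[i]
    return out

def learn_low_degree(f, n_samples, rng):
    """Estimate low-degree Fourier coefficients of f from random examples,
    then return the sign of the truncated expansion as the hypothesis."""
    data = [tuple(rng.choice((-1, 1)) for _ in range(N))
            for _ in range(n_samples)]
    # Empirical estimate of each coefficient hat{f}(S) = E[f(x) chi_S(x)].
    coeffs = {
        S: sum(f(x) * chi(S, x) for x in data) / n_samples
        for d in range(DEGREE + 1)
        for S in itertools.combinations(range(N), d)
    }
    def hypothesis(x):
        return 1 if sum(c * chi(S, x) for S, c in coeffs.items()) >= 0 else -1
    return hypothesis
```

For a dictator function (all Fourier weight on degree 1) the degree-1 truncation already recovers the function exactly; the theorem's point is that any AC<sup>0</sup> function is similarly concentrated on low degrees.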

It is well known that (McCulloch-Pitts) neurons are efficiently trainable to learn an unknown halfspace from examples, using linear-programming methods. We want to analyze how the learning performance degrades when the representational power of the neuron is overstrained, i.e., if more complex concepts than just halfspaces are allowed. We show that the problem of learning a probably almost optimal weight vector for a neuron is so difficult that the minimum error cannot even be approximated to within a constant factor in polynomial time (unless RP = NP); we obtain the same hardness result for several variants of this problem. We considerably strengthen these negative results for neurons with binary weights 0 or 1. We also show that neither heuristical learning nor learning by sigmoidal neurons with a constant reject rate is efficiently possible (unless RP = NP).

In this paper we study small depth circuits that contain threshold gates (with or without weights) and parity gates. All circuits we consider are of polynomial size. We prove several results which complete the work on characterizing possible inclusions between many classes defined by small depth circuits. These results are the following:
A single threshold gate with weights cannot in general be replaced by a polynomial fan-in unweighted threshold gate of parity gates.
On the other hand it can be replaced by a depth-2 unweighted threshold circuit of polynomial size. An extension of this construction is used to prove that whatever can be computed by a depth-d polynomial-size threshold circuit with weights can be computed by a depth-(d+1) polynomial-size unweighted threshold circuit, where d is an arbitrary fixed integer.
A polynomial fan-in threshold gate (with weights) of parity gates cannot in general be replaced by a depth 2 unweighted threshold circuit of polynomial size.
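The circuit class in the last item, a weighted threshold gate whose inputs are parity gates, is easy to make concrete. A small evaluator (the example gates, weights, and threshold are illustrative):

```python
def parity(x, S):
    # PAR_S(x): XOR of the bits of x indexed by the set S.
    return sum(x[i] for i in S) % 2

def threshold_of_parities(x, gates, weights, theta):
    # Weighted threshold of parity gates:
    # outputs 1 iff sum_j w_j * PAR_{S_j}(x) >= theta.
    total = sum(w * parity(x, S) for S, w in zip(gates, weights))
    return 1 if total >= theta else 0

# Example: 3 * PAR_{0,1}(x) - 2 * PAR_{1,2}(x) >= 1 on x = (1, 0, 1).
out = threshold_of_parities((1, 0, 1), [(0, 1), (1, 2)], [3, -2], 1)
```

The paper's separation says that some functions of this form cannot be computed by any polynomial-size depth 2 unweighted threshold circuit.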

A syntactic read-k-times branching program has the restriction that no variable occurs more than k times on any path (whether or not consistent). We exhibit an explicit Boolean function f which cannot be computed by nondeterministic syntactic read-k-times branching programs of size less than exp(Ω(n/k<sup>2k</sup>)), although its complement ¬f has a nondeterministic syntactic read-once branching program of polynomial size. This, in particular, means that the nonuniform analogue of NLOGSPACE = co-NLOGSPACE fails for syntactic read-k-times networks with k = o(log n). We also show that (even for k = 1) the syntactic model is exponentially weaker than more realistic "nonsyntactic" ones.
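The underlying model is simple to evaluate. A minimal sketch of a deterministic branching program, here with a read-once example (the dictionary encoding and node names are illustrative choices):

```python
def eval_branching_program(program, start, x):
    # Evaluate a deterministic branching program given as
    # {node: (variable_index, successor_if_0, successor_if_1)};
    # the sinks are the Booleans True and False.
    node = start
    while node not in (True, False):
        var, lo, hi = program[node]
        node = hi if x[var] else lo
    return node

# Read-once program for x0 AND x1: every variable is tested
# at most once on every path.
prog = {
    "q0": (0, False, "q1"),   # test x0
    "q1": (1, False, True),   # test x1
}
```

A syntactic read-k-times program relaxes "at most once" to "at most k times" per path; the paper's lower bound shows this restriction is still severe for nondeterministic programs.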

We describe efficient constructions for various cryptographic primitives (both in private-key and in public-key cryptography). We show these constructions to be at least as secure as the decisional version of the Diffie-Hellman assumption or as the assumption that factoring is hard. Our major result is a new construction of pseudo-random functions such that computing their value at any given point involves two multiple products. This is much more efficient than previous proposals. Furthermore, these functions have the advantage of being in TC<sup>0</sup> (the class of functions computable by constant depth circuits consisting of a polynomial number of threshold gates), which has several interesting applications. The simple algebraic structure of the functions implies additional features. In particular, we show a zero-knowledge proof for statements of the form “y=f<sub>s</sub>(x)” and “y≠f<sub>s</sub>(x)” given a commitment to a key s of a pseudo-random function f<sub>s</sub>.
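The algebraic structure of this PRF can be sketched with toy parameters: one "multiple product" in the exponent, then one modular exponentiation. The group parameters and key below are illustrative assumptions and far too small to be secure:

```python
# Toy parameters: g = 4 generates a subgroup of order q = 11 in Z_23^*.
p, q, g = 23, 11, 4

def nr_prf(key, x):
    # Naor-Reingold-style PRF: f_key(x) = g^(a0 * prod_{i : x_i = 1} a_i) mod p,
    # with key = (a0, a1, ..., an) and exponent arithmetic mod q,
    # the order of g.
    e = key[0]
    for ai, xi in zip(key[1:], x):
        if xi:
            e = (e * ai) % q
    return pow(g, e, p)

key = (3, 5, 7, 2)
y = nr_prf(key, (1, 0, 1))   # exponent 3*5*2 mod 11 = 8, so y = 4^8 mod 23 = 9
```

Because the exponent is an iterated product and modular exponentiation is itself an iterated product, each step sits in TC<sup>0</sup>, which is the source of the parallel-complexity claim.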

The learnability of multiplicity automata has attracted a lot of attention, mainly because of its implications on the learnability of several classes of DNF formulae. The authors further study the learnability of multiplicity automata. The starting point is a known theorem from automata theory relating the number of states in a minimal multiplicity automaton for a function f to the rank of a certain matrix F. With this theorem in hand they obtain the following results: a new simple algorithm for learning multiplicity automata with a better query complexity. As a result, they improve the complexity for all classes that use the algorithms of Bergadano and Varricchio (1994) and Ohnishi et al. (1994) and also obtain the best query complexity for several classes known to be learnable by other methods such as decision trees and polynomials over GF(2). They prove the learnability of some new classes that were not known to be learnable before. Most notably, the class of polynomials over finite fields, the class of bounded-degree polynomials over infinite fields, the class of XOR of terms, and a certain class of decision trees. While multiplicity automata were shown to be useful to prove the learnability of some subclasses of DNF formulae and various other classes, they study the limitations of this method. They prove that this method cannot be used to resolve the learnability of some other open problems such as the learnability of general DNF formulae or even k-term DNF for k = ω(log n) or satisfy-s DNF formulae for s = ω(1). These results are proven by exhibiting functions in the above classes that require multiplicity automata with a superpolynomial number of states.
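The matrix F in the starting theorem is the Hankel matrix of f: its (x, y) entry is f(xy), and its rank equals the number of states of a minimal multiplicity automaton for f. A minimal sketch that computes this rank for bounded-length strings (the length bound and the parity example are illustrative; truncating to bounded length only gives a lower bound on the true rank):

```python
from fractions import Fraction
from itertools import product

def hankel_rank(f, alphabet, max_len):
    # Build the Hankel matrix F[x][y] = f(x + y) over all strings of
    # length <= max_len, then compute its rank by Gaussian elimination
    # over the rationals.
    words = [w for L in range(max_len + 1) for w in product(alphabet, repeat=L)]
    M = [[Fraction(f(x + y)) for y in words] for x in words]
    rank, rows, cols = 0, len(M), len(M[0])
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r][c] != 0), None)
        if pivot is None:
            continue
        M[rank], M[pivot] = M[pivot], M[rank]
        for r in range(rows):
            if r != rank and M[r][c] != 0:
                factor = M[r][c] / M[rank][c]
                M[r] = [a - factor * b for a, b in zip(M[r], M[rank])]
        rank += 1
    return rank

# Parity of a bit-string: rank 2, matching the 2-state automaton
# that tracks the running parity.
r = hankel_rank(lambda w: sum(w) % 2, (0, 1), 2)
```

The negative results in the paper are obtained exactly this way: exhibiting functions in the target classes whose Hankel matrices have superpolynomial rank.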

We present a new cryptographic primitive called pseudo-random synthesizer and show how to use it in order to get a parallel construction of a pseudo-random function. We show an NC<sup>1</sup> implementation of pseudo-random synthesizers based on the RSA or the Diffie-Hellman assumptions. This yields the first parallel (NC<sup>2</sup>) pseudo-random function and the only alternative to the original construction of Goldreich, Goldwasser and Micali (GGM). The security of our constructions is similar to the security of the underlying assumptions. We discuss the connection with problems in computational learning theory.
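For contrast, the original GGM construction walks a binary tree of depth equal to the input length, which is why it is inherently sequential. A minimal sketch, using SHA-256 as a stand-in length-doubling PRG (an assumption for illustration; GGM only requires *some* length-doubling PRG G):

```python
import hashlib

def prg(seed: bytes) -> bytes:
    # Stand-in length-doubling PRG: 32-byte seed -> 64 bytes of output.
    return hashlib.sha256(b"L" + seed).digest() + hashlib.sha256(b"R" + seed).digest()

def ggm_prf(key: bytes, bits) -> bytes:
    # GGM construction: starting from the key, take the left or right
    # half of G(seed) according to each input bit. The tree depth equals
    # the input length, so evaluation is sequential in the input bits,
    # unlike the synthesizer-based parallel construction.
    seed = key
    for b in bits:
        out = prg(seed)
        seed = out[32:] if b else out[:32]
    return seed

y = ggm_prf(b"\x00" * 32, (1, 0, 1))
```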

We present a membership-query algorithm for efficiently learning DNF with respect to the uniform distribution. In fact, the algorithm properly learns the more general class of functions that are computable as a majority of polynomially-many parity functions. We also describe extensions of this algorithm for learning DNF over certain nonuniform distributions and from noisy examples as well as for learning a class of geometric concepts that generalizes DNF. The algorithm utilizes one of Freund's boosting techniques and relies on the fact that boosting does not require a completely distribution-independent weak learner. The boosted weak learner is a nonuniform extension of a Fourier-based algorithm due to Kushilevitz and Mansour (1991).

We prove that a single threshold gate with arbitrary weights can be simulated by an explicit polynomial-size depth 2 majority circuit. In general we show that a depth d threshold circuit can be simulated uniformly by a majority circuit of depth d + 1. Goldmann, Håstad, and Razborov showed in [10] that a non-uniform simulation exists. Our construction answers two open questions posed in [10]: we give an explicit construction whereas [10] uses a randomized existence argument, and we show that such a simulation is possible even if the depth d grows with the number of variables n (the simulation in [10] gives polynomial-size circuits only when d is constant). A preliminary version of this paper appeared in Proc. 25th ACM STOC (1993), pp. 551–560.
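The easy special case of this simulation, small positive integer weights, already conveys the idea: replicate each input wire according to its weight and pad with constant wires so that a single unweighted majority gate fires at the right point. This sketch handles only that easy case (the padding formula is an illustrative choice; the paper's contribution is handling weights that may be exponentially large):

```python
def majority(bits):
    # Unweighted majority gate: 1 iff more than half the inputs are 1.
    return 1 if 2 * sum(bits) > len(bits) else 0

def weighted_threshold_via_majority(x, weights, theta):
    # Simulate the gate "sum_i w_i * x_i >= theta" (small positive integer
    # weights) by one majority gate: replicate wire i exactly w_i times,
    # then pad with constants so that majority fires iff the weighted
    # sum reaches theta.
    wires = []
    for xi, w in zip(x, weights):
        wires.extend([xi] * w)
    pad = sum(weights) - (2 * theta - 2)
    wires.extend([1] * pad if pad >= 0 else [0] * (-pad))
    return majority(wires)

out = weighted_threshold_via_majority((1, 0), (3, 2), 3)   # 3*1 + 2*0 >= 3 -> 1
```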

In this paper, we prove the intractability of learning several classes of Boolean functions in the distribution-free model (also called the Probably Approximately Correct or PAC model) of learning from examples. These results are representation independent , in that they hold regardless of the syntactic form in which the learner chooses to represent its hypotheses.
Our methods reduce the problems of cracking a number of well-known public-key cryptosystems to the learning problems. We prove that a polynomial-time learning algorithm for Boolean formulae, deterministic finite automata or constant-depth threshold circuits would have dramatic consequences for cryptography and number theory. In particular, such an algorithm could be used to break the RSA cryptosystem, factor Blum integers (composite numbers congruent to 3 modulo 4), and detect quadratic residues. The results hold even if the learning algorithm is only required to obtain a slight advantage in prediction over random guessing. The techniques used demonstrate an interesting duality between learning and cryptography.
We also apply our results to obtain strong intractability results for approximating a generalization of graph coloring.

Luby and Rackoff [27] showed a method for constructing a pseudo-random permutation from a pseudo-random function. The method is based on composing four (or three for weakened security) so-called Feistel permutations, each of which requires the evaluation of a pseudo-random function. We reduce somewhat the complexity of the construction and simplify its proof of security by showing that two Feistel permutations are sufficient together with initial and final pair-wise independent permutations. The revised construction and proof provide a framework in which similar constructions may be brought up and their security can be easily proved. We demonstrate this by presenting some additional adjustments of the construction that achieve the following: reduce the success probability of the adversary; provide a construction of pseudo-random permutations with large input size using pseudo-random functions with small input size.
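A Feistel permutation maps (L, R) to (R, L ⊕ F(R)) and is invertible no matter what the round function F is, which is why composing a few rounds yields a permutation from any PRF. A minimal sketch using SHA-256 as a stand-in round function on 32-bit halves (the key derivation and block size are illustrative assumptions, not the paper's construction):

```python
import hashlib

def _round_fn(key: bytes, i: int, half: int) -> int:
    # Stand-in PRF for round i, mapping a 32-bit half to a 32-bit value.
    digest = hashlib.sha256(key + bytes([i]) + half.to_bytes(4, "big")).digest()
    return int.from_bytes(digest[:4], "big")

def feistel_encrypt(key: bytes, rounds: int, block):
    # Each round: (L, R) -> (R, L xor F_i(R)).
    L, R = block
    for i in range(rounds):
        L, R = R, L ^ _round_fn(key, i, R)
    return L, R

def feistel_decrypt(key: bytes, rounds: int, block):
    # Run the rounds backwards; Feistel networks are invertible for
    # any round function, so this undoes feistel_encrypt exactly.
    L, R = block
    for i in reversed(range(rounds)):
        L, R = R ^ _round_fn(key, i, L), L
    return L, R
```

The Luby-Rackoff theorem (and the improvement above) is about how many such rounds, plus pairwise-independent mixing, make the resulting permutation pseudo-random.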

N. Alon, J. Spencer & P. Erdős (1992). The Probabilistic Method. Wiley.

A. Beimel, F. Bergadano, N. Bshouty, E. Kushilevitz & S. Varricchio (1996). On the applications of multiplicity automata in learning. In FOCS '96, 349–358.