How hard is it to approximate the best Nash equilibrium?
Elad Hazan
IBM Almaden
ehazan@cs.princeton.edu
Robert Krauthgamer∗
Weizmann Institute of Science
robert.krauthgamer@weizmann.ac.il
August 3, 2009
Abstract
The quest for a PTAS for Nash equilibrium in a two-player game, which emerged as a major
open question in Algorithmic Game Theory, seeks to circumvent the PPAD-completeness of
finding an (exact) Nash equilibrium by finding an approximate equilibrium. The closely related
problem of finding an equilibrium maximizing a certain objective, such as the social welfare,
was shown to be NP-hard [Gilboa and Zemel, Games and Economic Behavior, 1989]. However,
this NP-hardness is unlikely to extend to approximate equilibria, since the latter admits a
quasi-polynomial time algorithm [Lipton, Markakis and Mehta, In Proc. of 4th EC, 2003].
We show that this optimization problem, namely, finding in a two-player game an approx-
imate equilibrium achieving a large social welfare, is unlikely to have a polynomial time algo-
rithm. One interpretation of our results is that a PTAS for Nash equilibrium (if exists) should
not extend to a PTAS for finding the best Nash equilibrium, which stands in contrast to certain
algorithmic techniques used so far (e.g. sampling and enumeration).
Technically, our result is a reduction from a notoriously difficult problem in modern Combi-
natorics, of finding a planted (but hidden) clique in a random graph G(n,1/2). Our reduction
starts from an instance with planted clique size O(logn). For comparison, the currently known
algorithms are effective for a much larger clique size Ω(√n).
1 Introduction
Computational aspects of equilibrium concepts, and in particular of Nash equilibrium, have seen
major advances over the last few years, both from the side of algorithms and in terms of computa-
tional complexity (namely, completeness and hardness results). Perhaps the most celebrated result
in this area [CDT06, DGP06] proves that computing a Nash equilibrium in a finite game with two
players is PPAD-complete. Consequently, a weaker notion of ε-approximate Nash equilibrium, or
in short an ε-equilibrium, was suggested, and the following has emerged as a central open question:
Is there a PTAS for Nash equilibrium?
In other words, is there a polynomial time algorithm that finds an ε-Nash equilibrium for arbitrarily
small but fixed ε > 0? Here and in the sequel we follow the literature and assume that the game’s
∗Research supported in part by a grant from the Fusfeld Research Fund. Part of this work was done while at IBM
Almaden.
payoffs are in the interval [0,1], and approximations are measured additively; see Section 2 for
precise definitions.
While every game has at least one Nash equilibrium, the game may actually have many equi-
libria, some more desirable than others. Thus, an attractive solution concept is to find a Nash
equilibrium maximizing an objective such as the social welfare (the total utility of all players). For
two-player games this problem is known to be NP-hard [GZ89, CS03]. But as we shall soon see,
this hardness result is unlikely to extend to ε-equilibrium.
A fairly simple yet surprisingly powerful technique is random sampling, where each player's mixed strategy x is replaced by another mixed strategy x' that has small support, obtained by sampling a few pure strategies independently from x and taking x' to be a uniform distribution over the chosen pure strategies. (We allow repetitions, i.e. the support is viewed as a multiset.)
This technique leads to a simple algorithm that finds in a two-player game an ε-equilibrium in quasi-polynomial time N^{O(ε^{-2} log N)} [LMM03], assuming that the game is represented as two N × N matrices.¹ Indeed, applying random sampling to any Nash equilibrium together with Chernoff-like bounds yields an ε-equilibrium consisting of mixed strategies that are each uniform over a multiset of size O(ε^{-2} log N), and such an ε-equilibrium can be found by enumeration (exhaustive search). In fact, this argument applies also to the social-welfare maximization problem, and thus the algorithm of [LMM03] finds in time N^{O(ε^{-2} log N)} an ε-equilibrium whose social welfare is no more than ε smaller than the maximum social welfare of a Nash equilibrium in the game.
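The sampling step underlying this argument can be sketched as follows (a minimal illustration; the support-size constant is an arbitrary choice of ours, not the tuned constant from [LMM03]):

```python
import math
import random

def sparsify(x, m, rng=random.Random(0)):
    """Replace a mixed strategy x (a list of probabilities) by a uniform
    distribution over a multiset of m pure strategies drawn i.i.d. from x."""
    n = len(x)
    multiset = rng.choices(range(n), weights=x, k=m)  # repetitions allowed
    y = [0.0] * n
    for i in multiset:
        y[i] += 1.0 / m
    return y

# Support size m = O(eps^-2 log N); the leading constant 4 is arbitrary.
N, eps = 1000, 0.5
m = math.ceil(4 * eps ** -2 * math.log2(N))
y = sparsify([1.0 / N] * N, m)
assert abs(sum(y) - 1.0) < 1e-9            # still a probability distribution
assert sum(1 for p in y if p > 0) <= m     # support is a small multiset
```

Chernoff-type bounds then show that, against the sparsified strategies, every pure strategy's payoff changes by at most ε with high probability, which is what makes enumeration over small multisets sufficient.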
The existence of a quasi-polynomial algorithm may seem like promising evidence that a polynomial algorithm exists. The latter emerged as a major goal and has drawn intense work with
encouraging progress [DMP06, KPS06, DMP07, BBM07, TS07], culminating (so far) with a poly-
nomial time algorithm that computes a 0.3393-equilibrium [TS07]. All these algorithms, with
the sole exception of [TS07], rely on the aforementioned approach of proving the existence of a
small support ε-equilibrium via sampling, and then finding such an equilibrium using enumeration
(exhaustive search) in conjunction with other algorithmic tools (such as linear programming).
While progress on the approximation side remains steady, the other direction, of computational lower bounds, has resisted attempts to exclude a PTAS by extending the known hardness results to approximations (beyond FPTAS), either for any equilibrium or for an objective-maximizing one.²
The reason for this difficulty might be the aforementioned quasi-polynomial time algorithms, due
to which it is less plausible that we can prove hardness of approximation based on NP-hardness or
PPAD-hardness for the corresponding question.
In this paper we give the first negative evidence for the existence of a PTAS for the objective-
maximizing question. Since NP-hardness is out of the question, we design a reduction from the
well known problem of finding a hidden (planted) clique in a random graph. The latter choice
is non-standard, as the problem appears to be hard on the average rather than in a worst-case
sense. However, in several respects it is an ideal choice. First, it admits a straightforward quasi-
polynomial time algorithm. Second, the average-case nature of the problem is particularly suited
for constructing games with a highly regular structure, which will be important in our reduction.
¹ Throughout, f is called quasi-polynomial if f(n) ≤ n^{O(log n)}.
² PTAS, which stands for polynomial-time approximation scheme, means that for every fixed ε > 0 there is a polynomial-time algorithm. FPTAS is stronger in that the running time is also polynomial in 1/ε. The current PPAD-hardness results exclude FPTAS.
The hidden clique problem. In this problem, the input is a graph on n vertices drawn at random from the following distribution G_{n,1/2,k}: pick a random graph from G_{n,1/2} and plant in it a clique of size k = k(n).³ The goal is to recover the planted clique (in polynomial time), with probability at least (say) 1/2 over the input distribution. Note that the clique is hidden in the sense that its "location" is adversarial and not known to the algorithm (but independent of the random graph, e.g. of its degrees). In a random graph, the maximum size of a clique is, with high probability, roughly 2 log n, and when the parameter k is larger than this value, the planted clique will be, with high probability, the unique maximum clique in the graph, and the problem's goal is simply to find the maximum clique in the graph (see Lemma 2.1 for details). The problem was suggested independently by Jerrum [Jer92] and by Kučera [Kuč95].
It is not difficult to see that the hidden clique problem becomes only easier as k gets larger, and
the best polynomial-time algorithm to date, due to Alon, Krivelevich and Sudakov [AKS98], solves
the problem whenever k ≥ Ω(√n) (see also [FK00]). Improving over this bound is a well-known open
problem, and certain algorithmic techniques provably fail this task, namely the Metropolis process [Jer92] and the Lovász-Schrijver hierarchy of relaxations [FK03]. Recent results [FK08, BV09] based
on r-dimensional tensors (generalizations of matrices to dimension r ≥ 3) suggest an algorithmic approach capable of finding a hidden clique of size O(n^{1/r}), but currently they are not known to run in polynomial time.
The hidden clique problem can be easily solved in quasi-polynomial time n^{O(log n)}; for the most difficult regime k = O(log n), this is obviously true even for worst-case instances of the maximum clique problem.
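A sampler for the distribution G_{n,1/2,k} is straightforward to write down (a sketch; the helper name is ours):

```python
import itertools
import random

def planted_clique_instance(n, k, rng=random.Random(0)):
    """Draw G from G(n, 1/2), then plant a clique on k random vertices.
    Returns the edge set (as frozensets) and the planted clique."""
    edges = {frozenset(e) for e in itertools.combinations(range(n), 2)
             if rng.random() < 0.5}
    clique = set(rng.sample(range(n), k))
    edges |= {frozenset(e) for e in itertools.combinations(sorted(clique), 2)}
    return edges, clique

edges, W = planted_clique_instance(200, 30)
assert all(frozenset((u, v)) in edges for u in W for v in W if u != v)
```

Note that the clique's location is chosen independently of the random edges, matching the remark above that it is independent of, e.g., the degrees.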
1.1 Our Results
We relate the worst-case hardness of finding an approximate equilibrium to that of solving the
hidden clique problem, formally stated as follows.
Theorem 1.1. There are constants ε̂, ĉ > 0 such that the following holds. If there is a polynomial-time algorithm that finds in an input two-player game an ε̂-equilibrium whose social welfare is no more than ε̂ smaller than the maximum social welfare of an equilibrium in this game, then there is a (randomized) polynomial-time algorithm that solves the hidden clique problem for k = ĉ log n with high probability.
We remark that our proof is actually shown for the special case of symmetric two-player games
(see Section 2 for definitions), which makes the result only stronger (since this is a hardness result).
We make no attempt to optimize various constants in the proofs.
Subsequent work. Recently, Minder and Vilenchik [MV09] improved the constants in our main theorem, showing that a polynomial time approximation scheme for the best Nash equilibrium implies a polynomial time algorithm for finding a hidden clique of size (3 + δ) log n in a random graph, or alternatively for the decision version of detecting a hidden clique of size (2 + δ) log n, for an arbitrarily small constant δ > 0. The leading constant 2 is a natural barrier for the problem, since a clique of size roughly 2 log n exists in a random graph with very high probability.
³ G_{n,p} denotes the distribution over graphs on n vertices generated by placing an edge between every pair of vertices independently with probability p.
1.2 Related Work
There are complexity classes that attempt to capture problems requiring running time n^{O(log n)}; see [PY96] and the references in Section 5 therein. It is plausible that our approach of relying on the hidden clique problem may be used to prove hardness of approximation for problems mentioned in [PY96], such as the VC dimension of a 0-1 matrix, and the minimum dominating set in a tournament graph.
Average-case hardness. The hidden clique problem is related to the assumption that refuting 3SAT is hard on average (for low-density formulas), which was used by Feige [Fei02] to derive constant factor hardness of approximation for several problems, such as minimum bisection, dense k-subgraph and maximum bipartite clique. His results may be interpreted as evidence that approximation within the same (or similar) factor is actually NP-hard, which is a plausible possibility but not known to date. In fact, the random 3SAT refutation conjecture may be viewed [U. Feige, private communication] as an analogue of the hidden clique problem, where straightforward algorithms based on enumeration require exponential, rather than quasi-polynomial, running time. It is not difficult to see that some of the combinatorial optimization problems addressed in [Fei02] are also hard to approximate under the assumption that the hidden clique problem cannot be solved in polynomial time. Consider for example the dense k-subgraph problem; the hidden clique graph itself obviously contains a k-vertex subgraph of full density, while any algorithm that is likely to find in it a sufficiently dense k-vertex subgraph can be used to find the planted clique, see e.g. Lemma 5.3. The argument for the maximum bipartite clique problem is similar.

It is worth noting that the assumption that the hidden clique problem is hard was used in a few other contexts, including for cryptographic applications [JP00], and for hardness of testing almost k-wise independence [AAK+07]. The decision version of the hidden clique problem, namely, to distinguish between the distributions G_{n,1/2,k} and G_{n,1/2}, is attributed to M. Saks in [KV02, Section 5].
Computing equilibria. The last decade has seen a vast literature on computational aspects of equilibria in various scenarios, including for example graphical games (where the direct interaction between players is limited by a graph structure), succinct games (where the payoffs can be represented succinctly, e.g. due to strong symmetries or a combinatorial structure), and markets (where sellers and buyers interact via prices). For more details, we refer the reader to the excellent and timely surveys [Pap08, Rou08] and the many references therein. More concretely for Nash equilibrium in bimatrix games, see the recent surveys [Pap07, Spi08].

In general, the problems of finding any equilibrium and that of finding an equilibrium that maximizes some objective need not have the same (runtime) complexity, although certain algorithmic techniques may be effective for both. As mentioned earlier, this indeed happens for ε-equilibrium in two-player games, when employing random sampling combined with quasi-polynomial enumeration. Another example is the use of the discretization method [KLS01], which was recently used in [DP08] to find an ε-equilibrium in anonymous games with a fixed number of strategies, but actually extends to the value-maximization version [C. Daskalakis, private communication]. Yet another example is the algorithms of [KLS01, EGG07] for graphical games on bounded-degree trees.
2 Preliminaries
Let [n] = {1, 2, ..., n}. An event E is said to occur with high probability if Pr[E] ≥ 1 − 1/n^{Ω(1)}; the value of n will be clear from the context. An algorithm is called efficient if it runs in polynomial time n^{O(1)}. Throughout, we ignore rounding issues, assuming e.g. that log n is integral. All logarithms are to base 2, unless stated explicitly.
2.1 Nash Equilibria in games
In the sequel, we restrict our attention to symmetric games, hence our definition assumes square
matrices for the payoffs. A (square) two-player bimatrix game is defined by two payoff matrices R, C ∈ R^{n×n}, such that if the row and column players choose pure strategies i, j ∈ [n], respectively, the payoffs to the row and column players are R(i,j) and C(i,j), respectively. The game is called symmetric if C = R^T.
A mixed strategy for a player is a distribution over pure strategies (i.e. rows/columns), and for
brevity we may refer to it simply as a strategy. An ε-approximate Nash equilibrium is a pair of
mixed strategies (x, y) such that

    ∀i ∈ [n],  e_i^T R y ≤ x^T R y + ε,
    ∀j ∈ [n],  x^T C e_j ≤ x^T C y + ε.

Here and throughout, e_i is the i-th standard basis vector, i.e. 1 in the i-th coordinate and 0 in all other coordinates. If ε = 0, the strategy pair is called a Nash equilibrium (NE). The definition immediately implies the following.
Proposition 2.1. For an ε-equilibrium (x, y), it holds that for all mixed strategies x̃, ỹ,

    x̃^T R y ≤ x^T R y + ε,    x^T C ỹ ≤ x^T C y + ε.
As we are concerned with an additive notion of approximation, we assume that the entries of the matrices are in the range [0, M], where M is a constant independent of all the other parameters. Our results easily translate to the case M = 1 by scaling all payoffs.
Consider a pair of strategies (x, y). We call x^T R y the payoff of the row player (this is actually the expected payoff), and similarly for the column player. The value of an (approximate) equilibrium for the game is the average of the payoffs of the two players. Recall that social welfare is the total payoff of the two players, and thus equals twice the value.
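For concreteness, these definitions can be checked mechanically; a small sketch (it suffices to check pure deviations, since the payoff of a mixed deviation is an average of pure-deviation payoffs):

```python
def payoff(x, mat, y):
    """Expected payoff x^T mat y for mixed strategies x, y (lists)."""
    return sum(x[i] * mat[i][j] * y[j]
               for i in range(len(x)) for j in range(len(y)))

def best_pure_payoff(mat, z):
    """Max payoff over pure row strategies of `mat` against distribution z."""
    return max(sum(mat[i][j] * z[j] for j in range(len(z)))
               for i in range(len(mat)))

def is_eps_equilibrium(R, C, x, y, eps):
    """Check the definition: no pure deviation gains more than eps."""
    CT = [list(col) for col in zip(*C)]  # transpose: the column player's view
    return (best_pure_payoff(R, y) <= payoff(x, R, y) + eps and
            best_pure_payoff(CT, x) <= payoff(x, C, y) + eps)

def value(R, C, x, y):
    """Average of the two players' payoffs; social welfare is twice this."""
    return 0.5 * (payoff(x, R, y) + payoff(x, C, y))

R = [[1, 0], [0, 1]]  # matching pennies: row wants to match, column to differ
C = [[0, 1], [1, 0]]
assert is_eps_equilibrium(R, C, [0.5, 0.5], [0.5, 0.5], 1e-9)
assert abs(value(R, C, [0.5, 0.5], [0.5, 0.5]) - 0.5) < 1e-9
```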
2.2 The hidden clique problem
Recall that in this problem the input is drawn at random from the distribution G_{n,1/2,k}. Intuitively, the problem only becomes easier as k gets larger, at least in our regime of interest, namely k ≥ c_0 log n for a sufficiently large constant c_0 > 0. This intuition can be made precise as follows.
Lemma 2.1. Suppose there are a constant c_1 > 0 and a polynomial time algorithm that, given an instance of the hidden clique problem with k ≥ c_1 log n, finds a clique of size 100 log n with probability at least 1/2. Then there exist a constant c_0 > 0 and a randomized polynomial time algorithm that solves the hidden clique problem for every k ≥ c_0 log n.
This lemma is probably known, but since we could not find a reference, we prove it in Appendix A, essentially using ideas from [AKS98] and [McS01]. Notice that due to potential
correlations, one cannot employ simple techniques that are useful in worst-case instances, such as
repeatedly finding and removing from the input graph a clique of size 100logn (using the assumed
algorithm), not to mention of course that the assumed algorithm only succeeds with probability
1/2 (and repeating it need not amplify the success probability).
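Appendix A is not reproduced here, but the flavor of such clique-completion arguments, in the spirit of [AKS98, McS01], is to grow a seed clique by neighbor counting. A toy sketch (our own simplification under the assumption that the seed lies mostly inside the planted clique, not the appendix's actual proof):

```python
def extend_seed_clique(adj, seed, k):
    """Given a seed clique believed to lie (mostly) inside the planted
    k-clique, keep the vertices adjacent to at least 3/4 of the seed and
    return the k of them with highest degree inside the candidate set.
    The 3/4 threshold is an illustrative choice."""
    n = len(adj)
    seed = set(seed)
    cand = [v for v in range(n)
            if sum(adj[v][u] for u in seed if u != v) >= 0.75 * len(seed - {v})]
    deg_in = {v: sum(adj[v][u] for u in cand if u != v) for v in cand}
    return set(sorted(cand, key=deg_in.get, reverse=True)[:k])
```

The point of such a step is exactly the caveat above: in a planted instance, membership statistics of the candidate set are correlated with the seed, so the analysis must be done over the joint distribution rather than by worst-case arguments.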
3 The Reduction
We prove Theorem 1.1 by reducing the hidden clique problem to the Nash equilibrium problem.
That is, given an input instance of the hidden clique problem we construct a two-player game
such that with high probability (over the randomness in our construction and in the hidden clique
instance), a high-value approximate equilibrium leads, in polynomial time, to a solution to the
hidden clique instance.
Techniques. Our construction is motivated by an observation of Motzkin and Straus [MS65] (and independently by Halperin and Hazan [HH05]), that for every graph, the quadratic form corresponding to the graph's adjacency matrix, when considered over the unit simplex (i.e. all probability distributions over [n]), is maximized exactly at a (normalized) incidence vector of a maximum clique in the graph. We rely on this observation, as one portion of the game we construct is exactly the adjacency matrix of the hidden clique instance. However, this is not enough to obtain a suitable Nash equilibrium instance.

First, an equilibrium is a bi-linear form rather than a quadratic form, hence the results of [MS65, HH05] are not directly applicable. We thus use (mainly in Lemma 5.2 below) the special properties of an approximate equilibrium to prove a relationship of similar flavor between bi-linear forms on the adjacency matrix and large cliques (or actually dense subgraphs) in the graph.

Second, a simple use of the adjacency matrix yields a very small gap (between vectors corresponding to a clique and those that do not) that is by far insufficient to rule out a PTAS. To boost this gap we use an idea of Feder, Nazerzadeh, and Saberi [FNS07] to eliminate from the game all equilibria of small support (cardinality at most O(log n)).
The construction. Let ε̂, ĉ and M, c_1, c_2 be constants to be defined shortly. Given an instance G ∈ G_{n,1/2,k} of the hidden clique problem, consider the two-player game defined by the following payoff matrices (for the row player and the column player, respectively):

    R = ( A   -B^T )        C = ( A    B^T )
        ( B    0   )            ( -B   0   )

The matrices R, C are of size N × N for N = n^{c_1}. These matrices are constructed from the following blocks.
1. The upper left n × n block in both R,C is the adjacency matrix of G with self-loops added.
2. The lower right block 0 in both R,C is the all zeros matrix of size (N − n) × (N − n).
3. All other entries are set via an (N − n) × n matrix B whose entries are set independently at random to be

       B_{i,j} = M with probability 3/(4M), and B_{i,j} = 0 otherwise.
Notice that the game is symmetric, i.e. C = R^T, and that outside the upper left block A, the game is zero-sum.
Choice of parameters. We set the parameters in our construction as follows.

• M = 12;
• c_2 = 2000;
• c_1 = 2 + c_2 log(4M/3) (recall N = n^{c_1});
• ĉ = 32M²(c_1 + 2); and
• ε̂ = 1/(50M).
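With these constants, N is astronomically large; at an illustrative scale, and with hypothetical helper names, the block layout above can be realized as follows:

```python
import random

def build_game(adj, N, M=12, rng=random.Random(0)):
    """Build the N x N payoff matrices of the reduction from an n-vertex
    graph. `adj` is a symmetric 0/1 adjacency matrix; self-loops are added
    to form the block A.  R has blocks [[A, -B^T], [B, 0]] and C = R^T,
    so the game is symmetric and zero-sum outside A."""
    n = len(adj)
    A = [[1 if i == j else adj[i][j] for j in range(n)] for i in range(n)]
    # (N - n) x n random matrix: M with probability 3/(4M), else 0.
    B = [[M if rng.random() < 3 / (4 * M) else 0 for _ in range(n)]
         for _ in range(N - n)]
    R = [[0] * N for _ in range(N)]
    for i in range(n):
        for j in range(n):
            R[i][j] = A[i][j]
    for i in range(N - n):
        for j in range(n):
            R[n + i][j] = B[i][j]       # lower-left block:  B
            R[j][n + i] = -B[i][j]      # upper-right block: -B^T
    C = [list(col) for col in zip(*R)]  # symmetric game: C = R^T
    return R, C

R, C = build_game([[0, 1], [1, 0]], N=6)
assert all(C[i][j] == R[j][i] for i in range(6) for j in range(6))
```

The off-diagonal blocks cancel in R + C, which is exactly the zero-sum property outside A used repeatedly below.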
As is standard in computational complexity, we prove Theorem 1.1 by analyzing the complete-
ness and soundness of the reduction.
In the sequel, when we say with high probability, we mean over the choice of G from G_{n,1/2,k}, over the construction of the game (namely the randomness in B), and over the coin tosses of our algorithms. We note, however, that most of our algorithms are deterministic; the only exception is Lemma 2.1 (and of course the algorithms that invoke it).
4 Completeness
Lemma 4.1. With high probability, the game above has an equilibrium of value 1.
Proof. Consider the distributions (mixed strategies) x = y which are a uniform distribution over the strategies corresponding to the planted k-clique, i.e. x_i = 1/k if i is in the planted clique, and x_i = 0 otherwise. The value of this strategy pair is (1/2) x^T(R + C)y = 1. We shall prove that with high probability this is an equilibrium.

Consider without loss of generality the row player, and observe that her payoff when she plays x is exactly 1. We need to show that she cannot obtain a larger payoff by playing instead any pure strategy i ∈ [N]. For i ≤ n, her new payoff is at most the largest entry in A, i.e. 1. For each i > n, her new payoff is the average of k distinct values in B, which is highly concentrated around its mean 3/4. Formally, we use the following Chernoff-Hoeffding bound.
Theorem 4.1 ([Hoe63]). Let X_1, ..., X_m be independent random variables bounded by |X_j| ≤ C, and let X̄ = (1/m) Σ_j X_j. Then for all t > 0,

    Pr[X̄ − E[X̄] ≥ t] ≤ exp(−mt² / (2C²)).
In our case, the variables satisfy |X_j| ≤ M and E[X_j] = 3/4, and X̄ is the payoff of playing a strategy i > n (when the other player still plays x = y). We thus obtain

    Pr[X̄ ≥ 1] ≤ exp(−k / (32M²)).

By a union bound over all strategies i > n, the probability that there exists a strategy i > n with payoff larger than 1 is at most (N − n) · e^{−k/(32M²)} ≤ n^{c_1 − ĉ/(32M²)} = 1/n², where the last inequality follows by our choice of c_1 and ĉ. This completes the proof of Lemma 4.1.
5 Soundness
To complete the proof of Theorem 1.1, we shall show that with high probability, every ε̂-approximate equilibrium with value ≥ 1 − ε̂ in the game can be used to find the hidden clique efficiently. We do this in three steps, using the three lemmas below.

For our purpose, a bipartite subgraph is two equal-size subsets V_1, V_2 ⊆ V(G) (not necessarily disjoint); its density is the probability that random v_1 ∈ V_1, v_2 ∈ V_2 are connected by an edge in the input graph with self-loops added, i.e. E[A_{v_1,v_2}].
Lemma 5.1. With high probability, given an ε̂-equilibrium in the game with value ≥ 1 − ε̂, we can efficiently compute a (4Mε̂)-equilibrium that is supported only on A and has value ≥ 1 − ε̂.

Lemma 5.2. With high probability, given a (4Mε̂)-equilibrium supported only on the matrix A and with value ≥ 1 − ε̂, we can efficiently find a bipartite subgraph of size c_2 log n and density ≥ 3/5 in the input graph.

Lemma 5.3. With high probability, given a bipartite subgraph of size c_2 log n and density ≥ 3/5 in the input graph, we can efficiently find the entire planted hidden k-clique.
5.1 Proof of Lemma 5.1
The following two claims are stated with a general parameter δ > 0, although we will later use
them only with a specific value δ = ˆ ε.
Claim 5.1. In every pair of mixed strategies achieving value ≥ 1 − δ, at most δ of the probability mass of each player resides on strategies not in [n].

Proof. The contribution to the value of the equilibrium from outside the upper left block is 0, because over there the game is zero-sum. Inside that block the two players receive identical payoffs, which are according to A and thus upper bounded by 1. Thus,

    1 − δ ≤ (1/2) x^T(R + C)y = Σ_{i,j∈[n]} x_i y_j A_{ij} ≤ (Σ_{i∈[n]} x_i)(Σ_{j∈[n]} y_j),

and it immediately follows that both Σ_{i∈[n]} x_i and Σ_{j∈[n]} y_j are at least 1 − δ.

Claim 5.2. Given an ε̂-equilibrium where at most δ of each player's probability mass resides on strategies not in [n], we can find an (ε̂ + 3Mδ)-equilibrium that is supported only on A and whose value is at least as large.
Proof. Given an ε̂-equilibrium (x, y), we obtain a new pair of strategies (x̃, ỹ) by restricting each player's support to [n], i.e. removing strategies not in [n] and scaling to obtain a probability distribution. Since the game is zero-sum outside of A, removing strategies not in A does not change the value, and since the entries in A are nonnegative, the scaling operation can only increase the value, i.e. x̃^T(R + C)ỹ ≥ x^T(R + C)y.

To bound defections, consider without loss of generality the row player. First, her payoff when defecting to strategy i ∈ [N] does not change much, i.e. |e_i^T R ỹ − e_i^T R y| ≤ Mδ, because the total mass of y moved around is at most δ, and because entries in the same row (or same column) of R differ by at most M. Furthermore, her payoff in the new pair does not change much, i.e. |x̃^T R ỹ − x^T R y| ≤ 2Mδ, again because at most 2δ of the total probability mass of x and y was moved.
Lemma 5.1 now follows immediately from the two claims above by setting δ = ˆ ε.
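The restriction step of Claim 5.2 is purely mechanical; a sketch (the function name is ours):

```python
def restrict_to_block(x, y, n):
    """Restrict a strategy pair to the first n pure strategies and rescale.
    Per Claim 5.2, if at most a delta-fraction of each player's mass lies
    outside [n], the result is an (eps + 3*M*delta)-equilibrium on A."""
    def restrict(z):
        mass = sum(z[:n])
        assert mass > 0, "no probability mass on [n]"
        return [zi / mass for zi in z[:n]]
    return restrict(x), restrict(y)
```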
5.2 Proof of Lemma 5.2
To prove this lemma, we first need the following structural claim.
Claim 5.3. With high probability, in every 1/2-approximate equilibrium supported only on the matrix A, the total probability mass every player places on every set of c_2 log n pure strategies is ≤ 2/M.

Proof. For convenience, denote d = c_2 log n. Suppose that one of the players, say the column player, has probability mass of more than 2/M on a given set of d strategies. Let us compute the probability that there exists a row in B in which the d corresponding entries all have a value of M. If this event happens, then we do not have a 1/2-equilibrium, since the row player can defect to this particular row, to obtain payoff ≥ (2/M) · M = 2, while her current payoff is ≤ 1. The probability that this event happens for a single row is very small, namely p^d for p = 3/(4M). But we have N − n rows, and they are independent. Thus, the probability that no row has a streak of M's in the particular d columns is at most

    (1 − p^d)^{N−n} ≤ exp(−p^d N/2) = exp(−n^{c_1 − c_2 log(4M/3)} / 2) ≤ exp(−n²/2),

where the last inequality is by our choice of c_1 and c_2. Hence with probability ≥ 1 − e^{−n²/2} there is such a row, and this cannot be an equilibrium.

We now need to rule out all possible sets of size d. There are (n choose d) ≤ n^d such sets, and each one fails to be an equilibrium with probability ≥ 1 − e^{−n²/2}. We can rule out all of them by a union bound, since n^d · e^{−n²/2} ≤ e^{−Ω(n²)}.
Proof of Lemma 5.2. Let (x, y) be such a (4Mε̂)-equilibrium. Define T = {j ∈ supp(y) : x^T A e_j ≥ 4/5}, where e_j is the j-th standard basis vector. Observe that T is nonempty, since x^T A y ≥ 1 − ε̂. Furthermore, its total probability mass must be Σ_{j∈T} y_j > 2/M, as otherwise x^T A y ≤ (2/M) · 1 + (1 − 2/M) · (4/5) < 1 − ε̂ by our choice of ε̂. Since Σ_{j∈T} y_j > 2/M and (x, y) is a (4Mε̂)-equilibrium, we have by Claim 5.3 that |T| ≥ c_2 log n. To get size exactly c_2 log n, we can just take an arbitrary subset of T. Denoting by u_T the uniform distribution on T, the pair (x, u_T) satisfies

    x^T A u_T ≥ 4/5.
Now define S = {i ∈ supp(x) : e_i^T A u_T ≥ 3/5}. Its total probability mass must be Σ_{i∈S} x_i > 2/M, as otherwise x^T A u_T ≤ (2/M) · 1 + (1 − 2/M) · (3/5) < 4/5, by our choice of M ≥ 10. By again applying Claim 5.3 to the 1/2-equilibrium (x, y) we then have |S| ≥ c_2 log n. To get size exactly c_2 log n, we can just take an arbitrary subset of S. Let u_S be the uniform distribution over the set S. Then

    u_S^T A u_T ≥ 3/5,

i.e. S, T define a bipartite subgraph of size c_2 log n and density ≥ 3/5.
5.3 Proof of Lemma 5.3
To prove this lemma, we first show how to extract a clique of logarithmic size.
Claim 5.4. With high probability, given a bipartite subgraph of size c_2 log n and density ≥ 3/5 in the input graph, we can efficiently find a clique of size 100 log n.
Proof. Let S, T ⊂ V(G) be the two sets forming the bipartite subgraph, and let W ⊂ V(G) denote the vertices of the planted clique, i.e. |W| = k. A straightforward union bound shows that with high probability at least 1/20 of the vertices in S must lie in the planted clique. Indeed, the number of sets S, T ⊂ V(G) with |S| = |T| = c_2 log n and |S ∩ W| < (1/20)|S| is at most (n choose |S|) · (n choose |T|) < n^{2 c_2 log n}. For each of them, the density between S \ W and T is essentially the average of Θ(c_2 log n)² Bernoulli random variables each with expectation 1/2 (with the exception of some variables that may be included twice, and except for at most c_2 log n terms corresponding to self-loops). Thus, by the Chernoff-Hoeffding bound above, Pr[density(S \ W, T) ≥ 0.55] ≤ exp(−Ω(c_2 log n)²), and we get that with high probability,

    density(S, T) = (|S ∩ W| / |S|) · density(S ∩ W, T) + (|S \ W| / |S|) · density(S \ W, T) < 3/5.

Furthermore, we can take a union bound over all choices of such S, T by choosing c_2 as a sufficiently large constant.

Thus, given S, T we can try all subsets of S of size (1/20) c_2 log n by exhaustive search (the number of such sets is n^{O(c_2)} = n^{O(1)}), and find the largest subset S' that forms a clique in G. By the above analysis, with high probability |S'| ≥ (c_2/20) log n = 100 log n.

Lemma 5.3 now follows by combining the above claim with Lemma 2.1. This completes also the proof of Theorem 1.1.
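The exhaustive search in the proof of Claim 5.4 is simple to spell out (a sketch; `adj` is a 0/1 adjacency matrix, and the subset size would be (1/20)|S| in the proof above):

```python
import itertools

def largest_clique_subset(adj, S, size):
    """Exhaustively search the size-`size` subsets of S for one that forms
    a clique in the graph.  This is polynomial-time when |S| = O(log n)
    and `size` is a fixed fraction of |S|."""
    for cand in itertools.combinations(sorted(S), size):
        if all(adj[u][v] for u, v in itertools.combinations(cand, 2)):
            return set(cand)
    return None
```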
6 Concluding remarks
We have shown that a PTAS for the "best" Nash equilibrium implies an efficient algorithm for a seemingly difficult combinatorial optimization problem. The measure of quality for equilibria we
have used is the total payoff to the two players. It seems plausible that other quality measures,
such as those considered by Gilboa and Zemel [GZ89], can be reduced to the hidden clique problem
in much the same way.
The problem from which we carry out the reduction, namely the hidden clique problem, is non-
standard in the sense that it “offers” average-case hardness (in addition to being easily solvable
in quasi-polynomial time). It is plausible that it suffices to assume worst-case hardness, and show
e.g. a reduction from the problem of finding in an n-vertex graph a clique of size O(log n). In light of the known quasi-polynomial time algorithms, an alternative complexity assumption that could potentially be used is that (say) maximum clique cannot be solved in time 2^{O(√n log n)}, see e.g.
[FK97]. Yet another direction is to relate the hardness of computing the best Nash equilibria to
the complexity class LOGNP of [PY96], because it naturally captures the known (quasi-polynomial
time) algorithms for approximating Nash equilibria.
Finally, an intriguing question is whether such techniques can possibly be applied to the problem
of finding a PTAS for any Nash equilibrium, i.e. to the regime of the PPAD complexity class.
Acknowledgments
We thank Uri Feige, Nimrod Megiddo and Aranyak Mehta for useful conversations regarding dif-
ferent problems and concepts studied in the paper.
References
[AAK+07] N. Alon, A. Andoni, T. Kaufman, K. Matulef, R. Rubinfeld, and N. Xie. Testing k-
wise and almost k-wise independence. In 39th Annual ACM Symposium on Theory of
Computing, pages 496–505. ACM, 2007.
[AKS98] N. Alon, M. Krivelevich, and B. Sudakov. Finding a large hidden clique in a random
graph. Random Structures Algorithms, 13(3-4):457–466, 1998.
[BBM07] H. Bosse, J. Byrka, and E. Markakis. New algorithms for approximate Nash equilibria
in bimatrix games. In WINE 2007, volume 4858 of Lecture Notes in Computer Science,
pages 17–29. Springer, 2007.
[BV09] S. C. Brubaker and S. Vempala. Random tensors and planted cliques. In 13th Interna-
tional Workshop on Randomization and Computation (RANDOM), 2009. To appear.
[CDT06] X. Chen, X. Deng, and S.-H. Teng. Computing Nash equilibria: Approximation and
smoothed complexity. In 47th Annual IEEE Symposium on Foundations of Computer
Science, pages 603–612. IEEE Computer Society, 2006.
[CS03] V. Conitzer and T. Sandholm. Complexity results about Nash equilibria. In 18th International Joint Conference on Artificial Intelligence, pages 765–771, 2003.
[DGP06] C. Daskalakis, P. W. Goldberg, and C. H. Papadimitriou. The complexity of computing
a Nash equilibrium. In 38th Annual ACM Symposium on Theory of computing, pages
71–78. ACM, 2006.
[DMP06] C. Daskalakis, A. Mehta, and C. H. Papadimitriou. A note on approximate Nash equilibria. In WINE 2006, volume 4286 of Lecture Notes in Computer Science, pages 297–306. Springer, 2006.
[DMP07] C. Daskalakis, A. Mehta, and C. Papadimitriou. Progress in approximate Nash equilibria. In 8th ACM conference on Electronic commerce, pages 355–358. ACM, 2007.
[DP08] C. Daskalakis and C. H. Papadimitriou. Discretized multinomial distributions and Nash
equilibria in anonymous games. In 49th Annual IEEE Symposium on Foundations of
Computer Science, pages 25–34. IEEE Computer Society, 2008.
[EGG07] E. Elkind, L. A. Goldberg, and P. W. Goldberg. Computing good Nash equilibria in
graphical games. In 8th ACM conference on Electronic commerce, pages 162–171. ACM,
2007.
[Fei02] U. Feige. Relations between average case complexity and approximation complexity. In
34th annual ACM Symposium on Theory of Computing, pages 534–543. ACM, 2002.
[FK97] U. Feige and J. Kilian. On limited versus polynomial nondeterminism. Technical report,
1997.
[FK00] U. Feige and R. Krauthgamer. Finding and certifying a large hidden clique in a semi-
random graph. Random Structures Algorithms, 16(2):195–208, 2000.
[FK03] U. Feige and R. Krauthgamer. The probable value of the Lovász-Schrijver relaxations
for maximum independent set. SIAM J. Comput., 32(2):345–370, 2003.
[FK08] A. M. Frieze and R. Kannan. A new approach to the planted clique problem. In Foun-
dations of Software Technology and Theoretical Computer Science (FSTTCS), volume
08004 of Dagstuhl Seminar Proceedings. Schloss Dagstuhl, Germany, 2008.
[FNS07] T. Feder, H. Nazerzadeh, and A. Saberi. Approximating Nash equilibria using small-
support strategies. In 8th ACM conference on Electronic commerce, pages 352–354.
ACM, 2007.
[GZ89] I. Gilboa and E. Zemel. Nash and correlated equilibria: Some complexity considerations.
Games and Economic Behavior, 1:80–93, 1989.
[HH05] E. Halperin and E. Hazan. HAPLOFREQ - estimating haplotype frequencies efficiently.
In 9th RECOMB, volume 3500 of Lecture Notes in Computer Science, pages 553–568.
Springer, 2005.
[Hoe63] W. Hoeffding. Probability inequalities for sums of bounded random variables. Journal
of the American Statistical Association, 58(301):13–30, 1963.
[Jer92] M. Jerrum. Large cliques elude the Metropolis process. Random Structures Algorithms,
3(4):347–359, 1992.
[JP00] A. Juels and M. Peinado. Hiding cliques for cryptographic security. Des. Codes Cryp-
togr., 20(3):269–280, 2000.
[KLS01] M. J. Kearns, M. L. Littman, and S. P. Singh. Graphical models for game theory. In 17th
Conference in Uncertainty in Artificial Intelligence, pages 253–260. Morgan Kaufmann
Publishers Inc., 2001.
[KPS06] S. C. Kontogiannis, P. N. Panagopoulou, and P. G. Spirakis. Polynomial algorithms
for approximating Nash equilibria of bimatrix games. In WINE 2006, volume 4286 of
Lecture Notes in Computer Science, pages 286–296. Springer, 2006.
[Kuč95] L. Kučera. Expected complexity of graph partitioning problems. Discrete Appl. Math.,
57(2-3):193–212, 1995.
[KV02] M. Krivelevich and V. H. Vu. Approximating the independence number and the chro-
matic number in expected polynomial time. J. Comb. Optim., 6(2):143–155, 2002.
[LMM03] R. J. Lipton, E. Markakis, and A. Mehta. Playing large games using simple strategies.
In 4th ACM conference on Electronic commerce, pages 36–41. ACM, 2003.
[McS01] F. McSherry. Spectral partitioning of random graphs. In 42nd IEEE symposium on
Foundations of Computer Science, page 529. IEEE Computer Society, 2001.
[MS65] T. S. Motzkin and E. G. Straus. Maxima for graphs and a new proof of a theorem of
Turán. Canadian Journal of Mathematics, 17:533–540, 1965.
[MV09] L. Minder and D. Vilenchik. Small clique detection and approximate Nash equilibria. In
13th International Workshop on Randomization and Computation (RANDOM), 2009.
To appear.
[Pap07] C. H. Papadimitriou. The complexity of finding Nash equilibria. In N. Nisan, T. Rough-
garden, E. Tardos, and V. Vazirani, editors, Algorithmic Game Theory, chapter 2, pages
29–51. Cambridge University Press, 2007.
[Pap08] C. H. Papadimitriou. The search for equilibrium concepts. In SAGT, volume 4997 of
Lecture Notes in Computer Science, pages 1–3. Springer, 2008.
[PY96] C. H. Papadimitriou and M. Yannakakis. On limited nondeterminism and the complex-
ity of the v-c dimension. J. Comput. Syst. Sci., 53(2):161–170, 1996.
[Rou08] T. Roughgarden. An algorithmic game theory primer. To appear in TCS, invited survey,
2008.
[Spi08] P. G. Spirakis. Approximate equilibria for strategic two person games. In SAGT 2008,
volume 4997 of Lecture Notes in Computer Science, pages 5–21. Springer, 2008.
[TS07] H. Tsaknakis and P. G. Spirakis. An optimization approach for approximate Nash
equilibria. In WINE 2007, volume 4858 of Lecture Notes in Computer Science, pages
42–56. Springer, 2007.
A Deferred Proof from Section 2
Proof of Lemma 2.1. Suppose there exists a polynomial time algorithm A∗ that, given the hidden
clique problem with k ≥ c1 log n, finds a clique of size 100 log n. We prove that there exists a
(randomized) polynomial time algorithm that solves the hidden clique problem exactly for every
k ≥ c0 log n, where c0 = 2tc1 for a sufficiently large t to be determined later. The algorithm is
composed of two stages.
Stage 1. Randomly partition the graph vertices into t parts. In each part, the expected number
of vertices from the planted clique is k/t ≥ (c0 log n)/t = 2c1 log n. Furthermore, using Chernoff
bounds it can be shown that with probability > 7/8, every part contains at least c1 log n vertices
from the hidden clique. In our analysis we shall assume henceforth that this is the case.
In each part separately, first complete it into an instance of hidden clique of size exactly n, by
adding n − n/t vertices and connecting all new potential edges with probability 1/2. Then apply the
polynomial time algorithm A∗. Observe that each part is distributed exactly as a hidden clique
instance, and by our assumption its hidden clique size is large enough that algorithm A∗ succeeds,
with probability ≥ 1/2, in finding a clique of size at least 100 log n. Since the different parts are
independent, the probability that A∗ succeeds in at most one part is ≤ (t + 1)2^−t < 1/8 for, say,
t = 6.
If A∗ does not succeed in at least two parts, report fail. Henceforth assume that A∗ succeeds
in at least two parts.
In each part where A∗ succeeds, we may assume that the clique size is exactly 100 log n by
removing arbitrary vertices from it. It can be shown that in the random portion of the graph (i.e.
not using the planted clique), the maximum clique size is with very high probability at most 3 log n.
In our analysis we shall assume henceforth that this is the case in all parts of the partition, and
hence at least 97 log n among the 100 log n vertices of the clique found belong to the planted clique.
Stage 2. In each part i apply the following. Identify another part j ≠ i where A∗ succeeded
in finding a clique of size 100 log n. Select the vertices in part i whose degree towards the
clique found in part j is at least 97 log n. Call these vertices Qi and report all selected vertices, i.e.
Q = ∪i Qi.
To analyze this stage, observe that a vertex v from part i that belongs to the planted clique
must have degree at least 97 log n towards the clique found in another part j, and thus belongs to
Qi. On the other hand, for a vertex v in part i that does not belong to the planted clique, the
expected degree towards any fixed subset of 100 log n vertices in part j is 50 log n. Notice that the
chosen part j and the clique found in it are completely independent of the edges connecting different
parts, because they are determined only by the edges internal to the different parts (possibly in
a complicated way, e.g. how to break ties if A∗ succeeds in more than two parts). Thus, the
probability that v has degree at least 97 log n towards the corresponding clique in part j is, using
the Chernoff bound, ≪ 1/n². By a union bound over all vertices we get that with very high
probability no such vertex belongs to Qi.
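As a sanity check (ours, not part of the original proof), this tail bound can be verified exactly at a small scale: the degree of such a vertex v towards a fixed set of 100 log n vertices is Binomial(100 log n, 1/2), and its upper tail at 97 log n is far below 1/n². A minimal Python computation, assuming logarithms base 2 and an illustrative n = 256:

```python
import math
from fractions import Fraction

def binom_upper_tail(m, k):
    """Exact P[Bin(m, 1/2) >= k], computed with integer arithmetic."""
    return Fraction(sum(math.comb(m, j) for j in range(k, m + 1)), 2 ** m)

n = 256                   # illustrative instance size (logs are base 2)
logn = int(math.log2(n))  # = 8
# Degree of a non-clique vertex towards a fixed 100 log n set: Bin(100 log n, 1/2).
tail = binom_upper_tail(100 * logn, 97 * logn)
assert tail < Fraction(1, n ** 2)  # far below the union-bound budget 1/n^2
```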
Combining this with the events mentioned earlier, we get by a union bound that Q = ∪i Qi
contains exactly all the hidden clique vertices with probability at least 2/3.
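The two stages above can be rendered as a short program. The following Python sketch is ours, not from the paper: the graph is an adjacency-set dictionary, `find_clique` stands in for the assumed algorithm A∗, and the constants 100 and 97 of the proof are exposed as parameters `a` and `b` so the sketch can be exercised on small instances.

```python
import math
import random

def pad_to_n(adj, part, n):
    """Stage 1 padding: complete the induced subgraph on `part` into an
    n-vertex instance by adding fresh vertices and joining every new
    potential edge independently with probability 1/2."""
    sub = {v: adj[v] & part for v in part}
    fresh = {("pad", i) for i in range(n - len(part))}
    for u in fresh:
        sub[u] = set()
    nodes = list(sub)
    for i, u in enumerate(nodes):
        for w in nodes[i + 1:]:
            if (u in fresh or w in fresh) and random.random() < 0.5:
                sub[u].add(w)
                sub[w].add(u)
    return sub

def boost_hidden_clique(adj, n, t, find_clique, a=100, b=97):
    """Two-stage reduction from the proof of Lemma 2.1.  `find_clique`
    plays the role of A* (returns a clique of size >= a log n, or None);
    b log n is the Stage 2 degree threshold (a = 100, b = 97 in the proof)."""
    logn = math.log2(n)
    vertices = list(adj)
    random.shuffle(vertices)
    parts = [set(vertices[i::t]) for i in range(t)]   # random t-partition

    # Stage 1: pad each part to an n-vertex instance, run A*, and trim any
    # clique found to exactly a log n vertices.
    cliques = []
    for part in parts:
        c = find_clique(pad_to_n(adj, part, n))
        if c is not None and len(c) >= a * logn:
            cliques.append(set(list(c)[: int(a * logn)]))
        else:
            cliques.append(None)
    if sum(c is not None for c in cliques) < 2:
        return None                                   # report fail

    # Stage 2: keep the vertices of part i with >= b log n neighbours in the
    # clique found in some other part j (edges across parts are independent
    # of how that clique was chosen).
    Q = set()
    for i, part in enumerate(parts):
        j = next(j for j, c in enumerate(cliques) if c is not None and j != i)
        for v in part:
            if len(adj[v] & cliques[j]) >= b * logn:
                Q.add(v)
    return Q
```

With a stub playing A∗ on a small planted instance (say n = 64, t = 2, a = b = 1), the returned set Q contains the whole planted clique whenever both parts succeed, mirroring the analysis above.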