How Hard Is It to Approximate the Best Nash Equilibrium?

SIAM J. Comput., 40:79–91, 2011. DOI: 10.1137/090766991
Source: DBLP

ABSTRACT The quest for a PTAS (polynomial-time approximation scheme) for Nash equilibrium in a two-player game seeks to circumvent the PPAD-completeness of an (exact) Nash equilibrium by finding an approximate equilibrium, and has emerged as a major open question in Algorithmic Game Theory. A closely related problem is that of finding an equilibrium maximizing a certain objective, such as the social welfare. This optimization problem was shown to be NP-hard by Gilboa and Zemel [Games and Economic Behavior, 1989]. However, this NP-hardness is unlikely to extend to finding an approximate equilibrium, since the latter admits a quasi-polynomial-time algorithm, as proved by Lipton, Markakis and Mehta [Proc. of 4th EC, 2003]. We show that this optimization problem, namely finding in a two-player game an approximate equilibrium that achieves large social welfare, is unlikely to have a polynomial-time algorithm. One interpretation of our results is that the quest for a PTAS for Nash equilibrium should not extend to a PTAS for finding the best Nash equilibrium, which stands in contrast to certain algorithmic techniques used so far (e.g. sampling and enumeration). Technically, our result is a reduction from a notoriously difficult problem in modern combinatorics: finding a planted (but hidden) clique in a random graph G(n, 1/2). Our reduction starts from an instance with planted clique size k = O(log n). For comparison, the currently known algorithms due to Alon, Krivelevich and Sudakov [Random Struct. & Algorithms, 1998] and Feige and Krauthgamer [Random Struct. & Algorithms, 2000] are effective only for a much larger clique size, k = Ω(√n).
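The planted clique model behind the reduction is easy to state concretely. The sketch below (hypothetical helper names, Python standard library only) samples G(n, 1/2), plants a clique of size k, and applies the simple degree heuristic, which is effective roughly when k = Ω(√(n log n)) because each planted vertex gains about k/2 extra degree. The paper's hard regime, k = O(log n), sits far below any such threshold.

```python
import random

def planted_clique_graph(n, k, seed=0):
    """Sample G(n, 1/2) and plant a clique on k random vertices.
    Returns (adjacency sets, planted vertex set)."""
    rng = random.Random(seed)
    adj = [set() for _ in range(n)]
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < 0.5:
                adj[u].add(v)
                adj[v].add(u)
    clique = set(rng.sample(range(n), k))
    for u in clique:
        for v in clique:
            if u != v:
                adj[u].add(v)
    return adj, clique

def top_degree_candidates(adj, k):
    """Degree heuristic: return the k highest-degree vertices.
    Planted vertices gain ~k/2 extra degree over the ~sqrt(n)/2 noise,
    so this recovers most of the clique once k >> sqrt(n log n)."""
    order = sorted(range(len(adj)), key=lambda v: len(adj[v]), reverse=True)
    return set(order[:k])
```

With n = 200 and k = 70 the degree gap is several standard deviations, so the heuristic recovers most planted vertices; at k = O(log n) the same statistic is useless, which is the regime the reduction exploits.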

  • ABSTRACT: When confronted with multiple Nash equilibria, decision makers have to refine their choices. Among all known Nash equilibrium refinements, the perfectness concept is probably the most famous one. It is known that weakly dominated strategies of two-player games cannot be part of a perfect equilibrium. In general, however, this undominance property does not extend to n-player games (E. E. C. van Damme, 1983). In this paper we show that polymatrix games, which form a particular class of n-player games, satisfy the undominance property. Consequently, we prove that every perfect equilibrium of a polymatrix game is undominated and that every undominated equilibrium of a polymatrix game is perfect. This result is used to establish a new characterization of perfect Nash equilibria for polymatrix games. We also prove that the set of perfect Nash equilibria of a polymatrix game is a finite union of convex polytopes. In addition, we introduce a linear programming formulation to identify perfect equilibria of polymatrix games. These results are illustrated on two small game applications. Computational experiments on randomly generated polymatrix games of varying size and density are provided.
    Game Theory, vol. 2014, 09/2014.
  • ABSTRACT: We consider the problem of approximating the minmax value of a multiplayer game in strategic form. We argue that in 3-player games with 0-1 payoffs, approximating the minmax value within an additive constant smaller than ξ/2, where ξ = (3 − √5)/2 ≈ 0.382, is not possible by a polynomial time algorithm. This is based on assuming hardness of a version of the so-called planted clique problem in Erdős–Rényi random graphs, namely that of detecting a planted clique. Our results are stated as reductions from a promise graph problem to the problem of approximating the minmax value, and we use the detection problem for planted cliques to argue for its hardness. We present two reductions: a randomized many-one reduction and a deterministic Turing reduction. The latter, which may be seen as a derandomization of the former, may be used to argue for hardness of approximating the minmax value based on a hardness assumption about deterministic algorithms. Our technique for derandomization is general enough to also apply to related work about equilibria.
  • ABSTRACT: We consider two closely related problems: planted clustering and submatrix localization. The planted clustering model assumes that a graph is generated from some unknown clusters by randomly placing edges between nodes according to their cluster memberships; the task is to recover the clusters given the graph. Special cases include the classical planted clique, planted densest subgraph, planted partition and planted coloring problems. In the submatrix localization problem, also known as bi-clustering, the goal is to locate hidden submatrices with elevated means inside a large random matrix. Of particular interest is the setting where the number of clusters/submatrices is allowed to grow unbounded with the problem size. We consider both the statistical and computational aspects of these two problems, and prove the following. The space of the model parameters can be partitioned into four disjoint regions corresponding to decreasing statistical and computational complexities: (1) the "impossible" regime, where all algorithms fail; (2) the "hard" regime, where the exponential-time Maximum Likelihood Estimator (MLE) succeeds; (3) the "easy" regime, where the polynomial-time convexified MLE succeeds; (4) the "simple" regime, where a simple counting/thresholding procedure succeeds. Moreover, we show that each of these algorithms provably fails in the previous harder regimes. Our theorems establish the first minimax recovery results for the two problems with unbounded numbers of clusters/submatrices, and provide the best known guarantees achievable by polynomial-time algorithms. These results demonstrate the tradeoffs between statistical and computational considerations.
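The undominance property in the polymatrix-game abstract above concerns weakly dominated strategies. As a minimal illustration (hypothetical helper, not from the paper), the check below tests whether a row strategy is weakly dominated by another pure strategy; full weak dominance also allows mixed dominators, which requires a linear program.

```python
def weakly_dominated_by_pure(payoff, i):
    """Return True if row strategy i is weakly dominated by some other PURE
    row strategy j: payoff[j][c] >= payoff[i][c] for every column c,
    with strict inequality for at least one column."""
    rows, cols = len(payoff), len(payoff[0])
    for j in range(rows):
        if j == i:
            continue
        if all(payoff[j][c] >= payoff[i][c] for c in range(cols)) and \
           any(payoff[j][c] > payoff[i][c] for c in range(cols)):
            return True
    return False
```

For the payoff matrix [[1, 1], [1, 0]], row 1 is weakly dominated by row 0 (equal in the first column, worse in the second), while row 0 is undominated.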
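For context on the minmax abstract above: with two players the minmax value is computable in polynomial time via linear programming, which is exactly what fails to generalize to three players there. A minimal grid-search sketch (hypothetical helper, assuming the row player has just two pure strategies) illustrates the quantity in question.

```python
def minmax_value(A, grid=1000):
    """Approximate max_p min_j (p*A[0][j] + (1-p)*A[1][j]), the minmax
    value of a 2 x m zero-sum game, by grid search over the row mix p."""
    best = float("-inf")
    for t in range(grid + 1):
        p = t / grid
        worst = min(p * A[0][j] + (1 - p) * A[1][j] for j in range(len(A[0])))
        best = max(best, worst)
    return best
```

On matching pennies, [[1, 0], [0, 1]], the guaranteed value is 1/2, attained at the uniform mix p = 1/2.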
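The "simple" regime in the clustering abstract above refers to counting/thresholding procedures. A sketch of one such procedure for submatrix localization (hypothetical helper names, Gaussian noise assumed): rank rows and columns by their sums and keep the top k of each, which succeeds when the planted mean shift is large enough.

```python
import random

def plant_submatrix(n, k, mu, seed=1):
    """n x n standard Gaussian matrix with a k x k submatrix shifted by mu."""
    rng = random.Random(seed)
    M = [[rng.gauss(0, 1) for _ in range(n)] for _ in range(n)]
    rows = set(rng.sample(range(n), k))
    cols = set(rng.sample(range(n), k))
    for i in rows:
        for j in cols:
            M[i][j] += mu
    return M, rows, cols

def threshold_rows_cols(M, k):
    """'Simple' regime sketch: a planted row's sum is shifted by ~k*mu while
    noise row sums have std ~sqrt(n), so top-k sums locate the submatrix."""
    n = len(M)
    by_row = sorted(range(n), key=lambda i: sum(M[i]), reverse=True)
    by_col = sorted(range(n), key=lambda j: sum(M[i][j] for i in range(n)),
                    reverse=True)
    return set(by_row[:k]), set(by_col[:k])
```

With a strong shift (e.g. mu = 3) this recovers nearly all planted rows and columns; in the "easy" and "hard" regimes of the abstract this statistic is no longer sufficient.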

