Can Relaxation of Beliefs Rationalize the Winner's Curse?: An Experimental Study

Econometrica (Impact Factor: 3.5). 01/2010; 78(4):1435-1452. DOI: 10.2307/40928444
Source: RePEc

ABSTRACT We use a second-price common-value auction, called the maximal game, to experimentally study whether the winner's curse (WC) can be explained by models that retain best-response behavior but allow for inconsistent beliefs. We compare behavior in a regular version of the maximal game, where the WC can be explained by inconsistent beliefs, to behavior in versions where such explanations are less plausible. We find little evidence of differences in behavior. Overall, our study casts serious doubt on theories that posit the WC is driven by beliefs. Copyright 2010 The Econometric Society.
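The winner's curse in the maximal game can be illustrated with a small simulation. This is a minimal sketch under illustrative assumptions, not the experiment's actual parameters: two bidders, signals i.i.d. uniform on [0, 1], a common value equal to the highest signal (as the game's name suggests), and second-price payment rules.

```python
import random

def avg_winner_profit(markup, n_auctions=100_000, seed=0):
    """Average winner profit when both bidders bid signal + markup.

    Illustrative parameters (not the experiment's): two bidders,
    signals i.i.d. uniform on [0, 1], common value = highest signal,
    second-price payment rule.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_auctions):
        s1, s2 = rng.random(), rng.random()
        value = max(s1, s2)            # maximal game: value is the highest signal
        price = min(s1, s2) + markup   # second price = the losing bid
        total += value - price         # winner's realized profit
    return total / n_auctions
```

Under these assumptions, bidding one's signal (markup 0) earns a positive average profit, while a large common markup drives the winner's average profit negative, which is the winner's curse the experiment is probing.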

Available from: Muriel Niederle, Sep 09, 2014
  • Source
    • "This result depends crucially on choosing a random level zero instead of a truthful one, under which the Level-k model is observationally equivalent to a Nash equilibrium. However, Ivanov et al. (2009a) use a clever design in which players in second-price common-value auctions bid against their own earlier-period strategies to demonstrate that overbidding ('the winner's curse') cannot be explained by subjects' misguided beliefs about their opponents as in the Level-k framework. "
    ABSTRACT: We examine whether the 'Level-k' model of strategic behavior generates reliable cross-game testable predictions at the level of the individual player. Subjects' observed levels are fairly consistent within one family of similar games, but within another family of games there is virtually no cross-game correlation. Moreover, the relative ranking of subjects' levels is not consistent within the second family of games. Direct measures of strategic intelligence are generally not correlated with observed levels of reasoning in either family. Our results suggest that the Level-k model is just one of many heuristics that may be triggered in some strategic settings, but not in others.
  • Source
    ABSTRACT: I develop a structural econometric framework for first-price auctions by generalizing the assumption of Bayesian Nash Equilibrium within the context of a level-k behavioral model, which nests equilibrium by allowing bidders to hold heterogeneous beliefs about opponents' bidding strategies. While behavioral heterogeneity causes identification to fail under benchmark equilibrium conditions, independence and exclusion restrictions recover identification of the joint distribution over valuations and bidder-types in heterogeneous populations. Establishing consistent maximum likelihood sieve estimation with an upper semicontinuous population log-likelihood function leads to a natural semi-nonparametric maximum likelihood estimator based on Legendre polynomials. The level-k model introduces a mixture structure to the estimation problem, requiring a generalized expectation maximization algorithm. Presenting evidence from a pilot study of vintage computer auctions, I find a high level of bidder sophistication in the field. To further apply the econometric framework, I characterize expected revenues in first-price auctions with level-k bidders, establishing a partial identification result for expected revenues in unidentified models. An empirical analysis of USFS timber auctions finds that a misspecified equilibrium optimal reserve price could reduce expected revenues up to 30% relative to an unbinding reserve price.
    SSRN Electronic Journal 11/2009; DOI:10.2139/ssrn.1337843
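The level-k belief structure that this estimator builds on can be sketched numerically. The following is a hedged illustration under assumptions not taken from the paper: two bidders with private values i.i.d. uniform on [0, 1], and a level-0 opponent who draws a value and bids uniformly between 0 and that value. A level-1 bidder then best responds by grid search against a Monte Carlo sample of the level-0 bid distribution.

```python
import bisect
import random

def level1_bid(value, n_sims=50_000, seed=1):
    """Level-1 best response in a two-bidder first-price auction.

    Illustrative model (not the paper's specification): private values
    i.i.d. uniform on [0, 1]; the level-0 opponent draws a value and
    bids uniformly between 0 and that value.
    """
    rng = random.Random(seed)
    # Monte Carlo sample of the level-0 opponent's bids, sorted for bisect.
    opp_bids = sorted(rng.random() * rng.random() for _ in range(n_sims))
    best_bid, best_payoff = 0.0, 0.0
    for i in range(101):
        bid = value * i / 100                           # never bid above own value
        p_win = bisect.bisect_left(opp_bids, bid) / n_sims
        payoff = (value - bid) * p_win                  # expected surplus
        if payoff > best_payoff:
            best_bid, best_payoff = bid, payoff
    return best_bid
```

Because the level-0 opponent often bids low, the level-1 best response shades well below value; a level-2 bidder would repeat the same computation against a level-1 opponent, which is the recursive structure a level-k estimator exploits.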
  • Source
    ABSTRACT: This paper introduces a novel bootstrap procedure to perform inference in a wide class of partially identified econometric models. We consider econometric models defined by finitely many weak moment inequalities, which encompass many applications of economic interest. The objective of our inferential procedure is to cover the identified set with a prespecified probability. We compare our bootstrap procedure, a competing asymptotic approximation, and subsampling procedures in terms of the rate at which they achieve the desired coverage level, also known as the error in the coverage probability. Under certain conditions, we show that our bootstrap procedure and the asymptotic approximation have the same order of error in the coverage probability, which is smaller than that obtained by using subsampling. This implies that inference based on our bootstrap and asymptotic approximation should eventually be more precise than inference based on subsampling. A Monte Carlo study confirms this finding in a small sample simulation. Copyright 2010 The Econometric Society.
    Econometrica 03/2010; 78(2):735-753. DOI:10.3982/ECTA8056
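The flavor of such a procedure can be sketched in the simplest case. This is a hypothetical example, not the paper's estimator: a scalar parameter θ partially identified by two moment inequalities, E[lower] ≤ θ ≤ E[upper]. The confidence set collects every θ whose scaled violation statistic falls below a bootstrap critical value computed under the least-favorable configuration in which both inequalities bind.

```python
import random

def moment_ineq_conf_set(lower, upper, grid, alpha=0.05, n_boot=500, seed=0):
    """Bootstrap confidence set for theta with E[lower] <= theta <= E[upper].

    Minimal sketch, not the paper's exact procedure: the statistic sums
    squared violations of the two sample inequalities; the critical value
    is the (1 - alpha) bootstrap quantile under the least-favorable
    configuration in which both inequalities bind.
    """
    rng = random.Random(seed)
    n = len(lower)
    lbar = sum(lower) / n
    ubar = sum(upper) / n

    def stat(l_mean, u_mean, theta):
        # Scaled squared violations of lbar <= theta and theta <= ubar.
        return n * (max(0.0, l_mean - theta) ** 2 + max(0.0, theta - u_mean) ** 2)

    # Bootstrap the fluctuation of the recentred sample moments.
    draws = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        lb = sum(lower[i] for i in idx) / n
        ub = sum(upper[i] for i in idx) / n
        draws.append(n * (max(0.0, lb - lbar) ** 2 + max(0.0, ubar - ub) ** 2))
    draws.sort()
    crit = draws[min(n_boot - 1, int((1 - alpha) * n_boot))]

    return [theta for theta in grid if stat(lbar, ubar, theta) <= crit]

# Hypothetical data: lower moments centred at 0, upper moments centred at 1,
# so the identified set is roughly [0, 1].
_rng = random.Random(42)
low = [_rng.gauss(0.0, 1.0) for _ in range(200)]
up = [_rng.gauss(1.0, 1.0) for _ in range(200)]
grid = [i / 50 - 2 for i in range(251)]        # candidate thetas on [-2, 3]
conf_set = moment_ineq_conf_set(low, up, grid)
```

The returned set covers the interior of the identified interval plus a sampling-noise margin at each end; sharper (non-least-favorable) critical values are exactly the refinement such papers study.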