Using Prediction Markets for Complex Controversial Problems:
An Application to the ‘Climate Bet’
Andreas Graefe
Institute for Technology Assessment and Systems Analysis
Karlsruhe Institute of Technology, Germany
Kesten C. Green
Business and Economic Forecasting Unit, Monash University, Melbourne, Australia.
c/o PO Box 10800, Wellington 6143, New Zealand
J. Scott Armstrong
The Wharton School, University of Pennsylvania,
Philadelphia, PA, USA
March 6, 2009
Abstract. Who would win the Climate Bet between Al Gore and Scott Armstrong? Al Gore
promotes the view that the world faces great danger from manmade global warming. Scott
Armstrong holds that scientific forecasting provides no basis for such a fear, and that global
temperatures are likely to change little, in either direction, over policy-relevant horizons. We
propose the use of prediction markets to examine public opinion for solving complex
controversial problems. Initially, we launched a play-money prediction market at Hubdub. Early results showed that the market predicts that Scott Armstrong would win
the bet. This market prediction conforms to winning probabilities derived from an analysis of
historical data. Such information can be valuable as it can aid the democratic process. It
provides information on the public’s perception of global warming that is different from
information revealed by traditional surveys and media commentary.
For various types of problems, prediction markets have been shown to be accurate compared
to traditional group meetings and other alternative approaches. However, most of the
problems for which prediction markets have been used were simple: they only required
aggregating information or ‘facts’. Examples include forecasting elections, sports events, or
sales figures.
In aggregating dispersed information from a – virtually unlimited – number of people,
prediction markets have the potential to provide social utility. They can contribute to
participatory regulation by incorporating the views of a broad public on problems that call for
policy interventions. Such problems are complex because policy decisions not only require
aggregating information or facts, they also involve people’s values, attitudes, emotions,
expectations, fears, and commitments. For such problems, achieving consensus becomes difficult.
We analyze the social utility of prediction markets for complex problems using, as an
example, the issue of climate policy addressed by the ‘Climate Bet’.
Prediction Markets
Prediction markets were already popular in the late 19th century for forecasting election
outcomes (Rhode & Strumpf 2004). The launch of the Iowa Electronic Markets (IEM) in 1988,
accompanied by the rise of the internet, gave them a new lease on life. Currently, prediction
markets are gaining attention in various fields of forecasting. In such markets, participants
reveal their estimates by trading stocks whose prices reflect the aggregated group estimate on
a particular topic. Based on their individual performance, traders can win money. If a trader
thinks the current group estimate is too low, he will buy stocks; if too high, he will sell. Thus,
through the prospect of gaining money, traders have an incentive to become active in the
group process whenever they believe the group estimate is inaccurate.
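The incentive described above can be illustrated with a toy market maker. The use of a logarithmic market scoring rule (LMSR) to set prices is our own assumption for illustration; the paper does not describe how any particular platform prices its contracts:

```python
import math

class BinaryMarket:
    """Toy two-outcome prediction market.

    Prices are set with a logarithmic market scoring rule (LMSR)
    market maker, an illustrative assumption; no platform's actual
    pricing mechanism is described in the text."""

    def __init__(self, liquidity=100.0):
        self.b = liquidity   # higher b means prices move more slowly per trade
        self.q_yes = 0.0     # outstanding YES shares
        self.q_no = 0.0      # outstanding NO shares

    def price_yes(self):
        """Current YES price, interpretable as the group's P(event)."""
        e_yes = math.exp(self.q_yes / self.b)
        e_no = math.exp(self.q_no / self.b)
        return e_yes / (e_yes + e_no)

    def trade_to_belief(self, p):
        """A trader who believes P(event) = p buys YES while the market
        price is below p and sells while it is above, stopping at the
        point where no further expected gain remains (price equals p)."""
        self.q_yes = self.q_no + self.b * math.log(p / (1.0 - p))

market = BinaryMarket()
print(round(market.price_yes(), 2))  # 0.5: both outcomes start even
market.trade_to_belief(0.72)         # a trader confident one side wins
print(round(market.price_yes(), 2))  # 0.72
```

Under this rule the price always equals the last trader's stated belief; with many traders of bounded budgets, the price settles at an aggregate of their beliefs.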
Prediction markets are a structured forecasting approach that can involve – and aggregate
information from – a large number of people. Thus, much of their forecasting
performance is due to combining forecasts from different sources of information. Prediction
markets appear to be particularly useful when there is a continuous flow of new information.
For more information about how prediction markets work as well as recent research findings
see the Special Interest Group on Prediction Markets.
Since their emergence, numerous studies have been published that demonstrate high
forecasting accuracy of prediction markets in the field and in the laboratory. For example, in
analyzing 964 polls for the five presidential elections from 1988 to 2004, Berg et al. (2008a)
found that the market forecasts were closer than the polls to the actual election results 74% of the
time. This performance was replicated for the 2008 election (Berg et al. 2008b), although
Erikson and Wlezien (2008) have shown that this advantage disappeared when comparing the
market forecasts to damped polls.
Spann and Skiera (2009) analyzed the accuracy of prediction markets for sports forecasting.
In comparing results for predicting the outcomes of 678 German soccer league games, they
found that prediction markets performed as well as betting odds and were more accurate than
predictions from single experts (tipsters). In laboratory experiments, Graefe and Armstrong
(2008) found prediction markets were equal to meetings, nominal groups and Delphi on a
quantitative judgment task. In addition, they found prediction markets to be particularly
valuable in situations where multiple participants have valid insight into the issue in question.
While the evidence available to date ascribes high forecasting performance to prediction
markets, all studies refer to rather simple problems that require only aggregation of
information or facts. Little is known about prediction markets’ performance on more
complicated tasks that also involve people’s values, attitudes, emotions, expectations, fears,
commitments, etc. We use the current controversy about global warming to study the potential
of prediction markets for such problems.
The Climate Bet
Al Gore has claimed that there are scientific forecasts that the earth will become warmer and
that this will occur rapidly. Yet, searches of his book and the Internet did not reveal any
quantitative forecasts or any methodology he relies on. As a result, in June 2007, Scott
Armstrong offered Al Gore a bet of $10,000 on who could best forecast annual mean
temperatures over the next ten years. Al Gore declined the bet, citing the reason that he does
not bet money.
The general objective of the challenge was to promote the proper use of science in
formulating public policy. This involves such things as full disclosure of forecasting methods
and data, and the proper testing of alternative methods. A specific objective was to develop
useful methods to forecast global temperatures. In particular, it was hoped that other
competitors would join to show the value of their forecasting methods.
Al Gore was invited to select any currently available fully disclosed climate model to produce
the forecasts (without human adjustments to the model’s forecasts). Scott Armstrong’s
forecasts would have been based on the naïve (no-change) model. The naïve model is a
commonly used benchmark in assessing forecasting methods. It is a strong competitor when
uncertainty is high or when improper forecasting methods have been used.
The criterion for accuracy would have been the average absolute forecast error at each weather
station. Averages across stations would have been calculated for each forecast horizon (e.g.,
for a six-year ahead forecast). Finally, simple unweighted averages would have been made of
the forecast errors across all forecast horizons. For example, the average across the two-year
ahead forecast errors would have received the same weight as that across the nine-year-ahead
forecast errors. This unweighted average would have been used as the criterion for
determining the winner.
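The scoring criterion described above can be written out directly. The station names and error values below are invented for illustration; only the two-step averaging follows the text:

```python
def bet_score(errors_by_station):
    """Score a forecaster under the bet's criterion.

    errors_by_station maps a station name to a list of absolute
    forecast errors, one per horizon (year 1 ... year n).
    Step 1: average the absolute errors across stations for each
    forecast horizon.  Step 2: take the unweighted mean of those
    per-horizon averages, so a two-year-ahead error counts the
    same as a nine-year-ahead error."""
    stations = list(errors_by_station.values())
    horizons = len(stations[0])
    per_horizon = [
        sum(errs[h] for errs in stations) / len(stations)
        for h in range(horizons)
    ]
    return sum(per_horizon) / horizons

# Hypothetical two-station, three-horizon example:
errors = {
    "station_A": [0.10, 0.20, 0.30],
    "station_B": [0.20, 0.40, 0.60],
}
print(round(bet_score(errors), 2))  # (0.15 + 0.30 + 0.45) / 3 = 0.3
```

The forecaster with the smaller score wins; with equal numbers of errors per station the two averaging steps reduce to a grand mean, but the two-step form makes the equal weighting of horizons explicit.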
The aim was to start the bet at the beginning of 2008. The full story can be reviewed online.
Who Would Have Won? – Solving the Climate Bet with Prediction Markets
The question about who would win this bet is important as it can promote the proper use of
science for informing public policy decisions. Thus, it will be assumed that Al Gore and Scott
Armstrong had made a gentleman’s bet involving no money and that the ten years of the bet
had started as scheduled on January 1, 2008.
To answer this question, we launched a prediction market at Hubdub on January 28, 2009. We
chose Hubdub because it is an independent and neutral prediction market platform that is open
for everyone to participate in. It offers numerous bets on different topics, some of which are
also related to global warming, and all questions are created by users. Because it uses
play-money instead of real-money, it does not face concerns associated with gambling.
Furthermore, the use of play-money should help to attract a larger number of participants, as
risk-averse people might not be willing to invest real-money.
The terms of the original bet were altered for simplicity and transparency. Thus, the criterion
was based on the UAH satellite measures on global mean temperature. An alternative would
be to use the Hadley series. In addition, the bet was simplified to use only a single starting
point; that is, January 1, 2008.
The contract specifications of the bet are shown in Textbox 1. A screenshot of the prediction
market page is provided in Appendix 1.
The page explains the contract specifications and shows the current market forecast as well as
comments provided by users. In addition, an automatically generated list of news related to
the topic is shown in the right column of the page, and participants are welcome to provide comments.
Textbox 1: The Climate Bet Prediction Market at Hubdub – Contract Specifications
Who would win the "Climate Bet" –
Al Gore or Wharton Professor Scott Armstrong?
Background: In June 2007, Wharton Professor Scott Armstrong offered Al Gore a bet of
$10,000 on who could best predict global mean temperature over the next ten years. Al
Gore declined the bet, citing the reason that he does not bet money (the full story can be
reviewed at
Now, assume that Armstrong and Gore had made a gentleman’s bet (no money) and that
the ten years of the bet started on January 1, 2008.
• Armstrong’s forecast was that there would be no change in global mean temperature
over the next ten years.
• Gore did not specify a method or a forecast. Nor did searches of his book or the
Internet reveal any quantitative forecasts or any methodology he relies on. He did,
however, imply that the global mean temperature would increase at a rapid rate,
presumably at least as great as the IPCC’s 1992 projection of 0.03°C per year. Thus, the
IPCC’s 1992 projection is to be taken as Gore’s forecast.
Settlement date: January 1, 2018
Settlement details: The criterion will be the mean absolute errors of Armstrong’s and
Gore’s annual forecasts for the ten year period, with the errors to be measured against
the UAH global temperature record. The win goes to the
smallest mean absolute error.
Figure 1 shows how market participants predicted the outcome of the bet in mid-February.
Scott Armstrong was expected to win with a probability of 72% (vs. Al Gore 28%). Now, if
one thinks Gore’s chances are higher, one would buy the contract – and the price would go up.
If one thinks they are lower, one would sell – and the price would go down. Thus, through the
process of trading, one reveals information to the market in the expectation of winning (play)
money. In other words, one has an incentive to become active whenever one thinks the current
forecast is wrong.
Figure 1: Forecast of the Climate Bet Market at Hubdub
After signing up at Hubdub, each participant receives an initial amount of H$1000
Hubdub play-money dollars, which can be used to trade on every question available at the
market platform. Participants win if they are right and lose what they staked if they are wrong.
The more Hubdub dollars they have, the higher their ranking compared to other participants.
A historical analysis of the Climate Bet
Green et al. (2009) conducted a benchmark analysis to compare the performance of the naïve
no-change forecasting model to the IPCC’s 1992 projection (i.e. Al Gore’s assumed forecast)
of an increase in global mean temperature of .03°C per year.
Forecasts from 1992 through 2008
Starting with both the 1991 Hadley and UAH temperatures, Green et al. (2009) created IPCC
projection series from 1992 through 2008 and compared the errors of each model to those
derived from the no-change benchmark model. They found that, when testing the benchmark
model against Hadley measures, the IPCC errors were similar. When testing against the UAH
measures, the IPCC errors were nearly twice as large.
Forecasts from 1851 through 1975
Using only the Hadley measures, Green et al. (2009) performed a similar analysis for the
years from 1851 through 1975. They created a single forecast series by adding the 1992 IPCC
projected warming rate of .03°C to the previous year’s figure, starting with the 1850 Hadley
figure. The benchmark was simply the 1850 Hadley figure for all years. The errors from
using the projected warming rate were more than ten times the size of the errors from the benchmark.
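The construction used in the two analyses above can be sketched as follows. The anomaly figures below are invented, not the actual Hadley or UAH series; only the method (a fixed 0.03°C-per-year trend from a base-year figure versus a constant no-change benchmark) follows the text:

```python
def compare_errors(actuals, base, trend=0.03):
    """Compare two forecast series against a list of actual anomalies.

    The 'IPCC-style' forecast starts from the base-year figure and adds
    a fixed warming trend each year; the naive benchmark repeats the
    base-year figure for every year.  Returns the mean absolute error
    (MAE) of each forecast series."""
    trend_errors, naive_errors = [], []
    for i, actual in enumerate(actuals, start=1):
        trend_forecast = base + trend * i       # base + 0.03 per elapsed year
        trend_errors.append(abs(trend_forecast - actual))
        naive_errors.append(abs(base - actual)) # no-change forecast
    n = len(actuals)
    return sum(trend_errors) / n, sum(naive_errors) / n

# Invented anomaly series for illustration, with a base-year figure of 0.0:
actuals = [0.02, -0.05, 0.01, 0.04, -0.02]
trend_mae, naive_mae = compare_errors(actuals, base=0.0)
print(trend_mae > naive_mae)  # True: the trend overshoots a flat, noisy series
```

When the underlying series drifts little, the cumulative trend compounds its error year after year, which is why the benchmark advantage grows with the length of the test period.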
Calculation of probabilities from historical data
We used the Hadley data from 1850 to 2008 to calculate the probability that Armstrong will
win the bet against the Gore/IPCC projection of .03°C-per-year increase. From the 158 one-
year-ahead forecasts, we calculated that his probability of winning any year is close to chance
at 0.54. The bet, however, is for the lowest total error over a ten-year period. On that basis,
Armstrong’s probability of winning the bet is 0.68.
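A sketch of the kind of calculation this implies, using an invented series; the paper does not publish its exact procedure, so the rolling-window construction here is an assumption:

```python
def win_fractions(series, trend=0.03, horizon=10):
    """Fraction of historical cases in which the no-change forecast beats
    a fixed-trend forecast: first for one-year-ahead forecasts, then for
    cumulative errors over rolling windows of the given horizon."""
    # One-year-ahead: the no-change forecast is last year's figure;
    # the trend forecast is last year's figure plus the fixed trend.
    yearly_wins = sum(
        abs(series[t] - series[t - 1]) < abs(series[t - 1] + trend - series[t])
        for t in range(1, len(series))
    )
    yearly = yearly_wins / (len(series) - 1)

    # Rolling windows: both forecasts are anchored at the window start,
    # and total absolute error over the whole window decides the winner.
    window_wins, windows = 0, 0
    for start in range(len(series) - horizon):
        base = series[start]
        naive = sum(abs(base - series[start + h]) for h in range(1, horizon + 1))
        ipcc = sum(abs(base + trend * h - series[start + h]) for h in range(1, horizon + 1))
        windows += 1
        window_wins += naive < ipcc
    return yearly, window_wins / windows

# Invented flat-with-noise anomaly series for illustration:
series = [0.0, 0.1, -0.1, 0.05, -0.05, 0.0, 0.1, -0.1, 0.05, -0.05, 0.0, 0.1]
yearly, ten_year = win_fractions(series, horizon=10)
```

The point of separating the two fractions is the one made in the text: a near-chance edge in single years can translate into a much larger edge once errors are accumulated over a ten-year bet.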
Early Results
Results of the first year of the climate bet
Based on the UAH data, the actual change for 2008 was 0.05°C. Thus, Armstrong’s error was
0.05°C, whereas Gore’s was 0.02°C; as of the end of year 1, Gore would be ahead by 0.03°C.
In this case, the choice of the criterion was important. Based on the Hadley measures, the
actual change for 2008 was -0.08°C. Thus, in absolute terms Armstrong missed by 0.08°C,
while Gore missed by 0.11°C. Advantage Armstrong by 0.03°C.
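The first-year arithmetic can be checked directly; both forecasts are one-year-ahead changes relative to 2007:

```python
def first_year_errors(actual_change, gore_trend=0.03):
    """Absolute one-year errors under each forecast: Armstrong predicted
    no change (0°C), while Gore's assumed forecast is +0.03°C per year."""
    armstrong = abs(actual_change - 0.0)
    gore = abs(actual_change - gore_trend)
    return armstrong, gore

# UAH: actual 2008 change of +0.05°C
a, g = first_year_errors(0.05)
print(round(a, 2), round(g, 2))   # 0.05 0.02 (Gore ahead by 0.03)

# Hadley: actual 2008 change of -0.08°C
a, g = first_year_errors(-0.08)
print(round(a, 2), round(g, 2))   # 0.08 0.11 (Armstrong ahead by 0.03)
```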
Forecast from Climate Bet Prediction Market
At the launch of the market, the chances for both Al Gore and Scott Armstrong to win the bet
were set to 50%. With the first trade, Scott Armstrong was predicted as the winner. On March
6, based on 101 predictions, the market predicted a 54% chance for Armstrong to win.
This matches Armstrong’s probability of winning any year, which was derived from the
historical data.
Participants slightly favor Armstrong to win the bet, a result that conforms to the historical
benchmark analysis. The results suggest that involving prediction markets for solving
questions like the Climate Bet can be valuable for obtaining the public’s perception on such
issues. They can incorporate new information on a continuous basis and track public opinion
over time. In addition, they can aid the democratic process by revealing information that
differs from information obtained by traditional polling institutions.
We searched the iPoll Databank of the Roper Center for Public Opinion Research for
(“global warming” OR “climate change”) to obtain information on how the public perceives
global warming. Table 1 lists the most recent polls since November 2008. All these polls are
phrased in a way that takes global warming as a given issue: they ask people how important it
is to them or how the government should address it. None of these polls asks the public
whether global warming actually is an issue.
Table 1: Recent polls on climate change / global warming

• ABC News/Washington Post: “…what kind of priority you think (Barack) Obama and the Congress should give it […] Global warming” (January 13-16, 2009)

• Pew Research Center for the People & the Press: “Should […] dealing with global warming be a top priority, important but lower priority, not too important, or should it not be done?” (January 7-11, 2009)

• ABC News/Washington Post: “…do you think (Barack) Obama should or should not…implement policies to try to reduce global warming?” (December 11-14, 2008)

• ABC News/Washington Post: “…do you think (Barack) Obama will or will not be able to...implement policies to try to reduce global warming?” (December 11-14, 2008)

• NBC News/Wall Street Journal: “…tell me which one or two of the following promises, if any, do you think are most critical to you that Barack Obama follow through on: […] place a cap on carbon emissions to reduce global warming pollution.” (December 4-8, 2008)

• Princeton Survey Research Associates: “How much progress do you think will be made in…improving the environment and reducing global warming?” (December 3-4, 2008)

• BBC World Service: “Please tell me whether you consider each of the following possible US actions should be top priority, important but not a top priority or a low priority for the new president, or should the US not do it at all? [...] Addressing climate change” (November 24-December 24, 2008)
Prediction markets like the one on the Climate Bet can address this deficit. They can focus on
specific questions and – by providing performance-based incentives – motivate people to
participate, reflect on, and actively search for information on the given problem. Nonetheless,
concerns might be raised on using prediction markets for suchlike problems.
Who are the participants?
Since the Hubdub market is open for everyone, it is unclear who is participating. Most likely,
participants from either camp – sceptics as well as proponents of global warming – might
populate the market. However, the possibility that one camp might dominate cannot be
excluded, particularly at this early stage where the market involves only a small number of
predictions. The market forecasts might gain reliability with an increasing number of participants.
However, people might have concerns with involving unskilled amateurs in the
decision-making process by using prediction markets. In general, these concerns appear to be unfounded:
• Experts do not have value in forecasting change, particularly in situations involving
high uncertainty. This has been shown for forecasting future political and economic
events (Tetlock 2006) as well as conflict situations (Green & Armstrong 2007).
Although expertise beyond a minimum level was found to lead to more accurate
forecasts, additional expertise did not improve accuracy. In fact, there was some
evidence that accuracy might even decrease with increasing expertise, because
experts are more resistant to new information.
• In executing uninformed trades, so-called ‘noise traders’ provide additional liquidity.
According to rational models of liquidity provision, this enhances incentives for other
participants to become involved and informed – and to reveal their information
through trading. As a result, market forecasts might become even more accurate.
This is supported by earlier research, which has shown that prediction markets
provide accurate forecasts in spite of biased traders (Forsythe et al. 1999) or even
intentional attempts to manipulate market prices (see the next section).
Can the market be manipulated?
The question about who is participating inevitably raises the question of manipulation.
Manipulation is a commonly raised concern when using prediction markets and has
been cited as one of the reasons for the dismissal of the Policy Analysis Market in 2003 (see
Textbox 2). However, most empirical studies to date showed that attacks on result accuracy
have not been successful historically (Rhode & Strumpf 2004), in the laboratory (Hanson et
al. 2006), or in the field (Camerer 1998). Only one study reports successful manipulation of
prices at the IEM (Hansen et al. 2004). In reviewing studies of price manipulation, Wolfers
and Zitzewitz (2004) concluded that, besides a short transition phase, none of the known
attacks had a noticeable influence on the prices.
Textbox 2: The DARPA policy analysis market (PAM)
From 2001 to 2003, the Defense Advanced Research Project Agency (DARPA)
of the U.S. government sponsored the FutureMAP project, also known as the
Policy Analysis Market (PAM). The original goal of this project was to improve
existing intelligence institutions by predicting military and political instability
around the world, how the U.S. would affect such instabilities, and vice versa.
Later, the focus was narrowed to predicting five parameters for each of eight
nations in the Middle East: military activity, political instability, economic growth,
U.S. military activity, and U.S. financial involvement. In addition, traders were to
predict further parameters such as U.S. GDP growth, world trade, and total U.S.
military casualties.
On July 28, 2003, shortly before the scheduled start of PAM on September 1,
two Democratic Senators held a press conference accusing the U.S.
Department of Defense of planning a “terror market” for people to bet on terrorist
events. Instantly, the topic caught the interest of the media. During the next two
days, 128 media articles were published, most of them casting a damning light
on PAM. Not surprisingly, PAM was cancelled immediately.
Later, Hanson (2007), who was involved in the project, conducted a statistical
news analysis on more than 600 media articles that mentioned PAM. He found
that more informed articles favored PAM. Yet, the political decision to dismiss
PAM was made and it is unlikely that it will be reversed anytime soon. For a
review of the origin and development of the project, see Hanson (2007).
Can play-money work?
People might have concerns that play-money markets are less accurate than real-money markets. Since participants do
not stake real money, they might not be willing to reveal their true information. Thus far, two
studies have analyzed the relative performance of play-money and real-money markets. For sports
events, Servan-Schreiber et al. (2004) could not identify differences in accuracy, while
Rosenbloom and Notz (2006) found real-money markets to be more accurate for non-sports
events. Thus, the answer to this question remains open.
Nonetheless, to address the current problem of the Climate Bet, play-money markets appear to
be feasible. In using play-money, one can attract a large number of participants that might not
be able – or willing – to invest real-money. Furthermore, people might not be willing to invest
real-money in a long-term bet but rather look for short-term and more lucrative investments.
In addition, using real-money for complicated problems like global warming might even be
counter-productive. In particular, financially well-equipped groups that have a strong interest
in the market outcome might be able to manipulate the results.
Isn’t the time frame too long?
Prediction markets are usually used to forecast events in the near future in order to maintain
participants’ motivation. Yet a trader’s interest does not necessarily decrease because of long
time horizons. The play-money market Foresight Exchange (FX)
has been active since 1994
and has managed to establish a lively community of traders involved in predicting events
decades in the future. Also, there is some evidence that prediction markets might perform well
for longer forecasting horizons. For 161 contracts that referred to ‘yes’ or ‘no’ questions,
Pennock et al. (2001) recorded the FX forecasts thirty days before the respective outcome was
known. They found that the FX forecasts strongly correlated with outcome frequencies.
Nonetheless, further empirical studies are necessary to assess the forecasting performance of
markets for longer time horizons.
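The comparison Pennock et al. (2001) report is a standard calibration check: group contracts by price and compare the average price in each group with the fraction that actually occurred. A minimal sketch with invented prices and outcomes:

```python
from collections import defaultdict

def calibration(prices, outcomes, n_bins=10):
    """Bin binary contracts by final market price and compare the
    average price in each bin with the observed outcome frequency.
    Well-calibrated prices track outcome frequencies bin by bin."""
    bins = defaultdict(list)
    for p, won in zip(prices, outcomes):
        b = min(int(p * n_bins), n_bins - 1)  # clamp p == 1.0 into the top bin
        bins[b].append((p, won))
    report = {}
    for b, items in sorted(bins.items()):
        avg_price = sum(p for p, _ in items) / len(items)
        freq = sum(w for _, w in items) / len(items)
        report[b] = (round(avg_price, 2), round(freq, 2), len(items))
    return report

# Invented example: a cluster of low-priced and a cluster of high-priced contracts
prices = [0.2, 0.25, 0.22, 0.8, 0.75, 0.78, 0.81, 0.79]
outcomes = [0, 0, 1, 1, 1, 1, 0, 1]
for b, (price, freq, n) in calibration(prices, outcomes).items():
    print(f"bin {b}: avg price {price}, outcome freq {freq}, n={n}")
```

With only 161 contracts, as in the FX study, the bins are coarse; a strong price-frequency correlation across bins is the evidence of calibration referred to in the text.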
Future Work
Future work will aim at addressing some of the concerns raised in the discussion. This
involves marketing the Climate Bet prediction market on various websites and blogs. The
goal is to increase the number of participants in order to obtain more reliable results.
Furthermore, we see opportunities to launch further markets, both play-money and
real-money. Finally, we plan to investigate the influence of providing
additional information to participants. Currently, no information is given in addition to the
contract specifications. It will be interesting to observe how further information (like the
results of the first year of the bet or the simulation of the bet from the period from 1850
through 2008) will influence the market predictions.
Conclusions

We launched a play-money prediction market at Hubdub to address the question of who
would win the Climate Bet between Al Gore and Scott Armstrong. The goal of this project is
to analyze the social utility of prediction markets for solving complex problems. Early results
show that the market predicts that Scott Armstrong would win the bet. Such information can
be valuable as it can aid the democratic process. It provides information on the public’s
perception of global warming that differs from information revealed by traditional surveys.
References

Berg, J., Nelson, F. & Rietz, T. A. (2008a). Prediction Market Accuracy in the Long Run,
International Journal of Forecasting, 24, 285-300.
Berg, J., Nelson, F. D., Neumann, G. R. & Rietz, T. (2008b). Was There Any Surprise About Obama's
Election?, Available at
Camerer, C. (1998). Can Asset Markets be Manipulated? A Field Experiment with Racetrack Betting,
Journal of Political Economy, 106, 457-482.
Erikson, R. S. & Wlezien, C. (2008). Are Political Markets Really Superior to Polls as Election
Predictors?, Public Opinion Quarterly, forthcoming.
Forsythe, R., Rietz, T. A. & Ross, T. W. (1999). Wishes, Expectations and Actions: A Survey on Price
Formation in Election Stock Markets, Journal of Economic Behavior & Organization, 39, 83-
Graefe, A. & Armstrong, J. S. (2008). Comparing Face-to-face Meetings, Nominal Groups, Delphi
and Prediction Markets on an Estimation Task, Working paper. Available at
Green, K. C. & Armstrong, J. S. (2007). The Ombudsman: Value of Expertise for Forecasting
Decisions in Conflicts, Interfaces, 37, 287-299.
Green, K. C., Armstrong, J. S. & Soon, W. (2009). Validity of Climate Change Forecasting for Public
Policy Decision Making, Working Paper. Available at
Hansen, J., Schmidt, C. & Strobel, M. (2004). Manipulation in Political Stock Markets - Preconditions
and Evidence, Applied Economics Letters, 11, 459-463.
Hanson, R. (2007). The Policy Analysis Market - A Thwarted Experiment in the Use of Prediction
Markets for Public Policy, Innovations: Technology, Governance, Globalization (MIT Press),
2, 73-88.
Hanson, R., Oprea, R. & Porter, D. (2006). Information Aggregation and Manipulation in an
Experimental Market, Journal of Economic Behavior & Organization, 60, 449-459.
Pennock, D. M., Giles, C. L. & Nielsen, F. A. (2001). The real power of artificial markets, Science,
291, 987-988.
Rhode, P. W. & Strumpf, K. S. (2004). Historical Presidential Betting Markets, Journal of Economic
Perspectives, 18, 127-141.
Rosenbloom, E. S. & Notz, W. (2006). Statistical Tests of Real-Money versus Play-Money Prediction
Markets, Electronic Markets, 16, 63-69.
Servan-Schreiber, E., Wolfers, J., Pennock, D. M. & Galebach, B. (2004). Prediction Markets: Does
Money Matter?, Electronic Markets, 14, 243 - 251.
Spann, M. & Skiera, B. (2009). Sports forecasting: a comparison of the forecast accuracy of prediction
markets, betting odds and tipsters, Journal of Forecasting, 28, 55-72.
Tetlock, P. E. (2006). Expert Political Judgment: How good is it? How can we know?, Princeton
University Press.
Wolfers, J. & Zitzewitz, E. (2004). Prediction Markets, Journal of Economic Perspectives, 18, 107-
Appendix 1: The Climate Bet Prediction Market at Hubdub – Screenshot
Full-text available
In important conflicts such as wars and labor-management disputes, people typically rely on the judgment of experts to predict the decisions that will be made. We compared the accuracy of 106 forecasts by experts and 169 forecasts by novices about eight real conflicts. The forecasts of experts who used their unaided judgment were little better than those of novices. Moreover, neither group’s forecasts were much more accurate than simply guessing. The forecasts of experienced experts were no more accurate than the forecasts of those with less experience. The experts were nevertheless confident in the accuracy of their forecasts. Speculating that consideration of the relative frequency of decisions across similar conflicts might improve accuracy, we obtained 89 sets of frequencies from novices instructed to assume there were 100 similar situations. Forecasts based on the frequencies were no more accurate than 96 forecasts from novices asked to pick the single most likely decision. We conclude that expert judgment should not be used for predicting decisions that people will make in conflicts. When decision makers ask experts for their opinions, they are likely to overlook other, more useful, approaches.
Full-text available
We conducted laboratory experiments for analyzing the accuracy of three structured approaches (nominal groups, Delphi, and prediction markets) relative to traditional face-to-face meetings (FTF). We recruited 227 participants (11 groups per method) who were required to solve a quantitative judgment task that did not involve distributed knowledge. This task consisted of ten factual questions, which required percentage estimates. While we did not find statistically significant differences in accuracy between the four methods overall, the results differed somewhat at the individual question level. Delphi was as accurate as FTF for eight questions and outperformed FTF for two questions. By comparison, prediction markets did not outperform FTF for any of the questions and were inferior for three questions. The relative performances of nominal groups and FTF were mixed and the differences were small. We also compared the results from the three structured approaches to prior individual estimates and staticized groups. The three structured approaches were more accurate than participants' prior individual estimates. Delphi was also more accurate than staticized groups. Nominal groups and prediction markets provided little additional value relative to a simple average of the forecasts. In addition, we examined participants' perceptions of the group and the group process. The participants rated personal communications more favorably than computer-mediated interactions. The group interactions in FTF and nominal groups were perceived as being highly cooperative and effective. Prediction markets were rated least favourably: prediction market participants were least satisfied with the group process and perceived their method as the most difficult.
Full-text available
The accuracy of prediction markets has been documented both for markets based on real money and those based on play money. To test how much extra accuracy can be obtained by using real money versus play money, we set up a real-world online experiment pitting the predictions of (real money) against those of (play money) regarding American Football outcomes during the 2003-2004 NFL season. As expected, both types of markets exhibited significant predictive powers, and remarkable performance compared to individual humans. But, perhaps surprisingly, the play-money markets performed as well as the real-money markets. We speculate that this result reflects two opposing forces: real-money markets may better motivate information discovery while play-money markets may yield more efficient information aggregation.
Assessing the probabilities of future events is a problem often faced by science policymakers. For example, CERN, the European laboratory for particle physics, recently had to judge whether the probability of discovering a Higgs boson was high enough to justify extending the operation of its collider (see Science, 22 Sept., p. 2014 and 29 Sept., p. 2260). At the Foresight Exchange (FX) Web site, traders can actually bet on the outcomes of unresolved scientific questions, including whether physicists will discover the Higgs boson by 2005. The going price of the security (0.77 as of 24 Jan) can be seen as the market's assessment of the probability of the particle's discovery. FX is only a game, run with play money (FX dollars). Empirical studies [1], laboratory investigations [2], and policy proposals [3] argue that prices of real-money securities do constitute accurate likelihoods, since traders have strong (monetary) incentives to leverage pertinent information. But can we place legitimate credence on the accuracy of FX prices, which are determined solely through competition in a play-money market game?
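Reading a price between 0 and 1 as a probability, as this abstract does with the 0.77 Higgs security, suggests a simple way to evaluate such prices once the underlying claims resolve: score them like probability forecasts. A sketch using the Brier score, with invented prices and outcomes (only the 0.77 figure comes from the abstract):

```python
# Treat each security's closing price (0..1) as the market's probability
# estimate, then score resolved claims with the Brier score (lower is better).

def brier_score(prices, outcomes):
    """Mean squared difference between prices and 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in zip(prices, outcomes)) / len(prices)

prices = [0.77, 0.20, 0.55, 0.90]   # market prices when each claim closed
outcomes = [1, 0, 1, 1]             # 1 = the claim came true

score = brier_score(prices, outcomes)
# A perfectly calibrated, perfectly sharp forecaster scores 0.0;
# always answering 0.5 scores 0.25, so anything well below 0.25 beats chance.
```

Scoring rules like this are how the cited empirical studies test whether play-money prices "constitute accurate likelihoods" despite the absence of monetary stakes.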
Election markets have been praised for their ability to forecast election outcomes, and to forecast better than trial-heat polls. This paper challenges that optimistic assessment of election markets, based on an analysis of Iowa Electronic Market (IEM) data from presidential elections between 1988 and 2004. We argue that it is inappropriate to naively compare market forecasts of an election outcome with exact poll results on the day prices are recorded, that is, market prices reflect forecasts of what will happen on Election Day whereas trial-heat polls register preferences on the day of the poll. We then show that when poll leads are properly discounted, poll-based forecasts outperform vote-share market prices. Moreover, we show that win projections based on the polls dominate prices from winner-take-all markets. Traders in these markets generally see more uncertainty ahead in the campaign than the polling numbers warrant--in effect, they overestimate the role of election campaigns. Reasons for the performance of the IEM election markets are considered in concluding sections.
The intelligence failures surrounding the invasion of Iraq dramatically illustrate the necessity of developing standards for evaluating expert opinion. This book fills that need. Here, Philip E. Tetlock explores what constitutes good judgment in predicting future events, and looks at why experts are often wrong in their forecasts. Tetlock first discusses arguments about whether the world is too complex for people to find the tools to understand political phenomena, let alone predict the future. He evaluates predictions from experts in different fields, comparing them to predictions by well-informed laity or those based on simple extrapolation from current trends. He goes on to analyze which styles of thinking are more successful in forecasting. Classifying thinking styles using Isaiah Berlin's prototypes of the fox and the hedgehog, Tetlock contends that the fox--the thinker who knows many little things, draws from an eclectic array of traditions, and is better able to improvise in response to changing events--is more successful in predicting the future than the hedgehog, who knows one big thing, toils devotedly within one tradition, and imposes formulaic solutions on ill-defined problems. He notes a perversely inverse relationship between the best scientific indicators of good judgment and the qualities that the media most prizes in pundits--the single-minded determination required to prevail in ideological combat. Clearly written and impeccably researched, the book fills a huge void in the literature on evaluating expert opinion. It will appeal across many academic disciplines as well as to corporations seeking to develop standards for judging expert decision-making.
Prediction markets are increasingly being considered as methods for gathering, summarizing and aggregating diffuse information by governments and businesses alike. Critics worry that these markets are susceptible to price manipulation by agents who wish to distort decision making. We study the effect of manipulators on an experimental market, and find that manipulators are unable to distort price accuracy. Subjects without manipulation incentives compensate for the bias in offers from manipulators by setting a different threshold at which they are willing to accept trades.
“Prediction markets” are designed specifically to forecast events such as elections. Though election prediction markets have been conducted for almost twenty years, to date nearly all of the evidence on efficiency compares election eve forecasts with final pre-election polls and actual outcomes. Here, we present evidence that prediction markets outperform polls for longer horizons. We gather national polls for the 1988 through 2004 U.S. Presidential elections and ask whether either the poll or a contemporaneous Iowa Electronic Markets vote-share market prediction is closer to the eventual outcome for the two-major-party vote split. We compare market predictions to 964 polls over the five Presidential elections since 1988. The market is closer to the eventual outcome 74% of the time. Further, the market significantly outperforms the polls in every election when forecasting more than 100 days in advance.
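The 74% figure in this abstract comes from a pairwise comparison: for each poll, ask whether the contemporaneous market price or the poll was closer to the final two-party vote split. The comparison can be sketched as follows, with hypothetical vote shares (the real study paired 964 polls with IEM prices; these five pairs are invented):

```python
# For each poll, compare its error with the contemporaneous market forecast.
# All shares are hypothetical two-party splits for candidate A.

def market_win_rate(market_forecasts, poll_forecasts, outcome):
    """Fraction of paired forecasts where the market is strictly closer."""
    wins = sum(
        1 for m, p in zip(market_forecasts, poll_forecasts)
        if abs(m - outcome) < abs(p - outcome)
    )
    return wins / len(market_forecasts)

outcome = 0.53                           # final two-party vote share
market = [0.52, 0.54, 0.51, 0.53, 0.55]  # market prices on the poll days
polls = [0.48, 0.57, 0.50, 0.56, 0.54]   # poll-based shares, same days

rate = market_win_rate(market, polls, outcome)  # 0.8: market closer in 4 of 5
```

A win rate of 0.5 would mean the two sources are interchangeable; the study's 74% is the analogue of `rate` computed over all 964 pairs.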
With error-prone and biased individual traders, can markets aggregate trader information and produce efficient outcomes? We review election stock market evidence that suggests this does happen. Individual traders consistently appear biased and error-prone, yet these markets prove quite efficient in predicting election outcomes. We also review work that documents comparable, but substantially different, phenomena in related laboratory markets. In addition, we report the results from a new laboratory session which shows how we can create particular biases that mirror those in election stock markets. Finally, we discuss how combined laboratory and field experiments can help us understand trader/market interactions.
Prediction markets are mechanisms that aggregate information such that an estimate of the probability of some future event is produced. It has been established that both real-money and play-money prediction markets are reasonably accurate. An SPRT-like test is used to determine whether there are statistically significant differences in accuracy between the two markets. The results establish that real-money markets are significantly more accurate for non-sports events. We also examine the effect of volume and whether differences between forecasts are market specific.
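The "SPRT-like test" this abstract mentions refers to Wald's sequential probability ratio test, which processes outcomes one at a time and stops as soon as the accumulated log-likelihood ratio crosses a decision threshold. A generic Bernoulli SPRT, sketched here as an illustration rather than as the paper's exact procedure (the hit sequence and hypothesized accuracies are invented):

```python
import math

def sprt_bernoulli(observations, p0, p1, alpha=0.05, beta=0.05):
    """Wald's SPRT for H0: p = p0 vs H1: p = p1 on 0/1 observations.

    Returns ("H0" | "H1" | "undecided", number of observations used).
    """
    upper = math.log((1 - beta) / alpha)  # cross above: accept H1
    lower = math.log(beta / (1 - alpha))  # cross below: accept H0
    llr = 0.0
    for n, x in enumerate(observations, start=1):
        if x:
            llr += math.log(p1 / p0)
        else:
            llr += math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", len(observations)

# E.g. test whether a market's event calls succeed at rate 0.8 (H1)
# rather than 0.6 (H0), on a hypothetical run of hits (1) and misses (0):
hits = [1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1]
decision, n_used = sprt_bernoulli(hits, p0=0.6, p1=0.8)
# decision == "H1", reached on the 18th observation
```

The appeal of a sequential test for comparing real-money and play-money markets is that it can declare a significant accuracy difference as soon as the evidence warrants, without fixing the sample size in advance.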