Article

The PollyVote Popular Vote Forecast for the 2020 US Presidential Election

Article
While combining forecasts is well known to reduce error, the question of how best to combine forecasts remains. Prior research suggests that combining is most beneficial when relying on diverse forecasts that incorporate different information. Here, I provide evidence in support of this hypothesis by analyzing data from the PollyVote project, which has published combined forecasts of the popular vote in U.S. presidential elections since 2004. Prior to the 2020 election, the PollyVote revised its original method of combining forecasts by, first, restructuring individual forecasts based on their underlying information and, second, adding naïve forecasts as a new component method. On average across the last 100 days prior to the five elections from 2004 to 2020, the revised PollyVote reduced the error of the original specification by eight percent and, with a mean absolute error (MAE) of 0.8 percentage points, was more accurate than any of its component forecasts. The results suggest that, when deciding which forecasts to include in a combination, forecasters should be more concerned with the component forecasts’ diversity than with their historical accuracy.
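The accuracy comparison described above can be sketched with a toy calculation: an equal-weight average of component forecasts, scored by mean absolute error (MAE) against the actual vote share. All numbers below are hypothetical, not PollyVote data.

```python
import numpy as np

# Hypothetical daily forecasts of the incumbent's two-party vote share (%)
# from three component methods over five days; the actual result is 51.0.
polls  = np.array([52.1, 51.8, 51.5, 51.2, 51.0])
models = np.array([49.8, 49.9, 50.1, 50.3, 50.4])
naive  = np.array([51.9, 51.9, 51.9, 51.9, 51.9])  # e.g., last election's result

actual = 51.0
combined = (polls + models + naive) / 3  # equal-weight combination

def mae(forecast, actual):
    """Mean absolute error in percentage points."""
    return np.mean(np.abs(forecast - actual))

for name, f in [("polls", polls), ("models", models),
                ("naive", naive), ("combined", combined)]:
    print(f"{name:9s} MAE = {mae(f, actual):.2f}")
```

Because the components err in different directions, the combination's errors partially cancel, and its MAE comes in below that of every individual method.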
Article
Full-text available
On election eve the presidential vote can be seen fairly clearly from trial-heat polls. Earlier in the election year, polls offer less information about what will happen on Election Day, as they capture preferences to the moment and do not anticipate future changes. We know that the standing of the sitting president will be important and the economy too, but both can change as the election cycle unfolds. Our solution to the problem of early forecasting has been to turn to The Conference Board’s index of leading economic indicators (LEI). The growth in LEI through the spring of the election year is a strong predictor of the vote, as it provides a summary of the state of the economy leading up to the election year and gives advance indication of changes during the election year. Our model also includes the incumbent party candidate’s share of the two-party vote in trial-heat polls, which can be measured at any time during the election year. These polls increasingly incorporate economic conditions and also other non-economic “fundamentals.” Since before the conventions, our model has predicted 45% of the two-party vote for Trump and his current probability of winning is 4%.
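The two-predictor structure of the model described above (LEI growth through spring plus the incumbent party candidate's trial-heat poll share) can be sketched as an ordinary least squares fit. The data points and resulting coefficients below are purely illustrative, not the authors' estimates.

```python
import numpy as np

# Hypothetical election-year data: LEI growth through spring (%), the
# incumbent party's share of the two-party trial-heat polls (%), and the
# actual incumbent two-party vote share (%).
lei_growth = np.array([ 2.5, -0.5,  1.8,  3.0,  0.2, -1.2])
poll_share = np.array([53.0, 48.5, 51.0, 54.5, 49.5, 47.0])
vote_share = np.array([53.4, 48.0, 51.5, 55.0, 49.8, 46.9])

# OLS: vote = b0 + b1 * lei_growth + b2 * poll_share
X = np.column_stack([np.ones_like(lei_growth), lei_growth, poll_share])
beta, *_ = np.linalg.lstsq(X, vote_share, rcond=None)

# Forecast for a new (hypothetical) year: weak LEI growth, weak polling
new_x = np.array([1.0, 1.0, 46.0])
print("forecast:", new_x @ beta)
```

The appeal of this structure is that both predictors are observable well before Election Day, so the same fitted equation can be applied at any point in the election year.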
Article
Full-text available
The present study reviews the accuracy of four methods (polls, prediction markets, expert judgment, and quantitative models) for forecasting the two German federal elections in 2013 and 2017. On average across both elections, polls and prediction markets were most accurate, while experts and quantitative models were least accurate. The accuracy of individual forecasts did not correlate across elections. That is, methods that were most accurate in 2013 did not perform particularly well in 2017. A combined forecast, calculated by averaging forecasts within and across methods, was more accurate than two out of three component forecasts. The results conform to prior research on US presidential elections in showing that combining is effective in generating accurate forecasts and avoiding large errors.
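Averaging "within and across methods", as described above, is a two-step combination: first average the forecasts produced by each method, then average those method-level means, so that methods contributing many forecasts do not dominate the result. A minimal sketch with hypothetical numbers:

```python
import numpy as np

# Hypothetical vote-share forecasts (%) grouped by method.
forecasts = {
    "polls":   [51.2, 50.8, 51.5, 50.9],
    "markets": [50.4, 50.6],
    "experts": [52.0, 51.0, 51.8],
    "models":  [49.7],
}

# Step 1: average within each method.
within = {method: np.mean(values) for method, values in forecasts.items()}

# Step 2: average the method-level means across methods.
combined = np.mean(list(within.values()))

print(within)
print("combined:", round(combined, 3))
```

A single pooled average over all ten forecasts would instead weight polls four times as heavily as the quantitative model; the two-step version gives each method one vote.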
Article
Full-text available
This study analyzes the relative accuracy of experts, polls, and the so-called ‘fundamentals’ in predicting the popular vote in the four U.S. presidential elections from 2004 to 2016. Although the majority (62%) of 452 expert forecasts correctly predicted the directional error of polls, the typical expert’s vote share forecast was 7% (of the error) less accurate than a simple polling average from the same day. The results further suggest that experts follow the polls and do not sufficiently harness information incorporated in the fundamentals. Combining expert forecasts and polls with a fundamentals-based reference class forecast reduced the error of experts and polls by 24% and 19%, respectively. The findings demonstrate the benefits of combining forecasts and the effectiveness of taking the outside view for debiasing expert judgment.
Chapter
Full-text available
The PollyVote uses evidence-based techniques for forecasting the popular vote in presidential elections. The forecasts are derived by averaging existing forecasts generated by six different forecasting methods. In 2016, the PollyVote correctly predicted that Hillary Clinton would win the popular vote. The 1.9 percentage-point error across the last 100 days before the election was lower than the average error for the six component forecasts from which it was calculated (2.3 percentage points). The gains in forecast accuracy from combining are best demonstrated by comparing the error of PollyVote forecasts with the average error of the component methods across the seven elections from 1992 to 2016. The average errors for the last 100 days prior to the election were: public opinion polls (2.6 percentage points), econometric models (2.4), betting markets (1.8), and citizens’ expectations (1.2); for expert opinions (1.6) and index models (1.8), data were only available since 2004 and 2008, respectively. The average error for PollyVote forecasts was 1.1 percentage points, lower than the error for even the most accurate component method.
Article
Full-text available
This article introduces this JBR Special Issue on simple versus complex methods in forecasting. Simplicity in forecasting requires that (1) method, (2) representation of cumulative knowledge, (3) relationships in models, and (4) relationships among models, forecasts, and decisions are all sufficiently uncomplicated as to be easily understood by decision-makers. Our review of studies comparing simple and complex methods - including those in this special issue - found 97 comparisons in 32 papers. None of the papers provide a balance of evidence that complexity improves forecast accuracy. Complexity increases forecast error by 27 percent on average in the 25 papers with quantitative comparisons. The finding is consistent with prior research to identify valid forecasting methods: all 22 previously identified evidence-based forecasting procedures are simple. Nevertheless, complexity remains popular among researchers, forecasters, and clients. Some evidence suggests that the popularity of complexity may be due to incentives: (1) researchers are rewarded for publishing in highly ranked journals, which favor complexity; (2) forecasters can use complex methods to provide forecasts that support decision-makers’ plans; and (3) forecasters’ clients may be reassured by incomprehensibility. Clients who prefer accuracy should accept forecasts only from simple evidence-based procedures. They can rate the simplicity of forecasters’ procedures using the questionnaire at simple-forecasting.com.
Article
Full-text available
This article proposes a unifying theory, or the Golden Rule, of forecasting. The Golden Rule of Forecasting is to be conservative. A conservative forecast is consistent with cumulative knowledge about the present and the past. To be conservative, forecasters must seek out and use all knowledge relevant to the problem, including knowledge of methods validated for the situation. Twenty-eight guidelines are logically deduced from the Golden Rule. A review of evidence identified 105 papers with experimental comparisons; 102 support the guidelines. Ignoring a single guideline increased forecast error by more than two-fifths on average. Ignoring the Golden Rule is likely to harm accuracy most when the situation is uncertain and complex, and when bias is likely. Non-experts who use the Golden Rule can identify dubious forecasts quickly and inexpensively. To date, ignorance of research findings, bias, sophisticated statistical procedures, and the proliferation of big data have led forecasters to violate the Golden Rule. As a result, despite major advances in evidence-based forecasting methods, forecasting practice in many fields has failed to improve over the past half-century.
Article
Full-text available
Simple surveys that ask people who they expect to win are among the most accurate methods for forecasting U.S. presidential elections. The majority of respondents correctly predicted the election winner in 193 (89%) of 217 surveys conducted from 1932 to 2012. Across the last 100 days prior to the seven elections from 1988 to 2012, vote expectation surveys provided more accurate forecasts of election winners and vote shares than four established methods (vote intention polls, prediction markets, econometric models, and expert judgment). Gains in accuracy were particularly large compared to polls. On average, the error of expectation-based vote-share forecasts was 51% lower than the error of polls published the same day. Compared to prediction markets, vote expectation forecasts reduced the error on average by 6%. Vote expectation surveys are inexpensive, easy to conduct, and the results are easy to understand. They provide accurate and stable forecasts and thus make it difficult to frame elections as horse races. Vote expectation surveys should be more strongly utilized in the coverage of election campaigns.
Article
Full-text available
We summarize the literature on the effectiveness of combining forecasts by assessing the conditions under which combining is most valuable. Using data on the six US presidential elections from 1992 to 2012, we report the reductions in error obtained by averaging forecasts within and across four election forecasting methods: poll projections, expert judgment, quantitative models, and the Iowa Electronic Markets. Across the six elections, the resulting combined forecasts were more accurate than any individual component method, on average. The gains in accuracy from combining increased with the numbers of forecasts used, especially when these forecasts were based on different methods and different data, and in situations involving high levels of uncertainty. Such combining yielded error reductions of between 16% and 59%, compared to the average errors of the individual forecasts. This improvement is substantially greater than the 12% reduction in error that had been reported previously for combining forecasts.
Article
Full-text available
We used the take-the-best heuristic to develop a model to forecast the popular two-party vote shares in U.S. presidential elections. The model draws upon information about how voters expect the candidates to deal with the most important issue facing the country. We used cross-validation to calculate a total of 1000 out-of-sample forecasts, one for each of the last 100 days of the ten U.S. presidential elections from 1972 to 2008. Ninety-seven per cent of forecasts correctly predicted the winner of the popular vote. The model forecasts were competitive compared to forecasts from methods that incorporate substantially more information (e.g., econometric models and the Iowa Electronic Markets). The purpose of the model is to provide fast advice on which issues candidates should stress in their campaign.
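Take-the-best, the heuristic named above, inspects cues in descending order of validity and decides on the first cue that discriminates between the options. The cue names and validities below are hypothetical, chosen only to illustrate the mechanism:

```python
# Cues sorted by (hypothetical) validity, highest first. Each entry holds
# the cue's binary value for candidate A and for candidate B.
cues = [
    # (name, validity, value_for_A, value_for_B)
    ("favored on most important issue", 0.80, 1, 0),
    ("incumbent party",                 0.65, 1, 1),
    ("leads in early polls",            0.60, 0, 1),
]

def take_the_best(cues):
    """Decide on the first cue that discriminates; otherwise no decision."""
    for name, validity, a, b in cues:
        if a != b:  # first discriminating cue settles the choice
            return ("A" if a > b else "B"), name
    return None, None  # all cues tie: fall back to guessing

winner, deciding_cue = take_the_best(cues)
print(winner, "via", deciding_cue)
```

The heuristic ignores all cues below the first discriminating one, which is why it needs far less information than econometric models while remaining competitive.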
Article
Three vote-share equations are estimated and analyzed in this paper, one for presidential elections, one for on-term House elections, and one for mid-term House elections. The sample period is 1916-2006. Considering the three equations together allows one to test whether the same economic variables affect each and to examine various serial correlation and coattail possibilities. The resulting three equation model can then be analyzed dynamically, which is done in Section 4. The main conclusions are briefly: 1) There is strong evidence that the economy affects all three vote shares and in remarkably similar ways. 2) There is no evidence of any presidential coattail effects on the on-term House elections. The presidential vote share and the on-term House vote share are highly positively correlated, but this is because they are affected by some of the same variables. 3) There is positive serial correlation in the House vote in that the previous mid-term House vote share positively affects the on-term House vote share and the previous on-term House vote share positively affects the mid-term House vote share. 4) The presidential vote share has a negative effect on the next mid-term House vote share. The most likely explanation for this is a balance argument, where voters are reluctant to let one party become too dominant. Ruled out as possible explanations for this fourth result is any reversal of a coattail effect, since there is no evidence of an effect in the first place, and a regression to the mean, since the positive serial correlation in the House vote implies no such regression. Also, it is not simply voting against the party in the White House, because the presidential variable is a vote share variable not a 0,1 incumbency variable.
References

Graefe, Andreas. 2017b. "Prediction Market Performance in the 2016 US Presidential Election." Foresight: The International Journal of Applied Forecasting 2017 (45): 38-42.

Graefe, Andreas. 2020b. "Replication Data for: The PollyVote Popular-Vote Forecast for the 2020 US Presidential Election." Harvard Dataverse. doi:10.7910/DVN/RLECFV.

Lichtman, Allan. 2020. "He Predicted Trump's Win in 2016. Now He's Ready to Call 2020." New York Times. Accessed August 5, 2020.

Federal Reserve Bank of Philadelphia. 2020. "Third Quarter 2020 Survey of Professional Forecasters." Accessed August 28, 2020.