Article

Keys to the White House: Forecast for 2008

Abstract

Lichtman explains his Keys model, which has successfully predicted, more than one year in advance, the popular-vote winner of every presidential election from 1984 to 2004. He then applies the model to the 2008 presidential election. Copyright International Institute of Forecasters, 2005
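The Keys model's decision rule can be sketched in a few lines. This is a hedged illustration, not Lichtman's published implementation: the model scores 13 true/false statements ("keys") about the incumbent party's situation, and by the published rule the incumbent party is predicted to lose the popular vote when six or more keys are false. The key names below are abbreviated paraphrases for illustration.

```python
# Sketch of the Keys decision rule (key names abbreviated for illustration).
KEYS = [
    "party_mandate", "no_nomination_contest", "incumbent_running",
    "no_third_party", "strong_short_term_economy",
    "strong_long_term_economy", "major_policy_change",
    "no_social_unrest", "no_scandal", "no_foreign_failure",
    "foreign_success", "charismatic_incumbent", "uncharismatic_challenger",
]

def predict(keys_true: set) -> str:
    """Predict the popular-vote outcome for the incumbent party.

    keys_true: the subset of KEYS judged true for this election.
    Rule: six or more false keys -> incumbent party loses.
    """
    false_keys = sum(1 for k in KEYS if k not in keys_true)
    return "incumbent loses" if false_keys >= 6 else "incumbent wins"
```

Because the rule needs only a count of false keys, the forecast can be issued as soon as the judgments are made, well before the election, which is how the model produces predictions more than a year in advance.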

Article
Using the index method, we developed the PollyBio model to predict election outcomes. The model, based on 49 cues about candidates' biographies, was used to predict the outcomes of the 28 U.S. presidential elections from 1900 to 2008. Using a simple heuristic, it correctly predicted the winner in 25 of the 28 elections and was wrong three times. In predicting the two-party vote shares for the last four elections, from 1996 to 2008, the model's out-of-sample forecasts yielded a lower forecasting error than 12 benchmark models. By relying on different information and including more variables than traditional models, PollyBio improves the accuracy of election forecasting. It is particularly helpful for forecasting open-seat elections. In addition, it can help parties select the candidates running for office.
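The index method described above can be sketched as a unit-weighted tally. The cue names and data below are hypothetical, not the 49 cues PollyBio actually uses: each biographical cue is coded 1 if it favors a candidate and 0 otherwise, all cues are weighted equally, and the candidate with the higher total is predicted to win.

```python
# Hedged sketch of the index method: unit-weighted binary cues.
# Cue names and values are hypothetical illustrations.

def index_score(cues: dict) -> int:
    """Unit-weighted sum of binary cues (1 = favorable, 0 = not)."""
    return sum(cues.values())

def predict_winner(candidates: dict) -> str:
    """Predict the candidate whose cue profile has the highest index score."""
    return max(candidates, key=lambda name: index_score(candidates[name]))

race = {
    "Candidate A": {"married": 1, "military_service": 1, "held_governorship": 0},
    "Candidate B": {"married": 1, "military_service": 0, "held_governorship": 0},
}
# Candidate A scores 2, Candidate B scores 1, so A is predicted to win.
```

Equal weighting is the design choice that distinguishes the index method from regression models: with many variables and few prior elections, estimating weights from the data risks overfitting, whereas unit weights let the model absorb far more variables than a regression could support.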
Article
Armstrong and Graefe apply the index method to predict presidential elections. They imply that the technique is also useful for business decision making. Their idea has merit and may be relevant when the decision context is dynamic, has few prior "observations," and where domain knowledge exists. However, Armstrong and Graefe fail to adequately explain the variable selection process, clarify the conditions under which the index method is appropriate, identify the types of problems most amenable to it, or discuss how it can be calibrated to help make single-option decisions.
Article
Empirical comparisons of reasonable approaches provide evidence on the best forecasting procedures to use under given conditions. Based on this evidence, I summarize the progress made over the past quarter century with respect to methods for reducing forecasting error. Seven well-established methods have been shown to improve accuracy: combining forecasts and Delphi help for all types of data; causal modeling, judgmental bootstrapping and structured judgment help with cross-sectional data; and causal models and trend-damping help with time series data. Promising methods for cross-sectional data include damped causality, simulated interaction, structured analogies, and judgmental decomposition; for time series data, they include segmentation, rule-based forecasting, damped seasonality, decomposition by causal forces, and damped trend with analogous data. The testing of multiple hypotheses has also revealed methods where gains are limited: these include data mining, neural nets, and Box–Jenkins methods. Multiple hypotheses tests should be conducted on widely used but relatively untested methods such as prediction markets, conjoint analysis, diffusion models, and game theory.
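Combining forecasts, the first of the well-established methods listed above, is simple enough to show in a few lines. This is a minimal illustration with made-up numbers, not data from the article: an equal-weight average of forecasts produced by different methods typically yields a lower error than the typical individual forecast, because the component methods' errors partially cancel.

```python
# Illustration of combining forecasts with equal weights.
# The forecast values and the actual outcome below are invented.

def combine(forecasts):
    """Equal-weight (unweighted average) combination of point forecasts."""
    return sum(forecasts) / len(forecasts)

actual = 52.0                              # hypothetical two-party vote share
forecasts = [49.0, 51.5, 55.0]             # e.g. poll, econometric, index model

combined_error = abs(combine(forecasts) - actual)
mean_individual_error = sum(abs(f - actual) for f in forecasts) / len(forecasts)
# Here the combined error (~0.17) is well below the mean individual error (~2.17).
```

The gain comes from diversification: when the component methods draw on different information, their errors are imperfectly correlated, so the average sits closer to the truth than the typical component does on its own.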