Article

Introduction—The 2004 Presidential Election Forecasts

Abstract

This symposium presents seven presidential election forecasting models and their predictions of the popular two-party vote in the 2004 election. The modern age of election forecasting is now into its third decade. Models have been tested quite publicly in the heat of battle—with some doing well, others not quite so well, and still others making way for new models. In this introduction, I provide a brief overview of the models, a summary of this year's forecasts, and some thoughts about how the forecasts should be judged.

... For example, after observing Ray Fair's use of out-of-sample testing of his equation (Fair, 2002), Rebecca Morton (2006, p. 373) noted that "political science models have less ability to do such out-of-sample predictions, given the data limitations they have to begin with (Norpoth being an exception)." In fact, most political science equations routinely perform out-of-sample tests, and have done so for many years (e.g., Campbell, 2004b; Campbell & Wink, 1990; Holbrook, 2004; Lewis-Beck & Tien, 2004; Lockerbie, 2004; Wlezien & Erikson, 2004; see also Lewis-Beck, 2005, p. 153; and Campbell, 2000, p. 175). ...
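The out-of-sample logic invoked in this excerpt is mechanical enough to sketch. The following illustration refits a two-predictor vote equation with each election held out in turn and reports the out-of-sample mean absolute error; the growth, approval, and vote figures are invented stand-ins, not the series used by any of the models cited above.

```python
# A minimal sketch of leave-one-out, out-of-sample validation for a
# two-predictor vote equation. All data below are hypothetical.
import numpy as np

growth   = np.array([2.5, -0.3, 4.1, 1.2, 3.0, 0.8, 2.2, 1.5])   # economy
approval = np.array([55., 38., 60., 47., 52., 41., 49., 45.])     # approval
vote     = np.array([53.2, 46.1, 58.8, 49.4, 52.9, 47.3, 51.0, 48.6])

X = np.column_stack([np.ones_like(growth), growth, approval])
errors = []
for i in range(len(vote)):
    keep = np.arange(len(vote)) != i                 # hold out election i
    beta, *_ = np.linalg.lstsq(X[keep], vote[keep], rcond=None)
    errors.append(vote[i] - X[i] @ beta)             # out-of-sample residual

print(f"out-of-sample MAE: {np.mean(np.abs(errors)):.2f} points")
```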
... We know, for example, that the measures of the pre-election economy used in many forecasting models are refined and improved years and sometimes decades after the election, and that public opinion measures of approval and voter preferences contain sampling and other measurement errors. However, the fact that election forecast errors are matters of degree and are to be expected does not mean that forecasters are "off the hook" (Campbell, 2004a, p. 734). Interval forecasts are not beyond evaluation. ...
Article
This article examines four problems with past evaluations of presidential election forecasting and suggests one aspect of the models that could be improved. Past criticism has had problems with establishing an overall appraisal of the forecasting equations, in assessing the accuracy of both the forecasting models and their forecasts of individual election results, in identifying the theoretical foundations of forecasts, and in distinguishing between data-mining and learning in model revisions. I contend that overall assessments are innately arbitrary, that benchmarks can be established for reasonable evaluations of forecast accuracy, that blanket assessments of forecasts are unwarranted, that there are strong (but necessarily limited) theoretical foundations for the models, and that models should be revised in the light of experience, while remaining careful to avoid data-mining. The article also examines the question of whether current forecasting models grounded in retrospective voting theory should be revised to take into account the partial-referendum nature of non-incumbent, open-seat elections such as the 2008 election.
... We recognize that our model lacked accuracy in predicting the outcome of the 2021 election. Model tinkering has been explored in the election forecasting literature (e.g., Campbell 2004; Lewis-Beck and Tien 2008). Such adjustments can be reasonable, but they must be prudent, theory-driven, and avoid overstretching (Lewis-Beck and Tien 2008). ...
... In election studies, analyzing and predicting U.S. presidential elections is of both theoretical and practical importance, making it a longstanding focal point for political scientists (Campbell 2004). As the world's only superpower, the United States' strategies and policies substantially impact the international order. ...
Preprint
Full-text available
The recent wave of artificial intelligence, epitomized by large language models (LLMs), has presented opportunities and challenges for methodological innovation in political science, sparking discussions on a potential paradigm shift in the social sciences. However, how can we understand the impact of LLMs on knowledge production and paradigm transformation in the social sciences from a comprehensive perspective that integrates technology and methodology? What are LLMs' specific applications and representative innovative methods in political science research? These questions, particularly from a practical methodological standpoint, remain underexplored. This paper proposes the "Intelligent Computing Social Modeling" (ICSM) method to address these issues by clarifying the critical mechanisms of LLMs. ICSM leverages the strengths of LLMs in idea synthesis and action simulation, advancing intellectual exploration in political science through "simulated social construction" and "simulation validation." By simulating the U.S. presidential election, this study empirically demonstrates the operational pathways and methodological advantages of ICSM. By integrating traditional social science paradigms, ICSM not only enhances the quantitative paradigm's capability to apply big data to assess the impact of factors but also provides qualitative paradigms with evidence for social mechanism discovery at the individual level, offering a powerful tool that balances interpretability and predictability in social science research. The findings suggest that LLMs will drive methodological innovation in political science through integration and improvement rather than direct substitution.
... For elections, this can mean taking the previous election result, or the mean of all previous elections, as the forecast. Campbell (2004) illustrates this requirement vividly with the U.S. presidential elections: a forecasting model should have an MAE below 4.8 percentage points, because otherwise one could just as well estimate the presidential election from the average of all election results since 1948. A model must therefore perform better than a naive, atheoretical approach. ...
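Campbell's benchmark is easy to make concrete. The sketch below scores a naive forecaster that predicts each election with the average of all earlier results; the vote shares are hypothetical placeholders rather than the actual 1948-2004 series.

```python
# A small sketch of the naive benchmark described above: forecast each
# election with the mean of all earlier results and measure the MAE.
# The vote shares are invented placeholders.
import numpy as np

vote = np.array([52.3, 44.6, 57.8, 49.9, 61.3, 49.6, 53.8, 48.9, 55.2])

naive_errors = [
    abs(vote[i] - vote[:i].mean())   # mean of all earlier elections
    for i in range(1, len(vote))
]
print(f"naive-benchmark MAE: {np.mean(naive_errors):.2f} points")
# A structural model earns its keep only if it beats this benchmark
# out of sample.
```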
Chapter
Forecasting is a small but steadily growing research field in political science, with applications in several subfields of the discipline. It refers to statistical models that explicitly predict politically relevant phenomena before they occur, following the scientific principles of intersubjective traceability and reproducibility. This contribution introduces the foundations of political science forecasting. The presentation focuses on election forecasts, in particular structural models, which are explained using a canonical election forecasting model as an example. Synthetic models, aggregation models, "wisdom of the crowd" approaches, and prediction markets are also discussed.
... The error of the vote forecast will be, if not the lowest, among the lowest on record. Campbell (1994; 2001; 2004; 2008; 2016); Campbell and Garand (2000); and Campbell and Mann (1992). Some of the early forecasts appeared outside one of the Campbell symposia (e.g., Lewis-Beck [1985]). ...
... The individual expert forecasts were compared to the respective forecasts from polls and fundamentals from the same day. See, for example, the special symposia in PS: Political Science & Politics published before each of the U.S. presidential elections from 2004 to 2016 (Campbell, 2004; 2008). ...
Article
Full-text available
This study analyzes the relative accuracy of experts, polls, and the so-called ‘fundamentals’ in predicting the popular vote in the four U.S. presidential elections from 2004 to 2016. Although the majority (62%) of 452 expert forecasts correctly predicted the directional error of polls, the typical expert’s vote share forecast was 7% (of the error) less accurate than a simple polling average from the same day. The results further suggest that experts follow the polls and do not sufficiently harness information incorporated in the fundamentals. Combining expert forecasts and polls with a fundamentals-based reference class forecast reduced the error of experts and polls by 24% and 19%, respectively. The findings demonstrate the benefits of combining forecasts and the effectiveness of taking the outside view for debiasing expert judgment.
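As a rough illustration of the combining result reported in this abstract, the sketch below averages an expert forecast, a same-day polling average, and a fundamentals-based forecast and compares each against the outcome; all numbers are invented, not the study's data.

```python
# A minimal sketch of forecast combination: average three sources and
# compare errors. The figures are illustrative only.
import numpy as np

expert       = 51.5   # expert's two-party vote share forecast (%)
poll_average = 52.8   # same-day polling average
fundamentals = 50.9   # fundamentals-based reference class forecast

combined = np.mean([expert, poll_average, fundamentals])
actual   = 51.2
for name, f in [("expert", expert), ("polls", poll_average),
                ("fundamentals", fundamentals), ("combined", combined)]:
    print(f"{name:>12}: forecast {f:.1f}, error {abs(f - actual):.1f}")
```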
... Election forecasting has become a thriving discipline in the United States, as evident from the sheer number of competing models published in articles and symposia in the run-up to presidential elections since 2004 (e.g., Campbell, 2004; Lewis-Beck and Stegmaier, 2014). While election forecasting based on statistical models originated in the US, the method has traveled to other advanced democracies (for an overview, see Lewis-Beck, 2005) as well as newer democracies (Lewis-Beck and Bélanger, 2012). ...
Article
Full-text available
Serious election forecasting has become a routine activity in most Western democracies, with various methodologies employed, for example, polls, models, prediction markets, and citizen forecasting. In the Netherlands, however, election forecasting has limited itself to the use of polls, mainly because other approaches are viewed as too complicated, given the great fragmentation of the Dutch party system. Here we challenge this view, offering the first structural forecasting model of legislative elections there. We find that a straightforward Political Economy equation managed an accurate forecast of the 2017 contest, clearly besting the efforts of the pollsters.
... In 2004, the first APSA Symposium on presidential election forecasts reported predictions from 7 different academic models (Campbell, 2004). In 2008, it reported on 9 models (Campbell, 2008). ...
Research
Full-text available
This paper presents an adaptation of the DeSart and Holbrook presidential election forecast model for the purpose of making longer-range forecasts of presidential elections up to a year in advance of the election. Relying upon state electoral histories, home state advantage, and “time for change” variables, the model produces in-sample forecasts similar to that of the DeSart and Holbrook September forecast. On the basis of this model, a series of forecasts are generated for the 2016 election.
... In election forecasting, rules to evaluate scientific approaches (e.g., polls, political stock markets and statistical models) have been designed and applied to gauge their accuracy [18]. Examples of election forecasting model evaluations have been reported in the United States, France and the United Kingdom [19]. ...
Article
Nowadays, lots of service providers offer predictive services that show in advance a condition or occurrence about the future. As a consequence, it becomes necessary for service customers to select the predictive service that best satisfies their needs. The QuPreSS reference model provides a standard solution for the selection of predictive services based on the quality of their predictions. QuPreSS has been designed to be applicable in any predictive domain (e.g., weather forecasting, economics, and medicine). This paper presents Mercury, a tool based on the QuPreSS reference model and customized to the weather forecast domain. Mercury measures weather predictive services' quality, and automates the context-dependent selection of the most accurate predictive service to satisfy a customer query. To do so, candidate predictive services are monitored so that their predictions can be eventually compared to real observations obtained from a trusted source. Mercury is a proof-of-concept of QuPreSS that aims to show that the selection of predictive services can be driven by the quality of their predictions. Throughout the paper, we show how Mercury was built from the QuPreSS reference model and how it can be installed and used.
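The monitoring-and-selection loop that Mercury automates can be sketched in a few lines. The service names and readings below are invented, and this is not the QuPreSS API; the idea reduces to scoring each service's stored predictions against trusted observations and picking the one with the lowest error.

```python
# A minimal sketch of quality-driven selection among predictive services:
# compare each service's past predictions with trusted observations and
# select the most accurate one. All names and numbers are hypothetical.
import numpy as np

predictions = {               # past temperature forecasts per service (deg C)
    "service_a": np.array([21.0, 18.5, 25.1, 19.9]),
    "service_b": np.array([22.4, 17.0, 26.3, 18.2]),
}
observed = np.array([21.5, 18.0, 25.5, 19.5])   # trusted-source observations

mae = {name: np.mean(np.abs(p - observed)) for name, p in predictions.items()}
best = min(mae, key=mae.get)
print(f"selected service: {best} (MAE {mae[best]:.2f})")
```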
... As of 6 September 2004, with one exception, the political science forecast models predicted a Bush popular vote plurality. From low to high, the forecasts of the popular two-party vote for Bush ranged from 49.9% by Lewis-Beck and Tien through 51.7 to 52.9% by Wlezien and Erikson to higher figures (Campbell, 2004b). Economist Ray Fair's forecast at that time was for a Bush vote of 57.5%. ...
Article
Full-text available
Presidential elections are largely structured by certain fundamentals that are in place before the campaigns begin. These are the public's opinion about the in-party and the candidate choice, the general state of the election-year economy, and incumbency. This trinity of fundamentals has in various ways been incorporated into statistical models that accurately forecast the major party division of the popular vote well before Election Day. This article examines the historical associations between several indicators of these fundamental forces and the national vote. It also examines the state of these indicators in the 2004 presidential election. They indicate that the fundamentals leading into the 2004 campaign generally favoured George W. Bush and anticipated his re-election.
... From 1992 on, about half a dozen teams of forecasters, usually but not always the same people, could be depended on to release pre-election presidential forecasts. Here is the forecast range for the incumbent two-party popular vote share in these contests: 1992 (Bush) 44.8 to 55.7 per cent; 1996 (Clinton) 54.8 to 58.1 per cent; 2000 (Gore) 53 to 62 per cent; 2004 (Bush) 49.9 to 57.6 per cent (Campbell and Garand 2000; Campbell 2004a). ...
Article
To forecast an election means to declare the outcome before it happens. Scientific approaches to election forecasting include polls, political stock markets and statistical models. I review these approaches, with an emphasis on the last, since it offers more lead time. Consideration is given to the history and politics of statistical forecasting models of elections. Rules for evaluating such models are offered. Examples of actual models come from the United States, France and the United Kingdom, where this work is rather new. Compared to other approaches, statistical modelling seems a promising method for forecasting elections.
... Often, these latter two are pitted against each other, in competition (Lewis-Beck, 2001). It is possible to show that, in the long run, the two methods yield about the same error (Campbell, 2004; Lewis-Beck, 2005). ...
Article
Here, we address the issue of forecasting from statistical models, and how they might be improved. Our real-world example is the forecasting of US presidential elections. First, we ask whether a model should be changed. To illustrate problems and opportunities, we examine the forecasting history of different models, in particular our own, which has tried to foresee presidential selection since 1984. We apply what we learn to the question of whether our Jobs model, which offered an accurate ex ante point estimate for 2004, should be changed for 2008. We conclude there is room for judicious, theory-driven adjustment, but also raise a caution about inadvertent curve-fitting. Some evidence is offered that simple core models, based on strong theory, may perform almost as well as more stretched models.
... The results of presidential elections can be predicted with a high degree of accuracy from indicators of economic growth and public approval of the incumbent administration: voters re-elect the incumbent during times of economic growth, but opt for change during times of distress. Changes in GNP over the past year or the level of public approval of the incumbent president four months before the election are relevant to election outcomes; day-to-day tactics of the candidates in October seemingly are not (see Bartels and Zaller 2001; Campbell 2004). At the very least, this evidence suggests that the prevailing political context is just as important as anything the candidates themselves might say over the course of the campaign. ...
Article
Full-text available
The outcome of the 2016 election made it abundantly clear that victory in US presidential contests depends on the Electoral College much more than on direct universal suffrage. This fact points to the importance of using state-level models to arrive at adequate predictions of winners and losers in US presidential elections. In fact, the use of a model disaggregated to the state level and focusing on three types of measures—namely, changes in the unemployment rate, presidential popularity, and indicators of long-term patterns in the regional strength of the Democratic and Republican parties—has in the past enabled us to produce fairly accurate forecasts of the number of Electoral College votes for the presidential candidates of the two major American parties. In this article, we bring various modifications to this model to improve its overall accuracy. With Joe Biden out of the race, this revised model predicts that Donald Trump will succeed in winning back the presidency with 341 electoral votes against 197 for Kamala Harris.
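A state-level model like this one ultimately has to be translated into an Electoral College tally. The sketch below shows one generic way to do that by simulating state outcomes from win probabilities; the three states, their electoral votes, and the probabilities are invented for illustration and are not outputs of the authors' model.

```python
# A minimal sketch of turning state-level win probabilities into an
# Electoral College forecast by simulation. All inputs are hypothetical.
import numpy as np

rng   = np.random.default_rng(0)
ev    = np.array([38, 29, 16])          # electoral votes in three toy states
p_win = np.array([0.55, 0.40, 0.70])    # candidate's win probability per state

draws = rng.random((100_000, 3)) < p_win       # simulate each state outcome
ev_totals = draws.astype(int) @ ev             # candidate's EV per simulation
print(f"expected electoral votes: {ev_totals.mean():.1f}")
print(f"P(majority, i.e. >= 42 of 83): {(ev_totals >= 42).mean():.3f}")
```

The sketch assumes winner-take-all allocation and independence across states; a real application would also model correlated polling errors across states.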
Article
Using global data for election predictions. Assumptions underlying election result predictions have been questioned recently. Kennedy et al. assessed more than 650 executive office elections in over 85 countries and performed two live forecasting experiments. They analyzed a variety of potential predictors theorized to be of importance, ranging from economic performance to polling data. Elections were about 80 to 90% predictable, despite uncertainties with available data. Polling data were very important to successful prediction, although it was necessary to correct for systematic biases. Unexpectedly, economic indicators were only weakly predictive. As data sources improve and grow, predictive power is expected to increase. Science, this issue p. 515
Article
This chapter explores differences in federal budget communication associated with the development and passage of the Federal Budget Resolution for Fiscal Years 1999, 2000, and 2001. While theory suggests that party-based differences within budget communication exist, empirical studies have not yet explored the full extent of these differences. The goal of this research is to illustrate the significant party-based differences in the goals and values communicated by the actors within the federal budget process. These findings inform our understanding of how actors within this key governing process communicate. This understanding will better equip public administrators to engage others in dialogue and debate that facilitates agreement and understanding.
Article
Full-text available
The 2011-2012 Arab Spring uprising can be considered a new political phenomenon with respect to collective action and the origin of network governance in North Africa and the Middle East. Nevertheless, current formal and empirical models are incapable of analyzing and predicting the future of the uprisings. The conceptualization of these models must therefore be reviewed, given the increasing need for a political analytical model that can assess the state of the state and consider the influence of non-state actors on service provision and security mechanisms inside their societies. The circumstances require a simple conceptual model that describes state status (stable or unstable) in a simple representational form for countries such as Egypt following the Arab Spring. This study proposes a framework to explain the influence of network governance on state stability; to keep it broadly applicable, the framework is deliberately general and conceptual. It can thus offer a more realistic explanation of the political transformations that occurred in Arab Spring countries such as Egypt. The analysis showed that formal mathematical models could not persuasively explain the Arab Spring phenomenon because such models are based on theories and ideas that are inapplicable to the changes in the political environment that occurred in these countries. The proposed framework describes state status, whereby a state is stable or unstable without necessarily being a failed state, and aims to help political analysts develop recommendations for policy- and decision-makers on how to avoid state instability.
Article
Political analyst Mark Smith offers the most original and compelling explanation yet of why America has swung to the right in recent decades. How did the GOP transform itself from a party outgunned and outmaneuvered into one that defines the nation's most important policy choices? Conventional wisdom attributes the Republican resurgence to a political bait and switch--the notion that conservatives win elections on social issues like abortion and religious expression, but once in office implement far-reaching policies on the economic issues downplayed during campaigns. Smith illuminates instead the eye-opening reality that economic matters have become more central, not less, to campaigns and the public agenda. He analyzes a half century of speeches, campaign advertisements, party platforms, and intellectual writings, systematically showing how Republican politicians and conservative intellectuals increasingly gave economic justifications for policies they once defended through appeals to freedom. He explains how Democrats similarly conceived economic justifications for their own policies, but unlike Republicans they changed positions on issues rather than simply offering new arguments and thus helped push the national discourse inexorably to the right. The Right Talk brings clarity, reason, and hard-nosed evidence to a contentious subject. Certain to enrich the debate about the conservative ascendancy in America, this book will provoke discussions and reactions for years to come.
Article
"A fresh and incisive contribution to our understanding of presidential elections and the presidency. Ranging beyond media horse race coverage and quantitative models of voting behavior, Crockett provides several innovative explanations for presidential elections past and present."- Steven Schier, Congdon Professor of Political Science, Carleton College "Well written and carefully argued, Crockett's book continues the exploration of 'opposition presidencies' begun in his excellent book The Opposition Presidency"
Article
Reporting data and predicting trends through the 2008 campaign, this classroom-tested volume offers again James E. Campbell's "theory of the predictable campaign," incorporating the fundamental conditions that systematically affect the presidential vote: political competition, presidential incumbency, and election-year economic conditions. Campbell's cogent thinking and clear style present students with a readable survey of presidential elections and political scientists' ways of studying them. The American Campaign also shows how and why journalists have mistakenly assigned a pattern of unpredictability and critical significance to the vagaries of individual campaigns. This excellent election-year text provides: a summary and assessment of each of the serious predictive models of presidential election outcomes; a historical summary of many of America's important presidential elections; a significant new contribution to the understanding of presidential campaigns and how they matter.
Article
Full-text available
Surveys have long been critical tools for understanding elections and forecasting their results. As the number of election surveys has increased in prevalence, researchers, journalists, and standalone political bloggers have sought to learn from the wealth of information released. This paper explores three central strategies for pooling surveys and other information to better understand both the state of an election and what can be expected when voters head to the polls. Aggregation, predictive modeling, and hybrid models are assessed as ways of improving on the results of individual surveys. For each method, central questions, key choices, applications, and considerations for use are discussed. Trade-offs in choices between pooling strategies are considered, and the accuracies of each set of strategies for forecasting results in 2012 are compared. Although hybrid models have the potential to most accurately pool election information and make predictions, the simplicity of aggregations and the theory-testing capacity of predictive models can sometimes prove more valuable.
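Of the three pooling strategies discussed, aggregation is the simplest to demonstrate. The sketch below contrasts a plain average of polls with a sample-size-weighted one; the poll readings are invented.

```python
# A minimal sketch of poll aggregation: weight each poll by its sample
# size. Poll values are illustrative only.
import numpy as np

share = np.array([51.0, 49.5, 52.2, 50.1])   # candidate share per poll (%)
n     = np.array([800, 1200, 600, 1500])     # poll sample sizes

weighted = np.average(share, weights=n)
simple   = share.mean()
print(f"simple average: {simple:.2f}, sample-size weighted: {weighted:.2f}")
```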
Article
We consider ensemble Bayesian model averaging (EBMA) in the context of small-n prediction tasks in the presence of large numbers of component models. With large numbers of observations for calibrating ensembles, relatively small numbers of component forecasts, and low rates of missingness, the standard approach to calibrating forecasting ensembles introduced by Raftery et al. (2005) performs well. However, data in the social sciences generally do not fulfill these requirements. In these circumstances, EBMA models may mis-weight components, undermining the advantages of the ensemble approach to prediction. In this article, we explore these issues and introduce a "wisdom of the crowds" parameter to the standard EBMA framework, which improves its performance. Specifically, we show that this solution improves the accuracy of EBMA forecasts in predicting the 2012 US presidential election and the US unemployment rate.
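The intuition behind the "wisdom of the crowds" parameter can be illustrated without the full EBMA machinery: shrink accuracy-based component weights toward equal weights so that no component is effectively zeroed out. The sketch below is that simplified idea, not the article's estimator; forecasts, historical errors, and the shrinkage value are all invented.

```python
# A rough sketch of shrinking accuracy-based ensemble weights toward
# equal weights, in the spirit of the "wisdom of the crowds" adjustment.
import numpy as np

forecasts = np.array([51.8, 49.2, 53.0])   # component model forecasts
past_mae  = np.array([1.0, 3.5, 2.0])      # components' historical MAE

raw_w = (1 / past_mae) / (1 / past_mae).sum()   # accuracy-based weights
c = 0.3                                         # shrinkage toward uniform
w = (1 - c) * raw_w + c * np.ones(3) / 3
print("weights:", np.round(w, 3))
print(f"ensemble forecast: {forecasts @ w:.2f}")
```

Setting c to 0 recovers pure accuracy weighting, while c of 1 gives the equal-weighted "crowd".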
Article
Full-text available
Politico-Econometric Models and Electoral Predictions for May 2007 in France. This paper discusses the question of electoral prediction based on politico-econometric models. It presents a brief historical sketch of this specific research domain in political economy, and a synthesis of the present model-based predictions for the French 2007 presidential elections. Because the reviewed models predict different and somewhat opposed results, we suggest using an ex ante arbitrage, based on a simple indicator, the Figaro-Sofres popularity index for the socialist party. The arbitrage between potential winners appears to be very clear. As this paper is written six weeks ante eventum, it can be seen as a kind of natural experiment in itself, useful for testing the predictive capacity of our selected indicator.
Article
Election forecasting models assume retrospective economic voting and clear mechanisms of accountability. Indeed, previous research indicates that incumbent political parties are held accountable for the state of the economy. In this article we investigate a ‘hard case’ for the assumptions of election forecasting models. Belgium is a multiparty system with perennial coalition governments. Furthermore, it has two completely segregated party systems (Dutch and French language). Since the prime minister during the period 1974–2011 has always been a Dutch language politician, French language voters could not even vote for the prime minister, so this cognitive shortcut to establishing political accountability is not available. The results of an analysis for the French speaking parties (1981–2010) show that retrospective economic voting occurs even in these conditions of opaque accountability, as election results respond to indicators with regard to GDP and unemployment levels. Party membership figures can be used to model the popularity function in election forecasting.
Article
Full-text available
This contribution proposes a theoretical model for identifying general trends in the movement of volatile voters. Starting from decentralized data, our method complements opinion surveys in understanding what share of a party's result comes from loyal voters retained from one election to the next, and what share of its electorate has been renewed. It can also be used for predictive purposes: from a small number of results, potentially drawn from areas considered unrepresentative, our model extracts a general trend and reproduces it across a given territory. We propose a typology of the possible results of the different parties (winning, losing, stable, or renewed). As an illustration, we apply this model to the 2010 Belgian federal elections, using the results of the 2009 regional elections as the reference.
Article
Congressional election forecasting has experienced steady growth. Currently fashionable models stress prediction over explanation. The independent variables do not offer a substantive account of the election outcome. Instead, these variables are tracking variables, that is, indicators that may trace the result but fail to explain it. The outstanding example is the generic ballot measure, which asks respondents for whom they plan to vote in the upcoming congressional race. While this variable correlates highly with presidential party House seat share, it is bereft of substance. The generic ballot measure is the archetypical tracking variable, and it holds pride of place in the Abramowitz (2010) model. Other examples of such tracking variables are exposed seats or lagged seats, features of the Campbell (2010) model. The difficulty with such tracking models is twofold. First, they are not based on a theory of the congressional vote. Second, because they are predictive models, they offer a suboptimal forecasting instrument when compared to models specified according to strong theory.
Article
Full-text available
This paper compares the forecast of the 2004 presidential election generated with the fiscal model with those produced with models included in what we call the Campbell Collection, after the editor or co-editor of several successive special issues of PS: Political Science and Politics devoted to forecasting. The analysis shows that the fiscal model performs as well as or better than the typical member of the Campbell Collection on several operational criteria. It does so without the benefit of survey data. Also, alone among the models, including Ray Fair's, which is also included in the comparison, the fiscal model offers practical advice to presidents. Echoing Machiavelli, it says that, if they wish to extend their party's tenure in the White House, they should forego a policy of fiscal expansion.
Article
On Labor Day, 57 days before the election, using the Gallup poll's division of likely voters and GDP growth during the second quarter of the year, the trial-heat and economy forecasting model predicted that George W. Bush would receive 53.8% of the two-party popular vote (Campbell 2004a). Out of concerns about relying too heavily on a single poll and the possible complications associated with the Republican Convention running right up to the Labor Day weekend, a companion model based on the pre-convention Gallup poll, the net convention poll bump, and the economy was constructed. It forecast a slightly closer election, with a Bush vote of 52.8%.
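A trial-heat-and-economy equation of the kind described here is a two-predictor regression. The sketch below fits one on invented data and plugs in Labor-Day-style values; neither the series nor the resulting coefficients are Campbell's.

```python
# A minimal sketch of a trial-heat-and-economy forecasting equation:
# regress the incumbent party's two-party vote on its Labor Day poll
# share and Q2 GDP growth. All data are hypothetical.
import numpy as np

trial_heat = np.array([54., 46., 58., 49., 52., 44., 51.])   # Labor Day poll
gdp_q2     = np.array([2.8, -0.5, 4.0, 1.1, 2.2, 0.3, 1.9])  # Q2 growth (%)
vote       = np.array([53.0, 47.2, 57.1, 49.8, 51.7, 46.0, 50.6])

X = np.column_stack([np.ones_like(vote), trial_heat, gdp_q2])
b, *_ = np.linalg.lstsq(X, vote, rcond=None)
print("intercept, poll, growth coefficients:", np.round(b, 2))

new = np.array([1.0, 52.0, 3.0])    # hypothetical election-year inputs
print(f"forecast: {b @ new:.1f}% of the two-party vote")
```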
Article
1. The errors were: Wlezien and Erikson 0.5% and 1.7%, Lewis-Beck and Tien 1.3%, Abramowitz 2.4%, Campbell 2.5%, Norpoth 3.5%, Holbrook 4.9%, and Lockerbie 6.4%. Due to a coding mistake, Holbrook's forecast was 56.1% for Bush, not his originally reported 54.5%. The forecasting model by Cuzan and …
Article
Although neural networks are increasingly used in a variety of disciplines, there are few applications in political science. Approaches to electoral forecasting traditionally employ some form of linear regression modelling. By contrast, neural networks offer the opportunity to also consider the non-linear aspects of the process, promising better performance, efficacy and flexibility. The initial development of this approach preceded the 2001 general election and models correctly predicted a Labour victory. The original data used for training and testing the network were based on the responses of two experts to a set of questions covering each general election held since 1835 up to 1997. To bring the model up to date, 2001 election data were added to the training set and two separate neural networks were trained using the views of our original two experts. To generate a forecast for the forthcoming general election, answers to the same questions about the performance of parties during the current parliament, obtained from a further 35 expert respondents, were offered to the neural networks. Both models, with slightly different probabilities, forecast another Labour victory. Modelling electoral forecasts using neural networks is at an early stage of development, but the method is to be adapted to forecast party shares in local council elections. The greater frequency of such elections will offer better opportunities for training and testing the neural networks.
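A minimal version of this setup can be sketched with a small multilayer perceptron: train on feature vectors coding expert answers about past elections, then ask for a win probability for the coming one. Everything below, including features, outcomes, and network size, is invented for illustration and assumes scikit-learn is available; it is not the authors' trained model.

```python
# A toy sketch of neural-network election forecasting: fit a small MLP
# on expert-coded features of past elections and predict the next winner.
# All data are synthetic.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X = rng.random((40, 6))        # 40 past elections, 6 expert-coded features
y = (X[:, 0] + 0.5 * X[:, 3] + 0.2 * rng.random(40) > 0.8).astype(int)

net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
net.fit(X, y)

next_election = rng.random((1, 6))   # the new experts' coded answers
print("P(incumbent-party win):",
      net.predict_proba(next_election)[0, 1].round(3))
```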
Article
Although the use of models has come to dominate much of the scientific study of politics, our understanding of the role or function that models play in the scientific enterprise has not kept pace. Political science clings to an outdated theory-based approach to scientific inference known as hypothetico-deductivism. We argue for a new approach to scientific inference that highlights the centrality of models in scientific reasoning, avoids the pitfalls of the hypothetico-deductive method, and offers political scientists a new way of thinking about the relationship between the natural world and the models with which we are so familiar.
Article
Full-text available
Electoral prediction from Twitter data is an appealing research topic. It seems relatively straightforward and the prevailing view is overly optimistic. This is problematic because while simple approaches are assumed to be good enough, core problems are not addressed. Thus, this article aims to (1) provide a balanced and critical review of the state of the art; (2) cast light on the presumed predictive power of Twitter data; and (3) propose some considerations to push the field forward. Hence, a scheme to characterize Twitter prediction methods is proposed. It covers every aspect from data collection to performance evaluation, through data processing and vote inference. Using that scheme, prior research is analyzed and organized to explain the main approaches taken up to date but also their weaknesses. This is the first meta-analysis of the whole body of research regarding electoral prediction from Twitter data. It reveals that the presumed predictive power of Twitter data has been somewhat exaggerated: social media may provide a glimpse of electoral outcomes but, up to now, research has not provided strong evidence that it can currently replace traditional polls. Nevertheless, there are some reasons for optimism and, hence, further work on this topic is required, along with tighter integration with traditional electoral forecasting research.
Article
Many contend that President Bush's reelection and increased vote share in 2004 prove that the Iraq War was either electorally irrelevant or aided him. We present contrary evidence. Focusing on the change in Bush's 2004 showing compared to 2000, we discover that Iraq casualties from a state significantly depressed the President's vote share there. We infer that were it not for the approximately 10,000 U.S. dead and wounded by Election Day, Bush would have won nearly 2% more of the national popular vote, carrying several additional states and winning decisively. Such a result would have been close to forecasts based on models that did not include war impacts. Casualty effects are largest in “blue” states. In contrast, National Guard/Reservist call-ups had no impact beyond the main casualty effect. We discuss implications for both the election modeling enterprise and the debate over the “casualty sensitivity” of the U.S. public.
Article
The election in the German federal state of Saarland in late August 2009 took place under very special circumstances. This analysis nevertheless tried to predict its results by means of statistical regression models, in order to test how well scientific forecasts would fare under very special and unusual circumstances in comparison to polls and political stock markets, the other two main devices for projecting election outcomes. Model forecasts for the election, validated against previous state elections in the Saarland, were generated based on advanced versions of the forecasting model developed by Thomas Gschwend and Helmut Norpoth for German federal elections. The essential results emerged as predicted: first, that the ruling conservative party CDU would lose its majority and would not even reach a majority together with its preferred coalition partner, the FDP. Second, that majorities for three-party coalitions including the Greens seemed well possible, at least numerically. And third, that a re-election of Lafontaine as prime minister proved to be highly unlikely under all envisioned scenarios. In comparison to polls and political stock markets, the model forecasts for the CDU, SPD and LINKE together, as well as for the Green Party, were almost all closest to the true results; only the result for the FDP was clearly underestimated.
Article
Full-text available
The Jobs Model of presidential election forecasting predicted well in 2004. The model, based on data available in August 2004, generated an error of only 1.3 percentage points when forecasting the incumbent share of the two-party popular vote (Lewis-Beck and Tien 2004). In contrast, the median forecast from seven teams of statistical modelers was off 2.6 percentage points (Campbell 2004, 734). We believe that the Jobs Model was more accurate because it broadened measurement of economic performance, a conceptual variable lying at the core of most of these efforts. Take, as a representative example, the Growth Model in Table 1, Column 1. Its forecast for George W. Bush was 54.0% (almost exactly at the median for the above-mentioned group of forecasters). This model was earlier reported by us, but rejected on grounds of specification error (Lewis-Beck and Tien 2004, 754). We argued that the changing nature of the American economy required attention to a hitherto neglected variable—job creation. When this variable, new jobs over the presidential term, is added to the Growth Model, the fit statistics improve dramatically (see Table 1, Column 2).
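The comparison described here, a Growth model against a Jobs model that adds a job-creation variable, amounts to comparing fit statistics across nested regressions. The sketch below does this with adjusted R-squared on invented data standing in for the Lewis-Beck and Tien series.

```python
# A minimal sketch of comparing a Growth model with a Jobs model that
# adds a job-creation regressor. All series are hypothetical.
import numpy as np

growth = np.array([2.5, -0.3, 4.1, 1.2, 3.0, 0.8, 2.2, 1.5])   # GNP growth (%)
jobs   = np.array([3.1, 0.2, 5.0, 1.0, 2.4, 0.5, 2.8, 1.1])    # job growth (%)
vote   = np.array([53.2, 46.1, 58.8, 49.4, 52.9, 47.3, 51.0, 48.6])

def adj_r2(X, y):
    """Adjusted R-squared of an OLS fit, penalizing extra regressors."""
    n, k = X.shape
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    ss_res = ((y - X @ b) ** 2).sum()
    ss_tot = ((y - y.mean()) ** 2).sum()
    return 1 - (ss_res / (n - k)) / (ss_tot / (n - 1))

ones = np.ones_like(vote)
print(f"Growth model adj. R^2: {adj_r2(np.column_stack([ones, growth]), vote):.3f}")
print(f"Jobs model   adj. R^2: {adj_r2(np.column_stack([ones, growth, jobs]), vote):.3f}")
```

Adjusted R-squared is used here because plain in-sample R-squared can never fall when a regressor is added, so it would make the larger model look better by construction.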
Article
Full-text available
The statistical modelers are back. The presidential election forecasting errors of 2000 did not repeat themselves in 2004. On the contrary, the forecasts, from at least seven different teams, were generally quite accurate (Campbell 2004; Lewis-Beck 2005). Encouragingly, their prowess is receiving attention from forecasters outside the social sciences, in fields such as engineering and commerce. Noteworthy here is the recent special issue on U.S. presidential election forecasting published in the International Journal of Forecasting, containing some 10 different papers (Campbell and Lewis-Beck 2008). Our contribution in that special issue explored the question of whether our Jobs Model, off by only 1 percentage point in its 2004 forecast, was a simple product of data-mining (Lewis-Beck and Tien 2008).
Article
The 2004 US presidential election proved again how difficult it is to predict vote shares on the basis of polls. Midday media exit polls suggested that Senator Kerry would become the 44th President. Political scientists and econometricians, led by Ray Fair, have promulgated theoretical arguments and empirical results to predict US presidential elections, using macro-economic data and political factors. Respecifying Fair's war variable to include Korea and Vietnam and removing serial correlation improves his election forecasting without public opinion poll variables. This generalized Fair model predicts President Bush's two-party vote share would be 52.3 percent, well below predictions by Fair and prestigious political scientists.
Article
This paper assesses the current state of U. S. presidential election forecasting, describing forecast methods and their predictive accuracies for the most recent election, 2004. Three types of forecasts were made for the election using the methods noted: 1) point forecasts of the popular vote (by campaign polls, futures contracts on candidates' performance, regression models, Delphi expert surveys, and a combination of forecasts from these methods); 2) point forecasts of the electoral vote (by regression models, probability models based on state polls, a compilation of median polls in states, and exit polls); and 3) dichotomous forecasts of the popular-vote winner (by a multi-indicator index, cut-points for single indicators, and bellwether states). Candidate futures provided the most accurate popular-vote forecasts. A state probability model and the median state poll technique were the most accurate electoral vote methods. All three dichotomous techniques successfully predicted the election winner.
Article
The popularity of the president as ascertained months prior to a presidential election permits an accurate prediction of the election outcome, even when the incumbent president is not running for reelection.
Article
Current Research. This section of POQ is reserved for brief reports of research in progress, discussions of unresolved problems, methodological studies, and public opinion data not extensively analyzed or interpreted. Succinct case histories are welcomed, as well as hypotheses and insights that may be useful to other students of public opinion. Usually, material in this section is shorter, more informal, and more tentative than in preceding pages.
Article
Our primary aim is to forecast, rather than explain, presidential election results, using aggregate time series data from the post-World War II period. More particularly, we seek prediction of the presidential winner well before the election actually occurs. After comparing the performance of several naive bivariate models based on economic performance, international involvement, political experience, and presidential popularity, we go on to formulate a multivariate model. This economy-popularity regression model rather accurately forecasts the winner 6 months in advance of the election, by employing spring measures of presidential popularity and the growth rate in real GNP per capita. Furthermore, the model's performance, both ex post facto and prior to the election, compares favorably with the Gallup final preelection poll taken only a few days before the election.
Article
This article updates through the 1992 election the equation originally presented in Fair (1978) explaining votes for president. Previous updates are in Fair (1982, 1988, 1990). The equation made a large error in predicting the 1992 election (as will be seen), and much of this article is concerned with this problem. The new results suggest that in forming expectations voters look back further than the old results suggested they did. The general model that is behind the equation is reviewed in Section 1, and the data that have been used are discussed in Section 2. The equation is then updated, estimated, and tested in Section 3. Section 4 contains predictions of the 1996 election, conditional on the state of the economy, and Section 5 concludes with some caveats.