Forthcoming in the
Journal of Behavioral Decision Making
Forecasting Elections from Voters’ Perceptions of
Candidates’ Ability to Handle Issues
March 24, 2012
Andreas Graefe
Department of Communication Science
and Media Research
LMU Munich, Germany
J. Scott Armstrong
The Wharton School
University of Pennsylvania, Philadelphia, PA
Abstract: When deciding for whom to vote, voters should select the candidate they expect to best handle
issues, all other things equal. A simple heuristic predicted that the candidate who is rated more favorably
on a larger number of issues would win the popular vote. This was correct for nine out of ten U.S.
presidential elections from 1972 to 2008. We then used simple linear regression to relate the incumbent’s
relative issue ratings to the actual two-party popular vote shares. The resulting model yielded out-of-
sample forecasts that were competitive with those from the Iowa Electronic Markets and other established
quantitative models. This model has implications for political decision-makers, as it can help to track
campaigns and to decide which issues to focus on.
Keywords. index method, unit weighting, experience table, presidential election, accuracy
Acknowledgments. We thank Alfred Cuzán, Jason Dana, Ray Fair, Gerd Gigerenzer, Kesten Green,
Robin Hogarth, Philippe Jacquart, Randall Jones and Allan Lichtman for their helpful comments. We also
received suggestions when presenting earlier versions of this paper at the 2008 and 2009 International
Symposia on Forecasting and the 2010 Bucharest Dialogues on Expert Knowledge, Prediction,
Forecasting: A Social Sciences Perspective. Janice Dow, Rui Du, Joseph Cloward, and Max Feldman
helped with collecting data. Sheela Prasad and Michael Guth helped with coding the issues. Nathan
Fleetwood, Jen Kwok, Kelsey Matevish, and Rebecca Mueller did editorial work.
When deciding for whom to vote, voters use many different strategies. Redlawsk (2004) reported
experimental data showing that some people aim to evaluate the candidates on all issues in order to make
the “best” decision, whereas others use simple heuristics to limit their comparison to a small subset of
issues. In the extreme case, people may compare candidates on a single issue such as the economy, a
behavior known as single-issue voting.
Graefe and Armstrong (2012) developed the big-issue model to forecast U.S. presidential election
outcomes centered on only a single piece of information. Based on the take-the-best heuristic (Gigerenzer
& Goldstein, 1996), the big-issue model predicts that the candidate with the strongest voter support on the
single most important issue facing the country will win the popular vote. The big-issue model provides a
quick and inexpensive forecast that is expected to be accurate when the most important issue is of
widespread importance.
In situations where there is no single issue that is clearly more important than all others combined,
or if the relative importance of issues changes over time, it would seem prudent to include more issues.
This is likely to improve the accuracy and stability of the forecast. We developed a model for forecasting
U.S. presidential elections that incorporates voters’ perceptions of the candidates’ relative performance on
the complete set of issues raised in polls. For this, we used the index method, as it is especially useful for
selection problems with many important variables and a substantial amount of prior knowledge. The
resulting issue-index model can aid candidates in developing campaign strategies around issues.
Index method
The index method has long been used for forecasting and selection problems. Analysts prepare a list of
key variables and specify from prior evidence whether they are favorable (+1), unfavorable (-1), or
indeterminate (0) in their influence on a certain outcome. Alternatively, the scoring can be 1 for a positive
position and 0 otherwise. The analysts simply add the scores to determine the forecast. The higher the total
score, the higher the forecast of the dependent variable. For selection problems with multiple choices, the
analyst would pick the option with the highest score.
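As an illustration, the scoring-and-summing procedure just described can be sketched as follows; the options, variables, and scores are hypothetical and not taken from any of the models discussed in this paper:

```python
# Hypothetical illustration of the index method: each variable is scored
# +1 (favorable), -1 (unfavorable), or 0 (indeterminate), the unit-weighted
# scores are summed, and the option with the highest total is selected.

def index_score(scores):
    """Total index score: the simple sum of the unit-weighted variable scores."""
    return sum(scores)

def select_option(options):
    """For a selection problem, pick the option with the highest total score."""
    return max(options, key=lambda name: index_score(options[name]))

# Two options scored on three hypothetical variables
options = {
    "Option A": [+1, 0, -1],   # total score:  0
    "Option B": [+1, +1, 0],   # total score: +2
}

print(select_option(options))  # Option B
```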
The index method does not estimate weights from historical data on the variable of interest. This makes
the method particularly valuable in situations with small samples and many variables, or in situations in
which the variables change over time. The underlying idea is to use unit weights for assessing each
variable’s directional influence on the outcome. Thus, the index method requires good domain knowledge
(e.g., prior research or expert knowledge).
In general, the index method is useful if (1) a large number of variables are important, (2) good
knowledge exists regarding which variables have an effect and the direction of that effect, and (3) new
variables are likely to arise. The primary disadvantage of the index method is that it is difficult to estimate
the size of a variable’s effect on the outcome. For a discussion of the conditions under which index models
are useful see Graefe and Armstrong (2011).
Prior research on unit weights
The index method has been criticized for giving each variable a unit weight. Many analysts believe that
employing differential weights will increase the accuracy of a model. However, prior evidence on the
relative performance of unit weighting and multiple regression (which estimates optimal weights from the
given data set for which predictions are needed) suggests that the issue of weights is not critical for
selection problems (e.g., Dawes & Corrigan, 1974; Dawes, 1979). Rather, evidence has shown that unit-
weight models often provide more accurate ex ante forecasts than regression weights for the same data.
Einhorn & Hogarth (1975) compared the predictive performance of multiple regression and unit
weights for selection problems. They concluded that unit weighting is more accurate than regression if the sample size is not large, the number of predictor variables is large, and the inter-correlation among these variables is high. For an analytic solution of the conditions under which unit weights outperform regression, see
Davis-Stober et al. (2010).
Empirical studies have been consistent with this finding. In analyzing published data in the
domain of applied psychology, Schmidt (1971) found regression to be less accurate than unit weighting. In
a review of the literature, Armstrong (1985:230) found regression to be slightly more accurate in three
studies (for academic performance, personnel selection, and medicine) but less accurate in five (three on
academic performance and one each on personnel selection and psychology). Czerlinski et al. (1999)
compared multiple regression and unit weighting for 20 prediction problems (including psychological,
economic, environmental, biological, and health problems), with the number of variables varying from 3
to 19. Most of these examples were taken from statistics textbooks, where they were used to demonstrate
the application of multiple regression. The authors reported that, not surprisingly, multiple regression
exhibited the best fit to the data set that was used to build the model (i.e., the training data). However, unit
weighting showed higher accuracy when predicting new data.
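The pattern these studies report can be reproduced in a small synthetic experiment. The sketch below is entirely simulated (nothing is drawn from the cited studies): regression weights are estimated on a small training sample with inter-correlated predictors whose true weights are equal, and out-of-sample errors are compared against unit weights.

```python
# Synthetic comparison of regression weights vs. unit weights, averaged over
# many replications: small training sample, correlated predictors, true
# weights all equal -- the conditions under which unit weighting excels.
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, k, reps = 15, 500, 6, 200

def make_data(n):
    base = rng.normal(size=(n, 1))
    X = base + rng.normal(size=(n, k))        # inter-correlated predictors
    y = X.sum(axis=1) + rng.normal(size=n)    # true model uses unit weights
    return X, y

mse_reg = mse_unit = 0.0
for _ in range(reps):
    X_tr, y_tr = make_data(n_train)
    X_te, y_te = make_data(n_test)
    # Regression: estimate intercept and weights from the small training sample
    A = np.column_stack([np.ones(n_train), X_tr])
    beta, *_ = np.linalg.lstsq(A, y_tr, rcond=None)
    pred_reg = np.column_stack([np.ones(n_test), X_te]) @ beta
    # Unit weighting: only the (known) direction of each effect is used
    pred_unit = X_te.sum(axis=1)
    mse_reg += np.mean((y_te - pred_reg) ** 2) / reps
    mse_unit += np.mean((y_te - pred_unit) ** 2) / reps

print(mse_unit < mse_reg)  # unit weights are more accurate out of sample
```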
Cuzán & Bundrick (2009) applied an equal weighting approach to three regression models for
predicting popular vote shares in U.S. presidential elections: Fair’s equation (Fair, 1978) and two
variations of the fiscal model (Cuzán & Heggen, 1984). For the 23 elections from 1916 to 2004, the equal
weighting scheme outperformed two of the three regression models and did equally well as the third
when making out-of-sample predictions. When the authors used data from the 32 elections from 1880 to
2004, they found that equal weighting yielded a lower mean absolute error than the three regression models.
Index models for election forecasting
Lichtman (2008) was the first to use the index method for forecasting U.S. presidential elections. His
“Keys” model assigns values of zero or one to an index of thirteen predictor variables. The model predicts
the incumbent party to lose the popular vote if it loses six or more of the thirteen keys. Examples of the
keys include two measures of economic conditions, questions of whether the incumbent president was
involved in a major scandal, and whether the current administration was affected by foreign or military
success (or failure). The “Keys” model provided correct forecasts retrospectively for all 31 elections since 1860 and prospectively for all of the last seven elections. No model has matched this level of
accuracy in picking the winner of the popular vote. In addition, the forecast of the “Keys” model has some
(though few) decision-making implications: it advises political parties to nominate candidates that are
highly charismatic or considered national heroes.
Armstrong and Graefe (2011) used an index of 59 biographical variables to predict the winner of
the popular vote among the 29 U.S. presidential election winners from 1896 to 2008. The variables
included the candidate’s relationship status (married vs. single), educational background (prestigious
college or not), and height (taller or shorter than opponent). The “bio-index” model correctly predicted the
winner in 27 of the 29 elections and yielded ex ante forecasts of the popular vote shares for the four
elections from 1996 to 2008 that were as accurate as the best of seven econometric models. The bio-index
model uses variables that have decision-making implications for political campaigns. It can help political
parties select candidates running for office.
To capture the perceived issue-handling competence of candidates and translate it into a single score, the
index method seemed an appropriate choice for several reasons: (1) the number of issues (i.e., variables)
that are considered important in a particular election campaign is large (sometimes more than 40), (2) the
importance of certain issues (e.g., the economy, crime, war, global warming, or health care) changes
during, as well as between, elections, and (3) the number of observations is small (i.e., information about
how voters perceive candidates to handle the issues was available only for the last ten elections from 1972
to 2008).
We collected and analyzed data from polls that asked voters to name the candidate who would be more
successful in handling an issue. For example: “Now I'm going to mention a few issues and for each one, please tell me if you think Barack Obama or John McCain would better handle that issue if they were elected president…” (cf. CNN/Opinion Research Corporation Poll, July 27-29, 2008). The issues included
topics such as terrorism, the economy, and immigration.
The availability of polling data on issues is a recent development. Polling data were obtained by
searching the iPOLL Databank of the Roper Center for Public Opinion Research for the time frame
starting exactly one year before each respective Election Day. For the elections before 1988, data were
collected by manually searching for all available polls. For the elections from 1988 to 2008, data were
collected by searching “better job OR best job” to manage the large number of available polls. For 2008,
data were collected from Given the lack of data on issues in the earlier years, the
analyses were conducted starting in 1972. As shown in Table 1, a total of 427 polls were reviewed to
determine voters’ opinion on 314 issues for the ten elections from 1972 to 2008.
Table 1: Final number of polls, issues, and index scores per election year
(For each election year, the table lists the number of polls and issues, the total issue-index score (S) for each candidate, the winner of the popular vote (R / D), and whether the heuristic's prediction was correct: 9 out of 10 predictions were correct, with the one incorrect prediction marked by an asterisk.)
Selecting the issues
The selection of issues followed an operational definition: “A political issue is a matter of public concern
and is something that the next president can be expected to take action about. An issue always focuses on
a particular problem. Issues do not include policies for solving problems.” Four coders (both authors and
two research assistants) independently classified each item of a list of 129 potential issues as to whether or
not it fits this definition. The coders fully agreed on 70% of the items and split 3 to 1 on another 26%. For the remaining 5%, the coders were tied (i.e., two coders classified an item as an issue while the remaining two did not); in these cases, the authors made the final decision. The complete data used in this study, including the coding of the issues, are available with the supporting information in the online version of this article.1
Generating the index and calculating scores
Voters’ support for the candidates on each issue was used as a variable in the index. On each day in the
forecast horizon, results were averaged from all available polls to calculate the voters’ support for the
candidates on a particular issue. In case of repeated polls by the same polling institute, poll results were
first averaged for each polling institute. Averaging was expected to improve reliability and thus, reduce
forecast error.
For each issue, index scores were generated for the candidates, assigning “1” to the candidate
receiving the higher voter support and “0” to the opponent. In cases in which candidates achieved equal
voter support, both candidates were assigned “0.” Finally, the index scores were summed to calculate the
overall index score (S) for each candidate. Table 2 displays a sample calculation for an index made of two issues.
1 The data can be accessed at
Table 2: Example calculation of simple two-issue index scores
(For each of two issues of concern to voters — one of them health care — the table lists the polling institution (ABC News/Washington Post Poll, June 12-15, 2008; Diageo/Hotline Poll, June 5-8, 2008; ABC News/Washington Post Poll, July 10-13, 2008; Time Poll, June 18-25, 2008), the voter support for each candidate, the resulting index scores, and the total issue-index scores (S).)
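The averaging and scoring steps described above can be sketched in code; the poll figures below are hypothetical placeholders, not the numbers in Table 2:

```python
# Sketch of the index-score calculation with hypothetical poll figures.
# Repeated polls by the same institute are averaged first; institute averages
# are then averaged per issue; the candidate with higher support gets "1".
from statistics import mean

# (issue, polling institute) -> list of (support for A %, support for B %)
polls = {
    ("Economy", "Institute X"): [(52, 44), (50, 46)],  # two polls, same institute
    ("Economy", "Institute Y"): [(47, 49)],
    ("Health care", "Institute X"): [(55, 41)],
}

support = {}  # issue -> list of per-institute (A, B) averages
for (issue, institute), results in polls.items():
    a = mean(r[0] for r in results)
    b = mean(r[1] for r in results)
    support.setdefault(issue, []).append((a, b))

score_a = score_b = 0
for issue, pairs in support.items():
    a = mean(p[0] for p in pairs)
    b = mean(p[1] for p in pairs)
    if a > b:
        score_a += 1      # "1" to the candidate with higher voter support
    elif b > a:
        score_b += 1      # on a tie, both candidates receive "0"

print(score_a, score_b)   # total issue-index scores (S): 2 0
```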
Issue-index heuristic to determine the election winner
The following heuristic was used to predict the popular vote winner: the candidate with the higher overall
index score (S) will win the popular vote. Note the heuristic’s simplicity; it does not require historical
information from previous elections. In using S to forecast the election winner of a specific election, the
model only draws on information from the given election.
Table 1 shows the heuristic forecasts on Election Eve of each election. The forecasts correctly predicted the winner of the popular vote for nine of the ten elections in our sample. The one exception was 1980, when the heuristic did not predict Reagan to win against Carter.
Issue-index model to predict two-party vote shares
Simple linear regression was used to generate the issue-index model for predicting the two-party popular vote shares. The regression model has an advantage over the heuristic in that it accounts for the uncertainty in the estimated relationship. The predictor variable is the relative index score (R) of the
incumbent party’s candidate, which represents the percentage of issues that favored the candidate of the
incumbent party.
That is, only a single predictor variable is used to represent all issues. The dependent variable is
the actual two-party popular vote share received by the candidate of the incumbent party (V). For the ten
elections from 1972 to 2008, this yielded the following vote equation:
V = 40.3 + 0.22 * R.
Thus, the model predicts that an incumbent would start with 40.3% of the vote; from there,
depending on the value of R, the incumbent would be able to increase his share of the vote. If the
percentage of issues favoring the incumbent went up by 10 percentage points, the incumbent’s vote share
would go up by 2.2 percentage points. Consistent with traditional forecasting models, the model reveals a
slight advantage for the incumbent. If the candidates each achieve equal index scores (i.e. R = 50%), the
candidate of the incumbent party is predicted as the winner (i.e., V = 51.3%).
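In code, the fitted vote equation amounts to a one-line function, with the coefficients taken directly from the equation reported above:

```python
# The issue-index vote equation: V = 40.3 + 0.22 * R, where R is the percentage
# of issues favoring the incumbent-party candidate and V is the predicted
# two-party popular vote share of that candidate.

def predict_vote_share(R):
    return 40.3 + 0.22 * R

print(round(predict_vote_share(50), 1))  # 51.3: incumbency advantage at parity
# a 10-point gain in R adds 2.2 points of vote share:
print(round(predict_vote_share(60) - predict_vote_share(50), 1))  # 2.2
```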
Testing the predictive validity of issue-indices
As noted above, the issue-indices are designed to improve decision-making. As a test of this, we looked at
predictive validity by comparing the forecasts to those from other methods.
The issue-indices provide two ways for predicting the outcome of elections: (1) a simple heuristic
to predict the popular vote winner and (2) the issue-index model to predict both the popular vote winner
and the two-party popular vote shares.
Predicting the popular vote winner
For each election year, the forecast origin started 150 days prior to Election Day and moved forward one day at a time until Election Eve. For elections that occurred from 1980 to 2008, forecasts could be
calculated for each of the 150 days prior to Election Day. For the elections in 1972 and 1976, the first
issue poll was released 88 and 124 days prior to Election Day, respectively. Thus, a sample of 1,412
forecasts was collected over all ten elections.
As shown in Table 1, the number of polls was quite small from 1972 through 1988, ranging from 5 in 1976 to 34 in 1984. From 1992 on, each election encompassed at least 60 polls, so we expected the index method to be relatively more accurate during that period.
The performance of issue-indices for predicting the winner varied as new polls became available
during the forecasting horizon. The results, reported as the hit rate, are shown in Table 3. The hit rate is
the percentage of forecasts that correctly determined the election winner.
Table 3: Number of daily forecasts and hit rates of the issue-index heuristic, the issue-index model, and the IEM winner-take-all markets
(For each election year, the table lists the number of daily forecasts and the corresponding hit rates of the three approaches; the overall figures are discussed in the text below.)
Benchmark approaches
We compared the hit rates of the two issue-index approaches to the naïve model and to forecasts from the
Iowa Electronic Markets (IEM). The naïve model is a common benchmark in forecasting research that
states that it is not possible to predict the winner. That is, the hit rate of the naïve model equals 50%.
The IEM is a prediction (or betting) market that was first launched in 1988 as a futures market in
which contracts were traded on the popular vote shares achieved by the two major parties. Betting markets
for predicting election outcomes have an interesting history. Rhode & Strumpf (2004) studied historical
betting markets that existed for the 15 presidential elections from 1884 through 1940 and found that these
markets “did a remarkable job forecasting elections in an era before scientific polling” (2004:127). The
reason is that prediction markets are a sophisticated approach for aggregating information and creating
forecasts. The market forecasts reveal the combined judgment of market participants, who bet real money
and therefore, have an incentive to efficiently process relevant information. In comparing IEM vote share
prices with 964 trial-heat polls for the five presidential elections from 1988 to 2004, Berg et al. (2008)
found that IEM market forecasts were closer to the actual election results 74% of the time. However,
Erikson and Wlezien (2008) found polls to be more accurate than IEM forecasts when the polls' pre-election lead times were discounted by regressing the vote on polls taken at comparable times across elections.
Since 1992, the IEM has also operated a winner-take-all market that provides a forecast of which
candidate will win the popular vote. Table 3 shows the hit rates of the IEM winner-take-all markets for the last 150 days prior to Election Day, except for 1992, when the winner-take-all market was only available from 116 days prior to Election Day.
Accuracy of the issue-index heuristic
The issue-index heuristic correctly predicted the winner 74% of the time and therefore performed well in comparison to the naïve model. This performance was achieved without using information from previous
election years. As expected, the heuristic was more accurate since 1992 when the number of issues was
much larger; the hit rate was 71% for the 1972-88 elections and 76% for the 1992-2008 elections.
However, the heuristic was less accurate than the IEM forecasts, which achieved a hit rate of 83% across
the five elections from 1992 to 2008.
Accuracy of the issue-index model
The forecasts of the issue-index model were calculated through N-1 cross-validation, which is a standard
procedure in forecasting research for measuring out-of-sample accuracy. This means that for each
election, we dropped the observation from that year, fitted the model based on the remaining data, and
then forecasted the omitted observation.
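The N-1 (leave-one-out) procedure can be sketched as follows; the index scores and vote shares below are hypothetical placeholders, not the paper's data:

```python
# Leave-one-out (N-1) cross-validation for the issue-index regression:
# drop one election, refit the model on the rest, forecast the omitted year.
import numpy as np

R = np.array([45.0, 58.0, 40.0, 62.0, 51.0])  # hypothetical relative index scores (%)
V = np.array([48.5, 54.0, 46.0, 56.5, 51.0])  # hypothetical two-party vote shares (%)

errors = []
for i in range(len(R)):
    keep = np.arange(len(R)) != i
    slope, intercept = np.polyfit(R[keep], V[keep], 1)  # refit without year i
    forecast = intercept + slope * R[i]                 # out-of-sample forecast
    errors.append(abs(forecast - V[i]))

mae = float(np.mean(errors))  # out-of-sample mean absolute error
print(round(mae, 2))
```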
Across the ten elections, the resulting models correctly predicted the winner 81% of the time.
This was substantially better than the naïve forecast of 50%. The model was also more accurate in the
most recent five elections at 90%, up from 71% in the five elections from 1972 to 1988. In addition, the
issue-index model’s hit rate of 90% from 1992 to 2008 was slightly better than that of the IEM (83%).
Predicting the incumbent’s two-party vote share
The vote equation of the issue-index model allows for forecasting two-party popular vote shares. To
compare the model’s accuracy to other models, it is necessary to define a certain lead-time for when the
forecast is generated. Most of the traditional forecasting models produce their forecasts around Labor Day,
about eight to nine weeks prior to Election Day in the respective election year. Therefore, Tables 4 and 5
report the forecasts of the issue-index model calculated at about nine weeks, or 63 days, before Election
Day. Note that such a one-off comparison conceals an important advantage of the issue-index model,
which is the ability to continuously incorporate new information and thus, track campaigns.
The out-of-sample forecasts presented in Table 4 were again calculated through N-1 cross-
validation. The ex ante forecasts for the last three elections from 2000 to 2008 presented in Table 5 were
generated by successive updating. That is, only data that would have been available at the time of the
respective elections were used to build the model: to predict the 2008 election, data on the nine elections
from 1972 to 2004 were used; for the 2004 election, data on the eight elections from 1972 to 2000 were
used; and, for the 2000 election, data on the seven elections from 1972 to 1996 were used.
Benchmark approaches
We used the naïve model, the big issue model, eight established econometric models, and the IEM vote
share markets as benchmarks for assessing the accuracy of the issue-index model’s vote share forecasts.
Again, the naïve model assumes that it is not possible to make a forecast and simply predicts a
fifty-fifty split of the two-party popular vote.
The big-issue model (Graefe & Armstrong, 2012) predicts the election outcome based on how
voters expect the candidates to deal with the most important issue facing the country. This model was the
first attempt to develop a policy model for issues. The big-issue model provided a good initial rule of
thumb and fast advice on which issues candidates should stress in their campaigns. Since it is based on the
take-the-best heuristic (Gigerenzer & Goldstein, 1996), it works best if the most important variable
dominates any combination of less important variables (Martignon & Hoffrage, 2002). Since, for
elections, it might seldom be the case that one issue is more important than all other issues together, we
expected further improvement from using an index of issues.
Most econometric models are able to predict the correct election winner far in advance. However, the models' individual track records in predicting vote shares are mixed, and forecast errors for a single model can vary widely across elections. Most of the established econometric models use between two and five predictor variables, usually including a measure of the state of the economy and some measure of the incumbent's popularity. For an overview of the predictor variables used in most of the
econometric models, see Jones and Cuzán (2008).
The IEM vote-share markets provide daily updated forecasts of the two-party popular vote shares
achieved by the candidates. Forecasts from these markets are available from 1988.
Out-of-sample accuracy of the issue-index model
As shown in Table 4, the issue-index model’s 63-day ahead out-of-sample forecasts correctly predicted 8
out of the 10 elections and yielded a mean absolute error (MAE) of 3.5 percentage points. This is an error
reduction of 22% compared to the naïve model (MAE: 4.5 percentage points). As expected, with an MAE of 1.7, the model was more accurate for the five elections since 1992, compared to an MAE of 5.3 for the five elections from 1972 to 1988.
The out-of-sample forecasts of the issue-index model were also compared to the IEM’s vote-share
markets for the six elections from 1988 to 2008. Across the 150 days prior to Election Day, the MAE over
the six elections from 1988 to 2008 was similar: 2.3 percentage points. However, as shown in Figure 1,
there were differences between the two methods over time. While the issue-index model was more accurate
early in the forecasting horizon, the IEM was superior closer to Election Day. The results suggest that
issue-indices are particularly helpful for long-term forecasting.
Table 4: Out-of-sample forecasts of the issue-index model and actual two-party vote shares for the candidate of the incumbent party, 63 days prior to Election Day
(For each election, the table lists the model forecast, the actual vote share, and the absolute error; elections for which the model predicted the wrong winner are marked with an asterisk.)
Figure 1: MAE of the issue-index model and the IEM vote-share markets (1988-2008), plotted against the number of days to Election Day
Ex ante accuracy of the issue-index model
The critical test is how well the models forecast prospectively (that is, for years not included in the
estimation sample). Table 5 provides the errors for the ex ante forecasts of the issue-index model, the big-
issue model, eight econometric models, the IEM vote-share markets, and the naïve model. The forecasts
of most econometric models were published in PS: Political Science and Politics, 34(1), 37(4), and 41(4).
The forecasts for Fair’s model were obtained from his website ( The
forecasts of the big-issue model were derived from Graefe & Armstrong (2012).
Table 5: Issue-index model vs. benchmarks: Absolute errors of ex ante forecasts
(For each forecast model, the table lists its approximate forecast date and its forecast errors: the issue-index model (early September, 63 days prior to Election Day); eight econometric models with forecast dates ranging from May/June to early September, among them Lewis-Beck & Tien and Wlezien & Erikson (both late August); the big-issue model (early September, 63 days prior to Election Day); the IEM vote-share market (early September, 63 days prior to Election Day); and the naïve model (no lead time). The error figures are summarized in the text below.)
On average, the issue-index model's early September forecasts yielded a lower MAE than each of the eight econometric models; the MAE was only about half as large as the average error of the econometric models and the big-issue model. The IEM forecasts were most accurate, missing actual vote shares on average by only 1 percentage point. Ironically, due to the closeness of the past three elections (particularly in 2000 and 2004), the naïve model was the second most accurate model in that time period.
Discussion
The issue-index model continues the stream of research on using the index method for forecasting elections by incorporating information about how voters perceive the candidates to handle the issues considered important in a particular campaign. Issues play a fundamental role in election campaigns.
Campaign strategists try to make their candidate look competent on issues that are perceived as important
and run campaigns to emphasize this issue advantage. If crime handling differentiates the candidates,
crime will be emphasized by a campaign. In turn, the issue of crime will become more salient to the
electorate. In recent years, an increasing number of polls have been directed at exploring voters’
perceptions about issues, and the Internet has made this information more readily available.
Traditional election forecasting models regard a U.S. presidential election as a referendum on the
incumbent president’s performance. That is, if voters are happy with the incumbent’s performance, they
will vote for the incumbent party’s candidate; otherwise, they will vote for the candidate of the opposing
party. Most existing models use a measure of the economy (e.g., GDP growth), usually along with one or
more political measures. Thereby, the incumbent president’s popularity has been identified as the single
best predictor variable for forecasting election outcomes (Lewis-Beck & Rice, 1992). A common
explanation for the success of this measure is that it can be considered as a proxy for how the president is
handling both economic and noneconomic issues.
Measuring issue handling performance: Issue-indices vs. incumbent popularity
However, there are some disadvantages with using the incumbent’s popularity as a proxy for issue
handling performance. (1) Incumbent popularity is a retrospective measure as voters are asked to assess
how the president is (or has been) handling his job. Yet, U.S. presidential voters not only evaluate past
performance, but they also look at how well off they will be in the future. (2) The measure focuses solely
on the president’s performance and ignores the performance of the challenger. For example, there might
be situations in which voters are satisfied with how the president is handling the issues, but still think that
the challenger could do an even better job. Vice versa, voters might rate the incumbent president’s job
performance as poor, but expect the alternative to be even worse. Models based on the incumbent
president’s popularity cannot capture such information. In addition, the validity of the incumbent’s
performance measure is questionable in open-seat elections without the incumbent president running. (3)
The measure does not provide insight into which issues favor the president. Therefore, strategists cannot use it for campaign planning purposes.
The issue-index addresses these limitations of the incumbent's popularity, as it allows for a
prospective assessment of the relative performance of the candidates of the two major parties on each
individual issue. At the same time, issue-indices represent aggregate voter decision-making and do not allow for
drawing inferences about individual voter decision-making. As noted earlier, Redlawsk (2004) found that the
strategies voters use to decide for whom to vote range from simple single-issue voting to a complex
evaluation of the candidates' performance on all available issues. Our approach cannot shed further light
on this question, as the same election result could be the outcome of different individual voting strategies.
Future research should evaluate whether, and how, individual voters (should) use an issue-index when
deciding for whom to vote.
Issue-indices as decision aids
While the issue-index model provides limited insight into individual voter decision-making, it can
provide advice to political decision makers. According to the issue-index model, the election outcome is
the result of a referendum on the issue-handling reputation of the candidates. A candidate's issue-handling
reputation is influenced by issue ownership of the candidate's party (Petrocik, 1996). In addition, it might
be influenced by relative candidate evaluations: the candidate who is favored on one issue might also be
favored (or less repudiated) on issues that normally favor the candidate of the other party. For example, in
the 1992 election, Clinton was viewed as better than Bush on almost all issues, including some on which
Democrats almost never fare well, such as dealing with crime.
Figure 2 shows how voters perceived the candidates’ issue-handling competence for the elections
from 1972 to 2008. Democrats were consistently seen as better at dealing with social welfare issues.
Except in 1980, 1996, and 2000, voters favored the Republican candidate on foreign affairs and defense
issues. Perceptions of economic and social issues were mixed.
Figure 2: Perceived issue-handling competence of candidates (1972-2008)
Note that, as the number of issues increased for more recent elections, differences between the candidates
became clearer. In the last two elections, Democrats were favored on economic and welfare issues. The
Republicans regained and kept their advantage on foreign policy and defense in a post-9/11 world. In
2008, voter support on social and other issues switched from the Republicans to the Democrats.
Candidates might be able to influence their issue-handling reputation through effective campaigning. If
the candidates' issue-handling reputations on a certain problem are about equal, a candidate could
increase his marketing effort to gain ownership of that issue. Candidates could also raise and promote issues
that favor them but have not yet received public attention. Finally, candidates could adopt
new or revised positions and diverge from traditional party views. By emphasizing such changes, a
candidate might be able to change his issue-handling reputation as perceived by voters. Issue-indices can
help candidates identify the issues to focus on in their campaigns.
Although the issue-index model implies that candidates can increase their appeal to voters by
effective campaigning, the common view in political science is that campaigns have only a limited impact
on the election outcome. The main reason for this is the strong degree of partisanship among U.S. voters.
As noted by Campbell (1996:423), “no matter how bad the campaign goes for a party, it can count on
receiving about 40% of the two-party vote; no matter how well a campaign goes for a party, it will receive
no more than about 60% of the two-party vote." With an intercept of about 40.3% and a slope coefficient of
0.22, the vote equation of the issue-index model is consistent with this view. Imagine a situation in which
the incumbent's campaign completely fails and voters favor the challenger on all issues (i.e., a relative
index score of R=0). In this case, the issue-index model would predict the incumbent to gain 40.3% of the
popular vote. Conversely, if voters favored the incumbent on every single issue (R=100), the model would
predict the incumbent to receive at most 62.3% of the popular vote.
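As a minimal sketch, the vote equation can be written out directly. The intercept (40.3) and slope (0.22) are the coefficients of the issue-index model reported above; the function name `forecast_share` is chosen here for illustration:

```python
def forecast_share(relative_index_score: float) -> float:
    """Predict the incumbent's two-party popular vote share (in percent)
    from the relative issue-index score R (0 to 100), using the intercept
    and slope of the issue-index model."""
    return 40.3 + 0.22 * relative_index_score

# The two boundary cases discussed in the text:
print(forecast_share(0))    # challenger favored on all issues -> 40.3
print(forecast_share(100))  # incumbent favored on all issues  -> 62.3
```

These bounds mirror Campbell's observation: even a campaign that loses (or wins) every issue leaves the vote share within roughly the 40-60% band.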
Benefits and limitations of issue-indices
Issue-indices are simple to use and easy to understand. By using a simple heuristic, issue-indices allow for
the prediction of the popular vote winner without a need for historical data analysis. In addition, issue-
index scores can be used in combination with simple linear regression to allow for quantitative
predictions. However, a disadvantage is the cost of summarizing knowledge to both develop the model
and update it with new information.
Unfortunately, the index method’s simplicity may be its biggest drawback. Summarizing evidence
from the literature, Hogarth (2012) showed that people exhibit a general resistance to simple solutions.
Although there is evidence that simple models can outperform more complicated ones, there is a belief
that complex methods are necessary to solve complex problems.
Thus, it is not surprising that the index method has faced some skepticism. An early example is
Burgess (1939), who described the use of the index method for predicting the success of paroling
individuals from prison. Based on a list of 25 factors, each rated either "favorable" (+1) or
"unfavorable" (0), an index score was calculated for each individual to determine the chance of successful
parole. This approach was questioned because Burgess (1939) did not assess the relative importance of the
different variables, and no consideration was given to their magnitude (i.e., how favorable the ratings were).
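The mechanics of such an experience table are simple. The sketch below uses hypothetical factor names and ratings (the actual Burgess factors are not reproduced here): each factor contributes +1 if rated favorable and 0 otherwise, and the index score is the plain, unit-weighted sum:

```python
# Hypothetical ratings for one individual on a Burgess-style experience
# table: True = "favorable" (+1), False = "unfavorable" (0).
ratings = {
    "no_prior_record": True,
    "steady_work_history": True,
    "first_offense_as_adult": False,
    "stable_family_ties": True,
    # Burgess's application used a list of 25 such factors.
}

# Unit weighting: every factor counts equally; the index score is simply
# the number of favorable ratings, with no importance weights and no
# measurement of how favorable each rating is.
index_score = sum(ratings.values())
print(index_score)  # 3
```

The issue-index works the same way, with issues in place of parole factors and the two candidates compared on each.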
The issue-index might face similar reservations, as it does not (a) weight the importance of issues
or (b) measure by how much voters favor one candidate over the other on a particular issue. However, the
issue-index deliberately omits such information for a number of reasons.
First, it is not clear that including it would increase forecast accuracy. The empirical evidence summarized
earlier does not support the use of differential weights over unit weights for many problems in the social
sciences. Also, when addressing the concerns raised about the approach used by Burgess (1939), Gough (1962)
did not obtain more accurate parole predictions.
Second, there is reason to believe that the relative importance of issues might not matter much.
Based on results from a 1985 survey of U.S. voters, Petrocik (1996:830) concluded that for many voters
“almost any problem is important.” In this survey, respondents (divided into Republican and Democratic
identifiers) had to rate the importance of 18 issues on a scale from zero (least important) to ten (most
important). The average score was 7.8. Of all 36 ratings, 29 achieved a mean score of seven or higher.
Third, weighting the importance of issues and measuring the magnitude of candidate evaluations
would increase the model's complexity, particularly in terms of collecting and analyzing data on issue
importance. Furthermore, the importance weights may vary over time.
We hope that other researchers will address these issues. To support them in this endeavor, we
have made our data publicly available.
Conclusions
The index method was applied to the ten U.S. presidential elections from 1972 to 2008 in order to provide
a forecast based on voters' perceptions regarding how the candidates would handle the issues. In using a
simple heuristic, the approach correctly predicted the popular vote winner in 9 of 10 elections. By tracking
issue polls that are now widely available, candidates can use this information to decide which issues they
should stress in their campaigns.
By using a simple linear regression of the incumbent’s relative index scores against the actual
votes, forecasts of the popular two-party vote shares were obtained. The resulting model provided ex ante
forecasts that were competitive with forecasts from eight econometric models and more accurate than the
big-issue model for the three elections from 2000 to 2008. Across the last five elections from 1992 to
2008, the issue-index model provided out-of-sample forecasts that yielded a higher hit rate than, and a
similar MAE to, the Iowa Electronic Markets.
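The two accuracy measures used in this comparison can be made concrete. In the sketch below, the vote shares are made-up numbers for illustration; the hit rate is the share of elections in which the forecast picked the correct popular vote winner (both shares on the same side of 50%), and the MAE is the mean absolute error of the predicted two-party vote shares:

```python
def hit_rate(predicted, actual):
    """Share of elections in which the forecast picked the correct
    popular vote winner (both shares on the same side of 50%)."""
    hits = sum((p > 50) == (a > 50) for p, a in zip(predicted, actual))
    return hits / len(predicted)

def mean_absolute_error(predicted, actual):
    """Mean absolute error of the predicted two-party vote shares,
    in percentage points."""
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(predicted)

# Illustrative (made-up) incumbent-party vote shares for five elections:
predicted = [53.1, 48.0, 55.2, 49.5, 51.0]
actual    = [54.7, 46.5, 51.1, 50.3, 52.9]

print(hit_rate(predicted, actual))             # 0.8 (one winner missed)
print(mean_absolute_error(predicted, actual))  # average error in points
```

A forecast can thus have a high hit rate but a mediocre MAE, or vice versa, which is why both measures are reported.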
Index models are expected to be useful for other problems involving a large number of variables,
small data sets, and a good knowledge base, conditions that are common for many prediction problems in
the social sciences. Examples include selection problems such as deciding which CEO a company should
hire, where to locate a retail store, which product to develop, or whom to marry.
Armstrong, J. S. (1985). Long-range forecasting: From crystal ball to computer, New York: John Wiley.
Armstrong, J. S. & Graefe, A. (2011). Predicting elections from biographical information about
candidates, Journal of Business Research, 64, 699-706. DOI:10.1016/j.jbusres.2010.08.005.
Berg, J. E., Nelson, F. D. & Rietz, T. A. (2008). Prediction market accuracy in the long run. International
Journal of Forecasting, 24, 285-300. DOI:10.1016/j.ijforecast.2008.03.007
Burgess, E. W. (1939). Predicting success or failure in marriage. New York: Prentice-Hall.
Campbell, J. E. (1996). Polls and votes: The trial-heat presidential election forecasting model, certainty,
and political campaigns, American Politics Quarterly, 24, 408-433.
Cuzán, A. G. & Bundrick, C. M. (2009). Predicting presidential elections with equally-weighted
regressors in Fair's equation and the fiscal model, Political Analysis, 17, 333-340.
Cuzán, A. G. & Heggen, R. J. (1984). A fiscal model of presidential elections in the United States, 1880-
1980, Presidential Studies Quarterly, 14, 98-108.
Czerlinski, J., Gigerenzer, G. & Goldstein, D. G. (1999). How good are simple heuristics? In: G.
Gigerenzer & P. M. Todd (Eds.), Simple heuristics that make us smart. New York: Oxford
University Press, pp. 97-118.
Davis-Stober, C. P., Dana, J. & Budescu, D. V. (2010). A constrained linear estimator for multiple
regression, Psychometrika, 75, 521-541. DOI: 10.1007/S11336-010-9162-8
Dawes, R.M. (1979). The robust beauty of improper linear models. American Psychologist, 34, 571-582.
Dawes, R.M., & Corrigan, B. (1974). Linear models in decision making. Psychological Bulletin, 81, 95-
Einhorn, H. J. & Hogarth, R. M. (1975). Unit weighting schemes for decision-making, Organizational
Behavior & Human Performance, 13, 171-192.
Erikson, R. S. & Wlezien, C. (2008). Are political markets really superior to polls as election predictors?
Public Opinion Quarterly, 72, 190-215.
Fair, R. C. (1978). The effect of economic events on votes for president, Review of Economics and
Statistics, 60, 159-173.
Gigerenzer, G., & Goldstein, D. G. (1996). Reasoning the fast and frugal way: Models of bounded
rationality. Psychological Review, 103, 650-669. DOI: 10.1037/0033-295X.103.4.650
Gough, H. G. (1962). Clinical versus statistical prediction in psychology. In: L. Postman (Ed.),
Psychology in the making. New York: Knopf, pp. 526-584.
Graefe, A. & Armstrong, J. S. (2012). Predicting elections from the most important issue: A test of the
take-the-best heuristic, Journal of Behavioral Decision Making, 25, 41-48. DOI: 10.1002/bdm.710
Graefe, A. & Armstrong, J. S. (2011). Conditions under which index models are useful: Reply to bio-
index commentaries, Journal of Business Research, 64, 693-695. DOI:10.1016/j.jbusres.2010.08.008
Hogarth, R. M. (2012). When simple is hard to accept. In: P. M. Todd, G. Gigerenzer & the ABC
Research Group (Eds.), Ecological rationality: Intelligence in the world. Oxford: Oxford University
Press, pp. 61-79.
Jones, R. J. & Cuzán, A. G. (2008). Forecasting U.S. Presidential Elections: A brief review, Foresight,
Issue 10, 29-34.
Lewis-Beck, M. S. & Rice, T. W. (1992). Forecasting Elections, Washington D.C.: CQ Press.
Lichtman, A. J. (2008). The keys to the White House: An index forecast for 2008, International Journal of
Forecasting, 24, 299-307. DOI:10.1016/j.ijforecast.2008.02.004
Martignon, L. & Hoffrage, U. (2002). Fast, frugal, and fit: Simple heuristics for paired comparison,
Theory and Decision, 52, 29-71. DOI: 10.1023/A:1015516217425
Petrocik, J. R. (1996). Issue ownership in presidential elections, with a 1980 case study, American Journal
of Political Science, 40, 825.
Redlawsk, D. P. (2004). What voters do: Information search during election campaigns, Political
Psychology, 25, 595-610. DOI: 10.1111/j.1467-9221.2004.00389.x
Rhode, P. W. & Strumpf, K. S. (2004). Historic presidential betting markets, Journal of Economic
Perspectives, 18, 127-142. DOI:10.1257/0895330041371277
Schmidt, F. L. (1971). The relative efficiency of regression and simple unit predictor weights in applied
differential psychology, Educational and Psychological Measurement, 31, 699-714.