Election Forecasting, Scientific
Approaches
Michael S. Lewis-Beck¹ and Mary Stegmaier²
¹Department of Political Science, University of
Iowa, Iowa City, IA, USA
²Truman School of Public Affairs, University of
Missouri, Columbia, MO, USA
Synonyms
Election prediction; Forecasting elections;
Forecasting models; Political prediction markets;
Political stock markets; Trial heat polls
Glossary
Forecasting models Econometric models that
predict election results based on statistical
relationships between macroeconomic and/or
macro-political factors and election results.
Political stock markets These futures or betting
markets allow investors to invest money
through the purchase and sale of shares in
candidates or political parties. The election
results determine the payoff for the investors.
Prescientific election forecasts Election pre-
dictions based on intuition, bellwethers, and
other non-replicable procedures.
Scientific election forecasts Approaches that
use replicable, systematic, quantitative
procedures to predict elections. These
include polling, models, and stock markets.
Trial heat polls Public opinion polls that mea-
sure the vote intention of the public on a given
day. The poll assesses the position of the
competing candidates (or parties) relative to
each other.
Election Forecasting
In democratic nations, elections are premier polit-
ical events. They require citizens to choose their
leaders and leaders to be held accountable. The
winners of an electoral contest are allowed to
wield power, sometimes great power. Therefore,
citizens follow campaigns with interest and are
often eager to know who will win. In other words,
they would like a forecast of the outcome. This
impulse to election forecasting has been around
for a long time. To forecast an election means
to declare its outcome before it happens: for
example, to say in advance which candidate will
win the race.
There are many methods of forecasting elec-
tions, and they can be divided into two groups:
prescientific and scientific. While this distinction
may seem clear, the differences are not always
easy to spot. Below, we emphasize the scientific
approach to election forecasting, tracing its his-
torical development in the study of United States
elections. Arriving at the contemporary period,
we observe three leading approaches – polls,
models, and markets. We go on to highlight the
modeling approach, which lends itself most fully
to systematic social science inquiry. We explore
examples, generic and specific, from that litera-
ture. As a basis of comparison, we examine as
well forecasting of elections from other countries.
While much has been done in the field, much
remains to be done. Therefore, we end with a
discussion of methodological issues to be solved.
Prescientific Approaches
Prescientific methods are usually based on
appeals to intuition (e.g., “I just feel he will
lose”), to authority (e.g., “Her campaign manager
said she had it in the bag”), or special knowledge
(e.g., “I have read all the pamphlets and it’s
going to be Smith.”). It is not that forecasts
from these methods are surely wrong. Indeed,
they may be right, but they would be right
by chance. Further, and most critically, they
represent methods that cannot be replicated. In
other words, another person cannot reproduce the
prescientific forecaster’s steps and necessarily
arrive at the same conclusion.
A prescientific approach that has “pseudo-
scientific” trappings is that of the “bellwether.”
The most famous example comes from the phrase
“As Maine goes, so goes the nation,” a rule widely
used to forecast presidential elections in the early
twentieth century. It was based on the fact that
from 1860 to 1932, the state sided with the winner
in 16 of 19 presidential contests. Unfortunately,
the rule broke down, as bellwethers do. In the
next 14 elections, Maine “got it right” only 7
times. Despite the fact that bellwethers rest on
coincidence, rather than causal connection, they
continue to be popular. Witness, before each
presidential election, the search of journalists for
that one state that will predict the whole contest.
Interestingly, that choice of states changes. At
different times, it has been New York, Califor-
nia, Illinois, New Mexico, Delaware, or Missouri
(Lewis-Beck and Rice 1992, Chap. 1). Currently,
the favorite among many news commentators
seems to be Ohio.
Scientific Approaches: Polling
Polling has been used to assess public opinion
since ancient times when, for example, in the
Greek polis strands of straw might have been
distributed, and citizens asked to cast them. In the
United States, “straw” polls were run early in the
twentieth century by newspapers and magazines
anxious to gauge support for different political
candidates. The most notorious was that of the
Literary Digest, in 1936, aiming to assess the
comparative presidential strength of Franklin
Roosevelt and Alf Landon (See the review in
Squire 1988). Millions of readers responded
to their mail-out ballot, showing Landon in
a landslide, 60–37%. In fact, Roosevelt won
handily. What went wrong with the Literary
Digest poll? The essential error was that it
did not compose a representative sample of
voters. No sample, no matter how big, can
overcome a mistaken sampling strategy, such
as that followed here. Specifically, they sampled
only readers of the magazine who, it has been
demonstrated, overrepresented the better-off in
America at that time (owners of automobiles
and telephones, who were more likely to vote
Republican).
Since then, beginning with the Gallup
organization, polls have become more and more
scientific. When is a poll scientific? When the
target population is sampled by probability
methods, ensuring that every respondent has a
known nonzero probability of selection. Properly
done, a poll can yield a precise estimate of
opinion, within known limits of error. For
example, with a probability sample of 1,500
respondents, we can say, with 95% confidence,
that a candidate vote estimate of 54% is accurate,
plus or minus a sampling error of about 3%.
That is, the true value of the percentage in the
population is likely in the interval {51, 57}.
In other words, we are 95% confident the
candidate will win. Of course there are many
other considerations, besides point estimate
accuracy, that must be taken into account in
estimating a vote share. For example, how was
the vote question worded, when was the poll
taken, was the interview face-to-face or over
the telephone? These, and similar conditions,
can affect the accuracy of a poll (For a useful
current discussion of sampling error in polling,
see Weisberg 2005).
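To make the sampling-error arithmetic above concrete, the short sketch below computes the margin of error for a proportion under a simple-random-sample normal approximation. The figures (54% support, 1,500 respondents) come from the example above; the 1.96 multiplier for 95% confidence and the function name are our own illustrative choices, not anything specified in this entry.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a proportion p from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Figures from the example above: 54% support in a probability sample of 1,500 respondents.
p, n = 0.54, 1500
moe = margin_of_error(p, n)
print(f"Point estimate {p:.0%}, margin of error +/- {moe:.1%}")     # about +/- 2.5%, i.e. roughly 3%
print(f"Approximate 95% interval: ({p - moe:.1%}, {p + moe:.1%})")  # roughly (51%, 57%)
```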
George Gallup began his firm in the 1930s,
and it remains an industry leader. During a
contemporary campaign, it surveys frequently,
asking respondents their vote intention in the
upcoming presidential election. (A polling
approach to forecasting not considered here relies
on election result expectation, rather than vote
intention. This is known as “citizen forecasting”
(Lewis-Beck and Stegmaier 2011)). The Gallup
lynchpin presidential poll occurs a day or two
before the November election date. The tradition
of this Final Gallup Pre-Election Survey began
in 1948. As an example, take these results from
November 1, 1980: Reagan = 47, Carter = 44,
Undecided = 7. In that race, Reagan did win.
However, from a forecasting perspective, that win
was not easy to foresee because of sampling error
(Assuming a probability sample of N = 1,500,
the 95% confidence interval estimates for
Reagan and Carter, respectively, are {44, 50} and
{41, 47}). In other words, there was a good
chance, prospectively, that Carter could win.
Ignoring the probability estimates, and simply
looking at the point estimate, the Final Gallup
Poll recorded an average error for the winning
candidate of 2.1 percentage points, across its
twentieth-century trials (1948–2000).
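As a minimal illustration of why the 1980 race was “not easy to foresee,” the sketch below recomputes the two candidates' approximate 95% intervals under the same N = 1,500 simple-random-sample assumption and checks whether they overlap; treating the two estimates this way is our simplifying assumption for illustration.

```python
import math

def interval_95(p: float, n: int = 1500, z: float = 1.96) -> tuple:
    """Approximate 95% confidence interval for a vote-intention share p."""
    moe = z * math.sqrt(p * (1 - p) / n)
    return p - moe, p + moe

reagan_low, reagan_high = interval_95(0.47)   # roughly (44%, 50%), as reported above
carter_low, carter_high = interval_95(0.44)   # roughly (41%, 47%)

print(f"Reagan: ({reagan_low:.1%}, {reagan_high:.1%})  Carter: ({carter_low:.1%}, {carter_high:.1%})")
print("Intervals overlap, so the race was too close to call:", reagan_low <= carter_high)
```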
In addition to the problem of sampling error,
there is the issue of when to stop polling, as voter
opinion can evolve until the last day. For these
reasons, Gallup officials stress that the poll results
for vote intention on a given day are merely
“snapshots in time,” indicating what might hap-
pen that day, but not necessarily tomorrow. More-
over, besides the change in vote intention from
day to day within one polling organization, there
is the change in vote intention across polling
firms. These latter changes are sometimes re-
ferred to as “house effects” (Pickup and Johnston
2008). By now, there are many firms, in addition
to Gallup, competing in the world of political
polling, such as Roper, Harris, Pew, the New York
Times, and news network polls, to name only
a few.
A difficulty for the forecaster involves know-
ing which from the myriad of polls to watch.
A solution some follow amounts to averaging the
several polls on the same day (week or month).
Several sources are available to aid in this, such as
the blogs Polly Vote or Real Clear Politics. Averag-
ing, though, might mean you are mixing the bad
with the good. Even if one decides on which poll
(or set of polls) to follow, there is the nagging
question of lead time. For a forecaster, lead is
necessary. That is, the election must be called
before the election, well before if possible. As the
forecast is made closer and closer to the election
date, it becomes less and less valuable. Indeed, if
it is simply made the day before the election, it
becomes trivial. The lead time issue is perhaps
the Achilles' heel of polling when used as a
forecasting device. To achieve a desirable amount
of lead, it may become necessary to turn to other
approaches.
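A minimal sketch of the averaging strategy just described, combining several same-day trial heat readings into a single estimate. The firm names and numbers are hypothetical placeholders, not real poll results; the median is shown alongside the mean as one simple way of limiting the influence of outlying houses.

```python
from statistics import mean, median

# Hypothetical same-day trial heat readings for one candidate (share of vote intention, in %).
polls = {"Firm A": 51.0, "Firm B": 48.5, "Firm C": 52.0, "Firm D": 49.0}

# A simple average mixes "the bad with the good"; a median is less sensitive to outlying houses.
print("Mean of the polls:  ", round(mean(polls.values()), 1))
print("Median of the polls:", round(median(polls.values()), 1))
```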
Scientific Approaches: Markets
Polling remains the leading instrument for
election forecasting. However, a relatively recent
rival approach comes from markets. The idea
involves trading “stocks” for different candidates
in an election. The first effort, initially called
the Iowa Political Stock Market and now named
the Iowa Electronic Market (IEM), works like
this. Registered with the Securities and Exchange
Commission, it offers traders the opportunity to
purchase shares representing election candidates,
say X and Y. The purchase value goes up as,
say, the stock of Candidate X is increasingly
purchased. The purchase price is converted into
a probability of winning, and at the close of
each trading day, these prices (probabilities)
are posted. Thus, the IEM offers a daily tally
of how the candidates are faring, according to
the market. These results are released up until
election day. For example, on November 1,
2004, the IEM forecast 50.5% of the popular
vote for Bush, for an error of 0.7 percentage points.
While the IEM has only issued forecasts since
1988, its boosters are bullish on its successes
(Berg et al. 2008).
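The sketch below gives a stylized reading of market closing prices as forecasts. It is a simplified illustration under our own assumptions, not a description of the IEM's actual contract rules: the prices are invented, the normalization to win probabilities is one common convention, and in a vote-share contract the price itself is read as the forecast vote share.

```python
# All prices below are hypothetical closing prices (dollars per share), not IEM data.
closing_prices = {"Candidate X": 0.52, "Candidate Y": 0.47}

# Winner-take-all reading: normalize prices so the implied win probabilities sum to one.
total = sum(closing_prices.values())
win_probabilities = {c: p / total for c, p in closing_prices.items()}

# Vote-share reading: the price itself is taken as the forecast share of the popular vote.
for candidate, price in closing_prices.items():
    print(f"{candidate}: implied win probability {win_probabilities[candidate]:.1%}, "
          f"vote-share reading {price:.1%}")
```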
With the exception of the IEM, political bet-
ting in the USA is illegal. However, these mar-
kets have become quite popular in other democ-
racies. Some examples of other trading mar-
kets include Intrade, the University of British
Columbia Election Stock Market, Betfair Politics
Zone in the UK, and Centrebet in Australia. The
expansion of these markets has enabled scholars
to study their predictive accuracy, especially in
comparison to polls and forecasting models (Berg
et al. 2008; Leigh and Wolfers 2006; Wall et al.
2012).
A big question concerns how these markets
work. One criticism is that they do not issue one
forecast for the election event itself. Rather, they
issue daily forecasts, making the choice of many
forecasts possible. Commonly, the most recent
forecast takes preference, up to and including the
day before the election. Of course, the difficulty
there is the trivial lead time, announcing today
who will be elected tomorrow. A second ques-
tion concerns the composition of the trader pool
which, at least initially for the IEM, was college
students operating under a low investment cap.
A third question involves information traders use
to achieve their level of accuracy. In particular, do
they add any information beyond what is avail-
able in the polls? A growing body of scholarship
suggests that their information simply incorpo-
rates, or at least mimics, the polls. In other words,
the markets contain no independent information
(Erikson and Wlezien 2012).
Scientific Approaches: Models
An earlier, and perhaps stronger, rival to the
markets comes from statistical models. While
economic forecasting models have a venerated
tradition, election forecasting models have come
into their own more recently. Initial work, at least
in political science, began with simple, even
bivariate models, where national election
outcomes are held to be a function of a macro-
political variable, for example, United States
presidential elections predicted from presidential
popularity in Lewis-Beck and Rice (1982)
and Sigelman (1979). Soon, macroeconomic
variables, such as economic growth, were added
to the mix. Below stands an early model of this
form (Lewis-Beck and Rice 1984):
Vote = 33.03 + 0.34 P_{t-6} + 1.42 G_{t-6} + e
             (4.05)          (1.78)
R² = 0.82   SEE = 3.68   N = 9          (1)
where Vote = the percentage popular vote
received by the president's party, P_{t-6} = the
Gallup presidential approval rating 6 months
before the election, G_{t-6} = the GNP growth
rate in the second quarter of the election year,
the figures in parentheses = t-ratios, R² = the
coefficient of multiple determination, SEE = the
standard error of estimate, and N = 9 presidential
elections (1948–1980).
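As a minimal sketch of how an estimated equation like Eq. (1) is used to forecast, the snippet below plugs predictor values into the published coefficients; the approval and growth inputs are hypothetical, not drawn from any actual election year.

```python
def vote_forecast_eq1(approval_t_minus_6: float, gnp_growth_q2: float) -> float:
    """Point forecast from Eq. (1): Vote = 33.03 + 0.34*P(t-6) + 1.42*G(t-6)."""
    return 33.03 + 0.34 * approval_t_minus_6 + 1.42 * gnp_growth_q2

# Hypothetical inputs: 50% approval six months before the election, 2.5% second-quarter GNP growth.
print(f"Forecast vote share for the president's party: {vote_forecast_eq1(50, 2.5):.1f}%")
```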
Soon, many models developed along the lines
of Eq. (1), following what we call the core Politi-
cal Economy model. Stated generically, it reads
Vote = f(government popularity,
         economic performance)          (2)
Models with this structure generally predict pres-
idential outcomes fairly well. (See the reviews
in Lewis-Beck and Rice 1992; Lewis-Beck and
Tien 2011.) They function well because they
are derived from strong theories explaining vote
choice, namely, referendum theory and economic
voting theory (Tufte 1978; Fiorina 1981; Lewis-
Beck and Stegmaier 2007). In these theories, vote
choice occurs on the basis of the incumbent's per-
formance on economic and noneconomic issues.
In general, these models examine aggregate
time series on United States presidential
elections, from World War II to the present.
(Exceptions are Norpoth 2008, who looks at a
national time series extending well back into
the century, and Klarner 2008, who looks at
data disaggregated to the level of the states.)
Almost always, the estimation procedure is
ordinary least squares (OLS) regression, on
a single-equation model. The forecasting of
American presidential elections has become a
thriving industry. For example, in fall 2008, prior
to the Obama-McCain contest, nine different
forecasting teams published their predictions
of the November outcome (Campbell 2008a).
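A minimal sketch of the single-equation OLS setup just described, fit to a handful of past elections. The data array is a hypothetical placeholder, not any team's actual series, and numpy's least-squares routine simply stands in for whatever statistical software the forecasters use.

```python
import numpy as np

# Hypothetical data: one row per past election; columns are approval (P) and growth (G).
X = np.array([[55, 2.0], [40, -1.0], [60, 3.5], [45, 0.5], [38, -2.0], [52, 2.5]], dtype=float)
y = np.array([54.0, 46.5, 57.0, 49.0, 44.0, 52.5])   # incumbent-party vote shares, in %

# Single-equation OLS: prepend an intercept column and solve by least squares.
design = np.column_stack([np.ones(len(y)), X])
coefs, *_ = np.linalg.lstsq(design, y, rcond=None)

# Forecast for a hypothetical upcoming election with P = 48 and G = 1.5.
forecast = coefs @ np.array([1.0, 48.0, 1.5])
print("Estimated coefficients (intercept, P, G):", np.round(coefs, 2))
print(f"Forecast incumbent-party vote share: {forecast:.1f}%")
```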
A not atypical example comes from Abramowitz
(2008) in his Time-for-Change model, which
reads as follows:
Vote = f(presidential popularity, economic
         performance, incumbent terms in office)   (3)
Here are the estimates (OLS) for the model:
Vote = 51.42 + 0.11 P + 0.60 G - 4.27 T + e
        (61.60)   (5.11)   (5.26)   (4.21)
R² = 0.92   Adj. R² = 0.90   SEE = 1.78
N = 15 (elections 1948–2004)          (4)
where Vote = the popular vote share for the pres-
ident's party, P = Gallup presidential approval
in June, G = Gross Domestic Product growth in
the second quarter of the election year, T = the
number of terms the president's party has been
in office, and the statistics are defined as with
Eq. (1).
As we observe, the model displays strong
goodness-of-fit statistics (the R² and the SEE),
which are important for forecasters. Also, note
that the independent variable values have good
lead time, being measured by the summer of the
election year. However, one thing is especially
noteworthy about this model – it adds an insti-
tutional variable, incumbent party terms in office.
The addition of a variable to account for some
institutional characteristic has become standard
in the current generation of models. For exam-
ple, it may be a variable for convention bump,
incumbency, open seats, and partisan alignment.
This suggests a useful revision of the current
generic description of a United States presidential
election forecasting model, as follows:
Vote = f(government popularity, economic
         performance, institutional condition).
How well do these forecasting models
perform? We can look, first, at the current results
from 2008, using the nine models mentioned
above. Did they forecast the outcome? (See
the useful summary from Campbell 2008a,
from whom we borrow below.) First, de rigueur,
they all issued a forecast many days before the
contest, in a range from 57 to 294 days
before (See, respectively, Campbell 2008b and
Norpoth 2008). Second, looking at individual
point estimates, a good deal of variation can be
observed, ranging from a high of 52.7% for McCain
to a low of 41.8%, a 10.9 percentage
point difference in forecasts overall. Third, all
but one agreed that Obama would win.
Indeed, the median forecast, from the group of 9
forecasts, was 48.0% for McCain, quite close to
the actual outcome (i.e., McCain garnered 46.3%
of the two-party popular vote). Collectively, then,
the models perform very well in forecasting the
outcome of this difficult election.
Historically, how well have these models
done? To help answer that question, six leading
forecasting teams graciously provided us with
their data, for the period 1948–2000. (These
contributors are listed, along with a recent
publication, as follows: Abramowitz 2008;
Campbell 2008b; Erikson and Wlezien 2008;
Holbrook 2008; Lewis-Beck and Tien 2008;
Norpoth 2008.) Employing their individual data
sets, respectively, we forecast each election in the
series out-of-sample, using a jackknife technique
(Nadeau and Lewis-Beck 2012). The average
absolute prediction error, across these models
and elections, is only 1.36 percentage points. In other
words, compared to the comparable period for the
Gallup Final Poll, reported above, they exhibit
more accuracy and with a longer lead time to
boot.
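A minimal sketch of the leave-one-out (“jackknife”) evaluation described above: each election is dropped in turn, the model is re-estimated on the remaining elections, and the held-out election is forecast out of sample. The series below is a hypothetical placeholder, not the six teams' data, so the resulting error has no substantive meaning.

```python
import numpy as np

# Hypothetical election series: design matrix (intercept, approval, growth) and vote shares.
X = np.array([[1, 55, 2.0], [1, 40, -1.0], [1, 60, 3.5], [1, 45, 0.5],
              [1, 38, -2.0], [1, 52, 2.5], [1, 48, 1.0], [1, 57, 3.0]], dtype=float)
y = np.array([54.0, 46.5, 57.0, 49.0, 44.0, 52.5, 50.0, 55.5])

errors = []
for i in range(len(y)):
    train = np.arange(len(y)) != i                 # drop election i from the training set
    coefs, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
    held_out_forecast = X[i] @ coefs               # forecast the held-out election
    errors.append(abs(held_out_forecast - y[i]))

print(f"Mean absolute out-of-sample error: {np.mean(errors):.2f} percentage points")
```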
In recent years, forecasters have begun
constructing models for democratic contests
in other countries. Because the contexts of
elections vary widely across countries depending
on the party system and electoral rules, US
models cannot simply be transported abroad.
For example, the French case provides for
two-round presidential elections and multiple
competing parties. Lewis-Beck and Rice (1992)
take these contextual differences into account
as they develop models to forecast French
presidential and National Assembly elections.
In addition to France, election prediction
models in the UK have flourished. A special
issue of Electoral Studies (Gibson and Lewis-
Beck 2011) published models predicting the
May 2010 UK Parliamentary Election, which
provided modellers with a significant challenge:
an election that resulted in a hung parliament.
And more recently, a special issue of the
International Journal of Forecasting (Lewis-
Beck and Bélanger 2012) published models for
countries where election forecasting is a new
enterprise.
“Nowcasting” is a new approach that uses
forecasting models to predict elections as if they
were held now (Lewis-Beck et al. 2011). This
approach first estimates the forecasting model
based on prior elections. Then, the current
value(s) of the predictive variable(s) is plugged
into the forecast equation to estimate the election
result if it were held today. This approach is
useful in countries where election dates are
fixed, as a way of knowing where the parties
stand, but even more so in countries like the UK,
where election timing is endogenous. If the lag
on the predictor variable is 3 months, one can
use the current value to predict an election result
3 months from now. And if the election is upon
us, then the lagged value can be used to predict
the election now.
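A minimal sketch of the nowcasting logic just described: the coefficients are assumed to come from a model already estimated on prior elections, and the current readings of the lagged predictors are plugged in to give the result “if the election were held today.” All numbers are hypothetical.

```python
# Coefficients assumed to come from a model already estimated on prior elections (hypothetical).
intercept, b_popularity, b_growth = 36.0, 0.25, 1.1

# Current readings of the lagged predictors, e.g. the latest available quarter.
current_popularity = 44.0   # government approval, in %
current_growth = 0.8        # economic growth, in %

nowcast = intercept + b_popularity * current_popularity + b_growth * current_growth
print(f"Nowcast: governing party at {nowcast:.1f}% if the election were held today")
```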
Methodological Issues
Election forecasters face a number of method-
ological issues. Foremost, they must satisfy the
accuracy criterion. In elections, that means the
closer the point estimate, usually of a vote share,
the better. Each approach has different stresses.
For the pollster forecaster, there is the sample
quality. Which polls should be used? All or only
some? Are certain houses to be excluded? Certain
outliers excluded? Is an averaging strategy to be
pursued? Over what time period? For the mar-
keter forecaster, there is trader quality. Are my
traders representative in relevant ways? Are the
price incentives enough to motivate them prop-
erly? Do they offer anything beyond the polls?
For the modeler, there is the specification quality
(Lewis-Beck and Tien 2008). Are the right vari-
ables in the model? Should I stress prediction at
the sacrifice of explanation? Can I get campaign
effects included? Should I go to a multi-equation
format?
Even assuming quality control, the question of
lead time nags. That is, the greater the distance
in days, from making the forecast to the date
of the election, the more useful and impres-
sive the forecast will be. We are captivated by
astronomers, who tell us years in advance when
and where Halley’s Comet will cross the sky.
More prosaically, for elections, a forecast made
3 months before resonates more fully than one
made just 3 days before. However, a tension
arises between the goals of lead and accuracy:
more lead usually means a more valuable fore-
cast, but a less accurate one. Yet this is not
always so. For example, polling data very close to
the election are often less precise, implying that
a forecaster can fall victim to “nearsightedness.”
(As has been shown, electorates in the final
weeks of a campaign can show considerable
volatility; see Lewis-Beck 2001.)
Parsimony represents another important crite-
rion. When the modeler loads many variables into
a model, in a blind effort to boost the R-squared,
that equation runs the risk of crashing on trial.
Instead, parsimony dictates a few independent
variables, selected on the basis of theory. The
imperative for the modeler is double here since
the sample sizes are so small. While the samples
are not generally small for marketers or pollsters,
they still face a parsimony issue in that their daily
numbers give them too many forecasts to choose
from. One solution here has been to combine
forecasts, say weekly or monthly, perhaps across
polling houses.
But with a combination strategy, the issue
becomes which polls to include. All or some?
If some, how to decide? As a partial solution,
some researchers simply trim off the outliers.
Having a surfeit of forecasts also raises the im-
portant criterion of replication. If, for example, a
team is generating a different forecast every day,
it becomes difficult for readers to replicate its
work. Some bloggers or journalists who engage
in almost continual forecasting seem to manifest
this problem: they frequently report new, updated
forecasts but neglect to tell the reader much about
how the numbers were arrived at. These princi-
ples of forecasting – accuracy, lead, parsimony,
and replication – are easy to state but harder
to follow. For a review of these principles in
practice, see Lewis-Beck (2005).
Conclusions
Predicting elections has a long history in
American politics. While early efforts were
nonscientific, the advent of scientific public
opinion polling in the late 1930s and 1940s made
available to citizens a sense of how the nation
was leaning at election time. Today, competition
among polling agencies has produced a plethora
of polls, which the media report on as part of
the horse-race election coverage. However, as a
forecasting tool, the trial heat polls have a serious
limitation. Since they ask voters whom they would
vote for if the election were held today, they don't
provide valuable lead time. The trial heat poll that
predicts the vote on the actual election day is the
one conducted on election eve.
This shortcoming of polls, and the emergence
of scientific models that explain election results,
led to the development of election forecasting
models. These statistical models predict elections
based on economic data, approval measures, and
other factors. The great advantage of models is
that forecasts can be generated months before the
election. Today, these forecasting models for US
elections and elections abroad are published in
leading political science journals and are covered
in the national media as part of the preelection
coverage.
Political stock markets, and in particular
the pioneering Iowa Electronic Markets, have
provided a competing approach to polling and
models. Investors buy and trade stock in
candidates or parties, with the stock values repre-
senting the odds of victory. These markets have
proliferated internationally in countries where
political betting is legal, offering scholars the
opportunity to use these data as a forecasting
tool and to assess the prediction accuracy of the
markets compared to polls and models.
As we look toward the future of election
forecasting, we envision methods for integrating
these approaches. Already, polling measures are
used in forecasting models, but with market data
now available for elections over time and in
a variety of countries, this could be a fruitful
line of future inquiry. Further, as markets report
daily values, they could also be applied in
“nowcasts.”
Cross-References
Collective Intelligence, Overview
Data Mining
Least Squares
Legislative Prediction with Political and Social
Network Analysis
Regression Analysis
References
Abramowitz AI (2008) Forecasting the 2008 presidential
election with the time-for-change model. PS: Pol Sci
Polit 41:691–695
Berg J, Forsythe R, Nelson F, Rietz T (2008) Results from
a dozen years of election futures market research.
In: Plott C, Smith V (eds) Handbook of experimental
economic results. Elsevier, Amsterdam
Campbell JE (2008a) Editor’s introduction: forecasting
the 2008 national elections. PS: Pol Sci Polit 41:
679–682
Campbell JE (2008b) The trial-heat forecast of the 2008
presidential vote: performance and value consider-
ations in an open-seat election. PS: Pol Sci Polit
41:697–701
Erikson RS, Wlezien C (2008) Leading economic indica-
tors, the polls, and the presidential vote. PS: Pol Sci
Polit 41:703–707
Erikson RS, Wlezien C (2012) Markets vs. polls as elec-
tion predictors: an historical assessment. Elect Stud
31:532–539
Fiorina MP (1981) Retrospective voting in American
national elections. Yale University Press, New Haven
Gibson R, Lewis-Beck MS (eds) (2011) Electoral fore-
casting symposium. Elect Stud 30(2):247–287
Holbrook TM (2008) Incumbency, national conditions,
and the 2008 presidential election. PS: Pol Sci Polit
41:709–712
Klarner C (2008) Forecasting the 2008 U.S. House, Senate
and presidential elections at the district and state level.
PS: Pol Sci Polit 41:723–728
Leigh A, Wolfers J (2006) Competing approaches to fore-
casting elections: economic models, opinion polling
and prediction markets. Econ Rec 82:325–340
Lewis-Beck MS (2001) Modelers v. pollsters: the election
forecasts debate. Harv Int J Press Polit 6(2):10–14
Lewis-Beck MS (2005) Election forecasting: principles
and practice. Br J Polit Int Relat 7(2):145–164
Lewis-Beck MS, Bélanger E (eds) (2012) Special section:
election forecasting in neglected democracies. Int J
Forecast 28(4):767–829
Lewis-Beck MS, Rice TW (1982) Presidential popularity
and presidential vote. Public Opin Q 46:534–537
Lewis-Beck MS, Rice TW (1984) Forecasting presidential
elections: a comparison of naive models. Pol Behav
6:9–21
Lewis-Beck MS, Rice TW (1992) Forecasting elections.
Congressional Quarterly Press, Washington
Lewis-Beck MS, Stegmaier M (2007) Economic models
of voting. In: Dalton R, Klingemann H-D (eds) The
Oxford handbook of political behavior. Oxford Uni-
versity Press, Oxford, pp 518–537
Lewis-Beck MS, Stegmaier M (2011) Citizen forecast-
ing: can UK voters see the future? Elect Stud 30(2):
264–268
Lewis-Beck MS, Tien C (2008) Forecasting presidential
elections: when to change the model? Int J Forecast
24(2):227–236
Lewis-Beck MS, Tien C (2011) Election forecasting. In:
Clements M, Hendry D (eds) The Oxford handbook
of economic forecasting. Oxford University Press,
Oxford, pp 655–671
Lewis-Beck MS, Nadeau R, Bélanger E (2011) Nowcast-
ing v. polling: the 2010 UK election trials. Elect Stud
30(2):284–287
Nadeau R, Lewis-Beck MS (2012) Does a presidential
candidate’s campaign affect the election outcome?
Foresight 24:15–18
Norpoth H (2008) On the Razor’s edge: the forecast of the
primary model. PS: Pol Sci Polit 41:683–686
Pickup M, Johnston R (2008) Campaign trial heats as
election forecasts: measurement error and bias in
2004 presidential campaign polls. Int J Forecast 24(2):
272–284
Sigelman L (1979) Presidential popularity and
presidential elections. Public Opin Q 43:
532–534
Squire P (1988) Why the 1936 Literary Digest poll failed.
Public Opin Q 52:125–33
Tufte ER (1978) Political control of the economy. Prince-
ton University Press, Princeton
Wall M, Sudulich ML, Cunningham K (2012) What are
the odds? Using constituency-level betting markets to
forecast seat shares in the 2010 UK general elections.
J Elect Public Opin Parties 22:3–26
Weisberg HF (2005) The total survey error approach: a
guide to the new science of survey research. University
of Chicago Press, Chicago