
The Campaign that Wasn’t: Tracking Public Opinion over the 44th Parliament and the 2016 Election Campaign


Abstract

This chapter assesses the health of political polling in Australia over the period 2013–2016 and identifies major movements in voting intentions in this period. It finds that most national polls of two-party preferred voting intentions are reliable; however, there are systematic underestimations of Labor primary voting intentions and overestimations of Greens voting intentions. Seat-specific polling is of considerably lower quality than national polling. Most movements in voting intentions in the period studied occurred long before the election campaign, exposing the popular media narratives of a "Mediscare" and a "Greenslide" as unsubstantiated on the available evidence.
The Campaign that Wasn’t:
Tracking Public Opinion over the 44th Parliament and the 2016 Election Campaign
Simon Jackman and Luke Mansillo
Elections are a game of inches where political parties attempt to move
the needle of public opinion just enough to get their candidates across
the line when all the ballots are counted. It is a high-stakes game that
political parties play: parliamentary careers are on the line, as is the public
policy trajectory the nation will take—both with profound consequences
for citizens. Polls are often a source of evidence with which to praise
or admonish various campaign strategies and tactics in pre-election
prognostication and post-election soul-searching.1 Campaign directors,
strategists, pollsters and the candidates themselves are often hailed as
geniuses, dunces, heroes or villains in narratives about how elections are
won or lost (Halperin and Heilemann 2010; Williams 1997). Generally,
political scientists tend to be far more circumspect than journalists and
commentators about the effects of campaigns on election outcomes.
Decades of scholarship have shown that voter preferences are less pliable
than is often supposed, the effects of political advertisements are at
1 The analytic and communication tools available to contemporary political campaigns (Issenberg
2012) regularly go under the microscope, as do the power of gaffes, mishaps and ‘cut through’ or
unscripted moments, as well as the media portrayals of those events (e.g. Shorten 2004; Tiffen 2008).
best small and fleeting, and campaign efforts by the major parties often
neutralise one another.2 Many journalists and commentators have offered
campaign-based explanations for the closeness of the 2016 Australian
elections. For example, Labor’s ‘Mediscare’ strategy late in the campaign
is often invoked in explaining the narrowness of the Coalition’s victory.
Wayne Errington and Peter van Onselen make the unqualified assertion
that ‘[u]ndoubtably, Labor’s disingenuous scare campaign resonated with
the electorate’ (2016: 180).
In this chapter we present an alternative view, closer to the scholarly
consensus about campaign effects. We examine change in public opinion
during the 44th Parliament, providing context for movement in voting
intentions during the 2016 election campaign period. We also examine
the quality of seat-specific polling ahead of the 2016 election.
We make four major findings:
1. Large movements in voter support occurred well before the formal
campaign period, around events such as the September 2015
leadership spill that resulted in Malcolm Turnbull’s ascension to the
prime ministership and the 2014 Budget.
2. In the formal election campaign, there was relatively little movement
in voting intentions, which was consistent with the major party
campaigns neutralising each other. Contrary to popular narratives
about the campaign, movements in voting intentions during the
formal campaign period were smaller in magnitude than at other
comparable periods during the 44th Parliament.
3. Polling organisations systematically overestimate Greens voting
intentions, but underestimate Labor voting intentions. Two-party
preferred voting intentions estimates were accurate when
averaged across public polling. It would seem that a small,
industry-wide underestimate of the Coalition vote was oset by
theindustry-wideoverestimate of the Green vote, yielding an accurate
estimate of the two-party preferred division of the vote.
4. Seat-specific polling underestimates Labor, but overestimates support
for the Greens.
2 Reviews of the sizeable academic literature on campaign effects appear in Shanto Iyengar and
Adam Simon (2000), D. Sunshine Hillygus (2010) and John Sides and Lynn Vavreck (2014).
In general, seat-specific polls are subject to substantial biases, so much
so that the typical seat-specific poll should be treated as if it had just
one-sixth the nominal, stated sample size of the poll.
Data and methodology
The national-level polling data analysed here span the period between
the 2013 and 2016 elections. The data collection includes virtually
all polls in the public domain between the elections.3 Only polls with
known fieldwork dates and known sample sizes are analysed. There are
399 national polls in this period with primary vote estimates and 400
polls with two-party preferred estimates.4 There are 10 de facto polling
organisations responsible for the polls analysed (also see Goot, Chapter 5,
this volume). Newspoll and Galaxy are treated as two separate polling
organisations for the purposes of this analysis.5
Each poll is a snapshot of opinion, captured during a short temporal
window. e precision of a poll is an increasing function of its sample size.
For all but one day every three years (election day), the Australian publics
voting intentions are not directly observed. e rest of the time, voting
intentions are measured imperfectly through polls that change over time.
In this chapter, we estimate the true state of public opinion underlying
published polls with a statistical model (Jackman 2005, 2009). The
model treats voting intentions as a hidden or latent state and uses what is
visible—the published results of opinion polls—to recover the trajectory
of voting intentions between the 2013 and 2016 federal elections.
By combining polls, the model increases the amount of information
available for estimating latent public opinion, thus increasing the precision
of the resulting estimates. By estimating and correcting for biases specific
3 We thank William Bowe for sharing data collected for Crikey, enabling data quality checks for
missing data and data entry errors. We thank Murray Goot for graciously crosschecking our data
collection with his own collection of polling data. Responsibility for the accuracy of the data remains ours.
4 e dierence between the two arises from an additional Morgan poll conducted immediately
after the 2016 Budget, which published an estimate for the two-party preferred vote but did not
publish primary voting intention estimates.
5 The contract that News Corp had with Cudex, a joint venture between News Corp and the
British public relations firm WPP to conduct Newspoll-branded public-opinion polling research for
the Australian, was transferred to Galaxy Research in July 2015 (Australian 2015). The Galaxy-run
Newspoll adopts a mixture of robo-calling and online panel sampling techniques (Stirton 2015).
to polling organisations (‘house effects’), the model improves the estimate
of latent voting intentions. We refer to bias not as favouritism, a partiality
or prejudice for or against a political party, nor do we assert or imply any
normative quality or the intention of any polling industry participant to
change the content or appearance of their polling results (i.e. fabricate
their research). Rather, we borrow from statistics our meaning of bias.6
In addition, the model has a dynamic component. This acknowledges the
fact that voting intentions change over election campaigns and especially
over the three-year term of a parliament. The model includes ‘jumps’
or discontinuities for events that can reasonably be expected to rapidly
(if not instantaneously) move opinion. The model’s estimates are further
improved by anchoring voting intention estimates to the 2013 and 2016
election results: on election days, voting intentions are not latent, but
are directly observed.
Our model for poll results is
y_i ~ N(ξ_t[i] + δ_j[i], σ_i²)

where y_i is a proportion, the estimate of a party’s vote share in published
poll i; ξ_t[i] is the true but latent level of support for the party on day t, the
median date of fieldwork; δ_j[i] is the bias of polling organisation j,
the polling organisation fielding poll i; and σ_i² is the variance of the error of
poll i, a decreasing function of n_i, the known sample size of poll i. We set
σ_i² = (y_i × (1 − y_i))/n_i. The normal distribution is justified by standard
large-sample arguments about the form of sampling error.
The dynamic component of the model is

ξ_t ~ N(ξ_t−1 + γ_k D_k,t, ω²), t = 2, …, T

where t indexes the 1,038 days between the 2013 and 2016 elections
(inclusive). The model is a random walk in which today’s voting intentions
will be equal to the previous day’s voting intentions ξ_t−1 absent any polling
information to the contrary (which enters the model via the first equation
explained above). The γ_k parameters are ‘jumps’, measuring the extent to
which event k disrupts the trajectory of voting intentions; D_k,t is a binary
indicator, set to one on the day that event k occurs and zero otherwise.
6 The bias function of an estimator is the difference between an estimator’s expected value and the
true value of the parameter being estimated. If the difference between an estimate and the true value
is zero, the estimates are called unbiased. Bias is an objective property of an estimator.
The variance term ω² measures the day-to-day variability or volatility of
voting intentions. ξ_1 and ξ_T are set to the 2013 and 2016 election results,
respectively, for a given party.7 We specify two potential jump events:
Turnbull’s ascension to the prime ministership on 15 September 2015
and the prorogation of parliament on 21 March 2016.8
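A minimal forward simulation of this random-walk-with-jumps process (illustrative only; the parameter values below are invented, not estimates from the chapter):

```python
import random

def simulate_latent_path(xi0, days, omega, jumps=None):
    """Simulate xi_t ~ N(xi_{t-1} + gamma_k * D_{k,t}, omega^2):
    a Gaussian random walk with discrete jumps on given days.
    `jumps` maps a day index t to a jump size gamma_k."""
    jumps = jumps or {}
    path = [xi0]
    for t in range(1, days):
        innovation = random.gauss(0.0, omega)          # day-to-day volatility
        path.append(path[-1] + innovation + jumps.get(t, 0.0))
    return path

random.seed(1)
# Start at 45 per cent, with a hypothetical +5-point jump on day 700 of 1,038
path = simulate_latent_path(45.0, 1038, omega=0.15, jumps={700: 5.0})
print(len(path), round(path[700] - path[699], 1))
```

The estimation task the chapter describes runs this logic in reverse: given noisy poll readings of the path, infer the latent ξ_t and the jump sizes γ_k.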
Voting intentions 2013–16
Before the 2016 formal election campaign there was considerable
movement in voting intentions. Figures 6.1 and 6.2 summarise these
movements for Labor, the Coalition and Green first preferences and
for Coalition two-party preferred voting intentions. Large, electorally
consequential movements in voting intentions occurred within months
of the Abbott government coming to power, and following the September
2015 Liberal leadership spill.
There was a steady decline in Coalition primary voting intentions from
the 2013 election result until the presentation of the 2014 Budget on
13 May. Before the 2014 Budget, the Coalition’s primary vote fell below
its poor showing in the 2007 landslide result (42.09 per cent) and did not
recover until Turnbull took the leadership. This is in contrast to much
commentary that cites the 2014 Budget for the Abbott government’s
woes (see e.g. Kirby 2014; Makinda 2015; Marston 2014; Ryan 2015).
The Coalition’s primary vote remained little changed save a few small
fluctuations, seldom statistically significant.
After Turnbull became prime minister in September 2015, the Coalition’s
primary vote briefly moved well above its 2013 result (45.55 per cent).
There was an immediate 5 per cent increase in the Coalition primary
vote the day Turnbull became prime minister (see Figure 6.2); this
7 The normal distribution is chosen largely for convenience; other assumptions about the
form of the day-to-day innovations might be plausible—for example, a heavy-tailed distribution
such as the t-distribution. The ‘jump’ component of the model captures some of the more obvious
sudden or abrupt changes in public opinion, such that the remaining innovations are probably well
accommodated by the normal model assumed here. Alternative distributional forms for the day-to-
day innovations are a topic for another paper.
8 Breaks were tested for the opening of parliament, the three federal budgets, the first sitting day
of parliament following the summer recess, and the start of the ‘Mediscare’ campaign initiated with
a television advertisement featuring former Labor PM Bob Hawke on 11 June (ALP 2016). There
was little difference to the fit of the model and these additional break points were dropped from the
model reported here.
continued to improve until January. The Coalition’s renewed popularity
had a half-life of about four months—the 8 per cent gain in January from
the leadership change dropped to a 4 per cent advantage by April. There
was a net improvement (3 per cent) to the Coalition’s vote from Tony
Abbott’s defenestration until the prorogation of parliament in March
2016. Or, when viewed from the peak, there was a 5 per cent fall in
Coalition voting intentions from Christmas 2015 until the prorogation
of parliament. Leadership changes have been quite frequent in recent
Australian political history, with large boosts in a party’s electoral standing
in the polls followed by steady decline to the status quo ante or lower.
The reasons for the rapid reversion in voting intentions after leadership
changes are not well understood.9
The Labor primary vote had an immediate bounce following the 2013
election, with another improvement shortly after the first parliamentary
sitting in 2013. There was no significant bounce in Labor support from
the 2014 Budget; Australian Labor Party (ALP) voting intentions remained
unchanged from their position in December 2013. When Turnbull
replaced Abbott, there was an immediate 3.8 per cent fall in Labor voting
intentions (see Figure 6.2), followed by a further 2 per cent decline in the
next two months. From January until the prorogation, Labor gained 4 per
cent more vote share, the inverse of the Coalition’s loss.
The Greens’ primary vote moved reasonably slowly over the life of the
parliament. The Greens won 8.65 per cent of House of Representatives
first preferences in 2013. Our analysis suggests a slow improvement in the
Greens’ electoral position over 2014, but especially over 2015—noting
that Richard Di Natale became the Greens leader on 6 May 2015. By the
time of the Turnbull ascension, we estimate the Greens had 12 per cent
of first preferences, or nearly a 50 per cent improvement on their 2013
result. After Turnbull became prime minister, Green support fell by about
1 per cent (see Figure 6.2), with roughly another 1 per cent ebbing away
through 2016 to the 10.23 per cent Greens first preference result recorded
at the 2 July election.
9 We agree that an update to the literature on leadership effects on Australian public opinion
is probably warranted (McAllister 2003; Kefford 2013).
Figure 6.1. Trajectories of support for various parties (voting intentions,
per cent), 2013–16
Note. Shaded regions indicate 95 per cent credible intervals, open circles indicate polls.
Source. © Simon Jackman and Luke Mansillo collated these data over the course of the 44th Parliament.
As seen in Figure 6.1, changes in the Coalition’s two-party preferred
vote reflect changes in its primary vote, but there are some differences.
The Coalition shed 4 per cent of its two-party preferred vote between
the 2013 election and the end of the year, more than 1 per cent per month.
The Coalition’s two-party preferred result remained at 49 per cent until
April 2014. A further 3 per cent was lost in April 2014 in the lead-up to
the 2014 Budget. From June 2014, there was relative stability in two-party
preferred voting intentions until Turnbull became prime minister in
September 2015. The Coalition two-party preferred figures peaked at
almost 55 per cent in the 2015–16 Christmas/New Year period, but fell
dramatically (to 50 per cent) as the 2016 parliamentary sittings commenced.
Figure 6.2. Jumps in voting intentions associated with the prorogation
of parliament and Turnbull ascension
Note. Vertical lines span 95 per cent credible intervals.
Source. © Simon Jackman and Luke Mansillo collated these data over the course of the 44th Parliament.
Two campaign myths: ‘Mediscare’ and
the ‘Green slide’
We nd relatively little movement in voting intentions during the formal
campaign period (see Figure 6.3), a point noted by some commentators
during the campaign itself (e.g. Hartcher 2016). We dene the de facto
start of the 2016 election campaign with the proroguing of parliament
on 21 March, well ahead of the Budget delivered on 3 May and theissue
ofwrits on 9 May. e public opinion movements we nd in the campaign
are small in contrast to the previous 31 months.
The closeness of the election was widely attributed to Labor’s ‘Mediscare’
campaign (see e.g. Errington and van Onselen 2016; Gillespie 2016;
Williams 2016). We find this to be questionable given the available
evidence (see Elliot and Manwaring, Chapter 24, this volume). At
prorogation, the Coalition primary vote was 43 per cent. This had fallen
to 41 per cent a month out from polling day. The precipitous decline from
January was arrested by early June (see Figure 6.3). In the last month of
the campaign, it seems that the Coalition primary vote had an almost
1 per cent recovery, but this level of change is too small to be confidently
detected by the available data and our model. The most we can say is that
there appeared to be very little movement in Coalition primary voting
intentions compared to the period before the campaign.
The ‘Mediscare’ campaign began in earnest with advertisements featuring
Bob Hawke first appearing on 11 June. In Errington and van Onselen’s
(2016: 154) assessment, ‘[t]he Mediscare attack was designed not just
to appeal more to swinging voters (as well as galvanising Labor voters)
but to show that Shorten was playing to win’, and a ‘pathway to
victory’ was possibly indicated by ‘Labor’s tracking polling and focus-
group research [which] had picked up an unusual high concern with
health funding’. Errington and van Onselen (ibid.) assert that ‘Labor had
tapped into a rich vein of distrust voters had with the government—the
trick would be exploiting it to maximum effect’. But once the ‘Mediscare’
campaign was rolled out, the Coalition primary vote did not deteriorate—
and even improved thereafter—suggesting the ‘Mediscare’ campaign was
nowhere near as powerful as many accounts have asserted.10
10 For a more detailed discussion, see Elliot and Manwaring, Chapter 24, this volume.
Figure 6.3. Trajectories of support for various parties (voting intentions,
per cent), restricted to calendar year 2016
Note. Shaded regions indicate 95 per cent credible intervals, open circles indicate polls.
Source. © Simon Jackman and Luke Mansillo collated these data over the course of the 44th Parliament.
This said, Labor too appears to have had a 1 per cent improvement in its
primary vote over the campaign period, between the prorogation in March
and the 2 July election. Labor lost vote share over the period post-Budget
to early June—around 1 per cent—but appears to have regained it by
election day. But much like the Coalition primary vote, the Labor vote
remained stable for the campaign relative to the recent parliamentary
period. Again, there is little evidence that the ‘Mediscare’ campaign had
a significant impact on public opinion. As Figure 6.3 makes clear, the
larger and more consequential movement in Labor’s primary vote share—
and the Coalition’s vote share, too—occurred between Christmas 2015
and the start of the formal campaign. Labor’s first preference vote share
improved from 30 per cent to 35 per cent in the first five months of
2016, and meandered around that level through the formal campaign
period. From when parliament was prorogued until the Budget, the
Greens averaged 11 per cent of first preferences; this fell by 1 per cent after
the Budget. Once the writs for the 2016 election were issued, the Greens
and several commentators were bullish on the Greens’ prospects (Chang
2016; Evershed et al. 2016; see also Jackson, Chapter 13, this volume).
However, we find that the Greens vote share continued to decline or at
best was stagnant. The Coalition’s two-party preferred vote was highly
stable for the campaign, much like its primary vote. This was in contrast
to movements throughout the parliamentary period. The 1 per cent gain
Labor made in the month after the writs were issued was lost in the last
month of the campaign.
Polling organisation bias
Figure 6.4 summarises the biases or ‘house effects’ for each polling
organisation, for Coalition, Labor and Greens primary and Coalition
two-party preferred voting intentions. Vertical lines indicate the range of
95 per cent credible intervals around each bias estimate. A given house
effect estimate can be interpreted as being indistinguishable from zero at
conventional levels of statistical significance if the 95 per cent credible
interval overlaps zero. The house effect labelled ‘Average’ is the average of
the house effects; in effect, this average house effect enables us to show the
extent to which the polling industry displayed a collective bias.
Figure 6.4. Polling organisation bias estimates
Note. Vertical lines span 95 per cent credible intervals.
Source. © Simon Jackman and Luke Mansillo collated these data over the course of the 44th Parliament.
For Coalition primary vote estimates, three of the 10 polling organisations
had a statistically significant bias. Morgan and both Newspoll regimes
underestimated Coalition primary voting intentions; Morgan
underestimated the Coalition primary vote more than either Newspoll
regime. Five of the 10 polling organisations analysed showed a significant
Labor bias in their estimates of the primary vote. Essential and Newspoll
since July 2015 overestimated Labor primary voting intentions, whereas
Ipsos, Morgan and Newspoll until June 2015 significantly underestimated
Labor primary voting intentions.11 Changes to the Australian’s public
opinion research sourcing also changed its underlying methodological
procedures. This has implications for Labor primary vote estimates. The old
Newspoll underestimated the Labor primary vote by 1.9 per cent, but
after the polling organisation changed, the new Newspoll overestimated it
by 1.9 per cent; the difference being 3.8 per cent.
In July 2015, political editors and commentators were quick to remark
on the shift in Newspoll. Fairfax Media’s chief political correspondent
Mark Kenny noted, ‘primary support for the ALP [was] a relatively
healthy 39 per cent, up from 37 per cent a fortnight ago and 5 per cent
up from its 34 per cent in mid-June’ (Kenny 2015). While the polls
present much fodder for journalists to discuss, they appear not to have
been terribly mindful of potential poll bias shifts. Averaged over the
polling organisations used here, there is no ‘industry-wide’, collective
bias in polling for Labor first preferences or Coalition two-party preferred
estimates. There is a small underestimate of Coalition first preferences
(about 0.6 per cent), but this estimate is not distinguishable from zero
at conventional levels of statistical significance.
Then again, there is an unambiguous tendency for the polling industry
to have overestimated Greens first preferences. Four of the 10 polling
organisations significantly overestimated the Greens’ primary vote: Ipsos,
Morgan, Nielsen and Newspoll before July 2015. Essential underestimates
the Greens’ primary vote, but to a much lesser extent than other
polling organisations overestimate Greens voting intentions. The industry,
on average, overestimated Greens support by more than 1 per cent.
Overestimates of the Greens primary vote have also been observed in New
Zealand (Wright, Farrar and Russell 2013). There could be a few reasons
for the large Greens primary vote overestimation observed. Some likely
causes include incorrect weighting of younger respondents, voter confusion
between House of Representatives and Senate voting, survey design issues
(question wording and response options), and respondents—who expressed
an intention to vote for the Greens—being less likely to turn out.
No signicant house biases for national Coalition two-party preferred
voting intentions were observed. In this respect, polling organisations
performed well. It appears that a small, collective underestimate of the
11 From July 2014, Nielsen ceased Australian public opinion research operations on voting
intentions (Mitchell 2014). Fairfax Media has since sourced its public opinion polling from Ipsos.
Coalition vote was offset by the collective overestimate of the Green
vote, yielding an unbiased estimate of the two-party preferred division
of the vote.
Seat-specific polling
We collected estimates of first preference voting intentions from 88 seat-
specific polls that were conducted in 48 electoral divisions from January
2016 until election day. These electoral divisions were typically more
marginal than the average seat. Sample sizes for these polls ranged from
500 to 1,600, with an average of 626. The sample sizes for these polls are
smaller than those in national polls (an average of 1,498 respondents).
To measure the error for seat-specific polling, we compared poll estimates
to the election results in the corresponding seat. Figure 6.5 displays these
comparisons. The orange line is a 45-degree line; all data points would
lie on this line if poll results perfectly predicted the election results.
The blue line is a regression line, summarising the relationship between
poll estimates and actual results.
Table 6.1 presents summaries of the poll errors. Averaged across seats,
seat-specific polls overestimated the Greens vote by 0.7 per cent and the
Coalition vote by 0.6 per cent, and underestimated the Labor vote by
2.2 per cent and Xenophon candidates by just 0.25 per cent. The bias with
respect to Labor vote shares is especially pronounced, with underestimates
of Labor’s showing in seats like Macarthur (New South Wales (NSW)) and
Franklin (Tasmania (TAS)) larger than 10 per cent. The median absolute
error (ignoring whether the poll error is an overestimate or an underestimate)
is actually slightly larger for the Coalition than for Labor (3.5 per cent versus
3.28 per cent), but both errors are reasonably large. The root mean square
error (RMSE) is largest for Labor—on the order of 5 per cent—and 4.3 per
cent for the Coalition. This is considerably larger than the RMSE we ought
to see from polls with a sample size of roughly 600 respondents.12 The poll
errors for the Greens and Xenophon candidates are smaller in magnitude
than those for Labor and the Coalition because the magnitudes involved are
smaller quantities (e.g. the median Green vote share in the seats covered by
these polls is 8 per cent).
12 Unbiased polls with a sample size of 626 respondents trying to estimate a (known) target
of 50 per cent will have a root mean square error of .02 or 2 per cent.
Figure 6.5. Performance of seat-specific polls
Source. © Simon Jackman and Luke Mansillo collated these data over the course of the 44th Parliament.
Table 6.1. Summary of poll errors
                        Coalition   ALP     Green   Xenophon
Average                   –0.58     2.19    –0.68     0.25
Median absolute            3.50     3.28     1.42     1.69
RMSE                       4.32     4.99     2.27     3.83
Effective sample size    129.91    92.14   142.89    95.49
Coverage rate             57.50    53.16    78.21    60.00
Number of polls              80       79       78       15
Number of seats              46       45       44       10
Note. The coverage rate is the proportion of polls whose 95 per cent confidence intervals contain the
corresponding outcome.
Source. © Simon Jackman and Luke Mansillo collated these data over the course of the 44th Parliament.
Indeed, it is possible to invert the formula for RMSE to recover the effective
sample size of the seat-specific polls, generating an estimate of the quality
of information in these polls.13 The total survey error between the poll’s
predicted result and the observed result is expressed in the same terms as
total sampling error. For polls estimating Coalition and Labor vote shares,
the effective sample size is around 100, far below the 626 average sample
size in these polls; an indication of the unreliability of seat-specific polls.
Seat-specific polls are subject to substantial biases, so much so that they
contain as much information as an unbiased sample of just one-sixth the
nominal sample size of the poll.
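The inversion can be checked with a one-line calculation, using Labor’s median vote share and RMSE from Table 6.1 (rounded inputs, so the result only approximates the table’s 92.14):

```python
def effective_sample_size(p, rmse):
    """Invert RMSE = sqrt(p(1 - p)/n) to recover the effective n."""
    return p * (1.0 - p) / rmse ** 2

# Labor: median vote share 35.55 per cent, RMSE 4.99 per cent (Table 6.1)
print(round(effective_sample_size(0.3555, 0.0499)))  # 92
```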
Similarly, we report the coverage rate of the poll estimates in Figure 6.5: the
proportion of times that the 95 per cent confidence intervals formed around
each poll estimate actually contain the observed election result.14 This,
too, is a useful measure of the performance of the polls. Unbiased polls
that utilise simple random sampling ought to have coverage rates equal to
their nominal coverage rates given by statistical theory. This means, using
95 per cent confidence intervals, the election results should fall within
the 95 per cent confidence interval bounds on 95 per cent of occasions.
Table 6.1 shows poor coverage rates for estimates of Coalition, Labor
and Xenophon support: just 58 per cent, 53 per cent and 60 per cent,
respectively. Poll estimates of Green support in specific seats fare a little
better, with a coverage rate of 78 per cent.
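The coverage check for a single poll is straightforward; a sketch with an invented example, not one of the polls analysed:

```python
import math

def covers(poll_p, n, actual, z=1.96):
    """True if the 95 per cent confidence interval around a poll
    estimate contains the actual election result."""
    se = math.sqrt(poll_p * (1.0 - poll_p) / n)
    return abs(actual - poll_p) <= z * se

# Hypothetical seat poll: 38 per cent on n = 626; actual result 42 per cent
print(covers(0.38, 626, 0.42))  # False: the interval misses the result
```

Computing this for every seat-specific poll and averaging the True/False outcomes yields the coverage rates reported in Table 6.1.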
Figure 6.6 disaggregates poll errors (measured as absolute values) by
pollster and by party. There were some impressive misses. The bulk of polls
were produced by ReachTEL, Newspoll and Galaxy, which had median
absolute errors (MAE) of 3.5 per cent, 4.3 per cent and 2.6 per cent
respectively in their seat-specific estimates of Coalition support. The
pollster with the greatest Coalition MAE was MediaReach, with just one
poll in Solomon (Northern Territory (NT)), which had an error of 8.4
per cent. This large error—and many others not so large—are well beyond
what we might reasonably expect from random sampling with the sample
sizes reported here.15 Patently, other sources of survey error are at work,
13 We do this by rearranging the formula RMSE = √(p(1 − p)/n), setting p equal to the
median outcome for a given party over the seats covered by those polls, then solving for n:
n = p(1 − p)/RMSE².
14 In computing the coverage rate, we form a 95 per cent confidence interval around the published
poll result via a normal approximation to the sampling distribution of each poll result, setting the
upper and lower limits of the confidence interval to p ± 1.96 × SE, where SE = √(p(1 − p)/n), where p
is the poll result expressed as a proportion and n is the published sample size of the poll.
15 Recall that in footnote 12 we computed the standard error for a poll-based estimate of a proportion
of 0.5 with a random sample of 626 (the average sample size of the seat-specific polls we analyse). The
expected median absolute error from unbiased polls with this sample size is 1.34 per cent.
including frame errors (the sampling frame is not representative of the
electorate), non-response bias (the set of respondents taking the survey
are not representative of the electorate, even after corrections such as
weighting), or errors in weighting.
Figure 6.6. Performance of seat-specic polls by party and pollster
Note. Polling errors (absolute values), the distances of poll estimates from the actual election result,
are plotted by party and pollster. The orange point marks the median absolute error for
a pollster when estimating the indicated party’s level of support in an electoral division.
Source. © Simon Jackman and Luke Mansillo collated these data over the course of the 44th Parliament.
Errors in seat-specic ALP polling were similar to Coalition polling
errors. e best-performing pollster was MediaReach, with their one poll
in Solomon (NT), reporting Labor rst preferences of 42 per cent (actual
40.87 per cent). Galaxy had an MAE of 2.2per cent, the lowest error
rate of pollsters that regularly elded during the campaign. Newspoll did
not perform much worse with an MAE of 2.9per cent. e3.5 per cent
ALP ReachTEL MAE and large distribution underscores the variability
in quality of seat-specic polling. Greens seat-specic polling produced
errors that are generally smaller than those for the major parties but,
as noted earlier, this largely stems from the fact that Greens vote shares
(both estimated and actual) are so much smaller than those for the major
parties. In relative terms, the Green errors are actually much larger. e
MAE of seat-specic poll estimates of Labor’s support was 3.28 per cent
(see Table 6.1); Labor’s median vote share across the seats in which we
have poll estimates was 35.55 per cent, implying that the MAE is about
9.2 per cent of Labor’s median vote share. e Green MAE is 1.42 on a
median vote share of just 8.04 per cent, implying that the MAE is 17.7
per cent of the Greens’ median vote share, almost double the relative size
of Labor’s polling error.
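The relative-error arithmetic above can be reproduced with a short sketch (the MAE and median vote share figures are taken from the text; results are rounded to one decimal place):

```python
def relative_mae(mae, median_share):
    """MAE expressed as a percentage of the party's median vote share."""
    return 100 * mae / median_share

labor = relative_mae(3.28, 35.55)   # about 9.2
greens = relative_mae(1.42, 8.04)   # about 17.7
print(round(labor, 1), round(greens, 1))
```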
A plausible hypothesis is that seat-specific polling fares better when
conducted close to election day, and that errors in the polls might be larger
when conducted weeks or months earlier, before the campaign has firmed
up voters' decisions. We explore this hypothesis with the analysis shown
in Figure 6.7, plotting the magnitude of seat-specific poll errors by the
field date of the poll. With the exception of seat-specific poll estimates of
Labor vote share, there is little evidence that the accuracy of seat-specific
polling improved as the election grew closer. The trend lines in Figure 6.7
are horizontal for both Coalition and Greens seat-specific polling, and
statistically indistinguishable from a horizontal or 'no change' trend for the
relatively small number of polls assessing seat-level support for Xenophon
candidates. For Labor seat-by-seat outcomes, exceptionally large polling
errors (e.g. greater than 10 per cent in magnitude) are concentrated in polls
conducted more than six weeks before the election, although poll errors
as large as 10 per cent were recorded in polls fielded less than two weeks
prior to the election; Macarthur (NSW) was the source of the largest error
in ALP seat polling (a 14 per cent miss by a ReachTEL poll on 19 May)
and it also supplied 10 per cent misses for Galaxy (twice, 11 May and
22 June) and Newspoll/Galaxy (14 June), underestimating the ALP's 51.9
per cent result in every case. Three seat-specific polls fielded very close to
the election—on 29 and 30 June in Adelaide (South Australia (SA)) by
Galaxy, Chisholm (Victoria (VIC)) by ReachTEL and in Port Adelaide
(SA) by Galaxy—performed very well with respect to Labor vote share,
with errors of less than 1 per cent in each case. The same polls missed
Coalition vote shares by magnitudes of 2.6 per cent, 5.4 per cent and
4.4 per cent respectively, but performed relatively well with respect to the
Greens, with errors of magnitude 1.4, 2.4 and 0 per cent (to one
decimal place).
Figure 6.7. Performance of seat-specific polls, by party and over time
(days until election)
Note. Each plotted point is the error (absolute value) of a separate poll. The blue line
summarises the time trend of the absolute errors.
Source. © Simon Jackman and Luke Mansillo collated these data over the course of the 2016 election campaign.
All of this evidence suggests interpreting seat-specific polling with great
caution. Labor support was systematically underestimated across the
polling industry and across seats. Stated confidence intervals for seat-
specific estimates are far too small or, equivalently, the actual statistical
precision of the polls is far lower than the nominal confidence intervals and
'margins of error' accompanying media reports of the polls would imply.
Published confidence intervals and 'margins of error' should be inflated by
a factor of 240 per cent for estimates of Coalition support, by 270 per cent
for estimates of Labor support and by 185 per cent for estimates of Greens
support. These are extremely large inflation factors; for instance, a seat-
specific poll estimating Labor support that claims to have a margin of
error of ±3 per cent ought to be considered as having a confidence interval
of ±8.1 per cent.
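A short sketch of the arithmetic behind these inflation factors. The 270 per cent (×2.7) factor and the ±3 per cent example come from the text; the effective-sample-size relationship n_eff = n/f² is an implication we draw from the margin of error scaling with 1/√n, not a formula stated in the chapter:

```python
def inflated_moe(nominal_moe, factor):
    """Scale a published margin of error by an inflation factor (e.g. 2.7 for 270%)."""
    return nominal_moe * factor

def effective_sample_size(n, factor):
    """If the true margin of error is `factor` times the nominal one,
    the poll behaves as if its sample were n / factor**2."""
    return n / factor ** 2

print(inflated_moe(3.0, 2.7))                   # a nominal ±3% becomes ±8.1% for Labor
print(round(effective_sample_size(626, 2.7)))   # an average seat poll of 626 acts like ~86
```

On this logic, a Labor seat poll of 626 respondents behaves like a much smaller simple random sample, broadly consistent with the chapter's observation that seat polls carry an effective sample size of roughly a sixth of that fielded.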
The 44th Australian Federal Parliament experienced considerable volatility
in voting intentions. Turnbull's ascension produced the largest change
in voting intentions in the three-year period between the 2013 and
2016 elections. The 5 per cent fall in Coalition support from Christmas
2015 until prorogation would suggest that Turnbull went to the 2016
election too late. These large movements stand in stark contrast to the small,
statistically negligible movements in voter sentiment during the campaign
period. Since we find very little movement in voting intentions during the
formal campaign, media narratives about the power of
campaign events are best treated with a grain of salt. For instance, it
is simply not the case that Labor's 'Mediscare' campaign undermined the
Coalition's electoral position. The evidence available to us indicates that
public opinion was stable over the campaign.
We also draw several conclusions about the quality of the polls. Poll estimates
of the two-party preferred vote were generally of high quality. No survey
house displays statistically significant bias in its two-party preferred
estimates. Estimates of national first preference vote shares were also
largely accurate. Morgan and Newspoll underestimated the Coalition's
first preference vote share by between 1.1 per cent and 1.9 per cent.
Essential and Newspoll overestimated the Labor primary vote by between
0.8 per cent and 1.2 per cent, while the estimates of the other active polling
organisations, Ipsos and Morgan, were out by between 1.8 per cent and
1.9 per cent. Greens primary vote estimates contained more bias, with
overestimations ranging between 1.9 per cent and 2.6 per cent across
polling organisations.
We nd seat-specic polling to be highly unreliable. ese polls
systematically underestimate the Labor vote and overestimate the Greens
vote. e bias in the average seat-specic poll is so great that these polls
should be cautiously treated since they have an eective sample size a sixth
of that elded. is is in marked contrast to the performance of national
polling, indicating that reliably generating high-quality samples of small
areas (Commonwealth electoral divisions) is a challenging task for almost
all of the polling organisations we considered here.
Australian. 2015. ‘Newspoll changes polling partner to Galaxy Research’.
Australian, 5 May, 4.
Australian Labor Party (ALP). 2016. ‘Bob Hawke speaks out for
Medicare, do you?’ YouTube, 11 June. Available at:
Chang, Charis. 2016. 'The Greens could be "quite powerful" in next
parliament'. 12 May. Available at:
Errington, Wayne and Peter van Onselen. 2016. The Turnbull Gamble.
Melbourne: Melbourne University Publishing.
Evershed, Nick, Bridie Jabour, Michael Safi and Miles Martignoni. 2016.
'How to understand the numbers flying around election 2016 – Behind
the Lines Podcast'. Guardian, 12 May. Available at: www.theguardian.
Gillespie, Jim. 2016. 'Labor's "Mediscare" campaign capitalised on
Coalition history of hostility towards Medicare'. The Conversation,
5 July. Available at:
Halperin, Mark and John Heilemann. 2010. Game Change: Obama and
the Clintons, McCain and Palin, and the Race of a Lifetime. New York:
Hartcher, Peter. 2016. ‘Federal election 2016: It’s 50–50, so what was the
point of it all?’ Sydney Morning Herald, 1 July. Available at: www.smh.
Hillygus, D. Sunshine. 2010. 'Campaign effects on vote choice'. In Jan
E. Leighley (ed.), Oxford Handbook of American Elections and Political
Behavior. Oxford: Oxford University Press, pp. 326–45.
Issenberg, Sasha. 2012. The Victory Lab. New York: Broadway Books.
Iyengar, Shanto and Adam F. Simon. 2000. 'New perspectives and evidence
on political communication and campaign effects'. Annual Review
of Psychology 51: 149–69.
Jackman, Simon. 2005. ‘Pooling the polls over an election campaign’.
Australian Journal of Political Science 40(4): 499–517.
10.1080/10361140500302472
——. 2009. Bayesian Analysis for the Social Sciences. Hoboken, NJ:
John Wiley & Sons.
Keord, Glenn. 2013. ‘e presidentialisation of Australian politics?
Kevin Rudd’s leadership of the Australian Labor Party’. Australian
Journal of Political Science 48(2): 135–46.
Kenny, Mark. 2015. ‘Bill Shorten to unveil 50% renewable energy target
at Labor conference’. Sydney Morning Herald, 22 July. (Original title:
‘Shorten to propose bold new climate goal’). Available at: www.
Kirby, Tony. 2014. 'Health and science suffer major cuts in Australia's
budget'. The Lancet 383: 1874–76.
Makinda, Samuel. 2015. 'Between Jakarta and Geneva: Why Abbott
needs to view Africa as a great opportunity'. Australian Journal
of International Affairs 69(1): 53–68.
Marston, Greg. 2014. 'Welfare for some, illfare for others: The social
policy agenda of the Abbott Government'. Australian Review of
Public Affairs, October. Available at:
McAllister, Ian. 2003. ‘Prime ministers, opposition leaders and
government popularity in Australia’. Australian Journal of Political
Science 38(2): 259–77.
Mitchell, Jake. 2014. ‘Nielsen ends 40 years of public polling’. Sydney
Morning Herald, 2 June. Available at:
Ryan, Matthew. 2015. 'Contesting "actually existing" neoliberalism'.
The Journal of Australian Political Economy 76: 79–102.
Shorten, Bill. 2004. After the Deluge? Rebuilding Labor and a Progressive
Movement. Fitzroy, Victoria: Arena Printing and Publishing Pty Ltd.
Sides, John and Lynn Vavreck. 2014. The Gamble: Choice and Chance
in the 2012 Presidential Election. Princeton, NJ: Princeton University Press.
Stirton, John. 2015. ‘New Galaxy “Newspoll” to rely on robopolling and
online data’. Sydney Morning Herald, 9 May. Available at: www.smh.
Tien, Rodney. 2008. ‘Campaign tactics, media bias and the politics
of explanations: Accounting for the Coalitions loss in 2007’.
Communication, Politics & Culture 41(2): 8–29.
Williams, Pamela. 1997. The Victory: The Inside Story of the Takeover
of Australia. Sydney: Allen & Unwin.
——. 2016. ‘Federal election 2016: How Labor’s Mediscare plot was
hatched’. Weekend Australian, 4 July. Available at: www.theaustralian.
Wright, Malcolm, David Farrer and Deborah Russell. 2013. ‘Polling
accuracy in a multiparty election’. International Journal of Public
Opinion Research 26(1): 113–24.
This text is taken from Double Disillusion: The 2016 Australian
Federal Election, edited by Anika Gauja, Peter Chen, Jennifer Curtin
and Juliet Pietsch, published 2018 by ANU Press, The Australian
National University, Canberra, Australia.
Poll results vary over the course of a campaign election and across polling organisations, making it difficult to track genuine changes in voter support. I present a statistical model that tracks changes in voter support over time by pooling the polls, and corrects for variation across polling organisations due to biases known as ‘house effects’. The result is a less biased and more precise estimate of vote intentions than is possible from any one poll alone. I use five series of polls fielded over the 2004 Australian federal election campaign (ACNielsen, the ANU/ninemsn online poll, Galaxy, Newspoll, and Roy Morgan) to generate daily estimates of the Coalition's share of two-party preferred (2PP) and first preference vote intentions. Over the course of the campaign there is about a 4 percentage point swing to the Coalition in first preference vote share (and a smaller swing in 2PP terms), that begins prior to the formal announcement of the election, but is complete shortly after the leader debates. The ANU/ninemsn online poll and Morgan are found to have large and statistically significant biases, while, generally, the three phone polls have small and/or statistically insignificant biases, with ACNielsen and (in particular) Galaxy performing quite well in 2004.* An earlier version of this paper was prepared for the annual meeting of the Australasian Political Studies Association, University of Adelaide, 29 September–1 October 2004.