News and Public Opinion: Which Comes First?*
Journal of Politics, forthcoming.
Christopher Wlezien
University of Texas at Austin
wlezien@austin.utexas.edu
Abstract: Much research demonstrates a positive association between news coverage and
public opinion, both perceptions and preferences: When the news goes in one direction,
opinion does too. While this relationship is clear, what accounts for it is not. The
assumption in most previous research is that media causes public opinion; this is true in most
observational studies and almost all experimental ones. But there is reason to expect that the
causality runs in the other direction as well, where the public drives media coverage. In this
paper, I describe the logic of two-way flows and then undertake an analysis of three different
cases of US public opinion over time – economic perceptions, candidate support, and policy
preferences – using measures of the content of news coverage based on automated content
analyses. Vector autoregression results indicate that opinion tends to come first; it “causes”
coverage in every case and the reverse holds less frequently and always to a lesser degree.
Although much research clearly remains, the results underscore the role the public can play
in news coverage, one that always should be entertained and assessed empirically, not settled
by assumption.
Keywords: Media, perceptions, preferences, economy, elections, policy, time series
* Replication files using Stata software are available in the JOP Data Archive on Dataverse
(https://dataverse.harvard.edu/dataverse/jop). The empirical analyses in this article and the online
appendix have been successfully replicated by the JOP replication analyst. Some of the analysis
relates to research supported by National Science Foundation Grants SES-1728792 and SES-
1728558.
The editor called his reporter, who happens to be the now-older author of this
article. “The publisher wants to talk with you,” the editor said. After agreeing on
a time, the reporter showed up and was introduced to the publisher, who then
asked: “So, who do you think is going to win [the upcoming election] in Happy
Hills [a fictional town]?” The reporter said that he didn’t know. “It’s a small
town. There aren’t any polls.” The publisher replied “Yes, I know that. But
you’re our man on the ground there; what do you think?” The reporter told him
and the publisher responded “That’s my sense too.” Once out the door, the
reporter turned to the editor and asked: “What was that about?” He replied:
“Don’t you get it?” He then smiled and added: “Our publisher’s making
endorsements.”
There is little gainsaying the importance of mass media in modern societies. For better or worse,
they connect us with the outside world. This almost goes without saying. It also leads many
scholars and other observers to infer that mass media largely determine what we think about,
what we know, and even what we want. This despite decades of research demonstrating limits to
media effects, even during the apex of their concentrated power and audience reach.
Increasingly, we choose our media. Demand thus matters, which engenders competition. But,
there has been choice for some time, and people exercised it well before the advent of the
internet and social media. On television, there was cable and satellite. Before that, even
broadcast offered choice. Newspapers typically offered less of a menu, and that clearly has
declined over time, but choice remains even there, at least for national news. And people always
could opt out, and many apparently did and do (Prior 2007). Throughout, there have been
incentives for news organizations to gauge and respond to public sentiment (Hamilton 2004).
While incentives are important, there are other reasons to expect the news to be responsive, one
passive and the other more active. First, reporters and editors are people, and experience many
of the same things that the rest of us do, which they may reflect in their reporting, i.e., they are
samples of the broader population and so may be representative. Second, even to the extent they
are not like the rest of us, and so see the world differently, they may see their roles as
representing, at least to some degree, what “the public” cares about and thinks (Dunaway and
Graber 2022).
For various reasons, then, we might expect the news to follow the public. This may help explain
why news organizations have been involved in public opinion polling for such a long period,
particularly going back in time, when the costs of doing so were much greater than they are
today. [Footnote 1: Polls in the past also may have provided more accurate reflections of public sentiment.]
They went out of their way to collect the information, so presumably it influences their
reporting (and editorializing), both the topics they cover and how they cover them.
Yet, most research on the media and the public assumes that the public follows the media. This
is true in most observational studies (e.g., Dilliplane 2014). It is truer still in experimental
research, which almost always focuses on the effects of media treatments (e.g. Levendusky
2013). The assumption may be understandable given the importance of media in our lives. And
this goes double for experimental work, as it is difficult to manipulate opinion and assess the
impact on coverage.
But is the flow only in one direction from the media to the public? Or does the public also
influence the media? These are interesting questions, as we want to know the answers. The
answers also are important, as they matter for effective political accountability and
representation. They shed light on whether the public really is – can be! – influential in politics
and policymaking.
This article considers two-way flows between the media and the public. It focuses on the actual
substance of coverage, not just the amount of coverage in different areas, which is important to
be sure but only part of the story. After setting out the basic approach, relying on the logic of
Granger causality, I analyze how the news relates to public opinion in three different cases in the
United States (US): economic perceptions, candidate support, and policy preferences. Each of
the areas is the subject of a lot of media and scholarly attention, and there are enough historical
data on both public opinion and news coverage to estimate time series dynamics. The results
reveal pervasive public influence on coverage that exceeds the influence of that coverage on
public opinion in every case. I consider the consequences for our understanding of media and its
effects in the concluding section of the paper.
The Media and Public Opinion in the Literature
Mass media are important. They are sources of information; they are where we learn about many
things. This is well-known (see, e.g., Jamieson 2017; Dunaway and Graber 2022). They also are
sources of misinformation. This also is well-known, and has been true for a long time, before
social media and even cable (and satellite) television (also see Dunaway 2021). Misinformation
is not necessarily deliberate, of course. There is simple sampling error, after all, keeping in mind
that the news does not fully and perfectly reflect everything that is happening. Nor need misinformation be random: news providers make choices about what to emphasize and how
to do it, what scholars often refer to as “frames.” The point is that there is reason to think that
media matter for the public, and this has been reflected in scholarly research for a long time.
Observational research
There is a very long and deep tradition of studying media effects on the public. Research finds
effects on political socialization, political knowledge, agenda-setting, political attitudes, and
political behavior as well, among others (see Dunaway and Graber 2022). [Footnote 2: This is a basic list of some of the research foci in the broad area; for more extensive reviews, see Prior (2013), Jamieson (2017), Iyengar (2017), Dunaway (N.d.), and Dunaway and Settle (N.d.). Some research addresses media influence on policy (Carpenter 2002; John 2006; also see Baumgartner and Jones 1993). While there appears to be an effect, the mechanism is not clear. That is, it may be that media coverage influences policymakers' own preferences, which can relate to both priorities for action and preferred policies. It also may be that coverage influences policymakers' perceptions of public opinion, again relating to priorities and/or preferred policies. Both may be at work.]
While most of this
research – indeed, almost all observational research – assesses media effects, there is some
appreciation for media responsiveness (e.g., Hamilton 2004; Gentzkow and Shapiro 2010). This
is especially true in research on policy, where scholars have considered whether the media
agenda reflects public opinion about problems (e.g., Soroka 2002; Barbera, et al 2019). It finds
evidence that news organizations tend to provide coverage on the issues people care about most,
e.g., when the public is concerned about immigration, the media tend to focus on it. This is
important to be sure, particularly given that a lot of the policy-focused work on agenda-setting
still starts from the premise that the media drive public attention. And there is little research focusing
on the actual substance of coverage about policy – whether the news reflects preferences for
government action, and the amount and type of policy.
One stream of research on media coverage and public opinion does consider flows in both
directions – the economy. Here, scholars have considered whether reporting on conditions not
only influence the public’s economic perceptions but is influenced by them. And some of this
research actually finds evidence of two-way flows between coverage and perceptions
(Stevenson, et al 1994; Haller and Norpoth 1997; Wu, et al 2002; Soroka, et al 2015; Wlezien, et
al 2017). Other work does not find both flows, however, and concludes instead that media
coverage is exogenous to public opinion and also influences it (Fan 1993; Blood and Phillips
1997; and Hollanders and Vliegenthart 2011). The direction of influence thus is not clear even
for the economy.
It is less clear elsewhere, where the possibility has not received much attention. Note that some
scholars have considered the possibility of two-way flows in analyses of electoral preferences
during the 2012 and 2016 presidential nomination contests in the US. Here, Sides and Vavreck
(2014) and Reuning and Dietrich (2018) found evidence that the amount of news coverage
influenced public attention, and the former also demonstrated that coverage mattered for public
preferences, particularly during the “invisible” primary before the primary elections begin (also
see Sides 2015; Patterson 2016). [Footnote 3: Interestingly, Patterson (2016) finds that media attention during the period is closely associated with candidates' viability as reflected in the polls.]
Neither found evidence that public support of candidates
influences the news. While strongly suggestive, and confirming of the conventional scholarly
wisdom, this is a small body of research, and it is focused on primary elections. And recent research
does find that preferences mattered for coverage in the US general election campaign of 2016
(Wlezien and Soroka 2019).
Besides economic perceptions and to a lesser extent electoral support, the possibility that the
news and the public influence each other has not been consistently entertained. That said, we do
know from previous research that consumers exercise choice among information providers
(especially see Arceneaux and Johnson 2013), and there is reason to think this structures news
provision itself (Hamilton 2004; Gentzkow and Shapiro 2010; Padgett et al. 2021). Web traffic
analyses are (strongly) supportive of demand-side influences (Hindman 2018), as is longer-
standing research on horse-race coverage of election campaigns (e.g., Iyengar, et al. 2004). But
it is not clear based on the extant research whether and to what extent content actually leads or
follows the public in different areas.
Experimental research
The assumption of one-way flows is even more commonplace among experimental studies (e.g.
Levendusky 2013). This is not at all surprising, as it is difficult for scholars to manipulate public
opinion. Even if this could be done, it is difficult to assess the implications for coverage out in
the field. And, to state the obvious, assessing the broader consequences for media coverage is
impossible in the lab. This of course limits what scholars can do.
On the Possibility of Two-Way Flows
I have already discussed the kinds of motivations that would lead news providers to follow the
public, but it is worth retracing those steps.
Consider first that reporters and editors are part of society, and so share experiences with the
broader population. To the extent this is true, they may reflect much of what average citizens
want simply because they also want those things. Of course, this depends on the degree to which
experiences are common across the two groups. It thus is important that news providers are not a
representative sample of the broader population (Dunaway and Graber 2022); this does not
preclude overlap, though it does imply a degree of difference that may matter for both the media
agenda and content of coverage.
Second, there is a sociological explanation as well. Even to the extent they are not representative
of the population, editors and journalists may see their roles as representing the public’s concerns
and preferences. To the extent this is true, they may reflect what the public thinks simply
because they think it is important to do so. This depends on accurate perceptions of what the
public wants, of course, and journalists have various sources of information about public
preferences. Opinion polls are one of these sources, and also help serve as a check on other
information (see Geer 1996).
Third, and perhaps most importantly, market incentives provide motivation (Hamilton 2004;
Gentzkow and Shapiro 2010). Even where editors and reporters are not representative of the
public and do not see their roles as representing them, competition matters and so too do
audience tastes. Both shape the news. This is especially true when media actors depend on
subscriptions and advertising, much as economists would predict. But consider that even
publicly-funded actors face incentives to represent public tastes, as this can factor into support
for retaining or expanding funding.
These are real possibilities that are to some degree documented in the literature. They may help
explain why news organizations have been conducting polls of the American public for so long,
traditionally at substantial cost. The results of such polls may form the focus of coverage, of
course. They also might guide coverage, as editors and reporters seek to reflect the perceptions
and preferences of the public, e.g., whether the economy is getting better, whether one candidate
or another is doing better, and where more spending is needed or would be beneficial. It may
even impact the editorial positions various outlets take.
Contemplating these possibilities may make one wonder why we expect one-way flows from the
media to the public in the first place, and then why we would not expect flows to run primarily
from the public to the news. Of course, the patterns may depend on the variable of interest and
the circumstances, and on the nature and timing of events and issues, with the direction and magnitude of media (public) influence being greater (lesser) in some areas than others. Indeed, there
is reason to suppose that the demand for information by both the public and news providers is
important, which I consider below in the context of the three empirical cases.
On the Representation of Two-Way Flows
Disentangling the causality between media coverage and the public is difficult, as there is reason
to suppose that they impact each other pretty much simultaneously. We certainly can assess the
correlation between the two variables at particular points in time, but that leaves the direction(s)
of flow(s) unclear. In such circumstances, scholars often rely on cross-lag models, otherwise
known as Granger causality in time series analysis. This involves modeling each variable as a
function of its lagged value and the lagged value of the other variable(s), and is the approach we
adopt here. Given we are working with only two variables here – news content and public
opinion – the model is straightforward. See the depiction in Figure 1.
-- Figure 1 about here --
To be absolutely clear, the model implies the following two equations:
Public_t = a_0 + b_1 Public_{t-1} + b_2 Media_{t-1}     (1)
Media_t = a_1 + b_3 Media_{t-1} + b_4 Public_{t-1}.     (2)
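To make the setup concrete, the following is a minimal sketch of equations (1) and (2) estimated as two ordinary least squares regressions on simulated monthly series. It is written in Python with pandas and statsmodels purely for illustration (the replication files for this article use Stata); the series names, the simulated data, and the lag construction are assumptions, not the actual replication code.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 200  # hypothetical number of monthly observations
    public = np.zeros(n)
    media = np.zeros(n)
    for t in range(1, n):  # simulate two series that feed back on one another
        public[t] = 0.8 * public[t - 1] + 0.1 * media[t - 1] + rng.normal()
        media[t] = 0.4 * media[t - 1] + 0.3 * public[t - 1] + rng.normal()

    df = pd.DataFrame({"public": public, "media": media})
    df["public_lag"] = df["public"].shift(1)  # Public_{t-1}
    df["media_lag"] = df["media"].shift(1)    # Media_{t-1}
    df = df.dropna()

    # Equation (1): opinion on lagged opinion and lagged coverage
    eq1 = smf.ols("public ~ public_lag + media_lag", data=df).fit()
    # Equation (2): coverage on lagged coverage and lagged opinion
    eq2 = smf.ols("media ~ media_lag + public_lag", data=df).fit()
    print(eq1.params, eq2.params, sep="\n")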
The cross-lag approach is a conservative one, as results will tend to understate the influence of
media coverage on opinion and the influence of the public on coverage itself, that is, to the
degree the influence of the variables is more current and already reflected in the lag(s) of the
variable(s). But it still allows us to assess directional influence, where positive evidence is to be
taken seriously. This may explain its use in much previous research using time series (see
Freeman 1983; Box-Steffensmeier, et al 2014), and increasingly in analysis of panel data (see,
e.g. Hood, et al 2008; Lenz 2009; Evans and Chzhen 2016; Kearney 2017). [Footnote 4: In the latter, scholars commonly make reference to the cross-lagged panel model (CLPM) or the general cross-lagged panel model (GCLM).]
In addition to
estimating basic single-lag models, I turn to higher order vector autoregression (VAR) models,
which add additional lags, thus allowing a more general analysis that makes a difference in some
cases (Freeman, et al 1989). Note that it also is possible to exploit sequencing in measurement in
one of the three cases, where observations of the public and the news are not actually
simultaneous.
Some (Basic) Theoretical Expectations
We might expect relationships to depend on the variable of interest, so that the news (opinion)
matters more for some aspects of opinion (news coverage) than others. One leading suspect is
information, as the media provide information to us and we provide information to them. The
effect of information will depend on consumption, but also the degree to which it contrasts with
or reinforces information the public and media actors already have. Of critical importance is the
degree to which each has independent sources of information that can influence the other.
For this analysis, I analyze how the news relates to public opinion in three cases: economic
perceptions, candidate support, and policy preferences. These are three different but important
cases in the United States and elsewhere. Each is the regular focus of news coverage and also
pollsters’ questions, including those sponsored by news organizations themselves. Each also has
been the subject of much scholarly attention.
The first case is perceptions of economic conditions. Here, the news can provide information
about what is happening out in the world and so positively influence perceptions, leading people
to think that the economy is good (bad) because the news reports that it is. At the same time,
people have their own experiences and observations, some contextual, which may positively
influence media coverage – it is information for news providers. Thus, we may expect causal
flows in both directions. And the relationship may differ for retrospections and prospections,
possibly in (seemingly) counterintuitive ways. For example, prospections may influence the
news more than retrospections do, as reporters may need/want more information from us about
the future than the past.
The second case relates to public preferences for political candidates during election cycles.
Here, the news also provides information about performance over the campaign timeline and so
can positively influence preferences, where a good news day for a candidate leads the public to
be more supportive. But, even where people get most of their information from the media,
preferences may evolve independently of the news, for instance, because of how they respond to
(and discount) information (e.g. Zaller 1992). The ebb and flow of support for the candidates
thus provides an indication of candidate performance that can positively influence coverage too
(Kahn and Kenney 2002). Indeed, there is reason to suppose that the relationships are
conditioned by the evolution of preferences itself. Where preferences are changing, for instance,
during the nominating conventions, news (and the public) may have greater influence. Later on,
as preferences harden, there is reason to expect less media (and public) influence.
The third case concerns preferences for policy. Here, the news can provide information to the
public about government action but also about policy need, the latter of which may positively
influence preferences; indeed, the former – information about government action – appears to
have negative feedback effects on relative preferences, those that we usually measure, per the
thermostatic model (Soroka and Wlezien 2022). [Footnote 5: Here, the public adjusts its preferences for "more" government policy downward (upward) in response to increases (decreases) in government activity, though these relative preferences can – and do – shift as the public's (underlying) preferred levels of policy change (see Wlezien 1995; Soroka and Wlezien 2010; Wlezien and Soroka 2021).]
We can consider and test the possibility that
media coverage positively influences (relative) preferences independently of policy. But, there
also is reason to think that those preferences positively influence coverage, that is, where
providers of the news reflect what people (say they) want in the content, if only based on the
polls they conduct and cover.
An Empirical Analysis of Three Cases
I have provided basic, very general expectations regarding the three cases, each of which can be
assessed empirically. I take them in the order reflected above, beginning with economic
perceptions, about which there has been a good amount of previous research, some of which
reveals two-way flows, as discussed.
Economic News Tone and Public Perceptions
To measure the public’s economic perceptions, I use the University of Michigan’s Survey of
Consumers in the US. Importantly, the resulting data is neither collected nor sponsored by the
organizations that produce the news content used in the analysis. I rely on both economic
retrospections—assessments of the past year—and prospections—expectations for the next year.
The survey items focus on direction—better, same, worse—of broad “sociotropic” conditions,
specifically, “business conditions,” i.e., not personal “egocentric” ones. For the analysis, I use
monthly averages beginning in January, 1980, which is when the news data are available.
For news coverage, I rely on previous research measuring media tone of “economic” coverage
from the New York Times and Washington Post in the Lexis-Nexis database (Soroka, et al 2015).
That research employs the Lexicoder Sentiment Dictionary to identify “positive” and “negative”
words in domestic economy articles to create a basic indicator of net tone as follows: (Positive-
Negative)/Total. [Footnote 6: Variations on this simple measure work much the same.]
This measure is produced monthly beginning in January, 1980, when the
media content from Lexis-Nexis is first available, and ending in 2013.
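As a rough illustration of how such a tone series might be assembled, the sketch below computes (Positive - Negative)/Total for a handful of made-up articles and averages by month. It is only a guess at the mechanics: the column names, the use of total words as the denominator, and monthly averaging of article-level scores are my assumptions, not the coding used by Soroka, et al (2015).

    import pandas as pd

    articles = pd.DataFrame({
        "date": pd.to_datetime(["1980-01-03", "1980-01-17", "1980-02-02"]),
        "pos_words": [12, 5, 9],      # Lexicoder positive-word hits (hypothetical)
        "neg_words": [7, 11, 4],      # Lexicoder negative-word hits (hypothetical)
        "total_words": [450, 380, 500],
    })
    articles["net_tone"] = (articles["pos_words"] - articles["neg_words"]) / articles["total_words"]
    monthly_tone = articles.set_index("date")["net_tone"].resample("MS").mean()
    print(monthly_tone)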
-- Figures 2 and 3 about here --
The measures of public perceptions and media coverage are shown in Figures 2 and 3. Here we
see evidence that perceptions and coverage covary, at least to some degree, as monthly
correlations are 0.42 for retrospections and 0.38 for prospections. (Correlations using three
month-moving averages are slightly larger – 0.50 and 0.44, respectively.) All three variables
tend to hover around their mean values, though retrospections do drift more than prospections.
Importantly, Dickey-Fuller tests indicate that all variables are stationary, which is important, as
this makes it easier to estimate and interpret the results of the cross-lag equations (1 and 2) of
public perceptions and news coverage. [Footnote 7: That is, both sides of the equation are stationary, which means that it is balanced (see Banerjee, et al 1993; Maddala and Kim 1998; for a discussion relating to political science research, see Enns and Wlezien 2017). For results, see Supplemental Table S1.]
The results of doing so using standardized variables are
shown in Table 1.
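For readers who want to reproduce this kind of diagnostic, an augmented Dickey-Fuller check can be run along the following lines. This is again a Python illustration on a made-up stationary series; the tests reported here and in Supplemental Table S1 come from the replication files.

    import numpy as np
    from statsmodels.tsa.stattools import adfuller

    rng = np.random.default_rng(1)
    # a stationary AR(1) stand-in for retrospections, prospections, or news tone
    y = np.zeros(300)
    for t in range(1, 300):
        y[t] = 0.9 * y[t - 1] + rng.normal()

    stat, pvalue, *_ = adfuller(y)
    print(f"ADF statistic = {stat:.2f}, p-value = {pvalue:.3f}")  # a small p-value suggests stationarity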
The first two columns of the table contain separate regressions for the public’s retrospective and
prospective perceptions, and the third presents results for news tone. In the first column it is
clear from the coefficient (0.94) on the lagged dependent variable that retrospections in each
month tend to be closely related to those perceptions in the previous month. This is as we expect
given the pattern in Figure 2. Those perceptions also are positively associated with tone of
economic news coverage in the previous month, implying that as coverage becomes more (less)
positive, public evaluations of the economic past tend to increase (decrease). The estimated
coefficient also is statistically significant (p<.01). As discussed earlier, this is strong evidence of
media effects, as influence of the news already may be reflected in lagged retrospections, and so
there is reason to suppose that the estimate is a conservative one.
-- Table 1 about here --
The results for prospections in the second column of Table 1 are somewhat different. There is
evidence from the coefficient (0.85) on the lagged dependent variable that perceptions of the
future also carry over partially from month to month, though to a lesser degree than for
retrospections. There also is evidence of a positive relationship between lagged news tone and
economic prospections, though the coefficient (0.006) is not highly reliable (p=.42), making it
difficult to infer a real effect. Again, the estimate may understate the true impact of news on
those perceptions, but given the results of our analysis, it appears that the influence of the news
on economic perceptions depends on whether the public is looking backward or forward.
What about effects of public perceptions on news tone? The results in the third column of Table
1 show that lagged values of both retrospections and prospections influence current news tone
independently of lagged tone. The coefficient (0.40) for the latter still reveals some persistence
in coverage over time, but (much) less than what we observed for retrospections and
prospections. Given lagged tone, both of the lagged perceptions variables positively influence
current coverage; the coefficients not only are positive, they are statistically significant (p<.01).
These estimates also may understate the influence of the public on economic news, of course.
The evidence implies that the public leads the news more consistently than the news leads the
public. [Footnote 8: The results are somewhat sensitive to the inclusion of measures of the real economy from The Conference Board, particularly the index of leading economic indicators (LEI). Including that series in models weakens the effects of tone on retrospections and of prospections on tone, the latter of which is not surprising given that economic expectations are one of the leading indicators. The effects may be clearest from the impulse response functions, which I introduce below.]
The results in Table 1 are informative about direction and reliability, but it is difficult to say
much about the size of the effects. For that, we want to see how the estimates unfold over time.
Impulse response functions (IRFs) are particularly useful for this purpose. They depict the
effects of a shock in an independent variable at a particular point in time on the dependent
variable at subsequent time points given our specification. [Footnote 9: There are various ways to produce the IRFs, and I use the simplest formulation that follows the model specification and does not attempt to decompose contemporaneous effects.]
Before creating the IRFs, it is
important to note that diagnostic analyses indicate that a second-order vector autoregression (VAR) – using two lags of all variables – performs better than the first-order VAR estimated in
Table 1. Results of that model are quite similar (see Supplemental Table S2), though the
specification does make IRFs more useful as a tool. Using standardized variables provides a
basis for comparing the size of effects. Figure 4 displays the resulting IRFs. (Plots based on
VARs including leading economic indicators (LEI) are in Supplemental Figure S1.)
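A sketch of this step, using the statsmodels VAR tools, is below. The variable names and simulated data are placeholders; the point is simply the sequence of operations – standardize, check lag length, fit a second-order VAR, and trace out simple (non-orthogonalized) impulse responses, consistent with footnote 9.

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.api import VAR

    rng = np.random.default_rng(2)
    n = 300
    data = np.zeros((n, 3))   # stand-ins for retrospections, prospections, tone
    for t in range(1, n):
        data[t] = 0.7 * data[t - 1] + rng.normal(size=3)
    df = pd.DataFrame(data, columns=["retro", "pro", "tone"])

    std = (df - df.mean()) / df.std()               # standardized variables, as in the text
    model = VAR(std)
    print(model.select_order(maxlags=6).summary())  # lag-length diagnostics

    results = model.fit(2)   # second-order VAR
    irf = results.irf(12)    # responses traced over 12 months
    irf.plot(orth=False)     # simple IRFs, no contemporaneous decomposition

    # Granger-style test: do lagged perceptions help predict tone?
    print(results.test_causality("tone", ["retro", "pro"], kind="f").summary())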
-- Figure 4 about here --
The left-hand column shows the effects of a shock in news tone on retrospections and
prospections and the right-hand column shows the effects of shocks in retrospections and
prospections on tone. Thus, in the upper left frame, a one standard deviation shock in news tone
produces an approximate 0.10 standard deviation shift in retrospections over a few months. In
the upper right frame, a one standard deviation shock in retrospections has a much larger
(standardized) impact on tone, particularly in the ensuing month, which decays over time. In the
bottom frames, we can see that tone has no discernible effect on prospections over time but the
latter do substantially structure the former over time. This is as we expect from Table 1, but
notice that the IRF in the lower right-hand frame indicates that effects of prospections on tone
are quite persistent, peaking after 3-4 months and decaying slowly thereafter. Public economic
perceptions appear to matter more for the news than the news matters for those perceptions.
The foregoing analysis provides what seems to be strong evidence of public influence on news
coverage. Economic perceptions are but one case, of course. Let us now consider another one
where there may be reason to think that the media matter more given previous research (see
Sides and Vavreck 2014): electoral preferences.
Candidate News Tone and Electoral Support
For this analysis, I analyze news coverage and voter preferences during the 2016 US
presidential election campaign. The preference time series is based on hundreds of national
polls from the Huffington Post that register how respondents would vote “if the election were
held today" or the like, conducted during the election year. While some of the polls are
sponsored by news organizations, many are not, and the measures I use aggregate across all
survey houses. Where multiple results for different universes were reported for the same
polling organizations and dates, data for the universe that appears to best approximate the
voting electorate is used, i.e., a sample of likely voters over a sample of registered voters. Most
importantly, all overlap in the polls conducted by the same survey houses for the same
reporting organizations is removed, e.g., where a survey house operates a tracking poll and
reports 3-day moving averages, I only use poll results for every third day. This leaves 308
separate national polls during the election year, 100 after the unofficial Labor Day kickoff of
the general election campaign.
It is not entirely clear how to aggregate polls for this analysis, as I do not have separate daily
readings of polls. That is, polls almost always are conducted over multiple days, on average 4.5
days (s.d = 1.8). That means that we cannot tell for sure what the public’s preferences are on
each day, let alone confirm the release of the data, the latter of which may be of relevance for
those interested in media responsiveness. To measure preferences on day t, we could exclude all
polls that were in the field after that day, i.e., t+1, t+2, etc. This might help isolate opinion
effects on media at t+1, as we could be sure that preferences were not measured on that day or
any day after. However, estimating a cross-lag model with the measure will conceal the effects
of measured preferences from earlier days, t-1, t-2, and so on, that are reflected in media
coverage at time t-1 (and possibly before). In a similar way, to assess media effects (from time t)
on preferences at time t+1, we could exclude all polls in the field prior to day t+1, which will
tend to conceal the influence of media at each point in time to the extent our measure captures
opinion change in subsequent days (and media effects do not persist). It should come as little
surprise that this approach to estimating opinion and media effects tends to find very little, which
I describe in greater detail below.
Given these concerns, for the main analysis, I opt to create opinion measures by pooling all polls
on each day they are in the field, weighting each by the inverse of the number of days in the
field, i.e., weighting results from 3-day polls by one-third, results from 4-day polls by one-fourth,
and so on. Before pooling, I create house-adjusted poll readings. [Footnote 10: For this, I date the polls by the middle day of the period the survey is in the field, and then estimate a regression containing dummy variables for each date and for each survey house with three or more polls in the data set, and then subtract out the house coefficients.]
This pooling approach has
the effect of dampening survey error, especially sampling error, and allows almost daily readings
over the final 200 days of the 2016 campaign. Readings on successive days clearly are not truly
independent, of course, as they will capture a lot of the same things by definition, which is of
consequence for our analysis of dynamics. Since the readings reflect preferences on surrounding
dates, they also will tend to inflate estimates of both media and opinion effects, and so results of
our analyses should be viewed with caution. That said, they do provide an initial foundation on
which we can add results of other analyses exploring temporal relationships and using alternative
measures of opinion.
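The sketch below illustrates the pooling rule just described: each poll contributes a reading to every day it was in the field, weighted by the inverse of the number of field days, and the daily figure is the weighted average of whatever is available that day. The column names and the tiny example data are hypothetical, and the house-adjustment step from footnote 10 is omitted.

    import pandas as pd

    polls = pd.DataFrame({
        "start": pd.to_datetime(["2016-06-01", "2016-06-02"]),
        "end": pd.to_datetime(["2016-06-03", "2016-06-05"]),
        "clinton_lead": [4.0, 6.0],   # hypothetical house-adjusted readings
    })

    rows = []
    for _, p in polls.iterrows():
        days = pd.date_range(p["start"], p["end"], freq="D")
        weight = 1.0 / len(days)      # e.g., a 4-day poll gets weight one-fourth
        for day in days:
            rows.append({"day": day, "lead": p["clinton_lead"], "w": weight})
    long = pd.DataFrame(rows)

    # weighted average of all readings in the field on each day
    daily = long.groupby("day").apply(lambda g: (g["lead"] * g["w"]).sum() / g["w"].sum())
    print(daily)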
-- Figure 5 about here --
To measure media coverage, I again rely on a measure from previous research that captures the
net tone of news relating to “Clinton” and “Trump,” this time in nine newspapers: besides the
New York Times and Washington Post, the measure includes articles from the Chicago Tribune,
Denver Post, Houston Chronicle, LA Times, Philadelphia Inquirer, St. Louis Post-Dispatch, and
USA Today. To produce the measure, the Lexicoder Sentiment Dictionary is employed on
sentences about candidates to produce a daily measure of net Clinton-Trump tone based on the
net positive-negative words in those sentences. The particular measure follows Lowe et al.
(2011) and Proksch et al. (2016), and is the log of positive and negative counts, specifically:
log[(positive counts + .05) / (negative counts + .05)]. This is an empirical logit, slightly smoothed towards
zero. For more information, see Wlezien and Soroka (2019).
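Read literally, the measure is the smoothed log of the ratio of positive to negative counts, something like the following. The function name and the example counts are mine, and whether counts are tallied per sentence, per day, or per candidate pairing is not spelled out here; see Wlezien and Soroka (2019) for the actual construction.

    import numpy as np

    def smoothed_log_tone(pos_counts: float, neg_counts: float) -> float:
        # log[(positive + .05) / (negative + .05)], per the description above
        return np.log((pos_counts + 0.05) / (neg_counts + 0.05))

    print(smoothed_log_tone(30, 25))   # modestly positive net tone on a hypothetical day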
The measures of electoral support and news tone are shown in Figure 5 for the final 200 days of
the election cycle, when we have regular daily readings of both electoral preferences and the
news. [Footnote 11: The news measures still are missing on six days during this period, which I simply ignore for this analysis.]
There is a positive relationship between the two, though only a modest 0.21 based on the
daily numbers. (It is a slightly larger 0.29 using three-day moving averages.) Both variables
also appear to have fairly constant means, i.e., stationarity, which is supported by diagnostic
tests. [Footnote 12: Dickey-Fuller tests are confirming of level stationarity for the preference and media measures – see Supplemental Table S3. Note that there is a slight positive trend in both time series during the fall campaign that requires adjustment for analysis in that specific period; those trends are not evident over the longer campaign. Also note that this is of little consequence for results of analysis during the fall, as discussed below.]
This is important, as it allows us to directly estimate the cross-lag equations of electoral
preferences and news tone for the period, the results of which are shown in Table 2.
-- Table 2 about here --
Results point to two-way flows between preferences and news tone. In the equation for
preferences in the first column of Table 2, the coefficient for lagged tone is positive and
significantly different from 0; in the equation for tone, the coefficient for lagged preferences also
is positive and statistically significant. These results imply that preferences and the news
positively influence each other over time, which is important. They do not clearly reveal how
much impact they have on each other. For this, we can employ impulse response functions
(IRFs) once more. As for economic perceptions and media coverage, we assess the impact of a
one standard deviation shock in each of our variables on the other. The resulting IRFs are shown
in Figure 6. The figure illustrates stark differences in the impact of the variables. The simulated
impact of preferences on news tone (in the right-hand frame of Figure 6) is about six times the
impact of the latter on the former (in the left-hand frame). While each appears to influence the
other, the public looks to matter more for the news than the reverse.
-- Figure 6 about here --
The results in Figure 6 are based on the entire 200-day period, and it may be that the influence of
our variables differs across that timeline. As discussed earlier, it may be that the news (and the
public) are more influential during some periods than others, e.g., being more pronounced during
the convention “season” than during the fall campaign that follows. To assess this possibility, I
break up the timeline into three segments: one before the conventions, one between the first
(Republican) convention and Labor Day, and the third for the post-Labor Day period. Figure 7
plots the resulting IRFs for the impact of news on preferences in the different periods and Figure
8 shows corresponding IRFs for preferences on the news.
The upper left-hand frame of Figure 7 is identical to the one on the left side of Figure 6. This
provides a basis for comparison with the other frames – before the conventions in the upper
right, during the convention season in the lower left, and the fall campaign in the lower right.
The plots indicate that the influence of news content varies during the campaign, and is only
clear during the conventions. There is a slight, insignificant hint of an effect before that time and
no effect whatsoever during the fall. While there are real media effects on electoral preferences,
therefore, these appear to be confined to the period during which the campaigns are most in
control of their messages and also most influential (Erikson and Wlezien 2012).
-- Figure 7 about here --
We see a somewhat different pattern in Figure 8. Here, there is evidence of public effects on the
news prior to the conventions in the upper right frame that continues through the convention
season depicted in the lower left. Thereafter, preferences do not matter at all, much as we saw
for media effects in Figure 7. [Footnote 13: As there are positive trends in both the news and electoral preference variables during the fall campaign in 2016 (see footnote 12), it is important to mention that detrending the two variables does not significantly alter the relationships in that period displayed in Figures 7 and 8.]
The lack of a connection during the latter period may not
surprise, as preferences usually have crystallized by that point in time and this is known – at least
knowable – to news providers. There just is not much variation in either electoral support or
news content during the fall (see Figure 5). Before Labor Day, the public appears to influence
media reporting, even as it is influenced by it, at least during the period of the conventions. [Footnote 14: As can be seen in Supplemental Table S4, adding conventions into the models seriously dampens the estimated effects of news tone on public preferences.]
The foregoing analysis provides additional evidence of public influence; indeed, it may appear
even more compelling than what we found for economic perceptions. As discussed earlier,
however, there is reason to be cautious here given the difficulties of getting the temporal
sequence right. [Footnote 15: Consider that analyses of polls that ended (started) before (after) each day show weaker (greater) effects of polls (tone) on tone (polls), particularly during the convention season. This does not come as a complete surprise, as it is what we would expect if the news reacts to changes in preferences that occurred while polls that end on each particular day were in the field, i.e., in the preceding days. Recall that polls during the 2016 campaign were in the field an average of 4.5 days. Correlations between tone and the two measures of preferences – those that end and start on each day – over the final 200 days of the timeline are quite weak, only 0.04 and 0.13, respectively; the correlation between the two preferences measures during the period is a modest 0.36.]
Additional analyses provide a hint of greater media effects, e.g., cross-
correlations using the pooled polls from our main analysis above indicate that the relationship
between the news and preferences increases slightly with lags of the former. This mostly reflects
what happens during the convention season, as the evidence is weaker beforehand and
particularly afterward, and there actually is some suggestion during the fall campaign that the
public leads the news over longer stretches of time. While not entirely contrary or definitive,
these additional analyses do raise questions, so it probably is best to view the foregoing results as
offering support for public influence on electoral coverage, not definitive evidence of such an
effect. That they largely match what we found for economic perceptions is reassuring, but it is
just one more case. Let us now consider another: public preferences for spending.
-- Figure 8 about here --
Spending News Coverage and Public Preferences
Here we assess the relationship between news about government spending and the public’s
budgetary preferences, again in the US. I focus the analysis on three salient policy domains
where the public has been shown to respond thermostatically to spending – defense, welfare, and
health (Wlezien 1995; Soroka and Wlezien 2022).
16
The measure of preferences is based on data
from the General Social Survey (GSS), which regularly asks the US public about their relative
preferences for spending, that is, their support for "more" or "less" spending. [Footnote 17: Technically, the question wording asks respondents whether we are spending "too little" or "too much" or "about the right amount" in various areas. The GSS has asked these questions in February-March of all years until 1994 (except 1979, 1981 and 1992) and in even-numbered years since that time. Due to the COVID-19 pandemic, the regular GSS was postponed until late 2020, which is part of the reason I end the analysis in 2019.]
As for economic
perceptions, the polling organization producing the public opinion data we use is independent of
the news media, which permits stronger (if still limited) claims of actual responsiveness. To
measure public preferences, I calculate the net support for spending, which equals the percentage
favoring more minus the percentage favoring less in each year. [Footnote 18: In years that poll data are missing (see footnote 17), linear interpolation is used.]
It is a commonly used
measure, one that is perfectly correlated with the mean support for spending over time, but
provides a more intuitive empirical referent.
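A minimal sketch of the measure, with hypothetical percentages, is below: net support is the percent saying "too little" (more spending) minus the percent saying "too much" (less spending), and, per footnote 18, years without a GSS reading are filled by linear interpolation.

    import pandas as pd

    gss = pd.DataFrame(
        {"pct_more": [62.0, None, 55.0], "pct_less": [11.0, None, 18.0]},
        index=[1996, 1997, 1998],   # hypothetical years; 1997 has no GSS reading
    )
    net_support = (gss["pct_more"] - gss["pct_less"]).interpolate(method="linear")
    print(net_support)   # 1996: 51.0, 1997: 44.0 (interpolated), 1998: 37.0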
-- Figure 9 about here --
The measure of media coverage differs from that for the first two cases, in that it is not based on
tone but on the nature of coverage, following an approach introduced by Neuner, et al (2019).
They employ three specific dictionaries to identify sentences relating to spending change in each
of the spending domains. One dictionary focuses on the policy domain, a second on spending,
and a third on change, and they are applied jointly to identify sentences in each area that concern
spending and also capture direction. [Footnote 19: Supervised machine learning produces similar results (also see Dun, et al 2021).]
For this analysis, I rely on content from 17 newspapers:
the Atlanta Journal-Constitution, Arkansas Democrat-Gazette (Little Rock, AR), Arizona
Republic (Phoenix), Boston Globe, Chicago Tribune, Denver Post, Houston Chronicle, Los
Angeles Times, Star Tribune (Minneapolis, MN), New York Times, Orange County Register
(California), Philadelphia Inquirer, St. Louis Post-Dispatch (Missouri), Seattle Times, Tampa
Bay Times, USA Today, and the Washington Post. [Footnote 20: TV news content from the six leading broadcast and cable networks reveals similar patterns (see Soroka and Wlezien 2022).]
Each extracted sentence is coded as
indicating either upward change (scored as +1), downward change (scored as -1), or no change
(scored as 0), and these codes are then summed for each fiscal year, which runs from October 1 through September 30 and is dated in the year it ends, e.g., fiscal year 2021 ended on September 30, 2021. The measure takes a positive value when upward change sentences outnumber downward
change sentences and a negative value in years in which downward change sentences outnumber
upward change sentences. It also captures magnitude, depending on the number of sentences
indicating increases and decreases. Although clearly a basic measure of the media signal, it is an
intuitive starting point. It allows observations for each fiscal year from 1980 through 2018.
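The fiscal-year bookkeeping can be illustrated as follows, with made-up sentence codes: each sentence is scored +1, -1, or 0, assigned to the fiscal year in which its publication date falls (dated by the year the fiscal year ends), and the scores are summed. The column names and dates are hypothetical.

    import pandas as pd

    sentences = pd.DataFrame({
        "date": pd.to_datetime(["2017-11-02", "2018-03-15", "2018-10-20"]),
        "direction": [1, -1, 1],   # hypothetical +1/-1/0 sentence codes
    })
    # October-September fiscal year, dated by the calendar year in which it ends
    sentences["fiscal_year"] = sentences["date"].dt.year + (sentences["date"].dt.month >= 10).astype(int)
    signal = sentences.groupby("fiscal_year")["direction"].sum()
    print(signal)   # FY2018: 0, FY2019: 1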
Figure 9 plots the measures of public preferences and the media signal in the three spending
domains for fiscal years 1976 through 2018. The correlations between the two are positive in
each instance – 0.45 for defense, 0.49 for welfare, and 0.53 for health. Although the variables
drift up and down, there is reason to expect both to be stationary, which is borne out by
diagnostic analyses. [Footnote 21: The expectations are quite strong. To begin with, consider that the thermostatic model implies that relative preferences (R) represent the linear combination of the public's preferred levels of policy (P*) and policy itself (P), the latter two of which are expected to be cointegrated, where P follows P* (see Soroka and Wlezien 2010). R thus changes because P* and/or P changes, and our media signal measure is designed to capture the latter, and so we expect it to be stationary, which is consistent with previous analyses of media coverage showing that news shocks decay. Results are not perfectly clear but do support these theoretical expectations – see Supplemental Table S5. Note that there is a slight upward trend in media coverage of defense (and health) spending that has little consequence for the results of the analysis that follows. Detrended news variables appear stationary (MacKinnon p-values = .07 and .01) and the estimated effects of news and opinion using detrended variables are not significantly or substantively different from those using the raw measures reported below.]
This allows us to turn to the results of the cross-lag models of preferences
and the news, again using standardized variables, here by domain, results of which are
summarized in Table 3.
-- Table 3 about here --
The results in the table are based on pooled analysis of the welfare, health, and defense spending
domains, including fixed effects for welfare and health. There we can see that the cross-lag
coefficients both are positive, but only the estimated effect of preferences on spending news in
the second column is statistically significant (p<.01). This implies that the news follows the
public but does not lead. That conclusion is further supported by analysis that includes spending
change on the right-hand side of the preference equation (see Supplemental Table S6), which is
important because it may be that news has two effects: one that informs the public about
spending, and so produces negative, thermostatic feedback on relative preferences, and another
that positively influences those preferences by structuring the (underlying) demand for spending.
It appears based on our analysis that the news serves to mostly inform the public about what
policymakers do, and otherwise reflects what the public wants.
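For concreteness, the pooled cross-lag setup behind Table 3 might be written as below: lags are taken within each spending domain, and domain dummies (fixed effects for welfare and health, with defense as the reference) enter both equations. The data frame, values, and column names are placeholders rather than the actual replication code, which uses Stata.

    import pandas as pd
    import statsmodels.formula.api as smf

    # assumed long-format table: one row per domain-year, standardized series
    panel = pd.DataFrame({
        "domain": ["defense"] * 5 + ["welfare"] * 5 + ["health"] * 5,
        "year": list(range(2000, 2005)) * 3,
        "net_support": [0.1, 0.3, -0.2, 0.0, 0.4, 0.5, 0.2, 0.1, -0.1, 0.3,
                        0.2, 0.4, 0.1, 0.0, -0.3],
        "news_signal": [0.0, 0.2, -0.1, 0.1, 0.3, 0.4, 0.1, 0.0, -0.2, 0.2,
                        0.1, 0.3, 0.2, -0.1, -0.2],
    }).sort_values(["domain", "year"])

    # lags computed within each domain
    panel["prefs_lag"] = panel.groupby("domain")["net_support"].shift(1)
    panel["news_lag"] = panel.groupby("domain")["news_signal"].shift(1)
    panel = panel.dropna()

    # cross-lag equations with fixed effects (defense is the reference category)
    prefs_eq = smf.ols("net_support ~ prefs_lag + news_lag + C(domain)", data=panel).fit()
    news_eq = smf.ols("news_signal ~ news_lag + prefs_lag + C(domain)", data=panel).fit()
    print(prefs_eq.params, news_eq.params, sep="\n")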
The resulting IRFs for each of the spending domains based on by-domain VAR analyses
highlight the difference in news and opinion effects. Figure 10 shows no real influence of
coverage on preferences in any of the three domains – defense, welfare, or health. Figure 11, by contrast, reveals
positive effects of the public on news content in all three domains, ones that are significantly
greater than 0 for both health and welfare. Here, the public once again seems to lead the content
of the news of spending, though budgetary policy itself also matters (also see Soroka and
Wlezien 2022).
-- Figures 10 and 11 about here --
Discussion
Public opinion appears to “cause” news coverage in each of the three cases considered here. The
reverse also holds but less frequently, and often to a lesser degree. The media effects that are
evident are isolated, e.g., news tone during the 2016 campaign only reliably influenced candidate
preferences during the conventions – a period of seemingly basic mediation. Media effects also
are comparatively smaller than those we see flowing from the public to the news. It may be that
media influence on the public occurs more quickly and so is not as evident – as that of the public
on the media – using cross-lag models. This simply underscores the fundamental limits of such
analyses; they only take us so far, though it is noteworthy that the results are fairly consistent
across the three different cases, using three different temporal intervals.
Media and public effects do vary in understandable ways, seemingly reflecting the degree to
which they provide information to the other. Media “influence” appears to be greatest where
coverage provides information to the public, e.g., about the party conventions during election
campaigns. Public “influence” also appears greatest where expressed opinion provides
information to media actors, e.g., public preferences for policy. These patterns are suggestive
about how the need/demand for information structures the direction (and size) of effects, and this
looks to be a fruitful avenue for future research.
The analysis considers just three cases in the US. The pattern may not hold more generally in
the US or elsewhere. Differences in media themselves may matter, both within the US and
comparatively as well. And differences in audiences among traditional, “legacy” outlets and
new media may have consequences for media coverage and its effects. Differences across
individuals also almost certainly matter. Here, it is important to keep in mind the pervasiveness
of “parallel publics” in perceptions and preferences. This holds for a variety of groups, including
partisan ones, even as polarization appears to have increased. (See Supplemental Figure S2.) [Footnote 22: Those data are drawn from Soroka and Wlezien (2022). The mean correlation between spending preferences of different partisan groups depicted in Figure S2 is 0.80.]
People evidently are not completely segregated in their media sources, or else content across the
sources different partisans consult tends to flow together over time. And the patterns we observe
may have changed over time and continue to change, particularly as the media environment
evolves.
There thus is reason to expect a lot of heterogeneity, and so I consider the results as only
beginning to unravel the connections between media and the public – a lot of research remains. I
see them as providing a step forward, maybe a couple of steps. They mostly serve to underscore
what seems to me (and some others) a fairly obvious possibility: Public opinion matters for the
substance of the news. It is a possibility that should be regularly entertained and, where possible,
tested, not rejected before the fact, based solely on assumptions. This is important for both
observational analyses, where it can be explicitly addressed, and experimental ones, where it
may be more difficult. In the latter, it still is possible to recognize – and even account for – the
potential endogeneity of media content and consumption (Arceneaux & Johnson 2013; de
Benedictis-Kessner et al 2019), i.e., that neither are, in reality, randomly assigned.
Acknowledgments: This article is based on a presidential address delivered at the Annual Meeting of
the Southern Political Science Association in San Antonio, January 14, 2022, and also at Sciences Po,
Paris, Stanford University, Technical University of Munich (TUM), and the University of Texas at
Austin. I thank various coauthors for related research, particularly Stuart Soroka but also Lindsay Dun,
Fabian Neuner, and Dominik Stecula, and numerous others for helpful comments, including Cristina
Adams, Bethany Albertson, Kevin Arceneaux, Jon Bond, Ryan Carlin, Peter Enns, Gauthier Fally,
Kerstin Hamann, Hans Hassell, Steven van Hauwaert, Robert Howard, Mark Hurwicz, Jon Krosnick,
Steven Kull, Jonathan Ladd, Mike Liveright, Xiaobo Lu, Jared McDonald, Elliott Morris, Natalie
Neufeld, Dan Nielson, Amy Pond, Franzi Pradel, Rebecca Reid, Matthew Singer, Bat Sparrow, Zeynep
Somer-Topcu, Yannis Theocharis, Sierra Davis Thomander, Lee Walker, Stefanie Walter, Daisy Ward,
Amanda Wintersieck, Jan Zilinsky, and especially Johanna Dunaway and John Gerring.
References
Altheide, David L. 1997. “The News Media, the Problem Frame, and the Production of Fear.”
Sociological Quarterly 38(4): 647–68.
Arceneaux, Kevin and Martin Johnson. 2013. Changing Minds or Changing Channels?
Chicago: University of Chicago Press.
Banerjee, Anindya, Juan Dolado, John W. Galbraith and David F. Hendry. 1993. Co-Integration,
Error Correction, and the Econometric Analysis of Non-Stationary Data. Oxford: Oxford
University Press.
Barberá, Pablo, Andreu Casas, Jonathan Nagler, Patrick J. Egan, Richard Bonneau, John T. Jost,
and Joshua A. Tucker. 2019. “Who Leads? Who Follows? Measuring Issue Attention and
Agenda Setting by Legislators and the Mass Public Using Social Media Data.” American
Political Science Review 113 (4): 883–901. https://doi.org/10.1017/S0003055419000352.
Baumgartner, Frank and Bryan Jones. 1993. Agendas and Instability in American Politics. Chicago: University of Chicago Press.
Beland, Daniel. 2010. “Reconsidering Policy Feedbacks: How Policies Affect Politics.”
Administration and Society 42(2): 568-590.
Beland, Daniel and Edella Schlager. 2019. “Varieties of Policy Feedback: Looking Backward
and Moving Forward.” Policy Studies Journal 47(2): 184–205.
Blood, Deborah J., and Peter C. B. Phillips. 1995. “Recession Headline News, Consumer
Sentiment, the State of the Economy and Presidential Popularity: A Time Series Analysis
1989–1993.” International Journal of Public Opinion Research 7(1): 2–22.
Blood, Deborah J., and Peter C. B. Phillips. 1997. “Economic Headline News on the Agenda:
New Approaches to Understanding Causes and Effects.” In Communication and Democracy:
Exploring the Intellectual Frontiers in Agenda-Setting Theory, ed. Maxwell McCombs, Donald
L. Shaw, and David Weaver. Mahwah, NJ: Lawrence Erlbaum, 97–113.
Box-Steffensmeier, Janet, John Freeman, Matthew Hitt, and Jon Pevehouse. 2014. Time Series
Analysis for the Social Sciences. Cambridge: Cambridge University Press.
Boydstun, Amber. 2013. Making the News. Chicago: University of Chicago Press.
Campbell, Andrea. 2012. “Policy Makes Mass Politics.” Annual Review of Political Science 15:
333-351.
Card, Dallas, Amber E Boydstun, Justin H Gross, Philip Resnik, Noah A Smith. 2015. “The
Media Frames Corpus: Annotations of Frames across Issues.” Proceedings of the 53rd Annual
Meeting of the Association for Computational Linguistics and the 7th International Joint
Conference on Natural Language Processing (Short Papers), pages 438–444, Beijing, China,
July 26-31, 2015.
Carpenter, Daniel. 2002. “Groups, the Media, Waiting Costs, and FDA Drug Approval.”
American Journal of Political Science 46(2): 490-505.
Converse, Philip E. 1970. "Attitudes and Non-Attitudes: Continuation of a Dialogue." In Edward
R. Tufte (ed.), The Quantitative Analysis of Social Problems. Reading, Mass: Addison-Wesley.
Daku, Mark, Stuart Soroka and Lori Young. 2015. Lexicoder. Software available at
lexicoder.com.
de Benedictis-Kessner, Justin, Matthew A. Baum, Adam J. Berinsky, and Teppei Yamamoto. 2019. “Persuading the
Enemy: Estimating the Persuasive Effects of Partisan Media with the Preference-Incorporating
Choice and Assignment Design.” American Political Science Review 113(4): 902-916.
De Boef, Suzanna, and Paul Kellstedt. 2004. “The Political (and Economic) Origins of
Consumer Confidence.” American Journal of Political Science 48(4): 933–49.
Deutsch, Karl W. 1966. The Nerves of Government: Models of Political Communication and
Control. New York: Free Press.
Dilliplane, Susanna. 2014. “Activation, Conversion, or Reinforcement? The Impact of Partisan News
Exposure on Vote Choice.” American Journal of Political Science 58(1): 79-94.
Dun, Lindsay, Stuart Soroka, and Christopher Wlezien. 2021. “Dictionaries, Supervised
Learning, and Media Coverage of Public Policy.” Political Communication 38 (1-2): 140-158.
https://doi.org/10.1080/10584609.2020.1763529.
Dunaway, Johanna. 2021. “Polarisation and Misinformation.” In The Routledge Companion to
Media Disinformation and Populism, ed. H. Tumber and S. Waisbord. New York, NY:
Routledge.
Dunaway, Johanna and Doris Graber. 2022. Mass Media and American Politics. Thousand
Oaks, CA: CQ Press.
Dunaway, Johanna and Jaime Settle. 2021. “Opinion Formation and Polarization in the News
Feed Era: Effects from Digital, Social, and Mobile Media.” In Cambridge Handbook of
Political Psychology. D. Osborne and C. Sibley, Eds., New York, NY: Cambridge University
Press.
Enns, Peter and Christopher Wlezien. 2017. “Understanding Equation Balance in Time Series
Regression.” The Political Methodologist 24:2-12.
Erikson, Robert S., Michael B. MacKuen and James A. Stimson. 2002. The Macro Polity.
Cambridge: Cambridge University Press.
Evans, Geoffrey and Yekaterina Chzhen. 2016. “Re-evaluating the Valence Model of Political
Choice.” Political Science Research and Methods 4(1):199-210.
Fan, David. 1993. “Predictions of Consumer Confidence Sentiment from the Press.” Presented at
the annual meeting of the American Association of Public Opinion Research.
Freeman, John. 1983. “Granger Causality and the Time Series Analysis of Political
Relationships.” American Journal of Political Science 27(2):327-358.
Freeman, John, John Williams, and Tse-Min Lin. 1989. “Vector Autoregression and the Study
of Politics.” American Journal of Political Science 33(4):842-877.
Geer, John. 1996. From Tea Leaves to Public Opinion Polls. New York: Columbia University
Press.
Gentzkow, Matthew and Jesse M. Shapiro. 2010. “What Drives Media Slant? Evidence from US Daily
Newspapers.” Econometrica 78(1): 35-71.
Goidel, Kirby and Ronald Langley. 1995. “Media Coverage of the Economy and Aggregate
Economic Evaluations.” Political Research Quarterly 48(1): 313–328.
Grimmer, Justin and Brandon M. Stewart. 2013. “Text as Data: The Promise and Pitfalls of
Automatic Content Analysis for Political Texts.” Political Analysis 21(3): 267-297.
Hamilton, James T. 2004. All the News that’s Fit to Sell: How the Market Transforms
Information Into News. Princeton University Press.
Haller, H. Brandon, and Helmut Norpoth. 1997. “Reality Bites: News Exposure and Economic
Opinion.” Public Opinion Quarterly 61: 555–75.
Hindman, Matthew. 2018. The Internet Trap. Princeton: Princeton University Press.
Hollanders, David and Rens Vliegenthart. 2011. “The Influence of Negative Newspaper
Coverage on Consumer Confidence: The Dutch Case.” Journal of Economic Psychology 32(3):
367–73.
Hood, MV, Quentin Kidd, and Irwin Morris. 2008. “Two Sides of the Same Coin? Employing
Granger Causality Tests in a Time Series Cross-Section Framework.” Political Analysis
16(3):324-344.
Iyengar, Shanto. 2017. “A Typology of Media Effects.” In Kathleen Hall Jamieson and Kate
Kenski (eds.), The Oxford Handbook of Political Communication. Oxford: Oxford University
Press.
Iyengar, Shanto, Helmut Norpoth, and Kyu S. Hahn. 2004. “Consumer Demand for Election News: The
Horserace Sells.” Journal of Politics 66(1): 157–175.
Jamieson, Kathleen Hall. 2017. “Creating the Hybrid Field of Political Communication.” In
Kathleen Hall Jamieson and Kate Kenski (eds.), The Oxford Handbook of Political
Communication. Oxford: Oxford University Press.
Jennings, Will. 2009. “The Public Thermostat, Political Responsiveness and Error Correction:
Border Control and Asylum in Britain, 1994-2007.” British Journal of Political Science
39:847-870.
John, Peter. 2006. “Explaining Policy Change: The Impact of the Media, Public Opinion, and
Political Violence on Urban Budgets in England.” Journal of European Public Policy 13(7):
1053-1068.
Jurka, Timothy P., Loren Collingwood, Amber Boydstun, Emiliano Grossman, and Wouter van
Atteveldt. 2012. “RTextTools: Automatic text classification via supervised learning.”
http://cran.r-project.org/web/packages/RTextTools/index.html.
Kearney, Michael. 2017. “Cross-Lagged Panel Models.” In M.R. Allen (ed.), Sage
Encyclopedia of Communication Research Methods. Thousand Oaks: Sage Publications.
Lawrence, Regina G. 2000. “Game-Framing the Issues: Tracking the Strategy Frame in Public
Policy News.” Political Communication 17(2): 93-114.
Lenz, Gabriel. 2009. “Learning and Opinion Change, Not Priming: Reconsidering the Priming
Hypothesis.” American Journal of Political Science 53(4): 821-837.
Levendusky, Matthew. 2013. “Partisan Media Exposure and Attitudes toward the
Opposition.” Political Communication 30(4): 565-581.
Maddala, G.S. and In-Moo Kim. 1998. Unit Roots, Cointegration, and Structural Change. 1st ed.
New York: Cambridge University Press.
McCombs, Maxwell W., and Donald L. Shaw. 1972. “The Agenda-Setting Function of Mass
Media.” Public Opinion Quarterly 36(2):176-187.
Morgan, Stephen L. and Minhyoung Kang. 2015. “A New Conservative Cold Front? Democrat
and Republican Responsiveness to the Passage of the Affordable Care Act.” Sociological
Science 2: 502-526.
Neuner, Fabian, Stuart Soroka, and Christopher Wlezien. 2019. “Mass Media and Public
Responsiveness to Policy.” International Journal of Press/Politics 24(3):269-292.
Page, Benjamin I. and Robert Y. Shapiro. 1992. The Rational Public: Fifty Years of Trends in
Americans’ Policy Preferences. Chicago: University of Chicago Press.
Patterson, Thomas E. 2016. “Pre-Primary News Coverage of the 2016 Presidential Race:
Trump’s Rise, Sanders’ Emergence, Clinton’s Struggle.” Shorenstein Center on Media,
Politics and Public Policy, Campaigns, Elections and Parties Papers, June 13, 2016.
https://shorensteincenter.org/pre-primary-news-coverage-2016-trump-clinton-sanders/
Popkin, Samuel and Michael Dimock. 1999. “Political Knowledge and Citizen Competence.” In
Stephen Elkin and Karol Soltan (eds.), Citizen Competence and Democratic Institutions.
University Park: Pennsylvania State University Press.
Prior, Markus. 2013. “Media and Political Polarization.” Annual Review of Political Science
16:101-127.
Reuning, Kevin and Nick Dietrich. 2018. “Media Coverage, Public Interest, and Support in the
2016 Republican Invisible Primary.” Working paper, SSRN. Accessed July 8, 2022.
Roberts, Margaret E., Brandon M. Stewart, Dustin Tingley, Christopher Lucas, Jetson Leder-Luis,
Shana Kushner Gadarian, Bethany Albertson, and David G. Rand. 2014. “Structural Topic Models
for Open-Ended Survey Responses.” American Journal of Political Science 58(4): 1064-1082.
Sides, John. 2015. “Why is Trump Surging? Blame the Media.” Washington Post (July).
https://www.washingtonpost.com/news/monkey-cage/wp/2015/07/20/why-is-trump-surging-
blame-the-media/
Sides, John and Lynn Vavreck. 2014. The Gamble. Princeton: Princeton University Press.
Soroka, Stuart. 2014. “Reliability and Validity in Automated Content Analysis.”
In Communication and Language Analysis in the Corporate World, ed. Roderick P. Hart.
Hershey, PA: IGI Global.
Soroka, Stuart, Dominik Stecula, and Christopher Wlezien. 2015. “It’s (Change in) the (Future)
Economy, Stupid: Economic Indicators, the Media, and Public Opinion.” American Journal of
Political Science 59(2): 457-474.
Soroka, Stuart N. and Christopher Wlezien. 2022. Information and Democracy. Cambridge:
Cambridge University Press.
———. 2010. Degrees of Democracy: Politics, Public Opinion and Policy. Cambridge: Cambridge
University Press.
Stevenson, Randolph T. 2001. “The Economy and Policy Mood: A Fundamental Dynamic of
Democratic Politics?” American Journal of Political Science 45(3): 620–33.
Ura, Joseph Daniel, and Christopher R. Ellis. 2012. “Partisan Moods: Polarization and the
Dynamics of Mass Party Preferences.” The Journal of Politics 74(1): 277–91.
Weaver, Vesla and Amy Lerman. 2010. “The Political Consequences of the Carceral State.”
American Political Science Review 104:817-833.
Wlezien, Christopher. 2004. “Patterns of Representation: Dynamics of Public Preferences and
Policy.” Journal of Politics 66:1-24.
———. 1996. “Dynamics of Representation: The Case of US Spending on Defense.” British
Journal of Political Science 26:81-103.
———. 1995. "The Public as Thermostat: Dynamics of Preferences for Spending." American
Journal of Political Science 39:981-1000.
Wlezien, Christopher and Stuart Soroka. 2019. “Mass Media and Electoral Preferences during
the 2016 US Presidential Race.” Political Behavior 41(4): 945–970.
———. 2012. “Political Institutions and the Opinion-Policy Link.” West European Politics
35(6): 1407-1432.
Wlezien, Christopher, Stuart Soroka and Dominik Stecula. 2017. “A Cross-National Analysis of
the Causes and Consequences of Economic News.” Social Science Quarterly 98(3): 1010-
1025.
Wu, H. Dennis, Robert L. Stevenson, Hsiao-Chi Chen, and Z. Nuray Guner. 2002. “The
Conditioned Impact of Recession News: A Time-Series Analysis of Economic Communication
in the United States, 1987–1996.” International Journal of Public Opinion Research 14(1): 19–
36.
Young, Lori, and Stuart Soroka. 2012. “Affective News: The Automated Coding of Sentiment in
Political Texts.” Political Communication 29(2): 205–31.
Biographical statement: Christopher Wlezien is Hogg Professor of Government and Faculty
Associate of the Policy Agendas Project at the University of Texas at Austin, Austin, TX, USA,
78731.
Table 1: Cross-Lag Models of Economic Perceptions and News Tone

                                    Dependent Variable
                         Retrospections   Prospections       Tone
Retrospections(t-1)           0.942            ---           0.136
                             (0.014)                        (0.048)
Prospections(t-1)              ---            0.850          0.159
                                             (0.029)        (0.047)
Tone(t-1)                     0.063           0.006          0.400
                             (0.014)         (0.029)        (0.050)
Intercept                    -0.001           0.002         -0.006
                             (0.013)         (0.027)        (0.043)
Observations                   380             380            380
R-squared                     0.94            0.73           0.31
Adjusted R-squared            0.94            0.73           0.30
Root MSE                      0.246           0.519          0.835

Note: Standard errors in parentheses. All variables are standardized.
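Read as equations, Table 1 corresponds to the following cross-lag system (a sketch reconstructed from the table layout; all variables are standardized, intercepts and disturbances are denoted generically, and the coefficients are the point estimates from the respective columns):

\begin{align*}
\text{Retrospections}_t &= \alpha_1 + 0.942\,\text{Retrospections}_{t-1} + 0.063\,\text{Tone}_{t-1} + \varepsilon_{1t},\\
\text{Prospections}_t &= \alpha_2 + 0.850\,\text{Prospections}_{t-1} + 0.006\,\text{Tone}_{t-1} + \varepsilon_{2t},\\
\text{Tone}_t &= \alpha_3 + 0.136\,\text{Retrospections}_{t-1} + 0.159\,\text{Prospections}_{t-1} + 0.400\,\text{Tone}_{t-1} + \varepsilon_{3t}.
\end{align*}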
Table 2: Cross-Lag Models of Electoral Preferences and Net Candidate Tone

                              Dependent Variable
                          Preferences        Tone
Preferences(t-1)             0.852          0.254
                            (0.035)        (0.101)
Tone(t-1)                    0.051          0.330
                            (0.023)        (0.068)
Intercept                   -0.039          0.072
                            (0.025)        (0.074)
Observations                  188            188
R-squared                    0.77           0.16
Adjusted R-squared           0.77           0.15
Root MSE                     0.338          0.975

Note: Standard errors in parentheses. All variables are standardized.
Table 3: Cross-Lag Models of Spending Preferences and News Coverage

                              Dependent Variable
                          Preferences        News
Preferences(t-1)             0.740          0.299
                            (0.057)        (0.090)
Coverage(t-1)                0.036          0.451
                            (0.054)        (0.085)
Intercept                   -0.093          0.041
                            (0.081)        (0.128)
Observations                  114            114
R-squared                    0.69           0.40
Adjusted R-squared           0.68           0.38
Root MSE                     0.498          0.787

Note: Includes fixed effects for welfare and health domains. Standard errors in parentheses. All
variables are standardized.
Figure 2: Economic Retrospections and News Tone, 1980-2013
[Figure: monthly time series of economic Retrospections and news Tone, 1980–2013.]
Figure 3: Economic Prospections and News Tone, 1980-2013
[Figure: monthly time series of economic Prospections and news Tone, 1980–2013.]
Figure 4: Impulse Response Functions Relating Economic Perceptions and News Tone
Figure 5: Clinton Poll Share and Clinton-Trump News Tone, Final 200 Days, 2016
[Figure: daily series of Clinton poll share (Preferences) and Clinton-Trump net news Tone over the final 200 days of the 2016 campaign.]
Figure 6: Impulse Response Functions, Poll Share and News Tone, Final 200 Days, 2016
Figure 7: Impulse Response Functions, from News to the Polls,
Different Periods of the Campaign, 2016
Figure 8: Impulse Response Functions, from Polls to the News,
Different Periods of the Campaign, 2016
Figure 9: Spending Preferences and the News in Three Domains, 1976-2018
[Figure: three panels (Defense, Welfare, Health) plotting annual spending Preferences and News coverage, 1976–2018.]
Figure 10: Impulse Response Functions, from News to Preferences
Figure 11: Impulse Response Functions, from Preferences to News