Tracking the Coverage of Public Policy in Mass Media*
Policy Studies Journal, forthcoming
Stuart Soroka, University of Michigan (ssoroka@umich.edu)
Christopher Wlezien, University of Texas at Austin (wlezien@austin.utexas.edu)
Abstract: The average citizen often does not experience government policy directly, but
learns about it from the mass media. The nature of media coverage of public policy is thus
of real importance, for both public opinion and policy itself. It is nevertheless the case that
scholars of public policy and political communication have invested rather little time in
developing methods to track public policy coverage in media content. The lack of attention
is all the more striking in an era in which media coverage is readily available in digital
form. This paper offers a proposal for tracking coverage of the actual direction of policy
change in mass media. It begins with some methodological considerations, and then draws
on an expository case – defense spending in the US – to assess the effectiveness of our automated content-analytic methods. Results speak to the quantity and quality of media coverage of policy issues, and the potential role of mass media – to both inform and mislead – in modern representative democracy.
Keywords: Policy feedback, public responsiveness, thermostatic model
*Earlier versions of this paper were presented at the Policy Studies Journal workshop on
Policy Feedback and Feed Forward, University of Arizona, Tucson, September 29-30,
2017, and the 76th Annual Midwest Political Science Association Conference, April 5-8,
2018, Chicago IL, and at the University of Amsterdam. We thank Edella Schlager for
including us in the workshop and for her helpful comments, as well as those of Carolyn
Barnes, Daniel Beland, Nate Breznau, Richard Fording, Jane Gingrich, Kristin Goss,
Deondra Rose, Ann Schneider, Rens Vliegenthart, and especially Fabian Neuner, as well
as the three anonymous referees. We also thank our research assistants on this project:
Sydney Foy, Amanda Hampton, and Dominic Valentino. The research is supported by
National Science Foundation Grants SES-1728792 and SES-1728558.
Representative democracy depends on a sufficiently attentive and informed public. It thus is
encouraging that a growing body of literature demonstrates that publics respond thermostatically
to policy change, adjusting their preferences for more policy downward (upward) when policy increases (decreases). At the same time, there is work suggesting that the average citizen has, at best, only a very partial understanding of politics and policy.[1]
How can both of these findings be true? One prominent view of public opinion (e.g., Page and Shapiro 1992) focuses on the advantages of aggregation – although individuals make mistakes, those errors cancel out in the aggregate. We want to focus here on another possibility, namely, that thermostatic responsiveness does not actually require much information about policy, and that what is required is readily available in mass media reporting. After all, some people must be receiving fairly accurate information about policy change for the miracle of aggregation to work, i.e., in order for there to be a responsive signal left after the noise cancels out.[2]
To be clear: we do not suppose that citizens learn about policy by reading government decisions.
(Consider that few of us have seen bills or budgets on paper or in digital form.) We suggest, rather,
that broad shifts in policy may be captured in news content, that citizens may thus be able to learn
relatively easily about the general direction (and magnitude) of policy change, and that this allows for effective thermostatic responsiveness. The information requirements necessary for the public to respond thermostatically to policy actually may not be that great (see Soroka and Wlezien 2010: 41-42).

[1] As regards thermostatic responsiveness, see, e.g., Wlezien (1995; 1996), Erikson, MacKuen and Stimson (2002), Soroka and Wlezien (2005), Jennings (2009), Soroka and Wlezien (2010), Ura and Ellis (2010), Ellis and Faricy (2011), Wlezien and Soroka (2012), and Morgan and Kang (2015). As regards the public’s inattentiveness and information, see, e.g., Berelson et al. (1954), Converse (1964; 1970), Bennett (1988), Page and Shapiro (1992), Delli Carpini and Keeter (1996), and Popkin and Dimock (1999).

[2] Though see Althaus (2003) for a more complicated and less complimentary view of aggregation.
This paper offers an initial test of the possibility that media coverage actually reflects policy
change, based on a large-scale, automated content analysis of roughly 1.6 million news stories
over a thirty-six-year period in ten major US newspapers, exploring the availability of cues about
increases or decreases in spending commitments. We do not examine citizens’ interest in or
responsiveness to these cues at present, but take a first step, and simply look at the extent to which
cues about policy direction are available in media content. We do so by developing a hierarchical
dictionary-based approach to identifying directional cues about budgetary policy. Our approach is
deployable across large corpuses of news content, and potentially generalizable across policy
domains as well. Here, we focus on defense spending. We compare the “signal” provided by these cues to trends in government spending, and consider this a preliminary examination of the possibility that the negative feedback that characterizes thermostatic responsiveness – or, indeed, its alternative, positive feedback – is itself facilitated by media content.[3]
While the broader substantive goal is to begin to answer questions about the sources of
thermostatic responsiveness, our focus below is principally on developing the methods required
for such a test. Methodologically speaking, we regard this as a first step towards a wide range of
analyses that might benefit from measures of media coverage of policy change. There is a considerable body of work on democratic representation that could benefit from measures of media coverage of policy, including research on positive feedback. Note also that the measures here do not just capture correspondence between media and policy, but also differences between them. So work on political journalism focused on the accuracy or inaccuracy of coverage could benefit as well. So too might research focused on policymakers’ responsiveness to media content itself. (We discuss some of this research below.) In short, we expect that what we do may contribute to research in which media coverage of policy is relevant, of which thermostatic negative feedback is just one example.

[3] For research on positive feedback of policy on public preferences, see Soss and Schram (2007), Weaver and Lerman (2010), and Barabas (2009). Other research finds both negative and positive feedback, e.g., Pacheco (2013) and Ura (2014).
Our results suggest that cues about the direction in which defense policy is changing are readily available in news coverage. Indeed, they are so plentiful that even a cursory engagement with news content seems likely to give citizens the information they require to make thermostatic responsiveness – and representative democracy more generally – feasible. This may not be universally true, of course – it will certainly vary across policy domains, countries, and time. But this initial work points to the (perhaps surprising) availability of policy cues in media content, as well as the potential for this line of analysis on a much broader scale. To begin with, let us review and reorganize the literature on public responsiveness to policy.
Policy and Public Responsiveness
That policy can feed back on public opinion is well known, and research highlights two general
relations.
The first is negative feedback, where the public adjusts its preference “inputs”
thermostatically based on policy outputs (Wlezien 1995; Soroka and Wlezien 2010). This
expectation has deep roots in political science research, stretching back to the classic Eastonian
(1965) depiction of a political system and Deutsch’s (1963) models of control. Here, the public’s preference for more policy – its relative preference, R – represents the difference between the public’s preferred level of policy (P*) and the level it actually gets (P):

$R_t = P^*_t - P_t$,   (1)

where t represents time. In the model, relative preferences change if either the preferred level of policy or policy itself changes.[4]
This equation is straightforward in theory, but less so in practice, as we rarely observe P*. Survey organizations typically do not ask people how much policy they want, and instead ask about relative preferences: whether we are spending “too little,” whether spending should “be increased,” or whether we should “do more.” The public preference, however defined, also is necessarily relative.[5]
Because we do not directly measure the public’s preferred level of policy (P*), and because all of the variables have (or would have) different metrics, we need to rewrite the model as follows:

$R_t = a_0 + \beta_1 O_t + \beta_2 P_t + e_t$,   (2)

where $a_0$ and $e_t$ represent the intercept and the error term respectively, and O designates a variety of “other” determinants of relative preferences, e.g., economic and national security considerations. To be clear, these variables should not be viewed as controls, but as instruments for P*. (This is important when we turn to positive feedback below.) The most critical part of Equation 2 is the coefficient of feedback, $\beta_2$; if people respond thermostatically, $\beta_2$ will be less than 0.

[4] Unlike the thermostat that governs the heating (and/or air conditioning) units in our homes, which sends a dichotomous signal, R captures both direction and magnitude. Also note that while we have characterized public responsiveness across time, an identical model applies across spatial contexts (and issues) as well.

[5] In one sense, this is quite convenient, as we can actually measure the thermostatic signal the public sends to policy makers – to test the model, we need a measure of relative preferences, after all.
Such negative feedback of policy on preferences is the fundamental feature of the thermostatic
model. It is what distinguishes a reasonably informed public – one that knows something about what policymakers actually do – from an uninformed public. Observing it means that the signal
that the public sends to policymakers contains useful information, which makes possible effective
accountability and control, as the public is in a position to reward or punish the incumbent
government for its actions.
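By way of illustration, Equation 2 can be estimated directly by ordinary least squares. The following minimal sketch does so in Python on simulated data; all names and parameter values here are illustrative, not drawn from our analysis.

# Minimal sketch of estimating Equation 2 on simulated annual data.
# All variable names and values are illustrative.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
T = 40
P = np.cumsum(rng.normal(0, 1, T))      # policy (e.g., spending), trending over time
O = rng.normal(0, 1, T)                 # "other" determinants, instrumenting for P*
R = 0.5 + 1.0 * O - 0.8 * P + rng.normal(0, 0.5, T)   # thermostatic public: beta2 < 0

X = sm.add_constant(np.column_stack([O, P]))
fit = sm.OLS(R, X).fit()
print(fit.params)   # coefficient on P estimates beta2; negative => thermostatic feedback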
The public may not respond thermostatically to policy change, of course, and it even may be that policy feeds back positively on preferences – an increase in spending could lead people to want more spending in that domain. This positive feedback thus is between P and P* in equation 1 above, where the public’s preferred levels of policy are conceived to be a function of policy itself:

$P^*_t = f\{P_t\}$.   (3)

To be absolutely clear, there would be positive feedback if this relation is positive.[6]
It is important to make clear that the characterization of feedback represented here contrasts with
others in the policy literature (see Mettler and Soss 2004, Beland 2010, and Campbell 2012 for
comprehensive and cogent reviews). Schneider and Ingram’s (1993) classic treatise posits that
policy design influences “social constructions of target populations,” which can influence politics
and future policy itself. Some research shows that policy can have self-reinforcing effects,
increasing public support for a particular policy (Campbell 2003). Other related research
demonstrates implications for citizenship and civic participation (Mettler 2005). Very recent work
identifies self-undermining effects, where policy actually undermines public support over time
(Beland N.d.). Policy also can influence support for elected officials (Fording N.d.).
Most of this other research conceives of individuals as responsive to their own personal consumption of policy, rather than the collective policy per se (e.g., Soss and Schram 2007). This is of relevance to our representation of positive feedback, as we might observe that an individual’s $P^*_{it}$ reflects the micro-level $P_{it}$ instead of the macro $P_t$. Of course, individuals could respond to both the micro and macro levels, as follows:

$P^*_{it} = f\{P_{it}, P_t\}$.   (4)

For example, people may observe an increase in policy that works and favor more, which may be most common when government is entering a new area, e.g., the US policy response to the Great Depression.
[6] Just as the coefficient of feedback in equation 2 need not be negative, the relationship in equation 3 need not be positive, and a negative effect is possible if policy increases cause people to want less than they did at the previous point in time. This is what we might predict if policy does not improve conditions or makes things worse. Also see Soroka and Wlezien (2010: 115-119).
Much like equation 1 above, equation 4 is more straightforward in theory than in practice, as we often do not directly observe $P^*_{it}$ and cannot fully account for it using instruments. This means that we sometimes can detect positive feedback only indirectly, from the coefficient $\beta_2$ in equation 2. The coefficient thus would encompass both negative and positive feedback, and actually capture their net effect; i.e., if $\beta_2 < 0$, we cannot conclude that there is no positive feedback, just that negative feedback exceeds any positive effect. By implication, the coefficient provides a minimum estimate of each type of effect, positive and negative.
Regardless of our expectations or results, the mechanism of feedback, thermostatic or otherwise, is of critical importance – we are interested in how the public receives information about policy outputs. Below, we take initial steps toward assessing whether mass media serve this function.
On Media Coverage of Public Policy
There exists an important body of work on media coverage related to policy. The largest such body
of work has examined media coverage of issues in order to better understand the sources of public
priorities. Here, media attention is a source or indicator of the public salience of issues, where the
greater the attention, the greater the salience. There is a long history of this type of scholarship,
beginning with McCombs and Shaw’s (1972) classic research, and then a vast body of literature
on public and policy “agenda-setting.” Baumgartner and Jones’s work on policy agendas (1993) has been especially influential. The policy agendas account in which media play an especially prominent role is Boydstun’s (2013) Making the News, which makes clear that media are central in establishing the salience of an issue, which in turn is central to policy representation itself – whether policymakers respond to public preferences for policy in different domains (see, e.g., Soroka and Wlezien, 2010).
Other research by Boydstun and coauthors (e.g., Card et al. 2015) shifts from the volume of coverage to its substance, specifically, what they call “policy frames.” This work draws on a vast literature on policy framing or “issue definition” in media coverage (e.g., Iyengar 1991), a literature arguing either for the impact that frames have on citizens’ policy preferences, or that media frames impact how policymakers respond to the public. (These two things are related, of course.) This account is focused not just on the possibility of policy action, but on the action itself.
Existing media research thus has focused exclusively on inputs into the policymaking process, not
policy outputs themselves. To be clear: measures of media coverage have not been concentrated
on what policymakers actually do. Given our interest in public responsiveness, this is the focus of
our research – we want to know whether and how mass media content reflects what is happening
to policy. This requires both theoretical expectations about what media might reflect about policy
and then an empirical strategy to identify the degree to which this actually occurs.
Conceptualizing media coverage of policy is not perfectly straightforward. It is tempting to think
that news media reflect the level of policy that has been adopted. If the government has a large
role to play in health care, then the media signal on health policy would be correspondingly large,
whether or not it has changed recently. The media signal (M) at a point in time t then would be a function of the size of the policy (P), as in the following very basic equation:

$M_t = g\{P_t\}$.   (5)

If we could conceive of the two variables being measured on the same scale, we might even posit that the relationship between them would be an identity function. Here media coverage would perfectly capture policy, perhaps with some random error.[7]
In one sense, this is how we originally conceived of media coverage of the economy in our previous work on that subject (Soroka et al. 2015). We expected that when the economy is good, coverage would be positive, and, when the economy is bad, coverage would be negative. This is not how it turned out, however. We found that coverage of the economy in the US reflects its direction, not its level. What drives the media is economic change: when the economy is getting better, coverage is positive; when the economy is getting worse, coverage is negative. The level of the economy matters little at all.[8] While this may surprise, it also may make sense. It relates to, and may help explain, the focus of voters on economic change in their political judgments (see, e.g., Erikson et al. 2002).[9] More generally, this may be what media (and people more generally) can best do. They can capture whether policy has increased or decreased, and perhaps whether it has increased or decreased by a little or a lot. This is an empirical question, of course.
[7] In terms of the equation relating them, the coefficient would be 1.0 and the intercept 0.0. Of course, there also could be systematic bias in media coverage, and there is a considerable literature on this, both generally and on specific policy issues. See, e.g., Lawrence (2000), Friedman et al. (1999).

[8] This is true not just in the US but elsewhere too (Wlezien et al. 2017).

[9] It also relates to the reliance by survey organizations on questions tapping people’s relative preferences, their preferences for policy change.
Our focus is on media coverage of policy change, first, as noted above, because we have reason to
suppose that media reports on policy change, and second, and perhaps more critically, because we
can more directly measure media coverage of change (M). That is, we can envision and implement
an analysis that identifies media content characterizing the direction in which policy is moving. Is
spending on defense increasing? Or decreasing? Relatively simple dictionaries allow us to capture
the direction of policy, as we will see shortly. Indeed, we might even be able to assess the
magnitude of the change in addition to its direction. It is by contrast less clear how we could
measure coverage of levels of policy, particularly policies that are best reflected by interval-level
data, e.g., spending, which happens to be the focus of our empirical analysis.[10] Even accepting that the media reports on policy levels, an appropriate content analytic dictionary seems elusive, or at least less effective.
In sum, it seems likely that media coverage focuses on policy change and that it is possible to
design a content analysis that could reliably extract this signal. Armed with a measure of the
media policy signal, we could assess whether it follows actual policy change over time. That is,
we could estimate the following:
$M_t = h\{\Delta P_t\}$.   (6)
Measurement and estimation here presume a particular policy focus, or set of foci. In theory, we
could conduct analyses across a variety of policy domains j in a variety of settings, i.e., states or
countries, k. For this exploratory analysis, we concentrate on a single domain in a single country,
spending on defense in the US. This is because there are a lot of issues involved in producing valid measures of media coverage, detailed in the discussion that follows.

[10] Not all policy is best represented by spending data, of course, and we discuss this issue further in the concluding section.
Creating a Media Corpus
A corpus of media stories can be drawn from any full-text resource. We rely on Lexis-Nexis due to access to the Web Services Kit (WSK), which facilitates the downloading of several hundred thousand stories, formatted in XML, in a single search request. A search request can of course be based on either pre-coded subjects, or full-text keywords, or both. We use a combination, as follows: STX001996 or BODY(national defense) or BODY(national security) or BODY(defense spending) or BODY(military spending) or BODY(military procurement) or BODY(weapons spending).[11] STX001996 is the “National Security” index term, one of five sub-topics within the “International Relations and National Security” topic. It captures the lion’s share of articles on defense policy, spending and otherwise. Of course, Lexis-Nexis’ assignment of topics is most likely a function of their own dictionary-based word search, but our assumption is that their search is more developed than ours would be. Even so, in order not to miss other spending-related articles, we add the full-text (BODY) search terms identified above.
We arrive at the above search terms based on some preliminary tests, exploring the reliability with
which different searches capture relevant articles, and avoid too many irrelevant ones. Even so,
we invariably miss some articles relevant to spending, and our analyses identify a considerable
volume of irrelevant material as well. We suspect that using the “National Security” index term
means that we err on the side of Type I rather than Type II errors, i.e., we are more likely to include
items that we shouldn’t than exclude ones we should include. That said, we expect that most irrelevant articles do not factor into our measure of the net media signal, since we use a combination of layered dictionaries to identify the instances of spending mentions most likely to pertain to change in defense spending. Diagnostic analyses support this expectation, as we will see.

[11] Note that full-text search terms are searched as phrases, i.e., “national security,” not “national” and “security” separately.
Our working database relies on the following newspapers (with the number of stories each contributes through December 2016, and the year from which each is available, in parentheses): Boston Globe (114,191 articles, beginning
in 1988), Chicago Tribune (230,187 articles, beginning in 1985), Denver Post (42,544 articles,
beginning in 1994), Houston Chronicle (127,175 articles, beginning in 1991), LA Times (244,976
articles, beginning in 1985), New York Times (324,343 articles, beginning in 1980), Philadelphia
Inquirer (69,346 articles, beginning in 1994), Tampa Bay Tribune (106,874 articles, beginning in
1987), USA Today (71,303 articles, beginning in 1989) and Washington Post (292,628 articles,
beginning in 1980). The total database thus includes 1,623,567 stories, albeit with more from the
mid-1990s onwards. Note that Lexis-Nexis archives web-only stories from some newspapers, but
coverage is sporadic, so these are excluded from our database. Note also that our selection of
newspapers is based on availability, alongside circulation, with some consideration given to
regional coverage. In the end, we have ten of the highest-circulation newspapers in the US, three
of which aim for national audiences, and seven of which cover large regions in the
northeastern, southern, midwestern, and western parts of the country. Combining these newspapers
offers, we believe, a reasonable representation of the national news stream, at least where
newspapers are concerned. Using a relatively wide range of newspapers has an additional
advantage: to the extent that the language and/or focus of defense stories varies across outlets,
there are advantages to building and testing a dictionary across a corpus that is relatively broad.
In order to reduce the number of false positives, i.e., stories that are captured in our search but not
directly related to defense, we run a simple dictionary search over the full-text of the entire corpus,
counting the number of stories that include at least one of the following words:
DEFENSE: army, navy, naval, air force, marines, defense, military,
soldier, war, cia, homeland, weapon, terror, security, pentagon,
submarine, warship, battleship, destroyer, airplane, aircraft,
helicopter, bomb, missile, plane, service men, base, corps, iraq,
afghanistan, nato, naval, cruiser, intelligence
Some 98,882 articles – roughly 6% of our sample – include none of these keywords. We take this as evidence that the vast bulk of our retrieved articles are about defense, at least in part. The remaining 6% may well be defense-related, relying on words not included in our dictionary; even so, we exclude this set of articles from subsequent analyses.
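For illustration, this first-stage filter amounts to the following. This is a sketch in Python, assuming stories are held as plain-text strings; it abbreviates the DEFENSE list above and is not the code we actually ran.

import re

# The DEFENSE dictionary from above (abbreviated here for space).
DEFENSE = ["army", "navy", "naval", "air force", "marines", "defense",
           "military", "soldier", "war", "pentagon", "weapon", "missile"]

# One alternation with word boundaries, so that, e.g., "war" does not
# match inside "awards".
defense_re = re.compile(r"\b(" + "|".join(map(re.escape, DEFENSE)) + r")\b")

def mentions_defense(story_text: str) -> bool:
    """True if the story contains at least one DEFENSE keyword."""
    return bool(defense_re.search(story_text.lower()))

corpus = ["The Pentagon budget rose sharply.", "A local bake sale raised funds."]
kept = [s for s in corpus if mentions_defense(s)]   # drops the second story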
Extracting the “Media Policy Signal”
Our corpus already reflects the application of several dictionaries – first, in the use of Lexis-Nexis topics (derived from proprietary dictionaries) and additional full-text search terms, and second, in the application of the DEFENSE dictionary to cull articles that may not be relevant. Even this database will include a good deal of content not directly related to defense spending, of course. The focus of our analysis is actually on a small subset of this content: sentences that include mentions of spending and direction.
We begin by identifying sentences that mention spending. We do so using another simple
dictionary, implemented in Lexicoder (Daku et al. 2015), and constructed from our own reading
of keyword-in-context (kwic) retrievals, augmented by thesaurus searches. Kwic retrievals are in
this instance designed to extract every sentence in which a given keyword is used. Our dictionary-
building procedure is thus as follows: (1) we read a random draw of articles extracted using Lexis-
Nexis keywords, and establish a simple set of words that seem to capture spending, (2) we
augment that list using a thesaurus, and (3) we search for each of our dictionary words, extracting
kwic entries and reviewing those entries to ensure that every word is used, most of the time, in the way in which we anticipate – in this instance, in the context of a sentence about spending. This process reveals new words that are added to the dictionary and reviewed based on kwic retrievals. We remove some words that are used in ways we did not anticipate; we keep words that are used most if not all of the time in the way in which we anticipate. Our dictionary is thus tested on – and expanded using – the corpus to which we are applying it. This highlights the importance of
building a dictionary using a corpus that is broadly generalizable, if a generalizable dictionary is
the objective. In this instance, we hope to build dictionaries applicable to other newspapers, and
perhaps other mass media, both for and beyond the defense domain. Of course, the generalizability
of the dictionary created here will require further testing. For the time being, we can be confident
only that we are building a dictionary that produces valid results for the current corpus.
How can we be confident about the validity of our dictionaries? First, in order to reduce processing
time, we make a database that includes a random draw of 10,000 stories from each of our 10
newspapers. These 100,000 stories are then outputted into a folder for preprocessing in advance of
content analysis – preprocessing that removes punctuation and strange characters, makes all text lower case, and separates every sentence (based on periods) onto a separate line. This facilitates the use of the “kwic” (keyword in context) function in Lexicoder, which extracts all sentences in which a given word is used. We build a database of all kwic sentences that include any of the
words in our spending dictionary, which is as follows:
SPEND: allocat*, appropriation*, budget*, cost*, earmark*, expend*,
fund*, grant*, outlay*, resourc*, spend*
The resulting database includes 95,906 sentences from our 100,000 news stories. Within this
database of spending sentences, we then run dictionary counts for another two dictionaries,
capturing upward and downward words (and built using the same review of kwic entries as discussed above):
UP: accelerat*, accession, accru*, accumulat*, arise*, arose, ascen*,
augment*, boom*, boost, climb*, elevat*, exceed*, expand*, expansion,
extend*, gain*, grow*, heighten*, higher, increas*, increment*, jump*,
leap*, more, multiply*, peak*, rais*, resurg*, rise*, rising, rose,
skyrocket*, soar*, surg*, escalat*, up, upraise, upsurge, upward
DOWN: collaps*, contract*, cut*, decay*, declin*, decompos*, decreas*,
deflat*, deplet*, depreciat*, descend*, diminish*, dip*, drop*, dwindl*,
fall*, fell, fewer, less, lose, losing, loss, lost, lower*, minimiz*,
plung*, reced*, reduc*, sank, sink*, scarcit*, shrank, shrink*, shrivel*,
shrunk, slash*, slid*, slip*, slow*, slump*, sunk*, toppl*, trim*, tumbl*,
wane, waning, wither*
Note that any one of the dictionaries we use is sure to identify a good deal of irrelevant material.
But layering dictionaries on top of each other will, we think, produce an increasingly reliable
measure. We filter articles using both Lexis-Nexis search terms and our own DEFENSE dictionary
to remove articles that seem unlikely to be about defense, then we apply the SPEND dictionary
and extract spending-related sentences, and finally we apply the UP and DOWN dictionaries to
focus on sentences that identify change.
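The layered logic is simple to express in code. The sketch below, in Python, applies abbreviated versions of the SPEND, UP, and DOWN dictionaries (the full lists appear above) at the sentence level; it is illustrative only.

import re

# Abbreviated stems from the dictionaries above; "*" wildcards become prefix matches.
SPEND = ["allocat", "appropriation", "budget", "cost", "earmark",
         "expend", "fund", "grant", "outlay", "resourc", "spend"]
UP = ["increas", "rise", "rose", "boost", "grow", "higher", "expand", "soar"]
DOWN = ["cut", "decreas", "declin", "reduc", "fall", "fell", "lower", "slash"]

def stems_in(sentence, stems):
    """Count words in the sentence beginning with any of the given stems."""
    words = re.findall(r"[a-z]+", sentence.lower())
    return sum(any(w.startswith(s) for s in stems) for w in words)

story = ("Congress moved to slash the defense budget. "
         "Lawmakers debated other matters.")
for sentence in story.split(". "):            # crude sentence split, as in preprocessing
    if stems_in(sentence, SPEND):             # keep only spending sentences
        up, down = stems_in(sentence, UP), stems_in(sentence, DOWN)
        print(sentence, "->", up - down)      # > 0: upward cue; < 0: downward cue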
This is identical to the use of “hierarchical dictionary counts,” as implemented using Lexicoder in
Young and Soroka (2012) and Belanger and Soroka (2012). In each of those cases, layered
dictionaries were used to extract the tone of political candidates. Here, we use a combination of
kwic retrievals and subsequent dictionary counts to gradually narrow our database to the content
most pertinent to a media policy signal. Our expectation is that not all direction words will be used
in relation to spending, just as not all spending mentions will co-occur with direction keywords.
We nevertheless expect that the concurrence of these two dictionaries will identify sentences that
are related to spending direction.
We regard this application of dictionaries as very similar to the “learning” inherent in supervised learning methods used for large-N content analysis (e.g., Jurka et al. 2012; see Grimmer and Stewart 2013 for an especially helpful review). There sometimes is a perception that dictionaries are simple word lists, concocted based only on a thesaurus, where individual words are not subjected to testing, and where results are thus likely to either capture, or miss, a good deal of irrelevant material. This certainly can be true, but the use of several iterations of testing during the dictionary-building stage, and the subsequent use of multiple dictionaries that essentially removes false positives, i.e., a spending word that is in fact not related to defense, makes for a rather different dictionary-based analysis – one that has used a corpus and human coding to “learn” about the terms most relevant to the analysis.
Note that we are not staking a claim on the success of dictionary-based analysis in all instances.
As Grimmer and Stewart (2013) note, the best content-analytic approach will vary widely
depending on the requirements of the data and theory. We are suggesting that dictionaries may be
appropriate in this case, however, because of the limited and readily-identifiable words referring
to both spending and direction and the need to identify explicit cues that would be clear to media
consumers. The former likely reduces the gains from supervised learning methods, since human
coding may offer little gain in reliability. The latter reduces the advantages of automated clustering
methods such as latent Dirichlet allocation (LDA; e.g., Blei et al. 2003) or structural topic
modelling (STM; e.g., Roberts et al. 2014), which capture correlations between words that need
not be proximate or related in ways that would be meaningful for the average reader. Each of these other approaches obviously is of real value in other types of content analysis. There nevertheless is reason to think that, for the task at hand, layered dictionary counts may be quite effective. And, as we will see, they work.
How well does our approach to identifying relevant spending cues work? We do not present the
results of all the iterations of dictionary-building here, but we can illustrate the basic findings in a
straightforward way. Having built a database of sentences that includes a spending keyword, and
then applying the UP and DOWN dictionaries to those sentences, we submit a selection of
sentences to human coding. The sentences are evenly divided between (a) no direction keywords
and (b) one or more direction keywords. They are also distributed across each of the spending
keywords in our SPEND dictionary, with slightly larger samples for the words that occur most
frequently in our corpus.
We extract kwic retrievals in two random draws of 340, and we report results from the combined
680 retrievals here. Coders apply codes independently after minimal training, during which we instruct them to assign a 1 for sentences that are clearly about defense spending, and 0 otherwise. In this sample, all three coders assign the same value 84% of the time. All sentences include one of the spending keywords; the inclusion of a direction keyword as well leads to a slight increase in inter-coder agreement, from 82% to 85%. In instances where coders disagree, we take the majority decision, i.e., if two coders believe that the sentence is about defense spending, we code it as such. Doing so suggests that only 33% (223) of the sentences retrieved are clearly about
defense spending. To be clear: inter-coder agreement is high (also see Neuner et al. 2017), but
many of the extracted sentences are not directly about defense spending.
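For reference, the agreement and majority-decision calculations are as follows; this sketch uses invented labels for four sentences and three coders.

import numpy as np

labels = np.array([
    [1, 1, 0],    # each row is one sentence; columns are the three coders
    [0, 0, 0],
    [1, 1, 1],
    [0, 1, 0],
])
unanimous = labels.min(axis=1) == labels.max(axis=1)    # all three coders agree
print(f"full agreement: {unanimous.mean():.0%}")

# Majority decision: relevant if at least two of three coders say so.
majority = (labels.sum(axis=1) >= 2).astype(int)
print(f"share coded relevant: {majority.mean():.0%}")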
Even this amount of noise may be tolerable – to the extent that we are interested in aggregated results, even high levels of noise may cancel out.[12] Even so, 67% noise is rather more than we
would like. Our review of articles suggested that the broad Lexis-Nexis search had retrieved a high
number of articles that were only very tangentially related to defense spending. Our search had the
advantage of capturing what likely was all relevant content, but at the cost of a lot of irrelevant
material. We thus subjected our kwic retrievals to one final round of vetting: we run the DEFENSE
dictionary again, on the kwic entries themselves, to distinguish sentences that actually directly
mention a defense keyword. Doing so improves results: of the 193 sentences that do not include a
DEFENSE keyword, only 3 are coded as relevant; of the 487 sentences that do include a
DEFENSE keyword, 45% are relevant. For sentences that include an UP or DOWN keyword and
a DEFENSE keyword, the number is 50%.
The successive application of dictionaries serves to gradually reduce the noise in our kwic retrievals – the raw material for a measure of the “media policy signal.” There nevertheless is space for improvement, and so we revisit the reliability of our SPEND dictionary, which is
intended to capture spending mentions across a good number of policy domains, but which may
as a consequence be capturing some irrelevant material for defense specifically. Table 1 shows a
test of each spending keyword (we show just single versions of each keyword, though the search
captures several variations of each), across all human-coded kwic retrievals, and the subsets that (a) must include a DEFENSE keyword and then (b) must also include an UP or DOWN keyword.

[12] For a discussion of the need for large corpuses in order to deal with the error that is typically unavoidable in automated content analysis, see Soroka (2014).
-- Table 1 about here --
Results highlight the steady increase in reliability that is a consequence of running successive
dictionaries to narrow the sample. They also point to several spending keywords that appear to
produce rather noisy results in the defense domain, especially “grant” and “resource.” There are
few words that, even in conjunction with other dictionaries, produce kwic retrievals that are more
than 80% relevant. But these two keywords seem to produce very few relevant sentences. We will
keep this in mind below.
In the meantime, following this pre-testing with a subset of our data, we create a kwic-retrieval
database of all sentences with a SPEND keyword in all 1.6 million news articles. The resulting working database has 1,601,036 sentences. Note that this is roughly as many sentences as there are articles, but this is not because all sentences mention spending once. Some articles have many spending mentions, and many articles have no sentences that mention spending at all. Indeed, only 15% of the articles in our initial database contain the sentences that find their way into our kwic database. (Our initial search is very broad, and does not impose restrictions based on sentence-
level dictionaries, after all. See the preceding Creating a Media Corpus section.) Table 2 shows
the breakdown of kwic retrievals by newspaper, with increasingly restrictive conditions. The final
column shows the number of sentences that are central to our measure of the “media policy signal.”
-- Table 2 about here --
Sentences in which there are more UP words than DOWN words are coded as 1; sentences in
which there are more DOWN words than UP words are coded as -1; other sentences are coded as
0. (Note that the vast majority of sentences have just one direction keyword.) We then sum these
values across sentences, by fiscal year (October-September). The resulting measure, based on
sentences in the last column of Table 2, captures both direction and magnitude, and can be
calculated for all newspapers, or single newspapers, or single spending keywords, etc. We explore
several of these possibilities below.
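Concretely, the aggregation can be sketched as follows, in Python, with invented sentence-level data; a date on or after October 1 is assigned to the next fiscal year.

import pandas as pd

sentences = pd.DataFrame({
    "date": pd.to_datetime(["1985-09-15", "1985-10-02", "1986-03-10"]),
    "code": [1, -1, 1],    # more UP words -> 1; more DOWN words -> -1; tie -> 0
})
# October-September fiscal years: dates from October onward roll forward a year.
fy = sentences["date"].dt.year + (sentences["date"].dt.month >= 10).astype(int)
signal = sentences["code"].groupby(fy).sum()    # net directional signal per fiscal year
print(signal)    # FY1985: +1; FY1986: 0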
The Media Policy Signal and Policy Change: The Case of US Spending on Defense
How closely does our measure of the “media policy signal” track actual government spending on
defense? What might that tell us about the potential for thermostatic public responsiveness in this
domain? And are there systematic differences across keywords and newspapers? Our exploration
begins with fiscal-year aggregations of kwic retrievals from the New York Times only, since it is
one of the few newspapers available from 1980 onwards.[13] These are analyzed alongside changes
in defense appropriations, in FY2000 US dollars, drawn from the Historical Tables distributed by
the OMB.
To cut to the chase: the correlation between spending change and a New York Times signal based
on all kwic retrievals with UP/DOWN keywords but without a DEFENSE keyword is 0.65; the
correlation between spending change and a New York Times signal based on kwic retrievals that
also include a DEFENSE keyword is 0.70.[14]
Focusing on the narrower set of kwic retrievals does
not have as striking an impact on results as we might expect (the correlation between the two New
York Times media signals is 0.92). This is perhaps a sign that error in the broader, i.e., noisier,
sample cancels out. Regardless, both aggregate series are highly correlated with actual spending.
Figure 1 shows the trend in spending change (black) alongside the UP/DOWN + DEFENSE keyword spending signal (gray).

[13] The main reason to focus on the New York Times here is that it covers the longest time period. This also avoids the difficulties involved in combining newspapers, not just because they are available over different time periods, but because it is unclear exactly how coverage in different newspapers should be combined. This is discussed further in the concluding section of the paper.
-- Figure 1 about here --
We take Figure 1 as a strong test of construct validity for our measure of the media signal. We do
not expect media to perfectly represent policy change, but in a highly salient and relatively simple
domain, we expect a national broadsheet newspaper to be relatively accurate. Figure 1 makes clear
that this has been the case. A public that was responding to defense spending as it has been
represented by the New York Times over the past 35 years would look very similar to a public that
was responding directly to defense spending.
There would be some differences, however, and even small differences could be critical for public
preferences and policy development. Of course, it is difficult to tell the extent to which those
differences are a consequence of what the New York Times is reporting, or weaknesses in our
content analysis. They are likely driven by both, but preliminary evidence suggests that the trend
in Figure 1 is not deeply affected by irrelevant kwic retrievals. Consider this: if we remove all
kwic retrievals that use the words “grant” or “resource” (the words in our dictionary with the lowest relevancy scores), the correlation between spending and the New York Times media signal shifts only marginally, from 0.70 to 0.71.

[14] Of course, it is possible that media coverage is picking up spending decisions from the previous year, or else spending decisions being made for the following year. It thus is worth noting that the correlation between the media signal in year t is 0.52 with spending in fiscal year t-1 and 0.67 with spending in fiscal year t+1.
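The lead and lag correlations reported in footnote 14 are easy to reproduce for any pair of series; a short sketch, with invented data:

import pandas as pd

df = pd.DataFrame({
    "signal": [5, -3, 2, 4, -1, 0, 3],    # yearly media policy signal (invented)
    "dspend": [4, -2, 1, 5, -2, 1, 2],    # change in appropriations (invented)
})
for lag in (-1, 0, 1):    # -1: prior year's spending; +1: following year's
    r = df["signal"].corr(df["dspend"].shift(-lag))
    print(f"lag {lag}: {r:.2f}")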
-- Table 3 about here --
To what extent are results in Figure 1 the consequence of using a relatively highbrow broadsheet
newspaper? Table 3 explores this by presenting correlations between changes in spending and a
media policy signal for each individual newspaper. Results are based on data since FY1993, as
this is the first year for which all ten newspapers are available. There are some potentially
consequential differences between newspapers, where the Denver Post and Philadelphia Inquirer
produce signals that are least correlated with changes in spending (0.55-0.59); and the Washington
Post produces the signal most correlated with changes in spending (0.80). The implication is that
citizens informed by media about defense spending will be reacting to slightly different, and
potentially meaningfully different, representations of policy change, depending on their media
source.
Even so, the mean correlation between newspaper series is 0.75 and in many cases (36%) exceeds
0.80. There thus is a lot of similarity to the policy signal we glean from the different papers. This
implies a corresponding similarity to the information the public receives. Figure 2 highlights this
similarity. The top panel shows the New York Times raw media signal in black, with each other newspaper in gray. We do not distinguish between the newspapers here – the aim is just to highlight the degree to which they move in tandem. Newspapers produce different overall volumes of news coverage, and over-time variation in the raw measure reflects this. (The highest numbers in this figure are from the Washington Post.) The bottom panel shows all series in standard units, where each is divided by its standard deviation. This standardization further highlights the common trend evident in the raw data. Note, though, some interesting outliers, including the especially strong emphasis on decreases in USA Today in 2011 and the especially strong emphasis on increases in the Tampa Bay Times in 1989.
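The standardization in the bottom panel is simply each newspaper’s series divided by its own standard deviation; for example:

import pandas as pd

signals = pd.DataFrame({
    "NYT": [120, -40, 60, -90],     # invented yearly raw signals
    "WP":  [300, -80, 150, -210],
}, index=[1993, 1994, 1995, 1996])
standardized = signals / signals.std()    # puts each series on a common, unit-variance scale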
-- Figure 2 about here --
To what extent do results vary across spending keywords? We have already seen in Table 1 that
the relevance of kwic retrievals varies by spending keyword. Table 4 shows bivariate correlations
between changes in spending and series that use (a) all available keywords in the SPEND
dictionary or (b) remove the two worst-performing words, based on expert coding, “grant” and
“resource.” Results here are striking: in spite of the many errors in coding that may exist for
“grant” and “resource,” correlations between the media policy signal and spending change do not
improve markedly when these words are excluded. Indeed, for some newspapers the correlation
weakens with the removal of these words. The implication here is that even when there is a good
deal of noise, i.e., false up and down codes, that noise largely cancels out in these highly aggregated
measures.
-- Table 4 about here --
Though the results in Table 4 do not reveal much difference, there is good reason to fine-tune
dictionaries, often to the specific corpus under investigation. Were we looking at environmental
policy, for instance, the word “resource” might be entirely unusable – it may refer to natural
resources far more than financial ones. The aim moving forward must be to reconsider the
dictionary in the face of the corpus to which it is to be applied.
Conclusion and Discussion
A carefully constructed measure of media coverage of policy outputs is of real significance for
those interested in policy responsiveness and representation. As noted above, most citizens learn
about most policies indirectly, often through mass media. The opinion-policy link thus depends
heavily not just on the volume but on the accuracy of media coverage of policy. Where media
provide accurate policy cues, there are good reasons to expect public responsiveness and policy
representation. Where media cues are systematically different from actual policy, the potential for
responsiveness and representation is limited (see Neuner et al. 2017). A means of identifying the media policy signal thus offers not just a measure of media accuracy; it speaks to the potential for representative democracy, policy domain by policy domain, across time and space.
We regard this as a preliminary effort at developing a broadly applicable measure of the media policy signal. There are several possibilities that we have not yet explored but suggest for future work.
We have not yet fully considered how best to combine results from multiple newspapers, for
instance. Figure 2 offers two ways of thinking about across-newspaper differences, but a single
measure of the media policy signal might sum raw counts or weight counts based on newspaper
circulation data or by the populations they serve. If the objective is to capture the signal that
actually reaches people, weighting obviously is crucial. The volume of coverage may offer insights
into the salience of or attention to policy, which differs from policy change and can vary over time and across issues (and political contexts).[15]

[15] Also see Jennings and Wlezien (2015).
We also have not explored the applicability of these dictionaries to domains other than defense.
Our long-term objective is to do exactly this. Our hope is that the dictionaries will require only
minimal revision for other domains; at least, that has been our objective here. This is a testable
proposition, left for future work. So too is the possibility of developing a rather different set of
dictionaries for use in policy domains that are less characterized by spending. Defense may be an
easy test for our measure; environment would be much tougher. Even so, it may be possible to
develop a set of dictionaries that capture change in environmental regulation, and this would be
useful in understanding responsiveness and representation in that domain.
For the time being, results here help explain how the American public has been found to respond
thermostatically to change in defense spending (Wlezien 1995; 1996; Soroka and Wlezien 2010).
Defense spending is not experienced directly by most Americans, of course. But information about
defense spending is readily available in media content. Consider the following: there were
approximately 67,000 sentences in the New York Times that include very clear spending cues,
where we identified words from all of the dictionaries outlined above. These sentences are drawn
from 36 years of data, which means there were roughly 1,861 sentences per year, an average of 5
per day. In the Washington Post, the number is nearly 6 per day; in the Boston Globe, 2 per day;
in the Chicago Tribune, 2 per day; and so on. Even peripheral attention to media coverage of
defense issues would, we surmise, expose people to cues about the direction in which that policy
was moving. It follows that there are good reasons for scholars of policy and representation to
further explore the media policy signal. This is true whether feedback is thermostatic or else takes
the other forms discussed above, as media coverage of policy is of value well beyond the literature
on thermostatic public responsiveness.
References
Althaus, Scott. 2003. Collective Preferences in Democratic Politics. Cambridge: Cambridge
University Press.
Altheide, David L. 1997. “The News Media, the Problem Frame, and the Production of Fear.”
Sociological Quarterly 38(4): 647-68.
Barabas, Jason. 2009. “Not the Next IRA: How Health Savings Accounts Shape Public Opinion.” Journal of Health Politics, Policy and Law 34:181-217.
Barabas, Jason and Jennifer Jerit. 2009. “Estimating the Causal Effects of Media Coverage on
Policy Specific Knowledge.” American Journal of Political Science 53(1): 73-89.
Bartle, John, Sebastian Dellepiane-Avellaneda, and James Stimson. 2011. “The Moving Centre:
Preferences for Government Activity in Britain, 1950–2005.” British Journal of Political Science
41(2): 259-285.
Baumgartner, Frank, and Bryan D. Jones. 1993. Agendas and Instability in American Politics.
Chicago: University of Chicago Press.
Beland, Daniel. N.d. “Policy Feedback and the Politics of the Affordable Care Act.” Policy Studies
Journal (this issue).
———. 2010. “Reconsidering Policy Feedbacks: How Policies Affect Politics.” Administration
and Society 42(2): 568-590.
Bennett, Stephen. 1988. “‘Know-Nothings’ Revisited: The Meaning of Political Ignorance Today.” Social Science Quarterly 69:476-490.
Berelson, Bernard R., Paul F. Lazarsfeld, and William N. McPhee. 1954. Voting. Chicago:
University of Chicago Press.
Blei, David, Andrew Ng, and Michael Jordan. 2003. “Latent Dirichlet Allocation.” Journal of Machine Learning Research 3:993-1022.
Boydstun, Amber. 2013. Making the News. Chicago: University of Chicago Press.
Bucchi, Massimiano, and Renato G. Mazzolini. 2003. “Big Science, Little News: Science
Coverage in the Italian Daily Press, 1946-1997.” Public Understanding of Science 12(1): 7-24.
Campbell, Andrea. 2012. “Policy Makes Mass Politics.” Annual Review of Political Science 15:
333-351.
———. 2003. How Politics Makes Citizens. Princeton: Princeton University Press.
Card, Dallas, Amber E. Boydstun, Justin H. Gross, Philip Resnik, and Noah A. Smith. 2015. “The Media Frames Corpus: Annotations of Frames across Issues.” Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Short Papers), pages 438-444, Beijing, China, July 26-31, 2015.
Converse, Philip E. 1970. “Attitudes and Non-Attitudes: Continuation of a Dialogue.” In Edward R. Tufte (ed.), The Quantitative Analysis of Social Problems. Reading, Mass: Addison-Wesley.
———. 1964. “The Nature of Belief Systems in Mass Publics.” In David Apter (ed.), Ideology and Discontent. New York: Free Press.
Daku, Mark, Stuart Soroka and Lori Young. 2015. Lexicoder. Software available at lexicoder.com.
Davie, William R., and Jung Sook Lee. 1995. “Sex, Violence, and Consonance/Differentiation:
An Analysis of Local TV News Values.” Journalism and Mass Communication Quarterly 72(1):
12838.
Delli Carpini, Michael and Scott Keeter. 1996. What Americans Know about Politics and Why It Matters. New Haven: Yale University Press.
Deutsch, Karl W. 1963. The Nerves of Government: Models of Political Communication and Control. New York: Free Press.
Dunaway, Johanna. 2011. “Institutional Influences on the Quality of Campaign News
Coverage.” Journalism Studies 12(1):27-44.
Durr, Robert H. 1993. “What Moves Policy Sentiment?” American Political Science Review 87:
158-170.
Easton, David. 1965. A Framework for Political Analysis. Englewood Cliffs NJ: Prentice-Hall.
Eichenberg, Richard, and Richard Stoll. 2003. “Representing Defence: Democratic Control of the
Defence Budget in the United States and Western Europe.” Journal of Conflict Resolution 47:
399-423.
Ellis, Christopher, and Christopher Faricy. 2011. “Social Policy and Public Opinion: How the Ideological Direction of Spending Influences Public Mood.” The Journal of Politics 73(4): 1095-1110.
Erikson, Robert S., Michael B. MacKuen and James A. Stimson. 2002. The Macro Polity.
Cambridge: Cambridge University Press.
Fording, Richard. N.d. “Medicaid Expansion and the Political Fate of Governors who Support it.”
Policy Studies Journal (this issue).
Friedman, Sharon H., Sharon Dunwoody and Carol L. Rogers, eds. 1999. Communicating
Uncertainty: Media Coverage of New and Controversial Science. New York: Routledge.
Grimmer, Justin and Brandon M. Stewart. 2013. “Text as Data: The Promise and Pitfalls of
Automatic Content Analysis for Political Texts.” Political Analysis 21(3): 267-297.
Hakhverdian, A. 2012. “The Causal Flow between Public Opinion and Policy: Government
Responsiveness, Leadership, or Counter Movement?” West European Politics, 35(6): 1386-
1406.
Iyengar, Shanto. 1991. Is Anyone Responsible? How Television Frames Political Issues. Chicago:
University of Chicago Press.
Jennings, Will. 2009. “The Public Thermostat, Political Responsiveness and Error Correction:
Border Control and Asylum in Britain, 1994-2007.” British Journal of Political Science 39:847-
870.
Jennings, Will and Christopher Wlezien. 2015. “Preferences, Problems, and Representation.” Political Science Research and Methods 3(3): 659-681.
Jurka, Timothy P., Loren Collingwood, Amber Boydstun, Emiliano Grossman, and Wouter van
Atteveldt. 2012. RTextTools: Automatic text classification via supervised learning. http://cran.r-
project.org/web/packages/RTextTools/index.html.
Lawrence, Regina G. 2000. “Game-Framing the Issues: Tracking the Strategy Frame in Public
Policy News.” Political Communication 17(2): 93-114.
McCombs, Maxwell W., and Donald L. Shaw. 1972. “The Agenda-Setting Function of Mass
Media.” Public Opinion Quarterly 36(2):176-187.
Mettler, Suzanne. 2005. Soldiers to Citizens: The G.I. Bill and the Making of the Greatest Generation. New York: Oxford University Press.
Mettler, Suzanne, and Joe Soss. 2004. “The Consequences of Public Policy for Democratic
Citizenship: Bridging Policy Studies and Mass Politics.” Perspectives on Politics 2(1):55-73.
Morgan, Stephen L. and Minhyoung Kang. 2015. “A New Conservative Cold Front? Democrat and Republican Responsiveness to the Passage of the Affordable Care Act.” Sociological Science 2:502-526.
Neuner, Fabian, Stuart Soroka and Christopher Wlezien. 2017. “The Clues in the News: Mass
Media and Public Responsiveness to Policy.” Paper presented at the annual meeting of the
SPSA, New Orleans LA.
Pacheco, Julianna. 2013. “Attitudinal Policy Feedback and Public Opinion.” Public Opinion
Quarterly 77:714-734.
Page, Benjamin I. and Robert Y. Shapiro. 1992. The Rational Public: Fifty Years of Trends in
Americans’ Policy Preferences. Chicago: University of Chicago Press.
Popkin, Samuel and Michael Dimock. 1999. “Political Knowledge and Citizen Competence.” In Stephen Elkin and Karol Soltan (eds), Citizen Competence and Democratic Institutions. University Park: Pennsylvania State University Press.
Roberts, Margaret E., Brandon M. Stewart, Dustin Tingley, Christopher Lucas, Jetson
Leder-Luis, Shana Kushner Gadarian, Bethany Albertson, and David G. Rand. 2014. "Structural
Topic Models for Open-Ended Survey Responses." American Journal of Political Science 58(4):
1064-1082.
Soroka, Stuart. 2014. "Reliability and Validity in Automated Content Analysis." In
Communication and Language Analysis in the Corporate World, Roderick P. Hart, ed.
Hershey PA: IGI Global.
Soroka, Stuart, Dominik Stecula, and Christopher Wlezien. 2015. "It's (Change in) the (Future)
Economy, Stupid: Economic Indicators, the Media, and Public Opinion." American Journal of
Political Science 59(2): 457-474.
Soroka, Stuart N. and Christopher Wlezien. 2010. Degrees of Democracy: Politics, Public Opinion
and Policy. Cambridge: Cambridge University Press.
Soss, Joe and Sanford Schram. 2007. “A Public Transformed? Welfare Reform as Policy
Feedback.” American Political Science Review 101:111-127.
Stimson, James. 1999. Public Opinion in America: Moods, Cycles, and Swings, 2nd ed. Boulder
CO: Westview Press.
Ura, Joseph. 2014. "Backlash and Legitimation: Macro Political Responses to Supreme Court
Decisions." American Journal of Political Science 58(1): 110-126.
Ura, Joseph Daniel, and Christopher R. Ellis. 2012. "Partisan Moods: Polarization and the
Dynamics of Mass Party Preferences." The Journal of Politics 74(1): 277-291.
Weaver, Vesla and Amy Lerman. 2010. “The Political Consequences of the Carceral State.”
American Political Science Review 104:817-833.
Wlezien, Christopher. 2004. “Patterns of Representation: Dynamics of Public Preferences and
Policy.” Journal of Politics 66:1-24.
———. 1996. “Dynamics of Representation: The Case of US Spending on Defense.” British
Journal of Political Science 26:81-103.
———. 1995. "The Public as Thermostat: Dynamics of Preferences for Spending." American
Journal of Political Science 39:981-1000.
Wlezien, Christopher and Stuart Soroka. 2012. “Political Institutions and the Opinion-Policy
Link.” West European Politics 35(6): 1407-1432.
Wlezien, Christopher, Stuart Soroka and Dominik Stecula. 2017. "A Cross-National Analysis of
the Causes and Consequences of Economic News." Social Science Quarterly 98(3): 1010-1025.
Table 1. Relevance of kwic retrievals, by spending keyword

Keyword          Kwic retrieval   w/ DEFENSE keyword   w/ DEFENSE and UP or DOWN keyword
Allocate              40%               53%                    69%
Appropriation         25%               34%                    38%
Budget                45%               67%                    62%
Cost                  25%               34%                    37%
Earmark               30%               43%                    43%
Expenditure           68%               84%                    73%
Fund                  21%               28%                    41%
Grant                  9%               13%                    17%
Outlay                63%               76%                    67%
Resource              10%               15%                    27%
Spend                 49%               69%                    82%
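The relevance rates in Table 1 come from hand-coding keyword-in-context (kwic) retrievals under three conditions: the spending keyword alone, the spending keyword co-occurring with a DEFENSE keyword, and the spending keyword co-occurring with both a DEFENSE and an UP or DOWN keyword. The retrieval and filtering step itself is easy to automate. The Python sketch below is a minimal illustration rather than the authors' actual pipeline; its DEFENSE and UP/DOWN pattern lists are placeholder assumptions, and the relevance of the retrieved sentences would still be judged by human coders.

import re

# Spending keyword stems, following the dictionary in Table 1; the DEFENSE
# and UP/DOWN lists below are illustrative placeholders, not the authors'
# actual dictionaries.
SPEND = {"allocate": r"\ballocat\w*", "appropriation": r"\bappropriation\w*",
         "budget": r"\bbudget\w*", "cost": r"\bcost\w*",
         "earmark": r"\bearmark\w*", "expenditure": r"\bexpend\w*",
         "fund": r"\bfund\w*", "grant": r"\bgrant\w*",
         "outlay": r"\boutlay\w*", "resource": r"\bresourc\w*",
         "spend": r"\bspend\w*"}
DEFENSE = [r"\bdefense\b", r"\bmilitary\b", r"\bpentagon\b"]
UPDOWN = [r"\bincreas\w*", r"\brais\w*", r"\bboost\w*",
          r"\bcut\w*", r"\breduc\w*", r"\bdecreas\w*"]

def matches(patterns, sentence):
    # True if any pattern occurs in the lower-cased sentence.
    s = sentence.lower()
    return any(re.search(p, s) for p in patterns)

def kwic_counts(sentences):
    # For each spending keyword, count retrieved sentences, those that also
    # contain a DEFENSE word, and those with DEFENSE plus an UP/DOWN word.
    counts = {}
    for label, pattern in SPEND.items():
        kwic = [s for s in sentences if re.search(pattern, s.lower())]
        w_def = [s for s in kwic if matches(DEFENSE, s)]
        w_dir = [s for s in w_def if matches(UPDOWN, s)]
        counts[label] = (len(kwic), len(w_def), len(w_dir))
    return counts

Sentence counts of this kind, aggregated by newspaper, are what Table 2 reports.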
Table 2. Kwic retrievals, by newspaper

Newspaper               # sentences   w/ DEFENSE keyword   w/ DEFENSE and UP or DOWN keyword
Boston Globe              107,534          40,885                 20,070
Chicago Tribune           145,124          55,785                 26,082
Denver Post                44,873          12,438                  5,349
Houston Chronicle          93,482          33,212                 14,843
LA Times                  238,666          95,100                 46,918
New York Times            328,608         131,640                 67,026
Philadelphia Inquirer      60,923          19,529                  8,637
Tampa Bay Tribune          95,154          28,434                 12,057
USA Today                  70,647          24,136                 10,878
Washington Post           416,025         156,563                 77,327
Total                   1,601,036         597,772                289,187
Figure 1. Changes in defense spending and the New York Times “media policy signal”
Table 3. Correlations between spending and media policy signals, by newspaper, 1993-2016

       Spending   NYT    BGL    CTR    DVP    HCH    LAT    PHI    TBT    USA
NYT      .76
BGL      .61      .90
CTR      .76      .89    .75
DVP      .55      .73    .57    .64
HCH      .65      .91    .83    .87    .64
LAT      .74      .93    .80    .89    .65    .83
PHI      .59      .78    .62    .85    .55    .77    .86
TBT      .68      .68    .58    .67    .69    .56    .67    .69
USA      .79      .78    .53    .77    .60    .70    .76    .68    .56
WPO      .80      .93    .79    .93    .75    .86    .90    .81    .68    .82
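The entries in Table 3 are pairwise Pearson correlations among the annual series: changes in defense spending on the one hand, and each newspaper's media policy signal on the other. A minimal sketch of the computation, using placeholder values rather than the actual series:

from statistics import correlation  # Pearson's r; requires Python 3.10+

# Placeholder annual series for illustration only; in the paper these would
# be yearly changes in defense spending and a newspaper's policy signal.
spending_changes = [1.2, -0.4, 0.8, 2.1, -1.0, 0.3, 1.5, -0.2]
media_signal = [0.9, -0.2, 0.5, 1.8, -0.7, 0.1, 1.1, -0.4]

r = correlation(spending_changes, media_signal)
print(f"Pearson r = {r:.2f}")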
Figure 2. The media policy signal, by newspaper
Table 4. Correlations between spending and alternate media policy signals, by newspaper

       All keywords   Without "grant" and "resource"
NYT        .76                 .78
BGL        .60                 .61
CTR        .76                 .74
DVP        .55                 .53
HCH        .65                 .67
LAT        .74                 .75
PHI        .59                 .58
TBT        .68                 .67
USA        .79                 .78
WPO        .80                 .79