Tracking the Coverage of Public Policy in Mass Media*
Policy Studies Journal, forthcoming
Stuart Soroka, University of Michigan, ssoroka@umich.edu
Christopher Wlezien, University of Texas at Austin, wlezien@austin.utexas.edu
Abstract: The average citizen often does not experience government policy directly, but
learns about it from the mass media. The nature of media coverage of public policy is thus
of real importance, for both public opinion and policy itself. It is nevertheless the case that
scholars of public policy and political communication have invested rather little time in
developing methods to track public policy coverage in media content. The lack of attention
is all the more striking in an era in which media coverage is readily available in digital
form. This paper offers a proposal for tracking coverage of the actual direction of policy
change in mass media. It begins with some methodological considerations, and then draws
on an expository case — defense spending in the US — to assess the effectiveness of our
automated content-analytic methods. Results speak to the quantity and quality of media
coverage of policy issues, and the potential role of mass media – to both inform and mislead
– in modern representative democracy.
Keywords: Policy feedback, public responsiveness, thermostatic model
*Earlier versions of this paper were presented at the Policy Studies Journal workshop on
Policy Feedback and Feed Forward, University of Arizona, Tucson, September 29-30,
2017, and the 76th Annual Midwest Political Science Association Conference, April 5-8,
2018, Chicago IL, and at the University of Amsterdam. We thank Edella Schlager for
including us in the workshop and for her helpful comments, as well as those of Carolyn
Barnes, Daniel Beland, Nate Breznau, Richard Fording, Jane Gingrich, Kristin Goss,
Deondra Rose, Ann Schneider, Rens Vliegenthart, and especially Fabian Neuner, as well
as the three anonymous referees. We also thank our research assistants on this project:
Sydney Foy, Amanda Hampton, and Dominic Valentino. The research is supported by
National Science Foundation Grants SES-1728792 and SES-1728558.
Representative democracy depends on a sufficiently attentive and informed public. It thus is
encouraging that a growing body of literature demonstrates that publics respond thermostatically
to policy change, adjusting their preferences for more policy downward (upward) when policy
increases (decreases). At the same time, there is work suggesting that the average citizen has, at
best, only a very partial understanding of politics and policy.[1] How can both of these findings be
true? One prominent view of public opinion (e.g., Page and Shapiro 1992) focuses on the
advantages of aggregation – although individuals make mistakes, those errors cancel out in the
aggregate. We want to focus here on another possibility, namely, that thermostatic responsiveness
does not actually require much information about policy, and that what is required is readily available
in mass media reporting. After all, some people must be receiving fairly accurate information about
policy change for the miracle of aggregation to work, i.e., in order for there to be a responsive
signal left after the noise cancels out.[2]
To be clear: we do not suppose that citizens learn about policy by reading government decisions.
(Consider that few of us have seen bills or budgets on paper or in digital form.) We suggest, rather,
that broad shifts in policy may be captured in news content, that citizens may thus be able to learn
relatively easily about the general direction (and magnitude) of policy change, and that this allows
for effective thermostatic responsiveness. The information requirements necessary for the public
to respond thermostatically to policy actually may not be that great. (See Soroka and Wlezien
2010: 41-42.)

[1] As regards thermostatic responsiveness, see, e.g., Wlezien (1995; 1996), Erikson, MacKuen and
Stimson (2002), Soroka and Wlezien (2005), Jennings (2009), Soroka and Wlezien (2010), Ura
and Ellis (2010), Ellis and Faricy (2011), Wlezien and Soroka (2012), and Morgan and Kang (2015).
As regards the public's inattentiveness and information, see, e.g., Berelson et al. (1954), Converse
(1964; 1970), Bennett (1988), Page and Shapiro (1992), Delli Carpini and Keeter (1996), and Popkin
and Dimock (1999).

[2] Though see Althaus (2003) for a more complicated and less complimentary view of aggregation.
This paper offers an initial test of the possibility that media coverage actually reflects policy
change, based on a large-scale, automated content analysis of roughly 1.6 million news stories
over a thirty-six-year period in ten major US newspapers, exploring the availability of cues about
increases or decreases in spending commitments. We do not examine citizens' interest in or
responsiveness to these cues here; we take a first step and simply look at the extent to which
cues about policy direction are available in media content. We do so by developing a hierarchical,
dictionary-based approach to identifying directional cues about budgetary policy. Our approach is
deployable across large corpuses of news content, and potentially generalizable across policy
domains as well. Here, we focus on defense spending. We compare the "signal" provided by these
cues to trends in government spending, and consider this a preliminary examination of the
possibility that the negative feedback that characterizes thermostatic responsiveness – or, indeed,
its alternative, positive feedback – is facilitated by media content.[3]

While the broader substantive goal is to begin to answer questions about the sources of
thermostatic responsiveness, our focus below is principally on developing the methods required
for such a test. Methodologically speaking, we regard this as a first step towards a wide range of
analyses that might benefit from measures of media coverage of policy change. There is a
considerable body of work on democratic representation that could benefit from measures of media
coverage of policy, including research on positive feedback. Note also that the measures here do
not just capture correspondence between media and policy, but also differences between them. So
work on political journalism focused on the accuracy or inaccuracy of coverage could benefit as
well, as might research focused on policymakers' responsiveness to media content itself. (We
discuss some of this research below.) In short, we expect that what we do may contribute to
research in which media coverage of policy is relevant, of which thermostatic negative feedback
is just one example.

[3] For research on positive feedback of policy on public preferences, see Soss and Schram (2007),
Weaver and Lerman (2010), and Barabas (2009). Other research finds both negative and positive
feedback, e.g., Pacheco (2013) and Ura (2014).
Our results suggest that cues about the direction in which defense policy is changing are readily
available in news coverage. Indeed, they are so plentiful that even a cursory engagement with news
content seems likely to give citizens the information they require to make thermostatic
responsiveness – and representative democracy more generally – feasible. This may not be universally
true, of course; it will certainly vary across policy domains, countries, and time. But this initial work
points to the (perhaps surprising) availability of policy cues in media content, as well as the
potential for this line of analysis on a much broader scale. To begin with, let us review and
reorganize the literature on public responsiveness to policy.
Policy and Public Responsiveness
That policy can feed back on public opinion is well known, and research highlights two general
relations.
The first is negative feedback, where the public adjusts its preference "inputs"
thermostatically based on policy "outputs" (Wlezien 1995; Soroka and Wlezien 2010). This
expectation has deep roots in political science research, stretching back to Easton's (1965) classic
depiction of a political system and Deutsch's (1963) models of control. Here, the public's
preference for more policy – its relative preference, R – represents the difference between the
public's preferred level of policy (P*) and the level it actually gets (P):

R_t = P*_t - P_t,   (1)

where t represents time. In the model, relative preferences change if either the preferred level of
policy or policy itself changes.[4] This equation is straightforward in theory, but less so in practice,
as we rarely observe P*. Survey organizations typically do not ask people how much policy they
want, and instead ask about relative preferences: whether we are spending "too little," whether
spending should "be increased," or whether we should "do more." The public preference, however
defined, also is necessarily relative.[5]

Because we do not directly measure the public's preferred level of policy (P*), and because all of
the variables have (or would have) different metrics, we need to rewrite the model as follows:

R_t = a_0 + β1 O_t + β2 P_t + e_t,   (2)

where a_0 and e_t represent the intercept and the error term respectively, and O designates a variety
of "other" determinants of relative preferences, e.g., economic conditions and national security. To be
clear, these variables should not be viewed as controls, but as instruments for P*. (This is important
when we turn to positive feedback below.) The most critical part of Equation 2 is the coefficient
of feedback, β2; if people respond thermostatically, β2 will be less than 0.

[4] Unlike the thermostat that governs the heating (and/or air conditioning) units in our homes, which
sends a dichotomous signal, R captures both direction and magnitude. Also note that while we
have characterized public responsiveness across time, an identical model applies across spatial
contexts (and issues) as well.

[5] In one sense, this is quite convenient, as we can actually measure the thermostatic signal the
public sends to policymakers – to test the model, we need a measure of relative preferences, after
all.
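To make the estimation concrete, the following minimal sketch shows how the feedback coefficient in equation 2 might be estimated with OLS. The data are simulated and the variable names are hypothetical stand-ins rather than any actual series; a negative coefficient on policy is the thermostatic signature.

# Minimal sketch (simulated data, hypothetical names): estimating equation 2.
# A coefficient on policy (beta_2) below zero indicates negative, thermostatic feedback.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
T = 40                                            # years of simulated annual data
policy = np.cumsum(rng.normal(0, 1, T))           # P_t: policy (e.g., spending) level
other = rng.normal(0, 1, T)                       # O_t: instrument(s) for P*_t
preferred = 0.5 * other + rng.normal(0, 0.5, T)   # unobserved P*_t
relative_pref = preferred - policy + rng.normal(0, 0.5, T)  # R_t = P*_t - P_t + noise

X = sm.add_constant(np.column_stack([other, policy]))       # columns: 1, O_t, P_t
print(sm.OLS(relative_pref, X).fit().params)                # expect beta_2 near -1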
Such negative feedback of policy on preferences is the fundamental feature of the thermostatic
model. It is what distinguishes a reasonably informed public—one that knows something about
what policymakers actually do—from an uninformed public. Observing it means that the signal
that the public sends to policymakers contains useful information, which makes possible effective
accountability and control, as the public is in a position to reward or punish the incumbent
government for its actions.
The public may not respond thermostatically to policy change, of course, and it may even be that
policy feeds back positively on preferences – an increase in spending could lead people to want
more spending in that domain. This positive feedback thus is between P and P* in equation 1
above, where the public's preferred levels of policy are conceived to be a function of policy itself:

P*_t = f{P_t}.   (3)

To be absolutely clear, there would be positive feedback if this relation is positive.[6]

It is important to make clear that the characterization of feedback represented here contrasts with
others in the policy literature (see Mettler and Soss 2004, Beland 2010, and Campbell 2012 for
comprehensive and cogent reviews). Schneider and Ingram's (1993) classic treatise posits that
policy design influences "social constructions of target populations," which can influence politics
and future policy itself. Some research shows that policy can have self-reinforcing effects,
increasing public support for a particular policy (Campbell 2003). Other related research
demonstrates implications for citizenship and civic participation (Mettler 2005). Very recent work
identifies self-undermining effects, where policy actually undermines public support over time
(Beland N.d.). Policy also can influence support for elected officials (Fording N.d.).

Most of this other research conceives of individuals as responsive to their own personal
consumption of policy, rather than the collective policy per se (e.g., Soss and Schram 2007). This
is of relevance to our representation of positive feedback, as we might expect that individuals'
P*_it would reflect the micro-level P_it instead of the macro-level P_t. Of course, individuals could
respond to both the micro and macro levels, as follows:

P*_it = f{P_it, P_t}.   (4)

For example, people may observe an increase in policy that works and favor more, which may be
most common when government is entering a new area, e.g., the US policy response to the Great
Depression.

[6] Just as the coefficient of feedback in equation 2 need not be negative, the relationship in
equation 3 need not be positive, and a negative effect is possible if policy increases cause people
to want less than they did at the previous point in time. This is what we might predict if policy
does not improve conditions or makes things worse. Also see Soroka and Wlezien (2010: 115-119).
Much like equation 1 above, equation 4 is more straightforward in theory than in practice, as we
often do not directly observe P*_it and cannot fully account for it using instruments. This means
that we sometimes can only detect positive feedback indirectly, from the coefficient β2 in equation
2. The coefficient would then encompass both negative and positive feedback, and actually capture
their net effect; i.e., if β2 < 0, we cannot conclude that there is no positive feedback, just that
negative feedback exceeds any positive effect. By implication, the coefficient provides a minimal
estimate of each type of effect, positive and negative.
Regardless of our expectations or results, the mechanism of feedback, thermostatic or otherwise,
is of critical importance – we are interested in how the public receives information about policy
outputs. Below, we take initial steps toward assessing whether mass media serve this function.
On Media Coverage of Public Policy
There exists an important body of work on media coverage related to policy. The largest such body
of work has examined media coverage of issues in order to better understand the sources of public
priorities. Here, media attention is a source or indicator of the public salience of issues, where the
greater the attention, the greater the salience. There is a long history of this type of scholarship,
beginning with McCombs and Shaw’s (1972) classic research, and then a vast body of literature
on public and policy "agenda-setting." Baumgartner and Jones' (1993) work on policy agendas
has been especially influential. The policy agendas account in which media play an especially
prominent role is Boydstun's (2013) Making the News, which makes clear that media are central
in establishing the salience of an issue, which in turn is central to policy representation itself –
that is, to whether policymakers respond to public preferences for policy in different domains (see, e.g.,
Soroka and Wlezien 2010).
Other research by Boydstun and coauthors (e.g., Card et al. 2015) shifts from the volume of
coverage to its substance, specifically, what they call "policy frames." This work draws on a vast
literature on policy framing or "issue definition" in media coverage (e.g., Iyengar 1991), a
literature arguing either that frames affect citizens' policy preferences or that media frames shape
how policymakers respond to the public. (These two things are related, of course.) This account is
focused not just on the possibility of policy action, but on the action itself.
Existing media research thus has focused exclusively on inputs into the policymaking process, not
policy outputs themselves. To be clear: measures of media coverage have not been concentrated
on what policymakers actually do. Given our interest in public responsiveness, this is the focus of
our research – we want to know whether and how mass media content reflects what is happening
to policy. This requires both theoretical expectations about what media might reflect about policy
and then an empirical strategy to identify the degree to which this actually occurs.
Conceptualizing media coverage of policy is not perfectly straightforward. It is tempting to think
that news media reflect the level of policy that has been adopted. If the government has a large
role to play in health care, then the media signal on health policy would be correspondingly large,
whether or not it has changed recently. The media signal (M) at a point in time t then would be a
function of the size of the policy (P), as in the following very basic equation:

M_t = g{P_t}.   (5)

If we could conceive of the two variables being measured on the same scale, we might even posit
that the relationship between them would be an identity function. Here media coverage would
perfectly capture policy, perhaps with some random error.[7]

In one sense, this is how we originally conceived of media coverage of the economy in our previous
work on that subject (Soroka et al. 2015). We expected that when the economy is good, coverage
would be positive, and, when the economy is bad, coverage would be negative. This is not how it
turned out, however. We found that coverage of the economy in the US reflects its direction, not
its level. What drives the media is economic change: when the economy is getting better, coverage
is positive; when the economy is getting worse, coverage is negative. The level of the economy
matters little at all.[8] While this may be surprising, it makes sense. It relates to – and may help
explain – the focus of voters on economic change in their political judgments (see, e.g., Erikson
et al. 2002).[9] More generally, this may be what media (and people more generally) can best do.
They can capture whether policy has increased or decreased, and perhaps whether it has increased
or decreased by a little or a lot. This is an empirical question, of course.

[7] In terms of the equation relating them, the coefficient would be 1.0 and the intercept 0.0. Of
course, there also could be systematic bias in media coverage, and there is a considerable literature
on this, both generally and on specific policy issues. See, e.g., Lawrence (2000), Friedman et al.
(1999).

[8] This is true not just in the US but elsewhere too (Wlezien et al. 2017).

[9] It also relates to the reliance by survey organizations on questions tapping people's relative
preferences, their preferences for policy change.
Our focus is on media coverage of policy change, first, as noted above, because we have reason to
suppose that media report on policy change, and second, and perhaps more critically, because we
can more directly measure media coverage of change (∆M). That is, we can envision and implement
an analysis that identifies media content characterizing the direction in which policy is moving. Is
spending on defense increasing? Or decreasing? Relatively simple dictionaries allow us to capture
the direction of policy, as we will see shortly. Indeed, we might even be able to assess the
magnitude of the change in addition to its direction. It is by contrast less clear how we could
measure coverage of levels of policy, particularly policies that are best reflected by interval-level
data, e.g., spending, which happens to be the focus of our empirical analysis.[10] Even accepting
that the media report on policy levels, an appropriate content-analytic dictionary seems elusive,
or at least less effective.

In sum, it seems likely that media coverage focuses on policy change and that it is possible to
design a content analysis that could reliably extract this "signal." Armed with a measure of the
"media policy signal," we could assess whether it follows actual policy change over time. That is,
we could estimate the following:

∆M_t = h{∆P_t}.   (6)

Measurement and estimation here presume a particular policy focus, or set of foci. In theory, we
could conduct analyses across a variety of policy domains j in a variety of settings, i.e., states or
countries, k. For this exploratory analysis, we concentrate on a single domain in a single country:
spending on defense in the US. This is because there are a lot of issues involved in producing valid
measures of media coverage, detailed in the discussion that follows.

[10] Not all policy is best represented by spending data, of course, and we discuss this issue further
in the concluding section.
Creating a Media Corpus
A corpus of media stories can be drawn from any full-text resource. We rely on Lexis-Nexis due
to access to the Web Services Kit (WSK), which facilitates the downloading of several hundred
thousand stories, formatted in XML, in a single search request. A search request can of course be
based on pre-coded subjects, full-text keywords, or both. We use a combination, as follows:
STX001996 or BODY(national defense) or BODY(national security) or BODY(defense spending)
or BODY(military spending) or BODY(military procurement) or BODY(weapons spending).[11]
STX001996 is the "National Security" index term, one of five sub-topics within the
"International Relations and National Security" topic. It captures the lion's share of articles on
defense policy, spending and otherwise. Of course, Lexis-Nexis' assignment of topics is most
likely a function of their own dictionary-based word search, but our assumption is that their search
is more developed than ours would be. Even so, in order not to miss other spending-related articles,
we add the full-text (BODY) search terms identified above.

We arrive at the above search terms based on some preliminary tests, exploring the reliability with
which different searches capture relevant articles and avoid too many irrelevant ones. Even so,
we invariably miss some articles relevant to spending, and our analyses identify a considerable
volume of irrelevant material as well. We suspect that using the "National Security" index term
means that we err on the side of Type I rather than Type II errors, i.e., we are more likely to include
items that we shouldn't than to exclude ones we should include. That said, we expect that most
irrelevant articles do not factor into our measure of the net media signal, since we use a
combination of layered dictionaries to identify the instances of spending mentions most likely to
pertain to change in defense spending. Diagnostic analyses support this expectation, as we will
see.

[11] Note that full-text search terms are searched as phrases, i.e., "national security," not "national"
and "security" separately.
Our working database relies on the following newspapers (the number of stories through December
2016 and the first year of availability appear in parentheses): Boston Globe (114,191 articles, beginning
in 1988), Chicago Tribune (230,187 articles, beginning in 1985), Denver Post (42,544 articles,
beginning in 1994), Houston Chronicle (127,175 articles, beginning in 1991), LA Times (244,976
articles, beginning in 1985), New York Times (324,343 articles, beginning in 1980), Philadelphia
Inquirer (69,346 articles, beginning in 1994), Tampa Bay Tribune (106,874 articles, beginning in
1987), USA Today (71,303 articles, beginning in 1989) and Washington Post (292,628 articles,
beginning in 1980). The total database thus includes 1,623,567 stories, albeit with more from the
mid-1990s onwards. Note that Lexis-Nexis archives web-only stories from some newspapers, but
coverage is sporadic, so these are excluded from our database. Note also that our selection of
newspapers is based on availability, alongside circulation, with some consideration given to
regional coverage. In the end, we have ten of the highest-circulation newspapers in the US, three
of which aim for national audiences and seven of which cover large regions in the
northeastern, southern, midwestern, and western parts of the country. Combining these newspapers
offers, we believe, a reasonable representation of the national news stream, at least where
newspapers are concerned. Using a relatively wide range of newspapers has an additional
advantage: to the extent that the language and/or focus of defense stories varies across outlets,
building and testing a dictionary across a relatively broad corpus should improve its generalizability.
In order to reduce the number of false positives, i.e., stories that are captured in our search but not
directly related to defense, we run a simple dictionary search over the full-text of the entire corpus,
counting the number of stories that include at least one of the following words:
DEFENSE: army, navy, naval, air force, marines, defense, military,
soldier, war, cia, homeland, weapon, terror, security, pentagon,
submarine, warship, battleship, destroyer, airplane, aircraft,
helicopter, bomb, missile, plane, service men, base, corps, iraq,
afghanistan, nato, naval, cruiser, intelligence
Some 98,882 articles include none of these keywords – roughly 6% of our sample. We take this as
evidence that the vast bulk of our retrieved articles are about defense, at least in part. Those 6%
may well include defense-related words that are simply not in our dictionary. Even so, we exclude
this set of articles from subsequent analyses.
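To illustrate, a filter of this sort takes only a few lines. The sketch below is a Python re-implementation of the idea rather than the software we actually use, and the "articles" list is a hypothetical container for the retrieved full texts.

# Minimal sketch (not our production code): keep only articles containing at
# least one DEFENSE keyword, matched as a whole word, case-insensitively.
import re

DEFENSE = ["army", "navy", "naval", "air force", "marines", "defense",
           "military", "soldier", "war", "cia", "homeland", "weapon", "terror",
           "security", "pentagon", "submarine", "warship", "battleship",
           "destroyer", "airplane", "aircraft", "helicopter", "bomb", "missile",
           "plane", "service men", "base", "corps", "iraq", "afghanistan",
           "nato", "cruiser", "intelligence"]

defense_re = re.compile(r"\b(" + "|".join(re.escape(w) for w in DEFENSE) + r")\b",
                        re.IGNORECASE)

def mentions_defense(text):
    return defense_re.search(text) is not None

# articles = [...]  # hypothetical list of full-text strings from the corpus
# kept = [a for a in articles if mentions_defense(a)]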
Extracting the “Media Policy Signal”
Several dictionaries have already been applied in constructing our corpus – first, in the use of
Lexis-Nexis topics (derived from proprietary dictionaries) and additional full-text search terms, and
second in the application of the DEFENSE dictionary to cull articles that may not be relevant. Even
this database will include a good deal of content not directly related to defense spending, of course.
The focus of our analysis is actually on a small subset of this content: sentences that include
mentions of spending and direction.
We begin by identifying sentences that mention spending. We do so using another simple
dictionary, implemented in Lexicoder (Daku et al. 2015), and constructed from our own reading
of keyword-in-context (kwic) retrievals, augmented by thesaurus searches. Kwic retrievals are in
this instance designed to extract every sentence in which a given keyword is used. Our dictionary-
building procedure is thus as follows: (1) we read a random draw of articles extracted using Lexis-
Nexis keywords, and establish a simple set of words that seem to capture “spending,” (2) we
augment that list using a thesaurus, and (3) we search for each of our dictionary words, extracting
kwic entries and reviewing those entries to ensure that every word is used, most of the time, in the
way in which we anticipate – in this instance, in the context of a sentence about spending. This
process reveals new words that are added to the dictionary and reviewed based on kwic retrievals.
We remove some words that are used in ways we did not anticipate; we keep words that are used
most – if not all – of the time in the way in which we anticipate. Our dictionary is thus tested on
and expanded using the corpus to which we are applying it. This highlights the importance of
building a dictionary using a corpus that is broadly generalizable, if a generalizable dictionary is
the objective. In this instance, we hope to build dictionaries applicable to other newspapers, and
perhaps other mass media, both for and beyond the defense domain. Of course, the generalizability
of the dictionary created here will require further testing. For the time being, we can be confident
only that we are building a dictionary that produces valid results for the current corpus.
How can we be confident about the validity of our dictionaries? First, in order to reduce processing
time, we make a database that includes a random draw of 10,000 stories from each of our 10
newspapers. These 100,000 stories are then output into a folder for preprocessing in advance of
content analysis – preprocessing that removes punctuation and strange characters, makes all text
lower case, and separates every sentence (based on periods) onto a separate line. This facilitates
the use of the "kwic" (keyword in context) function in Lexicoder, which extracts all sentences in
which a given word is used. (A minimal sketch of this preprocessing and retrieval step appears
after the dictionary below.) We build a database of all kwic sentences that include any of the
words in our spending dictionary, which is as follows:
SPEND: allocat*, appropriation*, budget*, cost*, earmark*, expend*,
fund*, grant*, outlay*, resourc*, spend*
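As noted above, here is a minimal sketch of the preprocessing and kwic-retrieval step, written in Python as an illustration rather than as the Lexicoder implementation; wildcard entries such as "spend*" are treated as prefix matches, and the cleaning rule is a simplification of what we describe in the text.

# Minimal sketch (illustrative re-implementation, not Lexicoder): preprocess an
# article and keep the sentences that contain a SPEND dictionary entry.
import re

SPEND = ["allocat*", "appropriation*", "budget*", "cost*", "earmark*",
         "expend*", "fund*", "grant*", "outlay*", "resourc*", "spend*"]

def dictionary_regex(entries):
    # Translate dictionary entries into one regex; '*' means any word ending.
    parts = [re.escape(e).replace(r"\*", r"\w*") for e in entries]
    return re.compile(r"\b(" + "|".join(parts) + r")\b")

spend_re = dictionary_regex(SPEND)

def spending_sentences(article_text):
    # Lower-case, strip stray characters, split on periods, keep SPEND hits.
    text = re.sub(r"[^a-z0-9\s\.$%]", " ", article_text.lower())
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    return [s for s in sentences if spend_re.search(s)]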
The resulting database includes 95,906 sentences from our 100,000 news stories. Within this
database of spending sentences, we then run dictionary counts for another two dictionaries,
capturing upward and downward words (and built using the same review of kwic entries as
discussed above):
UP: accelerat*, accession, accru*, accumulat*, arise*, arose, ascen*,
augment*, boom*, boost, climb*, elevat*, exceed*, expand*, expansion,
extend*, gain*, grow*, heighten*, higher, increas*, increment*, jump*,
leap*, more, multiply*, peak*, rais*, resurg*, rise*, rising, rose,
skyrocket*, soar*, surg*, escalat*, up, upraise, upsurge, upward
DOWN: collaps*, contract*, cut*, decay*, declin*, decompos*, decreas*,
deflat*, deplet*, depreciat*, descend*, diminish*, dip*, drop*, dwindl*,
fall*, fell, fewer, less, lose, losing, loss, lost, lower*, minimiz*,
plung*, reced*, reduc*, sank, sink*, scarcit*, shrank, shrink*, shrivel*,
shrunk, slash*, slid*, slip*, slow*, slump*, sunk*, toppl*, trim*, tumbl*,
wane, waning, wither*
Note that any one of the dictionaries we use is sure to identify a good deal of irrelevant material.
But layering dictionaries on top of each other will, we think, produce an increasingly reliable
measure. We filter articles using both Lexis-Nexis search terms and our own DEFENSE dictionary
to remove articles that seem unlikely to be about defense, then we apply the SPEND dictionary
and extract spending-related sentences, and finally we apply the UP and DOWN dictionaries to
focus on sentences that identify change.
This is identical to the use of “hierarchical dictionary counts,” as implemented using Lexicoder in
Young and Soroka (2012) and Belanger and Soroka (2012). In each of those cases, layered
dictionaries were used to extract the tone of political candidates. Here, we use a combination of
kwic retrievals and subsequent dictionary counts to gradually narrow our database to the content
most pertinent to a media policy signal. Our expectation is that not all direction words will be used
in relation to spending, just as not all spending mentions will co-occur with direction keywords.
We nevertheless expect that the concurrence of these two dictionaries will identify sentences that
are related to spending direction.
We regard this application of dictionaries as very similar to the “learning” inherent in supervised
learning methods used for large-N content analysis (e.g., Jurka et al. 2012; see Grimmer and
Stewart 2013 for an especially helpful review). There sometimes is a perception that dictionaries
are simple word lists, concocted based only on a thesaurus, where individual words are not
subjected to testing, and where results are thus likely to either capture, or miss, a good deal of
irrelevant material. This certainly can be true, but the use of several iterations of testing during the
dictionary-building stage, and the subsequent use of multiple dictionaries that essentially removes
false positives, i.e., a spending word that is in fact not related to defense, makes for a rather
different dictionary-based analysis – one that has used a corpus and human coding to “learn” about
the terms most relevant to the analysis.
Note that we are not staking a claim on the success of dictionary-based analysis in all instances.
As Grimmer and Stewart (2013) note, the best content-analytic approach will vary widely
depending on the requirements of the data and theory. We are suggesting that dictionaries may be
appropriate in this case, however, because of the limited and readily-identifiable words referring
to both spending and direction and the need to identify explicit cues that would be clear to media
consumers. The former likely reduces the gains from supervised learning methods, since human
coding may offer little gain in reliability. The latter reduces the advantages of automated clustering
methods such as latent Dirichlet allocation (LDA; e.g., Blei et al. 2003) or structural topic
modelling (STM; e.g., Roberts et al. 2014), which capture correlations between words that need
not be proximate or related in ways that would be meaningful for the average reader. Each of these
other approaches obviously is of real value in other types of content analysis. There nevertheless
is reason to think that for the task at hand, layered dictionary counts may be quite effective. And,
as we will see, they work.
How well does our approach to identifying relevant spending cues work? We do not present the
results of all the iterations of dictionary-building here, but we can illustrate the basic findings in a
straightforward way. Having built a database of sentences that include a spending keyword, and
then applying the UP and DOWN dictionaries to those sentences, we submit a selection of
sentences to human coding. The sentences are evenly divided between (a) those with no direction
keywords and (b) those with one or more direction keywords. They are also distributed across each
of the spending keywords in our SPEND dictionary, with slightly larger samples for the words that
occur most frequently in our corpus.

We extract kwic retrievals in two random draws of 340, and we report results from the combined
680 retrievals here. Coders applied codes independently after minimal training, during which we
instructed them to assign a 1 to sentences that are clearly about defense spending, and a 0 otherwise.
In this sample, all three coders assign the same value 84% of the time. All sentences include one
of the spending keywords; the inclusion of a direction keyword as well leads to a slight increase
in inter-coder agreement, from 82% to 85%. In instances where coders disagree, we take the
majority decision, i.e., if two of the three coders believe that the sentence is about defense spending,
we code it as such. Doing so suggests that only 33% (223) of the sentences retrieved are clearly about
defense spending. To be clear: inter-coder agreement is high (also see Neuner et al. 2017), but
many of the extracted sentences are not directly about defense spending.
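The agreement and majority-decision calculations are simple; the following minimal sketch, with hypothetical 0/1 codes for three coders, shows the logic.

# Minimal sketch: unanimous-agreement rate and majority decisions for three coders.
# 'codes' is a hypothetical list of (coder1, coder2, coder3) tuples of 0/1 labels.
def agreement_and_majority(codes):
    unanimous = sum(1 for a, b, c in codes if a == b == c)
    majority = [1 if a + b + c >= 2 else 0 for a, b, c in codes]
    return unanimous / len(codes), majority

# Example with three coded sentences:
# rate, labels = agreement_and_majority([(1, 1, 1), (1, 0, 1), (0, 0, 1)])
# rate -> 0.33..., labels -> [1, 1, 0]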
Even this amount of "noise" may be tolerable – to the extent that we are interested in aggregated
results, even high levels of noise may cancel out.[12] Even so, 67% noise is rather more than we
would like. Our review of articles suggested that the broad Lexis-Nexis search had retrieved a high
number of articles that were only very tangentially related to defense spending. Our search had the
advantage of capturing what likely was all relevant content, but at the cost of a lot of irrelevant
material. We thus subjected our kwic retrievals to one final round of vetting: we run the DEFENSE
dictionary again, on the kwic entries themselves, to distinguish sentences that directly mention a
defense keyword. Doing so improves results: of the 193 sentences that do not include a DEFENSE
keyword, only 3 are coded as relevant; of the 487 sentences that do include a DEFENSE keyword,
45% are relevant. For sentences that include both an UP or DOWN keyword and a DEFENSE
keyword, the number is 50%.

The successive application of dictionaries serves to gradually reduce the noise in our kwic
retrievals – the raw material for a measure of the "media policy signal." There nevertheless is
space for improvement, and so we revisit the reliability of our SPEND dictionary, which is
intended to capture spending mentions across a good number of policy domains, but which may
as a consequence be capturing some irrelevant material for defense specifically. Table 1 shows a
test of each spending keyword (we show just a single version of each keyword, though the search
captures several variations of each), across all human-coded kwic retrievals, and the subsets that
(a) must include a DEFENSE keyword and then (b) must also include an UP or DOWN keyword.

[12] For a discussion of the need for large corpuses in order to deal with the error that is typically
unavoidable in automated content analysis, see Soroka (2014).
-- Table 1 about here --
Results highlight the steady increase in reliability that is a consequence of running successive
dictionaries to narrow the sample. They also point to several spending keywords that appear to
produce rather noisy results in the defense domain, especially “grant” and “resource.” There are
few words that, even in conjunction with other dictionaries, produce kwic retrievals that are more
than 80% relevant. But these two keywords seem to produce very few relevant sentences. We will
keep this in mind below.
In the meantime, following this pre-testing with a subset of our data, we create a kwic-retrieval
database of all sentences with a SPEND keyword in all 1.6-million news articles. The resulting
working database has 1,601,036 sentences. Note that this is roughly as many sentences as there
are articles, but not because each article contains exactly one spending sentence. Some articles have
many spending mentions, and many articles have no sentences that mention spending at all. Indeed,
only 15% of the articles in our initial database contain sentences that find their way into our kwic
database. (Our initial search is very broad, and does not impose restrictions based on sentence-
level dictionaries, after all. See the preceding Creating a Media Corpus section.) Table 2 shows
the breakdown of kwic retrievals by newspaper, with increasingly restrictive conditions. The final
column shows the number of sentences that are central to our measure of the “media policy signal.”
-- Table 2 about here --
Sentences in which there are more UP words than DOWN words are coded as 1; sentences in
which there are more DOWN words than UP words are coded as -1; other sentences are coded as
0. (Note that the vast majority of sentences have just one direction keyword.) We then sum these
values across sentences, by fiscal year (October-September). The resulting measure, based on
sentences in the last column of Table 2, captures both direction and magnitude, and can be
calculated for all newspapers, or single newspapers, or single spending keywords, etc. We explore
several of these possibilities below.
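To illustrate the scoring and aggregation just described, the following minimal sketch (again our own Python illustration, with hypothetical inputs) assigns each retained sentence a score of 1, -1, or 0 and sums those scores by fiscal year; up_re and down_re are regexes built from the UP and DOWN dictionaries in the same way as the SPEND regex above.

# Minimal sketch: direction scoring and fiscal-year aggregation.
# 'sentences' is a hypothetical list of (date, text) pairs that already passed
# the DEFENSE and SPEND filters; up_re/down_re follow dictionary_regex above.
from collections import defaultdict

def direction_score(text, up_re, down_re):
    ups, downs = len(up_re.findall(text)), len(down_re.findall(text))
    return 1 if ups > downs else (-1 if downs > ups else 0)

def fiscal_year(date):
    # US federal fiscal year runs October through September.
    return date.year + 1 if date.month >= 10 else date.year

def media_policy_signal(sentences, up_re, down_re):
    signal = defaultdict(int)
    for date, text in sentences:
        signal[fiscal_year(date)] += direction_score(text, up_re, down_re)
    return dict(signal)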
The Media Policy Signal and Policy Change: The Case of US Spending on Defense
How closely does our measure of the "media policy signal" track actual government spending on
defense? What might that tell us about the potential for thermostatic public responsiveness in this
domain? And are there systematic differences across keywords and newspapers? Our exploration
begins with fiscal-year aggregations of kwic retrievals from the New York Times only, since it is
one of the few newspapers available from 1980 onwards.[13] These are analyzed alongside changes
in defense appropriations, in FY2000 US dollars, drawn from the Historical Tables distributed by
the OMB.

[13] The main reason to focus on the New York Times here is that it covers the longest
time period. This also avoids the difficulties involved in combining newspapers, not just because
they are available over different time periods, but because it is unclear exactly how coverage in
different newspapers should be combined. This is discussed further in the concluding section of
the paper.

To cut to the chase: the correlation between spending change and a New York Times signal based
on all kwic retrievals with UP/DOWN keywords but without a DEFENSE keyword is 0.65; the
correlation between spending change and a New York Times signal based on kwic retrievals that
also include a DEFENSE keyword is 0.70.[14] Focusing on the narrower set of kwic retrievals does
not have as striking an impact on results as we might expect (the correlation between the two New
York Times media signals is 0.92). This is perhaps a sign that error in the broader, i.e., noisier,
sample cancels out. Regardless, both aggregate series are highly correlated with actual spending.
Figure 1 shows the trend in spending change (black) alongside the UP/DOWN + DEFENSE
keyword spending signal (gray).
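For readers who want to reproduce this kind of comparison, the following minimal sketch shows the final step with toy numbers in place of the real series; in practice, spending change is the year-over-year difference in real defense appropriations from the OMB Historical Tables, and the signal is the fiscal-year sum produced by a routine like media_policy_signal above.

# Minimal sketch (toy numbers, not actual results): correlating the media policy
# signal with annual change in defense spending.
import pandas as pd

spending_change = pd.Series({1981: 10.2, 1982: 14.8, 1983: 12.1, 1984: -3.5})
nyt_signal = pd.Series({1981: 220, 1982: 310, 1983: 250, 1984: -40})

aligned = pd.concat([spending_change, nyt_signal], axis=1, join="inner")
print(aligned.corr().iloc[0, 1])   # Pearson correlation, analogous to those reported above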
-- Figure 1 about here --
We take Figure 1 as strong evidence of construct validity for our measure of the media signal. We do
not expect media to perfectly represent policy change, but in a highly salient and relatively simple
domain, we expect a national broadsheet newspaper to be relatively accurate. Figure 1 makes clear
that this has been the case. A public that was responding to defense spending as it has been
represented by the New York Times over the past 35 years would look very similar to a public that
was responding directly to defense spending.

There would be some differences, however, and even small differences could be critical for public
preferences and policy development. Of course, it is difficult to tell the extent to which those
differences are a consequence of what the New York Times is reporting, or of weaknesses in our
content analysis. They are likely driven by both, but preliminary evidence suggests that the trend
in Figure 1 is not deeply affected by irrelevant kwic retrievals. Consider this: if we remove all
kwic retrievals that use the words "grant" or "resource" (the words in our dictionary with the lowest
relevancy scores), the correlation between spending and the New York Times media signal shifts
only marginally, from 0.70 to 0.71.

[14] Of course, it is possible that media coverage is picking up spending decisions from the previous
year or else spending decisions being made for the following year. It thus is worth noting that the
correlation between the media signal in year t is 0.52 with spending in fiscal year t-1 and 0.67 with
spending in fiscal year t+1.
-- Table 3 about here --
To what extent are results in Figure 1 the consequence of using a relatively highbrow broadsheet
newspaper? Table 3 explores this by presenting correlations between changes in spending and a
media policy signal for each individual newspaper. Results are based on data since FY1993, as
this is the first year for which all ten newspapers are available. There are some potentially
consequential differences between newspapers: the Denver Post and Philadelphia Inquirer
produce signals that are least correlated with changes in spending (0.55 and 0.59, respectively),
while the Washington Post produces the signal most correlated with changes in spending (0.80).
The implication is that
citizens informed by media about defense spending will be reacting to slightly different, and
potentially meaningfully different, representations of policy change, depending on their media
source.
Even so, the mean correlation between newspaper series is 0.75 and in many cases (36%) exceeds
0.80. There thus is a lot of similarity to the policy signal we glean from the different papers. This
implies a corresponding similarity to the information the public receives. Figure 2 highlights this
similarity. The top panel shows the New York Times raw media signal in black, with each
other newspaper in gray. We do not distinguish between the newspapers here – the aim is just to
highlight the degree to which they move in tandem. Newspapers produce different overall volumes
of news coverage, and over-time variation in the raw measure reflects this. (The highest numbers
in this figure are from the Washington Post.) The bottom panel shows all series in standard units,
where each is divided by its standard deviation. This standardization further highlights the common
trend evident in the raw data. Note, though, some interesting outliers, including the especially strong
emphasis on decreases in USA Today in 2011 and the especially strong emphasis on increases in
the Tampa Bay Times in 1989.
-- Figure 2 about here --
To what extent do results vary across spending keywords? We have already seen in Table 1 that
the relevance of kwic retrievals varies by spending keyword. Table 4 shows bivariate correlations
between changes in spending and series that (a) use all available keywords in the SPEND
dictionary or (b) exclude the two worst-performing words, based on expert coding: "grant" and
"resource." Results here are striking: in spite of the many errors in coding that may exist for
“grant” and “resource,” correlations between the media policy signal and spending change do not
improve markedly when these words are excluded. Indeed, for some newspapers the correlation
weakens with the removal of these words. The implication here is that even when there is a good
deal of noise, i.e., false up and down codes, that noise largely cancels out in these highly aggregated
measures.
-- Table 4 about here --
Though the results in Table 4 do not reveal much difference, there is good reason to fine-tune
dictionaries, often to the specific corpus under investigation. Were we looking at environmental
policy, for instance, the word “resource” might be entirely unusable – it may refer to natural
resources far more than financial ones. The aim moving forward must be to reconsider the
dictionary in the face of the corpus to which it is to be applied.
Conclusion and Discussion
A carefully constructed measure of media coverage of policy outputs is of real significance for
those interested in policy responsiveness and representation. As noted above, most citizens learn
about most policies indirectly, often through mass media. The opinion-policy link thus depends
heavily not just on the volume but on the accuracy of media coverage of policy. Where media
provide accurate policy cues, there are good reasons to expect public responsiveness and policy
representation. Where media cues are systematically different from actual policy, the potential for
responsiveness and representation is limited (see Neuner et al. 2017). A means of identifying the
media policy signal offers not just a measure of media accuracy; it also speaks to the potential for
representative democracy, policy domain by policy domain, across time and space.
We regard this as a preliminary effort at developing a broadly applicable measure of the media policy
signal. There are several possibilities that we have not yet explored but suggest for future work.
We have not yet fully considered how best to combine results from multiple newspapers, for
instance. Figure 2 offers two ways of thinking about across-newspaper differences, but a single
measure of the "media policy signal" might sum raw counts, or weight counts based on newspaper
circulation data or the populations the papers serve. If the objective is to capture the signal that
actually reaches people, weighting obviously is crucial. The volume of coverage may also offer
insights into the salience of – or attention to – policy, which differs from policy change and can
vary over time and across issues (and political contexts).[15]

We also have not explored the applicability of these dictionaries to domains other than defense.
Our long-term objective is to do exactly this. Our hope is that the dictionaries will require only
minimal revision for other domains; at least, that has been our objective here. This is a testable
proposition, left for future work. So too is the possibility of developing a rather different set of
dictionaries for use in policy domains that are less characterized by spending. Defense may be an
easy test for our measure; the environment would be much tougher. Even so, it may be possible to
develop a set of dictionaries that capture change in environmental regulation, and this would be
useful in understanding responsiveness and representation in that domain.

[15] Also see Jennings and Wlezien (2015).
For the time being, results here help explain how the American public has been found to respond
thermostatically to change in defense spending (Wlezien 1995; 1996; Soroka and Wlezien 2010).
Defense spending is not experienced directly by most Americans, of course. But information about
defense spending is readily available in media content. Consider the following: there were
approximately 67,000 sentences in the New York Times that include very clear spending cues,
where we identified words from all of the dictionaries outlined above. These sentences are drawn
from 36 years of data, which means there were roughly 1,861 sentences per year, an average of 5
per day. In the Washington Post, the number is nearly 6 per day; in the Boston Globe, 2 per day;
in the Chicago Tribune, 2 per day; and so on. Even peripheral attention to media coverage of
defense issues would, we surmise, expose people to cues about the direction in which that policy
was moving. It follows that there are good reasons for scholars of policy and representation to
further explore the media policy signal. This is true whether feedback is thermostatic or else takes
the other forms discussed above, as media coverage of policy is of value well beyond the literature
on thermostatic public responsiveness.
References
Althaus, Scott. 2003. Collective Preferences in Democratic Politics. Cambridge: Cambridge
University Press.
Altheide, David L. 1997. “The News Media, the Problem Frame, and the Production of Fear.”
Sociological Quarterly 38(4): 647–68.
Barabas, Jason. 2009. "Not the Next IRA: How Health Savings Accounts Shape Public Opinion."
Journal of Health Politics, Policy and Law 34: 181-217.
Barabas, Jason and Jennifer Jerit. 2009. “Estimating the Causal Effects of Media Coverage on
Policy Specific Knowledge.” American Journal of Political Science 53(1): 73-89.
Bartle, John, Sebastian Dellepiane-Avellaneda, and James Stimson. 2011. “The Moving Centre:
Preferences for Government Activity in Britain, 1950–2005.” British Journal of Political Science
41(2): 259-285.
Baumgartner, Frank, and Bryan D. Jones. 1993. Agendas and Instability in American Politics.
Chicago: University of Chicago Press.
Beland, Daniel. N.d. “Policy Feedback and the Politics of the Affordable Care Act.” Policy Studies
Journal (this issue).
———. 2010. “Reconsidering Policy Feedbacks: How Policies Affect Politics.” Administration
and Society 42(2): 568-590.
Bennett, Stephen. 1988. "'Know-Nothings' Revisited: The Meaning of Political Ignorance Today."
Social Science Quarterly 69: 476-490.
Berelson, Bernard R., Paul F. Lazarsfeld, and William N. McPhee. 1954. Voting. Chicago:
University of Chicago Press.
Blei, David, Andrew Ng, and Michael Jordan. 2003. "Latent Dirichlet Allocation." Journal of
Machine Learning Research 3: 993–1022.
Boydstun, Amber. 2013. Making the News. Chicago: University of Chicago Press.
Bucchi, Massimiano, and Renato G. Mazzolini. 2003. “Big Science, Little News: Science
Coverage in the Italian Daily Press, 1946-1997.” Public Understanding of Science 12(1): 7-24.
Campbell, Andrea. 2012. “Policy Makes Mass Politics.” Annual Review of Political Science 15:
333-351.
———. 2003. How Politics Makes Citizens. Princeton: Princeton University Press.
Card, Dallas, Amber E Boydstun, Justin H Gross, Philip Resnik, Noah A Smith. 2015. “The Media
Frames Corpus: Annotations of Frames across Issues.” Proceedings of the 53rd Annual Meeting
of the Association for Computational Linguistics and the 7th International Joint Conference on
Natural Language Processing (Short Papers), pages 438–444, Beijing, China, July 26-31, 2015.
Converse, Philip E. 1970. "Attitudes and Non-Attitudes: Continuation of a Dialogue." In Edward
R. Tufte (ed.), The Quantitative Analysis of Social Problems. Reading, Mass: Addison-Wesley.
-----. 1964. “The Nature of Belief Systems in Mass Publics.” In David Apter (ed.), Ideology and
Discontent. New York: Free Press.
Daku, Mark, Stuart Soroka and Lori Young. 2015. Lexicoder. Software available at lexicoder.com.
Davie, William R., and Jung Sook Lee. 1995. “Sex, Violence, and Consonance/Differentiation:
An Analysis of Local TV News Values.” Journalism and Mass Communication Quarterly 72(1):
128–38.
Delli Carpini, Michael and Scott Keeter. 1996. What Americans Know about Politics and Why It
Matters. New Haven: Yale University Press.
Deutsch, Karl W. 1963. The Nerves of Government: Models of Political Communication and
Control. New York: Free Press.
Dunaway, Johanna. 2011. “Institutional Influences on the Quality of Campaign News
Coverage.” Journalism Studies 12(1):27-44.
Durr, Robert H. 1993. “What Moves Policy Sentiment?” American Political Science Review 87:
158-170.
Easton, David. 1965. A Framework for Political Analysis. Englewood Cliffs NJ: Prentice-Hall.
Eichenberg, Richard, and Richard Stoll. 2003. “Representing Defence: Democratic Control of the
Defence Budget in the United States and Western Europe.” Journal of Conflict Resolution 47:
399-423.
Ellis, Christopher, and Christopher Faricy. 2011. “Social Policy and Public Opinion: How the
Ideological Direction of Spending Influences Public Mood.” The Journal of Politics 73 (04):
1095–1110.
Erikson, Robert S., Michael B. MacKuen and James A. Stimson. 2002. The Macro Polity.
Cambridge: Cambridge University Press.
Fording, Richard. N.d. “Medicaid Expansion and the Political Fate of Governors who Support it.”
Policy Studies Journal (this issue).
Friedman, Sharon H., Sharon Dunwoody and Carol L. Rogers, eds. 1999. Communicating
Uncertainty: Media Coverage of New and Controversial Science. New York: Routledge.
Grimmer, Justin and Brandon M. Stewart. 2013. “Text as Data: The Promise and Pitfalls of
Automatic Content Analysis for Political Texts.” Political Analysis 21(3): 267-297.
Hakhverdian, A. 2012. “The Causal Flow between Public Opinion and Policy: Government
Responsiveness, Leadership, or Counter Movement?” West European Politics, 35(6): 1386-
1406.
Iyengar, Shanto. 1991. Is Anyone Responsible? How Television Frames Political Issues. Chicago:
University of Chicago Press.
Jennings, Will. 2009. “The Public Thermostat, Political Responsiveness and Error Correction:
Border Control and Asylum in Britain, 1994-2007.” British Journal of Political Science 39:847-
870.
Jennings, Will and Christopher Wlezien. 2015. “Preferences, Problems, and Representation.”
Political Science Research and Methods 3(3): 659-681.
Jurka, Timothy P., Loren Collingwood, Amber Boydstun, Emiliano Grossman, and Wouter van
Atteveldt. 2012. RTextTools: Automatic text classification via supervised learning. http://cran.r-
project.org/web/packages/RTextTools/index.html.
Lawrence, Regina G. 2000. “Game-Framing the Issues: Tracking the Strategy Frame in Public
Policy News.” Political Communication 17(2): 93-114.
McCombs, Maxwell W., and Donald L. Shaw. 1972. “The Agenda-Setting Function of Mass
Media.” Public Opinion Quarterly 36(2):176-187.
Mettler, Suzanne. 2005. Soldiers to Citizens: The G.I. Bill and the Making of the Greatest
Generation. New York: Oxford University Press.
Mettler, Suzanne, and Joe Soss. 2004. “The Consequences of Public Policy for Democratic
Citizenship: Bridging Policy Studies and Mass Politics.” Perspectives on Politics 2(1):55-73.
Morgan, Stephen L. and Minhyoung Kang. 2015. "A New Conservative Cold Front? Democrat
and Republican Responsiveness to the Passage of the Affordable Care Act." Sociological
Science 2: 502-526.
Neuner, Fabian, Stuart Soroka and Christopher Wlezien. 2017. “The Clues in the News: Mass
Media and Public Responsiveness to Policy.” Paper presented at the annual meeting of the
SPSA, New Orleans LA.
Pacheco, Julianna. 2013. “Attitudinal Policy Feedback and Public Opinion.” Public Opinion
Quarterly 77:714-734.
Page, Benjamin I. and Robert Y. Shapiro. 1992. The Rational Public: Fifty Years of Trends in
Americans’ Policy Preferences. Chicago: University of Chicago Press.
Popkin, Samuel and Michael Dimock. 1999. Political Knowledge and Citizen Competence. In
Stephen Elkin and Karol Salton (eds), Citizen Competence and Democratic Institutions.
University Park: Pennsylvania State University.
Roberts, Margaret E., Brandon M. Stewart, Dustin Tingley, Christopher Lucas, Jetson Leder-Luis,
Shana Kushner Gadarian, Bethany Albertson, and David G. Rand. 2014. "Structural Topic
Models for Open-Ended Survey Responses." American Journal of Political Science 58(4):
1064-1082.
Soroka, Stuart. 2014. "Reliability and Validity in Automated Content Analysis." In Roderick P.
Hart (ed.), Communication and Language Analysis in the Corporate World. Hershey PA: IGI
Global.
Soroka, Stuart, Dominik Stecula, and Christopher Wlezien. 2015. "It's (Change in) the (Future)
Economy, Stupid: Economic Indicators, the Media, and Public Opinion." American Journal of
Political Science 59(2): 457-474.
Soroka, Stuart N. and Christopher Wlezien. 2010. Degrees of Democracy: Politics, Public Opinion
and Policy. Cambridge University Press.
Soss, Joe and Sanford Schram. 2007. “A Public Transformed? Welfare Reform as Policy
Feedback.” American Political Science Review 101:111-127.
Stimson, James. 1999. Public Opinion in America: Moods, Cycles, and Swings, 2nd ed. Boulder
CO: Westview Press.
Ura, Joseph. 2014. "Backlash and Legitimation: Macro Political Responses to Supreme Court
Decisions." American Journal of Political Science 58: 1100-1126.
Ura, Joseph Daniel, and Christopher R Ellis. 2012. “Partisan Moods: Polarization and the
Dynamics of Mass Party Preferences.” The Journal of Politics 74 (01): 277–91.
Weaver, Vesla and Amy Lerman. 2010. “The Political Consequences of the Carceral State.”
American Political Science Review 104:817-833.
Wlezien, Christopher. 2004. “Patterns of Representation: Dynamics of Public Preferences and
Policy.” Journal of Politics 66:1-24.
———. 1996. “Dynamics of Representation: The Case of US Spending on Defense.” British
Journal of Political Science 26:81-103.
———. 1995. "The Public as Thermostat: Dynamics of Preferences for Spending." American
Journal of Political Science 39:981-1000.
Wlezien, Christopher and Stuart Soroka. 2012. “Political Institutions and the Opinion-Policy
Link.” West European Politics 35(6): 1407-1432.
Wlezien, Christopher, Stuart Soroka and Dominik Stecula. 2017. “A Cross-National Analysis of
the Causes and Consequences of Economic News.” Social Science Quarterly 98(3): 1010-1025.
Table 1. Relevance of kwic retrievals, by spending keyword

                 Kwic retrieval   w/ DEFENSE keyword   w/ DEFENSE and UP or DOWN keyword
Allocate              40%               53%                     69%
Appropriation         25%               34%                     38%
Budget                45%               67%                     62%
Cost                  25%               34%                     37%
Earmark               30%               43%                     43%
Expenditure           68%               84%                     73%
Fund                  21%               28%                     41%
Grant                  9%               13%                     17%
Outlay                63%               76%                     67%
Resource              10%               15%                     27%
Spend                 49%               69%                     82%
Table 2. Kwic retrievals, by newspaper

                        # sentences   w/ DEFENSE keyword   w/ DEFENSE and UP or DOWN keyword
Boston Globe               107,534          40,885                  20,070
Chicago Tribune            145,124          55,785                  26,082
Denver Post                 44,873          12,438                   5,349
Houston Chronicle           93,482          33,212                  14,843
LA Times                   238,666          95,100                  46,918
New York Times             328,608         131,640                  67,026
Philadelphia Inquirer       60,923          19,529                   8,637
Tampa Bay Tribune           95,154          28,434                  12,057
USA Today                   70,647          24,136                  10,878
Washington Post            416,025         156,563                  77,327
Total                    1,601,036         597,772                 289,187
Figure 1. Changes in defense spending and the New York Times “media policy signal”
Table 3. Correlations between spending and media policy signals, by newspaper, 1993-2016

      Spending   NYT   BGL   CTR   DVP   HCH   LAT   PHI   TBT   USA
NYT     .76
BGL     .61      .90
CTR     .76      .89   .75
DVP     .55      .73   .57   .64
HCH     .65      .91   .83   .87   .64
LAT     .74      .93   .80   .89   .65   .83
PHI     .59      .78   .62   .85   .55   .77   .86
TBT     .68      .68   .58   .67   .69   .56   .67   .69
USA     .79      .78   .53   .77   .60   .70   .76   .68   .56
WPO     .80      .93   .79   .93   .75   .86   .90   .81   .68   .82
Figure 2. The media policy signal, by newspaper
Table 4. Correlations between spending and alternate media policy signals, by newspaper

       All keywords   Without "grant" and "resource"
NYT        .76                  .78
BGL        .60                  .61
CTR        .76                  .74
DVP        .55                  .53
HCH        .65                  .67
LAT        .74                  .75
PHI        .59                  .58
TBT        .68                  .67
USA        .79                  .78
WPO        .80                  .79