Journal for General Philosophy of Science
Understanding andTrusting Science
MatthewH.Slater1 · JoannaK.Huxster2· JuliaE.Bresticker3
© Springer Nature B.V. 2019
Science communication via testimony requires a certain level of trust. But in the context
of ideologically entangled scientific issues, trust is in short supply—particularly when
the issues are politically charged. In such cases, cultural values are better predictors
than scientific literacy for whether agents trust the publicly directed claims of the scien-
tific community. In this paper, we argue that a common way of thinking about scientific
literacy—as knowledge of particular scientific facts or concepts—ought to give way to a
second-order understanding of science as a process as a more important notion for the pub-
lic’s trust of science.
Keywords Science communication· The social structure of science· Consensus
1 Introduction
The state of scientific literacy in America and in other developed nations has been an issue
of concern for many decades now (Miller 1983, 2004; Bodmer 1985; OECD 2007). More
recently, a palpable anti-science sentiment has become more prominent (McCright and
Dunlap 2011; Kahan 2015). This is reflected, in part, by the fact that large portions of the
public remain intransigent with respect to their dismissal of policy-relevant science, such
as that concerning the risks of anthropogenic climate change (Leiserowitz et al. 2016).
Given the amount of attention that scientific education and communication have received
in the intervening decades, these facts may seem a little surprising. Why have we failed to
bring about better outcomes?
A number of plausible explanations could be cited, from the persistence of problematic
models of science communication to the increasing prominence of well-funded anti-science
* Matthew H. Slater
Joanna K. Huxster
Julia E. Bresticker
1 Department ofPhilosophy, Bucknell University, Lewisburg, PA, USA
2 Environmental Studies, Eckerd College, SaintPetersburg, FL, USA
3 Department ofMedical Physics, Duke University, Durham, NC, USA
M.H.Slater et al.
1 3
groups (Dunlap and McCright 2010, 2011; Brulle 2014). And while we believe that these
are indeed relevant factors in our present difficulties, our focus in this essay will be to raise
(again) the question of whether we have our priorities for scientific literacy in order—and
whether a somewhat different approach might mitigate some of the damaging social–politi-
cal dynamics that make the consensus gaps we observe so recalcitrant.
This is not the right forum for a full hearing on the question of how we should con-
ceptualize ‘scientific literacy’ or ‘the public understanding of science’ (let alone how to
bring about such goods). Our aims in this essay are more programmatic. First, we wish to
offer a framework for formulating and evaluating different conceptions of scientific literacy
(Sects. 2, 3); second, having briefly considered the outlines of a popular conception and
found it wanting when it comes to enabling its possessors to appreciate the epistemic sig-
nificance of scientific consensus, we will outline a conception that emphasizes certain social
dimensions of the scientific enterprise that seem to us undervalued in most discussions of
scientific literacy. We call this approach the Social Structure of Science (SSS) conception
of Scientific Literacy and argue on conceptual/epistemic grounds that its possessors will
be better positioned to recognize occasions on which the scientific community is appropri-
ately regarded as a source of epistemic authority. To our mind, this makes it an attractive
social goal; however, we do not argue here that this variety of scientific literacy is the end-
all or universally appropriate minimum standard for the public’s grasp of science. Our hope
is that this essay will provide STS researchers1 a useful starting point for engaging in an
important and growing area of interdisciplinary research in which their expertise is needed.
2 Varieties ofScientic Literacy
What is scientific literacy and what is it for? Such questions resist univocal answer. This is
not overly surprising. Consider other forms of practical competence or epistemic success.
What does it mean to be technologically literate, for example? Presumably the answer to
this question will depend on the context in which judgments about technological literacy
are to be made. What is expected and valued will depend on what standards are in play.
Standards in turn depend (in part) on goals, which are themselves sensitive to context. Sim-
ilar comments apply to epistemic competencies. We might say that you understand how
a car engine works—unless you were employed in the front office of the local auto shop,
in which context we might be disinclined to attribute such understanding lest it engender
faulty expectations of your capabilities (Wilkenfeld et al. 2016).
This strikes us as a productive light in which to consider Benjamin Shen’s much cited
(1975) three-fold distinction between Practical, Civic, and Cultural “forms of science liter-
acy”. Shen defined the first as “the possession of the type of scientific and technical know-
how that can be immediately put to use to help improve living standards” (Shen 1975, 265).
Today, in the democratic, developed world, many might be inclined to think of “civic sci-
ence literacy” as particularly significant. Here he defined the aim as “[enabling] the citizen
to […] participate more fully in the democratic processes of an increasingly technological
society” (ibid., 266). Cultural science literacy Shen explicated (in shades of Snow 1959)
by analogy to the sort of competence and familiarity a scientist or engineer might seek
1 Among whom we include historians, sociologists, and philosophers. We note that it is a little surprising,
in particular, that philosophers of science and epistemologists have had little to say on this topic (as indeed
they have had rather little to say to each other).
Understanding andTrusting Science
1 3
to develop by studying ancient history, poetry, or classics: it is “motivated by a desire to
know something about science as a major human achievement; it is to science what music
appreciation is to music. It solves no practical problems directly, but it does help bridge the
widening gulf between the scientific and humanistic cultures” (Shen 1975, 267).
Shen’s varieties of scientific literacy are thus functionally defined in terms of what they
aim to bring about.2 He says relatively little about what specifically it takes to satisfy each
concept.3 Many proposals have been offered (and criticized) in the intervening decades
(Miller 1983; Thomas and Durant 1987; Shamos 1995; DeBoer 2000; Laugksch 2000;
Miller 2010a; Snow and Dibner 2016). Before offering our own proposal for a concep-
tion of scientific literacy that we think is worthy of being taken seriously, let us take a step
back and consider the form that such proposals may usefully take. Very plausibly, scien-
tific literacy centrally involves a certain kind of epistemic success. We interpret this suc-
cess expansively, potentially to include such states as propositional knowledge, know-how,
understanding, and so on. We are not so expansive, however, as to include affective states—
e.g., taking a certain attitude about some aspect of science. Tempting as this might be,
doing so has the effect of making analytic what should remain empirical questions about
the connection between one’s grasp of science and one’s attitudes toward it (Thomas and
Durant 1987, 10 make a similar point).
Conceiving of such epistemic states as relations, a straightforward approach to scien-
tific literacy will start by specifying this relation (or relations) and its (or their) relata. Our
framework thus involves answering three questions:
(1) What epistemic relation(s) are at stake? If scientific literacy is a kind of epistemic suc-
cess, what kind of success is it?
(2) Who or what are taken to be the primary subjects of this success? Is it every individual
member of the public, only some, the public as a whole (or some other option)?
(3) What is the content of this success? For example, what facts or theories are to be truly
believed (or known or understood or …) in order for one to count as being scientifically
literate (in the given sense)?
Further components may of course be added to accommodate non-epistemic dimensions
of a conception that cannot adequately be captured by (1–3). We shall assume in this paper,
however, that the epistemic can sufficiently subsume these aspects.
To get a better sense of how answering these questions can generate different concep-
tions of scientific literacy, let us consider some potential variety to these answers. We take
the questions ‘out of order’ to simplify the discussion, starting with (2).
2.1 The Possessors ofScientic Literacy
Who are ‘possessors’ of (a variety of) scientific literacy? We can think of this as a question
with normative content (as in who is expected to possess scientific literacy for the given
conception?) or simply as a factual question of what the target of a certain evaluation is.
2 For this reason, Norris and Phillips call scientific literacy a “programmatic concept” (Norris and Phillips
2009, 271).
3 Shen’s few gestures towards greater specificity tend themselves to be functionally defined: e.g., “the sci-
entifically literate layman knows how to separate the nontechnical from the technical, the subjective from
the objective, and to make full use of scientific expertise without being overwhelmed by it” (Shen 1975,
M.H.Slater et al.
1 3
For now, let us stick with the latter interpretation.4 A straightforward answer is that when
we attribute scientific literacy, we attribute it to individual people. For example, when one
is asking after the scientific literacy of the American ‘lay public’ it seems that one typically
is interested in the scientific literacy of each member of this group.
But other possibilities are worth considering. Instead of focusing on individuals, one
may wish to countenance communities (or other ensembles of epistemic agents) as the rel-
evant possessors of scientific literacy. In a recent report of the National Academy of Sci-
ences (Snow and Dibner 2016), the Committee on Science Literacy and Public Perception
of Science acknowledged the concept of “community-level science literacy” as the idea
that certain knowledge or abilities might be possessed not by individuals but by groups of
people (cf. Bird 2010; Ludwig 2014; Miller 2010b). Perhaps we see hints of this thought
in Shen’s discussion of Practical Science Literacy. We leave this interesting nuance unex-
plored in this paper and focus henceforth on the scientific literacy of individuals.
2.2 Content/Subject Matter
In asking after the content of scientific literacy, we are asking what is to be grasped or
known (or …?) by the scientifically literate. Here too we should expect a contextual or
developmental element in any plausible answer to the above. This is characteristic of edu-
cational policy documents aiming to outline programs for science education (OECD 2007;
NRC 2012; PISA 2012; Snow and Dibner 2016). Answers here typically include particular
pieces of scientific fact (e.g., that the earth orbits the sun or that molecules are composed
of atoms), theories (evolution by natural selection, universal gravitation), or concepts (e.g.,
radiation, genetic inheritance, and so on). Another common answer to the content question
emphasizes concepts from the so-called “Nature of Science” (NoS): e.g., theory, hypoth-
esis, confirmation, and perhaps other basic methodological ideas in the vicinity. We shall
return to the question of content—and the range of answers we see—in greater detail in
Sects. 3–4.
2.3 The Epistemic Relation
Suppose that we have in mind a conception of both the content of scientific literacy and the
possessors of that content. What is the epistemic relationship between them? This is a ques-
tion that seems to us surprisingly neglected in the existing literature. There are a number of
straightforward options here: one might know something about science, one might merely
truly believe it (perhaps without reasons good enough to count as knowing), or one might
understand something about science. What is the relationship we should favor in our con-
ception of scientific literacy? The answer might seem obvious: the reason why literacy has
seemed an apt label for this quality stems in part from the comparison with grasping—or
understanding—a language (Norris and Phillips 2003). One is not literate in, one does not
understand, a foreign language when one merely knows what some words mean; literacy is
more flexible and holistic, expressing a kind of grasp or mastery. In the epistemic context,
it involves seeing how things “hang together” (Zagzebski 2001; Elgin 2006; Grimm 2012).
4 A further complication that becomes salient when taking up the normative interpretation of this question
is that different communities, political contexts, social roles, and so on may carry different expectations for
a certain depth and content of scientific literacy; we thank an anonymous reviewer for raising this point.
Understanding andTrusting Science
1 3
The fact that “scientific literacy” is usually discussed under the rubric of “public under-
standing of science” in Europe further corroborates this suggestion (Laugksch 2000, 71).5
We are sympathetic to this line of thought, but it is too simple as stated (and thus the
question deserves a place in our conceptual framework). First, as a matter of practice, sci-
entific literacy is often treated as boiling down to agents’ knowledge (indeed, their mere
true belief) of some facts. Most widely used measures of it consist in multiple-choice
items. Sometimes this occurs despite assertions or intimations that understanding is the
relevant goal.6 Second, even if understanding takes a prominent role in a conception of sci-
entific literacy, knowledge may yet be involved. Whatever the precise relationship between
knowledge and understanding (Kvanvig 2003; Grimm 2006), it is credible that an under-
standing of a subject matter often incorporates various bits of propositional knowledge.
Even the richest understandings are ultimately based to some extent on the say-so of others
(Coady 1992; Lipton 1998; Goldman 2001). Moreover, a conception of scientific literacy
might also involve a certain range of rote knowledge—even of propositions that are not
themselves understood in any deep way—in addition to a deeper understanding of other
matters. Thus, a conception of scientific literacy may plausibly involve a range of different
epistemic relations between agents and content. Getting clear on these relations is impor-
tant for determining how best to bring about more of it via education and communication,
as it is not universally granted that understanding can be transmitted via testimony (Hills
2009; cf. Boyd 2017).
So much for describing an approach to filling out a conception of scientific literacy.
Clearly other frameworks are possible. Why, for instance, don’t we include goals in our
framework questions? In part, because we see value in seeking greater specificity in the
specification of content, capabilities, or epistemic relations and then asking what more gen-
eral good for individuals or society scientific literacy so defined might be expected to bring
about. Of course, a given conception may be motivated in the first place by an expectation
of its social function; it may thus be thought of or labelled in terms of that function. We
merely wish to leave it open whether the specific content of a conception would in fact
serve an intended end in a given context. Let us now turn to the more difficult matter of
evaluating such conceptions. As before, our aim is not to provide an exhaustive survey of
which sorts of scientific literacy are of value (and in what contexts); our discussion will
focus on a certain range of evaluations that we hope will provide useful context for evaluat-
ing our own conception of scientific literacy.
3 Evaluating Conceptions ofScientic Literacy
We mentioned above the ‘functional’ or goal-directed character of Shen’s three concep-
tions of scientific literacy. A given proposal for a particular population could thus be evalu-
ated in terms of, first, whether the goal itself is of
value, and, second, whether the proposed answers to our framework questions are poised to
bring it about.
5 Shen’s initial gloss of it is also typical: it is “in the interest of everybody […] to gain a better understand-
ing of science and its applications […]. Such an understanding might be called ‘science literacy’” (1975,
265; emphasis added).
6 Previous research has shown that epistemic success terms like “knowledge” and “understanding” are
often left undistinguished from one another or even conflated in the scholarly literature on scientific literacy
and the public understanding of science (Huxster et al. 2018).
M.H.Slater et al.
1 3
What sort of goals might we seek or expect? Without describing an exhaustive typol-
ogy, the literature on scientific literacy offers various examples of goals purportedly of
either practical or intrinsic value. Shen’s Practical and Civic forms of scientific literacy are
examples of the former, while Cultural scientific literacy is an example of the latter: some-
one who improves the latter does so, he writes, “in the same spirit in which a science stu-
dent might study ancient history, an engineer read poetry, or a physician delight in classical
tragedies […]. [It] is motivated by a desire to know something about science as a major
human achievement […]. It solves no practical problems” (Shen 1975, 267). As Michael
Strevens put it at the outset of his book on scientific explanation, “If science provides any-
thing of intrinsic value, it is explanation. Prediction and control are useful […] but when
science is pursued as an end rather than as a means, it is for the sake of understanding—
the moment when a small, temporary being reaches out to touch the universe and makes
contact” (Strevens 2008, 3). On whether to seek to fulfill the goals of Cultural Scientific
Literacy—as opposed to studying ancient history or poetry—we take no position. It does
seem doubtful that a univocal case for a purely intrinsically motivated conception of scien-
tific literacy will be in the offing.
So let us consider instead conceptions motivated by practical goals. Civic Scientific Lit-
eracy will probably be high on the minds of science educators and communicators con-
cerned about the opinion gaps between the lay public (particularly in the U.S. and U.K.)
and the scientific consensus on various issues. Shen motivated its value by pointing out
how common it was (in 1975) for legislative bills in the U.S. to “have a scientific or tech-
nological basis […] [involving] health, energy, food and agriculture, natural resources,
the environment, product safety, outer space, communication, transportation, and others”
(Shen 1975, 266). Little has changed in the intervening decades—except of course we can
add to Shen’s list. Consider a recent essay of Miller’s:
Today’s political agenda includes a debate over the consequences of and solutions for
global climate change, a continuing debate over the use of embryonic stem cells in
biomedical research, a spirited set of disagreements over future energy sources, and
a lingering concern over the possibility of a viral pandemic. In Europe, the political
landscape is still divided over nuclear power and genetically modified foods. No seri-
ous student of public policy or science policy thinks that the public-policy agenda
will become less populated by scientific issues in the twenty-first century. Yet only
28% of American adults have sufficient understanding of basic scientific ideas to be
able to read the Science section in the Tuesday New York Times. (Miller 2010a, 241)
The question, then, is what is needed for citizens to actively participate in the democratic
processes that weigh in on such issues. Miller’s view has two main components, again
indexed to a certain functional competency: (1) “a basic vocabulary of scientific terms and
constructs” and (2) “a general understanding of the nature of scientific inquiry […] suffi-
cient to read and comprehend the Tuesday science section of The New York Times” (Miller
2004, 273–274). Presumably content falling under the heading of the “Nature of Science”
(NoS)—a range of conceptual and methodological aspects of science such as what scientific
theories are, their status as revisable and provisional, how they may be tested and con-
firmed, and so on—is part of the latter, if not also the former. The former, as judged by
Miller’s measurement instruments, consists in an agent’s grasp of such facts as whether the
center of the Earth is hot or what lasers do (Miller 2010a, 47; see also Snow and Dibner
2016, 15).
While it is certainly plausible that a basic understanding of scientific vocabulary and
a grasp of basic facts about the natural world may be a necessary condition for being an
Understanding andTrusting Science
1 3
informed participant in democratic decision making7 concerning issues informed by or
involving science and technology (which is to say a large portion of decision making in
developed nations), it is quite a bit less clear whether—assuming other conditions are
met—possession of scientific literacy on conceptions like Miller’s in fact credibly results
in its possessors’ informed participation.
Defenders of a strong focus on NoS content sometimes seem to suggest that grasp
of this content will allow members of the lay public to evaluate scientific claims them-
selves—including determining whether a given scientific claim can be relied upon (OECD
2007, 34). Here we anticipate a connection with Miller’s justification: perhaps one thing
that is practically useful about being able to competently read science reporting is the abil-
ity to know when that reporting is reliable or whether the claims themselves are plausible.
This suggests that the epistemic relation centrally in question in these conceptions is under-
standing. As Elgin notes, understanding involves “an adeptness in using the information
one has, not merely an appreciation that things are so” (Elgin 2007, 35; see also Grimm
2012; Zagzebski 2001, 110–111). But while it is plausible that such a grasp of basic foun-
dational scientific and NoS content might allow agents to weed out certain obviously prob-
lematic content, it seems doubtful that it would allow one to evaluate apparently competent
but competing claims.8 As Stephen Jay Gould (1999) pointed out in an editorial in
Science, this is something that other scientists can barely manage; he wrote that science
had then “reached the point where most technical literature not only falls outside the possi-
bility of public comprehension but also […] outside our own competence in scientific dis-
ciplines far removed from our personal expertise” (cf. Shamos 1995). And note that Gould
has in mind only comprehension, not evaluation. Nearly 20 years later, this situation has
only become more dramatic.
The practical reality is that the public is not—and likely will never be—in a position
to vet scientific claims themselves (Anderson 2011, 144; Jasanoff 2014, 24; Keren 2018).
They must instead rely on the division of epistemic labor and trust the scientific community
as a source of intellectual authority, relying on the community itself to vet its own deliver-
ances. This latter claim needs to be nuanced if it is to be plausible; what, for instance, is
the force of the “must”? What is the scope and strength of this trust? These are questions
for another time (Zagzebski 2012; Keren 2007, 2014). For now, let us assume a plausibly
conservative general gloss on trust of, and/or deference to, scientific authority. The diffi-
culty, as Shen saw, is that it is sometimes difficult to identify this authority; he wrote of the
legislators “who have to decide on [matters concerning science]” that they “usually do not
lack expert advice from contending sides; rather they complain of not knowing which set
of experts to believe” (Shen 1975, 266). This problem persists.
Many, we submit, would find it plausible that the attitudes and abilities that enable
such trust are an important social goal for a conception of Civic Scientific Literacy. This
is shown, in part, by the fact that the public’s deviation from scientific consensus is often
treated as evidence of the widespread lack of scientific literacy. But supposing that we
accept this desideratum as important, recent public opinion research should give us fur-
ther pause concerning the worth (or sufficiency) of the foundational conception of scien-
tific literacy discussed above. In a series of papers, Dan Kahan and colleagues have shown
7 How to define this last idea with more precision is a difficult question; our thoughts here turn initially to
work by Kitcher (2001, 2011) on “well-ordered science”, though we have no particular account to offer.
8 This is not to deny that there will be some occasions on which an understanding of basic scientific facts
and methods will allow laypeople to reject some theories as ill-defended or pseudoscientific.
M.H.Slater et al.
1 3
that higher levels of scientific literacy—understood as comprising basic scientific facts
and methods9—do not correlate with higher levels of deference to scientific authority for
socially controversial subjects: despite expectations, “[a]s respondents’ science-literacy
scores increased, concern with climate change decreased slightly (r = −0.05, P = 0.05)”
(Kahan et al. 2012, 732). Moreover, this effect was greater for those who identify with the
political right; the more “scientifically literate” right-leaners are, the less likely they are to
accept the scientific consensus about the causes and risks of climate change (ibid., 733).
One might understandably object that such results should be regarded as inert with
respect to our promotion of other conceptions of scientific literacy. The present social
context for science is politically and culturally charged in a variety of ways. As has been
carefully documented by historians and social scientists, a great deal of effort has been
expended in recent decades by individuals and organized groups (many industry-funded)
to cloud the science on important issues or undercut the trustworthiness of the scientific
community at large (Diethelm and McKee 2009; Torcello 2016; Smith and Leiserow-
itz 2012; Brulle 2014; McCright et al. 2016; Dunlap and McCright 2011, 2010; Oreskes
and Conway 2010). In contemporary society, such efforts are facilitated by what might be
euphemistically dubbed “the democratization of information flow” via social media, which
enables the establishment of political/ideological “echo-chambers” (Takahashi and Tandoc
2016; Jasny etal. 2015; Carmichael etal. 2017; Bernauer 2013; Leiserowitz etal. 2013).
These phenomena have been thoroughly explored in the case of climate science where,
despite a near perfect consensus among climate scientists (and the scientific community at
large), major portions of the public remain skeptical (Leiserowitz et al. 2016).
Thus, as Anderson suggests, perhaps what is missing from our conceptions of scientific
literacy is not so much ability as inclination; she writes: “While citizens have the capacity
to reliably judge trustworthiness, many Americans appear ill-disposed to do so” (Anderson
2011, 145); perhaps, then, we should focus on changing “the social conditions” that influ-
ence the public’s attitudes about science.10 We shall suggest in the next section, however,
that a somewhat different approach to NoS-style conceptions of scientific literacy may be
relevant to laypersons’ trust of the scientific community.11
4 Understanding theSocial Structure ofScience
Anderson argues that many of the members of the lay public have the capacity to judge
the trustworthiness of scientific authorities, including both individual scientists and the
scientific community as a whole: “second-order judgments [of expert trustworthiness]
address whose testimony regarding scientific matters should be trusted, and whether
the trustworthy agree on the issue in question” (Anderson 2011, 145). This involves
making three judgments about authorities’ (1) expertise (or competence), (2) honesty,
9 Kahan calls his measurement scale “Ordinary Science Intelligence” (OSI), which incorporates ques-
tions from the National Science Board’s 2010 Science and Engineering Indicators as well as some common
numeracy and cognitive reflection items (see Kahan 2017, for discussion and validation).
10 This presumes, of course, a separation between the epistemic and affective dimensions of scientific lit-
eracy that may in real life be quite a bit more blurry. We take no position in this context on how we should
respond to this blurriness.
11 In this effort, space constraints force us to focus on the content pillar of our conception; there is more to
say about both the agent and relation pillars that must wait for another occasion.
Understanding andTrusting Science
1 3
and (3) epistemic responsibility (ibid., 145–146). Anderson’s framework on expert trust
thus dovetails closely with work in epistemology on testimony—which, as she and oth-
ers point out, is ubiquitous in our epistemic lives (Hardwig 1985; Coady 1992; Lipton
1998; Lackey 2008). She amply demonstrates that the resources for making such judg-
ments are available to anyone who can conduct a web search. Again, it comes down
to the social–cultural conditions—and resultant attitudinal dispositions—that incline
one to expend the effort to identify appropriate authorities and instances of consensus
(Almassi 2012).
We think that there is more to be said on the epistemic side, however. Focus on the
question of one’s trust of the scientific community (in cases where there is a strong
consensus), rather than on individual scientists. It is one thing to be able to recognize
cases of scientific consensus. It is quite another to recognize the epistemic significance
of such consensus. Why is it that this consensus should interest us? What kind of con-
sensus is important (Odenbaugh 2012; Miller 2013; Keren 2018)? What is it about the
scientific community that makes this so? We submit that these are matters on which the
public’s understanding of science could be improved. The suggestion is that improving
them may result in greater willingness to seek and defer to scientific consensus where it
exists.
It is clear enough in individual cases of testimony that knowing things about how a
potential source thinks, what their motivations may well be, and so on, can be relevant to
judgments about the questions that Anderson identifies as important. You will probably be
more inclined to trust a source about the quality of a particular car model if you know that
they would not benefit from your purchasing the car in question. You can determine this, of
course, by finding out whether they are employed by the relevant company or work as an
agent for that company in some other way (e.g., as an advertiser). But consider that seeing
these facts as relevant proxies for the question of influence (and thus honesty) depends on
having a certain amount of background knowledge concerning how individuals might ben-
efit from your purchasing decisions. We sideline such knowledge in talking about this sort
of case because it is so obvious and so clearly shared.
The relevant background knowledge in the context of science is considerably less obvi-
ous and not widely shared—especially when it comes to the question of scientific con-
sensus, but also in other aspects of judging scientific authority. Consider Anderson’s four
signs of concern for judging epistemic responsibility: “Evasion of peer-review”, “Dialogic
irrationality”, “Advancing crackpot theories”, and “Voluntarily associating with crackpots”
(Anderson 2011, 147–148). Our previous research (and anecdotal experience) suggests that
the concept of peer-review is rarely understood (Huxster et al., unpublished manuscript);
most members of the lay public, we suspect, do not know that such a process exists (let
alone understand the role it plays in the scientific enterprise or avoid common misconceptions
about it if they do—e.g., that it is, for the most part, blind and unpaid). Moreover,
when it comes to the avoidance of “crackpot theories”, many members of the public har-
bor a model of the scientific enterprise that regards such labels as ad hominems. This was
expressed in a much-quoted passage from Michael Crichton’s 2003 speech at Caltech:
Let’s be clear: the work of science has nothing whatever to do with consensus. Con-
sensus is the business of politics. Science, on the contrary, requires only one investigator
who happens to be right, which means that he or she has results that are verifiable
by reference to the real world. In science consensus is irrelevant. What is relevant
is reproducible results. The greatest scientists in history are great precisely because
they broke with the consensus.12
Many members of the lay public seem to share something like this individualistic model
of science—stemming, one can’t help but think, from the celebration of individual “Great
Men of Science” such as Galileo, Darwin, and Einstein who, it is believed, represented
lone voices against an overly dogmatic community of science.
Historians and philosophers of science of course understand that this is a vast over-
simplification and that science has changed dramatically in the intervening decades (or
centuries). The social structure of science is complex, nuanced, and still contested by
researchers. This may be why it is an aspect of scientific literacy that is both lacking in
the lay public and not well represented in measurement instruments for scientific literacy
or our thinking about the NoS.13 But while many of the details may yet be controversial,
we believe that the broad strokes of such an understanding—for example, of the sense in
which scientists are simultaneously competing and collaborating with one another (Kitcher
1990; Kuhn 1962; Strevens 2003; Oreskes and Conway 2010, 272–273)—are both well in
hand and conceptually important to the recognition of the epistemic significance of scien-
tific consensus and, in general, the recognition of epistemic responsibility.14
Why so? A fuller argument must wait for another occasion, but one strand of justifica-
tion is the following. First, we need to recognize that the dominant lay model of science is
individualistic. This has some immediate consequences for the public’s trust of scientists.
Regarding a source as epistemically trustworthy involves seeing that source as being (a)
in a position to know and (b) being apt to honestly represent the information in question
(Lipton 1998). However, recent research has shown that individual scientists are generally
judged by the public as being “competent but cold” (Fiske and Dupree 2014, 13593)—
that is, they are generally seen as in a position to know but not necessarily to be trusted.
After all, individual scientists have been guilty of misconduct of various forms; they are
sometimes biased or ‘pig-headed’; they are, after all, human. But as one moves from an
individualistic model of science to a communitarian model, one can begin to appreciate
how certain forms of consensus (and consensus-forming processes) ameliorate the honesty
question (b) above. Less important than trusting scientists as individual testifiers is defer-
ring to the scientific community as a whole—in a sense, treating the group as a source of
testimony (Odenbaugh 2012). As Roberts and Wood aptly put it: “Kuhn alerts [us] that
much that is salutary in the intellectual life is guided and channeled by institutions and
social pressures that transcend the character of individuals, correcting for vice and sup-
porting virtues. Aberrations like David Irving and Henry Casaubon are often forestalled
or made less pernicious by processes of peer review” (Roberts and Wood 2007, 201–202).
12 A stable and authoritative URL for a transcript of this speech seems to be difficult to come by—one
transcript is available at http://stephenschneider.stanford.edu/Publications/PDF_Papers/Crichton2003.pdf—but
readers may search for "Aliens Cause Global Warming".
13 Lombrozo etal.’s (2008) instrument for assessing understanding of the nature of science includes two
items relevant to the scientific community: “The scientific community is essential to the process and pro-
gress of science,” and “Unlike many other professions, science is almost always a solitary endeavor” (Lom-
brozo etal., 292).
14 In this sense, we submit, our collective understanding of the social structure of science resembles our
understanding of many scientific issues—anthropogenic climate change, for example—on which the general
core of the theory is at this point almost beyond doubt while significant uncertainties remain about some of
the finer details.
Understanding andTrusting Science
1 3
But it is not only peer-review and the various vetting processes that are significant in
Kuhn’s view. It is the fact that, as a loose assemblage of various communities, scientists are
engaged in a collaborative and competitive enterprise. This is part of the reason why sci-
ence is seen by insiders as “self-correcting”: bad actors are excommunicated, crackpot or
badly supported theories are ignored, fruitful theories are pursued until such point as their
anomalies encourage certain practitioners to forge out on their own to explore new frame-
works (Jamieson 2018). When this haphazard assemblage of more or less independent
agents speaks with one voice, prima facie, we ought to listen. Supposing that one accepts
that the public’s prima facie trust of the scientific community (when speaking with a con-
sensus voice) is often warranted and an important dispositional goal for citizens of tech-
nologically developed democracies, we submit that a conception of scientific literacy that
enables and encourages such a disposition is an attractive candidate for at least a core com-
ponent of Civic Scientific Literacy. We hypothesize that a nuanced understanding of sci-
ence as a social enterprise—what may be called the “social structure of science” (SSS)—
may be expected to increase this disposition and so argue that further work to (a) fill out
the content of the SSS and (b) test this hypothesis empirically is warranted.
We close this section by noting two further points about the SSS conception of Civic
Scientific Literacy. First, by focusing on the social-epistemic background for the signifi-
cance of certain forms of scientific consensus, we are effectively side-stepping some of
the more difficult questions about how expertise should be detected, particularly on con-
tested issues (Goldman 2001; Pettit 2006; Brossard and Nisbet 2006; Almassi 2012; Fiske
2012). It is compatible with our approach that suspension of belief is the right epistemic
attitude to take in cases where experts appear to disagree (Slater et al. 2018). Second, in
support of the empirical plausibility that the SSS conception would contribute to broader
social goals of rational policymaking, science communication researchers have proposed
that consensus messaging serves as a "gateway belief," even for polarizing science (van der
Linden etal. 2014, 2015). But it is worth emphasizing that in our conception of the SSS
approach, understanding is the key epistemic relation at issue: merely knowing some iso-
lated facts about the way scientists work seems unlikely to form a sufficiently robust and
flexible background against which the epistemic significance of scientific consensus—and
how to detect it—can emerge.
5 Next Steps
Our efforts in this paper have obviously been preliminary; more work is needed. But let
us sum up before offering some parting suggestions for where we can go next. First, we
offered a general framework for thinking about different conceptions of scientific literacy,
arguing that greater attention to the epistemic properties and the correlative capacities
stemming from a given conception is needed. We also argued that a plausible desidera-
tum—ability and inclination to identify and trust robust consensus messages from sci-
ence—is not credibly met by popular conceptions. Moreover, other desiderata associated
with such conceptions are probably out of reach. Finally, we proposed that a greater focus
on the social structure of science in a conception of Civic Scientific Literacy would do bet-
ter to meet an important desideratum and is thus something that ought to be promoted.
This hypothesis stands in need of further specification and empirical testing: is it indeed
the case that members of the lay public with a good grasp of the social structure of science
will tend tobe more willing to trust consensus messages from the scientific community?
M.H.Slater et al.
1 3
Will such an inclination translate to ideologically-entangled issues such as climate change
or the safety of childhood vaccines? How much of the lay public should be expected to
possess SSS-literacy?15 We are currently pursuing this research, but we hope that others—
particularly HPS and STS researchers—will also contribute to this broad effort. We conclude
by identifying what we take to be several fruitful avenues through which such scholars might
do so.
First, they can contribute to the effort to characterize a general, consensus picture of
what aspects of the social structure of science are relevant to the public’s treatment of the
scientific community as a source of epistemic authority. This includes both descriptive and
normative aspects and requires addressing a highly non-trivial question of the appropriate
level of granularity and idealization for how this picture might be described in the context
of science education and communication.
Second, and relatedly, STS scholars can contribute to efforts to develop better measure-
ment instruments and frameworks for studying the public’s understanding of and trust of
science and scientific institutions.
Third, epistemologists can provide insight into both the epistemic relation connecting
the public to a range of scientific content and how the SSS and other aspects of
scientific literacy might be successfully communicated—e.g., through education or public
messaging and engagement initiatives. If our suspicion that a robust conception of understanding
is relevant to scientific literacy is correct, we will need better models of how understanding
(in addition to knowledge) may be transmitted (or produced) by testimony or other means.
Finally (but not exhaustively), philosophers can contribute to the project Anderson iden-
tified of changing the social conditions under which scientific issues become entangled and
recognition of scientific authority becomes problematic.
References
Almassi, B. (2012). Climate change, epistemic trust, and expert trustworthiness. Ethics & the Environment,
17(2), 29–49.
Anderson, E. (2011). Democracy, public policy, and lay assessments of scientific testimony. Episteme, 8(2).
Bernauer, T. (2013). Climate change politics. Annual Review of Political Science, 16(1), 421–448.
Bird, A. (2010). Social knowing: The social sense of ‘scientific knowledge’. Philosophical Perspectives,
24(1), 23–56.
Bodmer, W. (1985). The public understanding of science: Report of a Royal Society ad hoc group endorsed
by the Council of the Royal Society. London: The Royal Society. http://royalsociety.org/uploadedFiles/
Royal_Society_Content/policy/publications/1985/10700.pdf.
Boyd, K. (2017). Testifying understanding. Episteme, 14(1), 103–127.
Brossard, D., & Nisbet, M. C. (2006). Deference to scientific authority among a low information public:
Understanding U.S. opinion on agricultural biotechnology. International Journal of Public Opinion
Research, 19(1), 24–52.
Brulle, R. J. (2014). Institutionalizing delay: Foundation funding and the creation of U.S. climate change
counter-movement organizations. Climatic Change, 122(4), 681–694.
Carmichael, J. T., Brulle, R. J., & Huxster, J. K. (2017). The great divide: Understanding the role of media
and other drivers of the partisan divide in public concern over climate change in the USA, 2001–2014.
Climatic Change, 141(4), 599–612. https://doi.org/10.1007/s10584-017-1908-1.
Coady, C. A. J. (1992). Testimony. Oxford: Oxford University Press.
15 For example, might we reasonably limit this expectation to policy leaders and citizens attentive to sci-
ence policy (cf. Miller and Inglehart 2012)?
Understanding andTrusting Science
1 3
DeBoer, G. E. (2000). Scientific literacy: Another look at its historical and contemporary meanings and its
relationship to science education reform. Journal of Research in Science Teaching, 37(6), 582–601.
Diethelm, P., & McKee, M. (2009). Denialism: What is it and how should scientists respond? European
Journal of Public Health, 19(1), 2–4.
Dunlap, R. E., & McCright, A. M. (2010). Climate change denial: Sources, actors and strategies. In C.
Lever-Tracy (Ed.), Routledge handbook of climate change and society. London: Routledge.
Dunlap, R. E., & McCright, A. M. (2011). Organized climate change denial. In J. S. Dryzek, R. B. Nor-
gaard, & D. Schlosberg (Eds.), The Oxford handbook of climate change and society. Oxford: Oxford
University Press.
Elgin, C. Z. (2006). From knowledge to understanding. In S. Hetherington (Ed.), Epistemology futures.
Oxford: Oxford University Press.
Elgin, C. Z. (2007). Understanding and the facts. Philosophical Studies, 132(1), 33–42.
Fiske, S. T. (2012). Managing ambivalent prejudices: Smart-but-cold and warm-but-dumb stereotypes. The
Annals of the American Academy of Political and Social Science, 639(1), 33–48.
Fiske, S. T., & Dupree, C. (2014). Gaining trust as well as respect in communicating to motivated audiences
about science topics. Proceedings of the National Academy of Sciences, 111(4), 13593–13597.
Goldman, A. (2001). Experts: Which ones should you trust? Philosophy and Phenomenological Research,
63(1), 85–110.
Gould, S. J. (1999). Take another look. Science, 286(5441), 899.
Grimm, S. (2006). Is understanding a species of knowledge? British Journal for the Philosophy of Science,
57(3), 515–535.
Grimm, S. (2012). The value of understanding. Philosophy Compass, 7(2), 103–117.
Hardwig, J. (1985). Epistemic dependence. The Journal of Philosophy, 82(7), 335–349.
Hills, A. (2009). Moral testimony and moral epistemology. Ethics, 120(1), 94–127.
Huxster, J. K., Landrum, A. R., & Slater, M. H. (unpublished manuscript). Understanding the scientific
enterprise: Development and validation of a novel scientific literacy measure (SSSI).
Huxster, J. K., Slater, M. H., Leddington, J., et al. (2018). Understanding "understanding" in Public
Understanding of Science. Public Understanding of Science, 27(7), 756–771. https://doi.org/10.1177/0963662517735429.
Jamieson, K. H. (2018). Crisis or self-correction: Rethinking media narratives about the well-being of
science. Proceedings of the National Academy of Sciences, 115(11), 2620–2627. https://doi.org/10.1073/pnas.1708276114.
Jasanoff, S. (2014). A mirror for science. Public Understanding of Science, 23(1), 21–26.
Jasny, L., Waggle, J., & Fisher, D. R. (2015). An empirical examination of echo chambers in US climate
policy networks. Nature Climate Change, 5, 782–786.
Kahan, D. M. (2015). What is the “science of science communication”? Journal of Science Communication,
14(3), 1–12.
Kahan, D. M. (2017). 'Ordinary science intelligence': A science-comprehension measure for study of risk
and science communication, with notes on evolution and climate change. Journal of Risk Research,
20(8), 995–1016. https://doi.org/10.1080/13669877.2016.1148067.
Kahan, D. M., Wittlin, M., Braman, D., et al. (2012). The polarizing impact of science literacy and
numeracy on perceived climate change risks. Nature Climate Change, 2, 732–735.
Keren, A. (2007). Epistemic authority, testimony and the transmission of knowledge. Episteme, 4(3).
Keren, A. (2014). Trust and belief: A preemptive reasons account. Synthese, 191(12), 2593–2615.
Keren, A. (2018). The public understanding of what? Laypersons’ epistemic needs, the division of cognitive
labor, and the demarcation of science. Philosophy of Science, 85(5), 781–792.
Kitcher, P. (1990). The division of cognitive labor. Journal of Philosophy, 87(1), 5–22.
Kitcher, P. (2001). Science, truth, and democracy. Oxford: Oxford University Press.
Kitcher, P. (2011). Science in a democratic society. Amherst, NY: Prometheus Press.
Kuhn, T. (1962). The structure of scientific revolutions. Chicago: University of Chicago Press.
Kvanvig, J. (2003). The value of knowledge and the pursuit of understanding. Cambridge: Cambridge Uni-
versity Press.
Lackey, J. (2008). Learning from words. Oxford: Oxford University Press.
Laugksch, R. C. (2000). Scientific literacy: A conceptual overview. Science Education, 84(1), 71–94.
Leiserowitz, A., Maibach, E., Roser-Renouf, C., et al. (2016). Climate change in the American mind: March,
2016. Yale University and George Mason University. New Haven, CT: Yale Program on Climate
Change Communication.
Leiserowitz, A. A., Maibach, E. W., Roser-Renouf, C., et al. (2013). Climategate, public opinion, and the
loss of trust. American Behavioral Scientist, 57(6), 818–837.
M.H.Slater et al.
1 3
Lipton, P. (1998). The epistemology of testimony. Studies in the History and Philosophy of Science,
29(1), 1–31.
Lombrozo, T., Thanukos, A., & Weisberg, M. (2008). The importance of understanding the nature of science
for accepting evolution. Evolution: Education and Outreach, 1(3), 290–298.
Ludwig, D. (2014). Extended cognition in science communication. Public Understanding of Science,
23(8), 982–995.
McCright, A. M., Charters, M., Dentzman, K., et al. (2016). Examining the effectiveness of climate
change frames in the face of a climate change denial counter-frame. Topics in Cognitive Science,
8(1), 76–97.
McCright, A. M., & Dunlap, R. E. (2011). The politicization of climate change and polarization in
the American public's views of global warming, 2001–2010. The Sociological Quarterly, 52(2).
Miller, J. D. (1983). Scientific literacy: A conceptual and empirical review. Daedalus, 112(2), 29–48.
Miller, J. D. (2004). Public understanding of, and attitudes toward, scientific research: What we know
and what we need to know. Public Understanding of Science, 13(3), 273–294.
Miller, J. D. (2010a). The conceptualization and measurement of civic scientific literacy for the twenty-first
century. In J. Meinwald & J. G. Hildebrand (Eds.), Science and the educated American: A core
component of liberal education (pp. 241–255). Washington, D.C.: American Academy of Arts and Sciences.
Miller, J. D. (2010b). Adult science learning in the internet era. Curator, 53(2), 191–208.
Miller, B. (2013). When is consensus knowledge based? Distinguishing shared knowledge from mere
agreement. Synthese, 190(7), 1293–1316.
Miller, J. D., & Inglehart, R. (2012). Public attitudes toward science and technology. In W. S. Bainbridge
(Ed.), Leadership in science and technology: A reference handbook (pp. 298–306). Thousand Oaks:
SAGE Publications Inc.
Norris, S. P., & Phillips, L. M. (2003). How literacy in its fundamental sense is central to scientific lit-
eracy. Science Education, 87(2), 224–240.
Norris, S. P., & Phillips, L. M. (2009). Scientific literacy. In D. R. Olson & N. Torrance (Eds.), Handbook
of research on literacy (pp. 271–285). Cambridge: Cambridge University Press.
NRC, The National Research Council. (2012). A framework for K-12 science education: Practices,
crosscutting concepts, and core ideas. Washington, DC: The National Academies Press.
Odenbaugh, J. (2012). Climate, consensus, and contrarians. In W. P. Kabasenche, M. O’Rourke, & M.
H. Slater (Eds.), The environment: Philosophy, science, and ethics (pp. 137–150). Cambridge, MA:
MIT Press.
OECD. (2007). PISA 2006: Science competencies for tomorrow’s world. Vol. 1: Analysis. Paris: Organi-
sation for Economic Co-operation and Development.
Oreskes, N., & Conway, E. M. (2010). Merchants of doubt. New York: Bloomsbury Press.
Pettit, P. (2006). When to defer to majority testimony—and when not. Analysis, 66(3), 179–187.
PISA. (2012). Results from PISA 2012: United States. Paris: Organisation for Economic Co-operation
and Development.
Roberts, R. C., & Wood, W. J. (2007). Intellectual virtues: An essay in regulative epistemology.
Oxford: Oxford University Press.
Shamos, M. H. (1995). The myth of scientific literacy. New Brunswick: Rutgers University Press.
Shen, B. S. P. (1975). Science literacy. American Scientist, 63(3), 265–268.
Slater, M. H., Huxster, J. K., Bresticker, J. E., et al. (2018). Denialism as applied skepticism. Erkenntnis.
https://doi.org/10.1007/s10670-018-0054-0.
Smith, N., & Leiserowitz, A. (2012). The rise of global warming skepticism: Exploring affective image
associations in the United States over time. Risk Analysis, 32(6), 1021–1032.
Snow, C. P. (1959). The two cultures and the scientific revolution. New York: Cambridge University Press.
Snow, C. E., & Dibner, K. A. (Eds.). (2016). Science literacy: Concepts, contexts, and consequences.
Washington, D.C.: The National Academies Press.
Strevens, M. (2003). The role of the priority rule in science. The Journal of Philosophy, 100(2), 55–79.
Strevens, M. (2008). Depth. Cambridge: Harvard University Press.
Takahashi, B., & Tandoc, E. C. (2016). Media sources, credibility, and perceptions of science: Learning
about how people learn about science. Public Understanding of Science, 25(6), 674–690.
Thomas, G., & Durant, J. R. (1987). Why should we promote the public understanding of science? Sci-
entific Literacy Papers, 1, 1–14.
Torcello, L. (2016). The ethics of belief, cognition, and climate change pseudoskepticism: Implications
for public discourse. Topics in Cognitive Science, 8(1), 19–48.
Understanding andTrusting Science
1 3
van der Linden, S. L., Leiserowitz, A. A., Feinberg, G. D., et al. (2014). How to communicate the scientific
consensus on climate change: Plain facts, pie charts or metaphors? Climatic Change, 126(1–2).
van der Linden, S. L., Leiserowitz, A. A., Feinberg, G. D., et al. (2015). The scientific consensus on climate
change as a gateway belief: Experimental evidence. PLoS ONE, 10(2), e0118489.
Wilkenfeld, D. A., Plunkett, D., & Lombrozo, T. (2016). Depth and deference: When and why we attribute
understanding. Philosophical Studies, 173(2), 373–393.
Zagzebski, L. T. (2001). Recovering understanding. In M. Steup (Ed.), Knowledge, truth, and duty: Essays
on epistemic justification, responsibility, and virtue (pp. 235–251). Oxford: Oxford University Press.
Zagzebski, L. T. (2012). Epistemic authority: A theory of trust, authority, and autonomy in belief. Oxford:
Oxford University Press.
Publisher’s Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and
institutional affiliations.
... Pardo & Calvo, 2002;Weingart & Guenther, 2016), which might be one reason a decline in trust in science has yet to be detected. Therefore, Slater et al. (2019) have called for better instruments and frameworks on public (dis)trust in science. ...
... This article is a reaction to researchers calling for better instruments and frameworks on public trust in science (e.g. Slater et al., 2019). We introduced a theoretical systematisation of (dis)trust in science as a multidimensional concept and analysed to what extent survey items and open-ended questions can be integrated into this model. ...
Over the past several years, scholars have debated the public’s (dis)trust in science. Since the ‘science and society’ paradigm of science communication has defined the crisis of trust between science and the public as a major concern, this article is interested in how public (dis)trust in science is measured in representative surveys of public perceptions of science and technology. The goal is to systematise survey measures using a theoretical model of (dis)trust in science as a multidimensional variable that is relevant to the relationship between the public, (intermediaries) and science. A systematic review of items and open-ended questions (n = 736) used in 20 representative surveys from various countries was conducted. The results show that surveys rarely measure distrust in science, and instead focus on trust in science – mainly at the macro-level – rather than trust in scientists (micro-level) or scientific organisations (meso-level). Benevolence is the dimension of trust considered most frequently; the media is predominantly included as a general type of contact with science without a direct link to (dis)trust. Hence, representative surveys cover a number of different aspects of public (dis)trust in science. However, there is room for improvement. Thus, this paper concludes with recommendations for future measures.
... Arguably, by making it more likely that rigorously vetted scientific theories will yield more accurate (or anyway empirically adequate) representations of the world, such institutions and norms (when well-functioning) are an important part of the story of how scientific consensus derives its epistemic significance (Longino, 1990;B. Miller, 2013;Oreskes, 2019; Slater et al., 2019). While it is not much of a jump from here to the proposition that a grasp of the workings of the scientific enterprise would contribute to one's trust of science, we are not prepared to assert this connection here; it is, after all, an empirical hypothesis that-so far as we are aware-has not been tested. ...
... We also noted some important ways in which our respective measures diverge. A key difference involves our conceptual frameworks and instrument content: while Bauer et al. focus on teamwork as the main social aspect of science, our instrument also addresses the existence of competition and disagreement in science, which we believe are relevant to the public's trust of consensus science (Slater et al., 2019). Our measure also goes into greater depth on a number of important social-institutional concepts and processes within science (such as education, peer-discussion/criticism, and the epistemic standing of publications); Bauer et al.'s instrument is more focused on cross-national collaboration and governmental funding schemes than our instrument is. ...
Full-text available
Significant gaps remain between public opinion and the scientific consensus on many issues. We present the results of three studies (N = 722 in total) for the development and testing of a novel instrument to measure a largely unmeasured aspect of scientific literacy: the enterprise of science, particularly in the context of its social structures. We posit that this understanding of the scientific enterprise is an important source for the public’s trust in science. Our results indicate that the Social Enterprise of Science Index (SESI) is a reliable and valid instrument that correlates positively with trust in science (r = .256, p < .001), and level of education (r = .245, p < .001). We also develop and validate a six question short version of the SESI for ease of use in longer surveys.
... In what follows, we set these questions aside and assume that producing a public with a more nuanced grasp of the ways in which science functions as a social enterprise (including, inter alia, the normal means of regulating its activities, forming research agendas, vetting and solidifying its results, policing and educating its members, and so on) is a valuable social goal. If science is deserving of our trust in certain circumstances, we believe that it will only be so in the context of an understanding of science works as a social institution (Slater et al., 2019). The broad question of the paper, then, is to what extent the distribution of different types of articles about science can be expected to contribute to such an understanding. ...
... Indeed, as Boykoff and Boykoff (2007) note, the journalistic norms of personalization, dramatization, and novelty (and the concomitant "taboo" against repetition, sensu Gans, 1979) generally militate against taking the "long view" on science -the view that captures the practices which generate robust scientific consensus and thus durable scientific knowledge -in most journalistic contexts. In short, we might say that journalistic norms and practices encourage a focus on discrete events whereas the epistemic significance of science for the broader public's use arguably stems from a longerrun social process (Longino, 1990;Oreskes, 2019;Slater et al., 2019). 17 Beyond representing a missed opportunity, we believe that this limited portrayal of science poses a risk of further compromising public trust of science. ...
Full-text available
Efforts to cultivate scientific literacy in the public are often aimed at enabling people to make more informed decisions — both in their own lives (e.g., personal health, sustainable practices, &c.) and in the public sphere. Implicit in such efforts is the cultivation of some measure of trust of science. To what extent does science reporting in mainstream newspapers contribute to these goals? Is what is reported likely to improve the public's understanding of science as a process for generating reliable knowledge? What are its likely effects on public trust of science? In this paper, we describe a content analysis of 163 instances of science reporting in three prominent newspapers from three years in the last decade. The dominant focus, we found, was on particular outcomes of cutting-edge science; it was comparatively rare for articles to attend to the methodology or the social–institutional processes by which particular results come about. At best, we argue that this represents a missed opportunity.
... There are many well-documented instances of individuals and groups attempting to discredit scientific information. One prominent case concerns ongoing campaigns aimed at discrediting climate science, which look to "cloud the science on important issues or undercut the trustworthiness of the scientific community at large" (Slater et al., 2019). For instance, Brulle (2014) describes how the "climate change counter movement" involves a number of activities, including "political lobbying, contributions to political candidates, and a large number of communication and media efforts that aim at undermining climate science" (682). ...
Full-text available
A perennial problem in social epistemology is the problem of expert testimony, specifically expert testimony regarding scientific issues: for example, while it is important for me to know information pertaining to anthropogenic climate change, vaccine safety, Covid-19, etc., I may lack the scientific background required to determine whether the information I come across is, in fact, true. Without being able to evaluate the science itself, then, I need to find trustworthy expert testifiers to listen to. A major project in social epistemology has thus become determining what the markers of trustworthiness are that laypersons can appeal to in order to identify and acquire information from expert testifiers. At the same time, the ways in which we acquire scientific information has changed significantly, with much of it nowadays being acquired in online environments. While much has been said about the potential pitfalls of seeking information online (e.g. the prevalence of filter bubbles, echo chambers, and the overall proliferation of “fake news”), little has been said about how the nature of seeking information online should make us think about the problem of expert testimony. Indeed, it seems to be an underlying assumption that good markers of trustworthiness apply equally well when seeking information from expert testifiers in online and offline environments alike, and that the new challenges and opportunities presented by online environments merely affects the methods by which we can acquire evidence of said trustworthiness. Here I argue that in making this assumption one risks failing to account for how unique features of the ways in which we acquire information online affect how we evaluate the trustworthiness of experts. 
Specifically, I argue for two main claims: first, that the nature of information-seeking online is such that the extent to which information is susceptible to manipulation is a dominant marker of trustworthiness; second, as a result, one will be more likely to seek out a particular kind of expert testifier in online environments, what I call a cooperative as opposed to preemptive expert. The result is that criteria for expert trustworthiness may look significantly different when acquiring information online as opposed to offline.
... But, in truth, there has not been much thoughtful or substantive discussion of what the goals of science education should be beyond the platitudes that fill the opening pages of every new statement about its importance. It seems that establishing an appropriate level of trust, which has always been central to science (Oreskes, 2019; Pennock, 2019; Shapin, 2004), should be the overriding goal and that the first step towards that should be in understanding the source of knowledge about the world (see, e.g., Slater, Huxster, & Bresticker, 2019). ...
This piece is about the varied arguments for teaching science over the years. The argument in the mid-1800s, for example, centered on the utilitarian value of scientific knowledge for industry and everyday affairs, while in the 1950s the study of science was viewed as important for building public support for research in the United States. Towards the end of the nineteenth century there was a period of time when the predominant argument for science instruction rested on a moral purpose—the building of character and personal virtue. For early proponents of science education, moral uplift came from student engagement in the process of science—in coming to face the facts of the natural world that were the basis for the discovery of truth. This essay explores whether such goals for teaching science might once again have a place at a time when scientific expertise and knowledge are increasingly being minimized or dismissed. I think that it does.
A powerful symbiotic relationship exists between photography and the field of environmental science: the two coexist in such a way that progress in one inherently enables progress in the other. The purpose of this thesis is to investigate and illuminate this specific link. From the earliest cameras, photography was able to capture small details that the eye could not see, giving scientists the opportunity to capture up-close images of cells, viruses, certain species, and more. As the popularity of caring for the environment increased, the technologies of science and photography grew alongside one another. The documentation of climate change, the impacts of pollution, and the damage humans were causing pushed mass support toward environmental science, and public awareness provided strong grounds for governmental change. The consequences of human actions on the climate surround us every day, but they can be hard to see on a personal level. Photography captures this problem and forces us to acknowledge it; the average person can no longer ignore it because the documentation is right there. Scientists use photography to capture what they are seeing and support their theories. In environmental science alone, photography has provided the field with the ability to visualize detrimental changes in the world, discover new species, and monitor environments. This undeniable link deserves further investigation to better understand how it can be harnessed to bring about change.
While people’s views about science are related to identity factors (e.g. political orientation) and to knowledge of scientific theories, knowledge about how science works in general also plays an important role. To test this claim, we administered two detailed assessments about the practices of science to a demographically representative sample of the US public (N = 1500), along with questions about the acceptance of evolution, climate change, and vaccines. Participants’ political and religious views predicted their acceptance of scientific claims, as in prior work. But a greater knowledge of the nature of science and a more mature view of how to mitigate scientific disagreements each related positively to acceptance. Importantly, the positive effect of scientific thinking on acceptance held regardless of participants’ political ideology or religiosity. Increased attention to developing people’s knowledge of how science works could thus help to combat resistance to scientific claims across the political and religious spectrum.
In recent years, there has been considerable interest in studying and using scientific consensus messaging strategies to influence public opinion. Researchers disagree, sometimes vociferously, about how to examine the potential influence of consensus messaging, debating one another publicly and privately. In this essay, we take a step back and focus on some of the important questions that scholars might consider when researching scientific consensus messaging. Hopefully, reflecting on these questions will help researchers better understand the reasons for the different points of debate and improve the work moving forward.
The scientific community, we hold, often provides society with knowledge—that the HIV virus causes AIDS, that anthropogenic climate change is underway, that the MMR vaccine is safe. Some deny that we have this knowledge, however, and work to undermine it in others. It has been common (but not uncontroversial) to refer to such agents as “denialists”. At first glance, then, denialism appears to be a form of skepticism. But while we know that various denialist strategies for suppressing belief are generally effective, little is known about which strategies are most effective; identifying them is an important first step toward their remediation. This paper leverages the approximate comparison to various forms of philosophical skepticism to design an experimental test of the efficacy of four broad strategies of denial at suppressing belief in specific scientific claims. Our results suggest that assertive strategies are more effective at suppressing belief than questioning strategies.
This essay seeks to explain what the “science of science communication” is by doing it. Surveying studies of cultural cognition and related dynamics, it demonstrates how the form of disciplined observation, measurement, and inference distinctive of scientific inquiry can be used to test rival hypotheses on the nature of persistent public conflict over societal risks; indeed, it argues that satisfactory insight into this phenomenon can be achieved only by these means, as opposed to the ad hoc story-telling dominant in popular and even some forms of scholarly discourse. Synthesizing the evidence, the essay proposes that conflict over what is known by science arises from the very conditions of individual freedom and cultural pluralism that make liberal democratic societies distinctively congenial to science. This tension, however, is not an “inherent contradiction”; it is a problem to be solved — by the science of science communication understood as a “new political science” for perfecting enlightened self-government.
This report is based on findings from a nationally representative survey—Climate Change in the American Mind—conducted by the Yale Program on Climate Change Communication and the George Mason University Center for Climate Change Communication. Interview dates: March 18–31, 2016.
Recent scholarship has identified a large and growing divide on how Republicans and Democrats view the issue of climate change. A number of these studies have suggested that this polarization is a product of systematic efforts to spread doubt about the reality of climate change through the media in general and conservative media in particular. However, research to date has largely relied on speculation about such a relationship rather than empirical evidence. We improve on existing research by conducting an empirical analysis of the factors affecting national-level, quarterly shifts in public concern about climate change between January 2001 and December 2014. Our analysis focuses on the potential role played by four factors that should account for changes in levels of concern regarding climate change: (1) media coverage, (2) extreme weather, (3) issuance of major scientific reports, and (4) changes in economic activity and foreign conflict. Some results suggest that partisan media influence beliefs in ways expected by communication scholars who describe “echo chamber” effects and “boomerang” effects. Among other supporting evidence, we find that partisan media not only strengthen the views of like-minded audiences but also produce boomerang effects: when Republicans are presented with opposing frames about climate change from liberal media, they appear to reject the messages such that they are less concerned about the issue. Findings also demonstrate that the dissemination of science increases concern about climate change among Democrats but has no influence on Republicans. Finally, extreme weather does not increase concern among Democrats or Republicans. Implications for future research are discussed.
After documenting the existence and exploring some implications of three alternative news narratives about science and its challenges, this essay outlines ways in which those who communicate science can more accurately convey its investigatory process, self-correcting norms, and remedial actions, without in the process legitimizing an unwarranted “science is broken/in crisis” narrative. The three storylines are: (i) quest discovery, which features scientists producing knowledge through an honorable journey; (ii) counterfeit quest discovery, which centers on an individual or group of scientists producing a spurious finding through a dishonorable one; and (iii) a systemic problem structure, which suggests that some of the practices that protect science are broken, or worse, that science is no longer self-correcting or in crisis.
What must laypersons understand about science to allow them to make sound decisions on science-related issues? And what is the role of philosophers of science in attempts to advance this kind of understanding? Relying on recent developments in social epistemology, this paper argues, first, that scientific education should have the goal not of bringing laypersons' understanding of science closer to that of expert insiders but rather of cultivating the kind of competence characteristic of “competent outsiders” with respect to science (Feinstein 2011); and second, that philosophers of science have an important role to play in attempts to promote this kind of understanding, but that they will have to approach central questions in the field differently for them to successfully fulfill this role.
This study examines the conflation of terms such as “knowledge” and “understanding” in peer-reviewed literature, and tests the hypothesis that little current research clearly distinguishes between importantly distinct epistemic states. Two sets of data are presented from papers published in the journal Public Understanding of Science. In the first set, the digital text analysis tool Voyant is used to analyze all papers published in 2014 for the use of epistemic success terms. In the second set of data, all papers published in Public Understanding of Science from 2010–2015 are systematically analyzed to identify instances in which epistemic states are empirically measured. The results indicate that epistemic success terms are inconsistently defined, and that measurement of understanding, in particular, is rarely achieved in public understanding of science studies. We suggest that more diligent attention to measuring understanding, as opposed to mere knowledge, will increase the efficacy of scientific outreach and communication efforts.
This paper describes the ‘ordinary science intelligence’ scale (OSI_2.0). Designed for use in the empirical study of risk perception and science communication, OSI_2.0 comprises items intended to measure a latent capacity to recognize and make use of valid scientific evidence in everyday decision-making. The derivation of the items, the relationship of them to the knowledge and skills OSI requires, and the psychometric properties of the scale are examined. Evidence of the external validity of OSI_2.0 is also presented. Finally, the utility of OSI_2.0 is briefly illustrated by its use to assess standard survey items on evolution and global warming: when administered to members of a US general population sample, these items are more convincingly viewed as indicators of one or another latent cultural identity than as indicators of science comprehension.