ORIGINAL RESEARCH
Erkenntnis
https://doi.org/10.1007/s10670-022-00569-z

Public Conceptions of Scientific Consensus

Matthew H. Slater1 · Joanna K. Huxster2 · Emily R. Scholfield3

Matthew H. Slater
matthew.slater@gmail.com
Joanna K. Huxster
huxstejk@eckerd.edu
Emily R. Scholfield
emily.scholfield@gmail.com

1 Department of Philosophy, Bucknell University, Lewisburg, PA, USA
2 Environmental Studies, Eckerd College, St. Petersburg, FL, USA
3 Departments of Philosophy and Biology, Bucknell University, Lewisburg, PA, USA

Received: 21 March 2021 / Accepted: 24 April 2022
© The Author(s), under exclusive licence to Springer Nature B.V. 2022
Abstract
Despite decades of concerted efforts to communicate to the public on important scientific issues pertaining to the environment and public health, gaps between public acceptance and the scientific consensus on these issues remain stubborn. One strategy for dealing with this shortcoming has been to focus on the existence of scientific consensus on the relevant matters. Recent science communication research has added support to this general idea, though the interpretation of these studies and their generalizability remains a matter of contention. In this paper, we describe results of a qualitative interview study on different models of scientific consensus and the relationship between such models and trust of science, finding that familiarity with scientific consensus is rarer than might be expected. These results suggest that consensus messaging strategies may not be effective.
1 Introduction
In the epilogue of their influential Merchants of Doubt, Oreskes and Conway offer something of a justification for our trust of science. Some tasks — like buying a home — involve ceding trust to others. The stakes are high. If the officials in question are incompetent (or dishonest), we risk financial ruin. Yet we do it anyway. Why? Their
(short) answer: because we don’t have much of a choice. We don’t have the expertise
or access needed to do the title search, for example. So we “trust someone who is
trained, licensed, and experienced to do it for us” (2010a, 272). Our trust of science,
they suggest, is similarly compelled:
If we don’t trust others or don’t want to relinquish control, we can often do
things for ourselves. We can cook our own food, clean our own homes, do our
own taxes, wash our own cars, even school our own children. But we cannot
do our own science. So it comes to this: we must trust our scientific experts on
matters of science, because there isn’t a workable alternative. (272; our italics)
A cynical reaction is tempting: if the last few decades have revealed anything about
modern society, it’s that many feel all too willing to reject scientists’ conclusions on
all manner of subjects — from the safety of vaccines to the existence and threat of
anthropogenic climate change (ACC). More recently, even the question of whether
simple face masks are safe to wear and effective at reducing the spread of diseases like COVID-19 has been controversial (Funk and Tyson 2020; van Green and Tyson
2020). In this light, one might be tempted to reject their analogy; there is an alterna-
tive to trusting science: not trusting science.1
On the other hand, perhaps the analogy is apt. The force of the injunction to trust
some purported authority turns in part on one’s take on the ‘workability’ of not trust-
ing that authority. One doesn’t have to purchase a home, after all, or trust banks to
hold one’s money. One doesn’t have to avail oneself of life-saving vaccines. Are
these poor financial or health decisions? From the perspective of one who already trusts such entities, the answer may well be 'yes'; they may even regard the alternatives as simply unworkable. But without that trust, it is difficult to make the case for
trust from the negative consequences of not trusting without begging the question
about whether trust is warranted. Given that trusting can make us vulnerable (Baier,
1986; Jones, 1996), some might reasonably judge that it is better to play it safe. In
any case, it scarcely requires much sophisticated empirical study to recognize that
telling people they should trust science because they have no choice is unlikely to be
a productive means of producing such trust.2
What are the better alternatives for cultivating trust in science communication?
This is a (very general3) question that many science advocates and communication
researchers have been trying to answer for decades. The lack of significant success over this long period testifies to the question's difficulty. In recent years, however, a
science communications strategy has emerged with both conceptual–normative and
1 Indeed, there’s nascent evidence that many who we might think of as “anti-science” — “Flat-Earthers,”
for example — are in fact committed to doing their own science (Olshansky, Peaslee, and Landrum
2020). Such dispositions exist on a continuum with other sorts of contrarians (e.g., “Anti-Vaxxers”) doing
their own "research" (including selectively reading the scientific literature in an effort to support their conclusions); for more on the complexities here, see Goldenberg (2021).
2 Indeed, given trends of anti-intellectualism and anti-elitism, it would not be surprising if such a strategy triggered a boomerang effect (Merkley 2020; Zhou 2016).
3 Given the vagaries of epistemic trust, it may well be too general; that won’t matter much for our pur-
poses here.
(apparent) empirical support: to communicate about socially-contentious scientific issues framed as matters of scientific consensus. This basic idea has seen some uptake
in the context of various public outreach projects on climate change4 and many mem-
bers of the news media seem eager to adopt it as a panacea for our science-commu-
nication ills.5
Unfortunately, we believe that there’s reason for caution about consensus-framing
as a general strategy for science communication. While it is possible to articulate a
prima facie compelling normative justification for this strategy — showing why the existence of a scientific consensus (of a certain kind) concerning a claim provides a kind of epistemic warrant for accepting the claim in question — such a justification requires that the messaging takes a form that appears unlikely to be generally effective. This is because (as we will argue) scientific consensus, as a concept, seems not to be broadly understood. We arrive at this conclusion as a result of an interview study that members of this research team undertook in order to gain a more robust sense of the prevalent conceptions of scientific consensus in the American lay public, details of which we present below.6 Given certain normative assumptions about how one should communicate science (or anything) to a wider public, we arrive at a dilemma for consensus-framed science communication: in the prevailing conditions, we should expect it to be either unsupportable or ineffective.
The plan of the paper is as follows. In §2, we will return to the question of the
public’s trust of science and consider the normative justication for accepting propo-
sitions on which there is a scientic consensus of a certain kind. Crucial to what
follows is the distinction between consensus and mere agreement — a distinction
that those practicing and researching science communication have not consistently
drawn. We will argue that only when understood as a consensus (in a certain robust
sense that sets it apart from mere agreement) can consensus-framing properly convey
epistemic warrant.7 In §3, we describe our qualitative study that suggests that, framed
as such, this epistemic warrant will likely be lost on a signicant portion of the lay
public; §4 assembles and discusses our dilemma and considers possible responses.
We conclude in §5 with some tentative thoughts about next steps for both philoso-
phers and science communication researchers.
4 The Consensus Project <http://theconsensusproject.com> for communicating about the existence and
urgency of ACC is a prominent example.
5 We share some exemplary references in footnote 10.
6 By 'the lay public' (and related terms) we do not wish to suggest a belief in a single undifferentiated group; rather, we use the term much as de Melo-Martín and Intemann do, "to refer to all 'publics' or layperson stakeholders who might be affected by the production of knowledge…[without making] the assumption that this is a monolithic group" (2018, 9).
7 Note that we are arguing for such framing as one (among potentially several) necessary conditions — and not a sufficient condition — for the existence of such warrant.
2 Trust of Science
2.1 From Trust of Individual Scientists to Trust of Scientific Consensus
Consider first a simple case: a layperson's trust of an individual scientist to accurately inform them of a particular scientific conclusion relevant to their lives — for example, whether drinking a glass of red wine every night would harm their health in some way. It is familiar that the general social epistemic task in evaluating testimony (in general) involves assessing testifiers on at least two dimensions — their competence and honesty. While the sort of basic plausibility filters we typically employ (Lipton, 1998) no doubt have some role to play — most of us would probably reject out of hand claims that a glass of wine will kill us or that it will cure our ails — in many scientific contexts, it seems likely that the two dimensions of trustworthiness
will need to do most of the epistemic heavy lifting. Science, after all, has been known
to produce deeply counterintuitive knowledge.8
When it comes to the competence dimension, it is controversial whether the task is
realistic for those without much scientific training. Some suggest that the challenge is in principle meetable, however. Oreskes and Conway gesture in this direction shortly after offering their brief justification for lay trust of science:
because scientists are not (in most cases) licensed, we need to pay attention to who the experts actually are — by asking questions about their credentials, their past and current research, the venues in which they are subjecting their claims to scrutiny, and the sources of financial support they are receiving. (2010, 272)
In a similar spirit, Anderson (2011) describes various criteria for judging honesty
and epistemic responsibility, arguing that lay assessment of these qualities is possible
even for those with relatively modest educational attainment (cf. Feinstein, 2011;
Keren, 2018).
On the other hand, the perception of expertise can sometimes be a matter of moti-
vated cognition (Kahan, Jenkins-Smith, and Braman 2011; Suldovsky 2016; Sul-
dovsky, Landrum, and Stroud 2019; Stewart 2019). Complicating matters further is
research suggesting that science, as a profession, occupies a somewhat ambiguous
position in the public consciousness. As a general matter, while the public tends to
accord scientists considerable competence (only engineers rank higher), they occupy
only a middling position when it comes to “warmth” (Fiske & Dupree, 2014, 13,595).
Such affective dimensions of trust cannot be easily discounted. Epistemic trust is tied up for many with a moral sense of trust ("What kind of people are these folks?" "Do they have my best interests at heart?"). As de Melo-Martín and Intemann note, "When we trust, we are vulnerable to others. Hence, trust is risky; our trust can be betrayed. If people trust scientific experts to produce and disseminate sound knowledge and
8 Nor, of course, are most members of the lay public able to evaluate the credibility of a scientific conclusion by consulting the details of the research (Anderson, 2011, 144). Our discussion of epistemic trust in this context is necessarily brief and impressionistic, as this is a deeply complicated subject. Trust, in our usage, does not mean complete deference (as suggested by some investigations; see, e.g., Anderson et al., 2012); as a starting ante, we take it as minimally involving taking a testifier's claims seriously.
scientists fail to do so, people will have incorrect beliefs and make inadequate deci-
sions” (2018, 90). Recognizing that the aims and values of a given scientist may not
cohere with one’s own — and that, being people, scientists are as apt as anyone to
dissemble or mislead (given the right incentives and character flaws) — might lead one to withhold their epistemic trust.
Such complications at the individual level suggest an alternative locus for the prima facie trustworthiness of science: the scientific community (as a somehow united whole)9 — or, to construe things more narrowly: scientific consensus (concerning a particular issue). It is at this community level that particular scientific claims are vetted via peer-review and less formal post-peer-review practices. It is at this level that replications are attempted, disputes are prosecuted, papers are cited (positively and critically), results used as a platform for further work, and so on. When "the knowledge machine" of the scientific enterprise (Strevens, 2020) is firing on all cylinders, it is arguably reasonable to identify a kind of social objectivity attached to results on which there is robust scientific consensus (Longino, 1990). Think of this as the outline of a normative argument for the ex ante epistemic value of scientific consensus and thus a justification for the use of a consensus messaging strategy (CMS). The argument would need filling out to be fully plausible, of course; but suppose we grant
the conclusion for a moment.
That such a normative case can be made does not, of course, entail that we’d be
wise to adopt a CMS in response to our science communication challenges. Some
science communication researchers, however, have recently offered descriptive, empirical support for CMSs on the basis of the "pivotal role" that perceived scientific consensus plays in the acceptance of science (Lewandowsky, Gignac, and Vaughan 2013). Van der Linden et al. (2015), citing the foregoing study, argue that "perceived scientific agreement [is] a 'gateway belief' that either supports or undermines other key beliefs about climate change, which in turn, influence support for public action" (2; see also van der Linden, Leiserowitz, and Maibach 2019). These results — including their generality and real-world efficacy — remain controversial (Landrum & Slater, 2020; Kahan, 2017; Landrum, Hallman, and Jamieson 2019; cf. van der Linden, Leiserowitz, and Maibach 2017). But the basic appeal of the underlying idea is obvious — particularly in cases like ACC. Thanks in large part to the well-funded campaigns to cast doubt on climate science (Oreskes and Conway 2010a; Brulle 2014), the public consistently underestimates the level of scientific consensus on ACC (Hamilton, 2016, 201; Leiserowitz et al., 2016). It stands to reason that if they came to believe that there was a scientific consensus on ACC, they would also tend to accept that ACC was occurring.10 Mutatis mutandis, the hope goes, for other pieces
of socially-contentious science.
9 United how and to what degree is a matter we take up in a preliminary way momentarily.
10 As one might also suspect, van der Linden’s study was quickly picked up by a number of news out-
lets and op-ed pages, many of whom reported the experimental results as furnishing practical advice;
e.g., https://www.nytimes.com/2020/01/02/opinion/climate-change-deniers.html, https://www.washingtonpost.com/news/energy-environment/wp/2015/02/26/can-this-gateway-belief-get-people-to-accept-climate-change/, https://phys.org/news/2015-05-scientific-consensus-gateway-belief-climate.html.
2.2 Distinguishing Consensus from Mere Agreement
We will not attempt to evaluate the descriptive case for CMSs here — not directly,
at least. Before describing our own empirical study that we contend bears on the
tenability of CMSs, however, let us return to the normative case for their adoption:
should the existence of a robust scientific consensus on X warrant a belief that X is
true?11 This evidently depends both on what we mean by ‘consensus’ and what we
may presume about the relevant background beliefs — e.g., how one conceives of
consensus as coming about. The attentive reader of the empirical literature on CMSs
may have noticed an occasional slide between talk of consensus and talk of agree-
ment. Consider again van der Linden (2015) quoted above; here’s more of the context
of that quotation:
We posit that belief or disbelief in the scientific consensus on human-caused climate change plays an important role in the formation of public opinion on the issue. This is consistent with prior research, which has found that highlighting scientific consensus increases belief in human-caused climate change [here they cite (Lewandowsky, Gignac, and Vaughan 2013)]. More specifically, we posit perceived scientific agreement as a "gateway belief" that either supports or undermines other key beliefs about climate change. (2; our emphasis)
This sort of conflation between agreement and consensus is also evident when one
examines the stimuli for the studies in question, where participants are asked to
estimate the level of agreement on climate change as a matter of a precise percent-
age. While treating consensus and percent agreement as functionally equivalent is
methodologically expedient, there are serious questions about whether doing so is
warranted.
To see this, consider a parallel to our normative question above: should the nearly
unanimous agreement of a group of people on X warrant a belief that X is true? Surely
the only reasonable answer to such a schematic question is (at best): it depends. How
was this agreement reached? How diverse is this agreeing group — in their values,
ideologies, prior commitments, &c.? What is the nature of their expertise (if any)?
How relevant is it to the issue at hand, for instance? While the question of the social
epistemology of consensus has received only sporadic philosophical attention over
the years (for some exceptions, see 1990; 2002; Beatty 2006; 2017; Solomon, 2007;
Odenbaugh, 2012; Miller, 2013; 2019; Stegenga, 2016), the non-identity of consen-
sus with mere agreement is widely granted. Ditto for the claim that for consensus
to deserve our epistemic respect, it should amount to more than mere agreement.
Miller, for example, asks when a consensus is "knowledge-based or epistemically
11 In asking this question, we of course need to finesse the issue of how one comes to the belief that there is a scientific consensus on a particular matter — for this will rarely be a matter of direct observation (or inference from many such observations). Rather it is a fact about the world — about the distribution of beliefs — that we often need to take on others' authority or say so. This may seem to raise a red flag for the strategy; why suppose that CMSs will work where direct testimony from authorities (like individual scientists or scientific organizations) fails if the former depend, in some sense, on the latter? We set this
concern aside in what follows.
justied” (2013; 2019), oering a broadly abductive answer (“when knowledge is the
best explanation” of the consensus) and suggesting conditions under which we might
expect knowledge (rather than accident, bias, or various sorts of social pressure) to
provide the best explanation of the consensus in question — including a condition
of “social diversity” à la Longino (1990). Others oer broadly similar accounts (Ste-
genga, 2016) or note conditions under which consensus should not be taken as reli-
ably indicative of the truth (Beatty, 2006).12
Here, we submit, understanding something about “how science works” as a social
enterprise may be pivotal for appreciating the prima facie epistemic significance of scientific consensus — or at least being in a position to ask the right questions (Anderson, 2011; Oreskes, 2019, ch. 2). One of the more salient features of the scientific enterprise uncovered in the last century is its tendency toward self-scrutiny
via a balance, of sorts, between competition, skepticism, and collaboration within
the scientific community (Merton, 1973; Kuhn, 1962; Longino, 1990; Kitcher, 1990;
Strevens, 2017, 2020) — a balance which, to an approximation, has the potential to
keep in check individual “pigheadedness” (or even harness it for good, as discussed
in Morton 2014) when certain conditions concerning the composition and activity
of the community are met. Now, again, while there is clearly much more to be said
about these conditions and the nature and limits of the epistemic warrant that scientific consensus can provide, the core point should seem quite plausible: matters of scientific consensus only provide such warrant in the context of a fairly rich set of
background beliefs about what scientic consensus is and how it is formed. While
such background beliefs are presumably common amongst the readers of this journal,
it is an open question what mental model of scientific consensus prevails among the
wider public. This is the question that we approach empirically in the study described
in the next section.
Before turning to the study, it is worth reflecting on two further practical problems that a CMS which treats consensus and agreement as synonymous would face. First, we simply don't have reliable survey data on the level of agreement among domain experts on all (or even most) scientific issues. A widely discussed poll mentioning
“AAAS scientists” (Funk and Rainie 2015) is in fact a poll of AAAS members —
subgroups of which include AAAS Members (a broad group including journalists,
humanists, science communicators, among presumably many other non-scientists),
Working Ph.D. Scientists, and Active Research Scientists.13 Depending on one’s
view of whose agreement is relevant — is it all working scientists or only special-
ists? — such surveys, where they exist, will be of questionable value.
12 An interesting possibility, raised by a reviewer for this journal, is that the distinction that we are pointing
to is really "a philosopher's distinction" that scientists themselves do not recognize (hence the conflation we see in some of the empirical studies we cite). While we do not take a stance on what scientists recognize on this matter (as we have not studied the question), it is worth pointing out that the fact that the conflation is made in several surveys does not suggest that the distinction between consensus and mere agreement is not widely recognized. Note as well that even if scientists do not generally explicitly recognize this distinction, they presumably understand facts about the scientific enterprise that would render facts about agreement implicitly more than mere agreement. This matter deserves further empirical study.
13 The latter are defined as "working Ph.D. scientists who also report having received a research grant within the past five years": https://www.pewresearch.org/science/2015/07/23/an-elaboration-of-aaas-scientists-views/.
Second, even if we had the more fine-grained surveys on various issues, previous research on public conceptions of consensus suggests that many people have a very low tolerance for dissent. Aklin and Urpelainen report that "the scientific community can only convince the public about the existence of a problem with a high degree of consensus [meaning agreement]. In other words, even a modest amount of scientific dissent significantly decreases public support for environmental policy" (2014, 174). This makes intuitive sense. In a scientifically sophisticated vernacular, 'consensus' is
as much a qualitative as quantitative matter; just as it involves a conception of a rigor-
ous process of contestation and a fair hearing of the evidence, we would submit that
it also (as a byproduct) involves an increasing marginalization of dissenting voices.
Treated as a purely quantitative matter, on the other hand, a member of the lay public
might reasonably wonder (e.g., concerning ACC): “What do those 3% of apparently
dissenting scientists say? What evidence do they have? Shouldn’t we consider this as
well?” (Landrum & Slater, 2020, 3). It is thus an open question whether matters on
which science-savvy observers recognize a consensus would be treated as such by
the lay public if the issue was discussed in terms of agreement, say, on the order of
a mere 75%.14
Thus, a CMS using 'consensus' — abjuring the infirmities of a percentage-agreement gloss and potentially signaling the existence of a more robust process of formation — would seem to be preferable, both normatively and practically. But is it workable? This is a matter on which further direct experimental study is needed. The study we describe below concerning how members of the lay public conceptualize scientific consensus bears on the workability question indirectly. To it we now turn.
3 The Study: Public Conceptions of Scientific Consensus
3.1 Aims
Our primary aims in this study were (1) to examine what models exist in the general
public for scientific consensus and to determine how sophisticated such models are; and (2) to determine whether and how scientific consensus figures into the public's trust of science. We chose semi-structured interviews and an analysis methodology based in grounded theory, as explained below, to capture qualitative data to answer these questions and to develop further hypotheses concerning the public's conception of scientific consensus.
3.2 Methods
The authors and a team of student researchers (24) conducted a total of 70 semi-struc-
tured interviews between September of 2018 and December of 2019 from a variety
14 A third practical difficulty for CMSs, gestured at in footnote 9, involves the fact that the existence of a consensus will typically be communicated by a single source (e.g., a news report, an individual science communicator, a statement from a scientific body such as the National Academy of Sciences, or AAAS)
rather than something that is, as it were, directly observed (or inferred).
of backgrounds and locations in the U.S., including data from 16 different states.
The researchers initially employed convenience sampling via acquaintance to collect
interviews, and then, in an attempt to increase the range of age, education attain-
ment, religiosity, and political ideology represented, moved to purposive sampling
later in the process of data collection. In particular, the purposive sampling targeted
participants with lower levels of education and conservative political ideologies as
those populations were underrepresented in the original set of data. Demographic
information for the sample can be viewed in Table S1 in the online supporting mate-
rial.15 While we need to be cautious about generalizing these results to the entire U.S.
population (especially to habitually underrepresented communities), this is a respect-
able sample size for a qualitative study of this nature. Such studies are meant to explore participants' views in greater depth than can be achieved using quantitative measures.
One particular way in which our sample fails to be demographically representative
is in their relatively high level of education attainment, which might incline one to
expect greater sophistication in conceptions of science.
The student researchers were trained in interview methodology and normed by
the first two authors through a series of practice interviews. Interviews were then conducted either face-to-face or via videoconferencing, audio recorded with participant consent, and transcribed and checked by the authors. The semi-structured interviews used open-ended questions inviting participants to share their understanding of science, scientific consensus, and reasons for trusting (or not) scientific results. Early questions were fairly general and designed to provide participants opportunities for mentioning scientific consensus (or concepts in the vicinity) naturally without prompting. Subsequent, more-focused questions addressed whether participants were familiar with the idea of a scientific consensus, and (if so) asked them to describe their conception of that term. The interview also included questions (some about two hypothetical scenarios) designed to allow the researchers to gauge the sophistication of participants' understanding of scientific consensus. The full interview script can be
found in the online supporting material. After participating in the interview, partici-
pants were given a survey to collect demographic information and data concerning
participants’ understanding of science as a social enterprise (to be used in a future
analysis).
The authors coded the relevant questions of the transcribed interviews and entered
the resulting data into spreadsheets. Simple descriptive coding schemes were pre-
determined based on the interview questions (e.g. codes for mentioning consensus
when discussing trust in science or not), but many codes having to do with level
of sophistication in conception of scientific consensus and definitions of "science" were developed through an inductive process of reading and re-reading transcripts, identifying recurring themes or words, and finding appropriate categories into which response types could be grouped. This common technique for qualitative interview coding borrows from grounded theory (Glaser & Strauss, 1967; Birks and Mills 2015). Further information on coder norming, the coding protocol, and inter-rater reliability (mean Krippendorff's Alpha for all raters on all variables = 0.90) can be
found in a detailed methods section in the online supporting material.
15 https://osf.io/eygwj/.
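To make the reported reliability statistic concrete, here is a minimal sketch of how Krippendorff's alpha can be computed for one nominally coded variable. It assumes the third-party Python package krippendorff and an invented ratings matrix; it illustrates the statistic only, and is not the study's data or the authors' analysis code.

```python
# Illustrative only: invented ratings, not the study's data.
import numpy as np
import krippendorff  # third-party package: pip install krippendorff

# Rows are raters, columns are interview transcripts; values are nominal
# category codes (e.g., 1 = Unsure/No View, 2 = Muddled, 3 = Mainstream,
# 4 = Sophisticated). np.nan marks transcripts a given rater did not code.
ratings = np.array([
    [2, 3, 3, 1, 4, 2, np.nan],
    [2, 3, 2, 1, 4, 2, 3],
    [2, 3, 3, 1, 4, 3, 3],
])

alpha = krippendorff.alpha(reliability_data=ratings,
                           level_of_measurement="nominal")
print(f"Krippendorff's alpha: {alpha:.2f}")
```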
For this analysis, we focused on four variables: (1) Approach to science, (2) Con-
sensus in response to trust, (3) Familiarity with consensus, and (4) Sophistication of
consensus model. Our inductive coding practice generated sub-categories into which
we sorted participants for each of the four main variables. Each variable and the
corresponding results are briefly described below, with discussion about how the qualitative and quantitative data relate to our research questions and hypotheses. The final codebook with full explanations can be found in the online supporting material.
3.3 Results and Discussion
1. Approach to Science variable. Our research questions and aims were centered on
participants’ conceptualization of scientic consensus, but in order to contextualize
their views on this subject and mask our focus, interviews began with questions about
how participants understand science. Most responses (44%) fell into a heterogeneous
category we labeled “Muddled.” This category included responses identifying sci-
ence only as a subject of academic study or (to our surprise) the natural world itself.16
Other common responses in this category saw science as an eort to “prove some-
thing is true” but without any evident conception of how scientists went about this.
The “Broad” category of responses (24%) included any that characterized sci-
ence as the pursuit of knowledge or understanding broadly without any mention of
concrete outcomes. These responses tended to include statements like “science is
studying what happens in the world.” “Process/Method-Oriented” and “Outcome-
Oriented” approaches to science were both relatively common (21% and 9% of inter-
viewees, respectively). “Process/Method-Oriented” responses generally focused on
the distinctive methods of science — like experimentation, testing of hypotheses,
or systematic observation. “Outcome-Oriented” approaches tended to focus on the
“products of science,” such as discoveries, understanding, knowledge, cures for dis-
eases, or technological advancements.
The least common type of response was labeled “Enterprise-Oriented” — this cat-
egory was intended to encompass conceptions of science that highlighted the sense
in which it is a social enterprise aimed at producing, revising, and curating knowl-
edge and understanding of certain features of the world. The “Enterprise-Oriented”
category was developed prior to interview coding, as a possible category that we
hypothesized might be attributed to participants who connected their trust of certain
pieces of science to the question of whether a consensus existed on that science. Only
one of the 70 participants expressed an “Enterprise-Oriented” approach to science.
2. Consensus in Response to Trust variable. Interviewers asked participants
whether they trusted science, and then asked participants to explain their
response. In some cases, interviewers asked participants if they trusted indi-
vidual scientists or science as a whole. Very few of the interviewees (3) spon-
taneously mentioned a conception of scientific consensus (including general
agreement among scientists) as a reason to trust science. An additional six inter-
viewees did mention consensus as a reason to trust science after the prompt
16 For example: “when I think of science…I actually think of nature and space” or “[science is] life”.
regarding science as a whole versus individual scientists (coded as “Mixed” in
our coding scheme). The vast majority (87%) of respondents, however, gave
various other reasons to trust or distrust science. Some of these were based on
ideas about science having the “facts” or being “concrete.” An example of this
can be seen in the excerpt below:
Interviewer: Do you feel like you generally trust science?
Participant Z1: Yes.
Interviewer: Why?
Participant Z1: It’s concrete.
Interviewer: Could you say more?
Participant Z1: I feel science is concrete in terms of it’s not religion or philoso-
phy or political viewpoints. It’s science and math. It’s more concrete.
Other responses were more focused on how science is portrayed in politics, or in media
representations, as is represented in the response below:
Participant AP1. Yeah, I trust science. I think it depends on, I guess, what it is.
Like I’m a rm believer, I like vaccines and I don’t believe in that if I get a shot
I’m going to become dyslexic. I don’t believe in the common media portrayals
of science.… So, I denitely do trust science, I just don’t trust them in [the]
media’s portrayal of science, if that makes sense.
Participant CM1. I would say [I trust science], I have no reason not to trust
it. I think I start not to trust it when it becomes political, you know? So when
you have politicians starting to argue about science like okay, like what? And
again, I think that’s my natural inclination to be suspicious of politics in general
because you know, they’ll say whatever they want to say in order to advance
their interests, whether it’s completely... I’m not saying it’s a lie, but there’s
definitely a lot of half truths that float around up there.
In some cases, trust in science was described as justified for reasons of methodology and "proof," as in the following example:
Participant SJ3: I trust science because...they do an experiment. Trial and error...they don't just say, okay, it's scientifically proven, but they have a reason behind each…each theory, or each reasoning. So, for example, people say organic food is better, but there are scientific reasons…you can prove that cer-
tain organic foods are better to eat. They have these reasonings behind it.
Our results suggest, in answer to our second research question, that it is relatively
rare for members of the lay public to connect their trust of science or scientific claims with beliefs about scientific consensus. For the most part, consensus seemed to be
unrelated to participants’ thinking about the grounds for trusting science.
3. Familiarity with Consensus variable: During the interviews, researchers asked
participants if they were familiar with the idea of scientific consensus. This
occurred after questions regarding trust of science, how new ideas become
accepted in science, and a scenario about whether participants would be inclined
to accept results from new research, giving the participants ample opportunity to
bring up consensus (or cognate ideas) naturalistically (vanishingly few did). In
response to this question, 30 participants (43%) indicated that they were familiar
with the term 'scientific consensus'. These responses were coded as cons_fam ("consensus familiar") regardless of the accuracy of the participants' subsequent definition of the term. With this question we were only trying to get a sense of the proportion of interviewees who would recognize the term if it was given to them. Fifteen participants asked for a definition or to be reminded of what the term meant, and then expressed some understanding or recognition after the reminder. These responses were labeled cons_np (for "needed prompt") and were considered distinct from the 25 cases (36%) in which participants did not know what scientific consensus was prior to a definition and expressed at most acquiescence (and sometimes confusion) when given the definition (labeled cons_unfam).
4. Sophistication variable: While interviewers asked participants to describe their
conception of scientific consensus, various parts of the interview were designed
to elicit further detail in the participants’ models of consensus from which its
sophistication could be judged. Our inductive coding approach generated four
categories of levels of sophistication.
The first level of sophistication, labeled Unsure/No View, was applied when a participant reported being unfamiliar with consensus, did not express much recognition, or did not evince a distinctive view when offered a basic definition by the interviewer and/or in the scenarios designed to encourage them to think about the scientific community (or sub-communities). Generally, these participants accepted the minimal characterization offered by interviewers (see below), but offered little else. This code was compatible with a participant expressing some claims about the likely formation, distribution, or relevance of consensus on prompting, but this usually happened as a clear guess associated with the interviewer's definition. The following example represents a typical Unsure/No View response:
Interviewer: Are you familiar with the idea of scientific consensus?
Participant M3: No.
Interviewer: By consensus I mean something like general agreement.
Participant M3: Okay.
Interviewer: How common do you suppose consensus is in science?
Participant M3: Depending on the issue, I'm sure there's a lot of it.
Interviewer: And is there one topic or subject or issue that you think has a significant amount of consensus?
Participant M3: Not really.
Interviewer: Okay, do you have a sense of how scientific consensus comes
about?
Participant M3: There has been improvement throughout the years. It’s kinda
hard to debate it. So, I would say that the longer the study, you have more.
Only seven of the 70 interviewees (10%) were categorized as having this level of
sophistication. Far more common (47%) was the second level of sophistication,
which we labeled Muddled. In these cases, the participants thought of consensus
in normatively non-standard ways, often at opposite ends of a spectrum of neces-
sary agreement. In some cases, participants believed that 100% agreement between
scientists, with no toleration for dissent, was necessary for consensus. In others,
participants thought that just a small plurality of scientists, perhaps multiple people
working in the same lab, or one other scientist convinced by the evidence, constituted
a consensus. Some participants evidently conceived of scientific consensus as something pertaining to the level of agreement in the general public (e.g., "it's when the masses, the majority of the people accept something scientific as true."). This code also encompassed cases in which a more standard conception of scientific consensus
was expressed, but was accompanied by non-standard beliefs about how consensus
was reached, such as through a group of privileged insiders, through only the scien-
tists deemed most intelligent, through governmental “approval” or peer review, or as
the manifestation of a kind of “groupthink” as in the example below:
Interviewer: How common do you suppose consensus is in science?
Participant SJ1: Probably fairly — it's kind of like groupthink.
Interviewer: Do you have a sense of how scientific consensus comes about?
Participant SJ1: Yeah, I think it’s what I said before, that the more often some-
one states something as fact, the more apt people are to accept it as fact, whether
it is or it isn’t.
More standard understandings of scientific consensus were categorized as Mainstream. The 22 participants (31%) whose responses were coded with this third level of sophistication thought of scientific consensus as general, strong agreement of the relevant agents. Here's a typical response for this category:
Interviewer: Are you familiar with the idea of scientific consensus?
Participant C2: Yes. That means that the greater body of the scientists agree on a conclusion.
Interviewer: How common do you suppose scientific consensus is in science?
Participant C2: It's tough to answer that. There's all sorts of questions. Some of it — the consensus is easy. Others — the consensus is much more difficult because the evidence isn't convincing enough. So it's common to have it, it's
common not to have it.
A clear, mainstream understanding of the term is present here. Our use of this cat-
egory tolerated some minor, non-standard models of how consensus comes about,
such as suggestions that all relevant scientists might meet in person to discuss a
subject and reach a consensus. Generally speaking, it was compatible with a loose
identification of consensus as general agreement.17
17 The two scenarios were often instrumental in discerning mainstream understandings of consensus from the previous two categories. For example, participants who regarded the agreement by scientists working at a pharmaceutical corporation as showing that there was a scientific consensus about the cause of a certain illness were automatically disqualified from the Mainstream sophistication category.
The final, and most nuanced model of scientific consensus was labeled Sophisticated, and was seen in eight of the participants in this study (11%). This code built upon the Mainstream category; recipients added an appreciation of certain nuances of consensus that contribute to its epistemic significance. This could include a more
complete understanding of how consensus comes about or a recognition of the com-
patibility of consensus with minority or outsider dissent; responses in this group
might also recognize the desirability of social diversity among the relevant agents,
and/or their relative independence in forming their views. These nuances appeared
in response to questions and scenarios throughout the interview, as in the example
below:
Participant SS1: [How common scientific consensus is] obviously ranges on the topic, what the field of study is. There are certain fields where there's a lot more research, a lot more money pumped into it. So a good example I would say just like climate science. That's where there's a really large consensus on that field. Other fields don't have that same certainty.... There's always going to
be people on the other side of that is going to disagree with you, but when you
have a majority of the people.
(later in interview) Interviewer: So, imagine that all the scientists in a certain
corporation that conducts medical research agreed on the cause of an illness.
[Do] you regard that as a consensus on what you would be inclined to accept
their conclusions?
Participant SS1: No, because it was just from one. You said one corporation?
... No, it has to be outside sources. They have that obviously incentive to sell
that product.... I don’t nd that to be credible at all…. If they had overwhelming
evidence from outside of the corporation [I would nd that credible].
Overall, converting our four sophistication codes to numbers (1–4, from least to
most), the average sophistication score across our 70 interviews was 2.4. While we
do not take the numerical values we associated with our category descriptions to
constitute a well-defined scale — there is clearly room to disagree about whether a "muddled" view of consensus is "better or worse" than having no view at all — this average being noticeably below a Mainstream of 3.0 conveys something important about the overall sophistication of our participants' mental models of scientific consensus. Or, put another way, our observation was that a majority (57%) of our interviewees either lacked a pre-existing view of what scientific consensus was or harbored significant misunderstandings about it. Especially when we reflect on the fact that even a Mainstream model of scientific consensus that treats it as (potentially) little distinguished from mere agreement may lack the sophistication we posit is necessary for generating the relevant epistemic warrant, we face the worrying possibility that nearly 90% of our participants lacked what was needed to appreciate the significance of scientific consensus.
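As a quick arithmetic check, the 2.4 figure follows from the category counts reported above; the snippet below simply redoes that calculation, with the Muddled count of 33 inferred from the reported 47% of 70 interviews rather than stated in the text.

```python
# Back-of-the-envelope check of the mean sophistication score reported above.
counts = {
    1: 7,   # Unsure/No View (10% of 70)
    2: 33,  # Muddled (inferred from the reported 47% of 70)
    3: 22,  # Mainstream (31%)
    4: 8,   # Sophisticated (11%)
}

n = sum(counts.values())                                # 70 interviews
mean = sum(code * k for code, k in counts.items()) / n
print(n, round(mean, 2))                                # -> 70 2.44 (reported as 2.4)
```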
Breaking out our latter three variables by the five categories in the Approach to Science variable, we observe a noticeable trend towards greater recognition and sophistication concerning scientific consensus for those with what we would consider more sophisticated conceptions of the scientific enterprise. Those with muddled views of science (44% of our participants) were unlikely to associate consensus with their trust of science and, indeed, tended to be unfamiliar with the concept itself (see Table 1 below).

Table 1 Summary Results by Approach to Science1

1. Approach to Science      2. Consensus in         3. Familiar with            4. Sophistication
(percentage of total        response to Trust?      Consensus?                  (mean score)
participants)

Muddled (44%)               90% no, 6% mixed,       29% familiar (fam),         2.0
                            3% yes                  16% needed prompt (np),
                                                    55% unfamiliar (unfam)

Broad (24%)                 88% no, 6% mixed,       59% fam, 24% np,            2.6
                            6% yes                  18% unfam

Outcome-Oriented (9%)       67% no, 17% mixed,      33% fam, 33% np,            2.7
                            17% yes                 33% unfam

Process-Oriented (21%)      87% no, 13% mixed,      53% fam, 27% np,            3.0
                            0% yes                  20% unfam

Enterprise-Oriented (1%)    100% no                 100% fam                    4.0 (single result)

1 Percentages do not sum to exactly 100% because of rounding
3.4 Limitations
As with all qualitative studies with this methodology and sample size, limitations
exist in the generalizability of the results. Although we aimed for diversity through
our purposive sampling, people of color, politically conservative individuals, and
those with less education are underrepresented in this sample. These results are
also not readily generalizable to populations outside of the U.S. We note, however,
that our participants overrepresent those demographic groups — such as those with
higher levels of educational attainment — that one might expect to possess a more
nuanced understanding of the scientic enterprise. If this is the case, our results may,
in fact, overestimate the level of sophistication about scientic consensus in the gen-
eral public.
Furthermore, it is possible that there are views or models of consensus that were
not drawn out by our particular interview protocol. For example, consensus might
matter functionally to members of the general public when it comes to their trust of
science, though it is rarely explicitly thought to matter. We did attempt, in the cre-
ation of this interview protocol, to give respondents ample opportunity to mention
consensus or neighboring concepts, but we can rule out neither this possibility nor the
possibility that particular ways of asking questions masked the role consensus plays
in some participants’ trust of science.
4 A dilemma for CMSs
These limitations in mind, our results point to a (two-tier) dilemma for the advis-
ability of using CMSs to communicate with the public about science. The first horn of the dilemma stems from the observation that the idea of scientific consensus often seemed simply unfamiliar to our study participants. When the concept is recognized at all, participants as a whole did not show much sophistication in their grasp of it. Moreover, as we noted above, it was only in the vast minority of cases that the existence of a consensus came up as relevant to a participant's trust of science, even after prompting. Though we need to be cautious about drawing significant conclusions from these findings, at the very least they should temper expectations for the efficacy of CMSs for generating trust in scientific messages. Indeed, they may suggest an explanation for the inconsistent results in efforts to replicate that model in
other contexts and in other ways (see, e.g., Deryugina and Shurchkov 2016; Bolsen
& Druckman 2018; Landrum, Hallman, and Jamieson 2019; Chinn and Hart 2021b).
More empirical research is clearly needed on this point.
A natural way of responding to the lack of recognition of (or sophistication about)
the concept of consensus is to replace it in our scientific messaging strategies with mere agreement. Perhaps the persuasive effect of a rich conception of scientific consensus could be triggered instead by messages focusing on measures of agreement among scientists on a given issue. This leads to the second horn of the dilemma — itself
another dilemma: framing a CMS in terms of agreement will likely either fail to be a
generally workable strategy or fail to be a normatively acceptable strategy.
Our case against workability was sketched above (§2.2): While we have (arguably)
good measurements of the (impressively high) extent of agreement among climate
scientists about ACC (Oreskes, 2004; Cook et al., 2016), other issues have not been
studied at this level of detail, making percent-agreement effectively unavailable as an alternative for many scientific issues. Or worse, as we suggested above, it could be that levels of agreement noticeably below 100% will induce boomerang / reactance effects stemming from questions about the nature of the disagreement (Zhou, 2016; Chinn and Hart 2021a). Even in the case of ACC, with its near unanimity in the scientific community, climate change skeptics have (apparently successfully) employed a "Galilean Gambit" (Landrum & Slater, 2020, 3) to magnify the significance of even
extreme minority views.18
The normative case against framing a CMS (when workable) in terms of mere
agreement is, we think, intuitive. Suppose that mere agreement should not be regarded
18 Moreover, as Landrum & Huxster (2021, 3) point out, different estimates of the level of agreement on a certain issue can become fodder for skeptics — as when the results of the Pew Research Center / AAAS survey mentioned above (2015) indicated that "87% of scientists say that climate change is mostly due to human activities" rather than the often-reported 97%. Such a discrepancy, of course, can be explained along
the lines mentioned in §2.2; the point is that the precision can also invite unproductive (or motivated)
scrutiny.
as providing epistemic warrant except against the backdrop of a range of background
beliefs about the epistemic context of this agreement, processes that likely brought it
about, and so on. Suppose further that such a backdrop cannot be assumed (or that we
know it to be rare). Then, at best, representing a fact as supported by mere scientific agreement is tantamount to asserting something on grounds that one knows to be unjustified. While this is not a case of straightforward lying — one is not attempting
to create false beliefs in another — it does appear to be a kind of dishonesty. It is thus
prima facie wrong. We think this is true even if one believes the claim being asserted
(and believes that it would be good for the recipient of our assertion to believe it).
Consider an analogy: suppose we know that climate-denier Dave will reflexively believe anything that Tom Hanks asserts (whatever the truth of such assertions may be). We might then be tempted to argue to Dave that he should believe that climate change is real because Tom Hanks has said it is. Doing so constitutes a kind of manipulation and thus arguably offends against his intellectual autonomy (cf. Riley, 2017; Fricker,
2021).
Now, of course, there’s room to resist this line of argument or the conclusion we
draw from it. Perhaps when the stakes are high enough, the prima facie wrong of
the dishonesty can be overcome by the social benefit of getting people to believe in a certain way. Such believers might not count as knowing (being, in a certain sense, "Gettierized"), but this may be a matter of indifference when it comes to the social
good that is brought about by their true belief. That looks at least plausible in the case
of climate change — on which more presently.
One might also argue that it’s possible to avoid insincerity while still using others’
false beliefs; returning to our analogy, we could effectively sidestep the matter of
the evidential relevance of Tom Hanks. Rather than arguing as above, for example,
one might instead say, “Look Dave: you think that everything Tom Hanks says is
correct, right? I think that’s nonsense, myself, but have you heard that he thinks that
climate change is real? So by your lights, you should believe that it’s real!” First,
it’s not obvious to us that this completely avoids the manipulation; but grant for the
sake of argument that it does. Is this sort of maneuver possible in the case of glossing
consensus as mere agreement? Perhaps if we already knew that beliefs about the epis-
temic significance of mere agreement were widespread, we could simply appeal to
these beliefs even if we found them to be evidentially dubious. But we don’t seem to
know this. Indeed, as Intemann has pointed out, “[c]limate skeptics have rejected the
empirical evidence for a scientific consensus precisely because they are dubious of
the processes and practices that have produced agreement in climate science” (2017,
193). Without a pre-existing peg to hang our hat on — viz. that mere agreement is
epistemically weighty — we would again presumably be in a position of falsely rep-
resenting that the agreement is evidentially relevant to the target belief.
Perhaps it’s implausible to regard glossing consensus as mere agreement as dis-
honest. It might be more akin to a harmless idealization or speaking in a language that
members of the lay public can more readily understand (see, for example, Oreskes
and Conway 2010b, 687). In an editorial in Public Understanding of Science, that
journal’s editor suggested that the eld should rethink “the very meaning of key terms
like ‘quality’ and ‘accuracy’. Accuracy of science communication was traditionally
dened as adherence to the specialist message, but is this still the case?…We prob-
ably need a new notion of accuracy” (Bucchi, 2017, 891). Charitably interpreted, we
can read this as an encouragement to science communicators to consider more care-
fully and strategically how certain messages will likely be received — e.g., instead
of talking about the extent to which the existence of anthropogenic climate change is
confirmed or very highly probable, characterizing our epistemic state as knowing that
it is occurring. As before, however, it is not clear how this sort of approach would
work in the case of communicating the consensus about climate change. While mere
agreement and consensus may of course overlap — the scientific consensus about
climate change involves a high degree of agreement — the former is not a mere ide-
alization of the latter.
Let us consider a final way of resisting this horn of our dilemma. In a fascinating
and provocative series of articles, John (2018; 2019; 2021) has explored the limits of
norms of sincerity and openness when it comes to science communication and expert
testimony. In cases, for example, where non-experts harbor a “false ‘folk philosophy
of science’” it might be that sincerity on certain matters will create in them false
beliefs; likewise, “as in Climategate, transparency and openness may destroy war-
ranted trust…. If we care about the promotion of true belief, we should not demand
that scientists are transparent and open” (2018, 7). Indeed, John argues, there are
situations in which one may need to choose “between making an honest assertion and
making an effective assertion," (9) (i.e., an assertion that would be in a non-expert's
epistemic interest to believe). Perhaps glossing consensus as mere agreement is like
this: a way of producing a true belief in the lay public by way of a false assertion, a
case of ‘well-leading’ rather than ‘misleading’ (10).
It would take us too far afield to evaluate John’s arguments in any depth. But even granting their basic thrust, much more would need to be said in favor of the effectiveness of an agreement-framed-CMS. Recall that this question arises in the context of the second horn of the second-tier dilemma — concerning an issue, like ACC, on which the scientific community and (even more) relevant experts agree. On this issue, the effectiveness of agreement-framed-CMSs for at least the immediate acceptance of ACC has been something of a mixed bag (see citations in §2.1); even when significant effects show up, effect sizes are small, and no one yet knows whether the relevant belief revisions would occasion changes in one’s actions relevant to climate change (for a review of the relevant literature, see Landrum & Slater 2020). More empirical research is needed here, as John agrees (2018, 10).
Aside from this “immediate” question of efficacy — can agreement-framed-CMSs shift basic beliefs about ACC (and like matters)? — we have a number of concerns about the longer-term efficacy of such strategies stemming from possible downstream consequences of representing that agreement as epistemically significant. One obvious worry for pursuing such strategies vigorously is that doing so might serve to entrench a faulty norm of acceptance: that scientific matters should only be accepted where there is near-unanimity. This would in turn make communication more difficult on issues discussed in §2.2 — that is, issues either about which we lack good information about the level of agreement of individual scientists or on which the level of agreement, while compatible with there being a robust consensus, may not surpass a heightened bar. Another worry is that it may put communicators in the precarious position of needing to defend the epistemic significance of agreement against objections like those gestured towards by Intemann above. Responses that open the door to accusations of dishonesty or manipulation might further corrode trust of such communicators. While this efficacy question is a good deal more difficult to study empirically, it too should be thought through and investigated carefully.
5 Conclusion & next steps
To summarize, the overall structure of our dilemma is this: if a CMS is sophisticated (abjuring a facile identification of scientific consensus with mere agreement), then the results of our study lead us to doubt that it will be effective; if, on the other hand, the CMS takes the simple approach and equates consensus with agreement, then it will either be difficult to employ in a broad range of cases or will transgress the sincerity norm in science communication (for communicators who accept our earlier points, anyway). The conclusion of the previous section was that even if this norm admits of exceptions in certain cases, we need to be cautious about potential downstream consequences for public trust and about contributing to a more challenging communication environment overall.
Reflection on our dilemma brings us to a final, tentative point. We saw that greater sophistication in one’s view of science tended to coincide with a greater likelihood of being aware of the idea of scientific consensus and with a more sophisticated grasp of the concept. This is not overly surprising. The fact that consensus was so rarely associated with our study participants’ trust of science suggests, though, that science educators and communicators could do more to produce an understanding of science that makes more salient how healthy and robust forms of consensus come about, why such consensus should be seen as epistemically significant, and why such significance is compatible with the existence of minority dissent. It seems to us very plausible that a grasp of certain social–institutional features of the scientific enterprise — particularly, the balance between cooperation and competition — would provide an apt background for judging whether a consensus is likely to be indicative of the truth or could be explained away as groupthink, a bandwagon effect, or a conspiracy (Intemann, 2017; Slater, Huxster, and Bresticker 2019).
One of our next steps is to attempt to test this hypothesis by making use of the survey data concerning participants’ grasp of the social enterprise of science that we collected after each interview. We also intend to undertake a deeper coding effort on these interviews to further explore the public’s perceptions of science and scientific consensus. Meanwhile, we believe that philosophers of science and epistemologists have an important role to play in contributing to the ongoing empirical research on effective (and acceptable) science communication strategies going forward.
Acknowledgements This project originated in Slater’s “Science in the Public Eye” course (Fall 2018). Students in this course helped formulate the project and develop the interview protocol, and many conducted and transcribed interviews. Thanks to funding from the Dean’s office at Bucknell and the U.S. National Science Foundation (SES-1734616) for the financial support to make this course possible. Research assistants on this project (from both Bucknell University and Eckerd College) include Adam Rueda, Aleks Bloschichak, Andrew Champlin, Anjali Patel, Colleen Buckley, Conall Rubin-Thomas, Conor Moore, Kathryn Genovesi, Curtis Weaver, Erin Goldberg, Gray Reid, Jesse Lopez, Katie Edwards, Leah Kramer, Mary Marshall, Michael Erickson, Owen Klinger, Rus Murphy, Savannah Weaver, Soham Patel, Subarno Turja, and Zach Krieger. Early results were presented at the 2018 Philosophy of Science Association meeting; we would like to thank that audience for its helpful comments and questions. Thanks as well to the reviewers for this journal for their constructive feedback on earlier versions of this paper.
References
Aklin, M. & Urpelainen, J. (2014). Perceptions of Scientific Dissent Undermine Public Support for Environmental Policy. Environmental Science & Policy, 38, 173–77. https://doi.org/10/f5w45n
Anderson, A. A., Scheufele, D. A., Brossard, D., & Corley, E. A. (2012). The Role of Media and Deference to Scientific Authority in Cultivating Trust in Sources of Information about Emerging Technologies. International Journal of Public Opinion Research, 24(2), 225–37. https://doi.org/10/cbrh92
Anderson, E. (2011). Democracy, Public Policy, and Lay Assessments of Scientific Testimony. Episteme, 8(2), 144–64. https://doi.org/10/ctj8dx
Baier, A. (1986). Trust and Antitrust. Ethics, 96(2), 231–260. https://doi.org/10.1086/292745
Beatty, J. (2006). Masking Disagreement Among Experts. Episteme, 3(1–2), 52–67
Beatty, J. (2017). Consensus: Sometimes It Doesn’t Add Up. In Gissis, S., Lamm, E., & A. Shavit (Eds.),
Landscapes of Collectivity. (pp. 179–198). Cambridge, MA: MIT Press.
Birks, M. & Mills, J. (2015). Grounded Theory: A Practical Guide. Second edition. Los Angeles: SAGE
Bolsen, T., & Druckman, J. N. (2018). Do Partisanship and Politicization Undermine the Impact of a Scientific Consensus Message about Climate Change? Group Processes & Intergroup Relations, 21(3), 389–402. https://doi.org/10/gdfds4
Brulle, R. J. (2014). Institutionalizing Delay: Foundation Funding and the Creation of U.S. Climate Change
Counter-Movement Organizations. Climatic Change, 122(4), 681–94. https://doi.org/10/f2pdbh
Bucchi, M. (2017). Credibility, Expertise and the Challenges of Science Communication 2.0. Public Understanding of Science, 26(8), 890–93. https://doi.org/10/ggzw29
Chinn, S., & Hart, P. S. (2021a). Climate Change Consensus Messages Cause Reactance. Environmental
Communication, 1–9. https://doi.org/10.1080/17524032.2021.1910530
Chinn, S., & Hart, P. S. (2021b). Effects of Consensus Messages and Political Ideology on Climate Change Attitudes: Inconsistent Findings and the Effect of a Pretest. Climatic Change, 167(3–4), 47. https://doi.org/10.1007/s10584-021-03200-2
Cook, J., Oreskes, N., Doran, P. T., et al. (2016). Consensus on Consensus: A Synthesis of Consensus Esti-
mates on Human-Caused Global Warming. Environmental Research Letters, 11(4), 048002. https://
doi.org/10/gcv7m4
Deryugina, T. & Shurchkov, O. (2016). The Effect of Information Provision on Public Consensus about Climate Change. PLOS ONE, 11(4), e0151469. https://doi.org/10/f8wzg7
Feinstein, N. (2011). Salvaging Science Literacy. Science Education, 95(1), 168–85. https://doi.org/10/
bp4m3m
Fiske, S. T., & Dupree, C. (2014). Gaining Trust as Well as Respect In Communicating to Motivated Audi-
ences about Science Topics. Proceedings of the National Academy of Sciences, 111 (Supplement 4):
13593–97. https://doi.org/10/f6gm24
Fricker, E. (2021). Epistemic Self-Governance and Trusting the Word of Others. In Matheson, J. & K. Lougheed (Eds.), Epistemic Autonomy (pp. 323–42). New York: Routledge. https://doi.org/10.4324/9781003003465-22
Funk, C. & Rainie, L. (2015). Public and Scientists’ Views on Science and Society. Wash-
ington, D.C.: Pew Research Center. http://www.pewinternet.org/2015/01/29/
public-and-scientists-views-on-science-and-society/
Funk, C. & Tyson, A. (2020). Partisan Differences Over the Pandemic Response Are Growing. Washington D.C.: Pew Research Center. https://www.pewresearch.org/science/2020/06/03/partisan-differences-over-the-pandemic-response-are-growing/
Glaser, B., & Strauss, A. (1967). The Discovery of Grounded Theory. Chicago: Aldine
Goldenberg, M. J. (2021). Vaccine Hesitancy: Public Trust, Expertise, and the War on Science. Pittsburgh:
University of Pittsburgh Press
van Green, T., & Tyson, A. (2020). 5 Facts about Partisan Reactions to COVID-19 in 2020. Washington D.C.: Pew Research Center. https://www.pewresearch.org/fact-tank/2020/04/02/5-facts-about-partisan-reactions-to-covid-19-in-the-u-s/
Hamilton, L. C. (2016). Public Awareness of the Scientific Consensus on Climate. SAGE Open, 6(4), 1–11. https://doi.org/10/gn43
Intemann, K. (2017). Who Needs Consensus Anyway? Addressing Manufactured Doubt and Increasing Public Trust in Climate Science. Public Affairs Quarterly, 31(3), 189–208
John, S. (2018). Epistemic Trust and the Ethics of Science Communication: Against Transparency, Open-
ness, Sincerity and Honesty. Social Epistemology, 32(2), 75–87. https://doi.org/10.1080/02691728.
2017.1410864
John, S. (2019). Science, Truth and Dictatorship: Wishful Thinking or Wishful Speaking? Studies in His-
tory and Philosophy of Science Part A, 78(December), 64–72. https://doi.org/10/gnmnd7
John, S. (2021). Scientific Deceit. Synthese, 198(1), 373–94. https://doi.org/10/gnmnfd
Jones, K. (1996). Trust as an Affective Attitude. Ethics, 107, 4–25. https://doi.org/10.1086/233694
Kahan, D. (2017). The ‘Gateway Belief’ Illusion: Reanalyzing the Results of a Scientific-Consensus Messaging Study. Journal of Science Communication, 16(5), A03. https://doi.org/10/gg435b
Kahan, D., Jenkins-Smith, H., & Braman, D. (2011). Cultural Cognition of Scientific Consensus. Journal of Risk Research, 14(2), 147–74. https://doi.org/10/bdrqf6
Keren, A. (2018). The Public Understanding of What? Laypersons’ Epistemic Needs, the Division of
Cognitive Labor, and the Demarcation of Science. Philosophy of Science, 85(5), 781–92. https://doi.
org/10/gfrd9f
Kitcher, P. (1990). The Division of Cognitive Labor. The Journal of Philosophy, 87(1), 5–22
Kuhn, T. S. (1962). The Structure of Scientific Revolutions. Chicago: University of Chicago Press
Landrum, A. R., Hallman, W. K., & Jamieson, K. H. (2019). Examining the Impact of Expert Voices: Communicating the Scientific Consensus on Genetically-Modified Organisms. Environmental Communication, 13(1), 51–70
Landrum, A. R., & Huxster, J. K. (2021). Mask Messaging for COVID-19: Examining the Effectiveness of a Scientific Consensus Message versus an Explanatory Graphic. RAPID Preliminary Report 2. San Francisco: KQED.org. https://www.kqed.org/about/16011/mask-messaging-for-covid19
Landrum, A. R., & Slater, M. H. (2020). Open Questions in Scientific Consensus Messaging Research. Environmental Communication, 14(8), 1033–46. https://doi.org/10/gg4t53
Leiserowitz, A., Maibach, E., Roser-Renouf, C., Feinberg, G., & Rosenthal, S. (2016). Climate Change in the American Mind: March, 2016. New Haven: Yale Program on Climate Change Communication. http://climatecommunication.yale.edu/publications/climate-change-american-mind-march-2016/
Lewandowsky, S., Gignac, G. E., & Vaughan, S. (2013). The Pivotal Role of Perceived Scientific Consensus in Acceptance of Science. Nature Climate Change, 3(4), 399–404. https://doi.org/10/gg3mv2
van der Linden, S. L., Leiserowitz, A. A., Feinberg, G. D., & Maibach, E. W. (2015). The Scientific Consensus on Climate Change as a Gateway Belief: Experimental Evidence. PLOS ONE, 10(2), e0118489. https://doi.org/10/f68jv2
van der Linden, S. L., Leiserowitz, A. A., & Maibach, E. W. (2017). Gateway Illusion or Cultural Cogni-
tion Confusion? Journal of Science Communication, 16(5), A04. https://doi.org/10/gg438k
van der Linden, S. L., Leiserowitz, A. A., & Maibach, E. W. (2019). The Gateway Belief Model: A Large-
Scale Replication. Journal of Environmental Psychology, 62, 49–58. https://doi.org/10/gfv473
Lipton, P. (1998). The Epistemology of Testimony. Studies in History and Philosophy of Science, 29(1),
1–31. https://doi.org/10/d2ntbb
Longino, H. E. (1990). Science as Social Knowledge: Values and Objectivity in Scientific Inquiry. Princeton: Princeton University Press
Longino, H. E. (2002). The Fate of Knowledge. Princeton: Princeton University Press
de Melo-Martín, I., & Intemann, K. (2018). The Fight Against Doubt. New York: Oxford University Press
Merkley, E. (2020). Anti-Intellectualism, Populism, and Motivated Resistance to Expert Consensus. Public Opinion Quarterly, 84(1), 24–48. https://doi.org/10/gg433m
Merton, R. K. (1973). The Sociology of Science: Theoretical and Empirical Investigations. University of
Chicago Press
Miller, B. (2013). When Is Consensus Knowledge Based? Distinguishing Shared Knowledge from Mere
Agreement. Synthese, 190(7), 1293–1316. https://doi.org/10/gg435n
Miller, B. (2019). The Social Epistemology of Consensus and Dissent. In Fricker, M., Graham, P. J., & N. J. L. L. Pedersen (Eds.), The Routledge Handbook of Social Epistemology (pp. 230–39). New York: Routledge. https://doi.org/10.4324/9781315717937-23
Morton, A. (2014). Shared Knowledge from Individual Vice: The Role of Unworthy Epistemic Emotions.
Philosophical Inquiries, 2(1), 163–172
Odenbaugh, J. (2012). Climate, Consensus, and Contrarians. In Kabasenche, W. P., O'Rourke, M., &
Slater, M. H. (Eds.), The Environment: Philosophy, Science, and Ethics (pp. 137–150). Cambridge,
MA: MIT Press
Olshansky, A., Peaslee, R. M., & Landrum, A. R. (2020). Flat-Smacked! Converting to Flat Eartherism. The Journal of Media and Religion, 19(2), 46–59. https://doi.org/10.1080/15348423.2020.1774257
Oreskes, N. (2004). The Scientific Consensus on Climate Change. Science, 306(5702), 1686. https://doi.org/10/cbt9bh
Oreskes, N. (2019). Why Trust Science? Princeton: Princeton University Press
Oreskes, N., & Conway, E. M. (2010a). Merchants of Doubt. New York: Bloomsbury Press
Oreskes, N., & Conway, E. M. (2010b). Defeating the Merchants of Doubt. Nature, 465(7299), 686–687.
https://doi.org/10.1038/465686a
Riley, E. (2017). The Beneficent Nudge Program and Epistemic Injustice. Ethical Theory and Moral Practice, 20(3), 597–616. https://doi.org/10/gnms9v
Slater, M. H., Huxster, J. K., & Bresticker, J. E. (2019). Understanding and Trusting Science. Journal for
General Philosophy of Science, 50(2), 247–61. https://doi.org/10/gf7hzf
Solomon, M. (2007). The Social Epistemology of NIH Consensus Conferences. In Kincaid, H. & McKitrick, J. (Eds.), Establishing Medical Reality (pp. 167–77). Dordrecht: Springer Netherlands. https://doi.org/10.1007/1-4020-5216-2_12
Stegenga, J. (2016). Three Criteria for Consensus Conferences. Foundations of Science, 21(1), 35–49.
https://doi.org/10/gg4355
Stewart, C. (2019). Expertise and Authority. Episteme (Online First). https://doi.org/10/gft3vc
Strevens, M. (2017). Scientific Sharing: Communism and the Social Contract. In Boyer-Kassem, T., Mayo-Wilson, C., & Weisberg, M. (Eds.), Scientific Collaboration and Collective Knowledge (pp. 3–33). New York: Oxford University Press
Strevens, M. (2020). The Knowledge Machine: How Irrationality Created Modern Science. New York: Liveright Publishing
Suldovsky, B. (2016). In Science Communication, Why Does the Idea of the Public Deficit Always Return? Exploring Key Influences. Public Understanding of Science, 25(4), 415–26. https://doi.org/10/gg435k
Suldovsky, B., Landrum, A. R., & Stroud, N. J. (2019). Public Perceptions of Who Counts as a Scientist for Controversial Science. Public Understanding of Science, 28(7), 797–811. https://doi.org/10.1177/0963662519856768
Zhou, J. (2016). Boomerangs versus Javelins: How Polarization Constrains Communication on Climate
Change. Environmental Politics, 25(5), 788–811. https://doi.org/10/gg434f
Publisher’s Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.