RESEARCH ARTICLE
Influences of study design on the
effectiveness of consensus messaging: The
case of medicinal cannabis
Asheley R. Landrum¹*, Brady Davis¹, Joanna Huxster², Heather Carrasco³

1 College of Media & Communication, Texas Tech University, Lubbock, TX, United States of America
2 Department of Environmental Studies, Eckerd College, St. Petersburg, FL, United States of America
3 Rawls College of Business, Texas Tech University, Lubbock, TX, United States of America

* A.Landrum@ttu.edu
Abstract
This study examines to what extent study design decisions influence the perceived efficacy
of consensus messaging, using medicinal cannabis as the context. We find that research-
ers’ decisions about study design matter. A modified Solomon Group Design was used in
which participants were either assigned to a group that had a pretest (within-subjects
design) or a posttest only group (between-subjects design). Furthermore, participants were
exposed to one of three messages—one of two consensus messages or a control message
—attributed to the National Academies of Sciences, Engineering and Medicine. A consen-
sus message describing a percent (97%) of agreeing scientists was more effective at shift-
ing public attitudes than a consensus message citing substantial evidence, but this was only
true in the between-subject comparisons. Participants tested before and after exposure to a
message demonstrated pre-sensitization effects that undermined the goals of the mes-
sages. Our results identify these nuances to the effectiveness of scientific consensus mes-
saging, while serving to reinforce the importance of study design.
Introduction
The Gateway Belief Model (GBM [1]) argues that communicating about scientific consen-
sus to the general public indirectly influences change in people’s support for policies by first
increasing their perceptions of scientific consensus and then aligning their attitudes with that
of the scientists [1]. The vast majority of studies using the GBM examine consensus messaging
in the context of climate change [2–4]. A handful of studies have also examined the GBM in
the context of genetically modified organisms [5,6], and at least one study so far has looked at
the effects of consensus messaging on the issue of vaccination [7]. This study examines consen-
sus messaging about the efficacy of cannabis for treating chronic pain.
According to the GBM, messages about scientific consensus on climate change correct
faulty assumptions about the robustness of such consensus (measured as participants’ esti-
mates of the percent of scientists who agree with a proposition). These corrected beliefs then
PLOS ONE
PLOS ONE | https://doi.org/10.1371/journal.pone.0260342 November 29, 2021 1 / 17
Citation: Landrum AR, Davis B, Huxster J,
Carrasco H (2021) Influences of study design on
the effectiveness of consensus messaging: The
case of medicinal cannabis. PLoS ONE 16(11):
e0260342. https://doi.org/10.1371/journal.
pone.0260342
Editor: Lucy J. Troup, University of the West of
Scotland, UNITED KINGDOM
Received: July 6, 2021
Accepted: November 8, 2021
Published: November 29, 2021
Peer Review History: PLOS recognizes the
benefits of transparency in the peer review
process; therefore, we enable the publication of
all of the content of peer review and author
responses alongside final, published articles. The
editorial history of this article is available here:
https://doi.org/10.1371/journal.pone.0260342
Copyright: ©2021 Landrum et al. This is an open
access article distributed under the terms of the
Creative Commons Attribution License, which
permits unrestricted use, distribution, and
reproduction in any medium, provided the original
author and source are credited.
Data Availability Statement: Data and code can be
found on our project page on OSF.io at https://osf.
io/38rju/ (DOI: 10.17605/OSF.IO/38RJU).
influence individuals’ views and attitudes about the risks posed by climate change, which influ-
ence support for relevant policies [4,7].
The model has been tested primarily in the context of climate change, using the “97% of cli-
mate scientists agree...” message with a pie graph highlighting the 97% number. Concerns
exist among some, however, regarding the applicability of these results outside of climate
change. First, not all consensus messages can be accurately summarized as a proportion of sci-
entists who agree (and arguably, consensus about climate change should not be interpreted
that way either [8]). To this end, this study uses and compares two consensus messaging strate-
gies. The first highlights the same numerical percentage that is used by the climate change
GBM studies, 97%, here attributed to medical, as opposed to climate, scientists. Notably, stud-
ies that have used percentages lower than 97% to 98% have found weaker support [9,10]. The
proportion of medical scientists who believe that cannabis is an effective treatment for chronic
pain has not been established in the way that proportions of agreeing scientists on other issues
have [11] or the way that consensus estimates have been established on climate change [12],
but we include this condition for the purpose of comparison. We call this strategy the descrip-
tive norm/authority appeal as it takes the social norms approach to changing people’s behavior
by describing what others “think and do,” but, instead of describing lay publics’ social group
members [13], the message describes views of scientists who are epistemic authorities. The sec-
ond messaging strategy is accurate to the case of consensus surrounding medical cannabis use
for chronic pain—a message we developed from a report written by a consensus panel formed
by the National Academies of Sciences, Engineering, and Medicine (i.e., NASEM). One of the
findings of this report is that there is substantial evidence that cannabis is an effective treat-
ment for chronic pain in adult patients [14]. We call this the “evidence message” as it puts
more emphasis on the weight of the evidence evaluated by a panel of experts as opposed to
naming a proportion of agreeing scientists.
Although this “evidence message” design may be a closer description to what philosophers
of science would label scientific consensus, and it may be more in line with how consensus is
established [8], there is evidence that it may be a less effective communication strategy than
the descriptive norm/authority appeal. For example, Myers et al. [3] found that when agree-
ment among scientists was described but a numerical estimate was not used (e.g., “an over-
whelming majority of scientists have concluded...” vs. “97% of scientists have concluded”),
participants’ estimates of scientific consensus and other variables of interest did not signifi-
cantly differ from the control condition. Similarly, Landrum et al. [6], which used a message
highlighting a NASEM consensus panel on genetically modified organisms, also found no sig-
nificant difference between exposure to the consensus message and participants’ estimates of
agreement among scientists. Landrum and Slater [8] propose that messages may be more or
less successful depending on whether the question about estimating consensus is aligned with
the message design. That is, if the message describes a proportion of agreeing scientists, the
question asked to participants ought to be “what percent of scientists agree.” On the other
hand, if the message designed describes the process of consensus or a body of evidence, the
question asked to participants ought to be to what extent they agree that consensus exists or
that most of the evidence is supportive. To examine this, we randomly assigned participants in
the current study to receive either the numerical (“what percent of scientists. . .”) or the agree-
ment (“to what extent do you agree that. . .”) version of the consensus estimate question.
Another concern related to the applicability of the GBM outside of the climate studies
relates to the choice of mediating variables used in the model. The GBM includes three mediat-
ing variables, two of which are specific to climate change: belief that climate change is real and
belief that climate change is caused by humans. These first two mediating variables are
expected to influence the third mediating variable, worry about climate change. Although the
other two items cannot, the worry item can be modified for other contexts to represent percep-
tions of risk about the issue at hand. In the case of cannabis, we asked participants how much
risk they believe both medicinal and recreational cannabis pose to human health, safety, and/
or prosperity. In addition to this risk perception question, we also asked participants how safe
they feel using cannabis is and to what extent they personally believe that cannabis is effective
for the treatment of chronic pain. See Table 1.

Funding: The authors received no specific funding for this work.
Competing interests: The authors have declared that no competing interests exist.
Furthermore, in many of the attempts to implement the GBM to test for potential indirect
(and direct) effects of scientific consensus messaging, researchers have used only between-sub-
jects manipulations [6,15]. However, in the original and subsequent GBM papers by the origi-
nal authors [1,4], pre- and post-message exposure data are collected and the difference scores
are used in the mediation model. Although it may not be immediately clear from visualizations
of the GBM (e.g., Fig 1 [1]), condition (consensus message vs. control) is expected to predict
change in perceived scientific agreement between time 1 and time 2, which is expected to pre-
dict change in beliefs (climate change is real, climate change is human caused) between time 1
Table 1. Survey items.

Variable | Question Text | Scale
Believable¹ | This message is _______________. | 0 Not Believable to 100 Believable
Credible¹ | The source of this message, the National Academies of Sciences, Engineering, and Medicine, is _____________. | 0 Not Credible to 100 Very Credible
Deceptive¹ | The message is _______________. | 0 Not Deceptive to 100 Very Deceptive
Perceptions of Consensus
dns² | What percent of medical scientists do you believe agree that there is substantial evidence that marijuana/cannabis is effective for the treatment of chronic pain? | 0% to 100%
dnp² | What percent of the U.S. public do you believe agree that there is substantial evidence that marijuana/cannabis is effective for the treatment of chronic pain? | 0% to 100%
cns³ | To what extent do you agree or disagree that there is consensus among the medical scientific community that marijuana/cannabis is effective for the treatment of chronic pain? | 0 Strongly Disagree to 100 Strongly Agree
cnp³ | To what extent do you agree or disagree that there is consensus among the U.S. public that marijuana/cannabis is effective for the treatment of chronic pain? | 0 Strongly Disagree to 100 Strongly Agree
Attitudes
eff | To what extent do you, personally, believe that marijuana/cannabis is effective for the treatment of chronic pain? | 0 Not Effective to 100 Very Effective
safe | How safe do you, personally, believe using marijuana/cannabis is? | 0 Not at all safe to 100 Very safe
rmed | How much risk do you believe medical marijuana/cannabis poses to human health, safety, and/or prosperity? | 0 No risk at all to 100 Very high risk
rrec | How much risk do you believe recreational marijuana/cannabis poses to human health, safety, and/or prosperity? | 0 No risk at all to 100 Very high risk
Policy Support
ma21 | Medical marijuana/cannabis should be made legal for adults ages 21 and older. | 0 Strongly disagree to 100 Strongly agree
mall | Medical marijuana/cannabis should be made legal for people of all ages, including those under 18. | 0 Strongly disagree to 100 Strongly agree
ra21 | Recreational marijuana/cannabis should be made legal for adults ages 21 and older. | 0 Strongly disagree to 100 Strongly agree
rall | Recreational marijuana/cannabis should be made legal for people of all ages, including those under 18. | 0 Strongly disagree to 100 Strongly agree

¹ Items asked only at time 2 (after being presented with the message).
² Half of the sample were asked to estimate percentage of agreement at time 2.
³ Half of the sample were asked to what extent they agree or disagree that consensus exists at time 2.
https://doi.org/10.1371/journal.pone.0260342.t001
and time 2, etc. To be consistent with the original intention of the GBM and to test for differ-
ences between these two designs, we conducted a modified Solomon group design in which we
collected both pretest/posttest data and posttest-only data.
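The assignment logic of a modified Solomon group design can be sketched as a simple randomization scheme. This is an illustrative sketch only: the group labels, the seed, and the equal-probability split are our assumptions, not the study's actual recruitment procedure (the study's samples were 956 and 610 participants).

```python
import random

def assign_solomon(participant_ids, seed=42):
    """Sketch of a modified Solomon group design: each participant is
    randomized either to the pretest/posttest group (within-subjects)
    or to the posttest-only group (between-subjects). Message condition
    is assigned later, at time 2, in both groups."""
    rng = random.Random(seed)
    assignments = {}
    for pid in participant_ids:
        # Equal-probability split between the two samples (an assumption;
        # the study's actual group sizes were 956 vs. 610).
        assignments[pid] = rng.choice(["pretest_posttest", "posttest_only"])
    return assignments

groups = assign_solomon(range(200))
```

Because only one group receives the pretest, comparing the two groups' posttest responses isolates any pretest sensitization from the message manipulation itself.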
Current study
This study aims to contribute to our understanding of the efficacy of consensus messaging by
examining how researchers’ decisions about study design (e.g., whether the data collected are
cross-sectional or pretest/posttest, how variables are operationalized, how consensus is
approached and described [8]) influence study results. As stated earlier, we examine these questions using
cannabis as the context. We chose medicinal cannabis as the context for a few reasons. First,
scientific consensus has been established for this issue: a consensus panel convened by the
National Academies of Science, Engineering, and Medicine (NASEM) determined that there is
substantial evidence that cannabis is an effective treatment for chronic pain in adults [14]. Sec-
ond, like for other issues for which consensus messaging has been studied (e.g., climate
change, genetically modified organisms, vaccines), public policy arguably does not align with
the available scientific evidence; despite its promising effects, medical cannabis remains illegal
in many states (17 at the time of data collection) and about one-third of the U.S. public oppose
legalizing cannabis [16]. In fact, according to the consensus report, regulatory barriers—such
as the classification of cannabis as a Schedule I substance—hinder the advancement of research
on cannabis [14]. Using cannabis as an example, the current study tests and challenges aspects
of the Gateway Belief Model, which provides an explanation for how scientific consensus mes-
saging may improve public support for policies related to publicly controversial science.

Fig 1. Mean difference scores by condition and question for the pretest/posttest sample. Error bars represent 95% confidence intervals. There was approximately one week between pretest and posttest. ***p<.001, **p<.01, *p<.05 for two-tailed, single-sample t-tests.
https://doi.org/10.1371/journal.pone.0260342.g001
Methods
The study was approved by the Institutional Review Board at Texas Tech University as exempt
research involving human subjects (IRB2020-302). Data were collected from a national sample
of 1,558 U.S. adults recruited using Amazon’s Cloud Research Services tool at the end of June
2020 and beginning of July 2020. Prior to answering any study questions, participants read a
digital consent form that explained the study and the participants’ rights and provided contact
information for the IRB office and the principal investigator. Participants were then asked
whether they consented to participate in the study. Participants who selected yes continued on
and participants who said no were redirected to the end of the survey.
Participants ranged in age from 18 to 82 (M= 41.11, Median = 39, SD = 13.28). For self-
identified race and ethnicity, 9.3% of the sample reported identifying as Black or African
American, 6.6% reported identifying as Hispanic/Latino, and 8.9% reported identifying as
Asian; and 52.3% of the sample identified as female. The highest level of education earned for
8% of the sample was high school, around 31% of the sample completed at least some college
coursework, and around 60% had at least a college education. Furthermore, 48.16% of the sam-
ple indicated that they were somewhat to very liberal, 22.56% were moderate, and 29.28% were
somewhat to very conservative.
We conducted our survey experiment using a modified Solomon group design. This design
is used to test for pretest sensitization, which occurs when participants’ posttest ratings are
influenced by exposure to pretest questions, but also to test for any differential condition
effects based on whether participants completed the pretest [17]. We included a one-week gap
between the pretest (time 1) and the experiment with posttest (time 2).
During the time 1 pretest, 956 participants (the pretest/posttest sample) were introduced to
the topic of medical cannabis with the following prompt, which was adapted from content on
the Mayo Clinic’s website [18]. (While we exclusively use the term cannabis in this manuscript,
we intentionally deviate slightly in the experimental instrument. The term marijuana is
commonly used within the context of legislation and regulatory guidelines (Romi et al., 2021).
Therefore, in an effort toward ecological validity, we use the terms cannabis and marijuana
interchangeably within the experimental instrument, and any use of either term appears as it
was presented to the participants.)
Medical marijuana—also called medical cannabis—is a term for derivatives of the cannabis
sativa plant that are thought to relieve serious and chronic symptoms. Some states allow mari-
juana use for medical purposes. Federal law regulating marijuana supersedes state laws.
Because of this, people may still be arrested and charged with possession in states where mari-
juana for medical use is legal. In this study we will ask you about your views towards the use
of both recreational and medical marijuana.
Following the prompt, participants were asked a series of questions about their perceptions
of consensus surrounding the use of medical and recreational cannabis both from medical sci-
entists and the U.S. public, their own attitudes towards recreational and medical cannabis, and
their support (or lack thereof) for legalization policies. Participants who completed the time 1
pretest survey were marked as eligible to sign up for the time 2 posttest survey a week later,
and 935 returning participants completed this posttest survey. During the same time period,
610 new participants (posttest-only sample) completed the posttest-only survey at time 2,
which was identical to the posttest survey. To ensure we would not have duplicate participants,
those who took the pretest/posttest surveys were marked ineligible to sign up for the posttest-
only survey and vice versa.
At time 2, participants (from both the pretest/posttest sample and posttest-only samples)
were randomly assigned to one of three message conditions: (1) a consensus message stating
that there is substantial evidence that cannabis is effective for the treatment of chronic pain in
adults (i.e., evidence message), (2) a descriptive norm/authority consensus message stating
that 97% of medical scientists agree that cannabis is effective for the treatment of chronic pain
in adults (i.e., 97% message), and (3) a control message stating that researchers are investigat-
ing the potential uses of Cannabidiols (CBD), a compound in cannabis that does not have psy-
choactive effects (i.e., control message). All three messages were attributed to the National
Academies (NASEM). These messages are available on our project site on osf.io (https://osf.io/
w8u6k/). Following exposure to the message, as a manipulation check, we asked participants
which of the following statements best describes the main point of the message they just saw:
(a) There is scientific consensus that cannabis is effective for the treatment of chronic pain, (b)
Research on the effectiveness of cannabis is still ongoing, (c) I don’t know, or (d) I prefer not
to answer. Overall, participants were accurate at identifying the main point of the message.
Approximately 87% of the control participants chose option b (i.e., research is still ongoing),
and 93% of the descriptive norm condition participants and 89% of the evidence message con-
dition participants correctly chose option a (i.e., scientific consensus exists).
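The manipulation-check accuracy reported above is simply the share of a condition's participants who chose the option matching their message. A minimal sketch, with invented response data (only the ~87% control-condition figure is taken from the text):

```python
def manipulation_check_accuracy(responses, correct_option):
    """Proportion of participants choosing the option that matches their
    condition's message (option 'a' for the two consensus messages,
    option 'b' for the control message)."""
    return sum(r == correct_option for r in responses) / len(responses)

# Hypothetical control-condition responses; roughly 87% chose "b" in the study.
control_responses = ["b"] * 87 + ["a"] * 10 + ["c"] * 3
accuracy = manipulation_check_accuracy(control_responses, "b")  # 0.87
```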
Furthermore, at time 2, we randomly assigned participants to answer one of two question
formats about their perceptions of consensus. Half of the sample was asked to estimate what
percent of medical scientists and what percent of the U.S. public agree that cannabis is effective
for the treatment of chronic pain (on scales from 0 to 100%). The other half of the sample was
asked to what extent they agree or disagree that there is consensus among medical scientists
and among the U.S. public that cannabis is effective for the treatment of chronic pain (on
scales of 0 = strongly disagree to 100 = strongly agree). Recall that during the pretest at time 1, par-
ticipants were asked to answer both questions.
See our project page on the Open Science Framework https://osf.io/w8u6k/ for data files, R
script, stimuli, and survey questions.
Results
Test for pretest sensitization effects on the condition manipulation
Following the recommendations of Braver and Braver [19] for analyzing Solomon group
designs, we began by determining whether an interaction effect exists between our condition
manipulation and the sample (pre/posttest sample, posttest-only sample). The presence of a
significant interaction would indicate both that pre-sensitization exists and that it likely mod-
erates any effects of the condition manipulation. Because we had several outcome variables, we
first conducted a multivariate analysis of variance (MANOVA); because we split the sample for
the perceptions of consensus items, we left these four variables out of the MANOVA. We
found a significant main effect of our condition manipulation (Pillai’s Trace = 0.053, approxi-
mate F= 3.62, p<.001). There was also a significant main effect of sample (Pillai’s Trace =
0.021, approximate F= 2.86, p= .001), suggesting pre-test sensitization exists. Importantly,
there was a significant interaction between sample and condition (Pillai’s Trace = 0.024,
approximate F= 1.64, p= .030), meaning that the effects of our experimental manipulation are
likely conditional on whether participants answered pretest questions.
Because we found evidence that pre-test sensitization exists and it likely affects the condi-
tion manipulation, we followed up on the significant main effect of condition using simple
effects tests on the pre/posttest sample (within-subject effects) looking for a difference between
time 1 and time 2 as well as simple effects of the condition manipulation on the posttest-only
sample [19]. Descriptive statistics for the outcome variables for each sample are reported in the
S1 Table. Full analyses results, including ANOVA tables, are available in the supplementary
materials.
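The analysis plan above follows Braver and Braver's decision logic for Solomon designs. A simplified sketch of that branching (the alpha threshold and branch labels are illustrative; the full flowchart has additional steps):

```python
def solomon_analysis_path(p_interaction, alpha=0.05):
    """If the sample-by-condition interaction is significant, pretest
    sensitization likely moderates the manipulation, so each sample is
    analyzed separately via simple effects; otherwise the samples could
    be pooled. (Simplified from Braver & Braver's full procedure.)"""
    if p_interaction < alpha:
        return ["simple effects within pretest/posttest sample",
                "simple effects within posttest-only sample"]
    return ["pooled analysis across samples"]

# The interaction here was significant (p = .030), so the samples are analyzed separately.
path = solomon_analysis_path(0.030)
```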
Test for within-subject effects of condition manipulation—a difference
score analysis
Our study design allowed us to examine whether the differences between individuals pre- and
post-message exposure ratings (i.e., their “difference scores”) varied significantly from 0—in
other words, were there significant increases or decreases in the outcome variables between
time 1 and time 2. We calculated difference scores by subtracting participants’ pretest ratings
from their posttest ratings (posttest − pretest = difference score) and then conducted single-
sample t-tests. (We conducted single-sample t-tests on the difference scores, as opposed to
paired-samples t-tests, because the visual depiction (see Fig 1) is simpler—i.e., there are fewer
bars to keep track of. We include the analysis using paired-samples t-tests in the supplementary
materials on OSF; note that the results do not differ based on which version of t-test we use.)
These analyses are only of the pretest/posttest sample. Fig 1 shows the mean differ-
ences by question and message condition.
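The difference-score analysis can be sketched in a few lines. The ratings below are invented for illustration, and a complete analysis would additionally compute a p-value from the t distribution with n − 1 degrees of freedom:

```python
from math import sqrt
from statistics import mean, stdev

def difference_score_t(pretest, posttest):
    """One-sample t-test on difference scores (posttest - pretest),
    testing whether the mean change differs from 0. Numerically
    equivalent to a paired-samples t-test on the raw ratings."""
    diffs = [post - pre for pre, post in zip(pretest, posttest)]
    d_bar = mean(diffs)
    t = d_bar / (stdev(diffs) / sqrt(len(diffs)))
    return d_bar, t

# Hypothetical 0-100 ratings from four participants at time 1 and time 2
d_bar, t = difference_score_t([50, 60, 70, 80], [56, 63, 76, 81])  # d_bar = 4.0
```

A significantly negative mean difference on an item like public consensus, as found here, indicates movement opposite to the message's intent.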
Participants often moved in the non-expected direction between the pretest and posttest
surveys. For instance, in all three conditions, participants said they agreed less that cannabis is
effective for the treatment of chronic pain after they were exposed to the messages, even
though two of the messages stated that there is consensus that cannabis is an effective treat-
ment. Similarly, in all three conditions, participants expected a lower level of consensus among
the U.S. public on the effectiveness of cannabis to treat chronic pain than before message expo-
sure. Notably, the messages did not mention public views, only scientific ones. Though some
may speculate that this move in the less desirable direction among participants could be boo-
merang or reactance effects [20,21], an alternative explanation is that this is the result of pre-
test sensitization, especially given that this shift was seen in the control condition as well. In
this case, discussing the issue incidentally may have made medicinal cannabis seem like a
more controversial issue among the public than study participants originally thought.
We did find two significant expected effects for the condition in which participants were
exposed to the 97% message: participants in this condition rated recreational cannabis as less
risky than they did at time 1 (although they didn’t shift on medical cannabis which is what the
message was about), and they increased the percent of scientists presumed to agree from time
1 to time 2. See Fig 1 and S2 Table.
Test for between-subject effects of condition manipulation
Because we initially found a significant interaction between our condition manipulation and
the sample from the MANOVA, which indicates that pre-sensitization exists and that it likely
moderates any effects of the condition manipulation, we analyzed our pretest/posttest sample
and posttest-only sample separately [19]. To analyze our posttest sample, we followed up on
the MANOVA with one-way ANOVAs on each of the dependent variables (see Table 1). In
this case, we were specifically looking for differences between the consensus message
conditions (the 97% message, the evidence message) and the control message. See Fig 2 and S3
Table.

Fig 2. Mean rating for each item by condition and question for the posttest-only sample. Error bars represent standard error. Significant differences between message conditions (determined by Tukey tests) are shown. ***p<.001, **p<.01, *p<.05.
https://doi.org/10.1371/journal.pone.0260342.g002
Unlike the results from the pretest/posttest sample, the results from the posttest-only sam-
ple are more supportive of the hypothesis that there are effects of consensus messaging—at
least when it comes to messages that describe a descriptive norm amongst authorities (i.e., 97%
of medical scientists agree). Indeed, there were significant differences between the 97% mes-
sage and the control message for four key variables: percent of scientists perceived to agree,
belief that cannabis is an effective treatment for chronic pain, and support for legalizing medi-
cal and recreational cannabis for those aged 21 years and older.
Notably, these differences between the control condition and consensus message condition
were not significant when the consensus message described substantial evidence (as opposed
to a proportion of agreeing scientists). And in many cases, participants’ item ratings in the
97% message condition differed significantly from those in the evidence message condition.
This leads to questions about whether participants are actually influenced by the consensus
aspect of the message or some other characteristic that differs between the two types of consen-
sus messages examined in this study (e.g., the presence of a number, a high percentage).
Aligning the message strategy with the measurement design
Sometimes the relationship between exposure to a consensus message and participants’ esti-
mates of scientific consensus is not statistically significant when non-numeric messages are
used [3,8,22]. Landrum and Slater [8] hypothesize that this may be due to a lack of alignment
between the type of message used and the way the participants’ perceptions of scientific con-
sensus is measured. For example, Bolsen and Druckman [23] found a significant relationship
between exposure to a process-based consensus message (describing how consensus was
formed from a National Academies of Sciences panel) and their measure of perception of sci-
entific consensus (i.e., whether most scientists agree). We designed this study to answer this
question by randomly assigning participants at posttest to two different forms of measurement
for this question: one that asked them to estimate the percent of scientists who agree and one
that asked them how much they agreed or disagreed that there is scientific consensus (see
Table 1). We found mixed evidence regarding this hypothesis. See Table 2.
Change in perceptions of scientific consensus between time 1 and time 2. First, we
looked at change in perceptions of scientific consensus among the pretest/posttest sample. A 3
(Message Condition) by 2 (Measure) ANOVA suggests that there is a main effect of condition,
F(2, 882) = 6.88, p = .001, ηp² = 0.02, but not an effect of Measure, F(1, 882) = 1.54, p = .215,
ηp² = 0.002, or an interaction effect between condition and measure, F(2, 882) = 1.87, p = .155,
ηp² = 0.004. Follow-up simple GLM analyses show that the relationships between condition
Table 2. Is there a significant relationship between condition manipulation and participants’ perception of consensus based on consensus message strategy used and measurement?

                                               97% vs. Control   Evidence vs. Control
Pretest/posttest sample
  ΔEstimated percent of scientists who agree   Yes               No
  ΔAgreement that consensus exists             Yes               Yes
Posttest-only sample
  Estimated percent of scientists who agree    Yes               No
  Agreement that consensus exists              No                No

https://doi.org/10.1371/journal.pone.0260342.t002
PLOS ONE
Consensus on medicinal cannabis
PLOS ONE | https://doi.org/10.1371/journal.pone.0260342 November 29, 2021 9 / 17
(consensus vs. control) and participants’ perception of consensus are as follows. The condition manipulation—the 97% message compared to the control—significantly predicts both participants’ estimates of the percentage of scientists who agree (b = 5.76, p = .005) and participants’ agreement that consensus exists (b = 6.08, p = .022). However, the condition manipulation—the evidence message compared to the control—predicts only participants’ agreement that consensus exists (b = 4.88, p = .028) and not participants’ estimates of the percentage of scientists who agree (b = -0.70, p = .732). This lends some support to the hypothesis that the measurement must be aligned with the message, but this seems to be true only for the evidence message.
Perceptions of scientific consensus between conditions. Next, we looked at participants’ perceptions of scientific consensus at time 2 among the posttest-only sample. Unlike for the within-subjects data, a 3 (Message Condition) by 2 (Measure) ANOVA shows a significant interaction between message condition and measure, F(2, 604) = 6.65, p = .001, ηp² = 0.02, in addition to the main effect of condition, F(2, 604) = 12.64, p < .001, ηp² = 0.04. There was no significant main effect of measure, F(1, 604) = 1.27, p = .260, ηp² = 0.002. Follow-up simple GLM analyses show that the relationships between condition manipulation (consensus vs. control) and participants’ perceptions of consensus are as follows. The condition manipulation—the 97% message compared to the control—significantly predicts participants’ estimates of the percentage of scientists who agree (b = 11.91, p < .001) but not participants’ agreement that consensus exists (b = 4.04, p = .169). The condition manipulation—the evidence message compared to the control—significantly predicts neither participants’ agreement that consensus exists (b = 1.77, p = .540) nor participants’ estimates of the percentage of scientists who agree (b = -5.65, p = .074). In this case, the alignment appears to matter for the 97% message, but not for the evidence message.
Conceptually replicating the Gateway Belief Model
To test the hypothesis by the GBM that consensus messaging indirectly influences policy sup-
port by correcting people’s estimates of scientific consensus and shifting their attitudes, we
conducted mediation analysis using PROCESS (model 6 [24]). As stated earlier, the mediators
in the GBM must be altered for different topics. The GBM for climate change includes three mediators after the estimate of the proportion of agreeing scientists—belief in climate change, belief in human causation, and worry about climate change—two of which are not applicable to other issues like genetically modified organisms or cannabis. We used participants’ perceptions of risk for medical cannabis. An alternative to this model using our data could use “effectiveness” in place of the risk perception. However, as the change in effectiveness was negative for each of the conditions, this did not make sense to test in the current study. See Fig 3.
Furthermore, consistent with van der Linden et al. [1], each of the mediators and the outcome
variable are difference scores (time 2 –time 1). For the model shown, the only effect we were
able to replicate was the effect of message condition on the change in the estimated percent of
agreeing scientists. No other paths (direct or indirect) were significant. For the full results, see
the supplementary materials.
We also ran the model using the posttest-only data. Importantly, this model describes the relationships among the variables measured at time 2, not the relationships among change scores as the original GBM specified. In the case of the posttest-only data, the model
worked as predicted. In terms of direct effects, compared to the control condition, the descrip-
tive norm condition is related to a greater proportion of medical scientists assumed to agree,
which is related to lower risk perceptions for medical cannabis, which is then related to greater
support for legalization policies. See Fig 4. Furthermore, the indirect effect of condition on
support for legalization of medical cannabis through perceptions of the percent of medical sci-
entists who agree and risk perceptions was significant (b = 1.44, 95% CI [0.37, 3.06]).
Discussion
This study aimed to contribute to our understanding of the efficacy of consensus messaging by
examining how researchers’ decisions about study design might influence study results, using
medicinal cannabis as the context. We began by testing for direct experimental effects on
each of the outcome variables, including participants’ beliefs about and estimations of
Fig 3. Modified version of the GBM for medical cannabis using pretest/posttest data (change scores). All shown
paths were tested but the only significant path was from the message manipulation to the change in estimated percent
of agreeing scientists. Note that condition only reflects the descriptive norm/authority message versus the control
message and does not include the evidence message.
https://doi.org/10.1371/journal.pone.0260342.g003
Fig 4. Modified version of the GBM for medical cannabis using the posttest-only data. Note that condition only
reflects the descriptive norm/authority message versus the control message and does not include the evidence message.
https://doi.org/10.1371/journal.pone.0260342.g004
consensus, beliefs about cannabis’s efficacy for treating chronic pain, perceptions of risk asso-
ciated with using medicinal and recreational cannabis, and support for policies legalizing their
use. Then, we examined how researchers’ decisions about study design (e.g., whether the data collected are cross-sectional or pretest/posttest, how variables are operationalized, and how consensus is approached and described [8]) influenced study results. Finally, we tested two models aiming to conceptually replicate the GBM to determine whether the predicted indirect path from consensus messaging to policy support is present.
Experimental effects of consensus messaging
First, we aimed to test for direct experimental effects on each of the outcome variables. Because
of our study design, we were able to test this in two ways: whether participants changed their
ratings of the outcome variables after being exposed to the messages (i.e., difference score anal-
ysis) and whether participants who were exposed to the consensus messages (as opposed to the
control messages) had different ratings of the outcome variables (i.e., between conditions anal-
ysis). Notably, we found evidence of pretest sensitization and evidence suggesting that pretest
sensitization influenced the effect of the condition manipulation. Therefore, we needed to ana-
lyze the two samples separately [19].
Differences in ratings before and after exposure to the consensus message. These spe-
cific consensus messages about the effectiveness of medical cannabis to treat chronic pain
should have influenced participants to increase their perceptions of scientific consensus,
potentially decrease their perceptions of risk associated with medical cannabis, increase their
beliefs that medical cannabis is effective for the treatment of chronic pain, and potentially
increase support for the legalization policies. However, we found that in many cases, partici-
pants moved in the non-hypothesized direction. For example, in all three conditions, partici-
pants shifted their ratings to agree less that medical cannabis is an effective treatment for
chronic pain. Although pretest sensitization is typically assumed to increase participants’ awareness of and/or responsiveness to the condition manipulation [19], here participants often shifted in the opposite direction. One possibility, as we discussed earlier, is that
participants may have begun to wonder if cannabis is a more controversial issue than they
originally thought because we asked them these questions on more than one occasion.
Differences in ratings between conditions for posttest-only sample. Amongst the post-
test-only data, we generally found that the 97% message appears to influence participants in
the expected ways relative to the control. That is, compared to the control condition, partici-
pants in the 97% message condition estimated a larger proportion of agreeing scientists on
average, were more likely to agree that cannabis is an effective treatment for chronic pain, and
were more likely to support legalization for both medical and recreational cannabis for adults
(ages 21 and older). Although this was the case for the 97% message compared to the control,
this was not true for the “evidence” message compared to the control. We discuss the implica-
tions of this difference further below.
Effects of study design decisions
The second aim of this research was to contribute to theory by examining how researchers’
decisions about study design in the consensus messaging literature influence study results. We
already discussed the differences associated with pretest/posttest data compared to cross-sec-
tional data. We now discuss some of the other design decisions that appeared to influence the
results.
The descriptive norms/authority approach (97% message) appeared to be more influen-
tial than the description of the weight of evidence. In the introduction, we discussed two
approaches to describing scientific consensus that we would test in this study: the descriptive
norms/authority approach (i.e., the 97% message) and the evidence-based approach (i.e., the
evidence message). We mentioned that prior work suggests the descriptive norms/authority approach may be a more effective persuasion strategy even if it is a less accurate representation of what scientific consensus is; our results support those findings. Even when the descriptive norm/authority message—the 97% message—did not vary
significantly from the control condition, it often varied significantly from the evidence mes-
sage condition.
Consensus messages influenced numerical estimates of consensus but not agreement
that consensus exists. One hypothesis from Landrum and Slater [8] was that the reason
non-numeric messages (e.g., messages that stress the evidence or describe agreement without
specific numbers) may not predict perceptions of consensus is that the question is often not
aligned with the message and needs to be so for the treatment to be effective. That is, if a non-numeric message is used, then participants need to be asked to express their perception of scientific consensus in a non-numeric form (e.g., to what extent they agree that scientific consensus exists) rather than to estimate a numeric proportion of scientists in agreement.
However, we found mixed evidence regarding this hypothesis. See Table 2. Future research
should continue to investigate this.
Conceptually replicating the Gateway Belief Model
Finally, we aimed to test a conceptual replication of the GBM for determining whether the pre-
dicted indirect path from consensus messaging to policy support is present. Interestingly, we
failed to replicate the GBM when we constructed the model using difference scores as is con-
sistent with the original GBM studies [1,4]. However, we did find the expected relationships
between variables when we used cross-sectional data. The cross-sectional model predicts relationships among the ratings at time 2, as opposed to predicting the change in participants’ ratings between time 1 and time 2 (i.e., difference scores). One reason that the first model (using
difference scores) didn’t show the hypothesized relationships may be related to the presence of
the pre-sensitization effects and the conditional effects of our experimental manipulation
based on the pre-sensitization.
Limitations
Several limitations to this study need to be taken into consideration when interpreting the
results. First, we collected the data via Amazon’s Cloud Research platform. Although care was
taken to attempt to get a diverse sample by requesting groups of participants based on age and
religiosity, the sample is not nationally representative. This panel is generally known to skew more politically liberal (especially amongst older participants [25]), and this is true of
our sample. However, although there are some differences between the political ideology
groups in support of legalization policies, the strongest demographic predictor of support for
cannabis legalization that has been reported is generation [16]. Differences in support for can-
nabis legalization based on gender, race/ethnicity, or education have not been found by prior
nationally representative surveys [16].
A second limitation is that the data for this study were collected during the summer of 2020, when many individuals were quarantining due to the COVID-19 pandemic. Some mainstream news coverage during the time of data collection suggested that cannabis may provide treatment for some of the side effects of COVID-19 [26], which could have further increased support for cannabis legalization. Indeed, cannabis sales escalated across the U.S. and Canada
during this period [27,28]; and in some states, cannabis dispensaries were considered
“essential businesses” [29]. However, other news coverage suggested that bills for cannabis
legalization were being sidelined while local and state governments were focused on the pan-
demic [30].
A third limitation is that our control condition may not have functioned as we had
intended. We chose to make the control condition about ongoing research related to CBD
because we wanted a control message that was tangentially related to cannabis but was not a
consensus message and was not about medical uses of cannabis. We expected that there would
be no change between pretest and posttest for this control condition (i.e., the CBD message). According to the FDA, marijuana is different from CBD [31]. CBD is one compound in the cannabis plant, is not psychoactive (cf. tetrahydrocannabinol, or THC), and is marketed in an array of health and wellness products in places where cannabis remains illegal [31]. Most participants (87%) who were randomly assigned the control condition message about CBD understood that this message was NOT stating that scientific consensus exists, and they chose the option that indicated that research on the effectiveness of cannabis is still ongoing. Although the control condition message mentioned that “researchers are still investigating . . . ,” the topic of the investigation was CBD, and the participants may not have made a distinction between the two.
In retrospect, we should have included a response option that specifically mentioned CBD and
not cannabis. Importantly, though, we would still expect the message to work in a similar way
as if it were clearly not about cannabis. Since participants generally seemed to understand that
the message meant no consensus exists (because research is still ongoing), we would not expect
change between pretest and posttest on our outcome variables. We would have only expected
negative change for the control condition if the message had stated that there is scientific con-
sensus that cannabis is NOT effective or if there was pretest sensitization. We have no reason
to believe our results were due to the former as the manipulation check item suggests partici-
pants understood the purpose of the message. Thus, we do not believe our results were nega-
tively affected by this potential issue.
Lastly, it is also worthwhile to consider that many of the consensus messaging studies have
focused on issues for which public support is much lower than it is for cannabis. According to
the Pew Research Center, in 2019 only about 8% of U.S. adults believed that cannabis should be kept illegal in all circumstances (medical and recreational); in contrast, 59% said it should be legal in all circumstances and 32% said it should be legal only for medical use [16].
Overall, in our data, support for medical cannabis was very high. At pretest, before seeing any
consensus message, participants assumed an average of 72% of scientists (SD = 18.19, median = 75%) agreed that cannabis is effective for the treatment of chronic pain. They strongly agreed, themselves, that cannabis is an effective treatment (M = 77.25 out of 100, SD = 21.17, median = 81). They agreed that cannabis is safe (M = 68.23 out of 100, SD = 28.3, median = 75). And they perceived the risk of medical cannabis to be low (M = 31.61 out of 100, SD = 29.16, median = 20). Furthermore, support for legalization of medical cannabis for adults (21 and older) was high (M = 80.73 out of 100, SD = 26.51, median = 91).
be no need to use consensus campaigns to increase public support on this issue. We did see
less support for legalizing medical cannabis for people of all ages, including children (M = 53.48 out of 100, SD = 36.38, median = 60). So, future research could consider testing messages specifically discussing the effectiveness of medical cannabis among younger populations.
Conclusions
This study provides more evidence that study design decisions influence the extent to which
exposure to a consensus message influences public perceptions and indirectly influences policy
support (as posed by the Gateway Belief Model). One such decision is the way in which
consensus messages are described; our study adds to the literature suggesting that the descrip-
tive norm/authority appeal strategy is more persuasive than describing the existence of sub-
stantial evidence. However, as Landrum and Slater [8] discussed, there are philosophical issues
with treating the descriptive norm/authority appeal strategy as a “consensus” message as well
as practical issues (e.g., there is not always an accurate measurement of the proportion of
agreeing scientists).
One obvious question might now be: what does the public know or think about scientific
consensus? While less of an issue when using a specific percentage of scientists, as in the “97%”
treatments for climate change, messages simply stating that there is a scientific consensus on
an issue rely on the public understanding what constitutes such a consensus. However, the concept of scientific consensus appears not to be broadly understood. It may be that the oversimplification of consensus messaging overlooks the complexities that differentiate consensus from mere agreement. If this is true, a connection may exist between this vagueness in definition and the public belief that consensus is manufactured and a product of groupthink [32,
33]. Further, sufficient understanding of the epistemic significance of the term “consensus”
might not be attainable simply through learning a definition of the term [32]. In a large inter-
view study, Slater et al. [33] find that few members of the general public are aware of the con-
cept of scientific consensus at all, and that those who are familiar have a limited and
unsophisticated grasp of it. This brings us to another apparent dilemma for consensus-framed
science communication and particularly the use of the GBM for communicating science about
which a percentage consensus message is not available: the public’s limited understanding of
the subject is likely to make messaging around it ineffective.
Supporting information
S1 Table. Descriptive statistics for each of the outcome variables for the pretest/posttest sample and the posttest-only sample.
(DOCX)
S2 Table. Descriptives, p values, and Cohen’s d for single-sample t-tests (pretest/posttest sample).
(DOCX)
S3 Table. Between conditions effects for posttest only sample.
(DOCX)
Author Contributions
Conceptualization: Asheley R. Landrum, Brady Davis, Joanna Huxster.
Data curation: Asheley R. Landrum.
Formal analysis: Asheley R. Landrum.
Methodology: Asheley R. Landrum, Joanna Huxster.
Project administration: Asheley R. Landrum.
Supervision: Asheley R. Landrum.
Visualization: Asheley R. Landrum.
Writing – original draft: Asheley R. Landrum, Brady Davis.
Writing – review & editing: Asheley R. Landrum, Joanna Huxster, Heather Carrasco.
References
1. van der Linden SL, Leiserowitz AA, Feinberg GD, Maibach EW. The Scientific Consensus on Climate
Change as a Gateway Belief: Experimental Evidence. PLoS ONE. 2015; 10(2): e0118489. https://doi.
org/10.1371/journal.pone.0118489 PMID: 25714347
2. Kerr JR, Wilson MS. Perceptions of scientific consensus do not predict later beliefs about the reality of
climate change: A test of the gateway belief model using cross-lagged panel analysis. Journal of Envi-
ronmental Psychology. 2018a; 59: 107–110. https://doi.org/10.1016/j.jenvp.2018.08.012
3. Myers TA, Maibach E, Peters E, Leiserowitz A. Simple Messages Help Set the Record Straight about Scientific Agreement on Human-Caused Climate Change: The Results of Two Experiments. PLoS ONE. 2015; 10(3): e0120985. https://doi.org/10.1371/journal.pone.0120985 PMID: 25812121
4. van der Linden SL, Leiserowitz AA, Maibach EW. The gateway belief model: A large-scale replication. Journal of Environmental Psychology. 2019; 62: 49–58. https://doi.org/10.1016/j.jenvp.2019.01.009
5. Kerr JR, Wilson MS. Changes in perceived scientific consensus shift beliefs about climate change and
GM food safety. PLoS ONE. 2018b; 13(7), e0200295. https://doi.org/10.1371/journal.pone.0200295
PMID: 29979762
6. Landrum AR, Hallman WK, Jamieson KH. Examining the impact of expert voices: Communicating the scientific consensus on genetically modified organisms. Environmental Communication. 2019; 13(1): 51–70. https://doi.org/10.1080/17524032.2018.1502201
7. van der Linden SL, Clarke CE, Maibach EW. Highlighting consensus among medical scientists increases public support for vaccines: Evidence from a randomized experiment. BMC Public Health. 2015; 15(1): 1207. https://doi.org/10.1186/s12889-015-2541-4 PMID: 26635296
8. Landrum AR, Slater MH. Open Questions in Scientific Consensus Messaging Research. Environmental
Communication. 2020; 14(8): 1033–1046. https://doi.org/10.1080/17524032.2020.1776746
9. Aklin M, Urpelainen J. Perceptions of scientific dissent undermine public support for environmental pol-
icy. Environmental Science & Policy. 2014; 38: 173–177. https://doi.org/10.1016/j.envsci.2013.10.006
10. Chinn S, Lane DS, Hart PS. In consensus we trust? Persuasive effects of scientific consensus commu-
nication. Public Understanding of Science. 2018; 27(7): 807–823. https://doi.org/10.1177/
0963662518791094 PMID: 30058947
11. Pew Research Center. Public and Scientists’ Views on Science and Society. 2015 January 29 [Cited
2021 July 5]. Available from: https://www.pewresearch.org/science/2015/01/29/public-and-scientists-
views-on-science-and-society/
12. Cook J, Oreskes N, Doran PT, Anderegg WRL, Verheggen B, Maibach EW, et al. Consensus on con-
sensus: A synthesis of consensus estimates on human-caused global warming. Environmental
Research Letters. 2016; 11(4): 048002. https://doi.org/10.1088/1748-9326/11/4/048002
13. Berkowitz AD. An Overview of the Social Norms Approach. In Costigan Lederman L and Stewart L, edi-
tors. Changing the Culture of College Drinking: A Socially Situated Health Communication Campaign.
Hampton Press; 2005.
14. NASEM. The Health Effects of Cannabis and Cannabinoids: The Current State of Evidence and Rec-
ommendations for Research. The National Academies of Sciences, Engineering and Medicine. 2017.
https://doi.org/10.17226/24625 PMID: 28182367
15. Dixon G. Applying the Gateway Belief Model to Genetically Modified Food Perceptions: New Insights
and Additional Questions. Journal of Communication. 2016; 66(6): 888–908. https://doi.org/10.1111/
jcom.12260
16. Pew Research Center. Two-thirds of Americans support marijuana legalization. FACTTANK: News in
the Numbers. 2019 November 14 [Cited 2021 July 5]. Available from: https://www.pewresearch.org/
fact-tank/2019/11/14/americans-support-marijuana-legalization/
17. Navarro M, Siegel J. Solomon Four-Group Design. In Frey BB, Editor. The SAGE Encyclopedia of Edu-
cation Research, Measurement, and Evaluation. SAGE Publications, Inc., 2018. Available from:
https://sk.sagepub.com/reference/sage-encyclopedia-of-educational-research-measurement-
evaluation/i19335.xml
18. Mayo Clinic. What you can expect from medical marijuana. Mayo Clinic. [Cited 30 March 2021]. Avail-
able from: https://www.mayoclinic.org/healthy-lifestyle/consumer-health/in-depth/medical-marijuana/
art-20137855
19. Braver MW, Braver SL. Statistical treatment of the Solomon four-group design: A meta-analytic
approach. Psychological Bulletin. 1988; 104(1): 150–154. https://doi.org/10.1037/0033-2909.104.1.
150
20. Chinn S, Hart PS. Climate Change Consensus Messages Cause Reactance. Environmental Communi-
cation. 2021. https://doi.org/10.1080/17524032.2020.1805344 PMID: 33688373
21. Ma Y, Dixon G, Hmielowski JD. Psychological Reactance from Reading Basic Facts on Climate Change: The Role of Prior Views and Political Identification. Environmental Communication. 2019; 13(1): 71–86. https://doi.org/10.1080/17524032.2018.1548369
22. Deryugina T, Shurchkov O. The Effect of Information Provision on Public Consensus about Climate
Change. PLoS ONE. 2016; 11(4), e0151469. https://doi.org/10.1371/journal.pone.0151469 PMID:
27064486
23. Bolsen T, Druckman JN. Do Partisanship and Politicization Undermine the Impact of a Scientific Con-
sensus Message about Climate Change? Group Processes & Intergroup Relations. 2018; 21(3): 389–
402. https://doi.org/10.1177/1368430217737855
24. Hayes AF. Introduction to Mediation, Moderation, and Conditional Process Analysis, Second Edition: A
Regression-Based Approach. Guilford Publications; 2017.
25. Huff C, Tingley D. “Who are these people?” Evaluating the demographic characteristics and political
preferences of MTurk survey respondents. Research & Politics. 2015; 2(3): 2053168015604648.
https://doi.org/10.1177/2053168015604648
26. Earlenbaugh E. CBD For Coronavirus? New Study Adds Evidence for Cannabis as Covid-19 Treat-
ment. Forbes. 2020 July 15 [Cited 2021 July 5]. Available from: https://www.forbes.com/sites/
emilyearlenbaugh/2020/07/15/cbd-for-coronavirus-new-study-adds-evidence-for-cannabis-as-covid-
19-treatment/
27. Hughes T. Coronavirus, quarantine: Legal marijuana shops see spike in pot sales. USA Today. 2020
March 17 [Cited 2021 July 5] Available from: https://www.usatoday.com/story/news/nation/2020/03/17/
coronavirus-fears-prompt-americans-buy-more-legal-marijuana/5067578002/
28. Khan S. As customers hoard pot brownies, North American weed firms see lockdown boost. Reuters.
2020 March 24 [Cited 2021 July 5] Available from: https://www.reuters.com/article/us-health-
coronavirus-cannabis-idUSKBN21B2DC
29. Booker B. Amid Coronavirus, San Francisco, New York, Deem Marijuana Businesses “Essential.” NPR. 2020 March 18 [Cited 2021 July 5]. Available from: https://www.npr.org/2020/03/18/817779558/amid-coronavirus-san-francisco-new-york-deem-marijuana-businesses-essential
30. Zhang M. Pandemic upends pot legalization. POLITICO. 2020 April 8 [Cited 2021 July 5]. Available
from: https://www.politico.com/news/2020/04/08/coronavirus-pandemic-upends-pot-legalization-
174073
31. FDA. What You Need to Know (and What We are Working to Find Out) About Products Containing Can-
nabis or Cannabis-derived Compounds, Including CBD. 2020 March 5 [cited 2021 October 18]. Avail-
able from: https://www.fda.gov/consumers/consumer-updates/what-you-need-know-and-what-were-
working-find-out-about-products-containing-cannabis-or-cannabis
32. Intemann K. Who Needs Consensus Anyway? Addressing Manufactured Doubt and Increasing Public
Trust in Climate Science. Public Affairs Quarterly. 2017; 31(3): 189–208. Available from: https://www.
jstor.org/stable/44732792
33. Slater MH, Huxster JK, Scholfield E. Public Conceptions of Scientific Consensus. https://doi.org/10.
31219/osf.io/yehsp [Preprint]. 2021 [cited 2021 November 11]. Available from: https://osf.io/yehsp/
... Readers seem to be sensitive to statements indicating scientific consensus or conflict, 198 and adjust their perceptions accordingly (Bolsen & Druckman, 2018;199 Goldberg et al., 2022;Imundo & Rapp, 2022;Johnson, 2018;Kerr et 200 al., 2022;Kobayashi, 2018;Kohl et al., 2016;Landrum et al., 2021;Løhre et al., 2019;Myers 201 et al., 2015;Van der Linden et al., 202 2019; van der Linden, Clarke, et al., 2015;. One of 203 the most frequently tested consensus messages was that "97% of climate scientists agree that 204 humans are causing climate change". ...
... Kelp et al., 2022;Landrum et al., 2021;Landrum et al., 2019;. Typical statements included that "97% of climate scientists agree that humans are causing climate change" -a statement that is frequently used in popular media and by authorities to communicate scientific consensus regarding anthropogenic climate change. ...
Article
Full-text available
Communicating research findings to the public in a clear but engaging manner is challenging, yet central for maximizing their societal impact. This systematic review aimed to derive evidence-based strategies for science communication from experimental studies. Three databases were searched in December 2022. Experimental studies published in English or German were included if they tested the effect of providing written information about science to adults aged 16+ years by assessing the impact on at least one of four domains of science communication aims (understanding and knowledge, attitudes and trust, intention and behavior, engagement). A total of 171 publications were included. Derived strategies include avoiding jargon, carefully structuring texts, including citations and expert sources, being mindful about how and when to indicate conflict or uncertainty in science, using neutral language, and highlighting Open Science principles and replicability. They can be used to communicate science effectively to lay audiences, benefitting the society.
... Attitudes toward a wide range of issues are related to how much consensus people think there is among experts. This includes controversial topics such as climate change [4][5][6][7] genetically-modified food [8], nuclear power [9], or medicinal cannabis [10]. Importantly, such a correlation has also been demonstrated in the context of infectious diseases [11,12]. ...
... We included the main and interactive effects of messaging (control vs. consensus) and design (pre-post vs. post-only) for each model. Statistically significant effects of the design and design x messaging interaction would suggest sensitization effects [10,61]; especially problematic would be statistically significant interaction term as it would suggest that the effects of consensus messaging manipulation depends on whether participants responded to DV questions beforehand. However, no statistically significant effect of the design was found, nor of the interaction between study design and consensus messaging for any of the DVs (i.e., perceived consensus, vaccine beliefs, worry, policy support, and vaccination intentions). ...
Article
Full-text available
We examine the relationships between perception of the scientific consensus regarding vaccines and vaccine attitudes and intentions (N total = 2,362) in the context of COVID-19. Based on the correlational evidence (Study 1), perceived scientific consensus and vaccine attitudes are closely related. This association was stronger among people who trust (vs. distrust) scientists; however, political ideology did not moderate these effects. The experimental evidence (Studies 2–3) indicates that consensus messaging influences the perception of consensus; nonetheless, the effects on vaccine attitudes and intentions were non-significant. Furthermore, a message aimed at reducing psychological reactance was as ineffective in changing attitudes as the traditional consensus message.
... Readers seem to be sensitive to statements indicating scientific consensus or conflict in a text, and adjust their perceptions accordingly (Kobayashi, 2018; Kohl et al., 2016; Landrum et al., 2021; Løhre et al., 2019; Myers et al., 2015; Rode et al., 2021; Shi et al., 2022; Van der Linden et al., 2019; van der Linden, Clarke, et al., 2015). One of the most frequently tested consensus messages was that "97% of climate scientists agree that humans are causing climate change". ...
... (Kelp et al., 2022; Landrum et al., 2021; Landrum et al., 2019; Van der Linden et al., 2019). Typical statements included that "97% of climate scientists agree that humans are causing climate change", a statement that is frequently used in popular media and by authorities to communicate scientific consensus regarding anthropogenic climate change. ...
Preprint
Full-text available
Communicating research findings to the public in a clear but engaging manner is challenging, yet central for maximizing their societal impact. This systematic review aimed to derive evidence-based strategies for science communication from experimental studies. Three databases were searched in December 2022. Experimental studies published in English or German were included if they tested the effect of providing written information about science to adults aged 16+ years by assessing the impact on at least one of four domains of science communication aims (understanding and knowledge, attitudes and trust, intention and behavior, engagement). A total of 171 studies were included. Derived strategies include avoiding jargon, carefully structuring texts, including citations and expert sources, being mindful about how and when to indicate conflict or uncertainty in science, using neutral language, and highlighting Open Science principles and replicability. They can be used to communicate science effectively to lay audiences, benefitting society.
Article
Full-text available
Scholars continue to search for solutions to shift climate change skeptics’ views on climate science and policy. However, research has shown that certain audiences are resistant to change regarding environmental issues. To explore this issue further, we examine the presence of reactance among different audiences in response to simple, yet prominently used, climate change messages. Our results show that emphasizing the scientific consensus on climate change produces reactance, but only among people who question the existence of climate change. Moreover, adding political identification to the model as an additional moderating variable shows that the increases in reactance occur among Republicans who question the existence of climate change. Finally, our results show that reactance to climate change messaging may lead to backfiring effects on important outcomes tied to climate change, such as risk perceptions, climate change beliefs, and support for mitigation policies.
Article
Full-text available
Despite an overwhelming scientific consensus, a sizable minority of people doubt that human activity is causing climate change. Communicating the existence of a scientific consensus has been suggested as a way to correct individuals’ misperceptions about human-caused climate change and other scientific issues, though empirical support is mixed. We report an experiment in which psychology students were presented with consensus information about two issues, and subsequently reported their perception of the level of consensus and extent of their endorsement of those issues. We find that messages about scientific consensus on the reality of anthropogenic climate change and the safety of genetically modified food shift perceptions of scientific consensus. Using mediation models we also show that, for both these issues, high consensus messages also increase reported personal agreement with the scientific consensus, mediated by changes in perceptions of a scientific consensus. This confirms the role of perceived consensus in informing personal beliefs about climate change, though results indicate the impact of single, one-off messages may be limited.
Article
Several empirical studies purportedly demonstrate the existence of a scientific consensus on climate change. Such studies have been pursued as a response to concerns that private industries and think tanks have "manufactured" public doubt and derailed regulatory policies. While there is overwhelming evidence for anthropogenic global warming, studies aiming to empirically establish the existence of consensus rely on several problematic assumptions about the nature of consensus and the role of consensus in policy making. Even more worrisome, reinforcing such assumptions in public may actually undermine, rather than increase, trust in climate science.
Preprint
Despite decades of concerted efforts to communicate to the public on important scientific issues pertaining to the environment and public health, gaps between public acceptance and the scientific consensus on these issues remain stubborn. One strategy for dealing with this shortcoming has been to focus on the existence of the scientific consensus. Recent science communication research has added support to this general idea, though the interpretation of these studies and their generalizability remain a matter of contention. In this paper, we describe results of a large qualitative interview study on different models of scientific consensus and the relationship between such models and trust in science, finding that familiarity with scientific consensus is rarer than might be expected. These results suggest that consensus messaging strategies may not be effective.
Article
Several recent studies have debated whether climate change consensus messages cause reactance, although they sometimes employ different procedures and measurements. This study uses procedures and measures from competing studies to allow for a comparison of the respective approaches. We find that climate change consensus messages cause reactance, particularly among Republicans and those who do not believe in anthropogenic climate change. These findings highlight concerns that consensus messaging strategies may be ineffective, or may backfire, among the audiences that science communicators are most keen to target.
Article
In recent years, there has been considerable interest in studying and using scientific consensus messaging strategies to influence public opinion. Researchers disagree, sometimes vociferously, about how to examine the potential influence of consensus messaging, debating one another publicly and privately. In this essay, we take a step back and focus on some of the important questions that scholars might consider when researching scientific consensus messaging. Hopefully, reflecting on these questions will help researchers better understand the reasons for the different points of debate and improve the work moving forward.
Article
The Gateway Belief Model (GBM) describes a process of attitudinal change in which a shift in people's perception of the scientific consensus on an issue leads to subsequent changes in their attitudes, which in turn predict changes in support for public action. In the current study, we present the first large-scale confirmatory replication of the GBM. Specifically, we conducted a consensus message experiment on a national quota sample of the US population (N = 6,301). Results support the mediational hypotheses of the GBM: an experimentally induced change in perceived scientific consensus causes subsequent changes in cognitive (belief) and affective (worry) judgments about climate change, which in turn are associated with changes in support for public action. The scientific consensus message also had a direct effect on support for public action. We further found an interaction with both political ideology and prior attitudes, such that conservatives and climate change disbelievers were more likely to update their beliefs toward the consensus. We discuss the model's theoretical and practical implications, including why conveying scientific consensus can help reduce politically motivated reasoning.
Article
The gateway belief model posits that perceptions of scientific agreement play a causal role in shaping beliefs about the existence of anthropogenic climate change. However, experimental support for the model is mixed. The current study takes a longitudinal approach, examining the causal relationships between perceived consensus and beliefs. Perceptions of scientific consensus and personal beliefs about climate change were collected over a five-month period in a student sample (N = 356). Cross-lagged panel analysis revealed that perceived scientific consensus did not prospectively predict personal agreement with the reality of climate change, thus the current study did not find support for the gateway belief model. However, the inverse pathway was significant for those with liberal voting intentions: personal beliefs about the reality of anthropogenic climate change prospectively predicted subsequent estimates of consensus. The results suggest that individuals’ perceptions of a consensus among scientists do not have a strong influence on their personal beliefs about climate change.
Article
Scholars are divided over whether communicating to the public the existence of scientific consensus on an issue influences public acceptance of the conclusions represented by that consensus. Here, we examine the influence of four messages on perception and acceptance of the scientific consensus on the safety of genetically modified organisms (GMOs): two messages supporting the idea that there is a consensus that GMOs are safe for human consumption and two questioning that such a consensus exists. We found that although participants concluded that the pro-consensus messages made stronger arguments and were likely to be more representative of the scientific community’s attitudes, those messages did not abate participants’ concern about GMOs. In fact, people’s pre-manipulation attitudes toward GMOs were the strongest predictor of our outcome variables (i.e., perceived argument strength, post-message GMO concern, and perception of what percent of scientists agree). Thus, the results of this study do not support the hypothesis that consensus messaging changes the public’s hearts and minds, and instead provide more support for the strong role of motivated reasoning.
Article
Scholars have recently suggested that communicating levels of scientific consensus (e.g. the percentage of scientists who agree about human-caused climate change) can shift public opinion toward the dominant scientific opinion. Initial research suggested that consensus communication effectively reduces public skepticism. However, other research failed to find a persuasive effect for those with conflicting prior beliefs. This study enters this contested space by experimentally testing how different levels of consensus shape perceptions of scientific certainty. We further examine how perceptions of certainty influence personal agreement and policy support. Findings indicate that communicating higher levels of consensus increases perceptions of scientific certainty, which is associated with greater personal agreement and policy support for non-political issues. We find some suggestive evidence that this mediated effect is moderated by participants’ overall trust in science, such that those with low trust in science fail to perceive higher agreement as indicative of greater scientific certainty.