BRIEF REPORT

On the (non)persuasive power of a brain image

Robert B. Michael · Eryn J. Newman · Matti Vuorre · Geoff Cumming · Maryanne Garry

R. B. Michael · E. J. Newman · M. Vuorre · M. Garry
School of Psychology, Victoria University of Wellington, PO Box 600, Wellington, New Zealand 6147
e-mail: Maryanne.Garry@vuw.ac.nz

G. Cumming
School of Psychological Science, La Trobe University, Melbourne, Australia

© Psychonomic Society, Inc. 2013
Psychonomic Bulletin & Review, doi:10.3758/s13423-013-0391-6
Abstract The persuasive power of brain images has captivated scholars in many disciplines. Like others, we too were intrigued by the finding that a brain image makes accompanying information more credible (McCabe & Castel in Cognition 107:343-352, 2008). But when our attempts to build on this effect failed, we instead ran a series of systematic replications of the original study, comprising 10 experiments and nearly 2,000 subjects. When we combined the original data with ours in a meta-analysis, we arrived at a more precise estimate of the effect, determining that a brain image exerted little to no influence. The persistent meme of the influential brain image should be viewed with a critical eye.
Keywords Judgment and decision making · Neuroimaging · Statistics
A number of psychological research findings capture our attention. Take the finding that people agree more with the conclusions in a news article when it features an image of the brain, even though that image is nonprobative, providing no information about the accuracy of the conclusions already in the text of the article (McCabe & Castel, 2008). In a time of explosive growth in the field of brain research and the encroaching inevitability of neuroscientific evidence in courtrooms, the persuasive influence of a brain image is both intriguing and worrying.
Perhaps because of its implications, this research has received much attention in both the popular and scholarly press (nearly 40 citations per year, according to Google Scholar, as of November 30, 2012). Although McCabe and Castel (2008) did not overstate their findings, many others have. Sometimes, these overstatements were linguistic exaggerations. One author of a paper in a medical journal reported that "brain images . . . can be extremely misleading" (Smith, 2010). Other authors of a paper in a social issues journal concluded, "clearly people are too easily convinced" (Kang, Inzlicht, & Derks, 2010). Other overstatements made claims beyond what McCabe and Castel themselves reported: In an education journal, authors wrote that "brain images make both educators and scientists more likely to believe the statements" (Hinton & Fischer, 2008), while in a forensic psychiatric journal, others worried about "the potential of neuroscientific data to hold significant prejudicial, and at times, dubious probative, value for addressing questions relevant to criminal responsibility and sentencing mitigation" (Treadway & Buckholtz, 2011).
These and other misrepresentations show that the persuasive power of brain images captivates scholars in many disciplines. We too were captivated by this finding and attempted to build on it, but were surprised when we had difficulty obtaining McCabe and Castel's (2008) basic finding. Moreover, in searching the published literature, we were likewise surprised to discover that the effect had not been replicated. In one paper, some brain images were more influential than others on subjects' evaluations of an article's credibility, but because there was no condition in which subjects evaluated the article without a brain image, we cannot draw conclusions about the power of brain images per se (Keehner, Mayberry, & Fischer, 2011). Other papers show that written neuroscience information makes bad explanations of psychological phenomena seem more satisfying (Weisberg, Keil, Goodstein, Rawson, & Gray, 2008)
and that written fMRI evidence can even lead to more guilty verdicts in a mock juror trial (McCabe, Castel, & Rhodes, 2011). We even found work in another domain showing that meaningless mathematics boosts the quality of abstracts (Eriksson, 2012). But we did not find any other evidence that brain images themselves wield power.

Given the current discussion in psychological science regarding the importance of replication (see the November 2012 Perspectives on Psychological Science, the February 2012 Observer, and www.psychfiledrawer.org), we therefore turned our attention to a concentrated attempt to more precisely estimate how much more people will agree with an article's conclusions when it is accompanied by a brain image. Here, we report a meta-analysis including McCabe and Castel's (2008) original data and 10 of our own experiments that use their materials.¹ We arrive at a more precise estimate of the size of the effect, concluding that a brain image exerts little to no influence on the extent to which people agree with the conclusions of a news article.
Method
Subjects
Across 10 experiments, a total of 1,971 people correctly completed all phases of the experiment (Table 1 shows the experiment number, subject pool, medium of data collection, sample size, and compensation). We collected demographic information from the Mechanical Turk subjects. These subjects ranged in age from 16 to 82 years (M = 29.24, median = 26, SD = 10.78). Eleven percent had completed a PhD or Master's degree; 35% had completed a Bachelor's degree; 52% had completed high school; and the remaining 2% had not finished high school.
Design
In each experiment, we manipulated, between subjects, the
presence or absence of a brain image.
Procedure
We told subjects that they were taking part in a study
examining visual and verbal learning styles. Subjects
then read a brief news article, "Brain Scans Can Detect
Criminals,from McCabe and Castel's (2008)thirdex-
periment. The article was from the BBC News Web site
and summarized a study discussed in Nature (BBC
News, 2005;Wild,2005). All of our attempts to repli-
cate focused on this experiment, because it produced
McCabe and Castel's largest effect (d= 0.40). Although
there were two other experiments in their paper, the first
used different materials and a different dependent mea-
sure and was a within-subjects design. Their second
experiment, also a within-subjects design, examined the
effects of different types of brain images but did not
have a baseline condition with no brain image and,
therefore, did not permit brain versus no-brain
comparisons.
In their third experiment, McCabe and Castel (2008) used a 2 × 2 between-subjects design, manipulating (1) the presence or absence of a brain image depicting activity in the frontal lobes and (2) whether the article featured experts critiquing the article's claims. Although they did not explain the rationale for the critique manipulation, it stands to reason that criticism would counteract the persuasive influence of a brain image. Indeed, they adopted that reasoning in a later paper showing that the persuasive influence of written neuroscientific evidence on juror verdicts decreases when the validity of that evidence is questioned (McCabe et al., 2011). McCabe and Castel found that the critique manipulation did not influence people's ratings of the articles' conclusions, nor did it interact with the presence of a brain image, so in Experiments 1-5 we used the article without the critique.
But when we took a closer look at McCabe and Castel's (2008) raw data, we found that the influence of the brain image was larger when the article's claims were critiqued, as compared with when they were not, t_critique(52) = 2.07, p = .04, d = 0.56; t_no-critique(52) = 1.13, p = .27, d = 0.31. Note that this surprising result runs counter to the explanation that evidence is less influential when its validity is called into question (McCabe et al., 2011). With these findings in mind, in Experiments 6-10, we used the article in which experts criticized the article's claims in an extra 100 words.
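For readers who want to reproduce this kind of check, t and Cohen's d for two independent groups can be computed from summary statistics alone. The following is a minimal Python sketch; the means and SDs are placeholders chosen only to land near the reported t(52) = 2.07 and d = 0.56, not McCabe and Castel's actual cell values (27 subjects per cell is implied by df = 52):

```python
import math

def pooled_t_and_d(m1, sd1, n1, m2, sd2, n2):
    """Independent-samples t and Cohen's d from summary statistics,
    using the pooled standard deviation of the two groups."""
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp                                # standardized mean difference
    t = (m1 - m2) / (sp * math.sqrt(1 / n1 + 1 / n2))
    return t, n1 + n2 - 2, d

# Placeholder values (NOT the original data).
t, df, d = pooled_t_and_d(m1=3.0, sd1=0.55, n1=27, m2=2.7, sd2=0.52, n2=27)
print(f"t({df}) = {t:.2f}, d = {d:.2f}")  # t(52) = 2.06, d = 0.56
```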
After reading the article (critiqued or not), subjects responded to the statement "Do you agree or disagree with the conclusion that brain imaging can be used as a lie detector?" on a scale from 1 (strongly disagree) to 4 (strongly agree). Subjects were randomly assigned to a condition in which the article appeared alone or a condition in which an image of the brain appeared alongside the article. In all online experiments, subjects then encountered an attention check, which they had to pass to stay in the data set (Oppenheimer, Meyvis, & Davidenko, 2009).²

¹ We thank Alan Castel for sharing his materials with us and for his many helpful discussions.

² When we included subjects who failed the attention check in the meta-analysis, the estimated raw effect size was an even smaller 0.04, 95% CI [0.00, 0.11]. Across studies, exclusion rates varied from 24% to 31% of subjects. These rates are lower than those found by Oppenheimer et al. (2009).
Results and discussion
How much influence does an image of the brain have on people's agreement with the conclusions of a news article? To answer this question, we first calculated the raw effect size for each experiment by determining the difference between mean agreement ratings among people in the brain and the no-brain conditions. We report these findings in Table 2.
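Each row of Table 2 can be reproduced from the group summary statistics alone. Below is a minimal sketch (Python with SciPy; not the software used in the paper) of the raw effect size and its pooled-SD confidence interval, checked against Experiment 1:

```python
import math
from scipy import stats

def raw_effect_with_ci(n1, m1, sd1, n2, m2, sd2, conf=0.95):
    """Raw mean difference (brain minus no-brain) and its CI,
    using the pooled standard deviation of the two groups."""
    es = m2 - m1  # positive = higher agreement with a brain image
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    se = sp * math.sqrt(1 / n1 + 1 / n2)
    tcrit = stats.t.ppf((1 + conf) / 2, df=n1 + n2 - 2)
    return es, es - tcrit * se, es + tcrit * se

# Experiment 1 (Table 2): no-brain n = 99, M = 2.90, SD = 0.58;
#                         brain    n = 98, M = 2.86, SD = 0.61
es, ll, ul = raw_effect_with_ci(99, 2.90, 0.58, 98, 2.86, 0.61)
print(f"ES = {es:.2f}, 95% CI [{ll:.2f}, {ul:.2f}]")  # -0.04, [-0.21, 0.13]
```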
To find a more precise estimate of the size of the effect, we used ESCI software (Cumming, 2012) to run a random effects model meta-analysis of our 10 experiments and two findings from McCabe and Castel (2008). We display the results in Fig. 1. The result of this meta-analysis is an estimated raw effect size of 0.07, 95% CI [0.00, 0.14], z = 1.84, p = .07. On a 4-point scale, this estimate represents movement up the scale by 0.07 points, or 2.4% of the 3-point range (cf. McCabe and Castel's [2008] original raw effect size of 0.26, or 8.7%). We also found no evidence of heterogeneity across the experiments. Tau, the estimated standard deviation between experiments, was small (0.07), and the CI included zero as a plausible value (95% CI [0, 0.13]; note, of course, that tau cannot be less than 0), suggesting that the observed variation across experiments could very plausibly be attributed to sampling variability. This finding is important because, at first glance, it appears as though the brain image might be more persuasive on paper than online, but the statistics simply do not support this idea. We also examined the impact of other potentially important moderators. We ran an analysis of covariance on the dependent measure, using age and education as covariates and condition (brain, no brain) as an independent variable. Neither covariate interacted with the independent variable, suggesting that the persuasive influence of a brain image is not moderated by age or education.

Table 1 Characteristics of our 10 experiments included in the meta-analysis

Experiment | Subject pool | Medium | N | Compensation
1 | Mechanical Turk | Online | 197 | US$0.30
2 | Victoria undergraduate subject pool | Online | 75 | Course credit
3 | Wellington high school students | Paper | 45 | Movie voucher
4 | Mechanical Turk | Online | 368 | US$0.50
5 | Victoria Intro Psyc subject pool | Paper | 529 | Course credit
6 | Mechanical Turk | Online | 113 | US$0.50
7 | General public | Paper | 68 | None
8 | Mechanical Turk | Online | 191 | US$0.50
9 | Mechanical Turk | Online | 194 | US$0.50
10 | Mechanical Turk | Online | 191 | US$0.50

Note. In Experiments 3, 5, and 7, subjects saw a paper version of the article, as in the original McCabe and Castel (2008) study; in the remaining experiments, they saw an online version that looked nearly identical to the paper version (Qualtrics Labs Inc., 2012).

Table 2 Summary of results of our 10 experiments included in the meta-analysis

Experiment | No brain (N, M, SD) | Brain (N, M, SD) | ES | 95% CI [LL, UL] | t | p
1. Mechanical Turk | 99, 2.90, 0.58 | 98, 2.86, 0.61 | -0.04 | [-0.21, 0.13] | -0.46 | .643
2. Victoria undergraduate subject pool | 42, 2.62, 0.54 | 33, 2.85, 0.57 | 0.23 | [-0.03, 0.48] | 1.79 | .078
3. Wellington high school students | 24, 2.96, 0.36 | 21, 3.07, 0.55 | 0.11 | [-0.16, 0.39] | 0.82 | .415
4. Mechanical Turk | 184, 2.93, 0.60 | 184, 2.89, 0.60 | -0.05 | [-0.17, 0.07] | -0.78 | .435
5. Victoria Intro Psyc subject pool | 274, 2.86, 0.59 | 255, 2.91, 0.52 | 0.04 | [-0.05, 0.14] | 0.92 | .357
6. Mechanical Turk [critique] | 58, 2.50, 0.84 | 55, 2.60, 0.83 | 0.10 | [-0.21, 0.41] | 0.64 | .527
7. General public [critique] | 34, 2.41, 0.78 | 34, 2.74, 0.51 | 0.32 | [0.00, 0.64] | 2.02 | .048
8. Mechanical Turk [critique] | 98, 2.73, 0.67 | 93, 2.68, 0.69 | -0.06 | [-0.25, 0.14] | -0.58 | .561
9. Mechanical Turk [critique] | 99, 2.54, 0.66 | 95, 2.72, 0.68 | 0.18 | [-0.01, 0.37] | 1.88 | .062
10. Mechanical Turk [critique] | 94, 2.66, 0.65 | 97, 2.64, 0.71 | -0.02 | [-0.21, 0.17] | -0.21 | .836

Note. ES = effect size, calculated as the brain mean minus the no-brain mean. LL and UL are the lower and upper limits of the 95% confidence interval of the effect size. Positive effect sizes signify higher average ratings for articles featuring a brain image.
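The paper's meta-analysis used ESCI (Cumming, 2012). For readers without that software, the sketch below implements a standard random-effects model (DerSimonian-Laird) in Python, recovering each study's variance from its 95% CI in Table 2. Only our 10 experiments are entered, so the output approximates, but will not exactly match, the reported 0.07 (which also included McCabe and Castel's two findings); ESCI's estimator may also differ in detail.

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects meta-analysis with the DerSimonian-Laird
    estimate of tau^2 (between-study variance)."""
    e, v = np.asarray(effects), np.asarray(variances)
    w = 1 / v                                   # fixed-effect weights
    mean_fe = np.sum(w * e) / np.sum(w)
    q = np.sum(w * (e - mean_fe) ** 2)          # Cochran's Q
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(e) - 1)) / c)
    w_re = 1 / (v + tau2)                       # random-effects weights
    mean_re = np.sum(w_re * e) / np.sum(w_re)
    se = np.sqrt(1 / np.sum(w_re))
    return mean_re, mean_re - 1.96 * se, mean_re + 1.96 * se, np.sqrt(tau2)

# Raw effects and 95% CIs from Table 2; each study's variance is
# recovered from its CI half-width: var = ((UL - LL) / (2 * 1.96))**2.
effects = [-0.04, 0.23, 0.11, -0.05, 0.04, 0.10, 0.32, -0.06, 0.18, -0.02]
cis = [(-0.21, 0.13), (-0.03, 0.48), (-0.16, 0.39), (-0.17, 0.07),
       (-0.05, 0.14), (-0.21, 0.41), (0.00, 0.64), (-0.25, 0.14),
       (-0.01, 0.37), (-0.21, 0.17)]
variances = [((ul - ll) / (2 * 1.96)) ** 2 for ll, ul in cis]
est, ll, ul, tau = dersimonian_laird(effects, variances)
print(f"estimate = {est:.2f}, 95% CI [{ll:.2f}, {ul:.2f}], tau = {tau:.2f}")
```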
How are we to understand the size of the brain image effect in context? Let us consider subjects' hypothetical responses, as shown in Fig. 2. The line marked "No Brain" represents the weighted mean agreement of subjects who read the article without a brain image. The line marked "Brain" represents how far subjects' agreement would shift, in the mean, if they had read the article with a brain image. Taken together, this figure, coupled with the meta-analysis, makes it strikingly clear that the image of the brain exerted little to no influence. The exaggerations of McCabe and Castel's (2008) work by other researchers seem even more worrisome in light of this more precise estimate of the effect size.
It is surprising, however, that an image of the brain exerted little to no influence on people's judgments. We know that images can exert powerful effects on cognition, in part because they facilitate connections to prior knowledge. For instance, when pictures clarify complex ideas (such as the workings of a bicycle pump) and bridge the gap between what nonexperts know and do not know, people comprehend and remember that material better (Mayer & Gallini, 1990; see Carney & Levin, 2002, for a review).
Manipulations like these that boost comprehension can
also make other concepts related to the material feel more
easily available in memory, and we know that people inter-
pret this feeling of ease as diagnostic of familiarity and truth
(Newman, Garry, Bernstein, Kantner, & Lindsay, 2012;
Tversky & Kahneman, 1973; Whittlesea, 1993; see Alter
& Oppenheimer, 2009, for a review). But a brain image
depicting activity in the frontal lobes is different. To people
who may not understand how fMRI works, or even where
the frontal lobes are, seeing an image of the brain may not
be any more helpful than seeing an ink blot. It seems
reasonable to speculate, therefore, that images of the brain
are like other technical images: To people who cannot
connect them to prior knowledge, there is no boost of
comprehension, nor a feeling of increased cognitive avail-
ability. This speculation leads directly to an interesting
question: To what extent is the influence of a brain image
moderated by prior knowledge?
Another explanation for the trivial effect of brain images is that people have become more skeptical about neuroscience information since McCabe and Castel's (2008) study. Indeed, the media itself has begun engaging in critical self-reflection. For instance, a recent article in the New York Times railed against "the cultural tendency, in which neuroscientific explanations eclipse historical, political, economic, literary and journalistic interpretations of experience" and phenomena like "neuro law," which, in part, uses "the evidence of damaged brains as the basis for legal defense of people accused of heinous crimes" (Quart, 2012). If people have indeed grown skeptical, we might then expect them to also be protected against the influence of other forms of neuroscience information. To test this hypothesis, we ran a series of replications of another well-known 2008 study showing that people rated bad explanations of scientific phenomena as more satisfying when those explanations featured neuroscience language (Weisberg et al., 2008).

Fig. 1 Forest plot of effect sizes between studies. Each row represents one experiment, starting with the original McCabe and Castel (2008) finding when the article did not feature criticism, then our five replications, then McCabe and Castel's finding when the article featured criticism, then our five replications. Our experiments are numbered 1-10, as also in Tables 1 and 2. The location of each square on the horizontal axis represents the effect size, the difference between mean agreement ratings in the no-brain and brain conditions (the maximum possible difference between these two means is ±3; positive values indicate a higher mean score for brain). The black lines extending to either side of a square represent a 95% confidence interval. The size of each square indicates the sample size and weighting an experiment is given in the meta-analysis. Finally, the red diamond shows the result of the meta-analysis, with the center of the diamond indicating the estimated effect size and the spread of the diamond representing a 95% confidence interval.

Fig. 2 An illustration of the brain effect. The no-brain bar represents the weighted average for subjects who read the article without a brain image (2.77). The difference between the no-brain bar and the brain bar is the estimated effect size from the meta-analysis (0.07).
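The 2.77 baseline in Fig. 2 can be recovered from the no-brain columns of Table 2. A quick Python check, weighting each experiment's no-brain mean by its sample size:

```python
# Sample sizes and means from the no-brain columns of Table 2.
ns = [99, 42, 24, 184, 274, 58, 34, 98, 99, 94]
ms = [2.90, 2.62, 2.96, 2.93, 2.86, 2.50, 2.41, 2.73, 2.54, 2.66]

# n-weighted mean agreement across the 10 experiments.
weighted_mean = sum(n * m for n, m in zip(ns, ms)) / sum(ns)
print(f"{weighted_mean:.2f}")  # 2.77, matching the no-brain bar in Fig. 2
```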
Our five replications, which appear in Table 3, produced similar patterns of results. To estimate the size of the effect more precisely, we followed a similar approach as we did earlier, running a random effects model meta-analysis of our five experiments and the original finding from Weisberg et al. (2008). The result was an estimated raw effect size of 0.40, 95% CI [0.23, 0.57], z = 4.71, p < .01. On a 7-point scale, this estimate represents movement up the scale by 0.40 points, or 6.67% of the 6-point range. The CI does not include zero as a plausible value, providing evidence against the idea that people have become savvy enough about neuroscience to be protected against its influence more generally.
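The same DerSimonian-Laird sketch shown earlier applies here as a usage example. Feeding it the five replication rows of Table 3 (Weisberg et al.'s original finding is omitted because its CI is not reproduced in this report) yields an estimate somewhat below the reported 0.40, as expected:

```python
# Raw effects and 95% CIs for the five replications in Table 3; variances
# are recovered from the CI half-widths as before.
effects = [0.16, 0.32, 0.33, 0.45, 0.40]
cis = [(-0.18, 0.50), (0.00, 0.64), (0.02, 0.63), (0.14, 0.76), (0.12, 0.68)]
variances = [((ul - ll) / (2 * 1.96)) ** 2 for ll, ul in cis]

# Reuses dersimonian_laird() from the earlier sketch.
est, ll, ul, tau = dersimonian_laird(effects, variances)
print(f"estimate = {est:.2f}, 95% CI [{ll:.2f}, {ul:.2f}]")
```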
Why the disparity, then, between the trivial effects of a
brain image and the more marked effects of neuroscience
language? A closer inspection reveals that the Weisberg et
al. (2008) study is not simply the language analog of the
McCabe and Castel (2008) study. For instance, Weisberg et
al. found that neuroscience language makes bad explana-
tions seem better but has less (or no) effect on good explan-
ations. By contrast, McCabe and Castel did not vary the
quality of their explanations. In addition, Weisberg et al.
compared written information that did or did not feature
neuroscience language, but McCabe and Castel added a
brain image to an article that already featured some neuro-
science language. Perhaps, then, the persuasive influence
of the brain image is small when people have already
been swayed by the neuroscience language in the article.
Although such a possibility is outside the scope of this
article, it is an important question for future research.
How are we to understand our results, given that other research shows that some brain images are more influential than others (Keehner et al., 2011)? One possibility is that Keehner and colleagues' within-subjects design, in which subjects considered a series of five different brain images, encouraged people to rely on relative ease of processing when making judgments across the different images (Alter & Oppenheimer, 2008). By contrast, McCabe and Castel's (2008) between-subjects design does not allow people to adopt this strategy. And recall, of course, that because Keehner and colleagues did not compare the influence of any brain image with that of no brain image, their work still does not address that basic question.
Although our findings do not support popular descriptions of the persuasiveness of brain images, they do fit with very recent research and discussions questioning their allure (Gruber & Dickerson, 2012; Farah & Hook, 2013). Importantly, our estimation approach avoids the dichotomous thinking that dominates media discourse of popular psychological effects and, instead, emphasizes, in accord with APA standards, interpretation of results based on point and interval estimates. Furthermore, research in the domain of jury decision making suggests that brain images have little or no independent influence on juror verdicts, a context in which the persuasive influence of a brain image would have serious consequences (Greene & Cahill, 2012; Schweitzer & Saks, 2011; Schweitzer et al., 2011). Taken together, these findings and ours present compelling evidence that when it comes to brains, "the amazingly persistent meme of the overly influential image"³ has been wildly overstated.
³ We thank Martha Farah (personal communication, June 20, 2012) for coining this delightful term.

Author Note We are grateful for the support of the New Zealand Government through the Marsden Fund, administered by the Royal Society of New Zealand on behalf of the Marsden Fund Council. Robert B. Michael gratefully acknowledges support from Victoria University of Wellington.

Table 3 Summary of results of experiments replicating Weisberg, Keil, Goodstein, Rawson, and Gray (2008)

Experiment | No Neuro (N, M, SD) | Neuro (N, M, SD) | ES | 95% CI [LL, UL] | t | p
1. Mechanical Turk | 61, 4.20, 0.89 | 60, 4.36, 0.99 | 0.16 | [-0.18, 0.50] | 0.93 | .355
2. Mechanical Turk | 68, 4.08, 1.02 | 80, 4.40, 0.94 | 0.32 | [0.00, 0.64] | 1.98 | .049
3. Mechanical Turk | 78, 4.17, 1.07 | 79, 4.50, 0.85 | 0.33 | [0.02, 0.63] | 2.12 | .036
4. Mechanical Turk | 82, 4.16, 1.03 | 70, 4.61, 0.89 | 0.45 | [0.14, 0.76] | 2.83 | .005
5. Mechanical Turk | 117, 4.18, 1.02 | 102, 4.58, 1.07 | 0.40 | [0.12, 0.68] | 2.83 | .005

Note. ES = effect size, calculated as the neuro mean minus the no-neuro mean. LL and UL are the lower and upper limits of the 95% confidence interval of the effect size. Positive effect sizes signify higher average satisfaction ratings for bad explanations featuring neuroscience.
References
Alter, A. L., & Oppenheimer, D. M. (2008). Effects of fluency on psychological distance and mental construal (or why New York is a large city, but New York is a civilized jungle). Psychological Science, 19, 161-167. doi:10.1111/j.1467-9280.2008.02062.x

Alter, A. L., & Oppenheimer, D. M. (2009). Uniting the tribes of fluency to form a metacognitive nation. Personality and Social Psychology Review, 13, 219-235. doi:10.1177/1088868309341564

BBC News. (2005, September 21). Can brain scans detect criminals? BBC News. Retrieved from http://news.bbc.co.uk/2/hi/uk_news/4268260.stm

Carney, R. N., & Levin, J. R. (2002). Pictorial illustrations still improve students' learning from text. Educational Psychology Review, 14, 5-26. doi:10.1023/A:1013176309260

Cumming, G. (2012). Understanding the new statistics: Effect sizes, confidence intervals, and meta-analysis. New York: Routledge.

Eriksson, K. (2012). The nonsense math effect. Judgment and Decision Making, 7(6), 746-749.

Farah, M. J., & Hook, C. J. (2013). The seductive allure of "seductive allure." Perspectives on Psychological Science, 8, 88-90. doi:10.1177/1745691612469035

Greene, E., & Cahill, B. S. (2012). Effects of neuroimaging evidence on mock juror decision making. Behavioral Sciences & the Law, 30, 280-296. doi:10.1002/bsl.1993

Gruber, D., & Dickerson, J. A. (2012). Persuasive images in popular science: Testing judgments of scientific reasoning and credibility. Public Understanding of Science, 21, 938-948. doi:10.1177/0963662512454072

Hinton, C., & Fischer, K. W. (2008). Research schools: Grounding research in educational practice. Mind, Brain, and Education, 2, 157-160. doi:10.1111/j.1751-228X.2008.00048.x

Kang, S. K., Inzlicht, M., & Derks, B. (2010). Social neuroscience and public policy on intergroup relations: A Hegelian analysis. Journal of Social Issues, 66, 585-601. doi:10.1111/j.1540-4560.2010.01664.x

Keehner, M., Mayberry, L., & Fischer, M. H. (2011). Different clues from different views: The role of image format in public perception of neuroimaging results. Psychonomic Bulletin & Review, 18, 422-428. doi:10.3758/s13423-010-0048-7

Mayer, R. E., & Gallini, J. K. (1990). When is an illustration worth ten thousand words? Journal of Educational Psychology, 82, 715-726. doi:10.1037/0022-0663.82.4.715

McCabe, D. P., & Castel, A. D. (2008). Seeing is believing: The effect of brain images on judgments of scientific reasoning. Cognition, 107, 343-352. doi:10.1016/j.cognition.2007.07.017

McCabe, D. P., Castel, A. D., & Rhodes, M. G. (2011). The influence of fMRI lie detection evidence on juror decision-making. Behavioral Sciences & the Law, 29, 566-577. doi:10.1002/bsl.993

Newman, E. J., Garry, M., Bernstein, D. M., Kantner, J., & Lindsay, D. S. (2012). Nonprobative photographs (or words) inflate truthiness. Psychonomic Bulletin & Review. doi:10.3758/s13423-012-0292-0

Oppenheimer, D., Meyvis, T., & Davidenko, N. (2009). Instructional manipulation checks: Detecting satisficing to increase statistical power. Journal of Experimental Social Psychology, 45, 867-872. doi:10.1016/j.jesp.2009.03.009

Qualtrics Labs Inc. (2012). http://www.qualtrics.com

Quart, A. (2012, November 25). Neuroscience: Under attack. The New York Times, p. SR12.

Schweitzer, N. J., & Saks, M. J. (2011). Neuroimage evidence and the insanity defense. Behavioral Sciences & the Law, 29, 592-607. doi:10.1002/bsl.995

Schweitzer, N. J., Saks, M. J., Murphy, E. R., Roskies, A. L., Sinnott-Armstrong, W., & Gaudet, L. M. (2011). Neuroimages as evidence in a mens rea defense. Psychology, Public Policy, and Law, 17, 357-393. doi:10.1037/a0023581

Smith, D. F. (2010). Cognitive brain mapping for better or worse. Perspectives in Biology and Medicine, 53, 321-329. doi:10.1353/pbm.0.0165

Treadway, M. T., & Buckholtz, J. W. (2011). On the use and misuse of genomic and neuroimaging science in forensic psychiatry: Current roles and future directions. Child and Adolescent Psychiatric Clinics of North America, 20, 533-546. doi:10.1016/j.chc.2011.03.012

Tversky, A., & Kahneman, D. (1973). Availability: A heuristic for judging frequency and probability. Cognitive Psychology, 5, 207-232. doi:10.1016/0010-0285(73)90033-9

Weisberg, D. S., Keil, F. C., Goodstein, J., Rawson, E., & Gray, J. R. (2008). The seductive allure of neuroscience explanations. Journal of Cognitive Neuroscience, 20, 470-477. doi:10.1162/jocn.2008.20040

Whittlesea, B. W. A. (1993). Illusions of familiarity. Journal of Experimental Psychology: Learning, Memory, and Cognition, 19, 1235-1253.

Wild, J. (2005). Brain imaging ready to detect terrorists, say neuroscientists. Nature, 437, 457. doi:10.1038/437457a