The Seductive Allure of Neuroscience Explanations
Deena Skolnick Weisberg, Frank C. Keil, Joshua Goodstein,
Elizabeth Rawson, and Jeremy R. Gray
Explanations of psychological phenomena seem to generate more public interest when they contain neuroscientific information. Even irrelevant neuroscience information in an explanation of a psychological phenomenon may interfere with people’s abilities to critically consider the underlying logic of this explanation. We tested this hypothesis by giving naïve adults, students in a neuroscience course, and neuroscience experts brief descriptions of psychological phenomena followed by one of four types of explanation, according to a 2 (good explanation vs. bad explanation) × 2 (without neuroscience vs. with neuroscience) design. Crucially, the neuroscience information was irrelevant to the logic of the explanation, as confirmed by the expert subjects. Subjects in all three groups judged good explanations as more satisfying than bad ones. But subjects in the two nonexpert groups additionally judged that explanations with logically irrelevant neuroscience information were more satisfying than explanations without. The neuroscience information had a particularly striking effect on nonexperts’ judgments of bad explanations, masking otherwise salient problems in these explanations.
Although it is hardly mysterious that members of the
public should find psychological research fascinating,
this fascination seems particularly acute for findings that
were obtained using a neuropsychological measure. In-
deed, one can hardly open a newspaper’s science sec-
tion without seeing a report on a neuroscience discovery
or on a new application of neuroscience findings to eco-
nomics, politics, or law. Research on nonneural cogni-
tive psychology does not seem to pique the public’s
interest in the same way, even though the two fields are
concerned with similar questions.
The current study investigates one possible reason why
members of the public find cognitive neuroscience so
particularly alluring. To do so, we rely on one of the
functions of neuroscience information in the field of psy-
chology: providing explanations. Because articles in both
the popular press and scientific journals often focus on
how neuroscientific findings can help to explain human
behavior, people’s fascination with cognitive neuroscience
can be redescribed as people’s fascination with explana-
tions involving a neuropsychological component.
However, previous research has shown that people
have difficulty reasoning about explanations (for reviews,
see Keil, 2006; Lombrozo, 2006). For instance, people can
be swayed by teleological explanations when these are not
warranted, as in cases where a nonteleological process,
such as natural selection or erosion, is actually implicated
(Lombrozo & Carey, 2006; Kelemen, 1999). People also
tend to rate longer explanations as more similar to ex-
perts’ explanations (Kikas, 2003), fail to recognize circu-
larity (Rips, 2002), and are quite unaware of the limits
of their own abilities to explain a variety of phenomena
(Rozenblit & Keil, 2002). In general, people often believe
explanations because they find them intuitively satisfying,
not because they are accurate (Trout, 2002).
In line with this body of research, we propose that
people often find neuroscience information alluring be-
cause it interferes with their abilities to judge the quality
of the psychological explanations that contain this infor-
mation. The presence of neuroscience information may
be seen as a strong marker of a good explanation, re-
gardless of the actual status of that information within
the explanation. That is, something about seeing neu-
roscience information may encourage people to believe
they have received a scientific explanation when they
have not. People may therefore uncritically accept any
explanation containing neuroscience information, even
in cases when the neuroscience information is irrelevant
to the logic of the explanation.
To test this hypothesis, we examined people’s judg-
ments of explanations that either do or do not contain
neuroscience information, but that otherwise do not dif-
fer in content or logic. All three studies reported here
used a 2 (explanation type: good vs. bad) × 2 (neuroscience: without vs. with) design. This allowed us to see
both people’s baseline abilities to distinguish good psy-
chological explanations from bad psychological explana-
tions as well as any influence of neuroscience information
on this ability.

Yale University
© 2008 Massachusetts Institute of Technology. Journal of Cognitive Neuroscience 20:3, pp. 470–477

If logically irrelevant neuroscience infor-
mation affects people’s judgments of explanations, this
would suggest that people’s fascination with neuropsy-
chological explanations may stem from an inability or
unwillingness to critically consider the role that neurosci-
ence information plays in these explanations.
EXPERIMENT 1

There were 81 participants in the study (42 women,
37 men, 2 unreported; mean age = 20.1 years, SD =
4.2 years, range = 18–48 years, based on 71 reported
ages). We randomly assigned 40 subjects to the With-
out Neuroscience condition and 41 to the With Neuro-
science condition. Subjects thus saw explanations that
either always did or always did not contain neuroscience
information. We used this between-subjects design to
prevent subjects from directly comparing explanations
that did and did not contain neuroscience, providing a
stronger test of our hypothesis.
We wrote descriptions of 18 psychological phenomena
(e.g., mutual exclusivity, attentional blink) that were meant
to be accessible to a reader untrained in psychology or
neuroscience. For each of these items, we created two
types of explanations, good and bad, neither of which
contained neuroscience. The good explanations in most
cases were the genuine explanations that the researchers
gave for each phenomenon. The bad explanations were
circular restatements of the phenomenon, hence, not
explanatory (see Table 1 for a sample item).
For the With Neuroscience conditions, we added neu-
roscience information to the good and bad explanations
from the Without Neuroscience conditions. The added
neuroscience information had three important features:
(1) It always specified that the area of activation seen in
the study was an area already known to be involved in
tasks of this type, circumventing the interpretation that
the neuroscience information added value to the expla-
nation by localizing the phenomenon. (2) It was always
identical or nearly identical in the good explanation and
the bad explanation for a given phenomenon. Any
general effect of neuroscience information on judgment
should thus be seen equally for good explanations and
bad explanations. Additionally, any differences that may
occur between the good explanation and bad explana-
tion conditions would be highly unlikely to be due to
any details of the neuroscience information itself. (3)
Most importantly, in no case did the neuroscience
information alter the underlying logic of the explanation
itself. This allowed us to test the effect of neuroscience
information on the task of evaluating explanations,
independent of any value added by such information.
Before the study began, three experienced cognitive
neuroscientists confirmed that the neuroscience infor-
mation did not add value to the explanations.
Subjects were told that they would be rating explana-
tions of scientific phenomena, that the studies they
would read about were considered solid, replicable re-
search, and that the explanations they would read were
not necessarily the real explanations for the phenomena.
Table 1. Sample Item

Without Neuroscience

Good Explanation: The researchers claim that this ‘‘curse’’ happens because subjects have trouble switching their point of view to consider what someone else might know, mistakenly projecting their own knowledge onto others.

Bad Explanation: The researchers claim that this ‘‘curse’’ happens because subjects make more mistakes when they have to judge the knowledge of others. People are much better at judging what they themselves know.

With Neuroscience

Good Explanation: Brain scans indicate that this ‘‘curse’’ happens because of the frontal lobe brain circuitry known to be involved in self-knowledge. Subjects have trouble switching their point of view to consider what someone else might know, mistakenly projecting their own knowledge onto others.

Bad Explanation: Brain scans indicate that this ‘‘curse’’ happens because of the frontal lobe brain circuitry known to be involved in self-knowledge. Subjects make more mistakes when they have to judge the knowledge of others. People are much better at judging what they themselves know.

Note: Researchers created a list of facts that about 50% of people knew. Subjects in this experiment read the list of facts and had to say which ones they knew. They then had to judge what percentage of other people would know those facts. Researchers found that the subjects responded differently about other people’s knowledge of a fact when the subjects themselves knew that fact. If the subjects did know a fact, they said that an inaccurately large percentage of others would know it, too. For example, if a subject already knew that Hartford was the capital of Connecticut, that subject might say that 80% of people would know this, even though the correct answer is 50%. The researchers call this finding ‘‘the curse of knowledge.’’ (In the original table, the neuroscience information was highlighted, but subjects did not see such marking.)
Weisberg et al. 471
For each of the 18 stimuli, subjects read a one-paragraph
description of the phenomenon followed by an expla-
nation of that phenomenon. They rated how satisfying
they found the explanation on a 7-point scale from −3
(very unsatisfying) to +3 (very satisfying), with 0 as the
neutral midpoint.
Results and Discussion
Preliminary analyses revealed no differences in perfor-
mance based on sex or level of education, so ratings
were collapsed across these variables for the analyses.
Additionally, subjects tended to respond similarly to all
18 items (Cronbach’s α = .79); the set of items had
acceptable psychometric reliability as a measure of the
construct of interest.
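Cronbach’s α can be computed directly from a subjects × items matrix of ratings. The sketch below uses the standard formula on hypothetical data (the matrix shown is illustrative, not the study’s):

```python
import numpy as np

def cronbach_alpha(ratings: np.ndarray) -> float:
    """Cronbach's alpha for a (subjects x items) matrix of ratings."""
    k = ratings.shape[1]                         # number of items (18 here)
    item_vars = ratings.var(axis=0, ddof=1)      # per-item variances
    total_var = ratings.sum(axis=1).var(ddof=1)  # variance of subject totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Perfectly consistent responding (every item tracks the subject's
# overall tendency exactly) yields alpha = 1.
subject_levels = np.arange(10, dtype=float)
consistent = np.tile(subject_levels[:, None], (1, 18))
print(round(cronbach_alpha(consistent), 2))  # -> 1.0
```

Values around .7 or higher, as reported here, are conventionally taken to indicate acceptable internal consistency.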
Our primary goal in this study was to discover what
effect, if any, the addition of neuroscience information
would have on subjects’ ratings of how satisfying they
found good and bad explanations. We analyzed the ratings using a 2 (good explanation vs. bad explanation) ×
2 (without neuroscience vs. with neuroscience) repeated
measures analysis of variance (ANOVA; see Figure 1).
There was a significant main effect of explanation type
[F(1, 79) = 144.8, p < .01], showing that good expla-
nations (M = 0.88, SE = 0.10) are rated as significantly
more satisfying than bad explanations (M = −0.28, SE =
0.12). That is, subjects were accurate in their assess-
ments of explanations in general, finding good expla-
nations to be better than bad ones.
There was also a significant main effect of neuroscience
[F(1, 79) = 6.5, p < .05]. Explanations with neuroscience
information (M = 0.53, SE = 0.13) were rated as significantly more satisfying than explanations that did not
include neuroscience information (M = 0.06, SE = 0.13).
Adding irrelevant neuroscience information thus some-
how impairs people’s baseline ability to make judgments
about explanations.
We also found a significant interaction between expla-
nation type and neuroscience information [F(1, 79) =
18.8, p < .01]. Post hoc tests revealed that although the
ratings for good explanations were not different without
neuroscience (M = 0.86, SE = 0.11) than with neuro-
science (M = 0.90, SE = 0.16), ratings for bad expla-
nations were significantly lower for explanations without
neuroscience (M = −0.73, SE = 0.14) than explanations
with neuroscience (M = 0.16, SE = 0.16). Note that this
difference is not due to a ceiling effect; ratings of good
explanations are still significantly below the top of the
scale [t(80) = 21.38, p < .01]. This interaction indi-
cates that it is not the mere presence of verbiage about
neuroscience that encourages people to think more
favorably of an explanation. Rather, neuroscience infor-
mation seems to have the specific effect of making bad
explanations look significantly more satisfying than they
would without neuroscience.
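One way to see where this interaction comes from: in a 2 × 2 design with explanation type varied within subjects and neuroscience between subjects, the interaction term is equivalent to comparing each subject’s (good − bad) difference score across the two groups. The sketch below illustrates this on simulated ratings; the cell means echo those reported above, but the spread (SD = 1.0) and the data themselves are assumptions for illustration only:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 40  # subjects per group, roughly as in the novice study

# Simulated per-subject mean ratings on the -3..+3 scale; cell means
# follow the novice results, SD = 1.0 is an assumed value.
good_without = rng.normal(0.86, 1.0, n)
bad_without = rng.normal(-0.73, 1.0, n)
good_with = rng.normal(0.90, 1.0, n)
bad_with = rng.normal(0.16, 1.0, n)

# The explanation-type x neuroscience interaction reduces to an
# independent-samples t-test on the (good - bad) difference scores;
# the mixed-ANOVA interaction F equals the square of this t.
diff_without = good_without - bad_without
diff_with = good_with - bad_with
t, p = stats.ttest_ind(diff_without, diff_with)
print(f"interaction: F(1, {2 * n - 2}) = {t ** 2:.1f}, p = {p:.4g}")
```

Because neuroscience inflates the bad cell far more than the good cell, the (good − bad) gap shrinks in the With Neuroscience group, which is exactly what the interaction detects.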
This puzzling differential effect of neuroscience in-
formation on the bad explanations may occur because
participants gave the explanations a more generous in-
terpretation than we had expected. Our instructions
encouraged participants to think of the explanations as
being provided by knowledgeable researchers, so they
may have considered the explanations less critically than
we would have liked. If participants were using some-
what relaxed standards of judgment, then a group of
subjects that is specifically trained to be more critical in
judging explanations should not fall prey to the effect
of added neuroscience information, or at least not as strongly.

EXPERIMENT 2

Experiment 2 addresses this issue by testing a group
of subjects trained to be critical in their judgments: stu-
dents in an intermediate-level cognitive neuroscience
class. These students were receiving instruction on the
basic logic of cognitive neuroscience experiments and on
the types of information that are relevant to drawing
conclusions from neuroscience studies. We predicted that
this instruction, together with their classroom experience
of carefully analyzing neuroscience experiments, would
eliminate or dampen the impact of the extraneous neuroscience information.

Figure 1. Novice group. Mean ratings of how satisfying subjects found the explanations. Error bars indicate standard error of the mean.

472 Journal of Cognitive Neuroscience Volume 20, Number 3
Subjects and Procedure
Twenty-two students (10 women; mean age = 20.7 years,
SD = 2.6 years, range = 18–31 years) were recruited from
an introductory cognitive neuroscience class and received
no compensation for their participation. They were in-
formed that although participation was required for the
course, the results of the experiment would have no im-
pact on their class performance and would not be known
by their professor until after their grades had been posted.
They were additionally allowed to choose whether their
data could be used in the published research study, and
all students elected to have their data included.
Subjects were tested both at the beginning of the semester and at the end of the semester, prior to the final exam.
The stimuli and task were identical to Experiment 1,
with one exception: Both main variables, explanation
type and presence of neuroscience, were manipulated
within subjects due to the small number of participants.
Results and Discussion
Preliminary analyses showed no differences in perfor-
mance based on class year, so this variable is not con-
sidered in the main analyses. There was one significant
interaction with sex that is discussed shortly. Responses
to the items were again acceptably consistent (Cronbach’s
α = .74).
As with the novices in Experiment 1, we tested whether
the addition of neuroscience information affects judg-
ments of good and bad explanations. For the students in
this study, we additionally tested the effect of training on
evaluations of neuroscience explanations. We thus ana-
lyzed the students’ ratings of explanatory satisfaction
using a 2 (good explanation vs. bad explanation) × 2
(without neuroscience vs. with neuroscience) × 2 (preclass test vs. postclass test) repeated measures ANOVA
(see Figure 2).
We found a significant main effect of explanation type
[F(1, 21) = 50.9, p < .01], confirming that the students
judged good explanations (M = 0.37, SE = 0.14) to
be more satisfying than bad explanations (M = −0.43,
SE = 0.19).
Although Experiment 1 found a strong effect of the
presence of neuroscience information in explanations,
we had hypothesized that students in a neuroscience
course, who were learning to be critical consumers of
neuroscience information, would not show this effect.
However, the data failed to confirm this hypothesis;
there was a significant main effect of neuroscience
[F(1, 21) = 47.1, p < .01]. Students, like novices, judged
that explanations with neuroscience information (M =
0.43, SE = 0.17) were more satisfying than those without
neuroscience information (M = −0.49, SE = 0.16).
There was additionally an interaction effect between
explanation type and presence of neuroscience [F(1,
21) = 8.7, p < .01], as in Experiment 1. Post hoc analy-
ses indicate that this interaction happens for the same
reason as in Experiment 1: Ratings of bad explanations
increased reliably more with the addition of neurosci-
ence than did good explanations. Unlike the novices, the
students judged that both good explanations and bad
explanations were significantly more satisfying when
they contained neuroscience, but the bad explanations
were judged to have improved more dramatically, based
on a comparison of the differences in ratings between
explanations with and without neuroscience [t(21) =
2.98, p < .01]. Specialized training thus did not discourage the students from judging that irrelevant neuroscience information somehow contributed positively to
both types of explanation.

Figure 2. Student group. Mean ratings of how satisfying subjects found the explanations. Error bars indicate standard error of the mean.
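The reported comparison [t(21) = 2.98] contrasts, within each student, how much neuroscience boosted bad versus good explanations — a paired t-test on two per-subject improvement scores. The sketch below shows this analysis on simulated data; the sample size matches the study, but every mean and spread is an assumption for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 22  # students; all four cells are within-subject

# Simulated per-subject mean ratings (values are illustrative only).
good_without = rng.normal(0.1, 0.8, n)
good_with = good_without + rng.normal(0.5, 0.6, n)  # modest boost
bad_without = rng.normal(-0.9, 0.8, n)
bad_with = bad_without + rng.normal(1.0, 0.6, n)    # larger boost

# Did neuroscience improve bad explanations more than good ones?
# Paired t-test on the two per-subject improvement scores.
boost_good = good_with - good_without
boost_bad = bad_with - bad_without
t, p = stats.ttest_rel(boost_bad, boost_good)
print(f"t({n - 1}) = {t:.2f}, p = {p:.4f}")
```

Pairing the two boosts within each subject removes between-subject variability in overall rating tendency, which is why the fully within-subject design tolerates the smaller sample.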
Additionally, our analyses found no main effect of
time, showing that classroom training did not affect
the students’ performance. Ratings before taking the
class and after completing the class were not significant-
ly different [F(1, 21) = 0.13, p > .10], and there were
no interactions between time and explanation type [F(1,
21) = 0.75, p > .10] or between time and presence of
neuroscience [F(1, 21) = 0.0, p > .10], and there was no
three-way interaction among these variables [F(1, 21) =
0.31, p > .10]. The only difference between the preclass
data and the postclass data was a significant interaction
between sex and neuroscience information in the pre-
class data [F(1, 20) = 8.5, p < .01], such that the dif-
ference between women’s preclass satisfaction ratings
for the Without Neuroscience and the With Neurosci-
ence conditions was significantly larger than this differ-
ence in the men’s ratings. This effect did not hold in the
postclass test, however. These analyses strongly indicate
that whatever training subjects received in the neurosci-
ence class did not affect their performance in the task.
These two studies indicate that logically irrelevant neu-
roscience information has a reliably positive impact on
both novices’ and students’ ratings of explanations, par-
ticularly bad explanations, that contain this information.
One concern with this conclusion is our assumption that
the added neuroscience information really was irrele-
vant to the explanation. Although we had checked our
items with cognitive neuroscientists beforehand, it is
still possible that subjects interpreted some aspect of
the neuroscience information as logically relevant or
content-rich, which would justify their giving higher rat-
ings to the items with neuroscience information. The
subjects’ differential performance with good and bad
explanations speaks against this interpretation, but per-
haps something about the neuroscience information
genuinely did improve the bad explanations.
EXPERIMENT 3

Experiment 3 thus tests experts in neuroscience, who
would presumably be able to tell if adding neuroscience
information should indeed make these explanations
more satisfying. Are experts immune to the effects of
neuroscience information because their expertise makes
them more accurate judges? Or are experts also some-
what seduced by the allure of neuroscience information?
Subjects and Procedure
Forty-eight neuroscience experts participated in the study
(29 women, 19 men; mean age = 27.5 years, SD =
5.3 years, range = 21–45 years). There were 28 subjects
in the Without Neuroscience condition and 20 subjects
in the With Neuroscience condition.
We defined our expert population as individuals who
are about to pursue, are currently pursuing, or have
completed advanced degrees in cognitive neuroscience,
cognitive psychology, or strongly related fields. Our par-
ticipant group contained 6 participants who had com-
pleted college, 29 who were currently in graduate school,
and 13 who had completed graduate school.
The materials and procedure in this experiment were
identical to Experiment 1, with the addition of four de-
mographic questions in order to confirm the expertise
of our subjects. We asked whether they had ever partic-
ipated in a neuroscience study, designed a neuroscience
study, designed a psychological study that did not neces-
sarily include a neuroscience component, and studied
neuroscience formally as part of a course or lab group.
The average score on these four items was 2.9 (SD =
0.9), indicating a high level of expertise among our subjects.
Results and Discussion
Preliminary analyses revealed no differences in perfor-
mance based on sex or level of education, so all subse-
quent analyses do not consider these variables. We
additionally found acceptably consistent responding to
the 18 items (Cronbach’s α = .71).
We analyzed subjects’ ratings of explanatory satisfac-
tion in a 2 (good explanation vs. bad explanation) × 2
(without neuroscience vs. with neuroscience) repeated
measures ANOVA (see Figure 3).
We found a main effect of explanation type [F(1,
46) = 54.9, p < .01]. Just like the novices and students,
the experts rated good explanations (M = 0.19, SE =
0.11) as significantly more satisfying than bad ones (M =
0.99, SE = 0.14).
Unlike the data from the other two groups, the ex-
perts’ data showed no main effect of neuroscience, indi-
cating that subjects rated explanations in the same way
regardless of the presence of neuroscience information
[F(1, 46) = 1.3, p > .10].
This lack of a main effect must be interpreted in light
of a significant interaction between explanation type and
presence of neuroscience [F(1, 46) = 8.9, p < .01]. Post
hoc analyses reveal that this interaction is due to a
differential effect of neuroscience on the good explana-
tions: Good explanations with neuroscience (M = −0.22,
SE = 0.21) were rated as significantly less satisfying than
good explanations without neuroscience [M = 0.41, SE =
0.13; F(1, 46) = 8.5, p < .01]. There was no change in
ratings for the bad explanations (without neuroscience
M = −1.07, SE = 0.19; with neuroscience M = −0.87,
SE = 0.21). This indicates that experts are so attuned to
proper uses of neuroscience that they recognized the in-
sufficiency of the neuroscience information in the With
Neuroscience condition. This recognition likely led to the
drop in satisfaction ratings for the good explanations,
whereas bad explanations could not possibly have been
improved by what the experts knew to be an improper ap-
plication of neuroscience information. Informal post hoc
questioning of several participants in this study indicated
that they were indeed sensitive to the awkwardness
and irrelevance of the neuroscience information in the explanations.
These results from expert subjects confirm that the
neuroscience information in the With Neuroscience con-
ditions should not be seen as adding value to the expla-
nations. The results from the two nonexpert groups are
thus due to these subjects’ misinterpretations of the
neuroscience information, not the information itself.
GENERAL DISCUSSION

Summary of Results
The three experiments reported here explored the im-
pact of adding scientific-sounding but empirically and
conceptually uninformative neuroscience information
to both good and bad psychological explanations. Three
groups of subjects (novices, neuroscience class students,
and neuroscience experts) read brief descriptions of
psychological phenomena followed by a good or bad
explanation that did or did not contain logically irrelevant
neuroscience information. Although real neuropsycho-
logical data certainly can provide insight into behavior
and into psychological mechanisms, we specifically
wanted to investigate the possible effects of the presence
of neuroscience information, regardless of the role that
this information plays in an explanation. The neurosci-
ence information in the With Neuroscience condition
thus did not affect the logic or content of the psycholog-
ical explanations, allowing us to see whether the mere
mention of a neural process can affect subjects’ judg-
ments of explanations.
We analyzed subjects’ ratings of how satisfying they
found the explanations in the four conditions. We found
that subjects in all groups could tell the difference be-
tween good explanations and bad explanations, regard-
less of the presence of neuroscience. Reasoning about
these types of explanations thus does not seem to be
difficult in general because even the participants in our
novice group showed a robust ability to differentiate
between good and bad explanations.
Our most important finding concerns the effect that
explanatorily irrelevant neuroscience information has on
subjects’ judgments of the explanations. For novices and
students, the addition of such neuroscience information
encouraged them to judge the explanations more favor-
ably, particularly the bad explanations. That is, extrane-
ous neuroscience information makes explanations look
more satisfying than they actually are, or at least more
satisfying than they otherwise would be judged to be.
The students in the cognitive neuroscience class showed
no benefit of training, demonstrating that only a semes-
ter’s worth of instruction is not enough to dispel the
effect of neuroscience information on judgments of ex-
planations. Many people thus systematically misunder-
stand the role that neuroscience should and should
not play in psychological explanations, revealing that
logically irrelevant neuroscience information can be
seductive—it can have much more of an impact on par-
ticipants’ judgments than it ought to.
However, the impact of superfluous neuroscience in-
formation is not unlimited. Although novices and stu-
dents rated bad explanations as more satisfying when
they contained neuroscience information, experts did
not. In fact, subjects in the expert group tended to rate
good explanations with neuroscience information as
worse than good explanations without neuroscience,
indicating their understanding that the added neurosci-
ence information was inappropriate for the phenome-
non being described. There is thus some noticeable
benefit of extended and specific training on the judgment of explanations.

Figure 3. Expert group. Mean ratings of how satisfying subjects found the explanations. Error bars indicate standard error of the mean.
Why Are Nonexperts Fooled?
Nonexperts judge explanations with neuroscience in-
formation as more satisfying than explanations without
neuroscience, especially bad explanations. One might be
tempted to conclude from these results that neuroscience
information in explanations is a powerful clue to the
goodness of explanations; nonexperts who see neurosci-
ence information automatically judge explanations con-
taining it more favorably. This conclusion suggests that
these two groups of subjects fell prey to a reasoning
heuristic (e.g., Shafir, Smith, & Osherson, 1990; Tversky
& Kahneman, 1974, 1981). A plausible heuristic might
state that explanations involving more technical language
are better, perhaps because they look more ‘‘scientific.’’
The presence of such a heuristic would predict that
subjects should judge all explanations containing neuro-
science information as more satisfying than all explana-
tions without neuroscience, because neuroscience is itself
a cue to the goodness of an explanation.
However, this was not the case in our data. Both novices
and students showed a differential impact of neuroscience
information on their judgments such that the ratings for
bad explanations increased much more markedly than rat-
ings for good explanations with the addition of neuro-
science information. This interaction effect suggests that
an across-the-board reasoning heuristic is probably not re-
sponsible for the nonexpert subjects’ judgments.
We see a closer affinity between our work and the so-
called seductive details effect (Harp & Mayer, 1998; Garner,
Alexander, Gillingham, Kulikowich, & Brown, 1991; Garner,
Gillingham, & White, 1989). Seductive details, related but
logically irrelevant details presented as part of an argument,
tend to make it more difficult for subjects to encode and
later recall the main argument of a text. Subjects’ attention
is diverted from important generalizations in the text to-
ward these interesting but irrelevant details, such that they
perform worse on a memory test and have a harder time
extracting the most important points in the text.
Despite the strength of this seductive details effect in
this previous work and in our current work, it is not im-
mediately clear why nonexpert participants in our study
judged that seductive details, in the form of neuroscience
information, made the explanations we presented more
satisfying. Future investigations into this effect could an-
swer this question by including qualitative measures to
determine precisely how subjects view the differences
among the explanations. In the absence of such data,
we can question whether something about neuroscience
information in particular did the work of fooling our
subjects. We suspect not—other kinds of information be-
sides neuroscience could have similar effects. We focused
the current experiments on neuroscience because it
provides a particularly fertile testing ground, due to its
current stature both in psychological research and in the
popular press. However, we believe that our results are
not necessarily limited to neuroscience or even to psy-
chology. Rather, people may be responding to some more
general property of the neuroscience information that
encouraged them to find the explanations in the With
Neuroscience condition more satisfying.
To speculate about the nature of this property, people
seeking explanations may be biased to look for a sim-
ple reductionist structure. That is, people often hear
explanations of ‘‘higher-level’’ or macroscopic phenom-
ena that appeal to ‘‘lower-level’’ or microscopic phe-
nomena. Because the neuroscience explanations in the
current study shared this general format of reducing
psychological phenomena to their lower-level neurosci-
entific counterparts, participants may have jumped to
the conclusion that the neuroscience information pro-
vided them with a physical explanation for a behavioral
phenomenon. The mere mention of a lower level of
analysis may have made the bad behavioral explanations
seem connected to a larger explanatory system, and hence
more insightful. If this is the case, other types of logically
irrelevant information that tap into a general reductionist
framework could encourage people to judge a wide
variety of poor explanations as satisfying.
There are certainly other possible mechanisms by
which neuroscience information may affect judgments of
explanations. For instance, neuroscience may illustrate a
connection between the mind and the brain that people
implicitly believe not to exist, or not to exist in such a
strong way (see Bloom, 2004a). Additionally, neuroscience
is associated with powerful visual imagery, which may
merely attract attention to neuroscience studies but which
is also known to interfere with subjects’ abilities to explain
the workings of physical systems (Hayes, Huleatt, & Keil,
in preparation) and to render scientific claims more con-
vincing (McCabe & Castel, in press). Indeed, it is possible
that "pictures of blobs on brains seduce one into thinking
that we can now directly observe psychological processes"
(Henson, 2005, p. 228). However, the mechanism by
which irrelevant neuroscience information affects judg-
ment may also be far simpler: Any meaningless terminol-
ogy, not necessarily scientific jargon, can change behavior.
Previous studies have found that providing subjects with
"placebic" information (e.g., "May I use the Xerox machine; I have to make copies?") increases compliance with a request over and above a condition in which the researcher simply makes the request (e.g., "May I use the Xerox machine?") (Langer, Blank, & Chanowitz, 1978).
These characteristics of neuroscience information
may singly or jointly explain why subjects judged ex-
planations containing neuroscience information as gen-
erally more satisfying than those that did not. But the
most important point about the current study is not that
neuroscience information itself causes subjects to lose
their grip on their normally well-functioning judgment
processes. Rather, neuroscience information happens to
represent the intersection of a variety of properties that
can conspire together to impair judgment. Future re-
search should aim to tease apart which properties are
most important in this impairment, and indeed, we are
planning to follow up on the current study by examining
comparable effects in other special sciences. We predict
that any of these properties alone would be sufficient for
our effect, but that they are more powerful in combina-
tion, hence especially powerful for the case of neuro-
science, which represents the intersection of all four.
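The masking interaction described above can be sketched as a toy simulation. This is purely illustrative: the rating model, its parameters, and the numbers below are invented for the sketch and are not the paper's data or analysis. It merely shows how an irrelevant-information bonus that is largest for bad explanations would obscure an underlying quality effect in nonexperts' ratings under the 2 (good/bad) × 2 (with/without neuroscience) design.

```python
import random

def rate(quality, with_neuro, expert, rng):
    """One hypothetical rating on a -3 (unsatisfying) to +3 (satisfying) scale."""
    base = 1.0 if quality == "good" else -1.0  # true quality effect (invented)
    bonus = 0.0
    if with_neuro and not expert:
        # Nonexperts credit the irrelevant detail; bad explanations gain most,
        # producing the masking interaction (bonus sizes are invented).
        bonus = 1.5 if quality == "bad" else 0.5
    noise = rng.gauss(0.0, 0.5)  # rater variability
    return max(-3.0, min(3.0, base + bonus + noise))

def mean_rating(quality, with_neuro, expert, n=2000, seed=0):
    """Average simulated rating for one cell of the 2 x 2 design."""
    rng = random.Random(seed)
    return sum(rate(quality, with_neuro, expert, rng) for _ in range(n)) / n
```

Under these made-up parameters, nonexperts' mean rating of bad explanations rises by roughly 1.5 points when irrelevant neuroscience is added, while experts' ratings are unaffected.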
Regardless of the breadth of our effect or the mech-
anism by which it occurs, the mere fact that irrelevant
information can interfere with people’s judgments of
explanations has implications for how neuroscience
information in particular, and scientific information in
general, is viewed and used outside of the laboratory.
Neuroscience research has the potential to change our
views of personal responsibility, legal regulation, educa-
tion, and even the nature of the self (Farah, 2005;
Bloom, 2004b). To take a recent example, some legal
scholars have suggested that neuroimaging technology
could be used in jury selection, to ensure that jurors are
free of bias, or in questioning suspects, to ensure that
they are not lying (Rosen, 2007). Given the results re-
ported here, such evidence presented in a courtroom, a
classroom, or a political debate, regardless of the scien-
tific status or relevance of this evidence, could strongly
sway opinion, beyond what the evidence can support
(see Feigenson, 2006). We have shown that people seem
all too ready to accept explanations that allude to
neuroscience, even if they are not accurate reflections
of the scientific data, and even if they would other-
wise be seen as far less satisfying. Because it is unlikely
that the popularity of neuroscience findings in the pub-
lic sphere will wane any time soon, we see in the cur-
rent results more reasons for caution when applying
neuroscientific findings to social issues. Even if expert
practitioners can easily distinguish good neuroscience
explanations from bad, they must not assume that those
outside the discipline will be as discriminating.
Acknowledgments

We thank Paul Bloom, Martha Farah, Michael Weisberg, two anonymous reviewers, and all the members of the Cognition and Development Lab for their conversations about this work. Special thanks are also due to Marvin Chun, Marcia Johnson, Christy Marshuetz, Carol Raye, and all the members of their labs for their assistance with our neuroscience items. We acknowledge support from NIH Grant R-37-HD023922 to F. C. K.
Reprint requests should be sent to Deena Skolnick Weisberg, De-
partment of Psychology, Yale University, P. O. Box 208205, New
Haven, CT 06520-8205, or via e-mail:
Note

1. Because we constructed the stimuli in the With Neuro-
science conditions by modifying the explanations from the
Without Neuroscience conditions, both the good and the bad
explanations in the With Neuroscience conditions appear less
elegant and less parsimonious than their without-neuroscience
counterparts, as can be seen in Table 1. But this design provides
an especially stringent test of our hypothesis: We expect that
explanations with neuroscience will be judged as more satisfying
than explanations without, despite cosmetic and logical flaws.
References

Bloom, P. (2004a). Descartes' baby. New York: Basic Books.
Bloom, P. (2004b). The duel between body and soul. The
New York Times, A25.
Farah, M. J. (2005). Neuroethics: The practical and the
philosophical. Trends in Cognitive Sciences, 9, 34–40.
Feigenson, N. (2006). Brain imaging and courtroom
evidence: On the admissibility and persuasiveness of fMRI.
International Journal of Law in Context, 2, 233–255.
Garner, R., Alexander, P. A., Gillingham, M. G., Kulikowich,
J. M., & Brown, R. (1991). Interest and learning from text.
American Educational Research Journal, 28, 643–659.
Garner, R., Gillingham, M. G., & White, C. S. (1989).
Effects of "seductive details" on macroprocessing and
microprocessing in adults and children. Cognition and
Instruction, 6, 41–57.
Harp, S. F., & Mayer, R. E. (1998). How seductive details
do their damage: A theory of cognitive interest in science
learning. Journal of Educational Psychology, 90, 414–434.
Hayes, B. K., Huleatt, L. A., & Keil, F. (in preparation).
Mechanisms underlying the illusion of explanatory depth.
Henson, R. (2005). What can functional neuroimaging tell
the experimental psychologist? Quarterly Journal of
Experimental Psychology, 58A, 193–233.
Keil, F. C. (2006). Explanation and understanding. Annual
Review of Psychology, 57, 227–254.
Kelemen, D. (1999). Function, goals, and intention: Children’s
teleological reasoning about objects. Trends in Cognitive
Sciences, 3, 461–468.
Kikas, E. (2003). University students’ conceptions of different
physical phenomena. Journal of Adult Development, 10,
Langer, E., Blank, A., & Chanowitz, B. (1978). The mindlessness
of ostensibly thoughtful action: The role of "placebic"
information in interpersonal interaction. Journal of
Personality and Social Psychology, 36, 635–642.
Lombrozo, T. (2006). The structure and function of
explanations. Trends in Cognitive Sciences, 10, 464–470.
Lombrozo, T., & Carey, S. (2006). Functional explanation
and the function of explanation. Cognition, 99, 167–204.
McCabe, D. P., & Castel, A. D. (in press). Seeing is believing:
The effect of brain images on judgments of scientific
reasoning. Cognition.
Rips, L. J. (2002). Circular reasoning. Cognitive Science, 26,
Rosen, J. (2007, March 11). The brain on the stand. The New
York Times Magazine, 49.
Rozenblit, L., & Keil, F. (2002). The misunderstood limits of
folk science: An illusion of explanatory depth. Cognitive
Science, 26, 521–562.
Shafir, E. B., Smith, E. E., & Osherson, D. N. (1990). Typicality
and reasoning fallacies. Memory & Cognition, 18, 229–239.
Trout, J. D. (2002). Scientific explanation and the sense of
understanding. Philosophy of Science, 69, 212–233.
Tversky, A., & Kahneman, D. (1974). Judgment under
uncertainty: Heuristics and biases. Science, 185, 1124–1131.
Tversky, A., & Kahneman, D. (1981). The framing of decisions
and the psychology of choice. Science, 211, 453–458.