Emotions and Motivated Reasoning
1
Defect or Design Feature?:
Toward an Evolutionary Psychology of the Role of Emotion in Motivated
Reasoning
Timothy Ketelaar
Department of Psychology, New Mexico State University
Invited Chapter to appear in:
The Oxford Handbook of Evolution and the Emotions
Laith Al-Shawaf, Ph.D. & Todd Shackelford, Ph.D. (editors)
April 15th, 2021
Total main body of text = 11,326 Words
Author Note
Correspondence concerning this article should be addressed to Timothy Ketelaar, Department of
Psychology, New Mexico State University, Las Cruces, NM 88003-8001. Electronic mail may be
sent to ketelaar@nmsu.edu.
This manuscript is not the official manuscript of record. This manuscript corresponds to the
“in press” version of the Chapter and is subject to final revisions before publication.
Abstract
The current chapter applies an evolutionary lens to motivated reasoning and other forms of
identity-protective cognition. A central assumption is that in many instances in which different
individuals have reviewed the same evidence, but sincerely claim to have observed different
“facts” (e.g., “they saw a Game”) or sincerely assert that they have “rationally” arrived at
contradictory conclusions regarding the same evidence, we are not witnessing a trivial difference
of opinion, but rather a clash of competing worldviews. After introducing several philosophical
caveats from Kant’s Transcendental Psychology that cast doubt on the claim that all members of a
species will necessarily have access to the same “shared reality” or worldview, I review
evolutionary insights into the utility of cognitive biases (i.e., the Smoke Detector Principle),
including the argument that argumentation itself evolved, not for locating truth, but in service of
achieving consensus. Equipped with these evolutionary and philosophical insights, and utilizing
an “Affect-as-Information” framework, I review the literature at the intersection of emotion and
motivated reasoning. I argue that these processes may be more accurately seen, not as “bugs” in
our “mental software,” but as “design features” of the human mind.
Key words: Emotion, Motivated Reasoning, Worldview, Kant
Defect or Design Feature?:
Toward an Evolutionary Psychology of the Role of Emotion in Motivated
Reasoning
In their classic 1954 monograph, “They saw a Game,” Albert Hastorf and Hadley Cantril asked
Princeton and Dartmouth students to watch a motion picture of the 1951 football game between
these two Ivy League rivals. Undergraduates from both institutions were given identical
instructions: watch the game film, record any penalties (taking care to categorize each infraction
as either flagrant or mild) and identify which team was responsible. Newspaper accounts
depicted a brutal contest with numerous penalties and several players leaving the field with
injuries, including Dick Kazmaier, Princeton’s Heisman trophy winning quarterback who left the
game in the second quarter with a broken nose. Although the final score of the game was
undisputed (Princeton defeated Dartmouth 13-0), students' perceptions of the infractions
committed by each squad varied across the two Ivy League institutions.
Despite the fact that students from both schools had watched the same game film, the number and
quality of infractions they “observed” differed substantially. These disparate perceptions were
not random, but varied as a function of whether the infractions were ostensibly committed by
their own team or by their rival. Hastorf and Cantril (1954) observed, for example, that
Dartmouth students characterized only 50% of their own team’s infractions as “flagrant,” whereas
Princeton students categorized the vast majority of these same penalties as “flagrant.” Similarly,
Princeton undergraduates categorized the vast majority of their own team’s penalties as “mild,”
whereas Dartmouth students categorized far fewer of Princeton’s infractions as “mild.” These
striking discrepancies in student’s perceptions of the same football game –perceptions of the same
“objective” evidence--led Hastorf and Cantril (1954, p. 133) to conclude:
the data here indicate that there is no such 'thing' as a 'game' existing 'out there' in
its own right which people merely 'observe.'
Instead, they argued:
The game 'exists' for a person and is experienced by him only insofar as certain
happenings have significances in terms of his purpose.
Today we recognize the selective and distorted perceptions of these Ivy League students as a
classic example of motivated reasoning, a form of cognitive bias in which an actor’s perceptions
of the world appear to reflect a desired, often self-serving, set of conclusions rather than an
accurate account of the evidence (Kunda, 1990).
Motivated Reasoning
Hastorf and Cantril (1954) also recounted the story of a Dartmouth alum who had received a copy
of the same game film shortly after it had been reviewed by the Princeton students. The alum
watched the game film, but could not identify many of the flagrant penalties observed by the
Princeton students. He wired his Dartmouth colleagues, concerned that this footage (of the
Dartmouth players' infractions) might have been edited out of the version of the film that had been
mailed to him:
Preview of Princeton movies indicates considerable cutting of important part
please wire explanation and possibly air mail missing part before showing
scheduled for January 25 we have splicing equipment.
In other words, having failed to locate the flagrant Dartmouth penalties “observed” by the
Princeton students, this Dartmouth alum assumed that these events must have been cut from the
film. What he did not realize was that the film had not been altered. The striking differences
between his own observations and the perceptions of students from a rival institution were not a
reflection of different content depicted in two differently edited versions of the same game film
(they viewed the same recording), but instead reflected markedly different perceptions of the
same recorded images. Given these striking findings, Hastorf and Cantril’s (1954) paper has
become one of the most widely recognized illustrations of motivated reasoning.
The motivation to arrive at a particular set of inferences—including the formation of conclusions
that appear to be at odds with the available evidence—is not limited to biased observations of
sporting contests or perceptual errors generated while viewing ambiguous stimuli in a psychology
laboratory (Balcetis & Dunning, 2006); there are now numerous studies documenting blinkered
interpretations of scientific data and "legally significant" facts, including biased evaluations of
scientific evidence (Ditto, Munro, Apanovitch, Scepansky, & Lockhart, 2003; Stanovich, West, &
Toplak, 2013). In one well-known demonstration of how motivated reasoning can creep into the
evaluation of scientific evidence, Kunda (1987) asked undergraduates to read and evaluate a
scientific paper concerning the effects of caffeine consumption on the risk of a woman developing
painful lumps in her breast. This “scientific paper” was a fiction created by Kunda and her
colleagues, but the results were compelling evidence for motivated reasoning—demonstrating that
female students who were also heavy coffee drinkers found more “flaws” in this research paper
than did their male counterparts or women who drank less coffee.
In another series of studies, Kahan and colleagues (Kahan, Peters, Cantrell Dawson, & Slovic,
2017) demonstrated how easy it is for individuals to draw conclusions that are at odds with the
scientific evidence, especially when it is possible to construct an alternative interpretation that is
more in line with one’s pre-existing political beliefs. In a series of experiments, Kahan and
colleagues (2017) first presented participants with the results of a hypothetical clinical trial of a
new experimental skin cream for treating rashes. In this fictitious experiment, one group of
patients was described as having been treated with the skin cream for two weeks, whereas a
second (control) group did not receive the treatment. The researchers created a contingency table
(see Figure 1 below) to display the number of patients (treatment vs. no treatment) whose rash got
better or worse.[1] The researchers purposefully constructed the data to make arriving at a correct
conclusion challenging—i.e., the contingency table depicted a large number of patients whose
rash got better in the treatment condition (see Table A in Figure 1 below); but this particular
finding was misleading because the percentage of improved cases was higher (84% versus 75%)
among patients who did not receive the skin cream treatment, thus demonstrating that the skin
cream was not more effective (compared to doing nothing, see Table A in Figure 1).
[1] Kahan and colleagues (2017) counter-balanced whether the clinical trial was a success or a failure across participants (as seen in Tables A & B in Figure 1).
Figure 1. Treatment Condition (adapted from Kahan et al., 2017)
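The reasoning trap that Kahan and colleagues (2017) built into these stimuli is easy to verify arithmetically. The sketch below uses cell counts chosen to reproduce the percentages reported above (84% vs. 75%); the exact counts in the published stimuli may differ, so treat the numbers as illustrative:

```python
# Improvement rates from a 2x2 contingency table of the kind used in
# Kahan et al.'s (2017) skin-cream task. Cell counts are illustrative,
# selected only to match the 84% vs. 75% figures reported in the text.

def improvement_rate(got_better, got_worse):
    """Proportion of patients in a condition whose rash improved."""
    return got_better / (got_better + got_worse)

# Treatment group: many patients improved in absolute terms...
treated = improvement_rate(got_better=223, got_worse=75)
# ...but the untreated (control) group improved at a higher *rate*.
untreated = improvement_rate(got_better=107, got_worse=21)

print(f"treated:   {treated:.0%}")    # 75%
print(f"untreated: {untreated:.0%}")  # 84%
print("cream more effective than nothing?", treated > untreated)  # False
```

Because the treatment row contains more improved patients in absolute terms (223 vs. 107 in this illustration), a reader who compares raw counts rather than rates will draw the wrong conclusion--precisely the trap the covariance-detection task is designed to set.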
Before testing for motivated reasoning, Kahan and colleagues (2017) first established that
participants were capable of drawing correct conclusions from contingency table data, regardless
of their political ideology. Consistent with this assumption, participants previously identified as
scoring higher on a measure of numeracy (the ability to draw correct inferences from numeric
data) were—unsurprisingly—much more likely to generate correct inferences from the
contingency table compared to their less numerate counterparts, regardless of their political
affiliation. However, when the topic of the study was dramatically switched in a second
experiment--by simply changing the labels in the contingency table to reflect a much more
politically charged topic (i.e., the effectiveness of gun control policy in reducing crime)--
motivated reasoning was clearly present in participants' interpretations of the same contingency
tables with switched labels (see Tables C & D in Figure 1).
More specifically, when asked to interpret a contingency table displaying data on the
effectiveness of a “concealed firearms ban,” participants who scored highest in numeracy were
more likely to supply the correct conclusion, but only when the data supported their political
views. By contrast, when the data in the contingency table directly contradicted their political
views, these same highly numerate participants were less likely to generate the correct conclusion
from the data. In short, across several studies, Kahan and colleagues (2017) found that
individuals are more likely to display motivated reasoning when interpreting evidence that is
relevant to their political beliefs compared to when they are asked to interpret the same data
presented as reflecting a much less politically-charged topic (e.g., evaluating the effectiveness of
a skin cream).
In summary, the tendency to defend an identity-relevant worldview with biased reasoning has
been conceptually replicated many times (see Ditto & Lopez, 1992; Ditto et al, 1998, 2003;
Kahan, Landrum, Carpenter, Helft, & Jamieson, 2017; Klein & Harris, 2009; Kunda, 1990, 2001;
Haidt, 2012; McKenna, 2021; Schaller, 1992; Taber & Lodge 2006) and is a central feature of the
phenomenon of motivated reasoning, or what is sometimes called “myside bias” (Stanovich,
West, & Toplak, 2013).
Overview
This chapter applies an evolutionary lens to motivated reasoning and other forms of identity-
protective cognition. A central assumption is that our understanding of cognitive bias and
dysrationalia is incomplete without an appreciation of the role that emotions play in these
processes. Moreover, this focus on emotion aligns well with a common interpretation of Hastorf
and Cantril’s (1954) classic study of motivated reasoning:
The students’ emotional stake in affirming their loyalty to their institutions…had
unconsciously shaped what they had seen when viewing events captured on film.
This study is now recognized as a classic demonstration of “motivated cognition,”
the ubiquitous tendency of people to form perceptions, and to process factual
information generally, in a manner congenial to their values and desires. (Kahan
et al., 2012, p. 853, emphasis added).
After introducing several philosophical caveats from Kant’s Transcendental Psychology that cast
doubt upon the claim that all members of a species will necessarily have access to the same
“shared reality,” I review evolutionary insights into the utility of cognitive biases (i.e., the Smoke
Detector Principle), including the argument that argumentation itself evolved, not for locating
truth, but for achieving consensus (Mercier & Sperber, 2011, 2017). Equipped with these
evolutionary and philosophical insights, and adopting an “Affect-as-Information” framework, I
summarize the scientific literature on emotion and motivated reasoning.
Kantian Idealism and Evolutionary Psychology
Motivated reasoning is typically employed as a strategy to promote or defend a cherished set of
beliefs (i.e., a worldview). Before we explore the role of emotion in this biased form of
reasoning, it will be helpful to first consider how we acquire our basic assumptions about the
social and physical world. In his Critique of Pure Reason, Kant (1781) argued that any discussion
of metaphysical reality should begin by acknowledging that our perceptions of “reality” are
constructed, even determined, by our psychological faculties.
I am not claiming that Kant was arguing for philosophical relativism, post-modernist social
constructivism, or epistemic nihilism. Kant was not arguing that it was meaningless to talk about
a "real" world existing independently of our own minds. Instead, Kant was pointing out that even if
we assume--as most scientists do--that such a physical world exists, it may be philosophically
intractable to accurately characterize its existence (a view often referred to as Idealism; see
Kitcher, 1990). This difficulty in establishing the nature of metaphysical reality arises, according
to Kant, because our ability to understand the physical world, or to even conceptualize its
existence, is ineluctably dependent upon the psychological faculties with which we perceive that
same world. It follows that any two individuals with somewhat different mental faculties, or who
are experiencing different states of their own faculties can (and often do) generate different
perceptions of the external world. Consider, for example, a neuro-typical individual who readily
distinguishes the colors pink and blue. When this individual encounters a florist’s decorative
arrangement of Forget-me-nots, the difference between pink and blue flowers is literally “self-
evident." Yet, for approximately 4% of the human population (8% of men) who have a form of
color blindness (Caufield, 2021), this “self-evident” perception is not possible without special
aids. From the perspective of a congenitally color blind observer, their initial exposure to the
poetic couplet “Roses are red, Violets are blue” can be a non sequitur in the same sense that
judging the “appropriate amount of eye contact” during informal conversation is a daunting social
judgment task for the approximately 5.4 million individuals in the U.S. who have the same visual
acuity as their compatriots but find themselves located in the extreme tail of the distribution of a
psychological trait known as systematizing-empathizing (also referred to as Autism Spectrum
Disorder, see Baron-Cohen, 2003; Silberman, 2015). In short, what is in our heads (our specific
mental faculties) can both enable and constrain the "reality" that we perceive "in the world."[2]
Far from being a pedantic philosophical exercise, these Kantian insights continue to generate
coherent debates regarding the metaphysics of perception (e.g., for contemporary debates
regarding whether colors are an objective property of the world, see Thompson, Palacios, &
Varela, 1991; for a larger discussion of the qualia debate, see Dennett, 1991). Although Kant
believed that a metaphysics was possible, he cautioned that any rigorous study of the mind (i.e.,
any scrupulous science of psychology) should take into account this fundamental recognition that
our understanding of the world--including our own minds and the “self-evident” perceptions they
generate--is not achieved via a pure, unbiased faculty of reasoning. After all, Kant's most
influential treatise was not titled Approbation for the Perfection, Breadth, and Potency of Human
Reasoning, but rather the far more modest Critique of Pure Reason.
Kant’s Transcendental Psychology (see Kitcher, 1990, for an accessible overview) is quite
consistent with evolutionary psychology. Both frameworks allow us to appreciate how different
members of the same species can (and often do) encounter different “realities” by virtue of
possessing reliably developing[3] individually different evolved psychological faculties. To
illustrate the compatibility between Kantian idealism and evolutionary psychology, let’s consider
several examples from the domain of olfactory and gustatory perception. Many species, including
humans, possess a vast repertoire of context-sensitive ingestive behaviors (eating, drinking) that
enable members of that species to solve a wide range of adaptive problems centering around the
more general problem of maintaining metabolic homeostasis (i.e., efficiently regulating the
intake, processing, and storage of potential energy, etc.). To surmount these adaptive challenges
many species have evolved a variety of bio-mechanical devices and strategies such as chewing,
swallowing, digesting, for regulating their nutrient intake. More relevant to Kantian
transcendental psychology, a number of species—including humans--possess specialized
cognitive (information-processing) systems that appear to be “designed” to generate subjective
psychological states corresponding to the organism’s current metabolic state (e.g., hunger, thirst,
smell, taste, etc.; see Berridge, 1991; Kringelbach & Berridge, 2017).

[2] Kant's idealism can appear to be the polar opposite of Gibson's (1979) ecological psychology because Gibson focused much less on the question of "What is inside your head?" (i.e., mental faculties) and more on the question of "What is your head inside of?" However, Gibson (1979) recognized that the affordances of an organism's environment (i.e., its umwelt) are the result of an interaction between the historically recurrent invariant properties of the organism's ancestral environments and the specific evolved perceptual and psychological faculties--arising over the organism's evolutionary history--that exploit and respond to those affordances.

[3] Consistent with Kant's transcendental psychology, the past half century of evolutionary-developmental science has identified a plethora of reliably developing, often highly environmentally sensitive, neurologically based traits capable of generating adaptively patterned, individually calibrated perceptions and representations of the physical and social world (Barkow, Tooby, & Cosmides, 1992; Boyce & Ellis, 2005; Tooby & Cosmides, 1990a). Examples of reliably developing and individually tailored evolved psychological faculties include relatively well understood perceptual-cognitive systems such as depth perception (calibrated to reliably appearing interoceptive cues of self-propelled locomotion; Dahl et al., 2013), language acquisition devices (with distinct epigenetic activating conditions; Pinker, 1994, 1999), and episodic memory (tailored to the physical constraints of the world, thus making it difficult or impossible to recall or recollect "physically impossible objects and motions"; Schacter et al., 1991; Shiffrar & Freyd, 1991).

These specialized information-processing devices appear to generate an adaptively patterned "augmented reality"
that the organism deploys while foraging for nutrients. Consider, for example, the reliably
emerging changes observed in the subjective sense of smell and taste of certain foods for women
in the first trimester of their pregnancy. This temporary shift in perceptions (compared to later
trimesters) appears to be well suited to the avoidance of teratogens when the fetus is most
vulnerable to birth defects (see Lieberman & Carlton, 2018, for a review). To take a more
familiar example, consider the reliable finding that the subjective “pleasantness” of the smell and
taste of a favorite food (e.g., a slice of pizza) will differ dramatically dependent upon whether the
individual is in a metabolic state of caloric need (i.e., hungry) compared to when that same
individual is sated (i.e., after consuming three slices), a well-documented phenomenon known as
alliesthesia (Cabanac, 1971). Nor should it be surprising that certain species which--in their
ancestral environments--evolved to ingest a limited range of dietary substances, will often assign
different, largely species-specific, subjective “values” to certain edible substances. For example,
if the ancestral diet of koalas and panda bears consisted almost exclusively of eucalyptus leaves
and bamboo (respectively), an evolutionary biologist might not be surprised to discover that the
subjective value assigned to these two substances—i.e., how “good” these foods taste--might
differ systematically between the two species, suggesting that the degree of concordance between
the adaptive “benefits” of consuming a particular food in ancestral environments and the current
psychological “value” assigned to that food is not accidental. To entertain a more striking
example, consider that the subjective “taste” of dung surely varies much less as a function of
whether you are hungry or sated than upon whether you are a member of the species Homo
sapiens or of the superfamily Scarabaeoidea (i.e., the dung beetles, for whom animal feces is a dietary staple). From
an evolutionary perspective it is not surprising to observe that these subjective psychological
states— referred to as “qualia”--can differ substantially from one species to the next. Moreover,
it is not surprising that our evolved “psychological software” generates these subjective states in a
context-sensitive (and therefore individually different) manner, producing perceptual
representations of the environment that are attuned to specific changes in the organism’s
momentary physiological state (see Berridge, 1999) and uniquely shaped to the historical fitness
affordance landscape for that particular species (Buss, 1991; Dangles et al., 2009; Tooby &
Cosmides, 1990b).
Because different species possess distinct perceptual faculties uniquely suited to the invariant
features of their ancestral environments, different species can, in principle, experience different
perceptual worlds—a supposition famously explored by the philosopher Thomas Nagel in his
(1974) essay “What is it like to be a bat?” Although Nagel came to the conclusion that it would
be impossible to know the mind of other species, over the past half century philosophers of mind,
comparative psychologists, and animal cognition researchers have cast doubt upon Nagel’s
pessimistic assertion. Philosophers have demonstrated that although it is challenging to
understand how another organism perceives the world, it is not impossible (see Dennett, 1991,
Dennett & Hofstadter, 2000). Moreover, animal cognition researchers and comparative
psychologists have studied the “metaphysics” of numerous species, both in the lab as well as in
their natural environments, and several researchers have spent much of their careers developing
and testing hypotheses regarding how several primate species perceive and represent their social
and physical environments (Cheney & Seyfarth, 1990, 2007; Povinelli, 2003). Evolutionary
biologists have even coined the term "umwelt" precisely for the purpose of describing these
species-specific perceptual worlds anticipated by Kant (see Baggs & Chemero, 2018; Burnett,
2011):
This umwelt differs for each organism, which means that it is difficult for us to
truly understand how another organism perceives the world…It should also be
noted that it is common for sensory systems to change with development of an
animal, meaning that the umwelt that organism inhabits can often change over the
course of its life (Dangles et al., 2009)...This perceptual world is highly dependent
upon the senses that a particular organism possesses, although it is also affected
by the internal workings of an animal's nervous system at any given time
(Burnett, 2011, p. 75).
From an evolutionary perspective it is not surprising to observe that the umwelt for humans does
not include an ability to perceive the electrical fields generated by other organisms, even though
these fields are an important feature of the physical world experienced by sharks. This is
the case because many cartilaginous fish, including sharks, have evolved specialized perceptual
organs, known as the ampullae of Lorenzini, which allow them to detect the electric fields of their prey
(Camperi, Tricas, & Brown, 2007). Thus, if one accepts this evolutionary-Kantian insight (that an
organism’s perceptions of the world are actively constructed by their evolved senses), it is easier
to appreciate that different species--by virtue of possessing different mental faculties tailored to
the unique adaptive challenges faced by that species—will not experience identical perceptual
worlds. Similarly, it follows that different members of the same species encountering the same
physical environment, and even the same individual experiencing the same environment in
different contexts (e.g., while hungry vs. sated), will not experience the same umwelt. In this
light, the observation that our evolved reasoning mechanisms often generate perceptual worlds
that appear biased or flawed---especially when compared to a single objective standard—may not
be so mysterious. Several evolutionary scholars have suggested that evolved psychological
mechanisms capable of generating context-sensitive representations of the world, including
systematically self-serving representations, might be better understood as evolved design features
rather than defects (Haselton & Nettle, 2006; Nesse, 2001a, 2005).
The Evolution of Adaptive Biases: The Smoke Detector Principle
The software engineer’s response “that’s not a bug, it’s a feature” captures the evolutionary logic
underlying why certain “design features” can appear to be avoidable errors or unnecessary biases
(Nesse, 2001a, 2005). Natural and sexual selection are not teleological processes and thus one
should not expect that evolution will invariably lead to adaptive solutions optimally attuned to all
relevant features of the environment; this is especially the case for novel aspects of current
environments that differ substantially from the environments in which a psychological mechanism
evolved (Nesse, 1994, 2019). Moreover, in regard to any attempt to reverse-engineer the mental
software that humans evolved for perceiving their physical and social world, is it not reasonable
to begin by acknowledging Kant's arguments about the intractability of determining what an
"objective" (as opposed to subjective) map of the world would even look like? Thus, a Kantian
approach to evolutionary psychology reminds us that the computational task of generating an
effective “mental map” of the specific threats and opportunities in an organism’s environment –
i.e., the fitness affordance landscape for that particular member of that particular species at that
particular point in its life history--will ineluctably depend upon the nature and conditional state
of the specific mental faculties that the organism evolved for perceiving the invariant statistical
properties of that world (Gibson, 1979).
Change the operating conditions for any particular mental faculty that an organism employs to
perceive its physical or social milieu, and you will likely change--if only temporarily--that
individual’s mental map of the world. Consider, for example, how the mental map of a
chimpanzee’s social network might change after the death of an important ally (de Waal, 1989),
or how humans might perceive their political leaders differently while experiencing an irrelevant
happy mood as compared to when they are experiencing an irrelevant sad mood (see Forgas &
Moylan, 1982). Similarly, if you switch your focus to the mind of a different species you may
suddenly find that this new species operates with a different mental map of the world, perhaps
even a map that is incompatible with the perceptual world experienced by other species. It is well
established, for example, that adult humans reliably perceive a pile of dung as a source of
dangerous pathogens (Lieberman & Carlton, 2018), whereas a mature dung beetle will
invariably interpret the same stimulus as a propitious opportunity to ingest its favorite meal.
These observations are not evidence that one species has somehow generated a more accurate
map of the world. Instead, these examples illustrate how the trade-offs associated with employing
a “highly accurate” map of the world versus constructing a subjective, yet “accurate enough” map
are species-specific and uniquely tailored to the ecological niche occupied by that species in its
ancestral environments (Todd & Gigerenzer, 2012; Haselton & Nettle, 2006).
The past several decades of research in evolutionary-developmental psychology reveals that
humans are endowed with a wide variety of perceptual faculties that reliably unfold over the
course of development in a context-sensitive fashion (Boyce & Ellis, 2005). Accordingly,
different members of the same species can–by dint of their having different developmental
histories, or finding themselves in different environments—experience different perceptual
worlds systematically calibrated to their developmental trajectory or their current circumstances.
From the perspective of Kantian idealism and evolutionary psychology, the cost-benefit calculus
associated with possessing such specialized and environmentally-sensitive psychological
faculties—evolved for reasoning about the social and physical world-- is not predicated on these
faculties generating precisely accurate representations of the world. It turns out, for example, that
some organisms can prevail in the grand Darwinian steeplechase of life by possessing reasoning
mechanisms that place their bets in accord with an adaptationist interpretation of Pascal’s Wager
–an evolutionary gamble in which it becomes “rational” to “over-estimate” the probability of
extremely improbable events if the costs associated with failing to predict these events would be
catastrophic[4] (Nesse, 2019; see also Taleb, 2010, 2014). Regardless of whether these "black
swan” events result in catastrophic losses or windfall profits, what matters is that the organism is
able to translate its predictive success into greater reproductive success (i.e., greater success at
passing on the genetic basis for these risk-sensitive perceptual systems; see Nesse, 2001a, 2005).
When the metric for evaluating “success” is evolutionary fitness (rather than accuracy), the
benefits associated with an information-processing system “designed” to grossly over-estimate
the likelihood of a highly improbable event can be worth the metabolic costs associated with the
organism’s hyper-sensitivity to environmental cues that were reliable predictors of rare but
catastrophic outcomes in ancestral environments. Appreciating the evolutionary tradeoffs
between “highly accurate performance” and “good enough performance” has been of such
significance to Darwinian medicine and evolutionary psychiatry that these disciplines have
developed a special nomenclature to capture this insight: the Smoke Detector Principle (Nesse,
2001a, 2005). According to this principle, it can be evolutionarily beneficial to "over-spend"
resources in preparing for a rare--but catastrophic--event such as an attack by a dangerous
predator simply because:

The costs of such responses tend to be low compared to the benefits of avoiding
danger. So when danger may or may not be present, the small cost of a response
ensures protection against a much larger harm. That is why we put up with false
alarms from smoke detectors (Nesse, 2019, pp. 73-74).

⁴ For example, the annual cost savings accrued by several U.S. cities that chose to prepare for a rare 100-year flood
event--in lieu of preparing for a much less probable, but even more devastating 500-year flood event--were rendered
irrelevant once the much less probable event occurred [an unfortunate set of circumstances experienced by residents
of New Orleans (Hurricane Katrina) and Houston, which experienced three 500-year floods between 2015 and 2017].
See Taleb (2010, 2014) for a more detailed account of this point.
An understanding of the role of tradeoffs can also help us to appreciate the evolutionary origins of
perceptual organs with design flaws, such as the optical “blind spot” in the vertebrate eye, the
result of a lack of light-sensitive receptors where the optic nerve attaches to the retina (Nesse,
2019). In this case, a more "optimally" designed eye--i.e., one without a blind spot--has been
precluded by the vagaries of evolutionary history, including path dependencies associated with
epigenetic processes and gene expression. Simply put, once natural selection went down the path
of connecting the optic nerve to the back of the vertebrate retina, this made the evolution of a
vertebrate eye without a blind spot improbable. Appreciating these evolutionary tradeoffs allows
us to understand how the costs associated with a flawed perceptual organ--an eye with a blind
spot--can be offset by the fact that even a somewhat poorly designed perceptual organ provides
greater benefits than an even more poorly designed organ, or no eye at all. Similarly, we might
appreciate how the costs associated with a reasoning mechanism that generates cognitive biases
can be offset by the fact that even a somewhat "inaccurate" reasoning mechanism can provide greater
benefits than an even more poorly designed reasoning mechanism, or no reasoning ability at all.
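The cost-benefit logic of the Smoke Detector Principle reduces to a simple expected-value rule: respond whenever the probability of danger multiplied by the cost of harm exceeds the cost of a response. The sketch below is illustrative only; the function names and cost figures are hypothetical, not drawn from Nesse's work.

```python
# Sketch of the Smoke Detector Principle (Nesse, 2001a, 2005) as an
# expected-cost rule. All names and numbers are hypothetical illustrations.

def should_respond(p_danger: float, cost_harm: float, cost_response: float) -> bool:
    """Respond iff the expected harm from ignoring a cue exceeds the response cost."""
    return p_danger * cost_harm > cost_response

def response_threshold(cost_harm: float, cost_response: float) -> float:
    """Break-even probability: cues more likely than this merit a response."""
    return cost_response / cost_harm

# Predator attack: catastrophic harm (1000 units), cheap flight response (1 unit).
print(response_threshold(cost_harm=1000, cost_response=1))             # 0.001
print(should_respond(p_danger=0.01, cost_harm=1000, cost_response=1))  # True
```

Note that the break-even threshold is just the ratio of the two costs: with a cheap response (1 unit) and a catastrophic harm (1,000 units), it pays to respond to any cue with better than a 1-in-1,000 chance of signaling real danger, even though the vast majority of responses will be false alarms.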
Motivated Reasoning as a Tool for Persuasion: The Argumentative Theory of Reasoning
Despite considerable evidence suggesting that humans routinely engage in flawed and biased
reasoning strategies (Gilovich, 1991; Griffin, & Kahneman, 2002; Kahneman, Slovic, & Tversky,
1982), it is still widely assumed that bolstering our capacity for reasoning will necessarily lead to
improved knowledge and better decision-making. By contrast, Mercier and Sperber (2011, 2017)
suggest that efforts aimed at promoting more sophisticated forms of reasoning in pursuit of
increased rationality might backfire if our capacity for reasoning evolved not for truth-seeking,
but for other purposes. More specifically, their Argumentative Theory of Reasoning (Mercier &
Sperber, 2011, 2017) contends that human reasoning evolved not primarily for the purpose of
locating “truths” about the world (a point that Kant might appreciate), but to win arguments and
achieve consensus. As Brockman (2011) notes:
The idea here is that the confirmation bias is not a flaw of reasoning, it's actually a
feature. It is something that is built into reasoning; not because reasoning is
flawed or because people are stupid, but because actually people are very good at
reasoning — but they're very good at reasoning for arguing.
If our capacity for sophisticated reasoning evolved for purposes other than truth-seeking, might
this explain the pervasiveness—perhaps even the utility—of many common forms of cognitive
bias? According to Mercier and Sperber (2011, 2017), our intuitive capacities for reasoning so
frequently lead to outcomes that are irrational (Gilovich, Griffin, & Kahneman, 2002) not because
humans are incapable of sound reasoning, but because reasoning evolved to aid us in intuitively
and systematically looking for arguments to justify our beliefs and actions. As many experienced
attorneys and politicians know, strategies employed to win arguments or achieve consensus
sometimes fall short when measured against more objective standards
of truth-seeking. Seen in this light, many examples of “poor reasoning,” such as confirmation
bias (see Nickerson, 1998, for a review) and other forms of motivated reasoning, appear to
serve an identity-protective function rather than a truth-seeking one (see Kahan,
Hoffman, Braman, Evans, & Rachlinski, 2012; Kahan, Peters, Wittlin, Slovic, Larrimore
Ouellette, Braman, & Mandel, 2012; Kahan, Peters, Dawson, & Slovic, 2017; McKenna, 2021).
Examples of common argumentative biases include: 1) the tendency to spend more effort
evaluating arguments that one disagrees with (Edwards & Smith, 1996; Taber & Lodge, 2006),
2) the tendency to search for counter-arguments when one disagrees with an argument (Petty &
Cacioppo, 1979; Eagly, Kulesa, Brannon, Shaw, & Hutson-Comeaux, 2000), and 3) the tendency
to search for flaws in an opposing argument, stopping once the search has uncovered a specific
shortcoming (e.g., problems with the experimental design, concerns regarding statistical
reasoning, or a flawed inference somewhere in the model, see Klaczynski, 1997; Klaczynski &
Robinson, 2000; Perkins, 1985; Perkins, Farady, & Bushey, 1991). In short, Mercier and Sperber
(2011, p. 72, emphasis added) argue that:
motivated reasoning leads to a biased assessment: Arguments with unfavored
conclusions are rated as less sound and less persuasive than arguments with
favored conclusions.
If our capacity for reasoning functions as a tool for argumentation (see Haidt, 2001, 2012, and
Harris, 2010, for similar views of moral reasoning), sophisticated argumentative skills may not
always be deployed in the service of “objectively” evaluating competing arguments; they can also
be employed in the defense of a weak, but self-serving, line of reasoning:
[W]here does reason come into the picture? It is an attempt to justify the choice
after it has been made. And it is, after all, the only way we have to try to explain
to other people why we made a particular decision. But given our lack of access
to the brain processes involved, our justification is often spurious: a post-hoc
rationalization, or even a confabulation (Frith, 2008, p. 45).
Consistent with the view that reasoning is a tool for persuasion rather than truth-seeking,
etymologists have noted that the word "sophisticated" derives from the Greek sophistēs, which
has a positive connotation as “wise man” or “expert,” but is also the root of the word
“sophistry.” Along these lines, several studies have found that individuals with greater
knowledge of science and superior technical reasoning skills ironically arrive at more (not less)
polarized conclusions when confronted with evidence pertaining to culturally controversial issues
(Drummond & Fischhoff, 2017; Kahan, Peters, Wittlin, Slovic, Larrimore Ouellette, Braman, &
Mandel, 2012; Kahan, Landrum, Carpenter, Helft, & Jamieson, 2017). These findings are in line
with what some scholars have referred to as the "Intelligence Trap⁵," a phenomenon in which
cognitive bias—including confirmation bias, over-confidence bias, and motivated reasoning—is
more (not less) common among participants with ostensibly superior reasoning ability (see Heuer,
1999, 2019; Robson, 2019). These findings have prompted some social scientists to suggest that
the association between greater scientific acumen and more polarized opinions is a plausible
explanation for the ironic finding that individuals in more secular postindustrial societies often are
more (not less) skeptical of science than individuals in less secular societies:

Indeed, the more secular postindustrial societies, exemplified by the Netherlands,
Norway, and Denmark, prove most skeptical toward the impact of science and
technology, and this is in accordance with the countries where the strongest public
disquiet has been expressed about certain contemporary scientific developments
such as the use of genetically modified food, biotechnological cloning, and
nuclear power (Norris & Inglehart, 2009, p. 67).

⁵ Several scholars have recently argued that rational/analytical ability is independent of general intelligence (for a
review, see Stanovich, Toplak & West, 2016, 2018). In this light, individuals with greater analytical ability may find
themselves caught in an "Intelligence Trap" whereby their increased argumentative skill is problematically associated
with an increasingly sophisticated ability to "rationalize," a cognitive tendency that may operate at the expense of
other more valid inference-making strategies (see Mercier & Sperber, 2017; Robson, 2019 for reviews).
Although such findings run contrary to the common belief that science skepticism is associated
with low levels of scientific literacy or poor reasoning abilities (see Kahan et al., 2012), these
results are consistent with the argumentative theory of reasoning. The consistent evidence that
motivated reasoning is common, even among skilled reasoners, when they are debating
contentious issues has prompted some researchers to conclude that:
divisions over climate change stem not from the public’s incomprehension of
science but from a distinctive conflict of interest: between the personal interest
individuals have in forming beliefs in line with those held by others with whom
they share close ties and the collective one they all share in making use of the best
available science to promote common welfare. (Kahan, Peters, Wittlin, Slovic,
Larrimore Ouellette, Braman, & Mandel, 2012, p. 2, emphasis added).
An uncharitable conclusion one might draw from the motivated reasoning literature is that human
reasoning is so biased that neither scientists nor science-based arguments can be trusted.
However, a more nuanced--and arguably more accurate--conclusion is that while rhetorical skill
can be an effective tool for promoting rational discourse and truth-seeking, these same
argumentative talents can sometimes create more discord than insight when they are employed for
self-serving purposes.
Affect-as-Information
To illustrate how research at the intersection of emotions and motivated reasoning is amenable to
an adaptationist analysis, I utilize the “affect-as-information” model as a framework for
appreciating the evolutionary tradeoffs associated with emotional influences on human reasoning.
The affect-as-information model (see Schwarz & Clore, 1983, 2003) views the feeling states (i.e.,
affective experiences) that accompany each emotion as important sources of information that
systematically influence a wide variety of thought processes, including many forms of reasoning
once conceptualized as purely “cold” cognitive processes. Research in the affect-as-information
tradition has shown that many forms of reasoning, from judgments of life satisfaction to
perceptions of political leaders, are easily influenced by emotions (even incidental emotions).
When seen through the lens of an evolutionary perspective, an affect-as-information framework is
particularly helpful in identifying aspects of emotional information-processing that are good
candidates for evolved “design features.” For example, the aversive feeling state that
accompanies the experience of guilt has been shown to reliably exert a motivating influence on
cooperative behavior by virtue of “disincentivizing” self-interested behavior (Ketelaar & Au,
2003). Along these lines, I have suggested that our capacity to produce “guilty feelings” may
have evolved, in part, to provide us with “information” about the “costs” of not cooperating in
situations that resemble indefinitely repeated social bargaining games (Ketelaar, 2004, 2006).
This interpretation harkens back to Adam Smith’s (1759) view that certain emotions operate as
“moral sentiments” that “commit” us to pursuing a more virtuous course of action, in part, by
enabling us--like Odysseus strapped to the mast--to overcome the immediate attraction of less
virtuous courses of action. Consistent with Adam Smith’s view, evolutionary interpretations of
guilt have converged on the idea that this emotion evolved as a “commitment device” that, when
activated, provides the agent with a powerful incentive to stay the “cooperative” course,
especially when we are exposed to spuriously attractive immediate incentives that run contrary to
our long-term interests (see Frank, 1988, 2001; Hirschleifer, 1987, 2001; Ketelaar, 2004, 2006;
Nesse, 2001b). Consider, for example, how strong feelings of guilt can commit a person to a
long-term goal of losing weight by enabling them to overcome the immediate attraction of a
second piece of cake. In this manner, emotional commitment devices, such as guilt:
Can and do compete with feelings that spring from rational calculations about
material payoffs... consider, for example, a person capable of strong guilt
feelings. This person will not cheat even when it is in her material interest to do
so. The reason is not that she fears getting caught but that she simply does not
want to cheat. Her aversion to feelings of guilt effectively alters the payoffs she
faces. (Frank, 1988, p. 53, emphasis in the original).
An affect-as-information framework can also help us to understand why a “well-designed”
emotion mechanism might generate responses that appear wasteful or even irrational in modern
environments. Afterall, if the “affective information” associated with our emotional feelings is
calibrated to the cost-benefit calculus of ancestral environments, it would not be surprising to
observe that our experience of positive affect (pleasant feelings) and negative affect (unpleasant
feelings) in modern environments does not always correspond to a rational mapping of the cost-
benefit structure of our contemporary circumstances.
To illustrate how a psychological mechanism might generate behavior that makes little sense in
modern environments, despite the fact that it is “well-designed” to operate in ancestral
environments, it may be helpful to consider the example of a hypothetical “snake fear”
mechanism. In the environments in which this “snake fear” mechanism evolved, the “costs”
associated with a hyper-sensitive fear response may have been relatively low compared to the
“cost” of failing to detect and respond appropriately to environmental cues (i.e., snake-shaped
objects) reliably associated with catastrophic outcomes (i.e., injury or possible death). Consider,
for example, the small costs--in both modern and ancestral environments--associated with being
compelled--by your fear--to avoid walking down a particular path when confronted with a distant
object that resembles either a harmless tree branch or a poisonous snake. In modern
environments with health insurance, Medevac helicopters, and emergency rooms stocked with
venom antiserum, the benefits of occasionally avoiding a poisonous snake might not be worth the
costs of experiencing numerous false alarms (e.g., responding to tree branches, garden hoses, and
other snake-like objects). By contrast, in ancestral environments, the benefits of avoiding even a
single snake bite--with an accompanying high probability of significant injury or death--might
make a hyper-reactive snake-fear mechanism worth the costs of numerous false alarms.
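The arithmetic implicit in the snake example can be written out directly. In the hypothetical figures below (illustrative assumptions, not empirical estimates), the very same hyper-reactive mechanism is a net benefit when a bite is usually catastrophic and a net cost when modern medicine makes a bite survivable:

```python
# Illustrative arithmetic for the snake-fear tradeoff. The mechanism responds
# to every snake-like cue; whether that is "worth it" depends on the cost of a
# missed snake. All numbers are hypothetical assumptions.

def net_value_of_fear(n_cues: int, n_real_snakes: int,
                      cost_bite: int, cost_detour: int) -> int:
    """Harm avoided by always responding, minus the cost of responding to every cue."""
    return n_real_snakes * cost_bite - n_cues * cost_detour

# 1000 snake-like cues, of which only 10 are real snakes; each detour costs 1 unit.
ancestral = net_value_of_fear(1000, 10, cost_bite=1000, cost_detour=1)  # bite often lethal
modern = net_value_of_fear(1000, 10, cost_bite=50, cost_detour=1)       # antivenom available

print(ancestral)  # 9000: hyper-reactivity pays despite 990 false alarms
print(modern)     # -500: the same mechanism now costs more than it saves
```

The mechanism itself is unchanged between the two scenarios; only the cost of a missed snake differs, which is why a "well-designed" ancestral mechanism can look irrational today.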
Thus armed with a better understanding of the notion of evolutionary cost-benefit tradeoffs
(Nesse & Williams, 1994), it is easier to appreciate why evolution by natural and sexual selection
does not invariably lead to perceptual mechanisms that generate accurate (unbiased)
representations of the physical and social world. In this light, we turn now to a review of the
literature on emotions and motivated reasoning, with the aim of investigating whether these
processes might be more accurately seen not as “bugs” in our “mental software” but as “design
features” of the human mind.
Emotions and Identity-Protective Cognition
U.S. Senator Daniel Patrick Moynihan allegedly once said: “You are entitled to your own
opinions. But you are not entitled to your own facts.” Moynihan’s aphorism may be clever, but it
is not a viable guide to understanding ideological debates in which an attack on your opinions can
seem like an attack on your social identity. This is the case because ideological disputes
frequently go beyond minor differences of opinion and are often characterized as sincere
disagreements concerning competing “truths” about the world as seen through the eyes of the
disputants (Shweder & Levine, 1984). Consider, for example, how often ideological debates
evolve into passionate disputes in which one side of the argument claims that the other side can’t
be serious or that their ideological opponent can’t believe what they are asserting because such
beliefs are so “obviously” false from their perspective. It stands to reason, however, that in many
instances in which different individuals have reviewed the same evidence, but sincerely claim to
have observed different “facts” (e.g., “they saw a Game”) or earnestly claim to have “rationally”
arrived at contradictory conclusions from the same evidence, we are not witnessing a mere
difference of opinion, but a clash of identity-relevant worldviews. Even combatants in a
rhetorical skirmish between incompatible visions of the world sometimes exit a debate reeling
with unpleasant emotions, finding themselves in an intellectually vulnerable position, having
encountered evidence or arguments that, if true, would undermine some of their
fundamental assumptions about the world:
To the extent that one has become emotionally committed to, or publicly identified
with, a particular theory, its failure in the face of evidence imposes psychic costs
that can be painful (Sowell, 2007, p. 240).
In this regard, social science research has found that merely activating an individual’s group
identity or threatening their cultural values is enough to generate strong emotions which can
influence the individual’s subsequent evaluation of “legally significant” facts (see Flynn, Nyhan,
& Reifler, 2017; Kahan & Nussbaum, 1996). Munro et al. (2012), for example, presented
participants in two experiments with scientific evidence that disconfirmed their pre-existing
beliefs. Borrowing from Schwarz and Clore’s (1983) classic mood misattribution paradigm, the
experimenters gave half of the participants the opportunity to misattribute any negative affect
(that they experienced while reading the belief disconfirming evidence) to an irrelevant source
(poor room conditions or caffeinated water). The other half (control group) were not given this
opportunity to misattribute their unpleasant feelings. Consistent with an affect-as-information
interpretation, Munro and colleagues (2012) observed that participants in the misattribution
condition--who could easily dismiss their feelings of discomfort as being due to an irrelevant
source--perceived the methodological rigor of the studies to be much stronger compared to
participants in the control condition who were given no such opportunity to dismiss their feelings
of discomfort. In a similar study, Klein and Harris (2009) asked women to read an article that
linked alcohol consumption to breast cancer. Consistent with the claim that threats to identity-
relevant worldviews reliably generate emotional responses that can bias the processing of
threatening information, women who were moderate-to-heavy drinkers showed an attentional bias
towards ignoring threatening words in the article. Consistent with an affect-as-information
framework, this bias was reduced for those women who were randomly assigned to receive a
positive “self-affirmation” message just prior to reading the article. In sum, numerous studies of
motivated reasoning have demonstrated that emotions can be an important driver of motivated
reasoning, especially when an individual is evaluating evidence that is relevant to highly
politicized, identity-relevant issues such as climate change or HPV vaccines (Kahan, Peters,
Wittlin, Slovic, Ouellette, Braman, & Mandel, 2012; Kahan, et al., 2017; McKenna, 2021). In
characterizing the role that emotional processes play in identity-protective cognition, Munro and
colleagues (2012, p. 11) argue that:
Belief-disconfirming scientific studies elicit negative affect, which mobilizes an
expenditure of cognitive resources to reconcile the inconsistency between one’s
pre-existing beliefs and the scientific evidence. The increased cognitive
processing often results in an unfavorable critique of the scientific evidence.
Most individuals are motivated to reduce these feelings of discomfort that arise whenever they
perceive an inconsistency (dissonance) between their commitment to an important set of beliefs
and their recognition that strong evidence exists which contradicts those beliefs. When an
individual experiences strong emotions (e.g., fear, anger, etc.) upon encountering evidence
contradicting the fundamental tenets of an important belief system, this heightened emotional
state is often a sign that they have interpreted this attack as a serious threat to their social identity
(Festinger, 1957; Harmon-Jones, 2019; Young, 1995). Social scientists refer to this state of
psychological discomfort as cognitive dissonance (Festinger, 1957; Harmon-Jones, 2019).
Interestingly, serious threats to an individual’s worldview, including threats that produce
cognitive dissonance, do not invariably result in the individual abandoning their belief system. In
fact, in many cases, encountering overwhelmingly contradictory evidence will lead to a renewed
sense of commitment to that worldview, rather than a search for an alternative system of meaning.
The classic example is Festinger’s study of a “Doomsday cult” whose members had to cope with
the cognitive dissonance experienced when the world did not come to an end on December 21,
1954, as predicted by their cult leader (Festinger, Riecken, & Schachter, 1956). Rather than reject
their leader’s prophecy as false, and their belief system as problematic, Festinger observed that
cult members engaged in increased proselytizing and affirmations of faith in the immediate
aftermath of the failed prediction upon which their cult was based.
Holding ideological ground, or even defending it with renewed vigor, makes sense, however,
when the individual (whose worldview has been attacked) is a rhetorically sophisticated advocate
armed with a coherent, well-integrated set of functionally interdependent facts and arguments--
much like a skilled and ethical defense attorney arguing on behalf of an innocent client by
introducing exonerating evidence and relevant case law. Nevertheless, there are at least two
conditions in which even a rhetorically skilled advocate might find themselves ill-equipped to
defend their worldview, and both circumstances are likely to produce a considerable amount of
cognitive dissonance in any defendant whose identity is tied to the belief system they are
defending:
1) An Ideologically Bereft Worldview:
An advocate who is rhetorically skilled will still have trouble producing a strong (logical and
evidence-based) defense of their worldview, if that worldview is not founded upon clear
evidence and sound reasoning, or lacks a stable, coherent, and internally consistent set of
foundational tenets.
Example: It would be easier for a sophisticated scientist to successfully defend the claim that
“Newton’s theory anticipated the existence of the planet Neptune (before its discovery)” than
the claim “The Moon is made out of spirit essence looted from Terror Demons.” The first
case involves defending a claim that references a coherent set of assumptions supported by
solid evidence and sound reasoning; the second case does not.
2) A Poorly Defended Worldview
Even in cases where a worldview is founded upon supportive evidence and sound reasoning,
and can be characterized as a stable, coherent, and internally consistent set of foundational
tenets, a rhetorically skilled advocate will still have trouble producing a strong defense of this
worldview if they do not have access to the supportive data and sound reasons that support
this belief system. In this regard, political scientists have argued that much of the early
research on ideology and mass belief systems (e.g., Converse, 1964) over-estimated the extent
to which even ideological “elites” possess sophisticated belief systems that would enable them
to access the best evidence and most compelling reasons when called upon to defend their
ideological commitments (see Kalmoe, 2020, Kuklinski & Peyton, 2007, for reviews).
Example: It would be easier for a well-trained attorney to successfully defend (in a court of
law) a legal claim (e.g., “Parody and satire are protected forms of speech”) if this attorney is
familiar with the relevant case law (e.g., Hustler v. Falwell, 485 U.S. 46, 1988) compared to a
similarly well-trained attorney who is not familiar with the relevant case law. The attorney in
the first case has access to the relevant evidence and lines of reasoning needed to successfully
defend the claim. The second attorney may have the same litigation skills, but lacking access
to the most relevant evidence and lines of reasoning, has little hope of prevailing against a
more fully-prepared adversary.
Although these two conditions represent somewhat different pathways to the employment of
“less-than-rational” strategies—such as motivated reasoning—that may be called upon to defend
one’s ideological commitments, bereft worldviews and poorly defended worldviews share two
things in common. In particular, they share an element of hypocrisy, a sense of disconnect
between the effort that an advocate spends in promoting or defending their worldview and their
knowledge of its core tenets. In the case of a bereft worldview, this “false advertising” is in
regard to the ideology being promoted: A type of “virtue signaling” whereby the substance of
what is communicated appears to matter much less than the simple fact that the individual has
successfully signaled which ideological team they are on, and which moral and ethical virtues
they endorse (see Miller, 2019, for review of the possible evolutionary benefits of virtue
signaling). In these circumstances, the advocate has less resemblance to a well-informed defense
attorney, and appears more like a politically correct journalist (see Taibbi, 2019) who is more
concerned with being directionally correct than technically accurate, perhaps as a result of
“navigating newsrooms where they were being discouraged, sometimes openly, from pursuing
true stories with the ‘wrong’ message,” and knowing that their allies will forgive them if they flub
the details of the worldviews that they promote, so long as they’re supporting the “right” talking
points (Taibbi, March 1, 2021). In the case of a poorly defended worldview, the hypocrisy is in
regard to the qualifications of the promoter, not the ideology: Simply put, the advocate “does not
know what they are talking about” and their ignorance makes them a poor representative of that
worldview. How common are these circumstances? Some social scientists have argued that
when one conducts a rigorous investigation of political ideologies, for example, one finds that a
substantial number of pundits (the so-called political “elites”) do not know many of the basic
tenets of the ideologies they promote or defend (Kalmoe, 2020; see also Kurzban, 2010; Kurzban
& Weeden, 2014, for discussion of the prevalence of hypocrisy in public discourse). In both
cases, the cognitive dissonance generated in response to the inconsistency between their strong
allegiance to their worldview, and their recognition that they are ill-prepared to rationally defend
it, could motivate them to defend their worldviews through more expedient means⁶--including
motivated reasoning.
Emotions and Moral Reasoning, Including Perceptions of Justice
What happens when our ancient emotions find themselves ensconced in contemporary
courtrooms and legal proceedings? If modern environments contain many of the same invariant
statistical properties that our emotion mechanisms evolved to react to, then we should not be
surprised to observe ourselves occasionally generating emotional responses that make little sense
in these situations, even if these same emotional proclivities were an effective means of
responding to threats and challenges in ancestral environments. One arena in which these
reliably activated emotional responses may have unintended, and perhaps unwanted,
consequences is in the realm of moral and legal decision-making. The modern criminal justice
system, for example, is ostensibly a bastion of reason and evidence, yet there is substantial
evidence that emotions (including irrelevant emotions) regularly intrude upon our reasoning
about moral, ethical, and legal issues. Moreover, our capacity to instinctively experience
emotions when certain environmental cues are present could be exploited by sophists with less
than noble intentions. In his book Against Empathy, psychologist Paul Bloom (2016) identifies
several concerns regarding the “weaponizing” of emotional empathy in persuasive arguments
employed in legal settings.
One example of the potential for emotional bias in the courtroom, according to Bloom (2016), is
the use of victim impact statements during sentencing or at parole hearings. Victim impact
statements consist of written or oral statements which provide crime victims with an opportunity
to describe the emotional, physical, and financial harm that they or others have suffered as a
direct result of a crime. These statements are permitted in the sentencing phase of trials in 44
U.S. states. Bloom (as quoted in Illing, January 16, 2019) notes:
I could not imagine a better recipe for bias and unfair sentencing decisions than
this…You suddenly turn the deep questions of how to punish criminals into a
question of how much do I feel for this person in front of me? So the bias would
be incredibly powerful.
One might counter, however, that a high degree of bias is unlikely to occur in a courtroom
because the jury is given the opportunity to “see with their own eyes” the evidence that is most
relevant to the guilt or innocence of the accused. As reasonable as this counter-argument might
appear to be, research on motivated reasoning suggests that this view may be perilously naïve
(Braman & Kahan, 2003; Kahan, Hoffman, & Braman, 2009).
In a conceptual replication of Hastorf and Cantril’s (1954) “They saw a Game” study,
psychologist and legal scholar Daniel Kahan explored the influence of motivated reasoning in a
hypothetical courtroom setting in which research participants (the “jury”) were asked to interpret
“legally-significant” facts (Kahan, Hoffman, Braman, Evans, & Rachlinski, 2012). In a study
titled “They saw a Protest,” Kahan and colleagues (2012) showed participants a videotape of a
political demonstration and asked them to judge whether law enforcement had over-reacted in
their efforts to disperse protestors allegedly obstructing or intimidating people from using a public
facility. Participants were told to imagine themselves as jurors in a court case that focused on the
lawfulness of the police’s actions.

⁶ By referring to motivated reasoning as an “expedient” strategy for defending one’s worldview, I do not mean to
imply that the deployment of motivated reasoning is necessarily a conscious deliberative process. In fact, several
lines of research suggest that motivated reasoning is often a tacit process (see Balcetis & Dunning, 2006; Kahan,
Hoffman, Braman, Evans & Rachlinski, 2012).
Because the identities of the demonstrators could not be easily discerned from the videotape,
Kahan was able to experimentally manipulate the “affiliation” of the protestors. Rather than
pitting Dartmouth undergraduates against Princeton students as Hastorf and Cantril (1954) had
done, Kahan et al. (2012) created two distinct ideologically motivated groups of demonstrators: 1)
Ideologically Conservative Protestors and 2) Ideologically Progressive Protestors.⁷ Half of the
“jury” (the participants) were randomly assigned to the Ideologically Conservative Protest
condition in which they were told that they were watching a demonstration that had occurred
outside of an abortion clinic, a scenario in which demonstrators were protesting legalized abortion
(i.e., implying that the protesters were ideologically conservative). The other half of the
participants were told that they were watching a demonstration that occurred at a military
recruiting event on a college campus, an event where the demonstrators were protesting the
military’s ban on service by openly gay and lesbian soldiers (“Don’t ask, Don’t Tell”) implying
that the protesters were ideologically progressive.
Importantly, Kahan also asked participants to rate themselves as ideologically conservative or
progressive. Consistent with motivated reasoning, Kahan found that participants who adopted a
conservative worldview saw the police as behaving violently towards the protesters, but only
when the protesters were identified as fellow conservatives. By contrast, when the protestors
were labelled as progressives, conservative participants viewed the police as behaving less
violently (i.e., more peacefully).
Conversely, individuals adopting a progressive worldview saw the police as behaving violently
towards the protesters, but only when the protesters were their fellow progressives. Similarly,
progressive participants viewed the police as acting peacefully when the protestors were
conservatives. In short, motivated reasoning appears to be commonplace, occurring in
circumstances ranging from interpreting numbers in a contingency table to interpreting legal
evidence (a video of police interacting with protestors) that you can “see with your own eyes.”
Moreover, emotional influences on perceptions of social justice and morality can be observed
beyond the courtroom (see Lukianoff & Haidt, 2018; Kahan & Nussbaum, 1996; Posner, 2008).
In this regard, the science of emotion and morality has recently undergone a radical
reconceptualization in which the traditional view--portraying moral judgment as primarily
reflecting the application of moral “reasoning” (e.g., Kohlberg, 1981)--has been challenged by a
new perspective that emphasizes the role of moral sentiments (such as sympathy/empathy) in
judgments of moral approbation or disapproval [see Haidt’s (2001, 2007) Social Intuitionist
Model; also Greene (2013)]. In a typical study of emotional influences on moral judgment,
Lerner, Goldberg, and Tetlock (1998) observed that individuals experimentally placed in an angry
mood subsequently made harsher attributions of blame toward a hypothetical co-worker whose
negligence had caused them harm. Moreover, these same individuals also ascribed more severe
punishments to the same co-worker, but only when they were primed with an angry mood (and
not when they were placed in a neutral mood).

[Footnote 7: Ideologically conservative and ideologically progressive protestors were defined as Hierarchical Communitarians and Egalitarian Individualists, respectively; see Kahan, Hoffman, Braman, Evans, & Rachlinski (2012) for details.]

Another study found that research participants provoked to feel disgust, sadness, or fear required
less evidence to make a strong negative moral trait attribution (e.g., uncharitable, unfriendly)
than individuals placed into a sanguine mood (Trafimow, Bromgard, Finlay, & Ketelaar, 2005). In short, emotional
influences appear to be a fundamental part of many forms of moral “reasoning” that have
traditionally been conceptualized as processes of rational decision-making (Bloom, 2016; Haidt,
2001; Greene, 2013; Ketelaar, 2004; 2006; Ketelaar & Koenig, 2007).
Barsky and Kaplan (2007) conducted a meta-analysis (45 studies, 57 distinct samples) on the
association between affective states and perceptions of justice. Their analysis revealed a reliable
association between measures of state and trait positive and negative affect and perceptions of
distributive, procedural, and interactional justice. These relationships were in the predicted
directions, with mean population-level correlations (between emotion and perceptions of justice)
ranging in absolute magnitude from 0.09 to 0.43. The typical finding was that participants in
a more negative mood viewed the social world as less just, compared to participants in a more
positive mood (Barsky & Kaplan, 2007). This association between strong emotions and
perceptions of social justice has been observed not only in the laboratory but also in the
workplace. Lang, Bliese, Lang, and Adler (2011), for example, explored the relationship between
social justice in organizations and the emotional health of employees. In this context, the phrase
organizational justice referred to at least two kinds of perceptions: 1) employees’ perceptions of
fair and respectful treatment by supervisors or other authorities, and 2) their perceptions of how
clearly resource allocations were explained by their superiors.
Interestingly, previous research in organizational settings had often interpreted the correlation
between social justice in the workplace and employee psychological health as evidence that unfair
treatment in the workplace led to reduced psychological health. However, most of these field
studies were correlational in design, and thus did not allow for a test of causal direction (see Lang,
Bliese, Lang, & Adler, 2011, for a review). To explore the association between emotion and
social justice in a more controlled setting, Lang, Bliese, Lang, and Adler (2011) created three
longitudinal data sets in applied field settings (military organizations) that would allow them to
test whether (a) organizational justice perceptions influence depressive symptoms over time and
(b) depressive symptoms have a lagged relation with perceptions of organizational justice. Their
study revealed evidence that negative affect (depressive symptoms) did lead to perceptions of
organizational injustice. Moreover, they observed no effect of organizational injustice
perceptions on depressive symptoms. One explanation for these emotional influences on
perceptions of justice entails an “affect-as-information” interpretation, as van den Bos (2003, p.
482) explains:
It is not uncommon for people forming justice judgments to lack information that
is most relevant in the particular situation. In information-uncertain conditions,
people may therefore construct judgments by relying on how they feel about the
events they have encountered and justice judgments may hence be strongly
influenced by affect information.
van den Bos (2003) tested this claim in several experiments in which participants
completed a series of simple cognitive tasks (e.g., counting the number of squares inside
a pattern presented on a computer screen) and were later rewarded with lottery tickets in
proportion to the number of tasks they successfully completed. Across three
experiments, van den Bos (2003) observed that individuals experimentally placed into a
negative mood subsequently judged that they had been treated in a less just manner
compared to individuals placed into a more positive mood. Consistent with an “affect-as-
information” interpretation, the experimentally produced mood state affected judgments of
justice only when participants were uncertain about how they (and other participants) were
going to be rewarded. van den Bos (2003, p. 482) noted:
These findings thus reveal that in situations of information uncertainty, people’s
judgments of justice can be very subjective, susceptible to affective states that
have no logical relationship with the justice judgments they are constructing.
Emotions and Reasoning about Risk
Psychological scientists studying risk perception have reached essentially the same conclusion
about our ability to reason accurately about risk that Hastorf and Cantril (1954) reached in their
studies of motivated reasoning in college football fans. Just as Hastorf and Cantril argued that
“there is no such thing as a ‘game’ existing ‘out there,’” modern-day cognitive scientists (Slovic,
1999, p. 690) recently concluded that:
[R]isk is inherently subjective. In this view, risk does not exist “out there,”
independent of our minds and cultures, waiting to be measured. Instead, human
beings have invented the concept risk to help them understand and cope with the
dangers and uncertainties of life. Although these dangers are real, there is no such
thing as "real risk" or "objective risk."
Consistent with this Kantian view, Johnson and Tversky (1983, p. 26) observed that the negative
affect generated while participants read news accounts of various tragic causes of deaths (ranging
from natural disasters and automobile accidents to homicides and heart attacks) led to “pervasive
global effects on their estimates of fatalities” regardless of the specific cause of death that they
were asked to estimate. In other words, the negative affect associated with reading a news report
of a depressing death from stomach cancer routinely generalized to increased perceptions of
mortality risk from toxic chemical spills, terrorism, and auto accidents. Similar effects were
demonstrated when participants were asked to read pleasant (non-tragic) news stories, which
produced increases in positive affect and corresponding decreases in estimates of risk of death
across a wide range of causes (e.g. airplane accidents, leukemia, electrocution, etc.). Although
much of the research on risk perception does not adopt an evolutionary perspective (see Barrett &
Fiddick, 1999; Ermer, Cosmides, & Tooby, 2008, for notable exceptions), most risk researchers
acknowledge the important role that “affect-as-information” and “affect heuristic” processes play
in guiding judgments and decision-making regarding risk (Slovic, 1999; Slovic, Peters, Finucane,
& MacGregor, 2005; Slovic, Finucane, Peters & MacGregor, 2016; Pachur, Hertwig, & Steinman,
2012).
Naturalistic experiments examining emotion and risk perception are consistent with these
laboratory findings. For example, in the months following the September 11, 2001 terrorist
airplane attacks in the US, many Americans avoided the smaller death risk associated with
terrorist attacks by avoiding travel by airplane. Instead, they assumed a much larger death risk
(by automobile accident) when they opted for travel via the roadways. Gigerenzer (2004)
examined air travel, highway traffic, and fatal traffic accidents for the three months following
September 11 and his analysis revealed not only that air travel decreased and highway traffic
increased, but that an additional 350 people died in traffic accidents during this period, more than
the approximately 250 people who died in airplanes on September 11, 2001. Similar findings
were observed by Lopez-Rousseau (2005), who tracked people’s transportation choices following
the March 2004 terrorist train bombing in Spain which killed approximately 200 people. Lopez-
Rousseau (2005) found a similar decrease in use of the mode of transportation associated with the
terrorist act (railway travel declined after the train bombing), although he did not observe an increase in
automobile fatalities. In fact, he observed that the Spanish reduced both forms of travel
(automobile and train) in the immediate aftermath of the bombing. Both studies (Gigerenzer,
2004; Lopez-Rousseau, 2005; see also Myers, 2001) are consistent with the claim that witnessing
or reading about distressing events can influence estimates of risk outside of the social science
laboratory.
In sum, research on decision-making regarding risk suggests that risk perception is a product of
cognitive and emotional processes capable of generating context-sensitive representations of the
world. Consistent with the smoke detector principle, these risk perceptions generate “false
alarms” that over-estimate the likelihood of improbable catastrophic events in precisely those
circumstances in which the costs of these hyper-sensitive responses (a minor inconvenience in
one’s daily commute) seem infinitesimally small in relation to the much larger costs (serious
injury or death) that these mechanisms help us to avoid. These emotional mechanisms,
which likely evolved to assess risks relevant to ancestral environments, such as estimating the
probability of an attack by a dangerous predator, can also lead to ironic and sometimes tragic
outcomes (e.g., increases in automobile fatalities; Gigerenzer, 2004) when they are activated
outside the environments in which they evolved to operate (lions, tigers, and bears,
as opposed to planes, trains, and automobiles). Nonetheless, these emotional biases are not
necessarily evidence that our minds are poorly designed for assessing risk:
The public is not irrational. Their judgments about risk are influenced by emotion
and affect in a way that is both simple and sophisticated. The same holds true for
scientists. Public views are also influenced by worldviews, ideologies, and values;
so are scientists' views, particularly when they are working at the limits of their
expertise. (Slovic, 1999, p. 689).
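The cost asymmetry behind the smoke detector principle can be made concrete with a short sketch. This is a toy expected-cost model, not an analysis from the chapter's sources, and the cost figures are hypothetical: an alarm that minimizes expected cost should fire whenever the probability of danger times the cost of a miss exceeds the cost of a false alarm, so when misses are far more expensive than false alarms, the optimal threshold is very low and frequent false alarms are the expected design outcome.

```python
def optimal_threshold(cost_false_alarm: float, cost_miss: float) -> float:
    """Probability of danger above which sounding the alarm minimizes expected cost.

    Alarm when p * cost_miss > cost_false_alarm, i.e. when
    p > cost_false_alarm / cost_miss.
    """
    return cost_false_alarm / cost_miss


def should_alarm(p_danger: float, cost_false_alarm: float, cost_miss: float) -> bool:
    """Return True when the expected cost of ignoring the cue exceeds
    the cost of a false alarm."""
    return p_danger > optimal_threshold(cost_false_alarm, cost_miss)


# Hypothetical costs: a false alarm (fleeing needlessly) costs 1 unit;
# a miss (ignoring a real predator) costs 1,000 units.
threshold = optimal_threshold(1.0, 1000.0)  # alarm at any p above 0.001
# Even a 1% chance of danger warrants an alarm, so most alarms will be
# "false" -- exactly the hyper-sensitivity the smoke detector principle predicts.
assert should_alarm(0.01, 1.0, 1000.0)
```

On this reading, a high false-alarm rate is not evidence of a defective mechanism; it is what a well-calibrated mechanism looks like when the two error costs are wildly asymmetric.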
Final Conclusions
When psychological scientists study the intersection of cognitive bias and human emotion these
phenomena are often lumped together as “defective” aspects of human nature that undermine our
otherwise sophisticated capacity for rational thought (Ketelaar & Clore, 1997). Software
engineers encounter a similar challenge in explaining why their ostensibly well-designed software
programs regularly generate odd behaviors that seem—to the non-engineer—like pointless
mistakes or easily avoidable errors. This chapter is offered as a demonstration that the application
of evolutionary and philosophical insights, including the well-established “affect-as-information”
framework, can provide a powerful set of tools for distinguishing aspects of human emotion that
might be referred to as “design features” from those aspects that appear to be more accurately
characterized as non-adaptive by-products, or noise (Haselton & Nettle, 2006; Haselton &
Ketelaar, 2006; Nesse, 2019). So, what adaptive function(s) might emotions serve that could also
explain how these “design features” of the human mind are so often characterized as bugs in our
mental software?
If we follow Kant and assume that access to a single metaphysical reality is difficult or
impossible for the members of any species to achieve, then a fitness advantage could be conferred
upon those members of a group-living species who were able to more successfully navigate their
social and physical world (relative to their competitors) by virtue of possessing a psychological
adaptation that reliably “constructed” a “shareable”⁸ representation of their environment around
which they could coordinate their behavior with conspecifics. Consider one speculative
example of a plausible adaptive problem that our consciously accessible emotions (i.e., our
affective feeling states) may have evolved to solve. A variety of indefinitely repeated
social exchange relationships would have been routinely encountered in ancestral
environments; these relationships can be modelled, with the help of behavioral economic and
game-theoretic methods, as indefinitely repeated social bargaining games in which locating and
coordinating on a specific equilibrium strategy would have afforded a fitness advantage over
competitors who were less capable of doing so (see Ketelaar, 2004, 2006).
One way that this adaptive challenge could be addressed would be through the evolution of
cognitive and emotional mechanisms capable of constructing a stable “mental map” of the fitness
affordance⁹ landscape for members of a particular species. This “shared reality” would then
enable conspecifics to coordinate their thoughts and behaviors around their shared perceptual and
cognitive experiences. A non-emotional analogue comes from the visual system, which allows
two individuals to “experience” the same bird flying in the same sky even though their visual
inputs are, by definition, not identical: any two individuals occupying different locations in space
will necessarily receive different visual inputs. Although the details of this model of emotions, as
mechanisms for creating a “shared” mental map of the fitness affordance landscape, are
speculative and beyond the scope of this chapter, this brief discussion is offered to demonstrate
that it is possible to identify plausible adaptive problems, such as the challenge of generating a
shared cognitive-emotional reality that social organisms could employ to coordinate their
actions, in which emotions and motivated reasoning might play a central role. Moreover, the
“shared reality” generated by these evolved psychological mechanisms could have been adaptive
in ancestral environments, even if the “mental maps” they generated could sometimes be
categorized as “biased” when compared to a single, objective standard.
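The game-theoretic framing invoked here can be illustrated with a standard textbook result (an illustration of the general framework, not an analysis from the chapter's sources): in an indefinitely repeated prisoner's dilemma with continuation probability delta, mutual cooperation supported by a grim-trigger strategy is an equilibrium only when delta >= (T - R) / (T - P), where T, R, and P are the temptation, reward, and punishment payoffs. The payoff values below are hypothetical.

```python
def grim_trigger_sustains_cooperation(T: float, R: float, P: float,
                                      delta: float) -> bool:
    """Check the standard condition for cooperation in an indefinitely
    repeated prisoner's dilemma.

    Under grim trigger, defecting yields a one-shot gain (T - R) but forfeits
    the future stream of cooperation. Comparing R / (1 - delta) against
    T + delta * P / (1 - delta) reduces to: cooperation is an equilibrium
    when delta >= (T - R) / (T - P).
    """
    return delta >= (T - R) / (T - P)


# Hypothetical payoffs with T > R > P: temptation 5, mutual cooperation 3,
# mutual defection 1. Critical continuation probability: (5 - 3) / (5 - 1) = 0.5.
assert grim_trigger_sustains_cooperation(5, 3, 1, delta=0.9)      # patient players cooperate
assert not grim_trigger_sustains_cooperation(5, 3, 1, delta=0.3)  # impatient players defect
```

The point of the sketch is that which equilibrium gets played is underdetermined by the payoffs alone; a mechanism that helps conspecifics converge on the same strategy, as emotions are proposed to do here, would carry real fitness value.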
In other words, perhaps our evolved emotional “software” is, in part, responsible for generating
the shared worldviews that we routinely and tacitly employ as “mental maps” of our physical and
social worlds. At their best, these emotion-based worldviews can provide us with a stable,
coherent, and internally consistent set of foundational assumptions upon which we can construct a
useful understanding of the social world and our place in it. And precisely because we are so
dependent on these guiding frameworks, endowed by our evolved emotions, these worldviews
can become a core part of our social identities, identities (and worldviews) that we are motivated
to defend.

[Footnote 8: The cognitive and emotional mechanisms that generate a “shareable” mental map would have to effectively balance the costs and benefits associated with two distinct computational problems: (a) the problem of effectively representing moment-to-moment changes in species-specific threats and opportunities (i.e., corresponding to the specific ecological niche occupied by a particular species in its ancestral environments), and (b) the problem of effectively representing moment-to-moment changes in individual-specific threats and opportunities (i.e., highly attuned to each individual’s unique developmental history and phenotypic traits). In this regard, Freyd’s (1983, 1990) description of “consciousness” as “shareability” would be one way to think about how human perceptual experiences are typically “conscious” precisely because they are shareable.]

[Footnote 9: The term “fitness affordance” was first introduced by evolutionary psychologist Geoffrey Miller in the 1990s as an adaptationist extension of James Gibson’s (1979) affordance concept. In the current context, the term “fitness affordance landscape” refers to the specific threats (negative affordances) and opportunities (positive affordances) in an organism’s current environment, defined in terms of the specific environmental cues that would have been associated, in ancestral environments, with evolutionarily relevant fitness costs and benefits (see Miller, 2010).]

As a result, the same worldviews that bring us comfort and guide us in finding
meaning and a sense of purpose, can also be the source of some of our most biased and self-
serving interpretations. In this light, it may be reasonable to consider that some of our emotional
reasoning processes may be more accurately seen not as “bugs” in our “mental software” but as
“design features” of the human mind.
References
Baggs, E. & Chemero, A. (2018). Radical embodiment in two directions, Synthese,
https://doi.org/10.1007/s11229-018-02020-9
Balcetis, E. and Dunning, D. (2006). See What You Want to See: Motivational Influences on Visual
Perception, Journal of Personality and Social Psychology, 91, 612–625.
Barkow, J. H., Cosmides, L., & Tooby, J. (Eds.). (1992). The adapted mind: Evolutionary psychology
and the generation of culture. Oxford University Press.
Baron-Cohen, S. (2003). The Essential Difference. New York: Basic Books.
Barrett, H.C. & Fiddick, L. (1999). Evolution and risky decisions, Trends in Cognitive Sciences, 4, 251–
252.
Barsky, A., & Kaplan, S. A. (2007). If you feel bad, it's unfair: A quantitative synthesis of affect and
organizational justice perceptions, Journal of Applied Psychology, 92, 286–295.
Berridge, K.C. (1991). Modulation of Taste Affect by Hunger, Caloric Satiety, and Sensory-Specific
Satiety in the Rat, Appetite, 16, 103-120.
Berridge, K. (1999). Pleasure, Pain, Desire, and Dread: Hidden Core Processes of Emotion. In Kahneman D.,
Diener E., & Schwarz N. (Eds.), Well-Being: Foundations of Hedonic Psychology (pp. 525-557). Russell Sage
Foundation.
Bloom, P. (2016). Against Empathy: The case for rational compassion. New York, NY: HarperCollins
Books.
Boyce, W. T., & Ellis, B. J. (2005). Biological sensitivity to context: I. An evolutionary–developmental
theory of the origins and functions of stress reactivity, Development and Psychopathology, 17, 271-301.
Braman, D. & Kahan, D. M., (2003). More Statistics, Less Persuasion: A Cultural Theory of Gun-Risk
Perceptions. University of Pennsylvania Law Review, 151, Yale Law School, Public Law Research Paper No.
05, Available at SSRN: https://ssrn.com/abstract=286205 or http://dx.doi.org/10.2139/ssrn.286205
Brockman, J. (2011). The Argumentative Theory: A Conversation with Hugo Mercier [4.27.11]
Edge.org https://www.edge.org/conversation/hugo_mercier-the-argumentative-theory
Burnett, S. (2011). Perceptual Worlds and Sensory Ecology. Nature Education Knowledge, 3, 75.
Buss, D. M. (1991). Evolutionary Personality Psychology, Annual Review of Psychology, 42, 459-491.
Cabanac, M. (1971). Physiological Role of Pleasure, Science, 173, 1103–1107.
Camperi, M., Tricas, T. C., & Brown, B. R. (2007). From morphology to neural information: The electric sense of
the skate. PLoS Computational Biology, 3, e113. doi:10.1371/journal.pcbi.0030113
Caufield, J. (2021, February 11). Color Blindness. Color Blindness Awareness.
https://www.colourblindawareness.org/colour-blindness/.
Cheney, D. L. & Seyfarth, R. M. (1990). How Monkeys See the World: Inside the Mind of Another
Species, Chicago: University of Chicago Press.
Cheney, D. L. & Seyfarth, R. M. (2007). Baboon Metaphysics: The Evolution of a Social Mind, Chicago:
University of Chicago Press.
Converse, P. E. (1964/2006). The nature of belief systems in mass publics (1964), Critical Review, 18, 1-
74.
Cronin, H. (1991). The Ant and the Peacock. Cambridge: Cambridge University Press.
Dangles, O., Irschick, D., Chittka, L. & Casas, J. (2009). Variability in sensory ecology: Expanding the
bridge between physiology and evolutionary biology. Quarterly Review of Biology, 84, 51–74.
Dahl, A., Campos, J., Anderson, D., Uchiyama, I., Witherington, D., Ueno, M., Poutrain-Lejeune, L. &
Barbu-Roth, M. (2013). The epigenesis of wariness of heights, Psychological Science, 24, 1361–1367.
Darwin, C. (1871). The descent of man: And selection in relation to sex. London: J. Murray.
Dennett, D. C. (1991). Consciousness Explained, Boston: Little, Brown and Company.
Dennett, D. C. & Hofstadter, D. R. (2000). The Mind’s I: Fantasies and Reflections on Self and Soul, New
York: Perseus Books Group.
de Waal, F. B. M. (1982). Chimpanzee Politics: Power and Sex Among Apes. London: Jonathan Cape.
Ditto. P.H. & Lopez, D. F. (1992). Motivated skepticism: Use of differential decision criteria for
preferred and non-preferred conclusions. Journal of Personality and Social Psychology, 63, 568-584.
Ditto, P. H., Munro, G. D., Apanovitch, A. M., Scepansky, J. A. & Lockhart, L. K. (2003). Spontaneous
skepticism: The interplay of motivation and expectation in responses to favorable and unfavorable medical
diagnoses. Personality and Social Psychology Bulletin, 29,1120–32.
Ditto, P. H., Scepansky, J. A., Munro, G. D., Apanovitch, A. M. & Lockhart, L. K. (1998). Motivated
sensitivity to preference-inconsistent information. Journal of Personality and Social Psychology, 75, 53–69.
Drummond, C. & Fischhoff, B. (2017). Individuals with greater science literacy and education have more
polarized beliefs on controversial science topics. Proceedings of the National Academy of Sciences, 114, 9587-
9592.
Eagly, A. H., Kulesa, P., Brannon, L. A., Shaw, K., & Hutson-Comeaux, S. (2000). Why counter
attitudinal messages are as memorable as proattitudinal messages: The importance of active defense against
attack. Personality and Social Psychology Bulletin, 26, 1392-1408
Edwards, K., & Smith, E. E. (1996). A disconfirmation bias in the evaluation of arguments. Journal of
Personality and Social Psychology, 71, 5– 24.
Ermer, E., Cosmides, L., & Tooby, J. (2008). Relative status regulates risky decision-making about
resources in men: Evidence for the co-evolution of motivation and cognition, Evolution and Human Behavior,
29, 106–118.
Festinger, L. (1957). A Theory of Cognitive Dissonance. Stanford: Stanford University Press.
Festinger, L., Riecken, H., & Schachter, S. (1956/2009). When Prophecy Fails. Martino Fine Books.
Flynn, D.J., Nyhan, B. & Reifler, J. (2017). The Nature and Origins of Misperceptions: Understanding
False and Unsupported Beliefs about Politics, Advances in Political Psychology, 38, 127-150.
Forgas, J. P. & Moylan, S. (1982). After the Movies: Transient Mood and Social Judgments, Personality
and Social Psychology Bulletin, 13, 467-477.
Frank, R.H. (1988). Passions within reason. New York, NY:W. W. Norton.
Frank, R. H. (2001). Cooperation through emotional commitment. In R. M. Nesse (Ed.), Evolution and
the capacity for commitment (p. 57–76). Russell Sage Foundation.
Freyd, J. (1983). Shareability: The social psychology of epistemology. Cognitive Science, 7, 191-210.
Freyd, J. (1990). Natural selection or shareability? [commentary] Behavior and Brain Sciences, 13, 732-734.
Frith, C. (2008). Reason 2: No one really uses reason. The New Scientist, 199, 45.
Gibson, J. J. (1979). The Ecological Approach to Visual Perception. New York: Taylor & Francis Group.
Gigerenzer, G. (2004). Dread risk, September 11, and fatal traffic accidents. Psychological Science, 15,
286-287.
Gilovich, T. (1991). How We Know What Isn't So: The Fallibility of Human Reason in Everyday Life,
New York: The Free Press.
Gilovich, T., Griffin, D., & Kahneman, D. (2002). Heuristics and Biases: The Psychology of Intuitive Judgment,
Cambridge: Cambridge University Press.
Greene, J. (2013). Moral Tribes: Emotion, reason, and the gap between us and them. New York, NY: The
Penguin Press.
Haidt, J. (2001). The emotional dog and its rational tail: A social intuitionist approach to moral judgment.
Psychological Review, 108, 814-834.
Haidt, J. (2007). The new synthesis in moral psychology, Science, 316, 998.
Haidt, J. (2012). The Righteous Mind: Why Good People are Divided by Politics and Religion,
Pantheon.
Harmon-Jones, E. (Ed). (2019). Cognitive Dissonance: Reexamining a Pivotal Theory in Psychology, 2nd
Edition, Washington, DC: American Psychological Association.
Harris, S. (2010). The Moral Landscape: How science can determine human values. New York: Free
Press.
Haselton, M. G. & Nettle, D. (2006). The Paranoid Optimist: An Integrative Evolutionary Model of
Cognitive Biases, Personality and Social Psychology Review, 10, 47-66.
Haselton, M. & Ketelaar, T. (2006). Irrational emotions or emotional wisdom: The evolutionary
psychology of affect and behavior, in J. Forgas (Ed.) Affect in social thinking and behavior 8, 21-39.
Hastorf, A. H., & Cantril, H. (1954). They saw a game; a case study. The Journal of Abnormal and Social
Psychology, 49, 129–134.
Heuer, R. J., Jr. (1999). Chapter 2. Perception: Why Can’t We See What is There to be Seen?
Psychology of Intelligence Analysis. History Staff, Center for the Study of Intelligence, Central Intelligence
Agency. Retrieved 2007-10-29.
Heuer, R. J., Jr. (2019). Psychology of Intelligence Analysis. Eastford, CT: Martino Fine Books
Hirshleifer, J. (1987). On emotions as guarantors of threats and promises. In J. Dupré (Ed.),The latest on
the best: Essays on evolution and optimality (p. 307–326). The MIT Press.
Hirshleifer, J. (2001). Game-Theoretic Interpretations of Commitment, In R. M. Nesse (Ed.), Evolution
and the capacity for commitment (p. 77-94). Russell Sage Foundation.
Illing, S. (2019, Jan 16). The case against empathy: Why this Yale psychologist thinks you should be
compassionate, not empathetic. Vox, https://www.vox.com/conversations/2017/1/19/14266230/empathy-
morality-ethics-psychology-compassion-paul-bloom
Norris, P. & Inglehart, R. (2009). Sacred and Secular: Religion and Politics Worldwide. Cambridge:
Cambridge University Press.
Johnson, E. J. & Tversky, A. (1983). Affect, Generalization, and the Perception of Risk, Journal of
Personality and Social Psychology, 45, 20-31.
Kahneman, D., Slovic, P., & Tversky, A. (1982). Judgment Under Uncertainty: Heuristics and Biases.
Cambridge: Cambridge University Press.
Kahan, D. M., Hoffman, D. A., & Braman, D. (2009). Whose eyes are you going to believe? Scott v. Harris
and the Perils of Cognitive Illiberalism, Harvard Law Review, 122, 837-906.
Kahan, D. M. (2012c). Do mass political opinions cohere? And do psychologists "generalize without
evidence" more often than political scientists? The Cultural Cognition Project at Yale Law School:
http://www.culturalcognition.net/blog/2012/12/20/do-mass-political-opinions-cohere-and-do-psychologists-
gener.html
Kahan, D. M., Hoffman, D. A., Braman, D., Evans, D., & Rachlinski, J. J. (2012). "They Saw a Protest":
Cognitive Illiberalism and the Speech-Conduct Distinction, Stanford Law Review, 64, 851-906
Kahan, D. M., & Nussbaum, M. C. (1996). Two conceptions of emotion in law, Yale Law School Legal
Scholarship Repository.
Kahan, D. M., Peters, E., Cantrell Dawson, E. & Slovic, P. (2017). Motivated Numeracy and Enlightened
Self-Government. Behavioural Public Policy, 1, 54-86.
Kahan, D. M., Peters, E., Wittlin, M., Slovic, P., Larrimore Ouellette, L., Braman, D., & Mandel, G.
(2012). The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature
Climate Change, 2, 732–735.
Kahan, D. M., Landrum, A., Carpenter, K., Helft, L., & Jamieson, K. H. (2017). Science Curiosity and
Political Information Processing. Advances in Political Psychology, 38, 179–199.
Kalmoe, N. P. (2020). Uses and Abuses of Ideology in Political Psychology. Political Psychology.
https://doi.org/10.1111/pops.12650
Kant, I. (1781/2007). Critique of Pure Reason. London: Penguin Books Ltd.
Ketelaar, T. (2004). Ancestral emotions, current decisions: Using evolutionary game theory to explore
the role of emotions in decision-making. In C. Crawford & C. Salmon (Eds.), Evolutionary Psychology, Public
Policy and Personal Decisions (pp. 145–168). Mahwah, NJ: Lawrence Erlbaum Associates.
Ketelaar, T. (2006). The role of moral sentiments in economic decision making. In D. De Cremer, M.
Zeelenberg, & K. Murnighan (Eds.), Social Psychology and Economics. Mahwah, NJ: Lawrence Erlbaum
Associates.
Ketelaar, T., & Au, W. T. (2003). The effects of feelings of guilt on the behavior of uncooperative
individuals in repeated social bargaining games: An affect-as-information interpretation of the role of emotion
in social interaction. Cognition and Emotion, 17, 429–453.
Ketelaar, T., & Clore, G. L. (1997). Emotions and reason: The proximate effects and ultimate functions
of emotions. In G. Matthews (Ed.), Personality, Emotion, and Cognitive Science (pp. 355–396). Advances in
Psychology Series. Amsterdam: Elsevier Science Publishers (North-Holland).
Ketelaar, T., & Koenig, B. (2007). Justice, Fairness, and Strategic Emotional Commitments. In D. de
Cremer (Ed.), Justice and Emotions: Current Developments (pp. 133–154). Mahwah, NJ: Lawrence Erlbaum
Associates.
Kitcher, P. (1990). Kant’s transcendental psychology. Oxford: Oxford University Press.
Klaczynski, P. A. (1997). Bias in adolescents' everyday reasoning and its relationship with intellectual
ability, personal theories, and self-serving motivation. Developmental Psychology, 33, 273–283.
Klaczynski, P. A., & Robinson, B. (2000). Personal theories, intellectual ability, and epistemological
beliefs: Adult age differences in everyday reasoning biases. Psychology and Aging, 15, 400–416.
Klein, W. M. P., & Harris, P. R. (2009). Self-affirmation enhances attentional bias toward threatening
components of a persuasive message. Psychological Science, 20, 1463–1467.
Kübler-Ross, E. (1969). On Death and Dying. Routledge.
Kohlberg, L. (1981). The philosophy of moral development: Moral stages and the idea of justice. San
Francisco: Harper & Row.
Kringelbach, M. L., & Berridge, K. C. (2018). The Affective Core of Emotion: Linking Pleasure,
Subjective Well-Being, and Optimal Metastability in the Brain. Emotion Review, 9, 191–199.
Kuklinski, J. H., & Peyton, B. (2007). Belief Systems and Political Decision Making. In The Oxford
Handbook of Political Behavior. Oxford: Oxford University Press.
Kunda, Z. (1987). Motivated Inference: Self-Serving Generation and Evaluation of Causal Theories.
Journal of Personality and Social Psychology, 53, 636–647.
Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin, 108, 480–498.
Kunda, Z. (2001). Social Cognition: Making sense of people. Cambridge, MA: The MIT Press.
Kurzban, R. (2010). Why everyone (else) is a hypocrite. Princeton, NJ: Princeton University Press.
Kurzban, R., & Weeden, J. (2014). The Hidden Agenda of the Political Mind. Princeton, NJ: Princeton
University Press.
Lang, J., Bliese, P. D., Lang, J. W. B., & Adler, A. B. (2011). Work gets unfair for the depressed: Cross-
lagged relations between organizational justice perceptions and depressive symptoms. Journal of Applied
Psychology, 96, 602–618.
Lerner, J. S., Goldberg, J. H., & Tetlock, P. E. (1998). Sober second thought: The effects of
accountability, anger, and authoritarianism on attributions of responsibility. Personality and Social Psychology
Bulletin, 24, 563-574.
Lieberman, D., & Patrick, C. (2018). Objection: Disgust, morality, and the law. Oxford University Press.
Lopez-Rousseau, A. (2005). Avoiding the Death Risk of Avoiding a Dread Risk: The Aftermath of March
11 in Spain. Psychological Science, 16, 426–428.
Lukianoff, G. & Haidt, J. (2018). The Coddling of the American Mind. New York, NY: Penguin Press.
McKenna, R. (2021). Asymmetrical Irrationality: Are only other people stupid? In J. de Ridder & M.
Hannon (Eds.), Routledge Handbook of Political Epistemology (pp. 285–296). New York: Taylor & Francis Group.
Mercier, H. & Sperber, D. (2011). Why do humans reason? Arguments for an argumentative theory.
Behavioral and Brain Sciences, 34, 57-111.
Mercier, H. & Sperber, D. (2017). The Enigma of Reasoning. Cambridge, MA: Harvard University Press.
Miller, G. F. (2010). Reconciling Evolutionary Psychology and Ecological Psychology: How to Perceive
Fitness Affordances. Acta Psychologica Sinica, 39, 546-555.
Munro, G. D., Stansbury, J. A., & Tsai, J. (2012). Causal Role for Negative Affect: Misattribution in
Biased Evaluations of Scientific Information. Self and Identity, 11, 1–15.
Myers, D. G. (2001, December). Do we fear the right things? Observer, 14, 3.
Nagel, T. (1974). What is it like to be a bat? Philosophical Review, 83, 435–450.
Nesse, R.M. (2001a). The smoke detector principle: Natural selection and the regulation of defenses,
Annals of the New York Academy of Sciences, 935, 75-85.
Nesse, R.M. (2001b). Evolution and the Capacity for Commitment. New York: Russell Sage
Foundation.
Nesse, R. M. (2005). Natural selection and the regulation of defenses: A signal detection analysis of the
smoke detector principle. Evolution and Human Behavior, 26, 88–105.
Nesse, R. M. (2019). Good Reasons for Bad Feelings: Insights from the Frontier of Evolutionary
Psychiatry, New York: Penguin Random House.
Nesse, R.M. & Williams, G. C. (1994). Why We Get Sick: The New Science of Darwinian Medicine,
New York: Random House.
Nickerson, R. S. (1998). Confirmation bias: A ubiquitous phenomenon in many guises. Review of
General Psychology, 2, 175–220.
Pachur, T., Hertwig, R. & Steinman, F. (2012). How do people judge risks: Availability heuristic, affect
heuristic, or both? Journal of Experimental Psychology: Applied, 18, 314-330.
Perkins, D. N. (1985). Post-primary education has little impact on informal reasoning. Journal of
Educational Psychology, 77, 562–571.
Perkins, D. N., Farady, M., & Bushey, B. (1991). Everyday reasoning and the roots of intelligence. In J.
Voss, D. Perkins, & J. Segal (Eds.), Informal reasoning and education (pp. 83–105). Hillsdale, NJ: Erlbaum.
Petty, R. E. & Cacioppo, J. T. (1979) Issue involvement can increase or decrease persuasion by
enhancing message-relevant cognitive responses. Journal of Personality and Social Psychology, 37, 1915–1926.
Pinker, S. (1994). The language instinct. London: Allen Lane, the Penguin Press.
Pinker, S. (1999). Words and rules: The ingredients of language. New York: Basic Books.
Povinelli, D. J. (2003). Folk physics for apes: The chimpanzee's theory of how the world works. Oxford
University Press.
Robson, D. (2019). The Intelligence Trap. New York: W.W. Norton & Company.
Schacter, D. L., Cooper, L. A., Delaney, S. M., Peterson, M. A., & Tharan, M. (1991). Implicit memory
for possible and impossible objects: Constraints on the construction of structural descriptions. Journal of
Experimental Psychology: Learning, Memory, and Cognition, 17, 3–19.
Schaller, M. (1992). In-Group Favoritism and Statistical Reasoning in Social Inference: Implications for
Formation and Maintenance of Group Stereotypes, Journal of Personality and Social Psychology, 63, 61-74.
Schwarz, N., & Clore, G. L. (1983). Mood, misattribution, and judgments of well-being: Informative and
directive functions of affective states. Journal of Personality and Social Psychology, 45, 513–523.
Schwarz, N. & Clore, G. L. (2003). Mood as Information: 20 Years Later, Psychological Inquiry, 14,
296-303.
Shiffrar, M., & Freyd, J. (1991). Apparent motion of the human body. Psychological Science, 1, 257–264.
Shweder, R. A., & LeVine, R. A. (1984). Culture Theory: Essays on Mind, Self and Emotion. Cambridge:
Cambridge University Press.
Silberman, S. (2015). Neurotribes: The Legacy of Autism and the Future of Neurodiversity. New York:
Penguin Random House LLC.
Slovic, P. (1999). Trust, Emotion, Sex, Politics, and Science: Surveying the Risk-Assessment Battlefield.
Risk Analysis, 19, 689–701.
Slovic, P., Peters, E., Finucane, M. L., & MacGregor, D. G. (2005). Affect, risk, and decision making.
Health Psychology, 24, S35–S40.
Slovic, P., Finucane, M. L., Peters, E., & MacGregor, D. G. (2016). The Affect Heuristic. European
Journal of Operational Research, 177, 1333–1352.
Sowell, T. (2007). A Conflict of Visions: Ideological Origins of Political Struggles, New York, NY:
Basic Books.
Smith, A. (1759). The theory of moral sentiments. London: Printed for A. Millar, and A. Kincaid and J.
Bell
Stanovich, K. E., West, R.F., & Toplak, M. (2013). Myside Bias, Rational Thinking, and Intelligence,
Current Directions in Psychological Science, 22, 259-264.
Stanovich, K. E., West, R. F., & Toplak, M. (2018). The Rationality Quotient: Toward a test of rational
thinking. Cambridge: MIT Press.
Taber, C. S., & Lodge, M. (2006). Motivated skepticism in the evaluation of political beliefs. American
Journal of Political Science, 50, 755–769.
Taibbi, M. (March 1, 2021) In Defense Of Substack: UCLA professor Sarah T. Roberts mourns the good
old days of gatekeeping and credential-worship. TK News by Matt Taibbi. https://taibbi.substack.com/p/in-
defense-of-substack.
Taibbi, M. (2019). Hate, Inc.: Why Today’s Media Makes Us Despise One Another. OR Books.
Taleb, N. N. (2010). The black swan: the impact of the highly improbable. 2nd ed., Random House.
Taleb, N. N. (2014). Antifragile: things that gain from disorder. New York: Random House.
Thompson, E., Palacios, A., & Varela, F. J. (1991). Ways of coloring: Comparative color vision as a case
study for cognitive science. Behavioral and Brain Sciences, 15, 1–26.
Tooby, J., & Cosmides, L. (1990a). On the Universality of Human Nature and the Uniqueness of the
Individual: The Role of Genetics and Adaptation. Journal of Personality, 58, 17–67.
Tooby, J. & Cosmides, L. (1990b). The past explains the present: Emotional adaptations and the
structure of ancestral environments, Ethology and Sociobiology, 11, 375-424.
Todd, P. M., & Gigerenzer, G. (Eds.). (2012). Ecological rationality: Intelligence in the world. Oxford
University Press.
van den Bos, K. (2003). On the Subjective Quality of Social Justice: The Role of Affect as Information in
the Psychology of Justice Judgments. Journal of Personality and Social Psychology, 85, 482–498.
Young, A. (1995). The Harmony of Illusions: Inventing Post-Traumatic Stress Disorder. Princeton, NJ:
Princeton University Press.