Journal of Economic Perspectives—Volume 30, Number 3—Summer 2016—Pages 133–140

The Mechanics of Motivated Reasoning

Nicholas Epley and Thomas Gilovich

Nicholas Epley is the John T. Keller Professor of Behavioral Science, University of Chicago, Booth School of Business, Chicago, Illinois. Thomas Gilovich is the Irene Blecker Rosenfeld Professor of Psychology, Cornell University, Ithaca, New York. Their email addresses are epley@chicagobooth.edu and tdg1@cornell.edu. For supplementary materials such as appendices, datasets, and author disclosure statements, see the article page at http://dx.doi.org/10.1257/jep.30.3.133.
Whenever we see voters explain away their preferred candidate’s weaknesses, dieters assert that a couple scoops of ice cream won’t really hurt their weight loss goals, or parents maintain that their children are unusually gifted, we are reminded that people’s preferences can affect their beliefs. This
idea is captured in the common saying, “People believe what they want to believe.”
But people don’t simply believe what they want to believe. The psychological
mechanisms that produce motivated beliefs are much more complicated than that.
Personally, we’d like to believe that our contributions to the psychological literature
might someday rival those of Daniel Kahneman, but, try as we might, the disparity in
citations, prizes, invitations—you name it—makes holding such a belief impossible.
People generally reason their way to conclusions they favor, with their preferences
influencing the way evidence is gathered, arguments are processed, and memories of
past experience are recalled. Each of these processes can be affected in subtle ways by
people’s motivations, leading to biased beliefs that feel objective (Gilovich and Ross
2015; Pronin, Gilovich, and Ross 2004). As Kunda (1990) put it, “people motivated
to arrive at a particular conclusion attempt to be rational and to construct a justification of their desired conclusion that would persuade a dispassionate observer. They draw the desired conclusion only if they can muster up the evidence necessary to support it” (pp. 482–83). Motivated reasoning is constrained.
Psychological research makes it clear, in other words, that “motivated beliefs”
are guided by motivated reasoning—reasoning in the service of some self-interest,
to be sure, but reasoning nonetheless. We hope that being explicit about what
psychologists have learned about motivated reasoning will help clarify the types of
motivated beliefs that people are most likely to hold, specify when such beliefs are
likely to be strong and when they are likely to be relatively weak or fragile, and illuminate when they are likely to guide people’s behavior.
In this introduction, we set the stage for the discussion of motivated beliefs in
the papers that follow by providing more detail about the underlying psychological processes that guide motivated reasoning, including a discussion of the varied
motives that drive motivated reasoning and a description of how goals can direct
motivated reasoning to produce systematically biased beliefs. The first paper in
this symposium, by Roland Bénabou and Jean Tirole, presents a theoretical framework for how motives might influence behavior in several important domains; two
additional papers focus on specific motives that can guide motivated reasoning:
Russell Golman, George Loewenstein, Karl Ove Moene, and Luca Zarri discuss how
a “preference for belief consonance” leads people to try to reduce the gap between
their beliefs and those of relevant others, and Francesca Gino, Michael Norton, and
Roberto Weber consider how people engage in motivated reasoning to feel as if they
are acting morally, even while acting egoistically.
A more detailed understanding of motivated beliefs and motivated reasoning
yields a middle-ground view of the quality of human judgment and decision-making.
It is now abundantly clear that people are not as smart and sophisticated as rational
agent models assert (Kahneman and Tversky 2000; Thaler 1991; Simon 1956), in
the sense that people do not process information in unbiased ways. But people
are also not as simple-minded, naïve, and prone to simply ignoring unpalatable
information as a shallow understanding (or reporting) of motivated beliefs might
suggest.
Motives for Reasoning
People reason to prepare for action, and so reasoning is motivated by the goals
people are trying to achieve. A coach trying to win a game thinks about an opponent’s likely moves more intensely than a cheerleader trying to energize the crowd.
A lawyer trying to defend a client looks for evidence of innocence, whereas a lawyer
seeking to convict tries to construct a chain of reasoning that will lead to a guilty
verdict. A person feeling guilty about harming another focuses on ways to assuage
the guilt, while the person harmed is likely to focus on the nature and extent of the
harm. As the great psychologist and philosopher William James (1890, p. 333) wrote
more than a century ago: “My thinking is first and last and always for the sake of my
doing, and I can only do one thing at a time.”
One of the complexities in understanding motivated reasoning is that people
have many goals, ranging from the fundamental imperatives of survival and
reproduction to the more proximate goals that help us survive and reproduce, such
as achieving social status, maintaining cooperative social relationships, holding
accurate beliefs and expectations, and having consistent beliefs that enable effective
action. Sometimes reasoning directed at one goal undermines another. A person
trying to persuade others about a particular point is likely to focus on reasons why
his arguments are valid and decisive—an attentional focus that could make the
person more compelling in the eyes of others but also undermine the accuracy
of his assessments (Anderson, Brion, Moore, and Kennedy 2012). A person who
recognizes that a set of beliefs is strongly held by a group of peers is likely to seek
out and welcome information supporting those beliefs, while maintaining a much
higher level of skepticism about contradictory information (as Golman, Loewenstein, Moene, and Zarri discuss in this symposium). A company manager narrowly
focused on the bottom line may find ways to rationalize or disregard the ethical
implications of actions that advance short-term profitability (as Gino, Norton, and
Weber discuss in this symposium).
The crucial point is that the process of gathering and processing information
can systematically depart from accepted rational standards because one goal—
desire to persuade, agreement with a peer group, self-image, self-preservation—can
commandeer attention and guide reasoning at the expense of accuracy. Economists are well aware of crowding-out effects in markets. For psychologists, motivated
reasoning represents an example of crowding-out in attention.
In any given instance, it can be a challenge to figure out which goals are
guiding reasoning. Consider the often-cited examples of “above-average” effects in self-evaluation: on almost any desirable human trait, from kindness to trustworthiness to the ability to get along with others, the average person consistently rates
him- or herself above average (Alicke and Govorun 2005; Dunning, Meyerowitz,
and Holzberg 1989; Klar and Giladi 1997). An obvious explanation for this result
is that people’s reasoning is guided by egoism, or the goal to think well of oneself.
Indeed, a certain percentage of above-average effects can be explained by egoism
because unrelated threats to people’s self-image tend to increase the tendency for
people to think they are better than others, in an apparent effort to bolster their
self-image (as in Beauregard and Dunning 1998).
But above-average effects also reflect people’s sincere attempts to assess accurately their standing in the world. For instance, many traits are ambiguous and hard
to define, such as leadership or creativity. When people try to understand where
they stand relative to their peers on a given trait, people quite naturally focus on
what they know best about that trait—and what they know best are the personal
strengths that guide their own lives. As Thomas Schelling (1978, pp. 64–65) put it,
“Careful drivers give weight to care, skillful drivers give weight to skill, and those
who think that, whatever else they are not, at least they are polite, give weight to
courtesy, and come out high on their own scale. This is the way that every child has
the best dog on the block.” The above-average effect, in other words, can result from
a self-enhancement goal, or from a non-motivated tendency to define traits egocentrically. Supporting Schelling’s analysis, the above-average effect is significantly
reduced when traits are given precise definitions, or when the traits are inherently
less ambiguous such as “punctual” or “tall” (Dunning, Meyerowitz, and Holzberg
1989).
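Schelling’s mechanism can be made concrete with a small simulation. The sketch below is our own illustration, not an analysis from the studies cited above; the softmax weighting rule, the number of trait components, and all other parameters are arbitrary assumptions. Each simulated person gets several uncorrelated components of an ambiguous trait, defines the trait by weighting the components in proportion to his or her own strengths, and then compares him- or herself to the average person’s profile on that personal scale. No self-enhancement motive is built in, yet most people come out above average on their own definition.

```python
import numpy as np

# Illustrative sketch of Schelling's "best dog on the block" point: purely
# egocentric trait definitions can produce an above-average effect without
# any self-enhancement motive. All parameters are arbitrary assumptions.
rng = np.random.default_rng(0)
n_people, n_components = 10_000, 4

# Each person's true standing on several components of an ambiguous trait
# (e.g., care, skill, and courtesy as components of "good driver").
skills = rng.normal(size=(n_people, n_components))

# Egocentric weighting: each person weights the components in proportion to
# his or her own strengths (a softmax over the person's own component scores).
weights = np.exp(skills)
weights /= weights.sum(axis=1, keepdims=True)

# Self-rating: one's own components scored on one's own weighted definition.
self_scores = (weights * skills).sum(axis=1)

# Rating of the "average person": the population-mean component profile,
# scored on the same personal definition.
mean_profile = skills.mean(axis=0)        # close to zero on every component
average_person_scores = weights @ mean_profile

share_above = (self_scores > average_person_scores).mean()
print(f"Share rating themselves above the average person: {share_above:.0%}")
# Typically well above 50 percent, even though components were assigned at random.
```

In this sketch, fixing a single common weighting for everyone, the analogue of giving the trait a precise definition, pushes the share back toward 50 percent, which parallels the finding that precise or unambiguous traits shrink the effect.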
Knowing which goal is guiding reasoning is critical for predicting the influence
of specific interventions. For example, economists routinely predict that biases in
judgment will be reduced when the stakes for accurate responding are high. This
prediction implicitly assumes that people are not trying to be accurate already. But
in fact, many cognitive biases are not affected by increased incentives for accuracy
because the individuals in question are already trying hard to be accurate (Camerer
and Hogarth 1999). Increasing the incentive to achieve a goal should influence
behavior only when people are not already trying to achieve that goal.
How Motives Influence Beliefs
Understanding that multiple goals can shape reasoning does not explain
how reasoning can become systematically biased. Reasoning involves the recruitment and evaluation of evidence. Goals can distort both of these basic cognitive
processes.
Recruiting Evidence
When recruiting evidence to evaluate the validity of a given belief, an impartial
judge would consider all of the available evidence. Most people do not reason like
impartial judges, but instead recruit evidence like attorneys, looking for evidence
that supports a desired belief while trying to steer clear of evidence that refutes
it. In one memorable example, essayist Johanna Gohmann (2015) describes her
improbable teenage crush on the actor Jimmy Stewart, and her reaction as she
learned more and more about Mr. Stewart: “As I flipped through the pages my
eyes skimmed words like ‘womanizer’ and ‘FBI informant,’ and I slapped it shut,
reading no further.” If you avoid recruiting evidence that you would prefer not to
believe, your beliefs will be based on only a comforting slice of the available facts.
One prominent example of motivated avoidance comes from studies of people’s
reactions to the prospect of having Huntington’s disease: few people who are at risk
of getting the disease get tested before showing symptoms, and those with symptoms
who avoid testing have beliefs that are just as optimistic as those who show no symptoms (Oster, Shoulson, and Dorsey 2013).
Even when people do not actively avoid information, psychological research
consistently demonstrates that they have an easier time recruiting evidence
supporting what they want to be true than evidence supporting what they want to be
false. But even here, people are still responsive to reality and don’t simply believe
whatever they want to believe. Instead, they recruit subsets of the relevant evidence
that are biased in favor of what they want to believe. Failing to recognize the biased
nature of their information search leaves people feeling that their belief is firmly
supported by the relevant evidence.
Biased information processing can be understood as a general tendency for
people to ask themselves very different questions when evaluating propositions they
favor versus oppose (Gilovich 1991). When considering propositions they would prefer
to be true, people tend to ask themselves something like “Can I believe this?” This
evidentiary standard is rather easy to meet; after all, some evidence can usually be found
even for highly dubious propositions. Some patients will get better after undergoing
even a worthless treatment; someone is bound to conform to even the most baseless
stereotype; some fact can be found to support even the wackiest conspiracy theory.
In contrast, when considering propositions they would prefer not be true, people
tend to ask themselves something like “Must I believe this?” This evidentiary standard
is harder to meet; after all, some contradictory evidence can be found for almost
any proposition. Not all patients benefit from demonstrably effective treatments;
not all group members conform to the stereotypes of their group; even the most
comprehensive web of evidence will have a few holes. More compelling evidence is
therefore required to pass this “Must I?” standard. In this way, people can again end
up believing what they want to believe, not through mindless wishful thinking but
rather through genuine reasoning processes that seem sound to the person doing it.
In one study that supports this Can I?/Must I? distinction, students were told
that they would be tested for an enzyme deficiency that would lead to pancreatic
disorders later in life, even among those (like presumably all of them) who were not
currently experiencing any symptoms (Ditto and Lopez 1992). The test consisted
of depositing a small amount of saliva in a cup and then putting a piece of litmus
paper into the saliva. Half the participants were told they would know they had the
enzyme deficiency if the paper changed color; the other half were told they would
know they had it if the paper did not change color. The paper was such that it did
not change color for anyone.
Participants in these two conditions reacted very differently to the same result—
the unchanged litmus paper. Those who thought it reflected good news were quick
to accept that verdict and did not keep the paper in the cup very long. Those who
thought the unchanged color reflected bad news, in contrast, tried to recruit more
evidence. They kept the paper in the cup significantly longer, even trying out (as
the investigators put it) “a variety of different testing behaviors, such as placing the
test strip directly on their tongue, multiple redipping of the original test strip (up
to 12 times), as well as shaking, wiping, blowing on, and in general quite carefully
scrutinizing the recalcitrant . . . test strip.” A signal that participants wanted to receive
was quickly accepted; a signal they did not want to receive was subjected to more
extensive testing.
People’s motivations thus do not directly influence what they believe. Instead,
their motivations guide what information they consider, resulting in favorable
conclusions that seem mandated by the available evidence.
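The “Can I?”/“Must I?” asymmetry can likewise be written down as a simple stopping rule. The sketch below is our own illustration, not the design of Ditto and Lopez (1992); the test accuracy, the retest limit, and the health framing are arbitrary assumptions. A welcome result is accepted on the first reading, while an unwelcome result is “redipped” several times and accepted only if every retest agrees. Each individual test is unbiased, yet the beliefs people end up with are not.

```python
import random

# Illustrative sketch: an asymmetric evidentiary standard turns unbiased
# tests into biased beliefs. A welcome result ("no deficiency") is accepted
# immediately; an unwelcome result is retested up to MAX_TESTS times.
# All numbers are arbitrary assumptions, not data from any cited study.
random.seed(1)
TEST_ACCURACY = 0.8   # each test reports the true state with this probability
MAX_TESTS = 5         # an unwelcome verdict can be "redipped" this many times
N_PEOPLE = 100_000

def final_belief(truly_deficient: bool) -> str:
    """Belief a motivated tester ends up with under the asymmetric standard."""
    for _ in range(MAX_TESTS):
        correct = random.random() < TEST_ACCURACY
        says_deficient = truly_deficient if correct else not truly_deficient
        if not says_deficient:
            return "healthy"      # "Can I believe this?" -- accepted at once
    return "deficient"            # "Must I believe this?" -- only if every test agrees

for truth in (True, False):
    beliefs = [final_belief(truth) for _ in range(N_PEOPLE)]
    share_healthy = beliefs.count("healthy") / N_PEOPLE
    label = "deficient" if truth else "healthy"
    print(f"Truly {label}: {share_healthy:.1%} end up believing they are healthy")
# With 80-percent-accurate tests and up to five redips, most truly deficient
# testers conclude they are healthy, because the unwelcome verdict must clear
# a much stricter standard than the welcome one.
```

In the sketch, roughly two-thirds of the truly deficient testers walk away believing they are healthy, while essentially no healthy tester ends up believing otherwise; the bias comes entirely from the stopping rule, not from any distortion of the individual test results.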
Evaluating Evidence
Of course, even when looking at the very same evidence, people with different
goals can interpret it differently and come to different conclusions. In one telling
experiment cited in this symposium, participants who were randomly assigned to
play the role of a prosecuting attorney judged the evidence presented at trial to be
more consistent with the defendant’s guilt than did participants randomly assigned
to play the role of the defense attorney (Babcock and Loewenstein 1997).
These distorting influences can take many forms, influencing the apparent
meaning of the evidence before us. For instance, any given action can be thought of
in multiple ways. A father lifting a child off the floor could be described as “picking
up a child” or “caring for the child.” The two equally apt descriptions have very
different meanings. Caring for a child is a more significant, benevolent act than
simply picking up the child. A person trying to extol a parent’s character will be
more likely to code the event in a higher-level term like “caring” than a person
trying to demean a parent’s character. Differences in how people construe the very
same action can lead two people to observe the same event but “see” very different
things (Maass, Salvi, Arcuri, and Semin 1989; Trope and Liberman 2003; Vallacher
and Wegner 1987).
Psychologists have examined a host of ways in which people’s goals influence
how they evaluate information, and we won’t review that voluminous literature
here. But it is worth noting that psychologists have been especially interested in
the distortions that arise in the service of consistency. Leon Festinger’s (1957)
theory of cognitive dissonance has been particularly influential. The central idea
is that people are motivated to reconcile any inconsistencies between their actions,
attitudes, beliefs, or values. When two beliefs are in conflict, or when an action
contradicts a personal value, the individual experiences an unpleasant state of
arousal that leads to psychological efforts to dampen or erase the discrepancy, often
by changing a belief or attitude.
Festinger’s (1957) theory stemmed in part from his earlier work on group
dynamics and what he called “pressures to uniformity” (Festinger 1950). When
differences of opinion arise within a group, a palpable tension arises that group
members try to resolve. That tension, he maintained, is diminished only when
agreement is achieved, typically by the majority pressuring the minority to go along.
Festinger’s theory of cognitive dissonance essentially took what he had observed in
groups and put it in the head of the individual: that is, what plays out interpersonally in group dynamics also takes place in individual psychodynamics. We all feel
psychological discomfort when our actions, attitudes, beliefs, or values conflict, and
that discomfort leads us to seek ways to reduce the dissonance.
By focusing on cognitive processes that occur in the head of the individual,
Festinger (1957) helped to usher in a period in which social psychology became
a lot less social. But dissonance reduction is often a group effort. We help one
another feel better about potentially upsetting inconsistencies in our thoughts
and deeds. Our friends reassure us that we chose the right job, the right house,
or the right spouse. We console an acquaintance who’s messed up by saying that
“it’s not so bad,” “he had it coming,” or “things would have turned out the same
regardless of what you did.” Indeed, whole societies help their members justify the
ill-treatment of minorities, the skewed division of resources, or the degradation of
the environment through a variety of mechanisms, including everyday discourse,
mass media messages, the criminal code, and even how the physical environment
is structured.
The social element of rationalization and dissonance reduction fits nicely with
the insightful piece by Golman, Loewenstein, Moene, and Zarri on people’s preference for belief consonance. Furthermore, by connecting the preference for belief
consonance to the existing literature on dissonance reduction, a great body of
empirical research can be tapped to advance our understanding of when and why
people will have an easy time achieving the belief consonance they seek, and when
and why they are likely to struggle.
Coda
The most memorable line from the classic film Gone with the Wind—indeed,
the most memorable line in the history of American movies according to the American Film Institute—is Rhett Butler’s dismissive comment, “Frankly, my dear, I don’t give a damn.” But a different line from that film has attracted more interest from
psychologists: Scarlett O’Hara’s frequent lament, “I can’t think about that right
now. . . . I’ll think about it tomorrow.”
The comment captures people’s intuitive understanding of how motivations
and emotions influence our judgments and decisions. When Scarlett doesn’t want
to accept some unwelcome possibility, she willfully cuts herself off from the relevant
evidence. She can continue to believe what she wants because she never consults
evidence that would lead her to believe differently.
Scarlett’s path is one way that people can end up believing what they want
to believe. But as we have noted, there are many others. Furthermore, people’s
preferred beliefs, developed and sustained through whatever path, guide their
behavior whenever they are called to mind as choices are made. The path from
motives to beliefs to choices should not be a black box to be filled with analytically convenient assumptions. Different motives can guide reasoning in different
ways on different occasions—altering how information is recruited and evaluated—
depending on what a person is preparing to do. We are delighted to see a topic with
such a long history in psychological science being taken seriously by economists.
Thanks to George Loewenstein, who took the leading role in stimulating and organizing the
papers that appear in this symposium.
References
Alicke, Mark D., and Olesya Govorun. 2005.
“The Better-than-Average Effect.” In The Self in
Social Judgment, edited by Mark D. Alicke, David
A. Dunning, and Joachim I. Krueger, 85–106. New
York: Psychology Press.
Anderson, Cameron, Sebastien Brion, Don
A. Moore, and Jessica A. Kennedy. 2012. “A
Status-Enhancement Account of Overconfidence.”
Journal of Personality and Social Psychology 103(4):
718–35.
Babcock, Linda, and George Loewenstein.
1997. “Explaining Bargaining Impasse: The Role
of Self-Serving Biases.” Journal of Economic Perspectives 11(1): 109–26.
Beauregard, Keith S., and David Dunning. 1998.
“Turning Up the Contrast: Self-Enhancement
Motives Prompt Egocentric Contrast Effects in
Social Judgments.” Journal of Personality and Social
Psychology 74(3): 606–21.
Camerer, Colin F., and Robin M. Hogarth. 1999.
“The Effects of Financial Incentives in Experiments: A Review and Capital-Labor-Production
Framework.” Journal of Risk and Uncertainty
19(1–3): 7–42.
Ditto, Peter H., and David F. Lopez. 1992.
“Motivated Skepticism: Use of Differential Decision Criteria for Preferred and Nonpreferred
Conclusions.” Journal of Personality and Social
Psychology 63(4): 568–84.
Dunning, David, Judith A. Meyerowitz, and Amy
D. Holzberg. 1989. “Ambiguity and Self-Evaluation:
The Role of Idiosyncratic Trait Definitions in
Self-Serving Assessments of Others.” Journal of
Personality and Social Psychology 57(6): 1082–90.
Festinger, Leon. 1950. “Informal Social Communication.” Psychological Review 57(5): 271–82.
Festinger, Leon. 1957. A Theory of Cognitive
Dissonance. Stanford, CA: Stanford University
Press.
Gigerenzer, Gerd. 2004. “Dread Risk,
September 11, and Fatal Traffic Accidents.” Psychological Science 15(4): 286–87.
Gilovich, Thomas. 1991. How We Know What
Isn’t So: The Fallibility of Human Reason in Everyday
Life. New York, NY: Free Press.
Gilovich, Thomas, and Lee Ross. 2015. The Wisest One in the Room: How You Can Benefit from Social Psychology’s Most Powerful Insights. New York, NY: Free Press.
Gohmann, Johanna. 2015. “Jimmy Stewart Was
My Teen Idol.” Salon, December 24. http://www.
salon.com/2015/12/24/jimmy_stewart_was_my_
teen_idol/.
James, William. 1890. Principles of Psychology, vol. 2. New York, NY: Cosimo.
Kahneman, Daniel, and Amos Tversky (eds.)
2000. Choices, Values, and Frames. New York, NY:
Cambridge University Press and the Russell Sage
Foundation.
Klar, Yechiel, and Eilath E. Giladi. 1997. “No
One in My Group Can Be Below the Group’s
Average: A Robust Positivity Bias in Favor of
Anonymous Peers.” Journal of Personality and Social
Psychology 73(5): 885–901.
Kunda, Ziva. 1990. “The Case for Motivated
Reasoning.” Psychological Bulletin 108(3): 480–98.
Maass, Anne, Daniela Salvi, Luciano Arcuri, and
Gün R. Semin. 1989. “Language Use in Intergroup
Contexts: The Linguistic Intergroup Bias.” Journal
of Personality and Social Psychology 57(6): 981–93.
Oster, Emily, Ira Shoulson, and E. Ray Dorsey.
2013. “Limited Life Expectancy, Human Capital,
and Health Investments.” American Economic
Review 103(5): 1977–2002.
Pronin, Emily, Thomas Gilovich, and Lee Ross.
2004. “Objectivity in the Eye of the Beholder:
Divergent Perceptions of Bias in Self versus
Others.” Psychological Review 111(3): 781–99.
Schelling, Thomas C. 1978. Micromotives and
Macrobehavior. New York, NY: W.W. Norton.
Simon, Herbert A. 1956. “Rational Choice and
the Structure of the Environment.” Psychological
Review 63(2): 129–38.
Thaler, Richard H. 1991. Quasi-Rational
Economics. New York: Russell Sage Foundation.
Trope, Yaacov, and Nira Liberman. 2003. “Temporal Construal.” Psychological Review 110(3): 403–21.
Vallacher, Robin R., and Daniel M. Wegner. 1987. “What Do People Think They’re Doing?
Action Identification and Human Behavior.”
Psychological Review 94(1): 3–15.