Toward Parsimony in Bias Research: A Proposed Common Framework of Belief-Consistent Information Processing for a Set of Biases

Aileen Oeberst1,2 and Roland Imhoff3
1Department of Media Psychology, University of Hagen; 2Leibniz-Institut für Wissensmedien, Tübingen; and 3Department of Social and Legal Psychology, Johannes Gutenberg University of Mainz

Perspectives on Psychological Science, 2023, 1–24. © The Author(s) 2023. https://doi.org/10.1177/17456916221148147

Abstract
One of the essential insights from psychological research is that people’s information processing is often biased. By now, a number of different biases have been identified and empirically demonstrated. Unfortunately, however, these biases have often been examined in separate lines of research, thereby precluding the recognition of shared principles. Here we argue that several—so far mostly unrelated—biases (e.g., bias blind spot, hostile media bias, egocentric/ethnocentric bias, outcome bias) can be traced back to the combination of a fundamental prior belief and humans’ tendency toward belief-consistent information processing. What varies between different biases is essentially the specific belief that guides information processing. More importantly, we propose that different biases even share the same underlying belief and differ only in the specific outcome of information processing that is assessed (i.e., the dependent variable), thus tapping into different manifestations of the same latent information processing. In other words, we propose for discussion a model that suffices to explain several different biases. We thereby suggest a more parsimonious approach compared with current theoretical explanations of these biases. We also generate novel hypotheses that follow directly from the integrative nature of our perspective.

Keywords: biased information processing, beliefs, belief-consistent information processing

Corresponding Author: Aileen Oeberst, Department of Media Psychology, University of Hagen
Email: aileen.oeberst@fernuni-hagen.de
Thought creates the world and then says, “I didn’t
do it.”
—David Bohm (physicist)
One of the essential insights from psychological
research is that human information processing is often
biased. For instance, people overestimate the extent
to which their opinions and beliefs are shared (e.g.,
Nickerson, 1999), and they apply differential standards
in the evaluation of behavior depending on whether it
is about a member of their own or another group (e.g.,
Hewstone etal., 2002), just to name a few. For many
such biases there are prolific strands of research, and
for the most part these strands do not refer to one
another. As such parallel research endeavors may pre-
vent us from detecting common principles, the current
article seeks to bring a set of biases together by
suggesting that they might actually share the same
“recipe.” Specifically, we suggest that they are based on
prior beliefs plus belief-consistent information process-
ing. Put differently, we raise the question of whether a
finite number of different biases—at the process level—
represent variants of “confirmation bias,” or people’s
tendency to process information in a way that is con-
sistent with their prior beliefs (Nickerson, 1998). Even
more importantly, we argue that different biases could
be traced back to the same underlying fundamental
beliefs and outline why at least some of these funda-
mental beliefs are likely held widely among humans.
In other words, we propose for discussion a unifying
framework that might provide a more parsimonious
account of the previously researched biases presented in Table 1. And we argue that research on the respective biases should elaborate on whether and how those biases truly exceed confirmation bias. The proposed framework also implies several novel testable hypotheses, thus providing generative potential beyond its integrative function.

Table 1. Biases and the Fundamental Beliefs on Which They Might Be Based

Fundamental belief: My experience is a reasonable reference.
- Spotlight effect (e.g., Gilovich et al., 2000): Overestimating the extent to which (an aspect of) oneself is noticed by others
- Illusion of transparency (e.g., Gilovich & Savitsky, 1999): Overestimating the extent to which one’s own inner states are noticed by others
- Illusory transparency of intention (e.g., Keysar, 1994): Overestimating the extent to which an intention behind an ambiguous utterance (that is clear to oneself) is clear to others
- False consensus (e.g., Nickerson, 1999): Overestimation of the extent to which one’s opinions, beliefs, etc., are shared
- Social projection (e.g., Robbins & Krueger, 2005): Tendency to judge others as similar to oneself

Fundamental belief: I make correct assessments of the world.
- Bias blind spot (e.g., Pronin et al., 2002a): Being convinced that mainly others succumb to biased information processing
- Hostile media bias (e.g., Vallone et al., 1985): Partisans perceiving media reports as biased toward the other side

Fundamental belief: I am good.
- Better-than-average effect (e.g., Alicke & Govorun, 2005): Overestimating one’s performance in relation to the performance of others
- Self-serving bias (e.g., Mullen & Riordan, 1988): Attributing one’s failures externally but one’s successes internally

Fundamental belief: My group is a reasonable reference.
- Ethnocentric bias (e.g., Oeberst & Matschke, 2017): Giving precedence to one’s own group (not preference)
- In-group projection (e.g., Bianchi et al., 2010): Perceiving one’s group (vs. other groups) as more typical of a shared superordinate identity

Fundamental belief: My group (members) is (are) good.
- In-group bias/partisan bias (e.g., Tarrant et al., 2012): Seeing one’s own group in a more favorable light than other groups (e.g., morally superior, less responsible for harm)
- Ultimate attribution error (e.g., Hewstone, 1990): External (vs. internal) attribution for negative (vs. positive) behaviors of in-group members; reverse pattern for out-group members
- Linguistic intergroup bias (e.g., Maass et al., 1989): Using more abstract (vs. concrete) words when describing positive (vs. negative) behavior of in-group members and the reverse pattern for out-group members
- Intergroup sensitivity effect (e.g., Hornsey et al., 2002): Criticisms evaluated less defensively when made by an in-group (vs. out-group) member

Fundamental belief: People’s attributes (not context) shape outcomes.
- Fundamental attribution error/correspondence bias (e.g., L. Ross, 1977): Preference for dispositional (vs. situational) attribution with regard to others
- Outcome bias (e.g., Baron & Hershey, 1988): Evaluation of the quality of a decision as a function of the outcome (valence)

We begin by outlining the foundations of our reasoning. First, we define “beliefs” and provide evidence for their ubiquity. Second, we outline the many facets of belief-consistent information processing and elaborate on its pervasiveness. In the third part of the article, we discuss a nonexhaustive collection of hitherto independently treated biases (e.g., spotlight effect, false consensus effect, bias blind spot, hostile media effect) and how they could be traced back to one of two fundamental beliefs plus belief-consistent information processing. We then broaden the scope of our focus and discuss several other phenomena to which the same reasoning might apply. Finally, we provide an integrative discussion of this framework, its broader applicability, and potential limitations.
The Ubiquity of Beliefs and Belief-
Consistent Information Processing
Because we claim a set of biases to be essentially based
on a prior belief plus belief-consistent information pro-
cessing, we first elaborate on these two parts of the rec-
ipe. First, we outline how we conceptualize beliefs and
argue that they are an indispensable part of human cog-
nition. Second, we introduce the many routes by which
belief-consistent information processing may unfold and
present research speaking to its pervasiveness.
Beliefs
We consider beliefs as hypotheses about some aspect
of the world that come along with the notion of
accuracy—either because people examine beliefs’ truth
status or because they already have an opinion about
the accuracy of the beliefs. Beliefs in the philosophical
sense (i.e., “what we take to be the case or regard as
true”; Schwitzgebel, 2019) fall into this category (e.g.,
“This was the biggest inauguration audience ever”;
“Homeopathy is effective”; “Rising temperatures are
human-made”), as does knowledge, a special case of
belief (i.e., a justified true belief; Ichikawa & Steup,
2018).
Following from this conceptualization, there are cer-
tain characteristics that are relevant for the current pur-
pose. First, beliefs may or may not be actually true.
Second, beliefs may result from any amount of deliberate
processing or reflection. Third, beliefs may be held with
any amount of certainty. Fourth, beliefs may be easily testable (e.g., “Canada is larger than the United States”), testable only after some specification (e.g., “I am rational”), partly testable (e.g., not falsifiable: “Traumatic experiences are repressed”), or not testable at all (e.g., “Freedom is more important than security”). It is irrelevant for the
current purpose whether a belief is false, entirely lacks
foundation, or is untestable. All that matters is that the
person holding this belief either has an opinion about
its truth status or examines its truth status.
The ubiquity of beliefs
There is an abundance of research suggesting that the
human cognitive system is tuned to generating beliefs
about the world: An incredible plethora of psychologi-
cal research on schemata, scripts, stereotypes, attitudes
(even about unknown entities; Lord & Taylor, 2009),
top-down processing, but also learned helplessness and
a multitude of other phenomena demonstrates that we
readily form beliefs by generalizing across objects and
situations (e.g., W. F. Brewer & Nakamura, 1984; Brosch
etal., 2010; J. S. Bruner & Potter, 1964; Darley & Fazio,
1980; C. D. Gilbert & Li, 2013; Greenwald & Banaji,
1995; Hilton & von Hippel, 1996; Kveraga etal., 2007;
Maier & Seligman, 1976; Mervis & Rosch, 1981; Roese
& Sherman, 2007). Furthermore, people (as well as
some animals) generate beliefs about the world even
when it is inappropriate because there is actually no
systematic pattern that would allow for expectations
(e.g., A. Bruner & Revusky, 1961; Fiedler etal., 2009;
Hartley, 1946; Keinan, 2002; Langer, 1975; Riedl, 1981;
Skinner, 1948; Weber etal., 2001; Whitson & Galinsky,
2008).
Explanations for such superstitions, but also for a
variety of untestable or unwarranted beliefs, repeatedly
refer to the benefits arising from even illusory beliefs.
Believing in some kind of higher force (e.g., God), for
instance, may provide explanations for relevant phe-
nomena in the world (e.g., thunder, pervasive suffering
in the world) and may thereby increase perceptions of
predictability, control, self-efficacy, and even justice, all
of which have been shown to be beneficial for individu-
als, even if they are illusory (e.g., Alloy & Abramson,
1979; Alloy & Clements, 1992; Day & Maltby, 2003;
Green & Elliott, 2010; Kay, Gaucher, etal., 2010; Kay,
Moscovitch, & Laurin, 2010; Langer, 1975; Taylor &
Brown, 1988, 1994; Taylor etal., 2000; Witter etal.,
1985). Religious ideas, in particular, have furthermore
fostered communion, orderly coexistence, and even
cooperation among individuals, benefiting both indi-
viduals as well as entire groups (e.g., Bloom, 2012;
Dow, 2006; Graham & Haidt, 2010; Johnson & Fowler,
2011; Koenig etal., 1999; MacIntyre, 2004; Peoples &
Marlowe, 2012). Indeed, there are numerous unwar-
ranted—or even blatantly false—beliefs that either have
no (immediate and thus likely detectable) detrimental
consequences or even lead to positive consequences
(for placebo effects, see Kaptchuk etal., 2010; Kennedy
& Taddonio, 1976; Price etal., 2008; for magical think-
ing, see Subbotsky, 2004; for belief in a just world, see
Dalbert, 2009; Furnham, 2003), which fosters the sur-
vival of such beliefs.
Beyond demonstrations of people’s readiness toward
forming beliefs, research has repeatedly affirmed peo-
ple’s tendency to be intolerant of ambiguity and uncer-
tainty and found a preference for “cognitive closure”
(i.e., a made-up mind) instead (Dijksterhuis etal., 1996;
Furnham & Marks, 2013; Furnham & Ribchester, 1995;
Kruglanski & Freund, 1983; Ladouceur etal., 2000;
Webster & Kruglanski, 1997). And last but not least,
D. T. Gilbert (1991) made a strong case for the Spinozan
view that comprehending something is so tightly connected to believing it that beliefs can be rejected only after deliberate reflection—and even then may still affect our behavior (Risen, 2016). In other words, beliefs emerge the very moment we understand something about the world. Children understand (and thus believe) what takes place long before they have developed the cognitive capabilities that are needed to deny propositions (Pea, 1980). After all, children are continuously
and thoroughly exposed to an environment (e.g., expe-
rience, language, culture, social context) that provides
an incredibly rich source of beliefs that are transmitted
subtly as well as blatantly and thereby effectively shapes
humans’ worldviews and beliefs from the very begin-
ning. Taken together, the research has indicated that
people readily generate beliefs about the world (D. T.
Gilbert, 1991; see also Popper, 1963). Consequently,
beliefs are an indispensable part of human cognition.
Belief-consistent information
processing—facets and ubiquity
To date, researchers have accumulated much evidence
for the notion that beliefs serve as a starting point of
how people perceive the world and process informa-
tion about it. For instance, individuals tend to scan the
environment for features more likely under the hypoth-
esis (i.e., belief) than under the alternative (“positive
testing”; Zuckerman etal., 1995). People also choose
belief-consistent information over belief-inconsistent
information (“selective exposure” or “congeniality bias”;
for a meta-analysis, see Hart etal., 2009). They tend
to erroneously perceive new information as confirming
their own prior beliefs (“biased assimilation”; for an
overview, see Lord & Taylor, 2009; “evaluation bias”;
e.g., Sassenberg etal., 2014) and to discredit informa-
tion that is inconsistent with prior beliefs (“motivated
skepticism”; Ditto & Lopez, 1992; Taber & Lodge, 2006;
“disconfirmation bias”; Edwards & Smith, 1996; “parti-
san bias”; Ditto etal., 2019). At the same time, people
tend to stick to their beliefs despite contrary evidence
(“belief perseverance”; C. A. Anderson etal., 1980;
C. A. Anderson & Lindsay, 1998; Davies, 1997; Jelalian
& Miller, 1984), which, in turn, may be explained and
complemented by other lines of research. “Subtyping,”
for instance, allows for holding on to a belief by catego-
rizing belief-inconsistent information into an extra cat-
egory (e.g., “exceptions”; for an overview, see Richards
& Hewstone, 2001). Likewise, the application of dif-
ferential evaluation criteria to belief-consistent and
belief-inconsistent information systematically fosters
“belief perseverance” (e.g., Sanbonmatsu etal., 1998;
Trope & Liberman, 1996; see also Koval etal., 2012;
Noor et al., 2019; Tarrant et al., 2012). In some cases, people hold even stronger beliefs after facing disconfirming evidence (“belief-disconfirmation effect”; Bateson,
1975; see also “cognitive dissonance theory”; Festinger,
1957; Festinger etal., 1955/2011).
All of the phenomena mentioned above are expres-
sions of the principle of belief-consistent information
processing (see also Klayman, 1995). That is, although
specifics in the task, the stage for information process-
ing, and the dependent measure may vary, all of these
phenomena demonstrate the systematic tendency
toward belief-consistent information processing. Put
differently, belief-consistent information processing
emerges at all stages of information processing such as
attention (e.g., Rajsic etal., 2015), perception (e.g.,
Cohen, 1981), evaluation of information (e.g., Ask &
Granhag, 2007; Lord etal., 1979; Richards & Hewstone,
2001; Taber & Lodge, 2006), reconstruction of informa-
tion (e.g., Allport & Postman, 1947; Bartlett, 1932;
Kleider etal., 2008; M. Ross & Sicoly, 1979; Sahdra &
Ross, 2007; Snyder & Uranowitz, 1978), and the search
for new information (e.g., Hill etal., 2008; Kunda, 1987;
Liberman & Chaiken, 1992; Pyszczynski etal., 1985;
Wyer & Frey, 1983)—including one’s own elicitation
of what is searched for (“self-fulfilling prophecy”;
Jussim, 1986; Merton, 1948; Rosenthal & Jacobson, 1968;
Rosenthal & Rubin, 1978; Sheldrake, 1998; Snyder &
Swann, 1978; Watzlawick, 1981). Moreover, many stages
(e.g., evaluation) allow for applying various strategies
(e.g., ignoring, underweighting, discrediting, refram-
ing). Consequently, individuals have a great number
of options at their disposal (think of the combinations)
so that the degrees of freedom in their processing of
information allows for countless possibilities for belief-
consistent information processing, which may explain
how belief-consistent conclusions arise even under
the least likely circumstances (e.g., Festinger etal.,
1955/2011).
In sum, belief-consistent information processing
seems to be a fundamental principle in human informa-
tion processing that is not only ubiquitous (e.g.,
Gawronski & Strack, 2012; Nickerson, 1998; see also
Abelson etal., 1968; Feldman, 1966) but also a conditio
humana. This notion is also reflected in the fact that
motivation is not a necessary prerequisite for engaging
in belief-consistent information processing: Several stud-
ies have shown that belief-consistent information pro-
cessing arises for hypotheses for which people have no
stakes in the specific outcome and thus no interest in
particular conclusions (i.e., motivated reasoning; Kunda,
1990; e.g., Crocker, 1982; Doherty etal., 1979; Evans,
1972; Klayman & Ha, 1987, 1989; Mynatt etal., 1978;
Sanbonmatsu etal., 1998; Skov & Sherman, 1986; Snyder
& Swann, 1978; Snyder & Uranowitz, 1978; Wason,
1960). In addition, research under the label “contextual
bias” can be classified as unmotivated confirmation bias
because it demonstrates how contextual features (e.g.,
prior information about the credibility of a person) may
bias information processing (e.g., the evaluation of the
quality of a statement from that person; e.g., Bogaard
etal., 2014; see also Dror etal., 2006; Elaad etal., 1994;
Kellaris etal., 1996; Risinger etal., 2002). In other words,
the same mechanisms apply, regardless of people’s
interest in the outcome (Trope & Liberman, 1996).
Hence, belief-consistent information processing takes
place even when people are not motivated to confirm
their belief. Furthermore, belief-consistent information
processing has been shown even when people are moti-
vated to be unbiased (e.g., Lord etal., 1984), or at least
want to appear unbiased. This is frequently the case in
the lab, where participants are motivated to hide their
beliefs (for an overview of subtle discrimination, see
Bertrand etal., 2005). But it is even more true in sci-
entific research (Greenwald etal., 1986), forensic inves-
tigations (Dror etal., 2006; Murrie etal., 2013; Rassin
etal., 2010), and in the courtroom (or legal decision-
making, more generally), in which an unbiased judg-
ment is the ultimate goal that is rarely reached (Devine
etal., 2001; Hagan & Parker, 1985; Mustard, 2001; Pruitt
& Wilson, 1983; Sommers & Ellsworth, 2001; Steblay
etal., 1999; for overviews, see Faigman etal., 2012;
Kang & Lane, 2010). Taken together, overabundant
research demonstrates that belief-consistent informa-
tion processing is a pervasive phenomenon for which
motivation is not a necessary ingredient.
Biases Reexplained as Confirmation Bias
Having pointed to the ubiquity of beliefs and belief-
consistent information processing, let us now return to
the nonexhaustive list of biases in Table 1, for which
we propose to entertain the notion that they may arise
from shared beliefs plus belief-consistent information
processing. As can be seen at first glance, we bring
together biases that have been investigated in separate
lines of research (e.g., bias blind spot, hostile media
bias). We argue that all of the biases mentioned in Table 1 could, in principle, be understood as the result of a fundamental underlying belief plus belief-consistent information processing. Furthermore, in specifying
the fundamental beliefs, we suggest that several biases
actually share the same belief (e.g., “I make correct
assessments”; see Table 1)—thereby representing only
variations in which the underlying belief is expressed.
To be sure, the current approach does not preclude contributions from other factors to the biases at hand. We merely raise the question of whether the parsimonious combination of belief plus belief-consistent information processing alone might provide an explanation that suffices to predict the existence of the biases listed in Table 1. That is, other factors could contribute to, attenuate, or exacerbate these biases, but our recipe alone would already allow their prediction. Let us now see how some
of the biases mentioned in Table 1 could be traced back
to the (same) fundamental beliefs and thereby be
explained by them—when acknowledging the principle
of belief-consistent information processing. We do so by
spelling this out for the biases based on two fundamental
beliefs (“My experience is a reasonable reference” and
“I make correct assessments”).
“My experience is a reasonable reference”
A number of biases seem to imply that people take both
their own (current) phenomenology and themselves as
starting points for information processing. That is, even
when a judgment or task is about another person, peo-
ple start from their own lived experience and project
it—at least partly—onto others as well (e.g., Epley
etal., 2004). For instance, research on phenomena fall-
ing under the umbrella of the “curse of knowledge” or
“epistemic egocentrism” speaks to this issue because
people are bad at taking a perspective that is more
ignorant than their own (Birch & Bloom, 2004; for an
overview, see Royzman etal., 2003). People overesti-
mate, for instance, the extent to which their appearance
and actions are noticed by others (“spotlight effect”;
e.g., Gilovich etal., 2000), the extent to which their
inner states can be perceived by others (“illusion of
transparency”; e.g., Gilovich etal., 1998; Gilovich &
Savitsky, 1999), and the extent to which they expect others to grasp the intention behind an ambiguous utterance whose meaning is clear to themselves (“illusory transparency of intention”; Keysar, 1994; Keysar et al.,
1998). Likewise, people overestimate similarities
between themselves and others (“self-anchoring” and
“social projection”; Bianchi etal., 2009; Otten, 2004;
Otten & Epstude, 2006; Otten & Wentura, 2001; A. R.
Todd & Burgmer, 2013; van Veelen etal., 2011; for a
meta-analysis, see Robbins & Krueger, 2005), as well
as the extent to which others share their own perspec-
tive (“false consensus effect”; Nickerson, 1999; for a
meta-analysis, see Mullen etal., 1985).
Taken together, a number of biases seem to result
from people taking—by default—their own phenom-
enology as a reference in information processing (see
also Nickerson, 1999; Royzman etal., 2003). Put differ-
ently, people seem to—implicitly or explicitly—regard
their own experience as a reasonable starting point
when it comes to judgments about others and fail to
adjust sufficiently. Instead of disregarding—or even dis-
crediting—their own experience as an appropriate
starting point, people rely on it. When judging, for
instance, the extent to which others may perceive their
own inner state, people act on the belief that their own
experience of their inner state (e.g., their nervousness)
is a reasonable starting point, which, in turn, guides
their information processing. They might start out with
a specific and directed question (e.g., “To what extent
do others notice my nervousness?”) instead of an open
and global question (e.g., “How do others see me?”).
People might also focus on information that is consis-
tent with their own phenomenology (e.g., their
increased speech rate) as potential cues that others
could draw on. Finally, people may ignore or under-
weight information that is inconsistent with their phe-
nomenology (e.g., their limbs being completely calm)
or discredit such observations as potentially valid cues
for others. In the same vein, people assume that others
draw on the same information (e.g., their increased
speech rate) and that they draw the same conclusions
from it (i.e., their nervousness). All of this—as well as
the empirical evidence for the biases outlined within
this section—suggests that people do take their own
experience as a reference when processing information
to arrive at judgments regarding others and how others
see them.
Now let us entertain the notion that people do, by
default, regard their own experience as a reasonable
reference for their judgments about others. Would
biases such as the spotlight effect, the illusion of trans-
parency (of intention), false consensus, and social pro-
jection not (by default) follow from the default belief
when taking the general human tendency of belief-
consistent information processing into account? From
our point of view, the answer is an emphatic “yes.” If
people judge the extent to which others notice (some-
thing about) them or their inner states or intentions,
hold the fundamental belief that their own experi-
ence is a reasonable reference, and engage in belief-
consistent information processing, we should—by
default and on average—observe an overestimation of
the extent to which an aspect of oneself or one’s own
inner states are noticed by others as suggested by the
spotlight effect and the illusion of transparency (of
intention). Likewise, people should overestimate the
extent to which others are similar to themselves (social
projection) and share their own opinions and beliefs
(false consensus).
This reasoning might be reminiscent of anchoring-
and-(insufficient)-adjustment accounts (Tversky &
Kahneman, 1974), and there are certainly parallels so
that one could speak of a mere reformulation. A crucial
difference is, however, that we explicate a fundamental
belief that explains why people anchor on their own
phenomenology when making judgments about others:
They (implicitly or explicitly) believe that their own
experience is a reasonable reference, even for others.
Yet another advantage of our proposed framework is
that it acknowledges even more parallels to other biases
and provides a more parsimonious account. After all,
we argue that these biases—at their core—could be
understood as a variation of confirmation bias (based
on a shared belief). That is, we propose an explanation
that suffices to predict the existence of these biases
while clearly acknowledging that other factors may and do contribute to, attenuate, or exacerbate these biases.
“I make correct assessments”
Let us turn our attention to a second group of biases
and entertain the notion that they stem from the default
belief of making correct assessments, which people
hold for themselves but not for others. As we argue
below, biases such as the bias blind spot and the hostile
media bias are almost logical consequences of people’s
simple assumption that their assessments are correct.
Holding the belief that one makes correct assessments also implies believing that one does not fall prey to biases. Precisely such a meta-
bias of expecting others to be more prone (compared
to oneself) to such biases has been subsumed under
the phenomenon of the bias blind spot. The bias blind
spot describes humans’ tendency to “see the existence
and operation of cognitive and motivational biases
much more in others than in themselves” (Pronin etal.,
2002a, p. 369; for reviews, see Pronin, 2007; Pronin
etal., 2004). If people start out from the default assump-
tion that they make correct assessments, as suggested
by our framework, one part of the bias blind spot is
explained right away: the conviction that one’s own
assessments are unbiased (see also Frantz, 2006). After
all, trust in one’s assessments may effectively prevent
the identification of one’s own biases and errors—either by
failing to see the necessity to rethink judgments or by
failing to identify biases therein. The other part, how-
ever, is implied in the fact that people do not hold the
same belief for others (for a somewhat similar notion,
see Pronin etal., 2004, 2006). Importantly, we propose
that people do not generate the same fundamental
beliefs about others, particularly not about a broad or
vague group of others that is usually assessed in studies
(e.g., the “average American” or the “average fellow
classmate”; Pronin etal., 2002a, 2006; see also the sec-
tion on fundamental beliefs and motivation). The logi-
cal consequence of people’s believing in the accuracy
of their own assessments while simultaneously not
holding the same conviction in the accuracy of others’
assessments is that people expect others to succumb
to biases more often than they themselves do (e.g.,
Kruger & Gilovich, 1999; Miller & Ratner, 1998; van
Boven etal., 1999). Another consequence is to assume
errors on the part of others if discrepancies between
their and one’s own judgments are observed (Pronin,
2007; Pronin et al., 2004; Ross et al., 2004, as cited in
Pronin etal., 2004).
The hostile media bias describes the phenomenon
by which, for instance, partisans of conflicting groups
view the same media reports about an intergroup con-
flict as biased against their own side (Vallone etal.,
1985; see also Christen etal., 2002; Dalton etal., 1998;
Matheson & Dursun, 2001; Richardson etal., 2008; for
a meta-analysis, see Hansen & Kim, 2011). The reason-
ing of our framework here is similar to the one applied
to the bias blind spot (see also Lord & Taylor, 2009): If people assume that their own assessments are correct and, by virtue of being correct, also unbiased, it is almost necessary to assume that others (people or media reports) are biased if their views differ. People who start from the belief that they make correct assessments process the available information (e.g., a discrepancy between their own view and media reports) in a way that is consistent with this basic belief (e.g., by attributing the discrepancy to a bias in others, not themselves). In
addition, in line with our argument of rather general
mechanisms being at play, the hostile media effect was found in representative samples (e.g., Gunther & Christen, 2002) and even for people who were less connected with the issue at hand (Hansen & Kim, 2011), that is, people who were not strongly involved with the issue, which Vallone et al. (1985) had initially regarded as a prerequisite.
To summarize, we argue that the bias blind spot and
the hostile media bias can essentially be explained by
one fundamental underlying belief: People generally
trust their assessments but do not hold the same trust
for others’ assessments. As a consequence, they are
overconfident and do not question their own judgment
as systematically as they question the judgment of others
(e.g., when confronted with a different view). Hence,
we suggest that these biases are based on the same
recipe (belief plus belief-consistent information process-
ing). Even more, we suggest that these biases are based
on the same fundamental belief: people’s belief that they
themselves make correct assessments. By doing so, we
not only provide a more parsimonious account for dif-
ferent biases but also bring together biases that have
heretofore been treated as unrelated because they have
been researched in very different areas within psychol-
ogy (e.g., whereas hostile media bias is mainly addressed
in the intergroup context, bias blind spot is not).
Further Clarifications and Distinctions
So far, we have attempted to show that the biases listed
in Table 1 can be understood as a combination of
beliefs plus belief-consistent information processing.
This is not to say that no other factors or mechanisms
are at play but rather to put forth the idea that belief
plus belief-consistent information processing suffices
as an explanation (with the corollary that the funda-
mental beliefs are not held for other people as well).
In the next section, we add some clarifications to our
approach regarding the role of “innocent” processes,
motivation, and deliberation, which also differentiate
our approach from others. We also contrast our reason-
ing with a Bayesian perspective.
The role of innocent processes
We have repeatedly emphasized the parsimony of our
account, but several explanations have been put for-
ward that are even more parsimonious in the sense
that they outline how biases can emerge from innocent
processes without any prior beliefs that led participants
to draw biased conclusions (e.g., Alves etal., 2018;
Chapman, 1967; Fiedler, 2000; Hamilton & Gifford,
1976; Meiser & Hewstone, 2006). Instead, characteris-
tics of the environment (e.g., information ecologies)
and basic principles of information processing can lead
to profoundly biased conclusions, according to these
authors (e.g., evaluating members of novel groups or
minorities more negatively; Alves etal., 2018). Within
these frameworks, individuals’ only contribution to
biased conclusions lies in their lack of metacognitive
abilities that would enable them to detect (and control
for) such biases (e.g., Fiedler, 2000; Fiedler et al.,
2018). Obviously, a crucial difference between these
accounts and our current perspective is that they start
out from the notion of perfectly open-minded individuals who do not hold any relevant beliefs (i.e., a tabula
rasa), whereas our main argument rests on the assump-
tion that many biases actually result from already hav-
ing beliefs. Although this difference already makes
clear that these two perspectives do not necessarily
compete with one another, but could—in principle—
both contribute to biases (at different stages), we are
very skeptical about the prevalence of open-minded-
ness (of not holding any prior belief; see also Fiedler,
2000, p. 662).
As outlined above, we regard beliefs as an indispens-
able part of human cognition because people are
extremely ready to generate beliefs about the world.
Therefore, we are skeptical that a truly open mind (in the sense of having literally no prior beliefs or convictions) is a prevalent case. Nevertheless, innocent
circumstances (such as the information ecology) might
explain a possible origin of (biased) expectations and
beliefs where there were none before (see also Nisbett
& Ross, 1980; Sanbonmatsu etal., 1998).
The role of motivation
One recurrent theme in the explanations of several
biases is the notion of motivation (e.g., Kruglanski
etal., 2020). The bias blind spot, for instance, is some-
times interpreted as an expression of individuals’
motives for superiority (see Pronin, 2007). More gener-
ally, for biases based on the beliefs “I am good” and
“My group is good,” a number of explanations are based
on presumed motives for a positive self-concept or even
for self-enhancement (e.g., J. D. Brown, 1986; Campbell
& Sedikides, 1999; Hoorens, 1993; John & Robins, 1994;
Kwan etal., 2008; Sedikides & Alicke, 2012; Sedikides
etal., 2005; Shepperd et al., 2008; Tajfel & Turner,
1986). Following from our account, however, such moti-
vational antecedents are not necessary to explain
biases. To be clear, we do not claim that motivation is
per se irrelevant. Rather, we can well imagine that moti-
vation may amplify each and every bias. We argue here,
however, that motivation is not a necessary precondi-
tion to arrive at any of the biases listed.
Fundamental beliefs and motivation. Are people
motivated to make correct assessments of the world?
Probably yes. Do people need a motive to arrive at the
belief that they make correct assessments? Certainly not.
Instead, people might simply overgeneralize from their
everyday experiences (Riedl, 1981). People almost always correctly expect darkness to follow the light, a fall to follow a jump, thirst and hunger after some period without water and food, fatigue after an extended period of intensive activity, the keys to be where they left them, electricity to come from the sockets, a hangover after a lot of alcohol, newspapers to change their contents each day, doctors to try to make things better, and salaries to be paid regularly—just to mention a tiny fraction of the abundance of correct assessments in everyday life (D. T. Gilbert, 1991).
Not all assessments or beliefs about the world are
correct, however. Crucially, various mechanisms preclude the realization that one has made incorrect assessments.
First, we have already pointed out that some beliefs
may be untestable or unfalsifiable—which has its own
psychological advantages as one cannot be proven
wrong (Friesen etal., 2015).1 Second, people usually
do not attempt to falsify their beliefs, even if that would
be possible and desirable (Popper, 1963). Instead, they
engage in the many ways of belief-consistent informa-
tion processing as we have outlined above. This, of
course, also contributes to the existence and mainte-
nance of beliefs—and first and foremost to the belief
that one makes correct assessments (see also Swann &
Buhrmester, 2012). After all, processing information in
a belief-consistent way and “confirming” one’s beliefs
entails the experience of making correct assessments.
Third, even if we set aside the human tendency toward belief-consistent information processing, it is often not possible for people to realize their incorrect assessments, be it because they lack direct access to the processes that influence their perceptions and evaluations (Nisbett & Wilson, 1977; Wilson & Brekke, 1994; Wilson et al., 2002; see also Frantz, 2006; Pronin et al., 2004, 2006) or because they lack a reference for comparison that would be necessary to identify biases. In
the real world, for instance, people often have no
access to others’ perceptions and thoughts, which gen-
erally precludes the recognition of overestimations
(e.g., of the extent to which own inner states are
noticed by others, i.e., illusion of transparency). Similarly, once a society has decided to hold a person captive because of the potential danger that emanates from that person, there is no chance to realize that the person was not dangerous. Likewise, people cannot systematically
trace the effects of a placebo to their own expectations
(e.g., Kennedy & Taddonio, 1976; Price etal., 2008),
just to mention a few instances. In other words, people
cannot exert the systematic examination that character-
izes scientific scrutiny (which, however, also does not
preclude being biased, e.g., Greenwald etal., 1986) and
thereby do not detect their incorrect assessments. Taken together, for a number of reasons, people overwhelmingly perceive themselves as making correct assessments, be it because they are correct or because they are simply never corrected. Such an overgeneraliza-
tion to a fundamental belief of making correct assess-
ments could, thus, actually be regarded as a reasonable
extrapolation. Consequently, no motivation is needed
to arrive at this fundamental belief. Rather, we expect
healthy individuals to be—by default—naive realists
(see also Griffin & Ross, 1991; Ichheiser, 1949; Pronin
etal., 2002b; L. Ross & Ward, 1996). In other words, we
propose people generally start from the default assump-
tion that their assessments are accurate.
Since people do not have immediate access to the
experiences and phenomenology of others, however,
they do not hold the same default belief for other peo-
ple. This is crucial with regard to biases. After all, if
people did not only believe that they make correct
assessments of the world, but at the same time and with
the same verve also believed that other people make
correct assessments of the world, we would not expect
biases such as the “bias blind spot” to occur. The fact
that people do not hold such a conviction for others,
however, does not necessarily involve motivation
either—the belief may be lacking for the simple reason
that people do not have immediate access to others’
experiences. Consequently, motivation is not necessary
to arrive at self-other differences in this regard. Let us
illustrate this with regard to in-group bias: If people merely held the belief that their own group is good (see also Cvencek et al., 2012; Mullen et al., 1992), but did not hold the same belief for other groups, in-group favoritism could result—without assuming that people believe their group to be better than other groups. Indeed, a lot of research suggests that people show automatic in-group favoritism but not a parallel automatic out-group derogation (see Fiske, 1998, for an overview).
As we postulate that motivation is not necessary for
the fundamental beliefs to arise, we also propose that
motivation is not a necessary ingredient for self- or
group-favoring outcomes. Crucially, this is at odds with
the most commonly accepted theoretical explanation
for in-group bias—the social-identity approach (Tajfel
& Turner, 1979; Turner etal., 1987), which posits that
(a) memberships in social groups are an essential part
of individuals’ self-concepts (see also R. Brown, 2000)
and (b) individuals strive to see themselves in a positive
light. Following from these two postulates, there is a
fundamental tendency to favor the social group they
identify with (i.e., in-group bias; e.g., M. B. Brewer,
2007; Hewstone etal., 2002). Contrary to this approach,
we argue that it does not need a motivational compo-
nent (i.e., the striving for a positive self-concept). To
be sure, motivation may add to and thus likely pro-
nounce in-group bias, but we do not expect it to be a
necessary precondition. In fact, our reasoning is in line
with the observation that people do not show height-
ened self-esteem after engaging in in-group bias (for
an overview, see Rubin & Hewstone, 1998), as would
be expected from original social-identity theorizing.
Belief-consistent information processing and moti-
vation. Quite frequently, belief-consistent information
processing—such as in the context of confirmation
bias—is equated with motivated information processing
(Kunda, 1990), in which people are motivated to defend,
maintain, or confirm their prior beliefs. Some authors
have even suggested speaking of “my-side bias” rather
than confirmation bias (e.g., Mercier, 2017). In fact, belief
and motivation often come together: Some beliefs just
feel better than others, and “people find it easier to
believe propositions they would like to be true than
propositions they would prefer to be false” (Nickerson,
1998, p. 197; see also Kruglanski etal., 2018; on the “Pol-
lyanna principle,” see Matlin, 2017). In addition, in some
beliefs, people may have already invested a lot (e.g.,
one’s beliefs about the optimal parenting style or about
God/paradise; e.g., Festinger etal., 1955/2011; see also
McNulty & Karney, 2002) so that the beliefs are psycho-
logically extremely costly to give up (e.g., ideologies/
political systems one has supported for a long time; Lord
& Taylor, 2009). Hence, wanting a belief to be true
likely accentuates belief-consistent information process-
ing (Kunda, 1990; see also Tesser, 1978) and may even
include strategic components (e.g., the deliberate search
for belief-consistent information; Festinger et al., 1955/
2011; Yong etal., 2021). But despite this prevalent asso-
ciation of confirmation bias and motivated information
processing, the latter is not a necessary precondition of
the former. On the contrary, as already outlined above,
belief-consistent information processing takes place
when people are not motivated to confirm their belief as
well as when people are motivated to be unbiased or at
least want to appear unbiased. Consequently, belief-con-
sistent information processing is a fundamental principle
that is not contingent on motivation.
The role of deliberation
Although deliberation is not entirely independent of
motivation, it deserves extra discussion because it can
be and has been viewed as a remedy for biases. Specifi-
cally, knowledge about the specific bias, the availability
of resources (e.g., time), as well as the motivation to
deliberate are considered to be necessary and sufficient
preconditions to effectively counter bias according to
some models (e.g., Oswald, 2014; for a similar case, see
the potential solutions proposed by Nickerson, 1999).
Although this might be true for logical problems that
suggest an immediate (but wrong) solution to partici-
pants (e.g., “strategy-based” errors in the sense of
Arkes, 1991), much research attests to people’s failure
to correct for biases even if they are aware of the prob-
lem, urged or motivated to avoid them, and are pro-
vided with the necessary opportunity (e.g., Harley,
2007; Lieberman & Arndt, 2000; for a meta-analysis
about ignoring inadmissible evidence, see Steblay etal.,
2006).
A very plausible reason for this is that people fail to
take effective countermeasures spontaneously (Giroux
et al., 2016; Kelman et al., 1998). Recall the biases that we suggested might follow from an overgeneralization of one’s own phenomenology. Not overgeneralizing one’s own phenomenology, in turn, effectively boils down to ignoring information one has. This may be quite easy
if the nature of the information to be ignored and the
judgment to be made are clear-cut, as is the case in
most theory-of-mind paradigms (for an overview, see
Flavell, 2004). In a typical false-belief study, for instance,
people are required to set aside their knowledge that
an item was removed from its place in the absence of
a person who had previously observed its placement,
and to indicate where the other person
would believe the item to be. Essentially, the informa-
tion in this task is binary (present vs. not present; i.e.,
people have to ignore the knowledge that the item was
removed and therefore is not in its original place any-
more). In addition, the information to be ignored refers
to an aspect of physical reality that is (a) objective in
that interindividual agreement should be perfect in this
regard and (b) readily accessible (see also Clark &
Marshall, 1981). Consequently, not only the information
to be ignored but also its impact on the required judg-
ment may be unequivocally and exhaustively identi-
fied—and therefore effectively controlled for.
However, the situation is substantially different in
the tasks underlying the spotlight effect, the illusion
of transparency (of intention), as well as the false con-
sensus effect (and egocentrism in similar tasks; e.g.,
Chambers & De Dreu, 2014; for other association-based
errors, see Arkes, 1991). Here, the information to be
ignored is often not binary (e.g., one’s emotions, one’s
attitudes) and therefore also not necessarily entirely
and unequivocally identifiable to people themselves
(i.e., the specific extent or intensity). Furthermore, even
without the requirement to ignore some information,
the task is much fuzzier in itself (i.e., determining how
others view me, the extent to which others may deter-
mine what’s going on in my head, how others feel
about certain topics). These tasks lack the objectivity
and knowledge required to undo the influence of the
information to be ignored (for an elimination of the
false consensus effect if representative information is
readily available, see Engelmann & Strobel, 2012; see
also Bosveld etal., 1994).
Under these circumstances, the simple attempt to
ignore information likely fails (e.g., Fischhoff, 1975,
1977; Pohl & Hell, 1996; Steblay etal., 2006; see also
Dror etal., 2015; Servick, 2015). After all, the very
information to ignore is not even clearly identifiable,
nor is its impact on the task—which needs to be deter-
mined to be able to correct for it. Consequently, the
very obvious strategy people likely choose—to some-
how inhibit or ignore information they have—is inef-
fective. Thus, exhibiting a bias may not be due to a lack
of deliberation. In addition, (unspecific) deliberation
alone might not help in avoiding biases (in fact, more
deliberation may even entail more belief-consistent
information processing and, thus, more bias; Nestler
etal., 2008). Rather, avoidance of biases might need a
specific form of deliberation. Interestingly, much
research shows that there is an effective strategy for
reducing many biases: to challenge one’s current per-
spective by actively searching for and generating argu-
ments against it (“consider the opposite”; Lord etal.,
1984; see also Arkes, 1991; Koehler, 1994). This strategy
has proven effective for a number of different biases
such as confirmation bias (e.g., Lord et al., 1984;
O’Brien, 2009), the “anchoring effect” (e.g., Mussweiler
etal., 2000), and the hindsight bias (e.g., Arkes etal.,
1988). At least in part, it even seems to be the only
effective countermeasure (e.g., for the hindsight bias,
see Roese & Vohs, 2012). Essentially, this is another
argument for the general reasoning of this article,
namely that biases are based on the same general pro-
cess—belief-consistent information processing. Conse-
quently, it is not the amount of deliberation that should
matter but rather its direction. Only if people tackle the beliefs that guide—and bias—their information processing and systematically challenge them by deliberately searching for belief-inconsistent information should we observe a significant reduction in biases—or possibly even an unbiased perspective. From the perspective of
our framework, we would thus derive the hypothesis
that the listed biases could be reduced (or even elimi-
nated) when people deliberately considered the oppo-
site of the proposed underlying fundamental belief by
explicitly searching for information that is inconsistent
with the proposed underlying belief. That is, we would
expect a significant reduction of the spotlight effect,
the illusion of transparency (of intention), the false
consensus effect, and social projection if people were
led to deliberately consider the notion and search for
information suggesting that their own experience might
not be an adequate reference for the respective judg-
ments about others. Likewise, we would expect a debi-
asing effect on the bias blind spot and the hostile media
bias if people deliberately considered the notion that
they do not make correct assessments. Put differently,
if our framework is correct, the parsimony on the level
of explanation would also translate to parsimony on
the level of debiasing: The very same strategy could be
effective for various biases.
Bayesian belief updating
Our framework suggests a unifying look at how people
with existing beliefs process information. As such, it
contains two ingredients that also figure prominently in Bayesian belief updating (e.g., Chater et al., 2010; Jones
& Love, 2011). In an idealized Bayesian world, people
hold beliefs (i.e., priors), and any new information will
either solidify these beliefs or attenuate them depend-
ing on its consistency with the prior. Importantly, how-
ever, strong prior beliefs will not be changed dramatically by just one weak additional bit of information. Instead, meaningfully changing firmly held beliefs requires either extremely strong contradictory evidence or a great deal of it. Cumulative and consistent experience with the world will thus often lead to a situation in which a new bit of information seems negligible and will not evoke a great deal of belief updating. This may sound
reminiscent of our approach of fundamental (i.e., strong
prior) beliefs plus belief-consistent information pro-
cessing, but there are marked differences, as we briefly
elaborate.
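Before turning to those differences, a brief worked example may make the idealized Bayesian picture concrete (this illustration and its numbers are ours, not taken from any study cited here). Writing $B$ for the belief and $E$ for a new piece of evidence, Bayes' rule gives

$$P(B \mid E) \;=\; \frac{P(E \mid B)\,P(B)}{P(E \mid B)\,P(B) + P(E \mid \neg B)\,P(\neg B)}.$$

For a firmly held belief with prior $P(B) = .95$ and a single piece of evidence that is twice as likely if the belief is false, say $P(E \mid B) = .30$ and $P(E \mid \neg B) = .60$, the posterior is $(.30 \times .95)\,/\,(.30 \times .95 + .60 \times .05) \approx .90$. Even a perfectly unbiased Bayesian updater thus revises a strong prior only marginally after one disconfirming observation; what it never does, on this idealized account, is interpret the observation itself differently because of the prior.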
First, information processing in the classical Bayesian
framework is not biased. Although the possibility of
biased prior beliefs is well acknowledged (Jones &
Love, 2011), rational processing of novel information
is a core assumption. That is, the same bit of informa-
tion means the exact same thing for each recipient; it
will just affect their beliefs to differing degrees because
they have different and differently strong priors. This
is dramatically different from our perspective with its
focus on how the same bit of information is attributed,
remembered, processed, interpreted, and attended to
differently as a function of one’s prior beliefs (see also
Mandelbaum, 2019). This notion of biased information
processing is utterly absent from the Bayesian world
(see also next section). Take, for instance, the finding
that the same behavior (e.g., torture) is evaluated dif-
ferently depending on whether the actor is a member
of one’s own group or of another group (e.g., Noor
etal., 2019; Tarrant etal., 2012). Or likewise, take the
differential evaluation of the same scientific method
depending on whether its result is consistent or incon-
sistent with one’s prior belief (e.g., Lord etal., 1979).
Both are incompatible with the fundamental idea of
Bayesian belief updating. And more generally, our
approach is about the impact of prior beliefs on the
processing of information rather than the impact of
(novel) information on prior beliefs. Given these stark
differences, it is not surprising that many predictions
we derive from our understanding are not derivable
from a Bayesian belief-updating perspective.
Second, by relying on the rich empirical evidence
on belief-consistent information processing, we explic-
itly emphasize the many ways in which the novel infor-
mation is already a result of biased information
processing: When people selectively attend to or search
for belief-consistent information (positive testing, selec-
tive exposure, congeniality bias), when they selectively
reconstruct belief-consistent information from their
memory, and when they behave in a way such that they
elicit the phenomenon they searched for themselves
(self-fulfilling prophecy), they already display bias (see
also next section). People are biased in eliciting new
data, and those data are then processed; people do not
simply update their beliefs on the basis of information
they (more or less arbitrarily) encounter in the world.
As a result, people likely gather a biased subsample of
information, which, in turn, will not only lead to biased
prior beliefs but may also lead to strong beliefs that are
actually based on rather little (and entirely homoge-
neous) information. But there are even more, and more extreme, ways in which prior beliefs may bias information processing: Prior beliefs may, for instance,
affect whether or not a piece of information is regarded
as informative at all for one’s beliefs (Fischhoff & Beyth-
Marom, 1983). Categorizing belief-inconsistent informa-
tion into an extra class of exceptions (that are implicitly
uninformative to the hypothesis) is such an example
(or subtyping; see also Kube & Rozenkrantz, 2021).
Likewise, discrediting a source of information easily
legitimates the neglect of information (see disconfirma-
tion bias). In its most extreme form, however, prior
beliefs may not be put to a test at all. Instead, people
may treat them as facts or definite knowledge, which
may lead people to ignore all further information or to
classify belief-inconsistent information simply as false.
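To make this second contrast concrete, the following minimal simulation is our own sketch: the agent model, parameter values, and function names are illustrative assumptions rather than part of the proposed framework or of any cited study. It compares an idealized Bayesian updater with an agent that sometimes discredits (and therefore drops) belief-inconsistent items, a crude stand-in for the many routes of belief-consistent processing described above, while both receive the same balanced stream of evidence.

```python
import random

def bayes_update(belief: float, supports: bool, diagnosticity: float = 3.0) -> float:
    """One Bayesian update; `diagnosticity` is the likelihood ratio of an item
    in favor of whichever hypothesis it supports."""
    belief = min(max(belief, 1e-9), 1 - 1e-9)   # guard against numerical saturation
    lr = diagnosticity if supports else 1.0 / diagnosticity
    odds = belief / (1.0 - belief) * lr
    return odds / (1.0 + odds)

def run_agent(prior: float, n_items: int, p_discredit: float, seed: int) -> float:
    """Process a fifty-fifty evidence stream.

    p_discredit = 0.0 yields the idealized Bayesian agent; p_discredit > 0.0
    models one route of belief-consistent processing: items that contradict
    the current belief are sometimes discredited and never enter the update.
    """
    rng = random.Random(seed)
    belief = prior
    for _ in range(n_items):
        supports = rng.random() < 0.5            # evidence is balanced overall
        consistent = supports == (belief >= 0.5)
        if not consistent and rng.random() < p_discredit:
            continue                             # inconsistent item is dropped
        belief = bayes_update(belief, supports)
    return belief

if __name__ == "__main__":
    for label, p_discredit in [("idealized Bayesian", 0.0), ("belief-consistent", 0.6)]:
        runs = [run_agent(prior=0.8, n_items=100, p_discredit=p_discredit, seed=s)
                for s in range(200)]
        print(f"{label:18s}: mean final belief = {sum(runs) / len(runs):.2f} (prior = 0.80)")
```

In this toy setup, the unbiased agent's final belief averages out close to the 50/50 split in the evidence, whereas the agent that filters inconsistent items typically ends up nearly certain that its prior belief is true; the divergence is produced entirely by the asymmetry in processing, not by the evidence. Underweighting, reframing, or biased search could be modeled analogously.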
In sum then, our reasoning deviates from the Bayes-
ian approach in that its core assumption is one of
biased (vs. unbiased) information processing. Specifi-
cally, we propose prior beliefs to bias the processing
of novel information as well as other stages of informa-
tion processing, including those that elicit (novel) infor-
mation. Furthermore, the hypotheses that follow from our
framework cannot be likewise derived from the Bayes-
ian perspective.
Bias, rationality, and adaptivity
Given that this article and the framework it presents are about biases, it seems reasonable to add a few remarks on this concept and on related ones. Although we
are mainly proposing a framework to explain biases
that have already been documented and defined by
others, it is noteworthy that all of the biases listed in Table 1 rest on one of the following two conceptualizations of the term “bias”: On the one hand, some of these biases are defined as a systematic deviation from an objectively accurate reference. For instance, if people are convinced that their opinions and beliefs are shared to a larger extent than is actually the case, their estimate (about others) deviates from the objective reference (i.e., others' actual opinions and beliefs) and thus indicates a false consensus. Essentially, all biases that refer to an overestimation or underestimation build on the comparison between people's judgments and the
actual (empirical) reference. This is possible because
the judgment itself refers to some aspect in the world
that can be directly assessed and, thus, compared.
On the other hand, for several judgments such reference points for comparison are lacking. For instance, with
what could or should one compare a person’s evalua-
tion of a scientific method or moral judgments to draw
conclusions about potential biases? A typical approach
is to examine whether the same target (e.g., scientific
method, behavior of another person) is evaluated dif-
ferently depending on factors that should actually be
irrelevant. In other words, bias is here conceptualized
(or rather demonstrated) as an impact of factors that
should not play a role (i.e., the influence of unwar-
ranted factors). For instance, if the identical scientific method is evaluated differently depending on whether it supports or challenges one's prior beliefs (i.e., is discredited when yielding belief-inconsistent results; Lord et al., 1979), this denotes disconfirmation bias (Edwards & Smith, 1996) or partisan bias (Ditto et al., 2019). Likewise, when the very same behavior (e.g., torture, violent attacks) is evaluated differently depending on whether the actor is a member of one's own group or of another group, one speaks of “in-group bias” (e.g., Noor et al., 2019; Tarrant et al., 2012). In
other words, bias in this case is operationalized as a
systematic difference in information processing and
its outcome as a function of unwarranted factors.
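As a purely illustrative sketch (added here for concreteness and not drawn from the studies cited above; all numbers are invented), the following Python snippet shows how the two conceptualizations translate into simple bias indices, namely a deviation from an objective reference (as in the false consensus effect) and a difference in evaluations as a function of an unwarranted factor (as in in-group bias):

# Minimal sketch with hypothetical data: two ways a "bias" score can be quantified.
import numpy as np

# (a) Bias as deviation from an objective reference (e.g., false consensus):
# compare each person's estimate of how widely their opinion is shared
# with the actual share of agreement observed in the sample.
perceived_consensus = np.array([0.80, 0.65, 0.90])  # hypothetical estimates
actual_consensus = 0.55                              # hypothetical empirical reference
false_consensus_index = perceived_consensus - actual_consensus
print("Mean overestimation:", false_consensus_index.mean())

# (b) Bias as the influence of an unwarranted factor (e.g., in-group bias):
# the same behavior is rated differently depending only on group membership.
rating_ingroup_actor = np.array([6.1, 5.8, 6.4])    # hypothetical evaluations
rating_outgroup_actor = np.array([4.2, 4.9, 4.5])
ingroup_bias_index = rating_ingroup_actor - rating_outgroup_actor
print("Mean in-group bias:", ingroup_bias_index.mean())

In both cases the bias index is a difference score; what differs is whether the comparison standard is an empirical reference value or the evaluation of the same target under a normatively irrelevant change of conditions.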
This notion of unwarranted factors also differentiates
biases from other phenomena: For instance, we would
not speak of bias in the case of experimental manipula-
tions (e.g., mood inductions) affecting individuals’
retrieval of happy memories to regulate their mood
(e.g., Josephson, 1996). However, if the same manipula-
tion affected individuals’ perception of novel informa-
tion (e.g., Forgas & Bower, 1987; Wright & Bower,
1992), we would subsume it under the umbrella term
“bias.” This aligns well with our definition of beliefs as hypotheses about the world that carry a claim to accuracy. In other words, it is about beliefs
that state or claim something to be true, for example,
“I make correct assessments” or “I am good,” regardless
of whether or not it actually is true (e.g., “This was the
biggest inauguration audience ever”; see also above).
Because biases have been conceptualized with a sense of accuracy in mind and a plethora of research has by now documented people's biases, thus pointing to the frequent inaccuracy of their judgments, two secondary questions have been raised and vigorously debated in the past: the question of the (ir)rationality of biases and the question of the adaptivity of (certain) biases (e.g., Evans, 1991; Evans et al., 1993; Fiedler et al., 2021; Gigerenzer et al., 2011; Gigerenzer & Selten, 2001; Hahn & Harris, 2014; Haselton et al., 2009; Oaksford & Chater, 1992; Sedlmeier et al., 1998; Simon, 1990; Sturm, 2012; P. M. Todd & Gigerenzer, 2001, 2007). In particular, it
has been argued that many biases and heuristics could
be regarded as rational in the context of real-world set-
tings, in which people lack complete knowledge and
have an imperfect memory as well as limited capacities
(e.g., Gigerenzer et al., 2011; Simon, 1990). In the same
context, researchers have argued that some of the heu-
ristics lead to biases mainly in specific lab tasks while
resulting in rather accurate judgments in many real-
world situations (e.g., Fiedler et al., 2021; Sedlmeier et al., 1998; P. M. Todd & Gigerenzer, 2001). In other
words, they argued for the adaptivity of these heuristics,
which are mostly correct, whereas research focuses on
the few (artificial) situations in which they lead to incor-
rect results (i.e., biases). Apart from the fact that this
debate mainly revolved around heuristics and biases
that we do not deal with here (e.g., the set of heuristics
introduced by Tversky & Kahneman, 1974), an adequate treatment of rationality and adaptivity is beyond the
scope of this article for two reasons. First, consideration
of the (ir)rationality as well as adaptivity of biases is a
complex and rich topic that could fill an article on its
own. One factor that complicates the topic is that there
is no single conceptualization of rationality but a variety
of different perspectives on this topic (e.g., normative vs. descriptive, theoretical vs. practical, process vs. outcome; for an overview, see Knauff & Spohn, 2021), each of which comes along with different definitions of rationality or standards of comparison that allow for conclusions about rationality. The same holds for adaptivity because it would inevitably have to be clarified what adaptivity refers to (e.g., survival, success in whatever sense, or accurate representations). Second, and even more importantly from a research perspective, biases are first and foremost phenomena evidenced by data and empirical observations, whereas the question of the rationality of these observations is basically an evaluation of those observations and, thus, yet another issue. In presenting a framework of common underlying mechanisms,
however, this article’s focus is on the explanation of the
biases, not on their (normative) evaluation.
Broadening the Scope
Let us return to the application of our recipe to biases.
Above we spelled out our reasoning in detail by taking
two fundamental beliefs and discussing how they might
explain a number of biases. Specifically, we put up for
discussion that the general recipe of a belief plus belief-
consistent information processing may suffice to pro-
duce the biases listed in Table 1. It is beyond the scope
of the current article to do so for each of the other biases
contained in Table 1. Instead, we would like to reexam-
ine additional phenomena under this unifying lens.
Let us begin with hindsight bias, the tendency to
overestimate what one could have known before the
fact (Fischhoff, 1975; for an overview, see Roese & Vohs,
2012; for meta-analyses, see Christensen-Szalanski &
Willham, 1991; Guilbault et al., 2004). The finding that people overestimate the extent to which uninformed others could know about an outcome or event that they themselves have already learned about can likewise be understood as people taking their own experience as a reference when making judgments about others. When the judgments concern one's own earlier, still ignorant state, however, our framework would at least need the extra specification that people take their current experience as a reference when asked about previous times, which is quite plausible (e.g., Levine & Safer, 2002; Markus, 1986; McFarland & Ross, 1987; Wolfe & Williams, 2018).
In more general terms, people oftentimes hold the
(erroneous) conviction that they have held their current
beliefs forever (e.g., Greenwald, 1980; Swann &
Buhrmester, 2012; von der Beck et al., 2019).
Several other phenomena that are usually not con-
ceptualized as bias, or at least not linked to bias research,
could essentially be understood as variations of confir-
mation bias as well. Stereotypes, for example, are basi-
cally beliefs people hold about others (“people of group
X are Y”) and likewise elicit belief-consistent informa-
tion processing and even behavior (e.g., discrimination).
Belief in specific conspiracy theories might be under-
stood as an expression of the rather general belief that
“seemingly random events were intentionally brought
about by a secret plan of powerful elites.” This basic
belief as an underlying principle might provide a par-
simonious explanation of why the endorsements of vari-
ous conspiracy theories flock together (Bruder et al.,
2013); of why such a “conspiracy mentality” is correlated
with the general tendency to see agency (anthropomor-
phism; Imhoff & Bruder, 2014), negative intentions of
others (Frenken & Imhoff, 2022), and patterns where
there are none (van Prooijen et al., 2018); and also of
other paranormal beliefs that play down the role of
randomness (Pennycook et al., 2015).
When we consider the breadth of our conceptualiza-
tion of beliefs, it becomes clear that the integrative
potential of our account might be even larger: The
beliefs we have elaborated on and presented in Table
1 are likely rather fundamental beliefs in that they are
chronically accessible and central to people. Recall,
however, that our conceptualization of beliefs also
entails beliefs that are rather irrelevant to a person and
only situationally induced. In consideration of this fact,
a number of experimental manipulations may be sub-
sumed under our reasoning as well. Across diverse
research fields, scholars have provided participants with
the task of testing a given hypothesis. Although such
experimenter-generated hypotheses are clearly different
from long-held and widely shared beliefs, they follow
a similar recipe if we regard them as situationally
induced beliefs. For instance, the question that Snyder and Swann (1978) had their participants examine (whether a target person is introverted or extraverted) can be regarded as a situationally induced belief that participants then put to the test. Even if they did not generate
the belief themselves and even if they were indifferent
with regard to its truth, it guided their information
processing and systematically led to the confirmation
of the induced belief. So the question arises as to
whether a number of experimental manipulations (e.g.,
assimilation vs. contrast; Mussweiler, 2003, 2007; pro-
motion vs. prevention focus; Freitas & Higgins, 2002;
Galinsky et al., 2005; mindset inductions; Burnette
et al., 2013; Taylor & Gollwitzer, 1995) could also be
treated as experimenter-induced beliefs that elicit
belief-consistent information processing. In this regard,
a plethora of psychological findings could be integrated
into one overarching model.
Summary and Novel Hypotheses
Now that we have outlined our reasoning in detail and
highlighted its integrative potential, let us turn to the
hypotheses it generates. The main hypothesis (H1) we
have repeatedly mentioned throughout is that several
biases can be traced back to the same basic recipe of
belief plus belief-consistent information processing.
Undermining belief-consistent information process-
ing (e.g., by successfully eliciting a search for belief-
inconsistent information) should—according to this
logic—attenuate biases. Thus, to the extent that an
explicit instruction to “consider the opposite” (of the
proposed underlying belief) is effective in undermin-
ing belief-consistent information processing, it should
attenuate virtually any bias to which our recipe is
applicable, even if this has not been documented in
the literature so far. Conversely, cumulative evidence that experimentally assigning such a strategy fails to reduce the biases named here would speak against our model.
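As a minimal illustration of how such a test might look (a sketch added for illustration, not a procedure reported in the article; data and effect sizes are simulated placeholders), one would compare bias scores between a control condition and a condition receiving a "consider the opposite" instruction:

# Hypothetical test of H1: does a "consider the opposite" instruction attenuate a bias?
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
bias_control = rng.normal(loc=1.0, scale=1.0, size=100)            # e.g., in-group bias index
bias_consider_opposite = rng.normal(loc=0.4, scale=1.0, size=100)  # assumed attenuation

t, p = stats.ttest_ind(bias_control, bias_consider_opposite)
print(f"t = {t:.2f}, p = {p:.3f}")  # a reliable reduction would be consistent with H1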
At the same time, we have proposed that several
biases are actually based on the same beliefs, which
leads to the assumption that biases sharing the same
beliefs should show a positive correlation (or at least
a stronger positive correlation than biases that are
based on different beliefs, H2). Thus, collecting data
from a whole battery of bias tasks would allow a con-
firmatory test of whether the underlying beliefs serve
as organizing latent factors that can explain the correla-
tions between the different bias manifestations.
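To make this analytic strategy concrete, the following sketch (added for illustration; the data are simulated and the task labels hypothetical) shows the exploratory step: scores from a battery of bias tasks are submitted to a factor analysis, and tasks proposed to share an underlying belief should load on the same factor. The confirmatory test described above would instead specify the factor structure in advance in a structural equation modeling framework (e.g., lavaan in R or semopy in Python):

# Exploratory illustration of H2 with simulated data: do bias indices that are
# assumed to share an underlying belief cluster on a common latent factor?
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(42)
n = 300

# Hypothetical latent beliefs ("I am good," "My group is good").
latent_self = rng.normal(size=n)
latent_group = rng.normal(size=n)

# Hypothetical bias indices built to load on those beliefs, plus noise.
battery = np.column_stack([
    latent_self + rng.normal(scale=0.8, size=n),    # better-than-average effect
    latent_self + rng.normal(scale=0.8, size=n),    # self-serving attribution
    latent_group + rng.normal(scale=0.8, size=n),   # in-group bias
    latent_group + rng.normal(scale=0.8, size=n),   # hostile media bias
])

fa = FactorAnalysis(n_components=2, random_state=0)
fa.fit(battery)
# Loadings: with these simulated data, tasks built on the same belief should load together.
print(np.round(fa.components_, 2))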
Further hypotheses follow from the fact that there is
a special case of a fundamental belief in that its content
inherently relates to biases—the belief that one makes
correct assessments. Essentially, it might be regarded
as a kind of “g factor” of biases (for a similar idea, see
Fiedler, 2000; Fiedler et al., 2018; Metcalfe, 1998). Fol-
lowing from this, we expect natural (e.g., interindi-
vidual) or experimentally induced differences in the
belief of making correct assessments (e.g., undermining
it; for discussions on the phenomenon of gaslighting,
e.g., see Gass & Nichols, 1988; Rietdijk, 2018; Tobias &
Joseph, 2020) to be mirrored not only in biases based on this belief but also in biases based on other beliefs (H3). However, in consid-
eration of the fact that we essentially regard several
biases as a tendency to confirm the underlying funda-
mental belief (via belief-consistent information process-
ing), “successfully” biased information processing
should nourish the belief in one’s making correct
assessments—as one’s prior beliefs have been con-
firmed (H4). For example, people who believe their
group to be good and engage in belief-consistent infor-
mation processing leading them to conclusions that
confirm their belief are at the same time confirmed in
their convictions that they make correct assessments of
the world. The same should work for other biases such
as the “better-than-average effect” or “outcome bias,”
for instance. If I believe myself to be better than the
average, for instance, and subsequently engage in con-
firmatory information processing by comparing myself
with others who have lower abilities in the particular
domain in question, this should strengthen my belief
that I generally assess the world correctly. Likewise, if
I believe that it is mainly people's attributes that shape
outcomes and—consistent with this belief—attribute a
company’s failure to its CEO’s mismanagement, I get
“confirmed” in my belief that I make correct assess-
ments. Only if belief-consistent information processing
failed would the belief that one makes correct assess-
ments likewise not be nourished. This is, however, rather unlikely given the plethora of research showing that people may see confirmation of the basic belief even if there is actually none or only equivocal confirmation (e.g., Doherty et al., 1979; Friesen et al., 2015; Lord et al., 1979; Isenberg, 1986), let alone disconfirmation (Festinger et al., 1955/2011; Traut-Mattausch et al.,
2004). If, however, engaging in any (other) form of bias
expression attenuated biases following from the belief of making correct assessments, this would
strongly speak against our rationale.
There is one exception, however. If one were aware that one was processing information in a biased way and were unable to rationalize this approach, biases should
not be expressed because it would threaten one’s belief
in making correct assessments. In other words, the
belief in making correct assessments should constrain
biases based on other beliefs because people are rather
motivated to maintain an illusion of objectivity regard-
ing the manner in which they derived their inferences
(Pyszczynski & Greenberg, 1987; Sanbonmatsu et al.,
1998). Thus, there is a constraint on motivated informa-
tion processing: People need to be able to justify their
conclusions (Kunda, 1990; see also C. A. Anderson
et al., 1980; C. A. Anderson & Kellam, 1992). If people
were stripped of this possibility, that is, if they were
not able to justify their biased information processing
(e.g., because they are made aware of their potential
bias and fear that others could become aware of it as
well), we should observe attempts to reduce that par-
ticular bias and an effective reduction if people knew
how to correct for it (H5).
Above and beyond these rather general hypotheses,
further corollaries of our account unfold. For instance,
we would expect the same group favoritism for groups that people neither belong to nor identify with but that they believe to be good (H6). This hypothesis would not be
predicted by the social-identity approach (Tajfel & Turner,
1979; Turner et al., 1987), which is most commonly
referred to when explaining in-group favoritism.
Conclusion
There have been many prior attempts at synthesizing and integrating research on (parts of) biased information processing (e.g., Birch & Bloom, 2004; Evans, 1989; Fiedler, 1996, 2000; Gawronski & Strack, 2012; Gilovich, 1991; Griffin & Ross, 1991; Hilbert, 2012; Klayman & Ha, 1987; Kruglanski et al., 2012; Kunda, 1990; Lord & Taylor, 2009; Pronin et al., 2004; Pyszczynski & Greenberg, 1987; Sanbonmatsu et al., 1998; Shermer, 1997; Skov &
Sherman, 1986; Trope & Liberman, 1996). Some of them
have made similar or overlapping arguments or implic-
itly made similar assumptions to the ones outlined here
and thus resonate with our reasoning. In none of them,
however, have we found the same line of thought and
its consequences explicated.
To put it briefly, theoretical advancements necessi-
tate integration and parsimony (the integrative poten-
tial), as well as novel ideas and hypotheses (the
generative potential). We believe that the proposed
framework for understanding bias as presented in this
article has merits in both of these aspects. We hope to
instigate discussion as well as empirical scrutiny with
the ultimate goal of identifying common principles
across several disparate research strands that have here-
tofore sought to understand human biases.
Transparency
Action Editor: Timothy J. Pleskac
Editor: Klaus Fiedler
Declaration of Conflicting Interests
The author(s) declared that there were no conflicts of
interest with respect to the authorship or the publication
of this article.
Funding
This research was supported by Leibniz Gemeinschaft Grant
SAW-2017-IWM-4 (to A. Oeberst) and German Research
Foundation Grant OE 604/3-1 (to A. Oeberst).
ORCID iD
Aileen Oeberst https://orcid.org/0000-0002-1094-9610
Note
1. One aspect that may additionally contribute to confirmation
is the vagueness of the beliefs. Note that the beliefs we pro-
pose to underlie several biases below are rather fundamental
in nature and are thus rather abstract and global (e.g., “I am
good”). Obviously, there are several variations of the belief to
be good—depending, for instance, on the domain or dimension
that is evaluated (e.g., morality, competence) and the specific
context (e.g., in a game, at work, on weekends). Moreover,
there may certainly be exceptions to it (e.g., “I am generally a
moral person, but I am aware that I am stingy when it comes
to anonymous donations”), but the general beliefs still function
as a kind of default that guides information processing. Their
variability may actually contribute to their confirmation because
it leaves so many degrees of freedom (e.g., “The money that I
do not donate is spent on other choices of moral integrity, and
the fact that I admit not donating further reflects on my honesty
and thus ultimately my morality”; Dunning et al., 1989).
References
Abelson, R. P., Aronson, E. E., McGuire, W. J., Newcomb, T. M.,
Rosenberg, M. J., & Tannenbaum, P. H. (1968). Theories of
cognitive consistency: A sourcebook. Rand-McNally.
Alicke, M. D., & Govorun, O. (2005). The better-than-average
effect. In M. D. Alicke, D. A. Dunning, & J. I. Krueger (Eds.), The self in
social judgment (pp. 85–106). Psychology Press.
Alloy, L. B., & Abramson, L. Y. (1979). Judgment of con-
tingency in depressed and nondepressed students:
Sadder but wiser? Journal of Experimental Psychology:
General, 108(4), 441–485. https://doi.org/10.1037/0096-
3445.108.4.441
Alloy, L. B., & Clements, C. M. (1992). Illusion of control:
Invulnerability to negative affect and depressive symp-
toms after laboratory and natural stressors. Journal of
Abnormal Psychology, 101(2), 234–245. https://doi
.org/10.1037/0021-843X.101.2.234
Allport, G. W., & Postman, L. (1947). The psychology of rumor.
Henry Holt & Co.
Alves, H., Koch, A., & Unkelbach, C. (2018). A cognitive-
ecological explanation of intergroup biases. Psycho-
logical Science, 29(7), 1126–1133. https://doi.org/10
.1177/0956797618756862
Anderson, C. A., & Kellam, K. L. (1992). Belief perseverance,
biased assimilation, and covariation detection: The effects
of hypothetical social theories and new data. Personality
and Social Psychology Bulletin, 18(5), 555–565. https://
doi.org/10.1177/0146167292185005
Anderson, C. A., Lepper, M. R., & Ross, L. (1980). Perseverance
of social theories: The role of explanation in the persis-
tence of discredited information. Journal of Personality
and Social Psychology, 39(6), 1037–1049.
Anderson, C. A., & Lindsay, J. J. (1998). The development, per-
severance, and change of naïve theories. Social Cognition,
16(1), 8–33. https://doi.org/10.1521/soco.1998.16.1.8
Arkes, H. (1991). Costs and benefits of judgment errors:
Implications for debiasing. Psychological Bulletin, 110(3),
486–498.
Arkes, H. R., Faust, D., Guilmette, T. J., & Hart, K. (1988).
Eliminating the hindsight bias. Journal of Applied
Psychology, 73(2), 305–307. https://doi.org/10.1037/0021-
9010.73.2.305
Ask, K., & Granhag, P. A. (2007). Motivational bias in crimi-
nal investigators’ judgments of witness reliability. Journal
of Applied Social Psychology, 37(3), 561–591. https://doi
.org/10.1111/j.1559-1816.2007.00175.x
Baron, J., & Hershey, J. C. (1988). Outcome bias in decision evaluation. Journal of Personality and Social Psychology,
54(4), 569–579.
Bartlett, F. C. (1932). Remembering: A study in experimental
and social psychology. Cambridge University Press.
Batson, C. D. (1975). Rational processing or rationalization?
The effect of disconfirming information on a stated reli-
gious belief. Journal of Personality and Social Psychology,
32, 176–184.
Bertrand, M., Chugh, D., & Mullainathan, S. (2005). Implicit
discrimination. The American Economic Review, 95(2),
94–98.
Bianchi, M., Machunsky, M., Steffens, M. C., & Mummendey, A.
(2009). Like me or like us. Is ingroup projection just social
projection? Experimental Psychology, 56(3), 198–205.
https://doi.org/10.1027/1618-3169.56.3.198
Bianchi, M., Mummendey, A., Steffens, M. C., & Yzerbyt, V. Y.
(2010). What do you mean by “European”? Evidence
of spontaneous ingroup projection. Personality and
Social Psychology Bulletin, 36(7), 960–974. https://doi
.org/10.1177/0146167210367488
Birch, S. A., & Bloom, P. (2004). Understanding children’s
and adults’ limitations in mental state reasoning.
Trends in Cognitive Sciences, 8(6), 255–260. https://doi
.org/10.1016/j.tics.2004.04.011
Bloom, P. (2012). Religion, morality, evolution. Annual Review
of Psychology, 63(1), 179–199. http://doi.org/10.1146/
annurev-psych-120710-100334
Bogaard, G., Meijer, E. H., Vrij, A., Broers, N. J., & Merckelbach,
H. (2014). Contextual bias in verbal credibility assessment:
Criteria-based content analysis, reality monitoring and
scientific content analysis. Applied Cognitive Psychology,
28(1), 79–90. https://doi.org/10.1002/acp.2959
Bosveld, W., Koomen, W., & van der Pligt, J. (1994). Selective
exposure and the false consensus effect: The availabil-
ity of similar and dissimilar others. British Journal of
Social Psychology, 33(4), 457–466. https://doi.org/10
.1111/j.2044-8309.1994.tb01041.x
Brewer, M. B. (2007). The social psychology of intergroup
relations: Social categorization, ingroup bias, and out-
group prejudice. In A. W. Kruglanski & E. T. Higgins
(Eds.), Social psychology: Handbook of basic principles
(pp. 695–715). Guilford Press.
Brewer, W. F., & Nakamura, G. V. (1984). The nature and
functions of schemas. In R. S. Wyer Jr. & T. K. Srull (Eds.),
Handbook of social cognition (pp. 119–160). Lawrence
Erlbaum Associates.
Brosch, T., Pourtois, G., & Sander, D. (2010). The percep-
tion and categorisation of emotional stimuli: A review.
Cognition and Emotion, 24(3), 377–400. https://doi.org/
10.1080/02699930902975754
Brown, J. D. (1986). Evaluations of self and others: Self-
enhancement biases in social judgments. Social Cognition,
4(4), 353–376. https://doi.org/10.1521/soco.1986.4.4.353
Brown, R. (2000). Social Identity Theory: Past achievements,
current problems and future challenges. European
Journal of Social Psychology, 30, 745–778. https://
doi.org/10.1002/1099-0992(200011/12)30:6<745::AID-
EJSP24>3.0.CO;2-O
Bruder, M., Haffke, P., Neave, N., Nouripanah, N., & Imhoff, R.
(2013). Measuring individual differences in generic
beliefs in conspiracy theories across cultures: Conspiracy
Mentality Questionnaire. Frontiers in Psychology, 4, Article
225. http://doi.org/10.3389/fpsyg.2013.00225
Bruner, A., & Revusky, S. H. (1961). Collateral behav-
ior in humans. Journal of the Experimental Analysis
of Behavior, 4(4), 349–350. https://doi.org/10.1901/
jeab.1961.4-349
Bruner, J. S., & Potter, M. C. (1964). Interference in visual
recognition. Science, 144(3617), 424–425. https://doi
.org/10.1126/science.144.3617.424
Burnette, J. L., O’Boyle, E. H., VanEpps, E. M., Pollack, J. M.,
& Finkel, E. J. (2013). Mind-sets matter: A meta-analytic
review of implicit theories and self-regulation. Psycho-
logical Bulletin, 139(3), 655–701. https://doi.org/10
.1037/a0029531
Campbell, W. K., & Sedikides, C. (1999). Self-threat mag-
nifies the self-serving bias: A meta-analytic integration.
Review of General Psychology, 3(1), 23–43. https://doi
.org/10.1037/1089-2680.3.1.23
Chambers, J. R., & De Dreu, C. K. (2014). Egocentrism drives
misunderstanding in conflict and negotiation. Journal of
Experimental Social Psychology, 51, 15–26. https://doi
.org/10.1016/j.jesp.2013.11.001
Chapman, L. J. (1967). Illusory correlation in observa-
tional report. Journal of Verbal Learning and Verbal
Behavior, 6(1), 151–155. https://doi.org/10.1016/S0022-
5371(67)80066-5
Chater, N., Oaksford, M., Hahn, U., & Heit, E. (2010). Bayesian
models of cognition. Wiley Interdisciplinary Reviews:
Cognitive Science, 1, 811–823.
Christen, C. T., Kannaovakun, P., & Gunther, A. C. (2002).
Hostile media perceptions: Partisan assessments of press
and public during the 1997 United Parcel Service strike.
Political Communication, 19(4), 423–436. https://doi
.org/10.1080/10584600290109988
Christensen-Szalanski, J. J., & Willham, C. F. (1991). The hind-
sight bias: A meta-analysis. Organizational Behavior and
Human Decision Processes, 48(1), 147–168. https://doi
.org/10.1016/0749-5978(91)90010-Q
Clark, H. H., & Marshall, C. R. (1981). Definite reference and
mutual knowledge. In A. K. Joshi, B. L. Webber, & I. A. Sag
(Eds.), Elements of discourse understanding (pp. 10–63).
Cambridge University Press.
Cohen, C. E. (1981). Person categories and social percep-
tion: Testing some boundaries of the processing effect
of prior knowledge. Journal of Personality and Social
Psychology, 40(3), 441–452. https://doi.org/10.1037/0022-
3514.40.3.441
Crocker, J. (1982). Biased questions in judgment of covaria-
tion studies. Personality and Social Psychology Bulletin,
8, 214–220. https://doi.org/10.1177/0146167282082005
Cvencek, D., Greenwald, A. G., & Meltzoff, A. N. (2012).
Balanced identity theory. Review of evidence for implicit
consistency in social cognition. In B. Gawronski & F. Strack
(Eds.), Cognitive consistency: A fundamental principle in
social cognition (pp. 157–177). Guilford Press.
Dalbert, C. (2009). Belief in a just world. In M. R. Leary &
R. H. Hoyle (Eds.), Handbook of individual differences in
social behavior (pp. 288–297). Guilford Press.
Dalton, R. J., Beck, P. A., & Huckfeldt, R. (1998). Partisan cues
and the media: Information flows in the 1992 presiden-
tial election. American Political Science Review, 92(1),
111–126. https://doi.org/10.2307/2585932
Darley, J. M., & Fazio, R. H. (1980). Expectancy confirma-
tion processes arising in the social interaction sequence.
American Psychologist, 35, 867–881.
Davies, M. F. (1997). Belief persistence after evidential dis-
crediting: The impact of generated versus provided
explanations on the likelihood of discredited outcomes.
Journal of Experimental Social Psychology, 33, 561–578.
https://doi.org/10.1006/jesp.1997.1336
Day, L., & Maltby, J. (2003). Belief in good luck and psy-
chological well-being: The mediating role of optimism
and irrational beliefs. The Journal of Psychology, 137(1),
99–110. https://doi.org/10.1080/00223980309600602
Devine, D. J., Clayton, L. D., Dunford, B. B., Seying, R.,
& Pryce, J. (2001). Jury decision making: 45 years of
empirical research on deliberating groups. Psychology,
Public Policy, and Law, 7(3), 622–727. https://doi.org/
10.1037/1076-8971.7.3.622
Dijksterhuis, A. P., van Knippenberg, A. D., Kruglanski, A. W.,
& Schaper, C. (1996). Motivated social cognition: Need
for closure effects on memory and judgment. Journal of
Experimental Social Psychology, 32(3), 254–270. https://
doi.org/10.1006/jesp.1996.0012
Ditto, P. H., Liu, B. S., Clark, C. J., Wojcik, S. P., Chen, E. E.,
Grady, R. H., Celniker, J. B., & Zinger, J. F. (2019). At
least bias is bipartisan: A meta-analytic comparison of
partisan bias in liberals and conservatives. Perspectives
on Psychological Science, 14(2), 273–291. https://doi
.org/10.1177/1745691617746796
Ditto, P. H., & Lopez, D. F. (1992). Motivated skepticism: Use
of differential decision criteria for preferred and non-
preferred conclusions. Journal of Personality and Social
Psychology, 63(4), 568–584. https://doi.org/10.1037/0022-
3514.63.4.568
Doherty, M. E., Mynatt, C. R., Tweney, R. D., & Schiavo, M. D.
(1979). Pseudodiagnosticity. Acta Psychologica, 43, 111–
121. https://doi.org/10.1016/0001-6918(79)90017-9
Dow, J. (2006). The evolution of religion: Three anthropologi-
cal approaches. Method & Theory in the Study of Religion,
18(1), 67–91.
Dror, I. E., Charlton, D., & Péron, A. E. (2006). Contextual
information renders experts vulnerable to making errone-
ous identifications. Forensic Science International, 156,
74–78. https://doi.org/10.1016/j.forsciint.2005.10.017
Dror, I. E., Thompson, W. C., Meissner, C. A., Kornfield, I.,
Krane, D., Saks, M., & Risinger, M. (2015). Letter to the
editor—Context management toolbox: A linear sequen-
tial unmasking (LSU) approach for minimizing cognitive
bias in forensic decision making. Journal of Forensic
Sciences, 60(4), 1111–1112. https://doi.org/10.1111/1556-
4029.12805
Dunning, D., Meyerowitz, J. A., & Holzberg, A. D. (1989).
Ambiguity and self-evaluation: The role of idiosyncratic
trait definitions in self-serving assessments of ability.
Journal of Personality and Social Psychology, 57(6),
1082–1090. https://doi.org/10.1037/0022-3514.57.6.1082
Edwards, K., & Smith, E. E. (1996). A disconfirmation bias in the
evaluation of arguments. Journal of Personality and Social
Psychology, 71(1), 5–24. https://doi.org/10.1037/0022-
3514.71.1.5
Elaad, E., Ginton, A., & Ben-Shakhar, G. (1994). The effects
of prior expectations and outcome knowledge on
polygraph examiners’ decisions. Journal of Behavioral
Decision Making, 7(4), 279–292. https://doi.org/10.1002/
bdm.3960070405
Engelmann, D., & Strobel, M. (2012). Deconstruction and
reconstruction of an anomaly. Games and Economic
Behavior, 76(2), 678–689. https://doi.org/10.1016/j.geb
.2012.07.009
Epley, N., Morewedge, C. K., & Keysar, B. (2004). Perspective
taking in children and adults: Equivalent egocentrism
but differential correction. Journal of Experimental Social
Psychology, 40(6), 760–768. https://doi.org/10.1016/
j.jesp.2004.02.002
Evans, J. S. B. T. (1972). Interpretation and matching bias
in a reasoning task. Quarterly Journal of Experimental
Psychology, 24, 193–199.
Evans, J. S. B. T. (1989). Bias in human reasoning: Causes
and consequences. Psychology Press.
Evans, J. S. B. T. (1991). Theories of human reasoning: The
fragmented state of the art. Theory & Psychology, 1(1),
83–105.
Evans, J. S. B. T., Over, D. E., & Manktelow, K. I. (1993).
Reasoning, decision making and rationality. Cognition,
49, 165–187.
Faigman, D. L., Kang, J., Bennett, M. W., Carbado, D. W.,
Casey, P., Dasgupta, N., Godsil, R. D., Greenwald, A. G.,
Levinson, J. D., & Mnookin, J. (2012). Implicit bias in the
courtroom. UCLA Law Review, 59, 1124–1187.
Feldman, S. (1966). Cognitive consistency. Motivational ante-
cedents and behavioral consequences. Academic Press.
https://doi.org/10.1016/C2013-0-12075-2
Festinger, L. (1957). A theory of cognitive dissonance. Stanford
University Press.
Festinger, L., Riecken, H. W., & Schachter, S. (2011). When
prophecy fails. Wilder Publications. (Original work pub-
lished 1955)
Fiedler, K. (1996). Explaining and simulating judgment biases
as an aggregation phenomenon in probabilistic, multiple-
cue environments. Psychological Review, 103(1), 193–213.
Fiedler, K. (2000). Beware of samples! A cognitive-ecological
sampling approach to judgment biases. Psychological
Review, 107(4), 659–676. https://doi.org/10.1037/0033-
295X.107.4.659
Fiedler, K., Freytag, P., & Meiser, T. (2009). Pseudocontingencies:
An integrative account of an intriguing cognitive illu-
sion. Psychological Review, 116(1), 187–206. https://doi
.org/10.1037/a0014480
Fiedler, K., Hofferbert, J., & Wöllert, F. (2018). Metacognitive
myopia in hidden-profile tasks: The failure to control for
repetition biases. Frontiers in Psychology, 9, Article 903.
https://doi.org/10.3389/fpsyg.2018.00903
Fiedler, K., Prager, J., & McCaughey, L. (2021). Heuristics and
biases. In M. Knauff & W. Spohn (Eds.), The handbook of
rationality (pp. 159–200). MIT Press.
Fischhoff, B. (1975). Hindsight is not equal to foresight:
The effect of outcome knowledge on judgment under
uncertainty. Journal of Experimental Psychology: Human
Perception and Performance, 1(3), 288–299. https://doi
.org/10.1037/0096-1523.1.3.288
Fischhoff, B. (1977). Perceived informativeness of facts. Journal
of Experimental Psychology: Human Perception and
Performance, 3(2), 349–358. https://doi.org/10.1037/0096-
1523.3.2.349
Fischhoff, B., & Beyth-Marom, R. (1983). Hypothesis evalua-
tion from a Bayesian perspective. Psychological Review,
90, 239–260.
Fiske, S. T. (1998). Stereotyping, prejudice, and discrimination.
In D. T. Gilbert, S. T. Fiske, & G. Lindzey (Eds.), The hand-
book of social psychology (pp. 357–411). McGraw-Hill.
Flavell, J. H. (2004). Theory-of-mind development: Retrospect
and prospect. Merrill-Palmer Quarterly, 50(3), 274–290.
https://doi.org/10.1353/mpq.2004.0018
Forgas, J. P., & Bower, G. H. (1987). Mood effects on person-
perception judgments. Journal of Personality and Social
Psychology, 53(1), 53–60.
Frantz, C. M. (2006). I AM being fair: The bias blind spot
as a stumbling block to seeing both sides. Basic and
Applied Social Psychology, 28(2), 157–167. https://doi
.org/10.1207/s15324834basp2802_5
Freitas, A. L., & Higgins, E. T. (2002). Enjoying goal-directed
action: The role of regulatory fit. Psychological Science,
13(1), 1–6. https://doi.org/10.1111/1467-9280.00401
Frenken, M., & Imhoff, R. (2022). Malevolent intentions and
secret coordination. Dissecting cognitive processes in
conspiracy beliefs via diffusion modeling. Journal of
Experimental Social Psychology, 103, Article 104383.
https://doi.org/10.1016/j.jesp.2022.104383
Friesen, J. P., Campbell, T. H., & Kay, A. C. (2015). The psy-
chological advantage of unfalsifiability: The appeal of
untestable religious and political ideologies. Journal
of Personality and Social Psychology, 108(3), 515–529.
https://doi.org/10.1037/pspp0000018
Furnham, A. (2003). Belief in a just world: Research prog-
ress over the past decade. Personality and Individual
Differences, 34(5), 795–817. https://doi.org/10.1016/
S0191-8869(02)00072-7
Furnham, A., & Marks, J. (2013). Tolerance of ambiguity: A
review of the recent literature. Psychology, 4(9), 717–728.
https://doi.org/10.4236/psych.2013.49102
Furnham, A., & Ribchester, T. (1995). Tolerance of ambigu-
ity: A review of the concept, its measurement and appli-
cations. Current Psychology, 14(3), 179–199. https://doi
.org/10.1007/BF02686907
Galinsky, A. D., Leonardelli, G. J., Okhuysen, G. A., &
Mussweiler, T. (2005). Regulatory focus at the bargain-
ing table: Promoting distributive and integrative success.
Personality and Social Psychology Bulletin, 31(8), 1087–
1098. https://doi.org/10.1177/0146167205276429
Gass, G. Z., & Nichols, W. C. (1988). Gaslighting: A marital
syndrome. Contemporary Family Therapy, 10(1), 3–16.
https://doi.org/10.1007/BF00922429
Gawronski, B., & Strack, F. (Eds.). (2012). Cognitive con-
sistency. A fundamental principle in social cognition.
Guilford Press.
Gigerenzer, G., Hertwig, R., & Pachur, T. (2011). Heuristics:
The foundations of adaptive behavior. Oxford University
Press.
Gigerenzer, G., & Selten, R. (2001). Bounded rationality: The
adaptive toolbox. MIT Press.
Gilbert, C. D., & Li, W. (2013). Top-down influences on visual
processing. Nature Reviews Neuroscience, 14(5), 350–363.
https://doi.org/10.1038/nrn3476
Gilbert, D. T. (1991). How mental systems believe. American
Psychologist, 46(2), 107–119. https://doi.org/10.1037/0003-
066X.46.2.107
Gilovich, T. (1991). How we know what isn’t so. The fallibility
of human reason in everyday life. Free Press.
Gilovich, T., Medvec, V. H., & Savitsky, K. (2000). The spotlight
effect in social judgment: An egocentric bias in estimates
of the salience of one’s own actions and appearance.
Journal of Personality and Social Psychology, 78(2), 211–
222. https://doi.org/10.1037//0022-3514.78.2.211
Gilovich, T., & Savitsky, K. (1999). The spotlight effect and the
illusion of transparency: Egocentric assessments of how
we are seen by others. Current Directions in Psychological
Science, 8(6), 165–168. https://doi.org/10.1111/1467-
8721.00039
Gilovich, T., Savitsky, K., & Medvec, V. H. (1998). The illu-
sion of transparency: Biased assessments of others’ abil-
ity to read one’s emotional states. Journal of Personality
and Social Psychology, 75(2), 332–346. https://doi.org/
10.1037//0022-3514.75.2.332
Giroux, M. E., Coburn, P. I., Harley, E. M., Connolly, D. A.,
& Bernstein, D. M. (2016). Hindsight bias and law.
Zeitschrift für Psychologie, 224, 190–203. https://doi.org/
10.1027/2151-2604/a000253
Graham, J., & Haidt, J. (2010). Beyond beliefs: Religions
bind individuals into moral communities. Personality
and Social Psychology Review, 14(1), 140–150. https://
doi.org/10.1177/1088868309353415
Green, M., & Elliott, M. (2010). Religion, health, and psycho-
logical well-being. Journal of Religion and Health, 49(2),
149–163. https://doi.org/10.1007/s10943-009-9242-1
Greenwald, A. G. (1980). The totalitarian ego. Fabrication
and revision of personal history. American Psychologist,
35(7), 603–618.
Greenwald, A. G., & Banaji, M. R. (1995). Implicit social cogni-
tion: Attitudes, self-esteem, and stereotypes. Psychological
Review, 102(1), 4–27.
Greenwald, A. G., Pratkanis, A. R., Leippe, M. R., &
Baumgardner, M. H. (1986). Under what conditions does
theory obstruct research progress? Psychological Review,
93(2), 216–229.
Griffin, D. W., & Ross, L. (1991). Subjective construal, social
inference, and human misunderstanding. Advances in
Experimental Social Psychology, 24, 319–359. https://doi
.org/10.1016/S0065-2601(08)60333-0
Guilbault, R. L., Bryant, F. B., Brockway, J. H., & Posavac, E. J.
(2004). A meta-analysis of research on hindsight bias.
Basic and Applied Social Psychology, 26(2–3), 103–117.
https://doi.org/10.1080/01973533.2004.9646399
Gunther, A. C., & Christen, C. T. (2002). Projection or per-
suasive press? Contrary effects of personal opinion and
perceived news coverage on estimates of public opinion.
Journal of Communication, 52(1), 177–195. https://doi
.org/10.1111/j.1460-2466.2002.tb02538.x
Hagan, J., & Parker, P. (1985). White-collar crime and pun-
ishment: The class structure and legal sanctioning of
securities violations. American Sociological Review, 50,
302–316.
Hahn, U., & Harris, A. J. L. (2014). What does it mean to be
biased: Motivated reasoning and rationality. Psychology
of Learning and Motivation, 61, 41–102. https://doi
.org/10.1016/B978-0-12-800283-4.00002-2
Hamilton, D. L., & Gifford, R. K. (1976). Illusory correla-
tion in interpersonal perception: A cognitive basis of
stereotypic judgments. Journal of Experimental Social
Psychology, 12(4), 392–407. https://doi.org/10.1016/
S0022-1031(76)80006-6
Hansen, G. J., & Kim, H. (2011). Is the media biased against
me? A meta-analysis of the hostile media effect research.
Communication Research Reports, 28(2), 169–179. https://
doi.org/10.1080/08824096.2011.565280
Harley, E. M. (2007). Hindsight bias in legal decision making.
Social Cognition, 25(1), 48–63. https://doi.org/10.1521/
soco.2007.25.1.48
Hart, W., Albarracín, D., Eagly, A. H., Brechan, I., Lindberg,
M. J., & Merrill, L. (2009). Feeling validated versus being
correct: A meta-analysis of selective exposure to informa-
tion. Psychological Bulletin, 135(4), 555–588. https://doi
.org/10.1037/a0015701
Hartley, E. (1946). Problems in prejudice. King's Crown Press.
Haselton, M. G., Bryant, G. A., Wilke, A., Frederick, D.,
Galperin, A., Frankenhuis, W. E., & Moore, T. (2009).
Adaptive rationality: An evolutionary perspective on cog-
nitive bias. Social Cognition, 27(5), 733–763.
Hewstone, M. (1990). The ‘ultimate attribution error’? A
review of the literature on intergroup causal attribution.
European Journal of Social Psychology, 20, 311–335.
https://doi.org/10.1002/ejsp.2420200404
Hewstone, M., Rubin, M., & Willis, H. (2002). Intergroup bias.
Annual Review of Psychology, 53(1), 575–604. https://doi
.org/10.1146/annurev.psych.53.100901.135109
Hilbert, M. (2012). Toward a synthesis of cognitive biases:
How noisy information processing can bias human deci-
sion making. Psychological Bulletin, 138(2), 211–237.
https://doi.org/10.1037/a0025940
Hill, C., Memon, A., & McGeorge, P. (2008). The role of confir-
mation bias in suspect interviews: A systematic evaluation.
Legal and Criminological Psychology, 13(2), 357–371.
https://doi.org/10.1348/135532507X238682
Hilton, J. L., & von Hippel, W. (1996). Stereotypes. Annual
Review of Psychology, 47(1), 237–271. https://doi.org/10
.1146/annurev.psych.47.1.237
Hoorens, V. (1993). Self-enhancement and superiority biases in
social comparison. European Review of Social Psychology,
4(1), 113–139. https://doi.org/10.1080/1479277934
3000040
Hornsey, M. J., Oppes, T., & Svensson, A. (2002). “It’s OK if we
say it, but you can’t”: Responses to intergroup and intra-
group criticism. European Journal of Social Psychology,
32, 293–307. https://doi.org/10.1002/ejsp.90
Ichheiser, G. (1949). Misunderstandings in human relations: A
study in false social perception. University of Chicago Press.
Ichikawa, J. J., & Steup, M. (2018). The analysis of knowledge.
In E. N. Zalta (Ed.), The Stanford encyclopedia of phi-
losophy (Summer 2018 ed.). Stanford University. https://
plato.stanford.edu/archives/sum2018/entries/knowledge-
analysis
Imhoff, R., & Bruder, M. (2014). Speaking (un-)truth to power:
Conspiracy mentality as a generalised political attitude.
European Journal of Personality, 28(1), 25–43. https://
doi.org/10.1002/per.1930
Isenberg, D. J. (1986). Group polarization: A critical review
and meta-analysis. Journal of Personality and Social
Psychology, 50(6), 1141–1151. https://doi.org/10.1037/
0022-3514.50.6.1141
Jelalian, E., & Miller, A. G. (1984). The perseverance of beliefs:
Conceptual perspectives and research developments.
Journal of Social and Clinical Psychology, 2(1), 25–56.
John, O. P., & Robins, R. W. (1994). Accuracy and bias in
self-perception: Individual differences in self-enhance-
ment and the role of narcissism. Journal of Personality
and Social Psychology, 66(1), 206–219. https://doi.org/
10.1037/0022-3514.66.1.206
Johnson, D. D., & Fowler, J. H. (2011). The evolution of
overconfidence. Nature, 477(7364), 317–320. https://doi
.org/10.1038/nature10384
Jones, M., & Love, B. C. (2011). Bayesian fundamentalism or
enlightenment? On the explanatory status and theoretical
contributions of Bayesian models of cognition. Behavioral
and Brain Sciences, 34, 169–231. https://doi.org/10.1017/
S0140525X10003134
Josephson, B. R. (1996). Mood regulation and memory:
Repairing sad moods with happy memories. Cognition
& Emotion, 10(4), 437–444. https://doi.org/10.1080/0269
99396380222
Jussim, L. (1986). Self-fulfilling prophecies: A theoretical and
integrative review. Psychological Review, 93(4), 429–445.
Kang, J., & Lane, K. (2010). Seeing through colorblindness:
Implicit bias and the law. UCLA Law Review, 58, 465–520.
Kaptchuk, T. J., Friedlander, E., Kelley, J. M., Sanchez, M. N.,
Kokkotou, E., Singer, J. P., Kowalczykowski, M., Miller,
F. G., Kirsch, I., & Lembo, A. J. (2010). Placebos without
deception: A randomized controlled trial in irritable bowel
syndrome. PLOS ONE, 5(12), Article e15591. https://doi
.org/10.1371/journal.pone.0015591
Kay, A. C., Gaucher, D., McGregor, I., & Nash, K. (2010).
Religious belief as compensatory control. Personality
and Social Psychology Review, 14(1), 37–48. https://doi
.org/10.1177/1088868309353750
Kay, A. C., Moscovitch, D. A., & Laurin, K. (2010). Randomness,
attributions of arousal, and belief in God. Psychological
Science, 21(2), 216–218. https://doi.org/10.1177/0956797
609357750
Keinan, G. (2002). The effects of stress and desire for
control on superstitious behavior. Personality and
Social Psychology Bulletin, 28(1), 102–108. https://doi
.org/10.1177/0146167202281009
Kellaris, J. J., Dahlstrom, R. F., & Boyle, B. A. (1996).
Contextual bias in ethical judgment of marketing prac-
tices. Psychology & Marketing, 13(7), 677–694. https://
doi.org/10.1002/(SICI)1520-6793(199610)13:7<677::AID-
MAR3>3.0.CO;2-E
Kelman, M., Fallas, D., & Folger, H. (1998). Decomposing
hindsight bias. Journal of Risk and Uncertainty, 16(3),
251–269. https://doi.org/10.1023/A:1007755019837
Kennedy, J. E., & Taddonio, J. L. (1976). Experimenter effects
in parapsychological research. Journal of Parapsychology,
40(1), 1–33.
Keysar, B. (1994). The illusory transparency of intention:
Linguistic perspective taking in text. Cognitive Psychology,
26(2), 165–208. https://doi.org/10.1006/cogp.1994.1006
Keysar, B., Barr, D. J., & Horton, W. S. (1998). The egocentric
basis of language use: Insights from a processing approach.
Current Directions in Psychological Science, 7(2), 46–49.
https://doi.org/10.1111/1467-8721.ep13175613
Klayman, J. (1995). Varieties of confirmation bias. Psychology
of Learning and Motivation, 32, 385–418. https://doi
.org/10.1016/S0079-7421(08)60315-1
Klayman, J., & Ha, Y.-W. (1987). Confirmation, disconfirma-
tion, and information in hypothesis testing. Psychological
Review, 94(2), 211–228. https://doi.org/10.1037/0033-
295X.94.2.211
Klayman, J., & Ha, Y.-W. (1989). Hypothesis testing in rule
discovery: Strategy, structure, and content. Journal
of Experimental Psychology: Learning, Memory, and
Cognition, 15(4), 596–604. https://doi.org/10.1037/0278-
7393.15.4.596
Kleider, H. M., Pezdek, K., Goldinger, S. D., & Kirk, A. (2008).
Schema-driven source misattribution errors: Remembering
the expected from a witnessed event. Applied Cognitive
Psychology, 22(1), 1–20. https://doi.org/10.1002/acp.1361
Knauff, M., & Spohn, W. (2021). Psychological and philosophi-
cal frameworks of rationality—A systematic introduction.
In M. Knauff & W. Spohn (Eds.), The handbook of ratio-
nality (pp. 1–70). MIT Press.
Koehler, D. J. (1994). Hypothesis generation and confidence in
judgment. Journal of Experimental Psychology: Learning,
Memory, and Cognition, 20(2), 461–469. https://doi
.org/10.1037/0278-7393.20.2.461
Koenig, H. G., Hays, J. C., Larson, D. B., George, L. K., Cohen,
H. J., McCullough, M. E., Meador, K. G., & Blazer, D. G.
(1999). Does religious attendance prolong survival? A
six-year follow-up study of 3,968 older adults. Journals
of Gerontology: Medical Sciences, 54A(7), M370–M376.
https://doi.org/10.1093/gerona/54.7.M370
Koval, P., Laham, S. M., Haslam, N., Bastian, B., & Whelan,
J. A. (2012). Our flaws are more human than yours:
Ingroup bias in humanizing negative characteristics.
Personality and Social Psychology Bulletin, 38(3), 283–
295. https://doi.org/10.1177/0146167211423777
Kruger, J., & Gilovich, T. (1999). “Naive cynicism” in every-
day theories of responsibility assessment: On biased
assumptions of bias. Journal of Personality and Social
Psychology, 76(5), 743–753. https://doi.org/10.1037/0022-
3514.76.5.743
Kruglanski, A. W., Bélanger, J. J., Chen, X., Köpetz, C., Pierro,
A., & Mannetti, L. (2012). The energetics of motivated
cognition: A force-field analysis. Psychological Review,
119(1), 1–20. https://doi.org/10.1037/a0025488
Kruglanski, A. W., & Freund, T. (1983). The freezing and
unfreezing of lay-inferences: Effects on impressional
primacy, ethnic stereotyping, and numerical anchoring.
Journal of Experimental Social Psychology, 19(5), 448–468.
https://doi.org/10.1016/0022-1031(83)90022-7
Kruglanski, A. W., Jasko, K., & Friston, K. (2020). All think-
ing is ‘wishful’ thinking. Trends in Cognitive Sciences, 24,
413–424. https://doi.org/10.1016/j.tics.2020.03.004
Kruglanski, A. W., Jasko, K., Milyavsky, M., Chernikova, M.,
Webber, D., Pierro, A., & di Santo, D. (2018). Cognitive
consistency theory in social psychology: A paradigm
reconsidered. Psychological Inquiry, 29(2), 45–59. https://
doi.org/10.1080/1047840X.2018.1480619
Kube, T., & Rozenkrantz, L. (2021). When beliefs face real-
ity: An integrative review of belief updating in mental
health and illness. Perspectives on Psychological Science,
16, 247–274. https://doi.org/10.1177/174569162093
1496
Kunda, Z. (1987). Motivated inference: Self-serving generation
and evaluation of causal theories. Journal of Personality
and Social Psychology, 53(4), 636–647. https://doi.org/
10.1037/0022-3514.53.4.636
Kunda, Z. (1990). The case for motivated reasoning. Psycho-
logical Bulletin, 108(3), 480–498.
Kveraga, K., Ghuman, A. S., & Bar, M. (2007). Top-down pre-
dictions in the cognitive brain. Brain and Cognition, 65(2),
145–168. https://doi.org/10.1016/j.bandc.2007.06.007
Kwan, V. S. Y., John, O. P., Robins, R. W., & Kuang, L. L.
(2008). Conceptualizing and assessing self-enhancement
bias: A componential approach. Journal of Personality
and Social Psychology, 94(6), 1062–1077. https://doi
.org/10.1037/0022-3514.94.6.1062
Ladouceur, R., Gosselin, P., & Dugas, M. J. (2000). Experimental
manipulation of intolerance of uncertainty: A study of
a theoretical model of worry. Behaviour Research and
Therapy, 38(9), 933–941. https://doi.org/10.1016/S0005-
7967(99)00133-3
Langer, E. J. (1975). The illusion of control. Journal of
Personality and Social Psychology, 32(2), 311–328. https://
doi.org/10.1037/0022-3514.32.2.311
Levine, L. J., & Safer, M. A. (2002). Sources of bias in mem-
ory for emotions. Current Directions in Psychological
Science, 11(5), 169–173. https://doi.org/10.1111/1467-8721
.00193
Liberman, A., & Chaiken, S. (1992). Defensive processing
of personally relevant health messages. Personality and
Social Psychology Bulletin, 18(6), 669–679. https://doi
.org/10.1177/0146167292186002
Lieberman, J. D., & Arndt, J. (2000). Understanding the lim-
its of limiting instructions: Social psychological explana-
tions for the failures of instructions to disregard pretrial
publicity and other inadmissible evidence. Psychology,
Public Policy, and Law, 6(3), 677–711. https://doi.org/
10.1037/1076-8971.6.3.677
Lord, C. G., Lepper, M. R., & Preston, E. (1984). Considering
the opposite: A corrective strategy for social judgment.
Journal of Personality and Social Psychology, 47(6), 1231–
1243. https://doi.org/10.1037/0022-3514.47.6.1231
Lord, C. G., Ross, L., & Lepper, M. (1979). Biased assimilation
and attitude polarization: The effects of prior theories on
subsequently considered evidence. Journal of Personality
and Social Psychology, 37(11), 2098–2109. https://doi
.org/10.1037/0022-3514.37.11.2098
Lord, C. G., & Taylor, C. A. (2009). Biased assimilation: Effects
of assumptions and expectations on the interpretation
of new evidence. Social and Personality Psychology
Compass, 3(5), 827–841. https://doi.org/10.1111/j.1751-
9004.2009.00203.x
Maass, A., Salvi, D., Arcuri, L., & Semin, G. (1989). Language
use in intergroup contexts: The linguistic intergroup bias.
Journal of Personality and Social Psychology, 57(6), 981–
993. https://doi.org/10.1037//0022-3514.57.6.981
MacIntyre, F. (2004). Was religion a kinship surrogate? Journal
of the American Academy of Religion, 72(3), 653–694.
https://doi.org/10.1093/jaarel/lfh063
Maier, S. F., & Seligman, M. E. (1976). Learned helplessness:
Theory and evidence. Journal of Experimental Psychology:
General, 105(1), 3–46. https://doi.org/10.1037/0096-
3445.105.1.3
Mandelbaum, E. (2019). Troubles with Bayesianism: An intro-
duction to the psychological immune system. Mind &
Language, 34, 141–157.
Markus, G. B. (1986). Stability and change in political atti-
tudes: Observed, recalled, and “explained.” Political
Behavior, 8, 21–44. http://doi.org/10.1007/BF00987591
Matheson, K., & Dursun, S. (2001). Social identity precur-
sors to the hostile media phenomenon: Partisan per-
ceptions of coverage of the Bosnian conflict. Group
Processes & Intergroup Relations, 4(2), 116–125. https://
doi.org/10.1177/1368430201004002003
Matlin, M. W. (2017). Pollyanna principle. In R. F. Pohl (Ed.),
Cognitive illusions: Intriguing phenomena in thinking,
judgment and memory (pp. 315–335). Routledge.
McFarland, C., & Ross, M. (1987). The relation between cur-
rent impressions and memories of self and dating part-
ners. Personality and Social Psychology Bulletin, 13(2),
228–238. https://doi.org/10.1177/0146167287132008
McNulty, J. K., & Karney, B. R. (2002). Expectancy confirma-
tion in appraisals of marital interactions. Personality and
Social Psychology Bulletin, 28(6), 764–775. https://doi
.org/10.1177/0146167202289006
Meiser, T., & Hewstone, M. (2006). Illusory and spurious cor-
relations: Distinct phenomena or joint outcomes of exem-
plar-based category learning? European Journal of Social
Psychology, 36(3), 315–336. https://doi.org/10.1002/ejsp.304
Mercier, H. (2017). Confirmation bias – Myside bias. In R. F.
Pohl (Ed.), Cognitive illusions: Intriguing phenomena in
thinking, judgment and memory (pp. 99–114). Routledge.
Merton, R. K. (1948). The self-fulfilling prophecy. The Antioch
Review, 8(2), 193–210.
Mervis, C. B., & Rosch, E. (1981). Categorization of natural
objects. Annual Review of Psychology, 32(1), 89–115.
Metcalfe, J. (1998). Cognitive optimism: Self-deception or
memory-based processing heuristics? Personality and
Social Psychology Review, 2(2), 100–110. https://doi
.org/10.1207/s15327957pspr0202_3
Miller, D. T., & Ratner, R. K. (1998). The disparity between
the actual and assumed power of self-interest. Journal of
Personality and Social Psychology, 74(1), 53–62. https://
doi.org/10.1037/0022-3514.74.1.53
Mullen, B., Atkins, J. L., Champion, D. S., Edwards, C., Hardy,
D., Story, J. E., & Vanderklok, M. (1985). The false con-
sensus effect: A meta-analysis of 115 hypothesis tests.
Journal of Experimental Social Psychology, 21(3), 262–283.
https://doi.org/10.1016/0022-1031(85)90020-4
Mullen, B., Brown, R., & Smith, C. (1992). Ingroup bias as a
function of salience, relevance, and status: An integration.
European Journal of Social Psychology, 22(2), 103–122.
https://doi.org/10.1002/ejsp.2420220202
Mullen, B., & Riordan, C. A. (1988). Self-serving attributions
for performance in naturalistic settings: A meta-analytic
review. Journal of Applied Social Psychology, 18(1), 3–22.
Murrie, D. C., Boccaccini, M. T., Guarnera, L. A., & Rufino,
K. A. (2013). Are forensic experts biased by the side that
retained them? Psychological Science, 24(10), 1889–1897.
https://doi.org/10.1177/0956797613481812
Mussweiler, T. (2003). Comparison processes in social judg-
ment: Mechanisms and consequences. Psychological
Review, 110(3), 472–489. https://doi.org/10.1037/0033-
295x.110.3.472
Mussweiler, T. (2007). Assimilation and contrast as compari-
son effects: A selective accessibility model. In D. A. Stapel
& J. Suls (Eds.), Assimilation and contrast in social psy-
chology (pp. 165–185). Psychology Press.
Mussweiler, T., Strack, F., & Pfeiffer, T. (2000). Overcoming
the inevitable anchoring effect: Considering the opposite
compensates for selective accessibility. Personality and
Social Psychology Bulletin, 26(9), 1142–1150. https://doi
.org/10.1177/01461672002611010
Mustard, D. B. (2001). Racial, ethnic, and gender disparities
in sentencing: Evidence from the US federal courts. The
Journal of Law and Economics, 44(1), 285–314. https://
doi.org/10.1086/320276
Mynatt, C. R., Doherty, M. E., & Tweney, R. D. (1978).
Consequences of confirmation and disconfirmation in
a simulated research environment. Quarterly Journal of
Experimental Psychology, 30, 395–406.
Nestler, S., Blank, H., & von Collani, G. (2008). Hindsight
bias doesn’t always come easy: Causal models, cognitive
effort, and creeping determinism. Journal of Experimental
Psychology: Learning, Memory, and Cognition, 34, 1043–
1054. https://doi.org/10.1037/0278-7393.34.5.1043
Nickerson, R. S. (1998). Confirmation bias: A ubiquitous phe-
nomenon in many guises. Review of General Psychology,
2(2), 175–220. https://doi.org/10.1037/1089-2680.2.2.175
Nickerson, R. S. (1999). How we know—And sometimes mis-
judge—What others know: Imputing one’s own knowledge
to others. Psychological Bulletin, 125(6), 737–759. https://
doi.org/10.1037/0033-2909.125.6.737
Nisbett, R. E., & Ross, L. (1980). Human inference: Strategies
and shortcomings of social judgment. Prentice Hall.
Nisbett, R. E., & Wilson, T. D. (1977). Telling more than we can
know: Verbal reports on mental processes. Psychological
Review, 84(3), 231–259. https://doi.org/10.1037/0033-
295X.84.3.231
Noor, M., Kteily, N., Siem, B., & Mazziotta, A. (2019).
“Terrorist” or “mentally ill”: Motivated biases rooted in
partisanship shape attributions about violent actors.
Social Psychological and Personality Science, 10(4), 485–
493. https://doi.org/10.1177/1948550618764808
Oaksford, M., & Chater, N. (1992). Bounded rationality in
taking risks and drawing inferences. Theory & Psychology,
2(2), 225–230. https://doi.org/10.1177/0959354392022009
O’Brien, B. (2009). Prime suspect: An examination of factors
that aggravate and counteract confirmation bias in crimi-
nal investigations. Psychology, Public Policy, and Law,
15(4), 315–334. https://doi.org/10.1037/a0017881
Oeberst, A., & Matschke, C. (2017). Word order and world
order. Titles of intergroup conflicts may increase eth-
nocentrism by mentioning the in-group first. Journal of
Experimental Psychology: General, 146, 672–690. http://
doi.org/10.1037/xge0000300
Oswald, M. E. (2014). Strafrichterliche Urteilsbildung
[Judgment and decision making in criminal law]. In
T. Bliesener, F. Lösel, & G. Köhnken (Eds.), Lehrbuch
Rechtspsychologie (pp. 244–260). Huber.
Otten, S. (2004). Self-anchoring as predictor of in-group favor-
itism: Is it applicable to real group contexts? Current
Psychology of Cognition, 22(4–5), 427–443.
Otten, S., & Epstude, K. (2006). Overlapping mental rep-
resentations of self, ingroup and outgroup: Unraveling
self-stereotyping and self-anchoring. Personality and
Social Psychology Bulletin, 32, 957–969. https://doi
.org/10.1177/0146167206287254
Otten, S., & Wentura, D. (2001). Self-anchoring and in-group
favoritism: An individual profiles analysis. Journal of
Experimental Social Psychology, 37, 525–532. https://doi
.org/10.1006/jesp.2001.1479
Pea, R. D. (1980). The development of negation in early child
language. In D. R. Olson (Ed.), The social foundations of lan-
guage and thought (pp. 156–186). W. W. Norton & Company.
Pennycook, G., Cheyne, J. A., Barr, N., Koehler, D. J., &
Fugelsang, J. A. (2015). On the reception and detection
of pseudo-profound bullshit. Judgment and Decision
Making, 10, 549–563.
Peoples, H. C., & Marlowe, F. W. (2012). Subsistence and
the evolution of religion. Human Nature, 23(3), 253–269.
https://doi.org/10.1007/s12110-012-9148-6
Pohl, R. F., & Hell, W. (1996). No reduction in hindsight
bias after complete information and repeated testing.
Organizational Behavior and Human Decision Processes,
67(1), 49–58. https://doi.org/10.1006/obhd.1996.0064
Popper, K. R. (1963). Conjectures and refutations: The growth
of scientific knowledge. Routledge & Kegan Paul.
Price, D. D., Finniss, D. G., & Benedetti, F. (2008). A compre-
hensive review of the placebo effect: Recent advances and
current thought. Annual Review of Psychology, 59, 565–590.
https://doi.org/10.1146/annurev.psych.59.113006.095941
Pronin, E. (2007). Perception and misperception of bias in
human judgment. Trends in Cognitive Sciences, 11(1),
37–43. https://doi.org/10.1016/j.tics.2006.11.001
Pronin, E., Gilovich, T., & Ross, L. (2004). Objectivity in the
eye of the beholder: Divergent perceptions of bias in
self versus others. Psychological Review, 111(3), 781–799.
https://doi.org/10.1037/0033-295X.111.3.781
Pronin, E., Kennedy, K., & Butsch, S. (2006). Bombing ver-
sus negotiating: How preferences for combating terror-
ism are affected by perceived terrorist rationality. Basic
and Applied Social Psychology, 28, 385–392. https://doi
.org/10.1207/s15324834basp2804_12
Pronin, E., Lin, D. Y., & Ross, L. (2002a). The bias blind spot:
Perceptions of bias in self versus others. Personality and
Social Psychology Bulletin, 28(3), 369–381. https://doi
.org/10.1177/0146167202286008
Pronin, E., Puccio, C., & Ross, L. (2002b). Understanding
misunderstanding: Social psychological perspectives. In
T. Gilovich, D. Griffin, & D. Kahneman (Eds.), Heuristics
and biases: The psychology of intuitive judgment (pp. 636–
665). Cambridge University Press. https://doi.org/10.1017/
CBO9780511808098.038
Pruitt, C. R., & Wilson, J. Q. (1983). A longitudinal study of
the effect of race on sentencing. Law and Society Review,
17(4), 613–635.
Pyszczynski, T., & Greenberg, J. (1987). Toward an integra-
tion of cognitive and motivational perspectives on social
inference: A biased hypothesis-testing model. Advances
in Experimental Social Psychology, 20, 297–340. https://
doi.org/10.1016/S0065-2601(08)60417-7
Pyszczynski, T., Greenberg, J., & Holt, K. (1985). Maintaining
consistency between self-serving beliefs and available
data: A bias in information evaluation. Personality and
Social Psychology Bulletin, 11(2), 179–190. https://doi
.org/10.1177/0146167285112006
Rajsic, J., Wilson, D., & Pratt, J. (2015). Confirmation bias in
visual search. Journal of Experimental Psychology: Human
Perception & Performance, 41(5), 1353–1364. https://doi
.org/10.1037/xhp0000090
Rassin, E., Eerland, A., & Kuijpers, I. (2010). Let’s find the evi-
dence: An analogue study of confirmation bias in crimi-
nal investigations. Journal of Investigative Psychology and
Offender Profiling, 7(3), 231–246. https://doi.org/10.1002/
jip.126
Richards, Z., & Hewstone, M. (2001). Subtyping and sub-
grouping: Processes for the prevention and promotion
of stereotype change. Personality and Social Psychology
Review, 5(1), 52–73. https://doi.org/10.1207/S15327957
PSPR0501_4
Richardson, J. D., Huddy, W. P., & Morgan, S. M. (2008). The
hostile media effect, biased assimilation, and percep-
tions of a presidential debate. Journal of Applied Social
Psychology, 38(5), 1255–1270. https://doi.org/10.1111/
j.1559-1816.2008.00347.x
Riedl, R. (1981). Die Folgen des Ursachendenkens [The con-
sequences of causal reasoning]. In P. Watzlawick (Ed.),
Die erfundene Wirklichkeit (pp. 67–90). Piper Verlag.
Rietdijk, N. (2018). (You drive me) crazy. How gaslighting
undermines autonomy [Unpublished master’s thesis].
Utrecht University.
Risen, J. L. (2016). Believing what we do not believe:
Acquiescence to superstitious beliefs and other powerful
intuitions. Psychological Review, 123(2), 182–207. https://
doi.org/10.1037/rev0000017
Risinger, D. M., Saks, M. J., Thompson, W. C., & Rosenthal,
R. (2002). The Daubert/Kumho implications of observer
effects in forensic science: Hidden problems of expecta-
tion and suggestion. California Law Review, 90(1), 1–56.
Robbins, J. M., & Krueger, J. I. (2005). Social projection to
ingroups and outgroups: A review and meta-analysis.
Personality and Social Psychology Review, 9(1), 32–47.
https://doi.org/10.1207/s15327957pspr0901_3
Roese, N. J., & Sherman, J. W. (2007). Expectancy. In A. W.
Kruglanski & E. T. Higgins (Eds.), Social psychology:
A handbook of basic principles (Vol. 2., pp. 91–115).
Guilford Press.
Roese, N. J., & Vohs, K. D. (2012). Hindsight bias. Perspectives
on Psychological Science, 7(5), 411–426. https://doi
.org/10.1177/1745691612454303
Rosenthal, R., & Jacobson, L. (1968). Pygmalion in the class-
room. The Urban Review, 3(1), 16–20.
Rosenthal, R., & Rubin, D. B. (1978). Interpersonal expectancy
effects: The first 345 studies. The Behavioral and Brain
Sciences, 3, 377–415.
Ross, L. (1977). The intuitive psychologist and his shortcom-
ings: Distortions in the attribution process. In L. Berkowitz
(Ed.), Advances in experimental social psychology (Vol. 10,
pp. 173–220). Academic Press.
Ross, L., & Ward, A. (1996). Naive realism in everyday life:
Implications for social conflict and misunderstanding. In
E. S. Reed, E. Turiel, & T. Brown (Eds.), Values and knowl-
edge (pp. 103–135). Psychology Press.
Ross, M., & Sicoly, F. (1979). Egocentric biases in availabil-
ity and attribution. Journal of Personality and Social
Psychology, 37(3), 322–336. https://doi.org/10.1037/0022-
3514.37.3.322
Royzman, E. B., Cassidy, K. W., & Baron, J. (2003). “I know,
you know”: Epistemic egocentrism in children and adults.
Review of General Psychology, 7(1), 38–65. https://doi
.org/10.1037/1089-2680.7.1.38
Rubin, M., & Hewstone, M. (1998). Social identity theory’s
self-esteem hypothesis: A review and some suggestions
for clarification. Personality and Social Psychology Review,
2(1), 40–62. https://doi.org/10.1207/s15327957pspr0201_3
Sahdra, B., & Ross, M. (2007). Group identification and historical
memory. Personality and Social Psychology Bulletin, 33(3),
384–395. https://doi.org/10.1177/0146167206296103
Sanbonmatsu, D. M., Posavac, S. S., Kardes, F. R., & Mantel,
S. P. (1998). Selective hypothesis testing. Psychonomic
Bulletin & Review, 5(2), 197–220. https://doi.org/10.3758/
BF03212944
Sassenberg, K., Landkammer, F., & Jacoby, J. (2014). The influ-
ence of regulatory focus and group vs. individual goals
on the evaluation bias in the context of group decision
making. Journal of Experimental Social Psychology, 54,
153–164. https://doi.org/10.1016/j.jesp.2014.05.009
Schwitzgebel, E. (2019). Belief. In E. N. Zalta (Ed.), The
Stanford encyclopedia of philosophy (Fall 2019 ed.).
Stanford University. https://plato.stanford.edu/archives/
fall2019/entries/belief
Sedikides, C., & Alicke, M. D. (2012). Self-enhancement and
self-protection motives. In R. M. Ryan (Ed.), The Oxford
handbook of human motivation (pp. 303–322). Oxford
University Press.
Sedikides, C., Gaertner, L., & Vevea, J. L. (2005). Pancultural
self-enhancement reloaded: A meta-analytic reply to Heine
(2005). Journal of Personality and Social Psychology, 89(4),
539–551. https://doi.org/10.1037/0022-3514.89.4.539
Sedlmeier, P., Hertwig, R., & Gigerenzer, G. (1998). Are judg-
ments of the positional frequencies of letters systemati-
cally biased due to availability? Journal of Experimental
Psychology: Learning, Memory, and Cognition, 24, 754–
770. https://doi.org/10.1037/0278-7393.24.3.754
Servick, K. (2015). Forensic labs explore blind testing to pre-
vent errors. Evidence examiners get practical about fight-
ing cognitive bias. Science, 349, 462–463.
Sheldrake, R. (1998). Experimenter effects in scientific research:
How widely are they neglected? Journal of Scientific
Exploration, 12(1), 73–78.
Shepperd, J., Malone, W., & Sweeny, K. (2008). Exploring
causes of the self-serving bias. Social and Personality
Psychology Compass, 2, 895–908. https://doi.org/10.1111/j.1751-
9004.2008.00078.x
Shermer, M. (1997). Why people believe weird things:
Pseudoscience, superstition, and other confusions of our
time. Freeman/Times Books/Henry Holt & Co.
Simon, H. A. (1990). Invariants of human behavior. Annual
Review of Psychology, 41, 1–19.
Skinner, B. F. (1948). Superstition in the pigeon. Journal of
Experimental Psychology, 38, 168–172.
Skov, R. B., & Sherman, S. J. (1986). Information-gathering
processes: Diagnosticity, hypothesis-confirmatory strate-
gies, and perceived hypothesis confirmation. Journal of
Experimental Social Psychology, 22, 93–121. https://doi
.org/10.1016/0022-1031(86)90031-4
Snyder, M., & Swann, W. B., Jr. (1978). Hypothesis-testing
processes in social interaction. Journal of Personality
and Social Psychology, 36(11), 1202–1212. https://doi
.org/10.1037/0022-3514.36.11.1202
Snyder, M., & Uranowitz, S. W. (1978). Reconstructing the
past: Some cognitive consequences of person percep-
tion. Journal of Personality and Social Psychology, 36(9),
941–950. https://doi.org/10.1037/0022-3514.36.9.941
Sommers, S. R., & Ellsworth, P. C. (2001). White juror bias: An
investigation of prejudice against Black defendants in the
American courtroom. Psychology, Public Policy, and Law,
7(1), 201–229. https://doi.org/10.1037/1076-8971.7.1.201
Steblay, N., Hosch, H. M., Culhane, S. E., & McWethy, A.
(2006). The impact on juror verdicts of judicial instruc-
tion to disregard inadmissible evidence: A meta-analysis.
Law and Human Behavior, 30(4), 469–492. https://doi
.org/10.1007/s10979-006-9039-7
Steblay, N. M., Besirevic, J., Fulero, S. M., & Jimenez-Lorente, B.
(1999). The effects of pretrial publicity on juror verdicts:
A meta-analytic review. Law and Human Behavior, 23(2),
219–235.
Sturm, T. (2012). The “rationality wars” in psychology: Where
they are and where they could go. Inquiry, 55(1), 66–81.
https://doi.org/10.1080/0020174X.2012.643628
Subbotsky, E. (2004). Magical thinking. Reality or illusion?
The Psychologist, 17(6), 336–339. https://doi.org/10.1348/
026151004772901140
Swann, W. B., Jr., & Buhrmester, M. D. (2012). Self-verification:
The search for coherence. In M. R. Leary & J. P. Tangney
(Eds.), Handbook of self and identity (2nd ed., pp. 405–
424). Guilford Press.
Taber, C. S., & Lodge, M. (2006). Motivated skepticism in
the evaluation of political beliefs. American Journal of
Political Science, 50(3), 755–769. https://doi.org/10.1111/
j.1540-5907.2006.00214.x
Tajfel, H., & Turner, J. C. (1979). An integrative theory of
intergroup conflict. In W. G. Austin & S. Worchel (Eds.),
Social psychology of intergroup relations (pp. 33–47).
Brooks Cole.
Tajfel, H., & Turner, J. C. (1986). The social identity theory of
intergroup behavior. In S. Worchel & W. G. Austin (Eds.),
Psychology of intergroup relations (pp. 7–24). Nelson-Hall
Publishers.
Tarrant, M., Branscombe, N., Warner, R., & Weston, D. (2012).
Social identity and perceptions of torture: It’s moral when
we do it. Journal of Experimental Social Psychology, 48(2),
513–518. https://doi.org/10.1016/j.jesp.2011.10.017
Taylor, S. E., & Brown, J. D. (1988). Illusion and well-being:
A social psychological perspective on mental health.
Psychological Bulletin, 103(2), 193–210.
Taylor, S. E., & Brown, J. D. (1994). Positive illusions and well-
being revisited: Separating fact from fiction. Psychological
Bulletin, 116(1), 21–27.
Taylor, S. E., & Gollwitzer, P. M. (1995). Effects of mindset
on positive illusions. Journal of Personality and Social
Psychology, 69(2), 213–226. https://doi.org/10.1037/0022-
3514.69.2.213
Taylor, S. E., Kemeny, M. E., Reed, G. M., Bower, J. E., &
Gruenewald, T. L. (2000). Psychological resources, posi-
tive illusions, and health. American Psychologist, 55(1),
99–109. https://doi.org/10.1037/0003-066X.55.1.99
Tesser, A. (1978). Self-generated attitude change. Advances
in Experimental Social Psychology, 11, 289–338. https://
doi.org/10.1016/S0065-2601(08)60010-6
Tobias, H., & Joseph, A. (2020). Sustaining systemic racism
through psychological gaslighting: Denials of racial profil-
ing and justifications of carding by police utilizing local
news media. Race and Justice, 10(4), 424–455. https://
doi.org/10.1177/2153368718760969
Todd, A. R., & Burgmer, P. (2013). Perspective taking and auto-
matic intergroup evaluation change: Testing an associative
self-anchoring account. Journal of Personality and Social
Psychology, 104(5), 786–802. https://doi.org/10.1037/
a0031999
Todd, P. M., & Gigerenzer, G. (2001). Shepard’s mirrors or
Simon’s scissors? Behavioral and Brain Sciences, 24, 704–
705. https://doi.org/10.1017/S0140525X01650088
Todd, P. M., & Gigerenzer, G. (2007). Environments that
make us smart: Ecological rationality. Current Directions
in Psychological Science, 16(3), 167–171. https://doi
.org/10.1111/j.1467-8721.2007.00497.x
Traut-Mattausch, E., Schulz-Hardt, S., Greitemeyer, T., & Frey,
D. (2004). Expectancy confirmation in spite of discon-
firming evidence: The case of price increases due to
the introduction of the Euro. European Journal of Social
Psychology, 34(6), 739–760. https://doi.org/10.1002/
ejsp.228
Trope, Y., & Liberman, A. (1996). Social hypothesis testing:
Cognitive and motivational mechanisms. In E. T. Higgins
& A. W. Kruglanski (Eds.), Social psychology: Handbook
of basic principles (pp. 239–270). Guilford Press.
Turner, J. C., Hogg, M. A., Oakes, P. J., Reicher, S. D., &
Wetherell, M. S. (1987). Rediscovering the social group: A
self-categorization theory. Basil Blackwell.
Tversky, A., & Kahneman, D. (1974). Judgment under uncer-
tainty: Heuristics and biases. Science, 185(4157), 1124–
1131. https://doi.org/10.1126/science.185.4157.1124
Vallone, R. P., Ross, L., & Lepper, M. R. (1985). The hostile
media phenomenon: Biased perception and perceptions
of media bias in coverage of the Beirut massacre. Journal
of Personality and Social Psychology, 49(3), 577–585.
https://doi.org/10.1037/0022-3514.49.3.577
van Boven, L., Kamada, A., & Gilovich, T. (1999). The perceiver
as perceived: Everyday intuitions about the correspondence
bias. Journal of Personality and Social Psychology, 77(6),
1188–1199. https://doi.org/10.1037/0022-3514.77.6.1188
van Prooijen, J.-W., Douglas, K. M., & De Inocencio, C. (2018).
Connecting the dots: Illusory pattern perception predicts
belief in conspiracies and the supernatural. European
Journal of Social Psychology, 48(3), 320–335. https://doi
.org/10.1002/ejsp.2331
van Veelen, R., Otten, S., & Hansen, N. (2011). Linking self
and ingroup: Self-anchoring as distinctive cognitive
route to social identification. European Journal of Social
Psychology, 41(5), 628–637. https://doi.org/10.1002/
ejsp.792
von der Beck, I., Cress, U., & Oeberst, A. (2019). Is there
hindsight bias without real hindsight? Conjectures are
sufficient to elicit hindsight bias. Journal of Experimental
Psychology: Applied, 25(1), 88–99. https://doi.org/10.1037/
xap0000185
Wason, P. C. (1960). On the failure to eliminate hypotheses
in a conceptual task. Quarterly Journal of Experimental
Psychology, 12(3), 129–140. https://doi.org/10.1080/
17470216008416717
Watzlawick, P. (1981). Die erfundene Wirklichkeit. Wie
wissen wir, was wir zu wissen glauben? Beiträge zum
Konstruktivismus [The invented reality. How do we
know, what we believe to know? Contributions to
Constructivism]. Piper.
Weber, R., Camerer, C., Rottenstreich, Y., & Knez, M. (2001).
The illusion of leadership: Misattribution of cause in coor-
dination games. Organization Science, 12(5), 582–598.
Webster, D. M., & Kruglanski, A. W. (1997). Individual differ-
ences in need for cognitive closure. Journal of Personality
and Social Psychology, 67, 1049–1062.
Whitson, J. A., & Galinsky, A. D. (2008). Lacking control
increases illusory pattern perception. Science, 322(5898),
115–117. https://doi.org/10.1126/science.1159845
Wilson, T. D., & Brekke, N. (1994). Mental contamination and
mental correction: Unwanted influences on judgments
and evaluations. Psychological Bulletin, 116(1), 117–142.
https://doi.org/10.1037/0033-2909.116.1.117
Wilson, T. D., Centerbar, D. B., & Brekke, N. (2002). Mental
contamination and the debiasing problem. In T. Gilovich,
D. Griffin, & D. Kahneman (Eds.), Heuristics and biases:
The psychology of intuitive judgment (pp. 185–200).
Cambridge University Press.
Witter, R. A., Stock, W. A., Okun, M. A., & Haring, M. J.
(1985). Religion and subjective well-being in adulthood: A
quantitative synthesis. Review of Religious Research, 26(4),
332–342. https://doi.org/10.2307/3511048
Wolfe, M. B., & Williams, D. J. (2018). Poor metacognitive
awareness of belief change. The Quarterly Journal of
Experimental Psychology, 71, 1898–1910. https://doi.org/
10.1080/17470218.2017.1363792
Wright, W. F., & Bower, G. H. (1992). Mood effects on subjec-
tive probability assessment. Organizational Behavior and
Human Decision Processes, 52(2), 276–291.
Wyer, R. S., Jr., & Frey, D. (1983). The effects of feedback
about self and others on the recall and judgments of
feedback-relevant information. Journal of Experimental
Social Psychology, 19(6), 540–559. https://doi.org/10
.1016/0022-1031(83)90015-X
Yong, J. C., Li, N. P., & Kanazawa, S. (2021). Not so much
rational but rationalizing: Humans evolved as coherence-
seeking, fiction-making animals. American Psychologist,
76(5), 781–793. http://doi.org/10.1037/amp0000674
Zuckerman, M., Knee, C. R., Hodgins, H. S., & Miyake, K.
(1995). Hypothesis confirmation: The joint effect of posi-
tive test strategy and acquiescence response set. Journal
of Personality and Social Psychology, 68, 52–60. https://
doi.org/10.1037/0022-3514.68.1.52