Cognitive Biases and Their Influence on Critical Thinking and Scientific
Reasoning: A Practical Guide for Students and Teachers
Hershey H. Friedman, Ph.D.
Professor of Business
Department of Business Management
Murray Koppelman School of Business
Brooklyn College of the City University of New York
Email: x.friedman@att.net
Abstract
People often deviate from rationality when they face too much information or need to decide
quickly. They may use heuristics (rules of thumb) to simplify their thinking which can lead to
cognitive biases. Researchers have discovered 200 cognitive biases that result in inaccurate or
irrational judgments and decisions, ranging from actor-observer to zero risk bias. This paper
explores many of these biases and suggests ways to overcome them and improve decision-
making.
Keywords: rational man, cognitive biases, heuristics, anchoring bias, availability bias,
representativeness heuristic, confirmation bias, neglect of probability bias, overconfidence bias.
Many of the fundamental principles of economic theory have recently been challenged.
Economic theory is primarily based on the premise of the “rational economic man.” Rational
man makes decisions based solely on self-interest and wants to maximize utility. However, the
rational man theory may be a theory that is dead or rapidly dying. After the Great Recession of
2008, Alan Greenspan, former Chairman of the Federal Reserve, told Congress: “I made a
mistake in presuming that the self-interests of organizations, specifically banks and others, were
such that they were best capable of protecting their own shareholders” (Ignatius, 2009). Nouriel
Roubini, a prominent economist known as Dr. Doom for predicting the housing market collapse
in 2006, stated that "The rational man theory of economics has not worked" (Ignatius, 2009).
Kahneman (2011: 374) avows: “Theories can survive for a long time after conclusive evidence
falsifies them, and the rational-agent model certainly survived the evidence we have seen, and
much other evidence as well.”
Kahneman (2011, p. 269) describes how he was handed an essay written by the Swiss
economist Bruno Frey that stated: “The agent of economic theory is rational, selfish, and his
tastes do not change.” Kahneman was astonished that economists could believe this given that it
was apparent to psychologists that “people are neither fully rational nor completely selfish, and
that their tastes are anything but stable. Our two disciplines seemed to be studying different
species, which the behavioral economist Richard Thaler later dubbed Econs and Humans.”
Many economists now realize that man does not always behave rationally, although some,
such as Gigerenzer (2018), who describes a “bias bias,” disagree with many of the
findings of the behavioral economists. Thaler and Mullainathan (2008) describe how, in
experiments involving “ultimatum” games, we see evidence that people do not behave as
traditional economic theory predicts they will. People will act “irrationally” and reject offers they
feel are unfair:
In an ultimatum game, the experimenter gives one player, the proposer,
some money, say ten dollars. The proposer then makes an offer of x, equal
to or less than ten dollars, to the other player, the responder. If the
responder accepts the offer, he gets x and the proposer gets 10 − x. If the
responder rejects the offer, then both players get nothing. Standard
economic theory predicts that proposers will offer a token amount (say
twenty-five cents) and responders will accept, because twenty-five cents is
better than nothing. But experiments have found that responders typically
reject offers of less than 20 percent (two dollars in this example) (Thaler
and Mullainathan, 2008, para. 8).
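A minimal sketch in Python of the payoff structure described above (the 20 percent rejection threshold is taken from the experimental finding quoted here, not from a formal model) contrasts the standard prediction with observed behavior:

    def play_ultimatum(offer, pot=10.0, rejection_threshold=0.20):
        # Standard theory: any positive offer should be accepted.
        # Observed behavior: offers below roughly 20% of the pot are rejected.
        if offer < rejection_threshold * pot:
            return 0.0, 0.0               # responder rejects; both players get nothing
        return pot - offer, offer         # responder accepts; proposer keeps the rest

    print(play_ultimatum(0.25))   # (0.0, 0.0): the token offer is rejected, contrary to the "rational" prediction
    print(play_ultimatum(4.00))   # (6.0, 4.0): a more generous offer is accepted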
This is why we must also draw on insights from the discipline of psychology. Ariely
(2008) uses the latest research to demonstrate that people are predictably irrational; they use
heuristics or rules of thumb to make decisions. Heuristics may be seen as “cognitive shortcuts”
that humans utilize when there is a great deal of required information to collect in order to make
a correct decision but time (or desire to do the extensive research) or money is limited (Caputo,
2013). Using rules of thumb may help a person make quick decisions but might lead to a
systematic bias. Smith (2015) lists 67 cognitive biases that interfere with rational decision-
making. A cognitive bias is defined as:
[A] systematic error in thinking that affects the decisions and judgments
that people make. Sometimes these biases are related to memory. The way
you remember an event may be biased for a number of reasons and that in
turn can lead to biased thinking and decision-making. In other instances,
cognitive biases might be related to problems with attention. Since
attention is a limited resource, people have to be selective about what they
pay attention to in the world around them (Chery, 2016, para. 2).
There are about 200 known cognitive biases, and the list continues to grow (Flyvbjerg,
2021). According to Benson (2022), cognitive biases help us address four different problems:
Problem 1: Too much information to deal with (information overload) so
our brain uses tricks (“cognitive shortcuts”) to select the information we
are most likely to use.
Problem 2: Not enough meaning from the bits and pieces of information
we are aware of; but we need to make sense out of what we perceive. To
solve this problem, we fill in the gaps.
Problem 3: Need to act fast when time and money are limited.
Problem 4: What should we remember? To be efficient, our brains need to
remember what we believe are the most important and useful pieces of
information; it is impossible to recall everything (Paras. 7-10).
These are the downsides of cognitive biases, according to Benson:
We don’t see everything. Some of the information we filter out is actually
useful and important.
Our search for meaning can conjure illusions. We sometimes imagine
details that were filled in by our assumptions, and construct meaning and
stories that aren’t really there.
Quick decisions can be seriously flawed. Some of the quick reactions and
decisions we jump to are unfair, self-serving, and counter-productive.
Our memory reinforces errors. Some of the stuff we remember for later
just makes all of the above systems more biased, and more damaging to
our thought processes (Benson, 2022, para. 17).
Similarly, Heick (2019) places the 180+ biases into a graphic consisting of four
categories: Too Much Information; Not Enough Meaning; Need to Act Fast; and What Should
We Remember? Desjardins (2021) groups 188 cognitive biases in one infographic.
Researchers from various disciplines have been examining cognitive biases in order to
understand how to improve decision-making in their areas. Caputo (2013), who was concerned
with the negotiation process, asserts that “cognitive misperceptions can highly bias human
behavior when making judgments and decisions, and this is true in negotiations.” The military
has been studying cognitive biases to improve decision-making in the US Army. It has
found that “Because these heuristics generalize situations and allow people to make quick
decisions despite time constraints or imperfect information, they often result in predictable errors
in judgments (cognitive biases)” (Mission Command, 2015). The Central Intelligence Agency
(CIA) devotes several chapters in its manual to cognitive biases. The following reason is given
for studying these biases:
Psychologists have conducted many experiments to identify the
simplifying rules of thumb that people use to make judgments on
incomplete or ambiguous information, and to show--at least in laboratory
situations--how these rules of thumb prejudice judgments and decisions.
The following four chapters discuss cognitive biases that are particularly
pertinent to intelligence analysis because they affect the evaluation of
evidence, perception of cause and effect, estimation of probabilities, and
retrospective evaluation of intelligence reports (Heuer Jr., 2008; see
Chapters 9-12).
McCann (2014) identified ten cognitive biases that can result in poor decisions by
executives in finance. Cognitive biases have been found to cause patient harm in healthcare
facilities (Joint Commission, 2016). Smith (2015) avers that a good marketer must understand
cognitive biases for the purpose of converting prospects into customers. Dror, McCormack &
Epstein (2015) focused on understanding how cognitive biases work in the legal system. They
were mainly concerned with how these biases affect the “impartiality” of expert witnesses. They
underscore that:
[A] mere expectation can bias the cognitive and brain mechanisms
involved in perception and judgment. It is very important to note that
cognitive biases work without awareness, so biased experts may think and
be incorrectly convinced that they are objective, and be unjustifiably
confident in their conclusion (Dror, McCormack & Epstein, 2015).
It is clear that individuals who want to make rational decisions that are unbiased in all
kinds of situations, not only negotiations, military intelligence, or healthcare, should attempt to
understand the various cognitive biases that distort clear thinking. The best way to reduce or
eliminate cognitive biases is to be aware of them.
Some Cognitive Biases that Adversely Affect Rational Decision Making
Actor-Observer Bias
The actor-observer bias refers to a “tendency to attribute one's own actions to external
causes, while attributing other people's behaviors to internal causes” (Chery, 2017). Thus, if
someone else cuts in line, it is because he is a jerk. If I cut in line, it is because I am late for a
crucial appointment. Zur (1991) found that cognitive biases may affect how we perceive
enemies' actions.
Research has repeatedly demonstrated how the enemy's hostile actions are
more likely to be attributed to natural characteristics, while positive,
conciliatory or peaceful actions are more likely to be attributed to
situational factors. In other words, when the enemy is acting peacefully, it
is because it is forced to do so by external circumstances and not by its
own choice. When it acts aggressively, it is due to personal choice or
characteristic behavior (Zur, 1991).
Anchoring Bias
Thaler & Sunstein (2008, pp. 23-24) provide an example of how anchoring works: People
who are asked to guess the population of Tallahassee will probably have no idea. Suppose
subjects are randomly assigned to two groups. Group A is first told that Los Angeles has 4
million people and then asked to guess the population of Tallahassee. Subjects in Group B are
first told that the population of Liberty, NY is 9,900. What will happen is that Group A will
make a much higher guess than Group B as to the population of Tallahassee. The reason is that
the first number they are given is used as an anchor and then adjusted. Group A will modify the 4
million downward, knowing that Tallahassee is much smaller than Los Angeles. Group B will
adjust upward, knowing that Tallahassee is larger than Liberty, NY.
Lawyers use anchoring to establish a number in a lawsuit. The lawyer will ask for $30
million in damages, knowing very well that there is no way the jury will award this kind of
number for, say, a libelous story in the paper about the client. However, she might get her client
a few million dollars since the $30 million will be used as an anchor. Retailers might use phony
markdowns (original price $800) to anchor a price and get customers to overpay for a product.
Thompson (2013) states: “people don't really like making decisions. We have habits, we
like thinking automatically. So sometimes we avoid making choices altogether because it
stresses us out.” Real estate agents understand this and take advantage of buyers by employing
the following technique.
Since buying a house is highly consequential and difficult to reverse,
rational people should look at a great many options and think them
through very carefully. A good agent will show you a few houses that are
expensive and not very nice, and then one at almost the same price and far
nicer. Many buyers will respond by stopping their search and jumping on
this bargain. Our susceptibility to "bargains" is one of the cognitive
devices we use to simplify choice situations, and one that companies are
conscious of when they position their products (Thompson, 2013, para.
12).
Authority Bias
Authority bias is a cognitive bias that makes people more prone to believe and follow the
views of those they see as authority figures. They may even do this when it conflicts with their
own moral beliefs. This is why it is crucial to verify that the supposed authority figures are
indeed experts, and also seek a second opinion.
Availability Bias
This refers to the overestimation of risks that are readily available in memory. How easily
things come to mind is a heuristic that makes people overestimate the importance of certain
kinds of information. If something is difficult to remember, one will assume it is less likely to
occur. Kahneman (2011) defines availability bias as follows:
There are situations in which people assess the frequency of a class or the
probability of an event by the ease with which instances or occurrences
can be brought to mind. For example, one may assess the risk of heart
attack among middle-aged people by recalling such occurrences among
one's acquaintances. Similarly, one may evaluate the probability that a
given business venture will fail by imagining various difficulties it could
encounter. This judgmental heuristic is called availability. Availability is a
useful clue for assessing frequency or probability, because instances of
large classes are usually reached better and faster than instances of less
frequent classes. However, availability is affected by factors other than
frequency and probability (Kahneman, 2011, p. 425).
Availability bias means there is a tendency to overestimate the risks of accidents that are
easy to recall. Why are people more worried about being killed with a gun than drowning in a
pool? Or, why do we think more people die of homicides than suicides? According to Thaler &
Sunstein (2008, pp. 24-26), people "assess the likelihood of risks by asking how readily
examples come to mind." Therefore, familiar risks (e.g., those reported in the media) are more
frightening to people than those not familiar. Thousands die yearly from injuries resulting from
falling in the shower, yet people are more worried about being killed by a terrorist. The danger of
being hurt from texting while driving (or even walking) is quite significant. According to Thaler
& Sunstein (2008, p. 26): "easily remembered events may inflate people's probability
judgments." This is also why people believe that accidents are responsible for as many deaths as
disease. It works both ways. Events we cannot bring to mind will have lower probabilities of
occurring. Of course, a marketer can make risks familiar by showing them in advertisements.
Two biases that affect availability are recency and salience. Recency refers to the
tendency to give more weight to the latest, most recent information or events rather than older
information or events. Saliency bias refers to the fact that
Big, dramatic events, such as explosions, gun battles, and natural disasters,
stick in our heads and stay there, undermining our ability to think
objectively about things like causation, probabilities, and death rates.
Since September 2001, motor-vehicle accidents have killed more than four
hundred thousand Americans, but how often do you worry or get upset
about them? (Cassidy, 2013, para. 2).
The media makes us aware of the threat of terrorist attacks. It is, however, statistically
much more likely that an American will die in a car accident than be killed in a terrorist
attack. There is one chance in a hundred that a person will die in a car accident over a lifetime,
and the chance of being killed in a terrorist attack is 1 in 20 million
(http://www.lifeinsurancequotes.org/additional-resources/deadly-statistics/).
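A quick back-of-the-envelope comparison of the lifetime odds quoted above (just a sketch of the arithmetic; the odds themselves come from the cited source) makes the gap concrete:

    p_car_accident = 1 / 100            # lifetime chance of dying in a car accident (quoted above)
    p_terror_attack = 1 / 20_000_000    # lifetime chance of being killed in a terrorist attack

    print(p_car_accident / p_terror_attack)   # 200000.0, i.e., the car-accident risk is about 200,000 times greater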
Availability Cascade
Kuran & Sunstein (2007, p. 683) define an availability cascade as “a self-reinforcing
process of collective belief formation by which an expressed perception triggers a chain reaction
that gives the perception increasing plausibility through its rising availability in public
discourse.” Basically, if something is repeated often enough, it will gain much more credibility.
As the popular saying goes: “repeat something long enough and it will become true.”
Backfire Effect
One would think that people would change their beliefs and opinions when presented
with facts that contradict them. However, what often happens when people’s
beliefs – especially those firmly held – are challenged by contradictory evidence is that these
incorrect beliefs become even more entrenched. It is a daunting task to change people’s views with facts.
Certainty and misinformation are compelling and potent, making it difficult for facts to
change people’s minds. There is evidence that not only do facts not correct misinformation, but
they make it more persistent and potent (Gorman & Gorman, 2017; Kolbert, 2017; Mercier &
Sperber, 2017; Wadley, 2012). Colleen Seifert, a researcher at the University of Michigan, states
the following concerning misinformation.
Misinformation stays in memory and continues to influence our thinking,
even if we correctly recall that it is mistaken. Managing misinformation
requires extra cognitive effort from the individual… If the topic is not very
important to you, or you have other things on your mind, you are more
likely to make use of misinformation. Most importantly, if the information
fits with your prior beliefs, and makes a coherent story, you are more
likely to use it even though you are aware that it's incorrect (Wadley,
2012).
Bandwagon Effect Bias
This bias refers to the tendency of people to adopt a particular behavior, belief, attitude,
or style if a large number of people have also accepted it (Chery, 2015). It is a type of
groupthink. The fact that many people believe something does not make it true. The bandwagon
effect may have an impact on how people vote. People want to vote for winners and may vote for
someone perceived as being far ahead in the polls (a perception that the polls themselves may shape). Advertising may also
try to convince us that a product is good simply because millions of people use it. There is some
evidence that opinion polls may contribute to the bandwagon effect by influencing undecided
voters to go along with the majority (Obermaier, Koch & Baden, 2013).
Barnum Effect
The Barnum effect, also known as the Forer Effect, describes when people believe that
general and vague information that could relate to anyone applies uniquely to themselves. Many
individuals believe that horoscopes were meant for them. The reality is that they are written in
such vague terms that they can apply to almost anyone. Tarot card readers and psychics also take
advantage of the Barnum effect and make people think they have special abilities. Marketers also
use a similar approach to make people believe that a product was customized to meet their
unique needs.
Base Rate Fallacy
The base rate fallacy is a cognitive bias that occurs when people focus too much on the
specific, specialized details of a situation (the individuating information which is distinct) and
disregard the overall, general frequency or probability of something occurring (the base rate). In
a nutshell, the general probability is overlooked in favor of the specific probability. The specific
probability might focus on a particular case or a small sample. This can lead to people making
inaccurate judgments or decisions. Base rate fallacy is one of six examples of the problem of
representativeness or similarity discussed by Tversky and Kahneman (1974).
The following experiment is discussed in Kahneman (2011, pp. 146-154): Subjects were
told the following about Tom W., a graduate student:
Tom W. is of high intelligence, although lacking in true creativity. He has
a need for order and clarity and for neat and tidy systems in which every
detail finds its appropriate place. His writing is rather dull and mechanical,
occasionally enlivened by somewhat corny puns and flashes of
imagination of the sci-fi type. He has a strong drive for competence. He
seems to have little feel and little sympathy for other people, and does not
enjoy interacting with others. Self-centered, he nonetheless has a deep
moral sense (p. 147).
The above description led people to ignore prior probabilities regarding the relative size of
majors in different disciplines. Subjects asked to rank nine fields of specialization indicated that
Tom W. was most likely majoring in computer science and engineering. Essentially, the
similarity to a stereotype of a group trumps the actual size of the group (the prior probability).
Flyvbjerg (2021) maintains that base-rate fallacy is one of the more serious biases in
project management because project planners tend to see their projects as being special and
distinctive. Flyvbjerg points out that every individual is distinctive, but the medical profession
has made enormous progress by focusing on what people have in common. A project may appear
unique in a particular state but might be familiar if one examines how the plan worked in other
states and countries.
When George H. W. Bush ran against Michael Dukakis for president, the infamous
Willie Horton advertising campaign was used to distort the reality of furlough programs for
prisoners. All 50 states, including California when Ronald Reagan was governor, had these
programs. The advertisement, considered among the top 10 campaigns ever, stated that
Dukakis:
[A]llowed first-degree murderers to have weekend passes from prison.
One was Willie Horton, who murdered a boy in a robbery, stabbing him
19 times. Despite a life sentence, Horton received 10 weekend passes from
prison. Horton fled, kidnapped a young couple, stabbing the man and
repeatedly raping his girlfriend. Weekend prison passes—Dukakis on
crime (WNYC, 2015, para. 2).
This ad had an enormous adverse impact on criminal justice reform by focusing people’s
attention on one case and ignoring the base-rate information.
It should be noted that representativeness/similarity is a general, shared term that
describes various errors individuals make when judging probabilities. Tversky and Kahneman
(1974) identified six situations where representativeness/similarity caused fallacious reasoning:
(1) Insensitivity to the prior probability of outcomes; (2) Insensitivity to sample size;
(3) Misconceptions of chance; (4) Insensitivity to predictability; (5) The illusion of validity; and
(6) Misconceptions of regression (to the mean). One should also add the “Conjunction Fallacy”
to this list.
Better-than-Average Bias
According to Kim and Han (2023), the better-than-average bias is a “social comparative
bias in which people evaluate their performance or abilities more favorably than they do those of
average others.” Thus, people think they are better than others in several areas, including
morality, intelligence, and health; they believe they are less likely to get sick than others. This
bias is also known as illusory superiority and self-enhancement bias; it can have negative and
positive consequences. For example, it can make people feel good about themselves, increase
their self-esteem, lower depression, and enhance their well-being. But it can also make them less
logical, prone to conflicts with others, and overconfident (see also overconfidence bias).
Bias Blind Spot
People tend to have a bias blind spot, meaning they are likelier to rate themselves as less
susceptible to biases (including cognitive biases) than others. We are also more able to detect
biases in others than in ourselves. According to one researcher:
People seem to have no idea how biased they are. Whether a good
decision-maker or a bad one, everyone thinks that they are less biased than
their peers …This susceptibility to the bias blind spot appears to be
pervasive, and is unrelated to people’s intelligence, self-esteem, and actual
ability to make unbiased judgments and decisions (Reo, 2015).
Thus, physicians believe that gifts from pharmaceutical companies are likely to
unconsciously bias decisions made by other doctors. These gifts, however, will not prejudice
their own medical decisions (Reo, 2015).
Certainty Bias
Certainty bias is the cognitive bias that makes us overestimate the accuracy of our beliefs,
judgments, and opinions. People resist new information that challenges or contradicts their
preexisting ideas, attitudes, thoughts, and beliefs, sticking to their views even
when there is a preponderance of evidence indicating that they are wrong. Consider the amount
of scientific evidence that the planet is experiencing climate change and that vaccines do not
cause autism. Millions of people still cannot accept that they are wrong.
With certainty bias, the focus is on the beliefs and opinions of the individual. With
overconfidence bias, a type of certainty bias, the emphasis is on people’s convictions regarding
their knowledge. People tend to overestimate their expertise and are wrongly overconfident.
Certainty Effect (Zero-Risk Bias)
Studies show that people prefer options that reduce a small risk to zero over options that
achieve a larger reduction in a much bigger risk. In other words, we prefer the absolute
certainty of a smaller benefit (i.e., complete elimination of risk) to the lesser certainty of
receiving a larger benefit. Generally, people tend to give higher weights to outcomes
they perceive as highly probable or certain and lower weights to outcomes they believe have
lower probabilities.
The risk of having an autistic child is much smaller than the risk of a child dying from
infectious diseases. Yet many parents try to reduce the risk of autism by not vaccinating their
children (actually, there is no evidence linking autism to vaccines) and take on the much higher
risk associated with infectious diseases such as measles, rubella, and mumps.
In one study, people preferred reducing a given risk by 5 percentage points (from 5% to 0%) rather
than halving a much larger risk from 50% to 25%. The latter option reduced the risk more, yet the
public preferred zero risk (Decision Lab, 2023). People prefer options where risk can be
eliminated entirely over better alternatives. Most people would choose a guaranteed $1 million
over a 95% chance of $2 million with a 5% chance of 0, even though the expected value of the
latter choice is much higher. Money-back guarantees probably are effective because the
consumer perceives them as reducing risk to zero (Decision Lab, 2023).
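A simple expected-value calculation (a sketch of the arithmetic behind the example above) shows why the guaranteed $1 million is not the value-maximizing choice:

    guaranteed = 1_000_000                    # expected value of the sure thing
    gamble = 0.95 * 2_000_000 + 0.05 * 0      # expected value of the risky option

    print(guaranteed, gamble)   # 1000000 1900000.0; the gamble is worth $900,000 more in expectation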
Kahneman (2011, pp. 312-314) discusses the Allais paradox to demonstrate how even the
greatest statisticians were susceptible to a certainty effect.
In problems A and B, which would you choose?
A. 61% chance to win $520,000 OR 63% chance to win $500,000
B. 98% chance to win $520,000 OR 100% chance to win $500,000
(Kahneman, 2011, p. 313).
Most people prefer the left-hand option in problem A and the right-hand option
(certainty) in problem B. This pattern of choice makes no logical sense and violates utility
theory. Allais demonstrated that “the leading decision theorists in the world had preferences
inconsistent with their own view of rationality!” Kahneman explains this using the certainty
effect.
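Working out the expected values of the four gambles (a quick arithmetic sketch) makes the inconsistency concrete: the left-hand option has the higher expected value in both problems, so preferring it in A but not in B cannot be squared with expected-utility reasoning.

    ev_a_left  = 0.61 * 520_000    # 317,200
    ev_a_right = 0.63 * 500_000    # 315,000
    ev_b_left  = 0.98 * 520_000    # 509,600
    ev_b_right = 1.00 * 500_000    # 500,000

    print(ev_a_left > ev_a_right, ev_b_left > ev_b_right)   # True True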
People are willing to pay a great deal to eliminate risk entirely. This sometimes results in
laws that focus on attempting to remove all risk regardless of the actual benefits. The cost and
effort required to reduce the risk to zero may not be worth it, given the limited resources
available to the government. The same can be said of all the tests done by the healthcare system.
The costs involved in zero-risk healthcare are enormous, and spending the money on preventive
medicine and healthcare for the indigent may make more sense.
One of the most powerful words in advertising is “free.” This may relate to the zero-risk
bias. When something is free, there is no risk attached to acquiring it. There should be no
difference between purchasing two bottles of champagne at $40 each for $80 or paying $80 for
one bottle and getting the second one free; either way, the consumer receives two bottles for $80.
However, the word “free” changes everything.
For example, in one study where people were offered a choice of a fancy
Lindt truffle for 15 cents and a Hershey’s kiss for a penny, a large
majority (73%) chose the truffle. But when we offered the same
chocolates for one penny less each—the truffle for 14 cents and the kiss
for nothing—only 31% of participants selected it. The word “free,” we
discovered, is an immensely strong lure, one that can even turn us away
from a better deal and toward the “free” one (Ariely, 2009, para. 5).
Choice-Supportive Bias
Choice-supportive bias is the tendency for people making a decision to remember their
choice as being better than it actually was simply because they made it. We overrate the selected
option and underrate the options that were rejected. Post-purchase rationalization is also a type of
choice-supportive bias. One who does not want to fall into the trap of choice-supportive bias
must constantly check and reevaluate to see whether a decision was correct; one should not
defend flawed choices. After all, everyone makes mistakes.
Clustering Illusion Bias
People tend to see patterns in what are essentially random streaks. Gamblers tend to do
this and attempt to “beat the system” by taking advantage of these phantom patterns in various
games of chance, such as cards (“hot hand”) or the roulette wheel. People tend to see patterns in
price fluctuations of multiple stocks. The Gambler’s Fallacy is another cognitive bias that
involves a lack of understanding of random streaks.
Confirmation Bias
Once people form an opinion, they “embrace information that confirms that view while
ignoring, or rejecting, information that casts doubt on it … Thus, we may become prisoners of
our assumptions” (Heshmat, 2015). People tend to only listen to information that supports their
preconceptions. People may be able to see flaws in their opponents’ arguments; when it comes
to their own opinions, however, they are blind.
Kahneman speaks of “adversarial collaboration,” which means bringing together two
researchers who disagree and having them conduct an experiment jointly (Matzke et al., 2013;
Kahneman, 2012). This is a way to reduce the confirmation bias that arises when a researcher
consciously or unconsciously designs an experiment in such a way so as to provide support for a
particular position (Matzke et al., 2013).
Given the massive amount of research available to scholars, it is not difficult for a
researcher to cherry-pick the literature and only reference studies that provide support for a
particular opinion (confirmation bias) and exclude others (Goldacre, 2011). Even if individual
studies are done correctly, this does not guarantee that a researcher writing a state-of-the-art
review paper will write an accurate, undistorted synthesis of the literature. Indeed, Celia Mulrow
demonstrated that many review articles were biased (Goldacre, 2011). Motivated reasoning bias
is the flip side of confirmation bias (Marcus, 2008, p. 56).
Congruence Bias
Congruence bias is similar to confirmation bias. It is a tendency to test a given hypothesis
(usually our own beliefs) rather than consider alternative theories that might produce better
results. In effect, someone guilty of congruence bias is trying to prove that s/he is right. This is
the reason that alternative hypotheses are not considered. From the quotes below, it is clear that
Arthur Conan Doyle, creator of Sherlock Holmes, understood the importance of being aware of
the potential existence of several hypotheses rather than starting with one. After the facts are
collected, a detective or researcher selects the theory that does the best job of fitting the facts.
The following three quotes from Arthur Conan Doyle’s Sherlock Holmes stories describe
how research should be done:
“It is a capital mistake to theorize before one has data. Insensibly one
begins to twist facts to suit theories, instead of theories to suit facts” (“A
Scandal in Bohemia”).
“One should always look for a possible alternative and provide against it.
It is the first rule of criminal investigation" (“Adventure of Black Peter”).
“When you have excluded the impossible, whatever remains, however
improbable, must be the truth” (“Sign of the Four”). (Buxbaum, 2013,
paras. 2, 10, 6, resp.).
Some researchers are convinced that marijuana is a gateway drug leading to addiction to
harder drugs such as heroin. Indeed, there is evidence that a large percentage of addicts did start
with marijuana when they were adolescents. However, there is an alternative hypothesis
suggested by the National Institute on Drug Abuse:
An alternative to the gateway-drug hypothesis is that people who are more
vulnerable to drug-taking are simply more likely to start with readily
available substances such as marijuana, tobacco, or alcohol, and their
subsequent social interactions with others who use drugs increases their
chances of trying other drugs. Further research is needed to explore this
question (National Institute on Drug Abuse, 2017, para. 4).
Conjunction Fallacy
According to probability theory, the probability of a conjunction, the joint probability of
A and B [(P (A and B)], cannot exceed the likelihood of either of its two individual constituents,
P (A) or P (B). In other words, P (A and B) ≤ P (A) and P (A and B) ≤ P (B). For example, the
probability of being a man with red hair is less than or equal to the likelihood of being a man; the
probability of being a man with red hair is less than or equal to the probability of having red hair.
Despite this, people will make this mistake with the so-called “Linda Problem.” This
study is discussed in Kahneman (2011) but was initially published by Tversky and Kahneman
(1983).
Linda is 31 years old, single, outspoken, and very bright. She majored in
philosophy. As a student, she was deeply concerned with issues of
discrimination and social injustice, and also participated in antinuclear
demonstrations. Which one of these is more probable?
(a) Linda is a bank teller.
(b) Linda is an insurance salesperson.
(c) Linda is a bank teller and is active in the feminist movement
(Kahneman, 2011, pp. 156-157).
Logically, as noted above, option (c) cannot be more likely than option (a), but
Kahneman (2011) found that about 85 percent of respondents claimed that it was. Even advanced
graduate students who had taken several statistics courses made this mistake. Tversky &
Kahneman posit that the reason most people get this wrong is because they use a heuristic called
representativeness. Representativeness (or similarity) refers to the tendency of people to judge
the likelihood of an event occurring by finding something similar and then assuming (often
incorrectly) that the probabilities of the two events must be similar. Option (c) appears to be
more representative and better resembles the behavior of Linda. People do not think of a bank
teller as being a political activist.
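A tiny check with hypothetical counts (illustrative numbers only, not data from the original study) shows why option (c) can never be more probable than option (a): every feminist bank teller is, by definition, also counted among the bank tellers.

    population            = 10_000   # hypothetical population
    bank_tellers          = 100      # assumed count of bank tellers
    feminist_bank_tellers = 10       # a subset of the bank tellers

    p_a = bank_tellers / population            # P(bank teller) = 0.010
    p_c = feminist_bank_tellers / population   # P(bank teller and feminist) = 0.001

    assert p_c <= p_a   # the conjunction can never exceed either of its constituents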
Conservatism Bias
People tend to favor a prior view even when presented with new information or evidence,
i.e., there is a tendency to stick to old information and a reluctance to accept something new.
People do not revise their beliefs sufficiently when presented with new evidence because of
conservatism bias. Conservatism bias is related to status quo bias. Azzopardi (2010, p. 88) makes
this distinction: “The status quo bias is emotional and causes people to hold on to how things are.
The conservatism bias is cognitive and causes people to hold on to their previous opinions and
idea frames even though facts have changed.” This may help explain why HR professionals are
reluctant to consider candidates with different types of backgrounds and qualifications.
Curse of Knowledge Bias
The curse of knowledge bias occurs when we assume that others have the same level of
expertise or know-how as we do. We forget what it was like to be new to something and how
difficult it was to learn. We also use words and terms others may not know, making our
explanations challenging to follow. This can affect our ability to teach, communicate, or
convince others. It can also make us arrogant and overconfident. That’s why educators should
always empathize with their students, recall their own learning experiences, and use the most
straightforward language rather than technical terms and jargon.
Decoy Effect
Suppose customers are asked to choose between options A and B. Each option has
advantages (Option A may offer fewer features but be less expensive than Option B, which
offers more features). The decoy effect occurs when a third option (the decoy), Option C, is
introduced that is worse than, say, Option B but causes more people to choose the higher-priced
Option B. The decoy is purposely introduced to get customers to select the higher-priced option.
Here is an example of how a decoy could work to get more people to choose Option B:
Option A – Price: $250, 7 features
Option B – Price: $400, 10 features
Option C (Decoy) – Price: $500, 9 features
Déformation Professionnelle Bias
Déformation professionnelle is a cognitive bias that comes from the tendency to view the
world in a narrow way and through the eyes of one’s discipline or profession. People suffering
from this see the world in a distorted way and not as it really is. The quote from Mark Twain
saying that “to a man with a hammer, everything looks like a nail” is reminiscent of this bias.
Déformation Professionnelle Bias is similar to what Friedman & Friedman (2010) refer to as
disciplinary elitism.
Dualistic Thinking
Some cognitive biases encourage discrimination and prejudice; one of the worst is
dualistic thinking which produces an "us vs. them" approach to life. Dualistic thinking, also
known as black-and-white, binary, or polarized thinking, is a general tendency to see things as
good or bad, right or wrong, and us or them, without room for compromise and seeing shades of
gray. This all-or-nothing cognitive approach leads to poor decision-making and creates polarized
groups (think of today's Democrats and Republicans). It interferes with one's ability to be an
innovator, which requires one to be open-minded.
This type of dualistic thinking is known in the mental health field as "splitting," which is
a "defense mechanism in which people unconsciously frame ideas, individuals, or groups in all-
or-nothing or either-or terms (e.g., all-powerful vs. 100% powerless)" (Redstone, 2021, para. 2).
It is often seen in people who have borderline personality disorder (Villines, 2022). Splitting is
a severe problem when dealing with people with different opinions or interacting with those from
other races or religions. It is emotionally dysregulating, fostering behavioral issues like
aggression and leading to psychic pain and mental illness. It also makes it difficult for people to
have constructive dialogue and works against our shared ideals as a society, like love, peace,
justice, and unity (Redstone, 2021, para. 6).
Dunning-Kruger Effect
This is the tendency of people who are ignorant or unskilled in an area to overestimate
their abilities and believe that they are much more competent than they truly are. People who
have absolutely no knowledge of, say, Egyptology will not suffer from the Dunning-Kruger
Effect. It is people who have a little bit of knowledge that are likely to have a great deal of
confidence in their capabilities.
Kruger & Dunning (1999) documented this effect in a paper titled "Unskilled and
Unaware of It: How Difficulties in Recognizing One's Own Incompetence Lead to Inflated Self-
Assessments." They asserted that individuals need a reasonable amount of skill and knowledge
to accurately estimate the actual amount of skill and knowledge they possess. A little knowledge
is indeed dangerous (Poundstone, 2017).
Many scholars believe that the Dunning-Kruger Effect is not a valid concept and may be
explained by the effects of regression to the mean (Danvers, 2020; Gignac and Zajenkowski,
2020). Time will tell whether this effect is a statistical artefact or an actual cognitive bias.
Endowment Effect
There is a tendency for people who own an object to value it more than those who do not
own it. Thus, people demand more to give up or sell something they own than they would be
willing to pay to acquire it. This relates to the status quo bias and loss aversion and is
inconsistent with economic theory. Based on several experiments, Kahneman, Knetsch & Thaler
(1990, p. 1342) concluded: “The evidence presented in this paper supports what may be called an
instant endowment effect: the value that an individual assigns to such objects as mugs, pens,
binoculars, and chocolate bars appears to increase substantially as soon as that individual is
given the object.”
Escalation of Commitment Bias
This is the tendency for an individual or a group to stick with a failing decision or action
rather than accepting that a mistake was made and altering course. There is a reluctance to admit
that the original decision was wrong even when there is clear evidence that this is the case.
Countries sometimes do this and continue to fight an unwinnable war. The expression “Throwing
good money after bad” is reminiscent of this irrational fallacy. It is sometimes called the “sunk
cost fallacy.”
Expectation Bias
This refers to the tendency for the researcher’s expectations to affect the outcome of a
study. It also refers to the fact that people remember things the way they expected them to occur;
this is why many memories are false. Double-blind studies are needed to minimize
expectation bias. Expectation bias is one of the few cognitive biases that has been researched in
the field of auditing (Pike, Curtis & Chui, 2013).
False Consensus Effect
People tend to overestimate how much others share their attitudes, behaviors, beliefs,
preferences, and opinions. We tend to think that others think the same way we do.
Framing Bias
Tversky & Kahneman (1981) were among the first to identify this cognitive bias known
as framing. People respond differently to choices/preferences depending on how they are
presented. In particular, there will be different responses depending on whether the choices are
offered as a gain or loss (see loss aversion). Thus, doctors are more likely to prescribe a
procedure when it is described as having a 93% survival rate within five years than if it is
presented as having a 7% mortality rate within five years (McNeil, Pauker, & Tversky, 1988).
Similarly, 9 out of 10 students will rate condoms as effective if they are informed that they have
a “95 percent success rate” in stopping HIV transmission; if, however, they are told that condoms have a
“5 percent failure rate,” then only 4 out of 10 students rate condoms as being effective (Linville,
Fischer, & Fischhoff, 1992). This is why it is more important for a marketer to emphasize what a
prospective customer loses by not making a purchase than what they gain by making the
purchase (Flynn, 2013).
The following example, based on the research of Tversky and Kahneman (1981),
demonstrates the principle of loss aversion and framing. Participants were told the “US is
preparing for the outbreak of an unusual Asian disease, which is expected to kill 600 people.”
They could pick one of two alternative programs to address the problem:
If Program A is adopted, 200 people will be saved.
If Program B is adopted, there is a 1/3 probability that 600 people will be
saved, and a 2/3 probability that no people will be saved.
72% of participants chose option 1, while only 28% of participants chose
option 2 (Tversky and Kahneman, 1981, p. 453).
A second group of subjects were given the same cover story involving an unusual Asian
disease as above but with two alternative scenarios. Note that the only difference is in how the
options are framed.
If Program C is adopted, 400 people will die.
If Program D is adopted, there is a 1/3 probability that nobody will die,
and a 2/3 probability that 600 people will die (Tversky and Kahneman,
1981, p. 453).
The responses were almost the opposite of the first group of subjects: 22% of
participants chose option 1, and 78% chose option 2. The only difference was how the options
were framed.
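In terms of expected lives saved, the four programs are identical (a sketch of the arithmetic); the reversal in preferences is driven entirely by whether the outcomes are framed as gains or losses.

    total = 600

    saved_A = 200                                   # 200 saved for certain
    saved_B = (1/3) * 600 + (2/3) * 0               # expected saved = 200
    saved_C = total - 400                           # 400 die for certain, so 200 saved
    saved_D = total - ((1/3) * 0 + (2/3) * 600)     # expected deaths = 400, so 200 saved

    print(saved_A, saved_B, saved_C, saved_D)       # 200 200.0 200 200.0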
Patel provides the following examples of framing:
We are more likely to enjoy meat labeled 75 percent lean meat as opposed
to 25 percent fat. 93 percent of Ph.D. students registered early when the
framing was in terms of a penalty fee for late registration, with only 67
percent registering early when the framing was in terms of a discount for
earlier registration. More people will support an economic policy if the
employment rate is emphasized than when the associated unemployment
rate is highlighted (Patel, 2015).
Kahneman (2011, p. 373) explains the different results from opt-out and opt-in systems
as a framing effect. This is especially important when getting people to donate organs such as
kidneys. Countries that use an opt-out system – where the default is that you are an organ donor
and you have to check a box if you do not want to be one – have significantly more organ donors
than countries where an opt-in system is used (i.e., the individual must explicitly state that s/he is
willing to be an organ donor). The differences in one study showed that the organ donation rate
was almost 100% in Austria with an opt-out system versus 12% in Germany with an opt-in
system.
Another example given by Kahneman (2011, pp. 372-373) is the following:
Consider two car owners who seek to reduce their costs. Adam switches
from a gas-guzzler of 12 mpg to a slightly less voracious guzzler that runs
at 14 mpg. The environmentally virtuous Beth switches from a 30 mpg car
to one that runs at 40 mpg. Suppose both drivers travel equal distances
over a year. Who will save more gas by switching? (Kahneman, 2011:
372).
The answer is counter-intuitive. If they both drive 10,000 miles, Adam saves about 119
gallons (from about 833 gallons to 714 gallons), and Beth saves approximately 83 gallons (from
333 gallons to 250 gallons). The problem has to do with framing. The savings become more
evident if the information is in gallons per mile (gpm) rather than mpg. Adam switches from a
car that consumes .0833 gpm to one that consumes .0714 gpm — saving .0119 gpm. Beth
switches from a car that consumes .0333 gpm to .0250 gpm —saving .0083 gpm.
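The arithmetic is easy to verify (a sketch assuming the 10,000-mile annual distance used above):

    miles = 10_000

    adam_saved = miles / 12 - miles / 14    # about 833.3 - 714.3 = 119 gallons
    beth_saved = miles / 30 - miles / 40    # about 333.3 - 250.0 = 83 gallons

    print(round(adam_saved), round(beth_saved))   # 119 83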
Fundamental Attribution Error
The fundamental attribution error refers to the tendency of a person observing another
person’s behavior to attribute it to internal factors or personality and to underestimate the effect
of situational causes (i.e., external influences). In other words, we believe others do what they do
because of their internal disposition (personality). Thus, if you see someone fighting with
another person, you will probably attribute it to his having a violent temper. Of course, it is
quite possible that he is the victim of a mugging attempt and is trying to defend himself.
Sherman (2014) provides the following example of the fundamental attribution error:
A classic example is the person who doesn’t return your call. You could
go the usual route and think, “He is an inconsiderate slob and
my parents were right years ago when they said I should have dropped
him as a friend.” But the fundamental attribution error would remind you
that there might very well be other reasons why this person hasn’t called
you back. Maybe he is going through major issues in his life. Maybe he is
traveling for work. Maybe he honestly forgot (Sherman, 2014).
Gambler’s Fallacy (also known as Monte Carlo Fallacy)/Misconception of Chance
Gambler’s fallacy is a cognitive bias in which a person mistakenly believes that past
outcomes will affect future outcomes even with a random process. For example, if you flip a coin
five times and get five heads, one guilty of this bias will expect a tail on the next toss. Of course,
since each toss is an independent event, the probability is a constant 50%. People incorrectly
believe that random processes are self-correcting and “a deviation in one direction induces a
deviation in the opposite direction to restore the equilibrium” (Tversky and Kahneman, 1974, p.
1125). The following example is related to this fallacy and is known as the “misconception of chance.”
The misconception of chance is an example of the representativeness/similarity bias discussed
by Tversky and Kahneman (1974).
A coin is to be tossed six times. Which sequence is more likely?
Sequence 1: H T H T T H
Sequence 2: H H H T T T
Of course, both are equally likely. However, people will think sequence 1 is more likely
than sequence 2 because it appears more random (Tversky & Kahneman, 1974).
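A one-line calculation (a sketch of the probability arithmetic) confirms that any specific sequence of six independent fair tosses has the same probability, 1/64:

    p_sequence_1 = (1/2) ** 6    # H T H T T H
    p_sequence_2 = (1/2) ** 6    # H H H T T T

    print(p_sequence_1 == p_sequence_2, p_sequence_1)   # True 0.015625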
This bias was found to influence decision-makers such as refugee asylum judges, loan
officers, and baseball umpires. They also made the same mistake in underestimating the
probabilities of sequential streaks, such as five baseball strikes in a row or approving asylum for,
say, six refugees in a row. Thus, “misperceptions of what constitutes a fair process can
perversely lead to unfair decisions” (Chen, Moskowitz & Shue, 2016).
Halo Effect
Halo effect is a cognitive bias that occurs in impression formation in which a person
assumes that because someone possesses positive characteristic A, then they will also have
positive characteristics B, C, D, E, and F. It also occurs with negative traits. If a person possesses
negative characteristic A, then he will also have negative characteristics B, C, D, E, and F.
Kahneman (2011, pp. 206-208) feels that the halo effect together with outcome bias helps
explain the popularity of various books dealing with leadership. These books focus on successful
firms and then attribute their success to leadership style. Actually, in most cases, it is simply luck. Chance
often explains the success of particular firms and the failures of others, not the competence of
leadership. Indeed, over time, the situation often reverses itself, and successful firms become
unsuccessful and vice versa. Kahneman claims that the message of Built to Last, a leadership
book by Jim Collins and Jerry I. Porras, is that “good managerial practices can be identified and
that good practices will be rewarded by good results.” Kahneman (2011, p. 207) asserts: “In the
presence of randomness, regular patterns can only be mirages.” According to Fitza (2013),
chance or luck often has a more significant effect on firm performance – it may account for 70%
– than the actual abilities of the CEO.
Hindsight Bias
This is sometimes called the “I knew it all along” effect. It is the tendency to see past
events as being more predictable than they actually were before they occurred. After an event occurs (e.g., the
election of Donald Trump), people believe that they knew he would win before the election took
place. Boyd (2015) says that “Hindsight bias can make you overconfident. Because you think
you predicted past events, you’re inclined to think you can see future events coming. You bet too
much on the outcome being higher, and you make decisions, often poor ones, based on this
faulty level of confidence.”
Hyperbolic Discounting
Hyperbolic discounting is a cognitive bias that explains many supposedly irrational
behaviors, such as addictions, health choices, and personal financial decisions. McCann (2014)
lists it as a critical bias adversely affecting corporate finance decisions. Hyperbolic discounting
refers to people's tendency to prefer a reward that arrives sooner rather than wait longer for a
larger reward in the future. People discount the value of the reward that comes later in the future.
A rational person would use a constant discount rate to discount the value of a future reward (this
is known as exponential discounting and has been used in economic theory); this means the
discount rate should not change across different wait times. In reality, however, people use a
time-inconsistent discounting model: The further out in the future the reward, the more we
discount it (Kinari, Ohtake & Tsutsui, 2009; Frederick, Loewenstein & O’Donoghue, 2002).
Thus, one may prefer receiving $5000 now to $5200 in 3 months. However, if the choice
is $5000 in two years or $5200 in two years and three months, most people would opt for the
$5200. People do not mind waiting three months if the wait occurs in two years. What this
indicates is that the discount rate used by people is not constant or rational: As delay length
increases, the time discount rate decreases.
Try this experiment on your friends: Show them a $100 bill and ask: “Would you rather
have this $100 bill now or wait two weeks and get $109?” You will find that people are not that
rational and want things now. Most will take the $100. Of course, a sensible person should wait
two weeks for the $109—this is equivalent to earning a 9% return ($9 / $100) for two weeks of
waiting. Does anyone know of a bank that offers 9% interest for two weeks?
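The preference reversal described above can be reproduced with a simple hyperbolic discounting model, V = A / (1 + k·t). The monthly parameter k = 0.016 below is an illustrative assumption, not an empirical estimate; with a constant (exponential) discount rate, the ranking of the two rewards would never flip.

    def hyperbolic_value(amount, months, k=0.016):
        # k is an assumed, purely illustrative monthly discount parameter
        return amount / (1 + k * months)

    # Choice today: $5,000 now vs. $5,200 in 3 months
    print(hyperbolic_value(5000, 0) > hyperbolic_value(5200, 3))      # True: take the $5,000 now

    # Same $200 gap pushed two years out: $5,000 in 24 months vs. $5,200 in 27 months
    print(hyperbolic_value(5000, 24) > hyperbolic_value(5200, 27))    # False: now willing to wait for the $5,200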
Ikea Effect
There is a tendency for people to overvalue and overrate objects they made or assembled
by themselves, such as IKEA furniture, regardless of the actual quality of the finished product.
However, this effect seems only to exist when the labor resulted in the successful completion of
the project. If subjects did not complete the task, the IKEA effect disappeared (Norton, Mochon,
& Ariely, 2012).
Illusion of Control
People tend to overestimate how much control they have over external factors such as
prices, costs, demand, and the stock market. In some cases, people believe they can control the
outcome of something random such as the toss of dice.
Identifiable Victim Effect
People have a tendency to respond more strongly and be willing to offer greater
assistance to a single identifiable victim or person at risk than to a large group of anonymous
people at risk. This may be why talking about a single case of a disease victim may be more
effective in raising money than describing millions of victims. Lee & Feeley (2016) conducted a
meta-analysis of this effect.
Illusory Correlation
Illusory correlation is a cognitive bias that makes people believe that two variables or
random events are associated when there is no relationship (zero correlation). This can happen
because two unrelated events, such as a hitting streak and wearing a particular hat, coincided and
made someone believe they were causally related. Likewise, people may assume that the cold
weather triggers their migraine or humidity worsens their joint pain.
Information Overload Bias
People make the mistake of believing that more information means better decisions.
Actually, too much information often results in poorer decisions since people cannot handle all
the information available to them. There is only a limited amount of information the brain can
process. Information overload can cause increased stress and what has been referred to as
information fatigue. Behavioral economists disagree with neoclassical economists and posit that
too many choices lead to poorer decisions (Pollitt & Shaorshadze, 2011).
Ariely (2008, pp. 152-153) demonstrates how having too many options often results in
the failure to make any decision. For example, someone trying to purchase a laptop might spend
several months trying to buy the best laptop and not consider the “consequence of not deciding.”
The difference among the laptops might be minimal. Still, the time spent dwelling on trivial
differences and the lost opportunity of not having a computer are not taken into account. We
often waste far too much time making a trivial decision when we would be better off flipping a
coin to make a choice. To learn more about the problem of offering too many options, read Barry
Schwartz’s book, The Paradox of Choice: Why More Is Less, or view his TED lecture at
http://www.ted.com/talks/barry_schwartz_on_the_paradox_of_choice.html.
Insensitivity to Prior Probability of Outcomes
Tversky & Kahneman (1974) found that prior probabilities are properly used when no
specific evidence is given; when worthless, detailed evidence is provided, prior probabilities are
ignored. This is related to the representativeness/similarity bias discussed above. This example
from Tversky & Kahneman (1974) illustrates how this bias works. The following description of
Dick is provided to the subjects.
Dick is a 30-year-old man. He is married with no children. A man of high
ability and high motivation, he promises to be quite successful in his field.
He is well liked by his colleagues.
This description was intended to convey no information relevant to the
question of whether Dick is an engineer or a lawyer. Consequently, the
probability that Dick is an engineer should equal the proportion of
engineers in the group, as if no description had been given. The subjects,
however, judged the probability of Dick being an engineer to be .5
regardless of whether the stated proportion of engineers in the group was
.7 or .3. Evidently, people respond differently when given no evidence and
when given worthless evidence (p. 1125).
The base rate fallacy and insensitivity to the prior probability of outcomes share several
similarities and may overlap but are not equivalent. Both involve the tendency to minimize or
undervalue an event's base rate (the general frequency or prior probability) when making
decisions or evaluations. The base rate fallacy, however, is narrower and relates to situations
where people depend too much on individuating information (information unique to a specific
case) rather than the base rate.
For instance, if you hear that Tom lacks creativity, enjoys corny puns, has a need for
order and clarity, and prefers neat and tidy systems, you might infer that he has a higher chance
of being an engineer than a social worker, even though you are aware that there are many more
social workers than engineers in the population.
Insensitivity to the prior probability of outcomes is a more general term that includes any
situation where people disregard the base rate or the previous likelihood of an outcome when
making forecasts or deductions. For example, suppose you are told that a medical diagnostic test
for a rare type of cancer has a 92% accuracy rate. You might then believe that a positive test
result means the person almost certainly has the disease, despite the very low base rate of, say,
three in 100,000.
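A short Bayes’-rule calculation makes the point concrete. The Python sketch below assumes, purely for illustration, that “92% accuracy” means the test has both 92% sensitivity and 92% specificity, and it uses the three-in-100,000 base rate mentioned above.

```python
# Probability of actually having the disease given a positive test (Bayes' rule).
prior = 3 / 100_000          # base rate: 3 in 100,000
sensitivity = 0.92           # P(positive | disease)    -- assumed reading of "92% accuracy"
specificity = 0.92           # P(negative | no disease) -- assumed as well
false_positive_rate = 1 - specificity

p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
p_disease_given_positive = sensitivity * prior / p_positive
print(f"P(disease | positive test) = {p_disease_given_positive:.4%}")  # about 0.03%
```

Even after a positive result, the probability of disease remains well under one in a thousand; ignoring the prior probability is precisely what makes the result feel far more alarming than it is.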
Insensitivity to Sample Size (Law of Small Numbers)
This cognitive bias, also an example of the representativeness/similarity problem, is the
tendency of people to underestimate the amount of variation that occurs in small samples. There
is considerably more variation in small samples than in large samples, and people do not
consider this when estimating probabilities. The problem below, known as the “Hospital
problem,” was used by Tversky & Kahneman (1974) to illustrate the insensitivity to the sample
size problem.
A certain town is served by two hospitals. In the larger hospital about 45
babies are born each day, and in the smaller hospital about 15 babies are
born each day. As you know, about 50 percent of all babies are boys.
However, the exact percentage varies from day to day. Sometimes it may
be higher than 50 percent, sometimes lower. For a period of 1 year, each
hospital recorded the days on which more than 60 percent of the babies
born were boys. Which hospital do you think recorded more such days?
The larger hospital (22.1%)
The smaller hospital (22.1 %)
About the same (55.8%)
Most subjects judged the probability of obtaining more than 60 percent
boys to be the same in the small and in the large hospital, presumably
because these events are described by the same statistic and are therefore
equally representative of the general population. In contrast, sampling
theory entails that the expected number of days on which more than 60
percent of the babies are boys is much greater in the small hospital than in
the large one, because a large sample is less likely to stray from 50
percent. This fundamental notion of statistics is evidently not part of
people's repertoire of intuitions (Tversky & Kahneman, 1974, p. 1125).
It is clear from probability theory that the smaller hospital is much more likely to deviate
a great deal from the expected proportion of 50%. Thus, a person tossing a coin three times has a
reasonable chance (12.5%) of getting three tails. But if the coin is tossed 100 times, the proportion
of tails is highly unlikely to deviate much from 50%, i.e., about 50 tails. For more on the Hospital
Problem, see Noll & Sharma (2014).
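The hospital result can be checked directly with the binomial distribution. The sketch below computes, for each hospital, the exact probability that boys exceed 60 percent of a day's births, assuming independent births with a 50 percent chance of a boy.

```python
from math import comb

def prob_more_than_60_percent_boys(n_births, p_boy=0.5):
    """Exact binomial probability that boys make up more than 60% of the day's births."""
    return sum(comb(n_births, k) * p_boy**k * (1 - p_boy)**(n_births - k)
               for k in range(n_births + 1) if k * 10 > 6 * n_births)

print(f"Small hospital (15 births/day): {prob_more_than_60_percent_boys(15):.1%}")  # ~15%
print(f"Large hospital (45 births/day): {prob_more_than_60_percent_boys(45):.1%}")  # ~7%
```

The small hospital should record such days roughly twice as often as the large one, which is exactly the intuition most subjects lacked.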
Kahneman (2011, pp. 112-113) discusses the problem of selecting samples and indicates
that many studies use samples that are too small to detect the effects they are looking for. This means
there is insufficient power to reject the claim about the population (i.e., the null hypothesis) even
when it is false. There is a way to ensure that a sample is large enough to have sufficient power,
but most researchers rely on intuition rather than formulas. As Kahneman (2011: 114) points out:
“The strong bias toward believing that small samples closely resemble the population from
which they are drawn is also part of a larger story: we are prone to exaggerate the consistency
and coherency of what we see.”
Kahneman (2011, pp. 117-118) cites a study concluding that small schools were more
successful than large ones because 6 of the top 50 schools in Pennsylvania were small (an
overrepresentation by a factor of four). This resulted in vast amounts of money invested by the
Gates Foundation in creating small schools. In actuality, inferior schools also tend to be smaller
than the average school. The truth is that small schools are not better than large schools but have
more variability. The evidence suggests that large schools may be better overall because they
provide more curricular options.
The bottom line is that it is essential to realize that many occurrences, including “hot
hands” and winning streaks, are often due to chance. People should be careful before attributing streaks
to some causal effect (e.g., he is a great manager).
Intergroup (In-Group) Bias
Intergroup bias is the tendency to evaluate members of the in-group more favorably than
members of the out-group. This bias can be expressed in various ways, including the allocation
of resources, evaluation of peers, behaviors such as discrimination, and attitudes such as
prejudice. If a person believes that another individual belongs to the same group as herself, she
will rate that person more positively and show favoritism.
Interpretation Bias
Interpretation bias is a “cognitive bias in which ambiguous situations are appraised as
negative or threatening” (Beard and Peckham, 2020). Therefore, someone from a marginalized
group might instantly think that the police officer who gave them a ticket is prejudiced.
Just World Bias
The just-world cognitive bias is a heuristic some people use to make sense of the world.
It is the belief that the world is fair and that people generally get what they deserve. This bias can
cause individuals to blame victims for their own misfortune or attribute success or failure to a
character trait rather than bad luck or external factors. Thus, the condition of homeless people
might be attributed to laziness or substance abuse. People might see the high unemployment of
minorities as being due to personal characteristics rather than discrimination.
Loss Aversion
The pain of losing something we own outweighs the joy of gaining something by as much as two to
one. Thus, for example, the pain of losing $100 that you currently have is roughly double the
intensity of the joy you would experience from finding $100. This is why a different decision will be
made if the same choice is framed as a gain rather than a loss.
Interestingly, researchers believe it takes at least five positive, nice remarks to offset one
unpleasant comment in marital interactions. Loss aversion is also an issue in consumer shopping.
People reacted more strongly to a 10% increase in the price of eggs than to a 10% decrease in the
price (Heshmat, 2018).
Loss aversion can also explain why people are more likely to use their income to
purchase insurance to protect themselves from a painful loss rather than use the funds to invest in
the stock market and possibly earn considerably higher returns (with a chance to lose money).
Sticking to the status quo rather than seeking change, even when the change could be
advantageous, is also related to loss aversion.
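The roughly two-to-one asymmetry described above can be expressed with a prospect-theory-style value function. The sketch below uses parameter estimates commonly cited from Tversky and Kahneman's later work (a loss-aversion coefficient of about 2.25 and a curvature of about 0.88); the specific numbers are illustrative rather than anything reported in this paper.

```python
def prospect_value(x, alpha=0.88, lam=2.25):
    """Prospect-theory value function: gains show diminishing sensitivity;
    losses are scaled up by the loss-aversion coefficient lam."""
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

gain, loss = prospect_value(100), prospect_value(-100)
print(f"Subjective value of gaining $100: {gain:+.1f}")
print(f"Subjective value of losing  $100: {loss:+.1f}")
print(f"Losses loom {abs(loss) / gain:.2f} times larger than equal gains")  # 2.25
```

The printed ratio shows losses looming a bit more than twice as large as equivalent gains.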
Memory Bias
Memory biases are cognitive biases that involve the tendency to remember past events in
a way that matches one’s current feelings, thoughts, or beliefs. They can occur with either
positive or negative stimuli. For example, someone who feels like a victim might only remember
when others from a different group harmed them and ignore the times they were helped or
supported by them.
Mere Exposure Effect
This refers to the tendency to prefer and like things merely because we are more familiar
with them. This suggests that repeated exposure to a philosophy or idea will help make it
more acceptable to others.
Misconception of Chance
See Gambler’s Fallacy.
Moral licensing
Moral licensing is a cognitive bias that allows people to act unethically or immorally
without feeling like they are contradicting their moral values or compromising their self-image
of being an ethical individual. It makes people feel morally justified in engaging in a bad
behavior (e.g., cheating on taxes) after doing something good before (e.g., donating to charity).
After all, the good deed done in the past makes them feel morally superior and entitled to behave
unethically because they have proven that they are good from the previous act.
Some men who publicly identified as feminists and contributed to women’s rights causes have
later faced allegations of sexual harassment or sexual abuse. This is what happened to
celebrities such as Harvey Weinstein, who were later exposed as sexual predators.
Most likely, they used moral licensing to justify this. This is why it is not unusual for people who
explicitly rejected sexist hiring practices on paper to still prefer a male candidate for a job.
Companies with diversity and inclusion programs may believe this is enough to demonstrate
their morality. This may lead them to justify their discriminatory actions towards their minority
employees. Moreover, employees who reluctantly participate in seminars or talks on diversity
and inclusion may feel they have done their good deed and then bully or mistreat coworkers from
minority groups (Collier, 2021).
Marketers use this bias to increase sales. For example, airlines might donate some of their
profits to charities so people will ignore how poorly they treat their employees. In the same way,
consumers who make a green purchase may feel morally entitled to indulge in a luxury purchase
later, using their eco-friendly choice as an excuse for their lavish, self-indulgent spending
(Simbrunner and Schlegelmilch, 2017).
Motivated Blindness
Motivated blindness provides a psychological reason that many people engage in
unethical behavior. It refers to individuals' psychological tendency to overlook unethical
behaviors when it is in their interest to remain ignorant. Once people have a vested interest in
something, they can no longer be objective. This is why conflicts of interest are such a problem;
it is almost impossible to behave ethically when a conflict of interest exists. Bazerman &
Tenbrunsel (2011a) demonstrate how motivated blindness caused many ethical failures,
including the Great Recession of 2008.
It’s well documented that people see what they want to see and easily miss
contradictory information when it’s in their interest to remain ignorant—a
psychological phenomenon known as motivated blindness. This bias
applies dramatically with respect to unethical behavior (para. 14).
As noted above, “People tend to have a bias blind spot, meaning that they are more likely
to rate themselves as being less susceptible to biases (this includes cognitive biases) than others.”
Bazerman & Tenbrunsel (2011b, p. 37) observe, "Most of us dramatically underestimate the
degree to which our behavior is affected by incentives and other situational factors.” On the other
hand, we overestimate how others will be influenced by incentives (e.g., paying people to donate
blood).
Motivated Reasoning
As noted above, motivated reasoning is related to confirmation bias. Marcus (2008: 56)
defines motivated reasoning as “our tendency to accept what we wish to believe (what we are
motivated to believe) with much less scrutiny than what we don’t want to believe.” Marcus
makes the following distinction between motivated reasoning and confirmation bias: “Whereas
confirmation bias is an automatic tendency to notice data that fit with our beliefs, motivated
reasoning is the complementary tendency to scrutinize ideas more carefully if we don’t like them
than if we do.” People’s reluctance to scrutinize and analyze contrary ideas makes it difficult to
change their beliefs. This may contribute to status quo bias.
Negativity Bias
Negativity bias is a cognitive bias that causes us to pay more attention to negative
information and things than positive ones and dwell on them. It means we are more likely to
notice and recall negative experiences, respond more strongly to bad news than good news, and
focus more on insults than praise. Individuals are much more likely to relive painful memories
than blissful ones. Negativity bias makes us recall traumatic experiences better than happy ones and
thus leaves us less joyful and more stressed. People focus more on an event's downsides (e.g., potential
losses) than the upsides when deciding what to do. Loss aversion is a symptom of the dominance
of negativity (Kahneman, 2011, pp. 300-309). Moore (2019) maintains that this bias can affect
the impressions we form of colleagues in the workplace. One bad experience with one member
of a minority group is more likely to be recalled than numerous positive experiences with the
same group.
Neglect of Probability Bias
The neglect of probability is the tendency to completely ignore probabilities when
making decisions under uncertainty. People often focus on the adverse outcome rather than on
the likelihood that it will occur. The car ride to the airport is much more dangerous than flying
in a plane, yet people are more apprehensive about flying. The danger of being killed in a
terrorist attack is extremely low. Yet people are not afraid to text while driving, which is far more
likely to result in an accident, but are fearful of being victims of terrorism.
The following example illustrates the neglect of probability when it comes to lotteries.
Two games of chance: In the first, you can win $10 million, and in the
second, $10,000. Which do you play? … The probability of winning is
one in 100 million in the first game, and one in 10,000 in the second game.
So which do you choose? (Meaning Ring, 2016, para. 1).
The correct answer is the second lottery since it has an expected monetary value ten times
greater than the first lottery. Most people, however, would choose the first lottery. This bias may
also explain why people tend to be more afraid of flying than driving, even though the likelihood
of dying in a plane crash is considerably lower than in a car accident.
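The expected-value comparison behind the correct answer is simple arithmetic, sketched below with the figures from the quoted example.

```python
# Expected monetary value = prize * probability of winning.
lotteries = {
    "Game 1: $10,000,000 prize": (10_000_000, 1 / 100_000_000),
    "Game 2: $10,000 prize":     (10_000,     1 / 10_000),
}
for name, (prize, p_win) in lotteries.items():
    print(f"{name}: expected value = ${prize * p_win:.2f}")
# Game 1 is worth $0.10 per play; Game 2 is worth $1.00 per play -- ten times more.
```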
Omission bias
Omission bias is the tendency to judge commissions – active, harmful actions that hurt
others – as worse and more immoral than otherwise equivalent omissions (e.g., allowing others
to die). We think it is worse to directly and actively harm others than cause harm passively by
not doing something, even when the same number of people are hurt. The famous “Runaway
Trolley” case is reminiscent of this bias. Approximately 90% of subjects are willing to pull a
lever that diverts the runaway trolley and kills one person but saves the lives of five people. On
the other hand, very few people would be willing to throw a fat man off a bridge to stop the
runaway trolley and thereby save five people (known as ‘would you kill the fat man?’). In both
cases, the math is the same: one person dies in an effort to save five (Bakewell, 2013).
Optimism Bias
This refers to the tendency to be overly optimistic about favorable outcomes. People do
not believe that bad things will happen to them. Evatt (2010, para. 2) asserts: “Most people
expect they have a better-than-average chance of living long, healthy lives; being successfully
employed and happily married; and avoiding a variety of unwanted experiences such as being
robbed and assaulted, injured in an automobile accident, or experiencing health problems.”
Similarly, most newlyweds underestimate the chance of getting divorced; most smokers feel that,
unlike other smokers, they are less likely to develop smoking-related diseases such as cancer.
Outcome Bias
Outcome bias is a cognitive bias that refers to the tendency to judge the quality of a
decision by focusing on the eventual outcome rather than examining the factors that existed
when the decision was made. For example, a doctor might make a correct decision and go ahead
with, say, doing a C-section. If the baby dies, people (and juries) are more likely to believe that
the doctor made a poor decision. People have this inclination to overemphasize outcomes rather
than the factors and issues present when the decision was made. A general might do something
foolhardy. However, if he wins the battle, people will think he is a brilliant strategist.
Outcome bias should not be confused with hindsight bias. With hindsight bias, there has
been memory distortion, and the past has not been accurately recalled. The person actually
believes that s/he predicted that an event would occur, even though this was not the case. With
outcome bias, on the other hand, the past is not misremembered; it is ignored or devalued. This is
due to the tendency to minimize the uncertainties that existed when the decision was made and to
focus mainly on the outcome.
Overconfidence Bias
Overconfidence bias is a type of certainty bias. However, with certainty bias, the focus is
on our beliefs; with overconfidence bias, the focus is on our knowledge and talents. People tend
to overestimate their abilities and are overconfident. This is an even more significant problem
with experts. This overconfidence often results in people taking more substantial risks than they
should. Kolbert (2017) highlights, "People believe that they know way more than they actually
do.” Sloman & Fernbach (2017) also speak of the “knowledge illusion”; we do not understand
how little we actually know. With certain kinds of questions, answers that people feel are “99%
certain to be correct” turn out to be incorrect 40% of the time (Kasanoff, 2017).
Several books have been written about expert predictions, which usually turn out to be
wrong. Experts do only slightly better than random chance. Kahneman (2011, pp. 218-219) cites
research conducted by Tetlock (2005) that demonstrates how poorly experts who make a living
“commenting or offering advice on political and economic trends” actually perform. They do not
do better than monkeys throwing darts on a board displaying the various possible outcomes
(Kahneman 2011, p. 219).
This is what can be said about expert predictions:
When they’re wrong, they’re rarely held accountable, and they rarely
admit it, either. They insist that they were just off on timing, or blindsided
by an improbable event, or almost right, or wrong for the right reasons.
They have the same repertoire of self-justifications that everyone has, and
are no more inclined than anyone else to revise their beliefs about the way
the world works, or ought to work, just because they made a mistake.
Extensive research in a wide range of fields shows that many people not
only fail to become outstandingly good at what they do, no matter how
many years they spend doing it, they frequently don’t even get any better
than they were when they started. In field after field, when it came to
centrally important skills—stockbrokers recommending stocks, parole
officers predicting recidivism, college admissions officials judging
applicants—people with lots of experience were no better at their jobs
than those with very little experience (Eveleth, 2012, paras. 5, 7).
Kahneman (2011, pp. 222-233) believes that algorithms often do a better job at
predictions than experts. He describes several situations where one should rely on a simple
checklist consisting of, say, six relevant characteristics rather than relying on an expert.
Kahneman discusses a simple algorithm developed by Dr. Virginia Apgar in 1953 to determine
whether a newborn infant was in distress. Her method is superior to the expert judgment of
obstetricians since it focuses on several cues. Kahneman does point out the hostility towards
using algorithms. Incidentally, Apgar’s algorithm, still in use, has saved thousands of lives.
Kahneman (2011, p. 226) cites the work of Dawes and claims that a simple formula that uses
predictors (i.e., independent variables) with equal weights is often superior to multiple regression
models that use complex statistics to assign different weights to each of the predictor variables.
Multiple regression models are often affected by “accidents of sampling.” Of course, some
common sense is needed to select the independent variables most likely to predict the dependent
variable accurately. Dawes claims that the simple algorithm of “frequency of lovemaking minus
frequency of quarrels” does an excellent job of predicting marital stability (Kahneman, 2011, p.
226). The bottom line is that we should not be overly impressed with the judgment of experts.
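A small simulation, in the spirit of Dawes’s “improper linear models,” suggests why an equal-weight formula can hold its own against a fitted regression when the training sample is small. All of the settings below (six predictors, 25 training cases, the noise level) are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_predictors, n_train, n_test = 6, 25, 10_000

# True model: every predictor matters roughly equally, plus noise.
true_weights = rng.uniform(0.5, 1.0, n_predictors)

def make_data(n):
    X = rng.standard_normal((n, n_predictors))   # predictors already standardized
    y = X @ true_weights + rng.standard_normal(n) * 2.0
    return X, y

X_train, y_train = make_data(n_train)
X_test, y_test = make_data(n_test)

# Fitted multiple regression (ordinary least squares) on the small training sample.
ols_weights, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)

# Dawes-style "improper" model: give every standardized predictor the same weight.
equal_pred = X_test.sum(axis=1)
ols_pred = X_test @ ols_weights

print("Out-of-sample correlation with the outcome:")
print(f"  equal weights: {np.corrcoef(equal_pred, y_test)[0, 1]:.3f}")
print(f"  fitted OLS   : {np.corrcoef(ols_pred, y_test)[0, 1]:.3f}")
```

Because the fitted weights carry considerable sampling error at this sample size, the equal-weight score usually matches or beats the regression out of sample; rerunning with other random seeds shows the pattern is not a fluke of one draw.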
It does, however, sometimes pay to be overconfident. There is evidence that others
overrate individuals who are overconfident and sure of their abilities; underconfident individuals
are underrated by others as being worse than they happen to be (Lamba & Nityananda, 2014).
This may explain why politicians have no problem being so sure of themselves and
overpromising (Hutson, 2014).
Overconfidence has also been used to explain the gender gap in the corporate world: men tend to
display more overconfidence than women, which can make them appear more capable
(Hutson, 2014). Kahneman (2011) believes that one has to be very careful with people who are
overconfident and assertive. Before accepting that they know what they are talking about, one
has to have some way of measuring this empirically. He concludes that “overconfident
professionals sincerely believe they have expertise, act as experts and look like experts. You will
have to struggle to remind yourself that they may be in the grip of an illusion.”
Peak–End Rule
The peak-end rule is a cognitive bias that deals with how people judge experiences, both
pleasant (e.g., vacations) and unpleasant (e.g., sticking a hand in ice-cold water). People tend to
judge an experience mainly by how it felt at its peak and at its end. The peak is the most
intense part of the experience and might be positive or negative. Interestingly, people do not
average the entire experience to arrive at an overall rating.
Whitbourne (2012) provides the following examples to demonstrate the peak-end rule.
[P]articipants exposed to 30 seconds of 14-degree ice water (very cold!)
rated the experience as more painful than participants exposed to 90
seconds of ice water: 60 seconds of 14-degree ice water plus 30
additional seconds of 15-degree ice water. In other words, participants
found the 90 seconds of ice water exposure less painful than those exposed
to 60 seconds of nearly equally cold water because the 90 seconds ended
with exposure to a “warmer” stimulus. We will rate an experience as less
painful, then, if it ends in a slightly less painful way. The “peak end” in
this case was a one degree difference in water temperature.
People will prefer and even choose exposing themselves to more pain
(objectively determined) if the situation ends with them feeling less pain…
If you are having a tooth drilled, you’d find it was less painful if the
dentist ends the procedure with some lightening of the drill’s intensity,
even if the procedure is longer than it would otherwise be.
We approach not only our experiences of pleasure and pain in this way,
but also our acquisition of objects that we’re given as gifts… participants
given free DVDs were more pleased with the gifts if they received the
more popular ones after the less popular ones, than if they received the
exact same DVDs in the opposite order. When it comes to pleasure, it’s all
about the ending (Whitbourne, 2012, paras. 7-9).
Pessimism Bias
This refers to the tendency of some individuals to be overly pessimistic and exaggerate
the likelihood that negative events will occur. People with this type of outlook believe that
negative things will keep happening to them and they will not succeed at all kinds of tasks.
Individuals who suffer from depression are very likely to have a pessimism bias (Alloy &
Ahrens, 1987). It is the opposite of optimism bias.
Planning Fallacy
This bias is related to overconfidence and an illusion of control (McCann, 2014). People
tend to underestimate the time and cost it will take to complete a project or task, because
something unforeseen almost always comes up along the way. McCann (2014) lists this
bias, together with the illusion of control and overconfidence, as special problems in corporate
finance. Kahneman (2011: 249-251) cites a survey conducted in 2002 of American homeowners
who remodeled their kitchens. They thought the cost of the job would be around $18,658 and
ended up spending an average of $38,769.
This happens frequently when the government estimates the cost of a new weapons
system or buildings. Kahneman (2011, p. 251) provides a simple solution known as “reference
class forecasting.” The forecaster should try to gather information about time or cost from
outsiders involved in similar ventures and use this information to come up with a baseline
prediction. The forecaster should then decide whether they are too optimistic regarding time and
cost and see if the baseline prediction needs to be adjusted.
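As a rough sketch of how reference class forecasting might be operationalized, one can scale the inside-view estimate by the typical cost overrun observed in comparable past projects. The overrun ratios below are invented for illustration; only the $18,658 estimate comes from the example above.

```python
from statistics import median

# Actual cost divided by estimated cost for comparable past projects (hypothetical data).
past_overrun_ratios = [1.4, 2.1, 1.8, 2.5, 1.6, 2.0, 1.9]

my_estimate = 18_658                                     # the "inside view" estimate
baseline = my_estimate * median(past_overrun_ratios)     # the "outside view" baseline
print(f"Inside-view estimate:     ${my_estimate:,.0f}")
print(f"Reference-class baseline: ${baseline:,.0f}")
```

The baseline is then adjusted only if there is a concrete reason to believe the current project differs from the reference class.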
Projection Bias
Projection bias refers to the tendency of people to misperceive how their future tastes and
preferences will differ from current tastes and preferences. People tend to exaggerate to
themselves the degree to which their future preferences, values, and tastes will be the same as
their current preferences, values, and tastes (Loewenstein, O’Donoghue, & Rabin, 2003).
Projection bias leads to all kinds of poor choices, including becoming addicted to, say, cigarettes,
buying too many impulse items when shopping while hungry, and ordering too much food at the
beginning of a restaurant meal. People deciding on where to
vacation during the summer who make their plans during the winter when it is freezing will tend
to go to places that are too hot because of projection bias. They will assume that they need a hot
place. Of course, once the winter is over, preferences change.
Reactance
Human beings value their freedom and ability to make their own choices. If someone
tries to restrict those choices and people feel that they are being forced into a particular behavior,
they will resent the loss of freedom and act in a manner that restores their
autonomy. In other words, they often do the opposite of what the authority figure tells them to
do.
Reactive Devaluation
Reactive devaluation is a cognitive bias that results when people reject or downgrade
ideas merely because they originated from an opponent, competitor, or other antagonist. One
way to overcome this is by not revealing the source of the idea or pretending that it came from
someone the other party likes.
Regression Toward the Mean (also known as regression to the mean) Bias
Regression toward the mean bias was first documented by Sir Francis Galton (1886), who
examined the relationship between parents' height and their children's height. He found that, in
general, parents who are taller than average tend to have children who are taller than average;
and parents who are shorter than average tend to have children who are shorter than average.
However, in instances where the parents' average height was greater than the average for the
population (e.g., suppose the father is 6’8” and the mother is 5’11”), the children tended to be
shorter than the parents. Similarly, when the parents' average height was shorter than the average
for the population (e.g., suppose the father is 5’1” and the mother is 4’10”), the children tended
to be taller than the parents.
Regression to the mean is a widespread statistical phenomenon and has many
implications. Thus, if you play a slot machine and have a “hot hand” and win several times in a
row (this is due to chance), you might conclude that you have a winning streak and keep playing.
However, regression toward the mean indicates that if you keep playing, your luck will run out,
and you will start losing. The same is true in sports. An athlete with a phenomenal year who hits
60 home runs will probably not do as well the following year. Smith (2016) discusses the so-
called “Sports Illustrated Cover Jinx.” There is no curse associated with being on the cover of
Sports Illustrated. The reason players tend to have a poor year after being on the cover of Sports
Illustrated is not a curse but due to the regression to the mean. Smith (2016) demonstrates that
the five baseball players with the highest batting averages in 2014 (the average for the five was
.328) did worse in 2015 (the average dropped to .294).
Regression to the mean will not happen if two perfectly correlated variables are measured
(there is no random effect). If two variables do not have a strong correlation, regression to the
mean will occur. Kahneman (2011, p. 181) provides the following examples: the correlation
between SAT scores and college GPA is about .60, while the correlation between income and
education level is only about .40, so regression to the mean will be more substantial in the latter case. The weaker
the correlation, the more significant the role of randomness. The batting average of baseball
players during one season correlates with the batting average of a subsequent season, but the
correlation is not perfect. Also, if a measurement is far from the population mean, there will be a
stronger regression-to-the-mean effect since the amount of room to regress is much larger than if
the measurement is close to the population mean.
Regression towards the mean can result in serious mistakes by researchers and decision-
makers. They may believe something is due to an experimental factor when it is simply due to
chance. Suppose you take a sample of 200 ADHD children who score very highly on
aggressiveness and feed them borscht thrice daily. If you examine the aggressiveness scores 60
days later, the scores should be lower because of regression toward the mean, not the
borscht. This is true of any measurement. If you examine the scores of subjects that are either
much higher or much lower than average and then take a second set of measurements from the
same people, the second set of scores should be closer to the population average.
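A simulation makes the borscht example concrete: select the children with the most extreme first scores and simply measure them again, with no intervention at all. The group size, score scale, and amount of measurement noise below are arbitrary choices for illustration.

```python
import random

random.seed(42)
N = 10_000

# Each child has a stable "true" aggressiveness level; every measurement adds noise.
true_levels = [random.gauss(50, 10) for _ in range(N)]

def measure(level):
    return level + random.gauss(0, 10)

first = [measure(t) for t in true_levels]
second = [measure(t) for t in true_levels]      # 60 days later, no treatment given

# Take the 200 children with the highest first scores (the "very aggressive" sample).
top_200 = sorted(range(N), key=lambda i: first[i], reverse=True)[:200]

avg_first = sum(first[i] for i in top_200) / 200
avg_second = sum(second[i] for i in top_200) / 200
print(f"Selected group, first measurement : {avg_first:.1f}")
print(f"Selected group, second measurement: {avg_second:.1f}  (closer to the mean of 50)")
```

Because the test-retest correlation in this setup is only 0.5, the selected group is expected to fall about halfway back toward the population mean on the second measurement; any "treatment" given in between would wrongly receive the credit.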
Morton & Torgerson (2003) feel that all healthcare professionals should be aware of
regression to the mean if they want to make correct decisions.
Clinicians use diagnostic tests to target and monitor treatment. Regression
to the mean can confound this strategy. The preliminary test has a high
probability of giving an abnormal result through chance, and initial
treatment may be unnecessary. Because of this chance effect, there is a
high probability that subsequent measurements will spontaneously regress
toward the mean value. This misleads clinicians and patients into thinking
that treatment has been effective when the treatment was either not
required or ineffective…
Public health interventions are often aimed at sudden increases in disease and
thus vulnerable to the effects of regression to the mean (Morton & Torgerson,
2003, para. 5).
Kahneman (2011, pp. 175-176) describes a mistake made by flight instructors. They
observed that praising trainee pilots for an excellent landing was often followed by a poorer
landing, which seemed contrary to theories claiming that good performance should be rewarded so that
subjects become conditioned to do well. The correct explanation was regression towards the
mean.
Kahneman (2011, pp. 181-182) underscores that a statement such as “Highly intelligent
women tend to marry men who are less intelligent than they are” will result in many interesting
theories involving causality. For example, some people will feel this is because intelligent
women do not want to compete with their husbands. In actuality, regression to the mean provides
a more straightforward explanation. Tversky and Kahneman (1974) describe “misconceptions of
regression” as one of the six cases of the representativeness/similarity bias in judgment.
Representativeness Heuristic
This cognitive bias is a mental shortcut that we use when estimating probabilities. As
noted above in the discussion of base rate fallacy, representativeness/similarity heuristic is a
general, shared term that describes various errors individuals make when judging probabilities.
Tversky and Kahneman (1974) identified six situations where representativeness/similarity
caused fallacious reasoning: (1) Insensitivity to the prior probability of outcomes; (2)
Insensitivity to sample size; (3) Misconceptions of chance; (4) Insensitivity to predictability;
(5) The illusion of validity; and (6) Misconceptions of regression (to the mean). One should also
add the “Conjunction Fallacy” to this list.
We make decisions regarding the likelihood of a particular event based on calculating
how similar it is to an existing belief, stereotype, or mental prototype. The problem with this
heuristic is that it may result in disregarding important information and thus making a poor
decision. For example, one study found that decisions made by jurors could be affected by
the wearing of eyeglasses, which increased intelligence ratings of defendants and decreased guilty
verdicts. The authors also found several interaction effects between the defendant’s race and the
wearing of eyeglasses (Brown, Henriquez, and Groscup, 2008). Facial tattoos probably can also
influence how we perceive someone.
Selective Perception Bias
People tend to allow their expectations or beliefs to influence how they perceive the
world. Thus, information that contradicts existing beliefs will tend to be overlooked and/or
forgotten; information in agreement with their expectations will be noticed and retained
(selective retention).
Self-Serving Bias
There is no question that people want to see themselves in a positive light (Heine et al.,
1999; Wang et al., 2015). Self-serving bias, a type of attributional bias, enables people to see
themselves in a positive light. It is a type of cognitive bias that involves attributing one’s
successes to internal, personal characteristics (internal attributions) and blaming one’s failures on
outside forces beyond one’s control (external attributions). In other words, we take personal
credit when we succeed (e.g., getting an A+ in a course), but if something does not work out
(e.g., getting a D in a class), we tend to deny responsibility and blame outside factors such as a
poor teacher or an unfair test. One thing self-serving bias accomplishes is improving one’s self-
esteem and strengthening the ego. However, it makes it difficult for a person to desire to improve
if s/he believes all failures are due to outside forces.
Semmelweis Reflex
The Semmelweis Reflex refers to the tendency to reject new ideas because they
contradict established beliefs and paradigms. This was named after Dr. Ignaz Semmelweis, who
could not convince doctors to wash their hands before delivering babies (see story at
http://www.exp-platform.com/Pages/SemmelweisReflex.aspx).
Spotlight Effect
The spotlight effect is a cognitive bias that describes the tendency of individuals to
overestimate the degree to which they are observed and noticed by others. If people believe they
are in the spotlight and being noticed more than they actually are, they become more self-
conscious and worried about their behavior and appearance. Awareness of this bias can help one
be more accurate in evaluating social situations.
Status Quo Bias
Status quo bias is a cognitive bias that occurs when people favor the familiar and prefer
that things remain the same rather than opting for change. People seem to prefer inaction to
making decisions. It also manifests itself when inertia results in people continuing with a
previously-made decision rather than trying something new. People are more upset about the
negative consequences of making a new decision than the consequences of not making any
decision (Kahneman & Tversky, 1982). Choosing by default (the default may be a historical
precedent or simply the most salient option), an automatic choice heuristic, is related to status quo
bias.
Stereotyping Bias
Stereotyping is a mental shortcut used by people when making decisions about strangers.
When stereotyping, we have certain expectations about the qualities and attributes members of a
group (e.g., women, Blacks, Jews, homosexuals, Hispanics, Asians, Muslims, etc.) possess. It is
much easier for the brain to remember a generalization than specifics because it requires less
effort. Thus, a generalization like “all __________ are violent” is much easier to recall than
dealing with scores of individuals (Benson, 2022). One might make certain assumptions about a
person who identifies as a liberal Democrat or conservative Republican. However, many
stereotypes are incorrect and based on fallacious beliefs about certain groups. In any case, there
is a great deal of variability among the individuals who make up a group.
Survivor Bias
This refers to the tendency to focus on the people or objects that survived or succeeded.
We tend to ignore the non-survivors and might completely overlook them because they have
become invisible. Unfortunately, in many cases, the non-survivors or failures can provide us with
much information. However, because they are not around, we may be unaware of the
considerable amount of missing data.
Shermer (2014) provides the following interesting example of survivor bias, citing Gary
Smith, author of the book Standard Deviations:
Smith illustrates the effect with a playing card hand of three of clubs, eight
of clubs, eight of diamonds, queen of hearts and ace of spades. The odds
of that particular configuration are about three million to one, but Smith
says, “After I look at the cards, the probability of having these five cards is
1, not 1 in 3 million.” (Smith, 2014, para. 2).
Survivor bias is also known as sampling or selecting on the dependent variable. This is
where the researcher selects cases where some measure or phenomenon of interest has been
observed while excluding the cases where the variable or phenomenon of interest has not been
observed. The selected cases are then used to draw conclusions about the phenomenon of interest. For
example, suppose a researcher looks only at successful firms as measured by annual returns. She
concludes that these firms were headed by leaders who had humility and that CEO humbleness
makes a company great. This finding may or may not be valid. The flaw in the researcher’s
reasoning is that she also did not examine unsuccessful firms. It is quite possible that humble
CEOs also head unsuccessful firms.
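A simulation of the humble-CEO example shows why examining only the winners proves nothing. Here CEO humility is assigned at random and, by construction, has no effect on returns; all of the numbers are invented for illustration.

```python
import random

random.seed(1)
N_FIRMS = 50_000

# By construction, humility has nothing to do with performance.
humble = [random.random() < 0.30 for _ in range(N_FIRMS)]            # 30% of CEOs are humble
annual_return = [random.gauss(0.07, 0.20) for _ in range(N_FIRMS)]   # returns independent of humility

# "Sample on the dependent variable": keep only the most successful firms.
top_firms = sorted(range(N_FIRMS), key=lambda i: annual_return[i], reverse=True)[:500]

share_humble_overall = sum(humble) / N_FIRMS
share_humble_top = sum(humble[i] for i in top_firms) / len(top_firms)
print(f"Humble CEOs among all firms : {share_humble_overall:.1%}")   # ~30%
print(f"Humble CEOs among top firms : {share_humble_top:.1%}")       # also ~30%
```

A researcher who inspects only the top firms will still find plenty of humble CEOs and may conclude that humility is the secret of success; only by also looking at the firms that did not make the cut does it become clear that the trait is equally common everywhere.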
In Search of Excellence by Tom Peters and Robert H. Waterman (1982) is one of the
most popular business books. The authors studied 43 of America’s best-run companies in an effort to
determine what made them successful and came up with eight basic principles of management.
In other words, they sampled based on the dependent variable of “excellent firms in 1982.” The
question is, what happened to those firms? Eckel (2013) says that “two-thirds of them
underperformed the S&P 500 over a decade. Some faltered badly, and some even went out of
business.” Kodak, Kmart, and Wang Labs are three examples of firms on Peters and
Waterman’s (1982) list that went bankrupt. Amdahl, also on the list, was successful until the
early 1990s and then started losing money and was eventually taken over. Baum and Smith
(2015) also found that the stock performance of these companies did not stand the test of time.
Von Restorff Effect
This bias is named after the German psychologist Hedwig von Restorff (1906–1962). She
found that radically different and distinctive items are more likely to stand out in one’s memory
than ordinary items. This is the logic behind highlighting terms that we want to remember.
Heuer, Jr. (2010, Chapter 10) suggests other ways to make information stand out:
Specifically, information that is vivid, concrete, and personal has a greater
impact on our thinking than pallid, abstract information that may actually have
substantially greater value as evidence. For example:
*Information that people perceive directly, that they hear with their own
ears or see with their own eyes, is likely to have greater impact than
information received secondhand that may have greater evidential value.
*Case histories and anecdotes will have greater impact than more
informative but abstract aggregate or statistical data (Heuer, Jr., 2010,
Chapter 10).
Zeigarnik Effect
There is a tendency for people to find it easier to remember a task that is incomplete than one
that has been finished. This probably has to do with the way short-
term memory works. This effect is named after the Russian psychologist Bluma Zeigarnik, who first
wrote about it (Zeigarnik, 1927).
Zero-Risk Bias
See Certainty Effect.
Conclusion
Taylor (2013) highlights that cognitive biases may harm businesses because they often
result in poor decisions. He notes that there are several ways to reduce these biases. First, he
posits that one must be aware of the different types of biases. By studying cognitive biases and
understanding them, one can reduce their impact. Second, he asserts that collaboration is
probably the most powerful tool for minimizing cognitive biases. This is why it is crucial to have
diverse groups (groupthink is also a bias) that can work together to make a decision. He
cites recommendations made by Daniel Kahneman, who suggests that the
following questions be asked to minimize cognitive bias:
Is there any reason to suspect the people making the recommendation of
biases based on self-interest, overconfidence, or attachment to past
experiences? Realistically speaking, it is almost impossible for people to
not have these three influence their decisions.
Was there groupthink or were there dissenting opinions within the
decision-making team? This question can be mitigated before the
decision-making process begins by collecting a team of people who will
proactively offer opposing viewpoints and challenge the conventional
wisdom of the group (Taylor, 2013, para. 13).
Soll, Milkman & Payne (2015) provide suggestions on how to outsmart some cognitive
biases. They discuss three tools that can be used to prevent what they call “misweighting,” i.e.,
placing too much weight on the wrong information: blinding, checklists, and algorithms.
Blinding is one way to eliminate the effects of such factors as stereotyping. One orchestra had
job candidates audition behind a screen to prevent gender bias. This resulted in a considerable
increase (from 5% to 40%) in female players. The use of checklists helps place the focus on what
is truly relevant and helps reduce cognitive biases that may result in poor choices. This has
helped venture capitalists and HR people make better selections. Algorithms are far from perfect
since people create them, but they are still considerably better than relying solely on human
judgment (Soll, Milkman & Payne, 2015).
How many decisions a day does the average person make? This is a challenging question
to answer. The number often cited online is 35,000 (Hoomans, 2015). Of course, most of these
decisions are as trivial as when to get out of bed. Many decisions, however, are quite serious, and
a poor choice can cause immense harm. Indeed, wars are often the result of cognitive biases
when it comes to understanding the enemy (Zur, 1991). Politicians, business people,
military leaders, negotiators, and investors must improve their decision-making abilities. This
means doing everything possible to minimize cognitive biases. One cannot be a critical thinker
without understanding cognitive biases and knowing how to deal with them.
References
Alloy, L. B. & Ahrens, A. H. (1987). Depression and pessimism for the future: Biased use of
statistically relevant information in predictions for self versus others. Journal of
Personality and Social Psychology, 52(2), 366-378.
Ariely, D. (2009). The end of rational economics. Harvard Business Review, July. Retrieved
from https://hbr.org/2009/07/the-end-of-rational-economics
Ariely, D. (2008). Predictably irrational. HarperCollins Publishers.
Azzopardi, P. V. (2010). Behavioural technical analysis: An introduction to behavioural finance
and its role in technical analysis. Harriman House.
Bakewell, S. (2013). Clang went the trolley: ‘Would you kill the fat man?’ and ‘the trolley
problem.’ New York Times Book Review. Retrieved from
http://www.nytimes.com/2013/11/24/books/review/would-you-kill-the-fat-man-and-the-
trolley-problem.html
Baum, G. & Smith, G. (2015) Great companies: Looking for success secrets in all the wrong
places. Journal of Investing, Fall, 61-72. Available at: http://economics-
files.pomona.edu/GarySmith/SuccessSecrets.pdf
Bazerman, M. H. & Tenbrunsel, A. E. (2011a). Ethical breakdowns. Harvard Business Review,
April. Retrieved from https://hbr.org/2011/04/ethical-breakdowns
Bazerman, M. H. & Tenbrunsel, A. E. (2011b). Blind spots: Why we fail to do what's right and
what to do about it. Princeton University Press.
Beard, C., & Peckham, A. D. (2020). Interpretation bias modification. In J. S. Abramowitz & S.
M. Blakey (Eds.), Clinical handbook of fear and anxiety: Maintenance processes and
treatment mechanisms (pp. 359–377). American Psychological
Association. https://doi.org/10.1037/0000150-020
Benson, B. (2022). Cognitive bias cheat sheet. Better Humans. Retrieved from
https://betterhumans.pub/cognitive-bias-cheat-sheet-55a472476b18
Boyd, D. (2015, August 30). Innovators beware the hindsight bias. Psychology Today. Retrieved
from https://www.psychologytoday.com/blog/inside-the-box/201508/innovators-beware-
the-hindsight-bias
Brown, M. J., Henriquez, E., & Groscup, J. (2008). The effects of eyeglasses and race on juror
decisions involving a violent crime. American Journal of Forensic Psychology, 26(2),
25–43.
Buxbaum, R. E. (2013). The scientific method isn’t the method of scientists. Rebresearch.com.
Blog. Retrieved from http://www.rebresearch.com/blog/the-scientific-method-isnt-the-
method-of-scientists/
Caputo, A. (2013). A literature review of cognitive biases in negotiation processes. International
Journal of Conflict Management, 24(4), 374-398.
Cassidy J. (2013, September 11). The saliency bias and 9/11: Is America recovering? New
Yorker. Retrieved from http://www.newyorker.com/news/john-cassidy/the-saliency-bias-
and-911-is-america-recovering
Chen, D., Moskowitz, T. J. & Shue, K. (2016). Decision-making under the gambler’s fallacy:
Evidence from asylum judges, loan officers, and baseball umpires. Quarterly Journal of
Economics, 131(3), March, 1-60. DOI: 10.1093/qje/qjw017
Chery, K. (2015, November 9). What is the bandwagon effect? Verywell.com. Retrieved from
https://www.verywell.com/what-is-the-bandwagon-effect-2795895
Chery, K. (2016). What is a cognitive bias: Definition and examples. Verywell.com. Retrieved
from https://www.verywell.com/what-is-a-cognitive-bias-2794963
Chery, K. (2017). What is the actor-observer bias? Verywell.com. Retrieved from
https://www.verywell.com/what-is-the-actor-observer-bias-2794813
Collier, C. (2021, January 27). This is how moral licensing hurts diversity and inclusion in the
company. Retrieved from https://drcherrycoaching.com/this-is-how-moral-licensing-
hurts-diversity-and-inclusion-in-the-company/
Danvers, A. (2020, December 30). Dunning-Kruger isn’t real. Psychology Today. Retrieved
from https://www.psychologytoday.com/us/blog/how-do-you-know/202012/dunning-
kruger-isnt-real
Desjardins, J. (2021, August 26). Every single cognitive bias in one infographic. Visual
Capitalist. Retrieved from https://www.visualcapitalist.com/every-single-cognitive-bias/
Dror, I. E., McCormack, B. M. & Epstein, J. (2015). Cognitive bias and its impact on expert
witnesses and the court. Judges’ Journal, 54(4), Retrieved from
http://www.americanbar.org/publications/judges_journal/2015/fall/cognitive_bias_and_it
s_impact_on_expert_witnesses_and_the_court.html#6
Decision Lab (2023). Why do we seek certainty in risky situations? DecisionLab.com. Retrieved
from https://thedecisionlab.com/biases/zero-risk-bias
Eckel, B. (2013, November). Fake science. Reinventing Business. Retrieved from
http://www.reinventing-business.com/2013/10/fake-science.html
Evatt, C. (2010). Brain biases. Retrieved from
http://brainshortcuts.blogspot.com/2010/11/optimism-bias.html
Eveleth, R. (2012, July 31). Why experts are almost always wrong. Smithsonian.com. Retrieved
from http://www.smithsonianmag.com/smart-news/why-experts-are-almost-always-
wrong-9997024/
Fitza, M. A. (2013). The use of variance decomposition in the investigation of CEO effects: How
large must the CEO effect be to rule out chance? Strategic Management Journal, 35(12),
December, 1839-1852.
Flynn, S. (2013, May 8). Behavioural economics: part three – understanding purchasing pains.
PowerRetail. Retrieved from: http://www.powerretail.com.au/marketing/behavioural-
economics-part-three-understanding-purchasing-pains/
Flyvbjerg, B. (2021). Top ten behavioral biases in project management: An overview. Project
Management Journal, 52(6), 531– 546. https://doi.org/10.1177/87569728211049046
Frederick, S., Loewenstein, G. & O’Donoghue, T. (2002). Time discounting and time preference:
A critical review. Journal of Economic Literature, 40, June, 351-401.
Friedman, H. H. & Friedman, L. W. (2009, May 3). Bigotry in academe: Disciplinary elitism.
SSRN.com. Retrieved from
SSRN: https://ssrn.com/abstract=1398505 or http://dx.doi.org/10.2139/ssrn.1398505
Galton, F. (1886). Regression towards mediocrity in hereditary stature. Journal of the
Anthropological Institute, 15, 246-263.
Gigerenzer, G. (2018). The bias bias in behavioral economics. Review of Behavioral Economics,
5(3-4), 303–336. https://doi.org/10.1561/105.00000092
Gignac, G. E. & Jankowski, M. (2020). The Dunning-Kruger effect is (mostly) a statistical
artefact: Valid approaches to testing the hypothesis with individual differences data.
Intelligence, 80, May-June, 101449. https://doi.org/10.1016/j.intell.2020.101449
Goldacre, B. (2011). The dangers of cherry-picking evidence. Guardian. Retrieved from
https://www.theguardian.com/commentisfree/2011/sep/23/bad-science-ben-goldacre
Gorman, S. E. & Gorman, J. M. (2017). Denying to the grave: Why we ignore the facts that will
save us. Oxford University Press.
Heike, T. (2019, July 3). The cognitive bias codex. Teach Thought. Retrieved from
https://www.teachthought.com/critical-thinking/cognitive-biases/
Heine S. J., Lehman D. R., Markus H. R., Katayama S. (1999). Is there a universal need for
positive self-regard? Psychological Review, 106, 766–794.
Heshmat, S. (2015, April 23). What is confirmation bias? Psychology Today. Retrieved from
https://www.psychologytoday.com/blog/science-choice/201504/what-is-confirmation-
bias
Heshmat, S. (2018, March 8). What is loss aversion? Psychology Today. Retrieved from
https://www.psychologytoday.com/us/blog/science-choice/201803/what-is-loss-aversion
Heuer, Jr., R. J. (2008). Psychology of intelligence analysis. CIA's Center for the Study of
Intelligence. Available at https://www.cia.gov/library/center-for-the-study-of-
intelligence/csi-publications/books-and-monographs/psychology-of-intelligence-
analysis/art3.html
Hoomans, J. (2015, March 20). 35,000 decisions: The great choices of strategic leaders. Leading
Edge Journal. Retrieved from http://go.roberts.edu/leadingedge/the-great-choices-of-
strategic-leaders
Hutson, M. (2014). It pays to be overconfident, even when you have no idea what you’re doing.
New York Magazine. Retrieved from http://nymag.com/scienceofus/2014/05/pays-to-be-
overconfident.html
Ignatius, D. (2009, February 8). The death of rational man. Washington Post. Retrieved from
http://articles.washingtonpost.com/2009-02-08/opinions/36876289_1_nouriel-roubini-
behavioral-economics-irrational-psychological-factors
Joint Commission (2016). Cognitive biases in health care. Quick Safety, Issue 28, October.
Retrieved from
https://www.jointcommission.org/assets/1/23/Quick_Safety_Issue_28_Oct_2016.pdf
Kahneman, D. (2012). The human side of decision making: Thinking things through with Daniel
Kahneman. Journal of Investment Consulting, 13(1), 5-14.
Kahneman, D. (2011). Thinking fast and slow. Farrar, Straus and Giroux.
Kahneman, D., & Tversky, A. (1982). The psychology of preference. Scientific American, 246,
160-173.
Kahneman, D., & Tversky, A. (1984). Choices, values, and frames. American Psychologist,
39(4), 341-350. doi:10.1037/0003-066x.39.4.341
Kahneman, D., Knetsch, J. L. & Thaler, R. H. (1990). Experimental tests of the endowment
effect and the Coase theorem. Journal of Political Economy, 98(6), December, 1325-48.
Kasanoff, B. (2017, March 29). 175 reasons why you don’t think clearly. Forbes. Retrieved from
https://www.forbes.com/sites/brucekasanoff/2017/03/29/sorry-you-cant-make-a-logical-
data-driven-decision-without-intuition/#1e6bbf847f60
Kim, M.Y., Han, K. (2023). For me or for others? The better-than-average effect and negative
feelings toward average others during the COVID-19 pandemic. Current Psychology, 42,
13173–13181. https://doi.org/10.1007/s12144-021-02548-z
Kinari, Y., Ohtake, F. & Tsutsui, Y. (2009). Time discounting: Declining impatience and interval
effect. Journal of Risk and Uncertainty, 39(1), 87-112. doi:10.1007/s11166-009-9073-1
Kolbert, E. (2017, February 27). Why facts don’t change our minds. New Yorker. Retrieved from
http://www.newyorker.com/magazine/2017/02/27/why-facts-dont-change-our-minds
Kruger, J. & Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing
one's own incompetence lead to inflated self-assessments. Journal of Personality and
Social Psychology, 77 (6), 1121–34. doi: 10.1037/0022-3514.77.6.1121
Kuran, T. & Sunstein, C. (2007). Availability cascades and risk regulation. University of
Chicago Public Law & Legal Theory Working Paper, No. 181. Retrieved from
http://chicagounbound.uchicago.edu/cgi/viewcontent.cgi?article=1036&context=public_l
aw_and_legal_theory
Lamba S. & Nityananda, V (2014). Self-deceived individuals are better at deceiving others.
PLoS ONE, August, 9(8): e104562. doi:10.1371/journal.pone.0104562
Lee, S. & Feeley, T. H. (2016). The identifiable victim effect: A meta-analytic review. Social
Influence, 11(3), 199-215. http://dx.doi.org/10.1080/15534510.2016.1216891
Linville, P. W., Fischer, G. W., & Fischhoff, B. (1992). AIDS risk perceptions and decision
biases, in J. B. Pryor and G. D. Reeder (Eds.), The social psychology of HIV
infection. Hillsdale, NJ: Erlbaum.
Loewenstein, G., O’Donoghue, T., & Rabin, M. (2003). Projection bias in predicting future
utility. Quarterly Journal of Economics, 118 (4), 1209–1248.
Marcus, G. (2008). Kluge: The haphazard evolution of the human mind. New York: Houghton
Mifflin Company.
Matzke, D., Nieuwenhuis, S., van Rijn, H., Slagter, H. A, van der Molen, M. W. &
Wagenmakers, E. J. (2013). Two birds with one stone: A preregistered adversarial
collaboration on horizontal eye movements in free recall. Retrieved from http://dora.erbe-
matzke.com/papers/DMatzke_EyeMovements.pdf
McCann, D. (2014, May 22). 10 cognitive biases that can trip up finance. CFO. Retrieved from
http://ww2.cfo.com/forecasting/2014/05/10-cognitive-biases-can-trip-finance/
McNeil, B. J., Pauker, S. G., & Tversky, A. (1988). On the framing of medical decisions, in D.
E. Bell, H. Raiffa, and A. Tversky (Eds.), Decision making: Descriptive, normative, and
prescriptive interactions. Cambridge, England: Cambridge University Press.
Meaning Ring (2016, March 28). Why you’ll soon be playing mega trillions.
Retrieved from http://meaningring.com/2016/03/28/neglect-of-probability-by-rolf-
dobelli/
Mercier, H. & Sperber, D. (2017). The enigma of reason. Harvard University Press.
Mission Command (2015, January 9). Cognitive biases and decision making: A literature
review and discussion of implications for the US army. White Paper. Mission Command
Center of Excellence. Retrieved from
http://usacac.army.mil/sites/default/files/publications/HDCDTF_WhitePaper_Cognitive%
20Biases%20and%20Decision%20Making_Final_2015_01_09_0.pdf
Moore, C. (2019, December 30). What is negativity bias and how can it be overcome? Positive
Psychology. Retrieved from https://positivepsychology.com/3-steps-negativity-bias/
Morton, V. & Torgerson, D. J. (2003, May 17). Effect of regression to the mean on decision
making in health care. BMJ, 326(7398), 1083-1084. doi: 10.1136/bmj.326.7398.1083
Retrieved from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1125994/
National Institute on Drug Abuse (2017, January). Is marijuana a gateway drug? Retrieved from
https://www.drugabuse.gov/publications/research-reports/marijuana/marijuana-gateway-
drug
Noll, J. & Sharma, S. (2014). Qualitative meta-analysis on the hospital task: Implications for
research. Journal of Statistics Education, 22(2). Available at
www.amstat.org/publications/jse/v22n2/noll.pdf
Norton, M. I., Mochon, D. & Ariely, D. (2012). The IKEA effect: When labor leads to love.
Journal of Consumer Psychology, 22(3), July, 453–460.
Obermaier, M., Koch, T. & Baden, C. (2015). Everybody follows the crowd? Effects of opinion
polls and past election results on electoral preferences. Journal of Media Psychology,
DOI: http://dx.doi.org/10.1027/1864-1105/a000160
Patel, N. (2015, May 18). 5 psychological hacks that will make your pricing page irresistible.
Marketing Land. Retrieved from http://marketingland.com/5-psychological-hacks-will-
make-pricing-page-irresistible-121535
Peters, T. & Waterman, R. H. (1982). In search of excellence. New York: Harper & Row.
Pike, B., Curtis, M. B. & Chui, L. (2013). How does an initial expectation bias influence
auditors' application and performance of analytical procedures? Accounting Review, July,
88(4), 1413-1431.
Pollitt, M. G. & Shaorshadze, I. (2011). The role of behavioural economics in energy and climate
policy. Cambridge Working Papers in Economics (CWPE) No. 1165. University of
Cambridge. Retrieved from http://www.econ.cam.ac.uk/dae/repec/cam/pdf/cwpe1165.p
Poundstone, W. (2017, January 21). The Dunning-Kruger president. Psychology Today.
Retrieved from https://www.psychologytoday.com/blog/head-in-the-cloud/201701/the-
dunning-kruger-president
Redstone, I. (2021, January 11). Splitting: The psychology behind binary thinking and how it
limits a diversity of opinions. Forbes. Retrieved from
https://www.forbes.com/sites/ilanaredstone/2021/01/11/splitting-the-psychology-behind-
binary-thinking-and-how-it-limits-a-diversity-of-opinions/
Reo, S. (2015, June 8). Researchers find everyone has a bias blind spot. Carnegie Mellon
University News. Retrieved from
https://www.cmu.edu/news/stories/archives/2015/june/bias-blind-spot.html
Sherman, M. (2014, June 20). Why we don’t give each other a break. Psychology Today.
Retrieved from https://www.psychologytoday.com/blog/real-men-dont-write-
blogs/201406/why-we-dont-give-each-other-break
Shermer, M. (2014, September 1). How the survivor bias distorts reality. Scientific American.
Retrieved from https://www.scientificamerican.com/article/how-the-survivor-bias-
distorts-reality/. doi:10.1038/scientificamerican0914-94
Simbrunner, P. & Schlegelmilch, B.B. (2017). Moral licensing: A culture-moderated meta-
analysis. Management Review Quarterly, 67, 201–225. https://doi.org/10.1007/s11301-
017-0128-0
Sloman, S. & Fernbach, P. (2017). The knowledge illusion: Why we never think alone. New
York: Riverhead Books.
Smith, G. (2016, October 12). The Sports Illustrated cover jinx: Is success a curse? Psychology
Today. Retrieved from https://www.psychologytoday.com/blog/what-the-
luck/201610/the-sports-illustrated-cover-jinx
Smith, G. (2014). Standard deviations: Flawed assumptions, tortured data, and other ways to lie
with statistics. New York: Overlook Press.
Smith, J. (2015). 67 ways to increase conversion with cognitive biases. Neuromarketing.
Retrieved from http://www.neurosciencemarketing.com/blog/articles/cognitive-biases-
cro.htm#
Soll, J. B., Milkman, K. L. & Payne, J. W. (2015). Outsmart your own biases. Harvard Business
Review, May, Retrieved from https://hbr.org/2015/05/outsmart-your-own-biases
Taylor, J. (2013, May 20). Cognitive biases are bad for business. Psychology Today. Retrieved
from https://www.psychologytoday.com/blog/the-power-prime/201305/cognitive-biases-
are-bad-business
Tetlock, P. (2005). Expert political judgment: How good is it? How can we know?
Princeton, New Jersey: Princeton University Press.
Thaler, R. H. & Mullainathan, S. (2008). How behavioral economics differs from traditional
economics. The Concise Encyclopedia of Economics. Retrieved from
http://www.econlib.org/library/Enc/BehavioralEconomics.html
Thaler, R. H. & Sunstein, C. R. (2008). Nudge. New Haven, CT: Yale University Press.
Thompson, D. (2013, January 16). The irrational consumer: Why economics is dead wrong about
how we make choices. Atlantic.com. Retrieved from
http://www.theatlantic.com/business/archive/2013/01/the-irrational-consumer-why-
economics-is-dead-wrong-about-how-we-make-choices/267255/
Tversky, A. & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science,
185(4157), 1124-1131.
Tversky, A. & Kahneman, D. (1981). The framing of decisions and the psychology of
choice. Science, 211, 453–458.
Tversky, A. & Kahneman, D. (1983). Extensional versus intuitive reasoning: The conjunction
fallacy in probability judgment. Psychological Review, 90 (4), October, 293–315.
doi:10.1037/0033-295X.90.4.293.
Villines, Z. (2022, October 20). What is splitting in borderline personality disorder (BPD)?
Medical News Today. Retrieved from https://www.medicalnewstoday.com/articles/bpd-
splitting
von Restorff, H. (1933). Über die Wirkung von Bereichsbildungen im Spurenfeld [On the effects
of field formation in the trace field]. Psychologische Forschung, 18, 299-342.
doi:10.1007/BF02409636
Wadley, J. (2012, September 20). New study analyzes why people are resistant to correcting
misinformation, offers solutions. Michigan News. Retrieved from
http://ns.umich.edu/new/releases/20768-new-study-analyzes-why-people-are-resistant-to-
correcting-misinformation-offers-solutions
Wang, X., Zheng, L., Cheng, X., Li, L., Sun, L., Wang, Q. & Guo, X. (2015). Actor-recipient
role affects neural responses to self in emotional situations. Frontiers in Behavioral
Neuroscience, 9:83. Published online 2015 Apr 15. doi: 10.3389/fnbeh.2015.00083
Retrieved from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4397920/
Whitbourne, S. K. (2012, September 8). Happiness: It’s all about the ending. Psychology Today.
Retrieved from https://www.psychologytoday.com/blog/fulfillment-any-
age/201209/happiness-it-s-all-about-the-ending
WNYC (2015, May 18). The campaign ad that reshaped criminal justice. WNYC Studios.
Retrieved from https://www.wnycstudios.org/podcasts/takeaway/segments/crime-
reshaped-criminal-justice
Zeigarnik, B. (1927). Über das Behalten von erledigten und unerledigten Handlungen [On the
retention of completed and uncompleted actions]. Psychologische Forschung, 9, 1-85.
Zur, O. (1991). The love of hating: The psychology of enmity. History of European Ideas, 13(4),
345-369.