Top Ten Behavioral Biases in Project
Management: An Overview
Bent Flyvbjerg
Behavioral science has witnessed an explosion in the number of biases identified by behavioral scientists, to more than 200 at present. This article identifies the 10 most important behavioral biases for project management. First, we argue it is a mistake to equate behavioral bias with cognitive bias, as is common. Cognitive bias is half the story; political bias the other half. Second, we list the top 10 behavioral biases in project management: (1) strategic misrepresentation, (2) optimism bias, (3) uniqueness bias, (4) the planning fallacy, (5) overconfidence bias, (6) hindsight bias, (7) availability bias, (8) the base rate fallacy, (9) anchoring, and (10) escalation of commitment. Each bias is defined, and its impacts on project management are explained, with examples. Third, base rate neglect is identified as a primary reason that projects underperform. This is supported by presentation of the most comprehensive set of base rates that exist in project management scholarship, from 2,062 projects. Finally, recent findings of power law outcomes in project performance are identified as a possible first stage in discovering a general theory of project management, with more fundamental and more scientific explanations of project outcomes than found in conventional theory.
Keywords: behavioral economics, project management, cognitive bias, political bias, strategic misrepresentation, optimism bias, uniqueness bias, planning fallacy, overconfidence bias, hindsight bias, availability bias, base rate fallacy, anchoring, escalation of commitment
Since the early work of Tversky and Kahneman (1974), the number of biases identified by behavioral scientists has exploded in what has been termed a behavioral revolution in economics, management, and across the social and human sciences. Today, Wikipedia's list of cognitive biases contains more than 200 items ("List of cognitive biases," 2021). The present article gives an overview of the most important behavioral biases in project planning and management, summarized in Table 1. They are the biases most likely to trip up project planners and managers and negatively impact project outcomes, if the biases are not identified and dealt with up front and during delivery.
Many would agree with Kahneman (2011, p. 255) that optimism bias "may well be the most significant of the cognitive biases." However, behavioral biases are not limited to cognitive biases, though behavioral scientists, and especially behavioral economists, often seem to think so. For instance, in his history of behavioral economics, Nobel laureate Richard Thaler (2015, p. 261) defines what he calls "the real point of behavioral economics" as to "highlight behaviors that are in conflict with the standard rational model." But nothing in this definition limits the object of behavioral economics to cognitive bias. Other types of bias, for example, political bias, also conflict with the standard rational model, although you would never know this from reading Thaler's (2015) history of the field. Thaler (2015, p. 357) speaks of the unrealism of "hyperrational models," and we agree. But behavioral economics itself suffers from such unrealism, because it ignores that many behavioral phenomena are better explained by political bias than by cognitive bias.

In short, behavioral economics in its present form suffers from an overfocus on cognitive psychology: Economic decisions get overaccounted for in psychological terms, when other perspectives (for instance, political, sociological, and organizational) may be more pertinent. If all you have is a hammer, everything looks like a nail. Similarly, if all you have is psychology, everything gets diagnosed as a psychological problem, even when it is not. Behavioral economics suffers from a "psychology bias," in this sense. Cognitive bias is only half the story in behavioral science. Political bias is the other half.
University of Oxford, Oxford, UK
IT University of Copenhagen, Copenhagen, Denmark
Corresponding Author:
Bent Flyvbjerg.
Project Management Journal
2021, Vol. 52(6) 531–546
© 2021 Project Management Institute, Inc.
Article reuse guidelines:
DOI: 10.1177/87569728211049046
Political bias, understood as deliberate strategic distortions, arises from power relations, instead of from cognition, and has long been the object of study in political economy. Political bias is particularly important for big, consequential decisions and projects, which are often subject to high political-organizational pressures. In fact, for very large projects (so-called megaprojects), the most significant behavioral bias is arguably political bias, more specifically, strategic misrepresentation (Flyvbjerg et al., 2002; Flyvbjerg et al., 2018; Wachs, 2013). Cognitive bias may account well for outcomes in the simple lab experiments done by behavioral scientists. But for real-world decision-making in big hierarchical organizations, involving office politics, salesmanship, and jockeying for position and funds, including in the C-suite and ministerial offices, with millions and sometimes billions of dollars at stake, political bias is pervasive and must be taken into account. Or so I argue.
It should be emphasized again that many other behavioral biases relevant to project planning and management exist than those mentioned in Table 1, for example, illusion of control, conservatism bias, normalcy bias, recency bias, probability neglect, the cost-benefit fallacy, the ostrich effect, and more. But the 10 mentioned here may be considered the most important, and, in this sense, they are deemed to be the most common biases with the most direct impact on project outcomes.
Discussions With Kahneman
My first opportunity to reflect systematically on the relationship between political and cognitive bias was an invitation in 2003 from the editor of Harvard Business Review (HBR) to comment on an article by Lovallo and Kahneman (2003). The year before, Kahneman had won the Nobel Prize in Economics for his path-breaking work with Amos Tversky (who died in 1996) on heuristics and biases in decision-making, including optimism bias, which was the topic of the HBR article. The editor explained to me that he saw Kahneman and me as explaining the same phenomena (cost overruns, delays, and benefit shortfalls in investment decisions) but with fundamentally different theories. As a psychologist, Kahneman explained outcomes in terms of cognitive bias, especially optimism bias and the planning fallacy. As an economic geographer, I explained the same phenomena in terms of political-economic bias, specifically strategic misrepresentation. So which of the two theories is right, asked the HBR editor.
The editor's question resulted in a spirited debate in the pages of HBR. I commented on the article by Kahneman and Lovallo (2003) that they

"underrate one source of bias in forecasting, the deliberate 'cooking' of forecasts to get ventures started. My colleagues and I call this the Machiavelli factor. The authors [Kahneman and Lovallo] mention the organizational pressures forecasters face to exaggerate potential business results. But adjusting forecasts because of such pressures can hardly be called optimism or a fallacy; deliberate deception is a more accurate term. Consequently, Lovallo and Kahneman's analysis of the planning fallacy seems valid mainly when political pressures are insignificant. When organizational pressures are significant, both the causes and cures for rosy forecasts will be different from those described by the authors" (Flyvbjerg, 2003, p. 121).
Kahneman and Lovallo (2003, p. 122) responded:

Flyvbjerg and his colleagues reject optimism as a primary cause of cost overruns because of the consistency of the overruns over a significant time period. They assume that people,
Table 1. Top 10 Behavioral Biases in Project Planning and Management

Name of Bias | Description
1. Strategic misrepresentation | The tendency to deliberately and systematically distort or misstate information for strategic purposes. Aka political bias, strategic bias, or power bias.
2. Optimism bias | The tendency to be overly optimistic about the outcome of planned actions, including overestimation of the frequency and size of positive events and underestimation of the frequency and size of negative ones.
3. Uniqueness bias | The tendency to see one's project as more singular than it actually is.
4. Planning fallacy (writ large) | The tendency to underestimate costs, schedule, and risk and overestimate benefits and opportunities.
5. Overconfidence bias | The tendency to have excessive confidence in one's own answers to questions.
6. Hindsight bias | The tendency to see past events as being predictable at the time those events happened. Also known as the "I-knew-it-all-along" effect.
7. Availability bias | The tendency to overestimate the likelihood of events with greater ease of retrieval (availability) in memory.
8. Base rate fallacy | The tendency to ignore generic base rate information and focus on specific information pertaining to a certain case or small sample.
9. Anchoring | The tendency to rely too heavily, or "anchor," on one trait or piece of information when making decisions, typically the first piece of information acquired on the relevant subject.
10. Escalation of commitment | The tendency to justify increased investment in a decision, based on the cumulative prior investment, despite new evidence suggesting the decision may be wrong. Also known as the sunk cost fallacy.
particularly experts, should learn not only from their mistakes but also from others' mistakes. This assumption can be challenged on a number of grounds.
Ultimately, the HBR debate did not so much resolve the question as clarify it and demonstrate its relevance. Kahneman and I therefore continued the discussion offline. Others have commented on Kahneman's generosity in academic interactions. He invited me to visit him at home, first in Paris and later in New York, to develop the thinking on political and cognitive bias and how they may be interrelated. He was more rigorous than anyone I'd discussed bias with before, and I found the discussions highly productive.

In addition to being generous, Kahneman is deeply curious and empirical. Based on our discussions, he decided he wanted to investigate political bias firsthand and asked if I could arrange for him to meet people exposed to such bias. I facilitated an interview with senior officials I knew at the Regional Plan Association of the New York-New Jersey-Connecticut metropolitan (tristate) area, with offices near Kahneman's home in New York. Their work includes forecasting and decision-making for major infrastructure investments in the tristate region, which are among the largest, most expensive, and most complex in the world. They were the types of projects I studied to develop my theories of strategic misrepresentation. Decision-making on such projects is a far cry from the lab experiments used by Kahneman and other behavioral scientists to document classic cognitive biases like loss aversion, anchoring, optimism, and the planning fallacy.
When Kahneman and I compared notes again, we agreed the balanced position regarding real-world decision-making is that both cognitive and political biases influence outcomes. Sometimes one dominates, sometimes the other, depending on what the stakes are and the degree of political-organizational pressures on individuals. If the stakes are low and political-organizational pressures are absent, which is typical for lab experiments in behavioral science, then cognitive bias will dominate, and such bias will be what you find. But if the stakes and pressures are high (for instance, when deciding whether to spend billions of dollars on a new subway line in Manhattan), political bias and strategic misrepresentation are likely to dominate and will be what you uncover, together with cognitive bias, which is hardwired and therefore present in most, if not all, situations.
Imagine a scale for measuring political-organizational pressures, from weak to strong. At the lower end of the scale, one would expect optimism bias to have more explanatory power of outcomes relative to strategic misrepresentation. But with more political-organizational pressures, outcomes would increasingly be explained in terms of strategic misrepresentation. Optimism bias would not be absent when political-organizational pressures increase, but optimism bias would be supplemented and reinforced by bias caused by strategic misrepresentation. Finally, at the upper end of the scale, with strong political-organizational pressures (for example, the situation where a chief executive officer or minister must have a certain project), one would expect strategic misrepresentation to have more explanatory power relative to optimism bias, again without optimism bias being absent. Big projects, whether in business or government, are typically at the upper end of the scale, with high political-organizational pressures and strategic misrepresentation. The typical project in the typical organization is somewhere in the middle of the scale, exposed to a mix of strategic misrepresentation and optimism bias, where it is not always clear which one is stronger.
The discussions with Kahneman taught me that although I had fully acknowledged the existence of cognitive bias in my initial work on bias (Flyvbjerg et al., 2002), I needed to emphasize cognition more to get the balance right between political and psychological biases in real-life decision-making. This was the object of later publications (Flyvbjerg, 2006; Flyvbjerg, 2013; Flyvbjerg et al., 2004; Flyvbjerg et al., 2009; Flyvbjerg et al., 2016). More importantly, however, in our discussions and in a relatively obscure article by Kahneman and Tversky (1979a), I found an idea for how to eliminate or reduce both cognitive and political biases in decision-making. I developed this into a practical tool called "reference class forecasting" (Flyvbjerg, 2006). In Thinking, Fast and Slow, Kahneman (2011, p. 251) was kind enough to endorse the method as an effective tool for bringing the outside view to bear on projects in order to debias them.
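The core move in reference class forecasting is to replace the inside view (the project's own bottom-up estimate) with the empirical distribution of outcomes in a class of comparable past projects. The following is a minimal sketch, not the official procedure; the reference class ratios and the nearest-rank percentile rule are illustrative assumptions:

```python
def rcf_uplift(ae_ratios, acceptable_overrun_risk=0.2):
    """Uplift factor for a base estimate, chosen so that the chance of
    still overrunning the uplifted budget is at most the given risk.
    Uses the nearest-rank empirical percentile of historical A/E ratios."""
    ratios = sorted(ae_ratios)
    idx = min(len(ratios) - 1, int((1 - acceptable_overrun_risk) * len(ratios)))
    return ratios[idx]

# Hypothetical reference class: actual cost divided by estimated cost (A/E)
reference_class = [0.9, 1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.7, 2.0, 2.4]

uplift = rcf_uplift(reference_class, acceptable_overrun_risk=0.2)
debiased_budget = 100_000_000 * uplift  # apply to the inside-view estimate
```

With a 20% acceptable overrun risk, the sketch picks the 80th-percentile historical overrun (here 2.0) and doubles the base estimate; real implementations, such as the uplift tables adopted in UK appraisal guidance, derive the percentile curves from published base-rate data rather than a toy list.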
Finally, it has been encouraging to see Kahneman begin to mention political bias in his writings, including in his seminal book, Thinking, Fast and Slow, where he explicitly points out:

"Errors in the initial budget are not always innocent. The authors of unrealistic plans are often driven by the desire to get the plan approved, whether by their superiors or by a client, supported by the knowledge that projects are rarely abandoned unfinished merely because of overruns in costs or completion times" (Kahneman, 2011, pp. 250–251).

That is clearly not a description of cognitive bias, which is innocent per definition, but of political bias, specifically strategic misrepresentation aimed at getting projects underway. As such, it contrasts with other behavioral economists, for instance, Thaler (2015), who leaves political bias unmentioned in his best-selling history of behavioral economics.
Most likely, none of the above would have happened without the HBR editor's simple question: "Strategic misrepresentation or optimism bias, which is it?" The discussions with Kahneman proved the answer to be: Both.

We use this insight below to describe the most important behavioral biases in project planning and management, starting with strategic misrepresentation, followed by optimism bias and eight other biases.
Strategic Misrepresentation

Strategic misrepresentation is the tendency to deliberately and systematically distort or misstate information for strategic purposes (Jones & Euske, 1991; Steinel & De Dreu, 2004). This bias is sometimes also called political bias, strategic bias, power bias, or the Machiavelli factor (Guinote & Vescio, 2010). The bias is a rationalization where the ends justify the means. The strategy (e.g., achieve funding) dictates the bias (e.g., make projects look good on paper). Strategic misrepresentation can be traced to agency problems and political-organizational pressures, for instance, competition for scarce funds or jockeying for position. Strategic misrepresentation is deliberate deception, and as such, it is lying, per definition (Bok, 1999; Carson, 2006; Fallis, 2009).
Here, a senior Big-Four consultant explains how strategic misrepresentation works in practice:

"In the early days of building my transport economics and policy group at [name of company omitted], I carried out a lot of feasibility studies in a subcontractor role to engineers. In virtually all cases it was clear that the engineers simply wanted to justify the project and were looking to the traffic forecasts to help in the process ... I once asked an engineer why their cost estimates were invariably underestimated and he simply answered 'if we gave the true expected outcome costs nothing would be built'" (personal communication, author's archives, italics added).
Signature architecture is notorious for large cost overruns. A leading signature architect, France's Jean Nouvel, winner of the Pritzker Architecture Prize, explains how it works:

"I don't know of buildings that cost less when they were completed than they did at the outset. In France, there is often a theoretical budget that is given because it is the sum that politically has been released to do something. In three out of four cases this sum does not correspond to anything in technical terms. This is a budget that was made because it could be accepted politically. The real price comes later. The politicians make the real price public where they want and when they want" (Nouvel, 2009, p. 4, italics added).
This is strategic misrepresentation. Following its playbook, a strategic cost or schedule estimate will be low, because it is more easily accepted, leading to cost and schedule overruns later. Similarly, a strategic benefit estimate will be high, leading to benefit shortfalls. Strategic misrepresentation therefore produces a systematic bias in outcomes. And this is precisely what the data show (see Table 2). We see the theory of strategic misrepresentation fits the data well. Explanations of project outcomes in terms of strategic misrepresentation have been set forth by Wachs (1989, 1990, 2013), Kain (1990), Pickrell (1992), Flyvbjerg et al. (2002, 2004, 2005, 2009), and Feynman (2007a, 2007b), among others.
Strategic misrepresentation will be particularly strong where political-organizational pressures are high, as argued above, and such pressures are especially high for big, strategic projects. The bigger and more expensive the project, the more strategic import it is likely to have, with more attention from top management and with more opportunities for political-organizational pressures to develop, other things being equal. For project planning and management, the following propositions apply:

Proposition 1: For small projects, with low strategic import and no attention from top management, bias, if present, is likely to originate mainly with cognitive bias, for example, optimism bias.

Proposition 2: For big projects, with high strategic import and ample attention from top management, bias, if present, is likely to originate mainly with political bias, for example, strategic misrepresentation, although cognitive bias is also likely to be present.

Strategic misrepresentation has proved especially important in explaining megaproject outcomes. For megaproject management, strategic misrepresentation may be expected to be the dominant bias (Flyvbjerg, 2014).
Professor Martin Wachs of UC Berkeley and UCLA, who pioneered research on strategic misrepresentation in transportation infrastructure forecasting, recently looked back at more than 25 years of scholarship in the area. After carefully weighing the evidence for and against different types of explanations of forecasting inaccuracy, Wachs summarized his findings in the following manner:

"While some scholars believe this [misleading forecasting] is a simple technical matter involving the tools and techniques of cost estimation and patronage forecasting, there is growing evidence that the gaps between forecasts and outcomes are the results of deliberate misrepresentation and thus amount to a collective failure of professional ethics ... Often firms making the forecasts stand to benefit if a decision is made to proceed with the project" (Wachs, 2013, p. 112).

Wachs found a general incentive to misrepresent forecasts for infrastructure projects and that this incentive drives forecasting outcomes. Wachs's review and the studies cited above falsify the notion that optimism and other cognitive biases may serve as a stand-alone explanation of cost underestimation and benefit overestimation, which has been the common view in behavioral economics. Explanations in terms of cognitive bias are especially wanting in situations with high political and organizational pressures. In such situations, forecasters, planners, and decision makers intentionally use the following Machiavellian formula to make
their projects look good on paper, with a view to securing
their approval and funding:
Underestimated costs + Overestimated benefits = Approval and funding
Finally, recent research has found that political and cognitive biases do not merely compound each other in the manner described above. Experimental psychologists have shown that political bias directly amplifies cognitive bias in the sense that people who are powerful are affected more strongly by various cognitive biases (for example, availability bias and recency bias) than people who are not (Weick & Guinote, 2008). A heightened sense of power also increases "individuals' optimism in viewing risks and their propensity to engage in risky behavior" (Anderson & Galinsky, 2006, p. 529). This is because people in power tend to disregard the rigors of deliberate rationality, which are too slow and cumbersome for their purposes. They prefer, consciously or not, subjective experience and intuitive judgment as the basis for their decisions, as documented by Flyvbjerg (1998, p. 69 ff.), who found that people in power will deliberately exclude experts from meetings when much is at stake, in order to avoid clashes in high-level negotiations between people in power's intuitive decisions and experts' deliberative rationality. Guinote and Vescio (2010) similarly found that people in power rely on ease of retrieval more than people without power. In consequence, total bias (political plus cognitive) escalates, but not in a simple linear manner where total bias equals the sum of political and cognitive biases, but instead in a complex, convex way where political bias amplifies cognitive bias, leading to convex risk. This, undoubtedly, is one reason we find strong convexities in the planning and management of big projects. Decisions about big projects are typically made by highly powerful people, and such individuals are convexity generators, with political bias driving their cognitive biases, which are larger for powerful individuals than for nonpowerful ones.
Optimism Bias

Optimism bias is a cognitive bias, the tendency for individuals to be overly bullish about the outcomes of planned actions (Kahneman, 2011, p. 255). Sharot (2011, p. xv) calls it "one of the greatest deceptions of which the human mind is capable." Where strategic misrepresentation is deliberate, optimism bias is nondeliberate. In the grip of optimism, people, including experts, are unaware that they are optimistic. They make decisions based on an ideal vision of the future rather than on a rational weighing of realistic gains, losses, and probabilities. They overestimate benefits and underestimate costs. They involuntarily spin scenarios of success and overlook the potential for mistakes and miscalculations. As a result, plans are unlikely to deliver as expected in terms of benefits and costs.

Almost 100 years ago, when Geoffrey Faber founded what would become Faber & Faber, the renowned London publishing house, he was so certain of his project that he bet his mother's, his own, and a few friends' fortunes on it, concluding that "everybody would benefit" with "a substantial income" (Faber, 2019, p. 6, underline in the original). A year later, the new publishing house was in its first of several near-bankruptcies, and Faber wrote in his diary:

"I find it hard to justify my buoyant self-confidence of last year ... I ought, I think, to have foreseen trouble and gone more cautiously" (Faber, 2019, pp. 27–28).

That's optimism bias and what it does to individuals. Geoffrey Faber is not the only entrepreneur to have been tripped up like this. It's typical. What's less typical is that
Table 2. Base Rates for Cost and Benefit Overrun in 2,062 Capital Investment Projects Across Eight Types

Investment Type | Cost Overrun (A/E): n, Average, p* | Benefit Overrun (A/E): n, Average, p*
Dams            | 243, 1.96, < 0.0001               | 84, 0.89, < 0.0001
BRT**           | 6, 1.41, 0.031                    | 4, 0.42, 0.12
Rail            | 264, 1.40, < 0.0001               | 74, 0.66, < 0.0001
Tunnels         | 48, 1.36, < 0.0001                | 23, 0.81, 0.03
Power plants    | 100, 1.36, 0.0076                 | 23, 0.94, 0.11
Buildings       | 24, 1.36, 0.00087                 | 20, 0.99, 0.77
Bridges         | 49, 1.32, 0.00012                 | 26, 0.96, 0.099
Roads           | 869, 1.24, < 0.0001               | 532, 0.96, < 0.0001
Total           | 1,603, 1.39/1.43***, < 0.0001     | 786, 0.94/0.83***, < 0.0001

Note. Project planners and managers clearly do not get base rates right. The data show strong biases for (1) cost underestimation and overrun and (2) benefit overestimation and shortfall. Overrun is measured as actual divided by estimated costs and benefits (A/E), respectively, in real terms, baselined at the final investment decision. See Flyvbjerg (2016, pp. 181–182) for a description of the dataset used in the table.
* The p-value of a Wilcoxon test with null hypothesis that the distribution is symmetrically centered around 1.
** BRT: Bus rapid transit.
*** Weighted and unweighted average, respectively.
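The cost overrun totals in Table 2 can be reproduced from the per-type rows. A short Python check, using the numbers taken directly from the table's cost overrun columns:

```python
# (n projects, mean A/E cost overrun) per investment type, from Table 2
cost_overruns = {
    "Dams": (243, 1.96),
    "BRT": (6, 1.41),
    "Rail": (264, 1.40),
    "Tunnels": (48, 1.36),
    "Power plants": (100, 1.36),
    "Buildings": (24, 1.36),
    "Bridges": (49, 1.32),
    "Roads": (869, 1.24),
}

total_n = sum(n for n, _ in cost_overruns.values())                          # 1,603 projects
weighted = sum(n * r for n, r in cost_overruns.values()) / total_n           # weighted average, ~1.39
unweighted = sum(r for _, r in cost_overruns.values()) / len(cost_overruns)  # unweighted average, ~1.43
```

A planner debiasing, say, a road budget would multiply the inside-view estimate by the road base rate (1.24), or by a higher percentile of the road distribution if less overrun risk is acceptable, rather than trusting the estimate as given.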
Faber & Faber survived to tell the story. Most companies fail and are forgotten.

Optimism bias can be traced to cognitive biases, in other words, systematic deviations from rationality in the way the mind processes information (O'Sullivan, 2015; Sharot et al., 2007; Shepperd et al., 2002). These biases are thought to be ubiquitous. In project planning and management, an optimistic cost or schedule estimate will be low, leading to cost and schedule overruns. An optimistic benefit estimate will be high, leading to benefit shortfalls. Optimism therefore produces a systematic bias in project outcomes, which is what the data show (see Table 2). The theory of optimism bias thus fits the data well, which lends support to its validity.
Interestingly, however, when researchers ask forecasters about causes of inaccuracies in their forecasts, they do not state optimism bias as a main cause, whereas they do mention strategic misrepresentation and the usual suspects: scope changes, complexity, price changes, unexpected underground conditions, bad weather, and so on (Flyvbjerg et al., 2005, pp. 138–140). Psychologists would argue this is because optimism bias is a true cognitive bias. As such, it is unreflected by forecasters, including when they participate in surveys of stated causes of forecasting inaccuracy, which is why such surveys cannot be trusted. Psychologists would further argue there is a large body of experimental evidence for the existence of optimism bias (Buehler et al., 1994, 1997; Newby-Clark et al., 2000). However, the experimental data are mostly from simple laboratory experiments with students. This is a problem, because it is an open question to what extent the results apply outside the laboratory, in real-life situations like project planning and management.
Optimism bias can be both a blessing and a curse. Optimism and a "can-do" attitude are obviously necessary to get projects done. Kahneman (2011, p. 255) calls optimism "the engine of capitalism." I would go further and call it the engine of life. But optimism can seriously trip us up if we are unaware of its pitfalls and therefore take on risks we would have avoided had we known the real, nonoptimistic odds. This has been known and reflected upon since at least the ancient Greeks. More than two millennia ago, the Greek historian Thucydides (2009, p. 220) said about the Athenians that they expected "no reverses" to their current good fortune; in other words, they were optimistic, specifically overconfident, and this caused the fall of Athens in the Peloponnesian War, according to Thucydides.
No upside can compensate for the ultimate downside: death. This is a fundamental asymmetry between upside and downside in human existence and is probably why humans are predisposed to loss aversion, as documented by prospect theory (Kahneman & Tversky, 1979b). Quite simply, it is rational in evolutionary terms to be more concerned about downside than upside. "Death" does not have to be of an individual, needless to say. It can be of a nation, a city, a business, or a project.
In my research, I have found that successful leaders have a rare combination of hyperrealism and can-do optimism (Flyvbjerg & Gardner, 2022). I call such individuals "realistic optimists." Risto Siilasmaa, chairman of Nokia during its recent successful turnaround, goes one step further in highlighting the two disparate dispositions, when he emphasizes "paranoid optimism" as the key to success in leading projects and businesses, always planning for the worst-case scenario: "The more paranoid we are, the harder we will continue to labor to shift the probability curve in our favor and the more optimistic we can afford to be" (Siilasmaa, 2018, p. xvi). If you are looking for someone to successfully lead a project, this is the type of person you want: a realistic optimist, if not a paranoid one. You would never get on a plane if you overheard the pilot say to the copilot, "I'm optimistic about the fuel situation." Similarly, one should not trust a project leader who is optimistic about the budget or schedule, which is the fuel of projects.
During the Apollo program (1961–1972), the NASA administration criticized its cost engineers for being optimistic with a US$10 billion estimate for the program (approximately US$90 billion in 2021 dollars). The administration told the engineers that their assumption that "everything's going to work" was wrong (Bizony, 2006, p. 41). The engineers then increased their estimate to US$13 billion, which the administration adjusted to US$20 billion and got approved by Congress, to the shock of the engineers. Today, the NASA administration's US$7 billion increase has a technical name: "optimism bias uplift." NASA jokingly called it the "administrator's discount." But they were serious when they advised that all senior executives in charge of large, complex projects must apply such a discount to make allowance for the unknown. Whatever the name, it is the single most important reason Apollo has gone down in history as that rare species of multi-billion-dollar project: one delivered on budget. The NASA administration "knew exactly what [it] was doing" for Apollo, as rightly observed by space historian Piers Bizony (ibid.).
Explanations of project outcomes in terms of optimism bias
originate with Kahneman and Tversky (1979a) and have been
further developed by Kahneman and Lovallo (1993), Lovallo
and Kahneman (2003), Flyvbjerg (2009a), and Flyvbjerg
et al. (2004, 2009).
We saw above that strategic project planners and managers sometimes underestimate cost and overestimate benefit to achieve approval for their projects. Optimistic planners and managers also do this, albeit unintentionally. The result is the same, however, namely cost overruns and benefit shortfalls. Thus, optimism bias and strategic misrepresentation reinforce each other when both are present in a project. An interviewee in our research described this strategy as "showing the project at its best" (Flyvbjerg et al., 2004, p. 50). It results in an inverted Darwinism: "survival of the unfittest" (Flyvbjerg, 2009b). It is not the best projects that get implemented like this, but the projects that look best on paper. And, the projects that look best on paper are the projects with the largest cost underestimates and benefit overestimates, other things being equal. But, the larger the cost underestimate on paper, the greater the cost overrun in reality. And, the larger the overestimate of benefits, the greater the benefit shortfall. Therefore, the projects that have been made to look best on paper become the worst, or "unfittest," projects in reality.

536 Project Management Journal 52(6)
Uniqueness Bias
Uniqueness bias was originally identified by psychologists as the tendency of individuals to see themselves as more singular than they actually are, for example, singularly healthy, clever, or attractive (Goethals et al., 1991; Suls et al., 1988; Suls & Wan, 1987). In project planning and management, the term was first used by Flyvbjerg (2014, p. 9), who defined uniqueness bias as the tendency of planners and managers to see their projects as singular. It is a general bias, but it turns out to be particularly rewarding as an object of study in project management, because project planners and managers are systematically primed to see their projects as unique.
The standard definition of a project, according to the biggest professional organization in the field, the U.S.-based Project Management Institute (PMI, 2017, p. 4), directly emphasizes uniqueness as one of two defining features of what a project is: "A project is a temporary endeavor undertaken to create a unique product, service, or result" (italics added). Similarly, the U.K.-based Association for Project Management (APM, 2012) stresses uniqueness as the very first characteristic of what a project is in its official definition: "A project is a unique, transient endeavour, undertaken to achieve planned objectives" (italics added). Academics, too, define projects in terms of uniqueness, here Turner and Müller (2003, p. 7, italics added): "A project is a temporary organization to which resources are assigned to undertake a unique, novel and transient endeavour managing the inherent uncertainty and need for integration in order to deliver beneficial objectives of change." Similar views of uniqueness as key to the nature of projects may be found in Grün (2004, p. 3, p. 245), Fox and Miller (2006, p. 3, p. 109), and Merrow (2011, p. 161).
We maintain that the understanding of projects as unique is unfortunate, because it contributes to uniqueness bias among project planners and managers. In the grip of uniqueness bias, project managers see their projects as more singular
than they actually are. This is reinforced by the fact that
new projects often use nonstandard technologies and
Uniqueness bias tends to impede managers' learning, because they think they have little to learn from other projects as their own project is unique. Uniqueness bias may also feed overconfidence bias (see below) and optimism bias (see above), because planners subject to uniqueness bias tend to underestimate risks. This interpretation is supported by research on IT project management reported in Flyvbjerg and Budzier (2011), Budzier and Flyvbjerg (2013), and Budzier (2014). The research found that managers who see their projects as unique perform significantly worse than other managers. If you are a project leader and you overhear team members speak of your project as unique, you therefore need to react.
It is self-evidently true, of course, that a project may be unique in its own specific geography and time. For instance, California has never built a high-speed rail line before, so in this sense, the California High-Speed Rail Authority is managing a unique project. But, the project is only unique to California and therefore not truly unique. Dozens of similar projects have been built around the world, with data and lessons learned that would be highly valuable to California. In that sense, projects are no different from people. A quote, often ascribed to the anthropologist Margaret Mead, captures the point well: "Always remember that you are absolutely unique. Just like everyone else." Each person not only is obviously unique but also has a lot in common with other people. The uniqueness of people has not stopped the medical profession from making progress based on what humans have in common. The problem with project management is that uniqueness bias hinders such learning across projects, because project managers and scholars are prone to "localism bias," which we define as the tendency to see the local as global, due to availability bias for the local. Localism bias explains why local uniqueness is easily and often confused with global uniqueness. In many projects, it does not even occur to project planners and managers to look outside their local project, because "our project is unique," which is a mantra one hears over and over in projects and that is surprisingly easy to get project managers to admit to.
Uniqueness bias feeds what Kahneman (2011, p. 247) calls "the inside view." Seeing things from this perspective, planners focus on the specific circumstances and components of the project they are planning and seek evidence in their own experience. Estimates of budget, schedule, and so forth are based on this information, typically built "from the inside and out," or bottom-up, as in conventional cost engineering. The alternative is the "outside view," which consists of viewing the project you are planning from the perspective of similar projects that have already been completed, basing your estimates for the planned project on the actual outcomes of these projects. But if your project is truly unique, then similar projects clearly do not exist, and the outside view becomes irrelevant and impossible. This leaves you with the inside view as the only option for planning your project. Even if a project is not truly unique, if the project team thinks it is, then the outside view will be left by the wayside, and the inside view will reign supreme, which is typical. "In the competition with the inside view, the outside view does not stand a chance," as pithily observed by Kahneman (2011, p. 249). The inside view is the perspective people spontaneously adopt when they plan, reinforced by uniqueness bias for project planners and managers. The inside view is therefore typical of project planning and management.
The consequences are dire, because only the outside view effectively takes into account all risks, including the so-called "unknown unknowns." These are impossible to predict from the inside, because there are too many ways a project can go wrong. However, the unknown unknowns are included in the outside view, because anything that went wrong with the completed projects that constitute the outside view is included
in their outcome data (Flyvbjerg, 2006). Using these data for
planning and managing a new project therefore leaves you
with a measure of all risk, including unknown unknowns.
Uniqueness bias makes you blind to unknown unknowns.
The outside view is an antidote to uniqueness bias.
Project managers, in addition to being predisposed, like everyone else, to the inside view and uniqueness, have been indoctrinated by their professional organizations to believe projects are unique, as we saw above. Thus it's no surprise it takes substantial experience to cut loose from the conventional view. Patrick O'Connell, an experienced megaproject manager and ex-Practitioner Director of Oxford's BT Centre for Major Programme Management, told me, "The first 20 years as a megaproject manager I saw uniqueness in each project; the next 20 years similarities." The NASA administration, mentioned above, balked when people insisted the Apollo program, with its aim of landing the first humans on the moon, was unique. How could it not be, as putting people on the moon had never been done before, people argued. The administration would have none of it. They deplored those who saw the program as "so special," as "so exceptional," because such people did not understand the reality of the project. The administration insisted, in contrast, that "the basic knowledge and technology and the human and material resources necessary for the job already existed," so there was no reason to reinvent the wheel (Webb, 1969, p. 11, p. 61). The NASA-Apollo view of uniqueness bias saw this bias for what it is: a fallacy.
In sum, uniqueness bias feeds the inside view and optimism, which feed underestimation of risk, which makes project teams take on risks they would likely not have accepted had they known the real odds. Good project leaders do not let themselves be fooled like this. They accept that projects may be unique locally, yes. But they understand that to be locally unique is an oxymoron. Local uniqueness is, however, the typical meaning of the term "unique," when used in project management. It is a misnomer that undermines project performance and thus the project management profession. Truly unique projects are rare. We have lots to learn from other projects. And if we don't learn, we will not succeed with our projects.
The Planning Fallacy (Writ Large)
The planning fallacy is a subcategory of optimism bias that arises from individuals producing plans and estimates that are unrealistically close to best-case scenarios. The term was originally coined by Kahneman and Tversky (1979a, p. 315) to describe the tendency for people to underestimate task completion times. Buehler et al. (1994, 1997) continued work following this definition. Later, the concept was broadened to cover the tendency for people to, on the one hand, underestimate costs, schedules, and risks for planned actions and, on the other, overestimate benefits and opportunities for those actions. Because the original narrow and later broader concepts are so fundamentally different in the scope they cover, Flyvbjerg and Sunstein (2017) suggested the term "planning fallacy writ large" for the broader concept, to avoid confusing the two.

Flyvbjerg et al. (2003, p. 80) call the tendency to plan according to best-case scenarios the "EGAP principle," for "Everything Goes According to Plan." The planning fallacy and the EGAP principle are similar in the sense that both result in a lack of realism, because of their overreliance on best-case scenarios, as with the NASA cost engineers above. Both lead to base rate neglect, illusion of control, and overconfidence. In this manner, both feed into optimism bias.
At the most fundamental level, Kahneman and Tversky (1979a) identified the planning fallacy as arising from a tendency among people to neglect distributional information when they plan. People who plan would adopt what Kahneman and Tversky (1979a, p. 315) first called an "internal approach" to prediction, later renamed the "inside view," under the influence of which people focus on "the constituents of the specific problem rather than on the distribution of outcomes in similar cases." Kahneman and Tversky (1979a) emphasized that "The internal approach to the evaluation of plans is likely to produce underestimation [of schedules]." For the planning fallacy writ large, such underestimation applies to costs, schedules, and risk, whereas overestimation applies to benefits and opportunities.
Interestingly, Guinote (2017, pp. 365–366) found in an experiment that subjects who had been made to feel in power were more likely to underestimate the time needed to complete a task than those not in power, demonstrating a higher degree of planning fallacy for people in power. Again, this is an example of how power bias and cognitive bias interact, resulting in amplification and convexity.
The planning fallacy's combination of underestimated costs and overestimated benefits generates risks to the second degree. Instead of cost risk and benefit risk canceling each other out, as other theories predict, for example, Hirschman's (2014) principle of the "Hiding Hand," under the planning fallacy the two types of risk reinforce each other, creating convex (accelerated) risks for projects from the get-go. The planning fallacy goes a long way in explaining the Iron Law of project management: "Over budget, over time, under benefits, over and over again" (Flyvbjerg, 2017). As a project leader, you want to avoid convex risks, because such risks are particularly damaging. You want to avoid committing the planning fallacy, and to guard against it especially among people in power.
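The convexity is easy to see with a toy benefit-cost calculation. The numbers below are hypothetical, chosen only for illustration, not drawn from the article's dataset: a cost underestimate and a benefit overestimate do not cancel in the benefit-cost ratio; they multiply.

```python
def bcr_inflation(cost_bias, benefit_bias):
    """Factor by which the forecast benefit-cost ratio overstates the
    true one, given a fractional cost underestimate (e.g., 0.20 = costs
    forecast 20% too low) and a fractional benefit overestimate."""
    return (1 + benefit_bias) / (1 - cost_bias)

# A 20% cost underestimate alone inflates the BCR by 1.25x; a 20%
# benefit overestimate alone by 1.2x; together the inflation is 1.5x,
# larger than either error alone suggests.
print(round(bcr_inflation(0.20, 0.0), 2))   # 1.25
print(round(bcr_inflation(0.0, 0.20), 2))   # 1.2
print(round(bcr_inflation(0.20, 0.20), 2))  # 1.5
```

Because the two biases enter multiplicatively, doubling both more than doubles the excess distortion, which is the convexity described above.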
Overconfidence Bias, Hindsight Bias, and Availability Bias

Overconfidence bias is the tendency to have excessive confidence in one's own answers to questions and to not fully recognize the uncertainty of the world and one's ignorance of it. People have been shown to be prone to what is called the "illusion of certainty" in (a) overestimating how much they understand and (b) underestimating the role of chance events and lack of knowledge, in effect underestimating the variability of events they are exposed to in their lives (Moore & Healy, 2008; Pallier et al., 2002; Proeger & Meub, 2014). Overconfidence bias is found with both laypeople and experts, including project planners and managers (Fabricius & Büttgen, 2015).
Overconfidence bias is fed by illusions of certainty, which are fed by hindsight bias, also known as the "I-knew-it-all-along effect." Availability bias, the tendency to overweigh whatever comes to mind, similarly feeds overconfidence bias. Availability is influenced by the recency of memories and by how unusual or emotionally charged they may be, with more recent, more unusual, and more emotional memories being more easily recalled. Overconfidence bias is a type of optimism, and it feeds overall optimism bias.
A simple way to illustrate overconfidence bias is to ask people to estimate confidence intervals for statistical outcomes. In one experiment, the chief financial officers (CFOs) of large U.S. corporations were asked to estimate the return next year on shares in the relevant Standard & Poor's index (Kahneman, 2011, p. 261). In addition, the CFOs were asked to give their best guess of the 80% confidence interval for the estimated returns by estimating a value for returns they were 90% sure would be too low (the lower decile, or P10) and a second value they were 90% sure would be too high (the upper decile, or P90), with 80% of returns estimated to fall between these two values (and 20% outside). Comparing actual returns with the estimated confidence interval, it was found that 67% of actual returns fell outside the estimated 80% confidence interval, or 3.35 times as many as estimated. The actual variance of outcomes was grossly underestimated by these financial experts, which is the same as saying they grossly underestimated risk. It is a typical finding. The human brain, including the brains of experts, spontaneously underestimates variance. For whatever reason, humans seem hardwired for this.
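The size of this calibration failure is easy to reproduce in a simulation. The sketch below uses invented numbers (the mean and the two spreads are assumptions, not the CFO data): forecasters who understate the true spread of outcomes by a factor of three will see roughly 67% of outcomes fall outside their stated 80% interval, close to the figure Kahneman reports.

```python
import random

def interval_miss_rate(stated_intervals, actuals):
    """Fraction of actual outcomes falling outside the stated
    [P10, P90] band, i.e., outside the 80% confidence interval."""
    misses = sum(1 for (p10, p90), x in zip(stated_intervals, actuals)
                 if x < p10 or x > p90)
    return misses / len(actuals)

random.seed(42)
TRIALS = 10_000
mean = 8.0         # hypothetical expected annual return, in percent
assumed_sd = 5.0   # the spread the forecasters believe in
true_sd = 15.0     # the actual spread: three times larger
z90 = 1.2816       # standard normal quantile for the 90th percentile

stated = [(mean - z90 * assumed_sd, mean + z90 * assumed_sd)] * TRIALS
actual = [random.gauss(mean, true_sd) for _ in range(TRIALS)]

# Stated: 20% of outcomes outside the band. Simulated: roughly 67%.
print(f"{interval_miss_rate(stated, actual):.0%}")
```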
In project management, overconfidence bias is built into the tools experts use for risk management. The tools, which are typically based on computer models using so-called Monte Carlo simulations, or similar, look scientific and objective but are anything but. Again, this is easy to document. You simply compare assumed variance in a specific, planned project with actual, historic variance for its project type, and you find the same result as for the CFOs above (Batselier & Vanhoucke, 2016). The bias is generated by experts assuming thin-tailed distributions of risk (normal or near-normal), when the real distributions are fat-tailed (lognormal, power law, or similar probability distributions) (Taleb, 2004). The error is not with Monte Carlo models as such, but with erroneous input into the models. Garbage in, garbage out, as always. To eliminate overconfidence bias, you want a more objective method that takes all distributional information into account, not just the distributional information experts can think of, which is subject to availability bias. The method needs to run on historical data from projects that have actually been completed. Flyvbjerg (2006) describes such a method.
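The thin-versus-fat-tail point can be made concrete with a minimal Monte Carlo sketch, using hypothetical numbers throughout: even a lognormal with the same mean and standard deviation as a normal assigns a far higher probability to an extreme cost blowout.

```python
import math
import random

random.seed(7)
N = 100_000
mean, sd = 100.0, 30.0  # hypothetical project cost: expected value and spread

# Thin-tailed assumption: cost ~ Normal(mean, sd)
normal_draws = [random.gauss(mean, sd) for _ in range(N)]

# Fatter-tailed alternative with the SAME mean and sd: a lognormal,
# with mu and sigma solved to match the two moments.
sigma = math.sqrt(math.log(1 + (sd / mean) ** 2))
mu = math.log(mean) - sigma ** 2 / 2
lognormal_draws = [random.lognormvariate(mu, sigma) for _ in range(N)]

def exceedance(draws, threshold):
    """Empirical probability of a draw exceeding the threshold."""
    return sum(x > threshold for x in draws) / len(draws)

# Probability of a blowout beyond twice the expected cost: the normal
# model puts it at a few hundredths of a percent; the moment-matched
# lognormal puts it an order of magnitude higher.
print(exceedance(normal_draws, 2 * mean))
print(exceedance(lognormal_draws, 2 * mean))
```

The comparison understates the article's point, if anything: genuinely power-law-tailed risks would make the gap larger still.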
In the thrall of overconfidence bias, project planners and decision makers underestimate risk by overrating their level of knowledge and ignoring or underrating the role of chance events in deciding the fate of projects. Hiring experts will generally not help, because experts are just as susceptible to overconfidence bias as laypeople and therefore tend to underestimate risk, too. There is even evidence that the experts who are most in demand are the most overconfident. In other words, people are attracted to, and willing to pay for, confidence, more than expertise (Kahneman, 2011, p. 263; Tetlock, 2005). Risk underestimation feeds the Iron Law of project management and is the most common cause of project downfall. Good project leaders must know how to avoid this.
Individuals produce confidence by storytelling. The more coherent a story we can tell about what we see, the more confident we feel. But, coherence does not necessarily equal validity. People tend to assume "what you see is all there is," called WYSIATI by Kahneman (2011, pp. 87–88), who gives this concept pride of place in explaining a long list of biases, including overconfidence bias. People spin a story based on what they see. Under the influence of WYSIATI, they spontaneously impose a coherent pattern on reality, while they suppress doubt and ambiguity and fail to allow for missing evidence, says Kahneman. The human brain excels at inferring patterns and generating meaning based on skimpy, or even nonexistent, evidence. But, coherence based on faulty or insufficient data is not true coherence, needless to say. If we are not careful, our brains quickly settle for anything that looks like coherence and use it as a proxy for validity. This may not be a big problem most of the time, and may even be effective, on average, in evolutionary terms, which could be why the brain works like this. But for big, consequential decisions, typical of project planning and management, it is not an advisable strategy. Nevertheless, project leaders and their teams often have a very coherent, and very wrong, story about their project, for instance that the project is unique, as we saw above under uniqueness bias, or that the project may be completed faster and cheaper than the average project, or that everything will go according to plan. The antidote is better, more carefully curated stories, based on better data.
Gigerenzer (2018, p. 324) has rightly observed that overconfidence, presented by psychologists as a nondeliberate cognitive bias, is in fact often a deliberate strategic bias used to achieve predefined objectives; in other words, it is strategic misrepresentation. "Financial analysts, for instance, who earn their money by mostly incorrect predictions such as forecasting exchange rates or the stock market had better be overconfident; otherwise few would buy their advice," argues Gigerenzer, who further observes about this fundamental confusion of one type of bias for a completely different one that "[c]onceptual clarity is desperately needed" (Gigerenzer, 2018, p. 324).
Finally, regarding the relationship between power bias and cognitive bias mentioned above, powerful individuals have been shown to be more susceptible to availability bias than individuals who are not powerful. The causal mechanism seems to be that powerful individuals are affected more strongly by ease of retrieval than by the content they retrieve, because they are
more likely to "go with the flow" and trust their intuition than individuals who are not powerful (Weick & Guinote, 2008). This finding has been largely ignored by behavioral economists, including Thaler (2015) in his history of the field. This is unfortunate, because the finding documents convexity to the second degree for situations with power. By overlooking this, behavioral economists make the same mistake they criticize conventional economists for, namely overlooking and underestimating variance and risk. Conventional economists make the mistake by disregarding cognitive bias; behavioral economists by ignoring power bias and its effect on cognitive bias.
Underestimating convexity is a very human mistake, to be
sure. We all do it. But, it needs to be accounted for if we
want to understand all relevant risks and protect ourselves
against them in project planning and management.
The Base Rate Fallacy
The base rate fallacy, sometimes also called base rate bias or base rate neglect, is the tendency to ignore base rate information (general data pertaining to a statistical population or a large sample, e.g., its average) and focus on specific information (data only pertaining to a certain case or a small number of cases) (Bar-Hillel, 1980; Tversky & Kahneman, 1982). If you play poker and assume different odds than those that apply, you are subject to the base rate fallacy and likely to lose. The objective odds are the base rate.

People often think the information they have is more relevant than it actually is, or they are blind to relevant information they do not have. Both situations result in the base rate fallacy. "Probability neglect," a term coined by Sunstein (2002, pp. 62–63) to denote the situation where people overfocus on bad outcomes with small likelihoods, for instance terrorist attacks, is a special case of the base rate fallacy.
The base rate fallacy is fed by other biases, for instance, uniqueness bias, described above, which results in extreme base rate neglect, because the case at hand is believed to be singular, wherefore information about other cases is deemed irrelevant. The inside view, hindsight bias, availability bias, recency bias, WYSIATI bias, overconfidence bias, and framing bias also feed the base rate fallacy. Base rate neglect is particularly pronounced when there is a good, strong story. Big, monumental projects typically have such a story, contributing to extra base rate neglect for those. Finally, we saw above that people, including experts, underestimate variance. In the typical project, base rate neglect therefore combines with variance neglect, following this formula:

Base Rate Neglect + Variance Neglect = Strong Convexity
Preliminary results from our research indicate that variance
neglect receives less attention in project management than base
rate neglect, which is unfortunate, because the research also indi-
cates that variance neglect is typically larger and has even more
drastic impact on project outcomes than base rate neglect.
The base rate fallacy runs rampant in project planning and management, as documented by the Iron Law described earlier. Table 2 shows the most comprehensive overview that exists of base rates for costs and benefits in project management, based on data from 2,062 projects covering eight project types. Most projects do not get base rates right, as documented by averages that differ from one (1.0 = a correct base rate) at a level of statistical significance so high (p < 0.0001) that it is rarely found in studies of human behavior. The base rate fallacy is deeply entrenched in project management, as the data show. Flyvbjerg and Bester (2021) argue that base rate neglect results in a new behavioral bias, which they call the "cost–benefit fallacy," which routinely derails cost–benefit analyses of projects to a degree where such analyses cannot be trusted.
As pointed out by Kahneman (2011, p. 150), anyone who ignores "base rates and the quality of evidence in probability assessments will certainly make mistakes." The cure for the base rate fallacy, in and out of project management, is to get the base rate right by taking an outside view, for instance through reference class forecasting, carrying out premortems, or doing decision hygiene (Flyvbjerg, 2006; Klein, 2007; Kahneman et al., 2011, 2021, pp. 312–324, 371–372).
If you are a project planner or manager, the easiest and most
effective way to get started with curbing behavioral biases in
your work is getting your base rates right, for the projects
you are working on. Hopefully, most can see that if you do
not understand the real odds of a game, you are unlikely to
succeed at it. But that is the situation for most project planners
and managers: they do not get the odds right for the game they
are playing: project management. Table 2 documents this
beyond reasonable doubt and establishes realistic base rates
for a number of important areas in project management that
planners can use as a starting point for getting their projects
right. Data for other project types were not included for
reasons of space but show similar results.
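To make the outside-view cure concrete, here is a minimal sketch of the logic of reference class forecasting, with hypothetical numbers (the reference class below is invented for illustration, not taken from Table 2): an inside-view estimate is uplifted using the empirical distribution of actual-to-estimated cost ratios from similar completed projects.

```python
def outside_view_budget(inside_estimate, overrun_ratios, certainty=0.8):
    """Uplift an inside-view cost estimate using the empirical
    distribution of actual/estimated cost ratios from a reference
    class of completed projects. `certainty` is the desired chance
    of the final cost staying within the uplifted budget."""
    ratios = sorted(overrun_ratios)
    idx = min(int(certainty * len(ratios)), len(ratios) - 1)
    return inside_estimate * ratios[idx]

# Invented reference class: actual cost / estimated cost for ten
# completed projects of the same type (1.0 = delivered on budget).
reference_class = [0.9, 1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.7, 2.0, 2.6]

# An inside-view estimate of 100 needs a 2.0x uplift for an 80%
# chance of staying within budget, given this reference class.
print(outside_view_budget(100.0, reference_class, certainty=0.8))  # 200.0
```

Real reference class forecasting requires a large, statistically valid reference class and care in choosing it; the point here is only that the base rate, not the inside view, sets the budget.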
Anchoring

Anchoring is the tendency to rely too heavily, or "anchor," on one piece of information when making decisions. Anchoring was originally demonstrated and theorized by Tversky and Kahneman (1974). In their perhaps most famous experiment, subjects were asked to estimate the percentage of African countries in the United Nations. First, a number between 0 and 100 was determined by spinning a wheel of fortune in the subjects' presence. Second, the subjects were instructed to indicate whether that number was higher or lower than the percentage of African countries in the United Nations. Third, the subjects were asked to estimate this percentage by moving upward or downward from the given number. The median estimate was 25% for subjects who received the number 10 from the wheel of fortune as their starting point, whereas it was 45% for subjects who started with 65. A random anchor significantly influenced the outcome.
Similar results have been found in other experiments for a
wide variety of different subjects of estimation (Chapman &
Johnson, 1999; Fudenberg et al., 2012). Anchoring is pervasive.
The human brain will anchor in most anything, whether random
numbers, previous experience, or false information. It has
proven difficult to avoid this (Epley & Gilovich, 2006;
Simmons et al., 2010; Wilson et al., 1996). The most effective
way of dealing with anchoring is therefore to make sure the
brain anchors in relevant information before making decisions.
An obvious choice would be to anchor in base rates that are pertinent to the decision at hand, as proposed by Flyvbjerg (2006). This advice is similar to recommending that gamblers must know the objective odds of each game they play. It is sound advice but often goes unheeded in project management.
Project planners and managers tend to err by anchoring their decisions in plans that are best-case, instead of most likely, scenarios, as mentioned above. Planners and organizations also frequently anchor in their own limited experience, instead of seeking out a broader scope of histories, which would be more representative of the wider range of possible outcomes that actually apply to the project they are planning.
This happened to Hong Kong's MTR Corporation when it was tasked with building the first high-speed rail line in the territory. MTR anchored in its own experience with urban and conventional rail instead of throwing the net wider and looking at high-speed rail around the world. High-speed rail is significantly more difficult to build than urban and conventional rail, and MTR had never built a high-speed rail line before. Despite, or perhaps because of, MTR's proven competence in building urban and conventional rail, the anchor for the high-speed rail line proved optimistic, resulting in significant cost and schedule overruns for the new venture (Flyvbjerg et al., 2014).
Ansar et al. (2014, p. 48) similarly found that planners of large dams around the world have generally anchored in the North American experience with building dams, for no better reason than that North America built its dams first. By choosing this anchor, planners ended up allowing insufficient adjustments to fully reflect local risks, for example, exchange rate risks, corruption, logistics, and the quality of local project management teams. This resulted in optimistic budgets and higher cost overruns for dams built outside North America.
Anchoring is fed by other biases, including availability bias and recency bias, which induce people to anchor in the most available or most recent information, respectively. Anchoring results in base rate neglect, in other words, underestimation of the probabilities, and thus the risks, that face a project (see previous section). Smart project leaders avoid this by anchoring their project in the base rate for projects similar to the one they are planning, for instance, by benchmarking their project against outcomes for a representative class of similar, completed projects. Flyvbjerg (2013) explains how to do this, and Kahneman (2011, p. 154) explicitly identifies anchoring in the base rate as the cure for the WYSIATI bias mentioned above. Anchoring in the base rate is similar to taking an outside view, and the outside view is "an anchor that is meaningful," as rightly observed by Tetlock and Gardner (2015, pp. 117–120), whereas spontaneous anchors typically are less meaningful and lead to biased decisions with hidden risks.
Escalation of Commitment
Last, but not least, escalation of commitment (sometimes also called commitment bias) is the tendency to justify increased investment in a decision, based on the cumulative prior investment, despite new evidence suggesting the decision may be wrong and additional costs will not be offset by benefits. Consider the example of two friends with tickets for a professional basketball game a long drive from where they live. On the day of the game, there is a big snowstorm. The higher the price the friends paid for the tickets, the more likely they are to brave the blizzard and attempt driving to the game, investing more time, money, and risk (Thaler, 2015, p. 20). That is escalation of commitment. In contrast, the rational approach when deciding whether to invest further in a venture would be to disregard what you have already invested.

Escalation of commitment applies to individuals, groups, and whole organizations. It was first described by Staw (1976), with later work by Brockner (1992), Staw (1997), Sleesman et al. (2012), and Drummond (2014, 2017). Economists use related terms like the "sunk-cost fallacy" (Arkes & Blumer, 1985) and "lock-in" (Cantarelli et al., 2010b) to describe similar phenomena. Escalation of commitment is captured in popular proverbs such as "Throwing good money after bad" and "In for a penny, in for a pound."
In its original definition, escalation of commitment is unre-
flected and nondeliberate. People do not know they are
subject to the bias, as with other cognitive biases. However,
once you understand the mechanism, it may be used deliber-
ately. In his autobiography, famous Hollywood director Elia
Kazan (1997, pp. 412–413) explains how he used sunk costs
and escalation of commitment to get his projects going:
Quickly I planned my position on costs … My tactic was one
familiar to directors who make films off the common path: to get
the work rolling, involve actors contractually, build sets, collect
props and costumes, expose negative, and so get the studio in
deep. Once money in some significant amount had been spent,
it would be difficult for Harry [Cohn, President and co-founder
of Columbia Pictures] to do anything except scream and holler.
If he suspended a film that had been shooting for a few weeks,
he'd be in for an irretrievable loss, not only of money but of
"face." The thing to do was get the film going.
Kazan here combines strategic misrepresentation with cogni-
tive bias to achieve takeoff for his projects. The misrepresenta-
tion consists in initially (a) being economical with the truth
regarding the real cost of his projects and (b) "just get[ting] the
film going" to sink in sufficient cost to create a point of no
return. After this, Kazan trusts the studio head to fall victim
to cognitive bias, specifically sunk cost and escalation of
commitment, in the grip of which he will allocate more money
to the film instead of halting it, which might have been the ratio-
nal decision. This is the studio head's version of Thaler's (2015)
"driving into the blizzard," described above. As argued earlier,
such interaction between cognitive and political bias is common
in shaping project outcomes. Most project managers will know
examples similar to Kazan's. It is too simple to think of out-
comes as being generated solely by either cognitive bias or
political bias. Such purity may be constructed in lab experi-
ments. In real life, both are typically at play, with complex inter-
actions between the two.
A number of excellent case studies exist that demonstrate the
pertinence of escalation of commitment to project planning and
management, for example, of Expo 86 (Ross & Staw, 1986), the
Shoreham nuclear power plant (Ross & Staw, 1993), and
Denver International Airport (Montealegre & Keil, 2000), each of
which presents its own version of "driving into the blizzard."
We saw above how optimism bias undermines project per-
formance. Escalation of commitment amplifies this. Consider
that once a forecast turns out to have been optimistic, often
the wisest thing would be to give up the project. But, escalation
of commitment and the sunk cost fallacy keep decision-makers
from doing the right thing. Instead, they keep going, throwing
good money after bad.
Escalation of commitment often coexists with and is rein-
forced by what has been called "preferential attachment" or
the "Yule process" (Barabási, 2014; Barabási & Albert, 1999;
Gabaix, 2009). Preferential attachment is a procedure in
which some quantity, for example, money or connections in a
network, is distributed among a number of individuals or
units according to how much they already have, so that those
who have much receive more than those who have little,
known also as the "Matthew effect."
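A toy simulation makes the rich-get-richer dynamic concrete: each round, one unit of funding goes to a recipient with probability proportional to its current holdings. The setup (20 recipients, 1,000 rounds, equal starting endowments) and the function name are invented purely for illustration.

```python
import random

def preferential_allocation(n_units=20, n_rounds=1000, seed=42):
    """Allocate one unit of funding per round, with probability
    proportional to current holdings (preferential attachment)."""
    rng = random.Random(seed)
    holdings = [1] * n_units  # equal start, so everyone has a chance
    for _ in range(n_rounds):
        # Those who have much are more likely to receive more.
        winner = rng.choices(range(n_units), weights=holdings)[0]
        holdings[winner] += 1
    return holdings

h = preferential_allocation()
# The distribution ends up heavily skewed toward a few big winners,
# even though all recipients started equal.
print(sorted(h, reverse=True))
```

Running the sketch with different seeds produces different winners but the same qualitative skew, which is the point: the inequality is generated by the allocation rule itself, not by differences among the recipients.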
In project planning and management, Flyvbjerg (2009b)
argued that the investments that look best on paper get
funded and that these are the investments with the largest cost
underestimates and therefore the largest need for additional
funding during delivery, resulting in preferential attachment
of funds to these investments, once they have their initial
funding. After an investment has been approved and funded,
typically there is lock-in and a point of no return, after which
escalation of commitment follows, with more and more funds
allocated to the original investment to close the gap between
the original cost underestimate and actual outturn cost
(Cantarelli et al., 2010b; Drummond, 2017).
Interestingly, preferential attachment has been identified as a
causal mechanism that generates outcome distributions with a
fat upper tail, specically power law distributions (Barabási,
2014; Krapivsky & Krioukov, 2008). In the case of cost, this
would predict an overincidence (compared with the Gaussian)
of extreme cost overruns. So far, we have tested the thesis for
cost and cost overrun with the Olympic Games, where the
thesis found strong support in the data (Flyvbjerg et al.,
2021). Currently, we are further testing the thesis for informa-
tion technology projects, while tests of other project types are
in the pipeline. Should the thesis hold across project types,
we may be in the first stages of discovering a general theory
of project management, with more fundamental and more scien-
tific explanations of project outcomes than those found in con-
ventional theory.
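The practical difference between thin-tailed (Gaussian) and fat-tailed (power law) outcome distributions can be seen by comparing tail probabilities analytically. The parameters below (a Pareto power law with exponent alpha = 2 and a normal distribution, both with mean overrun ratio 2.0) are illustrative choices, not estimates from the article's data.

```python
import math

def pareto_tail(x, x_min=1.0, alpha=2.0):
    """P(X > x) for a Pareto (power law) distribution."""
    return (x_min / x) ** alpha if x > x_min else 1.0

def gaussian_tail(x, mu=2.0, sigma=1.0):
    """P(X > x) for a normal distribution, via the complementary
    error function."""
    return 0.5 * math.erfc((x - mu) / (sigma * math.sqrt(2)))

# Probability of a cost outcome at ten times the estimate: the power
# law makes such extremes vastly more likely than a Gaussian with
# the same mean would suggest.
print(pareto_tail(10.0))    # one in a hundred
print(gaussian_tail(10.0))  # effectively zero
```

This is what "overincidence of extreme cost overruns compared with the Gaussian" means operationally: under a power law, an outcome eight standard-deviation-equivalents out is an expected event, not an impossibility.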
Scientific revolutions rarely happen without friction. So, too,
for the behavioral revolution. It has been met with skepticism,
including from parts of the project management community
(Flyvbjerg et al., 2018). Some members prefer to stick with con-
ventional explanations of project underperformance in terms of
errors of scope, complexity, labor and materials prices, archae-
ology, geology, bad weather, ramp-up problems, demand fluc-
tuations, and so forth (Cantarelli et al., 2010a).
Behavioral scientists would agree with the skeptics that
scope changes, complexity, and so forth are relevant for under-
standing what goes on in projects but would not see them as
root causes of outcomes. According to behavioral science, the
root cause of, say, cost overrun is the well-documented fact
that project planners and managers keep underestimating
scope changes, complexity, and so forth in project after project.
From the point of view of behavioral science, the mecha-
nisms of scope changes, complex interfaces, price changes,
archaeology, geology, bad weather, and business cycles are
not unknown to project planners and managers, just as it is
not unknown that such mechanisms may be mitigated.
However, project planners and managers often underestimate
these mechanisms and the corresponding mitigation measures,
due to optimism bias, overconfidence bias, the planning fallacy,
and strategic misrepresentation. In behavioral terms, unaccounted-for
scope changes are manifestations of such underestimation on the
part of project planners, and it is in this sense that bias and underes-
timation are root causes, while scope changes are merely proximate
causes. But because scope changes are more visible than the underlying
root causes, they are often mistaken for the cause of outcomes, for
example, cost overrun.
In behavioral terms, the causal chain starts with human bias
(political and cognitive), which leads to underestimation of
scope during planning, which leads to unaccounted for scope
changes during delivery, which leads to cost overrun. Scope
changes are an intermediate stage in this causal chain through
which the root causes manifest themselves. Behavioral science
tells project planners and managers, "Your biggest risk is you."
It is not scope changes, complexity, and so forth in themselves
that are the main problem; it is how human beings misconceive
and underestimate these phenomena, through optimism bias,
overconfidence bias, and strategic misrepresentation. This is a
profound and proven insight that behavioral science brings to
project planning and management. You can disregard it, of
course. But if you do, project performance will likely suffer:
you would be the gambler who does not know the odds of the game.
Behavioral science is not perfect. We saw above how behav-
ioral economics suffers from a "psychology bias," in the sense that it
tends to reduce behavioral biases to cognitive biases, ignoring
political bias in the process, thus committing the very sin it
accuses conventional economics of, namely theory-induced
blindness resulting in limited rationality. Gigerenzer (2018)
goes further and criticizes behavioral economics for "bias
bias," and he is right when he calls for conceptual clarification.
Not all behavioral biases are well defined, or even well delin-
eated: many and large overlaps exist among different biases
that need clarification, including for the 10 described above.
Just as seriously, many biases have only been documented in
simplified lab experiments but are tacitly assumed to hold in real-
life situations outside the lab, without sound demonstration that
the assumption holds. Finally, the psychology used by behavioral
economists is not considered cutting-edge by psychologists, a
fact openly acknowledged by Thaler (2015, p. 180), who
further admits it is often difficult to pin down which specific
behavioral bias is causing outcomes in a given situation or to
rule out alternative explanations (Thaler, 2015, p. 295).
Nevertheless, the behavioral revolution seems to be here to
stay, and it entails an important change of perspective for
project management: The problem with project cost overruns
and benefit shortfalls is not error but bias, and as long as we try
to solve the problem as something it is not (error), we will not
succeed. Estimates and decisions need to be debiased, which is
fundamentally different from eliminating error. Furthermore,
the problem is not even cost overruns or benefit shortfalls; it is
cost underestimation and benefit overestimation. Overrun, for
instance, is mainly a consequence of underestimation, with the
latter happening upstream from overrun, for big projects often
years before overruns manifest. Again, if we try to solve the
problem as something it is not (cost overrun), we will fail. We
need to solve the problem of upstream cost underestimation in
order to solve the problem of downstream cost overrun. Once
we understand these straightforward insights, we understand
that we and our projects are better off with an understanding of
behavioral science and behavioral bias than without it.
Anderson, C., & Galinsky, A. D. (2006). Power, optimism, and risk-
taking. European Journal of Social Psychology, 36, 511–536.
Ansar, A., Flyvbjerg, B., Budzier, A., & Lunn, D. (2014). Should we
build more large dams? The actual costs of hydropower mega-
project development. Energy Policy, 69, 43–56.
Arkes, H. R., & Blumer, C. (1985). The psychology of sunk cost.
Organizational Behavior and Human Decision Making, 35(1),
Association of Project Management (APM). (2012). APM body of
knowledge (6th ed.). Retrieved from
Bar-Hillel, M. (1980). The base-rate fallacy in probability judgments.
Acta Psychologica, 44(3), 211–233.
Barabási, A.-L. (2014). Linked: How everything is connected to every-
thing else and what it means for business, science, and everyday
life. Basic Books.
Barabási, A.-L., & Albert, R. (1999). Emergence of scaling in random
networks. Science, 286(5439), 509–512.
Batselier, J., & Vanhoucke, M. (2016). Practical application and empir-
ical evaluation of reference class forecasting for project manage-
ment. Project Management Journal, 47(5), 36–51.
Bizony, P. (2006). The man who ran the moon: James Webb, JFK, and
the secret history of Project Apollo. Icon Books.
Bok, S. (1999). Lying: Moral choice in public and private life. Vintage,
first published in 1979.
Brockner, J. (1992). The escalation of commitment to a failing course
of action: Toward theoretical progress. Academy of Management
Review, 17(1), 39–61.
Budzier, A. (2014). Theorizing outliers: Explaining variation in IT
project performance (DPhil thesis). Green-Templeton College.
Budzier, A., & Flyvbjerg, B. (2013). Making sense of the impact
and importance of outliers in project management through the
use of power laws. Proceedings of IRNOP (International
Research Network on Organizing by Projects), Volume 11, June,
pp. 1–28.
Buehler, R., Griffin, D., & MacDonald, H. (1997). The role of moti-
vated reasoning in optimistic time predictions. Personality and
Social Psychology Bulletin, 23(3), 238–247.
Buehler, R., Griffin, D., & Ross, M. (1994). Exploring the
planning fallacy: Why people underestimate their task comple-
tion times. Journal of Personality and Social Psychology, 67,
Cantarelli, C. C., Flyvbjerg, B., Molin, E. J. E., & van Wee, B. (2010a).
Cost overruns in large-scale transportation infrastructure
projects: Explanations and their theoretical embeddedness.
European Journal of Transport and Infrastructure Research,
10(1), 5–18.
Cantarelli, C. C., Flyvbjerg, B., van Wee, B., & Molin, E. J. E. (2010b).
Lock-in and its influence on the project performance of large-scale
transportation infrastructure projects: Investigating the way in
which lock-in can emerge and affect cost overruns. Environment
and Planning B: Planning and Design, 37, 792–807.
Carson, T. L. (2006). The definition of lying. Noûs, 40, 284–306.
Chapman, G. B., & Johnson, E. J. (1999). Anchoring, activation, and
the construction of values. Organizational Behavior and Human
Decision Processes, 79(2), 115–153.
Drummond, H. (2014). Is escalation always irrational? Originally
published in Organization Studies, 19(6), 1998, here from
B. Flyvbjerg (Ed.), Megaproject planning and management:
Essential readings (Vol. II, pp. 291–309). Edward Elgar.
Drummond, H. (2017). Megaproject escalation of commitment: An
update and appraisal. In B. Flyvbjerg (Ed.), The Oxford hand-
book of megaproject management (pp. 194–216). Oxford
University Press.
Epley, N., & Gilovich, T. (2006). The anchoring-and-adjustment heu-
ristic: Why the adjustments are insufficient. Psychological
Science, 17(4), 311–318.
Faber, T. (2019). Faber & Faber: The untold story. Faber & Faber.
Fabricius, G., & Büttgen, M. (2015). Project managers' overconfi-
dence: How is risk reflected in anticipated project success?
Business Research, 8, 239–263.
Fallis, D. (2009). What is lying? The Journal of Philosophy, 106(1),
Feynman, R. P. (2007a). Richard P. Feynman's minority report to the
space shuttle Challenger inquiry, in The pleasure of finding
things out. Penguin, first published in 1999, pp. 151–169.
Feynman, R. P. (2007b). Mr. Feynman goes to Washington: Investigating
the space shuttle Challenger disaster, in What do you care what other
people think? Further adventures of a curious character. Penguin,
first published in 1988, pp. 113–237.
Flyvbjerg, B. (1998). Rationality and power: Democracy in practice.
The University of Chicago Press.
Flyvbjerg, B. (2003). Delusions of success: Comment on Dan Lovallo
and Daniel Kahneman. Harvard Business Review. December,
pp. 121–122.
Flyvbjerg, B. (2006). From Nobel Prize to project management:
Getting risks right. Project Management Journal, 37(3), 5–15.
Flyvbjerg, B. (2009a). Optimism and misrepresentation in early project
development. In T. Williams, K. Samset, & K. Sunnevag (Eds.),
Making essential choices with scant information: Front-end
decision making in major projects (pp. 147–168). Palgrave
Flyvbjerg, B. (2009b). Survival of the unfittest: Why the worst infra-
structure gets built, and what we can do about it. Oxford
Review of Economic Policy, 25(3), 344–367.
Flyvbjerg, B. (2013). Quality control and due diligence in project
management: Getting decisions right by taking the outside
view. International Journal of Project Management, 31(5),
Flyvbjerg, B. (2014). What you should know about megaprojects and
why: An overview. Project Management Journal, 45(2), 6–19.
Flyvbjerg, B. (2016). The fallacy of beneficial ignorance: A test
of Hirschman's hiding hand. World Development, 84,
Flyvbjerg, B. (2017). Introduction: The iron law of megaproject man-
agement. In B. Flyvbjerg (Ed.), The Oxford handbook of mega-
project management (pp. 1–18). Oxford University Press.
Flyvbjerg, B., Ansar, A., Budzier, A., Buhl, S., Cantarelli, C., Garbuio,
M., Glenting, C., Holm, M. S., Lovallo, D., Lunn, D., Molin, E.,
Rønnest, A., Stewart, A., & van Wee, B. (2018). Five things you
should know about cost overrun. Transportation Research Part
A: Policy and Practice, 118, 174–190.
Flyvbjerg, B., & Bester, D. W. (2021). The cost-benefit fallacy: Why
cost-benefit analysis is broken and how to fix it. Journal of
Benefit-Cost Analysis.
Flyvbjerg, B., Bruzelius, N., & Rothengatter, W. (2003). Megaprojects
and risk: An anatomy of ambition. Cambridge University Press.
Flyvbjerg, B., & Budzier, A. (2011). Why your IT project may be riskier
than you think. Harvard Business Review, 89(9), 23–25.
Flyvbjerg, B., Budzier, A., & Lunn, D. (2021). Regression to the tail:
Why the Olympics blow up. Environment and Planning A:
Economy and Space, 53(2), 233–260.
Flyvbjerg, B., Garbuio, M., & Lovallo, D. (2009). Delusion and deception
in large infrastructure projects: Two models for explaining and pre-
venting executive disaster. California Management Review, 51(2),
Flyvbjerg, B., & Gardner, D. (2022). Big plans: Why most fail, how
some succeed. Penguin Random House.
Flyvbjerg, B., Glenting, C., & Rønnest, A. (2004). Procedures for
dealing with optimism bias in transport planning: Guidance
document. UK Department for Transport, London, June.
Flyvbjerg, B., Holm, M. K. S., & Buhl, S. L. (2005). How (in)accu-
rate are demand forecasts in public works projects? The case of
transportation. Journal of the American Planning Association,
71(2), 131–146.
Flyvbjerg, B., Holm, M. K. S., & Buhl, S. L. (2002). Underestimating
costs in public works projects: Error or lie? Journal of the
American Planning Association, 68(3), 279–295.
Flyvbjerg, B., Hon, C.-k., & Fok, W. H. (2016). Reference class fore-
casting for Hong Kong's major roadworks projects. Proceedings
of the Institution of Civil Engineers, 169(CE6), 17–24.
Flyvbjerg, B., Kao, T. C., & Budzier, A. (2014). Report to the
Independent Board Committee on the Hong Kong Express
Rail Link Project, in MTR Independent Board Committee, Second
Report by the Independent Board Committee on the Express Rail
Link Project (Hong Kong: MTR), pp. A1–A122.
Flyvbjerg, B., & Sunstein, C. R. (2017). The principle of the malevo-
lent hiding hand; or, the planning fallacy writ large. Social
Research, 83(4), 979–1004.
Fox, J. R., & Miller, D. B. (2006). Challenges in managing large pro-
jects. Defense Acquisition University Press.
Fudenberg, D., Levine, D. K., & Maniadis, Z. (2012). On the robust-
ness of anchoring effects in WTP and WTA experiments.
American Economic Journal: Microeconomics, 4(2), 131–145.
Gabaix, X. (2009). Power laws in economics and finance. Annual
Review of Economics, 1, 255–293.
Gigerenzer, G. (2018). The bias bias in behavioral economics. Review
of Behavioral Economics, 5, 303–336.
Goethals, G. R., Messick, D. M., & Allison, S. (1991). The uniqueness
bias: Studies in constructive social comparison. In J. Suls &
T. A. Wills (Eds.), Social comparison: Contemporary theory
and research (pp. 149–176). Erlbaum.
Grün, O. (2004). Taming giant projects: Management of multi-
organization enterprises. Springer.
Guinote, A. (2017). How power affects people: Activating, wanting,
and goal seeking. Annual Review of Psychology, 68, 353–381.
Guinote, A., & Vescio, T. K. (2010). The social psychology of power.
Guilford Press.
Hirschman, A. O. (2014). The principle of the hiding hand, originally
published in The Public Interest, Winter 1967, pp. 10–23, in
B. Flyvbjerg (Ed.), Megaproject planning and management:
Essential readings (Vol. I, pp. 149–162). Edward Elgar.
Jones, L. R., & Euske, K. J. (1991). Strategic misrepresentation in bud-
geting. Journal of Public Administration Research and Theory,
1(4), 437–460.
Kahneman, D. (2011). Thinking, fast and slow. Farrar, Straus and Giroux.
Kahneman, D., & Lovallo, D. (1993). Timid choices and bold fore-
casts: A cognitive perspective on risk taking. Management Science.
Kahneman, D., & Lovallo, D. (2003). Response to Bent Flyvbjerg.
Harvard Business Review, December, p. 122.
Kahneman, D., Lovallo, D., & Sibony, O. (2011). Before you make
that big decision. Harvard Business Review, June, pp. 51–60.
Kahneman, D., Sibony, O., & Sunstein, C. R. (2021). Noise: A flaw in
human judgment. William Collins.
Kahneman, D., & Tversky, A. (1979a). Intuitive prediction: Biases and
corrective procedures. In S. Makridakis & S. C. Wheelwright
(Eds.), Studies in the management sciences: Forecasting (Vol.
12, pp. 313–327). North Holland.
Kahneman, D., & Tversky, A. (1979b). Prospect theory: An analysis of
decisions under risk. Econometrica, 47, 263–291.
Kain, J. F. (1990). Deception in Dallas: Strategic misrepresentation in
rail transit promotion and evaluation. Journal of the American
Planning Association, 56(2), 184–196.
Kazan, E. (1997). A life. Da Capo Press (first published in 1988).
Klein, G. (2007). Performing a project premortem. Harvard Business
Review, September, pp. 12.
Krapivsky, P., & Krioukov, D. (2008). Scale-free networks as prea-
symptotic regimes of superlinear preferential attachment.
Physical Review E, 78(026114), 1–11.
List of Cognitive Biases. (2021). In Wikipedia. https://en.wikipedia.
Lovallo, D., & Kahneman, D. (2003). Delusions of success: How opti-
mism undermines executives' decisions. Harvard Business
Review, July, 56–63.
Merrow, E. W. (2011). Industrial megaprojects: Concepts, strategies,
and practices for success. Wiley.
Montealegre, R., & Keil, M. (2000). De-escalating information technol-
ogy projects: Lessons from the Denver International Airport. MIS
Quarterly, 24, 417–447.
Moore, D. A., & Healy, P. J. (2008). The trouble with overconfidence.
Psychological Review, 115(2), 502–517.
Newby-Clark, I. R., Ross, M., Buehler, R., Koehler, D. J., & Griffin, D.
(2000). People focus on optimistic scenarios and disregard
pessimistic scenarios while predicting task completion times.
Journal of Experimental Psychology: Applied, 6(3), 171–182.
Nouvel, J. (2009). Interview in Weekendavisen, Copenhagen, 16
January, p. 4 (DR-Byen).
O'Sullivan, P. (2015). The neural basis of always looking on the bright
side. Dialogues in Philosophy, Mental and Neuro Sciences, 8(1),
Pallier, G., Wilkinson, R., Danthiir, V., Kleitman, S., Knezevic, G.,
Stankov, L., & Roberts, R. D. (2002). The role of individual dif-
ferences in the accuracy of confidence judgments. The Journal of
General Psychology, 129(3), 257–299.
Pickrell, D. (1992). A desire named streetcar: Fantasy and fact in rail transit
planning. Journal of the American Planning Association, 58(2),
Proeger, T., & Meub, L. (2014). Overconfidence as a social bias:
Experimental evidence. Economics Letters, 122(2),
Project Management Institute (PMI). (2017). A guide to the project
management body of knowledge (PMBOK guide), sixth
edition. Author.
Ross, J., & Staw, B. M. (1986). Expo 86: An escalation prototype.
Administrative Science Quarterly, 31(2), 274–297.
Ross, J., & Staw, B. M. (1993). Organizational escalation and exit: The
case of the Shoreham Nuclear Power Plant. Academy of
Management Journal, 36(4), 701–732.
Sharot, T. (2011). The optimism bias: A tour of the irrationally positive
brain. Pantheon.
Sharot, T., Riccardi, A. M., Raio, C. M., & Phelps, E. A. (2007). Neural
mechanisms mediating optimism bias. Nature, 450, 102–105.
Shepperd, J. A., Carroll, P., Grace, J., & Terry, M. (2002). Exploring the
causes of comparative optimism. Psychologica Belgica, 42, 65–98.
Siilasmaa, R. (2018). Transforming Nokia: The power of paranoid
optimism to lead through colossal change. McGraw Hill.
Simmons, J. P., LeBoeuf, R. A., & Nelson, L. D. (2010). The effect of
accuracy motivation on anchoring and adjustment: Do people
adjust from provided anchors? Journal of Personality and
Social Psychology, 99(6), 917–932.
Sleesman, D. J., Conlon, D. E., McNamara, G., & Miles, J. E. (2012).
Cleaning up the big muddy: A meta-analytic review of the deter-
minants of escalation of commitment. Academy of Management
Journal, 55(3), 541–562.
Staw, B. M. (1976). Knee-deep in the big muddy: A study of escalating
commitment to a chosen course of action. Organizational
Behavior and Human Performance, 16(1), 27–44.
Staw, B. M. (1997). The escalation of commitment: An update and
appraisal. In Z. Shapira (Ed.), Organizational decision making
(pp. 191–215). Cambridge University Press.
Steinel, W., & De Dreu, C. K. W. (2004). Social motives and strategic
misrepresentation in social decision making. Journal of
Personality and Social Psychology, 86(3), 419–434.
Suls, J., & Wan, C. K. (1987). In search of the false uniqueness phe-
nomenon: Fear and estimates of social consensus. Journal of
Personality and Social Psychology, 52, 211–217.
Suls, J., Wan, C. K., & Sanders, G. S. (1988). False consensus and
false uniqueness in estimating the prevalence of health-protective
behaviors. Journal of Applied Social Psychology, 18, 66–79.
Sunstein, C. R. (2002). Probability neglect: Emotions, worst cases, and
law. Yale Law Review, 112(61), 61–107.
Taleb, N. N. (2004). Fooled by randomness: The hidden role of chance
in life and in the markets. Penguin.
Tetlock, P. E. (2005). Expert political judgment: How good is it? How
can we know? Princeton University Press.
Tetlock, P. E., & Gardner, D. (2015). Superforecasting: The art and
science of prediction. Random House.
Thaler, R. H. (2015). Misbehaving: How economics became behaviou-
ral. Allen Lane.
Thucydides (2009). The Peloponnesian war, translated by Martin
Hammond. Oxford University Press.
Turner, J. R., & Müller, R. (2003). On the nature of the project as a tem-
porary organization. International Journal of Project Management,
Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty:
Heuristics and biases. Science, 185(4157), 1124–1131.
Tversky, A., & Kahneman, D. (1982). Evidential impact of base rates.
In D. Kahneman, P. Slovic, & A. Tversky (Eds.), Judgment
under uncertainty: Heuristics and biases (pp. 153–162).
Cambridge University Press.
Cambridge University Press.
Wachs, M. (1989). When planners lie with numbers. Journal of the
American Planning Association, 55(4), 476–479.
Wachs, M. (1990). Ethics and advocacy in forecasting for public
policy. Business and Professional Ethics Journal, 9(1 and 2),
Wachs, M. (2013). The past, present, and future of professional ethics
in planning. In N. Carmon & S. S. Fainstein (Eds.), Policy, plan-
ning, and people: Promoting justice in urban development
(pp. 101–119). University of Pennsylvania Press.
Webb, J. (1969). Space-age management: The large-scale approach.
Weick, M., & Guinote, A. (2008). When subjective experiences
matter: Power increases reliance on the ease of retrieval.
Journal of Personality and Social Psychology, 94, 956–970.
Wilson, T. D., Houston, C. E., Etling, K. M., & Brekke, N. (1996). A
new look at anchoring effects: Basic anchoring and its anteced-
ents. Journal of Experimental Psychology: General, 125(4),
Author Biography
Bent Flyvbjerg is the first BT Professor and inaugural Chair of
Major Programme Management at the University of Oxford
and the Villum Kann Rasmussen Professor and Chair at the IT
University of Copenhagen. He is the most cited scholar in the
world in project management. His books and articles have
been translated into 20 languages. He has received numerous
honors and awards, including the Project Management
Institute Research Achievement Award, two Fulbright
Scholarships, and a knighthood. He is a frequent commentator
in the news, including The New York Times, The Economist,
the Wall Street Journal, the Financial Times, the BBC, and
CNN. He serves as an advisor to 10 Downing Street and govern-
ment and business around the world. His most recent book is Big
Plans: Why Most Fail, How Some Succeed (with Dan Gardner,
Penguin Random House, 2022). He can be contacted at
... The present study follows this trend and collects mainly data on construction projects but also on energy, Oil and Gas, and IT projects to identify common and differential patterns across sectors. Megaprojects are typically one-off investments (of the magnitude described above) that are characterized by high levels of complexity (Turner and Xue 2018) and often associated with high failure rates, attributed to planning fallacies (Flyvbjerg 2021), including optimism bias, but also strategic misrepresentation, escalation of commitment (Denicol, Davies, and Krystallis 2020), cognitive biases (Flyvbjerg 2021) or inappropriate management techniques (Turner 2022), to name a few. A detailed state-of-the-art review of the associated literature can be found in (Denicol, Davies, and Krystallis 2020). ...
... The present study follows this trend and collects mainly data on construction projects but also on energy, Oil and Gas, and IT projects to identify common and differential patterns across sectors. Megaprojects are typically one-off investments (of the magnitude described above) that are characterized by high levels of complexity (Turner and Xue 2018) and often associated with high failure rates, attributed to planning fallacies (Flyvbjerg 2021), including optimism bias, but also strategic misrepresentation, escalation of commitment (Denicol, Davies, and Krystallis 2020), cognitive biases (Flyvbjerg 2021) or inappropriate management techniques (Turner 2022), to name a few. A detailed state-of-the-art review of the associated literature can be found in (Denicol, Davies, and Krystallis 2020). ...
Little is known about the governance of inter-organizational networks for projects. This study empirically develops a theoretical framework for this, using 28 project networks as case studies, applying 124 interviews in ten countries. The abductively developed three-layered governance framework has the individual network for a project at its lowest layer, explained through Multi-level Governance Theory. This is steered by a middle layer for the governance of networks, addressing the steering of the different networks these organizations are part of. At the top is metagovernance, where the ground rules are set by governments or investors. For each layer, the governance dimensions, as well as the ena-blers and disablers between layers, are defined. The study's resulting theory provides an overall understanding of the governance of multiple networks for projects and provides practitioners with the parameters to optimize their networks for better project results. ARTICLE HISTORY
... This persistent commitment to troubled projects is evident across the globe, including, for example, Canada's "Phoenix Pay" system [5], England's NHS paperless system [2,3], Australia's visa processing system [44], the U.S. Veterans Administration health records system [108], and Ghana's customs systems [1]. Further, escalation continues to be a problem in "mega" projects [22,32] that often have significant IS infrastructure components, as well as smaller scale corporate [e.g., 7,28] and government projects [e.g., 19,87,123,124]. In recent years researchers are finding evidence of escalation in new product development and digital transformation projects [57,58,116,120,121,122]. ...
... This shortcoming does not mean that escalation is out of flavor -quite the contrary, there is still a vibrant discourse into project escalation using various methods in the IS field [e.g., 4,61,62,64,96,98] and in organizational research in general, including, for example, work into corporate social responsibility [33], firm strategy and operations [84,117], and distributed teams [71]. In particular escalation became an important topic in new product development [58,116,120]and research into infrastructural mega-projects [22,32]. However, there is less qualitative, case-based IS research. ...
Project escalation involves the continued, persistent commitment to a failing project. Through a qualitative meta-analysis of 15 published cases of large information systems (IS) projects in escalation situations, we develop an institutional perspective on IS projects in escalation situations. This perspective describes how project persistence emerges from a plurality of legitimizing institutional logics that decision-makers draw upon at different project stages to maintain and reduce their commitment to the project. Logics related to the project’s approval are not the same logics that guide decisions throughout the project. For example, while we find that innovation and economic logics of return on investment are salient before approval, economic costs tend to be more salient after approval, along with technical impositions and managerial concerns. We further find that managerial logics are particularly salient in reducing commitment to projects, and we detail the differences and point out contextual triggers of external scrutiny and leadership changes that can contribute to reduced commitment to a project and eventual de-escalation.
... We covered central aspects of negative heuristics, their impact on leadership, and how they may be mitigated in Flyvbjerg (2013, 2014, 2021a) and Flyvbjerg et al. (2009). Here we focus on positive heuristics and especially how they pertain to experienced leaders responsible for successfully building, running, and changing big projects, programs, and organizations. ...
... Take the Outside View. Base-rate neglect is a key cause of project failure (Flyvbjerg 2021a). The cure is to employ the outside view, which calculates empirical base rates from a reference class of completed projects similar to the one you are planning. ...
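The outside-view procedure excerpted above can be sketched in a few lines of code. This is an illustrative sketch only: the `required_uplift` helper and the reference-class figures below are hypothetical placeholders, not data from the article.

```python
# Illustrative sketch of the "outside view" (reference class forecasting).
# All numbers below are hypothetical, chosen only to show the mechanics.

def required_uplift(overruns, acceptable_risk):
    """Return the cost uplift factor needed so that the chance of exceeding
    the uplifted budget is at most `acceptable_risk`, given an empirical
    distribution of past overrun ratios (actual cost / estimated cost)."""
    ranked = sorted(overruns)
    # Index of the (1 - risk) quantile in the empirical distribution.
    idx = min(len(ranked) - 1, int((1 - acceptable_risk) * len(ranked)))
    return ranked[idx]

# Hypothetical reference class: overrun ratios of ten completed projects.
reference_class = [1.05, 1.10, 1.20, 1.25, 1.40, 1.45, 1.60, 1.80, 2.00, 2.50]

inside_view_estimate = 100.0  # planner's own ("inside view") estimate, in $m
uplift = required_uplift(reference_class, acceptable_risk=0.2)
print(f"Uplift factor at 20% overrun risk: {uplift:.2f}")
print(f"Risk-adjusted budget: ${inside_view_estimate * uplift:.0f}m")
```

The point of the technique is that the uplift comes from the distribution of outcomes in completed projects, not from the planner's own bottom-up estimate.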
Background Sustainable transport is fundamental to progress in realising the agenda of sustainable development, as a quarter of energy-related global greenhouse gas emissions come from the transport sector. In developing countries, metropolitan areas have adopted the agenda to better serve the urban population with safe, affordable, and environmentally-friendly transport systems. However, this drive must include relevant indicators and how their operationalisation can deal with institutional barriers, such as challenges to cross-sectoral coordination. Objective This study aims to explore context-specific indicators for developing countries, focusing on the case of the Jakarta metropolitan area. Methods Expert judgement was used to assess the selection criteria. The participants were experts from government institutions, non-government organisations, and universities. Results The findings show that safety, public transport quality, transport cost, air pollution, and accessibility are contextual indicators for application in developing countries. Similarities are shown with the research results from other indexes/sets of indicators for developing countries, for example, the Sustainable Urban Transport Index (SUTI) of UN ESCAP. However, some of these indicators leave room for improvement, such as the balance between strategic and operational levels of application. Conclusion Therefore, this research suggests that global sets of indicators should be adjusted before being implemented in particular developing country contexts.
... If the bias is not identified and dealt with up front, cost overruns are inevitable. Flyvbjerg (2021) takes this one step further. His position is that behavioral biases are not limited to cognitive biases, and that behavioral economics in its present form suffers from an overfocus on cognitive psychology: Economic decisions get over-accounted for in psychological terms, when political, sociological, and organizational perspectives may be more pertinent. ...
... Political bias is particularly important for big projects. Flyvbjerg (2021) argues that for large projects the most significant behavioral bias is political bias, also referred to as strategic misrepresentation. For real-world decision-making in big hierarchical organizations with millions and sometimes billions of dollars at stake, political bias is pervasive. ...
This article describes the process from first proposals in the early 1990s to project completion many years later for seven large Swedish road and railway projects. The purpose is to find reasons for the massive cost overruns as well as explanations for why projects are brought to completion despite much higher costs than when the decision to build was made. Cost overruns are set in an institutional context to highlight the interplay among national, regional, and local policymakers. National investment programs are seen as promises by other parts of society, irrespective of whether project costs increase during the process toward procurement and implementation. Another aspect is that the infrastructure manager’s administrative framework currently makes it impossible to compare costs in contracts with final cost, meaning that there is no institutionalized learning process in place. Design preparations and the estimation of costs for new projects must therefore be done without an understanding of what has been working well in the implementation of previous projects. While Benefit-Cost Analysis (BCA) played no role in the planning of the seven projects, the article sends a stark warning that early cost estimates provide poor input for assessing project rate of return.
... The project leadership is also considered to operate within collective rationalisation where the project reality is ignored, and the project managers make decisions with a feeble understanding of their working environment (Janis, 1982). Subsequently, the project is poorly managed with no project security plan (Land, Ricks and Ricks, 2014), lacks a Project Management Plan (including RAM) with little confidence in their capability to make the right decisions (s4.2 Management Plan; s4.5 Monitor and Control Project Work; PMI, 2017) and, due to groupthink, conduct themselves to create strategic levels of "misrepresentation" (Flyvbjerg, 2021). As one individual (1) suggested, "…The WB Group are isolated from reality of the project. ...
Risk and security management is an important aspect of construction activities, especially in countries where the level of security is challenging. This paper is focused on evaluating the impacts of risk/security measures resulting from stakeholder failures related to explosives mismanagement and security events, over an 18-month period, on a large, complex dam project in a remote area of Pakistan. In July 2021, an incident occurred that had huge ramifications for the risk/security management of a large dam construction site. A qualitative methodology was utilised, where content analysis was conducted on project documentary evidence and where the research design targeted a closed population of 12 Engineer supervisors/managers to explore their personal opinions. The outcomes indicated that the Employer, the Engineer, the WB group, and contractors engage in destructive managerial behaviour that primarily reduces project performance and creates unsafe project situations that were systemically induced. Further, stakeholders are not managing explosives or the security situation, underpinning poor project physical progress and leading to consistent project failure issues.
... This is of benefit to the industry, and new caving projects can leverage this information to help set their projects up for success. Flyvbjerg (2021) identifies a number of project management biases that can impact project development. Industry observations confirm a number of these biases are applicable to caving projects, namely: ...
Digitisation has gained industry-wide momentum and is changing how the sector operates. Emerging digital technologies such as Artificial Intelligence (AI) are increasingly implemented at the organisational, or most commonly the project, level. Notwithstanding, some industry leaders have failed to shape an environment of opportunity to disrupt; often used as a tool to pursue productivity, doing more of the same. We argue that there is an opportunity to disrupt the industry's construct, rather than to replicate ancient habits at a faster pace. However, such a pursuit creates a complex paradigm. Leaders demand innovation that makes sense to humans, i.e., to themselves, creating an epistemological barrier that limits the aspirational goal of disruption. This is better classified as Sustaining Innovation (SI) as opposed to Disruptive Innovation (DI), as it does not displace the original ideals and fundamentals of the sector's construct. A possible avenue to disrupt the current industry knowledge construct is to engage in a different level of consciousness, by generating insights through AI techniques at a size and speed impossible thus far. We present a case study in the field of Project Management and show results of disruptive insights in a portfolio of approximately US$ 20B, exploring the orthodoxies of such breakaway. We further discuss the leadership traits which could motivate DI in AEC, arguing that this requires an open engagement in an exploratory journey with limited certainty ex-ante, driven by awareness and vision. The realm of DI is not a replication of the observable world, rather an augmentation of it. Keywords: AEC; Digital transformation; Sustaining innovation; Disruptive innovation; Artificial Intelligence
How should government and business leaders solve big problems? Ought policy responses to occur in bold leaps or multitudinous methodical moves? Here we show that one-off major projects, with a high level of bespoke content, are prone systematically to poorer outcomes than projects built with a repeatable platform strategy. Repeatable projects are cheaper, faster, and scale in volume and variety at much lower risk of failure. We arrive at these conclusions using comparative evidence—NASA vs SpaceX—on cost, speed-to-market and schedule, and scalability outcomes of their respective space missions. Our reference class dataset consists of 203 space missions spanning 1963–2021, of which 181 missions belong to NASA and 22 belong to SpaceX. We find that SpaceX’s platform strategy was 10X cheaper and 2X faster than NASA’s bespoke strategy. Moreover, SpaceX’s platform strategy was less risky, virtually eliminating cost overruns. We further show that achieving platform repeatability is a strategically diligent process involving experimental learning sequences. Sectors of the economy where governments find it difficult to control spending or timeframes or to get benefits quickly enough—e.g. health, education, climate, defence—are ripe for a platform rethink.
Most cost-benefit analyses assume that the estimates of costs and benefits are more or less accurate and unbiased. But what if, in reality, estimates are highly inaccurate and biased? Then the assumption that cost-benefit analysis is a rational way to improve resource allocation would be a fallacy. Based on the largest dataset of its kind, we test the assumption that cost and benefit estimates of public investments are accurate and unbiased. We find this is not the case with overwhelming statistical significance. We document the extent of cost overruns, benefit shortfalls, and forecasting bias in public investments. We further assess whether such inaccuracies seriously distort effective resource allocation, which is found to be the case. We explain our findings in behavioral terms and explore their policy implications. Finally, we conclude that cost-benefit analysis of public investments stands in need of reform and we outline four steps to such reform.
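The kind of bias test this abstract describes can be illustrated with a simple two-sided sign test: if estimates were unbiased, overruns and underruns would split roughly 50/50. The function and the counts below are hypothetical, for illustration only; the study's actual dataset and statistical methods are more extensive.

```python
# Hypothetical sketch of testing forecast bias with an exact sign test.
# The overrun/underrun counts are made up for illustration.
from math import comb

def sign_test_p(overruns, underruns):
    """Two-sided exact sign test: probability of a split at least this
    lopsided if overruns and underruns were equally likely (p = 0.5)."""
    n = overruns + underruns
    k = max(overruns, underruns)
    tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# A heavily lopsided split yields a vanishingly small p-value,
# i.e., strong evidence that the estimates are systematically biased.
print(sign_test_p(overruns=86, underruns=14))
```

A balanced split (e.g., 50 overruns and 50 underruns) yields p = 1.0, consistent with unbiased forecasting; the more lopsided the split, the smaller the p-value.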
The Olympic Games are the largest, highest-profile, and most expensive megaevent hosted by cities and nations. Average sports-related costs of hosting are $12.0 billion. Non-sports-related costs are typically several times that. Every Olympics since 1960 has run over budget, at an average of 172 percent in real terms, the highest overrun on record for any type of megaproject. The paper tests theoretical statistical distributions against empirical data for the costs of the Games, in order to explain the cost risks faced by host cities and nations. It is documented, for the first time, that cost and cost overrun for the Games follow a power-law distribution. Olympic costs are subject to infinite mean and variance, with dire consequences for predictability and planning. We name this phenomenon "regression to the tail": it is only a matter of time until a new extreme event occurs, with an overrun larger than the largest so far, and thus more disruptive and less plannable. The generative mechanism for the Olympic power law is identified as strong convexity prompted by six causal drivers: irreversibility, fixed deadlines, the Blank Check Syndrome, tight coupling, long planning horizons, and an Eternal Beginner Syndrome. The power law explains why the Games are so difficult to plan and manage successfully, and why cities and nations should think twice before bidding to host. Based on the power law, two heuristics are identified for better decision making on hosting. Finally, the paper develops measures for good practice in planning and managing the Games, including how to mitigate the extreme risks of the Olympic power law.
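The planning consequence of an infinite-mean power law, which the abstract calls "regression to the tail," can be illustrated with a small simulation. This is a generic sketch, not the paper's analysis: we draw hypothetical Pareto-distributed overruns and watch the running mean fail to settle when the tail index alpha is below 1.

```python
# Illustrative sketch: why an infinite-mean power law defeats planning.
# For a Pareto distribution with tail index alpha <= 1, the theoretical
# mean is infinite, so the sample mean never converges -- each new extreme
# drags it upward ("regression to the tail").
import random

def pareto_sample(alpha, xm=1.0):
    """Draw one Pareto(alpha) value with minimum xm via inverse transform."""
    u = random.random()  # u in [0, 1), so 1 - u > 0
    return xm / (1 - u) ** (1 / alpha)

random.seed(42)

for alpha in (0.9, 3.0):  # 0.9: infinite mean; 3.0: finite mean and variance
    total, running_means = 0.0, []
    for n in range(1, 100_001):
        total += pareto_sample(alpha)
        if n % 20_000 == 0:
            running_means.append(total / n)
    print(f"alpha={alpha}: running means", ["%.2f" % m for m in running_means])
```

For alpha = 3.0 the running mean settles near the theoretical value alpha/(alpha - 1) = 1.5; for alpha = 0.9 it keeps jumping as rare extreme draws dominate the sum, which is exactly why budgets governed by such distributions are so hard to plan.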
Humans generally exhibit a pervasive future bias in favour of optimism. We overestimate the likelihood of success in work, relationships, and financial investments. Similarly, we underestimate the probability of experiencing negative events such as serious illness or financial ruin. The optimism bias is widely considered one of the most reproducible, prevalent, and robust cognitive biases observed in psychology and behavioural economics. The catastrophic impact of the recent economic collapse has laid this cognitive bias bare. In this introductory overview, current understanding of the neural basis of the optimism bias is explored. Topics considered include: converse negative biases in depressive illnesses, the role of dopamine in optimism bias generation and modulation, and evidence from functional neuroimaging studies. Research on the optimism bias has afforded us a unique window into decision-making, reward-processing, and the potential for systematic irrationality in the human mind.
Decision making in organizations is often pictured as a coherent and rational process in which alternative interests and perspectives are considered in an orderly manner until the optimal alternative is selected. Yet, as many members of organizations have discovered from their own experience, real decision processes in organizations only seldom fit such a description. This book brings together researchers who focus on cognitive aspects of decision processes, on the one hand, and those who study organizational aspects such as conflict, incentives, power, and ambiguity, on the other. It draws on the tradition of Herbert Simon, whose work showed the pervasive use of bounded rationality and heuristics of reasoning in organizational decision making. These multiple perspectives may further our understanding of organizational decision making. Organizational Decision Making is particularly well suited for students and faculty of business, psychology, and public administration.