
Journal of Risk Research

ISSN: 1366-9877 (Print) 1466-4461 (Online) Journal homepage: http://www.tandfonline.com/loi/rjrr20

The gambler’s fallacy fallacy (fallacy)

Marko Kovic & Silje Kristiansen

To cite this article: Marko Kovic & Silje Kristiansen (2017): The gambler’s fallacy fallacy (fallacy),

Journal of Risk Research, DOI: 10.1080/13669877.2017.1378248

To link to this article: http://dx.doi.org/10.1080/13669877.2017.1378248

Published online: 28 Sep 2017.



The gambler’s fallacy fallacy (fallacy)

Marko Kovic (ZIPAR – Zurich Institute of Public Affairs Research, Zurich, Switzerland) and Silje Kristiansen (College of Arts, Media & Design, Northeastern University, Boston, MA, USA)

ABSTRACT

The gambler's fallacy is the irrational belief that prior outcomes in a series of events affect the probability of a future outcome, even though the events in question are independent and identically distributed. In this paper, we argue that in the standard account of the gambler's fallacy, the gambler's fallacy fallacy can arise: the irrational belief that all beliefs pertaining to the probabilities of sequences of outcomes constitute the gambler's fallacy, when, in fact, they do not. Specifically, the odds of the probabilities of some sequences of outcomes can be epistemically rational in a given decision-making situation. Not only are such odds of probabilities of sequences of outcomes not the gambler's fallacy, but they can be implemented as a simple heuristic for avoiding the gambler's fallacy in risk-related decision-making. However, we have to be careful not to fall prey to a variant of the gambler's fallacy, the gambler's fallacy fallacy (fallacy), in which we do not calculate odds for the probabilities of sequences that matter, but rather simply believe that the raw probability for the occurrence of a sequence of outcomes is the probability for the last outcome in that sequence.

ARTICLE HISTORY
Received 31 December 2016; Accepted 20 July 2017

KEYWORDS
Gambler's fallacy; cognitive biases; cognitive heuristics; probability; risk perception

1. Introduction: gamblers, these poor irrational devils

Human cognition is systematically riddled with a specific category of errors, so-called cognitive heuristics or biases (Tversky and Kahneman 1974). Cognitive biases as a mode of 'fast', automated thinking (Evans 2008; Frankish 2010; Evans and Stanovich 2013) often lead to inferences that are good enough in a given decision-making situation, but they can also lead to decisions that clearly deviate from rational, utility-maximizing behavior. Cognitive biases are especially prominent in situations that involve risk assessment (Kasperson et al. 1988; Sjöberg 2000), as biases such as loss aversion (Tversky and Kahneman 1991), status quo bias (Samuelson and Zeckhauser 1988), the availability heuristic (Tversky and Kahneman 1973) and quite a few more amply demonstrate.

One very prominent bias in the context of risk perception is the gambler's fallacy: the irrational tendency for a negative recency effect whereby we tend to estimate the probability of an event as being conditional on past occurrences of that event, even though all events in the sequence of events are independent and identically distributed (Lepley 1963; Bar-Hillel and Wagenaar 1991). The beauty of the gambler's fallacy lies in its simplicity and clarity. Whereas some biases tend to appear only when choice alternatives are framed in particular linguistic ways, the gambler's fallacy is so obviously a cognitive bias that its existence is essentially taken for granted. The reality of the gambler's fallacy is widely accepted, and researchers have proposed sophisticated explanations, such as ad hoc mental Markov models, for why people fall prey to the gambler's fallacy (Oskarsson et al. 2009). Also, the fact that we are talking about a gambler's fallacy is in and of itself a great heuristic. It is not a secret

that gamblers tend to be irrational. The odds are never in their favor, yet they keep believing that their luck is about to turn around; all they need is a couple of more bets. The beauty of the concept of the gambler's fallacy also lies in the fact that it so accurately describes irrational behavior that we encounter in our everyday lives, beyond the somewhat limited context of gambling (Chen, Moskowitz, and Shue 2016).

CONTACT Marko Kovic marko.kovic@zipar.org

© 2017 Informa UK Limited, trading as Taylor & Francis Group

The gambler's fallacy is real. However, some beliefs in the context of independent events in a series of events that would, prima facie, be subsumed under the gambler's fallacy because they are concerned with prior outcomes in a sequence of outcomes are not actually examples of the gambler's fallacy, but rather epistemically rational beliefs. The goal of this paper is to clarify this potentially mistaken diagnosis of the gambler's fallacy, which we call the gambler's fallacy fallacy.

1.1. The gambler’s fallacy, frequentist probability and the law of small numbers

The standard account and presupposition of the gambler's fallacy is that the gambler's fallacy represents an irrational belief in the 'law of small numbers' (Tversky and Kahneman 1971; Rabin 2002): when one is committing the gambler's fallacy, so the argument goes, the error one is making lies in believing that the probabilistic properties of large samples also apply to small samples. According to this standard account, it is rational to believe that, say, after 1000 coin flips of a fair coin, heads will have come up around 50% of the time. But, according to the 'law of small numbers' account of the gambler's fallacy, it is irrational to believe that the same distribution of outcomes will result after only 10 coin flips.

It is indeed irrational to believe in something like the 'law of small numbers', since the law of large numbers does not translate to small, finite samples. It is possible that part of the problem with the gambler's fallacy simply stems from the intuitive belief in something like the 'law of small numbers'. However, this standard account of the gambler's fallacy – the gambler's fallacy as the belief in the 'law of small numbers' – relies on a frequentist notion of probability. Frequentist notions of probability, both as empirical and as hypothetical frequentism, are limited (Hájek 1996, 2009), not least because the idea of frequentist probability makes implicit ontological claims that are not easily reconcilable with the ontological realism that is the philosophical underpinning of the empirical sciences. It is important to note that the law of large numbers is not equivalent to frequentism. Rather, the law of large numbers is a justification of the frequentist idea of probability (Batanero, Henry, and Parzysz 2005; Hájek 2009). It can even be argued that the modern understanding of frequentist probability originated with Jakob Bernoulli's formulation of the weak law of large numbers (Verburgt 2014).

If we revisit the coin flipping example with a non-frequentist idea of probability, we see that a different idea of probability yields a different idea of rational beliefs. If you had to guess how many times a coin will land heads in a series of only 10 flips, you can express a rational belief given prior information: given a probability of 0.5 that a coin flip will result in heads, your best guess would be that there will be 5 heads in a series of 10 flips. That does not mean that you have anything close to perfect certainty in that outcome, but simply that, given your prior information, that expectation is justified. The prior information in this example is not a belief about hypothetical outcomes if we were to flip a coin 1000 times, 10,000 times, or infinitely often. Instead, the prior information is an expression of uncertainty given some information about reality.
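This distinction between a justified best guess and certainty can be made concrete with a short calculation. The following sketch (ours, not part of the original paper) computes the binomial expectation for 10 flips of a fair coin and shows that the best guess of 5 heads is nonetheless far from certain:

```python
from math import comb

# Probability of exactly k heads in n flips of a fair coin (binomial pmf).
def pr_heads(k: int, n: int, p: float = 0.5) -> float:
    return comb(n, k) * p**k * (1 - p) ** (n - k)

n = 10
# Expected number of heads: sum over k of k * Pr(k heads).
expected = sum(k * pr_heads(k, n) for k in range(n + 1))
print(expected)        # 5.0 -- the rational best guess given the prior information
print(pr_heads(5, n))  # 0.24609375 -- nowhere near perfect certainty in that outcome
```

The expectation is exactly 5 heads, yet that outcome occurs in only about a quarter of all 10-flip series, which is precisely the paper's point: the expectation is justified by the prior information, not guaranteed by it.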

It has been noted for some time that the diagnosis of cognitive biases is contingent on our underlying beliefs about probability (Gigerenzer 1991). This is also true of the gambler's fallacy. When we frame the gambler's fallacy in terms of frequentist probability, then some beliefs are labeled as irrational even though they are, potentially, rational – we refer to this misclassification of some specific beliefs as the gambler's fallacy fallacy. Within the standard account of the gambler's fallacy, any belief pertaining to the probabilistic properties of small samples is deemed irrational. That assessment, we argue, is painting with too broad a brush: there is one case where probabilistic beliefs about a small sample, or, more precisely, beliefs about the probabilities of sequences of outcomes, do not constitute the gambler's fallacy but are, instead, rational beliefs. By refining our understanding of the gambler's fallacy in this manner, a variant of the gambler's fallacy that is different from the general gambler's


fallacy becomes manifest; we refer to that gambler’s fallacy variant as the gambler’s fallacy fallacy

(fallacy).

2. The gambler’s fallacy

Imagine a gambler, Jane Doe, who engages in a small gamble. Jane Doe is not a pathological gambler; she is just playing a game for hedonic, recreational purposes. For the purpose of the following arguments, she also assumes the role of an enlightened gambler: she is not just playing for the fun or thrill of it, but she is actively engaged in metacognition while she is playing. In other words, Jane Doe is 'thinking slow', whereas, in the real world, even recreational, non-pathological gamblers are probably 'thinking fast'. Jane Doe is not a realistic stand-in for gamblers, but more of a narrative aid.

The game Jane Doe is playing is very simple: she is rolling a regular, fair six-sided die and her prior information is that every number on the die has a probability of exactly 1/6 of being rolled. Jane can roll the die three times. If she rolls the number four at least once, she wins. If she does not roll the number four at least once, she loses.

Jane has rolled the die twice already. Unfortunately, she did not get a success yet, but instead two failures in a row. Jane is about to roll the die for the third and final time. She feels that third time's the charm – after all, she failed twice in a row, and now, it is time for her chances to balance out. After all, the die is supposed to be fair, and in her subjective perception, Jane feels like that fairness should bring about a success after a series of failures. This intuition that Jane feels before rolling the die for the third time is the gambler's fallacy. In this example, Jane's gambler's fallacy takes the following specific form:

Pr(s | f, f) ≠ Pr(s)

Jane is committing the gambler's fallacy because she intuitively believes that the probability for a success given two failures is not the same as the probability for a success in only one roll of the die. That is not true: the probability for a success is not affected by prior outcomes.

Of course, not all situations in which the gambler's fallacy applies are games of rolling the die three times. However, the logic of the game Jane Doe is playing can be generalized onto other situations. A generalized gambler's fallacy can be described in the following manner:

Pr(O_{n+1} | O_i) ≠ Pr(O); i = 1, …, n

The gambler's fallacy is the belief that the probability for an outcome O after a series of outcomes is not the same as the probability for a single outcome.
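The independence claim is easy to check numerically. As a minimal illustration (ours, not part of the original paper), a short Monte Carlo simulation of Jane's game confirms that the relative frequency of a success after two failures stays at Pr(s) = 1/6:

```python
import random

random.seed(1)

TRIALS = 200_000
success_after_two_failures = 0
games_with_two_failures = 0

for _ in range(TRIALS):
    rolls = [random.randint(1, 6) for _ in range(3)]
    # A "success" is rolling a four; keep only games that start with two failures.
    if rolls[0] != 4 and rolls[1] != 4:
        games_with_two_failures += 1
        if rolls[2] == 4:
            success_after_two_failures += 1

# The conditional relative frequency approximates Pr(s | f, f) = 1/6 ≈ 0.167.
print(success_after_two_failures / games_with_two_failures)
```

The printed frequency lands close to 1/6 rather than anywhere above it: two prior failures do not make the third success more likely.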

3. The gambler’s fallacy fallacy

Jane was about to roll the die for the third and final time when she realized that her wishful thinking led her to the gambler's fallacy. The probability for a success on her third try, Jane had to concede to herself, is no more and no less than 1/6.

However, Jane still has some probabilistic information about her current game that seems intuitively non-trivial to her. That information does not pertain to the probability for success in the last try, but to overall probabilities of possible sequences of outcomes in her game. A first, general thought that occurs to Jane is that she knows the overall probability of succeeding at least once in a game of three die rolls:

Pr(∃s) = 1 − (5/6)³ ≈ 0.42

The probability that there will be at least one success, denoted above with the existential quantifier ∃, is 0.42, or 42%. What should Jane do with this probability? Of course, Jane instantly realizes that an


overall probability of obtaining at least one success is not very pertinent to her current situation. After all, those 42% contain all possible permutations that contain at least one success, but right now, Jane has already exhausted a great part of those possible permutations, or sequences. For example, the probability of obtaining three successes in a row should not inform Jane's beliefs about her current situation because she cannot end up with that particular sequence. It is tempting to fall back into the gambler's fallacy and believe that the probability for success at the third roll is 42%, but Jane can successfully resist the urge for this irrational belief.
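The 42% figure itself can be verified by brute force. The following sketch (ours, not from the paper) enumerates all 6³ = 216 equally likely three-roll sequences and counts those containing at least one four:

```python
from itertools import product

# Enumerate all 6**3 = 216 equally likely three-roll sequences.
sequences = list(product(range(1, 7), repeat=3))
at_least_one_four = sum(1 for seq in sequences if 4 in seq)

# 91 of 216 sequences contain a four: 91/216 ≈ 0.4213, matching 1 - (5/6)**3.
print(at_least_one_four / len(sequences))
print(1 - (5 / 6) ** 3)
```

The enumeration also makes the paper's objection visible: the 91 winning sequences include many, such as (4, 4, 4), that Jane can no longer reach after two failures, which is why the overall 42% is not pertinent to her current situation.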

In her current situation, Jane is certain that the sequence of outcomes that she will ultimately end up with has to be either f, f, f or f, f, s; Jane, of course, hopes for the latter. What is the probability that she will end up with the sequence f, f, s? At this point, Jane is fairly certain of the answer to that question:

Pr(f, f, s | f, f) = 1/6 ≈ 0.17

The probability for ending up with the sequence f, f, s after two failures is simply the probability for a single success in rolling the die. However, there is a second probability with regards to the sequence f, f, s that Jane is thinking of:

Pr(f, f, s) = 5/6 × 5/6 × 1/6 ≈ 0.12

The overall probability of ending up with the sequence f, f, s is around 12%. How does this statement relate to the statement above of a 17% probability? These are, of course, different probabilities. The probability of around 17%, or, more precisely, 1/6, is the conditional probability of ending up with the sequence f, f, s, given two failures – and this is, of course, the same as the probability for a single success. Jane is intuitively confident that this probability is the one she should, rationally, take into account in her current situation. It is possible to give her intuition a slightly clearer form by simply plugging the respective probabilities into Bayes' rule:

Pr(s | f, f) = Pr(f, f | s) Pr(s) / Pr(f, f) = ((5/6 × 5/6) × 1/6) / (5/6 × 5/6) = 1/6 ≈ 0.17

In contrast, the probability of around 12% refers to the overall probability of ending up with the sequence f, f, s given the nature of the game. Whereas Pr(s | f, f) is a conditional probability, Pr(f, f, s) is a compound probability.
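The distinction between the two probabilities can be traced in a few lines of arithmetic (our sketch, not part of the original paper); the conditional probability collapses to the single-roll success probability exactly as Bayes' rule dictates:

```python
# Single-roll probabilities in Jane's game: "success" is rolling a four.
p_s = 1 / 6
p_f = 5 / 6

pr_ff = p_f * p_f             # Pr(f, f): two failures in a row
pr_ffs = p_f * p_f * p_s      # compound probability Pr(f, f, s)
pr_s_given_ff = pr_ffs / pr_ff  # Pr(s | f, f) via Bayes' rule

print(round(pr_s_given_ff, 2))  # 0.17 -- conditional probability
print(round(pr_ffs, 2))         # 0.12 -- compound probability
```

The division by Pr(f, f) cancels the two failure terms, which is the algebraic reason the conditional probability equals the plain single-roll probability of 1/6.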

The general probability of the sequence f, f, s is 0.12. On its own, this probability is of limited use to Jane, because 1 − 0.12 is the probability for all other sequences, including sequences that, rationally, are of no interest to Jane in her current situation. For example, the probability of the sequence s, f, f should not impact Jane's current beliefs in any way, because that sequence is not possible in her current situation. Therefore, the probability for the sequence f, f, s is epistemically relevant only when it is compared to other epistemically relevant probabilities. In Jane's current situation, there is only one other relevant sequence – the sequence that Jane wishes to avoid, three failures in a row:

Pr(f, f, f) = (5/6)³ ≈ 0.58

In her current situation, Jane knows that the overall probability of the sequence f, f, s is 0.12, and the overall probability of the sequence f, f, f is much higher, 0.58. How should Jane process these two probabilities? Simply taken on their own, those probabilities are still relative to all possible sequences of outcomes, and those are not of interest to Jane. However, there is a simple way to process the two probabilities that are of interest: calculating their odds. Doing this is simple enough:

o((f, f, s) : (f, f, f)) = (5/6 × 5/6 × 1/6) / (5/6)³ = 1/5 = 1:5


The probability of ending up with the sequence f, f, f is 5 times higher than the probability of ending up with the sequence f, f, s. These odds are not at odds with avoiding the gambler's fallacy. This becomes obvious when we compare the odds of the probabilities for a single success and for a single failure given two failures (which are, of course, simply the probabilities for a single success and for a single failure):

o((s | f, f) : (f | f, f)) = (1/6) / (5/6) = 1/5 = 1:5

The odds are exactly the same as the odds for the sequences f, f, s and f, f, f. What does this mean? When Jane is comparing the probabilities for the sequences that are relevant to her, she will arrive at information that is epistemically rational, meaning that she is not committing the gambler's fallacy. The odds for ending up with the sequence that she hopes to arrive at, f, f, s, are the same as the odds of succeeding in her third and final try. Therefore, the odds for ending up with the sequence that Jane hopes to ultimately end up with can rationally inform her beliefs in her current situation.

In the standard account of the gambler's fallacy as the belief in the law of small numbers, considering sequences of outcomes in any way is considered irrational, because the law of large numbers does not pertain to small samples. We call the belief that sequences of outcomes should always be disregarded the gambler's fallacy fallacy: as can be easily demonstrated, sequences of outcomes can be used in an epistemically rational manner when odds of epistemically relevant sequences are considered.

In a very condensed form, the argument presented in this section is the following:

• First two die rolls: f, f
• Probability for s on the third roll: 1/6
• Probability for f on the third roll: 5/6
• Odds of these two probabilities: 1/6 : 5/6 = 1:5
• General probability for the sequence f, f, s: 5/6 × 5/6 × 1/6 = 25/216
• General probability for the sequence f, f, f: 5/6 × 5/6 × 5/6 = 125/216
• Odds of these two probabilities: 25/216 : 125/216 = 1:5

The odds for a success given two failures are the same as the odds for the sequence that ends with a success. This is a simple demonstration that thinking about the probabilities of sequences is not irrational when (and only when) the odds of the epistemically relevant sequences are considered.
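The equality of the two odds can be verified with exact rational arithmetic. The following sketch (ours, not part of the original paper) reproduces the condensed argument above using Python's `fractions` module:

```python
from fractions import Fraction

# Single-roll probabilities in Jane's game, as exact fractions.
p_s = Fraction(1, 6)
p_f = Fraction(5, 6)

# Odds of the single outcomes on the third roll: Pr(s) : Pr(f).
odds_outcomes = p_s / p_f

# Odds of the full sequences: Pr(f, f, s) : Pr(f, f, f).
odds_sequences = (p_f * p_f * p_s) / (p_f * p_f * p_f)

print(odds_outcomes, odds_sequences)  # 1/5 1/5 -- both odds are exactly 1:5
```

Because the shared prefix 5/6 × 5/6 cancels in the ratio, the sequence odds reduce exactly to the single-outcome odds, which is the core of the gambler's fallacy fallacy argument.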

4. The gambler’s fallacy fallacy (fallacy)

Let us assume that Jane failed to turn the probabilities for the sequences f, f, s and f, f, f into odds, and that instead, she proceeded to believe that the probability Pr(f, f, s) = 0.12 is the direct probability that she will succeed in her third and final try. Such a belief is rather obviously fallacious, but it does not quite constitute the gambler's fallacy. Instead, it is more of a variant of the gambler's fallacy. We label this variant of the gambler's fallacy the gambler's fallacy fallacy (fallacy).

In a more general form, the gambler's fallacy fallacy (fallacy) can be described in the following manner:

Pr(O_1, …, O_n) = Pr(O_n)

If we compare the gambler's fallacy fallacy (fallacy) to the gambler's fallacy as presented in Section 2, we see that the two fallacies are not the same. The gambler's fallacy is the belief that the probability for an outcome given a series of outcomes is not the same as the probability for a singular outcome. The gambler's fallacy fallacy (fallacy), on the other hand, is the belief that the probability for a series of outcomes is the same as the probability for the last outcome in that series of outcomes.


5. In summary: the gambler's fallacy, the gambler's fallacy fallacy, and the gambler's fallacy fallacy (fallacy)

In the previous Sections 2–4, we have discussed the gambler's fallacy, the gambler's fallacy fallacy, and the gambler's fallacy fallacy (fallacy). This nomenclature is genealogically inspired, since we are talking about concepts that stem from the concept of the gambler's fallacy. But that wordplay might sound ever so slightly confusing. For the sake of clarity, we therefore briefly summarize the three concepts in this section.

The gambler's fallacy is the belief that the probability for an outcome after a series of outcomes is not the same as the probability for a single outcome. The gambler's fallacy is real and true in cases where the events in question are independent and identically distributed.

The gambler's fallacy fallacy is our argument that, contrary to the standard account of the gambler's fallacy, probabilities of sequences of outcomes can be epistemically rational in situations where the gambler's fallacy might arise. This is the case when (and only when) the odds of the probabilities of the relevant sequences of outcomes are compared to each other. Those odds are the same as the odds of the singular outcomes at the end of those sequences.

The gambler's fallacy fallacy (fallacy) is the irrational belief that the probability for a series of outcomes is the same as the probability for the last outcome in that series of outcomes. The gambler's fallacy fallacy (fallacy) is a variant of the gambler's fallacy that arises from an irrational implementation of the gambler's fallacy fallacy argument.

5.1. Prior vs. posterior probabilities

In the preceding sections, our enlightened gambler Jane Doe has thought about conditional as well as compound probabilities. In those examples, both the conditional probabilities and the compound probabilities are discussed in terms of general probabilities: if one were to throw a die three times, some outcome would happen with some probability. These general probabilities can also be called prior probabilities.

What happens when we substitute some of the prior probabilities with so-called posterior probabilities? More specifically, let's say that for the sequence f, f, s, we are treating f, f as specific outcomes that have already occurred rather than parts of a sequence of outcomes that can occur in general. In other words, if we observe f, f and we express the probability that those outcomes have occurred, we are assigning a kind of posterior probability. In our scenario, Jane has observed both outcomes f, f with her own eyes, and she is fairly confident that she has correctly observed the outcomes. Therefore, Jane decides that the posterior probability of the first f having occurred is 1, and the posterior probability of the second f having occurred is 1 as well. Does anything change about the gambler's fallacy, the gambler's fallacy fallacy, and the gambler's fallacy fallacy (fallacy) when we think in terms of prior and posterior probabilities?

Of course not: the gambler's fallacy is fallacious a priori, and therefore, it has to be fallacious a posteriori as well. Accordingly, all of the simple calculations above work just as they do with prior probabilities. However, it is rather important not to believe that using posterior instead of prior probabilities is a 'solution' for the gambler's fallacy. For example, if one were to use the posterior probabilities of f, f in the sequence f, f, s, it is tempting to declare 1 × 1 × 1/6 = 1/6 as a 'solution' for the gambler's fallacy, when, of course, it is anything but – this is simply the gambler's fallacy fallacy (fallacy). The fact that the 'solution' in this example is correct is irrelevant, because this is simply a case of epistemic luck (Engel 1992; Pritchard 2004) where one might accidentally, but irrationally, arrive at a true belief.


6. Discussion

6.1. Real-world implications of the gambler’s fallacy fallacy

There might be some value in discussing the gambler's fallacy in purely theoretical terms, but the more important reason why the gambler's fallacy matters is that this error occurs in real-world risk-related decision-making situations. Take the context of natural disasters as an example: one might be inclined to believe that, since some place has not experienced major earthquakes in a long time, a major earthquake is 'overdue'. Or, conversely and potentially more gravely: after a major earthquake has occurred, one might believe that no earthquakes will happen for a while. Or take the context of criminal justice as another example: after a judge has ruled a number of terrorist suspects not guilty, he might bias his next decision by the implicit belief that in a long sequence of suspects, there should be one real and dangerous terrorist. Or, conversely: the judge might believe that the probability of five guilty terrorist suspects in a row is very low, and he might therefore develop a bias towards ruling the fifth suspect not guilty. Or take complex technologies as yet another example: after a period in which there have been no critical failures of nuclear power plants, one might feel that an accident is around the corner. Or, conversely: after a critical failure in a nuclear power plant, one might believe that another such accident will not occur for a long time.

The gambler's fallacy is a phenomenon that clearly matters in the real world, and we should strive to reduce its prevalence. The concept of the gambler's fallacy fallacy can do so in two ways: first, by reducing 'false positive' misclassifications of beliefs as the gambler's fallacy, and second, and more importantly, by providing a heuristic for real-world decision-making.

The first aspect of the usefulness of the gambler's fallacy fallacy is not a reduction of the occurrence of the gambler's fallacy, but rather a better detection mechanism. As we argue above, there are beliefs that are epistemically rational but that might be misclassified as instances of the gambler's fallacy when applying a narrowly frequentist understanding of probability. Obviously, it is desirable to have as few of these 'false positive' misclassifications as possible.

The second aspect of the usefulness of the gambler's fallacy fallacy is more important. In real-world decision-making, we often want to reduce the impact of cognitive biases as much as possible, and so-called debiasing strategies have been explored for decades (Fischoff 1981). Many potential debiasing strategies are based on active learning about cognitive biases and metacognition through cognitive forcing (Croskerry 2003; Croskerry, Singhal, and Mamede 2013). In the context of the gambler's fallacy, it is probably possible to learn about the fallacy and reduce its impact by forcing ourselves to enter a metacognitive mode of thinking. But that strategy can only have limited success. The gambler's fallacy occurs in situations in which we might not always have a lot of time to enter a careful mode of thinking; we are under pressure and we need quick inferences. This is where the gambler's fallacy fallacy as a heuristic comes into play: the idea of framing probabilities of sequences that matter in a given context as odds can nudge us into avoiding the gambler's fallacy.

Thinking in probabilities is difficult, and actively avoiding the gambler's fallacy can be challenging (if it weren't, the gambler's fallacy would not be a thing in the first place). This is where the gambler's fallacy fallacy becomes relevant: rather than actively avoiding the gambler's fallacy (a cognition-intensive task), a gentle 'nudge' into thinking about odds of sequences could help avoid the gambler's fallacy. This means that thinking in odds that compare relevant sequences of outcomes can potentially alleviate the impact of the gambler's fallacy by offering an intuitive and simple heuristic.

6.2. How do real-world people deal with probabilities of sequences?

In the previous section, we have put forward the argument that taking into consideration the odds of the probabilities of sequences of outcomes can help alleviate the gambler's fallacy because these odds are identical to the odds of the probabilities for the singular outcomes at the end of those sequences. We argue that such a comparison of odds might alleviate the gambler's fallacy because it could serve as an intuitive heuristic, since odds are a tool that is common in everyday, real-world situations. That argument, however, raises an important question: how do people actually process probabilities of sequences in the real world?

There is reason to believe that human perception of probabilities of sequences is biased towards real-world experiences of sequences rather than based on idealized theoretical probabilities (Hahn and Warren 2009, 2010). That is a fairly important insight with at least four implications for the present paper. First, Jane Doe, the idealized enlightened gambler that we introduced in Section 2, is indeed only a narrative support and not a realistic representation of human cognition during the judgment of probabilities. Human cognition in the context of the gambler's fallacy is characterized by a strong reliance on intuition and experience rather than by a state of extended metacognition (metacognition as 'thinking slow' is what our idealized gambler Jane Doe is doing). Second, if our judgments about probabilities in the context of the gambler's fallacy are based on real-world experience, then the standard account of the gambler's fallacy as the belief in the 'law of small numbers' becomes less plausible, since the origin of the gambler's fallacy is not a generalization of the law of large numbers onto small samples, but rather prior experience. Third, subjective intuitions about the probabilities of sequences of outcomes seem not to be completely epistemically irrational, since our intuition seems to approximate odds of the probabilities of different sequences fairly well. Fourth, these arguments and findings add a degree of plausibility to our main conclusion: operating with odds of probabilities of outcomes does indeed seem to be something that we are intuitively capable of and comfortable with, even in modes of thinking that are 'fast' rather than 'slow'.

6.3. What about the hot hand fallacy?

The gambler's fallacy is often presented and discussed together with the so-called hot hand fallacy (Ayton and Fischer 2004; Sundali and Croson 2006), a belief in 'streakiness' of outcomes. The gambler's fallacy and the hot hand fallacy might be related, but for at least two reasons, the arguments proposed in this paper do not apply to the hot hand fallacy. First, the hot hand fallacy is quite obviously not just a symmetrically opposite belief to the gambler's fallacy. The gambler's fallacy can be understood as:

Pr(O_{n+1} | O_i) ≠ Pr(O); i = 1, …, n

The hot hand fallacy, on the other hand, looks more like this:

Pr(O_a | O_b) < Pr(O_a | O_a)

In the context of Jane Doe's die rolling game, this would mean that:

Pr(s | f) < Pr(s | s)

The underlying belief of the hot hand fallacy is the belief that prior outcomes positively aﬀect the

probability for the same outcome in the future. For example, if Jane Doe believed that rolling a four

on a die increased the probability of rolling a four again on her next try, she would be committing

the hot hand fallacy. If, on the other hand, Jane believed more generally that the probability of rolling a four on her second try is not the same as the probability of rolling a four in one single try, then she would be committing the gambler’s fallacy. Although in this example Jane’s hot hand fallacy is accidentally a special case of the gambler’s fallacy, the underlying beliefs are very different, and the underlying belief of the hot hand fallacy is categorically more irrational: whereas the gambler’s fallacy is a misreading of the probabilities of sequences motivated by wishful thinking, the hot hand fallacy is a total abandonment of probabilities in favor of wishful thinking.
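Since die rolls are independent and identically distributed, the two conditional probabilities in the hot hand inequality are in fact equal, which a short simulation can illustrate (a sketch of our own, not part of the original argument; ‘success’ means rolling a four, as in the Jane Doe example):

```python
import random

random.seed(42)

# Simulate a long run of fair, independent die rolls; "success" = rolling a four.
rolls = [random.randint(1, 6) for _ in range(1_000_000)]
success = [r == 4 for r in rolls]

# Estimate Pr(s | s) and Pr(s | f): the relative frequency of a success
# immediately after a success vs. immediately after a failure.
after_s = [b for prev, b in zip(success, success[1:]) if prev]
after_f = [b for prev, b in zip(success, success[1:]) if not prev]

p_s_given_s = sum(after_s) / len(after_s)
p_s_given_f = sum(after_f) / len(after_f)

print(f"Pr(s|s) ≈ {p_s_given_s:.4f}")
print(f"Pr(s|f) ≈ {p_s_given_f:.4f}")
```

Both estimates hover around 1/6, so for a fair die a belief that Pr(s | s) exceeds Pr(s | f) has no empirical footing.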

Second, the hot hand fallacy is often relevant in domains where the events of interest are not

necessarily independent and identically distributed. In such domains, the hot hand fallacy is not a

cognitive issue, but simply an empirical one. For example, the hot hand fallacy in the context of

basketball (Gilovich, Vallone, and Tversky 1985) has been challenged on the grounds that a diﬀerent

way of empirically measuring streakiness actually provides support for the existence of hot hands

(Miller and Sanjurjo 2016).

All of this is not to say that the hot hand fallacy is irrelevant – far from it. For example, so-called

Black Swan events play an important role in risk perception and policy-making (Mueller and Stewart

2016; Wardman and Mythen 2016). Irrational overreactions to Black Swan events represent a form

of the hot hand fallacy, whereby policy-makers believe that the occurrence of an outcome increases

the probability of the same outcome in the future, even though such dependence might not actually

exist. A typical and persistent example of this is the belief that big earthquakes might become more

probable after one or several big earthquakes have occurred, even though earthquake data indicates

that big earthquakes are independent (Daub et al. 2012; Parsons and Geist 2012; Shearer and Stark

2012).

7. Conclusion

The arguments presented in this paper are, conceptually, minor but non-trivial adjustments of our

understanding of the gambler’s fallacy. In the standard account of the gambler’s fallacy, a frequentist

notion of probability is applied, or at least implied. According to that standard account, the error of

the gambler’s fallacy lies in believing that long-run frequencies of (inﬁnitely) large samples should

be represented in small samples as well. A diﬀerent probabilistic approach, whereby probabilities are

quantiﬁcations of uncertainty, allows for a more ﬁne-grained understanding of the gambler’s fallacy.

In this understanding, the characterization of the gambler’s fallacy as the belief that the probability of an outcome is conditional on prior outcomes when it is not remains true. However, a gambler’s fallacy fallacy also

comes into view: the belief that all beliefs about sequences of outcomes are epistemically void or

irrational in a situation in which the gambler’s fallacy can occur. That is not the case. The odds of

contextually relevant sequences of outcomes can be epistemically rational and they can provide a

relevant source of information.

7.1. Our beliefs about probability matter

The gambler’s fallacy is perhaps one of the best known cognitive biases, but so far, it has received little

attention in the areas of risk research and risk assessment. The concept of the gambler’s fallacy can

provide an analytical lens through which to understand some problems in risk assessment. After all,

there is a plethora of situations that involve risk in which the gambler’s fallacy can play a detrimental

role. If we are to consider the gambler’s fallacy as a concept that is relevant in risk analysis, then we

also have to think about how to think about the gambler’s fallacy. The main argument of this paper

is that the epistemological foundation of the gambler’s fallacy should be aligned with that of risk

analysis: probability as a quantiﬁcation of uncertainty. When we apply probability as it is understood

in risk analysis to the gambler’s fallacy, then we get a more accurate picture of what the gambler’s

fallacy is, and, perhaps even more importantly, of what it is not. Ultimately, this allows us to devise

countermeasures against the gambler’s fallacy, such as, as proposed in this paper, calculating simple

and intuitive odds for the probabilities of sequences of outcomes.
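The proposed countermeasure can be made concrete with a small sketch (our own illustration, using a fair die and an arbitrary horizon of ten rolls, neither of which is specified in the paper): instead of reasoning about a single next outcome, compute the probabilities of the contextually relevant sequences and express them as odds.

```python
p = 1 / 6  # probability of rolling a four on a fair die (illustrative)
n = 10     # an arbitrary horizon of ten rolls

# Two contextually relevant sequences over n rolls:
p_no_four = (1 - p) ** n        # not a single four in n rolls
p_at_least_one = 1 - p_no_four  # at least one four in n rolls

# Express the comparison as odds rather than as a shifted single-roll probability.
odds = p_at_least_one / p_no_four
print(f"Pr(at least one four in {n} rolls) = {p_at_least_one:.3f}")
print(f"Pr(no four in {n} rolls)           = {p_no_four:.3f}")
print(f"Odds ≈ {odds:.1f} : 1")
```

The point is not the specific numbers but the form of the belief: odds over sequences of outcomes, rather than a changed probability for the next single outcome.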

Disclosure statement

No potential conﬂict of interest was reported by the authors.

References

Armstrong, D. M. 1973. Belief, Truth and Knowledge. London: Cambridge University Press.

Ayton, Peter, and Ilan Fischer. 2004. “The Hot Hand Fallacy and the Gambler’s Fallacy: Two Faces of Subjective

Randomness?” Memory & Cognition 32 (8): 1369–1378. http://link.springer.com/article/10.3758/BF03206327.

Bar-Hillel, Maya, and Willem A. Wagenaar. 1991. “The Perception of Randomness.” Advances in Applied Mathematics 12 (4):

428–454. http://www.sciencedirect.com/science/article/pii/019688589190029I.

Batanero, Carmen, Michel Henry, and Bernard Parzysz. 2005. “The Nature of Chance and Probability.” In Exploring

Probability in School, edited by Graham A. Jones, Vol. 40, Mathematics Education Library. 15–37. Springer US.

doi:10.1007/0-387-24530-8_2.

Buchak, Lara. 2014. “Belief, Credence, and Norms.” Philosophical Studies 169: 285–311. https://link.springer.com/article/

10.1007/s11098-013-0182-y.

Chen, Daniel, Tobias J. Moskowitz, and Kelly Shue. 2016. Decision-making under the Gambler’s Fallacy: Evidence from Asylum

Judges, Loan Oﬃcers, and Baseball Umpires. Working Paper 22026, National Bureau of Economic Research. http://www.

nber.org/papers/w22026.

Croskerry, Pat. 2003. “Cognitive Forcing Strategies in Clinical Decisionmaking.” Annals of Emergency Medicine 41 (1):

110–120. http://www.annemergmed.com/article/S0196-0644(02)84945-9/abstract.

Croskerry, Pat, Geeta Singhal, and Sílvia Mamede. 2013. “Cognitive Debiasing 2: Impediments to and Strategies

for Change.” BMJ Quality & Safety, bmjqs-2012-001713. http://qualitysafety.bmj.com/content/early/2013/08/30/bmjqs-2012-001713.

Daub, Eric G., Eli Ben-Naim, Robert A. Guyer, and Paul A. Johnson. 2012. “Are Megaquakes Clustered?” Geophysical Research

Letters 39 (6): L06308. http://onlinelibrary.wiley.com/doi/10.1029/2012GL051465/abstract.

Engel, Mylan. 1992. “Is Epistemic Luck Compatible with Knowledge?” The Southern Journal of Philosophy 30 (2): 59–75.

http://onlinelibrary.wiley.com/doi/10.1111/j.2041-6962.1992.tb01715.x/abstract.

Evans, Jonathan St B. T. 2008. “Dual-processing Accounts of Reasoning, Judgment, and Social Cognition.” Annual Review

of Psychology 59 (1): 255–278. doi:10.1146/annurev.psych.59.103006.093629.

Evans, Jonathan St B. T., and Keith E. Stanovich. 2013. “Dual-process Theories of Higher Cognition: Advancing the Debate.”

Perspectives on Psychological Science 8 (3): 223–241. http://pps.sagepub.com/content/8/3/223.

de Finetti, Bruno. 1970. “Logical Foundations and Measurement of Subjective Probability.” Acta Psychologica 34: 129–145.

http://www.sciencedirect.com/science/article/pii/0001691870900120.

Fischhoff, Baruch. 1981. Debiasing. Technical report.

Frankish, Keith. 2010. “Dual-process and Dual-system Theories of Reasoning.” Philosophy Compass 5 (10): 914–926. http://

onlinelibrary.wiley.com/doi/10.1111/j.1747-9991.2010.00330.x/abstract.

Gallie, Walter B. 1955. “Essentially Contested Concepts.” Proceedings of the Aristotelian Society 56: 167–198. http://www.jstor.org/stable/4544562.

Gigerenzer, Gerd. 1991. “How to Make Cognitive Illusions Disappear: Beyond “Heuristics and Biases”.” European Review of

Social Psychology 2 (1): 83–115. doi:10.1080/14792779143000033.

Gilovich, Thomas, Robert Vallone, and Amos Tversky. 1985. “The Hot Hand in Basketball: On the Misperception

of Random Sequences.” Cognitive Psychology 17 (3): 295–314. http://www.sciencedirect.com/science/article/pii/

0010028585900106.

Hahn, Ulrike, and Paul A. Warren. 2009. “Perceptions of Randomness: Why Three Heads are Better than Four.” Psychological

Review 116 (2): 454–461.

Hahn, Ulrike, and Paul A. Warren. 2010. “Why Three Heads are a Better Bet than Four: A Reply to Sun, Tweney, and Wang

(2010).” Psychological Review 117 (2): 706–711.

Hájek, Alan. 1996. “‘Mises Redux’ – Redux: Fifteen Arguments against Finite Frequentism.” Erkenntnis (1975-) 45 (2/3):

209–227. http://www.jstor.org/stable/20012727.

Hájek, Alan. 2009. “Fifteen Arguments against Hypothetical Frequentism.” Erkenntnis 70: 211–235.

Hintikka, Jaakko. 1962. Knowledge and Belief: An Introduction to the Logic of the Two Notions. Ithaca: Cornell University

Press.

Kasperson, Roger E., Ortwin Renn, Paul Slovic, Halina S. Brown, Jacques Emel, Robert Goble, Jeanne X. Kasperson,

and Samuel Ratick. 1988. “The Social Ampliﬁcation of Risk: A Conceptual Framework.” Risk Analysis 8 (2): 177–187.

doi:10.1111/j.1539-6924.1988.tb01168.x.

Kelly, Thomas. 2003. “Epistemic Rationality as Instrumental Rationality: A Critique.” Philosophy and Phenomenological

Research 66 (3): 612–640. http://onlinelibrary.wiley.com/doi/10.1111/j.1933-1592.2003.tb00281.x/abstract.

Lepley, William M. 1963. “‘The Maturity of the Chances’ : A Gambler’s Fallacy.” The Journal of Psychology 56 (1): 69–72.

doi:10.1080/00223980.1963.9923699.

Miller, Joshua Benjamin, and Adam Sanjurjo. 2016. Surprised by the Gambler’s and Hot Hand Fallacies? A Truth in the Law of Small Numbers. SSRN Scholarly Paper ID 2627354. Rochester, NY: Social Science Research Network. https://papers.ssrn.com/abstract=2627354.

Mueller, John, and Mark G. Stewart. 2016. “The Curse of the Black Swan.” Journal of Risk Research 19 (10): 1319–1330.

doi:10.1080/13669877.2016.1216007.

Oskarsson, An T., Leaf Van Boven, Gary H. McClelland, and Reid Hastie. 2009. “What’s Next? Judging Sequences of Binary

Events.” Psychological Bulletin 135 (2): 262–285.

Parsons, Tom, and Eric L. Geist. 2012. “Were Global M ≥ 8.3 Earthquake Time Intervals Random between 1900 and 2011?” Bulletin of the Seismological Society of America 102 (4): 1583–1592. http://www.bssaonline.org/content/102/4/1583.

Pritchard, Duncan. 2004. “Epistemic Luck.” Journal of Philosophical Research 29: 191–220. https://www.pdcnet.org/pdc/

bvdb.nsf/purchase?openform&fp=jpr&id=jpr_2004_0029_0191_0220.

Rabin, Matthew. 2002. “Inference by Believers in the Law of Small Numbers.” The Quarterly Journal of Economics 117 (3):

775–816. http://qje.oxfordjournals.org/content/117/3/775.

Ramsey, Frank P. 2016. “Truth and Probability.” In Readings in Formal Epistemology, edited by Horacio Arló-Costa,

Vincent F. Hendricks, and Johan van Benthem, Vol. 1, Springer Graduate Texts in Philosophy, 21–45. Springer.

doi:10.1007/978-3-319-20451-2_3.

Samuelson, William, and Richard Zeckhauser. 1988. “Status Quo Bias in Decision Making.” Journal of Risk and Uncertainty

1 (1): 7–59. http://link.springer.com/article/10.1007/BF00055564.

Searle, John R. 1976. “A Classiﬁcation of Illocutionary Acts.” Language in Society 5 (1): 1–23. http://www.jstor.org/stable/4166848.

Shearer, Peter M., and Philip B. Stark. 2012. “Global Risk of Big Earthquakes has not Recently Increased.” Proceedings of

the National Academy of Sciences of the United States of America 109 (3): 717–721. http://www.ncbi.nlm.nih.gov/pmc/

articles/PMC3271898/.

Sjöberg, Lennart. 2000. “Factors in Risk Perception.” Risk Analysis 20 (1): 1–12. http://onlinelibrary.wiley.com/doi/10.1111/

0272-4332.00001/abstract.

Sundali, James, and Rachel Croson. 2006. “Biases in Casino Betting: The Hot Hand and the Gambler’s Fallacy.” Judgment and Decision Making 1 (1): 1–12. https://www.researchgate.net/publication/5140572_Biases_in_Casino_Betting_The_Hot_Hand_and_the_Gambler’s_Fallacy.

Tversky, Amos, and Daniel Kahneman. 1971. “Belief in the Law of Small Numbers.” Psychological Bulletin 76 (2): 105–110.

Tversky, Amos, and Daniel Kahneman. 1973. “Availability: A Heuristic for Judging Frequency and Probability.” Cognitive

Psychology 5 (2): 207–232. http://www.sciencedirect.com/science/article/pii/0010028573900339.

Tversky, Amos, and Daniel Kahneman. 1974. “Judgment under Uncertainty: Heuristics and Biases.” Science 185 (4157):

1124–1131. http://www.sciencemag.org/content/185/4157/1124.

Tversky, Amos, and Daniel Kahneman. 1991. “Loss Aversion in Riskless Choice: A Reference-dependent Model.” The

Quarterly Journal of Economics 106 (4): 1039–1061. http://www.jstor.org/stable/2937956.

Verburgt, Lukas M. 2014. “Remarks on the Idealist and Empiricist Interpretation of Frequentism: Robert Leslie Ellis

versus John Venn.” BSHM Bulletin: Journal of the British Society for the History of Mathematics 29 (3): 184–195.

doi:10.1080/17498430.2014.889269.

Wardman, Jamie K., and Gabe Mythen. 2016. “Risk Communication: Against the Gods or against All Odds?

Problems and Prospects of Accounting for Black Swans.” Journal of Risk Research 19 (10): 1220–1230.

doi:10.1080/13669877.2016.1262002.

Appendix 1. Probability, beliefs, rationality: some conceptual remarks

Throughout this paper, we talk about ‘probability’, ‘beliefs’ and ‘rationality’. Even though these terms are part of everyday

language as well as of scientiﬁc jargon, they are not easy to deﬁne. To some degree, they might represent so-called

essentially contested concepts (Gallie 1955): There is a common baseline understanding of those concepts, but there is

justiﬁed disagreement about their precise deﬁnition. Even though the concepts of probability, belief and rationality do

not have a single agreed upon deﬁnition, it is worth brieﬂy exploring those concepts in the way they are applied in this

paper.

A.1. What is probability?

Probability is a number that can take the value of anything between 0 and 1. Or, expressed diﬀerently, a probability is any

number x for which {x ∈ ℝ | 0 ≤ x ≤ 1}. But that, of course, is only a description of probability, not a deﬁnition. Even

though probability is a term that we use on a day-to-day basis, deﬁning what we really mean when we are talking about

probability is not at all easy or straightforward.

There are several interpretations of probability, two of which are particularly relevant: Frequentist interpretations of

probability and subjectivist interpretations of probability. Within the frequentist paradigm of probability, probability is

deﬁned in two ways. Finite frequentist probability (Hájek 1996) posits that the probability of an attribute A in a reference class B is the relative frequency of occurrences of A within B. Finite frequentist probability is perhaps the most intuitive

interpretation of probability. If we pick ten apples from a bowl, and three of the ten apples are red, the other ones

green, then we can easily calculate a probability for picking a green or red apple. The second frequentist interpretation

of probability is hypothetical frequentist probability. Hypothetical frequentist probability posits that the probability of an

attribute A in a reference class B is the limit of the relative frequency of As among Bs if there were an inﬁnite sequence of Bs (Hájek 2009). Hypothetical frequentism is very similar to ﬁnite frequentism; the only diﬀerence is the assumption

that some experiment is repeated inﬁnitely often. Hypothetical frequentism is not as intuitively applicable to real-world

situations as ﬁnite frequentism, because the idea of collecting some data inﬁnitely often, obviously, does not work in

reality. But hypothetical frequentism is fairly important in real-world applications. For example, so-called frequentist

statistics rely on hypothetical frequentism for calculating p-values.
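The limiting idea behind hypothetical frequentism can be illustrated with a short simulation (a sketch of our own, not from the paper; no finite run ever reaches the limit, it only approaches it):

```python
import random

random.seed(1)

# Flip a fair coin and watch the relative frequency of heads settle
# toward the hypothetical-frequentist limit of 0.5 as trials accumulate.
checkpoints = [10, 100, 1_000, 10_000, 100_000]
heads = 0
flips = 0
for n in checkpoints:
    while flips < n:
        heads += random.random() < 0.5  # True counts as 1
        flips += 1
    print(f"after {flips:>6} flips: relative frequency = {heads / flips:.4f}")
```

The printed frequencies wander at small sample sizes and stabilize near 0.5 as the sample grows, which is the intuition the limit formalizes.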

The subjectivist view of probability is very diﬀerent from the frequentist one. Whereas the general idea of frequentism is

to (hypothetically) count frequencies and derive probabilities from them, subjectivist probability proposes to understand

probability as an expression of degree of belief in uncertain situations (de Finetti 1970). The subjectivist interpretation

might not be immediately as intuitive as the frequentist one, but subjectivist probability is, arguably, the only actual

interpretation of probability – a frequentist attempt at probability remains meaningless unless we infuse it with some

subjectivist meaning. Take, for example, a simple frequentist scenario of coin ﬂipping: you ﬂip a coin 100 times and try to

derive the probability for heads from those empirical observations. Let us say that you have obtained 53 heads in your

100 trials. The proportion of heads in your reference class is 0.53. In a very simplistic manner, you could now state that

the probability for heads is around Pr(heads) = 0.5. But what would you actually mean by that? Chances are you are not

simply trying to use another word for proportion. Instead, when we work with a probability, we are usually expressing

a belief about some proposition about the world. When we observe that the proportion of heads in our sample is 0.53,

then we can use that information to form a belief about our coin. This means that frequentism is simply one method for

arriving at justiﬁed probabilistic beliefs.
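One way to make the step from observed proportion to degree of belief explicit is a simple Bayesian update, sketched here as our own illustration (the paper does not prescribe this method, and the uniform prior is our assumption): the 53 heads in 100 flips update a uniform prior over the coin’s bias into a posterior whose mean can serve as a degree of belief.

```python
from math import sqrt

# Observed data from the example: 53 heads in 100 flips.
heads, flips = 53, 100

# With a uniform Beta(1, 1) prior over the coin's bias, the posterior
# is Beta(1 + heads, 1 + tails). Its mean is a degree of belief that
# the next flip comes up heads; its standard deviation quantifies
# the remaining uncertainty.
a, b = 1 + heads, 1 + (flips - heads)
posterior_mean = a / (a + b)
posterior_sd = sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))

print(f"degree of belief in heads ≈ {posterior_mean:.3f} ± {posterior_sd:.3f}")
```

The posterior mean lands close to, but not exactly at, the raw proportion of 0.53, which makes the conceptual point: the frequency is evidence that informs a belief, not the belief itself.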

If we propose to understand probability in the subjectivist tradition as an expression of degree of belief in uncertain

situations, then we should also brieﬂy address what is meant by uncertainty and certainty. More speciﬁcally, a potential

point of conceptual confusion is the relationship between certainty and probability. One could, intuitively, assume that a

probability of 1 equals certainty, but that is a rather loose understanding of certainty. Rather than expressing certainty,

probability 1 means that something is almost sure or almost certain. Certainty is not a probabilistic, but rather a logical

concept. For example, the statement p ∧ ¬p is certainly false, and the statement ¬(p ∧ ¬p) is certainly true. Expressed more generally, the diﬀerence between probability 1 and certainty is the diﬀerence between a sample space in which one event E has probability 1 and another sample space for which E = Ω.

A.2. What are beliefs?

The casual reader of this paper might wonder why we are referring to ‘beliefs’ throughout this paper. After all, science is

supposed to be the domain of facts and not mere beliefs, correct? In everyday language, the term belief has a connotation

of something like faith. In scientiﬁc contexts, however, belief means something diﬀerent: (theories of) epistemology

(Hintikka 1962;Armstrong 1973). Within epistemological theories, beliefs are usually thought of as certain kinds of

propositional attitudes. Propositions are declarations with a world-to-mind direction of ﬁt (Searle 1976) that are capable

of having a truth value.

Beliefs can be thought of as binary classiﬁers: either something is or is not true. Such a binary approach to beliefs,

however, is incomplete given that we have an understanding of probability: propositions are not simply either true or

false, but they can be true with some probability. When we supplement the notion of belief with the notion of probability,

we arrive at the understanding of beliefs as degrees of belief (Ramsey 2016) – and the idea of beliefs as degrees of belief

is, of course, precisely the subjectivist deﬁnition of probability. For the sake of conceptual clarity, the idea of beliefs as

degrees of belief is sometimes referred to as credence (Buchak 2014).

A.3. What is rationality?

Rationality, much like probability and belief, has an everyday meaning as well as a more stringent philosophical one. In the

latter case, there are two categories of rationality: instrumental rationality and epistemic rationality. Instrumental rationality

is the rationality of rational choice theory: utility maximization. Instrumental rationality is such rationality whereby one

makes decisions that maximize one’s utility. Described less opaquely, instrumental rationality is behavior that maximizes

the probability of achieving one’s goals. Epistemic rationality, on the other hand, is rationality in the sense of justiﬁed

beliefs. Epistemic rationality, therefore, is such rationality whereby one has good reasons (i.e. justiﬁcation) for holding the

beliefs one holds.

Throughout this paper, we talk about epistemic rationality: being rational in the context of the gambler’s fallacy means

holding justiﬁed probabilistic beliefs as opposed to unjustiﬁed ones. However, in the context of the gambler’s fallacy,

we are not simply holding epistemically rational beliefs for the sake of being rational about some proposition about

the world. Obviously, we also want to improve our decision-making by being epistemically rational. It can be argued,

therefore, that epistemic rationality is, ultimately, also a form of instrumental rationality (Kelly 2003). If some epistemically

rational belief we hold does not aﬀect a decision, we can still think of it as instrumentally rational: by means of being

justiﬁed, we are most likely to achieve our goal of truth.
