JOURNAL OF RISK RESEARCH, 2017
https://doi.org/10.1080/13669877.2017.1378248
The gambler’s fallacy fallacy (fallacy)
Marko Kovic^a and Silje Kristiansen^b
^a ZIPAR – Zurich Institute of Public Affairs Research, Zurich, Switzerland; ^b College of Arts, Media & Design, Northeastern University, Boston, MA, USA
ABSTRACT
The gambler’s fallacy is the irrational belief that prior outcomes in a series of
events affect the probability of a future outcome, even though the events
in question are independent and identically distributed. In this paper, we
argue that in the standard account of the gambler’s fallacy, the gambler’s
fallacy fallacy can arise: the irrational belief that all beliefs pertaining to the
probabilities of sequences of outcomes constitute the gambler’s fallacy,
when, in fact, they do not. Specifically, the odds of the probabilities of some
sequences of outcomes can be epistemically rational in a given decision-
making situation. Not only are such odds of probabilities of sequences of
outcomes not the gambler’s fallacy, but they can be implemented as a
simple heuristic for avoiding the gambler’s fallacy in risk-related decision-
making. However, we have to be careful not to fall prey to a variant of the
gambler’s fallacy, the gambler’s fallacy fallacy (fallacy), in which we do not
calculate odds for the probabilities of sequences that matter, but rather
simply believe that the raw probability for the occurrence of a sequence of
outcomes is the probability for the last outcome in that sequence.
ARTICLE HISTORY
Received 31 December 2016
Accepted 20 July 2017
KEYWORDS
Gambler’s fallacy; cognitive
biases; cognitive heuristics;
probability; risk perception
1. Introduction: gamblers, these poor irrational devils
Human cognition is systematically riddled with a specific category of errors, so-called cognitive heuristics or biases (Tversky and Kahneman 1974). Cognitive biases as a mode of 'fast', automated thinking (Evans 2008; Frankish 2010; Evans and Stanovich 2013) often lead to inferences that are good enough in a given decision-making situation, but they can also lead to decisions that clearly deviate from rational, utility-maximizing behavior. Cognitive biases are especially prominent in situations that involve risk assessment (Kasperson et al. 1988; Sjöberg 2000), as biases such as loss aversion (Tversky and Kahneman 1991), status quo bias (Samuelson and Zeckhauser 1988), the availability heuristic (Tversky and Kahneman 1973) and quite a few more amply demonstrate.
One very prominent bias in the context of risk perception is the gambler’s fallacy: the irrational
tendency for a negative recency effect whereby we tend to estimate the probability of an event as
being conditional on past occurrences of that event, even though all events in the sequence of events
are independent and identically distributed (Lepley 1963; Bar-Hillel and Wagenaar 1991). The beauty
of the gambler’s fallacy lies in its simplicity and clarity. Whereas some biases tend to appear only
when choice alternatives are framed in particular linguistic ways, the gambler’s fallacy is so obviously
a cognitive bias that its existence is essentially taken for granted. The reality of the gambler’s fallacy
is widely accepted, and researchers have proposed sophisticated explanations, such as ad hoc mental
Markov models, for why people fall prey to the gambler’s fallacy (Oskarsson et al. 2009). Also, the
fact that we are talking about a gambler’s fallacy is in and of itself a great heuristic. It is not a secret
that gamblers tend to be irrational. The odds are never in their favor, yet they keep believing that
their luck is about to turn around; all they need is a couple more bets. The beauty of the concept
of the gambler’s fallacy also lies in the fact that it so accurately describes irrational behavior that we
encounter in our everyday lives, beyond the somewhat limited context of gambling (Chen, Moskowitz,
and Shue 2016).
The gambler’s fallacy is real. However, some beliefs in the context of independent events in a series
of events that would, prima facie, be subsumed under the gambler’s fallacy because they are concerned
with prior outcomes in a sequence of outcomes are not actually examples of the gambler’s fallacy, but
rather epistemically rational beliefs. The goal of this paper is to clarify this potentially mistaken attribution of the gambler's fallacy, which we call the gambler's fallacy fallacy.
1.1. The gambler’s fallacy, frequentist probability and the law of small numbers
The standard account and presupposition of the gambler’s fallacy is that the gambler’s fallacy rep-
resents an irrational belief in the 'law of small numbers' (Tversky and Kahneman 1971; Rabin 2002):
when one is committing the gambler’s fallacy, so the argument goes, the error one is making lies in
believing that the probabilistic properties of large samples also apply to small samples. According to
this standard account, it is rational to believe that, say, after 1000 coin flips of a fair coin, heads will
have come up around 50% of the time. But, according to the ‘law of small numbers’ account of the
gambler’s fallacy, it is irrational to believe that the same distribution of outcomes will result after only
10 coin flips.
It is indeed irrational to believe in something like the ‘law of small numbers’, since the law of large
numbers does not translate to small, finite samples. It is possible that part of the problem with the
gambler’s fallacy simply stems from the intuitive belief in something like the ‘law of small numbers’.
However, this standard account of the gambler’s fallacy – the gambler’s fallacy as the belief in the
‘law of small numbers’ – relies on a frequentist notion of probability. Frequentist notions of probability,
both as empirical and as hypothetical frequentism, are limited (Hájek 1996, 2009), not least because the idea of frequentist probability makes implicit ontological claims that are not easily reconcilable with the ontological realism that is the philosophical underpinning of the empirical sciences. It is important to note that the law of large numbers is not equivalent to frequentism. Rather, the law of large numbers is a justification of the frequentist idea of probability (Batanero, Henry, and Parzysz 2005; Hájek 2009).
It can even be argued that the modern understanding of frequentist probability originated with Jakob
Bernoulli’s formulation of the weak law of large numbers (Verburgt 2014).
If we revisit the coin flipping example with a non-frequentist idea of probability, we see that a
different idea of probability yields a different idea of rational beliefs. If you had to guess how many
times a coin will land heads in a series of only 10 flips, you can express a rational belief given prior
information: given a probability of 0.5 that a coin flip will result in heads, your best guess would be
that there will be 5 heads in a series of 10 flips. That does not mean that you have anything close to
perfect certainty in that outcome, but simply that, given your prior information, that expectation is
justified. The prior information in this example is not a belief about hypothetical outcomes if we were
to flip a coin 1000 times, 10,000 times, or infinitely often. Instead, the prior information is an expression
of uncertainty given some information about reality.
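To make this concrete, the following minimal Python sketch (our illustration, not part of the original article) computes the binomial distribution of heads in 10 flips given the prior Pr(heads) = 0.5: the single most probable count is 5 heads, but that count carries only about a 25% probability.

```python
from math import comb

# A minimal sketch (our illustration, not the authors'): the binomial
# distribution of heads in 10 flips of a fair coin, given the prior
# information Pr(heads) = 0.5.
n, p = 10, 0.5
pmf = {k: comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)}

best_guess = max(pmf, key=pmf.get)  # the most probable number of heads
print(best_guess)                   # 5
print(round(pmf[best_guess], 3))    # ~0.246: the best guess, far from certain
```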
It has been noted for some time that the diagnosis of cognitive biases is contingent on our
underlying beliefs about probability (Gigerenzer 1991). This is also true of the gambler’s fallacy. When
we frame the gambler’s fallacy in terms of frequentist probability, then some beliefs are labeled as
irrational even though they are, potentially, rational – we refer to this misclassification of some specific beliefs as the gambler's fallacy fallacy. Within the standard account of the gambler's fallacy, any belief pertaining to the probabilistic properties of small samples is deemed irrational. That assessment, we argue, is painting with too broad a brush: there is one case where probabilistic beliefs about a small
sample, or, more precisely: beliefs about the probabilities of sequences of outcomes, do not constitute
the gambler’s fallacy but are, instead, rational beliefs. By refining our understanding of the gambler’s
fallacy in this manner, a variant of the gambler’s fallacy that is different from the general gambler’s
fallacy becomes manifest; we refer to that gambler’s fallacy variant as the gambler’s fallacy fallacy
(fallacy).
2. The gambler’s fallacy
Imagine a gambler, Jane Doe, who engages in a small gamble. Jane Doe is not a pathological gambler;
she is just playing a game for hedonic, recreational purposes. For the purpose of the following
arguments, she also assumes the role of an enlightened gambler: she is not just playing for the
fun or thrill of it, but she is actively engaged in metacognition while she is playing. In other words,
Jane Doe is ‘thinking slow’, whereas, in the real world, even recreational, non-pathological gamblers
are probably ‘thinking fast’. Jane Doe is not a realistic stand-in for gamblers, but more of a narrative
aid.
The game Jane Doe is playing is very simple: she is rolling a regular, fair six-sided die and her prior information is that every number on the die has a probability of exactly 1/6 of being rolled. Jane can roll the die three times. If she rolls the number four at least once, she wins. If she does not roll the number four at least once, she loses.
Jane has rolled the die twice already. Unfortunately, she did not get a success yet, but instead two
failures in a row. Jane is about to roll the die for the third and final time. She feels that third time’s the
charm – after all, she failed twice in a row, and now, it is time for her chances to balance out. After
all, the die is supposed to be fair, and in her subjective perception, Jane feels like that fairness should
bring about a success after a series of failures. This intuition that Jane feels before rolling the die for the
third time is the gambler’s fallacy. In this example, Jane’s gambler’s fallacy takes the following specific
form:
Pr (s|f,f)= Pr (s)
Jane is committing the gambler’s fallacy because she intuitively believes that the probability for a
success given two failures is not the same as the probability for a success in only one roll of the die.
That is not true: the probability for a success is not affected by prior outcomes.
Of course, not all situations in which the gambler’s fallacy applies are games of rolling the die three
times. However, the logic of the game Jane Doe is playing can be generalized to other situations. A
generalized gambler’s fallacy can be described in the following manner:
Pr (On+1|Oi)= Pr (O);i=1, ...,n
The gambler's fallacy is the belief that the probability for an outcome $O$ after a series of outcomes is
not the same as the probability for a single outcome.
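As a quick check of this point, here is a hedged Monte Carlo sketch of Jane Doe's game (our illustration, not code from the article): among simulated games whose first two rolls are failures, the third roll is a success in roughly one sixth of cases, exactly as if the two failures had never happened.

```python
import random

# A hedged Monte Carlo sketch of Jane Doe's game (our illustration, not code
# from the article): among games whose first two rolls are failures, the
# third roll succeeds with probability ~1/6; the prior failures change nothing.
random.seed(0)
two_failure_games = 0
third_roll_successes = 0

for _ in range(200_000):
    rolls = [random.randint(1, 6) for _ in range(3)]
    if rolls[0] != 4 and rolls[1] != 4:   # first two rolls are failures
        two_failure_games += 1
        if rolls[2] == 4:                 # success on the third and final roll
            third_roll_successes += 1

print(third_roll_successes / two_failure_games)  # ~0.167, i.e. about 1/6
```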
3. The gambler’s fallacy fallacy
Jane was about to roll the die for the third and final time when she realized that her wishful thinking
led her to the gambler’s fallacy. The probability for a success on her third try, Jane had to concede to
herself, is no more and no less than 1/6.
However, Jane still has some probabilistic information about her current game that seems intuitively
non-trivial to her. That information does not pertain to the probability for success in the last try, but to
overall probabilities of possible sequences of outcomes in her game. A first, general thought that occurs
to Jane is that she knows the overall probability of succeeding at least once in a game of three die rolls:
$\Pr(\exists s) = 1 - \frac{5^3}{6^3} \approx 0.42$
The probability that there will be at least one success, denoted above with the existential quantifier $\exists$, is 0.42, or 42%. What should Jane do with this probability? Of course, Jane instantly realizes that an
overall probability of obtaining at least one success is not very pertinent to her current situation. After
all, those 42% contain all possible permutations that contain at least one success, but right now, Jane
has already exhausted a great part of those possible permutations, or sequences. For example, the
probability of obtaining three successes in a row should not inform Jane’s beliefs about her current
situation because she cannot end up with that particular sequence. It is tempting to fall back into
the gambler’s fallacy and believe that the probability for success at the third roll is 42%, but Jane can
successfully resist the urge for this irrational belief.
In her current situation, Jane is certain that the sequence of outcomes that she will ultimately end up with has to be either f, f, f or f, f, s; Jane, of course, hopes for the latter. What is the probability that she will end up with the sequence f, f, s? At this point, Jane is fairly certain of the answer to that question:
$\Pr(f, f, s \mid f, f) = \frac{1}{6} \approx 0.17$
The probability for ending up with the sequence f, f, s after two failures is simply the probability for a single success in rolling the die. However, there is a second probability with regard to the sequence f, f, s that Jane is thinking of:
$\Pr(f, f, s) = \frac{5}{6} \times \frac{5}{6} \times \frac{1}{6} \approx 0.12$
The overall probability of ending up with the sequence f, f, s is around 12%. How does this statement relate to the statement above of a 17% probability? These are, of course, different probabilities. The probability of around 17%, or, more precisely, $\frac{1}{6}$, is the conditional probability of ending up with the sequence f, f, s, given two failures – and this is, of course, the same as the probability for a single success. Jane is intuitively confident that this probability is the one she should, rationally, take into account in her current situation. It is possible to give her intuition a slightly clearer form by simply plugging the respective probabilities into Bayes' rule:
$\Pr(s \mid f, f) = \dfrac{\Pr(f, f \mid s)\,\Pr(s)}{\Pr(f, f)} = \dfrac{\left(\frac{5}{6} \times \frac{5}{6}\right)\frac{1}{6}}{\frac{5}{6} \times \frac{5}{6}} = \frac{1}{6} \approx 0.17$
In contrast, the probability of around 12% refers to the overall probability of ending up with the sequence f, f, s given the nature of the game. Whereas $\Pr(s \mid f, f)$ is a conditional probability, $\Pr(f, f, s)$ is a compound probability.
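The distinction can be checked in a few lines of Python (a sketch of ours, not from the article): the compound probability of the full sequence and the conditional probability given two observed failures come out as 25/216 and 1/6, respectively.

```python
from fractions import Fraction

# A small sketch (ours) separating the two probabilities discussed above:
# the compound probability of the whole sequence f, f, s versus the
# conditional probability of that sequence given two observed failures.
p_s = Fraction(1, 6)   # probability of a success on a single roll
p_f = Fraction(5, 6)   # probability of a failure on a single roll

compound = p_f * p_f * p_s                     # Pr(f, f, s)        = 25/216
conditional = (p_f * p_f * p_s) / (p_f * p_f)  # Pr(f, f, s | f, f) = 1/6

print(compound, float(compound))        # 25/216 ~ 0.116
print(conditional, float(conditional))  # 1/6    ~ 0.167
```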
The general probability of the sequence f, f, s is 0.12. On its own, this probability is of limited use to Jane, because 1 − 0.12 is the probability for all other sequences, including sequences that, rationally, are of no interest to Jane in her current situation. For example, the probability of the sequence s, f, f should not impact Jane's current beliefs in any way, because that sequence is not possible in her current situation. Therefore, the probability for the sequence f, f, s is epistemically relevant only when it is compared to other epistemically relevant probabilities. In Jane's current situation, there is only one other relevant sequence – the sequence that Jane wishes to avoid, three failures in a row:
$\Pr(f, f, f) = \frac{5^3}{6^3} \approx 0.58$
In her current situation, Jane knows that the overall probability of the sequence f, f, s is 0.12, and the overall probability of the sequence f, f, f is much higher, 0.58. How should Jane process these two probabilities? Simply taken on their own, those probabilities are still relative to all possible sequences of outcomes, and those are not of interest to Jane. However, there is a simple way to process the two probabilities that are of interest: calculating their odds. Doing this is simple enough:
$o\left((f, f, s) : (f, f, f)\right) = \dfrac{\frac{5}{6} \times \frac{5}{6} \times \frac{1}{6}}{\frac{5^3}{6^3}} = \frac{1}{5} = 1:5$
The probability of ending up with the sequence f, f, f is 5 times higher than the probability of ending up with the sequence f, f, s. These odds are not at odds with avoiding the gambler's fallacy. This becomes obvious when we compare the odds of the probabilities for a single success and for a single failure given two failures (which are, of course, simply the probabilities for a single success and for a single failure):
$o\left((s \mid f, f) : (f \mid f, f)\right) = \dfrac{\frac{1}{6}}{\frac{5}{6}} = \frac{1}{5} = 1:5$
The odds are exactly the same as the odds for the sequences f, f, s and f, f, f. What does this mean? When Jane is comparing the probabilities for the sequences that are relevant to her, she will arrive at information that is epistemically rational, meaning that she is not committing the gambler's fallacy. The odds for ending up with the sequence that she hopes to arrive at, f, f, s, are the same as the odds of succeeding in her third and final try. Therefore, the odds for ending up with the sequence that Jane hopes to ultimately end up with can rationally inform her beliefs in her current situation.
In the standard account of the gambler's fallacy as the belief in the law of small numbers, considering sequences of outcomes in any way is considered irrational, because the law of large numbers does not pertain to small samples. We call the belief that sequences of outcomes should always be disregarded the gambler's fallacy fallacy: as can be easily demonstrated, sequences of outcomes can be used in an epistemically rational manner when odds of epistemically relevant sequences are considered.
In a very condensed form, the argument presented in this section is the following:
First two die rolls: f, f
Probability for s on the third roll: $\frac{1}{6}$
Probability for f on the third roll: $\frac{5}{6}$
Odds of these two probabilities: $\frac{1}{6} : \frac{5}{6} = 1:5$
General probability for the sequence f, f, s: $\frac{5}{6} \times \frac{5}{6} \times \frac{1}{6} = \frac{25}{216}$
General probability for the sequence f, f, f: $\frac{5}{6} \times \frac{5}{6} \times \frac{5}{6} = \frac{125}{216}$
Odds of these two probabilities: $\frac{25}{216} : \frac{125}{216} = 1:5$
The odds for a success given two failures are the same as the odds for the sequence that ends with a success. This is a simple demonstration that thinking about the probabilities of sequences is not irrational when (and only when) the odds of the epistemically relevant sequences are considered.
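A short Python sketch (ours, not the authors') reproduces this condensed argument and confirms that the two odds coincide:

```python
from fractions import Fraction

# A sketch (ours) of the condensed argument above: the odds of the two
# epistemically relevant sequences equal the odds of the singular outcomes.
p_s, p_f = Fraction(1, 6), Fraction(5, 6)

odds_outcomes = p_s / p_f                               # 1/6 : 5/6
odds_sequences = (p_f * p_f * p_s) / (p_f * p_f * p_f)  # 25/216 : 125/216

assert odds_outcomes == odds_sequences == Fraction(1, 5)
print(odds_outcomes, odds_sequences)                    # 1/5 1/5, i.e. 1:5
```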
4. The gambler’s fallacy fallacy (fallacy)
Let us assume that Jane failed to turn the probabilities for the sequences f, f, s and f, f, f into odds, and that instead, she proceeded to believe that the probability $\Pr(f, f, s) = 0.12$ is the direct probability that she will succeed in her third and final try. Such a belief is rather obviously fallacious, but it does
not quite constitute the gambler’s fallacy. Instead, it is more of a variant of the gambler’s fallacy. We
label this variant of the gambler’s fallacy the gambler’s fallacy fallacy (fallacy).
In a more general form, the gambler’s fallacy fallacy (fallacy) can be described in the following
manner:
$\Pr(O_i) = \Pr(O_n); \quad i = 1, \ldots, n$
If we compare the gambler’s fallacy fallacy (fallacy) to the gambler’s fallacy as presented in Section 2,
we see that the two fallacies are not the same. The gambler’s fallacy is the belief that the probability
for an outcome given a series of outcomes is not the same as the probability for a singular outcome.
The gambler’s fallacy fallacy (fallacy), on the other hand, is the belief that the probability for a series of
outcomes is the same as the probability for the last outcome in that series of outcomes.
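In Jane's game, the fallacious equation is easy to spot numerically; the following two-line comparison (our illustration) makes the mismatch explicit:

```python
from fractions import Fraction

# Sketch (ours): the gambler's fallacy fallacy (fallacy) in numbers. The
# fallacious belief equates the compound probability of the whole sequence
# with the probability of its last outcome.
p_s, p_f = Fraction(1, 6), Fraction(5, 6)

pr_sequence = p_f * p_f * p_s   # Pr(f, f, s) = 25/216 ~ 0.12
pr_last = p_s                   # Pr(s)       = 1/6   ~ 0.17

print(pr_sequence == pr_last)   # False: the two probabilities differ
```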
5. In summary: the gambler’s fallacy, the gambler’s fallacy fallacy, and the gambler’s
fallacy fallacy (fallacy)
In the previous Sections 2–4, we have discussed the gambler's fallacy, the gambler's fallacy fallacy, and the gambler's fallacy fallacy (fallacy). This nomenclature is genealogically inspired since we are talking about concepts that stem from the concept of the gambler's fallacy. But that wordplay might sound ever so slightly confusing. For the sake of clarity, therefore, we briefly summarize the three concepts in this section.
The gambler’s fallacy is the belief that the probability for an outcome after a series of outcomes is
not the same as the probability for a single outcome. The gambler’s fallacy is real and true in cases
where the events in question are independent and identically distributed.
The gambler’s fallacy fallacy is our argument that, contrary to the standard account of the gambler’s
fallacy, probabilities of sequences of outcomes can be epistemically rational in situations where the
gambler’s fallacy might arise. This is the case when (and only when) the odds of the probabilities of the
relevant sequences of outcomes are compared to each other. Those odds are the same as the odds of
the singular outcomes at the end of those sequences.
The gambler’s fallacy fallacy (fallacy) is the irrational belief that the probability for a series of
outcomes is the same as the probability for the last outcome in that series of outcomes. The gambler’s
fallacy fallacy (fallacy) is a variant of the gambler’s fallacy that arises from an irrational implementation
of the gambler’s fallacy fallacy argument.
5.1. Prior vs. posterior probabilities
In the preceding sections, our enlightened gambler Jane Doe has thought about conditional as well as
compound probabilities. In those examples, both the conditional probabilities as well as the compound
probabilities are discussed in terms of general probabilities: if one were to throw a die three times,
some outcome would happen with some probability. These general probabilities can also be called
prior probabilities.
What happens when we substitute some of the prior probabilities with so-called posterior probabili-
ties? More specifically, let's say that for the sequence f, f, s, we are treating f, f as specific outcomes that have already occurred rather than parts of a sequence of outcomes that can occur in general. In other words, if we observe f, f and we express the probability that those outcomes have occurred, we are assigning a kind of posterior probability. In our scenario, Jane has observed both outcomes f, f with her own eyes, and she is fairly confident that she has correctly observed the outcomes. Therefore, Jane decides that the posterior probability of the first f having occurred is 1, and the posterior probability of the second f having occurred is 1 as well. Does anything change about the gambler's fallacy, the
gambler’s fallacy fallacy, and the gambler’s fallacy fallacy (fallacy) when we think in terms of prior and
posterior probabilities?
Of course not: the gambler's fallacy is fallacious a priori, and therefore, it has to be fallacious a posteriori as well. Accordingly, all of the simple calculations above work just as they do with prior probabilities. However, it is rather important not to believe that using posterior instead of prior probabilities is a 'solution' for the gambler's fallacy. For example, if one were to use the posterior probabilities of f, f in the sequence f, f, s, it is tempting to declare $1 \times 1 \times \frac{1}{6} = \frac{1}{6}$ as a 'solution' for the gambler's fallacy, when, of course, it is anything but – this is simply the gambler's fallacy fallacy (fallacy). The fact that the 'solution' in this example is correct is irrelevant, because this is simply a case of epistemic luck (Engel 1992; Pritchard 2004) where one might accidentally, but irrationally, arrive at a true belief.
6. Discussion
6.1. Real-world implications of the gambler’s fallacy fallacy
There might be some value in discussing the gambler’s fallacy in purely theoretical terms, but the
more important reason why the gambler's fallacy matters is that this error occurs in real-world risk-related
decision-making situations. Take the context of natural disasters as an example: one might be inclined
to believe that, since some place has not experienced major earthquakes in a long time, a major
earthquake is ‘overdue’. Or, conversely and potentially more gravely: after a major earthquake has
occurred, one might believe that no earthquakes will happen for a while. Or take the context of
criminal justice as another example: After a judge has ruled a number of terrorist suspects as not
guilty, he might bias his next decision by the implicit belief that in a long sequence of suspects, there
should be one real and dangerous terrorist. Or, conversely: the judge might believe that the probability
for five guilty terrorist suspects in a row is very low, and he might therefore develop a bias towards
ruling the fifth suspect not guilty. Or take complex technologies as yet another example: after a period
in which there have been no critical failures of nuclear power plants, one might feel that an accident
is around the corner. Or, conversely: after a critical failure in a nuclear power plant, one might believe
that another such accident will not occur in a long time.
The gambler’s fallacy is a phenomenon that clearly matters in the real world, and we should strive
to reduce its prevalence. The concept of the gambler’s fallacy fallacy can do so in two ways: first, by
reducing ‘false positive’ misclassifications of beliefs as the gambler’s fallacy, and second, and more
importantly, by providing a heuristic for real-world decision-making.
The first aspect of the usefulness of the gambler’s fallacy fallacy is not a reduction of the occurrence
of the gambler’s fallacy, but rather a better detection mechanism. As we argue above, there are beliefs
that are epistemically rational but that might be misclassified as instances of the gambler’s fallacy
when applying a narrowly frequentist understanding of probability. Obviously, it is desirable to have
as few of these ‘false positive’ misclassifications as possible.
The second aspect of the usefulness of the gambler’s fallacy fallacy is more important. In real-world
decision-making, we often want to reduce the impact of cognitive biases as much as possible, and so-
called debiasing strategies have been explored for decades (Fischoff 1981). Many potential debiasing
strategies are based on active learning about cognitive biases and metacognition through cognitive
forcing (Croskerry 2003; Croskerry, Singhal, and Mamede 2013). In the context of the gambler's fallacy,
it is probably possible to learn about the fallacy and reduce its impact by forcing ourselves to enter a
metacognitive mode of thinking. But that strategy can only have limited success. The gambler’s fallacy
occurs in situations in which we might not always have a lot of time to enter a careful mode of thinking;
we are under pressure and we need quick inferences. This is where the gambler’s fallacy fallacy as a
heuristic comes into play: the idea of framing probabilities of sequences that matter in a given context
as odds can nudge us into avoiding the gambler’s fallacy.
Thinking in probabilities is difficult, and actively avoiding the gambler's fallacy can be challenging (if it weren't, the gambler's fallacy would not be a thing in the first place). This is where the gambler's fallacy fallacy becomes relevant: rather than actively avoiding the gambler's fallacy (a cognition-intensive task), a gentle 'nudge' into thinking about odds of sequences could help avoid the gambler's fallacy. This means that thinking in odds that compare relevant sequences of outcomes can potentially alleviate the impact of the gambler's fallacy by offering an intuitive and simple heuristic.
6.2. How do real-world people deal with probabilities of sequences?
In the previous section, we have put forward the argument that taking into consideration the odds of
the probabilities of sequences of outcomes can help alleviate the gambler’s fallacy because these odds
are identical to the odds of the probabilities for the singular outcomes at the end of those sequences.
We argue that such a comparison of odds might alleviate the gambler’s fallacy because it could serve
as an intuitive heuristic since odds are a tool that is common in everyday, real-world situations. That
argument, however, raises an important question: How do people actually process probabilities of sequences in the real world?
There is reason to believe that human perception of probabilities of sequences is biased towards
real-world experiences of sequences rather than based on idealized theoretical probabilities (Hahn and Warren 2009, 2010). That is a fairly important insight with at least four implications for the present paper. First, Jane Doe, the idealized enlightened gambler that we introduced in Section 2, is indeed
only a narrative support and not a realistic representation of human cognition during the judgment
of probabilities. Human cognition in the context of the gambler’s fallacy is characterized by a strong
reliance on intuition and experience rather than by a state of extended metacognition (metacognition
as ‘thinking slow’ is what our idealized gambler Jane Doe is doing). Second, if our judgments about
probabilities in the context of the gambler’s fallacy are based on real-world experience, then the
standard account of the gambler’s fallacy as the belief in the ‘law of small numbers’ becomes less
plausible since the origin of the gambler’s fallacy is not a generalization of the law of large numbers
onto small samples, but rather prior experience. Third, subjective intuitions about the probabilities of
sequences of outcomes seem not to be completely epistemically irrational since our intuition seems
to approximate odds of the probabilities of different sequences fairly well. Fourth, these arguments
and findings add a degree of plausibility to our main conclusion: operating with odds of probabilities
of outcomes does indeed seem to be something that we are intuitively capable of and comfortable
with, even in modes of thinking that are ‘fast’ rather than ‘slow’.
6.3. What about the hot hand fallacy?
The gambler’s fallacy is often presented and discussed together with the so-called hot hand fallacy
(Ayton and Fischer 2004; Sundali and Croson 2006), a belief in 'streakiness' of outcomes. The gambler's
fallacy and the hot hand fallacy might be related, but for at least two reasons, the arguments proposed
in this paper do not apply to the hot hand fallacy. First, the hot hand fallacy is quite obviously not just
a symmetrically opposite belief to the gambler’s fallacy. The gambler’s fallacy can be understood as:
Pr (On+1|Oi)= Pr (O);i=1, ...,n
The hot hand fallacy, on the other hand, looks more like this:
$\Pr(O_a \mid O_b) < \Pr(O_a \mid O_a)$
In the context of Jane Doe’s die rolling game, this would mean that:
$\Pr(s \mid f) < \Pr(s \mid s)$
The underlying belief of the hot hand fallacy is the belief that prior outcomes positively affect the
probability for the same outcome in the future. For example, if Jane Doe believed that rolling a four
on a die increased the probability of rolling a four again on her next try, she would be committing
the hot hand fallacy. If, on the other hand, Jane believed more generally that the probability of rolling
a four on her second try is not the same as rolling a four in one singular try, then she would be
committing the gambler’s fallacy. In this example, Jane’s hot hand fallacy is accidentally a special
case of the gambler’s fallacy, the underlying beliefs are very different, and underlying belief of the
hot hand fallacy is categorically more irrational: whereas the gambler’s fallacy is a misreading of the
probabilities of sequences motivated by wishful thinking, the hot hand fallacy is a total abandonment
of probabilities in favor of wishful thinking.
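For the idealized die case discussed above, where rolls genuinely are independent and identically distributed, a short simulation sketch (ours, not from the article) shows that the success rate right after a success is no higher than right after a failure:

```python
import random

# Sketch (ours, not from the article): for independent, identically
# distributed die rolls there is no 'hot hand'. The success rate right after
# a success is the same as right after a failure, roughly 1/6 in both cases.
random.seed(1)
rolls = [random.randint(1, 6) for _ in range(500_000)]

after_success = [b == 4 for a, b in zip(rolls, rolls[1:]) if a == 4]
after_failure = [b == 4 for a, b in zip(rolls, rolls[1:]) if a != 4]

print(sum(after_success) / len(after_success))  # ~0.167
print(sum(after_failure) / len(after_failure))  # ~0.167
```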
Second, the hot hand fallacy is often relevant in domains where the events of interest are not
necessarily independent and identically distributed. In such domains, the hot hand fallacy is not a
cognitive issue, but simply an empirical one. For example, the hot hand fallacy in the context of
basketball (Gilovich, Vallone, and Tversky 1985) has been challenged on the grounds that a different
way of empirically measuring streakiness actually provides support for the existence of hot hands
(Miller and Sanjurjo 2016).
All of this is not to say that the hot hand fallacy is irrelevant – far from it. For example, so-called
Black Swan events play an important role in risk perception and policy-making (Mueller and Stewart 2016; Wardman and Mythen 2016). Irrational overreactions to Black Swan events represent a form
of the hot hand fallacy, whereby policy-makers believe that the occurrence of an outcome increases
the probability of the same outcome in the future, even though such dependence might not actually
exist. A typical and persistent example of this is the belief that big earthquakes might become more
probable after one or several big earthquakes have occurred, even though earthquake data indicates that big earthquakes are independent (Daub et al. 2012; Parsons and Geist 2012; Shearer and Stark 2012).
7. Conclusion
The arguments presented in this paper are, conceptually, minor but non-trivial adjustments of our
understanding of the gambler’s fallacy. In the standard account of the gambler’s fallacy, a frequentist
notion of probability is applied, or at least implied. According to that standard account, the error of
the gambler’s fallacy lies in believing that long-run frequencies of (infinitely) large samples should
be represented in small samples as well. A different probabilistic approach, whereby probabilities are
quantifications of uncertainty, allows for a more fine-grained understanding of the gambler’s fallacy.
In this understanding, the gambler’s fallacy as the belief that the probability for an outcome is
conditional on prior outcomes when it is not remains true. However, a gambler’s fallacy fallacy also
comes into view: the belief that all beliefs about sequences of outcomes are epistemically void or
irrational in a situation in which the gambler’s fallacy can occur. That is not the case. The odds of
contextually relevant sequences of outcomes can be epistemically rational and they can provide a
relevant source of information.
7.1. Our beliefs about probability matter
The gambler’s fallacy is perhaps one of the best known cognitive biases, but so far, it has received little
attention in the areas of risk research and risk assessment. The concept of the gambler’s fallacy can
provide an analytical lens through which to understand some problems in risk assessment. After all,
there is a plethora of situations that involve risk in which the gambler’s fallacy can play a detrimental
role. If we are to consider the gambler’s fallacy as a concept that is relevant in risk analysis, then we
also have to think about how to think about the gambler’s fallacy. The main argument of this paper
is that the epistemological foundation of the gambler’s fallacy should be aligned with that of risk
analysis: probability as a quantification of uncertainty. When we apply probability as it is understood
in risk analysis to the gambler’s fallacy, then we get a more accurate picture of what the gambler’s
fallacy is, and, perhaps even more importantly, of what it is not. Ultimately, this allows us to devise
countermeasures against the gambler's fallacy, such as, as proposed in this paper, calculating simple and intuitive odds for the probabilities of sequences of outcomes.
Disclosure statement
No potential conflict of interest was reported by the authors.
References
Armstrong, D. M. 1973.Belief, Truth and Knowledge. London: Cambridge University Press.
Ayton, Peter, and Ilan Fischer. 2004. “The Hot Hand Fallacy and the Gambler’s Fallacy: Two Faces of Subjective
Randomness?” Memory & Cognition 32 (8): 1369–1378. http://link.springer.com/article/10.3758/BF03206327.
Bar-Hillel, Maya, and Willem A. Wagenaar. 1991. “The Perception of Randomness.” Advances in Applied Mathematics 12 (4):
428–454. http://www.sciencedirect.com/science/article/pii/019688589190029I.
Batanero, Carmen, Michel Henry, and Bernard Parzysz. 2005. “The Nature of Chance and Probability.” In Exploring
Probability in School, edited by Graham A. Jones, Vol. 40, Mathematics Education Library, 15–37. Springer US. doi:10.1007/0-387-24530-8_2. http://link.springer.com/chapter/10.1007/0-387-24530-8_2.
Buchak, Lara. 2014. “Belief, Credence, and Norms.” Philosophical Studies 169: 285–311. https://link.springer.com/article/10.1007/s11098-013-0182-y.
Chen, Daniel, Tobias J. Moskowitz, and Kelly Shue. 2016. Decision-making under the Gambler’s Fallacy: Evidence from Asylum Judges, Loan Officers, and Baseball Umpires. Working Paper 22026, National Bureau of Economic Research. http://www.nber.org/papers/w22026.
Croskerry, Pat. 2003. “Cognitive Forcing Strategies in Clinical Decisionmaking.” Annals of Emergency Medicine 41 (1):
110–120. http://www.annemergmed.com/article/S0196-0644(02)84945- 9/abstract.
Croskerry, Pat, Geeta Singhal, and Sílvia Mamede. 2013. “Cognitive Debiasing 2: Impediments to and Strategies for Change.” BMJ Quality & Safety bmjqs–2012–001713. http://qualitysafety.bmj.com/content/early/2013/08/30/bmjqs-2012-001713.
Daub, Eric G., Eli Ben-Naim, Robert A. Guyer, and Paul A. Johnson. 2012. “Are Megaquakes Clustered?” Geophysical Research
Letters 39 (6): L06308. http://onlinelibrary.wiley.com/doi/10.1029/2012GL051465/abstract.
Engel, Mylan. 1992. “Is Epistemic Luck Compatible with Knowledge?” The Southern Journal of Philosophy 30 (2): 59–75.
http://onlinelibrary.wiley.com/doi/10.1111/j.2041-6962.1992.tb01715.x/abstract.
Evans, Jonathan St B. T. 2008. “Dual-processing Accounts of Reasoning, Judgment, and Social Cognition.” Annual Review
of Psychology 59 (1): 255–278. doi:10.1146/annurev.psych.59.103006.093629.
Evans, Jonathan St B. T., and Keith E. Stanovich. 2013. “Dual-process Theories of Higher Cognition: Advancing the Debate.”
Perspectives on Psychological Science 8 (3): 223–241. http://pps.sagepub.com/content/8/3/223.
de Finetti, Bruno. 1970. “Logical Foundations and Measurement of Subjective Probability.” Acta Psychologica 34: 129–145.
http://www.sciencedirect.com/science/article/pii/0001691870900120.
Fischoff, Baruch. 1981. Debiasing. Technical report.
Frankish, Keith. 2010. “Dual-process and Dual-system Theories of Reasoning.” Philosophy Compass 5 (10): 914–926. http://
onlinelibrary.wiley.com/doi/10.1111/j.1747-9991.2010.00330.x/abstract.
Gallie, Walter B. 1955. “Essentially Contested Concepts.” Proceedings of the Aristotelian Society 56: 167–198. http://www.jstor.org/stable/4544562.
Gigerenzer, Gerd. 1991. “How to Make Cognitive Illusions Disappear: Beyond “Heuristics and Biases”.” European Review of
Social Psychology 2 (1): 83–115. doi:10.1080/14792779143000033.
Gilovich, Thomas, Robert Vallone, and Amos Tversky. 1985. “The Hot Hand in Basketball: On the Misperception
of Random Sequences.” Cognitive Psychology 17 (3): 295–314. http://www.sciencedirect.com/science/article/pii/
0010028585900106.
Hahn, Ulrike, and Paul A. Warren. 2009. “Perceptions of Randomness: Why Three Heads are Better than Four.” Psychological
Review 116 (2): 454–461.
Hahn, Ulrike, and Paul A. Warren. 2010. “Why Three Heads are a Better Bet than Four: A Reply to Sun, Tweney, and Wang
(2010).” Psychological Review 117 (2): 706–711.
Hájek, Alan. 1996. “‘Mises Redux’ – Redux: Fifteen Arguments against Finite Frequentism.” Erkenntnis (1975-) 45 (2/3):
209–227. http://www.jstor.org/stable/20012727.
Hájek, Alan. 2009. “Fifteen Arguments against Hypothetical Frequentism.” Erkenntnis 70: 211–235.
Hintikka, Jaakko. 1962. Knowledge and Belief: An Introduction to the Logic of the Two Notions. Ithaca: Cornell University
Press.
Kasperson, Roger E., Ortwin Renn, Paul Slovic, Halina S. Brown, Jacques Emel, Robert Goble, Jeanne X. Kasperson,
and Samuel Ratick. 1988. “The Social Amplification of Risk: A Conceptual Framework.” Risk Analysis 8 (2): 177–187.
doi:10.1111/j.1539-6924.1988.tb01168.x.
Kelly, Thomas. 2003. “Epistemic Rationality as Instrumental Rationality: A Critique.” Philosophy and Phenomenological
Research 66 (3): 612–640. http://onlinelibrary.wiley.com/doi/10.1111/j.1933-1592.2003.tb00281.x/abstract.
Lepley, William M. 1963. “‘The Maturity of the Chances’ : A Gambler’s Fallacy.” The Journal of Psychology 56 (1): 69–72.
doi:10.1080/00223980.1963.9923699.
Miller, Joshua Benjamin, and Adam Sanjurjo. 2016. Surprised by the Gambler’s and Hot Hand Fallacies? A Truth in the Law of
Small Numbers. SSRN Scholarly Paper ID 2627354. Rochester, NY: Social Science Research Network. https://papers.ssrn.
com/abstract=2627354.
Mueller, John, and Mark G. Stewart. 2016. “The Curse of the Black Swan.” Journal of Risk Research 19 (10): 1319–1330.
doi:10.1080/13669877.2016.1216007.
Oskarsson, An T., Leaf Van Boven, Gary H. McClelland, and Reid Hastie. 2009. “What’s Next? Judging Sequences of Binary
Events.” Psychological Bulletin 135 (2): 262–285.
Parsons, Tom, and Eric L. Geist. 2012. “Were Global M ≥ 8.3 Earthquake Time Intervals Random between 1900 and 2011?” Bulletin of the Seismological Society of America 102 (4): 1583–1592. http://www.bssaonline.org/content/102/4/1583.
Pritchard, Duncan. 2004. “Epistemic Luck.” Journal of Philosophical Research 29: 191–220. https://www.pdcnet.org/pdc/
bvdb.nsf/purchase?openform&fp=jpr&id=jpr_2004_0029_0191_0220.
Rabin, Matthew. 2002. “Inference by Believers in the Law of Small Numbers.” The Quarterly Journal of Economics 117 (3):
775–816. http://qje.oxfordjournals.org/content/117/3/775.
Ramsey, Frank P. 2016. “Truth and Probability.” In Readings in Formal Epistemology, edited by Horacio Arló-Costa,
Vincent F. Hendricks, and Johan van Benthem, Vol. 1, Springer Graduate Texts in Philosophy, 21–45. Springer.
doi:10.1007/978-3-319-20451-2_3. http://link.springer.com/chapter/10.1007/978-3-319-20451-2_3.
Samuelson, William, and Richard Zeckhauser. 1988. “Status Quo Bias in Decision Making.” Journal of Risk and Uncertainty
1 (1): 7–59. http://link.springer.com/article/10.1007/BF00055564.
Searle, John R. 1976. “A Classification of Illocutionary Acts.” Language in Society 5 (1): 1–23. http://www.jstor.org/stable/4166848.
Shearer, Peter M., and Philip B. Stark. 2012. “Global Risk of Big Earthquakes has not Recently Increased.” Proceedings of
the National Academy of Sciences of the United States of America 109 (3): 717–721. http://www.ncbi.nlm.nih.gov/pmc/
articles/PMC3271898/.
Sjöberg, Lennart. 2000. “Factors in Risk Perception.” Risk Analysis 20 (1): 1–12. http://onlinelibrary.wiley.com/doi/10.1111/
0272-4332.00001/abstract.
Sundali, James, and Rachel Croson. 2006. “Biases in Casino Betting: The Hot Hand and the Gambler’s Fallacy.”
ResearchGate 1 (Jul): 1–12. https://www.researchgate.net/publication/5140572_Biases_in_Casino_Betting_The_Hot_
Hand_and_the_Gambler’s_Fallacy.
Tversky, Amos, and Daniel Kahneman. 1971. “Belief in the Law of Small Numbers.” Psychological Bulletin 76 (2): 105–110.
Tversky, Amos, and Daniel Kahneman. 1973. “Availability: A Heuristic for Judging Frequency and Probability.” Cognitive
Psychology 5 (2): 207–232. http://www.sciencedirect.com/science/article/pii/0010028573900339.
Tversky, Amos, and Daniel Kahneman. 1974. “Judgment under Uncertainty: Heuristics and Biases.” Science 185 (4157):
1124–1131. http://www.sciencemag.org/content/185/4157/1124.
Tversky, Amos, and Daniel Kahneman. 1991. “Loss Aversion in Riskless Choice: A Reference-dependent Model.” The
Quarterly Journal of Economics 106 (4): 1039–1061. http://www.jstor.org/stable/2937956.
Verburgt, Lukas M. 2014. “Remarks on the Idealist and Empiricist Interpretation of Frequentism: Robert Leslie Ellis
versus John Venn.” BSHM Bulletin: Journal of the British Society for the History of Mathematics 29 (3): 184–195.
doi:10.1080/17498430.2014.889269.
Wardman, Jamie K., and Gabe Mythen. 2016. “Risk Communication: Against the Gods or against All Odds?
Problems and Prospects of Accounting for Black Swans.” Journal of Risk Research 19 (10): 1220–1230.
doi:10.1080/13669877.2016.1262002.
Appendix 1. Probability, beliefs, rationality: some conceptual remarks
Throughout this paper, we talk about ‘probability’, ‘beliefs’ and ‘rationality’. Even though these terms are part of everyday
language as well as of scientific jargon, they are not easy to define. To some degree, they might represent so-called
essentially contested concepts (Gallie 1955): There is a common baseline understanding of those concepts, but there is
justified disagreement about their precise definition. Even though the concepts of probability, belief and rationality do
not have a single agreed upon definition, it is worth briefly exploring those concepts in the way they are applied in this
paper.
A.1. What is probability?
Probability is a number that can take the value of anything between 0 and 1. Or, expressed differently, a probability is any number $x$ for which $\{x \in \mathbb{R} \mid 0 \leq x \leq 1\}$. But that, of course, is only a description of probability, not a definition. Even
though probability is a term that we use on a day-to-day basis, defining what we really mean when we are talking about
probability is not at all easy or straightforward.
There are several interpretations of probability, two of which are particularly relevant: Frequentist interpretations of
probability and subjectivist interpretations of probability. Within the frequentist paradigm of probability, probability is
defined in two ways. Finite frequentist probability (Hájek 1996) posits that the probability of an attribute A in a reference class B is the relative frequency of occurrences of A within B. Finite frequentist probability is perhaps the most intuitive interpretation of probability. If we pick ten apples from a bowl, and three of the ten apples are red, the other ones green, then we can easily calculate a probability for picking a green or red apple. The second frequentist interpretation of probability is hypothetical frequentist probability. Hypothetical frequentist probability posits that the probability of an attribute A in a reference class B is the limit of the relative frequency of As among Bs if there were an infinite sequence of Bs (Hájek 2009). Hypothetical frequentism is very similar to finite frequentism; the only difference is the assumption
that some experiment is repeated infinitely often. Hypothetical frequentism is not as intuitively applicable to real-world
situations as finite frequentism, because the idea of collecting some data infinitely often, obviously, does not work in
reality. But hypothetical frequentism is fairly important in real-world applications. For example, so-called frequentist
statistics rely on hypothetical frequentism for calculating p-values.
The subjectivist view of probability is very different from the frequentist one. Whereas the general idea of frequentism is
to (hypothetically) count frequencies and derive probabilities from them, subjectivist probability proposes to understand
probability as an expression of degree of belief in uncertain situations (de Finetti 1970). The subjectivist interpretation
might not be immediately as intuitive as the frequentist one, but subjectivist probability is, arguably, the only actual
interpretation of probability – a frequentist attempt at probability remains meaningless unless we infuse it with some
subjectivist meaning. Take, for example, a simple frequentist scenario of coin flipping: you flip a coin 100 times and try to
derive the probability for heads from those empirical observations. Let us say that you have obtained 53 heads in your
100 trials. The proportion of heads in your reference class is 0.53. In a very simplistic manner, you could now state that the probability for heads is around $\Pr(\text{heads}) = 0.5$. But what would you actually mean by that? Chances are you are not
simply trying to use another word for proportion. Instead, when we work with a probability, we are usually expressing
a belief about some proposition about the world. When we observe that the proportion of heads in our sample is 0.53,
then we can use that information to form a belief about our coin. This means that frequentism is simply one method for
arriving at justified probabilistic beliefs.
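One possible way to make that step explicit (our illustrative assumption, not something the paper commits to) is Bayesian updating of a degree of belief about the coin, for instance starting from a uniform Beta(1, 1) prior over the heads-probability:

```python
# A hedged sketch (ours, not the authors'): one possible way to turn the
# observed frequency into a degree of belief is Bayesian updating with a
# uniform Beta(1, 1) prior over the coin's heads-probability. The Beta prior
# and the resulting numbers are our illustrative assumptions.
heads, flips = 53, 100
alpha, beta = 1 + heads, 1 + (flips - heads)  # posterior is Beta(54, 48)

posterior_mean = alpha / (alpha + beta)
print(round(posterior_mean, 3))  # ~0.529: a degree of belief about the coin
```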
If we propose to understand probability in the subjectivist tradition as an expression of degree of belief in uncertain
situations, then we should also briefly address what is meant by uncertainty and certainty. More specifically, a potential
point of conceptual confusion is the relationship between certainty and probability. One could, intuitively, assume that a
probability of 1 equals certainty, but that is a rather loose understanding of certainty. Rather than expressing certainty,
probability 1 means that something is almost sure or almost certain. Certainty is not a probabilistic, but rather a logical concept. For example, the statement $p \wedge \neg p$ is certainly false, and the statement $\neg\{p \wedge \neg p\}$ is certainly true. Expressed more generally, the difference between probability 1 and certainty is the difference between a sample space where one event $E$ has probability 1 and another sample space for which $E = \Omega$.
A.2. What are beliefs?
The casual reader of this paper might wonder why we are referring to ‘beliefs’ throughout this paper. After all, science is
supposed to be the domain of facts and not mere beliefs, correct? In everyday language, the term belief has a connotation
of something like faith. In scientific contexts, however, belief means something different: it is a central concept of (theories of) epistemology (Hintikka 1962; Armstrong 1973). Within epistemological theories, beliefs are usually thought of as certain kinds of
propositional attitudes. Propositions are declarations with a world-to-mind direction of fit (Searle 1976) that are capable
of having a truth value.
Beliefs can be thought of as binary classifiers: either something is or is not true. Such a binary approach to beliefs,
however, is incomplete given that we have an understanding of probability: propositions are not simply either true or
false, but they can be true with some probability. When we supplement the notion of belief with the notion of probability,
we arrive at the understanding of beliefs as degrees of belief (Ramsey 2016) – and the idea of beliefs as degrees of belief
is, of course, precisely the subjectivist definition of probability. For the sake of conceptual clarity, the idea of beliefs as
degrees of belief is sometimes referred to as credence (Buchak 2014).
A.3. What is rationality?
Rationality, much like probability and belief, has an everyday meaning as well as a more stringent philosophical one. In the
latter case, there are two categories of rationality: instrumental rationality and epistemic rationality. Instrumental rationality
is the rationality of rational choice theory: utility maximization. Instrumental rationality is such rationality whereby one
makes decisions that maximize one’s utility. Described less opaquely, instrumental rationality is behavior that maximizes
the probability of achieving one’s goals. Epistemic rationality, on the other hand, is rationality in the sense of justified
beliefs. Epistemic rationality, therefore, is such rationality whereby one has good reasons (i.e. justification) for holding the
beliefs one holds.
Throughout this paper, we talk about epistemic rationality: being rational in the context of the gambler’s fallacy means
holding justified probabilistic beliefs as opposed to unjustified ones. However, in the context of the gambler’s fallacy,
we are not simply holding epistemically rational beliefs for the sake of being rational about some proposition about
the world. Obviously, we also want to improve our decision-making by being epistemically rational. It can be argued,
therefore, that epistemic rationality is, ultimately, also a form of instrumental rationality (Kelly 2003). If some epistemically
rational belief we hold does not affect a decision, we can still think of it as instrumentally rational: by means of being
justified, we are most likely to achieve our goal of truth.
... To this end, investing in education programs that make people understand the high probability of loss by increasing gamblers' awareness seems desirable. Many pathological gamblers only take into account eventual winnings, disregarding the cost of loss (Kovic and Kristiansen, 2019). For example, gaming venues could be mandated to advertise the probability of winning for each game and the number of daily plays could be limited. ...
Preprint
Full-text available
This study investigates the causal link between inequality and gambling on a distinct dataset that encompasses administrative records of gambling turnover by game type in Italian municipalities. Employing shift–share instruments, we find a substantial impact of inequality on gambling turnover. Moreover, we show that such impact is additional and larger than the impact of poverty and is biggest in less well-off Southern municipalities, especially concerning turnover generated from games closely associated with gambling-related issues. Given the documented regressive nature of gambling taxation, our results substantiate the concept of a hysteresis loop between inequality and gambling.This study investigates the causal link between inequality and gambling on a distinct dataset that encompasses administrative records of gambling turnover by game type in Italian municipalities. Employing shift–share instruments, we find a substantial impact of inequality on gambling turnover. Moreover, we show that such impact is additional and larger than the impact of poverty and is biggest in less well-off Southern municipalities, especially concerning turnover generated from games closely associated with gambling-related issues. Given the documented regressive nature of gambling taxation, our results substantiate the concept of a hysteresis loop between inequality and gambling. JEL codes: I12; I14
... As for the gambler's fallacy, it describes the erroneous reasoning that the probability of a random event is less likely to occur in the future, if a comparable known event has occurred, and in a situation where these occurrences are independent of one another (Clotfelter & Cook, 1993). According to Kovic and Kristiansen (2019), the origin of the gambler's fallacy is prior experience (i.e., driving accidents). If people believe that risk is cyclical, such as the gambler's fallacy, they might think, ''Since I had an accident this year, I am unlikely to have another accident in the next few years (Levi, 2009).'' ...
Article
Full-text available
The primary aim of our research was to examine the moderating role of prevention focus (PRE) and the mediating role of risk perception (RP) on the relationship between driving accident history (DAH) and insurance coverage (IC) decisions to test this moderated mediation mechanism. We collected survey data from 808 newly eligible voluntary automobile liability insurance policyholders in Taiwan and analyzed the data using PROCESS macro. The estimated results showed that PRE moderated the indirect effect of DAH on IC through RP. In general, a worse DAH would increase the RP of people who were high prevention-focused, thereby increasing their willingness to purchase a higher IC. Conversely, a worse DAH would not increase the RP of people who were low prevention-focused, and such people would not increase or would even decrease their IC. The results provide an explanation for the inconsistent paths from DAH to IC in the literature. From a marketing perspective of psychographic segmentation, our research helps insurance companies to determine what types of consumers they should pay more attention to and to formulate marketing strategies. JEL: D91 G52 E21
... It is, though, also related to the fifth rule of thumb, the persuasive intent heuristic, which prevents someone from trusting a message if a clear persuasive intention is detected (Zyl, Turpin, & Matthee, 2020). This cue might also be related to the so-called "fallacy fallacy": the belief that if an argument is faultily built, then it is necessarily false, or that if some patterns of thought are frequently shown to be irrational, then all conclusions based on them are always irrational, when this is not the case (Kovic & Kristiansen, 2019). ...
Article
Full-text available
As research on fake news and deepfakes has advanced, a growing consensus is building towards considering critical and analytical thinking, as well as general or topic-specific knowledge related to information literacy, as the main significant or effective factors in curbing vulnerability to bogus digital content. However, although the connection might be intuitive, the processes linking critical or analytical thinking to manipulation resistance are still not known and remain understudied. The present study aims to contribute to filling this gap by exploring how analytically driven conclusions about a media content relate to proper evaluations of its credibility. In order to observe how observations highlighted through critical engagement with a specific content relate to awareness of its manipulative structure, a biased, not fake, journalistic article was first passed through Fairclough's (2013) model of Critical Discourse Analysis, which was adapted for media studies. The same article was then screened for disinformation techniques embedded in its architecture, as well as for logical fallacies incorporated as arguments. Preliminary conclusions show that analytical-thinking outcomes are consistent with evaluations based on particular filters for credibility attribution. Furthermore, the observations derived in these two ways over the same content partially overlap.
... If, on the other hand, gamblers lose control over their behavior, we classify their behaviors as irrational. Loss of control might be due to illogical reasoning, a phenomenon widely documented (see Kovic & Kristiansen, 2019;Källmén et al., 2008;Walker, 1992). Loss of control can even be complete and thus lead to ruinous gambling (see Moore & Ohtsuka, 1999;Toneatto, 1999). ...
Article
Full-text available
The typical gambler loses money but continues to gamble nonetheless. Why? Research from orthodox and behavioral economics, psychology, sociology, and medicine has offered a wide range of possible explanations. This paper reviews these explanations. The evidence is organized according to the degree of rationality assumed and/or found in the studies. This approach allows research from highly distinctive fields to be integrated within a unified framework. Gambling patterns are so highly dispersed that no satisfying one-fits-all explanation is possible. The findings suggest that the whole spectrum from rationality to highly destructive irrationality can be found within the gambler population.
... In the data science lifecycle, this bias appears when data experts repeatedly use a particular model for all problems based on its historical performance, without testing other suitable models. Under this bias, a person (or model) that recently produced the best results is assumed to have a greater chance of success in the future [64]. ...
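A simple guard against reusing yesterday's winning model is to re-evaluate several candidates on the new problem before committing. The sketch below assumes scikit-learn and three arbitrary candidate models; neither the library nor these model choices come from the cited work [64], they only illustrate the practice of testing alternatives rather than defaulting to the past winner.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Stand-in for the "new problem": a synthetic classification dataset.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}

# Score every candidate on this problem instead of assuming the historical winner.
for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean CV accuracy = {scores.mean():.3f}")
```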
Preprint
Full-text available
In recent years, data science has become an indispensable part of our society. Over time, we have become reliant on this technology because of the opportunity to gain value and new insights from data in any field - business, socializing, research and society. At the same time, it raises questions about how justified we are in placing our trust in these technologies. There is a risk that such powers may lead to biased, inappropriate or unintended actions. Therefore, ethical problems that might occur as a result of data science practices should be carefully considered, identified during the data science lifecycle and mitigated if possible. However, a typical data scientist does not have enough knowledge to identify these challenges, and it is not always possible to include an ethics expert during data science production. The aim of this study is to provide a practical guideline for data scientists and increase their awareness. In this work, we reviewed different sources of bias and grouped them under the stages of the data science lifecycle. The work is still in progress. The aim of early publishing is to collect community feedback and improve the curated knowledge base of bias types and solutions.
Article
Decision-making plays a prominent role in financial behaviour, particularly investor behaviour, which is commonly shaped by a range of psychological biases, market volatility and the opportunity to maximize returns. Investors often act irrationally when making investment decisions, especially young millennial investors who seek profit from price volatility. This study aims to examine several variables that hypothetically influence investment decision-making: overconfidence, gambler's fallacy, mental accounting, the disposition effect, and hindsight. Further, this study also analyses the role of risk perceptions as a mediating variable between overconfidence and investment decisions. Using a convenience sampling technique, 452 investors participated in the survey. The data analysis method used is Partial Least Squares – Structural Equation Modelling (PLS-SEM). The results reveal that gambler's fallacy and mental accounting have a significant effect on investment decisions. In addition, this study ascertains the significant role of risk perceptions as a mediating variable between overconfidence and investment decisions.
Chapter
The article addresses the overall body of problems associated with studying selected emotions that emerge in road traffic. Among the emotions observed in road traffic participants, the following are central to this elaboration: anxiety, fear, and restlessness. Once experienced, these emotions condition a specific interpretation of a road traffic scene. Fear, as well as anxiety in particular, can be recognised using technologically advanced instruments. Eye tracking was chosen by the author to serve as an example of the said measurement techniques. The relevant studies were conducted on a sample of vehicle drivers in individual and collective transport. The article provides the critical remark that the identification of emotions must each time be supported by the identification of stimulants and correlated with the results of other measurement techniques. The author believes that emotional states can be studied in road and rail traffic, and may offer some utility value.
Article
Full-text available
Recent academic and policy preoccupations with ‘Black Swans’ underscore the predicament of capturing and communicating risk events when information is absent, partial, incomplete or contingent. In this article, we wish to articulate some key thematic and theoretical points of concurrence around which academic and practitioner interests in risk communication under conditions of ‘high uncertainty’ intersect. We outline the historical context and recent debate concerning the limits to ‘risk thinking’ spurred by Black Swans, and in particular how this calls for a more holistic approach to risk communication. In order to support a more critical foresight agenda, we suggest incorporating ‘adaptive governance’ principles to decentre focal risk communication concerns on the mitigation of short-term security threats, which critics argue can also lead to other unforeseen dangers. Finally, we welcome further interdisciplinary inquiry into the constitution and use of risk communication under high uncertainty.
Article
We prove that a subtle but substantial bias exists in a standard measure of the conditional dependence of present outcomes on streaks of past outcomes in sequential data. The magnitude of this novel form of selection bias generally decreases as the sequence gets longer, but increases in streak length, and remains substantial for a range of sequence lengths often used in empirical work. The bias has important implications for the literature that investigates incorrect beliefs in sequential decision making - most notably the Hot Hand Fallacy and the Gambler's Fallacy. Upon correcting for the bias, the conclusions of prominent studies in the hot hand fallacy literature are reversed. The bias also provides a novel structural explanation for how belief in the law of small numbers can persist in the face of experience. JEL Classification Numbers: C12; C14; C18;C19; C91; D03; G02. Keywords: Law of Small Numbers; Alternation Bias; Negative Recency Bias; Gambler's Fallacy; Hot Hand Fallacy; Hot Hand Effect; Sequential Decision Making; Sequential Data; Selection Bias; Finite Sample Bias; Small Sample Bias.
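The bias described in this abstract can be reproduced with a few lines of simulation. In the sketch below, the sequence length, streak length, and number of sequences are arbitrary illustrative choices, not parameters taken from the article: it computes, for each short sequence of fair-coin flips, the proportion of flips that immediately follow a head and are themselves heads, then averages these per-sequence proportions. The average comes out noticeably below 0.5 even though every flip is independent.

```python
import random

def follower_proportion(seq, k=1):
    """Proportion of flips that immediately follow a streak of k heads
    and are themselves heads; None if the sequence offers no such flip."""
    hits, opportunities = 0, 0
    for i in range(k, len(seq)):
        if all(seq[i - j] == 1 for j in range(1, k + 1)):
            opportunities += 1
            hits += seq[i]
    return hits / opportunities if opportunities else None

random.seed(0)
n_sequences, seq_len = 100_000, 4   # arbitrary illustrative choices
per_sequence = []
for _ in range(n_sequences):
    seq = [random.randint(0, 1) for _ in range(seq_len)]
    p = follower_proportion(seq, k=1)
    if p is not None:               # skip sequences with no head before the last flip
        per_sequence.append(p)

# Averaging per-sequence proportions, as the standard measure does,
# yields a value noticeably below the true 0.5.
print(sum(per_sequence) / len(per_sequence))
```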
Article
When unexpected and emotion-engaging events become Black Swans and carry an 'extreme impact,' that impact derives not so much from their intrinsic size or importance as from the reaction, or overreaction, they generate, which is often as extreme and unpredictable as the event itself. The most consequential developments in human history, however, stem not from such events, but from changes in thinking and behavior that are gradual and often little noticed as they occur. In addition, when an unexpected, emotion-grabbing event becomes a Black Swan, the response is likely to become internalized, and getting people to re-evaluate through sensible risk analysis and risk communication is extremely difficult. As part of this, events that are aberrations are often unwisely taken instead to be harbingers – and continue to be so even in the face of repeated disconfirming evidence. An examination of the 9/11 response in the US illustrates these points.
Article
There has been a great deal of discussion in the recent literature regarding the supposed phenomenon of "epistemic luck." This is the putative situation in which an agent gains knowledge even though that knowledge has come about in a way that has, in some sense to be specified, involved luck in some significant measure. Unfortunately, very little of the literature that deals with epistemic luck has offered an account of it that is anything more than suggestive. The aim of this paper is to offer a more nuanced elucidation of what is involved in different types of epistemic luck. More specifically, an account of luck is proposed and several varieties of epistemic luck are shown to be compatible with knowledge possession, in contrast to two other varieties whose status is much more problematic. It is argued that by being clear about what is involved in epistemic luck one can gain an insight into several central debates in epistemology, including the "Gettier" counterexamples, the problem of radical scepticism and the so-called "metaepistemological" challenge to externalist theories of knowledge.
Article
There are currently two robust traditions in philosophy dealing with doxastic attitudes: the tradition that is concerned primarily with all-or-nothing belief, and the tradition that is concerned primarily with degree of belief or credence. This paper concerns the relationship between belief and credence for a rational agent, and is directed at those who may have hoped that the notion of belief can either be reduced to credence or eliminated altogether when characterizing the norms governing ideally rational agents. It presents a puzzle which lends support to two theses. First, that there is no formal reduction of a rational agent’s beliefs to her credences, because belief and credence are each responsive to different features of a body of evidence. Second, that if our traditional understanding of our practices of holding each other responsible is correct, then belief has a distinctive role to play, even for ideally rational agents, that cannot be played by credence. The question of which avenues remain for the credence-only theorist is considered.
Article
The goal of this paper is to correct a widespread misconception about the work of Robert Leslie Ellis and John Venn, namely that it can be considered as the ‘British empiricist’ reaction against the traditional theory of probability. It is argued, instead, that there was no unified ‘British school’ of frequentism during the nineteenth century. Where Ellis arrived at frequentism from a metaphysical idealist transformation of probability theory’s mathematical calculations, Venn did so on the basis of an empiricist critique of its ‘inverse application’.