
Violation of Utility Theory in Unique and Repeated Gambles

Authors: Gideon Keren, Willem A. Wagenaar

Abstract

This article is concerned with a recent debate on the generality of utility theory. It has been argued by Lopes (1981) that decisions regarding preferences between gambles are different for unique and repeated gambles. The present article provides empirical support for the need to distinguish between these two. It is proposed that violations of utility theory obtained under unique conditions (e.g., Kahneman & Tversky, 1979) cannot necessarily be generalized to repeated conditions. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Journal of Experimental Psychology:
Learning, Memory, and Cognition
1987, Vol. 13, No. 3, 387-391
Copyright 1987 by the American Psychological Association, Inc.
0278-7393/87/$00.75

Violation of Utility Theory in Unique and Repeated Gambles

Gideon Keren
Institute for Perception, TNO, Soesterberg, The Netherlands

Willem A. Wagenaar
State University at Leiden, The Netherlands
In two earlier articles published in this journal, Lopes (1981) and Tversky and Bar-Hillel (1983) raised an old problem concerning the rationale of expected utility theory. The problem raised by Lopes concerns the distinction between the interpretation of expected utility (or expected value) in long-run versus short-run situations. Whereas Lopes accepted utility theory in long-run situations, she questioned its rationality and applicability to short-run circumstances and unique events. She proposed that "for short-run situations, it is reasonable to consider the probability of coming out ahead (which is related to the median outcome of the gamble) instead of, or at least in addition to, the long-run expectation" (p. 377). To support her conjecture, she discussed three examples involving unique and repeated gambles and concluded that the standard conception of rational choice, based on the maximization of expected utility, "is simply not sensible." Tversky and Bar-Hillel have rebutted Lopes' arguments, mainly by proposing a different analysis of the examples used by Lopes.
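Lopes' distinction can be made concrete with a small numerical sketch. The gamble below is our own illustration (the specific payoffs do not appear in either paper): it has a positive expected value, yet a single play is more likely to lose than to win, while repetition drives the probability of coming out ahead toward 1.

```python
from math import comb

# Hypothetical gamble (illustrative values, not from the article):
# win 10 with probability 0.4, lose 5 with probability 0.6.
p_win, gain, loss = 0.4, 10, -5

# Long-run criterion: expected value per play.
ev = p_win * gain + (1 - p_win) * loss  # 0.4*10 - 0.6*5 = 1.0

def p_ahead(n_plays):
    """Probability that total winnings after n_plays plays exceed zero.

    With w wins the total is 10*w - 5*(n - w), positive iff w > n/3,
    so we sum the binomial probabilities of all such w.
    """
    total = 0.0
    for w in range(n_plays + 1):
        if gain * w + loss * (n_plays - w) > 0:
            total += comb(n_plays, w) * p_win**w * (1 - p_win)**(n_plays - w)
    return total

print(f"EV per play: {ev}")  # positive
for n in (1, 10, 100):
    print(f"P(ahead after {n:3d} plays) = {p_ahead(n):.3f}")
```

For a single play the probability of coming out ahead is only .4 despite the positive expectation; after 100 plays it exceeds .9. This is the sense in which the long-run expectation and the short-run probability of coming out ahead can point in opposite directions.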
The disagreement between Lopes (1981) and Tversky and Bar-Hillel (1983) stems, to a great extent, from treating the problematic issues from different viewpoints and with different underlying assumptions. As noted by Schoemaker (1982), it is inappropriate to assess the acceptability of an optimal model (such as expected utility) without explicit prior statement of its purpose and underlying assumptions. More specifically, we suggest that Lopes has pointed out some inadequacies of utility theory mainly from a descriptive point of view. In contrast, the treatment offered by Tversky and Bar-Hillel is more formal and stems from a normative viewpoint.

The goal of the present article is not to expand on the normative question of whether people should treat unique and repeated gambles in the same manner. For the development of a psychological theory, it is more relevant to determine whether in practice people do react differently or in the same manner to unique and repeated gambles.
Modern utility theory, as developed by von Neumann and Morgenstern (1947), claimed not only to provide sound justification for the Bernoullian expected utility principle, but also to show that "this justification does not depend on long run considerations, hence it is applicable to unique choice situations" (Coombs, Dawes, & Tversky, 1970, p. 126; see also Luce & Raiffa, 1957). It is therefore not surprising that most investigators did not hesitate to study the application of utility theory as a model of human decision making by using research paradigms that contained unique gambles only. This held also for those who were critical of utility theory, like Allais (1953). A more recent example is the classical set of experiments presented by Kahneman and Tversky (1979) to demonstrate the inadequacy of utility theory. Out of 14 decision problems used by these investigators, 13 contained unique gambles and the remaining one (Problem 9) was ambiguous.1 The conclusion reached by Kahneman and Tversky that "utility theory, as it is commonly interpreted and applied, is not an adequate descriptive model" (p. 263), is not necessarily true for decision makers faced with repeated gambles.

Author note: We would like to thank Baruch Fischhoff, Sarah Lichtenstein, Charles Lewis, and Charles Vlek for many valuable comments on previous drafts of this article. Correspondence concerning this article should be addressed to Gideon Keren, Institute for Perception, TNO, Kampweg 5, 3769 DE Soesterberg, The Netherlands.
Utility theory treats the two domains, unique and repeated events, monolithically. Consequently, most researchers assumed, at least implicitly, that demonstrated violations of utility theory obtained under unique conditions (like those reported by Kahneman & Tversky, 1979) can be generalized to repeated conditions. The major purpose of the present investigation was to test whether such a generalization is justified. More specifically, in the experiments reported here, we studied two robust effects that were used by Kahneman and Tversky to demonstrate the inadequacy of utility theory in unique cases. These are the certainty effect and the possibility effect. The purpose of these experiments was to test whether utility theory would also be violated under repeated-gamble conditions.
Two of the few researchers to realize the potential difference between unique and repeated gambles were Coombs and Bowen (1971). They asked their subjects to rank order different sets of gambles in terms of their perceived riskiness. A major variable manipulated by these investigators was the number of
1 Problem 9 offers a probabilistic insurance of property against damage. Although the decision to insure is taken only once, the damage (such as fire or theft) can occur many times in the period covered by the contract. Paying half of the premium for coverage on odd days of the month may be normatively attractive when damage can occur only once. It is very unattractive if damage occurs 10 times, because the probability that all instances will occur on odd days (the days on which the insurance is valid) is very small.
... By contrast, our design involved participants making hundreds of choices with very short intervals between them. Serialization has long been appreciated to promote risk-seeking (for gains) because it allows participants to amortize the risk associated with each choice [11,12]. Short intervals between choices have also been implicated in modulating risk-seeking in NHPs [13], which can be interpreted as a form of temporal discounting which is known to be abnormal in gambling-addicted humans [14,15]. ...
... Broadly, risk attitudes describe how decision-makers perceive risk given different contexts [16,17]. In line with serialization prompting risk-seeking for gains [11,12,18], our participants' preference for experience stimuli when seeking gains suggests that experience options were perceived as riskier than description options. Similarly, were the experience options perceived as riskier, that participants avoided them when mitigating losses could be explained as a form of uncertainty aversion [19][20][21]. ...
Article
Full-text available
Subjective inferences of probability play a critical role in decision-making. How we learn about choice options, through description or experience, influences how we perceive their likelihoods, an effect known as the description–experience (DE) gap. Classically, the DE gap details how low probability described options are perceptually inflated as compared to equiprobable experience ones. However, these studies assessed probability perception relative to a ‘sure-bet’ option, and it remained unclear whether the DE gap occurs when humans directly trade-off equiprobable description and experience options and whether choice patterns are influenced by the prospects of gain and loss. We addressed these questions through two experiments where humans chose between description and experience options with equal probabilities of either winning or losing points. Contrary to early studies, we found that gain-seeking participants preferred experience options across all probability levels and, by contrast, loss-mitigating participants avoided the experience options across all probability levels, with a maximal effect at 50%. Our results suggest that the experience options were perceived as riskier than descriptive options due to the greater uncertainty associated with their outcomes. We conclude by outlining a novel theory of probabilistic inference where outcome uncertainty modulates probability perception and risk attitudes.
... E.g. group REAL in Beattie and Loomes (1997), experiments CR5 and CR6 in Cubitt et al. (2001), Cubitt et al. (1998), study 2 in DeKay et al. (2016), Hagen (1979), Keren and Wagenaar (1987), Schmidt and Seidl (2014), and Weber and Chapman (2005). ...
Article
Full-text available
The common-ratio effect and the Allais Paradox (common-consequence effect) are the two best‐known violations of Expected Utility Theory. We reexamine data from 39 articles reporting experiments (143 designs/parameterizations, 14,909 observations) and find that the common-ratio effect is systematically affected by experimental design and implementation choices. The common-ratio effect is more likely to be observed in experiments with a low common-ratio factor, a high ratio of middle to highest outcome, when lotteries are presented as simple probability distributions (not in a compound/frequency form), and with high hypothetical incentives.
... The certainty effect is one such example, which arises when gambles contain at least one safe option (e.g., $3 with probability 1) versus multiple risky options (e.g., $4 with probability .8, $0 otherwise; Glöckner, Hilbig, Henninger, & Fiedler, 2016; Kahneman & Tversky, 1979; Keren & Wagenaar, 1987; Wulff et al., 2018). Compared to the certain option, any probabilistic outcome will seem exceptionally risky, resulting in particularly low apparent probability weights for the high-probability risky options. ...
Preprint
Full-text available
When making decisions based on probabilistic outcomes, people guide their behavior using knowledge gathered through both indirect descriptions and direct experience. Paradoxically, how people obtain information significantly impacts apparent preferences. A ubiquitous example is the description-experience gap: individuals seemingly overweight low probability events when probabilities are described yet underweight them when probabilities must be experienced firsthand. A leading explanation for this fundamental gap in decision-making is that probabilities are weighted differently when learned through description relative to experience, yet a formal theoretical account of the mechanism responsible for such weighting differences remains elusive. Here, we demonstrate how various learning and memory retention models incorporating neuroscientifically-motivated learning mechanisms can explain why probability weighting and valuation parameters are often found to vary across description and experience. In a simulation study, we show how learning through experience can lead to systematically biased estimates of probability weighting when using a traditional cumulative prospect theory model. We then use hierarchical Bayesian modeling and Bayesian model comparison to show how various learning and memory retention models capture participants’ behavior over and above changes in outcome valuation and probability weighting, accounting for description and experience-based decisions in a within-subject experiment. We conclude with a discussion of how substantive models of psychological processes can lead to insights that heuristic, statistical models fail to capture.
... Christensen, Heckerling, Mackesy-Amiti, Bernstein, and Elstein (1995) found much smaller framing effects for medical experts compared to novices. In addition, repeated play of gambles with outcome feedback reduces violations of EUT in Allais-type problems (van de Kuilen and Wakker, 2006) and leads people to approach maximizing expected value (Keren & Wagenaar, 1987; Barron & Erev, 2003). More recently, van de Kuilen (2009) conducted a study in which he found that the best-fitting probability weighting of Prospect Theory approached linearity after increased experience. ...
Chapter
This chapter reviews normative and descriptive aspects of decision making. Expected Utility Theory (EUT), the dominant normative theory of decision making, is often thought to provide a relatively poor description of how people actually make decisions. Prospect Theory has been proposed as a more descriptively valid alternative. The failure of EUT seems at least partly due to the fact that people’s preferences are often unstable and subject to various influences from the method of elicitation, decision context, and goals. In novel situations, people need to infer their preferences from various cues such as the context and their memories and emotions. Through repeated experience with particular decisions and their outcomes, these inferences can become more stable, resulting in behavior that is more consistent with EUT.
... Such behavior has been associated with a negative impact on individuals' financial decision making (Looney and Hardin, 2009). There exists a substantial body of empirical evidence reporting MLA-compliant behavior among university students in individual decisions (Keren and Wagenaar, 1987; Gneezy and Potters, 1997; Thaler et al., 1997; Bellemare et al., 2005; Langer and Weber, 2005; Fellner and Sutter, 2009), team decisions (Sutter, 2007), and experimental market situations (Gneezy et al., 2003). It has further been demonstrated that also individuals from the general population (Van der Heijden et al., 2012), financial experts (Haigh and List, 2005; Eriksen and Kvaloy, 2010; Larson et al., 2012), and private investors (Wendy and Asri, 2012) behave in accordance with MLA theory. ...
Article
Full-text available
We present results from a highly powered online experiment with 937 participants on Amazon Mechanical Turk (MTurk) that examined whether MTurkers exhibit myopic loss aversion (MLA). The experiment consisted of measuring MLA-compliant behavior in two between-subjects treatments that differed only regarding the risk profile of the risky asset employed. We found no statistically significant evidence of MLA-compliant behavior for either of the two risk profiles among MTurkers in the full samples. However, we found evidence of MLA for one of the two risk profiles in some sub-samples in which we screened out participants based on processing times in the experiment.
Article
Full-text available
Myopic Loss Aversion (MLA) is one of the objects of study of behavioral economics. It corresponds to the fact that participants facing choice situations cannot rationally evaluate the risks and profits of the available options, leading them to choose investments that are more likely to pay off but less profitable. This behavior shows that they cannot evaluate the options satisfactorily, so they make sub-optimal decisions. Certain conditions may make MLA more likely to occur, such as the frequency with which participants are shown the outcomes of their choices (feedback) and the probability of winning or losing. This study therefore aims to evaluate how feedback influences participants' choices, as well as the influence and interaction of the winning and losing probabilities. Eighty people participated, ages 18 to 30 years, all university students, 29 women and 51 men, with neither a connection to business degree courses nor familiarity with research on economic behavior. The experiment consisted of making nine repeated choices in a lottery game. Participants started the experiment with R$1,000 (one thousand) fictitious reais (Brazilian currency); each lottery play cost 150 reais, and a win returned the invested amount plus an additional 150 reais. The results indicate that the presence of feedback induces participants to bet more. However, the winning and losing probabilities did not influence the invested amount, and there was no interaction between these two factors.
Article
Full-text available
We introduce a training intervention based on a novel tool to mitigate behavior consistent with myopic loss aversion (MLA). We present the results of a large-scale online experiment with 894 student participants. The study featured a two-step debiasing training intervention based on experience sampling and a subsequent elicitation of MLA. We found that participants in the baseline treatment exhibit behavior consistent with MLA, which was not the case for decision makers who underwent the debiasing training intervention. Nonetheless, we found no statistically significant difference-in-difference effect of the training intervention on the magnitude of MLA. However, when we focused on the more attentive participants, the magnitude of the difference-in-difference effect of the training intervention increased strongly and became statistically significant when controlling for age, gender, education, field of study, investment experience, and risk preferences.
Article
Full-text available
Rebuts L. Lopes's (see record 1982-07078-001) normative objections to expected utility theory and analyzes the "fallacy of large numbers," discussed by P. A. Samuelson (1963), from both mathematical and psychological standpoints. It is suggested that values in risky decisions are gains and losses defined relative to some reference point. Because the value function tends to be concave for gains and convex for losses, the shift of the reference point can produce systematic reversals of preferences. (6 ref) (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Chapter
Decision theory and the theory of rational choice have recently been the subjects of considerable research by philosophers and economists. However, no adequate anthology exists which can be used to introduce students to the field. This volume is designed to meet that need. The essays included are organized into five parts covering the foundations of decision theory, the conceptualization of probability and utility, philosophical difficulties with the rules of rationality and with the assessment of probability, and causal decision theory. The editors provide an extensive introduction to the field and introductions to each part.
Article
Four experiments were conducted to study the effects of information about expected value (EV) on choices among gambles. The subjects were presented with one or more sets of 17 gambles and were asked to select the one gamble they would prefer from each of these sets. In all experiments the EV concept was explained to the subjects, after which the EV was displayed for each choice alternative. The experiments varied with respect to (a) the structure of the gambling alternatives, (b) the type of experimental design (within- or between-subjects design), and (c) whether repeated gambles were allowed or not. The EV information had marginal effects on the subjects' choice behavior except when repeated gambles were allowed. It is suggested that subjects, rather than being guided by an abstract composite measure such as EV, attempted to find a gamble having some concrete pattern of features.
Article
Examines the commonly accepted idea that the only rational measure of the worth of a gamble is its expected value or some subjective counterpart such as expected utility. The argument is made that for short-run situations, it is reasonable to consider the probability of coming out ahead instead of, or in addition to, the long-run expectation. Changes are discussed that might be called for in theories of rational choice when the prescriptions of rational models violate common sense. (23 ref) (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
There are several theories of risk which indicate that risk could be a function only of variance and expectation. A transformation on odds or skewness was constructed which left the variance and expectation of a gamble unchanged. Perceived risk was clearly a function of this transformation as well as of variance and expectation, even under multiple play, in which the central limit theorem modifies the effect of skewness, though skewness remains a relevant variable.
Article
The use of transformations to stabilize the variance of binomial or Poisson data is familiar (Anscombe [1], Bartlett [2, 3], Curtiss [4], Eisenhart [5]). The comparison of transformed binomial or Poisson data with percentage points of the normal distribution to make approximate significance tests or to set approximate confidence intervals is less familiar. Mosteller and Tukey [6] have recently made a graphical application of a transformation related to the square-root transformation for such purposes, where the use of "binomial probability paper" avoids all computation. We report here on an empirical study of a number of approximations, some intended for significance and confidence work and others for variance stabilization. For significance testing and the setting of confidence limits, we should like to use the normal deviate K exceeded with the same probability as the number of successes x from n in a binomial distribution with expectation np, which is defined by

\frac{1}{\sqrt{2\pi}} \int_{-\infty}^{K} e^{-\frac{1}{2}t^2}\,dt = \operatorname{Prob}\{x \leq k \mid \operatorname{binomial}, n, p\}.

The most useful approximations to K that we can propose here are N (very simple), N^+ (accurate near the usual percentage points), and N^{**} (quite accurate generally), where

N = 2\left(\sqrt{(k + 1)q} - \sqrt{(n - k)p}\right)

(this is the approximation used with binomial probability paper),

N^+ = N + \frac{N + 2p - 1}{12\sqrt{E}}, \quad E = \text{lesser of } np \text{ and } nq,

N^{*} = N + \frac{(N - 2)(N + 2)}{12}\left(\frac{1}{\sqrt{np + 1}} - \frac{1}{\sqrt{nq + 1}}\right),

N^{**} = N^{*} + \frac{N^{*} + 2p - 1}{12\sqrt{E}}, \quad E = \text{lesser of } np \text{ and } nq.

For variance stabilization, the averaged angular transformation

\sin^{-1}\sqrt{\frac{x}{n + 1}} + \sin^{-1}\sqrt{\frac{x + 1}{n + 1}}

has variance within \pm 6\% of

\frac{1}{n + \frac{1}{2}} \text{ (angles in radians)}, \qquad \frac{821}{n + \frac{1}{2}} \text{ (angles in degrees)},

for almost all cases where np \geq 1. In the Poisson case, this simplifies to using \sqrt{x} + \sqrt{x + 1} as having variance 1.
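The variance-stabilization claim in this abstract can be checked numerically. The sketch below is our own illustration: it computes the exact variance of the averaged angular transformation under a Binomial(n, p) distribution and compares it with the stated value 1/(n + 1/2); n = 30 and p = 0.3 are arbitrary choices satisfying np >= 1.

```python
from math import asin, sqrt, comb

def freeman_tukey(x, n):
    """Averaged angular (arcsine) transformation, angles in radians."""
    return asin(sqrt(x / (n + 1))) + asin(sqrt((x + 1) / (n + 1)))

def exact_variance(n, p):
    """Exact variance of the transform when X ~ Binomial(n, p),
    computed by summing over the full probability mass function."""
    pmf = [comb(n, x) * p**x * (1 - p)**(n - x) for x in range(n + 1)]
    mean = sum(w * freeman_tukey(x, n) for x, w in enumerate(pmf))
    return sum(w * (freeman_tukey(x, n) - mean) ** 2
               for x, w in enumerate(pmf))

n, p = 30, 0.3  # illustrative values with np >= 1
ratio = exact_variance(n, p) / (1 / (n + 0.5))
print(f"variance ratio to 1/(n + 1/2): {ratio:.4f}")  # close to 1
```

The abstract's claim is that this ratio stays within +/- 6% of 1 for almost all cases with np >= 1; for moderate np, as here, it is typically much closer.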