Article
Publisher preview available

Distinguishing Between Random and Nonrandom Events

American Psychological Association
Journal of Experimental Psychology: Learning, Memory, and Cognition
Authors: Lola L. Lopes, Gregg C. Oden (University of Wisconsin—Madison)

Abstract

Subjects judged whether binary strings had been generated by a random or a nonrandom process. Half of the strings were generated by a Bernoulli process with p = .5. The other half were generated by either a repetition-biased process or an alternation-biased process. Subjects were (a) not informed about the nonrandom process, (b) informed about the qualitative nature of the process, or (c) given accurate feedback after each trial about the generating process. The data show that subjects equate long runs and symmetry with nonrandomness, and high rates of alternation with randomness, making them less successful in detecting alternation-biased processes. The data also show that performance can be improved by instructions or feedback. A second experiment using statistically sophisticated subjects showed that although they perform better than naive subjects, their data are similar qualitatively. We interpret these results in terms of whether the subject must perform the task in a null hypothesis mode or a maximum likelihood mode. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
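The three generating processes named in the abstract can be sketched in a few lines of code. The Python sketch below is illustrative only: the string length, the trial counts, and the bias parameters of the repetition-biased and alternation-biased (first-order Markov) processes are assumptions made for the example, not values reported in the paper.

```python
import random

def bernoulli_string(n, p=0.5, rng=random):
    """Random process: each symbol is H with probability p, independently."""
    return "".join("H" if rng.random() < p else "T" for _ in range(n))

def markov_string(n, p_repeat, rng=random):
    """Biased process: a first-order Markov chain in which each symbol repeats
    the previous one with probability p_repeat (repetition-biased if > .5,
    alternation-biased if < .5)."""
    s = ["H" if rng.random() < 0.5 else "T"]
    for _ in range(n - 1):
        if rng.random() < p_repeat:
            s.append(s[-1])
        else:
            s.append("T" if s[-1] == "H" else "H")
    return "".join(s)

# Illustrative trial set: half random, half biased. String length, trial counts,
# and bias values are assumptions for the sketch, not the published parameters.
trials  = [("random", bernoulli_string(20)) for _ in range(50)]
trials += [("repetition-biased", markov_string(20, p_repeat=0.7)) for _ in range(25)]
trials += [("alternation-biased", markov_string(20, p_repeat=0.3)) for _ in range(25)]
```

With p_repeat above .5 the biased process produces unusually long runs; below .5 it alternates more often than a fair coin, which is the case the abstract reports subjects detect least well.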
Journal of Experimental Psychology:
Learning, Memory, and Cognition
1987, Vol. 13, No. 3, 392-400
Copyright 1987 by the American Psychological Association, Inc.
0278-7393/87/$00.75

Distinguishing Between Random and Nonrandom Events

Lola L. Lopes and Gregg C. Oden
University of Wisconsin—Madison
It is well known that people are not naturally good at generating random sequences (for reviews, see Tune, 1964; Wagenaar, 1972). Compared with real random sequences,¹ human-produced sequences have too few symmetries and long runs, too many alternations among events, and too much balancing of event frequencies over relatively short regions. Although various interpretations exist for these results, a case can be made that the differences stem, at least in part, from a bias to expect that things that are random will look random more consistently than they really do. Thus, taking coin tossing as an example, the sequence HTHTTHHT looks more random than either HTHHHHHH or HTHTHTHT because it has equal numbers of heads and tails and also because it has no easily discernible pattern. Support for such a conceptual view can be found in studies of the judged randomness of sequences (e.g., Falk, 1981; Teigen, 1983; Wagenaar, 1970).
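The statistics behind this intuition are easy to make explicit. A small Python sketch follows; the three strings are those from the example above, and the particular descriptors shown (frequency balance, longest run, alternation rate) are illustrative, not measures taken from the paper.

```python
def longest_run(s):
    """Length of the longest run of identical symbols."""
    best = cur = 1
    for a, b in zip(s, s[1:]):
        cur = cur + 1 if b == a else 1
        best = max(best, cur)
    return best

def alternation_rate(s):
    """Proportion of adjacent pairs that differ (expected value .5 for a fair coin)."""
    return sum(a != b for a, b in zip(s, s[1:])) / (len(s) - 1)

for s in ["HTHTTHHT", "HTHHHHHH", "HTHTHTHT"]:
    print(f"{s}  heads={s.count('H')}  longest_run={longest_run(s)}  "
          f"alternation_rate={alternation_rate(s):.2f}")
```

The string that looks most random, HTHTTHHT, in fact alternates on 5 of its 7 adjacent pairs (about .71), more than the .5 expected of a fair coin, which is exactly the overweighting of alternation described above.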
Outside the laboratory, people have little need to generate random strings, but they often perform the inductive task of deciding whether an unexpected event is merely coincidental (i.e., random), or whether it signals a new or hitherto unrecognized regularity in the environment (Lopes, 1982). Consider Guildenstern in the play Rosencrantz and Guildenstern Are Dead as he confronts such an event. Eighty-nine times he has tossed a coin—albeit a different coin each time—and 89 times it has landed heads. Should Guildenstern accept that this is merely, as he says, "a spectacular vindication of the principle that each individual coin spun individually is as likely to come down heads as tails and therefore should cause no surprise each individual time it does" or is he correct to wonder if this is "indicative of something . . . un-, sub-, or supernatural?" (Stoppard, 1967, Act I, Scene 1).

This research was supported by Wisconsin Alumni Research Foundation Grant 100691 to Lola L. Lopes and National Science Foundation Grant BNS80-14316 to Gregg C. Oden.
We thank Josh Klayman for having introduced us to Tom Stoppard's play, and Michael Waterman and P. Revesz for helping us obtain information on the Varga teaching experiment.
Correspondence concerning this article should be addressed to Lola Lopes, Department of Psychology, University of Wisconsin, Madison, Wisconsin 53706.
Lopes (1982) has argued that such situations can be usefully studied in a signal detection framework. An observer is presented with candidate strings and asked to decide, for each string, whether it has been generated by a random or a nonrandom process. A known proportion of the strings are in fact generated by a random process, and the rest are generated by a process that differs in some way from the random process. The observer may or may not have information about the character of the nonrandom process. Observers are correct if they judge a string produced by the nonrandom process (the signal) to be nonrandom (a hit) or if they judge a string produced by the random process (the noise) to be random (a correct rejection). They are incorrect if they judge a string produced by the random process to be nonrandom (a false alarm) or a string produced by the nonrandom process to be random (a miss).
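This bookkeeping maps directly onto standard equal-variance Gaussian signal detection measures. A minimal Python sketch with purely hypothetical counts (not data from the experiment):

```python
from statistics import NormalDist

def sdt_summary(hits, misses, false_alarms, correct_rejections):
    """Equal-variance Gaussian signal detection measures from outcome counts."""
    hit_rate = hits / (hits + misses)                             # P(say "nonrandom" | nonrandom process)
    fa_rate = false_alarms / (false_alarms + correct_rejections)  # P(say "nonrandom" | random process)
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)              # discriminability
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))   # response bias (c)
    return {"hit_rate": hit_rate, "fa_rate": fa_rate, "d_prime": d_prime, "c": criterion}

# Hypothetical counts, purely for illustration:
print(sdt_summary(hits=70, misses=30, false_alarms=40, correct_rejections=60))
```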
There are two basic approaches to such tasks, the choice of which depends on how much the observer knows about the nonrandom process. The simpler of the two is what we will call the maximum likelihood (ML) mode. This requires that the observer have full or partial knowledge of the characteristics of both the random and the nonrandom process. Such an observer can use this knowledge to decide for each candidate string whether it is closer to what would be expected from the random process or what would be expected from the nonrandom process. For a sophisticated observer with full statistical knowledge of the alternative processes, this would amount to using a maximum likelihood ...
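As an illustration of the maximum likelihood mode, suppose (purely for this sketch; the preview cuts off before the authors' own formalization) that the nonrandom alternative is known to be a first-order Markov process with repetition probability p_repeat. The observer's decision then reduces to comparing the log-likelihood of the string under the two processes.

```python
from math import log

def loglik_bernoulli(s, p=0.5):
    """Log-likelihood of the string under an i.i.d. Bernoulli (fair coin) process."""
    return sum(log(p) if c == "H" else log(1 - p) for c in s)

def loglik_markov(s, p_repeat, p_first=0.5):
    """Log-likelihood under a first-order Markov alternative with a known
    repetition probability p_repeat."""
    ll = log(p_first)
    for prev, cur in zip(s, s[1:]):
        ll += log(p_repeat) if cur == prev else log(1 - p_repeat)
    return ll

def ml_judgment(s, p_repeat):
    """Call the string nonrandom if it is more likely under the biased process
    than under the fair coin (equal prior probabilities assumed)."""
    return "nonrandom" if loglik_markov(s, p_repeat) > loglik_bernoulli(s) else "random"

print(ml_judgment("HTHHHHHH", p_repeat=0.7))  # long run: more likely under the biased process
print(ml_judgment("HTHTTHHT", p_repeat=0.7))  # closer to what a fair coin produces
```

With unequal prior probabilities or asymmetric payoffs, the comparison would instead use a likelihood-ratio criterion other than 1, as in the signal detection framework above.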
¹ It is not easy to say what a "real" random sequence is. Although one might suppose it to be a sequence produced by a real random process, one cannot know that a real process is random without inferring that fact from the statistical characteristics of the sequences it produces. For our purposes, it is not necessary to solve this definitional problem. Instead, we will use the terms random process and randomness to refer to the properties expected theoretically of stationary Bernoulli processes.
... The affinity between the two has been recognized of course (e.g. Falk, 1975; Falk & Konold, 1997; Garner, 1970; Griffiths & Tenenbaum, 2003; Lopes & Oden, 1987; see also Garner, 1974) but Fitousi's strings hold the promise of aligning the respective bodies of work in a more detailed manner. ...
... Although people find the dichotomy, random-nonrandom, almost as compelling as the dichotomy, symmetrical-nonsymmetrical, the former still proved amenable to meaningful graded ratings (Falk, 1975; Falk & Konold, 1997; Lopes & Oden, 1987). ...
... Consequently, symmetry has been incorporated into formal models of apparent randomness (e.g. Griffiths & Tenenbaum, 2003; Lopes & Oden, 1987). The "difficulty predictor" (DP) model developed by Falk and Konold (1997) is based on the relative frequency of runs (subsequences containing the same symbol) in the stimulus. ...
Article
Full-text available
Of the four interrelated concepts in the title, only symmetry has an exact mathematical definition. In mathematical development, symmetry is a graded variable, in marked contrast with the popular binary conception of symmetry in and out of the laboratory (i.e. an object is either symmetrical or nonsymmetrical). Because the notion does not have a direct graded perceptual counterpart (experimental participants are not asked about the amount of symmetry of an object), students of symmetry have taken various detours to characterize the perceptual effects of symmetry. Current approaches have been informed by information theory, mathematical group theory, randomness research, and complexity. Apart from reviewing the development of the main approaches, for the first time we calculated associations between figural goodness as measured in the Garner tradition and measures of algorithmic complexity and randomness developed in recent research. We offer novel ideas and analyses by way of integrating the various approaches.
... Detecting patterns. Following (Lopes, 1982; Lopes & Oden, 1987) we model discrimination of patterned and random sequences as equal-variance Gaussian signal detection theory (SDT). ...
... In normative SDT, criterion c is set to maximize expected gain given the penalties and rewards associated with each outcome and the prior probability that a signal is present. (Lopes, 1982; Lopes & Oden, 1987) first reformulated the problem of pattern detection within the framework of SDT. Lopes & Oden asked observers to discriminate binary patterns generated by Markov chains (defined below) from binary patterns generated by a "fair coin." ...
... With probability p_d, we independently flipped each of the tokens in the Markov sequence from blue to yellow or vice versa (Figure 2C). If p_d is 0, the resulting DMS is just a Markov sequence similar to those used by (Lopes, 1982; Lopes & Oden, 1987). If p_d is 0.5, the resulting DMS is effectively a random sequence: the disruption process with p_d equal to 0.5 removes any pattern in the disrupted sequence. ...
Article
Full-text available
We measured human ability to detect texture patterns in a signal detection task. Observers viewed sequences of 20 blue or yellow tokens placed horizontally in a row. They attempted to discriminate sequences generated by a random generator ("a fair coin") from sequences produced by a disrupted Markov sequence (DMS) generator. The DMSs were generated in two stages: first a sequence was generated using a Markov chain with probability p_r = 0.9 that a token would be the same color as the token to its left. The Markov sequence was then disrupted by flipping each token from blue to yellow or vice versa with probability p_d, the probability of disruption. Disruption played the role of noise in signal detection terms. We can frame what observers are asked to do as detecting Markov texture patterns disrupted by noise. The experiment included three conditions differing in p_d (0.1, 0.2, 0.3). Ninety-two observers participated, each in only one condition. Overall, human observers' sensitivities to texture patterns (d' values) were markedly less than those of an optimal Bayesian observer. We considered the possibility that observers based their judgments not on the entire texture sequence but on specific features of the sequences such as the length of the longest repeating subsequence. We compared human performance with that of multiple optimal Bayesian classifiers based on such features. We identify the single- and multiple-feature models that best match the performance of observers across conditions and develop a pattern feature pool model for the signal detection task considered.
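The two-stage generator this abstract describes can be sketched directly. In the Python sketch below, the repetition probability p_repeat = 0.9, the disruption probabilities, and the sequence length of 20 follow the abstract; the 'B'/'Y' token coding, function name, and interface are assumptions of the sketch.

```python
import random

def disrupted_markov_sequence(n=20, p_repeat=0.9, p_disrupt=0.1, rng=random):
    """Two-stage generator described in the abstract: a Markov chain in which each
    token matches the token to its left with probability p_repeat, followed by
    independently flipping each token with probability p_disrupt."""
    seq = [rng.choice("BY")]                     # B = blue, Y = yellow (illustrative coding)
    for _ in range(n - 1):
        if rng.random() < p_repeat:
            seq.append(seq[-1])
        else:
            seq.append("Y" if seq[-1] == "B" else "B")
    return "".join(("Y" if c == "B" else "B") if rng.random() < p_disrupt else c
                   for c in seq)

print(disrupted_markov_sequence(p_disrupt=0.3))   # one of the three conditions (p_d = 0.3)
```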
... An important question based on these findings is whether individuals detect the random relationship between unrelated events when the events are presented in triads as mentioned above. Previous research has shown that, when choices and outcomes are independent of one another, i.e., random, human subjects tend to perceive a nonrandom relation when there are long runs of choices and outcomes with symmetry (Lopes & Oden, 1987) or when such choice and outcome pairs are contiguous (Matute, 1994). This, in turn, leads to an increased likelihood of engagement in superstitious behavior (Hake & Hyman, 1953; Niness & Niness, 1998; Rudski, Lischner, & Albert, 1999). ...
... However, factors such as feedback on one's performance and experience with random processes tend to reduce the formation of such nonrandom relations between events (Lopes & Oden, 1987). These factors may have this effect because they provide more accessible information about the nature of the relations. ...
Article
Full-text available
Decision-making researchers have shown that making optimal decisions is aided by the detection of information salient to the task. When the task involves random events, humans tend to perceive these events as contingent. In this study, outcomes were grouped together with choices to identify some of the conditions under which random events are correctly perceived. Of the two groups (ns = 40) only one was provided information regarding the relationship between choice and outcome. This provision did not improve the detection of the relationship between random events any more than direct contact with the underlying contingencies. Findings are discussed in terms of experiential contact with and sensitivity to underlying contingency.
... In the literature on randomness judgments [6,7], this phenomenon is often referred to as either gambler's fallacy or negative recency, a tendency to overestimate the frequency of alternations. For example, in experiments in which the task was to assess the outcomes of consecutive fair coin tosses, people considered series with alternation rates between 0.6 and 0.7 more random than series with an exact alternation rate of 0.5 [8][9][10]. Moreover, when producing a random series, people not only violate the constraint of uniform frequencies of the distinct elements but also produce series that exhibit interdependence of consecutive items. ...
... This leaves an open question of whether, by improving people's definition of randomness (and providing them with more complex patterns), they will be able to produce more random series. Although some classic studies on random series generation [9,65] indeed demonstrated that people with a better understanding of concepts such as randomness, probability, or statistics in general performed better in random-like series generation tasks than novices, those studies used only simple measures based on frequencies of elements and did not examine the dynamics of the series' complexity. ...
Article
Full-text available
People are not equipped with an internal random series generator. When asked to produce a random series, they simply try to reproduce the output of a known random process. However, this endeavor is very often limited by their working memory capacity. Here, we investigate a model of random-like series generation that accounts for the involvement of the storage and processing components of working memory. In two studies, we used a modern, robust measure of randomness to assess human-generated series. In Study 1, in an experimental design with the visibility of the last generated elements as a between-subjects variable, we tested whether decreasing the cognitive load on working memory would mitigate the decay in the level of randomness of the generated series. Moreover, we investigated the relationship between randomness judgment and the algorithmic complexity of human-generated series. Results showed that when people did not have to rely solely on the storage component of working memory to maintain their past choices, they were able to prolong their high-quality performance. Moreover, people who were better able to distinguish complex patterns also generated more random series. In Study 2, in a correlational design, we examined the relationship between working memory capacity and the ability to produce random-like series. Results revealed that individuals with greater working memory capacity also produced more complex series. These findings highlight the importance of working memory in generating random-like series and provide insights into the underlying mechanisms of this cognitive process.
... The belief that the occurrence of a particular outcome in random events will be balanced by a tendency for the opposite outcome is a cognitive fallacy that is rooted in the false intuition that events are not truly random and independent. Several mechanisms have been proposed to explain this fallacy, including the representativeness heuristic (Kahneman & Tversky, 1972); the law of small numbers, which holds that the probabilistic properties of large samples also apply to small samples (Tversky & Kahneman, 1971; Rabin, 2002); and the experience of negative recency (Ayton et al., 1989; Lopes & Oden, 1987). The fallacy has not only been exhibited in probability learning experiments (cited above), but has also been identified empirically in real gambling (Croson & Sundali, 2005; Suetens et al., 2016), sport (Misirlisoy & Haggard, 2014) and investments (Odean, 1998; Shapira & Venezia, 2001; Chen et al., 2007). ...
... A misunderstanding of sampling: in situations where a finite population of outcomes is sampled without replacement, expectations with negative recency have some validity, because observing a particular outcome indeed lowers the chances of observing that outcome the next time. Accordingly, some authors have suggested that the experience of negative recency in life might be responsible for the gambler's fallacy in experimental tasks where participants are asked to generate or recognize random sequences (Ayton et al., 1989; Lopes & Oden, 1987). ...
Preprint
Full-text available
This thesis presents findings from an online experiment aimed at evaluating individuals' attitudes towards their own cognitive biases that lead to objective mistakes. In a number of incentivized decision problems, a participant in the experiment might make a mistake and choose a dominated lottery (FOSD lottery), which is likely to result from a particular heuristic or cognitive bias. In this case, she is then confronted with her error under one of two treatments: an explanation of the mistake only, or an explanation that also names the supposed mechanism (bias). Following this explanation, her comfort level with her choice is measured. A bias that acted as a mechanism for the observed error could be considered advantageous (i.e., useful given alternative costs) or disadvantageous (i.e., mostly harmful) in other decision-making scenarios encountered during one's lifetime. Consequently, the perception of cognitive biases is subjective and may vary among decision makers, who may emphasize either the positive or negative aspects of such biases. The proposed methodology makes it possible to determine whether these biases are perceived as useful rules of thumb, despite leading to a dominated lottery choice in that particular context, or as unfavorable. Results revealed significant differences between the two treatments and across five different decision problems. Interestingly, the results showed that in only 60% of the cases did participants feel uneasy about their mistakes. In the remaining instances, the participants demonstrated either indifference or a sense of comfort with their choice of a dominated lottery. The net effect of providing information on the cognitive biases that presumably led to the mistakes, measured as the difference between treatments, revealed a negative perception of the mechanism behind the error in one problem, contentment with the mechanism in another, and no significant effect for the remaining biases.
... whereas in the second series, the probability of an alternation, Pr(T|H) and Pr(H|T), is .6 and that of a repetition, Pr(H|H) and Pr(T|T), is .4. Judging the randomness of two-event series, people consistently consider series with alternation rates of about .6 to be random, whereas series with an alternation rate of .5 (as would be expected of a random series) are thought to contain runs too long to have been produced by chance (e.g., Falk, 1981; Gilovich, Vallone, & Tversky, 1985; Lopes & Oden, 1987; for a recent review, see Bar-Hillel & Wagenaar, 1991). People's bias in judging randomness is so strong and consistent as to inspire statements such as "People have a very poor conception of randomness; they don't recognize it when they see it and they cannot produce it when they try" (Slovic, Kunreuther, & White, 1974, p. 192). ...
Article
Full-text available
Perception of covariation often differs from statistically normative values: People find order in random series and relationships between uncorrelated values. Theoretical analysis, allowing for working-memory limitations, shows that the degree of covariation in the typical, locally representative series is more negative, whereas that of the atypical series is more positive, than the covariation in the complete set. I assumed that typical series serve as a norm to which other series are compared, and predicted a positive bias in the perception of covariation. This prediction was tested and found to hold across a wide range of actual relationships in 2 experiments involving sequential dependencies and events with co-occurring values. Another analysis revealed positive correlations to be more informative than negative ones when events are not equiprobable. Positive bias may thus be a rational predisposition for early detection of a potentially more informative relationship.
... Such a rise in confidence might also be caused by another phenomenon: human difficulties in differentiating between random and non-random (deterministic) sequences (Lopes & Oden, 1987; Williams & Griffiths, 2013). DMs are inclined to identify sequences wherever possible (Tyszka, Markiewicz, Kubińska, Gawryluk, & Zielonka, 2017; Tyszka, Zielonka, Dacey, & Sawicki, 2008). ...
... People also expect random generators to produce sequences with a high proportion of reversals, or alternation rates of about .60 (Ayton, Hunt, & Wright, 1989; Bar-Hillel & Wagenaar, 1991; Falk, 1981; Rapoport & Budescu, 1997; Reimers, Donkin, & Le Pelley, 2018). When asked to judge sequences of outcomes that have unequal base rates, or that exhibit an alternation rate lower than .60, people view those sequences as too "streaky," and judge the outcome-generating process to be non-random (Gronchi & Sloman, 2008; Lopes & Oden, 1987; Olivola & Oppenheimer, 2008; Scholl & Greifeneder, 2011). When asked to predict future outcomes for sequences produced by random generators, people also expect the proportion of outcomes to reflect the population base rate for each outcome type (i.e. to "balance out") even in short sequences. ...
Article
Full-text available
Beliefs like the Gambler's Fallacy and the Hot Hand have interested cognitive scientists, economists, and philosophers for centuries. We propose that these judgment patterns arise from the observer's mental models of the sequence‐generating mechanism, moderated by the strength of belief in an a priori base rate. In six behavioral experiments, participants observed one of three mechanisms generating sequences of eight binary events: a random mechanical device, an intentional goal‐directed actor, and a financial market. We systematically manipulated participants’ beliefs about the base rate probabilities at which different outcomes were generated by each mechanism. Participants judged 18 sequences of outcomes produced by a mechanism with either an unknown base rate, a specified distribution of three equiprobable base rates, or a precise, fixed base rate. Six target sequences ended in streaks of between two and seven identical outcomes. The most common predictions for subsequent events were best described as pragmatic belief updating, expressed as an increasingly strong expectation that a streak of identical signals would repeat as the length of that streak increased. The exception to this pattern was for sequences generated by a random mechanical device with a fixed base rate of .50. Under this specific condition, participants exhibited a bias toward reversal of streaks, and this bias was larger when participants were asked to make a dichotomous choice versus a numerical probability rating. We review alternate accounts for the anomalous judgments of sequences and conclude with our favored interpretation that is based on Rabin's version of Tversky & Kahneman's Law of Small Numbers.
Article
Some random processes, like a series of coin flips, can produce outcomes that seem particularly remarkable or striking. This paper explores an epistemic puzzle that arises when thinking about these outcomes and asking what, if anything, we can justifiably believe about them. The puzzle has no obvious solution, and any theory of epistemic justification will need to contend with it sooner or later. The puzzle proves especially useful for bringing out the differences between three prominent theories: the probabilist theory, the normic theory, and a theory recently defended by Goodman and Salow.
Article
Full-text available
The process of induction is formulated as a problem in detecting nonrandomness or pattern against a background of randomness or noise. Experimental approaches that have been taken to evaluate the rationality of human conceptions of randomness are described, and the narrow conceptions of randomness implicit in this experimental literature are contrasted with the broader and less well agreed upon conceptions of randomness in philosophy and mathematics. The relation between induction and the experience of randomness is discussed in terms of signal detection theory. It is argued that an adequate evaluation of human conceptions of randomness must consider the role those conceptions play in inductive inference. (36 ref) (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Full-text available
Conducted 2 experiments with 7 undergraduates and 4 high school students, respectively, to test the widely accepted conclusion that people are unable to behave randomly by evaluating whether feedback would enable Ss to learn to generate random sequences. In both experiments, Ss were asked to generate random sequences of 2 numbers on the keyboard of a computer terminal. In Exp I, feedback was given from 5 descriptors; in Exp II, feedback was given from 10. Results indicate that during baseline, all Ss' sequences differed significantly from random, thereby replicating the findings of the literature. But when given feedback from 5 or 10 statistical descriptors, the Ss learned to generate sequences that were indistinguishable, according to these statistics, from computer-generated random numbers. It is concluded that randomlike behavior can be learned. (51 ref) (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Full-text available
Notes that the subjective concept of randomness is used in many areas of psychological research to explain a variety of experimental results. 1 method of studying randomness is to have Ss generate random series. Few results of experiments using this method, however, lend themselves to comparison and synthesis because investigators employ such a variety of experimental conditions and definitions of mathematical randomness. (24 ref.) (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
During the last fifteen months or so, I have been preparing the second edition of my book, Seminumerical Algorithms (The Art of Computer Programming, vol. 2, published by Addison-Wesley), and Prof. Loos has kindly suggested that the readers of SIGSAM Bulletin may want to know something about this new version.
Article
The psychological literature is contradictory with respect to the question of whether or not Ss are able to select a random binary sequence out of a set of non-random ones. If not, it can be assumed that a subjective concept of randomness exists. This paper describes an experiment in which 203 Ss judged binary sequences of white and black dots with respect to randomness. The conditional probability of white following white (black following black) was varied from 0.2 to 0.8 with steps of 0.1. At the same time the order of dependency was varied among 1, 2 and 3. The results showed that sequences with conditional probabilities around 0.4 were judged as most random. This holds for all three orders of dependency. The standard deviation of successive judgments increased with the order of dependency. The data suggest that Ss did not process conditional probabilities or informational contents, but rather the run-structure of the binary sequences. Therefore the effect of 'subjective randomness' is mainly to be attributed to a bias against runs of six or more elements.
Article
When students are asked to predict the outcome of a random event, where all alternatives are equally probable (lotteries), they tend to choose central, “representative” values, and avoid extreme ones. In ten informal experiments, it is shown how this pattern of choices is influenced by various procedural and structural changes in the basic task. The results show that guessing behavior can be described as a kind of absolute judgment, subject to grouping, anchoring and context effects. Of the two general prediction heuristics originally proposed by Kahneman & Tversky (1973), “representativeness” applies better than “availability”. In fact, a major strategy of guessing is apparently to eschew numbers with prominent, “non-random” properties, which at the same time are highly available to the subjects.