
Doing the impossible: A note on induction and the experience of randomness

American Psychological Association
Journal of Experimental Psychology: Learning, Memory, and Cognition

Abstract

The process of induction is formulated as a problem in detecting nonrandomness or pattern against a background of randomness or noise. Experimental approaches that have been taken to evaluate the rationality of human conceptions of randomness are described, and the narrow conceptions of randomness implicit in this experimental literature are contrasted with the broader and less well agreed upon conceptions of randomness in philosophy and mathematics. The relation between induction and the experience of randomness is discussed in terms of signal detection theory. It is argued that an adequate evaluation of human conceptions of randomness must consider the role those conceptions play in inductive inference.
Journal of Experimental Psychology: Learning, Memory, and Cognition
1982, Vol. 8, No. 6, 626-636
Copyright 1982 by the American Psychological Association, Inc. 0278-7393/82/0806-0626$00.75
Doing the Impossible: A Note on Induction and the Experience of Randomness

Lola L. Lopes
University of Wisconsin-Madison
The process of induction is formulated as a problem in detecting nonrandomness or pattern against a background of randomness or noise. The first section of the article describes the experimental approaches that have been taken to evaluate the rationality of human conceptions of randomness. The second section contrasts the narrow conceptions of randomness implicit in this experimental literature with the broader and less well agreed upon conceptions of randomness in philosophy and mathematics. The third section discusses the relation between induction and the experience of randomness in terms of signal detection theory. And the fourth section argues that an adequate evaluation of human conceptions of randomness must consider the role those conceptions play in inductive inference.
In standardized tests of reasoning ability, one often finds questions like this: What digit should go in the space at the end of the series below?

12233344445555566666?

Almost certainly, test makers and proficient test takers would agree that the answer is 6.
Another question that might be asked is the following: Below are three hypotheses concerning the source of the series above. Rank order the hypotheses from most to least likely.

a. The test maker made up the series.
b. The series is digits 676,512 through 676,531 of Rand's One Million Random Digits.
c. The series is digits 500,000 through 500,019 of Rand's One Million Random Digits.
Although the question is unusual, most people would rank the hypotheses exactly as given. The series seems almost certainly to have been constructed by human agency, but if by some extraordinary coincidence, it did happen to occur in Rand's (1955) table of random digits, then it seems far more likely to have been at some relatively anonymous position than exactly in the middle of the table.
(Author note: This paper was facilitated by a grant from the Wisconsin Alumni Research Foundation and was presented in an earlier version at the Bayesian Research Conference, Los Angeles, California, February 1980. Thanks are due Ward Edwards, Hillel Einhorn, Dominic Massaro, Gregg Oden, and Charles Snowdon for their helpful criticisms of an earlier draft of the manuscript, and special thanks are due Mark Kac for allowing me to include his anecdote about the draft lottery.)

Consider, however, what would happen if the same two questions were asked about another series:

221215917917683158672
In this case it is not at all clear how the series ought to be completed, nor is it clear how the source hypotheses ought to be ordered.

Why do we have such different intuitions about these two series? The answer is obviously that the first series has a readily discernible pattern whereas the second does not. It is this pattern that underlies our mistaken intuition that the first series is less likely than the second to be generated by a random process. And it is also this pattern that underlies and enables our inductive inference that the first series should end with a 6.
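The pattern in the first series, each digit d appearing d times in ascending order, can be generated mechanically, which is what makes the inductive prediction feel forced. The following sketch (not part of the original article; the function name is ours) reproduces the series and its predicted continuation:

```python
def run_length_series(n_terms):
    """Generate the sequence 1, 22, 333, 4444, ...:
    each digit d appears exactly d times, in ascending order."""
    digits = []
    d = 1
    while len(digits) < n_terms:
        digits.extend([d] * d)
        d += 1
    return digits[:n_terms]

# The 20-digit test series from the article...
series = run_length_series(20)
assert "".join(map(str, series)) == "12233344445555566666"

# ...and the digit that should fill the blank at the end.
next_digit = run_length_series(21)[20]  # -> 6
```

The second series resists any comparably short generating rule, which is exactly the asymmetry the article goes on to analyze.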
Induction is how we discover for ourselves what the world is like. It occurs when we generalize past experience with particular event patterns to new and as yet unobserved instances (Harre, 1970). Scientists do it; lay people do it; even birds and beasts do it. But the process is mysterious and full of paradox (cf. Gardner, 1976), for as Hume (1748/1977) showed long ago, induction cannot be justified on logical grounds: No matter how strongly available evidence may seem to support our current beliefs about the world, the possibility always remains that new evidence will prove us wrong. Thus, to do induction is necessarily to run the risk of error, and this, it will be argued, makes it difficult to evaluate how well the process is being done. For if induction cannot be justified rationally, then what criteria can be used to judge whether it is being done rationally?

For human beings, induction has two relatively distinct stages: the act of conceiving a new idea or theory and the act of testing or justifying that
... Detecting patterns. Following Lopes (1982) and Lopes and Oden (1987), we model discrimination of patterned and random sequences as equal-variance Gaussian signal detection theory (SDT). (A) The SDT model. ...
... In normative SDT, the criterion c is set to maximize expected gain given the penalties and rewards associated with each outcome and the prior probability that a signal is present. Lopes (1982) and Lopes and Oden (1987) first reformulated the problem of pattern detection within the framework of SDT. Lopes and Oden asked observers to discriminate binary patterns generated by Markov chains (defined below) from binary patterns generated by a "fair coin." ...
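The equal-variance Gaussian SDT model referred to in these excerpts reduces each observer to two numbers, sensitivity d' and criterion c, both recoverable from hit and false-alarm rates. A generic sketch (our code, not from any of the cited papers):

```python
from statistics import NormalDist

def sdt_indices(hit_rate, fa_rate):
    """Equal-variance Gaussian SDT: sensitivity d' and criterion c
    computed from hit and false-alarm rates via the z (probit) transform."""
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)
    c = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, c

# An unbiased observer with 84% hits and 16% false alarms
# has d' close to 2 and criterion c close to 0.
d, c = sdt_indices(0.84, 0.16)
```

Rates of exactly 0 or 1 would make the z-transform diverge; standard corrections (e.g., adding half a count) are assumed to have been applied upstream.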
... With probability pd, we independently flipped each of the tokens in the Markov sequence from blue to yellow or vice versa (Figure 2C). If pd is 0, the resulting DMS is just a Markov sequence similar to those used by Lopes (1982) and Lopes and Oden (1987). If pd is 0.5, the resulting DMS is effectively a random sequence: the disruption process with pd equal to 0.5 removes any pattern in the disrupted sequence. ...
Article
Full-text available
We measured human ability to detect texture patterns in a signal detection task. Observers viewed sequences of 20 blue or yellow tokens placed horizontally in a row. They attempted to discriminate sequences generated by a random generator ("a fair coin") from sequences produced by a disrupted Markov sequence (DMS) generator. The DMSs were generated in two stages: first a sequence was generated using a Markov chain with probability pr = 0.9 that a token would be the same color as the token to its left. The Markov sequence was then disrupted by flipping each token from blue to yellow or vice versa with probability pd, the probability of disruption. Disruption played the role of noise in signal detection terms. We can frame what observers are asked to do as detecting Markov texture patterns disrupted by noise. The experiment included three conditions differing in pd (0.1, 0.2, 0.3). Ninety-two observers participated, each in only one condition. Overall, human observers' sensitivities to texture patterns (d' values) were markedly less than those of an optimal Bayesian observer. We considered the possibility that observers based their judgments not on the entire texture sequence but on specific features of the sequences, such as the length of the longest repeating subsequence. We compared human performance with that of multiple optimal Bayesian classifiers based on such features. We identify the single- and multiple-feature models that best match the performance of observers across conditions and develop a pattern feature pool model for the signal detection task considered.
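The two-stage DMS generator described in this abstract is simple to sketch. Below, pr and pd are the abstract's parameters, token colors are coded 0/1, and the function names are ours (an illustrative reimplementation, not the authors' code):

```python
import random

def markov_sequence(n=20, pr=0.9, rng=random):
    """Stage 1: Markov chain over {0, 1}. Each token repeats the color
    of its left neighbor with probability pr."""
    seq = [rng.randrange(2)]
    for _ in range(n - 1):
        seq.append(seq[-1] if rng.random() < pr else 1 - seq[-1])
    return seq

def disrupt(seq, pd, rng=random):
    """Stage 2: flip each token independently with probability pd.
    This disruption plays the role of noise in SDT terms."""
    return [1 - t if rng.random() < pd else t for t in seq]

random.seed(0)
signal_trial = disrupt(markov_sequence(), pd=0.1)        # pattern + noise
noise_trial = [random.randrange(2) for _ in range(20)]   # "fair coin"
```

With pd = 0 the disruption is a no-op, and with pd = 0.5 the output is statistically indistinguishable from the fair-coin generator, matching the limiting cases noted in the excerpts above.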
... On reflection, it should be obvious that positive bias may enhance performance when the actual relationship is positive and hinder performance when it is negative. Lopes's (1982) theoretical analysis confirms this prediction. ...
... In a sense, the same puzzling question about positive bias has been left: Is it useful or detrimental in everyday functioning? To quote Lopes (1982), "Thus, in order to evaluate fairly whether people's false expectations about alternation are helpful or harmful over a lifetime's opportunities for induction, one would have to know whether in the world nonrandom events are more often biased toward alternation or toward repetition-which raises tantalizing, but probably unanswerable, questions concerning the natural ecology of nonrandomness" (pp. 633-634). ...
... I share with Lopes (1982) the feeling that the question of whether the world is positively or negatively biased is probably unanswerable. However, I propose a different line of attack in evaluating whether positive bias is beneficial or detrimental. ...
Article
Full-text available
Perception of covariation often differs from statistically normative values: People find order in random series and relationships between uncorrelated values. Theoretical analysis, allowing for working-memory limitations, shows that the degree of covariation in the typical, locally representative series is more negative, whereas that of the atypical series is more positive, than the covariation in the complete set. I assumed that typical series serve as a norm to which other series are compared, and predicted a positive bias in the perception of covariation. This prediction was tested and found to hold across a wide range of actual relationships in 2 experiments involving sequential dependencies and events with co-occurring values. Another analysis revealed positive correlations to be more informative than negative ones when events are not equiprobable. Positive bias may thus be a rational predisposition for early detection of a potentially more informative relationship.
... Most participants respond that the quality of symmetry does not apply to a circle. ... gained traction as a popular dependent variable (ignoring the fact that randomness lacks an exact consensual definition; Lopes, 1982). For students of symmetry, apparent randomness proved attractive in their quest to uncover the perceptual effects of symmetry. ...
... However, many computer programs of complexity yield the same outcome: two instructions suffice to generate each sequence. For the top: start with n = 1 and then repeat it six times; for the bottom: for n = 1 to 3, repeat n n times (inspired by Lopes, 1982). ...
Article
Full-text available
Of the four interrelated concepts in the title, only symmetry has an exact mathematical definition. In mathematical development, symmetry is a graded variable-in marked contrast with the popular binary conception of symmetry in and out of the laboratory (i.e. an object is either symmetrical or nonsymmetrical). Because the notion does not have a direct graded perceptual counterpart (experimental participants are not asked about the amount of symmetry of an object), students of symmetry have taken various detours to characterize the perceptual effects of symmetry. Current approaches have been informed by information theory, mathematical group theory, randomness research, and complexity. Apart from reviewing the development of the main approaches, for the first time we calculated associations between figural goodness as measured in the Garner tradition and measures of algorithmic complexity and randomness developed in recent research. We offer novel ideas and analyses by way of integrating the various approaches.
... Because correlations underlie all learning, their early detection and, subsequently, accurate assessment are of great importance for the functioning and well-being of organisms (Alloy & Tabachnik, 1984). The detection of correlations is so important for the functioning of organisms that it has been argued (Lopes, 1982) that it may be worth the organisms' while to lower their decision criterion and risk a false alarm rather than to risk a miss. In other words, organisms would be better off if they were primed (or even biased) to detect correlations. ...
... The detection of a correlation, like any other act of induction, requires that a pattern be noticed against a background of noise (Lopes, 1982). Moreover, because any situation offers numerous possibilities for correlations, a fundamental question of induction-"how do people avoid generating innumerable fruitless hypotheses in their search for fruitful generalizations?" (Holyoak & Nisbett, 1988, p. 55)-is also relevant in this context. ...
Article
Full-text available
Capacity limitations of working memory force people to rely on samples consisting of 7 ± 2 items. The implications of these limitations for the early detection of correlations between binary variables were explored in a theoretical analysis of the sampling distribution of φ, the contingency coefficient. The analysis indicated that, for strong correlations (φ > .50), sample sizes of 7 ± 2 are most likely to produce a sample correlation that is more extreme than that of the population. Another analysis then revealed that there is a similar cutoff point at which useful correlations (i.e., for which each variable is a valid predictor of the other) first outnumber correlations for which this is not the case. Capacity limitations are thus shown to maximize the chances for the early detection of strong and useful relations.
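The contingency coefficient φ analyzed in this abstract is just Pearson's r applied to a 2×2 table, and the small-sample claim can be explored by simulation. In the sketch below, the function names and the toy generative model (y matches x with a fixed probability) are our illustrative assumptions, not the author's analysis:

```python
import math
import random

def phi(a, b, c, d):
    """Contingency coefficient phi for the 2x2 table [[a, b], [c, d]]
    (equivalent to Pearson's r for two binary variables)."""
    denom = math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return (a * d - b * c) / denom if denom else 0.0

# Perfectly concordant and perfectly discordant tables.
assert phi(5, 0, 0, 5) == 1.0
assert phi(0, 5, 5, 0) == -1.0

def sample_phi(match_prob=0.8, n=7, rng=random):
    """Draw n (x, y) pairs where y copies x with probability match_prob,
    mimicking a small working-memory sample, and return the sample phi."""
    table = [[0, 0], [0, 0]]
    for _ in range(n):
        x = rng.randrange(2)
        y = x if rng.random() < match_prob else 1 - x
        table[x][y] += 1
    return phi(table[0][0], table[0][1], table[1][0], table[1][1])
```

Repeatedly calling sample_phi with n around 7 ± 2 gives a feel for how often a small sample exaggerates the underlying association, which is the phenomenon the abstract quantifies analytically.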
... This is thought to happen because people detect patterns of causation behind appearances that can be thought of as rules. Humans essentially have a liberal bias when detecting patterns of causation, so they have more false alarms than a chicken (i.e., seeing patterns when there are in fact none), but humans will also have more hits in that humans will find hidden patterns when they are really there (Gaissmaier & Schooler, 2008;Lopes, 1982). False alarms, like seeing intentionality behind the weather, are the price we pay for our human capacity to apprehend patterns of hidden causation that in fact govern the apparent. ...
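The "liberal bias" trade-off described in this excerpt, more hits bought at the price of more false alarms, follows directly from the equal-variance SDT model: lowering the criterion moves both rates up together. A minimal sketch of that relationship (our illustration; the function name is an assumption):

```python
from statistics import NormalDist

def rates(d_prime, criterion):
    """Hit and false-alarm rates for an equal-variance Gaussian SDT
    observer with sensitivity d_prime and criterion placed at
    `criterion` relative to the midpoint between the distributions."""
    Phi = NormalDist().cdf
    hit = 1 - Phi(criterion - d_prime / 2)
    fa = 1 - Phi(criterion + d_prime / 2)
    return hit, fa

# Same sensitivity, two criterion placements: the liberal observer
# gains hits (finds real patterns) and false alarms (sees ghosts).
conservative = rates(1.0, 0.5)
liberal = rates(1.0, -0.5)
```

Which placement is "better" depends on the payoffs for each outcome, which is precisely why the ecology of nonrandomness matters for evaluating the bias.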
Article
Full-text available
Our premodern ancestors had perceptual, motoric, and cognitive functional domains that were modularly encapsulated. Some of these came to interact through a new type of cross-modular binding in our species. This allowed previously domain-dedicated, encapsulated motoric and sensory operators to operate on operands for which they had not evolved. Such operators could at times operate nonvolitionally, while at other times they could be governed volitionally. In particular, motoric operations that derive from the same circuits that compute hand motions for object manipulation could now be retooled for virtual manipulation in a mental workspace in the absence of any physical hand or other effector movements. I hypothesize that the creativity of human imagination and mental models is rooted in premotor simulation of sequential manipulations of objects and symbols in the mental workspace, in analogy with the premotor theory of attention, which argues that attention evolved from “internalized” eye movement circuitry. Overall, operator “disencapsulation” led to a bifurcation of consciousness in humans: a concrete form centered on perception of the body in the physical world and an abstract form focused on explanatory mental models. One of the consequences of these new abilities was the advent of psychotic disorders that do not exist in species possessed solely of the concrete type of consciousness.
... The sensemaking of intelligence analysts can also be framed in terms of signal detection theory (SDT; Flach & Voorhorst, 2020;Green & Swets, 1966;Lopes, 1982). Based on our knowledge elicitation efforts, we found SDT to provide a useful framework to account for both bottom-up and top-down effects on information search and decision making. ...
Conference Paper
Full-text available
In many defense organizations, intelligence analysis (IA) is shifting from individual platform-specific work to team-based problem-focused work. One of the impacts this has on sensemaking is that analysts must consider more complex information and hypotheses. The Data/Frame model of Sensemaking, plus a Signal-Detection Theory (SDT) approach to the processes of reframing and elaboration, provides a useful characterization of sensemaking for team-based intelligence analysis. We illustrate this through examples from knowledge elicitation interviews with intelligence analysts.
... Some have argued that history effects in randomized experiments (often binary series) reflect a capacity limitation of the system (Baddeley 1966; Tune 1964; Kahneman and Tversky 1972). Others reframe the question in the context of optimality by considering these history effects as carry-over from rational behaviour in other contexts where statistical regularity is more prevalent, or question if bias is truly present (Lopes 1982; Ayton, Hunt, and Wright 1989; Kareev 1992). From our perspective, there is little to be concluded about why history effects are present in our task and dataset, simply because the dimensionality of sufficient explanations is such that redundancy is likely, and falsification is a challenge. ...
Preprint
Full-text available
Predictions are combined with sensory information when making choices. Accumulator models have conceptualized predictions as trial-by-trial updates to a baseline evidence level. These models have been successful in explaining the influence of choice history across-trials, however they do not account for how sensory information is transformed into choice evidence within-trials. Here, we derive a gated accumulator that models the onset of evidence accumulation as a combination of delayed sensory information and a prediction of sensory timing. To test how the delays interact with across-trial predictions, we designed a free choice saccade task where participants directed eye movements to either of two targets that appeared with variable delays and asynchronies. Despite instructions not to anticipate, participants responded prior to target onset on some trials. We reasoned that anticipatory responses may reflect a trade-off between inhibiting and facilitating the onset of evidence accumulation via a gating mechanism as target appearance became more likely. Using a choice history analysis, we found that anticipatory responses were more likely following repeated choices, despite task randomization, suggesting that the balance between anticipatory and sensory responses was driven by a prediction of sensory timing. By fitting the gated accumulator model to the data, we found that variance in within-trial fluctuations in baseline evidence best explained the joint increase of anticipatory responses and faster sensory-guided responses with longer delays. Thus, we conclude that a prediction of sensory timing is involved in balancing the costs of anticipation with lowering the amount of accumulated evidence required to trigger saccadic choice, which could be explained by history-dependence in the baseline dynamics of a gated accumulator. New and Noteworthy Evidence accumulation models are used to study how recent history impacts the processes underlying how we make choices. 
Biophysical evidence suggests that the accumulation of evidence is gated, however classic accumulator models do not account for this, so little is known about how recent history may influence the gating process. In this work, computational modeling and experimental data from a free choice saccade task argue that predictions of the timing of sensory information are important in controlling how evidence accumulation is gated, and that signatures of these predictions can be detected even in randomized task environments where history effects may not be expected.
Article
Full-text available
Incompressibility is rejected as a necessary condition for randomness of a digit sequence. Tests for the sequence itself are suggested rather than the process by which it is generated. This leads to the conjecture of an infinite number of compressible random sequences.
Article
Predictions are combined with sensory information when making choices. Accumulator models have conceptualized predictions as trial-by-trial updates to a baseline evidence level. These models have been successful in explaining the influence of choice history across-trials, however they do not account for how sensory information is transformed into choice evidence. Here, we derive a gated accumulator that models the onset of evidence accumulation as a combination of delayed sensory information and a prediction of sensory timing. To test how delays interact with predictions, we designed a free choice saccade task where participants directed eye movements to either of two targets that appeared with variable delays and asynchronies. Despite instructions not to anticipate, participants responded prior to target onset on some trials. We reasoned that anticipatory responses reflected a trade-off between inhibiting and facilitating the onset of evidence accumulation via a gating mechanism as target appearance became more likely. We then found that anticipatory responses were more likely following repeated choices, suggesting that the balance between anticipatory and sensory responses was driven by a prediction of sensory timing. By fitting the gated accumulator model to the data, we found that variance in within-trial fluctuations in baseline evidence best explained the joint increase of anticipatory responses and faster sensory-guided responses with longer delays. Thus, we conclude that a prediction of sensory timing is involved in balancing the costs of anticipation with lowering the amount of accumulated evidence required to trigger saccadic choice.
Article
Full-text available
Active reinforcement learning enables dynamic prediction and control, where one should not only maximize rewards but also minimize costs such as of inference, decisions, actions, and time. For an embodied agent such as a human, decisions are also shaped by physical aspects of actions. Beyond the effects of reward outcomes on learning processes, to what extent can modeling of behavior in a reinforcement-learning task be complicated by other sources of variance in sequential action choices? What of the effects of action bias (for actions per se) and action hysteresis determined by the history of actions chosen previously? The present study addressed these questions with incremental assembly of models for the sequential choice data from a task with hierarchical structure for additional complexity in learning. With systematic comparison and falsification of computational models, human choices were tested for signatures of parallel modules representing not only an enhanced form of generalized reinforcement learning but also action bias and hysteresis. We found evidence for substantial differences in bias and hysteresis across participants—even comparable in magnitude to the individual differences in learning. Individuals who did not learn well revealed the greatest biases, but those who did learn accurately were also significantly biased. The direction of hysteresis varied among individuals as repetition or, more commonly, alternation biases persisting from multiple previous actions. Considering that these actions were button presses with trivial motor demands, the idiosyncratic forces biasing sequences of action choices were robust enough to suggest ubiquity across individuals and across tasks requiring various actions. 
In light of how bias and hysteresis function as a heuristic for efficient control that adapts to uncertainty or low motivation by minimizing the cost of effort, these phenomena broaden the consilient theory of a mixture of experts to encompass a mixture of expert and nonexpert controllers of behavior.
Chapter
Two books have been particularly influential in contemporary philosophy of science: Karl R. Popper's Logic of Scientific Discovery, and Thomas S. Kuhn's Structure of Scientific Revolutions. Both agree upon the importance of revolutions in science, but differ about the role of criticism in science's revolutionary growth. This volume arose out of a symposium on Kuhn's work, with Popper in the chair, at an international colloquium held in London in 1965. The book begins with Kuhn's statement of his position followed by seven essays offering criticism and analysis, and finally by Kuhn's reply. The book will interest senior undergraduates and graduate students of the philosophy and history of science, as well as professional philosophers, philosophically inclined scientists, and some psychologists and sociologists.
Book
Commit it then to the flames: for it can contain nothing but sophistry and illusion.' Thus ends David Hume's Enquiry concerning Human Understanding, the definitive statement of the greatest philosopher in the English language. His arguments in support of reasoning from experience, and against the 'sophistry and illusion' of religiously inspired philosophical fantasies, caused controversy in the eighteenth century and are strikingly relevant today, when faith and science continue to clash. The Enquiry considers the origin and processes of human thought, reaching the stark conclusion that we can have no ultimate understanding of the physical world, or indeed our own minds. In either sphere we must depend on instinctive learning from experience, recognizing our animal nature and the limits of reason. Hume's calm and open-minded scepticism thus aims to provide a new basis for science, liberating us from the 'superstition' of false metaphysics and religion. His Enquiry remains one of the best introductions to the study of philosophy, and this edition places it in its historical and philosophical context.