Psychological Bulletin, 1988, Vol. 103, No. 1, 44-56
Copyright 1988 by the American Psychological Association, Inc. 0033-2909/88/$00.75
Behavioral Momentum and the Partial Reinforcement Effect

John A. Nevin
University of New Hampshire
Free-operant behavior is more resistant to change when the rate of reinforcement is high than when it is low. The usual partial reinforcement extinction effect, demonstrating greater resistance to extinction after intermittent than after continuous reinforcement, seems to contradict this generalization. However, most free-operant extinction data are reported as response totals, which confound the initial levels of responding and the rate at which responding decreases over the course of extinction. A reanalysis shows that after extended training, the slope of the extinction curve is shallower after continuous reinforcement than after intermittent reinforcement, suggesting greater rather than less resistance to change. These results, which hold for both independent-groups and within-subject comparisons, support the general finding that resistance to change of free-operant behavior is a positive function of the rate of reinforcement. This generalization does not, however, hold for discrete-trial performance. I discuss some consequences of these analyses for applications of behavioral research results.
A series of experiments conducted over the past dozen years supports a simple generalization: The greater the rate of reinforcement for a free-operant response, the more it resists change (Nevin, 1974, 1979; Nevin, Mandell, & Atak, 1983; Nevin, Mandell, & Yarensky, 1981). A standard experiment uses pigeons as subjects, deprived to 80% of their free-feeding weights. After brief training to eat from the grain magazine in a conventional pigeon chamber and then to peck the key, with food after each peck, the birds are exposed to a multiple schedule of reinforcement in which two distinctive lights on the key signal two different schedule components. For example, the key might be lighted red for 1 min, during which pecks would be reinforced at variable intervals averaging 1 min (60 reinforcers per hr), and then green for the next 1 min, during which pecks would be reinforced at variable intervals averaging 3 min (20 reinforcers per hr). (Such a schedule would be denoted mult VI 1-min, VI 3-min.) Training would continue until response rates in both components became stable—perhaps 40 daily 1-hr sessions. Then the resistance to change of these asymptotic response rates would be assessed by introducing a new variable uniformly with respect to both components, such as free food during dark-key periods between schedule components (e.g., Nevin, 1974) or prefeeding in the home cage (e.g., Eckerman, 1968). Both procedures have a common result: The rate of responding changes less, relative to its baseline training level, in the component with the higher rate of reinforcement (red, VI 1-min in this case).
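To make the scheduling arrangement concrete, the following minimal sketch (in Python; not part of the original article) simulates the two variable-interval schedules. It idealizes pecking as a Poisson process and collapses the 1-min component alternation into total time spent in each component; the function name, parameters, and peck rate are illustrative assumptions only.

```python
import random

def vi_reinforcers(mean_interval_s, total_time_s, peck_rate_per_s, rng):
    """Count reinforcers earned on a variable-interval (VI) schedule.

    A reinforcer is "set up" after an exponentially distributed interval
    (mean = mean_interval_s); the first peck after setup is reinforced,
    and the next interval then begins. Pecking is idealized here as a
    Poisson process at peck_rate_per_s -- an assumption made for this
    sketch, not a claim from the article.
    """
    reinforcers = 0
    setup = rng.expovariate(1.0 / mean_interval_s)  # time reinforcer becomes available
    while True:
        # first peck at or after the setup time (memoryless pecking)
        peck = setup + rng.expovariate(peck_rate_per_s)
        if peck > total_time_s:
            return reinforcers
        reinforcers += 1
        setup = peck + rng.expovariate(1.0 / mean_interval_s)

rng = random.Random(0)
hours = 5.0                                       # total time each key color is on
seconds = hours * 3600
rich = vi_reinforcers(60.0, seconds, 1.0, rng)    # red key: VI 1-min, ~1 peck/s
lean = vi_reinforcers(180.0, seconds, 1.0, rng)   # green key: VI 3-min
print(f"VI 1-min: {rich / hours:.0f} reinforcers/hr (nominal 60)")
print(f"VI 3-min: {lean / hours:.0f} reinforcers/hr (nominal 20)")
```

Under these assumptions, the obtained reinforcement rates come out close to the nominal 60 and 20 reinforcers per hr, which is the asymmetry the multiple schedule is designed to produce.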
I am indebted to L. Slade for assistance with literature searches and data analyses, to E. Fickett for help with the conduct of the study reported in this article, to W. Baum and V. Benassi for many stimulating discussions, and to two reviewers for helpful comments on an earlier version of this article.

Correspondence concerning this article should be addressed to John A. Nevin, Department of Psychology, University of New Hampshire, Durham, New Hampshire 03824.
Generality and Consistency
In addition to pigeons, rats and monkeys have served as subjects in experiments of this sort, and many other procedures for assessing resistance to change have been used: introducing concurrent reinforcement for an alternative response, either signaled (Nevin et al., 1981) or unsignaled (Pliskoff, Shull, & Gollub, 1968); varying hours of deprivation (Carlton, 1961) or body weight (Herrnstein & Loveland, 1974); presenting stimuli signaling unavoidable electric shock (Blackman, 1968; Lyon, 1963); punishing the reinforced response (Bouzas, 1978); increasing response effort (Elsmore, 1971); and simply tracking response rates through circadian changes in activity (Elsmore & Hursh, 1982).
Whatever the assessment procedure, it must be applied equally to the two components: For example, if punishment is used to assess resistance to change, it must be arranged so that experienced shock rates are the same in both components, even when response rates differ. Prefeeding and deprivation procedures are especially useful in this respect, because motivation levels and internal stimulation are the same regardless of which component is in effect. Also, all these procedures leave the baseline schedule of reinforcement unchanged. Under these conditions, the results are uniform and clear: Relative to their baseline levels, the rate of responding in the component with the higher rate of reinforcement is less affected than the rate of responding in the component with the lower rate of reinforcement.
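As a sketch of how "relative to baseline" can be quantified, the snippet below (not from the article) expresses responding during the disruption test as a proportion of the same component's baseline rate; the specific numbers are hypothetical illustrations, not data from any of the studies cited.

```python
def proportion_of_baseline(baseline_rate, test_rate):
    """Express responding during a disruption test as a proportion of the
    same component's baseline response rate (higher = more resistant)."""
    return test_rate / baseline_rate

# Hypothetical numbers only: suppose prefeeding lowers responding in both
# components, but by different proportions.
rich = proportion_of_baseline(baseline_rate=60.0, test_rate=45.0)  # VI 1-min component
lean = proportion_of_baseline(baseline_rate=50.0, test_rate=20.0)  # VI 3-min component
print(f"VI 1-min retains {rich:.0%} of baseline; VI 3-min retains {lean:.0%}")
```

Comparing proportions of baseline, rather than absolute rate changes, is what allows components with very different baseline response rates to be compared on resistance to change.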
Normally, baseline response rate is higher in the component with the higher rate of reinforcement. However, resistance to change is independent of baseline response rates. For example, Nevin (1974, Experiment 5) established low response rates in a VI 1-min component and high rates in a VI 3-min component by means of tandem interresponse-time requirements. These rate differences are opposite to those usually obtained with standard VI schedules. Nevertheless, resistance to change relative to baseline remained greater in the VI 1-min component.