JOURNAL OF THE EXPERIMENTAL ANALYSIS OF BEHAVIOR, 1974, 21, 389-408 (NUMBER 3, MAY)
RESPONSE STRENGTH IN MULTIPLE SCHEDULES¹
JOHN A. NEVIN
UNIVERSITY OF NEW HAMPSHIRE
In several different experiments, pigeons were trained with one schedule or condition of
food reinforcement for pecking in the presence of one key color, and a different schedule
or condition in the presence of a second key color. After responding in both of these
multiple schedule components stabilized, response-independent food was presented dur-
ing dark-key periods between components, and the rates of pecking in both schedule
components decreased. The decrease in responding relative to baseline depended on the
frequency, magnitude, delay, or response-rate contingencies of reinforcement prevailing
in that component. When reinforcement was terminated, decreases in responding relative
to baseline rates were ordered in the same way as with response-independent food. The
relations between component response rates were power functions. Internal consistencies
in the data, in conjunction with parallel findings in the literature, suggest that the con-
cept of response strength summarizes the effects of diverse procedures, where response
strength is identified with relative resistance to change. The exponent of the power func-
tion relating response rates may provide the basis for scaling response strength.
Frequency of Reinforcement
    Experiment I: Response-independent food
    Experiment II: Extinction
Magnitude of Reinforcement
    Experiment III: Response-independent food
Delay of Reinforcement
    Experiment IV: Response-independent food and extinction
Contingencies on Response Rate
    Experiment V: Response-independent food and extinction
¹This research was supported by NIMH Grant 08515 to Swarthmore College, and Grants
16252 and 18624 to Columbia University. Preparation
of this manuscript was supported by NIMH Grant
23824 to the University of New Hampshire. I am in-
debted to Sara Shettleworth, Suzanne Roth, Roberta
Welte, Dudley Anderson, Lynda Farrell, and Charlotte
Mandell for their assistance in conducting the research
and their comments on the findings as they developed.
Reprints may be obtained from the author at Depart-
ment of Psychology, Conant Hall, University of New
Hampshire, Durham, New Hampshire 03824.
Reinforcement is said to "strengthen" operant
behavior, and extinction is said to "weaken"
it. The actual observations, of course, are
that response rate increases with reinforce-
ment and decreases during extinction. The
description in terms of
response strength implies that there are sev-
eral properties of behavior that vary together
with response rate. For example, Kling (1971) wrote:
"The term response strength refers to
the speed, intensity, or persistence with
which responses occur. The term is not
just a synonym for one of these depen-
dent variables; it implies something more
than is measured by any of them. For
example, Skinner (1938) spoke of the
"strength" of an operant as it is reflected
in response rate and in the number of
responses emitted in extinction. However,
because rate and resistance to extinction
rarely are perfectly correlated, it is ob-
vious that response strength must refer
to something that is related to both, but
identical with neither." (p. 596.)
The study of operant behavior in relation
to schedules of reinforcement has made little
use of the notion of response strength in this
sense. The first problem with the term, identified
by Kling, is the frequent lack of correlation
between two presumably fundamental
measures of behavior: response rate and
resistance to extinction. The problem is ex-
emplified by Wilson's (1954) study of fixed-
interval (FI) reinforcement. He trained in-
dependent groups of
values, and obtained an orderly monotonic
relating average main-
tained response rate to the length of the in-
terval. However, when reinforcement was dis-
continued, the average number of responses
during extinction was not monotonically re-
lated to the value of the FI during training,
but exhibited a maximum at FI 1-min. Per-
haps as a consequence of findings of this sort,
the notion of strength has received little attention
in the literature on intermittent reinforcement.
Instead, research has concentrated
on elucidating the variables responsible
for the properties of maintained perform-
ances, while extinction responding has been
discussed primarily in terms of the similarity
between training and extinction conditions
(e.g., Ferster and Skinner, 1957).
As the study of maintained performance
progressed, a second problem arose: namely,
that response rate was itself a conditionable
property of behavior. Consider, for example,
the performances maintained by variable-ratio
(VR) schedules and by differential-reinforcement-of-low-rate
(DRL) schedules. In the former case, responding oc-
curs at a high steady rate, whereas in the lat-
ter, it occurs at a low steady rate. These differ-
ent performances may be understood as the
outcomes of different contingencies between
various interresponse times (IRTs) and rein-
forcement, in conjunction with contingencies
relating the obtained rate of reinforcement
to the rate of responding.
However they may differ, both performances
may be seen as the terminal products of re-
lated contingencies, and thus, perhaps, equally
"strong". An alternative view might hold that
the two sets of contingencies serve to define
two different responses that cannot be com-
pared at all. In DRL schedules, for example,
the behavioral event on which reinforcement
is contingent involves not only the measured bar
press, key peck, etc., but also the prior passage
of the DRL interval. In VR schedules, the
response definition does not involve a tempo-
ral dimension, so that the two performances
involve qualitatively different responses.
Some of the difficulties outlined above arise
from identifying the strength of responding with its absolute rate.
An alternative approach is to examine the way
in which responding changes, relative to its
baseline rate, when some parameter of the
experiment is varied. Extinction-the termi-
nation of reinforcement-is only one way of
examining such changes. Other variables that
alter the rate of responding may give results
that accord with the effects of extinction. To
the extent that similar changes are effected by
different variables, and the results bear some
orderly relation to the conditions of rein-
forcement used to establish the baseline per-
formance, the concept of response strength
provides a useful summary of the findings.
Two-component multiple schedules of re-
inforcement are particularly convenient for
examining the changes in responding effected
by different procedures in relation to the con-
ditions of reinforcement for individual sub-
jects. In multiple schedules, two successive
stimulus conditions are correlated with inde-
pendent schedules or conditions of reinforce-
ment. Each stimulus and its correlated schedule
constitute one component of the multiple
schedule. After prolonged training, the aver-
age rate of responding in each component
will stabilize at a level that is determined both
by the conditions of reinforcement prevailing
in that component, and by those in the al-
ternated component. If, at this point, some
variable that reduces the rates of responding
is introduced uniformly with respect to both
components, the performance
that undergoes the smaller reduction, relative
to its stabilized baseline, may be identified as
the stronger of the two performances. It will
be shown empirically that this identification
is consistent across several operations that de-
crease response rates, and that the same inter-
nal consistency holds for a variety of differ-
ent reinforcement conditions.
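The comparison just described — each component's disrupted rate taken as a proportion of its own stabilized baseline — can be sketched as a small computation. The rates below are hypothetical values chosen only to illustrate the arithmetic; the schedule labels echo the VI 1-min and VI 3-min components used later in the paper.

```python
def relative_rate(disrupted_rate, baseline_rate):
    """Response rate under a disruptor, expressed as a proportion
    of the stabilized baseline rate for the same component."""
    return disrupted_rate / baseline_rate

# Hypothetical pecks-per-minute values for two multiple-schedule
# components before and after a uniform disruptor (e.g., free food
# between components) is introduced. Illustrative numbers only.
baseline = {"VI 1-min": 60.0, "VI 3-min": 30.0}
disrupted = {"VI 1-min": 45.0, "VI 3-min": 15.0}

relative = {c: relative_rate(disrupted[c], baseline[c]) for c in baseline}

# The component whose relative rate stays closer to 1.0 shows the
# smaller reduction, and is identified as the stronger performance.
stronger = max(relative, key=relative.get)
print(stronger, relative)
```

On these illustrative numbers the VI 1-min component retains 0.75 of its baseline against 0.50 for VI 3-min, so it would be called the stronger of the two performances.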
The present paper considers multiple sched-
ules using variable-interval schedules of food
reinforcement in both components, where the
components differ in one of the following
ways: frequency of reinforcement per unit
time, magnitude of reinforcement, delay of
reinforcement, or contingencies on response
rates at the time of reinforcement. The op-
erations used to decrease response rates are,
in the studies reported here, simple extinction
-the withholding of food-or the introduc-
tion of response-independent food during pe-
riods separating the schedule components. A
review of related studies suggests that com-
parable data are obtained with satiation, or
with the introduction of stimuli preceding
response-independent shock or food (conditioned
suppression or facilitation).
FREQUENCY OF REINFORCEMENT
The frequency of reinforcement per unit
time is a potent determinant of performance
in single, multiple, and concurrent schedules,
and its effects have been examined parametrically
in many studies (for review, see Catania
and Reynolds, 1968, and Herrnstein, 1970).
The present research began
with the examination of responding in mul-
tiple schedules where the components differed
in frequency of reinforcement.
Experiment I: Response-Independent Food

This experiment was designed to explore
changes in responding in a three-component
multiple schedule, where key pecking was re-
inforced with food at different, constant fre-
quencies during two components, while the
frequency of response-independent food during
the third component was varied systemati-
cally. Some of the data have been published
by Herrnstein (1970).
Pigeons 479, 481, 482, and 483, which had
previously served in a multiple-schedule study
(Nevin, 1968), were maintained within 15 g
of 80% of their free-feeding weights.
The experiment was conducted in a stan-
dard single-key Lehigh Valley pigeon chamber
with red and green keylights, houselight, and
grain feeder. Scheduling and recording were
accomplished by conventional electromechan-
ical equipment in an adjacent room.
Each session consisted of a fixed number of
schedule cycles, during whiclh the key was dark
for the first 30 sec, followed by red or green
illumination for 60 sec. Red and green alter-
nated irregularly from cycle to cycle, with
the restriction that there were no more than
three consecutive presentations of one color,
and the colors appeared equally often. Thus,
the key was red for one-third of the session,
green for one-third of the session, and dark
for one-third of the session. The houselight
was on continuously. Experimental sessions
were conducted daily with few exceptions; the
number of cycles per session was adjusted
from time to time to maintain the subjects at
their 80% weights. An arithmetic VI 1-min
schedule was correlated with green and an
arithmetic VI 3-min schedule with red. When
the key was dark, a separate tape timer ran
continuously, and presented food at variable
time intervals. All food presentations
lasted 3 sec. The number of food presenta-
tions per hour during dark-key periods was
the independent variable. Values of 60, 180,
360, and 20 food presentations per hour were
scheduled for a total of 6 to 10 hr each, in
that order. Approximately 5 hr of training
with no food during dark-key periods
preceded each change in conditions. In all
cases, feeder presentations lasted 3 sec.
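The arithmetic VI tapes used here can be approximated in simulation. The construction below — equally spaced interval values shuffled into an irregular order — is one standard way to build an arithmetic VI tape; the paper does not list the actual interval values, so treat this as an assumed sketch rather than a reconstruction of the apparatus.

```python
import random

def arithmetic_vi_tape(mean_s, n, rng=None):
    """Build n equally spaced intervals (an arithmetic progression)
    whose mean is mean_s seconds, then shuffle them so successive
    intervals vary irregularly. Assumed construction, not the
    actual tape used in the experiment."""
    rng = rng or random.Random()
    step = 2.0 * mean_s / (n + 1)           # spacing between interval values
    tape = [step * k for k in range(1, n + 1)]
    rng.shuffle(tape)
    return tape

# A VI 1-min tape with 15 intervals: values run from 7.5 s to 112.5 s
# and average exactly 60 s.
tape = arithmetic_vi_tape(60.0, 15, random.Random(0))
```

The mean of the tape equals the nominal VI value regardless of n, because the interval values are symmetric about the mean.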
The average rate of responding in the presence
of green and red for the last 5 hr of
baseline training and for the remainder
of the experiment is shown in Figure
1. As expected, the average rate of responding
when the key was green (VI 1-min
reinforcement) was always higher than when
the key was red (VI 3-min reinforcement).
The introduction of food during dark-key
periods decreased responding to both green
and red, with
larger decrements resulting
from more frequent food presentation. Re-
sponse rates during the first hour after re-
sponse-independent food was introduced dif-
fered little from those during the last hour
before returning to baseline conditions. Be-
cause of this slight difference, and because
prolonged exposure changed the obtained frequencies of peck-
contingent reinforcement when the key was
lighted, performance during the first hour of
exposure to response-independent food was
taken as the major dependent variable in this
and subsequent experiments using the same
procedure.²
²Complete tables of individual data are available
from the author.
Another process to which the analysis may
be applied is stimulus generalization. Several
experimenters have noted that when a gradi-
ent of generalization is obtained during ex-
tinction, responding decreases relatively more
rapidly as the test stimulus values depart increasingly
from the training value (for review, see Nevin, 1973).
This differential change in responding, indicated
by the sharpening of the relative grad-
ient during extinction, leads to the paradox
that stimulus control appears to improve in
the absence of reinforcement. According to
Lea and Morgan (1972), this result indicates
the inappropriateness of relative measures of
responding. The present analysis of response
strength suggests that, on the contrary, a rela-
tively smaller decrement in responding at the
training stimulus, and the consequent sharp-
ening of the gradient, indicates that response
strength is maximal at the training stimulus.
A major theoretical problem in stimulus
generalization is the assessment of conditions
of "excitation" and "inhibition" responsible
for the peak shift that follows discrimination
training with S+ alternating with S- on the
same dimension. Catania, Silverman, and
Stubbs (1974) have demonstrated that overall
gradient height is reduced and the usual peak
shift is eliminated by concurrent presentation
of a stimulus correlated with a second sched-
ule of reinforcement during training. Ter-
race (1966) observed the same sort of changes
in generalization gradients obtained succes-
sively during extended discrimination train-
ing. In present terms, these findings demon-
strate that responding at the S+ value becomes
relatively less resistant to reduction, and therefore
exhibits a shifted peak. This interpretation is of course
consistent with the traditional Spence (1937)
theory of discrimination and generalization,
and it may well be that a shift from absolute
or relative response rates to scaled response
strengths will facilitate theoretical unification
of traditional learning theory, as well as the
current experimental analysis of behavior.
A further application is to conditioned
reinforcement. Many of the data on conditioned
reinforcement for operant behavior
have been obtained with chained schedules,
in which responding during an initial link
produces a terminal-link stimulus that is correlated
with unconditioned reinforcement. In
this situation, Ferster and Skinner (1957) and
others found that initial-link responding in chain VI VI
schedules is relatively more reduced by satia-
tion than is terminal-link responding. In pres-
ent terms, this implies that initial-link performance,
which is based at least in part on
conditioned reinforcement, is weaker than
terminal-link performance, an interpretation
that accords with the view that conditioned
reinforcers are less effective than the uncon-
ditioned reinforcers on which they are based
(e.g., Kelleher and Gollub, 1962). Indeed, the
data of Fischer and Fantino (1968) indicate
that the relation between initial-link and ter-
minal-link response rates is a power function
(Nevin, 1974). This result suggests the
possibility of scaling the effectiveness of con-
ditioned reinforcement in relation to the con-
ditions of pairing with unconditioned rein-
forcement. It may even be possible to go a
step further, and arrive at a predictive for-
mulation of conditioned reinforcement. Wy-
ckoff (1959) suggested that the effectiveness
of a conditioned reinforcer was an increasing
function of its "cue strength", where cue
strength was defined by reference to the prob-
ability of responding in the presence of the
stimulus serving as the conditioned reinforcer
(in a two-link chained schedule, this would
constitute the terminal link of the chain). As
has been noted, Wyckoff's formulation cannot explain
chained schedule performances with terminal-
link rates lower than initial-link rates. If,
however, cue strength is defined in terms of
response strength-that is, the relative resist-
ance of responding to change, rather than its
absolute value-this difficulty may be over-
come. The results of Experiment V, for ex-
ample, have been interpreted above as demon-
strating that DRL contingencies established
greater response strength than DRH. One
would therefore predict that the discrimina-
tive stimulus correlated with DRL would be
a more effective conditioned reinforcer than
a stimulus correlated with DRH, when fre-
quency of food reinforcement was equated,
regardless of the response rates controlled by
these stimuli. Such a result
would be consistent with the general ideas
proposed by Wyckoff (1959) and would sug-
gest a new approach to the study of conditioned
reinforcement.
The foregoing discussion should suffice to
show that the conceptualization of response
strength in terms of relative resistance of re-
sponding to change can lead to a coherent
quantitative summary of the relations between
asymptotic operant behavior and the
conditions of reinforcement. The same logic
and research methods may also permit an integrated
theoretical account of many behavioral
phenomena of central concern to the
psychology of learning.
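The power-function relation invoked above (and in the abstract) amounts to a straight line in log-log coordinates, so its exponent can be recovered by ordinary least squares on the logarithms. The sketch below uses invented paired rates standing in for the two components' data; it is an illustration of the fitting procedure, not an analysis of the paper's results.

```python
import math

def fit_power_function(x, y):
    """Least-squares fit of y = k * x**a in log-log coordinates.
    Returns (a, k); the exponent a is the quantity proposed as a
    scale of relative response strength."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(x)
    mx, my = sum(lx) / n, sum(ly) / n
    a = (sum((u - mx) * (v - my) for u, v in zip(lx, ly))
         / sum((u - mx) ** 2 for u in lx))
    k = math.exp(my - a * mx)
    return a, k

# Invented paired response rates (responses/min) for a richer and a
# leaner component across several disruption conditions.
rich = [60.0, 45.0, 30.0, 20.0]
lean = [30.0, 19.0, 10.5, 5.8]
a, k = fit_power_function(rich, lean)   # a > 1: lean component falls faster
```

An exponent greater than 1.0 means the leaner component's rate falls proportionally faster than the richer component's as conditions are disrupted, which is the pattern the relative-resistance analysis predicts.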
REFERENCES

Blackman, D. Conditioned suppression or facilitation
as a function of the behavioral baseline. Journal
of the Experimental Analysis of Behavior, 1968,
11, 53-61. (a)
Blackman, D. Response rate, reinforcement frequency,
and conditioned suppression. Journal of
the Experimental Analysis of Behavior, 1968, 11. (b)
Blough, D. S. Definition and measurement in generalization
research. In D. I. Mostofsky (Ed.), Stimulus
generalization. Stanford: Stanford University Press,
1965. Pp. 30-37.
Brady, J. V. and Hunt, H. F. An experimental approach
to the analysis of emotional behavior. Journal
of Psychology, 1955, 40, 313-324.
Carlton, P. L. The interacting effects of deprivation
and reinforcement schedule. Journal of the Experimental
Analysis of Behavior, 1961, 4, 379-381.
Catania, A. C. Concurrent performances: reinforcement
interaction and response independence. Journal
of the Experimental Analysis of Behavior, 1963.
Catania, A. C. and Reynolds, G. S. A quantitative
analysis of the responding maintained by interval
schedules of reinforcement. Journal of the Experimental
Analysis of Behavior, 1968, 11, 327-382.
Catania, A. C., Silverman, P. J., and Stubbs, D. A.
Stimulus-control gradients during schedules of
signalled and unsignalled concurrent reinforcement.
Journal of the Experimental Analysis of Behavior,
1974, 21, 99-107.
Chung, S. H. and Herrnstein, R. J. Choice and delay
of reinforcement. Journal of the Experimental
Analysis of Behavior, 1967, 10, 67-74.
Single schedule and multiple schedule performance.
Paper presented at the meetings of the American
Psychological Association, 1968.
Ferster, C. B. and Skinner, B. F. Schedules of reinforcement.
New York: Appleton-Century-Crofts, 1957.
Fischer, K. and Fantino, E. The dissociation of discriminative
and conditioned reinforcing functions
of stimuli with changes in deprivation. Journal of
the Experimental Analysis of Behavior, 1968.
Gollub, L. R. and Urban, J. T. The accentuation of a
rate difference during extinction. Journal of the
Experimental Analysis of Behavior, 1958.
Single organism. Journal of the Experimental
Analysis of Behavior, 1961, 4, 133-144.
Stimulus intensity dynamism and auditory
generalization for approach and avoidance behavior
in rats. Journal of Comparative and Physiological
Psychology, 1969, 68, 111-117.
Herrnstein, R. J. Relative and absolute strength of
response as a function of frequency of reinforcement.
Journal of the Experimental Analysis of
Behavior, 1961, 4, 267-272.
Herrnstein, R. J. Secondary reinforcement and rate
of primary reinforcement. Journal of the Experimental
Analysis of Behavior, 1964, 7, 27-36.
Herrnstein, R. J. On the law of effect. Journal of the
Experimental Analysis of Behavior, 1970, 13, 243-266.
Hull, C. L. Principles of behavior. New York: Appleton-Century-Crofts,
1943.
Kelleher, R. T. and Gollub, L. R. A review of positive
conditioned reinforcement. Journal of the Experimental
Analysis of Behavior, 1962, 5, 543-597.
Kimble, G. A. Hilgard and Marquis' conditioning and
learning. New York: Appleton-Century-Crofts, 1961.
Kling, J. W. Learning: introductory survey. In J. W.
Kling and L. A. Riggs (Eds.), Experimental psychology.
New York: Holt, Rinehart, and Winston, 1971.
Lea, S. E. G. and Morgan, M. J. The measurement of
rate-dependent changes in responding. In R. M.
Gilbert and J. R. Millenson (Eds.), Reinforcement:
behavioral analyses. New York: Academic Press,
1972. Pp. 129-145.
Logan, F. A. Incentive: how the conditions of reinforcement
affect the performance of rats. New
Haven: Yale University Press, 1960.
Lyon, D. O. Frequency of reinforcement as a parameter
of conditioned suppression. Journal of the
Experimental Analysis of Behavior, 1963, 6, 95-98.
Millenson, J. R. and de Villiers, P. A. Motivational
properties of conditioned anxiety. In R. M. Gilbert
and J. R. Millenson (Eds.), Reinforcement: behavioral
analyses. New York: Academic Press, 1972.
Morse, W. H. Intermittent reinforcement. In W. K.
Honig (Ed.), Operant behavior: areas of research
and application. New York: Appleton-Century-Crofts,
1966. Pp. 52-108.
Neuringer, A. J. Effects of reinforcement magnitude
on choice and rate of responding. Journal of the
Experimental Analysis of Behavior, 1967, 10, 417-424.
Nevin, J. A. Differential reinforcement and stimulus
control of not responding. Journal of the Experimental
Analysis of Behavior, 1968, 11, 715-726.
Nevin, J. A. On the form of the relation between
response rates in a multiple schedule. Journal of
the Experimental Analysis of Behavior, 1974, in press.
Nevin, J. A. Stimulus control. In J. A. Nevin (Ed.),
The study of behavior: learning, motivation, emotion,
instinct. Glenview, Ill.: Scott, Foresman, and
Co., 1973. Pp. 114-152.
Nevin, J. A. and Shettleworth, S. J. An analysis of
contrast effects in multiple schedules. Journal of the
Experimental Analysis of Behavior, 1966, 9, 305-315.
Rachlin, H. and Baum, W. M. Response rate as a function
of amount of reinforcement for a signalled
concurrent response. Journal of the Experimental
Analysis of Behavior, 1969, 12, 11-16.
Rachlin, H. and Baum, W. M. Effects of alternative
reinforcement: does the source matter? Journal of
the Experimental Analysis of Behavior, 1972, 18, 231-241.
Reynolds, G. S. Some limitations on behavioral contrast
and induction during successive discrimination.
Journal of the Experimental Analysis of Behavior,
1963, 6, 131-139.
Richards, R. W. Reinforcement delay: some effects on
behavioral contrast. Journal of the Experimental
Analysis of Behavior, 1972, 17, 381-394.
Shettleworth, S. and Nevin, J. A. Relative rate of response
and relative magnitude of reinforcement in
multiple schedules. Journal of the Experimental
Analysis of Behavior, 1965, 8, 199-202.
Skinner, B. F. The behavior of organisms. New York:
Appleton-Century-Crofts, 1938.
Spence, K. W. The differential response of animals
to stimuli differing within a single dimension. Psychological
Review, 1937, 44, 430-444.
Terrace, H. S. Behavioral contrast and the peak shift:
effects of extended discrimination training. Journal
of the Experimental Analysis of Behavior, 1966, 9, 613-617.
Wilson, M. P. Periodic reinforcement interval and
number of periodic reinforcements as parameters
of response strength. Journal of Comparative and
Physiological Psychology, 1954, 47, 51-56.
Wyckoff, L. B. Toward a quantitative theory of secondary
reinforcement. Psychological Review, 1959, 66, 68-78.

Received 11 July 1973.
(Final acceptance 29 November 1973.)