
The Psychological Record, 2010, 60, 151–158

SCHEDULES OF REINFORCEMENT AT 50:
A RETROSPECTIVE APPRECIATION
David L. Morgan
Spalding University
Published just over a half century ago, Schedules of Reinforcement (SOR) (Ferster & Skinner, 1957) marked the seminal empirical contribution of the book's second author and ushered in an era of research on behavior–environment relationships. This article traces the origins of both the methods and the data presented in SOR, and its legacy within experimental psychology. The operant methodology outlined in SOR revolutionized the study of behavioral pharmacology and early investigations of choice behavior. More recently, schedules have been used to examine microeconomic behavior, gambling, memory, and contingency detection and causal reasoning in infants, children, and adults. Implications of schedules research for the psychology curriculum are also discussed.
Key words: schedules of reinforcement, choice, behavioral pharmacology,
behavioral economics
Correspondence concerning this article should be addressed to David L. Morgan, PhD, School of Professional Psychology, Spalding University, 845 South Third St., Louisville, KY 40203 (e-mail: dmorgan@spalding.edu).
This article is dedicated to the memory of Peter Harzem, mentor and scholar, for his substantial empirical and conceptual contributions to a science of behavior.

A half century has come and gone since the publication of Schedules of
Reinforcement (Ferster & Skinner, 1957), a landmark work in experimental
psychology, and inarguably Skinner's most substantive empirical
contribution to a science of behavior. The book introduced psychologists
to an innovative methodology for studying the moment-to-moment
dynamics of behavior–environment interactions, presented volumes of real-time data exhibiting an orderliness nearly unprecedented in psychology,
and established the research agenda of the science Skinner termed the
experimental analysis of behavior. Although he drew sharp and abundant
criticism for his forays into social theorizing, Skinner's experimental work
was frequently recognized, even by those not sharing his epistemology, as
a rigorous and important development in behavioral science, as attested to
by numerous awards, including the American Psychological Association's
(APA's) Distinguished Scientific Contribution Award (APA, 1958) and the
National Medal of Science (APA, 1969).
Skinner has frequently been acknowledged as among psychology's
most eminent figures (Haggbloom et al., 2002). Unfortunately, much of his
reputation, among both psychologists and laypersons, stems largely from his
writings on freedom, dignity, autonomous man, the limitations of cognitive
theorizing, and social engineering. Although this work received considerable
attention and provoked much dialogue both within and outside of psychology
(see, e.g., Wheeler, 1973), a focus on Skinner's social views may serve to
shortchange an appreciation of the important experimental work for which
he initially became known. Consequently, it may be fitting to revisit Schedules
of Reinforcement (SOR) and to consider its ramifications for a science of
behavior more than 50 years since the publication of this influential work.
The purpose of the present article is to discuss the psychological climate in
which SOR was conceived, the many research programs within behavioral
science that benefitted from both the methodology and the conceptualization
provoked by SOR, and contemporary basic and applied research utilizing
schedules to illuminate many dimensions of human and nonhuman behavior,
and to consider some ways in which instructors of psychology might update
and extend their coverage of schedules of reinforcement in the classroom.

History and Development of Schedules Research


As is common in science, work on schedules of reinforcement was
prompted by serendipitous and practical matters having little to do with
formal empirical or theoretical questions. In a very personal account of
his development as a scientist, Skinner (1956) discussed how the idea
of delivering reinforcers intermittently was prompted by the exigencies
of laboratory supply and demand. Because the food pellets used in the
inaugural work on operant behavior were fashioned by Skinner just prior to
experimentation, their rapid depletion during the course of experimentation
was nearly axiomatic:
Eight rats eating a hundred pellets each day could easily keep
up with production. One pleasant Saturday afternoon I surveyed
my supply of dry pellets, and, appealing to certain elemental
theorems in arithmetic, deduced that unless I spent the rest
of that afternoon and evening at the pill machine, the supply
would be exhausted by ten-thirty Monday morning. . . . It led
me to apply our second principle of unformalized scientific
method and to ask myself why every press of the lever had to be
reinforced. (Skinner, 1956, p. 226)
This insight would lead to a fruitful program of research, carried
out by Ferster and Skinner, over the course of nearly 5 years, in which
the patterning and rate of response of nonhuman animal subjects were
examined under several fundamental schedules. The research itself,
programmatic and unabashedly inductive, was carried out in a style that
would scarcely be recognizable to today's behavioral researchers. In many
cases, experiments were designed and conducted in a single day, often to
answer a specific question provoked by surveying cumulative records
resulting from the previous days research. The research did not follow
the prescriptions of the hypothetico-deductive method, because the data
from any given experiment pointed to the logical subsequent experimental
manipulations:
There was no overriding preconception that ruled where
research should or should not go. All that new facts needed
for admission to scientific respectability was that they meet
minimal operational requirements. New concepts had to be
publicly replicable to be verified and accepted. (Green, 2002, p. 379)
Somewhat of an anomaly among monographs, Schedules of Reinforcement
offered the reader an unusually large ratio of graphed data to actual text.
This configuration, although unfamiliar to many behavioral scientists, was
no mistake. That the book is dominated not by verbiage but by volumes of
cumulative records, portraying response characteristics across a wide array
of schedule types and parameters, is testament to the spirit of the natural
science of behavior that was unfolding in the Harvard pigeon lab. Ferster
described the tenor of the schedules research with enthusiasm:
Much of the early research in the various areas of schedules of
reinforcement was playful, in the sense that the behavior of the
experimenter was maintained and shaped by the interaction
with the phenomena that were emerging in the experiment.
(Ferster, 1978, p. 349)
Because both the experimental methods and the data being generated
broke new ground, there was the potential for nearly continuous discovery,
as each distinct schedule produced noticeable, and sometimes dramatic,
effects on the patterning and rate of behavior. The classic fixed-interval
scallop evidenced by a gradual increase in response rate near the end of
the interval, the postreinforcement pauses characteristic of both fixed
ratio and interval schedules, and the high response rates generated by ratio
schedules compared to interval schedules were distinct reminders that
not all behavior–consequence relationships are created equal. Moreover,
these dynamic changes in behavior were not hidden in statistical models
or summarized descriptive measures; they were made evident by the
cumulative recorder, an apparatus nearly perfectly adapted to the research
program itself. For Skinner, the method and corresponding apparatus had
more than proved themselves on pragmatic grounds:
The resolving power of the microscope has been increased
manyfold, and we can see fundamental processes of behavior
in sharper and sharper detail. In choosing rate of responding
as a basic datum and in recording conveniently in a cumulative
curve, we make important temporal aspects of behavior visible.
(Skinner, 1956, p. 229)
These "molecular" changes in probability of responding are
most immediately relevant to our own daily lives... There is
also much to be said for an experiment that lasts half an hour
and for a quick and comprehensive look at the fine grain of the
behavior in the cumulative curve. (Skinner, 1976, p. 218)
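The cumulative record Skinner describes is easy to reconstruct: each response steps a running count up by one unit, so the local slope of the curve is the local response rate. The following minimal Python sketch (with invented response times; the scalloped pattern is purely illustrative) shows how such a record is built and plotted:

```python
import matplotlib.pyplot as plt

# Hypothetical response times (seconds) imitating a fixed-interval
# "scallop": a pause after each reinforcer, then accelerating responding
# as the end of each 30-s interval approaches.
response_times = [12, 20, 25, 28, 29.5, 30,      # first FI 30-s interval
                  44, 52, 56, 58, 59.5, 60]      # second interval
cumulative_counts = range(1, len(response_times) + 1)

# Each response steps the record up by one unit; slope = response rate.
plt.step(response_times, cumulative_counts, where="post")
plt.xlabel("Time (s)")
plt.ylabel("Cumulative responses")
plt.title("Simulated cumulative record (FI 30 s)")
plt.show()
```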
There was clearly a great deal of excitement in the air among the small
cadre of researchers, many of them colleagues or students of Skinner's, who
pioneered research on schedules of reinforcement. The enthusiasm was not
universally shared, however, among experimental psychologists, and the
novelty of both the research methodology and data generated in operant
laboratories proved a sizable obstacle to dissemination of the research.
Many journal editors, themselves practitioners and advocates of the large-
group, null-hypothesis testing designs of Ronald Fisher, were reluctant to
publish studies whose principal data were from individual organisms,
neither aggregated across participants nor analyzed via acceptable tests of
statistical inference:
Operant conditioners were finding it hard to publish their
papers. The editors of the standard journals, accustomed to
a different kind of research, wanted detailed descriptions of
apparatus and procedures which were not needed by informed
readers. They were uneasy about the small number of subjects
and about cumulative records. (The cumulative record was, in
fact, attacked as a subtle curve-smoothing technique, which
concealed differences between cases and should be abandoned
in favor of objective statistics.) (Skinner, 1987, p. 448)
In response to the unfavorable climate offered by the traditional
experimental journals, those involved in the early schedules research
took it upon themselves to establish a separate journal friendly to the
innovative methods and data of operant research. Ferster, SOR's first
author, spearheaded an effort that led, in 1958, to the establishment of the
Journal of the Experimental Analysis of Behavior (JEAB), the primary outlet
for operant research for the past half-century. Not surprisingly, JEAB was
dominated by research on schedules of reinforcement, although perusal
of the journal's earliest volumes reveals that there was much more on the
minds of scientists submitting their work to the journal (Nevin, 2008).
Indeed, the journals first volume contained studies utilizing an impressive
array of species (rats, pigeons, rhesus monkeys, chimpanzees, and human
adults and children), response classes (lever pressing, key pecking, wheel
running, grooming, and stuttering), and consequences (food, water, shock,
tones, tokens, money, and colored lights correlated with reinforcement).
Among the most noteworthy aspects of this pioneering work was the
usefulness of operant methodology to many scientists from disciplines
seldom associated with behavior analysis. The early volumes of JEAB
read like a Who's Who of scientists whose names would eventually be
synonymous with such disciplines as behavioral medicine (Brady, Porter,
Conrad, & Mason, 1958), behavior pharmacology (Dews, 1958, 1970;
Dews & Morse, 1958), animal psychophysics (Blough, 1958, 1959, 1971),
and animal cognition (Premack & Bahwell, 1959; Premack & Schaeffer,
1963; Shettleworth & Nevin, 1965; Terrace, 1963, 1969). For many of
these researchers, the free-operant methodology introduced by Skinner
allowed for an unprecedented degree of experimental control, and
this paid especially high dividends in work with nonhuman animals.
The response rates and patterns emitted by animals under common
schedules of reinforcement exhibited remarkable stability, even over
prolonged experimental sessions. This stability, or steady-state, behavior
thus became a reliable baseline for evaluating the effects of other
variables of interest, including drugs, acoustic and visual properties of
antecedent (discriminative) stimuli, and characteristics of reinforcers,
such as magnitude, delay, and quality. In particular, the programming of
concurrent schedules not only allowed for these kinds of comparisons but
also set the stage for the experimental analysis of choice behavior, a topic
that dominated the experimental analysis of behavior for decades. The
well-known matching law, first articulated by Herrnstein (1961, 1970), may
represent the single most important empirical contribution of behavior
analysis to a science of behavior.
Naturally, schedules of reinforcement served frequently as the principal independent variable, and this was a logical consequence of the work
done previously by Ferster and Skinner (1957). What might surprise many
psychologists, however, is the frequent attention given, in both SOR and the
pages of JEAB, to a much larger collection of behavioral contingencies than
depicted in introductory textbooks (continuous reinforcement [CRF], fixed-ratio [FR], variable-ratio [VR], fixed-interval [FI], and variable-interval [VI]).
Indeed, Ferster and Skinner devoted a substantial portion of SOR to the
study of a variety of both simple and complex schedules, as it became apparent early on that consequences could be delivered contingent on behavior
along a sizable continuum of dimensions. In addition to studying response
patterns across various ratio and interval parameters, it was possible to arrange compound schedules, in which two or more separate schedules were
operating either successively or concurrently. Sometimes schedule alterations were signaled by discriminative stimuli, a procedure that allowed for
the analysis of stimulus control, conditioned reinforcement, and the development of behavioral chains. In addition, Ferster and Skinner explored the
possibility of producing very low or very high rates of behavior by arranging
reinforcement contingent only on specified interresponse times (IRTs). These
differential reinforcement schedules would become of theoretical interest because of their implications for how response classes are defined (Catania,
1970; Malott & Cumming, 1964), and several variations of differential reinforcement have become standard alternatives to punishment in applied behavior analysis (Cooper, Heron, & Heward, 2007).
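The procedural differences among these contingencies are easy to state in code. The sketch below (illustrative class names and simplifications of my own, not Ferster and Skinner's apparatus logic) shows how FR, VI, and DRL arrangements decide whether a given response produces a reinforcer:

```python
import random
import time

class FixedRatio:
    """FR n: reinforce every nth response."""
    def __init__(self, n):
        self.n = n
        self.count = 0

    def response(self):
        self.count += 1
        if self.count >= self.n:
            self.count = 0
            return True          # reinforcer delivered
        return False

class VariableInterval:
    """VI t: reinforce the first response after a variable interval
    averaging t seconds has elapsed."""
    def __init__(self, mean_seconds):
        self.mean = mean_seconds
        self._arm()

    def _arm(self):
        # Exponential intervals give the constant-probability feel of a VI.
        self.available_at = time.time() + random.expovariate(1.0 / self.mean)

    def response(self):
        if time.time() >= self.available_at:
            self._arm()
            return True
        return False

class DRL:
    """Differential reinforcement of low rate: reinforce a response only
    if the interresponse time (IRT) exceeds a minimum."""
    def __init__(self, min_irt):
        self.min_irt = min_irt
        self.last = None

    def response(self):
        now = time.time()
        reinforced = self.last is not None and (now - self.last) >= self.min_irt
        self.last = now
        return reinforced
```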
In sum, the work begun by Ferster and Skinner (1957), and embraced by
an enthusiastic group of scientists, established a strong precedent for researchers interested in the systematic study of behavior maintained by its
consequences. The free-operant method would eventually find widespread
application, not only in the laboratories of many behavioral and natural scientists but, increasingly, in applied endeavors as well. Featured prominently
in much of this research, however, is the primacy of schedules of reinforcement. The pervasiveness of schedule effects was suggested by Zeiler (1977):
It is impossible to study behavior either in or outside the laboratory without encountering a schedule of reinforcement: Whenever behavior is maintained by a reinforcing stimulus, some
schedule is in effect and is exerting its characteristic influences.
Only when there is a clear understanding of how schedules operate will it be possible to understand the effects of reinforcing
stimuli on behavior. (p. 229)
Naturally, not all who have made use of schedule methodology endorse
the claim that behavior is principally controlled by schedules of reinforcement. Several prominent researchers in behavior analysis have suggested that
schedules are, in fact, artificially imposed contingencies that neither mimic
real world functional relationships nor cast explanatory light on the role of
the environment in the organization of behavior. Shimp (1976, 1984), for
instance, has long argued that the molar behavior patterns observed under
conventional reinforcement schedules are merely the by-product of moment-
to-moment adjustments of behavior to more localized contingencies. There
is, Shimp suggested, no reason to view molar response rates as fundamental
units of behavior when more molecular analyses reveal similar order and
offer a more nuanced view of the temporal organization of behavior. Similarly,
Schwartz has claimed that the contingencies of reinforcement manipulated by
both laboratory and applied behavior analysts have no counterpart in nature
and do not reflect fundamental properties of behavior. He has suggested that
research has merely shown that reinforcement may influence behavior when
other causal factors are eliminated or suppressed, and this often occurs
only under contrived and artificial conditions (Schwartz, 1986; Schwartz
& Lacey, 1982). Finally, there is a substantive research literature on human
operant behavior that questions the generality of nonhuman animal schedule performance. Differences that emerge in response patterning between
humans and nonhumans, especially on temporal schedules, have provoked
numerous dialogues on the interaction between verbal behavior and schedule
performance, contingency versus rule-governed behavior, and the sensitivity
of behavior to scheduled contingencies (e.g., Catania, Shimoff, & Matthews,
1989; Hayes, Brownstein, Zettle, Rosenfarb, & Korn, 1986; Hayes & Ju, 1998;
Matthews, Shimoff, Catania, & Sagvolden, 1977; Schlinger & Blakely, 1987).
Thus, the applicability of reinforcement schedules in describing and accounting for human behavior remains a contentious issue for many.

Prominent Areas of Research on Schedules of Reinforcement


Todays behavior analysts, whether working in laboratories or applied
settings, continue to pursue the implications of Ferster and Skinner's
early work, and in doing so, they have extended considerably the scope of
behavioral phenomena to which reinforcement schedules seem clearly
relevant. As suggested by Zeiler (1977), the importance of the schedules
research lay in its generic implications for behavior–consequence
relationships, rather than in the detailed analysis of pigeons pecking keys
for grain. In a sense, Ferster and Skinner (1957) had offered both a powerful
methodology for studying behavior and a promissory note for those willing
to mine further its implications for a science of behavior, especially with
respect to human affairs.

Choice Behavior
As mentioned earlier, an especially important development in schedules research was the use of concurrent schedules as a vehicle for studying behavioral choice. The programming of two separate schedules of reinforcement (often two variable-interval schedules), operating simultaneously, made it possible to examine how organisms allocate their behavior to
several available options. Among the most salient and reliable discoveries
was that relative response rates (or allocation of time to a schedule) closely
track relative reinforcement rates, the so-called matching law (Baum, 1973;
Herrnstein, 1961, 1970). Once behavior stabilized on concurrent schedules,
response rates and patterns served as reliable and powerful benchmarks for
evaluating qualitative and quantitative differences in reinforcement. Thus,
the use of concurrent schedules became invaluable as a tool for exploring
other issues in behavioral allocation.
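In compact form, using B for response rates and R for obtained reinforcement rates on two alternatives, the strict and generalized matching relations are conventionally written as

\[
\frac{B_1}{B_1 + B_2} = \frac{R_1}{R_1 + R_2},
\qquad
\log\!\left(\frac{B_1}{B_2}\right) = a \log\!\left(\frac{R_1}{R_2}\right) + \log b,
\]

where the sensitivity parameter a and bias parameter b of the generalized form absorb systematic departures from strict matching, such as undermatching or position preferences.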

The study of choice behavior became a highly sophisticated business
within the operant laboratory, and significant effort has been expended on
identifying the local and extended characteristics of responding on concurrent schedules. This research has led to both significant quantification, as
seen in the growth of mathematical models of operant behavior (Killeen,
1994; Mazur, 2001; Shull, 1991), and substantial theoretical discussion of
whether matching behavior is best conceptualized at the molar or molecular level (Baum, 2004; Hineline, 2001). But such developments would remain
of interest to a fairly small group of laboratory scientists were it not for
the general applicability of the choice paradigm itself. In fact, the relevance
of the concurrent schedules preparation is its conspicuous analogy to any
circumstance in which an organism faces a choice between two or more behavioral options. The matching law, initially demonstrated by pigeons pecking keys for grain reinforcement, is now recognized as a remarkably robust
behavioral phenomenon, characterizing the choice behavior of a large number of species, emitting various topographically diverse responses, for an
array of reinforcers (Anderson, Velkey, & Woolverton, 2002; deVilliers, 1977;
Houston, 1986; Pierce & Epling, 1983; Spiga, Maxwell, Meisch, & Grabowski,
2005).
Perhaps the most relevant and interesting examples of the matching law
are its applications to human behavior. Initial laboratory studies in which
(a) responses typically included button or lever presses and (b) reinforcers
included points on a computer console or snacks (e.g., peanuts) often
produced data very similar to those acquired in the animal laboratory; that
is, participants tended to allocate behavior in a manner consistent with
the matching law (Baum, 1975; Bradshaw & Szabadi, 1988; Buskist & Miller,
1981). Applications of the matching law to behavior in applied settings
have largely confirmed findings from the more controlled environment
of the laboratory. Matching, for instance, has described allocation of self-injury in an 11-year-old child (McDowell, 1982), and choice of sexually
provocative slides by a convicted pedophile (Cliffe & Parry, 1980). More
recently, the matching relation has been demonstrated in archival analyses
of sports performance. Both college and professional basketball players have
been shown to match the relative rate of 2-point and 3-point shots to their
respective reinforcement rates, once the differences in reinforcer magnitude
are adjusted (Romanowich, Bourret, & Vollmer, 2007; Vollmer & Bourret,
2000). Reed, Critchfield, and Martens (2006) demonstrated that professional
football teams allocated running and passing plays in a manner predicted
by the matching law as well. Other studies have shown that spontaneous
comments during informal conversations were directed toward confederates
who reinforced responses with verbal feedback, and that the relative rates of
comments occurred in a manner consistent with the generalized matching
law (Borrero et al., 2007; Conger & Killeen, 1974).
The general choice paradigm has become an enormously generative
platform for studying many aspects of behavior that involve choosing
between alternative activities or reinforcers, and there would seem to be
few socially significant behaviors that could not be meaningfully viewed
as instances of choice activity. For instance, both classic research and
contemporary research on self-control, which typically pits an immediate,
small reinforcer against a delayed, large reinforcer, have shed considerable
light on the relative importance of reinforcer quality, magnitude, and delay
in determining individual choice behavior (Epstein, Leddy, Temple, & Faith,
2007; Logue, 1998; Mischel & Grusec, 1967; Mischel, Shoda, & Rodriguez,
1989). A sizable literature attests to the extent to which reinforcer value
decreases monotonically as a function of time delay, and this finding has
potential implications for an array of behaviors said to reflect impulsivity,
or lack of self-control. Indeed, the phenomenon known as delay discounting
has informed many recent analyses of behavior problems characterized by
impulsivity, including smoking (Bickel, Odum, & Madden, 1999), substance
abuse (Bickel & Marsch, 2001; Vuchinich & Tucker, 1996), and pathological
gambling (Dixon, Marley, & Jacobs, 2003; Petry & Casarella, 1999).
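The delay discounting literature just cited typically quantifies this decline hyperbolically (the model associated with Mazur, whose quantitative work is cited above):

\[
V = \frac{A}{1 + kD},
\]

where V is the present value of a reinforcer of amount A delivered after delay D, and the free parameter k indexes how steeply value decays; larger estimates of k are the quantitative signature of the impulsive choice examined in the smoking, substance abuse, and gambling studies just cited.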

Behavioral Pharmacology
Articles that appeared in the first volume of JEAB portended a significant
role for schedules of reinforcement in the examination of the relationship
between drugs and behavior. By 1958, the year of the journal's inaugural
issue, schedule performance, including response rate and patterning, was
well known for several species and across several parameters of many
different schedule types. Consequently, predictable behavior patterns were
the norm, and these served as very sensitive baselines against which to
judge the effects of other independent variables, including chemicals. In
reminiscing about the early laboratory work on behavioral pharmacology,
some of the discipline's pioneers reflected on the advantages of schedules
research for scientists interested in drug–behavior relationships:
When Charlie [Ferster] showed me the pigeon laboratory in early
1953, it was immediately apparent from the counters and cumulative records that behavioral phenomena were being studied
in a way that was well suited for application to pharmacology.
In spite of my reservations about pigeons as subjects for drugs,
in a short time we had planned a joint project, on effects of pentobarbital on fixed-interval responding, and immediately started
experiments... A sort of natural selection based on results then determined what lines seemed interesting and would be continued....
(Dews, 1987, p. 460)
The marriage of operant methodology, particularly schedules research,
with pharmacology paid immediate dividends, and much of the agenda for
the young science of behavioral pharmacology was established by the work
being done by Peter Dews, Roger Kelleher, and William Morse. Among the
more important observations coming out of this early research was the
concept of rate dependence: that the effects of any particular drug were
substantially modulated by the current rate of the behavior, and this was
largely determined by the schedule maintaining the behavior:
Under conditions where different food-maintained performances alternated throughout an experimental session, pentobarbital
was shown to affect behavior occurring under a fixed-interval
(FI) schedule differently from behavior controlled by a fixed ratio (FR) schedule.
...
This simple but powerful series of experiments lifted the evaluation and interpretation of drug effects on
behavior from that of speculation about ephemeral emotional
states to quantitative and observable aspects of behavior that
could be manipulated directly and evaluated experimentally.
(Barrett, 2002, p. 471)
Although the general notion of rate dependency appeared early in
behavioral pharmacology, its explanatory scope was eventually questioned,
and the discipline turned ultimately to other behavior principles expected
to interact with drug ingestion, including qualitative properties of both
reinforcement and punishment (Branch, 1984). In addition, considerable
research focused on the function of drugs as discriminative stimuli,
as this work dovetailed well with traditional pharmacological efforts at
identifying physiological mechanisms of drug action (Colpaert, 1978). More
recently, attention has been directed to self-administration of drugs and the
identification of drugs as reinforcing stimuli (Branch, 2006).
Even in contemporary research on behavioral pharmacology, schedules of
reinforcement play an integral role, particularly when drugs are used as reinforcers for operant responses. Indeed, one method for evaluating the reinforcing strength of a drug is to vary its cost, and this can be done in operant
studies through the use of a progressive ratio schedule. In a progressive ratio
schedule, response requirements are increased with the delivery of each reinforcer (Hodos, 1961). In such a study, a participant may initially receive an injection of a
drug based on a single response, but the required ratio of responses
to reinforcers increments with each reinforcer delivery. Under such schedules,
a single drug injection may eventually require hundreds or thousands of responses. The point at which responding terminates is known as the break
point, and this behavioral procedure allows for a categorization of drugs in a
quantitative manner that somewhat parallels the abuse potential for different
drugs (Young & Herling, 1986). The use of progressive ratio schedules in evaluating drug preference and/or choice in nonhumans and humans alike has
become a standard procedure in behavioral pharmacological research (Hoffmeister, 1979; Katz, 1990; Loh & Roberts, 1990; Spear & Katz, 1991; Stoops,
Lile, Robbins, Martin, Rush, & Kelly, 2007; Thompson, 1972).
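A progressive ratio contingency is straightforward to express in code. The following Python sketch (my own toy step rule; published studies use various arithmetic or geometric progressions) increments the response requirement after each reinforcer and returns the break point:

```python
def progressive_ratio_session(responder, start=1, step=2.0, max_ratio=10_000):
    """Run a progressive ratio session with a geometric step rule.

    `responder(ratio)` stands in for the participant: it returns True if
    the participant completes `ratio` responses for the next reinforcer.
    The largest completed ratio is the break point.
    """
    ratio, break_point = start, 0
    while ratio <= max_ratio:
        if not responder(ratio):       # responding ceases: break point reached
            break
        break_point = ratio            # reinforcer earned at this ratio
        ratio = int(ratio * step)      # requirement grows after each reinforcer
    return break_point

# Toy example: a participant willing to emit at most 500 responses per reinforcer.
print(progressive_ratio_session(lambda r: r <= 500))   # prints 256
```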

Behavioral Economics
In operant experiments with both nonhumans and humans, participants respond to produce consequences according to specific reinforcement schedules. The reinforcers, of course, differ from study to study and
include such diverse consequences as food, tokens, points on a computer
monitor, money, social praise or attention, music, toys, and so on. It is not
terribly difficult, however, to see how these consequences could be viewed
as commodities, with participants functioning as consumers in standard
operant experiments. In fact, the analogy between behavioral experiments
and the phenomena of economics has been recognized by scientists for several decades (Lea, 1978). For those interested in the study of microeconomic
behavior, the operant laboratory afforded a controlled environment and established experimental methods well suited to this purpose. In addition to
viewing participants as consumers and reinforcers as commodities, operant
analyses of economic behavior conceptualize reinforcement schedules as
representing constraints on choice. Thus, different schedules of reinforcement can be seen as alterations in the availability, or price, of a commodity,
and response allocation can be observed as a function of these changes in
availability.

The use of reinforcement schedules, then, allows for an experimental
investigation of such economic principles as elasticity: the change in consumption of a good as a function of its price. The consumption of some reinforcers, for example, may be affected more than others by increases in price,
usually represented by increases in ratio schedule parameters. Similarly, by
making different commodities available concurrently and according to the
same price (schedule of reinforcement), the relative substitutability of the
commodities can be ascertained. In essence, researchers involved in behavioral economics view the operant laboratory as a very useful environment
for pursuing the classic demand curves of economic theory, but with the
added benefit of precise manipulation of independent variables and objective measurement of dependent variables (Allison, 1983; Hursh, 1984; Hursh
& Silberberg, 2008). Fortunately, the benefits of this cross-fertilization have
been realized on both sides. Research in behavioral economics has had important, though unforeseen, consequences for the experimental analysis of
behavior as well. Studies of the elasticity and substitutability of reinforcers
conventionally used in the operant laboratory (e.g., food, water) demonstrated that demand curves for these reinforcers depend substantially on how
they are made available. An especially important variable, though historically ignored by behavior analysts, was whether the reinforcer used in experimentation was available only within the context of experimental sessions,
or available in supplementary amounts outside the experimental space, as is
common with nonhuman animal research. This distinction between "open"
(supplemental food made available outside of experimental sessions) and
"closed" (no supplemental food available outside experimental sessions)
economies proved immensely important to understanding response allocation in studies of choice, and was a direct result of viewing operant experiments as investigations in microeconomics (Hursh, 1978, 1984). In general,
the synthesis of microeconomic theory with the experimental methodology
of behavior analysis has moved the latter field toward a more systematic
assessment of reinforcer value, as well as a greater appreciation of the delicate nature of reinforcer value, given the variables of price and alternative
sources of reinforcement (Hursh & Silberberg, 2008).
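Hursh and Silberberg's (2008) treatment, cited above, formalizes demand with an exponential equation, commonly presented as

\[
\log Q = \log Q_0 + k\left(e^{-\alpha Q_0 C} - 1\right),
\]

where Q is consumption at price C, Q_0 is consumption at zero price, k scales the overall range of consumption, and the rate parameter \(\alpha\) (how quickly elasticity increases with price) indexes the essential value of the commodity.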

The Pervasive Nature of Schedules of Reinforcement in Human Behavior
Any psychology professor teaching about schedules of reinforcement
has the somewhat dubious challenge of describing to students what experiments on key-pecking pigeons might have to do with the lives of human
beings as we move about our very busy environments. Ordinarily, this assignment entails a description of the scientific logic behind using simple,
well-controlled analogue strategies to examine fundamental principles before proceeding with larger scale generalizations. A few students "get" that
there are important functional equivalencies between birds pecking keys
and humans pressing buttons on an ATM. For those who are not so convinced, descriptions of behavior under schedule control must exceed some
higher threshold of ecological validity to be meaningful. These descriptions
inevitably must go beyond the conventional description of basic contingencies (e.g., CRF, FR, VR, FI, VI) and their putative relevance to piecework,
checking the mail, and gambling. Fortunately, this task is not especially
daunting. The ubiquity of schedule effects on behavior in the natural environment can be readily appreciated once one acknowledges that all behavior
operates on the environment. Of course, the consequences of behavior are
not always as dramatic or programmed as the delivery of a food pellet or a
winning slot machine configuration. Subtle behavior–environment relationships, common in the real world we all inhabit, offer both numerous and
authentic examples. Turning the ignition key starts the engine of a vehicle,
and turning one's head in the direction of a novel sound brings a potentially
important stimulus into one's visual field. Entering one's correct password
on a keyboard allows access to an electronic environment of one's choice. In
none of these scenarios would we be likely to speak of the consequences described as "feeling good" or as "rewards" in the vernacular sense, but their
function in maintaining these common repertoires can hardly be ignored.
Also, real-world consequences are often of an ethereal nature in the
everyday environment, seldom delivered in a manner well-described
by any of the conventionally studied reinforcement schedules. Social
reinforcement, for instance, can be especially fickle, and consequently quite
difficult to simulate in the operant chamber. Parents do not always pick up
crying babies, married couples respond to each other's verbal statements
with fitful consistency, and teachers attend too infrequently to students
who are academically on task. Moreover, these natural contingencies
are of a reciprocal nature, with each actor serving both antecedent and
consequential functions in an ongoing, dynamic interplay. This reciprocity
was well appreciated by Skinner (1956), who often credited his nonhuman
animal subjects with maintaining his interest in the learning process: "The
organism whose behavior is most extensively modified and most completely
controlled in research of the sort I have described is the experimenter
himself . . . The subjects we study reinforce us much more effectively than
we reinforce them" (p. 232).
More recently, complex interpersonal contingencies have been pursued by a number of researchers, most notably child development researchers interested in the interplay between infants and caregivers. In
much of this work, the focus is on the child's detection of social contingencies. Numerous studies have demonstrated that infants respond
very differently to social stimulation (e.g., caregiver facial expressions,
verbalizations, physical contact) delivered contingent on infant behavior
as compared with responses to the same stimulation delivered noncontingently (Fagen, Morrongiello, Rovee-Collier, & Gekoski, 1984; Millar &
Weir, 1992; Tarabulsy, Tessier, & Kappas, 1996). Caregiver behavior that is
unpredictable or unrelated to infant behavior is seen as a violation of expected social behavior and is correlated with negative affective reactions
in infants (Fagen & Ohr, 1985; Lewis, Allesandri, & Sullivan, 1990). This
contingency detection research suggests that human infants are highly
prepared to identify social reinforcement contingencies, particularly
those connected to their own attempts to engage caregivers. Tarabulsy et
al. (1996) have argued that disruptions to, or asynchrony in, infant–caregiver interactions, known to correlate with insecure attachment (Isabella
& Belsky, 1991), are fundamentally manifestations of social contingency
violations.
Similar to contingency-detection research in developmental psychology,
reinforcement schedules have also been used to evaluate causal reasoning
in adult participants. Reed (1999, 2001, 2003) has demonstrated that
participants render postexperiment judgments of causality as a function
of the contingency described by the schedule. In these experiments, the
individual's responses on a computer keyboard resulted in brief illumination
of an outlined triangle on the monitor according to programmed schedules.
Participants moved through several schedules sequentially, and experiments
pitted schedules that ordinarily produce high response rates (ratio and
differential reinforcement of high rate; DRH) against those usually associated
with lower response rates (interval and differential reinforcement of low
rate; DRL). Although reinforcement rates were held constant across schedules
by way of a yoking procedure, ratio and DRH schedules still produced
greater response rates as well as higher causality judgments. Participants
viewed their behavior as being more responsible for the illumination of
the triangle when emitting higher rather than lower response rates. These
results were interpreted as evidence that the frequent bursts of responding
that produce reinforcement in ratio schedules create a greater sense of
behavior–consequence contingency than the pauses, or longer interresponse
times (IRTs), common to interval schedules (Reed, 2001). This research,
utilizing conventional operant methodology, is important because it brings
a functional analysis to bear on response topographies seldom considered
within the purview of behavior analysis.
Of course, being creatures of habit, scientists conducting contemporary
research on schedules tend to remain closely tethered to the traditional
dimensions of count and time (ratio and interval schedules) developed
by Ferster and Skinner (1957). There are, however, many other ways in
which behavior–environment relationships can be configured. Williams
and Johnston (1992), for instance, described a number of reinforcement
schedules in which both the response and the consequence could assume
either discrete (count-based) or continuous (duration) properties, and
observed response rates to vary as a function of the specific contingency.
Broadening the properties of reinforcement schedules in this way helps
bring research on schedules closer to the dynamic behavior–environment
interchanges characteristic of human behavior in the natural environment.
As an example, in a conjugate reinforcement schedule, the duration or
intensity of a reinforcing stimulus is proportional to the duration or
intensity of the response. Conjugate reinforcement schedules were utilized
early in the development of applied behavior analysis, particularly in
the pioneering work of Ogden Lindsley, who discovered that arranging a
continuous relationship between behavior and its consequences allowed for
remarkably sensitive assessment of attention in patients recovering from
surgery (Lindsley, 1957; Lindsley, Hobika, & Etsten, 1961). The procedure
was also used by advertising researchers to evaluate both attention and
preference for television programming, interest in commercials, and
preference for music type (monophonic vs. stereophonic) and volume. In
most of this research, participants pushed buttons or stepped on pedals, and
stimulation was presented as long as the response occurred, but terminated
or decreased in duration with cessation of responding (Morgan & Lindsley,
1966; Nathan & Wallace, 1965; Winters & Wallace, 1970). By requiring the
participants to maintain stimulation through a continuous response, the
method allowed for a direct assessment of interest or attention, rather than
unreliable measures such as direction of gaze or verbal reports.
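Because the defining feature of a conjugate schedule is that stimulation tracks the moment-to-moment intensity of responding, it can be sketched in a few lines of code (hypothetical names and parameter values, purely illustrative):

```python
def conjugate_step(stimulus, output, gain=0.3, decay=0.1, ceiling=1.0):
    """One time step of a conjugate contingency: stimulus intensity rises in
    proportion to current response output (e.g., press force or rate) and
    decays toward zero when responding ceases."""
    stimulus += gain * output          # stimulation tracks response intensity
    stimulus -= decay * stimulus       # and fades in the absence of responding
    return max(0.0, min(ceiling, stimulus))

# Toy run: stimulation builds while the participant responds, then fades.
level = 0.0
for output in [1, 1, 1, 0, 0, 0]:
    level = conjugate_step(level, output)
    print(round(level, 2))             # 0.27, 0.51, 0.73, 0.66, 0.59, 0.53
```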

More recently, conjugate reinforcement schedules have been used to
study memory in infants. Rovee-Collier developed a procedure in which
infants in cribs acquired a kicking response when their foot, attached via a
string to a mobile suspended over the crib, caused movement in the mobile.
Because both the intensity and duration of mobile movement are proportional
to the intensity and duration of leg movement, the contingency represents
a conjugate reinforcement schedule. The procedure has become a standard
method for evaluating not only contingency awareness in infants but also
memory for the contingency over time, and the effects of contextual stimuli
on durability of memory. Using the method and distinct contextual cues,
Rovee-Collier and colleagues have established impressive long-term memory
for infants as young as 3 months of age (Butler & Rovee-Collier, 1989; Rovee-Collier, Griesler, & Earley, 1985; Rovee-Collier & Shyi, 1992; Rovee-Collier &
Sullivan, 1980).
The conjugate reinforcement schedule may very well represent one of
the most natural and common contingencies affecting humans and nonhumans alike. The movement of a car at a fixed speed requires a constant pressure on the accelerator (assuming one is not using cruise control); an angler
controls the retrieval speed of a lure by the rate at which line is reeled in;
and the rate at which a manuscript scrolls on a computer monitor is a direct
function of speed of operation of the mouse's roller. There are, undoubtedly,
many more examples of behavior–environment interplay in which contingent stimulation tracks response intensity or duration, especially amid the
many machine–human interfaces characterizing the modern world. Ferster
and Skinner (1957) were understandably constrained in their ability to study
many of these configurations, due to the limited technology at their disposal, much of it fashioned by Skinner himself. But today's researchers, beneficiaries of more dynamic and powerful electronic programming equipment,
face fewer barriers in arranging reinforcement schedules along multiple
dimensions. Such research may serve as an important reminder, especially
to those outside of behavior analysis, of the generality and applicability of
schedules of reinforcement to naturally occurring human behavior.

Schedules of Reinforcement in the Decade of Behavior


The American Psychological Association has declared 2000–2010 the
"Decade of Behavior." As we near the end of this decade, reflecting on
the legacy of a landmark work like Schedules of Reinforcement (Ferster &
Skinner, 1957) seems appropriate, especially if one considers the somewhat
subservient role behavior has come to play in contemporary psychological
science. Baumeister, Vohs, and Funder (2007), having surveyed contemporary
social and personality psychology, reached a sobering conclusion: In a
field at one time celebrated for its methodological ingenuity, classic field
experiments, and direct assessment of social influence effects, self-report
measures have become the predominant dependent variable in contemporary
research. There are, of course, numerous advantages to utilizing self-reports as the primary mechanism for collecting psychological data, among
them convenience and potential ethical considerations, as pointed out by
Baumeister and colleagues (Baumeister et al., 2007). The behavior of filling
out a scale, however, is not isomorphic to the behavior to which the scale's
items are said to refer, and the limitations in drawing viable inferences from
self-report data are well known (Nisbett & Wilson, 1977). In fact, behavior
analysts have contributed an important piece to this scientific dialogue in
the form of analyses of the conditions under which verbal and nonverbal
behavior exhibit correspondence (Baer, Osnes, & Stokes, 1983; Critchfield,
1993; Matthews, Shimoff, & Catania, 1987; Wulfert, Dougher, & Greenway,
1991), though this research literature is not likely to be familiar to those
outside the field.
Behavior analysts may be comforted by the observation that other
behavioral scientists have increasingly come to lament a science dependent
on self-reports for its primary data. For instance, a recent series of articles
explored both the scientific and clinical repercussions of psychology's
reliance on arbitrary metrics (Blanton & Jaccard, 2006; Embretson, 2006;
Greenwald, Nosek, & Sriram, 2006; Kazdin, 2006). An arbitrary metric
is a measure of a construct, such as depression or marital satisfaction,
whose quantitative dimensions cannot be interpreted with confidence as
reflecting fundamental properties of the underlying construct. Because such
constructs are ordinarily conceptualized as multidimensional, no discrete
numerical index could be taken as an independent and comprehensive
measure of the construct itself. Nor can such measures be usefully assessed
as indicators of change in the construct in response to treatment regimens
or independent variable manipulations. Such measures, however, are the
bread and butter of assessment practice throughout much of the behavioral
sciences, especially in clinical and applied settings. As Baumeister et al.
(2007) noted, employing arbitrary metrics has become standard practice in
empirical research in social psychology, a discipline once devoted to direct
observation of ecologically valid behaviors.
This state of affairs is particularly lamentable given the substantial
advances made recently in scientific instrumentation. In fact, many of our
more mundane daily routines are now readily monitored and measured by
a host of small, powerful digital devices. This technology has often come
from other disciplines, such as animal behavior, physical and occupational
therapy, and the medical sciences, but its utility for behavioral science is
easily acknowledged. Precision-based instrumentation is now available for
the continuous measurement of a variety of behavioral dimensions. Increasingly powerful and portable digital cameras can capture the details of sequential interactions between participants, eye gaze and attention, facial
expressions, and temporal relationships between behavior and environmental events, and laboratory models are often capable of integrating these data
with corresponding physiological indices. In addition, accelerometers, telemetric devices, and actigraphs can be used to assess numerous behavior patterns, such as movement during sleep and gait and balance during ambulation, and large-scale movement, such as migration patterns. The enhanced
mobility and durability of many such instruments make their exportation
outside the laboratory and remote use more accessible than ever before.
In sum, there would seem to be no legitimate reason a science of behavior
would not make more use of a growing technology that renders direct measurement of behavior more accessible than ever before.
Of course, technological advances do not, in and of themselves, make
for better science. As in other pursuits, it is the tool-to-task fit that seems
most relevant. By today's standards, the operant equipment fashioned and
used by Ferster and Skinner (1957) seems quite pedestrian. It was, however,
more than adequate to the task of revealing the intricate and sensitive
dance between an operant class and its environmental determinants. Mere
demonstrations that consequences influenced behavior were not original;
Thorndike (1911) had demonstrated as much. The advent of different
kinds of behavior–consequence arrangements, though, allowed for a more
sensitive evaluation of these parameters, although little would have come
from this early experimentation if response patterns and rates had been
uniform across different schedules. The fact that this did not happen
lent immediate credibility to the research program begun by Ferster and
Skinner and advanced by other operant researchers for the past half
century.

Conclusion
The research that led to the publication of Schedules of Reinforcement
(Ferster & Skinner, 1957) served to both justify and inspire an empirical
science of behavior whose functional units consisted of behavior–environment relationships. The myriad ways in which behavior could be
shown to operate on the environment offered a large tapestry for those
wishing to explore the many shades and colors of the law of effect. In
following this behavior–consequence interplay over long experimental
sessions and at the level of individual organisms, the experimental analysis
of behavior distanced itself from the large-group designs and statistical
inference that had come to dominate psychological research. A perusal of
Haggbloom et al.'s (2002) list of eminent figures in psychology reveals that
Skinner is joined by Piaget (2nd on the list behind Skinner) and Pavlov
(24th), both of whose fundamental contributions are well known even by
nonpsychologists. What is perhaps unappreciated by behavioral scientists,
however, is that the groundbreaking work of both Piaget and Pavlov resulted
from data collected and analyzed at the individual level, with little regard
for sample size and statistical inference (Morgan, 1998). That Skinner, Piaget,
and Pavlov, clearly not kindred souls with respect to much of their scientific
epistemology, found common ground in the study of the individual calls
into question conventional methods of psychological science. Meehl (1978)
may have expressed it most cogently 30 years ago:
I suggest to you that Sir Ronald has befuddled us, mesmerized
us, and led us down the primrose path. I believe that the almost
universal reliance on merely refuting the null hypothesis as
the standard method for corroborating substantive theories
in the soft areas is a terrible mistake, is basically unsound,
poor scientific strategy, and one of the worst things that ever
happened in the history of psychology. (p. 817)
As mentioned previously, Schedules of Reinforcement was an unorthodox
publication in part because the primary data presented were real-time
cumulative records of response patterns, often emitted over prolonged
experimental sessions. These displays were, in many respects, antithetical to
the manner in which behavioral data were being treated within psychology
by the 1950s. Consequently, some justification for this deviation in method
may have seemed necessary, and Skinner (1958) offered the following
rationale shortly after SOR's publication:

Most of what we know about the effects of complex schedules
of reinforcement has been learned in a series of discoveries no
one of which could have been proved to the satisfaction of a
student in Statistics A. ... The curves we get cannot be averaged
or otherwise smoothed without destroying properties which
we know to be of first importance. These points are hard to
make. The seasoned experimenter can shrug off the protests
of statisticians, but the young psychologist should be prepared
to feel guilty, or at least stripped of the prestige conferred
upon him by statistical practices, in embarking upon research
of this sort. . . . Certain people (among them psychologists
who should know better) have claimed to be able to say how
the scientific mind works. They have set up normative rules
of scientific conduct. The first step for anyone interested in
studying reinforcement is to challenge that claim. Until a great
deal more is known about thinking, scientific or otherwise, a
sensible man will not abandon common sense. Ferster and I
were impressed by the wisdom of this course of action when,
in writing our book, we reconstructed our own scientific
behavior. At one time we intended (though, alas, we changed
our minds) to express the point in this dedication: "To the
mathematicians, statisticians, and scientific methodologists
with whose help this book would never have been written."
(p. 99)

References
ALLISON, J. (1983). Behavioral economics. New York: Praeger.
AMERICAN PSYCHOLOGICAL ASSOCIATION. (1958). American Psychological
Association: Distinguished scientific contribution awards. American
Psychologist, 13, 729–738.
AMERICAN PSYCHOLOGICAL ASSOCIATION. (1969). National medal of science
award. American Psychologist, 24, 468.
ANDERSON, K. G., VELKEY, A. J., & WOOLVERTON, W. L. (2002). The generalized
matching law as a predictor of choice between cocaine and food in rhesus
monkeys. Psychopharmacology, 163, 319–326.
BAER, D. M., OSNES, P. G., & STOKES, T. F. (1983). Training generalized
correspondence between verbal behavior at school and nonverbal
behavior at home. Education and Treatment of Children, 6, 379–388.
BARRETT, J. E. (2002). The emergence of behavioral pharmacology. Molecular
Interventions, 2, 470–475.
BAUM, W. M. (1973). The correlation-based law of effect. Journal of the
Experimental Analysis of Behavior, 20, 137–153.
BAUM, W. M. (1975). Time allocation and human vigilance. Journal of the
Experimental Analysis of Behavior, 23, 45–53.
BAUM, W. M. (2004). Molar and molecular views of choice. Behavioural
Processes, 66, 349–359.
BAUMEISTER, R. F., VOHS, K. D., & FUNDER, D. C. (2007). Psychology as
the science of self-reports and finger movements: Whatever happened
to actual behavior? Perspectives on Psychological Science, 2, 396–403.
BICKEL, W. K., & MARSCH, L. A. (2001). Toward a behavioral economic
understanding of drug dependence: Delay discounting processes.
Addiction, 96, 73–86.
BICKEL, W. K., ODUM, A. L., & MADDEN, G. J. (1999). Impulsivity and cigarette
smoking: Delay discounting in current, never, and ex-smokers.
Psychopharmacology, 146, 447–454.
BLANTON, H., & JACCARD, J. (2006). Arbitrary metrics in psychology.
American Psychologist, 61, 27–41.
BLOUGH, D. S. (1958). A method for obtaining psychophysical thresholds from
the pigeon. Journal of the Experimental Analysis of Behavior, 1, 31–43.
BLOUGH, D. S. (1959). Generalization and preference on a stimulus-intensity
continuum. Journal of the Experimental Analysis of Behavior, 2, 307–317.
BLOUGH, D. S. (1971). The visual acuity of the pigeon for distant targets.
Journal of the Experimental Analysis of Behavior, 15, 57.
BORRERO, J. C., CRISOLO, S. S., TU, Q., RIELAND, W. A., ROSS, N. A., FRANCISCO, M. T.,
& YAMAMOTO, K. Y. (2007). An application of the matching law to social
dynamics. Journal of Applied Behavior Analysis, 40, 589–601.
BRADSHAW, C. M., & SZABADI, E. (1988). Quantitative analysis of human
operant behavior. In G. Davey & C. Cullen (Eds.), Human operant conditioning
and behavior modification (pp. 225–259). Chichester, England: John Wiley
& Sons.
BRADY, J. V., PORTER, R. W., CONRAD, D. G., & MASON, J. W. (1958). Avoidance
behavior and the development of gastroduodenal ulcers. Journal of the
Experimental Analysis of Behavior, 1, 69–72.
BRANCH, M. N. (1984). Rate dependency, behavioral mechanisms, and behavioral
pharmacology. Journal of the Experimental Analysis of Behavior, 42,
511–522.
BRANCH, M. N. (2006). How research in behavioral pharmacology informs
behavioral science. Journal of the Experimental Analysis of Behavior, 85,
407–423.
BUSKIST, W. F., & MILLER, H. L. (1981). Concurrent operant performance in humans:
Matching when food is the reinforcer. Psychological Record, 31, 95–100.
BUTLER, J., & ROVEE-COLLIER, C. (1989). Contextual gating of memory retrieval.
Developmental Psychobiology, 22, 533–552.
CATANIA, A. C. (1970). Reinforcement schedules and psychophysical judgments:
A study of some temporal properties of behavior. In W. N. Schoenfeld
(Ed.), The theory of reinforcement schedules (pp. 1–42). New York:
Appleton-Century-Crofts.
CATANIA, A. C., SHIMOFF, E., & MATTHEWS, B. A. (1989). An experimental
analysis of rule-governed behavior. In S. C. Hayes (Ed.), Rule-governed
behavior: Cognition, contingencies, and instructional control (pp. 119–152).
New York: Plenum.
CLIFFE, M. J., & PARRY, S. J. (1980). Matching to reinforcer value: Human
concurrent variable-interval performance. Quarterly Journal of Experimental
Psychology, 32, 557–570.
COLPAERT, F. C. (1978). Discriminative stimulus properties of narcotic
analgesic drugs. Pharmacology, Biochemistry and Behavior, 9, 863–887.
CONGER, R., & KILLEEN, P. (1974). Use of concurrent operants in small group
research. Pacific Sociological Review, 17, 399–416.
COOPER, J. O., HERON, T. E., & HEWARD, W. L. (2007). Applied behavior
analysis (2nd ed.). Upper Saddle River, NJ: Pearson/Merrill Prentice-Hall.

CRITCHFIELD, T. S. (1993). Signal-detection properties of verbal self-reports.
Journal of the Experimental Analysis of Behavior, 60, 495–514.
DE VILLIERS, P. (1977). Choice in concurrent schedules and a quantitative
formulation of the law of effect. In W. K. Honig & J. E. R. Staddon (Eds.),
Handbook of operant behavior (pp. 233–287). Englewood Cliffs, NJ:
Prentice-Hall.
DEWS, P. B. (1958). Effects of chlorpromazine and promazine on performance
on a mixed schedule of reinforcement. Journal of the Experimental
Analysis of Behavior, 1, 73–82.
DEWS, P. B. (1970). Drugs in psychology. A commentary on Travis Thompson
and Charles R. Schuster's Behavioral pharmacology. Journal of the
Experimental Analysis of Behavior, 13, 395–406.
DEWS, P. B. (1987). An outsider on the inside. Journal of the Experimental
Analysis of Behavior, 48, 459–462.
DEWS, P. B., & MORSE, W. H. (1958). Some observations on an operant in human
subjects and its modification by dextro amphetamine. Journal of the
Experimental Analysis of Behavior, 1, 359–364.
DIXON, M. R., MARLEY, J., & JACOBS, E. A. (2003). Delay discounting by
pathological gamblers. Journal of Applied Behavior Analysis, 36, 449–458.
EMBRETSON, S. E. (2006). The continued search for nonarbitrary metrics in
psychology. American Psychologist, 61, 50–55.
EPSTEIN, L. H., LEDDY, J. J., TEMPLE, J. L., & FAITH, M. S. (2007). Food reinforcement
and eating: A multilevel analysis. Psychological Bulletin, 133(5), 884–906.
FAGEN, J. W., MORRONGIELLO, B. A., ROVEE-COLLIER, C. K., & GEKOSKI, M. J.
(1984). Expectancies and memory retrieval in three-month-old infants.
Child Development, 55, 936–943.
FAGEN, J. W., & OHR, P. S. (1985). Temperament and crying in response to
violation of a learned expectancy in early infancy. Infant Behavior and
Development, 8, 157–166.
FERSTER, C. B. (1978). Is operant conditioning getting bored with behavior?
A review of Honig and Staddon's Handbook of Operant Behavior. Journal
of the Experimental Analysis of Behavior, 29, 347–349.
FERSTER, C. B., & SKINNER, B. F. (1957). Schedules of reinforcement. New York:
Appleton-Century-Crofts.
GREEN, E. J. (2002). Reminiscences of a reformed pigeon pusher. Journal of
the Experimental Analysis of Behavior, 77, 379–380.
GREENWALD, A. G., NOSEK, B. A., & SRIRAM, N. (2006). Consequential
validity of the Implicit Association Test: Comment on Blanton and
Jaccard (2006). American Psychologist, 61, 56–61.
HAGGBLOOM, S. J., WARNICK, R., WARNICK, J. E., JONES, V. K., YARBROUGH, G. L.,
RUSSELL, T. M., ET AL. (2002). The 100 most eminent psychologists of the
20th century. Review of General Psychology, 6(2), 139–152.
HAYES, S. C., BROWNSTEIN, A. J., ZETTLE, R. D., ROSENFARB, I., & KORN, Z. (1986).
Rule-governed behavior and sensitivity to changing consequences of
responding. Journal of the Experimental Analysis of Behavior, 45, 237–256.
HAYES, S. C., & JU, W. (1998). The applied implications of rule-governed
behavior. In W. O'Donohue (Ed.), Learning and behavior therapy (pp.
374–391). Boston: Allyn & Bacon.
HERRNSTEIN, R. J. (1961). Relative and absolute strength of response as a
function of frequency of reinforcement. Journal of the Experimental
Analysis of Behavior, 4, 267–272.
HERRNSTEIN, R. J. (1970). On the law of effect. Journal of the Experimental
Analysis of Behavior, 13, 243–266.
HINELINE, P. N. (2001). Beyond the molar–molecular distinction: We need
multiscaled analyses. Journal of the Experimental Analysis of Behavior, 75,
342–347.
HODOS, W. (1961). Progressive ratio as a measure of reward strength. Science,
134, 943–944.
HOFFMEISTER, F. (1979). Progressive-ratio performance in the rhesus monkey
maintained by opiate infusions. Psychopharmacology, 62, 181–186.
HOUSTON, A. (1986). The matching law applies to wagtails' foraging in the
wild. Journal of the Experimental Analysis of Behavior, 45, 15–18.
HURSH, S. R. (1978). The economics of daily consumption controlling food- and
water-reinforced responding. Journal of the Experimental Analysis of
Behavior, 29, 475–491.
HURSH, S. R. (1984). Behavioral economics. Journal of the Experimental
Analysis of Behavior, 42, 435–452.
HURSH, S. R., & SILBERBERG, A. (2008). Economic demand and essential
value. Psychological Review, 115, 186–198.
ISABELLA, R. A., & BELSKY, J. (1991). Interactional synchrony and the origins
of infant–mother attachment. Child Development, 62, 373–384.
KATZ, J. L. (1990). Models of relative reinforcing efficacy of drugs and their
predictive utility. Behavioural Pharmacology, 1, 283–301.
KAZDIN, A. E. (2006). Arbitrary metrics: Implications for identifying
evidence-based treatments. American Psychologist, 61, 42–49.
KILLEEN, P. R. (1994). Mathematical principles of reinforcement. Behavioral
and Brain Sciences, 17, 105–172.
LEA, S. E. G. (1978). The psychology and economics of demand. Psychological
Bulletin, 85, 441–446.
LEWIS, M., ALESSANDRI, S. M., & SULLIVAN, M. W. (1990). Violation of
expectancy, loss of control, and anger expressions in young infants.
Developmental Psychology, 26, 745–751.
LINDSLEY, O. R. (1957). Operant behavior during sleep: A measure of depth
of sleep. Science, 126, 1290–1291.
LINDSLEY, O. R., HOBIKA, J. H., & ETSTEN, R. E. (1961). Operant behavior
during anesthesia recovery: A continuous and objective measure.
Anesthesiology, 22, 937–946.
LOGUE, A. W. (1998). Self-control. In W. O'Donohue (Ed.), Learning and
behavior therapy (pp. 252–273). Boston: Allyn & Bacon.
LOH, E. A., & ROBERTS, D. C. S. (1990). Break-points on a progressive-ratio
schedule reinforced by intravenous cocaine increase following depletion
of forebrain serotonin. Psychopharmacology, 101, 262–266.
MALOTT, R. W., & CUMMING, W. W. (1964). Schedules of interresponse time
reinforcement. Psychological Record, 14, 221–252.
MATTHEWS, B. A., SHIMOFF, E., & CATANIA, A. C. (1987). Saying and doing: A
contingency-space analysis. Journal of Applied Behavior Analysis, 20,
69–74.
MATTHEWS, B. A., SHIMOFF, E., CATANIA, A. C., & SAGVOLDEN, T. (1977).
Uninstructed human responding: Sensitivity to ratio and interval
contingencies. Journal of the Experimental Analysis of Behavior, 27, 453–457.
MAZUR, J. E. (2001). Hyperbolic value addition and general models of animal
choice. Psychological Review, 108, 96–112.
MCDOWELL, J. J. (1982). The importance of Herrnstein's mathematical
statement of the law of effect for behavior therapy. American Psychologist,
37, 771–779.
MEEHL, P. E. (1978). Theoretical risks and tabular asterisks: Sir Karl, Sir
Ronald, and the slow progress of soft psychology. Journal of Consulting
and Clinical Psychology, 46, 806–834.
MILLAR, W. S., & WEIR, C. G. (1992). Relations between habituation and
contingency learning in 5- to 12-month-old infants. Cahiers de
Psychologie Cognitive/European Bulletin of Cognitive Psychology, 12,
209–222.
MISCHEL, W., & GRUSEC, J. (1967). Waiting for rewards and punishments:
Effects of time and probability on choice. Journal of Personality and
Social Psychology, 5, 24–31.
MISCHEL, W., SHODA, Y., & RODRIGUEZ, M. L. (1989). Delay of gratification in
children. Science, 244, 933–938.
MORGAN, B. J., & LINDSLEY, O. R. (1966). Operant preference for stereophonic
music over monophonic music. Journal of Music Therapy, 3, 135–143.
MORGAN, D. L. (1998). Selectionist thought and methodological orthodoxy in
psychological science. The Psychological Record, 48, 439–456.
NATHAN, P. E., & WALLACE, W. H. (1965). An operant behavioral measure of
TV commercial effectiveness. Journal of Advertising Research, 5,
13–20.
NEVIN, J. A. (2008). Control, prediction, order, and the joys of research.
Journal of the Experimental Analysis of Behavior, 89, 119–123.
NISBETT, R. E., & WILSON, T. D. (1977). Telling more than we can know:
Verbal reports on mental processes. Psychological Review, 84, 231–259.
PETRY, N. M., & CASARELLA, T. (1999). Excessive discounting of delayed rewards
in substance abusers with gambling problems. Drug and Alcohol
Dependence, 56, 25–32.
PIERCE, W. D., & EPLING, W. F. (1983). Choice, matching, and human behavior:
A review of the literature. The Behavior Analyst, 6, 57–76.
PREMACK, D., & BAHWELL, R. (1959). Operant-level lever pressing by a
monkey as a function of intertest interval. Journal of the Experimental
Analysis of Behavior, 2, 127–131.
PREMACK, D., & SCHAEFFER, R. W. (1963). Distributional properties of
operant-level locomotion in the rat. Journal of the Experimental Analysis
of Behavior, 6, 473–475.
REED, D. D., CRITCHFIELD, T. S., & MARTENS, B. K. (2006). The generalized
matching law in elite sport competition: Football play calling as operant
choice. Journal of Applied Behavior Analysis, 39, 281–297.
REED, P. (1999). Role of a stimulus filling an action-outcome delay in human
judgments of causal effectiveness. Journal of Experimental Psychology:
Animal Behavior Processes, 25, 92–102.
REED, P. (2001). Schedules of reinforcement as determinants of human
causality judgments and response rates. Journal of Experimental
Psychology: Animal Behavior Processes, 27, 187–195.
REED, P. (2003). Human causality judgments and response rates on DRL and
DRH schedules of reinforcement. Learning and Motivation, 31, 205–211.
ROMANOWICH, P., BOURRET, J., & VOLLMER, T. R. (2007). Further analysis of the
matching law to describe two- and three-point shot allocation by professional
basketball players. Journal of Applied Behavior Analysis, 40, 311–315.
ROVEE-COLLIER, C., GRIESLER, P. C., & EARLEY, L. A. (1985). Contextual
determinants of retrieval in three-month-old infants. Learning and
Motivation, 16, 139–157.
ROVEE-COLLIER, C., & SHYI, C.-W. G. (1992). A functional and cognitive
analysis of infant long-term retention. In M. L. Howe, C. J. Brainerd, &
V. F. Reyna (Eds.), Development of long-term retention (pp. 3–55). New York:
Springer-Verlag.
ROVEE-COLLIER, C. K., & SULLIVAN, M. W. (1980). Organization of infant
memory. Journal of Experimental Psychology: Human Learning and
Memory, 6, 798–807.
SCHLINGER, H. D., JR., & BLAKELY, E. (1987). Function-altering effects of
contingency-specifying stimuli. The Behavior Analyst, 10, 41–45.
SCHWARTZ, B. (1986). The battle for human nature: Science, morality and
modern life. New York: W. W. Norton & Company.
SCHWARTZ, B., & LACEY, H. (1982). Behaviorism, science, and human nature.
New York: W. W. Norton & Co.
SHETTLEWORTH, S., & NEVIN, J. A. (1965). Relative rate of response and
relative magnitude of reinforcement in multiple schedules. Journal of
the Experimental Analysis of Behavior, 8, 199–202.
SHIMP, C. P. (1976). Organization in memory and behavior. Journal of the
Experimental Analysis of Behavior, 26, 113–130.
SHIMP, C. P. (1984). Cognition, behavior, and the experimental analysis
of behavior. Journal of the Experimental Analysis of Behavior, 42,
407–420.
SHULL, R. L. (1991). Mathematical descriptions of operant behavior: An
introduction. In I. H. Iversen & K. A. Lattal (Eds.), Experimental analysis
of behavior (Vol. 2, pp. 243–292). New York: Elsevier.
SKINNER, B. F. (1956). A case history in scientific method. American
Psychologist, 11, 221–233.
SKINNER, B. F. (1958). Reinforcement today. American Psychologist, 13, 94–99.
SKINNER, B. F. (1976). Farewell, My LOVELY! Journal of the Experimental
Analysis of Behavior, 25, 218.
SKINNER, B. F. (1987). Antecedents. Journal of the Experimental Analysis of
Behavior, 48, 447–448.
SPEAR, D. J., & KATZ, J. L. (1991). Cocaine and food as reinforcers: Effects of
reinforcer magnitude and response requirement under second-order
fixed-ratio and progressive-ratio schedules. Journal of the Experimental
Analysis of Behavior, 56, 261–275.
SPIGA, R., MAXWELL, R. S., MEISCH, R. A., & GRABOWSKI, J. (2005). Human
methadone self-administration and the generalized matching law.
Psychological Record, 55, 525–538.
STOOPS, W. W., LILE, J. A., ROBBINS, G., MARTIN, C. A., RUSH, C. R., & KELLY,
T. H. (2007). The reinforcing, subject-rated, performance, and
cardiovascular effects of d-amphetamine: Influence of sensation-seeking.
Addictive Behaviors, 32, 1177–1188.
TARABULSY, G. M., TESSIER, R., & KAPPAS, A. (1996). Contingency detection
and the contingent organization of behavior in interactions: Implications
for socioemotional development in infancy. Psychological Bulletin, 120,
25–41.
TERRACE, H. S. (1963). Discrimination learning with and without errors.
Journal of the Experimental Analysis of Behavior, 6, 1–27.
TERRACE, H. S. (1969). Extinction of a discriminative operant following
discrimination learning with and without errors. Journal of the
Experimental Analysis of Behavior, 12, 571–582.
THOMPSON, D. M. (1972). Effects of d-amphetamine on the breaking point
of progressive-ratio performance. Psychonomic Science, 29, 282–284.
THORNDIKE, E. L. (1911). Animal intelligence. New York: Macmillan.
VOLLMER, T. R., & BOURRET, J. (2000). An application of the matching law to
evaluate the allocation of two- and three-point shots by college basketball
players. Journal of Applied Behavior Analysis, 33, 137–150.
VUCHINICH, R. E., & TUCKER, J. A. (1996). Alcoholic relapse, life events, and
behavioral theories of choice: A prospective analysis. Experimental and
Clinical Psychopharmacology, 4, 19–28.
WHEELER, H. (Ed.). (1973). Beyond the punitive society. San Francisco: W. H.
Freeman.
WILLIAMS, D. C., & JOHNSTON, J. M. (1992). Continuous versus discrete
dimensions of reinforcement schedules: An integrative analysis. Journal
of the Experimental Analysis of Behavior, 58, 205–228.
WINTERS, L. C., & WALLACE, W. H. (1970). On operant conditioning techniques.
The Public Opinion Quarterly, 3, 39–45.
WULFERT, E., DOUGHER, M. J., & GREENWAY, D. E. (1991). Protocol analysis of
the correspondence of verbal behavior and equivalence class formation.
Journal of the Experimental Analysis of Behavior, 56, 489–504.
YOUNG, A. M., & HERLING, S. (1986). Drugs as reinforcers: Studies in
laboratory animals. In S. R. Goldberg & I. P. Stolerman (Eds.), Behavioral
analysis of drug dependence (pp. 9–67). Orlando, FL: Academic Press.
ZEILER, M. D. (1977). Schedules of reinforcement: The controlling variables.
In W. K. Honig & J. E. R. Staddon (Eds.), Handbook of operant behavior
(pp. 201–232). Englewood Cliffs, NJ: Prentice-Hall.