
LEARNING

Submitted to:
Prof. Ma. Corazon Cabigao Constantino
PSYC1013 Th 10:30am-1:30pm



Submitted by:
Apiladas, Jessa Marie
Bestal, Vanesa
Billones, Joanne
Camangian, Mae-Ann
Cortez, Shayna
Dela Cruz, Carlo
BSA I-1, GROUP 2




Merely repeating a response will not necessarily
produce learning. You could close your eyes and
swing a tennis racket hundreds of times without
learning anything about tennis.
What is learning? In ordinary language, the term is
applied to many different cases: the development of new
skills, the acquisition of new knowledge, and more. Although
most people think of learning as studying, a great deal of learning
nonetheless takes place outside the classroom. Psychologists
define it more broadly as the process of acquiring new and
relatively enduring information or behaviors. It is a relatively
permanent change in behavior due to experience. Notice that
this definition excludes temporary changes caused by
motivation, fatigue, maturation, disease, injury, or drugs. Each
of these can alter our behavior, but none qualifies as learning.
For one to learn something, one must first experience it,
whether directly through one's own experience or
indirectly through the experiences of others. Learning must
also produce some kind of change in the person's
behavior, whether covert (thoughts, ideas, attitudes,
emotions) or overt (responses and skills).
Types of Learning
Learning is traditionally divided into three categories: associative, nonassociative, and cognitive. More than one type
of learning can operate simultaneously in the same situation. Computer-assisted learning, a newer technique in which
learning is supported by technology, is sometimes considered an additional type.
Nonassociative learning
Nonassociative learning involves changes in the magnitude of responses to a single stimulus rather than the
formation of connections between stimuli. Two important types of nonassociative learning are habituation and
sensitization.
Habituation reduces our reactions to repeated experiences that have already been evaluated and found to
be unchanging and harmless. Sometimes, however, we habituate to things that we should, ideally, still be noticing. A major concern about
exposing children to violent media is the possibility that their emotional responses to violent images will habituate,
leading to a higher tolerance for violent behavior.
In contrast to habituation, sensitization increases our reactions to a wide range of stimuli following
exposure to one strong stimulus. Following an earthquake, people often experience exaggerated responses to
movement, light, or noise. If you are awakened by a loud crash, even if you figure out it's just your roommate
coming home late at night, it might be harder to get back to sleep due to your suddenly increased state of arousal.
Every little sound now seems magnified.

A) It is likely that the first time this
dog's owner attempted to dress it up,
the dog was a bit upset. After
repeated experiences of being
dressed up, however, the dog
probably has habituated, which
means that it has learned that no
harm results from the process. Now it
remains calm.

B) Following an earthquake, this little
boy is likely to be extra jumpy for a
while in response to other stimuli, like
loud noises, due to sensitization.
In general, habituation occurs in response to milder stimuli, whereas sensitization occurs in response to
stronger stimuli. Habituation ensures that we do not waste precious resources monitoring low-priority stimuli.
Sensitization is particularly useful in dangerous situations. After detecting one harmful stimulus, raising our overall
level of responsiveness should improve reaction time should other dangers arise.
Associative learning
Associative learning occurs when we form associations, or connections, among stimuli and/or behaviors. It
emphasizes the establishment and strengthening of the connection between a stimulus and a response and highlights the role of
reinforcement in the learning process. In other words, if A happens, then B is likely to follow. Psychologists who
study learning describe two types of associative learning: classical conditioning and operant conditioning.
Classical Conditioning
Classical conditioning is a kind of learning in which a neutral stimulus acquires the ability to produce a response
that was originally produced by a different stimulus. In classical conditioning, we form associations between pairs of
stimuli that occur sequentially in time.
Classical conditioning was pioneered by Ivan Pavlov, a Russian physiologist. Normally, when food is
placed in the mouth of an animal, the salivary glands automatically start releasing saliva to help with chewing and
digestion. This is a normal reflex: an unlearned, involuntary response that is not under personal control or choice, one
of many that occur in both animals and humans. The food causes a particular reaction, the salivation. But Pavlov
noticed that the dogs salivated before the food was in their mouths: the mere sight of food made them drool. In fact,
they even drooled at the sound of the experimenter's footsteps. This aroused Pavlov's curiosity and led him to
devise the experiments that established the concept of classical conditioning.
Elements of Classical Conditioning
Stimulus: any object, event, or experience that causes a response
Response: reaction of an organism
Unconditioned Stimulus (US): any stimulus that has the ability to elicit a response without previous
training.
Conditioned Stimulus (CS): a stimulus that initially does not elicit the response under study but
comes to do so after being paired with the unconditioned stimulus.
Unconditioned Response (UR): the original response to an unconditioned stimulus.
Conditioned Response (CR): a learned response to a conditioned stimulus.
Operant Conditioning
Operant conditioning is the type of learning in which the likelihood of a behavior is increased or decreased by the
use of reinforcement or punishment. It is the learning of voluntary behavior through the effects of pleasant and
unpleasant consequences to responses. In operant conditioning (or instrumental learning) we associate responses
with their consequences. The basic principle is simple: Acts that are reinforced tend to be repeated.
In operant conditioning, the learner actively operates on the environment. Thus, operant conditioning
refers mainly to learning voluntary responses. For example, pushing buttons on a TV remote control is a learned
operant response. Pushing a particular button is reinforced by gaining the result you desire, such as changing
channels or muting an obnoxious commercial.
Consequences
One key principle of operant conditioning is that consequences are contingent on behavior. Consequences have to be
immediate, or clearly linked to the behavior.
Reinforcement is a consequence that occurs after a behavior and increases the chance that the behavior
will occur again. For example, one of the main reasons you study hard for exams is to get good grades
(reinforcement). The consequence of getting a good grade increases the chances that you'll study hard for future
exams. There are two kinds of reinforcement, positive and negative, that increase the occurrence of behaviors.
Positive reinforcement refers to the presentation of a stimulus that increases the probability that a behavior
will occur again. A positive reinforcer is a stimulus that increases the likelihood that a response will occur again.
For example, if you ask a friend for money and get it, the money is a positive reinforcer that will increase the
chances of your asking again.
Negative reinforcement refers to the removal of an aversive (unpleasant) stimulus, which increases the
likelihood that the preceding response will occur again. If you have a headache and take an aspirin to get rid of it,
your response of taking an aspirin is negatively reinforced: taking the aspirin removes the headache (the aversive
stimulus), so you become more inclined to take an aspirin whenever you have a headache.
Punishment is a consequence that occurs after a behavior and decreases the chance that the behavior will
occur again. For example, parents may withdraw their kids' TV privileges or impose a curfew on them after the kids pick
a fight at school. The punishment aims to eliminate the kids' aggressive behavior.
Positive punishment (punishment by application) is the punishment of a response by the addition or
experiencing of an unpleasant stimulus. One example is spanking a child who engaged in a prohibited behavior.
Negative punishment (punishment by removal) is the punishment of a response by the removal of a
pleasurable stimulus, like removing a child's freedom to play outside after misbehaving.

Reinforcers and punishers may be distinguished as either primary or secondary. Primary reinforcers, such
as food, water, and caresses, are naturally satisfying. Primary punishers, such as pain and freezing temperatures,
are naturally unpleasant.
Secondary reinforcers, such as money, fast cars, and good grades, are satisfying because they've become
associated with primary reinforcers. Secondary punishers, such as failing grades and social disapproval, are
unpleasant because they've become associated with primary punishers. Secondary reinforcers and punishers are also
called conditioned reinforcers and punishers because they arise through classical conditioning.
To distinguish between primary and secondary reinforcers, people can ask: Would a newborn baby find
this stimulus satisfying? If the answer is yes, the reinforcer is primary. If the answer is no, it's secondary. The same
idea can be applied to punishers by asking whether a baby would find the stimulus unpleasant.
Schedules of Reinforcement
A reinforcement schedule is the pattern in which reinforcement is given over time. Reinforcement
schedules can be continuous or partial (intermittent). Continuous reinforcement means that every occurrence of the
operant response results in delivery of the reinforcer; reinforcement is provided every time the response occurs.
Suppose a dog pushes the TV remote under its owner's chair. If the owner finds this amusing and pats the dog every
time it does so, the owner is providing continuous reinforcement for the behavior. On the other
hand, partial (intermittent) reinforcement refers to a situation in which responding is reinforced only some of the time,
that is, on only some of the occasions on which the response occurs.
Reinforcement increases behavior; punishment decreases behavior.
Presentation of a stimulus (+):
- Positive reinforcement (increases behavior). Ex.: telling more jokes after people laugh at your first joke.
- Positive punishment (decreases behavior). Ex.: a speeder gets a traffic ticket and drives away more slowly.
Removal of a stimulus (-):
- Negative reinforcement (increases behavior). Ex.: learning to rub a sore muscle to relieve pain.
- Negative punishment (decreases behavior). Ex.: parents taking away your phone after you get failing grades.
Partial Reinforcement Schedules
There are four main types of partial schedules, which fall into two categories: ratio or interval. In a ratio
schedule, reinforcement happens after a certain number of responses. In an interval schedule, reinforcement
happens after a particular time interval.
In a fixed-ratio schedule, reinforcement happens after a set number of responses, such as when a car
salesman earns a bonus after every three cars he sells. In a variable-ratio schedule, reinforcement happens after a
particular average number of responses. For example, a person trying to win a game by getting heads on a coin toss
gets heads every two times, on average, that she tosses a penny. Sometimes she may toss a penny just once and get
heads, but other times she may have to toss the penny two, three, four, or more times before getting heads.
In a fixed-interval schedule, reinforcement happens after a set amount of time, such as when an attorney at
a law firm gets a bonus once a year. In a variable-interval schedule, reinforcement happens after a particular
average amount of time. For example, a boss who wants to keep her employees working productively might walk by
their workstations and check on them periodically, usually about once a day, but sometimes twice a day or sometimes
every other day. If an employee is slacking off, she reprimands him. Since the employees know there is a
variable interval between their boss's appearances, they must stay on task to avoid a reprimand.


Other Conditioning Concepts
Generalization: In operant conditioning, generalization means that an animal or person emits the same response to
similar stimuli. In classical conditioning, generalization is the tendency for a stimulus similar to the original
conditioned stimulus to elicit a response similar to the conditioned response.
A. Workers in a garment factory are
usually paid a certain amount for each
item of clothing completed, so they're
rewarded on a fixed-ratio schedule of
reinforcement.

B. Slot machines pay out on a certain
percentage of tries. But these machines
pay out at random. People feeding
coins to a machine are rewarded on a
variable-ratio reinforcement schedule.

C. The behavior of checking the mailbox
isn't rewarded until the mail has
actually been delivered. The first check
of the mailbox after this delivery will be
rewarded (actually getting some mail).
This is a fixed-interval schedule of
reinforcement.

D. When a wolf prowls through a meadow, all rodents retreat into hiding; so, if the wolf returns soon, he'll find no prey. Eventually, the rodents will come back, and then another hunting trip by the wolf will pay off: he'll find his dinner. Therefore, a return visit by the wolf will be rewarded only after some time has passed; this is an interval schedule. The rodents may sometimes return a little sooner, or a little later, and so this is a variable-interval schedule.
Discrimination: In operant conditioning, discrimination means that a response is emitted in the presence of a
stimulus that is reinforced and not in the presence of unreinforced stimuli. In classical conditioning, discrimination
is the tendency for some stimuli but not others to elicit a conditioned response.
Extinction: In operant conditioning, extinction refers to the reduction in an operant response when it is no longer
followed by the reinforcer. In classical conditioning, extinction refers to the reduction in a response when the
conditioned stimulus is no longer followed by the unconditioned stimulus.
Spontaneous Recovery: In operant conditioning, spontaneous recovery refers to a temporary recovery in the rate of
responding. In classical conditioning, spontaneous recovery refers to the temporary occurrence of the conditioned
response in the presence of the conditioned stimulus.
Classical Conditioning vs. Operant Conditioning
Classical Conditioning | Operant Conditioning
Known as Pavlovian conditioning | Known as Skinnerian conditioning
Developed in Russia | Developed in the U.S.
Also called respondent conditioning | Also called instrumental conditioning
Responses are involuntary and reflexive, elicited by a stimulus. | Responses are voluntary, emitted by the organism.
End result is the creation of a new response to a stimulus that did not normally produce that response. | End result is an increase in the rate of an already occurring response.
Antecedent stimuli are important in forming an association. | Consequences are important in forming an association.
The CS must occur immediately before the UCS. | Reinforcement should be immediate.
An expectancy develops for the UCS to follow the CS. | An expectancy develops for reinforcement to follow a correct response.
The CR decreases when the CS is repeatedly presented alone. | Responding decreases when reinforcement stops.
Natural predispositions constrain which stimuli and responses can easily be associated. | Organisms best learn behaviors similar to their natural behaviors; unnatural behaviors instinctively drift back toward natural ones.

Cognitive Learning
Cognitive learning involves mental processes such as attention and memory. It holds that learning can occur
through observation or imitation, and such learning may not involve any external rewards or require a person to
perform any observable behaviors.
A form of cognitive learning, observational learning, is learning through watching. Insight learning (also called
discovery learning) is another form of cognitive learning. It is a kind of learning in which the solution to a problem
comes suddenly as one discerns the pattern or interrelationship of one aspect of the situation with another. It is
based on insight and understanding drawn from previous knowledge.



Theorists on Learning
Ivan Pavlov was a Russian physiologist whose
research on the physiology of digestion led to the
development of the first experimental model of learning,
Classical Conditioning. Pavlov was born on September 14,
1849, at Ryazan, Russia. Because he was born into a large
family, poverty was always an issue. His father, Peter
Dmitrievich Pavlov, was the village priest and young Ivan
tended to the church property. Pavlov inherited many of his
father's characteristics including a strong will to succeed.
The oldest sibling, Ivan Pavlov was also among the
healthiest. He began school at the Ryazan Ecclesiastical
High School. Pavlov and his brothers eventually entered the
Ryazan Ecclesiastical Seminary. At the Seminary, he
planned to pursue a career in theology. However, after
being introduced to the works of Charles Darwin and Ivan
Sechenov, Pavlov decided to transfer to the University of
St. Petersburg to gain knowledge about natural science.
There, Pavlov gained great respect for a professor of
physiology, Cyon. Due to Cyon's enthusiasm for
physiology, he decided to become a physiologist during his
third year. At that point, Pavlov started work as an assistant
in a laboratory in which he earned 50 rubles a month.
Eventually, Pavlov's research on the physiology of
digestion would earn him the Nobel Prize. As a skilled
surgeon, he was able to implant small stomach pouches in
dogs to measure the secretion of gastric juices produced
when the dogs began to eat. With the help of his assistants,
he was able to condition the dogs to salivate at the click of a
metronome. As his work progressed, Pavlov established the
basis for conditioned reflexes and the field of classical
conditioning.
Contribution: Classical Conditioning
Explanation:
Classical Conditioning
Classical conditioning was pioneered
by Ivan Pavlov, a Russian physiologist
(a person who studies the workings of
the body). Studying the digestive
system in his dogs, Pavlov had built a
device that would accurately measure
the amount of saliva produced by the
dogs when they were fed a measured
amount of food.
Ivan Petrovich Pavlov

Born 26 September 1849
Ryazan, Russia
Died 27 February 1936 (aged 86)
Leningrad, Soviet Union
Residence Russian Empire, Soviet
Union
Nationality Russian, Soviet
Fields Physiologist, physician
Institutions Military Medical Academy
Alma mater Saint Petersburg University
Known for Classical conditioning
Notable
awards
Nobel Prize in Physiology or
Medicine (1904)
Procedure:
Pavlovs Explanation
After Pavlov observed that food made
his dogs salivate, he began his classic
experiments. To begin, he rang a bell.
At first, the bell was a neutral stimulus
(the dogs did not respond to it).
Immediately after Pavlov rang the bell,
he placed meat powder on the dogs' tongues, which caused reflex salivation.
This sequence was repeated many
times: bell, meat powder, salivation; bell, meat powder, salivation. Eventually (as conditioning took
place), the dogs began to salivate when they heard the bell. By association, the bell, which before had no
effect, began to evoke the same response that food did. This was shown by sometimes ringing the bell
alone. Then the dog salivated, even though no food had been placed in its mouth.
Conclusion:
Through classical conditioning, a neutral stimulus that originally has no effect on an individual can come to elicit
responses that are normally brought about by other stimuli; thus, reflexes can be learned.
Application:
Pavlov believed that animals and people evolved the capacity for classical conditioning because it had an
adaptive value. Adaptive value refers to the usefulness of certain abilities or traits that have evolved in
animals and humans and tend to increase their chances of survival, such as finding food, acquiring mates,
and avoiding pain and injury.
Conditioned Taste Aversion refers to associating a particular sensory cue (smell, taste, sound, or
sight) with getting sick and thereafter avoiding that particular sensory cue in the future. For example, if
you have eaten something and gotten sick after taking a thrill ride, you may avoid the smell or taste of
that particular food. Similarly, people who get sick from drinking too much of a particular alcoholic drink
avoid that drink for a long period of time. Taste-aversion learning may also warn us away from eating
poisonous plants that cause illness or even death, such as eating certain varieties of mushrooms. All these
examples of taste-aversion learning show the adaptive value of classical conditioning, which is to keep us
away from potentially unpleasant or dangerous situations, such as taking thrill rides, overdrinking, or
eating poisonous plants.
Conditioned Emotional Response refers to feeling some positive or negative emotion, such as
happiness, fear, or anxiety, when experiencing a stimulus that initially accompanied a pleasant or painful
event. Conditioned emotional responses can have survival value, such as learning to fear and avoid
stimuli that signal dangerous situations, like the sound of a rattlesnake or wail of a siren. Conditioned
emotional responses can also signal pleasant situations. For example, many couples have a special song
that becomes emotionally associated with their relationship. When this song is heard by one in the
absence of the other, it can elicit strong emotional and romantic feelings. Thus, different kinds of stimuli
can be classically conditioned to elicit strong conditioned emotional responses.
psychology.about.com/od/classicalconditioning/a/pavlovs-dogs.htm
Plotnik, Rod and Haig Kouyoumdjian. Introduction to Psychology, Ninth Edition. California,
USA: Wadsworth. 2011.
Hermann Ebbinghaus was a well-known German
psychologist. He was the pioneer of the experimental study of
memory and the forgetting curve. Born on January 24th, 1850,
in Barmen, Germany, he was the son of a rich merchant. He
acquired his early education at the town gymnasium and then
attended the University of Bonn in 1867 at the age of 17. He wasn't
able to continue pursuing philosophy as a degree because
the Franco-Prussian War broke out. He served in the Prussian army
during this war.
After serving for a brief time span in the army, he
completed his thesis on Philosophy of The Unconscious. He
acquired his doctorate at the age of 23 on August 16th, 1873.
After the completion of his PhD he started tutoring students in
England and France to earn his living. In 1886, he established
and opened an experimental psychology laboratory at the
University of Berlin for purposes of psychological research
and study. In the years following, Ebbinghaus co-founded the
Zeitschrift für Psychologie und Physiologie der Sinnesorgane
(Journal of Psychology and Physiology of the Sense Organs), a
literary establishment often credited with the international
advancement of psychological study.
Ebbinghaus also pioneered sentence-completion
exercises, which he developed to gauge the mental
abilities of schoolchildren in sentence structuring. He also
described an optical illusion that occurs due to relative
size perception, a concept used in studies in
cognitive psychology. Ebbinghaus was an accomplished
psychologist who laid firm foundations for intelligence testing
through his groundbreaking research on memory. In 1909,
Ebbinghaus succumbed to pneumonia, dying in Halle at the
age of 59 on February 26th, 1909.
Contribution: Memory and Forgetting Curve
Explanation:
When Hermann Ebbinghaus began to study human memory, the study of higher psychological processes
through introspective self-observation dominated the field. Precise scientific study was
essentially limited to tests of physiological processes such as reaction time and sensory perception.
Ebbinghaus's systematic and careful approach to the study of memory changed this paradigm by
demonstrating that higher cognitive processes could also be studied scientifically. The methodology he
developed for doing this brought the study of memory out of philosophy and into the realm of empirical
science. Some of his innovations, such as the use of the nonsense syllable, are still valuable tools in 21st-
century learning and memory research. Like his peers who used introspective methodology, Ebbinghaus
used his own experiences as a source of data. However, his approach to self-study was carefully
controlled; the conditions of data collection followed procedures that were commonly used in research in
the so-called hard sciences.
Hermann Ebbinghaus

Born January 24, 1850, Barmen, Germany
Died February 26, 1909 (age 59), Halle, Germany
Citizenship German
Fields Psychology
Institutions University of Berlin, University of Breslau, University of Halle
Known for Research on memory, laying the groundwork for intelligence testing

Procedure:
To test his own memory, he first created 2,300 nonsense syllables, each consisting of two consonants
separated by a vowel (e.g., nog, baf). These syllables were necessary for a controlled experiment because
they were presumably free of any previously learned associations. He learned lists of these syllables until
he had reached a pre-established criterion (perfect recall), and then recorded how many he was able to
retain after specific time intervals. He also noted how many trials were necessary for relearning after the
syllables had been forgotten. His first set of trials took place over the course of one year (1879-1880) and
he replicated the experiments three years later. Ebbinghaus's methodological innovations would have
been enough to secure a place for him in the history of psychology, but his research also made several
important contributions to the scientific knowledge base.

Conclusion:
Memory can be affected by the type of material to be learned. Individuals tend to memorize familiar and
meaningful material (words, events, objects, etc.) more easily than unfamiliar and meaningless material. Because
forgetting occurs soon after learning and slows down over time, repetition is a key to memorization.
Application:
The curve can be used to predict memory efficiency and to guide memory techniques.
Ebbinghaus's empirical findings have important consequences for the development of pedagogical
practice and also provide a theoretical basis to guide the study of individual differences in human
intelligence.
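The forgetting curve is often summarized in modern treatments with a simple exponential approximation; the form below is an illustrative modern convention rather than necessarily the expression Ebbinghaus himself fitted to his data:

R = e^{-t/S}

Here R is the proportion of material retained, t is the time elapsed since learning, and S is the relative strength of the memory. Review and repetition increase S, which is why spaced rehearsal flattens the curve and improves retention.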






http://www.famouspsychologists.org/hermann-ebbinghaus/
www.intelltheory.com/ebbinghaus.shtml_3/
Edward Lee Thorndike was a son of a Methodist
minister in Lowell, Massachusetts. He became an American
pioneer in comparative psychology and was a typical late
19th century American scientist. He grew up in an age when
scientific psychology was establishing its place in academic
institutions and attracting college graduates, Thorndike being
one of them. He became interested in the field of psychology
after reading William James's "Principles of Psychology" and,
after graduating from Wesleyan University, enrolled at
Harvard in order to study under James. His research interest
was with children, but after his initial study of "mind reading,"
they became unavailable for further study. So, he developed
projects that examined learning in animals to satisfy
requirements for his courses and degree. He completed a
study of maze learning in chicks, but for personal reasons,
Thorndike did not complete his education at Harvard. Cattell
invited him to go to Columbia University where he
continued his animal research. He switched from chicks to
cats and dogs, and made good use of his own specially designed
"puzzle boxes." In 1898, he was awarded the doctorate for
his thesis, "Animal Intelligence: An Experimental Study of
the Associative Processes in Animals", in which he
concluded that an experimental approach is the only way to
understand learning and established his famous "Law of
Effect".
Upon graduation, Thorndike returned to his initial
interest, Educational Psychology. In 1899, after a year of
unhappy, initial employment at the College for Women of
Case Western Reserve in Cleveland, Ohio, he became an
instructor in psychology at Teachers College at Columbia
University, where he remained for the rest of his career,
studying human learning, education, and mental testing.
Edward L. Thorndike's pioneering investigations in the fields of human and animal learning are
among the most influential in the history of psychology. In 1912, he was recognized for his
accomplishments and elected president of the American Psychological Association. In 1934, the
American Association for the Advancement of Science elected Thorndike as the only social scientist to
head this professional organization. Thorndike retired in 1939, but worked actively until his death ten
years later.
Contribution: Instrumental Conditioning, Law of Effect, and the Puzzle Box

Edward L. Thorndike

Born Edward Lee Thorndike, August 31, 1874, Williamsburg, Massachusetts, U.S.
Died August 9, 1949 (aged 74), Montrose, New York
Nationality American
Education Roxbury Latin, Wesleyan, Harvard, Columbia
Known for Father of modern educational psychology, Law of Effect
Title Psychologist, Professor at Teachers College, Columbia University

Explanation:
Thorndike formulated the laws of learning, namely: the law of readiness, the law of exercise, and the law of effect.
The law of readiness means that an organism must be ready or prepared to learn; otherwise learning can be
difficult. The law of exercise is exemplified by the statement "CORRECT practice makes perfect"; this
principle states that we learn by doing and we forget by not doing. The law of effect, probably the most
important and best known, states that the consequences of a response can either strengthen or weaken the
neural connection between stimulus and response: learning is strengthened each time a response is followed by a
satisfying state of affairs. The law of effect, also called connectionism, refers to the association between sense
impressions and impulses to action. Thorndike believed that there is a neural bond between the stimulus (S) and
the response (R). He pioneered the study of instrumental conditioning, in which learning proceeds
through trial and error.
Procedure:
Thorndike's method was to set up a problem for an
animal to solve. In his best-known trial-and-error
experiment, he placed a hungry cat inside a box with a
latched door. The cat could open the door, and
escape from the box, only by performing some
simple action such as pulling a loop of wire or
pressing a lever; once outside the box, the cat was
rewarded with a small portion of food. Then the cat
was placed back into the box for another trial so that
the procedure could be repeated over and over until
the task of escaping the box was mastered.
On the first trial, the cats had no notion of how to escape, and so they meowed loudly and
clawed and bit at their surroundings. This continued for several minutes until finally, purely by accident,
the animal hit upon the correct response. Subsequent trials brought gradual improvement, and the animal
took less and less time to produce the response that unlocked the door. By the time the training sessions
were completed, the cats' behavior was almost unrecognizable from what it had been at the start. When
placed in the box, they immediately approached the wire loop or the lever, yanked it or pressed it with
business-like dispatch, and hurried through the open door to enjoy the well-deserved reward.
Thorndike explained that, with repeated trials, the cat spends more time around the latch, which
increases the chances of finding and hitting the latch and escaping more quickly to get the food. To explain
why a cat's random trial-and-error behaviors gradually turned into efficient, goal-directed behaviors,
Thorndike formulated the law of effect. Cats solved the problem not by a flash of insight but by a gradual
process of trial and error. Nevertheless, here was a clear example of learning. Its characteristic feature
was that the animal's actions were critical (instrumental) in producing a certain outcome. It displayed
contingency between a preceding stimulus, a pattern of behavior (or response) and a subsequent state of
the environment (the effect or outcome). In this respect, instrumental learning is fundamentally different
from classical conditioning, in which the animal's response plays no role in determining the outcome.
Conclusion:
We learn by trial and error, or by rewards and punishment. These elements motivate us to perform certain
actions that form behavior. Also, problems are often solved not in an instant but through a gradual process of
trial and error until the correct solution is found.
Application:
Thorndike's findings were significant because they suggested that the law of effect was a basic law of
learning and provided an objective procedure for studying it. Thorndike emphasized studying the
consequences of goal-directed behavior.
www.muskingum.edu/~psych/psycweb/history/thorndike.htm
Plotnik, Rod and Haig Kouyoumdjian. Introduction to Psychology, Ninth Edition. California,
USA: Wadsworth. 2011.
Thorndike's puzzle box
Burrhus Frederic Skinner was born and raised
in Susquehanna, Pennsylvania. He had one brother,
two years younger, who died at the age
of 16 from a cerebral aneurysm. Skinner enjoyed
working with his hands; many of his childhood days
were spent building things such as roller scooters,
steerable wagons, and sleds. And he invented things.
For example, he and a friend gathered elderberries to
sell them door to door. He constructed a flotation
system which separated ripe from green berries. And,
he even worked on the idea of a perpetual motion
machine. Skinner went through all twelve grades in
one school building, graduating with only eight other
students.
He developed an interest in art and literature through
drawing in the younger grades and later reading
Shakespeare. He earned his BA in English and hoped
to be a writer. However, this profession did not work
out, and at the age of 24, he applied and was accepted
to the psychology graduate program at Harvard. Here
he happened to meet William Crozier in the
physiology department. Young Skinner was taken by
Crozier, an ardent advocate for animal studies and
behavioral measures, and began to tailor his studies
according to Crozier's highly functional, behaviorist
framework. Working across disciplines, he integrated
methods and theories from psychology and physiology
and developed new ways of recording and analyzing
data. Skinner also published books for general
readers, including Beyond Freedom and Dignity (1971)
and Enjoying Old Age (1983). He died on August 18, 1990.
Contribution: Operant Conditioning and the Skinner Box

B. F. Skinner

Born Burrhus Frederic Skinner, March 20, 1904, Susquehanna, Pennsylvania
Died August 18, 1990 (aged 86), Cambridge, Massachusetts
Nationality American
Fields Psychology, linguistics, philosophy
Institutions University of Minnesota, Indiana University, Harvard University
Known for Operant conditioning
Awards National Medal of Science (1968)

Explanation:
Operant Conditioning
Skinner believed that we do have such a thing as a mind, but that it is simply more productive to study
observable behavior rather than internal mental events. Skinner believed that the best way to understand
behavior is to look at the causes of an action and its consequences. He called this approach operant
conditioning. Skinner's theory of operant conditioning was based on the work of Thorndike. The heart of
operant conditioning is the effect of consequences on behavior. Thinking back to the section on classical
conditioning, learning a reflex really depends
on what comes before the response: the unconditioned stimulus and what will become the conditioned
stimulus. These two stimuli are the antecedent stimuli. But in operant conditioning, learning depends on
what happens after the response: the consequence. In a way, operant conditioning could be summed up as
this: If I do this, what's in it for me?
Procedure:
Skinner Box Experiment
One of his ground-breaking inventions was the operant conditioning chamber, which is also called
the Skinner box. The Skinner box contained a lever and a food tray; a rat placed inside could feed itself by pressing
the lever. Each time a rat was put into the box, it would run and sniff around, eventually
identifying the correct spot, pressing the lever, and getting a food pellet. After the first successful
attempt, the rat got used to the box and made many more successful attempts, getting food as a reward,
until its hunger was satisfied. Skinner formulated the principle of reinforcement through this experiment.
His studies indicated and confirmed his belief that human free will is not a phenomenal reality but
an indicator of the results produced by the actions performed.

Conclusion: It is through reinforcement that learning occurs. Behavior can be changed through
reinforcement or punishment. If reinforced, the behavior will most likely occur and continue; if
punished, the chances that the behavior will occur again decrease.
Application:
The therapy technique of behavior modification resulted from Skinners theories on reinforcement and
behaviorism that can be applied using these points:
State goals (aims for the study) - That is, clarify exactly what changes are to be brought
about.
Monitor behavior (log conditions) - Keep track of behavior so that one can see whether the
desired effects are occurring.
Reinforce desired behavior (give reward for proper behavior)
Reduce incentives to perform undesirable behavior


Plotnik, Rod and Haig Kouyoumdjian. Introduction to Psychology, Ninth Edition. California,
USA: Wadsworth. 2011.
John Broadus Watson was born to a poor family in
Greenville, South Carolina; his mother was very religious
while his father did not follow the same rules of living as his
mother. He drank, had extra-marital affairs, and left in 1891.
John rebelled and turned to violence but was able to turn his
life back around with the help of his teacher, Gordon Moore,
at Furman University. With Moore's help, he was able to
succeed and moved on to the University of Chicago. He
became interested in the field of comparative psychology and
studying animals. Eventually John married Mary Ickes from
the University of Chicago. Together they had two children,
Mary and John.
In 1903 he received his doctorate and became an
associate professor of psychology, and later on the director
of the psychological laboratory at Johns Hopkins University.
Watson's career in academic psychology was cut short by an
affair with one of his research assistants, Rosalie Rayner,
and subsequent divorce from Mary Ickes. Watson married
Rayner soon after the divorce, and the much publicized
scandal led administrative officials at Hopkins to ask for
Watson's resignation. They had two more children, James
and William. John focused much of his study of behaviorism
on his children. After Rosalie's death, his already poor
relationships with his children grew worse and he became a
recluse. After leaving Johns Hopkins University, Watson
went into the advertising business. He wanted to use his
scientific theories of behaviorism and the emotions of fear,
rage, and love to improve the effects of advertising on the
"animal" or what we know as consumers. He lived on a farm
in Connecticut until his death in 1958.
Contribution: Behaviorism, Little Albert Experiment
Procedure:
One of Watson's most famous experiments was the Little Albert experiment, which explored classical
conditioning using a nine-month-old baby boy.
In the experiment, Watson
together with his assistant Rosalie
Rayner demonstrated that Little Albert
could be conditioned to fear
something, like a white rat, when no
such fear existed initially. Watson
combined a loud noise with the
appearance of the rat, in order to create
fear in the baby. The experiment was
highly controversial and would likely
be considered unethical by today's
research standards.
John B. Watson

Born John Broadus Watson
January 9, 1878
Travelers Rest, South
Carolina
Died September 25, 1958 (aged 80)
New York City, New York
Nationality American
Fields Psychology
Known for Founding Behaviorism
Spouse: Mary Ickes Watson
Rosalie Rayner
After only seven pairings of the
noise with the rat, every time the baby
saw the rat, he started to cry. In
conditioning terms, the loud noise was the
UCS, the fear of the noise was the UCR, the white
rat became the CS, and the fear of the rat
(the phobia) was the CR. The learning of
phobias is a very good example of a
certain type of classical conditioning, the
conditioned emotional response.
Conditioned emotional responses are
some of the easiest forms of classical
conditioning to accomplish and our lives
are full of them.
Conclusion:
The Little Albert experiment demonstrated that fear can be produced through classical conditioning. Fear is learned when we associate a frightening experience with an object that was not initially fearful.
It's easy to think of fears people might have that are conditioned or learned: a child's fear of a dentist's chair, a puppy's fear of a rolled newspaper, or the fear of dogs that is often shown by a person who has been attacked by a dog in the past.
Application:
Behaviorism remains a popular approach for animal training. Some mental health professionals use
behaviorist principles to condition away phobias and fears. In addition, advertisers frequently use
behaviorist conditioning to encourage consumers to purchase products.






www.muskingum.edu/~psych/psycweb/history/watson.htm
Plotnik, Rod and Haig Kouyoumdjian. Introduction to Psychology, Ninth Edition. California,
USA: Wadsworth. 2011.
Edward Chace Tolman was an American psychologist
who made significant contributions to the studies of learning
and motivation. Considered a cognitive behaviorist today, he
developed his own behaviorism when the likes of Watson were
dominating the field. Tolman was born in Newton,
Massachusetts in 1886. He remained there as he grew up and
was educated in the Newton Public Schools. He lived in a
family of "upper middle" socioeconomic status and had a father
who was the president of a manufacturing company. His
brother, Richard, was five years older than he was and both he
and Richard were expected to go into the family business.
He and his brother decided to seek academic careers,
against their family's wishes. Both went on to attend the
Massachusetts Institute of Technology. Richard pursued a
career in academics, ultimately becoming a world-renowned
theoretical chemist and physicist, and Edward initially sought a
bachelor's degree in electrochemistry. Tolman changed the
course of his career during his senior year after reading the
works of William James. He decided to become a philosopher.
After graduation in 1911, he attended summer school and took a
course in philosophy and psychology. He concluded that he
wasn't quite smart enough for philosophy and that psychology
was more to his liking.
That coming fall, Tolman enrolled at the Harvard
Graduate School as a philosophy and psychology graduate
student. At that time, the disciplines were a combined
department. A course in ethics, taught by Ralph Barton Perry,
as well as readings of McDougall, eventually led to his interest in motivation. After his first year as a
graduate student, he went to Giessen in Germany to study for his PhD examination in German (at that
time all PhD examinations were conducted in French, German, or Russian). It was in Germany where he
was introduced to Gestalt psychology.
Upon returning to Harvard from Germany, Tolman studied in the laboratory researching nonsense
syllable learning. His PhD dissertation was a study of retroactive inhibition. He received his doctorate in
1915. He later returned to Giessen to learn more about Gestalt psychology during the fall of 1923.
Tolman became an instructor at Northwestern University and taught for three years after receiving his
doctoral degree. He described himself as being self-conscious, inarticulate, and fearful of his classes. His
pacifist views led him to lose his job when, during World War I, he was called to the Dean for anti-war
statements reported in a pacifist student publication. Tolman went on to become an instructor at the
University of California in Berkeley in the fall of 1918 where he remained for the rest of his life.
Contribution: Purposive Behaviorism, Cognitive Learning

Edward Chace Tolman

Born April 14, 1886, West Newton, Massachusetts
Died November 19, 1959, Berkeley, California
Nationality American
Fields Psychology
Known for Cognitive Psychology, Purposive Behaviorism

Explanation:
Purposive behaviorism is a branch of psychology that was introduced by Edward C. Tolman. It combines
the objective study of behavior with consideration of the purpose or goal of behavior. Tolman thought
that learning developed from knowledge about the environment and how the organism relates to its
environment. Tolman's goal was to identify the complex cognitive mechanisms and purposes that guided
behavior. The main difference between behaviorism and Tolman's purposive behaviorism is that behavior
is goal oriented. Tolman argued that learning involves mental processes such as attention and memory,
may or may not involve external rewards, and is not based solely on rewards.
Procedure:
Tolman, one of the early cognitive psychologists, introduced this idea when doing an experiment
involving rats and mazes. In Tolman's experiment, a rat was placed in a cross-shaped maze and allowed to
explore it. After this initial exploration, the rat was placed at one arm of the cross and food was placed at
the next arm to the immediate right. The rat was conditioned to this layout and learned to turn right at the
intersection in order to get to the food. When placed at different arms of the cross maze however, the rat
still went in the correct direction to obtain the food because of the initial cognitive map it had created of
the maze. Rather than just deciding to turn right at the intersection no matter what, the rat was able to
determine the correct way to the food no matter where in the maze it was placed.
Tolman wondered what the rat had learned when it
quickly discovered how to go through the maze to get to the
food. Tolman believed that the rat had developed a
cognitive map of the maze, with knowledge of where the
food was located. He believed this experiment supported his
notion that such learning is rooted not in simple
stimulus-response connections but in sets established in the
nervous system that function like cognitive maps.
Tolman also assumed that these cognitive maps vary from
a narrow, strip-like variety to a broader, comprehensive
variety. His study further showed that the rats exhibited a
capacity for latent learning: they solved the problem even in
the absence of reinforcement, which could not be explained
by S-R representations.
Conclusion:
Learning involves mental processes and is not solely based on rewards or punishment. It may not be
instantly exhibited but is stored. Cognitive maps are important because they help us remember location
and direction.
Application:
In both animals and humans, a cognitive map is likely to show where they go and the routes they use. Asking
people to sketch a map of a location is a way to find out what its salient features are for them. A cognitive
map can show what is important, and by omission, reveal what is less important. This procedure could be
used by city planners or landscape architects who want to know more about how a space is seen or used.
Cognitive maps can provide insight into the worlds of those with sensory deficits and physical
handicaps. The maps of blind people make more use of sound and touch cues than do those of sighted
people. People in wheelchairs emphasize physical barriers in their maps, obstacles that are missing from
the maps of those able to move more freely.
http://www.lifecircles-inc.com/Learningtheories/behaviorism/Tolman.html
http://www.muskingum.edu/~psych/psycweb/history/tolman.htm
Del Rosario, Maria Theresa, et al. General Psychology. Malabon City, Metro Manila: Mutya
Publishing House. 2012.
Wolfgang Kohler was one of the founders of
Gestalt psychology along with Max Wertheimer and Kurt
Koffka. He is also famous for his description of insight
learning which he tested on animals, particularly
chimpanzees.
Kohler was born on January 21, 1887 in Reval,
Estonia. His family moved to Germany and settled in
Wolfenbüttel when he was six years old. Between 1905
and 1907, he attended the universities of Tubingen, Bonn,
and Berlin. In 1909, Kohler received his Ph.D. under Carl
Stumpf. During the same year, he began to work at the
Psychological Institute in Frankfurt am Main, where he met
Wertheimer and Koffka. He was appointed director of the
Anthropoid Research Station on Tenerife in the Canary
Islands. Remaining on the island during World War I, Kohler
began to study problem solving and general intelligence of
a group of African chimpanzees. In 1917, he published The
Mentality of Apes which summarized the results of his
insight studies. Upon his return to Germany, Kohler took
the position as director of the Psychological Institute at the
University of Berlin. During 1925-1926, he served as a
visiting professor at Clark University in the United States.
In 1934-1935, Kohler gave the William James
Memorial lecture at Harvard. He immigrated to the United
States in 1935 because of Nazi interference with his work.
From 1935 to 1955, he was a professor of psychology at
Swarthmore College. Kohler was appointed president of the American Psychological Association in
1959. In 1958, he had become a research professor at Dartmouth College, where he remained until his death
on June 11, 1967, in Enfield, New Hampshire.
Contribution: Insight Learning
Explanation:
Insight Learning is a kind of learning in which solution to a
problem comes suddenly as one discerns the pattern or
interrelationship of one aspect of the situation with another.
Wolfgang Köhler

Born January 21, 1887, Reval (now Tallinn), Governorate of Estonia, Russian Empire
Died June 11, 1967, Enfield, New Hampshire
Nationality German
Fields Psychology
Known for Gestalt psychology, Insight Learning

Procedure:
Kohler attempted to prove that animals arrive at a solution
through insight rather than trial and error. His first
experiments with dogs and cats involved food being placed
on the other side of a barrier. The dogs and cats went straight
toward the food instead of moving away from the goal to
circumvent the barrier, unlike the chimps that were presented with
this situation.
Kohler's experiments consisted of placing chimps
in an enclosed area and presenting them with a reward that
was out of reach, such as bananas. Kohler used four chimps in his experiments, Chica, Grande, Konsul,
and Sultan. In one experiment, Kohler placed bananas outside Sultan's cage and two bamboo sticks inside
his cage. Neither stick was long enough to reach the bananas so the only way to reach the bananas was to
put the sticks together. Kohler demonstrated to Sultan the solution by putting his fingers into the end of
one of the sticks. However, this did not help Sultan solve the problem. After some contemplation, Sultan
put the two sticks together and created a stick long enough to reach the bananas outside his cage.
Another study involved bananas suspended from the roof. The chimps first tried to knock them down by
using a stick. Then, the chimps learned to stack boxes on top of one another to climb up to the bananas.
Conclusion:
Through this experiment, it was found that insight learning is based on the animal perceiving the
solution; it is not dependent on rewards, and once a problem has been solved, it is easier to solve a similar
one.
Application:
Complex problems require higher learning and solutions are reached only by application of insight. All
new ideas and concepts, inventions and discoveries are the result of insightful learning. Teaching and
learning of mathematics and science demand higher intellectual exercises.
Learning by conditioning is common to all animals and human beings and is useful for early
education. But learning by insight is suitable only for intelligent creatures, both humans and animals, and is
useful for higher learning. It is a kind of learning done by observation, by perceiving relationships and
understanding the situation.











http://www.muskingum.edu/~psych/psycweb/history/kohler.htm
http://principlesoflearning.wordpress.com/dissertation/chapter-3-literature-review-2/the-
cognitive-perspective/insight-learning-wolfgang-kohler-1925/
Plotnik, Rod and Haig Kouyoumdjian. Introduction to Psychology, Ninth Edition. California,
USA: Wadsworth. 2011.
Robert Gagné was an educational psychologist best known
for his "Conditions of Learning," which identified the mental
conditions of learning and was published in 1965. He was born in
North Andover, Massachusetts in 1916 and died in 2002. He earned his
Ph.D. in psychology from Brown University in 1940. He went on
to work as a professor for Connecticut College, Penn State
University, and Florida State University. He also served as
Director of the U.S. Air Force Perceptual and Motor Skill
Laboratory, where he began developing the principles of his
learning theory. Gagné pioneered the science of instruction during
World War II. He went on to develop a series of studies and
works that simplified and explained what he and others believed
to be 'good instruction.' Gagné was also involved in applying
concepts of instructional theory to the design of computer-based
training and multimedia-based learning.
He is considered to be a major contributor to the
systematic approach of instructional design. Gagné's learning
theory is summarized as the Gagné assumption and consists of
five types of learning (each requiring a different type of
instruction) and nine events of instruction. He also identified a
hierarchy of eight conditions of learning.
Contribution: Conditions of Learning
Explanation:
Gagné's work is sometimes summarized as the Gagné assumption. The assumption is that different types
of learning exist, and that different instructional conditions are most likely to bring about these different
types of learning.
9 Events of Instruction



Robert M. Gagné

Born Robert Mills Gagné
August 21, 1916
Died April 28, 2002
Nationality American
Fields Psychologist
Known for Conditions of Learning
Educational Psychology
Categories of Learning

Hierarchy of Eight Conditions to Learning
1. Signal learning: the learner makes a general
response to a signal
2. Stimulus-response learning: the learner makes a
precise response to a signal
3. Chaining: the connection of a set of individual
stimulus-response connections in a sequence.
4. Verbal association: the learner makes associations
using verbal connections
5. Discrimination learning: the learner makes different
responses to different stimuli that are somewhat
alike
6. Concept learning: the learner develops the ability to
make a generalized response based on a class of
stimuli
7. Rule learning: a rule is a chain of concepts linked to
a demonstrated behavior
8. Problem solving: the learner discovers a
combination of previously learned rules and applies
them to solve a novel situation
Conclusion:
Learning is sequential and builds on prior knowledge. It is a process with different levels and stages that
requires different approaches. As seen in Gagné's framework, there is no such thing as a single type of
learning, because learning occurs in different aspects of behavior.
Application:
While Gagné's theory covers all aspects of learning, its focus is on intellectual skills. The
theory has been applied to the design of instruction in all domains. In its original formulation, special
attention was given to military training settings. Gagné also addressed the role of instructional technology in
learning.
Gagné's instructional theory is widely used in the design of instruction by instructional designers
in many settings, and its continuing influence in the field of educational technology can be seen in citations in
prominent journals in the field.






http://www2.rgu.ac.uk/celt/pgcerttlt/how/how4a.htm
Albert Bandura was born on December 4, 1925 in
Alberta, Canada. His parents were Polish wheat farmers. He
went to a small high school with only 20 students and 2
teachers. In 1949 Bandura received his B.A. from the
University of British Columbia. Bandura then went on to
the University of Iowa where he obtained his doctorate in
1952. Upon graduation Bandura did a clinical internship at
the Wichita Kansas Guidance Center. The following year,
in 1953, Bandura accepted a teaching position at Stanford
where he continues to teach today. While at the University
of Iowa Bandura's interests in learning and behaviorism
began to grow.
Bandura has done a great deal of work on social
learning throughout his career and is famous for his "Social
Learning Theory" which he has recently renamed, "Social
Cognitive Theory". Bandura is seen by many as a cognitive
psychologist because of his focus on motivational factors
and self-regulatory mechanisms that contribute to a person's
behavior, rather than just environmental factors. This focus
on cognition is what differentiates social cognitive theory
from Skinner's purely behavioristic viewpoint.
Albert Bandura focuses on the acquisition of
behaviors. He believes that people acquire behaviors
through the observation of others, and then imitate what
they have observed. Several studies involving television
commercials and videos containing violent scenes have
supported this theory of modeling.
In 1986 Bandura wrote Social Foundations of
Thought and Action which provides a framework of his
social cognitive theory. In addition he has written many articles and a total of nine books on various
topics in psychology. Bandura has made important contributions to the field of psychology, as reflected in
his many honors. In 1998 he received the Thorndike Award for Distinguished Contributions of Psychology to
Education from the American Psychological Association.
Contribution: Social Cognitive Theory
Explanation:
Social Cognitive Theory
Social cognitive theory is a view of learning that emphasizes the ability to learn by observing a model or
receiving instructions, without firsthand experience by the learner. Bandura has conducted many studies
involving observational learning, or modeling. The modeling process includes four steps:
Attention- In order for an individual to learn anything, he or she must pay attention to the features
of the modeled behavior. Many factors contribute to the amount of attention one pays to the
modeled activities, such as the characteristics of both the observer and the person being observed
and competing stimuli.
Memory (Retention) - If an individual is to be influenced by observing behaviors, he or she needs
to remember the activities that were modeled at one time or another. Imagery and language aid in
this process of retaining information. Humans store the behaviors they observe in the form of
mental images or verbal descriptions, and are then able to recall the image or description later to
reproduce the activity with their own behavior.
Imitation (Reproduction) - Reproduction involves converting symbolic representations into
appropriate actions. Behavioral reproduction is accomplished by organizing one's own responses
in accordance with the modeled pattern. A person's ability to reproduce a behavior improves with
practice.
Motivation - To imitate a behavior, the person must have some motivating factor behind it, such
as incentives that a person envisions. These imagined incentives act as reinforcers. Negative
reinforcers discourage the continuation of the modeled activity.

Albert Bandura
Born: December 4, 1925, Mundare, Alberta
Nationality: Canadian/American
Fields: Psychology; Philosophy of Action
Institutions: Stanford University
Alma mater: University of British Columbia; University of Iowa
Known for: Social Cognitive Theory; Observational Learning; Bobo doll experiment
Influenced: Cognitive Psychology, Social Psychology
The Bobo Doll Experiment
Procedure:
In one part of the room, preschool children were involved in their own art projects. In another part of the
room, an adult got up and, for the next 10 minutes, kicked, hit, and yelled ("Hit him! Kick him!") at a
large, inflated Bobo doll. Some children watched the model's aggressive behaviors, while other children
did not. Each child was later subjected to a frustrating situation and then placed in a room with toys,
including the Bobo doll. Without the child's knowledge, researchers observed the child's behaviors.
Children who had observed the model's aggressive attacks on the Bobo doll also kicked, hit, and
yelled ("Hit him! Kick him!") at the doll. Through observational learning alone, these children learned the
model's aggressive behaviors and were now performing them. In comparison, children who hadn't
observed the model's behaviors didn't hit or kick the Bobo doll after they had been mildly frustrated.
Result:
Bandura found that the children exposed to the aggressive model were more likely to act in physically
aggressive ways than those who were not exposed to the aggressive model. The results concerning gender
differences strongly supported Bandura's prediction that children are more influenced by same-sex
models.
Bandura also found that the children exposed to the aggressive model were more likely to act in
verbally aggressive ways than those who were not exposed to the aggressive model. In addition, the
results indicated that the boys and girls who observed the non-aggressive model exhibited far less non-
imitative mallet aggression than the control group, which had no model.
The evidence also strongly supports the observation that males tend to be more aggressive than females.
When all instances of aggression are tallied, males exhibited 270 aggressive instances compared to 128
aggressive instances exhibited by females.
Conclusion:
Individuals, particularly children, learn social behavior such as aggression by watching the behavior
of another person and adopting that behavior as their own. It was also found that individuals choose
which types of models they are likely to imitate, usually models of the same sex.

Application:
Bandura's research on observational learning raises an important question: If children were likely to
imitate aggressive actions viewed on a film clip in a lab setting, doesn't it also stand to reason that they
will imitate the violence they observe in popular films, television programs, and video games? The debate
over this topic has raged on for years, with parents, educators, politicians, and movie and video game
makers weighing in with their opinions on the effects of media violence on child behavior. Bandura's
observational learning theory suggests that the media have an impact on viewers, whether the behavior
modeled is violent or positive.











http://psychology.about.com/od/developmentalpsychology/a/sociallearning.htm
Plotnik, Rod and Haig Kouyoumdjian. Introduction to Psychology, Ninth Edition. California,
USA: Wadsworth. 2011.
Benjamin S. Bloom was born on 21 February 1913
in Lansford, Pennsylvania, and died on 13 September 1999.
He received bachelor's and master's degrees from
Pennsylvania State University in 1935 and a Ph.D. in
Education from the University of Chicago in March 1942.
He became a staff member of the Board of Examinations at
the University of Chicago in 1940 and served in that
capacity until 1943, at which time he became university
examiner, a position he held until 1959. In this position, he
developed tests to determine if undergraduates had mastered
material necessary for them to receive their bachelor's
degrees.
His initial appointment as an instructor in the
Department of Education at the University of Chicago began
in 1944 and he was eventually appointed Charles H. Swift
Distinguished Service Professor in 1970. He served as
educational adviser to the governments of Israel, India and
numerous other nations. In 1948, he and a group of
colleagues with the American Psychological Association
began discussions that led to the taxonomy of educational
goals, a system of classification that frequently is called
Bloom's Taxonomy.
His 1956 book on the subject, Taxonomy of
Educational Objectives: The Classification of Educational
Goals, Handbook I: Cognitive Domain, deals with
knowledge and the development of intellectual skills. Bloom set forth a hierarchy of learning, beginning
with factual knowledge and leading through comprehension, application, analysis, synthesis and
evaluation. The second book in the series, which he co-authored, Taxonomy of Educational
Objectives, Handbook II: The Affective Domain, was published in 1964. It helped educators understand the
importance of attitudes in the development of learning. Also in 1964, Bloom published Stability and
Change in Human Characteristics. That work, based on a number of longitudinal studies, led to an
upsurge of interest in early childhood education, including the creation of the Head Start program.
Bloom showed that many physical and mental characteristics of adults can be predicted through
testing done while they are still children. For example, he demonstrated that 50 percent of the variations
in intelligence at age 17 can be estimated at age 4. He also found that early experiences in the home have
a great impact on later learning. Bloom summarized his work in a 1980 book titled All Our Children
Learning, which showed, from evidence gathered in the United States and abroad, that virtually all
children can learn at a high level when appropriate practices are undertaken in the home and school. In
the later years of his career, Bloom turned his attention to talented youngsters and led a research team that
produced the book Developing Talent in Young People, published in 1985.
Contribution: Bloom's Taxonomy (Taxonomy of Learning)
Explanation:
Benjamin Bloom headed a group of cognitive psychologists at the University of Chicago that developed a
taxonomic hierarchy of cognitive-driven behavior deemed important to learning and to measurable
capability. (For example, one can measure an objective that begins with the verb "describe", unlike one
that begins with the verb "understand".) Bloom's taxonomy provides a structure in which to categorize
instructional objectives and instructional assessment.

Benjamin Samuel Bloom
Born: February 21, 1913, Lansford, Pennsylvania
Died: September 13, 1999 (aged 86), Chicago
Nationality: American
Education: Ph.D. in Education
Occupation: Educational psychologist
Known for: Bloom's Taxonomy
Conclusion:
Bloom's taxonomy is an effective tool for determining which processes in learning must be emphasized by
instructors or teachers. It reduces bias and one-dimensional teaching; thus, more complex types of
learning are given higher merit.
Application:
Bloom designed the taxonomy in order to help teachers and instructional designers to classify
instructional objectives and goals. The taxonomy relies on the idea that not all learning objectives and
outcomes have equal merit. In the absence of a classification system (a taxonomy), teachers and
instructional designers may choose, for example, to emphasize memorization of facts (which makes for
easier testing) rather than emphasizing other (and likely more important) learned capabilities.
Bloom's taxonomy, in theory, helps teachers better prepare objectives and, from there, derive
appropriate measures of learned capability and higher-order thinking skills. Curriculum design, usually a
state (governmental) practice, did not reflect the intent of the taxonomy until the late 1990s.

http://www.nwlink.com/~donclark/hrd/bloom.html
Neal E. Miller was born in Milwaukee, Wisconsin on
August 3, 1909. He received a B.S. degree from the University
of Washington (1931), an M.S. from Stanford University
(1932), and a Ph.D. degree in Psychology from Yale University
(1935). Miller was a social science research fellow at the
Institute of Psychoanalysis, Vienna for one year (1935-1936)
before returning to Yale as a faculty member in 1936. He first
worked in research in psychology, and later as a researcher in
the University's Institute of Human Relations.
During World War II, Miller served as an officer in
charge of research in the Army Air Corps' Psychological
Research Unit #1 in Nashville, Tennessee. After that he was
director of the Psychological Research Project at the
headquarters of the Flying Training Command in Randolph
Field, Texas. In 1950, Miller returned to Yale to become a
professor of psychology and in 1952 he was appointed the
James Rowland Angell Professor of Psychology. He spent a
total of 30 years at Yale (1936-1966). In 1966, Miller
transferred to Rockefeller University, where he spent an
additional 15 years of service. He became Professor Emeritus at
Rockefeller in 1981 and Research Affiliate at Yale in 1985. He
was President of the American Psychological Association from
1960-61, and received the APA Distinguished Scientific
Contribution Award in 1959 and the APA Citation for
Outstanding Lifetime Contribution to Psychology in 1991. He
also received the National Medal of Science. Miller was also
president of the Society for Neurosciences, the Biofeedback
Society of America and the Academy of Behavioral Medicine.
Contribution: Frustration-Aggression Hypothesis and Biofeedback
Explanation
Frustration-Aggression Hypothesis:
The frustration-aggression hypothesis is a theory of aggression proposed by John Dollard, Neal E. Miller et
al. in 1939. The theory says that aggression is the result of blocking, or frustrating, a person's efforts to
attain a goal. It attempts to explain why people scapegoat (blame a single person or group for the
wrongdoings of many). It attempts to give an explanation as to the cause of violence. The theory says that
frustration causes aggression, but when the source of the frustration cannot be challenged, the aggression
gets displaced onto an innocent target.
There are many examples of this. If a man is disrespected and humiliated at his work, but cannot
respond to this for fear of losing his job, he may go home and take his anger and frustration out on his
family. This theory is also used to explain riots and revolutions. Both are caused by poorer and more
deprived sections of society who may express their bottled-up frustration and anger through violence.
Neal E. Miller
Born: August 3, 1909, Milwaukee, Wisconsin
Died: March 23, 2002 (aged 92), Hamden, Connecticut
Nationality: American
Alma mater: University of Washington; Stanford University; Yale University
Known for: Frustration-Aggression Hypothesis; Biofeedback

Biofeedback:
Neal Miller, a psychologist and neuroscientist who worked and studied at Yale University, is generally
considered to be the father of modern-day biofeedback. In the 1950s, he came across the basic principles
of biofeedback while doing animal experiments on conditioning the behavior of rats. His team found that
by stimulating the pleasure centers of the rats' brains with electricity, it was possible to train rats to
control phenomena ranging from their heart rates to their brainwaves. Until that point, it was believed that
bodily processes like heart rate were under the control of the autonomic nervous system and not
responsive to conscious effort. Miller proposed psychotherapy for aggression, frustration, or anxiety, in
which people would learn more adaptive behaviors and unlearn maladaptive behaviors. Teaching
relaxation techniques, coping skills, or effective discrimination of cues would be part of such therapy.
"In 1961, when Neal Miller first suggested that the autonomic nervous system could be as
susceptible to training as the voluntary nervous system, that people might learn to control their heart rate
and bowel contractions just as they learned to walk or play tennis, his audiences were aghast. He was a
respected researcher, director of a laboratory at Yale, but this was a kind of scientific heresy. Everyone
'knew' that the autonomic nervous system was precisely that: automatic, beyond our control." - James S.
Gordon, founder of the Center for Mind-Body Medicine in Washington.
Nonetheless, Miller was eventually able to prove his point, and biofeedback became gradually
accepted in scientific circles as an effective therapy.
Conclusion:
Biofeedback is an effective treatment that involves manipulating involuntary processes and is particularly
effective at treating conditions brought on by severe stress. Relaxation and mental techniques in
biofeedback are used to alleviate pain and stress.
Application:
Miller was instrumental in the development of biofeedback. He found evidence that even the autonomic
nervous system could be susceptible to instrumental (operant) conditioning. His findings regarding voluntary
control of autonomic responses were later questioned because of difficulties in replicating his results.
Biofeedback is a technique that trains people to improve their health by controlling certain bodily
processes that normally happen involuntarily, such as heart rate, blood pressure, muscle tension, and skin
temperature. Electrodes attached to the skin measure these processes and display them on a monitor. With
help from a biofeedback therapist, you can learn to change your heart rate or blood pressure, for example.
At first you use the monitor to see your progress, but eventually you will be able to achieve success
without the monitor or electrodes. Biofeedback is an effective therapy for many conditions, but it is
primarily used to treat high blood pressure, tension headache, migraine headache, chronic pain, and
urinary incontinence.
http://nealmiller.org/?page_id=82
http://www.newworldencyclopedia.org/entry/Neal_E._Miller
Stimulus + Response → Movement → Act → Behavior
Edwin Ray Guthrie was born and raised in Lincoln,
Nebraska. After graduating from high school, he attended the
University of Nebraska where he obtained his bachelor's
degree in mathematics. He remained there and received his
master's degree in philosophy. Guthrie then taught
mathematics at several high schools, while he worked on his
doctorate in philosophy at the University of Pennsylvania.
After receiving his doctorate, he was hired as an instructor in
the department of philosophy at the University of Washington.
After five years, he moved to the psychology department
where he remained for the rest of his career. Dr. Guthrie
was 33 years old when he made the transition from philosophy
to psychology. He was the winner of the second gold medal
awarded by the American Psychological Association for
outstanding lifetime contributions. During World War II, he
worked with the overseas branch as both a chief consultant and
psychologist. He later became Dean of Graduate Studies at the
University of Washington. The Psychology Department at the
University is in a building named Guthrie Hall. Dr. Guthrie
made contributions in the philosophy of science, abnormal
psychology, social psychology, educational psychology and
learning theory. He is remembered best for his theory of
learning based on association.
Contribution: Law of Contiguity
Explanation:
Guthrie's law of contiguity states that a combination of stimuli which has accompanied a movement will
on its recurrence tend to be followed by that movement. He said that all learning is based on a stimulus-
response association. Movements are small stimulus-response combinations. These movements make up
an act. A learned behavior is a series of movements. It takes time for the movements to develop into an
act. He believed that learning is incremental. Some behavior involves repetition of movements and what
is learned are movements, not behaviors.


Guthrie stated that each movement produces stimuli and the stimuli then become conditioned.
Every motion serves as a stimulus to many sense organs in muscles, tendons and joints. Stimuli which are
acting at the time of a response become conditioners of that response. Movement-produced stimuli have
become conditioners of the succession of movements. The movements form a series often referred to as a
habit. Our movements are often classified as forms of conditioning or association. Some behavior
involves the repetition of movements, so that conditioning can occur long after the original stimulus.
Edwin Ray Guthrie
Born: 9 January 1886, Lincoln, Nebraska
Died: 23 April 1959 (aged 73)
Known for: Law of Contiguity
Influences: Edward Thorndike
Guthrie rejected the law of frequency. He believed in one-trial learning. One-trial learning states
that a stimulus pattern gains its full associative strength on the occasion of its first pairing with a
response. He did not believe that learning is dependent on reinforcement. He defined reinforcement as
anything that alters the stimulus situation for the learner. He rejected reinforcement because it occurs after
the association between the stimulus and the response has occurred. He believed that learning is the
process of establishing new stimuli as cues for some specified response.
Guthrie believed that the recency principle plays an integral role in the learning process. This
principle states that which was done last in the presence of a set of stimuli will be that which is done
when the stimulus combination occurs again. The stimulus-response connections tend to grow weaker
with elapsed time.
Guthrie stated that forgetting is due to interference because the stimuli become associated with
new responses. He believed that you can use sidetracking to change previous conditioning. This involves
discovering the initial cues for the habit and associating other behavior with those cues. Sidetracking
causes the internal associations to break up. It is easier to sidetrack than to break a habit. Breaking up a
habit involves finding the cues that initiate the action and practicing another response to such cues.
Conclusion:
Learning is based on stimulus-response associations; that is, when a stimulus is presented, a response is
likely to occur. The stimulus determines what we do, and these actions become behavior. That is why, in
order to break a habit, which is a learned behavior, the simplest actions involved must be addressed first
before the behavior can be eliminated.
Application: Contiguity theory is intended to be a general theory of learning, although most of the
research supporting the theory was done with animals. The theory applies to personality disorders and
breaking habits.





http://www.muskingum.edu/~psych/psycweb/history/guthrie.htm
Clark Hull grew up handicapped and contracted polio
at the age of 24, yet he became one of the great contributors to
psychology. His family was not well off so his education had
to be stopped at times. Clark earned extra money through
teaching. Originally Clark aspired to be a great engineer, but
that was before he fell in love with the field of Psychology. By
the age of 29 he had graduated from the University of Michigan.
Hull was 34 when he received his Ph.D. in
Psychology at the University of Wisconsin in 1918. Soon after
graduation he became a member of the faculty at the
University of Wisconsin, where he served for 10 years.
Although one of his first experiments was an analytical study
of the effects of tobacco on behavioral efficiency, his lifelong
emphasis was on the development of objective methods for
psychological studies designed to determine the underlying
principles of behavior.
Hull devoted the next 10 years to the study of hypnosis
and suggestibility, and in 1933 he published Hypnosis and
Suggestibility, while employed as a research professor at Yale
University. This is where he developed his major contribution,
an elaborate theory of behavior based on Pavlov's laws of
conditioning. Pavlov provoked Hull to become greatly
interested in the problem of conditioned reflexes and learning.
In 1943 Hull published Principles of Behavior, which
presented a number of constructs in a detailed theory of
behavior. He soon became the most cited psychologist.

Contribution: Scientific Laws of Motivation and Behavior
Explanation:
Motivation and Reinforcement
Hull believed that human behavior is a result of the constant interaction between the organism and its
environment. The environment provides the stimuli and the organism responds, all of which is
observable. Yet there is a component that is not observable, the change or adaptation that the organism
needs to make in order to survive within its environment. Hull explains, "when survival is in jeopardy, the
organism is in a state of need (when the biological requirements for survival are not being met) so the
organism behaves in a fashion to reduce that need". Simply, the organism behaves in such a way that
reinforces the optimal biological conditions that are required for survival.
Hull viewed drive as a stimulus, arising from a tissue need, which in turn stimulates behavior.
The strength of the drive is determined by the length of the deprivation and the intensity or strength of
the resulting behavior. He believed the drive to be non-specific, which means that the drive does not
direct behavior; rather, it functions to energize it. In addition, this drive reduction is the reinforcement. Hull
recognized that organisms were also motivated by other forces, secondary drives. This means that
previously neutral stimuli may assume drive characteristics because they are capable of eliciting
responses that are similar to those aroused by the original need state or primary drive. So learning must be
taking place within the organism.

Clark Leonard Hull
Born: May 24, 1884, Akron, New York, United States
Died: May 10, 1952 (aged 67), New Haven, Connecticut, United States
Nationality: American
Fields: Psychology
Institutions: University of Wisconsin; Yale University
Alma mater: University of Michigan
Known for: Laws of Motivation
Hull based his theory around the concept of homeostasis, the idea that the body actively works to
maintain a certain state of balance or equilibrium. For example, your body regulates its temperature in
order to ensure that you do not become too hot or too cold. Hull believed that behavior was one of the
ways that an organism maintains this balance. Based on this idea, Hull suggested that all motivation arises
as a result of these biological needs. In his theory, Hull used the term drive to refer to the state of tension
or arousal caused by biological or physiological needs. Thirst, hunger and the need for warmth are all
examples of drives. A drive creates an unpleasant state; a tension that needs to be reduced.
Drive Reduction Theory
Hull's learning theory focuses mainly on the principle of reinforcement: when a drive is
followed by a reduction of the need, the probability increases that in future similar situations the same
stimulus will produce the same prior response. If you have achieved homeostasis, your motivation is 0,
since you have no drives to reduce. If you are hungry, your drive is increased to 1. If you are really
hungry, your drive becomes 2. If you are also thirsty, your combined drive to satisfy hunger and thirst
becomes 3. As drives accumulate, your overall motivation increases.
Reinforcement can be defined in terms of reduction of a primary need. Just as Hull believed that
there were secondary drives, he also felt that there were secondary reinforcements. If the intensity of the
stimulus is reduced as the result of a secondary or learned drive, it will act as a secondary reinforcement.
The way to strengthen the S-R response is to increase the number of reinforcements, which builds habit
strength. He also stated that the link in the S-R relationship could be affected by anything that influences
how an organism responds: learning, fatigue, disease, injury, motivation, etc.
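Hull also expressed this relationship quantitatively. A commonly cited simplified form of his formula
(a summary of the standard account of Principles of Behavior, not quoted in the sources listed here) is:

Reaction potential (sER) = Habit strength (sHR) x Drive (D)

Because the terms are multiplied, behavior is weak or absent whenever either factor is near zero: many
reinforced S-R pairings build strong habit strength, but without an active drive the response is not
energized, and a strong drive cannot produce a response that has never been reinforced.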
Conclusion:
When an individual's drive is satisfied, he is motivated to repeat the action that satisfied him. The key to
motivation is identifying first the needs of an individual and promoting an environment or a behavior that
satisfies them.
Application:
Although Hull was a great contributor to psychology, his theory was criticized for the lack of
generalizability due to the way he defined his variables in such precise quantitative terms. But his theory
is effective in explaining the biological drives and natural inclination of humans, and how these are
satisfied by primary reinforcements.
http://psychology.about.com/od/profilesal/p/clark-hull.htm
http://www.muskingum.edu/~psych/psycweb/history/hull.htm
