
Action Learning

Action learning represents a process by which a culture of learning is cultivated in
groups with the aim of generating positive action. Organizations as varied as the Peace
Corps, Xerox, Motorola and the United Nations Development Program have engaged in
action learning. The structure of action learning helps groups explore, define, act on and
evaluate complicated problems.
Action learning takes place in cycles. Each cycle consists of four phases. In the first
phase, the group must identify a problem that is relevant to the group at that particular
moment. It must be a problem that the group has the ability to solve but which suggests
no single appropriate action.
The next phase touches off the process of solving the problem. This begins with each
member of the group reflecting on the problem. The group will discuss these reflections,
examining in detail the language used during reflection and attempt to come to an
understanding as a group on the meaning of the language employed and associated
concepts.
The idea of this phase is to move away from a state of unawareness, anxiety, confusion
and risk and toward an environment that is supportive and encourages questions and
learning. Only in such an environment can the group move beyond its preconceptions
toward posing increasingly insightful questions. Attention moves away from the
individuals and toward their questions. In this atmosphere, it becomes possible to
explore courses of action
that might solve the problem.
In the third phase, the group homes in on the action that seems to be the most
constructive, based on all that the group has learned in its exploratory question period,
and this action is implemented. In the final, fourth, phase, the group evaluates the
effectiveness of the chosen, implemented course of action. If the group assesses the
action as having been less than effective or otherwise not meeting the requirements of
the group, further action learning sessions are undertaken until it is clear that the
problem has been resolved.
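The four-phase cycle described above can be sketched as a simple loop. This is an illustrative sketch only; the phase functions and the example problem are hypothetical stand-ins for real group work:

```python
# Illustrative sketch of the four-phase action learning cycle. The phase
# functions below are hypothetical stand-ins for actual group activities.

def identify_problem():
    # Phase 1: a relevant problem the group can act on, with no single
    # obvious solution.
    return "improve cross-team handoffs"

def reflect_and_question(problem):
    # Phase 2: individual reflection, then shared questioning, yielding
    # candidate courses of action.
    return ["rotate liaisons", "write a handoff checklist"]

def act(option):
    # Phase 3: implement the option judged most constructive.
    return f"implemented: {option}"

def evaluate(outcome):
    # Phase 4: judge whether the action resolved the problem.
    return "checklist" in outcome

def action_learning_cycle(max_cycles=5):
    problem = identify_problem()
    for cycle in range(1, max_cycles + 1):
        options = reflect_and_question(problem)
        outcome = act(options[-1])   # take the most constructive option
        if evaluate(outcome):
            return cycle             # problem resolved after this many cycles
    return None                      # further action learning sessions needed

print(action_learning_cycle())
```

The key structural point the sketch captures is the final phase feeding back into a new cycle: if evaluation fails, the loop repeats until the problem is resolved or the group stops.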
Action learning is made up of six integral parts: the problem, a group, a process of
reflection and questioning, a decision to take action, a commitment by the group to the
learning process and a group facilitator. The facilitator makes sure that the other five
components are present, recognized and advanced by members of the group. The aim
of the facilitator is to ensure a culture of learning in which participants feel safe to
explore and develop thoughts beyond preconceived notions and what is already known.
In order for the group to gain the ability to move beyond preprogrammed safe thinking,
there must be commitment to innovation. The prerequisite for innovative thinking is
being open and honest. This is the background that allows the fertile flowering of critical
thinking and expression. Developing innovative thought in this atmosphere helps form a
bond between members of the group and strengthens them as a single thinking/working
entity.
The facilitator will encourage the action learning group to develop certain attributes. This
begins with a commitment to finding a solution to a problem. The individuals within the
group take ownership of the identified problem, but instead of attaching personal blame,
they focus on solving the problem as a collective. This requires clarifying what is and
is not within the group's power to control and focusing only on those aspects the
group can control.
It is also the job of the facilitator to encourage participants to be good listeners, to
question their own attitudes and those of others in positive ways that shed light and
further the quest for insight on the problem at hand. The facilitator also inspires group
members to allow for openness and to risk a state of vulnerability, so gaps in knowledge
can be identified. Trust between group members is reinforced. A helpful atmosphere
results in which participants assist each other in the learning process.
The facilitator encourages members of the group to show admiration for others'
expertise, knowledge, capacity for learning and points of view. At the same
time, the facilitator asks for a commitment to action and for belief in the ultimate success
of the group. A good facilitator helps develop the group and the individuals that make up
the group, helping individuals gain self-awareness of learning potential and assisting the
group in facilitating the professional development of individual group members.

Active Learning

Active learning is a process where a learner takes a dynamic and energetic role in his or
her education. An active learner, unlike a passive learner, is not dependent on a
teacher. In active learning, the student is a partner in the process, while passive
learning requires little personal involvement from a student. Active learning commonly
makes teachers act as guides to the learning process, motivators for further endeavors
for students. As a result of the learner's participation, such learning is self-reinforcing,
which should add to the retention of what is learned.
Typically, active learning is enjoyable, motivational and effective in getting tasks done,
while passive learning has a reputation for becoming dull very quickly. Active learning
tends to boost the learner's ego, as key steps are achieved in the learning process.
Active learning usually stimulates a learner's pride, increases confidence and imparts
credibility in the eyes of teachers. It may also stimulate a thirst for deeper and broader
understanding in future academic endeavors. Passive learners, by contrast, tend to
become uninterested and unmotivated. What is learned passively is usually not
effectively or enthusiastically applied.
An active learner often asks questions of clarification, example, nomenclature, category,
reason, status or rationale. Such questions are aimed at enhancing learning and they
tend to stimulate further learning. An active learner often challenges ideas, procedures,
content relationships, and priorities but does not attack people or their character. Active
students frequently follow up learning sessions with personal extensions. Such
extensions include added reading, group discussions about what was learned,
applications of learning, and experimentation. These activities validate learner interest
in what was learned.
In active learning, a student connects new material with what was previously learned.
An active learner attaches what is learned with skill development. The connection of
knowledge and skill is an advanced learning dynamic. An active learner discusses what
he or she knows with others, thereby validating the ability to clearly and thoroughly
articulate what he or she knows. Such discussions increase a learner's credibility for
others and at the same time boost his or her own confidence.
Instructors, classmates and people outside school more often seek active learners for
opinions, assistance and insight than they do passive students. Active learners also share
research findings, exchange views, and debate topics among themselves. Such
exchanges add considerably to what is learned. In addition, active learners usually have
an open mind, possess better reasoning skills, and make fewer snap judgments.
Instructing active learners is easier and more successful for teachers than teaching
passive learners, partly because an active learner tends to realize when presented
material or readings are unusually difficult or confusing. They ask relevant questions to
clear up confusion and keep small problems from turning into big difficulties.
Active learners also provide relevant examples when appropriate and offer answers to
questions and problems, thus helping the instructors by adding to the dialogic flow in the
classroom.
Active learners' work tends to be done on time, completely and neatly. Active learners
tend to be more creative, while also being more likely to accept and adopt suggestions
offered by tutors, instructors and classmates than passive learners. Teachers are more
likely to give enthusiastic and quality recommendation statements for further education
to active learners. Active behaviors and values typically lead to better opportunities for
advancement and higher remuneration. While active learning does not
guarantee success, it enhances a student's chances of doing well.
Active learning is not exclusively taught, rewarded and promoted in school. Active
learning should be reinforced and extended by playground supervisors, parents and
babysitters. It is easiest for students to start active learning early, in part thanks to good
role modeling. They should also get healthy rewards and understand that such learning
is useful. Meanwhile, teachers, parents, and others working with students need to be
competently instructed so that they can reinforce, reward and extend active learning
behaviors and values.

Associative Learning (Conditioning)

Associative learning is a type of learning principle based on the assumption that ideas
and experiences reinforce one another and can be linked. Abramson (1994) defines the
concept as a form of behavior modification involving the association of two or more
events, such as between two stimuli, or between a stimulus and a response.

This type of learning falls under the scope not only of psychology but of neurology
as well. Associative learning is classified as the most basic form of learning; a more
complex type, cognitive learning, requires language and
memory. For example, associative learning has been identified in the behavior of honey
bees. Research has shown that the bee extends its proboscis, which is an elongated
appendage from its head, as a reflex to antennal stimulation.
Psychologists have divided associative learning into two types: classical conditioning
and operant conditioning. Classical conditioning is the formation of an association
between a conditioned stimulus and a response. It was first defined by the Russian
physiologist Ivan Pavlov (1849 to 1936), who was awarded the Nobel Prize in
Physiology or Medicine for his influential work. His research focused on conditioning
and involuntary reflex.
Pavlov carried out experiments on digestion and the work of the digestive glands. In the
1920s, Pavlov carried out an experiment in which a dog was given food whenever a
bell rang. After the pattern was repeated several times, the dog started salivating
when the bell rang, even without getting food. Pavlov referred to this
phenomenon as the conditioned response, which means that the dog associated the
bell with food.
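The acquisition Pavlov observed is often formalized with the Rescorla-Wagner model, a later formalization that is not part of the text above: on each food-bell pairing, the bell's associative strength V moves a fraction of the way toward an asymptote lam. A minimal sketch, with arbitrary parameter values:

```python
# Rescorla-Wagner update: dV = alpha * beta * (lam - V)
# alpha, beta: salience/learning-rate parameters (values here are arbitrary)
# lam: maximum associative strength the food (US) can support

def acquisition(trials, alpha=0.3, beta=1.0, lam=1.0):
    V = 0.0  # associative strength of the bell (CS) before training
    history = []
    for _ in range(trials):
        V += alpha * beta * (lam - V)  # each pairing closes part of the gap
        history.append(round(V, 3))
    return history

print(acquisition(10))
```

The resulting curve rises quickly at first and then levels off toward the asymptote, which matches the negatively accelerated acquisition curves reported in conditioning experiments.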
Unlike classical conditioning, operant conditioning concerns the modification of
voluntary behavior. While classical conditioning is determined by antecedent conditions,
the environment and the consequences shape operant conditioning. This method was
the subject of an experiment by the eminent American behavioral expert B.F. Skinner
(1904 to 1990), who trained rats and pigeons to press a lever to receive food as a
reward. Skinner designed the so-called Skinner Box, which is an operant conditioning
chamber, to carry out his experiment. In the experiment, a desired output, that is
pressing the lever, is paired with a stimulus, which is the food reward.
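The lever-pressing behavior can be sketched as a toy simulation in which each reinforced press nudges the press probability upward. This is an illustrative model, not Skinner's own analysis; the initial probability and step size are arbitrary:

```python
import random

# Toy operant-conditioning simulation: every lever press is followed by
# food (continuous reinforcement), which raises the press probability.

def simulate(presses=200, p0=0.05, step=0.05, seed=0):
    rng = random.Random(seed)
    p = p0  # initial probability of pressing the lever in a time step
    for _ in range(presses):
        if rng.random() < p:        # the rat happens to press the lever...
            p = min(1.0, p + step)  # ...food follows, reinforcing the behavior
    return p

print(simulate())
```

Because reinforcement only occurs when the behavior is emitted, learning is slow at first and accelerates once pressing becomes frequent, which is the defining feature of operant (as opposed to classical) conditioning.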
Abramson cites two studies which suggest that some types of worms may be capable
of modifying their responses to obtain a reinforcement. He argues that the evidence
indicates the roundworm known as C. elegans is capable of undergoing at least one form
of associative learning, classical conditioning. Abramson believes that the biggest
question is how to determine if an animal is doing something new, the hallmark of
associative learning.

The key instruments of reinforcement, punishment and extinction are important in
operant conditioning. Reinforcement encourages the increased frequency of certain
behavior; punishment reduces the frequency of its occurrence, whereas extinction
describes the case where the behavior pattern does not provoke any response and
therefore, it may disappear. Research shows that previously reinforced behavior can
vanish when it no longer gives rise to any reinforcement.
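Extinction can be sketched with the same kind of error-driven update used above for acquisition: once reinforcement stops, the attainable associative strength drops to zero and the learned strength decays toward it. The parameter values are illustrative:

```python
# Extinction sketch: with reinforcement withheld, the asymptote lam falls
# to 0 and previously learned associative strength V decays toward it.
# The learning rate alpha is an arbitrary illustrative value.

def extinguish(trials, V, alpha=0.3, lam=0.0):
    for _ in range(trials):
        V += alpha * (lam - V)  # no reinforcement: V drifts toward 0
    return V

strength = 1.0                     # behavior previously reinforced to full strength
strength = extinguish(10, strength)
print(round(strength, 3))
```

The same mechanism that built the association up thus tears it down when reinforcement no longer follows the behavior, mirroring the research finding described above.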
Associative learning can be used to study and modify animal behavior. It is also a
widespread method in human psychology, pedagogy and medicine. It is believed to
improve human visual working memory performance. Some researchers have explored
the role of the associative learning process in dealing with nicotine addiction.
Associative learning has been widely applied in clinical settings as well.
This method of learning can be very effective in the classroom. Educational theorists
have studied the role of association formation and response to stimuli in the learning
process. In particular, operant conditioning can be widely used in educational practices.
With regard to this method, stimuli in the classroom include praise and disapproval.
Students are thought to learn and demonstrate new behavior in response to the
consequences these behaviors evoke. Appropriate behavior is often acquired
because of the desirable outcome. Inappropriate and undesirable behaviors can be
acquired for a similar reason.
Punishment in the classroom can include placing the misbehaving student in an
environment with no reinforcers, withdrawal of a previously earned reinforcer as well as
reprimands. In order to be effective, the punishment has to be described in clear and
concrete terms. Teachers are advised to detail in advance the desired behavior.
Furthermore, there should be an agreement between students and the teacher
regarding the expectations for the student's behavior.

Brain-Based Learning

Brain-based learning is also known as brain-compatible learning. It is the explicit
acknowledgement that learning is fundamentally linked to the biological and chemical
functioning of the brain. This may seem like a redundant concept, but historically the role
of the brain in the learning process has been overlooked. The revolution of education
through brain-based learning is due to developments in neurological research
(particularly in the 1990s), when new insights into the workings of the brain were
discovered in light of innovative technologies and developments such as
electrophysiological studies, neuropsychological tests and imaging techniques.
These brain studies have led to a shift in education and learning models. They lend
greater significance to learner differences and sociocultural contexts, emphasizing how
the brain learns. Brain-based learning offers a holistic look at the learning process and
takes in the relationship between the emotions and memory as well as individual
variables which affect learning. Brain-based learning relates learning to the functions
of different areas of the brain. For example, the thalamus, located in the center of the brain,
is associated with attention. Research has found that our bodies have cycles of
approximately 90 to 110 minutes during which energy levels peak; at the bottom of the
cycle, energy and attention decrease. Proponents of brain-based learning believe that
these cycles should be exploited to optimize learning.
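Taking the cycle claim at face value, one could sketch a schedule that places breaks at the predicted energy troughs. The 100-minute period, the start time, and the assumption that a session begins at an energy peak are all illustrative, not prescriptions from the text:

```python
from datetime import datetime, timedelta

# Assuming a 100-minute energy cycle that starts at a peak, predict the
# trough times (half a cycle after each peak) where breaks might be placed.

def trough_times(start, cycles=3, period_min=100):
    troughs = []
    for k in range(cycles):
        trough = start + timedelta(minutes=k * period_min + period_min // 2)
        troughs.append(trough.strftime("%H:%M"))
    return troughs

# A class day assumed to start at 09:00 with energy at a peak:
print(trough_times(datetime(2024, 1, 8, 9, 0)))
```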
Stress can act as a barrier to complex thinking and creativity. Our
natural responses to stress are counter-productive to the learning process. In highly
stressful situations we may experience a psychophysiological response, which can lead
to feelings of helplessness or fatigue. In these situations we know that information
travels through the thalamus and amygdala and then moves into the cerebellum. In a
school environment this reduces the capacity for learning to the memorization of
isolated facts. Even something seemingly innocuous such as teasing by the student's
peer group may be as threatening to the learner as a saber-toothed tiger in terms of
eliciting a primal response from the brain. Using brain-based learning, we can turn this
knowledge to our advantage by reducing stress in the educational environment.
Low stress levels are conducive to reflection and analytical thinking. In the traditional
school environment, teachers may have thrived on high-stress tactics, perhaps focusing
on the unprepared student in an attempt to motivate him or her. Brain-based learning
calls into question this style of teaching, favoring a learning environment that is relaxed
and safe for the student. An important aspect of brain-based learning is the learning
environment itself. Students who attend school from kindergarten through the end of
high school will spend about 13,000 hours there, so for a significant portion of their
time their developing brains are exposed to the learning environment. Brain-based
learning takes this into account and looks for ways to optimize the capacity for
learning of the developing brain. External factors that can impact the brain's capacity to
learn and retain information include room temperature, the time of the day, input
quantity (capacity) and engagement (such as goal-orientated attention).
Maintaining a student's attention is a process of engaging the relevant neural networks
in the brain. In a highly social environment this can be a challenge; such things as
gossip, thirst or hunger or even a change in the weather outside can cause the typical
student's attention to wander. To counter this, both external and internal distractions
should be minimized. Ensuring a student's engagement in the classroom or learning
environment also stimulates structures in the brain associated with pleasure. We learn
better when we focus our sight, listen, and physically attend to information rather than
simply memorizing it. When we struggle to pay attention, brain activity is heightened in
the prefrontal and posterior parietal lobes, the thalamus and the anterior cingulate; in
other words, our neurons are firing extra hard in an attempt to keep us attuned to what
we are learning.
The brain's processing mechanisms play a large role in brain-based learning. Central to
this is the brain's predilection for pattern-making. Learning should incorporate context to
stimulate neural activity. The brain will use information in the context of knowledge that
is already attained and stored, putting related events together in hierarchies and
categories. Learning depends on the healthy functioning of the brain; if we stimulate it in
the right way we are more likely to achieve educational goals. In short, the more we
learn about the brain, the better we learn.

Classical Conditioning (Pavlovian Conditioning)

learning, in psychology, the process by which a relatively lasting change in potential
behavior occurs as a result of practice or experience. Learning is distinguished from
behavioral changes arising from such processes as maturation and illness, but does
apply to motor skills, such as driving a car, to intellectual skills, such as reading, and to
attitudes and values, such as prejudice. There is evidence that neurotic symptoms and
patterns of mental illness are also learned behavior. Learning occurs throughout life in
animals, and learned behavior accounts for a large proportion of all behavior in the
higher animals, especially in humans.
Models of Learning
The scientific investigation of the learning process was begun at the end of the 19th
century by Ivan Pavlov in Russia and Edward Thorndike in the United States. Three
models are currently widely used to explain changes in learned behavior; two
emphasize the establishment of relations between stimuli and responses, and the third
emphasizes the establishment of cognitive structures. Albert Bandura maintained (1977)
that learning occurs through observation of others, or models; it has been suggested
that this type of learning occurs when children are exposed to violence in the media.
Classical Conditioning
The first model, classical conditioning, was initially identified by Pavlov in the salivation
reflex of dogs. Salivation is an innate reflex, or unconditioned response, to the
presentation of food, an unconditioned stimulus. Pavlov showed that dogs could be
conditioned to salivate merely to the sound of a buzzer (a conditioned stimulus), after it
was sounded a number of times in conjunction with the presentation of food. Learning is
said to occur because salivation has been conditioned to a new stimulus that did not
elicit it initially. The pairing of food with the buzzer acts to reinforce the buzzer as the
prominent stimulus.
Operant Conditioning
A second type of learning, known as operant conditioning, was developed around the
same time as Pavlov's theory by Thorndike, and later expanded upon by B. F. Skinner.
Here, learning takes place as the individual acts upon the environment. Whereas
classical conditioning involves innate reflexes, operant conditioning requires voluntary
behavior. Thorndike showed that an intermittent reward is essential to reinforce
learning, while discontinuing the use of reinforcement tends to extinguish the learned
behavior. The famous Skinner box demonstrated operant conditioning by placing a rat in
a box in which the pressing of a small bar produces food. Skinner showed that the rat
eventually learns to press the bar regularly to obtain food. Besides reinforcement,
punishment produces avoidance behavior, which appears to weaken learning but not
curtail it. In both types of conditioning, stimulus generalization occurs; i.e., the
conditioned response may be elicited by stimuli similar to the original conditioned
stimulus but not used in the original training. Stimulus generalization has enormous
practical importance, because it allows for the application of learned behaviors across
different contexts. Behavior modification is a type of treatment resulting from these
stimulus/response models of learning. It operates under the assumption that if behavior
can be learned, it can also be unlearned (see behavior therapy).
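Stimulus generalization is often pictured as a gradient in which response strength falls off with distance from the trained stimulus. A sketch using a Gaussian-shaped gradient; the shape, the tone frequencies and the width parameter are illustrative assumptions, not values from the text:

```python
import math

# Generalization gradient: response to a test stimulus decays with its
# distance from the trained stimulus. Sigma controls gradient width and
# is an arbitrary illustrative value.

def response(test, trained=1000.0, sigma=100.0, peak=1.0):
    d = test - trained
    return peak * math.exp(-(d * d) / (2 * sigma * sigma))

# Suppose a response was conditioned to a 1000 Hz tone; nearby tones
# still elicit the response, distant tones barely do.
for hz in (1000, 1100, 1500):
    print(hz, round(response(hz), 3))
```

This is what makes generalization practically important: a behavior trained to one stimulus transfers, in graded fashion, to similar stimuli encountered in new contexts.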
Cognitive Learning
A third approach to learning is known as cognitive learning. Wolfgang Köhler showed
that a protracted process of trial-and-error may be replaced by a sudden understanding
that grasps the interrelationships of a problem. This process, called insight, is more akin
to piecing together a puzzle than responding to a stimulus. Edward Tolman (1930) found
that unrewarded rats learned the layout of a maze, yet this was not apparent until they
were later rewarded with food. Tolman called this latent learning, and it has been
suggested that the rats developed cognitive maps of the maze that they were able to
apply immediately when a reward was offered.

Cognitive Learning

A third approach to learning is known as cognitive learning. Wolfgang Köhler showed
that a protracted process of trial-and-error may be replaced by a sudden understanding
that grasps the interrelationships of a problem. This process, called insight, is more akin
to piecing together a puzzle than responding to a stimulus. Edward Tolman (1930) found
that unrewarded rats learned the layout of a maze, yet this was not apparent until they
were later rewarded with food. Tolman called this latent learning, and it has been
suggested that the rats developed cognitive maps of the maze that they were able to
apply immediately when a reward was offered.

Constructivism in Education

Constructivism is a theory founded on the premise that humans construct their
knowledge and understanding of the world by reflecting on their own experiences.
The concept of constructivism dates back to ancient Greece, when Socrates asked his
students questions which led them to realize the weakness of their thinking.
Constructivist educators still use this method as a teaching technique.
Swiss developmental psychologist Jean Piaget and American philosopher and
educational reformer John Dewey developed theories of childhood development and
education that led to the evolution of constructivism. Russian psychologist Lev Vygotsky
and American psychologists Jerome Bruner and David Ausubel added new perspectives
to the constructivist learning theory.

Piaget believed that assimilation and accommodation are the two processes through
which knowledge is internalised by learners. He suggested that humans construct new
knowledge through their experiences. When individuals assimilate new knowledge they
add it to an existing framework, without changing its structure. Accommodation is the
process of changing the internal mental structure to fit new experiences.
Constructivist teaching creates motivated learners. All school subjects involve
constructing new ideas. Teachers should create environments in which students can
construct their own ideas and understanding. Constructivist teachers encourage
students to assess how classroom activities help them gain understanding.
Unlike traditional teaching, in a constructivist classroom the focus is no longer on
teachers who transfer their knowledge to passive learners, but on students, who are
urged to be actively involved in the process of learning. Teachers are facilitators who, by
asking questions, mediate and help students develop their understanding. Teachers no
longer give answers according to a set curriculum, but help students come to their own
conclusions and create their own ideas. Teachers are in continuous dialogue with their
students, unlike in traditional classrooms where they mostly give monologues.
In a constructivist classroom, knowledge is no longer something that should be
memorised, but a dynamic view of the world we live in. Learners are encouraged to
discover concepts and facts for themselves. Constructivist classrooms are based on a
constant dialogue between teachers and students who develop an awareness of each
other's viewpoints and opinions. At the same time, learners are encouraged to work in
groups and collaborate in tasks. Rather than simply absorbing the information being
presented, students interact with, interpret and analyse it.
In traditional classrooms learning is based on repetition. In constructivist classrooms
learning is interactive and students build new knowledge starting from what they already
know.
Negotiation is an important aspect of a constructivist classroom. Teachers should
openly talk about how new information may be learned and can invite students to
contribute and modify the educational programme.
Traditionally, assessment is based on testing. In constructivist teaching, assessment is
not based only on tests, but also on students' work, observations and points of view.

Assessment should be used to enhance both the student's learning and the teacher's
understanding of what students understand. Constructivists don't see assessment as an
isolated exercise, but as a continuous process. They encourage students to assess their
own knowledge and evaluate each other's work.
The main benefit of constructivism is that students enjoy learning more when they are
actively involved than when they are just given information about a subject. Moreover,
constructivism focuses more on understanding and learning to think, unlike traditional
teaching which focuses on memorisation.
The dynamism of the lessons engages students' initiatives and gives them ownership of
what they learn. Constructivism develops students' ability to express and use their
knowledge in a myriad of ways in real life situations.
At the same time, students improve their communication skills by collaborating and
exchanging ideas with the rest of the class. They learn how to negotiate and clearly
express their ideas so that the group they are working in can accomplish its task.
However, constructivism has been criticised on various grounds. Some critics say this
teaching method has been more successful with children who have committed, affluent
parents, while traditional methods work better with students lacking such resources.
Other critics believe that in constructivist classrooms some students dominate the
class while the others are forced to conform to their opinions. Moreover, they claim
there is little evidence that constructivist methods work.
By rejecting evaluation through testing, teachers have made their students' progress
unaccountable.

Conditions of Learning
The theory that different types or levels of learning require different types of instruction
was developed by U.S. educational psychologist Robert Gagne (1916-2002) in The
Conditions of Learning (1965). This theory outlines the
relation between learning objectives and appropriate instructional designs.

According to Gagne, there are five main categories of learning - verbal information,
intellectual skills, cognitive strategies, motor skills and attitudes. There are different
conditions, both internal (such as attention, motivation and recall) and external (such as
arrangement and timing of stimulus events), that have to be in place for each type of
learning. For example, in order to learn cognitive strategies one should have the chance
to practice developing new solutions to problems. On the other hand, successful learning
of attitudes requires a credible role model or persuasive arguments.
Gagne identified a sequence of nine instructional events that should satisfy or provide
the necessary conditions for learning:
1. Gaining attention - When a lesson starts, learners are often thinking about many
things other than the lesson. Capturing and keeping their attention is therefore of
crucial importance for the instructor. This can be done, for example, by gesturing,
speaking loudly or providing an interesting visual.
2. Informing learners of the objective - Another important step is to clearly describe the
goal of the lesson and explain how the lesson will be useful to the learners. By doing so
the instructor allows the learners to form expectations about the lesson and lets them
know what they should attend to.
3. Stimulating recall of prior learning - In many cases, in order to understand and learn
new information one must have certain existing knowledge or skills, sometimes referred
to as prerequisites. By reminding the learners of prior learning the instructor helps them
build on their previous knowledge or skills.
4. Presenting the material - This is when the new knowledge is introduced. It can be
useful to present the information in small parts in order to avoid memory overload.
Although many instructors start their lessons at this point, the previous three events can
markedly improve the effectiveness of instruction.
5. Providing guidance for learning - At this point the teacher gives instructions on how
the students can learn the new knowledge. This guidance may take the form of giving
examples, relating new information to existing knowledge, showing images or offering
mnemonics.
6. Eliciting performance - Here the instructor gives the learners a chance to practice the
newly acquired behavior, skills or knowledge, thereby confirming that the material has
been understood correctly. Repetition is generally considered to increase the chance of
retaining knowledge.
7. Providing feedback - This event is of crucial importance, as it allows both the
instructor and the learners to spot misunderstandings and correct them. Feedback can
be given in various ways, for example via a test, a quiz or verbal comments, but it
should be specific and explanatory.
8. Assessing performance - The instructor has to make sure that the new knowledge
has been reliably stored. To really learn the new information, students need additional
practice, often in the form of homework. One popular way to assess performance is via
graded tests.
9. Enhancing retention and transfer - For instruction to have a long-term effect, one
should try to increase the chance that the new knowledge will be retained for future
needs. After the lesson, students should be ready to transfer the newly acquired
knowledge and skills to different problems and situations. Retention and transfer can be
enhanced by giving examples of similar situations, providing additional practice in
varied situations and reviewing the lesson.
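Because the nine events form an ordered sequence, they can be sketched as a small checklist data structure. The code below is purely illustrative: the `missing_events` helper and the lesson-plan representation are our own invention, not part of Gagne's theory.

```python
# Illustrative sketch: Gagne's nine instructional events as an ordered checklist.
# The event names come from the list above; the checking function is a
# hypothetical convenience for reviewing a lesson plan.

GAGNE_EVENTS = [
    "gaining attention",
    "informing learners of the objective",
    "stimulating recall of prior learning",
    "presenting the material",
    "providing guidance for learning",
    "eliciting performance",
    "providing feedback",
    "assessing performance",
    "enhancing retention and transfer",
]

def missing_events(planned_events):
    """Return, in order, the instructional events a plan has not addressed."""
    planned = {e.lower() for e in planned_events}
    return [e for e in GAGNE_EVENTS if e not in planned]

# A plan that jumps straight to presentation, as many instructors do (event 4):
plan = ["gaining attention", "presenting the material", "providing feedback"]
print(missing_events(plan))
```

Running the check against a partial plan immediately shows which of the nine conditions for learning remain unprovided.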
In The Conditions of Learning and the Theory of Instruction Gagne named several steps
to be followed when planning and designing instruction. First, the instructor should
identify the types of learning outcomes as well as the prerequisite knowledge or skills
they may require. Then one should identify the internal and external conditions needed
to achieve the outcomes. Other steps are specifying the learning context, recording the
characteristics of the learners and selecting the media for instruction. There should also
be a plan for how to motivate learners. Finally, the instructional design should be tested both
via formative evaluation (before actually being used) and summative evaluation (after
being used).

Conceptual Frameworks in Learning

A conceptual framework can be explained in the following ways:

- A set of ideas or concepts organized in a fashion that makes them easy to convey and
explain to others.
- An organized way of thinking about how and why a project should take place, and how
to understand its activities.
- A basic overview of the practices and ideas that will shape the way the work in the
project will be done.
- A set of hypotheses, values and definitions under which people can work together.

The conceptual framework for any study is the system of concepts, expectations,
assumptions, beliefs and theories that support the research. It is the key part of the
design. The most important thing to understand about a conceptual framework is that it
is basically a conception or model of what is to be studied: a tentative theory of the
things or phenomena that will be investigated. This tentative theory informs the rest of
the design, helping to assess and refine the goals, develop realistic and relevant
research questions, select proper methods and identify potential threats to any
conclusions that may be drawn.
The research problem is part of the conceptual framework, and formulating the research
problem is often the most important task of designing the conceptual framework. It is an
integral part of the conceptual framework, because it identifies something that is going
on in the world, something that is problematic or contains consequences that are
problematic. The research problem is there to justify the study and to show people why
the research is important and necessary. In addition, the problem may be something
that is not fully understood and it is not known how to deal with it. It is exactly for that
reason that more information is necessary.
Here is an example of a conceptual framework for a health promotion project for
Hispanic women:
1. Develop and implement a Lay Health Promoter training program for Hispanic women.
2. Develop culturally appropriate health education and promotion materials related to
cervical and breast cancer in Hispanic women.
3. Reach out to the Hispanic community and inform them as well as involve them in
health promotion activities.
4. Identify and remove barriers to preventative health services for Hispanic women.
5. Promote health through activities that improve cancer screening behavior among
Hispanic women.
Conceptual frameworks are also needed in political science. There, one may not always
be conscious of the slow progress being made in the search for useful theoretical ideas
under the very broad and poorly outlined behavioral umbrella. This is perhaps due to
the need to concentrate on the often time-consuming and difficult job of reshaping the
tools of research: the need to learn new languages of analysis and become familiar with
new methods, data and findings is at times overwhelming. The preoccupation of
political science with theory has left the political scientist uncomfortably sensitive to the
theoretical implications of behavioral tendencies. Within the very short time that the
behavioral approach has been pervasive in political research, a number of respectable
alternative conceptual approaches to the study of political science have emerged, and
many more are still being developed.
A community of working scientists has its own unique way of looking at the world. The
scientists who are members of the community have their own ways of conducting
research, their own programs and their own ways of interpreting what takes place in
their experiments. They have their own theories and beliefs about nature. All of these
values form and shape the community's conceptual framework of the world. According to
the conceptual relativist, the beliefs that make up the conceptual framework of this
group are by definition true. Any new beliefs or findings are true if they fit into the
accepted conceptual framework. Any particular research finding is true or false only in
relation to the particular conceptual framework.

Operant conditioning

Operant conditioning (also "instrumental conditioning") is a type of learning in which
the strength of a behavior is modified by its consequences, such as reward or
punishment, and the behavior is controlled by antecedents called discriminative stimuli
which come to signal those consequences.
While operant and classical conditioning both involve stimulus control, they differ in the
nature of this control. In operant conditioning, stimuli present when a behavior is
rewarded or punished come to control that behavior. For example, a child may learn to
open a box to get the candy inside, or learn to avoid touching a hot stove; the box and
the stove are discriminative stimuli. However, in classical conditioning, stimuli that signal
significant events come to control reflexive behavior. For example, the sight of a colorful
wrapper comes to signal "candy", causing a child to salivate, or the sound of a door
slam comes to signal an angry parent, causing a child to tremble.

The study of animal learning in the 20th century was dominated by the analysis of these
two sorts of learning,[1] and they are still at the core of behavior analysis.

Thorndike's law of effect
Operant conditioning, sometimes called instrumental learning, was first extensively
studied by Edward L. Thorndike (1874-1949), who observed the behavior of cats trying
to escape from home-made puzzle boxes.[2] A cat could escape from the box by a
simple response such as pulling a cord or pushing a pole, but when first constrained the
cats took a long time to get out. With repeated trials ineffective responses occurred less
frequently and successful responses occurred more frequently, so the cats escaped
more and more quickly. Thorndike generalized this finding in his law of effect, which
states that behaviors followed by satisfying consequences tend to be repeated and
those that produce unpleasant consequences are less likely to be repeated. In short,
some consequences strengthen behavior and some consequences weaken behavior.
By plotting escape time against trial number, Thorndike produced the first known
animal learning curves.[3]
Humans appear to learn many simple behaviors through the sort of process studied by
Thorndike, now called operant conditioning. That is, responses are retained when they
lead to a successful outcome and discarded when they do not, or when they produce
aversive effects. This usually happens without being planned by any "teacher", but
operant conditioning has been used by parents in teaching their children for thousands
of years.[4]
Skinner
B.F. Skinner (1904-1990) is often referred to as the father of operant conditioning, and
his work is frequently cited in connection with this topic. His book "The Behavior of
Organisms",[5] published in 1938, initiated his lifelong study of operant conditioning and
its application to human and animal behavior. Following the ideas of Ernst Mach,
Skinner rejected Thorndike's reference to unobservable mental states such as
satisfaction, building his analysis on observable behavior and its equally observable
consequences.[6]

To implement his empirical approach, Skinner invented the operant conditioning


chamber, or "Skinner Box," in which subjects such as pigeons and rats were isolated
and could be exposed to carefully controlled stimuli. Unlike Thorndike's puzzle box, this
arrangement allowed the subject to make one or two simple, repeatable responses, and
the rate of such responses became Skinner's primary behavioral measure. [7] Another
invention, the cumulative recorder, produced a graphical record from which these
response rates could be estimated. These records were the primary data that Skinner
and his colleagues used to explore the effects on response rate of various
reinforcement schedules.[8] A reinforcement schedule may be defined as "any procedure
that delivers reinforcement to an organism according to some well-defined rule". [9] The
effects of schedules became, in turn, the basic findings from which Skinner developed
his account of operant conditioning. He also drew on many less formal observations of
human and animal behavior.[10]
Many of Skinner's writings are devoted to the application of operant conditioning to
human behavior.[11] In 1948 he published Walden Two, a fictional account of a peaceful,
happy, productive community organized around his conditioning principles. [12] In
1957, Skinner published Verbal Behavior,[13] which extended the principles of operant
conditioning to language, a form of human behavior that had previously been analyzed
quite differently by linguists and others. Skinner defined new functional relationships
such as "mands" and "tacts" to capture some essentials of language, but he introduced
no new principles, treating verbal behavior like any other behavior controlled by its
consequences, which included the reactions of the speaker's audience.
Concepts and procedures
Origins of operant behavior: operant variability
Operant behavior is said to be "emitted"; that is, initially it is not elicited by any particular
stimulus. Thus one may ask why it happens in the first place. The answer to this
question is like Darwin's answer to the question of the origin of a "new" bodily structure,
namely, variation and selection. Similarly, the behavior of an individual varies from
moment to moment, in such aspects as the specific motions involved, the amount of
force applied, or the timing of the response. Variations that lead to reinforcement are
strengthened, and if reinforcement is consistent, the behavior tends to remain stable.
However, behavioral variability can itself be altered through the manipulation of certain
variables.[14]
Modifying operant behavior: reinforcement and shaping

Reinforcement and punishment are the core tools through which operant behavior is
modified. These terms are defined by their effect on behavior. Either may be positive or
negative, as described below.
Positive reinforcement and negative reinforcement increase the probability of the
behavior they follow, while positive punishment and negative punishment reduce it.
There is an additional procedure, extinction, which occurs when a previously reinforced
behavior is no longer reinforced with either positive or negative reinforcement; during
extinction the behavior becomes less probable. Thus there are a total of five basic
consequences:
1. Positive reinforcement (reinforcement): This occurs when a behavior (response) is
rewarding or the behavior is followed by another stimulus that is rewarding, increasing
the frequency of that behavior.[15] For example, if a rat in a Skinner box gets food when
it presses a lever, its rate of pressing will go up. This procedure is usually called simply
reinforcement.
2. Negative reinforcement (escape): This occurs when a behavior (response) is
followed by the removal of an aversive stimulus, thereby increasing that
behavior's frequency. In the Skinner box experiment, the aversive stimulus might
be a loud noise continuously sounding inside the box; negative reinforcement
would happen when the rat presses a lever, turning off the noise.
3. Positive punishment: (also referred to as "punishment by contingent
stimulation") This occurs when a behavior (response) is followed by an aversive
stimulus, such as pain from a spanking, which results in a decrease in that
behavior. Positive punishment is a rather confusing term, and usually the
procedure is simply called "punishment."
4. Negative punishment (penalty) (also called "Punishment by contingent
withdrawal"): Occurs when a behavior (response) is followed by the removal of a
stimulus, such as taking away a child's toy following an undesired behavior,
resulting in a decrease in that behavior.

5. Extinction: This occurs when a behavior (response) that had previously been
reinforced is no longer effective. For example, a rat is first given food many times
for lever presses. Then, in "extinction", no food is given. Typically the rat
continues to press more and more slowly and eventually stops, at which time
lever pressing is said to be "extinguished."
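The five consequences can be summarized in a toy model in which each procedure nudges the probability of a response up or down. This is a deliberately simplified sketch: the update rule, the step size and the clamping are invented for illustration, not drawn from behavioral research.

```python
# Toy model (illustrative assumptions throughout): each operant procedure
# shifts the probability of a response by a fixed step.

def update_probability(p, procedure, step=0.1):
    """Nudge response probability p according to an operant procedure."""
    if procedure in ("positive reinforcement", "negative reinforcement"):
        p += step          # reinforcement makes the behavior more likely
    elif procedure in ("positive punishment", "negative punishment", "extinction"):
        p -= step          # punishment and extinction make it less likely
    return min(max(p, 0.0), 1.0)   # keep p a valid probability

p = 0.5
p = update_probability(p, "positive reinforcement")   # lever press -> food
p = update_probability(p, "extinction")               # food no longer given
print(round(p, 2))
```

Note that, as the text stresses, it is the response probability that is updated, not the animal itself.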
It is important to note that actors (e.g. the rat) are not spoken of as being reinforced,
punished, or extinguished; it is the actions (e.g. the lever press) that are reinforced,
punished, or extinguished. Also, reinforcement, punishment, and extinction are not
terms whose use is restricted to the laboratory. Naturally occurring consequences can
also reinforce, punish, or extinguish behavior and are not always planned or delivered
by people.
Factors that alter the effectiveness of reinforcement and punishment
The effectiveness of reinforcement and punishment can be changed in various ways.
1. Satiation/Deprivation: The effectiveness of a positive or "appetitive" stimulus
will be reduced if the individual has received enough of that stimulus to satisfy its
appetite. The opposite effect will occur if the individual becomes deprived of that
stimulus: the effectiveness of a consequence will then increase. If someone is
not hungry, food will not be an effective reinforcer for behavior.[16]
2. Immediacy: An immediate consequence is more effective than a delayed
consequence. If one gives a dog a treat for "sitting" right away, the dog will learn
faster than if the treat is given later.[17]
3. Contingency: To be most effective, reinforcement should occur consistently after
responses and not at other times. Learning may be slower if reinforcement is
intermittent, that is, following only some instances of the same response, but
responses reinforced intermittently are usually much slower to extinguish than
are responses that have always been reinforced.[16]
4. Size: The size, or amount, of a stimulus often affects its potency as a reinforcer.
Humans and animals engage in a sort of "cost-benefit" analysis. A tiny amount of
food may not "be worth" an effortful lever press for a rat. A pile of quarters from a
slot machine may keep a gambler pulling the lever longer than a single quarter.

Most of these factors serve biological functions. For example, the process of satiation
helps the organism maintain a stable internal environment (homeostasis). When an
organism has been deprived of sugar, for example, the taste of sugar is a highly
effective reinforcer. However, when the organism's blood sugar reaches or exceeds an
optimum level the taste of sugar becomes less effective, perhaps even aversive.
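Three of these factors (deprivation, immediacy and size) can be combined into a toy effectiveness score. Every weight and formula below is an invented illustration, not an empirical model from the conditioning literature.

```python
# Illustrative sketch: combining deprivation, immediacy and reinforcer size
# into a single score. All functional forms here are assumptions.

def effectiveness(deprivation, delay_s, magnitude):
    """Rough reinforcer-effectiveness score in [0, 1].

    deprivation - 0.0 (satiated) to 1.0 (fully deprived)
    delay_s     - seconds between response and consequence
    magnitude   - amount of the reinforcer, saturating at 10 units
    """
    immediacy = 1.0 / (1.0 + delay_s)      # delayed consequences lose potency
    size = min(magnitude / 10.0, 1.0)      # diminishing returns past 10 units
    return deprivation * immediacy * size

print(effectiveness(1.0, 0, 10))   # deprived, immediate, large reward -> 1.0
print(effectiveness(0.0, 0, 10))   # satiated: food is not reinforcing -> 0.0
```

The multiplicative form captures the qualitative point that any one factor at zero (e.g. full satiation) nullifies the others, though the real interactions are of course more complex.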
Shaping
Shaping is a conditioning method much used in animal training and in teaching nonverbal humans. It depends on operant variability and reinforcement, as described
above. The trainer starts by identifying the desired final (or "target") behavior. Next, the
trainer chooses a behavior that the animal or person already emits with some
probability. The form of this behavior is then gradually changed across successive trials
by reinforcing behaviors that approximate the target behavior more and more closely.
When the target behavior is finally emitted, it may be strengthened and maintained by
the use of a schedule of reinforcement (see below).
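Shaping by successive approximation can be caricatured in a few lines of code: reinforce any response that falls closer to the target than the current behavior, then let the behavior drift toward the reinforced variant. The numbers, the Gaussian variability and the update rule below are illustrative assumptions, not an established model.

```python
# Toy simulation of shaping (illustrative assumptions): responses vary
# randomly around the current behavior; variants closer to the target are
# "reinforced" and become the new behavior.

import random

def shape(target, start, steps=200, rng=None):
    """Drift a behavior from `start` toward `target` by selective reinforcement."""
    rng = rng or random.Random(0)          # fixed seed for reproducibility
    criterion = start
    for _ in range(steps):
        response = criterion + rng.gauss(0, 1)   # behavior varies moment to moment
        if abs(response - target) < abs(criterion - target):
            criterion = response                  # reinforce the closer variant
    return criterion

print(round(shape(target=10.0, start=0.0), 1))
```

Because only closer variants are ever reinforced, the behavior's distance to the target never increases, mirroring the gradual tightening of the trainer's criterion across trials.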
Stimulus control of operant behavior
Though initially operant behavior is emitted without reference to a particular stimulus,
during operant conditioning operants come under the control of stimuli that are present
when behavior is reinforced. Such stimuli are called "discriminative stimuli." A so-called
"three-term contingency" is the result. That is, discriminative stimuli set the occasion for
responses that produce reward or punishment. Thus, a rat may be trained to press a
lever only when a light comes on; a dog rushes to the kitchen when it hears the rattle of
its food bag; a child reaches for candy when she sees it on a table.
Behavioral sequences: conditioned reinforcement and chaining
Most behavior cannot easily be described in terms of individual responses reinforced
one by one. The scope of operant analysis is expanded through the idea of behavioral
chains, which are sequences of responses bound together by the three-term
contingencies defined above. Chaining is based on the fact, experimentally
demonstrated, that a discriminative stimulus not only sets the occasion for subsequent
behavior, but it can also reinforce a behavior that precedes it. That is, a discriminative
stimulus is also a "conditioned reinforcer". For example, the light that sets the occasion
for lever pressing may be used to reinforce "turning around" in the presence of a noise.

This results in the sequence "noise - turn-around - light - press lever - food". Much
longer chains can be built by adding more stimuli and responses.
Escape and avoidance
In escape learning, a behavior terminates an (aversive) stimulus. For example, shielding
one's eyes from sunlight terminates the (aversive) stimulation of bright light in one's
eyes. (This is an example of negative reinforcement, defined above.) Behavior that is
maintained by preventing a stimulus is called "avoidance," as, for example, putting on
sunglasses before going outdoors. Avoidance behavior raises the so-called "avoidance
paradox": how can the non-occurrence of a stimulus serve as a reinforcer? This
question is addressed by several theories of avoidance (see below).
Two kinds of experimental settings are commonly used: discriminated and free-operant
avoidance learning.
Discriminated avoidance learning
A discriminated avoidance experiment involves a series of trials in which a neutral
stimulus such as a light is followed by an aversive stimulus such as a shock. After the
neutral stimulus appears, an operant response such as a lever press prevents or
terminates the aversive stimulus. In early trials the subject does not make the response
until the aversive stimulus has come on, so these early trials are called "escape" trials.
As learning progresses, the subject begins to respond during the neutral stimulus and
thus prevents the aversive stimulus from occurring. Such trials are called "avoidance
trials." This experiment is said to involve classical conditioning, because a neutral CS is
paired with an aversive US; this idea underlies the two-factor theory of avoidance
learning described below.
Free-operant avoidance learning
In free-operant avoidance a subject periodically receives an aversive stimulus (often an
electric shock) unless an operant response is made; the response delays the onset of
the shock. In this situation, unlike discriminated avoidance, no prior stimulus signals the
shock. Two crucial time intervals determine the rate of avoidance learning. The first is
the S-S (shock-shock) interval, the time between successive shocks in the absence of a
response. The second is the R-S (response-shock) interval, which specifies
the time by which an operant response delays the onset of the next shock. Note that

each time the subject performs the operant response, the R-S interval without shock
begins anew.
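The interplay of the two intervals can be sketched in a short simulation. This is an illustrative toy model of free-operant avoidance; the function, its interface and the default interval lengths are our own assumptions, not an established formalization.

```python
# Toy simulation (illustrative assumptions): shocks recur every `ss` seconds
# unless a response resets the clock to `rs` seconds (the R-S interval).

def shock_times(response_times, ss=5, rs=20, session_end=60):
    """Return the times at which shocks occur, given the subject's response times."""
    shocks = []
    next_shock = ss                               # first shock after one S-S interval
    pending = iter(sorted(response_times) + [float("inf")])
    r = next(pending)
    while next_shock <= session_end:
        if r <= next_shock:                       # a response before the shock...
            next_shock = r + rs                   # ...postpones it by the R-S interval
            r = next(pending)
        else:
            shocks.append(next_shock)
            next_shock += ss                      # after a shock, the S-S interval applies
    return shocks

print(shock_times([]))    # no responses: shocks every 5 seconds
print(shock_times([4]))   # one early response delays the first shock to t = 24
```

The sketch makes the reinforcement contingency visible: each well-timed response replaces a short S-S interval with a longer shock-free R-S interval.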
Two-process theory of avoidance
This theory was originally proposed in order to explain discriminated avoidance
learning, in which an organism learns to avoid an aversive stimulus by escaping from a
signal for that stimulus. Two processes are involved: classical conditioning of the signal
followed by operant conditioning of the escape response: a) Classical conditioning of
fear. Initially the organism experiences the pairing of a CS (conditioned stimulus) with
an aversive US (unconditioned stimulus). The theory assumes that this pairing creates
an association between the CS and the US through classical conditioning and, because
of the aversive nature of the US, the CS comes to elicit a conditioned emotional reaction
(CER) "fear." b) Reinforcement of the operant response by fear-reduction. As a result
of the first process, the CS now signals fear; this unpleasant emotional reaction serves
to motivate operant responses, and responses that terminate the CS are reinforced by
fear termination. Note that the theory does not say that the organism "avoids" the US in
the sense of anticipating it, but rather that the organism "escapes" an aversive internal
state that is caused by the CS. Several experimental findings seem to run counter to
two-factor theory. For example, avoidance behavior often extinguishes very slowly even
when the initial CS-US pairing never occurs again, so the fear response might be
expected to extinguish (see Classical conditioning). Further, animals that have learned
to avoid often show little evidence of fear, suggesting that escape from fear is not
necessary to maintain avoidance behavior.[21]
Operant or "one-factor" theory
Some theorists suggest that avoidance behavior may simply be a special case of
operant behavior maintained by its consequences. In this view the idea of
"consequences" is expanded to include sensitivity to a pattern of events. Thus, in
avoidance, the consequence of a response is a reduction in the rate of aversive
stimulation. Indeed, experimental evidence suggests that a "missed shock" is detected
as a stimulus, and can act as a reinforcer.[22] Cognitive theories of avoidance take this
idea a step farther. For example, a rat comes to "expect" shock if it fails to press a lever
and to "expect no shock" if it presses it, and avoidance behavior is strengthened if these
expectancies are confirmed.[23][24]

Some other terms and procedures

Noncontingent reinforcement
Noncontingent reinforcement is the delivery of reinforcing stimuli regardless of the
organism's behavior. Noncontingent reinforcement may be used in an attempt to reduce
an undesired target behavior by reinforcing multiple alternative responses while
extinguishing the target response.[18] As no measured behavior is identified as being
strengthened, there is controversy surrounding the use of the term noncontingent
"reinforcement".[19]
Schedules of reinforcement
Schedules of reinforcement are rules that control the delivery of reinforcement. The
rules specify either the time that reinforcement is to be made available, or the number of
responses to be made, or both. Many rules are possible, but the following are the most
basic and commonly used:

- Fixed interval schedule: Reinforcement occurs following the first response after a
fixed time has elapsed since the previous reinforcement.
- Variable interval schedule: Reinforcement occurs following the first response after a
variable time has elapsed since the previous reinforcement.
- Fixed ratio schedule: Reinforcement occurs after a fixed number of responses have
been emitted since the previous reinforcement.
- Variable ratio schedule: Reinforcement occurs after a variable number of responses
have been emitted since the previous reinforcement.
- Continuous reinforcement: Reinforcement occurs after each response.[20]
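These delivery rules are simple enough to encode directly. In the sketch below, the `due` function and its interface are illustrative inventions; for the variable schedules, the requirement would additionally be redrawn around a mean after every reinforcement.

```python
# Illustrative encoding of the basic reinforcement schedules as delivery rules.
# The interface is an assumption made for this sketch, not standard terminology.

def due(schedule, responses, elapsed, requirement):
    """Is reinforcement due under the named schedule?

    responses   - responses emitted since the last reinforcement
    elapsed     - seconds since the last reinforcement
    requirement - the ratio count or interval length for the schedule
    """
    if schedule == "continuous":
        return responses >= 1                 # every response is reinforced
    if schedule in ("fixed ratio", "variable ratio"):
        # for a variable ratio, `requirement` would be redrawn around a mean
        # after each reinforcement
        return responses >= requirement
    if schedule in ("fixed interval", "variable interval"):
        # reinforcement follows the first response after the interval elapses
        return responses >= 1 and elapsed >= requirement
    raise ValueError(f"unknown schedule: {schedule}")

print(due("fixed ratio", responses=5, elapsed=2, requirement=5))     # True
print(due("fixed interval", responses=1, elapsed=3, requirement=10)) # False
```

Note that on interval schedules a response is still required; the elapsed time alone makes reinforcement available but does not deliver it.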

Discrimination, generalization & context

Most behavior is under stimulus control. Several aspects of this may be distinguished:

- "Discrimination" typically occurs when a response is reinforced only in the presence
of a specific stimulus. For example, a pigeon might be fed for pecking at a red light and
not at a green light; in consequence, it pecks at red and stops pecking at green. Many
complex combinations of stimuli and other conditions have been studied; for example,
an organism might be reinforced on an interval schedule in the presence of one
stimulus and on a ratio schedule in the presence of another.
- "Generalization" is the tendency to respond to stimuli that are similar to a previously
trained discriminative stimulus. For example, having been trained to peck at "red", a
pigeon might also peck at "pink", though usually less strongly.
- "Context" refers to stimuli that are continuously present in a situation, like the walls,
tables, chairs, etc. in a room, or the interior of an operant conditioning chamber.
Context stimuli may come to control behavior as discriminative stimuli do, though
usually more weakly. Behaviors learned in one context may be absent, or altered, in
another. This may cause difficulties for behavioral therapy, because behaviors learned
in the therapeutic setting may fail to occur elsewhere.

Operant hoarding
Operant hoarding refers to the observation that rats reinforced in a certain way may
allow food pellets to accumulate in a food tray instead of retrieving those pellets. In this
procedure, retrieval of the pellets always instituted a one-minute period
of extinction during which no additional food pellets were available but those that had
been accumulated earlier could be consumed. This finding appears to contradict the
usual finding that rats behave impulsively in situations in which there is a choice
between a smaller food object right away and a larger food object after some delay.
See schedules of reinforcement.[21]
Operant conditioning to change human behavior
Applied behavior analysis is the discipline initiated by B. F. Skinner that applies the
principles of conditioning to the modification of socially significant human behavior. It
uses the basic concepts of conditioning theory, including conditioned stimulus (SC),
discriminative stimulus (Sd), response (R), and reinforcing stimulus (Srein or Sr for
reinforcers, sometimes Save for aversive stimuli).[22] A conditioned stimulus controls
behaviors developed through respondent (classical) conditioning, such as emotional
reactions. The other three terms combine to form Skinner's "three-term contingency": a
discriminative stimulus sets the occasion for responses that lead to reinforcement.
Researchers have found the following protocol to be effective when using the tools of
operant conditioning to modify human behavior:

1. State the goal: Clarify exactly what changes are to be brought about. For example,
"reduce weight by 30 pounds."
2. Monitor behavior: Keep track of behavior so that one can see whether the desired
effects are occurring. For example, keep a chart of daily weights.
3. Reinforce the desired behavior: For example, congratulate the individual on weight
losses. With humans, a record of behavior may serve as a reinforcement. For example,
when a participant sees a pattern of weight loss, this may reinforce continuance in a
behavioral weight-loss program. A more general plan is the token economy, an
exchange system in which tokens are given as rewards for desired behaviors. Tokens
may later be exchanged for a desired prize or rewards such as power, prestige, goods
or services.
4. Reduce incentives to perform undesirable behavior: For example, remove candy
and fatty snacks from kitchen shelves.
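Steps 2 and 3 of the protocol can be illustrated with a minimal token-economy sketch; the sample data, token values and helper function are invented for the example.

```python
# Illustrative sketch of monitoring behavior and reinforcing progress with
# tokens. The weigh-in data and the one-token-per-pound rule are assumptions.

def award_tokens(weights, tokens_per_pound=1):
    """Award tokens for each weight drop between consecutive weigh-ins."""
    tokens = 0
    for prev, cur in zip(weights, weights[1:]):
        if cur < prev:
            tokens += tokens_per_pound * (prev - cur)   # reinforce each loss
    return tokens

daily_weights = [200, 199, 199, 197, 198, 196]   # the monitored record (step 2)
print(award_tokens(daily_weights))               # the reinforcement (step 3)
```

Here the chart of daily weights serves double duty, as the text notes: it is both the monitoring record and, when it shows progress, a reinforcer in its own right.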

Divergent Thinking

Divergent thinking is related to creativity and involves a broad search for solutions to
problems that have no single correct answer. In the process of divergent thinking, the
individual must find several combinations of elements that might provide possible
answers. Fluid thinking and originality are the key characteristics needed to embark on
a divergent search for alternative solutions, as opposed to following a strict regimen of
applying criteria and using steps for finding the one true answer to a problem, which is
known as "convergent thinking."
Divergent thinking as a concept was developed by J.P. Guilford, a psychologist, in the
1950s. Guilford saw divergent thinking as a major factor in manifesting creativity. The
pioneering psychologist identified four main attributes of divergent thinking: fluency, the
ability to produce many ideas or solutions to problems in a short period of time;
flexibility, the capacity to evaluate many approaches to a single problem at the same
time; originality, a tendency to produce ideas that deviate from those of the majority of
other people; and elaboration, the capacity to use thought processes to identify the
steps to an idea as well as carry them out.

Edward De Bono described divergent thinking as containing elements of both vertical
and lateral thinking. De Bono described vertical thinking as digging the same hole
deeper, while lateral thinking is about finding a different place to dig the hole. If the hole
is in the wrong place, no amount of further digging will put it in the right place. This
means that while a creative thinker must consider the duality necessitated by the
thought process, his model will not work unless he is digging in the correct spot. This is
why some people speak of divergent thinking as broadening the thought process, rather
than rejecting out of hand the idea that problem-solving might necessitate certain
restraints.
In 1962, Mary Henle outlined her idea of the conditions necessary to the process of
creative thinking. According to Henle, there are five conditions for divergent thinking:
receptivity, immersion, seeing questions, utilization of errors and detached devotion.
Receptivity is about becoming detached from whatever one is thinking/doing in order to
pay attention to ideas that occur. Immersion is about staying within the field of
knowledge related to the subject and using all knowledge toward solving the problem.
Seeing the question that no one else sees shows the ability to process information in a
broader context than the one in which it has been presented. Utilization of errors helps
to direct the creative thought processes by eliminating certain paths so that energies
can be directed elsewhere and concentrated. Detached devotion is intense motivation
to solve the problem while remaining detached enough to see the problem in a fresh
light.
Other experts speak of six-stage or nine-stage processes used by divergent thinkers as
part of the creative problem-solving process. The six-stage process includes finding the
objectives, facts, problems, ideas, solutions and acceptance. This type of problem-solving is about seeking data and narrowing them down. Divergent thinking is used to
generate a list of issues, while convergent thinking is used to pinpoint the issues that
are likely to yield answers.
The six stages in a nutshell are:
1. Objective-finding: defining the problem area
2. Fact-finding: gathering information
3. Problem-finding: defining the problem with accuracy
4. Idea-finding: generating solutions
5. Solution-finding: evaluating all possible solutions and choosing the likeliest
6. Acceptance-finding: implementing the chosen solutions the correct way
The nine-stage process for creative problem-solving is an extension of the six-stage
process. The outline of the nine-stage process is as follows:
1. Constant analysis of the environment in order to spot potential problems
2. Objective-finding: defining the problem area
3. Fact-finding: gathering information
4. Problem-finding: defining the problem with accuracy
5. Pinpointing any assumptions
6. Idea-finding: generating solutions
7. Solution-finding: evaluating all possible solutions and choosing the likeliest
8. Acceptance-finding: implementing the chosen solutions the correct way
9. Controlling to ascertain that all objectives have been achieved after implementation
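Either version of the process can be read as a pipeline that alternates divergent steps (generating options) with convergent steps (narrowing them down). The Python sketch below is only an illustration of that alternation; the stage functions, the sample ideas, and the scoring criterion are all invented:

```python
# Sketch of divergent vs. convergent stages in a creative
# problem-solving pipeline. Sample ideas and the scoring
# criterion are hypothetical.

def divergent(options):
    """Broad search: collect many candidates without judging them."""
    return list(options)

def convergent(candidates, score):
    """Narrowing: apply a criterion and keep the likeliest candidate."""
    return max(candidates, key=score)

# Idea-finding (divergent) followed by solution-finding (convergent):
ideas = divergent(["carpool", "bike to work", "work from home"])
solution = convergent(ideas, score=len)  # toy criterion: most elaborate idea
```

The point of the structure is that judgment is deferred: the divergent step never filters, and the convergent step never generates.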
It may not be possible or necessary to use all steps of these processes in order to arrive
at good solutions. Often, for instance, instead of one having to look for an issue or problem,
one lands in one's lap. At other times, the best solution to a problem stares one in
the face, and there is no need to consider any other idea.

Deep Processing (in Cognitive Learning)

Deep processing, in cognitive learning, refers to detailed, intensive thinking that brings about the
formation of memory representations, also known as learning. Learning cannot take
place without deep processing of information. Psychologists refer to "levels of
processing," in which deep processing is compared to a more superficial, or shallow,
level of processing.
Some experts state that different students have different styles of learning (processing),
while others conclude that whether a student performs deep or shallow
processing is a matter of choice. These experts believe that the student adapts his style
of learning to the type of material to be learned and to his own expectations of what is to
be gained by covering the material. Studies have shown that patterns of neural activity
change according to the type of study undertaken: shallow versus deep.
An attempt has been made to find a cognitive neuroscientific explanation to describe the
effects of these levels. Processing that draws on the powers of verbal speech is a kind
of deep processing that appears to activate certain regions of the brain's left frontal
cortex. More automatic tasks, such as those related to phonological
processing, and other shallow encoding tasks do not activate these
regions. Scientists feel that since these shallow tasks do not require the individual to
form a representation of information in the ventral prefrontal cortex, it may be that such
tasks are not efficient at forming memories. The information may not "sink in."
In one study, scientists manipulated processing by presenting half the information in
visual format and half in an auditory format. In both cases, the study subjects were
charged with word retrieval tasks. The researchers discovered that the subjects who
had engaged in deep processing had better recall performance than those who had
learned on a shallower, more superficial level.
Other studies have shown that the response time for retrieving information is longer for
a shallow encoding task in comparison with an encoding task involving deep meaning.
The type of processing used is a predictor of subsequent memory. Deep encoding
consistently produces better memory performance than shallow encoding.
In their 1997 work, Learning and Awareness, Ference Marton and Shirley Booth
developed a theory that one can actively apply deep processing to learning by using all
one's abilities to the fullest to process material to great depths. The converse of the

theory is that one can apply a more superficial processing strategy using the same
abilities, but in a more cursory fashion, to cover only the surface of the information.
In another study from 1976, entitled On Qualitative Differences in Learning, Marton and
Roger Saljo had students read short passages of text and relate what they had learned.
The researchers found that students learned in one of two different styles. Some of the
students employed the deep approach to the materials while others chose a shallow
approach. Each approach was linked to an expected learning outcome. The
researchers concluded that the chosen approach had less to do with personality
differences in students than with the perceived relationship between the learner and the
task.
When the students were questioned about their approaches to the text, researchers
found that a chosen approach had to do with what students expected to gain from the
reading material. Students articulated one or the other of two different intentions: to gain
an understanding of the meaning of the text or to remember important points and terms
with accuracy in anticipation of subsequent questioning on these details. The students
who processed the reading material for meaning focused on ideas and themes, while
those who strove for memorizing details were more focused on words and phrases.
The first approach represents deep processing, while the second approach represents
shallow processing. The deep approach generated a higher-level recounting of the
material with details used only to illustrate and support ideas, while the shallow
approach missed the connections that linked the facts together and as a result, led to an
inability to identify the main idea of the story. The researchers concluded that the
processing differences had less to do with the individuals than with the perceived
relationship between student and task. Marton called this concept "phenomenography."
In a study performed in 2000, called Promoting deep learning through teaching and
assessment: conceptual frameworks and educational contexts, Noel Entwistle studied
Aboriginal university students who had acquired skills through observing and imitating
others. At first, these activities might be undertaken without understanding, which may
suggest shallow processing. However, if these skills are then applied and used for
problem-solving or for learning more about interesting subjects, they could be said to
represent a deep approach.

Other researchers believe that a student who is interested in and engaged with the
material will employ deep processing while the student who is bored will choose a
shallow approach to the material. These researchers believe these choices reflect
neither personality nor ability. Rather, these approaches are chosen according to the
students' environmental stimuli.

Cooperative Learning

Cooperative learning is a teaching strategy for organizing classroom activities. Grouped
into small teams, pupils work together to achieve shared goals. This structured group is
an effective tool to address learning, organizational and communication problems at
school.
The early forms of cooperative learning appeared in the 18th century, pioneered
by English educators Joseph Lancaster and Andrew Bell. The Common School
Movement in the United States in the early 19th century laid a strong emphasis on
collective learning methods. The promotion of cooperative learning in the 20th century
was marked by the work done by John Dewey, and later Alice Miel and Herbert Thelen.
It was in the 1950s that American sociologist James S. Coleman campaigned for
cooperative learning as a technique to reduce competition in schools and to minimize
negative components in the education system. He took the view that competition
impeded the process of learning. Instead, teachers should resort to a more collaborative
approach to education. The 1966 Coleman report, the first government-commissioned
study of private and public schools in the United States, elaborated on the equality of
educational opportunities. Coleman coined the terms "climate of values" and
"adolescent society," which had an impact on future understanding of collective learning.
In the 1960s, the U.S. education system showed clear preferences for individual
learning methods. Cooperative learning experienced a revival in the 1980s, when
teachers returned to the extensive use of this method.

Robert Slavin published research on cooperative learning in 1994, referring to the
cooperative learning method as Student Team Learning. Defining cooperative learning
as "instructional programs in which students work in small groups to help one another
master academic content," Slavin praised it as a way to capitalize on "the
developmental characteristics of adolescents in order to harness their peer orientation,
enthusiasm, activity, and craving for independence within a safe structure."
Cooperative learning shifts the focus in teaching from lecturing to interaction. The
teacher serves as a facilitator and observer during all cooperative learning activities.
Although the teacher's role is not so overtly dominant, he or she remains actively
involved. Teachers are expected to join the student groups for brief periods to facilitate
the learning process and to make sure that students do not digress from the task. They
should also be available to answer questions that may come from students.
James A. Duplass (2006) enumerated the characteristics of cooperative learning as
follows: heterogeneous groups, positive interdependence, face-to-face interaction,
individual accountability, social skills and group processing.
Some of the techniques used in cooperative learning include: jigsaw, student teams
achievement divisions, think-pair-share, numbered heads together, three-step interview,
round robin, inside-outside circle, and round table.
Cooperative learning has proved to boost academic achievement, improve behavior and
attendance, increase self-esteem and motivation. The method promotes the use of
critical thinking skills and peer coaching.
Groups composed of students of different levels help the participants capitalize on
each other's knowledge and skills and gain from each other's efforts. The teacher has to
make sure that the teams have a diversity of viewpoints, abilities, gender, race and
other characteristics.
This method of learning promotes teamwork. Students recognize that all group
members share a common fate. While independence and accountability are considered
to be the major assets offered by this learning method, they are coupled with positive
interdependence and cooperation. The method appears to be particularly effective in
work with Hispanic American and Native American students, whose cultures are more

oriented toward cooperation and sharing, as research done by Myra Pollack Sadker
(1991) has shown.
Teachers are sometimes reluctant to use cooperative learning as they have to give up
part of their control. While the method is believed to be beneficial for gifted students,
slow learners may feel intimidated. Quiet students may also feel uncomfortable in such
a situation. The teacher has to ensure balance of power and prevent more dominant
students from taking over the team. Psychologists also argue that this method can place
greater burden on children by making them responsible for each other's learning.

Dual-Coding Theory

Dual coding theory was developed by Allan Paivio in 1971 and expanded in 1986 to explain an
aspect of human cognition. This theory states that human recall and recognition are
enhanced when nonverbal information is accompanied by verbal information. For
example, if you show someone an image of a boy marked with the word boy and the
word boy is spoken aloud at the same time, the person shown the image will better
recognize and recall that image at a future point in time. The converse of this theory is
that the recognition and recall of information will be weakened if only one medium of
input is utilized.
Proponents of using technological advances to aid education use the dual coding theory
to promote their views. They claim that dual coding lends justification to using
multimedia applications in the classroom. Such multimedia applications make use of
text, image, audio and video at the same time while traditional teaching methods remain
focused on verbal presentation of material.
Paivio's theory of dual coding can be explained by the fact that there are levels to
understanding. Verbal processing is the level that focuses on language while nonverbal
processing is the level that focuses on the representations of nonverbal events. Both
levels are important to the process of learning in humans. When these levels are
combined, they stimulate an enhancement of human cognition and deduction.

The dual coding theory was designed to demonstrate the positive effects of
concreteness in verbal processing. There is a representational difference between
abstract and concrete words. The strength of the dual coding theory lies in its ability to
explain why it is easier to recall concrete verbal materials compared to abstract
materials.
Paivio advocated creating an imagery code. The researcher believed that developing
such a code would appear to be the logical strategy for understanding and remembering
sentences. However, in the case of an abstract sentence, there may not be an
available, useful image that can accompany the verbal information. In this case, a
verbal code may be all that a person can deduce.
Opponents of the dual coding theory -- and there are many -- make the case that
abstract and concrete sentences are not equally understood. They claim that the
difference in the ability to comprehend these two types of sentences might account for
the observed pattern that has been labeled dual coding. Even so, the dual coding theory
still presents the idea that concreteness and comprehensibility bear a relationship and
as such, merit a good explanation.
Paivio's dual coding theory posits a verbal system he terms the logogen system that is
distinct in structure and function from the image system he calls the imagen system.
The logogen system was said by Paivio to be a system of logogens that corresponds
somewhat to words and interconnects through an associative network of related
information. These networks develop through associative experiences. This means that
as we use and hear language, we draw associations from word to word. Spoken and
written language are thought to activate logogens in the most direct and immediate
manner.
On the other hand, the imagen system consists of sensory images that suggest
characteristics of the original forms from which they arose. This system contains various
permutations of partial to whole relationships that come from the sensory experience.
The direct activation of imagens is said to be triggered by visual-spatial information.
The logogen and imagen systems can act together or alone as one processes
language. There are links that can be used to cross-reference one system so that it
activates the other. Concrete words are thought to have a stronger reference to the

imagen system than do abstract words while the verbal associative links are not
dissimilar in these two word types.
When one is reading or listening to textual/word material, that person's logogen system
requires activation. However, in some cases, the imagen system is also activated, in
particular, by concrete verbal material. Paivio said, "Precisely which images or
descriptions will be activated at any moment depends upon the stimulus context
interacting with the relative functional strength of the different referential connections."
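The two-system architecture, with referential links joining the systems, can be caricatured in a few lines of code. The Python sketch below is a loose illustration only; the word lists, link strengths, and activation threshold are all invented and are not part of Paivio's formulation:

```python
# Toy caricature of Paivio's dual-coding model: a verbal (logogen)
# system and an imagery (imagen) system joined by referential links.
# All words, links, and strengths below are hypothetical.

# Associative network within the logogen system (word-to-word links).
logogens = {"dog": {"cat", "bone"}, "justice": {"law", "fairness"}}

# Referential links into the imagen system; concrete words get strong
# links, abstract words weak ones.
referential = {"dog": 0.9, "justice": 0.1}

def codes_activated(word, threshold=0.5):
    """A word always activates its logogen; it also activates an
    imagen when its referential link is strong enough."""
    codes = ["verbal"]
    if referential.get(word, 0.0) >= threshold:
        codes.append("imagery")
    return codes

concrete = codes_activated("dog")      # both codes engage
abstract = codes_activated("justice")  # verbal code only
```

On this caricature, dual coding's prediction that concrete material is better recalled corresponds to concrete words engaging both codes rather than one.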

Expectancy Theory

Expectancy theory is a motivation theory in organizational psychology which postulates
that individuals can be motivated to adopt a specific behavior if they have certain
expectations. This theory aims to explain the person's behavior at work and its
correlation with his or her goals. Guided by self-interest, individuals adopt courses of
action meant to maximize desirable outcome for their own benefit.
Expectancy theory is used to measure job satisfaction, occupational choice, the
likelihood of staying in the same position and the likelihood of success. This model also
helps managers realize that their employees are not naturally productive or
nonproductive and enables them to establish a highly motivational working environment.
According to the theory, workers' motivation can be increased via rewards and
incentives. Hence, the theory is a method of maximizing satisfaction and minimizing
dissatisfaction. It relies heavily on expectations and perceptions rather than measurable
facts. The theory is praised for bringing to the fore the role of rewards and pay-offs.
Expectancy theory was first used to explain organizational behavior by an American
business school professor, Victor Vroom, in his book Work and Motivation (1964). His
motivational model was distinctly different from previously developed concepts in
organizational psychology. In particular, Abraham Maslow and Frederick Herzberg
discussed how internal needs and resultant efforts are closely related. In his A Theory
of Human Motivation (1943), Maslow elaborated on a hierarchy of needs, often
portrayed in the shape of a pyramid with the most fundamental levels of needs at the
bottom, and the need for self-actualization at the top. Herzberg proposed the

Motivation-Hygiene Theory, also known as the Two Factor Theory (1959) of job
satisfaction. According to his findings, people are influenced by two sets of factors:
motivation factors, including achievement and recognition; and hygiene factors, such as
pay and benefits, supervision and job security.
Unlike preceding motivation theories, Vroom's expectancy theory identified the outcome
rather than needs as a major factor in motivation. Vroom found that motivation is
predetermined by individual factors: skills, knowledge, experience and abilities. The
elements contributing to the employee's motivation are as follows: positive correlation
between efforts and performance; performance leads to reward; the reward satisfies
important needs.
Vroom built his theory on three variables: Valence, Expectancy and Instrumentality.
These variables interact psychologically in line with Vroom's formula: Motivation =
Valence x Expectancy x Instrumentality. Expectancy describes the relationship between
efforts and performance. It stands for the belief that efforts result in the achievement of
the desired goals. One's past experience, confidence, and the perceived level of
difficulty of the goal, are prerequisites for expectancy.
Instrumentality concerns the relationship between performance and reward. It is the
individual's belief that he or she will be rewarded in case of meeting the desired
performance goal. The achievement of the goal may be rewarded by a pay raise,
promotion or sense of accomplishment. However, rewards have to be used wisely, as
when all performances are rewarded, instrumentality can be low. Valence is the value
the person attributes to the rewards. The individual's valence is closely related to his or
her values, needs, goals, preferences and sources of motivation.
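Because Vroom's formula is multiplicative, a zero in any one variable collapses motivation entirely, however high the other two are. The Python sketch below illustrates this; the 0-to-1 scales and the sample scores are illustrative assumptions, not part of Vroom's original formulation:

```python
def motivational_force(expectancy, instrumentality, valence):
    """Vroom: Motivation = Valence x Expectancy x Instrumentality.

    expectancy:      belief that effort leads to the target performance (0..1)
    instrumentality: belief that performance leads to the reward (0..1)
    valence:         value the person places on the reward (here 0..1)
    (All scales are assumptions for this sketch.)
    """
    return valence * expectancy * instrumentality

# A multiplicative model means one weak link nullifies the others:
high = motivational_force(expectancy=0.9, instrumentality=0.8, valence=1.0)
none = motivational_force(expectancy=0.9, instrumentality=0.0, valence=1.0)
```

The second call models the case the text describes: when the worker does not believe performance will be rewarded (instrumentality of zero), no amount of confidence or desire for the reward produces motivation.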
Building upon Vroom's model, Lawler and Porter developed a new expectancy theory
model in Managerial Attitudes and Performance (1968), discovering additional aspects
of expectancy theory. Influenced by Maslow's idea of the importance of needs for
motivation, they held the view that each person has a stable set of preferences over
time. Lawler and Porter also categorized rewards as intrinsic and extrinsic. While
intrinsic rewards are the sense of achievement, extrinsic rewards translate into bonuses
and pay raises.
W.F. Maloney and J.M. McFillen (1986) carried out research into the application of the
expectancy theory model as regards the motivation of construction workers. According

to their definition, worker expectancy means a good match between the employee and
the tasks. Worker instrumentality means the worker's awareness that any improvement
in his or her performance would result in achievement of the goal.
Some criticize expectancy theory for being too simplistic while other scholars argue that
few individuals see clearly the correlation between performance and rewards.
Furthermore, many organizations do not establish a direct link between performance
and rewards. Although Human Resource experts have welcomed the recommendations
of expectancy theory, analysts criticize it for laying a strong emphasis on extrinsic
rewards.

Experiential Learning

In A Handbook of Reflective and Experiential Learning: Theory and Practice, Jennifer A.
Moon notes that all learning is, in effect, learning from experience. However,
experiential learning is recognized to depend upon theory as much as practice because
theory places experience into context and permits extrapolation. A very simple example
of experiential learning follows.
A father has spent some months telling his young daughter, "Don't touch! Hot!" about
coffee cups, soup bowls, pots and pans full of food cooking on the stove. But the child is
strong, inquisitive and stubborn, and more than once he has barely been able to keep
her from scalding herself. So he decides to set her up with a cup of hot liquid before she
pulls a stock pot of soup off the stove and all over herself. The cup is full of liquid that is
only unpleasantly hot for a child's tender skin, certainly not scalding; to him, it is little
more than lukewarm.
After being told yet again, "Don't touch, hot!" the child is allowed to touch the cup, which
she proceeds to pull off the table and onto herself. A squall of tears follows, more
surprised than pained, and her father quickly wipes her off with a washcloth soaked in
cool water. The redness vanishes within minutes, but the lesson is learned and the child
now knows several concepts, including that of "hot." The next time she reaches for a
cup, she asks, "Hot?" and her father checks the cup, saying, "Yes, too hot."

This small example illustrates David Kolb's four-stage theory of learning, presented at
Learning Theories.com:
1. The child has an unpleasant experience.
2. She observes that the word "hot" refers to something unpleasant.
3. She forms an abstract concept of hot: Hot can mean her father's coffee, or a pot of
soup simmering on the stove, the radiator in the winter or a luxurious soak in the tub.
4. She starts testing the meaning of the word "hot," and the degrees of hot, in different
situations.
The difference between the simple experience of dousing herself with coffee, and
making that experience an incident of experiential learning is that the father uses the
experience to help his daughter understand a word, "hot," and the concept behind the
word.
Experiential learning has a long history. It was inherent to the old guild and apprentice
systems, wherein aspiring young craftsmen served as apprentices to established guild
members and had to acquire a certain level of skill before they were admitted to the
guild. It is also visible in craft hobbies, such as woodworking or knitting, where there is a
great deal of self-taught learning. And experiential learning is crucial to scientific
education, with its insistence on results that can be observed and duplicated.
Despite this rich history, there is no universally recognized definition of experiential
learning. Moon offers at least 11, including this one: "The received professional ideology
of experiential learning is that it empowers individuals to gain control over learning and
hence their lives and to take responsibility for themselves. [It is] widely regarded as
empowering learners perhaps in ways that non-experiential learning does not."
Yet the father of modern experiential learning, the American educator and philosopher
John Dewey (1859-1952), understood that scholarly learning was central to human
existence. He wrote in Democracy and Education, "With the growth of civilization, the
gap between the original capacities of the immature and the standards and customs of
the elders increases. Mere physical growing up, mere mastery of the bare necessities of
subsistence will not suffice to reproduce the life of the group. Deliberate effort and the

taking of thoughtful pains are required." However, Dewey understood that "Schools are,
indeed, one important method of the transmission which forms the dispositions of the
immature; but ... Only as we have grasped the necessity of more fundamental and
persistent modes of tuition can we make sure of placing the scholastic methods in their
true context."
Kurt Hahn (1886-1974), another influential founder of experiential learning, placed
scholarly and experiential learning in their proper relationship to each other in his
Harrogate Address of 1965. Hahn, who fled Hitler's Germany because of both his
politics and his Judaism, founded the Gordonstoun School in Scotland in 1934 and
Outward Bound in Wales in 1941; he also cofounded the Salem School in 1920 with
Prince Max von Baden. In his 1965 Harrogate speech, Hahn refers to the wise pride of
Prince Max that there was nothing original about the Salem School; they had simply
"stolen" the best they could find, from the Boy Scouts to Plato.
"In medicine," Hahn begins by quoting the Prince, "as in education, you must harvest
the wisdom of a thousand years. If you ever come across a surgeon who wants to take
out your appendix in the most original manner possible, I strongly advise you to go to
another surgeon."

Extinction and Reconditioning (in Conditioning)

A creature's behavior is molded by circumstances both planned and happenstance. The
two processes known as reinforcement and extinction interact to eliminate some
behaviors and reinforce others.
Reinforcement is the indispensable condition for strengthening reactions. Its effect is
exercised in the presence of all stimuli existent at the time it occurs. Some stimuli such
as temperature, smell and momentary illumination may be irrelevant to reinforcing
behavior, and are subsequently ignored in favor of more relevant stimuli. Eventually, a
particular stimulus will be used to modify the behavior to the exclusion of other less
relevant stimuli.

If all stimuli were used to create a new behavior, then energy would be wasted, time lost
and the organism would have less of a chance of survival and opportunity to pass on its
genes. On the other hand, the adaptability of behavior to critical stimuli depends on
reducing the responses to noncritical stimuli. This decline in the strength of reaction by
withholding reinforcement is known as extinction. It does not necessarily define a
situation in which the response in a creature has been reduced to zero.
Many experiments have been carried out to modify behavior using the hypotheses of
reinforcement and extinction. Although some of these studies have been successful,
they have not led to entirely predictable results. The reason for this is that, like many
other areas of human psychology, much depends on the individual.
For example, Pavlov carried out an experiment in which he stimulated a dog's skin
and then fed it two minutes later. After the experiment was repeated several times, the
dog salivated about two minutes after its skin was stimulated, even though it received
no food. The conditioned stimulus in this case was not "skin stimulation," but "2-minute-old memory of past skin stimulation." This phenomenon, which is referred to as a
memory reflex, can be established more readily in children than in animals.
In another experiment, a child who was afraid of rabbits was put in a room, together with
a caged rabbit, while he was eating. At the beginning of the experiment, the rabbit was
placed at the far end of a long room; every day, it was brought slightly nearer to the
child. Eventually, the child was able to play with the rabbit happily as he associated the
rabbit with a happy stimulus like eating.
Krasnogorski performed another type of test on children. He trained a child to open his
mouth to receive candy whenever a certain point on his arm was touched a few minutes
after a bell had rung. Repeated touching of the arm without the bell, as well as hearing
the bell alone, failed to produce the desired reaction. This reaction was labeled by
Krasnogorski as storing and discharge, as it appeared that the stimulation from the bell
was stored in the nervous system until the arm was touched, and then discharged in the
form of the mouth reaction.
However, it is not entirely proven that behavior can always be reinforced and then
removed on a whim. In another study, for example, college students were conditioned to
a light and then a puff of air being blown on their cornea to make them blink. There were
three different groups, one in which participants had 100 percent reinforcement in their

trials, because a puff always followed the light; a second group that received 50 percent
reinforcement; and then a group subjected to extinction trials.
This group had 100 percent reinforcement for the first set of trials, then a batch of 48
non-reinforced trials, and then 24 extinction trials. The results showed that non-reinforced trials do not always lead to behavior extinction. Furthermore, behavior is
more likely to be reinforced over a longer-term experiment than a shorter-term one.
The researchers did conclude, however, that extinction of behavior is more likely to
occur when the behavior is not always reinforced as vigorously. Overall, the reinforcement or extinction of behavior has little to do with the number of times reinforcement or extinction occurs, but instead relies heavily on the individual's inner strength.

Howard Gardner

Howard Gardner is an American psychologist, Harvard University professor and, most notably, the creator of the theory of multiple intelligences. Born in 1943 and raised in Scranton, Pennsylvania, Gardner has focused his research predominantly on the nature of human intelligence and on the nature and development of abilities in the arts and how they relate to intelligence. For many years, Gardner conducted research into symbol-using capacities in normal and gifted children, and in adults with brain damage.
Gardner's efforts to combine these areas of work resulted in his theory of multiple types
of intelligence, which he introduced in his 1983 book Frames of Mind. The book draws
upon research in neuropsychology and proposes that there are seven types of
intelligence, which are located in different areas of the brain. He concluded that intelligence is not a single factor underlying different abilities, countering the previously held belief upon which intelligence tests had been based.
In the 1980s Gardner became involved in educational reform in the United States.
During this period he began teaching at the Harvard School of Education, where he
became co-director of Harvard Project Zero, a study looking at human cognition within
the field of arts. Gardner and his colleagues mainly focused on designing performance-

based tests which apply the theory of multiple types of intelligence in order to create
more individualized teaching and testing methods. In his book Artful Scribbles: The
Significance of Children's Drawings, Gardner investigates the growth of creativity in
young children, and its decline as they age and mature. He concludes by examining the
questions and results that arise from this sequence of development.
Gardner continued his exploration of human intelligence in 1985's The Mind's New
Science: A History of the Cognitive Revolution, which looks at the study of cognitive skill
and intelligence. The book also traces the subject's roots as far back as the ancient Greeks and Plato, and to the work of Descartes, who believed that ideas present
in the human mind are stimulated but not produced by human experience. Gardner has
written and published over 400 research articles and approximately twenty books, which
have all been translated into different languages, and is perhaps best known for the 1993 study and book Multiple Intelligences: The Theory in Practice, which points out how his perspectives can be put into practice within the field of education. Gardner's original Theory of Multiple Intelligences consists of three components and seven "intelligences." The three components are a definition of "intelligence"; a challenge to the suggestion of a "general intelligence," or g; and a challenge to the conviction that g can be accurately measured.
Gardner's seven intelligences consist of:
- Linguistic intelligence: Sensitivity to the spoken and written word, to learning
languages and to using them to accomplish specific goals.
- Logical-mathematical intelligence: An ability in mathematics and other complex logical
systems.
- Spatial intelligence: The ability to perceive the visual world accurately. This also applies to how a person can recreate or alter their view of the world in their mind or on paper.
- Bodily-kinesthetic intelligence: The ability to use one's body in a skilled way, for self-expression or toward a goal. This is particularly relevant to sportsmen and women.
- Interpersonal intelligence: The ability to perceive and understand other individuals and
their moods, desires, and motivations.

- Intrapersonal intelligence: An understanding of one's emotions.


- Musical intelligence: The ability to both understand and create music.
Gardner claims that to some degree every human being possesses all seven
intelligences, with each individual having their own mixture of stronger and weaker
intelligences. Gardner also argues that most tasks require multiple intelligences working
in harmony. The example he uses is the conductor of a symphony, who would not only
apply musical intelligence but also interpersonal intelligence as a group leader and
bodily-kinesthetic intelligence to convey his information and instruction to the orchestra.
This claim of separate and independent intelligences is a central point of Gardner's
theory.
In recent years Gardner has carried out long-term case studies of successful leaders
and creators. This work is published in Changing Minds: The Art and Science of
Changing Our Own and Other People's Minds. In the book, Gardner discusses his belief
that leadership requires the ability to change minds, including one's own mind. As an
illustration, the author uses examples of world leaders such as India's Mohandas
Gandhi, former British Prime Minister Margaret Thatcher, and former President of the
United States, Ronald Reagan.
In 2011, Gardner held positions including the John H. and Elisabeth A. Hobbs Professor of Cognition and Education at the Harvard Graduate School of Education, adjunct professor of psychology at Harvard University and adjunct professor of neurology at the Boston University School of Medicine.

Genetic epistemology

Genetic epistemology is a study of the origins (genesis) of knowledge (epistemology). The discipline was established by Jean Piaget.
The goal of genetic epistemology is to link the validity of knowledge to the model of its construction; it shows that how knowledge was gained affects how valid it is. For example, our direct experience of gravity makes our knowledge of it more valid than our theories about black holes. Genetic epistemology also explains the process by which people develop cognitively from birth throughout their lives in four primary stages:
sensorimotor (birth to age 2), pre-operational (2-7), concrete operational (7-11), and
formal operational (11 years onward). The main focus is on the younger years of
development. Assimilation occurs when a learner perceives a new event or object in terms of an existing schema, and is usually used in the context of self-motivation. In accommodation, the learner modifies existing schemas according to the outcomes of new tasks and experiences. The highest form of development is equilibration, which encompasses both assimilation and accommodation as the learner changes how they think to arrive at a better answer.
In English, genetics refers to heredity, so the term 'developmental theory of knowledge' is perhaps more apt. Piaget believed that knowledge is a biological function that results from the actions of an individual through change. He also stated that knowledge consists of structures, and comes about through the adaptation of these structures to the environment.
Piaget's genetic epistemology stands half-way between formal logic and dialectical logic, and mid-way between objective idealism and materialism.
Piaget's schema theory
1. Thought passes through a series of stages of development; at each stage formal logic applies at a specific stage of differentiation, which may be characterized by an algebra in which exactly such-and-such a mathematical structure applies, corresponding to the axioms of logic at that stage; this logic is
manifested first in actions, then at a relatively early stage in sensorimotor
operations (in the specific mathematical sense of the word, as opposed to
"actions" which are equivalent to relations but not yet mathematical operations),
and finally in operations which express thoughts, conscious purposive activity.
2. The material basis for transition from sensorimotor intelligence to representation
and from representation to conceptual thought is the interiorisation of practical
activity.
3. The successive stages of concepts manifested in child development imply
relations of deduction in mathematical logic and in the development of thinking in
other planes of development, such as in the history of science and the history of
knowledge in the anthropological domain.[1]

Piaget draws on the full range of contemporary mathematical knowledge, a vast empirical base of observations of the learning of very young children built up at his institute, reports of observations of older children, and a general knowledge of the development of knowledge in history.
(1) From the standpoint of dialectical logic, we must agree that at each stage of
development, at each "definition of the Absolute" in Hegel's terminology, formal logic is
applicable. Piaget's proof of this is striking, and his demonstration of how the stages of
development in child thought pass through a specific series which is deductive in a
specific sense from the standpoint of mathematics is original and profound.
However, from the standpoint of understanding development (and this is Piaget's
standpoint), what is important is not the definition of each stage but the transition from
one to the next; and for this it is necessary to demonstrate the internal contradiction
within the logic of that plane.
Since Piaget draws on mathematical logic more developed than what was known
to Hegel, it will be necessary to investigate these structures to see if this speculative
proposition proves to be valid.
(2) The concept of interiorisation is indeed the basis of the materialist view of the
development of thought. However, Piaget, as a professional child-psychologist, falls prey
to the objective idealism of any professional, of elevating the subject matter of his
particular profession from being an aspect of the material world to being its master. [The
charge of objective idealism is qualified, for Piaget is quite unambiguous that relations
conceived of in thought exist objectively in the material world].
Thus, since his body of authoritative empirical work is in relation to early childhood
development, he imposes the schema appropriate to this semi-human subject on to
adolescent development, speculates on its possible reflection in anthropological
development and confounds it with the history of development of science
and philosophy. I say "confounds" because Piaget is aware that his schemas do not
seem to apply in this domain. In this sense, the charge of objective idealism would
seem unfair, but from confounding he does not go further and seek the implication of
this lack of correspondence, but seeks to minimize it.
By focusing on early childhood (as indeed he must; that is his profession, and his
institute has contributed a vast body of empirical material), Piaget sees what is

biologically (zoologically?) human but not what is socially (historically) human, and
humanity is essentially social, after all.
(3) On the plus side, it has to be said that Piaget deals once and for all with any idea of
innate intelligence, and makes fully convincing the prospect of a fully genetic (i.e.
developmental) elaboration of intelligence, assuming only animal instincts such as
grasping and sucking and sensorimotor "equipment" capable of reflecting highly
developed relations. A weakness in Piaget's theory may be that there is no proof of how one transitions from one stage to the next. Can someone progress forward a stage, revert backwards, and then move forward again?
Types of knowledge
Piaget proposes three types of knowledge: physical, logical-mathematical, and social knowledge.
Physical knowledge refers to knowledge related to objects in the world, which can be acquired through their perceptual properties. The acquisition of physical knowledge has been equated with learning in Piaget's theory (Gruber and Voneche, 1995). In other words, thought is fitted directly to experience.
"Piaget also called his view constructivism, because he firmly believed that knowledge
acquisition is a process of continuous self-construction. That is, knowledge is not out there, external to the child and waiting to be discovered. But neither is it wholly preformed within the child, ready to emerge as the child develops with the world
surrounding her...Piaget believed that children actively approach their environments and
acquire knowledge through their actions." [2]
"Piaget distinguished among three types of knowledge that children acquire: Physical,
logical-mathematical, and social knowledge. Physical knowledge, also called empirical
knowledge, has to do with knowledge about objects in the world, which can be gained
through their perceptual properties... Logical-Mathematical knowledge is abstract and
must be invented, but through actions on objects that are fundamentally different from
those actions enabling physical knowledge....Social Knowledge is culture-specific and
can be learned only from other people within one's cultural group."

Learning Styles

Learning, in psychology, is the process by which a relatively lasting change in potential
behavior occurs as a result of practice or experience. Learning is distinguished from
behavioral changes arising from such processes as maturation and illness, but does
apply to motor skills, such as driving a car, to intellectual skills, such as reading, and to
attitudes and values, such as prejudice. There is evidence that neurotic symptoms and
patterns of mental illness are also learned behavior. Learning occurs throughout life in
animals, and learned behavior accounts for a large proportion of all behavior in the
higher animals, especially in humans.
Models of Learning
The scientific investigation of the learning process was begun at the end of the 19th century by Ivan Pavlov in Russia and Edward Thorndike in the United States. Three
models are currently widely used to explain changes in learned behavior; two
emphasize the establishment of relations between stimuli and responses, and the third
emphasizes the establishment of cognitive structures. Albert Bandura maintained (1977)
that learning occurs through observation of others, or models; it has been suggested
that this type of learning occurs when children are exposed to violence in the media.
Classical Conditioning
The first model, classical conditioning, was initially identified by Pavlov in the salivation
reflex of dogs. Salivation is an innate reflex, or unconditioned response, to the
presentation of food, an unconditioned stimulus. Pavlov showed that dogs could be
conditioned to salivate merely to the sound of a buzzer (a conditioned stimulus), after it
was sounded a number of times in conjunction with the presentation of food. Learning is
said to occur because salivation has been conditioned to a new stimulus that did not
elicit it initially. The pairing of food with the buzzer acts to reinforce the buzzer as the
prominent stimulus.
Operant Conditioning

A second type of learning, known as operant conditioning, was developed around the
same time as Pavlov's theory by Thorndike, and later expanded upon by B. F. Skinner.
Here, learning takes place as the individual acts upon the environment. Whereas
classical conditioning involves innate reflexes, operant conditioning requires voluntary
behavior. Thorndike showed that an intermittent reward is essential to reinforce
learning, while discontinuing the use of reinforcement tends to extinguish the learned
behavior. The famous Skinner box demonstrated operant conditioning by placing a rat in
a box in which the pressing of a small bar produces food. Skinner showed that the rat
eventually learns to press the bar regularly to obtain food. Besides reinforcement,
punishment produces avoidance behavior, which appears to weaken learning but not
curtail it. In both types of conditioning, stimulus generalization occurs; i.e., the
conditioned response may be elicited by stimuli similar to the original conditioned
stimulus but not used in the original training. Stimulus generalization has enormous
practical importance, because it allows for the application of learned behaviors across
different contexts. Behavior modification is a type of treatment resulting from these
stimulus/response models of learning. It operates under the assumption that if behavior
can be learned, it can also be unlearned (see behavior therapy).
Cognitive Learning
A third approach to learning is known as cognitive learning. Wolfgang Köhler showed
that a protracted process of trial-and-error may be replaced by a sudden understanding
that grasps the interrelationships of a problem. This process, called insight, is more akin
to piecing together a puzzle than responding to a stimulus. Edward Tolman (1930) found
that unrewarded rats learned the layout of a maze, yet this was not apparent until they
were later rewarded with food. Tolman called this latent learning, and it has been
suggested that the rats developed cognitive maps of the maze that they were able to
apply immediately when a reward was offered.

Lifelong Learning

Lifelong learning is the development of human potential through a continuously supportive process which stimulates and empowers individuals to acquire all the knowledge, values, skills and understanding they will require throughout their lifetimes and to apply them with confidence, creativity and enjoyment in all roles, circumstances and environments, according to the Encyclopedia of Education.
The term, however, is difficult to define as some see it as learning from childhood and
early schooling, while others assume it is an adult learning process. Some educators
prefer the term lifelong education because it implies a more explicitly intentional learning
than the term lifelong learning. They see lifelong learning itself as a concept with
different meanings and values.
According to a publication by the European Commission, the European Commission's
Lifelong Learning Programme allows people at all stages of their lives to take part in
stimulating learning experiences, as well as helping to develop the education and
training sector across Europe. The Commission is paying seven billion euro over 2007-2013 for a range of actions including exchanges, study visits and networking activities.
Projects are intended not only for individual students and learners, but also for teachers,
trainers and all others involved in education and training. The sub-programs that the
European Union funds are: Comenius for schools; Erasmus for higher education;
Leonardo da Vinci for vocational education and training; and Grundtvig for adult
education.
Three international organizations supported the creation of the lifelong learning idea in
Europe in the 1970s. The Council of Europe, the Organization for Economic Co-operation and Development and the United Nations Educational, Scientific and Cultural
Organization played a key role. The Council of Europe worked for a permanent
education plan aiming to reshape European education for the whole life span. The
Organization for Economic Co-operation and Development advocated recurrent
education, while the United Nations Educational, Scientific and Cultural Organization
published a report on the issue, Learning to Be, in 1972.
In the United States people preferred the term lifelong learning, rather than lifelong
education and applied it mostly to adult education. The Mondale Lifelong Learning Act
of 1976 proposed a large list of nearly twenty areas which ranged from adult basic
education to education for older and retired persons. In the 1990s, both in Europe and
the United States global competition and knowledge-based industries started to require
highly qualified professionals for whom lifelong learning became essential. Companies

began to invest in human capital and expect employees to continue learning in order to
maintain high levels of competitiveness. New economic trends, closely related to high technologies, shifted the focus of learning from personal growth to human resource development. In times of both economic growth and crisis, education and training approaches became a solution for keeping people away from unemployment and welfare dependency.
In Europe, the four sub-programs of the European Union have set ambitious goals.
Comenius should involve at least three million pupils in joint educational activities from
2007 to 2013. Erasmus is expected to reach a total of three million individual
participants in student mobility actions. Leonardo da Vinci has to raise placements in
enterprises to 80,000 a year, while Grundtvig will support the mobility of 7,000
individuals, according to the site of the European Commission.
Lifelong learning has become an essential challenge and a necessity as it offers self-directed learning, learning on demand, collaborative learning and organizational learning. It has to face, however, three main challenges: an increasing prevalence of high-technology jobs; the need for change in the course of a professional lifetime; and the growing gap between the opportunities offered to the educated and the uneducated.
Knowledge societies require smarter employees, who inevitably need to know how to
use and develop their creativity and innovation. An important challenge for lifelong learning is to show people how those capabilities can be learned and practiced. The creativity and innovation potential of individuals should be developed throughout the whole of life, which makes lifelong learning an indispensable approach. But lifelong learning is more than just going to school all the time. A successful lifelong learning process would offer people the chance to explore both conceptual understanding and practical application of the learned skills.
Usually, people change careers three or four times in their lives and in many cases
school programs they attended five or ten years before the change cannot prepare them
for the new demands. The pace of change is so fast that inevitably it imposes the need
for continuous learning. Moreover, university education does not always reflect the
demand on the labor markets. All those factors encourage people to keep looking for
lifelong learning.

Mastery Learning

Mastery learning is an educational method that professes that all children can learn if the classroom provides the proper learning conditions; if the classroom is conducive to learning, a child will learn. It also promotes a learning theory whereby students do not move on to the next topic or subject until they have mastered the one they are currently learning.
There are four basic characteristics in mastery learning. The first is that written material
is a very important aspect in mastery learning and very little weight is given to lecturing.
Instead of students listening to the teacher delivering the material orally, the teacher
prepares reading material, study sheets and study questions and creates behavioral
objectives for the student to use. The teacher also creates assignments and tests that
measure the student's progress. The second characteristic is that students study and
complete assignments at their own pace. This is based on the premise that all students
learn at different speeds and not every student can absorb material at the same pace as
another. The third characteristic is that students must prove proficiency in what they
have learned before continuing to the next topic. Finally, learning aids and materials to
help explain the material are available to students who need help.
Mastery learning is based on the Learning for Mastery model created by Benjamin
Bloom. It is mainly a teacher-paced teaching method where students are taught to
cooperate and interact with their classmates. Although most mastery learning lessons
are group based, at times children are required to work by themselves.
Mastery learning concentrates on the process of learning the material and not on
content. The process works best when using the conventional curriculum that focuses
on content with well-defined lessons and objectives. The main topics in the curriculum
are outlined and then organized into smaller units. In mastery learning, the teacher will
divide the class into smaller groups and direct the students what to learn and how to
learn it. The students then go off with their groups to learn the material. The teacher
receives feedback on students' knowledge by administering diagnostic tests, and will correct any mistakes that they have made.

The curriculum of mastery learning is made up of carefully chosen topics and all
students begin at the same time. Students do not proceed to the next topic until they
have mastered the first one. Additional instruction is offered to students until they have
succeeded in understanding the material at hand. Any student who has completed an
assignment early is given extra enrichment activities to work on until the class is ready
to continue together.
Mastery learning can also be successfully implemented as a self-paced learning
program or as a one-on-one tutoring program. Provided that there are specific learning
objectives, the program can be group based and individual based all at the same time.
When individualized instruction is necessary, the teacher will concentrate on those
students who need assistance while the rest of the groups advance to the next topic.
This allows for the teacher to devote more attention to those students who require extra
attention and assistance.
Mastery learning is based on the behaviorist theory known as operant conditioning,
which asserts that learning takes place when a connection is made between a stimulus
and a response. In mastery learning the material that is to be taught is divided into
smaller sub-lessons which follow a logical sequence and the student will respond by
learning each discrete unit and will not go on until it is well understood.
Mastery learning programs have been proven to help students achieve higher levels of
learning and understanding than traditional classroom teaching. Despite this evidence,
many schools that have implemented mastery learning programs have abandoned the
program. This is due to a number of factors. Schools have found that it is difficult to find
teachers with the devotion and commitment to the program and it is very difficult to
manage a class where each student is following an individual course at his or her own
pace.
Peer-mediated instruction

Peer-mediated instruction (PMI) is an approach in special education where peers of the target students are trained to provide necessary tutoring in educational, behavioral, and/or social concerns (Chan et al., 2009). In PMI, peers may mediate by modeling appropriate behavior themselves, using prompting procedures to elicit appropriate behavior from the target students, and reinforcing appropriate behavior when it occurs. The peer tutors are chosen from the target students' classrooms, trained to mediate and closely observed during mediation.
Among the advantages noted for the technique, it takes advantage of the positive potential of peer pressure and may integrate target students more fully into their peer group. Conversely, it is time consuming to implement and presents challenges in making sure that the peers follow proper techniques. However, studies have suggested it may be an effective technique for a wide range of students, including those with autism spectrum disorders.
Procedure
A student or students will be chosen from the target student's classroom to serve as a
peer tutor. Garrison-Harrell et al. (as cited in Chan et al., 2009) suggested a systematic
way of choosing the peers to be involved in the treatment, based on social status and
teacher judgment. Students were asked to list three peers they would like to play with
on the playground, three peers they would invite to a party, and three peers they
consider to be good friends. Teachers reviewed the top candidates, and selected the
tutors based on social skills, language skills, school attendance and classroom
behavior.
The student or students chosen as peers must be properly coached before the peer
relationship begins, both to understand the importance of the intervention and the
methods which should be used. Instructors may model behaviors to the peer tutors and
may role play with the peer tutors, allowing the peer tutors to experience both parts in
the PMI relationship. Other methods for training could include visual aids, reinforcement
for correct implementation, instruction manuals and video instructions. Once the PMI
relationship begins, the teacher provides ongoing feedback, watching the peer at all times while the intervention is being used (Chan et al., 2009).

Strengths and limitations


There are several advantages to using PMI as an intervention strategy. First, there is never a
shortage of peers to use, especially when implementing an intervention in a school or
classroom. Second, students are influenced through observational learning by what
they see their peers doing. Third, students are often less intimidated by a peer than they
are by a teacher, which makes instruction and feedback from peers potentially more
effective. Fourth, it may not only offer short-term intervention benefits, but can also
strengthen the target student's social ties within the classroom. Finally, research has
been done with many different types of learners, including students with learning
disabilities, behavior disorders and attention deficit hyperactivity disorder, which show
that PMI may be effective for a wide range of students (Fuchs & Fuchs, 2005; Flood,
Wilder, Flood & Masuda, 2002).
In 2009, Research in Autism Spectrum Disorders published a paper by Chan et al. that
concluded that PMI is a potentially effective intervention approach for students with
Autism disorders. The paper reviewed 42 studies in which PMI was used with a total of
172 target students. In most cases (92%), the peer tutors did not have any diagnosis themselves, although in some studies the peers had learning disabilities or behavior disorders of
their own. The target students were diagnosed with Autistic Disorder, Asperger
Syndrome, or Pervasive Developmental Disorder-Not Otherwise Specified. The most
common intervention goal was to increase social interaction between the target student
and his or her peers. The peers prompted the target students to play, use motor
skills and communicate (Odom & Strain, 1986; Carr & Darcy, 1990, as cited in Chan et
al., 2009). The target students were most commonly observed for improvement in social
interaction in such areas as taking turns sharing (Harper, Symon & Frea, 2008 as cited
in Chan et al., 2009), maintaining interactions (Haring & Breen, 1992 as cited in Chan et
al., 2009), and showing affection (Ragland et al., 1978 as cited in Chan et al., 2009).
Overall, in 91% of the studies, the PMI proved effective for all participants. In the
remaining studies, the intervention was effective for some, but not all participants.
Chan et al. (2009) did note some limitations on the procedures. It is time consuming and
sometimes challenging to implement, particularly as the tutoring is being performed by
children rather than trained professionals (Chan et al., 2009). They noted that the
studies they reviewed had no data on how faithfully the peer tutors followed intervention
procedures.

Application to General Education classroom settings


Varying forms of Peer Mediated Instruction and Interventions (PMII) have been conducted in a great range of settings over the decades. Research has been conducted in educational and non-educational environments, with positive outcomes in each. It is important to note that PMII strategies are not restricted to general or special education, but have been found to be effective in each, as well as in inclusive classroom settings. The following characteristics have been identified by Kulik & Kulik (1992) as central for successful implementation of peer-mediated instruction.
1 Expectations for student learning. Teachers should establish high expectation
levels. No students are expected to fall below the level of learning needed to be
successful at the next level of education.
2 Careful orientation to lessons. Teachers must clearly describe the relationship of
a current lesson to previous study. Students are reminded of key concepts or
skills previously covered.
3 Clear and focused instructions to participants.
4 Close teacher monitoring of student progress. Teachers frequently monitor student
learning, both formally and informally, and must hold students accountable for their
product and learning.
5 Re-teach. If students show signs of confusion, misinterpretation or
misunderstanding, the teacher is responsible for teaching the material again.
6 Use class time for learning. Students must pace themselves and should be
monitored for task completion.
7 Positive and personal teacher and student interaction. Cooperative Learning and
Peer Tutoring Strategies are instruction methods of choice in many classrooms
as they are noted for preventing and alleviating many social problems related to
children, adolescents, and young adults.

Metacognition (in Education)

The term metacognition refers to learners' consciousness of their own body of
knowledge and their ability to understand, wield control over and even change their
thought processes. Metacognition is an automatic process and one that is important not
just during a person's school years, but all through life. In effect, metacognition is what
helps a person learn to learn. It is metacognition that helps people take the material
found in schoolbooks and apply it to real-life situations.
While metacognition as a field is still new, most of the research on this subject lies
within three categories: metamemory, metacomprehension and self-regulation. Each of
these subcategories of metacognition involves a crucial component used in the learning
process. All three components of metacognition are about using innate knowledge and
ability toward the learning process.
Metamemory refers to learners' awareness and knowledge of their own memory
systems and how they might use those systems to best effect. Metamemory consists of
an awareness of the various memory strategies, knowing which strategy is best for a
given memory task and knowing the most efficient way to use that strategy.
Metacomprehension signifies the learner's own monitoring process in regard to the
intake of new information. Learners use metacomprehension to see how much or how
little they understand the information that is communicated. The learner notes when
there is a failure to comprehend information, and once an area of failure is identified,
that individual can employ strategies to attempt to repair understanding of the subject at
hand.
Where there are poor metacomprehension skills, a learner may discover he or she has
finished a chapter of a book without being aware that there is no understanding of what
has just been read. A learner who has sharp metacomprehension skills, on the other
hand, will self-edit a passage as it is read. That person may look for inconsistencies in
the text or identify parts of the text that are confusing. That individual may even apply a
corrective strategy to the reading process. For instance, he or she may reread the
chapter, retell parts of the text to another person, watch for key topics, identify a
paragraph that sums up the material or relate the information to what he or she already
knows.
Self-regulation refers to a learner's ability to adjust his or her learning process as a
response to feedback about the current learning status. There is a great deal of overlap
between this term and the previous two. The focus of self-regulation is the ability of a
learner to monitor the personal learning process with no external motivation. By
extension, self-regulation is also the ability of the learner to call up and use these
monitoring strategies without external prodding or persuasion. If a student is to use selfregulation with efficiency, that individual needs to know what strategies are at hand and
the purposes they serve. The learner must be able to select, employ, monitor and
evaluate the use of these strategies at the right time.
While metacognition is an important factor in the learning process, it can also be used to
improve attitude and, therefore, the personality of the learner. For instance, a key factor
in understanding is to approach reading with the idea that the topic treated in the
material is worthy of consideration and comprehension. Using metacognition, it is
possible to foster a positive attitude, and this, in and of itself, is a metacognitive skill.
Since metacognition is a somewhat automatic process, individuals may not be aware
that they use metacognition on a regular basis. A good reader uses metacognition to
take meaning from the words and applies a metacognitive strategy to adjust
understanding of the more confusing passages. These metacognitive powers are
always within, ready to be called upon and utilized whenever the need arises. So while
most people have little awareness of their metacognitive processes, it is possible to
raise consciousness of metacognitive powers and to consciously control them.
A good example of how this works is seen in the person who reads a paragraph of a
book while distracted. A certain amount of time goes by, and the person realizes he or
she has read a paragraph without having comprehended what was read. At that point,
the person goes back to the beginning of the paragraph and slows down to allow for the
conscious application of metacognitive strategies to reading.
While one can pause and reflect on the metacognitive steps used in a particular
process, the most efficient metacognition occurs as the result of "overlearning." This can
only happen through repetition and with the passing of time. As a person repeats the
metacognitive processes over and over, they become automatic and somewhat
unconscious, making for a decreased burden on the working memory.

Operant Conditioning (Instrumental Conditioning)

Operant conditioning is used in the social sciences to describe the process whereby an
individual learns and modifies behavior due to a stimulus. An operant is a voluntary
behavior used to obtain a reinforcer or avoid a punisher. This differs from the classical
Pavlovian conditioning theory (sometimes called respondent conditioning), which relies
on the response of an organism to a stimulus in the environment, since operant
conditioning relies on an organism to initiate an action that is followed by a
consequence.
Operant conditioning is the systematic use of reinforcement and punishment to facilitate
learning. It emphasizes the consequences of behavior; respondent conditioning
emphasizes involuntary behaviors (reflexes).
Reinforcement and punishment can be positive or negative. Positive reinforcement is
generally considered synonymous with reward, while negative reinforcement involves the
removal of an aversive stimulus and should not be confused with punishment. Operant
behavior is more likely to occur in the future as a result of reinforcement, while
punishment makes its occurrence less likely. An operant procedure called shaping
reinforces behaviors that increasingly resemble a target behavior, so that the individual
gradually comes to display the target behavior itself.
The procedure is also sometimes called the Method of Successive Approximation.
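The shaping procedure described above can be sketched as a small simulation. This is an illustrative toy model, not taken from the text: the numeric response scale, the Gaussian response variability and the acceptance rule are all assumptions.

```python
import random

def shape(target, steps=200, tolerance=5.0, seed=1):
    """Toy shaping simulation: the learner's behavior varies around the
    currently reinforced level; responses that more closely resemble the
    target are reinforced and become the new baseline (successive
    approximation)."""
    rng = random.Random(seed)
    baseline = 0.0  # the behavior currently being reinforced
    for _ in range(steps):
        response = baseline + rng.gauss(0, 10)  # behavior varies around baseline
        if abs(response - target) < abs(baseline - target):
            baseline = response  # reinforce the closer approximation
        if abs(baseline - target) <= tolerance:
            break  # the target behavior is now being displayed
    return baseline
```

Because only closer approximations are reinforced, the baseline moves monotonically toward the target, mirroring how successive approximations converge on the target behavior.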
Reinforcement is also used in a procedure called fading, where prompts or assistance
of some kind are used simultaneously with the reinforcement of one behavior. The
assistance is gradually withdrawn, or faded, and eventually the behavior is emitted
without any prompts. While undesirable behaviors can be eliminated by punishment, it
is sometimes more appropriate to identify the reinforcers that support them and
eliminate those reinforcers. If there is no reinforcement to support a behavior, the
behavior undergoes extinction, which means punishment is not necessary. Time-outs,
where individuals are placed in a setting that does not allow them to obtain
reinforcement for inappropriate behaviors, are based on this principle.
The accurate identification of punishers and reinforcers is one of the key steps when
using operant procedures, because what controls the behavior of one individual may
have little effect on another. The type and the amount of the consequence
given partially determine its effectiveness, but deprivation (hunger), the gradient (the
interval between the behavior and the consequence) and the schedule (the number of
behaviors to be emitted before earning a reinforcer) also play a role. A shorter interval
generally ensures maximally effective consequences. Early on a continuous schedule
should be used, with every instance of the behavior earning a consequence, while later
on an individual may emit two, three, five and finally 10 behaviors before earning a
reinforcer. Larger ratios are useful when programming generalization and maintenance,
while variable schedules can be used to ensure a behavior is resistant to extinction.
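The schedule-thinning idea above, continuous reinforcement first and larger ratios later, can be illustrated with a minimal fixed-ratio counter. This is an illustrative sketch under assumed conventions, not a procedure given in the text.

```python
def fixed_ratio(ratio):
    """Return a respond() function implementing a fixed-ratio (FR)
    schedule: a reinforcer is delivered after every `ratio` responses.
    FR 1 is continuous reinforcement (every response is reinforced)."""
    count = 0
    def respond():
        nonlocal count
        count += 1
        if count >= ratio:
            count = 0
            return True   # reinforcer earned
        return False      # no reinforcer yet
    return respond

# Thinning the schedule: start continuous (FR 1), later require more responses
fr1 = fixed_ratio(1)
fr3 = fixed_ratio(3)
print([fr1() for _ in range(3)])  # [True, True, True]
print([fr3() for _ in range(6)])  # [False, False, True, False, False, True]
```

A variable-ratio schedule would instead randomize the required count around an average, which is what makes behavior maintained on it resistant to extinction.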
Each of these operant concepts was demonstrated by American behaviorist Burrhus
Frederic Skinner (1904 to 1990), a professor of psychology at Harvard, in highly
controlled experiments.
Operant conditioning is highly effective when its use in educational or clinical settings is
systematic, as demonstrated by research by Skinner and others. According to Skinner,
operant conditioning can also occur spontaneously in the natural environment.
Operant conditioning initially developed from the ideas of Edward Thorndike (1874 to
1949). On the basis of his studies of learning in chickens and cats, Thorndike developed
the Law of Effect, according to which a behavior with a positive outcome is likely to
occur again. According to his Law of Exercise, when a response occurs in a given
situation, the more it occurs, the more strongly it is linked with the situation, while it is
also more likely to be repeated.
Classical conditioning, which focuses on antecedents and reflexes, was famously
studied by Russian physiologist Ivan Pavlov (1849 to 1936), who used a bell as an
antecedent stimulus in his famous research with dogs. The dogs in the experiment
salivated after they had come to associate the ringing of the bell with food. Classical
conditioning became the dominant model for the study of behaviorism in Russia, while
operant conditioning took hold in the United States.
Social learning theory is a related theory, closer to operant conditioning. The emphasis
of this perspective is on modeling and observational learning, but it also recognizes the
impact of consequences.

Problem-Based Learning

Problem-based learning (PBL) is an approach to learning that challenges students to
learn by engaging them in a real problem. This form simultaneously develops problem-solving
strategies and disciplinary knowledge bases and skills. PBL places students in
the active role of problem-solvers who are confronted with an ill-structured situation,
simulating the kind of problems they are likely to face during their future careers.
This approach originated from a curriculum reform by medical faculty at Case Western
Reserve University in the late 1950s. The practice of PBL continued to evolve through
innovative medical and health science programs, especially the specific small group
learning and tutorial process developed by medical faculty at McMaster University in
Canada. These innovative medical school programs believed that the pattern of basic
science lectures followed by a clinical teaching program was an ineffective and
dehumanizing way to prepare future physicians.
As a result of the boom in medical information, new technology and the rapidly
changing demands of future medical practice, they developed a new mode and strategy
of learning that would better prepare students for professional practice. Since then, PBL
has spread to over 50 medical schools. The approach has also diffused into many other
professional fields, such as law, economics, architecture, mechanical and civil
engineering and in K-12 curricula.
PBL is student-centered and shifts the focus from teaching to learning. The process is
aimed at engaging students and enhancing their learning and motivation by using the
power of authentic problem-solving. The PBL approach is characterized by the fact that
learning takes place within the contexts of authentic tasks, problems and issues that are
aligned with real-world concerns. Students and the instructor become co-learners,
co-planners, co-producers and co-evaluators as they design, implement and continually
refine their curricula.
The PBL approach stimulates students to take responsibility for their own learning
because there are few lectures and no structured sequence of assigned readings. It
fosters collaboration among students and stresses the development of problem-solving
skills within the context of professional practice. PBL also promotes effective reasoning
and self-directed learning. The approach is aimed at increasing students' motivation for
life-long learning.

At the beginning of PBL, an ill-structured problem on which all learning is centered is
introduced. Students' expertise is developed by engaging them in progressive
problem-solving, and problems drive the organization and dynamics of the course. Students, both
individually and collectively, assume major responsibilities for their own learning and
instruction. Most of the learning occurs not in lectures but in small groups.
The teacher acts as a facilitator and coach of student learning. The teacher is no longer
the knowledge-holder and disseminator but a resource person. Meanwhile, the student has
a more active role than that of a passive listener and note-taker. He or she is engaged
in problem-solving, decision-making and meaning-making.
According to research in educational psychology, traditional educational approaches do
not lead to a high rate of knowledge retention. In addition, traditional education practices
often leave students disenchanted and bored with their education. Motivation in the
traditional classroom environment is also usually low.
PBL, on the other hand, makes students genuinely enjoy the process of learning. It is a
challenging program that makes studying intriguing for students because a need to
understand and solve real problems motivates them to learn. They easily see the
relevance of the information they learn and become aware of a need for knowledge as
they work to resolve the problems.
As part of PBL, students first entertain a problem they are given in light of the
knowledge they already have from their own experience. Then they list questions or
learning issues that they need to answer to address missing knowledge or to shed light
on the problem. Students analyze the problem into components, discuss implications,
entertain possible explanations or solutions and develop working hypotheses. They also
formulate learning goals that outline what further information is needed and how they
can obtain it.
Finally, students discuss, evaluate and organize their ideas and tentative hypotheses.
They make a list of issues such as what resources to consult, people to interview,
articles to read and specific actions to be performed by team members. After identifying
and allocating learning tasks and developing study plans to discover needed
information, students gather information from the classroom, resource readings, texts
and library sources, as well as from external experts on the subject. Unlike traditional
and standard classes, in PBL learning objectives are not stated up front. Rather, the
students and the instructor are responsible for generating their own learning issues or
objectives based on the group's analysis of the problem.

Project-Based Learning

Project-based learning is a method of teaching as part of which a child or a group of
children conduct an in-depth study of a particular topic. When this level of instruction is
managed by a teacher, students can be effectively engaged in as many decision-making
junctures as possible. Exemplary teachers managing this level of instruction
understand that learning from choice and learning from subject content are equally
important. Project-based learning can potentially increase students' sense of
responsibility for and control over their own learning.
Projects may be initiated by a child or a teacher. Research is focused on finding answers
to questions posed by students. In projects as an approach to teaching, the direction of
inquiry follows the interests of the children.
There are a number of additional characteristics of project-based learning. The project's
progression, student interest, available resources and community support determine the
length of the learning experience. As a result, it may extend over several
weeks or even months. The activities focus on investigation, finding answers to
questions, using resources and especially making use of human experts who
demonstrate skills and can be interviewed.
Teachers as well as school library media specialists facilitate and debrief students after
field trips, interviews and in cases when diverse materials from different sources are
used. In addition, not only teachers and school library specialists provide resources for
the project, but also students, parents and other community members. Teachers and
school library media specialists also observe the investigations and determine the next
steps of the project by using student interest and questions. Students also help in the
planning of the next steps of the project through discussions.
Concept maps or webs, which are written at various stages of the project, illustrate how
the project changes and progresses, reflecting what the students know, are learning and
what remains to be explored. Students collect or create artifacts and other objects,
including model cars or spaceships, tools, fossils or other items relevant to the
investigation and use them to represent the project. Students make final presentations
or demonstrations that show the success or failure in implementing the project. In
addition, there should be celebrations at which the students' projects can be showcased
to their parents and other community members who can view and praise them.
At the highest learning levels in secondary schools, project-based learning should
include challenging questions or problems as a result of which students are involved in
design, problem-solving, decision-making or investigating activities. Such activities
should allow students to work relatively autonomously over extended periods of time
and lead to realistic products or presentations.
The complexities related to the implementation of project-based learning in public
school environments can lead to some problems. The process can be time-consuming,
and it can also be difficult to clearly show what the student learned when measured
against standards. The provision of reasonable resource support can also prove
expensive.
Research on project-based learning has shown students have difficulty generating
meaningful scientific questions, managing complex processes, managing time,
transforming data and developing a logical argument to support claims. Students often
find it difficult to understand the concept of controlled experiments and produce
inadequate research data as well as poor data collection plans. They pursue questions
without examining their merits, choosing questions based on personal preference rather
than ones warranted by the scientific project. Students also often fail to carry out their
plans systematically.
Most students need a great deal of guidance and modeling for scientific projects. The
findings tend to underline the need for science teachers and school library media
specialists to collaborate and provide frequent interventions to advise, model and
compare with previous successful projects. Teachers have also faced difficulties,
reporting lack of time, difficulties in classroom management, poor access to technology
and an inability to provide meaningful assessment.
However, a few studies have also shown positive effects from project-based learning.
When students are involved in different project-based experiences over an extended
period of time, such as several semesters, they develop their ability to raise complex
and insightful questions. They also show higher gains in math word-problem
performance, their attitude toward mathematics becomes more positive and their
performance on math portions of standard exams is stronger than that of students who
are not involved in projects. According to other studies, low-ability students show the
greatest improvement in critical thinking skill performance on the basis of challenges
presented in project-based activities.

Psychology of Learning

In psychology, learning is defined as a process by which a relatively lasting change in
behavior is introduced through practice and experience.
Learning differs from behavioral changes due to maturation and illness; however,
some neurotic symptoms and patterns of mental illness are also learned behavior.
Learning comprises:
motor skills, for example driving a car
intellectual skills, such as reading
attitudes and values, such as prejudice

Not only humans can learn; animals, too, learn behavior through life and experience.
At the beginning of the 20th century psychological studies put a major emphasis on
learning along with the emergence of behaviorism as one of the main schools of
thought. Behaviorism was founded by John B. Watson and it was aimed at measuring
only observable behaviors. Behaviorism is based on the concept that psychology is an
experimental and objective science, while internal mental processes are not relevant to
psychology as they cannot be observed and measured.
Yet in the 21st century learning remains a central topic in various areas of psychology,
such as:
cognitive psychology
educational psychology
social psychology
developmental psychology

Research into learning was launched at the end of the 19th century by Russian
physiologist and Nobel Prize Laureate Ivan Pavlov and American psychologist Edward
Thorndike. Later, in the 20th century, scientific studies into the psychology of learning
were developed by American behaviorist B.F. Skinner and Canada-born psychologist
Albert Bandura.
Studies in psychology of learning have led to the establishment of three models used to
explain learned behavior:
Classical conditioning
Operant conditioning
Cognitive learning

The classical conditioning model was introduced by Pavlov while examining the
salivation reflex of dogs. Classical conditioning represents a learning process where an
association is made between a previously neutral stimulus and a so-called
"unconditioned stimulus," a stimulus that usually evokes a response.
Salivation, as an innate reflex, is an unconditioned response to an unconditioned
stimulus, the presentation of food. Salivation is triggered automatically in response to
the smell or sight of food and it is not a controlled process. While experimenting, Pavlov
proved that dogs can be conditioned to salivate even to a certain sound after the sound
was repeatedly presented together with food. Pavlov concluded that salivation is a
learned response, as he observed that dogs were reacting to various neutral stimuli
such as the white lab coats of his co-workers.
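The growth of such a learned association is commonly formalized with the Rescorla-Wagner update rule, a model developed well after Pavlov's own work and used here purely as an illustration: on each bell-food pairing, the associative strength V moves a fraction alpha of the remaining distance to its maximum lambda.

```python
def conditioning_curve(trials, alpha=0.3, lam=1.0):
    """Rescorla-Wagner style learning curve: associative strength V
    between a neutral stimulus (the bell) and an unconditioned stimulus
    (the food) after each pairing. Update rule: V += alpha * (lam - V)."""
    v = 0.0
    history = []
    for _ in range(trials):
        v += alpha * (lam - v)      # surprise-driven increment
        history.append(round(v, 3))
    return history

print(conditioning_curve(5))  # [0.3, 0.51, 0.657, 0.76, 0.832]
```

The curve rises quickly at first and then levels off, matching the observation that early pairings produce the largest gains. Setting lam to 0 after conditioning makes V decay by the same rule, modeling extinction when the bell is repeatedly presented without food.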
The operant conditioning model was initially developed by Thorndike and later
expanded by Skinner. In operant conditioning, learning occurs as the individual acts
upon the environment. Unlike classical conditioning where the process is uncontrolled,
this model is based on voluntary behavior.
Thorndike is famous for his theory dubbed "the law of effect". He formulated the theory
after examining the behavior of cats trying to escape from puzzle boxes. In order to
escape, the animals had to initiate a certain response. Thorndike observed that animals
learn responses that are rewarded, whereas they easily forget responses left
unrewarded. From the experiment Thorndike concluded that animals learn by
trial-and-error, or by the model of reward and punishment.

Thorndike's theory was further developed by Skinner, who created the so-called Skinner
box, a chamber containing some kind of instrument an animal can manipulate in order
to get food or water as a type of reinforcement. Skinner demonstrated that animals
eventually learn to handle the tool to obtain food. Punishment, by contrast, leads
to the avoidance of certain behaviors, which may weaken learning without eliminating it.
The intensity and frequency of reinforced behavior can have a great impact on
the strength and rate of the response. Therefore, reinforcement schedules play a central
role in the learning process under the operant conditioning model.
Both classical and operant conditioning require the existence of certain stimuli that
should trigger a response. The application of stimuli allows learning of behaviors in
different environments. Such conditioning is applied in a type of treatment known as
"behavior modification", as it is considered that behavior can be both learned and
unlearned.
The cognitive learning theory rests on the concept that learning occurs through
observation. Wolfgang Kohler demonstrated that trial-and-error learning in
animals can develop into a sudden understanding called insight. Such a process
resembles puzzle solving rather than a stimulus-response chain.
Albert Bandura's work contributed greatly to the development of the observational
learning model. In his famous Bobo doll experiment Bandura showed how people
imitate certain behavior without reinforcement. Effective observational learning is based
on four elements:
attention
motor skills
motivation
memory

Bandura's social learning theory became the most influential theory in learning
psychology. Later he developed a social cognitive theory, according to which learning is
acquired through observations within social interactions and experience.

Reinforcement (Psychology)

Reinforcement is a concept used widely in psychology to refer to the method of
presenting or removing a stimulus to increase the chances of obtaining a behavioral
response. It is usually divided into two categories - positive and negative.
The term reinforcement has been attributed to Russian physiologist and Nobel Prize
winner Ivan Pavlov (1849-1936), who developed a theory of classical conditioning,
which is the formation of an association between a conditioned stimulus and a
response. In particular, he used the terms reinforced stimulus and reinforced reflex.
Reinforcement is also a key concept in behaviorism, a school of psychology one of
whose pioneers was American scholar B.F. Skinner (1904-1990). In his experiments, Skinner
trained rats and pigeons to press a lever to receive food as a reward. His device, a
small plastic chamber known as the Skinner Box, aimed to study operant conditioning,
learning in which there is contingency between the response and the presentation of the
reinforcer. A reinforcer was defined as the stimulus which emerges in response to
behavior. In Skinner's experiments, the rats learned very quickly to repeat the action of
pressing the lever so that they could receive their reward.
In operant conditioning, behavior is controlled via a number of tools such as reinforcers,
punishment and extinction. Reinforcers provide an incentive for the increased frequency
of certain behavior. Through punishment, the occurrence of specific conduct becomes
rarer. Meanwhile, extinction is the case where a behavior no longer produces any consequence.
Some classifications distinguish between positive and negative reinforcement.
Researchers claim that the purring of a cat is a positive reinforcer, which encourages
people to stroke its fur. Negative reinforcement aims to encourage the occurrence of a
behavior by removal of aversive stimulus. In the Skinner Box, rats pressed a lever to
stop a loud noise, which is another example of negative reinforcement. According to
Skinner, reinforcers were responses from the environment that increase the probability
of a behavior being repeated.
Generally, positive reinforcement is regarded as a reward, while negative reinforcement
is often equated with punishment. However, in contemporary psychology punishment and
negative reinforcement are not synonyms, as they provide two different approaches to
controlling certain behavior patterns. While negative reinforcement strengthens a
behavior, punishment weakens a behavior pattern. Therefore, negative reinforcement is
more similar to positive reinforcement than to punishment. Furthermore, in many cases
it is difficult to decide whether reinforcement is positive or negative.
Psychologists have also identified so-called primary reinforcers, or biologically
determined reinforcers. Sometimes referred to as unconditioned reinforcers, these
include sleep, food, water and sex. In contrast, psychologists have also defined
conditioned reinforcers, which are neutral stimuli coupled with a primary reinforcer. As a
result, the neutral stimuli acquire the same reinforcement properties linked to the
primary reinforcer. While primary reinforcers are inborn, conditioned reinforcers are
learned. A classic example of a conditioned reinforcer is money: it is not a primary
reinforcer in itself, but it is closely related to primary reinforcers such as food
or water. Bribing children with candies can also create conditioned reinforcers.
Widespread reinforcers in the classroom include praise, attention, grades and
recognition. Children learn about reinforcement - both positive and negative - through
their behavior, and learn to recognize what is acceptable or inappropriate in the school
environment. For example, if they are caught smoking they must face the
consequences of their actions. Positive reinforcement could come from peers who
admire them for smoking, while punishment can be imposed by teachers for taking
part in this activity. Self-reinforcement, or the practice of
recognizing your own success, is also useful both for children and adults.
Reinforcement theory is also used in the treatment of drug and alcohol addictions.
Researchers have found that drugs and alcohol both serve as strong reinforcers that
force the user or addict into a habit of seeking and taking them regularly, resulting in a
cycle which is difficult to break. Experiments show that drugs which serve as reinforcers
for humans have the same effect on animals.
Despite its effectiveness, reinforcement has proved to have some shortcomings. The
efficient application of reinforcers is highly conditional on an understanding of their
restrictions. Often it is difficult to distinguish between rewards and
punishments. Reinforcement can be effective only if all sources of reinforcement are
under control. School, peer groups and family are different sources of
reinforcement, which may provide conflicting stimuli and can undermine each other's
effect.

Self-Regulated Learning

Theory and research on self-regulated academic learning emerged in the mid-1980s,
aiming to answer the question of how students become masters of their own
learning processes.
Self-regulation refers to the self-directive process used by learners. Using this
approach, students can hope to transform their mental abilities into task-related
academic skills. It frames learning as an activity that students initiate and carry out on
their own, not as something that happens to them reactively as a result of teaching
experiences.
Dale Schunk and Barry Zimmerman (2001) define self-regulation as "controlling one's
own behavior in order to achieve a certain goal." According to Schunk and Zimmerman,
students who use self-regulation are able to set better learning goals, implement more
effective learning strategies, and exert more effort and persistence. They suggest that
motivation can be a precursor to self-regulated learning but can also be the result of it.
Self-regulated learning describes the processes used by individual learners for
organizing, monitoring and controlling their own learning. Students are self-regulated to
the degree that they are metacognitively, motivationally and behaviorally active
participants in their own learning process. These students are able to generate
thoughts, feelings and actions on their own in order to achieve the goals that they have
set in the learning process.
There are four major processes for self-regulation in learning. These include:
- Self-monitoring - this process of self-regulation includes narrations, frequency counts,
duration measures, time-sampling procedures, behavior ratings, behavioral traces and
archival records;
- Self-instruction - teaching self-instructions and the accompanying non-verbal actions is
considered an effective way of improving functioning in a wide variety of academic
areas. This method may be implemented in the form of written stimuli for learners to
follow;
- Self-evaluation - during self-evaluation, individuals compare some dimension of their
behavior to a standard, which could refer to both accuracy and improvement of
performance;
- Self-reinforcement - this is characterized by the need for external rewards for self-reinforcing responses, such as social surveillance or increased status.
Self-regulated learning could also be defined as an active, constructive process during
which learners set goals for their learning and then try to monitor, regulate and control
their cognition, motivation and behavior. Their goals and the contextual features in the
environment guide them. Studies into models of self-regulated learning have defined
four general domains that learners can use. These include cognition, motivation,
behavior and the environment.
Students at almost any age are capable of self-regulated learning, but this does not
mean that all students take effective charge of their own learning. When faced with a
learning task, self-regulated students begin by analyzing the task and interpreting it on
the basis of their own knowledge and beliefs. They then set goals that play a major role
in selecting strategies. After implementing a strategy, the students monitor their
progress toward their goals and, if needed, adjust their strategies and efforts. Self-regulated
students often use motivational strategies when they are discouraged or face difficulties.
The description of the process leads to the conclusion that academic self-regulation
includes skills such as valuing learning and its anticipated outcomes, setting
performance goals, planning and managing time, holding positive beliefs about one's
abilities, attending to and concentrating on instruction, effectively organizing, rehearsing
and encoding information. It also involves setting up a productive work environment,
using social resources effectively, focusing on positive effects and making useful
attributions for success and failure.
Individuals are not able to monitor and control their cognition, motivation, or behavior at
all times. However, according to the theory of self-regulation, some monitoring, control
and regulation is possible. All of the models describing this kind of learning define
biological, developmental, contextual and individual difference constraints that can
affect individual efforts at regulation.
Students who have mastered the process of self-regulation are much more likely to be
successful in school, to learn more and to achieve at higher levels. As the number of
information sources grows and new ways of learning emerge, knowing how to self-regulate
the learning process will become an even more important goal for all educational
systems.
