
CIRO EVALUATION

The CIRO approach to evaluating training impact is another four-level approach, originally developed by Warr, Bird and Rackham. It offers a unique way of classifying the evaluation process. The approach consists of four levels of evaluation; the first letter of each level forms the word CIRO:
1. Context evaluation
2. Input evaluation
3. Reaction evaluation
4. Outcome evaluation

Context evaluation of the learning event


Context evaluation involves collecting information about performance deficiencies, assessing that information to establish training needs and, on the basis of those findings, setting objectives at three levels.
Context evaluation of the learning event is concerned with obtaining and using information about the current operational situation in order to determine training needs and objectives. This evaluation determines whether training is needed. During this process, three types of objectives may be evaluated.
Ultimate objectives: the particular deficiency in the organization that the program will eliminate.
Intermediate objectives: the changes in employees' work behaviour necessary for the attainment of the ultimate objectives.
Immediate objectives: the new knowledge, skills or attitudes that employees must acquire in order to change their behaviour and so reach the intermediate objectives.
Input evaluation to the learning event
Input evaluation is concerned with how well the learning event was planned, managed, designed and delivered. It involves determining how cost-efficient, cost-effective, feasible and well chosen the major inputs are. It also involves analysing the resources available and determining how they can be deployed so as to maximise the likelihood of achieving the desired objectives.
Reaction evaluation to the learning event

Reaction evaluation is concerned with obtaining and using information about participants' reactions in order to improve the HRD process. The distinguishing feature of this type of evaluation is that it relies on the subjective input of participants. It can be helpful when it is collected and used in a systematic and objective manner.

Outcome evaluation of the learning event


This involves assessing what actually happened as a result of the learning event. Outcomes should be measured at any or all of the following levels, depending on the purpose of the evaluation exercise and the resources available for the task.
The learner level: this involves establishing changes in learners' knowledge, skills and attitudes at the completion of the training. These changes can be determined and compared with the levels of knowledge, skills and attitudes identified at the beginning of the program.
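To make the learner-level comparison concrete, the short Python sketch below contrasts hypothetical pre- and post-training test scores. The 0-100 scoring scale, the figures and the function name are illustrative assumptions, not part of the CIRO approach itself.

# A minimal sketch of the learner-level comparison described above: scores
# gathered at the beginning of the program are compared with scores at
# completion to show the change in knowledge or skill. The scale and the
# example figures are invented purely for illustration.

def mean_gain(pre_scores, post_scores):
    """Average improvement per learner between pre- and post-training tests."""
    gains = [post - pre for pre, post in zip(pre_scores, post_scores)]
    return sum(gains) / len(gains)

# Example: four learners tested before and after the program (scores out of 100)
pre = [42, 55, 61, 48]
post = [70, 68, 75, 66]
print(mean_gain(pre, post))  # average gain of 18.25 points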

The workplace level: this involves changes that take place in the learner's job behaviour at the workplace. These can be measured by appraisal, observation, and discussion with the learner's manager, peers, customers or clients.

The team/department or unit level: this involves identifying changes that take place in the team, department or unit as a result of the learning event. Changes at this level are very difficult to evaluate. Departmental changes may include alterations in departmental output, costs, scrap rates, absenteeism, staff turnover or accident frequency. Unit-level changes may include enhanced productivity rates, reduced labour costs, and reduced absenteeism and staff turnover rates.

The organizational level: this involves identifying changes that take place in the organization as a whole after the completion of the training program. This outcome is also very difficult to evaluate. Changes that may occur after the introduction of a training program include a change in the culture of the organization, greater flexibility, a reduced level of conflict, and an enhanced ability to attract and retain valued workers.

BRINKERHOFF'S EVALUATION

Brinkerhoff's Six-Stage Model of Evaluation is based on the Instructional Systems Design training cycle and follows a circular pattern. It stresses the importance of continuous evaluation and the need to change a course of action if the proposed approach is not working.

Step 1: Identify targeted business goals and impact expectations.
Step 2: Survey a large, representative sample of all participants in a program to identify high-impact and low-impact cases.
Step 3: Analyse the survey data to identify:
- a small group of successful participants
- a small group of unsuccessful participants
Step 4: Conduct in-depth interviews with the two selected groups to:
- document the nature and business value of their application of learning
- identify the performance factors that supported learning application and the obstacles that prevented it.
Step 5: Document and disseminate the story:
- report impact
- applaud successes
- use the data to educate managers and the organization.
The process produces two key outputs:
- in-depth stories of documented business effect that can be disseminated to a variety of audiences;
- knowledge of the factors that enhance or impede the effect of training on business results. Factors associated with the successful application of new skills are compared and contrasted with those that impede it.
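As a rough illustration of Steps 2 and 3 above, the Python sketch below groups hypothetical survey responses into high-impact and low-impact cases from which the two interview groups could be drawn. The field names, the 1-5 impact scale and the cut-off scores are assumptions made for illustration; they are not part of Brinkerhoff's published method.

# A minimal sketch of Steps 2-3: shortlist a small group of successful
# (high-impact) and unsuccessful (low-impact) participants for in-depth
# interviews. Field names, scale and thresholds are hypothetical.

from dataclasses import dataclass

@dataclass
class SurveyResponse:
    participant_id: str
    impact_score: int  # self-reported application of learning, 1 (none) to 5 (high)

def select_interview_groups(responses, group_size=5):
    """Return (successful, unsuccessful) shortlists for in-depth interviews."""
    high = sorted((r for r in responses if r.impact_score >= 4),
                  key=lambda r: r.impact_score, reverse=True)
    low = sorted((r for r in responses if r.impact_score <= 2),
                 key=lambda r: r.impact_score)
    return high[:group_size], low[:group_size]

# Example usage with made-up data
responses = [SurveyResponse("P01", 5), SurveyResponse("P02", 2),
             SurveyResponse("P03", 4), SurveyResponse("P04", 1)]
successful, unsuccessful = select_interview_groups(responses, group_size=2)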

KIRKPATRICK'S 4 LEVELS
Discussion of evaluation types may appear somewhat academic. In the evaluation literature, however, this discussion inevitably leads to very concrete evaluation models and schemes. The most famous and most widely applied evaluation model was developed by Donald L. Kirkpatrick (notably in his Evaluating Training Programs). Kirkpatrick described four levels of training evaluation: reaction, learning, behaviour and results. He identified the four levels as follows:
Reaction: a measure of satisfaction (what the trainees/fellows thought and felt about the training); evaluation here focuses on the reaction of individuals to the training or other improvement intervention.
Learning: a measure of learning (the resulting increase in knowledge or capability); evaluation here assesses what has been learned, as measured with end-of-course tests.
Behaviour: a measure of behaviour change (the extent of behaviour and capability improvement and its implementation/application); evaluation here measures the transfer of what has been learned back to the workplace.
Results: a measure of results (the effects on the institutional environment resulting from the fellows' performance); evaluation here measures (or at least tries to measure) the impact of the training on overall organizational results (in the private sector, on business results).
In the framework of the above summary of evaluation types, levels 1 and 2 are normally seen as part of formative evaluation, whereas levels 3 and 4 are typically associated with summative evaluation. There have also been attempts to establish a level 5 by measuring the impact at a societal level (in business terms, by calculating return on investment (ROI)). Levels 4 and 5 are associated with normative and/or meta-evaluation to achieve an ideal full-scale evaluation.
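For the level-5 ROI measure mentioned above, the conventional calculation expresses the net program benefit as a percentage of the program cost. The short Python sketch below works through one hypothetical example; the monetary figures are invented purely for illustration.

# A minimal sketch of the ROI calculation referred to above:
# ROI (%) = (program benefits - program costs) / program costs * 100

def training_roi(benefits: float, costs: float) -> float:
    """Return ROI as a percentage of program costs."""
    return (benefits - costs) / costs * 100

# Example: a program costing 40,000 that yields 52,000 in measured benefits
print(training_roi(benefits=52_000, costs=40_000))  # 30.0, i.e. a 30% return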

HAMBLIN'S FIVE-LEVEL APPROACH (1974)


Hamblin was one of the first to modify Kirkpatrick's model. The first three levels correspond closely to Kirkpatrick's, while the final level splits into two.

Level 1: Reactions
Level 2: Learning
Level 3: Job behaviour
Level 4: Organisation - the effects on the organisation arising from changes in participants' job performance
Level 5: Ultimate value - the financial effects, both on the organisation and on the economy

COMA MODEL
A training evaluation model that involves the measurement of four types of variables:
1. Cognitive
2. Organizational environment
3. Motivation
4. Attitude

The COMA model improves on Kirkpatrick's model in four ways:
1) It transforms the typical reaction measure by incorporating a greater number of measures.
2) It is useful for formative evaluations.
3) Its measures are known to be causally related to training success.
4) It defines new variables with greater precision.

GUSKEY'S CRITICAL LEVELS


Thomas Guskey (2002) has also elaborated Kirkpatrick's 4 levels into 5; his levels may be of relevance here, as he had students and educational environments in mind (see the University of Minnesota website Education Minnesota).
Level 1: Participant reaction

Purpose: to gauge participants' reactions regarding the information presented and basic human needs
Technique: usually a questionnaire
Key questions: Was your time well spent? Was the presenter knowledgeable?

Level 2: Participant learning


Purpose: to examine participants' level of attained learning
Technique: tests, simulations, personal reflection, full-scale demonstrations
Key question: did participants learn what was intended?

Level 3: Organizational support and learning


Purpose: to analyse organizational support for the skills gained in staff development
Technique: minutes of district meetings, questionnaires, structured interviews or unobtrusive observations
Key questions: Were problems addressed quickly and efficiently? Were sufficient resources made available, including time for reflection?

Level 4: Participant use of new knowledge and skills


Purpose: to determine whether participants are using what they learned, and using it well
Technique: questionnaires, structured interviews, oral or written personal reflections, examination of journals or portfolios, or direct observation
Key question: are participants implementing their skills, and to what degree?

Level 5: Student learning outcomes


Purpose: to analyse the corresponding student learning outcomes
Technique: classroom grades, tests, direct observation
Key question: did students show improvement in academic, behavioural or other areas?
