
Training Evaluation
• Training evaluation refers to activities aimed at determining the effectiveness of training programs against the training objectives for which these programs were organized

• It is a planned process that provides specific information about a selected topic, session, or program for the purpose of determining value and/or making decisions
Uses of Evaluation
• To determine success in meeting program objectives
• To identify strengths and weaknesses of training
activities
• To compare costs to benefits
• To decide who should participate in future programs
• To test clarity and validity of tests, cases and exercises
• To determine if the program was the appropriate solution for a specific need
• To establish a database that can help management in decision making about future programs
• To reinforce major points made to the participants
Evaluation Models
• Goal-based model: Kirkpatrick's Model
• System-based models
- CIPP Model
- IPO Model
- TVS Approach
Kirkpatrick's Four-Level Training
Evaluation Model
• Helps trainers measure the effectiveness of their training in an objective way. The model was originally created by Donald Kirkpatrick in 1959 and has since gone through several updates and revisions.
• The Four-Levels are as follows:
• Reaction.
• Learning.
• Behaviour.
• Results.
The four levels: what is measured, evaluation tools and methods, and practicability

Level 1: Reaction
• What is measured: how the delegates felt about the training or learning experience
• Tools and methods: 'happy sheets', feedback forms, verbal reaction, post-training surveys or questionnaires
• Practicability: quick and very easy to obtain; not expensive to gather or analyse

Level 2: Learning
• What is measured: the increase in knowledge, before and after the training
• Tools and methods: typically assessments or tests before and after the training; interview or observation can also be used
• Practicability: relatively simple to set up and clear-cut for quantifiable skills; less easy for complex learning

Level 3: Behaviour
• What is measured: the extent of applied learning back on the job (implementation)
• Tools and methods: observation and interview over time are required to assess change, relevance of change, and sustainability of change
• Practicability: measuring behaviour change typically requires the cooperation and skill of line managers

Level 4: Results
• What is measured: the effect on the business or environment by the trainee
• Tools and methods: measures are already in place via normal management systems and reporting; the challenge is to relate them to the trainee
• Practicability: not difficult for an individual trainee, unlike for the whole organisation; the process must attribute clear accountabilities
COMA Model

A training evaluation model that involves the measurement of four types of variables:

1. Cognitive
2. Organizational Environment
3. Motivation
4. Attitudes
CIPP Model
CIPP stands for Context, Input, Process, and Product; these 4 main aspects comprise the CIPP Evaluation Model. The intention of this model is not to prove, but rather to improve the programme itself. The CIPP Evaluation Model may be applied to educational / training programmes to best determine the merit and worth of the training programme, as well as to determine how to improve upon it.
Context Evaluation, which establishes
the goals of the programme. At this
stage, the beneficiaries and their needs
are also identified, along with potential
resources available on hand, and
potential problems that will need to be
overcome. At this stage, the background
of the programme will need to be
evaluated, and any social / economic /
political / geographical / cultural factors
within the immediate environment are to
be accounted for.
* What needs to be done?
Input Evaluation encompasses the programme
plans / planning. Stakeholders will need to be
engaged, and suitable strategies of programme
execution identified. Competing or conflicting
strategies may also be identified. A budget will need
to be allocated and suitably portioned off. To ensure
sufficient coverage of the training programme,
research may also have to be carried out.
* How should it be done?
In the Process Evaluation stage of the CIPP Evaluation Model, the actual actions are evaluated. This can be cyclic, repeated throughout the development stage or during the implementation / execution of the training programme. Controls to monitor progress will have to be in place, as well as a system for feedback to and from learners and stakeholders.
* Is it being done?
The Product Evaluation stage of the CIPP Evaluation Model measures outcomes: the impact / reach of the training programme and its effectiveness in fulfilling the objectives. Transportability seeks to determine if the training programme can be transferred, adapted, or used in a different setting. Sustainability is another aspect to be measured, accounting for how durable / long-lasting the benefits were. Adjustments to the training programme may also need to be made at this stage.
* Did it succeed?
TVS Model (1994) Training Validation System
(TVS) Approach
1. Situation: collect pre-training data to determine current levels of performance within the organisation; define a desirable level of future performance
2. Intervention: identify the reason for the gap between the present and desirable performance, to find out if training is the solution to the problem
3. Impact: evaluate the difference between the pre- and post-training data
4. Value: measure differences in quality, productivity, service, or sales, all of which can be expressed in terms of dollars
IPO Model
Input
Process
Output
The IPO model can readily determine whether training programs are achieving the right purposes. It also enables evaluators to detect the types of changes they should make to improve course design, content, and delivery.
Jack Phillips – Five level Model
Calculating ROI
• The next step is to convert the data to monetary value
– Direct conversion of hard data: quantity, quality, cost, or time
– Conversion of soft data to place a monetary value on improvements; techniques are:
• Historical costs
• Supervisor estimation
• Management estimation
• Expert opinion
• Participant estimation
• External studies
• Next calculate costs of the program
Calculating ROI
• The ROI formula is the annual net program benefits divided by the program costs, usually expressed as a percentage
• Where net benefits are the monetary value of the benefits minus the costs of the program
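The formula above can be sketched in a few lines of Python; the dollar figures below are hypothetical, purely for illustration:

```python
def roi_percent(benefits, costs):
    """Phillips ROI: annual net program benefits divided by program
    costs, expressed as a percentage."""
    net_benefits = benefits - costs
    return net_benefits / costs * 100

# Hypothetical example: a program costing $80,000 that yields
# $240,000 in annual monetary benefits.
print(roi_percent(240_000, 80_000))  # 200.0
```

An ROI of 200% means the program returned two dollars of net benefit for every dollar spent.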
CIRO Model
• Context – collect information about organizational deficiencies; identify needs and set objectives at 3 levels

– Ultimate objectives (overcome a particular deficiency)
– Intermediate objectives (changes in work behavior required for ultimate objectives to be met)
– Immediate objectives (new knowledge, skills, or attitudes the employee requires to reach intermediate objectives)
• Input – involves obtaining and using information about
possible training resources to choose between
alternative inputs to training
• Reaction – involves obtaining and using information about participants' reactions
• Outcomes – involves obtaining and using information
about results
Methods of data collection
Interview
• Advantages: flexible, opportunity for clarification, depth possible, personal interaction
• Limitations: high reactive effects, high cost, face-to-face threat potential, labour-intensive, time-consuming

Questionnaire
• Advantages: low cost, honesty increased if it is anonymous, respondent sets pace, variety of options
• Limitations: possible inaccurate data, on-the-job responding conditions are not controlled, respondents set varying paces, return rate of questionnaires difficult to control

Direct Observation
• Advantages: non-threatening, excellent way to measure behavior change
• Limitations: possibly disruptive, reactive effect possible, may be unreliable, trained observers needed

Written Test
• Advantages: low purchase cost, readily scored, quickly processed, easily administered, wide sampling possible
• Limitations: may be threatening, possible low relation to job performance, reliance on norms may distort individual performance

Performance Test
• Advantages: reliable, objective, close relation to job performance
• Limitations: time-consuming, simulation often difficult, high development cost

Performance Data
• Advantages: reliable, objective, job-based, easy to review, minimal reactive effects
• Limitations: lack of knowledge of criteria for keeping or discarding records, information-system discrepancies
Designs of training evaluation

• One-group pre-test, post-test design
• Randomized non-equivalent control group design: the group that undergoes training is the experimental group; the one that does not is the control group
• Randomized equivalent control group design
• Post-test only control group design: prevents the effects of pre-test sensitivities
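A control-group design like those above can be scored with a simple difference-in-differences calculation: compare the learning gain of the trained group against the gain the control group achieved without training. The test scores below are hypothetical, purely for illustration:

```python
def mean(xs):
    return sum(xs) / len(xs)

# Hypothetical pre- and post-training test scores (out of 100).
experimental_pre  = [52, 48, 61, 55, 50]   # group that received training
experimental_post = [74, 70, 82, 77, 69]
control_pre  = [51, 49, 60, 54, 52]        # group that did not
control_post = [55, 50, 63, 57, 53]

gain_experimental = mean(experimental_post) - mean(experimental_pre)
gain_control = mean(control_post) - mean(control_pre)

# The estimated training effect is the gain beyond what the
# control group achieved without training.
training_effect = gain_experimental - gain_control
print(round(training_effect, 1))  # 18.8
```

Without the control group's gain as a baseline, the experimental group's raw improvement would overstate the effect of the training itself.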
Suggestions for better evaluation
• Plan your metrics before writing survey questions
• Ensure the measurement is replicable and scalable
• Ensure measurements are internally and externally
comparable
• Use industry-accepted measurement approaches
• Define value in the eyes of stakeholders
• Leverage automation and IT
• Manage the change associated with measurement
• Ensure your metrics have flexibility
