
Impact Monitoring and Evaluation: Overview of international models (developed by Suzanne Hattingh)

OVERVIEW OF INTERNATIONAL IMPACT EVALUATION MODELS

The document gives an overview of five international models:


 Integrated Evaluation model for Human Resource Development
 Kirkpatrick’s four levels of evaluation
 Human Performance Technology model
 Strategy for learning transfer, and
 Adaptive systems model.

MODEL 1: INTEGRATED EVALUATION MODEL FOR HRD


[Main sources consulted: Brinkerhoff (1987 and 1988).]

Overview of the model


The Integrated Evaluation Model for Human Resource Development (HRD) of
Robert Brinkerhoff covers the broader skills development process, from planning,
programme design and delivery through to measuring the impact of the
intervention on the desired organisational objectives.

The integrated model for evaluating HRD follows the basic logic of the six stages of
HRD programme development and implementation (Brinkerhoff, 1988: 56):
 A need, problem or opportunity worth addressing exists that could be influenced
favourably by someone learning something.
 An HRD programme that has the potential to teach what is needed is designed
or accessed.
 The designed programme is successfully implemented.
 The participants exit the programme after acquiring the intended skills,
knowledge, values and/or attitudes.
 The participants retain and use their newly acquired learning in their workplace.
 The organisation benefits when participants apply their learning in the
workplace.

Brinkerhoff’s model integrates evaluation in the six stages described above:


 Stage 1 - Evaluation of goal setting: What is the skills need that has to be
addressed?
 Stage 2 - Evaluation of programme design: What kind of intervention will be
the best to address the identified need?
 Stage 3 - Evaluation of programme implementation: How well is the programme
being implemented?
 Stage 4 - Evaluation of the achievement of immediate outcomes: Did the
learners learn what they were supposed to learn?
 Stage 5 - Evaluation of the achievement of intermediate or usage outcomes:
Are the learners continuing to use and apply what they have learnt?


 Stage 6 – Evaluation of the impact and worth: Did the intervention make a
worthwhile difference in the workplace?
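As an illustrative aid, the six stages and their guiding questions can be captured as a simple data structure that an evaluation team could work through. The stage names and questions come from the list above; the class and function names are invented for this sketch and are not part of Brinkerhoff's model.

```python
from dataclasses import dataclass


@dataclass
class EvaluationStage:
    """One stage in the six-stage evaluation cycle (illustrative only)."""
    number: int
    focus: str
    guiding_question: str


BRINKERHOFF_STAGES = [
    EvaluationStage(1, "Goal setting", "What is the skills need that has to be addressed?"),
    EvaluationStage(2, "Programme design", "What kind of intervention will best address the identified need?"),
    EvaluationStage(3, "Programme implementation", "How well is the programme being implemented?"),
    EvaluationStage(4, "Immediate outcomes", "Did the learners learn what they were supposed to learn?"),
    EvaluationStage(5, "Usage outcomes", "Are the learners continuing to use and apply what they have learnt?"),
    EvaluationStage(6, "Impact and worth", "Did the intervention make a worthwhile difference in the workplace?"),
]


def next_stage(current: int) -> int:
    """Return the next stage number; stage 6 feeds back into stage 1, reflecting the cyclical model."""
    return current % 6 + 1
```

Because the stages form a cycle, `next_stage(6)` returns 1: findings from the impact evaluation feed into the next round of needs identification.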

Figure 1 depicts the six stages of Brinkerhoff's integrated model for evaluating
HRD.

[Figure 1 arranges the six stages in a cycle: Stage 1 (evaluate the identification
of skills needs and goals) → Stage 2 (evaluate programme design) → Stage 3
(evaluate programme implementation) → Stage 4 (evaluate achievement of
immediate outcomes: learning) → Stage 5 (evaluate the usage and endurance of
learning) → Stage 6 (evaluate the payoff: impact and worth).]

Figure 1: Six-stage integrated model for evaluating HRD

Key elements of the model


 Brinkerhoff emphasises the need for evaluation from the planning and design
of an intervention, through to the implementation and achievement of the
desired objectives. All stages of the HRD process should be evaluated, i.e. the
analysis of the need, the design and development of the intervention, and the
implementation.
 The model stresses the importance of designing the intervention on the basis
of an accurate identification of the skills needs to ensure that the selected
design is fit-for-purpose.
 Brinkerhoff also highlights the importance of making sure that the skills need is
sufficiently important to justify an intervention, and that a training programme
is the most appropriate way of addressing the identified need. This counters the
unfortunate tendency of some managers to send underperformers on training
programmes before analysing the cause of the performance problem, and
without considering whether other interventions might address it more
effectively.


 The model indicates the importance of specifying the desired learning
outcomes, the behavioural change that will result from the learners applying
their learning in the workplace, as well as the desired business results. This will
ensure that interventions are focused on these outcomes, changes and results.
 The model makes it possible to monitor an intervention across all six stages,
and to identify and correct problems throughout the planning, development and
implementation. This avoids the common problem of back-end evaluations
which identify problems after the programme has been concluded.

MODEL 2: KIRKPATRICK’S FOUR LEVELS OF EVALUATION


[Main sources consulted: Kirkpatrick (1994), Geis & Smith (1992), Callahan (1998)
and Moorhouse (2006b).]

Overview of the model


Donald Kirkpatrick introduced a four-level evaluation model that is widely accepted
as the standard for evaluating learning programmes. The four levels at which
learning interventions should be evaluated are:
 Level 1 evaluation - Reaction: Did the learners find the programme relevant,
interesting and enjoyable?
 Level 2 evaluation - Learning: Did the learners acquire the knowledge,
understanding, skills, values and/or attitudes during the programme?
 Level 3 evaluation - Behaviour: Are the learners applying the learning in the
workplace?
 Level 4 evaluation - Results: Does the application of learning have a positive
impact in the workplace?
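The four levels can be sketched as a simple evaluation record. This is an illustrative structure only: the class name, field names and 0-100 scoring scale are assumptions for this sketch, not part of Kirkpatrick's model.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class KirkpatrickEvaluation:
    """Scores (0-100) for one programme on the four levels; None means not yet evaluated."""
    reaction: Optional[float] = None   # Level 1: relevance, interest and enjoyment
    learning: Optional[float] = None   # Level 2: knowledge, skills, values and attitudes acquired
    behaviour: Optional[float] = None  # Level 3: application of the learning in the workplace
    results: Optional[float] = None    # Level 4: impact of that application on the workplace

    def levels_completed(self) -> int:
        """How many of the four levels have been evaluated so far."""
        return sum(v is not None for v in (self.reaction, self.learning, self.behaviour, self.results))

    def is_fully_evaluated(self) -> bool:
        return self.levels_completed() == 4
```

Levels 1 and 2 are typically scored during or at the end of delivery, while levels 3 and 4 are filled in later by follow-up evaluations in the workplace, so a record with only `reaction` and `learning` set is a normal intermediate state.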

Key elements of the model


 The model supports continuous evaluation at different stages of the delivery of
a learning programme to determine the effectiveness of the programme for the
learners and the workplace.
 Different role players are responsible for evaluation at the four levels:
- Level 1 evaluation should be conducted by the learning facilitator at the end
of a learning programme, but it could be conducted at various intervals
during the programme.
- Level 2 evaluation is mainly conducted by assessors.
- It could be argued that the responsibility for level 3 and 4 evaluation is
shared by the training provider and employer. Ideally, they should conduct
follow-up evaluations after completion of a programme to determine
whether learners are applying the learning and whether the programme has
made an impact on business performance.


MODEL 3: HUMAN PERFORMANCE TECHNOLOGY MODEL


[Main sources consulted: Stolovitch & Keeps (1992), Robinson & Robinson (1989
and 1996).]

Overview of the model


The model promotes a systematic approach to analysing, improving and managing
performance in the workplace. The main objective of the model is to ensure that
training is focused on addressing the gap between current and desired performance
to ensure that the organisation’s needs are met. It focuses on the interrelationship
and alignment of factors impacting on workplace performance and which support
or inhibit the transfer of learning. The model stresses the importance of identifying
the systems and processes in the work environment that need to be modified in
order to address performance gaps (e.g. coaching of the learner and feedback from
the manager), as well as appropriate incentives and rewards.

Key elements of the model


 Performance consulting is the key process in this model. ‘Human performance
technologists’ (broadly referring to HRD professionals) must establish
collaborative relationships with management and others. They must work
together to identify performance needs and develop and implement strategies
for improving performance linked to organisational goals. The strategic
partnership and collaboration in addressing performance gaps are essential
components of this model.
 The human performance technologists must clearly understand the goals and
strategies the organisation wants to achieve, as well as the performance that is
required for the organisation to thrive, so that they can link performance
enhancement activities to organisational needs and goals.
 The model recognises that training is only one factor influencing on-the-job
performance. Therefore, it is essential to determine conditions in the work
environment that must be modified to support training interventions and
contribute towards performance improvement.
 Human performance technologists specialise in developing solutions to
performance problems, and work with informed people in the organisation to
determine all the interventions that are required for training interventions to
contribute towards improved performance.

Figure 2 is a diagram of the Performance Relationship Map, which depicts the
factors that have to be considered during performance consulting (Robinson &
Robinson, 1996: 68-69).


[Figure 2 maps four linked quadrants, connected by causal linkages, with a gap
between each 'should' and its corresponding 'is':
1. Should (operational results): the organisation has business and operational
goals.
2. Should (on-the-job performance): on-the-job performance requirements are
established for employees to ensure that the goals are met (the causal linkage
of performance required).
3. Is (on-the-job performance): the current, or actual, performance of employees
when compared to the 'should' in quadrant 2.
4. Is (operational results): current performance is yielding current operational
results (the causal linkage of actual performance to operational results).
5. Environmental factors impacting performance:
- External causes outside the control of management that can contribute to a
gap in operational and performance results, e.g. competition, the economy
and governmental regulations.
- Internal causes within the control of management that can contribute to a
gap in performance and operational results, e.g. lack of clearly defined
accountability, no incentive or reward to perform as required, lack of
managerial coaching and reinforcement, and lack of employee skill or
knowledge.]

Figure 2: Performance Relationship Map
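One practical element of the map, the split between external and internal causes of a performance gap, can be sketched as follows. The cause labels come from the map itself; the function name and example usage are invented for illustration.

```python
# Causes of performance gaps, as listed in the Performance Relationship Map.
EXTERNAL_CAUSES = {"competition", "economy", "governmental regulations"}
INTERNAL_CAUSES = {
    "lack of clearly defined accountability",
    "no incentive or reward to perform as required",
    "lack of managerial coaching and reinforcement",
    "lack of employee skill or knowledge",
}


def classify_causes(observed: set) -> dict:
    """Split observed causes into those management can act on and those it cannot control."""
    return {
        "internal": observed & INTERNAL_CAUSES,
        "external": observed & EXTERNAL_CAUSES,
    }
```

Only the internal subset is actionable through interventions such as training or coaching, which is why the map directs attention to causes within management's control.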

MODEL 4: STRATEGY FOR LEARNING TRANSFER


[Main sources consulted: Broad & Newstrom (1992).]

Overview of the model


What Broad & Newstrom describe in their book, Transfer of training (1992), is
not strictly a model, but rather a set of strategies for supporting the transfer of
learning. However, because the transfer of learning is critical in evaluating the
overall impact of skills development, this aspect has been incorporated into the
IM&E model. The


central theme of this book is how to ensure the effective transfer of the knowledge
and skills acquired during training sessions to the workplace in order to support
and maintain performance.

Transfer is defined as the effective and continuing application, by learners to their
jobs, of the knowledge and skills gained in training on and off the job (1992: 6).
The authors underscore the scale of the learning transfer problem by estimating
that about 80% of the knowledge and skills gained during training are not fully
applied by employees in their work. The book addresses the common problem of
how to ensure that learning interventions – which are implemented at high cost to
organisations in terms of money, time away from work, production losses, etc. –
actually translate into organisational benefits.

Key elements of the model


 If organisations are to benefit from the skills development interventions, they
must improve the transfer of learning. They should also identify potential
barriers in the workplace that inhibit the transfer of learning, e.g. lack of
reinforcement on the job, non-supportive organisational cultures and resistance
to change.
 The responsibility for promoting the transfer of learning must be shared by the
learner’s manager/supervisor, the training provider and the learner. In
particular, it involves taking managers from behind the scenes and placing them
in visible management roles in relation to the skills development of their
employees.
 Broad and Newstrom introduce a ‘transfer matrix’ to stress the important role
of all three partners before, during and after the delivery of the intervention, as
depicted in Table 1. They stipulate what each partner must do at these three
stages to ensure that the intervention achieves the desired results in the
organisation in a sustainable way (1992: 170-171).

Before the intervention During the intervention After the intervention

Manager

Trainer

Trainee

Table 1: Learning transfer matrix
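The matrix above can be represented as a lookup of actions keyed by partner and stage. The structure mirrors Table 1, but the example entries below are invented placeholders; the book stipulates the actual actions for each cell.

```python
PARTNERS = ("manager", "trainer", "trainee")
STAGES = ("before", "during", "after")

# One (partner, stage) cell per combination, nine cells in total, each holding a list of actions.
transfer_matrix = {(p, s): [] for p in PARTNERS for s in STAGES}

# Hypothetical example entries:
transfer_matrix[("manager", "before")].append("Brief the learner on expected workplace application")
transfer_matrix[("trainer", "during")].append("Use workplace-relevant practice exercises")
transfer_matrix[("trainee", "after")].append("Apply the new skills and review progress with the manager")


def actions_for(partner: str) -> dict:
    """All planned transfer actions for one partner, grouped by stage."""
    return {s: transfer_matrix[(partner, s)] for s in STAGES}
```

An empty cell in such a plan is a quick signal that one partner has no commitments at a given stage, which the authors identify as a barrier to transfer.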

 Achieving the desired impact in the workplace requires proactive planning of
transfer, including actions to support learning transfer before, during and after
a learning intervention. Therefore, one of the steps in the overall planning and
delivery of a learning intervention must be the establishment of the three-way
partnership between the manager, learner and learning facilitator. This is
depicted in Figure 3 below (Broad & Newstrom, 1992: 38). This partnership is
reflected in the interaction that is required between Steps 5 and 6.


[Figure 3 shows a cyclical decision-making process with seven steps:
1. Identify the need for performance improvement.
2. Identify probable causes of the performance problem/opportunity.
3. Address work environment and motivational causes of the performance
problem/opportunity.
4. Consider training as part of the solution when lack of knowledge/skills is the
cause of the problem.
5. Develop the transfer partnership and implement transfer strategies.
6. Design and deliver training.
7. Evaluate training outcomes on four levels: reaction, learning, behaviour and
results.]

Figure 3: Key performance-related decision-making process

MODEL 5: ADAPTIVE SYSTEMS MODEL FOR EVALUATION


[Main sources consulted: Brethower & Rummler (1976) and Smith & Geis (1992).]

Overview of the model


The underlying assumption in systems models is that the delivery of a programme
is only one phase in the broader life cycle of a programme, covering the inception,
planning, implementation, ongoing administration, and eventually the termination
or obsolescence of the programme. Systems models also recognise that
programmes are delivered within a specific organisational setting, which places
certain demands and constraints on the programme. Such models assist in
evaluating a programme in relation to its organisational context and decisions taken
over its life cycle (Smith & Geis, 1992: 160).


Adaptive systems models include feedback loops throughout a programme to
facilitate continuous monitoring, adaptation and improvement of all processes in
the life cycle.

Key elements of the model


The adaptive systems model promotes the evaluation of the following main issues
in HRD interventions:
 Inputs: including the front-end analysis within the workplace context, the
identification of the needs of the organisation and stakeholders, programme
objectives and environmental variables that will affect programme design and
implementation
 The processing system that converts the inputs into outputs: where different
options are considered and decisions taken on the most appropriate way of
addressing the identified needs within the specific organisational context –
considering the cost, available resources and appropriateness of the programme
for the specific context
 The outputs: i.e. the learners who come out of the processing system having
acquired some knowledge, skills, attitudes and/or values
 The receiving system: i.e. the area or workplace unit where the learners work,
and where they are required to apply what they have learnt, and
 The results: i.e. the impact of the intervention on on-the-job performance,
demonstrated in the achievement of the desired objectives, as well as the
tracking of learner progress after the programme to determine whether it has
had a long-term and lasting impact in the organisation.

Feedback loops are critical in adaptive systems models as they generate
information about the effectiveness of the inputs and processing system in
addressing the needs in the receiving system. This information must be used in
making the required adaptations. Systems models generally include four levels of
evaluation, similar to Kirkpatrick’s four levels of evaluation:
 Are the learners satisfied with the programme?
 Does the programme teach the concepts?
 Are the concepts used on the job?
 Does application of the concepts positively affect the organisation?
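The four questions above can be sketched as a sequence of checks in which any failure flags a feedback loop that calls for adaptation. The function and label names are illustrative only.

```python
def evaluate_programme(satisfied: bool, concepts_taught: bool,
                       used_on_job: bool, positive_impact: bool) -> list:
    """Apply the four systems-model questions in order and return the checks that failed.

    Each failed check corresponds to a feedback loop: the inputs or processing
    system must be adapted before the programme is delivered again.
    """
    checks = {
        "Learner satisfaction": satisfied,
        "Concepts taught": concepts_taught,
        "Concepts used on the job": used_on_job,
        "Positive organisational impact": positive_impact,
    }
    return [name for name, passed in checks.items() if not passed]
```

A programme that teaches well but does not transfer, for example `evaluate_programme(True, True, False, False)`, flags only the last two loops, pointing the adaptation at the receiving system rather than at programme content.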

The four main phases of a systems model of evaluation, with the feedback loops,
are depicted in Figure 4.

[Figure 4 shows the four phases in sequence: Inputs (the learners and their
learning needs) → Processing System (converting the needs into learning
programmes) → Receiving System (the job and the organisation) → Outputs
(results and job performance), with feedback loops running from the later phases
back to the earlier ones.]

Figure 4: Main features of an adaptive systems model of evaluation


REFERENCES

 Brethower, K.S. & Rummler, G.A. (1976) Evaluating training. In Baird, L.
Training and Development Sourcebook. Amherst, Massachusetts: Human
Resource Development Press.
 Brinkerhoff, R.O. (1987) Achieving results from training – How to
evaluate human resource development to strengthen programs and
increase impact. San Francisco: Jossey-Bass Publishers.
 Brinkerhoff, R.O. (1988) An integrated evaluation model for HRD. In Training
& Development. Amherst: American Society for Training & Development.
 Broad, M.L. & Newstrom, J.W. (1992) Transfer of training – action-packed
strategies to ensure high payoff from training investments. New York:
Addison-Wesley Publishing Company, Inc.
 Callahan, M. (1998) The role of the performance evaluator. Alexandria: ASTD
(Info-line series)
 Geis, G.L. & Smith, M.E. (1992) The function of evaluation. In Stolovitch, H.D.
& Keeps, E.J. Handbook of human performance technology – a
comprehensive guide for analyzing and solving performance problems
in organizations. San Francisco: Jossey-Bass.
 Kirkpatrick, D. (1994) Evaluating Training Programs: The Four Levels. San
Francisco: Berrett-Koehler Publishers.
 Moorhouse, C. (2006a) Conducting ETD Evaluation. Rosebank: Knowres
Publishing.
 Moorhouse, C. (2006b) Understanding ETD evaluation. Rosebank: Knowres
Publishing.
 Phillips, J. (1994) “Measuring ROI: progress, trends, and strategies” in In
action: Measuring return on investment. Amherst: The American Society
for Training & Development.
 Robinson, D.G. & Robinson, J.C. (1989) Training for impact: how to link
training to business needs and measure the results. San Francisco:
Jossey-Bass.
 Robinson, D.G. & Robinson, J.C. (1996) Performance consulting – moving
beyond training. San Francisco: Berrett-Koehler Publishers, Inc.
 Rothwell, W.J. & Kazanas, H.C. 1992. Mastering the instructional design
process: a systematic approach. San Francisco: Jossey-Bass.
 Smith, M.E. & Geis, G.L. (1992) Planning an evaluation study. In Stolovitch,
H.D. & Keeps, E.J. Handbook of human performance technology – a
comprehensive guide for analyzing and solving performance problems
in organizations. San Francisco: Jossey-Bass.
 Von Broembsen, M., Wood, E. and Herrington, H. (2005) Global
Entrepreneurship Monitor South African Report 2005.

