
CONTRIBUTORS

Secure the Future: Beryl Mohr (Programme Executive)
Manto Management: Juliann Moodley (Principal Course Facilitator, M&E Consultant)
Centre for Interdisciplinary Research on AIDS (CIRA), Yale University, USA: Alana Rosenberg (M&E Advisor)
Funded By:
Step by step guide to Monitoring and Evaluation
CHAPTER 1: PLANNING
1. INTRODUCTORY STATEMENTS
1.1 OVERVIEW
The focus of this chapter is to enhance the reader's understanding
of planning, the planning cycle, how to plan a programme
and project, and how to move from strategic planning to
individual planning.
1.2 OBJECTIVES
To give details as to what planning is;
To understand the planning cycle;
To identify the different levels of planning.
1.3 TARGET GROUP
Monitors;
Project Managers;
Project Staff.
2. INTRODUCTION TO PLANNING
For many years, captains of industry have known that success depends on a clear vision of where
they are going and a reliable map to get there. Without these tools, organisations would be left to
the vagaries of the external and internal environments.
In the NGO sector, organisations need to recognise the strengths of planning, harness it as an
asset and set it to work in their favour.
Our political commitment to improve and expand service delivery needs to be accompanied by a
well-crafted plan that is implemented with gusto and energetic will.
So, how do we do this?
The answer is simple:
Strategic, proactive teamwork: strategic planning that is linked to strategic implementation
of the plan in a disciplined, systematic manner.
William C Bean, 1994
2.1 WHAT IS A PLANNING SYSTEM?
The planning system must directly express the needs of an organisation.
Planning happens at different levels.
PLANNING LEVEL | DEFINITION | WHO IS INVOLVED | TIME-FRAME
Strategic planning | The process of determining long-term visions and goals for the organisation and how to fulfil them. | Board members and management | 5 years
Business planning | Highlights the specific focus, the objectives and targets to be met to attain the organisation's goals (and medium-term focus). The indicators for this process are outcomes-based. | Director of organisation and senior management | 3 years
Annual planning | A short-term plan put in place annually to meet the organisation's goals. It includes annual scorecards, projects, programmes, budget and challenges. The indicators for this process are outputs-based. | Director; project manager | 1 year
Operational planning | Drives the implementation of business plans, is action orientated and deals with short-term (daily, monthly) activities. | Project staff | 1 year
Individual planning | Highlights individual objectives and actions in relation to attaining organisational (division, section, unit) plans. | Project staff | 6 months
An organisation must develop clear inter-linkages in its planning system. Continuity, consistency and
rigour are key elements in planning for success.
Continuity: Planning ensures that there is alignment between all departmental operations and
the organisation as a whole. It also secures the connection of activities over time. In other
words, the work of the organisation has coherence, and stability in the medium to long-term.
Consistency: Planning guarantees that each department is following similar processes to other
departments. The framework promotes the consistent use of terminology, and conceptualisation
processes towards aligned goals.
Rigour: Successful planning must have implementable detail. This detail must be measured
and assessed.
2.2 UNDERSTANDING THE PLANNING CYCLE
[Planning cycle diagram] The cycle comprises strategic planning, business planning, operational planning (November of each year) and a review of the previous year (November of each year). Each stage must answer: What? When? By whom? Communication and monitoring and evaluation run throughout the cycle.
2.3 LEVELS OF PLANNING
[Levels-of-planning diagram] A hierarchy from strategic planning down through business planning and operational planning to individual planning, with the financial plan spanning all levels.
Source: Manto Management: Bristol-Myers Squibb M&E Capacity Building Workshop; August 2002
3. STRATEGIC PLANS
3.1 PURPOSE AND CORE BUSINESS
Introduction - What is the essence of the organisation?
Vision;
Mission;
Core business definition - what are the key products and services that the organisation
delivers to its customers? Identify:
- Stakeholders/customers
- Products and services offered (distinguished in terms of support or delivery)
3.2 STRATEGIC OUTCOMES (INTENT) AND PERFORMANCE MEASURES
(INDICATORS AND TARGETS)
3.2.1 Establishing the strategic outcomes
In order to establish strategic outcomes, the organisation should consider the key strategy
drivers (key imperatives - the determined vision of the organisation, political agenda of the
organisation (if appropriate) and policy/legislative requirements of the organisation).
Vision - here the task is to interpret and identify the priorities of the vision that are relevant
to the organisation (for example, home-based care (HBC): what will the organisation do
in respect of this?).
Mission - here the focus is on stating which objectives are relevant/applicable to the
organisation.
Policy and legislation - here the focus is on what the organisation will be required to do
in terms of relevant mandates.
3.2.2 Developing the strategic objectives, key performance indicators and targets in
relation to the strategic outcomes.
Organisations should now translate their strategic outcomes into strategic objectives that
demonstrate the core components of the strategy regarding how the strategic outcomes will
be achieved. This would also include the development of key performance indicators and
targets.
The strategic objectives should be written in a manner that identifies the core strategies
(tactics) that will be used to achieve the strategic outcome. For example, the core strategy to
achieve the strategic outcome "decreased dependence by families on HBC caregivers" is an
increase in support and education programmes for families on caring for PLWAs.
Similarly, the key performance indicators and targets need to be specifically related to the
strategy.
With regard to the setting of multi-year targets, this will also be determined by the nature of
the strategy and the requirements for implementation. This can result in one-year targets
and/or multi-year targets, up to a maximum of five years. Where single year or multi-year
targets are set, the business plan must identify the target to be achieved for each year
applicable and the final achievable target. The annual plan will then need to reflect the
specific target for that year and the final target, so that progress against the annual plan and
the five-year business plan can be monitored and evaluated.
The strategic plan (Appendix 1) illustrates how multi-year targets by strategic outcome,
strategic objective and key performance indicators need to be captured, as well as the
proposed layout for the score-card.
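As a sketch of how progress against both the annual plan and the five-year business plan might be tracked together, consider the following. This is purely illustrative: the indicator, field names and figures are hypothetical, not taken from this guide or its appendices.

```python
# Illustrative sketch (hypothetical data): tracking annual targets
# against a final multi-year target for one key performance indicator.

def progress_report(indicator, yearly_targets, yearly_actuals, final_target):
    """Compare actual results with each year's target and with the final
    target, so progress against the annual plan and the multi-year
    business plan can both be monitored."""
    lines = []
    cumulative = 0
    for year, target in sorted(yearly_targets.items()):
        actual = yearly_actuals.get(year, 0)
        cumulative += actual
        lines.append(
            f"{indicator} | year {year}: actual {actual} vs target {target} "
            f"| cumulative {cumulative} of final target {final_target}"
        )
    return lines

# Hypothetical HBC example: families reached by support and education programmes.
report = progress_report(
    "Families reached by support/education programmes",
    yearly_targets={1: 100, 2: 150, 3: 200},
    yearly_actuals={1: 90, 2: 160},
    final_target=450,
)
```

The point of the sketch is simply that each year's entry carries both the annual target and the final target, as the text above requires.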
4. THE OPERATIONAL PLAN
The operational plan (Appendix 2) is a rolling plan that lays out the various
programmes/projects/initiatives that have been identified to deliver on the strategic outcomes and
strategic objectives. The components of the operational plan will typically be the following:
4.1 IDENTIFICATION OF PROGRAMMES AND PROJECTS
Identification of the various programmes/projects/initiatives that are anticipated to deliver the
targeted strategic outcomes, strategic objectives and key performance indicators.
Programmes/projects/initiatives may have the following options for achieving the identified
strategic outcomes, strategic objectives and key performance indicators:
To target a particular key performance indicator;
To target a particular strategic objective covering multiple key performance indicators;
To target a particular strategic outcome covering multiple strategic objectives.
It is important to understand the level at which the programme/project/initiative is targeted (in
other words, at the strategic outcome, or strategic objective, or indicator) and more importantly,
how the deliverables of the programme/project/initiative will meet the indicators and targets.
Ideally, where programmes/projects/initiatives cut across strategic outcomes, strategic
objectives or key performance indicators, specific components or deliverables of these
programmes/projects/initiatives should be discretely identified and linked to the specific strategic
outcome, strategic objective or key performance indicator targeted.
Typically the content of the operational plan in this section will consist of:
A description of the project, initiative or operation (what does the project or initiative
entail, who are the beneficiaries, among others);
A description of the project owner and participants, and other involved stakeholders,
(especially if this is an inter-departmental project or initiative or operation);
The strategic outcomes, strategic objectives, key performance indicators, deliverables,
and targets for the programme/project/initiative;
The anticipated timing, including start date, implementation period and end date;
The resource requirements (people, infrastructure, technology, among others);
The estimated costs (operating and capital).
Planning Process
[Diagram] Vision and strategy lead to strategic outcomes and project priorities, which feed into a project framework of objectives, activities, timeframes, indicators, targets and results.
Source: Manto Management: Bristol-Myers Squibb M&E Capacity Building Workshop; August 2002
4.2 ALIGNMENT OF PROGRAMMING AND PROJECTS
Alignment of programmes/projects/initiatives to the strategic outcomes, strategic objectives and
key performance indicators:
List the programmes/projects/initiatives and their associated strategic outcomes, strategic
objectives, key performance indicators and targets;
Identify and discuss options available for areas where no, or insufficient, programmes/
projects/initiatives have been identified to deliver on the established targets.
4.3 THE RESOURCE PLAN
Once the operational plan has been developed, the resource plan will consist of an aggregation of
the resource requirements for the various programmes/projects/initiatives identified. Its main
purpose is to obtain a view of what is going to be required during the life-cycle of a project in
terms of:
Personnel requirements;
Infrastructure support (equipment, machinery, tools, among others);
Technology requirements (hardware, software, tools, among others);
Training and development;
Other (international services, consulting services, among others).
The above resource requirements should be summarised by programme/project/initiative activity
and by year, so that a clear view of the total resource requirements is obtained, as well as the
resource requirements on a year-by-year basis for the project period.
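For illustration, summarising resource requirements by year across projects might be sketched as follows. The projects, resource categories and amounts are hypothetical, not drawn from this guide.

```python
# Illustrative sketch (hypothetical data): aggregating resource
# requirements by year across programmes/projects/initiatives.
from collections import defaultdict

# Each entry: (project, year, resource category, cost).
requirements = [
    ("HBC training", 1, "Personnel", 50_000),
    ("HBC training", 1, "Training and development", 20_000),
    ("HBC training", 2, "Personnel", 55_000),
    ("Community outreach", 1, "Infrastructure support", 30_000),
    ("Community outreach", 2, "Technology requirements", 15_000),
]

def summarise_by_year(reqs):
    """Total resource costs per year, across all projects."""
    totals = defaultdict(int)
    for _project, year, _category, cost in reqs:
        totals[year] += cost
    return dict(totals)

by_year = summarise_by_year(requirements)   # year-by-year totals
grand_total = sum(by_year.values())         # total for the project period
```

This mirrors the two views the text asks for: total resource requirements for the whole period, and requirements on a year-by-year basis.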
4.4 THE FINANCIAL PLAN
The financial plan is a budget projection of the financial performance of the organisation,
covering primarily the following components:
Annual operating revenue and expenditure for the project period;
Annual cash flow for the project period;
Annual balance sheet for the project;
Key financial ratios and other strategic measures;
Key assumptions relating to aspects such as growth, interest rates, inflation, among others.
In addition to the above, the financial plan should also meet the organisation's requirements
regarding the standards and type of financial information presented.
5. THE ANNUAL PLAN
The annual plan contains detailed information pertaining to the operations of the
project, the resources, the capital requirements and the budgeted financials for the coming year.
The annual plan is developed on the basis of the organisation's business planning process.
The purpose of the annual plan is to outline in a more detailed and considered manner what the
project will be focusing on for the coming year, and how it plans to deliver on the targets that
have been set. The annual plan also serves as the implementation plan, and clarifies other
priorities that have been identified and will therefore need to be monitored and evaluated to
assess progress against both the business plan targets and the annual targets.
The contents of the annual plan will follow the contents of the business plan, covering the
following sections:
5.1 INTRODUCTION
The introduction should provide a broad overview of the purpose and core business of the project.
The introduction should also discuss the scope and purpose of the plan, and the period applicable.
5.2 THE ANNUAL PLAN
A summary of the strategic outcomes, objectives and targets that will be focused on for the
coming year should be provided. The strategic outcomes, objectives and targets will be drawn from
the business plan. The summary should indicate which objectives and targets are only applicable
for the coming year, and which are rolling targets that will contribute to the overall objectives and
targets. The summary should also rank the various objectives/priorities and indicate where the
emphasis will be placed.
5.3 RESOURCES
The annual plan will provide a detailed list of the resources required, including aspects such as
nature and type of the resource (people, infrastructure, equipment, etc), timing (when the
resource is required), specifications (if necessary) and detailed associated costs.
5.4 FINANCIALS
The annual plan will need to provide detailed information on the following financial components:
Annual projected operating revenue and expenditure, with actual monthly or quarterly
income and expenditure statements;
Annual projected cash flow, with actual monthly or quarterly cash flow statements;
Annual projected balance sheet, with actual monthly or quarterly balance sheet
statements;
Key financial ratios and indicators and targets that have been defined in the overall plan;
Other information that will be required to meet the requirements of the organisation and
funders.
How do you actually write strategic outcomes, key performance areas, outputs?
The language you use in your plan should be:
Clear;
Simple;
Precise.
Do not use ambiguous words. Use words that can easily be understood by everybody.
All plans should be written in the future "as is" tense. This means that you express your
intentions as if they had already been achieved: you are never going to do something, you
have already done it.
All sentences must be in the active voice. For example, "initiate safety and security procedures",
not "to initiate procedures" or "will initiate procedures".
Always begin your sentence with a verb; do not start with words like "to".
Examples of starting words:
Improve;
Implement;
Direct;
Maximise;
Co-ordinate;
Design;
Promote.
It is very important that everybody in the organisation uses the same words to mean the same
thing.
What questions can I ask in order to get my answers?
ANSWER | QUESTION TO ASK
Strategic outcome | WHAT is the broader impact we want to have?
Key performance areas | HOW can we make this goal more tangible? HOW can we narrow down the bigger question to a more focused scale?
Activities | In order to realise the outcome, WHAT must we actually do? WHAT activities can we do which will assist in reaching our goal (strategic outcome)?
Deliverables/outputs | WHAT tangible product or service will we get from each activity? If we do this activity, WHAT do we expect to get at the end of it?
Indicators | HOW will we be able to assess whether we have achieved WHAT we set out to do? WHAT will we use to measure whether we have accomplished what we said we would?
Targets | If we are aiming to achieve certain outputs, WHAT quantitative measures can we attach to them? HOW much do we want to achieve? HOW many do we want to make/do/deliver?
Timeframes | By WHEN should our activities be completed? HOW long will it take to do the necessary tasks?
Person/people responsible | WHO will make sure that what we want to happen does in fact happen? WHO is responsible for the activity?
Networks | With WHICH other departments/units/directorates do we need to co-ordinate in order to achieve our outputs and ultimately outcomes?
Resources | WHAT do we need to do our job? WHAT technology is required? WHAT skills are required? HOW many people do we need? WHO will conduct the task?
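The answers to these questions hang together as one structure: each activity under a strategic outcome should carry its own deliverable, indicator, target, timeframe, responsible person and resources. One way to capture and check this might be sketched as follows; all names and values are hypothetical, not taken from the guide.

```python
# Illustrative sketch (hypothetical names): capturing the answers to the
# planning questions as a nested structure, and checking completeness.
plan_entry = {
    "strategic_outcome": "Decreased dependence by families on HBC caregivers",
    "key_performance_area": "Family support and education",
    "activities": [
        {
            "activity": "Run family education workshops",
            "deliverable": "Workshops delivered to families",
            "indicator": "Number of families trained",
            "target": 100,
            "timeframe": "Q4",
            "responsible": "Project manager",
            "resources": ["Facilitators", "Training materials", "Venue"],
        }
    ],
}

def check_entry(entry):
    """Verify that every activity answers all of the planning questions."""
    required = {"activity", "deliverable", "indicator", "target",
                "timeframe", "responsible", "resources"}
    return all(required <= set(a) for a in entry["activities"])
```

A check like this enforces the manual's point that a plan is only rigorous when it has implementable, measurable detail at every level.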
Source: Manto Management: Bristol-Myers Squibb M&E Capacity Building Workshop; August 2002
CHAPTER 2: THE CONTEXT OF MONITORING AND EVALUATION
1. INTRODUCTORY STATEMENTS
1.1 OVERVIEW
This chapter provides the reader with an overview of contextual
issues related to monitoring and evaluation: for example,
what factors influence monitoring and evaluation, definitions
of evaluation, and the difference between monitoring and
evaluation.
1.2 OBJECTIVES
Define what monitoring and evaluation is;
Understand the different purposes for which monitoring and
evaluation can be used;
Clarify the difference between external and internal
evaluation;
Identify the benefits and limitations of both monitoring and
evaluation;
Identify the participants in the monitoring and evaluation
process and become familiar with their roles;
Identify the different types of monitoring and evaluation
studies;
Be familiar with the different models of monitoring and
evaluation;
Be familiar and able to work with the instruments for
internal evaluation;
Be familiar and able to work with the instruments for
external evaluation.
1.3 TARGET GROUP
Monitors and supervisors;
Project Managers;
Project Staff.
[ProDec process diagram (Babbie and Mouton, 2001)] Focus of the monitoring and evaluation → the monitoring and evaluation design → collecting the data → conducting the monitoring and evaluation process → analysis of the monitoring and evaluation results → monitoring report and evaluation report.
2. GENERAL CONTEXTUAL ISSUES REGARDING
MONITORING AND EVALUATION
2.1 INTRODUCTION - PROCESS
The basic point of departure for the methodology outlined is that all empirical monitoring and
evaluation conforms to standard logic. Irrespective of the kind of data that is collected (market
research, surveys, policy evaluation), the purpose of the information and the method utilised,
should conform to this logic. This logic is referred to as the ProDec Framework (Babbie and Mouton, 2001). The ProDec
Framework is based on the premise that monitoring and evaluation is conducted to construct an
argument or present a point of view that is supported by evidence and scientific reasoning. This is
done through the following process:
Establishing the focus of the monitoring and evaluation; designing the monitoring and evaluation; collecting the data; conducting the monitoring and evaluation process; analysing the results; and producing the monitoring and evaluation reports.
2.1.1 Worksheet
PROJECT MONITORING
Why are we monitoring this project?
What are we monitoring in this project?
What factors do we consider as important influences for the monitoring?
Who is the monitoring information for?
Describe the process that you will use to conduct monitoring of the project.
2.2 THE CONTEXT OF MONITORING AND EVALUATION
2.2.1 Evaluation Defined
There are a number of varying definitions of evaluation in general, but all of them
consider the same factors. These factors are:
They all refer to a systematic assessment, collection or analysis of information;
They all collect information about the operation or outcomes of a process;
They all refer to using this information to inform decision-making around improvement of
the process or project (Boulmetis and Dutwin, 2000; Weiss, 1998)
Considering these factors, the following generic definition of evaluation is proposed:
Evaluation is the systematic process of collecting and analysing data in order to determine
whether and to what degree the objectives have been or are being achieved in order to make
a decision. (Boulmetis and Dutwin, 2000, p4)
Monitoring refers to the regular collection and analysis of information to assist timely
decision-making, ensure accountability and provide the basis for evaluation and learning.
(Boulmetis and Dutwin, 2000)
2.2.2 Monitoring and Evaluation
Monitoring and Evaluative Thinking
Monitoring and evaluation is a process and way of thinking;
Monitoring and evaluation is much more than just providing information for someone
else at the end of a project;
Monitoring and evaluation should be an integral part of management;
Evaluative thinking begins with project design;
Monitoring and evaluation provides key information for management;
Monitoring and evaluation can be creative, fun and rewarding.
Purposes of Monitoring and Evaluation
Ensuring planned results are achieved;
Improving and supporting management;
Generating shared understanding;
Generating new knowledge and support learning;
Building the capacity of those involved;
Motivating levels within the system;
Ensuring accountability;
Fostering public and political support;
Assessing merit and worth;
Initiating/ensuring institutional improvement;
Upholding oversight and compliance;
Developing knowledge (Mark et al, 2000).
(i) Assessment of Merit and Worth
Monitoring and evaluation can be conducted to determine whether a project has achieved
the desired outcomes. This process will look at the quality of the project (merit) and the value
of the project (worth) (Mark et al, 2000). Monitoring and evaluation that is conducted with
this purpose will facilitate the decision-making process in terms of the performance of the
project.
(ii) Project Improvement
The purpose of monitoring is to provide the project with feedback to allow it to adjust its way
of working or its structure to facilitate improvement. This purpose will result in project effects
and processes being monitored within the project environment. Typical monitoring will
include the comparison of data to define best practices. (Mark et al, 2000)
(iii) Oversight and Compliance
Monitoring and evaluation that has the purpose of oversight and compliance will review the
extent to which a project complies with the goals, objectives and rationale as defined in the
document. This purpose is generally used to determine whether the project is operating
within its defined parameters, and whether it is delivering what it is supposed to deliver.
(Mark et al, 2000)
(iv) Knowledge Development
Monitoring and evaluation that is conducted with the purpose of knowledge development
is generally used to test general theories around social processes and mechanisms as they
manifest in the context of social policies and communities.
2.2.3 Monitoring versus Evaluation
(i) Monitoring
Monitoring occurs when an internal team of staff implement the monitoring process.
A monitoring approach would be utilised to facilitate ongoing self-evaluation and
project-based monitoring. (Ramashia and Rankin, 1995) This approach would be adopted
where there is a need to ensure that a climate of routine monitoring is developed.
The advantages of conducting monitoring are as follows:
The cost of routine monitoring is normally significantly lower than the costs of an
external evaluation;
This approach creates increased opportunities for staff to develop their monitoring and
evaluation skills;
The use of monitoring supports the continuous assessment environment
(Ramashia and Rankin, 1995);
Monitoring facilitates a more efficient and useful evaluation process.
The disadvantages of using only a monitoring approach are as follows:
Using staff as monitors takes them away from their core functions. This could result in
extensive hidden costs for the project;
Monitoring processes are often seen as less objective, and therefore less credible, than
external evaluations;
Internal staff do not usually have the requisite levels of technical skills in monitoring
techniques. This could result in significant errors in monitoring design and
implementation. (Ramashia and Rankin, 1995)
(ii) External Evaluation
The key reason for adopting an external approach is that it is likely to lend a greater degree
of objectivity to the overall evaluation. External evaluators/supervisors have fewer
pre-conceived ideas about a project or programme that is being evaluated. The concept of
objectivity and credibility is critical to ensure an effective and relevant evaluation. Relying on
monitoring information only is seen as biased and predetermined, and has minimal validity.
(Ramashia and Rankin, 1995)
An external evaluator/supervisor essentially runs an external evaluation. This evaluator will
draw on the staff to provide him or her with the necessary information to conduct the
evaluation.
The advantages of using an external evaluation approach are as follows:
External evaluations have a higher level of objectivity. This results in an unbiased report
detailing the findings and recommendations;
As a result of the objectivity, external evaluations are seen as more credible;
External evaluators/supervisors generally have a high level of technical expertise in
evaluation. They can bring this to bear in the evaluation process. This also allows for a
high degree of competence in the implementation of the evaluation process.
(Ramashia and Rankin, 1995);
The disadvantages of using an external evaluation approach are as follows:
The cost of an external evaluation can be prohibitive. However, the costs are generally
established at the beginning of the evaluation process and can therefore be included in
an evaluation budget.
The use of external evaluators/supervisors reduces the opportunities for internal staff to
develop their own evaluation skills. (Ramashia and Rankin, 1995)
It is important to note that internal monitoring and external evaluation processes
complement each other, and both should be used to the maximal extent feasible.
2.2.4 A Participatory Learning Approach
To make Monitoring and Evaluation (M&E) useful, focus on:
Information needs for management;
Participation of the different levels of staff:
- Project Supervisors;
- Project Implementing Staff;
- Project Administrators.
Facilitating learning;
Providing feedback;
Questioning assumptions (reality checking).
AN INTEGRATED PERSPECTIVE ON MONITORING AND EVALUATION
[Diagram] Situation analysis; monitoring, evaluation and learning; organisational development; and performance development are shown as interlinked elements.
Source: Manto Management: Planning handbook for the City of Johannesburg, October 2002
VISUALISING AN M&E PLAN
[Timeline diagram covering three years, by quarter] Activities plotted against the quarters include: developing the M&E plan with stakeholders; training in use of the reporting system; PRA with participating communities; preparation for the annual review (performance and lessons learnt); annual review and planning workshops; preparation for the mid-term review; the mid-term review itself; phase two preparation; and markers for key meetings and report due dates.
It is important to see monitoring and evaluation as tools to be integrated into all aspects of
programme and project management, as illustrated below. The starting point is to ask: "What
information is required for effective management, and what sort of monitoring and evaluation
system is required to provide it?"
MANAGEMENT FUNCTIONS AND MONITORING AND EVALUATION
Unfortunately, M&E is often erroneously viewed as an annoying task to provide donors with the
information they require. Certainly, accountability to funding bodies is one function of the M&E
system, but it is not the only one, or even the most important one.
[Diagram] Monitoring and evaluation sits at the centre of the management functions: planning, organising, staffing, leading and controlling.
Source: Manto Management: Bristol-Myers Squibb M&E Capacity Building Workshop; August 2002
2.2.5 Adaptive Management and Action Learning Research
In a complex, rapidly-changing world, blueprint planning is a recipe for failure;
Often the solutions need to be found by testing alternatives and learning;
Unanticipated impacts (positive or negative) need to be monitored and responded to -
solving one problem often creates another;
Things rarely go exactly as planned!
Effective management is adaptive management
THE ACTION LEARNING/RESEARCH CYCLE
[Diagram] A repeating cycle: plan → act → monitor → evaluate → plan again.
2.3 IDENTIFY ALL PARTICIPANTS AND THEIR ROLES IN MONITORING
AND EVALUATION
Complete a monitoring and evaluation participants analysis that asks:
Who are the M&E participants?
What decisions do they make that affect the project?
What are their information needs?
How can they be assisted to effectively use monitoring and evaluation information and
learn from it?
Which participants should be involved in which aspects of monitoring and evaluation, to
ensure ownership and use of the monitoring and evaluation results?
2.4 KEY PARTICIPANTS IN THE MONITORING AND EVALUATION PROCESS
Key participants in the M&E process include the following:
The community;
Project recipients;
The staff (co-ordinator, implementing staff and administrators);
The donor or funder;
Key community stakeholders that are not direct recipients of the project, for example,
other projects, churches, hospitals;
Other stakeholders defined as important.
2.5 BENEFITS AND LIMITATIONS OF MONITORING AND EVALUATION
2.5.1 Benefits
There are a number of benefits to monitoring and evaluation. Monitoring information can
provide the compass an organisation needs to redirect its efforts, resources, work and
progress, allowing for improved performance. (Ramashia and Rankin, 1995)
Another key benefit is that monitoring and evaluation
provides feedback to the various team members on their performance and the impact of
their work. This allows for effective performance management and reward structures.
(Ramashia and Rankin, 1995)
Monitoring information on the changes in an environment can assist in guiding the
evaluation. The ever-changing nature of the work environment can often result in previous
assumptions no longer being valid. An evaluation will identify the areas where assumptions
are no longer valid, and will present a new and more appropriate set of assumptions to guide
the delivery. (Ramashia and Rankin, 1995)
Monitoring and evaluation also provides the information that is necessary to determine the
future of an intervention. Once an intervention has achieved its intended purpose, a revised
approach needs to be adopted to ensure a new, future purpose is defined and delivered.
Monitoring and evaluation will also provide valuable information around the opportunities to
be explored in future interventions. These could include terminating the intervention due to it
having achieved its required outcomes. (Ramashia and Rankin, 1995) Monitoring and
evaluation will also provide information on the relative values of activities, the effectiveness
of processes and the impact that an intervention may have on the people at whom it is aimed.
(Boulmetis and Dutwin, 2000)
Formal evaluation also helps staff to identify, select and articulate appropriate standards of
performance. It facilitates the framing of indicators for measurement of performance levels,
and guides what evidence is necessary to verify performance against these indicators. This
allows for effective tracking of performance, which is beneficial for the project.
(Ramashia and Rankin, 1995; Boulmetis and Dutwin, 2000)
2.5.2 Limitations of Monitoring and Evaluation
The primary limitation of monitoring and evaluation is that it does not guarantee change
in an organisation: it merely provides information on the status quo and identifies trends that
can be addressed by the team. The same applies to the challenges and problems that are
identified. Monitoring and evaluation cannot itself effect the changes that are required to
address them; a plan needs to be put in place to implement the recommendations arising
from the findings of the monitoring and evaluation.
(Ramashia and Rankin, 1995; Boulmetis and Dutwin, 2000)
2.6 THE MONITORING AND EVALUATION CONTEXT
There are a number of questions to facilitate the development of the concept that will be used to
undertake a specific monitoring and evaluation. The key questions that need to be considered are:
Why is the evaluation taking place?
What is being evaluated?
What is evaluation for?
Who can evaluate?
The broad structures that need to be considered under each of these questions are as follows:
Page 38
2.6.1 Why is the M&E taking place?
The broad purposes of a monitoring and evaluation process generally centre on
accountability and improvement. Specific purposes could include assessing the
effectiveness of the project, noting improvements in the project, giving staff feedback on both
successes and failures, identifying and solving problems, providing ongoing information
and determining success.
2.6.2 What is being evaluated?
The broad objectives need to be mapped while conceptualising the monitoring and evaluation.
A variety of issues could be evaluated.
These could include:
Monitoring and evaluation objectives;
Design;
Process and implementation;
Products and outcomes;
Impact on the project or intervention.
These areas can roughly be described as effectiveness, efficiency and impact. Effectiveness
compares the project's actual results with its desired results. Efficiency is achieved when as few
resources as possible - financial, human and material - are used to achieve the desired
results. Impact means looking at the effect that activities have on the environment, in
terms of changing the community in any lasting way. (Shapiro, 1996)
The issues covered by a monitoring and evaluation might include plans, policies, objectives
and cultures.
Page 39
2.6.3 What is the Monitoring and Evaluation for?
This would map the purpose of the monitoring and evaluation. The monitoring and evaluation
could be used to report on performance to management, funders, HR managers and the
community. It could also serve to review the current structure of the project and to identify
potential gaps or challenges. A monitoring and evaluation is generally conducted to look at
issues of accountability, and to make recommendations on potential modifications or
improvements that could be implemented.
[Diagram: a generic programme/project cycle - PLAN, ACT, MONITOR, EVALUATE - linking
government/donor goals and policies, the organisation's mission and beneficiary needs
through the stages of scoping, formulation/design, M&E strategy, financing and contracting,
mobilisation and implementation planning, implementation monitoring and evaluation (M&E
plan), and final evaluation. Source: IUCN, Monitoring and Evaluation Training, 2000.]
Page 40
The diagram illustrates a generic programme or project cycle. It emphasises the importance of
starting with detailed scoping, situation analysis and design stages. It also illustrates the
importance of considering monitoring and evaluation at all stages in the cycle. Importantly, it also
illustrates the need for constant cycles of planning, acting, monitoring and evaluation (in other
words, learning) during implementation.
2.7 THE MONITORING AND EVALUATION OF OUTCOME
The M&E of the outcome of an intervention can assume a number of levels of complexity. The
most basic level evaluates the condition of the community that received the services offered
as part of the intervention. This form of monitoring and evaluation could also determine if the
delivery of services caused a positive change in the condition of the participants.
The monitoring and evaluation of outcomes is complex, due to the need for agreement on what
an outcome is, and what it means within a particular monitoring and evaluation context.
(Posavac and Carey, 1996)
2.8 THE MONITORING AND EVALUATION OF EFFICIENCY
The M&E of efficiency reviews how efficiently the project was run. This requires a review
of the resources utilised in delivery, examining the cost-effectiveness of delivery in relation
to the achievement of outcomes.
(Posavac and Carey, 1996)
Page 41
2.9 FORMATIVE EVALUATION
A formative evaluation produces information that is fed back into the project, to assist in
the project's improvement. This type of evaluation is designed to assist those who are
involved in the intervention. Information from a formative evaluation can assist in the
early stages of an evaluation, to ensure greater benefits from the outcomes of the intervention
(Weiss, 1998). A formative evaluation is generally conducted in the early stages of the
implementation of the intervention, because one of its key focuses is to assess the extent
to which the intervention is meeting the defined outcomes, and to determine what needs
to be changed to ensure a greater level of performance. A formative evaluation would also
identify any unanticipated negative side effects produced by the intervention.
2.10 SUMMATIVE EVALUATION
Summative evaluation is conducted at the completion of the intervention. It provides information
on the effectiveness of the intervention to parties who are considering adopting the intervention
within their environment. A summative evaluation measures the intervention outcomes, and
documents intervention processes to facilitate informed decisions about the overall effectiveness
of the intervention.
2.11 PARADIGMS IN EVALUATION
Two paradigms emerge in evaluation: the quantitative and the qualitative. The
quantitative paradigm adopts an empirical view that seeks direct experience and objective reality
when conducting an evaluation. This means that the evaluation is conducted by measuring
the effects of the intervention against quantitative indicators. Quantitative approaches often
produce generalised conclusions, and adopt a representational emphasis when developing
findings.
Page 42
The qualitative paradigm emphasises the meaning and subjective experience of the intervention,
operating from a review of the participants' experience of the intervention. The
qualitative paradigm posits that there is an external reality on which different observers can
reach agreement through common, shared methodologies (Mark et al, 2000). Qualitative
approaches generally do not prioritise generalised views, but focus on the particular
peculiarities that emerge from the intervention within the context in which it operated. The
qualitative approach adopts an evaluative emphasis in developing findings.
Although the two paradigms differ, it is important to note that both qualitative and
quantitative approaches can be used in the same evaluation.
MONITORING AND EVALUATION PIPELINE
[Diagram: the M&E pipeline plots the number of interventions against the level of
monitoring and evaluation effort - all interventions measure inputs, most measure outputs,
some measure outcomes, and few measure impact.]
INPUTS (measured by ALL interventions): resources; staff; funds; facilities/supplies; training.
OUTPUTS (measured by MOST): condom availability; trained staff; quality of service;
knowledge of HIV transmission.
OUTCOMES (measured by SOME): short-term and intermediate effects - behaviour change;
attitude change; changes in STI trends; increase in social support.
IMPACT (measured by FEW): long-term effects - changes in HIV/AIDS trends; AIDS-related
mortality; social norms; coping capacity in the community; economic impact.
M&E Evaluation Pipeline Model - FHI 2000.
Page 44
INTER-RELATIONSHIPS BETWEEN THE VARIOUS MONITORING FORMS
[Diagram: the four monitoring forms - monitoring of performance, process monitoring,
outcome monitoring and impact monitoring - inter-relate through the questions they answer:
What have we achieved? How much have we achieved? Why have we (not) achieved
something? How did we achieve it? What long-term impact does the work have? What
evidentiary results have we seen?]
Manto Management: Bristol-Myers Squibb, Capacity Building Workshop, August 2002
MONITORING AND EVALUATION
MODELS
3
Page 47
1.1 INTRODUCTORY STATEMENTS
1. OVERVIEW
This chapter focuses on monitoring and evaluation models and
examines the Results Based Management model in detail.
1.2 OBJECTIVES
To understand the importance of the results based
management and monitoring model;
To understand the importance of utilising a logic model in
monitoring and evaluation;
To provide an understanding of the different levels of results;
To determine how to refine strategic plan elements into
result statements.
1.3 TARGET GROUP
Monitors;
Project Managers;
Project Staff.
Page 48
2. MONITORING AND EVALUATION MODELS
2.1 RESULTS BASED MONITORING, EVALUATION AND REPORTING (MER)
2.1.1 Introduction
Monitoring and evaluation is one of the most talked about,
but least practised, aspects of organisational management.
Organisations and managers are often aware that to be effective they need to
know, on a regular basis, how well their organisations are doing; but they tend (in reality)
to base decision-making on personal and staff judgment, anecdotal data, or
haphazardly collected field information. Managers commonly state that they place
less emphasis on monitoring and evaluation because they perceive measuring
performance to be complex and time-intensive, and because they don't see a real
benefit in investing in monitoring and evaluation systems. Many organisations
and their management consider MER to be a requirement of the funding agencies
that support them, and thus see MER as an external necessity.
Indeed, few managers consider M&E to be a strategic system they can adapt for
assessing organisational capacity, judging their economic effectiveness, or predicting their
organisation's future sustainability. The aim of this chapter is, therefore, to discuss the
principles of monitoring, evaluation and reporting, and to analyse the benefits and purposes
of monitoring and evaluation systems.
2.1.2 Why monitor performance, evaluate results and report on the progress of an
organisation?
Civil society organisations exist in large part because change to the social, administrative
or ecological conditions of an area occurs as a result of natural and human factors, and
organisations believe that if they intervene with initiatives and programmes, positive
change will be promoted. The overall goal of management, therefore, is to keep the
character and rate of change due to human factors within acceptable (or preferable) levels.
The management challenge is not to prevent any change in an area, but to identify what
management actions are needed to guide and control change.
A MER system is simply a tool that organisations and managers use to see if they are
achieving change (Appendix 3).
2.1.3 The shift towards results based management and monitoring
Traditionally, monitoring (if it existed at all) focused simply on the implementation of projects
- tracking basic inputs (resources) and outputs (products or services). For example: We were
given x amount of money and we: trained 12 organisations, issued 300 press releases,
vaccinated 1 200 cattle, among other things. Data collection was often completed
haphazardly, and not as part of a systematic, comprehensive and long-term plan. When the
project ended, so did the monitoring.
Today, given the increasing complexity of development issues and increasing competition
for resources, organisations must think about (and present) the results of their programmes as
contributing to a larger development objective. A development objective is the overall and
long term effect of an intervention (it is your highest level of impact anticipated). Reduction
in incidence of HIV, improved food security, a more just and democratic society, or improved
biodiversity conservation are four examples of possible development objectives. Today,
organisations plan, present and monitor how they contribute to the attainment of a
development objective in the short, intermediate and long term (PACT MER Handbook, 2002).
Page 49
When you were developing your organisation's strategic plan, you were probably thinking
about what your organisation wants to do in terms of development objectives (results you
want to see and make happen). Thus you were laying out the structure for results based
management. Results based management is a management approach by which an
organisation ensures that its processes, products and services contribute to the achievement
of clearly stated results. A result is a consequence of a particular activity, project or
programme that an organisation can effect and for which it is willing to be held
accountable. Simply put, a result is a change in condition attributable, in whole or in part, to
your organisation (PACT MER Handbook, 2002).
If you are practising results based management, then it makes sense that your monitoring,
evaluation and reporting should mirror the same structure, as this would tell you
whether you are meeting your results. This is referred to as results based monitoring
(sometimes called performance monitoring or outcome monitoring). It means that your
MER system, in addition to tracking general project information (implementation
monitoring, such as how much money was spent on an activity), also measures the
contributions of your organisation's processes, products and services to development
objectives (PACT MER Handbook, 2002).
2.1.4 Levels of results
One way to present the short, intermediate and long-term results (and associated indicators)
is to think about what you are achieving at four levels:
Inputs and processes;
Outputs;
Outcomes;
Impact.
Page 50
Inputs and processes are the resources and methods you employ to conduct an activity,
project, and/or programme. Inputs can be physical (equipment rental or purchase), material
(supplies and provisions), human (labour cost such as salaries for technical assistance, staff) or
financial (such as travel costs, per diem costs, direct and indirect costs). Processes are the
methods or course of action you select to conduct your work (for example, training, capacity
building, service provision, message promotion). Inputs usually produce a result immediately
(0 to 1 year) - for example, people trained, computers purchased, messages developed.
Outputs are information, products, or results produced by undertaking activities or projects.
Outputs relate to completion of activities and are the type of results over which managers
have a high degree of influence. Outputs reflect what you hoped to produce from a particular
input (or set of inputs). For example: You decide the process you want to use is to train
people, thus people trained is the result at the input/process level while knowledge level
increased would be the result at an output level. The assumption is that if you train
people, they will increase their knowledge of a given subject. Outputs usually reflect a result
achieved in a relatively short time period (0 to 2 years).
Outcomes are broad changes in development conditions. Outcomes help us answer the "so
what?" question (So we trained 100 people and increased their knowledge - but did they
change their behaviour?). Outcomes often reflect behaviour or economic change
and help us analyse how our activities and projects scale up or contribute toward
development outcomes. Outcomes usually reflect a result achieved in an intermediate time
period (2 to 5 years).
Impacts are the overall and long-term effects of an intervention - the ultimate result
attributable to a development intervention over an extended period, such as improvement
in food security or higher standards of living. Impacts usually
reflect a result achieved over a longer time period (5 to 10+ years).
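The four levels of results described above, with their indicative time frames, can be sketched as a small lookup table. The level names, time bounds and examples below come from the text; the helper function itself is an illustrative assumption, not part of any MER toolkit:

```python
# The four result levels described above, with indicative upper bounds on
# their time frames (in years) and the examples given in the text.
RESULTS_CHAIN = [
    ("inputs/processes", 1, "people trained, computers purchased"),
    ("outputs", 2, "knowledge level increased"),
    ("outcomes", 5, "behaviour change"),
    ("impacts", float("inf"), "improved food security"),
]

def level_for_horizon(years):
    """Return the lowest result level whose time frame covers the horizon."""
    for name, upper_bound, _example in RESULTS_CHAIN:
        if years <= upper_bound:
            return name
    return RESULTS_CHAIN[-1][0]

print(level_for_horizon(3))  # a 2-5 year result -> outcomes
```

Because the time frames overlap, the sketch returns the lowest level that covers the horizon; in practice a result is classified by its nature (resource, product, behaviour change, long-term effect), not by its time frame alone.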
Page 51
Page 52
THE RESULTS CHAIN
INPUTS/PROCESSES (resources and processes used), such as: staff; funds/materials;
facilities; training; services, and others.
This leads to ...
OUTPUTS (short-term changes, effects and results), such as: knowledge/awareness change;
access change; service change; quality change; capacity change, and others.
... which leads to ...
OUTCOMES (intermediate changes, effects and results), such as: behaviour change;
economic change, and others.
... which leads to ...
IMPACTS (long-term changes, effects and results), such as improvements in: health
conditions; political environments; socio-economic status; human rights; resource
conservation.
Inputs and outputs are tracked at the project level (implementation monitoring - tracking
YOUR efficiency), while outcomes and impacts are evaluated at the strategic and programme
levels (outcome monitoring - evaluating YOUR effectiveness). (PACT MER Handbook, 2002)
1) IMPACT LEVEL RESULTS - YOUR LONG-TERM DEVELOPMENT OBJECTIVE:
a) Review the precise wording and intention of your vision statement (you may also want to
review your mission statement and objectives).
Ensure the result is an effect or consequence of a particular activity, project or
programme of your organisation for which you are willing to be held accountable
(perhaps in partnership with others).
Ensure the statement is uni-dimensional (you may have more than one development
outcome).
Ensure the statement is written as an accomplishment.
Do not include the means of achieving them; reflect what you can achieve through
your intervention. Do not include extraneous factors, just critical outcomes.
2) OUTCOME LEVEL RESULTS YOUR INTERMEDIATE RESULTS:
a) Review the precise wording and intention of your mission statement (you may also want
to review your objectives and strategies).
Ensure the result is an effect or consequence of a particular activity, project or
programme of your organisation for which you are willing to be held accountable
(perhaps in partnership with others).
Ensure the statement is uni-dimensional. You may find that you have more than one
development outcome.
Ensure the statement is written as an accomplishment.
Do not include the means of achieving them but reflect what you can achieve through
your intervention.
3) OUTPUT LEVEL RESULTS - YOUR SHORT-TERM RESULTS:
a) Select one of your outcome level results.
b) Review the wording and intention of the associated strategies and activities, related to
the selected outcome result (you may also want to review your objectives).
Ensure the result is an effect or consequence of a particular activity or project of your
organisation for which you are willing to be held accountable.
Ensure the statement is uni-dimensional.
Ensure the statement is written as an accomplishment.
Do not include the means of achieving them but reflect what you can achieve through
your intervention.
c) Repeat the process with other outcomes.
4) INPUT/PROCESS LEVEL RESULTS - YOUR IMMEDIATE TERM RESULTS:
a) Continue with the same outcome level results you have selected in Step 3.
b) Review the wording and intention of the associated strategies and activities, related to
the selected outcome result.
Ensure the result is an effect or consequence of a particular activity or project of your
organisation for which you are willing to be held accountable.
Ensure the statement is uni-dimensional.
Ensure the statement is written as an accomplishment.
Do not include the means of achieving them but reflect what you can achieve through
your intervention.
c) Repeat the process with other outcomes.
2.1.5 Information to Assist Your Organisation to Develop a Results Framework
What is A Result?
A result is a broad term used to refer to the effects of a programme.
A result is the most ambitious impact that an organisation can effect and for which it
is willing to be held accountable.
A result is a describable or measurable change in state that is derived from a cause and
effect relationship.
Page 54
A result is the consequence of a particular programme, project or activity.
A result is an outcome, output or impact - these terms describe different types of
results, and there can be several levels of results for a large, complex programme. In its
simplest form, a result is the objective re-stated as an accomplishment.
Characteristics of a good Results Statement
Results statements should state the most ambitious impact that an organisation can
effect.
Results statements should be expressed at the highest level for which the project can
reasonably be expected to be held accountable.
Results statements may need to be qualified by using terms such as facilitated, improved
and supported, to ensure they are realistic results of the project.
Results statements should not include the means of achieving them.
Results statements should be as specific as possible.
Page 55
Page 56
HOW TO WRITE A RESULTS STATEMENT
1) Review the precise wording and intention of the activities, objectives and project
hypothesis.
What exactly do they say? Sometimes objectives and results are so broadly stated that it is
difficult to identify the right indicators. Instead, specify those aspects believed to make the
greatest difference to improving performance, and avoid overly broad results statements. For
example, rather than using a broad statement like "reduced conflict in xx community", clarify
the aspects that programme activities actually emphasise, such as "fewer incidences of conflict
requiring use of an outside mitigation team in xx community".
2) Clarify what type of change is implied.
As a result is a describable or measurable change in state, you need to be clear about what type
of change is implied. What is expected to change - a situation, a condition, the level of
knowledge, an attitude, a behaviour? Changing a country's law about voting is very different
from changing citizens' awareness of their right to vote, which again is different from
changing their voting behaviour. Each type of change is measured by different types of
indicators.
Clarify whether the change being sought is an absolute change, a relative change, or no change.
Absolute changes involve the creation or introduction of something new.
Relative changes involve increases, decreases, improvements, strengthening or weakening in
something that currently exists, but at a higher or lower level than is considered optimum.
No change involves the maintenance, protection or preservation of something that is
considered fine as it is.
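A rough way to apply the distinction above is to scan a draft results statement for characteristic change verbs. The keyword lists and helper below are assumptions made for this sketch, not part of the guide:

```python
# Illustrative keyword lists for classifying the type of change a results
# statement implies (absolute / relative / no change).
ABSOLUTE = {"created", "introduced", "established"}
RELATIVE = {"increased", "decreased", "improved", "strengthened",
            "weakened", "reduced", "fewer"}
NO_CHANGE = {"maintained", "protected", "preserved"}

def change_type(statement):
    """Classify a draft results statement by the change verbs it uses."""
    words = {w.strip(".,").lower() for w in statement.split()}
    if words & ABSOLUTE:
        return "absolute change"
    if words & RELATIVE:
        return "relative change"
    if words & NO_CHANGE:
        return "no change (maintenance)"
    return "unclear - restate the result"

print(change_type("Fewer incidences of conflict in xx community"))
# -> relative change
```

A statement that comes back "unclear" is a prompt to restate the result with an explicit change verb, as step 2 recommends.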
Page 57
3) Identify precisely the specific targets for change.
Be clear about where change should appear. Is change expected to occur among individuals,
families, groups, communities or regions? Clearly, a change in the saving rate for an entire
nation will be quite different from one in a particular sector of the business community. This
is known as identifying the unit of analysis for the indicators. Who or what are the specific
targets for the change? If individuals, which individuals?
4) Determine what changes are reasonable to expect in relation to the project objectives and
timeline.
Before appropriate statements can be developed, clarity is needed about the expected relationship
between activities and their intended outcomes over a specific time frame, in order to understand
exactly what changes are reasonable to expect.
REMEMBER:
The strength of a quality monitoring, evaluation and reporting
system lies not in its ability to gather data, but in its ability
to provide useful information for managing for results.
(PACT MER Handbook, 2002)
Page 58
3. THE LOGIC MODEL
3.1 DEVELOPING GOALS AND OBJECTIVES
3.1.1 Purpose of Goals and Objectives
Well-conceived and well-written programme goals and objectives are essential for anchoring
programmes and communicating programme expectations to others. Specific programme
objectives also help focus the M&E activities by stating exactly what will be measured and
assessed during the course of the programme.
3.1.2 What are Goals and Objectives?
A goal is a broad statement of a desired, long-term outcome of the programme. As such, goals
express general programme intentions and help guide the programme's development. Each
goal has a set of related, more specific objectives that, if met, will collectively permit
programme staff to reach the stated goal. Objectives, then, are statements of desired, specific,
realistic and measurable programme results.
3.1.3 Developing SMART Objectives
Two basic programme elements that are the focus of M&E - outputs and outcomes - are most
closely related to programme objectives. Outputs were defined as the results of a programme's
activities (for example, staff are trained, those at risk of HIV are educated about risk
reduction); outcomes were defined as the effects of programmes or interventions on target
audiences or populations (for example, behavioural changes reinforcing HIV risk reduction,
better quality of life for those infected with HIV). Programme objectives simply state these
outputs and outcomes in measurable terms. (Also note that impacts are almost always
related to goals rather than to the more specific programme objectives.)
Page 59
For example, an output of a VCT programme might be: "Clients tested for HIV receive their
test results." This output can be turned into an objective by stating it as a target that can be
measured during programme implementation. For instance, the objective may be stated as:
"By the end of the first programme year, 98% of clients tested for HIV (assuming use of rapid
testing) will receive their test results." Because an output is the result of a programme activity
- in this case, counselling and testing - it does not refer to the way in which clients actually
respond to the activity. In other words, an output and its related objective say something
about the accomplishment of the process of delivering a service or activity, not about the
effect of these services or activities on clients. As such, objectives related to outputs are
known as process objectives.
A desired outcome of this same programme might be that clients - both those testing
HIV-positive and those testing HIV-negative - form personalised risk-reduction and treatment
strategies. The objective related to this outcome might be: "By the beginning of the second
programme year, 65% of clients receiving HIV test results will have developed and adhered to
personalised risk-reduction/treatment strategies." This objective is stated in measurable terms
and, because it is related to programme outcomes, is known as an outcome objective.
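Both of these hypothetical objectives reduce to a proportion check against monitoring data. The helper and the client counts below are invented for the example, not drawn from an actual VCT programme:

```python
# Illustrative check of the hypothetical VCT objectives against
# monitoring data; the counts below are invented for the example.
def objective_met(numerator, denominator, target_pct):
    """True if the observed proportion meets or exceeds the target percentage."""
    return (numerator / denominator) * 100 >= target_pct

# Process objective: 98% of tested clients receive their results.
print(objective_met(980, 1000, 98))   # -> True
# Outcome objective: 65% of clients form risk-reduction strategies.
print(objective_met(600, 1000, 65))   # -> False
```

Stating objectives with an explicit numerator, denominator and target, as the SMART approach below encourages, is what makes this kind of mechanical check possible.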
A tool to determine whether objectives will be measurable and useful to programme
planning is the SMART approach to developing objectives. Applying the SMART criteria to the
outcome objective "By the beginning of the second programme year, 65% of clients receiving
HIV test results will have developed and adhered to personalised risk-reduction/treatment
strategies", we find the objective to be:
Specific (clients will form risk-reduction/treatment strategies);
Measurable (at least 65% of clients receiving test results will develop and adhere to these
strategies);
Appropriate (this objective fits with the overall goals of VCT activities);
Realistic (it is hard to determine the realism of this objective in the abstract apart from
knowing the details of the particular programme. Programme staff will want to determine
if 65% is realistic, too high, or too low an objective to be reached);
Time-based (by the beginning of the second programme year).
A SMART OBJECTIVE IS:
Specific:
Identifies concrete events or actions that will take place; answers the question, "Does
the objective clearly specify what will be accomplished?"
Measurable:
Quantifies the amount of resources, activity, or change to be expended and achieved;
answers the question, "Does the objective state how much is to be delivered or how much
change is expected?"
Appropriate:
Logically relates to the overall problem statement and desired effects of the programme;
answers the question, "Does the objective make sense in terms of what the programme is
attempting to accomplish?"
Realistic:
Provides a realistic dimension that can be achieved with the available resources and plans
for implementation; answers the question, "Is the objective achievable given available
resources and experience?"
Time-based:
Specifies a time within which the objective will be achieved; answers the question, "Does
the objective specify when desired results will be achieved?"
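One way to use the SMART checklist in practice is to record each criterion explicitly and report which appear unmet. The field names and helper below are illustrative assumptions for this sketch, not part of the SMART approach itself:

```python
from dataclasses import dataclass

@dataclass
class Objective:
    """One programme objective; field names are illustrative, not standard."""
    action: str        # Specific: what will be accomplished
    target: float      # Measurable: how much change is expected (0.65 = 65%)
    goal_link: str     # Appropriate: the programme goal this objective serves
    achievable: bool   # Realistic: judged against resources and experience
    deadline: str      # Time-based: when desired results will be achieved

def smart_gaps(obj):
    """Return the SMART criteria that appear unmet, as a rough checklist."""
    gaps = []
    if not obj.action:
        gaps.append("Specific")
    if obj.target is None:
        gaps.append("Measurable")
    if not obj.goal_link:
        gaps.append("Appropriate")
    if not obj.achievable:
        gaps.append("Realistic")
    if not obj.deadline:
        gaps.append("Time-based")
    return gaps

vct = Objective("clients develop risk-reduction strategies", 0.65,
                "VCT programme goal", True, "start of programme year 2")
print(smart_gaps(vct))  # -> []
```

Only the Realistic judgment genuinely requires human assessment against resources and experience; the sketch simply records that judgment as a flag.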
Page 61
3.1.4 What are Logic Models?
An important early step to conducting M&E activities is to clearly describe the programme
of interest. A well-described programme or intervention is easier to monitor and evaluate and
facilitates using M&E data to improve the programme.
Logic models are invaluable programme design, management, and evaluation tools that
describe the main elements of a programme and how these elements work together to reach a
particular goal, such as prevention, care and treatment of HIV/AIDS in a specific population.
As described earlier, the basic elements in describing the implementation of a programme
and its effects are: inputs, activities, outputs, outcomes, and impacts. A logic model
graphically presents the logical progression and relationship of these elements. For instance,
logic models represent the relationships between a programmes activities and its intended
effects, and they make explicit the assumptions about how a programme will effectively
address a particular problem. You can use logic models to describe an entire programme, parts
of a programme (for example, individual projects/interventions), or multiple related
programmes.
Logic models go by many different names:
Roadmap;
Conceptual Map;
Blueprint;
Rationale;
Programme Theory;
Programme Hypothesis;
Theory of Action;
Model of Change;
Theoretical Underpinning;
Causal Chain;
Chain of Causation.
As with many aspects of M&E, people use a variety of terms to describe logic models and their
component parts. This guide uses a consistent set of terms, but notes the other common
names for logic models listed above. Similarly, there are many different ways
to construct a logic model. People use a variety of visual schematics to create logic models.
For instance, flow charts, maps, and tables may all be used to portray the sequence of steps
that lead to programme outcomes.
Logic models are useful for everyone involved in a programme: programme staff, funders and
other stakeholders. They increase the likelihood that programme efforts will be successful
because they:
Communicate the fundamental purpose of the programme;
Become a reference point for everyone involved in the programme;
Illustrate programme results;
Serve as the basis for determining whether planned activities will lead to desired results;
Identify potential obstacles to programme operation, so that staff can address problems as
soon as possible;
Improve programme staff's expertise in planning, implementation and evaluation.
Why Use Logic Models?
Logic models are intended to represent the ideal. They describe the intended activities and their
results if things go as planned. As such, these models help to situate and convey the way in which
a programme is supposed to run and what results can be expected, barring unexpected barriers
and changes.
The reality of changes in funding, shifting priorities, unpredictable challenges, and other
stumbling blocks can lead to actual programme implementation and outcomes that are much
different from what was intended. Logic models can be created or revised after programme
implementation to describe the implementation process as it actually occurred and outcomes that
were achieved. Since implementation and outcomes do not always go as planned, logic models
are useful programme monitoring tools, facilitating comparison of planned and actual
implementation and enabling assessment of why differences may have occurred. Also, since logic
models identify the steps necessary to reach intended outcomes, they can illuminate important
evaluation priorities.
Page 62
Page 63
Identifying the goals of your programme and developing a logic model yield many payoffs. You
may learn that the programme is too ambitious or not ambitious enough, or that logical
connections between goals, objectives, and activities are missing.
Logic Model Components
Components of a logic model, depicted below, typically fall within two main sections: process
and outcome. The process section describes the programme resources (inputs), programme
activities, and the direct products of the programme (outputs). If the process goes as planned,
it should lead to the intended outcomes and impacts.
LOGIC MODEL FOR HIV PREVENTION
[Diagram: problem statement, then implementation (inputs, activities, outputs), leading to
outcomes and impacts, all framed by the programme's assumptions and context.]
Global AIDS Programme: M&E Capacity Building for Global AIDS Programme Improvement,
version 1, December 2003.
The following are explanations of logic model components:
The assumptions and context relate to the unique socio-political-economic issues that exist
in the locale where the programme is being planned, and to the constraints and facilitators
that these issues place on the potential success of the programme. The assumptions that
programme planners make are based on these issues, and can include theories and
evidence-based knowledge available from similar programmes. Many aspects of the
assumptions and context result from assessment and planning activities. For example, a
situational assessment conducted before planning a programme may focus on the particular
barriers and facilitators that a programme will need to address to be successful.
A problem statement describes the nature and extent of the problem that needs to be
addressed. For example, for a population at risk of HIV/AIDS, the problem statement would
include factors that contribute to the problem. These factors may be related to knowledge,
attitudes, beliefs, behaviours, skills, access to services and information, policies, and
environmental conditions. An example of a problem statement is:
- HIV infection rates continue to rise, underscoring the importance for people to know
their serostatus, develop personalised risk-reduction strategies, and access care and
treatment services.
The problem statement often results from assessment and planning activities. For example, an
assessment of prevention needs of populations at risk of HIV, or care and treatment needs of a
population, may contribute to a clear and accurate problem statement.
When discussing evaluation with others or reading other evaluation materials, you may encounter
a variety of terms for similar concepts. For instance, what one programme identifies as an
output may be called a short-term outcome by another. Terms are not necessarily correct or
incorrect: as long as a term makes sense for a programme and is used consistently, differences
in terminology between individuals need not be a concern.
DEFINITIONS OF LOGIC MODEL COMPONENTS
Inputs: Resources used in a programme, such as
money, staff, curricula and materials.
- GAP, government and other donor funds;
- C & T personnel;
- VCT protocols and guidance;
- Training materials;
- HIV test kits.
Activities: Services that the programme provides
to accomplish its objectives, such as outreach,
materials distribution, counselling sessions,
workshops and training.
- Train C&T personnel and site managers;
- Provide pre-test counselling, HIV tests and
post-test counselling.
Outputs: Direct products or deliverables of the
programme, such as intervention sessions
completed, people reached and materials
distributed.
- # personnel certified;
- # clients receiving pre-test counselling,
HIV tests and post-test counselling.
Outcomes: Programme results that occur both
immediately and sometime after the activities are
completed, such as changes in knowledge,
attitudes, beliefs, skills, behaviours, access,
policies and environmental conditions.
- Quality of VCT improved;
- Access to VCT increased;
- Clients develop and adhere to personalised
risk reduction and treatment strategy.
Impacts: Long-term results of one or more
programmes over time, such as changes in HIV
infection, morbidity and mortality.
- HIV transmission rates decrease;
- HIV incidence decreases;
- HIV morbidity and mortality decrease.
Although we have only presented components of a logic model in a list format to this point,
organisations often are interested in the cause and effect relationships between activities and
want to see how these elements connect. Once the more basic elements of a logic model
have been described, a more detailed logic model can be developed with boxes and arrows to
depict assumptions and relationships. The diagram on the next page shows a common format for
displaying a logic model.
This model illustrates relationships between programme elements, but it would not be sufficient
to describe an entire programme. Again, logic models can be used to represent part of a
programme, such as a distinct project or intervention; the complete programme; or even one
programme in a multi-programme effort. For example, two separate logic models representing
VCT and PMTCT activities might be linked where the activities are linked programmatically.
Logic models are often cyclical, in that an outcome from one activity can provide information
that feeds back into a previous activity. Much of the benefit of constructing programme
logic models comes from the iterative process of discussing, analysing, and justifying the expected
relationships and feedback loops. Therefore, even though we present logic models in a
box-and-arrow format, they are conceptually more cyclical in nature, and the actual process of
implementing a programme based on logic models may be better depicted as follows:
VCT PROGRAMME IMPLEMENTATION LOGIC MODEL
(Locally Determined Assumptions/Context)
Problem Statement: HIV infection rates continue to rise, underscoring the importance for people to know their serostatus, develop personalised risk-reduction strategies and access care and treatment services.
[Diagram: flow from inputs to impacts -]
INPUTS: funding from government, GAP and other donors; counselling and testing personnel; VCT protocols, guidelines and training documents; HIV test kits.
ACTIVITIES: train counselling and testing personnel and site managers; provide pre-test counselling, HIV testing and post-test counselling to all clients.
OUTPUTS: personnel certified in VCT; clients received pre-test counselling; clients received HIV tests; clients received results and post-test counselling.
OUTCOMES: quality of VCT increased; access to VCT increased; clients (HIV+ and -) develop and adhere to personalised HIV risk-reduction and treatment strategy; risk behaviours decreased.
IMPACTS: HIV transmission rates decreased; HIV incidence decreased.
Developing Logic Models for Your Programme
The VCT logic models illustrate the big-picture expectations for the programme over the
long term. Depending on the availability of resources, a logic model that describes a
programme may be complex, with emphasis and feedback loops in different places. In the
following chapters, we also draw on sample logic models for some of the activities undertaken. If
the models accurately represent a local programme, they may be used as actual models that field
offices implement.

[Diagram: VCT Programme Implementation Logic Model with its M&E scope - Process Monitoring and Evaluation covers Inputs, Activities and Outputs; Outcome Monitoring and Evaluation covers Outcomes and Impacts.]

However, the more likely scenario is that these examples will need to be
adjusted to fit local circumstances, uses, activities, and outcomes. Programmes may also include
activities that are not represented in these logic models. In this case it may be a very useful
planning activity to engage stakeholders in creating a logic model that includes those activities.
Those creating new logic models may want to begin by identifying programme goals and
listing all the resources available for meeting them. Next, a decision may be made regarding the
activities that will be necessary (and realistic, given resources) to reach these goals. As the basic
model on page 66 illustrates, logic models frequently flow from left to right, with inputs on the
left, leading to activities and then, finally, on the right, to outcomes. This is not necessarily the
only approach, however. Some may choose to build a model that flows from top to bottom. The
key criteria are that the model is an accurate representation of a programme and can be
understood by stakeholders.
Ideally, the model should fit on a single page with enough detail to be explained fairly easily and
understood by others. However, depicting the relationships among activities and outcomes, and
uncovering the assumptions about those relationships, can be difficult. One way to proceed is to
connect a chain of "if, then" statements. For example: "If we provide HIV test kits, then we will be
able to test more people." This statement begs the question: how? The answer is another
activity: conducting HIV tests. Through this process of refining the model, activities and their
related anticipated outcomes are gradually identified.
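The chain of "if, then" statements can even be prototyped as a simple data structure before any boxes and arrows are drawn. The Python sketch below is purely illustrative; the component names are hypothetical examples, not taken from this guide:

```python
# Minimal sketch of an "if, then" chain for drafting a logic model.
# Each link states: if we do/have X, then we expect Y.
# All component names here are hypothetical examples.

links = [
    ("provide HIV test kits", "conduct HIV tests"),
    ("conduct HIV tests", "clients receive results and post-test counselling"),
    ("clients receive results and post-test counselling",
     "clients develop personalised risk-reduction strategies"),
]

def trace_chain(links, start):
    """Walk the chain from a starting element, printing each if-then step."""
    mapping = dict(links)
    current = start
    steps = [current]
    while current in mapping:
        nxt = mapping[current]
        print(f"If {current}, then {nxt}.")
        current = nxt
        steps.append(current)
    return steps

chain = trace_chain(links, "provide HIV test kits")
```

Walking the chain this way forces each assumed link between an activity and its expected result to be stated explicitly, which is exactly the refinement process described above.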
Even the best logic models may need periodic revision.
As a logic model is developed, it is important to identify possible problems and solutions, as well
as possible unintended outcomes. This sort of preparation - looking at the problem from all sides
and imagining all the possible scenarios - will make the work proceed much more smoothly. Logic
models are useful to convey both the overview of a programme and the details of programme
activities. An overview model, comparable to the VCT models, lays out the chain of activities and
effects intended to achieve the outcomes. It clarifies intended outcomes and the range of actors
to be mobilised. It can be used as a template for all of the actors as well as a template for smaller
scale activities.
Working from the overview model, the focus can then be narrowed to specific activities
contained in the logic model and more specific, individual logic models may be designed for each
activity. These narrower models describe the activities represented by the arrows between boxes in
the overview logic model. For example, you may want to set out how to move from developing
local-level partnerships for HIV prevention to developing a local plan for HIV prevention; these
require two separate approaches with distinct logic models. A logic model focused solely on this
process will help you enumerate all the details necessary to get from one output to another.
Once a logic model is developed to provide a thorough overview of a programme and its
elements, a good basis will have been developed for understanding what needs to be monitored
and evaluated as your programme is implemented. The logic model should then naturally assist in
developing M&E questions, as well as designing a plan for collecting data and measuring
programme progress and outcomes.
MONITORING AND EVALUATION PIPELINE
[Diagram: the M&E pipeline plots the number of projects against levels of monitoring and evaluation effort - all projects undertake input/output monitoring, most undertake process evaluation, some undertake outcome monitoring/evaluation, and few undertake impact monitoring/evaluation, supplemented with impact indicators from surveillance data.]
M&E Evaluation Pipeline Model - FHI 2000.
Input/Output, Outcome, and Impact Monitoring
The first two types of monitoring - input/output and outcome - rely on data-collection methods
that typically involve reviewing information collected in the natural course of programme
implementation. For example, inventories of prevention/education materials and
pharmaceuticals, and reviews of programme activity logs and client records, would likely
provide all the information needed to monitor programme inputs and outputs. Similarly, client
records, including results of questionnaires or surveys that test and show results of programme
services, usually contain ample data to track outcomes. Therefore, effective programme
monitoring may be accomplished with thoughtful and thorough record-keeping, the ability
to aggregate data from programme documents, and strict client confidentiality guidelines
around information drawn from client records. The third type of monitoring - impact monitoring -
typically involves the selection of key information or variables from surveillance data systems and
national surveys.
As an example, consider the diagram on page 68, which is a portion of a larger VCT
Implementation Logic Model.
If programme staff established as an objective that "by the end of the first programme year,
98% of clients counselled for HIV testing will be tested for HIV", the evaluable question would
be: were 98% of clients who were counselled actually tested by the end of the first programme
year? To answer this question, information related to inputs and outputs would need to be
tracked. Assuming that adequate funding, counselling protocols and guidelines, and personnel
(inputs) were already established for this programme, the following measures could be tracked and
reported:
Number of HIV test kits acquired (input).
Number of clients receiving counselling for HIV testing (output).
Number and percent of clients tested (output).
Programmes may have established measurable objectives related to the first two measures as
prerequisite objectives that would have to be met before the objective of testing 98% of
counselled clients could be accomplished. For instance, programme staff would likely have
specified the number of test kits that would need to be in place to serve the expected number of
clients. Similarly, they would have established a target number of individuals to receive
counselling in the programme year before establishing the target of testing 98% of these clients.
Data sources for these measures would all likely be project documents and records kept on a
continuous basis; for example, clinic inventories of HIV test kits and records of clients.
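With such record-keeping in place, the input and output measures above can be aggregated directly from routine records. A minimal Python sketch, using hypothetical client records and counts rather than actual programme data:

```python
# Sketch: deriving input/output monitoring measures from routine records.
# All figures and records below are hypothetical.

test_kits_acquired = 5000  # input: taken from clinic inventory records

# Each client record notes whether counselling and testing were received.
client_records = [
    {"counselled": True, "tested": True},
    {"counselled": True, "tested": False},
    {"counselled": True, "tested": True},
    {"counselled": False, "tested": False},
]

# Outputs: counts aggregated from the client records.
counselled = sum(1 for c in client_records if c["counselled"])
tested = sum(1 for c in client_records if c["counselled"] and c["tested"])
percent_tested = 100.0 * tested / counselled if counselled else 0.0

print(f"Counselled: {counselled}, tested: {tested} ({percent_tested:.1f}%)")
print("98% objective met" if percent_tested >= 98.0 else "98% objective not met")
```

The same aggregation, broken down by sex, would also serve the annual reporting indicator discussed below.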
Most of the above measures are not part of the required or core reporting indicators for
annual reporting. However, the final measure is similar to Programme Area Indicator 1.6: Number
of individuals (by sex) tested in VCT sites supported by/. If programme staff kept records of the
number of clients tested, by sex, this would satisfy reporting requirements as well as assist in
monitoring for programme improvement.
Monitoring outcomes is similar. Programme staff may have established the objective: "by the
beginning of the second programme year, 65% of clients receiving HIV test results will have
established personalised risk-reduction/treatment strategies" (outcome).
3.2 M&E FRAMEWORK AND EXPECTATIONS
To achieve these goals, nine critical elements form the M&E Strategy, as depicted in the diagram
below.

[Diagram: Critical Elements of M&E Strategy, arranged across Start-up/Phase 1, Phase 2 and Phase 3 - Systematic Review; Partnerships; CAPs & Logic Models; M&E Plans, Annual Reports & Programme Reviews; M&E Needs Assessments & Training; Programme Monitoring & Process Evaluation; Case Studies, Operations & Intervention Research, Economic Evaluations; National-Level Outcome Monitoring; National-Level Impact Monitoring.]

As the model of the strategy shows, the first steps in this three-phased strategy are planning
steps, including the first critical element: a systematic review of existing HIV/AIDS behavioural
interventions targeted at various populations at risk of, or infected with, HIV. The goal of this
review is to identify successful evidence-based, population-specific interventions that may be
replicated in countries. Of course, generating information about evidence-based practices
requires partners and stakeholders who design programmes, interventions, and related strategies
based on this knowledge. Therefore, a necessary next step is to share
information with partners and stakeholders through publications, presentations, as well as
face-to-face and other forms of communication. Evidence-based information is particularly
useful for field offices in developing the third critical element of the strategy, the Country
Assistance Plan (CAP) and related programme logic models that assist in establishing the logical
steps and programme needs (for example, staffing, materials, referral partners). The CAP, which is
the document that allows country programmes to communicate their respective countries'
HIV/AIDS context and plans for contributing to the national response, requires the establishment
of programme objectives related to programme plans and logic models.
The fourth critical element of the M&E strategy is development by field offices of M&E plans and
systems for conducting M&E activities, using M&E data for annual reporting, and conducting
country programme reviews. Field offices are supported by the M&E staff in identifying
appropriate M&E approaches, as well as implementing M&E activities. This support begins with the
assessment of needs for planning and implementing M&E activities. Identified needs are met
through both formal face-to-face training in M&E and on-the-ground technical assistance (TA).
The TA that the M&E staff are able to provide is supplemented by a number of university, private,
and other TA providers to meet various M&E needs.
Once field offices have identified specific M&E activities in an M&E plan and have been trained in
basic M&E knowledge, field office staff are expected to track inputs (such as resources put into
the programme) and outputs (such as results of programme activities) over the course of
programme implementation (the sixth critical element). Some may also want to assess the quality
of the programme.
Public health professionals have sought to arrive at the best approaches for ensuring that
the health of the public is maintained or restored, especially among those most affected by
disease and by adverse circumstances such as poverty and lack of access to disease prevention and
treatment. The illustration on the next page depicts the elements and types of data identified in
the M&E framework as they relate to this public health questions approach.
PUBLIC HEALTH QUESTIONS APPROACH
[Diagram: a staircase of public health questions, rising from problem identification to determining collective effectiveness, mapped against inputs, activities, outputs, outcomes and impacts:]
Problem Identification
- What is the problem? Situation Analysis and Surveillance.
- What are the contributing factors? Determinants Research.
Understanding Potential Responses
- What interventions can work (efficacy and effectiveness)? Are we doing the right things? Special Studies, Operations Research, Formative Research and Research Synthesis.
- What interventions and resources are needed? Needs, Resource and Response Analysis and Input Monitoring.
Monitoring & Evaluating National Programmes
- What are we doing? Are we doing it right? Process Monitoring and Evaluation, Quality Assessments.
- Are we implementing the programme as planned? Outputs Monitoring.
- Are interventions working/making a difference? Outcome Evaluation Studies.
Determining Collective Effectiveness
- Are collective efforts being implemented on a large enough scale to impact the epidemic (coverage; impact)? Surveys and Surveillance.
INDICATOR DEVELOPMENT
4
1.1 INTRODUCTORY STATEMENTS
1. OVERVIEW
This chapter provides the reader with an understanding of the
indicator development process.
1.2 OBJECTIVES
Define an indicator;
Understand the different levels of indicators;
Identify and develop indicators at the different levels;
Understand the principles in setting and using indicators;
Use indicator information to generate a report.
1.3 TARGET GROUP
Monitors;
Project Managers;
Project Staff.
2. TYPES OF INDICATORS
2.1 BASELINE INDICATORS
Definition
These are indicators that measure conditions before a project or programme is implemented.
Baseline indicators show the status quo or the current situation. They may indicate the level
of poverty, service and infrastructure.
Example
The number of participants in a training programme as a percentage of the target population.
2.2 INPUT INDICATORS
These include the main characteristics of the organisational and project infrastructure, funding,
professional and support staff.
Definition
These are indicators that measure economy and efficiency.
They measure what it costs the organisation to purchase the essentials for producing desired
outputs (economy), and whether the organisation achieves more with less in resource terms
(efficiency), without compromising quality.
Example
Physical resources in the project.
Professional and support staff.
2.3 OUTPUT INDICATORS
These include achievements in academic standards, standards of behaviour and rates of
punctuality and attendance.
Definition
Output indicators measure whether a set of activities or processes yields the desired outputs.
They show achievement in terms of stated objectives and often record milestones in
achieving the project goals of improving access or efficiency.
Example
Successes attained at the end of each stage of the project.
Progress made by participants during the project.
2.4 OUTCOME INDICATORS
Definition
These are the indicators that measure the quality, as well as the impact, of the project
outputs.
They show whether individuals and communities are happy with the project (quality), or
whether there is an improvement in the quality of life for project participants (impact).
For example, they can measure whether a project is having an effect on caring for people
living with HIV/AIDS.
Example
Improve the quality of care provided to people living with HIV/AIDS above the 60th percentile.
2.5 COMPOSITE INDICATORS
Definition
Composite indicators combine a set of different indicators into one index by developing a
mathematical relationship between them.
When lists of indicators grow very long, composite indicators can be used to summarise
performance by combining indicators into an equation.
Example
Medical Reports;
Training Reports.
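One simple mathematical relationship for combining indicators into a single index is a weighted average of normalised scores. The sketch below is illustrative only; the indicator names, scores and weights are hypothetical, not drawn from this guide:

```python
# Sketch: a composite indicator as a weighted average of normalised scores.
# Indicator names, scores (on a 0-100 scale) and weights are hypothetical.

indicators = {
    "quality_of_care_score": (72.0, 0.5),    # (score, weight)
    "access_to_services_score": (60.0, 0.3),
    "staff_training_score": (80.0, 0.2),
}

def composite_index(indicators):
    """Combine (score, weight) pairs into one index; weights should sum to 1."""
    total_weight = sum(w for _, w in indicators.values())
    return sum(score * w for score, w in indicators.values()) / total_weight

print(f"Composite index: {composite_index(indicators):.1f}")
```

The choice of weights is itself a judgement about the relative importance of each indicator, and should be agreed with stakeholders before the index is reported.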
2.6 WHAT ARE THE KEY PERFORMANCE INDICATORS (KPIs)?
All projects must report on a common set of KPIs. A common set of indicators will:
Ensure accountability;
Direct projects to focus on the overall goals and priorities;
Enable benchmarking and create the basis for performance comparison across projects;
Bring some uniformity into the system, and ensure that there is commonality of measures
in performance evaluation across projects.
2.7 CHALLENGES WITH USING KPIs
There are some challenges and pitfalls in using performance indicators, including the fact that:
Performance measures may be contradictory and may mislead decision-makers;
Performance measures may become an end in themselves, without really helping to change
performance levels. The result is senseless data collection;
Performance measurement systems that work well are very costly both to introduce and to
maintain. The costs of performance systems may exceed the benefits.
2.8 CRITERIA FOR GOOD KPIs
An understanding of these dangers means that it is important to start projects on a small scale;
Indicators must rely on data that is actually available for measurement in the community. A
project needs clarity about what data it currently collects, and what data it will have the
capacity to collect in the near future;
From an understanding of the challenges, and of the care that needs to be exercised, we can
derive a set of criteria for good performance indicators.
There are many different sets of criteria for good KPIs. Some of the criteria include:
Measurable;
Simple;
Precise;
Relevant;
Adequate;
Objective.
Developing indicators involves the following steps:
Step 1: Planning.
Step 2: Defining priority areas.
Step 3: Defining objectives.
Step 4: Setting input, output and outcome indicators.
Step 5: Setting targets within each indicator.
Once the priority areas are known, clear objectives can be set. Objectives are clear statements of
intent, which guide the process of determining areas of change.
3. OVERALL PROCESS
3.1 SETTING INDICATORS
Once priorities and clear objectives have been identified, indicators can be set. Indicators are
derived from objective statements. It must be established what knowledge is necessary to
ascertain if the project has performed well on a certain objective.
3.2 SETTING TARGETS
After agreement is reached on the indicators, targets need to be set for performance within these
indicators. Decision-makers must then make a contractual commitment to achieve these targets
within agreed time-frames, and to notify all stakeholders of these targets and time-frames.
3.3 TYPES OF INFORMATION FOR MONITORING AND EVALUATION
Indicators;
Simple quantitative indicators;
Complex or compound indicators;
Indices;
Qualitative indicators;
Focused qualitative information;
Open-ended qualitative information;
Background information;
General project information;
General observations.
Ideally, indicators should be reported quantitatively, but this will not always be possible
- don't limit M&E to only what can be measured.
3.4 DEFINITION OF A PROJECT INDICATOR
A project indicator is defined as the specific information that provides evidence about the
achievement of planned impacts, results and activities.
3.5 INDICATORS AT DIFFERENT LEVELS IN AN OBJECTIVE HIERARCHY
Impact indicators - indicators that show to what extent the project has contributed towards
its goals;
Outcome indicators - indicators that show to what extent planned results have been
achieved;
Output indicators - indicators that show what activities have been completed;
Process indicators - indicators that show what tasks and activities are being implemented;
Input indicators - indicators that show what resources have been used by the project.
3.6 STEPS FOR DEVELOPING RESULT INDICATORS
1. Clarify specifically what the result is intended to achieve;
2. Develop key evaluation questions for the result;
3. Brainstorm ideas for indicators;
4. Select a minimum of feasible indicators;
5. Clearly define and word the selected indicators;
Per indicator ...
6. List the information needed;
7. Identify how this information will be gathered (method, timing, by whom, forms, frequency);
8. Identify how information will be collated, stored and managed;
9. Establish analysis and presentation methods;
10. Establish mechanisms for validating and checking information.
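Steps 5 to 10 above effectively define a structured record that should exist for every selected indicator. A minimal Python sketch of such a record follows; the field values are hypothetical examples:

```python
# Sketch: capturing steps 5-10 above as a structured record per indicator.
# The example field values are hypothetical.

from dataclasses import dataclass

@dataclass
class IndicatorPlan:
    definition: str          # step 5: clearly defined and worded indicator
    information_needed: str  # step 6: information required per indicator
    collection_method: str   # step 7: method, timing, by whom, frequency
    storage: str             # step 8: how information is collated and stored
    analysis: str            # step 9: analysis and presentation methods
    validation: str          # step 10: checking and validating information

plan = IndicatorPlan(
    definition="% of counselled clients tested for HIV by year end",
    information_needed="counselling and testing records per client",
    collection_method="monthly extraction from clinic registers by the monitor",
    storage="collated in a central project database",
    analysis="monthly trend table, reported quarterly",
    validation="spot checks of registers against the database",
)
print(plan.definition)
```

Keeping one such record per indicator makes it easy to check that no step has been skipped before data collection begins.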
3.7 QUANTITATIVE AND QUALITATIVE INDICATORS
Indicators must always be provable, and they are often measurable in terms of numbers.
Quantitative indicators tend to stress objective numbers that speak for themselves, controlled
measurements and quantifiable results.
3.7.1 Quantitative Indicators
The following should be considered in developing quantitative indicators:
Identify the indicator as it relates to the objective - number of community
development meetings facilitated;
Specify target group - stakeholders (community leaders, managers from the primary
healthcare facility, school governing body, school management committee, women
groups, youth groups);
Specify unit of measure - number, level, extent, percentage, rate, ratio;
Specify timeframe - end May 2003 or 30th September 2004;
Specify baseline - prior to the project 0 development meetings were facilitated;
Define quantity - 12 meetings per annum;
Specify location - Kliptown.
3.7.2 Qualitative Indicators
Qualitative indicators express how things are, rather than how much. The existence of a
clinic in an informal settlement is a qualitative indicator of the improved provision of
healthcare facilities. The fact that people are feeling more confident about participating in
meetings is a qualitative indicator in terms of increased organisational capacity. It must be
provable, and this may mean doing a qualitative analysis and then expressing the results in
quantitative terms, for example: "95% of the committee members said that they felt more
confident about participating in meetings than they had previously." Qualitative indicators
tend to stress observations that show changes in situations, behaviour, feelings and attitudes,
observations about processes and interpretation of situations.
The following should be considered in developing qualitative indicators:
Subject of interest: capacity building, training, home-based care;
Target group: non-governmental organisations, home-based caregivers;
Type of change: improved care provided to People Living with HIV/AIDS;
Timeframe: May to December 2003;
Location: Kliptown.
3.8 INDICATORS FOR EFFECTIVENESS, EFFICIENCY AND IMPACT
Indicators can be used as signs of efficiency, effectiveness and impact. Indicators for
development and strategic objectives will tell you if your work is having an impact. Indicators
at an activity level will tell you if you are being effective and efficient. Efficiency indicators
tell you whether or not resources are being put to the best possible use. Effectiveness indicators
tell you whether or not you have done what you said you would do.
Example:
An NGO that did HIV/AIDS prevention work had the following objective:
Increased sharing of resources, skills and experiences among NGOs working in the field of
HIV/AIDS prevention.
3.8.1 The Activity
To organise a three-day national conference for all NGOs working in the prevention of
HIV/AIDS field in South Africa, by the end of 1998.
3.8.2 Effectiveness Indicators
Number of organisations that attended the workshop;
Reactions received in respect of the workshop.
3.8.3 Impact Indicators
Number of regional workshops facilitated as a result of the conference;
Extent to which participants engaged in networking activities after the conference;
Number of requests for information regarding NGOs working in the field of HIV/AIDS
prevention.
3.8.4 Efficiency Indicators
Level of completion in the logistical arrangements for the conference;
Cost per participant to attend the conference;
Time spent by staff to prepare for the conference.
SAMPLING AND DEVELOPMENT OF
DATA COLLECTION TOOLS
5
1.1 INTRODUCTORY STATEMENTS
1. OVERVIEW
This chapter provides the reader with an understanding of the
sampling, instrumentation development process and methods of
collecting data.
1.2 OBJECTIVES
Identify different methods in the selection of a sample;
Understand the concept of sampling;
Understand the logic of sampling;
Be comfortable with developing a sampling framework;
Identify the different methods used in data collection;
Identify the key evaluation questions used in collecting data;
Analyse the data that has been collected.
1.3 TARGET GROUP
Monitors;
Project Managers;
Project Staff.
2. SAMPLING
2.1 GENERIC ISSUES RELATED TO SAMPLING
2.1.1 The Choice of Participants
The number of participants to be monitored and evaluated will be decided during the process
of planning the evaluation against the agreed criteria.
2.1.2 Sampling Concepts and Terms
The process of sampling employs a number of technical terms. These terms will be defined to
ensure that the discussion is easily understood. The following definitions are drawn from
Babbie and Mouton (1998, pp. 174-175).
Element:
An element is the unit about which information is collected and which provides the basis for
analysis.
Population:
A population is the theoretically specified aggregation of the elements that are included in
the study. A population must be carefully defined to ensure that the reader of the report is
not confused or deceived.
Sampling Unit:
A sampling unit is that element that is considered for selection at some stage of the sampling.
Sampling Frame:
A sampling frame is the actual list from which the sample is selected.
Observation Unit:
An observation unit is also referred to as a unit of data collection. It is an element or
aggregation of elements from which information is collected.
Variable:
A variable is a set of mutually exclusive attributes.
Parameter:
A parameter is the summary description of a given variable in a population.
Statistic:
A statistic is the summary description of a given variable in a sample.
Sampling Error:
Sampling error is the degree of error to be expected for a given sample design. It reflects
the difference between the sample statistics and the population parameters when a
probability sampling method is applied.
Confidence Levels And Confidence Intervals:
The accuracy of a sample statistic is expressed as the level of confidence that the statistic
will fall within a specified interval of the parameter. As the confidence interval is widened,
the level of confidence increases.
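These concepts can be made concrete with a small, hedged sketch. Assuming a normal approximation to the sampling distribution (a common simplification), a confidence interval around a sample proportion might be computed as follows; the figures are illustrative, not drawn from any actual project:

```python
import math

def proportion_confidence_interval(successes, n, z=1.96):
    """Confidence interval for a sample proportion, using the
    normal approximation; z=1.96 gives a 95% confidence level."""
    p = successes / n                           # the sample statistic
    margin = z * math.sqrt(p * (1 - p) / n)    # sampling error margin
    return (p - margin, p + margin)

# e.g. 520 of 1 000 sampled participants report using the service
low, high = proportion_confidence_interval(520, 1000)
```

At the 95% level, this hypothetical sample of 1 000 with 520 positive responses yields an interval of roughly 0.489 to 0.551 around the statistic of 0.52; widening the interval (a larger z) would raise the level of confidence.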
2.1.3 The Logic of Sampling
Due to the heterogeneous nature of communities, it is necessary that the selected sample
closely mirrors the characteristics that are prevalent in the overall population of the
community. It is therefore critical that the sampling approach is logical, and carefully
considered.
(i) Representivity of the Sample
A sample will be regarded as representative if it mirrors the aggregate characteristics of the
population from which the sample is drawn. The representivity is determined with respect to
the particular characteristics that are relevant to the substantive interests of the study.
A basic principle of probability sampling is that a sample will be representative of the
population from which it is selected if all members of the population have an equal chance
of being selected in the sample (Babbie and Mouton, 1998, p173). Samples, however,
seldom exactly mirror the population from which they are drawn. Probability sampling offers
two particular advantages. These are:
that they are generally more representative, because the sampling biases are avoided;
that probability theory allows for the estimating of the accuracy or representivity of the
sample.
(ii) Developing a Sampling Framework
There are a number of sampling methods that can be utilised to develop the framework to be
used in the research. These include:
Simple random sampling;
Systematic sampling;
Stratified sampling.
2.1.4 Types of Sampling
(i) Simple Random Sampling
This is the basic sampling method. Once a sampling frame has been established, the monitor/
evaluator allocates a single number to each element of the list. A table of random numbers is
then used to select the sample that will be used in the monitoring and evaluation. This
method of sampling is rarely used, as it is administratively complex and difficult to manage if
it is being undertaken manually.
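For readers working electronically rather than manually, a simple random sample can be drawn without a table of random numbers. The sketch below assumes a hypothetical list of participant names as the sampling frame:

```python
import random

def simple_random_sample(frame, size, seed=None):
    """Draw a simple random sample (without replacement) from a
    sampling frame, so every element has an equal chance of selection."""
    rng = random.Random(seed)
    return rng.sample(frame, size)

# hypothetical sampling frame of 100 participants
participants = [f"participant_{i}" for i in range(1, 101)]
sample = simple_random_sample(participants, 10, seed=42)
```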
(ii) Systematic Sampling
In this sampling method, every kth element in the total list is chosen for inclusion in the
sample. An example of this would be that if there were 10 000 participants in a project, and a
sample of 1 000 participants was required, then every tenth name on the list of participants
would be selected to participate in the monitoring and evaluation. To ensure that the
selection is truly unbiased, the first element should be selected randomly.
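The tenth-name procedure described above can be sketched as follows; the list of 10 000 names is hypothetical:

```python
import random

def systematic_sample(frame, sample_size, seed=None):
    """Select every kth element from the frame, where k = N / n,
    starting from a randomly chosen element in the first interval
    so that the selection is unbiased."""
    k = len(frame) // sample_size               # sampling interval
    start = random.Random(seed).randrange(k)    # random first element
    return frame[start::k][:sample_size]

names = [f"name_{i}" for i in range(10_000)]
sample = systematic_sample(names, 1_000)
```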
(iii) Stratified Sampling
Stratified sampling represents a modification of the two sampling methods that have already
been discussed. Stratified sampling draws the appropriate numbers of elements from
homogeneous subsets of the overall population. To draw a stratified sample from a project,
the evaluation participants would first be grouped by urban and rural location, and
appropriate numbers of urban and rural participants would then be drawn.
The development of a sampling framework could entail the use of all three different methods
of sampling. The factors that would govern the framework would be the availability of the
sampling frame, the purpose of the monitoring and evaluation, and the accessibility of the
various people to participate in the monitoring and evaluation of the project.
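As a minimal sketch of the stratified approach, the snippet below groups a hypothetical frame of participants by urban/rural location and draws the same fraction at random from each stratum:

```python
import random
from collections import defaultdict

def stratified_sample(frame, stratum_of, fraction, seed=None):
    """Group the frame into homogeneous strata (e.g. urban/rural),
    then draw the same fraction at random from each stratum."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for element in frame:
        strata[stratum_of(element)].append(element)
    sample = []
    for members in strata.values():
        n = round(len(members) * fraction)   # proportional allocation
        sample.extend(rng.sample(members, n))
    return sample

# hypothetical frame of (participant, location) pairs:
# 200 urban and 100 rural participants
frame = [(f"p{i}", "urban" if i % 3 else "rural") for i in range(300)]
sample = stratified_sample(frame, stratum_of=lambda e: e[1], fraction=0.1)
```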
(iv) Cluster Sampling
Cluster sampling involves selecting people for consultation in groups or clusters, rather than
on an individual basis. For example, in a home-based care project, a number of support
groups could be selected at random, and every member of each selected group would then
be consulted during the data collection process.
(v) Quota Sampling
With the quota sampling method, a certain number of people who meet specified criteria
are consulted. For example, a specified percentage of women and men must be interviewed.
However, this type of sampling should be used with caution, as it can produce biased
information.
3. DEVELOPING DATA COLLECTION INSTRUMENTS
3.1 GENERIC EVALUATION INFORMATION ON DATA COLLECTION
There are a vast number of different methods that can be used to collect the data needed to
conduct an evaluation. The principal methods that will be used are:
The use of questionnaires;
Face-to-face interviews;
Self-administered questionnaires;
Focus group discussions;
Observations.
To manage the data collection process effectively, it is necessary to understand the
theory behind the development of the instruments that are used to guide these processes.
Grantees are encouraged to use multiple methods to collect data. Just as no single programme
design can solve complex social problems, no single method of data collection can document and
explain the complexity and richness of a project. Whenever possible, monitoring and evaluation
designs should include both qualitative and quantitative data collection methods.
There are many different data collection methods to choose from, including observation,
interviews, written questionnaires, tests, assessments and document reviews. When deciding on
which methods to use, consider the following:
Resources available for the monitoring and evaluation task - determine the resources
available and design the monitoring and evaluation accordingly. Calculating the cost of
several data collection methods that address the same questions, and employing a
well-thought-out mix of methods, can help optimise resource utilisation.
Credibility - how credible will the monitoring and evaluation information be? When
deciding between the various methods and instruments, ask the following questions:
- Is the instrument valid? In other words, does it measure what it claims to measure?
- How reliable is the measuring instrument? Will it provide the same answers even if it is
administered at different times or in different places?
- Are the methods and instruments suitable for the population being studied, and the
problems being assessed?
- Can the methods and instruments detect salient issues, meaningful changes and various
outcomes of the project?
Increased credibility can also be accomplished by using more than one data collection
method, because findings can be compared and confirmed.
Importance of information - remember to consider the importance of each piece of
information you plan to collect, both for the project and for the stakeholders. Some types of
information are more difficult and costly to gather than others. By deciding what information
is most useful, wasted energy and resources can be minimised.
3.2 KEY ISSUES IN DATA COLLECTION
To ensure that the most credible and cost-effective instruments are developed, the following
should be considered:
3.2.1 Guidelines to asking questions
There are two different forms of questions that can be asked: open-ended and
closed-ended questions. An open-ended question asks the respondent to provide
his or her own answer. These questions generally start with words like why, what, how, when
and describe.
Closed-ended questions require the respondent to select an answer from a set list of
possibilities. This type of questioning is very popular, because it provides a greater level of
uniformity. This makes for easier data processing. The key limitation of this type of
questioning is that it can result in some important answers being overlooked. This is more
likely to happen when the anticipated responses are not clear-cut and straightforward.
It is important that questions included in the instrument are very clear and unambiguous. It is
also important to ensure that the questionnaires do not include double-barrelled questions.
Double-barrelled questions are those that expect a single answer to a combination of
questions.
The questions that are asked in an instrument must be relevant to the sample, and to the
research that is being undertaken. Questions should avoid negative constructions, as these
can result in respondents misunderstanding what is being asked.
Biased items and terms should be avoided. Questions that seem to lead the respondents to a
particular answer are regarded as biased. This type of question will have a negative effect on
the overall result of the research.
3.2.2 Questionnaire Construction
The format of a questionnaire is important, as an inappropriately structured questionnaire
can lead to respondents missing questions. The questionnaire should generally be clearly
spread out and uncluttered. Only one question should be put on a line.
If the questionnaire requires the respondent to check one of a series of responses, the most
appropriate structure is to provide boxes for the respondent to mark. The respondent could
also be asked to circle the correct response.
Contingency questions can also be asked in a questionnaire. These are questions that are
relevant to some respondents, but not to others. A series of questions is generally asked
about a single topic. Each level of question leads to a further set of questions and answers.
Matrix questions are those where several questions have the same set of answer categories.
Under these circumstances a Likert scale is often used. A typical Likert scale will have options,
such as the following, as a set of answers:
Strongly agree;
Agree;
Undecided;
Disagree;
Strongly disagree.
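When a matrix question is later captured for analysis, each Likert option is typically translated into a numeric code. The mapping below is a hypothetical coding scheme, not a prescribed standard:

```python
# hypothetical numeric codes for a five-point Likert scale
LIKERT_CODES = {
    "Strongly agree": 5,
    "Agree": 4,
    "Undecided": 3,
    "Disagree": 2,
    "Strongly disagree": 1,
}

def code_matrix_responses(responses):
    """Translate the text answers from a matrix question into
    numeric codes ready for capture and analysis."""
    return [LIKERT_CODES[answer] for answer in responses]

codes = code_matrix_responses(["Agree", "Strongly agree", "Disagree"])
```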
3.2.3 Questionnaire Format
The order of the questions in the questionnaires can influence the way that respondents
understand and view their responses. The proposed format to apply for self-administered
questionnaires is to:
Begin the questionnaire with the most interesting set of items;
Ensure that the initial items are not threatening;
Place demographic information at the end of the questionnaire.
Interview questionnaires should reverse this order. This allows the interviewer to establish
rapport with his or her interviewee, through asking unthreatening demographic questions at
the beginning of the discussion.
All questionnaires should have very clear instructions as to what the respondents are expected
to do. This can be facilitated by providing examples of a typical answer that is required in the
questionnaire. This is particularly relevant in the case of a self-administered questionnaire. If a
questionnaire is broken into sub-sections, it is important to introduce each section as well as
the expectations for answering the questions in that section.
3.2.4 Pre-testing the Questionnaire
Due to the uncertainty of questionnaire design, and the manner in which respondents are
going to react to a particular questionnaire, it is necessary to pilot the instrument prior to
engaging in the full-scale research project. This process allows for the identification of
ambiguity in questions, misunderstood questions and questionnaire flow challenges.
This process can be accomplished by getting a small number of the population to complete
the questionnaire prior to commencing the research. Their responses are then analysed and
discussed, so that the questionnaire can be corrected.
3.3 TYPES OF DATA COLLECTION
3.3.1 Face-to-face Interviews
This method of data collection is the most commonly used in the region. Due to the low level
of literacy of the population, it is more efficient to ask the respondents the questions and to
record their answers, than it is to give them self-administered questionnaires to complete.
There are a number of dynamics that need to be considered when running interviews. The first is
that many of the participants in the interview process are not familiar with interviews. This
may influence their responses. The participants may feel threatened by the process, which may
result in them giving inaccurate, biased or insincere responses.
Collecting data through an interview process has a number of advantages. The first of these is
that the response rate is far higher than the response rate of a self-administered
questionnaire that is distributed via the mail. Face-to-face interviews also increase the
accuracy and relevance of the answers that are elicited, because interviewers can probe for
answers to a particular question.
The interviewer can also observe things during an interview that relate to sensitive issues
best not asked directly. For example, an interviewer may be able to determine the race of a
person without having to ask, where the question of race could be deemed sensitive.
(i) General Rules for Face-to-face Interviews
The key rules are as follows:
The interviewer should be able to communicate with the interviewee in his or her home
language;
It is desirable to match the demographic profile of the interviewer and interviewee, as this
is inclined to put the interviewee at ease;
It is desirable to use an interviewer from the same area as the interviewee;
The interviewer needs to present himself or herself in an appropriate manner. He or she
should be neatly dressed, and should put the interviewee at ease;
The interviewer should be very familiar with the questionnaire that is used to guide the
interview. He or she should be able to interpret the context of the questions, to ensure
that the correct information is extracted;
The responses that are elicited need to be recorded accurately;
When necessary, the interviewer needs to be able to probe for responses, to ensure that
the necessary information is extracted/noted. This is normally necessary when the
answer presented is ambiguous and unclear.
3.3.2 Self-administered Questionnaires
The use of self-administered questionnaires is only appropriate when the sample is sufficiently
literate to complete the questionnaire accurately. The primary distribution method for
self-administered questionnaires is mail, although a number of other methods can be used.
These include home delivery, gathering the respondents in a single place and/or
combinations of these methods. Electronic distribution is a newer approach that is
increasingly used.
It is necessary to monitor the returns of self-administered questionnaires. The completed
instrument is collected during the evaluation that is conducted by the team.
3.3.3 Focus Group Discussions
A focus group discussion is when a group of people (up to about 12 persons) is interviewed
together. The interviewer ensures that all the people in the focus group are given an
opportunity to provide answers to questions. A focus group discussion allows the researcher to
access information that would not otherwise be accessible.
There are a number of general rules for running focus groups. These are:
Make sure that there are enough participants to ensure that the focus group will still
work if some of those present choose not to participate;
Bear in mind the amount of information that needs to be generated from the focus
group, and choose the number of members to ensure that the group dynamics emerge;
Try to avoid friendship pairs, experts and unco-operative participants;
The larger the group, the more difficult it is to manage;
Larger groups require a higher level of skill and interviewer intervention;
It is generally recommended that 20% more candidates than required are recruited for focus
group interviews. This will allow for an appropriate group size even if some of the
participants do not arrive;
Between three and five focus groups should be held, so that idiosyncratic characteristics
can be distinguished from genuine patterns in the data;
The more heterogeneous the group, the higher the number of focus group sessions that
should be held;
The instruments that should be used to run focus group sessions need to have the
following:
- They need to consider the profile of the focus group, and ask questions that are
relevant to the particular profile;
- They need to echo the instruments that have been developed for the individual
interviews;
- They need to ask questions that allow the group to formulate answers.
3.3.4 Observation and Participant Observation
There are two generally used forms of observation. These are simple observation and
participant observation. Simple observation is where the researcher remains outside the
activities that are being observed. Participant observation is where the researcher is part of
the team being researched, and is the person conducting the research. Typical issues that may
be observed are:
Exterior physical signs;
Expressive movement;
Physical location;
Language indicators;
Time duration.
When conducting an observation, it is important to make full and comprehensive notes of
what is seen. The notes should include the empirical observations and any comments on these
observations.
The instruments that are prepared to facilitate an observation need to list the particular
behaviours and characteristics that will be observed. These must be aligned with the overall
evaluation of the project. The preparation of this instrument will assist in effective
observation and will guide the observer in terms of the areas that require evaluation.
3.3.5 What should be observed within projects during evaluations?
Key Evaluation Questions
Relevance - in terms of the improvement required, was/is the project a good idea?
Was the logic of the project correct? Why, or why not?
Effectiveness - have the planned results been achieved? Why, or why not?
Efficiency - have resources been used in the best possible way? Why, or why not?
Impact - to what extent has the project contributed towards its longer term goals?
Why, or why not? Have there been any unanticipated positive or negative consequences
of the project? Why did they arise?
Sustainability - once the project is finished, will there be continued positive impacts as a
result of it? Why, or why not?
[Diagram: the project logic, from the situation to improve (problems and visions) through the project plan (planned results and activities) to inputs, activities and actual results, annotated with the five key evaluation questions: 1. Relevance; 2. Effectiveness (performance); 3. Efficiency; 4. Impact; 5. Sustainability.]
Manto Management: Bristol-Myers Squibb, Capacity Building Workshop, August 2002
3.4 DATA COLLECTION
Once you have refined your monitoring and evaluation questions, and determined what data
collection methods to use, you and your team are ready to collect the data. Remember to collect
only the information you are going to use, and to use all the information that you collect.
Best practice indicates that an integrated, rather than a stand-alone, approach to data collection
is important. Most organisations collect a great deal of information on a range of topics and
issues pertaining to the project, but that information often stays unused in management
computer systems, or remains isolated in a particular area of the organisation. Therefore part of
the data collection process is examining existing tracking systems, deciding why certain data is
collected and how it is used, and thinking critically about what kinds of data project staff need
but have not been collecting in consistent ways.
You then need to review and establish the following:
What information you need;
How you will collect the information;
What information you have;
Where the information gaps are;
What resources you have for collecting information;
What technical capacity you have.
You can then go ahead and collect your information, based on the instruments that you
developed, or the indicators that have been developed.
DATA ANALYSIS AND REPORTING
6
1.1 INTRODUCTORY STATEMENTS
1. OVERVIEW
This chapter provides readers with an overview of the data
analysis and report writing processes. Participants will acquire
techniques for analysing data and generating reports based
on the information.
1.2 OBJECTIVES
Understand the process for data capture;
Identify the principles involved in data analysis;
Organise data for report writing, quarterly and semi-annual
reports.
1.3 TARGET GROUP
Monitors;
Project Managers;
Project Staff.
2. DATA CAPTURING
2.1 DATA CODING
To facilitate the interpretation and analysis of data, it is necessary that it be captured into a
database. To facilitate this, the data must be coded. Coding is a process that translates the data
that is collected into a format that can be electronically captured. The process of coding reduces a
large range of pieces of information into a number of defined attributes.
The process of developing a coding structure often emerges from the research that has been
undertaken. A code would be allocated, based on trends that emerge from the data. It is necessary
to define the code categories and it is helpful to provide several examples of the types of
responses that would fall into the defined code category.
When it is unclear how the data should be coded, a set of anticipated responses should be
prepared and codes should be allocated to the various categories of responses. When coding
information, it is important that every response can fit into only one category that has been
developed.
Data can also be coded against particular questions that are asked. When this occurs, the
interviewer records the data against the question. All information that is collected in response to
that particular question will be the data for analysis in respect to that question.
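A minimal sketch of a keyword-based coding scheme is shown below. The categories, keywords and catch-all code are illustrative assumptions; in practice they would emerge from the research itself:

```python
# illustrative code categories for an open-ended question such as
# "What was the main benefit of the project to you?"
CODE_CATEGORIES = {
    1: ["income", "money", "earnings"],      # economic benefit
    2: ["health", "clinic", "treatment"],    # health benefit
    3: ["training", "skills", "learning"],   # skills benefit
}
OTHER = 9  # catch-all, so every response fits exactly one category

def code_response(text):
    """Allocate a single code to a free-text response, using the
    first category whose keywords appear in the answer."""
    lowered = text.lower()
    for code, keywords in CODE_CATEGORIES.items():
        if any(word in lowered for word in keywords):
            return code
    return OTHER

codes = [code_response(a) for a in
         ["The training gave me new skills",
          "I can now earn more money",
          "I made new friends"]]
```

Returning on the first match enforces the rule above that every response fits into only one category.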
2.2 DATA CLEANING
Data cleaning is the process of eliminating the errors from the data that has been captured.
There are two different approaches to data cleaning. These are possible code cleaning and
contingency cleaning.
Possible code cleaning is when the code that has been allocated to a response is irrelevant in
relation to the question that has been asked. For example, if the question asks for the gender of
the respondent, the possible answers and codes could be: 1 for female; 2 for male; and 0 for no
answer. If, when the data is reviewed, an answer to this question is coded 5, this is clearly an error
that needs to be rectified. The source documentation must be referenced to correct this error.
Many of the database programmes that are used now implement parameters that will not allow
incorrect information to be entered into a field. This limits the number of data errors that are
made.
Contingency cleaning applies when responses to a question are only relevant to certain respondents. For
example, a questionnaire may ask for the number of children that a woman has borne. Under
these circumstances, any responses that are secured from male participants should have no
response to this question. If a male response is indicated in the data, this needs to be rectified
through referencing the source documentation.
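Both cleaning approaches can be automated. The sketch below assumes the hypothetical gender and children-borne coding described above:

```python
VALID_GENDER_CODES = {0, 1, 2}   # 0 = no answer, 1 = female, 2 = male

def possible_code_errors(records):
    """Flag records whose gender code falls outside the defined set
    (possible code cleaning)."""
    return [r for r in records if r["gender"] not in VALID_GENDER_CODES]

def contingency_errors(records):
    """Flag male respondents who have an answer to the children-borne
    question (contingency cleaning)."""
    return [r for r in records
            if r["gender"] == 2 and r["children_borne"] is not None]

records = [
    {"id": 1, "gender": 1, "children_borne": 2},
    {"id": 2, "gender": 5, "children_borne": None},   # invalid code
    {"id": 3, "gender": 2, "children_borne": 1},      # contingency error
]
bad_codes = possible_code_errors(records)
bad_contingencies = contingency_errors(records)
```

Flagged records would then be corrected against the source documentation, as the text describes.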
2.3 DATA CAPTURING THROUGH DIRECT DATA ENTRY
Direct data entry is when the data that is collected is captured directly onto the computerised
database without using code sheets or edge coding. The questionnaire links the responses to
particular questions or codes. The responses are captured in line with these codes or questions.
The interviewers may capture their responses directly onto the database. Under these
circumstances, the database will facilitate capturing of text, as opposed to a quantitative
response.
3. DATA ANALYSIS
3.1 INTRODUCTION TO DATA ANALYSIS
Most data analysis relies on the simultaneous analysis of multiple variables. The data can be
analysed through a number of different tools, including multiple correlation, factor analysis,
multiple regression and path analysis. Simple tables are the easiest way to present the
information to facilitate the analysis of the data. Data is analysed to identify the trends that
emerge, the frequency of responses and the areas of performance and non-performance.
The key to data analysis is to identify the correct statistic to use, given the level of the data and
the appropriate statistical procedure. The levels of data are:
Nominal data: This data classifies cases into categories that have no inherent order or
numeric value. It would include data on issues such as gender, race and job title, among others;
Ordinal data: This data is based on one principle or level, but it can be ranked. An
example of ordinal data would be data that is ranked in a scale from strongly agree to
strongly disagree. If the scale was mixed up, it would make little sense. Ordinal data
measures the ranking, but not the difference, between the levels;
Interval data: This data also ranks information but does so in equal intervals. An example of
interval data would be a score. The interval between 1 and 2 is the same as the interval
between 2 and 3;
Ratio data: This data possesses the characteristics of the previous levels of data, but also
has an absolute zero. An example of ratio data would be weight. 80kg is twice as heavy
as 40kg.
Understanding the level of data that has been collected allows for an understanding of how to
analyse the data.
3.2 UNIVARIATE ANALYSIS
Univariate analysis is a method of analysis of a single variable of data. It views the distribution of
the data for the selected variable. The most basic format for presenting univariate data is to
report all the individual cases, listing the attributes of each case that
was researched. The data can be presented in a table, or in a frequency distribution graph.
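A frequency distribution for a single variable can be produced directly from the raw cases; the education-level responses below are invented for illustration:

```python
from collections import Counter

# illustrative single variable: highest education level of respondents
responses = ["primary", "secondary", "secondary", "tertiary",
             "secondary", "primary", "none"]

# the frequency distribution is the most basic univariate presentation
frequencies = Counter(responses)
```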
3.2.1 Central Tendency
Data can also be presented as a measure of central tendency. This can be done through use of
the mode (the most frequent attribute), the mean (the average) or the median (the midpoint
of the data set).
3.2.2 Dispersion
Using an average reduces the data to a single figure. The disadvantage of this approach is that
it is impossible to reconstruct the original data set from the average. To alleviate this
situation, a summary of the dispersion of results can be presented. The simplest dispersion is
the range. This is the distance that separates the highest from the lowest value.
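The measures of central tendency and dispersion described above can be computed with Python's standard statistics module; the ages are illustrative:

```python
import statistics

ages = [19, 22, 22, 25, 31, 40, 57]

mode = statistics.mode(ages)         # the most frequent attribute
mean = statistics.mean(ages)         # the average
median = statistics.median(ages)     # the midpoint of the ordered data
value_range = max(ages) - min(ages)  # simplest measure of dispersion
```

Reporting the range (here 38) alongside the mean restores some of the detail lost when the data is reduced to a single figure.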
3.2.3 Continuous and Discrete Variables
A continuous variable is one that increases steadily in equal amounts. An example of this
would be age. A discrete variable jumps from category to category. Examples of discrete
variables are gender, marital status and job title.
3.2.4 Detail versus Manageability
The paradox of data analysis is that as much detail as possible should be provided to
readers, yet for the data to make sense it must be reduced through analysis and presented
in a manageable form. To manage this paradox, it is often recommended
that the data be presented in more than one way. Multiple levels of analysis will allow for the
development of a more complete picture of the information.
4. ORGANISING DATA FOR REPORT WRITING
Bearing in mind that the data represents the process through which the findings are determined,
it is necessary to present it in a manner that is easily understandable. The techniques that can be
used to present the data are tables, graphs and other illustrations. It is important that the data is
presented in a logical manner that follows the flow of the findings. As part of the analysis process,
each piece of data needs to be introduced and discussed.
It is also important that the data is presented in a manner that prevents any misunderstandings.
The impact of misunderstood findings could be vast and varied. The balance between the
sample size, the methodology employed to determine the sample size and the findings needs
to be correctly presented to ensure that findings are not inappropriate when extrapolated
to a broader population.
It is also important that the findings of the data analysis reflect any limitations that may have
been identified. This will ensure that accurate interpretations are levelled against the data that has
been presented.
The use of appendices allows for a fuller presentation of the data that has formed the basis of the
findings included in the report. The report needs to be user-friendly, and to speak to the
audience at which it is aimed. To facilitate this, the interpretation of the findings is the most
critical information. The data analysis process needs to be mapped, and the interpretation of
this analysis should be reflected in the report.
To present a plethora of tables and graphs that were part of the process of data analysis does not
necessarily result in the most easily readable report. The full set of data that has been analysed for
reporting can be presented in the report appendices. This ensures that readers will have access to
the full extent of the processed data should they require it.
5. REPORT WRITING
Once the Bristol-Myers Squibb Foundation management has authorised and approved funding
for a project, the foundation and the organisation reach an understanding on the frequency
of reporting on project progress.
This reporting revolves around the satisfactory completion of the programme's established goals
and agreed-upon milestones, and up-to-date accountability for funds paid to the organisation. A
satisfactory narrative and financial report triggers the release of instalments due for continued
support by the foundation.
The submitted reports should be signed by the programme and/or financial director. Designated
representatives from the Bristol-Myers Squibb Foundation will be responsible for overseeing the
progress of programmes and ensuring compliance with the programme's established goals and
agreed-upon milestones.
REPORTING FORMAT FOR BRISTOL-MYERS SQUIBB, SECURE THE FUTURE GRANTEES
Name of Organisation:
Title of funded project:
Date of report submission:
Report number:
1. Introduction - organisation brief including objectives of the project. This gives an overall view
of the organisation and its intentions.
2. Objective(s) for this reporting period - state objectives achieved and those not achieved
during the reporting period. Each objective must be defined and the progress elaborated upon,
to give insight into the project's progress or challenges.
Page 113
3. Implementation of the Monitoring Plan - to assess how the project is gaining momentum
(The Monitoring & Evaluation Operational plan is a working document which is updated with
actual information completed for the objectives reported versus predicted targets).
4. Activity Reporting - Insert tables/graphs for quantitative information. Document how many
people have been reached through this project. Elaborate on qualitative outcomes namely,
feedback from the target groups identified.
5. Networking with other organisations.
6. Summary of the key findings.
7. Achievements stated during this reporting period.
8. Challenges for the stated reporting period.
9. Plan for next reporting period.
10. Conclusions for the stated reporting period.
11. Author.
* Attach the Financial Report and expenditure narrative. Also fax a copy of the Bank statement
with the signed and dated copy of the financial report.
** Final Reports should be comprehensive, including details of quantitative and qualitative
information, achievements, challenges and a conclusion from all reports.
Manto Management: Bristol-Myers Squibb, Capacity Building Workshop, August 2003
EVALUATION REPORTING FRAMEWORK
[Diagram: baseline information, followed by monitoring reports (1), (2) and (3) over months M1 to M12, leading to a formative evaluation.]
PLANNING FOR AN EVALUATION
7
1.1 INTRODUCTORY STATEMENTS
1. OVERVIEW
This chapter provides readers with insight on how to develop and
implement a plan to execute an evaluation. This planning process
focuses on both the internal and external evaluation.
1.2 OBJECTIVES
Identify the steps required in planning an evaluation;
Undertake the steps necessary to prepare for an evaluation.
1.3 TARGET GROUP
Monitors;
Project Managers;
Project Implementers;
Key Stakeholders (Community, Funders and Donors).
2. PREPARING FOR AN EVALUATION
2.1 WRITTEN AGREEMENTS
When preparing for an evaluation, it is important to make sure that the details of the evaluation
are clearly mapped out in writing. An evaluation generally deals with sensitive issues such as
project staff performance, organisational issues and project success or failure. For this
reason, it is important that there are written agreements between the project/organisation and
the evaluation team. This will allow both parties to be clear on expectations and may
highlight potential side effects or consequences that were not initially anticipated (Posavac and
Carey, 1996).
2.2 IDENTIFYING THE LEVELS OF EVALUATION PARTICIPANTS
The first step in identifying the levels of evaluation participants is to obtain a description of
what is to be evaluated. This will assist the evaluator/supervisor to understand the environment of
the evaluation and to map out the context in which the evaluation is occurring. Typical areas
that are highlighted are the geographical scope of the evaluation, the level of willingness of the
participants and the size of the audience (Posavac and Carey, 1996).
Once this process has been undertaken, the evaluator/supervisor should identify the levels of
evaluation participants. These include:
Those people who are personally involved in the project as implementers;
People who receive the services delivered by the project (Posavac and Carey, 1996).
Once the levels of evaluation participants have been identified, their various roles and functions
need to be mapped. The needs of the evaluation participants will vary. The following table maps
some needs that the various participants may have:
2.3 THE NEEDS OF THE EVALUATION
The key considerations of the evaluation that need to be identified are as follows:
The type of evaluation: The evaluator/supervisor must guide the form to be taken by the
evaluation, since the evaluator/supervisor generally has a higher level of technical skill in
evaluation than the members of the project team. The forms that the evaluation may take
range from a goal-free structure through to an objective-based structure
(Posavac and Carey, 1996).
STAKEHOLDER       NEEDS
Project staff     To provide feedback on the performance of the project;
                  To give feedback on each individual's contribution to the project;
                  To formulate ideas on strategies for improvement.
Funders/Donors    To determine the performance of the project;
                  To identify areas of potential growth;
                  To understand the areas that require attention to ensure that the
                  project performs at the level required.
Recipients        To understand the level of performance of the project;
                  To understand how the project should be meeting the needs of the
                  recipients;
                  To receive feedback on the performance of the project.
The purpose of the evaluation: The purpose behind the evaluation will govern the overall
structuring of the process. The evaluator/supervisor needs to identify what the evaluation
needs to review, and why this information is required. It is important that the climate is
prepared around the purpose of the evaluation. This entails planning information sessions
with the project, and preparing project staff and the recipient community for the process.
The evaluator/supervisor needs to facilitate this process (Posavac and Carey, 1996).
The timing of the evaluation: The evaluation needs to be run at a time that is suitable both
to the community and to the organisation implementing the project. The timelines of the
evaluation must be carefully considered. Factors that must be considered include the time
it will take to access the levels of evaluation participants; to develop the various measures; to
capture and analyse the information; and to develop the report. The end date of the
evaluation must ensure that the evaluation team is able to work effectively through the
complete process (Posavac and Carey, 1996).
The resources that are available to undertake the evaluation: The evaluator/supervisor
needs to consider the availability of staff to participate in the evaluation. The schedule of
the evaluation team also needs to be reviewed when planning the evaluation. These parties
need to be available to ensure that the process is successfully completed
(Posavac and Carey, 1996).
How effectively can an evaluation be undertaken in the defined area? It is important that
the methods and approach of the evaluation are agreed on prior to the commencement
of the process. This will ensure that the findings will be applicable, and this will increase
the credibility and therefore the success of the evaluation (Posavac and Carey, 1996).
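The timing factors listed above lend themselves to simple back-scheduling: sum the expected duration of each phase and count back from the agreed end date. The phase names, durations and deadline below are illustrative assumptions, not figures from the guide:

```python
from datetime import date, timedelta

# Assumed phase durations in calendar days (illustrative only).
phases = {
    "access the evaluation participants": 10,
    "develop the various measures": 15,
    "capture and analyse the information": 20,
    "develop the report": 10,
}

agreed_end_date = date(2003, 11, 28)  # hypothetical deadline
latest_start = agreed_end_date - timedelta(days=sum(phases.values()))
print("Latest start date:", latest_start)  # → Latest start date: 2003-10-04
```

In practice each phase would also carry contingency time, but the arithmetic is the same.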
2.4 DATA MANAGEMENT
2.4.1 Documentation at Site
The following needs to be considered:
Essential documents which, individually and collectively, permit evaluation of the project
and of the quality of the data produced.
The essential documents should include the following:
- Introduction;
- Information collected before the evaluation commences;
- Information collected during the evaluation process;
- Information collected after the completion of the evaluation.
2.4.2 Document Management
Record retention:
- The evaluator is to hand over all records studied during the evaluation;
- The evaluator should also submit, electronically and in hard copy, all documents and
data collected during the evaluation;
- It is the evaluator's responsibility to retain all evaluation materials for at least one
year following the date of the evaluation.
Document Management - keep all evaluation-related documents in a designated
file/notebook as this will:
- Ease the record retention burden;
- Help the sponsor to evaluate the project;
- Facilitate routine monitoring and audit reports.
Study file/project notebook contents should include the following:
- Site information;
- Project proposal;
- Revisions on the project proposal;
- Progress reports;
- CVs of the project implementers;
- Policies and documents that govern the project;
- Strategic, business and operational plans;
- Monitoring plans and tools;
- List of indicators and their targets.
Data management requirements:
- Requirements for quality and quantity of data from monitoring and evaluation must
be rigorous;
- Confidentiality of evaluation participants must be maintained;
- Data reported to the funder must be accurate, complete, legible and timely;
- Data reported on the project should be consistent with any source documents or data
available, including monitoring information;
- Evaluation data must be retained according to the organisational and donor
agreements;
- Data reported on the project should be verifiable against source documentation;
- Direct access to all source documents and other study-related materials must be
permitted.
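The requirement that reported data be verifiable against source documentation can be sketched as a simple reconciliation: total the monitoring source records and flag any figure reported to the funder that does not match. The field names and figures below are hypothetical:

```python
# Source documents: one record per monitoring entry (hypothetical data).
source_records = [
    {"site": "Site A", "people_reached": 120},
    {"site": "Site A", "people_reached": 80},
    {"site": "Site B", "people_reached": 150},
]

# Figures reported to the funder (hypothetical).
reported = {"Site A": 200, "Site B": 155}

def discrepancies(reported, records):
    """Return {site: (reported figure, source total)} wherever they differ."""
    totals = {}
    for r in records:
        totals[r["site"]] = totals.get(r["site"], 0) + r["people_reached"]
    return {site: (figure, totals.get(site, 0))
            for site, figure in reported.items()
            if figure != totals.get(site, 0)}

print(discrepancies(reported, source_records))  # → {'Site B': (155, 150)}
```

Any flagged site would then be checked against its source documents before the report is released.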
If you do not write it, it did not happen.
Bristol-Myers Squibb, Good Clinical Practice.
ETHICS AND POLITICS IN
EVALUATION
8
1.1 INTRODUCTORY STATEMENTS
1. OVERVIEW
This chapter provides readers with insight on the ethics in
respect of monitoring and evaluation.
1.2 OBJECTIVES
Identify the ethical considerations related to the evaluation
process;
Consider the issue of the validity of evaluations;
Interrogate ethical problems associated with evaluations;
Understand the issue of ethical responsibility;
Interrogate the politics of evaluation.
1.3 TARGET GROUP
Monitors;
Project Managers;
Project Staff.
2. ETHICAL ISSUES IN EVALUATION
The ethics concerning evaluation relate to the standard of conduct that is linked to the
professional performance of evaluators. There are clearly a number of different codes of conduct
that have been agreed in the evaluation research environment. Most of these codes of conduct
address the following issues:
Systematic enquiry: evaluators conduct systematic, data-based enquiries into the areas
being evaluated;
Competence: the evaluators provide a competent level of performance to the sponsors;
Integrity and honesty: the evaluator fiercely guards the integrity and honesty of the entire
evaluation process;
Respect for people: the security, dignity and self-worth of the respondents, programme
participants, clients and other stakeholders that fall within the evaluation are held paramount;
Responsibilities for general and public welfare: The general interests and values that may be
related to the general and public welfare are taken into account.
2.1 ETHICAL ISSUES INVOLVED IN THE TREATMENT OF PEOPLE
The first and most important responsibility of the evaluator is to protect the people who are
participating in the evaluation from harm. This is also one of the most difficult areas to manage.
Often the findings from an evaluation could have a controversial or negative impact. Should
the participants in the evaluation process be negatively impacted through their role in the process,
they should receive adequate additional services to ensure that the programme being evaluated
does not harm them.
People who participate in an evaluation should be informed of the evaluation programme and
consent should be secured prior to their participation. This will also allow for a greater level of
protection of the people participating in the programme. Sufficient information must be
provided to the potential participants to allow them to make an informed decision as to whether
or not they wish to participate in the programme.
All information that is gathered in an evaluation process needs to be treated with the utmost
confidence. This can be done in a number of ways, including collecting data from anonymous
sources and reducing names to a coding system.
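Reducing names to a coding system can be sketched as follows: each participant name is replaced by a sequential code, and the name-to-code key is stored separately from the evaluation data. The `pseudonymise` helper and the `P-001` code format are illustrative choices, not prescribed by the guide:

```python
def pseudonymise(names):
    """Replace participant names with sequential codes.
    Returns the coded list plus the name-to-code key, which should be
    stored separately (and securely) from the evaluation data."""
    key = {}
    for name in names:
        if name not in key:
            key[name] = f"P-{len(key) + 1:03d}"
    return [key[n] for n in names], key

# Hypothetical participants; repeat responses keep the same code.
codes, key = pseudonymise(["Thandi M", "Sipho K", "Thandi M"])
print(codes)  # → ['P-001', 'P-002', 'P-001']
```

Only the keeper of the key can relink responses to individuals, which is what makes the confidentiality promise enforceable.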
2.2 ROLE CONFLICTS IN EVALUATION
There is often a level of conflict between the evaluation team and the staff of the programme
or school, because, if the evaluation reflects levels of non-performance, this could potentially put
the programme or school staff in a difficult position. Under these circumstances, it is advisable to
utilise the services of an external evaluator. The external evaluator would be important in
reflecting the performance of the school or programme accurately. An internal evaluator would
place himself or herself at considerable risk should he or she report that the programme or
school was not performing at the desired level.
Prior to the commencement of the evaluation, evaluators should try to minimise the conflict
levels between the various stakeholders. This could include securing agreement around issues such
as who will have access to the findings, how the information is to be used, what the
information is to be used for, and how the various patterns of data are to be interpreted.
2.3 RECOGNISING THE DIFFERENT NEEDS OF THE STAKEHOLDERS
Part of the process of making the evaluation as relevant as possible is to consider the needs of the
different stakeholders. Typical stakeholders would be:
Programme managers: The key focus of the programme managers is on efficiency and
service delivery.
Staff members: Staff members need practical recommendations that can facilitate the
improvement in service delivery issues. These recommendations can be process or outcomes
based. The staff members must be able to implement the recommendations to ensure higher
levels of service delivery.
Clients: the clients are those members of the community being served by the project. Their
key needs are around the delivery of appropriate and effective services. They also need the
service that is being delivered to be disrupted as little as possible during the evaluation. It is
therefore important that the evaluation process is effectively planned and managed.
3. THE VALIDITY OF EVALUATIONS
To ensure that the evaluation is conducted in a competent and appropriate manner, the
following steps must be in place:
Valid instruments must be used. These instruments must be designed to meet the needs of
the evaluation. They must also be as culturally sensitive and appropriate as possible for the
particular evaluation environment;
Skilled data collectors are critical to ensure the reliability and validity of the data. This also
impacts on the overall credibility of the evaluation;
The research design must be appropriate for the environment. The design must meet the
needs of the audience who will utilise the information. If the research design cannot answer
the questions that have been asked by the defining audience, it would be unethical to
continue with the evaluation;
The programme and procedures that are used need to be adequately described to facilitate
the repetition of the evaluation process. The information provided must also allow others fully
to understand the programme and the evaluation procedures.
4. ETHICAL PROBLEMS
Some of the most serious violations of good ethical evaluation practice include:
Changing the evaluation questions after examining the data;
Promising confidentiality when it cannot be guaranteed;
Making decisions about the evaluation without consulting with the client;
Carrying out an evaluation without sufficient training or skill;
Allowing groups to delete references to embarrassing programme weaknesses from the
evaluation reports.
The most difficult challenge that evaluators face is to present information in a clear, complete
and fair manner. Sometimes a great deal of pressure is applied to evaluators to present findings in
a particular way. It is critical that evaluators do not succumb to these pressures, as this would
present a serious ethical infringement. The challenge that presents itself to evaluators is to ensure
that the client openly accepts the criticisms and improvement recommendations that are made as
a result of the evaluation findings. To facilitate this, the evaluator needs to create an environment
where the client feels respected, and the critique is handled in a sensitive and appropriate
manner.
5. ETHICAL RESPONSIBILITY
The management of ethical evaluation is not the sole domain of the evaluator. The evaluation
sponsors, participants and audiences also assume a level of responsibility for the ethical
conduct around evaluations. Typical ethical infringements that emerge from parties other than
the evaluator are:
The evaluator is pressurised into changing his or her findings;
The findings are suppressed by the sponsor;
The findings are misused by the sponsor;
The findings are deliberately modified by the client prior to their release;
The sponsors omit specific stakeholder groups from the evaluation planning process;
The stakeholder pressurises the evaluator to violate the confidentiality that exists around the
data collection process;
The findings are used to punish the evaluator.
GLOSSARY
Accountability:
Activities:
Annual plan:
Appraisal:
Areas of relevance:
An agencys, organisations or individuals obligation to
demonstrate and take responsibility for performance in the
light of agreed expectations. The functions of monitoring and
evaluation thus support accountability.
The specific and tangible actions that must be carried out in
order to achieve particular objectives or results.
This is a plan containing detailed information pertaining to the
operations of the organisation, the resources, the capital
requirements and the budgeted financials for the coming year.
Analysis of a proposed project to determine its merit and
acceptability prior to commitment, usually in the light of
donor programme objectives and agreed approaches. Appraisal
of a project is the final step before a project is agreed for
financing. It checks that the project is feasible against the
situation on the ground, that the objectives that are set remain
appropriate and that costs are reasonable. Different kinds of
appraisals may be carried out (economic appraisal, social
appraisal, environmental appraisal, among others) and donors
usually have set requirements for which kinds are compulsory.
These are the issues in which the NGO is able to translate the
objectives of the organisation and give expression to other
appropriate policy requirements.
Page 133
Collection of information and data needed to plan programmes
and initiatives. These data may describe the needs of the
population and the factors that put people at risk, as well as
the context, programme response, and resources available
(financial and human).
Answers questions such as:
What are the needs of the population to be reached by the
programme/initiative?
How should the programme/initiative be designed or
modified to address population needs?
What would be the best way to deliver this
programme/initiative?
Assumptions refer to the set of hypotheses or beliefs about
cause and effect relationships and external factors that are
taken to be true in order for particular activities to lead to
desired results and for a set of results to lead to a higher level
result or goal. Any programme or project design and
intervention strategy is based on assumptions about cause and
effect relationships. There are internal assumptions about the
cause and effect relationships between levels in the objective
hierarchy and external assumptions about physical or
socio-economic circumstances that must exist for desired
objectives.
Assessment and planning:
Assumptions:
Page 134
An examination or review that assesses the extent to which
predetermined standards or conditions are met. Formerly only
a technical financial review, but now used to describe various
kinds of limited evaluations or appraisals, for example
environmental audit.
Information or data that describe the situation at the start of a
project or programme and that serve as a reference for
planning intended performance and measuring actual
performance. Baseline values are usually identified for
indicators.
Used occasionally to describe the purpose level of a logical
framework/project design, that is the project is broken into a
number of components each corresponding to one of the
project purposes. Outputs and activities are grouped to
correspond directly to the different components.
The very initial document setting out a proposal for a project.
External agencies (for example, NGOs) often prepare concept
notes and submit them to a particular donor agency.
This refers to those activities which define what the
organisation is about. Organisations distinguish activities,
which define its essence from those which are necessary for
its function, but are not primarily what the organisation is
about.
Direct beneficiaries of our services or products (for example
citizens).
Audit:
Baseline:
Components:
Concept note:
Core business:
Customers:
Page 135
This term is used in evaluation studies to describe the extent to
which the results of a project have or are likely to result in the
achievement of the project purpose. The distinction between
Efficiency and Effectiveness is an important criterion in
evaluation studies. Even if a project produces outputs
inefficiently.
This term was widely used instead of purpose when the
logical framework was first introduced.
The cost-effectiveness of converting resources (inputs) to
outputs. Both direct costs and the means of delivery, for
example management time, should be included. The distinction
between Efficiency and Effectiveness is useful for evaluation
studies.
The extent to which a project has been defined in such a way
as to enable evaluation to be undertaken.
A periodic assessment of the efficiency, effectiveness, impact,
sustainability and relevance of a project in the context of
stated objectives. An evaluation may also include an
assessment of unintended impacts. Evaluation studies are
usually undertaken as an independent examination of the
background, objectives, results, activities and means deployed,
with a view to drawing lessons that may guide future actions.
It is a rigorous, scientifically based collection of information
about programme activities, characteristics, and outcomes
to determine the merit or worth of a specific programme.
Effectiveness:
Effects:
Efficiency:
Evaluability:
Evaluation:
Page 136
It is used to improve programmes and inform decisions
about future resource allocations.
A feasibility study, conducted during the project preparation
phase, verifies whether the proposed project is well-founded,
and is likely to meet the needs of its intended beneficiaries.
Separate feasibility studies are sometimes carried out,
examining technical, economic, financial, institutional,
management, environmental and socio-cultural feasibility.
The long term objective to which a project is designed to
contribute. The ongoing interventions or projects of other
agencies and beneficiaries (over which the project has no
control) are necessary for the goal to be achieved. The goal
cannot be achieved by any single project alone. Common
goals are poverty reduced by X % in District Y.
What we want to achieve by a certain time (for example,
reduce number of repeat offenders by 10% next year).
Equivalent to project purpose: the central objective of the
project in terms of the change in performance by the
beneficiaries or target entity, such as an institution or system.
It does not refer to the services provided by the project, nor to
the use of these services.
Instead it refers to the benefits that project beneficiaries get
from using project services or the resulting change in
behaviour of the beneficiaries or target group.
Feasibility study:
Goal:
Goals:
Immediate objective:
Page 137
The achievements of a programme or project that are assessed
with reference to purposes and goals (that is the results that
are outside of the direct control of the project). Includes
planned or unplanned, positive or negative changes in a
situation that a project contributes to.
An ex-post evaluation conducted long after project completion
(generally five years), focusing particularly on the achievement
of purpose and goal levels and their sustainability.
Collects data about HIV infection at the jurisdictional,
regional, and national levels.
Answers the question: What long-term effects do
interventions have on HIV infection?
An essential requirement for performance and impact
measurement. Used in project planning to pre-define
measurements that is designed to signal progress (or lack of
progress) towards objectives, or to monitor assumptions. One
or more indicators are given against each objective in a logical
framework matrix. Well-designed project/programme
indicators are described in terms of quantity, quality and time
(QQT).
Indicator is also used to describe particular measurements,
which are not necessarily related to the objectives of a
particular programme or project, but show large scale change
(such as GDP)
Impact:
Impact evaluation:
Impact Monitoring &
Evaluation:
Indicator:
Page 138
The financial, human and physical resources required for the
implementation of project/programme activities. Often
described in a project budget accompanying a project logical
framework.
Collects data describing the individuals served, the services
provided, and the resources used to deliver those services.
Answers questions such as: What services were delivered?
What population was served and what numbers were
served?
What staffing/resources were used?
These are a set of small-scale, short-term activities. They
are intended to give effect to organisational strategic
objectives.
Action programmes that will achieve our performance
goals (for example, license renewals via the Internet).
A review or evaluation of a project or programme carried out
during implementation, some years after the start of the
project. Mid-term reviews usually examine both the efficiency
and effectiveness of the project thus far.
What we are about (for example, Our mission is to provide).
Tracks priority information relevant to programme planning
and intended outputs, outcomes and impacts. Tracks costs and
programme functioning. Provides basis for programme
evaluation when linked to a specific programme.
Input:
Input/Output Monitoring:
Initiatives:
Mid-term review:
Mission:
Monitoring:
Page 139
Describes the aims of a project or programme. Used generically
to refer to all or any levels of objectives, whether they are
activities, results, purposes, or goals.
Describes the logical hierarchy of objectives (activities, results,
purposes, goal for example) in the left hand column of a
results based framework series of more objectives in an
objective tree.
A diagrammatic representation of the proposed project
interventions, set out in a logical hierarchy. An objective tree
links objectives in an if-then logic. Objective trees are often
produced as a result of team-based planning activities and
problem tree analysis.
Outcome is often used generically to describe actual
achievements. It is more formally used among some agencies
to refer only to a projects actual achievements at the purpose-
level. In this case, outcomes are the benefits that project
beneficiaries get from using project services or the resulting
change in behaviour of the beneficiaries or target
group/institution.
What results are desired; our planned accomplishments (for
example, to improve citizen satisfaction).
Collects data about outcomes before and after the
intervention for clients as well as with a similar group that
did not participate in the intervention being evaluated.
Objectives:
Objectives hierarchy:
Objective tree:
Outcome:
Outcomes:
Outcome Evaluation:
Page 140
Answers the question: Did the intervention cause the
expected outcomes?
Basic tracking of measures related to desired programme
outcomes. With National AIDS programmes, outcome
monitoring is typically conducted through population-
based surveys to track whether or not desired outcomes
have been reached. May also track information directly
related to programme clients, such as change in
knowledge, attitudes, behaviour.
Answers the question: Did the expected outcomes occur,
for example, increase in condom use; increase in
knowledge or change in behaviour; increase in client use
of services?
The direct product of a projects activities, in terms of goods or
services. The outputs say nothing about the actual
outcome/purpose of a project. Well-defined outputs bring
added-value to the project activities. Thus for example the
output from training activities is improved knowledge or
application of a skill, not the number of people trained. The
outputs from building rural roads is the number of people with
improved access, not kilometres of road constructed.
What is produced (for example, number of vaccinations given,
number of claims accurately processed, participation by 25%
over two years, reduce annual crime rate by 10% per 100 000
people).
Outcome Monitoring:
Outputs:
Outputs/deliverables:
Page 141
Indicators that pre-define measurements that are designed to
signal progress (or lack of progress) towards objectives. A
distinction between performance indicators and impact
indicators is sometimes made. In which case, however,
performance refers to the achievement of project outputs
(efficiency). A set of performance indicators does not
necessarily capture all aspects of performance, as some of
these aspects may not be easily reduced to simple
measurements.
Indicators of success (for example, number of home-based care
patients served last month).
Predictors (performance drivers) of future success (for example,
increase in participants knowledge).
Different views of our organisation (for example,
customers/stakeholders, employees, capacity and financial).
Collects more detailed data about how the intervention
was delivered, differences between the intended
population and the population served, and access to the
intervention.
Answers questions such as: Was the intervention
implemented as intended? Did the intervention reach the
intended audience? What barriers did clients experience in
accessing the intervention?
Performance indicators:
Performance measures:
(Lagging)
Performance measures:
(Leading)
Perspectives:
Process Evaluation:
Page 142
Often used by donors to describe a group of projects that form
a recognisable group, for example country programme, or
forestry programme. Programme implies that the set of
projects for a coherent group of interrelated policies,
strategies, activities and investments.
Programmes comprise a set of projects. Together, the projects
give effect to the outcomes of the programme. The programme
is characterised by a routine set of activities, which ensures
that the projects are performing optimally. Project managers
communicate with each other, so that the overall objectives of
the programme are achieved.
Describes the stages of the life of a project: programming,
concept, design, monitoring, evaluation. The project cycle
provides a structure to ensure that stakeholders are consulted,
and defines the key decisions, information requirements and
responsibilities at each phase so that informed decisions can be
made at key phases in the life of a project.
A matrix of four rows by four columns. It sets out a hierarchy
of project objectives on four levels, defines indicators for each
of these levels (and their sources of data), and cites the
assumptions that are being made for the objectives on one
level to lead to those on the row above.
Columns are: objectives, objectively verifiable indicators,
sources of verification, and assumptions.
Programme:
Programmes:
Project Cycle:
Project Planning Matrix:
Page 143
The rows of the objectives are often; goal, purpose, outputs,
activities. A project planning matrix for a project is usually
designed as a result of a team-based workshop during which
problem trees and objective trees are identified.
A cluster of non-routine activities, which together achieve an
output. A project has a definite beginning, a clearly defined
end and brings together a unique team of people. A project
manager heads the team.
Refers to information provided in a non-numeric form, either
as verbal/written descriptions, photographs, focus group
interviews, maps, among others.
This consists of an aggregation of the resource requirements
for the various programmes/projects/initiatives identified.
The tangible products or services to be delivered by the project,
and for which project managers can be held directly
accountable for producing. The results are what the project will
have achieved on its completion.
A useful mnemonic for choosing or designing indicators.
S = specific (in terms of quality, quantity and time).
M = measureable (at acceptable cost). A = available (from
existing sources or without reasonable extra effort/cost).
R = relevant (to objectives and sensitive to change). T = timely
(produced regulary and up-to-date for it to be of use to
project managers). S-M-A-R-T is also often applied to
objectives..
Projects:
Qualitative data:
Resource plan:
Results:
S-M-A-R-T:
Page 144
Stakeholders: The universe of people with an interest in our products and
services (for example, Board of County).

Strategic objectives: Strategy components, which are action items that must be
done (for example, improve processing time, increase employee
skills, develop a new claims process).

Strategic outcomes: The impact the outputs have.

Strategies: How we intend to accomplish our vision and goals by our
approach, or game plan (for example, acquire additional
parkland, develop new faith-community and business
partnerships, reduce taxes).

Strategy drivers: The key imperatives - the determined vision of the
city, the political agenda of the city and the policy/legislative
requirements of local government for the city.

SWOT Analysis: Analysis of an organisation's (or a programme's or project's)
Strengths and Weaknesses, and the Opportunities and Threats
that it faces. A tool used for institutional appraisal.

Target: Desired level of performance for a performance measure (for
example, customer satisfaction target = 95%).

Targets: The actual value of an indicator of a result that is intended to
be achieved by a set date. Use of the term target implies that
indicator measurements will be used for performance
management.
Target group: The groups of people that are intended to ultimately benefit
from the results of that programme or project.

The operational plan: A rolling plan that lays out the various
programmes/projects/initiatives that have been identified to
deliver on the strategic outcomes and strategic objectives.

Vision: What we want to be in the future (for example, "Our vision is
to be the leading provider of ...").
BIBLIOGRAPHY
William C Bean, 1994
Boulmetis and Dutwin, 2000
Weiss, 1998 (Morris et al, 1987)
Babbie, E and Mouton, J (2001). The Practice of Social Research. Cape Town: Oxford University
Press.
Booth, W, Ebrahim, R and Morin, R (2001). Participatory Monitoring, Evaluation and Reporting.
An Organisational Development Perspective of South African NGOs. Braamfontein:
PACT/South Africa.
Herman, J L, Morris, L L and Taylor Fitz-Gibbon, C (1987). Evaluator's Handbook. Center for the
Study of Evaluation, University of California, Los Angeles. London: Sage Publications.
Mark, M M, Henry, G T and Julnes, G (2000). Evaluation: An Integrated Framework for
Understanding, Guiding, and Improving Policies and Programmes. San Francisco: Jossey-Bass.
Posavac, E J and Carey, R G (1997). Programme Evaluation - Methods and Case Studies
(5th Edition). New Jersey: Prentice Hall.
Ramashia, R and Rankin, S (1995). Managing Evaluation. A Guide for NGO Leadership.
Braamfontein: PACT/South Africa.
Rossi, P H and Freeman, H E (1998). Evaluation: A Systematic Approach (4th Edition). Newbury Park:
Sage Publications.
Shapiro, J (1996). Evaluation: Judgement Day or Management Tool? A Manual on Planning for
Evaluation in a Non-Profit Organisation. Durban: Olive.
Taylor Fitz-Gibbon, C (1996). Monitoring Education: Indicators, Quality and Effectiveness.
London: Cassell.
Worthen, B R, Sanders, J R and Fitzpatrick, J L (Eds) (1997). Programme Evaluation: Alternative
Approaches and Practical Guidelines (2nd Edition). USA: Longman Publishers.
Global AIDS Program (2003). Monitoring & Evaluation Capacity Building for Program Improvements:
Field Guide. December 2003.
PACT (2004). Monitoring Evaluation Reporting Handbook. South Africa.
APPENDICES

Appendix 1: TEMPLATE FOR A STRATEGIC PLAN

Outcome | Objective | Indicators/Measures | Targets (Y1 | Y2 | Y3)

Appendix 2: MONITORING TEMPLATE

Objective:
Activities | Tasks | Indicator | Results | Time-frame (Target | Actual) | Time-frame (Target | Actual)

Appendix 3: TEMPLATE FOR OPERATIONAL PLAN

Objective | Activities | Time-frame | Indicator | Results | Responsible | Deliverables | Targets | Actual
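The Target and Actual columns in the templates above lend themselves to a simple achievement calculation at reporting time. The sketch below is a minimal illustration under assumed field names and example figures; it is not a prescribed tool from this guide.

```python
def achievement_rate(target: float, actual: float) -> float:
    """Actual as a percentage of target (e.g. 95.0 means 95% achieved)."""
    if target == 0:
        raise ValueError("target must be non-zero")
    return round(actual / target * 100, 1)

# Example rows from a hypothetical monitoring sheet: (indicator, target, actual)
rows = [
    ("People counselled", 400, 380),
    ("Support groups formed", 10, 12),
]
for name, target, actual in rows:
    print(f"{name}: {achievement_rate(target, actual)}% of target")
# prints:
# People counselled: 95.0% of target
# Support groups formed: 120.0% of target
```

A rate below 100% flags an activity for follow-up in the next monitoring cycle, while a rate well above 100% may suggest the target was set too low.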