Evaluation Research

ABANDULA | YENEZA

Program Evaluation

Systematic collection of information about the activities, characteristics, and outcomes of programs for use by specific people to reduce uncertainties, improve effectiveness, and make decisions with regard to what those programs are doing and affecting. (Patton, 1986)

Evaluation

Determination of the worth or value of something, in this case of educational and social programs, policies, and personnel, judged according to appropriate criteria, with those criteria explicated and justified. (House, 1993: 1, paraphrasing Scriven, 1991a)

Evaluation Research

The systematic application of social research procedures for assessing the conceptualization, design, implementation, and utility of social intervention programs. (Rossi and Freeman, 1993: 5)

Program Evaluation Research

A type of action research that uses a set of techniques to determine the effectiveness of a social service or intervention program in meeting needs or solving problems
- used for decision-making purposes and for making recommendations
- contrasts with social research, which is generally used to build understanding and knowledge, as well as to inform practice
- uses behavioural research methods to assess the effects of interventions/programs designed to influence behavior

What is assessed in Program Evaluation Research?
- the effectiveness of human service organizations, in order to provide feedback to administrators about their services
- the needs, processes, outcomes, and efficiency of social services, in order to provide feedback

How are they assessed?
- Process: through observation
- Outcome: through other research designs, experiments, or evaluation methods

Program Evaluation Models

Objective-based Approach
- Objectives written by the client and evaluator depict the overarching purpose of the evaluation and clearly state the type of information to be collected
- Uses benchmarks (more specific objectives)

Goal-free Evaluation
- Guided by the perspective that many findings and outcomes do not fall within the goals and objectives that should have been established

Expertise-oriented Evaluation
- The expert uses data collected by the program
- consultation

Participant-oriented Evaluation
- Interested in whom the program serves
- Involves participants in the actual evaluation by letting them develop instruments, collect and analyze data, and report findings

Dimensions of Evaluation Research

Formative-Summative (Types of Evaluation Research)

Formative: process or progress evaluation
- done to provide feedback to people who are trying to improve something

Types of formative evaluation:
- needs assessment: determines who needs the program, how great the need is, and what might work to meet the need
- evaluability assessment: determines whether an evaluation is feasible and how stakeholders can help shape its usefulness
- structured conceptualization: helps stakeholders define the program or technology, the target population, and the possible outcomes
- implementation evaluation: monitors the fidelity of the program or technology delivery
- process evaluation: investigates the process of delivering the program or technology, including alternative delivery procedures

Summative: outcome or impact evaluation
- done to determine the overall effectiveness of a programme or project, with a view to making recommendations

Types of summative evaluation:
- outcome evaluations: investigate whether the program or technology caused demonstrable effects on specifically defined target outcomes
- impact evaluation: broader than outcome evaluation; assesses the overall or net effects, intended or unintended, of the program or technology as a whole
- cost-effectiveness and cost-benefit analysis: address questions of efficiency by standardizing outcomes in terms of their dollar costs and values
- secondary analysis: re-examines existing data to address new questions or use methods not previously employed
- meta-analysis: integrates the outcome estimates from multiple studies to arrive at an overall or summary judgement on an evaluation question (a minimal sketch follows this list)
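To make the meta-analysis idea concrete, here is a minimal sketch, not taken from the source, of fixed-effect inverse-variance pooling; the function name and the effect sizes and variances are hypothetical, chosen only for illustration:

```python
# Minimal fixed-effect meta-analysis sketch: each study's effect
# estimate is weighted by the inverse of its sampling variance
# (hypothetical numbers, for illustration only).

def pool_effects(effects, variances):
    """Return the pooled effect estimate and its variance."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    return pooled, 1.0 / sum(weights)

# Three hypothetical program evaluations reporting standardized
# mean-difference effects and their sampling variances.
effects = [0.30, 0.45, 0.10]
variances = [0.02, 0.05, 0.01]

pooled, var = pool_effects(effects, variances)
print(f"pooled effect = {pooled:.3f}, SE = {var ** 0.5:.3f}")
```

Inverse-variance weighting is one common pooling choice; random-effects models relax the assumption that every study estimates the same underlying effect.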
Formal-Informal
Formal
- needs to be systematic, because its findings will be scrutinized
- needs to be accurate, reliable, credible, and of use to those involved
Informal
- a universal and abiding human act
- scarcely separable from thinking and feeling
Case Particular-Generalization
The findings of an evaluation of a particular programme may apply only to that programme specifically, or they may apply to other programmes which share similar approaches and features. If the aim of an evaluation is to permit generalizations, there is a much greater need for careful controls and description to provide a secure basis for those generalizations.
Product-Process
Product-oriented evaluation
- provides information about WHAT effects are associated with a particular programme
Process-oriented evaluation
- yields information about WHY those effects occurred

Descriptive-Judgmental

Preordinate-Responsive
Preordinate
- focuses on the objectives of the programme; the evaluation is designed to assess the extent to which these objectives have been realized
Responsive
- permits some of the agenda to be set by those involved in the programme, and allows for issues to be explored as they emerge during the evaluation
- permits unanticipated outcomes to be identified

Holistic-Analytic

Internal-External
Internal
- has the advantage of allowing the developers to focus on what they see as the key features of the programme which need to be explored
External
- seen to be more objective

Steps in Planning an Evaluation
1. Identify stakeholders
2. Arrange preliminary meetings
3. Decide whether an evaluation should be done
4. Examine the literature
5. Determine the methodology
6. Present a written proposal

Methodologies
An evaluation strategy is composed of:
- an evaluation design
- a data collection method, and
- an analytical method
Evaluation Designs
- Experimental design: involves a treatment and a control group
- Quasi-experimental design: the treatment or control group is not chosen randomly
- Implicit design: only a treatment group exists; measurement is made after exposure to the program, and assumptions are made about conditions before the treatment
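As a worked illustration of how outcomes from an experimental design might be compared, here is a minimal sketch using only the Python standard library; the outcome scores and group sizes are hypothetical:

```python
# Comparing treatment and control outcomes from an experimental
# design with a difference in means and a Welch t statistic
# (hypothetical scores, for illustration only).
from statistics import mean, variance

treatment = [72, 85, 78, 90, 81, 76]  # outcome scores, program group
control = [70, 74, 69, 80, 73, 71]    # outcome scores, control group

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    return (mean(a) - mean(b)) / (va + vb) ** 0.5

print(f"difference in means = {mean(treatment) - mean(control):.2f}")
print(f"Welch t = {welch_t(treatment, control):.2f}")
```

In a quasi-experimental or implicit design the same arithmetic applies, but without random assignment the difference cannot be read as a clean causal effect.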
Data Collection Method
- Literature Search: evaluation of official documents, general research reports, published papers, books, or past evaluations
- File Review: review of general program files (specific to what is being evaluated), files on individual projects, clients, and participants, and administrative and financial records
- Observation
- Survey
- Expert Opinion: a source of data on functional aspects and for addressing evaluation issues, not part of the evaluation team
- Case Study
Analytical Method

Statistical Analysis: descriptive or inferential statistics

Statistical Models
- Cost-benefit: monetary (e.g. prices)
- Cost-effectiveness: non-monetary (e.g. lives saved)
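The distinction between the two models can be shown with a short sketch; all figures here are hypothetical, chosen only to illustrate the arithmetic:

```python
# Cost-benefit expresses both costs and outcomes in money;
# cost-effectiveness prices each unit of a non-monetary outcome
# such as lives saved (hypothetical figures).
program_cost = 500_000.0      # total program cost in dollars
monetary_benefit = 650_000.0  # benefits valued in dollars
lives_saved = 25              # non-monetary outcome

benefit_cost_ratio = monetary_benefit / program_cost  # > 1 favors the program
cost_per_life = program_cost / lives_saved

print(f"benefit-cost ratio = {benefit_cost_ratio:.2f}")
print(f"cost per life saved = ${cost_per_life:,.0f}")
```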

Analysis of Qualitative Information: use of the evaluator's professional judgment (summary or comparison)

Analysis of Further Program Results: other outcomes at the macro level, for example:
- a reading program that increased reading skills, leading to better employment
- program client benefits/operational outputs, leading to broader outcomes

Use of Models

- Simulation model: input data → mathematical model → output data; for example, simulating how a custom program with new questions, 11 seconds longer to administer than the previous program, affects the waiting time of clients (a minimal sketch follows this list)
- Input-output models: in economics, factors of production (resources) → goods and services (products)
- Microeconomic Analysis: behavior of consumers at the micro level (people, households, firms), because this is usually the target of the program; uses demand and supply curves as well as prices
- Macroeconomic Analysis: inflation, unemployment, GNP, interest rates, etc., as effects of fiscal or monetary policy contributed to by the program
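The simulation model mentioned above can be sketched briefly; the service times, the single-queue discipline, and how the 11-second figure enters the model are assumptions for illustration, not the source's model:

```python
# Simulation sketch: input data (service times) -> mathematical model
# (single first-come-first-served queue) -> output data (waiting time).
# The new custom program adds 11 seconds to each client interview.
import random

random.seed(1)

def total_wait(service_times):
    """Total waiting time when clients are served one at a time."""
    wait, finish = 0.0, 0.0
    for s in service_times:
        wait += finish   # each client waits for all earlier clients
        finish += s
    return wait

old = [random.uniform(240, 360) for _ in range(20)]  # old service times (s)
new = [s + 11 for s in old]                          # +11 s per client

extra = total_wait(new) - total_wait(old)
print(f"extra total waiting time for 20 clients: {extra:.0f} s")
```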

Planning Evaluation Cycle

Pitfalls
- Sources of resistance (Posavac and Carey, as cited in McBurney), for example the reluctance of public officials to seek evaluation of their own reforms
- Lack of random assignment, wherein not everyone participates in the program being assessed
- Use of a longitudinal design, which entails difficulty in controlling what occurs over time

References

Banister, P. (1994). Qualitative methods in psychology. Buckingham, England: Open University Press.

Bennett, J. (2003). Evaluation methods in research. New York: Continuum.

Clarke, A. (1999). Evaluation research: An introduction to principles, methods and practices. London, UK: Sage Publications.

Leary, M. (2001). Introduction to behavioral research methods. Boston: Allyn and Bacon.

Lodico, M., Spaulding, D., & Voegtle, K. (2006). Methods in educational research. San Francisco, CA: Jossey-Bass.

McBurney, D., Middleton, P., & McBurney, D. (1994). Research methods. Pacific Grove, Calif.: Brooks/Cole Pub. Co.

Powell, R. (2006). Evaluation research: An overview. Library Trends, 55(1), 102-120. doi:10.1353/lib.2006.0050

Silver, H. (2004). Evaluation research in education. Research in Education. Retrieved 22 August 2015, from http://www.edu.plymouth.ac.uk/RESINED/evaluation/index.htm

Stangor, C. (2011). Research methods for the behavioral sciences. Australia: Wadsworth Cengage Learning.

Trochim, W. (2006). Evaluation research. Social Research Methods. Retrieved 22 August 2015, from http://www.socialresearchmethods.net/kb/evaluation.php
