
Evaluation with an Impact

BSILI, June 2012

Christina A. Christie, Ph.D., University of California, Los Angeles
Michael Harnar, Ph.D.
tina.christie@ucla.edu | 310-825-0432

Goals for this Session


Ground evaluation as a practice in varying definitions of evaluation

Understand what it means for an evaluation to have impact and consider the various ways evaluation can influence decisions and activities

Identify strategies for promoting and increasing evaluation impact

What is Evaluation? (anyway!)

Research vs. Evaluation


Evaluation and research share many characteristics; however, they differ in the following ways:

Evaluation
Intended for: program decision making and rendering judgments
Stakeholders set the agenda
Primary audience for the study: program staff & stakeholders
Findings are: program & context specific; shared on an ongoing basis

Research
Intended for: adding to the existing knowledge base
Researcher sets the agenda
Primary audience for the study: the scientific/academic community
Findings are: intended to be broadly applicable or generalizable; shared at the end of the study

Evaluation Defined
"Program evaluation is the use of social research methods to systematically investigate the effectiveness of social intervention programs." (Rossi, Lipsey, & Freeman, 2004, p. 28)

"Evaluation refers to the process of determining the merit, worth, or value of something, or the product of that process." (Scriven, 1991, p. 139)

"The evaluation of educational and social programs aspires to be an institution for democratizing public decisions by making programs and policies more open to public scrutiny and deliberation." (House, 1993, p. 1)

"Program evaluation is the systematic collection of information about the activities, characteristics, and outcomes of programs to make judgments about the program, improve program effectiveness, and/or inform decisions about future programming." (Patton, 1997, p. 23)

What Can Evaluation Help Us Know?


Know-about problems: knowledge about health, wealth, and social inequities
Know what works: policies, programs, and strategies that bring about desired outcomes at acceptable costs and with relatively few unwanted consequences
Know-how (to put into practice): effective program implementation
Know-who (to involve): estimates of clients' needs as well as information on the key stakeholders necessary for potential solutions
Know-why: knowledge about why an action is required, e.g., the relationship between values and policy decisions

Why Evaluate?

People (stakeholders) naturally make evaluative judgments about programs & policies; these judgments are often based on limited information and are susceptible to biases.
Evaluators use a set of tools (research designs, methods) and roadmaps (evaluation theories) that offer stakeholders understanding of, and a basis for action on, programs and policies.

How Does One Evaluate?

There are many ways to approach an evaluation of a program or policy.
Roadmaps are also called theories (though they are really models, approaches, or frameworks) for how best to conduct an evaluation study.
There are many evaluation theories, and there is no one right theory or way of doing an evaluation.
An evaluation theory offers evaluators a conceptual framework that helps organize the procedures.
Theories are distinguished by what each holds to be the primary purpose of evaluation.

Impact as a Motivator for Evaluation


The organizational contexts in which stakeholders use information about programs & policies vary, so the demands on and approaches to evaluation should vary accordingly.

CDC Evaluation Model

www.cdc.gov/eval

What Does it Mean for an Evaluation to Have Impact?


Weiss (2012) Says:


Good
Cheap
Fast

Evaluation cannot be all three.


Evaluations with Impact

Evaluation: process & findings
Program activities: improvements in programs that are intended to promote social betterment
Evaluation use: the connection between the work of the evaluation and the program/policy activities that promote social betterment


Evaluation Impact: A Trade-Off

Challenge the status quo: potential for more fundamental change
Action orientation: explicit guidance and clear direction for feasible reform


Impact? What Do We Mean by Use?


Enlightenment/conceptual use: more general learning that takes place as a result of evaluation; systematic evidence makes its way into a person's knowledge base, acquired informally over time
Instrumental use: evaluation findings lead to specific actions, such as program continuation, expansion, revision, or termination
Process use: beyond findings use; what happens to people & organizations as a result of participating in evaluation activities
Evaluation capacity building: building capacity is the primary, intentional purpose of the evaluation


Examples of Use

Please describe some instances when you have experienced use.

What were the conditions? What was the impact? What was done to help promote use? By whom? How?


Strategies for Promoting and Increasing Evaluation Impact


Messy, Subtle, Complex

Studies have shown that evaluation use is not a straightforward, direct application of information by program leaders, practitioners, or policy makers. Nutley, Walter, & Davies characterize use as a complex process that is difficult to trace, resulting in equally subtle and complex outcomes (p. 33).


In Search of Information? Not Very Often (Decision Accretion)

People tend to make decisions based on what they already know (know about)
People value information that comes naturally
People seek evidence when:

issues are new & an orientation to the issue is needed
consequences are important (or expensive)
they feel underprepared
they seek authoritative support (their position may be challenged)


Discussion Example

In 1998, Mathematica was commissioned to conduct a congressionally mandated evaluation of the effectiveness of abstinence education programs. Programs receiving these funds taught abstinence from sexual activity outside of marriage as the expected standard for school-age children and could not endorse or promote contraceptive use.


Discussion Example

What were the nature and underlying theories of the abstinence education programs supported with Section 510 funding?
What were the implementation and operational experiences of the local communities and schools that received abstinence education funding?
What were the impacts of abstinence education programs? How successful were they in changing the knowledge, attitudes, and intentions of youth? How successful were they in reducing sexual activity among youth? How did they change the risk of pregnancy and STDs?

Discussion Example

The multi-year evaluation used an experimental design to estimate impacts on youth attitudes and behaviors. Although the evaluation produced numerous findings, two key findings were:

Youth in the program group were no more likely than those in the control group to have abstained from sex, and, among those who reported having had sex, they had similar numbers of sexual partners.
Contrary to concerns raised by critics of abstinence education, youth in the program group were no more likely to have unprotected sex than youth in the control group.


Discussion Example

"Health bill restores $250 million in abstinence-education funds"
By Rob Stein, Washington Post Staff Writer, Saturday, March 27, 2010


Discussion Example

But the effort came under mounting criticism when independent evaluations concluded that the approach was ineffective, and evidence began to emerge that the long decline in teen pregnancies was reversing. During the health legislation debate in the Senate Finance Committee, Sen. Orrin G. Hatch (R-Utah) added $50 million in annual funding for five years to states for abstinence programs -- a provision that survived the tumultuous process that ensued.


Weiss's I-I-I Analysis


Ideology: principles, values, political orientation
Interest: self-interest
Information: systematic evidence, which enters the mix and is often the least important of the three

Information can be influential when:


people hold conflicting ideologies & interests
new information clarifies interests & ideologies
interests & ideologies are at a stalemate


How Do We Foster Evaluation Use?

Research on use has identified factors that lead to use:

Relevance
Credible evidence
Political factors


Engage Stakeholders: Relevance

Fostering input, participation, and power-sharing among those who have an investment in the conduct of the evaluation and its findings; it is especially important to engage the primary users of the evaluation.
This helps increase the chances that the evaluation will be useful; it can also improve the evaluation's credibility, clarify roles and responsibilities, enhance cultural competence, help protect human subjects, and avoid real or perceived conflicts of interest.


Identifying Potential Stakeholder Audiences

Primary Audiences

Major decision makers, funders
Program staff, supervisors, managers, external constituents

Secondary Audiences

May have little or no daily contact with the program but may have some level of responsibility for it; may use results in some decision-making situations (e.g., program participants or their supervisors or managers)

Tertiary Audiences

More distanced from the program's inner workings; may be interested in the results (e.g., future program participants, the general public, special interest groups)


Critical Activities

Consult insiders (e.g., leaders, staff, clients, and program funding sources) and outsiders (e.g., skeptics)
Make a special effort to promote the inclusion of less powerful groups or individuals
Coordinate stakeholder input throughout the process of evaluation design, operation, and use
Avoid excessive stakeholder identification, which might prevent progress of your work

Credible Evidence

Credible evidence is information that stakeholders perceive as trustworthy and relevant.
Persons involved in an evaluation should strive to collect information that will convey a well-rounded picture of the program and be seen as credible by the evaluation's primary users.
When stakeholders find evaluation data credible, they are more likely to accept the findings and to act on the recommendations.


Credible Evidence

How evaluation questions are posed
Beliefs about truth, knowledge, and knowing
Sources of information
Conditions of data collection, reliability of measurement, validity of interpretations, and quality-control procedures
These may vary from context to context and from user to user


Credible Evidence
Indicators: How will general concepts regarding the program, its context, and its expected effects be translated into specific measures that can be interpreted? Will the chosen indicators provide systematic data that are valid and reliable for the intended uses?
Quality: Is the information trustworthy (i.e., reliable, valid, and informative for the intended uses)?
Quantity: What amount of information is sufficient? What level of confidence or precision is possible? Is there adequate power to detect effects? Is the respondent burden reasonable?

Fair Justification of Conclusions

Evaluation conclusions are justified when they are linked to the evidence gathered and judged against agreed-upon values or standards set by the stakeholders. Stakeholders must agree that conclusions are justified before they will use the evaluation results with confidence.


Critical Activities

Use culturally and methodologically appropriate methods of analysis and synthesis to summarize findings
Interpret the significance of results to decide what the findings mean
Make judgments according to clearly stated values that classify a result (e.g., as positive or negative, high or low)
Consider alternative ways to compare results (e.g., against program objectives, a comparison group, national norms, past performance, or needs)
Generate alternative explanations for findings and indicate why these explanations should be discounted
Recommend actions or decisions that are consistent with the conclusions
Limit conclusions to the situations, time periods, persons, contexts, and purposes for which the findings are applicable

Sharing Findings

An evaluation, however technically proficient it may be, is of little value if the findings are not shared in ways that connect with the natural sensemaking processes of relevant parties. This has implications not only for the dissemination and presentation of evaluation findings, but also for the potential array of evaluator roles related to evaluation influence. (Mark & Henry, 2012)


Mechanisms for Use

Dissemination: present information in a manner that is accessible to potential users
Interaction: develop stronger links between decision makers and evaluators
Social influence: rely on influential others (experts & peers) to inform people about the study & its value
Facilitation: enable the use of findings through technical, financial & organizational support


Mechanisms for Use

Don't assume that lessons learned in the course of an evaluation will automatically translate into informed decision making and appropriate action.
Deliberate effort is needed to ensure that evaluation processes and findings are used and disseminated appropriately.
Preparing for use involves strategic thinking and continued vigilance, both of which begin in the earliest stages of stakeholder engagement and continue throughout the evaluation process.


Critical Activities

Prepare stakeholders for eventual use by rehearsing throughout the project how different kinds of conclusions would affect program operations and/or decisions
Provide continuous feedback to stakeholders regarding interim findings, provisional interpretations, and decisions to be made that might affect the likelihood of use
Schedule follow-up meetings with stakeholders to facilitate the transfer of evaluation conclusions into appropriate actions or decisions
Disseminate both the procedures used and the lessons learned from the evaluation to stakeholders, using tailored communications strategies that meet their particular needs


Thank you for your time and attention!

