Evaluation and Program Planning 34 (2011) 1–12


Working with evaluation stakeholders: A rationale, step-wise approach and toolkit

John M. Bryson a,1, Michael Quinn Patton b,*, Ruth A. Bowman c,2

a Hubert H. Humphrey Institute of Public Affairs, 300E Humphrey Center, University of Minnesota, Minneapolis, MN 55455, United States
b Utilization-Focused Evaluation, 740 Mississippi River Blvd S., Suite 15-H, Saint Paul, MN 55116-1029, United States
c University of Minnesota, 15400 Edgewood Court, Eden Prairie, MN 55346, United States

* Corresponding author. Tel.: +1 651 690 3254. E-mail addresses: jmbryson@umn.edu (J.M. Bryson), MQPatton@Prodigy.net (M.Q. Patton), bowm0098@umn.edu (R.A. Bowman).
1 Tel.: +1 612 625 5888.
2 Tel.: +1 612 735 7872.

ARTICLE INFO

Article history: Received 26 January 2010; Received in revised form 30 June 2010; Accepted 5 July 2010.
Keywords: Stakeholders; Evaluation use; Intended users.

ABSTRACT

In the broad field of evaluation, the importance of stakeholders is often acknowledged and different categories of stakeholders are identified. Far less frequent is careful attention to analysis of stakeholders' interests, needs, concerns, power, priorities, and perspectives and subsequent application of that knowledge to the design of evaluations. This article is meant to help readers understand and apply stakeholder identification and analysis techniques in the design of credible evaluations that enhance primary intended use by primary intended users. While presented using a utilization-focused-evaluation (UFE) lens, the techniques are not UFE-dependent. The article presents a range of the most relevant techniques to identify and analyze evaluation stakeholders. The techniques are arranged according to their ability to inform the process of developing and implementing an evaluation design and of making use of the evaluation's findings.

© 2010 Elsevier Ltd. All rights reserved.

1. Introduction

Attention to stakeholders has gained prominence for both practical and ethical reasons.3 Attention to, and involvement of, key stakeholders is presumed to enhance the design and implementation of evaluations and the use of evaluation results in decision-making. Beyond that, it would appear to be difficult to adhere to the standards for judging evaluations put forward by the Joint Committee on Standards for Educational Evaluations (1994) – utility, feasibility, propriety, and accuracy – without careful attention to stakeholders. Similarly, it would be hard to follow the Guiding Principles for Evaluators of the American Evaluation Association (1995) without attending to stakeholders. The principles include: systematic inquiry, providing competent performance to stakeholders, integrity and honesty, respect for people, and responsibility for the general and public welfare.

3 The concept of "stakeholders" has assumed a prominent place in evaluation theory and practice in the last 20 years, and especially in the last decade. The word stakeholder originated in gambling in 16th-century England, where wagers were posted on wooden stakes. Later the term was broadened to refer to a neutral or trustworthy person who held a wager until the winner was decided. The term came to evaluation from management consulting, where it was first used in 1963 at the Stanford Research Institute as a way of describing people who were not stockholders in a company but "without whose support the firm would cease to exist" (Mendelow, 1987, p. 177). The term was given visibility by Freeman (1984) in his influential text, Strategic Management: A Stakeholder Approach. He defined a stakeholder as any group or individual who can affect or is affected by the achievement of the organization's objectives.

While most members of the evaluation community would agree that attention to stakeholders is important, they might not agree on how to define the term. The definition is consequential as it affects who and what counts (Alkin, Hofstetter, & Ai, 1998; Mitchell, Agle, & Wood, 1997). For example, some definitions of stakeholders focus on program stakeholders (e.g. Rossi, Lipsey, and Freeman, 2003, pp. 18, 435). But starting with a program focus seems unduly restrictive. We propose a broader and more inclusive approach and define stakeholders as individuals, groups, or organizations that can affect or are affected by an evaluation process and/or its findings. The definition is purposefully broad so that the full range of possible stakeholders is considered before narrowing the focus to the primary intended users of an evaluation. This broad approach thus helps support the process of narrowing the focus to those stakeholders who are going to be the major audience for a specific evaluation effort – the primary intended users – while also identifying other stakeholders and their interests, powers, perspectives, and other related information to inform the evaluation effort (Bryson & Patton, 2010; Patton, 2008).

Such inclusive thinking about stakeholders early on is consistent with (but broader than) the Encyclopedia of Evaluation definition of stakeholders as
"people who have a stake or a vested interest in the program, policy, or product being evaluated . . . and therefore also have a stake in the evaluation" (Greene, 2005, p. 397). Greene clusters stakeholders into four groups: "(a) people who have decision authority over the program, including other policy makers, funders, and advisory boards; (b) people who have direct responsibility for the program, including program developers, administrators in the organization implementing the program, program managers, and direct service staff; (c) people who are the intended beneficiaries of the program, their families, and their communities; and (d) people disadvantaged by the program, as in lost funding opportunities" (pp. 397–398). But others with a direct or indirect interest in program effectiveness may be considered stakeholders, including journalists, taxpayers, participants in "civil society," and members of the general public (Weiss, 1998, pp. 28–29). In other words, ordinary people of all kinds who are affected by programs and policies also can be thought of as stakeholders, a move that helps clarify what Leeuw (2002) has called the challenge of "bringing evaluation to the people" (pp. 5–6). Thus, stakeholders can include anyone who makes decisions or desires information about a program (or other evaluand) or is affected by the program or its evaluation.

However, stakeholders typically have diverse and often competing interests. No evaluation can answer all potential questions equally well. This means that some process is necessary for narrowing the range of possible questions to focus the evaluation, which in turn necessitates focusing on a narrow list of potential stakeholders that form the group of what Patton (2008) refers to as primary intended users. For this article, we often use the term key evaluation stakeholders to convey a similar concept, but with the intent of generalizing it to a broad range of evaluation approaches.

2. Why stakeholder analyses have become so important in problem solving, planning and decision-making

History offers important lessons about the consequences of inadequate stakeholder analysis and engagement. For example, Tuchman (1984) in her sobering history The March of Folly from Troy to Vietnam recounts a series of disastrous misadventures that followed in the footsteps of ignoring the interests of, and information held by, key stakeholders. She concludes "Three outstanding attitudes – obliviousness to the growing disaffection of constituents, primacy of self-aggrandizement, and the illusion of invulnerable status – are persistent aspects of folly." For more recent examples, consider Paul Nutt's Why Decisions Fail (2002), a careful analysis of 400 strategic decisions. Nutt finds that half of the decisions "failed" – that is, they were not implemented, only partially implemented, or otherwise produced poor results – in large part because decision makers failed to attend to interests and information held by key stakeholders. Other quantitative and qualitative studies report broadly similar findings with respect to the importance of paying attention to stakeholders in problem-solving, planning and decision-making (e.g. Alkin, Daillak, & White, 1979; Bryson & Bromiley, 1993; Bryson, Bromiley, & Jung, 1990; Burby, 2003; Clayson, Castaneda, Sanchez, & Brindis, 2002; Cousins & Shulha, 2006; Cousins & Whitmore, 2007; King, 2007; Margerum, 2002; Mohan, Bernstein, & Whitsett, 2002; Morris, 2002; Patton, 2008). In short, failure to attend to the interests, needs, concerns, powers, priorities, and perspectives of stakeholders represents a serious flaw in thinking or action that too often and too predictably leads to poor performance, outright failure, or even disaster.

Stakeholder analyses are now arguably more important than ever because of the increasingly interconnected nature of the world. Choose any public problem – economic development, economic collapse, poor educational performance, environmental resource management, crime, AIDS, natural disasters, global warming, terrorism – and it is clear that "the problem" encompasses or affects numerous people, groups, organizations, and sectors. In this world of shared power, no one is fully in charge; no organization "contains" the problem (Kettl, 2002). Instead, many individuals, groups and organizations are involved, affected, and/or have some partial responsibility to act. Figuring out what the problem is and what solutions might work in a shared-power world means that taking stakeholders into account is a crucial aspect of public problem solving (Bardach, 1998; Crosby & Bryson, 2005; Nutt, 2002). Beyond that, fashioning effective leadership and governance of policy domains becomes in large part the effective management of stakeholder relationships (Feldman & Khademian, 2002). Governmental and nonprofit reforms across the world are also prompting the need for more attention to stakeholder analyses (Braverman, Constantine, & Slater, 2004; Kettl, 2002; Mohan & Sullivan, 2007; Peters, 1996). The need to manage relationships has become such a part and parcel of the need to govern that Feldman and Khademian (2002) assert that "to manage is to govern," and it is extremely hard to imagine effectively managing – and evaluating the managing of – relationships without making use of carefully done stakeholder analyses. Thus, in the private, public and nonprofit sectors, we are moving into an era when networks of stakeholders are becoming at least as important, if not more so, than markets and hierarchies (Durland & Fredericks, 2005; Thiele, Devaux, Velasco, & Horton, 2007).

3. Why stakeholder analyses are important in evaluation

Evaluation practice involves linking technical rationality with political rationality in order "to mobilize support for substance" (Wildavsky, 1979, p. 1). To make this linkage, essential competencies for program evaluators include both technical skills and people skills (Ghere, King, Stevahn, & Minnema, 2006; King, Stevahn, Ghere, & Minnema, 2001). People skills include the capacity to work with diverse groups of stakeholders (SenGupta, Hopson, & Thompson-Robinson, 2004) and to operate in highly political environments. The importance of and need for technical, cultural, interpersonal, and political competencies intersects with increased attention to building evaluation capacity as a foundation for conducting useful evaluations (Compton, Baizerman, & Stockdill, 2002; Taut, 2007). Capacity building includes developing the essential competencies of evaluators – including for stakeholder analysis – and developing organizational cultures that support evaluative thinking and practice, as well as engaging stakeholders in ways that build their capacity to participate in and use evaluations. Building evaluation capacity through stakeholder engagement is a primary form of process use, in which evaluation has an impact on those involved in addition to and beyond just use of findings (Cousins, 2007; Patton, 2008, pp. 151–194).

The importance of stakeholder interests, views, influences, involvement, needs, and roles is incorporated into the work of the most prominent authors in the field of evaluation theory and practice (Alkin, 2004). Evaluators overwhelmingly acknowledge the importance of working with stakeholders. Preskill and Caracelli (1997) conducted a survey of members of the American Evaluation Association's Topical Interest Group on Use. They found that 85% rated as extremely or very important "identifying and prioritizing intended users of the evaluation" (p. 216). They also found that 80% of survey respondents agreed that evaluators should take responsibility for involving stakeholders in the evaluation processes. Fleischer (2007) asked the same question on a replication survey of American Evaluation Association members in 2006 and found that 98% agreed with this assertion. In rating the importance of eight different evaluation approaches,
"user-focused" evaluation was rated highest. Stakeholder involvement in evaluations has become an accepted evaluation practice.

Unfortunately, the challenges of identifying and including stakeholders in evaluation, capturing their perspectives, embracing their concerns, and accounting for political sensitivities are under-appreciated, particularly when evaluators are faced with budget and time constraints (Bamberger, Rugh, & Mabry, 2006; Bryson & Patton, 2010). The contribution this article makes to the evaluation literature is to help overcome these challenges by presenting a compilation of straightforward stakeholder identification and analysis tools that can be employed in a step-wise fashion throughout an evaluation process with minimal investment of time, effort, and materials. The result is an efficient approach to identifying stakeholders, clarifying their interests, assessing their power and its sources, and determining how they might best be engaged in the design and implementation of an evaluation and the implementation of resulting recommendations. We cannot offer a carefully done analytic case or cases demonstrating the effectiveness of the techniques as a set, since we are unaware of any such study. Instead, we present the set as our accumulated wisdom regarding ways of identifying and working with stakeholders and challenge ourselves and others to engage in the kind of careful research needed to determine which techniques work best, under which circumstances, and why. That said, we are able to offer some illustrative cases in this article.

The inclusion of stakeholders in evaluation design should be thought of in different terms than inclusion of stakeholders in program design or problem solving, though overlap is inevitable. For example, in a formative approach, the evaluation design is integral to the program design. In a developmental approach, the anticipated and unanticipated must be constantly reconciled. Both beg for adaptive thinking, and stakeholder considerations are a fundamental vehicle for adaptation, particularly as the stakeholders themselves may be involved in the program as well as the evaluation. A summative approach offers a more detached view of the program or process, meaning that the evaluation stakeholders, once identified, are in general less likely to change. In all contexts, evaluation stakeholders are intimately tied to the purposes of the evaluation, broadly categorized by Patton (2008) as: (1) making decisions about the program (i.e. to fund, continue, or abandon); (2) program improvement (i.e. identifying opportunities to expand, modify the process, or target a different audience); (3) adding knowledge to the field and informing decision-making (i.e. confirming assumptions, meta-evaluations); (4) supporting development of new innovations; or (5) accountability.

The article, while influenced by Patton's utilization-focused evaluation framework (2008), is organized around a more generic step-wise evaluation approach. The approach includes the following steps:

STEP 1 – Evaluation Planning (context, scope, and budget; the step includes gaining clarity about the "evaluation questions").
STEP 2 – Evaluation Design (including methods and measurement).
STEP 3 – Data Collection.
STEP 4 – Analysis (interpretation, judgments and recommendations).
STEP 5 – Decision-Making and Implementation (including presentation of findings and recommendations).

Attention to stakeholders is important throughout the evaluation process. Otherwise, there is not likely to be enough understanding, appreciation, information sharing, legitimacy or commitment to produce a credible evaluation that will ultimately be used. In other words, significant missed opportunities may result, even in the best of evaluation circumstances, when the perspectives of various stakeholders, cultural sensitivities, and political vulnerabilities are overlooked. An evaluation that fails to attend to key stakeholders and as a consequence is inaccurate, insensitive, and insufficient to make needed improvement is a waste of resources and could lead affected leadership groups (and funders) to avoid evaluation in the future. Note that what is being said does not imply that all possible stakeholders should be satisfied, or involved, or otherwise wholly taken into account, only that the key stakeholders must be, and that the choice of which stakeholders are key is inherently political (House & Howe, 2000; Julnes & Rog, 2007; Ryan & DeStefano, 2000; Stone, 2002), has ethical consequences (Cooper, 1998; House & Howe, 1999; Lewis & Gilman, 2005), and involves judgment (House, 1977, 1980; Kahneman & Tversky, 2000; Vickers & Vickers, 1998). The process does not, however, imply that stakeholders who fall into the category of less key should be ignored, for their perspectives may offer overlooked interests or relevant concerns that enhance the evaluation, even though they may not play a participatory role in the evaluation or ultimately be classified as a primary intended user.

In short, we go so far as to hypothesize that evaluation processes that employ a reasonable number of competently done stakeholder analyses are more likely to be used by intended users for their intended use than are evaluation processes that do not. At a minimum, stakeholder analyses should help evaluators determine who cares, who has influential resources, who will use the findings, and what they will use the findings for; and should establish stronger commitment to credible evaluation. Testing this hypothesis is beyond the scope of this article, but we do believe this article lays much of the groundwork for such tests.

The next section discusses a number of stakeholder identification and analysis techniques.

4. An array of techniques

This article presents twelve stakeholder identification and analysis techniques in enough detail for readers to get a good idea of what is involved in using them. The techniques are grouped by step in the evaluation process. All of the techniques are fairly simple in concept and rely on standard facilitation materials such as flip charts, marking pens, tape, and colored stick-on dots. On-line collaborative tools, such as wikis and blogs, and technology such as Skype could easily be employed for decentralized discussions and inclusion of remote participants. Using the techniques requires some time, effort, and informed participants – resources that are typically available in most evaluation settings. Table 1 summarizes the presentation of techniques.
Table 1
Evaluation and stakeholder identification and analysis techniques.

1. Evaluation Planning
  1.a. List Evaluation Stakeholders. Purpose: to develop an initial list of stakeholders and begin an iterative process of narrowing the field of key stakeholders. Reveals: a broad list of stakeholders.
  1.b. Basic Stakeholder Analysis Technique (Fig. 1). Purpose: to identify the interests of individual stakeholders in the program and their interests in the evaluation. Reveals: key evaluation issues.
  1.c. Power versus Interest Grids (Fig. 2). Purpose: to determine which players' interests and power issues must be considered. Reveals: players, context setters, subjects, and crowd; common ground across all or subsets of stakeholders; possible coalitions of support and/or opposition; strategies for changing views of stakeholders; ways to advance the interests of the powerless.
  1.d. Stakeholder Influence Diagrams. Purpose: to identify how stakeholders influence one another. Reveals: who influences whom among the stakeholders; who the most influential stakeholders are.
  1.e. Bases of Power – Directions of Interest Diagram (Fig. 3). Purpose: to identify the sources of a stakeholder's power, to clarify the stakeholder's interests or stakes, and to help the planning team identify common ground across all stakeholder groups. Reveals: the goals the stakeholder seeks to achieve or the interests they seek to serve, as well as the power bases on which the stakeholder can draw to pursue those interests.

2. Evaluation Design
  2.a. Participation Planning Matrix (Fig. 4). Purpose: to indicate the probable level of stakeholder participation and the relationship of the evaluator to the stakeholder. Reveals: expectations for involvement and action plans for communication.
  2.b. Purpose Network or Hierarchy. Purpose: to engage the expanded evaluation team in identifying purposes beyond the initial evaluation purpose and establishing the primary purpose or intended use of the evaluation. Reveals: a causal network or hierarchy of purposes indicating which purposes are prerequisite to or help achieve other purposes; the primary evaluation purpose.

3. Data Collection
  3.a. Stakeholder Role Plays. Purpose: to understand how different stakeholders respond to different methods, measurements, and designs. Reveals: insights into how other stakeholders think.

4. Analysis
  4.a. Evaluation Recommendation Support versus Opposition Grids (Fig. 5). Purpose: to identify which stakeholders are likely to support which recommendations and which are likely to oppose them. Reveals: recommendations that have a strong coalition of support; recommendations that may need to be changed in order to garner support.
  4.b. Stakes and Inclination Toward Evaluation (Fig. 6). Purpose: compares the importance of recommendations to stakeholders against their support, opposition, or neutrality.
  4.c. Recommendation Attractiveness versus Stakeholder Capability Grid (Fig. 7). Purpose: to identify recommendations that are likely to be implemented due to stakeholder capacity and those that will fail due to lack of capacity. Reveals: recommendations that have strong stakeholder capacity to implement.

5. Decision-making and Implementation
  5.a. Evaluation Recommendation Implementation Strategy Development Grid (Fig. 8). Purpose: to help stakeholders gain a clear picture of what will be required for implementation and to help develop action plans that will tap stakeholder interests and resources. Reveals: resources and strategies for successful implementation.

4.1. STEP 1 – evaluation planning

Initiators of the evaluation process should articulate what the purpose of the evaluation is, at least initially. This purpose should guide the first step in making choices about stakeholder analyses and who should do them. Deciding who should be involved, how, and when in doing stakeholder analyses is a key strategic choice. In general, people should be involved if they have information that cannot be gained otherwise, or if their participation is necessary to assure successful implementation of the evaluation built on the analyses. There is always a question of whether there can be too much or too little participation. And the general answer is yes to both, but the specific answer depends on the situation, and there are no hard and fast rules, let alone good empirical evidence, on when, where, how, and why to draw the line. There may be important trade-offs between early and later participation in analyses and one or more of the following: representation, accountability, analysis quality, analysis credibility, analysis legitimacy, the ability to act based on the analyses, or other factors, and these will need to be thought through. Fortunately, "the choices" actually can be approached as a sequence of choices, in which first an individual, who may be the evaluator, or a small evaluation planning group begins the effort, and then other participants are added later as the advisability of doing so becomes apparent.

Two possible starting points for identifying stakeholders are presented. The first is extremely simple, while the second builds on the first and therefore provides more information.

1.a. List evaluation stakeholders. This technique begins with an individual, who may be the evaluator, or a small evaluation planning group brainstorming the list of individuals or groups who care about or are affected by the evaluation. Those doing the brainstorming should realize that other stakeholders may emerge subsequently. Next, the stakeholders should be ranked according to their importance to the evaluation. When doing so, consider the stakeholder's power, legitimacy, and attention-getting capacity (Mitchell, Agle, & Wood, 1997).

This step is typically "back room" work. Necessary additional information inputs may be garnered through the use of interviews, questionnaires, focus groups, or other targeted information-gathering techniques in this and subsequent steps, or in conjunction with the other techniques outlined in this article. In this step it is important to make sure stakeholders are identified at the right level of aggregation, meaning at a level that makes sense from a strategic perspective (Eden & Ackermann, 1998). For example, usually "the government" is not a stakeholder, but some parts of it might be, such as the city council or the police force. "The government" thus is typically a kind of "phantom stakeholder" (Beech & Huxham, 2003) and should be avoided. You should be able to find the "voice" of each stakeholder that is identified, be it an actual individual or a representative of the group.
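For teams that prefer to keep the brainstormed list in a reusable form, the following is a minimal illustrative sketch in Python of step 1.a. The stakeholder names and the 1–5 scores are hypothetical, and the three ranking attributes simply mirror the power, legitimacy, and attention-getting capacity criteria cited above from Mitchell, Agle, and Wood (1997).

```python
# Illustrative sketch of technique 1.a (hypothetical names and scores).
# Each stakeholder is scored 1-5 on the three ranking criteria named in the
# text: power, legitimacy, and attention-getting capacity.
from dataclasses import dataclass

@dataclass
class Stakeholder:
    name: str
    power: int        # 1 = little power, 5 = great power
    legitimacy: int   # 1 = weak claim, 5 = strong claim
    urgency: int      # 1 = easily ignored, 5 = commands attention

    def salience(self) -> int:
        # Simple additive ranking score; a team could weight these differently.
        return self.power + self.legitimacy + self.urgency

brainstormed = [
    Stakeholder("Program funder", power=5, legitimacy=5, urgency=4),
    Stakeholder("Program manager", power=4, legitimacy=5, urgency=5),
    Stakeholder("Front-line staff", power=2, legitimacy=4, urgency=3),
    Stakeholder("Program participants", power=1, legitimacy=5, urgency=2),
    Stakeholder("Local journalists", power=3, legitimacy=2, urgency=2),
]

# Rank the brainstormed list by importance to the evaluation.
for s in sorted(brainstormed, key=lambda s: s.salience(), reverse=True):
    print(f"{s.salience():2d}  {s.name}")
```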
1.b. Basic stakeholder analysis. This technique is an adaptation of a technique described in Bryson (2004a, 2004b). It offers a quick and useful way of identifying each stakeholder and comparing and contrasting their interest(s) in the program versus their interest(s) in the evaluation. A separate sheet is prepared for each program and/or evaluation stakeholder. Colored stick-on dots can be used to assess how well the stakeholder (not the evaluator) probably thinks the program does in terms of satisfying the stakeholder's wishes. Green dots indicate the program does well against a wish, yellow dots indicate the program does a fair job, and red dots indicate it does a poor job (Fig. 1).

Fig. 1. Basic stakeholder analysis. Source: Adapted from Bryson (2004a, 2004b).

Bryson (2004a) describes how this technique was used to evaluate the performance of a state department of natural resources in the United States, because it showed participants how existing strategies ignored important stakeholders – who refused to be ignored – as well as what might be done to satisfy the stakeholders. The evaluation results were used to successfully bring about major changes in the organization, which included funding increases, increased end-user satisfaction, and increased political legitimacy.
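A per-stakeholder sheet of this kind can also be recorded electronically. The sketch below is illustrative only; the stakeholder, the listed wishes, and the green/yellow/red ratings are hypothetical and simply follow the colored-dot convention described above.

```python
# Illustrative sketch of technique 1.b: one "sheet" per stakeholder comparing
# interests in the program with interests in the evaluation, plus a
# green/yellow/red judgment of how well the program satisfies each wish.
# All names, wishes, and ratings are hypothetical.
sheet = {
    "stakeholder": "Program participants",
    "interests_in_program": ["Timely services", "Respectful treatment"],
    "interests_in_evaluation": ["A voice in judging quality", "No extra burden"],
    # Ratings reflect the stakeholder's likely view, not the evaluator's.
    "ratings": {
        "Timely services": "yellow",     # fair job
        "Respectful treatment": "green", # does well
    },
}

def summarize(sheet: dict) -> None:
    print(f"Stakeholder: {sheet['stakeholder']}")
    for wish, color in sheet["ratings"].items():
        print(f"  {wish}: {color}")
    reds = [w for w, c in sheet["ratings"].items() if c == "red"]
    if reds:
        print("  Likely evaluation issues:", ", ".join(reds))

summarize(sheet)
```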
Examples of stakeholders that may have a distinct interest in the evaluation, and thus be categorized as evaluation stakeholders, could include past, current, and future program participants, employers or associates of program participants, and developers of similar, complementary, or competing programs, among others. It is also important to consider those stakeholders that may have a negative influence on the evaluation for any variety of reasons, including opposition to the use of resources for evaluation, feeling threatened by the potential outcomes, or feeling anxiety about other aspects of the evaluation. Ignoring such stakeholders has the potential to hinder progress and derail any positive outcomes.

1.c. Power versus interest grids. Power versus interest grids are described in detail by Eden and Ackermann (1998, pp. 121–125, 344–346; see also Patton, 2008, p. 80) (see Fig. 2). These grids array stakeholders on a two-by-two matrix – usually using Post-It® notes on a flipchart sheet – where the dimensions are the stakeholder's interest in the evaluation or issue at hand, and the stakeholder's power to affect the evaluation. Interest here means interest in a political sense, or having a political stake, as opposed to simple inquisitiveness. Each of the dimensions should be thought of as a range, i.e. from low to high interest and from low to high power. Nonetheless, it is often helpful to think of stakeholders as generally falling into four categories:

- Players – have both an interest and significant power. Players have high potential to be primary intended users. These are often key stakeholders who are in a prime position to affect use, including using the evaluation themselves or affecting how others use it.
- Subjects – have an interest but little power. It may be important to support and enhance Subjects' capacity to be involved, especially when they may be affected by findings, as might be the case with program participants.
- Context Setters – have power but little direct interest. It may be important to increase the interest of Context Setters in the evaluation if they are likely to pose barriers to use through their disinterest.
- Crowd – consists of stakeholders with little interest or power. The Crowd may need to be informed about the evaluation and its findings. On the other hand, if communication is badly done, controversy may quickly turn this amorphous "crowd" into a very interested mob.

Fig. 2. Power versus interest grid. Source: Eden and Ackermann (1998, p. 122).

Construct a power versus interest grid by first placing the name of each of the evaluation stakeholders identified in 1.a and 1.b on a separate Post-It® note. Then locate each Post-It® note in the appropriate place on the power versus interest grid. The scales are not absolute, but instead are relative, so that, for example, within the Player category there will be some players who are more powerful and/or have a stronger interest than others.

Power versus interest grids typically help determine which players' interests and power bases must be taken into account in order to produce a credible evaluation. More broadly, they also help highlight coalitions to be encouraged or discouraged, what behavior should be fostered, and whose "buy-in" should be sought or who should be co-opted, in part by revealing which stakeholders have the most to gain (or lose) and those who have the most (or least) control over the direction of the evaluation. This information provides a helpful basis for assessing the political, technical, practical, and other risks as the evaluation goes forward. Finally, they may provide some information on how to convince stakeholders to change their views. Interestingly, the knowledge gained from the use of such a grid can be used to help advance the interests of the relatively powerless Subjects (Bryson, Cunningham, & Lokkesmoe, 2002).
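The four categories amount to a simple classification rule over the two dimensions. The sketch below is illustrative; the 0–10 scores and the midpoint cut-off are hypothetical conventions standing in for the relative judgments made on the flipchart.

```python
# Illustrative sketch of a power versus interest grid (technique 1.c).
# Scores and the cut-off are hypothetical; in practice the judgments are
# relative and made by moving Post-It notes around a flipchart.
def quadrant(power: float, interest: float, cutoff: float = 5.0) -> str:
    if power >= cutoff and interest >= cutoff:
        return "Player"          # high power, high interest
    if power < cutoff and interest >= cutoff:
        return "Subject"         # high interest, little power
    if power >= cutoff and interest < cutoff:
        return "Context setter"  # power but little direct interest
    return "Crowd"               # little interest or power

stakeholders = {          # name: (power, interest) on a 0-10 scale
    "Program funder": (9, 8),
    "Program participants": (2, 7),
    "City council": (8, 3),
    "General public": (2, 2),
}

for name, (p, i) in stakeholders.items():
    print(f"{name}: {quadrant(p, i)}")
```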
1.d. Stakeholder influence diagrams. Stakeholder influence diagrams indicate how the stakeholders on a power versus interest grid influence one another. The technique is taken from Eden and Ackermann (1998, pp. 349–350; see also Bryson et al., 2002) and builds on the power versus interest grid.

A stakeholder influence diagram is constructed as follows. Using the power versus interest grid developed in step 1.c, discuss how each evaluation stakeholder influences the other evaluation stakeholders. Draw lines with arrows to indicate the direction of their influence. While two-way influences are possible, an attempt should be made to identify the primary direction in which influence flows between evaluation stakeholders.
The diagrams may be used to further assess the power of stakeholders and to determine which stakeholders are the most influential and/or more central than others in the network.
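Because the influence diagram is, in effect, a directed graph over the stakeholders already on the grid, it can be recorded and queried in a few lines. The sketch below uses hypothetical stakeholders and treats the count of outgoing influence arrows as a rough indicator of who is most influential; a formal network centrality measure could be substituted.

```python
# Illustrative sketch of technique 1.d: record the primary direction of
# influence between stakeholders as directed edges, then count arrows to get
# a rough sense of who is most influential. Names are hypothetical.
from collections import defaultdict

influences = [                 # (influencer, influenced)
    ("Program funder", "Program manager"),
    ("Program manager", "Front-line staff"),
    ("Front-line staff", "Program participants"),
    ("Advocacy group", "Program funder"),
    ("Advocacy group", "Local journalists"),
]

out_degree, in_degree = defaultdict(int), defaultdict(int)
for src, dst in influences:
    out_degree[src] += 1
    in_degree[dst] += 1

names = set(out_degree) | set(in_degree)
for name in sorted(names, key=lambda n: out_degree[n], reverse=True):
    print(f"{name}: influences {out_degree[name]}, influenced by {in_degree[name]}")
```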
1.e. Bases of power – directions of interest diagrams. This technique takes the analysis in the power versus interest grid to a deeper level by identifying: (a) the sources of different evaluation stakeholders' power, i.e. where the power comes from; and (b) what the actual interests or goals are of the different evaluation stakeholders. The technique is an adaptation of Eden and Ackermann's "star diagrams" (1998, pp. 126–128, 346–349; see also Bryson et al., 2002). A diagram of this kind indicates the sources of power available to the evaluation stakeholder, as well as the goals or interests the stakeholder seeks to achieve or serve. Power can come from access to or control over various resources, such as expertise, money and votes, formal authority, network centrality, or informal charisma; or from access to or control over various sanctions, such as regulatory authority or votes of no confidence (Eden & Ackermann, 1998, pp. 126–127). Directions of interest indicate the aspirations or concerns of the stakeholder. When used in the context of evaluation, the diagrams focus on the evaluation stakeholder's bases of power and directions of interest in relation to the evaluation; that is, the analyses seek to identify the powers that might affect progress and completion of the program evaluation and the specific directions the evaluation might take (Fig. 3).

Fig. 3. Bases of power – directions of interest diagram, with examples of power bases and interests. Source: Adapted from Eden and Ackermann (1998, p. 127) and Bryson et al. (2002).

There are two reasons for constructing bases of power – directions of interest diagrams. The first is to help the planning team find the common ground – especially in terms of interests – across all or most of the evaluation stakeholder groups. In other words, after exploring the power bases and interests of each stakeholder, the planning group will be in a position to identify commonalities across the stakeholders as a whole, or across particular subgroups. Second, the diagrams are intended to provide background information on each evaluation stakeholder in order to know how to tap into their interests or make use of their power to advance the evaluation's credibility and purpose.
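The same diagrams can be kept as simple records per stakeholder, which makes the search for common ground straightforward. The power bases and interests below are hypothetical illustrations of the kinds of resources, sanctions, and aspirations listed above.

```python
# Illustrative sketch of technique 1.e: for each stakeholder, record the bases
# of power (resources and sanctions they control) and the directions of
# interest (aspirations or concerns) relevant to the evaluation.
# All entries are hypothetical.
from collections import Counter

diagrams = {
    "Program funder": {
        "power_bases": ["money", "formal authority"],
        "interests": ["demonstrated outcomes", "accountability"],
    },
    "Program participants": {
        "power_bases": ["first-hand information", "votes"],
        "interests": ["service quality", "being heard"],
    },
    "Program manager": {
        "power_bases": ["expertise", "network centrality"],
        "interests": ["program improvement", "accountability"],
    },
}

# Look for common ground: interests shared by more than one stakeholder group.
shared = Counter(i for d in diagrams.values() for i in d["interests"])
common_ground = [interest for interest, n in shared.items() if n > 1]
print("Possible common ground:", common_ground)
```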
Step 1 – Evaluation Planning Summary. Five stakeholder identification and analysis techniques have been presented as part of the evaluation planning phase. Note that there are overlapping activities in these techniques, as each tends to build on previous work. Whether used sequentially or in combination, the intent of these techniques is to provide the basis for selection and inclusion of evaluation stakeholders in the next step of a generic evaluation process – evaluation design. Some of the evaluation stakeholders that have emerged will be both logical participants and accessible – if not as ongoing, active members of an evaluation team, then at least as reliable sources of information, feedback, and advice. In some cases, evaluation planners will have been in contact with particular stakeholders to gather information about their views. In other cases, the identification and analysis process may have involved making educated guesses without the direct involvement of specific evaluation stakeholder(s). A judgment will be needed about whether these guesses will need to be verified with the stakeholders.

After using the above techniques, it should be possible to come fairly close to deciding who the "key" stakeholders are, including
who the primary intended users are. Patton (2008, pp. 79–80) suggests persons selected as primary intended users should:

- Have an interest in and commitment to using evaluation findings, either because they themselves will be making decisions using the findings, or because they are closely connected to those who will be using the evaluation findings.
- Be available, since interest must be joined with engagement, which means making time to participate in evaluation decision-making as part of the primary intended users group.
- Have the capacity to contribute to the evaluation (or a willingness to participate in capacity building as part of the process); capacity means they understand enough about methods and data to help make the evaluation credible and relevant as well as useful, which also means they can participate in trade-off negotiations in choosing among options.
- Bring a perspective that will contribute to the diversity of perspectives and views that surround the evaluation and should be represented as determined by the stakeholder analyses.
- Have the interpersonal skills needed to effectively participate in the group process; in other words, they must "play well with others," which means that it is important to avoid, as much as possible, people who are divisive, combative, and antagonistic to others.

4.2. STEP 2 – evaluation design

The evaluation planning step should generate healthy discussion and reveal a list of evaluation stakeholders that should be included in the more public beginning of the evaluation effort, or the evaluation design phase. The involved group probably should include those evaluation stakeholders that cannot be ignored due to high power and interest. The initial planning may also reveal evaluation stakeholders that will be affected by the evaluation results (positively or negatively), yet have little power or articulated interest. They may not actually know that they should care about the evaluation. Given the evaluation's purpose, it may be important to find ways to give these stakeholders a voice and/or enhance their perceived interest.

As the evaluation planning team moves to the evaluation design step, continuing the stakeholder analysis process involves assembling – either physically or virtually – those identified as key evaluation stakeholders or their representatives. This expanded group will use as many of the techniques already discussed as needed (i.e. the basic analysis technique, power versus interest grid, stakeholder influence diagram, and/or bases of power – directions of interest diagrams) to educate themselves and bring everyone up to speed.

The group should also think carefully about other stakeholders that may not have been included in the group, but should be. Again, the group should consider actual or potential stakeholders' power, legitimacy, and attention-getting capacity (Mitchell, Agle, & Wood, 1997). The group should consider the positive and negative consequences of involving – or not involving – other stakeholders or their representatives. This includes thinking about ways they might be engaged in the process as well as ways they may hamper the process.

Following these broader discussions, it should be possible to finalize who the key evaluation stakeholders are and how they might contribute to the evaluation effort without compromising the credibility of the evaluation. For example, some may be identified as sponsors and champions, members of a coordinating group, members of various advisory groups, a resource person or group, or members of the final evaluation team (Bryson, 2004a, pp. 73–75; Friend & Hickling, 1997, pp. 257–265; Patton, 2008, pp. 69–75). The evaluation team is the group most likely to use the stakeholder analysis techniques described below, but other groups may be asked to use one or more of the techniques as well. In addition, as part of these discussions or following them, it is important to clarify, confirm, and adjust assumptions made in the prior planning phase.

Note that this staged process embodies a kind of technical, political, and ethical rationality. The process is designed to gain needed information, build political acceptance, and address some important questions about legitimacy, representation, and credibility. Stakeholders are included when there are good and prudent reasons to do so, but not when their involvement is impractical or unnecessary.

2.a. Participation planning matrix. The participation planning matrix adapts contributions from the International Association for Public Participation, specifically their notion of a spectrum of levels of public participation, and the steps used in this article to organize techniques. The levels of participation range from not engaging a stakeholder at all through to giving specific stakeholders final decision-making authority. The category of non-engaged includes identified stakeholders who for justifiable reasons will be considered non-participants. Each level implies a different kind of promise from the evaluator to the stakeholder – implicitly if not explicitly (see Fig. 4).

Fig. 4. Participation planning matrix: differing levels of participation and accompanying promises from the evaluator to the stakeholder. Source: Adapted from Bryson (2004a, p. 33) and from the International Association for Public Participation's Public Participation Spectrum of levels of public participation (http://www.iaps.org/practioner tools/spectrum.html).

The matrix prompts the evaluation team to clarify how different evaluation stakeholders should hold different levels of influence over the course (steps) of an evaluation, with appropriate accompanying promises made to the stakeholders. The participation planning matrix can be used to create a sort of evaluation contract with selected stakeholders who are important to engage; the contract should confirm the level of commitment and participation.
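A participation planning matrix is essentially a small table of evaluation steps by stakeholders. The sketch below is illustrative; the participation levels are modeled loosely on the International Association for Public Participation spectrum the matrix adapts (plus a "not engaged" category), and the stakeholders and assignments are hypothetical.

```python
# Illustrative sketch of technique 2.a: a participation planning matrix that
# records, for each evaluation step, the level of participation promised to
# each stakeholder. Levels and assignments are hypothetical.
LEVELS = ["not engaged", "inform", "consult", "involve", "collaborate", "empower"]
STEPS = ["planning", "design", "data collection", "analysis", "decision-making"]

matrix = {
    # stakeholder: {evaluation step: participation level}
    "Program funder":       {"planning": "collaborate", "analysis": "involve",
                             "decision-making": "empower"},
    "Program manager":      {step: "collaborate" for step in STEPS},
    "Program participants": {"design": "consult", "data collection": "involve",
                             "decision-making": "inform"},
    "General public":       {"decision-making": "inform"},
}

# Sanity check: every promise uses a recognized level.
for promises in matrix.values():
    assert all(level in LEVELS for level in promises.values())

def level(stakeholder: str, step: str) -> str:
    # Anything not explicitly promised defaults to "not engaged".
    return matrix.get(stakeholder, {}).get(step, "not engaged")

for s in matrix:
    print(s, "->", {step: level(s, step) for step in STEPS})
```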
2.b. Purpose networks. Another technique that is quite useful when designing an evaluation is the purpose network, or purpose hierarchy. The purpose network builds on earlier evaluation planning work and seeks the input of the recently identified key evaluation stakeholders. (Note that evaluation planners may wish to use it during Step 1 as well to gain a clearer initial understanding of purpose.)

A purpose network indicates the various interrelated purposes that the evaluation might serve. The technique is adapted from Nadler and Hobino (1998) and Bryson, Ackermann, and Eden (2007). The process of creating a purpose network first requires the evaluation team to recall the original purpose of the evaluation that was identified in the first stage of evaluation planning. Any newly engaged participants are also encouraged to reflect on the initial statement of potential purposes of the evaluation. The group should use a flipchart to display the original purpose(s) written on a Post-It® note attached to the sheet. The group then brainstorms additional purposes (goals, aims, outcomes, indicators, or aspirations) and writes them separately on additional Post-It® notes and attaches them to the flipchart sheet. The full array of purposes should then be linked with arrows in a causal fashion; i.e. arrows should indicate how one purpose helps lead to or fulfill a subsequent purpose(s).

Once the network (or hierarchy) is created, the group should decide which purposes are the actual primary purpose(s) of the evaluation. Note that the primary purpose may end up being different from what group members or other stakeholders originally identified. It is also possible the purpose(s) may be changed somewhat based on further stakeholder analyses.
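Because the purposes are linked by arrows meaning "helps lead to or fulfill," the network can be recorded as a small directed graph, and purposes with no outgoing arrows are natural candidates for the primary purpose. The purposes and links below are hypothetical.

```python
# Illustrative sketch of technique 2.b: purposes linked by arrows meaning
# "helps lead to or fulfill". Purposes that no arrow leaves are candidate
# primary purposes of the evaluation. All entries are hypothetical.
links = [
    ("document service quality", "improve the program"),
    ("identify implementation barriers", "improve the program"),
    ("improve the program", "decide whether to continue funding"),
    ("build staff evaluation capacity", "improve the program"),
]

purposes = {p for edge in links for p in edge}
has_outgoing = {src for src, _dst in links}

candidate_primary = sorted(purposes - has_outgoing)
print("Candidate primary purpose(s):", candidate_primary)
# The group still decides which purpose is primary; the network just makes
# the prerequisite relationships explicit.
```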
STEP 2 – evaluation design – summary. This concludes the discussion of the first two steps in the evaluation process. By the end of this step, an evaluation design should be created that will allow the evaluation to fulfill its intended use by its intended users.
Note that since the use of stakeholder identification and analysis techniques is always context-dependent, there are no hard and fast rules about when to use and when not to use any particular technique. Note as well that the time invested in stakeholder analysis in each step is not likely to be prohibitive, and indeed is highly likely to be cost-beneficial. Using the techniques involves fostering a structured dialogue that typically reveals insights that will improve the evaluation and that are not likely to be revealed otherwise. Use of the techniques will also build individual and group capacity for further stakeholder analysis exercises.

4.3. STEP 3 – data collection

The evaluation design will include methods, measures, and data collection choices that are specific to the evaluation approach chosen. To the extent that the purpose network (2.b) described above has revealed new or modified evaluation purposes, the evaluation design should be reviewed in relation to those choices.

3.a. Stakeholder role plays. If any key (or somehow significant) evaluation stakeholders are unable to fully participate in finalizing the design, one tool that may be helpful in understanding how they might respond is the use of stakeholder role plays. This technique can be used to assess how different stakeholders might respond to different methods, measures, and other design choices, including different approaches to data collection and organization. Role plays can also be useful in anticipating the response to evaluation recommendations when used in conjunction with the support versus opposition grid technique (4.a), which is discussed in a later section. In other words, stakeholder role plays can be very useful in Steps 2, 3 and 4.

Eden and Ackermann (1998, pp. 133–134) show how role plays, in which different members of an evaluation team play the role of different stakeholders, can be used to develop evaluation approaches that are likely to address stakeholder interests, and can help ensure effective evaluation implementation and use of results. Role plays have the special benefit of enhancing the evaluation group's capacity to understand how other stakeholders think. Role plays build on the information revealed in previous analyses. Of course, there are always dangers in imagining what the views of others are, rather than engaging with them directly, so the evaluation team will have to assess the risks and do what they can to mitigate them if necessary.
A stakeholder role play involves having each member of the evaluation team review the results of previous analyses, and particularly the (1.e) bases of power – directions of interest diagrams. After each team member has assumed the role of a different stakeholder, the following questions are asked and answered: (1) how would I react to this option? and (2) what could be done that would increase my support or decrease my opposition?

A special virtue of this exercise is that it may bring out and serve to protect the interests of stakeholders who are under-represented or difficult to access.

4.4. STEP 4 – analysis

Once the data have been collected, they must be interpreted, judgments made of various sorts, and recommendations prepared. Three techniques are suggested for use in the analysis phase: (4.a) evaluation recommendation support versus opposition grids, (4.b) mapping stakeholders' stakes and inclinations toward the evaluation's recommendations, and (4.c) recommendation attractiveness versus stakeholder capability grids.

4.a. Evaluation recommendation support versus opposition grids. These grids indicate which stakeholders are likely to support particular recommendations and which are likely to oppose them. Nutt and Backoff (1992) developed the technique for planning purposes; here it is adapted to assess the viability of evaluation recommendations (see Fig. 5). The steps are simple. For each recommendation, write the names of the key evaluation stakeholders on separate Post-It® notes. On a chart similar to the example in Fig. 5, plot where, in the judgment of the evaluation team, each stakeholder should be positioned in terms of likely support for, or opposition to, the recommendation. Discuss and move the cards around until the group agrees with the arrangement. Repeat the exercise for each recommendation. Then step back and reflect on which recommendations have the needed support. To the extent there is stakeholder opposition to what is otherwise seen as a desirable recommendation, the team may want to assess how the stakeholders in question might be influenced to support, or at least not oppose, the recommendation. Alternatively, the team may reconsider the recommendation to see if stakeholder support can be gained without sacrificing the important merits of the recommendation.

Fig. 5. Evaluation recommendation support versus opposition grid. Source: Crosby, Bryson, and Anderson (2003); adapted from Nutt and Backoff (1992, p. 198).
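For teams that want a record of the final Post-It arrangement, the judged positions for one recommendation can be captured as shown below. The stakeholders are hypothetical, and the -5 (strong opposition) to +5 (strong support) scale is an assumed convention standing in for the physical chart.

```python
# Illustrative sketch of technique 4.a for a single recommendation.
# Each stakeholder gets a judged position from -5 (strong opposition) to
# +5 (strong support); scale and entries are hypothetical.
recommendation = "Expand the program to a second site"

positions = {
    "Program funder": 4,
    "Program manager": 5,
    "Front-line staff": -2,    # worried about workload
    "City council": 1,
    "Advocacy group": 3,
}

supporters = [s for s, v in positions.items() if v > 0]
opponents = [s for s, v in positions.items() if v < 0]

print(recommendation)
print("  Support:", ", ".join(supporters))
print("  Opposition:", ", ".join(opponents))
# Opposition from otherwise important stakeholders signals either a need to
# influence them or a reason to reconsider the recommendation.
```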
4.b. Stakes and inclination toward the evaluation. A somewhat more elaborate tool for assessing support for or opposition to evaluation recommendations is shown in Fig. 6. The tool identifies the level of importance of the recommendation to the stakeholder, on the one hand, against the support, opposition, or neutrality of the stakeholder, on the other hand (Patton, 2008). This tool is used in a way similar to the recommendation support versus opposition grid (Fig. 5).

Fig. 6. Mapping stakeholders' stakes and inclinations toward the evaluation's recommendations.

4.c. Recommendation attractiveness versus stakeholder capability grid. This is another helpful tool to use prior to making decisions about recommendations. The tool helps with assessing which recommendations are likely to be implemented successfully because they match stakeholder capacity – and those that are likely to fail due to lack of stakeholder capacity (see Fig. 7). The grid is adapted from Bryson, Freeman, and Roering (1986, pp. 73–76; see also Bryson, 2004a).

In order to make effective use of this technique, the evaluation team will need to develop the criteria to assess the attractiveness of a recommendation and the capabilities necessary for successful implementation. Note that resource requirements and resource availability are key components of "capability" – and while some evaluation teams may have already gathered the information needed to estimate the various costs of implementation, some may be in a position to list only the components. In either case, inclusion of needed resource requirements and availabilities is a key consideration of the capability assessment.

Each recommendation should be listed on a Post-It® note and placed on the grid in the appropriate position after considering both the recommendation's attractiveness and the various stakeholders' capacities to implement it. Discuss results and any implications for building necessary capacity among stakeholders, or, if needed, how to remove unattractive recommendations from the agenda.

Fig. 7. Recommendation attractiveness versus stakeholder capability grid. Source: Bryson et al. (1986, pp. 73–76); see also Bryson (2004a, p. 281).
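The attractiveness versus capability judgment is a second two-by-two classification, this time of recommendations rather than stakeholders. The sketch below is illustrative; the criteria scores, the cut-off, and the quadrant labels are hypothetical conventions for the grid described above.

```python
# Illustrative sketch of technique 4.c: classify each recommendation by its
# attractiveness (against team-developed criteria) and by the stakeholders'
# capability to implement it. Scores, cut-off, and labels are hypothetical.
def classify(attractiveness: float, capability: float, cutoff: float = 5.0) -> str:
    if attractiveness >= cutoff and capability >= cutoff:
        return "strong candidate for implementation"
    if attractiveness >= cutoff and capability < cutoff:
        return "attractive but needs capacity building"
    if attractiveness < cutoff and capability >= cutoff:
        return "feasible but low priority"
    return "candidate to drop from the agenda"

recommendations = {   # name: (attractiveness, stakeholder capability) on 0-10
    "Expand to a second site": (8, 3),
    "Revise intake procedures": (7, 8),
    "Publish an annual scorecard": (4, 9),
}

for name, (a, c) in recommendations.items():
    print(f"{name}: {classify(a, c)}")
```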
4.5. STEP 5 – decision-making and implementation

In a sense, all of the techniques considered so far are relevant to decision-making and implementation of the evaluation recommendations. They are all concerned with developing significant stakeholder support. That said, it is still important to retain a stakeholder focus during decision-making and implementation (Nutt, 2002). We present one final technique to help do so.

5.a. Recommendation implementation strategy development grid. Filling out a recommendation implementation strategy development grid can help evaluators, planners and decision makers gain a clearer picture of what will be required for implementation and help them develop action plans that will tap stakeholder interests and resources. The technique is adapted from Meltsner (1972), Coplin and O'Leary (1976), Kaufman (1986), and Christensen (1993), and builds on information revealed by previously created (1.e) bases of power – directions of interest diagrams, (3.a) stakeholder role plays, (4.a) evaluation recommendation support versus opposition grids, and (4.c) recommendation attractiveness versus stakeholder capability grids (Fig. 8).

Fig. 8. Recommendation implementation strategy development grid. Source: Adapted from Meltsner (1972), Coplin and O'Leary (1976), Kaufman (1986), and Christensen (1993).

The tool recognizes the separation between supportive and opposing stakeholders. For each stakeholder, list their stake in the evaluation, their resources, avenues of influence, probability of participating, influence, implications for implementation strategy, and an action plan for dealing with them. It is possible that a separate grid will need to be developed for each recommendation.
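One row of the grid can be sketched as a small record per stakeholder, keeping the supporter/opponent separation the tool calls for. The fields follow the columns named above, and the example values are hypothetical.

```python
# Illustrative sketch of technique 5.a: one grid row per stakeholder, using
# the columns named in the text. Example values are hypothetical.
from dataclasses import dataclass, field
from typing import List

@dataclass
class StrategyRow:
    stakeholder: str
    supportive: bool                 # the grid separates supporters and opponents
    stake_in_evaluation: str
    resources: List[str] = field(default_factory=list)
    avenues_of_influence: List[str] = field(default_factory=list)
    probability_of_participating: str = "unknown"
    influence: str = "unknown"
    implications_for_strategy: str = ""
    action_plan: str = ""

grid = [
    StrategyRow(
        stakeholder="Program funder",
        supportive=True,
        stake_in_evaluation="Evidence to justify continued funding",
        resources=["money", "formal authority"],
        avenues_of_influence=["board meetings"],
        probability_of_participating="high",
        influence="high",
        implications_for_strategy="Keep closely informed; co-present findings",
        action_plan="Schedule a briefing before the funding decision",
    ),
]

for row in grid:
    label = "supporter" if row.supportive else "opponent"
    print(f"{row.stakeholder} ({label}): {row.action_plan}")
```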
Steps 1–5 – overall summary. This completes the discussion of specific stakeholder analysis techniques. As can be seen, a wide variety of techniques is available to inform evaluation efforts intended to produce a credible evaluation likely to be used by intended users for its intended use. Each technique provides a different kind of information, often building on previous techniques to provide structured assistance in considering the interests, concerns, perspectives and other important aspects of different evaluation stakeholders.

5. Conclusions

There are three notable trends in evaluation that all point to the importance of working effectively with stakeholders. They are: (1) a general increase in both technical and people skills in evaluators; (2) an increasing emphasis on building evaluation capacity; and (3) increased attention to, and valuing of, the impacts on participants of process use. The tools for working with stakeholders offered in this article are aimed at providing concrete and practice-tested approaches for strengthening all three trends and increasing the ultimate use and usefulness of evaluations. As noted previously, in a 2006 on-line survey of members of the American Evaluation Association, 77% of 1047 respondents agreed or strongly agreed with the following statement: "Evaluators should take responsibility for: Being accountable to intended users of the evaluation for intended uses of the evaluation" (Fleischer, 2007). To exercise that responsibility and realize that accountability, evaluators can benefit from using specific stakeholder analysis tools at every step in the evaluation process. Working meaningfully with stakeholders is not something to be done just at the beginning of an evaluation. Attending to and engaging with evaluation stakeholders typically must occur every step along the way, including during the interpretation of data and findings and in support of implementation of recommendations and decisions and actions that flow from the evaluation findings.
J.M. Bryson et al. / Evaluation and Program Planning 34 (2011) 1–12 11

way, including during the interpretation of data and findings and in support of implementation of recommendations and decisions and actions that flow from the evaluation findings.

This article is, to the best of our knowledge, one of the first attempts to provide a how-to guide to a range of stakeholder analysis tools applied to evaluation and to the issues of which stakeholders to engage, why, and when in the evaluation process (Bryson & Patton, 2010). As indicated in the introduction, the process is loosely aligned with Patton's (2008) utilization-focused evaluation; however, we have argued that the approach to working with evaluation stakeholders that we present is more generic and that the application of the tools is not dependent on any one evaluation approach.

Each of the stakeholder analysis techniques has a specific purpose and reveals some things while hiding, or at least not highlighting, others. Stakeholder analyses therefore must be undertaken skillfully and thoughtfully, with a willingness to learn and revise along the way (Bardach, 1998; Lynn, 1996). For some small evaluation efforts, a one-time use of one or two techniques may be all that is necessary; for larger evaluation efforts, a whole range of techniques will be needed at various points throughout the process. Hybrid techniques or new techniques may also need to be invented along the way. The key point is the importance of thinking strategically about which analyses are to be undertaken, why, when, where, how, and with whom, and how to change direction when needed. We hope that the inclusion of a portfolio of straightforward and sensible techniques will indeed improve how evaluation stakeholders are identified, assessed, and involved, and thereby benefit the field.

Finally, there remains quite an agenda for research, education, and practice around stakeholder identification, analysis, and engagement. We still have much to learn about which techniques work best under which circumstances and why. What we do know is that skillfully, thoughtfully, and authentically working with stakeholders to achieve intended use by intended users increases use of both evaluation findings and processes (Patton, 2008).

References

Alkin, M. C. (2004). Evaluation roots: Tracing theorists' views and influences. Thousand Oaks, CA: Sage.
Alkin, M. C., Daillak, R., & White, P. (1979). Using evaluation: Does evaluation make a difference? Beverly Hills, CA: Sage.
Alkin, M. C., Hofstetter, C. H., & Ai, X. (1998). Stakeholder concepts. Advances in Educational Productivity, 7, 87–113.
American Evaluation Association Task Force on Guiding Principles for Evaluators. (1995). Guiding principles for evaluators. New Directions for Program Evaluation, 66, 19–34.
Bamberger, M., Rugh, J., & Mabry, L. (2006). Real world evaluation: Working under budget, time, data, and political constraints. Thousand Oaks, CA: Sage.
Bardach, E. (1998). Getting agencies to work together. Washington, DC: Brookings Institution Press.
Beech, N., & Huxham, C. (2003). Cycles of identification in collaborations. Glasgow, Scotland: University of Strathclyde, Graduate School of Business, Working Paper Series.
Braverman, M. T., Constantine, N. A., & Slater, J. K. (Eds.). (2004). Foundations and evaluation: Contexts and practices for effective philanthropy. San Francisco, CA: Jossey-Bass.
Bryson, J. (2004a). Strategic planning for public and nonprofit organizations (3rd ed.). San Francisco, CA: Jossey-Bass.
Bryson, J. (2004b). What to do when stakeholders matter: A guide to stakeholder identification and analysis techniques. Public Management Review, 6(1), 21–53.
Bryson, J., Ackermann, F., & Eden, C. (2007). Putting the resource-based view of management to work in public organizations. Public Administration Review, 67(4), 702–717.
Bryson, J., & Bromiley, P. (1993). Critical factors affecting the planning and implementation of major projects. Strategic Management Journal, 14, 319–337.
Bryson, J., Bromiley, P., & Jung, Y. S. (1990). Influences of context and process on project planning success. Journal of Planning Education and Research, 9(3), 183–185.
Bryson, J., Cunningham, G., & Lokkesmoe, K. (2002). What to do when stakeholders matter: The case of problem formulation for the African American Men Project of Hennepin County, Minnesota. Public Administration Review, 62(5), 568–584.
Bryson, J., Freeman, R. E., & Roering, W. (1986). Strategic planning in the public sector: Approaches and directions. In B. Checkoway (Ed.), Strategic perspectives on planning practice. Lexington, MA: Lexington Books.
Bryson, J., & Patton, M. (2010). Analyzing and engaging stakeholders. In H. Hatry, J. Wholey, & K. Newcomer (Eds.), Handbook of practical program evaluation (3rd ed., pp. 30–54). San Francisco, CA: Jossey-Bass.
Burby, R. (2003). Making plans that matter: Citizen involvement and government action. Journal of the American Planning Association, 69(1), 33–50.
Christensen, K. (1993). Teaching savvy. Journal of Planning Education and Research, 12, 202–212.
Clayson, Z. C., Castaneda, X., Sanchez, E., & Brindis, C. (2002). Unequal power—Changing landscapes: Negotiations between evaluation stakeholders in Latino communities. American Journal of Evaluation, 23(1), 33–44.
Compton, D., Baizerman, M., & Stockdill, S. (Eds.). (2002). The art, craft, and science of evaluation capacity building. New Directions for Evaluation, No. 93.
Cooper, T. (1998). The responsible administrator (4th ed.). San Francisco, CA: Jossey-Bass.
Coplin, W., & O'Leary, M. (1976). Everyman's prince: A guide to understanding your political problem. Boston: Duxbury Press.
Cousins, J. B. (Ed.). (2007). Process use. New Directions for Evaluation, No. 116.
Cousins, J. B., & Shulha, L. M. (2006). A comparative analysis of evaluation utilization and its cognate fields of inquiry: Current issues and trends. In I. F. Shaw, J. C. Greene, & M. M. Mark (Eds.), The Sage handbook of evaluation: Policies, programs and practices (pp. 266–291). Thousand Oaks, CA: Sage.
Cousins, J. B., & Whitmore, E. (2007). Framing participatory evaluation. New Directions for Evaluation, 114, 87–105.
Crosby, B. C., & Bryson, J. M. (2005). Leadership for the common good (2nd ed.). San Francisco, CA: Jossey-Bass.
Crosby, B. C., Bryson, J. M., & Anderson, S. R. (2003). Leadership for the common good fieldbook. Saint Paul, MN: University of Minnesota Extension Service, Community Vitality Program.
Durland, M., & Fredericks, K. (Eds.). (2005). Social network analysis in program evaluation. New Directions for Evaluation, No. 107.
Eden, C., & Ackermann, F. (1998). Making strategy. Thousand Oaks, CA: Sage.
Feldman, M., & Khademian, A. (2002). To manage is to govern. Public Administration Review, 62(5), 541–554.
Fleischer, D. (2007). Evaluation use: A survey of U.S. American Evaluation Association members. Unpublished master's thesis, Claremont Graduate University.
Freeman, R. E. (1984). Strategic management: A stakeholder approach. Boston: Pitman.
Friend, J., & Hickling, A. (1997). Planning under pressure: The strategic choice approach (2nd ed.). Oxford, England: Heinemann.
Ghere, G., King, J., Stevahn, L., & Minnema, J. (2006). A professional development unit for reflecting on program evaluation competencies. American Journal of Evaluation, 27(1), 108–123.
Greene, J. C. (2005). Stakeholders. In S. Mathison (Ed.), Encyclopedia of evaluation (pp. 397–398). Thousand Oaks, CA: Sage.
House, E. R. (1977). The logic of evaluative argument. CSE Monograph Series in Evaluation (Vol. 7). Los Angeles: UCLA Center for the Study of Evaluation.
House, E. R. (1980). Evaluating with validity. Beverly Hills, CA: Sage.
House, E. R., & Howe, K. (1999). Values in evaluation and social research. Thousand Oaks, CA: Sage.
House, E. R., & Howe, K. (2000). Deliberative democratic evaluation. Evaluation as a democratic process. New Directions for Evaluation, 85, 3–12.
Joint Committee on Standards for Educational Evaluation. (1994). The program evaluation standards. Thousand Oaks, CA: Sage.
Julnes, G., & Rog, D. (Eds.). (2007). Informing federal policies on evaluation methodology: Building the evidence base for method choice in government sponsored evaluation. New Directions for Evaluation, No. 113.
Kahneman, D., & Tversky, A. (Eds.). (2000). Choices, values, and frames. Boston: Cambridge University Press.
Kaufman, J. (1986). Making planners more effective strategists. In B. Checkoway (Ed.), Strategic perspectives on planning practice. Lexington, MA: Lexington Books.
Kettl, D. (2002). The transformation of governance: Public administration for twenty-first century America. Baltimore, MD: Johns Hopkins University Press.
King, J. A. (2007). Making sense of participatory evaluation. New Directions for Evaluation, No. 114, 83–86.
King, J., Stevahn, L., Ghere, G., & Minnema, J. (2001). Toward a taxonomy of essential program evaluator competencies. American Journal of Evaluation, 22(2), 229–247.
Leeuw, F. (2002). Evaluation in Europe 2000: Challenges to a growth industry. Evaluation, 8(1), 5–12.
Lewis, C. W., & Gilman, S. C. (2005). The ethics challenge in public service: A problem-solving guide. San Francisco, CA: Jossey-Bass.
Lynn, L. (1996). Public management as art, science and profession. Chatham, NJ: Chatham House.
Margerum, R. (2002). Collaborative planning: Building consensus and a distinct model of practice. Journal of Planning Education and Research, 21, 237–253.
Meltsner, A. (1972). Political feasibility and policy analysis. Public Administration Review, 32(November/December), 859–867.
Mendelow, A. L. (1987). Stakeholder analysis for strategic planning and implementation. In W. R. King & D. I. Cleland (Eds.), Strategic planning and management handbook (pp. 176–191). New York: Van Nostrand Reinhold.
Mitchell, R. K., Agle, B. R., & Wood, D. J. (1997). Toward a theory of stakeholder identification and salience: Defining the principle of who and what really counts. Academy of Management Review, 22(4), 853–886.
Mohan, R., Bernstein, D. J., & Whitsett, M. D. (Eds.). (2002). Responding to sponsors and stakeholders in complex evaluation environments. New Directions for Evaluation, No. 95.
Mohan, R., & Sullivan, K. (Eds.). (2007). Promoting the use of government evaluations in policymaking. New Directions for Evaluation, No. 113.
Morris, D. (2002). The inclusion of stakeholders in evaluation: Benefits and drawbacks. The Canadian Journal of Evaluation, 17(2), 49–58.
Nadler, G., & Hibino, S. (1998). Breakthrough thinking (rev. 2nd ed.). Roseville, CA: Prima Publishing.
Nutt, P. (2002). Why decisions fail: Avoiding the blunders and traps that lead to debacles. San Francisco, CA: Berrett-Koehler.
Nutt, P., & Backoff, R. (1992). Strategic management of public and third sector organizations: A handbook for leaders. San Francisco, CA: Jossey-Bass.
Patton, M. Q. (2008). Utilization-focused evaluation (4th ed.). Thousand Oaks, CA: Sage.
Peters, B. G. (1996). The future of governing: Four emerging models. Lawrence, KS: University Press of Kansas.
Preskill, H., & Caracelli, V. (1997). Current and developing conceptions of use: Evaluation use TIG survey results. American Journal of Evaluation, 18(3), 209–226.
Rossi, P., Lipsey, M., & Freeman, H. (2003). Evaluation: A systematic approach (7th ed.). Thousand Oaks, CA: Sage.
Ryan, K., & DeStefano, L. (Eds.). (2000). Evaluation as a democratic process: Promoting inclusion, dialogue, and deliberation. New Directions for Evaluation, No. 85.
SenGupta, S., Hopson, R., & Thompson-Robinson, M. (2004). Cultural competence in evaluation: An overview. New Directions for Evaluation, No. 102, 5–20.
Stone, D. (2002). Policy paradox and political reason. New York: W. W. Norton.
Taut, S. (2007). Studying self-evaluation capacity building in a large international development organization. American Journal of Evaluation, 28(1), 45–59.
Thiele, G., Devaux, A., Velasco, C., & Horton, D. (2007). Horizontal evaluation: Fostering knowledge sharing and program development within a network. American Journal of Evaluation, 28(4), 493–508.
Tuchman, B. (1984). The march of folly: From Troy to Vietnam. New York: Knopf.
Vickers, B., & Vickers, G. (1998). The art of judgment. New York: HarperCollins.
Weiss, C. H. (1998). Have we learned anything new about the use of evaluation? American Journal of Evaluation, 19(1), 21–33.
Wildavsky, A. (1979). Speaking truth to power: The art and craft of policy analysis. Boston: Little, Brown.

John M. Bryson, PhD, is McKnight Presidential Professor of Planning and Public Affairs at the Hubert H. Humphrey Institute of Public Affairs, University of Minnesota. He is author of Strategic Planning for Public and Nonprofit Organizations: A Guide to Strengthening and Sustaining Organizational Achievement and co-author of Leadership for the Common Good: Tackling Public Problems in a Shared-Power World.

Michael Quinn Patton, PhD, is an independent consultant, former president of the American Evaluation Association, and author of six major evaluation books including Utilization-Focused Evaluation, Qualitative Research and Evaluation Methods, and a new book, Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use.

Ruth A. Bowman, PhD, is Vice President for Research and Evaluation, The Anne Ray Charitable Trust, headquartered in Minnesota, and adjunct faculty at the University of Minnesota.