
Article

The Nature and Frequency of Inclusion of People with Disabilities in Program Evaluation

American Journal of Evaluation
34(1) 23-44
© The Author(s) 2012
Reprints and permission: sagepub.com/journalsPermissions.nav
DOI: 10.1177/1098214012461558
aje.sagepub.com

Miriam R. Jacobson, Tarek Azzam, and Jeanette G. Baez

Abstract
Although evaluation theorists over the last two decades have argued for the importance of including
stakeholders from marginalized groups in program planning and research, little is known about the
degree of inclusion in program evaluation practice. In particular, we know little about the type and
level of inclusion of people with intellectual, developmental, and psychiatric disabilities in the
evaluation of programs that aim to serve them. Through a content analysis of articles published in
the last decade describing evaluations of programs for people with these types of disabilities, this
article describes which stakeholders have been included in evaluations, how program recipient input
was obtained, and in which stages of the evaluation stakeholder participation occurred. The findings
indicate that program recipient disability type (developmental, psychiatric, or other) may predict
type and level of inclusion, and inclusion tends to occur in later parts of the evaluation process.

Keywords
evaluation inclusion, disabilities, stakeholder involvement, data collection, evaluation practice

Inclusion of different stakeholders has long been an important consideration of evaluation practice.
It is prescribed in many evaluation theories, including democratic, participatory, and empowerment
evaluation (Christie, 2003) and can range from obtaining stakeholder input on a program, to
employing their assistance in data collection, to having them actively collaborate in the evaluation
design and interpretation of results (Mertens, 1999). Stakeholder inclusion can increase the
likelihood of use (Fleischer & Christie, 2009; Toal, 2009), the relevance of the evaluation to the community of interest, and the accuracy of results (Botcheva, Shih, & Huffman, 2009; Brandon,
1998; Chouinard & Cousins, 2009). Inclusion may also promote social justice by giving recipients
a voice in program decision making (Mertens, 1999).
The importance of inclusion is reiterated by the Joint Committee on Standards for Educational
Evaluation, a network of professional organizations focused on the quality of evaluation practice.

Claremont Graduate University, Claremont, CA, USA

Corresponding Author:
Miriam R. Jacobson, Claremont Graduate University, 123 East 8th Street, Claremont, CA 91711, USA.
Email: jacobson.miriam@gmail.com


Table 1. Definitions of Disability Types.

Disability: A physical or mental impairment that substantially limits one or more major life activities (Americans with Disabilities Act, 1990).

Developmental disability: A severe, chronic disability of an individual that is attributable to a mental or physical impairment or combination of mental and physical impairments, is manifested before the individual attains age 22, and is likely to continue indefinitely (The Developmental Disabilities Assistance and Bill of Rights Act of 2000). Some examples are autism, cerebral palsy, and Down syndrome.

Intellectual disability: Characterized by significant limitations in both intellectual functioning and adaptive behavior, which covers many everyday social and practical skills. This disability originates before the age of 18 (American Association on Intellectual and Developmental Disabilities, 2011). This type of disability was previously referred to as mental retardation. It can be considered one disability within the broader category of developmental disabilities.

Psychiatric disability: A mental or psychological disorder that substantially limits one or more major life activities (Equal Employment Opportunity Commission, 1997).

In particular, the standards of "attention to stakeholders" and "responsive and inclusive practice" (Yarbrough, Shulha, Hopson, & Caruthers, 2011) clearly emphasize this point. Although stakeholder inclusion can support other evaluation standards such as utility and accuracy, a significant focus on inclusion can also compromise standards such as feasibility; for example, if a high level of inclusion is costly for stakeholders or disruptive to the program (Yarbrough et al.,
2011). In such cases, the evaluator may be required to make a choice of which standard to emphasize
(Taut, 2008; Yarbrough et al., 2011). Therefore, even though inclusion is widely advocated, it may
sometimes be unfeasible in practice. In an attempt to understand this struggle, this study examines
how evaluators deal with potentially disparate demands when working with stakeholders who have
developmental, psychiatric, or other forms of disabilities.

Inclusion and People With Disabilities: Historical Overview


A disability is a physical or mental impairment that substantially limits one or more major life
activities (Americans with Disabilities Act, 1990). Although disabilities can take numerous forms,
of greatest relevance to this discussion and study are developmental, intellectual, and psychiatric
disabilities (Table 1), each of which presents unique challenges to those who design and administer service programs and to those who wish to conduct meaningful, useful evaluations of those programs.
In the past, evaluators and program staff would often define the interests of program recipients
with disabilities by making their own assumptions about the ideal quality of life, and therefore would
fail to collect data on issues important to these individuals (Cambell, 1997; Kiernan, 1999; Mertens,
1999). Researchers also tended to overlook the influence of varying cultural and contextual factors
on individuals (Harry, 2002). Mertens (2007a) refers to this latter issue as the "myth of homogeneity,"
which occurs when a cultural outsider assumes that all members of the cultural group are the same as
one another.
Since then, there have been dramatic increases in the rights of individuals with disabilities and
more opportunities for participation in research. In the late 1980s, there was a rising movement
to include people with disabilities in research and evaluation (Barnes, 2003; Chappell, 2000; Walmsley & Johnson, 2003). Members of the disabilities rights movement have called for people with disabilities to have a voice in decisions about their programs and services, as reflected in the slogan "Nothing About Us Without Us." In 1992, a group of researchers with disabilities pushed for an

emancipatory model of disability research, whereby people with disabilities are in control of the
research process, as lead researchers (Barnes, 2003). They argued that putting people with disabilities in control of research is necessary to shift the research perspective from the medical model (disability arises from a problem with the individual) to the social model, whereby disability arises from a problem in society.
These demands for inclusion led to numerous changes in policy and practice (Barnes, 2003).
Increasingly, there were examples of participation of people with disabilities in research and
evaluation, and disabilities research funders began requiring participation of people with disabilities
in conducting the research (Walmsley & Johnson, 2003). In 2000, federal legislation called for
involvement of all people with developmental disabilities in the planning of their own programs and
services (Developmental Disabilities Assistance and Bill of Rights Act, 2000). People with
disabilities have now come to occupy places of influence in the research process and on
organizational committees (Caldwell, Hauss, & Stark, 2009; Cambell, 1997). While these trends
in research and program development have been well documented in the literature, it remains
unclear just how evaluation practice as a whole has addressed the issue of inclusion for people with
disabilities.

Challenges to Evaluation Inclusion


In nearly any evaluation context, special consideration should be given to linguistic, cognitive, and
cultural diversity. This need becomes especially great when individuals with disabilities are
participating in the data collection. Within a given group of people, one might encounter a wide
range of cognitive abilities or preferences for how to communicate. People with disabilities may
prefer to communicate nonverbally or through writing; they may process information or
conceptualize words differently. In some cases, words and concepts that make sense to researchers
may be misleading or confusing to participants. Data collection procedures must recognize such
characteristics to promote the collection of accurate data.
Researchers have discussed the need for careful procedures to collect data from individuals with
intellectual or developmental disabilities (Ware, 2004). These disabilities represent a vast range of
functionalities. Many of these individuals have limitations in speech, oral comprehension, or
cognition. In addition, some may have great strengths in one of these areas (such as oral
comprehension) while having limitations in another (such as speech), making it difficult for an
evaluator to design procedures that fit all potential participants.
Importantly, researchers should be aware that obtaining a response from an interviewee does not
necessarily mean that he or she understands what is being asked. Participants might acquiesce in
surveys not just out of a need to please but also because they have difficulty understanding the
questions (Finlay & Lyons, 2001). One study examined response bias in self-reports among a sample
of people with intellectual disabilities in staffed housing. Researchers used items from quality of life
measures specifically designed to identify when respondents were exhibiting acquiescence and
recency bias, for example, by asking the same question twice in a slightly different way. The study found that two-thirds of respondents either demonstrated at least one of these types of response bias or did not
respond with relevant answers (Perry & Felce, 2002).
Researchers have raised concerns about gathering accurate data from those with psychiatric
disabilities as well. In particular, some researchers are concerned that associated cognitive
impairments or psychiatric symptoms can affect a person's insight into his or her best interests or
ability to self-report on outcomes such as quality of life. While many tools exist to measure
psychopathology, fewer are available to measure recipient viewpoints on topics such as program
satisfaction (Cambell, 1997). In addition, some people with certain psychiatric disorders, such as
bipolar disorder or schizophrenia, demonstrate impairments in verbal and nonverbal communication

(such as incoherent speech or flat facial expressions) as part of their conditions. In such cases, data
collection procedures need to be adapted to these circumstances (Dadich & Muir, 2009; Harvey, Wingo,
Burdick, & Baldessarini, 2010).

Strategies for Inclusion


When the reliability of recipient self-report is in doubt, researchers suggest that data be obtained
from multiple sources, such as parents and clients, and, if possible, from other sources of
evidence, for example medical records, as well (Boland, Daly, & Staines, 2008). Simply using a proxy report alone (having a parent or someone close to the person answer on his or her behalf)
is not always sufficient. The reliability of proxy reports has been mixed, and for individuals without
a certain level of expressive communication, the accuracy of the proxy can never truly be determined
(Perry & Felce, 2002; Stancliffe, 2000). Researchers have tended to believe that the degree to which
a proxy can answer meaningfully for the research participant depends on the type of information
involved and the proxy's relationship to the participant (Stancliffe, 2000). To address this issue,
some evaluations may use two or more proxy reports to better estimate the actual viewpoint of the
participant (Bonham et al., 2004).
The Ask Me! Project, conducted for the Maryland Developmental Disabilities Administration,
published articles on how to gather reliable information from people with a wide range of
developmental disabilities. Its researchers found that when severe communication difficulties are present, adapting strategies to each individual allowed them to obtain accurate interview data from most but
not all of their clients. Techniques such as limiting response options to two or three choices as well
as using simple language and providing multiple ways to perceive questions and provide answers
can facilitate communication with a range of participants. These procedures can entail providing pictures with response options, allowing respondents to answer by speaking, pointing, or gesturing, or using a translator when necessary (Bonham et al., 2004).
In recent years, a variety of self-report instruments and interview protocols have been developed
to support data collection from people with disabilities. Instruments like the ARC Self-Determination Scale are now available to assess constructs such as quality of life, self-determination, or health attitudes directly from the individuals themselves. Designers of these
instruments recommend providing assistance to respondents as needed in order to ensure that the
measures can be used with participants with a wide range of functionality (Wehmeyer, 1995).
Applied researchers have also developed alternative data collection approaches that may hold
advantages over structured interviews and surveys. For example, the researcher may obtain accounts
of participants' experiences in their own words through personal narratives. Another technique is
PhotoVoice, in which photographs are taken by participants and discussed in interviews (Jurkowski
& Ferguson, 2008). These open-ended techniques tend to increase participation and are especially
useful in answering certain types of research questions. Overall, the increasing availability of validated, structured research tools has potentially encouraged more researchers and evaluators to
include people with disabilities in their study designs.
When diversity is present among recipients, appropriately including all people with disabilities
within a program can sometimes require greater resources (in the form of trained staff, sources of evidence, and overall time investment) than would otherwise be necessary (Birman, 2007). For
instance, Balch and Mertens (1999) described a series of focus groups they conducted to understand
the experience of deaf individuals in the court system. They included people from different
cities who were American Sign Language speakers, Mexican Sign Language speakers, and
hard-of-hearing adults with cochlear implants or hearing aids. To include these diverse participants,
their project required a range of specialized staff including multiple types of interpreters to accommodate the various languages, appropriate settings to conduct the evaluation, and a court reporter to

provide live transcriptions of the discussions that were displayed for focus group members. These
added expenses can be prohibitive in many small- or medium-scale evaluation projects and need
to be considered when designing an evaluation and figuring its budget.

Deeper Participation in the Evaluation Process


It is important not only to collect input from recipients but also to involve them more deeply in
conducting evaluations of their programs. Mertens (1999) notes that in the initial stages of an
evaluation, it is important to prioritize questions and issues that are directly relevant to community
interests. Similarly, criteria for program success and definition of constructs should match those held
by the community. Mertens argues that evaluators and those who typically make evaluation
decisions remain highly influenced by their own biases and interests, even if they are committed
to producing the best potential outcomes for recipients. This bias is of concern for people with
disabilities who have been traditionally left out of the program decision-making process. Mertens
(1999, 2007b), who advocates the use of a transformative evaluation approach, argues that people
with disabilities should be included as active collaborators throughout the evaluation; program
recipients should have meaningful input in identifying evaluation questions, developing
measurement procedures, and interpreting results. By taking on a larger role in the evaluation,
people with disabilities can influence the project, and simultaneously become empowered to initiate
community change in the future (Mertens, 2007a).
Although many argue that the use of participatory approaches can be beneficial for people with
disabilities (Conder, Milner, & Mirfin-Veitch, 2011; Smith & O'Flynn, 2000), including participants
improperly could create unrealistic expectations and potentially cause more harm than good. Former
participants in research have spoken of incidents in which researchers promised collaboration but
failed to deliver on the promises, compensate them for their assistance, or share the research results
(Gill, 1999; Heller, Pederson, & Miller, 1996). Heller and colleagues (1996) conducted a focus
group of people with intellectual disabilities who had been involved in research and found that the
majority of them noted barriers to participation. These barriers included a lack of training in research
ideas and procedures, a failure to be respected or listened to, logistical difficulties, and a lack of
personal support. Other difficulties that have been reported in participatory evaluations include
unclear roles, ethical concerns, and skills disparities between participants and what is needed for
research involvement (Smith & O'Flynn, 2000). Researchers have commented that even if people are given a high degree of responsibility, inevitable power differences remain between researchers and stakeholders (Smith & O'Flynn, 2000).
Case examples have provided some guidance on how to optimize stakeholder inclusion.
Collaboration has been more successful when research roles are clearly defined, and when
stakeholders are involved early in the planning stages, are kept informed of important aspects of the
project, and are provided proper training and support (Abma, Nierse, & Widdershoven, 2009;
Bonham et al., 2004; Conder et al., 2011; Gilbert, 2004; Linhorst & Eckert, 2002; Smith & O'Flynn,
2000; Walmsley, 2004). To promote an environment of mutual learning and respect, it is helpful for
team members to regularly reflect on power dynamics, provide frequent opportunities for informal
discussions about the project, and have more than one person with disabilities on the team
(Abma et al., 2009; Conder et al., 2011).
As with putting together any evaluation team, it is important to assess and consider individual
strengths (e.g., interpersonal or analytical skills) or interests when selecting potential collaborators
for specific roles (Gilbert, 2004; Walmsley, 2004). At the beginning and throughout the evaluation,
evaluators should think through what additional resources and flexibility might be needed in the
budget, timeline, and staffing, and plan accordingly (Conder et al., 2011; Hassouneh, Alcala-Moss, & McNeff, 2011). This includes accounting for all reasonable accommodations for participants, such as adapting communication mechanisms, providing for transportation needs, and planning regular breaks during trainings (Hassouneh et al., 2011; Read & Maslin-Prothero, 2012).

Figure 1. Conceptual model of inclusion of people with disabilities in evaluation. [Diagram: Availability of Strategies and Data Collection Tools, Availability of Resources (time, people, money), Evaluator Attitudes, Program Recipient Characteristics, and Program Context feed the Perceived Feasibility of Inclusion; Evaluator Attitudes and Approach, Program Context, and Evaluation Goals feed the Perceived Benefits of Inclusion; both inform the Level of Inclusion of Recipients, and the Quality of Inclusion links inclusion to the Outcomes of Inclusive Activities.]
It is important to note that even when research collaboration is done conscientiously, it may not
be desirable or appropriate for all stakeholders. For example, some individuals may not wish to
devote the time required to participate (Chen, Poland, & Skinner, 2007). For some individuals with
psychiatric disabilities, there may also be the potential for an increase in psychiatric symptoms with the
added stress of the project (Linhorst & Eckert, 2002). It is also possible that those from the
community who are thought to be most capable of being active collaborators in the research
procedures may not necessarily be representative of the target population (Heller et al., 1996; Smith
& OFlynn, 2000; Walmsley, 2004). All of these issues must be considered as the evaluation is both
designed and carried out.

The Inclusion Model


Figure 1 provides a conceptual model of inclusion of recipients with disabilities that integrates the
research and evaluation literature used to frame this study. This model describes the various factors
that can inform evaluators' perceptions of the benefits and feasibility of inclusion. The model takes
into account the factors that influence the Perceived Feasibility of inclusion, which can include the
availability of adequate tools, resources, evaluator training, participant characteristics, and program
context. These feasibility issues are then weighed against the Perceived Benefits of inclusion, which
can be influenced by evaluators' support of inclusive and transformative evaluation approaches
(Azzam, 2011; Christie, 2003), the needs of the program context (e.g., in certain programs inclusion
might be particularly beneficial, such as programs where recipients typically do not have a voice),
and the overarching goals of the evaluation.
Feasibility and Benefits considerations are used in deciding the Level of Inclusion. If perceived
benefits are high, evaluators may be convinced that inclusion is worth investing in, even if it is
perceived to be very costly. Alternatively, if few meaningful benefits are expected, evaluators may
be reluctant to devote even a small portion of extra evaluation resources to include recipients.
The model also highlights the importance of the Quality of Inclusion and argues that the benefits
of inclusion, such as increasing accuracy of findings or recipient empowerment, are derived from
high quality and meaningful inclusive practices. For example, methods of collecting data should provide appropriate accommodations for recipients' needs. Efforts to gather multiple perspectives should be conducted to avoid misconstruing recipients' viewpoints or providing misleading information. Evaluators must also respect the individuals who participate by providing proper training and
appropriate compensation for their time. Otherwise, recipients might feel even further
disenfranchised by the program and the evaluation process.
This model was developed specifically to understand how evaluators can reconcile the benefits
and feasibility concerns when choosing the level of inclusion. Through review of the literature on
transformative evaluation approaches, research methodology, and disability research, five factors
underlying perceived feasibility and three factors underlying perceived benefits were identified.
Examination of cases of recipient participation and data collection strategies highlighted the need
for high quality practices to achieve the desired benefits of inclusion.
This study examines specific elements of this model (those highlighted in gray in Figure 1) to provide additional insights into the interactions between Feasibility and Level of Inclusion by describing the frequency with which inclusion occurs and the contextual factors surrounding it.
The choice to focus on feasibility was primarily to assess whether having either a psychiatric,
developmental, or other type of disability will meaningfully reduce the likelihood of inclusion in the
evaluation process. While the model describes multiple factors underlying feasibility, the role of
recipient characteristics and program contexts are the least understood and, it could be argued, the

most relevant. Understanding the role that program context and recipient characteristics have on
feasibility could inform evaluators about the level of inclusion that has been achieved in similar
projects and what they can expect when planning their own evaluations. It could also inform efforts
to further increase inclusion in areas where it is presently perceived to be less feasible. Many case
examples have described the need for additional resources in participatory projects, suggesting that
disability type may negatively impact the level of inclusion (Conder et al., 2011; Hassouneh et al.,
2011). However, case descriptions have not enabled feasibility comparisons across different
program contexts, or across recipients of different ages or disability types. It is possible that with
increases in the availability of inclusive strategies and data collection tools starting in the 1990s, and changes in researcher/evaluator attitudes, inclusion is now more feasible across multiple disability groups and contexts (Barnes, 2003; Chappell, 2000; McDonald, Keys, & Henry, 2008; Walmsley & Johnson, 2003).
Specific contextual and recipient variables were selected for study because of their relevance to inclusive practices. However, a couple of potentially pertinent variables (e.g., scope of budget) were not included because they were not consistently present in the evaluation descriptions in most of the articles used in this study. This study also uses the information gathered on these elements to further refine the model's development, and to highlight common methodologies and approaches that evaluators utilize when including individuals with disabilities in their work.

Research Questions
Because theorists assert the importance of inclusion of stakeholders, especially those from
marginalized groups, it is important that we understand the degree to which principles of inclusion
are actually instituted in evaluation practice. Using the conceptual model as a guide (Figure 1), this
study aimed to understand the role that recipient stakeholders play in evaluation, specifically in
terms of the level and type of inclusion, the role of program and recipient characteristics in
involvement, and the nature of the strategies that evaluators use. To explore these issues, we
conducted a content analysis of peer-reviewed articles that describe evaluations of programs that
serve people with developmental, intellectual, psychiatric, and other disabilities. This approach is
appropriate in areas of research where theory is underdeveloped, because it allows the researcher
to take a close look at a group of cases that can support the development of a theoretical framework
and inform further study (Creswell & Plano Clark, 2011).
Our analysis was guided by the following questions:
1. To what extent have people with disabilities been included in the evaluation of programs that serve them?
2. What methodologies have been used to elicit views of people with disabilities?
3. What has been the role of contextual variables, such as type of program, in moderating inclusion?

Gaining a deeper understanding of the ways in which evaluators include stakeholders in their work
can offer useful guidelines for practitioners who may be wary of inclusive approaches and, ultimately, may lead to more useful, meaningful evaluative practices.

Method
Journal Selection
Articles were obtained from peer-reviewed evaluation journals published in paper or online format
in the United States during the 10 years prior to the study (2000-2009), using selection criteria adapted from Christie and Fleischer (2010).

Table 2. Evaluation Journals.

Journal                                                     Number of articles selected
Evaluation and Program Planning                             16
New Directions for Evaluation                                6
American Journal of Evaluation                               3
Evaluation Review                                            3
Educational Evaluation and Policy Analysis                   2
Evaluation and the Health Professions                        2
Measurement and Evaluation in Counseling and Development     0
Journal of Multidisciplinary Evaluation                      0
Policy Evaluation                                            0
Practical Assessment, Research & Evaluation                  0

Specifically, evaluation journals were included if they (1) had the word "evaluation" in their title and (2) focused on social sciences or education. With
these criteria, 10 journals were selected for this study (Table 2).1
The focus on evaluation journals gave our search a clear boundary, and it was one of the criteria we chose to apply. It also
allowed us to hand search each of the journals, and this provided a more sensitive measure for
selecting articles rather than using keyword searches, which might have led to missed articles. In
addition, we assumed that since the role of stakeholders is an important component of evaluation
practice, including only evaluation journals created a sample of articles more likely to discuss in
detail the participatory aspects of evaluation design. Articles in these journals are also more likely
to describe evaluations rather than applied research or program monitoring projects. It is important
to note, however, that this approach may not produce results representative of evaluation practice as
a whole, as many evaluation reports do not get published in peer-reviewed journals.

Article Selection
Within these journals, an article was included in the analysis if (1) it described an empirical study of
a program and (2) the majority of the program recipients had disabilities. Each journal's abstracts
were manually searched and then a scan was conducted to determine whether the article met the
inclusion criteria. A keyword search was conducted within each journal using words such as
"disabilities," "mental retardation," "psychiatric," "mental health," "deaf," "brain injury," "special needs," "autism," and "blind," in order to identify articles that were not found in the
manual search. If the article did not specifically mention that program recipients had disabilities but
they were described as having physical or mental conditions (such as mental illness) that interfered
with a major life activity (such as for those living in an institution rather than on their own), the
article was included. This was in keeping with the definition of disability in the Americans with
Disabilities Act (ADA) of 1990.
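To make the screening logic concrete, the sketch below shows a minimal version of the keyword screen described above. The function and record structure are illustrative assumptions; in the study itself, searches were run through each journal's own search tools, and every hit was still checked manually against the two inclusion criteria.

```python
# Illustrative sketch of the keyword screen described above (not the study's
# actual tooling). The keyword list comes from the text; everything else is assumed.
KEYWORDS = [
    "disabilities", "mental retardation", "psychiatric", "mental health",
    "deaf", "brain injury", "special needs", "autism", "blind",
]

def keyword_hit(abstract: str) -> bool:
    """Return True if an abstract mentions any search keyword (case-insensitive)."""
    text = abstract.lower()
    return any(keyword in text for keyword in KEYWORDS)

# Flagged abstracts would still be screened manually against the inclusion
# criteria: (1) an empirical study of a program, and (2) a majority of
# program recipients with disabilities.
abstracts = ["An evaluation of a supported-employment program for adults with autism."]
flagged = [a for a in abstracts if keyword_hit(a)]
```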
While other definitions, such as that of the World Health Organization (WHO, 2011), cover a
broader range of disabilities, the ADA definition was used to focus the study sample mostly on
individuals with psychiatric and developmental disabilities, and to compare these groups to a few other
types of disabilities such as physical or sensory disabilities. While this study focuses on intellectual,
developmental, psychiatric, physical, and communication disabilities, there are many other kinds of
disabilities not represented in this study, which can include substance dependence or certain learning
disabilities or physical illnesses that do not meet ADA criteria. Selection procedures identified 35 articles for inclusion in the content analysis. Two of the articles described the same evaluation (Fredericks,
2005; Fredericks, Deegan, & Carman, 2008), and of those two the most recent article was chosen.

Table 3. Main Coding Categories and Their Definitions.

Disability type: Disabilities fell into one of the following categories: developmental/intellectual, psychiatric, or other (such as physical/communication).

Inclusion: Inclusion was divided into two domains: (1) whether data were collected from program recipients with disabilities and (2) whether program recipients participated in the evaluation beyond just being a data source.

Context: Context variables were chosen based on those thought to possibly moderate inclusion. These included program scope (local, state, national, international) and number of sites (single site, multisite, no sites described).

Data sources: Raters coded whether evaluation data sources were described in the article, from whom data were collected (e.g., program recipients with disabilities, family members, and other stakeholders with disabilities), and how data were collected.

Stakeholder participation: Our definition was based on Cousins and Whitmore's (1998) three dimensions of stakeholder involvement: stakeholder selection (diversity of stakeholders involved in the evaluation), depth of participation (ranging from limited participation in some stages to participation in all stages of the evaluation), and stakeholder control (degree of stakeholder control over evaluation decisions).(a)

Stakeholder groups: We recorded the participation of certain key stakeholder groups (program recipients with disabilities, family members of program recipients, and other stakeholders with disabilities). The codebook also noted the presence of any other stakeholders who participated in the evaluation, including policy makers, decision makers, and program implementers.

Evaluation stages(b): We noted the stages at which stakeholders were involved in the evaluation. These stages were derived from the Centers for Disease Control (CDC) Evaluation Framework (CDC, 1999). They included:
1. Describing the program
2. Focusing the evaluation design (questions and scope)
3. Focusing the evaluation design (methods)
4. Gathering of credible evidence
5. Justifying conclusions (analysis and synthesis)
6. Justifying conclusions (interpretation, judgment, recommendations)
7. Ensuring use and sharing lessons learned

Note. (a) Stakeholder control, however, was ultimately not used in the coding scheme because it was not ascertainable from the articles. (b) The stage "engaging stakeholders," present in the CDC's framework, was not used since this was thought to be done throughout the other stages.

During the initial coding phase, we discovered that two articles did not meet our inclusion criteria and they were subsequently discarded. One described the program planning process but not an evaluation or
study of a program, and in the other, the people with disabilities were not considered program recipients. Therefore, the final sample consisted of 32 articles.
Of the 32 articles included in the final sample, 16 focused on programs that serve individuals with
psychiatric disabilities. Ten of the articles described evaluations of programs designed for people
with intellectual or developmental disabilities. Six of the articles focused on programs designed
to address other disabilities or did not specify the types of disabilities addressed. Given the small
number of articles that focused on other types of disabilities, most of the analyses focused on the
other two categories.

Coding Instrument
A coding sheet and guidelines were developed for this study to deductively analyze the articles. An
initial list of coding categories (listed in Table 3) was developed and was further refined using the

evaluation and research methods literature to identify existing definitions or frameworks that could
be adapted or incorporated into the codebook. The code for disability types was also informed by a
preliminary review of some of the articles by one of the authors in order to see how disability types
were grouped and described.

Procedures
Two independent raters read through each article and completed the initial coding instrument.
Differences between raters were discussed, and a consensus was reached on the appropriate codes.
The raters utilized a sample of 11 articles to clarify coding procedures and guidelines, and to
subsequently revise them. These discussions led to the addition of new codes. The final codebook
was created and used by each rater to independently code all articles in the sample, and the level
of agreement equaled 92%. When coders did not agree, they discussed differences in ratings and
came to a consensus on the final ratings for each article. Table 3 details the main coding categories
and the definitions used in this study.
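As a rough illustration of how an agreement figure like the 92% reported above can be computed, here is a minimal sketch of simple percent agreement between two raters. The pairing of coding decisions and the sample values are assumptions; the article does not state whether agreement was calculated per code, per article, or corrected for chance.

```python
# Minimal sketch: percent agreement between two raters' codes. Only the idea
# of comparing paired coding decisions comes from the article; the data below
# are invented for illustration.

def percent_agreement(rater_a: list, rater_b: list) -> float:
    """Share of paired coding decisions on which both raters assigned the same code."""
    if len(rater_a) != len(rater_b):
        raise ValueError("Raters must code the same set of decisions")
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

# Hypothetical codes for five decisions (article x code category):
rater_a = ["psychiatric", "yes", "local", "multisite", "interview"]
rater_b = ["psychiatric", "yes", "state", "multisite", "interview"]
print(f"{percent_agreement(rater_a, rater_b):.0%}")  # -> 80%
```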
It is important to note that the study's sample of peer-reviewed articles in evaluation journals
limits the generalizability of the results. There might have been a great deal of involvement that
is simply not reported in the articles, either because of editing or because the article only focuses
on one part of a larger project. We attempted to create codes that would capture as many details
about the context as possible, in order to reduce the impact of this limitation.

Results
The results section is divided into two broad areas that represent the types of inclusion coded in this study. The first section examines the likelihood that evaluations collected information/data from individuals with disabilities, along with the types of approaches and methods utilized during this process. The second section examines the likelihood of involving individuals with disabilities in the evaluation process, beyond just collecting data from them, along with the strategies utilized during this process.

Inclusion as a Data Source


Of the articles in which data sources were described (N = 31), data were obtained from program
recipients with disabilities in 24 (77%) of the articles, from family members in 11 (36%) of the
articles, and from other stakeholders with disabilities in 6 (19%) of the articles. In two (6%) of the
articles, data were obtained only from family members and not from program recipients. There was
variation when we examined the articles according to the types of disabilities addressed by the
programs in question. Specifically, in 88% of the articles describing programs that served individuals with psychiatric disabilities, data were collected from recipients. This was true in 80% of the
articles focused on programs for recipients with developmental disabilities and in 50% of the articles
on programs focused on other types of disabilities.
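As a worked check of the proportions just cited (assuming, as the table note suggests, that each percentage is a simple share of the 31 articles with described data sources):

```python
# Recomputing the reported shares from the counts in the text; only the counts
# and N = 31 come from the article, the rest is illustrative.
n_articles = 31
counts = {
    "program recipients with disabilities": 24,
    "family members": 11,
    "other stakeholders with disabilities": 6,
}
for source, n in counts.items():
    print(f"{source}: {n}/{n_articles} = {n / n_articles:.1%}")
# -> 77.4%, 35.5%, 19.4%, matching the rounded figures reported above.
```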
As shown in Table 4, the context in which the evaluation was conducted generally did not appear
to influence the inclusion of program recipients in data collection. One exception was with
evaluations conducted in educational settings. Only 57% of the evaluations conducted in this type
of setting included stakeholders in their data collection, compared to 75% and 88% in health and
community/social service/vocational settings, respectively. Additionally, mixed-methods studies
more frequently included data collection from recipients: 91% of these studies did so, compared to 64% of quantitative-only evaluations and 75% of qualitative-only studies.

Table 4. Inclusion as a Data Source by Program and Evaluation Context (N = 31).

                                          Were data collected from recipients?
Program and evaluation context            Yes (N = 24), n (%)   No (N = 7), n (%)   Total (N = 31)
Field
  Community/social service/vocational     14 (88)               2 (13)              16
  Health                                   6 (75)               2 (25)               8
  Education                                4 (57)               3 (43)               7
Conducted in the United States
  Yes                                     15 (71)               6 (29)              21
  No                                       4 (80)               1 (20)               5
  Not specified                            5 (100)              0 (0)                5
Scope
  Local                                    7 (70)               3 (30)              10
  State                                    8 (80)               2 (20)              10
  National                                 4 (67)               2 (33)               6
  Not specified                            5 (100)              0 (0)                5
Sites
  Single                                   6 (100)              0 (0)                6
  Multi                                   15 (75)               5 (25)              20
  No sites/not described                   3 (60)               2 (40)               5
Data collection methods
  Quantitative                             7 (64)               4 (36)              11
  Qualitative                              6 (75)               2 (25)               8
  Mixed                                   10 (91)               1 (9)               11
  Not specified                            1 (100)              0 (0)                1

Note. Only consists of articles where data sources are described.

Figure 2. Methods used to collect data from participants by disability type (N = 24). [Bar chart comparing the number of articles using interviews, focus groups, surveys, and observations for intellectual/developmental (n = 8), psychiatric (n = 14), and other (n = 2) disability groups.] Figure 2 includes articles where data were collected from program recipients with disabilities. Multiple methods were used in some articles, so totals sometimes exceed the number of articles.

Table 5. Excerpts From Articles Where Adaptations to Data Collection Strategies Were Used.

Use of Visual Prompts
". . . the facilitators distributed a paper with 12 different labeled faces on it to each participant. The participants were asked to circle the face or faces that best illustrated how they generally felt about the program (e.g., anxious, happy, sad, bored, etc.). They were also encouraged to write comments on the back or to verbally provide feedback to the facilitators" (Heinz, 2003, p. 267). [Survey, IDD]
"They were invited to tell a story about themselves and the project with the help of two visual images. They could choose from a set of 40 images we had selected. The set was as diverse as possible, because we wanted to appeal to people with different interests" (Abma, 2000, p. 203). [Focus Group, Psych]

Stakeholder Involvement
"From this third party, the research team learnt about the needs and preferences of the clients. This included the appropriate time to contact and interview the client, the most suitable interview setting, the language style that should be used, regularity of interludes during the interview, and whether researcher safety would be a potential risk. The third party also liaised with clients about the study. . ." (Dadich & Muir, 2009, p. 47). [Structured/Semistructured Interview, Psych]
"Selected self-advocates [with disabilities] . . . interviewed each other while the research team observed, and participated in informal focus groups to critique the instrument and procedures. These sessions suggested the need for a flash card which respondents could use to indicate their answers" (Schalock, Bonham, & Marchand, 2000, p. 81). [Structured/Semistructured Interview, IDD]

Proactively Seek Out Recipients' Voice
"The interview protocol for this forum, and for the others, was specifically designed to engage the disenfranchised group, to ensure that their voices were in the mix. As an example, the first question after the general introduction was directed to the people who resided in the institution" (MacNeil, 2000, p. 57). [Focus Group, Psych]
"Standard facial, body, and vocal queues may be absent throughout the entire process. Although it was sometimes difficult to read client disposition, the researchers regularly reminded clients that they could choose to end or break the interview at any time" (Dadich & Muir, 2009, p. 53). [Structured/Semistructured Interview, Psych]

Participate in Program Context
"In the evaluation the on-site evaluator . . . could not become a full participant in the program due to the fact that the program was designed to serve a special population. However, she made a sustained effort over several months to participate in the program as much as possible in order to yield the most meaningful observational data" (Heinz, 2003, p. 264). [Observation, IDD]
"Those who are not familiar with an interview for evaluation research often experience it as an examination for therapy or treatment. To avoid this and to gain trust we 'hung out' and worked together with the patients" (Abma, 2000, pp. 201-202). [Interview, Psych]

Note. IDD = intellectual or developmental disability; Psych = psychiatric disability.

Methods of obtaining data. Interviews, focus groups, surveys, and observations were all used to
collect data from recipients; document analysis did not appear in any of the articles. Interviews were
used in 75% of the articles in which data were collected from recipients. In 63% of the articles, either
structured or semistructured interviews were used; unstructured interviews were used in 8% and
interviews of an unspecified format were used in 17%.
Because of the challenges associated with the participation of individuals with specific types of
disabilities, it is useful to examine what types of data collection were used with which recipient
populations. We found that interviews were the most common method used to obtain the
participation of individuals with developmental disabilities and psychiatric disabilities. In fact,
interviews were used in all of the articles in which data were collected from those with psychiatric
disabilities. Focus groups were the second most common method used for those with psychiatric
disabilities and the least commonly used method for those with developmental disabilities
(Figure 2).

Table 6. Evaluation Participation by Program and Evaluation Context (N = 29).

                                          Did program recipients participate in the evaluation?
Program and evaluation characteristics    Yes (N = 9), n (%)    No (N = 20), n (%)   Total (N = 29)
Field
  Health                                  3 (43)                4 (57)                7
  Community/social service/vocational     5 (36)                9 (64)               14
  Education                               1 (13)                7 (88)                8
Conducted in United States
  Yes                                     4 (20)               16 (80)               20
  No                                      2 (50)                2 (50)                4
  Not specified                           3 (60)                2 (40)                5
Scope
  Local                                   2 (18)                9 (82)               11
  State                                   3 (33)                6 (67)                9
  National                                2 (33)                4 (67)                6
  Not described                           2 (67)                1 (33)                3
Sites
  Single                                  3 (43)                4 (57)                7
  Multi                                   6 (30)               14 (70)               20
  No sites/not described                  0 (0)                 2 (100)               2
Data collection methods
  Quantitative                            1 (11)                8 (89)                9
  Qualitative                             3 (38)                5 (63)                8
  Mixed                                   5 (50)                5 (50)               10
  Not specified                           0 (0)                 2 (100)               2

Note. Only includes articles in which any stakeholder participation is mentioned.

Examples of data collection methods. The interviews used in the evaluations had a variety of
formats. Evaluators asked both open- and closed-ended questions; some protocols were brief, and
others were in depth. A few data collection tools were standardized and previously validated, while
others were newly created for the project. Many included measures of recipients' subjective views
on the program rather than measures of more objective characteristics such as physical condition or
academic achievement.
Some of the articles offered suggestions for modifying data collection procedures to
accommodate program recipients. The most common suggestions for adapting to various
communication styles were to allow for flexibility, to individualize procedures, and to simplify
answer choices. Table 5 describes specific adaptations. Each example is followed by a notation
of the type of data collection utilized in that study and the type of disability addressed by that
program. In particular, the articles identified some strategies that could apply both to individuals
with developmental and psychiatric disabilities. For example, interaction with stakeholders, either
recipients with disabilities or those who knew them well, helped evaluators to tailor data collection strategies to the population's needs. The use of visual prompts was another technique that allowed sensitivity
to various communication mechanisms, whereby recipients could choose from a set of pictures, and
could also supplement their choice with comments of their own. Being involved in the program
helped the evaluator build rapport with recipients and better understand the unique program context.
Finally, in cases where there was a concern that the voices of the recipients would not be heard, the
evaluator could take extra steps to encourage them to speak.

Figure 3. Participation in specific evaluation stages by stakeholder type. [Bar chart showing, for each evaluation stage (Describing the Program, Focusing Questions and Scope, Focusing Methods, Gathering Evidence, Analysis, Interpretation, Ensuring Use), the number of articles in which other stakeholders, program recipients, family members, and other stakeholders with a disability participated.]

Inclusion as a Participant
Of the 29 articles where stakeholder participation in the evaluation was mentioned, program
recipients with disabilities participated in 31% of the cases, family members participated in 24%
of the cases, and other stakeholders with disabilities participated in 10% of the cases. Within those
29 articles, participation appeared to be related to disability type, in that 47% (7 of the 15) of the
evaluations of programs for people with psychiatric disabilities described recipient participation,
whereas only 13% (1 of the 8) of the evaluations of programs for people with developmental
disabilities described recipient participation.
Our coding process also revealed that participation occurred relatively equally across most of the
context subcategories and was not restricted to certain settings (Table 6). However, there was a
smaller proportion of recipient participation in evaluations of educational programs (13%) than in
evaluations in health (43%) or community/social service/vocational settings (36%), and a smaller
proportion of recipient participation in evaluations that only used quantitative data collection
methods (11%) versus evaluations where mixed methods (50%) or qualitative data
collection methods (38%) were used (Table 6).
We conducted a frequency count to compare participant and stakeholder involvement across
the various evaluation stages (Figure 3). Program recipient participation occurred in all of the seven evaluation stages except the Focusing Questions stage. The stages in which program
recipients were most often involved were Interpretation (6 articles) and Focusing Methods
(4 articles). These were followed by Ensuring Use (3 articles), Gathering Evidence (2 articles),
Analysis (2 articles), and Describing the Program (1 article). The most common stage for
family participation was Gathering Evidence (3 articles); other stakeholders with disabilities
most commonly participated in the Interpretation stage (3 articles). Other stakeholders tended
to participate in the Focusing Methods stage (19 articles). Overall, program recipients
participated more frequently than family members across almost all stages of the evaluation

except for Gathering Evidence, but less often than other stakeholders at each evaluation stage. It is important to note that "other stakeholders" could represent the involvement of multiple types of people throughout the evaluation (such as implementers, funders, directors, etc.), rather than the sustained participation of one individual or group.

Table 7. Sample Descriptions of Program Recipient Participation.

Abma (2000). Disability type: Psychiatric. Evaluation context: Organizational; single site; mixed methods. Stages where participation occurred: Interpret; Ensure Use. Sample quote: "During and after every interview we asked for feedback from the respondents so we could check the credibility of our findings . . . in addition several group member checks were organized" (p. 202). [Interpret]

Cook, Carey, Razzano, Burke, and Blyler (2002). Disability type: Psychiatric. Evaluation context: Organizational; multisite; mixed methods. Stages where participation occurred: Focus Methods; Gather Evidence; Analyze; Interpret; Ensure Use. Sample quote: "Roles of the consumer assembly have included hypothesis development, examination and interpretation of preliminary findings, qualitative and case study analysis, identification of policy-related issues resulting from EIDP" (p. 34). [Focus methods, analysis, interpret, ensure use]

Lepage-Chabriais (2005). Disability type: Psychiatric. Evaluation context: Primary/secondary education; multisite; mixed methods. Stages where participation occurred: Focus Methods. Sample quote: "With the agreement of both the youths and the institutions, we have chosen the term successful placement for that placement that allowed the youth to achieve the fixed goals of the educational program" (p. 463). [Focus methods]

Schalock et al. (2000). Disability type: Intellectual/developmental. Evaluation context: Community/social service; multisite; quantitative. Stages where participation occurred: Focus Methods; Gather Evidence. Sample quote: "Selected self-advocates were trained as interviewers during these work sessions, interviewed each other while the research team observed, and participated in informal focus groups to critique the instrument and procedures" (p. 81). [Focus methods, Gather evidence]

Note. Coding was not done on quotes in isolation. Rather, while specific quotes were highlighted, some could only be coded in the context of the rest of the article. For example, if consumers were said to serve on an advisory board, and this advisory board was said to be involved in a stage, it was presumed that the consumer participated in that stage unless said otherwise.
Description of specific examples. Descriptive examples illustrate the types of participation that
occurred across different evaluation stages. Three articles described involvement of program
recipients in conducting member checks, a tactic used in qualitative studies to validate evaluation
findings through review by those from whom the data were collected (Creswell & Plano Clark, 2011).
These three articles were also the only ones that described recipient participation in single-site
evaluations. Three included board members in multisite evaluations. Schalock, Bonham, and Marchand (2000) spoke about activities conducted as part of the Ask Me! project (described above in the literature review), and theirs was the only article to describe inclusion of people with developmental disabilities. In that

example, the recipients provided input on measurement tools and also conducted interviews. Lepage-Chabriais (2005) described limited participation, consisting of some sort of program recipient
feedback at one point in the evaluation. This was also the only case in which children or youth were
involved. None of the cases described youth participation beyond having them provide passive input
into the evaluation methods. Table 7 offers examples of these various types of inclusion. Each sample
quote is followed by a list of the stages during which participation occurred.

Discussion
This study examined the type and level of involvement that program recipients with disabilities have
in evaluations, and described various strategies that evaluators use when working with populations
with disabilities. When the results are mapped on to the inclusion model presented in Figure 1, we
find noteworthy relationships between feasibility considerations (e.g., participant characteristics)
and levels and quality of involvement. For example, individuals with psychiatric disabilities were
more likely to be included in evaluations (whether as sources of data or as participants) than were
individuals with developmental/intellectual disabilities. This finding is arguably not surprising,
since there are more challenges in collecting data from those with developmental disabilities than
psychiatric disabilities (Finlay & Lyons, 2001). In addition, this study found that most inclusion
activities tended to revolve around data collection (77%) rather than deeper participation in the
evaluation process (31%). Finally, program recipients with disabilities tended to participate in
evaluations comparatively less often than other stakeholder groups.
Contextual factors measured in this study appeared to have less of an influence on inclusion
levels than other feasibility considerations did. Of the contextual factors we coded, only the field and methodological approach appeared to be associated with differences in inclusion levels. More specifically,
educational contexts and studies that employed solely quantitative research approaches tended to
have fewer individuals with disabilities included as data sources and participants. The former
difference could be attributable to the fact that many of the educational programs included youth
below the age of 12, and in these programs data were often collected from parents or legal guardians,
rather than the youth themselves. This appears to be the only major differentiating factor between the
educational programs and programs in other areas; however, this observation warrants further
investigation in subsequent studies. The latter difference may have reflected the greater range of data
collection methods available in nonquantitative approaches, as well as the greater consistency
between the goals of participatory approaches and qualitative or mixed-method studies as compared
to the goals of quantitative studies. In particular, studies with qualitative components may place
greater value on representing the subjective experience of service recipients.
Indeed, one of the other interesting aspects of this study was the different approaches and
methods evaluators used when working with individuals with disabilities. Evaluators used a range
of techniques, including interviews, focus groups, surveys, and observations. Interviews in particular
were especially useful for gathering the input of people with psychiatric and developmental
disabilities. Interviews allow the evaluator to tailor procedures to individual respondents, closely
assess recipients' capacity to respond, and make sure questions are understood as the evaluator
intended (Bonham et al., 2004). Regardless of the methods used, the evaluator needs to develop data
collection procedures that keep in mind the needs and abilities of specific recipients in the program.
Together, these findings provide tentative support for parts of the inclusion model presented in
Figure 1. They show a relatively strong connection between participant characteristics and their
likelihood of being involved in the evaluation process as well as the quality of inclusion: individuals with psychiatric disabilities were more likely to participate more fully in some aspects of the
evaluation versus just being used as a data source. These findings also support the connection
between context (e.g., program field) and level of inclusion and evaluation quality. This suggests
that the model could be revised so that participant characteristics and context also inform the quality of
inclusion. Although not all elements of the model were represented in this study, this is an initial
step toward understanding the factors that connect different feasibility considerations with levels
of inclusion and how these connections are weighed by evaluators.

Making Inclusion Work


Evaluators may encounter certain challenges to collecting data in some contexts, and they have
various strategies to draw on when this occurs. Involving other stakeholders and piloting data
collection tools can help ensure the appropriateness of measurement procedures for the recipient
population. Visiting or even taking part in the program can both build rapport with recipients and
allow evaluators to better understand what evaluation procedures are appropriate to the program and
recipient context. Evaluators might need to actively seek out those quieter voices, either because of
power dynamics in a program or because of the communication styles of the recipients. With
individuals who communicate differently, it can be helpful to use creative strategies that give
recipients multiple ways to access information, whether by listening to the interviewer's speech,
by reading text, or by seeing pictures. Providing room for flexibility in procedures for recipients
appears to be helpful as well. As described in some articles, the target population of a program might
include a great deal of diversity in both recipient abilities and characteristics of their environmental
contexts (e.g., their living situations). These suggestions can apply to those with either psychiatric or
developmental disabilities, depending on the individual and their context. Other types of disabilities,
not represented in this study, may merit alternative strategies. In addition, most of the participation
occurred with adults, so many of the strategies the articles identified for collecting data from or
involving people with disabilities may not capture those needed when involving children.
Practitioners should consider involving recipients in at least one or two stages of an evaluation, or
even throughout the evaluation process. The potential benefits from inclusion include increased
evaluation accuracy and empowered program recipients. Meanwhile, evaluators should also think
seriously about what they want to accomplish through inclusion, whether involvement is truly in the
interest of recipients, and what type of inclusion is needed to accomplish these goals. This will
ensure that the engagement with recipients with disabilities is a genuine process that goes beyond
merely symbolic benefits.

Directions for Future Research


There remains a great deal for us to discover about the inclusion of individuals with disabilities in the
evaluation of programs designed to meet their needs. Future research can build on this study and the
inclusion model (Figure 1) to continue exploring the factors that influence the levels and types of
inclusion in various evaluation efforts, as well as the quality of inclusive practices. Future
exploration should focus primarily on the relationship between the level and quality of inclusion and
the actual benefits of these efforts. This is an aspect of the model that could be examined in future
studies through in-depth interviews and focus groups with evaluation practitioners and, if possible,
with program implementers and participants.
Future studies may also examine other contextual factors such as whether a program serves only
individuals with disabilities or the general population as well. It will be important to determine the
extent to which evaluators are willing to devote the resources needed to overcome potential
communication and cognitive barriers when only a minority of program recipients have disabilities.
Additional investigations could also focus on evaluator characteristics, which have been previously
shown to relate to stakeholder involvement (Azzam, 2010, 2011; Christie, 2003). For example, these
studies could examine evaluators' beliefs about social justice, their methodological orientations, and
their levels of experience, and then compare these factors to inclusion levels and quality in
their practices.
Finally, the validity of data collection methods should be studied further to ensure that recipient
voices, when included, are accurately represented, especially in evaluations where this is the only
way recipients are involved. Evaluators should ensure that inclusion is designed strategically to
achieve the desired goal of participation, and should focus energy and resources accordingly. Further
efforts to understand how to increase the quality of inclusion can help ensure that people with
disabilities are not only included, but that this inclusion actually benefits them and other
stakeholder groups.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or
publication of this article.

Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.

Note
1. Please note, the criteria we used in selecting journals were derived from the Christie and Fleischer (2010)
article; however, the actual articles were drawn from many different areas, including community programs,
health programs, vocational programs, and educational programs.

References
Abma, T. A. (2000). Stakeholder conflict: A case study. Evaluation and Program Planning, 23, 199–210. doi:10.1016/S0149-7189(00)00006-9
Abma, T. A., Nierse, C. J., & Widdershoven, G. A. (2009). Patients as partners in responsive research: Methodological notions for collaborations in mixed research teams. Qualitative Health Research, 19, 401–415. doi:10.1177/1049732309331869
American Association on Intellectual and Developmental Disabilities. (2011). Retrieved December 2010 from http://www.aamr.org/content_100.cfm?navID=21
Americans with Disabilities Act. (1990). U.S. Code. 42, 12101–12213.
Azzam, T. (2010). Evaluator responsiveness to stakeholders. American Journal of Evaluation, 31, 45–65. doi:10.1177/1098214009354917
Azzam, T. (2011). Evaluator characteristics and methodological choice. American Journal of Evaluation, 32, 376–391. doi:10.1177/1098214011399416
Balch, G. I., & Mertens, D. M. (1999). Focus group design and group dynamics: Lessons from deaf and hard of hearing participants. American Journal of Evaluation, 20, 265–277. doi:10.1177/109821409902000208
Barnes, C. (2003). What a difference a decade makes: Reflections on doing emancipatory disability research. Disability & Society, 18, 3–17.
Birman, D. (2007). Sins of omission and commission: To proceed, decline, or alter? American Journal of Evaluation, 28, 79–85. doi:10.1177/1098214006298059
Boland, M., Daly, L., & Staines, A. (2008). Methodological issues in inclusive intellectual disability research: A health promotion needs assessment of people attending Irish disability services. Journal of Applied Research in Intellectual Disabilities, 21, 199–209. doi:10.1111/j.1468-3148.2007.00404.x
Bonham, G. S., Basehart, S., Schalock, R. L., Marchand, C. B., Kirchner, N., Rumenap, J. M., & Scotti, J. (2004). Consumer-based quality of life assessment: The Maryland Ask Me! Project. Mental Retardation, 42, 338–355. doi:10.1352/0047-6765(2004)42
Botcheva, L., Shih, J., & Huffman, L. C. (2009). Emphasizing cultural competence in evaluation: A process oriented approach. American Journal of Evaluation, 30, 176–188. doi:10.1177/1098214009334363
Brandon, P. R. (1998). Stakeholder participation for the purpose of helping ensure evaluation validity: Bridging the gap between collaborative and non-collaborative evaluations. American Journal of Evaluation, 19, 325–337. doi:10.1177/109821409801900305
Caldwell, J., Hauss, S., & Stark, B. (2009). Participation of individuals with developmental disabilities and families on advisory boards and committees. Journal of Disability Policy Studies, 20, 101–109. doi:10.1177/1044207308327744
Campbell, J. (1997). How consumers/survivors are evaluating the quality of psychiatric care. Evaluation Review, 21, 357–363.
Centers for Disease Control and Prevention. (1999). Framework for program evaluation in public health. Morbidity and Mortality Weekly Report, 48(No. RR-11). Atlanta, GA: Author.
Chappell, A. L. (2000). Emergence of participatory methodology in learning difficulty research: Understanding the context. British Journal of Learning Disabilities, 28, 38–43.
Chen, S., Poland, B., & Skinner, H. A. (2007). Youth voices: Evaluation of participatory action research. Canadian Journal of Program Evaluation, 22, 125–150.
Chouinard, J. A., & Cousins, J. B. (2009). A review and synthesis of current research on cross-cultural evaluation. American Journal of Evaluation, 30, 457–494. doi:10.1177/1098214009349865
Christie, C. A. (2003). What guides evaluation? A study of how evaluation practice maps onto evaluation theory. New Directions for Evaluation, 97, 7–35. doi:10.1002/ev.72
Christie, C. A., & Fleischer, D. N. (2010). Insight into evaluation practice: A content analysis of designs and methods used in evaluation studies published in North American evaluation-focused journals. American Journal of Evaluation, 31, 326–346. doi:10.1177/1098214010369170
Conder, J., Milner, P., & Mirfin-Veitch, B. (2011). Reflections on a participatory project: The rewards and challenges for the lead researchers. Journal of Intellectual and Developmental Disability, 36, 39–48. doi:10.3109/13668250.2010.548753
Cook, J. A., Carey, M. A., Razzano, L. A., Burke, J., & Blyler, C. R. (2002). The pioneer: The employment intervention demonstration program. New Directions for Evaluation, 94, 31–44. doi:10.1002/ev.49
Cousins, J. B., & Whitmore, E. (1998). Framing participatory evaluation. New Directions for Evaluation, 80, 5–23. doi:10.1002/ev.1114
Creswell, J. W., & Plano Clark, V. L. (2011). Designing and conducting mixed methods research. Thousand Oaks, CA: Sage.
Dadich, A., & Muir, K. (2009). Tricks of the trade in community mental health research: Working with mental health services and clients. Evaluation & the Health Professions, 32, 38–58. doi:10.1177/0163278708328738
Developmental Disabilities Assistance and Bill of Rights Act, Public Law 106-402, 2000.
Equal Employment Opportunity Commission. (1997). EEOC enforcement guidance on the Americans with Disabilities Act and psychiatric disabilities. Washington, DC: Author. Retrieved from http://www.eeoc.gov/policy/docs/psych.html
Finlay, W. M. L., & Lyons, E. (2001). Methodological issues in interviewing and using self-report questionnaires with people with mental retardation. Psychological Assessment, 13, 319–335. doi:10.1037/1040-3590.13.3.319
Fleischer, D. N., & Christie, C. A. (2009). Evaluation use: Results from a survey of U.S. American Evaluation Association members. American Journal of Evaluation, 30, 158–175. doi:10.1177/1098214008331009
Fredericks, K. A. (2005). Network analysis of a demonstration program for the developmentally disabled. New Directions for Evaluation, 107, 55–68. doi:10.1002/ev.161
Fredericks, K. A., Deegan, M., & Carman, J. G. (2008). Using system dynamics as an evaluation tool: Experience from a demonstration program. American Journal of Evaluation, 29, 251–267. doi:10.1177/1098214008319446
Gilbert, T. (2004). Involving people with learning disabilities in research: Issues and possibilities. Health and Social Care in the Community, 12, 298–308.
Gill, C. J. (1999). Invisible ubiquity: The surprising relevance of disability issues in evaluation. American Journal of Evaluation, 20, 279–287. doi:10.1177/109821409902000209
Harry, B. (2002). Trends and issues in serving culturally diverse families of children with disabilities. Journal of Special Education, 36, 131–138.
Harvey, P. D., Wingo, A. P., Burdick, K. E., & Baldessarini, R. J. (2010). Cognition and disability in bipolar disorder: Lessons from schizophrenia research. Bipolar Disorders, 12, 364–375. doi:10.1111/j.1399-5618.2010.00831.x
Hassouneh, D., Alcala-Moss, A., & McNeff, E. (2011). Practical strategies for promoting full inclusion of individuals with disabilities in community-based participatory intervention research. Research in Nursing & Health, 34, 253–265. doi:10.1002/nur.20434
Heinz, L. (2003). A process evaluation of a parenting group for parents with intellectual disabilities. Evaluation and Program Planning, 26, 263–274. doi:10.1016/S0149-7189(03)00030-2
Heller, T., Pederson, E. L., & Miller, A. B. (1996). Guidelines from the consumer: Improving consumer involvement in research and training for persons with mental retardation. Mental Retardation, 34, 141–148.
Jurkowski, J. M., & Ferguson, P. (2008). Photovoice as participatory action research tool for engaging people with intellectual disabilities in research and program development. Intellectual and Developmental Disabilities, 46, 1–11. doi:10.1352/0047-6765(2008)46[1:PAPART]2.0.CO;2
Kiernan, C. (1999). Participation in research by people with learning disability: Origins and issues. British Journal of Learning Disabilities, 27, 43–47.
Lepage-Chabriais, M. (2005). Evaluation of children's stay in institutions: What is working? Evaluation Review, 29, 454–466. doi:10.1177/0193841X05279082
Linhorst, D. M., & Eckert, A. (2002). Involving people with severe mental illness in evaluation and performance improvement. Evaluation & The Health Professions, 25, 284–301. doi:10.1177/0163278702025003003
MacNeil, C. (2000). Surfacing the realpolitik: Democratic evaluation in an antidemocratic climate. New Directions for Evaluation, 85, 51–62. doi:10.1002/ev.1161
McDonald, K. E., Keys, C. B., & Henry, D. B. (2008). Gatekeepers of science: Attitudes toward the research participation of adults with intellectual disability. American Journal on Mental Retardation, 113, 466–478. doi:10.1352/2008.113
Mertens, D. M. (1999). Inclusive evaluation: Implications of transformative theory for evaluation. American Journal of Evaluation, 20, 1–14. doi:10.1177/109821409902000102
Mertens, D. M. (2007a). Transformative considerations: Inclusion and social justice. American Journal of Evaluation, 28, 86–90. doi:10.1177/1098214006298058
Mertens, D. M. (2007b). Transformative paradigm: Mixed methods and social justice. Journal of Mixed Methods Research, 1, 212–225. doi:10.1177/1558689807302811
Perry, J., & Felce, D. (2002). Subjective and objective quality of life assessment: Responsiveness, response bias, and resident:proxy concordance. Mental Retardation, 40, 445–456. doi:10.1352/0047-6765(2002)040<0445:SAOQOL>2.0.CO;2
Read, S., & Maslin-Prothero, S. (2011). The involvement of users and carers in health and social research: The realities of inclusion and engagement. Qualitative Health Research, 21, 704–713. doi:10.1177/1049732310391273
Schalock, R. L., Bonham, G. S., & Marchand, C. B. (2000). Consumer based quality of life assessment: A path model of perceived satisfaction. Evaluation and Program Planning, 23, 77–87. doi:10.1016/S0149-7189(99)00041-5
Smith, B., & O'Flynn, D. (2000). The use of qualitative strategies in participant and emancipatory research to evaluate developmental disability service organizations. European Journal of Work and Organizational Psychology, 9, 515–526. doi:10.1080/13594320050203111
Stancliffe, R. J. (2000). Proxy respondents and quality of life. Evaluation and Program Planning, 23, 89–93. doi:10.1016/S0149-7189(99)00042-7
Taut, S. (2008). What have we learned about stakeholder involvement in program evaluation? Studies in Educational Evaluation, 34, 224–230. doi:10.1016/j.stueduc.2008.10.007
Toal, S. A. (2009). The validation of the evaluation involvement scale for use in multisite settings. American Journal of Evaluation, 30, 349–362. doi:10.1177/1098214009337031
Walmsley, J. (2004). Involving users with learning difficulties in health improvement: Lessons from inclusive learning disability research. Nursing Inquiry, 11, 54–64. doi:10.1111/j.1440-1800.2004.00197.x
Walmsley, J., & Johnson, K. (2003). Inclusive research with people with learning disabilities: Past, present and futures. London, England: Jessica Kingsley.
Ware, J. (2004). Ascertaining the views of people with profound and multiple learning disabilities. British Journal of Learning Disabilities, 32, 175–179. doi:10.1111/j.1468-3156.2004.00316.x
Wehmeyer, M. L. (1995). The Arc's Self-Determination Scale: Procedural guidelines. Arlington, TX: The Arc National Headquarters.
World Health Organization. (2011). World report on disability. Retrieved from http://whqlibdoc.who.int/hq/2011/WHO_NMH_VIP_11.01_eng.pdf
Yarbrough, D. B., Shulha, L. M., Hopson, R. K., & Caruthers, F. A. (2011). The program evaluation standards: A guide for evaluators and evaluation users (3rd ed.). Thousand Oaks, CA: Sage.
