Abstract
Although evaluation theorists over the last two decades have argued for the importance of including
stakeholders from marginalized groups in program planning and research, little is known about the
degree of inclusion in program evaluation practice. In particular, we know little about the type and
level of inclusion of people with intellectual, developmental, and psychiatric disabilities in the
evaluation of programs that aim to serve them. Through a content analysis of articles published in
the last decade describing evaluations of programs for people with these types of disabilities, this
article describes which stakeholders have been included in evaluations, how program recipient input
was obtained, and in which stages of the evaluation stakeholder participation occurred. The findings
indicate that program recipient disability type (developmental, psychiatric, or other) may predict
type and level of inclusion, and inclusion tends to occur in later parts of the evaluation process.
Keywords
evaluation inclusion, disabilities, stakeholder involvement, data collection, evaluation practice
Inclusion of different stakeholders has long been an important consideration of evaluation practice.
It is prescribed in many evaluation theories, including democratic, participatory, and empowerment
evaluation (Christie, 2003) and can range from obtaining stakeholder input on a program, to
employing their assistance in data collection, to having them actively collaborate in the evaluation
design and interpretation of results (Mertens, 1999). Stakeholder inclusion can increase the
likelihood of use (Fleischer & Christie, 2009; Toal, 2009), relevancy of the evaluation to the
community of interest, and accuracy of results (Botcheva, Shih, & Huffman, 2009; Brandon,
1998; Chouinard & Cousins, 2009). Inclusion may also promote social justice by giving recipients
a voice in program decision making (Mertens, 1999).
The importance of inclusion is reiterated by the Joint Committee on Standards for Educational
Evaluation, a network of professional organizations focused on the quality of evaluation practice.
Corresponding Author:
Miriam R. Jacobson, Claremont Graduate University, 123 East 8th Street, Claremont, CA 91711, USA.
Email: jacobson.miriam@gmail.com
Definitions of disability types:

Disability: A physical or mental impairment that substantially limits one or more major life activities (Americans with Disabilities Act, 1990).

Developmental disability: A severe, chronic disability of an individual that is attributable to a mental or physical impairment or a combination of mental and physical impairments, is manifested before the individual attains age 22, and is likely to continue indefinitely (The Developmental Disabilities Assistance and Bill of Rights Act of 2000). Some examples are autism, cerebral palsy, and Down syndrome.

Intellectual disability: Characterized by significant limitations in both intellectual functioning and adaptive behavior, which covers many everyday social and practical skills. This disability originates before the age of 18 (American Association on Intellectual and Developmental Disabilities, 2011). It was previously referred to as mental retardation and can be considered one disability within the broader category of developmental disabilities.

Psychiatric disability: A mental or psychological disorder that substantially limits one or more major life activities (Equal Employment Opportunity Commission, 1997).
In particular, the standards of attention to stakeholders and responsive and inclusive practice
(Yarbrough, Shulha, Hopson, & Caruthers, 2011) clearly emphasize this point. Although
stakeholder inclusion can support other evaluation standards such as utility and accuracy, a
significant focus on inclusion can also compromise standards such as feasibility; for example, if
a high level of inclusion is costly for stakeholders or disruptive to the program (Yarbrough et al.,
2011). In such cases, the evaluator may need to choose which standard to emphasize
(Taut, 2008; Yarbrough et al., 2011). Therefore, even though inclusion is widely advocated, it may
sometimes be unfeasible in practice. In an attempt to understand this struggle, this study examines
how evaluators deal with potentially disparate demands when working with stakeholders who have
developmental, psychiatric, or other forms of disabilities.
Jacobson et al.
emancipatory model of disability research, whereby people with disabilities are in control of the
research process, as lead researchers (Barnes, 2003). They argued that putting people with disabilities in control of research is necessary to shift the research perspective from the medical model
(disability arises from a problem within the individual) to the social model, whereby disability arises
from a problem in society.
These demands for inclusion led to numerous changes in policy and practice (Barnes, 2003).
Increasingly, there were examples of participation of people with disabilities in research and
evaluation, and disabilities research funders began requiring participation of people with disabilities
in conducting the research (Walmsley & Johnson, 2003). In 2000, federal legislation called for
involvement of all people with developmental disabilities in the planning of their own programs and
services (Developmental Disabilities Assistance and Bill of Rights Act, 2000). People with
disabilities have now come to occupy places of influence in the research process and on
organizational committees (Caldwell, Hauss, & Stark, 2009; Cambell, 1997). While these trends
in research and program development have been well documented in the literature, it remains
unclear just how evaluation practice as a whole has addressed the issue of inclusion for people with
disabilities.
(such as incoherent speech or flat facial expressions) as part of their conditions. In such cases, data
collection procedures need to adapt for these circumstances (Dadich & Muir, 2009; Harvey, Wingo,
Burdick, & Baldessarini, 2010).
provide live transcriptions of the discussions that were displayed for focus group members. These
added expenses can be prohibitive in many small- or medium-scale evaluation projects and need
to be considered when designing an evaluation and setting its budget.
Figure 1. Conceptual model of inclusion. Program context, evaluation goals, availability of strategies and data collection tools, availability of resources (time, people, money), evaluator attitudes and approach, and program recipient characteristics all bear on the quality of inclusion.
American Journal of Evaluation 34(1)
participants, such as adapting communication mechanisms, providing for transportation needs, and
planning regular breaks during trainings (Hassouneh et al., 2011; Read & Maslin-Prothero, 2012).
It is important to note that even when research collaboration is done conscientiously, it may not
be desirable or appropriate for all stakeholders. For example, some individuals may not wish to
devote the time required to participate (Chen, Poland, & Skinner, 2007). For some individuals with
psychiatric disabilities, there may also be the potential for increase in psychiatric symptoms with the
added stress of the project (Linhorst & Eckert, 2002). It is also possible that those from the
community who are thought to be most capable of being active collaborators in the research
procedures may not necessarily be representative of the target population (Heller et al., 1996; Smith
& O'Flynn, 2000; Walmsley, 2004). All of these issues must be considered as the evaluation is both
designed and carried out.
most relevant. Understanding the role that program context and recipient characteristics have on
feasibility could inform evaluators about the level of inclusion that has been achieved in similar
projects and what they can expect when planning their own evaluations. It could also inform efforts
to further increase inclusion in areas where it is presently perceived to be less feasible. Many case
examples have described the need for additional resources in participatory projects, suggesting that
disability type may negatively impact the level of inclusion (Conder et al., 2011; Hassouneh et al.,
2011). However, case descriptions have not enabled feasibility comparisons across different
program contexts, or across recipients of different ages or disability types. It is possible that with
increases in the availability of inclusive strategies and data collection tools starting in the 1990s,
and with changes in researcher and evaluator attitudes, inclusion is now more feasible across multiple disability groups and contexts (Barnes, 2003; Chappell, 2000; McDonald, Keys, & Henry, 2008;
Walmsley & Johnson, 2003).
Specific contextual and recipient variables were selected for study because of their relevance to
inclusive practices. However, some potentially pertinent variables (e.g., scope of budget) were
excluded because they were not consistently reported in the articles used in this study. This study
also uses the information gathered on these elements to further refine the model's development
and to highlight common methodologies and approaches that evaluators use when including
individuals with disabilities in their work.
Research Questions
Because theorists assert the importance of inclusion of stakeholders, especially those from
marginalized groups, it is important that we understand the degree to which principles of inclusion
are actually instituted in evaluation practice. Using the conceptual model as a guide (Figure 1), this
study aimed to understand the role that recipient stakeholders play in evaluation, specifically in
terms of the level and type of inclusion, the role of program and recipient characteristics in
involvement, and the nature of the strategies that evaluators use. To explore these issues, we
conducted a content analysis of peer-reviewed articles that describe evaluations of programs that
serve people with developmental, intellectual, psychiatric, and other disabilities. This approach is
appropriate in areas of research where theory is underdeveloped, because it allows the researcher
to take a close look at a group of cases that can support the development of a theoretical framework
and inform further study (Creswell & Plano Clark, 2011).
Our analysis was guided by the following questions:
1. To what extent have people with disabilities been included in the evaluation of programs that serve them?
2. What methodologies have been used to elicit the views of people with disabilities?
3. What has been the role of contextual variables, such as type of program, in moderating inclusion?
Gaining a deeper understanding of the ways in which evaluators include stakeholders in their work
can offer useful guidelines for practitioners who may be wary of inclusive approaches and, ultimately, may lead to more useful, meaningful evaluative practices.
Method
Journal Selection
Articles were obtained from peer-reviewed evaluation journals published in paper or online format
in the United States during the 10 years prior to the study (2000–2009), using selection criteria
adapted from Christie and Fleischer (2010). Specifically, evaluation journals were included if they
(1) had the word "evaluation" in their title and (2) focused on social sciences or education. With
these criteria, 10 journals were selected for this study (Table 2).
Restricting the search to evaluation journals kept the search focused and manageable. It also
allowed us to hand search each of the journals, which provided a more sensitive selection
procedure than keyword searches alone, which might have missed relevant articles. In
addition, we assumed that since the role of stakeholders is an important component of evaluation
practice, including only evaluation journals created a sample of articles more likely to discuss in
detail the participatory aspects of evaluation design. Articles in these journals are also more likely
to describe evaluations rather than applied research or program monitoring projects. It is important
to note, however, that this approach may not produce results representative of evaluation practice as
a whole, as many evaluation reports do not get published in peer-reviewed journals.
Article Selection
Within these journals, an article was included in the analysis if (1) it described an empirical study of
a program and (2) the majority of the program recipients had disabilities. Each journal's abstracts
were manually searched and then a scan was conducted to determine whether the article met the
inclusion criteria. A keyword search was conducted within each journal using words such as
"disabilities," "mental retardation," "psychiatric," "mental health," "deaf," "brain injury,"
"special needs," "autism," and "blind," in order to identify articles that were not found in the
manual search. If the article did not specifically mention that program recipients had disabilities but
they were described as having physical or mental conditions (such as mental illness) that interfered
with a major life activity (such as for those living in an institution rather than on their own), the
article was included. This was in keeping with the definition of disability in the Americans with
Disabilities Act (ADA) of 1990.
While other definitions, such as that of the World Health Organization (WHO, 2011), cover a
broader range of disabilities, the ADA definition was used to focus the study sample mostly on
individuals with psychiatric and developmental disabilities, and to compare these groups to a few other
types of disabilities such as physical or sensory disabilities. While this study focuses on intellectual,
developmental, psychiatric, physical, and communication disabilities, there are many other kinds of
disabilities not represented here, such as substance dependence, certain learning disabilities, or
physical illnesses that do not meet ADA criteria. Selection procedures identified 35 articles for inclusion in the content analysis. Two of the articles described the same evaluation (Fredericks,
2005; Fredericks, Deegan, & Carman, 2008), and of those two the most recent article was chosen. During the initial coding phase, we discovered that two articles did not meet our inclusion criteria and they
were subsequently discarded. One described the program planning process but not an evaluation or
study of a program, and in the other, the people with disabilities were not considered program recipients. Therefore, the final sample consisted of 32 articles.
Of the 32 articles included in the final sample, 16 focused on programs that serve individuals with
psychiatric disabilities. Ten of the articles described evaluations of programs designed for people
with intellectual or developmental disabilities. Six of the articles focused on programs designed
to address other disabilities or did not specify the types of disabilities addressed. Given the small
number of articles that focused on other types of disabilities, most of the analyses focused on the
other two categories.
Coding Instrument
A coding sheet and guidelines were developed for this study to deductively analyze the articles. An
initial list of coding categories (listed in Table 3) was developed and was further refined using the
evaluation and research methods literature to identify existing definitions or frameworks that could
be adapted or incorporated into the codebook. The code for disability types was also informed by a
preliminary review of some of the articles by one of the authors in order to see how disability types
were grouped and described.
Procedures
Two independent raters read through each article and completed the initial coding instrument.
Differences between raters were discussed, and a consensus was reached on the appropriate codes.
The raters utilized a sample of 11 articles to clarify coding procedures and guidelines, and to
subsequently revise them. These discussions led to the addition of new codes. The final codebook
was created and used by each rater to independently code all articles in the sample, and the level
of agreement equaled 92%. When coders did not agree, they discussed differences in ratings and
came to a consensus on the final ratings for each article. Table 3 details the main coding categories
and the definitions used in this study.
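The agreement level reported above is simple percent agreement: the share of coding decisions on which the two raters assigned the same code. A minimal sketch of that calculation (the rater codes below are invented for illustration and are not taken from the study):

```python
def percent_agreement(rater_a, rater_b):
    """Share of items that two raters coded identically."""
    if len(rater_a) != len(rater_b):
        raise ValueError("Both raters must code the same set of items")
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

# Hypothetical disability-type codes assigned to four articles by two raters
rater_a = ["psychiatric", "developmental", "other", "psychiatric"]
rater_b = ["psychiatric", "developmental", "psychiatric", "psychiatric"]
print(f"{percent_agreement(rater_a, rater_b):.0%}")  # three of four codes match
```

Note that simple percent agreement does not correct for agreement expected by chance; chance-corrected indices such as Cohen's kappa are often reported alongside it.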
It is important to note that the study's sample of peer-reviewed articles in evaluation journals
limits the generalizability of the results. There might have been a great deal of involvement that
is simply not reported in the articles, either because of editing or because the article only focuses
on one part of a larger project. We attempted to create codes that would capture as many details
about the context as possible, in order to reduce the impact of this limitation.
Results
The results section is divided into two broad areas that represent the type of inclusion that was coded
in this study. The first section examines the evaluations' likelihood of collecting information/data
from individuals with disabilities, along with the types of approaches and methods utilized during
this process. The second section examines the likelihood of involving individuals with disabilities
in the evaluation process, beyond just collecting data from them, along with the strategies utilized
during this process.
Figure 2. Methods used to collect data from participants, by disability type (N = 24): interviews, observations, focus groups, and surveys across the intellectual/developmental (n = 8), psychiatric (n = 14), and other (n = 2) disability groups; the y-axis shows the number of articles. Figure 2 includes articles where data were collected from program recipients with disabilities. Multiple methods were used in some articles, so totals sometimes exceed the number of articles.
Table 5. Excerpts From Articles Where Adaptations to Data Collection Strategies Were Used.

Use of Visual Prompts
"...the facilitators distributed a paper with 12 different labeled faces on it to each participant. The participants were asked to circle the face or faces that best illustrated how they generally felt about the program (e.g., anxious, happy, sad, bored, etc.). They were also encouraged to write comments on the back or to verbally provide feedback to the facilitators" (Heinz, 2003, p. 267). [Survey, IDD]
"They were invited to tell a story about themselves and the project with the help of two visual images. They could choose from a set of 40 images we had selected. The set was as diverse as possible, because we wanted to appeal to people with different interests" (Abma, 2000, p. 203). [Focus Group, Psych]

Stakeholder Involvement
"From this third party, the research team learnt about the needs and preferences of the clients. This included the appropriate time to contact and interview the client, the most suitable interview setting, the language style that should be used, regularity of interludes during the interview, and whether researcher safety would be a potential risk. The third party also liaised with clients about the study..." (Dadich & Muir, 2009, p. 47). [Structured/Semi-structured Interview, Psych]
"Selected self-advocates [with disabilities]...interviewed each other while the research team observed, and participated in informal focus groups to critique the instrument and procedures. These sessions suggested the need for a flash card which respondents could use to indicate their answers" (Schalock, Bonham, & Marchand, 2000, p. 81). [Structured/Semi-structured Interview, IDD]

Proactively Seek Out Recipients' Voices
"The interview protocol for this forum, and for the others, was specifically designed to engage the disenfranchised group, to ensure that their voices were in the mix. As an example, the first question after the general introduction was directed to the people who resided in the institution" (MacNeil, 2000, p. 57). [Focus Group, Psych]
"Standard facial, body, and vocal queues may be absent throughout the entire process. Although it was sometimes difficult to read client disposition, the researchers regularly reminded clients that they could choose to end or break the interview at any time" (Dadich & Muir, 2009, p. 53). [Structured/Semi-structured Interview, Psych]

Participate in Program Context
"In the evaluation the on-site evaluator...could not become a full participant in the program due to the fact that the program was designed to serve a special population. However, she made a sustained effort over several months to participate in the program as much as possible in order to yield the most meaningful observational data" (Heinz, 2003, p. 264). [Observation, IDD]
"Those who are not familiar with an interview for evaluation research often experience it as an examination for therapy or treatment. To avoid this and to gain trust we hung out and worked together with the patients" (Abma, 2000, pp. 201–202). [Interview, Psych]

Note. IDD = intellectual or developmental disability; Psych = psychiatric disability.
Methods of obtaining data. Interviews, focus groups, surveys, and observations were all used to
collect data from recipients; document analysis did not appear in any of the articles. Interviews were
used in 75% of the articles in which data were collected from recipients. In 63% of the articles, either
structured or semistructured interviews were used; unstructured interviews were used in 8% and
interviews of an unspecified format were used in 17%.
Because of the challenges associated with the participation of individuals with specific types of
disabilities, it is useful to examine what types of data collection were used with which recipient
populations. We found that interviews were the most common method used to obtain the
participation of individuals with developmental disabilities and psychiatric disabilities. In fact,
interviews were used in all of the articles in which data were collected from those with psychiatric
disabilities. Focus groups were the second most common method used for those with psychiatric
disabilities and the least commonly used method for those with developmental disabilities
(Figure 2).
Examples of data collection methods. The interviews used in the evaluations had a variety of
formats. Evaluators asked both open- and closed-ended questions; some protocols were brief, and
others were in depth. A few data collection tools were standardized and previously validated, while
others were newly created for the project. Many included measures of recipients' subjective views
on the program rather than measures of more objective characteristics such as physical condition or
academic achievement.
Some of the articles offered suggestions for modifying data collection procedures to
accommodate program recipients. The most common suggestions for adapting to various
communication styles were to allow for flexibility, to individualize procedures, and to simplify
answer choices. Table 5 describes specific adaptations. Each example is followed by a notation
of the type of data collection utilized in that study and the type of disability addressed by that
program. In particular, the articles identified some strategies that could apply both to individuals
with developmental and psychiatric disabilities. For example, interaction with stakeholders, either
recipients with disabilities or those who knew them well, helped evaluators to tailor data collection
strategies to the population's needs. The use of visual prompts was another technique that allowed
sensitivity to various communication mechanisms, whereby recipients could choose from a set of pictures and
could also supplement their choice with comments of their own. Being involved in the program
helped the evaluator build rapport with recipients and better understand the unique program context.
Finally, in cases where there was a concern that the voices of the recipients would not be heard, the
evaluator could take extra steps to encourage them to speak.
Figure 3. Number of articles reporting participation, by stakeholder type (other stakeholder, program recipient, family member, other stakeholder with a disability) and evaluation stage (describing the program, focusing questions and scope, focusing methods, gathering evidence, analysis, interpretation, ensuring use).
Inclusion as a Participant
Of the 29 articles where stakeholder participation in the evaluation was mentioned, program
recipients with disabilities participated in 31% of the cases, family members participated in 24%
of the cases, and other stakeholders with disabilities participated in 10% of the cases. Within those
29 articles, participation appeared to be related to disability type, in that 47% (7 of the 15) of the
evaluations of programs for people with psychiatric disabilities described recipient participation,
whereas only 13% (1 of the 8) of the evaluations of programs for people with developmental
disabilities described recipient participation.
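The participation rates above are simple proportions over the 29 articles; the counts below (9, 7, and 3 articles) are inferred from the reported percentages, and the helper function is purely illustrative:

```python
def participation_rate(n_participating, n_total):
    """Percentage of articles reporting participation, rounded to a whole percent."""
    return round(100 * n_participating / n_total)

N_ARTICLES = 29
# Counts inferred from the percentages reported in the text
print(participation_rate(9, N_ARTICLES))  # program recipients with disabilities -> 31
print(participation_rate(7, N_ARTICLES))  # family members -> 24
print(participation_rate(3, N_ARTICLES))  # other stakeholders with disabilities -> 10
```

One caveat on this sketch: Python's round ties to even (round(12.5) == 12), so percentages landing exactly on .5 need an explicit rounding rule if half-up behavior is expected.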
Our coding process also revealed that participation occurred relatively equally across most of the
context subcategories and was not restricted to certain settings (Table 6). However, there was a
smaller proportion of recipient participation in evaluations of educational programs (13%) than in
evaluations in health (43%) or community/social service/vocational settings (36%), and a smaller
proportion of recipient participation in evaluations that only used quantitative data collection
methods (11%) versus evaluations where mixed methods (50%) or qualitative data
collection methods (38%) were used (Table 6).
We conducted a frequency count to compare participant and stakeholder involvement across
the various evaluation stages (Figure 3). Program recipient participation occurred in each of the
seven evaluation stages except for the Focusing Questions stage. The stages in which program
recipients were most often involved were Interpretation (6 articles) and Focusing Methods
(4 articles). These were followed by Ensuring Use (3 articles), Gathering Evidence (2 articles),
Analysis (2 articles), and Describing the Program (1 article). The most common stage for
family participation was Gathering Evidence (3 articles); other stakeholders with disabilities
most commonly participated in the Interpretation stage (3 articles). Other stakeholders tended
to participate in the Focusing Methods stage (19 articles). Overall, program recipients
participated more frequently than family members across almost all stages of the evaluation
Table 7.
Article / Disability type / Evaluation context / Stages where participation occurred

Abma (2000) / Psychiatric / Organizational; single site; mixed methods / Interpret; Ensure Use

Cook, Carey, Razzano, Burke, and Blyler (2002) / Psychiatric / Organizational; multisite; mixed methods / Focus Methods; Gather Evidence; Analyze; Interpret; Ensure Use

Lepage-Chabriais (2005) / Psychiatric / Primary/secondary education; multisite; mixed methods / Focus Methods

Schalock et al. (2000) / Intellectual/developmental / Community/social service; multisite; quantitative / Focus Methods; Gather Evidence

Note. Coding was not done on quotes in isolation. Rather, while specific quotes were highlighted, some could only be coded in the context of the rest of the article. For example, if consumers were said to serve on an advisory board, and this advisory board was said to be involved in a stage, it was presumed that the consumer participated in that stage unless stated otherwise.
except for Gathering Evidence, but less often than other stakeholders at each evaluation stage.
It is important to note that other stakeholders could represent the involvement of multiple
types of people throughout the evaluation (such as implementers, funders, directors, etc.), rather
than the sustained participation of one individual or group.
Description of specific examples. Descriptive examples illustrate the types of participation that
occurred across different evaluation stages. Three articles described involvement of program
recipients in conducting member checks, a tactic used in qualitative studies to validate evaluation
findings through review by those from whom the data were collected (Creswell & Plano Clark, 2011).
These three articles were also the only ones that described recipient participation in single-site
evaluations. Three included board members in multisite evaluations. Schalock, Bonham, and
Marchand (2000) spoke about activities as part of the Ask Me! project (described above in the literature
review) and was the only author to describe inclusion of people with developmental disabilities. In that
example, the recipients provided input on measurement tools and also conducted interviews. Lepage-Chabriais (2005) described limited participation, consisting of some sort of program recipient
feedback at one point in the evaluation. This was also the only case in which children or youth were
involved. None of the cases described youth participation beyond having them provide passive input
into the evaluation methods. Table 7 offers examples of these various types of inclusion. Each sample
quote is followed by a list of the stages during which participation occurred.
Discussion
This study examined the type and level of involvement that program recipients with disabilities have
in evaluations, and described various strategies that evaluators use when working with populations
with disabilities. When the results are mapped on to the inclusion model presented in Figure 1, we
find noteworthy relationships between feasibility considerations (e.g., participant characteristics)
and levels and quality of involvement. For example, individuals with psychiatric disabilities were
more likely to be included in evaluations (whether as sources of data or as participants) than were
individuals with developmental/intellectual disabilities. This finding is arguably not surprising,
since there are more challenges in collecting data from those with developmental disabilities than
from those with psychiatric disabilities (Finlay & Lyons, 2001). In addition, this study found that most inclusion
activities tended to revolve around data collection (77%) rather than deeper participation in the
evaluation process (31%). Finally, program recipients with disabilities tended to participate in
evaluations comparatively less often than other stakeholder groups.
Contextual factors measured in this study appeared to have less of an influence on inclusion
levels than other feasibility considerations did. Of the contextual factors we coded, only the field
and methodological approach appeared to contain differences in inclusion levels. More specifically,
educational contexts and studies that employed solely quantitative research approaches tended to
have fewer individuals with disabilities included as data sources and participants. The former
difference could be attributable to the fact that many of the educational programs included youth
below the age of 12, and in these programs data were often collected from parents or legal guardians,
rather than the youth themselves. This appears to be the only major factor differentiating the educational programs from programs in other areas; however, this observation warrants further investigation in subsequent studies. The latter difference may reflect the greater range of data
collection methods available in nonquantitative approaches, as well as the greater consistency
between the goals of participatory approaches and qualitative or mixed-method studies as compared
to the goals of quantitative studies. In particular, with qualitative components, representing the
subjective experience of service recipients may be more valued.
Indeed, one of the other interesting aspects of this study was the different approaches and
methods evaluators used when working with individuals with disabilities. Evaluators used a range
of techniques, including interviews, focus groups, surveys, and observations. Interviews in particular
were especially useful for gathering the input of people with psychiatric and developmental
disabilities. Interviews allow the evaluator to tailor procedures to individual respondents, closely
assess recipients' capacity to respond, and ensure questions are understood as the evaluator
intended (Bonham et al., 2004). Regardless of the methods used, the evaluator needs to develop data
collection procedures that keep in mind the needs and abilities of specific recipients in the program.
Together, these findings provide tentative support for parts of the inclusion model presented in
Figure 1. They show a relatively strong connection between participant characteristics and both their likelihood of being involved in the evaluation process and the quality of that inclusion: individuals with psychiatric disabilities were more likely to participate fully in some aspects of the evaluation rather than serving only as a data source. These findings also support the connection
between context (e.g., program field) and level of inclusion and evaluation quality. This suggests
Downloaded from aje.sagepub.com by mihaela manea on January 4, 2015
that the model could change to have participant characteristics and context inform the quality of
inclusion. Although not all elements of the model were represented in this study, this is an initial
step toward understanding the factors that connect different feasibility considerations with levels
of inclusion and how these connections are weighed by evaluators.
Jacobson et al.
their levels of experience, and then compare these factors to inclusion levels and quality in
their practices.
Finally, the validity of data collection methods should be studied further to ensure that recipient
voices, when included, are accurately represented, especially in evaluations where this is the only
way recipients are involved. Evaluators should ensure that inclusion is designed strategically to
achieve the desired goal of participation, and should focus energy and resources accordingly. Further
efforts to understand how to increase the quality of inclusion can help ensure that people with
disabilities are not only included, but that this inclusion actually benefits them and other
stakeholder groups.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or
publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
Note
1. Please note, the criteria we utilized in selecting journals were derived from the Christie and Fleischer (2010)
article; however, the actual articles came from many different areas, including community programs, health
programs, vocational programs, and educational programs.
References
Abma, T. A. (2000). Stakeholder conflict: A case study. Evaluation and Program Planning, 23, 199–210. doi:10.1016/S0149-7189(00)00006-9
Abma, T. A., Nierse, C. J., & Widdershoven, G. A. (2009). Patients as partners in responsive research: Methodological notions for collaborations in mixed research teams. Qualitative Health Research, 19, 401–415. doi:10.1177/1049732309331869
American Association on Intellectual and Developmental Disabilities. (2011). Retrieved December 2010 from http://www.aamr.org/content_100.cfm?navID=21
Americans with Disabilities Act. (1990). U.S. Code, 42, 12101–12213.
Azzam, T. (2010). Evaluator responsiveness to stakeholders. American Journal of Evaluation, 31, 45–65. doi:10.1177/1098214009354917
Azzam, T. (2011). Evaluator characteristics and methodological choice. American Journal of Evaluation, 32, 376–391. doi:10.1177/1098214011399416
Balch, G. I., & Mertens, D. M. (1999). Focus group design and group dynamics: Lessons from deaf and hard of hearing participants. American Journal of Evaluation, 20, 265–277. doi:10.1177/109821409902000208
Barnes, C. (2003). What a difference a decade makes: Reflections on doing emancipatory disability research. Disability & Society, 18, 3–17.
Birman, D. (2007). Sins of omission and commission: To proceed, decline, or alter? American Journal of Evaluation, 28, 79–85. doi:10.1177/1098214006298059
Boland, M., Daly, L., & Staines, A. (2008). Methodological issues in inclusive intellectual disability research: A health promotion needs assessment of people attending Irish disability services. Journal of Applied Research in Intellectual Disabilities, 21, 199–209. doi:10.1111/j.1468-3148.2007.00404.x
Bonham, G. S., Basehart, S., Schalock, R. L., Marchand, C. B., Kirchner, N., Rumenap, J. M., & Scotti, J. (2004). Consumer-based quality of life assessment: The Maryland Ask Me! Project. Mental Retardation, 42, 338–355. doi:10.1352/0047-6765(2004)42
Botcheva, L., Shih, J., & Huffman, L. C. (2009). Emphasizing cultural competence in evaluation: A process-oriented approach. American Journal of Evaluation, 30, 176–188. doi:10.1177/1098214009334363
Brandon, P. R. (1998). Stakeholder participation for the purpose of helping ensure evaluation validity: Bridging the gap between collaborative and non-collaborative evaluations. American Journal of Evaluation, 19, 325–337. doi:10.1177/109821409801900305
Caldwell, J., Hauss, S., & Stark, B. (2009). Participation of individuals with developmental disabilities and families on advisory boards and committees. Journal of Disability Policy Studies, 20, 101–109. doi:10.1177/1044207308327744
Campbell, J. (1997). How consumers/survivors are evaluating the quality of psychiatric care. Evaluation Review, 21, 357–363.
Centers for Disease Control and Prevention. (1999). Framework for program evaluation in public health. Morbidity and Mortality Weekly Report, 48(No. RR-11). Atlanta, GA: Author.
Chappell, A. L. (2000). Emergence of participatory methodology in learning difficulty research: Understanding the context. British Journal of Learning Disabilities, 28, 38–43.
Chen, S., Poland, B., & Skinner, H. A. (2007). Youth voices: Evaluation of participatory action research. Canadian Journal of Program Evaluation, 22, 125–150.
Chouinard, J. A., & Cousins, J. B. (2009). A review and synthesis of current research on cross-cultural evaluation. American Journal of Evaluation, 30, 457–494. doi:10.1177/1098214009349865
Christie, C. A. (2003). What guides evaluation? A study of how evaluation practice maps onto evaluation theory. New Directions for Evaluation, 97, 7–35. doi:10.1002/ev.72
Christie, C. A., & Fleischer, D. N. (2010). Insight into evaluation practice: A content analysis of designs and methods used in evaluation studies published in North American evaluation-focused journals. American Journal of Evaluation, 31, 326–346. doi:10.1177/1098214010369170
Conder, J., Milner, P., & Mirfin-Veitch, B. (2011). Reflections on a participatory project: The rewards and challenges for the lead researchers. Journal of Intellectual and Developmental Disability, 36, 39–48. doi:10.3109/13668250.2010.548753
Cook, J. A., Carey, M. A., Razzano, L. A., Burke, J., & Blyler, C. R. (2002). The pioneer: The employment intervention demonstration program. New Directions for Evaluation, 94, 31–44. doi:10.1002/ev.49
Cousins, J. B., & Whitmore, E. (1998). Framing participatory evaluation. New Directions for Evaluation, 80, 5–23. doi:10.1002/ev.1114
Creswell, J. W., & Plano Clark, V. L. (2011). Designing and conducting mixed methods research. Thousand Oaks, CA: Sage.
Dadich, A., & Muir, K. (2009). Tricks of the trade in community mental health research: Working with mental health services and clients. Evaluation & the Health Professions, 32, 38–58. doi:10.1177/0163278708328738
Developmental Disabilities Assistance and Bill of Rights Act, Public Law 106-402, 2000.
Equal Employment Opportunity Commission. (1997). EEOC enforcement guidance on the Americans with Disabilities Act and psychiatric disabilities. Washington, DC: Author. Retrieved from http://www.eeoc.gov/policy/docs/psych.html
Finlay, W. M. L., & Lyons, E. (2001). Methodological issues in interviewing and using self-report questionnaires with people with mental retardation. Psychological Assessment, 13, 319–335. doi:10.1037/1040-3590.13.3.319
Fleischer, D. N., & Christie, C. A. (2009). Evaluation use: Results from a survey of U.S. American Evaluation Association members. American Journal of Evaluation, 30, 158–175. doi:10.1177/1098214008331009
Fredericks, K. A. (2005). Network analysis of a demonstration program for the developmentally disabled. New Directions for Evaluation, 107, 55–68. doi:10.1002/ev.161
Fredericks, K. A., Deegan, M., & Carman, J. G. (2008). Using system dynamics as an evaluation tool: Experience from a demonstration program. American Journal of Evaluation, 29, 251–267. doi:10.1177/1098214008319446
Gilbert, T. (2004). Involving people with learning disabilities in research: Issues and possibilities. Health and Social Care in the Community, 12, 298–308.
Gill, C. J. (1999). Invisible ubiquity: The surprising relevance of disability issues in evaluation. American Journal of Evaluation, 20, 279–287. doi:10.1177/109821409902000209
Harry, B. (2002). Trends and issues in serving culturally diverse families of children with disabilities. Journal of Special Education, 36, 131–138.
Harvey, P. D., Wingo, A. P., Burdick, K. E., & Baldessarini, R. J. (2010). Cognition and disability in bipolar disorder: Lessons from schizophrenia research. Bipolar Disorders, 12, 364–375. doi:10.1111/j.1399-5618.2010.00831.x
Hassouneh, D., Alcala-Moss, A., & McNeff, E. (2011). Practical strategies for promoting full inclusion of individuals with disabilities in community-based participatory intervention research. Research in Nursing & Health, 34, 253–265. doi:10.1002/nur.20434
Heinz, L. (2003). A process evaluation of a parenting group for parents with intellectual disabilities. Evaluation and Program Planning, 26, 263–274. doi:10.1016/S0149-7189(03)00030-2
Heller, T., Pederson, E. L., & Miller, A. B. (1996). Guidelines from the consumer: Improving consumer involvement in research and training for persons with mental retardation. Mental Retardation, 34, 141–148.
Jurkowski, J. M., & Ferguson, P. (2008). Photovoice as participatory action research tool for engaging people with intellectual disabilities in research and program development. Intellectual and Developmental Disabilities, 46, 1–11. doi:10.1352/0047-6765(2008)46[1:PAPART]2.0.CO;2
Kiernan, C. (1999). Participation in research by people with learning disability: Origins and issues. British Journal of Learning Disabilities, 27, 43–47.
Lepage-Chabriais, M. (2005). Evaluation of children's stay in institutions: What is working? Evaluation Review, 29, 454–466. doi:10.1177/0193841X05279082
Linhorst, D. M., & Eckert, A. (2002). Involving people with severe mental illness in evaluation and performance improvement. Evaluation & the Health Professions, 25, 284–301. doi:10.1177/0163278702025003003
MacNeil, C. (2000). Surfacing the realpolitik: Democratic evaluation in an antidemocratic climate. New Directions for Evaluation, 85, 51–62. doi:10.1002/ev.1161
McDonald, K. E., Keys, C. B., & Henry, D. B. (2008). Gatekeepers of science: Attitudes toward the research participation of adults with intellectual disability. American Journal on Mental Retardation, 113, 466–478. doi:10.1352/2008.113
Mertens, D. M. (1999). Inclusive evaluation: Implications of transformative theory for evaluation. American Journal of Evaluation, 20, 1–14. doi:10.1177/109821409902000102
Mertens, D. M. (2007a). Transformative considerations: Inclusion and social justice. American Journal of Evaluation, 28, 86–90. doi:10.1177/1098214006298058
Mertens, D. M. (2007b). Transformative paradigm: Mixed methods and social justice. Journal of Mixed Methods Research, 1, 212–225. doi:10.1177/1558689807302811
Perry, J., & Felce, D. (2002). Subjective and objective quality of life assessment: Responsiveness, response bias, and resident:proxy concordance. Mental Retardation, 40, 445–456. doi:10.1352/0047-6765(2002)040<0445:SAOQOL>2.0.CO;2
Read, S., & Maslin-Prothero, S. (2011). The involvement of users and carers in health and social research: The realities of inclusion and engagement. Qualitative Health Research, 21, 704–713. doi:10.1177/1049732310391273
Schalock, R. L., Bonham, G. S., & Marchand, C. B. (2000). Consumer based quality of life assessment: A path model of perceived satisfaction. Evaluation and Program Planning, 23, 77–87. doi:10.1016/S0149-7189(99)00041-5
Smith, B., & O'Flynn, D. (2000). The use of qualitative strategies in participant and emancipatory research to evaluate developmental disability service organizations. European Journal of Work and Organizational Psychology, 9, 515–526. doi:10.1080/13594320050203111
Stancliffe, R. J. (2000). Proxy respondents and quality of life. Evaluation and Program Planning, 23, 89–93. doi:10.1016/S0149-7189(99)00042-7
Taut, S. (2008). What have we learned about stakeholder involvement in program evaluation? Studies in Educational Evaluation, 34, 224–230. doi:10.1016/j.stueduc.2008.10.007
Toal, S. A. (2009). The validation of the evaluation involvement scale for use in multisite settings. American Journal of Evaluation, 30, 349–362. doi:10.1177/1098214009337031
Walmsley, J. (2004). Involving users with learning difficulties in health improvement: Lessons from inclusive learning disability research. Nursing Inquiry, 11, 54–64. doi:10.1111/j.1440-1800.2004.00197.x
Walmsley, J., & Johnson, K. (2003). Inclusive research with people with learning disabilities: Past, present and futures. London, England: Jessica Kingsley.
Ware, J. (2004). Ascertaining the views of people with profound and multiple learning disabilities. British Journal of Learning Disabilities, 32, 175–179. doi:10.1111/j.1468-3156.2004.00316.x
Wehmeyer, M. L. (1995). The Arc's Self-Determination Scale: Procedural guidelines. Arlington, TX: The Arc National Headquarters.
World Health Organization. (2011). World report on disability. Retrieved from http://whqlibdoc.who.int/hq/2011/WHO_NMH_VIP_11.01_eng.pdf
Yarbrough, D. B., Shulha, L. M., Hopson, R. K., & Caruthers, F. A. (2011). The program evaluation standards: A guide for evaluators and evaluation users (3rd ed.). Thousand Oaks, CA: Sage.