David Townsend
Pamela Adams
Faculty of Education
The University of Lethbridge
March, 2003
Essays written by the left hand need to be read with as much rigor as those
written with the right hand.
Elliot Eisner, 1990.
The schism that exists between social research methods favoring either a
qualitative or quantitative approach to program review is not new. On the one hand,
qualitative researchers criticize strictly quantitative program evaluation models for
drawing conclusions that are often pragmatically irrelevant (Reichardt and Rallis, 1994;
Woods, 1986); for employing methods that are overly mechanistic, impersonal, and
socially insensitive (Maturana, 1991; Scott and Usher, 2002); for compartmentalizing,
and thereby minimizing, the complex multidimensional nature of human experience
(Moerman, 1974; Silverman, 2000; Yutang, 1937); for encouraging research as an
isolationist and detached activity impervious to collaboration (Scott and Usher, 2002); for
tipping the scales of understanding excessively toward objective disenchantment (Bonß
and Hartmann, 1985; Weber, 1919); and for forwarding claims of objectivity that are
simply not fulfilled to the degree espoused in many quantitative studies (Flick, 2002).
On the other hand, qualitative program reviews are seen as quintessentially
unreliable forms of inquiry (Gardner, 1993). Some educational researchers suggest that
even the most rigorous qualitative study provides no assurance of linking research with
relevant practice (James, 1925; Kerlinger, 1977); that the degree to which qualitative
study variables are uncontrolled offers certainty that causation can rarely be proven (Ary,
Jacobs, and Razavieh, 1979); that methodologies such as narration and autobiography can
yield data that is unverifiable, deceptive, and narcissistic; that qualitative researchers
often inadvertently influence the generation of data and conclusions (Durkheim, 1982);
and that the Hawthorne effect, rather than authentic social reality, is responsible for many
events observed in these types of studies. Nonetheless, recognition of unique contexts,
intellectual diversity, and reasonable yet poetic thinking can contribute a strong
foundation, as well as gifts of insight, to even the most complex educational experiences.
In support of more holistic methods of inquiring into the nuances of educational practices
and programs, Lin Yutang (1937) suggests,
As a result of this past dehumanized logic, we have dehumanized truth. We have a
philosophy that has become a stranger to the experience of life itself, that has
almost half disclaimed any intention to teach us the meaning of life and the
wisdom of living; a philosophy that has lost that intimate feeling of life or
awareness of living which is the very essence of philosophy. (p. 422)
passages from documents, correspondence, records, and case history” (p. 22). Wolcott
(1992) alliteratively describes effective educational research as the activities of
experiencing and attending to sensory data; enquiring with curiosity beyond mere
observation; and examining and reviewing materials prepared by self and others. His
diagram of the complex activities of qualitative educational research is included below.
Figure 1.
Wolcott’s Qualitative Strategies
In North America, the past three decades have witnessed a dramatic increase in
the social and political inspection and critique of schools. Public examination of student
achievement has occurred on an unprecedented scale. While a plethora of fiscal, social,
ideological, and economic influences have provided impetus for this scrutiny, one well-
documented response to demands for educational accountability has been an escalating
interest in programs that link incentives or fiscal rewards to student achievement. Perhaps
it is the generalized belief that education cannot save itself that has led to such increases
in standardized and high-stakes testing, prescriptive curricula, and externally mandated
professional development of teachers. Yet, there is persuasive evidence to suggest that if
students, teachers, administrators, school boards, parents, and, indeed, the community as
a whole, are to be held accountable for children’s learning, programs to assess and
support educational reforms should be based on models that are internally empowering,
rather than externally interrogative. This notion of empowerment evaluation is a shift
away from the singular criterion of quantitative merit and worth toward a fundamentally
democratic process that seeks to foster self-determination, self-improvement, and
capacity building in a spirit of responsiveness (Fetterman, 2001).
Successful models of school reform, such as the Manitoba School Improvement
Program (MSIP) or the Improving the Quality of Education for All Project (IQEA) in the
United Kingdom, have carefully constructed cushioning networks of technical assistance
and site-to-site support to buoy schools and educators seeking to implement change.
These initiatives are also based, in part, on research indicating that the exclusive use
of standardized testing and resultant student achievement as the primary barometer of
school effectiveness is increasingly insufficient in providing the most useful information
to stakeholders in the educational community, including policy makers. Such initiatives
are seen to promote expansive improvements in all types of learning as the goal of
schools, signalling a shift away from the conventional use of accountability systems
“toward a more cooperative and transitional path of program review procedures”
(Schmoker, 2000, p. 62).
To the extent that educational initiatives are most commonly evaluated in order to
determine past effectiveness and to create future goals, “evaluation is an essential,
integral component of all innovative programs” (Somekh, 2001, p. 76). It is “the process
of making judgment about the merit, value, or worth of educational programs, projects,
materials, and techniques” (Borg and Gall, 1983, p. 733). A more extensive definition of
program evaluation might outline, “the sets of activities involved in collecting
information about the operations and effects of policies, programs, curricula, courses,
educational software, and other instructional materials” (Gredler, 1996, p. 13). That
program evaluation not be confused with other forms of inquiry or data collection
conducted for different purposes is critically important (Gredler, 1996). For
example, it is the use of evaluation as a strategy for program improvement rather than for
accountability, justification, and program continuity that has traditionally differentiated
formative from summative evaluation. The latter consists of activities “to obtain some
kind of terminal or over-all evaluation in order that some type of general conclusion can
be made” (Tyler, Gagne and Scriven, 1967, p. 86). While summative evaluation can serve
to justify additional funding, it may also generate modification or elimination of a
program or its individual components. Formative evaluation, however, takes place at a
more intermediate stage, “permit[ting] intelligent changes to be made….” (Tyler, et al.,
1967, p. 86) as the initiative evolves. Such changes usually save both time and
money. Conversely, programs conducted without an
evaluation component run the very real risk of wasted funding when “opportunities [are]
lost for policy makers to learn either from their successes or from what went wrong”
(Somekh, 2001, p. 76).
Hopkins (1989) suggests that evaluation in schools should be used for three types of
decisions: course improvement (instructional methods and materials); decisions about
individuals (pupil and teacher needs); and administrative regulation (rating schools,
systems, and teachers). Evaluation of schools, evaluation for school improvement, and
evaluation as school improvement characterize these three approaches. Regardless of the
nature of the evaluation, Sanders (2000) identifies the following as key tasks:
• deciding whether to evaluate.
• defining the evaluation problem.
• designing the evaluation.
• budgeting the evaluation.
• contracting for the evaluation.
• managing the evaluation.
• staffing the evaluation.
• developing evaluation policies.
• designing a program for evaluators.
• collecting the information.
evaluations should be expected to acknowledge the personal and professional biases they
bring to evaluative processes. This awareness and recognition will frame theoretical
considerations as well as establish “the advantages and limitations of what is chosen, as
opposed to what is disregarded…” (Rebien, 1997, p. 2). Evaluators must also be
explicitly aware of relative strengths and weaknesses of different evaluative approaches
(Shadish, Cook and Leviton, 1991), and take care to not “create their own establishment
and glamorize it as an elite” (Stenhouse, quoted in Hopkins, 1989, p. iii).
During the last five decades, several major models of program evaluation have
emerged. To compare these models is one way to understand the breadth and depth of the
subject. A study of alternate approaches might also be crucial to the scientific
advancement of evaluation. Moreover, such an appraisal can help evaluators assess and
consider frameworks which they may employ as they plan and conduct studies. It is
important to identify strengths and weaknesses of a variety of models in order to refine
specific relevant approaches, rather than “to enshrine any one of them….” (Stufflebeam
and Webster, as cited in Madaus, Scriven, and Stufflebeam, 1984, p. 24). The underlying
theoretical assumptions of each will provide a basis for comparison, as “…. models differ
from one another as the base assumptions vary” (House, as cited in Madaus, Scriven and
Stufflebeam, 1984, p. 24). In Table 1, Madaus, et al. (1984) compare assorted models,
proponents, major audiences, understandings, methodologies, outcomes, and examples of
typical questions associated with formative program evaluation.
Table 1.
A Taxonomy of Major Evaluation Models.

Goal Free (Scriven). Major audience: consumers. Focus: consequences. Methodology: bias control; logical analysis; modus operandi. Outcomes: consumer criteria; choice; social utility. Typical question: What are all the effects?

Art Criticism (Eisner, Kelly). Major audiences: connoisseurs, critics, consumers. Methodology: critical review. Outcomes: improved standards. Typical question: Would a critic approve this program?
All are dependent to some degree upon the philosophy of liberalism, and “partake
of the ideas of a competitive, individualistic, market society…. the most fundamental
idea is freedom of choice, for without choice, of what use is evaluation?” (House, as cited
in Madaus, Scriven and Stufflebeam, 1984, p. 49).
There was a flurry of evaluation protocol development in the late 1960s when a
number of academics produced several alternative theoretical approaches. This
renaissance in the field was fuelled, in part, by “the mounting responsibilities and
resources that society assigned to educators” (Stufflebeam and Webster, as cited in
Madaus, Scriven, and Stufflebeam, 1984, p. 23). Table 2 presents Scriven’s (1993)
review of program evaluation approaches of this period.
Table 2.
Scriven’s Past Conceptions of Evaluation.

Strong Decision Support View. Developers: Tyler; CIPP with Stufflebeam and Guba. When: 1971 (CIPP). Purpose: information is gathered in service of the process of program management. Strong, direct conclusions reached: yes.

Weak Decision Support View. Developers: Alkin; Rossi and Freeman. When: 1972. Purpose: uses only the client’s values as the rationale for the decision maker. Strong, direct conclusions reached: no.

Relativistic View. Developers: Provus; Cronbach. When: 1971, 1989. Purpose: provides a framework without judgment. Strong, direct conclusions reached: no.

Rich Description or Social Process Approach/School. Developer: Stake. When: 1980. Purpose: an ethnographic enterprise, even without the client’s values. Strong, direct conclusions reached: rejects summative evaluation.

Constructivist, Fourth Generation Approach. Developers: Guba and Lincoln, with many supporters in the UK and US. When: 1981. Purpose: purports that all evaluation results from construction by individuals and negotiations by groups. Strong, direct conclusions reached: no; rejects all claims.
An extensive analysis of the work of several other authors engaged in evaluation from
1960 through the 1980s is presented in Appendix A.
Empowerment evaluation proceeds through three steps. The first establishes the
mission or vision of the program. That is, the participants state the results they would like
to see, based on the projected outcome of the implemented program, and then map
through the process in reverse design. The second step involves taking stock of,
identifying, and prioritizing the most significant program activities. Staff members rate
present program effectiveness using a nominal instrument and the ensuing discussion
determines the current program status. Charting a course for the future is the third step.
The group outlines goals and strategies to achieve their dream with an explicit emphasis
on improvement. External evaluators assist participants in identifying types of evidence
required to document progress toward the goal. In the presence of a strong, shared
commitment on the part of the participants, deception is inappropriate and unnecessary;
the group itself is a useful, powerful check (Fetterman, 2001). For empowerment
evaluation to be effective and credible, participants must enjoy the latitude to take risks
and simultaneously assume responsibility. A safe atmosphere, in which it is possible to
share success and failure, is as essential as a sense of caring and community.
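The “taking stock” step lends itself to a simple tally. The sketch below is purely illustrative (the activity names, the 1–10 rating scale, and the averaging are assumptions for this example, not part of Fetterman’s specification): each staff member rates each program activity, and the averaged scores are sorted weakest-first so that the ensuing discussion can begin with the areas the group itself judges most in need of improvement.

```python
from statistics import mean

def take_stock(ratings):
    """Average each activity's staff ratings and return the activities
    sorted weakest-first, mirroring the prioritization discussion."""
    averages = {activity: mean(scores) for activity, scores in ratings.items()}
    return sorted(averages.items(), key=lambda item: item[1])

# Hypothetical 1-10 ratings from four staff members.
ratings = {
    "communication": [3, 4, 2, 5],
    "teaching": [8, 7, 9, 8],
    "funding": [5, 6, 4, 5],
}

for activity, avg in take_stock(ratings):
    print(f"{activity}: {avg:.1f}")
```

In this hypothetical run, “communication” averages lowest (3.5) and would anchor the group’s charting of its course for the future; the numbers themselves matter less than the shared conversation they provoke.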
Other influential authors (Posavac and Carey, 1997) have adopted a model of
program evaluation that honors many of the principles of empowerment evaluation. Their
improvement-focused model, they contend, best meets the criteria necessary for effective
evaluation. That is, the needs of stakeholders are served; valid information is provided;
and alternate viewpoints are acknowledged. As Posavac and Carey note, “to carry this off
without threatening the staff is the greatest challenge of program evaluation” (p. 27).
improving the quality of Manitoba public schools, such as the adoption of provincially
mandated curricula accompanied by subject and grade level outcomes, and province-wide
testing at grades three, six, nine, and twelve (Harris and Young, 1999). The provincial
government denied that such initiatives constituted an explicit attack on the failures of
teachers and schools. Yet, such a comprehensive attempt at system change, seemingly
driven by a political agenda and accompanied by a reduction of funding to education,
provoked considerable resistance from the Manitoba Teachers’ Society, as well as
extensive public controversy and debate (Harris and Young, 1999).
The Manitoba School Improvement Project actually came into being before this
contentious governmental reform. Originating in an independent charitable foundation, it
drew on the professional praxis of teachers, rather than academics, to focus exclusively
on school reform at the secondary school level. In existence since 1991, the program was
born as a result of the vision and support of the Walter and Duncan Gordon Foundation, a
Canadian philanthropic group interested in enhancing educational opportunities for
students at risk. The educational community in Manitoba welcomed and supported this
involvement. Accordingly, the Foundation elected to support secondary projects that
were designed by individual schools in urban centres, with later initiatives expanded to
rural and northern settings. Thus began one of Canada’s major initiatives to empower
teachers as catalysts for change.
MSIP has provided multi-year funding to more than 30 schools in 13 school
divisions in Manitoba. In 1998, an external evaluation of the initiative was commissioned
to examine achievement of project goals, increased student learning, increased student
engagement, and successful school improvement. The report concluded that, while not
every school showed high levels of improvement, a majority of schools in the project had
been successful in these four areas. In fact, improved academic performance, increased
student enrolment, reduced disciplinary problems, improved attendance, increased family
and community involvement, and increased student graduation were also noted as
unanticipated positive results (Harris and Young, 1999). Michael Fullan, a leader of the
external evaluation team, remarked that, “at the secondary level, I know of no other
strategy which has taken 20 or more schools and shown the level of success in this short
amount of time….[it shows] secondary schools can move more quickly, even more
quickly than we thought possible, and in a cost efficient way” (Fullan, quoted in Harris
and Young, 1999).
development fund. Entry is limited; consequently, schools are required to agree to a prior
set of conditions before joining the project. They must gain the support of 80% of the
staff, they must commit their professional development time to IQEA over the four terms,
and a cadre must be formed which will be responsible for leading the school change.
Finally, schools commit themselves to undergo a process of internal and external
evaluation. For their part, the universities design a program of staff development
activities, and provide a liaison advisor for each school whose responsibilities include
networking, training, support, consultancy, feedback, advice, and pressure (Harris and
Young, 1999).
Teachers in any school considering joining the project are expected to share the
philosophy and values of the IQEA Project. The following tenets are worthy of note:
• Schools do not improve unless teachers, individually and collectively,
develop. While teachers can often develop their practice on an individual
basis, if the whole school is to develop there need to be many staff
development opportunities for teachers to learn together.
• Successful schools seem to employ decision-making mechanisms that
encourage feelings of involvement from a number of stake-holder groups,
especially students.
• Schools that are successful at reform and improvement establish a clear
vision for themselves and regard leadership as a responsibility of many
staff, rather than a single set of responsibilities vested in a single
individual.
• Co-ordination of activities is an important way to keep people involved,
particularly when changes of policy are being introduced. Communication
within the school is a vital aspect of co-ordination, as is informal
interaction between teachers.
• Schools which recognize the importance of professional inquiry and
reflection find it easier to gain clarity and establish shared meaning around
identified development priorities, and are better able to monitor the extent
to which policies actually deliver intended outcomes for pupils.
• Through the process of planning for development, a successful school is
able to link its educational aspirations with identifiable priorities, to
Several commonalities emerge when comparing the MSIP and the IQEA in terms
of stimulating potent and lasting change. Both employ an external monitoring agency.
Both focus on specific teaching and learning activities. They are committed to
professional interchange, collaboration, and networking. They espouse devolved
leadership and temporary systems. Finally, both support formative and summative
evaluation, and demonstrate that “some of the best evaluation occurs in response to
questions that teachers and other school personnel ask about their professional practice”
(Sanders, 2000, p. 3). That is, they establish inquiry and reflection as intrinsic to school
growth and improvement.
Among several other authors, Rapple (1994) and Barth (1990, 2001) contend that
these types of strategies are necessary to foster educational accountability grounded not
in passive and external models which encourage compliance and subservience, but in
active internally reflective models which build responsibility and capacity.
….there is sheer futility in attempting to regulate education by
economic laws. Accountability in education should not be facilely
linked to mechanical examination results, for there is a very distinct
danger that the pedagogical methods employed to attain those results will
themselves be mechanical and the education of children will be so
much the worse. (Rapple, 1994, p. 11)
been one of the critical factors in encouraging the "collaboration, system leadership, and
consensus building" (Booi, Hyman, Thomas, and Couture, 2000, p. 35) that characterized
successful proposal development in the early stages of the Initiative.
In less than three years, AISI has given rise to hundreds of system and school
projects. Many of them have chosen to re-focus educational vision and structure through
an empowerment-based action research model that facilitates and encourages teachers to
make internal assessments of strengths, and establish action plans based on collaborative
problem solving. Alberta Deputy Minister of Learning, Maria David-Evans (2000),
suggests that this process will enhance the “collective capacity” within schools as a result
of “greater sharing [and] pursuit of a common goal….” (p. 11). Cimbricz (2002) concurs,
noting that projects of the kind developed within AISI can increase the likelihood that
administrators and staffs will engage in positive goal setting that, in turn, can encourage a
common focus and purpose for all members of the school community.
In addition, Elliott’s (1991) model of action research as a method to undertake
educational change is one of many that have been seen to fit the purpose of most AISI
projects, to “….study an educational situation with a view to improving the quality of
action within it” (p. 69). Moreover, the balanced perspective inherent in AISI processes
renders documents such as Long Term Strategic Plans, Individualized Student Education
Plans, and Individual Teacher Growth Plans potentially more valuable as they are seen to
dovetail with the daily decisions made by staff about the learning activities of students.
For Rogers (2000), this perspective increases the likelihood of collaboration and
cooperation among teachers and principals, and across schools.
In systems all over the world, many schools are exploring newly emerging forms
of assessment designed to demonstrate and celebrate students’ knowledge and skills, and
the effectiveness of schools and teachers. Several factors contribute to this reform in
student and program evaluation: the changing nature of educational goals, the
relationship between assessment and pedagogy, and the clear limitations of current
methods of judging performance (Marzano, Pickering, and McTighe, 1993). Demands for
external accountability, advances in the technology and science of assessment, the advent
of large scale testing in schools, and calls for educational reform are other significant
factors influencing the changing face of student learning and program assessment
(Brandt, 2000). In addition, academic and non-academic competencies necessary for the
Table 3.
Existing and Desired States of Change and Assessment.

Existing state:
• The assumption that change takes place by mandating new cognitive maps.
• Assessments that communicate that knowledge is outside the learner.
• Assessments that signal that personal knowledge and experience are of little worth.
• Conceptions of curriculum, instruction, and assessment as separate entities in a system.
• Each aspect of the system that is assessed is considered to be separate and discrete.
• Individual and organizational critique perceived as negative and a barrier to change.

Desired state:
• Operating within people’s maps of reality (personal knowledge) and creating conditions for people to examine and alter their own internal maps.
• Assessments that allow the capacity to make meaning of the massive flow of information and to shape raw data into patterns that make sense to the individual.
• Assessments of knowledge being produced from within the learner.
• Communicating that the learner’s personal knowledge and experience is of great worth.
• Assessment as an integral component for all learning at all levels and for all individuals who compose the system.
• All parts of the system are interconnected and synergistic.
there is a new role on the horizon for assessment, one which overrides other limited goals
such as accountability and classification, as it helps to provide more and better education
for the learner (Brandt, 2000). When student achievement is tied to reform, those school
systems that are being assessed need to be a part of a continuous, empowering process.
Public officials must allow for teachers collectively enhancing their professional efficacy,
“the essential foundation stone for school improvement” (Hopkins, 1989, p. 194). In
powerful contrast to traditional external approaches, evaluations with an empowerment
component can serve as catalysts to influence, clarify, expand, and improve more
traditional forms of evaluation. Proven initiatives such as the Manitoba School
Improvement Project, the Improving the Quality of Education for All Project and, now, the
Alberta Initiative for School Improvement, represent a paradigm shift in which
enlightened, willing educators can dedicate themselves to promoting social change,
democratic participation, and shared decision making. Evaluators and other participants
can help foster interdependence and professional growth, and all can contribute to the
cultivation of a community of learners (Fetterman, 2001).
Finally, in a summary of their recent text, Posavac and Carey (1997) reaffirm the
importance of interpersonal relations and the attitude of evaluators as they work with
stakeholders. The authors offer the following statements for the guidance and conduct of
program evaluations:
• Humility won’t hurt.
• Impatience may lead to disappointment.
• Recognize the importance of the evaluator’s perspective.
• Work on practical questions.
• Work on feasible issues.
• Avoid data addiction.
• Make communications accessible.
• Seek evaluation [of the work of the evaluators].
• Encourage the development of a learning culture. (p. 262)
References
Aitken, A., Gunderson, T., & Wittchen, E. (2000). AISI and the superintendent:
Opportunities for new relationships. Paper presented at the annual meeting of the
Canadian Society for Studies in Education Symposium. Edmonton, Alberta.
Alberta Learning. (1999b). Framework for the Alberta Initiative for School
Improvement. Alberta: Ministry of Learning.
Anderson, S., & Ball, S. (1978). Profession and practice of program evaluation.
San Francisco, CA: Jossey-Bass.
Ary, D., Jacobs, L., & Razavieh, A. (1979). Introduction to research in education.
New York: Holt, Rinehart and Winston.
Booi, L., Hyman, C., Thomas, G., & Couture, J-C. (2000). AISI opportunities and
challenges from the perspective of the Alberta Teachers’ Association. Paper presented at
the annual meeting for the Canadian Society for the Study of Education. Edmonton,
Alberta.
Borg, W., & Gall, M. (1983). Educational research. New York: Longman.
Borg, W., & Gall, M. (2003). Educational research: An introduction (5th ed.).
New York: Longman.
Cimbricz, S. (2002). State mandated testing and teachers’ beliefs and practices.
Educational Policy Analysis Archives, 10(2).
Costa, A., & Kallick, B. (Eds.). (1995). Assessment in the learning organization:
Shifting the paradigm. Alexandria, Virginia: Association for Supervision and Curriculum
Development.
Denzin, N., & Lincoln, Y. (Eds.). (2000). Handbook of qualitative research (2nd
ed.). London, UK: Sage.
Earl, L. (2000). AISI: A bold venture in school reform. Paper presented at the
annual meeting of the Canadian Society for Studies in Education Symposium. Edmonton,
Alberta.
Eisner, E. (1986). The primacy of experience and the politics of method. Lecture
delivered at the University of Oslo, Norway.
Eisner, E., & Peshkin, A. (Eds.). (1990). Qualitative inquiry in education: The
continuing debate. New York: Columbia University Teachers College.
Fern, E. (1982a). The use of focus groups for the idea generation: The effects of
group size, acquaintanceship, and moderator on response quality and quantity. Journal of
Marketing Research. 19.
Fern, E. (1982b). Why do focus groups work: A review and integration of small
group process theories. Advances in Consumer Research, 8.
Fullan, M. (2001). The new meaning of educational change. New York:
Teachers College Press.
Gay, L., & Airasian, P. (2003). Educational research: Competencies for analysis
and applications. New Jersey: Pearson.
Glaser, B., & Strauss, A. (1967). The discovery of grounded theory: Strategies for
qualitative research. New York: Aldine.
Harris, A., & Young, J. (1999). Comparing school improvement programs in the
United Kingdom and Canada: Lessons learned. Retrieved on November 11, 2002 from
www1.worldbank.org/education/est/resources/case%20studies/UK&Can-SchoolImp.doc
Kahneman, D., Slovic, P., & Tversky, A. (Eds.). (1997). Judgment under
uncertainty: Heuristics and biases. New York: Cambridge University Press.
Kohn, A. (1999). The schools our children deserve. New York: Houghton Mifflin.
Marzano, R., Pickering, D., & McTighe, J. (1993). Assessing student outcomes:
Performance assessment using the dimensions of learning model. Alexandria, Virginia:
Association for Supervision and Curriculum Development.
Posavac, E., & Carey, R. (1997). Program evaluation: Methods and case
studies. New Jersey: Prentice Hall.
Radwanski, G. (1987). Ontario study of the relevance of education and the issue
of dropouts. Toronto: Ministry of Education.
Reason, P., & Marshall, J. (1987). Research as personal process. In Griffin, V., &
Boud D. (Eds.). Appreciating Adults’ Learning. Center for the Study of Organizational
Change and Development: University of Bath Press.
Reichardt, C., & Rallis, S. (1994). The relationship between the qualitative and
quantitative research traditions. In Reichardt, C. & Rallis, S. (Eds.). The qualitative –
quantitative debate: New perspectives. San Francisco: Jossey-Bass.
Rogers, T. (2000). Potential and challenges of the Alberta Initiative for School
Improvement. Paper presented at the annual meeting of the Canadian Society for the
Study of Education. Edmonton, Alberta.
Smith, M., Stevenson, D., & Li, C. (1998). Voluntary national tests would
improve education. Educational Leadership, 55(6).
Wholey, J., Duffy, H., Fukumoto, J., Scanlon, J.W., Berlin, M., Copeland, W., &
Zelinsky, J. (1972). Proper Organizational Relationships. In C. H. Weiss (Ed.),
Evaluating action programs: Readings in social action and education. Boston: Allyn &
Bacon.
Approaches: Values-Orientation (True Evaluation)

Definition: Studies that are designed primarily to assess some object’s worth.

Accreditation/Certification Guidelines. Advance organizers: accreditation/certification guidelines. Purpose: to determine whether institutions, programs, and personnel should be approved to perform specified functions. Source of questions: accrediting/certifying agencies. Main questions: Are institutions, programs, and personnel meeting minimum standards, and how can they be improved? Typical methods: self-study and visits by expert panels to assess performance in relation to specified guidelines.

Policy Studies. Advance organizers: policy issues. Purpose: to identify and assess the potential costs and benefits of competing policies for a given institution or society. Source of questions: legislators, policy boards, and special interest groups. Main questions: Which of two or more competing policies will maximize the achievement of valued outcomes at a reasonable cost? Typical methods: Delphi, experimental and quasi-experimental design, scenarios, forecasting, and judicial proceedings.

Decision-Oriented Studies. Advance organizers: decision situations. Purpose: to provide a knowledge and value base for making and defending decisions. Source of questions: decision makers (administrators, parents, students, teachers), their constituents, and evaluators. Main questions: How should a given enterprise be planned, executed, and recycled in order to foster human growth and development at a reasonable cost? Typical methods: surveys, needs assessments, case studies, advocate teams, observation, and quasi-experimental and experimental design.

Consumer-Oriented Studies. Advance organizers: societal values and needs. Purpose: to judge the relative merits of alternative educational goods and services. Source of questions: society at large, consumers, and the evaluator. Main questions: Which of several alternative consumable objects is the best buy, given their costs, the needs of the consumers, and the values of society at large? Typical methods: checklists, needs assessment, goal-free evaluation, experimental and quasi-experimental design, modus operandi analysis, and cost analysis.

Client-Centered Studies. Advance organizers: localized concerns and issues. Purpose: to foster understanding of activities and how they are valued in a given setting and from a variety of perspectives. Source of questions: community and practitioner groups in local environments, and educational experts. Main questions: What is the history and status of a program, and how is it judged by those who are involved with it and those who have expertise in program areas? Typical methods: case study, adversary reports, sociodrama, and responsive evaluation.

Connoisseur-Based Studies. Advance organizers: evaluators’ expertise and sensitivities. Purpose: to critically describe, appraise, and illuminate an object. Source of questions: critics and authorities. Main questions: What merits and demerits distinguish an object from others of the same general kind? Typical methods: systematic use of refined perceptual sensitivities and various ways of conveying meaning and feelings.