High Reliability Schools
The Next Step in School Reform
ROBERT J. MARZANO
OUR MISSION
To provide the best research, the most useful actions, and the highest level of services to educators.
OUR VISION
To continuously develop tools that translate high-quality educational research into practical applications educators can put to immediate use.
OUR GOAL
To be the place educators go for the latest information and data, synthesized into clear, concise resources that facilitate immediate action.
Table of Contents
About Marzano Research Laboratory ... ii
About the Author ... ii
Introduction: Ushering in the New Era of School Reform ... 1
  A Pessimistic View From the Past ... 1
  An Optimistic View of the Future ... 5
  A High Reliability Perspective ... 8
  Overview of the Chapters ... 12
An Instructional Framework That Develops and Maintains Effective Instruction in Every Classroom ... 23
  Leading Indicators for Level 2 ... 25
  Lagging Indicators for Level 2 ... 34
Standards-Referenced Reporting of Student Progress ... 51
  Leading Indicators for Level 4 ... 53
  Lagging Indicators for Level 4 ... 62
A Competency-Based System That Ensures Students' Mastery of Content ... 63
  Leading Indicators for Level 5 ... 65
  Lagging Indicators for Level 5 ... 71
Introduction
Table I.1: Marzano Works That Are the Basis for the Framework
[Table of nine works; only entry 7 was recovered in this extraction: Leaders of Learning: How District, School, and Classroom Leaders Improve Student Achievement (DuFour & Marzano, 2011)]
launching of Sputnik in 1957. Shocked by this event, the U.S. public began to question the rigor and viability of our schools. Indeed, influential figures such as Admiral Hyman Rickover (1959) forwarded the position that public education was weakening the intellectual capacity of our students. Rickover's book, Education and Freedom, made direct links between the security of the nation and the quality of education. (Marzano, 2003b, pp. 1–2)
The 1960s saw no respite from the harsh criticisms. As a result of the Civil Rights Act of 1964, a cornerstone of President Johnson's war on poverty, a nationwide survey was undertaken involving 640,000 students, 60,000 teachers, and 4,000 schools. The resulting report, Equality of Educational Opportunity, was published in 1966 (Coleman et al., 1966). Although written by a team of researchers, the report became known as the Coleman Report in deference to the senior author, James Coleman. The overall conclusions in the report were not very flattering regarding K–12 education in the United States:
Taking all of these results together, one implication stands above all: that
schools bring little to bear on a child's achievement that is independent of
his background and general social context; and that this very lack of an
independent effect means that the inequalities imposed on children by
their home, neighborhood, and peer environment are carried along to
become the inequalities with which they confront life at the end of school.
(p. 325)
The report had a profound and negative effect on the perception of the utility and effectiveness of
K–12 schools.
In the 1970s, this negative perception was underscored by Christopher Jencks and his
colleagues in the report Inequality: A Reassessment of the Effect of Family and Schooling in
America, which was based on a reanalysis of the Coleman data (Jencks et al., 1972). Among the
conclusions reported by Jencks and his colleagues were the following:
• Schools do little to lessen the gap between rich students and poor students.
• Schools do little to lessen the gap between more and less able students.
• Little evidence exists that education reform can improve a school's influence on student achievement.
The criticisms of K–12 education from the 1960s and 1970s were repeated and exacerbated in
the 1980s. As Peter Dow (1991) explains in his book, Schoolhouse Politics: Lessons from the
Sputnik Era:
In 1983 educators and the general public were treated to the largest
outpouring of criticism of the nation's schools in history, eclipsing even the
complaints of the early 1950s. Nearly fifty reports totaling more than six
thousand pages voiced a new wave of national concern about the troubled
state of American education. They spoke of the fragmented state of the
school curriculum, the failure to define any coherent, accepted body of
learning, the excessive emphasis on teaching isolated facts, and the lack of
attention to higher order skills and concepts. They called for more
individualization of instruction, the development of a closer relationship
between teachers and students, and methods that encourage the active
participation of the student in the learning process. (p. 243)
As I describe in What Works in Schools:
Again, a single report laid the foundation for the outpouring of criticism.
Without a doubt, A Nation at Risk: The Imperative for Educational Reform,
issued by the National Commission on Excellence in Education, was
considered by some as proof that K–12 education had indeed devolved to
a state of irreversible disrepair. (Marzano, 2003b, p. 3)
The report, A Nation at Risk, went so far as to warn that "the educational foundations of our society are presently being eroded by a rising tide of mediocrity that threatens our very future as a nation and a people" (National Commission on Excellence in Education, 1983, p. 5). Clearly, the 1960s, 1970s, and early 1980s saw great pessimism with respect to K–12 education in the United States.
Reason #1: Those studies that have been interpreted as evidence that schools do not
significantly affect student achievement do, in fact, support the potential impact of
schools when interpreted properly.
Reason #2: Highly effective schools produce results that almost entirely overcome the effects of students' backgrounds.
Reason #3: The research on school effectiveness considered as a whole paints a very positive image of schools' impact on student achievement.
These reasons are discussed in some depth in What Works in Schools (Marzano, 2003b) and
therefore are not explicated further here. However, it is worth expanding on the third reason,
which basically notes that the literature on school effectiveness is overwhelmingly positive,
especially from the 1980s to the present. More specifically, the research taken in the aggregate
provides clear guidance as to actions schools can take to dramatically increase their
effectiveness. That research includes, but is not limited to, the following works: Brookover,
Schweitzer, Schneider, Beady, Flood, & Wisenbaker, 1978; Brookover, Beady, Flood,
Schweitzer, & Wisenbaker, 1979; Edmonds, 1979a, 1979b, 1979c, 1981a, 1981b; Madaus,
Kellaghan, Rakow, & King, 1979; Rutter, Maughan, Mortimore, Ouston, & Smith, 1979; Purkey
& Smith, 1982; Walberg, 1984; Good & Brophy, 1986; Eberts & Stone, 1988; Mortimore,
Sammons, Stoll, Lewis, & Ecob, 1988; Raudenbush & Bryk, 1988; Stringfield & Teddlie, 1989;
Levine & Lezotte, 1990; Bosker, 1992; Bryk & Raudenbush, 1992; Scheerens, 1992; Wang,
Haertel, & Walberg, 1993; Creemers, 1994; Luyten, 1994; Rowe & Hill, 1994; Bosker &
Witziers, 1995, 1996; Raudenbush & Willms, 1995; Rowe, Hill, & Holmes-Smith, 1995;
Sammons, Hillman, & Mortimore, 1995; Goldstein, 1997; Scheerens & Bosker, 1997; van der
Werf, 1997; Wright, Horn, & Sanders, 1997; Sammons, 1999; Reynolds & Teddlie, 2000a,
2000b; Townsend, 2007a, 2007b; Bryk, Sebring, Allensworth, Luppescu, & Easton, 2010.
The most comprehensive effort to date to synthesize the research on school effectiveness is
breathtaking in its scope. In his 2009 book, Visible Learning, John Hattie synthesized the
findings from over 800 meta-analyses involving over 52,000 studies and over 145,000 effect
sizes to identify and rank 138 factors that have significant correlations with student achievement.
In 2012, Hattie updated his synthesis to include 115 additional meta-analyses involving 7,518
additional studies and 13,428 additional effect sizes. These additional findings prompted him to
add 12 factors to his original list of 138 for a total of 150 ranked factors. Clearly, some of those factors are outside of a school's control. Table I.2 shows those factors from Hattie's list of 150 that fall outside a school's control.
Table I.2: Hattie's Factors Outside of the School's Control
Rank  Factor
20    Prior achievement
39    Pre-term birth weight
44    Home environment
45    Socio-economic status
51    Parental involvement
59    Self-concept
81    Creativity related to achievement
82    Attitude to mathematics/science
84    Ethnicity
101   Lack of illness
119   Personality
122   Family structure
133   Gender
141   Ethnic diversity of students
147   Welfare policies
149   Television
150   Mobility
While the factors in table I.2 are outside of a school's control, many important factors can be controlled or at least strongly influenced by a school. For example, consider the top one-third (the top 50) of Hattie's factors listed in table I.3. Those not shaded can be influenced by schools.
Table I.3: Hattie's Top 50 Factors
[The two-column table pairing ranks 1–50 with their factors was not fully recovered in this extraction; recoverable factor names include Phonics instruction, Student-centered teaching, Classroom cohesion, Pre-term birth weight, Keller's Mastery Learning (PSI), Peer influences, Classroom management, Outdoor/adventure programs, Home environment, Socio-economic status, Interactive video methods, Professional development, Goals, Play programs, and Second/third-chance programs.]
As indicated in table I.3, 46 of the top 50 factors, or 92%, can be influenced by schools and the teachers within those schools. Additionally, virtually all of the factors in Hattie's list that can be influenced by schools fit in the model presented in this paper.
With this vast research base regarding factors that influence student achievement, what is the
next step schools can take to dramatically increase their effectiveness? The purpose of this
publication is to answer that question directly. I propose that a necessary condition to move schools to the next level of effectiveness is to adopt a high reliability perspective.
The hierarchical relationship of the levels depicted in figure I.1 has some intuitive appeal. Level
1 can be considered foundational to all other levels. If students and faculty do not have a safe and
orderly environment in which to work, little if any substantive work can be accomplished. Level
2 addresses the single most commonly cited characteristic of effective schools: high quality
instruction in every classroom. High quality instruction is a prerequisite for level 3, which
addresses a curriculum that is both guaranteed and viable. Levels 1 through 3 are common fare
among current efforts to make schools more effective. Level 4 moves into a more rarified atmosphere because it involves reporting individual students' progress on specific standards. At any point in time, the leader of a level 4 school can identify individual students' strengths and weaknesses relative to specific topics in each subject area. Level 5 schools exist in the most rarified atmosphere of all: one in which students move to the next level of content as soon as they demonstrate competence in the previous level. Matriculation, then, is not based on the amount of time a student spends in a given course but rather on his or her demonstrated mastery of content.
• When asked, faculty and staff generally describe the school as a safe place.
• When asked, faculty and staff generally describe the school as an orderly place.
• Clear and specific rules and procedures are in place for the running of the school.
• Faculty and staff know the emergency management procedures and how to implement them for specific incidents.
As shown here, leading indicators can involve both perceptions and actions. They help inform
school leaders about specific issues that should be addressed and how much effort should be
devoted to those issues. For example, if faculty and staff frequently complain that the school is
unsafe, it is an indicator that school safety is an important issue that should be addressed.
Similarly, if clear rules and procedures are not in place, it is an indication that school safety
should be addressed. It is important to note that positive leading indicators do not necessarily
mean that a school has achieved high reliability status regarding a specific issue. For example, if
a school has well-established rules and procedures in place and faculty and staff generally report
that the environment is safe, it does not necessarily mean that the school is, in fact, safe, at least at the level required for high reliability status. To reach this level of assurance, lagging indicators
such as the following must be used:
Lagging Indicator 1.1: Few, if any, incidents occur in which students' safety is compromised.
Lagging Indicator 1.2: Few, if any, incidents occur in which rules and procedures are not
followed.
Lagging indicators are the evidence, then, for high reliability status. Leading and lagging
indicators used in tandem provide the clarity and guidance that school leaders need to seek and
attain high reliability status for each of the five levels.
1. A Safe and Orderly Environment That Supports Cooperation and Collaboration
Level 1 addresses those factors that are considered foundational to any substantive change
within a school. Quite obviously, if a school is not safe or orderly, all other activities suffer. If
those within the school do not cooperate or collaborate, little progress can be made in enhancing a school's effectiveness. Level 1 has eight leading indicators:
Leading Indicator 1.1: The faculty and staff perceive the school environment as safe and
orderly.
Leading Indicator 1.2: Students, parents, and the community perceive the school environment
as safe and orderly.
Leading Indicator 1.3: Teachers have formal roles in the decision-making process regarding
school initiatives.
Leading Indicator 1.4: Teacher teams and collaborative groups regularly interact to address
common issues regarding curriculum, assessment, instruction, and the
achievement of all students.
Leading Indicator 1.5: Teachers and staff have formal ways to provide input regarding the
optimal functioning of the school.
Leading Indicator 1.6: Students, parents, and the community have formal ways to provide
input regarding the optimal functioning of the school.
Leading Indicator 1.7: The success of the whole school, as well as individuals within the
school, is appropriately acknowledged.
Leading Indicator 1.8: The fiscal, operational, and technological resources of the school are
managed in a way that directly supports teachers.
Each of these leading indicators is well-grounded in the research literature. Of the books I have authored (listed in table I.1, pages 2–3), the following contain the most direct reviews of the research literature and recommended interventions for the leading indicators at level 1:
• Leaders of Learning: How District, School, and Classroom Leaders Improve Student Achievement (DuFour & Marzano, 2011)
• Effective Supervision: Supporting the Art and Science of Teaching (Marzano et al., 2011)
• District Leadership That Works: Striking the Right Balance (Marzano & Waters, 2009)
• The Art and Science of Teaching: A Comprehensive Framework for Effective Instruction (Marzano, 2007)
• School Leadership That Works: From Research to Results (Marzano et al., 2005)
As mentioned in the introduction, the framework described in this paper is quite compatible with Hattie's (2012) synthesis of the research into 150 factors that correlate with student achievement. As a result of his analysis of over 59,000 studies, Hattie identified an effect size of .40 as the hinge-point in terms of evaluating factors that should be considered as possible areas of intervention within a school. An effect size of .40 roughly indicates that the average achievement in a school that possesses a given factor is four-tenths of a standard deviation higher than the average achievement of a school that does not possess that factor. Table 1.1 lists Hattie's factors that are at or above the .40 hinge-point and directly relate to level 1.
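The hinge-point arithmetic described above can be sketched in a few lines. All of the numbers below are hypothetical, chosen only to illustrate what a .40 effect size means; they are not drawn from Hattie's data.

```python
# Effect size as described above: the difference between the average
# achievement of a school with a factor and one without it, expressed
# in standard deviation units. All values are hypothetical.
mean_with_factor = 54.0      # hypothetical average achievement, factor present
mean_without_factor = 50.0   # hypothetical average achievement, factor absent
standard_deviation = 10.0    # hypothetical standard deviation of achievement

effect_size = (mean_with_factor - mean_without_factor) / standard_deviation
print(effect_size)  # 0.4, exactly at Hattie's hinge-point
```

A factor with an effect size at or above this value would, under Hattie's criterion, merit consideration as an area of intervention.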
Table 1.1: Hattie's Factors Related to Level 1 At or Above the Hinge-Point
Rank  Factor
12    Teacher-student relationships
16    Classroom behavior
25    Not labeling students
38    Classroom cohesion
41    Peer influences
42    Classroom management
47    Professional development
49    Play programs
52    Small-group learning
54    Concentration/persistence/engagement
56    Motivation
62    Teacher expectations
65    Cooperative learning
69    Reducing anxiety
Leading Indicator 1.1:
Examples:
• When asked, faculty and staff generally describe the school as a safe place.
• When asked, faculty and staff generally describe the school as an orderly place.
• Clear and specific rules and procedures are in place for the running of the school.
• Faculty and staff know the emergency management procedures and how to implement them for specific incidents.
• Evidence of practicing emergency management procedures for specific incidents is available.
• Evidence of updates to emergency management plans is available.
Leading Indicator 1.2:
Examples:
• When asked, parents and students generally describe the school as a safe place.
• When asked, parents and students generally describe the school as an orderly place.
• Clear and specific rules and procedures are in place for the running of the school.
• Social media are used to allow students to anonymously report potential incidents.
• The school leader has a means of communicating to parents about issues regarding school safety (for example, a call-out system).
• The school leader coordinates with local law enforcement agencies regarding school safety issues.
• The school leader engages parents and the community regarding issues of school safety.
Leading Indicator 1.3:
Examples:
• The specific types of decisions on which teachers will have direct input are made clear.
• Data-gathering techniques are in place to collect information from teachers.
• Notes and reports that describe how teacher input was used when making specific decisions are in place.
• Electronic tools are utilized to collect and report teacher opinions regarding specific decisions (for example, SurveyMonkey).
• Groups of teachers are targeted and utilized to provide input regarding specific decisions.
Leading Indicator 1.4:
Examples:
• Professional learning communities (PLCs) are in place.
• PLCs have written goals.
• The school leader regularly examines the PLCs' progress toward goals.
• Common assessments are created by PLCs.
• Student achievement and growth are analyzed by PLCs.
• Data teams are in place.
• Data teams have written goals.
• The school leader regularly examines each data team's progress toward goals.
• The school leader collects and reviews minutes, notes, and goals from meetings to maintain a focus on student achievement.
Leading Indicator 1.5:
Examples:
• Data collection systems are in place to collect opinion data from teachers and staff regarding the optimal functioning of the school.
• Data are archived, and reports are regularly generated regarding these data.
• The manner in which these data are used is made transparent.
• The school improvement team provides input regarding the school improvement plan.
Leading Indicator 1.6:
Examples:
• Data collection systems are in place to collect opinion data from students, parents, and the community regarding the optimal functioning of the school.
• Data are archived, and reports are regularly generated regarding these data.
• The manner in which these data are used is made transparent.
• The school hosts an interactive website for students, parents, and the community.
• The school leader and teachers use social networking technologies (Twitter, Facebook) to involve students, parents, and the community.
• The school leader engages in virtual town hall meetings.
• The school leader conducts focus group meetings with students.
• The school leader hosts or speaks at community/business luncheons.
Leading Indicator 1.7:
Examples:
• When asked, faculty and staff generally report that the accomplishments of the school have been adequately acknowledged and celebrated.
• When asked, faculty and staff generally report that their individual accomplishments have been adequately acknowledged and celebrated.
• The school leader recognizes the accomplishments of individual teachers, teams of teachers, and the whole school in a variety of ways (for example, faculty celebrations, newsletters to parents, announcements, websites, social media).
• The school leader recognizes the success of individual departments.
• The school leader regularly celebrates the success of a variety of types of individuals (for example, teacher of the year, support staff employee of the year).
Leading Indicator 1.8:
Examples:
• When asked, faculty and staff generally report that they have adequate materials to teach effectively.
• When asked, faculty and staff generally report that they have adequate time to teach effectively.
• The school leader develops, submits, and implements detailed budgets.
• The school leader successfully accesses and leverages a variety of resources (for example, grants, title funds).
• The school leader manages time effectively in order to maximize focus on instruction.
• The school leader appropriately directs the use of technology to improve teaching and learning.
• The school leader provides adequate training for the instructional technology teachers are expected to use.
Copyright 2012 Robert J. Marzano
While all of the leading indicators for level 1 (as well as the other four levels) are useful
endeavors, one of the purposes of this publication is to describe steps that a school can take to
utilize the specific activities and interventions I have developed with colleagues over the last four
decades. Consequently, for each level, I provide recommendations for specific initiatives that
come directly from my work. I refer to these initiatives as critical commitments. A critical commitment does not automatically address all elements of a level, but it does represent an initiative or activity that, when executed well, establishes what I believe to be a necessary
foundation for attaining high reliability status at a given level. I believe that the PLC process
should be considered a critical commitment for level 1.
While the PLC process is sometimes thought of as a singular intervention to engage teachers
in meaningful collaboration, when used to its full potential it can be the structure that makes
possible the successful implementation of a variety of the leading indicators for level 1. Indeed,
Richard DuFour and I (2011) maintain that the PLC process can change the basic dynamic of
leadership within a school, allowing school leaders to have a more efficient and direct impact on
what occurs in classrooms. We note:
The principal of a K–5 building can now work closely with six teams rather
than thirty individuals. The principal of a large high school can influence
twenty team leaders directly rather than 150 teachers indirectly. In short,
the PLC process provides a vehicle for focused interactions between
principals and teachers. (p. 51)
DuFour and I explain that in the absence of the PLC process, a principal's influence on student achievement might be depicted as shown in figure 1.1.
[Figure not fully recovered; its labels include Principal Actions and Student Achievement.]
Figure 1.1: Typical relationship between principal behavior and student achievement. From DuFour & Marzano, 2011, p. 49.
Figure 1.1 indicates that in the absence of the PLC process, the principal's influence on student achievement passes through teachers. This has long been recognized in the research literature: the principal has an indirect influence on student achievement (see Marzano et al., 2005). DuFour and I further note that one of the more enlightening and disturbing aspects of the figure is that:
Multiple lines of influence are depicted between the principal and teachers' actions. This is because traditionally there has been no way for principals to interact directly and concretely with teachers in a manner that influences their actions in the classroom. (p. 49)
The PLC process alters this basic dynamic. Within the context of the collaborative team structure
of a PLC, the relationship between principal behavior and student achievement might be depicted
as shown in figure 1.2.
[Figure not fully recovered; its labels include Principal Actions, PLCs, and Student Achievement.]
Figure 1.2: Relationship between principal behavior and student achievement with PLCs. From DuFour & Marzano, 2011, p. 52.
As shown in figure 1.2, principals have a direct line of influence to collaborative teams, and
collaborative teams have a direct line of influence to teacher actions in the classroom. In effect,
use of the PLC process can render leadership more efficient.
I recommend the PLC process as a critical commitment because it is a vehicle for facilitating
most, if not all, of the leading indicators for level 1. Obviously, the PLC process is directly
related to leading indicator 1.4 because teachers interact to address issues regarding curriculum,
assessment, and instruction. The PLC process can also be a powerful vehicle for leading
indicator 1.7 because collaborative groups can be used to identify and recognize individuals
whose students have made exceptional gains in their learning. Collaborative teams, as well as the school as a whole, can be singled out and acknowledged. The PLC process creates a foundation
for leading indicators 1.1, 1.3, and 1.5 because collaborative teams can be used to identify and
execute ways to make the school more safe and orderly, obtain teacher input into decisions
regarding school policies, and provide input regarding how the school might function more
effectively. Leading indicator 1.8 can also be addressed through PLCs as collaborative teams can
be used to gather information from teachers about the use of fiscal, operational, and
technological resources. Finally, leading indicators 1.2 and 1.6 relate to parents and the community. It is important that initiatives and activities be designed and executed specifically for those constituent groups, and collaborative teams can aid in the design of those initiatives and activities.
Lagging Indicator 1.10: Materials and resources for specific classes and courses meet
the state or district specifications for those classes and courses.
Lagging Indicator 1.11: Time available for specific classes and courses meets the state
or district specifications for those classes and courses.
Lagging Indicator 1.12: Evidence is available that adequate proportions of the school
budget are focused on issues that directly support teaching and
learning.
Lagging Indicator 1.13: Evidence is available that specific accomplishments of the
school and/or individuals within the school have been formally
acknowledged.
Lagging Indicator 1.14: Incidents indicating teacher dissatisfaction with the school (for
example, teacher requests for transfers to other schools) are
very low or nonexistent.
Some of the lagging indicators in table 1.3 are perceptual in nature and can be addressed through
simple survey techniques, many of which can be administered in the context of PLCs. For
example, surveys might be developed to determine if faculty and staff perceive the school
environment as safe and orderly, whether they believe they have proper input into the running of
the school, and so on. However, to use these surveys as lagging indicators, appropriate criterion
scores must be set. For example, the school might set as a criterion that 80% of teachers must
have positive responses to the survey items to indicate that level 1 status has been met.
Other lagging indicators are much more concrete. For example, consider lagging indicator
1.2: Few, if any, incidents occur in which rules and procedures are not followed. Detailed
records must be kept to establish clear criterion scores for indicators such as this. For example, a school leader might decide that the school must average no more than one incident of a significant violation of school rules and procedures per month to be considered highly reliable
for this lagging indicator. In the concluding chapter, I address how a school leader might identify
the lagging indicators he or she will use and set criterion scores for high reliability status.
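Setting a criterion score of the kind described above reduces to a simple proportion check. The sketch below is a hypothetical illustration of the 80% positive-response criterion mentioned earlier; the function name and survey numbers are illustrative, not part of the framework itself.

```python
# Hypothetical check of a perceptual lagging indicator: has the share of
# positive survey responses reached the criterion score (for example, 80%)?
def criterion_met(positive_responses: int, total_responses: int,
                  criterion: float = 0.80) -> bool:
    """Return True when the proportion of positive responses meets the criterion."""
    return positive_responses / total_responses >= criterion

print(criterion_met(41, 50))  # 82% positive responses -> True
print(criterion_met(39, 50))  # 78% positive responses -> False
```

The same pattern applies to the more concrete indicators: replace the survey proportion with, say, a monthly incident count compared against a maximum-allowed threshold.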
2. An Instructional Framework That Develops and Maintains Effective Instruction in Every Classroom
Level 2 addresses a central feature of effective schooling: the quality of teaching in classrooms. When a school reaches high reliability status for level 2, it can guarantee that quality teaching occurs in every classroom. Operationally, this means that variability in teacher quality within a school is quite low: every teacher uses effective instructional strategies. Indeed, one of
the hallmarks of school systems around the world that produce the greatest gains in student
learning is that they monitor and minimize the variability of instruction in their classrooms
(Barber & Mourshed, 2007). Level 2 has six leading indicators:
Leading Indicator 2.1: The school leader communicates a clear vision as to how instruction
should be addressed in the school.
Leading Indicator 2.2: Support is provided to teachers to continually enhance their
pedagogical skills through reflection and professional growth plans.
Leading Indicator 2.3: Predominant instructional practices throughout the school are known
and monitored.
Leading Indicator 2.4: Teachers are provided with clear, ongoing evaluations of their
pedagogical strengths and weaknesses that are based on multiple
sources of data and are consistent with student achievement data.
Leading Indicator 2.5: Teachers are provided with job-embedded professional development
that is directly related to their instructional growth goals.
Leading Indicator 2.6: Teachers have opportunities to observe and discuss effective teaching.
Of the books I have authored, the following contain the most direct reviews of the research literature and recommended interventions for the leading indicators at level 2:
• Effective Supervision: Supporting the Art and Science of Teaching (Marzano et al., 2011)
• The Art and Science of Teaching: A Comprehensive Framework for Effective Instruction (Marzano, 2007)
Factors from Hattie's (2012) list that most directly relate to level 2 and are at or above the
hinge-point are listed in table 2.1.
Table 2.1: Hattie's Factors Related to Level 2 At or Above the Hinge-Point
Rank  Factor
4     Teacher credibility
5     Providing formative evaluation
6     Micro-teaching
7     Classroom discussion
9     Teacher clarity
10    Feedback
13    Spaced vs. mass practice
21    Self-verbalization and self-questioning
23    Teaching strategies
27    Concept mapping
29    Direct instruction
30    Tactile stimulation programs
32    Worked examples
34    Peer tutoring
35    Cooperative vs. competitive learning
46    Interactive video methods
47    Professional development
48    Goals
49    Play programs
52    Small-group learning
53    Questioning
57    Quality of teaching
65    Cooperative learning
Leading Indicator 2.1:
Examples:
A written document articulating the schoolwide model of instruction is developed with
input by teacher leaders.
Professional development opportunities are provided for new teachers regarding the
schoolwide model of instruction.
When asked, teachers can describe the major components of the schoolwide model of
instruction.
New initiatives are prioritized and limited in number to support the instructional model.
The schoolwide language of instruction is used regularly in faculty and department
meetings.
The schoolwide language of instruction is used regularly by faculty in their informal
conversations.
The schoolwide language of instruction is used regularly by faculty in their professional
learning communities (PLCs).
Leading Indicator 2.2:
Examples:
Individual teachers have written statements of their pedagogical growth goals.
Individual teachers keep track of their progress on their pedagogical growth goals.
The school leader meets with teachers regarding their growth goals.
When asked, teachers can describe their progress on their pedagogical growth goals.
The school leader hires effective teachers.
The school leader has a system in place to effectively evaluate the selection process for
hiring new teachers.
The school leader has a system in place to effectively evaluate and revise the new
teacher induction program.
The school leader retains effective teachers.
When asked, the school leader can produce evaluation results, growth plans, and
evidence of support for struggling teachers.
Leading Indicator 2.3:
Examples:
Walk-through data are aggregated so as to disclose predominant instructional practices
in the school.
When asked, the school leader can describe the predominant instructional practices in the school.
When asked, teachers can describe the predominant instructional practices in the school.
The school leader provides forthright feedback to teachers regarding their instructional
practices.
The school leader can describe effective practices and problems of practice.
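As a rough illustration of how walk-through data might be aggregated to disclose predominant instructional practices, consider the sketch below. The practice names, visit data, and `aggregate_walkthroughs` helper are hypothetical, not part of any published protocol.

```python
from collections import Counter

def aggregate_walkthroughs(observations):
    """Tally how often each instructional practice was observed.

    `observations` is a list of walk-through records, each a list of
    practice names seen during one classroom visit.
    Returns practices ranked from most to least frequently observed.
    """
    counts = Counter()
    for visit in observations:
        counts.update(set(visit))  # count each practice once per visit
    return counts.most_common()

# Hypothetical walk-through data from three classroom visits
visits = [
    ["tracking student progress", "questioning"],
    ["questioning", "academic games"],
    ["questioning", "tracking student progress"],
]
print(aggregate_walkthroughs(visits))
# "questioning" appears in all three visits, so it ranks first
```

A school leader reviewing such a ranking could answer, at a glance, which practices predominate and which are rarely seen.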
Leading Indicator 2.4:
Examples:
Highly specific rubrics are in place to provide teachers with accurate feedback on their
pedagogical strengths and weaknesses.
Teacher feedback and evaluation data are based on multiple sources of information including,
but not limited to: direct observation, teacher self-report, analysis of teacher performance as
captured on video, student reports on teacher effectiveness, and peer feedback to teachers.
Teacher evaluation data are regularly used as the subject of conversation between school
leaders and teachers.
The school leader provides frequent observations and feedback to teachers.
When asked, teachers can describe their instructional strategies that have the strongest and
weakest relationships to student achievement.
Leading Indicator 2.5:
Examples:
Online professional development courses and resources are available to teachers regarding
their instructional growth goals.
Teacher-led professional development is available to teachers regarding their instructional
growth goals.
Instructional coaching is available to teachers regarding their instructional growth goals.
Data are collected to link the effectiveness of professional development to improvement in
teacher practices.
When asked, teachers can describe how the available professional development supports their
attainment of instructional growth goals.
Leading Indicator 2.6:
Examples:
Teachers have opportunities to engage in instructional rounds.
Teachers have opportunities to view and discuss video-recorded examples of exemplary
teaching.
Teachers have regular times to meet and discuss effective instructional practices (for
example, lesson study).
Teachers have opportunities to interact about effective teaching via technology.
Instructional practices are regularly discussed at faculty and department meetings.
Video segments of instructional practices are regularly viewed and discussed at faculty
and department meetings.
Information is available regarding teachers' participation in opportunities to observe and
discuss effective teaching.
Information is available regarding teacher participation in virtual discussions regarding
effective teaching.
Copyright © 2012 Robert J. Marzano
As at level 1, many of the examples of leading indicators for level 2 are relatively common
practices in many schools. Such practices include:
Video-recording teachers
All of these activities are viable ways to focus on level 2 issues. However, I believe that the
critical commitment essential to attaining level 2 status is an evaluation system whose primary
purpose is teacher development.
Clearly, teacher evaluation is one of the major initiatives of the second decade of the 21st
century. Indeed, it is such a robust movement that it can be used to address every issue relative to
level 2 status, but to do so it must have a primary focus on teacher development. As I note in a
2012 article, "The Two Purposes of Teacher Evaluation" (Marzano, 2012b), states, districts, and
schools all across the United States are busy developing or implementing teacher evaluation
systems. In the article I pose a question about the purpose of teacher evaluation that I believe
every school and district should ask itself: Is the purpose of teacher evaluation primarily
measurement or development?
In the article, I report the results of an informal survey administered to over 3,000 K–12
educators. That survey employed a simple scale with values ranging from 1 to 5. Educators who
thought that measurement should be the sole purpose of teacher evaluation selected 1. Educators
who thought that development should be the sole purpose of teacher evaluation selected 5. If an
educator believed that the purpose of teacher evaluation should be half measurement and half
development, he or she selected 3. Selecting 2 indicated a belief that measurement and
development should be dual purposes but that measurement should be dominant, and 4 indicated
a belief that measurement and development should be dual purposes but that development should
be dominant. The results from the survey are depicted in table 2.3.
Table 2.3: Results from Informal Survey Regarding the Purposes of Teacher
Evaluation
Response                                   Results
5: Completely development                  2%
4: Development dominant                    76%
3: Half measurement, half development      20%
2: Measurement dominant                    2%
1: Completely measurement                  0%
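The tallying behind a table like 2.3 is mechanical; the sketch below shows one way it might be computed. The sample responses are invented for illustration and are not the actual survey data.

```python
def response_distribution(responses):
    """Convert raw 1-5 survey responses into rounded percentages
    per scale point, mirroring the summary format of table 2.3."""
    total = len(responses)
    return {k: round(100 * responses.count(k) / total) for k in range(1, 6)}

# 10 hypothetical educators; most favor development-dominant evaluation (4)
sample = [4, 4, 4, 4, 4, 4, 4, 3, 3, 5]
print(response_distribution(sample))
# {1: 0, 2: 0, 3: 20, 4: 70, 5: 10}
```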
As indicated in table 2.3, the vast majority of those who responded to the informal survey
favored development as the primary purpose of teacher evaluation. I believe that a teacher
evaluation system focused on development has three characteristics: (1) the system is
comprehensive and specific, (2) the system includes a developmental scale, and (3) the system
acknowledges and supports growth.
and collegiality and professionalism. Table 2.4 lists the 41 elements of the model that pertain
directly to classroom instruction.
Table 2.4: 41 Elements of the Art and Science of Teaching Model That Pertain to
Classroom Instruction
I. Routine Segments
A. Communicating Learning Goals, Tracking Student Progress, and Celebrating Success
1. Providing clear learning goals and scales to measure those goals
2. Tracking student progress
3. Celebrating success
B. Establishing and Maintaining Classroom Rules and Procedures
4. Establishing classroom routines
5. Organizing the physical layout of the classroom for learning
II. Content Segments
C. Helping Students Interact with New Knowledge
6. Identifying critical information
7. Organizing students to interact with new knowledge
8. Previewing new content
9. Chunking content into digestible bites
10. Group processing of new information
11. Elaborating on new information
12. Recording and representing knowledge
13. Reflecting on learning
D. Helping Students Practice and Deepen Their Understanding of New Knowledge
14. Reviewing content
15. Organizing students to practice and deepen knowledge
16. Using homework
17. Examining similarities and differences
18. Examining errors in reasoning
19. Practicing skills, strategies, and processes
20. Revising knowledge
E. Helping Students Generate and Test Hypotheses about New Knowledge
21. Organizing students for cognitively complex tasks
22. Engaging students in cognitively complex tasks involving hypothesis generating and
testing
23. Providing resources and guidance
III. Segments Enacted on the Spot
F. Engaging Students
24. Noticing and reacting when students are not engaged
25. Using academic games
26. Managing response rates during questioning
27. Using physical movement
28. Maintaining a lively pace
29. Demonstrating intensity and enthusiasm
30. Using friendly controversy
31. Providing opportunities for students to talk about themselves
32. Presenting unusual or intriguing information
G. Recognizing and Acknowledging Adherence or Lack of Adherence to Rules and
Procedures
33. Demonstrating withitness
34. Applying consequences
35. Acknowledging adherence to rules and procedures
H. Establishing and Maintaining Effective Relationships with Students
36. Understanding students' interests and background
37. Using behaviors that indicate affection for students
38. Displaying objectivity and control
I. Communicating High Expectations
39. Demonstrating value and respect for low-expectancy students
40. Asking questions of low-expectancy students
41. Probing incorrect answers with low-expectancy students
The 41 elements in table 2.4 are categorized according to the type of lesson segment in which
they normally occur: routine segments, content segments, and segments enacted on the spot.
Strategies that are used on a routine basis are listed under routine segments. These include five
types of strategies (elements 1–5) organized into two subcategories: strategies that involve
communicating learning goals, tracking student progress, and celebrating success, and strategies
that involve establishing and maintaining classroom rules and procedures. Strategies that are
used when students are interacting with content are listed under content segments and fall into
three subcategories: strategies that help students interact with new knowledge, strategies that
help students practice and deepen their understanding of knowledge they have previously been
introduced to, and strategies that help students apply knowledge by generating and testing
hypotheses. There are 18 types of strategies that are used when students interact with content
(elements 6–23). Strategies that teachers must be prepared to use whenever they are needed, even
though they might not have planned to use them in a given lesson or on a given day, are listed
under segments enacted on the spot. These strategies fall into four categories: strategies for
engaging students, strategies that acknowledge adherence or lack of adherence to rules and
procedures, strategies that build relationships with students, and strategies that communicate
high expectations for all students. There are 18 types of strategies used in on-the-spot lesson
segments (elements 24–41).
Each of the 41 elements has substantial research supporting its efficacy (see Marzano, 2007).
Also, I believe that the model accurately represents the diversity of strategies that highly
effective teachers employ. Such a comprehensive and detailed listing of instructional strategies
makes perfect sense in the context of a teacher evaluation system focused on development.
An evaluation system designed primarily for measurement would not need to be as robust. In
fact, many of the 41 elements in table 2.4 (pages 30–31) are unnecessary if the sole purpose of
teacher evaluation is measurement. This is because some of the strategy areas listed in table 2.4
correlate with student achievement but are not absolutely necessary to be effective in the
classroom. For example, consider academic games (element 25), which are certainly useful tools
in enhancing student achievement (Hattie, 2009; Walberg, 1999). However, every teacher does
not have to use academic games. Indeed, a teacher can produce dramatic gains in student
learning without using games at all.
A teacher evaluation system focused on measurement alone would only involve those
elements that cut across all grade levels, all subjects, and all types of students. In my model,
there are 15 such elements, which are shaded in table 2.4 (pages 30–31). It is important to note
that these 15 elements would not address the fine-tuned granular levels of behavior that
distinguish true experts in the classroom from everyone else. As Nalini Ambady and Robert
Rosenthal (1992) note, expertise occurs in "thin slices of behavior" (p. 257). To develop those
thin slices of behavior that are characteristic of experts, teachers need feedback on all 41
elements listed in table 2.4. Using that feedback, teachers can identify areas of strength and
weakness and then systematically begin improving their areas of weakness.
Table 2.5: Developmental Scale

Innovating (4): The teacher adapts or creates a new version of the strategy or behavior for unique student needs and situations.
Applying (3): The teacher uses the strategy or behavior and monitors the extent to which it affects student outcomes.
Developing (2): The teacher uses the strategy or behavior without significant errors or omissions.
Beginning (1): The teacher uses the strategy or behavior incorrectly or with parts missing.
Not Using (0): The teacher is unaware of the strategy or behavior or is aware of it but has not used it in the classroom.
Not using indicates that a teacher is not aware of a particular strategy or is aware of it but has not
tried it in his or her classroom. For example, if a teacher were unaware of strategies for engaging
students in friendly controversy (element 30 in table 2.4, page 31), he or she would be at the not
using level.
At the beginning level, a teacher knows about a strategy and uses it in the classroom, but
exhibits errors or omissions in its execution. For example, a teacher using the strategy of friendly
controversy is at the beginning level if he or she simply asks students to state their opinions
about a topic. Although students are performing one component of the strategy, stating their
opinions, they are not supporting their opinions with evidence or disagreeing respectfully with
others, which are also important components of the strategy.
At the developing level, the teacher uses the strategy without significant errors or omissions
and with relative fluency. At the applying level, a teacher not only executes a strategy with
fluency, but also monitors the class to ensure that the strategy is having its desired effect. A
teacher using the friendly controversy strategy at the applying level would verify that students
are backing up their opinions with evidence and disagreeing in a controlled and respectful
manner. It is at the applying level and above that a strategy has the potential of producing large
gains in student learning.
Finally, at the innovating level, the teacher monitors the class to ensure that a strategy is
having its desired effect with the majority of students and makes necessary adaptations to ensure
that all student populations are experiencing the strategy's positive effects. To reach all students,
a teacher might have to make adaptations for English language learners or for students who are
lacking in important background knowledge for the topic being addressed. One might think of
the innovating level as that at which the teacher is effectively differentiating instruction
(Tomlinson & Imbeau, 2010).
The scale in table 2.5 is specifically designed with teacher development in mind. It enables
teachers (commonly with the aid of a supervisor or instructional coach) to pinpoint their current
level of performance for a specific strategy, set goals for operating at higher levels within a given
period of time, and then achieve those goals as part of their personal growth plan.
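One way a school might record this kind of goal setting is sketched below. The element names, scores, and `growth_plan` helper are hypothetical; the scale values follow the developmental scale described above (0 = Not Using through 4 = Innovating).

```python
# Scale labels per the developmental scale (an assumption about encoding,
# not a published data format)
SCALE = {0: "Not Using", 1: "Beginning", 2: "Developing",
         3: "Applying", 4: "Innovating"}

def growth_plan(current, goals):
    """For each instructional element, report the teacher's current level,
    goal level, and the number of scale levels still to climb."""
    return {
        element: (SCALE[current[element]], SCALE[goals[element]],
                  goals[element] - current[element])
        for element in current
    }

# Hypothetical self-ratings for one teacher on two elements
current = {"friendly controversy": 1, "academic games": 3}
goals   = {"friendly controversy": 3, "academic games": 4}
print(growth_plan(current, goals))
```

A supervisor or instructional coach could review such a record with the teacher to confirm that each goal targets movement of at least one scale level within the planning period.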
summative category of effectiveness regarding their teaching at the end of the year (for example,
highly effective, effective, needing improvement, or not acceptable). Such a system would
communicate to teachers that the school expects, and rewards, continuous improvement.
A teacher evaluation system that focuses on teacher development can be highly instrumental
in satisfying the six leading indicators for level 2. For example, having a system that is
comprehensive and specific greatly facilitates attainment of a clear vision of instruction (leading
indicator 2.1) and clear and ongoing evaluations of teachers' pedagogical strengths and
weaknesses (leading indicator 2.4). Use of a developmental scale helps teachers enhance their
pedagogical skills (leading indicator 2.2) and provides evidence regarding the predominant
instructional practices used throughout the school (leading indicator 2.3). Acknowledging and
supporting growth naturally leads to a school providing job-embedded professional development
(leading indicator 2.5) and providing opportunities for teachers to observe and discuss effective
teaching (leading indicator 2.6).
Survey data indicate that teachers are well aware of the school's
instructional model and their status within that model.
Lagging Indicator 2.8:
Lagging Indicator 2.10: Evidence exists that teachers who have demonstrated little or no
desire to develop or maintain high levels of pedagogical skill are
counseled out of the profession or terminated in extreme cases.
Again, some of the lagging indicators are perceptual in nature such as:
Survey data indicate that teachers are well aware of the school's instructional model and
their status within that model.
Survey data indicate high levels of agreement that the school in general and the
evaluation system in particular are designed to help teachers improve their pedagogical
skills.
As at level 1, perceptual data can be systematically gathered through surveys offered within
PLCs. Of course, to use perceptual data as a lagging indicator, criterion scores regarding
favorable perceptions would have to be established. The majority of the lagging indicators for
level 2 are not perceptual and require recordkeeping that is not commonly available in schools
today. For lagging indicator 2.4, records would have to be kept regarding teacher growth scores
and those scores correlated with student growth. For lagging indicator 2.5, records would need to
be kept on the professional development opportunities in which teachers engaged. For lagging
indicator 2.6, the distribution of teacher status scores would have to be continually updated and
examined. In short, to establish criterion scores for the lagging indicators for level 2, detailed
records in areas like the following must be kept and examined: teacher retention, teacher
dismissal, teachers' current status regarding specific instructional strategies, teachers' growth on
specific instructional strategies, and teacher participation in professional development activities.
While certainly labor-intensive to collect, such data form the basis of evidence that a school has
reached level 2 high reliability status.
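Lagging indicator 2.4 calls for teacher growth scores to be correlated with student growth. A minimal sketch of that computation, using an ordinary Pearson correlation and invented scores, might look like this:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between paired score series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-teacher records: growth on the developmental scale
# and average student growth for the same year (not real data)
teacher_growth = [0.5, 1.0, 1.5, 2.0, 0.0]
student_growth = [0.2, 0.5, 0.6, 0.9, 0.1]
print(round(pearson_r(teacher_growth, student_growth), 2))
```

A consistently positive correlation over several years would be one form of lagging evidence that the evaluation system is functioning as a development tool.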
3
A Guaranteed and Viable Curriculum
Focused on Enhancing
Student Learning
Level 3 addresses the extent to which a school's curriculum provides opportunities for all
students to learn challenging content that is aligned with national and state standards. Level 3 has
six leading indicators:
Leading Indicator 3.1: The school curriculum and accompanying assessments adhere to state
and district standards.
Leading Indicator 3.2: The school curriculum is focused enough that it can be adequately
addressed in the time available to teachers.
Leading Indicator 3.3: All students have the opportunity to learn the critical content of the
curriculum.
Leading Indicator 3.4: Clear and measurable goals are established and focused on critical
needs regarding improving overall student achievement at the school
level.
Leading Indicator 3.5: Data are analyzed, interpreted, and used to regularly monitor progress
toward school achievement goals.
Leading Indicator 3.6: Appropriate school- and classroom-level programs and practices are in
place to help students meet individual achievement goals when data
indicate interventions are needed.
The bedrock for level 3 high reliability status is a guaranteed and viable curriculum. The
concept of a guaranteed and viable curriculum was addressed in the book What Works in Schools
(Marzano, 2003b). Although the phrase was first coined in that book, research had been
accumulating for years supporting its importance. Perhaps the most direct research is that
regarding opportunity to learn (OTL). The concept of OTL was introduced by the International
Association for the Evaluation of Educational Achievement (see Wilkins, 1997) when it became
a component of the First, and then later the Second, International Mathematics Study (FIMS
and SIMS respectively) (see Burstein, 1992; Husen, 1967a, 1967b).
The logic behind OTL is that all students should have equal opportunities to learn the content
of the items being used to assess their achievement:
One of the factors which may influence scores on an achievement
examination is whether or not students have had an opportunity to study a
particular topic or learn how to solve a particular type of problem
presented by the test. (Husen, 1967b, pp. 162–163)
OTL is a very simple concept: If students do not have the opportunity to learn the content
expected of them, there is, of course, little chance that they will. As it relates to level 3 high
reliability status, OTL addresses the extent to which the curriculum in a school is guaranteed.
Operationally, this means that the curriculum provides clear guidance regarding the content to be
addressed in specific courses and at specific grade levels. Additionally, it means that individual
teachers do not have the option to disregard or replace content that has been designated as
essential to a specific course or grade level. This constitutes the "guaranteed" part of a
guaranteed and viable curriculum. But what about the viability of the curriculum?
The criterion of viability is equally important and, in fact, a necessary condition for having
a guaranteed curriculum. Viability means that the content teachers are expected to address can be
adequately addressed in the time teachers have available for instruction. Unfortunately, for years,
K–12 education has ignored the problem of too much content in its standards. My colleagues
and I (Marzano et al., 2013) commented on the proliferation of content that resulted from the
standards movement of the 1990s: "As different subject-matter organizations developed
standards for their specific content areas, each group of specialists identified everything they
thought students should know and be able to do in their fields" (p. 2). As a result, the standards
developed by subject-matter organizations during the 1990s presented far too much content for
teachers to address. Stated differently, the curriculum recommended or implied by the standards
initiatives of the 1990s was by definition not viable and, therefore, could not be guaranteed.
The Common Core State Standards (CCSS) initiative sought to alleviate the problem of too
much content in previous standards efforts:
The National Governors Association (NGA) and the Council of Chief State
School Officers (CCSSO) met in 2009 and agreed to take part in a state-led
process that will draw on evidence and lead to development and adoption
of a common core of state standards . . . in English language arts and
mathematics for grades K–12 (as cited in Rothman, 2011, p. 62). Other
organizations also contributed to the effort, among them Achieve, the
Alliance for Excellent Education, the James B. Hunt Jr. Institute for
Educational Leadership and Policy, the National Association of State Boards
of Education, the Business Roundtable, ACT, and the College Board
(Rothman, 2011). These organizations created a set of three criteria that
would guide the design of the CCSS. (Marzano et al., 2013, p. 6)
One of the three criteria established was that the new standards should be "fewer, clearer, and
higher" than previous standards. That is, there should be fewer standards statements, they should
be clearer (unidimensional and concrete), and they should encourage students to use higher-level
thinking (Marzano et al., 2013, p. 6). While the CCSS effort did succeed in reducing the amount
of content in mathematics and English language arts, not all agreed that the new standards were
completely viable and useful to K12 schools (for a discussion, see Marzano et al., 2013).
In addition to a curriculum that is guaranteed and viable, level 3 status requires a curriculum
that enhances student learning. This means that in addition to traditional content, the curriculum
also addresses skills that help students learn. This emphasis is explicit in the CCSS, particularly
in the Standards for Mathematical Practice and the college and career readiness (CCR) anchor
standards in English language arts. My colleagues and I (Marzano et al., 2013) note that these
standards involve "mental processes that could be directly taught to students and then used to
apply mathematics and ELA content in meaningful ways" (p. 23). Many of these standards
represent metacognitive skills. Level 3 high reliability status, then, requires significant tightening
and focus in the school curriculum and how it is used by teachers.
Of the books I have authored, the following contain the most direct reviews of the literature
and recommended interventions at level 3:
Teaching and Assessing 21st Century Skills (Marzano & Heflebower, 2012)
Designing and Assessing Educational Objectives: Applying the New Taxonomy (Marzano
& Kendall, 2008)
Table 3.1 lists those factors from Hattie's (2012) list that are at or above the .40 hinge-point and
directly related to level 3.
Table 3.1: Hattie's Factors Related to Level 3 At or Above the Hinge-Point

Rank  Factor
3     Response to intervention
14    Meta-cognitive strategies
17    Vocabulary programs
19    Creativity programs on achievement
22    Study skills
26    Comprehension programs
32    Worked examples
33    Visual-perception programs
36    Phonics instruction
43    Outdoor/adventure programs
47    Professional development
61    Writing programs
Leading Indicator 3.1:
Examples:
The written curriculum is analyzed to ensure that it correlates with state and district
standards (for example, the CCSS, if applicable).
The written curriculum adequately addresses important 21st century skills (for example,
college and career readiness [CCR] skills and mathematical practice skills from the
CCSS).
The curriculum taught in classrooms (that is, the taught curriculum) is analyzed to ensure
that it correlates with the written curriculum.
Assessments are analyzed to ensure that they accurately measure the written and taught
curricula.
School teams regularly analyze the relationship between the written curriculum, the
taught curriculum, and assessments.
When asked, teachers can describe the essential content and standards for their subject
area(s) or grade level(s).
Leading Indicator 3.2:
Examples:
Essential elements of content are identified.
The amount of time needed to adequately address the essential elements is examined.
Teams regularly meet to discuss the progression and viability of documents that articulate
essential content and timing of delivery (for example, pacing guides, curriculum maps).
Essential vocabulary is identified at all levels (that is, tiers 1, 2, and 3).
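The viability check behind leading indicator 3.2 is essentially an accounting exercise: the time the essential elements require must fit within the time available. The sketch below, with invented topics and hour estimates, shows the comparison:

```python
def is_viable(hours_needed, hours_available):
    """A curriculum is viable only if the instructional time its
    essential elements require fits within the time available."""
    return sum(hours_needed.values()) <= hours_available

# Hypothetical hours per essential topic in one course
needed = {"linear equations": 20, "functions": 25, "statistics": 15}
print(sum(needed.values()))                   # 60 hours required
print(is_viable(needed, hours_available=54))  # False: not viable as written
```

When the check fails, the team must either trim the essential content or find additional instructional time; leaving the mismatch in place makes the curriculum unguaranteeable.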
Leading Indicator 3.3:
Examples:
Tracking systems that examine each student's access to the essential elements of the
curriculum are in place.
Parents are aware of their child's current access to the essential elements of the curriculum.
All students have access to advanced placement courses.
The extent to which all students have access to necessary courses has been analyzed.
The school leader ensures that teachers have completed appropriate content training in
their subject-area courses.
A system of direct vocabulary instruction is available at all levels (that is, tiers 1, 2, and 3).
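The tracking systems named in leading indicator 3.3 can be as simple as comparing each student's record of taught content against the essential elements. The following is a hypothetical sketch; the element names, students, and `coverage_gaps` helper are invented for illustration:

```python
def coverage_gaps(essential, taught_per_student):
    """For each student, list essential elements not yet addressed.

    `essential` is the set of essential elements for a course;
    `taught_per_student` maps each student to the elements they have
    had the opportunity to learn.
    """
    return {student: sorted(essential - seen)
            for student, seen in taught_per_student.items()}

essential = {"fractions", "ratios", "percent"}
taught = {
    "student_a": {"fractions", "ratios", "percent"},
    "student_b": {"fractions"},
}
print(coverage_gaps(essential, taught))
# student_b still lacks the opportunity to learn "percent" and "ratios"
```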
Leading Indicator 3.4:
Examples:
Goals are established as a percentage of students who will score at a proficient or higher
level on state assessments or benchmark assessments.
Goals are established for eliminating differences in achievement for students at different
socioeconomic levels.
Goals are established for eliminating differences in achievement for students of differing
ethnicities.
Schoolwide achievement goals are posted so that faculty and staff see them on a regular basis.
Schoolwide achievement goals are discussed regularly at faculty and staff gatherings.
Faculty and staff can describe the schoolwide achievement goals.
Faculty and staff can explain how goals eliminate differences in achievement for students
of differing ethnicities.
Faculty and staff can explain how goals eliminate differences in achievement for students
at different socioeconomic levels, English language learners, and students with disabilities.
Improvement goals are assigned to various departments and faculty.
Goals are established for eliminating the achievement gap for all students.
Goals are established for eliminating differences in achievement for English language learners.
Goals are established for eliminating differences in achievement for students with
disabilities.
Goals address the most critical and severe deficiencies.
Timelines contain specific benchmarks for each goal, including the individual(s)
responsible for the goal.
Leading Indicator 3.5:
Examples:
Overall student achievement is regularly analyzed.
Student achievement is examined from the perspective of value-added results.
Results from multiple types of assessments are regularly reported and used (for example,
benchmark assessments, common assessments).
When asked, faculty and staff can describe the different types of reports available to
them.
Reports, graphs, and charts are regularly updated to track growth in student
achievement.
School leadership teams regularly analyze school growth data.
Data briefings are conducted at faculty meetings.
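A goal stated as a percentage of students scoring proficient or higher can be monitored mechanically against benchmark results. The sketch below uses invented scores, a hypothetical cut score, and a hypothetical goal:

```python
def percent_proficient(scores, cut_score):
    """Share of students at or above the proficiency cut score,
    expressed as a percentage."""
    at_or_above = sum(1 for s in scores if s >= cut_score)
    return 100 * at_or_above / len(scores)

def on_track(scores, cut_score, goal_percent):
    """Compare the current proficiency rate against the schoolwide goal."""
    rate = percent_proficient(scores, cut_score)
    return rate, rate >= goal_percent

benchmark = [68, 72, 81, 55, 90, 77, 62, 85]  # hypothetical scale scores
print(on_track(benchmark, cut_score=70, goal_percent=75))
# (62.5, False): 5 of 8 students meet the cut, short of the 75% goal
```

Running the same comparison on each benchmark administration is one concrete way to satisfy the regular monitoring called for in leading indicator 3.5.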
Leading Indicator 3.6:
Examples:
Extended school day and extended school week programs are in place.
Extended school year programs are in place.
After-school programs are in place.
Tutorial programs are in place.
The school schedule is designed so that students can receive academic help while in
school.
Individual student completion of programs designed to enhance their academic
achievement is monitored (that is, gifted and talented, advanced placement, STEM, and
others).
Response to Intervention measures and programs are in place.
Enrichment programs are in place.
Of the leading indicators listed in table 3.2, many are already commonly employed in schools.
These include:
Goals are established as a percentage of students who will score at a proficient or higher
level on state assessments or benchmark assessments
I believe there are three critical commitments important to achieving level 3 high reliability
status: (1) continually monitoring the viability of the curriculum, (2) a comprehensive
vocabulary program, and (3) direct instruction in knowledge application and metacognitive
skills.
as to be not worth the effort (for a discussion, see Marzano, 2004, 2010c). Fortunately, viable
solutions have been proposed.
Beck and McKeown (1985) explain that vocabulary terms can be thought of in three tiers.
The first tier includes those terms that are very frequent in the English language: the most basic
terms in the language, which are encountered frequently enough that students commonly learn
them in context. Tier 2 terms are those that are important to understanding a language but appear
infrequently enough in general language usage that they will probably not be learned in context.
Tier 3 terms in the Beck and McKeown schema are subject-matter specific: terms that are
important to academic subject areas but not as frequently found in general use in the language.
What Are the Tier 1, 2, and 3 Terms?
In a series of works, my colleagues and I have identified the tier 1, 2, and 3 terms. The tier 1
and tier 2 terms were identified in the book Teaching Basic and Advanced Vocabulary (Marzano,
2010c). There are 2,845 tier 1 terms and 5,162 tier 2 terms. Tier 1 and tier 2 terms are found in
the general vocabulary. Tier 3 terms are specific to academic subject areas. They were first
identified in the book Building Background Knowledge for Academic Achievement (Marzano,
2004). The number of tier 3 terms for various subject areas is reported in table 3.3 (page 46).
As shown in table 3.3 (page 46), there are 7,923 tier 3 terms. When added to the 2,845 tier 1
terms and 5,162 tier 2 terms, the total number of terms is 15,930. However, some 900 terms can
be found in more than one tier, which brings the total down to approximately 15,000 terms. In effect,
schools now have a corpus of some 15,000 terms that can be used to develop the foundational vocabulary
knowledge for any student at any level in any subject area.
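The corpus arithmetic above can be checked directly; this is a quick sketch in Python, where the 900-term overlap is the author's approximate figure:

```python
# Term counts reported above for each tier.
tier_1, tier_2, tier_3 = 2_845, 5_162, 7_923

raw_total = tier_1 + tier_2 + tier_3
print(raw_total)  # 15930 terms before removing cross-tier duplicates

# Roughly 900 terms appear in more than one tier, leaving a working
# corpus of approximately 15,000 distinct terms.
print(raw_total - 900)  # 15030
```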
A reasonable approach to direct vocabulary instruction would have the following characteristics:
1. Direct instruction in the tier 1 terms only for those students who need it.
2. Direct instruction in the tier 2 terms for all students as a regular part of instruction in the
English language arts.
3. Direct instruction in tier 3 terms as part of instruction in subject-area classes.
This is not to say that all 15,000 terms should be targeted for direct instruction. Indeed, students
can be guaranteed a firm grounding in tier 2 and tier 3 terms without any student experiencing
direct instruction in more than about 300 terms per year. This amounts to direct instruction in less
than 10 terms per week, including all major subject areas (for a discussion, see Marzano, 2004,
2010c). Instruction in the tier 1 terms can occur quite efficiently and quickly if terms are taught in
semantic clusters, or groups of related terms (see Marzano, 2010c). Again, not all students would
require direct instruction in tier 1 terms. In fact, instruction in tier 1 terms is usually necessary only
for some English language learners and for some students who come from home environments not
highly conducive to developing background knowledge (see Marzano, 2010c).
Table 3.3: Tier 3 Terms by Subject Area and Grade Band

Subject Area             K–2     3–5     6–8    9–12    Totals
Mathematics               80     190     201     214       685
Science                  100     166     225     282       773
English Language Arts     83     245     247     223       798
History
  General History        162     560     319     270     1,311
  U.S. History                   154     123     148       425
  World History                  245     301     297       843
Geography                 89     212     258     300       859
Civics                    45     145     210     213       613
Economics                 29      68      89     155       341
Health                    60      68      75      77       280
Physical Education        57     100      50      34       241
The Arts
  Arts General            14      36      30       9        89
  Dance                   18      24      42      37       121
  Music                   14      83      67      32       196
  Theater                  5      14      35      13        67
  Visual Arts              3      41      24       8        76
Technology                23      47      56      79       205
Totals                   782   2,398   2,352   2,391     7,923
Knowledge application and metacognitive skills can be organized into two broad categories of
skills: cognitive and conative. Cognitive skills are those needed to effectively process information and
complete tasks. Conative skills allow a person to examine his or her knowledge and emotions in order
to choose an appropriate future course of action. There are ten cognitive skills and seven conative skills
that have considerable research behind their efficacy and should be the subject of explicit instruction
within the guaranteed and viable curriculum. Each skill is briefly described in table 3.4.
Table 3.4: Cognitive and Conative Skills
Cognitive Skills
Generating conclusions involves combining known information to form new ideas.
Identifying common logical errors involves analyzing information to determine how true it is.
Presenting and supporting claims involves providing evidence to support a new idea.
Navigating digital sources involves using electronic resources to find credible and relevant
information.
Problem solving involves accomplishing a goal in spite of obstacles or limiting conditions.
Decision making involves using criteria to select among alternatives that initially appear to be
equal.
Experimenting is the process of generating and testing explanations of observed phenomena.
Investigating involves identifying confusions or contradictions about ideas or events and
suggesting ways to resolve those confusions or contradictions.
Identifying basic relationships between ideas involves consciously analyzing how one idea
relates to others.
Generating and manipulating mental images involves creating a picture of information in one's
mind in order to process it more deeply.
Conative Skills
Becoming aware of the power of interpretations involves becoming aware that one's thoughts,
feelings, beliefs, and actions are influenced by how one interprets situations.
Cultivating a growth mindset involves building the belief that each person can increase his or
her intelligence and abilities.
Cultivating resiliency involves developing the ability to overcome failure, challenge, or adversity.
Avoiding negative thinking involves preventing one's emotions from dictating one's thoughts
and actions.
Taking various perspectives involves identifying the reasoning behind multiple (and often
conflicting) perspectives on an issue.
Interacting responsibly involves being accountable for the outcome of an interaction.
Handling controversy and conflict resolution involves reacting positively to controversy or
conflict.
Adapted from Marzano et al., 2013, pp. 26–44.
The three critical commitments described above provide a strong foundation for addressing
the six leading indicators for level 3. Continually monitoring the viability of the curriculum
directly addresses leading indicator 3.2 and provides a foundation for addressing leading
indicators 3.1, 3.3, and 3.5. A comprehensive vocabulary program and explicit instruction in
knowledge application and metacognitive skills facilitate leading indicators 3.4 and 3.6.
Lagging Indicator 3.10: Written goals are available specifying the elimination of
differences in achievement for students at different
socioeconomic levels.
Lagging Indicator 3.11: Written goals are available specifying the elimination of
differences in achievement for students of differing ethnicities.
Lagging Indicator 3.12: Written goals are available specifying the elimination of the
achievement gap for all students.
Lagging Indicator 3.13: Written goals are available specifying the elimination of
differences in achievement for English language learners.
Lagging Indicator 3.14: Written goals are available specifying the elimination of
differences in achievement for students with disabilities.
Lagging Indicator 3.15: Written timelines are available containing specific benchmarks
for each goal, including the individual(s) responsible for the goal.
Lagging Indicator 3.16: Reports, graphs, and charts are available for overall student
achievement.
Lagging Indicator 3.17: Evidence is available showing that reports, graphs, and charts
are regularly updated to track growth in student achievement.
Lagging Indicator 3.18: Evidence is available that students who need instructional
support outside of the regular classroom have had access to and
taken advantage of such support.
At level 3, none of the lagging indicators are perceptual in nature. Therefore, survey data would
not suffice. Most of the lagging indicators manifest as written documents. For example, a
curriculum audit document would provide direct evidence of the viability of the curriculum. A
document that is regularly updated tracking the courses taken by students could be used to
provide evidence that students in need of instructional support are taking the necessary courses to
improve their achievement, and so on.
4
Standards-Referenced Reporting
of Student Progress
Level 4 addresses the extent to which a school's reporting system clearly identifies specific
topics for each subject area at each grade level and each student's current status on each
reporting topic. Level 4 contains the following two leading indicators:
Leading Indicator 4.1: Clear and measurable goals are established and focused on critical
needs regarding improving achievement of individual students within
the school.
Leading Indicator 4.2: Data are analyzed, interpreted, and used to regularly monitor progress
toward achievement goals for individual students.
As mentioned in the introduction, a school that reaches level 4 high reliability status operates in a
rarified atmosphere because it reports student achievement at a level of detail that surpasses
overall letter grades. Specifically, the school reports student achievement for specific topics
within each subject area. Such a system is referred to as standards-referenced but is frequently
confused with a standards-based system. John Kendall and I (Marzano & Kendall, 1996)
highlight this distinction as critical to well-informed school reform efforts. We note:
In a standards-based system, students must demonstrate that they have
met the standards at one level before they are allowed to pass on to the
next level. In a standards-referenced system, students' standings relative
Leaders of Learning: How District, School, and Classroom Leaders Improve Student
Achievement (DuFour & Marzano, 2011)
Designing and Assessing Educational Objectives: Applying the New Taxonomy (Marzano
& Kendall, 2008)
Level 4
Table 4.1 lists those factors from Hattie's (2012) list that are at or above the .40 hinge-point and
directly related to level 4.
Table 4.1: Hattie's Factors Related to Level 4 At or Above the Hinge-Point (ranks 1, 9, 10, 15, 31, 40, 47, and 48)
The scant attention currently paid to even the leading indicators for level 4 attests to the fact that
this level represents a major shift in how schools operate. To become a high reliability school at
level 4, I recommend two critical commitments: (1) develop proficiency scales for the essential
content and (2) report status and growth on the report card using proficiency scales. Both
represent major shifts in how schools are run.
Leading Indicator 4.1:
Examples:
Goals are established for each student in terms of their performance on state
assessments, benchmark assessments, or common assessments.
Essential elements for each subject area are articulated in terms of clear learning
progressions or scales (that is, rubrics).
Goals accompanied by proficiency scales are established for each student in terms of
their knowledge gain regarding the essential elements in each subject area.
When asked, students are aware of their status on their specific achievement goals.
Students keep data notebooks regarding their individual goals.
When asked, parents are aware of their child's achievement goals.
Student-led conferences focus on the individual students goals.
Parent-teacher conferences focus on the individual students goals.
Students perceive that their individual goals are academically challenging.
Leading Indicator 4.2:
Examples:
The status and growth of individual students are analyzed regularly.
When asked, individual students and their parents can describe their achievement status
and growth.
Individual student achievement is examined from the perspective of value-added results.
Individual student results from multiple types of assessments are regularly reported and
used (for example, benchmark assessments, common assessments).
When asked, faculty can describe the different types of individual student reports
available to them.
Individual student reports, graphs, and charts are regularly updated to track growth in
student achievement.
Teachers regularly analyze growth data for individual students.
School leadership teams regularly analyze individual student performance.
Table 4.3: A Rubric Regarding World War II at Grade 6

4  The student will create and defend a hypothesis about what might have happened if specific
   events that led to World War II had not happened or had happened differently.
3  The student will compare the primary causes for World War II with the primary causes for
   World War I.
2  The student will describe the primary causes for World War II.
1  The student will recognize isolated facts about World War II.
While rubrics like that in table 4.3 have been used successfully in individual classrooms,
rubrics designed by different teachers are not usually comparable. To illustrate,
consider table 4.4, which is a rubric written by a different teacher on the same topic and at the
same grade level as the one in table 4.3.
Table 4.4: A Second Rubric Regarding World War II at Grade 6

4  The student will compare the turning points in World War II to those in other wars.
3  The student will discuss key turning points in World War II that led to the victory of
   the Allied powers.
2  The student will recall basic information about how the Allied powers achieved a
   victory in World War II.
1  The student will recognize basic information about the outcome of World War II.
Even though the rubrics in tables 4.3 and 4.4 address the same topic (World War II), they have
very different expectations regarding the content for scores 2, 3, and 4. In the first rubric, a score
of 3 indicates that students can compare the causes of World War II with those of World War I.
A score of 3 in the second rubric indicates that students can describe the turning points in World
War II. That content is somewhat easier than the score 3 content in the first rubric.
To solve the problem of inconsistent rubrics from teacher to teacher, it is necessary to develop
a systematic approach to rubric design. Such an approach is depicted in table 4.5.
Table 4.5: Generic Form of a Proficiency Scale

Score 4.0  More complex content that goes beyond the target learning goal
Score 3.0  The target learning goal
Score 2.0  Simpler content
Score 1.0  With help, partial success at score 2.0 content and score 3.0 content
Score 0.0  Even with help, no success
To understand the generic form of a proficiency scale shown in table 4.5, it is best to start with
score 3.0. To receive a score of 3.0, a student must demonstrate competence regarding the target
learning goal. A score of 2.0 indicates competence regarding the simpler content, and a score of
4.0 indicates competence regarding the more complex content. While scores 4.0, 3.0, and 2.0
involve different content, scores 1.0 and 0.0 do not. A score of 1.0 indicates that, independently,
a student cannot demonstrate competence in the score 2.0 or 3.0 content, but, with help, he or she
demonstrates partial competence. Score 0.0 indicates that even with help, a student does not
demonstrate competence or skill in any of the content.
Table 4.6 depicts a proficiency scale for the topic of heritable traits.
Score 4.0  Students will be able to discuss how heritable traits and nonheritable traits affect
one another.
Score 3.0
Score 2.0
Score 1.0
With help, partial success at score 2.0 content and score 3.0 content
Score 0.0
The generic form of a proficiency scale depicted in table 4.5 allows for the creation of scales that
are comparable across teachers, across topics, across subject areas, and across grade levels.
Regardless of who uses a scale, students' scores can be interpreted the same way in terms of their
status relative to the learning goals articulated at score 3.0. A student who receives a score of 3.0
has met the learning goal; a student who receives a score of 4.0 has exceeded the learning goal,
and so on. The book Formative Assessment and Standards-Based Grading (Marzano, 2010a)
describes how proficiency scales designed using the generic framework in table 4.5 allow
teachers to use three different types of classroom assessments (obtrusive, unobtrusive, and
student-generated), compile summative scores for specific topics, and increase the reliability of
test design and scoring.
I believe proficiency scales are foundational to reaching level 4 high reliability status, and
their importance to successful school reform has become evident in the recent research literature.
For example, in a study of minimum grading practices, Carey and Carifio (2012) noted:
The results suggest that policy makers who are looking to institute reforms
that lead to fairer, more accurate, and more consistent student
assessment will need to look beyond minimum grading and to more
substantive reforms, such as instituting standards-based grading and
proficiency scales, to address the inherent inequities now empirically
established in this study to be a part of traditional grading schemes.
(p. 207)
To achieve level 4 high reliability status, proficiency scales should be written for each essential
topic in each course at each grade level. There are many resources to aid in such endeavors. For
example, over 1,500 scales are available at itembank.marzanoresearch.com that address the
subject areas of math, English language arts, science, U.S. history, world history, geography,
economics, civics, world languages, visual arts, performing arts, physical education, technology,
21st century skills, SEL or life skills, and career and technical skills. Additionally, proficiency
scales for the CCSS are available in the book Using Common Core Standards to Enhance
Classroom Instruction and Assessment (Marzano et al., 2013).
Some schools like to use more refined categories such as A+, A, A-, and so on. If that is the
case, the conversion scale depicted in table 4.7 can be used.
Table 4.7: Conversion Scale to Traditional Grades

Scale Score Average    Traditional Grade
3.75–4.00              A+
3.26–3.74              A
3.00–3.25              A-
2.84–2.99              B+
2.67–2.83              B
2.50–2.66              B-
2.34–2.49              C+
2.17–2.33              C
2.00–2.16              C-
1.76–1.99              D+
1.26–1.75              D
1.00–1.25              D-
Below 1.00             F
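The refined conversion in table 4.7 can be sketched as a simple lookup. This is an illustrative Python sketch, not part of the scale itself; the function name is assumed, and the letter grades for the ranges the source lists only by cut points are assumed to follow the usual plus/plain/minus pattern:

```python
def to_letter_grade(average: float) -> str:
    """Map an average proficiency-scale score (0.0-4.0) to a refined
    letter grade using the cut points from table 4.7."""
    cut_points = [  # (minimum average, grade), checked from highest down
        (3.75, "A+"), (3.26, "A"), (3.00, "A-"),
        (2.84, "B+"), (2.67, "B"), (2.50, "B-"),
        (2.34, "C+"), (2.17, "C"), (2.00, "C-"),
        (1.76, "D+"), (1.26, "D"), (1.00, "D-"),
    ]
    for minimum, grade in cut_points:
        if average >= minimum:
            return grade
    return "F"  # below 1.00

print(to_letter_grade(3.10))  # A-
print(to_letter_grade(2.46))  # C+
```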
A report card like the one in figure 4.1 (pages 60–61) can be accompanied by a traditional
transcript that lists courses taken, credits earned (in the case of high school), and an overall grade
point average (GPA). As mentioned previously, this report card and variations of it are referred
to as standards-referenced. In a standards-referenced system, students do not have to
demonstrate proficiency in each measurement topic to move on to another grade level.
Proficiency scales and standards-referenced report cards both directly address the leading
indicators for level 4. In effect, these two critical commitments make it rather easy to establish
goals for individual students in the school (leading indicator 4.1) and monitor the progress of
students toward those goals (leading indicator 4.2).
Figure 4.1: A Standards-Referenced Report Card

Name: John Mark                      Grade Level: 4
Address: 123 Some Street             Homeroom: Ms. Smith
         Anytown, CO 80000

Subject Area Averages:               Life Skills Averages:
Language Arts     2.46  C            Participation       3.40  A
Mathematics       2.50  B            Work Completion     2.90  B
Science           2.20  C            Behavior            3.40  A
Social Studies    3.10  A            Working in Groups   2.70  B
Art               3.00  A

Language Arts
  Reading:
    Word Recognition and Vocabulary          2.5
    Reading for Main Idea                    1.5
    Literary Analysis                        2.0
  Writing:
    Language Conventions                     3.5
    Organization and Focus                   2.5
    Research and Technology                  1.0
    Evaluation and Revision                  2.5
    Writing Applications                     3.0
  Listening and Speaking:
    Comprehension                            3.0
    Organization and Delivery                3.0
    Analysis and Evaluation of Oral Media    2.5
    Speaking Applications                    2.5
  Life Skills:
    Participation                            4.0
    Work Completion                          3.5
    Behavior                                 3.5
    Working in Groups                        3.0
  Average for Language Arts                  2.46

Mathematics
    Number Systems                           3.5
    Estimation                               3.0
    Addition/Subtraction                     2.5
    Multiplication/Division                  2.5
    Ratio/Proportion/Percent                 1.0
  Life Skills:
    Participation                            4.0
    Work Completion                          2.0
    Behavior                                 3.5
    Working in Groups                        2.0
  Average for Mathematics                    2.50

Science
    Matter and Energy                        2.0
    Forces of Nature                         2.5
    Diversity of Life                        1.5
    Human Identity                           3.5
    Interdependence of Life                  1.5
  Life Skills:
    Participation                            3.0
    Work Completion                          1.5
    Behavior                                 2.5
    Working in Groups                        1.0
  Average for Science                        2.20

Social Studies
    The Influence of Culture                 3.5
    Current Events                           3.0
    Personal Responsibility                  4.0
    Government Representation                3.5
    Human and Civil Rights                   1.5
  Life Skills:
    Participation                            3.5
    Work Completion                          3.5
    Behavior                                 3.5
    Working in Groups                        4.0
  Average for Social Studies                 3.10

Art
    Purposes of Art                          3.5
    Art Skills                               3.0
    Art and Culture                          2.5
  Life Skills:
    Participation                            2.5
    Work Completion                          4.0
    Behavior                                 4.0
    Working in Groups                        3.5
  Average for Art                            3.00
When proficiency scales and standards-referenced report cards are in place, satisfying the
lagging indicators is simply a matter of keeping detailed records and setting criterion scores. For
example, to satisfy lagging indicator 4.3, goals must simply be set for each student regarding
their growth on the proficiency scale for selected topics.
5
A Competency-Based System
That Ensures Students Mastery of
Content
Level 5 directly addresses the extent to which a school has replaced a system that matriculates
students based on time with one that matriculates students based on their demonstrated
competence. Level 5 has three leading indicators:
Leading Indicator 5.1: Students move on to the next level of the curriculum for any subject
area only after they have demonstrated competence at the previous
level.
Leading Indicator 5.2: The school schedule is designed to accommodate students moving at a
pace appropriate to their background and needs.
Leading Indicator 5.3: Students who have demonstrated competence levels greater than those
articulated in the system are afforded immediate opportunities to begin
work on advanced content and/or career paths of interest.
A school with level 5 high reliability status operates in the most rarified atmosphere of all:
one that is competency-based (also known as standards-based or outcome-based). The driving
force behind a competency-based system is that students do not move on to the next level until
they have demonstrated competency at the previous level. Additionally, each student progresses
at his or her individual pace. This revolutionary concept has been advocated and discussed by
many with a number of variations on the theme (for example, Bloom, 1976; Boyer, 1983, 1995;
Goodlad, 1984; Guskey, 1980, 1985, 1987; Spady, 1988, 1994, 1995) but is most commonly
associated with the work of John Carroll (1963, 1989). The Carroll model can be represented
using the following formula:
Amount of learning = Time actually spent / Time needed to learn
This formula indicates that the amount of content any student learns about a given topic is a
function of the time the student actually spends focusing on the content and the time needed to
learn the content. If a student has spent five hours on a topic but needs ten hours to learn the
content, then she has not learned the content well.
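Carroll's ratio can be sketched directly. The function name below is illustrative, and the cap at 1.0 is an assumption added here (time beyond what is needed does not add further learning), not part of Carroll's formulation:

```python
def amount_of_learning(time_spent: float, time_needed: float) -> float:
    """Carroll's model: the amount of content learned is a function of
    the ratio of time actually spent to time needed to learn it.
    The cap at 1.0 is an assumption made in this sketch."""
    return min(time_spent / time_needed, 1.0)

# The student described above: five hours spent, ten hours needed.
print(amount_of_learning(5, 10))   # 0.5 -- the content is only half learned
print(amount_of_learning(12, 10))  # 1.0 -- fully learned
```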
An interesting issue disclosed by Carroll's formula is the fact that students require differing
amounts of time to learn content. This makes a fixed-time system innately problematic. It seems
almost self-evident that an optimal educational system would be one in which students could take
as much or as little time as needed to learn important content.
As described in the book Classroom Assessment and Grading That Work (Marzano, 2006),
there are at least two conventions in the current system that work against the realization of
Carroll's model: grade levels and credits. By definition, grade levels work against students
progressing through content at their own pace. Regardless of their understanding of and skill at
the content addressed at a given grade, all students, with some rare exceptions, are moved
through the system at exactly the same pace. Time in school is constant; learning is varied.
Using credits as the basic indicator of progress within a subject area at the secondary level
also works against the realization of a competency-based system. Students must spend a specific
amount of time in a course to receive credit for the course. Credits can be traced back some 100
years to 1906, when Henry S. Pritchett, the president of the Carnegie Foundation for the
Advancement of Teaching, defined a unit as "a course of five periods weekly throughout an
academic year" (Tyack & Tobin, 1994). In his book, High School: A Report on Secondary
Education in America, Ernest Boyer (1983) explains that the credit approach has spawned a
"virtual logjam" (p. 237) in terms of allowing students to progress through subject areas at their
own pace.
Level 5
Of the books I have authored, the following contain the most direct reviews of the literature
and recommended interventions at level 5:
Table 5.1 lists those factors from Hattie's (2012) list that are at or above the .40 hinge-point and
directly related to level 5.
Table 5.1: Hattie's Factors Related to Level 5 At or Above the Hinge-Point (ranks 1, 9, 10, 15, 31, 37, 40, 47, and 48)
Leading Indicator 5.1:
Examples:
Clear criteria are established for each essential element regarding minimum scores that
demonstrate competence.
A system is in place that tracks each students status on the essential elements for each
subject area at the students current level.
Student status and progress for each essential element in each subject area are
continually monitored.
When students reach criterion scores for the essential elements at a particular level
within a subject area, they immediately start working on the elements at the next level.
Leading Indicator 5.2:
Examples:
Grade levels are replaced by competency levels.
Multiple venues are available simultaneously for students to learn and demonstrate
competency in the essential elements for each level of each subject area.
Online competency-based instruction and assessment are available in the essential
elements for each level of each subject area.
The time it takes for students to move through the various levels of the curriculum for
each subject area at each level is constantly monitored.
Leading Indicator 5.3:
Examples:
Students who have demonstrated the highest level of competence within a given subject
area are provided with opportunities for even more advanced study within that subject area.
Students who have demonstrated competence adequate for high school graduation begin
and receive credit for college work.
Students who have demonstrated competence adequate for high school graduation begin
and receive credit for work toward a trade that is of interest to them.
Very few, if any, of the leading indicators for level 5 are commonly exhibited in schools today.
Perhaps the only one that is beginning to receive attention is that online competency-based
instruction and assessment are available in some schools.
The critical commitment I believe necessary to attain high reliability status at level 5 is to get
rid of time requirements to move through levels of knowledge and adjust the reporting systems
accordingly. By definition, this means that overall or omnibus grades cannot be used.
As described in the book Formative Assessment and Standards-Based Grading (Marzano,
2010a), a competency-based system does not lock students into a specific grade level based on
their age. Rather, students move up and down a continuum of knowledge or skills based on their
demonstrated competence for each subject area. Table 5.3 (page 68) depicts an individual
students report card in this version of a competency-based system. This report card indicates the
students status across various subject areas.
In Formative Assessment and Standards-Based Grading, Marzano (2010a) describes the type
of report card shown in table 5.3 (page 68):
Most subject areas include levels 1 to 10. Level 10 represents mastery of
the content expected for a general high school diploma. Not all subject
areas have ten levels, however. Art has six levels, technology has seven
levels, and personal/social skills has five levels. This convention is used
because in a standards-based system, content is not organized into grade
levels that are based on age. It is instead organized into levels based on the
nature of the content. Where the content necessary for high school
graduation might logically fall into ten levels for some subjects, it might fall
into fewer levels for others. (pp. 119120)
Another feature of the report card in table 5.3 to note is the manner in which a student's current
status is reported. In mathematics, for example, the student's score at level 04 is reported as a
ratio of 21/35. This means that the student has achieved a score of 3.0 or higher on twenty-one of
the thirty-five learning goals (that is, proficiency scales) at that level. This student must
demonstrate score 3.0 or higher competence on fourteen more proficiency scales to progress to
level 5 in mathematics. Finally, each subject area can also include advanced levels. Art has one
advanced level, career literacy has two advanced levels, math and language arts have three
advanced levels, and so on.
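The ratio reporting just described can be modeled simply. The function names below are hypothetical; only the "mastered of total" ratio and the score 3.0 mastery threshold come from the text:

```python
def level_status(goal_scores: list[float], mastery: float = 3.0) -> str:
    """Report status at a level as 'mastered of total' learning goals,
    counting a goal as mastered at a proficiency-scale score of 3.0
    or higher."""
    mastered = sum(1 for score in goal_scores if score >= mastery)
    return f"{mastered} of {len(goal_scores)}"

def ready_to_advance(goal_scores: list[float], mastery: float = 3.0) -> bool:
    """A student moves to the next level only after demonstrating
    mastery on every learning goal at the current level."""
    return all(score >= mastery for score in goal_scores)

scores = [3.0, 4.0, 2.5, 3.5]    # four proficiency-scale scores at one level
print(level_status(scores))       # 3 of 4
print(ready_to_advance(scores))   # False -- one goal is still below 3.0
```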
No overall grades are computed in competency-based systems because they are antithetical to
the competency-based philosophy. In a competency-based system, the emphasis is on
demonstrating proficiency in each and every learning goal before a student progresses to the next
level. Overall grades simply summarize a student's average competence across a set of topics
across a given level and subject area.
Table 5.3: A Report Card in a Competency-Based System

Each column is a subject area (Mathematics, Career Literacy, Art, Personal/Social Skills,
Language Arts, Science, Social Studies, Technology), and each row is a level of the curriculum
(01 through 10, plus any advanced levels). Levels the student has completed show 3.0
(Proficient) or 4.0 (Advanced); the level the student is currently working on shows the ratio of
learning goals mastered (for example, 21 of 35 in mathematics); levels a subject does not include
show n/a.

Table 5.4: A Report Card Using Grade Levels as Performance Levels

Identical in structure to table 5.3, except that the rows are grade levels rather than competency
levels; the student's current grade level in each subject again shows a ratio of learning goals
mastered.
For some schools and districts, getting rid of traditional grade levels represents too radical a
shift from the norm. Stated differently, some schools seek to employ a competency-based
approach but maintain traditional grade levels. Fortunately, there is a way to do this. The most
straightforward approach to implementing a competency-based system while maintaining
traditional grade levels is to treat grade levels as performance levels. The record-keeping system
up to grade 8 is depicted in table 5.4 (page 69). Table 5.4 is basically identical
to table 5.3 (page 68) except that it uses grade levels. Each grade level represents a level of
knowledge or skill defined by specific learning goals for which proficiency scales have been
developed. Table 5.5 depicts a competency-based report card at the high school level.
At the high school level, specific courses are listed for each subject area in order of their
complexity. For example, in mathematics, Algebra I addresses simpler content than Algebra II
and so on. At the high school level, some courses might not exhibit a strict hierarchic structure.
For example, it might be the case that in technology, Desktop Publishing does not have to be
taken before Digital Graphics and Animation. Therefore, some courses at the high school level
will not have prerequisite courses or be prerequisites to other courses. However, progression
through any course is still executed in a competency-based fashion. Once a student has
demonstrated mastery (score 3.0 content) for all of the proficiency scales within a course, the
student receives credit for that course.
Table 5.5: Competency-Based Reporting for High School

Subject Area     Course                            Score
Mathematics      Calculus
                 Geometry
                 Algebra II                        12 of 24
                 Algebra I                         3.0 (proficient)
Science          AP Environmental Science
                 Physics
                 Chemistry                         6 of 22
                 Biology                           3.0 (proficient)
Social Studies   Economics
                 World History                     11 of 21
                 U.S. History                      4.0 (advanced)
                 Geography                         3.0 (proficient)
Language Arts    Shakespeare
                 Ancient Literature                13 of 22
                 European Literature               3.0 (proficient)
                 U.S. Literature                   3.0 (proficient)
Art              Orchestra
                 Performing Arts                   9 of 21
                 Painting                          3.0 (proficient)
Technology       Digital Graphics and Animation
                 Desktop Publishing                17 of 22
                 Computer Science                  4.0 (advanced)
Level 5
As is the case with a competency-based system that does not use grade levels, overall
omnibus grades are not assigned to students when grade levels are used as performance levels.
Rather, the report cards depicted in tables 5.4 (page 69) and 5.5 are kept current at all times and a
ratio is recorded at each grade level in which the student is working for each subject area.
Examining the patterns in tables 5.3 (page 68), 5.4 (page 69), and 5.5, it is evident that the
lowest acceptable score a student can receive on any proficiency scale for any level, grade level,
or course is a 3.0. This is because students must demonstrate a score of 3.0 on all topics to move
on to the next level, grade level, or course. However, distinctions can still be made among
students regarding their performance within each level, grade level, or course. To illustrate, consider
table 5.4. Notice that at grade 1, the student achieved an overall score of advanced in
mathematics and technology and an overall score of proficient in all other subjects. Recall that
at each grade level, students are scored on a 4-point scale for each learning goal. If a student has
achieved a 4.0 on all (or the majority) of the learning goals for a given subject at a given grade
level, he or she can be awarded the status of advanced as opposed to proficient.
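The advanced-versus-proficient rule just described can be sketched as follows. This is a hypothetical illustration; the function name and the interpretation of "majority" as strictly more than half are assumptions, not part of the framework.

```python
# Hypothetical sketch of awarding an overall subject status at a grade level:
# a student who has scored 4.0 on all (or the majority of) the learning goals
# is "advanced"; otherwise the student is "proficient".

def overall_status(goal_scores):
    """Assumes every goal has already been mastered (all scores >= 3.0)."""
    advanced_count = sum(1 for s in goal_scores if s >= 4.0)
    # "Majority" interpreted here as strictly more than half of the goals.
    return "advanced" if advanced_count > len(goal_scores) / 2 else "proficient"

print(overall_status([4.0, 4.0, 4.0, 3.0]))  # advanced (3 of 4 goals at 4.0)
print(overall_status([3.0, 3.0, 4.0, 3.0]))  # proficient
```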
With competency-based reporting systems like the one described here, records can be kept in a
manner that provides clear evidence of level 5 high reliability status. For example, each of the
report cards depicted in tables 5.3 (page 68), 5.4 (page 69), and 5.5 (page 70) satisfies lagging
indicator 5.2: Reports are available that indicate each student's current status for each essential
element at each level for each subject area. Having this type of data for each student would
make it easy to generate reports that would satisfy lagging indicator 5.5: Reports are available
depicting how long students are taking to move through the curriculum for each subject area at
each level. The requirements of other lagging indicators could also be satisfied with the data
available from such a reporting system.
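As a hypothetical illustration of how such records could feed a report for lagging indicator 5.5, the sketch below computes how long students are taking to move through the levels of a subject area. The record structure, dates, and field names are assumptions for illustration, not part of the framework.

```python
from datetime import date

# Hypothetical records: for each student, the start and completion dates of
# each level of a subject area (completion is None if still in progress).
records = {
    "student_a": {"Mathematics": [(date(2023, 8, 15), date(2024, 1, 10)),
                                  (date(2024, 1, 11), date(2024, 6, 2))]},
    "student_b": {"Mathematics": [(date(2023, 8, 15), date(2024, 3, 20))]},
}

def average_days_per_level(records, subject):
    """Average days students take to complete a level (cf. indicator 5.5)."""
    durations = [
        (end - start).days
        for levels in records.values()
        for start, end in levels.get(subject, [])
        if end is not None  # skip levels still in progress
    ]
    return sum(durations) / len(durations) if durations else None

print(average_days_per_level(records, "Mathematics"))
```

A report built this way could be aggregated by level, subject area, or school year, depending on which criterion scores the school has selected for the indicator.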
Conclusion
Guideline #2: Work on levels 1, 2, and 3 simultaneously but seek high reliability status for
one level at a time.
Levels 1, 2, and 3 are obviously related because they are a natural part of the day-to-day
running of a school. Safety and order are always concerns in a well-run school, and cooperation
and collaboration (or lack thereof) always influence day-to-day operations (level 1). Instruction
occurs every day, and the more attention paid to enhancing instructional practices in the
classroom, the better (level 2). The curriculum is what teachers and students interact about on a
daily basis, and the more attention paid to ensuring that the curriculum is guaranteed, viable,
and focused on enhancing student learning, the better (level 3). In short, school leaders are, by
definition, engaged in level 1, 2, and 3 activities constantly, and anything they can do to improve
their school's status regarding these levels is always a step in the right direction. Consequently, a
school leader might work on level 1, level 2, and level 3 leading indicators simultaneously. For
example, to increase a school's effectiveness, a school leader might decide to install an electronic
tool to help collect suggestions and comments from teachers as to the effective running of the
school (a level 1 leading indicator). Similarly, the school leader might decide to develop a
document that describes a schoolwide model of instruction (a level 2 leading indicator). Finally,
the school leader might also decide to determine which elements of the CCSS are considered
essential learning goals for each grade level (a level 3 leading indicator). Improvement in various
aspects of levels 1, 2, and 3 using the leading indicators as guides is always good practice as long
as these efforts do not overload teachers and school leaders.
Establishing criteria and collecting evidence for high reliability status for a particular level,
however, should be done methodically and systematically, level by level, starting at level 1. For
example, a school leader would start by identifying the lagging indicators the school will use for
level 1 and the criterion scores for those indicators. Next, the school leader would collect
evidence indicating that the school had met the criterion scores for each selected lagging
indicator. Once the criterion scores for all selected lagging indicators were met, the school leader
would consider the school validated for level 1 high reliability status and would then move on to
level 2.
One final point about moving through the levels: many schools already operate highly
effectively at levels 1 through 3. Consequently, attaining high reliability
status for these levels might simply be a matter of collecting evidence for selected lagging
indicators. In effect, schools that suspect they are already operating at high reliability status for a
given level should be able to identify lagging indicators and criterion scores and confirm their
perceptions in a quick and efficient manner.
Guideline #3: If necessary, set interim criterion scores for lagging indicators.
Some school leaders might find it useful to set criterion scores for lagging indicators in a
staged or incremental fashion to provide a scaffold for reaching their ultimate goal. To illustrate,
consider the following lagging indicator for level 1: Few, if any, incidents occur in which rules
and procedures are not followed. A school leader might ultimately wish to establish as the
criterion for high reliability status that the school must average no more than one incident of a
significant violation of school rules and procedures per month. However, after examining the
school's records, the school leader realizes that the school is currently far from reaching that
status. To provide momentum for progress on this lagging indicator, the school leader might set a
goal of moving to an average of no more than five violations per month as an interim step.
Guideline #4: Lead from the perspective of the indicators.
Throughout this paper, I have consistently alluded to the role of school leaders in moving a
school through the five levels. Ultimately, whether a school reaches high reliability status for any
level is dependent on whether the school leaders keep the five high reliability levels at the
forefront of their efforts to guide the school in its improvement efforts. In effect, school leaders
should judge their effectiveness by the extent to which they systematically move their school
through meeting criterion scores for the lagging indicators. Such a perspective will keep the
school and its leaders firmly grounded in tangible results that have direct effects on the well-being
of students.
In conclusion, the five levels of high reliability status described in this white paper, the
leading and lagging indicators, and the critical commitments recommended for each level are a
product of syntheses of research that I and others have conducted over decades. They are also
a product of working on long-term projects with schools and districts around the world. This
framework is offered as a tool for schools to guide their current and future efforts at reform.
Those using this framework should feel free to make adaptations to meet their specific needs and
circumstances.
Read about Dr. Marzano's optimistic view of the future and his recommendations for
reaching the highest levels of school effectiveness in Becoming a High Reliability School:
The Next Step in School Reform. Inside, you'll find:

• A solid research base spanning 40 years of educational research and development.
• Practical strategies for achieving each level of high reliability status.
• Explicit references to previous works by Dr. Marzano and his colleagues that
  allow you to delve deeper into your school's specific growth areas.
• Leading and lagging indicators with concrete examples to help you determine
  your school's current status and future action steps.

The Marzano High Reliability Schools framework integrates four decades of Dr. Marzano's
work involving teacher and school leader development; The Art and Science of Teaching;
effective research-based strategies for classrooms, schools, and districts; vocabulary
instruction and intervention; and hands-on research and training to give school leaders
the tools they need to systematically increase their school's reliability and effectiveness.