
PERFORMANCETRACKER IN PENNSYLVANIA SCHOOLS: MEASURING

IMPLEMENTATION USING THE STAGES OF CONCERN



A dissertation submitted to the faculty
of
Immaculata University
by Joseph James Cannella, Jr.



in partial fulfillment
of the requirements for the degree of
Doctor of Education




Immaculata University
September 2011













Copyright 2011 Joseph James Cannella, Jr.
All rights reserved
TITLE OF DISSERTATION:
Performance Tracker in Pennsylvania Schools: Measuring Implementation Using
the Stages of Concern
AUTHOR: Joseph Cannella
Reader
ON BEHALF OF IMMACULATA UNIVERSITY

F. Kane, Ed.D.
Dean, College of Graduate Studies

Thomas Compitello, Ed.D.
Chairperson, Education Division
DATE:

Dedication





to Kris




Acknowledgements
I would like to express a very special thank you to family, friends, and colleagues who
have supported me through this journey:
To my committee chairperson, Sr. Jane Anne Molinaro, IHM, Ph.D. for her
wisdom, patience, and guidance. It warmed my heart to learn that she, who gently
shepherded me into my doctoral studies by teaching my first doctoral class, would
assist me through this process. Sister, I am eternally grateful;
To my committee members, Mary Bolenbaugh, Ed.D. and Maureen McQuiggan,
Ed.D. Your constant support, mentorship, and friendship have been an inspiration
to me;
To Dr. Byron A. McCook, my independent reader, for his feedback and support.
I count your friendship among my greatest blessings;
To Kristen, who, as in all things, has been my partner throughout this process.
You are an unwavering source of support throughout my personal and
professional life;
To my children, Elizabeth LiXing and Benjamin Joseph Sungmin, who have
patiently postponed play time while daddy worked on his homework;
To my academic advisor, Dr. Joseph Corabi, for his guidance and sense of humor;
To the staff of SunGard Public Sector for their support of this effort, and the
teachers, superintendents, and boards of directors who participated in this study.

CONTENTS
Dedication .......................................................................................................................... iii
Acknowledgements ............................................................................................................ iv
Table of Figures ............................................................................................................... viii
List of Tables ..................................................................................................................... ix
Abstract .............................................................................................................................. xi
CHAPTER ONE INTRODUCTION ................................................................................1
Overview .....................................................................................................................1
Need for the Study .......................................................................................................5
Statement of the Problem ............................................................................................6
Definition of Terms .....................................................................................................7
Limitations .................................................................................................................10
Research Questions ...................................................................................................10
Summary ....................................................................................................................11
CHAPTER TWO REVIEW OF THE LITERATURE ...................................................13
Introduction ...............................................................................................................13
Accountability in Schools ..........................................................................................13
The Evolution of Accountability in Education: From ESEA to NCLB. ...........13
The High-Stakes Nature of Testing under NCLB. ............................................15
Adequate Yearly Progress (AYP). ....................................................................16
Raising stakes for Pennsylvania students. .........................................................18
Raising stakes for educators. .............................................................................19
Data-Driven Decision Making in Education .............................................................20
Evidence of success. .........................................................................................21
Using multiple measures of data. ......................................................................22

Bridging the data gap. .......................................................................................24
Instructional Data Systems to Manage Data .............................................................27
Instructional Systems. .......................................................................................28
PerformanceTRACKER by SunGard Public Sector ..................................................30
Teachers' Use of Instructional Data Systems ............32
Case Studies. .....................................................................................................34
Quantitative Studies. .........................................................................................36
The Concerns Based Adoption Model ......................................................................40
Description and Development of the Concerns Based Adoption Model ..........41
Stages of Concern. ............................................................................................43
Measuring Concerns about Innovations ....................................................................45
Research using Stages of Concern to Help Understand Implementations. .......45
Summary ....................................................................................................................52
CHAPTER THREE METHODS AND PROCEDURES ...............................................53
Introduction ...............................................................................................................53
Participants ................................................................................................................54
Instruments ................................................................................................................54
Design of the Study ...................................................................................................57
Procedure ...................................................................................................................57
Summary ....................................................................................................................60
CHAPTER FOUR RESULTS ........................................................................................61
Introduction ...............................................................................................................61
Participant Characteristics .........................................................................................61
Results Related to Research Questions .....................................................................68
Research Question One. ....................................................................................68

Research Question Two. ...................................................................................72
Research Question Three. .................................................................................76
Summary ....................................................................................................................83
CHAPTER FIVE DISCUSSION ....................................................................................84
Summary of the Study ...............................................................................................84
Summary of the Results .............................................................................................84
Participants. .......................................................................................................84
Research Question One. ....................................................................................85
Research Question Two. ...................................................................................87
Research Question Three. .................................................................................90
Limitations of the Study ............................................................................................93
Relationship to Other Research .................................................................................94
Recommendations for Further Research ...................................................................95
Conclusion .................................................................................................................99
References ........................................................................................................................102
Appendix A RERB Approval Form ................................................................................112
Appendix B Survey Instrument and SoCQ Norming Tables ..........................................114
Appendix C Data Tables .................................................................................................122
Appendix D Figures ........................................................................................................135



Table of Figures
Figure 2.1. Multiple Measures of Data ..............................................................................23
Figure 2.2. Time Lapses in Feedback to Students in Traditional Classrooms ...................30
Figure 2.3. Rapid Cycles of Feedback to Students in Classrooms with a Technology
Infrastructure. ...............................................................................................................31
Figure 3.1. Study Design Showing Relationship of Data Sources to Research
Questions......................................................................................................................58
Figure 4.1. Aggregate Concerns Profile for Teachers Using PerformanceTRACKER. .....70
Figure D.1. Concerns Profiles Across NCES Building Levels. .......................................136
Figure D.2. Concerns Profiles Across NCES Locale Codes. ..........................................137
Figure D.3. Concerns Profiles Across Title 1 Eligibility. ................................................138
Figure D.4. Concerns Profiles Across 2010 AYP Status. ................................................139
Figure D.5. Concerns Profile Across Teaching Assignment. ..........................................140
Figure D.6. Concerns Profile Across Teaching Experience. ...........................................141
Figure D.7. Concerns Profile Across PerformanceTRACKER Experience. ....................142
Figure D.8. Concerns Profile Across Self-Assessed Proficiency with Technology. .......143
Figure D.9. Concerns Profile Across Self-Assessed Proficiency with
PerformanceTRACKER. ............................................................................................144
Figure D.10. Concerns Profile Based on Professional Development. .............................145



List of Tables
Table 2.1 Stages of Concern About an Innovation ............................................................44
Table 3.1 Coefficients of Internal Reliability and Test-Retest Reliability for Each
Stage of the Concerns Questionnaire ...........................................................................56
Table 4.1 Comparison of proportions of NCES School Levels in Participants to Other
PA School Populations' Levels ...................................................62
Table 4.2 Comparison of proportions of Title I Eligibility in Participants to Other PA
School Populations' Title I Eligibility .........................................64
Table 4.3 Comparison of proportions of Locale Codes in Participants to Other PA
School Populations' Locale Codes ..............................................65
Table 4.4 Comparison of proportions of AYP Status in Participants to Other PA
School Populations' AYP Status .................................................67
Table 4.5 Raw Scores and Corresponding Percentile Ranks for the Stages of Concern ...68
Table 4.6 Quotes from Teachers Indicating Informational, Personal and Management
Concerns ......................................................................................................................71
Table 4.7 Concerns Scores for Teachers in Various School Levels ..................................73
Table 4.8 Concerns Scores for Teachers in Various School Locales ................................74
Table 4.9 Concerns Scores for Teachers in Schools by Title 1 Eligibilities ......................75
Table 4.10 Concerns Scores for Teachers in Schools by AYP Status ...............................76
Table 4.11 Concerns Scores for Teachers across Level of Teaching Assignment ............77
Table 4.12 Concerns Scores for Teachers across Years of Teaching Experience .............78
Table 4.13 Concerns Scores for Teachers across Years of PerformanceTRACKER
Experience....................................................................................................................79
Table 4.14 Concerns Scores for Teachers Across Self-Assessed Technology
Proficiency ...................................................................................................................80
Table 4.15 Concerns Scores for Teachers Across Self-Assessed
PerformanceTRACKER Proficiency ............................................................................81
Table 4.16 Concerns Scores for Teachers Across Professional Development ..................82
Table C.1 Locale Codes used by the National Center for Education Statistics ...............123

Table C.2 Proportion of School-Based Variables for Schools in Research Sample ........124
Table C.3 ANOVA Table for Concerns * NCES School Level ......................................125
Table C.4 ANOVA Table for Concerns * NCES Locale Code .......................................126
Table C.5 ANOVA Table for Concerns * NCES Title I Eligibility ................................127
Table C.6 ANOVA Table for Concerns * PA AYP Status..............................................128
Table C.7 ANOVA Table for Concerns * Participant Teaching Assignment Level .......129
Table C.8 ANOVA Table for Concerns * Participant Teaching Experience ..................130
Table C.9 ANOVA Table for Concerns * Participant PerformanceTRACKER
Experience..................................................................................................................131
Table C.10 ANOVA Table for Concerns * Participant Self-Assessed Technology
Proficiency .................................................................................................................132
Table C.11 ANOVA Table for Concerns * Participant Self-Assessed
PerformanceTRACKER Proficiency ..........................................................................133
Table C.12 ANOVA Table for Concerns * Participant Professional Development in
PerformanceTRACKER .............................................................................................134



Abstract

PerformanceTRACKER is a web-enabled database that delivers standards-based
assessment data describing students' strengths and needs in order to assist teachers in
targeting instruction. This study explored the concerns of teachers using
PerformanceTRACKER as conceptualized by the Stages of Concern, one component of
the Concerns-Based Adoption Model (CBAM). Two hundred eighty-six teachers in 26
schools representing 14 school districts across Pennsylvania participated in the study.
Significant differences were found across demographic variables describing the schools
and demographic variables which characterized the teachers themselves. The implications
for leadership and professional development that emerged are discussed.



CHAPTER ONE INTRODUCTION
This quantitative study examined concerns shared by classroom teachers in
Pennsylvania public schools that have implemented PerformanceTRACKER, an internet-
enabled database program owned and marketed by SunGard Public Sector. The program
assists educators in tracking and analyzing student achievement and performance data on
local, state, and national assessments.
Concerns shared by teachers implementing this system were conceptualized in the
manner described by Stages of Concern, a component of the Concerns-Based Adoption
Model, and measured by the Stages of Concern Questionnaire. Chapter One introduces
the concepts of contemporary emphasis on the use of data in schools and how the
Concerns-Based Adoption Model's focus on the affective nature of change can be used to
measure the change process itself.
Overview
Since the 2001 reauthorization of the Elementary and Secondary Education Act,
more commonly known as No Child Left Behind (NCLB), the pressure on schools
to improve their performance has continued to rise. In fact, NCLB holds practitioners
"accountable for student achievement in ways that have never before been evidenced in
education" (Kowalski & Lasley, 2009, p. viii). Specifically, NCLB requires continuous
monitoring and improvement of attendance rates, graduation rates, and student
achievement. By 2014, 100% of students regardless of gender, disability, economic
status, English proficiency level, migrant status, or race must achieve proficiency on
state assessments of reading and mathematics (NCLB, 2002). In addition to holding
schools accountable for student performance on standardized tests, NCLB requires
2

schools to adopt decision-making practices based on the use of data. The legislation
emphasized this necessity by including the phrases scientifically-based research and
evidenced-based decision making 111 times in the Act (Regional Educational
Laboratory Northeast and Islands, 2010).
Data use has been identified as the key to answering the accountability
requirements of NCLB (Potter & Stefkovich, 2009) and every American school district
knows that improving student learning requires the collection, analysis, and use of data
(Bernhardt, 2007, p. 7). Subsequently, the rise of accountability and shift to data-driven
decision making in education has been accompanied by unprecedented increases in
schools capacity to harness data. Student information systems and data-warehouses,
therefore, enabled schools to perform levels of analysis that were impossible before these
technologies (Esty & Rushing, 2007).
As Bernhardt (2007) stated, with the use of data comes the need for
"tools . . . [which] get needed data into the hands of teachers, without having to wait for the
district data expert to provide the data and the answers" (p. 7). Whether described as an
instructional data management system (EDmin, n.d.), a student data management system
(School Information Systems, 2010), a curriculum/instruction/assessment management tool
(Bernhardt, 2007), or a learning management system (SunGard Public Sector, 2009),
database tools of this nature share one common goal: to store and deliver information to
teachers about the abilities and learning needs of students in their classrooms.
Thus, these systems store student information and provide data and resources to
teachers in areas of curriculum, instruction, and assessments that measure student
performance against established standards. As described by EDmin (n.d.), these
technologies
support a standards-based instructional approach that brings data directly to the
desktop computers of classroom teachers and school leaders. [These systems
allow] educators to pace instruction, align materials to state standards over the
school year, create formative assessments and generate district, school, class and
student reports that are meaningful, actionable, and easy to use. (para. 3)
One system designed to provide educators with this category of information is
PerformanceTRACKER, a product of SunGard Public Sector. This system stores and
delivers standards-based local, state, and national assessment results to teachers and
administrators and is in use in more than 150 of Pennsylvania's 500 school districts.
While implementation of these technologies has the potential to transform
teaching and learning into a system that is data-driven and customized for individual
students, implementing this or any other change presents challenges. In the 1970s,
specialists at the Research and Development Center for Teacher Education (R&DCTE) at
the University of Texas in Austin began studying the change process in schools. Their
research verified several assumptions about change (Hord, Rutherford, Huling, & Hall,
2006), namely:
1) The process of change takes place over time, usually a period of several years.
2) Since changes affect people, individuals must be the focus when
implementing any new program or initiative.
3) Each individual reacts to change in a highly personal manner, and these
personal differences must be taken into account. As such, change will be
most successful when support is geared to the specific needs of the individuals
involved.
4) Feelings and skills tend to shift in a developmental manner as individuals gain
ever-increasing experience with the innovation in question. These feelings
and skills can be diagnosed and prescribed for.
5) Teachers will relate to a change in terms of what it will mean to them and how
it will affect their current situation in their classrooms.
6) Concrete programs, packages, and materials alone do not make change; only
people can make change by altering their behavior. Thus, the focus of change
facilitation should be on individuals, innovations, and the context in which the
change is taking place.
These six tenets formed the basis of the development of the Concerns Based
Adoption Model (CBAM; Hord, Rutherford, Huling, & Hall, 2006). Thus, CBAM
provides a framework for understanding the relationship between operationalizing any
sort of change and the individuals responsible for doing so. In addition, one of the
diagnostic components of CBAM is the Stages of Concern (SoC). SoC describes the
concerns of individuals involved in any implementation process and can be measured
using the Stages of Concern Questionnaire (SoCQ). Change facilitators can use the
concerns of teachers to provide the necessary supports "designed to maximize the
prospects for successful school improvement projects while minimizing the innovation-
related frustrations of individuals" (p. 7). Considering that implementation of
instructional management systems represents a substantial capital investment for school
systems, leaders implementing these structures should consider the affective nature of
change in terms of the concerns of teachers, and any barriers those concerns present to
the successful implementation of these technologies.
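Because later chapters report concerns as percentile profiles, it may help to sketch how SoCQ responses become such a profile. The outline below, in Python, assumes the instrument's published structure: 35 items rated on a 0-7 scale, five items per stage, each stage's raw score summed and then converted to a percentile using the norming tables in George, Hall, and Stiegelbauer (2006). The item groupings and percentile values shown are placeholders for illustration, not the published tables.

    # Illustrative sketch only; item groupings and percentile values are
    # placeholders, not the published SoCQ norming tables.
    STAGE_ITEMS = {stage: list(range(stage * 5 + 1, stage * 5 + 6))
                   for stage in range(7)}   # hypothetical item assignment

    NORMS = {stage: {0: 5, 7: 25, 14: 48, 21: 70, 28: 88, 35: 99}
             for stage in range(7)}         # placeholder percentile table

    def stage_raw_scores(responses):
        """Sum the five 0-7 Likert responses belonging to each stage."""
        return {stage: sum(responses[item] for item in items)
                for stage, items in STAGE_ITEMS.items()}

    def percentile(stage, raw_score):
        """Look up the nearest tabled percentile at or below the raw score."""
        table = NORMS[stage]
        return table[max(score for score in table if score <= raw_score)]

A respondent's profile is then the seven percentiles plotted across Stages 0 through 6, with the peak stage read as the area of most intense concern.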
Need for the Study
The Concerns Based Adoption Model (CBAM) is comprised of three dimensions
which describe change: Stages of Concern, Levels of Use, and Implementation
Configurations. Stages of Concern (SoC) describes how the observations and feelings of
individuals evolve over time as they implement a change. As an innovation like
PerformanceTRACKER is introduced, CBAM and SoC describe how users' concerns
develop through seven broad stages, from general awareness of the innovation to a
focus on how to use it to maximize benefits for their students.
Since CBAM predicts that the concerns of individuals develop over time,
individuals can be aided in moving along the continuum of concerns through appropriate
interventions and professional development (George, Hall, & Stiegelbauer, 2006). The
authors suggested that school leaders, as change agents, need to be aware of the concerns
shared by their teachers about the changes underway. In particular, leaders who wish
to maximize the impact of implementing a system like
PerformanceTRACKER need to consider the affective nature of change on their teachers.
Change facilitators could use the information about teachers' feelings to plan professional
development for individuals expressing specific concerns. These plans then could move
teachers through the continuum of concerns described by CBAM.
Originally developed to describe change in education, the Concerns Based
Adoption Model, Stages of Concern, and Stages of Concern Questionnaire have been
used widely in measuring the change process both in and out of educational settings.
Moreover, Stages of Concern and the Stages of Concern Questionnaire are frequently
used to measure implementations related to technology, but their use is not limited to
these areas (George, Hall, & Stiegelbauer, 2006).
However, the literature is largely silent on teachers' concerns related
to data-driven instruction or the introduction of learning management systems like
PerformanceTRACKER. Considering the importance of data use in education and the
prevalence of PerformanceTRACKER in Pennsylvania schools today, understanding the
affective impact of this change is essential for Pennsylvania school leaders. Moreover,
this understanding is vital for supporting teachers in their use of data in general, and
PerformanceTRACKER in particular.
Statement of the Problem
With consideration of the concepts discussed above, this study determined: (a) the
nature of concerns shared by teachers implementing PerformanceTRACKER in
participating school districts as revealed by the Stages of Concern Questionnaire (SoCQ);
(b) the degree of statistical significance in the concerns of teachers across variables
describing their schools (including level (elementary, middle, or high school), locale,
poverty level, and whether the school is currently making Adequate Yearly Progress
(AYP) as defined by NCLB); and (c) the degree of statistical significance in the concerns
of teachers across variables describing themselves and their current teaching assignment
(including their level of teaching assignment (primary, intermediate, middle, or high
school), years of teaching experience, years of PerformanceTRACKER use, self-assessed
skill levels in both technology in general and PerformanceTRACKER specifically, and
the presence or absence of professional development in the use of
PerformanceTRACKER).
Potentially, this study adds to the emerging body of knowledge about data-driven
decision making in education and the use of data systems by teachers. Furthermore, this
project may contribute to research about the affective nature of the impact of
these sorts of innovations.
Definition of Terms
Terms used in relation to data-driven decision making in education, or
discussed in the context of this study, are defined as follows:
Accountability – A process by which educators, schools, and school leaders are
held responsible for student performance or outcomes (American Association of School
Administrators, 2002).
Benchmark – A standard against which something can be measured or judged.
Benchmark assessments can be used to monitor students' progress towards meeting
established grade level expectations (American Association of School Administrators,
2002).
Classroom Teacher – A teacher of a class of students. For the purpose of this
study, classroom teachers include only those who are in daily contact with students, but
do not include special/elective subject teachers (like music or world language) or other
educational support staff like psychologists or guidance counselors.
Concern – The way one perceives the heightened feelings or thoughts about
stimuli, depending on the nature of the stimuli and the individual's own psychosocial state
of being (George, Hall, & Stiegelbauer, 2006).
Concerns Based Adoption Model – A "conceptual framework that describes,
explains, and predicts probable behaviors throughout the change process" (George, Hall,
& Stiegelbauer, 2006, p. 5).
Data – Qualitative or quantitative information, such as measurements or statistics,
which can be used as a basis for reasoning, discussion, or calculation (American
Association of School Administrators, 2002).
Data warehouse – An electronic system that serves as a central repository for all
data collected by a school system (American Association of School Administrators,
2002).
Disaggregated data – Analyzed data that reveals patterns for a specific subset of
students rather than data representing the entire population of students (American
Association of School Administrators, 2002).
Formative Assessment – A "planned process in which assessment-elicited
evidence of a student's status is used by teachers to adjust their ongoing instructional
procedures" (Popham, 2008, p. 6).
Innovation – The generic name given to the object or situation that is the focus of
the concerns experienced by the individuals implementing a particular change (George,
Hall, & Stiegelbauer, 2006). In this study, the innovation being examined is the
implementation of a specific learning management system, PerformanceTRACKER from
SunGard Public Sector.
Learning Management System (LMS) – A computerized system that allows
classroom teachers to access standards-based data about students whom they teach. The
LMS that is the focus of this study is PerformanceTRACKER from SunGard
Public Sector.
Level – The instructional level taught by the teacher. In this study, level is
stratified into four groups corresponding to the grades in which the majority of students
are clustered: (a) primary, grades PK-2; (b) intermediate, grades 3-5; (c) middle, grades
6-8; and (d) high, grades 9-12.
Locale – A designation assigned by the National Center for Education Statistics
to schools and districts indicating their location relative to a populous area. The locale
code categories are defined in Table C.1 (Appendix C).
PerformanceTRACKER – The learning management system for tracking and
analyzing student achievement data from SunGard Public Sector.
Poverty level – A code reflecting the school's level of Title I eligibility as
designated by the National Center for Education Statistics, taking one of three values:
(a) 0 if the school is not eligible for Title I; (b) 1 for a Title I eligible school; and (c) 2
for a Title I School-Wide Eligible school.
Stages of Concern (SoC) – One of the diagnostic dimensions of the Concerns
Based Adoption Model, focusing on the concerns of individuals involved in change.
Stages of Concern Questionnaire (SoCQ) – An instrument used to determine in
which stage along a continuum of development an individual's chief concern(s) about an
innovation reside.
Title I Eligible and Title I School-Wide Eligible – A Title I eligible school is a
school designated as being high poverty and eligible for participation in Title I of NCLB.
A Title I eligible school is one in which the percentage of children from low-income
families is at least as high as the percentage of children from low-income families served
by the LEA as a whole, or in which 35% or more of the children are from low-income
families. A School-Wide Eligible school is a Title I eligible school in which at least 40%
of students are from low-income families (U.S. Department of Education, 2010).
Limitations
This study was limited to analyzing concerns of classroom teacher volunteers who
use PerformanceTRACKER in the state of Pennsylvania. Consequently, conclusions
drawn from this study may not be generalizable to other educators in other contexts.
Since the participants represent the experiences specifically of classroom teachers who
are using PerformanceTRACKER, findings in this study may not represent the feelings of
other categories of users of PerformanceTRACKER (e.g., special-subject area teachers,
guidance counselors, instructional coaches, administrators, psychologists) or of educators
using another learning management system designed to accomplish similar goals.
Research Questions
The purpose of this study was to determine the nature of concerns shared by
teachers implementing PerformanceTRACKER in participating school districts as
revealed by the Stages of Concern Questionnaire (SoCQ). Furthermore, the degree of
statistical significance in the concerns of teachers across variables describing themselves
and the school districts in which they work was explored.
Specifically, this study was designed to answer the following research questions:
Question 1: What types of concerns are shared by teachers implementing
PerformanceTRACKER as measured by the Stages of Concern Questionnaire in each of
the participating schools?
Question 2: Are there significant differences in the concerns shared by all
participating teachers when examined across variables describing the schools in which
they work including:
a. the level of the school (elementary, middle, or high)?
b. the locale of the school district?
c. the poverty level of the school district?
d. the school districts current AYP status?
Question 3: Are there significant differences in the concerns shared by teachers
when examined across variables describing themselves as individuals including:
a. the current teaching level (primary, intermediate, middle or
high) of the teachers surveyed?
b. their number of years of teaching experience?
c. the amount of time the teachers have themselves been using
PerformanceTRACKER?
d. their self-assessed skill level with technology in general and
PerformanceTRACKER specifically?
e. the presence or absence of professional development in
PerformanceTRACKER?
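Questions 2 and 3 both ask whether mean concerns scores differ across groups of schools or teachers, the kind of question addressed by one-way analysis of variance (the corresponding ANOVA tables appear in Appendix C). The sketch below, in Python, illustrates one such comparison; the DataFrame, column names, and significance level are assumptions for illustration, not the study's actual analysis code.

    import pandas as pd
    from scipy import stats

    def concerns_anova(df, stage_col, group_col):
        """One-way ANOVA of one stage's percentile scores across the
        levels of a school- or teacher-level grouping variable."""
        groups = [g[stage_col].dropna() for _, g in df.groupby(group_col)]
        return stats.f_oneway(*groups)   # returns (F statistic, p value)

    # Example: do Stage 3 (Management) concerns differ by school level?
    # f, p = concerns_anova(survey, "stage3_percentile", "school_level")
    # A difference would be deemed significant when p < .05.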
Summary
Implementation of learning management technologies holds the
promise of providing teachers with the information necessary to target standards-based
instruction to students and ultimately raise their achievement. As a result of No Child
Left Behind, the pressure on all educators to raise student achievement continues to
increase. Teachers in nearly 30% of Pennsylvania schools use PerformanceTRACKER to
manage the data used to inform instruction. Therefore, it is critical to understand the affective
nature of the impact on teachers of implementing this particular instructional data
management system.
Chapter Two will review the literature related to this study.

CHAPTER TWO REVIEW OF THE LITERATURE
Introduction
This chapter begins with the development of federally-legislated accountability
efforts in education and addresses successive lawmaking endeavors that increased
accountability through testing. Explored next is the concept of data-driven decision
making as a potential solution to meeting these mandated performance increases. Since
implementations of these systems can have significance to their end users in the affective
domain, literature on the Concerns Based Adoption Model's description of individuals'
feelings during the change process is also addressed.
Accountability in Schools
The Evolution of Accountability in Education: From ESEA to NCLB.
The federal accountability provisions currently found in No Child Left Behind
(NCLB), beginning with the Elementary and Secondary Education Act and proceeding
through the publication of A Nation at Risk, have been developing for more than 40
years.
In 1965, President Lyndon Johnson signed into law the Elementary and
Secondary Education Act (ESEA), which provided funds to school districts in order to
meet the needs of "educationally deprived children" (20 U.S.C., 1965, Sec. 2701). Since
its initial authorization, ESEA has been reauthorized nine times and with each successive
passage of the legislation came increasing federal regulations (Creighton, 2001).
The federal government's role in education increased in 1983, after the National
Commission on Excellence in Education published A Nation at Risk: The Imperative for
Educational Reform. The report declared that "the educational foundations of our society
are presently being eroded by a rising tide of mediocrity that threatens our very future as
a Nation and as a people" (National Commission on Excellence in Education, 1983, p. 5).
Since the time A Nation at Risk criticized the state of the educational system in this
country,
many policy solutions have been recommended and implemented including
content standards and assessments for students – sometimes with serious
consequences for nonachievement; increased testing for teachers entering the
profession, with sanctions on the colleges that prepare them; and school report
cards and league tables published in newspapers, that show the relative success
of different schools within a district or state. (Danielson, 2002, p. vii)
On January 8, 2002, President George W. Bush signed the latest reauthorization of the
Elementary and Secondary Education Act, legislation known as the No Child Left Behind Act of
2001 (NCLB; Public Law 107-110, 2002). NCLB included key provisions related to
testing, achievement, and accountability. To support the goal of having 100% of students
in America performing at the proficient level by the year 2014, NCLB required states to
set challenging academic standards of achievement and create a system of reporting and
accountability to measure the results, initially in reading and math and later in
science (Public Law 107-110).
Granger (2008) inventoried the accountability provisions of NCLB, noting: (a)
indications of teacher quality, (b) publication of test scores in local newspapers, (c) a
regularly updated list of failing schools, and (d) statistics revealing the shortcomings of
American students in comparison to their international peers. Granger further described
the presumptions of NCLB, stating the following:
Teachers are only teaching if students are learning in accordance with
prescribed standards;
Student learning is accurately reflected in scores on standardized tests that
assess these standards; and
If students' test scores are not meeting these standards, then teachers are, in
fact, not teaching, that is to say, they are not doing their jobs. (p. 215)
The High-Stakes Nature of Testing under NCLB.
As defined by Brandt and Voke (2002), high-stakes tests carry important
consequences for the test taker or his institution. For example, high-stakes tests could be
used to determine a student's promotion from one grade to the next or used in
determining whether or not a school is considered to have made Adequate Yearly
Progress (AYP) under NCLB (Mitchell, 2006). Furthermore, Mitchell depicted three
important characteristics shared by high-stakes tests, namely: (a) a single defined
assessment, (b) a clear line drawn between those who pass and those who fail, and (c) a
direct reward for passing or consequence for failing. While NCLB included sanctions for
failing schools, the federal legislation took no position on whether states and/or districts
should use test results to determine whether individual students will receive rewards or
consequences (Heubert, 2002).
The use of exam results to determine eligibility for high school graduation
represents a growing trend. According to a 2008 study from The Center on Education
Policy, 22 states required students to pass an exit exam to receive a high school diploma
in 2002, and by the year 2012, 26 states are scheduled to have comprehensive high school
exit exams. These 26 states will represent 74% of high school students in the nation.
According to Bracey (2009), the origins of this movement date back to the mid-1970s
when some states were using minimum competency tests as graduation requirements.
Bracey further asserted that the number of states using these exit exams increased after the
1977 study entitled On Further Examination: Report of the Advisory Panel on the
Scholastic Aptitude Test from the National Testing Service described a decline in SAT
scores.
In Pennsylvania, the current test used by the state to determine whether or not
schools and school systems are making Adequate Yearly Progress (AYP) is the
Pennsylvania System of School Assessment (PSSA) test. In accordance with NCLB,
students are required to take tests of reading and mathematics in grades three through
eight as well as once in high school in order to exhibit proficiency in those areas. The
results of those assessments are used to determine AYP (Public Law 107-110
§ 1111(b)(3)(C)(vii), 2002).
Adequate Yearly Progress (AYP).
In order to determine AYP, NCLB puts forth three requirements, namely: 1)
attendance and graduation rates, 2) assessment participation rates, and 3) performance
indicators (Pennsylvania Department of Education, n.d., About AYP tab, para. 2). These
targets are designated for all students in the aggregate, as well as for disaggregated
subgroups of students of every racial and ethnic background, of English Language
Learners, of economically disadvantaged students, of migrant students, and of special
education students. The NCLB law maintains that this disaggregated data "shall not be
required in a case in which the number of students in a category is insufficient to yield
statistically reliable information or results would reveal personally identifiable
information about an individual student" (Public Law 107-110 § 1111(b)(2)(C)(v)(II),
2002). Accordingly, each state is empowered to set the minimum number of students that
constitutes a subgroup.
In a 2006 analysis, Minimum Subgroup Size for Adequate Yearly Progress (AYP):
State Trends and Highlights, Fulton reported that states have adopted two primary
approaches for determining the minimum size of subgroups, namely: (a) arriving at a
fixed number that applies to all schools, or (b) developing a formula that considers a
school's overall enrollment. Fulton reported that the most common subgroup sizes, used
by 15 of 50 states, were either 30 or 40 students, ranging from a low of 5 for schools in
Maryland to a high of 52 in Oklahoma. In Pennsylvania, 40 or more students constitute a
subgroup; only in instances where the size of the subgroup reaches this value is the
subgroup's performance considered in determining the school's AYP status
(Pennsylvania Department of Education, 2007).
Attendance or Graduation Rate. Pennsylvania regulations require individual
schools without a high school graduating class (e.g., elementary schools and middle
schools) to have an attendance rate of 90% or any percentage showing improvement from
the previous year. For high schools, Pennsylvania regulations require realization of a
graduation rate of 80% or a value showing improvement over that of the previous year.
School districts must meet both attendance and graduation targets in all of their schools
for the district to be considered as having met AYP in this area (Pennsylvania
Department of Education, n.d., About AYP tab, para. 2).
Test Participation. At least 95% of students (overall and within each subgroup)
who are enrolled in school as of the last day of the assessment window, regardless of
whether or not those students were enrolled at the school for the full academic year, must
be included in testing (Pennsylvania Department of Education, n.d., About AYP tab, para. 2).
Academic Performance. The AYP targets for academic performance indicate the
percentage of students who have been enrolled at the school for a full year (that is,
registered since October 1 of the testing year) who must meet or exceed scores at the
proficient level in mathematics and reading. Notably, for the 2008-2009 school year,
Pennsylvania state targets required that 56% of students perform at proficient or higher
in mathematics and that 63% of students be rated proficient or higher in reading. These
targets will remain the same through spring 2010, but as of 2011, those numbers increase
to 67% and 72% respectively, and by 2014, the target for both subjects is 100%
(Pennsylvania Department of Education, n.d., About AYP tab, para. 2).
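Taken together, the minimum subgroup size and the targets above amount to a set of threshold checks. The sketch below illustrates the participation and performance components in Python; the field names are assumptions, and the actual state calculation includes provisions (such as the improvement alternatives noted earlier) beyond this simplified gate.

    MIN_SUBGROUP = 40        # Pennsylvania's minimum subgroup size
    TARGETS = {2009: (56, 63), 2011: (67, 72), 2014: (100, 100)}
    # (math %, reading %) proficiency targets for selected years

    def reportable(subgroups):
        """Keep only subgroups large enough to count toward AYP."""
        return {name: g for name, g in subgroups.items()
                if g["n"] >= MIN_SUBGROUP}

    def meets_performance(group, year):
        """Check 95% participation plus the year's proficiency targets."""
        math_target, reading_target = TARGETS[year]
        return (group["participation_pct"] >= 95
                and group["math_proficient_pct"] >= math_target
                and group["reading_proficient_pct"] >= reading_target)

Under this gate, a 38-student subgroup drops out of the calculation entirely, while every reportable group must clear each threshold for the year.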
Raising stakes for Pennsylvania students.
On January 8, 2010, the Pennsylvania State Board of Education adopted changes
to Title 22, Chapter 4, the portion of the law that governs Academic Standards and
Assessments. These regulations set new, more rigorous graduation requirements for
students in the class of 2015. A central feature of these new requirements is a set of tests
known as Keystone Exams. Keystone Exams are state-developed end-of-course exams
(Pennsylvania Department of Education, n.d.) and, by regulation, the Keystone Exam
score will count for one-third of the final course grade. The policy states that if a student
scores Below Basic on the Keystone Exam, a zero counts as one-third of the final course
grade. For the class of 2015, students will be required to demonstrate proficiency on
Keystone Exams in English Composition, Literature, Algebra 1, and Biology. By the
year 2017, requirements will expand to include passing Keystone exams in the following:
(a) both English Composition and Literature, (b) two of three math courses (Algebra 1,
Algebra 2, or Geometry), (c) one of two science courses (Biology or Chemistry), and (d)
one social studies course (Civics and Government, American History, or World History).
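The one-third weighting carries a concrete arithmetic consequence worth spelling out. A minimal sketch, assuming a 0-100 grade scale (the regulation fixes the weight, not the scale):

    def final_course_grade(course_grade, keystone_score, below_basic):
        """Average the course grade (two-thirds) with the Keystone Exam
        (one-third); a Below Basic result contributes a zero."""
        exam_third = 0 if below_basic else keystone_score
        return (2 * course_grade + exam_third) / 3

For example, a student with a 90 course average who scores Below Basic receives (90 + 90 + 0) / 3 = 60 for the course.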
The revised Chapter 4 regulations do allow for school districts to use other
options for granting high school diplomas in the form of locally approved and
administered, independently validated assessments or Advanced Placement or
International Baccalaureate exams. However, these alternatives do not replace the
Keystone Exams, as they are part of the States plan to meet AYP reporting requirements
under NCLB (Pennsylvania Department of Education, n.d.).
Raising stakes for educators.
Finally, the accountability provisions of NCLB included a high stakes component
for school teachers and administrators. If schools repeatedly failed to make AYP, the
consequences prescribed in the law continued to grow. After having failed to make AYP
for four consecutive years, schools are designated for Corrective Action (Public Law 107-
110). Under Corrective Action, NCLB required that state education agencies take action
against school districts failing to make AYP. One of the sanctions included in the
legislation called for "replacing the [school district] personnel who are relevant to the
failure to make adequate yearly progress" (Public Law 107-110 § 1116(c)(10)(C)(iii),
2002).
A proposal from the Aspen Institute, a nonpartisan group in Washington,
described a plan to directly evaluate teachers based on their students' scores on tests, and
rate teachers based on the progress made over the course of the academic year. In a 2007
report, the Aspen Institute recommended that NCLB be revised to include a requirement
that all classroom teachers be considered Highly Qualified and Effective Teachers
(HQET). Under HQET, those educators who fall in the bottom quartile of their peers in
terms of producing learning gains for their students would receive professional
development, but if, over time, the teacher was unsuccessful in achieving HQET status,
that individual would be forbidden from teaching in a school which receives Title I
funding (Aspen Institute, 2007). The report also suggested that lawmakers establish a
definition of "Highly Effective Principal" by requiring school leaders to accomplish the
following: (a) obtain state licensure, (b) demonstrate the necessary leadership skills, and
(c) most importantly, produce improvements in student achievement "that are comparable
to high-achieving schools made up of similar children with similar challenges" (Aspen
Institute, 2007, p. 47).
Data-Driven Decision Making in Education
In 1997, data-driven decision making (D3M) was prescribed for public schools by
the U. S. Department of Education and has since been included in virtually all federal and
state accountability plans (Brooks-Young, 2003). Given D3M's potential to improve
teaching and learning and raise student achievement, combined with its prominence in
accountability plans for more than the last decade, it is critical that districts adopt data-
informed practices. Additionally, districts must increase both the overall organization's
and individual members' capacity to access and use data for instructional purposes
through the implementation of a technology system.
In 2002, President George W. Bush summarized the federal government's view of
data use to improve student achievement in education, stating, "When children are
regularly tested, teachers know where and how to improve" (US Department of
Education, 2002, slide 2). Thus, NCLB has increased the demand for data use and
systems to collect and analyze information. As maintained by Hoff (2007), principals
and teachers faced with accountability provisions of NCLB will demand access to data
that supports the goal of ensuring all students are meeting standards.
Evidence of success.
Deciding where to begin with collecting and analyzing data can be challenging
(American Association of School Administrators, 2002). Before a school or district
makes the shift to a data-driven organization there must be agreement on the part of
leaders and staff about the measures of data present in schools that can, in fact, evaluate
excellence. The rankings that indicate success or excellence can vary widely depending
on circumstances and cultures of the district (Collins, 2005).
Defining excellence or growth in the business sector may be easily accomplished
using data such as sales, profits, and stock prices. In his supplement, Good to Great and
the Social Sectors, Collins (2005) identified factors that made organizations in the social
sectors, including education, great, and suggested a method to define greatness without
standard business metrics. Collins proposed that "performance must be assessed
relative to mission, not financial returns" (p. 5). In addition, Collins stressed the importance of
assembling other metrics appropriate to the social sectors, stating:
It doesn't really matter whether you can quantify your results. What matters is
that you rigorously assemble evidence – qualitative or quantitative – to track your
progress. If the evidence is primarily qualitative, think like a trial lawyer
assembling the combined body of evidence. If the evidence is primarily
quantitative, then think of yourself as a laboratory scientist assembling and
assessing the data. (p. 7)
Furthermore, Collins (2005) contended that schools should consider their mission
statements when defining greatness. For example, if the school system values equipping
students to be successful in the 21st century, measures ought to include standardized test
scores and the mastery of technology skills.
Using multiple measures of data.
Once a school staff has collectively described how to measure success, they must
begin gathering the appropriate data. For instance, Bernhardt (2004, 2007) described the
need for schools to use a series of sources of evidence in order to fully understand
conditions that exist and contribute to student achievement, thus suggesting that schools
collect, analyze, and combine these various sources of data to examine the impact they
have on students relative to their mission (2004). The author asserted that the multiple
measures of demographic data, perceptual data, student learning outcomes (which include
test results), and school process data, should not only be tracked over time, but
intersected with one another to examine richer questions. Bernhardt's representation
the interplay among various kinds of data is depicted in Figure 2.1. The disaggregation
of student test data by demographic characteristics under No Child Left Behind examines
the interplay of two measures of data in this scheme: demographics and student
learning. Additional technologies, including databases and data warehouses, can support
analyses that cross the other multiple measures described. With robust data systems in
place, schools can make better decisions about what to change and how to change it and
also understand whether or not the changes are working (Bernhardt, 2004).


Figure 2.1. Multiple Measures of Data. From Using Data to Improve Student Learning in
Elementary Schools, by Victoria L. Bernhardt, 2003, Larchmont, NY: Eye on Education.
Copyright 2003 by Education for the Future Initiative, Chico, CA. Reprinted with
permission.
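The intersection Bernhardt describes, and the NCLB-style disaggregation noted above, can be expressed as a simple grouped summary. The sketch below uses Python with pandas; the column names and sample values are invented purely to illustrate crossing demographic data with student learning outcomes.

    import pandas as pd

    students = pd.DataFrame({
        "econ_disadvantaged": [True, True, False, False, False],
        "ell":                [False, True, False, False, True],
        "math_proficient":    [False, False, True, True, False],
    })

    # Demographics x student learning: percent proficient per subgroup.
    for demographic in ("econ_disadvantaged", "ell"):
        pct = students.groupby(demographic)["math_proficient"].mean().mul(100)
        print(demographic, pct.to_dict())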

Bridging the data gap.
Schools have been described as being "data rich, but information poor" (IBM, 2007). This
characterization highlights the vast amount of data available to schools and school
systems, yet also the comparatively small quantity of actionable information that can be
derived from that data. Love (2004) claimed that schools have access to large numbers of
records, but do not use them to guide changes in instructional practice to improve student
achievement. Bridging this gap appears to be a significant challenge for schools and
school leaders; the gap may be one of expertise, capacity, technology, or culture (Love,
Stiles, Mundry, & DiRanna, 2008). In particular, Love et al. characterized this gap as
being between the
myriad data now inundating schools [including] state test data sliced and diced
every which way, local assessments, demographic data, dropout rates, graduation
rates, course-taking patterns, attendance data, survey data, and on and on [and
the] desire, intention, moral commitment, and mandate to improve learning and
close persistent achievement gaps. (p. 16)
Due to the pressures of NCLB, schools are being asked to engage in "continuous
improvement in the quality of the educational experience of students and to subject
themselves to the discipline of measuring their success by the metric of students'
academic performance" (Elmore, 2002, p. 5). Nevertheless, Elmore claimed most
people who currently work in public schools "weren't hired to do this work, nor have they
been adequately prepared to do it either by their professional education or by their prior
experience in schools" (p. 5).
With significant quantities of data available, school leaders need to focus their
efforts and those of their faculties on a systematic improvement process (Love et al.,
2008). Various authors have crafted several approaches for school leaders. One such
approach, the Data Wise Improvement Process, involves professionals in a cyclical
inquiry process which includes: preparing for data analysis, inquiring about the
knowledge necessary to raise student achievement, and acting on the improvement plan
(Boudett, City, & Murnane, 2008). Other processes, including the Continuous School
Improvement Planning via the School Portfolio (Bernhardt, 2004), the Collaborative
Inquiry approach (Love et al., 2008), and the phases of Marzano's model from What
Works in Schools (2003), also share similar features. Each of these approaches calls for
an assessment of the current state of the school, the identification of areas for
improvement, the determination of a plan of action for change, and an evaluation of the
change's impact on student performance.
In order to be successful, however, each of these processes requires
that school leaders create the proper conditions and supports within their organization.
Love (2002) described some of the necessary supports for sustained data-driven change
over the long term which include, but are not limited to: (a) professional development
through which teachers transform their beliefs and gain the necessary skills and
knowledge to improve teaching and learning; (b) school cultures that support risk-taking,
collegiality, and a focus on student learning; (c) leadership that guides and supports the
change process; (d) the necessary technology to support the improvement; and (e)
policies that support reform.
Fox (2001) made an important connection between achievement data and
instruction by stating that systematic, targeted, and purposeful instruction was responsible
for high levels of student achievement and that instruction required skillful use of
classroom assessment data. These ideas date back to the 1970s when the suggestion was
made for teachers to keep track of each student's pattern of mastery and non-mastery, and
to regroup students based on mastery profiles (Means, 2005). A related concept is that of
differentiated instruction whose conceptual base assumes that students come to
classrooms with different experiences, expectations, motivations, and preferred learning
modalities. In other words, teachers should adjust the curriculum and instructional
approach to personalize instruction so a diverse set of learners can be served within the
same classroom. Data from assessments provides teachers with the necessary
information for planning individualized programs for circumstances such as these
(Tomlinson, 2001).
Similarly, in a landmark meta-analysis, Black and Wiliam (1998) described how
using data from formative feedback systems has the potential to significantly improve
student learning. Moreover, Black and Wiliam identified more than 680 publications that
described the effects of formative assessment use. The approximately 250 reports
included in the final analysis were diverse, international in scope, involved students from
kindergarten through college, and spanned a variety of disciplines. Based on their
analysis, the authors concluded that "attention to formative assessment can lead to significant learning gains" (p. 17) and that formative assessment "helps low achievers more than other students and so reduces the range of achievement while raising achievement overall" (p. 141).
Later, building on Black and Wiliam's (1998) principles for formative assessment and feedback, Andrade, Buff, Terry, Erano, and Paolino (2009) described interventions that were put in place at Knickerbacker Middle School (KMS) to improve the scores of economically disadvantaged students on the English Language Arts (ELA) test. Focusing on formative assessment techniques and on providing meaningful feedback to students, the staff at KMS worked in the summer of 2005 to develop a common rubric for grading writing assignments. Because teachers at KMS reported that providing meaningful feedback to students was time consuming, pupils became engaged in the process of carefully considering the strengths and weaknesses of their own writing and performed peer reviews of others' writing using established writing rubrics. Through the use of structured feedback, student scores on the ELA test in grades six and eight (grade 7 does not require a writing sample on the ELA test) increased by 7% and 15% overall, and scores for the economically disadvantaged students increased by 20% in both grades (Andrade et al., 2009).
Instructional Data Systems to Manage Data
Wayman and Stringfield (2003) suggested that teachers would make increased use
of data to inform their classroom practice if such data were available quickly and easily
in a manner befitting their needs. The authors highlighted characteristics that such
systems should share across four general domains, namely: (a) data in terms of its quality
and accessibility, (b) usability by the end user, (c) information access in terms of
flexibility in how users can access and report on data, and (d) other categories such as
cost.
Subsequently, in a 2004 report to the Center for Research on the Education of
Students Placed at Risk (CRESPAR), Wayman, Stringfield, and Yakimowski refined
these features to include: (a) user friendliness, (b) user features, (c) information access, (d) creating and sustaining quality data, and (e) additional features. Wayman et al. then reviewed 13 known commercially-available systems that enabled all district users to access student data for achievement purposes. The researchers found that while a variety of both commercially- and locally-developed software solutions exist, each with its own set of features, the system itself was less important than how teachers could use the data stored therein to inform their instruction.
Instructional Systems.
Many formative assessment techniques are relatively simple and allow teachers to
make immediate adjustments to their daily instruction (Popham, 2008). However,
targeting standards-based instruction to the specific needs of individual students would
require data from multiple tools (Bernhardt, 2007). Policy makers and school leaders
currently view the use of assessment data as having significant potential to improve
student outcomes (Crawford, Schlager, Penuel, & Toyama, 2008).
Rudner and Boston (2003) declared that data extracted and analyzed from data-
driven decision making (D3M) technology systems could be used to provide the
necessary information to make improvements in teaching and learning, thus raising
student achievement. While systems designed to support D3M have been growing in
number (Means, 2005), the large-scale assessments that are typically housed in those
systems have limited use for teachers in terms of informing their instruction. Crawford et
al. (2008) reported that these large-scale assessments are not linked to classroom
practices, are not aligned with instructional objectives pursued in classrooms, do not
cover the domains tested to give a valid picture of subject matter learning, and are not
available to teachers in a timely enough manner to enable effective instructional decision
making. Crawford et al. argued that classroom-level instructional decision making required classroom-level data "suitable for diagnostic, real-time decisions regarding student learning and instruction" (p. 110).
Furthermore, in their 2008 study, Crawford et al. used a multiple-case
comparative design to study high school algebra classrooms to determine the requisite
principles in a classroom technology system that would best support teaching and
learning. Based on their findings, the investigators reconceptualized the problem and
declared as critical the amount of time that elapses between when work is gathered from
students and when feedback is provided by the teacher. Describing traditional classroom assessment practices as generally "too little, too late," with days passing between the assessment and the feedback provided to students, Crawford et al. represented the situation graphically in Figure 2.2.

Figure 2.2. Time Lapses in Feedback to Students in Traditional Classrooms. From Data-Driven School Improvement, Ellen B. Mandinach & Margaret Honey (eds.), 2008, New York, NY: Teachers College Press. Copyright 2008 by Teachers College, New York, NY. Reprinted with permission.
Conversely, a system designed to minimize the time between students submitting
work and feedback from the teacher would not only support learning in the manner
described by Black and Wiliam (1998) but also provide information to teachers in real
time. Graphically, Crawford et al. (2008) depicted rapid cycles of feedback to students as
depicted in Figure 2.3. This sort of rapid feedback has been shown to enhance student
learning (Wiliam, 2007). Despite the idealized nature of the representation depicted in Figure 2.3, Wayman (2007) described systems similar to those conceptualized by Crawford et al. which provided tools for educators, such as the ability to administer
assessments, organize the assessments' results, and report on students' strengths and deficiencies. PerformanceTRACKER is one system which meets all of the criteria set forth by Crawford et al. (2008).

Figure 2.3. Rapid Cycles of Feedback to Students in Classrooms with a Technology Infrastructure. From Data-Driven School Improvement, Ellen B. Mandinach & Margaret Honey (eds.), 2008, New York, NY: Teachers College Press. Copyright 2008 by Teachers College, New York, NY. Reprinted with permission.
PerformanceTRACKER by SunGard Public Sector
SunGard is an international company specializing in software and technology
services to businesses, higher education, and the public sector. SunGard comprises four businesses: Availability Services, Financial Systems, Higher Education, and Public Sector (SunGard, 2010a). SunGard Public Sector is the division of the company that supports government agencies, including "public safety and justice, public schools, utilities, and non-profits, [allowing them to] provide more effective services to their citizens and communities" (SunGard, 2010b, Public Sector).
SunGard Public Sector's K-12 Education solution is a collection of products called Plus360. Plus360 provides school districts with information management
applications to manage four key areas of school district operations. They are: (a) Finance
& Human Resource Management through the BusinessPLUS and eFinancePlus
applications, (b) Student Information Management through the eSchoolPLUS application,
(c) Special Education Management through the IEPPLUS application, and (d) Learning
Management through the PerformancePLUS applications. Across the country, SunGard
Public Sector software supports school operations in over 1,700 school districts, serving
over seven million students, or one out of seven students nationwide (F. Lavelle, personal
communication, November 8, 2008; SunGard Public Sector, 2010b).
Specifically, SunGard Public Sector's K-12 PerformancePLUS Learning Management products include CurriculumCONNECTOR, which enables educators to develop and share curricular documents and resources; AssessmentBUILDER, which allows districts to create, score, and analyze local benchmark assessments; and PerformanceTRACKER, which provides educators with a single system to easily access their students' performance data along with important student demographic information (SunGard Public Sector, 2010a).
PerformanceTRACKER is used in nearly 600 school districts across the country
including all 179 school districts in New Hampshire (S. Gladfelter, personal
communication, July 14, 2010). As of June 3, 2010, 155 school districts in Pennsylvania
were using PerformanceTRACKER (S. Gladfelter, personal communication, June 3,
2010). According to the US Department of Education (2008), these Pennsylvania districts served over 577,000 students, representing 31% of the state's 500 districts and nearly 34% of the state's K-12 student population.
Schwartz, a consultant to the New Hampshire Department of Education, where a
statewide implementation of PerformanceTRACKER has been in place since 2006,
reported that PerformanceTRACKER has changed the way schools across the state integrate curriculum and assessment data and has enabled schools to examine data in ways not previously possible, including tracking individual students' progress as needed (New Hampshire Department of Education, 2010).
Teachers' Use of Instructional Data Systems
Teacher access to and use of instructional data systems are increasing. Several key findings appeared in a 2008 report from the US Department of Education:
teachers reported that their ability to access an electronic student data system increased significantly, from 48% to 74% (p < .0001), between 2005 and 2007; differences in the access level of teachers in high-poverty schools versus teachers in low-poverty schools were not significant at the middle and high school levels, though a statistically significant difference (p < .05) was found at the elementary level;
teachers' access to current student test scores increased from 38% to 49% (p < .0001) and to prior student test scores from 36% to 46% (p < .001);
teachers with access to student data systems used these technologies to provide information to parents (65%) and monitor student progress (65%), with no significant change in these ratios from 2005 to 2007; and
teachers' use of data systems did not vary significantly by grade level or subject, except for informing parents (high school teachers were more likely to use the systems for this purpose, p < .01) and instructional pacing (elementary school teachers were more likely to use the systems for this purpose, p < .01).
Because teachers should be using assessment results to base their instruction on
the needs of students (Brimijoin, Marquissee, & Tomlinson, 2003; Downey, 2001;
McTighe & O'Connor, 2005; Petrides, 2006; Popham, 2001; Young, 2006), a body of
research on teachers' use of data and data systems is emerging. The research tends to fall into two categories: (a) case study research in which the use of instructional data systems has produced gains for a particular school or school system, and (b) quantitative research on the effectiveness of using data derived from these systems with groups of students.
Case Studies.
Bernhardt (2009) described the Marylin Avenue Elementary School (MAES) in
Livermore, California as one where the population was in a state of flux. In the 2002-
2003 school year, 49% of MAES students were of Hispanic descent; five years later the
percentage had increased to 66%. During this same time, the Caucasian population
decreased from 31% to 18%. Concurrently, the percentage of students receiving
free/reduced lunch increased from 45% to nearly 76%. MAES had not made Adequate
Yearly Progress (AYP) for the previous four years when Bernhardt began working with
teachers and administrators from MAES in 2006. The team from MAES reported to
Bernhardt in a follow-up meeting in 2007, noting that they had used multiple measures of
data stored in data systems to accomplish the following: (a) examine changing student
demographics, (b) learn from perceptions of students, parents, and staff, (c) disaggregate
student learning results, and (d) measure school processes and programs. Using the Continuous Improvement Continuums from Bernhardt's organization, educators at MAES developed a comprehensive, common vision; engaged in school-wide learning; and used the results of common assessments to inform their instruction. The school is now making AYP.
School-wide results were also described by Larocque (2007), who credited data-driven decision making as one of the reasons that the school's grade on the Florida Comprehensive Assessment Test (FCAT) increased from a D to a B over three academic years. Focusing on data use by educators at this school, the researcher reported that students who did not meet benchmarks were given support, and those who scored well on benchmark assessments were given additional enrichment to maximize their talents. Larocque credited the school's achievement management system, which allows teachers unrestricted access to student data, for the increase in FCAT performance.
At the system level, data produced by the Grow Network allowed teachers in the New York City school system to understand their students and target their instruction.
Over the course of a two-year study, Brunner et al. (2005) surveyed teachers and
administrators on their use of the Grow Reports. Subsequently, teachers and
administrators reported that the web-based system guided instructional decision making
to meet the needs of individual students who were struggling academically; fully 89% of
teachers reported that they used Grow Reports to differentiate instruction. Grow
Reports were also used to: (a) support conversations between and among teachers,
administrators, students, and parents; (b) shape professional development in terms of
teachers studying how best to teach a particular skill; and (c) support self-directed
learning on the part of students when the reports were shared directly with them, by increasing students' self-awareness and enabling them to practice the specific skills that data showed needed remediation (Brunner et al., 2005).
Notably, case studies are not limited to the scope of whole schools or school
systems. In a 2008 case study, Tedford described one department of one high-performing
California high school. At this site, professionals used data systems to analyze multiple
sources of information related to the school's placement practices, specifically those linked to a reading intervention program. Students at the school under study could enroll in advanced-level, college-preparatory, or remedial-level courses, depending on their instructional needs. Teachers and administrators now draw from reading assessment data to guide the placement process, whereas until recently some students were placed at the remedial level for reasons unrelated to their reading performance. Using data derived from these assessments to guide reading remediation activities, the school charted exit rates from remedial classes. The program was demonstrated to be successful, properly matching student academic needs to course placement and ultimately remediating reading deficiencies (Tedford, 2008).
Quantitative Studies.
Qualitative studies in the literature reflecting improvement in student outcomes when educators use assessment data to support instruction outnumber quantitative studies. However, several quantitative studies showing the impact of teachers' use of data systems have been conducted (Burns, Klingbeil, & Ysseldyke, 2010;
Ysseldyke & Bolt, 2007; Ysseldyke, Betts, Thill, & Hannigan, 2004; Ysseldyke,
Spicuzza, Kosciolek, & Boys, 2003; Ysseldyke, Tardrew, Betts, Thill, & Hannigan,
2004). Of particular interest to Ysseldyke and his colleagues was the impact of
technology-enhanced formative evaluation (TEFE) systems on the math achievement of
various populations of students. TEFE systems assist teachers in their instruction by
collecting data, determining instructional targets, and monitoring student progress.
Examining the ability of TEFE systems to enhance established curriculum
materials, Ysseldyke, Kosciolek, Spicuzza, and Boys (2003) selected a treatment group
of 157 fourth and fifth grade students in classrooms using a TEFE system with the
Everyday Math curriculum. Students in the treatment group were compared with a
within-school control group of 61 fourth and fifth grade students as well as all fourth and
fifth grade students in the district (N = 6,385). Results showed that students in the treatment group, who were in classes where the TEFE system was implemented as an enhancement to Everyday Math, spent more time on classroom activities, contributing to positive outcomes, and demonstrated greater math gains than did the control groups. Overall, Ysseldyke et al. found a significant difference between the experimental group and the rest of the fourth and fifth grade students, F(1, 6537) = 24.52, p < .0001.
Moreover, Ysseldyke et al. (2003) studied the effect of using the same TEFE
system on students in classrooms where the degree of integrity of the implementation
varied. Within-school comparisons of classrooms using the TEFE systems were contrasted with classrooms which did not have access to the system. Similarly, comparisons were made between classrooms with a high integrity of implementation and those with only partial implementation. Students enrolled in classrooms that recorded a high degree of integrity of implementation showed more growth than those maintaining a degree of implementation that was partial or nonexistent, F(2, 459) = 4.126, p < .02, d = .13. No significant difference was found between partial and nonexistent implementations.
Further examining the differences that fidelity of implementation had on
performance, Ysseldyke and Bolt (2007) compared variability in teacher implementation
of the TEFE system using math results in classrooms in which teachers did and did not
use the system as designed. There were significant differences in STAR Math and
TerraNova scores found in the pre-/post-test comparisons of students at both the
elementary and middle school levels when implementation level of the teacher was
considered (elementary schools: STAR Math, F(2, 777) = 9.289, p < .001; TerraNova,
F(2, 954) = 13.240, p < .001; and middle schools: STAR Math, F(2, 994) = 18.354,
p < .001; TerraNova, F(2, 1047) = 4.066, p < .019) (Ysseldyke & Bolt, 2007). The
authors noted that if the implementation level of the teacher was not taken into account,
the results of the study would have shown no significant difference between the groups.
The authors' conclusion that the degree to which a change is implemented can affect whether a statistically significant result is revealed echoed the findings of the developers of the CBAM. A similar realization had prompted those researchers to develop the Innovation Configurations dimension of the CBAM to ascertain how an innovation was actually being implemented by teachers (Hord, Rutherford, Huling, & Hall, 2006).
Comparing pre-/post-test results of the mathematics achievement of gifted and
talented (GT) students, Ysseldyke, Tardrew, Betts, Thill, and Hannigan (2004) studied
the impact of the use of a TEFE instructional management system. The TEFE
determined the next steps for instruction, monitored student progress, and provided
teachers with the information they needed to differentiate their instruction. Ysseldyke et al. created control groups of GT students (n = 52) and other students (n = 736) whose teachers did not use the instructional management system, then compared results with experimental groups of GT students (n = 48) and other students (n = 743) whose teachers used the instructional management system. In their analysis, Ysseldyke et al. found significant gains for both GT students (p < .01) and all other students (p < .001) in the experimental groups.
Ysseldyke, Betts, Thill, and Hannigan (2004) next sought to explore the impact of
this same instructional management system with struggling Title I students. The authors
also sought to ascertain the extent to which teacher use of this system resulted in significantly greater gains in mathematics achievement for students in Title I programs than for students in Title I programs where the system was not applied. Using a two-group pre-/post-test comparison to evaluate the hypothesis that Title I students in classes using the system (n = 132) would show greater gains in mathematics achievement than similar students in Title I programs who received no intervention other than regular instruction (n = 138), Ysseldyke et al. confirmed that the treatment had a significant impact on math achievement (p < .0001), with the treatment group gaining 7.9 Normal Curve Equivalents (NCEs) while the control group gained only 0.3 NCEs.
Looking beyond classroom performance, Burns, Klingbeil, and Ysseldyke (2010) compared achievement on state tests of mathematics and reading across 360 randomly selected elementary schools that either had or had not used a TEFE system.
The public, non-charter elementary schools chosen for this study were from Florida,
Minnesota, New York, and Texas. The researchers hypothesized that schools using the
system would have a greater number of students performing at the Advanced and
Proficient levels on their states high-stakes tests. Burns, Klingbeil, and Ysseldyke
divided the schools into three groups: (a) a control group of schools that had no TEFE
system in place, (b) an experimental group of those schools that had been using the
system for a period of one to four years, and (c) an experimental group of those schools
that had been using the system for five years or more. An ANCOVA revealed a significantly higher percentage of students successfully passing the states' high-stakes tests, F(2, 357) = 19.27, p < .001, when users of the system were compared to non-users (Burns, Klingbeil, & Ysseldyke, 2010). Investigators then explored the
effects of use of the TEFE system on the achievement gap by comparing performances of
schools with various ratios of minority students. No significant difference was
discovered between schools that used the TEFE system whether they had a majority of
students who were White (at least 50%) or a majority of students who were non-White,
F(1, 111) = .17, p = .68. However, a significant difference was noted between schools
without a TEFE system that had a majority of students that were White when compared
to those schools where a majority of students were non-White, F(1, 118) = 14.36,
p < .001 (Burns, Klingbeil, & Ysseldyke, 2010).
Studies providing quantitative evidence of learning gains when teachers use data-
driven instruction are not limited to studies of mathematics. For example, Tyler (2009)
examined the effects of data-driven instruction and literacy coaching on the literacy
development of kindergarten students. Quantitative results showed that while there were no significant differences in literacy assessment data between groups of students at the beginning of the school year, F(2, 167) = 1.07, p = .35, students in the classrooms where teachers used data-driven instruction and availed themselves of the services of the literacy coach showed reading scores that were significantly higher by the end of the school year, F(2, 167) = 3.81, p = .02, than students in classes where the teachers did not base their instruction on data and did not use the services of the literacy coach.
The Concerns Based Adoption Model
In order for leaders to successfully change a school culture to one that uses and values data, they must recognize that while changes to systems and procedures are necessary, the psychological transitions experienced by teachers during the change process also need to be considered. According to Bridges (2009), change is situational, whereas transitions are the psychological progressions people experience that allow them to come to terms with a new situation. Leaders can, by virtue of positional power, dictate changes to structures, roles, and procedures, but to keep the change from being merely a "rearrangement of the chairs" (Bridges, 2009, p. 3), leaders must also skillfully manage the emotional transitions associated with change.
Description and Development of the Concerns Based Adoption Model
The Concerns Based Adoption Model (CBAM) describes, explains, and predicts
the needs of those involved in a change process and can assist change facilitators in
addressing those needs based on the model's diagnostic dimensions. The model consists of three diagnostic dimensions: Stages of Concern, Levels of Use, and Innovation Configurations. Each dimension of CBAM addresses one facet of the change process. Stages of Concern (SoC) reflects the notion that individuals in the midst of change possess different concerns about the innovation, and SoC determines and predicts the mental transitions of the individuals involved over time. Levels of Use (LoU) employs an interview protocol to measure specific behaviors of users to identify the extent to which they are successfully applying the innovation. Finally, Innovation Configurations (ICs) explicitly identify what new practices look like when they are implemented (Southwest Educational Development Laboratory, 2006).
Historically, the Concerns Based Adoption Model was developed in the 1970s by the Research and Development Center for Teacher Education at the University of Texas at Austin (R&DCTE) as an extension of Fuller's studies in the 1960s regarding teachers' concerns. Fuller (1969) created a developmental model of teachers' concerns about teaching which progressed through a continuum of three phases: (a) the preteaching phase, for education students with no teaching experience; (b) the early teaching phase, for student teachers and beginning teachers; and (c) the late teaching phase, for experienced teachers. Each phase is distinguished from the next not only by the experience level of the teacher, but also by the nature of the concerns held by those individuals. According to Fuller, the phases were as follows:
the preteaching phase - characterized by nonconcern, when students rarely had concerns related to teaching itself;
the early teaching phase - described as concern with self, when individuals wondered about status in their organization and adequacy regarding the task of teaching; and
the late teaching phase - reflected as concerns for pupils, when the teacher's focus had shifted to improving student learning and his or her own professional development and improvement.
The staff at the R&DCTE abstracted and extended Fuller's original work to describe observations they made of teachers and administrators who were in the process of implementing an innovation and noted that the concerns of those teachers and administrators could be clustered into four broad categories: unrelated concerns, self concerns, task concerns, and impact concerns (Hall & Hord, 1987). Beginning with preservice teachers' concerns about the act of teaching, researchers at R&DCTE documented that at the beginning of preservice teaching programs, students would identify concerns unrelated to teaching, such as passing tests or getting along with roommates. Later research revealed that as education students progressed in their studies, these initial concerns were replaced by self concerns, which were related to teaching but were "egocentric in nature and reflected the individuals' feelings of inadequacy or self-doubt about their knowledge" (George, Hall, & Stiegelbauer, 2006, p. 3). Once these students began their careers as teachers, they expressed task concerns related to managing the logistics of the responsibilities of their new jobs. Finally, as their experience in teaching grew, teachers were found to express impact concerns, which "center[ed] on how their teaching affect[ed] students and how they can improve themselves as teachers" (p. 3).
Stages of Concern.
Researchers at R&DCTE hypothesized that as individuals faced any sort of change, there were distinct stages to the concerns that they expressed. Furthermore, investigators theorized that those concerns developed in a logical progression as individuals' experience and comfort with the innovation grew (George, Hall, & Stiegelbauer, 2006).
During the development of the model, stages were proposed to reflect the developmental movement observed initially at R&DCTE. Specifically, researchers at R&DCTE noted that as concerns at one level are resolved, subside, or lower in intensity, they are typically replaced by an increase in the intensity or emergence of later concerns (George, Hall, & Stiegelbauer, 2006; Hall, 2010).
The researchers ultimately identified seven Stages of Concern (SoC) and developed a valid questionnaire to determine the stage an individual's expressed concerns reflected (George, Hall, & Stiegelbauer, 2006). The seven stages, grouped into unconcerned, self, task, and impact phases, numbered sequentially from 0 to 6, and named and characterized in terms of what a typical expression of the concern might include, appear in Table 2.1 (George, Hall, & Stiegelbauer, 2006).
Table 2.1
Stages of Concern About an Innovation

Unconcerned
  Stage 0 - Unconcerned: "I am not concerned about it."
Self-Concerns
  Stage 1 - Informational: "I would like to know more about it."
  Stage 2 - Personal: "How will using it affect me?"
Task Concerns
  Stage 3 - Management: "I seem to be spending all my time getting materials ready."
Impact Concerns
  Stage 4 - Consequence: "How is my use affecting my students?"
  Stage 5 - Collaboration: "I would like to coordinate my effort with others to maximize the innovation's effect."
  Stage 6 - Refocusing: "I have some ideas about something that would work even better."
Additionally, Hord, Rutherford, Huling, and Hall (2006) stated, "A central and major premise of the model is that the single most important factor in any change process is the people who will be most affected by the change" (p. 29). Their research revealed that the change process is highly personal and requires time and timely interventions to address the cognitive and affective needs of participants (George, Hall, & Stiegelbauer, 2006). In general, CBAM predicts that a user's concerns about an innovation evolve toward the later, higher-level stages over time, with successful experience, and with the acquisition of new knowledge and skills.
Measuring Concerns about Innovations
CBAM and SoC have been used in numerous studies since their initial development (George, Hall, & Stiegelbauer, 2006), and this sustained use provides evidence of "[its] continued viability...in research settings" (p. 57). SoC, while originally developed for application in educational settings, has been adapted for use in a variety of industries, including nursing and business. In education, CBAM and SoC commonly measure implementations of technology, but the innovations that the model can describe are not limited to technology.
Research using Stages of Concern to Help Understand Implementations.
George, Hall, and Stiegelbauer (2006) reviewed literature using the Concerns
Based Adoption Model and Stages of Concern. They found that studies employing SoC
tended to fall into two broad categories: (a) descriptive studies, in which the researcher(s) reported on a population in the process of implementing a particular change at one particular time, and (b) differential studies, in which researchers used Stages of Concern data to show differences in populations either after a period of time or after a planned intervention such as a professional development program or experimental treatment.
Descriptive Studies. Rakes and Casey (2002) analyzed the concerns of 659 teachers toward the use of instructional technology using the Stages of Concern Questionnaire (SoCQ). The researchers solicited responses from PK-12 teachers who subscribed to one of four email lists; participants included at least two respondents from each of the 50 states, all of whom used instructional technology in some form related to their teaching. Results of the SoC analysis showed that for this study the highest concern was Stage 2, indicating intense concerns for the participants on a personal level. Additionally, the second highest score for this sample was Stage 5, Collaboration. The authors concluded that participants had concerns about looking for ideas from each other and a desire to learn from what others were doing. The authors also asserted that teachers could benefit from opportunities to share practices in their classrooms.
In their 2004 study, Baker, Gersten, Dimino, and Griffiths explored a variety of factors influencing the continued use of Peer Assisted Learning Strategies (PALS) in mathematics. Teachers had implemented PALS during an initial research study, and Baker et al. examined whether PALS was still used after the study ended. Participants consisted of eight teachers in one elementary school in the southeastern United States. The researchers reported that, because PALS had been in use at this school for some time, five of the eight teachers indicated that their biggest concerns were at the impact level, as the developmental progression described in the CBAM would predict. Additionally, four of those five reported high impact concerns, mainly Collaboration. The researchers noted that despite the fact that PALS had been in place for some time, the teachers' second lowest concern was Refocusing; consistent with this, no teachers gave any indication that they were interested in exploring alternatives to PALS.
In another descriptive study, Hollingshead (2009) used the SoCQ to assess the implementation of Rachel's Challenge, a character education curriculum, in the Rockwall Independent School District, a suburban school district located in North Texas. Surveying the concerns of 302 teachers from across the district's eight schools, Hollingshead discovered that the Collaboration stage ranked at the highest level for teachers in seven of the eight schools, with the high school as the only exception to this pattern. Management was among the lowest stages at the elementary schools but was higher in the middle and high schools. Across all schools, the strong emphasis on impact concerns showed that "teachers are adopting the program and believe that its success is the responsibility of all teachers on campus" (p. 9).
Christou, Eliophotou-Menon, and Philippou (2004) used the SoCQ to identify the
concerns of 655 elementary school teachers from 100 different elementary schools in
Cyprus. Teachers were asked to describe their concerns in relation to the adoption of a
mathematics curriculum and materials. Christou et al. employed an adapted version of
the SoCQ which eliminated the Unconcerned stage because all of the teachers in Cyprus
were acquainted with the new mathematics curriculum and the new textbooks by the time
the study was conducted (p. 165) and modified the scale to record participants
agreement with statements in the questions on a 1 through 9 scale.
Christou et al. (2004) reported that the highest concerns expressed by teachers in the aggregate were in the Information and Personal stages. For these two stages, mean scores and standard deviations were 6.78 (1.24) and 6.43 (1.20)
respectively. The teachers had lower mean scores for the Consequence, Collaboration,
and Refocusing stages of 5.99 (1.26), 5.59 (1.36), and 5.97 (0.80) respectively. The fact
that teachers scored higher in the self concerns than in the impact concerns showed that
teachers were more concerned with dealing with their daily instructional practices than
with the impact that this curriculum had on student achievement. In addition, Christou et al. (2004) reported that there were no significant differences in teachers' concerns across years of implementation, multivariate F(2, 595) = 0.03, 1.53, 1.5, 1.94, 1.65, and 1.8, with p = .97, .22, .22, .14, .19, and .177 for the Information, Personal, Management, Consequence, Collaboration, and Refocusing stages, respectively. Some significant differences were revealed across beginning teachers (1-5 years of experience), teachers with some experience (6-10 years), experienced teachers (11-20 years), and highly experienced teachers (more than 20 years of experience) in the Information, Management, and Consequence factors, multivariate F(3, 595) = 4.76, 2.90, and 3.26, with p = .003, .03, and .02, respectively, showing that experienced teachers exhibited a lower degree of self concern, again consistent with CBAM's predictions.
Liu and Huang (2005) used the Stages of Concern Questionnaire (SoCQ) to examine the pattern of concerns of 86 in-service teachers, enrolled in a graduate course at a midwestern university, about their own degree of technology integration in their classrooms. Participants identified themselves as beginner, intermediate, or advanced users of technology in their classrooms. The study supported the three categories of self, task, and impact concerns hypothesized in the development of CBAM: beginning users had higher scores in the personal and informational stages, intermediate teachers had higher scores in the consequence stage, and advanced users of technology had higher scores in the collaboration and refocusing stages.
Additionally, the study revealed statistically significant differences in the concerns of the three groups in five of the seven SoC (n = 86, df = 2): Stage 1, Informational (χ² = 15.12, p < .01); Stage 2, Personal (χ² = 8.61, p = .01); Stage 3, Management (χ² = 7.77, p = .02); Stage 5, Collaboration (χ² = 7.09, p = .03); and Stage 6, Refocusing (χ² = 7.63, p = .03).
Differential Studies. One focus of studies employing the SoC is the effect of support and time on the progression of individuals' concerns from unconcerned to self, to task, and ultimately to impact (Hall, 2010). Newhouse (2001) used the CBAM and the SoC to evaluate teachers' concerns regarding the implementation of a portable computer program. The initial project spanned the school years from 1993 through 1995, with a short follow-up study in 1999. During the initial investigation, all dimensions of CBAM (SoC, LoU, and IC) were used to describe the implementation process. The SoC was then used in the 1999 follow-up study. In comparing the profiles of concerns, the data disclosed that initially the concerns were highest in the awareness, informational, and personal stages. Four years later, the highest concerns had shifted to the personal and management stages, but importantly, the profile showed a decrease in self concerns and an increase across task and impact concerns. This finding is consistent with the results expected and reported by Hall (2010), which underscore that over time, self concerns decrease and task and impact concerns increase.
Another study that examined the differences in two populations was conducted by Donovan, Hartley, and Strudler (2007). The authors used data gathered via the SoCQ from 17 teachers and two administrators at the beginning of a one-to-one laptop initiative in a middle school. Consistent with the prediction of CBAM (Hall, 2010), over half of the teachers at this early stage of implementation had their most intense concerns centered at the personal level. Of particular interest was the comparison of the concerns of the two school administrators: the administrator who had been at the school during the development and planning of the initiative had the most intense concerns at the consequence level, while the administrator who had arrived at the school after the initiative was underway reported awareness-level concerns (Donovan, Hartley, & Strudler, 2007). The administrators' profiles are also consistent with the developmental nature of concerns described by CBAM (George, Hall, & Stiegelbauer, 2006; Hall, 2010).
Dobbs (2004) performed a quasi-experimental investigation with pre-/post-test
design to explore differences in groups of college faculty and administrators who were
expected to deliver instruction via distance education. Dobbs grouped participants into
those who: (a) received classroom training on distance education; (b) received classroom
training and laboratory (hands-on) experiences on distance education; and (c) received no
distance education training. Dobbs conducted analyses of variance (ANOVA) and found significant differences at the p < .01 level for five of the seven SoC and at the p < .05 level for one more of the stages; only the awareness stage showed no significant difference among the groups.
groups. Additionally, independent t-tests between the groups revealed significant
differences in the mean scores between the control group and both of the experimental
groups, with both experimental groups scoring significantly higher than the control group
in the informational, consequence, collaboration, and refocusing stages. Dobbs further
correlated pre-test and post-test scores to determine whether analysis of covariance
(ANCOVA) was appropriate. Coefficients ranged from a low of r = .52 for the
management stage, to a high of r = .92 for the collaboration stage; in all cases, the
correlations were significant at the p < .01 level, supporting the need for analysis of
covariance. Finally, Dobbs adjusted the post-test scores by ANCOVA and determined
that significant differences still existed for the management, consequence, collaboration,
and refocusing stages. Dobbs' findings are also consistent with Hall (2010), who showed that supports provided to different groups can purposefully and positively affect the distribution of concerns revealed by the SoC.
Liu, Theodore, and Lavelle (2004) looked at the effect of simple exposure to an
intervention on the pre-/post-test concerns shared by students. Twenty-three students in a research-methods class reported concerns about technology integration in teaching, such as using the internet or computers to accomplish instructional objectives.
Participants took the SoCQ before and after a Fall 2001 course delivered completely
online. Mean differences on each of the seven scales of the SoCQ were statistically
tested and all were found to have increased: awareness concerns, t = -10.44, df = 22,
p <.001; informational concerns, t = -6.49, df = 22, p <.001; personal concerns,
t = -5.02, df = 22, p <.001; management concerns, t = -5.68, df = 22, p <.001;
consequence concerns, t = -4.08, df = 22, p <.001; collaboration concerns, t = -3.26,
df = 22, p <.001; and refocusing concerns, t = -8.80, df = 22, p <.001.
Overbaugh and Lu (2008) studied the impact of a formal professional
development program on the concerns of participants toward instructional technology
integration into curriculum. Measuring concerns using the SoCQ in a pre-test, post-test,
and follow-up design, the researchers stratified the participants (N = 377) by age, gender,
and school level (elementary, middle, high). ANOVA results on the participants' stages of concern across pre-/post-/follow-up measures were significant (p < .01) on all seven stages identified in SoC. Additionally, paired-sample t-test comparisons between the pre-survey and post-survey showed significant differences at the p < .01 level for all stages except the consequence stage. Overall, Overbaugh and Lu reported that participants' concern levels dropped for the self- and task-based concerns, with the greatest decrease in the intensity of concerns in the information and personal stages.
Summary
Across the nation, as stakes have risen for schools due to federal mandates,
educators and students have recognized the high value placed upon the results of their
teaching and learning. Furthermore, with the approach of the 2014 deadline requiring
100% proficiency mandated by NCLB, schools face an ever-increasing challenge of
raising student achievement and corresponding pressure to produce those results.
Pennsylvania students and teachers, like their peers nationwide, recognize that the future
holds many challenges.
Using assessment results to target instruction has significant potential to improve student learning, and the use of electronic systems to process and deliver that information to teachers can increase its accessibility for educators. Concurrent with educators' increased use of and confidence in the impact of data systems, change facilitators in schools need to consider and address the concerns shared by their teachers in order to maximize the effectiveness of this or any other change.
Chapter Three will describe the methods and procedures employed in this study.
CHAPTER THREE METHODS AND PROCEDURES
Introduction
This quantitative study determined the nature of concerns experienced by teachers
in Pennsylvania school districts who have implemented PerformanceTRACKER, a
learning management system from SunGard Public Sector. In addition, this investigation
explored the nature of the concerns shared by teachers across demographic variables
describing the participants themselves and the districts in which they teach.
This investigation sought to augment the growing body of research about the effectiveness of learning management technologies, specifically PerformanceTRACKER. The population, protocol, and area of exploration were chosen to enhance research regarding the concerns of teachers responsible for implementing learning management systems and the degree to which those concerns affect the ability of such technologies to reach their desired potential.
According to SunGard Public Sector, 155 school districts in Pennsylvania had implemented or were in the process of implementing PerformanceTRACKER during the 2010-2011 school year (S. Gladfelter, personal communication, June 8, 2010).
Poverty, Locale, and Performance. Level, Poverty, and Locale data for participating
districts were retrieved from the National Center for Education Statistics. Performance
information was based on whether or not the schools were currently achieving Adequate
Yearly Progress (AYP) status on their 2010 Pennsylvania System of School Assessment
(PSSA) tests as indicated by the Pennsylvania Department of Education (PDE).
Participants
Of the 128 schools in 26 Pennsylvania school districts in which the researcher
was permitted to conduct research, 50 were purposively chosen for the study. Purposive sampling of the 128 schools was performed to closely approximate the proportions of the school-based variables being explored (level, poverty, locale, and AYP status). The data table
found in Appendix C.2 shows the distribution of schools across these variables of interest
for four populations: all Pennsylvania schools, those Pennsylvania schools using
PerformanceTRACKER, Pennsylvania schools using PerformanceTRACKER where
permission was granted, and the sample chosen for inclusion in the study.
Individual participants were volunteers from the teaching staffs of the Pennsylvania public schools described above. The individual demographic characteristics assessed reflected the current level of their teaching assignment, years of teaching experience, years of experience using PerformanceTRACKER, self-assessed proficiency with technology in general and with PerformanceTRACKER specifically, and the presence or absence of related professional development. There were no specific criteria for participation based on any of these characteristics, and other demographic descriptors (e.g., degree status, certification areas, race, or gender) were not considered in this study.
Instruments
The instrument used in this study was the Stages of Concern Questionnaire
(SoCQ) as derived from the Concerns Based Adoption Model. The items in the standard
version of the SoCQ are independent of the innovation being studied, but the authors recommend changing the generic words found in the standard version to a phrase the participants would recognize (George, Hall, & Stiegelbauer, 2006).
Therefore, for the purposes of this study, the generic terms found in the standard version
of the SoCQ were replaced with PerformanceTRACKER when they appear in the
items.
The SoCQ comprises 35 items, each consisting of a declarative statement such as "I have a very limited knowledge of PerformanceTRACKER." Using an 8-point Likert-type scale, participants were asked to evaluate the extent to which the statements seemed true of them at the time they responded. Replies on the scale were scored from 0 to 7, with 1 indicating a low level of agreement with the item and 7 signifying a high degree of agreement; scores of 0 corresponded to items that seemed irrelevant to the participant.
Scoring the SoCQ involved determining a raw score for each of the seven Stages
of Concern by totaling the responses to the five items on the SoCQ that correspond to that
stage. Thus, a raw score for any particular stage could be anywhere from zero to 35.
Using a scoring device provided by CBAM found in Appendix B.1, raw scores were then
converted to percentile ranks for each stage. Graphing the percentile ranks of the seven stages provides a profile of the relative intensity of concern at each stage.
In addition to the numerically scored questions, another portion of the SoCQ
includes a demographic page which can be modified as desired to gather additional
information relevant to the study (George, Hall, & Stiegelbauer, 2006). In this study,
these questions included: time in teaching, time involved with PerformanceTRACKER,
and level of assignment. In order to maintain anonymity of participants, there was no
area on the instrument where teachers were asked to record their names or the name of
their school district. The final section of the SoCQ used in this study included a space
where teachers could write answers to the open-ended question, "When you think about using PerformanceTRACKER, what are you concerned about?" Copies of the complete
research instrument used in this study appear in Appendix B.
The SoCQ has been used in a number of studies related to implementations since
its development in the 1970s and has been found to be reliable. Internal reliability for the various stages described in the SoCQ has been measured using a generalization of the Kuder-Richardson formula since the publication of the original Stages of Concern manual (George, Hall, & Stiegelbauer, 2006). Depending on the situation and audiences,
these measures have shown some degree of variance. Statistics from each of the stages
measured in the SoCQ are shown in Table 3.1.
Table 3.1
Coefficients of Internal Reliability and Test-Retest Reliability for Each Stage of the Concerns Questionnaire

                                     Stage of Concern
                               0        1        2        3        4        5        6
Range of internal
reliability coefficient   .50-.78  .74-.87  .65-.86  .65-.84  .74-.84  .79-.83  .65-.82
Test-retest reliability      .65      .86      .82      .81      .76      .84      .71
As revealed in Table 3.1, internal reliability coefficients for the SoC ranged from a low of .50 to a high of .87, depending on the stage of concern. Additionally, test-retest correlations, measured during the development of the instrument in the 1970s, ranged from a low of .65 to a high of .86 (George, Hall, & Stiegelbauer, 2006). The reliability of this instrument lent credence to its results.
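For reference, the generalization of the Kuder-Richardson formula used for such internal reliability estimates is commonly identified as coefficient alpha; as a sketch of the computation (not drawn from the SoCQ manual itself), for a stage scale of $k$ items, with $k = 5$ for each SoCQ stage,

$$\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma_i^2}{\sigma_X^2}\right),$$

where $\sigma_i^2$ is the variance of responses to item $i$ and $\sigma_X^2$ is the variance of the total raw score for the stage.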
Design of the Study
This study used a quantitative design to measure relationships between the
variables of interest. Data for this study were gathered from volunteers in Pennsylvania
school districts who have implemented or are in the process of implementing
PerformanceTRACKER during the 2010-2011 school year.
Participants responded to a survey which was an adaptation of the Stages of
Concern Questionnaire (SoCQ) from the Concerns Based Adoption Model and included
a series of demographic questions. Answers were gathered via paper and pencil; the overall design is conceptualized in the model shown in Figure 3.1.

Figure 3.1. Study Design Showing Relationship of Data Sources to Research Questions.
Procedure
Approval by the Research Ethics Review Board of Immaculata University
(Appendix A) allowed processing of data on Pennsylvania school districts. These data included superintendents' contact information, results of performance data from the 2010 Pennsylvania System of School Assessment (PSSA) tests, and school demographic variables including Level, Poverty, and Locale.
Basic contact information for the school districts was downloaded from the Pennsylvania Department of Education's Education Names and Addresses (EdNA) database, available on the Department's web site as a comma-delimited file, which was converted into a Microsoft Excel spreadsheet. This file included the names, addresses, district websites, contact phone numbers, and email addresses for many of the superintendents and school principals of the schools in the state. Where information in the state's database was incomplete, the missing information was obtained from district websites or by telephone calls directly to the districts.
SunGard Public Sector provided the researcher with a list of school districts using PerformanceTRACKER. Schools in this file were identified by their Administrative Unit
Number (AUN), a unique number assigned to school districts by the Pennsylvania
Department of Education.
Superintendents of schools which were candidates for study were contacted via
email and phone call to seek permission to contact the building-level administrators in
their districts. Once authorization from superintendents was secured, principals in
participating districts were contacted via email with information regarding the
distribution and collection of the surveys. Principals were asked to designate an
individual in the building to assist with the distribution and collection of the surveys as
well as to provide time for teachers to participate.
Surveys were distributed via US mail to each school building. These materials
included an instruction sheet that the designated individual was to read to the
participating, volunteer teachers before the administration of the survey in order to
increase the level of uniformity in administration of the instrument. Once the surveys
were completed the designated individual gathered the materials and returned them via
US mail.
Surveys were returned to a research assistant, who coded them using a sequential numbering system as they were received. The research assistant entered responses from participants into a scoring spreadsheet designed specifically for the SoCQ made available by
the authors of the Stages of Concern Questionnaire. The researcher received raw data
only and did not have access to the original surveys.
Performance data were downloaded from the Pennsylvania Department of
Education during the summer of 2010 when the 2010 PSSA results were released.
Data on the demographic variables, such as poverty and locale, for all Pennsylvania school districts were downloaded from the National Center for Education Statistics as a Microsoft Excel spreadsheet.
All data were then merged into a Microsoft Access relational database. From this database, relevant data were exported into SPSS in order to perform the statistical analyses required for this study, including measures of central tendency and analyses of variance.
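Although the analyses reported here were conducted in SPSS, the core one-way analysis of variance can be sketched equivalently in Python. The file and column names below (socq_responses.csv, stage3_percentile, school_level) are hypothetical stand-ins for fields in the study's database, not names taken from the actual data files.

```python
# Hypothetical sketch: one-way ANOVA of Stage 3 (Management) percentile
# scores across NCES school levels, mirroring the SPSS analyses.
import pandas as pd
from scipy import stats

df = pd.read_csv("socq_responses.csv")  # hypothetical export of the merged data

# One sample of percentile scores per school level (elementary, middle, high).
groups = [group["stage3_percentile"].dropna()
          for _, group in df.groupby("school_level")]

f_stat, p_value = stats.f_oneway(*groups)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```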
Summary
This study determined the relationship between concerns of teachers using
PerformanceTRACKER and demographic variables describing the participants and the
districts in which they work. Findings were brought together on spreadsheets and
relational databases, and then were analyzed using inferential statistics. The results of
this research will be presented in Chapter Four.
CHAPTER FOUR RESULTS
Introduction
In general, this study assessed concerns of classroom teachers about the
implementation of PerformanceTRACKER, a computer database which provides teachers
access to student performance data. Concerns were assessed through use of the Stages of
Concern Questionnaire (SoCQ), using both Likert-type items and open-ended
responses.
The following areas were explored: (a) the concerns expressed by teachers who
had implemented or were in the process of implementing PerformanceTRACKER in their
schools, (b) the significance of differences in the expressed concerns of those teachers
when compared across demographic variables describing the schools in which they work,
and (c) the significance of differences in the expressed concerns of those teachers when
compared across demographic variables describing the teachers themselves.
Two hundred eighty-six surveys, representing an overall response rate of 13.1% of the
total number of teachers included in the sampled schools, became the substance of the
study. The participants represented 26 different schools in 14 different school districts.
Participant Characteristics
Tables 4.1 through 4.4 reflect the comparison of school and district demographics
among three populations: (a) the schools of the study participants (N = 26), (b) all
schools in Pennsylvania, and (c) Pennsylvania schools using PerformanceTRACKER.
Throughout Tables 4.1 through 4.4, the symbol Δ indicated the difference or change
between the proportion represented by the participants and the larger population
indicated. A negative value for Δ suggested that there were proportionally fewer in the
participant population; a positive value specified that the proportion in the participant
population exceeded that of the indicated comparison group.
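Concretely, Δ is simply the percentage-point difference between the participant proportion and the comparison proportion; a minimal sketch (the function is illustrative, not part of the study's tooling):

    def delta(sample_pct: float, comparison_pct: float) -> float:
        """Percentage-point difference between participant and comparison proportions."""
        return round(sample_pct - comparison_pct, 1)

    # From Table 4.1: elementary schools made up 53.8% of the sample but 57.1%
    # of all PA schools, so delta is negative (underrepresented in the sample).
    print(delta(53.8, 57.1))  # -3.3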
Table 4.1 reflects the comparison of the distribution of school levels as characterized
by the National Center for Education Statistics (NCES). NCES characterizes schools into
one of four categories depending on the grade levels found in the schools: (a) Elementary,
for schools with kindergarten through 5th or 6th grades, (b) Middle, for schools with 6th
through 8th or 7th through 9th grades, (c) High, with students from grade 9 or 10 through
12th grade, and (d) Other, for less common configurations such as K-8th grade, 5th to 6th
grade, or 6th through 12th grades.
Table 4.1
Comparison of Proportions of NCES School Levels in Participants to Other PA School Populations

Level        Participants' Schools   All Pennsylvania Schools    Pennsylvania PT Schools
             n     %                 n      %       Δ            n     %      Δ
Elementary   14    53.8              1854   57.1   -3.3          556   59.5   -5.7
Middle       5     19.2              561    17.3    1.9          187   20.0   -0.8
High         7     26.9              694    21.4    5.5          160   17.1    9.8
Other        0     0.0               139    4.3    -4.3          25    2.7    -2.7
Unknown      0     0.0               0      0.0     0.0          7     0.7    -0.7
Total        26    100               3248   100                  935   100
Note: Percentages may not total 100% due to rounding.
As may be seen from Table 4.1, the proportion of school levels represented by the
participants at the elementary and high school levels was comparable to the proportions in
the other comparison groups, with elementary schools slightly underrepresented in the
sample and high schools overrepresented. In Pennsylvania, elementary schools
represented 59.5% of schools using PerformanceTRACKER yet only 53.8% of the sample.
In contrast, 7 of the 26 schools represented (26.9%) were high schools, exceeding the
proportions of high schools in Pennsylvania and among Pennsylvania PT users, which
were 21.4% and 17.1% respectively.
NCES data reported three possible values for Title I eligibility, an indicator of the
degree to which economically disadvantaged (ED) students are represented in a particular
school building. Specifically, depending on the ED population in a school, a school
would be designated by NCES as: (a) Not Eligible for Title 1 funding, (b) Targeted, for
those schools with a certain level of ED students who receive targeted support through
Title 1 for remediation, and (c) School Wide, for schools with a significant number of ED
students.
Table 4.2 compares the representation of Title 1 status codes in the participant
population to the larger groups of all Pennsylvania schools and those PA schools that use
PerformanceTRACKER.
As seen in Table 4.2, the proportions of Title 1 eligibility codes in the participant
population were markedly different from the larger comparison groups. Specifically,
schools not eligible for Title 1 funding represented more than 42% of participant schools,
in contrast to 23.1% of PA schools and 30.3% of Pennsylvania PT schools with the same
designation.
Conversely, schools qualifying for School Wide Title 1 programming were underrepresented in the participant
Table 4.2
Comparison of Proportions of Title I Eligibility in Participants to Other PA School Populations

Title I        Participants' Schools   All Pennsylvania Schools    Pennsylvania PT Schools
Eligibility    f     %                 f      %       Δ            f     %      Δ
Not Eligible   11    42.3              751    23.1    19.2         283   30.3   12.0
Targeted       14    53.8              1721   53.0     0.8         546   58.4   -4.6
School Wide    1     3.8               704    21.7   -17.9         99    10.6   -6.8
Unknown        0     0.0               72     2.2     -2.2         7     0.7   -0.7
Total          26    100.0             3248   100.0                935   100.0
Note: Percentages may not total 100% due to rounding.
population, at 3.8% compared to 21.7% of all PA schools and 10.6% of PA
PerformanceTRACKER schools.
NCES data also described the proximity of a school to metropolitan areas using
one of four Locale categories: Rural, Town, Suburb, or City. Table 4.3 depicts the
distribution of Locale codes represented by participants' schools in contrast to the larger
PA populations.
Data presented in Table 4.3 indicate that the distribution of Locale codes
represented by the schools of the participants was considerably different from the
distribution of Locale codes across the state. As represented in the table, Rural schools
characterized only 19.2% of the sample population, more than 8% below the rate at
which Rural schools were found in other Pennsylvania comparison groups. Schools in
Table 4.3
Comparison of Proportions of Locale Codes in Participants to Other PA School Populations

Locale    Participants' Schools   All Pennsylvania Schools    Pennsylvania PT Schools
          n     %                 n      %       Δ            n     %      Δ
Rural     5     19.2              898    27.6   -8.4          264   28.2   -9.0
Town      7     26.9              430    13.2   13.7          153   16.4   10.5
Suburb    10    38.5              1315   40.5   -2.0          412   44.1   -5.6
City      4     15.4              605    18.6   -3.2          99    10.6    4.8
Unknown   0     0.0               0      0.0     0.0          7     0.7   -0.7
Total     26    100               3248   100                  935   100
Note: Percentages may not total 100% due to rounding.
Suburbs were also underrepresented in the sample. Conversely, schools in Towns were
overrepresented among the participants when compared to the PA comparison groups, by
as much as 13.7%. Schools in Cities were underrepresented when compared to the
state-wide population but overrepresented in terms of schools using
PerformanceTRACKER.
In Pennsylvania, there are seven categories of AYP status: (a) Made AYP,
indicating that the rate at which students passed the reading and mathematics PSSA was at
or above the thresholds set by the state, (b) Warning, for schools that failed to make AYP
but made AYP the previous year, (c) School Improvement 1 and (d) School Improvement 2,
for schools that fail to make AYP in a series of years, followed by (e) Corrective Action 1
and (f) Corrective Action 2, for schools that consistently fail to meet the goals set by the
state. If a school currently in any of the School Improvement or Corrective Action stages
successfully makes AYP, the school is designated in the seventh category, Making
Progress. Should the school make AYP in the next school year, it is designated as Made
AYP; should it fail to do so, the school is returned to the School Improvement or
Corrective Action designation. Table 4.4 describes the fraction of schools in various AYP
categories according to their 2010 Pennsylvania System of School Assessment (PSSA)
test scores and compares the distributions in the sample to the distributions found across
the state and in PerformanceTRACKER schools in terms of the AYP status achieved in
2010.
Table 4.4 indicates that the distribution of AYP status codes for the schools
represented by the sample population was comparable to the proportions found in schools
across Pennsylvania and in those PA schools that use PerformanceTRACKER. The
greatest difference in the distributions was found in the Made AYP category, which
represented 80.8% of the sample, whereas Made AYP schools were found at rates of
73.6% and 78.4% in all Pennsylvania schools and in PA PerformanceTRACKER schools
respectively.
Note that Tables 4.1 through 4.4 compared the distributions of four demographic
variables describing schools (Level, Title 1 Eligibility, Locale, and AYP Status) in three
populations: the schools represented by the participants, all Pennsylvania schools, and
those schools in Pennsylvania that have implemented or are implementing
Table 4.4
Comparison of Proportions of AYP Status in Participants to Other PA School Populations

AYP Status   Participants' Schools   All Pennsylvania Schools    Pennsylvania PT Schools
             n     %                 n      %       Δ            n     %      Δ
Made AYP     21    80.8              2390   73.6    7.2          733   78.4    2.4
Warning      2     7.7               222    6.8     0.9          69    7.4     0.3
SI 1         1     3.8               136    4.2    -0.4          36    3.9    -0.1
SI 2         1     3.8               99     3.0     0.8          25    2.7     1.1
CA 1         0     0.0               60     1.8    -1.8          17    1.8    -1.8
CA 2         1     3.8               201    6.2    -2.4          38    4.1    -0.3
N/A          0     0.0               18     0.6    -0.6          17    1.8    -1.8
Unknown      0     0.0               122    3.8    -3.8          0     0.0     0.0
Total        26    100               3248   100                  935   100
Note: SI = School Improvement; CA = Corrective Action. Percentages may not total
100% due to rounding.

PerformanceTRACKER. Tables 4.1, 4.2, and 4.3 showed substantial differences in the
proportions of Level, Title 1 Eligibility, and Locale across the three comparison groups.
Only AYP Status codes, as reflected in Table 4.4, showed that the sample population was
comparable to the larger groups.
Results Related to Research Questions
Research Question One.
The first research question in this study was, "What concerns are shared by
teachers in schools that have implemented or that are in the process of implementing
PerformanceTRACKER?" In order to address this question, mean scores for the entire
population of participants were computed for the seven Stages of Concern (SoC)
associated with the Concerns Based Adoption Model (CBAM); further, using a norming
table, those scores were converted to percentile ranks. Table 4.5 shows the mean raw
scores, standard deviations, and corresponding percentile rank on the SoC for each of the
seven possible stages measured by the Stages of Concern Questionnaire (SoCQ). Each
stage of concern measured in the SoCQ is normed separately with its own scale; thus, raw
scores between stages are not comparable. The norming table for the SoCQ may be found
in Appendix B.1.
Table 4.5
Raw Scores and Corresponding Percentile Ranks for the Stages of Concern

Stage of Concern   Raw Score M (SD)   Percentile Rank
Unconcerned        21.97 (6.51)       99
Informational      13.09 (6.564)      51
Personal           14.93 (7.837)      55
Management         13.38 (6.743)      47
Consequence        9.28 (5.220)       5
Collaboration      11.03 (5.684)      16
Refocusing         9.52 (4.960)       20
As evidenced in Table 4.5, when considering the entire research sample, the
highest percentile rank, 99, was found in the Unconcerned stage. This stage indicates that
most teachers who responded spend little time thinking about or interacting with
PerformanceTRACKER. Another important finding is that the lowest percentile rank is
found in the Consequence stage. The Consequence stage is associated with teachers'
concerns about the impact of the innovation in question on their students.
In evaluating the concerns profile for any group, the user of the Stages of Concern
Questionnaire (SoCQ) computes total raw scores for each stage by adding the Likert-scale
scores provided by the participant for the questions on the SoCQ associated with each of
the stages and then converts those raw scores to percentile ranks. Graphing the percentile
ranks for each of the stages creates a visual profile of the relative strengths of each of the
concerns. Figure 4.1 shows the concerns profile for the participating teachers.
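As a sketch of this scoring procedure, consider the following Python fragment. The item-to-stage key and most norming values are illustrative placeholders, not the actual SoCQ key or the Appendix B.1 table; only the mapping of a raw Informational score of 13 to the 51st percentile echoes Table 4.5.

    # Illustrative sketch of SoCQ scoring; item numbers and norming values
    # below are placeholders, not the actual SoCQ key or Appendix B.1 table.
    STAGE_ITEMS = {
        "Informational": [6, 14, 15, 26, 35],  # hypothetical item numbers
        # ... the other six stages would be keyed the same way
    }
    NORMS = {
        "Informational": {12: 48, 13: 51, 14: 54},  # raw score -> percentile
    }

    def raw_score(responses, stage):
        """Sum a participant's Likert responses for one stage's five items."""
        return sum(responses[item] for item in STAGE_ITEMS[stage])

    def percentile_rank(stage, raw):
        """Convert a raw stage score to a percentile rank via the norming table."""
        return NORMS[stage][raw]

    # Example: a participant whose Informational items sum to 13 ranks at the
    # 51st percentile, matching the aggregate mean reported in Table 4.5.
    responses = {6: 3, 14: 2, 15: 3, 26: 4, 35: 1}
    print(percentile_rank("Informational", raw_score(responses, "Informational")))  # 51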
In their research, the CBAM authors have identified several canonical forms of these
concerns profiles. The concerns profile revealed in Figure 4.1 is described in CBAM as
that of a non-user (George, Hall, & Stiegelbauer, 2006). According to the authors, a
population that displays this particular profile has its highest concerns scores in the
Unconcerned, Informational, and Personal stages and lower scores in the Consequence,
Collaboration, and Refocusing stages. Additionally, George, Hall, and Stiegelbauer
suggest that because the Personal and Informational stages are still high, with the Personal
stage score higher than the Informational stage score, this profile indicates that, in the
aggregate, teachers participating in this study are interested in learning more about
PerformanceTRACKER.

Figure 4.1. Aggregate Concerns Profile for Teachers Using PerformanceTRACKER.
[Line graph of Relative Intensity of Concern (percentile) across the Informational,
Personal, Management, Consequence, Collaboration, and Refocusing stages.]
Aside from the Unconcerned stage, the most prominent concerns in the aggregate
concerns profile are the Informational, Personal, and Management concerns. In addition
to responding to the quantitative Stages of Concern Questionnaire, teachers were invited
to respond to an open-ended item. The 286 participants provided a total of 201 qualitative
comments. Consistent with the graphical representation of the concerns above, 121
comments (more than one half of the comments collected) could be characterized as being
associated with these stages.
Sample comments reflecting teachers' concerns in the Informational, Personal, and
Management stages appear in Table 4.6.
Table 4.6
Quotes from Teachers Indicating Informational, Personal, and Management Concerns

Open-ended question: "When you think about using PerformanceTRACKER, what are
you concerned about?"

Sample of teacher comments:
Time management is a concern; however, I see the value of
data-driven curriculum. The district needs to provide time for
formal training using this product. I haven't begun to tap into its
potential.
Not enough time to use it to determine student performance and
make changes.
I feel that our plates are so full at this time that another
requirement put on us will cause us to again be distracted from
student learning to work on the next best thing.
Looking up over 100 students and continually going back for 4
Sight is very Time Consuming.
Takes a lot of time.
I am concerned that we put time into entering scores, but once
they are entered how often is this data used to improve student
achievement?
I hardly use PerformanceTRACKER. Would like to use it
more/learn more about it.
It's another thing to do. My day is too full of things to do now--
adding another is only going to make the day tougher.
One major concern I have is the relative lack of data available.
As students move between districts their scores are not moved
with them. This limits usable information.
The concern that I have is the time involved. I spend many
hours preparing lesson plans, quizzes, tests, & homework. I go
to various extracurricular activities to support my students.
There are only so many hours in a day and I do have a life of my
own as well. I put everything that I can into teaching my
classes.
As evidenced by the sample of comments shown in Table 4.6, the majority of
concerns expressed by teachers in their open-ended responses were consistent with the
graphical representation of the concerns profile, showing that most teachers using
PerformanceTRACKER are interested in learning more about it (Informational), dealing
with the demands required by the innovation (Personal), and understanding the processes
and tasks associated with the use of PerformanceTRACKER (Management).
While not typically reflected in the graphical version of the concerns profile, it is
important to note an additional finding. In the aggregate (as indicated in Table 4.5),
participants scored in the 99th percentile in the Unconcerned stage. In every subsequent
analysis in this study where the overall participant population was broken into
disaggregate groups, those subgroups also scored at the 99th percentile for the
Unconcerned stage. This pattern held in every disaggregate group across every
demographic variable explored in this study, whether that variable was associated with
the school or the teacher.
Research Question Two.
The second research question examined by this study was, "Are there significant
differences in the concerns of teachers when examined across demographic variables
describing their schools (Level, Title 1 Eligibility, Locale, and AYP Status)?"
Table 4.7 and Figure D.1 (found in Appendix D) show the concerns profile and
table of concerns across NCES building levels represented in the participant pool.
Demonstrated both quantitatively in Table 4.7 and graphically in Figure D.1, teachers in
high schools had higher Informational, Personal, and Management concerns than teachers
at the middle schools. Moreover, middle school teachers had higher scores in these same
three areas than teachers in elementary schools.

Table 4.7
Concerns Scores for Teachers in Various School Levels

School Level          Informational  Personal  Management  Consequence  Collaboration  Refocusing
Elementary (n = 95)   M  11.07   12.85   11.21   8.54    11.32   9.02
                      SD  6.592   7.646   6.396   4.963   5.833   4.146
Middle (n = 54)       M  12.93   14.74   13.80   10.15   12.13   10.98
                      SD  6.207   6.827   6.649   5.935   5.477   5.071
High (n = 137)        M  14.56   16.44   14.73   9.45    10.41   9.28
                      SD  6.342   8.050   6.677   5.063   5.618   5.345
As noted in Table 4.7, mean scores increased from the elementary level to the
middle level and increased further at the high school level for the Informational, Personal,
and Management stages, indicating greater concerns about learning about
PerformanceTRACKER at the middle and high school levels. ANOVA revealed
statistically significant differences across the three levels for Informational,
F(2,283)=8.353, p<.001, Personal, F(2,283)=6.100, p=.003, and Management,
F(2,283)=8.156, p<.001, concerns. A marginally significant difference was noted among
the three levels for the Refocusing stage, F(2,283)=3.021, p=.050. No statistical
differences were noted between school levels at the other stages of concern. The
complete ANOVA table for these data appears in Appendix C.3.
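A minimal sketch of this kind of one-way ANOVA, using Python's scipy on synthetic stand-ins for the per-teacher Informational scores (the actual analysis was performed in SPSS on the real survey responses):

    import numpy as np
    from scipy import stats

    # Synthetic stand-ins for per-teacher Informational raw scores, drawn to
    # echo the group means, SDs, and ns reported in Table 4.7.
    rng = np.random.default_rng(0)
    elementary = rng.normal(11.07, 6.592, 95)
    middle = rng.normal(12.93, 6.207, 54)
    high = rng.normal(14.56, 6.342, 137)

    # One-way ANOVA across the three school levels; with 3 groups and 286
    # teachers, the degrees of freedom are (2, 283), as reported above.
    f_stat, p_value = stats.f_oneway(elementary, middle, high)
    print(f"F(2,283) = {f_stat:.3f}, p = {p_value:.4f}")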
Table 4.8 and Figure D.2 (found in Appendix D) show the table of concerns
scores and the graphical representation of the concerns profile when compared across
NCES Locale codes for the aggregate participant pool.
As can be noted in both Table 4.8 and Figure D.2, the mean scores and percentile
ranks for each of the Locale codes do not exhibit substantial variation. Statistically
significant differences were noted between concerns across Locale codes for only the
Table 4.8
Concerns Scores for Teachers in Various School Locales

Locale Code        Informational  Personal  Management  Consequence  Collaboration  Refocusing
Rural (n = 21)     M  12.71   15.29   12.76   9.81    13.10   10.76
                   SD  6.589   6.769   5.403   5.066   6.587   4.721
Town (n = 57)      M  10.98   12.67   13.02   8.95    11.61   9.68
                   SD  6.548   8.271   5.845   5.065   5.512   5.600
Suburb (n = 134)   M  13.33   15.14   13.17   8.99    10.22   8.78
                   SD  6.682   7.799   7.304   5.238   5.805   4.667
City (n = 74)      M  14.41   16.18   14.23   9.89    11.47   10.36
                   SD  6.061   7.631   6.714   5.383   5.164   4.901
Informational stage, F(3,282)=3.098, p=.027. No other statistically significant
differences were observed between school locales for the other stages of concern. The
complete ANOVA table for this analysis may be found in Appendix C.4.
The next demographic variable explored in this study was related to the schools'
poverty level in terms of Title 1 Eligibility status. A school that is eligible for Targeted
Assistance under Title 1 has a greater level of poverty than a school not eligible; a school
eligible for School-Wide Title 1 programming has an even greater level of economic
need. Table 4.9 and Figure D.3 (found in Appendix D) show the concerns profile and
table of concerns across Title 1 Eligibility categories for the participants' schools.
Table 4.9 reflects the decrease in mean scores in the Informational, Personal, and
Management stages as the level of Title 1 poverty increases. For the Informational stage,
scores fell from 14.02 to 12.46 to 7.00 for Not Eligible, Targeted, and School-Wide
assistance respectively. Similarly, scores in the Personal stage fell from 15.28 to 14.88
to 8.71 respectively; Management stage concern scores also fell, from 14.47 to 12.54 to
Table 4.9
Concerns Scores for Teachers in Schools by Title 1 Eligibility

Title I Eligibility     Informational  Personal  Management  Consequence  Collaboration  Refocusing
Not Eligible (n = 141)  M  14.02   15.28   14.47   9.31    10.74   9.52
                        SD  6.376   7.789   6.562   5.271   5.301   5.189
Targeted (n = 138)      M  12.46   14.88   12.54   9.37    11.38   9.45
                        SD  6.652   7.891   6.767   5.198   6.122   4.732
School Wide (n = 7)     M  7.00    8.71    8.29    6.71    10.14   10.71
                        SD  3.873   5.648   5.880   4.572   4.180   5.219
8.29 respectively. Scores in the Consequence, Collaboration, and Refocusing stages did
not follow a similar pattern.
Despite marked decreases in mean scores for the Informational, Personal, and
Management stages and the substantial difference in the visual representation of the
concerns profile as displayed in Figure D.3 (see Appendix D), statistically significant
differences were noted across Title 1 Eligibility status for only the Informational stage,
F(2,283)=5.224, p=.006, and the Management stage, F(2,283)=5.054, p=.007. Analysis
revealed no other statistically significant differences across Title 1 Eligibility for the other
stages of concern. The complete ANOVA table for these data is in Appendix C.5.
The final school-based demographic variable explored in this study was the AYP
Status of the participants' schools. Table 4.10 and Figure D.4 (found in Appendix D)
show the table of mean scores for each stage of concern as well as the visual depiction of
concerns across AYP Status designations for the participants' schools.
Table 4.10
Concerns Scores for Teachers in Schools by AYP Status

2010 AYP Status               Informational  Personal  Management  Consequence  Collaboration  Refocusing
Made AYP (n = 215)            M  13.25   15.07   13.37   9.36    11.03   9.35
                              SD  6.584   7.518   6.799   5.342   5.806   4.733
Warning (n = 26)              M  12.88   14.35   12.73   8.96    10.12   8.54
                              SD  6.855   9.230   6.673   5.142   5.102   4.852
School Improvement 1 (n = 9)  M  9.89    9.33    13.33   7.44    12.33   12.11
                              SD  4.649   5.500   3.240   2.833   4.717   7.079
School Improvement 2 (n = 7)  M  7.00    8.71    8.29    6.71    10.14   10.71
                              SD  3.873   5.648   5.880   4.572   4.180   5.219
Corrective Action 2 (n = 29)  M  14.62   17.66   15.31   10.10   11.69   10.52
                              SD  6.377   8.587   6.970   5.024   5.995   5.779

Statistically significant differences were indicated across AYP status for only the
Informational stage, F(4,281)=2.525, p=.041, and the Personal stage, F(4,281)=3.279,
p=.012. No other statistically significant differences were observed across AYP Status
for the other stages of concern. The complete ANOVA table for these data is in
Appendix C.6.
Research Question Three.
The third research question examined by this study was, "Are there significant
differences in the concerns of teachers when compared across demographic variables
describing the teachers themselves (the grade level of their teaching assignment, their
amount of teaching experience, their amount of experience using
PerformanceTRACKER, their self-assessed facility with PerformanceTRACKER
specifically and technology in general, and whether or not they had received formal
training in using PerformanceTRACKER)?"
Table 4.11 and Figure D.5 (found in Appendix D) show the table of concerns
scores and visual concerns profile across the grade level of teachers' primary assignment
for the 2011 school year. For the purposes of this question, participants selected Primary
for assignments in grades PK-2, Intermediate for grades 3-5, Middle for grades 6-8, and
High for grades 9-12.
Table 4.11
Concerns Scores for Teachers across Level of Teaching Assignment

Level                           Informational  Personal  Management  Consequence  Collaboration  Refocusing
Primary (Gr PK-2) (n = 33)      M  10.21   12.33   10.21   6.79    11.27   8.09
                                SD  6.726   7.540   5.946   4.314   6.326   3.736
Intermediate (Gr 3-5) (n = 69)  M  10.97   13.20   11.61   9.35    11.72   9.62
                                SD  6.364   7.539   6.713   5.147   5.620   4.373
Middle (Gr 6-8) (n = 47)        M  13.96   14.87   14.30   10.43   11.68   11.04
                                SD  6.068   7.017   6.362   5.919   5.474   5.095
High (Gr 9-12) (n = 137)        M  14.56   16.44   14.73   9.45    10.41   9.28
                                SD  6.342   8.050   6.677   5.063   5.618   5.345

Similar to the information presented quantitatively in Table 4.7 and visually in
Figure D.1 above, Table 4.11 and Figure D.5 reveal that teachers in high schools had the
highest Informational, Personal, and Management concerns, followed in decreasing order
by teachers at the middle schools and then teachers in the intermediate or primary grades.
Of interest here, however, statistically significant differences were noted between the four
grade spans in four of the six concerns stages. Differences were present in the
Informational, F(3,282)=7.571, p<.001, Personal, F(3,282)=4.150, p=.007, Management,
F(3,282)=6.490, p<.001, and Consequence, F(3,282)=3.395, p=.018, stages. No
statistical differences were noted between teaching levels at the Collaboration and
Refocusing stages of concern. The ANOVA table for these data is in Appendix C.7.
The next demographic variable explored in this study was the Teaching
Experience, measured in years, of each participant. Table 4.12 and Figure D.6 (found in
Appendix D) show the concerns profile and table of concerns when disaggregated across
years of teaching experience.
Table 4.12
Concerns Scores for Teachers across Years of Teaching Experience

Teaching Experience (years)  Informational  Personal  Management  Consequence  Collaboration  Refocusing
0 (n = 3)        M  9.00    15.33   16.00   7.33    7.33    8.33
                 SD  3.464   5.132   9.644   2.082   0.577   3.215
1-5 (n = 53)     M  11.66   13.32   12.74   9.74    12.38   9.68
                 SD  5.997   6.254   5.691   5.182   5.249   3.906
6-10 (n = 70)    M  12.17   13.99   12.66   8.40    11.01   9.23
                 SD  6.386   7.401   6.574   4.756   6.376   4.990
11-15 (n = 56)   M  13.98   15.57   14.05   9.84    11.63   10.84
                 SD  7.187   8.653   6.496   5.582   5.578   5.239
16-20 (n = 40)   M  13.72   15.30   13.15   9.13    10.30   9.10
                 SD  6.880   7.994   7.361   4.751   5.219   6.059
21-25 (n = 31)   M  13.00   15.81   13.39   9.16    9.48    8.19
                 SD  5.899   7.998   7.065   5.152   5.079   4.708
26-30 (n = 24)   M  16.25   18.25   15.33   10.42   11.50   10.42
                 SD  7.079   9.008   7.987   6.440   6.262   4.491
31+ (n = 9)      M  13.67   14.00   13.67   8.56    8.22    7.00
                 SD  4.583   9.474   7.969   6.366   4.893   4.500
Despite apparent differences in means and in the visual representation of the
concerns profile, no statistically significant differences were noted in the concerns of
teachers with varying amounts of teaching experience. The complete ANOVA table for
these data is in Appendix C.8.
The next demographic variable explored in this study was Experience Using
PerformanceTRACKER, also measured in years. Table 4.13 and Figure D.7 (found in
Appendix D) show the table of concerns scores and the visual display of the concerns
profile when disaggregated across years of PerformanceTRACKER experience.
Table 4.13
Concerns Scores for Teachers across Years of PerformanceTRACKER Experience

TRACKER Experience
(before this school year)  Informational  Personal  Management  Consequence  Collaboration  Refocusing
Never Used (n = 49)       M  17.55   18.14   14.65   9.18    9.47    7.49
                          SD  6.844   8.573   7.731   5.747   5.478   4.556
1 Year (n = 54)           M  14.06   15.54   14.35   9.96    10.20   9.63
                          SD  5.982   7.422   7.018   5.518   5.381   5.594
2 Years (n = 48)          M  11.94   13.92   14.44   9.54    12.29   10.60
                          SD  5.628   7.610   5.048   4.603   6.123   4.336
3 Years (n = 39)          M  9.62    11.72   11.69   7.62    9.85    9.49
                          SD  5.451   7.824   7.609   5.504   5.153   5.472
4 Years (n = 29)          M  13.69   15.90   11.86   10.59   13.86   10.10
                          SD  6.809   7.687   6.004   4.997   7.205   4.443
5 or more Years (n = 67)  M  11.66   14.25   12.57   9.00    11.42   9.90
                          SD  6.168   7.091   6.354   4.812   4.733   4.774
As is evident in both the data presented in Table 4.13 and the visual
representation of concerns depicted in Figure D.7, teachers with no TRACKER
experience had the highest levels of Informational, Personal, and Management concerns.
Statistically significant differences were observed across levels of experience in the
Informational, F(5,280)=9.048, p<.001, and Personal, F(5,280)=3.520, p=.004, stages.
Also of note, teachers who had never before used PerformanceTRACKER had mean
scores significantly lower than those of their peers in the Collaboration and Refocusing
stages, F(5,280)=3.420, p=.005, and F(5,280)=2.315, p=.044, respectively. The
complete ANOVA table for these data may be found in Appendix C.9.
Next, teachers performed a self-assessment of their facility with technology in
general to explore whether teachers who felt more confident in their general technology
abilities held different concerns about PerformanceTRACKER. No specific technologies
were described in this question and no benchmarks were used to distinguish one
performance level from another. Table 4.14 and Figure D.8 show the table of concern
values and the diagram of the corresponding concerns profile when disaggregated across
teachers' assessments of their skill level.
The only significant finding for this question appeared in the comparison of the
means in the Management stage. A one-way ANOVA revealed statistically significant
differences between the means of teachers with various technology skills in this area,
F(3,282)=2.915, p=.035. No other statistically significant differences were noted in this
analysis. The complete ANOVA table for these data is located in Appendix C.10.
Next, teachers performed a similar self-assessment of their skills, but described
their ability with PerformanceTRACKER specifically. Again, no specific benchmarks
were used to distinguish among the levels of proficiency. Table 4.15 presents the
Table 4.14
Concerns Scores for Teachers across Self-Assessed Technology Proficiency

Tech Skills             Informational  Personal  Management  Consequence  Collaboration  Refocusing
Non-User (n = 9)        M  14.89   16.00   16.11   8.22    9.67    9.00
                        SD  5.278   5.477   6.412   5.449   4.950   6.285
Novice (n = 39)         M  15.23   17.21   15.95   10.41   11.59   9.87
                        SD  6.479   8.880   6.909   5.543   5.697   4.747
Intermediate (n = 172)  M  12.65   14.28   12.91   8.98    10.58   9.02
                        SD  6.599   7.653   6.919   5.193   5.547   4.762
Skilled (n = 66)        M  12.76   15.12   12.74   9.53    12.08   10.67
                        SD  6.523   7.810   5.869   5.066   6.052   5.298

concerns score data and Figure D.9 (found in Appendix D) visually presents the concerns
profile when disaggregated among teachers describing their own skill level with
PerformanceTRACKER.
Table 4.15
Concerns Scores for Teachers across Self-Assessed PerformanceTRACKER Proficiency

TRACKER Skill           Informational  Personal  Management  Consequence  Collaboration  Refocusing
Non-User (n = 50)       M  16.52   15.66   13.94   8.54    8.62    7.74
                        SD  6.503   8.233   8.049   5.552   4.981   5.066
Novice (n = 117)        M  14.17   16.33   14.10   9.67    11.28   9.91
                        SD  6.175   7.474   6.903   4.828   5.463   4.845
Intermediate (n = 100)  M  10.50   13.09   12.62   8.97    11.34   9.65
                        SD  5.622   7.555   5.843   5.244   5.476   4.779
Skilled (n = 19)        M  11.11   14.00   11.53   10.42   14.26   11.05
                        SD  8.116   8.944   6.132   6.449   7.658   5.512
Significant differences were present between the mean scores of TRACKER users
based on their own perceptions of their skill levels. Users expressing less expertise with
TRACKER had significantly greater Informational, F(3,282)=12.788, p<.001, and
Management, F(3,282)=3.406, p=.018, concerns. Conversely, PerformanceTRACKER
users who self-identified as having more skill with the software had significantly higher
Collaboration and Refocusing concerns, F(3,282)=5.467, p=.001, and F(3,282)=3.087,
p=.028, respectively. The complete ANOVA table for these data may be found in
Appendix C.11.
The final teacher-based demographic variable explored in this study was the
simple presence or absence of professional development experiences for teachers in the
use of PerformanceTRACKER. Table 4.16 and Figure D.10 (found in Appendix D) show
the concerns profile for teachers who indicated whether or not they had received some
sort of training on how to use PerformanceTRACKER.
Table 4.16
Concerns Scores for Teachers across Professional Development

Training                 Informational  Personal  Management  Consequence  Collaboration  Refocusing
No Training (n = 50)     M  15.68   17.02   14.26   9.60    10.62   9.34
                         SD  7.049   8.570   7.059   4.866   5.848   4.279
Some Training (n = 236)  M  12.55   14.48   13.20   9.21    11.12   9.56
                         SD  6.338   7.618   6.675   5.299   5.657   5.100
The only statistically significant differences in the mean scores between the two
groups in this analysis were at the Informational, F(1,284)=9.688, p=.002, and Personal,
F(1,284)=4.375, p=.037, stages. The complete ANOVA table for these data is offered in
Appendix C.12.
Summary
This chapter presented an analysis and interpretation of quantitative and
qualitative data collected using the Stages of Concern Questionnaire to respond to the
research questions explored in this study. Qualitative responses were summarized and
compared and contrasted to the overall quantitative profile of the participant pool.
Additionally, quantitative comparisons of concern profiles of PerformanceTRACKER
users were reviewed and compared across demographic variables describing teacher
participants and the schools in which they work. A discussion of the results of the study
occurs in Chapter Five.
CHAPTER FIVE DISCUSSION
Summary of the Study
For this study, classroom teachers from across Pennsylvania were asked to use the
Stages of Concern Questionnaire, a survey instrument that is part of the Concerns
Based Adoption Model, to describe their current concerns regarding
PerformanceTRACKER, a computer database which delivers assessment data results to
teachers. This research explored the presence of significant differences in teachers'
concerns across demographic variables describing the schools in which teachers worked
as well as across demographic variables describing the teachers themselves. As a final
point of analysis, qualitative perceptions were gathered to corroborate the quantitative
findings.
Summary of the Results
Participants.
Surveys were sent to 50 schools in 22 Pennsylvania school districts which were
purposefully chosen so that the proportion of demographic variables being explored in
this study closely matched the proportions of those variables in schools and districts
across the state. For this study, the variables explored were the level of school
(Elementary, Middle, or High), poverty level as measured by Title I eligibility status (Not
Eligible, Eligible for Targeted Assistance, or Eligible for School Wide Assistance),
Locale designation (Rural, Town, Suburb, or City), and AYP status (Made AYP, Warning,
School Improvement 1 or 2, Corrective Action 1 or 2, or Making Progress). More than
300 teachers from across the state responded to the surveys sent. Overall, 286 responses
were viable for the analysis. Respondents represented 26 different schools in 14 school
districts.
When the proportions of those variables in the returned surveys were compared to
the proportions of those same variables in all schools in Pennsylvania, it was found that
the distribution represented by the sample varied from that of other schools in the state.
As indicated in Table 4.1, high schools represented 26.9% of the sample, whereas high
schools comprise only 21.4% of Pennsylvania schools, an overrepresentation of 5.5%;
similarly, high schools characterized only 17.1% of schools using
PerformanceTRACKER, an overrepresentation of 9.8%. Conversely, elementary schools
were underrepresented by 5.7% and middle schools by 0.8% when comparing the
participants' schools to those in the state that are using PerformanceTRACKER.
Tables 4.2 and 4.3 showed that the proportions of Title I Eligibility codes and
Locale varied by as much as 19.2% and 13.7% respectively from the distributions found
in Pennsylvania. Table 4.4 also showed that the fraction of PA schools represented by
AYP status codes varied from the participant sample, with schools designated as Made
AYP appearing 7.2% more often in the sample than in the state. The finding that
demographic variables describing the schools are not present across the groups at the
same rate significantly limits the generalizability of the study to Pennsylvania schools in
general and Pennsylvania's PerformanceTRACKER users specifically.
Research Question One.
The first research question was designed to explore at a basic level the concerns
of teachers who have implemented or who are in the process of implementing
PerformanceTRACKER. Table 4.5 showed the percentile ranks of each of the Stages of
Concern. The stage of greatest concern, represented at the 99th percentile, was the
Unconcerned stage, described by George, Hall, and Stiegelbauer (2006) as one in which
individuals "indicate little concern about or involvement with the innovation" (p. 8).
This indicated that most teachers who responded to the study spend little time thinking
about PerformanceTRACKER. The next two highest ranks were in the Informational and
in the Personal stages of concern. These Self stages are typically characterized by
expressions of concern or involvement like "I would like to know more about [the
innovation]" and "How will using [the innovation] affect me?" The next highest level of
concern was in the Management stage, in which teachers express concerns about the
amount of time the innovation will require of them. Where qualitative responses were
provided, these likewise indicated that teachers were concerned about the time required
for using PerformanceTRACKER.
The Consequence stage was revealed to be consistently the lowest scoring stage
whether the analysis was of the entire participant set or any subset thereof. As noted
previously, the authors of the SoCQ permit researchers who are using the SoCQ to
replace the words "the innovation" with the name of the innovation itself. Specifically, a
question worded in the original SoCQ as "I would like to excite my students about their
part in the innovation" would be changed to read "I would like to excite my students
about their part in PerformanceTRACKER." The questions on the SoCQ that assessed
concerns in the Consequence stage are worded as follows:
I am concerned about students' attitudes towards PerformanceTRACKER.
I am concerned about how PerformanceTRACKER affects students.
I am concerned about evaluating my impact on students (in relation to
PerformanceTRACKER).
I would like to excite my students about their part in PerformanceTRACKER.
I would like to use feedback from students to change the program.
PerformanceTRACKER is a tool designed for teachers to use, not students. Since
the implementation of PerformanceTRACKER impacts only teachers, rather than another
change which might be experienced by teachers and students together, one could
postulate that the wording of these questions contributed to the lower scores in the
Consequence stage.
Research Question Two.
The second research question sought to ascertain the presence or absence of
significant differences in concerns of teachers when those concerns were explored across
demographic variables describing the schools in which they worked. When exploring
means of concerns scores across school levels (Elementary, Middle, High), the mean
scores for the Informational, Personal, and Management concerns were higher for middle
school teachers than for elementary teachers, and higher still for high school teachers.
Statistically significant differences were noted across the Informational (p<.001),
Personal (p=.003), and Management (p<.001) stages. This result suggested that
elementary school teachers are significantly less concerned about becoming aware of,
understanding, and managing PerformanceTRACKER than their middle- and high-school
counterparts. Consequently, this finding proposed that leaders of middle and high
schools might need to provide additional or alternative supports for their teachers to
address their heightened concerns in these areas.
With regard to the concerns of teachers when explored across Locale codes, the
only significant difference noted was in the Informational stage (p=.027) with concerns
somewhat lower for schools in towns than for those in rural, suburban, or city areas. It
can be concluded, therefore, that the location of the school or district is not a significant
factor in the concerns of teachers implementing this product.
Likewise, across Title I Eligibility, statistically significant differences were noted
only in the Informational (p=.006) and Management (p=.007) stages. The mean
concerns scores for School-Wide Title I Eligibility in comparison to Not Eligible for Title
I funding for these stages were 7.00 vs. 14.02 and 8.29 vs. 14.47 respectively. This
implied that PerformanceTRACKER and the data that it provides are more present in the
minds of teachers in schools which qualify for Title I funding than in those which do not.
It may be put forward that in Title I schools, the performance of students as measured by
the standards-based data typically found in PerformanceTRACKER is connected to
funding, programming, and staffing; thus, understanding and using the assessment data
provided by PerformanceTRACKER is more relevant in schools where student poverty is
tied to funding and where continual examination of student performance data is required.
The final school-based variable explored in this study was AYP Status. With
regard to the concerns of teachers when explored across this variable, the only significant
differences noted were in the Informational stage (p=.041) and the Personal stage
(p=.012). Of note here is that mean concern scores for teachers in schools designated as
being in Corrective Action were found to be consistently higher in every stage of concern
than for schools with any other AYP designation. It may be postulated based on this
observation that in schools at this lowest level of AYP status, teachers are more interested
in learning about PerformanceTRACKER, more concerned about managing the
requirements associated with its implementation, and more concerned about using
PerformanceTRACKER to positively impact student learning than teachers in other
schools. It may also be put forward that, due to pressures to raise achievement, schools
in Corrective Action are under intense pressure in a variety of areas. George, Hall, and
Stiegelbauer (2006) reported that if a change is imposed without sufficient supports,
concerns in all of the areas measured by the SoCQ can show increased values. Another
possible explanation may be that in schools designated as being in Corrective Action 2,
PerformanceTRACKER has been adopted as a tool to help teachers inform their
instruction, but in a manner that raises teachers' concern levels rather than lowers them.
Of additional interest when reviewing the above results, the demographic
variables of School Level and Title 1 Eligibility are ordinal in nature: High above Middle
above Elementary when considering the grade levels present in the building, and Not
Eligible, Eligible for Targeted Assistance, and Eligible for School Wide Title 1 Funding
when considering the level of poverty in a school. Stages of concern associated with
significant differences across these demographic variables tended to order themselves in
the same sequence within these variables. In contrast, the nominal variables of Locale
and AYP Status did not likewise rank the concerns of teachers in those schools, and
significant differences in the concerns of teachers across the variables of Locale and AYP
were not as common.
Research Question Three.
The third research question sought to explore whether significant differences
existed when considering variables associated not with the schools but with the teachers
themselves. First considering their level of teaching assignment (Primary, Intermediate,
Middle, or High), significant differences were observed in all stages but Collaboration
and Refocusing. Also of note were the increasing mean scores observed in the
Informational, Personal, and Management concerns stages. This finding was consistent
with results described earlier when examining concerns across building level, but reveals
a further, intriguing result. Specifically, Table 4.7 showed increasing means across
building levels, but Table 4.11 shows that these mean scores increased not only across
levels but even within levels. Mean scores for the Informational, Personal, and
Management stages were lower for teachers of the primary (PK-2) grades than for those
who teach the intermediate (3-5) grades, even within the K-5 elementary group
(M = 10.21 vs. 10.97, 12.33 vs. 13.20, and 10.21 vs. 11.61, respectively). This result was
particularly intriguing in that while Table 4.7 shows a general trend of teachers being less
involved with PerformanceTRACKER the higher the level of the building, Table 4.11
shows that this trend exists even across grade levels within the elementary level. These
data also suggested that primary teachers are more familiar with PerformanceTRACKER
than their peers teaching in the intermediate grades. This would imply that districts are
capturing data other than PSSA scores, because PSSA data are not available to teachers
regarding their students until early in students' 4th grade school year. Likewise, the
absence of PSSA
data in 9th and 10th grades could have contributed to the lack of involvement with
PerformanceTRACKER at the high school level.
An additional result related to the question of concerns among different teaching
levels is revealed when comparing ANOVA tables C.7 and C.3 for the analyses of the
data found in Tables 4.11 and 4.7 respectively. The disaggregation of the Elementary
NCES school designation used in Table 4.7 into a Primary or Intermediate teaching
assignment in Table 4.11 caused a shift in the p values reported for each category (where
they were >.0005 and could be compared). In comparing the p values, the level of
significance dropped in three areas, Personal, Collaboration, and Refocusing, and rose in
the Consequence stage.
The next analysis examined differences across levels of teaching experience.
While no statistically significant differences were noted, as was the case when concerns
were examined across several other ordinal variables, the mean scores in the
Informational stage tended to increase with experience, suggesting that teachers with
fewer years of experience are actually more informed about the innovation than their
more senior colleagues. Also of note is where the extrema are located in each stage of
concern. For example, as may be noted in Table 4.12, with respect to Management
concerns, teachers using PerformanceTRACKER in their first year of teaching had the
highest level of concern. Likewise, these new teachers had the lowest scores in the
Consequence and Collaboration stages. In contrast, teachers with 6-10 years of
experience had the lowest scores in the Management concern stage, and teachers with 1-5
years of teaching experience had the highest scores in the Collaboration stage. This
finding should potentially inform school leaders as they plan professional development
for and assign mentors to new teachers.
The next consideration in this study examined concerns with respect to the
amount of time teachers had been using PerformanceTRACKER. Mean scores for novice
users were significantly higher for the Informational and Personal stages (p<.001 and
p=.004, respectively) and significantly lower in the Collaboration and Refocusing stages
(p=.005 and p=.044, respectively). These data, like those above, have implications for
school leaders when planning professional development.
The next two demographic questions asked teachers to self-assess their capability
both with technology in general and with PerformanceTRACKER specifically. In the
former case, significant differences (p=.035) were noted only in the Management stage,
suggesting that teachers who were familiar with technologies were less concerned with
handling the demands of implementing PerformanceTRACKER. In contrast, teachers
who did not express confidence in their own PerformanceTRACKER abilities had
significantly higher Informational (p<.001) and Management (p=.018) concerns.
Conversely, teachers who expressed a greater level of competence with
PerformanceTRACKER had higher Collaboration (p=.001) and Refocusing (p=.028)
concerns. In exactly the manner predicted by CBAM, increased time working with
PerformanceTRACKER and opportunities for teachers to practice using the software
could cause positive shifts in concerns from the Self to the Impact stages.
The final exploration in this study focused on the effect of professional
development experiences. Of note here is that teachers responded only to whether or not
they had received any sort of professional development, as the nature and format of any
professional development was not explored in this study. Teachers who described
themselves as having been the recipients of professional development showed
significantly lower concerns in the Informational (p=.002) and Personal (p=.037) stages.
Typical goals of professional development with respect to PerformanceTRACKER would
be to lower the Self concerns (represented by the Informational, Personal, and
Management stages) and raise the Impact concerns (represented by the Collaboration and
Refocusing stages). While the mean scores for teachers who had received professional
development were lower in Management (M=13.20 vs. 14.26) and higher for
Collaboration (M=11.12 vs. 10.62) and Refocusing (M=9.56 vs. 9.34), these differences
in means were not found to be statistically significant. Thus, it could be concluded that
while professional development of any sort can significantly lower Informational and
Personal concerns and positively affect the mean scores in other stages, continued and
targeted professional development might be required to make those other differences
statistically significant as well.
Limitations of the Study
As data were being gathered from respondents, there was some evidence that
leaders who shared the survey with their teaching staff failed to completely follow the
directions accompanying the survey instrument. Specifically, some respondents
indicated that they were psychologists, cross-grade special education teachers, or
special-area teachers, who were beyond the scope of the study. Wherever possible, these
respondents' surveys were excluded from the analysis.
Purposive sampling of school districts using PerformanceTRACKER was
performed in such a manner as to match the proportions of school/district-based variables
(Level, Locale, Title I Eligibility, and AYP Status) across schools used in this study to that
of the proportion of all Pennsylvania schools which are using PerformanceTRACKER.
While there was close approximation between the sampled schools and the overall
population of schools using PerformanceTRACKER, participation in the study was
voluntary, and the distribution of these variables represented in the respondents departed
from that of the larger comparison population. Additionally, the researcher leveraged
professional networks in order to increase participation; for example, one such colleague
of the researcher, an assistant principal at a high school, provided more than half of the
respondents, potentially skewing the results. As such, the results of the study are not
generalizable.
Relationship to Other Research
Basic findings in this study are consistent with the assertions found in the
Concerns Based Adoption Model. As presented in Table 4.13, results showed that the
teachers who had never used PerformanceTRACKER had significantly higher
Informational and Personal concerns, whereas teachers with one or two years of
PerformanceTRACKER experience had higher Collaboration and Refocusing concerns.
This finding is consistent with the assertion of the Concerns Based Adoption Model that
as experiences with an innovation accumulate, users move through the phases from
Informational through Refocusing concerns (Fuller, 1969; Hall & Hord, 1987).
With reference to Table 4.15, teachers who described themselves as non-users of
PerformanceTRACKER showed significantly higher Informational concerns.
Conversely, those who described their skill level with PerformanceTRACKER as
Intermediate or Skilled had higher Collaboration and Refocusing concerns. This finding
corroborates the Concerns Based Adoption Model which would predict that users who are
more skilled with an innovation would tend to be exploring how to maximize the benefit
of that innovation for their students (George, Hall, & Stiegelbauer, 2006; Hall & Hord,
1987; Hord, Rutherford, Huling, & Hall, 2006).
The finding regarding professional development, shown in Table 4.16, is likewise
consistent with the Concerns Based Adoption Model. Teachers who have not received
professional development in PerformanceTRACKER would be seeking information
about what PerformanceTRACKER is, and dealing with the personal impact of
implementing it successfully, to a greater degree than their peers who have received at
least some training in PerformanceTRACKER.
Recommendations for Further Research
This study began the exploration of PerformanceTRACKER use and described
differences in populations but did not seek to understand why those differences existed.
Other results of the study warrant further exploration because they tend to support beliefs
held by some individuals about teachers at particular levels, but have not been
extensively and formally studied. The following research studies could add significantly
to the understanding of PerformanceTRACKER use in Pennsylvania:

1) The relationship between concerns regarding PerformanceTRACKER
use and teaching level. The concerns pattern revealed when use of
PerformanceTRACKER was analyzed across teaching level suggests
that PerformanceTRACKER is used and understood less and less as the
level of the teacher increases from elementary to middle to high school.
These data are displayed in Tables 4.7 and 4.11. Studies should be
performed which seek to understand what factors contribute to this
phenomenon.
2) The relationship between concerns regarding PerformanceTRACKER
use and data-dependent programs. Tables 4.9 and 4.10 show
differences in PerformanceTRACKER use among teachers depending on
Title 1 Eligibility and AYP Status. Both of these demographic
categories are associated with data use on the part of the school and its
teachers. Title 1 Eligibility is based on the percentage of students who
are economically disadvantaged, and entrance and exit criteria for Title 1
supports are based on data. Of note in these results is that, in the case of
Title 1 Eligibility, there were marked differences in the Informational
and Personal concerns between the Not Eligible and School Wide
levels, but not as marked a difference between the Not Eligible and
Targeted levels, as can be seen in Figure D.3 and Table 4.9. Research
should explore whether all teachers who are teaching in schools
eligible for Targeted Title 1 support are as involved with
PerformanceTRACKER as their colleagues who are providing the
supports. Likewise, AYP Status is driven by a number of data points,
not the least of which is performance on the PSSA. Teachers and
school leaders in schools failing to make AYP are focused on
improving student achievement. Additionally, one would expect that
teachers who are in schools failing to make AYP would be heavily
involved in data use. Yet, as can be seen in Figure D.4 and Table 4.10,
teachers who are teaching in schools in Corrective Action 2, the lowest
and most dire level of AYP status in Pennsylvania, actually had higher
Informational and Personal concerns, indicating that they have had less
exposure to PerformanceTRACKER than their colleagues who work in
schools making AYP. In both of these cases, additional exploration
should be conducted to understand the dynamics in the schools that are
contributing to these findings.
3) The relationship between concerns regarding PerformanceTRACKER
use and the relevancy and nature of the data stored. This study did not
explore whether there was any difference in the concerns of secondary
(grades 6-8) teachers for whom PSSA data are available (Math and
Reading) when compared to their colleagues in other areas (e.g., Social
Studies). Nor did this study explore the differences in concerns of
teachers depending on the quantity of data available in
PerformanceTRACKER. Further study should be conducted to see
whether or not the presence or absence of data specifically related to
one's subject area affects the concerns held by teachers of those
subjects. Relatedly, the concerns of teachers who have data available
through PerformanceTRACKER consisting only of PSSA scores should
be contrasted with the concerns of their colleagues in schools where
PerformanceTRACKER is used to hold additional types of data or data
that are updated more frequently than the annual addition of PSSA
scores.

4) The relationship between concerns regarding PerformanceTRACKER
use and professional development: The Stage of Concern that ranked
highest across all of the analyses was the Unconcerned stage, which
ranked at the 99th percentile in every analysis performed as part of this
study. Individuals whose highest concern is in the Unconcerned stage
have little concern about or involvement with the innovation (George,
Hall, & Stiegelbauer, 2006, p. 8). This finding could indicate that the
implementation of PerformanceTRACKER has been accomplished
without sufficient supports. Fuller (1969) pointed out that in order for
individuals to move through the Stages of Concern, additional
information needs to be acquired, additional practice needs to be
provided, and synthesis of this new knowledge needs to be
accomplished. George, Hall, and Stiegelbauer reported that attempting
to force higher-stage concerns can actually increase lower-stage
concerns, a finding that might explain why teachers at the lowest AYP
status, Corrective Action 2, have higher concerns than teachers in
schools making AYP. Coupling that assertion with the finding
presented in Table 4.16, that the mere presence of some sort of training
significantly decreased Informational and Personal concerns, gives this
study substantial implications for professional development. Additional
research should be conducted to explore the form, nature, frequency,
and structure of the professional development that would most
positively impact the implementation of PerformanceTRACKER.
5) The relationship between concerns regarding PerformanceTRACKER
use and the actual use of data: Though beyond the scope of this study,
the ultimate purpose of providing teachers with a tool like
PerformanceTRACKER is to enable them to access the data they need
to improve student achievement. Research should be conducted on
actual Levels of Use, another component of the Concerns Based
Adoption Model. Studying Levels of Use would provide additional
insights into how to introduce PerformanceTRACKER to teachers
effectively so that its implementation positively impacts data use by
teachers and, subsequently, student achievement.
Conclusion
Data-driven decision making has been prescribed for schools since the late 1990s.
Initially, data were more often used at the district level than at the school level. Large-
scale data were initially provided by external sources (for example, external demographic
studies used to create enrollment projections) and were used to make equally large-scale
decisions (district budgets, school attendance area boundaries, and staffing). As time
passed, however, a growing understanding of the value and potential power of data,
together with the growing ability of individuals in schools to analyze those data, caused
a paradigm shift in how data are used.
Today, schools can collect and analyze data down to the level of individual
students who are enrolled in specific programs and who are receiving specific kinds of
instruction from specific teachers. Additionally, data systems exist that enable teachers
to obtain nearly real-time data on student performance and to make the necessary
adjustments to their teaching just as quickly.
The shift from a macro use of data to help entire school systems to a micro use of
data to help individual students has set the stage for a revolution in instruction and in
meeting individualized student needs. However, despite substantial increases in the
affordability and ubiquity of, and investment in, technology, this revolution has yet to
take place. Additional reforms are therefore necessary to enable school leaders to
provide the supports needed to fulfill the promise of that revolution. Classroom teachers
will require time and appropriate professional development to develop comfort and
confidence with using data. As this study indicates, the success or failure of tools like
PerformanceTRACKER in providing the necessary information to teachers depends
heavily on appropriate, sufficient, and targeted professional development opportunities.
In order to meet the 2014 deadline of NCLB, the use of data and data systems to
inform instruction is crucial. The results of this study suggest that if school leaders
expect their staff members to use systems like PerformanceTRACKER, increased
resources and supports for teachers may be warranted, in the form of coaching,
additional time for professional development, and professional learning communities
organized around data. These strategies can assist teachers as they manage the rapid
integration of data into the culture of teaching and learning in the school. As
accountability demands on school systems, schools, principals, teachers, and students
increase, school leaders will need to be ever more creative in finding ways to provide
the resources necessary for data-driven reforms while fulfilling their fiduciary
responsibilities to their communities, if they are to realize the meaning of No Child Left
Behind.

References
American Association of School Administrators. (2002). Using data to improve schools:
What's working. Arlington, VA: Author.
Andrade, H., Buff, C., Terry, J., Erano, M., & Paolino, S. (2009). Assessment-driven
improvements in middle school students' writing. Middle School Journal, 40(4),
4-12.
Aspen Institute. (2007). Beyond NCLB (The Report of the Commission on No Child Left
Behind). Washington, DC: Author.
Baker, S., Gersten, R., Dimino, J. A., & Griffiths, R. (2004). The sustained use of
research-based instructional practice: A case study of peer-assisted learning
strategies in mathematics. Remedial and Special Education, 25(5). doi:10.1177/
07419325040250010301
Bernhardt, V. (2003). Using data to improve student learning in elementary schools.
Larchmont, NY: Eye on Education.
Bernhardt, V. (2004). Using data to improve student learning in middle schools.
Larchmont, NY: Eye on Education.
Bernhardt, V. (2007). Translating data into information to improve teaching and
learning. Larchmont, NY: Eye on Education.
Bernhardt, V. (2009). Data use. Journal of Staff Development, 30(1), 24-27.

Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in
Education, 5(1), 7-71.
Boudett, K. P., City, E. A., & Murnane, R. J. (Eds.). (2008). Data wise: A step-by-step
guide to using assessment results to improve teaching and learning. Cambridge,
MA: Harvard Education Press. (Original work published 2005)
Bracey, G. W. (2009). Time to say goodbye to high school exit exams. Principal
Leadership, 10(2), 64-66.
Brandt, R., & Voke, H. (Eds.). (2002). A Lexicon of Learning. Alexandria, VA:
Association for Supervision and Curriculum Development.
Bridges, W. (2009). Managing transitions (3rd ed.). Philadelphia: Da Capo Press.
Brimijoin, K., Marquissee, E., & Tomlinson, C. A. (2003). Using data to differentiate
instruction. Educational Leadership, 60(5), 70-73.
Brooks-Young, S. (2003). Tips for effective data management. Technology & Learning,
23(11), 11-18. Retrieved from http://www.techlearning.com/article/17956
Brunner, C., Fasca, C., Heinze, J., Honey, M., Light, D., Mandinach, E., & Wexler, D.
(2005). Linking data and learning: The Grow Network study. Journal of
Education for Students Placed At Risk, 10(3), 241-267.
Burns, M. K., Klingbeil, D. A., & Ysseldyke, J. (2010). The effects of technology-
enhanced formative evaluation on student performance on state accountability
math tests. Psychology in the Schools, 47(6), 582-591. doi:10.1002/pits.20492
Center on Education Policy. (2008). State high school exit exams. Washington, DC: Author.
Christou, C., Eliophotou-Menon, M., & Philippou, G. (2004). Teachers' concerns
regarding the adoption of a new mathematics curriculum: An application of
CBAM. Educational Studies in Mathematics, 57(2), 156-176.
Collins, J. (2005). Good to great and the social sectors. New York: HarperCollins.

Crawford, V. M., Schlager, M. S., Penuel, W. R., & Toyama, Y. (2008). Supporting the
art of teaching in a data-rich, high-performance learning environment. In E. B.
Mandinach & M. Honey (Eds.), Data-driven school improvement (pp. 109-129).
New York: Teachers College Press.
Creighton, T. (2001). School and data. Thousand Oaks, CA: Corwin Press.
Danielson, C. (2002). Enhancing student achievement: A framework for school
improvement. Alexandria, VA: Association for Supervision and Curriculum
Development.
Dobbs, R. L. (2004). Impact of training on faculty and administrators in an interactive
television environment. The Quarterly Review of Distance Education, 5(3), 183-
194.
Donovan, L., Hartley, K., & Strudler, N. (2007). Teacher concerns during initial
implementation of a one-to-one laptop initiative at the middle school level.
Journal of Research on Technology in Education, 39(3), 263-286.
Downey, C. (2001). Special delivery: Elements of instruction. Leadership, 31(2), 35-36.
EDmin. (n.d.). Instructional data management system. Retrieved from http://
www.idmsweb.com/
Elmore, R. F. (2002). Bridging the gap between standards and achievement: The
imperative for professional development in education. Washington, DC: Albert
Shanker Institute.
Esty, D., & Rushing, R. (2007). The promise of data-driven policymaking. Issues in
Science and Technology, 23(4), 67-72.
Fox, D. (2001). No more random acts of teaching. Leadership, 31(2), 14-17.

Fuller, F. F. (1969). Concerns of teachers. American Educational Research Journal, 6(2),
207-226.
Fulton, M. (2006). Minimum subgroup size for Adequate Yearly Progress (AYP): State
trends and highlights [Policy Briefing]. Retrieved from http://www.ecs.org/
clearinghouse/71/71/7171.pdf
George, A. A., Hall, G. E., & Stiegelbauer, S. M. (2006). Measuring implementation in
schools: The stages of concern questionnaire. Austin, TX: SEDL.
Granger, D. A. (2008). No Child Left Behind and the spectacle of failing schools: The
mythology of contemporary school reform. Educational Studies, 43(3), 206-228.
doi:10.1080/00131940802117654
Hall, G. (2010). Technology's Achilles heel: Achieving high-quality implementation.
Journal of Research on Technology in Education, 42(3), 231-253.
Hall, G. E., & Hord, S. M. (1987). Change in schools: Facilitating the process. Albany:
State University of New York Press.
Heubert, J. P. (2002). High-stakes testing: Opportunities and risks for students of color,
English-language learners, and students with disabilities. Wakefield, MA:
National Center on Accessing the General Curriculum.
Hoff, D. (2007). A hunger for data. Digital Directions, 1(1), 26-28.
Hollingshead, B. (2009). The concerns-based adoption model: A framework for
examining implementation of a character education program. NASSP Bulletin,
20(10), 1-18. doi:10.1177/0192636509357932
Hord, S. M., Rutherford, W. L., Huling, L., & Hall, G. E. (2006). Taking charge of
change. Austin, TX: SEDL.

IBM. (2007). Reinventing education. Retrieved from http://www.ibm.com/ibm/ibmgives/
grant/education/programs/reinventing/insights.shtml
Kowalski, T. J., & Lasley, T. J. (Eds.). (2009). Preface. In Handbook of data-based
decision making in education. New York: Routledge.
Larocque, M. (2007). Closing the achievement gap: The experience of a middle school.
The Clearing House, 80(4), 157-161.
Liu, Y., & Huang, C. (2005). Concerns of teachers about technology integration in the
USA. European Journal of Teacher Education, 28(1), 35-47.
Liu, Y., Theodore, P., & Lavelle, E. (2004). A preliminary study of the impact of online
instruction on teachers' technology concerns. British Journal of Educational
Technology, 35(3), 1-3.
Love, N. (2002). Using data/getting results: A practical guide for school improvement in
math and science. Norwood, MA: Christopher-Gordon Publishers.
Love, N. (2004). Taking data to new depths. Journal of Staff Development, 25(4), 22-26.
Love, N., Stiles, K., Mundry, S., & DiRanna, K. (2008). The data coach's guide to
improving learning for all students. Thousand Oaks, CA: Corwin Press.
Mandinach, E., & Honey, M. (2008). Data-driven school improvement: Linking data and
learning. New York: Teachers College Press.
Marzano, R. (2003). What works in schools: Translating research into action.
Alexandria, VA: Association for Supervision and Curriculum Development.
McTighe, J., & O'Connor, K. (2005). Seven practices for effective learning. Educational
Leadership, 63(3), 10-17.

Means, B. (2005, April). Evaluating the implementation of integrated student information
and instructional management systems. Paper prepared for the Policy and
Program Studies Service, U.S. Department of Education. Menlo Park, CA: SRI
International.
Mitchell, R. (2006, February 15). The nature of assessment: A guide to standardized
testing. Retrieved from http://www.centerforpubliceducation.org/site/
c.kjJXJ5MPIwE/ b.1501925/k.C980/ The_nature_of_assessment_A_
guide_to_standardized_testing.htm #t2a
National Commission on Excellence in Education. (1983). A nation at risk: An
imperative for educational reform. Washington, DC: Author.
New Hampshire Department of Education leverages curriculum & assessment data for
improved student performance. (2010, January 4). Customer Connections.
Retrieved from http://www.sungard.com/~/media/publicsector/casestudies/
performanceplus_newhampshire_casestudy.ashx
Newhouse, C. P. (2001). Applying the concerns-based adoption model to research on
computers in classrooms. Journal of Research on Computing in Education, 33(5).
Retrieved from http://staging.iste.org/Content/NavigationMenu/Publications/
JRTE/Issues/Volume_331/Number_5_Summer_2001/jrce-33-5-newhouse.pdf
No Child Left Behind Act, 20 U.S.C. § 6301 (2002).
Overbaugh, R., & Lu, R. (2008). The impact of a federally funded grant on a professional
development program: Teachers' stages of concern toward technology integration.
Journal of Computing in Teacher Education, 25(2), 45-55.

Pennsylvania Department of Education. (2007, January 6). 2006-2007 AYP Results
Frequently Asked Questions. Retrieved from http://www.portal.state.pa.us/portal/
server.pt/community/school_assessments/7442/2006-2007_ayp_faq/507513
Pennsylvania Department of Education. (n.d.). Academic Achievement Report. Retrieved
from http://paayp.emetric.net/Home/About
Pennsylvania Department of Education. (n.d.). Stronger high school graduation
requirements [FAQ Document]. Retrieved from http://www3.bucksiu.org/
167010122141821273/lib/167010122141821273/Keystone_Fact_Sheet.pdf
Petrides, L. A. (2006). Using data to improve instruction. T.H.E. Journal, 33(9), 32-37.
Popham, W. J. (2001). The truth about testing. Alexandria, VA: Association of
Supervision and Curriculum Development.
Popham, W. J. (2008). Transformative assessment. Alexandria, Virginia: Association for
Supervision and Curriculum Development.
Potter, R. L., & Stefkovich, J. A. (2009). Legal dimensions of using employee and
student data to make decisions. In T. J. Kowalski & T. J. Lasley (Eds.), Handbook
of data-based decision making in education (pp. 38-53). New York: Routledge.
Public Law 107-110. (2002). No Child Left Behind Act of 2001. Washington, DC:
107th Congress.
Rakes, G. C., & Casey, H. B. (2002). An analysis of teacher concerns toward
instructional technology. International Journal of Educational Technology, 3(1).
Retrieved from http://www.ed.uiuc.edu/ijet/v3n1/rakes/index.html

Regional Educational Laboratory Northeast & Islands. (2010). Evidence based education.
Retrieved from Regional Educational Laboratory Northeast & Islands Web site:
http://www.relnei.org/connecting.evidence.php
Rudner, L., & Boston, C. (2003). Data warehousing: Beyond disaggregation. Educational
Leadership, 60(5), 62-65.
School Information Systems. (2010). School information systems: Student data
management. Retrieved from http://www.sisk12.com/content/117/
student_data_management.aspx
Southwest Educational Development Laboratory. (Producer). (2006). Measuring
implementation in schools: Using the tools of the concerns-based adoption model
[DVD]. (Available from http://www.sedl.org)
SunGard Public Sector. (2009). PerformanceTRACKER [Brochure]. Bethlehem, PA:
Author.
SunGard Public Sector. (2010a). Plus360 - PerformancePLUS. Retrieved from http://
www.sungardps.com/products/performanceplus.aspx
SunGard Public Sector. (2010b). SunGard Plus360. Retrieved from http://
www.sungardps.com/plussolutions
SunGard. (2010a). About SunGard. Retrieved from http://www.sungard.com/
aboutsungard.aspx
SunGard. (2010b). SunGard Corporation. Retrieved from http://www.sungard.com
Tedford, J. (2008). When remedial means what it says: How teachers use data to reform
instructional interventions. The High School Journal, 92(2), 28-36.

Tomlinson, C. A. (2001). How to differentiate instruction in mixed ability classrooms
(2nd ed.). Alexandria, VA: Association for Supervision and Curriculum
Development.
Tyler, K. (2009). The effects of data-driven instruction and literacy coaching on
kindergarteners' literacy development. Dissertation Abstracts. (UMI No.
3387428)
U. S. Department of Education, National Center for Education Statistics. (2008).
Common Core of Data [Data File]. Available from http://nces.ed.gov/ccd/
U.S. Department of Education, Office of Elementary and Secondary Education. (2002).
On the horizon: State accountability systems [PowerPoint slides]. Retrieved from
US Department of Education Archives: http://www2.ed.gov/admins/lead/account/
stateacct/edlite-index.html
U.S. Department of Education, Office of Planning, Evaluation, & Policy Development.
(2008, August). Teachers' use of student data systems to improve instruction:
2005 to 2007. Retrieved from http://www.ed.gov/about/offices/lst/opepd/ppss/
reports.html
U.S. Department of Education. (2010, January 27). Title I, Part A program. Retrieved
from http://www2.ed.gov/programs/titleiparta/index.html
United States Congress. (1965). Elementary and Secondary Education Act of 1965.
Washington, D.C.: Congress (20).
Wayman, J. C. (2007). Student data systems for school improvement: The state of the
field. In Educational Technology Research Symposium, Vol. 1 (pp. 156-162).
Lancaster, PA: ProActive.

Wayman, J. C., & Stringfield, S. (2003, October). Teacher-friendly options to improve
teaching through student data analysis. Paper presented at the 2003 Annual
Meeting of the American Association for Teaching and Curriculum, Baltimore.
Wayman, J. C., Stringfield, S., & Yakimowski, M. (2004). Software enabling school
improvement through analysis of student data (Technical Report No. 67).
Baltimore: Center for Research on the Education of Students Placed At Risk
(CRESPAR), Johns Hopkins University.
Wiliam, D. (2007). Keeping learning on track: Formative assessment and the regulation
of learning. In F. K. Lester, Jr. (Ed.), Second handbook of mathematics teaching
and learning (pp. 1051-1098). Greenwich, CT: Information Age.
Young, V. M. (2006). Teachers' use of data: Loose coupling, agenda setting, and team
norms. American Journal of Education, 112(4), 521-548.
Ysseldyke, J. E., & Bolt, D. (2007). Effect of technology-enhanced continuous progress
monitoring on math achievement. School Psychology Review, 36(3), 453-467.
Ysseldyke, J. E., Betts, J., Thill, T., & Hannigan, E. (2004). Use of an instructional
management system to improve mathematics skills for students in Title I
programs. Preventing School Failure, 48(4), 10-14.
Ysseldyke, J. E., Spicuzza, R., Kosciolek, S., & Boys, C. (2003). Effects of a learning
information system on mathematics achievement and classroom structure. Journal
of Educational Research, 96(3), 163-174.
Ysseldyke, J. E., Tardrew, S., Betts, J., Thill, T., & Hannigan, E. (2004). Use of an
instructional management system to enhance math instruction of gifted and
talented students. Journal for the Education of the Gifted, 27(4), 293-310.

Appendix A

RERB Approval Form

Appendix B

Survey Instrument and SoCQ Norming Tables



Table B.1

Stages of Concern Norming Table

Percentile rank for stage, by SoCQ total raw score:

Raw Score  Unconcerned  Informational  Personal  Management  Consequence  Collaboration  Refocusing
0 0 5 5 2 1 1 1
1 1 12 12 5 1 2 2
2 2 16 14 7 1 3 3
3 4 19 17 9 2 3 5
4 7 23 21 11 2 4 6
5 14 27 25 15 3 5 9
6 22 30 28 18 3 7 11
7 31 34 31 23 4 9 14
8 40 37 35 27 5 10 17
9 48 40 39 30 5 12 20
10 55 43 41 34 7 14 22
11 61 45 45 39 8 16 26
12 69 48 48 43 9 19 30
13 75 51 52 47 11 22 34
14 81 54 55 52 13 25 38
15 87 57 57 56 16 28 42
16 91 60 59 60 19 31 47
17 94 63 63 65 21 36 52
18 96 66 67 69 24 40 57
19 97 69 70 73 27 44 60
20 98 72 72 77 30 48 65
21 99 75 76 80 33 52 69
22 99 80 78 83 38 55 73
23 99 84 80 85 43 59 77
24 99 88 83 88 48 64 81
25 99 90 85 90 54 68 84
26 99 91 87 92 59 72 87
27 99 93 89 94 63 76 90
28 99 95 91 95 66 80 92
29 99 96 92 97 71 84 94
30 99 97 94 97 76 88 96
31 99 98 95 98 82 91 97
32 99 99 96 98 86 93 98
33 99 99 96 99 90 95 99
34 99 99 97 99 92 97 99
35 99 99 99 99 96 98 99
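
In practice, Table B.1 is a lookup from a respondent's total raw score on each stage
(0 to 35) to a percentile rank. The sketch below illustrates that conversion in Python;
the NORMS dictionary excerpts only the Unconcerned and Informational columns of the
table above, and the function name is illustrative rather than part of the SoCQ materials.

# Sketch: converting SoCQ raw stage scores to percentile ranks using
# the norms in Table B.1. Only two of the seven stage columns are
# excerpted here; a full implementation would carry all seven.

# Percentile rank indexed by total raw score (0-35) for each stage.
NORMS = {
    "Unconcerned":   [0, 1, 2, 4, 7, 14, 22, 31, 40, 48, 55, 61, 69,
                      75, 81, 87, 91, 94, 96, 97, 98, 99, 99, 99, 99,
                      99, 99, 99, 99, 99, 99, 99, 99, 99, 99, 99],
    "Informational": [5, 12, 16, 19, 23, 27, 30, 34, 37, 40, 43, 45,
                      48, 51, 54, 57, 60, 63, 66, 69, 72, 75, 80, 84,
                      88, 90, 91, 93, 95, 96, 97, 98, 99, 99, 99, 99],
}

def percentile(stage: str, raw_score: int) -> int:
    """Look up the percentile rank for a raw stage score (0-35)."""
    if not 0 <= raw_score <= 35:
        raise ValueError("raw stage scores range from 0 to 35")
    return NORMS[stage][raw_score]

print(percentile("Unconcerned", 21))    # 99
print(percentile("Informational", 15))  # 57

A concerns profile such as those in Appendix D is simply these percentile ranks plotted
stage by stage for a group of respondents.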


Appendix C

Data Tables


Table C.1
Locale Codes used by the National Center for Education Statistics
Locale code  Code name        Definition
11           City, Large      Territory inside an urbanized area and inside a principal
                              city with population of 250,000 or more.
12           City, Midsize    Territory inside an urbanized area and inside a principal
                              city with population less than 250,000 and greater than or
                              equal to 100,000.
13           City, Small      Territory inside an urbanized area and inside a principal
                              city with population less than 100,000.
21           Suburb, Large    Territory outside a principal city and inside an urbanized
                              area with population of 250,000 or more.
22           Suburb, Midsize  Territory outside a principal city and inside an urbanized
                              area with population less than 250,000 and greater than or
                              equal to 100,000.
23           Suburb, Small    Territory outside a principal city and inside an urbanized
                              area with population less than 100,000.
31           Town, Fringe     Territory inside an urban cluster that is less than or equal
                              to 10 miles from an urbanized area.
32           Town, Distant    Territory inside an urban cluster that is more than 10 miles
                              and less than or equal to 35 miles from an urbanized area.
33           Town, Remote     Territory inside an urban cluster that is more than 35 miles
                              from an urbanized area.
41           Rural, Fringe    Census-defined rural territory that is less than or equal to
                              5 miles from an urbanized area, as well as rural territory
                              that is less than or equal to 2.5 miles from an urban cluster.
42           Rural, Distant   Census-defined rural territory that is more than 5 miles but
                              less than or equal to 25 miles from an urbanized area, as
                              well as rural territory that is more than 2.5 miles but less
                              than or equal to 10 miles from an urban cluster.
43           Rural, Remote    Census-defined rural territory that is more than 25 miles
                              from an urbanized area and is also more than 10 miles from
                              an urban cluster.
Note. From National Center for Education Statistics Common Core of Data (2007).
http://nces.ed.gov/ccd/commonfiles/glossary.asp
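
Throughout the study (see Table C.2 and Figure D.2), these twelve codes are collapsed
into four broad locale groups: City, Suburb, Town, and Rural. Because the tens digit of
each NCES code encodes the group, the mapping can be expressed compactly. The
following sketch is illustrative only and assumes codes are supplied as integers.

# Sketch: collapsing the twelve NCES locale codes into the four
# locale groups used in this study (see Table C.2). The tens digit
# of each two-digit code identifies the group.

LOCALE_GROUPS = {1: "City", 2: "Suburb", 3: "Town", 4: "Rural"}

def locale_group(locale_code: int) -> str:
    """Map an NCES locale code (11-43) to its broad locale group."""
    group = LOCALE_GROUPS.get(locale_code // 10)
    if group is None or locale_code % 10 not in (1, 2, 3):
        raise ValueError(f"unrecognized NCES locale code: {locale_code}")
    return group

print(locale_group(21))  # Suburb
print(locale_group(33))  # Town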

Table C.2
Proportion of School-Based Variables for Schools in Research Sample

                        State             Using                Permission        Sample
                                          PerformanceTRACKER   Granted
                        n          %      n          %         n        %        n       %
LEAs
  Districts             500               150                  22                22
  Schools               3,248             935                  128               50
  Teachers              123,100           40,936               5,357             2,178
  Students              1,769,786         580,838              77,684            31,301
Level
  Elem                  1854     57.1%    556      59.5%       73     57.0%      29    58.0%
  Middle                561      17.3%    187      20.0%       28     21.9%      10    20.0%
  High                  694      21.4%    160      17.1%       23     18.0%      9     18.0%
  Other                 139       4.3%    25        2.7%       4       3.1%      2      4.0%
Title I
  Not Eligible          751      23.1%    283      30.3%       39     30.5%      14    28.0%
  Targeted              1721     53.0%    546      58.4%       75     58.6%      29    58.0%
  School-Wide           704      21.7%    99       10.6%       14     10.9%      7     14.0%
Locale
  Rural                 898      27.6%    264      28.2%       28     21.9%      13    26.0%
  Town                  430      13.2%    153      16.4%       22     17.2%      8     16.0%
  Suburb                1315     40.5%    412      44.1%       63     49.2%      22    44.0%
  City                  605      18.6%    99       10.6%       15     11.7%      7     14.0%
2010 AYP Status
  Made AYP              2390     76.5%    733      78.4%       105    82.0%      39    78.0%
  Warning               222       7.1%    69        7.4%       10      7.8%      4      8.0%
  School Improvement 1  136       4.4%    36        3.9%       5       3.9%      2      4.0%
  School Improvement 2  99        3.2%    25        2.7%       1       0.8%      1      2.0%
  Corrective Action 1   60        1.9%    17        1.8%       1       0.8%      1      2.0%
  Corrective Action 2   201       6.4%    38        4.1%       5       3.9%      2      4.0%
  N/A                   18        0.6%    17        1.8%       1       0.8%      1      2.0%

Table C.3
ANOVA Table for Concerns * NCES School Level

                                 Sum of Squares    df   Mean Square       F    Sig.
Informational * School Level
  Between Groups (Combined)             684.541     2       342.270   8.353    .000
  Within Groups                       11595.911   283        40.975
  Total                               12280.451   285
Personal * School Level
  Between Groups (Combined)             723.428     2       361.714   6.100    .003
  Within Groups                       16780.030   283        59.293
  Total                               17503.458   285
Management * School Level
  Between Groups (Combined)             706.136     2       353.068   8.156    .000
  Within Groups                       12251.556   283        43.292
  Total                               12957.692   285
Consequence * School Level
  Between Groups (Combined)              96.903     2        48.452   1.788    .169
  Within Groups                        7668.275   283        27.096
  Total                                7765.178   285
Collaboration * School Level
  Between Groups (Combined)             125.922     2        62.961   1.962    .142
  Within Groups                        9081.728   283        32.091
  Total                                9207.650   285
Refocusing * School Level
  Between Groups (Combined)             146.575     2        73.288   3.021    .050
  Within Groups                        6864.837   283        24.257
  Total                                7011.413   285
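
Tables C.3 through C.12 report one-way ANOVAs of this form. As a minimal sketch of
how such an analysis can be replicated, the example below compares a single
concern-stage score across three school levels; the per-teacher scores are hypothetical
placeholders rather than study data, and scipy.stats.f_oneway returns F and Sig. (p)
values of the kind reported in these tables.

# Sketch: a one-way ANOVA of the kind reported in Tables C.3-C.12.
# The three score lists are hypothetical placeholders, not study data.
from scipy import stats

elementary = [62, 55, 71, 58, 66, 49, 60]  # Informational raw scores
middle = [70, 74, 61, 68, 77, 72]
high = [75, 81, 69, 78, 84, 73, 80]

f_stat, p_value = stats.f_oneway(elementary, middle, high)
print(f"F = {f_stat:.3f}, Sig. = {p_value:.3f}")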




Table C.4
ANOVA Table for Concerns * NCES Locale Code

                                 Sum of Squares    df   Mean Square       F    Sig.
Informational * Locale Code
  Between Groups (Combined)             391.793     3       130.598   3.098    .027
  Within Groups                       11888.658   282        42.158
  Total                               12280.451   285
Personal * Locale Code
  Between Groups (Combined)             415.483     3       138.494   2.286    .079
  Within Groups                       17087.975   282        60.596
  Total                               17503.458   285
Management * Locale Code
  Between Groups (Combined)              74.753     3        24.918    .545    .652
  Within Groups                       12882.939   282        45.684
  Total                               12957.692   285
Consequence * Locale Code
  Between Groups (Combined)              50.970     3        16.990    .621    .602
  Within Groups                        7714.208   282        27.355
  Total                                7765.178   285
Collaboration * Locale Code
  Between Groups (Combined)             210.603     3        70.201   2.200    .088
  Within Groups                        8997.048   282        31.904
  Total                                9207.650   285
Refocusing * Locale Code
  Between Groups (Combined)             159.415     3        53.138   2.187    .090
  Within Groups                        6851.998   282        24.298
  Total                                7011.413   285




Table C.5
ANOVA Table for Concerns * NCES Title I Eligibility

                                 Sum of Squares    df   Mean Square       F    Sig.
Informational * Title I Eligibility
  Between Groups (Combined)             437.276     2       218.638   5.224    .006
  Within Groups                       11843.175   283        41.849
  Total                               12280.451   285
Personal * Title I Eligibility
  Between Groups (Combined)             287.672     2       143.836   2.364    .096
  Within Groups                       17215.786   283        60.833
  Total                               17503.458   285
Management * Title I Eligibility
  Between Groups (Combined)             446.839     2       223.419   5.054    .007
  Within Groups                       12510.854   283        44.208
  Total                               12957.692   285
Consequence * Title I Eligibility
  Between Groups (Combined)              47.328     2        23.664    .868    .421
  Within Groups                        7717.850   283        27.272
  Total                                7765.178   285
Collaboration * Title I Eligibility
  Between Groups (Combined)              33.579     2        16.789    .518    .596
  Within Groups                        9174.071   283        32.417
  Total                                9207.650   285
Refocusing * Title I Eligibility
  Between Groups (Combined)              10.676     2         5.338    .216    .806
  Within Groups                        7000.737   283        24.738
  Total                                7011.413   285




Table C.6
ANOVA Table for Concerns * PA AYP Status

                                 Sum of Squares    df   Mean Square       F    Sig.
Informational * 2010 AYP Status
  Between Groups (Combined)             426.146     4       106.536   2.525    .041
  Within Groups                       11854.305   281        42.186
  Total                               12280.451   285
Personal * 2010 AYP Status
  Between Groups (Combined)             780.505     4       195.126   3.279    .012
  Within Groups                       16722.953   281        59.512
  Total                               17503.458   285
Management * 2010 AYP Status
  Between Groups (Combined)             300.709     4        75.177   1.669    .157
  Within Groups                       12656.983   281        45.043
  Total                               12957.692   285
Consequence * 2010 AYP Status
  Between Groups (Combined)             100.174     4        25.044    .918    .454
  Within Groups                        7665.004   281        27.278
  Total                                7765.178   285
Collaboration * 2010 AYP Status
  Between Groups (Combined)              55.160     4        13.790    .423    .792
  Within Groups                        9152.490   281        32.571
  Total                                9207.650   285
Refocusing * 2010 AYP Status
  Between Groups (Combined)             130.257     4        32.564   1.330    .259
  Within Groups                        6881.155   281        24.488
  Total                                7011.413   285




Table C.7
ANOVA Table for Concerns * Participant Teaching Assignment Level

                                 Sum of Squares    df   Mean Square       F    Sig.
Informational * Level
  Between Groups (Combined)             915.356     3       305.119   7.571    .000
  Within Groups                       11365.095   282        40.302
  Total                               12280.451   285
Personal * Level
  Between Groups (Combined)             740.009     3       246.670   4.150    .007
  Within Groups                       16763.449   282        59.445
  Total                               17503.458   285
Management * Level
  Between Groups (Combined)             836.905     3       278.968   6.490    .000
  Within Groups                       12120.787   282        42.982
  Total                               12957.692   285
Consequence * Level
  Between Groups (Combined)             270.682     3        90.227   3.395    .018
  Within Groups                        7494.496   282        26.576
  Total                                7765.178   285
Collaboration * Level
  Between Groups (Combined)             108.015     3        36.005   1.116    .343
  Within Groups                        9099.636   282        32.268
  Total                                9207.650   285
Refocusing * Level
  Between Groups (Combined)             184.670     3        61.557   2.543    .057
  Within Groups                        6826.743   282        24.208
  Total                                7011.413   285




Table C.8
ANOVA Table for Concerns * Participant Teaching Experience

                                 Sum of Squares    df   Mean Square       F    Sig.
Informational * Teaching Experience
  Between Groups (Combined)             521.164     7        74.452   1.760    .095
  Within Groups                       11759.287   278        42.300
  Total                               12280.451   285
Personal * Teaching Experience
  Between Groups (Combined)             524.805     7        74.972   1.228    .288
  Within Groups                       16978.653   278        61.074
  Total                               17503.458   285
Management * Teaching Experience
  Between Groups (Combined)             198.992     7        28.427    .619    .740
  Within Groups                       12758.701   278        45.895
  Total                               12957.692   285
Consequence * Teaching Experience
  Between Groups (Combined)             131.232     7        18.747    .683    .687
  Within Groups                        7633.946   278        27.460
  Total                                7765.178   285
Collaboration * Teaching Experience
  Between Groups (Combined)             328.723     7        46.960   1.470    .178
  Within Groups                        8878.928   278        31.939
  Total                                9207.650   285
Refocusing * Teaching Experience
  Between Groups (Combined)             247.030     7        35.290   1.450    .185
  Within Groups                        6764.382   278        24.332
  Total                                7011.413   285




Table C.9
ANOVA Table for Concerns * Participant PerformanceTRACKER Experience

                                 Sum of Squares    df   Mean Square       F    Sig.
Informational * TRACKER Experience
  Between Groups (Combined)            1708.141     5       341.628   9.048    .000
  Within Groups                       10572.310   280        37.758
  Total                               12280.451   285
Personal * TRACKER Experience
  Between Groups (Combined)            1035.092     5       207.018   3.520    .004
  Within Groups                       16468.366   280        58.816
  Total                               17503.458   285
Management * TRACKER Experience
  Between Groups (Combined)             406.259     5        81.252   1.813    .110
  Within Groups                       12551.433   280        44.827
  Total                               12957.692   285
Consequence * TRACKER Experience
  Between Groups (Combined)             191.724     5        38.345   1.418    .218
  Within Groups                        7573.455   280        27.048
  Total                                7765.178   285
Collaboration * TRACKER Experience
  Between Groups (Combined)             529.947     5       105.989   3.420    .005
  Within Groups                        8677.704   280        30.992
  Total                                9207.650   285
Refocusing * TRACKER Experience
  Between Groups (Combined)             278.394     5        55.679   2.315    .044
  Within Groups                        6733.019   280        24.046
  Total                                7011.413   285




Table C.10
ANOVA Table for Concerns * Participant Self-Assessed Technology Proficiency

                                 Sum of Squares    df   Mean Square       F    Sig.
Informational * Tech Skills
  Between Groups (Combined)             249.152     3        83.051   1.947    .122
  Within Groups                       12031.299   282        42.664
  Total                               12280.451   285
Personal * Tech Skills
  Between Groups (Combined)             287.464     3        95.821   1.570    .197
  Within Groups                       17215.994   282        61.050
  Total                               17503.458   285
Management * Tech Skills
  Between Groups (Combined)             389.773     3       129.924   2.915    .035
  Within Groups                       12567.919   282        44.567
  Total                               12957.692   285
Consequence * Tech Skills
  Between Groups (Combined)              79.840     3        26.613    .977    .404
  Within Groups                        7685.338   282        27.253
  Total                                7765.178   285
Collaboration * Tech Skills
  Between Groups (Combined)             135.733     3        45.244   1.406    .241
  Within Groups                        9071.918   282        32.170
  Total                                9207.650   285
Refocusing * Tech Skills
  Between Groups (Combined)             136.480     3        45.493   1.866    .135
  Within Groups                        6874.933   282        24.379
  Total                                7011.413   285




Table C.11
ANOVA Table for Concerns * Participant Self-Assessed PerformanceTRACKER
Proficiency

                                 Sum of Squares    df   Mean Square       F    Sig.
Informational * TRACKER Skill
  Between Groups (Combined)            1470.600     3       490.200  12.788    .000
  Within Groups                       10809.851   282        38.333
  Total                               12280.451   285
Personal * TRACKER Skill
  Between Groups (Combined)             612.048     3       204.016   3.406    .018
  Within Groups                       16891.410   282        59.899
  Total                               17503.458   285
Management * TRACKER Skill
  Between Groups (Combined)             199.806     3        66.602   1.472    .222
  Within Groups                       12757.886   282        45.241
  Total                               12957.692   285
Consequence * TRACKER Skill
  Between Groups (Combined)              79.217     3        26.406    .969    .408
  Within Groups                        7685.962   282        27.255
  Total                                7765.178   285
Collaboration * TRACKER Skill
  Between Groups (Combined)             506.054     3       168.685   5.467    .001
  Within Groups                        8701.597   282        30.857
  Total                                9207.650   285
Refocusing * TRACKER Skill
  Between Groups (Combined)             222.950     3        74.317   3.087    .028
  Within Groups                        6788.463   282        24.073
  Total                                7011.413   285




Table C.12
ANOVA Table for Concerns * Participant Professional Development in
PerformanceTRACKER

                                 Sum of Squares    df   Mean Square       F    Sig.
Informational * Training
  Between Groups (Combined)             405.084     1       405.084   9.688    .002
  Within Groups                       11875.367   284        41.815
  Total                               12280.451   285
Personal * Training
  Between Groups (Combined)             265.546     1       265.546   4.375    .037
  Within Groups                       17237.912   284        60.697
  Total                               17503.458   285
Management * Training
  Between Groups (Combined)              46.432     1        46.432   1.021    .313
  Within Groups                       12911.260   284        45.462
  Total                               12957.692   285
Consequence * Training
  Between Groups (Combined)               6.352     1         6.352    .233    .630
  Within Groups                        7758.826   284        27.320
  Total                                7765.178   285
Collaboration * Training
  Between Groups (Combined)              10.434     1        10.434    .322    .571
  Within Groups                        9197.216   284        32.385
  Total                                9207.650   285
Refocusing * Training
  Between Groups (Combined)               1.909     1         1.909    .077    .781
  Within Groups                        7009.504   284        24.681
  Total                                7011.413   285



Appendix D

Figures


Figure D.1. Concerns Profiles Across NCES Building Levels.

[Line graph of Relative Intensity of Concern (%ile) by Stage of Concern; series: Elementary, Middle, High.]


Figure D.2. Concerns Profiles Across NCES Locale Codes.

[Line graph of Relative Intensity of Concern (%ile) across the Informational, Personal, Management, Consequence, Collaboration, and Refocusing stages; series: Rural, Town, Suburb, City.]


Figure D.3. Concerns Profiles Across Title 1 Eligibility.

[Line graph of Relative Intensity of Concern (%ile) by Stage of Concern; series: Not Eligible, Targeted, School Wide.]


Figure D.4. Concerns Profiles Across 2010 AYP Status.

[Line graph of Relative Intensity of Concern (%ile) by Stage of Concern; series: Made AYP, Warning, School Improvement 1, School Improvement 2, Corrective Action 2.]


Figure D.5. Concerns Profile Across Teaching Assignment.

[Line graph of Relative Intensity of Concern (%ile) by Stage of Concern; series: Primary, Intermediate, Middle, High.]


Figure D.6. Concerns Profile Across Teaching Experience.

[Line graph of Relative Intensity of Concern (%ile) by Stage of Concern; series: Zero, 1 to 5 Years, 6-10 Years, 11-15 Years, 16-20 Years, 21-25 Years, 26-30 Years, More than 30 Years.]


Figure D.7. Concerns Profile Across PerformanceTRACKER Experience.

[Line graph of Relative Intensity of Concern (%ile) by Stage of Concern; series: Never, 1 Year, 2 Years, 3 Years, 4 Years, 5 or More Years.]


Figure D.8. Concerns Profile Across Self-Assessed Proficiency with Technology.

[Line graph of Relative Intensity of Concern (%ile) by Stage of Concern; series: Non-User, Novice, Intermediate, Skilled.]


Figure D.9. Concerns Profile Across Self-Assessed Proficiency with
PerformanceTRACKER.

[Line graph of Relative Intensity of Concern (%ile) across the Informational, Personal, Management, Consequence, Collaboration, and Refocusing stages; series: Non-User, Novice, Intermediate, Skilled.]


Figure D.10. Concerns Profile Based on Professional Development.

[Line graph of Relative Intensity of Concern (%ile) by Stage of Concern; series: No Training, Some Training.]
