West Branch Local Schools conducts a number of assessments for each grade level during the course of the year. These assessments have different purposes and are used to guide staffing and instruction. Below is a chart that highlights testing that is common throughout the district. These assessments do not include formative assessments that may be given at a specific grade level or course-specific assessments.

Assessments:
1) KRA-L - Given in days 2-3 of kindergarten. The test is scored and results are immediate. Results
are used by teachers, school psychologists, and administrators for the purposes of leveling
classes. Parents are not given the results unless they request them.
2) STAR Enterprise Reading and Math - Given in grades K-8. Results are immediate for staff and
students (when given). Parents are not given the results unless they request them. Star is used
to measure growth and determine interventions (a brief sketch of this quarterly logic follows
this list). Tests are given quarterly to all students, with at-risk students taking the test
additional times (progress monitoring) each quarter.
3) 3rd Grade Ohio Achievement Assessment (OAA) - This is the reading portion that is given in
conjunction with the 3rd Grade Reading Guarantee. Students are given the exam in the fall,
spring, and summer. Passage is required to be promoted to the 4th grade. It is unclear how
quickly and/or when these results are posted, and what the timeline is for results to reach
teachers, schools, students, and/or parents.
4) OAA Reading, Math, and Science - These tests are given in grades 3-8 in reading and math, as
well as grades 5 and 8 in science. Results are used for district report cards, mastery of content
standards, and evaluation of strengths and weaknesses. Value-added data is used to show student
progress and to make projections for future growth. Tests are given in the spring and results are
given to the district in the summer. Parents and teachers receive the results in the fall of the
following school year.
5) Ohio Graduation Test (OGT) - These tests are given in grades 10-12 and measure proficiency
in the areas of reading, writing, math, science, and social studies. Passage is required for
graduation from high school. Tests are given in the spring, with results posted sometime in the
summer. It is unclear what the information is used for other than targeted interventions in
areas of deficiency.
6) Ohio Test of English Language Acquisition (OTELA) - This test mirrors the OAA and OGT tests
and is given to English Language Learners. Tests are given in the spring, with parents, students,
and teachers receiving results in the fall of the following school year. Results are used in similar
ways as the OAA and OGT.
8) A"ternate Assessment !or $ognitive"% &isa'"ed (AA$&) * this test also mirrors the 566 and 5GT
tests in timing$ reporting$ and use. The test is given to people with disabilities who are unable to
ta"e the standard 566 or 5GT.
8) State Diagnostics - These tests are given in grades K-3 in January. Reading and math are given
in K-3, and writing is given in grade 3. Teachers score the tests upon completion and use the
information as a formative assessment to guide instruction. Parents and students are not given
test results unless they request them.
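As referenced in the STAR item above, here is a minimal sketch (in Python, for illustration only) of the quarterly growth and progress-monitoring logic described there. The benchmark value, student names, and scores are hypothetical placeholders; they are not actual STAR cut scores or West Branch data.

# Hypothetical sketch: compare quarterly scores against a benchmark to
# decide which students take additional progress-monitoring tests.
BENCHMARK = 450  # placeholder "on track" cut score; not a real STAR value

quarterly_scores = {
    "Student A": [430, 441, 455, 468],  # one score per quarter
    "Student B": [390, 398, 401, 410],
}

for name, scores in quarterly_scores.items():
    growth = scores[-1] - scores[0]   # growth from first to latest quarter
    at_risk = scores[-1] < BENCHMARK  # below benchmark on the latest test
    status = "progress monitoring (extra tests)" if at_risk else "quarterly testing only"
    print(f"{name}: growth = {growth} points, {status}")

In practice, STAR reports growth and screening categories itself; the sketch only makes concrete the distinction between quarterly testing for all students and extra progress monitoring for at-risk students.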

Interviews:
7ducators were as"ed the same five &uestions regarding testing. The &uestions were as follows:
1. 6re all of the assessments being utili;ed effectively< Why do you thin" so<
. What recommendations do you have that would help the building/district improve<
!. =o the assessments and their results help to increase student achievement<
-. 6re results communicated to all necessary sta"eholders<
0. 6re teachers well e&uipped to analy;e the data )if they>re involved in the process)<
4. =o teachers fully understand the purpose of the assessments<
8. What conclusions can you ma"e about the overall practices of the building/district<

A cross-section of educators was chosen at each of the building levels. Responses were not edited, but
reflect a paraphrase and/or direct statement from each educator. Positions and grade levels were
intentionally omitted to protect the anonymity of the participants and to encourage more open responses.
High School
1. Tests are being utilized, but not too sure that they are effective. The tests give us a
standardized score to compare kids.
2. They (administration) should train teachers on the administration of the tests (more than they
have). Some educators are asked to proctor a test that they have never given before. Even a
brief training would be helpful.
3. Not sure that tests help with achievement. The tests just tell us where they are, maybe where
they need to go.
4. The results are accessible, but not always given to all of the stakeholders.
5. Teachers are not always equipped to understand the data. The middle school uses Star, but not
everyone understands how to use the information.
6. Teachers seem to understand the purposes for the assessments, for the most part.
7. The district does a good job of administering the tests when and how they should be given.
Students are well prepared for testing and take the test according to all regulations. Teachers,
however, do not always get all of the information about results that they need. The focus of
district testing is to follow the rules to get good results, not to use the results for future
achievement.
Middle School
1. Star and quarterly formative assessments are being used.
2. Teachers have different levels of training when it comes to understanding the testing results.
Star is not used as well as it should be. It is a function of time. Teachers need help to root out
the useful data and put aside what is not necessary. Formative assessments could and should
be used more.
3. There is a lack of consistency between teachers. Some are using best practices, like
leveling as a result of testing results; other teachers are not.
4. Results are communicated, to an extent. They are being used to drive intervention.
5. Yes, teachers are well equipped to understand data.
6. Yes, but time is always an issue.
7. Assessments are always changing and more training is needed. We have hard-working teachers
who want to help students. Time, training, and tools would make all teachers better.
Middle School
1. No, assessments are given because we are told to give them. Results from formal assessments
are not as useful as formative assessments given in class.
2. More training on understanding the data from formal assessments might help in finding ways to
use the data.
3. No, tests are end results. Formative assessments are used for achievement.
4. Stakeholders have access to the data, although they may not know how to use it.
5. Teachers could use more time and training on specific ways to use the data to drive instruction,
not just intervention.
6. Teachers seem to understand what assessments are used for.
7. The district has made a push for more and more data. It has yet to determine the best ways to
use this data. Less information would be better if it were traded for more useful data.
Middle School
1. Assessments are not being used effectively. The Star data is overwhelming. Use varies
between teachers in what they like to use and what they don't.
2. Specific training on how to interpret the data would be helpful.
3. Yes, assessments do help student achievement, but there is still room for growth.
4. Yes, results are communicated.
5. More training is needed in the testing areas. We are awash in all the data. It is hard to know
what to focus on.
6. Teachers seem to know the purpose of assessments.
7. We are good at administering and scoring the tests. We need help in interpreting them.
Elementary
1. No, assessments are not being analyzed to find specific patterns and errors in student learning.
2. Assessments should be analyzed in more detail, looking for specific patterns in student learning
and deficits.
3. Yes, if they are analyzed.
4. No, results are not always communicated.
5. No, not all teachers are well equipped.
6. Yes, in general teachers understand what the tests are used for.
7. The district is taking steps towards using effective assessments. They are using more common
assessments. Improvement could be made in the area of analyzing the data.

Ana"%sis:
1. Are all of the assessments being utilized effectively? Why do you think so? Generally, the people
interviewed thought that assessments in the district were either not being used effectively or that they
needed some tweaking to be useful. Educators thought that the tests were mostly used to measure
achievement: a snapshot of progress on the given testing date.
2. What recommendations do you have that would help the building/district improve? Time and
training to understand the information was the most common response. Educators seemed to
understand that there was more information that could be gleaned, but they seemed unsure of ways to
do it.
3. Do the assessments and their results help to increase student achievement? There were mixed
responses to this question. The general consensus was that it depended greatly on the skill of
the individual teacher when it came to understanding the testing results and then using the data to help
improve student achievement.
4. Are results communicated to all necessary stakeholders? Educators thought that the results were
available to all stakeholders, but may not have been given to them. If someone wants the data, they
may need to seek it out.
5. Are teachers well equipped to analyze the data (if they're involved in the process)? Generally,
educators felt that they were not well equipped to analyze the data and would require more training to
better sift through and understand all of the information that is available to them.
6. Do teachers fully understand the purpose of the assessments? Consensus was that educators
understand the purpose of the assessments.
7. What conclusions can you make about the overall practices of the building/district?
Generally, it was concluded that the district does a very good job of giving tests and ensuring that proper
protocols are followed. It was also generally concluded that more training is needed to better interpret
the data, as well as ways to put the data into useful practice.

Over the course of discussions, the amount of data and how to use it was a consistent theme. Stull
supports the use of multiple tests: "NO single test can accomplish all the goals and objectives of the
diverse educational system. It is important both legally and technically not to put all the weight on a
single test when making important decisions about a student's success and school." (Stull, 2006)
However, there was another underlying theme, one that may not have been directly stated. It was
reflected in the way questions were answered, more than in what was said. It was the feeling that testing
may not be doing everything that we would hope it would do. This was evident in responses that
alluded to an understanding of why tests were given along with a lack of understanding as to how to
best use the data. Yaffe somewhat captures the collective sentiment: "Despite reformers' best
intentions, using test scores as the gauge of school success has distorted the educational system.
Standardized test scores were supposed to serve as proxies for something outside the test – literacy,
numeracy, workplace skills – but the proxy has become an end in itself." (Yaffe, 2009)
Research on whether tests improve achievement does not always support their use. "Math
and reading NAEP data reveal a few interesting patterns. In math, pre-NCLB achievement gains were
greater than post-NCLB gains. Thus, students were progressing in math at a much faster rate before the
national high-stakes testing movement spawned by NCLB. By comparison, fourth and eighth grade
reading achievement remained relatively stable over time, with the exception of small increases for
fourth graders (2005-2007) and small decreases for eighth graders (2003-2005) after NCLB. When it
comes to NAEP achievement from 2002 to 2009, the institution of the NCLB was followed by varied
achievement patterns in fourth and eighth grade math." (Nichols, 2012)
Additionally, "Differences in test scores often are assumed to reflect differences in an intervention (such
as the educational policies and practices of a state), when the scores reflect the collective nature of the
test-takers." (Marchant, 2005)
While the use of data from standardized tests allows educators to quantify outcomes, it is unclear
how testing does and/or should affect teaching practices. Research indicates that good teaching is the
solution for student growth, not good testing. West Branch Local Schools is very good at giving tests.
Educators in the district do not, however, feel as confident about analyzing the data and/or using it to
drive instruction. As the district continues to progress in the data-driven teaching era, West Branch
Local Schools will need to decide what weight to place on testing, what weight to place on good
teaching, and how the two disciplines relate.

District Overview:
West Branch Local Schools is a rural, tight-knit district. The district is made up of four schools: two
elementary schools (grades K-4), a middle school (5-8), and a high school (9-12). The average daily
enrollment for the district is 2,072 students. The distribution of students is as follows: Damascus
Elementary – 436 students, Knox Elementary – 237 students, Middle School – 698 students, High School
– 701 students. The student body makeup is 52 percent male and 48 percent female, and the total
minority enrollment is 2 percent. 27 percent of the students qualify as economically disadvantaged.
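As a quick arithmetic check on the figures above, the building enrollments sum to the reported district total, and each building's share follows directly. The short Python snippet below is illustrative only; the numbers are simply those reported in this overview.

# Building enrollments as reported above; they sum to the district's
# average daily enrollment of 2,072.
buildings = {
    "Damascus Elementary": 436,
    "Knox Elementary": 237,
    "Middle School": 698,
    "High School": 701,
}

total = sum(buildings.values())
print(total)  # 2072

for name, count in buildings.items():
    print(f"{name}: {count} students ({count / total:.1%} of the district)")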

Process:
The district test coordinator is also the middle school building principal. Information was freely shared
and accessible via a simple request. An interview was conducted to understand the number of tests and
their intended applications.
Educator interviews were conducted at random at different grade levels. Candidates were selected in a
non-scientific manner: random calls were made, and people were stopped outside of the school office
in the order they arrived at the end of the day. No candidate refused an interview, and each candidate
was offered a copy of this report. Candidates were promised anonymity to help ensure candid
responses. Candidates were asked only the questions listed above and were prompted only for
clarification. No other response was given, to avoid prompting or steering the candidates' responses.

Marchant, G. J., & Paulson, S. E. (2005). The relationship of high school graduation exams to graduation
rates and SAT scores. Education Policy Analysis Archives, 13(6). Retrieved from
http://epaa.asu/epaa/v13n6/

Nichols, S. L., Glass, G. V., & Berliner, D. C. (2012). High-stakes testing and student achievement:
Updated analyses with NAEP data. Education Policy Analysis Archives, 20(20).

Stull, J. C. (2006). Standardized tests: Is there a fair use? Journal of Law & Education, 35(5), 565-573.

Yaffe, D., Coley, R. J., Plinskin, R., & Educational Testing Service. (2009). Addressing achievement gaps:
Education testing in America: State assessments, achievement gaps, national policy and
innovations. ETS Policy Notes, 17(1), Winter 2009. Educational Testing Service.
