

SPED 311 Assessment Review Project

Name: Ashlyn Cummings

Date: 4/20/16

School/Setting: SFA Middle School/ Life Skills

How does this project contribute to your knowledge about assessment?


This project has helped me understand why it is important to know whether or not a test you are giving a student is appropriate. The research I conducted showed me how easy it is to misjudge the quality of a test based on face value alone. The reviews, manual, normative data, reliability, and validity all play a major role in determining whether a test is an effective one to use, and that is the main lesson I took away from this project. My understanding of what makes a good test has increased tremendously, and I now know that I can make wise decisions about which tests to use with my future students.

On my honor, as an Aggie, I have neither given nor received unauthorized aid on this
academic work.

Signature____________________________________________

Practical Elements

Description of Test:

The Kaufman Functional Academic Skills Test (K-FAST) is designed for ages 15 to 85. The test was published by American Guidance Service, Inc. in 1994 and is currently retired. The K-FAST costs around $125.00 and includes a test manual, an easel, and a protocol. Administration takes 15 to 25 minutes and consists of two subtests, one in reading and one in arithmetic. The K-FAST is used as a supplement to comprehensive measures of intelligence and shares content with many intelligence, achievement, and adaptive behavior tests.

Description of test manuals:

The test manuals provided were easy to understand and use. The manuals state that the test is a well-normed measure of both functional behavior and academic achievement in students. The manuals are broken down by chapters with subheadings, include a table of contents, and discuss both reliability and validity scores. They are full of useful information, helpful not only to the person administering the test but also to the teachers who will use the results to gauge student progress. The information presented was complete except in the area of the norms, which lacked representation from some regions of the United States, such as the Southwest and East. The developers did gather background information on the students' parents and on different ethnic groups, and the representativeness of the norms was based upon parent educational level, ethnic group, and gender.

Description of test materials:



The test materials include a flipbook that uses pictures and word problems to test both functional and academic achievement. The materials also include a protocol used to record the students' answers. The directions given in the testing materials are short, consisting of only one sentence, with a 1 or 0 rating scale. The materials are well made and seem durable. The protocol, however, is not very helpful because its surface is slippery and difficult to write on, and the flipbook does not stand up by itself, which makes things a little harder on the person administering the test. The overall difficulty level is easy enough for some students, but the questions may not generalize to many different audiences.

Description of test protocols:

The protocol is the answer sheet the administrator uses to score the student while giving the test. The K-FAST protocol is lightweight and small, and because it is thin it is easy to handle, but it is hard to write on. The glossy finish makes it less durable and causes writing to smudge, which is a problem when the administrator is trying to record student responses. Other than this, the protocol is easy to use because the administrator only has to write a 1 or a 0 depending upon the student's response.

Description of test items:

The test items are pictures or words that are shown to the student, and the student is asked to either write the answer or state it orally. The administration process is fairly easy, with short one-sentence directions. The scoring is very simple as well: the administrator marks either a 1 or a 0, and for a student to be given a 1, the student must complete all items on the page. The items are appropriate for the ages to which the test is administered. The test items consist of pictorial representations used to assess the students' functional academic skills. The test has approximately 25 items in the arithmetic subtest and 29 items in the reading subtest. The reading subtest includes items such as signs, pictorial representations, and newspaper articles, while the arithmetic portion includes items such as counting, graphs, and grocery shopping.

Technical Evaluation

Norms:

The K-FAST norms were based upon the 1988 United States Census and on random sampling across the United States from ages 15 to 85 years and above. According to the review conducted by Steven R. Shaw (1994), the norming involved 2,600 participants from 27 states. The K-FAST developers based the norms upon age group, geographic location, socioeconomic status based on parent educational level, race/ethnic group, and gender (Kaufman, 1994). The norms, however, excluded residents of Alaska and Hawaii. The K-FAST sampling focused on four major regions: Northeast, North Central, South, and West. The presented norms made no mention of either special education or the representation of Texas. Shaw's review noted an overrepresentation of the North Central and South regions and an underrepresentation of the Northeast and West (1994). Geographic representation is not usually a major concern for a test's overall effectiveness, but it plays a big role here because states such as Florida have a higher senior adult population. This matters for the K-FAST because its target audience spans ages 15 to 85 and above.

Reliability:

Reliability was established through test-retest reliability. The test was administered twice to 116 normal adolescents and adults, with retest intervals ranging from 6 to 94 days (a mean of 33 days and a median of 31 days). The sample ranged from 15 to 91 years of age. The test-retest coefficients ranged from .80 to .91, values that reflect adequate test-retest reliability for the subtest and composite scores. The scores were reported separately for reading, arithmetic, and the composite. Standard errors of measurement are inversely related to reliability coefficients: the higher the reliability coefficient, the smaller the SEM. The composite scores have a mean SEM of about 4 points, and reading and arithmetic each have a mean SEM of about 5 points. Examiners are also encouraged to account for measurement error by banding standard scores with 90 or 95 percent confidence intervals.
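To illustrate the inverse relationship between the reliability coefficient and the SEM, here is a small worked example using the conventional SEM formula; the standard-score standard deviation of 15 and the 1.645 multiplier for a 90 percent band are assumptions of mine and are not taken from the K-FAST manual.

$$\mathrm{SEM} = SD\sqrt{1 - r_{xx}} = 15\sqrt{1 - .91} \approx 4.5 \text{ points}$$

$$90\%\ \text{band: standard score} \pm 1.645 \times \mathrm{SEM} \approx \text{standard score} \pm 7 \text{ points}$$

This is roughly consistent with the composite SEM of about 4 points reported above, since the composite has the highest reliability coefficient of the three scores.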

Validity:

The validity of the K-FAST was examined by comparing the test to other tests, using correlations between the K-FAST and measures such as the K-BIT, K-SNAP, and WAIS-R. The comparison groups consisted of a controlled population of people without disabilities. The test's ceiling rule is four consecutive scores of 0. Validity information was very hard to find in the manual. There were no numerical representations of the effectiveness of the assessment, but from what was presented, the assessment included concurrent validity, content validity, clinical validity, construct validity, and criterion-related validity. Concurrent validity was examined with persons who have reading disabilities, mental retardation, severe depression, Alzheimer's disease, and neurological impairments (Shaw, 1994). According to the review conducted by Steven R. Shaw, clinical validity is supported by the consultants' examination of adaptive inventories and the selection of concepts and items (1994). Construct validity was determined by establishing reading and mathematics tasks that apply to daily situations (1994). Finally, criterion-related validity was established through studies and analyses carried out during the development of the K-FAST. Because the validity information in the manual was so hard to find, it was difficult to judge whether the validity evidence was sound. In his review, however, Shaw was able to locate additional information through further research demonstrating that the K-FAST was indeed sound.

Journal Review #1: Mental Measurements Yearbook:

The review conducted by Steven R. Shaw described the test materials and purpose of the test and provided an overview of the test's reliability and validity. Shaw stated that the test is both easy and quick to administer and is not timed (1994). The testing materials, the easel and test records, are easy to follow and large enough for people to read. Regarding the normative data presented for the K-FAST, Shaw found that the test had fairly good normative data for ethnic groups, SES, gender, and education (1994). Although the normative data was good in these areas, there was a lack of geographic representation: the norms showed an underrepresentation of regions such as the Northeast and West and an overrepresentation of the North Central and South regions of the United States (Shaw, 1994). Shaw mentioned that geographic regions are usually not a major factor in how effective a test may be. For this test, however, because the K-FAST was not normed in states such as Arizona and Florida, which have larger senior populations, the norms for those who are 65 and older may not be a good representation of seniors in the United States (1994).

The reliability data discussed in this review found the test to be very reliable. Shaw shared how the test used both internal consistency reliability and test-retest reliability (1994). The range of internal consistency for the test was .83 to .94, and the test-retest reliabilities were .84 for arithmetic, .88 for reading, and .91 for the composite score. Shaw mentioned that the manual reported standard errors of measurement for the arithmetic portion of the test, which helped make the test easier to interpret and helped administrators avoid test abuse (1994). The reviewer discussed how the validity of the K-FAST was supported by concurrent validity, clinical validity, construct validity, and criterion-related validity, and he mentioned that the manual presented many different ways to support the validity of the test (1994). Overall, Shaw's review concluded that the K-FAST is a very good test that should be used as a supplemental aid to other assessments (1994). His final remarks were that the test is a good starting place for administrators, but to make the process more effective he would suggest using the CASAS, which has an adult-oriented functional assessment system focus; this would give a more well-rounded approach to truly assessing functional skills. The information provided by this review was extremely helpful in determining whether this test was effective. The reviewer did a very thorough examination of the K-FAST and provided valuable information that went far beyond what was presented in the manual.

Journal Article #2:

The article "Improvement in Academic Screening Instruments? A Concurrent Validity Investigation of the K-FAST, MBA, and WRAT-3," by Dawn P. Flanagan and colleagues, examined the K-FAST, MBA, and WRAT-3 for concurrent validity. The purpose of the article was to show the extent to which scores from the WRAT-3, K-FAST, and MBA are comparable in terms of correlation, or concurrent validity, and mean scores (Flanagan, 1997). The research was conducted on 62 adult volunteers who ranged from 19 to 45 years of age. The K-FAST was designed to measure academic skills through problems that people encounter every day (1997). The Woodcock-McGrew-Werder Mini-Battery of Achievement (MBA) focuses on the measurement of four areas: reading, writing, mathematics, and factual knowledge, and takes only 25 to 30 minutes to administer. The last test in the comparison was the Wide Range Achievement Test-3 (WRAT-3), which focuses on three academic skills: reading, spelling, and arithmetic. The overlap between the three tests shows that they all measure some form of reading and arithmetic. The results of the study showed that there were no significant correlations between the three tests. The MBA and WRAT-3 focus more on reading and mathematics, whereas the K-FAST measures some of the same skills but is ultimately focused on functional daily living skills (1997). Flanagan stated that the K-FAST and/or the MBA would be a suitable replacement for the WRAT-3 with regard to general achievement screening (1997). Overall, the findings in this article showed that the K-FAST has a lot of potential for helping determine students' functional academic skills in comparison to the other two tests. The article was also helpful in giving a picture of how this test compares to others with similar intentions, and the research was useful in determining the effectiveness of the K-FAST with respect to student performance.



References:

Kaufman, A. S., & Kaufman, N. L. (1994). Kaufman Functional Academic Skills Test (K-FAST). American Guidance Service, Inc.

Flanagan, D. P., McGrew, K. S., Abramowitz, E., Lehner, L., Untiedt, S., Berger, D., & Armstrong, H. (1997). Improvement in academic screening instruments? A concurrent validity investigation of the K-FAST, MBA, and WRAT-3. Journal of Psychoeducational Assessment, 15(2), 99-112. doi:10.1177/073428299701500201
