
ITEM REALIZATION (PEREALISASIAN ITEM)

LEMBAGA PEPERIKSAAN

Step 1

WRITE OBJECTIVES AND PLAN TEST


The development of a test that accurately measures student achievement requires careful planning. Planning for tests should start with an examination of student outcomes identified in the instructional objectives. An objective is a communication device that specifies the knowledge, skills, and attitudes expected of students at the end of an instructional unit.

Objectives include three components: (1) conditions, (2) performance, and (3) criteria.

Conditions identify what is available to students (e.g., setting, resource materials, context, circumstances, or restrictions) when they are asked to complete the desired performance. Performance specifies the desired measurable and observable student outcome (i.e., what the student will be able to do). Criteria specify the standards (tolerance limits) of proficiency for satisfactory performance. For example: given a calculator and a list of ten scores (conditions), the student will compute the mean of the scores (performance) correct to one decimal place (criterion).

A table of specifications is a two-dimensional planning tool used to analyze instructional content in order to determine both the percentage of instructional time the teacher should spend on each objective and the type and number of test items needed to adequately measure student achievement of class content.
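As a minimal sketch of the arithmetic behind a table of specifications, the Python example below distributes a fixed number of test items across content areas and cognitive levels in proportion to instructional time. All content areas, weights, and totals here are hypothetical, chosen only for illustration:

    # Hypothetical table of specifications: distribute a 40-item test across
    # content areas (rows) and cognitive levels (columns) in proportion to
    # the share of instructional time spent on each.
    content_weights = {"Measurement": 0.30, "Statistics": 0.45, "Item writing": 0.25}
    level_weights = {"Knowledge": 0.50, "Comprehension": 0.30, "Application": 0.20}
    total_items = 40

    for area, aw in content_weights.items():
        # Items per cell = total * content weight * cognitive-level weight.
        # Rounding means cell counts may not sum exactly to total_items.
        row = {lvl: round(total_items * aw * lw) for lvl, lw in level_weights.items()}
        print(f"{area:<12} {row}  (about {round(total_items * aw)} items)")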

Step 2

DEVELOP TEST ITEMS.


Test items are classified as objective or subjective. Objective test items (true or false, multiple-choice, matching, and completion) are easy to write and score and can sample large amounts of content; however, they are limited to facts, encourage guessing, and fail to measure higher levels of cognitive learning. Subjective test items allow students to express their thoughts and require demonstration of mastery of instructional objectives in the higher levels of the cognitive domain.

Step 3

ESTABLISH TEST VALIDITY AND RELIABILITY.

Validity is the extent to which a test measures what it was intended to measure. Reliability provides an estimate of the consistency of test results. All tests must be valid and reliable to accurately measure student achievement. Everything from the testing environment to student illness can affect test validity and reliability.
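The document does not prescribe a particular reliability coefficient. As one common internal-consistency estimate, the sketch below computes the Kuder-Richardson 20 (KR-20) statistic for dichotomously scored (right/wrong) items; the score matrix is invented for illustration:

    # KR-20 reliability estimate for dichotomous (0/1) item scores.
    # Rows = students, columns = items; the data below are invented.
    scores = [
        [1, 1, 0, 1, 1],
        [1, 0, 0, 1, 0],
        [1, 1, 1, 1, 1],
        [0, 0, 0, 1, 0],
        [1, 1, 0, 0, 1],
    ]

    k = len(scores[0])                    # number of items
    n = len(scores)                       # number of students
    totals = [sum(row) for row in scores] # each student's total score

    mean_total = sum(totals) / n
    var_total = sum((t - mean_total) ** 2 for t in totals) / n  # population variance

    # p = proportion answering the item correctly, q = 1 - p
    pq_sum = 0.0
    for j in range(k):
        p = sum(row[j] for row in scores) / n
        pq_sum += p * (1 - p)

    kr20 = (k / (k - 1)) * (1 - pq_sum / var_total)
    print(f"KR-20 reliability = {kr20:.2f}")  # closer to 1.0 = more consistent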

Step 4

ASSEMBLE TEST ITEMS AND WRITE DIRECTIONS.


Test items should be assembled by type and in order of increasing difficulty. Test items should also be checked for inconsistencies and follow a parallel format. Experienced test developers have content experts read the test for understanding and clarity prior to administration. Clear and concise test directions must be developed so students understand how, where, and when to provide responses. Good test directions stand out from the other parts of a test through the use of a different font, a larger font size, or a text box.

Step 5

ADMINISTER TEST.
Prior to administration, teachers should consider the physical setting (e.g., space, lighting, ventilation, and temperature) of the testing environment. They should also address the psychological factors (e.g., test anxiety and pressure) that affect students by explaining the reason for the test and adequately preparing students for it.

Discuss the instructional content areas to be covered by the test as well as the format of the test. Discuss the parameters of the test (e.g., number and type of test items) before administration. Provide students with practice test items (and similar directions) prior to the test. Indicate to students that you expect them to succeed on the test and that you are available to help them.

Step 6

INTERPRET TEST RESULTS AND ANALYZE TEST ITEMS.


Test results are interpreted using descriptive statistics. Descriptive statistics summarize test results through measures of central tendency (mean, median, and mode) and measures of dispersion (range, percentiles/quartiles, ranks, standard deviation, and z-scores). Measures of central tendency describe the tendency of scores to cluster together.

Measures of dispersion describe the manner in which test scores are different or vary. Descriptive statistics coupled with graphic illustrations are useful in explaining test results to students, parents, and administrators.
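As a minimal sketch, all of these summaries can be computed with Python's standard statistics module; the score list below is invented for illustration:

    import statistics

    scores = [55, 62, 70, 70, 74, 78, 81, 85, 88, 92]  # invented class scores

    # Measures of central tendency
    mean = statistics.mean(scores)
    median = statistics.median(scores)
    mode = statistics.mode(scores)

    # Measures of dispersion
    score_range = max(scores) - min(scores)
    quartiles = statistics.quantiles(scores, n=4)    # Q1, Q2, Q3 (Python 3.8+)
    stdev = statistics.pstdev(scores)                # population standard deviation
    z_scores = [(s - mean) / stdev for s in scores]  # scores in standard-deviation units

    print(f"mean={mean}, median={median}, mode={mode}")
    print(f"range={score_range}, quartiles={quartiles}, sd={stdev:.2f}")
    print("z-scores:", [round(z, 2) for z in z_scores])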

Tests are analyzed using item analysis procedures, which provide a response profile for individual items and indicate item difficulty and item discrimination. Item difficulty indicates the percentage of students who responded correctly to a test item. Item discrimination provides an index of how an item discriminates between students who scored high and low on a test.
Item difficulty and item discrimination are used to determine which test items are effective and which test items need improvement or should be discarded.
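A minimal sketch of both indices follows, using the upper/lower-group method for discrimination; the 0/1 response matrix is invented, and other discrimination indices (such as the point-biserial correlation) are also widely used:

    # Item analysis on a 0/1 response matrix: rows = students, columns = items.
    # Data are invented for illustration.
    responses = [
        [1, 1, 1, 0],
        [1, 1, 0, 1],
        [1, 0, 1, 0],
        [1, 0, 0, 0],
        [0, 0, 0, 1],
        [0, 0, 0, 0],
    ]

    n = len(responses)
    totals = [sum(row) for row in responses]

    # Rank students by total score and compare the top and bottom halves.
    ranked = [row for _, row in
              sorted(zip(totals, responses), key=lambda t: t[0], reverse=True)]
    half = n // 2
    upper, lower = ranked[:half], ranked[-half:]

    for j in range(len(responses[0])):
        difficulty = sum(row[j] for row in responses) / n  # proportion correct overall
        p_upper = sum(row[j] for row in upper) / half
        p_lower = sum(row[j] for row in lower) / half
        discrimination = p_upper - p_lower                 # D index, ranges -1 to +1
        print(f"Item {j + 1}: difficulty = {difficulty:.2f}, "
              f"discrimination = {discrimination:+.2f}")

A high difficulty value means the item was easy (most students answered correctly); a positive discrimination value means high scorers answered the item correctly more often than low scorers, which is the behavior an effective item should show.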
