
ITEM RESPONSE THEORY FOR 'DUMMIES'
Maulik Shah
3rd November, 2010
K-hive Inaugural session
IN THE WORLD OF 'ASSESSMENT'...
Objective:
 To assess learning levels of a student in different concepts
 To gauge proficiency (ability level) of a student

Tools:
 A good test comprising appropriate items (questions)
 Diagnostics based on analysis of students' responses

"A product like ASSET has scope to evolve on both fronts."
TRADITIONAL METHOD OF SCORING
 Raw scores
 Percentile rank based on raw scores

So what??
SOME REAL TEST STATISTICS…

Total score       13     14     15     30     31
No. of students   1565   1635   1542   301    262

If you got 14 questions correct in the test, then just by answering 1 more question correctly you move ahead of about 6.4% of the students who took the test.
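The arithmetic behind this claim can be sketched as follows. The score counts come from the table; the overall total of 25,500 test takers is a hypothetical figure (the slides do not state the total), chosen only for illustration:

```python
# Sketch of the percentile arithmetic above. The per-score counts are
# from the table; the grand total of test takers is an assumption.
counts = {13: 1565, 14: 1635, 15: 1542, 30: 301, 31: 262}
total = 25_500  # hypothetical number of test takers

def percentile_rank(score):
    """Percentage of test takers scoring strictly below `score`."""
    below = sum(n for s, n in counts.items() if s < score)
    return 100.0 * below / total

# Moving from 14 to 15 correct overtakes everyone who scored exactly 14.
gain = percentile_rank(15) - percentile_rank(14)
print(f"One extra correct answer gains about {gain:.1f} percentile points")
```

The gain from one extra correct answer is exactly the share of test takers sitting on the score you leave behind, which is why scores in the crowded middle of the distribution are so sensitive.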
A FEW ITEMS FROM ASSET…
Which is an easier question of the two?
INSIGHTS!!!
 54% of the class 7 students answered the first question correctly
 82.2% of the same set of students answered the 2nd question correctly
A case:

            Item 1   Item 4   Total score
Student X   √        X        10
Student Y   X        √        10

Both get the same score and the same percentile rank!
Any change in the perspective!!!
A SOLUTION…
Problem: Assigning the same credit (mark) to each
item in the test irrespective of their difficulty level
Solution: Any guess!!!
Assign different credit (mark) to questions as per their
difficulty level
But how???

TWO ITEMS OF NEARLY THE SAME DIFFICULTY
INSIGHTS!!!
 60% of the students who scored 20 on the test
answered the first question correctly
 70% of the same set of students answered the 2nd
question correctly
A case:

                               Item 1   Item 2   Total score
Student X of ability 1 unit    √        X        1
Student Y of ability 2 units   X        √        1

The item does not discriminate between the two.

A SOLUTION…
Problem: Assigning the same credit (mark) to an
item in the test irrespective of the ability of the
student who answers it correctly
Solution: Any guess!!!
Assign different credit (mark) to a question as per its
ability to discriminate at different ability levels
 This adds the dimension of gauging a student's ability while simultaneously estimating the discriminating power of an item.
A MODEL OF A NEW SCORING MECHANISM

Estimate ability level of a student ⇄ Estimate item parameters
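The two estimates feed each other, and one way to sketch that loop is an alternating scheme under the 1-parameter (Rasch) model: update abilities with item difficulties held fixed, then update difficulties with abilities held fixed, and repeat. The simulated data, the learning rate, and the plain gradient-ascent updates are all illustrative assumptions; real IRT software uses more sophisticated estimation (e.g. marginal maximum likelihood):

```python
import math
import random

def p_correct(theta, b):
    """Rasch model: probability that ability `theta` beats difficulty `b`."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# Simulate made-up responses from known abilities and difficulties.
random.seed(0)
n_students, n_items = 30, 5
true_theta = [random.gauss(0, 1) for _ in range(n_students)]
true_b = [-1.0, -0.5, 0.0, 0.5, 1.0]
resp = [[1 if random.random() < p_correct(t, b) else 0 for b in true_b]
        for t in true_theta]

# Alternate: update abilities with difficulties fixed, then vice versa.
theta = [0.0] * n_students
b = [0.0] * n_items
lr = 0.5
for _ in range(200):
    for i in range(n_students):  # ability step (log-likelihood gradient)
        g = sum(resp[i][j] - p_correct(theta[i], b[j]) for j in range(n_items))
        theta[i] += lr * g / n_items
    for j in range(n_items):     # difficulty step
        g = sum(p_correct(theta[i], b[j]) - resp[i][j] for i in range(n_students))
        b[j] += lr * g / n_students
    mean_b = sum(b) / n_items    # anchor the scale: mean difficulty = 0
    b = [x - mean_b for x in b]

print("estimated difficulties:", [round(x, 2) for x in b])
```

The mean-difficulty anchor is needed because the model only identifies the gap theta − b: shifting every ability and difficulty by the same constant leaves all probabilities unchanged.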
THANKS TO IRT FOR COMING TO THE RESCUE…
 IRT works on the same model described earlier
 For dichotomous data like ASSET's, one can apply one of 3 different models:

                    Difficulty   Discrimination   Guessing
1-parameter model   Yes          No               No
2-parameter model   Yes          Yes              No
3-parameter model   Yes          Yes              Yes
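The three rows of the table can be written as response-probability functions. The parameter names (theta for ability; b, a, c for the item) follow common IRT notation rather than anything on the slides:

```python
import math

# Probability that a student of ability `theta` answers an item correctly,
# under each of the three dichotomous IRT models in the table above.
# b = difficulty, a = discrimination, c = guessing (pseudo-chance) level.

def p_1pl(theta, b):
    """1-parameter (Rasch) model: difficulty only."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def p_2pl(theta, b, a):
    """2-parameter model: difficulty + discrimination."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def p_3pl(theta, b, a, c):
    """3-parameter model: difficulty + discrimination + guessing floor."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# At theta == b the 1PL and 2PL give exactly 0.5; the 3PL gives more,
# because even a weak student can guess the answer.
print(p_1pl(0.0, 0.0), p_2pl(0.0, 0.0, 1.5), p_3pl(0.0, 0.0, 1.5, 0.25))
```

Plotting any of these functions against theta gives the item characteristic curve of the next slide: difficulty b shifts the curve left or right, discrimination a controls its steepness, and guessing c lifts its lower asymptote.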
ITEM CHARACTERISTIC CURVES
ESTIMATING STUDENT’S ABILITY LEVEL
A hypothetical case

Item                           1      2      3      4      5      6      7
Hypothetical item difficulty   0.12   0.23   0.34   0.47   0.59   0.65   0.78
Response of Student X          1      1      1      1      0      0      0

The estimated ability of the student is about 0.47.
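The 0.47 follows a simple rule implied by the table: with items sorted by difficulty, take the difficulty of the hardest item the student answered correctly (the point where the responses flip from 1 to 0). A minimal sketch of that rule:

```python
# Rough rule implied by the slide: with items ordered by difficulty, take
# the ability to be the difficulty of the hardest item answered correctly.
difficulties = [0.12, 0.23, 0.34, 0.47, 0.59, 0.65, 0.78]
responses    = [1, 1, 1, 1, 0, 0, 0]  # Student X's answers (1 = correct)

def rough_ability(difficulties, responses):
    correct = [b for b, r in zip(difficulties, responses) if r == 1]
    return max(correct) if correct else min(difficulties)

print(rough_ability(difficulties, responses))  # -> 0.47
```

A real IRT scoring engine would instead choose the ability that maximizes the likelihood of the whole response pattern under the chosen model, which also handles messier patterns where correct and incorrect answers are interleaved.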
USES OF IRT
 ‘Learning’ for a test developer: the test should comprise items of different difficulties and discriminating powers
 Scoring with greater precision than that based on
the classical theory (traditional scoring)
 Percentile Ranks based on ability scores

 Item Banking

 Computer-based Adaptive Testing


THANKS!
Interested? Come join the club “IRT Explorers”
