c5

Student: ___________________________________________________________________________

1. The meaning of reliability in the psychometric sense differs from the meaning of
reliability in the "every day" use of that word in that
A. reliability in the "every day sense" is usually "a good thing."
B. reliability in the psychometric sense is usually "a good thing."
C. reliability in the psychometric sense has greater implications.
D. None of these

2. Which is TRUE about reliability in the psychometric sense?


A. reliability is an all-or-none measurement
B. a test may be reliable in one context and unreliable in another
C. a reliability coefficient may not be derived for personality tests
D. alternate forms reliability may not be derived for personality tests

3. In classical test theory, an observed score on an ability test is presumed to represent
the testtaker's
A. true score.
B. true score less the variance.
C. true score combined with extraneous factors.
D. true score and error.

4. In an illustrative scenario described in Chapter 5 of your text, a group of 12th grade
"whiz kids" in math, newly arrived in the United States from China, perform poorly on a
test of 12th grade math. According to the text, what probably accounted for this?
A. lower standards in China as compared to the US for measuring math ability
B. higher standards in the US as compared to China for earning high grades
C. the ability of the Chinese students to read what was required in English
D. the reliability of the instrument used to test 12th grade math skills

5. Which is TRUE of measurement error?


A. Like error in general, measurement error may be random or systematic.
B. Unlike error in general, measurement error may be random or systematic.
C. Measurement error is always random.
D. Measurement error is always systematic.
6. This variety of error has also been referred to as "noise." It is
A. systematic error.
B. random error.
C. measurement error.
D. background error.

7. A Wall Street Securities firm that is actually located on Wall Street is testing a group of
candidates for their aptitude in finance and business. As the testing begins, an
unexpected "Occupy Wall Street" sit-in takes place. From a psychometric perspective in
the context of this testing, the sit-in is viewed as
A. systematic error.
B. random error.
C. test administration error.
D. background error.

8. A test entails behavioral observation and rating of front desk clerks to determine
whether or not they greet guests with a smile. Which type of error is this test most
susceptible to?
A. test administration error
B. test construction error
C. examiner-related error
D. polling error

9. Error in the reporting of spousal abuse may result from


A. one partner simply forgetting all of the details of the abuse.
B. one partner misunderstanding the instructions for reporting.
C. one partner being ashamed to report the abuse.
D. All of these

10. Stanley (1971) wrote that in classical test theory, a so-called "true score" is "not the
ultimate fact in the book of the recording angel." By this, Stanley meant that
A. it would be imprudent to trust in Divine influence when estimating variance.
B. the amount of test variance that is true relative to error may never be known.
C. it is near impossible to separate fact from fiction with regard to "true scores."
D. All of these
11. The term test heterogeneity BEST refers to the extent to which test items measure
A. different factors.
B. the same factor.
C. a unifactorial trait.
D. a nonhomogeneous trait.

12. The more homogeneous a test is, the


A. less inter-item consistency it can be expected to have.
B. more utility the test has for measuring multifaceted variables.
C. more inter-item consistency it can be expected to have.
D. None of these

13. Which would NOT be useful in estimating a test's inter-item consistency?


A. Cronbach's alpha
B. the Kuder-Richardson formulas
C. the average proportional distance
D. a coefficient of equivalence

14. Cronbach's alpha is to similarity of scores on test items as average proportional
distance is to
A. difference in scores on test items
B. inter-item consistency
C. test-retest reliability
D. parallel forms reliability

15. One of the problems associated with classical test theory has to do with
A. the notion that there is a "true score" on a test has great intuitive appeal.
B. the fact that CTT assumptions are often characterized as "weak."
C. its assumptions concerning the equivalence of all items on a test.
D. its assumptions allow for its application in most situations.

16. Which of the following is NOT an alternative to classical test theory cited in your
text?
A. generalizability theory
B. representational theory
C. domain sampling theory
D. latent trait theory
17. Item response theory is to latent trait theory as observer reliability is to
A. generalizability theory.
B. domain sampling theory.
C. odd-even reliability.
D. inter-scorer reliability.

18. The multiple-choice test items on this examination are all examples of
A. dichotomous test items.
B. latent trait test items.
C. polytomous test items.
D. None of these

19. A confidence interval is a range or band of test scores that


A. has proven test-retest reliability.
B. is calculated using the standard error of the difference.
C. is likely to contain the true score.
D. None of these

20. The standard error of measurement is


A. used to infer how far an observed score is from the true score.
B. also known as the standard error of a score.
C. used in the context of classical test theory.
D. All of these

21. Reliability, in a broad statistical sense, is synonymous with


A. consistently good.
B. consistently bad.
C. consistency.
D. validity.

22. A reliability coefficient is


A. an index.
B. a proportion of the total variance attributed to true variance.
C. unaffected by a systematic source of error.
D. All of these
23. Which of the following is true of systematic error?
A. It significantly lowers the reliability of a measure.
B. It insignificantly lowers the reliability of a measure.
C. It increases the reliability of a measure.
D. It has no effect on the reliability of a measure.

24. As the degree of reliability increases, the proportion of


A. total variance attributed to true variance decreases.
B. total variance attributed to true variance increases.
C. total variance attributed to error variance increases.
D. None of these

25. Why might ability test scores among testtakers most typically vary?
A. because of the true ability of the testtaker
B. because of irrelevant, unwanted influences
C. All of the above
D. None of the above

26. A source of error variance may take the form of


A. item sampling.
B. testtakers' reactions to environment-related variables such as room temperature and
lighting.
C. testtaker variables such as amount of sleep the night before a test, amount of anxiety,
or drug effects.
D. All of the above

27. Computer-scorable items have tended to eliminate error variance due to


A. item sampling.
B. scorer differences.
C. content sampling.
D. testtakers' reactions to environmental variables.

28. Which type of reliability estimate is obtained by correlating pairs of scores from the
same person (or people) on two different administrations of the same test?
A. a parallel-forms estimate
B. a split-half estimate
C. a test-retest estimate
D. an au-pair estimate
29. Which type of reliability estimate would be appropriate only when evaluating the
reliability of a test that measures a trait that is presumed to be relatively stable over time?
A. parallel-forms
B. alternate-forms
C. test-retest
D. split-half

30. An estimate of test-retest reliability is often referred to as a coefficient of stability
when the time interval between the test and retest is more than
A. 30 days.
B. 60 days.
C. 3 months.
D. 6 months.

31. Which of the following might lead to a decrease in test-retest reliability?


A. the passage of time between the two administrations of the test.
B. coaching designed to increase test scores between the two administrations of the test.
C. practice with similar test materials between the two administrations of the test.
D. All of these

32. Which of the following is TRUE for estimates of alternate- and parallel-forms
reliability?
A. Two test administrations with the same group are required.
B. Test scores may be affected by factors such as motivation, fatigue, or intervening
events like practice, learning, or therapy.
C. Item sampling is a source of error variance.
D. All of these

33. Which of the following is TRUE for parallel forms of a test?


A. The means of the observed scores are equal for the two forms.
B. The variances of the estimated scores are equal for the two forms.
C. The means and variances of the observed scores are equal for the two forms.
D. The means and variances of the estimated scores are equal for the two forms.
34. Which source of error variance affects parallel- or alternate-form reliability estimates
but does not affect test-retest estimates?
A. fatigue
B. learning
C. practice
D. item sampling

35. Which of the following types of reliability estimates is the most expensive due to the
costs involved in test development?
A. test-retest
B. parallel-form
C. internal-consistency
D. Spearman's rho

36. What term refers to the degree of correlation between all the items on a scale?
A. inter-item homogeneity
B. inter-item consistency
C. inter-item heterogeneity
D. parallel-form reliability

37. Test-retest estimates of reliability are referred to as measures of ________, and split-
half reliability estimates are referred to as measures of ________.
A. true scores; error scores
B. internal consistency; stability
C. interscorer reliability; consistency
D. stability; internal consistency

38. Which of the following is usually minimized when using split-half estimates of
reliability as compared with test-retest or parallel/alternate-form estimates of reliability?
A. time and expense
B. reliability and validity
C. reliability only
D. time spent in scoring and interpretation

39. Which of the following factors may influence a split-half reliability estimate?
A. fatigue
B. anxiety
C. item difficulty
D. All of these
40. Internal-consistency estimates of reliability are inappropriate for
A. reading achievement tests.
B. scholastic aptitude/intelligence tests.
C. word processing tests based on speed.
D. tests purporting to measure a single personality trait.

41. The Spearman-Brown formula is used for:


A. estimating the reliability of a whole test from the reliability of one half of the test.
B. determining how many additional items are needed to increase reliability up to a
certain level.
C. determining how many items can be eliminated without reducing reliability below a
predetermined level.
D. All of these
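
For reference, the Spearman-Brown prophecy formula that items like this one turn on predicts the reliability of a test whose length is changed by some factor, given the reliability of the existing test. A minimal Python sketch, with illustrative numbers that are not drawn from any item:

```python
def spearman_brown(r_original: float, length_factor: float) -> float:
    """Predicted reliability when test length is multiplied by length_factor.

    r_original: reliability of the existing test (e.g., a split-half correlation).
    length_factor: new length / old length (2.0 = doubling, 0.5 = halving).
    """
    return (length_factor * r_original) / (1 + (length_factor - 1) * r_original)

# Estimating whole-test reliability from a split-half correlation of .70:
print(spearman_brown(0.70, 2.0))   # ~0.82

# Estimating the effect of cutting a test to half its length:
print(spearman_brown(0.90, 0.5))   # ~0.82
```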

42. For a heterogeneous test, measures of internal-consistency reliability will tend to be
________ compared with other methods of estimating reliability.
A. higher
B. lower
C. very similar or higher
D. more robust

43. Typically, adding items to a test will have what effect on the test's reliability?
A. Reliability will decrease.
B. Reliability will increase.
C. Reliability will stay the same.
D. Reliability will first increase and then decrease.

44. Error variance for measures of inter-item consistency comes from


A. fatigue.
B. motivation.
C. a testtaker practice effect.
D. heterogeneity of the content.

45. If items from a test are measuring the same trait, estimates of reliability yielded from
split-half methods will typically be ________ as compared to estimates from KR-20.
A. higher
B. lower
C. similar
D. approximately the same
46. Which of the following is NOT an acceptable way to divide a test when using the split-
half reliability method?
A. Randomly assign items to each half of the test.
B. Assign odd-numbered items to one half and even-numbered items to the other half of
the test.
C. Assign the first-half of the items to one half of the test and the second half of the
items to the other half of the test.
D. Assign easy items to one half of the test and difficult items to the other half of the
test.

47. If items on a test are measuring very different traits, estimates of reliability yielded
from split-half methods will typically be ________ as compared with estimates from KR-
20.
A. higher
B. lower
C. similar
D. approximately the same

48. KR-20 is the statistic of choice for tests with which types of items?
A. multiple-choice
B. true-false
C. All of these
D. None of these
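
The Kuder-Richardson formula 20 referenced in items 48 through 53 applies to dichotomously scored (0/1) items and is a special case of coefficient alpha. A minimal sketch, using a made-up response matrix:

```python
import numpy as np

def kr20(item_scores: np.ndarray) -> float:
    """KR-20 for dichotomously scored items (0/1), a special case of alpha."""
    k = item_scores.shape[1]                       # number of items
    p = item_scores.mean(axis=0)                   # proportion passing each item
    q = 1 - p
    total_variance = item_scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - (p * q).sum() / total_variance)

# Toy 0/1 data: 6 testtakers x 5 items (invented for illustration).
responses = np.array([
    [1, 1, 1, 0, 1],
    [1, 0, 1, 0, 0],
    [1, 1, 1, 1, 1],
    [0, 0, 1, 0, 0],
    [1, 1, 0, 1, 1],
    [0, 0, 0, 0, 0],
])
print(round(kr20(responses), 3))
```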

49. The KR-21 reliability estimate was developed


A. to yield greater consistency in reliability coefficients.
B. to facilitate computation by hand.
C. for use with less homogeneous items.
D. because Kuder wanted to "one-up" Richardson's 20.

50. Which is NOT an assumption that should be met in order to use KR-21?
A. Items should be dichotomous.
B. Items should be of equal difficulty.
C. Items should be homogeneous.
D. Items should be scorable by computer.
51. Which of the following is generally the preferred statistic for obtaining a measure of
internal-consistency reliability?
A. KR-20
B. KR-21
C. Kendall's Tau
D. coefficient alpha

52. Coefficient alpha is appropriate to use with all of the following test formats EXCEPT
A. multiple-choice.
B. true-false.
C. short-answer for which partial credit is awarded.
D. essay exam with no partial credit awarded.

53. The "20" and "21" in KR-20 and KR-21 represent


A. numbers held constant in the denominator.
B. numbers held constant in the numerator.
C. the order in which the formulas were created.
D. the age of Fred Kuder's son and nephew at the time the formulas were developed.

54. Coefficient alpha is an expression of


A. the mean of split-half correlations between odd- and even-numbered items.
B. the mean of split-half correlations between first- and second-half items.
C. the mean of all possible split-half correlations.
D. the mean of the best or "alpha" level split-half correlations.
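
The variance-based formula usually used to compute coefficient alpha can make the idea behind this item concrete; the toy data below are invented for illustration:

```python
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Coefficient alpha for a (respondents x items) score matrix."""
    k = item_scores.shape[1]                          # number of items
    item_variances = item_scores.var(axis=0, ddof=1)  # variance of each item
    total_variance = item_scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Toy data: 5 respondents x 4 items (invented for illustration).
scores = np.array([
    [4, 5, 4, 5],
    [2, 3, 2, 2],
    [3, 3, 4, 3],
    [5, 4, 5, 5],
    [1, 2, 1, 2],
])
print(round(cronbach_alpha(scores), 3))
```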

55. A coefficient alpha over .9 may indicate that


A. the items in the test are too dissimilar.
B. the test is not reliable.
C. the items in the test are redundant.
D. the test is biased against low-ability individuals.

56. Which of the following is TRUE about coefficient alpha?


A. Kuder thought it to be the single best measure of reliability.
B. It was first conceived by Alfalfa Alpha.
C. It is a characteristic of a particular set of scores, not of the test itself.
D. None of these
57. A synonym for interscorer reliability is
A. interjudge reliability
B. observer reliability
C. interrater reliability
D. All of these

58. Which BEST conveys the meaning of an inter-scorer reliability estimate of .90?
A. Ninety percent of the scores obtained are reliable.
B. Ninety percent of the variance in the scores assigned by the scorers was attributed to
true differences and 10% to error.
C. Ten percent of the variance in the scores assigned by the scorers was attributed to
true differences and 90% to error.
D. Ten percent of the test's items are in need of revision according to the majority of the
test's users.

59. When more than two scorers are used to determine inter-scorer reliability, the
statistic of choice is
A. Pearson r.
B. Spearman's rho.
C. KR-20.
D. coefficient alpha.

60. For determining the reliability of tests scored using nominal scales of measurement,
the statistic of choice is
A. Kendall's Tau.
B. the Kappa statistic.
C. KR-20.
D. coefficient alpha.

61. If a test is homogeneous


A. it is functionally uniform throughout.
B. it will likely yield a high internal-consistency reliability estimate compared with a test-
retest reliability estimate.
C. it would be reasonable to expect a high degree of internal consistency.
D. All of these
62. Which type(s) of reliability estimates would be most appropriate for a measure of
heart rate?
A. test-retest
B. alternate-form
C. parallel form
D. internist consistency

63. If a time limit is long enough to allow test-takers to attempt all items, and if some
items are so difficult that no test-taker is able to obtain a perfect score, then the test is
referred to as a ________ test.
A. speed
B. power
C. reliable
D. valid

64. Typically, speed tests


A. contain items of a uniform difficulty level.
B. are completed by fewer than 1% of all test-takers.
C. have low validity coefficients.
D. yield high rates of false positives.

65. Which type(s) of reliability estimates would be appropriate for a speed test?
A. test-retest
B. alternate-form
C. split-half from two independent testing sessions
D. All of these

66. Which of the following would result in the LEAST appropriate estimate of reliability for
a speed test?
A. test-retest
B. alternate-form
C. split-half from a single administration of the test
D. split-half from two independent testing sessions
67. A Kuder-Richardson (KR) or split-half estimate of reliability for a speed test would
provide an estimate that is
A. spuriously low.
B. spuriously high.
C. insignificant.
D. equal to a test-retest method.

68. A measure of clerical speed is obtained by a test that has respondents alphabetize
index cards. The manual for this test cites a split-half reliability coefficient for a single
administration of the test of .95. What might you conclude?
A. The test is highly reliable.
B. The published reliability estimate is spuriously low and would have been higher had
another estimate been used.
C. The split-half estimate should not have been used in this instance.
D. Clerical speed is too vague a construct to measure.

69. The Spearman-Brown formula can be used for which types of tests?
A. speed and multiple-choice
B. true-false and multiple-choice
C. speed, true-false, and multiple-choice
D. trade school and driving tests

70. An estimate of the reliability of a speed test is a measure of


A. the stability of the test.
B. the consistency of the response speed.
C. the homogeneity of the test items.
D. All of these

71. Use of the Spearman-Brown formula would be INAPPROPRIATE to


A. estimate the effect on reliability of shortening a test.
B. determine the number of items needed in a test to obtain the desired level of
reliability.
C. estimate the internal consistency of a speed test.
D. All of these
72. Interpretations of criterion-referenced tests are typically made with respect to
A. the total number of items the examinee responded to.
B. the material that the examinee evidenced mastery of.
C. a comparison of the examinee's performance with that of others who took the test.
D. a formula that takes into account the total number of items for which no response was
scorable.

73. Traditional measures of reliability are inappropriate for criterion-referenced tests
because variability
A. is maximized with criterion-referenced tests.
B. is minimized with criterion-referenced tests.
C. is variable with criterion-referenced tests.
D. cannot be determined with criterion-referenced tests.

74. If traditional measures of reliability are applied to a criterion-referenced test, the
reliability estimate will likely be
A. spuriously low.
B. spuriously high.
C. exactly zero.
D. None of these

75. The fact that the length of a test influences the size of the reliability coefficient is
based on which theory of measurement?
A. classical test theory (CTT)
B. generalizability theory
C. domain sampling theory
D. item response theory (IRT)

76. Which estimate of reliability is most consistent with the domain sampling theory?
A. test-retest
B. alternate-form
C. internal-consistency
D. interscorer
77. Classical reliability theory estimates the portion of a test score that is attributed to
________, and domain sampling theory estimates ________.
A. specific sources of variation; error
B. error; specific sources of variation
C. the skills being measured; variation
D. the skills being measured; content knowledge

78. Item response theory (IRT) focuses on the


A. circumstances that inspired the development of the test.
B. test administration variables.
C. individual items of a test.
D. "how and why" of the Interborough Rapid Transit line

79. Generalizability theory focuses on which of the following?


A. the circumstances under which a test was developed
B. the circumstances under which a test is administered
C. the circumstances under which a test is interpreted
D. All of these

80. The standard deviation of a theoretically normal distribution of test scores obtained
by one person on equivalent tests is
A. the standard error of the difference between means.
B. the standard error of measurement.
C. the standard deviation of the reliability coefficient.
D. the variance.

81. Which of the following is NOT a part of the formula for the standard error of
measurement for a particular test?
A. the validity of the test
B. the reliability of the test
C. the standard deviation of the group of test scores
D. Both b and c
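
In classical test theory, the standard error of measurement is computed from the standard deviation of the test scores and the test's reliability coefficient. A one-function sketch with illustrative values:

```python
import math

def standard_error_of_measurement(sd: float, reliability: float) -> float:
    """SEM = SD * sqrt(1 - r_xx), the classical test theory formula."""
    return sd * math.sqrt(1 - reliability)

# Example values (illustrative, not drawn from any item):
print(standard_error_of_measurement(sd=15, reliability=0.91))  # ~4.5
```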

82. "Sixty-eight percent of the scores for a particular test fall between 58 and 61" is a
statement regarding
A. the utility of a test.
B. the reliability of a test.
C. the validity of a test.
D. None of these
83. The standard error of measurement of a particular test of anxiety is 8. A student
earns a score of 60. What is the confidence interval for this test score at the 95% level?
A. 52-68
B. 40-68
C. 44-76
D. 36-84
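
The arithmetic behind confidence-interval items such as this one follows the common rule of thumb that about 95% of observations fall within roughly two standard errors of measurement of the observed score (±1.96 SEM, often rounded to ±2 SEM):

```python
score, sem = 60, 8

# Approximate 95% confidence interval: observed score +/- 2 SEM.
lower, upper = score - 2 * sem, score + 2 * sem
print(lower, upper)                              # 44 76

# The same interval with the exact z value of 1.96:
print(score - 1.96 * sem, score + 1.96 * sem)    # 44.32 75.68
```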

84. As the confidence interval increases, the range of scores into which a single test
score falls is likely to
A. decrease.
B. increase.
C. remain the same.
D. alternately decrease and increase.

85. As the reliability of a test increases, the standard error of measurement


A. increases.
B. decreases.
C. remains the same.
D. alternately increases, then decreases.

86. If the standard deviations of two tests are identical but the reliability is lower for Test
A as compared to Test B, then the standard error of measurement will be ________ for Test
A as compared with Test B.
A. higher
B. lower
C. the same
D. None of these

87. Which statistic can help the test user determine how large a difference must exist for
scores yielded from two different tests to be considered statistically different?
A. standard error of measurement between two scores
B. standard error of the difference between two scores
C. observed variance minus error variance
D. standard error of the difference between two means
88. The standard error of the difference between two scores is larger than the standard
error of measurement for either score because the standard error of the difference
between the two scores is affected by
A. the true score variance of each score.
B. the standard deviation of each score summed.
C. the measurement error inherent in both scores.
D. All of these
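
The point of this item can be verified numerically: the standard error of the difference combines the measurement error of both scores (the usual formula is the square root of the sum of the squared SEMs), so it exceeds either SEM alone. The values below are illustrative:

```python
import math

def standard_error_of_difference(sem_1: float, sem_2: float) -> float:
    """Combines the measurement error inherent in both scores."""
    return math.sqrt(sem_1 ** 2 + sem_2 ** 2)

sem_math, sem_reading = 3.0, 4.0
print(standard_error_of_difference(sem_math, sem_reading))  # 5.0, larger than either SEM
```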

89. A guidance counselor wishes to determine if a student scored higher on a
mathematics test than on a reading test. What statistic(s) would be MOST useful?
A. the standard error of measurement for each test score
B. the standard error of the difference between two scores
C. the raw score on each test as well as the mean of each distribution
D. the mean of each distribution and index of test difficulty for each test.

90. The ________ in generalizability theory is analogous to the reliability coefficient in
classical test theory.
A. universe coefficient
B. coefficient of generalizability
C. universe score
D. Roulin coefficient

91. According to Cronbach et al.'s generalizability theory, "facets" include


A. the number of test items.
B. the amount of training the examiners received.
C. the purpose of administering the test.
D. All of these

92. The universe score in Cronbach et al.'s generalizability theory is analogous to the
________ in classical test theory.
A. coefficient of generalizability
B. true score
C. standard deviation
D. internal-consistency estimate
93. In classical test theory, there exists only one true score. In Cronbach's generalizability
theory, how many "true scores" exist?
A. one
B. as many as the number of times the test is administered to the same individual
C. many, depending on the number of different universes
D. None of these

94. The term coefficient of generalizability refers to


A. how generalizable scores obtained in one situation are to other situations.
B. test-retest reliability estimates with respect to different "universes."
C. split-half reliability estimates with respect to different "universes."
D. None of these

95. If a device to measure blood pressure consistently overestimated every assessee's
actual blood pressure by 10 units, which of the following would be TRUE of the reliability
of this measuring device as the years passed?
A. It would increase.
B. It would decrease.
C. It would not be affected.
D. It would alternately decrease and increase.

96. In general, which of the following is TRUE of the relationship between the magnitude
of the test-retest reliability estimate and the length of the interval between test
administrations?
A. The longer the interval, the lower the reliability coefficient.
B. The longer the interval, the higher the reliability coefficient.
C. The magnitude of the reliability coefficient is typically not affected by the length of the
interval between test administrations.
D. The magnitude of the reliability coefficient is always affected by the length of the
interval between test administrations, but one cannot predict how it is affected.

97. What is the difference between alternate forms and parallel forms of a test?
A. Alternate forms do not necessarily yield test scores with equal means and variances.
B. Alternate forms are designed to be equivalent only with regard to level of difficulty.
C. Alternate forms are different only with respect to how they are administered.
D. There are no differences between alternate and parallel forms of a test.
98. Coefficient alpha is the reliability estimate of choice for tests
A. with dichotomous items and binary scoring.
B. with homogeneous items.
C. that can be scored along a continuum of values.
D. that contain heterogeneous item content and binary scoring.

99. In which type(s) of reliability estimates would test construction NOT be a significant
source of error variance?
A. test-retest
B. alternate-form
C. split-half
D. Kuder-Richardson

100. If the variance of either variable is restricted by the sampling procedures used, then
the magnitude of the coefficient of reliability will be
A. lowered.
B. raised.
C. unaffected.
D. affected only in tests with a true-false format.

101. For criterion-referenced tests, which of the following reliability estimates is
recommended?
A. test-retest reliability estimates
B. alternate-form reliability estimates
C. split-half reliability estimates
D. None of these

102. Which of the following is TRUE of domain sampling theory?


A. It supports the existence of a "true score" when measuring psychological constructs.
B. It can be used to argue against the existence of a "true score" when measuring
psychological constructs.
C. Neither Kuder nor Richardson found it to have any applied value.
D. All of these
103. If a student received a score of 50 on a math test with a standard error of
measurement of 3, which of the following statements would be TRUE of the "true
score"?
A. In 68% of the cases, the "true score" would be expected to be between 44 and 56.
B. In 68% of the cases, the "true score" would be expected to be between 47 and 53.
C. In 95% of the cases, the "true score" would be expected to be between 47 and 53.
D. In 95% of the cases, the "true score" would be expected to be between 44 and 56.
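
The same ±1 SEM (about 68%) and ±2 SEM (about 95%) convention gives the bands for this item:

```python
score, sem = 50, 3

print(score - sem, score + sem)            # 47 53  (~68% band)
print(score - 2 * sem, score + 2 * sem)    # 44 56  (~95% band)
```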

104. A psychologist administers a test and the test-taker scores a 52. If the cut-off score
for eligibility for a particular program is 50, what index will best help the psychologist
determine how much confidence to place in the test-taker's obtained score of 52?
A. the standard error of difference
B. the standard error of measurement
C. measures of central tendency: mean, median, or mode
D. measures of variability such as the standard deviation

105. Which of the following is TRUE of both the standard error of measurement and the
standard error of difference?
A. Both provide confidence levels.
B. Both can be used to compute confidence intervals for short answer tests.
C. Both can be used to compare performance between two different tests.
D. Both are abbreviated by SEM.

106. Test-retest reliability estimates of breathalyzers have


A. a margin of error of approximately one-hundredth of a percentage point.
B. a margin of error of one percentage point.
C. a margin of error so high that they must be deemed unreliable.
D. not been done in the State of Alaska.

107. A police officer administers a breathalyzer test to a suspected drunk driver, does
not put on his glasses to read the meter, and as a result, mistakenly records the blood
alcohol level. This is the kind of mistake that is BEST associated with which type of
reliability estimate?
A. test-retest
B. interscorer
C. internal-consistency
D. situational
108. Which of the following statements is TRUE regarding the differences between a
power test and a speed test?
A. Power tests involve physical strength; speed tests do not.
B. In a power test, the testtaker has time to complete all items; in a speed test, a specific
time limit is imposed.
C. In a power test, a broad range of knowledge is assessed; in a speed test, a narrower
range of knowledge is assessed.
D. Both b and c

109. The index that allows a test user to compare two people's scores on a specific test
to determine if the true scores are likely to be different is
A. the standard error of the mean.
B. the standard error of the difference.
C. the standard deviation.
D. the correlation coefficient.

110. Which type of reliability is directly affected by the heterogeneity of a test?


A. test-retest
B. interrater
C. internal-consistency
D. alternate-forms or parallel-forms

111. Generalizability theory is most closely related to


A. developing norms.
B. item analysis.
C. test reliability.
D. the way things are "in general."

112. A test of attention span has a reliability coefficient of .84. The average score on the
test is 10, with a standard deviation of 5. Lawrence received a score of 64 on the test.
We can be 95% sure that Lawrence's "true" attention span score falls between
A. 63 and 65.
B. 62 and 66.
C. 60 and 68.
D. 54 and 74.
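
This item combines the two earlier computations: derive the SEM from the standard deviation and the reliability coefficient, then build an approximate 95% band around the observed score.

```python
import math

sd, reliability, observed = 5, 0.84, 64

sem = sd * math.sqrt(1 - reliability)                 # 5 * sqrt(0.16) = 2.0
lower, upper = observed - 2 * sem, observed + 2 * sem
print(sem, lower, upper)                              # 2.0 60.0 68.0
```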
113. By definition, estimates of reliability can range from _______ to _______.
A. -3.00; +3.00
B. 1; 10
C. 0; 1
D. -1; +1

114. Using estimates of internal consistency, which of the following tests would likely
yield the highest reliability coefficients?
A. a test of general intelligence
B. a test of achievement in a basic skill such as mathematics
C. a test of reading comprehension
D. a test of vocational interest

115. What type of reliability estimate is appropriate for use in a comparison of "Form A"
to "Form B" of a picture vocabulary test?
A. test-retest
B. alternate-forms
C. inter-rater
D. internal-consistency

116. What index of reliability would you use to compare two evaluators' assessments of
a group of job applicants?
A. KR-20
B. coefficient alpha
C. the Kappa statistic
D. the Spearman-Brown correction

117. Which of the following is TRUE of the standard error of measurement?


A. The larger the standard error of measurement, the better.
B. The standard error of measurement is inversely related to the standard deviation (that
is, when one goes up, the other goes down).
C. The standard error of measurement is inversely related to reliability (that is, when one
goes up, the other goes down).
D. A low standard error of measurement is indicative of low validity.
118. What type of reliability estimate is obtained by correlating pairs of scores from the
same person on two different administrations of the same test?
A. parallel-forms
B. split-half
C. interrater
D. test-retest

119. A test containing 100 items is revised by deleting 20 items. What might be
expected to happen to the magnitude of the reliability estimate for that test?
A. It will be expected to increase.
B. It will be expected to decrease.
C. It will be expected to stay the same.
D. It cannot be determined based on the information provided.

120. In the formula X = T + E, T refers to


A. the true score.
B. the time factor.
C. the average test score.
D. test-retest reliability.

121. The greater the proportion of the total variance attributed to true variance, the
more ____________ the test.
A. scientific
B. variable
C. reliable
D. expensive

122. A score earned by a testtaker on a psychological test may BEST be viewed as equal
to
A. the raw score plus the observed score.
B. the error score.
C. the true score.
D. the true score plus error.

123. Which is NOT a possible source of error variance?


A. test administration
B. test scoring
C. test interpretation
D. All are possible sources of error variance.
124. A goal of a test developer is to
A. maximize error variance.
B. minimize true variance.
C. maximize true variance.
D. minimize stress for testtakers.

125. Which of the following is TRUE about systematic and unsystematic error in the
assessment of physical and psychological abuse?
A. Few sources of unsystematic error exist, due to the nature of what is being assessed.
B. Few sources of systematic error exist.
C. Gender represents a source of systematic error.
D. None of these

126. In general, approximately what percentage of scores would be expected to fall
within two standard deviations above or below the standard error of measurement of the
"true score" on a test?
A. 85%
B. 90%
C. 95%
D. 99%

127. In Chapter 5 of your textbook, you read of the "writing surface on a school desk
riddled with heart carvings, the legacy of past years' students who felt compelled to
express their eternal devotion to someone now long forgotten." This imagery was
designed to graphically illustrate sources of error variance during test
A. development.
B. administration.
C. scoring.
D. interpretation.

128. In the Chapter 5 Meet an Assessment Professional feature, Dr. Bryce B. Reeve noted
the necessity for very brief questionnaires in his work due to the fact that many of his
clients were:
A. young children with very short attention spans.
B. seriously ill and would find taking tests burdensome.
C. visually impaired and unable to focus for an extended period of time.
D. All of these
129. In the Chapter 5 Meet an Assessment Professional feature, Dr. Bryce B. Reeve cited
an experience in which he learned that the "Excellent" response category on a test was
best translated as meaning ______ in Chinese?
A. "super bad"
B. "superlative"
C. "bad"
D. None of these

130. The items of a personality test are characterized as heterogeneous in nature. This
tells us that the test measures
A. aspects of family history.
B. ability to relate to the opposite sex.
C. unconscious motivation.
D. more than one trait.

131. "Coefficient alpha 20" is a reference to


A. a variant of the Kuder-Richardson KR-20 formula.
B. the 20th in a series of formulas developed by Cronbach.
C. a 20th-century revision of a Galtonian expression.
D. None of these

132. With regard to a value found for coefficient alpha,


A. "bigger is always better."
B. "smaller is always better."
C. "negative is best."
D. None of these

133. Most reliability coefficients, regardless of the specific type of reliability they are
measuring, range in value from:
A. -1 to +1
B. 0 to 100
C. 0 to 1.
D. negative infinity to positive infinity
134. All indices of reliability provide an index that is a characteristic of a particular
A. test.
B. group of test scores.
C. trait.
D. approach to measurement.

135. The precise amount of error inherent in the reliability estimate published in a test
manual will vary with
A. the purchase price of the test (the more expensive, the less the error).
B. the sample of test-takers from which the data were drawn.
C. the population of test users actually using a published test.
D. All of these

136. Different types of reliability coefficients


A. all reflect the same sources of error variance.
B. may reflect different sources of error variance.
C. never reflect the same source of error variance.
D. reflect on error variance during leisure activities.

137. A test of infant development contains three scales: (1) Cognitive Ability, (2) Motor
Development, and (3) Behavior Rating. Because these three scales are designed to
measure different characteristics (that is, they are not homogeneous), it would be
inappropriate to combine the three scales in calculating estimates of the test's
A. alternate-forms reliability.
B. internal-consistency reliability.
C. test-retest reliability.
D. interrater reliability.

138. The fact that young children develop rapidly and in "growth spurts" is a problem
when it comes to the estimation of which type of reliability for an infant development
scale?
A. internal-consistency reliability
B. alternate-forms reliability
C. test-retest reliability
D. interrater reliability
139. In the language of psychological testing and assessment, reliability BEST refers to
A. how well a test measures what it was originally designed to measure.
B. the complete lack of any systematic error.
C. the proportion of total variance that can be attributed to true variance.
D. whether or not a test publisher consistently publishes high quality instruments.

140. Because of the unique problems in assessing very young children, which of the
following would be the BEST practice when attempting to estimate the reliability of tests
designed to measure cognitive and motor abilities in infants?
A. Use relatively short test-retest intervals.
B. Use relatively long test-retest intervals.
C. Do not use the test-retest method for estimating reliability of the test.
D. Use only inter-scorer reliability estimates.

141. If the variance of either variable in a correlational analysis is restricted by the
sampling procedure used, then the resulting correlation coefficient tends to be
A. higher.
B. lower.
C. unaffected.
D. unstable.

142. If the variance of either variable in a correlational analysis is inflated by the
sampling procedure used, then the resulting correlation coefficient tends to be
A. higher.
B. lower.
C. unaffected.
D. unstable.

143. The directions for scoring a particular motor ability test instruct the examiner to
"Give credit if the child holds his hands open most of the time." Because what
constitutes "most of the time" is not specifically defined, directions such as these could
result in lowered reliability estimates for
A. test-retest reliability.
B. alternate-form reliability.
C. inter-rater reliability.
D. parallel forms reliability.
144. A vice president (VP) of personnel employs a "Corporate Screening Test" in the
hiring process. For future testing purposes, the VP maintains records of scores achieved
by __________ as opposed to ___________ in order to avoid restriction of range effects.
A. job applicants; hired employees
B. hired employees; job applicants
C. successful employees; hired employees
D. successful employees; other corporate officers

145. In the Everyday Psychometrics for Chapter 5, psychometric aspects of the
Breathalyzer were discussed. In one challenge to the test-retest reliability of this device,
the court found
A. the margin of error was not taken into account by the legislature when it originally
wrote the law.
B. the margin of error was taken into account by the legislature when it originally wrote
the law.
C. the police officer had erred by administering the test at headquarters and not the site
of the infraction.
D. None of these

146. The Everyday Psychometrics for Chapter 5 dealt with psychometric aspects of the
Breathalyzer. We learned that in the state of New Jersey, it is legal and proper to
administer a Breathalyzer test to a drunk driver
A. only at the arrest scene.
B. at police headquarters.
C. even if the officer is intoxicated.
D. while a suspect is sucking on a breath mint.

147. In the Chapter 5 Everyday Psychometrics on psychometric aspects of the
Breathalyzer, we read of a police officer who intentionally recorded incorrect readings
from the instrument. Such an event would most appropriately be recalled in the technical
manual for this instrument under the heading
A. "Test-Retest Reliability."
B. "Internal Consistency Reliability."
C. "Inter-Scorer Reliability."
D. "How to Fake Findings with the Breathalyzer 900a."
148. According to generalizability theory, a variable such as "number of items in the
test" is a description of one
A. facet of the universe.
B. true element of the dominion.
C. dominion in the domain.
D. None of these

149. Advocates of generalizability theory prefer the use of which of the following terms
as an alternative to the use of the term "reliability"?
A. generalizability
B. universality
C. regularity
D. dependability

150. Which is the BEST example of a dynamic characteristic?


A. the stress level of a trapeze flyer at a circus
B. the intelligence of a college student during Spring Break
C. the anti-authority attitude of an inmate serving a life term
D. None of these

151. As used in Chapter 5 of your text, the term inflation of variance is synonymous with
A. restriction of variance.
B. restriction of range.
C. inflation of range.
D. None of these

152. In the term latent trait theory, "latent" is a synonym for


A. invisible.
B. state.
C. undeveloped.
D. dormant.

153. IRT is a term used to refer to


A. a model that has many parameters.
B. a parameter that has many models.
C. a family of models for data analysis.
D. a dysfunctional family of models.
154. A polytomous test item is a test item
A. that has multiple tomous's attached
B. that has varied tomous's attached
C. that has multiple and varied tomous's attached
D. None of these

155. The Rasch model


A. was developed by a Danish mathematician named Rasch.
B. is an IRT model with specific assumptions about the underlying distribution.
C. was devised from generalizability theory.
D. Both a and b

156. Why isn't IRT used more by "mom-and-pop" test developers such as classroom
teachers?
A. most classroom teachers were trained in generalizability theory
B. IRT has no application in classroom tests
C. applying IRT requires statistical sophistication
D. All of these

157. Who are the primary users of IRT?


A. classroom teachers
B. commercial test producers
C. instructors at universities in Departments of Education
D. Georg Rasch's twin sisters

158. Which of the following is NOT an assumption attendant to the use of IRT?
A. the assumption of unidimensionality
B. the assumption of heteroskedasticity
C. the assumption of local independence
D. the assumption of monotonicity

159. In IRT, the single, continuous latent construct being measured is often symbolized
by the Greek letter:
A. alpha.
B. beta.
C. psi.
D. theta.
160. If some of the items on a test were locally dependent, it would be reasonable to
expect that:
A. all test items were designed for members of a specific culture.
B. all test items were measuring the exact same thing.
C. some test items were measuring something different than other test items.
D. some test items were structured in a dichotomous format and others were structured
in a polytomous format.

161. "The probability of endorsing or selecting an item response indicative of higher


levels of theta should increase as the underlying level of theta increases." This quote
sums up the meaning of the IRT assumption of
A. unidimensionality.
B. heteroskedasticity.
C. local independence.
D. monotonicity.

162. The probabilistic relationship between a testtaker's response to a test item and that
testtaker's level on the latent construct being measured by the test is expressed in
graphic form by
A. an item characteristic curve.
B. an item response curve.
C. an item trace line.
D. All of these

163. It's an IRT tool that is useful in helping test users to better understand the range
over theta that an item is most useful for. It's called
A. an item response curve.
B. an information function.
C. an item trace line.
D. None of these

164. An IRT tool useful in helping test users abbreviate a "long form" of a test to a "short
form" is the
A. item response curve.
B. information function.
C. item trace line.
D. None of these
165. In an IRT information curve, the term information magnitude may BEST be
understood as referring to
A. theta.
B. the range of the underlying construct.
C. precision.
D. difficulty.

166. Test items with little discriminative ability prompt the test developer to consider the
possibility that
A. the content of the item does not match the construct measured by the other items in
the scale.
B. the item is poorly worded and needs to be rewritten.
C. the item is too complex for the educational level of the population.
D. All of these

167. According to your textbook, a test of depression that contains an abundance of
items that probe the respondents' outward expression of emotion may be inappropriate
for use with
A. test-takers who have made suicidal gestures.
B. inpatients who have been committed involuntarily.
C. veterans diagnosed with PTSD.
D. Ethiopians.

168. The fact that cultural factors may be operating to weaken an item's ability to
discriminate between groups is evident from:
A. Lord's treatise entitled Item Response Theory.
B. an item characteristic curve.
C. an information function.
D. Georg Rasch's unauthorized biography, You Can Never Be Too Rich or Too "Rasch."

169. A difference between the use of coefficient alpha and IRT for evaluating a test's
reliability is that with IRT, it is possible to learn
A. how the precision of a scale varies depending on the level of the construct being
measured.
B. how the level of the construct being measured varies depending on variations in the
item characteristic curve.
C. the precise numerical value for the test's total interitem consistency.
D. All of these
c5 Key

1. The meaning of reliability in the psychometric sense differs from the meaning of
reliability in the "every day" use of that word in that
A. reliability in the "every day sense" is usually "a good thing."
B. reliability in the psychometric sense is usually "a good thing."
C. reliability in the psychometric sense has greater implications.
D. None of these

Cohen - Chapter 05 #1

2. Which is TRUE about reliability in the psychometric sense?


A. reliability is an all-or-none measurement
B. a test may be reliable in one context and unreliable in another
C. a reliability coefficient may not be derived for personality tests
D. alternate forms reliability may not be derived for personality tests

Cohen - Chapter 05 #2

3. In classical test theory, an observed score on an ability test is presumed to represent


the testtaker's
A. true score.
B. true score less the variance.
C. true score combined with extraneous factors.
D. the testtaker's true score and error.

Cohen - Chapter 05 #3

4. In an illustrative scenario described in Chapter 5 of your text, a group of 12th grade


"whiz kids" in math, newly arrived to the United States from China, perform poorly on a
test of 12th grade math. According to the text, what probably accounted for this?
A. lower standards in China as compared to the US for measuring math ability
B. higher standards in the US as compared to China for earning high grades
C. the ability of the Chinese students to read what was required in English
D. the reliability of the instrument used to test 12th grade math skills

Cohen - Chapter 05 #4
5. Which is TRUE of measurement error?
A. Like error in general, measurement error may be random or systematic.
B. Unlike error in general, measurement error may be random or systematic.
C. Measurement error is always random.
D. Measurement error is always systematic.

Cohen - Chapter 05 #5

6. This variety of error has also been referred to as "noise." It is


A. systematic error.
B. random error.
C. measurement error.
D. background error.

Cohen - Chapter 05 #6

7. A Wall Street Securities firm that is actually located on Wall Street is testing a group of
candidates for their aptitude in finance and business. As the testing begins, an
unexpected "Occupy Wall Street" sit-in takes place. From a psychometric perspective in
the context of this testing, the sit-in is viewed as
A. systematic error.
B. random error.
C. test administration error.
D. background error.

Cohen - Chapter 05 #7

8. A test entails behavioral observation and rating of front desk clerks to determine
whether or not they greet guests with a smile. Which type of error is this test most
susceptible to?
A. test administration error
B. test construction error
C. examiner-related error
D. polling error

Cohen - Chapter 05 #8
9. Error in the reporting of spousal abuse may result from
A. one partner simply forgets all of the details of the abuse.
B. one partner misunderstands the instructions for reporting.
C. one partner is ashamed to report the abuse.
D. All of these

Cohen - Chapter 05 #9

10. Stanley (1971) wrote that in classical test theory, a so-called "true score" is "not the
ultimate fact in the book of the recording angel." By this, Stanley meant that
A. it would be imprudent to trust in Divine influence when estimating variance.
B. the amount of test variance that is true relative to error may never be known.
C. it is near impossible to separate fact from fiction with regard to "true scores."
D. All of these

Cohen - Chapter 05 #10

11. The term test heterogeneity BEST refers to the extent to which test items measure
A. different factors.
B. the same factor.
C. a unifactorial trait.
D. a nonhomogeneous trait.

Cohen - Chapter 05 #11

12. The more homogeneous a test is, the


A. less inter-item consistency it can be expected to have.
B. more utility the test has for measuring multifaceted variables.
C. more inter-item consistency it can be expected to have.
D. None of these

Cohen - Chapter 05 #12


13. Which would NOT be useful in estimating a test's inter-item consistency?
A. Cronbach's alpha
B. the Kuder-Richardson formulas
C. the average proportional distance
D. a coefficient of equivalence

Cohen - Chapter 05 #13

14. Cronbach's alpha is to similarity of scores on test items as average proportional


distance is to
A. difference in scores on test items
B. inter-item consistency
C. test-retest reliability
D. parallel forms reliability

Cohen - Chapter 05 #14

15. One of the problems associated with classical test theory has to do with
A. the notion that there is a "true score" on a test has great intuitive appeal.
B. the fact that CTT assumptions are often characterized as "weak."
C. its assumptions concerning the equivalence of all items on a test.
D. its assumptions allow for its application in most situations.

Cohen - Chapter 05 #15

16. Which of the following is NOT an alternative to classical test theory cited in your
text?
A. generalizability theory
B. representational theory
C. domain sampling theory
D. latent trait theory

Cohen - Chapter 05 #16


17. Item response theory is to latenttrait theory as observer reliability is to
A. generalizability theory.
B. domain sampling theory.
C. odd-even reliability.
D. inter-scorer reliability.

Cohen - Chapter 05 #17

18. The multiple-choice test items on this examination are all examples of
A. dichotomous test items.
B. latent trait test items.
C. polytomous test items.
D. None of these

Cohen - Chapter 05 #18

19. A confidence interval is a range or band of test scores that


A. has proven test-retest reliability.
B. is calculated using the standard error of the difference.
C. is likely to contain the true score.
D. None of these

Cohen - Chapter 05 #19

20. The standard error of measurement is


A. used to infer how far an observed score is from the true score.
B. also known as the standard error of a score.
C. is used in the context of classical test theory.
D. All of these

Cohen - Chapter 05 #20

21. Reliability, in a broad statistical sense, is synonymous with


A. consistently good.
B. consistently bad.
C. consistency.
D. validity.

Cohen - Chapter 05 #21


22. A reliability coefficient is
A. an index.
B. a proportion of the total variance attributed to true variance.
C. unaffected by a systematic source of error.
D. All of these

Cohen - Chapter 05 #22

23. Which of the following is true of systematic error?


A. It significantly lowers the reliability of a measure.
B. It insignificantly lowers the reliability of a measure.
C. It increases the reliability of a measure.
D. It has no effect on the reliability of a measure.

Cohen - Chapter 05 #23

24. As the degree of reliability increases, the proportion of


A. total variance attributed to true variance decreases.
B. total variance attributed to true variance increases.
C. total variance attributed to error variance increases.
D. None of these

Cohen - Chapter 05 #24

25. Why might ability test scores among testtakers most typically vary?
A. because of the true ability of the testtaker
B. because of irrelevant, unwanted influences
C. All of the above
D. None of the above

Cohen - Chapter 05 #25


26. A source of error variance may take the form of
A. item sampling.
B. testtakers' reactions to environment-related variables such as room temperature and
lighting.
C. testtaker variables such as amount of sleep the night before a test, amount of anxiety,
or drug effects.
D. All of the above

Cohen - Chapter 05 #26

27. Computer-scorable items have tended to eliminate error variance due to


A. item sampling.
B. scorer differences.
C. content sampling.
D. testtakers' reactions to environmental variables.

Cohen - Chapter 05 #27

28. Which type of reliability estimate is obtained by correlating pairs of scores from the
same person (or people) on two different administrations of the same test?
A. a parallel-forms estimate
B. a split-half estimate
C. a test-retest estimate
D. an au-pair estimate

Cohen - Chapter 05 #28

29. Which type of reliability estimate would be appropriate only when evaluating the
reliability of a test that measures a trait that is presumed to be relatively stable time?
A. parallel-forms
B. alternate-forms
C. test-retest
D. split-half

Cohen - Chapter 05 #29


30. An estimate of test-retest reliability is often referred to as a coefficient of stability
when the time interval between the test and retest is more than
A. 30 days.
B. 60 days.
C. 3 months.
D. 6 months.

Cohen - Chapter 05 #30

31. Which of the following might lead to a decrease in test-retest reliability?


A. the passage of time between the two administrations of the test.
B. coaching designed to increase test scores between the two administrations of the test.
C. practice with similar test materials between the two administrations of the test.
D. All of these

Cohen - Chapter 05 #31

32. Which of the following is TRUE for estimates of alternate- and parallel-forms
reliability?
A. Two test administrations with the same group are required.
B. Test scores may be affected by factors such as motivation, fatigue, or intervening
events like practice, learning, or therapy.
C. Item sampling is a source of error variance.
D. All of these

Cohen - Chapter 05 #32

33. Which of the following is TRUE for parallel forms of a test?


A. The means of the observed scores are equal for the two forms.
B. The variances of the estimated scores are equal for the two forms.
C. The means and variances of the observed scores are equal for the two forms.
D. The means and variances of the estimated scores are equal for the two forms.

Cohen - Chapter 05 #33


34. Which source of error variance affects parallel- or alternate-form reliability estimates
but does not affect test-retest estimates?
A. fatigue
B. learning
C. practice
D. item sampling

Cohen - Chapter 05 #34

35. Which of the following types of reliability estimates is the most expensive due to the
costs involved in test development?
A. test-retest
B. parallel-form
C. internal-consistency
D. Spearman's rho

Cohen - Chapter 05 #35

36. What term refers to the degree of correlation between all the items on a scale?
A. inter-item homogeneity
B. inter-item consistency
C. inter-item heterogeneity
D. parallel-form reliability

Cohen - Chapter 05 #36

37. Test-retest estimates of reliability are referred to as measures of ________, and split-
half reliability estimates are referred to as measures of ________.
A. true scores; error scores
B. internal consistency; stability
C. interscorer reliability; consistency
D. stability; internal consistency

Cohen - Chapter 05 #37


38. Which of the following is usually minimized when using split-half estimates of
reliability as compared with test-retest or parallel/alternate-form estimates of reliability?
A. time and expense
B. reliability and validity
C. reliability only
D. time spent in scoring and interpretation

Cohen - Chapter 05 #38

39. Which of the following factors may influence a split-half reliability estimate?
A. fatigue
B. anxiety
C. item difficulty
D. All of these

Cohen - Chapter 05 #39

40. Internal-consistency estimates of reliability are inappropriate for


A. reading achievement tests.
B. scholastic aptitude/intelligence tests.
C. word processing tests based on speed.
D. tests purporting to measure a single personality trait.

Cohen - Chapter 05 #40

41. The Spearman-Brown formula is used for:


A. estimating the reliability of a whole test from the reliability of one of its halves.
B. determining how many additional items are needed to increase reliability up to a
certain level.
C. determining how many items can be eliminated without reducing reliability below a
predetermined level.
D. All of these

Cohen - Chapter 05 #41
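
For item 41: the Spearman-Brown formula predicts the reliability of a test whose length
is changed by a factor of n, given the reliability of the existing test. The Python sketch
below is a minimal illustration of that arithmetic; the function name and sample values
are hypothetical, not taken from any test manual.

    def spearman_brown(r_existing, n):
        # Predicted reliability when the number of items is multiplied by n
        # (n = 2 steps a half-test correlation up to a full-test estimate).
        return (n * r_existing) / (1 + (n - 1) * r_existing)

    # A half-test correlation of .70 corresponds to a full-test estimate of about .82.
    print(round(spearman_brown(0.70, 2), 2))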


42. For a heterogeneous test, measures of internal-consistency reliability will tend to be
________ compared with other methods of estimating reliability.
A. higher
B. lower
C. very similar or higher
D. more robust

Cohen - Chapter 05 #42

43. Typically, adding items to a test will have what effect on the test's reliability?
A. Reliability will decrease.
B. Reliability will increase.
C. Reliability will stay the same.
D. Reliability will first increase and then decrease.

Cohen - Chapter 05 #43

44. Error variance for measures of inter-item consistency comes from


A. fatigue.
B. motivation.
C. a testtaker practice effect.
D. heterogeneity of the content.

Cohen - Chapter 05 #44

45. If items from a test are measuring the same trait, estimates of reliability yielded from
split-half methods will typically be ________ as compared to estimates from KR-20.
A. higher
B. lower
C. similar
D. approximately the same

Cohen - Chapter 05 #45


46. Which of the following is NOT an acceptable way to divide a test when using the split-
half reliability method?
A. Randomly assign items to each half of the test.
B. Assign odd-numbered items to one half and even-numbered items to the other half of
the test.
C. Assign the first-half of the items to one half of the test and the second half of the
items to the other half of the test.
D. Assign easy items to one half of the test and difficult items to the other half of the
test.

Cohen - Chapter 05 #46

47. If items on a test are measuring very different traits, estimates of reliability yielded
from split-half methods will typically be ________ as compared with estimates from KR-
20.
A. higher
B. lower
C. similar
D. approximately the same

Cohen - Chapter 05 #47

48. KR-20 is the statistic of choice for tests with which types of items?
A. multiple-choice
B. true-false
C. All of these
D. None of these

Cohen - Chapter 05 #48
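
For items 48-50: KR-20 applies to tests scored dichotomously (0/1) and is usually
computed as (k / (k - 1)) * (1 - sum of p*q / variance of total scores). The Python sketch
below is a minimal, hypothetical illustration; the data set and variable names are invented.

    def kr20(scores):
        # scores: one list of 0/1 item scores per testtaker
        k = len(scores[0])                               # number of items
        n = len(scores)                                  # number of testtakers
        totals = [sum(person) for person in scores]
        mean_total = sum(totals) / n
        var_total = sum((t - mean_total) ** 2 for t in totals) / n
        pq = 0.0
        for i in range(k):
            p = sum(person[i] for person in scores) / n  # proportion passing item i
            pq += p * (1 - p)
        return (k / (k - 1)) * (1 - pq / var_total)

    # Four testtakers, three dichotomous items; this toy data set yields 0.75.
    print(round(kr20([[1, 1, 0], [1, 0, 0], [1, 1, 1], [0, 0, 0]]), 2))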

49. The KR-21 reliability estimate was developed


A. to yield greater consistency in reliability coefficients.
B. to facilitate computation by hand.
C. for use with less homogeneous items.
D. because Kuder wanted to "one-up" Richardson's 20.

Cohen - Chapter 05 #49


50. Which is NOT an assumption that should be met in order to use KR-21?
A. Items should be dichotomous.
B. Items should be of equal difficulty.
C. Items should be homogeneous.
D. Items should be scorable by computer.

Cohen - Chapter 05 #50

51. Which of the following is generally the preferred statistic for obtaining a measure of
internal-consistency reliability?
A. KR-20
B. KR-21
C. Kendall's Tau
D. coefficient alpha

Cohen - Chapter 05 #51

52. Coefficient alpha is appropriate to use with all of the following test formats EXCEPT
A. multiple-choice.
B. true-false.
C. short-answer for which partial credit is awarded.
D. essay exam with no partial credit awarded.

Cohen - Chapter 05 #52

53. The "20" and "21" in KR-20 and KR-21 represent


A. numbers held constant in the denominator.
B. numbers held constant in the numerator.
C. the order in which the formulas were created.
D. the age of Fred Kuder's son and nephew at the time the formulas were developed.

Cohen - Chapter 05 #53


54. Coefficient alpha is an expression of
A. the mean of split-half correlations between odd- and even-numbered items.
B. the mean of split-half correlations between first- and second-half items.
C. the mean of all possible split-half correlations.
D. the mean of the best or "alpha" level split-half correlations.

Cohen - Chapter 05 #54
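
For items 51-56: coefficient alpha is commonly computed as (k / (k - 1)) * (1 - sum of
item variances / variance of total scores), which accommodates items scored along a
continuum; it can also be described as the mean of all possible (Rulon-type) split-half
coefficients. The Python sketch below is a hypothetical illustration only; the rating data
are invented.

    def coefficient_alpha(scores):
        # scores: one list of item scores per testtaker (any scoring scale)
        k = len(scores[0])                    # number of items
        n = len(scores)                       # number of testtakers

        def variance(values):
            m = sum(values) / n
            return sum((v - m) ** 2 for v in values) / n

        item_vars = [variance([person[i] for person in scores]) for i in range(k)]
        total_var = variance([sum(person) for person in scores])
        return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

    # Invented 0-4 ratings for three items and four respondents; alpha is about .95
    # here because the toy items covary strongly.
    print(round(coefficient_alpha([[4, 3, 4], [2, 2, 1], [3, 3, 3], [1, 0, 1]]), 2))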

55. A coefficient alpha over .9 may indicate that


A. the items in the test are too dissimilar.
B. the test is not reliable.
C. the items in the test are redundant.
D. the test is biased against low-ability individuals.

Cohen - Chapter 05 #55

56. Which of the following is TRUE about coefficient alpha?


A. Kuder thought it to be the single best measure of reliability.
B. It was first conceived by Alfalfa Alpha.
C. It is a characteristic of a particular set of scores, not of the test itself.
D. None of these

Cohen - Chapter 05 #56

57. A synonym for interscorer reliability is


A. interjudge reliability
B. observer reliability
C. interrater reliability
D. All of these

Cohen - Chapter 05 #57


58. Which BEST conveys the meaning of an inter-scorer reliability estimate of .90?
A. Ninety percent of the scores obtained are reliable.
B. Ninety percent of the variance in the scores assigned by the scorers was attributed to
true differences and 10% to error.
C. Ten percent of the variance in the scores assigned by the scorers was attributed to
true differences and 90% to error.
D. Ten percent of the test's items are in need of revision according to the majority of the
test's users.

Cohen - Chapter 05 #58

59. When more than two scorers are used to determine inter-scorer reliability, the
statistic of choice is
A. Pearson r.
B. Spearman's rho.
C. KR-20.
D. coefficient alpha.

Cohen - Chapter 05 #59

60. For determining the reliability of tests scored using nominal scales of measurement,
the statistic of choice is
A. Kendall's Tau.
B. the Kappa statistic.
C. KR-20.
D. coefficient alpha.

Cohen - Chapter 05 #60

61. If a test is homogeneous


A. it is functionally uniform throughout.
B. it will likely yield a high internal-consistency reliability estimate compared with a test-
retest reliability estimate.
C. it would be reasonable to expect a high degree of internal consistency.
D. All of these

Cohen - Chapter 05 #61


62. Which type(s) of reliability estimates would be most appropriate for a measure of
heart rate?
A. test-retest
B. alternate-form
C. parallel form
D. internist consistency

Cohen - Chapter 05 #62

63. If a time limit is long enough to allow test-takers to attempt all items, and if some
items are so difficult that no test-taker is able to obtain a perfect score, then the test is
referred to as a ________ test.
A. speed
B. power
C. reliable
D. valid

Cohen - Chapter 05 #63

64. Typically, speed tests


A. contain items of a uniform difficulty level.
B. are completed by fewer than 1% of all test-takers.
C. have low validity coefficients.
D. yield high rates of false positives.

Cohen - Chapter 05 #64

65. Which type(s) of reliability estimates would be appropriate for a speed test?
A. test-retest
B. alternate-form
C. split-half from two independent testing sessions
D. All of these

Cohen - Chapter 05 #65


66. Which of the following would result in the LEAST appropriate estimate of reliability for
a speed test?
A. test-retest
B. alternate-form
C. split-half from a single administration of the test
D. split-half from two independent testing sessions

Cohen - Chapter 05 #66

67. A Kuder-Richardson (KR) or split-half estimate of reliability for a speed test would
provide an estimate that is
A. spuriously low.
B. spuriously high.
C. insignificant.
D. equal to a test-retest method.

Cohen - Chapter 05 #67

68. A measure of clerical speed is obtained by a test that has respondents alphabetize
index cards. The manual for this test cites a split-half reliability coefficient for a single
administration of the test of .95. What might you conclude?
A. The test is highly reliable.
B. The published reliability estimate is spuriously low and would have been higher had
another estimate been used.
C. The split-half estimate should not have been used in this instance.
D. Clerical speed is too vague a construct to measure.

Cohen - Chapter 05 #68

69. The Spearman-Brown formula can be used for which types of tests?
A. speed and multiple-choice
B. true-false and multiple-choice
C. speed, true-false, and multiple-choice
D. trade school and driving tests

Cohen - Chapter 05 #69


70. An estimate of the reliability of a speed test is a measure of
A. the stability of the test.
B. the consistency of the response speed.
C. the homogeneity of the test items.
D. All of these

Cohen - Chapter 05 #70

71. Use of the Spearman-Brown formula would be INAPPROPRIATE to


A. estimate the effect on reliability of shortening a test.
B. determine the number of items needed in a test to obtain the desired level of
reliability.
C. estimate the internal consistency of a speed test.
D. All of these

Cohen - Chapter 05 #71

72. Interpretations of criterion-referenced tests are typically made with respect to


A. the total number of items the examinee responded to.
B. the material that the examinee evidenced mastery of.
C. a comparison of the examinee's performance with that of others who took the test.
D. a formula that takes into account the total number of items for which no response was
scorable.

Cohen - Chapter 05 #72

73. Traditional measures of reliability are inappropriate for criterion-referenced tests
because variability
A. is maximized with criterion-referenced tests.
B. is minimized with criterion-referenced tests.
C. is variable with criterion-referenced tests.
D. cannot be determined with criterion-referenced tests

Cohen - Chapter 05 #73


74. If traditional measures of reliability are applied to a criterion-referenced test, the
reliability estimate will likely be
A. spuriously low.
B. spuriously high.
C. exactly zero.
D. None of these

Cohen - Chapter 05 #74

75. The fact that the length of a test influences the size of the reliability coefficient is
based on which theory of measurement?
A. classical test theory (CTT)
B. generalizability theory
C. domain sampling theory
D. item response theory (IRT)

Cohen - Chapter 05 #75

76. Which estimate of reliability is most consistent with the domain sampling theory?
A. test-retest
B. alternate-form
C. internal-consistency
D. interscorer

Cohen - Chapter 05 #76

77. Classical reliability theory estimates the portion of a test score that is attributed to
________, and domain sampling theory estimates ________.
A. specific sources of variation; error
B. error; specific sources of variation
C. the skills being measured; variation
D. the skills being measured; content knowledge

Cohen - Chapter 05 #77


78. Item response theory (IRT) focuses on the
A. circumstances that inspired the development of the test.
B. test administration variables.
C. individual items of a test.
D. "how and why" of the Interborough Rapid Transit line

Cohen - Chapter 05 #78

79. Generalizability theory focuses on which of the following?


A. the circumstances under which a test was developed
B. the circumstances under which a test is administered
C. the circumstances under which a test is interpreted
D. All of these

Cohen - Chapter 05 #79

80. The standard deviation of a theoretically normal distribution of test scores obtained
by one person on equivalent tests is
A. the standard error of the difference between means.
B. the standard error of measurement.
C. the standard deviation of the reliability coefficient.
D. the variance.

Cohen - Chapter 05 #80

81. Which of the following is NOT a part of the formula for the standard error of
measurement for a particular test?
A. the validity of the test
B. the reliability of the test
C. the standard deviation of the group of test scores
D. Both b and c

Cohen - Chapter 05 #81
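
For items 80-86: in classical test theory the standard error of measurement is typically
computed from the standard deviation of the obtained scores and the reliability
coefficient as SEM = SD * sqrt(1 - r); validity does not enter the formula. A minimal
Python sketch with hypothetical numbers:

    import math

    def standard_error_of_measurement(sd, reliability):
        # SEM shrinks as reliability rises and grows with the spread of scores.
        return sd * math.sqrt(1 - reliability)

    # With SD = 10 and reliability = .91, the SEM is about 3.
    print(round(standard_error_of_measurement(10, 0.91), 2))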


82. "Sixty-eight percent of the scores for a particular test fall between 58 and 61" is a
statement regarding
A. the utility of a test.
B. the reliability of a test.
C. the validity of a test.
D. None of these

Cohen - Chapter 05 #82

83. The standard error of measurement of a particular test of anxiety is 8. A student
earns a score of 60. What is the confidence interval for this test score at the 95% level?
A. 52-68
B. 40-68
C. 44-76
D. 36-84

Cohen - Chapter 05 #83
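
A worked check of item 83, using the common convention of treating the 95% confidence
interval as the obtained score plus or minus roughly two standard errors of measurement:

    score, sem = 60, 8
    print(score - 2 * sem, score + 2 * sem)   # 44 76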

84. As the confidence interval increases, the range of scores into which a single test
score falls is likely to
A. decrease.
B. increase.
C. remain the same.
D. alternately decrease and increase.

Cohen - Chapter 05 #84

85. As the reliability of a test increases, the standard error of measurement


A. increases.
B. decreases.
C. remains the same.
D. alternately increases, then decreases.

Cohen - Chapter 05 #85


86. If the standard deviations of two tests are identical but the reliability is lower for Test
A as compared to Test B, then the standard error of measurement will be ________ for Test
A as compared with Test B.
A. higher
B. lower
C. the same
D. None of these

Cohen - Chapter 05 #86

87. Which statistic can help the test user determine how large a difference must exist for
scores yielded from two different tests to be considered statistically different?
A. standard error of measurement between two scores
B. standard error of the difference between two scores
C. observed variance minus error variance
D. standard error of the difference between two means

Cohen - Chapter 05 #87

88. The standard error of the difference between two scores is larger than the standard
error of measurement for either score because the standard error of the difference
between the two scores is affected by
A. the true score variance of each score.
B. the standard deviation of each score summed.
C. the measurement error inherent in both scores.
D. All of these

Cohen - Chapter 05 #88
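
For items 87-89: because error from both measurements contributes to a difference
score, the standard error of the difference is commonly obtained as
SED = sqrt(SEM1^2 + SEM2^2) and is therefore larger than either SEM alone. A minimal
Python sketch with invented values:

    import math

    def standard_error_of_difference(sem_1, sem_2):
        # Combines the measurement error inherent in both scores.
        return math.sqrt(sem_1 ** 2 + sem_2 ** 2)

    # Two tests with SEMs of 3 and 4 give an SED of 5.0, larger than either SEM.
    print(standard_error_of_difference(3, 4))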

89. A guidance counselor wishes to determine if a student scored higher on a
mathematics test than on a reading test. What statistic(s) would be MOST useful?
A. the standard error of measurement for each test score
B. the standard error of the difference between two scores
C. the raw score on each test as well as the mean of each distribution
D. the mean of each distribution and index of test difficulty for each test.

Cohen - Chapter 05 #89


90. The ________ in generalizability theory is analogous to the reliability coefficient in
classical test theory.
A. universe coefficient
B. coefficient of generalizability
C. universe score
D. Roulin coefficient

Cohen - Chapter 05 #90

91. According to Cronbach et al.'s generalizability theory, "facets" include


A. the number of test items.
B. the amount of training the examiners received.
C. the purpose of administering the test.
D. All of these

Cohen - Chapter 05 #91

92. The universe score in Cronbach et al.'s generalizability theory is analogous to the
________ in classical test theory.
A. coefficient of generalizability
B. true score
C. standard deviation
D. internal-consistency estimate

Cohen - Chapter 05 #92

93. In classical test theory, there exists only one true score. In Cronbach et al.'s
generalizability theory, how many "true scores" exist?
A. one
B. as many as the number of times the test is administered to the same individual
C. many, depending on the number of different universes
D. None of these

Cohen - Chapter 05 #93


94. The term coefficient of generalizability refers to
A. how generalizable scores obtained in one situation are to other situations.
B. test-retest reliability estimates with respect to different "universes."
C. split-half reliability estimates with respect to different "universes."
D. None of these

Cohen - Chapter 05 #94

95. If a device to measure blood pressure consistently overestimated every assessee's
actual blood pressure by 10 units, which of the following would be TRUE of the reliability
of this measuring device as the years passed?
A. It would increase.
B. It would decrease.
C. It would not be affected.
D. It would alternately decrease and increase.

Cohen - Chapter 05 #95

96. In general, which of the following is TRUE of the relationship between the magnitude
of the test-retest reliability estimate and the length of the interval between test
administrations?
A. The longer the interval, the lower the reliability coefficient.
B. The longer the interval, the higher the reliability coefficient.
C. The magnitude of the reliability coefficient is typically not affected by the length of the
interval between test administrations.
D. The magnitude of the reliability coefficient is always affected by the length of the
interval between test administrations, but one cannot predict how it is affected.

Cohen - Chapter 05 #96

97. What is the difference between alternate forms and parallel forms of a test?
A. Alternate forms do not necessarily yield test scores with equal means and variances.
B. Alternate forms are designed to be equivalent only with regard to level of difficulty.
C. Alternate forms are different only with respect to how they are administered.
D. There are no differences between alternate and parallel forms of a test.

Cohen - Chapter 05 #97


98. Coefficient alpha is the reliability estimate of choice for tests
A. with dichotomous items and binary scoring.
B. with homogeneous items.
C. that can be scored along a continuum of values.
D. that contain heterogeneous item content and binary scoring.

Cohen - Chapter 05 #98

99. In which type(s) of reliability estimates would test construction NOT be a significant
source of error variance?
A. test-retest
B. alternate-form
C. split-half
D. Kuder-Richardson

Cohen - Chapter 05 #99

100. If the variance of either variable is restricted by the sampling procedures used, then
the magnitude of the coefficient of reliability will be
A. lowered.
B. raised.
C. unaffected.
D. affected only in tests with a true-false format.

Cohen - Chapter 05 #100

101. For criterion-referenced tests, which of the following reliability estimates is recommended?
A. test-retest reliability estimates
B. alternate-form reliability estimates
C. split-half reliability estimates
D. None of these

Cohen - Chapter 05 #101


102. Which of the following is TRUE of domain sampling theory?
A. It supports the existence of a "true score" when measuring psychological constructs.
B. It can be used to argue against the existence of a "true score" when measuring
psychological constructs.
C. Neither Kuder nor Richardson found it to have any applied value.
D. All of these

Cohen - Chapter 05 #102

103. If a student received a score of 50 on a math test with a standard error of
measurement of 3, which of the following statements would be TRUE of the "true
score"?
A. In 68% of the cases, the "true score" would be expected to be between 44 and 56.
B. In 68% of the cases, the "true score" would be expected to be between 47 and 53.
C. In 95% of the cases, the "true score" would be expected to be between 47 and 53.
D. In 95% of the cases, the "true score" would be expected to be between 44 and 56.

Cohen - Chapter 05 #103
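
A worked check of item 103: one standard error of measurement on either side of the
obtained score spans roughly the middle 68% of the distribution.

    score, sem = 50, 3
    print(score - sem, score + sem)   # 47 53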

104. A psychologist administers a test and the test-taker scores a 52. If the cut-off score
for eligibility for a particular program is 50, what index will best help the psychologist
determine how much confidence to place in the test-taker's obtained score of 52?
A. the standard error of difference
B. the standard error of measurement
C. measures of central tendency: mean, median, or mode
D. measures of variability such as the standard deviation

Cohen - Chapter 05 #104

105. Which of the following is TRUE of both the standard error of measurement and the
standard error of difference?
A. Both provide confidence levels.
B. Both can be used to compute confidence intervals for short answer tests.
C. Both can be used to compare performance between two different tests.
D. Both are abbreviated by SEM.

Cohen - Chapter 05 #105


106. Test-retest reliability estimates of breathalyzers have
A. a margin of error of approximately one-hundredth of a percentage point.
B. a margin of error of one percentage point.
C. a margin of error so high that they must be deemed unreliable.
D. not been done in the State of Alaska.

Cohen - Chapter 05 #106

107. A police officer administers a breathalyzer test to a suspected drunk driver, does
not put on his glasses to read the meter, and as a result, mistakenly records the blood
alcohol level. This is the kind of mistake that is BEST associated with which type of
reliability estimate?
A. test-retest
B. interscorer
C. internal-consistency
D. situational

Cohen - Chapter 05 #107

108. Which of the following statements is TRUE regarding the differences between a
power test and a speed test?
A. Power tests involve physical strength; speed tests do not.
B. In a power test, the testtaker has time to complete all items; in a speed test, a
specific time limit is imposed.
C. In a power test, a broad range of knowledge is assessed; in a speed test, a narrower
range of knowledge is assessed.
D. Both b and c

Cohen - Chapter 05 #108

109. The index that allows a test user to compare two people's scores on a specific test
to determine if the true scores are likely to be different is
A. the standard error of the mean.
B. the standard error of the difference.
C. the standard deviation.
D. the correlation coefficient.

Cohen - Chapter 05 #109


110. Which type of reliability is directly affected by the heterogeneity of a test?
A. test-retest
B. interrater
C. internal-consistency
D. alternate-forms or parallel-forms

Cohen - Chapter 05 #110

111. Generalizability theory is most closely related to


A. developing norms.
B. item analysis.
C. test reliability.
D. the way things are "in general."

Cohen - Chapter 05 #111

112. A test of attention span has a reliability coefficient of .84. The average score on the
test is 10, with a standard deviation of 5. Lawrence received a score of 64 on the test.
We can be 95% sure that Lawrence's "true" attention span score falls between
A. 63 and 65.
B. 62 and 66.
C. 60 and 68.
D. 54 and 74.

Cohen - Chapter 05 #112
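
A worked check of item 112, combining the formulas sketched earlier: derive the SEM
from the standard deviation and the reliability coefficient, then take roughly two SEMs
on either side of the obtained score for a 95% interval.

    import math

    sd, reliability, score = 5, 0.84, 64
    sem = sd * math.sqrt(1 - reliability)                   # about 2.0
    print(round(score - 2 * sem), round(score + 2 * sem))   # 60 68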

113. By definition, estimates of reliability can range from _______ to _______.


A. -3.00; +3.00
B. 1; 10
C. 0; 1
D. -1; +1

Cohen - Chapter 05 #113


114. Using estimates of internal consistency, which of the following tests would likely
yield the highest reliability coefficients?
A. a test of general intelligence
B. a test of achievement in a basic skill such as mathematics
C. a test of reading comprehension
D. a test of vocational interest

Cohen - Chapter 05 #114

115. What type of reliability estimate is appropriate for use in a comparison of "Form A"
to "Form B" of a picture vocabulary test?
A. test-retest
B. alternate-forms
C. inter-rater
D. internal-consistency

Cohen - Chapter 05 #115

116. What index of reliability would you use to compare two evaluators' assessments of
a group of job applicants?
A. KR-20
B. coefficient alpha
C. the Kappa statistic
D. the Spearman-Brown correction

Cohen - Chapter 05 #116

117. Which of the following is TRUE of the standard error of measurement?


A. The larger the standard error of measurement, the better.
B. The standard error of measurement is inversely related to the standard deviation (that
is, when one goes up, the other goes down).
C. The standard error of measurement is inversely related to reliability (that is, when one
goes up, the other goes down).
D. A low standard error of measurement is indicative of low validity.

Cohen - Chapter 05 #117


118. What type of reliability estimate is obtained by correlating pairs of scores from the
same person on two different administrations of the same test?
A. parallel-forms
B. split-half
C. interrater
D. test-retest

Cohen - Chapter 05 #118

119. A test containing 100 items is revised by deleting 20 items. What might be
expected to happen to the magnitude of the reliability estimate for that test?
A. It will be expected to increase.
B. It will be expected to decrease.
C. It will be expected to stay the same.
D. It cannot be determined based on the information provided.

Cohen - Chapter 05 #119

120. In the formula X = T + E, T refers to


A. the true score.
B. the time factor.
C. the average test score.
D. test-retest reliability.

Cohen - Chapter 05 #120

121. The greater the proportion of the total variance attributed to true variance, the
more ____________ the test.
A. scientific
B. variable
C. reliable
D. expensive

Cohen - Chapter 05 #121


122. A score earned by a testtaker on a psychological test may BEST be viewed as equal
to
A. the raw score plus the observed score.
B. the error score.
C. the true score.
D. the true score plus error.

Cohen - Chapter 05 #122

123. Which is NOT a possible source of error variance?


A. test administration
B. test scoring
C. test interpretation
D. All are possible sources of error variance.

Cohen - Chapter 05 #123

124. A goal of a test developer is to


A. maximize error variance.
B. minimize true variance.
C. maximize true variance.
D. minimize stress for testtakers.

Cohen - Chapter 05 #124

125. Which of the following is TRUE about systematic and unsystematic error in the
assessment of physical and psychological abuse?
A. Few sources of unsystematic error exist, due to the nature of what is being assessed.
B. Few sources of systematic error exist.
C. Gender represents a source of systematic error.
D. None of these

Cohen - Chapter 05 #125


126. In general, approximately what percentage of scores would be expected to fall
within two standard errors of measurement above or below the "true score" on a test?
A. 85%
B. 90%
C. 95%
D. 99%

Cohen - Chapter 05 #126

127. In Chapter 5 of your textbook, you read of the "writing surface on a school desk
riddled with heart carvings, the legacy of past years' students who felt compelled to
express their eternal devotion to someone now long forgotten." This imagery was
designed to graphically illustrate sources of error variance during test
A. development.
B. administration.
C. scoring.
D. interpretation.

Cohen - Chapter 05 #127

128. In the Chapter 5 Meet an Assessment Professional feature, Dr. Bryce B. Reeve noted
the necessity for very brief questionnaires in his work due to the fact that many of his
clients were:
A. young children with very short attention spans.
B. seriously ill and would find taking tests burdensome.
C. visually impaired and unable to focus for an extended period of time.
D. All of these

Cohen - Chapter 05 #128

129. In the Chapter 5 Meet an Assessment Professional feature, Dr. Bryce B. Reeve cited
an experience in which he learned that the "Excellent" response category on a test was
best translated as meaning ______ in Chinese?
A. "super bad"
B. "superlative"
C. "bad"
D. None of these

Cohen - Chapter 05 #129


130. The items of a personality test are characterized as heterogeneous in nature. This
tells us that the test measures
A. aspects of family history.
B. ability to relate to the opposite sex.
C. unconscious motivation.
D. more than one trait.

Cohen - Chapter 05 #130

131. "Coefficient alpha 20" is a reference to


A. a variant of the Kuder-Richardson KR-20 formula.
B. the 20th in a series of formulas developed by Cronbach.
C. a 20th-century revision of a Galtonian expression.
D. None of these

Cohen - Chapter 05 #131

132. With regard to a value found for coefficient alpha,


A. "bigger is always better."
B. "smaller is always better."
C. "negative is best."
D. None of these

Cohen - Chapter 05 #132

133. Most reliability coefficients, regardless of the specific type of reliability they are
measuring, range in value from:
A. -1 to +1
B. 0 to 100
C. 0 to 1.
D. negative infinity to positive infinity

Cohen - Chapter 05 #133


134. All indices of reliability provide an index that is a characteristic of a particular
A. test.
B. group of test scores.
C. trait.
D. approach to measurement.

Cohen - Chapter 05 #134

135. The precise amount of error inherent in the reliability estimate published in a test
manual will vary with
A. the purchase price of the test (the more expensive, the less the error).
B. the sample of test-takers from which the data were drawn.
C. the population of test users actually using a published test.
D. All of these

Cohen - Chapter 05 #135

136. Different types of reliability coefficients


A. all reflect the same sources of error variance.
B. may reflect different sources of error variance.
C. never reflect the same source of error variance.
D. reflect on error variance during leisure activities.

Cohen - Chapter 05 #136

137. A test of infant development contains three scales: (1) Cognitive Ability, (2) Motor
Development, and (3) Behavior Rating. Because these three scales are designed to
measure different characteristics (that is, they are not homogeneous), it would be
inappropriate to combine the three scales in calculating estimates of the test's
A. alternate-forms reliability.
B. internal-consistency reliability.
C. test-retest reliability.
D. interrater reliability.

Cohen - Chapter 05 #137


138. The fact that young children develop rapidly and in "growth spurts" is a problem
when it comes to the estimation of which type of reliability for an infant development
scale?
A. internal-consistency reliability
B. alternate-forms reliability
C. test-retest reliability
D. interrater reliability

Cohen - Chapter 05 #138

139. In the language of psychological testing and assessment, reliability BEST refers to
A. how well a test measures what it was originally designed to measure.
B. the complete lack of any systematic error.
C. the proportion of total variance that can be attributed to true variance.
D. whether or not a test publisher consistently publishes high quality instruments.

Cohen - Chapter 05 #139

140. Because of the unique problems in assessing very young children, which of the
following would be the BEST practice when attempting to estimate the reliability of tests
designed to measure cognitive and motor abilities in infants?
A. Use relatively short test-retest intervals.
B. Use relatively long test-retest intervals.
C. Do not use the test-retest method for estimating reliability of the test.
D. Use only inter-scorer reliability estimates.

Cohen - Chapter 05 #140

141. If the variance of either variable in a correlational analysis is restricted by the
sampling procedure used, then the resulting correlation coefficient tends to be
A. higher.
B. lower.
C. unaffected.
D. unstable.

Cohen - Chapter 05 #141


142. If the variance of either variable in a correlational analysis is inflated by the
sampling procedure used, then the resulting correlation coefficient tends to be
A. higher.
B. lower.
C. unaffected.
D. unstable.

Cohen - Chapter 05 #142

143. The directions for scoring a particular motor ability test instruct the examiner to
"Give credit if the child holds his hands open most of the time." Because what
constitutes "most of the time" is not specifically defined, directions such as these could
result in lowered reliability estimates for
A. test-retest reliability.
B. alternate-form reliability.
C. inter-rater reliability.
D. parallel forms reliability.

Cohen - Chapter 05 #143

144. A vice president (VP) of personnel employs a "Corporate Screening Test" in the
hiring process. For future testing purposes, the VP maintains records of scores achieved
by __________ as opposed to ___________ in order to avoid restriction of range effects.
A. job applicants; hired employees
B. hired employees; job applicants
C. successful employees; hired employees
D. successful employees; other corporate officers

Cohen - Chapter 05 #144

145. In the Everyday Psychometrics for Chapter 5, psychometric aspects of the
Breathalyzer were discussed. In one challenge to the test-retest reliability of this device,
the court found
A. the margin of error was not taken into account by the legislature when it originally
wrote the law.
B. the margin of error was taken into account by the legislature when it originally wrote
the law.
C. the police officer had erred by administering the test at headquarters and not the site
of the infraction.
D. None of these

Cohen - Chapter 05 #145


146. The Everyday Psychometrics for Chapter 5 dealt with psychometric aspects of the
Breathalyzer. We learned that in the state of New Jersey, it is legal and proper to
administer a Breathalyzer test to a drunk driver
A. only at the arrest scene.
B. at police headquarters.
C. even if the officer is intoxicated.
D. while a suspect is sucking on a breath mint.

Cohen - Chapter 05 #146

147. In the Chapter 5 Everyday Psychometrics on psychometric aspects of the
Breathalyzer, we read of a police officer who intentionally recorded incorrect readings
from the instrument. Such an event would most appropriately be recalled in the technical
manual for this instrument under the heading
A. "Test-Retest Reliability."
B. "Internal Consistency Reliability."
C. "Inter-Scorer Reliability."
D. "How to Fake Findings with the Breathalyzer 900a."

Cohen - Chapter 05 #147

148. According to generalizability theory, a variable such as "number of items in the
test" is a description of one
A. facet of the universe.
B. true element of the dominion.
C. dominion in the domain.
D. None of these

Cohen - Chapter 05 #148

149. Advocates of generalizability theory prefer the use of which of the following terms
as an alternative to the use of the term "reliability"?
A. generalizability
B. universality
C. regularity
D. dependability

Cohen - Chapter 05 #149


150. Which is the BEST example of a dynamic characteristic?
A. the stress level of a trapeze flyer at a circus
B. the intelligence of a college student during Spring Break
C. the anti-authority attitude of an inmate serving a life term
D. None of these

Cohen - Chapter 05 #150

151. As used in Chapter 5 of your text, the term inflation of variance is synonymous with
A. restriction of variance.
B. restriction of range.
C. inflation of range.
D. None of these

Cohen - Chapter 05 #151

152. In the term latent trait theory, "latent" is a synonym for


A. invisible.
B. state.
C. undeveloped.
D. dormant.

Cohen - Chapter 05 #152

153. IRT is a term used to refer to


A. a model that has many parameters.
B. a parameter that has many models.
C. a family of models for data analysis.
D. a dysfunctional family of models.

Cohen - Chapter 05 #153

154. A polytomous test item is a test item


A. that has multiple tomous's attached
B. that has varied tomous's attached
C. that has multiple and varied tomous's attached
D. None of these

Cohen - Chapter 05 #154


155. The Rasch model
A. was developed by a Danish mathematician named Rasch.
B. is an IRT model with specific assumptions about the underlying distribution.
C. was devised from generalizability theory.
D. Both a and b

Cohen - Chapter 05 #155

156. Why isn't IRT used more by "mom-and-pop" test developers such as classroom
teachers?
A. most classroom teachers were trained in generalizability theory
B. IRT has no application in classroom tests
C. applying IRT requires statistical sophistication
D. All of these

Cohen - Chapter 05 #156

157. Who are the primary users of IRT?


A. classroom teachers
B. commercial test producers
C. instructors at universities in Departments of Education
D. Georg Rasch's twin sisters

Cohen - Chapter 05 #157

158. Which of the following is NOT an assumption attendant to the use of IRT?
A. the assumption of unidimensionality
B. the assumption of heteroscedasticity
C. the assumption of local independence
D. the assumption of monotonicity

Cohen - Chapter 05 #158


159. In IRT, the single, continuous latent construct being measured is often symbolized
by the Greek letter:
A. alpha.
B. beta.
C. psi.
D. theta.

Cohen - Chapter 05 #159

160. If some of the items on a test were locally dependent, it would be reasonable to
expect that:
A. all test items were designed for members of a specific culture.
B. all test items were measuring the exact same thing.
C. some test items were measuring something different than other test items.
D. some test items were structured in a dichotomous format and others were structured
in a polytomous format.

Cohen - Chapter 05 #160

161. "The probability of endorsing or selecting an item response indicative of higher


levels of theta should increase as the underlying level of theta increases." This quote
sums up the meaning of the IRT assumption of
A. unidimensionality.
B. heteroscedasticity.
C. local independence.
D. monotonicity.

Cohen - Chapter 05 #161

162. The probabilistic relationship between a testtaker's response to a test item and that
testtaker's level on the latent construct being measured by the test is expressed in
graphic form by
A. an item characteristic curve.
B. an item response curve.
C. an item trace line.
D. All of these

Cohen - Chapter 05 #162


163. It's an IRT tool that helps test users better understand the range of theta over
which an item is most useful. It's called
A. an item response curve.
B. an information function.
C. an item trace line.
D. None of these

Cohen - Chapter 05 #163

164. An IRT tool useful in helping test users abbreviate a "long form" of a test to a "short
form" is the
A. item response curve.
B. information function.
C. item trace line.
D. None of these

Cohen - Chapter 05 #164

165. In an IRT information curve, the term information magnitude may BEST be
understood as referring to
A. theta.
B. the range of the underlying construct.
C. precision.
D. difficulty.

Cohen - Chapter 05 #165

166. Test items with little discriminative ability prompt the test developer to consider the
possibility that
A. the content of the item does not match the construct measured by the other items in
the scale.
B. the item is poorly worded and needs to be rewritten.
C. the item is too complex for the educational level of the population.
D. All of these

Cohen - Chapter 05 #166


167. According to your textbook, a test of depression that contains an abundance of
items that probe the respondents' outward expression of emotion may be inappropriate
for use with
A. test-takers who have made suicidal gestures.
B. inpatients who have been committed involuntarily.
C. veterans diagnosed with PTSD.
D. Ethiopians.

Cohen - Chapter 05 #167

168. The fact that cultural factors may be operating to weaken an item's ability to
discriminate between groups is evident from:
A. Lord's treatise entitled Item Response Theory.
B. an item characteristic curve.
C. an information function.
D. Georg Rasch's unauthorized biography, You Can Never Be Too Rich or Too "Rasch."

Cohen - Chapter 05 #168

169. A difference between the use of coefficient alpha and IRT for evaluating a test's
reliability is that with IRT, it is possible to learn
A. how the precision of a scale varies depending on the level of the construct being
measured.
B. how the level of the construct being measured varies depending on variations in the
item characteristic curve.
C. the precise numerical value for the test's total interitem consistency.
D. All of these

Cohen - Chapter 05 #169


c5 Summary

Category                 # of Questions
Cohen - Chapter 05       169
