10/6/2012
MBA (2011-13)
CONTENTS
SOURCES OF ERROR
CRITERIA FOR GOOD SCALE
SOURCES OF ERROR
There are four major sources of error in measurement. These are:
1. The respondent
2. The situation
3. The measurer
4. The instrument for data collection
1. THE RESPONDENT
Measurement gets distorted on account of the differing opinions of respondents on a given issue. These differences may arise due to the respondent's status, level of education, social class, or nature of job.
2. THE SITUATION
The situation in which an interview is being conducted can also distort measurement.
3. THE MEASURER
The measurer is the interviewer conducting interviews on the basis of a questionnaire. Measurement error may crop up on account of the method used by the interviewer; for instance, he may not be careful while recording responses.
4. THE INSTRUMENT
A defective instrument for data collection may cause distortion in a number of ways, such as:
The questionnaire is too lengthy, containing a large number of questions.
The questions contain an element of ambiguity.
The language used in a question is suggestive of a particular response.
CRITERIA FOR GOOD SCALE
There are two important criteria for ascertaining whether the scale developed is good or not:
Reliability
Validity
RELIABILITY
There are three major methods of estimating the reliability of measurement. These are:
Test-retest reliability
Alternate-forms reliability
Split-half reliability
1. TEST-RETEST RELIABILITY
It involves repeated measurement of the same respondent or group using the same scaling technique under similar conditions. This means administering a test at two points in time to the same person or group of persons. The scores of the two tests would then be correlated. If the correlation is low, the reliability is low as well.
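The idea above can be sketched in code: correlate the scores of the same respondents at two points in time and read the correlation coefficient as the reliability estimate. The scores below are made up purely for illustration.

```python
# Test-retest reliability: Pearson correlation between two administrations
# of the same test to the same respondents. (Scores are hypothetical.)
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

test1 = [12, 15, 11, 18, 14, 16]  # scores at time 1
test2 = [13, 14, 12, 17, 15, 16]  # same respondents at time 2

r = pearson(test1, test2)
print(f"test-retest reliability: {r:.2f}")  # close to 1 means reliable
```

A correlation near 1 indicates the instrument yields stable scores; a low correlation signals poor reliability.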
2. ALTERNATE-FORMS RELIABILITY
It involves each respondent being given a set of two forms. The forms are considered equivalent but are not identical. The results obtained from the two forms are compared to ascertain whether there is a considerable difference between the two scores.
3. SPLIT-HALF RELIABILITY
It is used for a multi-item instrument. It involves splitting a multi-item measurement instrument into two equivalent groups of items.
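As a sketch of the split-half procedure: split the items into two halves (odd versus even items is a common convention), total each half per respondent, correlate the half-scores, and adjust the result upward with the Spearman-Brown formula, since each half is only half as long as the full instrument. The item responses below are invented for illustration.

```python
# Split-half reliability with the Spearman-Brown correction.
# Rows = respondents, columns = item scores on a 1-5 scale (hypothetical).
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

responses = [
    [4, 5, 4, 4, 5, 4],
    [2, 1, 2, 2, 1, 2],
    [3, 3, 4, 3, 3, 3],
    [5, 4, 5, 5, 4, 4],
    [1, 2, 1, 2, 2, 1],
]

half1 = [sum(row[0::2]) for row in responses]  # odd-numbered items
half2 = [sum(row[1::2]) for row in responses]  # even-numbered items

r_half = pearson(half1, half2)
reliability = 2 * r_half / (1 + r_half)  # Spearman-Brown step-up
print(f"split-half reliability: {reliability:.2f}")
```

The correction matters because the raw half-score correlation understates the reliability of the full-length instrument.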
VALIDITY
There are three types of validity:
Content validity
Predictive validity
Concurrent validity
1. CONTENT VALIDITY
The researcher should first define the problem clearly, identify the items to be measured, and evolve a suitable scale for the purpose. Content validity is qualitative in nature.
Content validity, sometimes called logical or
rational validity, is the estimate of how much a measure represents every single element of a construct.
2. PREDICTIVE VALIDITY
Predictive validity signifies how well the researcher can predict future performance from knowledge of the attitude score.
Most educational and employment tests are
used to predict future performance, so predictive validity is regarded as essential in these fields.
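A minimal sketch of how predictive validity is commonly quantified: correlate the test score taken at selection time with a criterion measured later (here, a performance rating a year on). Both score lists are hypothetical.

```python
# Predictive validity: correlation between a selection-test score and a
# criterion measured later for the same people. (All numbers hypothetical.)
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

aptitude_scores   = [55, 72, 60, 85, 68, 75]        # test at hiring time
performance_later = [3.1, 3.8, 3.0, 4.5, 3.6, 4.0]  # ratings a year later

validity = pearson(aptitude_scores, performance_later)
print(f"predictive validity: {validity:.2f}")
```

The higher this validity coefficient, the better the test anticipates the future criterion it is meant to predict.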
3. CONCURRENT VALIDITY
In the case of concurrent validity, an attitude scale on one variable can be used to estimate scores on another variable measured at the same time. Example: one may judge the social status of respondents on the basis of their attitude towards savings.