Sources: http://www.joe.org/joe/2007february/tt2.php
RELIABILITY OF QUESTIONNAIRE
• Reliability means the consistency or repeatability of a measure.
• It is especially important if the measure is to be used on an ongoing basis to detect
change.
• Forms of reliability:
• Test-retest reliability - whether repeating the test/questionnaire under the same
conditions produces the same results;
• Reliability within a scale - that all the questions designed to measure a particular
trait are indeed measuring the same trait.
• Other methods:
• You can use factor analysis to reduce the length of the questionnaire
http://www.daa.com.au/analytical-ideas/questionnaire-validity/
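One way to judge whether a questionnaire can be shortened is to look at the eigenvalues of the item correlation matrix (the Kaiser criterion retains factors with eigenvalue > 1). A minimal sketch with simulated data, not from any real survey:

```python
# Sketch: count underlying factors via eigenvalues of the item
# correlation matrix (Kaiser criterion: keep eigenvalues > 1).
# All data below are simulated for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_respondents, n_items = 200, 6
# Simulate 6 items driven by a single latent trait plus noise,
# so roughly one factor should dominate.
trait = rng.normal(size=(n_respondents, 1))
items = trait + 0.5 * rng.normal(size=(n_respondents, n_items))

corr = np.corrcoef(items, rowvar=False)       # 6x6 item correlation matrix
eigenvalues = np.linalg.eigvalsh(corr)[::-1]  # sorted descending

n_factors = int(np.sum(eigenvalues > 1.0))    # Kaiser criterion
print("eigenvalues:", np.round(eigenvalues, 2))
print("factors retained:", n_factors)
```

If one factor dominates, items with weak loadings on it are candidates for removal; a full analysis would use dedicated factor-analysis tooling.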
VALIDITY OF QUESTIONNAIRE
• Validity means that we are measuring what we want to measure.
• Types of validity:
• Face Validity - whether at face value, the questions appear to be measuring the
construct.
• largely a "common-sense" assessment, but also relies on knowledge of the way
people respond to survey questions and common pitfalls in questionnaire design;
• Content Validity - whether all important aspects of the construct are covered. Clear
definitions of the construct and its components come in useful here;
• Criterion Validity/Predictive Validity - whether scores on the questionnaire successfully
predict a specific criterion.
• E.g. does the questionnaire used in selecting executives predict the success of
those executives once they have been appointed; and
• Concurrent Validity - whether results of a new questionnaire are consistent with results
of established measures.
http://www.daa.com.au/analytical-ideas/questionnaire-validity/
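A concurrent-validity check usually comes down to correlating scores from the new questionnaire with scores from an established measure taken at the same time. A minimal sketch; the score arrays are invented illustration data:

```python
# Sketch of a concurrent-validity check: Pearson correlation between a
# new questionnaire and an established measure (made-up scores).
import numpy as np

new_scores = np.array([12, 15, 9, 20, 17, 11, 14, 18])
established = np.array([30, 36, 25, 48, 41, 28, 33, 44])

r = np.corrcoef(new_scores, established)[0, 1]
print(f"concurrent validity (Pearson r) = {r:.2f}")
# A high r supports concurrent validity; what counts as "high" depends
# on the field, but values above ~0.7 are commonly cited.
```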
PROPOSAL STRUCTURE
• Introduction
• Background and literature
• Theoretical framework:
• e.g. a synthesis of the literature, or another framing
• Methods
• Design
• Phases of research
• Population and sample
• Instrument: Questionnaire, interview questions
• You need to show that an instrument is available and that you are in the process of
selecting one; this supports the feasibility check
OTHER TYPES OF VALIDITY
• Other types of validity are:
• Convergent validity:
• Different measures of the same construct lead to similar results
• Divergent validity (a subset of construct validity):
• The construct is distinguishable from other constructs (also called discriminant validity)
• E.g. distinguishing between related phenomena such as self-efficacy and self-esteem
• Other pitfalls in questionnaire design:
• Ambiguity
• Bias
• Options that are too similar to distinguish
• Responses that are difficult to analyze or interpret
• Overlapping options that lack mutual exclusion
RELIABILITY
• Test-retest:
• If you administer the test again after some interval, you should get the same result
• In the related split-half method, the instrument is split into two halves, which are
checked against each other
• This is a measure of the precision of the measurement instrument
• The interval should be chosen as a trade-off between:
• being long enough that respondents forget their previous answers
• being short enough that the underlying variables have not changed much
• Cronbach’s Alpha:
• A coefficient of internal consistency: values above 0.7 are generally acceptable, and
0.6 is often treated as the minimum
• For an established (standard) instrument, even a value of 0.6 can signal a problem,
e.g. a poor fit with the local culture
• An indicator of precision, showing that the items move together (rise and fall together)
• If, for example, temperature indicators do not move together, this suggests the
presence of a third variable
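Cronbach's alpha can be computed directly from a respondents-by-items score matrix with the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of total score). A minimal sketch; the response matrix is invented for illustration:

```python
# Minimal Cronbach's alpha computation (internal consistency).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # per-item sample variance
    total_var = items.sum(axis=1).var(ddof=1)   # variance of total score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# 5 respondents x 4 Likert items that move together fairly well.
responses = np.array([
    [4, 4, 5, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 3, 3],
    [1, 2, 1, 2],
])
alpha = cronbach_alpha(responses)
print(f"Cronbach's alpha = {alpha:.2f}")  # high: items move together
```

Because the four items rise and fall together across respondents, alpha comes out well above the 0.7 threshold mentioned above.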
WHAT MEASURES? HOW TO SELECT?
• For precision, the literature review should cover not only the concepts, theories, and
relations but also the instruments, showing which measures were available
• Look at the operationalization behind each instrument
• State your own definition explicitly, and find an instrument that measures that
definition of the concept, not another one
• Measurement must be valid and reliable to yield trustworthy results
• Comparing instruments is very important; their components and their match with the
context matter
• Path analysis can help identify better items for analysis
SAMPLING & STATISTICAL IMPLICATIONS
• Qualitative work does not seek to generalize; quantitative work does
• Surveying the whole population is cumbersome because of the considerable resources
required
• The sample should be representative of the population
• Statistical inference:
• We take a sample and compute statistics from it
• We generalize the results to the population
• We want to know the probability that these statistics generalize to the population
• For example, we may state that with 95% probability we would obtain the same result
• A hypothesis is never confirmed outright; we say that within the margin of error it is
supported (we report a confidence interval)
• Sampling must be done correctly; it is very important, because if the sample is not
representative, the results will not be generalizable
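The inference step above can be sketched with a 95% confidence interval for a proportion (normal approximation); the sample counts are hypothetical:

```python
# Sketch: estimate a population proportion from a sample and report a
# 95% confidence interval (normal approximation, hypothetical counts).
import math

n = 400          # sample size
successes = 240  # respondents answering "yes"
p_hat = successes / n

z = 1.96  # z-value for 95% confidence
margin = z * math.sqrt(p_hat * (1 - p_hat) / n)
low, high = p_hat - margin, p_hat + margin
print(f"estimate = {p_hat:.2f}, 95% CI = ({low:.3f}, {high:.3f})")
```

The interval, not the point estimate alone, is what is generalized to the population: with 95% confidence the population proportion lies within it.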
• The sample should be congruent with the level of analysis:
• E.g. if the phenomenon is at the organizational level, you need to sample
organizations, with a sample from each cluster
• Sampling can be multi-level:
• E.g. within each sampled organization you may draw a second-level sample
• The sample should be random
• A random sample: assign numbers to the records in, e.g., a customer database
• The ordering of people is then random, and you select from them at random
• You contact the selected members up to the required sample size
• This method requires knowing the population, i.e. who its members are
• Systematic sampling:
• Used once the sample size has been identified
• Take a first unit at random, then move through the list in fixed steps, e.g. steps of ten
• i.e. choose the 10th, 20th, …
• If the list is ordered chronologically, this can introduce bias
• Advanced sampling takes the population size into account
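The systematic sampling procedure above can be sketched as follows; the population list and seed are purely illustrative:

```python
# Sketch of systematic sampling: pick a random start, then take every
# k-th member of a (non-chronological) population list.
import random

population = list(range(1000))      # e.g. customer IDs 0..999 (toy data)
sample_size = 100
k = len(population) // sample_size  # sampling interval (step of 10 here)

random.seed(42)                  # fixed seed so the sketch is repeatable
start = random.randrange(k)      # random start within the first interval
sample = population[start::k]    # every k-th member from the start

print("interval k =", k, "| start =", start, "| sample size =", len(sample))
```

Shuffling or randomly ordering the list first avoids the chronological-ordering bias noted above.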
• Advanced sampling also takes the population structure into account
• Methods of advanced sampling:
• Stratified sampling
• Cluster sampling
• E.g. different cohorts of a university, or different faculties, if you believe each has
specific features and the important variables exist within those clusters
• Greater variance in the phenomenon requires a larger sample size
• The expected (acceptable) error also affects the sample size
• Different sampling approaches exist; given the clusters and the expected error, they
tell you what the sample size should be
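One common sample-size formula for estimating a proportion is n = z² · p(1-p) / e², where e is the acceptable margin of error. A minimal sketch; the defaults (p = 0.5 as the most conservative guess, e = 0.05) are conventional choices, not from the source:

```python
# Sketch: required sample size for estimating a proportion at a given
# margin of error. p = 0.5 maximizes p(1-p), the conservative default.
import math

def sample_size(p: float = 0.5, e: float = 0.05, z: float = 1.96) -> int:
    """n = z^2 * p * (1 - p) / e^2, rounded up to a whole respondent."""
    return math.ceil(z**2 * p * (1 - p) / e**2)

print(sample_size())            # 5% margin -> 385 respondents
print(sample_size(e=0.03))      # tighter 3% margin -> larger sample
```

This illustrates the two bullets above: a tighter expected error (smaller e) drives the required sample size up sharply, as does greater variance p(1-p).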