
Measurement

If you can't measure it, you can't manage it.
Bob Donath, Consultant


Measurement in research consists of assigning numbers to empirical events, objects or properties, or activities in compliance with a set of rules. It broadly consists of three processes:

Selecting measurable phenomena
Developing a set of mapping rules
Applying the mapping rule to each phenomenon
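As a rough illustration, the three processes can be sketched in Python; the categories, codes, and variable names below are hypothetical and not part of the original slides.

# 1. Selected phenomenon: respondents' employment status (hypothetical categories)
responses = ["employed", "unemployed", "student", "employed", "retired"]

# 2. Mapping rule: each category is assigned an arbitrary numeric code (nominal level)
mapping_rule = {"employed": 1, "unemployed": 2, "student": 3, "retired": 4}

# 3. Apply the mapping rule to each observed phenomenon
codes = [mapping_rule[r] for r in responses]
print(codes)  # [1, 2, 3, 1, 4]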

Why measurement?

To provide the highest-quality, lowest-error data for testing hypotheses, estimation or prediction, or description.

To develop higher-level concepts (constructs) that are not directly observable, for specialized scientific explanatory purposes and for thinking about and communicating abstractions.

Types of Scales / Levels of Measurement
Nominal
Ordinal
Interval
Ratio

Levels of Measurement
Nominal: Classification
Ordinal
Interval
Ratio

Nominal Scales
Mutually exclusive and collectively exhaustive categories
Exhibits the classification characteristic only

Levels of Measurement
Nominal: Classification
Ordinal: Classification, Order
Interval
Ratio

Ordinal Scales
Characteristics of the nominal scale plus an indication of order
Implies statements of greater than and less than

Levels of Measurement
Nominal: Classification
Ordinal: Classification, Order
Interval: Classification, Order, Distance
Ratio

Interval Scales
Characteristics of nominal and ordinal scales plus the concept of equality of interval
Equal distance exists between numbers

Levels of Measurement
Nominal: Classification
Ordinal: Classification, Order
Interval: Classification, Order, Distance
Ratio: Classification, Order, Distance, Natural Origin

Ratio Scales
Characteristics of previous scales plus an absolute zero point
Examples: weight, height, number of children
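One way to see the practical difference between the four levels is how a variable might be represented in an analysis tool. The sketch below assumes pandas and uses made-up data; it is only an illustration of the idea, not part of the original slides.

import pandas as pd

# Hypothetical observations illustrating the four levels of measurement
df = pd.DataFrame({
    "blood_type": ["A", "O", "B", "O"],                 # nominal: classification only
    "satisfaction": ["low", "high", "medium", "high"],  # ordinal: classification plus order
    "temp_celsius": [21.0, 19.5, 23.0, 20.0],           # interval: equal distances, arbitrary zero
    "num_children": [0, 2, 1, 3],                       # ratio: absolute zero point
})

# Nominal data: unordered categories
df["blood_type"] = pd.Categorical(df["blood_type"])

# Ordinal data: ordered categories allow greater-than / less-than comparisons
df["satisfaction"] = pd.Categorical(
    df["satisfaction"], categories=["low", "medium", "high"], ordered=True
)

print(df["satisfaction"].min())   # order is meaningful for ordinal data
print(df["temp_celsius"].mean())  # differences and means are meaningful for interval data
print(df["num_children"].mean())  # ratios are also meaningful once there is a true zero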

Some Measurement Questions
Examples of questions covering the various scales and levels of measurement

Sources of Error
Measurement may introduce error. Much error is systematic (it results from bias), while the remainder is random (it occurs erratically). Four major error sources may contaminate results:
Respondent: opinion differences arising from relatively stable characteristics of the respondent, such as employee status, ethnic group membership, social class, and gender.
Situation: any condition that places a strain on the interview or measurement session can seriously harm interviewer-respondent rapport.
Measurer: the interviewer can distort responses by rewording, paraphrasing, or reordering questions; further error comes from incorrect coding, careless tabulation, and faulty statistical calculation.
Instrument: a defective instrument can cause distortion in two ways: it can be too confusing and ambiguous, or it may not explore all the potentially important issues.

Evaluating Measurement Tools
Criteria:
Validity
Reliability
Practicality

Validity is the extent to which a test measures what we actually wish to measure. Reliability refers to the accuracy and precision of a measurement procedure. Practicality is concerned with a wide range of factors of economy, convenience, and interpretability.

Validity Determinants
Content
Criterion
Construct

Increasing Content Validity
Literature search
Expert interviews
Group interviews
Question database
Etc.

Validity Determinants
Construct validity is demonstrated when a measurement scale exhibits both convergent validity and discriminant validity, tying the measurement instrument to the underlying theory. For instance, suppose we wanted to measure the effect of trust in relationship marketing. We would begin by correlating results obtained from our measure with those obtained from an established measure of trust. To the extent that the results were correlated, we would have an indication of convergent validity. We could then correlate our results with results from measures of similar but distinct constructs, such as empathy and reciprocity. To the extent that those results are not correlated, we have shown discriminant validity.

Increasing Construct Validity
Correlate the new measure of trust with a known measure of trust (convergent validity) and with measures of distinct constructs such as empathy and credibility (discriminant validity), as sketched below.
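A rough sketch of the convergent/discriminant check in Python with NumPy; all scores below are hypothetical and purely illustrative.

import numpy as np

# Hypothetical respondent scores (illustrative only)
new_trust   = np.array([4.2, 3.8, 4.5, 2.9, 3.5, 4.8, 2.5, 3.9])  # our new trust measure
known_trust = np.array([4.0, 3.6, 4.7, 3.1, 3.4, 4.9, 2.4, 4.0])  # established trust measure
empathy     = np.array([3.1, 4.4, 2.8, 4.0, 3.3, 2.9, 4.1, 3.0])  # similar but distinct construct

# Convergent validity is indicated if this correlation is high
print(np.corrcoef(new_trust, known_trust)[0, 1])

# Discriminant validity is indicated if this correlation is clearly lower
print(np.corrcoef(new_trust, empathy)[0, 1])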


Validity Determinants
Criterion-related validity reflects the success of measures used for prediction or estimation. There are two types of criterion-related validity, concurrent and predictive, which differ only in time perspective. An attitude scale that correctly forecasts the outcome of a purchase decision has predictive validity. An observational method that correctly categorizes families by current income class has concurrent validity.
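As a sketch of the predictive case, one could correlate attitude scores collected now with a purchase outcome observed later. The data below are hypothetical; the Pearson correlation with a binary criterion is the point-biserial coefficient.

import numpy as np

# Hypothetical data: attitude-scale scores now, purchase outcomes observed later
attitude  = np.array([6.5, 4.0, 7.2, 3.1, 5.8, 6.9, 2.7, 5.0])
purchased = np.array([1,   0,   1,   0,   1,   1,   0,   0])  # 1 = bought, 0 = did not buy

# The higher this coefficient, the better the scale forecasts the criterion
r = np.corrcoef(attitude, purchased)[0, 1]
print(f"predictive validity coefficient: {r:.2f}")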


Judging Criterion Validity
Relevance
Freedom from bias
Reliability
Availability

Exhibit 12-6 Understanding Validity and Reliability


Reliability Estimates

Stability

Internal Consistency

Equivalence
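Internal consistency, listed above, is commonly estimated with Cronbach's alpha. A minimal NumPy sketch using hypothetical item scores (not from the original slides):

import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 6 respondents answering a 4-item scale
scores = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 5, 4, 5],
    [3, 3, 3, 4],
    [4, 4, 5, 4],
    [1, 2, 2, 1],
])
print(f"Cronbach's alpha: {cronbach_alpha(scores):.2f}")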


Practicality

Economy

Convenience

Interpretability


Data Preparation and Description

Data Preparation in the Research Process


Editing
Criteria:
Accurate
Consistent
Uniformly entered
Complete
Arranged for simplification


Field Editing
Field editing review
Entry gaps identified
Callbacks made
Results validated

Central Editing
Be familiar with the instructions given to interviewers and coders
Do not destroy the original entry
Make all editing entries identifiable and in standardized form
Initial all answers changed or supplied
Place initials and date of editing on each instrument completed


Coding
Coding involves assigning numbers or other symbols to answers so that the responses can be grouped (categorised) into a limited number of categories. A codebook lists each variable in the study and the coding rules applied to it. Coding can begin with precoding, i.e. assigning codebook codes to variables and recording them on the questionnaire itself, which helps with manual data entry because it makes a separate data entry coding sheet unnecessary.
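A minimal sketch of a codebook and its application in Python; the variable names and codes are hypothetical.

# Hypothetical codebook: one entry per variable, mapping response categories to numeric codes
codebook = {
    "gender": {"male": 1, "female": 2, "prefer not to say": 9},
    "region": {"north": 1, "south": 2, "east": 3, "west": 4},
}

# Raw questionnaire answers for one respondent
respondent = {"gender": "female", "region": "east"}

# Apply the coding rules to produce the coded record
coded = {var: codebook[var][answer] for var, answer in respondent.items()}
print(coded)  # {'gender': 2, 'region': 3}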

Sample Codebook


Precoding


Coding Open-Ended Questions

Coding Rules
Categories should be:
Appropriate to the research problem
Exhaustive
Mutually exclusive
Derived from one classification principle

Data Entry
Keyboarding
Database programs
Digital/barcodes
Voice recognition

Optical Recognition


Describing, Displaying, and Analyzing Data

Data Exploration, Examination, and Analysis in the Research Process


Methods
Frequency tables
Graphs, charts, etc.
Cross-tabulation

Frequencies

A.
Unit Sales Increase (%)    Frequency    Percentage    Cumulative Percentage
5                          1            11.1          11.1
6                          2            22.2          33.3
7                          3            33.3          66.7
8                          2            22.2          88.9
9                          1            11.1          100.0
Total                      9            100.0

B.
Unit Sales Increase (%)    Frequency    Percentage    Cumulative Percentage
Origin, foreign (1)
  6                        1            11.1          11.1
  7                        2            22.2          33.3
  8                        2            22.2          55.5
Origin, foreign (2)
  5                        1            11.1          66.6
  6                        1            11.1          77.7
  7                        1            11.1          88.8
  9                        1            11.1          100.0
Total                      9            100.0
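A table like panel A can be produced from the raw observations with pandas; the sketch below assumes the nine underlying values implied by the frequencies above.

import pandas as pd

# Nine sales-increase percentages implied by the frequencies in panel A
increases = pd.Series([5, 6, 6, 7, 7, 7, 8, 8, 9], name="unit_sales_increase_pct")

freq = increases.value_counts().sort_index()
pct = freq / freq.sum() * 100
table = pd.DataFrame({
    "Frequency": freq,
    "Percentage": pct.round(1),
    "Cumulative Percentage": pct.cumsum().round(1),
})
print(table)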

Example of Cross-Tabulation
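A cross-tabulation can be built with pandas.crosstab. The sketch below reuses the panel B counts and assumes origin codes 1 and 2; the category labels are an assumption, not from the original slides.

import pandas as pd

# Hypothetical records echoing panel B: sales increase crossed with product origin code
df = pd.DataFrame({
    "origin": [1, 1, 1, 1, 1, 2, 2, 2, 2],
    "increase_pct": [6, 7, 7, 8, 8, 5, 6, 7, 9],
})

# Cross-tabulation of counts, with row and column totals
xtab = pd.crosstab(df["origin"], df["increase_pct"], margins=True, margins_name="Total")
print(xtab)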


Measures of Central Tendency

Mean

Median

Mode
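Using the same hypothetical sales-increase values as above, the three measures can be computed with Python's statistics module:

import statistics

increases = [5, 6, 6, 7, 7, 7, 8, 8, 9]  # hypothetical data reused from the frequency table

print(statistics.mean(increases))    # arithmetic mean: 7
print(statistics.median(increases))  # middle value: 7
print(statistics.mode(increases))    # most frequent value: 7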


Characteristics of Distributions


Measures of Variability (Dispersion)
Variance
Standard deviation
Range
Interquartile range
Quartile deviation
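A sketch of these dispersion measures with Python's statistics module (statistics.quantiles requires Python 3.8+); the data are the same hypothetical values used above.

import statistics

increases = [5, 6, 6, 7, 7, 7, 8, 8, 9]

variance = statistics.variance(increases)          # sample variance (s squared)
std_dev = statistics.stdev(increases)              # sample standard deviation (s)
value_range = max(increases) - min(increases)      # range
q1, q2, q3 = statistics.quantiles(increases, n=4)  # quartiles
iqr = q3 - q1                                      # interquartile range
quartile_dev = iqr / 2                             # quartile (semi-interquartile) deviation

print(variance, std_dev, value_range, iqr, quartile_dev)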


Symbols

Variable                            Population    Sample
Mean                                μ             X̄
Proportion                          π             p
Variance                            σ²            s²
Standard deviation                  σ             s
Size                                N             n
Standard error of the mean          σ_X̄          s_X̄
Standard error of the proportion    σ_p           s_p
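The two standard errors follow the usual formulas, s_X̄ = s / sqrt(n) and s_p = sqrt(p(1 - p) / n). A small sketch with hypothetical numbers (not from the original slides):

import math
import statistics

sample = [5, 6, 6, 7, 7, 7, 8, 8, 9]   # hypothetical sample, n = 9
n = len(sample)

s = statistics.stdev(sample)            # sample standard deviation
se_mean = s / math.sqrt(n)              # standard error of the mean

p = 4 / n                               # hypothetical sample proportion (4 of 9 cases in a category)
se_prop = math.sqrt(p * (1 - p) / n)    # standard error of the proportion

print(round(se_mean, 3), round(se_prop, 3))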
