
One-Way ANOVA

Introduction to Analysis of Variance (ANOVA)

What is ANOVA?

- ANOVA is short for ANalysis Of VAriance.
- Used with 3 or more groups to test for mean differences.
- E.g., a caffeine study with 3 groups:
  - No caffeine
  - Mild dose
  - Jolt group
- Level is the value, kind, or amount of the IV.
- Treatment Group is the people who get a specific treatment or level of the IV.
- Treatment Effect is the size of the difference in means.
Rationale for ANOVA (1)

- We have at least 3 means to test, e.g., H0: μ1 = μ2 = μ3.
- Could take them 2 at a time, but we really want to test all 3 (or more) at once.
- Instead of using a mean difference, we can use the variance of the group means about the grand mean over all groups.
- The logic is the same as for the t-test: compare the observed variance among means (the observed difference in means in the t-test) to what we would expect to get by chance.
Rationale for ANOVA (2)
Suppose we drew 3 samples from the same population.
Our results might look like this:
[Figure: "Three Samples from the Same Population" — three overlapping distributions with Means 1, 2, and 3 close together.]

Note that the means from the 3 groups are not exactly the same, but they are close, so the variance among the means will be small.
Rationale for ANOVA (3)
Suppose we sample people from 3 different populations.
Our results might look like this:
[Figure: "Three Samples from 3 Different Populations" — three distributions with Means 1, 2, and 3 far apart.]

Note that the sample means are far away from one another, so the variance among the means will be large.


Rationale for ANOVA (4)
Suppose we complete a study and find the following
results (either graph). How would we know or decide
whether there is a real effect or not?
[Figures: "Three Samples from the Same Population" and "Three Samples from 3 Different Populations," shown side by side for comparison.]

To decide, we can compare our observed variance in means to what we would expect to get on the basis of chance, given no true difference in means.
Review

- When would we use a t-test versus 1-way ANOVA?
- In ANOVA, what happens to the variance in means (between cells) if the treatment effect is large?
Rationale for ANOVA
We can break the total variance in a study into
meaningful pieces that correspond to treatment effects
and error. That’s why we call this Analysis of Variance.

Definitions of Terms Used in ANOVA:

XG   The Grand Mean, taken over all observations.
XA   The mean of any level of a treatment.
XA1  The mean of a specific level (1 in this case) of a treatment.
Xi   The observation or raw data for the ith person.
The ANOVA Model
A treatment effect is the difference between the overall grand mean and the mean of a cell (treatment level):

    IV Effect = XA − XG

Error is the difference between a score and a cell (treatment level) mean:

    Error = Xi − XA

The ANOVA Model:

    Xi = XG + (XA − XG) + (Xi − XA)

An individual's score (Xi) is the grand mean (XG), plus a treatment or IV effect (XA − XG), plus error (Xi − XA).
ANOVA Data by Treatment Level

[Figure: frequency distributions for the three cells, with the Grand Mean, the Treatment Mean of the highest-scoring cell, and that cell's IV Effect and Error marked.]

The graph shows the terms in the equation. There are three cells or levels in this study. The IV effect and error for the highest-scoring cell are shown.
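The model above can be checked numerically. A minimal sketch in pure Python, using the same caffeine-study scores as the computational example later in this deck:

```python
# Check the ANOVA model decomposition: Xi = XG + (XA - XG) + (Xi - XA).
groups = {
    "G1: Control": [75, 77, 79, 81, 83],
    "G2: Mild":    [80, 82, 84, 86, 88],
    "G3: Jolt":    [70, 72, 74, 76, 78],
}

all_scores = [x for g in groups.values() for x in g]
grand_mean = sum(all_scores) / len(all_scores)   # XG = 79.0

for name, scores in groups.items():
    cell_mean = sum(scores) / len(scores)        # XA
    iv_effect = cell_mean - grand_mean           # XA - XG
    for x in scores:
        error = x - cell_mean                    # Xi - XA
        # Each score is exactly grand mean + IV effect + error.
        assert x == grand_mean + iv_effect + error
```

The assertion holds for every score: the decomposition is an identity, not an approximation.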
ANOVA Calculations
Sums of squares (squared deviations from the mean)
tell the story of variance. The simple ANOVA designs
have 3 sums of squares.

SStot = Σ(Xi − XG)²

The total sum of squares comes from the distance of all the scores from the grand mean. This is the total; it's all you have.

SSW = Σ(Xi − XA)²

The within-group or within-cell sum of squares comes from the distance of the observations to the cell means. This indicates error.

SSB = Σ NA(XA − XG)²

The between-cells or between-groups sum of squares tells of the distance of the cell means from the grand mean. This indicates IV effects.

SStot = SSB + SSW
Computational Example: Caffeine on Test Scores

Test Scores (score = cell mean + deviation):

G1: Control   G2: Mild      G3: Jolt
75 = 79 − 4   80 = 84 − 4   70 = 74 − 4
77 = 79 − 2   82 = 84 − 2   72 = 74 − 2
79 = 79 + 0   84 = 84 + 0   74 = 74 + 0
81 = 79 + 2   86 = 84 + 2   76 = 74 + 2
83 = 79 + 4   88 = 84 + 4   78 = 74 + 4

Means:        79            84            74
SDs (N−1):    3.16          3.16          3.16
Total Sum of Squares

Group           Xi   XG   (Xi − XG)²
G1: Control     75   79   16
(M=79,          77   79    4
SD=3.16)        79   79    0
                81   79    4
                83   79   16
G2: Mild        80   79    1
(M=84,          82   79    9
SD=3.16)        84   79   25
                86   79   49
                88   79   81
G3: Jolt        70   79   81
(M=74,          72   79   49
SD=3.16)        74   79   25
                76   79    9
                78   79    1
Sum                      370

In the total sum of squares, we are finding the squared distance from the Grand Mean. If we took the average, we would have a variance.
    SStot = Σ(Xi − XG)²

[Figure: distribution of all scores on the dependent variable by group, with the Grand Mean marked.]
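The total-sum-of-squares column above can be reproduced in a few lines of Python:

```python
# Total sum of squares for the caffeine example: the squared
# distance of every score from the grand mean.
scores = [75, 77, 79, 81, 83,    # G1: Control
          80, 82, 84, 86, 88,    # G2: Mild
          70, 72, 74, 76, 78]    # G3: Jolt

grand_mean = sum(scores) / len(scores)               # 79.0
ss_tot = sum((x - grand_mean) ** 2 for x in scores)
print(ss_tot)  # 370.0
```

This matches the Sum row of the table (370).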
Within Sum of Squares

Group           Xi   XA   (Xi − XA)²
G1: Control     75   79   16
(M=79,          77   79    4
SD=3.16)        79   79    0
                81   79    4
                83   79   16
G2: Mild        80   84   16
(M=84,          82   84    4
SD=3.16)        84   84    0
                86   84    4
                88   84   16
G3: Jolt        70   74   16
(M=74,          72   74    4
SD=3.16)        74   74    0
                76   74    4
                78   74   16
Sum                      120

The within sum of squares refers to the variance within cells; that is, the difference between scores and their cell means. SSW estimates error.

    SSW = Σ(Xi − XA)²

[Figure: distribution of scores within a cell, with the Cell or Treatment Mean marked.]
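The within-cells computation follows the same pattern, except each score is compared to its own cell mean rather than the grand mean:

```python
# Within-cells sum of squares: squared distance of each score
# from its own cell (treatment) mean. This estimates error.
groups = [[75, 77, 79, 81, 83],   # G1: Control, mean 79
          [80, 82, 84, 86, 88],   # G2: Mild, mean 84
          [70, 72, 74, 76, 78]]   # G3: Jolt, mean 74

ss_w = 0.0
for g in groups:
    cell_mean = sum(g) / len(g)
    ss_w += sum((x - cell_mean) ** 2 for x in g)
print(ss_w)  # 120.0
```

Each group contributes 40, giving the table's Sum of 120.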
Between Sum of Squares

Group           XA   XG   (XA − XG)²
G1: Control     79   79    0
(M=79,          79   79    0
SD=3.16)        79   79    0
                79   79    0
                79   79    0
G2: Mild        84   79   25
(M=84,          84   79   25
SD=3.16)        84   79   25
                84   79   25
                84   79   25
G3: Jolt        74   79   25
(M=74,          74   79   25
SD=3.16)        74   79   25
                74   79   25
                74   79   25
Sum                      250

The between sum of squares relates the Cell Means to the Grand Mean. This is related to the variance of the means.
    SSB = Σ NA(XA − XG)²

[Figure: distributions of the three cells, with each Cell Mean and the Grand Mean marked.]
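The between-cells term weights each squared mean deviation by the cell size NA, which is why each cell's row appears NA times in the table above. In code:

```python
# Between-cells sum of squares: squared distance of each cell mean
# from the grand mean, weighted by the cell size NA.
groups = [[75, 77, 79, 81, 83],   # G1: Control
          [80, 82, 84, 86, 88],   # G2: Mild
          [70, 72, 74, 76, 78]]   # G3: Jolt

all_scores = [x for g in groups for x in g]
grand_mean = sum(all_scores) / len(all_scores)

ss_b = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
           for g in groups)
print(ss_b)  # 250.0
```

Note that 250 + 120 = 370, confirming the partition SStot = SSB + SSW.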
ANOVA Source Table (1)
Source           SS    df               MS                 F
Between Groups   250   k − 1 = 2        250/2 = 125        MSB/MSW =
                                        (MSB)              125/10 = 12.5
Within Groups    120   N − k =          120/12 = 10
                       15 − 3 = 12      (MSW)
Total            370   N − 1 = 14
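The whole source table can be assembled from the raw scores; a sketch in pure Python:

```python
# Build the one-way ANOVA source table for the caffeine example.
groups = [[75, 77, 79, 81, 83],
          [80, 82, 84, 86, 88],
          [70, 72, 74, 76, 78]]

all_scores = [x for g in groups for x in g]
n, k = len(all_scores), len(groups)              # N = 15, k = 3
grand_mean = sum(all_scores) / n

ss_b = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
ss_w = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)

df_b, df_w = k - 1, n - k                        # 2 and 12
ms_b, ms_w = ss_b / df_b, ss_w / df_w            # 125.0 and 10.0
f_ratio = ms_b / ms_w
print(f_ratio)  # 12.5
```

The result, F(2, 12) = 12.5, matches the table's F column.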
ANOVA Source Table (2)

- df: Degrees of freedom. Divide the sum of squares by degrees of freedom to get
- MS, Mean Squares, which are population variance estimates.
- F is the ratio of two mean squares. F is another distribution, like z and t. There are tables of F used for significance testing.
The F Distribution
F Table – Critical Values

                    Numerator df: dfB
dfW   alpha    1      2      3      4      5
 5    5%      6.61   5.79   5.41   5.19   5.05
      1%     16.3   13.3   12.1   11.4   11.0
10    5%      4.96   4.10   3.71   3.48   3.33
      1%     10.0    7.56   6.55   5.99   5.64
12    5%      4.75   3.89   3.49   3.26   3.11
      1%      9.33   6.94   5.95   5.41   5.06
14    5%      4.60   3.74   3.34   3.11   2.96
      1%      8.86   6.51   5.56   5.04   4.70
Review

- What are critical values of a statistic (e.g., critical values of F)?
- What are degrees of freedom?
- What are mean squares?
- What does MSW tell us?
Review: 6 Steps

1. Set alpha (.05).
2. State Null & Alternative:
   H0: μ1 = μ2 = μ3
   H1: not all μ are equal.
3. Calculate test statistic: F = 12.5
4. Determine critical value: F.05(2, 12) = 3.89
5. Decision rule: If test statistic > critical value, reject H0.
6. Decision: Test is significant (12.5 > 3.89). Means in the population are different.
Post Hoc Tests

- If the t-test is significant, you have a difference in population means.
- If the F-test is significant, you have a difference in population means. But you don't know where.
- With 3 means, it could be A=B>C, or A>B>C, or A>B=C.
- We need a test to tell which means are different. Lots are available; we will use one.
Tukey HSD (1)
Use with equal sample size per cell. HSD means "honestly significant difference."

    HSD = q * sqrt(MSW / NA)

α     is the Type I error rate (.05).
q     is a value from a table of the studentized range statistic, based on alpha, dfW (12 in our example), and k, the number of groups (3 in our example).
MSW   is the mean square within groups (10).
NA    is the number of people in each group (5).

Result for our example, with q = 3.77 from the table:

    HSD.05 = 3.77 * sqrt(10 / 5) = 5.33
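The HSD computation is a one-liner once q has been read from a studentized-range table:

```python
import math

# Tukey HSD for the caffeine example. q = 3.77 comes from a
# studentized-range table for alpha = .05, dfW = 12, k = 3 groups.
q = 3.77
ms_w = 10.0   # mean square within groups
n_a = 5       # people per group

hsd = q * math.sqrt(ms_w / n_a)
print(round(hsd, 2))  # 5.33

# Pairwise mean differences: |79-84| = 5, |79-74| = 5, |84-74| = 10.
# Only 10 exceeds HSD, so only groups 2 and 3 differ significantly.
```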
Tukey HSD (2)

To see which means are significantly different, we compare the observed differences among our means to the critical value of the Tukey test.

The differences are:
1 − 2 is 79 − 84 = −5 (say 5 to keep it positive).
1 − 3 is 79 − 74 = 5.
2 − 3 is 84 − 74 = 10. Because 10 is larger than 5.33, this result is significant (2 is different from 3). The other differences are not significant. Review the 6 steps.
Review

- What is a post hoc test? What is its use?
- Describe the HSD test. What does HSD stand for?
Test

Another name for mean square is _________.
1. standard deviation
2. sum of squares
3. treatment level
4. variance
Test

When do we use post hoc tests?
a. after a significant overall F test
b. after a nonsignificant overall F test
c. in place of an overall F test
d. when we want to determine the impact of different factors
