
CORRELATIONS BOX

Take a look at the first box in your output file, called Correlations. You will see
your variable names in two rows. In this example, you can see the variable name
water in the first row and the variable name skin in the second row. You will also
see your two variable names in two columns: water and skin. To the right of the
variable names are four cells, each containing numbers that represent variable
crossings. For example, the top-right cell represents the crossing between the
water variable and the skin variable. The bottom-left cell represents this same
crossing. These are the two cells that we are interested in; they contain the same
information, so we only need to read from one. In each of these cells, you will see
a value for Pearson's r, a Sig. (2-tailed) value and a sample size (N) value.

PEARSON'S R

You can find the Pearson's r statistic at the top of each cell. The Pearson's r for
the correlation between the water and skin variables in our example is 0.985.
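
If you would like to check a Pearson's r value outside SPSS, it can be reproduced in a few lines of Python with SciPy. The data below are made-up values standing in for the water and skin variables (the original data are not shown in this guide):

```python
from scipy import stats

# Hypothetical data (not the study's actual values): glasses of water
# consumed per day and a skin elasticity rating for eight participants.
water = [1, 2, 3, 4, 5, 6, 7, 8]
skin = [1.2, 2.1, 2.9, 4.2, 5.1, 5.8, 7.0, 8.1]

r, p = stats.pearsonr(water, skin)
print(f"Pearson's r = {r:.3f}")  # close to 1: a strong positive relationship
```

`pearsonr` also returns the two-tailed significance value, which is discussed later in this guide.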

When Pearson's r is close to 1

This means that there is a strong relationship between your two variables: changes
in one variable are strongly correlated with changes in the second variable. In our
example, Pearson's r is 0.985, which is very close to 1. For this reason, we can
conclude that there is a strong relationship between our water and skin variables.
However, we cannot draw any other conclusions about this relationship from this
number alone.

When Pearson's r is close to 0

This means that there is a weak relationship between your two variables: changes in
one variable are not correlated with changes in the second variable. If our
Pearson's r were 0.01, we could conclude that our variables were not strongly
correlated.

When Pearson's r is positive (+)

This means that as one variable increases in value, the second variable also
increases in value. Similarly, as one variable decreases in value, the second
variable also decreases in value. This is called a positive correlation. In our
example, our Pearson's r value of 0.985 was positive. We know this value is
positive because SPSS did not put a negative sign in front of it; positive is the
default. Since our example Pearson's r is positive, we can conclude that when the
amount of water (our first variable) increases, the participant's skin elasticity
rating (our second variable) also increases.

When Pearson's r is negative (-)

This means that as one variable increases in value, the second variable decreases
in value. This is called a negative correlation. In our example, our Pearson's r
value of 0.985 was positive. But what if SPSS had generated a Pearson's r value of
-0.985? In that case, we could conclude that when the amount of water (our first
variable) increases, the participant's skin elasticity rating (our second variable)
decreases.

Sig. (2-tailed) value

You can find this value in the Correlations box. This value will tell you if there is a
statistically significant correlation between your two variables. In our example, our Sig.
(2-tailed) value is 0.002.

If the Sig. (2-tailed) value is greater than .05

You can conclude that there is no statistically significant correlation between
your two variables. That means that increases or decreases in one variable do not
significantly relate to increases or decreases in your second variable.

If the Sig. (2-tailed) value is less than or equal to .05

You can conclude that there is a statistically significant correlation between your
two variables. That means that increases or decreases in one variable do
significantly relate to increases or decreases in your second variable.
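
Behind the scenes, the Sig. (2-tailed) value for Pearson's r comes from a t test with n - 2 degrees of freedom. Here is a rough sketch in Python; note that this guide does not state the sample size, so n = 5 below is an assumption for illustration:

```python
import math
from scipy import stats

def pearson_p_two_tailed(r, n):
    """Two-tailed p-value for Pearson's r via a t test with n - 2 df."""
    t = r * math.sqrt(n - 2) / math.sqrt(1 - r**2)
    return 2 * stats.t.sf(abs(t), df=n - 2)

# With r = 0.985 and an assumed sample of n = 5 participants,
# the p-value is small enough to count as statistically significant.
p = pearson_p_two_tailed(0.985, 5)
print("significant" if p <= 0.05 else "not significant")
```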

Our Example

The Sig. (2-tailed) value in our example is 0.002. This value is less than .05. Because
of this, we can conclude that there is a statistically significant correlation between
amount of water consumed in glasses and participant rating of skin elasticity.

Warning about the Sig. (2-tailed) value

When you are computing Pearson's r, significance is a messy topic. When you have
small samples, for example only a few participants, moderate correlations may
misleadingly fail to reach significance. When you have large samples, for example
many participants, small correlations may misleadingly turn out to be significant.
Some researchers think that significance should be reported but should perhaps
receive less focus when it comes to Pearson's r.
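
You can see this effect of sample size directly: the same formula gives very different significance values for the same correlation at different sample sizes. A small Python sketch:

```python
import math
from scipy import stats

def pearson_p(r, n):
    # Two-tailed p-value for Pearson's r (t test with n - 2 df).
    t = r * math.sqrt(n - 2) / math.sqrt(1 - r**2)
    return 2 * stats.t.sf(abs(t), df=n - 2)

# The same moderate correlation, r = 0.4, at two different sample sizes:
print(f"n = 10:  p = {pearson_p(0.4, 10):.3f}")   # not significant
print(f"n = 100: p = {pearson_p(0.4, 100):.5f}")  # significant
```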

ONE-WAY ANOVA IN SPSS STATISTICS (CONT...)

SPSS Statistics Output of the one-way ANOVA

SPSS Statistics generates quite a few tables in its one-way ANOVA analysis. In this
section, we show you only the main tables required to understand your results from the
one-way ANOVA and Tukey post hoc test. For a complete explanation of the output you
have to interpret when checking your data for the six assumptions required to carry out
a one-way ANOVA, see our enhanced guide here. This includes relevant boxplots, and
output from the Shapiro-Wilk test for normality and test for homogeneity of variances.
Also, if your data failed the assumption of homogeneity of variances, we take you
through the results for Welch ANOVA, which you will have to interpret rather than the
standard one-way ANOVA in this guide. Below, we focus on the descriptives table, as
well as the results for the one-way ANOVA and Tukey post hoc test only. We will go
through each table in turn.

Descriptives Table

The descriptives table (see below) provides some very useful descriptive statistics,
including the mean, standard deviation and 95% confidence intervals for the dependent
variable (Time) for each separate group (Beginners, Intermediate and Advanced), as
well as when all groups are combined (Total). These figures are useful when you need
to describe your data.

Published with written permission from SPSS Statistics, IBM Corporation.

ANOVA Table

This is the table that shows the output of the ANOVA analysis and whether there is a
statistically significant difference between our group means. We can see that the
significance value is 0.021 (i.e., p = .021), which is below 0.05 and, therefore, there is a
statistically significant difference in the mean length of time to complete the spreadsheet
problem between the different courses taken. This is great to know, but we do not know
which of the specific groups differed. Luckily, we can find this out in the Multiple
Comparisons table which contains the results of the Tukey post hoc test.
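
If you want to reproduce this kind of F test outside SPSS, SciPy's `f_oneway` performs a one-way ANOVA. The completion times below are made-up values chosen to resemble the example (they are not the data behind the table):

```python
from scipy import stats

# Hypothetical completion times (minutes) for the three courses;
# illustrative values only, ten participants per group.
beginner     = [27, 30, 25, 29, 24, 28, 31, 26, 27, 25]
intermediate = [24, 21, 26, 22, 25, 20, 23, 27, 22, 26]
advanced     = [23, 25, 20, 24, 22, 26, 21, 23, 25, 24]

f, p = stats.f_oneway(beginner, intermediate, advanced)
print(f"F(2, 27) = {f:.3f}, p = {p:.3f}")
if p < 0.05:
    print("At least one group mean differs, but the F test alone "
          "does not say which groups differ.")
```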


Multiple Comparisons Table

From the results so far, we know that there are statistically significant differences
between the groups as a whole. The table below, Multiple Comparisons, shows which
groups differed from each other. The Tukey post hoc test is generally the preferred test
for conducting post hoc tests on a one-way ANOVA, but there are many others. We can
see from the table below that there is a statistically significant difference in time to
complete the problem between the group that took the beginner course and the
intermediate course (p = 0.046), as well as between the beginner course and advanced
course (p = 0.034). However, there were no differences between the groups that took
the intermediate and advanced course (p = 0.989).
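
The Tukey post hoc test can also be run outside SPSS; recent versions of SciPy include `tukey_hsd`, which performs all pairwise comparisons with a familywise error correction. Using made-up completion times resembling the example:

```python
from scipy import stats

# Hypothetical completion times (minutes); illustrative values only.
beginner     = [27, 30, 25, 29, 24, 28, 31, 26, 27, 25]
intermediate = [24, 21, 26, 22, 25, 20, 23, 27, 22, 26]
advanced     = [23, 25, 20, 24, 22, 26, 21, 23, 25, 24]

# tukey_hsd compares every pair of groups while controlling
# the familywise error rate.
res = stats.tukey_hsd(beginner, intermediate, advanced)
labels = ["beginner", "intermediate", "advanced"]
for i in range(3):
    for j in range(i + 1, 3):
        print(f"{labels[i]} vs {labels[j]}: p = {res.pvalue[i, j]:.3f}")
```
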

It is also possible to run comparisons between specific groups that you decided were of
interest before you looked at your results. For example, you might have expressed an
interest in knowing the difference in the completion time between the beginner and
intermediate course groups. This type of comparison is often called a planned contrast
or a simple custom contrast. However, you do not have to confine yourself to the
comparison between two groups only. You might have had an interest in
understanding the difference in completion time between the beginner course group and
the average of the intermediate and advanced course groups. This is called a complex
contrast. All these types of custom contrast are available in SPSS Statistics. In our
enhanced guide we show you how to run custom contrasts in SPSS Statistics using
syntax (or sometimes a combination of the graphical user interface and syntax) and how
to interpret and report the results. In addition, we also show you how to "trick" SPSS
Statistics into applying a Bonferroni adjustment for multiple comparisons which it would
otherwise not do.
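
To illustrate what a complex contrast computes, the sketch below tests the beginner group against the average of the intermediate and advanced groups, using the pooled error term from the one-way ANOVA. The data are made-up values resembling the example:

```python
import numpy as np
from scipy import stats

# Hypothetical completion times (minutes); illustrative values only.
groups = [
    np.array([27, 30, 25, 29, 24, 28, 31, 26, 27, 25]),  # beginner
    np.array([24, 21, 26, 22, 25, 20, 23, 27, 22, 26]),  # intermediate
    np.array([23, 25, 20, 24, 22, 26, 21, 23, 25, 24]),  # advanced
]

# Complex contrast: beginner vs the average of intermediate and advanced.
weights = np.array([1.0, -0.5, -0.5])
means = np.array([g.mean() for g in groups])
ns = np.array([len(g) for g in groups])

# Mean square error (pooled within-group variance) from the one-way ANOVA.
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
df_within = ns.sum() - len(groups)
mse = ss_within / df_within

estimate = weights @ means                       # value of the contrast
se = np.sqrt(mse * ((weights ** 2) / ns).sum())  # its standard error
t = estimate / se
p = 2 * stats.t.sf(abs(t), df=df_within)
print(f"contrast = {estimate:.2f} min, t({df_within}) = {t:.2f}, p = {p:.4f}")
```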

Reporting the output of the one-way ANOVA

Based on the results above, you could report the results of the study as follows (N.B.,
this does not include the results from your assumptions tests or effect size calculations):

There was a statistically significant difference between groups as determined by
one-way ANOVA (F(2,27) = 4.467, p = .021). A Tukey post hoc test revealed that the
time to complete the problem was statistically significantly lower after taking the
intermediate (23.6 ± 3.3 min, p = .046) and advanced (23.4 ± 3.2 min, p = .034)
courses compared to the beginners course (27.2 ± 3.0 min). There was no
statistically significant difference between the intermediate and advanced groups
(p = .989).

In our enhanced one-way ANOVA guide, we show you how to write up the results from
your assumptions tests, one-way ANOVA and Tukey post hoc results if you need to
report this in a dissertation, thesis, assignment or research report. We do this using the
Harvard and APA styles (see here). It is also worth noting that in addition to reporting
the results from your assumptions, one-way ANOVA and Tukey post hoc test, you are
increasingly expected to report an effect size. Whilst there are many different ways you
can do this, we show you how to calculate an effect size from your SPSS Statistics
results in our enhanced one-way ANOVA guide. Effect sizes are important because
whilst the one-way ANOVA tells you whether differences between group means are
"real" (i.e., different in the population), it does not tell you the "size" of the difference.
Providing an effect size in your results helps to overcome this limitation. You can learn
more about our enhanced one-way ANOVA guide here, or our enhanced content in
general here.
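
As an illustration of one common effect size for a one-way ANOVA, eta squared is simply the between-groups sum of squares divided by the total sum of squares. A sketch with made-up data resembling the example:

```python
import numpy as np

# Hypothetical completion times (minutes); illustrative values only.
groups = [
    np.array([27, 30, 25, 29, 24, 28, 31, 26, 27, 25]),  # beginner
    np.array([24, 21, 26, 22, 25, 20, 23, 27, 22, 26]),  # intermediate
    np.array([23, 25, 20, 24, 22, 26, 21, 23, 25, 24]),  # advanced
]

# Eta squared: the proportion of total variance explained by group membership.
all_values = np.concatenate(groups)
grand_mean = all_values.mean()
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_total = ((all_values - grand_mean) ** 2).sum()
eta_squared = ss_between / ss_total
print(f"eta squared = {eta_squared:.3f}")
```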

Two-way ANOVA in SPSS Statistics (cont...)

SPSS Statistics Output of the Two-way ANOVA

SPSS Statistics generates quite a few tables in its output from a two-way ANOVA. In
this section, we show you the main tables required to understand your results from the
two-way ANOVA, including descriptives, between-subjects effects, Tukey post hoc tests
(multiple comparisons), a plot of the results, and how to write up these results.

For a complete explanation of the output you have to interpret when checking your data
for the six assumptions required to carry out a two-way ANOVA, see our enhanced
guide. This includes relevant boxplots, and output from your Shapiro-Wilk test for
normality and test for homogeneity of variances.

Finally, if you have a statistically significant interaction, you will also need to report
simple main effects. Alternatively, if you do not have a statistically significant interaction,
there are other procedures you will have to follow. We show you these procedures in
SPSS Statistics, as well as how to interpret and write up your results in our enhanced
two-way ANOVA guide.

Below, we take you through each of the main tables required to understand your results
from the two-way ANOVA.

Descriptive statistics

You can find appropriate descriptive statistics for when you report the results of your
two-way ANOVA in the aptly named "Descriptive Statistics" table, as shown below:



This table is very useful because it provides the mean and standard deviation for each
combination of the groups of the independent variables (what is sometimes referred to
as each "cell" of the design). In addition, the table provides "Total" rows, which
let you see the means and standard deviations for groups split by only one
independent variable, or by none at all. This might be more useful if you do not
have a statistically significant interaction.
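
Outside SPSS, the same cell-by-cell descriptives can be produced with a pandas groupby. The scores below are made-up values standing in for the politics example:

```python
import pandas as pd

# Made-up "interest in politics" scores; two observations per cell.
df = pd.DataFrame({
    "gender":    ["male"] * 6 + ["female"] * 6,
    "edu_level": ["school", "college", "university"] * 4,
    "interest":  [38, 44, 64, 40, 46, 66, 42, 50, 57, 44, 52, 59],
})

# Mean and standard deviation for each "cell" of the design
# (each gender-by-education combination).
cells = df.groupby(["gender", "edu_level"])["interest"].agg(["mean", "std"])
print(cells)

# "Total"-style rows: split by one independent variable only.
print(df.groupby("gender")["interest"].agg(["mean", "std"]))
```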

Plot of the results

The mean "interest in politics" score for each combination of the groups of
"Gender" and "Edu_level" is plotted in a line graph, as shown below:


Although this graph is probably not of sufficient quality to present in your reports (you
can edit its look in SPSS Statistics), it does tend to provide a good graphical illustration
of your results. An interaction effect can usually be seen as a set of non-parallel lines.
You can see from this graph that the lines do not appear to be parallel (with the lines
actually crossing). You might expect there to be a statistically significant interaction,
which we can confirm in the next section.

Statistical significance of the two-way ANOVA

The actual result of the two-way ANOVA, namely whether either of the two
independent variables or their interaction is statistically significant, is shown
in the Tests of Between-Subjects Effects table, as shown below:


The particular rows we are interested in are the "Gender", "Edu_Level" and
"Gender*Edu_Level" rows, and these are highlighted above. These rows inform us
whether our independent variables (the "Gender" and "Edu_Level" rows) and their
interaction (the "Gender*Edu_Level" row) have a statistically significant effect on the
dependent variable, "interest in politics". It is important to first look at the
"Gender*Edu_Level" interaction as this will determine how you can interpret your results
(see our enhanced guide for more information). You can see from the "Sig." column that
we have a statistically significant interaction at the p = .014 level. You may also wish to
report the results of "Gender" and "Edu_Level", but again, these need to be interpreted
in the context of the interaction result. We can see from the table above that there was
no statistically significant difference in mean interest in politics between males and
females (p = .207), but there were statistically significant differences between
educational levels (p < .0005).

Post hoc tests: simple main effects in SPSS Statistics

When you have a statistically significant interaction, reporting the main effects can
be misleading. Therefore, you will need to report the simple main effects. In our
example, this would involve determining the mean difference in interest in politics
between genders at each educational level, as well as between educational level for
each gender. Unfortunately, SPSS Statistics does not allow you to do this using the
graphical interface you will be familiar with, but requires you to use syntax. Therefore, in
our enhanced two-way ANOVA guide, we show you the procedure for doing this in
SPSS Statistics, as well as explaining how to interpret and write up the output from your
simple main effects.
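
As a rough illustration of what a simple main effects analysis asks, the sketch below runs a separate one-way ANOVA across education levels within each gender. (SPSS's EMMEANS approach uses the pooled error term from the full model, so this is a simplification.) The scores are made-up values:

```python
from scipy import stats

# Made-up "interest in politics" scores, keyed by (gender, education level).
scores = {
    ("male", "school"):       [38, 40],
    ("male", "college"):      [44, 46],
    ("male", "university"):   [64, 66],
    ("female", "school"):     [42, 44],
    ("female", "college"):    [50, 52],
    ("female", "university"): [57, 59],
}

# Simple main effect of education within each gender: run a one-way
# ANOVA of edu_level separately on each gender's data.
for gender in ("male", "female"):
    groups = [v for (g, e), v in scores.items() if g == gender]
    f, p = stats.f_oneway(*groups)
    print(f"{gender}: F = {f:.2f}, p = {p:.4f}")
```
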

When you do not have a statistically significant interaction, we explain two options
you have, as well as a procedure you can use in SPSS Statistics to deal with this
issue.

Multiple Comparisons Table

If you do not have a statistically significant interaction, you might interpret the Tukey
post hoc test results for the different levels of education, which can be found in
the Multiple Comparisons table, as shown below:


You can see from the table above that there is some repetition of the results, but
regardless of which row we choose to read from, we are interested in the differences
between (1) School and College, (2) School and University, and (3) College and
University. From the results, we can see that there is a statistically significant difference
between all three different educational levels (p < .0005).
