Introductory Statistics
for Biology Students
Second edition
T.A. Watt
Environment Department,
Wye College, University of London, Wye, Ashford, UK
ISBN 978-0-412-80760-2
DOI 10.1007/978-1-4899-3166-5
Apart from any fair dealing for the purposes of research or private study,
or criticism or review, as permitted under the UK Copyright Designs and
Patents Act, 1988, this publication may not be reproduced, stored, or
transmitted, in any form or by any means, without the prior permission in
writing of the publishers, or in the case of reprographic reproduction only
in accordance with the terms of the licences issued by the Copyright
Licensing Agency in the UK, or in accordance with the terms of the
licences issued by the appropriate Reproduction Rights Organization
outside the UK. Enquiries concerning reproduction outside the terms
stated here should be sent to the publishers at the London address printed
on this page.
The publisher makes no representation, express or implied, with regard
to the accuracy of the information contained in this book and cannot
accept any legal responsibility or liability for any errors or omissions that
may be made.
A catalogue record for this book is available from the British Library
Library of Congress Cataloging-in-Publication: 97-'7 3551
To Colyear Dawkins
Contents

Preface
Preface to first edition
Note to students
1 How long is a worm?
  1.1 Introduction
  1.2 Sampling a population
  1.3 The Normal distribution
  1.4 Expressing variability
  1.5 Exercise
3 Sampling
  3.1 First, catch your worm
  3.2 Random sampling
  3.3 Stratified random sampling
  3.4 Systematic sampling
  3.5 Further methods of sampling
  3.6 Practical problems of sampling
  3.7 Exercise
4 Planning an experiment
  4.1 Replication
  4.2 Randomization
  4.3 Controls
  4.4 Objectives
11 Your project
  11.1 Choosing a topic and a supervisor
  11.2 What can happen if you do not seek advice at the start
  11.3 General principles of experimental design and execution
  11.4 General principles of survey design and execution
  11.5 Health and safety
Appendix A
Appendix C
Statistical tables
Index
Preface
The structure and format of the second edition remain very similar to
those of the first. However, in response to suggestions from both teachers
of statistics to biologists and from my own students, several important
additions have been made:
*Minitab Inc. is based at 3081 Enterprise Drive, State College, PA 16801-3008, USA.
Telephone: 001-814-238-3280, fax: 001-814-238-4383.
Note to students
1 How long is a worm?

I would not enter on my list of friends ... the man who needlessly sets
foot upon a worm.
William Cowper
1.1 INTRODUCTION
School experiments in physics and chemistry often have known answers.
If you don't record a value of 9.8 metres per second per second for 'the
acceleration with which an object falls to the Earth' then you know it must
be because there was something wrong with your equipment or with how
you used it. Similarly, the molar mass of calcium carbonate is 100.09
grams per mole, so any other value would be wrong. The idea that there is
a single clear-cut answer to a question isn't relevant in biology. 'How
heavy is a hedgehog?' or 'What is the length of an earthworm?' do not
have just one answer. However we need to know the answers because, for
example, the weight of a hedgehog in autumn largely determines whether
or not it will survive its over-winter hibernation. The aim of this chapter is
to show how to obtain a useful answer to such questions.
We will simplify life by concentrating on just those earthworms of one
species living in one particular field. Since earthworms are both male and
female at the same time we don't need to specify which sex we wish to
measure. Even so, individuals occur with widely differing lengths. Why is
this? Like all animals, earthworms vary in their genetic makeup - some
inherit a tendency to be short and fat and others to be long and thin.
Earthworms can live for a long time and young worms are likely to be
shorter than old ones.
Those which live in the moister part of the field at the bottom of the
slope might be more active and have a better food supply so they will grow
more quickly and may tend to be longer than those in a less favourable
part of the field. Meanwhile perhaps those living near the footpath along
one side of the field tend to be shorter because they are infested with a
parasite or because they have recently escaped from a tussle with a bird.
How then should we measure and describe the length of worms in this
field?
1.2 SAMPLING A POPULATION
We would like to obtain information about all the worms in the field
because they form the population in which we are interested, but it is
impossible to measure them all. Therefore we measure a few - our sample
- and generalize the results to the whole population. This is only a valid
solution provided that the worms in our sample are representative of the
whole population.
1.2.1 Measuring worms
Let's imagine that we have collected ten worms and propose to measure
them, using a ruler, before returning them to their habitat. First, how do
we make one keep straight and still? We will need to coax it into a straight
line with one end at 0 mm and read off the measurement. We must avoid
stretching it or allowing it to contract too much. This is difficult and will
need practice. No doubt slime or soil will add to the problem of deciding
the correct reading. I suspect that, although we would like to say that 'this
particular worm is 83 mm long' and mean it, we will realize that this is
unlikely to be a very accurate measurement.
Before you can even start to worry about analysing your results,
therefore, you should think carefully about the errors and uncertainties
involved in actually making the measurements. Would you be quite so
careful in measuring worms, for example, on a dripping wet day as when
the sun was shining?
We can but do our best to standardize the process and, in view of the
various uncertainties, decide to measure earthworms only to the nearest
5 mm, i.e. 0.5 cm. Here are the results in cm:
11.5, 10.0, 9.5, 8.0, 12.5, 13.5, 9.5, 10.5, 9.0, 6.0
If we arrange our measurements in order we see at once the enormous
variation in length:
6.0, 8.0, 9.0, 9.5, 9.5, 10.0, 10.5, 11.5, 12.5, 13.5
Some of the worms are twice as long as others. It is sensible to check our
set of observations at this stage. For example, if one of them was 70 cm,
do we remember measuring this giant (worms this size do exist in
Australia), or is it more likely that we have misread 10 cm as 70 cm?
So how long is a worm? We could work out the average or mean length
of our ten worms and say that it was the length of a 'typical' worm in
our sample. If we add up the lengths and divide the sum by 10 we get
10 cm. Is this, however, the length of a 'typical' worm in the field?
Let's imagine that our sample of ten worms had shown that each worm
measured 10 cm. This is very consistent (not to mention very suspicious -
we should check that someone is not making up the results). The mean
length is 10 cm and it would seem, in common-sense terms, very likely that
the mean length of all the worms in the field (the population) is very close
to this value. If we had sampled 20 worms and they were all correctly
measured as 10 cm, this would be amazingly strong evidence that the
population mean is very close to 10 cm. The more worms we measure
(increasing replication), the more information we have and so the greater
our confidence that the estimate of the mean that we have obtained from
the sample is close to the real population mean which we want to know
but cannot measure directly.
However, the ten worms in our sample were not all the same length.
Here they are again:
6.0, 8.0, 9.0, 9.5, 9.5, 10.0, 10.5, 11.5, 12.5, 13.5
The mean length is:
(6.0 + 8.0 + ... + 12.5 + 13.5)/10 = 100/10 = 10 cm
With this amount of variation within the ten values, how confident can
we be that the mean length of all the worms in the field is 10 cm? To answer
this question we need a way of expressing the variability of the sample
and of using this as an estimate of the variability of the population, and
for that you need 'statistics'.
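The arithmetic so far is easy to check by machine. Here is a minimal sketch in Python (the language is my choice for illustration; the book itself uses a calculator and MINITAB):

```python
# The ten worm lengths (cm), measured to the nearest 0.5 cm
lengths = [11.5, 10.0, 9.5, 8.0, 12.5, 13.5, 9.5, 10.5, 9.0, 6.0]

print(sorted(lengths))              # arranged in order, 6.0 up to 13.5
print(sum(lengths) / len(lengths))  # the sample mean: 10.0
```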
1.2.2
[Figure 1.1: two histograms of the number of worms in each length class, with mid-point of class (cm) on the horizontal axis]
Figure 1.1 The number of worms in each size class: (a) few worms (ten in our sample); (b) many worms.
If we measured many worms we would expect most to be of medium length
and fewer to be either very big or very small. Look at Figure 1.1a,
where the frequency of occurrence (or actual number) of worms
in a range of size classes is shown. If we measure more worms, the number
in the sample increases, and so the shape of the graph becomes less ragged
(Figure 1.1b). For the whole population of worms in the field (assuming that
the field does not contain any localized abnormality) it is likely to follow
a smooth 'bell-shaped' curve (Figure 1.2), called the Normal distribution.
(Normal is a technical word here - it doesn't just mean 'ordinary'.)
The spread of the curve depends on the natural variability in the
population, being greater if the variability is large, i.e. there are relatively
more values a long way from the centre line. This is a common shape of
distribution for measurements like length and weight. However, not all
populations follow the Normal distribution. For example, counts of the
number of butterflies per thistle might produce mainly zeros (most thistles
lack butterflies), a few ones and twos and the occasional larger number
where the thistle is in a sunny spot and attracts many butterflies. Such a
distribution would be skewed to the right (Figure 1.3).
Although other distributions are used in statistics (Box 1.1) the Normal
distribution is the most important and so it is the one upon which we
concentrate here.
BOX 1.1
Discontinuous distributions
The Normal distribution is a continuous one in that it is used for data
which are measurements and so can take any value on a continuous
scale (e.g. mm, kg). However, data may also be counts and so can
only be whole numbers, i.e. they are discontinuous. There are two
main types of distribution which may be used for counts (for more
detail see Rees, 1995):
These two distributions assume that individuals are only behaving
according to chance. Sometimes we may suspect that there is a
biological reason for individuals (for example microorganisms on a
slide) to be arranged in a pattern rather than just according to
chance. We then compare our observations with the predictions from
these models (see section 9.6) to discover whether there is evidence
to reject the idea that chance is the only factor.
However, if data are counts and fairly large numbers it is often
reasonable and convenient to treat them as if they followed an
approximately Normal distribution. For example, in Chapters 5 and
6 we discuss an experiment in which the numbers of spiders in field
margins managed in different ways are compared with a method
which uses the Normal distribution.
[Figure 1.4: Normal curve of worm length with the shaded area (probability 0.08) between 6 cm and 7 cm]
Figure 1.4 The probability of a worm being between 6 cm and 7 cm in length.
[Figure 1.5: Normal curve with mean 10 cm and standard deviation 0.5 cm; the points of inflection lie at 9.5 cm and 10.5 cm]
Using the equation of the Normal curve and integration, we can
find out the area between any two length values. The ratio of such an area
to the total area under the curve (representing the whole population) gives
you the probability that you will encounter worms of a particular length.
For example, if we wanted to know the probability of a worm being
between 6 cm and 7 cm long we would find that it is 0.08 (or only 8% of the
population, Figure 1.4).
Fortunately you do not need to know anything about either the equation
of the curve or about integration to be able to answer such questions.
The information needed is already available in tables (statistical tables)
and most statistically oriented computer programs already incorporate
them, so that we don't even need to consult the tables themselves very
often. However, it is useful to remember that the probability of a worm's
length being in the range between the two values where the curve changes
direction (points of inflection) is about 0.68 (68%) (Figure 1.5). These
values are the mean plus or minus one standard deviation. For example, if
the mean is 10 cm and the standard deviation was calculated to be 0.5 cm
we would know that 68% of worms would have lengths between 9.5 cm
and 10.5 cm. If the population was more variable and had a standard
deviation of 2 cm we would expect 68% of worms to have lengths between
8 cm and 12 cm (Figure 1.6).
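These area-under-the-curve probabilities can also be sketched numerically. This is not the book's method (which uses printed tables); it is an illustration using the error function from Python's standard library:

```python
from math import erf, sqrt

def normal_cdf(x, mean, sd):
    # Cumulative probability of the Normal distribution, via the error function
    return 0.5 * (1 + erf((x - mean) / (sd * sqrt(2))))

def prob_between(lo, hi, mean, sd):
    # Probability that a value lies between lo and hi
    return normal_cdf(hi, mean, sd) - normal_cdf(lo, mean, sd)

# Mean 10 cm, standard deviation 0.5 cm: about 68% of worms lie
# within one standard deviation of the mean (9.5 cm to 10.5 cm)
print(round(prob_between(9.5, 10.5, 10, 0.5), 2))  # 0.68
```

The same call with a standard deviation of 2 cm gives about 0.68 for the range 8 cm to 12 cm, matching the text.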
How in practice therefore do we summarize the variability of a
population in a way that will tell us something about the spread of the
curve, so that we can work out the probability of finding worms of
different lengths?
[Figure 1.6: Normal curve with mean 10 cm and standard deviation 2.0 cm; 68% of lengths lie between 8 cm and 12 cm]
Table 1.1

a             b                   c
Observation   Distance from the   Difference
(cm)          mean value (cm)     (positive) (cm)
 6.0          -4.0                 4.0
 8.0          -2.0                 2.0
 9.0          -1.0                 1.0
 9.5          -0.5                 0.5
 9.5          -0.5                 0.5
10.0           0.0                 0.0
10.5           0.5                 0.5
11.5           1.5                 1.5
12.5           2.5                 2.5
13.5           3.5                 3.5
Sum            0.0                16.0
Sample 1
Measured values (cm):         6    14
Difference (all positive):    4     4
Difference squared:          16    16
Sum of squared differences:  32

Sample 2
Measured values (cm):         9   9   9   9  11  11  11  11
Difference (all positive):    1   1   1   1   1   1   1   1
Difference squared:           1   1   1   1   1   1   1   1
Sum of squared differences:   8
The sum of the squared differences is much smaller for the consistent
sample (sample 2) than for the variable one (sample 1), so this revised
method gives a much better picture of the variability present than just
the sum of the differences used earlier.
The phrase 'the sum of the squared differences between the observations and
their mean' is usually abbreviated to 'the sum-of-squares'. However, this
abbreviation sometimes leads to it being miscalculated. The 'sum-of-squares'
does not mean the sum of the squares of each observation, as the
following example should make clear.
The sum-of-squares for a sample of three worms measuring 2, 3 and
4 cm respectively is not (2 × 2) + (3 × 3) + (4 × 4) = 29. Instead, the mean
of the values 2, 3 and 4 is 3, and so the sum-of-squares is:

(2 - 3)² + (3 - 3)² + (4 - 3)² = 1² + 0² + 1² = 2
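The distinction is easy to encode. A small sketch in Python (my illustration, not the book's):

```python
def sum_of_squares(values):
    # Sum of squared differences between each observation and the mean -
    # NOT the sum of the squares of each observation
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values)

print(sum_of_squares([2, 3, 4]))  # 2.0, as in the worked example
```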
Observation   Difference       Difference
(cm)          (all positive)   squared
 6.0          4.0              16.0
 8.0          2.0               4.0
 9.0          1.0               1.0
 9.5          0.5               0.25
 9.5          0.5               0.25
10.0          0.0               0.0
10.5          0.5               0.25
11.5          1.5               2.25
12.5          2.5               6.25
13.5          3.5              12.25
Sum          16.0              42.5
Dividing the sum-of-squares (42.5) by one fewer than the number of
observations (n - 1 = 9) gives the variance,
4.722, and the standard deviation, which is the square root of this, is
2.173. This value helps us to judge the extent to which worm length varies
from one individual to another, but its usefulness becomes clearer when
we put this piece of information together with our knowledge of the
Normal distribution (Figure 1.5). This told us that 68% of the worms in
the field will have lengths between:

the mean - the standard deviation and the mean + the standard deviation

In our sample of ten worms that is between

10 cm - 2.173 cm and 10 cm + 2.173 cm

which works out to be between

7.827 cm and 12.173 cm
This sounds amazingly precise - it implies that we can measure the
length of a worm to the nearest 0.01 of a millimetre. Since we only
measured our worms to the nearest 0.5 cm (5 mm) it is better to express
this result as:
68% of the worms in the field have lengths between 7.8 cm and
12.2 cm.
If the worms we measured still had a mean length of 10 cm but had been
much less variable, for example, with lengths mainly in the range from 9
to 11 cm, the standard deviation would be much less. Try working it out
for these values:
8.5, 9.0, 9.5, 9.5, 10.0, 10.0, 10.5, 10.5, 11.0, 11.5
(You should get 0.913.)
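You can check your working against a short Python sketch (again, my illustration rather than the book's calculator routine):

```python
from math import sqrt

def sample_sd(values):
    # Standard deviation: square root of sum-of-squares / (n - 1)
    n = len(values)
    mean = sum(values) / n
    ss = sum((v - mean) ** 2 for v in values)
    return sqrt(ss / (n - 1))

original = [6.0, 8.0, 9.0, 9.5, 9.5, 10.0, 10.5, 11.5, 12.5, 13.5]
print(round(sample_sd(original), 3))        # 2.173
less_variable = [8.5, 9.0, 9.5, 9.5, 10.0, 10.0, 10.5, 10.5, 11.0, 11.5]
print(round(sample_sd(less_variable), 3))   # 0.913
```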
1.4.5
Table 1.3 The 15 possible samples of two worms

Sample   A   B   C   D   E   F   G   H   I   J   K   L   M   N   O
Worm 1   8   8   8   8   8   9   9   9   9  10  10  10  10  10  11
Worm 2   9  10  10  11  12  10  10  11  12  10  11  12  11  12  12
For convenience, where the two worms in a sample differ in length, the
shorter has been called worm 1. There are 15 possible samples (A to O),
because that is the maximum number of different combinations of pairs of
worms that you can get from six worms. Because two of the worms share
the same length (10 cm) some samples will give the same mean (see samples
B and C, for example).
We can see that, by chance, our sample might have provided us with
an estimate of the mean that was rather extreme: sample A would give 8.5,
while sample O would give 11.5, compared with a population mean of
10.0. The means of the 15 possible samples can be summarized as in
Table 1.4.
Table 1.4

Samples     Number of   Sample   Number of samples
            samples     mean     × sample mean
A            1           8.5       8.5
B, C         2           9.0      18.0
D, F, G      3           9.5      28.5
E, H, J      3          10.0      30.0
I, K, M      3          10.5      31.5
L, N         2          11.0      22.0
O            1          11.5      11.5
Total       15                   150
Mean                              10.0 = 150/15
[Figure 1.9: histogram of the number of samples (0 to 3) against sample mean (8.5 to 11.5 cm)]
Figure 1.9 Means from the 15 possible samples each of two worms.
The second column in this table shows that if we take a sample of two
worms we have a 1 in 15 chance of selecting sample A and so getting an
estimated mean of 8.5. However, we have a 3 in 15 (or 1 in 5) chance of
having a sample with a mean of 9.5 (samples D, F or G) or of 10.0 (E, H
or J) or of 10.5 (I, K or M). This is the sampling distribution of the sample
mean. It shows us that we are more likely to obtain a sample with a mean
close to the true population mean than we are to obtain one with a mean
far away from the population mean.
Also, the distribution is symmetrical (Figure 1.9) and follows the
Normal curve so that if we take a series of such samples we will obtain an
unbiased estimate of the population mean. The mean of all 15 samples is
of course the population mean (10.0, bottom of right-hand column of the
table) because all worms in the population have been sampled equally,
with each worm occurring once with each of the other five worms in the
population.
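The 15 samples can be generated mechanically. A sketch, assuming (as Table 1.4 implies) a population of six worms with lengths 8, 9, 10, 10, 11 and 12 cm:

```python
from itertools import combinations

population = [8, 9, 10, 10, 11, 12]  # worm lengths (cm) implied by Table 1.4

# Means of all 15 possible samples of two worms
means = [sum(pair) / 2 for pair in combinations(population, 2)]
print(len(means))               # 15 possible samples
print(sum(means) / len(means))  # 10.0 - the population mean, so the
                                # sample mean is an unbiased estimator
```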
To describe how precisely we have estimated the mean, a
slightly different statistic is used - the standard error. In contrast to the
standard deviation, the standard error describes our uncertainty about the
mean length of a worm.
This uncertainty is caused by the fact that if we took several samples,
as we have illustrated above, they may each provide a different estimate of
the mean. To obtain the standard error (of the mean) we divide the
variance by the sample size before taking the square root. This can be
expressed in three equivalent ways, which all give the same result:

standard error = √(variance / sample size)
               = √variance / √sample size
               = standard deviation / √sample size
The standard error gets smaller as the sample size increases because
the standard deviation is divided by the square root of the sample size.
So, if our sample size was 16 the standard error of the mean would be
the standard deviation divided by 4. If we took a sample of 25 we would
divide the standard deviation by 5 and so on. This is because the lengths
of occasional tiny or huge worms have much less influence on the mean
when many worms are measured than if only a few are measured. Thus,
the mean of a larger sample is more likely to be closer to the population
mean.
Back in section 1.4.4 the standard deviation of the sample of ten worms
was found to be 2.173 cm. The standard error of the mean will be less
because we divide the standard deviation by the square root of 10 (which
is 3.162):
2.173/3.162 = 0.69
Because the distribution of sample means follows the Normal curve then
there is a 68% chance that the range between the estimated mean plus
one standard error and the estimated mean minus one standard error will
contain the population mean. So if the mean is 10.0 cm and the standard
error of the mean is 0.7 cm, there is a 68% chance that the range between
10 cm plus 0.7 cm and 10 cm minus 0.7 cm, i.e. from 9.3 cm to 10.7 cm,
contains the population mean.
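The standard-error calculation above can be sketched directly (a Python illustration of the formula, not the book's own routine):

```python
from math import sqrt

def standard_error(values):
    # Standard error of the mean: standard deviation / sqrt(sample size)
    n = len(values)
    mean = sum(values) / n
    sd = sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
    return sd / sqrt(n)

worms = [6.0, 8.0, 9.0, 9.5, 9.5, 10.0, 10.5, 11.5, 12.5, 13.5]
print(round(standard_error(worms), 2))  # 0.69, i.e. 2.173 / sqrt(10)
```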
1.5 EXERCISE
Histogram, sample mean, standard deviation and standard error
Here are the circumferences of a random sample of ten apples taken from
each of two contrasting varieties:
Apple     Tyler's      Red Charles
number    Kemal (cm)   Ross (cm)
 1        22.0         18.3
 2        24.5         18.4
 3        25.5         20.2
 4        27.5         22.0
 5        22.5         17.5
 6        27.5         18.1
 7        24.0         17.6
 8        26.5         16.8
 9        23.5         18.8
10        25.0         18.9
n
df
mean
SD
SE
Plot a histogram of the data from each variety (as in Figure 1.1a).
Choose class ranges which suitably summarize the variability (i.e. avoid
putting all the observations into one class or having a number of classes
almost equal to the number of apples).
Copy out the table. Use your calculator to complete the entries at the
bottom of each column in your table:
Switch on your calculator and put it into statistical mode.
Clear out the statistical memory.
Enter the ten values.
[Figure 1.10: two histograms of the number of apples in each circumference class, with mid-point of class (cm) on the horizontal axis]
Figure 1.10 The number of apples in each size class: (a) Tyler's Kemal; (b) Red Charles Ross.
Check that n = 10. Write this number in your table and follow it with
the number of degrees of freedom (df) (n - 1).
Obtain the sample mean and standard deviation (SD) from the
appropriate keys (x̄ and σn−1, or s) and enter them into your table.
Calculate the standard error (SE) by: squaring the standard deviation,
dividing the result by n and taking the square root. Enter the value into
your table.
Answer
The histograms are shown in Figure 1.10. The values you should have
obtained in your table are:

        Tyler's   Red Charles
        Kemal     Ross
n       10        10
df       9         9
mean    24.85     18.66
SD       1.930     1.492
SE       0.610     0.472
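If you prefer to check the exercise by machine rather than by calculator, a short Python sketch of the same steps (my illustration):

```python
from math import sqrt

def summary(values):
    # n, degrees of freedom, mean, sample SD and standard error
    n = len(values)
    mean = sum(values) / n
    sd = sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
    return n, n - 1, round(mean, 2), round(sd, 3), round(sd / sqrt(n), 3)

tylers = [22.0, 24.5, 25.5, 27.5, 22.5, 27.5, 24.0, 26.5, 23.5, 25.0]
red_cr = [18.3, 18.4, 20.2, 22.0, 17.5, 18.1, 17.6, 16.8, 18.8, 18.9]
print(summary(tylers))  # n=10, df=9, mean 24.85, SD 1.930, SE 0.610
print(summary(red_cr))  # n=10, df=9, mean 18.66, SD 1.492, SE 0.472
```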
2 Confidence intervals

2.1 THE IMPORTANCE OF CONFIDENCE INTERVALS
Any estimate that we make of the mean value of a population should
be accompanied by an estimate of its variability - the standard error. As
we have seen in Chapter 1 this can be used to say within what range of
values there is a 68% chance that the population mean will occur. This
range is called a 68% confidence interval. However, this is a bit weak; we
usually want to be rather more confident than only 68%. The greater the
chance of our range containing the population mean, the wider the
confidence interval must be.
Think about it: if we want to be absolutely certain that the range
contains the population mean we need a 100% confidence interval and this
must be very wide indeed; it must embrace the whole of the distribution
curve, not just the middle part. It is standard practice to calculate a 95%
confidence interval (excluding 2.5% at each end) which is usually taken as
a good compromise. The 68% confidence interval was obtained by
multiplying the standard error by 1; therefore to calculate the 95%
confidence interval we need a 'multiplier' of greater than 1. Statistical
tables, summarizing the Normal distribution curve, tell us that we need to
multiply by 1.96 for a 95% confidence interval (see Table C.l, where the
'multiplier' = z = 1.96 for p = 0.975, i.e. 97.5% of the area lies below this
point and so 2.5% lies above it). There is however just one further
modification to be made. The Normal distribution assumes that we have a
large number of observations (usually taken to be more than 30) in our
sample. If, as in our case, we have only ten, our knowledge is more limited
so we need to take this fact into account.
Having less information leads to more uncertainty and so to a wider
confidence interval. A distribution which is appropriate for small samples
is the t distribution. Again this exists in table form, so from the tables we
can find out the appropriate value of t with which to multiply the standard
error.
Table 2.1 Values of t for various numbers of observations and confidence intervals

Number of worms   Degrees of freedom   95% confidence   99% confidence
10                 9                   2.26             3.25
20                19                   2.09             2.84
30                29                   2.04*            2.76

*This is very close to the value of 1.96 which comes from the Normal distribution. For most
practical purposes, if our sample contains 30 or more individuals a 95% confidence interval
is given by the mean plus or minus twice its standard error.
[Figure 2.1: a scale of sample mean from 12 to 28 bushes per hectare, with the 95% confidence interval marked around the mean]
Figure 2.1 There is a 95% chance that this range includes the mean number of
bushes per hectare over the entire forest.
Suppose we have sampled a forest and estimated the mean number of fruit
bushes per hectare. We know that there is some uncertainty about how close the true
population mean is to this estimate. If we calculate a confidence interval,
we can quantify this uncertainty. The 95% confidence interval might be
from 15 to 25 bushes per hectare (Figure 2.1). So we are 95% confident
that the population mean lies in this range. If we have worked out that, to
make full use of the factory's capacity, we must have at least 15 bushes/
ha we might decide that we are happy to proceed since there is only a small
chance (2.5%, which is the area in the left-hand tail of the distribution)
that the population mean will be less than this.
However, we may decide that a 2.5% chance of bankruptcy from this
source of risk is too much and that we are only prepared to take a 0.5%
risk. Then a 99% confidence interval for the population mean should be
used instead. This gives the range which is 99% likely to include the
population mean. This might be between 12 and 28 bushes. Then, there is
a 0.5% chance of the population mean being less than 12 bushes per
hectare (the 0.5% chance of there being more than 28 bushes per hectare
does not influence the decision). This confidence interval now includes
uneconomic densities of shrubs (12 to 14.9). Thus we may decide not to
expand the factory.
In the real world there are likely to be areas of the forest where the bushes
are dense and other areas where they are sparse. The mean of 20 bushes
per hectare might be built up from a few areas at a great distance from the
factory where there are many shrubs and areas nearby where there are few.
This would be vital additional information since it would affect the cost of
collecting the fruit. In Chapter 3 we look at how sampling methods can be
made more sophisticated to take account of this type of variation.
2.1.2 Consolidation of the basic ideas
You have now encountered some very important concepts. Don't be
surprised if you don't understand or remember them after just one reading.
Most people find that they need to come across these ideas many times
before they are comfortable with them. The best way to understand the
subject is to have some observations of your own which you wish to
summarize, then you are motivated to put these methods to good use. In
the meantime, use the flow chart below to obtain the standard error from
the data which follow it. Check that you understand what is going on
and why. If you are puzzled, refer back to the appropriate section and
re-read it. Reading about statistical methods cannot be done at the same
fast pace as reading a light novel or a science fiction book - but it will
repay your attention.
FLOW CHART

n observations: y1, y2, ..., yn
        ↓
mean = (Σy)/n = ȳ
        ↓
sum of squares = Σ(y − ȳ)²
        ↓
variance of the observations Vy = sum of squares / (n − 1)
        ↓
variance of the mean Vȳ = Vy / n
        ↓
standard deviation SDy = √Vy        standard error SEȳ = √Vȳ = SDy/√n

The symbol Σ means 'add up all the values of the item which follows', for example all n
values of y.
The bar above y indicates that this is the mean of all the values of y.

27, 32, 36, 38, 45, 48, 55, 61, 68, 71, 75, 78
To calculate the mean and a 95% confidence interval go through the
following procedure. Put your calculator into statistical mode and then
enter the observations. Press the button marked n (the number of
observations) to check that you have, in fact, entered 12 values. Press the
button marked x to obtain the mean.
To obtain the standard error easily we then square the standard deviation
(to get the variance), divide by n (the number of observations) and take the
square root. Multiply the standard error by the value of t for 11 degrees of
freedom (n - 1) from the tables (95%) which is 2.201. Add the result to, and
subtract it from, the mean to give the required confidence interval.
Here are the results: mean = 52.8, standard deviation = 17.6986,
standard error = 5.1092, t = 2.201, confidence interval = 52.8 ± 11.245, i.e.
from 41.5 to 64.0 years. We conclude that there is a 95% chance of the
population mean lying within this range. What would be the range for a
population mean lying within this range. What would be the range for a
99% confidence interval? Remember however that the population consists
of the patients of one particular doctor. It would be misleading to
generalize from these results to those patients on the list of a doctor in a
different part of the country. No matter how good the statistical summary,
your conclusions must always be reviewed in the light of how the
observations were originally collected.
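The doctor's-list calculation can be reproduced step by step in Python (the t value of 2.201 for 11 degrees of freedom comes from tables, as in the text):

```python
from math import sqrt

def confidence_interval(values, t_value):
    # mean ± t × standard error, with t taken from tables for n - 1 df
    n = len(values)
    mean = sum(values) / n
    sd = sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
    se = sd / sqrt(n)
    return mean - t_value * se, mean + t_value * se

ages = [27, 32, 36, 38, 45, 48, 55, 61, 68, 71, 75, 78]
low, high = confidence_interval(ages, 2.201)
print(round(low, 1), round(high, 1))  # close to the 41.5 to 64.0 quoted above
```

(Small differences in the last decimal place arise because the text rounds the mean to 52.8 before adding and subtracting.)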
2.2 INTRODUCTION TO MINITAB
We can improve on the calculator by using a computer package. The
MINITAB package is available on PCs and is very popular. The latest
versions (releases 10 and 11) allow you to select commands from menus
and have excellent graphics. They are very easy to use and provide quick
and useful ways of looking at the sample of observations or set of data we
have collected and of summarizing it. We will see how MINITAB could
be used to deal with our original sample of the lengths of ten worms. We
type the ten values directly into column 1 of the worksheet and name the
column 'length'. Then we can ask for the data to be printed (File, Display
Data):
length
 6.0   8.0   9.0   9.5   9.5  10.0  10.5  11.5  12.5  13.5
Because there are only a few values they are printed across the page
instead of in a column, to save space.
We can see the position of each observation on a scale of length (Graph,
Character Graphs, Dotplot):
[Dotplot: one dot for each of the ten observations along the length scale]
---+---------+---------+---------+---------+---------+--- length
  6.0       7.5       9.0      10.5      12.0      13.5
Stem-and-leaf of length   N = 10

 1    6  0
 1    7
 2    8  0
 5    9  055
 5   10  05
 3   11  5
 2   12  5
 1   13  5
This has three columns of numbers. The central column is called the stem
and the numbers to its right are the leaves. There is one leaf for each
observation. Here it represents the value of the number after the decimal
point in the observation. There can be many leaves on each stem. The stem
here represents the whole number part of an observation, before the
decimal point. The top row of the middle and right-hand columns shows
that there is one observation with a stem of 6 and a leaf of 0 (this is
equivalent to a worm 6.0 cm in length). The fourth row shows that there is
one worm 9.0 cm long and two worms are each 9.5 cm long.
The column of numbers on the left is the cumulative number of worms
up to the midpoint of the histogram. Thus, starting from the top these are
1 (one worm in first class), 1 again (no more worms added from second
class), 2 (one more worm added -length 8.0cm) and 5 (three more worms
added in fourth class). The same process goes on from the bottom upwards
as well, so that we can see that there are three worms whose lengths are
greater than or equal to 11.5 cm and five worms with lengths of at least
10 cm. In our sample the observations happen to fall into two groups of
five counting from both the high end and from the low end, but the
distributions in small samples are not always quite so well balanced.
The advantage of the stem-and-leaf plot is that, in addition to giving
us the shape of the distribution, the actual values are retained. The plot
would also suggest to us (if we did not know already) that the lengths were
recorded only to the nearest 0.5 cm which might affect how we interpret
the results or the instructions we give to anyone who wanted to repeat the
survey.
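The same display is easy to produce outside MINITAB. Here is a minimal sketch in Python (the helper name and formatting are our own, not a MINITAB command), using the whole-number part of each length as the stem and the first decimal digit as the leaf:

```python
from collections import defaultdict

def stem_and_leaf(values):
    """Group each value into a stem (whole part) and a leaf (first decimal digit)."""
    plot = defaultdict(list)
    for v in sorted(values):
        stem = int(v)                        # part before the decimal point
        leaf = int(round((v - stem) * 10))   # first digit after the point
        plot[stem].append(leaf)
    lines = []
    for stem in range(min(plot), max(plot) + 1):
        leaves = "".join(str(leaf) for leaf in plot.get(stem, []))
        lines.append(f"{stem:>3} | {leaves}")
    return lines

lengths = [6.0, 8.0, 9.0, 9.5, 9.5, 10.0, 10.5, 11.5, 12.5, 13.5]
for line in stem_and_leaf(lengths):
    print(line)
```

Note that empty stems (here 7, with no worm between 7.0 and 7.9 cm) are still printed, so the shape of the distribution is preserved.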
The following instruction produces many summary statistics (Stat, Basic
Statistics, Descriptive Statistics):
          N     Mean   Median   TrMean   StDev   SEMean
length   10   10.000    9.750   10.063   2.173    0.687

           Min      Max       Q1       Q3
length    6.00   13.500    8.750   11.750
individuals. The mean is then not the best measure of the salary of a
'typical' employee. Let's look at some actual figures. The salaries of 13
employees have been entered into column 1 of a MINITAB worksheet and
printed out, as for the worm lengths, as follows (File, Display Data):
salary (pounds per year)
 8500   9000   9500  10000  10000  10500  11500
12000  12000  18000  17500  25000  54000
The stem-and-leaf plot of the salaries is:

Stem-and-leaf of salary   N = 13
Leaf Unit = 1000

  3   0  899
 (6)  1  000122
  4   1  78
  2   2  5
  1   2
  1   3
  1   3
  1   4
  1   4
  1   5  4
26
I I
CONFIDENCE INTERVALS
The summary statistics (Stat, Basic Statistics, Descriptive Statistics) are:

          N    Mean   Median   TrMean   StDev   SEMean
salary   13   15962    11500    13182   12360     3428

          Min     Max     Q1      Q3
salary   8500   54000   9750   17750
This command tells us that the median is 11500. Q1 stands for lower
quartile. This is the value which has one-quarter (25%) of observations
below it. Obviously some rounding is necessary where the number of
observations is not exactly divisible by four. Similarly Q3 is the salary
which is exceeded by one-quarter (25%) of people. So three-quarters of
people have salaries below it. It is called the upper quartile. MINITAB
produces a very helpful graph which contains these features, called a
box-and-whisker plot (or boxplot) (Graph, Character Graphs, Boxplot):
[Boxplot of salary: the box runs from Q1 to Q3 with the median marked by a cross, whiskers extend to the minimum and maximum, and the 54000 outlier is marked separately; axis from 10 000 to 50 000]
The cross shows the median. The box around it encloses the central 50%
of the observations and so it excludes the largest 25% and smallest 25% of
values. It is defined at the lower end by Ql and at the upper end by Q3.
The lines from each side of the box (called whiskers) extend as far as the
minimum and maximum values except that very small or very big values
(outliers) are marked separately beyond the ends of the lines as '*' or '0' to
draw attention to them. Thus the 54000 point is so marked in our
example. This also makes it very easy to spot mistakes made in entering
the data (for example if an extra zero had been added onto a salary of
10000).
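These quartile and outlier rules can be checked by hand; here is a sketch in Python using only the standard library. Note that `statistics.quantiles` with its default 'exclusive' method happens to reproduce the quartiles shown above, though other packages use slightly different conventions, and the 1.5 × IQR fence is our assumption about how the outliers are flagged:

```python
import statistics

salaries = [8500, 9000, 9500, 10000, 10000, 10500, 11500,
            12000, 12000, 17500, 18000, 25000, 54000]

q1, q2, q3 = statistics.quantiles(salaries, n=4)  # default "exclusive" method
iqr = q3 - q1

# Flag points beyond 1.5 x IQR from the box as outliers (an assumed convention)
lo_fence = q1 - 1.5 * iqr
hi_fence = q3 + 1.5 * iqr
outliers = [s for s in salaries if s < lo_fence or s > hi_fence]

print(q1, q2, q3)   # lower quartile, median, upper quartile
print(outliers)
```

With these data only the 54000 salary lies beyond the fences, matching the point marked separately on the boxplot.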
Contrast the box plot for the salaries data with that for the lengths of
the ten worms (cm) we measured earlier:
6.0, 8.0, 9.0, 9.5, 9.5, 10.0, 10.5, 11.5, 12.5, 13.5
[Boxplot of length: the median cross near the middle of the box, whiskers roughly symmetrical; axis from 6.0 to 13.5 cm]
Here the median is nearer the middle of the box and the whiskers are
symmetrical. It is often helpful to display the data in this way early on in
the proceedings, to get an idea as to whether they do show a roughly
Normal distribution (as for worm lengths) or not (salaries), because some
of the tests that will be discussed later are based upon the assumption that
the data are normally distributed.
Figure 2.2  [Histogram of the salaries data: frequency against salary (10 000 to 50 000); a second panel shows salary (10 000 to 50 000) on the vertical axis.]
test, which a hypothesis is not guaranteed to pass. Scientific method works
by discarding or amending hypotheses when the predictions extracted from
them fail.'
If we have two random samples, one from each of two groups, we start
by assuming that they come from the same population, and so have a
common mean (the null hypothesis 'Ho'). If we are satisfied that the
evidence from our sample data is strongly against this idea we reject it and
instead accept that the samples come from populations with different
means (the alternative hypothesis 'H1'). We proceed in this way by rejecting
a hypothesis as false instead of accepting it as true, although it seems
cumbersome, because it is logically sound.
2.4.1 Introduction
t-ratio = difference between the sample means / standard error of the difference

and the probability level is discovered by comparing the calculated t-ratio
with the values in a t-table for the number of degrees of freedom
appropriate to the standard error in the denominator.
2.4.3 Independent t-test
Figure 2.4  Complete overlap. [Dotplots of two samples on a common scale of units (3 to 9).]
Figure 2.7  Unequal variances. [Dotplots of two samples with very different spreads, on a common scale of units.]
A note on variances
Note that even if the mean value has not changed, the variation in
weights about the mean might have done so. If the two variances are
very different this is evidence in itself that the populations behave
differently (i.e. the new diet is having a different effect and there is
little point in continuing with the comparison). We can test for this
by dividing the bigger variance by the smaller one. If this variance
ratio is larger than the value in F-tables for p = 0.025 for respective
degrees of freedom (6 and 6 in our example) we have evidence that
the variances are significantly different. We will return to this in
Chapter 6.
The numerator (on the top) of equation (2.3) (see Box 2.2) is straightforward to calculate:

3.671 − 4.714 = −1.043

To get the denominator (on the bottom) of equation (2.3) we must
proceed as follows. First, the pooled (or average or combined) variance is
obtained. We need to calculate the sum-of-squares of the observations in
X1 and add this to the sum-of-squares of the observations in X2 (equation
(2.1)). A quick method is to obtain the standard deviation from the
calculator (σn−1 or xσn−1 or s), square it to obtain the variance and multiply
by (n − 1) to find the sum-of-squares (see section 2.1.3 and the flow chart
preceding it):

(0.5648)² × 6 = 1.9143 = first sum-of-squares
(0.8275)² × 6 = 4.1086 = second sum-of-squares
1.9143 + 4.1086 = 6.0229 = total of both sums-of-squares

Divide the result by the sum of the degrees of freedom for X1 and X2:

6 + 6 = 12
6.0229/12 = 0.5019 = pooled variance
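The same arithmetic can be checked in a few lines of Python using only the standard library (a sketch, with the summary figures copied from above):

```python
import math

# Summary statistics from the text: means, standard deviations, n = 7 per group
mean1, sd1, n1 = 3.671, 0.5648, 7   # old diet
mean2, sd2, n2 = 4.714, 0.8275, 7   # new diet

sxx1 = sd1 ** 2 * (n1 - 1)          # sum-of-squares, as in the flow chart
sxx2 = sd2 ** 2 * (n2 - 1)
pooled_var = (sxx1 + sxx2) / ((n1 - 1) + (n2 - 1))   # equation (2.1)

se_diff = math.sqrt(pooled_var * (1 / n1 + 1 / n2))
t = (mean1 - mean2 - 0) / se_diff                    # equation (2.3)
print(round(pooled_var, 4), round(t, 2))
```

The sign of t simply reflects which group was entered first; its magnitude is what is compared with the t-table.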
Equations

The equation for the pooled variance is:

    s²p = (Sxx1 + Sxx2) / ((n1 − 1) + (n2 − 1))        (2.1)

The equation for the t-ratio is:

    t = ((x̄1 − x̄2) − 0) / √(s²p (1/n1 + 1/n2))        (2.3)
p        Confidence (%)   Phrase                     Symbol
>0.05    <95              non-significant            ns
0.05     95               significant                *
0.01     99               highly significant         **
0.001    99.9             very highly significant    ***
Many computer packages (like MINITAB) give the exact value of p
instead, e.g. p = 0.03.
This is preferable to consulting conventional levels in tables as it is more
accurate.
BOX 2.3
(Table C.2) by looking down the column headed 5. This means that
there is 2.5% in each tail, giving 5% in total (i.e. p = 0.05). (Beware,
other sets of tables may not behave this way. You want the column
with 1.96 at the bottom - for infinite degrees of freedom!)
Occasionally we might be certain that one treatment could only
give, say, an increase in weight, not a decrease. This would certainly
be the case in many medical trials - where we may only be testing
treatments which might increase survival. We are then asking if there
is any evidence that the mean of treatment X1 is bigger than that of
treatment X2. This is a one-tailed test. In this case we use the t value
in the column of Table C.2 headed 10. This means that there is 5%
in each tail and we are only interested in one of them.
The weight gains (kg) for the two diets are entered into two columns:

old   new
3.4   4.5
3.9   4.8
4.2   5.7
4.5   5.9
3.6   4.3
2.9   3.6
3.2   4.2
Then we ask for a two-sample t-test (Stat, Basic Statistics, 2-Sample t- Test,
Samples in Different Columns):
Twosample T for old vs new
       N    Mean   StDev   SEMean
old    7   3.671   0.565     0.21
new    7   4.714   0.828     0.31
Note that MINITAB refers to the population means as 'mu' (mu old
and mu new). This is the sound of the Greek letter μ which is used to
represent the population mean(s) which we have estimated by the sample
means.
It concludes that we can reject the hypothesis of no difference between
the population means with 98.3% confidence (i.e. p = 0.017). Another way
of thinking about this is to realize that only 1.7% of pairs of samples
with seven observations in each group drawn from two populations with
the same mean will produce a difference between the sample means as
large as this, even though the difference between the two population
means is really zero. Is it more likely that you were lucky enough to
draw one of the few sets of samples that show such a difference when the
mean difference is really zero; or that the mean difference is really not
equal to zero?
The confidence interval calculation tells us that we can be 95% confident
that the two population means differ by between 0.2 and 1.9 kg with the
new diet having the greater weight gain.
If the sheep were actually seven pairs of twins we can take these
relationships into account by asking MINITAB to calculate the differences
in weight gain for each of the seven pairs of twins and then carrying out
a 'one-sample t-test' also called a 'paired t-test' on these differences (Calc,
Mathematical Expression, New Variable=C3, Expression= C1 - C2,
File, Display Data):
Row   old   new   diff
  1   3.4   4.5    1.1
  2   3.9   4.8    0.9
  3   4.2   5.7    1.5
  4   4.5   5.9    1.4
  5   3.6   4.3    0.7
  6   2.9   3.6    0.7
  7   3.2   4.2    1.0
Now the t-test on the difference (Stat, Basic Statistics, 1-Sample t-Test):
Confidence Intervals

Variable   N    Mean   StDev   SEMean        95.0% C.I.
diff       7   1.043   0.315    0.119    (0.751, 1.335)

Variable   N    Mean   StDev   SEMean       T   P-Value
diff       7   1.043   0.315    0.119    8.75    0.0001
We are testing the null hypothesis that the mean difference is zero.
Now that this extra information is taken into account we can be even
more confident that the null hypothesis is untrue. The p value is now
0.0001. So the chance that the two groups of animals come from the same
population is only 1 in 10 000.
The confidence interval calculation tells us that we are 95% confident
that the two population means differ by between 0.75 and 1.33 kg with
the new diet being an improvement. It is a narrower (more precise)
interval than from the two-sample t-test which ignored the pairing. This
shows the benefit of accounting for variation between the seven pairs
of twins. We will explore this concept again later in Chapter 5 when we
discuss the advantages of blocking treatments, i.e. of pairing or
grouping them.
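The paired analysis is equally short in Python; a sketch with the standard library (the tabulated t of 2.447 for 6 degrees of freedom and a 95% interval is taken as given):

```python
import math
import statistics

old = [3.4, 3.9, 4.2, 4.5, 3.6, 2.9, 3.2]
new = [4.5, 4.8, 5.7, 5.9, 4.3, 3.6, 4.2]

diffs = [n - o for o, n in zip(old, new)]   # per-pair weight-gain differences
mean_d = statistics.mean(diffs)
se_d = statistics.stdev(diffs) / math.sqrt(len(diffs))
t = mean_d / se_d                           # paired t: mean difference / its SE

# 95% confidence interval using the tabulated t for 6 degrees of freedom
half_width = 2.447 * se_d
print(round(t, 2))
print(round(mean_d - half_width, 3), round(mean_d + half_width, 3))
```

Because only the seven within-pair differences enter the calculation, the between-pair variation drops out, which is why the interval is narrower than the two-sample one.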
2.5 EXERCISES
Confidence intervals and boxplots
We can now use the statistics of the sample mean and its standard error,
which we calculated for each of two apple varieties in the exercise at the
end of Chapter I, to obtain confidence intervals for the population means
of the two varieties.
Answer
The t value
Look up the appropriate value of t from the statistical tables (Table C.2,
row = df, column headed 5). Double check: the correct column for a 95%
confidence interval value of t has 1.96 in the bottom row of the table.
         Tyler's   Red Charles
         Kernal    Ross
mean     24.85     18.66
t        2.262     2.262
SE       0.610     0.472
Confidence interval

Calculate each confidence interval as follows: multiply t by SE; subtract
the result from the sample mean (to obtain the bottom of the range); then
add the same value onto the sample mean (to obtain the top of the range).
The 95% confidence interval for population mean circumference of
Tyler's Kernal is:

24.85 ± (2.262 × 0.610) = 23.47 to 26.23 cm
This means that we are 95% confident that the mean circumference of the
whole population of Tyler's Kernal apples is in this range. Another way of
thinking of this is to imagine taking another 99 random samples each of
ten apples so we have 100 samples in total. If we constructed such
confidence intervals round each of the 100 sample means, 95 of the
intervals would contain the population mean but five of them would not
contain it. In reality we only usually have the resources to take one sample
and so the confidence interval gives us a way of relating its summary
statistics to the mean of the whole population.
Similarly, the 95% confidence interval for population mean circumference
of Red Charles Ross is:

18.66 ± (2.262 × 0.472) = 17.59 to 19.73 cm
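Both intervals can be reproduced with a few lines of Python (a sketch; the dictionary layout is ours, and t = 2.262 for 9 degrees of freedom is taken from the table above):

```python
# 95% confidence intervals as mean ± t x SE
samples = {"Tyler's Kernal": (24.85, 0.610), "Red Charles Ross": (18.66, 0.472)}
t = 2.262   # Table C.2, 9 degrees of freedom, 95% two-tailed

intervals = {}
for name, (mean, se) in samples.items():
    intervals[name] = (round(mean - t * se, 2), round(mean + t * se, 2))
    low, high = intervals[name]
    print(f"{name}: {low} to {high} cm")
```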
Row   variety     cm
  1      1      18.3
  2      1      18.4
  3      1      20.2
  4      1      22.0
  5      1      17.5
  6      1      18.1
  7      1      17.6
  8      1      16.8
  9      1      18.8
 10      1      18.9
 11      2      22.0
 12      2      24.5
 13      2      25.5
 14      2      27.5
 15      2      22.5
 16      2      27.5
 17      2      24.0
 18      2      26.5
 19      2      23.5
 20      2      25.0
Variable    N     Mean   Median   TrMean   StDev   SEMean
Tyler      10   24.850   24.750   24.875    1.930    0.610
Red        10   18.660   18.350   18.475    1.492    0.472

Variable      Min      Max       Q1       Q3
Tyler      22.000   27.500   23.250   26.750
Red        16.800   22.000   17.575   19.225
Stem-and-leaf of cm   variety = 1   N = 10
Leaf Unit = 0.10

  1   16  8
  3   17  56
 (5)  18  13489
  2   19
  2   20  2
  1   21
  1   22  0

Stem-and-leaf of cm   variety = 2   N = 10
Leaf Unit = 0.10

  2   22  05
  3   23  5
  5   24  05
  5   25  05
  3   26  5
  2   27  55

[Dotplots of cm for variety 1 and variety 2 on a common scale from 18.0 to 28.0]
(Note: use variety 1 as the first 'y' and variety 2 as the second 'y', Frame,
Multiple Graphs, Overlay Graphs.) The result is shown in Figure 2.8.
Obtain a high-quality histogram for each variety
(Note: insert 'cm' as 'x', select 'For each group' and insert 'variety' under
'Group variables'.) The result is shown in Figure 2.9.
Figure 2.8  [Boxplots of cm (18 to 28) for varieties 1 and 2.]

Figure 2.9  [Histograms of cm for each variety: frequency against cm (17 to 27).]
The t-test
Here are summary statistics from the samples of two apple varieties which
we calculated in the last section:
        Tyler's   Red Charles
        Kernal    Ross
n       10        10
df      9         9
mean    24.85     18.66
SD      1.930     1.492
Sxx     33.52     20.03
Carry out a t-test with Ho (null hypothesis) that 'the population means
of the two varieties are equal'.
Calculator answer

Square each standard deviation (to get the variance) and multiply this by
the degrees of freedom to obtain the sum of squares for each sample and
check that you obtain the values of 33.52 and 20.03 in the above table.
Calculate the pooled variance by adding these two sums of squares and
dividing by the sum of the two degrees of freedom (pooled df). You should
obtain: 2.975.
We can now obtain the test statistic t by substituting for the two sample
means, the pooled variance and the two values of n in the equation:

    t-ratio = ((x̄1 − x̄2) − 0) / √(s²p (1/n1 + 1/n2))
You should obtain t = 8.02 (see below). If you obtain -8.02 this simply
indicates the direction of the difference (it depends which apple variety you
put into the equation first).
Compare this value with t-tables (Table C.2) using pooled degrees of
freedom. What is your conclusion?
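The same check can be scripted; a sketch in Python from the summary statistics in the table above:

```python
import math

# Summary statistics for the two apple varieties
n1 = n2 = 10
mean1, sxx1 = 24.85, 33.52   # Tyler's Kernal
mean2, sxx2 = 18.66, 20.03   # Red Charles Ross

pooled_var = (sxx1 + sxx2) / ((n1 - 1) + (n2 - 1))
t = (mean1 - mean2) / math.sqrt(pooled_var * (1 / n1 + 1 / n2))
print(round(pooled_var, 3), round(t, 2))
```

Entering the varieties the other way round simply changes the sign of t, as noted above.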
         N    Mean   StDev   SEMean
Red     10   18.66    1.49     0.47
Tyler   10   24.85    1.93     0.61
presents a 95% confidence interval for the mean difference of between
-7.81 and -4.57. If this had included zero it would show that a difference
of zero was a possibility (i.e. the two varieties having the same population
mean). However, this confidence interval excludes zero, which is consistent
with the conclusion that one variety (here Tyler's Kernal) is larger than
the other.
Sampling
would vary - some would be bigger than the unknown population mean
and some would be smaller. However, there would be as many 'high'
values as 'low' ones, and if you took the average of all of their means, that
value would be the true population mean.
Unfortunately, if we usually take only one random sample of, say, ten
observations we may obtain an unrepresentative result because, by chance,
no observations have been selected from a particularly distinctive part of
the population. Nevertheless, if we start off knowing nothing about the
worms and how they vary in length and abundance throughout the field, a
random sample is more likely to give us a representative sample than is
any other approach.
3.2.2
We have decided that our population consists of the worms on the surface
of a field on a damp night. Note that this immediately rules out extrapolating our results to worms that have stayed below ground. If we are
about to set out on a sampling scheme we should think about whether our
method automatically precludes some of the people, species, individuals
or events in which we may be interested. Telephone surveys exclude any
household that does not have a working telephone; surveys of woodland
plants will under-record plants such as celandines and anemones if they
take place in the late summer or autumn, after these species have died
back. But back to the worms!
We can make a random sample of surface-crawling worms by selecting
ten positions in the field at random, going to these and then picking up the
nearest worm. Make a rough plan of the field and mark the length and
[Figure 3.1: rough plan of the field, 100 m long by 100 m wide, with a sample point marked 34 m across and 48 m up]

Figure 3.2  [Yields of fruit (g) on each 1 m² patch, arranged in 20 rows and 20 columns, with the 16 randomly selected positions boxed]
width of it in metres (Figure 3.1). Then press the random number button
on the calculator (RAN #) to give the number of metres across the field
and press the random number button again to give the number of metres
down the field that we should go to find a sample point. For example, if
the numbers generated are 0.034 and 0.548 these can be interpreted as
34 m across and 48 m up (using the last two digits). Alternatively, we can
use the first two values (like 10 and 27) from a table of random numbers,
ignoring any which are too large (see Table C.7).
Make sure that the range of coordinates which are available will allow
any point in the field to be selected. If the field was more than 100 m in one
direction for example, at least three digits would be needed. Having found
one point we repeat this process until we have found ten points which lie
within the field. Let's look at a practical example which for simplicity will
be a small square field, 400 m2 in size. Figure 3.2 shows the yield of fruit
(g) from a particular shrub on each 1 m2 patch, conveniently arranged in
20 rows and 20 columns.
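The calculator recipe can be mimicked in code; a sketch in Python using the standard library's random number generator (the field dimensions are the 100 m × 100 m example above):

```python
import random

def random_points(n, width, length, seed=None):
    """Select n random sample positions (metres across, metres up) in the field."""
    rng = random.Random(seed)
    return [(rng.randrange(width), rng.randrange(length)) for _ in range(n)]

for x, y in random_points(10, 100, 100, seed=1):
    print(f"{x} m across, {y} m up")
```

Rejecting out-of-range digits, as with a random-number table, is unnecessary here because `randrange` draws uniformly within the stated range.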
To select a position at random we could use the last two digits in the
numbers generated by the random number button on our calculator. If this
gives 0.302 and then 0.420 this would identify the position in row 2 and
column 20. If we select a random sample of 16 positions at which to
harvest the crop and measure its yield we could get the selection shown
in Figure 3.2. One of the most common problems in sampling is deciding
on how many observations should be made. A general rule is that the more
variable is the population, the more observations must be taken from it.
This is necessary if we are to obtain a 95% confidence interval which is not
so large as to be useless. Here we have taken 16 observations. This could
be regarded as a pilot study. If the confidence interval turns out to be very
wide we will take more observations. At least we will be unlikely to have
taken an excessive number of observations.
The standard error of the mean is then obtained by putting all the
observations into the calculator on statistical mode, pressing the standard
deviation button and dividing the answer by the square root of 16:
mean = 18.87 g, SE mean = 2.979 g
We notice that, by chance, in our sample of 16 points no positions were
selected from the top right or bottom right of the field, where crop yields
happen to be low. So this sample is unrepresentative although there is no
way that, without some other information, this could have been predicted
in advance. However, if we started off knowing which areas were
particularly low-yielding and which high-yielding then we could improve
our sampling technique. We do this in the next section.
3.3 STRATIFIED RANDOM SAMPLING
Figure 3.5 (a) Stratified random sampling of two patches in each of eight strata. [The 20 × 20 table of yields with the selected patches marked.] (b) Select random points from those within the area marked '.'.
degree of confidence that the population mean lies within the range from
17.9 to 19.9 instead of from 16.9 to 20.9 derived from the random sample.
The value of t for a stratified random sample with two samples per
stratum is found for degrees of freedom equal to the number of strata. Here
there are eight degrees of freedom. This is because we add together the
variances of the eight stratum means:
Stratum       Stratum   Variance of
values        mean      stratum mean*
 9  13        11.0        4.0
12  14        13.0        1.0
38  43        40.5        6.25
19  26        22.5       12.25
 6   8         7.0        1.0
24  29        26.5        6.25
21  30        25.5       20.25
 4   6         5.0        1.0
Total = 302   151.0      52.0
maximum dispersion over the population. They are not chosen at random
but regularly spaced in the form of a grid (Figure 3.6a).
Systematic sampling is very efficient for detecting events because we
are less likely to miss one (perhaps a fallen tree within a wood or a molehill
in grassland) than if the sampling has a random element. Also, because
you are more likely to include both very small and very large individuals,
the mean of a homogeneous population is often close to the true mean, but
has a large standard error.
Figure 3.6 (a) [Systematic sampling: the 20 × 20 table of yields with sample points regularly spaced as a grid.] (b) [The same area divided into parts of LOW, MEDIUM and HIGH productivity.]
If we have decided to use a systematic sampling grid then the first point
should be chosen at random. Once this is chosen (for example, column
15, row 2 in Figure 3.6a) the other sample points are chosen in a fixed
pattern from this point according to the scale of the grid, and so they are
not independent from each other. It is important to locate the first point at
random because otherwise there is a risk that the grid will be positioned
such that all the points miss (say) the edges of the sample area.
Much use is made of systematic sampling and the data are often treated
as if they were from a random sample. For example in work in forestry
plantations every 10th tree in every 10th row may be measured. As long as
the number of sample units is high there is little risk of coinciding with
any environmental pattern which might affect tree growth.
Similarly in social surveys, every 50th name on the electoral roll might
be selected as a person to be interviewed. This is very convenient.
However, it is important to be aware of the possibility of bias in systematic
surveys. In the social survey for example, some flats might be in blocks
of 25 and all occupied by couples, so we could end up only interviewing
people who lived on the ground floor. In the forestry survey every 10th tree
might coincide with the spacing of the forest drains so that all the sampled
trees were growing a little bit better than their neighbours on the wet
site.
Systematic sampling is excellent for mapping an unknown area however
and for looking for patterns that you may wish to investigate in later
samples. The yields of fruit per shrub taken from 16 plants distributed
evenly as points in a grid can be used to divide the area into parts of low,
medium and high productivity (Figure 3.6b). Such parts could be used as
strata in subsequent stratified random sampling, to obtain an unbiased,
precise confidence interval for the population mean yield.
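A systematic grid with a randomly located first point can be sketched as follows (the 20 × 20 field and the 5 m spacing giving 16 points are assumptions matching the example above):

```python
import random

def systematic_grid(width, length, spacing, seed=None):
    """Place a regular grid of sample points, locating the first point at random."""
    rng = random.Random(seed)
    # Random start within one grid cell; all other points follow in a fixed
    # pattern from it, so they are not independent of each other
    x0 = rng.randrange(spacing)
    y0 = rng.randrange(spacing)
    return [(x, y)
            for x in range(x0, width, spacing)
            for y in range(y0, length, spacing)]

points = systematic_grid(20, 20, 5, seed=7)
print(len(points), points[:4])
```

Randomizing only the starting point is what guards against the grid always missing, say, the edges of the sample area.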
In other words we select so that a bigger tree stands a greater chance of
being selected than a smaller one. This can be an efficient method of
sampling because the population mean depends more on the means of the
large trees than on those of the smaller ones. Sampling is a huge subject
area in itself. As a beginner, you won't need to use any of these more
sophisticated methods, but it is always a good idea to discuss your
proposed method with someone who has experience of sampling. If
nothing else they may be able to provide you with a few practical tips.
(For example, when doing fieldwork never be separated from your
waterproof clothing and your sandwiches!)
3.6 PRACTICAL PROBLEMS OF SAMPLING
In real life, areas of land tend to be strange shapes rather than square.
Then we must mark the boundaries of homogeneous areas as strata on a
map. Draw a baseline along the bottom of the map (the x axis) and a line
up the side at right-angles to it (the y axis). Mark distances in metres along
the x and y axes of the map. Then select the positions of the required
number of small sampling areas (these are usually called quadrats) in each
stratum using random numbers to identify the x and y grid coordinates.
In the field the quadrats can be sited in turn by pacing 1 m strides up
and across, placing, say, the bottom left corner of the quadrat where the
toe of the last foot falls.
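This coordinate method is easy to automate. A minimal sketch in Python (the function name, field dimensions and seed are our own, not from the book):

```python
import random

def random_quadrats(x_max, y_max, n, seed=None):
    """Pick n random (x, y) quadrat positions, in whole metres, within a
    stratum x_max m along the baseline and y_max m up the side."""
    rng = random.Random(seed)
    return [(rng.randint(0, x_max), rng.randint(0, y_max)) for _ in range(n)]

# e.g. five quadrats in a 50 m x 30 m stratum
positions = random_quadrats(50, 30, 5, seed=1)
print(positions)
```

Each pair of random numbers locates one quadrat, exactly as when reading pairs of digits from a random-number table.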
We can make sample plots circular, square (rectangular) or in the shape
of a long, thin strip (called a transect) (Figure 3.7a-c). The advantage of
(a)
(b)
(c)
(d)
Figure 3.7 Types of sampling plot: ( a) circular; (b) square; (c) transect; (d) use
of transects from paths.
EXERCISE
L -_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
II
a circular plot is that we need to mark its position only at one central point
where we can then fix a tape measure rather than having to mark all the
four corners of a square. We can use a transect when it is difficult to move
through the vegetation. Strips cut into very dense vegetation at right-angles
from tracks and positioned at random starting points along the
paths are more efficient than random quadrats covering the same area,
because we do not waste time in getting to them (Figure 3.7d).
We must mark quadrats securely if we want to re-record them later. A
couple of wooden pegs is not likely to be sufficient to survive several years
of field wear and tear or vandalism. A buried metal marker (15 cm of thick
metal piping) provides added security and we can re-locate it using a metal
detector. If we intend to make observations at the same place on several
dates a square quadrat is advantageous. It is easier to decide whether, say,
new seedlings are inside or outside it and, if there are large numbers of
them, we can decide to sample only part of the quadrat (subsampling). For
example, although we might record cover of established plant species over
the whole quadrat, we might count seedlings present in the bottom right
quarter only.
If we divide a 1 m square quadrat into, say, 25 squares each of
20 x 20 cm we can then record the presence or absence of particular species
within each of these small squares. Then each species will have a frequency
out of 25 for each quadrat. This is an objective method which encourages
careful scrutiny of the whole quadrat and allows us to re-record
sub-quadrats over time. It gives figures which are likely to be more reliable
than subjective estimates of percentage ground cover for each species.
Such subjective estimates of cover vary greatly from one observer to
another. Also, the appearance of the same species under different
managements or at different seasons may lead us to under- or over-estimate
its importance.
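Turning the 25 presence/absence records into a frequency is simple arithmetic; a sketch in Python with invented records for one quadrat:

```python
# One quadrat: presence (True) or absence (False) of a species in each
# of the 25 small 20 cm x 20 cm squares (made-up example data).
quadrat = [
    True,  False, True,  True,  False,
    True,  True,  False, False, True,
    False, True,  True,  True,  False,
    True,  False, False, True,  True,
    True,  True,  False, True,  False,
]

frequency = sum(quadrat)   # number of squares containing the species
print(f"frequency: {frequency} out of {len(quadrat)}")
```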
3.7 EXERCISE
Choice of sampling method
Answers
1. A population often consists of too many individuals for us to measure
all of them. Therefore we wish to select a few individuals and measure
them. We want this sample to be representative of the whole
population.
2. If we use random numbers to select the individuals to be measured we
ensure that each one has an equal chance of being selected. This means
that the sample is not biased and should therefore be representative of
the whole.
3. If the population is heterogeneous it may be that a single random
sample will be unrepresentative because, say, by chance, no individuals
have been selected from a distinctive part of the population. So it will
be advantageous to divide the population up into internally homogeneous
strata or sub-populations. This also reduces the variability
within each stratum and hence the standard error of the mean and the
width of the confidence interval for the population mean.
4. A systematic sample runs the risk of coinciding with a pattern in the
population and so obtaining a biased sample. However, if this is
thought unlikely to be a problem we may sample, say, every 10th
person on a list. The problem is that we can then never be sure that this
is not a biased sample. However, if we wish to map an area about which
we know nothing a grid sample is sensible.
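Point 3 can be sketched in code. A minimal illustration of stratified random sampling in Python (the stratum names, unit labels and seed are invented for the example):

```python
import random

def stratified_sample(strata, n_per_stratum, seed=None):
    """Draw a simple random sample of n_per_stratum units from each
    stratum (a dict mapping stratum name -> list of unit labels)."""
    rng = random.Random(seed)
    return {name: rng.sample(units, n_per_stratum)
            for name, units in strata.items()}

# hypothetical strata of quadrat labels for low/medium/high productivity areas
strata = {
    "low":    [f"L{i}" for i in range(1, 21)],
    "medium": [f"M{i}" for i in range(1, 21)],
    "high":   [f"H{i}" for i in range(1, 21)],
}
sample = stratified_sample(strata, 4, seed=7)
print(sample)
```

Every stratum contributes the same number of randomly chosen units, so no distinctive part of the population can be missed entirely.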
Planning an experiment
4.1 REPLICATION
Our first thought might be just to split a field in three and apply a different
amount of fertilizer to each third (Figure 4.1a). This could give misleading
results however. Suppose that the natural fertility of the soil is higher at
the bottom of the slope, then whichever fertilizer is allocated to that
position will appear to be better than it really is in comparison with the
others. However, we could divide the field into, say, 12 parts or
experimental units (usually called plots) and allocate each fertilizer
treatment at random (see section 4.2) to four of them.
This will improve matters if there is variation in fertility in the field to
start with because it is unlikely that all of one treatment will end up in a
very high- or very low-fertility patch. Rather, the underlying variation is
Figure 4.1
4.2 RANDOMIZATION
The four replicate plots of each treatment must be allocated to positions
in the field at random. This is achieved by numbering the 12 plots from 1
Figure 4.2 (a) Allocation of plot numbers. (b) Allocation of treatments to plots
using random numbers.
being statistically invalid) if, for example, it means that B is always being
shaded by the taller growing C.
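The randomization step for the 12-plot fertilizer layout can be sketched in Python (the book uses numbered plots and random numbers; `random.shuffle` with a seed of our choosing plays the same role here):

```python
import random

treatments = ["A", "B", "C"]      # three fertilizer treatments
replicates = 4                    # four plots per treatment

labels = treatments * replicates  # 12 treatment labels in all
rng = random.Random(42)
rng.shuffle(labels)               # random allocation to plots 1..12

for plot, treatment in enumerate(labels, start=1):
    print(f"plot {plot:2d}: fertilizer {treatment}")
```

Shuffling a list that contains each treatment exactly four times guarantees the replication while leaving the positions entirely to chance.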
4.3 CONTROLS
Suppose we carry out our fertilizer experiment and there are differences
between the treatments, with the yields from A being the least. How will
we know that A is in fact having any effect on growth at all? What we need
is a control. A control is the name given to a treatment in which (usually)
nothing is applied to the plot. We can then see what changes take place
naturally during the experiment. A slight variation on this idea is a
procedural control. For example, suppose the experiment is taking place
not in a field but in a glasshouse and the treatments are different chemicals
applied in water. If we have a treatment where nothing is applied the
results we get may be simply the effect of the water applied with chemicals.
Therefore we might apply water only as a control. This allows us to assess
any effect of the chemicals separately from that of the water.
So far we have been considering a simple experiment in which fertilizers
are applied to barley. Imagine a more complicated one in which we are
interested in the effect of sheep grazing at different times of year on the
number of wild flower species in grassland at different sites across the
country. We can fence off each plot to make sure that the sheep are in the
right place at the right time each year for, say, five years. But the grassland
present at the beginning will contain a range of species characteristic of
its location, soil type and previous management. These may differ from
one place to another. For example, one site had always been grazed with
sheep whereas another was growing maize two years ago.
If we want to carry out the same experiment on several sites, we must
have control plots on each site. They tell us what happens on each site in
the absence of any grazing treatments (in this instance) and provide a
standard comparison between the two sites. In the same way, if we carry
out the same experiment in different years the results from the control
plots in each year provide a standard basis for comparison of the effect of
different treatments.
4.4 OBJECTIVES
The above gives you an idea of the key factors in experimental design -
replication, randomization and controls. But there is an important stage
we have skipped - precisely what are you trying to test in your experiment?
It is always good practice to write down the background to your
experiment. This consists of why you are interested in the problem and
your general objective. For example, farmers may be paid to sow grass in
strips of land round the edges of their corn fields to encourage wildlife.
The seed used to establish these strips could also contain wild flower seeds
(but at much greater cost). The wild flowers will attract butterflies and bees
or spiders and may be generally thought to be better for nature
conservation than strips without them. However the strips may also
harbour weeds that could spread into the crop and reduce crop yield or
quality. How often and at what times of year should the grass strips be cut
if we want to encourage the growth of the desirable wild flower species
without increasing the competitive weed species? Is it possible to
recommend the best solution? Let us take this question and see where it
leads us in terms of trying to design and lay out an experiment. We must
not forget the practical issues of how and what we record that will need to
be faced before we ever get results to analyse.
4.5 LAYING OUT THE EXPERIMENT
Talking to farmers we have discovered that they think two cuts a year
are needed to stop thistles flowering and setting seeds which then become
weeds in the crop. However, wildlife advisers believe that two cuts may be
too much for the sown wild flowers. Our first questions then become:
1. What is the effect of sowing or not sowing wild flowers?
2. What is the difference between one cut and two cuts?
We have decided to compare four treatments:
F1 (wild flowers sown, cut once a year)
NF1 (not sown, cut once)
F2 (sown, cut twice)
NF2 (not sown, cut twice)
Figure 4.3 (a) Plot with buffer zone. (b) Field layout of plots - wholly
randomized design. (c) An individual quadrat. (d) Presence or absence recorded
in each small square.
treated as far as possible like the rest of the plot but we will not make
any recordings from it. It is unlikely to be a good representation of the
treatment because it may be affected by the neighbouring treatment. Its
function is to protect the inside of the plot.
It is common to use a 1 m x 1 m quadrat for recording vegetation. There
will probably be variation within each plot - perhaps caused by soil
composition, shading by occasional large shrubs in the nearby hedge or
rabbit grazing. Just as it was desirable to have replicates of each treatment,
so it is desirable to take more than one sample from within each strip.
We now decide how we are going to sample each particular treatment
strip. We will take the mean value from several quadrats as a fairer representation of the plot as a whole than if we had recorded only one quadrat
which happened by chance to have been on the regular pathway of a
badger.
Choosing the number of replicates and samples within each replicate is
a problem. On the one hand the more we have, the more precision we will
have in our results. On the other hand we will have more work to do. We
need to consider the cost and availability of resources: people, land,
equipment and the time and effort involved. Tired people will start to
make mistakes at the end of a hard day's work, no matter how dedicated
they are. Somewhere there is a compromise to be reached. Let's assume
that we decide to have three quadrats on each plot. This will allow us to
decide on the length of the plot, after allowing room between each quadrat
4.6 RECORDING DATA
Figure 4.4 A recording sheet, with spaces for the date, recorder and quadrat
number, one row per species (e.g. Lolium perenne) and 25 columns for the
small squares.
have something soft to kneel on when peering at the plants. If you are
studying the behaviour of ducks on a pond take a folding stool to sit on. If
you can persuade someone to write down the observations as you call
them out, this can speed the process up a great deal. Spend time designing
and testing a recording sheet before photocopying or printing it for use.
In our case the sheet needs space for the date, recorder's name and plot
code, plus a column of species names down the left-hand side (Figure
4.4). The next 25 columns are used to tick the presence of a species in one
of the 25 small squares in the quadrat. The column at the far right is used
to enter the total for each species at the end of the day's work. We need
a minimum of 48 sheets for each time we do the recording and, in practice,
a few spares.
Because of all the work that has gone into designing and laying out the
experiment and recording the quadrats the data are precious. Ideally they
should be transferred to a computer as soon as possible, whilst always
keeping the original sheets. If the data cannot be input immediately the
record sheets should be photocopied and the copies kept in a separate
building. Always remember, what can go wrong will go wrong!
4.7 MINITAB
The observations made in the fertilizer experiment described at the
beginning of this chapter can be easily compared using MINITAB. We
code the three treatments (A = 1, B = 2 and C = 3) and put these codes
directly into one column of the worksheet (headed 'fert' here) and the
yields from each plot in the next column (headed 'yield') (File, Display
Data):
ROW   fert   yield
  1     1     7.8
  2     1     8.3
  3     1     8.7
  4     1     7.5
  5     2     8.0
  6     2     8.9
  7     2     9.3
  8     2     9.7
  9     3    10.3
 10     3    11.8
 11     3    11.0
 12     3    10.9
Note that MINITAB puts in the row numbers to tell us how many
observations we have. We can then ask for a boxplot to be drawn
separately for each of the three treatments (Graph, Character Graphs,
Boxplot):
[Character boxplots of yield (scale 8.00 to 11.20), one for each level of fert]
Or we could ask for the yield from each plot to be put on a graph
against its treatment code (Graph, Character Graphs, Scatter Plot):
[Character scatter plot of yield (7.5 to 12.0) against fert (1.20 to 2.80)]
The codes for fertilizer (1, 2, 3) are along the bottom axis. MINITAB
expresses the values on this axis as though it were possible to have values
which were not whole numbers. This is not very elegant.
To make this clearer we can ask for the points to be labelled A, B and
C (Graph, Character Graphs, Scatter Plot, Use Labels):
[Character scatter plot of yield against fert, with the points labelled A, B and C]
These displays show that there is some overlap between the yields from
treatments A and B, but that the yields from treatment C seem generally
higher than those from the other two. We need some way of quantifying
this description and assessing the evidence objectively. Some of the
variability between yields is caused by the different amounts of fertilizer
and some by random variation. How confident can we be that if we
repeated this same experiment in exactly the same conditions we would
obtain similar results? In Chapter 5 we will see how to describe these ideas
in a model. This then will enable us (Chapter 6) to test whether or not
these fertilizers affect yield and to say how certain we are about our
conclusion.
General guidelines on the choice of method of data analysis are given
in Appendix A.
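The same comparison can be made numerically. A minimal Python sketch using the worksheet values above (this check is our own; the book works in MINITAB's menus):

```python
# Yields from the 12-plot fertilizer experiment, grouped by treatment.
yields = {
    "A": [7.8, 8.3, 8.7, 7.5],
    "B": [8.0, 8.9, 9.3, 9.7],
    "C": [10.3, 11.8, 11.0, 10.9],
}

means = {t: sum(v) / len(v) for t, v in yields.items()}
for t in "ABC":
    print(f"fertilizer {t}: mean yield {means[t]:.3f}")
```

The treatment means show the same picture as the plots: A and B are close, while C stands clearly above both.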
4.8 EXERCISE
Planning an experiment on diet and growth in mice
You are asked to carry out an experiment to compare the effects of six
different diets (A to F) on the growth of mice. You have 18 mice available
- each in a separate cage. Allocate three mice per treatment to their three
cages - using either a table of random numbers (Table C.7) or the random
number button on your calculator. One way of doing this is to represent
each cage by its row and column coordinates. So that the first mouse (diet
A) would be allocated as shown if the first two random numbers selected
were 2 (for row) and 4 (for column):
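One way to automate this allocation in Python (we assume a 3 x 6 grid of cages; the grid shape and seed are ours, not from the book):

```python
import random

rng = random.Random(3)
rows, cols = 3, 6                                  # assumed cage grid
cages = [(r, c) for r in range(1, rows + 1) for c in range(1, cols + 1)]
rng.shuffle(cages)                                 # random cage order

diets = [d for d in "ABCDEF" for _ in range(3)]    # three mice per diet
allocation = dict(zip(cages, diets))               # (row, col) -> diet
print(allocation)
```

Shuffling the cage coordinates and pairing them with the diet labels does in one step what repeated draws from a random-number table achieve by hand.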
At the end of the experiment the following weight gains (g) were
recorded:
Diet A   24.7   23.3   23.7
Diet B   20.4   20.9   21.4
Diet C   16.6   17.5   17.0
Diet D   16.0   17.2   17.0
Diet E   19.0   18.3   18.7
Diet F   20.5   19.7   19.6
Enter the data into a MINITAB worksheet. In column 1 you will need
codes to identify the diets (1 to 6) (try Calc, Set Patterned Data), while the
18 weight gains go into column 2. Save the worksheet. You will need it
again in Chapter 5. Try summarizing the data using high-quality graphics:
Data Display
Graph Plot (weight on y axis, diet on x axis)
Graph Boxplot (y = weight, x = diet)
Answer
Data display

Row   Diet   Weight
  1     1     24.7
  2     1     23.3
  3     1     23.7
  4     2     20.4
  5     2     20.9
  6     2     21.4
  7     3     16.6
  8     3     17.5
  9     3     17.0
 10     4     18.0
 11     4     17.2
 12     4     17.0
 13     5     19.0
 14     5     18.3
 15     5     18.7
 16     6     20.5
 17     6     19.7
 18     6     19.6
The results are shown in Figures 4.5 and 4.6. Note that Figures 4.5 and
4.6 are shown exactly as they first appear on the screen. They may be
edited, for example to add a title, to add units to the vertical axis and to
make the label 'Weight' horizontal. A 'double click' on the graph produces
the editing toolbars.
Figure 4.5 Plot of weight gain (g) against diet code.

Figure 4.6 Boxplots of weight gain for each diet.
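The diet means behind Figures 4.5 and 4.6 can also be computed directly; a Python sketch using the values transcribed from the data display above:

```python
# Weight gains (g) for three mice on each of six diets (coded 1-6).
gains = {
    1: [24.7, 23.3, 23.7],
    2: [20.4, 20.9, 21.4],
    3: [16.6, 17.5, 17.0],
    4: [18.0, 17.2, 17.0],
    5: [19.0, 18.3, 18.7],
    6: [20.5, 19.7, 19.6],
}

means = {diet: sum(v) / len(v) for diet, v in gains.items()}
for diet, m in means.items():
    print(f"diet {diet}: mean gain {m:.2f} g")
```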
Accounting for background variation

5.1 SOURCES OF VARIATION
F1   NF1   F2   NF2
We wanted to find out whether cutting the vegetation once a year instead
of twice affects the numbers of spiders which live on the site. Do they
prefer vegetation which has been seeded with wild flowers to the natural
grasses?
It is important to realize that even if we did not impose any treatments
in the experiments we would find that the number of spiders of a particular
species would not be the same in each quadrat. The natural variation
across the site will mean that some plots have damper or more fertile soil
or are more protected from wind and so they may have more vigorous
grass growth. One quadrat may, by chance, contain many poppies because
the grass had been dug up by rabbits the previous year so the poppy
seeds which had been dormant in the soil for years were able to
germinate at last. The number of spiders in a quadrat will be affected
by these differences in the vegetation in different quadrats. Such natural
variability from one quadrat or sampling unit to another is called
random variation.
In a laboratory experiment such variation is often quite small. For
example, the effect of different concentrations of a chemical on the growth
rate of a bacterium may be studied by adding the chemical solutions to
inoculated agar in Petri dishes which are then kept at a constant
temperature. The conditions within each Petri dish will be extremely
similar, apart from the deliberately imposed treatments (the chemical
concentrations) being different. In contrast, in a field experiment this
random variation is often very large so the quadrats and hence whatever is
measured within them will differ in many ways from each other, apart
from the deliberate differences introduced by the cutting and sowing
treatments.
That is why if we wish to determine the factors which influence, say, the
number of spiders per plot we need to set up a model of our system. This
must separate background or random variation from that caused by the
treatments. Let us look at the results we might collect from our field
experiment. First we count the number of spiders in each 1 m x 1 m
quadrat in our plot. We then add together the values from each of the
three quadrats per plot and divide by 3 to give the number of spiders per
square metre. This is because it is the plots which were randomized and so
they are the experimental units. Recording three quadrats within each plot
is like subsampling the plots. It gives us more information and allows us
to obtain a better estimate of the number of spiders per square metre of
plot than if we had only recorded one quadrat. However, because each set
of three quadrats is within one plot they are not independent
observations and therefore should not be used separately in the analysis of
variance that is described below. Thus we can summarize the variation in
spider numbers across the 16 plots as shown in Table 5.1.
Table 5.1

Replicate     F1     F2    NF1    NF2
    1         21     16     18     14
    2         20     16     17     13
    3         19     14     15     13
    4         18     14     16     12
Mean        19.5   15.0   16.5   13.0
5.2 THE MODEL
The mean number of spiders per plot in Table 5.1 is not the same in all
plots. How can we make sense of the variation in terms of our model? The
variation might be caused by the effects of the four treatments or it might
represent random variation or, more likely, it is composed of both
elements.
If we want to predict how many spiders we would find on a plot which
has received a certain treatment, this experiment provides us with an
estimate. It is the mean of the values from the four plots which received
that treatment, the bottom row in Table 5.1, and is called the treatment
mean.
We find that the mean number of spiders on plots which received wild
flower seed and were cut once (F1) was 19.5, whereas on those which were
not sown and were cut twice a year (NF2) the mean was only 13.0. Not
every plot receiving treatment F1, however, will contain 19.5 spiders (this is
simply an average - we cannot have half a spider in reality). There is a
considerable amount of random variation around the treatment mean.
Some plots have more and some less: we cannot say why they differ, except
that the differences are caused by chance.
expected value = grand mean + (treatment mean - grand mean)
This simple model predicts that each of the four plots in treatment F2 is
expected to have 16 + (15 - 16) = 16 + (-1) = 15 spiders per square
metre. However they do not, so the model needs further work. We can find
out by how much our model fails to fit our observations on each plot in
treatment F2 (Table 5.2).
Two of the replicate plots of this treatment (Table 5.2) have observed
numbers greater than the mean for the treatment and two have values
which are less. The differences between observed and expected values are
called residuals. They represent random variation. Residuals can be
Table 5.2

Replicate   Observed   Expected   Difference
             number     number    = Residual
    1          16         15         +1
    2          16         15         +1
    3          14         15         -1
    4          14         15         -1
Mean           15         15          0
positive or negative. They always add up to zero for each treatment and
must also have a mean of zero.
A simple model to explain what is going on in our experiment is:
observed number of spiders per plot (observed value) = expected number of
spiders per plot + residual. In more general terms:
observed value = expected value + residual
Since we already know how to calculate an expected value (see above) we
can include this information as well to give the full equation:
observed value = grand mean + (difference between treatment mean and grand mean) + residual
We can make this clearer by using the term treatment effect to represent
the difference between the treatment mean and the grand mean:
observed value = grand mean + treatment effect + residual
Note that both the treatment effect and residuals may be positive or
negative.
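The model can be checked with a few lines of Python; a sketch (variable names are ours) using the spider counts from Table 5.1:

```python
# Spider counts per square metre: four plots for each treatment.
counts = {
    "F1":  [21, 20, 19, 18],
    "F2":  [16, 16, 14, 14],
    "NF1": [18, 17, 15, 16],
    "NF2": [14, 13, 13, 12],
}

grand_mean = sum(sum(v) for v in counts.values()) / 16   # 16.0

effects, residuals = {}, {}
for t, values in counts.items():
    t_mean = sum(values) / 4
    effects[t] = t_mean - grand_mean                      # treatment effect
    # observed value = grand mean + treatment effect + residual
    residuals[t] = [y - (grand_mean + effects[t]) for y in values]
```

For treatment F2 this reproduces Table 5.2: an effect of -1 and residuals +1, +1, -1, -1, which sum to zero.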
Box 5.1
The model using symbolic equations
Some textbooks use equations to represent the same idea:
y_ij = ȳ + t_i + e_ij

The small letters i and j are called subscripts. The letter i stands for
the treatment number. In our experiment we could replace it by 1, 2,
3 or 4. The letter j stands for the replicate number. We could use 1,
2, 3 or 4 here. If we wished to consider the number of spiders in
treatment 2, replicate 3 we would replace these subscripts of y (the
observed value) by 2 and 3 respectively. The grand mean is
represented by y with a bar above it. The treatment effect is given by
t with a subscript i which represents the appropriate treatment. In
our case we would use 2 for treatment 2. Finally, the residual is
5.3 BLOCKING
There are four replicates of each treatment in our experiment and the 16
plots were arranged at random with four of them on each side of the field
(see Figure 4.3b). However, the four sides of the field may well provide
slightly different environments. We will now see how to take this
information into account in a revised layout known as a randomized
complete block design and so reduce random variation and improve our
ability to detect differences between treatments.
For example, the field may be on a slope; the side at the top may have
drier or sandier soil than that at the bottom; perhaps a tall hedge runs
along one side of the field, whereas there may be only a low fence along
the other side. With the completely randomized distribution of treatments
in Chapter 4 it is very likely that we will have allocated two replicates of
a particular treatment to the field margin at the top of the slope and none
to the margin at the bottom (Figure 4.3b). (Can you calculate the
probability of each treatment being present by chance only once on all
four sides?) Such differences can be important because, for example, in a
dry season, grass growth will be less vigorous at the top of the slope and
this may mean that low-growing wild flowers have a better chance of
becoming established. Thus, if our results from the treatment in which we
sow flower seeds and cut once (F1) show it to have a large number of
herbs, this may partly be because of its over-representation on the
favourable ground at the top of the slope. If we were to carry out such
an experiment many times, such effects would even out since perhaps next
time the same treatment might be over-represented at the bottom of the
slope. However, if we have only the resources for one or two experiments
we need to find a way of overcoming variations like this.
Ideally we want one replicate of each treatment to be on each of the four
sides of the field. This type of design is called a randomized complete block.
Each side of the field is a 'block' and each block contains a complete set
of all the treatments within it. Each block is selected so that the conditions
are even or homogeneous within it but that the conditions differ between
one block and another. So in our field the top block is on slightly sandier
soil and the block on one side is more shaded because of the hedge.
However, these differences should not be too great. For example, one
block can have a higher soil moisture content than another, but if one
block is a bog then the other must not be a desert. The whole experiment
should be carried out on a reasonably uniform site.
How should we allocate the four treatments to the four plots within each
block? This must be done using random numbers (Figure 5.1a). This
ensures that each treatment has an equal probability of occurring on each
plot. We number the four plots in a block from 1 to 4. Then we use the
random number button on our calculator. If the last digit is 3 we allocate
treatment F1 to plot 3; if number 2 appears next we allocate treatment F2
to plot 2; number 1 would mean that treatment NF1 goes on plot 1,
leaving plot 4 for treatment NF2 (Figure 5.1b).
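The same per-block randomization can be sketched in Python (`random.shuffle` with a seed of our choosing stands in for the calculator's random-number button):

```python
import random

treatments = ["F1", "F2", "NF1", "NF2"]
rng = random.Random(11)

layout = {}
for block in range(1, 5):
    order = treatments[:]          # copy, so each block is shuffled afresh
    rng.shuffle(order)             # fresh randomization for this block
    layout[block] = order          # order[k] goes on plot k + 1

for block, order in layout.items():
    print(f"block {block}: {order}")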
Such blocking can also be helpful in enabling us to apply treatments
Figure 5.1 (a) Field layout of plots. (b) Allocation of four treatments to
block 2.
Figure 5.2 (a) Sown plots. (b) Recorded plots.
and to record results sensibly and fairly. Just as we have seen how to
associate environmental variation with blocks, so we can do the same with
our own behaviour. Perhaps it is possible to sow only four plots with
flower seeds in a working day, so it will take two days to sow the eight
plots required. We should sow two plots in each of two of the blocks on
the first day (Figure 5.2a) rather than sowing only one plot in all four
blocks. Then, if it rains overnight and the soil is too wet to allow the
remainder to be sown until a few days later, any differences in the
establishment of plants from the two different times of sowing will be
clearly associated with blocks. The same applies to recording the species
present. There are 48 quadrats to be recorded. If two people are sharing
the work they will have different abilities to see and to identify species
correctly. One person should record results from blocks 1 and 2 and the
other from blocks 3 and 4 (Figure 5.2b). This means that each treatment
will have half of its data recorded by each person so that any differences
in the recorders' abilities affect all treatments similarly on average. In
addition, we can account for differences between recorders; this becomes
part of the differences between blocks.
Blocking should be used wherever there may be a trend in the
environment which could affect the feature in which you are interested.
For example, in a glasshouse heating pipes may be at the rear of the bench,
so one block should be at the rear and another at the front of the bench.
Even in laboratories and growth cabinets there can be important gradients
in environmental variables which make it worthwhile arranging your plots
(pots, trays, Petri dishes, etc.) into blocks. It is common to block feeding
experiments with animals by putting the heaviest animals in one block and
the lightest ones in another block. This helps to take account of differences
in weight at the start of the experiment.
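Blocking animals by starting weight is just sorting and splitting; a sketch in Python with invented weights for eight animals:

```python
# Hypothetical starting weights (g) for eight animals, labelled a-h.
weights = {"a": 310, "b": 255, "c": 298, "d": 260,
           "e": 330, "f": 240, "g": 305, "h": 270}

ranked = sorted(weights, key=weights.get, reverse=True)
block1, block2 = ranked[:4], ranked[4:]   # heaviest four, lightest four
print("block 1 (heavy):", block1)
print("block 2 (light):", block2)
```

Treatments would then be randomized separately within each block, as with the field plots.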
The concept of blocking in an experiment is similar to that of stratifying
observed value = grand mean + treatment effect + block effect + residual

y_ij = ȳ + t_i + b_j + e_ij
Table 5.3

Block      F1     F2    NF1    NF2    Mean
  1        21     16     18     14    17.25
  2        20     16     17     13    16.5
  3        19     14     15     13    15.25
  4        18     14     16     12    15.0
Mean     19.5   15.0   16.5   13.0    16.0
Table 5.4 Calculation of expected value for each plot

Block      F1     F2    NF1    NF2    Mean
  1      20.75  16.25    .      .     17.25
  2      20.0     .      .      .     16.5
  3        .      .      .      .     15.25
  4        .      .      .      .     15.0
Mean     19.5   15.0   16.5   13.0    16.0
We can now start to construct the table of expected values. Use the model
to calculate the expected values for treatment F2 in block 1 and for
treatment F1 in block 2. You should obtain 16.25 and 20.0 respectively
(Table 5.4).
There is a quicker way to calculate expected values once the first one
has been calculated. It also sheds light on what the model is doing. On
average all plots in treatment F2 have 4.5 fewer spiders than those in
treatment F1 (19.5 - 15.0). To obtain the expected value for treatment F2
in block 1 we subtract 4.5 from the expected value for treatment F1 in
block 1. This gives 16.25. Similarly, to obtain the expected value for
treatment NF1 in block 1 we add 1.5 (the difference between the treatment
means for F2 and NF1) to the expected value for F2 in block 1 and obtain
17.75.
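The calculation of expected values is easy to automate; a Python sketch (our own, using the observed counts) that reproduces the values quoted above:

```python
# Observed spider counts: treatment -> values for blocks 1 to 4.
counts = {
    "F1":  [21, 20, 19, 18],
    "F2":  [16, 16, 14, 14],
    "NF1": [18, 17, 15, 16],
    "NF2": [14, 13, 13, 12],
}
treatments = list(counts)

grand = sum(sum(v) for v in counts.values()) / 16              # 16.0
t_eff = {t: sum(v) / 4 - grand for t, v in counts.items()}     # treatment effects
b_eff = [sum(counts[t][j] for t in treatments) / 4 - grand     # block effects
         for j in range(4)]

# expected value = grand mean + treatment effect + block effect
expected = {t: [grand + t_eff[t] + b_eff[j] for j in range(4)]
            for t in treatments}
```

This gives 20.75 for F1 in block 1, 16.25 for F2 in block 1 and 20.0 for F1 in block 2, matching the text.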
The same idea works for calculating expected values in the same
column. On average all plots in block 1 have 0.75 more spiders than those
Block
Treatment
Mean
Fl
F2
NFl
NF2
1
2
3
4
20.75
20.0
18.75
18.5
16.25
15.5
14.25
14.0
17.75
17.0
15.75
15.5
14.25
13.5
12.25
12.0
17.25
16.5
15.25
15.0
Mean
19.5
15.0
16.5
13.0
16.0
The residuals (observed value minus expected value) are:

Block      F1     F2    NF1    NF2    Mean
  1       0.25  -0.25   0.25  -0.25    0
  2       0      0.5    0     -0.5     0
  3       0.25  -0.25  -0.75   0.75    0
  4      -0.5    0      0.5    0       0
Mean      0      0      0      0       0
5.4 EXERCISE
Blocks and treatments
          None   Low   High   Mean   Change
Block 1     12     31     18
Block 2      9     14      8
Block 3      3      7      0
Block 4      3     11      5
Block 5      8     12      8
Mean
Change
Answer
Work out the mean of each row and each column and check your working
by calculating the grand mean which should be both the mean of the
column means and the mean of the row means. Then work out the change
expected when moving from one block to another and from one row to
another. You should obtain:
          None   Low   High   Mean   Change
Block 1     12     31     18    20.3
Block 2      9     14      8    10.3    -10
Block 3      3      7      0     3.3     -7
Block 4      3     11      5     6.3     +3
Block 5      8     12      8     9.3     +3
Mean       7.0   15.0    7.8     9.9
Change            +8   -7.2
Next, calculate the expected value for the top left cell in the table - using
the general principle:
fitted value = grand mean + block effect + treatment effect
where an effect is the difference between the relevant block (or treatment)
mean and the grand mean. You should obtain:
block 1 'none' fitted value = 9.9 + (20.3 - 9.9) + (7.0 - 9.9) = 17.4
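The fitted-value rule above can be sketched in a few lines of Python (a checking aid, not part of the original exercise; the counts and treatment names are taken from the table above):

```python
# Fitted value = grand mean + block effect + treatment effect,
# where an effect is (group mean - grand mean).
counts = {                      # treatment -> counts in blocks 1-5
    "None": [12, 9, 3, 3, 8],
    "Low":  [31, 14, 7, 11, 12],
    "High": [18, 8, 0, 5, 8],
}
treat_mean = {t: sum(v) / len(v) for t, v in counts.items()}
block_mean = [sum(vals) / 3 for vals in zip(*counts.values())]
grand = sum(treat_mean.values()) / 3

def fitted(block, treatment):
    """Expected value for one cell of the table (block is 0-based)."""
    return grand + (block_mean[block] - grand) + (treat_mean[treatment] - grand)

print(round(fitted(0, "None"), 1))   # block 1 'none': 17.4
```

Note that the formula simplifies to block mean + treatment mean - grand mean, which is why the rounding used in the hand calculation (9.9, 20.3, 7.0) gives the same answer.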
Then complete the table by using the 'changes' shown above, for example
block 1 'low' fitted value = 17.4 + 8 = 25.4
block 2 'none' fitted value = 17.4 - 10 = 7.4
You should obtain the following expected, predicted or fitted values based
on our model:
           None   Low    High   Mean
Block 1    17.4   25.4   18.2   20.3
Block 2    7.4    15.4   8.2    10.3
Block 3    0.4    8.4    1.2    3.3
Block 4    3.4    11.4   4.2    6.3
Block 5    6.4    14.4   7.2    9.3
Mean       7.0    15.0   7.8    9.9
The residuals (observed minus fitted) are:

           None   Low    High   Mean
Block 1    -5.4   5.6    -0.2   0
Block 2    1.6    -1.4   -0.2   0
Block 3    2.6    -1.4   -1.2   0
Block 4    -0.4   -0.4   0.8    0
Block 5    1.6    -2.4   0.8    0
Mean       0      0      0      0
Table 6.1  Number of spiders per plot

            Replicate
Treatment   1     2     3     4     Mean
Fl          21    20    19    18    19.5
F2          16    16    14    14    15.0
NFl         18    17    15    16    16.5
NF2         14    13    13    12    13.0
6.1 WHOLLY RANDOMIZED DESIGN
We will start by assuming that our data came from a wholly randomized
experiment with four replicates per treatment. We will incorporate the idea
of block variation later. Textbooks often refer to the analysis of a wholly
randomized design as one-way analysis of variance. This refers to the fact
that there is only one source of variation involved (treatments).
The analysis of variance splits up the variation:

total sum-of-squares = treatment sum-of-squares + residual sum-of-squares
We have just calculated the total sum-of-squares; therefore if we can
calculate the residual sum-of-squares then we can find out the treatment
sum-of-squares which is what we require because it tells us how much of
the total variation is due to treatments. Therefore we need to know what
the 'residuals' are. As we saw in Chapter 5 they are the differences between
the observed and expected values and in section 5.3 we saw how to
calculate expected values (Table 5.5). Thus the values in Tables 6.2 and 6.3
can be built up.
Table 6.2  Expected values

            Replicate
Treatment   1      2      3      4      Mean
Fl          19.5   19.5   19.5   19.5   19.5
F2          15.0   15.0   15.0   15.0   15.0
NFl         16.5   16.5   16.5   16.5   16.5
NF2         13.0   13.0   13.0   13.0   13.0
Table 6.3  Residuals

            Replicate
Treatment   1      2      3      4      Mean
Fl          1.5    0.5    -0.5   -1.5   0
F2          1.0    1.0    -1.0   -1.0   0
NFl         1.5    0.5    -1.5   -0.5   0
NF2         1.0    0      0      -1.0   0
total sum-of-squares = treatment sum-of-squares + residual sum-of-squares

106.0 = treatment sum-of-squares + 16.0
Figure 6.1  Number of spiders per plot for replicates 1-4 of each treatment (Fl, F2, NFl and NF2).
Therefore,

treatment sum-of-squares = total sum-of-squares - residual sum-of-squares
                         = 106.0 - 16.0 = 90.0
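The split of the total sum-of-squares can be checked with a short Python sketch (an illustration, not the book's method; MINITAB performs the same arithmetic):

```python
# One-way split: total SS = treatment SS + residual SS.
plots = {                      # treatment -> four replicate counts
    "F1":  [21, 20, 19, 18],
    "F2":  [16, 16, 14, 14],
    "NF1": [18, 17, 15, 16],
    "NF2": [14, 13, 13, 12],
}
values = [x for reps in plots.values() for x in reps]
grand = sum(values) / len(values)

total_ss = sum((x - grand) ** 2 for x in values)
residual_ss = sum((x - sum(reps) / len(reps)) ** 2
                  for reps in plots.values() for x in reps)
treatment_ss = total_ss - residual_ss

print(total_ss, residual_ss, treatment_ss)   # 106.0 16.0 90.0
```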
We see that in this case the treatments account for most of the total
variation. The experiment has been reasonably successful in that other
sources of variation have been kept to a minimum. This won't always be
the case, however, so let's consider the two extreme situations we could
encounter: where treatments explain nothing or everything (Figure 6.1).
Treatments explain nothing
If all the treatment means are the same, knowing the treatment mean
does not help you to predict the plot mean. Each treatment may contain
a great deal of random variation (Table 6.4) and the total variation
would be entirely explained by this variation within treatments.
In Table 6.4 the total sum-of-squares equals the sum-of-squares of
the residuals. The treatment sum-of-squares is zero. It is just as if we
had selected 16 values from one population at random and allocated
them as the treatment results. Note that there are many experiments
where a zero treatment effect is a very desirable result; suppose for
example you are testing for possible side-effects of herbicides on fish in
nearby streams.
Table 6.4

            Replicate
Treatment   1      2      3      4      Mean
Fl          16.0   15.0   17.0   16.0   16.0
F2          15.0   17.0   16.0   16.0   16.0
NFl         16.0   16.0   17.0   15.0   16.0
NF2         17.0   16.0   15.0   16.0   16.0
            Replicate
Treatment   1      2      3      4      Mean   Total
Fl          19.5   19.5   19.5   19.5   19.5   78
F2          15.0   15.0   15.0   15.0   15.0   60
NFl         16.5   16.5   16.5   16.5   16.5   66
NF2         13.0   13.0   13.0   13.0   13.0   52
6.2 ANALYSIS-OF-VARIANCE TABLE
Source       df
Treatments   3
Residual     12
Total        15
Table 6.6

Source       df    Sum-of-squares   Mean square   Variance ratio*
Treatments   3     90.0             30.0          22.5
Residual     12    16.0             1.3333
Total        15    106.0

* Variance ratio = treatments mean square/residual mean square. Our value here is
30/1.3333 = 22.5.
Figure 6.2  Part of the 5% F-table: the column for 3 df (treatments) crossed with the row for 12 df (residual) gives the critical value 3.49.
With a variance ratio of 22.5 in Table 6.6, with what confidence
might we reject the null hypothesis of no difference between the four
treatments? If we are not using a computer program we will need to find
the appropriate table in the back of this textbook. The table we want is
called an F-table. By comparing our calculated variance ratio (22.5) with
the appropriate values of F in the table we can estimate the probability p
of obtaining our particular results if the null hypothesis of no difference
between the treatments is correct. If this probability is very small we
decide to reject the null hypothesis and instead conclude that the
treatments do not all come from one population.
However, what is the appropriate value (usually called the critical value
of F) with which we should compare our variance ratio? It is an accepted
standard to use the table of F labelled '5%' or 'p = 0.05' (p stands for
probability; it is a proportion between 0 and 1). The complete F-table is
given as Table C.3. The numerous possible values within it depend on what
combination of degrees of freedom have been used in calculating the
variances of treatment and residual effects. We choose the column in the
F-table which has degrees of freedom for the item in which we are
interested - in this case the treatment effect. Here, this is 3 df. We choose
the row according to the degrees of freedom which go with the residual, in
this case 12 df. This value of F - 'for 3 on 12 df' - is 3.49 (Figure 6.2).
Our variance ratio of 22.5 is much greater than this. Therefore we may
conclude that we have evidence to reject the null hypothesis.
Because the table is labelled '5%' or 'p = 0.05' we can say that if the null
hypothesis were true (and the populations of the four treatments were the
same) there is a less than 5% chance of our obtaining our set of results by
a random selection from the populations. We are usually content to
conclude that this is so unlikely that we will accept the alternative
hypothesis, H1, and state that there is likely to be a difference between the
treatment population means.
Our variance ratio is so much bigger than 3.49 that it is worthwhile
checking the F-table which is labelled '1%' or 'p = 0.01' (Table C.4) or
even the one labelled '0.1%' or 'p = 0.001' (Table C.5). These two tables
give critical values of F of 5.95 and 10.8 respectively (for degrees of
freedom 3 on 12). Our variance ratio is even bigger than 10.8. Thus we can
conclude that there is very strong evidence to reject the null hypothesis.
This is because 'p = 0.001' tells us that there is only a one-in-a-thousand
chance of obtaining our particular set of experimental results if the
treatments really all have the same effect (and so the observations come
from only one population). This is such a small chance that we should
prefer to conclude that the treatments do not all have the same effect on
spiders.
Now we can state our degree of confidence in our conclusion in a
formal way. Here we are 99.9% confident that we should reject
hypothesis H0 and accept H1. Computer packages like MINITAB give
the value of p for each grouping in an analysis of variance from a built-in
database, so we do not need to consult F-tables. This means that
we obtain exact probabilities. For example, 'p = 0.045' means we are
less confident in rejecting the null hypothesis than if we saw 'p = 0.012'.
Often however p values will be represented (particularly in published
papers) by asterisks or stars: * represents p < 0.05, ** represents
p < 0.01 and *** represents p < 0.001. However, unless you are
specifically asked to present probabilities in this way you should give
the exact probabilities. Note also that when results are described as
'statistically significant' this is simply a convention for describing the
situation when the p value is less than 0.05.
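If a computer is to hand, the critical values quoted above can be reproduced with SciPy's F distribution (an aside that assumes SciPy is available; the book itself uses the printed tables):

```python
from scipy.stats import f

# Critical values of F for 3 df (treatments) on 12 df (residual).
for p in (0.05, 0.01, 0.001):
    print(p, round(f.ppf(1 - p, dfn=3, dfd=12), 2))
# Tables C.3-C.5 give 3.49, 5.95 and 10.80 for these probabilities.

# Exact probability of a variance ratio of 22.5 under the null hypothesis.
print(f.sf(22.5, dfn=3, dfd=12) < 0.001)   # True
```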
6.3
Source       df
Blocks       3
Treatments   3
Residual     9
Total        15
Because there are four blocks there are 3 df for blocks. We add together
3 df for blocks and 3 df for treatments and subtract the answer (6) from
the total df (15) to obtain the df for the residual (9). This is three fewer
than it was before because 3 df are now accounted for by blocks. Just as
the old df for the residual has now been split into df for blocks and new df
for residual so the sum-of-squares for the residual will be split: part will
now belong to blocks and the remainder will be the new, smaller, residual
sum-of-squares.
We must now calculate a sum-of-squares for blocks. The method is just
the same as for treatments. Our four block means are 17.25, 16.5, 15.25
and 15.0 and the block totals are 69, 66, 61 and 60. To find the block
sum-of-squares we enter the block totals into the calculator on statistical
mode. We then press the standard deviation button (σn-1) and square it to
find the variance. We then multiply by n - 1 (3). This gives us the
sum-of-squares of these four totals, but, as with the treatment
sum-of-squares, it ignores the fact that each of them is itself derived from
four observations (one plot from each treatment). To obtain the block
sum-of-squares on a 'per plot' scale (like the total, treatment and residual
sums-of-squares), we must divide our result by 4 (the number of
observations in each block total):
block totals 69, 66, 61, 60
standard deviation (of block totals) = 4.243
variance = 18
variance x 3 = 54
block sum-of-squares = 54/4 = 13.5
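The same calculator steps, as a Python sketch (for checking; the numbers are those used above):

```python
# Block sum-of-squares from the block totals, rescaled to a per-plot basis.
block_totals = [69, 66, 61, 60]
n = len(block_totals)
mean = sum(block_totals) / n

ss_of_totals = sum((t - mean) ** 2 for t in block_totals)   # variance x (n - 1)
block_ss = ss_of_totals / 4      # 4 observations in each block total

print(ss_of_totals, block_ss)    # 54.0 13.5
```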
We now include the block sum-of-squares in the analysis-of-variance table
(Table 6.7). The revised residual sum-of-squares is obtained by subtracting
13.5 and 90.0 (treatment sum-of-squares) from 106.0 to give 2.5.
The blocks mean square is obtained by dividing its sum-of-squares by
degrees of freedom as before. The revised residual mean square is then
obtained by dividing 2.5 by 9. The revised variance ratio for treatments is
obtained by dividing the treatments mean square by the revised residual
mean square to give 108.0. This value is then compared with the critical
Table 6.7  The analysis-of-variance table

Source       df    Sum-of-squares   Mean square   Variance ratio
Blocks       3     13.5             4.5           16.2
Treatments   3     90.0             30.0          108.0
Residual     9     2.5              0.278
Total        15    106.0
value in F-tables for '3 on 9 df'. Our variance ratio is very much bigger
than the critical F value for p = 0.001. Therefore we have strong evidence
for rejecting the null hypothesis. A common way of expressing this in
scientific papers is to say that treatment means differ (p < 0.001).
The < sign means 'less than'.
The blocks mean square (4.5), though smaller than that for treatments,
is also very much bigger than the residual mean square. Therefore we have
strong evidence that the blocks are not merely random groups. They have
accounted for site variation well, in that plots in some blocks tend to
contain more spiders than plots in other blocks. This source of variation
has been identified and separated from the residual. Thus, the amount of
random variation which remains unaccounted for is quite small.
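Pulling the pieces together, the whole of Table 6.7 can be reproduced in a short sketch (again an illustration, not the book's MINITAB session; the variance ratios match the hand calculation):

```python
# Randomized-block split: total SS = block SS + treatment SS + residual SS.
spiders = {  # treatment -> counts in blocks 1-4
    "F1":  [21, 20, 19, 18],
    "F2":  [16, 16, 14, 14],
    "NF1": [18, 17, 15, 16],
    "NF2": [14, 13, 13, 12],
}
values = [x for v in spiders.values() for x in v]
grand = sum(values) / 16

total_ss = sum((x - grand) ** 2 for x in values)
treat_ss = sum(4 * (sum(v) / 4 - grand) ** 2 for v in spiders.values())
block_means = [sum(v[b] for v in spiders.values()) / 4 for b in range(4)]
block_ss = sum(4 * (m - grand) ** 2 for m in block_means)
resid_ss = total_ss - treat_ss - block_ss

treat_ms, block_ms, resid_ms = treat_ss / 3, block_ss / 3, resid_ss / 9
print(block_ss, treat_ss, round(resid_ss, 4))   # 13.5 90.0 2.5
print(round(treat_ms / resid_ms, 1))            # 108.0
print(round(block_ms / resid_ms, 1))            # 16.2
```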
6.4 WHICH TREATMENT IS DIFFERENT?
So far we have concentrated on hypothesis testing. (Are the treatments all
the same?) This is commonly only one aspect of the problems in which we
are interested and not the most important. When we carry out experiments
we usually choose treatments which we are already reasonably sure will
have effects which differ from each other. Formally rejecting the null
hypothesis that there is no difference between the treatments is thus often
a routine matter. What is frequently of greater interest is the comparison
of the results for different treatments in the experiment. The treatment
means are only estimates based on the samples in our experiment, so we
need to calculate confidence intervals if we wish to know the range of
values within which we are 95% confident that:
1. the treatment population mean lies, or
2. a difference between two population means lies.
6.5.2
Row   treat   block   spiders
 1    1       1       21
 2    1       2       20
 3    1       3       19
 4    1       4       18
 5    2       1       16
 6    2       2       16
 7    2       3       14
 8    2       4       14
 9    3       1       18
10    3       2       17
11    3       3       15
12    3       4       16
13    4       1       14
14    4       2       13
15    4       3       13
16    4       4       12
This assumes that the plots were not blocked and so ignores the block
codes in column 2 (Stat, Anova, Balanced Anova, Response = C3,
Model = C1, Options, Display Means for C1). This asks for the data in
column 3 to be explained in terms of the codes in column 1 and the means
for each treatment to be printed. MINITAB responds by first summarizing
the model: the factor 'treat' is 'fixed' (controlled by us) and has four levels
which are coded 1, 2, 3 and 4.
Factor   Type    Levels   Values
treat    fixed   4        1 2 3 4

Source   DF   SS        MS       F       P
treat     3    90.000   30.000   22.50   0.000
Error    12    16.000    1.333
Total    15   106.000
MINITAB gives the heading 'F' to the variance ratio column (often
labelled VR in textbooks) because the variance ratio is compared with the
appropriate value in an F-table. The p value (0.000) is extremely small
(less than 0.001). The chance of obtaining our sample of results if all four
treatments are samples from the same population is extremely unlikely
(less than one chance in 1000). We conclude that the treatments do not all
have the same effect on spider numbers.
Means

treat   N   spiders
1       4   19.500
2       4   15.000
3       4   16.500
4       4   13.000

6.5.3
Now we introduce blocking. Thus there are now two sources of variation
which we can identify, treatments and blocks (Stat, Anova, Balanced
Anova, Response = C3, Model = C1 C2, Options, Display Means for
C1 C2):
Factor   Type     Levels   Values
treat    fixed    4        1 2 3 4
block    random   4        1 2 3 4

Source   DF   SS        MS       F        P
treat     3    90.000   30.000   108.00   0.000
block     3    13.500    4.500    16.20   0.001
Error     9     2.500    0.278
Total    15   106.000
Means

treat   N   spiders
1       4   19.500
2       4   15.000
3       4   16.500
4       4   13.000

block   N   spiders
1       4   17.250
2       4   16.500
3       4   15.250
4       4   15.000
The variance ratio for treatments* (F = 108) is now much higher than it
was in the one-way analysis. We have removed variation between blocks
from the random variation (error). The treatment mean square is therefore
now compared with a much smaller mean square for randomness (0.278).
* Note: MINITAB labels the variance ratio column 'F', which represents the theoretical
values found in the F-table, against which the calculated variance ratio is compared.
We now have even stronger evidence for rejecting the null hypothesis of
no difference between the effects of the four treatments.
Figure 6.3  Number of spiders per plot for each treatment.

Figure 6.4  Mean number of spiders per plot for each treatment.
Factor   Type     Levels   Values
treat    fixed    4        1 2 3 4
block    random   4        1 2 3 4

Source   DF   SS        MS       F        P
treat     3    90.000   30.000   108.00   0.000
block     3    13.500    4.500    16.20   0.001
Error     9     2.500    0.278
Total    15   106.000
treat   block   spiders   RESI1    FITS1
1       1       21         0.25    20.75
1       2       20         0.00    20.00
1       3       19         0.25    18.75
1       4       18        -0.50    18.50
2       1       16        -0.25    16.25
2       2       16         0.50    15.50
2       3       14        -0.25    14.25
2       4       14         0.00    14.00
3       1       18         0.25    17.75
3       2       17         0.00    17.00
3       3       15        -0.75    15.75
3       4       16         0.50    15.50
4       1       14        -0.25    14.25
4       2       13        -0.50    13.50
4       3       13         0.75    12.25
4       4       12         0.00    12.00
6.6 EXERCISE
You have carried out an experiment to compare the yields (t/ha) of four
varieties of a crop (V1 to V4). The layout plan (as viewed from the air)
also shows the yield from each plot:
Figure 6.5  Residual plots for the spider data: I chart of residuals, histogram of residuals, and residuals vs. fitted values.
Block 1:   V1 5.5   V3 6.6   V4 7.1   V2 6.4
Block 2:   V3 6.3   V4 6.8   V2 6.0   V1 5.7
Block 3:   V3 5.6   V2 5.1   V1 4.6   V4 6.7
Block 4:   V4 5.3   V1 3.2   V3 4.7   V2 3.6
Block 5:   V4 3.9   V1 2.7   V2 3.4   V3 4.0
Answer
Row   block   variety   yield
 1    1       1         5.5
 2    1       2         6.4
 3    1       3         6.6
 4    1       4         7.1
 5    2       1         5.7
 6    2       2         6.0
 7    2       3         6.3
 8    2       4         6.8
 9    3       1         4.6
10    3       2         5.1
11    3       3         5.6
12    3       4         6.7
13    4       1         3.2
14    4       2         3.6
15    4       3         4.7
16    4       4         5.3
17    5       1         2.7
18    5       2         3.4
19    5       3         4.0
20    5       4         3.9
Factor    Type    Levels   Values
block     fixed   5        1 2 3 4 5
variety   fixed   4        1 2 3 4
Source    DF   SS        MS       F       P
block      4   25.6480   6.4120   76.33   0.000
variety    3    7.2920   2.4307   28.94   0.000
Error     12    1.0080   0.0840
Total     19   33.9480
Means

block   N   yield
1       4   6.4000
2       4   6.2000
3       4   5.5000
4       4   4.2000
5       4   3.5000

variety   N   yield
1         5   4.3400
2         5   4.9000
3         5   5.4400
4         5   5.9600
The results are shown in Figures 6.6 and 6.7. We conclude that there is
very strong evidence that the variety population means are not all the same
(p < 0.001). The residual plots reassure us that the model is valid as the
residuals are approximately Normally distributed and show similar
variability over the range of fitted values.
Figure 6.6  Mean yield (t/ha) of each variety.
Figure 6.7  Residual plots: Normal plot of residuals, I chart of residuals, histogram of residuals, and residuals vs. fits.
CONFIDENCE INTERVALS
Treatment NF2 (no seed mixture, cut twice per year) has an estimated
mean of 13 spiders per plot. Our best estimate of the difference between
the two treatments is 19.5 - 13, i.e. 6.5 spiders per plot. We now wish to
Figure 7.1  The 95% confidence interval for the mean number of spiders per plot
on treatment Fl.
calculate a 95% confidence interval for the difference between the two
population means:

difference ± (t x standard error of the difference)
The difference is 6.5. As usual, t is for residual degrees of freedom (12),
so it is 2.179 as before.
The standard error of the difference has to take into account the
variance of the first mean and the variance of the second mean. If we are
estimating how far apart two population means are and we have uncertain
estimates of each of them, both amounts of uncertainty will have an effect
on the range of size of the difference.
Both treatments have four replicates. Therefore, in this case, the two
treatment mean variances are the same as each other. We add the two
variances of the means together:
0.0695 + 0.0695 = 0.139
We then take the square root, to give 0.373 as the standard error of the
difference between the two means. Putting the values into the formula, the
confidence interval is:
6.5 ± (2.179 x 0.373)
or 6.5 ± 0.813
or from 5.7 to 7.3
We are therefore 95% confident that the mean difference in spider numbers
between these two populations is in this range. This can be put another
way: There is no more than a 5% chance that the mean difference lies
outside this range and, since the probabilities are equal that it lies above as
below, no more than a 2.5% chance that the difference is less than 5.7.
With these results, it is very unlikely that the mean difference is really zero.
Thus we can be very confident that the difference between the two
treatments is real and not caused by chance.
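The interval arithmetic above, as a sketch (the t value of 2.179 for 12 residual df and the residual mean square of 0.278 are the values quoted in the text):

```python
import math

t_95 = 2.179                 # t for 12 residual df, 95% confidence
var_mean = 0.278 / 4         # variance of each treatment mean
sed = math.sqrt(var_mean + var_mean)   # standard error of the difference

diff = 19.5 - 13.0           # Fl mean minus NF2 mean
half_width = t_95 * sed
print(f"{diff - half_width:.1f} to {diff + half_width:.1f}")   # 5.7 to 7.3
```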
A common way of comparing treatment means after analysis of
variance is to ask whether the difference between them is greater than a
certain amount. If it is, then we can say that the two treatments are
significantly different - in other words that we are (usually 95%) confident
that they come from populations which have different means. The 'certain
amount' is called the least significant difference (LSD):
LSD = (t x standard error of the difference) = t x SED

So, in our example above

LSD = 2.179 x 0.373 = 0.813
The difference between our two treatment means is 6.5. As 6.5 is greater
than 0.813 we can be 95% sure that the samples come from populations
with different means.
As 6.5 is very much greater than 0.813 we can use a value of t for a
higher level of confidence: t for 99.9% confidence and for 12 df is 4.318
(Table C.2). So the new LSD = 4.318 x 0.373 = 1.611. As 6.5 is greater
than 1.611 we can be 99.9% confident that the samples come from
populations with different means.
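A sketch of the LSD comparison (the t values are those quoted from Table C.2):

```python
# Least significant difference = t x SED.
sed = 0.373
lsd_95 = 2.179 * sed    # t for 95% confidence, 12 df
lsd_999 = 4.318 * sed   # t for 99.9% confidence, 12 df

diff = 6.5              # difference between the two treatment means
print(round(lsd_95, 3), round(lsd_999, 3))   # 0.813 1.611
print(diff > lsd_999)   # True: significant at the 99.9% level
```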
Because we have 3 df we are allowed to make three particular
comparisons between treatments to determine what is having the most
effect on spider numbers. In this well-designed experiment there is one set
of three comparisons which is most informative. We will now outline the
reasons for this.
                      Factor 2: Cutting
Factor 1: Sowing      Level 1 (cut once)   Level 2 (cut twice)
Level 1: Sown         F1                   F2
Level 2: Unsown       NF1                  NF2
Table 7.2

                  Protein in diet
Fibre in diet     low      medium   high     very high
low               5 reps   5 reps   5 reps   5 reps
medium            5 reps   5 reps   5 reps   5 reps
high              5 reps   5 reps   5 reps   5 reps
cut once while the other four were cut twice, and similarly for the unsown
plots. It simply asks, in general, did sowing a wild flower mixture have
any effect?
The second line represents the main effect of cutting. In other words,
we compare the spider numbers on the eight plots which were cut once
with those on the eight plots which were cut twice. Does cutting once, in
general, have a different effect from cutting twice?
The third line represents a very important idea. It enables us to see
whether cutting once instead of twice changes spider numbers similarly
on all plots irrespective of whether they were sown with wild flower
seed or not. If this test is not significant we can assume that the two
factors are independent. If this test is significant we say that the two
factors interact; in other words the effect of cutting is not consistent,
rather it depends on whether seed has been sown or not. For example,
suppose that spiders liked to hunt on the wild flowers (because flies
are attracted to the flowers) but the second cut in the plots that receive
it removes the flower heads - hence no flies, hence no great abundance
of spiders. In this case an interaction would be expected - sowing wild
flowers would give more spiders than on unsown plots when plots were
also cut only once but this would not be so in the plots which were
cut twice.
Table 7.3

Source            df   Sum-of-squares
Sowing            1    25
Cutting           1    64
Interaction       1
Four treatments   3    90
Residual          12   16
Total             15   106
Therefore

interaction sum-of-squares = treatment sum-of-squares
                             - sowing sum-of-squares
                             - cutting sum-of-squares
                           = 90 - 25 - 64 = 1
We find the mean square for each of the three component lines of the
treatment line by dividing their sums-of-squares by their degrees of
freedom. As in this case the df are always 1, the mean squares equal the
sums-of-squares. If, as with the structure in Table 7.2, factors have more
than two levels then the df will not be 1 and hence mean squares will not
equal sums-of-squares. For example, in Table 7.2 the main effect for fibre
in the diet has 2 df as there are three levels.
Then each mean square is divided by the residual mean square from
the original analysis (Table 6.7) to produce three variance ratios which can
each be compared with the critical F value from tables for 1 on 9 df
(5.12). The annotated MINITAB printout in section 7.3 shows how the
computer package carries out all these calculations for us.
cannot generalize about sowing seed; its effect on spider numbers depends
on how frequently we cut the vegetation. A complete factorial structure
can have many factors, each at many levels but they must be balanced: all
possible combinations of factors at all levels must be present. If any are
missing then the analysis described above will not work. However,
MINITAB can be used to analyse such unbalanced data whether the lack
of balance is intentional or accidental (as when one or more plots are lost
because of an accident). An example of such an analysis is given in section
7.3.3.
A useful check on the degrees of freedom in the Anova plan is that the
number of df for an interaction can be found by multiplying together the
number of df for each of the component factors (hence 1 x 1 = 1 above or
2 x 3 = 6 in Table 7.2). A more obvious way of obtaining the number of
interaction df is to take away the df for all main effects (and any other
interactions) from the treatment df. Therefore in Table 7.2 this gives
11 - 2 - 3 = 6.
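The main-effect and interaction sums-of-squares can be checked with a sketch (the seeds/cut coding follows the MINITAB worksheet in section 7.3; this is an illustration, not the book's own calculation):

```python
# (seeds, cut, spiders) for the 16 plots; seeds 1 = sown, cut 1 = cut once.
plots = (
    [(1, 1, s) for s in (21, 20, 19, 18)] +   # F1
    [(1, 2, s) for s in (16, 16, 14, 14)] +   # F2
    [(0, 1, s) for s in (18, 17, 15, 16)] +   # NF1
    [(0, 2, s) for s in (14, 13, 13, 12)]     # NF2
)
grand = sum(s for _, _, s in plots) / 16

def effect_ss(index):
    """Sum-of-squares for the main effect of one factor (0 = seeds, 1 = cut)."""
    ss = 0.0
    for level in {p[index] for p in plots}:
        group = [p[2] for p in plots if p[index] == level]
        ss += len(group) * (sum(group) / len(group) - grand) ** 2
    return ss

sowing_ss, cutting_ss = effect_ss(0), effect_ss(1)
treatment_ss = sum(                      # 4 plots per treatment combination
    4 * (sum(s for a, b, s in plots if (a, b) == combo) / 4 - grand) ** 2
    for combo in {(a, b) for a, b, _ in plots}
)
interaction_ss = treatment_ss - sowing_ss - cutting_ss
print(sowing_ss, cutting_ss, interaction_ss)   # 25.0 64.0 1.0
```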
7.3 MINITAB

7.3.1 The effect of cutting and sowing on spider numbers

Factorial analysis of variance, using codes for sowing and cutting levels.
We display the data:
Row   treat   block   spiders   seeds   cut
 1    1       1       21        1       1
 2    1       2       20        1       1
 3    1       3       19        1       1
 4    1       4       18        1       1
 5    2       1       16        1       2
 6    2       2       16        1       2
 7    2       3       14        1       2
 8    2       4       14        1       2
 9    3       1       18        0       1
10    3       2       17        0       1
11    3       3       15        0       1
12    3       4       16        0       1
13    4       1       14        0       2
14    4       2       13        0       2
15    4       3       13        0       2
16    4       4       12        0       2
here is: please examine the main effect of seeds, the main effect of cut
and the interaction between them.
Analysis of Variance (Balanced Designs)

Factor   Type    Levels   Values
block    fixed   4        1 2 3 4
seeds    fixed   2        0 1
cut      fixed   2        1 2

Source      DF   SS        MS       F        P
block        3    13.500    4.500    16.20   0.001
seeds        1    25.000   25.000    90.00   0.000
cut          1    64.000   64.000   230.40   0.000
seeds*cut    1     1.000    1.000     3.60   0.090
Error        9     2.500    0.278
Total       15   106.000
0.090
The p values for seeds and for cut are both very small, showing that
there is very strong evidence that both factors affect spider numbers.
However, the p value for the interaction is greater than 0.05, showing that
there is no evidence for an interaction between the factors. The main
effects may be interpreted independently of one another: sowing a seed
mixture increases spider numbers, as does only cutting once.
Means

block   N   spiders
1       4   17.250
2       4   16.500
3       4   15.250
4       4   15.000

seeds   N   spiders
0       8   14.750
1       8   17.250

cut     N   spiders
1       8   18.000
2       8   14.000

seeds   cut   N   spiders
0       1     4   16.500
0       2     4   13.000
1       1     4   19.500
1       2     4   15.000
We can plot the number of spiders in each plot against the two levels
of sowing (Stat, Anova, Main Effects Plot, Factors Seeds Cut,
Response Data in Spiders). We can do the same for cutting. The unedited
results are shown in Figure 7.2.
Figure 7.2  Main effects plot (unedited): mean number of spiders per plot for each level of seeds and of cut.
To edit this graph, maximize its size and double-click to produce the
toolbars. Use these to amend the size, appearance, position and nature of
text and to add lines or shading. Also use them to delete unwanted items.
An example of edited graphs is shown in Figure 7.3.
Figure 7.3  Main effects of seeding and cutting on mean number of spiders per plot.
7.3.2 The effect of amount of phosphate and contact with plant tissue on
bacterial growth

We have designed an experiment to investigate the importance of
increasing amounts of phosphate (1, 2, 3 and 4 mg per culture) and of
contact with living plant tissue (present or absent) on the extension of a
bacterial colony (mm per day).
The experiment was carried out by inoculating a carbohydrate medium
on Petri dishes with the bacterium. One set of each of the eight treatments
was established on each of three days, making 24 dishes in all. The
amounts of growth are analysed in MINITAB:
Row   expt   contact   phos   exten   treat
 1    1      1         1      10      1
 2    1      0         1       6      2
 3    1      1         2      13      3
 4    1      0         2      11      4
 5    1      1         3      14      5
 6    1      0         3      20      6
 7    1      1         4      16      7
 8    1      0         4      22      8
 9    2      1         1      12      1
10    2      0         1      10      2
11    2      1         2      13      3
12    2      0         2      13      4
13    2      1         3      14      5
14    2      0         3      14      6
15    2      1         4      14      7
16    2      0         4      18      8
17    3      1         1      14      1
18    3      0         1      10      2
19    3      1         2      12      3
20    3      0         2      10      4
21    3      1         3      10      5
22    3      0         3      14      6
23    3      1         4      16      7
24    3      0         4      18      8
Factor    Type     Levels   Values
expt      random   3        1 2 3
contact   fixed    2        0 1
phos      fixed    4        1 2 3 4

Source         DF   SS        MS       F       P
expt            2     4.000    2.000    0.41   0.670
contact         1     2.667    2.667    0.55   0.471
phos            3   166.000   55.333   11.39   0.000
contact*phos    3    57.333   19.111    3.93   0.031
Error          14    68.000    4.857
Total          23   298.000
Means

expt   N   exten
1      8   14.000
2      8   13.500
3      8   13.000

contact   N    exten
0         12   13.833
1         12   13.167

phos   N   exten
1      6   10.333
2      6   12.000
3      6   14.333
4      6   17.333

contact   phos   N   exten
0         1      3    8.667
0         2      3   11.333
0         3      3   16.000
0         4      3   19.333
1         1      3   12.000
1         2      3   12.667
1         3      3   12.667
1         4      3   15.333
We need to plot the eight treatment means to see the interaction effect
- code numbers 1 to 8 in column 5 (Graph, Character Graphs, Boxplot).
The result is shown in Figure 7.4.
The bacteria not in contact with plant tissue (even-numbered treatments: 2, 4, 6 and 8) show a great response to increasing amounts of
phosphate. In contrast, the bacteria which are in contact with plant tissue
(odd-numbered ones: 1, 3, 5 and 7) show much less response. Perhaps this
is better displayed in an interaction diagram (Figure 7.5). We might
speculate on the biological reasons for such a difference.
We can plot a histogram of the residuals and a graph of residuals
against fitted values (Figure 7.6). If, as here, the histogram follows an
approximately Normal distribution and there is no pattern in the other
plot, this reassures us that the assumptions are reasonable.
Other particular comparisons between treatment means or groups of
treatment means can be made but this should be done with care. Ask a
statistician for advice before carrying out your experiment. He or she will
help you answer your questions efficiently and validly.
7.3.3 What to do if you have missing observations
Observations of zero
It is important to realize the difference between missing values caused by
accidents and values of zero in a data set. The latter may represent death
Figure 7.4  Character boxplots of colony extension (exten, mm per day) for the eight treatments.
These will happen. Despite taking care with your experiment, something
may well go wrong. You accidentally drop a test tube on the floor, the
tractor driver accidentally ploughs the wrong plot, or your helper throws
Figure 7.5 The effect of increasing amounts of phosphate and contact with plant
tissue on the extension rate of a bacterial colony.
away a paper bag containing the shoots from one plant before you have
weighed them. You now have the problem of one (or more) missing
observations. It is sensible to represent these by a '*' on your record sheet
(with a footnote describing the problem) so that you do not confuse them
with a real value of zero.
Figure 7.6  Residual plots from the bacterial growth experiment: Normal plot of residuals, I chart of residuals, histogram of residuals, and residuals vs. fits.
Since you have fewer observations you have less information and so
the experiment is less likely to be able to detect differences between the
populations which you are comparing. Another problem is that you now
have more information about some treatments than about others. A one-way analysis of variance can be carried out in MINITAB as usual. It takes
into account the differing replication between the treatments.
In a wholly randomized experiment we compared the effects of six
different plant hormones on floral development in peas. A standard
amount was applied to the first true leaf of each of four plants for each
hormone, making 24 plants in the experiment. Unfortunately, two plants
were knocked off the greenhouse bench and severely damaged: one from
hormone 2 and one from hormone 6. So we have data on the number of
the node (the name given to places on the stem where leaves or flowers
may appear) at which flowering first occurred for only 22 plants. This is in
C2 of a MINITAB worksheet, with treatment codes in C1.
ROW   hormone   node
  1      1       49
  2      2       53
  3      3       49
  4      4       53
  5      5       51
  6      6       58
  7      1       52
  8      3       50
  9      4       57
 10      5       56
 11      1       51
 12      2       54
 13      3       52
 14      4       54
 15      5       51
 16      6       54
 17      1       48
 18      2       57
 19      3       49
 20      4       51
 21      5       54
 22      6       55
Factor    Type    Levels   Values
hormone   fixed   6        1 2 3 4 5 6

Analysis of Variance for node
Source    DF    SS        MS       F      p
hormone    5    101.008   20.202   4.61   0.009
Error     16     70.083    4.380
Total     21    171.091
Means
hormone   N   node
1         4   50.000
2         3   54.667
3         4   50.000
4         4   53.750
5         4   53.000
6         3   55.667
For hormones 2 and 6, where we have only three replicates, the SE for
each mean is √(4.380/3) = 1.21; whereas, for the remaining hormones, it is
√(4.380/4) = 1.05, which is considerably smaller.
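As a check outside MINITAB, the one-way analysis and these standard errors can be reproduced by hand. A minimal Python sketch (the data are transcribed from the worksheet above; all names are my own):

```python
from math import sqrt

# Node of first flowering for each hormone (hormones 2 and 6 lost one plant).
groups = {
    1: [49, 52, 51, 48],
    2: [53, 54, 57],
    3: [49, 50, 52, 49],
    4: [53, 57, 54, 51],
    5: [51, 56, 51, 54],
    6: [58, 54, 55],
}

n = sum(len(v) for v in groups.values())                  # 22 plants
grand_mean = sum(sum(v) for v in groups.values()) / n

# Between-treatments and within-treatments (error) sums of squares.
ss_hormone = sum(len(v) * (sum(v) / len(v) - grand_mean) ** 2
                 for v in groups.values())
ss_error = sum((x - sum(v) / len(v)) ** 2
               for v in groups.values() for x in v)

ms_error = ss_error / (n - len(groups))                   # 16 error df
f_ratio = (ss_hormone / (len(groups) - 1)) / ms_error

# SE of each treatment mean depends on that treatment's replication.
se = {h: sqrt(ms_error / len(v)) for h, v in groups.items()}
```

To rounding, the printout's sums of squares (101.008 and 70.083), the error mean square 4.380 and F = 4.61 all fall out of these few lines, as do the two standard errors.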
Perhaps we realized that our glasshouse was not a homogeneous
environment and so laid out our experiment in four randomized complete
blocks. We now have a two-way analysis of variance. However, this is not
balanced:
                     Hormone
Block     1    2    3    4    5    6
1        49   53   49   53   51   58
2        52    *   50   57   56    *
3        51   54   52   54   51   54
4        48   57   49   51   54   55
ROW   block   hormone   node
  1     1        1       49
  2     1        2       53
  3     1        3       49
  4     1        4       53
  5     1        5       51
  6     1        6       58
  7     2        1       52
  8     2        3       50
  9     2        4       57
 10     2        5       56
 11     3        1       51
 12     3        2       54
 13     3        3       52
 14     3        4       54
 15     3        5       51
 16     3        6       54
 17     4        1       48
 18     4        2       57
 19     4        3       49
 20     4        4       51
 21     4        5       54
 22     4        6       55
Figure 7.7 High-quality boxplot of number of nodes for each of six hormones.
Factor    Levels   Values
hormone   6        1 2 3 4 5 6
block     4        1 2 3 4

Analysis of Variance for node
Source    DF   Seq SS    Adj SS    Adj MS   F      p
hormone    5   101.008   117.632   23.526   6.56   0.003
block      3    23.465    23.465    7.822   2.18   0.139
Error     13    46.618    46.618    3.586
Total     21   171.091
The puzzling thing about this response is that there are two columns of
sums-of-squares. These are 'sequential' (Seq) and 'adjusted' (Adj). When
an experiment is balanced the order in which we ask the terms in the model
(here hormones and blocks) to be fitted doesn't affect the outcome. There
is only one value for the sum-of-squares due to hormones, whether we ask
it to be fitted either before or after blocks. However, now that we have an
unbalanced experiment, the order in which the terms are fitted is very
important.
In the sequential sum-of-squares column, hormone has been fitted first
and then blocks. This was the order in which we asked for the two terms in
the glm line. However, in the adjusted sum-of-squares column each term
has the sum-of-squares appropriate to it if it were fitted last in the model.
Here, if hormones is fitted before blocks its SS is 101.0, but if it is fitted
after blocks its SS is 117.6. It is sensible to use the adjusted sum-of-squares
here. This is appropriate since it represents evidence for differences
between the hormone effects after taking into account the fact that some
hormones were not represented in all of the blocks.
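The distinction can be made concrete with a little arithmetic: a term's sum-of-squares is the drop in residual SS when that term is added to the model. A pure-Python sketch (data from the worksheet above; the error SS for the full hormone + block model, 46.618, is taken from the printout rather than refitted here):

```python
# (block, hormone, node) for the 22 surviving plants.
data = [
    (1, 1, 49), (1, 2, 53), (1, 3, 49), (1, 4, 53), (1, 5, 51), (1, 6, 58),
    (2, 1, 52), (2, 3, 50), (2, 4, 57), (2, 5, 56),
    (3, 1, 51), (3, 2, 54), (3, 3, 52), (3, 4, 54), (3, 5, 51), (3, 6, 54),
    (4, 1, 48), (4, 2, 57), (4, 3, 49), (4, 4, 51), (4, 5, 54), (4, 6, 55),
]

def rss(key):
    """Residual SS after fitting a separate mean for each level of key."""
    levels = {}
    for row in data:
        levels.setdefault(key(row), []).append(row[2])
    return sum((y - sum(v) / len(v)) ** 2
               for v in levels.values() for y in v)

rss_mean = rss(lambda r: 0)        # mean only: the total SS, 171.091
rss_hormone = rss(lambda r: r[1])  # hormone means fitted: 70.083
rss_block = rss(lambda r: r[0])    # block means fitted: 164.250
rss_full = 46.618                  # hormone + block, from the glm printout

seq_ss_hormone = rss_mean - rss_hormone   # hormone fitted first: 101.008
adj_ss_hormone = rss_block - rss_full     # hormone fitted last: 117.632
adj_ss_block = rss_hormone - rss_full     # blocks fitted last: 23.465
```

In a balanced experiment the two routes give the same answer; here they differ because block and hormone effects are partly entangled.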
Let's now look at the hormone means. Although the data are identical
with the figures used in the one-way analysis, the means are not the
same:
Means for node

block     Mean    StDev
1         52.17   0.7731
2         55.14   0.9981
3         52.67   0.7731
4         52.33   0.7731

hormone   Mean    StDev
1         50.00   0.9468
2         55.35   1.1270
3         50.00   0.9468
4         53.75   0.9468
5         53.00   0.9468
6         56.35   1.1270
For example, the mean for hormone 2 has increased from 54.667 to
55.35. This adjustment takes account of the fact that the representative of
hormone 2 in block 2 was accidentally lost. Block 2 was, in general, one
in which, for those treatments which were present, flowering was initiated
Figure 7.8 Residual plots for the hormone experiment: Normal plot of residuals, I chart of residuals, histogram of residuals, and residuals against fitted values.
at a higher node number. It is only fair that we should adjust the value
for hormone 2 upwards. This method provides us with an estimate of the
results we might have expected to obtain if all hormones were present in
all blocks.
We plot a histogram of the residuals and a graph of residuals against
fitted values. These are shown in Figure 7.8. The residuals are slightly
skewed to the right but the variance is similar across the range of fitted
values.
7.4 EXERCISES
Confidence interval and least significant difference
(i) Calculate a 95% confidence interval for the population mean yield of
variety 4 in section 6.6.
(ii) Calculate a 95% least significant difference for the difference between
any two variety means in section 6.6. Is there evidence for a significant
difference between varieties 1 and 2?
Answer
The standard error of the mean is √(0.084/4) = 0.1449.
Therefore the confidence interval is
4.2 ± (2.179 × 0.1449) = 4.2 ± 0.32
ROW   irrig   fert   height
  1     0      0       17
  2     0      0       16
  3     0      0       15
  4     0      0       18
  5     1      0       14
  6     1      0       16
  7     1      0       16
  8     1      0       14
  9     0      1       13
 10     0      1       13
 11     0      1       14
 12     0      1       12
 13     1      1       21
 14     1      1       20
 15     1      1       19
 16     1      1       18
Carry out an analysis of variance with a model accounting for the effects
of the two factors and their interaction and display the relevant means.
Produce an interaction plot (Stat, Anova, Balanced Anova, Interactions
Plot, Factors irrig fert, Response data height) and a residual plot and
interpret them.
Answer
Analysis of Variance (Balanced Designs)

Factor   Type    Levels   Values
irrig    fixed   2        0 1
fert     fixed   2        0 1

Analysis of Variance for height
Source       DF   SS        MS        F       p
irrig         1    25.000    25.000   18.75   0.001
fert          1     1.000     1.000    0.75   0.403
irrig*fert    1    64.000    64.000   48.00   0.000
Error        12    16.000     1.333
Total        15   106.000
Means
irrig    N   height
0        8   14.750
1        8   17.250

fert     N   height
0        8   15.750
1        8   16.250

irrig   fert   N   height
0       0      4   16.500
0       1      4   13.000
1       0      4   15.000
1       1      4   19.500
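For a balanced 2 × 2 factorial like this one, the whole Anova table can be rebuilt from the grand, row, column and cell means. A small Python sketch (data transcribed from the worksheet; names are my own):

```python
# Heights by (irrig, fert) cell; four replicates per cell.
cells = {
    (0, 0): [17, 16, 15, 18],
    (1, 0): [14, 16, 16, 14],
    (0, 1): [13, 13, 14, 12],
    (1, 1): [21, 20, 19, 18],
}

def mean(v):
    return sum(v) / len(v)

grand = mean([y for v in cells.values() for y in v])
irrig_mean = {i: mean([y for (a, b), v in cells.items() if a == i for y in v])
              for i in (0, 1)}
fert_mean = {j: mean([y for (a, b), v in cells.items() if b == j for y in v])
             for j in (0, 1)}

# Each main-effect SS uses the 8 observations per level; the interaction
# SS uses the 4 observations per cell.
ss_irrig = 8 * sum((m - grand) ** 2 for m in irrig_mean.values())
ss_fert = 8 * sum((m - grand) ** 2 for m in fert_mean.values())
ss_interaction = 4 * sum((mean(v) - irrig_mean[i] - fert_mean[j] + grand) ** 2
                         for (i, j), v in cells.items())
ss_error = sum((y - mean(v)) ** 2 for v in cells.values() for y in v)
```

This reproduces the table above: 25, 1, 64 and 16, with the large interaction SS confirming that the effect of fertilizer depends on irrigation.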
Figure 7.9 Interaction plot of mean height against fertilizer level for the two irrigation levels.
Figure 7.10 Residual plots for the irrigation and fertilizer experiment: Normal plot of residuals, I chart of residuals, histogram of residuals, and residuals against fitted values.
General linear model analysis for missing data
You have discovered that unfortunately the plants in the last plot in your
experiment have been destroyed. Erase row 16 from your worksheet and
reanalyse the data using the general linear model option instead of
balanced Anova.
Answer
Analysis of Variance for height
Source       DF   Seq SS    Adj SS   Adj MS   F       p
irrig         1   21.376    27.923   27.923   23.63   0.000
fert          1    0.665     2.077    2.077    1.76   0.212
irrig*fert    1   66.692    66.692   66.692   56.43   0.000
Error        11   13.000    13.000    1.182
Total        14  101.733
irrig    Mean    StDev
0        14.75   0.3844
1        17.50   0.4151

fert     Mean    StDev
0        15.75   0.3844
1        16.50   0.4151

irrig   fert   Mean    StDev
0       0      16.50   0.5436
0       1      13.00   0.5436
1       0      15.00   0.5436
1       1      20.00   0.6276
Note that there are now only 15 observations and so the total degrees of
freedom have been reduced to 14. MINITAB calculates standard errors
(SE) for all the means but labels them StDev (standard deviation). The SE
for means with 8 replicates (fert 0 and irrig 0) is 0.3844. This is obtained
by taking the adjusted MS of 1.182, dividing by 8 and taking the square
root. The SE values of 0.4151 are obtained (approximately) by dividing by 7
(for the seven replicates) and those of 0.5436 and 0.6276 by dividing by 4
and 3 respectively (according to the replication).
8 Relating one thing to another
The previous chapters have been concerned with estimating the effects of
different treatments (for example frequency of cutting vegetation) and
testing the null hypothesis that they all have the same effect (for example
on the number of spiders). Here we will see what to do when our
treatments consist of increasing amounts or 'levels' of a factor (for
example water on plants, doses of a drug given to humans).
In this case we want to describe the nature of the relationship (if any)
between the amount of water and plant growth or between the dose of the
drug and the time taken for symptoms of disease to disappear.
The first thing to do is to plot our observations. The usual practice is
to put the response variable (plant weight or time for symptoms to
disappear) on the left-hand or vertical or y axis (also called the ordinate)
and the amount (of water or drug) on the horizontal or x axis (also called
the abscissa). The response variable is also known as the dependent variable
because it may depend on (be affected by) the amount of the substance
applied (the independent variable).
If there is no relationship between the two variables the plot of response
against amount will show a random scatter of points (Figure 8.1a). The
points do not form any pattern. If we find this, we do not need to carry
out any further analysis, as knowing the amount of substance applied does
not help us to predict the size of the response - the 'best' line through these
points is a horizontal one (Figure 8.1b). However, we may see a pattern.
The simplest pattern is where the points seem to lie about a straight line
with a slope. The line may slope up (a positive relationship in which more
of x tends to produce more of y) or down (a negative relationship in which
more of x leads to less of y) (Figures 8.1c and d). We then need to find a
way of describing the relationship so that we can make use of it in future
to predict just how much more (or less) y we will get for a given amount of
x.
Another possibility is that the points seem to lie on a curve. Perhaps
Figure 8.1 Plots of response against amount: (a), (b) a random scatter of points with a horizontal best line; (c), (d) straight-line relationships with positive and negative slopes; (e), (f) curved relationships.
'regress' (go back) towards the mean population height. The name
regression has since stuck to describe the relationship that exists between
two or more variables (in Galton's case, parents' height and height of
child; in the example above, plant growth and amount of water).
To develop the idea further let us take the example of increasing activity
affecting a person's percentage of body fat. First we have to have obtained
appropriate permission for the clinical trial to be carried out. We have to
assume that the people chosen are representative of the population we wish
to study (say, women in a county who are both aged between 30 and 40
and are overweight). We will ask them to take different amounts of exercise
and we will take blood samples monthly to record the concentration of a
chemical which is known to reflect percentage body fat very well. We will
select individual women at random, discuss the project with them and ask
them if they are willing to take part. If so, we allocate ten at random to
the lowest amount of exercise and ten each to take each of three higher
amounts, and ask them to record their actual amount of exercise.
Ideally, we do not inform the women whether they are taking a
relatively high or low amount of exercise (this is known as a blind trial).
This ensures that there is no psychological bias. Otherwise the knowledge
that they were, for example, taking the greatest amount of exercise might
subconsciously affect the manner in which they carried out the exercise
and, hence, the results. In the same way, if you are recording the results of
an experiment try to avoid thinking about what results you expect (want)
from each treatment as you are recording it.
We can now work out how linear regression operates by developing a
model as we did for analysis of variance (Chapter 6). First, imagine that
we knew the concentration of the chemical in the blood at the end of the
trial for each woman, but we did not know the amount of exercise which
had been taken. The sum-of-squares (of the differences between the
observations and their mean) can be calculated (see Figure 8.2, which
shows only a few observations, for clarity). It represents the total amount
of variability in our data.
Figure 8.2 Observations of chemical concentration against exercise, with the overall mean shown as a horizontal line.
Figure: the regression line y = a + bx fitted to chemical concentration (g/litre) against exercise (min), showing the observed value of chemical for 10 minutes exercise.
y = a + bx, where a is the intercept and b is the regression coefficient (the slope).
Let us use MINITAB to examine the data from our clinical trial. We
have entered the x values (amount of brisk walking in minutes per day)
and the y values (concentration of a chemical in the blood in grams per
litre) into columns 1 and 2:
Row   exercise   chemical
 1        5        0.90
 2        5        0.85
 3        6        0.79
 4        6        0.85
 5        7        0.87
 6        7        0.82
 7        7        0.86
 8        9        0.80
 9       10        0.81
10       10        0.75
11       10        0.70
12       10        0.74
13       12        0.83
14       12        0.73
15       15        0.71
16       15        0.75
17       17        0.69
18       18        0.65
19       18        0.70
20       19        0.68
21       20        0.61
22       20        0.65
23       20        0.70
24       22        0.68
25       23        0.67
26       25        0.65
27       25        0.58
28       25        0.68
29       28        0.59
30       30        0.56
31       30        0.54
32       30        0.55
33       32        0.60
34       34        0.57
35       35        0.50
36       35        0.48
37       35        0.41
38       38        0.50
39       39        0.47
40       40        0.45
Notice that the coefficient (b) has a negative value here (-0.0113). This
tells us that the line has a negative slope; it goes from top left to bottom
right.
Second, MINITAB tests whether the intercept (a) and the slope (b) are
significantly different from zero:
Predictor   Coef         StDev       t-ratio   p
Constant     0.89927     0.01344       66.89   0.000
exercise    -0.0112574   0.0005899    -19.08   0.000
Figure: scatter plot of chemical concentration against exercise (min).
This printout shows that the intercept is significantly different from zero
(Constant, Coef, p < 0.000, which is much less than 0.05). So, when a
woman takes no exercise, this regression line predicts that the concentration of the chemical in her blood will be significantly greater than zero: the estimated value is 0.899 grams per litre. Also the slope of the line
(exercise, Coef) is significantly different from zero (p < 0.000 again). So
we can be very confident that there is a strong linear relationship between
chemical concentration and exercise.
Third, MINITAB gives the values of s and r²:
s = 0.04009   R-sq = 90.6%   R-sq(adj) = 90.3%
The first (s) is the standard deviation (the square root of the residual or
error variance) and the second (R-sq) - pronounced R squared - is called
the coefficient of determination, r². It is the proportion of the variation in y
accounted for by variation in x (from 0 to 1, which is the same as from 0
to 100%). If r² is 100% it indicates that the regression line goes through all
the points on the graph. Our model then explains all the variability in the
response variable: there is no random variation, all the residuals are zero.
In contrast, if r² is 0% it is consistent with a random arrangement of points
on the graph. The larger the value of r², the more useful the independent
variable is likely to be as a predictor of the response variable. (Do not
worry about the adjusted r² (R-sq(adj)) value on the printout. It is only
helpful in more complicated models.)
Fourth, the analysis-of-variance table appears. This is of the same
general form as for when we are comparing the effects of several
treatments in an experiment. However, here the treatments line is replaced
by one called 'regression'. This represents the evidence for a linear relation
between the chemical and exercise.
Analysis of Variance

SOURCE       DF   SS        MS        F        p
Regression    1   0.58518   0.58518   364.18   0.000
Error        38   0.06106   0.00161
Total        39   0.64624
The evidence is very strong (p < 0.001 which is much less than 0.05). We
can reject the null hypothesis of no linear relationship between chemical
concentration and amount of exercise with great confidence. MINITAB is
carrying out the same test here as the test shown above for whether the
slope of the line is significantly different from zero (the slope of a
horizontal line). We can see this by noting that the value of the variance
ratio (or F, 364.18) is the square of the t-ratio given above for the slope
(-19.08). This will always be the case. It is probably easier to concentrate
on the analysis of variance when interpreting the analysis.
Notice that the value of r² (90.6% or 0.906) can be obtained by dividing
the regression sum-of-squares by the total sum-of-squares.
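The whole printout can be verified from the standard least-squares formulas, b = Sxy/Sxx and a = ȳ − b·x̄. A pure-Python sketch using the data tabulated above (variable names are my own):

```python
exercise = [5, 5, 6, 6, 7, 7, 7, 9, 10, 10, 10, 10, 12, 12, 15, 15, 17, 18,
            18, 19, 20, 20, 20, 22, 23, 25, 25, 25, 28, 30, 30, 30, 32, 34,
            35, 35, 35, 38, 39, 40]
chemical = [0.90, 0.85, 0.79, 0.85, 0.87, 0.82, 0.86, 0.80, 0.81, 0.75,
            0.70, 0.74, 0.83, 0.73, 0.71, 0.75, 0.69, 0.65, 0.70, 0.68,
            0.61, 0.65, 0.70, 0.68, 0.67, 0.65, 0.58, 0.68, 0.59, 0.56,
            0.54, 0.55, 0.60, 0.57, 0.50, 0.48, 0.41, 0.50, 0.47, 0.45]

n = len(exercise)
xbar = sum(exercise) / n
ybar = sum(chemical) / n
sxx = sum((x - xbar) ** 2 for x in exercise)
sxy = sum((x - xbar) * (y - ybar) for x, y in zip(exercise, chemical))
syy = sum((y - ybar) ** 2 for y in chemical)   # total sum-of-squares

b = sxy / sxx               # slope, about -0.0113
a = ybar - b * xbar         # intercept, about 0.899
r_squared = b * sxy / syy   # regression SS / total SS, about 0.906
```

The slope, intercept and r² agree with the MINITAB output to rounding.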
Figure 8.7 Fitted regression line, chemical = 0.899 − 0.0113 × exercise (R-squared = 0.906), with 95% confidence interval.
8.3 ASSUMPTIONS
1. Normal distribution
A Normal distribution of residuals is assumed, since they represent
random variation. There should be many residuals with very small
absolute values (near zero) and only a few with very large ones (far
from zero). We can test this by asking for residual plots in MINITAB
Figure 8.8 Residual plots for the regression of chemical on exercise: Normal plot of residuals, I chart of residuals, histogram of residuals, and residuals against fitted values.
Figure 8.9 Variability in y increases with x. This is the 'shotgun effect'.
Figure 8.10 Scatter plot of lead deposition against amount of traffic.
lead   traffic
 44      79
 58     109
 43      90
 60     104
 23      57
 53     111
 48     124
 74     127
 14      54
 38     102
 50     121
 55     118
 14      35
 67     131
 66     135
 18      70
 32      90
 20      50
 30      70
What is the intercept? What is the slope? [-10.7, 0.569] (Figure 8.11)
Predictor   Coef      StDev     t-ratio   p
Constant    -10.736   5.813      -1.85    0.082
traffic     0.56893   0.05923     9.61    0.000
Is the intercept significantly different from zero? [no, p > 0.05] Note that
the estimate of -10.736 must be meaningless since we cannot have a
Figure 8.11 Lead plotted against traffic.
negative value for lead. It may well be that if our sample included lower
levels of traffic we would find that the relationship was curved (Figure
8.11) .
Is the slope significantly different from zero? [yes, p < 0.05]
Figure 8.12 Residual plots for the regression of lead on traffic: Normal plot of residuals, I chart of residuals, and histogram of residuals.
Figure: fitted regression line for lead against traffic (R-squared = 0.844), with 95% confidence interval.
R-sq = 84.4%

Analysis of Variance
SOURCE       DF   SS       MS       F       p
Regression    1   5442.0   5442.0   92.26   0.000
Error        17   1002.8     59.0
Total        18   6444.7
With what confidence can you reject the null hypothesis of no linear
relationship between lead concentration and amount of traffic? [99.9%,
p < 0.001]
8.5 THE IMPORTANCE OF PLOTTING OBSERVATIONS
The lead deposition-traffic data set which we have just analysed was a
typical example of a relationship which is well modelled by linear
regression. However, it is possible for the computer to give you the same
equation (and the same values for p and r2) from very different data sets
for which the model would be inappropriate. This is an alarming thought.
It is important to plot the observations to identify such problems.
We will use four data sets devised by Anscombe (1973) to illustrate this
problem. The x values for data sets 1, 2 and 3 are the same and are in
column 1 while those for data set 4 are in column 5. The corresponding
responses are in columns named y1, y2, y3 and y4:
Row   x123   y1      y2     y3      x4   y4
 1     10    8.04    9.14    7.46    8    6.58
 2      8    6.95    8.14    6.77    8    5.76
 3     13    7.58    8.74   12.70    8    7.71
 4      9    8.81    8.77    7.11    8    8.84
 5     11    8.33    9.26    7.81    8    8.47
 6     14    9.96    8.10    8.84    8    7.04
 7      6    7.24    6.13    6.08    8    5.25
 8      4    4.26    3.10    5.39   19   12.50
 9     12   10.84    9.13    8.15    8    5.56
10      7    4.82    7.26    6.42    8    7.91
11      5    5.68    4.74    5.73    8    6.89
All plots have been produced using (Graph, Character Graphs, Scatter
Plot).
8.5.1 A straight-line relationship with some scatter (data set 1)
In carrying out the regression we can ask for MINITAB to calculate fitted
values and automatically put them into a column called 'FITS'. It will call
the fitted values from the first analysis 'FITS1' and those from the next
analysis 'FITS2'. Instead of asking for ordinary residuals we have asked
for what are called 'standardized residuals' to be calculated and put
automatically into a column named 'SRES1'. The values of ordinary
residuals will depend on the units in which y was measured. If we ask for
them to be standardized by dividing them by an estimate of their
variability we can judge their size more easily. If we find that a
standardized residual is greater than 2 or less than -2 we should be alert
to possible errors in the data or to the possibility that we are not fitting a
sensible model.
Data set 1 is plotted in Figure 8.14.
The regression equation is
y1 = 3.00 + 0.500 x123
Note that the values of the intercept (3.0) and the slope (0.5) are the same
in this and the next three analyses. The regression line fitted is exactly the
same for all four data sets.
Predictor   Coef     StDev    t-ratio   p
Constant    3.000    1.125    2.67      0.026
x123        0.5001   0.1179   4.24      0.002

s = 1.237   R-sq = 66.7%   R-sq(adj) = 62.9%
Note that r2 (66.7%) is the same in this and the next three analyses.
Analysis of Variance
SOURCE       DF   SS       MS       F       p
Regression    1   27.510   27.510   17.99   0.002
Error         9   13.763    1.529
Total        10   41.273
Figure 8.15 Standardized residuals (SRES1) plotted against fitted values (FITS1) for data set 1.
Note that the probability (p = 0.002) is the same in this and the next three
analyses. We now ask for standardized residuals to be plotted against
fitted values (Figure 8.15). There is no pattern in the residuals. A straight-line model seems satisfactory.
8.5.2 A curved relationship (data set 2)
Although the response increases from left to right, the rate is not steady,
the response levels off. Here, the fit of the regression line is poor. Predicted
values are too high at extreme values of x and too low in the middle.
Data set 2 is plotted in Figure 8.16.
The regression equation is
y2 = 3.00 + 0.500 x123
Predictor   Coef     StDev    t-ratio   p
Constant    3.001    1.125    2.67      0.026
x123        0.5000   0.1180   4.24      0.002

s = 1.237   R-sq = 66.6%   R-sq(adj) = 62.9%
Analysis of Variance
SOURCE       DF   SS       MS       F       p
Regression    1   27.500   27.500   17.97   0.002
Error         9   13.776    1.531
Total        10   41.276
The poor fit of the regression line is confirmed by the pattern of the
standardized residuals when plotted against the fitted values (Figure 8.17):
they are not random but follow a curve.
Figure 8.16 y2 plotted against x123.
Figure 8.17 Standardized residuals (SRES2) plotted against fitted values for data set 2.
8.5.3 All points except one follow one line (data set 3)
One point is clearly very different from the others. MINITAB identifies it
as having a particularly large standardized residual (3.0, it is an outlier).
Figure 8.18 y3 plotted against x123.
Predictor   Coef     StDev    t-ratio   p
Constant    3.002    1.114    2.70      0.024
x123        0.4997   0.1168   4.27      0.002

s = 1.225   R-sq = 66.9%   R-sq(adj) = 63.2%
Analysis of Variance
SOURCE       DF   SS       MS       F       p
Regression    1   27.310   27.310   18.21   0.002
Error         9   13.498    1.500
Total        10   40.808
Unusual Observations
Obs.   x123   y3       Fit     StDev.Fit   Residual   St.Resid
3      13.0   12.700   9.489   0.595       3.211      3.00R
Standardized residuals (SRES3) plotted against fitted values (FITS3) for data set 3.
Figure 8.20 y4 plotted against x4.
8.5.4 All points except one are clustered together on the x axis (data set 4)
Here, one observation is determining the nature of the regression line. It
is important to check that the observation is correct. If so, it again seems
sensible to analyse the data with and without the value, to quantify the
difference it makes. In the absence of the one isolated value there is no
significant linear relationship between y and x. MINITAB identifies such
points on the printout as having much 'influence'. Note that '2' indicates
two points are very close together.
Data set 4 is plotted in Figure 8.20.
The regression equation is
y4 = 3.00 + 0.500 x4
Predictor   Coef     StDev    t-ratio   p
Constant    3.002    1.124    2.67      0.026
x4          0.4999   0.1178   4.24      0.002

s = 1.236   R-sq = 66.7%   R-sq(adj) = 63.0%
Analysis of Variance
SOURCE       DF   SS       MS       F       p
Regression    1   27.490   27.490   18.00   0.002
Error         9   13.742    1.527
Total        10   41.232
Unusual Observations
Obs.   x4     y4       Fit      StDev.Fit   Residual   St.Resid
8      19.0   12.500   12.500   1.236       0.000      * X
Therefore we should not extrapolate from our data; rather we should
extend the range of treatments in a subsequent experiment.
8.6 CONFIDENCE INTERVALS
8.6.1 For the slope
b ± t√(RMS/Sxx)
with t taken from tables for p = 0.05 and for n − 2 df (where n is the
number of points on the graph). Here RMS = residual (or error) mean
square and Sxx = the sum-of-squares of the x values.
This equation is of the same general form as that for a population mean
(Chapter 2) in that we have an estimated value plus or minus t times the
standard error of that value.
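With the lead/traffic printout to hand, this interval can be computed directly. A Python sketch (Sxx is recomputed from the traffic values; RMS and b are taken from the printout, and t = 2.110 for 19 − 2 = 17 df from tables):

```python
from math import sqrt

traffic = [79, 109, 90, 104, 57, 111, 124, 127, 54, 102, 121, 118,
           35, 131, 135, 70, 90, 50, 70]
xbar = sum(traffic) / len(traffic)
sxx = sum((x - xbar) ** 2 for x in traffic)   # sum-of-squares of x values

rms = 59.0      # residual (error) mean square from the printout
b = 0.56893     # estimated slope from the printout
t = 2.110       # t for p = 0.05 and 17 df

se_b = sqrt(rms / sxx)              # matches the printed StDev for traffic
ci = (b - t * se_b, b + t * se_b)   # roughly 0.44 to 0.69
```

The computed standard error agrees with the 0.05923 printed by MINITAB, and the interval excludes zero, consistent with the significant slope.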
8.6.2 For the regression line
We often subsequently wish to predict the value of y (y') for a particular
individual with a value of x (x'). This can be achieved as follows:
y′ ± t√(RMS(1 + 1/n + (x′ − x̄)²/Sxx))
8.7 CORRELATION
Figure 8.22 (a) Confidence interval for a particular value of y. (b) The
regression line.
A      B
46.1   24.7
23.6   15.2
23.7   12.3
 7.0   10.9
12.3   10.8
14.2    9.9
 7.4    8.3
 3.0    7.2
 7.2    6.6
10.6    5.8
 3.7    5.7
 3.4    5.6
 4.3    4.2
 3.6    3.9
 5.4    3.1
First, we ask for a scatter plot (Graph, Character Graphs, Scatter Plot).
This is shown in Figure 8.23. We can ask for the correlation coefficient, r
(Stat, Basic Statistics, Correlation).
Correlation of A and B = 0.939
Figure 8.23 Scatter plot of A against B.
Row   A      B      rA   rB
 1    46.1   24.7   15   15
 2    23.6   15.2   13   14
 3    23.7   12.3   14   13
 4     7.0   10.9    7   12
 5    12.3   10.8   11   11
 6    14.2    9.9   12   10
 7     7.4    8.3    9    9
 8     3.0    7.2    1    8
 9     7.2    6.6    8    7
10    10.6    5.8   10    6
11     3.7    5.7    4    5
12     3.4    5.6    2    4
13     4.3    4.2    5    3
14     3.6    3.9    3    2
15     5.4    3.1    6    1
Scatter plot of the ranks rA against rB.
Figure 8.25 A curved relationship for which Spearman's r remains 1 while Pearson's r (0.93) is lower.
as the other decreases). Ranking the data maintains the order of the
observations on each axis. However, the size of the difference between one
observation and the next biggest one is standardized. Now the patient
who was an outlier is still the point at the top right of the graph but,
because the data are now ranked, he or she is not so far removed from the
other data points as before. Spearman's r is described as being more robust
because it is less sensitive than Pearson's r to occasional outliers or to
bends in the relationship.
Figure 8.25 illustrates this for two data sets showing how the value of
Spearman's r remains at 1 for a positively (or negatively) curved
relationship whereas Pearson's r gives a lower value, because of the lack of
linearity.
Again we can test the significance of Spearman's r. A table of critical
values gives one of 0.521 for p = 0.05 and 0.654 for p = 0.01. So we have
strong evidence of correlation.
8.7.2 Correlation and causation
There is, for example, a high positive correlation between the number of
doctors in a city and the number of deaths per year in that
city! At first sight we may be tempted to conclude that having more
doctors leads to more deaths. Therefore if we cut the number of doctors
we might expect fewer deaths. However, we have overlooked the fact that
both the number of doctors and the number of deaths in a city depend
upon the population of a city. We can calculate correlations between any
pair of variables but we must always be wary of assuming that one causes
variation in the other.
8.8 EXERCISE
Feeding lambs
Row   weight   feed
 1    7.4      53
 2    8.5      58
 3    7.1      49
 4    8.2      56
 5    8.4      60
 6    8.0      52
 7    8.5      62
 8    8.0      56
 9    8.6      54
10    7.6      49
11    7.6      50
12    7.3      46
Answer
Correlations (Pearson)
Correlation of weight and feed = 0.825
Row   weight   feed   rankwt   rankfeed
 1    7.4      53      3.0      6.0
 2    8.5      58     10.5     10.0
 3    7.1      49      1.0      2.5
 4    8.2      56      8.0      8.5
 5    8.4      60      9.0     11.0
 6    8.0      52      6.5      5.0
 7    8.5      62     10.5     12.0
 8    8.0      56      6.5      8.5
 9    8.6      54     12.0      7.0
10    7.6      49      4.5      2.5
11    7.6      50      4.5      4.0
12    7.3      46      2.0      1.0
It is slightly smaller than the Pearson's coefficient but still very large.
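Both coefficients can be reproduced in a few lines: Spearman's r is simply Pearson's r applied to the ranks, with tied values given mid-ranks as in the listing above. A pure-Python sketch (function names are my own):

```python
from math import sqrt

weight = [7.4, 8.5, 7.1, 8.2, 8.4, 8.0, 8.5, 8.0, 8.6, 7.6, 7.6, 7.3]
feed = [53, 58, 49, 56, 60, 52, 62, 56, 54, 49, 50, 46]

def pearson(xs, ys):
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    sxx = sum((x - xbar) ** 2 for x in xs)
    syy = sum((y - ybar) ** 2 for y in ys)
    return sxy / sqrt(sxx * syy)

def ranks(xs):
    """Mid-ranks: tied values share the average of their rank positions."""
    return [sum(v < x for v in xs) + (sum(v == x for v in xs) + 1) / 2
            for x in xs]

r_pearson = pearson(weight, feed)                 # about 0.825
r_spearman = pearson(ranks(weight), ranks(feed))  # slightly smaller
```

The `ranks` helper reproduces the rankwt and rankfeed columns exactly (for example, the two weights of 8.5 each receive mid-rank 10.5).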
Regression Analysis

The regression equation is
weight = 3.17 + 0.0887 feed

Predictor   Coef      StDev     t-ratio   p
Constant    3.167     1.038     3.05      0.012
feed        0.08867   0.01924   4.61      0.000

s = 0.3092   R-sq = 68.0%   R-sq(adj) = 64.8%
Analysis of Variance
SOURCE       DF   SS       MS       F       p
Regression    1   2.0306   2.0306   21.24   0.000
Error        10   0.9560   0.0956
Total        11   2.9867

Unusual Observations
Obs.   feed   weight   Fit      StDev.Fit   Residual   St.Resid
9      54.0   8.6000   7.9555   0.0894      0.6445     2.18R
Interpretation
The initial plot (Figure 8.26) and high correlation coefficient (r = 0.825)
lead us to believe that a straight-line relationship between weight and feed
is appropriate. The linear regression analysis shows that there is very
strong evidence for a linear relationship with more feed leading to heavier
lambs. The p value (p < 0.001) shows that we can reject the null hypothesis
of no linear relationship with 99.9% confidence and the graph shows that
the 95% confidence interval for the relationship covers a narrow band
of values.

[Figure 8.26: plot of weight against feed]

Figure 8.27 High-quality plot showing fitted straight line and 95% confidence
interval for weight and feed data.
[Figure 8.28: residual plots for the regression — residuals vs. observation
number, histogram of residuals, Normal probability plot of residuals, and
residuals vs. fitted values]
9 DATA ARE SKEWED: RANKS, SCORES OR COUNTS

9.1 INTRODUCTION
[Figure 9.1: the number of individuals in (a) a positively skewed and
(b) a negatively skewed distribution]
9.2 MANN-WHITNEY TEST
If we have an experiment with two treatments in which our data are either
not measured on an absolute scale (for example they are scores or ranks
153
154
I ~I_______D_A_T_A_A_R__E_S_K_E_W_E_D_,_R_A_N_K_S_,_S_C_O_R_E_S_O_R_C_O__U_N_T_S______~
rather than centimetres or kilograms) or they are skewed rather than
Normally distributed we should use a Mann-Whitney test. This tests
whether there is a difference between the population medians. The
assumptions we must be able to make are: first, the observations must be
random and independent observations from the populations of interest;
and second, the samples we compare are assumed to come from
populations which have a similar shaped distribution (for example, both
are positively skewed).
The growth form of young trees grown in pots can be scored on a scale
from 1 (poor) to 10 (perfect). This score is a complex matter as it
summarizes the height, girth and straightness of the tree. We have 16 trees.
Eight of the trees were chosen at random to be grown in a new compost
while the remaining eight were grown in a traditional compost.
Unfortunately, one of the latter was accidentally damaged, leaving only
seven trees. The null hypothesis is that the two composts produce trees
with the same median growth form. The alternative hypothesis is that the
composts differ in their effect.
After six months' growth the scores were recorded and entered into
columns 1 and 2 of a MINITAB worksheet:
Row   new   old
1      9     5
2      8     6
3      6     4
4      8     6
5      8     6
6      7     5
7      7     7
8      6

new   N = 8   Median = 7.500
old   N = 7   Median = 6.000
This seems rather odd at first sight. ETA1 - ETA2 represents the
difference between the 'centre' of the new compost scores and the 'centre'
of the old compost scores. ETA stands for the Greek letter of that name
(η), which is used to denote a population median. If we take away one
median from another we have 7.5 - 6 = 1.5. To see how the test works, we
rank all 15 observations together, from smallest to largest:
Score   Compost   Rank
4       old        1
5       old        2.5
5       old        2.5
6       old        6
6       old        6
6       old        6
6       new        6
6       new        6
7       old       10
7       new       10
7       new       10
8       new       13
8       new       13
8       new       13
9       new       15
(For example, the two trees with a score of 5 each receive a rank of 2.5
because this is the average of the next two available ranks, 2 and 3.) The
sum of the ranks for the new compost trees (the first sample) is

6 + 6 + 10 + 10 + 13 + 13 + 13 + 15 = 86
This is W:

W = 86.0

The test is significant at 0.0128

This calculation assumes that there are no tied values in the data. Since
there are some ties in this case, an adjustment has to be made to the
calculation of the probability level to give p = 0.0106:

The test is significant at 0.0106 (adjusted for ties)
We can conclude that the two composts differ in their effect on tree growth
form. The new compost is preferable as it produces trees with a higher
median score. We now need to consider the relative costs and benefits of
using the two composts.
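The rank sum W can be verified with a short pure-Python sketch (the compost scores are those in the worksheet above; `ranks_with_ties` is a hypothetical helper, not a MINITAB command):

```python
# W for the Mann-Whitney test: the sum of the ranks of the first sample
# when both samples are ranked together (ties get their average rank)
def ranks_with_ties(values):
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j + 2) / 2          # average of the 1-based ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

new = [9, 8, 6, 8, 8, 7, 7, 6]         # scores, new compost (n = 8)
old = [5, 6, 4, 6, 6, 5, 7]            # scores, old compost (n = 7)
r = ranks_with_ties(new + old)
W = sum(r[:len(new)])                  # rank sum of the new-compost scores
print(W)  # 86.0
```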
9.3 KRUSKAL-WALLIS TEST

The Kruskal-Wallis test extends the Mann-Whitney test to more than two
treatments, and it makes fewer assumptions
about the nature of the data than does analysis of variance. However, it still
makes the same assumptions as the Mann-Whitney test does: first, the
observations must be random and independent observations from the
populations of interest; and second, the samples we compare are assumed to
come from populations which have a similar-shaped distribution. This does
not have to be 'Normal'. It could be that both/all tend to have a few large
values and so have 'positive skewness' (Figure 9.1a) or they could both/all
be negatively skewed; but whatever the shape they must both/ all share it.
We will illustrate the use of this test on two data sets. First we will reexamine the data we analysed in Chapter 6 using one-way analysis of
variance.
We should note that data which are suitable for analysis of variance
can always be analysed by Kruskal-Wallis but the reverse is not always
so. Where the option of using analysis of variance exists however it will
normally be preferred since it is a more sensitive and elegant technique
(for example, the analysis of a factorial structure of treatments is easily
achieved).
To remind you, the data are from four treatments (types of vegetation
management by sowing and cutting) each replicated four times on plots
whose positions were randomly selected around a field margin. The data
are the number of spiders per plot.
Row   treat   spider
1      1       21
2      1       20
3      1       19
4      1       18
5      2       16
6      2       16
7      2       14
8      2       14
9      3       18
10     3       17
11     3       15
12     3       16
13     4       14
14     4       13
15     4       13
16     4       12
We ask MINITAB to carry out the Kruskal-Wallis test on the data (Stat,
Non-Parametrics, Kruskal-Wallis, Response spider, Factor treat):
LEVEL     NOBS   MEDIAN   AVE. RANK   Z VALUE
1          4     19.50      14.4        2.85
2          4     15.00       7.0       -0.73
3          4     16.50       9.9        0.67
4          4     13.00       2.7       -2.79
OVERALL   16                 8.5
For convenience, the mean rank for all observations has been
converted to zero. This overall mean has been subtracted from each
treatment's mean rank and the result has been divided by the
standard deviation of that treatment's ranks to give a z value for
each treatment. These show the location of each treatment's mean
rank around zero, the overall mean of this 'standard' Normal
distribution (Figure 9.2). We remember from Chapter 2 that 95% of
the values would be expected to lie between + 1.96 and -1.96 units
from the mean. Here we see that two treatments have z values
outside this range suggesting that not all treatments are likely to
come from one population.
[Figure 9.2: the z value of each treatment's mean rank, plotted on a
standard Normal distribution centred on zero]
The program then prints a test statistic (H) (just as we have met t and
F before):
H = 12.66   df = 3   p = 0.006
The degrees of freedom are as usual one less than the number of
treatments. The p value is given (0.006), so we don't need to consult
statistical tables. We can reject the null hypothesis that the observations
all come from the same population, because the value of p is very much
less than 0.05. We have strong evidence that at least one of the four
treatments differs from at least one of the other three in terms of its spider
population.
A second version of the test statistic, H, is also given:
H = 12.85   df = 3
This differs slightly from the first in that an adjustment has been made
to account for the presence of tied observations (for example there are
several plots with 16 spiders). This has to be done because the test assumes
that the observations come from a continuous distribution (which could
contain values like 2.13459) whereas we have used it to analyse counts
where ties are more likely to occur. We should use this second, corrected
version of the statistic.
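Both versions of H can be checked by hand or with a brief Python sketch (an illustration of the calculation, not MINITAB output; `avg_ranks` is a hypothetical helper):

```python
from collections import Counter

def avg_ranks(values):
    # rank every value, tied values sharing the average of their ranks
    s = sorted(values)
    first = {}
    for i, v in enumerate(s):
        first.setdefault(v, i + 1)     # lowest 1-based rank of each value
    return [first[v] + (s.count(v) - 1) / 2 for v in values]

# spiders per plot for the four treatments
groups = [[21, 20, 19, 18], [16, 16, 14, 14], [18, 17, 15, 16], [14, 13, 13, 12]]
data = [x for g in groups for x in g]
ranks = avg_ranks(data)
N = len(data)

H = 0.0
start = 0
for g in groups:
    R = sum(ranks[start:start + len(g)])   # rank sum for this treatment
    H += R ** 2 / len(g)
    start += len(g)
H = 12 / (N * (N + 1)) * H - 3 * (N + 1)

# adjustment for ties: divide H by 1 - sum(t^3 - t) / (N^3 - N),
# where t is the number of observations sharing each tied value
ties = sum(t ** 3 - t for t in Counter(data).values())
H_adj = H / (1 - ties / (N ** 3 - N))
print(round(H, 2), round(H_adj, 2))  # 12.66 12.85
```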
Let us now analyse the results of a different experiment. A student has
scored the amount of bacterial growth on each of 20 Petri dishes. A score
of 0 indicates no growth while one of 5 shows that the bacterium covers
the entire dish. Five days previously, ten randomly selected dishes had
received a standard amount of an established antibiotic used to inhibit
bacterial growth (treatment 1) while the remaining ten dishes had received
the same amount of a newly discovered inhibitory compound (treatment
2).
Row   treat   score
1      1       1
2      1       2
3      1       3
4      1       2
5      1       3
6      1       4
7      1       2
8      1       1
9      1       2
10     1       3
11     2       3
12     2       3
13     2       5
14     2       4
15     2       2
16     2       5
17     2       4
18     2       3
19     2       4
20     2       3
A preliminary plot of the data suggests that the new compound does
not seem to be an improvement (Graph, Character Graphs, Boxplot):
[Character boxplots of score for the two treatments]

We carry out a Kruskal-Wallis test as before:

LEVEL     NOBS   MEDIAN   AVE. RANK   Z VALUE
1         10     2.000       7.2       -2.46
2         10     3.500      13.8        2.46
OVERALL   20                10.5

H = 6.0    d.f. = 1   p = 0.014
H = 6.46   d.f. = 1   p = 0.011 (adj. for ties)
9.4 FRIEDMAN'S TEST

Friedman's test is the rank-based counterpart of a randomized-block
analysis. To illustrate it we first re-analyse the spider data, adding a
column of block codes:

treat   spider   block
1        21       1
1        20       2
1        19       3
1        18       4
2        16       1
2        16       2
2        14       3
2        14       4
3        18       1
3        17       2
3        15       3
3        16       4
4        14       1
4        13       2
4        13       3
4        12       4
Friedman test of spider by treat blocked by block

S = 12.00   d.f. = 3   p = 0.008

treat   N   Est. Median   Sum of RANKS
1       4      19.344         16.0
2       4      14.844          8.0
3       4      16.469         12.0
4       4      12.719          4.0
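The value of S can be verified from the within-block ranks with a small Python sketch (it assumes no ties within any block, which holds for these data; `friedman_S` is a hypothetical helper, not a MINITAB command):

```python
# Friedman's S from scratch: rank the k treatments within each block,
# then combine the treatment rank sums
def friedman_S(table):
    b, k = len(table), len(table[0])   # table[block][treatment]
    rank_sums = [0] * k
    for row in table:
        order = sorted(range(k), key=lambda t: row[t])
        for rank, t in enumerate(order, start=1):
            rank_sums[t] += rank
    S = 12 / (b * k * (k + 1)) * sum(R * R for R in rank_sums) - 3 * b * (k + 1)
    return S, rank_sums

# spider counts rearranged as blocks (rows) x treatments (columns)
blocks = [[21, 16, 18, 14],
          [20, 16, 17, 13],
          [19, 14, 15, 13],
          [18, 14, 16, 12]]
S, R = friedman_S(blocks)
print(S, R)  # 12.0 [16, 8, 12, 4]
```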
The bacterial growth scores can be re-analysed in the same way. If the
student assessed the dishes in a fixed order, the time of assessment can be
entered as a third column and used as a blocking factor:
Row   treat   score   time
1      1       1       1
2      1       2       2
3      1       3       3
4      1       2       4
5      1       3       5
6      1       4       6
7      1       2       7
8      1       1       8
9      1       2       9
10     1       3      10
11     2       3       1
12     2       3       2
13     2       5       3
14     2       4       4
15     2       2       5
16     2       5       6
17     2       4       7
18     2       3       8
19     2       4       9
20     2       3      10
S = 4.90   d.f. = 1   p = 0.027
S = 5.44   d.f. = 1   p = 0.020 (adjusted for ties)

treat   N    Est. Median   Sum of RANKS
1       10      2.000          11.5
2       10      4.000          18.5
Here the test statistic, S (as for the H statistic in the Kruskal-Wallis
test), is still highly significant (p < 0.05), but the p value has not been
reduced compared with that from the Kruskal-Wallis test. It appears that
time of assessment is not very important here. After all, with only 20 plates
in total the assessment should be complete in 10 minutes. However, if the
bacteria grew very fast and if there were 100 plates to assess, taking at
least 100 minutes, then blocking for time could be very important.
9.5 CHI-SQUARED CONTINGENCY TEST

Sometimes each observation falls into one of
two or more categories, for example: green or brown; live or dead; small,
medium or large. If we prepare a table with two or more rows and two or
more columns to summarize the data, each item in the table will represent
the number of observations in that category. We then want to ask
questions like: Is the proportion of green individuals the same in each
treatment? Let us clarify this with an example.
A new drug has been developed which may be more or less effective at
clearing all parasites from the blood of humans within 36 hours. In an
experiment 287 individuals took part to compare its effectiveness with that
of chloroquinine (the standard). Of the 184 individuals receiving
chloroquinine, 129 were cleared of parasites within 36 hours while 55 were
not. We can summarize these observations and those for the new drug in
a table of observed values (O):

                 Cleared in 36 h   Not cleared in 36 h   Total
Chloroquinine         129                  55             184
New drug               80                  23             103
Total                 209                  78             287
Note that the numbers of individuals taking the new drug and those
taking the old one do not have to be equal although the analysis will be
more robust if they are of similar magnitude. Our null hypothesis is that
the two variables are statistically independent: the proportion of
individuals from whose blood the parasite has been cleared is the same for
both drugs. Because we have only a sample from each of the two
populations we need statistically to assess these data. The first step is to
calculate the number of individuals we would expect to be in each category
(or 'class' or 'cell') if the null hypothesis is true.
In general, the expected number in each category or 'cell' is:

expected value = E = (row total x column total) / grand total
The maximum number of people who could appear in the top left 'cell'
is 184 because this is the number who were treated with chloroquinine.
The proportion 209/287 is the overall proportion of individuals from
whose blood the parasite had cleared in 36 hours, irrespective of which
drug they had received. This proportion is then multiplied by the number
of individuals who received chloroquinine (184) to give the number of
individuals we would expect to find who had both received chloroquinine
and whose blood had cleared in 36 hours, assuming both drugs have the
same effect. This expected value is calculated on the assumption that the
null hypothesis, H0, is true.
For the top left class of the above data set (Chloroquinine, Cleared)
we therefore obtain:
expected value = E = (209 x 184)/287 = 133.99
                 Cleared in 36 h   Not cleared in 36 h   Total
Chloroquinine        133.99               50.01           184
New drug              75.01               27.99           103
Total                209                  78              287
The general idea is that if the null hypothesis is true, the observed and
expected counts will be very similar whereas if it is false the counts will be
very different. To enable us to calculate the strength of evidence against
the null hypothesis we calculate a test statistic - chi-squared (chi is a Greek
letter pronounced 'ky' to rhyme with sky):
χ² = Σ (O - E)² / E

This equation states that χ² is calculated as the sum of (the squares of
the differences between each observed count and its expected value,
divided by its expected value). Here we have:

(129 - 133.99)²/133.99 + (55 - 50.01)²/50.01
+ (80 - 75.01)²/75.01 + (23 - 27.99)²/27.99
= 1.91
The bigger the value of χ² the greater the chance that we will reject the
null hypothesis. Here, χ² = 1.91. We compare this with the critical value
from a statistical table (Table C.6) of χ² taken from the column headed
95% (p = 0.05). If 1.91 is bigger than this critical value then the null
hypothesis is rejected with 95% confidence. As always, we need to know
the degrees of freedom to find the critical value in a statistical table. In a
contingency table there are (r - 1) x (c - 1) degrees of freedom where
r = number of rows and c = number of columns. Thus for a 5 x 4 table the
df = 12. Here in our 2 x 2 table the df = 1. So we need the critical value
from the column headed 95% for 1 df in Table C.6. The calculated value of
χ² = 1.91 is less than the table value of 3.841 so we have no reason to reject
the null hypothesis of no difference in the efficacy of the two drugs.
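The whole calculation can be expressed compactly; the following Python sketch (an illustration only, not part of the book's MINITAB session) recomputes the expected values and χ² for the drug data:

```python
# chi-squared for a 2 x 2 contingency table, from first principles
obs = [[129, 55],   # chloroquinine: cleared, not cleared
       [80, 23]]    # new drug:      cleared, not cleared

row_tot = [sum(r) for r in obs]
col_tot = [sum(c) for c in zip(*obs)]
grand = sum(row_tot)

chi2 = 0.0
for i, row in enumerate(obs):
    for j, o in enumerate(row):
        e = row_tot[i] * col_tot[j] / grand   # expected count under H0
        chi2 += (o - e) ** 2 / e

df = (len(obs) - 1) * (len(obs[0]) - 1)
print(round(chi2, 2), df)  # 1.91 1
```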
MINITAB can carry out a χ² analysis for us. The contents of the table
of observed counts are placed into the worksheet:

Row   clear   not
1     129     55
2      80     23
Expected counts are printed below observed counts

         clear     not       Total
1        129.00    55.00     184
         133.99    50.01
2         80.00    23.00     103
          75.01    27.99
Total    209       78        287
2. Ensure that the expected value in each cell of the contingency table is
greater than 5. Suppose we had counted small, medium and large plants
grown with and without fertilizer, with 30 plants in each row:

[Table: numbers of small, medium and large plants with and without
fertilizer; the Large column totals only 8 of the 60 plants]

The expected value for the top right class (large, no fertilizer) would
be (30 x 8)/60 = 4. For the bottom right class (large, with fertilizer) it
would be similarly small. The problem can be avoided by combining the
medium and large plants into one category:

                   Small   Medium + Large
No fertilizer       20          10
With fertilizer     10          20
Now there will be no problem with expected values being too small.
They all happen to be (30 x 30)/60 = 15.
The reason for ensuring that expected values are greater than 5 is
that the test is over-sensitive to small differences when the expected
value is small. This is because dividing by a very small expected value
(imagine dividing by 0.1) will give rise to a ridiculously high component
of χ².
3. When the contingency table is larger than the 2 x 2 case look at the
component of χ² which comes from each class. Those classes with large
values are mainly responsible for the rejection of the null hypothesis
and are therefore the ones to concentrate on when it comes to
interpreting your results, as we can see in the following example.
9.5.2 A further example of a chi-squared contingency test

In an experiment on wood preservation, 900 wooden stakes were treated,
300 each with oil, creosote or copper arsenate, and each stake was later
recorded as attacked by termites or not:

                   attack   avoid
oil                 112      188
creosote             82      218
copper arsenate     123      177

Expected counts are printed below observed counts

                   attack    avoid     Total
oil                112.00    188.00    300
                   105.67    194.33
creosote            82.00    218.00    300
                   105.67    194.33
copper arsenate    123.00    177.00    300
                   105.67    194.33
Total              317       583       900

df = 2, p = 0.001
The table value of χ² for 2 df at p = 0.05 is 5.99 and for p = 0.01 it is
9.21. We therefore have strong evidence to reject the null hypothesis that
all three chemicals are equally effective. MINITAB provides the value
p = 0.001.
If we examine the components of χ² we see that the two coming from
creosote are especially high (5.301 and 2.882). This tells us that it is
creosote which has a different effect from the others. Comparing the
observed and expected values we see that creosote was more effective than
the other two treatments with only 82/300 = 27% of stakes being attacked
compared with 37% and 41% for the others.
An important assumption we have made is that the allocation of
treatments to the 900 stakes was carried out at random. If the 300 stakes
for each treatment had been grouped, any differences between treatment
might have been due to environmental differences such as one treatment
being on a sandier soil which is better shaded, thus possibly affecting the
density of termites in the area. As with many other forms of sampling,
therefore, randomization is the key.
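The per-cell components are easy to recompute; a short Python sketch (illustrative only) confirms that creosote contributes the two largest:

```python
# components (O - E)^2 / E of chi-squared for the stake data
obs = {"oil": (112, 188),
       "creosote": (82, 218),
       "copper arsenate": (123, 177)}

col_tot = [sum(row[j] for row in obs.values()) for j in (0, 1)]
grand = sum(col_tot)

components = {}
for name, row in obs.items():
    row_tot = sum(row)
    components[name] = tuple(
        round((o - row_tot * c / grand) ** 2 / (row_tot * c / grand), 3)
        for o, c in zip(row, col_tot))
print(components["creosote"])  # (5.301, 2.882)
```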
9.6 CHI-SQUARED GOODNESS-OF-FIT TEST

We can also use χ² to ask how well a set of observed counts fits a
theoretical ratio. Suppose a genetic cross is expected to produce
phenotypes A, B and C in a 1:2:1 ratio, and that we observe 70, 134 and
75 individuals of the three phenotypes. Of the 279 individuals in total we
would expect one-quarter (69.75) to be A, half (139.50) to be B and
one-quarter (69.75) to be C. The observed counts are entered into column
1 of a MINITAB worksheet and the expected counts into column 2:

Row   observed   expected
1        70        69.75
2       134       139.50
3        75        69.75
Now click to the right of the MINITAB prompt in the Session window
'MTB>' and type the commands in each line exactly as shown, followed
by the 'ENTER' key.
The first line is an instruction to calculate the χ² value (using Calc,
Mathematical Expressions):

MTB > let k1 = sum((c1 - c2)**2/c2)
The next three lines ask for the p value to be calculated and stored:
MTB > cdf k1 k2;
SUBC > chisquare 2.
MTB > let k3 = 1 - k2
Now we can ask for the values of χ² and its p value to be displayed along
with the observed and expected values by returning to the usual commands
and asking for c1, c2, k1 and k3 to be displayed:
Data Display

Row   observed   expected
1        70        69.75
2       134       139.50
3        75        69.75

K1   0.612903
K3   0.736054
The k1 value (K1) is χ² and the k3 value (K3) is the p value. The p value
is very large, indicating that there is no reason to doubt the hypothesis of
a 1:2:1 ratio of phenotypes A, B and C.
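The same χ² and p value can be reproduced outside MINITAB; the Python sketch below uses the fact that for 2 degrees of freedom the upper-tail probability of χ² is exactly exp(-x/2):

```python
import math

# chi-squared goodness of fit of observed counts to a 1:2:1 ratio
observed = [70, 134, 75]
total = sum(observed)
expected = [total * r / 4 for r in (1, 2, 1)]   # 69.75, 139.50, 69.75

chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# for df = 2 the chi-squared survival function is exp(-x/2)
p = math.exp(-chi2 / 2)
print(round(chi2, 6), round(p, 4))  # 0.612903 0.7361
```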
9.7 EXERCISES
Non-parametric tests
(a) Enter the observations for five replicates of each of two treatments A
and B into two columns of a MINITAB worksheet:

Row    A    B
1     12    9
2     10    8
3      9    5
4      8    5
5      9    4
Carry out a Mann-Whitney test to compare the two sets of data (Stat,
non-parametric tests). What is your conclusion?
(b) Including a third treatment
The appropriate test is a Kruskal-Wallis test. For this the data need to
be in one column and treatment codes in a second column:

Row   score   treat
1      12      1
2      10      1
3       9      1
4       8      1
5       9      1
6       9      2
7       8      2
8       5      2
9       5      2
10      4      2
11      7      3
12      6      3
13      5      3
14      4      3
15      3      3

Carry out the Kruskal-Wallis test. What is your conclusion?
(c) This requires a third column of block codes. Then the Friedman test
can be carried out. What is your conclusion?

Row   score   treat   block
1      12      1       1
2      10      1       2
3       9      1       3
4       8      1       4
5       9      1       5
6       9      2       1
7       8      2       2
8       5      2       3
9       5      2       4
10      4      2       5
11      7      3       1
12      6      3       2
13      5      3       3
14      4      3       4
15      3      3       5
Answers
(a) Mann-Whitney confidence interval and test

A   N = 5   Median = 9.000
B   N = 5   Median = 5.000
Point estimate for ETA1-ETA2 is 4.000

We have evidence that the two populations differ in their median values
(p = 0.047). Treatment A has a significantly higher median value (9
compared to 5).
(b) Kruskal-Wallis test

LEVEL     NOBS   MEDIAN   AVE. RANK   Z VALUE
1          5     9.000      12.5        2.76
2          5     5.000       6.8       -0.73
3          5     5.000       4.7       -2.02
OVERALL   15                 8.0

H = 8.15   d.f. = 2   p = 0.017
H = 8.29   d.f. = 2   p = 0.016 (adjusted for ties)
We have strong evidence that the three populations do not all have the
same median value (p = 0.02). Treatment A has a higher median than the
other two (9 compared to 5 and 5).
(c) Friedman test
Friedman test of score by treat blocked by block
S = 9.10   d.f. = 2   p = 0.011
S = 9.58   d.f. = 2   p = 0.009 (adjusted for ties)

treat   N   Est. Median   Sum of RANKS
1       5      9.000          15.0
2       5      6.000           9.5
3       5      5.000           5.5
Chi-squared test
The medical officer of a large factory administered five different influenza
vaccines (see rows 1 to 5) to randomly chosen employees in December.
Next March she recorded the incidence of flu and carried out a χ² test on
the data as shown in the MINITAB worksheet:
MTB > print c1 c2

Row   flu   noflu
1     43    237
2     52    198
3     25    245
4     48    212
5     57    233
(a) How many people were expected to contract flu when given vaccine
3?
(b) What is the component of χ² from the cell for 'no flu, vaccine 5'?
(c) What are the degrees of freedom?
(d) Enter the data into a MINITAB worksheet and carry out the χ²
analysis.
(e) Use (Calc, Mathematical Expressions, New variable = c3, Expression =
c1/(c1+c2)) to work out the proportion of people with flu for each
vaccine.
(f) Interpret the output.
(g) Remove vaccine 3 from the worksheet, reanalyse the data and
interpret the results.
Answers
(a) 270 x 225/1350 = 45 people.
(b) (O - E)²/E = (233 - 241.67)²/241.67 = 0.311.
(c) df = (5 - 1) x (2 - 1) = 4.
(d) Expected counts are printed below observed counts

         flu      noflu     Total
1        43       237       280
         46.67    233.33
2        52       198       250
         41.67    208.33
3        25       245       270
         45.00    225.00
4        48       212       260
         43.33    216.67
5        57       233       290
         48.33    241.67
Total    225      1125      1350
ChiSq = ... + 0.503 + 0.101 + 1.554 + 0.311 = 16.555
df = 4, p = 0.002
Row   flu   noflu   prop
1     43    237     0.153571
2     52    198     0.208000
3     25    245     0.092593
4     48    212     0.184615
5     57    233     0.196552
Vaccine 3 has a much lower proportion of people with flu (0.09) than
do the others (between 0.15 and 0.20). This is shown by the large
component of χ² (8.889) coming from this cell.
(f) The calculated χ² value of 16.55 is greater than the value in the χ² table
(Table C.6) for row 4 (df) and column 1 of 13.3. So we can reject the
null hypothesis of each vaccine having the same proportion of people
contracting flu with 99% confidence. MINITAB gives a more accurate
value of p = 0.002 - showing that we are 99.8% confident in this
conclusion.
(g) Chi-squared test for four vaccines

Expected counts are printed below observed counts

         flu      noflu     Total
1        43       237       280
         51.85    228.15
2        52       198       250
         46.30    203.70
4        48       212       260
         48.15    211.85
5        57       233       290
         53.70    236.30
Total    200      880       1080
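To complete part (g), the χ² value for the reduced table can be computed with a small Python sketch (a check on the analysis, not MINITAB output; `chi_squared` is a hypothetical helper):

```python
# chi-squared statistic and df for an r x c table of observed counts
def chi_squared(table):
    row_tot = [sum(r) for r in table]
    col_tot = [sum(c) for c in zip(*table)]
    grand = sum(row_tot)
    stat = sum((o - rt * ct / grand) ** 2 / (rt * ct / grand)
               for r, rt in zip(table, row_tot)
               for o, ct in zip(r, col_tot))
    return stat, (len(table) - 1) * (len(table[0]) - 1)

# vaccines 1, 2, 4 and 5 (vaccine 3 removed)
chi2, df = chi_squared([[43, 237], [52, 198], [48, 212], [57, 233]])
print(round(chi2, 2), df)  # 2.97 3
# 2.97 is well below the 5% critical value of 7.81 for 3 df, so once
# vaccine 3 is removed there is no evidence that the remaining vaccines
# differ in the proportion of people contracting flu
```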
10 SUMMARIZING DATA FROM A STUDY

In a true experiment the treatments are replicated, and the plots are
arranged during the experiment so that each one has an equal chance of
being in any particular position (randomization). However, the name
'experiment' is often mistakenly applied to investigations which are really
observational studies. For example, the investigator may make
observations without applying any treatments or in which the so-called
replicate plots of each treatment are grouped together, i.e. they are neither
randomized nor interspersed (Figure 10.1).
In this chapter we will outline an exploratory technique which can
simplify the results from such observational studies - principal components
analysis. Its great strength is that it can summarize information about
many different characteristics recorded from each individual in the study.
[Figure 10.1: 'replicate' plots grouped together at Site 1, Site 2 and
Site 3, neither randomized nor interspersed]

10.1 CASE STUDY 1
We will illustrate its use by reference to two data sets: first, a study of
the economic and social characteristics of a group of countries; and
second, a study of the vegetation on a hillside.
The seven variables recorded for each of 14 countries are population
density (popden), agricultural employment (agemp), national income per
capita (natinc), capital investment (capinv), infant mortality rate
(infmort), energy consumption per capita (energy) and number of TV sets
per 100 people (tvsets):

Row   popden   agemp   natinc   capinv   infmort   energy   tvsets
1        2       6       8.4     10.1       12       5.2       36
2       97       9      10.7      9.2       10       3.7       28
3      247       6      12.4      9.1       15       4.6       33
4       72      31       4.1      8.1       19       1.7       12
5        2      13      11.0      6.6       11       5.8       25
6      189      15       5.7      7.9       15       2.5       22
7      311      11       8.7     10.9        8       3.3       24
8       12      10       6.8      8.0       14       3.4       26
9      107      31       2.1      5.5       39       1.1        9
10      74      19       5.3      6.9       15       2.0       21
11      18       6      12.8      7.2        7       6.3       37
12      56      61       1.6      8.8      153       0.7        5
13     229       3       7.2      9.3       13       3.9       39
14      24       4      10.6      7.3       13       8.7       62
We then obtain summary statistics for each of the variables (see section
2.2 for an explanation of the columns) (Stat, Basic Statistics, Descriptive
Statistics):
          N    Mean     Median   TrMean   StDev    SEMean
popden    14   102.9     73.0     93.9    101.3     27.1
agemp     14    16.07    10.50    13.42    15.73     4.20
natinc    14     7.671    7.800    7.750    3.619    0.967
capinv    14     8.207    8.050    8.208    1.462    0.391
infmort   14    24.6     13.5     15.3     37.7     10.1
energy    14     3.779    3.550    3.625    2.211    0.591
tvsets    14    27.07    25.50    26.00    14.40     3.85

          Min      Max      Q1       Q3
popden     2.0    311.0     16.5    199.0
agemp      3.00    61.00     6.00    22.00
natinc     1.600   12.800    5.000   10.775
capinv     5.500   10.900    7.125    9.225
infmort    7.0    153.0     10.8     16.0
energy     0.700    8.700    1.925    5.350
tvsets     5.00    62.00    18.75    36.25
We also obtain the correlations between each pair of variables (Stat,
Basic Statistics, Correlation):

          popden   agemp    natinc   capinv   infmort  energy
agemp     -0.150
natinc     0.019   -0.786
capinv     0.490   -0.183    0.196
infmort   -0.131    0.890   -0.602    0.002
energy    -0.255   -0.715    0.830    0.009   -0.494
tvsets    -0.069   -0.783    0.722    0.134   -0.526    0.915
Figure 10.2 Boxplots for Case study 1: (a) population density; (b) agricultural
employment; (c) national income per capita; (d) capital investment; (e) infant
mortality rate; (f) energy consumption per capita; (g) number of TV sets per 100
people.
One way to summarize the data would be to combine, with suitable weights,
the values for each variable for each country to obtain a score. We can ask
for our scores to be combined in such a way as to explain as much of the
variation present in our data set as possible. This is the basis for principal
components analysis (PCA). It provides us with the 'best' scores.
We ask for a PCA to be carried out on the seven variables in columns
1 to 7, with the 'best' scores for the first principal component to be put in
column 15 (with a further set of scores which accounts for some of the
remaining variation being put into column 16 - the second principal
component and so on). We use (Stat, Multivariate, Principal Components,
Variables C1-C7, Number of Components 7, Type of Matrix Correlation,
Storage Coefficients C8-C14, Scores C15-C21). We name columns 8 to
14 PCA1-PCA7 and columns 15 to 21 score1-score7.
The resulting printout can seem rather overwhelming, but we often need
only concentrate on small parts of it.
Eigenanalysis of the Correlation Matrix

Eigenvalue   3.9367   1.5639   0.8100   ...
Proportion   0.562    0.223    0.116    ...
Cumulative   0.562    0.786    0.902    ...
The first part of the output (above) has seven columns. Each represents
one of seven principal components which summarize the variability in the
data and each contains an eigenvalue. We can think of the eigenvalue as
the amount of variation in the data set which is explained by that
particular principal component. For example, the first principal component
has an eigenvalue of 3.9367. This represents 56.2% (shown underneath the
eigenvalue in row 2, but as a proportion 0.562) of the sum of all seven
eigenvalues. Principal component number two accounts for a further
22.3%, so components 1 and 2 between them account for 78.6% (given in
row three as 0.786) of all the variation.
This is good news; we have accounted for over three-quarters of the
variation in the data by two sets of scores whereas the original data
contained seven variables. If we include the third principal component we
can account for 90% of the variation. However we shall probably obtain
enough insight into the structure of the data by examining only the first
two principal components.
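The 'Proportion' and 'Cumulative' rows are simple functions of the eigenvalues: for a PCA on the correlation matrix the eigenvalues sum to the number of variables (7 here). A quick Python check:

```python
# proportions of variation explained: eigenvalue / (number of variables)
eigenvalues = [3.9367, 1.5639, 0.8100]   # first three of the seven
n_vars = 7

proportion = [round(e / n_vars, 3) for e in eigenvalues]
cumulative = [round(sum(eigenvalues[:i + 1]) / n_vars, 3)
              for i in range(len(eigenvalues))]
print(proportion)  # [0.562, 0.223, 0.116]
print(cumulative)  # [0.562, 0.786, 0.902]
```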
The printout then shows the coefficient or weight allocated to each
variable in each of the seven principal components to account for the
maximum amount of variation and to produce the 'best scores':
Variable   PC1      PC2      PC3      PC4      PC5      PC6      PC7
popden     -0.009    0.717    0.252    0.634    0.060   -0.116   -0.059
agemp       0.476   -0.108   -0.257    0.158    0.156   -0.598    0.537
natinc     -0.452    0.015   -0.146    0.022    0.805    0.189    0.301
capinv     -0.083    0.639   -0.538   -0.519   -0.116   -0.114    0.003
infmort     0.394   -0.067   -0.626    0.379    0.097    0.227    0.016
energy     -0.450   -0.234   -0.311    0.329   -0.549    0.462   -0.577
tvsets     -0.452   -0.085   -0.267    0.182   -0.286   -0.511    0.524
What are these coefficients? How are they used? Let us take the
column headed PC1. We will use these coefficients to obtain the score
for the first country (Australia) on principal component 1. To do this
we will copy down the coefficients and then put the actual data for
Australia's population density, etc., beside them (check that you can see
that the data come from row 1 in the printout at the beginning of this
section):

Variable   PC1      Data
popden     -0.009      2
agemp       0.476      6
natinc     -0.452      8.4
capinv     -0.083     10.1
infmort     0.394     12
energy     -0.450      5.2
tvsets     -0.452     36
Because the variables are measured on very different scales, each value
is first standardized: we subtract the variable's mean and divide by its
standard deviation (both given in the descriptive statistics above). For
population density this gives (2 - 102.9)/101.3 = -0.996:

Variable   PC1      Data   Standardized data
popden     -0.009      2        -0.996
agemp       0.476      6        -0.640
natinc     -0.452      8.4       0.201
capinv     -0.083     10.1       1.295
infmort     0.394     12        -0.334
energy     -0.450      5.2       0.643
tvsets     -0.452     36         0.620
Now we multiply the coefficient and the standardized value for each
variable to give a final column of numbers which, when added together,
give the score for Australia:
Variable   PC1      Standardized data   Product
popden     -0.009       -0.996           0.008964
agemp       0.476       -0.640          -0.304640
natinc     -0.452        0.201          -0.090852
capinv     -0.083        1.295          -0.107485
infmort     0.394       -0.334          -0.131596
energy     -0.450        0.643          -0.289350
tvsets     -0.452        0.620          -0.280240

Score for Australia on PC1:             -1.195199
We ask MINITAB to display this score and those for the other 13
countries for the first two principal components whose columns we have
named score1 and score2. Notice that our value for the score of Australia
on PC1 (-1.195199) agrees with the MINITAB value (-1.19523) to three
figures after the decimal point.
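The hand calculation above can be condensed into a few lines of Python (a sketch using the rounded means and standard deviations from the descriptive statistics, so small differences from MINITAB's value are expected):

```python
# Australia's score on PC1: standardize each value, multiply by its
# PC1 coefficient, and sum the products
coeffs = [-0.009, 0.476, -0.452, -0.083, 0.394, -0.450, -0.452]
australia = [2, 6, 8.4, 10.1, 12, 5.2, 36]
means = [102.9, 16.07, 7.671, 8.207, 24.6, 3.779, 27.07]
sds = [101.3, 15.73, 3.619, 1.462, 37.7, 2.211, 14.40]

score = sum(c * (x - m) / s
            for c, x, m, s in zip(coeffs, australia, means, sds))
print(round(score, 3))  # -1.195
```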
Row   score1     score2
1     -1.19523    0.00515
2     -0.81330    0.48198
3     -1.41233    1.39261
4      1.74510   -0.06283
5     -0.89650   -1.55911
6      0.54333    0.65681
7     -0.43256    2.78599
8     -0.05447   -0.62952
9      2.56455   -0.91376
10     0.91468   -0.56365
11    -1.88907   -1.24519
12     4.74817   -0.17634
13    -0.93008    1.39480
14    -2.89230   -1.56695
We now put the two- or three-letter codes for each country into column
22. We ask for a labelled plot of score2 against score1 with each country
being labelled according to its code in column 22 (Graph, Plot, y = score2,
x = score1, Annotation Data Labels, Show Data Labels, Use labels C22).
Figure 10.3 summarizes our data. We see that the countries are well
spread out. Let us take the first principal component (x axis). What aspects
of the data does it represent? The weights or coefficients for agricultural
employment and infant mortality are large and positive. This means that
countries with a high proportion of people in agricultural employment and
with high infant mortality (Turkey) are at the right-hand side of the graph.
The weights for national income, energy and TV sets are large and
negative. This means that countries with high national income, high energy
consumption and a large number of TV sets per 100 people are at the
left-hand side of the graph (Sweden and the USA). With this information
we can reasonably summarize the meaning of score1 as a summary
[Figure 10.3: labelled plot, 'Principal components analysis of
socio-economic data from 14 countries']
Figure 10.3 Plot of score2 against scorel for data for 14 countries in Case
study I.
statistic of economic development on which a low score represents a high
level of development.
Looking at score2 we see that countries at the top of the graph are
characterized by a high population density and a high level of capital
investment (Japan) whereas those at the bottom of the graph have a low
population density and a lower amount of capital investment (Iceland,
USA). This graph enables us to see the similarities and differences between
the 14 countries at a glance. In this sense it is a great improvement on
the original table of data. We could if we wanted continue with the other
components. For example, we would next look at component 3 (score3).
However, it is not very worth while because it only accounts for a further
11% of the variation and the remaining components only account for
about the same amount of variation again between them.
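The analysis MINITAB performs here can be sketched in Python with NumPy. This is a minimal sketch, not the book's actual computation: the data matrix below is a random stand-in for the 14-country table, and five variables are assumed.

```python
import numpy as np

# Random stand-in for the 14-country data: rows = countries,
# columns = socio-economic variables. (The real values are in the
# case-study table, not reproduced here.)
rng = np.random.default_rng(0)
X = rng.normal(size=(14, 5))

# Standardize each column so the analysis is based on the
# correlation matrix, as in the MINITAB run described above.
Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)

# The principal component coefficients are the eigenvectors of the
# correlation matrix; the scores are the projections of Z onto them.
R = np.corrcoef(Z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]      # largest eigenvalue first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

scores = Z @ eigvecs                   # scores[:, 0] is 'score1', etc.
proportion = eigvals / eigvals.sum()   # share of the total variation
```

Plotting scores[:, 1] against scores[:, 0] with country labels would reproduce the kind of display shown in Figure 10.3.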
10.2 CASE STUDY 2
[Figure: plot of the mean number of species per quadrat, with a bar showing one standard deviation, for each site.]
Figure 10.4 The mean number of species per quadrat on each of three sites in
Case study 2.
separately for each area; for example by giving estimates of the mean
number of species per quadrat with a standard deviation to show the
variability within the site (Figure 10.4). However, what we should not do is
use the method of analysis of variance to answer the question: 'Is there
evidence of a difference in percentage cover of a particular species between
the three areas?' This is because we have not started from a common
population and then applied the treatments (top, middle, bottom) at
random.
Instead we should use an exploratory technique, such as principal
components analysis, to summarize the variability. This has the added
advantage of using the information about all the species together. It is a
type of multivariate analysis.
We enter the data into MINITAB, with 37 columns (= species) and 30
rows (=quadrats).
Here are the data (collected by Wye College students) for the first six
species (= columns) which represent Agrostis capillaris, Brachypodium
pinnatum, Carex caryophyllea, Carex flacca, Dactylis glomerata and
Festuca ovina:
ag.cap:   0 0 0 0 17 13 100 50 20 0 0 40 66 70 53 83 87 100 0 0 50 90 40 0 0 0 0 0 0 0
brac.pin: 0 0 0 40 40 0 0 0 0 0 0 0 0 0 0 100 60 53 100 70 47 22 87 80 40 60 0 0 100 100
car.car:  90 87 89 93 90 80 100 90 100 60 0 0 0 0 0 0 0 0 0 0 40 60 0 20 40 0 40 40 50 88
car.fla:  50 100 10 60 80 60 0 0 0 0 8 20 0 0 0 50 25 38 10 20 80 40 0 0 0 0 0 0 0 0
dac.glo:  0 0 0 0 88 63 30 50 100 80 20 0 0 0 0 0 0 20 0 0 0 0 0 0 5 50 0 0 0 40
fest.ov:  60 80 40 0 0 40 20 0 40 50 100 88 65 100 90 70 0 90 40 80 40 0 100 0 100 0 100 90
Eigenvalue  11.544   5.144   2.639   2.257   2.016   1.773
Proportion   0.312   0.139   0.071   0.061   0.054   0.048
Cumulative   0.312   0.451   0.522   0.583   0.638   0.686
Here the first two axes account for only 45% of the total variation but,
as we shall see, they still provide a useful summary.
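The Proportion and Cumulative rows of this printout follow directly from the eigenvalues: with 37 standardized variables the total variance is 37, so each proportion is simply the eigenvalue divided by 37. A quick check in Python:

```python
# First six eigenvalues from the printout above; 37 standardized
# variables (species) means the total variance is 37.
eigenvalues = [11.544, 5.144, 2.639, 2.257, 2.016, 1.773]
total_variance = 37.0

proportion = [ev / total_variance for ev in eigenvalues]
cumulative = []
running = 0.0
for p in proportion:
    running += p
    cumulative.append(running)

print([round(p, 3) for p in proportion])
# [0.312, 0.139, 0.071, 0.061, 0.054, 0.048]
print([round(c, 3) for c in cumulative])
# [0.312, 0.451, 0.522, 0.583, 0.638, 0.686]
```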
Variable     PC1     PC2     PC3     PC4     PC5     PC6
ag.cap    -0.121  -0.032  -0.248  -0.152   0.277  -0.257
brac.pin   0.120   0.283  -0.114  -0.086  -0.034  -0.024
car.car    0.239  -0.005  -0.060   0.063   0.011   0.046
car.fla    0.268  -0.022  -0.049   0.077  -0.051  -0.015
dac.glo   -0.173   0.190  -0.171  -0.095  -0.156   0.172
fest.ov    0.040   0.196  -0.089  -0.079  -0.259  -0.307
hol.lan   -0.096   0.207  -0.299  -0.173   0.108   0.054
lol.per   -0.177  -0.185  -0.062   0.085   0.019   0.193
ach.mill  -0.080   0.175   0.293   0.219   0.147  -0.050
agr.eup   -0.085   0.173  -0.349  -0.037  -0.219   0.121
cam.rot    0.163  -0.023   0.124  -0.198  -0.282  -0.110
cir.ac     0.183   0.057   0.021   0.171   0.228  -0.005
cir.ar    -0.088   0.219   0.189   0.277  -0.047  -0.102
cir.pal   -0.062   0.203  -0.339   0.022  -0.130  -0.055
crep.spp  -0.072  -0.231  -0.090   0.127  -0.023   0.093

Variable     PC2     PC3     PC4     PC5     PC6
cru.lae    0.243   0.137   0.130   0.088  -0.008
cyn.cri   -0.197   0.116   0.010  -0.229   0.023
gal.ver   -0.143  -0.078   0.282  -0.247   0.375
gen.am    -0.095  -0.194   0.263   0.260   0.025
gle.hed    0.247   0.031   0.173   0.155  -0.176
hel.num    0.017   0.069  -0.092  -0.003   0.134
hie.pil   -0.003  -0.044   0.145   0.059   0.095
hyp.per   -0.029  -0.197   0.286  -0.130  -0.112
leon.sp   -0.074  -0.107   0.005   0.003  -0.112
lot.cor   -0.324  -0.017  -0.033  -0.093  -0.315
ori.vul    0.072   0.113  -0.388   0.061   0.209
pim.sax   -0.006  -0.025   0.039  -0.137  -0.078
plan.lan  -0.313   0.044   0.073  -0.084   0.070
pol.vul   -0.144  -0.233   0.258   0.114  -0.093
pot.rep    0.164  -0.189   0.204  -0.364   0.136
pru.vul   -0.182  -0.117  -0.090  -0.147  -0.501
ran.spp    0.002  -0.310  -0.014   0.368   0.061
san.min    0.021   0.046  -0.064  -0.056   0.063
thy.spp    0.021  -0.043  -0.020  -0.071   0.053
tri.rep   -0.193  -0.101   0.020   0.042   0.024
ver.cha    0.192   0.167   0.327  -0.100  -0.203
vio.hir   -0.027  -0.124   0.054  -0.046  -0.082
Row   score1     score2
1    -2.65742    3.23235
2    -3.11976    3.64558
3    -2.50492    2.44219
4    -2.57511    3.52153
5    -1.33581    2.05404
6    -0.97591    2.16692
7    -2.70955    3.06575
8    -2.10436    1.72044
9    -1.74847    0.68887
10   -1.88399    1.47517
11    6.87997    0.59776
12    3.72506    1.42079
13    6.68716   -1.77234
14    5.48874   -0.24516
15    2.73888   -0.02376
16    4.11199    1.25905
17    6.12395   -0.83651
18    3.77876   -0.01016
19    3.09141   -0.28989
20    2.23702    0.76591
21   -2.96016   -1.76356
22   -3.19158   -3.29659
23   -2.29824   -1.86741
24   -2.34998   -4.23661
25   -3.42761   -1.54096
26   -2.81131   -2.96905
27   -3.48355   -2.46572
28   -2.09768   -1.66073
29   -1.73013   -1.67677
30   -0.16408   -2.13446
We insert ten '1's, followed by ten '2's and ten '3's in column 39 so that
we can then obtain a labelled plot showing the three areas (Graph, Plot,
y = score2, x = score1, For each Group, Group Variables C39). This is
shown in Figure 10.5. We can edit the graph to add a title and name the
three areas.
We can now interpret the meaning of the two axes. For each of the first
two components I have selected species with relatively large (positive or
negative) coefficients and with whose ecology I am familiar.
Principal component 1

Large weights or coefficients   Short name   Full names
(positive or negative)
-0.224                          tri.rep      Trifolium repens (white clover)
-0.177                          lol.per      Lolium perenne (perennial ryegrass)
-0.168                          plan.lan     Plantago lanceolata (ribwort plantain)
 0.281                          san.min      Sanguisorba minor (salad burnet)
 0.273                          hel.num      Helianthemum nummularia (rock rose)
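The shortlisting step, ranking species by the absolute size of their coefficients, can be sketched in Python. The dictionary below holds only a handful of values copied from the tables above; the point is the ranking, and choosing a cut-off remains a matter of judgement.

```python
# PC1 coefficients for a handful of species, copied from the tables
# above; the point is the ranking step, not the full species list.
pc1 = {
    "tri.rep": -0.224, "lol.per": -0.177, "plan.lan": -0.168,
    "san.min": 0.281, "hel.num": 0.273, "cir.ac": 0.183,
}

# Rank by absolute size of the coefficient, largest first.
ranked = sorted(pc1.items(), key=lambda kv: abs(kv[1]), reverse=True)
print(ranked[0])   # ('san.min', 0.281)
```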
[Figure: scatter plot of score2 (y axis) against score1 (x axis); quadrats from the Top site are plotted as o, the Middle site as +, and the Bottom site as x.]
Figure 10.5 Plot of score2 against score1 for plant species data in Case study 2.
Principal component 2

Large weights or coefficients   Short name   Full names
(positive or negative)
-0.324                          lot.cor      Lotus corniculatus (bird's-foot trefoil)
-0.313                          plan.lan     Plantago lanceolata (ribwort plantain)
 0.283                          brac.pin     Brachypodium pinnatum (tor grass)
 0.247                          gle.hed      Glechoma hederacea (ground ivy)
[Figure: the score plot annotated with an ecological interpretation of the axes: axis 1 runs from 'fertile moist' towards 'infertile dry', and the annotations 'light grazing' and 'heavy grazing' distinguish the groups on axis 2 (Top o, Middle +, Bottom x).]
11 Your project
If you are short of ideas for a project ask a lecturer for help. Perhaps the
lecturer has an idea for a project which may appeal to you. Find out if last
year's project reports are available for you to consult, to gain an idea of
the sort of topics which might be suitable. There are also books which
contain project suggestions (Appendix B).
Start thinking about your project early. You need to find an appropriate
supervisor who is willing and able to advise you. There may be only one
person who has the experience to supervise a project in the particular
subject area which interests you. However, if there is a choice, ideally you
should find someone whom you believe you will get on with and who will
also be available to give you feedback at critical times.
It is important to realize that your potential supervisor will probably
be a very busy person and you should always arrange to see him or her by
appointment and have a well-prepared list of points ready to discuss. A
supervisor will have his or her own other deadlines for teaching, research,
administration or pastoral care. He or she will be pleased to help you but
you should each be aware of the other's preoccupations because this will
help you to plan your joint use of time efficiently.
With this background in mind it is useful to find out whether your
potential supervisor will be available to give advice at intervals throughout
your project. Will you be able to meet for regular discussion? If not, is
there someone else who can help you if you need advice when your
supervisor is unavoidably absent?
Pay attention to any deadline for agreeing a project title and outline
with your supervisor.
appointment to ask advice as soon as possible. It is always a great relief
to share a concern and to be able to plan ahead with confidence.
Remember, biological material is variable. We need to draw conclusions
from experiments or surveys by induction from a sample to the whole
population. Statistical theory allows definite statements to be made with a
known probability of being correct.
We should note that rigorous statistical inferences can only be drawn
from properly designed experiments. In these, the experimenter has the
power to allocate different treatments to particular individuals. Observational studies in which comparisons are made between individuals which
happen to have naturally encountered different conditions are always open
to qualified interpretation.
11.3 GENERAL PRINCIPLES OF EXPERIMENTAL DESIGN AND
EXECUTION
In Chapters 3 and 4 we covered the importance of randomization,
replication and blocking in designing an experiment in some detail. We
will now see how these topics fit into the broader considerations involved
in designing your project.
11.3.1 Why are you carrying out this experiment?
OK - it is a project which fulfils part of the requirement for your degree.
However, are you just generally curious about whether changing the
nutrients which a plant receives will change its growth rate; do you want
to compare the effects of two or more ages of insect host on the rate of
parasite development; are you working in the controlled conditions of a
glasshouse in the hope that your results may help in the selection of
treatments for a subsequent field experiment?
11.3.2 What population are you studying?
You should ensure that the experimental material is representative of the
population about which you wish to make inferences. The plant ecologist
John Harper has pointed out the 'trilemma' experienced by an experimental scientist. He or she may seek:
Precision
Precision can be achieved by using unique genotypes and carrying out the
work in controlled environments. This should provide repeatable estimates
with narrow confidence intervals. But how relevant are the results to the
'real world'?
----li
E_X_P_E_R_IM_EN_T_A_L_D_E_SI_G_N_A_N_D_E_X_E_C_V_T_IO_N
____
L--_ _ _ _
Realism
This may be achieved by studying individual plants in the field. This leads
to low precision. Estimates will have very large confidence intervals
because of the large amount of random variation. Only large differences
between populations will be detected as statistically significant.
Generality
It may be desirable to have information about treatment effects on a wide
range of, for example, different soil types. With limited resources there is
a danger of sacrificing both precision and realism and of ending up with a
shallow study.
11.3.3 What are the experimental units and how are they grouped?
Experimental units (test tubes, plots, pots, plants or animals) should not
be able to influence each other and should be of a practical size and shape
for the study. For example, if you are interested in the weight gain made
by insects feeding on different species of plants, the experimental unit is
the plant. Sensibly, you may choose to have ten insects feeding on each
plant. Their average weight gain will provide a good estimate of the plant's
value as a food source but you should not be tempted to consider the
weight of each insect as an independent value because they have been
competing with each other for food, so you must use the mean weight of
the ten insects as the unit of assessment.
The plant pots should be large enough to allow for plant growth
throughout the time of the experiment and spaced widely enough apart so
that the plants do not shade each other.
It may be possible to group experimental units so that members of the
same group will experience similar background conditions. For example,
put the biggest insects or the tallest plants into group one, medium-sized
individuals into a second group and the smallest ones into a third.
Consider applying each treatment (and a control) to one individual
selected at random from within each of these groups. This will improve
precision (in a similar way to stratified random sampling). It will result in
narrower confidence intervals and in a greater chance of detecting
differences between treatment populations (if they exist).
Treatments should be applied and records should be made group by
group rather than treatment by treatment. In the analysis you would then
account for variation between these groups or blocks.
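This group-then-randomize scheme (a randomized block design) can be sketched in Python. The twelve units, the block names and the treatment labels below are hypothetical, chosen only to illustrate the allocation.

```python
import random

# Hypothetical example: twelve plants ranked by size and split into
# three blocks of four; four treatments (including a control) are
# allocated at random within each block.
treatments = ["control", "A", "B", "C"]
blocks = {
    "largest": [1, 2, 3, 4],
    "medium": [5, 6, 7, 8],
    "smallest": [9, 10, 11, 12],
}

random.seed(1)                      # fixed seed so the run repeats
allocation = {}
for name, units in blocks.items():
    shuffled = treatments[:]
    random.shuffle(shuffled)        # a fresh randomization per block
    for unit, treatment in zip(units, shuffled):
        allocation[unit] = (name, treatment)

for unit in sorted(allocation):
    print(unit, *allocation[unit])
```

Each block receives every treatment exactly once, which is what lets the analysis separate block-to-block variation from treatment effects.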
11.3.4 What are the experimental treatments?
Is there a naturally defined control? If so, it should be included. For
example, in estimating the effects of different levels of a chemical on an
Questions to be answered. (How does pH affect reaction time?)
Hypotheses to be tested. (The null hypothesis is that there is no linear
response.)
Effects to be estimated. (The mean increase in temperature is 5°C, with
a 95% confidence interval of between 4°C and 6°C.)
If you have both a primary and a secondary objective you should make
sure that the design of the experiment is effective and efficient for the
primary objective and, ideally, also for the secondary objective.
Think carefully about what you will be measuring or counting. Some
variates will be relatively easy to observe: the number of live seedlings in a
pot, for example. Others may be more difficult: the leaf area of a plant,
say. Decide which variates are of most interest to you. If they prove to be
very time-consuming to record you may choose not to record any others.
Consider whether you will analyse each variate separately or whether you
will combine any of them before analysis. For example, you might decide
to multiply leaf width by leaf length to obtain an approximate estimate of
leaf area.
If you make the same measurement on each plant on several occasions
(for example measuring height), these will not be independent
observations; they are repeated measures. A simple approach is to
subtract, say, the first height from the last height and to analyse the
increase in height over time.
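A minimal sketch of this last-minus-first reduction, with made-up heights:

```python
# Hypothetical heights (cm) of three plants measured on four
# occasions; the four values for one plant are not independent.
heights = {
    "plant1": [10.2, 13.1, 16.0, 18.4],
    "plant2": [9.8, 12.0, 14.9, 17.2],
    "plant3": [11.0, 14.2, 17.5, 20.1],
}

# Reduce each series to a single figure: total increase in height,
# giving one independent value per plant for the analysis.
increase = {plant: h[-1] - h[0] for plant, h in heights.items()}
print(increase)
```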
It may well be sensible to obtain some estimate of the size of individuals
before they have been exposed to any treatments. For example, the weight
of each animal or the height of each plant might be recorded and called a
covariate. These values can then be used to account for some of what
would otherwise be classified as simply random variation present in the
experimental results. This may well increase the precision of your
experiment. You should ask advice about how to make use of such
information.
11.3.7 How many replicates should you have?
This is a common question. To answer it you need to know or be able to
make a sensible guess at two things:
1. The minimum size of the difference between any two treatment means
which you would regard as being of practical importance (for example,
5 kg per plot difference in mean yields or 2 mm per day difference in rate
of growth).
2. The likely variability of the experimental material. A pilot study or
the results of other people's work on similar systems is useful here.
They will probably have presented an estimate of the variability in their
experiment in the form of a standard error of the mean or of a
191
192
I 1L-__________________y_O_U_R__P_R_O_JE_C_T__________________~
difference between two means, a confidence interval, or a least
significant difference. All of these contain the standard error of a mean
which can be used to calculate the standard deviation. To do this we
multiply the standard error by the square root of the number of
replicates which they used in their experiment. For example, if the
standard error of their mean was 0.5 kg and there were four replicates
then the standard deviation will be: 0.5 × √4 = 1.0 kg.
The standard deviation can be used to calculate the number of replicates
you need to stand a reasonably good chance of detecting the required
difference (if it exists) between two treatment populations. You may need
to ask a statistician for advice about this because the required calculation
depends on the design of the proposed experiment. If you have too few
replicates there will be no hope of detecting your required difference. If
you have too many replicates you may be wasting valuable time and
money. Your project will in all probability have a small budget and you
must remain within it.
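The back-calculation of the standard deviation, and a rough replicate count, can be sketched in Python. The replicate formula below is the common normal-approximation rule for comparing two means at 5% two-sided significance with 80% power; as noted above, the exact calculation depends on the design of the proposed experiment, so treat this only as a first guess to discuss with a statistician.

```python
import math

# Published experiment: standard error of a mean of 0.5 kg from four
# replicates, so the standard deviation is SE * sqrt(n) = 1.0 kg.
se, n_published = 0.5, 4
sd = se * math.sqrt(n_published)

def replicates_needed(sd, difference, z_alpha=1.96, z_beta=0.84):
    """Rough replicates per treatment needed to detect `difference`
    between two means: normal approximation with 5% two-sided
    significance (z = 1.96) and 80% power (z = 0.84)."""
    n = 2 * ((z_alpha + z_beta) * sd / difference) ** 2
    return math.ceil(n)

print(sd)                              # 1.0
print(replicates_needed(sd, 1.0))      # 16 replicates per treatment
```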
Once the number of replicates has been fixed it is important to
remember to assess and record them separately. Sometimes people
inadvertently combine the information about one treatment from all
replicates. This makes the results impossible to analyse.
11.3.8 What resources (and constraints) do you have?
Will your project need to take place at a certain time of year? It is no good
friend you can update it, say, once a week as an insurance policy. Also, if
you have a brainstorm and remove copies of a file from both of your day-to-day disks in error (easy to do when you are tired), your third disk is
not there so, thankfully, you cannot make the same mistake with that one
immediately. If ever you delete a file and regret it because it was your only
copy, ask for help from a computer expert immediately. As long as you
have not tried to save anything else onto the disk it will usually be possible
to retrieve the file.
It is vital to make comprehensive notes of everything that happens as
you go along. It is infuriating trying to puzzle out exactly what you did
and when some months after the event. Take a few photographs of key
features to illustrate the report. For example, striking differences in
appearance between the treatments or unexpected events like a flood on
part of the site may help to brighten your presentation. Ask a friend to
take a photograph of you applying treatments and recording results. When
you read a useful paper, make a note not only of its contents but also of
its full bibliographical reference (Chapter 12). You may wish to store the
references in a file on your word processing package on the computer so
that you can edit a copy of this file at the end of your project to provide
the references section of your report.
floor flats because flats are grouped in two-storey blocks with five flats
per storey. A random sample is preferable.
If you visit each selected household you may find that in many cases
the inhabitants are out when you call. It is important to call back many
times (perhaps at a different time of day) to ensure that you catch them in.
For example, if you call only between 9 am and 5 pm on a weekday your
sample will under-represent households where all adults are working
outside the home. Some people may not wish to take part in the survey. It
may be easier, quicker (and safer?) simply to deliver a questionnaire to
each selected household. In this case there should also be a covering letter
which clearly states where you come from and how you may be contacted
as well as the purpose and importance of the survey, together with a
stamped addressed envelope for the return of the completed questionnaire.
It will undoubtedly be necessary and worth while to send duplicate
questionnaires and return envelopes to those who have not responded by
the date requested. This will minimize non-response errors; perhaps those
who tend not to respond quickly are also those who tend not to participate
in the waste collection scheme?
Figure 11.1 A specimen section of the questionnaire, with tick boxes for the type of home (terrace house, semi-detached house, detached house, flat, other (please specify)); a yes/no question with routing instructions ('If yes go to question 4', 'If no go to question 3'); tick boxes for the types of waste put out (newspapers, magazines/waste paper, cardboard, glass, batteries, aluminium cans, tin cans, kitchen waste, garden waste, other (please specify)); and a numbered 'Office use only' coding column (Ref 1-3).
1. Misreading handwriting (age 23 instead of 28).
2. Lapses of memory. (When did you start participating in the scheme?)
It is better to give a range of time periods: less than six months ago;
between six months and one year ago; and so on; and ask people to
select one.
3. The tendency of people to want to appear 'socially acceptable' and so
to overestimate, say, the proportion of household waste they put out
for recycling.
A pilot study will help to minimize response errors but it is also
important to try to validate the answers independently. For example, you
could monitor the actual amount of each type of waste collected from the
village each week for a few weeks.
11.4.3 Analysis of questionnaire data
In many cases the results of the questionnaire may simply be presented
as the percentage of respondents who fall in a certain category (e.g. '55%
put out waste for recycling'), but ensure that the total number of
respondents is also given (an estimate based upon 200 respondents will be
more precise than one based upon ten). However, if it was the intention
to compare the characteristics of different types of respondent, this can be
done using a X2 contingency test (section 9.5). (Remember to use the actual
numbers not percentages.)
Suppose we obtained the following results:
Expected counts are printed below observed counts

                   Recycles waste
Type of house       Yes      No     Total
1. Terrace           30      20        50
                  26.71   23.29
2. Semi-detached     24      15        39
                  20.84   18.16
3. Detached          14       3        17
                   9.08    7.92
4. Flat              10      30        40
                  21.37   18.63
Total                78      68       146
The p value is less than 0.001, indicating very strong evidence that there
is a difference in the behaviour of people living in different types of home.
The components of χ² which come from the flats (6.0 and 6.9) are by far
the largest, suggesting that this group differs from the other three. We can
break down the table in two ways to examine this further. First we can
group together the first three types of home (group 1) and compare the
results from their respondents with those from the flats (group 4):
Expected counts are printed below observed counts

                    Yes      No     Total
Group 1              68      38       106
                  56.63   49.37
Group 4 (flats)      10      30        40
                  21.37   18.63
Total                78      68       146

ChiSq = 2.283 + 2.618 + 6.049 + 6.939 = 17.890
df = 1, p = 0.000
                    Yes      No     Total
1. Terrace           30      20        50
                  32.08   17.92
2. Semi-detached     24      15        39
                  25.02   13.98
3. Detached          14       3        17
                  10.91    6.09
Total                68      38       106

ChiSq = 0.134 + 0.240 + 0.041 + 0.074 + 0.878 + 1.571 = 2.939
df = 2, p = 0.230
Now, the p value is 0.23 so there is no reason to doubt our null
hypothesis of no difference in behaviour between those living in the
remaining three types of home.
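The χ² arithmetic above can be reproduced in plain Python. This sketch computes only the statistic and the degrees of freedom; the expected counts and components MINITAB prints come out of the same calculation.

```python
# Observed counts from the survey table above:
# rows = house types, columns = (recycles, does not recycle).
observed = [
    [30, 20],   # 1. Terrace
    [24, 15],   # 2. Semi-detached
    [14, 3],    # 3. Detached
    [10, 30],   # 4. Flat
]

def chi_squared(table):
    """Pearson chi-squared statistic and degrees of freedom for a
    contingency table (no continuity correction)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand_total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand_total
            stat += (obs - expected) ** 2 / expected
    df = (len(table) - 1) * (len(table[0]) - 1)
    return stat, df

stat, df = chi_squared(observed)
print(round(stat, 2), df)   # 20.61 with 3 df, beyond the 0.1%
                            # critical value of 16.27, so p < 0.001
```

Running the same function on the grouped 2 × 2 table [[68, 38], [10, 30]] reproduces the ChiSq of 17.89 shown in the output above.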
11.4.4 Ethical considerations
Proposed experiments in medical research (clinical trials) must be
approved by an independent committee of responsible people. For
example, the allocation of treatments at random to people who are
suffering from a disease may not be considered to be ethical if there is
already evidence that one of the treatments is likely to be more efficacious
than another. There are also strict controls on research work involving
animals. Such matters are the responsibility of your supervisor.
However, there are also more general ethical considerations. It is
unethical to waste resources, to carry out a badly designed experiment or
survey, or to analyse it incorrectly and so mislead those reading a report
of your results.
11.5 HEALTH AND SAFETY
Your institution will have guidelines which you must read and follow.
There will be booklets on this subject available from your library. Half an hour spent considering possible problems in advance is time well spent.
Accidents do happen but sensible precautions will minimize the risk of
their occurrence.
11.5.1 In the field
Fieldwork can be dangerous and you should discuss with your supervisor
whether you need to be accompanied or not. You should always leave a
note of your route with a responsible person, together with a time by
which you should have contacted them to confirm that you have returned
safely. You should also always obtain permission to enter and work on
private land (this includes nature reserves). Plants must not be dug up and
removed from a site.
Wear sensible protective clothing and take a map, compass (which you
know how to use!), whistle, first-aid kit, and emergency food supplies with
you. If your last tetanus inoculation was ten or more years ago it is
sensible to have a booster in case you cut yourself. If you carry out fieldwork near water which is frequented by rats you should be aware of the
possibility of contracting leptospirosis. If you work in areas where there
are sheep you may risk contracting Lyme disease which is spread by means
of ticks. Such diseases may first express themselves by 'flu-like' symptoms,
12 Preparing a report

12.1 COMPUTERS
used ones. These or other computer packages will check your grammar
and suggest how to improve your style.
In recent years computers have revolutionized access to the literature.
It is now possible to search databases for published papers which include a
keyword or set of keywords. For example, you might ask for references
to papers which include the keyword 'blood' or only for those which
contain the phrase 'blood pressure'. You can obtain a copy of the output
on disk, so that you can read the titles and abstracts at leisure and edit out
the unwanted references before printing out details of the interesting ones
you wish to follow up.
12.2 BASICS
Writer's block is very common. The only way around it is to work back
from the final deadline and set down a schedule of work to achieve it. This
should include some spare time to allow for emergencies.
12.2.1 Structure
A report should have a clear structure. If you are not sure what that
structure should be, start with the basics. Useful section headings are as
follows:
Introduction
This presents the background to the study, and reviews the relevant
published literature before outlining the general aims and specific
objectives of the project.
Materials and methods
Results
This presents the data obtained with a brief explanation of the major
trends revealed. You should illustrate the results section with tables and
graphs (see section 12.3) and with reference to statistical analyses as
appropriate. Ideally you should provide exact 'p values' rather than simply
'p < 0.05'. It is important to concern yourself with your original
objectives. It is bad practice to 'dredge the data', in other words to make
every possible comparison between treatments in the hope of coming up
with a 'significant' difference!
Discussion
How do the results affect our existing knowledge of the subject? The
discussion provides the setting for interpretation of your results. You
should try to avoid repeating the results in this section. Here you may be
more imaginative and speculative in your writing, perhaps outlining
possible reasons for the findings and giving suggestions for further work.
It is important to check that your conclusions are justified by the analysis
of the data and that you are not trying to make the results fit some
preconceived idea of what you think ought to have happened.
This section is the place also to compare and contrast your results with
those of previous studies, perhaps suggesting reasons for any discrepancies. It is important not to attempt to extrapolate the results of a
sub-population which you have studied to the whole population. For
example, what you have discovered about the behaviour of worms on the
local farm may well not be true for worms of a different species, or on a
different soil type, or 200 kilometres further north.
References
This is a list of the material cited in the report. You should quote sources
of reference from the published literature or personal communication in
the text where appropriate and list them fully in the references section.
You may be instructed to follow a particular 'house style'. This may seem
fiddly to you but it is essential to give accurate references and to double-check that the list contains all the required references and no others.
Consider how annoyed you would be if you wanted to follow up an
incorrect reference.
It is important to give credit to other people for their work. You can
refer to their publications in your text by giving the surname(s) of the
author(s) and the year of publication:
Our findings agree with those of Watt (1992) and differ markedly
from the results of work in the USA (Smith, 1990; Jones and Smith,
1994; Smith et al., 1995).
Notice how grouped references in the text are given in date order rather
than alphabetical order of first author's name. Also, it is common for
publications with three or more authors to be referred to by the first
author's name followed by 'et al.' at the second or subsequent time of
mention. This is an abbreviation for 'et alia' which is Latin for 'and others'
and is written in italics to indicate that it is in a foreign language.
If you attempt to represent other people's work as your own, either by
copying parts of their text (with or without editing it), or by rephrasing
their ideas without acknowledging the source, you are guilty of plagiarism.
At best this is bad manners and at worst it is criminal. If you are
submitting your report as part of the requirements of a degree you will find
that your institution has clear rules forbidding plagiarism and stating the
disciplinary action which may be taken if your report contains such
material. Failure to acknowledge sources is also unprofessional in that you
should enable your readers to read the original material if they wish.
In the list of references at the end of the report it is important to provide
enough detail to enable readers to locate the works. There are different
ways of doing this and journals will specify their particular 'house style'. A
commonly used system has the following formats for three different types
of publication:
Paper in a journal
Bannister, N.R., and Watt, T.A. (1995) Effects of cutting on the growth
of Crataegus monogyna (hawthorn) in hedges. Journal of Environmental
Management, 45, 395-410.
Chapter in a book
Bannister, N.R., and Watt, T.A. (1994) Hedgerow management: history
and effects. In Hedgerow Management and Nature Conservation, eds
T.A. Watt and G.P. Buckley, Wye College Press, Wye, Kent, pp. 7-15.
Book
Watt, T.A. (1997) Introductory Statistics for Biology Students, 2nd edn,
Chapman & Hall, London.
Notice how the surname(s) and initials of the author(s) are always
followed by the year of publication. The title of the document follows with
the journal or book title then being given. Italics are used for journal or
book titles. Editors' surnames and initials are given for books with
chapters written by different authors and the publisher's name and place
of publication are included for the books. Finally, page numbers are
required where only part of a book or journal is relevant.
The best way to become familiar with these principles is to read journal
articles and to see how to summarize information clearly and to present
cogent arguments justified by relevant references. Then have a go for
yourself!
Acknowledgements
You should thank all those people or organizations who have helped
you: your supervisor, members of the statistical and computing staff, the
laboratory supervisor and your friends. Don't forget the land-owners if
you have been working in the field and do send them a copy of your
finished report.
Appendixes
These are optional and contain items which are not required to follow
the flow of the argument. Examples include: raw data, calibration curves,
species lists, mathematical equations and statistical tables.
Abstract
A brief statement of aims, methods and results is helpful in all but the
briefest of reports. It should focus on the 'take-home message'. This may
well be the last part which you write. In many journals this is placed at the
beginning of the paper so that it can be readily scanned by those who
may not have time to read any more.
12.2.2 Drafts
When the above outline is in place, a first draft can appear. At this point
it becomes obvious that there are gaps which you need to fill or that points
should be made in a different order to emphasize connections. It is
important to immerse yourself in this task so that the momentum is
maintained. Write 'REPORT WRITING' in your diary and put a notice
saying 'REPORT WRITING - KEEP OUT' on your door! A first draft
will say things like: **must find comparative figures** or **check
analysis** or **find reference** or **yuck! - rewrite this!**. These points
are attended to in preparing a second draft which should then be given to
a friend whose opinion you value and (if it is the custom) to your
supervisor for comments.
This is a vital stage and such comments will lead to great improvements
in the clarity of the third draft. Don't be surprised if the reviewer covers
it with comments in red ink. We all have to endure this because it is easier
for others to spot our errors and inconsistencies than it is for us to see
them ourselves. You can usefully have your revenge by constructively
criticizing another student's draft report.
As you are writing you should consider who will read your report. What
will they know already? What do you want them to understand and to
remember? Browse through the literature in your field. When you find a paper
which is clear, brief and stimulating, use it as a model for your approach.
When you find yourself absorbed in typing your report onto a computer
you should be sure to take a short break after an hour's work, and a long
break after a further hour, otherwise you will find that your body will
protest, your mind become tired, and you will make mistakes. If you
find that your neck or your arms are beginning to ache ask advice about
your body position relative to the computer. Perhaps the chair needs
adjustment.
12.3 ILLUSTRATING RESULTS

12.3.1 Graphs
[Figure 12.1: bar chart of yield (t/ha, axis 0 to 10) for control, organic and conventional fertilizer treatments, with an error bar showing the SE of a mean.]
Figure 12.1
12.3.2 Tables
Although graphical plots can show general trends, tables are required to
convey quantitative features. Ideally, a good table will display patterns
and exceptions at a glance but it will usually be necessary to comment in
the text on the important points it makes. Some useful guidelines for
producing tables have been suggested by Ehrenberg (1977):
1. Round data to two significant figures. This refers to digits which are
'effective', in other words, which vary from one observation to
another.
2. Provide row and column averages or totals on the right-hand side of
the rows and at the bottom of the columns.
3. If you want to compare a series of numbers it is easier to do so if they
are in a column rather than in a row. This is because the eye finds it
easier to make comparisons vertically than horizontally.
4. Ordering columns and rows by the size of the values in each cell helps
to show any patterns.
5. Single spacing between cells guides the eye down a column and gaps
between rows guide the eye across the table.
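Guideline 1 is easy to automate. A minimal sketch in Python (the helper name round_sig is mine, not from the book) rounds a value to two significant figures:

```python
from math import floor, log10

def round_sig(x, sig=2):
    """Round x to `sig` significant figures."""
    if x == 0:
        return 0.0
    # Shift the rounding position according to the magnitude of x.
    return round(x, sig - 1 - floor(log10(abs(x))))

print(round_sig(17.25))    # 17.0
print(round_sig(0.02491))  # 0.025
```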
If you have carried out a factorial analysis of variance there is a neat
way of presenting the treatment means in a table, with their standard
errors. You may remember that in Chapter 7 we discussed a wholly
randomized design experiment with two levels of sowing and two levels of
cutting the field margins. The factorial analysis of variance of the mean
number of spiders per quadrat in each plot produced an error or residual
mean square of 1.333 (Table 6.6). Each of the four treatments had four
replicates, so each main effect has eight replicates. Use this information
and the formula for calculating a standard error (above) to check
Table 12.1.
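The check suggested here takes only a couple of lines. This sketch uses the figures given in the text (error mean square 1.333, four replicates per treatment, eight per main effect) to reproduce the standard errors quoted in Table 12.1:

```python
from math import sqrt

error_ms = 1.333  # residual mean square from Table 6.6

# SE of a mean = sqrt(error mean square / number of replicates)
se_treatment = sqrt(error_ms / 4)  # four replicates per treatment
se_main = sqrt(error_ms / 8)       # eight replicates per main effect

print(round(se_treatment, 3))  # 0.577
print(round(se_main, 3))       # 0.408
```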
Table 12.1 The effect of sowing and cutting on the mean number of spiders per plot

            Cut once   Cut twice   Mean
Sown          19.5       16.5      18.0
Unsown        15.0       13.0      14.0
Mean          17.25      14.75     16.0

SE treatment mean = 0.577; SE sowing mean = 0.408; SE cutting mean = 0.408

12.4 LANGUAGE

EXERCISE
Introduction
Forage rape (Brassica napus ssp. oleifera) is a crop grown for feeding
to livestock in autumn and winter. An average fertilizer requirement
for the crop is 125 kg ha-1 N, 125 kg ha-1 P2O5 and 125 kg ha-1 K2O
(Lockhart and Wiseman, 1978). We wished to investigate the optimal
amount of a compound fertilizer containing these nutrients to apply
to seedlings growing on what was reputed to be a very nutrient-poor
soil. A pilot study was carried out to find out the range of amounts
of fertilizer which should be used in a more detailed pot experiment
before carrying out a field experiment.
Materials and methods
On 10 September 1996 a 1 cm layer of gravel was put into each of
80 pots (10 cm x 10 cm in cross-section and 12 cm deep). The pots
were then filled with sieved soil from a nutrient-poor site near
Largetown, Midshire. There were 20 replicate pots for each of four
fertilizer treatments: the equivalent of 67.5, 101, 135 and 270 g m-2
(1 g m-2 is equivalent to 10 kg ha-1) of Growmore granular fertilizer
(N:P:K 7:7:7) which was ground to a powder and mixed throughout
the soil. The recommended rate for an 'ordinary garden soil' is
135 g m-2. The treatments were identified by colour codes (unknown
to the assessors to prevent bias) painted on the pots. The pots were
arranged in a wholly randomized design in the glasshouse.
Ten seeds of Brassica napus were sown in each pot and covered
with a thin layer of soil. The pots were examined daily and watered
using rain water as required. The plants were thinned to six per pot
on 16 September and to four per pot on 20 September.
At harvest on 15 October the number of plants per pot was
counted and their health scored (0 = perfect to 5 = dead). Plant
height and the number of true leaves per plant were noted and leaf
width and length (cm) were put into a formula to obtain an estimate
of leaf area (cm2) (H. Moorby, pers. comm.):
LA = -0.35 + 0.888(LW x LL)
where LA=leaf area, LW=maximum leaf width and LL=midrib length.
Shoot material from each pot was put into a labelled paper bag
and oven-dried at 80 °C for 48 hours before being weighed.
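As an aside, the leaf-area formula quoted above is simple to compute. This hypothetical Python helper (not part of the report's MINITAB analysis) applies it, with an illustrative width and length:

```python
def leaf_area(lw, ll):
    """Estimate leaf area (cm2) from maximum leaf width and midrib
    length (cm), using LA = -0.35 + 0.888 * (LW * LL)."""
    return -0.35 + 0.888 * (lw * ll)

# Illustrative values: a leaf 4 cm wide with a 10 cm midrib.
print(round(leaf_area(4.0, 10.0), 2))  # 35.17
```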
This report concentrates on the effect of fertilizer on shoot
dry-matter yield. The data were analysed using analysis of variance in
MINITAB Release 11 (Minitab Inc., 1996).
Results
By 16 September germination had occurred in all pots but with
increasing fertilizer there were fewer plants and these were of smaller
size and were yellower. There was very strong evidence that fertilizer
affected shoot dry-matter yield (p < 0.0001, Table 1).
Table 1 Analysis of variance for total shoot dry weight per pot

Source       DF      SS        MS        F       p
Fertilizer    3   1.10257   0.36752   29.61   0.000
Error        76   0.94326   0.01241
Total        79   2.04584

Mean total shoot dry weight per pot (g) at each fertilizer level (g m-2):

Fertilizer    67.5     101      135      270     SE mean
            0.4770   0.3655   0.3235   0.1505   0.02491
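A reader can verify the arithmetic of Table 1 directly: each mean square is its sum-of-squares divided by its degrees of freedom, F is the ratio of the two mean squares, and the SE of a treatment mean is the square root of the error mean square divided by the 20 replicate pots. A sketch:

```python
from math import sqrt

# Sums of squares and degrees of freedom from Table 1.
ss_fert, df_fert = 1.10257, 3
ss_err, df_err = 0.94326, 76

ms_fert = ss_fert / df_fert   # fertilizer mean square
ms_err = ss_err / df_err      # error mean square
f_ratio = ms_fert / ms_err    # variance ratio
se_mean = sqrt(ms_err / 20)   # 20 replicate pots per treatment

print(round(ms_fert, 5))  # 0.36752
print(round(f_ratio, 2))  # 29.61
print(round(se_mean, 5))  # 0.02491
```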
[Figure 1: boxplots, one per fertilizer level (67.5, 101, 135, 270), of shoot dry weight (g/pot) on a 0.0-0.8 axis.]
Figure 1 Boxplots of the total shoot dry weight per pot for the four
fertilizer levels. There was a negative linear relationship between leaf area
and shoot dry weight.
[Figure 2: scatter plot titled 'Shoot dry weight against leaf area, labelled by fertilizer level'; shoot dry weight (g/pot, 0.3-0.8) against leaf area per pot (sq cm, 50-350); point symbols o = 67.5, + = 101, x = 135, * = 270 g/sq m fertilizer.]
Figure 2 Graph of shoot dry weight per pot against leaf area per pot,
labelled by fertilizer level: A = 67.5, B = 101, C = 135 and D = 270 g m-2
Discussion
I thank the glasshouse staff for maintenance of the plants and the
first-year students on Course 199 for harvesting them and recording
the data, with help from P.Q. Smith. I thank E.O. Jones for
computing advice.
References
Appendix A
Choosing how to analyse data
from a replicated, randomized
experiment
12   14   20
15   18   24
21   25   32
32   29   31
Appendix B
References and Further reading
REFERENCES
FURTHER READING
There are vast numbers of articles and books written about statistics and
carrying out research at a wide range of levels. I have selected a few which
are modern and relatively easy to understand.
Elementary statistics
Clegg, F. (1983) Simple Statistics: A Course Book for the Social Sciences,
Cambridge University Press, Cambridge. [An excellent, simple and
humorous book.]
Heath, D. (1995) An Introduction to Experimental Design and Statistics
for Biology, UCL Press, London. [Strong on experimental design; includes
Poisson and binomial distributions but not two-way Anova.]
Mead, R., Curnow, R.N., and Hasted, A.M. (1993) Statistical Methods
in Agriculture and Experimental Biology, 2nd edn, Chapman & Hall,
London. [A revision of a very popular textbook. It is more sophisticated
and covers more complex methods than the present book but is highly
recommended as a next step if you have found this book easy to follow.]
Neave, H.R., and Worthington, P.L. (1989) Distribution-Free Tests,
Unwin Hyman, London. [This covers non-parametric statistical methods
which are useful when your data do not meet the assumptions (like
Normality of residuals) required for parametric tests.]
Porkess, R. (1991) Dictionary of Statistics, 2nd edn, Collins, London.
[Provides definitions of technical terms.]
Rees, D.G. (1989) Essential Statistics, 3rd edn, Chapman & Hall, London.
[Very clear. Covers many of the same topics as this book but with a
slightly more mathematical approach; omits analysis of variance and PCA
but includes a final chapter on MINITAB examples.]
Samuels, M.L. (1989) Statistics for the Life Sciences, Maxwell Macmillan,
San Francisco. [Very thorough. Includes basic analysis of variance and
linear regression but not PCA.]
Sokal, R.R., and Rohlf, F.J. (1981) Biometry: The Principles and Practice
of Statistics in Biological Research, W.H. Freeman, New York. [More
advanced; many research scientists would regard this as their standard
reference book and it is fine if you are really keen.]
Medical statistics
Fowler, J., and Cohen, L. (1990) Practical Statistics for Field Biology,
Open University Press, Milton Keynes. [Very popular, especially with
ecologists; a slightly more mathematical approach than this book.]
Fry, J.C. (ed.) (1993) Biological Data Analysis: A Practical Approach,
IRL Press, Oxford. [An advanced text which is designed for biologists -
ideal for postgraduate research students.]
Pentz, M., Shott, M., and Aprahamian, F. (1988) Handling Experimental
Data, Open University Press, Milton Keynes. [A beginners' guide: very
useful.]
Report writing
Cooper, B.M. (1964) Writing Technical Reports, Penguin, Harmondsworth. [Particularly helpful on correctness and style.]
O'Connor, M. (1991) Writing Successfully in Science, Harper Collins,
London. [An excellent book which covers all aspects of communicating in
science, including presenting posters and talks as well as writing a paper.]
Wheatley, D. (1988) Report Writing, Penguin, London. [A good starting
point.]
Statistical tables
The following two books contain useful tables, including those reproduced
in this book.
Caulcutt, R. (1991) Statistics in Research and Development, 2nd edn,
Chapman & Hall, London.
Mead, R., and Curnow, R.N. (1983) Statistical Methods in Agriculture
and Experimental Biology, Chapman & Hall, London.
Other books
Appendix C
Statistical tables
TABLE C.1
[The proportion P of the Normal distribution lying below the value z.]

  z       P        z       P        z       P        z       P
-4.00   0.00003  -1.50   0.0668   0.00   0.5000   1.55   0.9394
-3.50   0.00023  -1.45   0.0735   0.05   0.5199   1.60   0.9452
-3.00   0.0014   -1.40   0.0808   0.10   0.5398   1.65   0.9505
-2.95   0.0016   -1.35   0.0885   0.15   0.5596   1.70   0.9554
-2.90   0.0019   -1.30   0.0968   0.20   0.5793   1.75   0.9599

-2.85   0.0022   -1.25   0.1056   0.25   0.5987   1.80   0.9641
-2.80   0.0026   -1.20   0.1151   0.30   0.6179   1.85   0.9678
-2.75   0.0030   -1.15   0.1251   0.35   0.6368   1.90   0.9713
-2.70   0.0035   -1.10   0.1357   0.40   0.6554   1.95   0.9744
-2.65   0.0040   -1.05   0.1469   0.45   0.6736   2.00   0.9772

-2.60   0.0047   -1.00   0.1587   0.50   0.6915   2.05   0.9798
-2.55   0.0054   -0.95   0.1711   0.55   0.7088   2.10   0.9821
-2.50   0.0062   -0.90   0.1841   0.60   0.7257   2.15   0.9842
-2.45   0.0071   -0.85   0.1977   0.65   0.7422   2.20   0.9861
-2.40   0.0082   -0.80   0.2119   0.70   0.7580   2.25   0.9878

-2.35   0.0094   -0.75   0.2266   0.75   0.7734   2.30   0.9893
-2.30   0.0107   -0.70   0.2420   0.80   0.7881   2.35   0.9906
-2.25   0.0122   -0.65   0.2578   0.85   0.8023   2.40   0.9918
-2.20   0.0139   -0.60   0.2743   0.90   0.8159   2.45   0.9929
-2.15   0.0158   -0.55   0.2912   0.95   0.8289   2.50   0.9938

-2.10   0.0179   -0.50   0.3085   1.00   0.8413   2.55   0.9946
-2.05   0.0202   -0.45   0.3264   1.05   0.8531   2.60   0.9953
-2.00   0.0228   -0.40   0.3446   1.10   0.8643   2.65   0.9960
-1.95   0.0256   -0.35   0.3632   1.15   0.8749   2.70   0.9965
-1.90   0.0287   -0.30   0.3821   1.20   0.8849   2.75   0.9970

-1.85   0.0322   -0.25   0.4013   1.25   0.8944   2.80   0.9974
-1.80   0.0359   -0.20   0.4207   1.30   0.9032   2.85   0.9978
-1.75   0.0401   -0.15   0.4404   1.35   0.9115   2.90   0.9981
-1.70   0.0446   -0.10   0.4602   1.40   0.9192   2.95   0.9984
-1.65   0.0495   -0.05   0.4801   1.45   0.9265   3.00   0.9986

-1.60   0.0548    0.00   0.5000   1.50   0.9332   3.50   0.99977
-1.55   0.0606                                    4.00   0.99997
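The entries of Table C.1 can be reproduced with the error function in Python's standard library; phi(z) below is the proportion of the Normal distribution lying below z (a sketch, not part of the book):

```python
from math import erf, sqrt

def phi(z):
    """Cumulative probability of the standard Normal distribution."""
    return 0.5 * (1 + erf(z / sqrt(2)))

print(round(phi(-1.00), 4))  # 0.1587
print(round(phi(1.50), 4))   # 0.9332
print(phi(0.0))              # 0.5
```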
TABLE C.2
This table gives the value of t for which a particular percentage P of the
Student's t-distribution lies outside the range -t to +t. These values of t
are tabulated for various degrees of freedom.
Degrees of                           P
freedom     50     20     10      5      2      1    0.2    0.1

 1         1.00   3.08   6.31   12.7   31.8   63.7   318    637
 2         0.82   1.89   2.92   4.30   6.96   9.92   22.3   31.6
 3         0.76   1.64   2.35   3.18   4.54   5.84   10.2   12.9
 4         0.74   1.53   2.13   2.78   3.75   4.60   7.17   8.61
 5         0.73   1.48   2.02   2.57   3.36   4.03   5.89   6.87

 6         0.72   1.44   1.94   2.45   3.14   3.71   5.21   5.96
 7         0.71   1.42   1.89   2.36   3.00   3.50   4.79   5.41
 8         0.71   1.40   1.86   2.31   2.90   3.36   4.50   5.04
 9         0.70   1.38   1.83   2.26   2.82   3.25   4.30   4.78
10         0.70   1.37   1.81   2.23   2.76   3.17   4.14   4.59

12         0.70   1.36   1.78   2.18   2.68   3.05   3.93   4.32
15         0.69   1.34   1.75   2.13   2.60   2.95   3.73   4.07
20         0.69   1.32   1.72   2.09   2.53   2.85   3.55   3.85
24         0.68   1.32   1.71   2.06   2.49   2.80   3.47   3.75
30         0.68   1.31   1.70   2.04   2.46   2.75   3.39   3.65

40         0.68   1.30   1.68   2.02   2.42   2.70   3.31   3.55
60         0.68   1.30   1.67   2.00   2.39   2.66   3.23   3.46
∞          0.67   1.28   1.64   1.96   2.33   2.58   3.09   3.29
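To use Table C.2, read down to the degrees of freedom and across to the chosen percentage P; the tabulated t then gives, for example, a 95% confidence interval as mean ± t x SE. A sketch with made-up sample values (10 degrees of freedom, so t = 2.23 from the P = 5 column):

```python
# Made-up sample summary: mean, standard error, degrees of freedom.
mean, se, df = 16.0, 0.577, 10
t_5pc = 2.23  # from Table C.2, df = 10, P = 5 column

# 95% confidence interval for the population mean.
lower = mean - t_5pc * se
upper = mean + t_5pc * se
print(round(lower, 2), round(upper, 2))  # 14.71 17.29
```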
These tables give the values of F for which a given percentage of the
F-distribution is greater than F, for n1 (numerator, columns) and n2
(denominator, rows) degrees of freedom.

TABLE C.3
[5% points of the F-distribution, n1 = 1 to 24, n2 = 2 to 60.]
TABLE C.4
[1% points of the F-distribution, n1 = 1 to 24, n2 = 2 to 60.]

TABLE C.5
[0.1% points of the F-distribution, n1 = 1 to 24, n2 = 2 to 60.]
TABLE C.6
[The value of chi-squared for which the given percentage P of the
chi-squared distribution is greater.]

Degrees of                     P
freedom     50     10      5    2.5      1    0.1

 1         0.45   2.71   3.84   5.02   6.64   10.8
 2         1.39   4.61   5.99   7.38   9.21   13.8
 3         2.37   6.25   7.82   9.35   11.3   16.3
 4         3.36   7.78   9.49   11.1   13.3   18.5
 5         4.35   9.24   11.1   12.8   15.1   20.5

 6         5.35   10.6   12.6   14.5   16.8   22.5
 7         6.35   12.0   14.1   16.0   18.5   24.3
 8         7.34   13.4   15.5   17.5   20.1   26.1
 9         8.34   14.7   16.9   19.0   21.7   27.9
10         9.34   16.0   18.3   20.5   23.2   29.6

12         11.3   18.5   21.0   23.3   26.2   32.9
15         14.3   22.3   25.0   27.5   30.6   37.7
20         19.3   28.4   31.4   34.2   37.6   45.3
24         23.3   33.2   36.4   39.4   43.0   51.2
30         29.3   40.3   43.8   47.0   50.9   59.7

40         39.3   51.8   55.8   59.3   63.7   73.4
60         59.3   74.4   79.1   83.3   88.4   99.6
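Table C.6 is read the same way: compute chi-squared from observed and expected counts and compare it with the tabulated value for the appropriate degrees of freedom. A sketch for a 2 x 2 contingency table with made-up counts (1 degree of freedom, 5% point 3.84):

```python
# Made-up 2 x 2 contingency table of observed counts.
observed = [[30, 10], [20, 20]]
row = [sum(r) for r in observed]          # row totals
col = [sum(c) for c in zip(*observed)]    # column totals
total = sum(row)

chi2 = 0.0
for i in range(2):
    for j in range(2):
        # Expected count = row total * column total / grand total.
        expected = row[i] * col[j] / total
        chi2 += (observed[i][j] - expected) ** 2 / expected

print(round(chi2, 2))  # 5.33
print(chi2 > 3.84)     # True: evidence of association at the 5% level
```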
Table C.7
Random numbers
[A page of computer-generated random digits, printed in groups of four.]
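A computer can stand in for Table C.7: most languages supply a pseudo-random number generator suitable for the random sampling of Chapter 3. A sketch (the pot numbers are illustrative):

```python
import random

random.seed(1)  # fixed seed so the example is repeatable
# Choose 5 of 80 pots at random, without replacement.
pots = random.sample(range(1, 81), 5)

print(len(pots), len(set(pots)))  # 5 5 - five distinct pots
```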
Critical values of Pearson's correlation coefficient r

Degrees of     Two-sided test       One-sided test
freedom        5%        1%         5%        1%
             (0.05)    (0.01)     (0.05)    (0.01)

 2           0.950     0.990      0.900     0.980
 3           0.878     0.959      0.805     0.934
 4           0.811     0.917      0.729     0.882
 5           0.754     0.875      0.669     0.833

 6           0.707     0.834      0.621     0.789
 7           0.666     0.798      0.582     0.750
 8           0.632     0.765      0.549     0.715
 9           0.602     0.735      0.521     0.685
10           0.576     0.708      0.497     0.658

11           0.553     0.684      0.476     0.634
12           0.532     0.661      0.457     0.612
13           0.514     0.641      0.441     0.592
14           0.497     0.623      0.426     0.574
15           0.482     0.606      0.412     0.558

20           0.423     0.537      0.360     0.492
30           0.349     0.449      0.296     0.409
40           0.304     0.393      0.257     0.358
60           0.250     0.325      0.211     0.295
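To use the table above, compute r, count the degrees of freedom (two fewer than the number of pairs) and compare |r| with the tabulated value. A sketch with five made-up pairs (3 degrees of freedom, two-sided 5% value 0.878):

```python
from math import sqrt

# Made-up data: five (x, y) pairs.
x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]
n = len(x)
mx, my = sum(x) / n, sum(y) / n

# Pearson's r = sum of products / sqrt(product of sums of squares).
sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
sxx = sum((a - mx) ** 2 for a in x)
syy = sum((b - my) ** 2 for b in y)
r = sxy / sqrt(sxx * syy)

print(round(r, 3))     # 0.775
print(abs(r) > 0.878)  # False: not significant at the 5% level
```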
Index
Abscissa 121
Absolute values 8
Abstracts, reports 206
Accidents 110-16, 200
Acknowledgements section, reports
206
Active v. passive voice 209
Adjusted sums-of-squares 115
Aims and experimental design 58-9,
188, 190-1
Alternative hypothesis 28
analysis of variance 85, 86
Analysis of results 79
analysis-of-variance table 84-7
choice of method 213-14
different treatments 89
exercise 94-7
experimental design 195
MINITAB 89-94
questionnaires 198-200
randomized complete block designs
87-9
wholly randomized design 80-3
Analysis of variance (Anova) 78, 213,
214
exercise 94-7
factorial structure of treatments 103,
104, 105-7, 108, 118
linear regression 128
MINITAB 89-94, 105-7, 108, 118
non-parametric equivalents 153
one-way 80-3
independent t-test 31
MINITAB 90-1,112-13
non-parametric equivalent 153,
156
plan 84, 87
table 84-7, 88
Calculators 21-2
block sum-of-squares 88
random numbers 44
random sampling 45
stratified 47
total sum-of-squares 79
treatment sum-of-squares 83
t-test exercise 40
Carbon copies of results 193
Categorical data 153, 161-6,213
Chi-squared 163, 164, 166, 167, 171
Chi-squared contingency test 153,
161-6,213,214
exercise 169-71
questionnaire data 198-200
Chi-squared distribution 225
Chi-squared goodness-of-fit test 166-7
Circular sampling plots 52-3
Clinical trials
ethical issues 200
linear regression 123
Closed questions 196
Clusters, irregular stratified random
sampling 51
Coefficient of determination
correlation coefficients 144
linear regression 128, 150
Computers and computer packages
analysis of variance 87
data entry errors 194
familiarity with 195
filing system 194-5
independent t-test 33
recording data 62, 193-4
reports 202-3, 207
vegetation data analysis 185
see also MINITAB
Confidence intervals
exercises 36-7, 116, 117
graphs 207
importance 18-22
linear regression 142-3
MINITAB 35
treatment means 98-9
difference between two 99-101
t-test 40-1
Constraints, and experimental design
192
Continuous distributions 5, 213
linear regression 131
Control of Substances Hazardous to
Health 201
Controls 58
experimental design 189-90
Correlation 143-7
exercise 148-9
observational studies 174
Correlation coefficients 143-7
observational studies 174
COSHH 201
Covariates 191
Critical values, analysis of variance 86,
88-9
Data-loggers 193
Degrees of freedom 11
analysis of variance 84, 88
factorial structure of treatments
104,105
F-table 86
chi-squared contingency test 163,
164,170
Kruskal-Wallis test 157
stratified random sampling 48-9
Dependent variables 121
Design, experimental 55, 188-95
controls 58
exercise 64-6
laying out the experiment 59-61
MINITAB 62-4
objectives 58-9
randomization 56-8
recording data 61-2
replication 55-6
De-trended correspondence analysis
(DCA) 185
Disasters 110-16
Discontinuous distributions 5-6
Discussion section, reports 204
Disks 193-4
Distribution-free (non-parametric) tests
152-3,213
Dotplots 22, 38
Draft reports 206-7
Dust extractors 201
Ehrenberg, A.S.C. 208
Eigenvalues 176, 181
Errors
experimental design 193, 194-5
linear regression 129, 131, 139
non-response 196
response 196-8
see also Residuals
Ethical issues 200
Expected values 69, 70, 71, 75-6
analysis of variance 93-4
chi-squared contingency test 162,
163,164-5,170
chi-squared goodness-of-fit test 166
exercise 77-8
Experimental units 55, 189
Experiments
design and execution 188-95
linear regression 141
planning 55
controls 58
exercise 64-6
laying out the experiment 59-61
MINITAB 62-4
objectives 58-9
randomization 56-8
recording data 61-2
replication 55-6
Extrapolation 141-2
Factorial analysis 101-5,213
exercises 117-19
experimental design 190
MINITAB 105-9
tables 208
Factors 101
F-distribution 222-4
analysis of variance 86, 87
Fieldwork safety 200-1
Fisher, R.A. 79
Fit see Expected values
Fitted values see Expected values
Friedman test 153, 159-60,213,214
exercise 168, 169
Fume cupboards 201
Galton, Sir Francis 122-3
Generality of experiments 189
General linear model (glm) 114-15,
120
Generic names 209
Goodness of fit, chi-squared test 166-7
Grand mean 69, 70, 74, 75
exercise 77
Graphs 207-8
H 157, 158
Harper, John 188
Hazardous substances 201
Health and safety 200--1
High significance 33
Histograms
exercise 16
MINITAB 23, 27, 38, 39
factorial analysis 109, 111
missing observations 116
Homogeneity of variance 130-1
Hypothesis testing 27-9
analysis of variance 85
see also t-test
Illustrating results 207-9
Independent factors 103, 104
Independent t-test 29-31
calculation 31-4
MINITAB 34-5
Independent variables 121
linear regression 131
Induction 188
Inference 188
Integration 6-7
Interaction diagrams 109, 111, 118,
119
Interaction of factors 103-4
MINITAB 106, 109
Introduction to report 203
Irregular stratified random sampling 51
Kruskal-Wallis test 153, 155-9,213,
214
exercise 168, 169
Laboratory health and safety 201
Language of reports 209
Laying out the experiment 59-61
Least significant difference (LSD) 100,
101
exercises 116, 117
graphs 207
Least-squares fit 125
Leptospirosis 200
Levels 101, 102
Linear regression 122-3, 213
assumptions 129-31
confidence intervals 142-3
correlation 143-7
example 131-4
exercise 147-51
model 123-9
plotting observations, importance
134-42
Literature searches 203
Lower quartile 26
Lyme disease 200
Main effects 102-4, 106-7
Mann-Whitney test 153-5,213
exercise 167, 168-9
Materials and methods, report section
203-4
Maximum values 24
Mean 2-3
block 74,75
confidence intervals 36
exercise 17
grand 69, 70, 74, 75, 77
limitations 24--5
MINITAB 24, 34--5
Normal distribution 6, 7
sampling distribution 13-14
standard error see Standard error
statistical mode, calculators 22
treatment 69
analysis of variance 83, 85
confidence intervals 98-101
trimmed 24
Mean square 84--5
factorial structure of treatments
104
see also Variance
Measurements 2-3
Median 25
Mann-Whitney test 154--5
MINITAB 25-6
Minimum values 24
MINITAB 22-6,37-9
analysis of variance 89-94, 95-7,
118
missing observations 112-16
chi-squared contingency test 163-4,
165-6, 170-1
questionnaire analysis 198-9
chi-squared goodness-of-fit test
166-7
correlation 144, 145-6, 148
data-loggers 193
factorial analysis 105-9, 117-19
Friedman test 159-60,161, 168,
169
Kruskal-Wallis test 156, 157, 158-9,
168, 169
linear regression 126-30, 132-4
confidence intervals 143
exercise 148-51
importance of plotting
observations 135-41
Mann-Whitney test 154--5,167,
168-9
planning an experiment 62-4, 65
principal components analysis
case study 1: 173-4, 175, 176-7,
178,179
case study 2: 180-3, 184
t-test 33, 34--5, 40-1
word processing packages 202
Missing observations 109-16
exercise 120
experimental design 194
Mistakes 110-16
Models 68-71
blocking 71-6
exercise 77-8
Multiple regression 125
Multivariate analysis 180
Negatively skewed distributions 25
Negative relationship 121, 122
Non-parametric tests 152-3,213
Non-response errors 196
Non-significance 33
Normal distribution 3-7, 220
checking for 26
confidence intervals 18
factorial analysis 109
linear regression 129-30
Notebooks 202
Notes 194
Null hypothesis 28,89
analysis of variance 85, 86, 87
t-test 29-30, 33, 35
Objectives, and experimental design
58-9, 188, 190-1
Observational studies
interpretation 188
summarizing data from 172-3
case study 1: 173-9
case study 2: 179-85
specialized programs for analysing
vegetation data 185
Observations 10
number of 45, 60
Observed values 70, 76
One-sample t-tests 35
One-tailed t-tests 34
One-way analysis of variance 80-3
independent t-test 31
MINITAB 90-1
missing observations 112-13
non-parametric equivalent 153, 156
Open questions 196
Ordinate axis 121
Outliers
boxplots 26, 194
correlation 146, 147
linear regression 138
non-parametric tests 152, 153
Paired t-tests 35
Parameters 3
Parametric tests 152
Passive v. active voice 209
Past v. present tense 209
Pearson's r (Product moment
correlation coefficient) 145, 146, 147
critical values 228
exercise 148
Photocopies of results 193
Photographs 194
Pilot studies
experimental design 191
random sampling 45
survey design 198
Pipe symbol 105-6
Plagiarism 205
Planning an experiment 55
controls 58
exercise 64-6
laying out the experiment 59-61
MINITAB 62-4
objectives 58-9
randomization 56-8
recording data 61-2
replication 55-6
Plots 55
Plural v. singular 209
Poisson distribution 5-6
Pooled variance 32, 40
Population
experimental design 188-9
mean 3, 34-5
confidence intervals 36
sampling 2
Positively skewed distributions 5, 25
Positive relationship 121, 122
Precision of estimates
experimental design 188
and replication 56
S 161
Safety 200-1
Sample mean 34-5
confidence intervals 36
Sampling 2-3, 42
bias 195-6
confidence intervals 19-20
distribution 13-14
exercises 53-4
practical problems 52-3
random 42-5
chi-squared contingency test 166
irregular stratified 51
stratified 45-9
systematic 49-51
two-stage 51-2
Scatter plots 63
correlation 144, 145, 146
linear regression 135, 136, 137-8,
139-40, 141
Sequential sums-of-squares 115
Sheep-borne diseases 200
'Shotgun effect' 130
Significance 33, 87
Significant difference 100
Significant figures 208
Singular v. plural 209
Skewed distributions 5, 25, 152, 153
Mann-Whitney test 154
Spearman's r 145-7,148,153
Square sampling plots 52
Standard deviation 11-12
exercise 17
experimental design 192
linear regression 128
MINITAB 24
Normal distribution 6, 7
statistical mode, calculators 21, 22
Standard error 14-15
confidence intervals 18,99
exercise 17
flow chart 20
graphs 207
MINITAB 24
missing observations 113
random sampling 45
stratified 47,49
statistical mode, calculators 22
Standard error of difference (SED)
confidence intervals 100
independent t-test 31, 32, 33
Standardized data 174, 177
Treatments 55
experimental design 189-90
factorial structure 101-5
Treatment sum-of-squares
analysis of variance
one-way 80, 81-2, 83
two-way 88
factorial structure of treatments
103-4
Trimmed mean 24
t-test 29
exercises 36, 39--41
independent 29-34
MINITAB 34-5
stratified random sampling 48
Two-sample (independent) t-test 29-31
calculation 31-4
MINITAB 34-5
Two-stage sampling 51-2
Two-tailed t-tests 33-4
Two-way analysis of variance 87-9
MINITAB 91-2, 93--4
missing observations 113-16
non-parametric equivalent 153
Unbalanced analysis of variance 113,
114,115
Unrepresentative samples 42
random sampling 43, 45
Upper quartile 26
Validation of data 198
Variance 10-11
analysis of see Analysis of variance
homogeneity 130-1
independent t-test 31-2
pooled 32, 40
ratio 85, 86-7, 88-9
factorial structure of treatments
102
MINITAB 90,91
stratified random sampling 47,49
Variates 191
Variation 3
expressing 8-15
Normal distribution 6, 7
sources 67-9
blocking 71-6
exercise 77-8
model 69-71
Very high significance 33
VESPAN III 185
W 155
Wilcoxon rank sum test see Mann-Whitney test
Word 202
WordPerfect 202