You are on page 1of 23

Investigating a new estimator of the

serial correlation coecient


Manfred Mudelsee
Institute of Mathematics and Statistics
University of Kent at Canterbury
Canterbury
Kent CT2 7NF
United Kingdom
8 May 1998
Abstract
A new estimator of the lag-1 serial correlation coecient ,

n1
i=1

3
i

i+1
/

n1
i=1

4
i
, is compared with the old estimator,

n1
i=1

i

i+1
/

n1
i=1

2
i
, for a stationary AR(1) process with known mean. The
mean and the variance of both estimators are calculated using the second or-
der Taylor expansion of a ratio. No further approximation is used. In case of
the mean of the old estimator, we derive Marriott and Popes (1954) formula,
with (n 1)
1
instead of (n)
1
, and an additional term (n1)
2
. In case
of the variance of the old estimator, Bartletts (1946) formula results, with
(n 1)
1
instead of (n)
1
. The theoretical expressions are corroborated with
simulation experiments. The main results are as follows. (1) The new esti-
mator has a larger negative bias and a larger variance than the old estimator.
(2) The theoretical results for the mean and the variance of the old estimator
describe the principal behaviours over the entire range of , in particular, the
decline to zero negative bias as approaches unity.
Mudelsee M (1998) Investigating a new estimator of the serial correlation coefficient.
Institute of Mathematics and Statistics, University of Kent, Canterbury, IMS Technical
Report UKC/IMS/98/15, Canterbury, United Kingdom, 22 pp.
1 Introduction
Consider the following estimators of the serial correlation coecient of
lag 1 from a time series
i
(i = 1, . . . , n) sampled from a process E
i
with
known (zero) mean:
=

n1
i=1

i

i+1

n1
i=1

2
i
, (1)

n1
i=1

3
i

i+1

n1
i=1

4
i
. (2)
Estimators of type (1) are well-known (e. g. Bartlett 1946, Marriott and
Pope 1954), whereas (2) is new to my best knowledge.
Let E
i
be the stationary AR(1) process,
E
1
N(0, 1),
E
i
= E
i1
+ U
i
, i = 2, . . . , n, (3)
with U
i
i. i. d. N(0,
2
) and
2
= 1
2
. The following misjudgement of
mine led to the present study.

minimises
n1

i=1
(
i+1

i
)
2

2
i
,
which is a weighted sum of squares. However, since E
i
has constant variance
unity, no weighting should be necessary.
Nevertheless, we compare estimators (1) and (2) for process (3). We re-
strict ourselves to 0 < < 1. We rst calculate their variances, respectively
means, up to the second order of the Taylor expansion of a ratio, similarly
to Bartlett (1946), respectively Marriott and Pope (1954). Since we concen-
trate on the lag-1 estimators and process (3), no further approximation is
necessary. These theoretical expressions and also those of White (1961) for
estimator (1) are then examined with simulation experiments.
2 Series expansions
2.1 Old estimator
2.1.1 Variance
We write (1) as
=
1
n

n1
i=1

i

i+1
1
n

n1
i=1

2
i
(4)
=
c
v
, say,
1
and have, up to the second order of deviations from the means of c and v,
the variance
var( ) = var(c/v)
var(c)
E
2
(v)
2 E(c)
cov(v, c)
E
3
(v)
+ E
2
(c)
var(v)
E
4
(v)
. (5)
We now apply a standard result for quadravariate standard Gaussian distri-
butions with serial correlations
j
(e. g. Priestley 1981:325),
cov(
a

a+s
,
b

b+s+t
) =
ba

ba+t
+
ba+s+t

bas
,
from which we derive, for
i
following process (3):
cov
_
1
n
n1

a=1

a+s
,
1
n
n1

b=1

b+s+t
_
=
=
1
n
2
n1

a,b=1
(
|ba|

|ba+t|
+
|ba+s+t|

|bas|
). (6)
Without further approximation we can directly derive the various constituents
of the right-hand side of (5) by the means of (6).
var(c) = var
_
1
n
n1

i=1

i+1
_
= (put s = 1 and t = 0 in (6))
=
1
n
2
_
_
n1

a,b=1

2|ba|
+
n1

a,b=1

|ba+1|

|ba1|
_
_
.
We nd, by summing over the points of the a-b lattice in a diagonal manner,
n1

a,b=1

2|ba|
= (n 1) + 2
n2

i=1
i
2(ni1)
= (n 1)
_
2
1
2
1
_
+ 2
(
2n4

2
)
(1
2
)
2
(7)
(at the last step we have used the arithmetic-geometric progression), and
n1

a,b=1

|ba+1|

|ba1|
= (n 1)
2
+ 2
n2

i=1
i
2(ni1)
= (n 1)
_
2
1
2
+
2
2
_
+ 2
(
2n4

2
)
(1
2
)
2
. (8)
(7) and (8) give var(c). Further,
cov(v, c) = cov
_
1
n
n1

i=1

2
i
,
1
n
n1

i=1

i+1
_
2
= (put s = 0 and t = 1 in (6))
=
2
n
2
n1

a,b=1

|ba|

|ba+1|
.
We nd
n1

a,b=1

|ba|

|ba+1|
=
2n3
+
n2

i=1
(2 i + 1)
2(ni)3
= (n 1)
2

_
1
1
2
1
_
+2
(
2n5

3
)
(1
2
)
2
+
(
2n3

1
)
(1
2
)
. (9)
(At the last step we have used the arithmetic-geometric and also the geomet-
ric progression.) (9) gives cov(v, c). Further,
var(v) = var
_
1
n
n1

i=1

2
i
_
= (put s = t = 0 in (6))
=
2
n
2
n1

a,b=1

2|ba|
,
which is given from (7). We further need
E(v) =
n 1
n
(10)
and
E(c) =
(n 1)
n
. (11)
All three terms contributing to var( ) have a part (n1)
2
. However,
these parts cancel out:
var( )
(1
2
)
(n 1)
. (12)
Bartlett (1946) already gave, up to the order (n)
1
,
var( )
(1
2
)
n
, (13)
which cannot be distinguished from our result.
3
2.1.2 Mean
We have, up to the second order of deviations from the means of c and
v, the mean
E( ) = E(c/v)
E(c)
E(v)

cov(v, c)
E
2
(v)
+ E(c)
var(v)
E
3
(v)
. (14)
As above, we derive cov(v, c) and var(v) from (9) and (7), respectively, and
have E(v) and E(c) given by (10) and (11), respectively, yielding
E( )
2
(n 1)
+
2
(n 1)
2
(
2n1
)
(1
2
)
. (15)
Marriott and Pope (1954) investigated the estimator

=
1
n1

n1
i=1

i

i+1
1
n

n1
i=1

2
i
and gave, up to the order (n)
1
,
E(

)
2
n
,
which cannot be distinguished from the rst two terms of our result.
2.1.3 Unknown mean of the process
I have also tried to calculate up to the same order of approximation the
mean and the variance of the estimator

n1
i=1
_

1
n1

n1
j=1

j
_ _

i+1

1
n1

n1
j=1

j+1
_

n1
i=1
_

1
n1

n1
j=1

j
_
2
,
for the case when the mean of the process is unknown. That would require
to calculate the following sums over the points of a cubic lattice:
n1

a,b,c=1

|ba|

|cas|
for s = 0 and 1. However, I was unable to obtain exact formulas.
2.2 New estimator
2.2.1 Variance
We write (2) as

=
1
n

n1
i=1

3
i

i+1
1
n

n1
i=1

4
i
=
d
w
, say.
4
var(

), up to the second order of the Taylor expansion, is given by (5),


substituting d for c and w for v, respectively. We need the following result
for octavariate standard Gaussian distributions with serial correlations
j
(Appendix),
cov(
3
a

a+s
,
3
b

b+s+t
) = 9
ba

ba+t
+ 9
ba+s+t

bas
+ 18
s

ba

ba+s+t
+ 18
ba

bas

s+t
+ 18
s

2
ba

s+t
+ 18
2
ba

ba+s+t

bas
+ 6
3
ba

ba+t
, (16)
from which we derive, for
i
following process (3):
cov
_
1
n
n1

a=1

3
a

a+s
,
1
n
n1

b=1

3
b

b+s+t
_
=
=
1
n
2
n1

a,b=1
( 9
|ba|

|ba+t|
+ 9
|ba+s+t|

|bas|
+ 18
|s|

|ba|

|ba+s+t|
+ 18
|ba|

|bas|

|s+t|
+ 18
|s|

2|ba|

|s+t|
+ 18
2|ba|

|ba+s+t|

|bas|
+ 6
3|ba|

|ba+t|
) . (17)
Without further simplication we can directly obtain the various constituents
of var(

) by the means of (17).


var(d) = var
_
1
n
n1

i=1

3
i

i+1
_
= (put s = 1 and t = 0 in (17))
=
1
n
2
_
_
9
n1

a,b=1

2|ba|
+ 9
n1

a,b=1

|ba+1|

|ba1|
+ 18
n1

a,b=1

|ba|

|ba+1|
+ 18
n1

a,b=1

|ba|

|ba1|
+ 18
2
n1

a,b=1

2|ba|
+ 18
n1

a,b=1

2|ba|

|ba+1|

|ba1|
+ 6
n1

a,b=1

4|ba|
_
_
.
The rst and the fth sum is given by (7), the second sum by (8) and the
third by (9). We nd,
n1

a,b=1

|ba|

|ba1|
=
n1

a,b=1

|ba|

|ba+1|
,
5
which is given by (9),
n1

a,b=1

2|ba|

|ba+1|

|ba1|
= (n 1)
2
+ 2
n2

i=1
i
4(ni1)
= (n 1)
_
2
1
4
+
2
2
_
+ 2
(
4n8

4
)
(1
4
)
2
(18)
and
n1

a,b=1

4|ba|
= (n 1) + 2
n2

i=1
i
4(ni1)
= (n 1)
_
2
1
4
1
_
+ 2
(
4n8

4
)
(1
4
)
2
. (19)
(7), (8), (9), (18) and (19) give var(d). Further,
cov(w, d) = cov
_
1
n
n1

i=1

4
i
,
1
n
n1

i=1

3
i

i+1
_
= (put s = 0 and t = 1 in (17))
=
1
n
2
_
_
36
n1

a,b=1

|ba|

|ba+1|
+ 36
n1

a,b=1

2|ba|
+ 24
n1

a,b=1

3|ba|

|ba+1|
_
_
.
The last sum of this expression,
n1

a,b=1

3|ba|

|ba+1|
= (n 1) +
n2

i=1
i
4(ni)3
+
n2

i=1
i
4(ni)5
= (n 1)
_
1

3
+
1

5

1

_
+
(
4n7
+
4n9
) (1
4n+4
)
(1
4
)
2
. (20)
(7), (9) and (20) give cov(w, d). Further,
var(w) = var
_
1
n
n1

i=1

4
i
_
= (put s = t = 0 in (17))
=
1
n
2
_
_
72
n1

a,b=1

2|ba|
+ 24
n1

a,b=1

4|ba|
_
_
,
6
which is given from (7) and (19), respectively. We further need
E(w) =
3 (n 1)
n
(21)
and
E(d) =
3 (n 1)
n
. (22)
As for the old estimator, the terms (n1)
2
which contribute to var(

)
cancel out:
var(

)
5
3
(1
2
)
(n 1)
. (23)
2.2.2 Mean
We use (14) with d instead of c and w instead of v, respectively. We
derive cov(w, d) from (7), (9) and (20), further, var(w) from (7) and (19),
and we have E(w) and E(d) given by (21) and (22), respectively. This yields,
up to the second order of deviations from the means of d and w,
E(

)
1
(n 1)
4
3

_
5
2
1 +
2
_
+
1
(n 1)
2
_
4
(
2n1
)
(1
2
)
+
8
3
(
3

4n1
)
(1
2
) (1 +
2
)
2
_
. (24)
2.3 Remark

is found to have a larger negative bias and also a larger variance than .
In the simulation experiment, we intend not only to prove that. We are also
interested to examine the signicance of the term (n1)
2
in (15). In the
comparison we include the theoretical results of White (1961) who studied
the old estimator (1) for process (3). He calculated the kth moment of (c/v)
via the joint moment generating function m(c, v), with c and v dened as in
(4) without the factors 1/n,
E(
k
) =
_
0

_
v
k

. . .
_
v
2

k
m(c, v)
c
k

c =0
dv
1
dv
2
. . . dv
k
.
He expanded the integrand to terms of order n
3
and
4
and gave the fol-
lowing results (the index W refers to his study):
var( )
W

_
1
n

1
n
2
+
5
n
3
_

_
1
n

9
n
2
+
53
n
3
_

12
n
3

4
, (25)
E( )
W

_
1
2
n
+
4
n
2

2
n
3
_
+
2
n
2

3
+
2
n
2

5
. (26)
7
3 Simulation experiments
In the rst experiment, for each combination of n and listed in Table
1, we generated a set of 250 000 time series from process (3). For every time
series,

and have been calculated. The sample means,


sim
and
sim
,
and the sample standard deviations,

sim
and
sim
, over the simulations
are listed in Table 1. Therein, our theoretical valuesfrom (12), (15), (23)
and (24)are included. In case of the means, these are additionally splitted
into terms (n 1)
2
and terms of lower order. Also the theoretical values
from Whites (1961) formula, herein (25) and (26), are listed.
We further investigated whether the distribution of

approaches Gaus-
sianity as fast as that of . Fig. 1 shows histograms of

, respectively ,
from the rst simulation experiment in comparison with Gaussian distribu-
tions N(

sim
,
2

sim
), respectively N(
sim
,
2
sim
).
In the second simulation experiment (Fig. 2), formulas (12) versus (25)
and (15) versus (26) are examined over the entire range of , with particular
interest on large. We plot the negative bias, E( ), and

var( ). For
each combination of and n, the number of simulated time series after (3)
is 250 000.
The error due to the limited number of simulations is small enough for
not inuencing any intended comparison.
4 Results and discussion
The rst simulation experiment (Table 1) conrms, over a broad range of
and n, that

has a larger bias and a larger variance, respectively, than .


In general, the deviations between the theoretical results and the simula-
tion results decrease with n increased, as is to be expected.
For any combination of and n listed in Table 1, our terms (n1)
2
in
(15), respectively (24), bring these theoretical results closer to the simulation
results, with the exception of E( ) for = 0.9 and n = 800 where the
simulation noise prevents that comparison. For large, respectively n small,
these second order terms contribute heavier.
The frequency distribution of

(Fig. 1) has a similar shape as that of


. It is shifted to smaller values against that and also broader, reecting the
larger negative bias, respectively the larger variance. The functional form
of the distribution of is approximately the Leipnik distribution, which is
heavier skewed for large and tends to Gaussianity with n increased. Both
estimators seem to approach Gaussianity equally fast (Fig. 1).
The second simulation experiment (Fig. 2) compares Whites (1961) the-
oretical results for , (25) and (26), with ours, (12) and (15). It shows that
his are better performing up to a certain value of .
Above, our formulas describe better the simulation, particularly, the de-
8
cline to zero negative bias, respectively zero variance, as approaches unity.
The decline to zero negative bias is caused by the term (n 1)
2
in (15).
Those declines are reasonable, since for = 1 all time series points have
equal value and c = v in (4).
For small , his formulas perform better since they are more accurate
with respect to powers of (1/n) than ours. For larger his approximation
becomes less accurate. In particular, for n 3 and > 0, E( )
W
cannot
become zero. For n 10,
d
d
( E( )
W
) cannot become negative. For n 8,
var( )
W
cannot become zero. That means, for those cases Whites (1961)
formulas cannot produce the decline to zero negative bias and zero variance,
respectively.
The fact that Bartletts (1946) formula for the variance, (13), describes
the simulation better than (12) for 0, is regarded as spurious.
These results mean that the expansion formulas for var( ) and E( ), (5)
and (14), respectively, are sucient to derive the principal behaviours for
1. In case of the mean, however, no further approximation is allowed
which would lead to the term (n 1)
2
in (15) be neglected.
5 Conclusions
1. In case of the stationary AR(1) process (3) with known mean, the new
estimator,

, has a larger negative bias and a larger variance than the


old estimator, . Its distribution tends to Gaussianity about equally
fast.
2. Our formula (15) for E( ) describes the expected decline to zero nega-
tive bias for 1.
3. The formula (12) for var( ) describes the expected decline to zero vari-
ance for 1.
4. The second order Taylor expansion of a ratio is sucient to derive
these two principal behaviours for 1, if no further approximations
are made. This condition could be fullled since we had concentrated
ourselves on the lag-1 estimator and process (3).
5. For unknown mean of the process, it is more complicated to derive the
equations with the same accuracy.
Acknowledgements
Q. Yao is appreciated for comments on the manuscript. The present study
was carried out while the author was Marie Curie Research Fellow (EU-TMR
grant ERBFMBICT971919).
9
References
Bartlett, M. S., 1946, On the theoretical specication and sampling prop-
erties of autocorrelated time-series: Journal of the Royal Statistical Society
Supplement, v. 8, p. 27-41 (Corrigenda: 1948, Journal of the Royal Statis-
tical Society Series B, v. 10, no. 1).
Marriott, F. H. C., and Pope, J. A., 1954, Bias in the estimation of au-
tocorrelations: Biometrika, v. 41, p. 390-402.
Priestley, M. B., 1981, Spectral Analysis and Time Series: London, Aca-
demic Press, 890 p.
Scott, D. W., 1979, On optimal and data-based histograms: Biometrika,
v. 66, p. 605-610.
White, J. S., 1961, Asymptotic expansions for the mean and variance of
the serial correlation coecient: Biometrika, v. 48, p. 85-94.
Appendix
We assume that
i
is drawn from a standard Gaussian distribution with
serial correlations
j
. We derive (16) by means of the moment generating
function. Write
cov(
3
a

a+s
,
3
b

b+s+t
) = E(
3
a

a+s

3
b

b+s+t
) E(
3
a

a+s
) E(
3
b

b+s+t
)
= E(
3
a

a+s

3
b

b+s+t
) 9
s

s+t
.
Now, E(
3
a

a+s

3
b

b+s+t
) is the coecient of (t
3
1
t
2
t
3
3
t
4
)/(3! 1! 3! 1!) in the mo-
ment generating function
m(t
1
, t
2
, t
3
, t
4
) = exp
_
_
1
2
4

i=1
4

j=1

ij
t
i
t
j
_
_
,
with

11
=
22
=
33
=
44
= 1,

12
=
21
=
s
,

13
=
31
=
ba
,

14
=
41
=
ba+s+t
,

23
=
32
=
bas
,
10

24
=
42
=
ba+t
,

34
=
43
=
s+t
.
Terms (t
3
1
t
2
t
3
3
t
4
) in m(t
1
, t
2
, t
3
, t
4
) can only be within the fourth order of
the series expansion of the exponential function, i. e., within
1
4!
_
_
1
2
4

i=1
4

j=1

ij
t
i
t
j
_
_
4
,
further restricting, within
1
384
(t
2
1
+ t
2
3
+ 2
12
t
1
t
2
+ 2
13
t
1
t
3
+ 2
14
t
1
t
4
+ 2
23
t
2
t
3
+ 2
24
t
2
t
4
+ 2
34
t
3
t
4
)
4
,
further restricting, within
1
384
(4
2
13
t
2
1
t
2
3
+ 2 t
2
1
t
2
3
+ 4
12
t
3
1
t
2
+ 4
13
t
3
1
t
3
+ 4
14
t
3
1
t
4
+ 4
23
t
2
1
t
2
t
3
+ 4
24
t
2
1
t
2
t
4
+ 4
34
t
2
1
t
3
t
4
+ 4
12
t
1
t
2
t
2
3
+ 4
13
t
1
t
3
3
+ 4
14
t
1
t
2
3
t
4
+ 4
23
t
2
t
3
3
+ 4
24
t
2
t
2
3
t
4
+ 4
34
t
3
3
t
4
+ 8
12

13
t
2
1
t
2
t
3
+ 8
12

14
t
2
1
t
2
t
4
+ 8
12

34
t
1
t
2
t
3
t
4
+ 8
13

14
t
2
1
t
3
t
4
+ 8
13

23
t
1
t
2
t
2
3
+ 8
13

24
t
1
t
2
t
3
t
4
+ 8
13

34
t
1
t
2
3
t
4
+ 8
14

23
t
1
t
2
t
3
t
4
+ 8
23

34
t
2
t
2
3
t
4
)
2
.
Finally, these terms are
1
384
(2 4
2
13
t
2
1
t
2
3
8
12

34
t
1
t
2
t
3
t
4
+ 2 4
2
13
t
2
1
t
2
3
8
13

24
t
1
t
2
t
3
t
4
+ 2 4
2
13
t
2
1
t
2
3
8
14

23
t
1
t
2
t
3
t
4
+ 2 2 t
2
1
t
2
3
8
12

34
t
1
t
2
t
3
t
4
+ 2 2 t
2
1
t
2
3
8
13

24
t
1
t
2
t
3
t
4
+ 2 2 t
2
1
t
2
3
8
14

23
t
1
t
2
t
3
t
4
+ 2 4
12
t
3
1
t
2
4
34
t
3
3
t
4
+ 2 4
13
t
3
1
t
3
4
24
t
2
t
2
3
t
4
+ 2 4
13
t
3
1
t
3
8
23

34
t
2
t
2
3
t
4
+ 2 4
14
t
3
1
t
4
4
23
t
2
t
3
3
+ 2 4
23
t
2
1
t
2
t
3
4
14
t
1
t
2
3
t
4
+ 2 4
23
t
2
1
t
2
t
3
8
13

34
t
1
t
2
3
t
4
+ 2 4
24
t
2
1
t
2
t
4
4
13
t
1
t
3
3
+ 2 4
34
t
2
1
t
3
t
4
4
12
t
1
t
2
t
2
3
+ 2 4
34
t
2
1
t
3
t
4
8
13

23
t
1
t
2
t
2
3
+ 2 4
12
t
1
t
2
t
2
3
8
13

14
t
2
1
t
3
t
4
+ 2 4
13
t
1
t
3
3
8
12

14
t
2
1
t
2
t
4
+ 2 4
14
t
1
t
2
3
t
4
8
12

13
t
2
1
t
2
t
3
+ 2 8
12

13
t
2
1
t
2
t
3
8
13

34
t
1
t
2
3
t
4
+ 2 8
13

14
t
2
1
t
3
t
4
8
13

23
t
1
t
2
t
2
3
).
11
Thus, we nd
E(
3
a

a+s

3
b

b+s+t
) =
3! 1! 3! 1!
384
(96
12

34
+ 96
13

24
+ 96
14

23
+ 192
12

13

14
+ 192
13

23

34
+ 192
12

2
13

34
+ 192
2
13

2
14

23
+ 64
3
13

24
).
This gives the nal result
cov(
3
a

a+s
,
3
b

b+s+t
) = 9
ba

ba+t
+ 9
ba+s+t

bas
+ 18
s

ba

ba+s+t
+ 18
ba

bas

s+t
+ 18
s

2
ba

s+t
+ 18
2
ba

ba+s+t

bas
+ 6
3
ba

ba+t
.
It should be noted that this result has been checked using the joint cumulant
of order eight, in my assessment without less eort.
12
T
a
b
l
e
1
:
F
i
r
s
t
s
i
m
u
l
a
t
i
o
n
e
x
p
e
r
i
m
e
n
t
.
T
h
e
n
u
m
b
e
r
o
f
s
i
m
u
l
a
t
e
d
t
i
m
e
s
e
r
i
e
s
a
f
t
e
r
(
3
)
i
s
i
n
e
a
c
h
c
a
s
e
2
5
0
0
0
0
.
S
.
d
.
:
s
t
a
n
d
a
r
d
d
e
v
i
a
t
i
o
n
.
T
h
e
o
r
d
e
r
o
f
t
h
e
u
n
c
e
r
t
a
i
n
t
y
o
f
t
h
e
m
e
a
n
,
r
e
s
p
e
c
t
i
v
e
l
y
t
h
e
s
t
a
n
d
a
r
d
d
e
v
i
a
t
i
o
n
,
d
u
e
t
o
t
h
e
l
i
m
i
t
e
d
n
u
m
b
e
r
o
f
s
i
m
u
l
a
t
i
o
n
s
,
i
s
S
.
d
.
/

2
5
0
0
0
0
.
S
i
m
:
s
i
m
u
l
a
t
i
o
n
(
s
a
m
p
l
e
m
e
a
n
a
n
d
s
a
m
p
l
e
s
t
a
n
d
a
r
d
d
e
v
i
a
t
i
o
n
,

s
i
m
a
n
d

s
i
m
,
r
e
s
p
e
c
t
i
v
e
l
y

s
i
m
a
n
d

s
i
m
)
.
T
(
2
4
,
2
3
)
:
t
h
e
o
r
e
t
i
c
a
l
v
a
l
u
e
,
t
h
i
s
s
t
u
d
y
,
(
2
4
)
,
r
e
s
p
e
c
t
i
v
e
l
y
(
2
3
)
.
2
n
d
:
t
e
r
m

(
n

1
)

2
i
n
(
1
5
)
,
r
e
s
p
e
c
t
i
v
e
l
y
(
2
4
)
.
1
s
t
:
t
e
r
m
s
n
o
t

(
n

1
)

2
i
n
(
1
5
)
,
r
e
s
p
e
c
t
i
v
e
l
y
(
2
4
)
.

S
i
m

S
i
m

T
(
2
4
)
1
s
t

T
(
2
4
)
2
n
d

T
(
2
4
,
2
3
)

T
(
1
5
)
1
s
t

T
(
1
5
)
2
n
d

T
(
1
5
,
1
2
)

T
(
2
6
,
2
5
)

=
.
2
0
0
M
e
a
n
.
1
5
4
3
7
4
3
3
0
.
1
6
8
6
7
6
9
7
4
.
1
0
8
8
3
1
9
0
8
.
0
1
0
5
4
1
7
1
6
.
1
1
9
3
7
3
6
2
5
.
1
5
5
5
5
5
5
5
5
.
0
0
5
1
4
4
0
3
2
.
1
6
0
6
9
9
5
8
8
.
1
6
7
7
6
6
4
0
0
n
=
1
0
S
.
d
.
.
3
2
2
5
0
0
7
0
3
.
3
0
7
5
2
4
1
2
9
.
4
2
1
6
3
7
0
2
1
.
3
2
6
5
9
8
6
3
2
.
3
0
4
0
7
3
6
7
5
S
.
d
.
/
5
0
0
.
0
0
0
6
4
5
0
0
1
.
0
0
0
6
1
5
0
4
8

=
.
2
0
0
M
e
a
n
.
1
7
1
4
6
4
2
4
4
.
1
8
2
3
8
7
1
9
7
.
1
5
6
8
1
5
1
1
4
.
0
0
2
3
6
5
3
1
5
.
1
5
9
1
8
0
4
3
0
.
1
7
8
9
4
7
3
6
8
.
0
0
1
1
5
4
2
0
1
.
1
8
0
1
0
1
5
6
9
.
1
8
1
9
9
1
6
0
0
n
=
2
0
S
.
d
.
.
2
3
9
3
5
3
3
7
6
.
2
1
6
9
5
5
3
6
0
.
2
9
0
1
9
0
5
0
0
.
2
2
4
7
8
0
5
9
4
.
2
1
6
2
3
5
0
5
7
S
.
d
.
/
5
0
0
.
0
0
0
4
7
8
7
0
6
.
0
0
0
4
3
3
9
1
0

=
.
2
0
0
M
e
a
n
.
1
8
6
2
0
8
4
4
8
.
1
9
2
2
4
1
0
8
8
.
1
8
3
2
5
4
8
4
0
.
0
0
0
3
5
5
6
3
4
.
1
8
3
6
1
0
4
7
5
.
1
9
1
8
3
6
7
3
4
.
0
0
0
1
7
3
5
3
8
.
1
9
2
0
1
0
2
7
3
.
1
9
2
3
2
3
4
5
6
n
=
5
0
S
.
d
.
.
1
6
2
2
7
3
4
1
1
.
1
3
7
7
3
8
7
1
5
.
1
8
0
7
0
1
5
8
0
.
1
3
9
9
7
0
8
4
2
.
1
3
7
7
2
0
3
1
9
S
.
d
.
/
5
0
0
.
0
0
0
3
2
4
5
4
6
.
0
0
0
2
7
5
4
7
7

=
.
5
0
0
M
e
a
n
.
3
8
8
2
9
4
7
7
0
.
4
2
2
6
8
8
0
3
4
.
2
4
8
1
4
8
1
4
8
.
0
3
6
4
3
3
3
4
4
.
2
8
4
5
8
1
4
9
3
.
3
8
8
8
8
8
8
8
8
.
0
1
6
4
6
0
8
4
2
.
4
0
5
3
4
9
7
3
1
.
4
2
2
1
2
5
0
0
0
n
=
1
0
S
.
d
.
.
3
0
3
6
3
5
6
9
6
.
2
9
1
7
9
9
3
5
2
.
3
7
2
6
7
7
9
9
6
.
2
8
8
6
7
5
1
3
4
.
2
8
0
1
7
8
5
1
4
S
.
d
.
/
5
0
0
.
0
0
0
6
0
7
2
7
1
.
0
0
0
5
8
3
5
9
8

=
.
5
0
0
M
e
a
n
.
4
6
4
7
4
0
5
2
0
.
4
8
0
8
7
1
0
3
2
.
4
5
3
7
4
1
4
9
6
.
0
0
1
2
2
9
1
1
7
.
4
5
4
9
7
0
6
1
4
.
4
7
9
5
9
1
8
3
6
.
0
0
0
5
5
5
3
2
4
.
4
8
0
1
4
7
1
6
0
.
4
8
0
9
1
7
0
0
0
n
=
5
0
S
.
d
.
.
1
4
3
1
0
0
8
7
2
.
1
2
4
2
3
1
5
6
1
.
1
5
9
7
1
9
1
4
1
.
1
2
3
7
1
7
9
1
4
.
1
2
4
2
0
9
5
0
0
S
.
d
.
/
5
0
0
.
0
0
0
2
8
6
2
0
1
.
0
0
0
2
4
8
4
6
3

=
.
5
0
0
M
e
a
n
.
4
8
6
4
2
8
0
4
2
.
4
9
3
4
9
0
9
1
5
.
4
8
4
7
8
7
4
7
2
.
0
0
0
1
3
2
9
2
6
.
4
8
4
9
2
0
3
9
8
.
4
9
3
2
8
8
5
9
0
.
0
0
0
0
6
0
0
5
7
.
4
9
3
3
4
8
6
4
7
.
4
9
3
4
3
5
8
1
4
n
=
1
5
0
S
.
d
.
.
0
8
6
5
8
9
7
1
1
.
0
7
1
0
3
9
0
8
3
.
0
9
1
5
9
2
9
1
3
.
0
7
0
9
4
7
5
6
5
.
0
7
1
0
8
3
6
7
5
S
.
d
.
/
5
0
0
.
0
0
0
1
7
3
1
7
9
.
0
0
0
1
4
2
0
7
8
13
T
a
b
l
e
1
:
(
C
o
n
t
i
n
u
e
d
)

S
i
m

S
i
m

T
(
2
4
)
1
s
t

T
(
2
4
)
2
n
d

T
(
2
4
,
2
3
)

T
(
1
5
)
1
s
t

T
(
1
5
)
2
n
d

T
(
1
5
,
1
2
)

T
(
2
6
,
2
5
)

=
.
9
0
0
M
e
a
n
.
7
9
4
8
2
4
7
6
5
.
8
3
1
9
8
1
0
3
0
.
6
5
3
9
9
8
2
5
5
.
0
6
0
1
7
6
3
8
2
.
7
1
4
1
7
4
6
3
7
.
8
0
5
2
6
3
1
5
7
.
0
2
5
7
6
4
0
1
2
.
8
3
1
0
2
7
1
7
0
.
8
2
5
3
7
2
4
5
0
n
=
2
0
S
.
d
.
.
1
5
2
8
9
8
6
1
7
.
1
4
1
6
4
9
9
9
8
.
1
2
9
0
9
9
4
4
4
.
1
0
0
0
0
0
0
0
0
.
1
3
9
6
4
0
9
6
8
S
.
d
.
/
5
0
0
.
0
0
0
3
0
5
7
9
7
.
0
0
0
2
8
3
2
9
9

=
.
9
0
0
M
e
a
n
.
8
6
7
5
9
8
4
6
6
.
8
8
3
2
3
9
3
2
7
.
8
5
2
7
8
7
5
4
3
.
0
0
2
2
5
1
8
5
8
.
8
5
5
0
3
9
4
0
2
.
8
8
1
8
1
8
1
8
1
.
0
0
0
9
6
6
6
0
3
.
8
8
2
7
8
4
7
8
5
.
8
8
2
6
2
2
0
9
8
n
=
1
0
0
S
.
d
.
.
0
5
6
1
4
4
7
6
9
.
0
4
9
5
5
2
1
8
9
.
0
5
6
5
5
6
6
3
7
.
0
4
3
8
0
8
5
8
2
.
0
4
9
8
3
1
6
8
4
S
.
d
.
/
5
0
0
.
0
0
0
1
1
2
2
8
9
.
0
0
0
0
9
9
1
0
4

=
.
9
0
0
M
e
a
n
.
8
9
4
6
4
6
0
5
5
.
8
9
7
7
5
1
6
9
9
.
8
9
4
1
5
0
1
4
6
.
0
0
0
0
3
4
5
7
1
.
8
9
4
1
8
4
7
1
7
.
8
9
7
7
4
7
1
8
3
.
0
0
0
0
1
4
8
3
9
.
8
9
7
7
6
2
0
2
3
.
8
9
7
7
5
9
7
4
4
n
=
8
0
0
S
.
d
.
.
0
1
9
1
7
6
1
5
3
.
0
1
5
7
3
2
5
3
5
.
0
1
9
9
0
8
0
0
7
.
0
1
5
4
2
0
6
7
5
.
0
1
5
7
2
3
8
2
4
S
.
d
.
/
5
0
0
.
0
0
0
0
3
8
3
5
2
.
0
0
0
0
3
1
4
6
5

=
.
9
8
0
M
e
a
n
.
9
0
3
6
7
0
9
6
8
.
9
2
9
3
6
7
3
5
1
.
7
0
6
3
0
1
4
7
0
.
1
8
2
7
9
9
7
1
1
.
8
8
9
1
0
1
1
8
2
.
8
7
6
8
4
2
1
0
5
.
0
7
3
4
7
7
6
6
7
.
9
5
0
3
1
9
7
7
2
.
9
0
0
7
8
0
5
6
3
n
=
2
0
S
.
d
.
.
1
1
7
5
6
5
8
0
5
.
1
0
6
5
1
5
8
7
3
.
0
5
8
9
3
7
9
6
9
.
0
4
5
6
5
3
1
5
4
.
1
1
8
1
8
5
4
3
8
S
.
d
.
/
5
0
0
.
0
0
0
2
3
5
1
3
1
.
0
0
0
2
1
3
0
3
1

=
.
9
8
0
M
e
a
n
.
9
6
4
3
4
1
7
7
3
.
9
7
1
3
3
5
7
1
9
.
9
5
3
8
6
7
9
7
9
.
0
0
2
9
1
5
3
2
0
.
9
5
6
7
8
3
2
9
9
.
9
7
0
1
5
0
7
5
3
.
0
0
1
2
4
9
4
3
8
.
9
7
1
4
0
0
1
9
2
.
9
7
0
3
9
0
0
1
0
n
=
2
0
0
S
.
d
.
.
0
2
1
5
9
1
0
5
1
.
0
1
9
0
2
2
4
3
0
.
0
1
8
2
1
1
4
8
7
.
0
1
4
1
0
6
5
5
7
.
0
1
9
5
4
4
0
2
2
S
.
d
.
/
5
0
0
.
0
0
0
0
4
3
1
8
2
.
0
0
0
0
3
8
0
4
4

=
.
9
8
0
M
e
a
n
.
9
7
7
7
5
9
6
0
2
.
9
7
9
0
3
3
8
4
1
.
9
7
7
3
9
8
5
6
3
.
0
0
0
0
2
8
8
9
9
.
9
7
7
4
2
7
4
6
2
.
9
7
9
0
1
9
5
0
9
.
0
0
0
0
1
2
3
8
6
.
9
7
9
0
3
1
8
9
5
.
9
7
9
0
2
1
9
0
2
n
=
2
0
0
0
S
.
d
.
.
0
0
5
5
2
6
1
9
3
.
0
0
4
6
5
2
7
3
5
.
0
0
5
7
4
5
9
9
9
.
0
0
4
4
5
0
8
3
1
.
0
0
4
6
5
8
7
3
1
S
.
d
.
/
5
0
0
.
0
0
0
0
1
1
0
5
2
.
0
0
0
0
0
9
3
0
5

=
.
9
9
9
M
e
a
n
.
9
9
5
1
0
5
9
9
1
.
9
9
6
6
0
4
5
6
7
.
9
8
8
3
2
5
3
1
5
.
0
0
6
2
2
6
6
6
5
.
9
9
4
5
5
1
9
8
1
.
9
9
4
9
9
5
9
9
1
.
0
0
2
5
3
5
1
3
8
.
9
9
7
5
3
1
1
3
0
.
9
9
5
0
3
5
9
0
4
n
=
5
0
0
S
.
d
.
.
0
0
5
6
9
9
7
6
4
.
0
0
4
8
4
5
8
7
7
.
0
0
2
5
8
3
9
2
8
.
0
0
2
0
0
1
5
0
2
.
0
0
5
9
5
3
7
6
0
S
.
d
.
/
5
0
0
.
0
0
0
0
1
1
3
9
9
.
0
0
0
0
0
9
6
9
1

=
.
9
9
9
M
e
a
n
.
9
9
7
5
8
7
2
4
0
.
9
9
8
1
8
0
1
5
5
.
9
9
6
3
3
5
3
3
4
.
0
0
0
5
7
4
4
3
4
.
9
9
6
9
0
9
7
6
8
.
9
9
8
0
0
0
5
0
0
.
0
0
0
2
4
5
5
4
3
.
9
9
8
2
4
6
0
4
4
.
9
9
8
0
0
2
9
9
4
n
=
2
0
0
0
S
.
d
.
.
0
0
1
8
7
0
8
1
6
.
0
0
1
6
2
2
3
5
4
.
0
0
1
2
9
0
9
9
4
.
0
0
1
0
0
0
0
0
0
.
0
0
1
7
2
8
4
4
4
S
.
d
.
/
5
0
0
.
0
0
0
0
0
3
7
4
1
.
0
0
0
0
0
3
2
4
4

=
.
9
9
9
M
e
a
n
.
9
9
8
9
0
7
9
2
5
.
9
9
8
9
6
0
6
3
0
.
9
9
8
8
9
3
4
6
4
.
0
0
0
0
0
0
9
3
2
.
9
9
8
8
9
4
3
9
7
.
9
9
8
9
6
0
0
3
9
.
0
0
0
0
0
0
3
9
9
.
9
9
8
9
6
0
4
3
9
.
9
9
8
9
6
0
0
4
3
n
=
5
0
0
0
0
S
.
d
.
.
0
0
0
2
4
7
5
7
7
.
0
0
0
2
0
7
9
4
8
.
0
0
0
2
5
8
1
3
6
.
0
0
0
1
9
9
9
5
1
.
0
0
0
2
0
7
7
7
9
S
.
d
.
/
5
0
0
.
0
0
0
0
0
0
4
9
5
.
0
0
0
0
0
0
4
1
5
14
-0.5 0.0 0.5 1.0
= 0.20
n = 10
-0.5 0.0 0.5
= 0.20
n = 20
-0.2 0.0 0.2 0.4 0.6
= 0.20
n = 50
Figure 1: First simulation experiment (cf. Table 1). Histograms of

(thick line), respectively (thin line), compared with Gaussian distributions


N(

sim
,
2

sim
) (heavy line), respectively N(
sim
,
2
sim
) (light line). The
number of histogram classes follows Scott (1979).
15
= 0.50
n = 10
= 0.50
n = 50
= 0.50
n = 150
-0.5 0.0 0.5 1.0
0.0 0.5
0.3 0.4 0.5 0.6 0.7
Figure 1: (Continued)
16
= 0.90
n = 20
= 0.90
n = 100
= 0.90
n = 800
0.5 1.0
0.7 0.8 0.9 1.0
0.85 0.90 0.95
Figure 1: (Continued)
17
= 0.98
n = 20
= 0.98
n = 200
= 0.98
n = 2000
0.6 0.8 1.0 1.2
0.90 0.95 1.00
0.96 0.97 0.98 0.99
Figure 1: (Continued)
18
= 0.999
n = 500
= 0.999
n = 2000
= 0.999
n = 50000
0.98 0.99 1.00 1.01
0.995 1.000
0.9985 0.9990 0.9995
Figure 1: (Continued)
19
0.0 0.2 0.4 0.6 0.8 1.0
0.00
0.05
0.10
- E(^)
n = 10
SQRT[var(^)]
n = 10
0.0 0.2 0.4 0.6 0.8 1.0

0.00
0.10
0.20
0.30
Figure 2: Second simulation experiment. Above: negative bias, below:
standard deviation. Simulation results (dots). Theoretical result, our for-
mula, (12) and (15), respectively, (heavy line). Theoretical result, Whites
(1961) formula, (25) and (26), respectively, (light line).
20
0.00
0.04
0.08
0.0 0.2 0.4 0.6 0.8 1.0
- E(^)
n = 20
0.0 0.2 0.4 0.6 0.8 1.0

0.00
0.10
0.20
SQRT[var(^)]
n = 20
Figure 2: (Continued)
21
0.0 0.2 0.4 0.6 0.8 1.0
0.00
0.01
0.02
- E(^)
n = 100
SQRT[var(^)]
n = 100
0.0 0.2 0.4 0.6 0.8 1.0

0.00
0.05
0.10
Figure 2: (Continued)
22

You might also like