
TESTING FOR AUTOCORRELATION AND ITS REMEDIES

The error term of one observation should be independent of the error
term of any other observation, i.e., εi and εj should not be correlated.
OR
The error term of last year should not affect the error term of the current year; if it
does, we have a problem of autocorrelation. Mathematically,
Cov(εi, εj) = 0 for i ≠ j
This is the no-serial-correlation assumption. When this assumption is
violated and the error terms are correlated, we face the problem of
autocorrelation. If such correlation is observed in cross-sectional data, it
is called spatial autocorrelation, but this occurs only occasionally. It is
in time series data that the chances of autocorrelation are greatest.
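
To make the assumption concrete, here is a minimal sketch (not part of the original SPSS workflow) of what "correlated error terms" means in practice, using a hypothetical residual series e from a time-series regression and checking its lag-one correlation, the sample analogue of Cov(εt, εt-1):

import numpy as np

# Hypothetical residuals from a time-series regression (illustration only).
e = np.array([1.2, 0.9, 1.1, 0.4, -0.2, -0.6, -0.9, -0.5, 0.1, 0.7])

# Sample analogue of Cov(e_t, e_t-1): correlate each residual with its own lag.
lag1_corr = np.corrcoef(e[1:], e[:-1])[0, 1]
print("Lag-1 residual correlation:", round(lag1_corr, 3))
# A value far from zero signals serial (auto)correlation in the errors.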

Consequences
1. The residual variance is likely to underestimate the true error variance σ2.
2. As a result, we are likely to overestimate R2.
3. Var(βi) is underestimated.
4. Consequently, the t and F tests are no longer valid; they mislead us
   about the statistical significance of the estimated regression
   coefficients.

Checking whether the problem of autocorrelation exists, and how to solve it


YEAR    Y       X
1990    90      79.7
1991    89.7    79.8
1992    89.8    81.4
1993    91.1    81.2
1994    91.2    84
1995    91.5    86.4
1996    92.8    88.1
1997    95.9    90.7
1998    96.3    91.3
1999    97.3    92.4
2000    95.8    93.3
2001    96.4    94.5
2002    97.4    95.9
2003    100     100
2004    99.9    100.1
2005    99.7    101.4
2006    99.1    102.2
2007    99.6    105.2
2008    101.1   107.5
2009    105.1   110.5

Model: Y = f(X) = β0 + β1X + e

Model Summary(b)
Model   R       R Square   Adjusted R Square   Std. Error of the Estimate   Durbin-Watson
1       .974a   .949       .947                1.02209                      1.195
a. Predictors: (Constant), X
b. Dependent Variable: Y

ANOVA(a)
Model            Sum of Squares   df   Mean Square   F         Sig.
1   Regression   353.101          1    353.101       338.000   .000b
    Residual     18.804           18   1.045
    Total        371.905          19
a. Dependent Variable: Y
b. Predictors: (Constant), X
Coefficients(a)
                 Unstandardized Coefficients   Standardized Coefficients
Model            B        Std. Error           Beta                        t        Sig.
1   (Constant)   53.612   2.316                                            23.147   .000
    X            .454     .025                 .974                        18.385   .000
a. Dependent Variable: Y

The model is statistically significant (F = 338.000; p < 0.01); R2 is very good; the t statistic is highly
significant (p < 0.01); however, DW = 1.195 is well below 2, suggesting that the model may be suffering from
an autocorrelation problem, which we test formally below.
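
As a cross-check outside SPSS, the following is a minimal Python sketch (not part of the original SPSS workflow) that fits the same model with statsmodels, using the twenty (Y, X) pairs from the table above; it reports R2, F, the coefficients and the Durbin-Watson statistic, and saves the residuals reused by the tests below.

import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

# The twenty observations from the table above (second column = Y, third = X).
y = np.array([90, 89.7, 89.8, 91.1, 91.2, 91.5, 92.8, 95.9, 96.3, 97.3,
              95.8, 96.4, 97.4, 100, 99.9, 99.7, 99.1, 99.6, 101.1, 105.1])
x = np.array([79.7, 79.8, 81.4, 81.2, 84, 86.4, 88.1, 90.7, 91.3, 92.4,
              93.3, 94.5, 95.9, 100, 100.1, 101.4, 102.2, 105.2, 107.5, 110.5])

# Fit Y = b0 + b1*X by OLS, mirroring the SPSS regression reported above.
model = sm.OLS(y, sm.add_constant(x)).fit()
print(model.summary())                          # R^2, F, coefficients, t, p-values
print("Durbin-Watson:", durbin_watson(model.resid))

# Keep the residuals; the runs test and the DW comparison below reuse them.
residuals = model.resid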

Detecting Autocorrelation
1. The Runs test
The runs or Geary test is a non-parametric test used to detect the autocorrelation problem. We
have already saved the regression residuals. We now use the following SPSS commands to run the
runs test:
ANALYZE → NONPARAMETRIC TESTS → take the saved residuals to the Test Variable List
box → click MEAN → OK

                         Unstandardized Residual
Test Value(a)            0E-7
Cases < Test Value       10
Cases >= Test Value      10
Total Cases              20
Number of Runs           9
Z                        -.689
Asymp. Sig. (2-tailed)   .491
a. Mean

The output box indicates that:
1. There are 10 cases below the mean (negative residuals) out of 20 cases in total.
2. There are 10 cases above the mean (positive residuals).
3. The number of runs is 9.
For no autocorrelation, the Z statistic should lie between -1.96 and +1.96. Our Z = -0.689
lies well inside this range (Asymp. Sig. = .491 > .05), so the results suggest that the
problem of autocorrelation does not exist.
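
For readers working outside SPSS, the same runs test can be reproduced on the saved residuals; a minimal sketch using statsmodels' runstest_1samp, assuming the residuals array from the earlier OLS sketch, is shown below (the continuity correction is kept so the Z value matches the SPSS output).

from statsmodels.sandbox.stats.runs import runstest_1samp

# Runs (Geary) test on the saved OLS residuals, with the mean as the cut point,
# mirroring the SPSS runs-test output above.
z_stat, p_value = runstest_1samp(residuals, cutoff='mean', correction=True)
print("Z =", round(z_stat, 3), " p-value =", round(p_value, 3))
# |Z| < 1.96 (equivalently p > .05) -> no evidence of autocorrelation.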

2. Using the DW statistic
The Durbin-Watson d or DW statistic ranges between 0 and 4, where:
a. There is no autocorrelation around d = 2 (between du and 4-du).
b. There are two indecision zones, one on each side of the no-autocorrelation zone.
c. At the two extreme ends lie the positive-autocorrelation and negative-autocorrelation
zones.
[ Positive autocorrelation ][ Indecision ][ No autocorrelation ][ Indecision ][ Negative autocorrelation ]
0 ----- dl = 0.952 ----- du = 1.147 ----- 2 ----- 4-du = 2.853 ----- 4-dl = 3.048 ----- 4

How to test? The estimated model gives DW = 1.195, which needs to be compared with the
tabulated values in the Durbin-Watson d statistic tables. We have n = 20 and k = 1
(k excluding the intercept). At n = 20 and k = 1, the table gives dl = 0.952 and du = 1.147. Since
the calculated DW = 1.195 falls above du, i.e., in the no-autocorrelation zone, there
is no autocorrelation problem in these data.
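
The same comparison can be scripted; below is a minimal sketch that applies the zone rule to the Durbin-Watson statistic of the earlier OLS sketch (the dl and du bounds are the tabulated values quoted above, not computed by the library).

from statsmodels.stats.stattools import durbin_watson

# Tabulated Durbin-Watson bounds for n = 20, k = 1 (quoted from the d tables above).
dl, du = 0.952, 1.147

dw = durbin_watson(residuals)   # residuals saved from the earlier OLS sketch
if dw < dl or dw > 4 - dl:
    verdict = "autocorrelation (positive if dw < dl, negative if dw > 4 - dl)"
elif du <= dw <= 4 - du:
    verdict = "no autocorrelation"
else:
    verdict = "indecision zone"
print(f"DW = {dw:.3f} -> {verdict}")   # here DW = 1.195 > du, so no autocorrelation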

Remedies of Autocorrelation
There are two major remedies, namely (a sketch of both follows below):

The First-Differencing method
a. When the coefficient of autocorrelation (rho = ρ) is not known, the remedy is
first-differencing (which implicitly assumes ρ = 1), that is:
(Yt - Yt-1) = β1(Xt - Xt-1) + e

The Rho-Corrected regression
b. When ρ is known, the remedy is the generalized (rho-corrected) difference regression:
(Yt - ρYt-1) = β0(1 - ρ) + β1(Xt - ρXt-1) + e
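
A minimal Python sketch of both remedies, assuming the y and x arrays from the earlier OLS sketch: first differencing via numpy, and a rho-corrected (Cochrane-Orcutt-type) fit via statsmodels' GLSAR, which estimates ρ iteratively from the residuals rather than taking it as known.

import numpy as np
import statsmodels.api as sm

# Remedy (a): first-differencing when rho is unknown -- regress dY on dX without intercept.
dy, dx = np.diff(y), np.diff(x)
fd_model = sm.OLS(dy, dx).fit()             # (Yt - Yt-1) = b1 (Xt - Xt-1) + e
print("First-difference slope:", fd_model.params[0])

# Remedy (b): rho-corrected (generalized difference) regression.
# GLSAR with AR(1) errors iteratively estimates rho and transforms the data.
glsar_model = sm.GLSAR(y, sm.add_constant(x), rho=1).iterative_fit(maxiter=10)
print("Estimated rho:", glsar_model.model.rho[0])
print(glsar_model.summary())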
