
Econometrics

Chapter 12 Autocorrelation
Autocorrelation
In the classical regression model, it is
assumed that E(u_t u_s) = 0 for t ≠ s.
What happens when this assumption is
violated?
First-order autocorrelation

Y_t = β_0 + β_1 X_{1t} + β_2 X_{2t} + … + β_k X_{kt} + u_t

where:

u_t = ρ u_{t-1} + ε_t
E(ε_t) = 0
E(ε_t ε_s) = 0 for t ≠ s
E(ε_t²) = σ_ε²
|ρ| < 1
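The error process above is easy to see numerically. The sketch below (illustrative Python; the parameter values ρ = 0.7 and N = 10,000 are made up, not from the slides) simulates u_t = ρ u_{t-1} + ε_t and checks that the sample first-order autocorrelation of u is close to ρ.

```python
import numpy as np

# Simulate an AR(1) error process u_t = rho * u_{t-1} + eps_t with |rho| < 1.
# rho and N are illustrative choices, not from the slides.
rng = np.random.default_rng(0)
rho, N = 0.7, 10_000

eps = rng.normal(0.0, 1.0, N)      # white-noise innovations eps_t
u = np.zeros(N)
for t in range(1, N):
    u[t] = rho * u[t - 1] + eps[t]

# The sample correlation between u_t and u_{t-1} should be close to rho.
r1 = np.corrcoef(u[1:], u[:-1])[0, 1]
print(r1)
```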
Positive first-order autocorrelation (ρ > 0)
Negative first-order autocorrelation (ρ < 0)
Incorrect model specification and apparent autocorrelation
Violation of assumption of classical regression model
E(u_t u_{t-1}) = E[(ρ u_{t-1} + ε_t) u_{t-1}]
             = ρ E(u_{t-1}²)
             = ρ σ_u² ≠ 0

corr(u_t, u_{t-1}) = ρ
Consequences of first-order
autocorrelation
OLS estimators are unbiased and consistent
OLS estimators are not BLUE
The estimated variance of the residuals is biased
Biased estimators of the standard errors of the coefficients (usually a downward bias)
Biased t-ratios (usually an upward bias)
Detection
Durbin-Watson statistic:

d = Σ_{t=2}^{N} (û_t − û_{t-1})² / Σ_{t=1}^{N} û_t²

d ≈ 2(1 − ρ̂)
Acceptance and rejection
regions for DW statistic

H_0: ρ = 0
H_1: ρ ≠ 0
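The Durbin-Watson statistic is straightforward to compute from saved residuals. Below is a minimal sketch (the function name `durbin_watson` is my own choice; statsmodels ships an equivalent in `statsmodels.stats.stattools`):

```python
import numpy as np

def durbin_watson(resid):
    """d = sum_{t=2..N} (e_t - e_{t-1})^2 / sum_{t=1..N} e_t^2."""
    resid = np.asarray(resid, dtype=float)
    return np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)

# Since d ~ 2(1 - rho_hat): d is near 2 for uncorrelated residuals,
# near 0 under strong positive autocorrelation, near 4 under negative.
rng = np.random.default_rng(1)
e = rng.normal(size=500)           # uncorrelated residuals
print(durbin_watson(e))            # should be close to 2
```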
AR(1) correction: known
1 1 2 2
1
where:
t o t t k kt t
t t t
Y X X X u
u u
| | | |
c

= + + + + +
= +
Lagging this relationship 1 period:
1 1 1 1 2 2 1 1 1 t o t t k kt t
Y X X X u | | | |

= + + + + +
Multiplying this by -
1 1 1 1 2 2 1 1 1 t o t t k kt t
Y X X X u | | | |

=
With a little bit of algebra:
( ) ( ) ( ) ( ) ( )
1 1 1 1 1 2 2 2 1 1 1
1
t t o t t t t k kt kt t t
Y Y X X X X X X u u | | | |

= + + + + +
( ) ( ) ( ) ( )
1 1 1 1 1 2 2 2 1 1
1
t t o t t t t k kt kt t
Y Y X X X X X X | | | | c

= + + + + +
AR(1) correction: known ρ
Solution?
Quasi-difference each variable:

Y*_t = Y_t − ρ Y_{t-1}
X*_{1t} = X_{1t} − ρ X_{1,t-1}
X*_{2t} = X_{2t} − ρ X_{2,t-1}
…
X*_{kt} = X_{kt} − ρ X_{k,t-1}

Regress:

Y*_t = β*_0 + β_1 X*_{1t} + β_2 X*_{2t} + … + β_k X*_{kt} + ε_t, where β*_0 = β_0(1 − ρ)
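As a sketch of the transformation (hypothetical data, a single regressor, and ρ treated as known; none of these values come from the slides), quasi-differencing followed by OLS looks like:

```python
import numpy as np

def quasi_difference(z, rho):
    """Return z_t - rho * z_{t-1} for t = 2..N (one observation is lost)."""
    z = np.asarray(z, dtype=float)
    return z[1:] - rho * z[:-1]

# Made-up data-generating process: Y = b0 + b1*X + u with AR(1) errors.
rng = np.random.default_rng(2)
N, rho, b0, b1 = 400, 0.8, 1.0, 2.0
X = rng.normal(size=N)
eps = rng.normal(size=N)
u = np.zeros(N)
for t in range(1, N):
    u[t] = rho * u[t - 1] + eps[t]
Y = b0 + b1 * X + u

# OLS on the quasi-differenced model; the fitted intercept estimates
# b0 * (1 - rho), so divide by (1 - rho) to recover the original intercept.
Ys, Xs = quasi_difference(Y, rho), quasi_difference(X, rho)
A = np.column_stack([np.ones_like(Xs), Xs])
coef, *_ = np.linalg.lstsq(A, Ys, rcond=None)
b0_hat, b1_hat = coef[0] / (1 - rho), coef[1]
print(b0_hat, b1_hat)
```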
AR(1) correction: known ρ
This procedure provides unbiased and
consistent estimates of all model
parameters and standard errors.
If ρ = 1, a unit root is said to exist. In
this case, quasi-differencing is
equivalent to differencing (the intercept
β_0(1 − ρ) drops out):

ΔY_t = β_1 ΔX_{1t} + β_2 ΔX_{2t} + … + β_k ΔX_{kt} + ε_t
Generalized least squares
This approach is referred to as
Generalized Least Squares (GLS).
GLS estimation strategy:
If one of the assumptions of the classical
regression model is violated, transform the
model so that the transformed model
satisfies these assumptions.
Then estimate the transformed model.
AR(1) correction: unknown ρ
Cochrane-Orcutt procedure:
1. Estimate the original model using OLS. Save the
residuals.
2. Regress the saved residuals on their lagged values
(without a constant) to estimate ρ.
3. Estimate a quasi-differenced version of the original
model. Use the estimated parameters to
generate a new set of residuals.
4. Go to step 2. Repeat this process until the change in
parameter estimates becomes less than a selected
threshold value.
This results in unbiased and consistent estimates of
all model parameters and standard errors.
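The four steps can be sketched as follows (illustrative Python with a single regressor and made-up data; production software such as statsmodels' `GLSAR` implements a refined version of this iteration):

```python
import numpy as np

def ols(y, x):
    """OLS with a constant; returns (intercept, slope)."""
    A = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(A, y, rcond=None)[0]

def cochrane_orcutt(Y, X, tol=1e-6, max_iter=50):
    b0, b1 = ols(Y, X)                  # step 1: OLS on the original model
    rho = 0.0
    for _ in range(max_iter):
        u = Y - b0 - b1 * X             # residuals at current estimates
        # step 2: regress u_t on u_{t-1} without a constant to estimate rho
        rho_new = (u[1:] @ u[:-1]) / (u[:-1] @ u[:-1])
        # step 3: OLS on the quasi-differenced model
        Ys, Xs = Y[1:] - rho_new * Y[:-1], X[1:] - rho_new * X[:-1]
        a0, b1 = ols(Ys, Xs)
        b0 = a0 / (1 - rho_new)         # transformed intercept is b0*(1-rho)
        if abs(rho_new - rho) < tol:    # step 4: iterate until rho settles
            break
        rho = rho_new
    return b0, b1, rho_new

# Made-up data with AR(1) errors (true b0 = 1, b1 = 2, rho = 0.6).
rng = np.random.default_rng(3)
N, rho_true = 500, 0.6
X = rng.normal(size=N)
eps = rng.normal(size=N)
u = np.zeros(N)
for t in range(1, N):
    u[t] = rho_true * u[t - 1] + eps[t]
Y = 1.0 + 2.0 * X + u
b0_hat, b1_hat, rho_hat = cochrane_orcutt(Y, X)
print(b0_hat, b1_hat, rho_hat)
```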
Prais-Winsten estimator
The Cochrane-Orcutt method involves the loss
of one observation.
The Prais-Winsten estimator is similar to the
Cochrane-Orcutt method, but applies a
different transformation to the first
observation (see text, p. 444).
Monte Carlo studies indicate a substantial
efficiency gain from the use of the Prais-
Winsten estimator relative to the Cochrane-
Orcutt method.
Hildreth-Lu estimator
A grid search over ρ helps ensure
that the estimator reaches the global
minimum of the sum of squared errors
rather than a local minimum.
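A sketch of the idea (hypothetical helper, single regressor, made-up data): evaluate the sum of squared residuals of the quasi-differenced regression over a grid of ρ values and keep the global minimizer.

```python
import numpy as np

def hildreth_lu(Y, X, grid=None):
    """Grid search over rho; returns (rho, coefficients) at the global SSR minimum."""
    if grid is None:
        grid = np.arange(-0.99, 1.0, 0.01)   # grid resolution is a tuning choice
    best_ssr, best_rho, best_coef = np.inf, None, None
    for rho in grid:
        Ys, Xs = Y[1:] - rho * Y[:-1], X[1:] - rho * X[:-1]
        A = np.column_stack([np.ones_like(Xs), Xs])
        coef, *_ = np.linalg.lstsq(A, Ys, rcond=None)
        ssr = np.sum((Ys - A @ coef) ** 2)
        if ssr < best_ssr:
            best_ssr, best_rho, best_coef = ssr, rho, coef
    return best_rho, best_coef

# Made-up data with AR(1) errors (true b1 = 2, rho = 0.6).
rng = np.random.default_rng(4)
N, rho_true = 500, 0.6
X = rng.normal(size=N)
eps = rng.normal(size=N)
u = np.zeros(N)
for t in range(1, N):
    u[t] = rho_true * u[t - 1] + eps[t]
Y = 1.0 + 2.0 * X + u
rho_hat, coef = hildreth_lu(Y, X)
print(rho_hat, coef)
```

A fine grid (or a coarse grid followed by a finer pass around the best point) is what guards against stopping at a local minimum.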
Maximum likelihood estimator
Selects parameter values that maximize
the computed probability of observing
the realized outcomes for the
dependent and independent variables.
An asymptotically efficient estimator.
Higher-order autocorrelation
AR(p):

u_t = ρ_1 u_{t-1} + ρ_2 u_{t-2} + … + ρ_p u_{t-p} + ε_t
Detection of an AR(p) error process
Breusch-Godfrey test:
1. Estimate the parameters of the original model and
save the residuals.
2. Regress the estimated residuals on all independent
variables in the original model and the first p
lagged residuals.
3. Compute the Breusch-Godfrey Lagrange
Multiplier test statistic: NR².
4. An AR(p) process is found to exist if the LM
statistic exceeds the critical value for a χ² variate
with p degrees of freedom.
Alternatively, use the Box-Pierce or Ljung-Box statistic (see p.
451).
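The steps above can be sketched directly (illustrative Python; for real work statsmodels provides `acorr_breusch_godfrey`). Missing lagged residuals at the start of the sample are set to zero, one common convention:

```python
import numpy as np

def breusch_godfrey_lm(Y, X, p=1):
    """LM = N * R^2 from regressing OLS residuals on X and p lagged residuals."""
    N = len(Y)
    A = np.column_stack([np.ones(N), X])
    u = Y - A @ np.linalg.lstsq(A, Y, rcond=None)[0]   # step 1: save residuals

    # step 2: auxiliary regression of u_t on the regressors and u_{t-1..t-p}
    lags = np.column_stack(
        [np.concatenate([np.zeros(j), u[:-j]]) for j in range(1, p + 1)]
    )
    Z = np.column_stack([A, lags])
    fitted = Z @ np.linalg.lstsq(Z, u, rcond=None)[0]
    r2 = 1.0 - np.sum((u - fitted) ** 2) / np.sum((u - u.mean()) ** 2)
    return N * r2   # step 3: compare with the chi-squared(p) critical value

# Made-up data with AR(1) errors; the 5% chi-squared critical value for
# p = 1 degree of freedom is about 3.84, so the test should reject here.
rng = np.random.default_rng(5)
N, rho = 500, 0.6
X = rng.normal(size=N)
eps = rng.normal(size=N)
u = np.zeros(N)
for t in range(1, N):
    u[t] = rho * u[t - 1] + eps[t]
Y = 1.0 + 2.0 * X + u
lm = breusch_godfrey_lm(Y, X, p=1)
print(lm)
```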
Lagged dependent variable as
regressor
The Durbin-Watson statistic is biased
downward when a lagged dependent
variable is used as a regressor.
Use Durbin's h test or the Lagrange
Multiplier test (test statistic = (N−1)R² in
this case).
Correction: Hatanaka's estimator (on
pp. 458-9 of the text).
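For reference, Durbin's h statistic mentioned above is commonly written as follows (a standard formulation; the slide itself does not spell it out):

```latex
h \;=\; \hat{\rho}\,\sqrt{\frac{N}{1 - N\,\widehat{\mathrm{Var}}(\hat{\gamma})}},
\qquad \hat{\rho} \;\approx\; 1 - \frac{d}{2}
```

where γ̂ is the coefficient on the lagged dependent variable and d is the Durbin-Watson statistic. Under H_0: ρ = 0, h is approximately standard normal; the test cannot be computed when N·Var̂(γ̂) ≥ 1, in which case the Lagrange Multiplier test is used instead.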
Correction of an AR(p) process
Use Prais-Winsten (modified for an
AR(p) process) or the maximum likelihood
estimator.
