Outline
(1) Interpretation of Time Series Regressions
(2) Assumptions and Results:
(a) Consistency
Example: AR(1)
(b) Unbiasedness and Bias in Dynamic Models
Example: AR(1)
(c) Asymptotic Distribution
(3) Autocorrelation of the Error Term
Consider the linear time series regression model

$$y_t = x_t'\beta + \epsilon_t, \qquad t = 1, 2, ..., T. \qquad (*)$$

An important special case is the first order autoregressive, AR(1), model

$$y_t = \theta y_{t-1} + \epsilon_t.$$

Shocks to the process ($\epsilon_t$) have dynamic effects.
More complicated dynamics can be allowed in the autoregressive distributed lag (ADL) model.
OLS Estimator

One way to motivate OLS is the so-called moment condition

$$E[x_t \epsilon_t] = 0. \qquad (**)$$

The model implies that $\epsilon_t = y_t - x_t'\beta$, so that

$$E[x_t y_t] - E[x_t x_t']\beta = 0,$$

with the solution

$$\beta = E[x_t x_t']^{-1} E[x_t y_t].$$

The OLS estimator replaces population moments with sample averages, e.g.

$$T^{-1}\sum_{t=1}^{T} x_t x_t' \to E[x_t x_t'],$$

so that

$$\hat\beta = \left( T^{-1}\sum_{t=1}^{T} x_t x_t' \right)^{-1} T^{-1}\sum_{t=1}^{T} x_t y_t.$$
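The sample-moment formula can be checked numerically. A minimal sketch in NumPy, with simulated data; all variable names and parameter values are illustrative:

```python
import numpy as np

def ols(X, y):
    # beta_hat = (sum_t x_t x_t')^{-1} (sum_t x_t y_t); the T^{-1} factors cancel
    return np.linalg.solve(X.T @ X, X.T @ y)

rng = np.random.default_rng(0)
T = 500
X = np.column_stack([np.ones(T), rng.normal(size=T)])  # constant plus one regressor
beta = np.array([1.0, 2.0])                            # true coefficients
y = X @ beta + rng.normal(size=T)                      # model (*) with iid errors
beta_hat = ols(X, y)                                   # close to (1.0, 2.0) for large T
```

Using `np.linalg.solve` rather than explicitly inverting $X'X$ is the standard numerically stable way to evaluate the formula.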
Main Assumption
We impose assumptions to ensure that a law of large numbers (LLN) applies to the sample averages.

Main Assumption:
Consider a time series $y_t$ and the $k \times 1$ vector time series $x_t$. We assume
(1) that $z_t = (y_t, x_t')'$ has a joint stationary distribution; and
(2) that the process $z_t$ is weakly dependent, so that $z_t$ and $z_{t+k}$ become approximately independent for $k \to \infty$.

Under these assumptions, most of the results for linear regression on random samples carry over to the time series case.
Consistency

A minimal requirement for an estimator is that it is consistent, so that $\hat\beta$ converges to $\beta$ as we get more and more observations.

Result 1: Consistency
Let $y_t$ and $x_t$ obey the main assumption. If the regressors are predetermined,

$$E[x_t \epsilon_t] = 0, \qquad (\#)$$

then the OLS estimator is consistent, i.e. $\hat\beta \to \beta$ as $T \to \infty$.

Predeterminedness $(\#)$ is implied by the stronger conditional moment condition

$$E[\epsilon_t \mid x_t] = 0. \qquad (\#\#)$$
Consider the AR(1) model, $y_t = \theta y_{t-1} + \epsilon_t$, $t = 1, 2, ..., T$. The OLS estimator is

$$\hat\theta = \frac{T^{-1}\sum_{t=1}^{T} y_{t-1} y_t}{T^{-1}\sum_{t=1}^{T} y_{t-1}^2} = \theta + \frac{T^{-1}\sum_{t=1}^{T} \epsilon_t y_{t-1}}{T^{-1}\sum_{t=1}^{T} y_{t-1}^2}.$$

Consistency follows if

$$\text{plim}\; T^{-1}\sum_{t=1}^{T} y_{t-1}^2 = q < \infty \qquad (O)$$

and

$$\text{plim}\; T^{-1}\sum_{t=1}^{T} \epsilon_t y_{t-1} = E[\epsilon_t y_{t-1}] = 0. \qquad (OO)$$
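A quick simulation illustrates the result: in a stationary AR(1), $\hat\theta$ settles near $\theta$ as $T$ grows. A sketch, with $\theta = 0.5$ chosen purely for illustration:

```python
import numpy as np

def simulate_ar1(theta, T, rng):
    # Generate y_t = theta * y_{t-1} + eps_t with iid N(0,1) shocks, y_0 = 0
    y = np.zeros(T)
    e = rng.normal(size=T)
    for t in range(1, T):
        y[t] = theta * y[t - 1] + e[t]
    return y

def ols_ar1(y):
    # theta_hat = sum(y_{t-1} y_t) / sum(y_{t-1}^2)
    return (y[1:] @ y[:-1]) / (y[:-1] @ y[:-1])

rng = np.random.default_rng(1)
theta = 0.5
ests = [ols_ar1(simulate_ar1(theta, T, rng)) for T in (100, 10_000)]
# the estimate for T = 10_000 lies very close to theta = 0.5
```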
Unbiasedness

A stronger requirement for an estimator, $\hat\beta$, is unbiasedness: $E[\hat\beta] = \beta$.

Result 2: Unbiasedness
Let $y_t$ and $x_t$ obey the main assumption. If the regressors are strictly exogenous, then the OLS estimator is unbiased, $E[\hat\beta] = \beta$.

For unbiasedness we need strict exogeneity, which is not fulfilled in a dynamic regression. Consider the first order autoregressive model

$$y_t = \theta y_{t-1} + \epsilon_t.$$

Here $y_t$ is a function of $\epsilon_t$, so the regressor in the equation for $y_{t+1}$ is correlated with the period-$t$ error and strict exogeneity fails.
Simulation illustration: data generated from $y_t = 0.9\, y_{t-1} + \epsilon_t$, $\epsilon_t \sim N(0, 1)$.

[Figure: mean of the OLS estimate of $\theta$ across simulations (vertical axis, 0.4 to 0.9) plotted against the sample size $T$ (horizontal axis, 10 to 100); the mean lies below the true value $\theta = 0.9$ in small samples.]
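The simulation behind such a figure can be sketched as a small Monte Carlo; the replication counts and sample sizes here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
theta = 0.9  # true autoregressive parameter

def theta_hat(T):
    # One replication: simulate an AR(1) path and return the OLS estimate
    y = np.zeros(T)
    e = rng.normal(size=T)
    for t in range(1, T):
        y[t] = theta * y[t - 1] + e[t]
    return (y[1:] @ y[:-1]) / (y[:-1] @ y[:-1])

# Average estimate across replications for a small and a larger sample size
mean_small = np.mean([theta_hat(20) for _ in range(2000)])
mean_large = np.mean([theta_hat(200) for _ in range(2000)])
# mean_small lies well below 0.9; mean_large is much closer,
# illustrating the small-sample bias in the dynamic model
```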
Asymptotic Distribution

To derive the asymptotic distribution we need a central limit theorem (CLT); this requires additional restrictions on $\epsilon_t$:

$$E[\epsilon_t^2 \mid x_t] = \sigma^2$$
$$E[\epsilon_t \epsilon_s \mid x_t, x_s] = 0 \quad \text{for all } t \neq s.$$

Then as $T \to \infty$, the OLS estimator is asymptotically normal:

$$\sqrt{T}(\hat\beta - \beta) \to N(0, \sigma^2 E[x_t x_t']^{-1}).$$

Inserting natural estimators, we can test hypotheses using

$$\hat\beta \overset{a}{\sim} N\!\left(\beta,\; \hat\sigma^2 \left(\sum_{t=1}^{T} x_t x_t'\right)^{-1}\right).$$
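Standard errors based on this asymptotic distribution can be computed directly. A minimal sketch under the homoskedastic, serially uncorrelated assumptions above (simulated data, names illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
T = 1000
X = np.column_stack([np.ones(T), rng.normal(size=T)])
beta = np.array([0.5, 1.0])                   # true coefficients
y = X @ beta + rng.normal(size=T)             # iid homoskedastic errors

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ (X.T @ y)
resid = y - X @ beta_hat
sigma2_hat = resid @ resid / (T - X.shape[1])          # degrees-of-freedom correction
se = np.sqrt(sigma2_hat * np.diag(XtX_inv))            # from sigma^2 (X'X)^{-1}
t_stats = (beta_hat - beta) / se                       # approx. standard normal at the true beta
```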
No-serial-correlation states that the error terms are uncorrelated over time; it is practically the same as dynamic completeness,

$$E[y_t \mid x_t, y_{t-1}, x_{t-1}, y_{t-2}, x_{t-2}, ..., y_1, x_1] = E[y_t \mid x_t] = x_t'\beta,$$

i.e. $x_t$ contains all relevant information in the available information set: all systematic information in the past of $y_t$ and $x_t$ is used in the regression model.

This is often taken as an important design criterion for a dynamic regression model. We should always test for no-autocorrelation in time series models.
Residual autocorrelation does not imply that the DGP has autocorrelated errors.
Autocorrelation is taken as a signal of misspecification. Different possibilities:
(I) Autoregressive errors in the DGP.
(II) Dynamic misspecification.
(III) Omitted variables and non-modelled structural shifts.
(IV) Misspecified functional form.
The solution to the problem depends on the interpretation.
Consequences of Autocorrelation
Autocorrelation will not violate the assumptions for Result 1 in general. But $E[x_t \epsilon_t] = 0$ is violated if the model includes a lagged dependent variable.

Look at an AR(1) model with error autocorrelation, i.e. the two equations

$$y_t = \theta y_{t-1} + \epsilon_t$$
$$\epsilon_t = \rho \epsilon_{t-1} + v_t, \qquad v_t \sim IID(0, \sigma_v^2).$$

Both $y_{t-1}$ and $\epsilon_t$ depend on $\epsilon_{t-1}$, so $E[y_{t-1} \epsilon_t] \neq 0$.
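The resulting inconsistency is easy to verify by simulation. A sketch with $\theta = \rho = 0.5$ (illustrative values); the probability limit of $\hat\theta$ exceeds $\theta$:

```python
import numpy as np

rng = np.random.default_rng(4)
theta, rho = 0.5, 0.5
T = 20_000
v = rng.normal(size=T)
eps = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    eps[t] = rho * eps[t - 1] + v[t]   # AR(1) error process
    y[t] = theta * y[t - 1] + eps[t]   # dynamic model with autocorrelated errors

theta_hat = (y[1:] @ y[:-1]) / (y[:-1] @ y[:-1])
# theta_hat stays well above theta = 0.5 even for large T:
# the positive correlation between y_{t-1} and eps_t biases OLS upward
```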
Consider instead a static regression with autoregressive errors,

$$y_t = x_t'\beta + \epsilon_t$$
$$\epsilon_t = \rho \epsilon_{t-1} + v_t, \qquad v_t \sim IID(0, \sigma_v^2).$$
$$E[(x_t - x_{t-1})(\epsilon_t - \epsilon_{t-1})] = 0.$$
Under dynamic misspecification,

$$E[y_t \mid x_t] \neq E[y_t \mid x_t, y_{t-1}, x_{t-1}, y_{t-2}, x_{t-2}, ..., y_1, x_1].$$

The (dynamic) model is misspecified and should be reformulated. A natural remedy is to extend the list of variables in $x_t$.

If the autocorrelation seems to be of order one, then a starting point is the GLS transformation. But the AR(1) structure is only indicative, and we look at the unrestricted ADL model.
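The GLS (quasi-difference) transformation for AR(1) errors can be sketched as a two-step, Cochrane-Orcutt style procedure. Variable names and parameter values below are illustrative, and $x_t$ is assumed strictly exogenous:

```python
import numpy as np

rng = np.random.default_rng(5)
T, beta, rho = 2000, 2.0, 0.7
x = rng.normal(size=T)
v = rng.normal(size=T)
eps = np.zeros(T)
for t in range(1, T):
    eps[t] = rho * eps[t - 1] + v[t]   # AR(1) errors
y = beta * x + eps                      # static regression, single regressor

# Step 1: OLS, then estimate rho from the residuals
b_ols = (x @ y) / (x @ x)
e = y - b_ols * x
rho_hat = (e[1:] @ e[:-1]) / (e[:-1] @ e[:-1])

# Step 2: quasi-difference, y*_t = y_t - rho_hat*y_{t-1} (likewise x*_t), and re-run OLS
ys = y[1:] - rho_hat * y[:-1]
xs = x[1:] - rho_hat * x[:-1]
b_gls = (xs @ ys) / (xs @ xs)
# b_gls is consistent for beta, with (approximately) serially uncorrelated transformed errors
```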
Suppose the data generating process is

$$y_t = x_{1t}'\beta_1 + x_{2t}'\beta_2 + \epsilon_t, \qquad (*)$$

but we estimate the regression

$$y_t = x_{1t}'\beta_1 + u_t. \qquad (**)$$

Then $u_t = x_{2t}'\beta_2 + \epsilon_t$, which is autocorrelated if the omitted $x_{2t}$ is. An example is a non-modelled structural shift, captured by the step dummy

$$x_{2t} = \begin{cases} 0 & \text{for } t < T_0 \\ 1 & \text{for } t \geq T_0. \end{cases}$$
If the true relationship

$$y_t = g(x_t) + \epsilon_t$$

is non-linear, then the residuals from a linear regression will typically be autocorrelated.
The obvious solution is to try to reformulate the functional form of the regression line.
To test for no-autocorrelation in the regression model

$$y_t = x_t'\beta + \epsilon_t,$$

run the auxiliary regression of the OLS residuals $\hat\epsilon_t$ on $x_t$ and the lagged residual $\hat\epsilon_{t-1}$. The LM test for first order autocorrelation is then

$$LM = T \cdot R^2 \to \chi^2(1).$$

Note that $x_t$ and $\hat\epsilon_{t-1}$ are potentially correlated, so $x_t$ must be included in the auxiliary regression.
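The LM statistic requires only two OLS regressions. A minimal sketch under the null of no autocorrelation (simulated data, names illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)
T = 500
x = np.column_stack([np.ones(T), rng.normal(size=T)])
y = x @ np.array([1.0, 2.0]) + rng.normal(size=T)  # iid errors: H0 is true

# Step 1: OLS residuals from the original regression
b = np.linalg.solve(x.T @ x, x.T @ y)
e = y - x @ b

# Step 2: auxiliary regression of e_t on x_t and e_{t-1} (one observation lost to the lag)
Z = np.column_stack([x[1:], e[:-1]])
g = np.linalg.solve(Z.T @ Z, Z.T @ e[1:])
u = e[1:] - Z @ g
R2 = 1 - (u @ u) / ((e[1:] - e[1:].mean()) @ (e[1:] - e[1:].mean()))
LM = (T - 1) * R2
# Compare LM with the chi^2(1) critical value (3.84 at the 5% level);
# under H0 it should usually be small
```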