
September 16, 2004 Hayashi Econometrics

Solution to Chapter 9 Analytical Exercises


1. From the hint, we have
$$\frac{1}{T}\sum_{t=1}^T \xi_{t-1}\,\Delta\xi_t = \frac{1}{2}\left(\frac{\xi_T}{\sqrt{T}}\right)^2 - \frac{1}{2}\left(\frac{\xi_0}{\sqrt{T}}\right)^2 - \frac{1}{2T}\sum_{t=1}^T (\Delta\xi_t)^2. \qquad (*)$$

Consider the second term on the RHS of (*). Since $E(\xi_0/\sqrt{T}) \to 0$ and $\mathrm{Var}(\xi_0/\sqrt{T}) \to 0$, $\xi_0/\sqrt{T}$ converges in mean square (by Chebyshev's LLN), and hence in probability, to 0. So the second term vanishes (converges in probability to zero); this can actually be shown directly from the definition of convergence in probability. Next, consider the expression $\xi_T/\sqrt{T}$ in the first term on the RHS of (*). It can be written as
$$\frac{\xi_T}{\sqrt{T}} = \frac{1}{\sqrt{T}}\left(\xi_0 + \Delta\xi_1 + \cdots + \Delta\xi_T\right) = \frac{\xi_0}{\sqrt{T}} + \sqrt{T}\,\frac{1}{T}\sum_{t=1}^T \Delta\xi_t.$$
As just seen, $\xi_0/\sqrt{T}$ vanishes. Since $\Delta\xi_t$ is I(0) satisfying (9.2.1)-(9.2.3), the hypothesis of Proposition 6.9 is satisfied (in particular, the absolute summability in the hypothesis of the Proposition is satisfied because it is implied by the one-summability (9.2.3a)). So
$$\frac{1}{\sqrt{T}}\sum_{t=1}^T \Delta\xi_t \xrightarrow{d} \lambda X, \qquad X \sim N(0,1),$$
where $\lambda^2$ is the long-run variance of $\Delta\xi_t$. Regarding the third term on the RHS of (*), since $\Delta\xi_t$ is ergodic stationary, $\frac{1}{2T}\sum_{t=1}^T (\Delta\xi_t)^2$ converges in probability to $\frac{1}{2}\gamma_0$. Finally, by Lemma 2.4(a) we conclude that the RHS of (*) converges in distribution to $\frac{\lambda^2}{2}X^2 - \frac{1}{2}\gamma_0$.
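As a quick numerical companion to this result (not part of the original solution), the following Python sketch simulates the LHS of (*) in the simplest special case, a driftless random walk with iid N(0,1) increments, so that $\lambda^2 = \gamma_0 = 1$ and the limiting variable is $(X^2-1)/2$; the sample size and replication count are arbitrary choices.

import numpy as np

rng = np.random.default_rng(0)
T, n_rep = 1_000, 10_000

# xi_t: driftless random walk with iid N(0,1) increments (xi_0 = 0),
# so lambda^2 = gamma_0 = 1 and the limiting variable is (X^2 - 1)/2.
eps = rng.standard_normal((n_rep, T))
xi = np.cumsum(eps, axis=1)
xi_lag = np.hstack([np.zeros((n_rep, 1)), xi[:, :-1]])

stat = (xi_lag * eps).mean(axis=1)            # (1/T) sum_t xi_{t-1} * Delta xi_t
limit = (rng.standard_normal(n_rep) ** 2 - 1) / 2

print(np.quantile(stat, [0.05, 0.5, 0.95]))   # the two rows should be close
print(np.quantile(limit, [0.05, 0.5, 0.95]))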

2. (a) The hint is the answer.


(b) From (a),
$$T(\hat{\rho} - 1) = \frac{\frac{1}{T}\sum_{t=1}^T y_{t-1}^\mu\,\Delta y_t}{\frac{1}{T^2}\sum_{t=1}^T (y_{t-1}^\mu)^2}.$$
Apply Proposition 9.2(d) to the numerator and Proposition 9.2(c) to the denominator.

(c) Since $\{y_t\}$ is a random walk, $\lambda^2 = \gamma_0$. Just set $\lambda^2 = \gamma_0$ in (4) of the question.
(d) First, a proof that $\hat{\alpha} \to_p 0$. By the algebra of OLS,
$$\hat{\alpha} = \frac{1}{T}\sum_{t=1}^T \left(y_t - \hat{\rho}\,y_{t-1}\right) = \frac{1}{T}\sum_{t=1}^T \left(\Delta y_t - (\hat{\rho}-1)y_{t-1}\right) = \frac{1}{T}\sum_{t=1}^T \Delta y_t - (\hat{\rho}-1)\frac{1}{T}\sum_{t=1}^T y_{t-1} = \frac{1}{T}\sum_{t=1}^T \Delta y_t - \frac{1}{\sqrt{T}}\left[T(\hat{\rho}-1)\right]\left(\frac{1}{\sqrt{T}}\frac{1}{T}\sum_{t=1}^T y_{t-1}\right).$$

PT
The first term after the last equality, T1 t=1 yt , vanishes (converges to zero in proba-
bility) because yt is ergodic stationary and E(yt ) = 0. To show that
 the second term
1
after the last equality vanishes, we first note that T T (b 1) vanishes because
PT
1) converges to a random variable by (b). By (6) in the hint, 1T T1 t=1 yt1
T (b

converges to a random variable. Therefore, by Lemma 2.4(b), the whole second term
vanishes.
Now turn to $s^2$. From the hint,
$$s^2 = \frac{1}{T-1}\sum_{t=1}^T (\Delta y_t - \hat{\alpha})^2 - \frac{2}{T-1}\left[T(\hat{\rho}-1)\right]\frac{1}{T}\sum_{t=1}^T (\Delta y_t - \hat{\alpha})\,y_{t-1} + \frac{1}{T-1}\left[T(\hat{\rho}-1)\right]^2 \frac{1}{T^2}\sum_{t=1}^T (y_{t-1})^2. \qquad (**)$$

Since $\hat{\alpha} \to_p 0$, it should be easy to show that the first term on the RHS of (**) converges to $\gamma_0$ in probability. Regarding the second term, rewrite it as
$$-\frac{2}{T-1}\left[T(\hat{\rho}-1)\right]\frac{1}{T}\sum_{t=1}^T \Delta y_t\,y_{t-1} + \frac{2}{T-1}\left[T(\hat{\rho}-1)\right]\left(\sqrt{T}\,\hat{\alpha}\right)\left(\frac{1}{\sqrt{T}}\frac{1}{T}\sum_{t=1}^T y_{t-1}\right). \qquad (***)$$
By Proposition 9.2(b), $\frac{1}{T}\sum_{t=1}^T \Delta y_t\,y_{t-1}$ converges to a random variable. So does $T(\hat{\rho}-1)$. Hence the first term of (***) vanishes. Turning to the second term of (***), (6) in the question means that $\frac{1}{\sqrt{T}}\frac{1}{T}\sum_{t=1}^T y_{t-1}$ converges to a random variable. It should now be routine to show that the whole second term of (***) vanishes. A similar argument, this time utilizing Proposition 9.2(a), shows that the third term of (**) vanishes.

(e) By (7) in the hint and (3), a little algebra yields
$$t = \frac{\hat{\rho} - 1}{s\Big/\sqrt{\sum_{t=1}^T (y_{t-1}^\mu)^2}} = \frac{\frac{1}{T}\sum_{t=1}^T y_{t-1}^\mu\,\Delta y_t}{s\,\sqrt{\frac{1}{T^2}\sum_{t=1}^T (y_{t-1}^\mu)^2}}.$$
Use Proposition 9.2(c) and (d) with $\lambda^2 = \gamma_0 = \sigma^2$ and the fact that $s$ is consistent for $\sigma$ to complete the proof.
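The limiting distribution obtained in (b)-(e) is easy to visualize by simulation. The sketch below (not from the text; the sample sizes and replication count are arbitrary) computes $T(\hat{\rho}-1)$ from the regression of $y_t$ on a constant and $y_{t-1}$ for simulated driftless random walks; the quantiles stabilize as $T$ grows, consistent with (b).

import numpy as np

rng = np.random.default_rng(1)

def df_rho_mu(T, n_rep, rng):
    """T*(rho_hat - 1) from OLS of y_t on (1, y_{t-1}), {y_t} a driftless random walk."""
    eps = rng.standard_normal((n_rep, T + 1))
    y = np.cumsum(eps, axis=1)
    y_lag, dy = y[:, :-1], np.diff(y, axis=1)
    y_mu = y_lag - y_lag.mean(axis=1, keepdims=True)   # demeaned regressor
    return T * (y_mu * dy).sum(axis=1) / (y_mu ** 2).sum(axis=1)

for T in (250, 1_000):
    print(T, np.quantile(df_rho_mu(T, 10_000, rng), [0.05, 0.5, 0.95]))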



3. (a) The hint is the answer.


(b) From (a), we have
$$T(\hat{\rho} - 1) = \frac{\frac{1}{T}\sum_{t=1}^T y_{t-1}^\tau\,\Delta y_t}{\frac{1}{T^2}\sum_{t=1}^T (y_{t-1}^\tau)^2}.$$
Let $\delta$ and $\eta_t$ be as defined in the hint, so that $y_t = \delta t + \eta_t$ and $\Delta y_t = \delta + \Delta\eta_t$. Detrending leaves the residuals unchanged, $y_{t-1}^\tau = \eta_{t-1}^\tau$, and by construction $\sum_{t=1}^T \eta_{t-1}^\tau = 0$. So
$$T(\hat{\rho} - 1) = \frac{\frac{1}{T}\sum_{t=1}^T \eta_{t-1}^\tau\,\Delta\eta_t}{\frac{1}{T^2}\sum_{t=1}^T (\eta_{t-1}^\tau)^2}.$$
Since $\{\eta_t\}$ is driftless I(1), Proposition 9.2(e) and (f) can be used here.
(c) Just observe that $\lambda^2 = \gamma_0$ if $\{y_t\}$ is a random walk with or without drift.
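The algebra in (b) has a concrete numerical implication: after detrending, the statistic does not depend on the drift at all. The sketch below (illustrative only; the drift values are hypothetical choices) applies different drifts to the same simulated shocks and prints identical statistics up to floating-point error.

import numpy as np

T = 500
eps = np.random.default_rng(2).standard_normal(T + 1)
t = np.arange(T + 1)
X = np.column_stack([np.ones(T), t[1:]])             # regressors (1, t), t = 1..T
M = np.eye(T) - X @ np.linalg.inv(X.T @ X) @ X.T     # residual maker: "tau" detrending

for delta in (0.0, 0.7, -3.0):                       # hypothetical drift values
    y = delta * t + eps.cumsum()                     # random walk with drift delta
    y_lag, dy = y[:-1], np.diff(y)                   # y_{t-1} and Delta y_t, t = 1..T
    y_tau = M @ y_lag                                # detrended y_{t-1}
    print(delta, T * (y_tau @ dy) / (y_tau @ y_tau)) # same value for every delta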

4. From the hint,
$$\frac{1}{T}\sum_{t=1}^T y_{t-1}\varepsilon_t = \psi(1)\,\frac{1}{T}\sum_{t=1}^T w_{t-1}\varepsilon_t + \frac{1}{T}\sum_{t=1}^T \eta_{t-1}\varepsilon_t + (y_0 - \eta_0)\,\frac{1}{T}\sum_{t=1}^T \varepsilon_t. \qquad (*)$$
Consider first the second term on the RHS of (*). Since $\eta_{t-1}$, which is a function of $(\varepsilon_{t-1}, \varepsilon_{t-2}, \dots)$, is independent of $\varepsilon_t$, we have $E(\eta_{t-1}\varepsilon_t) = E(\eta_{t-1})\,E(\varepsilon_t) = 0$. Then by the ergodic theorem this second term vanishes. Regarding the third term of (*), $\frac{1}{T}\sum_{t=1}^T \varepsilon_t \to_p 0$, so the whole third term vanishes. Lastly, consider the first term on the RHS of (*). Since $\{w_t\}$ is a random walk and $\Delta w_t = \varepsilon_t$, Proposition 9.2(b) with $\lambda^2 = \gamma_0 = \sigma^2$ implies
$$\frac{1}{T}\sum_{t=1}^T w_{t-1}\varepsilon_t \xrightarrow{d} \frac{\sigma^2}{2}\left[W(1)^2 - 1\right].$$
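The BN decomposition used here is an exact algebraic identity and can be verified numerically. In the sketch below (illustrative only; the weights $\psi_j = 0.6^j$ and the truncation order are assumptions of the example), $\Delta y_t = \psi(L)\varepsilon_t$ is built from truncated MA weights, $\alpha_j \equiv -(\psi_{j+1} + \psi_{j+2} + \cdots)$ is formed, and $y_t = \psi(1)w_t + \eta_t + (y_0 - \eta_0)$ is confirmed to machine precision.

import numpy as np

rng = np.random.default_rng(3)
T, J = 400, 200                               # J: truncation order of the MA filter
psi = 0.6 ** np.arange(J)                     # hypothetical one-summable weights
alpha = -(psi[::-1].cumsum()[::-1] - psi)     # alpha_j = -(psi_{j+1} + psi_{j+2} + ...)

eps = rng.standard_normal(T + J)              # includes pre-sample draws

def ma(coef, e, s):                           # sum_j coef[j] * e_{s-j}
    return sum(c * e[s - j] for j, c in enumerate(coef))

dy = np.array([ma(psi, eps, s) for s in range(J, T + J)])     # Delta y_t, t = 1..T
eta = np.array([ma(alpha, eps, s) for s in range(J, T + J)])  # eta_t, t = 1..T
eta0 = ma(alpha, eps, J - 1)                  # eta_0
y = dy.cumsum()                               # y_t with y_0 = 0
w = eps[J:T + J].cumsum()                     # w_t = eps_1 + ... + eps_t
bn = psi.sum() * w + eta - eta0               # psi(1) w_t + eta_t + (y_0 - eta_0)
print(np.max(np.abs(y - bn)))                 # of the order of machine precision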

5. Comparing Propositions 9.6 and 9.7, the null is the same (that $\{\Delta y_t\}$ is zero-mean stationary AR(p), $\phi(L)\Delta y_t = \varepsilon_t$, whose MA representation is $\Delta y_t = \psi(L)\varepsilon_t$ with $\psi(L) \equiv \phi(L)^{-1}$), but the augmented autoregression in Proposition 9.7 has an intercept. The proof of Proposition 9.7 (for p = 1) makes the appropriate changes to the argument developed on pp. 587-590. Let $\hat{\mathbf{b}}$ and $\mathbf{b}$ be as defined in the hint. The $\mathbf{A}_T$ and $\mathbf{c}_T$ for the present case are
$$\mathbf{A}_T = \begin{bmatrix} \frac{1}{T^2}\sum_{t=1}^T (y_{t-1}^\mu)^2 & \frac{1}{\sqrt{T}}\frac{1}{T}\sum_{t=1}^T y_{t-1}^\mu\,(\Delta y_{t-1})^\mu \\ \frac{1}{\sqrt{T}}\frac{1}{T}\sum_{t=1}^T (\Delta y_{t-1})^\mu\,y_{t-1}^\mu & \frac{1}{T}\sum_{t=1}^T [(\Delta y_{t-1})^\mu]^2 \end{bmatrix},$$
$$\mathbf{c}_T = \begin{bmatrix} \frac{1}{T}\sum_{t=1}^T y_{t-1}^\mu\,\varepsilon_t \\ \frac{1}{\sqrt{T}}\sum_{t=1}^T (\Delta y_{t-1})^\mu\,\varepsilon_t \end{bmatrix} = \begin{bmatrix} \frac{1}{T}\sum_{t=1}^T y_{t-1}^\mu\,\varepsilon_t \\ \frac{1}{\sqrt{T}}\sum_{t=1}^T \Delta y_{t-1}\,\varepsilon_t^\mu \end{bmatrix},$$
where $\varepsilon_t^\mu$ is the residual from the regression of $\varepsilon_t$ on a constant for $t = 1, 2, \dots, T$.
(1,1) element of $\mathbf{A}_T$: Since $\{y_t\}$ is driftless I(1) under the null, Proposition 9.2(c) can be used to claim that $\frac{1}{T^2}\sum_{t=1}^T (y_{t-1}^\mu)^2 \xrightarrow{d} \lambda^2\int_0^1 [W^\mu(r)]^2\,dr$, where $\lambda^2 = \sigma^2[\psi(1)]^2$ with $\sigma^2 \equiv \mathrm{Var}(\varepsilon_t)$.
(2,2) element of $\mathbf{A}_T$: Since $(\Delta y_{t-1})^\mu = \Delta y_{t-1} - \frac{1}{T}\sum_{t=1}^T \Delta y_{t-1}$, this element can be written as
$$\frac{1}{T}\sum_{t=1}^T [(\Delta y_{t-1})^\mu]^2 = \frac{1}{T}\sum_{t=1}^T (\Delta y_{t-1})^2 - \left(\frac{1}{T}\sum_{t=1}^T \Delta y_{t-1}\right)^2.$$
Since $E(\Delta y_{t-1}) = 0$ and $E[(\Delta y_{t-1})^2] = \gamma_0$ (the variance of $\Delta y_t$), this expression converges in probability to $\gamma_0$.
Off-diagonal elements of $\mathbf{A}_T$: each equals
$$\frac{1}{\sqrt{T}}\,\frac{1}{T}\sum_{t=1}^T y_{t-1}\,(\Delta y_{t-1})^\mu = \frac{1}{\sqrt{T}}\left[\frac{1}{T}\sum_{t=1}^T y_{t-1}\,\Delta y_{t-1}\right] - \left(\frac{1}{\sqrt{T}}\frac{1}{T}\sum_{t=1}^T y_{t-1}\right)\left(\frac{1}{T}\sum_{t=1}^T \Delta y_{t-1}\right).$$
The term in square brackets is (9.4.14), which is shown to converge to a random variable (Review Question 3 of Section 9.4). The next term, $\frac{1}{\sqrt{T}}\frac{1}{T}\sum_{t=1}^T y_{t-1}$, converges to a random variable by (6) assumed in Analytical Exercise 2(d). The last term, $\frac{1}{T}\sum_{t=1}^T \Delta y_{t-1}$, converges to zero in probability. Therefore, the off-diagonal elements vanish.


Taken together, we have shown that $\mathbf{A}_T$ is asymptotically diagonal:
$$\mathbf{A}_T \xrightarrow{d} \begin{bmatrix} \lambda^2\int_0^1 [W^\mu(r)]^2\,dr & 0 \\ 0 & \gamma_0 \end{bmatrix},$$
so
$$(\mathbf{A}_T)^{-1} \xrightarrow{d} \begin{bmatrix} \left(\lambda^2\int_0^1 [W^\mu(r)]^2\,dr\right)^{-1} & 0 \\ 0 & \gamma_0^{-1} \end{bmatrix}.$$
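As an aside (not in the original solution), the asymptotic diagonality of $\mathbf{A}_T$ is visible in a small simulation. Under the assumed data-generating process $\Delta y_t = 0.5\,\Delta y_{t-1} + \varepsilon_t$, the off-diagonal element shrinks roughly like $1/\sqrt{T}$.

import numpy as np

rng = np.random.default_rng(5)
n_rep = 200
for T in (100, 1_000, 10_000):
    eps = rng.standard_normal((n_rep, T + 1))
    dy = np.zeros_like(eps)
    for t in range(1, T + 1):                  # Delta y_t = 0.5 Delta y_{t-1} + eps_t
        dy[:, t] = 0.5 * dy[:, t - 1] + eps[:, t]
    y = dy.cumsum(axis=1)                      # y_t with y_0 = 0
    y_mu = y[:, :-1] - y[:, :-1].mean(axis=1, keepdims=True)     # demeaned y_{t-1}
    dy_mu = dy[:, :-1] - dy[:, :-1].mean(axis=1, keepdims=True)  # demeaned Delta y_{t-1}
    off = (y_mu * dy_mu).sum(axis=1) / T ** 1.5
    print(T, np.abs(off).mean())               # shrinks roughly like 1/sqrt(T)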

Now turn to $\mathbf{c}_T$.
1st element of $\mathbf{c}_T$: Recall that $y_{t-1}^\mu \equiv y_{t-1} - \frac{1}{T}\sum_{t=1}^T y_{t-1}$. Combine this with the BN decomposition $y_{t-1} = \psi(1)w_{t-1} + \eta_{t-1} + (y_0 - \eta_0)$ with $w_{t-1} \equiv \varepsilon_1 + \cdots + \varepsilon_{t-1}$ to obtain
$$\frac{1}{T}\sum_{t=1}^T y_{t-1}^\mu\,\varepsilon_t = \psi(1)\,\frac{1}{T}\sum_{t=1}^T w_{t-1}^\mu\,\varepsilon_t + \frac{1}{T}\sum_{t=1}^T \eta_{t-1}^\mu\,\varepsilon_t,$$
where $w_{t-1}^\mu \equiv w_{t-1} - \frac{1}{T}\sum_{t=1}^T w_{t-1}$ and $\eta_{t-1}^\mu$ is defined similarly. Since $\eta_{t-1}$ is independent of $\varepsilon_t$, the second term on the RHS vanishes. Noting that $\Delta w_t = \varepsilon_t$ and applying Proposition 9.2(d) to the random walk $\{w_t\}$, we obtain
$$\frac{1}{T}\sum_{t=1}^T w_{t-1}^\mu\,\varepsilon_t \xrightarrow{d} \frac{\sigma^2}{2}\left\{[W^\mu(1)]^2 - [W^\mu(0)]^2 - 1\right\}.$$
Therefore, the 1st element of $\mathbf{c}_T$ converges in distribution to
$$c_1 \equiv \sigma^2\psi(1)\,\frac{1}{2}\left\{[W^\mu(1)]^2 - [W^\mu(0)]^2 - 1\right\}.$$
2nd element of $\mathbf{c}_T$: Using the definition $(\Delta y_{t-1})^\mu \equiv \Delta y_{t-1} - \frac{1}{T}\sum_{t=1}^T \Delta y_{t-1}$, it should be easy to show that it converges in distribution to
$$c_2 \sim N(0, \gamma_0\sigma^2).$$

Using the results derived so far, the modification to be made on (9.4.20) and (9.4.21) on p. 590 for the present case, where the augmented autoregression has an intercept, is
$$T(\hat{\rho} - 1) \xrightarrow{d} \frac{\sigma^2\psi(1)\,\frac{1}{2}\left\{[W^\mu(1)]^2 - [W^\mu(0)]^2 - 1\right\}}{\lambda^2\int_0^1 [W^\mu(r)]^2\,dr} \quad\text{or}\quad \psi(1)\,T(\hat{\rho} - 1) \xrightarrow{d} DF_\rho^\mu,$$
$$\sqrt{T}(\hat{\zeta}_1 - \zeta_1) \xrightarrow{d} N\!\left(0, \frac{\sigma^2}{\gamma_0}\right).$$
Repeating exactly the same argument that is given in the subsection entitled "Deriving Test Statistics" on p. 590, we can claim that $\psi(1)$ is consistently estimated by $1/(1 - \hat{\zeta}_1)$. This completes the proof of claim (9.4.34) of Proposition 9.7.
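A simulation sketch of claim (9.4.34) (not part of the original solution; the value $\zeta_1 = 0.5$ and the sample sizes are assumptions of the example): estimate the augmented autoregression with intercept by OLS under the null and form $T(\hat{\rho}-1)/(1-\hat{\zeta}_1)$. Its quantiles should match those of $DF_\rho^\mu$, e.g. as produced by the p = 0 sketch given after Exercise 2.

import numpy as np

rng = np.random.default_rng(4)
T, n_rep, zeta = 500, 5_000, 0.5          # Delta y_t = zeta * Delta y_{t-1} + eps_t

eps = rng.standard_normal((n_rep, T + 2))
dy = np.zeros_like(eps)                   # Delta y started at zero for simplicity
for t in range(1, T + 2):
    dy[:, t] = zeta * dy[:, t - 1] + eps[:, t]
y = dy.cumsum(axis=1)                     # y_t with y_0 = 0

stats = np.empty(n_rep)
for i in range(n_rep):
    X = np.column_stack([np.ones(T), y[i, 1:-1], dy[i, 1:-1]])  # (1, y_{t-1}, Delta y_{t-1})
    _, rho, z1 = np.linalg.lstsq(X, y[i, 2:], rcond=None)[0]
    stats[i] = T * (rho - 1) / (1 - z1)   # the statistic in (9.4.34)

print(np.quantile(stats, [0.05, 0.5, 0.95]))   # ~ DF_rho^mu quantiles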

6. (a) The hint is the answer.


(b) The proof should be straightforward.

7. The one-line proof displayed in the hint is (with $i$ replaced by $k$ to avoid confusion)
$$\sum_{j=0}^\infty |\alpha_j| = \sum_{j=0}^\infty \Big|\sum_{k=j+1}^\infty \psi_k\Big| \le \sum_{j=0}^\infty \sum_{k=j+1}^\infty |\psi_k| = \sum_{k=0}^\infty k\,|\psi_k| < \infty, \qquad (*)$$

where $\{\psi_k\}$ $(k = 0, 1, 2, \dots)$ is one-summable as assumed in (9.2.3a). We now justify each of the equalities and inequalities. For this purpose, we reproduce here the facts from calculus shown on pp. 429-430:

(i) If $\{a_k\}$ is absolutely summable, then $\{a_k\}$ is summable (i.e., $-\infty < \sum_{k=0}^\infty a_k < \infty$) and
$$\Big|\sum_{k=0}^\infty a_k\Big| \le \sum_{k=0}^\infty |a_k|.$$

(ii) Consider a sequence with two subscripts, $\{a_{jk}\}$ $(j, k = 0, 1, 2, \dots)$. Suppose $\sum_{j=0}^\infty |a_{jk}| < \infty$ for each $k$ and let $s_k \equiv \sum_{j=0}^\infty |a_{jk}|$. Suppose $\{s_k\}$ is summable. Then
$$\sum_{j=0}^\infty \Big|\sum_{k=0}^\infty a_{jk}\Big| < \infty \quad\text{and}\quad \sum_{j=0}^\infty \Big(\sum_{k=0}^\infty a_{jk}\Big) = \sum_{k=0}^\infty \Big(\sum_{j=0}^\infty a_{jk}\Big) < \infty.$$

Since $\{\psi_k\}$ is one-summable, it is absolutely summable. For given $j$, let
$$a_k = \begin{cases} \psi_k & \text{if } k \ge j+1, \\ 0 & \text{otherwise.} \end{cases}$$

Then $\{a_k\}$ is absolutely summable because $\{\psi_k\}$ is absolutely summable. So by (i) above, we have
$$\Big|\sum_{k=j+1}^\infty \psi_k\Big| = \Big|\sum_{k=0}^\infty a_k\Big| \le \sum_{k=0}^\infty |a_k| = \sum_{k=j+1}^\infty |\psi_k|.$$
Summing over $j = 0, 1, 2, \dots, n$, we obtain
$$\sum_{j=0}^n \Big|\sum_{k=j+1}^\infty \psi_k\Big| \le \sum_{j=0}^n \sum_{k=j+1}^\infty |\psi_k|.$$

If the limit as $n \to \infty$ of the RHS exists and is finite, then the limit of the LHS exists and is finite (this follows from the fact that if $\{x_n\}$ is non-decreasing in $n$ and if $x_n \le A < \infty$, then the limit of $x_n$ exists and is finite; set $x_n \equiv \sum_{j=0}^n \big|\sum_{k=j+1}^\infty \psi_k\big|$). Thus, provided that $\sum_{j=0}^\infty \sum_{k=j+1}^\infty |\psi_k|$ is well-defined, we have
$$\sum_{j=0}^\infty \Big|\sum_{k=j+1}^\infty \psi_k\Big| \le \sum_{j=0}^\infty \sum_{k=j+1}^\infty |\psi_k|.$$

We now show that $\sum_{j=0}^\infty \sum_{k=j+1}^\infty |\psi_k|$ is well-defined. In (ii), set $a_{jk}$ as
$$a_{jk} = \begin{cases} |\psi_k| & \text{if } k \ge j+1, \\ 0 & \text{otherwise.} \end{cases}$$
Then $\sum_{j=0}^\infty |a_{jk}| = k\,|\psi_k| < \infty$ for each $k$, and $s_k = k\,|\psi_k|$. By one-summability of $\{\psi_k\}$, $\{s_k\}$ is summable. So the conditions in (ii) are satisfied for this choice of $a_{jk}$. We therefore conclude that
$$\sum_{j=0}^\infty \sum_{k=j+1}^\infty |\psi_k| = \sum_{j=0}^\infty \Big(\sum_{k=0}^\infty a_{jk}\Big) = \sum_{k=0}^\infty \Big(\sum_{j=0}^\infty a_{jk}\Big) = \sum_{k=0}^\infty k\,|\psi_k| < \infty.$$
This completes the proof.
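A quick numerical illustration of (*) (not in the original; the geometric weights are a hypothetical example): with $\psi_k = \theta^k$ both sides can be computed by truncation. For $\theta > 0$ the inequality in (*) holds with equality, while alternating weights make it strict.

import numpy as np

K = 2_000                                      # truncation point of the infinite sums
for theta in (0.9, -0.9):
    psi = theta ** np.arange(K)                # hypothetical one-summable sequence
    tail = psi[::-1].cumsum()[::-1] - psi      # tail_j = sum_{k >= j+1} psi_k
    lhs = np.abs(tail).sum()                   # sum_j |alpha_j|
    rhs = (np.arange(K) * np.abs(psi)).sum()   # sum_k k |psi_k|
    print(theta, lhs, rhs, lhs <= rhs)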
