
Impulse Response and Variance Decomposition in the Presence of Time-Varying Volatility: with Applications to Price Discovery

Chor-yiu SIN
Department of Economics, Hong Kong Baptist University, Hong Kong
May 18, 2005
First draft. Comments are welcome.

Abstract: Macroeconomic or financial data are often modelled with cointegration and time-varying volatility. Noticeable examples include stock prices of the same underlying asset. Interestingly, most if not all studies in price discovery do not consider the possible time-varying volatility. As a result, the impulse responses are not inflated or deflated by the time-varying volatility. Intuitively, the impact of an i.i.d. innovation is bigger when the volatility (conditional variance) is larger. On the other hand, the shocks derived from a Choleski decomposition also depend on the time-varying volatility. In this paper, we first generalize the conventional price discovery impulse response function (IRF) to its time-varying counterpart. The time-varying information share (IS) and the general variance decomposition (VD) are defined accordingly. Using the asymptotic theories developed in Li, Ling and Wong (2001) and Sin and Ling (2004), we extend Phillips (1998) to cases that allow for time-varying volatility. Larger sample sizes are required as the time-varying volatility and some other parameters need to be estimated.
Key Words: Error-correction model; Exchange-traded fund; Impulse response; Information share; Price discovery; Shock; Variance decomposition
JEL Codes: C32, C51, G14
Acknowledgments: This research is partially supported by the Hong Kong Research Grant Council competitive earmarked grant HKBU2014/02H. The usual disclaimers apply.

Introduction

Throughout this paper, we consider an m-dimensional autoregressive (AR) process $\{y_t\}$, which is generated by
$$y_t = J(L)\, y_{t-1} + \varepsilon_t, \qquad (1.1)$$
$$\varepsilon_t = V_{t-1}^{1/2}\, \eta_t, \qquad (1.2)$$
where L is the lag operator, $J(L) = \sum_{k=1}^{s} J_k L^{k-1}$, and the $J_k$'s are constant matrices. $\eta_t = (\eta_{1t}, \ldots, \eta_{mt})'$ is a sequence of independently and identically distributed (i.i.d.) random m×1 vectors with zero mean and identity covariance matrix, and $V_{t-1}$ is measurable with respect to $\mathcal{F}_{t-1}$, where $\mathcal{F}_t = \sigma\{\varepsilon_s,\, s = t, t-1, \ldots\}$. As a result, $E(\varepsilon_t \mid \mathcal{F}_{t-1}) = 0$ and $E(\varepsilon_t\varepsilon_t' \mid \mathcal{F}_{t-1}) = V_{t-1}$. The system (1.1) is initialized at $t = -s+1, \ldots, 0$ and we may let these initial values be any (degenerate or non-degenerate) random vectors. It is often convenient to set the initial conditions so that the I(0) component of (1.1) is stationary, and we proceed as if this has been done. The presence of deterministic components in (1.1) does not affect our conclusions in any substantive way, so we proceed as if they are absent, just to keep the derivations as simple as possible. One may see that (1.1) is exactly the partially nonstationary vector autoregressive model employed in Phillips (1998) (see Section 2 below). Instead of assuming an i.i.d. $\{\varepsilon_t\}$, (1.2) allows possible time-varying heteroskedasticity. In this paper, we adopt the constant-correlation multivariate GARCH first suggested by Bollerslev (1990). More precisely, we assume that $V_{t-1} = D_{t-1}\Gamma D_{t-1}$, where $\Gamma$ is a symmetric square matrix of constant correlations and $D_{t-1} = \mathrm{diag}(\sqrt{h_{1t-1}}, \ldots, \sqrt{h_{mt-1}})$ is a diagonal matrix of conditional standard deviations, where:
$$h_{it-1} = a_{i0} + \sum_{j=1}^{q} a_{ij}\,\varepsilon_{it-j}^{2} + \sum_{k=1}^{p} b_{ik}\, h_{it-1-k}, \quad i = 1, \ldots, m. \qquad (1.3)$$
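To make the data-generating process concrete, the following is a minimal simulation sketch of a bivariate special case of (1.1)-(1.3): a cointegrated VAR written in ECM form (with s = 1, so no lagged differences) whose errors follow a constant-correlation GARCH(1,1). All numerical values of α, β, the GARCH coefficients and the correlation are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
T, m = 1000, 2

# Illustrative ECM/GARCH parameters (not from the paper)
alpha = np.array([[-0.10], [0.05]])        # m x r adjustment matrix
beta  = np.array([[1.0], [-1.0]])          # m x r cointegrating vector
a0 = np.array([0.05, 0.08])                # GARCH constants a_{i0}
a1 = np.array([0.10, 0.15])                # ARCH coefficients a_{i1}
b1 = np.array([0.85, 0.75])                # GARCH coefficients b_{i1}
Gamma = np.array([[1.0, 0.6], [0.6, 1.0]]) # constant correlation matrix

y = np.zeros((T + 1, m))
h = a0 / (1.0 - a1 - b1)                   # start at unconditional variances
eps_prev = np.zeros(m)

for t in range(1, T + 1):
    # GARCH(1,1) recursion (1.3) for the conditional variances h_{i,t-1}
    h = a0 + a1 * eps_prev**2 + b1 * h
    D = np.diag(np.sqrt(h))
    V = D @ Gamma @ D                      # V_{t-1} = D_{t-1} Gamma D_{t-1}
    eta = rng.standard_normal(m)           # i.i.d. eta_t, identity covariance
    eps = np.linalg.cholesky(V) @ eta      # eps_t = V_{t-1}^{1/2} eta_t, as in (1.2)
    # ECM form of (1.1) with s = 1: Delta y_t = alpha beta' y_{t-1} + eps_t
    y[t] = y[t - 1] + (alpha @ beta.T @ y[t - 1]) + eps
    eps_prev = eps

print(y[-5:])
```

With these values A = I_m + αβ' has one unit eigenvalue and one stable eigenvalue, so the two simulated series share a single stochastic trend, mimicking two prices of the same underlying asset.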

Despite the fact that many papers in the literature of price discovery, such as Barclay and Hendershott (2003), Covrig and Melvin (2002), Engle and Patton (2004), Hasbrouck (2003), Huang (2002), and Yan and Zivot (2004), consider a system of this kind, few if any of them estimate model (1.1) with GARCH taken into account. Following the lines in Ling and Li (1997) and Tse (2002), Sin (2004) derives the asymptotic theory of testing for multivariate ARCH when the conditional mean is an ECM. In addition and unsurprisingly, GARCH is found in many time series data, especially those related to financial markets. In the next section, we first generalize the conventional price discovery impulse response function (IRF) to its time-varying counterpart. The time-varying information share (IS) and time-varying component share (CS) are defined accordingly. A systematic and practical procedure for price discovery dynamics, following the lines in Gonzalo and Ng (2001) and Yan and Zivot (2004), can also be found there. In Section 3, we extend Phillips (1998)'s asymptotic theories to cases that allow for time-varying volatility, following the lines in Li, Ling and Wong (2001) and Sin and Ling (2004). Conclusions can be found in the last section. The proofs are relegated to either Appendix A or Appendix C. Throughout, for any square matrix $\Sigma$, $\lambda_{\min}(\Sigma)$ stands for the minimum eigenvalue of $\Sigma$. T stands for the number of observations. $\rightarrow_L$ denotes convergence in distribution, $O_p(1)$ denotes a series of random numbers that are bounded in probability, and $o_p(1)$ denotes a series of random numbers converging to zero in probability.

Price Discovery Dynamics: Definitions and a Practical Procedure

Following Phillips (1998), we first re-write (1.1) in levels and differences format as:
$$y_t = A y_{t-1} + \Phi(L)\,\Delta y_{t-1} + \varepsilon_t, \quad A = J(1), \quad \Phi(L) = \sum_{k=1}^{s-1}\Phi_k L^{k-1}, \quad \Phi_k = -\sum_{l=k+1}^{s} J_l, \qquad (2.1)$$
where $A = I_m + \alpha\beta'$, and $\alpha$ and $\beta$ are m×r matrices of full column rank r, $r \leq m$. Thus the moving average (MA) representation of the system is:
$$y_t = \sum_{k=0}^{t-1} M C^k M' \varepsilon_{t-k} = \sum_{k=0}^{t-1} \Theta_k \varepsilon_{t-k}, \qquad (2.2)$$
where $\Theta_k := M C^k M'$, $M = [I_m, 0, \ldots, 0]$, and
$$C = \begin{bmatrix} J_1 & J_2 & \cdots & J_{s-1} & J_s \\ I_m & 0_{m\times m} & \cdots & 0_{m\times m} & 0_{m\times m} \\ 0_{m\times m} & I_m & \cdots & 0_{m\times m} & 0_{m\times m} \\ \vdots & & \ddots & & \vdots \\ 0_{m\times m} & 0_{m\times m} & \cdots & I_m & 0_{m\times m} \end{bmatrix}.$$

As in Gonzalo and Ng (2001) and Yan and Zivot (2004), we first consider the set of permanent and transitory shocks $G\varepsilon_{t-k}$. In contrast to the existing literature and in view of the time-varying volatility (see (1.2)-(1.3) in Section 1), we write $G\varepsilon_{t-k} = G V_{t-k-1}^{1/2}\eta_{t-k}$. Note that $E[G V_{t-k-1}^{1/2}\eta_{t-k}\eta_{t-k}'(V_{t-k-1}^{1/2})'G' \mid \mathcal{F}_{t-k-1}] = G V_{t-k-1} G'$. Therefore, in order to orthogonalize the $G\varepsilon_{t-k}$ and interpret the coefficients of the resulting MA representation in (2.2), for each t−k, we let $\xi_{t-k} = P_{t-k-1}^{-1} G \varepsilon_{t-k}$, where $P_{t-k-1}$ is the lower triangular Choleski decomposition of $G V_{t-k-1} G'$ such that:
$$G V_{t-k-1} G' = P_{t-k-1} P_{t-k-1}', \qquad (2.3)$$
and $E[\xi_{t-k}\xi_{t-k}' \mid \mathcal{F}_{t-k-1}] = I_m$. Therefore the corresponding MA representation of $y_t$ is:
$$y_t = \sum_{k=0}^{t-1} \Theta_k G^{-1} P_{t-k-1}\, \xi_{t-k}. \qquad (2.4)$$
Thus the system's impulse responses are given by the elements of the sequence of matrices $\Theta_k G^{-1} P_{t-k-1}$. It is clear that the impact of a one-unit increase in each element of $\xi_{t-k}$ depends on $P_{t-k-1}$. Denote the current $P_t$ as $P_n$ (n may be interpreted as "now"). One may be interested in the impacts of $\xi_{n+1}$ on $y_{n+1}, y_{n+2}, y_{n+3}, \ldots$, which are given by $\Theta_0 G^{-1} P_n,\ \Theta_1 G^{-1} P_n,\ \Theta_2 G^{-1} P_n, \ldots$ respectively. Decompose $G = [G_1', G_2']'$, where $G_1\varepsilon_t$ is the d×1 permanent shock and $G_2\varepsilon_t$ is the r×1 transitory shock. When d = 1, which is the case in the literature of price discovery where there is one efficient price for stock prices of the same underlying asset, we can define shock j's information share (IS), where j = 1, \ldots, m. See, for instance, Hasbrouck (1995) and Yan and Zivot (2004). Consider t = n+1. As $\varepsilon_{n+1} = G^{-1}P_n\xi_{n+1}$, the permanent shock $G_1\varepsilon_{n+1} = G_1 G^{-1}P_n\xi_{n+1}$, where $E[G_1 G^{-1}P_n\xi_{n+1}\xi_{n+1}'P_n'G^{-1\prime}G_1' \mid \mathcal{F}_n] = G_1 V_n G_1'$ but $E[\xi_{n+1}\xi_{n+1}' \mid \mathcal{F}_n] = I_m$. Therefore, for n = 1, 2, \ldots, we define the following time-varying IS of shock j:
$$IS_{jn} = \frac{([G_1 G^{-1} P_n]_j)^2}{G_1 V_n G_1'}. \qquad (2.5)$$
In general, for any m×1 vector $\psi$, we can consider the following variance decomposition (VD) w.r.t. shock j, for n = 1, 2, \ldots:
$$VD_{jn} = \frac{([\psi' G^{-1} P_n]_j)^2}{\psi' V_n \psi}. \qquad (2.6)$$
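A minimal sketch of how (2.5) and (2.6) could be evaluated at a single date n, once G, the conditional covariance V_n and the weight vector ψ are available, is given below; it assumes d = 1 with the first row of G as the permanent shock, and the numerical inputs are illustrative placeholders rather than estimates from any data set.

```python
import numpy as np

def time_varying_shares(G, V_n, psi):
    """IS_jn of (2.5) (d = 1, first row of G = permanent shock) and the
    general VD_jn of (2.6) for a weight vector psi, at a single date n."""
    P_n = np.linalg.cholesky(G @ V_n @ G.T)            # lower triangular, P_n P_n' = G V_n G'
    G_inv = np.linalg.inv(G)
    G1 = G[0, :]                                       # permanent row of G
    IS = (G1 @ G_inv @ P_n) ** 2 / (G1 @ V_n @ G1)     # ([G_1 G^{-1} P_n]_j)^2 / (G_1 V_n G_1')
    VD = (psi @ G_inv @ P_n) ** 2 / (psi @ V_n @ psi)  # ([psi' G^{-1} P_n]_j)^2 / (psi' V_n psi)
    return IS, VD

# Illustrative placeholder inputs
G = np.array([[0.5, 0.5],      # permanent row (e.g. built from alpha_perp)
              [1.0, -1.0]])    # transitory row (e.g. beta')
V_n = np.array([[1.0, 0.3], [0.3, 2.0]])
psi = np.array([1.0, 0.0])
IS, VD = time_varying_shares(G, V_n, psi)
print(IS, IS.sum())            # the m information shares; they sum to one
print(VD, VD.sum())            # the general VD shares also sum to one
```

Because $P_n P_n' = G V_n G'$, the m shares in either decomposition sum to one by construction, which is a convenient numerical check.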

In the literature, there are at least two sets of permanent and transitory shocks, one proposed by Warne (1993) and one proposed by Gonzalo and Granger (1995); in both, the permanent block is built from $\alpha_\perp$, an m×d matrix of full column rank such that $\alpha_\perp'\alpha = 0_{d\times r}$. As far as the permanent shock is concerned, both sets coincide with the one in Stock and Watson (1988), who consider the common (random walk) factor of a cointegrated system. Though the latter G is not necessarily invertible (and our analysis then breaks down; see Subsection 3.2.1 of Levtchenkova, Pagan and Robertson, 1999), it has the structural interpretation that the transitory shock $G_2\varepsilon_t$ does not have a long-run impact on $y_t$. See Definition 1 of Gonzalo and Granger (1995). Further, this property is preserved under the transformation $G^{-1}P_n$, as one can show by arguments similar to those in Sub-section 2.3 of Gonzalo and Ng (2001). See also Sub-section 3.1 of Yan and Zivot (2004). Both G can be obtained from the following error-correction model (ECM), which

is an alternative representation of (2.1):


$$\Delta y_t = \alpha\beta' y_{t-1} + \sum_{l=1}^{s-1}\Phi_l\,\Delta y_{t-l} + \varepsilon_t. \qquad (2.7)$$
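In practice, one rough two-step route to the quantities needed in the first step of the procedure below is reduced-rank estimation of (2.7) followed by equation-by-equation GARCH(1,1) fits combined with a constant correlation. The sketch below uses the statsmodels VECM class and the arch package, and ends by forming $\hat\alpha_\perp$ and one admissible permanent-transitory rotation of the Gonzalo and Ng (2001) type, $\hat G = [\hat\alpha_\perp, \hat\beta]'$. This two-step route is only an approximation for illustration, not the one-step joint estimator analysed in this paper, and the simulated input series is a placeholder.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM
from arch import arch_model

# Placeholder data: T x m matrix of prices of the same asset on two venues
rng = np.random.default_rng(1)
y = pd.DataFrame(np.cumsum(rng.normal(size=(1000, 2)), axis=0),
                 columns=["venue_A", "venue_B"])

# (i) Reduced-rank estimation of the ECM (2.7): rank r = 1, one lagged difference
ecm = VECM(y, k_ar_diff=1, coint_rank=1).fit()
alpha, beta, resid = ecm.alpha, ecm.beta, ecm.resid     # m x r, m x r, residuals eps_t

# (ii) Univariate GARCH(1,1) per residual series, then a constant correlation
h = np.column_stack([
    (arch_model(100 * resid[:, i], mean="Zero", vol="GARCH", p=1, q=1)
     .fit(disp="off").conditional_volatility / 100) ** 2
    for i in range(resid.shape[1])
])                                                       # conditional variances h_{i,t-1}
Gamma = np.corrcoef((resid / np.sqrt(h)).T)              # constant-correlation estimate
V = np.array([np.diag(np.sqrt(ht)) @ Gamma @ np.diag(np.sqrt(ht)) for ht in h])

# (iii) alpha_perp via SVD and one admissible permanent-transitory rotation G
U, _, _ = np.linalg.svd(alpha, full_matrices=True)
alpha_perp = U[:, alpha.shape[1]:]                       # columns orthogonal to col(alpha)
G = np.vstack([alpha_perp.T, beta.T])                    # G = [alpha_perp, beta]'
print(np.allclose(alpha_perp.T @ alpha, 0), np.linalg.matrix_rank(G) == G.shape[0])
```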

A practical procedure for estimating the impulse responses can be summarized as follows (a code sketch implementing steps 2-5 appears after the list):

1. Given $\{y_t : t = 1, \ldots, T\}$, estimate the parameters of the ECM in (2.7) and the $\{V_{t-1} : t = 1, \ldots, T\}$ in (1.2).
2. Refer to (2.2). Construct $\hat\Theta_k := M\hat C^k M'$, $k = 0, 1, \ldots$.
3. Construct $\hat G$. The set of permanent and transitory shocks is constructed as $\{\hat G\hat V_{t-1}^{1/2}\hat\eta_t : t = 1, \ldots, T\}$.
4. For n = 1, 2, \ldots, obtain a lower triangular matrix $\hat P_n$ such that $\hat P_n\hat P_n' = \hat G\hat V_n\hat G'$.
5. For n = 1, 2, \ldots, the impacts of $\hat\xi_{n+1} = \hat P_n^{-1}\hat G\hat\varepsilon_{n+1}$ on $y_{n+1}, y_{n+2}, y_{n+3}, \ldots$ are estimated respectively by $\hat\Theta_0\hat G^{-1}\hat P_n,\ \hat\Theta_1\hat G^{-1}\hat P_n,\ \hat\Theta_2\hat G^{-1}\hat P_n, \ldots$. See Theorem 3.4 for the asymptotic properties.
6. For n = 1, 2, \ldots, the IS of shock j is estimated by $\widehat{IS}_{jn} = ([\hat G_1\hat G^{-1}\hat P_n]_j)^2 / (\hat G_1\hat V_n\hat G_1')$, j = 1, \ldots, m. See Corollary 3.8 for the asymptotic properties.
7. For n = 1, 2, \ldots, the general VD of shock j is estimated by $\widehat{VD}_{jn} = ([\hat\psi'\hat G^{-1}\hat P_n]_j)^2 / (\hat\psi'\hat V_n\hat\psi)$, j = 1, \ldots, m. See Theorem 3.7 for the asymptotic properties.
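As referenced above, the following is a minimal sketch of steps 2-5, assuming the level-VAR matrices $J_1, \ldots, J_s$ implied by the ECM estimates and the conditional covariance $\hat V_n$ are already available; the numerical inputs are illustrative placeholders. It builds the companion matrix C of (2.2), the matrices $\Theta_k = M C^k M'$, the Choleski factor $P_n$, and the time-varying impulse responses $\Theta_k G^{-1} P_n$.

```python
import numpy as np

def companion(J_list):
    """Companion matrix C of (2.2) from the level-VAR matrices J_1, ..., J_s."""
    m, s = J_list[0].shape[0], len(J_list)
    C = np.zeros((m * s, m * s))
    C[:m, :] = np.hstack(J_list)
    C[m:, :-m] = np.eye(m * (s - 1))
    return C

def tv_impulse_responses(J_list, G, V_n, horizons=10):
    """Steps 2-5: Theta_k = M C^k M' and the time-varying IRFs Theta_k G^{-1} P_n."""
    m, s = J_list[0].shape[0], len(J_list)
    C = companion(J_list)
    M = np.hstack([np.eye(m), np.zeros((m, m * (s - 1)))])
    P_n = np.linalg.cholesky(G @ V_n @ G.T)              # step 4
    GinvP = np.linalg.inv(G) @ P_n
    Ck, irfs = np.eye(m * s), []
    for _ in range(horizons):
        irfs.append(M @ Ck @ M.T @ GinvP)                # Theta_k G^{-1} P_n, step 5
        Ck = Ck @ C
    return irfs

# Illustrative inputs: ECM with s = 2, so J_1 = I + alpha beta' + Phi_1 and J_2 = -Phi_1
alpha, beta = np.array([[-0.1], [0.05]]), np.array([[1.0], [-1.0]])
Phi1 = np.array([[0.2, 0.0], [0.0, 0.1]])
J1, J2 = np.eye(2) + alpha @ beta.T + Phi1, -Phi1
G = np.vstack([np.array([[0.05, 0.10]]) / 0.15, beta.T]) # [alpha_perp, beta]' (illustrative)
V_n = np.array([[1.0, 0.3], [0.3, 2.0]])
for k, irf in enumerate(tv_impulse_responses([J1, J2], G, V_n, horizons=3)):
    print(f"Theta_{k} G^-1 P_n =\n{irf}")
```

The mapping from ECM estimates to the $J_k$'s used here (J_1 = I_m + αβ' + Φ_1, J_k = Φ_k − Φ_{k−1}, J_s = −Φ_{s−1}) follows directly from (2.1) and (2.7).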

The above procedure differs from that in Gonzalo and Ng (2001) or Yan and Zivot (2004) in two respects. First, instead of looking at the impacts on $\Delta y_{n+1}, \Delta y_{n+2}, \Delta y_{n+3}, \ldots$, we look at those on $y_{n+1}, y_{n+2}, y_{n+3}, \ldots$. Secondly and more importantly, due to the time-varying $P_n$, the impulse responses are also time-varying.

In the balance of this paper, we focus on the volatility model specified in (1.3). We first denote the parameter in the ECM (2.7) as $\lambda = [\lambda_1', \lambda_2']'$, where $\lambda_1 = \mathrm{vec}[\beta']$ (the nonstationary parameter) and $\lambda_2 = \mathrm{vec}[\alpha, \Phi_1, \ldots, \Phi_{s-1}]$ (the stationary parameter). Then we denote the parameter in the GARCH equation (1.3) as $\delta = [\delta_1', \delta_2']'$, where $\delta_1 = [a_0', a_1', \ldots, a_q', b_1', \ldots, b_p']'$, $a_j = [a_{1j}, \ldots, a_{mj}]'$, $b_l = [b_{1l}, \ldots, b_{ml}]'$, $j = 0, 1, \ldots, q$, $l = 1, \ldots, p$, and $\delta_2 = \tilde\nu(\Gamma)$, which is obtained from $\mathrm{vec}(\Gamma)$ by eliminating the supradiagonal and the diagonal elements of $\Gamma$ (see Magnus, 1988, p.27). Conditional on the initial values $y_t = 0$ for $t \leq 0$, the log-likelihood function, as a function of the true parameter, can be written as:
$$l(\lambda, \delta) = \sum_{t=1}^{T} l_t \quad \text{and} \quad l_t = -\frac{1}{2}\varepsilon_t' V_{t-1}^{-1}\varepsilon_t - \frac{1}{2}\ln|V_{t-1}|, \qquad (2.8)$$
where $V_{t-1} = D_{t-1}\Gamma D_{t-1}$ and $D_{t-1} = \mathrm{diag}(\sqrt{h_{1t-1}}, \ldots, \sqrt{h_{mt-1}})$. Further denote $h_{t-1} = (h_{1t-1}, \ldots, h_{mt-1})'$, $H_{t-1} = (h_{1t-1}^{-1}, \ldots, h_{mt-1}^{-1})'$, and $X_{t-1} = [(\beta'y_{t-1})', (\Delta y_{t-1})', \ldots, (\Delta y_{t-s+1})']'$. By Sections 3 and 4 of Sin and Ling (2004), the score function, as a function of the true parameter, can be expressed as:
$$\frac{\partial l_t}{\partial\lambda_1} = \frac{1}{2}\frac{\partial h_{t-1}'}{\partial\lambda_1}\Big[H_{t-1}\odot\big(w(\varepsilon_t\varepsilon_t' V_{t-1}^{-1}) - \iota\big)\Big] + (y_{t-1}\otimes\alpha')V_{t-1}^{-1}\varepsilon_t,$$
$$\frac{\partial l_t}{\partial\lambda_2} = \frac{1}{2}\frac{\partial h_{t-1}'}{\partial\lambda_2}\Big[H_{t-1}\odot\big(w(\varepsilon_t\varepsilon_t' V_{t-1}^{-1}) - \iota\big)\Big] + (X_{t-1}\otimes I_m)V_{t-1}^{-1}\varepsilon_t,$$
$$\frac{\partial l_t}{\partial\delta} = \begin{bmatrix} \dfrac{1}{2}\dfrac{\partial h_{t-1}'}{\partial\delta_1}\Big[H_{t-1}\odot\big(w(\varepsilon_t\varepsilon_t' V_{t-1}^{-1}) - \iota\big)\Big] \\[8pt] \dfrac{1}{2}\,\tilde\nu\big(\Gamma^{-1}D_{t-1}^{-1}\varepsilon_t\varepsilon_t' D_{t-1}^{-1}\Gamma^{-1} - \Gamma^{-1}\big) \end{bmatrix},$$
where $\odot$ denotes the element-by-element (Hadamard) product.
Given some initial values $\tilde\lambda_1, \tilde\lambda_2, \tilde\delta$, we perform a one-step iteration:
$$\hat\lambda_1 = \tilde\lambda_1 + \Big(\sum_{t=1}^{T} R_{1t}\big|_{\tilde\lambda_1,\tilde\lambda_2,\tilde\delta}\Big)^{-1}\Big(\sum_{t=1}^{T}\frac{\partial l_t}{\partial\lambda_1}\Big|_{\tilde\lambda_1,\tilde\lambda_2,\tilde\delta}\Big), \qquad (2.9)$$
$$\hat\lambda_2 = \tilde\lambda_2 + \Big(\sum_{t=1}^{T} R_{2t}\big|_{\tilde\lambda_1,\tilde\lambda_2,\tilde\delta}\Big)^{-1}\Big(\sum_{t=1}^{T}\frac{\partial l_t}{\partial\lambda_2}\Big|_{\tilde\lambda_1,\tilde\lambda_2,\tilde\delta}\Big), \qquad (2.10)$$
$$\hat\delta = \tilde\delta + \Big(\sum_{t=1}^{T} S_{t}\big|_{\tilde\lambda_1,\tilde\lambda_2,\tilde\delta}\Big)^{-1}\Big(\sum_{t=1}^{T}\frac{\partial l_t}{\partial\delta}\Big|_{\tilde\lambda_1,\tilde\lambda_2,\tilde\delta}\Big), \qquad (2.11)$$

where, in terms of the true parameter,


1 R1t = (yt1 yt1 Vt1 ) ( 1 R2t = (t1 t1 Vt1 ) ( 2 1 1 ht1 )Dt1 ( 2 + Im )Dt1 ( 1 ht1 )/4,

2 1 2 ht1 )Dt1 (

2 + Im )Dt1 (

2 ht1 )/4;

and, in terms of the true parameter, St = (Sijt )22 with S11t = ( S12t = (
2 1 1 ht1 )Dt1 ( 2 1 ht1 )Dt1 m (Im 2 + Im )Dt1 ( 1 ht1 )/4,

1 )Nm Lm ,

S22t = 2Lm Nm [1 1 ]Nm Lm , 7

where $\iota = (1, 1, \ldots, 1)'$ and $w(\Sigma)$ is a vector containing the diagonal elements of the square matrix $\Sigma$; the remaining constant matrices, including $N_m$ and $L_m$, are defined in Magnus (1988, pp. 109, 48 and 96). In practice, we may repeat the iterative procedure in (2.9)-(2.11) in order to get an estimator closer to the quasi-maximum likelihood estimator (QMLE) of the log-likelihood function (LF) specified in (2.8), though the asymptotic distribution is not altered. In the next section, using the asymptotic theories developed in Li, Ling and Wong (2001), Sin and Ling (2004) and Phillips (1998), we derive the asymptotic properties of the IRF, the general VD, and the IS.
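The updates (2.9)-(2.11) all share the generic form "initial value plus (summed weighting matrix)^{-1} times summed score". Below is a schematic sketch of one such update for a generic parameter block, with user-supplied placeholder functions score_t and weight_t standing in for the paper's $\partial l_t/\partial\lambda_1$ (or $\partial l_t/\partial\lambda_2$, $\partial l_t/\partial\delta$) and $R_{1t}$ (or $R_{2t}$, $S_t$); the exact matrices and sign convention should be read off (2.9)-(2.11) themselves.

```python
import numpy as np

def one_step_update(theta0, score_t, weight_t, data):
    """Generic one-step (scoring) update: theta1 = theta0 + (sum_t W_t)^{-1} sum_t s_t,
    with all quantities evaluated at the initial estimate theta0.
    score_t(theta, x_t) and weight_t(theta, x_t) are user-supplied placeholders."""
    S = sum(score_t(theta0, x) for x in data)     # summed score vector
    W = sum(weight_t(theta0, x) for x in data)    # summed weighting matrix
    return theta0 + np.linalg.solve(W, S)

# Toy illustration: one scoring step for the mean of i.i.d. N(mu, 1) data
data = np.random.default_rng(0).normal(loc=1.5, size=200)
score_t = lambda mu, x: np.array([x - mu[0]])     # d log-likelihood / d mu
weight_t = lambda mu, x: np.array([[1.0]])        # per-observation Fisher information
print(one_step_update(np.array([0.0]), score_t, weight_t, data))   # approx. 1.5
```

Starting the update at a consistent preliminary estimate is what makes a single iteration asymptotically equivalent to the fully iterated QMLE, which is the point made in the paragraph above.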

Price Discovery Dynamics: Asymptotic Properties

In order to study the (asymptotic) confidence intervals, and some other (asymptotic) properties of the IRF, the IS and the general VD, we first note that for k = 1, 2, \ldots:
$$\hat\Theta_k\hat G^{-1}\hat P_n - \Theta_k G^{-1}P_n = (\hat\Theta_k - \Theta_k)\hat G^{-1}\hat P_n + \Theta_k(\hat G^{-1} - G^{-1})\hat P_n + \Theta_k G^{-1}(\hat P_n - P_n). \qquad (3.1)$$
In Lemma 3.1, we first derive the asymptotic approximation of $\sqrt{T}(\hat\Theta_k - \Theta_k)$. Lemma 3.1 is followed by Lemma 3.2(a), which derives the asymptotic approximation of $\sqrt{T}(\hat G - G)$ for one of the two choices of G. For the other choice, since $\sqrt{T}(\hat\beta - \beta) = O_p(T^{-1/2})$, we only need to derive the asymptotic approximation of the remaining block of $\sqrt{T}(\hat G - G)$; the result can be found in Lemma 3.2(b). Lemmas 3.2(a)-3.2(b) are followed by Lemma 3.3, which derives the asymptotic approximation of $\sqrt{T}(\hat h_n - h_n)$. All the approximations are in terms of $\sqrt{T}(\hat\lambda_1 - \lambda_1)$, $\sqrt{T}(\hat\lambda_2 - \lambda_2)$ and $\sqrt{T}(\hat\delta - \delta)$. The proofs of Lemmas 3.1-3.3 can be found in Appendix A.

Equipped with these three lemmas (as well as those in Appendix B), we are able to derive an asymptotic approximation of $\sqrt{T}(\hat\Theta_k\hat G^{-1}\hat P_n - \Theta_k G^{-1}P_n)$ in Theorem 3.4. Corollary 3.5 contains a special case in which the volatility is constant across time. This is the case thoroughly discussed in Gonzalo and Ng (2001) and Yan and Zivot (2004), though, as far as we know, the asymptotic approximation derived in this corollary is new. In Corollary 3.6, we show that the asymptotic distribution derived in Phillips (1998) turns out to be a special case of Theorem 3.4, with the volatility being constant on the one hand, and no permanent-transitory decomposition being considered (that is, $G = I_m$) on the other hand. Similarly, we derive an asymptotic approximation of the general $\sqrt{T}(\widehat{VD}_{jn} - VD_{jn})$ in Theorem 3.7. Corollary 3.8 documents the special case $\sqrt{T}(\widehat{IS}_{jn} - IS_{jn})$, in which $\psi' = G_1$; while Corollary 3.9 documents the special case in which $\psi$ is pre-determined. The proof of Theorem 3.7 can be found in Appendix C.

We first state the following assumptions:

Assumption 3.1. The determinantal equation $|I_m - J(L)L| = 0$ has roots on or outside the unit circle.

Assumption 3.2. $|\alpha_\perp'(\Phi(1) - I_m)\beta_\perp| \neq 0$, where $\alpha_\perp$ is defined around (2.6) while, in a similar token, $\beta_\perp$ is an m×d matrix of full column rank that is orthogonal to $\beta$; that is, $\beta_\perp'\beta = 0_{d\times r}$.

Assumption 3.3. For $i = 1, \ldots, m$, $a_{i0} > 0$, $a_{i1}, \ldots, a_{iq}, b_{i1}, \ldots, b_{ip} \geq 0$, and $\sum_{j=1}^{q} a_{ij} + \sum_{k=1}^{p} b_{ik} < 1$.

Assumption 3.4. For $i = 1, \ldots, m$, all eigenvalues of $E(A_{it}\otimes A_{it})$ lie inside the unit circle, where $\otimes$ denotes the Kronecker product and
$$A_{it} = \begin{bmatrix} a_{i1}\eta_{it}^2 \;\cdots\; a_{iq}\eta_{it}^2 & b_{i1}\eta_{it}^2 \;\cdots\; b_{ip}\eta_{it}^2 \\ [\,I_{q-1}\;\; 0_{(q-1)\times 1}\,] & 0_{(q-1)\times p} \\ a_{i1} \;\cdots\; a_{iq} & b_{i1} \;\cdots\; b_{ip} \\ 0_{(p-1)\times q} & [\,I_{p-1}\;\; 0_{(p-1)\times 1}\,] \end{bmatrix}.$$

Assumption 3.5. $\eta_t$ is symmetrically distributed.

Assumption 3.6. For $n = 1, 2, \ldots$, $|G V_n G'| \neq 0$.

Assumptions 3.1 and 3.2 are, respectively, Assumptions 2.1(b) and 2.1(d) in Phillips (1998). Instead of assuming an i.i.d. $\{\varepsilon_t\}$, here we adopt Assumptions 3.3-3.4, which are the necessary and sufficient conditions for $E(\mathrm{vec}[\varepsilon_t\varepsilon_t']\,\mathrm{vec}[\varepsilon_t\varepsilon_t']') < \infty$. Following the lines in Johansen (1988), Li, Ling and Wong (2001) and Sin and Ling (2004) show that under Assumptions 3.1-3.4, both $\beta'y_t$ and $\Delta y_t$ are I(0). Assumption 3.5 allows the parameters in (1.1) (the mean equation parameters) and those in (1.2)-(1.3) (the variance equation parameters) to be estimated separately without altering the asymptotic distributions; relaxing it would result in more involved asymptotic distributions. In spite of the asymmetric $\eta_t$ found in many financial time series, we retain this assumption for the sake of simplicity. Finally, on the one hand, Assumption 3.6 is sufficient for defining $G^{-1}$, which, as we argue in Section 2, plays an important role in defining the impulse responses. On the other hand, it plays a role in deriving the asymptotic properties of the impulse responses. See Lemma A.3. An illustrative numerical check of Assumptions 3.3-3.4 is sketched below.
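As an illustrative check (not part of the paper's procedure), Assumption 3.3 can be verified directly from fitted GARCH(1,1) coefficients, and the spectral-radius condition in Assumption 3.4 can be approximated by Monte Carlo once a distribution for $\eta_{it}$ is assumed; the sketch below takes $\eta_{it}$ to be standard normal and specializes $A_{it}$ to the p = q = 1 case.

```python
import numpy as np

def check_assumptions(a0, a1, b1, n_mc=50_000, seed=0):
    """Check Assumption 3.3 and Monte-Carlo the spectral radius of E(A_it kron A_it)
    (Assumption 3.4) for a GARCH(1,1) equation with standard normal eta_it."""
    ok_33 = (a0 > 0) and (a1 >= 0) and (b1 >= 0) and (a1 + b1 < 1)
    eta2 = np.random.default_rng(seed).standard_normal(n_mc) ** 2
    EAkA = np.zeros((4, 4))
    for e2 in eta2:                       # average A kron A over draws of eta^2
        A = np.array([[a1 * e2, b1 * e2],
                      [a1,      b1     ]])
        EAkA += np.kron(A, A)
    EAkA /= n_mc
    rho = np.max(np.abs(np.linalg.eigvals(EAkA)))
    return ok_33, rho < 1, rho

print(check_assumptions(0.05, 0.10, 0.85))   # roughly (True, True, 0.92) up to MC error
```

For the Gaussian GARCH(1,1) case the spectral radius reduces to $E[(a_{i1}\eta^2 + b_{i1})^2] = 3a_{i1}^2 + 2a_{i1}b_{i1} + b_{i1}^2$, so the Monte-Carlo figure can be cross-checked analytically.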

Lemma 3.1. Suppose Assumptions 3.1-3.5 hold. For xed k = 1, . . ., let k be the reduced rank 1SE or QMLE of the impulse response matrix k , which is in turn dened in (2.2). =
j=0

T vec(k k ) (M C j K 1 K k1j ) T (2 2 ) + Op (T 1/2 ), 10 (3.2)

k1

where M and C are dened around (2.2), and


K 1 =

Im Im 0mxm

0mxm Im

0mxm 0mxm Im Im

, K

0m(s1)xr

0mxm(s1) Im(s1)

It should be clear that Lemma 3.1 is essentially Theorem 2.9(i) of Phillips (1998), with a different way of presentation.

Lemma 3.2. Suppose Assumptions 3.1-3.5 hold. Let be the reduced rank 1SE or QMLE of ; and = c c( c)1 c , c = (Ir , 0rxd ) and c = (0dxr , Id ) . (a) T vec = G1 T vec( ) + Op (T 1/2 ); T vec[ ] (b) = G2 T vec( ) + Op (T 1/2 );

(3.3)

(3.4)

where G1 = Kmm
1 [ (1 , 0rxd )] Kmr Imr

1 G2 = [(1 , 0rxd ) ] ,

and Kmm and Kmr are two commutation matrices with dierent dimensions. See Chapter 3 of Magnus (1988). = (1 , 2 ) , where by construction, 1 is an invertible rxr- matrix. 2

It should be emphasized that, as $\beta$ is of rank r, we may re-arrange the elements in $y_t$ such that $\beta_1$ is invertible.

Lemma 3.3. Suppose Assumptions 3.1-3.5 hold. Let $\hat\lambda_1$, $\hat\lambda_2$ and $\hat\delta_1$ be the reduced rank 1SE or QMLE of $\lambda_1$, $\lambda_2$ and $\delta_1$ respectively. Suppose $\varepsilon_t = 0$ for $t \leq 0$.

For n = 1, . . . , T , = T ( hn hn ) 1 hn T (1 1 )
q

1 diag(aj )m ( n+1lj ynlj )}T (1 1 ) T j=1 l=0 q diag(l ){ diag(aj )m (n+1lj nlj Im )} T (2 2 ) 2 2 diag(l ){
l=0 j=1 1/2

+Op (T

),

(3.5)

where l = (1l , . . . , ml ) and for each i = 1, . . . , m, il is implicitly dened such that:


p

(1
l=1

bil Ll )1 =
l=0

il Ll . 2

And

1 hn

is dened around (A.8) in Appendix A.

Refer back to (3.1). By Lemmas 3.1-3.3 and Lemmas B.1-B.3, = T (k G1 Pn k G1 Pn )

1 T (k k )G1 Pn + k G1 T (G G)(Vn G Pn G1 Pn ) 1/2 1/2 1 1/2 +k T (Vn Vn )Vn G Pn + Op (T 1/2 ).

(3.6)

It should be emphasized that if $\beta$ (and thus $\alpha$) is identified only up to an r×r matrix, as in the case of Johansen (1988, 1991) and Gonzalo and Granger (1995), it is not straightforward to interpret the lower triangular Choleski decomposition of $GV_nG'$, whichever of the two choices of G is used. Full identification (see, for instance, Ahn and Reinsel, 1990) of $\alpha$ and $\beta$ is assumed throughout the balance of the paper. In this regard, without loss of generality, we follow Ahn and Reinsel (1990) and let $\beta' = [I_r, \beta_0]$, where $\beta_0$ is an r×d matrix. With this normalization, as argued in Section 4 of Sin and Ling (2004), $\beta_0$ can be estimated by $\hat\beta_1^{-1}\hat\beta_2$, where $\hat\beta' = [\hat\beta_1, \hat\beta_2]$. See also Theorem A.1(a).

The asymptotic approximation of $\sqrt{T}\,\mathrm{vec}(\hat\Theta_k\hat G^{-1}\hat P_n - \Theta_k G^{-1}P_n)$ is summarized in the next theorem, whose proof can be found in Appendix C.

Theorem 3.4. Suppose the assumptions in Lemmas 3.1-3.3 and Assumption 3.6 hold. T vec(k G1 Pn k G1 Pn ) = w1n T vec(0 0 ) + w2n T (2 2 ) + wn T ( ) + Op (T 1/2 ), (3.7) where
1 1 w1n = (Pn GDn k Dn )m l=0

diag(l ){

1 diag(aj )m ( n+1lj ynlj c )}; T j=1

w2n = w21n + w22n ,


k1

w21n =
j=0

(Pn G1 M C j K 1 K k1j )
1 1 (Pn GDn k Dn )m l=0 q

diag(l ){
j=1

diag(aj )m (n+1lj nlj Im )},

w22n =

1 ((Pn GVn Pn G1 ) k G1 )G1 (Imr , 0mrxm(s1) ), if G = [ , ] 1 ((Pn GVn Pn G1 ) k G1 (Id , 0dxr ) )G2 (Imr , 0mrxm(s1) ), if G = [ , ] ; 1 1 hn , (Pn GDn

1 1 1 wn = [ (Pn GDn k Dn )m 2

k Dn )Lm ]; T (2 2 ) and T ( ),

and the asymptotic distributions of T vec(0 0 ),

which are independent to each other, can be found in Theorem A.1 in Appendix A. 2

It is not difficult to show that, in practice, one can estimate $w_{1n}$, $w_{2n}$ and $w_{\delta n}$ with an error of $O_p(T^{-1/2})$. As a result, statistical inferences on the IRF can be drawn with these estimates, together with the simulated asymptotic distributions of $\sqrt{T}\,\mathrm{vec}(\hat\beta_0 - \beta_0)$, $\sqrt{T}(\hat\lambda_2 - \lambda_2)$ and $\sqrt{T}(\hat\delta - \delta)$. While the last two are independently normal with sandwich-form covariance matrices (see Theorem A.1), the first one involves two correlated Brownian motions, which can be simulated along the lines suggested in Sin and Ling (2004); a sketch of the simplest scalar building block of such a simulation follows below. When $V_n = V$, a time-invariant matrix, the IRF is that considered in detail by Gonzalo and Ng (2001) and Yan and Zivot (2004). In that case, no $\sqrt{T}\,\mathrm{vec}(\hat\beta_0 - \beta_0)$ term is involved. The results are summarized in the next corollary, whose proof is omitted.
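As a minimal illustration of the kind of simulation involved, the sketch below draws from the scalar functional $(\int_0^1 B\,dW)/(\int_0^1 B^2\,du)$ for two independent standard Brownian motions via an Euler discretization. The actual limit object in Theorem A.1(a) is matrix-valued and involves correlated Brownian motions and estimated covariance matrices, which would have to be plugged in along the lines of Sin and Ling (2004); none of that is reproduced here.

```python
import numpy as np

def simulate_bm_functional(n_grid=2000, n_rep=5000, seed=0):
    """Draws of (int_0^1 B dW) / (int_0^1 B^2 du) for independent scalar
    standard Brownian motions B and W, via an Euler discretization."""
    rng = np.random.default_rng(seed)
    dt = 1.0 / n_grid
    draws = np.empty(n_rep)
    for r in range(n_rep):
        dB = rng.normal(scale=np.sqrt(dt), size=n_grid)
        dW = rng.normal(scale=np.sqrt(dt), size=n_grid)
        B = np.concatenate([[0.0], np.cumsum(dB)])[:-1]   # B at left endpoints
        num = np.sum(B * dW)                              # approximates int_0^1 B dW
        den = np.sum(B**2) * dt                           # approximates int_0^1 B^2 du
        draws[r] = num / den
    return draws

draws = simulate_bm_functional()
print(np.percentile(draws, [2.5, 50, 97.5]))              # simulated quantiles
```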

Corollary 3.5. Suppose the assumptions in Lemmas 3.1-3.3 and Assumption 3.6 hold. In addition, Vn = V = DD, a constant matrix with D a diagonal constant matrix containing the standard deviation of t . P is the lower triangular Choleski decomposition of GV G . = T vec(k G1 P k G1 P ) w2 T (2 2 ) + w T ( ) + Op (T 1/2 )

L N (0, V),
1 1 where V = w2 2 2 w2 + w 1 1 w (see Theorem A.1), 2

w2 = w21 + w22 ,
k1

w21 =
j=0

(P G1 M C j K 1 K k1j ), ((P 1 GV P G1 ) k G1 )G1 (Imr , 0mrxm(s1) ), if G = [ , ] 1 1 1 ((P GV P G ) k G (Id , 0dxr ) )G2 (Imr , 0mrxm(s1) ), if G = [ , ] ; 2

w22 =

1 w = [ (P 1 GD k D 1 )m , (P 1 GD k D)Lm ]. 2

If we further assume that we do not impose the permanent-transitory decomposition, that is, we let $G = I_m$ (see Phillips, 1998, or Lütkepohl, 1990), the results are as summarized in the next corollary, whose proof is omitted.


Corollary 3.6. Suppose the assumptions in Lemmas 3.1-3.3 and Assumption 3.6 hold with G = Im . In addition, Vn = V = DD, a constant matrix with D a diagonal constant matrix containing the standard deviation of t . P is the lower triangular Choleski decomposition of V . T vec(k P k P ) w2 T (2 2 ) + w T ( ) + Op (T 1/2 )

L N (0, V),
1 1 where V = w2 2 2 w2 + w 1 1 w (see Theorem A.1), 2 k1

w2 =
j=0

(P M C j K 1 K k1j ), 2

1 w = [ (P 1 D k D 1 )m , (P 1 D k D)Lm ]. 2

Equipped with Theorem 3.4, we are able to discuss the asymptotic properties of the estimated general variance decomposition, as defined in (2.6). Note that the variance decomposition share of shock j is the j-th element of $w(\zeta\zeta')/(\psi'V_n\psi)$, where $\zeta$ is an m×1 vector defined as:
$$\zeta' = \psi' G^{-1} P_n. \qquad (3.8)$$
Thus it suffices to consider the asymptotic properties of:
$$\sqrt{T}\Big[\frac{w(\hat\zeta\hat\zeta')}{\hat\psi'\hat V_n\hat\psi} - \frac{w(\zeta\zeta')}{\psi'V_n\psi}\Big]. \qquad (3.9)$$

The results are summarized in the next theorem, whose proof can be found in Appendix C. Appendix C also contains two preliminary lemmas.

Theorem 3.7. Suppose the assumptions in Lemmas 3.1-3.3 and Assumption 3.6


hold. If in addition, T[ T ( ) = Op (1), w( ) w( ) ] Vn Vn = w1n T vec(0 0 ) + wn T ( ) + w2n T (2 2 ) + wn T ( ) + Op (T 1/2 ), (3.10) where w1n = w( ) 2 1 1 1 { (w(Dn Dn )) (Pn GDn Pn G1 Dn )} Vn Vn q 1 diag(l ){ diag(aj )m ( n+1lj ynlj c )}; T j=1 l=0 2 w( ) [m (Pn G1 Pn G1 ) Vn ]; Vn Vn w( ) 2 1 1 { (w(Dn Dn )) (Pn GDn Vn Vn
q

wn =

w2n = w21n + w22n , w21n =


1 Pn G1 Dn )}

l=0

diag(l ){
j=1

diag(aj )m (n+1lj nlj Im )},

w22n =

2 1 Vn m ((Pn GVn 2 1 Vn m ((Pn GVn

Pn G1 ) Pn G1 G1 )G1 (Imr , 0mrxm(s1) ), if G = [ , ] Pn G1 ) Pn G1 G1 (Id , 0dxr ) )G2 (Imr , 0mrxm(s1) ), if G = [ , ] ;

wn = (w1 n , w2 n ), 1 w( ) 1 1 1 {(Pn GDn Pn G1 Dn ) [w(Dn Dn )] } 1 hn , Vn Vn 2 w( ) 1 w2 n = {m (Pn GDn Pn G1 Dn )Lm [(Dn Dn ] }; Vn Vn and the asymptotic distributions of T vec(0 0 ), T (2 2 ) and T ( ), w1 n = which are independent to each other, can be found in Theorem A.1 in Appendix A. 2

In Corollary 3.8, we consider the case that $\psi' = G_1$. This is the case in most studies of price discovery. Corollary 3.8 is a simplified version of Theorem 3.7. Its proof is straightforward and thus omitted.


Corollary 3.8. Suppose the assumptions in Lemmas 3.1-3.3 and Assumption 3.6 hold. If in addition, = , w( ) w( ) ] Vn Vn = w1n T vec(0 0 ) + w2n T (2 2 ) +wn T ( ) + Op (T 1/2 ), T[ where w2n = w21n + w22n , w21n = 2 w( ) [m (Pn G1 Pn G1 ) Vn ]G2 [Imr , 0mrxm(s1) ] Vn Vn w( ) 2 1 1 1 { (w(Dn Dn )) (Pn GDn Pn G1 Dn )} + Vn Vn
q

(3.11)

l=0

diag(l ){
j=1

diag(aj )m (n+1lj nlj Im )},


Pn G1 ) Pn G1 G1 )G1 (Imr , 0mrxm(s1) ), if G = [ , ] Pn G1 ) Pn G1 G1 (Id , 0dxr ) )G2 (Imr , 0mrxm(s1) ), if G = [ , ] ;

w22n =

2 1 Vn m ((Pn GVn 2 1 Vn m ((Pn GVn

and w1n and wn are dened as in Theorem 3.7. The asymptotic distributions of T vec(0 0 ), T (2 2 ) and T ( ), which are independent to each other, can be found in Theorem A.1 in Appendix A. 2

In Corollary 3.9, we consider the case that $\psi$ is pre-determined. For instance, $\psi = (1, 0, \ldots, 0)'$, which is the case when we consider the (1,1) element of $V_n$ (see the discussion around (2.5) and (2.6) in Section 2). Corollary 3.9 is a simplified version of Theorem 3.7. Its proof is straightforward and thus omitted.

Corollary 3.9. Suppose the assumptions in Lemmas 3.1-3.3 and Assumption 3.6 hold. If in addition, is pre-determined, w( ) w( ) ] Vn Vn = w1n T vec(0 0 ) + w2n T (2 2 ) + wn T ( ) + Op (T 1/2 ), (3.12) T[ 17

where w2n = w21n + w22n , w21n = w( ) 2 1 1 { (w(Dn Dn )) (Pn GDn Vn Vn


q 1 Pn G1 Dn )}

l=0

diag(l ){
j=1

diag(aj )m (n+1lj nlj Im )},

w22n =

2 1 Vn m ((Pn GVn 2 1 Vn m ((Pn GVn

Pn G1 ) Pn G1 G1 )G1 (Imr , 0mrxm(s1) ), if G = [ , ] 1 1 1 Pn G ) Pn G G (Id , 0dxr ) )G2 (Imr , 0mrxm(s1) ), if G = [ , ] ;

and w1n and wn are dened as in Theorem 3.7. The asymptotic distributions of T vec(0 0 ), T (2 2 ) and T ( ), which are independent to each other, can be found in Theorem A.1 in Appendix A. 2

Conclusions

Macroeconomic or financial data are often modelled with cointegration and time-varying volatility. Noticeable examples include stock prices of the same underlying asset. Interestingly, most if not all studies in price discovery do not consider the possible time-varying volatility. As a result, the impulse responses are not inflated or deflated by the time-varying volatility. Intuitively, the impact of an i.i.d. innovation is bigger when the volatility (conditional variance) is larger. On the other hand, the shocks derived from a Choleski decomposition also depend on the time-varying volatility. In this paper, we first generalize the conventional price discovery impulse response function (IRF) to its time-varying counterpart. The time-varying information share (IS) and the general variance decomposition (VD) are defined accordingly. Using the asymptotic theories developed in Li, Ling and Wong (2001) and Sin and Ling (2004), we extend Phillips (1998) to cases that allow for time-varying volatility. Larger sample sizes are required as the time-varying volatility and some other parameters need to be estimated.

Proofs of Lemmas 3.1-3.3

In this appendix, we first present the asymptotic distributions of $\sqrt{T}\,\mathrm{vec}(\hat\beta_0 - \beta_0)$, $\sqrt{T}(\hat\lambda_2 - \lambda_2)$ and $\sqrt{T}(\hat\delta - \delta)$, all of which are useful in proving Theorem 3.4 and Theorem 3.7 as well as other lemmas. The asymptotic distributions are derived in Li, Ling and Wong (2001) and Sin and Ling (2004).

To facilitate the discussion of the asymptotic distributions in Theorem A.1, for $i = 1, 2, \ldots, m$, let $a^{(i)}(z)\,b^{(i)}(z)^{-1} = \sum_{l=1}^{\infty}\varphi_{il} z^l$, where $a^{(i)}(z) = \sum_{l=1}^{q} a_{il} z^l$ and $b^{(i)}(z) = 1 - \sum_{l=1}^{p} b_{il} z^l$.

Denote l = (1l , . . . , ml ) , l = 1, 2, . Let (Wm (u), Wm (u))

be a 2mdimensional Brownian motion (BM) with the covariance matrix: u u V Im Im 1 ,


l=1 (l l

1 where V = EVt1 , and = E(Vt1 ) + ( ) 1

E(lt1 )), where

= E[w(t t 1 )(w(t t 1 )) ], and lt1 = (tl tl

Ht1 Ht1 ). Let Bd (u) =

1/2 [Id , 0] 1/2 V 1/2 Wm (u), where a = E(at at ) and a1 = [Id , 0dxr ]a [Id , 0dxr ] , a1 a at = [ ( )1 , ( )1 ] t . Moreover, let 11 = [Id , 0dxr ]( where (yt , yt ) =
j=0 j=0

j )[Id , 0dxr ] ,

j atj . Let c P = [P21 , P22 ], where P21 and P22 are dxd-

and dxr- respectively (see Ahn and Reinsel, 1990), where P = [ ( )1 , ( )1 ].

Theorem A.1 (Li, Ling and Wong, 2001, and Sin and Ling, 2004). Suppose Assumptions 3.1-3.5 hold. Consider the 1SE or QMLE of $\beta$, $\lambda_2$ and $\delta$ (see the end of Section 2). Define $\hat\beta_0 = \hat\beta_1^{-1}\hat\beta_2$, where $\hat\beta' = [\hat\beta_1, \hat\beta_2]$. Then
1 (a) T (0 0 ) L ( 1 )1 M P21 , T (2 2 ) L N (0, 1 1 ), (b) 2 2 2 T ( ) L N (0, 1 1 ), (c)


where
1 1 = E(Vt1 ) + (1

+ Im )
l=1 1 0

(l l

E(lt1 )),

M = (
0

Bd (u)dWm (u) ) (

1 Bd (u)Bd (u) du)1 1/2 11 , a1

1 2 = E(t1 t1 Vt1 ) + l=1 1 = E(t1 t1 Vt1 ) + 2 l=1

E(t1l t1l (1

+ Im ) l l

l l

lt1 ),

E(t1l t1l ( )
lt ).

lt1 ),

= E(St ), and = E(

lt

Proof of Lemma 3.1. We rst dene = (1 , . . . , s1 ). is dened ac cordingly. Given Theorem A.1, as T ( ) = Op (T 1/2 ), following the lines in the proof of Theorem 2.9(i) in Phillips (1998): T vec[k k ] k1 = vec[ k1j T [( ) , ( )]K 1 C j M] + Op (T 1/2 )
j=0 k1

=
j=0

[M C j K

k1j ] T vec[( ) , ( )] + Op (T 1/2 ).(A. 1)

But on the other hand, = = [ T vec[( ) , ( )] T vec[( , ) 0mxm(s1) 0m(s1)xr Im(s1) ] 0m(s1)xm Im ] T vec[2 2 ]. 2 0rxm(s1) Im(s1)

(A. 2)

By (A.1) and (A.2), (3.2) results and the proof is complete.

Proof of Lemma 3.2. We rst note that, given Theorem A.1(b), = Op (T 1/2 ), 20

= c( c)1 ( ) c c[( c)1 ( c)1 ] c = c( c)1 ( ) c + c( c)1 ( c c)( c)1 c + Op (T 1 ) = c( c)1 ( ) + Op (T 1 ). Therefore, 1 T ( ) = T ( )(1 , 0rxd ) + Op (T 1/2 ). (A. 3)

Alternatively put,
1 T vec( ) = [(1 , 0rxd ) ]

T vec( ) + Op (T 1/2 ). . We rst consider

Thus (b) is shown. To prove (a), recall the notation G = vec[G G ]. By (A.3), vec[G G ] = = = = vec( ) vec( )

1 ( (1 , 0rxd )) vec( ) vec( ) 1 ( (1 , 0rxd )) Kmr vec( ) vec( ) 1 ( (1 , 0rxd )) Kmr Imr

vec( )

Since vec[G G] = Kmm vec[G G ], (a) is also shown. The proof is then complete. 2

Proof of Lemma 3.3. The proof follows the lines in the proof of Lemma A.1 in Sin (2004). Denote t = t () and t = t (). By Theorem A.1, T ( ) = Op (1) and = Op (T 1/2 ). Thus for t = 1, . . . , T , 1 1 t = t [T ( ) ] yt1 [ T ( )] t1 + Op (T 1 ). T T (A. 4)

Let stands for the series {t = t ()}T t= and stands for the series {t = t ()}T . Recall the denitions of il s. For i = 1, . . . , m, dene: t=1
n q

hin (, 1 )
l=0

il [i0 + ( a
j=1

aij 2 in+1lj )],


hin (, 1 )
l=0

il [i0 + ( a
j=1 q

aij 2 in+1lj )], aij 2 in+1lj )],


j=1 q

h (, 1 ) in
l=0

il [i0 + ( a il [ai0 + (
l=0 j=1

h (, 1 ) in

aij 2 in+1lj )],

(A. 5)

When no ambiguity arises, denote hin = hin (, 1 ) and hin = h (, 1 ). Decom in pose: hn (, 1 ) h (, 1 ) n = (h (, 1 ) hn (, 1 )) + (h (, 1 ) h (, 1 )) + (hn (, 1 ) hn (, 1 )) n n n A1n + A2n + A3n . In Sin (2004), it is shown that A1n = Op (n ) and if t = 0 for t 0, A1n = 0 (A. 7) (A. 6)

On the other hand, let t = (2 , . . . , 2 ) and recall the denition of l in the lemma. 1t mt Sin (2004) also shows: A2n =
1 hn (1

1 ) + Op (T 1 ), where

(A. 8)

1 hn a0 h n

a0 h n ; p

a1 h n , . . . ,

aq h n ;

b1 hn , . . . ,

bp hn )

= Im +
l=1

a0 hnl )diag(bl ) p

=
l=0

diag(l );

aj h n

= diag(n+1j ) +
l=1 p

( (

aj hnl )diag(bl )

=
l=0

diag(l diag(l

n+1lj ), j = 1, . . . , q; hnlj ), j = 1, . . . , p.

bj hn

= diag(hnj ) +
l=1

bj hnl )diag(bl )

=
l=0

Lastly, we consider A3n . For i, . . . , m,


n q

hin (, 1 ) hin (, 1 ) =
l=0

il [
j=1

aij (2 in+1lj 2 in+1lj )].

(A. 9)


Consider the terms (2 in+1lj 2 in+1lj )s in (A.9). First by (A.4), for t = 1, . . . , T , w(t t t t ) = w((t t )t ) + w(t (t t ) ) + w((t t )(t t ) ) = 2w((t t )t ) + Op (T 1 ), (A. 10)

where the last equality also follows by Exercise 7.1(a), p.108 of Magnus (1988). Once again by (A.4), w((t t )t ) = m vec[(t t )t ] 1 1 = m vec{[T ( ) ] yt1 t + [ T ( )] t1 t } + Op (T 1 ) T T 1 1 = m ( t yt1 )T (1 1 ) m ( t t1 Im ) T (2 2 ) T T (A. 11) +Op (T 1 ). Therefore, by (A.9), (A.10) and (A.11), A3n = 2 1 diag(aj )m ( n+1lj ynlj )}T (1 1 ) T j=1 l=0 q n 1 diag(l ){ diag(aj )m ( n+1lj nlj Im )} T (2 2 ) 2 T j=1 l=0 diag(l ){ (A. 12)
n q

+Op (T 1 ).

Therefore, with an appropriate choice of the initial values, by (A.6), (A.7), (A.8) and (A.12), (3.5) results. Thus the proof is complete. 2

Other Lemmas

Lemma B.1. Suppose Assumptions 3.1-3.6 hold. (a) G1 = Op (1); 23

(b)

G1 G1 = Op (T 1/2 ); 2

(c) G1 G1 = G1 (G G)G1 + Op (T 1 ).

Proof. Given Assumption (3.6), Lemma 3.2 and Theorem A.1(a), for all > 0, there exists N = N () such that for all n > N , P rob[min (G) < ] < . Thus, (a) is shown. Further, we can write: G1 G1 = G1 (G G)G1 = Op (T 1/2 ), given (a) and Lemma 3.2. Thus (b) is also shown. Finally, from (B.1), G1 G1 = G1 (G G)G1 (G1 G1 )(G G)G1 = G1 (G G)G1 + Op (T 1 ), by (b) and Lemma 3.2. Thus, (c) is also shown. 2 (B. 1)

Lemma B.2. Suppose Assumptions 3.1-3.5 hold.


1/2 1/2 vec[Vn Vn ]

1 1/2 1 ( Dn )m (hn hn ) + (1/2 Dn )Lm (2 2 ) + Op (T 1 ) 2 2

= Op (T 1/2 ).

Proof. Given Vn = Dn Dn in Model (3.1), we can write:


1/2 1/2 Vn Vn

= (Dn Dn )1/2 + Dn (1/2 1/2 ) + (Dn Dn )(1/2 1/2 )

(B. 2)

Given Lemma 3.3, it is not dicult to show that Dn Dn = Op (T 1/2 ) (see, for
1 instance, Lemma A.2 of Sin, 2004). On the other hand, Dn = Op (1), therefore,

1 1 2 2 D [(Dn Dn ) (Dn Dn )2 ] Dn Dn = 2 n 1 1 2 2 = D (Dn Dn ) + Op (T 1 ). 2 n 24

(B. 3)

In spite of the fact that 1/2 is symmetric, it is benecial to write: = 1/2 1/2 1/2 1/2 = (1/2 1/2 )1/2 1/2 (1/2 1/2 ) . Therefore, [(1/2 Im ) + Kmm (1/2 Im )]vec[1/2 1/2 ] = vec[ ]. (B. 5) (B. 4)

First note that is positive denite. By Theorem A.1(c), for all > 0, there exists N = N () such that for all n > N , P rob[min () < 2 ] < P rob[min (1/2 ) < ] < . Thus, we can write: vec[1/2 1/2 ] = [(1/2 Im ) + Kmm (1/2 Im )]1 vec[ ], where [(1/2 Im ) + Kmm (1/2 Im )]1 = Op (1). By Theorem A.1(c), vec[1/2 1/2 ] = Op (T 1/2 ). Thus (B.4) can be re-written as: = (1/2 1/2 )1/2 1/2 (1/2 1/2 ) + (1/2 1/2 )(1/2 1/2 ) = (1/2 1/2 )1/2 1/2 (1/2 1/2 ) + Op (T 1 ). And (B.6) can be re-written as: vec[1/2 1/2 ] = (1/2 Im )(Im2 + Kmm )1 vec[ ] + Op (T 1 ). (B. 8) (B. 7) (B. 6)

From p.48 and p.96 of Magnus (1988), vec[ ] = 2Nm Lm (2 2 ). On the other hand, by Theorem 3.10(i) of Magnus (1988), Im2 + Kmm = 2Nm . Therefore, by (B.2), (B.3) and (B.8),
1/2 1/2 vec[Vn Vn ]


1 1/2 1 2 2 ( Dn )vec[Dn Dn ] + (1/2 Dn )(Im2 + Kmm )1 vec[ ] + Op (T 1 ) 2 1 1/2 1 ( Dn )m (hn hn ) + (1/2 Dn )(2Nm )1 2Nm Lm (2 2 ) + Op (T 1 ) = 2 1 1/2 1 ( Dn )m (hn hn ) + (1/2 Dn )Lm (2 2 ) + Op (T 1 ) = 2 = = Op (T 1/2 ), where the last equality is due to Lemma 3.3 and Theorem A.1(c). Thus the proof is complete. 2

Lemma B.3. Suppose Assumptions 3.1-3.6 hold.


1 1/2 1/2 1 1/2 Pn Pn = (G G)Vn G Pn + G(Vn Vn )Vn G Pn + Op (T 1 )

= Op (T 1/2 ). Proof. We rst note that:

1/2 1/2 1/2 1/2 Pn Pn Pn Pn = GVn G GVn G = (GVn )(GVn ) (GVn )(GVn ) .

Given Assumption (3.6), GVn G is positive denite. By Lemmas 3.2-3.3, for all > 0, there exists N = N () such that for all n > N , P rob[min (GVn G ) < 2 ] < P rob[min (Pn ) < ] < . Thus, using the arguments for deriving (B.8) in the proof of Lemma A.2, we can show that:
1 vec[Pn Pn ] = (Pn Im )(Im2 + Kmm )1 vec[GVn G GVn G ] + Op (T 1 ). (B. 9)

Similarly, we can show that:


1/2 1/2 1/2 vec[GVn GVn ] = ((GVn )1 Im )(Im2 + Kmm )1 vec[GVn G GVn G ] + Op (T 1 ). (B. 10)

By (B.9)-(B.10),
1 1/2 1/2 1/2 vec[Pn Pn ] = (Pn GVn Im )vec[GVn GVn ] + Op (T 1 ).

Alternatively put,
1/2 1/2 1 1/2 Pn Pn = (GVn GVn )Vn G Pn + Op (T 1 ).

(B. 11)


By Lemma (3.2) and Lemma (B.2),


1/2 1/2 1/2 1/2 1/2 GVn GVn = (G G)Vn + G(Vn Vn ) + Op (T 1 ).

(B. 12)

Therefore, by (B.11)-(B.12),
1 1/2 1/2 1 1/2 Pn Pn = (G G)Vn G Pn + G(Vn Vn )Vn G Pn + Op (T 1 )

= Op (T 1/2 ), where the last equality is also due to Lemma (3.2) and Lemma (B.2). Thus the proof is complete. 2

Lemma B.4. Suppose Assumptions 3.1-3.5 hold. vec[Vn Vn ]


1 = Nm (Dn Dn )m (hn hn ) + 2Nm (Dn Dn )Lm (2 2 ) + Op (T 1 )

= Op (T 1/2 ).

Proof. We rst note that:


1/2 1/2 1/2 1/2 Vn Vn = Vn Vn Vn Vn . 1/2 1/2 By Lemma B.2, Vn Vn = Op (T 1/2 ) and by arguments similar to those that

derive (B.8) in the proof of Lemma B.2,


1/2 1/2 1/2 vec[Vn Vn ] = (Im2 + Kmm )(Vn Im )vec[Vn Vn ] + Op (T 1 ).

Once again by Lemma B.2 and by the denition Nm = 1 (Im2 + Kmm ) (see, for 2 instance, Theorem 3.10(i), p.48 of Magnus, 1988), vec[Vn Vn ]
1/2 1 1/2 = Nm (Vn 1/2 Dn )m (hn hn ) + 2Nm (Vn 1/2 Dn )Lm (2 2 ) + Op (T 1 ) 1 = Nm (Dn Dn )m (hn hn ) + 2Nm (Dn Dn )Lm (2 2 ) + Op (T 1 ).

The proof is thus complete.


Proofs of Theorem 3.4 and Theorem 3.7

Proof of Theorem 3.4. Refer to (3.6) and let: T (k G1 Pn k G1 Pn ) = I1T + I2T + I3T + Op (T 1/2 ), where I1T I2T I3T By Lemma 3.1, vec[I1T ] = (Pn G1 Im ) T vec(k k )
k1

T (k k )G1 Pn , 1 k G1 T (G G)(Vn G Pn G1 Pn ), 1/2 1/2 1 1/2 k T (Vn Vn )Vn G Pn .

=
j=0

(Pn G1 M C j K 1 K k1j ) T (2 2 ) + Op (T 1/2 ). (C. 1)

When G = [ , ] , by Lemma 3.2(a), vec[I2T ] 1 = ((Pn GVn Pn G1 ) k G1 ) T vec 1 = ((Pn GVn Pn G1 ) k G1 )G1 T vec( ) + Op (T 1/2 ) 1 = ((Pn GVn Pn G1 ) k G1 )G1 (Imr , 0mrxm(s1) ) T vec(2 2 ) + Op (T 1/2 ). (C. 2) When G = [ , ] , Lemma 3.2(b), vec[I2T ]
1 T ( )(Vn G Pn G1 Pn ) + Op (T 1/2 ) 1 = ((Pn GVn Pn G1 ) k G1 (Id , 0dxr ) ) T vec( ) + Op (T 1/2 ) 1 = ((Pn GVn Pn G1 ) k G1 (Id , 0dxr ) )G2 T vec( ) + Op (T 1/2 ) 1 = ((Pn GVn Pn G1 ) k G1 (Id , 0dxr ) )G2 (Imr , 0mrxm(s1) ) T vec(2 2 ) + Op (T 1/2 ).

T ( ) = Op (T 1/2 ) (see Theorem A.1(a)). Further, by

= k G1 (Id , 0dxr )

(C. 3)


1/2 On the other hand, by Lemma B.2, and recall the denition Vn = Dn 1/2 ,

vec[I3T ] 1 1/2 1/2 1/2 = (Pn GVn k ) T [Vn Vn ] 1 1 1 (Pn GDn k Dn )m T (hn hn ) = 2 1 +(Pn GDn k Dn )Lm T (2 2 ) + Op (T 1/2 ) = J1T + J2T + Op (T 1/2 ), where J1T J2T 1 1 1 (Pn GDn k Dn )m T (hn hn ). 2 1 (Pn GDn k Dn )Lm T (2 2 ).

(C. 4)

By normalization, T ( ) = T vec(0rxr , 0 0 ). Recall the denition of c in Lemma 3.2. By Lemma 3.3, J1T = J11T + J12T + J13T + Op (T 1/2 ), where J11T 1 1 1 (P GDn k Dn )m 2 n
q

hn T (1 1 ). 1

(C. 5)

1 1 J12T (Pn GDn k Dn )m

l=0

diag(l ){

1 diag(aj )m ( n+1lj ynlj c )}T vec(0 0 ). (C. 6) T j=1


q

1 1 J13T (Pn GDn k Dn )m

l=0

diag(l ){
j=1

diag(aj )m (n+1lj nlj Im )} T (2 2 ).

(C. 7)

Rearranging the terms in (C.1), (C.2) (or (C.3)), and (C.4)-(C.7), (3.7) results. Thus the proof is complete. 2

Before we prove Theorem 3.7, we rst note that: T[ w( Vn Vn ) w( ) w( ) ]. ] = T[ Vn Vn Vn Vn 29 (C. 8)

However, Vn Vn = ( Vn )2 + ( Vn Vn ) Vn . On the other hand, w( Vn Vn ) = w( ) Vn w( )( Vn Vn ) = 2w( ( ) ) Vn w( )( Vn Vn ) +w(( )( ) ) Vn . (C. 10) (C. 9)

In Lemma C.1, we rst derive an asymptotic approximation of ( Vn Vn ) and show that it is Op (T 1/2 ). Consequently, (C.9) can be written as: Vn Vn = ( Vn )2 + Op (T 1/2 ) (C. 11)

On the other hand, in Lemma C.2, we derive an asymptotic approximation of ( ) and show that it is also Op (T 1/2 ). As a result, ( ) = ( )1 ( ) = Op (T 1/2 ). And we can re-write (C.10) as: w( Vn Vn ) = 2w( ( ) ) Vn w( )( Vn Vn ) + Op (T 1 ). (C. 12)

Equipped with Lemmas C.1 and C.2, using (C.11) and (C.12), we are able to prove Theorem 3.7.

Lemma C.1. Suppose the assumptions in Lemmas 3.1-3.3 and Assumption 3.6 hold. In addition, ( ) = Op (T 1/2 ). Vn Vn
1 = 2 Vn ( ) + [w(Dn Dn )] (hn hn ) + 2[(Dn Dn )] (2 2 ) + Op (T 1 )

= Op (T 1/2 ).

2 30

Proof. Given ( ) = Op (T 1/2 ). By Lemma B.4, (Vn Vn ) = Op (T 1/2 ). Therefore, Vn Vn = ( ) Vn + (Vn Vn ) + Vn ( ) = ( ) Vn + (Vn Vn ) + Vn ( ) + Op (T 1 ) = 2 Vn ( ) + (Vn Vn ) + Op (T 1 ). On the other hand, by Lemma B.4, (Vn Vn ) = ( )vec(Vn Vn )
1 = ( )Nm (Dn Dn )m (hn hn )

(C. 13)

+2( )Nm (Dn Dn )Lm (2 2 ). But Nm ( ) = Nm ( ) = [Nm vec( )] = vec( ),

(C. 14)

where the rst equality is due to Theorem 3.10(ii), p.48, while the second and the third ones are due to Exercise 1.4, p.10 and Denition 3.2, p.48 respectively. All in Magnus (1988). Therefore,
1 1 m (Dn Dn )Nm ( ) = m (Dn Dn )vec( ) 1 = m vec(Dn Dn ) 1 = w(Dn Dn ),

(C. 15)

where the last equality follows by Theorem 7.3(i), p.110 of Magnus (1988). Similarly, Lm (Dn Dn )Nm ( ) = Lm (Dn Dn )vec( ) = Lm vec(Dn Dn ) = (Dn Dn ), (C. 16)


where the last equality follows by Theorem 6.7(i), p.97 of Magnus (1988). Substitute (C.15) and (C.16) into (C.14),
1 (Vn Vn ) = [w(Dn Dn )] (hn hn ) + 2[(Dn Dn )] (2 2 ). (C. 17)

By (C.13) and (C.17), the results follow. Thus the proof is complete.

Lemma C.2. Suppose the assumptions in Lemmas 3.1-3.3 and Assumption 3.6 hold. In addition, ( ) = Op (T 1/2 ). T vec( ) = w1n T vec(0 0 ) + wn T ( ) + w2n T (2 2 ) +wn T ( ) + Op (T 1/2 ), where
1 1 w1n = (Pn GDn Pn G1 Dn )m

(C. 18)

l=0

diag(l ){

1 diag(aj )m ( n+1lj ynlj c )}; T j=1

wn = (Pn G1 Pn G1 ); w2n = w21n + w22n ,


1 1 w21n = (Pn GDn Pn G1 Dn )m q

l=0

diag(l ){
j=1

diag(aj )m (n+1lj nlj Im )},

w22n =

1 ((Pn GVn Pn G1 ) Pn G1 G1 )G1 (Imr , 0mrxm(s1) ), if G = [ , ] 1 1 1 1 ((Pn GVn Pn G ) Pn G G (Id , 0dxr ) )G2 (Imr , 0mrxm(s1) ), if G = [ , ] ; 1 1 hn , (Pn GDn

1 1 1 wn = [ (Pn GDn Pn G1 Dn )m 2

Pn G1 Dn )Lm ]; T ( ),

and the asymptotic distributions of T vec(0 0 ),

T (2 2 ) and

which are independent to each other, can be found in Theorem A.1 in Appendix A. 2 Proof. Similar to the proof of Theorem 3.4, T ( ) = T ( G1 Pn G1 Pn ) 32

where I1T I2T I3T

= I1T + I2T + I3T + Op (T 1/2 ), T ( )G1 Pn , 1 G1 T (G G)(Vn G Pn G1 Pn ), 1/2 1/2 1 1/2 T (Vn Vn )Vn G Pn .

Therefore, similar to Theorem 3.4, the w1n , w22n and wn are exactly the same as those in Theorem 3.4 except that k is replaced by = Pn G1 . In Theo rem 3.4, the rst term in w21n is associated with T vec[(k k )G1 Pn ], which is replaced here by: T vec[( )G1 Pn ] = (Pn G1 ) T ( ) = (Pn G1 Pn G1 ) T ( ). That is, we introduce a new term, wn = (Pn G1 Pn G1 ). On the other hand, similar to the arguments for w1n , w22n and wn , w21n equals the second term of the original one with k replaced by Pn G1 . Thus the proof is complete. 2

Proof of Theorem 3.7. By (C.8), (C.12) and (C.11), w( ) w( ) ] Vn Vn 2w( ( ) ) Vn w( )( Vn Vn ) = T[ ] ( Vn )2 w( ) 2 m T vec( ) T ( Vn Vn ). (C. 19) = Vn ( Vn )2 T[ Therefore, by Lemma C.2, Lemma C.1 and Lemma 3.3, it is not dicult to see that: w1n = { 2 w( ) 1 1 1 m (Pn GDn Pn G1 Dn )m (w(Dn Dn )) (2)} Vn ( Vn )2 q 1 diag(l ){ diag(aj )m ( n+1lj ynlj c )} T j=1 l=0 33

w( ) 2 1 1 1 { (w(Dn Dn )) (Pn GDn Pn G1 Dn )} Vn Vn q 1 diag(l ){ diag(aj )m ( n+1lj ynlj c )}. T j=1 l=0

(C. 20)

Similarly, w21n = w( ) 2 1 1 { (w(Dn Dn )) (Pn GDn Vn Vn


q 1 Pn G1 Dn )}

l=0

diag(l ){
j=1

diag(aj )m (n+1lj nlj Im )}.

(C. 21)

On the other hand, by (C.19) and Lemma C.2, w22n =


2 1 Vn m ((Pn GVn 2 1 Vn m ((Pn GVn

Pn G1 ) Pn G1 G1 )G1 (Imr , 0mrxm(s1) ), if G = [ , ] (C. 1 1 1 Pn G ) Pn G G (Id , 0dxr ) )G2 (Imr , 0mrxm(s1) ), if G = [ , ] .

22)

By (C.19), Lemma C.2, Lemma C.1, and Lemma 3.3, wn = 2 w( ) m (Pn G1 Pn G1 ) 2 Vn Vn ( Vn )2 2 w( ) [m (Pn G1 Pn G1 ) Vn ]. = Vn Vn

(C. 23)

Finally, also by (C.19), Lemma C.2, Lemma C.1 and Lemma 3.3, w1 n 1 w( ) 1 1 1 m (Pn GDn Pn G1 Dn )m 1 hn [w(Dn Dn )] 1 hn Vn ( Vn )2 1 w( ) 1 1 1 {(Pn GDn Pn G1 Dn ) [w(Dn Dn )] } 1 hn ; = (C. 24) Vn Vn = w2 n 2 2w( ) 1 m (Pn GDn Pn G1 Dn )Lm [(Dn Dn )] Vn ( Vn )2 2 w( ) 1 {m (Pn GDn Pn G1 Dn )Lm [(Dn Dn ] }. = Vn Vn = Thus, by (C.20)-(C.25), the proof is complete. 2

(C. 25)

REFERENCES

Ahn, S.K., Reinsel, G.C., 1990. Estimation for partially nonstationary multivariate autoregressive models. Journal of the American Statistical Association 85, 813-823.
Andersen, T.G., Bollerslev, T., Diebold, F.X., Labys, P., 2003. Modeling and forecasting realized volatility. Econometrica 71, 579-625.
Barclay, M.J., Hendershott, T., 2003. Price discovery and trading after hours. Review of Financial Studies 16, 1041-1073.
Blanchard, O.J., Quah, D., 1989. The dynamic effects of aggregate demand and supply disturbances. American Economic Review 79, 655-673.
Bollerslev, T., 1986. Generalized autoregressive conditional heteroskedasticity. Journal of Econometrics 31, 307-327.
Bollerslev, T., 1990. Modelling the coherence in short-run nominal exchange rates: a multivariate generalized ARCH approach. Review of Economics and Statistics 72, 498-505.
Chan, N.H., Wei, C.Z., 1988. Limiting distributions of least squares estimates of unstable autoregressive processes. Annals of Statistics 16, 367-401.
Covrig, V., Melvin, M., 2002. Asymmetric information and price discovery in the FX market: does Tokyo know more about the yen? Journal of Empirical Finance 9, 271-285.
Elder, J., 2003. An impulse-response function for a vector autoregression with multivariate GARCH-in-mean. Economics Letters 79, 21-26.
Elder, J., 2004. Another perspective on the effects of inflation uncertainty. Journal of Money, Credit, and Banking 36, 911-928.
Enders, W., 2004. Applied econometric time series, 2nd edition. John Wiley, New Jersey.
Engle, R.F., 1982. Autoregressive conditional heteroscedasticity with estimates of the variance of United Kingdom inflation. Econometrica 50, 987-1008.
Engle, R.F., Granger, C.W.J., 1987. Cointegration and error correction: representation, estimation and testing. Econometrica 55, 251-276.
Engle, R.F., Patton, A.J., 2004. Impacts of trades in an error-correction model of quote prices. Journal of Financial Markets 7, 1-25.

Glosten, L.R., 1987. Components of the bid-ask spread and the statistical properties of transaction prices. Journal of Finance 42, 1293-1307.
Gonzalo, J., Granger, C.W.J., 1995. Estimation of common long-memory components in cointegrated systems. Journal of Business and Economic Statistics 13, 27-35.
Gonzalo, J., Ng, S., 2001. A systematic framework for analyzing the dynamic effects of permanent and transitory shocks. Journal of Economic Dynamics and Control 25, 1527-1546.
Hasbrouck, J., 1995. One security, many markets: determining the contributions to price discovery. Journal of Finance 50, 1175-1199.
Hasbrouck, J., 2002. Stalking the efficient price in market microstructure specifications: an overview. Journal of Financial Markets 5, 329-339.
Hasbrouck, J., 2003. Intraday price formation in U.S. equity index markets. Journal of Finance 58, 2375-2400.
Hasbrouck, J., Ho, T.S.Y., 1987. Order arrival, quote behavior, and the return-generating process. Journal of Finance 42, 1035-1048.
Huang, R.D., 2002. The quality of ECN and Nasdaq market maker quotes. Journal of Finance 57, 1285-1319.
Jeantheau, T., 1998. Strong consistency of estimators for multivariate ARCH models. Econometric Theory 10, 29-52.
Johansen, S., 1988. Statistical analysis of cointegration vectors. Journal of Economic Dynamics and Control 12, 231-254.
Johansen, S., 1991. Estimation and hypothesis testing of cointegration vectors in Gaussian vector autoregressive models. Econometrica 59, 1551-1580.
Levtchenkova, S., Pagan, A., Robertson, J., 1999. Shocking stories. Journal of Economic Surveys 12, 507-532.
Li, W.K., Ling, S., Wong, H., 2001. Estimation for partially nonstationary multivariate autoregressive models with conditional heteroskedasticity. Biometrika 88, 1135-1152.
Lin, W.-L., 1997. Impulse response function for conditional volatility in GARCH models. Journal of Business and Economic Statistics 15, 15-25.

Ling, S., Li, W.K., 1997. Diagnostic checking of nonlinear multivariate time series with multivariate ARCH errors. Journal of Time Series Analysis 18, 447-464.
Ling, S., Li, W.K., 1998. Limiting distributions of maximum likelihood estimators for unstable ARMA models with GARCH errors. Annals of Statistics 26, 84-125.
Ling, S., McAleer, M., 2003. Asymptotic theory for a new vector ARMA-GARCH model. Econometric Theory 19, 280-310.
Lütkepohl, H., 1989. A note on the asymptotic distribution of impulse response functions of estimated VAR models with orthogonal residuals. Journal of Econometrics 42, 371-376.
Lütkepohl, H., 1990. Asymptotic distributions of impulse response functions and forecast error variance decompositions of vector autoregressive models. Review of Economics and Statistics 72, 116-125.
Lütkepohl, H., 1993. Introduction to multiple time series analysis, 2nd edition. Springer-Verlag, New York.
Martens, M., 1998. Price discovery in high and low volatility periods: open outcry versus electronic trading. Journal of International Financial Markets, Institutions and Money 8, 243-260.
Phillips, P.C.B., 1998. Impulse response and forecast error variance asymptotics in nonstationary VARs. Journal of Econometrics 83, 21-56.
Quah, D., 1992. The relative importance of permanent and transitory components: identification and some theoretical bounds. Econometrica 60, 107-118.
Sin, C.-y., 2004. Testing for multivariate ARCH when the conditional mean is an ECM: theory and empirical applications. Forthcoming in D. Terrell (ed.), Advances in Econometrics Volume 20: Econometric Analysis of Economic and Financial Time Series Part A. Elsevier Science, Amsterdam.
Sin, C.-y., Ling, S., 2004. Estimation and testing for partially nonstationary vector autoregressive models with GARCH. Revised and re-submitted to Journal of Econometrics.
Stock, J.H., Watson, M.W., 1988. Testing for common trends. Journal of the American Statistical Association 83, 1097-1107.

Tse, Y.K., 2000. A test for constant correlations in a multivariate GARCH model. Journal of Econometrics 98, 107-127.
Tse, Y.K., 2002. Residual-based diagnostics for conditional heteroscedasticity models. Econometrics Journal 5, 358-373.
Tse, Y.K., Tsui, A.K.C., 2002. A multivariate generalized autoregressive conditional heteroskedasticity model with time-varying correlations. Journal of Business and Economic Statistics 20, 351-362.
Warne, A., 1993. A common trends model: identification, estimation and asymptotics. Mimeograph, Institute for International Studies, Stockholm University.
Yan, B., Zivot, E., 2004. The dynamics of price discovery. Paper presented at the 2004 European Meeting of the Econometric Society, Madrid, Spain.

