
Econometric Theory, 30, 2014, 252-284.

doi:10.1017/S0266466613000182
ASYMPTOTIC NORMALITY FOR
WEIGHTED SUMS OF LINEAR
PROCESSES
KARIM M. ABADIR AND WALTER DISTASO
Imperial College London
LIUDAS GIRAITIS
Queen Mary, University of London
HIRA L. KOUL
Michigan State University
We establish asymptotic normality of weighted sums of linear processes with general triangular array weights and when the innovations in the linear process are martingale differences. The results are obtained under minimal conditions on the weights and innovations. We also obtain weak convergence of weighted partial sum processes. The results are applicable to linear processes that have short or long memory or exhibit seasonal long memory behavior. In particular, they are applicable to GARCH and ARCH($\infty$) models and to their squares. They are also useful in deriving asymptotic normality of kernel-type estimators of a nonparametric regression function with short or long memory moving average errors.
1. INTRODUCTION
Numerous inference procedures in statistics and econometrics are based on the sums

$$S_n = \sum_{j=1}^{n} X_j, \qquad W_n = \sum_{j=1}^{n} z_{nj} X_j$$

of a linear process $\{X_j\}$, where $\{z_{nj}, 1 \le j \le n\}$ is an array of known real numbers. In this paper we focus on deriving asymptotic distributions of $S_n$ and $W_n$.
In addition, the weak convergence property of the corresponding partial sum processes is also discussed. The linear process is assumed to be a moving average with martingale difference innovations and may exhibit short- or long-range dependence. It will be shown that $\{\mathrm{Var}(T_n)\}^{-1/2}(T_n - ET_n)$, with $T_n = S_n$ or $W_n$, converges weakly to a normal distribution under easily verifiable minimal assumptions on the weights and innovations. In particular, these assumptions are valid for the nonlinear squared autoregressive conditional heteroskedasticity (ARCH) process, where the innovations are conditionally heteroskedastic martingale differences. The proofs use the central limit theorem (CLT) for martingale differences.

The authors would like to thank Donatas Surgailis for providing part (iii) of Theorem 2.2, Lemma 3.1, and other useful comments, and the Editor and the three reviewers for their constructive suggestions. Research is supported in part by ESRC grant RES062230790 and USA NSF DMS grant 0704130. Address correspondence to Liudas Giraitis, School of Economics and Finance, Queen Mary, University of London, Mile End Rd., London E1 4NS, United Kingdom; e-mail: l.giraitis@qmul.ac.uk. © Cambridge University Press 2013.
Numerous testing procedures, e.g., testing for a unit root, cumulative sum (CUSUM) change-point detection, and the Kwiatkowski, Phillips, Schmidt, and Shin (1992) (KPSS) test for stationarity, are based on the weak convergence of the partial sum process $S_{[nt]}$, $0 \le t \le 1$, to a Gaussian process and, in particular, require verification of the CLT for $S_n$. The CLT for weighted sums $W_n$, in turn, is used in kernel estimation and in spectral analysis, e.g., in obtaining asymptotic normality of the discrete Fourier transforms of $\{X_j\}$. It is thus of interest to provide easy-to-use CLTs and invariance principles for the above statistics.
The work of Ibragimov and Linnik (1971) contains a number of useful results on the classical asymptotic theory of weakly dependent random variables. Davydov (1970) obtained a weak convergence result for the partial sum process of linear processes with independent and identically distributed (i.i.d.) innovations, whereas Phillips and Solo (1992) developed CLTs and invariance principles for sums of linear processes based on the Beveridge-Nelson decomposition. Peligrad and Utev (2006) extended Ibragimov and Linnik (Thm. 18.6.5) to linear processes with innovations following a more general dependence framework. Gordin (1969) introduced a general method for proving CLTs for stationary processes using a martingale approximation. In the case of short memory, his method gives the same result as the Beveridge-Nelson decomposition. Wu and Woodroofe (2004) obtained a CLT for the sums of stationary and ergodic sequences using a martingale approximation method. Merlevède, Peligrad, and Utev (2006) provide a further survey of some recent results on the CLT and its weak invariance principle for stationary processes.
Section 2 deals with asymptotic normality of $S_n$ and $W_n$, whereas in Section 3 we discuss weak convergence of the corresponding partial sum processes. Section 4 contains examples and applications, while Section 5 includes simulations. In the sequel, all limits are taken as $n \to \infty$, unless specified otherwise; $\to_p$ and $\to_D$, respectively, denote convergence in probability and in distribution; $Z = \{0, \pm 1, \pm 2, \ldots\}$; and $N_k(\mu, \Sigma)$ denotes the $k$-dimensional normal distribution with mean vector $\mu$ and covariance matrix $\Sigma$, $k \ge 1$. We write $N$ for $N_1$. For any two sequences of real numbers $a_n$, $b_n$, $a_n \sim b_n$ means $a_n/b_n \to 1$.
2. CLT FOR WEIGHTED SUMS
In this section we consider asymptotic normality of the sums $S_n$ and $W_n$ when $\{X_j\}$ is a moving average process,

$$X_j = \sum_{k=0}^{\infty} a_k \varepsilon_{j-k} = \sum_{k=-\infty}^{j} a_{j-k}\varepsilon_k, \quad j \in Z, \qquad \sum_{k=0}^{\infty} a_k^2 < \infty. \qquad (2.1)$$

Here, $(\varepsilon_j, F_j)$, $j \in Z$, is a martingale difference sequence (m.d.s.) with constant variance, where $F_j := \sigma\text{-field}\{\varepsilon_i, i \le j\}$, $j \in Z$; i.e., $E(\varepsilon_j \mid F_{j-1}) = 0$ and $E\varepsilon_j^2 = \sigma_\varepsilon^2 < \infty$, $j \in Z$.
The existing literature on asymptotic distributions of the sums of the $X_j$'s often assumes that $\{\varepsilon_j\} \sim \mathrm{IID}(0, \sigma_\varepsilon^2)$, i.e., that the innovations $\varepsilon_j$, $j \in Z$, are i.i.d. random variables (r.v.s) with zero mean and finite and positive variance $\sigma_\varepsilon^2$. Several papers establish a CLT for $S_n$ under the weaker assumption that $\{\varepsilon_j\}$ is a stationary ergodic m.d.s. In some applications, even this assumption is too restrictive.
Let $V_j := E(\varepsilon_j^2 \mid F_{j-1})$ be the conditional variance of $\varepsilon_j$, and let $\gamma_V(j,k) := \mathrm{Cov}(V_j, V_k)$, $j, k \in Z$, stand for the covariance function of the $V_j$'s. To allow for broader applications, we make the following assumption.
Assumption 2.1. $\{(\varepsilon_j, F_j), j \in Z\}$ is an m.d.s. of r.v.s such that $E\varepsilon_j^2 = \sigma_\varepsilon^2$ for all $j \in Z$, and

(a) $\max_j EV_j^2 < \infty$,

(b) $\delta_K := \max_{|j-k| \ge K} |\gamma_V(j,k)| \to 0$, as $K \to \infty$,

(c) $\max_{j \in Z} E\varepsilon_j^2 I(|\varepsilon_j| > K) \to 0$, as $K \to \infty$.
First, we discuss the asymptotic normality of $S_n$. Theorem 18.6.5 of Ibragimov and Linnik (1971) gives a CLT for $S_n$ in the case of i.i.d. innovations under general conditions. Theorem 2.1 below extends it to m.d. innovations and shows that any rate of divergence $\mathrm{Var}(S_n) \to \infty$ of the variance guarantees the CLT. Peligrad and Utev (2006, Prop. 4) proved this result when $\{\varepsilon_j\}$ is a stationary and ergodic m.d.s. For clarity, we provide a brief proof based on the ideas of these works.

Note that if $\{X_j\}$ is a zero-mean Gaussian process, then $(\mathrm{Var}(S_n))^{-1/2} S_n =_D N(0,1)$ for all $n \ge 1$, and the question about the asymptotic distribution of $S_n$ reduces to finding the asymptotics of $\mathrm{Var}(S_n)$.
THEOREM 2.1. Suppose $\{X_j\}$ is a linear process (2.1) where $\{\varepsilon_j\}$ is either a stationary and ergodic m.d.s. or satisfies Assumption 2.1. Then $\sigma_n^2 := \mathrm{Var}(S_n) \to \infty$ implies

$$\sigma_n^{-1} S_n \to_D N(0,1). \qquad (2.2)$$
Proof. For simplicity of notation, set $a_k = 0$, $k = -1, -2, \ldots$, in (2.1), and let $c_{nj} = \sigma_n^{-1}\sum_{k=\max(j,1)}^{n} a_{k-j} = \sigma_n^{-1}\sum_{k=1}^{n} a_{k-j}$, $j \in Z$. Without loss of generality, assume $\sigma_\varepsilon^2 = 1$. Then,

$$\sigma_n^{-1} S_n = \sum_{j=-\infty}^{n} c_{nj}\varepsilon_j, \qquad \sigma_n^{-2}\mathrm{Var}(S_n) = \sum_{j=-\infty}^{n} c_{nj}^2 = 1, \quad n \ge 1. \qquad (2.3)$$
Next we show

$$\sum_{j=-\infty}^{n} (c_{nj} - c_{n,j-1})^2 = o(1), \qquad c_n := \max_{j \le n} |c_{nj}| = o(1), \qquad (2.4)$$

which together with Lemma 2.1 below implies (2.2).
To prove (2.4), note that for any $s \le n$ and $t = 1, 2, \ldots$,

$$\sum_{j=s-t}^{s} c_{nj}^2 = \sum_{j=s-t}^{s}\big(c_{n,j-1} + (c_{nj} - c_{n,j-1})\big)^2,$$

$$c_{n,s}^2 = c_{n,s-t-1}^2 + 2\sum_{j=s-t}^{s} c_{n,j-1}(c_{nj} - c_{n,j-1}) + \sum_{j=s-t}^{s}(c_{nj} - c_{n,j-1})^2.$$

Because $\sum_{j \le n} c_{n,j}^2 = 1$, for every $s \le n$ and $n \ge 1$, $\lim_{t\to\infty} c_{n,s-t-1}^2 = 0$. Since $t$ is arbitrary, take the limit $t \to \infty$ and use the Cauchy-Schwarz inequality to obtain

$$c_{n,s}^2 = 2\sum_{j=-\infty}^{s} c_{n,j-1}(c_{nj} - c_{n,j-1}) + \sum_{j=-\infty}^{s}(c_{nj} - c_{n,j-1})^2 \le 2\Big(\sum_{j=-\infty}^{n} c_{nj}^2\Big)^{1/2} B_n + B_n^2 \le 2B_n + B_n^2, \qquad (2.5)$$

where $B_n := \big(\sum_{j=-\infty}^{n}(c_{nj} - c_{n,j-1})^2\big)^{1/2}$ does not depend on $s$. Since by definition $c_{nj} - c_{n,j-1} = \sigma_n^{-1}(a_{1-j} - a_{n-j+1})$,

$$B_n^2 \le 4\sigma_n^{-2}\sum_{j=0}^{\infty} a_j^2 = o(1),$$

because $\sigma_n^2 \to \infty$ and $\sum_{j=0}^{\infty} a_j^2 < \infty$. This completes the proof of (2.4). ∎
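The CLT of Theorem 2.1 can be illustrated numerically. The following is a minimal Monte Carlo sketch (assuming NumPy is available; the truncation lag, the coefficients $a_k \sim (k+1)^{d-1}$, and the conditionally heteroskedastic innovation recursion are illustrative choices, not taken from the paper): standardized sums of the simulated linear process should be approximately $N(0,1)$.

```python
# Monte Carlo sketch of Theorem 2.1 (illustrative; assumes NumPy).
# The moving-average coefficients a_k ~ (k+1)^(d-1) are square-summable, and
# eps_j = eta_j * sqrt(0.5 + 0.5*eta_{j-1}^2) is a conditionally heteroskedastic
# martingale-difference sequence with E[eps_j^2] = 1.
import numpy as np

rng = np.random.default_rng(0)

n, K, d = 500, 200, 0.2
a = (np.arange(K) + 1.0) ** (d - 1.0)      # truncated long-memory coefficients
w = np.convolve(np.ones(n), a)             # coefficient of each innovation in S_n
var_Sn = np.sum(w ** 2)                    # exact Var(S_n): m.d. innovations are uncorrelated

n_rep = 2000
stats = np.empty(n_rep)
for r in range(n_rep):
    eta = rng.standard_normal(n + K)
    eps = eta[1:] * np.sqrt(0.5 + 0.5 * eta[:-1] ** 2)
    X = np.convolve(eps, a, mode="valid")  # X_j = sum_k a_k eps_{j-k}
    stats[r] = X.sum() / np.sqrt(var_Sn)   # standardized S_n
```

Across replications, the sample mean and standard deviation of `stats` should be near 0 and 1, matching the standard normal limit.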
Remark 2.1. The approach via the Beveridge-Nelson decomposition used in Phillips and Solo (1992, Thm. 3.15) allows one to obtain the CLT for a linear process $\{X_j\}$ with $a_k$'s satisfying $\sum_{k=0}^{\infty} k^2 a_k^2 < \infty$ and with $\{\varepsilon_k\}$ a uniformly square integrable m.d.s. satisfying $n^{-1}\sum_{k=1}^{n} E[\varepsilon_k^2 \mid F_{k-1}] \to \sigma_\varepsilon^2$ a.s. In contrast, Theorem 2.1 gives the same result under a much weaker condition on the $a_k$'s.

Compared to Proposition 4 of Peligrad and Utev (2006), Theorem 2.1 above has a more explanatory character, demonstrating simple technical tools for the proof of such a CLT and relaxing the assumption of stationarity and ergodicity on the m.d.s. $\{\varepsilon_j\}$.
The next lemma provides some sufficient conditions for the CLT of weighted sums of an m.d.s., with the weights being a triangular array of real numbers.

LEMMA 2.1. Suppose $S_n = \sum_{j=-\infty}^{n} d_{nj}\varepsilon_j$, $n \ge 1$, where $\{\varepsilon_j\}$ is a standardized m.d.s. and $\{d_{nj}\}$ are such that $\mathrm{Var}(S_n) = \sum_{j=-\infty}^{n} d_{nj}^2 = 1$ for all $n \ge 1$.

(i) If, in addition, $\{\varepsilon_j\}$ satisfies Assumption 2.1 and

$$d_n := \max_{j \le n} |d_{nj}| = o(1), \qquad (2.6)$$

then

$$S_n \to_D N(0,1). \qquad (2.7)$$

(ii) If, in addition, $\{\varepsilon_j\}$ is stationary ergodic and

$$\sum_{j \le n}(d_{nj} - d_{n,j-1})^2 = o(1), \qquad (2.8)$$

then (2.6) and (2.7) hold.
Proof. Since $\sum_{j=-\infty}^{n} d_{nj}^2 = 1$, for a fixed $n > 1$ we can choose $M = M(n)$ such that

$$\sum_{j=-\infty}^{-M-1} d_{nj}^2 \le 1/\log(n), \quad n > 1. \qquad (2.9)$$

Write

$$S_n = \sum_{j=-\infty}^{-M-1} d_{nj}\varepsilon_j + \sum_{j=-M}^{n} d_{nj}\varepsilon_j =: s_{n,1} + s_{n,2}, \text{ say.}$$

Then $Es_{n,1}^2 = \sum_{j=-\infty}^{-M-1} d_{nj}^2 \le 1/\log(n) \to 0$ implies $s_{n,1} = o_p(1)$. To prove (2.7), it remains to show

$$s_{n,2} \to_D N(0,1). \qquad (2.10)$$

We consider cases (i) and (ii) separately.
Case (i). To show (2.10), by the CLT for m.d.s. (see Hall and Heyde, 1980, Cor. 3.1), it suffices to check the following two conditions:

$$\sum_{j=-M}^{n} d_{nj}^2\, E\big[\varepsilon_j^2 \mid F_{j-1}\big] \to_p 1, \qquad (2.11)$$

$$\sum_{j=-M}^{n} E\big[|d_{nj}\varepsilon_j|^2 I(|d_{nj}\varepsilon_j| \ge \delta) \mid F_{j-1}\big] \to_p 0, \quad \forall\, \delta > 0. \qquad (2.12)$$

Let $q_n := \sum_{j=-M}^{n} d_{nj}^2 V_j$ denote the left-hand side (l.h.s.) of (2.11). We shall show that

$$Eq_n \to 1, \qquad \mathrm{Var}(q_n) \to 0,$$
which yields $E(q_n - 1)^2 \to 0$ and, together with Chebyshev's inequality, will imply (2.11). The first claim follows from $EV_j = E\varepsilon_j^2 = 1$, (2.9), and $\sum_{j=-\infty}^{n} d_{nj}^2 = 1$. To prove the second claim, let $\gamma_0 := \max_{j\in Z} EV_j^2 < \infty$, and set $K = K(n) = [1/d_n]$. By Hölder's inequality,

$$\mathrm{Var}(q_n) = \sum_{j,k=-M}^{n} d_{nj}^2 d_{nk}^2\, \gamma_V(j,k) \le \sum_{j,k=-M;\,|j-k|>K}^{n}[\cdots] + \sum_{j,k=-M;\,|j-k|\le K}^{n}[\cdots]$$
$$\le \delta_K\Big(\sum_{j=-\infty}^{n} d_{nj}^2\Big)^2 + d_n^2 \sum_{j=-M}^{n} d_{nj}^2 \sum_{k:\,|j-k|\le K}\big(|\gamma_V(j,j)\gamma_V(k,k)|\big)^{1/2} \le \delta_K + \gamma_0(2K+1)d_n^2 \to 0,$$

by Assumptions 2.1(a),(b) and (2.6). This completes the proof of (2.11).
To prove (2.12), note that the expected value of the l.h.s. of (2.12) is bounded above by

$$\sum_{j=-M}^{n} d_{nj}^2\, E\big[\varepsilon_j^2 I(|\varepsilon_j| \ge \delta d_n^{-1})\big] \le \max_j E\big[\varepsilon_j^2 I(|\varepsilon_j| \ge \delta d_n^{-1})\big] \to 0,$$

by Assumption 2.1(c) and (2.6), which together with the Markov inequality implies (2.12) and also completes the proof of part (i) of the lemma.
Case (ii). The proof of (2.10) combines arguments used in the proofs of Peligrad and Utev (1997, Thm. 2.1; 2006, Prop. 4). Let $X_{nj} := d_{nj}\varepsilon_j$, $-M \le j \le n$. By Hall and Heyde (1980, Thm. 3.2), to prove (2.10) it suffices to verify

(a) $\max_{-M\le j\le n}|X_{nj}| \to_p 0$, (b) $\sum_{j=-M}^{n} X_{nj}^2 \to_p 1$, and (c) $E\max_{-M\le j\le n} X_{nj}^2 = O(1)$.

First, we show that (2.8) implies (2.6). To see this, use the bound analogous to (2.5), which does not depend on any particular form of $c_{nj}$, to obtain $d_{n,s}^2 \le 2B_n + B_n^2$, where now $B_n := \big(\sum_{j=-\infty}^{n}(d_{nj} - d_{n,j-1})^2\big)^{1/2}$. Hence, (2.8) implies $d_n \to 0$.
Now, claims (a) and (c) follow because for any $\delta > 0$,

$$E\Big[\sum_{j=-M}^{n}(d_{nj}\varepsilon_j)^2 I(|d_{nj}\varepsilon_j| \ge \delta)\Big] \le E\big[\varepsilon_1^2 I(|\varepsilon_1| \ge \delta d_n^{-1})\big]\sum_{j=-\infty}^{n} d_{nj}^2 \to 0.$$
To show (b), we need to verify

$$\sum_{j=-M}^{n} d_{nj}^2\varepsilon_j^2 \to_p 1. \qquad (2.13)$$

We do this by truncation. For a $k \ge 1$, write

$$\sum_{j=-M}^{n} d_{nj}^2\varepsilon_j^2 = \sum_{j=-M}^{n} d_{nj}^2 + \sum_{j=-M}^{n} d_{nj}^2\Big(\varepsilon_j^2 - k^{-1}\sum_{l=1}^{k}\varepsilon_{j+l}^2\Big) + \sum_{j=-M}^{n} d_{nj}^2\Big(k^{-1}\sum_{l=1}^{k}\varepsilon_{j+l}^2 - 1\Big) =: q_{n1} + q_{n2} + q_{n3}.$$
In view of (2.9), $q_{n1} = \sum_{j=-\infty}^{n} d_{nj}^2 + O(1/\log n) \to 1$. Since $E\varepsilon_j^2 = 1$,

$$E|q_{n2}| \le k^{-1}E\Big|\sum_{j=-M}^{n} k d_{nj}^2\varepsilon_j^2 - \sum_{l=1}^{k}\sum_{j=-M+l}^{n+l} d_{n,j-l}^2\varepsilon_j^2\Big|$$
$$\le k^{-1}\sum_{j=-\infty}^{n}\big|k d_{nj}^2 - d_{n,j-1}^2 - \cdots - d_{n,j-k}^2\big|\, E\varepsilon_j^2 + d_n^2\, E\big[\varepsilon_{n+1}^2 + \cdots + \varepsilon_{n+k}^2 + \varepsilon_{-M}^2\big]$$
$$\le k\sum_{j=-\infty}^{n}\big|d_{nj}^2 - d_{n,j-1}^2\big| + (k+1)d_n^2 \to 0, \quad \forall\, k \ge 1,$$

because $\sum_{j=-\infty}^{n}|d_{nj}^2 - d_{n,j-1}^2| \le \big(\sum_{j=-\infty}^{n}(d_{nj} - d_{n,j-1})^2\big)^{1/2}\big(2\sum_{j=-\infty}^{n} d_{nj}^2\big)^{1/2} \to 0$, by (2.8).

Finally, since $\{\varepsilon_j\}$ is stationary and ergodic, by the ergodic theorem (see, e.g., Stout, 1974, Cor. 3.5.2), $E\big|k^{-1}\sum_{l=1}^{k}\varepsilon_{j+l}^2 - E\varepsilon_1^2\big| = E\big|k^{-1}\sum_{l=1}^{k}\varepsilon_l^2 - 1\big| \to 0$, as $k \to \infty$, and thus $E|q_{n3}| \le o(1)\sum_{j=-\infty}^{n} d_{nj}^2 = o(1)$, which implies (2.13) and completes the proof. ∎
Remark 2.2. Note that the above lemma clearly includes the case $\{\varepsilon_j\} \sim \mathrm{IID}(0,1)$. Furthermore, it gives a generalization of Theorem V.1.2.1 of Hájek and Šidák (1967), where $d_{nj} = 0$, $j \le 0$, and $\{\varepsilon_j\} \sim \mathrm{IID}(0,1)$. If $\varepsilon_j \sim \mathrm{IID}(0,1)$, assumption (2.6) implies the weak Lindeberg condition (see Hall and Heyde, 1980, p. 53) and is minimal. Indeed, let $S_n = \sum_{j=1}^{n} d_{nj}\varepsilon_j$ with $d_{nj} = (1 + cn^{-\alpha})^{j-n-1}$, $c > 0$. If $\alpha > 0$, then $\mathrm{Var}(S_n) \to \infty$ and the standardized weights satisfy $\max_{1\le j\le n}|d_{nj}|/\mathrm{Var}(S_n)^{1/2} = o(1)$, so the CLT holds, as also shown in Phillips and Magdalinos (2007). However, if $\alpha = 0$, then $\max_{1\le j\le n}|d_{nj}| = (1+c)^{-1}$, (2.6) fails, and $S_n \to_D \sum_{j=1}^{\infty}(1+c)^{-j}\varepsilon_j$, which is Gaussian only if $\{\varepsilon_j\}$ is Gaussian, as pointed out in Anderson (1959). Peligrad and Utev (1997, Thm. 2.1) derived Lemma 2.1 requiring, instead of Assumption 2.1, that the m.d.s. $\{\varepsilon_j\}$ be a pairwise mixing sequence. In their Example 2.1 they showed that (2.6) and stationarity and ergodicity of the m.d.s. alone are not sufficient for the CLT.
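The role of condition (2.6) in the example of Remark 2.2 can be checked numerically. A minimal sketch (assuming NumPy; the exact exponent convention for the geometric weights is an illustrative reconstruction): after standardizing so that the squared weights sum to one, the largest weight vanishes as $n$ grows when $\alpha > 0$, but stays bounded away from zero when $\alpha = 0$.

```python
# Sketch of the example in Remark 2.2 (illustrative; assumes NumPy).
# d_nj proportional to (1 + c*n^(-alpha))^(j-n-1), normalized so sum_j d_nj^2 = 1.
import numpy as np

def max_weight(n, c=1.0, alpha=0.0):
    j = np.arange(1, n + 1)
    d = (1.0 + c * n ** (-alpha)) ** (j - n - 1.0)  # geometric weights
    d /= np.sqrt(np.sum(d ** 2))                    # enforce Var(S_n) = 1
    return np.abs(d).max()
```

For `alpha=0` the maximal weight converges to a positive constant, so (2.6) fails; for `alpha=0.5` it shrinks toward zero as `n` increases.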
The next corollary provides a straightforward generalization of Lemma 2.1 to a two-sided moving average of an m.d.s., while the result for weighted sums of a stationary ergodic process stated in Proposition 2.1 is useful in various applications.

COROLLARY 2.1. Suppose $S_n = \sum_{j=-\infty}^{\infty} d_{nj}\varepsilon_j$, $n \ge 1$, where $\{\varepsilon_j\}$ is an m.d.s. and $\{d_{nj}\}$ are such that $\sum_{j\in Z} d_{nj}^2 = 1$.

(i) If $\{\varepsilon_j\}$ satisfies Assumption 2.1 and $\max_{j\in Z}|d_{nj}| = o(1)$, then $S_n \to_D N(0, \sigma_\varepsilon^2)$.

(ii) If, in addition, $\{\varepsilon_j\}$ is stationary ergodic and $\sum_{j\in Z}(d_{nj} - d_{n,j-1})^2 = o(1)$, then $S_n \to_D N(0, \sigma_\varepsilon^2)$, and

$$\sum_{j\in Z} d_{nj}^2\varepsilon_j^2 = E\varepsilon_1^2 + o_p(1). \qquad (2.14)$$
PROPOSITION 2.1. Suppose $\{\xi_j\}$ is a stationary ergodic sequence, $E|\xi_1| < \infty$, and $\{z_{nj}\}$ are such that

$$\sum_{j=2}^{n}|z_{nj} - z_{n,j-1}| + |z_{n1}| = o\Big(\sum_{j=1}^{n}|z_{nj}|\Big). \qquad (2.15)$$

Then,

$$\sum_{j=1}^{n} z_{nj}\xi_j = E\xi_1\Big(\sum_{j=1}^{n} z_{nj}\Big) + o_p\Big(\sum_{j=1}^{n}|z_{nj}|\Big). \qquad (2.16)$$

Proof. Since $|z_{nj}| \le \sum_{i=2}^{j}|z_{ni} - z_{n,i-1}| + |z_{n1}|$, $j = 2, \ldots, n$, then by (2.15), $\max_{1\le j\le n}|z_{nj}| = o\big(\sum_{j=1}^{n}|z_{nj}|\big)$, and (2.16) follows by the same argument as in the proof of (2.13). ∎
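Proposition 2.1 is, in effect, a weighted ergodic theorem: for smooth weights satisfying (2.15), the weighted sum divided by the sum of weights concentrates at the mean of the ergodic sequence. A minimal numerical sketch (assuming NumPy; the linear weights and squared-normal sequence are illustrative choices):

```python
# Sketch of Proposition 2.1 (illustrative; assumes NumPy).
import numpy as np

rng = np.random.default_rng(1)

n = 20000
z = np.arange(1, n + 1) / n                # smooth weights z_nj = j/n
# condition (2.15): total variation plus |z_n1| is small relative to sum |z_nj|
tv = np.sum(np.abs(np.diff(z))) + abs(z[0])
xi = rng.standard_normal(n) ** 2           # stationary ergodic, E[xi_1] = 1

ratio = np.sum(z * xi) / np.sum(z)         # weighted average of xi
```

The weighted average `ratio` should be close to the mean $E\xi_1 = 1$, with the total-variation condition (2.15) clearly satisfied for these weights.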
Weighted Sums. We now turn to the asymptotic normality of $W_n$. Let $z_{nj}$, $j, n \ge 1$, be an array of real numbers, and consider the weighted sums

$$W_n = \sum_{j=1}^{n} z_{nj} X_j. \qquad (2.17)$$

The following theorem gives three sufficient (nonequivalent) conditions for the verification of the CLT for $W_n$ for different types of weights $z_{nj}$. Subsequently, they will be strengthened in Proposition 2.2 for easy verification. Here, $\sigma_n^2 := \mathrm{Var}(W_n)$.
THEOREM 2.2. Suppose $\{X_j\}$ is a linear process (2.1) with m.d. innovations $\{\varepsilon_j\}$. Suppose the weights $\{z_{nj}\}$ in $W_n$, and $\{a_j\}$ and $\{\varepsilon_j\}$ in (2.1), satisfy one of the following three conditions:

(i) The m.d.s. $\{\varepsilon_j\}$ satisfies Assumption 2.1, $\max_{1\le j\le n}|z_{nj}| = o(\sigma_n)$, and $\sum_{j=1}^{n} z_{nj}^2 \le C\sigma_n^2$.

(ii) The m.d.s. $\{\varepsilon_j\}$ satisfies Assumption 2.1, $\max_{1\le j\le n}|z_{nj}| = o(\sigma_n)$, and $\sum_{j=0}^{\infty}|a_j| < \infty$.

(iii) The m.d.s. $\{\varepsilon_j\}$ is either stationary and ergodic or satisfies Assumption 2.1, and $|z_{n1}| + \sum_{j=2}^{n}|z_{nj} - z_{n,j-1}| = o(\sigma_n)$.

Then,

$$\sigma_n^{-1} W_n \to_D N(0,1). \qquad (2.18)$$
Proof. Similarly as in (2.3), set $a_k = 0$, $k < 0$, and let $d_{nj} := \sigma_n^{-1}\sum_{k=\max(j,1)}^{n} z_{nk}a_{k-j} \equiv \sigma_n^{-1}\sum_{k=1}^{n} z_{nk}a_{k-j}$, $j \in Z$. Then,

$$\sigma_n^{-1}W_n = \sum_{j=-\infty}^{n} d_{nj}\varepsilon_j. \qquad (2.19)$$

Recall $\mathrm{Var}(\sigma_n^{-1}W_n) = \sigma_\varepsilon^2\sum_{j=-\infty}^{n} d_{nj}^2 = 1$. We will verify that in cases (i) and (ii) the $d_{nj}$'s satisfy (2.6), whereas in case (iii) (2.8) holds, which by Lemma 2.1 proves (2.18).
Case (i). Clearly, here $K_n := \sigma_n/\max_{1\le j\le n}|z_{nj}| \to \infty$, and

$$|d_{nj}| \le \sigma_n^{-1}\sum_{k=1}^{n}|z_{nk}a_{k-j}|\,I\big(|k-j| \ge K_n\big) + \sigma_n^{-1}\sum_{k=1}^{n}|z_{nk}a_{k-j}|\,I\big(|k-j| < K_n\big) =: q_{n,1j} + q_{n,2j}, \text{ say.}$$

By the Cauchy-Schwarz inequality,

$$q_{n,1j} \le \sigma_n^{-1}\Big(\sum_{k=1}^{n} z_{nk}^2\Big)^{1/2}\Big(\sum_{k=1}^{n} a_{k-j}^2\, I\big(|k-j| \ge K_n\big)\Big)^{1/2} \le C\Big(\sum_{i\ge K_n} a_i^2\Big)^{1/2},$$

$$q_{n,2j} \le \sigma_n^{-1}\max_{1\le k\le n}|z_{nk}|\sum_{k=1}^{n}|a_{k-j}|\,I\big(|k-j| < K_n\big) \le K_n^{-1}\big(2K_n\big)^{1/2}\Big(\sum_{i=0}^{\infty} a_i^2\Big)^{1/2} \le CK_n^{-1/2}, \quad j \le n.$$

Hence, assumption (i) and $\sum_{i=0}^{\infty} a_i^2 < \infty$ yield

$$\max_{j\le n}|d_{nj}| \le C\Big(\Big(\sum_{i\ge K_n} a_i^2\Big)^{1/2} + K_n^{-1/2}\Big) \to 0,$$

thereby proving (2.6).
Case (ii). Here, (2.6) follows because

$$\max_{1\le j\le n}|d_{nj}| \le \sigma_n^{-1}\max_{1\le j\le n}|z_{nj}|\sum_{l=0}^{\infty}|a_l| \le C\sigma_n^{-1}\max_{1\le j\le n}|z_{nj}| \to 0.$$
Case (iii). To verify (2.8), define for simplicity $z_{n0} = z_{n,n+1} = 0$. Then one can write

$$d_{nj} - d_{n,j-1} = \sigma_n^{-1}\sum_{k=1}^{n} z_{nk}\big(a_{k-j} - a_{k+1-j}\big) = \sigma_n^{-1}\sum_{k=1}^{n+1}\big(z_{nk} - z_{n,k-1}\big)a_{k-j}.$$

Hence,

$$\sum_{j\in Z}\big(d_{nj} - d_{n,j-1}\big)^2 = \sigma_n^{-2}\sum_{k,s=1}^{n+1}\big(z_{nk} - z_{n,k-1}\big)\big(z_{ns} - z_{n,s-1}\big)\sum_{j\in Z} a_{k-j}a_{s-j}$$
$$\le \sigma_n^{-2}\sum_{k,s=1}^{n+1}|z_{nk} - z_{n,k-1}||z_{ns} - z_{n,s-1}|\sum_{j=0}^{\infty} a_j^2 \le C\sigma_n^{-2}\Big(|z_{n1}| + |z_{nn}| + \sum_{k=2}^{n}|z_{nk} - z_{n,k-1}|\Big)^2 \to 0,$$

by condition (iii) of the theorem, noting in addition that it implies $z_{nn} = o(\sigma_n)$. This completes the proof of the theorem. ∎
Remark 2.3. Assumption (ii) of Theorem 2.2 can be applied in the case of short and negative memory linear processes $\{X_j\}$ that satisfy $\sum_{j=0}^{\infty}|a_j| < \infty$, whereas condition (i) is useful when $\{X_j\}$ has long or short memory. The condition on $z_{nj}$ in (iii) is stronger than that in (i) but allows the m.d.s. $\{\varepsilon_j\}$ to be stationary and ergodic, which is especially tractable.
Remark 2.4. The CLT results of this paper can be applied to the sum $W_n$ of a nonstationary process $\{X_j\}$ for which the first differences $Y_j := X_j - X_{j-1}$ form a linear process, by rewriting $W_n = \sum_{j=1}^{n} z_{nj}X_j = \sum_{k=1}^{n}\big\{\sum_{j=k}^{n} z_{nj}\big\}Y_k + X_0\sum_{j=1}^{n} z_{nj}$. This reduces the problem to the CLT for the weighted sum of the $Y_j$'s with weights $z_{nk}^* := \sum_{j=k}^{n} z_{nj}$.
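The rewriting in Remark 2.4 is a deterministic identity, so it can be verified directly. A minimal sketch (assuming NumPy; the random inputs are arbitrary illustrative data):

```python
# Sketch of the identity in Remark 2.4 (illustrative; assumes NumPy):
# sum_j z_nj X_j equals sum_k z*_nk Y_k + X_0 sum_j z_nj, where
# Y_j = X_j - X_{j-1} and z*_nk = sum_{j>=k} z_nj.
import numpy as np

rng = np.random.default_rng(2)

n = 50
z = rng.standard_normal(n)            # arbitrary weights
Y = rng.standard_normal(n)            # first differences
X0 = rng.standard_normal()
X = X0 + np.cumsum(Y)                 # X_j = X_0 + Y_1 + ... + Y_j

W_direct = np.sum(z * X)
z_star = np.cumsum(z[::-1])[::-1]     # reverse cumulative sums: z*_nk
W_rewritten = np.sum(z_star * Y) + X0 * np.sum(z)
```

Both expressions agree up to floating-point rounding, confirming the reduction to a weighted sum of the differences $Y_j$.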
In applications, the verification of the conditions in Theorem 2.2 for the asymptotic normality of $W_n$ often reduces to analyzing the asymptotic behavior of the variance $\sigma_n^2 \equiv \mathrm{Var}(W_n) = \sum_{j,k=1}^{n} z_{nj}\gamma_X(j-k)z_{nk}$, since the remaining conditions on the weights $z_{nj}$ are usually easy to verify. In the next proposition we provide stronger sufficient conditions, in terms of the spectral density $f$ of $\{X_j\}$ and the weights $z_{nj}$, for analyzing the asymptotic behavior of $\sigma_n^2$ and validating the conditions of Theorem 2.2. In particular, condition (2.20) on the weights is mild and satisfied in most applications. Part (a) requires $f$ to be continuous only at 0, with no restrictions on $f$ at higher frequencies. It is satisfied by spectral densities of autoregressive moving average (ARMA) and seasonal generalized autoregressive moving average (GARMA) models. Part (b) allows the spectral density to be unbounded at the origin, i.e., to have long memory. Part (c) focuses on the case when $f$ is bounded away from 0 over the whole spectrum, which includes the case of long memory and seasonal long memory models; see, e.g., Granger and Joyeux (1980), Hosking (1981), and Gray, Zhang, and Woodward (1989). Parts (a) and (b) are applicable when the m.d.s. $\{\varepsilon_j\}$ is stationary and ergodic.
PROPOSITION 2.2. Let $\{X_j\}$, $z_{nj}$, and $W_n$ be as in (2.1) and (2.17). Assume that the m.d.s. $\{\varepsilon_j\}$ is either stationary and ergodic or satisfies Assumption 2.1. Then the following hold.

(a) Suppose $f(u) \to f(0)$, $u \to 0$, $0 < f(0) < \infty$, and

$$|z_{n1}| + \sum_{j=2}^{n}|z_{nj} - z_{n,j-1}| = o\Big(\Big(\sum_{j=1}^{n} z_{nj}^2\Big)^{1/2}\Big). \qquad (2.20)$$

Then,

$$\mathrm{Var}(W_n) \sim 2\pi f(0)\sum_{j=1}^{n} z_{nj}^2. \qquad (2.21)$$

Moreover, the conditions of Theorem 2.2(iii) are satisfied, and CLT (2.18) holds.

(b) If $f(u) \ge c > 0$, $|u| \le u_0$, for some $c > 0$ and $u_0 > 0$, and (2.20) holds, then the conditions of Theorem 2.2(iii) are satisfied, and CLT (2.18) holds.

(c) Suppose Assumption 2.1 is satisfied, there exists $c > 0$ such that $f(u) \ge c > 0$ for all $u \in [-\pi, \pi]$, and

$$\max_{1\le j\le n}|z_{nj}| = o\Big(\Big(\sum_{j=1}^{n} z_{nj}^2\Big)^{1/2}\Big). \qquad (2.22)$$

Then the conditions of Theorem 2.2(i) are satisfied, and CLT (2.18) holds.
Proof. (a) Let $G(u) := \sum_{j=1}^{n} e^{iju} z_{nj}$, $|u| \le \pi$. Since $\gamma_X(k) = \int_{-\pi}^{\pi} e^{iku} f(u)\,du$,

$$\sigma_n^2 = \sum_{j,k=1}^{n} z_{nj}\gamma_X(j-k)z_{nk} = \int_{-\pi}^{\pi} f(u)|G(u)|^2\,du = 2\pi f(0)\sum_{j=1}^{n} z_{nj}^2 + i_n, \quad i_n := \sigma_n^2 - 2\pi f(0)\sum_{j=1}^{n} z_{nj}^2. \qquad (2.23)$$

It remains to show that

$$i_n = \sigma_n^2 - 2\pi f(0)\sum_{j=1}^{n} z_{nj}^2 = o\Big(\sum_{j=1}^{n} z_{nj}^2\Big), \qquad (2.24)$$

which proves (2.21) and together with (2.20) verifies condition (iii) of Theorem 2.2 and thus (2.18).
Verification of (2.24) is based on two facts, which will be proved later:

$$\int_{-\pi}^{\pi}|G(u)|^2\,du = 2\pi\sum_{j=1}^{n} z_{nj}^2, \qquad \sup_{|u|\le\pi}|uG(u)| = o\Big(\Big(\sum_{j=1}^{n} z_{nj}^2\Big)^{1/2}\Big). \qquad (2.25)$$

We proceed as follows. Let $\epsilon > 0$. Choose $\delta > 0$ such that $\sup_{0\le u\le\delta}|f(u) - f(0)| \le \epsilon$. Then by (2.25),

$$|i_n| \le \int_{|u|\le\pi}|f(u) - f(0)||G(u)|^2\,du \le \epsilon\int_{|u|\le\delta}|G(u)|^2\,du + \int_{\delta<|u|\le\pi}|f(u) - f(0)|\,\delta^{-2}|uG(u)|^2\,du$$
$$\le 2\pi\epsilon\sum_{j=1}^{n} z_{nj}^2 + o\Big(\sum_{j=1}^{n} z_{nj}^2\Big)\int_{\delta<|u|\le\pi}|f(u) - f(0)|\,du \le 2\pi\epsilon\sum_{j=1}^{n} z_{nj}^2 + o\Big(\sum_{j=1}^{n} z_{nj}^2\Big),$$

since $\int_{|u|\le\pi} f(u)\,du = EX_j^2 < \infty$ and $f(0) < \infty$, which yields (2.24).
To show the first claim of (2.25), use $\int_{-\pi}^{\pi} e^{isu}\,du = 0$ for $s \ne 0$, to obtain

$$\int_{-\pi}^{\pi}|G(u)|^2\,du = \int_{-\pi}^{\pi}\sum_{j,k=1}^{n} e^{i(j-k)u}z_{nj}z_{nk}\,du = 2\pi\sum_{j=1}^{n} z_{nj}^2.$$
To show the second claim, use summation by parts to write

$$G(u) = \sum_{j=1}^{n-1}\Big(\sum_{l=1}^{j} e^{ilu}\Big)\big(z_{nj} - z_{n,j+1}\big) + z_{nn}\sum_{l=1}^{n} e^{ilu}.$$

For $j = 1, \ldots, n$, one can bound

$$\Big|\sum_{l=1}^{j} e^{ilu}\Big| \le \Big|\frac{\sin(ju/2)}{\sin(u/2)}\Big| \le \pi|u|^{-1}, \quad 0 < |u| \le \pi.$$

Therefore, $|uG(u)| \le \pi\big(\sum_{j=1}^{n-1}|z_{nj} - z_{n,j+1}| + |z_{nn}|\big) = o\big(\big(\sum_{j=1}^{n} z_{nj}^2\big)^{1/2}\big)$, by (2.20), which completes the proof of (2.25) and part (a) of the proposition.
(b) Because here $f(u) \ge c$, $|u| \le u_0$, and by (2.25),

$$\sigma_n^2 = \int_{-\pi}^{\pi} f(u)|G(u)|^2\,du \ge c\int_{|u|\le u_0}|G(u)|^2\,du = c\Big(\int_{|u|\le\pi}|G(u)|^2\,du - \int_{u_0<|u|\le\pi}|G(u)|^2\,du\Big) = 2\pi c\,(1 + o(1))\sum_{j=1}^{n} z_{nj}^2.$$

This together with (2.20) verifies (iii) of Theorem 2.2, which implies (2.18).

(c) Assumption $f(u) \ge c > 0$ implies

$$\sigma_n^2 \ge c\int_{-\pi}^{\pi}|G(u)|^2\,du = 2\pi c\sum_{j=1}^{n} z_{nj}^2, \qquad (2.26)$$

which together with (2.22) yields (i) of Theorem 2.2. This also implies (2.18) and completes the proof of the proposition. ∎
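The variance approximation (2.21) can be checked numerically for a process with a known spectral density. A minimal sketch (assuming NumPy; the AR(1) model, Gaussian-shaped weights, and parameter values are illustrative choices): for an AR(1) process with coefficient $\varphi$, $\gamma(k) = \sigma^2\varphi^{|k|}/(1-\varphi^2)$ and $2\pi f(0) = \sigma^2/(1-\varphi)^2$, so the exact quadratic-form variance should be close to $2\pi f(0)\sum_j z_{nj}^2$.

```python
# Sketch of the variance approximation (2.21) for an AR(1) process
# (illustrative; assumes NumPy; phi, sigma2, b are arbitrary choices).
import numpy as np

n, phi, sigma2, b = 2000, 0.5, 1.0, 0.1
j = np.arange(1, n + 1)
z = np.exp(-((j / n - 0.5) ** 2) / (2 * b ** 2))   # smooth kernel-type weights

gamma0 = sigma2 / (1 - phi ** 2)                   # gamma(0) for AR(1)
var_Wn = gamma0 * np.sum(z ** 2)                   # k = 0 term of the quadratic form
for k in range(1, n):
    gk = gamma0 * phi ** k                         # gamma(k) = gamma(0)*phi^k
    if gk < 1e-16:
        break
    var_Wn += 2 * gk * np.sum(z[:-k] * z[k:])      # off-diagonal terms

approx = (sigma2 / (1 - phi) ** 2) * np.sum(z ** 2)  # 2*pi*f(0) * sum z^2
ratio = var_Wn / approx
```

With a smooth weight profile and a bandwidth much larger than the correlation length of the AR(1) process, the ratio of the exact variance to the approximation (2.21) is close to one.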
Remark 2.5. Proposition 2.2 verifies the CLT for smooth weights $z_{nj}$ and a stationary and ergodic m.d.s. $\{\varepsilon_j\}$; e.g., in Corollary 4.4 below it is shown that (2.20) holds for kernel weights $z_{nj} = K((nx - j)/(nb))$ in nonparametric regression setups. As long as $f$ is continuous at zero, (2.20) yields the asymptotic behavior (2.21) of the variance and the asymptotic normality of $W_n$.

If (2.20) does not hold, Theorem 2.2(i) or Theorem 2.2(ii) may be applied. For instance, the alternating weights $z_{nj} = (-1)^j = e^{i\pi j}$ do not satisfy (2.20), but Theorem 2.2(i) is applicable. Indeed, $\max_j|z_{nj}| = 1$ and $\sum_{j=1}^{n} z_{nj}^2 = n$. In addition, if $f(u) \to f(\pi) > 0$, $u \to \pi$, then, letting $D_n(u) := \sum_{l=1}^{n} e^{ilu}$,

$$\mathrm{Var}(W_n) = \sum_{j,k=1}^{n} z_{nj}\gamma_X(j-k)z_{nk} = \int_{-\pi}^{\pi}|D_n(u+\pi)|^2 f(u)\,du$$
$$= \int_{0}^{\pi} f(u-\pi)|D_n(u)|^2\,du + \int_{-\pi}^{0} f(u+\pi)|D_n(u)|^2\,du = f(\pi)\int_{-\pi}^{\pi}|D_n(u)|^2\,du + o(n) = 2\pi f(\pi)n + o(n),$$

using $f(-\pi) = f(\pi)$, as in the proof of (2.21). This shows the applicability of Theorem 2.2(i) and hence the CLT for $W_n$. Note that here (2.21) for $\mathrm{Var}(W_n)$ does not hold.
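The alternating-weights computation of Remark 2.5 can be checked the same way (a sketch assuming NumPy; the AR(1) model and parameters are illustrative): for an AR(1) spectral density, $2\pi f(\pi) = \sigma^2/(1+\varphi)^2$, and $\mathrm{Var}(W_n)$ is approximately $2\pi f(\pi)\,n$ rather than $2\pi f(0)\sum_j z_{nj}^2$.

```python
# Sketch of the alternating-weights case in Remark 2.5
# (illustrative; assumes NumPy).
import numpy as np

n, phi, sigma2 = 2000, 0.5, 1.0
j = np.arange(1, n + 1)
z = (-1.0) ** j                       # alternating weights: (2.20) fails

gamma0 = sigma2 / (1 - phi ** 2)      # AR(1) covariances gamma(k) = gamma0*phi^k
var_Wn = gamma0 * n                   # k = 0 term: sum z^2 = n
for k in range(1, 120):
    var_Wn += 2 * gamma0 * phi ** k * np.sum(z[:-k] * z[k:])

ratio = var_Wn / ((sigma2 / (1 + phi) ** 2) * n)   # compare to 2*pi*f(pi)*n
```

The ratio is close to one, in line with the asymptotics $\mathrm{Var}(W_n) = 2\pi f(\pi)n + o(n)$ derived in the remark.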
The following proposition, which is valid for any short memory covariance-stationary process $\{X_j\}$, provides an upper bound for $\mathrm{Var}(W_n)$ and analyzes its asymptotic behavior.
PROPOSITION 2.3. Let $\{X_j\}$ be a covariance-stationary process with zero mean, finite variance, and covariance function $\gamma$ such that

$$\sum_{k\in Z}|\gamma(k)| < \infty. \qquad (2.27)$$

Let $W_n$ be as in (2.17) with $z_{nj}$ satisfying conditions (2.20) and (2.22). Then,

$$\Big(\sum_{j=1}^{n} z_{nj}^2\Big)^{-1}\mathrm{Var}(W_n) \le \sum_{k\in Z}|\gamma(k)|, \qquad \Big(\sum_{j=1}^{n} z_{nj}^2\Big)^{-1}\mathrm{Var}(W_n) \to s^2 := \sum_{k\in Z}\gamma(k). \qquad (2.28)$$

Proof. Under (2.27),

$$f(u) = (2\pi)^{-1}\sum_{k\in Z} e^{-iku}\gamma(k) \le (2\pi)^{-1}\sum_{k\in Z}|\gamma(k)| < \infty, \quad |u| \le \pi,$$
$$f(u) \to f(0) = (2\pi)^{-1}\sum_{k\in Z}\gamma(k) = (2\pi)^{-1}s^2, \quad u \to 0.$$
Thus, by (2.23) and (2.25),

$$\mathrm{Var}(W_n) = \int_{-\pi}^{\pi} f(u)|G(u)|^2\,du \le \sup_{u} f(u)\int_{-\pi}^{\pi}|G(u)|^2\,du \le \sum_{k\in Z}|\gamma(k)|\sum_{j=1}^{n} z_{nj}^2,$$

which proves the first bound of (2.28). The proof of the second bound is the same as that of (2.21). This completes the proof of the proposition. ∎
The following result is a multivariate generalization of Theorem 2.2.

THEOREM 2.3. Let $z_{n,j}^{(i)}$, $i = 1, \ldots, k$, be $k$ arrays of real weights, and let $\{X_j\}$ be a linear process (2.1) with m.d.s. $\{\varepsilon_j\}$. Assume that the sums $W_n^{(i)} := \sum_{j=1}^{n} z_{n,j}^{(i)} X_j$ and $(\sigma_n^{(i)})^2 := \mathrm{Var}(W_n^{(i)})$, $i = 1, \ldots, k$, satisfy one of the conditions (a) or (b):

(a) $\{\varepsilon_j\}$ is stationary and ergodic, and each sum $W_n^{(i)}$, $i = 1, \ldots, k$, satisfies condition (iii) of Theorem 2.2.

(b) $\{\varepsilon_j\}$ satisfies Assumption 2.1, and each sum $W_n^{(i)}$, $i = 1, \ldots, k$, satisfies one of the conditions (i)-(iii) of Theorem 2.2.

Let, for some (positive definite) matrix $\Sigma$,

$$\Big[\mathrm{Cov}\big(W_n^{(i)}/\sigma_n^{(i)},\, W_n^{(j)}/\sigma_n^{(j)}\big)\Big]_{i,j=1,\ldots,k} \to \Sigma. \qquad (2.29)$$

Then,

$$\big(W_n^{(1)}/\sigma_n^{(1)}, \ldots, W_n^{(k)}/\sigma_n^{(k)}\big) \to_D N_k(0, \Sigma). \qquad (2.30)$$
Proof. Similarly as in (2.19), write

$$S_n^{(i)} := W_n^{(i)}/\sigma_n^{(i)} = \sum_{j=-\infty}^{n} d_{nj}^{(i)}\varepsilon_j, \quad i = 1, \ldots, k.$$

To prove (2.30), in view of the Cramér-Wold device, it suffices to show that for every $a = (a_1, \ldots, a_k) \in R^k$ and $k \ge 1$,

$$S_n := a_1 S_n^{(1)} + \cdots + a_k S_n^{(k)} \to_D N(0, a\Sigma a^T). \qquad (2.31)$$

Write $S_n = \sum_{j=-\infty}^{n} d_{nj}\varepsilon_j$, where $d_{nj} = a_1 d_{nj}^{(1)} + \cdots + a_k d_{nj}^{(k)}$. Condition (2.29) implies that

$$\mathrm{Var}(S_n) = E\varepsilon_1^2\sum_{j=-\infty}^{n} d_{nj}^2 \to a\Sigma a^T.$$

Assume that the m.d.s. satisfies Assumption 2.1. As seen in the proof of Theorem 2.2, any one of the conditions (i), (ii), or (iii) assures that $\max_{j\le n}|d_{nj}^{(i)}| \to 0$, $i = 1, \ldots, k$, which yields $\max_{j\le n}|d_{nj}| \to 0$. Hence, the coefficients $d_{nj}$ of $S_n$ satisfy the assumptions of Lemma 2.1, which implies (2.31).

Assume that the m.d.s. is stationary ergodic. In the proof of Theorem 2.2 it was shown that condition (iii) yields $\sum_{j\le n}\big(d_{nj}^{(i)} - d_{n,j-1}^{(i)}\big)^2 \to 0$, $i = 1, \ldots, k$. Consequently, $\sum_{j\le n}(d_{nj} - d_{n,j-1})^2 \to 0$, and (2.31) follows by Lemma 2.1. ∎
3. WEAK CONVERGENCE OF PARTIAL SUM PROCESSES
A number of econometric applications require the weak convergence of a suitably standardized partial sum process $S_n(\tau) = \sum_{j=1}^{[n\tau]} X_j$, $\tau > 0$, to some limit process $S(\tau)$, $0 \le \tau \le 1$. Observe that for each $n$, $S_n(\tau)$, $0 \le \tau \le 1$, is a step function in $\tau$, belonging to the Skorokhod functional space $D[0,1]$.

From Billingsley (1968), we recall that a sequence of stochastic processes $\{Z_n(\tau)\}$, $n \ge 1$, in $D[0,1]$ is said to converge weakly to a stochastic process $Z(\tau) \in C[0,1]$, and we write $Z_n \Rightarrow Z$, if every finite dimensional distribution of $\{Z_n(\tau)\}$ converges to that of $Z(\tau)$ and if $\{Z_n(\tau)\}$ is tight with respect to the uniform metric; see also Pollard (1984). The uniform topology is stronger than the Skorokhod $J_1$-topology, and the verification of tightness in the uniform metric is relatively easier.
Using the arguments in Section 12 and Theorem 15.5, p. 127, of Billingsley (1968) (see also Pollard, 1984, Ch. V.1, Thm. 3), one can show that a sufficient condition for tightness of $\{Z_n\}$ is the following: there exists a sequence of nondecreasing right continuous functions $F_n$ on $[0,1]$ that are uniformly bounded and converge uniformly to a continuous function $F$ such that for some $\alpha > 1$, $\gamma > 0$, and all $0 \le s < t \le 1$,

$$E|Z_n(t) - Z_n(s)|^{\gamma} \le C[F_n(t) - F_n(s)]^{\alpha}, \quad n \ge 1, \qquad (3.1)$$

where $C$ may depend on $\alpha$, $\gamma$, but not on $s$, $t$, and $n$.
Now, let $S_n = S_n(1)$ and $\sigma_n^2 := \mathrm{Var}(S_n)$. Our goal here is to establish the weak convergence of $\{\sigma_n^{-1}S_n(\tau)\}$. For this purpose, we need to establish that its finite dimensional distributions converge to those of the limit process, denoted by $\to_{fdd}$, and that the process is tight in the uniform metric. We first focus on the finite dimensional convergence.

According to the Lamperti (1962) theorem, if

$$\big\{\sigma_n^{-1}S_n(\tau)\big\} \to_{fdd} \big\{S(\tau)\big\}, \qquad (3.2)$$

then for some $H \in (0,1)$ and a positive slowly varying function $L$,

$$\sigma_n^2 = \mathrm{Var}(S_n) = n^{2H}L(n). \qquad (3.3)$$

In most applications,

$$\sigma_n^2 = \mathrm{Var}(S_n) \sim s^2 n^{2H}, \quad \text{for some } 0 < H < 1, \qquad (3.4)$$

where $0 < s^2 < \infty$ is the long-run variance of $S_n$.
In the case of a linear process $\{X_j\}$ of (2.1), with m.d. stationary and ergodic innovations $\{\varepsilon_j\}$, (3.4) is also sufficient for (3.2).

The limits in this section will be described by the fractional Brownian motion (fBm) $B_H(\tau)$, $0 \le \tau \le 1$, with parameter $0 < H < 1$, which is a Gaussian process with mean $EB_H(t) \equiv 0$ and covariance function

$$r_H(s,t) := \frac{1}{2}\big(|s|^{2H} + |t|^{2H} - |s-t|^{2H}\big), \quad 0 \le s, t \le 1. \qquad (3.5)$$

Note that if $H = 1/2$, then $B_{1/2} = B$ is a Brownian motion. The convergence of the finite dimensional distributions of $\sigma_n^{-1}S_n(\tau)$, $0 \le \tau \le 1$, is established in the following proposition.
PROPOSITION 3.1. Suppose $\{X_j\}$ is a linear process (2.1) with the m.d.s. $\{\varepsilon_j\}$ being either stationary and ergodic or satisfying Assumption 2.1, and that (3.4) holds. Then,

$$\big\{\sigma_n^{-1}S_n(\tau)\big\}_{\tau>0} \to_{fdd} \big\{B_H(\tau)\big\}_{\tau>0}, \qquad (3.6)$$

where $B_H$ is the fBm with parameter $H$.
Proof. Let $T_n(\tau) := \sigma_n^{-1}S_n(\tau)$. Assumption (3.4) implies that for any $0 < \tau < 1$,

$$\mathrm{Var}(T_n(\tau)) = \frac{\mathrm{Var}(S_n(\tau))}{\mathrm{Var}(S_n)} = \frac{([n\tau])^{2H}(1+o(1))}{n^{2H}(1+o(1))} \to \tau^{2H},$$

and hence,

$$\mathrm{Cov}(T_n(t), T_n(s)) = \tfrac{1}{2}\big\{\mathrm{Var}(T_n(t)) + \mathrm{Var}(T_n(s)) - \mathrm{Var}\big(T_n(t) - T_n(s)\big)\big\} \to r_H(t,s), \quad 0 < s < t, \qquad (3.7)$$

where $r_H$ is as in (3.5). In view of the Cramér-Wold device, it suffices to show that for all $a_1, \ldots, a_k \in R$, $t_1, \ldots, t_k > 0$, and $k \ge 1$, $\bar S_n := a_1 S_n(t_1) + \cdots + a_k S_n(t_k)$ satisfies

$$\sigma_n^{-1}\bar S_n \to_D \bar S := a_1 B_H(t_1) + \cdots + a_k B_H(t_k).$$

By (3.7), $\sigma_n^{-2}\mathrm{Var}(\bar S_n)$ converges, and the rest of the proof repeats the lines of the proof of Theorem 2.1. ∎
The following proposition is a simplified version of the result obtained by Taqqu (1975). It shows that long memory of the summands $\{X_j\}$ a priori guarantees the tightness of the normalized partial sum process. It does not require $\{X_j\}$ to be a linear process.

PROPOSITION 3.2. Let $\{X_j\}$ be a second-order stationary process satisfying (3.4), with $1/2 < H < 1$. Then $\sigma_n^{-1}S_n(\tau)$ is tight with respect to (w.r.t.) the uniform metric. In addition, if $\sigma_n^{-1}S_n \to_{fdd} S$, where $S(u)$, $0 \le u \le 1$, is a stochastic process, then

$$\sigma_n^{-1}S_n(\tau) \Rightarrow S(\tau), \quad \text{in } D[0,1] \text{ and the uniform metric.} \qquad (3.8)$$
Proof. To check tightness, we shall verify (3.1) for the process $T_n(t) := \sigma_n^{-1}S_n(t)$. Let $F_n(t) := [nt]/n$ and $F(t) := t$, $0 \le t \le 1$. Observe that $\sup_t|F_n(t) - F(t)| \to 0$ and $F$ is continuous on $[0,1]$. By covariance stationarity of the increments,

$$E\big|T_n(t) - T_n(s)\big|^2 = E\Big(\sigma_n^{-1}\sum_{j=1}^{[nt]-[ns]} X_j\Big)^2 = \Big(\frac{[nt]-[ns]}{n}\Big)^{2H}\frac{1+o(1)}{1+o(1)} \le C\Big(\frac{[nt]-[ns]}{n}\Big)^{2H} = C\big|F_n(t) - F_n(s)\big|^{2H}, \qquad (3.9)$$

for some $C > 0$. Since $\alpha := 2H > 1$, this verifies (3.1) for the $T_n$ process with $\gamma = 2$ and completes the proof. ∎
The following lemma, where $S_n = S_n(1)$, gives a useful bound for the moments of the sums of a linear process with martingale difference (m.d.) innovations and is useful in proving the tightness of the process $\{T_n(t)\}$ in the cases of short and negative memory processes $\{X_j\}$. It extends the well-known Burkholder-Rosenthal inequality for martingales and some other inequalities involving m.d.s. of Dharmadhikari, Fabian, and Jogdeo (1968, Theorem in Section 1) and Borovskikh and Korolyuk (1997, Chap. 3).

LEMMA 3.1. Let $\{X_j\}$ be a linear process (2.1) with m.d. innovations $\{\varepsilon_j\}$ such that $E\varepsilon_j^2 = \sigma_\varepsilon^2$ for all $j$, and $\mu_p := \max_j E|\varepsilon_j|^p < \infty$ for some $p \ge 2$. Then

$$E|S_n|^p \le c\big(ES_n^2\big)^{p/2}, \quad n \ge 1, \qquad (3.10)$$

where $c > 0$ depends only on $p$ and $\mu_p$.
Proof. Setting $d_{nj} = \sum_{k=\max(j,1)}^{n} a_{k-j}$, write $S_n = \sum_{j=-\infty}^{n} d_{nj}\varepsilon_j$. Because $E\varepsilon_j^2 = \sigma_\varepsilon^2$, $ES_n^2 = \sigma_\varepsilon^2\sum_{j=-\infty}^{n} d_{nj}^2 < \infty$, and by Lemma 3.2 below,

$$E|S_n|^p = E\Big|\sum_{j=-\infty}^{n} d_{nj}\varepsilon_j\Big|^p \le C\max_j E|\varepsilon_j|^p\Big(\sum_{j=-\infty}^{n} d_{nj}^2\Big)^{p/2} = c\big(ES_n^2\big)^{p/2},$$

where $c = C\max_j E|\varepsilon_j|^p/\sigma_\varepsilon^p$ does not depend on $n$. ∎


LEMMA 3.2. Let $p \ge 2$ and let $\{Y_j, F_j, 1 \le j \le n\}$ be an m.d.s. with $\max_j E|Y_j|^p < \infty$. Then, for every $n \ge 1$,

$$E\Big|\sum_{j=1}^{n} Y_j\Big|^p \le C_p\Big(\sum_{j=1}^{n}\big(E|Y_j|^p\big)^{2/p}\Big)^{p/2}, \qquad (3.11)$$

with a constant $C_p > 0$ depending only on $p$. The inequality (3.11) remains valid also for $n = \infty$.
Proof. Let $n < \infty$. For $p \ge 2$, by the Burkholder–Rosenthal inequality (see Hall and Heyde, 1980, p. 24),
$$E\Big|\sum_{j=1}^{n} Y_j\Big|^p \le C_p\Big\{\sum_{j=1}^{n} E|Y_j|^p + E\Big(\sum_{j=1}^{n} E\big[Y_j^2 \mid \mathcal{F}_{j-1}\big]\Big)^{p/2}\Big\}. \qquad (3.12)$$
Recall the fact that for any real numbers $a$, $b$, and for $0 < \gamma \le 1$, $|a+b|^\gamma \le |a|^\gamma + |b|^\gamma$. Apply this fact with $\gamma = 2/p \le 1$ to obtain
$$\Big(\sum_{j=1}^{n} E|Y_j|^p\Big)^{2/p} \le \sum_{j=1}^{n} \big(E|Y_j|^p\big)^{2/p}. \qquad (3.13)$$
To bound the second term on the r.h.s. of (3.12), first use the Cauchy–Schwarz inequality for the conditional expectation with $p/2 \ge 1$ to obtain $E[Y_j^2 \mid \mathcal{F}_{j-1}] \le \big(E[|Y_j|^p \mid \mathcal{F}_{j-1}]\big)^{2/p}$, $1 \le j \le n$. Next, by the Minkowski inequality, for any r.v.s $X$ and $Y$,
$$\big(E|X+Y|^r\big)^{1/r} \le \big(E|X|^r\big)^{1/r} + \big(E|Y|^r\big)^{1/r}, \qquad r \ge 1.$$
These facts, in turn, imply
$$E\Big(\sum_{j=1}^{n} E\big[Y_j^2 \mid \mathcal{F}_{j-1}\big]\Big)^{p/2} \le E\Big(\sum_{j=1}^{n} \big(E[|Y_j|^p \mid \mathcal{F}_{j-1}]\big)^{2/p}\Big)^{p/2} \le \Big(\sum_{j=1}^{n} \big(E|Y_j|^p\big)^{2/p}\Big)^{p/2},$$
which together with (3.13) and (3.12) proves (3.11).
Consider now the case of an infinite sum. Let $p \ge 2$. We claim
$$E\Big|\sum_{j=1}^{\infty} Y_j\Big|^p \le C_p \Big(\sum_{j=1}^{\infty} \big(E|Y_j|^p\big)^{2/p}\Big)^{p/2}. \qquad (3.14)$$
This inequality is trivially true if the r.h.s. of (3.14) is infinite. Assume it is finite. By (3.11), for any $1 < m \le n < \infty$,
$$E\Big|\sum_{j=m}^{n} Y_j\Big|^p \le C_p \Big(\sum_{j=m}^{n} \big(E|Y_j|^p\big)^{2/p}\Big)^{p/2} \to 0, \qquad m, n \to \infty.$$
Hence, by the Cauchy convergence criterion, $\sum_{j=1}^{\infty} Y_j$ converges in the $L_p$ norm, and $E|\sum_{j=n+1}^{\infty} Y_j|^p \to 0$. Next, by the Minkowski inequality and (3.11),
$$E^{1/p}\Big|\sum_{j=1}^{\infty} Y_j\Big|^p \le E^{1/p}\Big|\sum_{j=1}^{n} Y_j\Big|^p + E^{1/p}\Big|\sum_{j=n+1}^{\infty} Y_j\Big|^p \le C_p^{1/p} \Big(\sum_{j=1}^{n} \big(E|Y_j|^p\big)^{2/p}\Big)^{1/2} + E^{1/p}\Big|\sum_{j=n+1}^{\infty} Y_j\Big|^p.$$
Claim (3.14) now is proved upon taking the limit as $n \to \infty$ in this bound. ∎
We are now ready to state and prove the following weak convergence result.

THEOREM 3.1. Assume that the m.d.s. $\{\varepsilon_j\}$ in the linear process $\{X_j\}$ is either stationary and ergodic or satisfies Assumption 2.1. Suppose that $S_n$ satisfies (3.4) with $0 < H < 1$. For $0 < H \le 1/2$, assume, in addition, that $\max_j E|\varepsilon_j|^p < \infty$, for some $p > 1/H$. Then,
$$\sigma_n^{-1} S_n(\cdot) \Rightarrow B_H(\cdot), \quad \text{in } D[0,1] \text{ and the uniform metric}, \qquad (3.15)$$
where $B_H$ is a fBm.
Proof. Proposition 3.1 implies the finite dimensional convergence. To prove tightness, we shall use (3.10). For $H \le 1/2$, by (3.10) and (3.9),
$$E|T_n(t) - T_n(s)|^p \le c\big(E(T_n(t) - T_n(s))^2\big)^{p/2} \le c\big|F_n(t) - F_n(s)\big|^{Hp}.$$
This verifies (3.1) for the $T_n$ process with $\gamma = p$, $\alpha = Hp > 1$. For $H > 1/2$, (3.15) follows from Proposition 3.2. This completes the proof. ∎
Remark 3.1. A functional CLT for the partial sums process of a short memory linear process $\{X_j\}$ of (2.1) with m.d. innovations and $a_j$'s such that $\sum_{j=0}^{\infty} j|a_j| < \infty$ was established in Phillips and Solo (1992), and for more general stationary processes in Peligrad and Utev (2005).
4. APPLICATIONS

In this section we illustrate the usefulness of the above results in some important models. In Section 4.1 these results are shown to be applicable to ARCH and stochastic volatility processes and to the sample autocorrelation function (ACF) of an m.d.s. Section 4.2 discusses applications to parametric and nonparametric regression models.
4.1. Conditionally Heteroskedastic Processes

4.1.1. ARCH Process. An important example of a stationary and ergodic m.d.s. is an ARCH($\infty$) process, where
$$\varepsilon_j \equiv r_j = \sigma_j \zeta_j, \qquad \sigma_j^2 = b_0 + \sum_{k=1}^{\infty} b_k r_{j-k}^2, \quad j \in \mathbb{Z}, \quad \{\zeta_j\} \sim \mathrm{IID}(0,1),$$
$$b_0 > 0, \quad b_k \ge 0, \ k \ge 1, \quad \sum_{k=1}^{\infty} b_k < 1. \qquad (4.1)$$
This process was introduced by Robinson (1991). It includes the parametric ARCH and GARCH models of Engle (1982) and Bollerslev (1988). Obviously, $\{r_j\}$ is an m.d.s.: $E[r_j \mid \mathcal{F}_{j-1}] = 0$, $V_j = E[r_j^2 \mid \mathcal{F}_{j-1}] = \sigma_j^2$, and the equations in (4.1) have a unique second-order stationary solution; see, e.g., Giraitis, Kokoszka, and Leipus (2000). Since $r_j = \phi(\zeta_j, \zeta_{j-1}, \zeta_{j-2}, \ldots)$, where $\phi$ is a measurable function, by Theorem 3.5.8 of Stout (1974), $r_j$ is stationary and ergodic.
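The martingale-difference structure of (4.1) is easy to check by simulation. The sketch below (ours, not the authors' code; Python, stdlib only) simulates the ARCH(1) special case $b_k = 0$, $k \ge 2$, with illustrative parameters $b_0 = 0.2$, $b_1 = 0.5$, and compares the sample mean and variance with the theoretical values $Er_j = 0$ and $Er_j^2 = b_0/(1 - b_1) = 0.4$:

```python
import math
import random

def simulate_arch1(n, b0=0.2, b1=0.5, seed=1, burn=500):
    """Simulate r_j = sigma_j*zeta_j with sigma_j^2 = b0 + b1*r_{j-1}^2."""
    rng = random.Random(seed)
    r = []
    sig2 = b0 / (1.0 - b1)          # start at the stationary variance
    for _ in range(n + burn):
        x = math.sqrt(sig2) * rng.gauss(0.0, 1.0)
        r.append(x)
        sig2 = b0 + b1 * x * x      # volatility recursion from (4.1)
    return r[burn:]                 # drop burn-in

r = simulate_arch1(20000)
mean_r = sum(r) / len(r)                 # near E r_j = 0 (m.d. property)
var_r = sum(x * x for x in r) / len(r)   # near b0/(1 - b1) = 0.4
```

The choice $b_1 = 0.5$ keeps $E\zeta_0^4\, b_1^2 < 1$, so the fourth moment is finite and the sample variance is well behaved.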
4.1.2. Squared ARCH Process. A centered squared ARCH process
$$X_j = r_j^2 - Er_j^2,$$
with $r_j$ as in (4.1) and satisfying $E^{1/2}[\zeta_0^4] \sum_{k=1}^{\infty} b_k < 1$, has a unique fourth-order stationary solution, see, e.g., Giraitis, Leipus, and Surgailis (2007), and is covered by the setup in (2.1). Indeed, by (4.1),
$$X_j = \sum_{k=1}^{\infty} b_k X_{j-k} + \varepsilon_j, \qquad \varepsilon_j = (\zeta_j^2 - 1)\sigma_j^2, \qquad (4.2)$$
where $\{\varepsilon_j\}$ is an m.d.s. with $E[\varepsilon_j \mid \mathcal{F}_{j-1}] = 0$, $V_j = E[\varepsilon_j^2 \mid \mathcal{F}_{j-1}] = \mathrm{Var}(\zeta_0^2)\,\sigma_j^4$, and $E\varepsilon_0^2 < \infty$. Again, since $\varepsilon_j = \phi(\zeta_j, \zeta_{j-1}, \zeta_{j-2}, \ldots)$, where $\phi$ is a measurable function, $\{\varepsilon_j\}$ is a stationary and ergodic process.

By Proposition 4.1, under some additional conditions, $\{r_j\}$ and $\{\varepsilon_j\}$ satisfy Assumption 2.1. Hence, the results for the sums $S_n$ and weighted sums $W_n$ of this paper are applicable also to a centered squared ARCH process (4.2) and to a linear process (2.1) with ARCH innovations $\varepsilon_j = r_j$ of (4.1).
PROPOSITION 4.1. Let $r_j$ be as in (4.1), and $\varepsilon_j$ as in (4.2). Then,

(i) $\{r_j\}$ and $\{\varepsilon_j\}$ are stationary and ergodic m.d. sequences.

(ii) $r_j$ satisfies Assumption 2.1, if $E^{1/2}(\zeta_0^4) \sum_{k=1}^{\infty} b_k < 1$.

(iii) $\varepsilon_j$ satisfies Assumption 2.1, if $E^{1/4}(\zeta_0^8) \sum_{k=1}^{\infty} b_k < 1$.
Proof. Claim (i) was proved above, just before the statement of this proposition.

(ii) First we verify Assumption 2.1(a) for $V_j = \sigma_j^2 = E[r_j^2 \mid \mathcal{F}_{j-1}]$. Recursion (4.1) yields the Volterra expansion of $\sigma_j^2$, see, e.g., Giraitis, Kokoszka, and Leipus (2000):
$$\sigma_j^2 = b_0\Big(1 + \sum_{k=1}^{\infty} h_{k,j}\Big), \qquad h_{k,j} := \sum_{s_1,\ldots,s_k=1}^{\infty} b_{s_1} \cdots b_{s_k}\, \zeta_{j-s_1}^2 \cdots \zeta_{j-s_1-\cdots-s_k}^2, \quad j \in \mathbb{Z}. \qquad (4.3)$$
To show $\max_j E\sigma_j^4 < \infty$, let $B = \sum_{s=1}^{\infty} b_s$. Since the $b_j$'s are nonnegative and the $\zeta_j$'s are i.i.d. r.v.s, by the Cauchy–Schwarz inequality (C–S),
$$Eh_{k,j}^2 \le \sum_{s_1,\ldots,s_k=1}^{\infty}\ \sum_{t_1,\ldots,t_k=1}^{\infty} b_{s_1} \cdots b_{s_k}\, b_{t_1} \cdots b_{t_k} \Big(E\big(\zeta_{j-s_1}^2 \cdots \zeta_{j-s_1-\cdots-s_k}^2\big)^2\Big)^{1/2} \Big(E\big(\zeta_{j-t_1}^2 \cdots \zeta_{j-t_1-\cdots-t_k}^2\big)^2\Big)^{1/2}$$
$$= \Big(\sum_{s_1,\ldots,s_k=1}^{\infty} b_{s_1} \cdots b_{s_k}\Big)^2 \big(E\zeta_0^4\big)^k = \big(B^2 E\zeta_0^4\big)^k. \qquad (4.4)$$
Use this bound, C–S, and $BE^{1/2}[\zeta_0^4] < 1$ of Proposition 4.1(ii), to obtain
$$E\big[h_{k,j} h_{p,j}\big] \le \big(Eh_{k,j}^2\, Eh_{p,j}^2\big)^{1/2} \le \big(BE^{1/2}[\zeta_0^4]\big)^{k+p},$$
$$E\sigma_j^4 \le 2b_0^2\Big(1 + E\Big(\sum_{k=1}^{\infty} h_{k,j}\Big)^2\Big) \le 2b_0^2\Big(1 + \Big(\sum_{k=1}^{\infty}\big(BE^{1/2}[\zeta_0^4]\big)^k\Big)^2\Big) < \infty, \qquad (4.5)$$
which verifies Assumption 2.1(a).
To verify Assumption 2.1(b) we approximate $\sigma_j^2$ by $m$-dependent r.v.s as follows. For $m \ge 1$, define
$$\sigma_{mj}^2 = b_0\Big(1 + \sum_{k=1}^{\infty} h_{m,k,j}\Big), \qquad h_{m,k,j} := \sum_{s_1,\ldots,s_k=1}^{m} b_{s_1} \cdots b_{s_k}\, \zeta_{j-s_1}^2 \cdots \zeta_{j-s_1-\cdots-s_k}^2. \qquad (4.6)$$
Then $\{\sigma_{mj}^2,\ j \in \mathbb{Z}\}$ is a stationary $m$-dependent process, $E\sigma_{mj}^4 \le E\sigma_j^4 < \infty$, and for $|t-j| > m$, $\mathrm{Cov}(\sigma_{mt}^2, \sigma_{mj}^2) = 0$. We will show that
$$\max_j E\big(\sigma_j^2 - \sigma_{mj}^2\big)^2 \le \delta_m \to 0, \qquad m \to \infty. \qquad (4.7)$$
The latter with the bound $|\mathrm{Cov}(X,Y)| \le (EX^2\, EY^2)^{1/2}$ and (4.5) implies, for $|t-j| > m$,
$$|\mathrm{Cov}(\sigma_t^2, \sigma_j^2)| = \big|\mathrm{Cov}(\sigma_{mt}^2, \sigma_{mj}^2) + \mathrm{Cov}(\sigma_t^2 - \sigma_{mt}^2, \sigma_{mj}^2) + \mathrm{Cov}(\sigma_{mt}^2, \sigma_j^2 - \sigma_{mj}^2) + \mathrm{Cov}(\sigma_t^2 - \sigma_{mt}^2, \sigma_j^2 - \sigma_{mj}^2)\big|$$
$$\le C\big(\delta_m^{1/2} + \delta_m\big) \to 0, \qquad m \to \infty, \qquad (4.8)$$
which verifies Assumption 2.1(b) with $K = m$. To verify (4.7), notice that $b_j \ge 0$ implies $h_{k,j} \ge h_{m,k,j}$. Hence, with $B_m := \sum_{s=1}^{m} b_s$ and $\tilde b_s = b_s I(1 \le s \le m)$, as in (4.4),
$$E\big(h_{k,j} - h_{m,k,j}\big)^2 = E\Big(\sum_{s_1,\ldots,s_k=1}^{\infty} \big(b_{s_1} \cdots b_{s_k} - \tilde b_{s_1} \cdots \tilde b_{s_k}\big)\, \zeta_{j-s_1}^2 \cdots \zeta_{j-s_1-\cdots-s_k}^2\Big)^2 \qquad (4.9)$$
$$\le \Big(\sum_{s_1,\ldots,s_k=1}^{\infty} \big\{b_{s_1} \cdots b_{s_k} - \tilde b_{s_1} \cdots \tilde b_{s_k}\big\}\Big)^2 \big(E\zeta_0^4\big)^k = \big(B^k - B_m^k\big)^2 \big(E\zeta_0^4\big)^k.$$
Next, since $0 < B_m \le B$, by the mean value theorem, $B^k - B_m^k \le k B^{k-1}(B - B_m) \le k B^k \rho_m$, where $\rho_m := (B - B_m)/B \to 0$, as $m \to \infty$. Hence, as in (4.5),
$$E\big(\sigma_j^2 - \sigma_{mj}^2\big)^2 = b_0^2\, E\Big(\sum_{k=1}^{\infty} \big\{h_{k,j} - h_{m,k,j}\big\}\Big)^2 \le b_0^2 \Big(\sum_{k=1}^{\infty} k\big(BE^{1/2}[\zeta_0^4]\big)^k\Big)^2 \rho_m^2 = C\rho_m^2 \to 0, \quad m \to \infty, \qquad (4.10)$$
which proves (4.7).
Assumption 2.1(c) follows by Markov's inequality, noting that $Er_j^4 = E\sigma_1^4\, E\zeta_1^4 < \infty$.
(iii) Here, $V_j = \sigma_j^4\,\mathrm{Var}(\zeta_0^2) = E[\varepsilon_j^2 \mid \mathcal{F}_{j-1}]$. Thus, verifying Assumption 2.1(a) is equivalent to proving $\max_j E\sigma_j^8 < \infty$. Toward this goal, first note that by the Hölder inequality and by arguing as for (4.4), $Eh_{k,j}^4 \le B^{4k}\big(E\zeta_0^8\big)^k$. Together with the assumption $BE^{1/4}[\zeta_0^8] < 1$, this bound yields
$$E\big[h_{k_1,j} \cdots h_{k_4,j}\big] \le \big(Eh_{k_1,j}^4 \cdots Eh_{k_4,j}^4\big)^{1/4} \le \big(BE^{1/4}[\zeta_0^8]\big)^{k_1+\cdots+k_4},$$
$$E\sigma_j^8 \le 4b_0^4\Big(1 + E\Big(\sum_{k=1}^{\infty} h_{k,j}\Big)^4\Big) \le 4b_0^4\Big(1 + \Big(\sum_{k=1}^{\infty} \big(BE^{1/4}[\zeta_0^8]\big)^k\Big)^4\Big) < \infty. \qquad (4.11)$$
To verify Assumption 2.1(b), we again use the approximating $m$-dependent variables $\sigma_{mj}^2$ of (4.6). We show that
$$\max_j E\big(\sigma_j^4 - \sigma_{mj}^4\big)^2 \le \delta_m \to 0, \qquad m \to \infty, \qquad (4.12)$$
which together with (4.11), as in (4.8), implies that for $|t-j| > m$,
$$|\mathrm{Cov}(\sigma_t^4, \sigma_j^4)| = \big|\mathrm{Cov}(\sigma_{mt}^4, \sigma_{mj}^4) + \mathrm{Cov}(\sigma_t^4 - \sigma_{mt}^4, \sigma_{mj}^4) + \mathrm{Cov}(\sigma_{mt}^4, \sigma_j^4 - \sigma_{mj}^4) + \mathrm{Cov}(\sigma_t^4 - \sigma_{mt}^4, \sigma_j^4 - \sigma_{mj}^4)\big|$$
$$\le C\big(\delta_m^{1/2} + \delta_m\big) \to 0, \qquad m \to \infty.$$
Finally, to prove (4.12), use the Hölder inequality, as in (4.9) and (4.10), to obtain
$$E\big(h_{k,j} - h_{m,k,j}\big)^4 \le \Big(\sum_{s_1,\ldots,s_k=1}^{\infty} b_{s_1} \cdots b_{s_k} - \sum_{s_1,\ldots,s_k=1}^{m} b_{s_1} \cdots b_{s_k}\Big)^4 \big(E\zeta_0^8\big)^k \le \big(B^k - B_m^k\big)^4 \big(E\zeta_0^8\big)^k \le \big(E\zeta_0^8\big)^k k^4 B^{4k} \rho_m^4,$$
$$E\big(\sigma_j^2 - \sigma_{mj}^2\big)^4 \le b_0^4\, E\Big(\sum_{k=1}^{\infty} \big(h_{k,j} - h_{m,k,j}\big)\Big)^4 \le b_0^4 \Big(\sum_{k=1}^{\infty} k\big(BE^{1/4}[\zeta_0^8]\big)^k\Big)^4 \rho_m^4 = C\rho_m^4 \to 0, \quad m \to \infty,$$
because $BE^{1/4}[\zeta_0^8] < 1$ by assumption (iii), where $\rho_m = (B - B_m)/B \to 0$. This together with (4.11) and $\sigma_{mj}^2 \le \sigma_j^2$ implies
$$E\big(\sigma_j^4 - \sigma_{mj}^4\big)^2 \le E\Big[\big(\sigma_j^2 - \sigma_{mj}^2\big)^2\big(\sigma_j^2 + \sigma_{mj}^2\big)^2\Big] \le E\Big[\big(\sigma_j^2 - \sigma_{mj}^2\big)^2\, 4\sigma_j^4\Big] \le \Big(E\big(\sigma_j^2 - \sigma_{mj}^2\big)^4\Big)^{1/2} 4\big(E\sigma_j^8\big)^{1/2} \le C\rho_m^2 \to 0, \quad m \to \infty,$$
which proves (4.12) and completes the verification of Assumption 2.1(b).
Assumption 2.1(c) follows by Markov's inequality, noting that $E\varepsilon_j^4 \le C E\sigma_1^8\, E\zeta_1^8 < \infty$, which completes the proof of the proposition. ∎
4.1.3. Stochastic Volatility m.d. Processes. By a stochastic volatility model one usually understands a stationary process $\{\varepsilon_j, j \in \mathbb{Z}\}$ of the form
$$\varepsilon_j = \sigma_j \zeta_j, \qquad \zeta_j \sim \mathrm{IID}(0,1), \quad j \in \mathbb{Z},$$
where the (volatility) process $\sigma_j > 0$ is a function of the past information up to time $j-1$. Let $\mathcal{F}_{j-1}$ be the $\sigma$-field generated by the past innovations $\zeta_s, s \le j-1$, and $E\varepsilon_j^2 < \infty$. Then $E[\varepsilon_j \mid \mathcal{F}_{j-1}] = 0$, $\sigma_j^2 = \mathrm{Var}(\varepsilon_j \mid \mathcal{F}_{j-1})$, and $\{\varepsilon_j\}$ is a white noise process, which by Theorem 3.5.8 of Stout (1974) is also stationary and ergodic.

It is often assumed that the volatility process $\sigma_j = h(\eta_j), j \in \mathbb{Z}$, is a nonlinear function of a stationary Gaussian or linear process $\{\eta_j\}$; see, e.g., Robinson (2001). The choice of $h(\eta_j) = \exp(\eta_j)$ includes the exponential generalized ARCH (EGARCH) model, proposed by Nelson (1991). A related class of stochastic volatility models with long memory in $\{\eta_j\}$ was introduced and studied in Breidt, Crato, and de Lima (1998), Harvey (1998), and Surgailis and Viano (2002). As a rule, the volatility process $V_j \equiv \sigma_j^2$ in these models is a stationary process with autocovariances decaying to zero and thus satisfies Assumption 2.1.
4.1.4. Sample ACF. Let $\{\varepsilon_j\}$ be a stationary and ergodic m.d.s. such that $\sigma_\varepsilon^2 = E\varepsilon_0^2 < \infty$. Consider the sample autocorrelation $\hat\rho_k = \hat\gamma_k/\hat\gamma_0$, $k \ge 1$, where $\hat\gamma_k = n^{-1}\sum_{j=k+1}^{n}(\varepsilon_j - \bar\varepsilon)(\varepsilon_{j-k} - \bar\varepsilon)$ and $\bar\varepsilon = n^{-1}\sum_{j=1}^{n}\varepsilon_j$. It is well known that in the case of i.i.d. random variables, $\sqrt{n}\,\hat\rho_k \to_D N(0,1)$ and $\sqrt{n}\,(\hat\rho_1, \ldots, \hat\rho_k) \to_D N(0, I)$, where the limit is a vector of $k$ independent standard normal variables. These results are widely used for testing for the absence of correlation; see, e.g., Brockwell and Davis (1991). The next proposition shows that, in general, this asymptotic normality result is true only for i.i.d. sequences.

Let $\Gamma$ be the $m \times m$ matrix with $(j,k)$th element $\Gamma_{j,k} = E[\varepsilon_1^2 \varepsilon_{1-j} \varepsilon_{1-k}]/\sigma_\varepsilon^4$.
PROPOSITION 4.2. Let $\{\varepsilon_j\}$ be a stationary and ergodic m.d.s. with $\sigma_\varepsilon^2 := E\varepsilon_0^2 < \infty$ such that $E[\varepsilon_1^2 \varepsilon_{1-k}^2] < \infty$, for $k = 1, \ldots, m$, where $m$ is a given positive integer. Then,
$$\sqrt{n}\,\big(\hat\rho_1, \ldots, \hat\rho_m\big) \to_D N_m(0, \Gamma). \qquad (4.13)$$
In particular, (4.13) holds in the case of the ARCH process $\varepsilon_j = r_j$ of (4.1), such that $E^{1/2}(\zeta_0^4)\sum_{k=1}^{\infty} b_k < 1$. If, in addition, $E\zeta_j^3 = 0$, then $\Gamma$ is diagonal with $\Gamma_{k,k} \ge 1$, $k = 1, \ldots, m$.
Proof. Let $1 \le k \le m$. Under the given assumptions, by Stout (1974, Thm. 3.5.8), $\eta_j = \varepsilon_j \varepsilon_{j-k}$ is a stationary and ergodic m.d.s., and $E\eta_0^2 < \infty$. Thus, by Theorem 2.1, $\bar\varepsilon = O_p(n^{-1/2})$, $n^{-1/2}\sum_{j=k+1}^{n} \varepsilon_j \varepsilon_{j-k} \to_D N(0, E\eta_0^2)$, while by ergodicity of $\varepsilon_j^2$, $\hat\gamma_0 \to_p E\varepsilon_0^2$. Hence, $\sqrt{n}\,\hat\rho_k = (\sigma_\varepsilon^2)^{-1} n^{-1/2}\sum_{j=k+1}^{n} \varepsilon_j \varepsilon_{j-k} + O_p(n^{-1/2}) \to_D N(0, \Gamma_{k,k})$.
To prove (4.13), let $\eta_j^{(k)} = \varepsilon_j \varepsilon_{j-k}$, $k = 1, \ldots, m$. Then for any real numbers $c_1, \ldots, c_m$, $\eta_j := c_1\eta_j^{(1)} + \cdots + c_m\eta_j^{(m)}$ is a stationary ergodic m.d.s., with $E\eta_j^2 =: \sigma_\eta^2 < \infty$. Let $S_n := \sum_{j=m+1}^{n} \eta_j$. Then $ES_n^2 = \sigma_\eta^2(n-m)$, and by Theorem 2.1, $n^{-1/2} S_n \to_D N(0, \sigma_\eta^2)$, which by the Cramér–Wold device implies the claim of (4.13).

In the case of the ARCH process $r_j$, (4.1) and (4.3) imply $\Gamma_{k,k} \ge 1$ and show that $\Gamma$ is diagonal if, in addition, $E\zeta_j^3 = 0$. ∎
Remark 4.1. Proposition 4.2 shows that for the ARCH m.d.s. (4.1), the limit variance of $\sqrt{n}\,\hat\rho_k$, as a rule, is greater than one, and so the 95% confidence band for $\rho_k = 0$ is wider than in the i.i.d. case. Moreover, the limit matrix $\Gamma$ may be nondiagonal, unless the distribution of $\zeta_j$ in (4.1) is symmetric.
4.2. Regression Models

In this section we discuss applications of the above results for obtaining asymptotic normality of least squares (LS) estimators in parametric regression models and of kernel-type estimators in nonparametric regression models.

But first we verify the above conditions for asymptotic normality of $S_n$ and weak convergence of the partial sum process $\{S_n(\tau), 0 \le \tau \le 1\}$ of a linear process $\{X_j\}$ of (2.1) in some typical cases.

Let $\gamma(j)$, $j = 0, \pm 1, \pm 2, \ldots$, denote the autocovariance function of $\{X_j\}$ and $f$ its spectral density. Note that $\gamma(j) := \mathrm{Cov}(X_j, X_0) = \sigma_\varepsilon^2 \sum_{k=0}^{\infty} a_k a_{k+j}$, $j = 0, \pm 1, \pm 2, \ldots$. Consider the assumption in terms of $f$:
$$f(u) \sim c|u|^{-2d}, \quad u \to 0, \qquad |d| < 1/2, \ c > 0. \qquad (4.14)$$
In terms of $\gamma(j)$, consider the condition
$$\sum_{j \in \mathbb{Z}} |\gamma(j)| < \infty, \quad \sum_{j \in \mathbb{Z}} \gamma(j) > 0, \quad \text{for } d = 0;$$
$$\gamma(j) \sim c_\gamma |j|^{-1+2d}, \quad 0 < d < 1/2; \qquad \gamma(j) \sim c_\gamma |j|^{-1+2d}, \quad \sum_{j \in \mathbb{Z}} \gamma(j) = 0, \quad -1/2 < d < 0, \qquad (4.15)$$
where $c_\gamma \ne 0$. The cases $d = 0$, $0 < d < 1/2$, and $-1/2 < d < 0$ define short, long, and negative memory of the process $\{X_j\}$.
COROLLARY 4.1. Let $\{X_j\}$ be a linear process (2.1) with stationary and ergodic m.d. innovations $\{\varepsilon_j\}$ satisfying (4.14) or (4.15) with some $|d| < 1/2$. Then,
$$\sigma_n^2 \sim s^2 n^{1+2d}, \qquad n^{-1/2-d} S_n \to_D N(0, s^2), \qquad (4.16)$$
where
$$s^2 = 2\pi f(0), \quad d = 0; \qquad s^2 = c_f \int_{\mathbb{R}} \frac{\sin^2(u/2)}{(u/2)^2}\,|u|^{-2d}\,du, \quad 0 < |d| < 1/2, \quad \text{under (4.14)};$$
$$s^2 = \sum_{k \in \mathbb{Z}} \gamma(k), \quad d = 0; \qquad s^2 = c_\gamma/(d(1+2d)), \quad 0 < |d| < 1/2, \quad \text{under (4.15)}.$$
In addition, if for $-1/2 < d \le 0$, $E|\varepsilon_0|^p < \infty$, for some $p > 1/(1/2+d)$, then
$$n^{-1/2-d} S_n(\cdot) \Rightarrow s\,B_{1/2+d}(\cdot), \qquad (4.17)$$
in $D[0,1]$ and the uniform metric.
Proof. The claim about $\sigma_n^2$ in (4.16) under (4.14) or (4.15) is known; see Robinson (1997). Together with Theorem 2.1 and Proposition 3.1, it proves the second claim in (4.16) and (4.17). ∎
Recall also the following known relationship between the weights $a_k$ and $\gamma(j)$ when $\{X_j\}$ is a linear process (2.1) with stationary m.d. innovations. If
$$a_k \sim c_a |k|^{-1+d}, \quad 0 < d < 1/2; \qquad a_k = c_a |k|^{-1+d}\big(1 + O(k^{-1})\big), \quad -1/2 < d < 0,$$
where $c_a \ne 0$, then $\gamma(k)$ satisfies (4.15) with $c_\gamma = \sigma_\varepsilon^2 c_a^2 B(d, 1-2d)$, where $B(\cdot,\cdot)$ is the beta function.

If $\sum_{k=0}^{\infty} |a_k| < \infty$ and $\sum_{k=0}^{\infty} a_k \ne 0$, then
$$\sum_{k=0}^{\infty} |\gamma(k)| < \infty, \qquad \sum_{k \in \mathbb{Z}} \gamma(k) = \sigma_\varepsilon^2\Big(\sum_{k=0}^{\infty} a_k\Big)^2 > 0.$$
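The hyperbolic decay of $\gamma(j)$ under $a_k \sim c_a k^{-1+d}$ can be illustrated numerically. The sketch below (ours, not the paper's; Python, stdlib only) uses the ARFIMA($0,d,0$) coefficients $a_k = \Gamma(k+d)/(\Gamma(d)\Gamma(k+1))$, one standard choice with this tail, and checks that $\gamma(j) = \sigma_\varepsilon^2\sum_k a_k a_{k+j}$ (with $\sigma_\varepsilon^2 = 1$) decays like $j^{-1+2d}$, i.e., $\gamma(2j)/\gamma(j) \approx 2^{2d-1}$:

```python
# ARFIMA(0, d, 0) MA coefficients via the exact recursion
# a_0 = 1, a_k = a_{k-1} * (k - 1 + d) / k  (so a_k ~ k^{d-1}/Gamma(d)).
d = 0.3
K = 120000
a = [1.0]
for k in range(1, K):
    a.append(a[-1] * (k - 1 + d) / k)

def gamma_cov(j, m=100000):
    # gamma(j) = sum_k a_k a_{k+j}, truncated at m terms (small bias)
    return sum(a[k] * a[k + j] for k in range(m))

ratio = gamma_cov(2000) / gamma_cov(1000)   # should be near 2^{2d-1}
expected = 2.0 ** (2 * d - 1)
```

The truncation of the infinite sum introduces a small downward bias, so only approximate agreement should be expected.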
4.2.1. LS Estimator. Let $\{X_j\}$ be a linear process with memory parameter $0 \le d < 1/2$. Consider the simple parametric regression model where, for some $\beta \in \mathbb{R}$, $Y_j = \beta z_{nj} + X_j$. A problem of interest is to obtain the asymptotic distribution of the least squares estimator $\hat\beta = \sum_{j=1}^{n} z_{nj} Y_j / \sum_{j=1}^{n} z_{nj}^2$ of $\beta$. Suppose
$$z_{nj} = g(j/n), \qquad j = 1, \ldots, n, \qquad (4.18)$$
where $g$ is a continuous real valued function on $[0,1]$. Moreover, in the short memory case where $d = 0$, assume that the covariance function of $\{X_j\}$ satisfies
$$\sum_{k=0}^{\infty} |\gamma(k)| < \infty, \qquad \sum_{k \in \mathbb{Z}} \gamma(k) > 0. \qquad (4.19)$$
In the long memory case $0 < d < 1/2$, assume
$$\gamma(k) \sim c_\gamma |k|^{-1+2d}, \qquad k \to \infty. \qquad (4.20)$$
Now, note that $\hat\beta - \beta = \sum_{j=1}^{n} z_{nj} X_j / \sum_{j=1}^{n} z_{nj}^2 =: W_n / \sum_{j=1}^{n} z_{nj}^2$. Define
$$v_d^2 := \Big(\int_0^1 g^2(u)\,du\Big)\Big(\sum_{k \in \mathbb{Z}} \gamma(k)\Big), \quad d = 0; \qquad v_d^2 := c_\gamma \int_0^1\!\!\int_0^1 g(u)g(v)|u-v|^{-1+2d}\,du\,dv, \quad 0 < d < 1/2;$$
$$\tau_d^2 := v_d^2 \Big/ \Big(\int_0^1 g^2(u)\,du\Big)^2.$$
The following corollary gives the limiting distribution of $\hat\beta$.
COROLLARY 4.2. Suppose the linear process $\{X_j\}$ of (2.1) with stationary and ergodic m.d. innovations $\{\varepsilon_j\}$ satisfies (4.19) or (4.20), and the $z_{nj}$'s are as in (4.18). Then, with $W_n := \sum_{j=1}^{n} z_{nj} X_j$, and for $0 \le d < 1/2$,
$$n^{-1/2-d} W_n \to_D N\big(0, v_d^2\big), \qquad n^{1/2-d}\big(\hat\beta - \beta\big) \to_D N\big(0, \tau_d^2\big). \qquad (4.21)$$
Proof. The second claim in (4.21) follows from the first claim and the fact that $\sum_{j=1}^{n} z_{nj}^2/n \to \int_0^1 g^2(u)\,du$, which is assured by the continuity of $g$. To prove the first claim in (4.21), we shall verify condition (i) of Theorem 2.2. Let $\sigma_n^2 := \mathrm{Var}(W_n) = \sum_{j,k=1}^{n} z_{nj} z_{nk}\,\gamma(j-k)$. We shall prove that
$$n^{-1-2d}\sigma_n^2 \to v_d^2, \qquad 0 \le d < 1/2. \qquad (4.22)$$
Then $\sigma_n^2 \sim v_d^2\, n^{1+2d}$, which implies $\sum_{j=1}^{n} z_{nj}^2 = O(\sigma_n^2)$ and $\max_{1 \le k \le n} |z_{nk}| \le \sup_{0 \le u \le 1} |g(u)| = o(\sigma_n)$, thereby verifying condition (i) of Theorem 2.2 for $W_n$.
We now prove (4.22). Suppose $d = 0$. Then,
$$n^{-1}\sum_{j,k=1:\,|j-k|>K}^{n} \big|z_{nj} z_{nk}\,\gamma(j-k)\big| \le \sup_{0 \le u \le 1} g^2(u) \sum_{|s|>K} |\gamma(s)| \to 0, \qquad K \to \infty,$$
whereas for any $|i| \le K$, $n^{-1}\sum_{k=1}^{n} z_{n,k+i}\, z_{nk}\,\gamma(i) \to \gamma(i)\int_0^1 g^2(u)\,du$. Whence,
$$\lim_{K \to \infty}\lim_{n \to \infty} n^{-1}\sum_{j,k=1:\,|j-k| \le K}^{n} z_{nj} z_{nk}\,\gamma(j-k) = \int_0^1 g^2(u)\,du\, \lim_{K \to \infty}\sum_{|i| \le K} \gamma(i) = v_0^2,$$
which proves $n^{-1}\sigma_n^2 \to v_0^2$.
Next, consider the case $0 < d < 1/2$. Here, (4.18), (4.20), a change of variables, and the dominated convergence theorem yield
$$n^{-1-2d}\sigma_n^2 = n^{-1-2d}\sum_{k,j=1}^{n} z_{nj} z_{nk}\,\gamma(j-k) = n^{-1-2d} c_\gamma \sum_{k,j=1:\,k \ne j}^{n} g\Big(\frac{j}{n}\Big) g\Big(\frac{k}{n}\Big) |j-k|^{-1+2d} + o(1)$$
$$\to c_\gamma \int_0^1\!\!\int_0^1 g(u)g(v)|u-v|^{-1+2d}\,du\,dv = v_d^2.$$
This completes the proof of (4.22) and the corollary. ∎
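As a quick sanity check of Corollary 4.2 in the short memory case, the following sketch (ours, not the paper's code; Python, stdlib only) simulates $Y_j = \beta z_{nj} + X_j$ with AR(1) errors and the illustrative weight function $g(u) = 1 + u$, and confirms that the LS estimator lands close to $\beta$:

```python
import random

# Illustration of Corollary 4.2 with d = 0: AR(1) errors (short memory),
# z_nj = g(j/n) with the hypothetical choice g(u) = 1 + u, beta = 2.
rng = random.Random(3)
n, beta, phi = 5000, 2.0, 0.5
x = 0.0
X = []
for _ in range(n):
    x = phi * x + rng.gauss(0.0, 1.0)      # AR(1) error process
    X.append(x)
z = [1.0 + (j + 1) / n for j in range(n)]  # z_nj = g(j/n)
Y = [beta * z[j] + X[j] for j in range(n)]
beta_hat = sum(z[j] * Y[j] for j in range(n)) / sum(zj * zj for zj in z)
```

By (4.21), the error $\hat\beta - \beta$ is $O_p(n^{-1/2})$ here, so with $n = 5000$ the estimate is within a few hundredths of $\beta$.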
4.2.2. A Non-Stationary Process. As another application of Theorem 2.2, consider the weighted sum $V_n := \sum_{j=1}^{n} z_{nj} Y_j$ of a nonstationary unit root process $Y_j := \sum_{s=1}^{j} X_s$, $j = 1, 2, \ldots$. Set $\Phi(u) = \int_u^1 g(v)\,dv$, $0 \le u \le 1$. Also, define
$$\bar v_d^2 := \Big(\int_0^1 \Phi^2(u)\,du\Big)\Big(\sum_{k \in \mathbb{Z}} \gamma(k)\Big), \quad d = 0; \qquad \bar v_d^2 := c_\gamma \int_0^1\!\!\int_0^1 \Phi(u)\Phi(v)|u-v|^{-1+2d}\,du\,dv, \quad 0 < d < 1/2.$$
In view of Remark 2.4, the proof of the following corollary is similar to that of Corollary 4.2.
COROLLARY 4.3. Suppose the linear process $\{X_j\}$ of (2.1) with stationary and ergodic m.d.s. $\{\varepsilon_j\}$ satisfies (4.19) or (4.20) and the $z_{nj}$ are as in (4.18). Then,
$$n^{-3/2-d} V_n \to_D N\big(0, \bar v_d^2\big).$$
4.2.3. Nonparametric Regression. We shall now show the usefulness of Theorem 2.2 in deriving the limiting distribution of a kernel-type estimator of the regression function in the nonparametric regression model $Y_j = \mu(j/n) + X_j$, when the errors $X_j$ may have long memory. Let $K$ be a density kernel on $\mathbb{R}$ with $\|K\|_2^2 := \int_{\mathbb{R}} K^2(v)\,dv < \infty$, and let $b \equiv b_n$ be a sequence of window widths. A kernel-type estimator of $\mu(x)$ is given by
$$\hat\mu_n(x) := K_{nx}^{-1}\sum_{j=1}^{n} K\Big(\frac{nx-j}{nb}\Big) Y_j, \qquad K_{nx} := \sum_{j=1}^{n} K\Big(\frac{nx-j}{nb}\Big).$$
Note that as $n \to \infty$, $K_{nx} \sim nb\int_{\mathbb{R}} K(u)\,du = nb$, for all $0 < x < 1$. Let
$$\bar\mu_n(x) := K_{nx}^{-1}\sum_{j=1}^{n} K\Big(\frac{nx-j}{nb}\Big)\mu\Big(\frac{j}{n}\Big), \qquad D_n(x) := \hat\mu_n(x) - \bar\mu_n(x) = K_{nx}^{-1}\sum_{j=1}^{n} K\Big(\frac{nx-j}{nb}\Big) X_j.$$
Then, $\hat\mu_n(x) - \mu(x) = D_n(x) + \bar\mu_n(x) - \mu(x)$. Typically the bias term $\bar\mu_n(x) - \mu(x)$ is negligible compared to $D_n(x)$, and the asymptotic distribution of $\hat\mu_n(x) - \mu(x)$ is determined by that of $D_n(x)$. Fix an $x \in (0,1)$ and let
$$z_{nj} := K_{nx}^{-1} K\Big(\frac{nx-j}{nb}\Big). \qquad (4.23)$$
Then, clearly, $D_n(x)$ is like a $W_n$. Define
$$\sigma_{d,K}^2 := \|K\|_2^2 \sum_{k \in \mathbb{Z}} \gamma(k), \quad d = 0; \qquad \sigma_{d,K}^2 := c_\gamma \int_{\mathbb{R}}\!\!\int_{\mathbb{R}} K(u)K(v)|u-v|^{-1+2d}\,du\,dv, \quad 0 < d < 1/2.$$
We have

COROLLARY 4.4. Suppose the linear process $\{X_j\}$ of (2.1) with stationary and ergodic m.d.s. $\{\varepsilon_j\}$ satisfies (4.19) or (4.20). In addition, suppose $b \to 0$, $nb \to \infty$, and $K$ is a continuous density on $\mathbb{R}$ with $\int_{\mathbb{R}} K^2(v)\,dv < \infty$. Then, for every $0 < x < 1$, and $0 \le d < 1/2$,
$$(nb)^{1-2d}\,\mathrm{Var}(D_n(x)) \to \sigma_{d,K}^2, \qquad (nb)^{1/2-d} D_n(x) \to_D N\big(0, \sigma_{d,K}^2\big). \qquad (4.24)$$
Proof. With $\sigma_n^2 := \mathrm{Var}(D_n(x))$ and $z_{nj}$ as in (4.23),
$$(nb)^{1-2d}\sigma_n^2 \sim (nb)^{-1-2d}\sum_{i=1}^{n}\sum_{j=1}^{n} K\Big(\frac{nx-i}{nb}\Big) K\Big(\frac{nx-j}{nb}\Big)\gamma(i-j).$$
A routine argument shows that continuity of $K$ with $\|K\|_2^2 < \infty$ implies $(nb)^{1-2d}\sigma_n^2 \to \sigma_{d,K}^2$, $0 \le d < 1/2$, and $(nb)\sum_{j=1}^{n} z_{nj}^2 \sim (nb)^{-1}\sum_{j=1}^{n} K^2\big((x - j/n)/b\big) \to \|K\|_2^2$. These facts together yield $\sigma_n \sim \sigma_{d,K}(nb)^{-1/2+d}$, $\sigma_n^{-2}\sum_{j=1}^{n} z_{nj}^2 = O\big((nb)^{-2d}\big) = O(1)$ and $\max_{1 \le j \le n} \sigma_n^{-1}|z_{nj}| = O\big((nb)^{-1/2-d}\big) = o(1)$, verify condition (i) of Theorem 2.2, and hence the corollary. ∎
4.2.4. Dickey–Fuller Distributions and Their Fractional Versions. The results of Section 3 also imply a number of existing findings that are widely used in the econometric literature. Dickey and Fuller (1979, 1981) simulated the distribution of the normalized autoregressive coefficient and the t-ratio, when the generating process has a unit root. Phillips (1987) described their limits in terms of functionals of Brownian motion, while Abadir (1993, 1995) obtained the explicit expressions for their density and distribution functions. More recently, Dolado, Gonzalo, and Mayoral (2002) generalized the Dickey–Fuller tests to allow for fractional roots in the null hypothesis to be tested. Their limiting distribution results (Dolado et al., 2002, pp. 1969–1970) can be extended by means of our Proposition 3.1 to the case of m.d. innovations instead of just i.i.d. ones, except that they use a different type of fractional Brownian motion; see Marinucci and Robinson (1999).
FIGURE 1. Panel (a): Kernel density of $\mathrm{Var}(W_n)^{-1/2} W_n$. Panel (b): Kernel density of $\mathrm{Var}(V_n)^{-1/2} V_n$.

FIGURE 2. Panel (a): Kernel density estimate of $\mathrm{Var}(W_n)^{-1/2} W_n$. Panel (b): Kernel density estimate of $\mathrm{Var}(V_n)^{-1/2} V_n$.
5. SIMULATIONS

In this section we examine the small sample performance of some of the above asymptotic results. We consider three experiments, all of them based on a sample size $n = 500$ and 10,000 replications.
FIGURE 3. Kernel density of $\mathrm{Var}(\hat\mu_n(x))^{-1/2}(\hat\mu_n(x) - \mu(x))$ (solid line), $N(0,1)$ density (dashed line).
5.1. ARFIMA-ARCH

We start from the case where $X_j$ in (2.1) is an ARFIMA($1,d,0$) model with AR parameter $r = 0.8$ and m.d. innovations $\varepsilon_j$ generated as an ARCH(1) process:
$$\varepsilon_j = \sigma_j\zeta_j, \quad \zeta_j \sim N(0,1), \qquad \sigma_j^2 = \alpha_0 + \alpha_1\varepsilon_{j-1}^2, \quad j = 1, \ldots, n. \qquad (5.1)$$
We take $\alpha_0 = 0.2$, $\alpha_1 = 0.8$, so that $E\varepsilon_j^2 = 1$. We simulated the cases $d = 0.1, 0.2, 0.3, 0.4$, which gave qualitatively similar results. For the sake of brevity, we report the results for $d = 0.3$ only. We analyze the asymptotic normality of the suitably standardized sums
$$W_n = \sum_{j=1}^{n} z_{nj} X_j, \qquad V_n = \sum_{j=1}^{n} z_{nj} Y_j,$$
where $Y_j = \sum_{s=1}^{j} X_s$, with the weights $z_{nj} = (j/n)^2 + \cos(\pi j/n)$, $j = 1, 2, \ldots, n$. $\mathrm{Var}(W_n)$ and $\mathrm{Var}(V_n)$ are estimated by the Monte Carlo variances. We then plot a kernel estimate of the densities of $\mathrm{Var}(W_n)^{-1/2}W_n$ and $\mathrm{Var}(V_n)^{-1/2}V_n$ using a Gaussian kernel with bandwidth $b$ chosen according to Silverman's (1986) rule, $b_n = (4/3)^{1/5} n^{-1/5}$, and superimpose the standard normal density. The results for the ARFIMA($1, 0.3, 0$), $r = 0.8$ process $\{X_j\}$ with ARCH(1) innovations in Figure 1 show a very close resemblance between the solid line representing the kernel density estimate and the dashed line representing the $N(0,1)$ density.
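The Monte Carlo design above can be sketched compactly. The following (ours, not the authors' code; Python, stdlib only) uses an AR(1) process with ARCH(1) innovations in place of the full ARFIMA filter, a smaller sample size and fewer replications for speed, and reads the weight as $z_{nj} = (j/n)^2 + \cos(\pi j/n)$; it generates replications of $W_n$, standardizes by the Monte Carlo standard deviation, and checks that roughly 5% of the standardized sums fall outside $\pm 1.96$:

```python
import math
import random

def one_Wn(rng, n=300, phi=0.6, a0=0.2, a1=0.5):
    """One replication of W_n = sum_j z_nj * X_j with AR(1)-ARCH(1) X_j."""
    eps2 = a0 / (1 - a1)        # last squared innovation, stationary start
    x = Wn = 0.0
    for j in range(1, n + 1):
        sig2 = a0 + a1 * eps2                          # ARCH(1) recursion
        e = math.sqrt(sig2) * rng.gauss(0.0, 1.0)
        eps2 = e * e
        x = phi * x + e                                # AR(1) linear process
        z = (j / n) ** 2 + math.cos(math.pi * j / n)   # weights z_nj
        Wn += z * x
    return Wn

rng = random.Random(11)
reps = [one_Wn(rng) for _ in range(2000)]
mean = sum(reps) / len(reps)
sd = math.sqrt(sum((w - mean) ** 2 for w in reps) / len(reps))
std = [(w - mean) / sd for w in reps]
tail = sum(1 for s in std if abs(s) > 1.96) / len(std)  # near 0.05 if normal
```

The tail fraction near 0.05 is the same kind of evidence for asymptotic normality that the kernel density plots in Figures 1 and 2 provide visually.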
5.2. AR-ARCH

Next we analyze the asymptotic normality of $W_n$ and $V_n$ in the case of the AR(1) process $X_j = \phi X_{j-1} + \varepsilon_j$, $j \in \mathbb{Z}$, with $\phi = 0.6$ and ARCH(1) errors $\varepsilon_j$ as in (5.1). The results in Figure 2 show that the kernel density estimate of $\mathrm{Var}(W_n)^{-1/2}W_n$ seems to have less probability mass close to the mean than the $N(0,1)$ density and is slightly positively skewed.
5.3. Nonparametric Regression

To illustrate the fit of the normal approximation in nonparametric estimation, we use the regression model $Y_j = \mu(j/n) + X_j$, $j = 1, \ldots, n$, where the $X_j$'s follow an ARFIMA($1, 0.3, 0$)-ARCH(1) process with AR parameter $r = 0.8$ generated as in (5.1). We set $\mu(j/n) = (j/n)^2 + \cos(\pi j/n)$, and $\mu(x)$ is estimated at $x = 1/4$ by the kernel-type estimator
$$\hat\mu_n(x) = \frac{\sum_{j=1}^{n} K\big(\frac{nx-j}{nb}\big) Y_j}{\sum_{j=1}^{n} K\big(\frac{nx-j}{nb}\big)},$$
using a Gaussian kernel and setting $b = n^{-1/4}(4/3)^{1/5}\,\mathrm{SD}(J)$, where $\mathrm{SD}(J)$ is the standard deviation of the regressor $J = j/n$, $j = 1, 2, \ldots, n$. $\mathrm{Var}(\hat\mu_n(x))$ is estimated by the Monte Carlo variance. Figure 3 shows a close fit between the estimated density of $\mathrm{Var}(\hat\mu_n(x))^{-1/2}(\hat\mu_n(x) - \mu(x))$ and the standard normal density.
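The estimator used in this experiment can be coded in a few lines. The sketch below (ours, not the paper's code; Python, stdlib only) substitutes AR(1) errors with Gaussian innovations for the ARFIMA-ARCH process, reads the regression function as $\mu(x) = x^2 + \cos(\pi x)$, and checks that the estimate at $x = 1/4$ is close to $\mu(1/4)$:

```python
import math
import random

def mu(u):
    # Our reading of the garbled regression function: mu(x) = x^2 + cos(pi x)
    return u * u + math.cos(math.pi * u)

def kernel_estimate(Y, x, b):
    """Kernel-type estimator of Section 5 with a Gaussian kernel."""
    n = len(Y)
    num = den = 0.0
    for j in range(1, n + 1):
        w = math.exp(-0.5 * ((n * x - j) / (n * b)) ** 2)  # K((nx-j)/(nb))
        num += w * Y[j - 1]
        den += w
    return num / den

rng = random.Random(5)
n, phi = 2000, 0.5
e = 0.0
Y = []
for j in range(1, n + 1):
    e = phi * e + rng.gauss(0.0, 0.3)   # AR(1) errors (stand-in process)
    Y.append(mu(j / n) + e)

grid = [j / n for j in range(1, n + 1)]
mJ = sum(grid) / n
sd_J = math.sqrt(sum((u - mJ) ** 2 for u in grid) / n)  # SD(J), about 0.289
b = n ** (-0.25) * (4.0 / 3.0) ** 0.2 * sd_J            # bandwidth rule above
est = kernel_estimate(Y, 0.25, b)       # should be close to mu(0.25)
```

With $n = 2000$ and this bandwidth, the stochastic error and smoothing bias are both small, so the estimate falls within a few hundredths of the target.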
REFERENCES

Abadir, K.M. (1993) The limiting distribution of the autocorrelation coefficient under a unit root. Annals of Statistics 21, 1058–1070.
Abadir, K.M. (1995) The limiting distribution of the t ratio under a unit root. Econometric Theory 11, 775–793.
Anderson, T.W. (1959) On asymptotic distributions of estimates of parameters of stochastic difference equations. Annals of Mathematical Statistics 30, 676–687.
Billingsley, P. (1968) Convergence of Probability Measures. Wiley.
Bollerslev, T. (1988) On the correlation structure for the generalized autoregressive conditional heteroskedastic process. Journal of Time Series Analysis 9, 121–131.
Borovskikh, Y.V. & V.S. Korolyuk (1997) Martingale Approximation. VSP.
Breidt, F.J., N. Crato, & P. de Lima (1998) On the detection and estimation of long memory in stochastic volatility. Journal of Econometrics 83, 325–348.
Brockwell, P.J. & R.A. Davis (1991) Time Series: Theory and Methods, 2nd ed. Springer Series in Statistics. Springer-Verlag.
Davydov, Y.A. (1970) The invariance principle for stationary processes. Theory of Probability and its Applications 15, 487–498.
Dharmadhikari, S.W., V. Fabian, & K. Jogdeo (1968) Bounds on the moments of martingales. Annals of Mathematical Statistics 39, 1719–1723.
Dickey, D.A. & W.A. Fuller (1979) Distribution of estimators of autoregressive time series with a unit root. Journal of the American Statistical Association 74, 427–431.
Dickey, D.A. & W.A. Fuller (1981) Likelihood ratio statistics for autoregressive time series with a unit root. Econometrica 49, 1057–1072.
Dolado, J.J., J. Gonzalo, & L. Mayoral (2002) A fractional Dickey–Fuller test for unit roots. Econometrica 70, 1963–2006.
Engle, R.F. (1982) Autoregressive conditional heteroscedasticity with estimates of the variance of United Kingdom inflation. Econometrica 50, 987–1008.
Giraitis, L., P. Kokoszka, & R. Leipus (2000) Stationary ARCH models: Dependence structure and central limit theorems. Econometric Theory 16, 3–22.
Giraitis, L., R. Leipus, & D. Surgailis (2007) Recent advances in ARCH modelling. In G. Teyssière & A.P. Kirman (eds.), Long Memory in Economics, pp. 3–38. Springer.
Gordin, M.I. (1969) The central limit theorem for stationary processes. Soviet Mathematics Doklady 10, 1174–1176.
Granger, C.W.J. & R. Joyeux (1980) An introduction to long-memory time series models and fractional differencing. Journal of Time Series Analysis 1, 15–29.
Gray, H.L., N.-F. Zhang, & W.A. Woodward (1989) On generalized fractional processes. Journal of Time Series Analysis 10, 233–257.
Hájek, J. & Z. Šidák (1967) Theory of Rank Tests. Academic Press.
Hall, P. & C.C. Heyde (1980) Martingale Limit Theory and Its Application. Academic Press.
Harvey, A.C. (1998) Long memory in stochastic volatility. In J. Knight & S. Satchell (eds.), Forecasting Volatility in the Financial Markets, pp. 307–320. Butterworth & Heinemann.
Hosking, J.R.M. (1981) Fractional differencing. Biometrika 68, 165–176.
Ibragimov, I.A. & Y.V. Linnik (1971) Independent and Stationary Sequences of Random Variables. Wolters-Noordhoff.
Kwiatkowski, D., P.C.B. Phillips, P. Schmidt, & Y. Shin (1992) Testing the null hypothesis of stationarity against the alternative of a unit root: How sure are we that economic time series have a unit root? Journal of Econometrics 54, 159–178.
Lamperti, J.W. (1962) Semi-stable stochastic processes. Transactions of the American Mathematical Society 104, 62–78.
Marinucci, D. & P.M. Robinson (1999) Alternative forms of fractional Brownian motion. Journal of Statistical Planning and Inference 80, 111–122.
Merlevède, F., M. Peligrad, & S. Utev (2006) Recent advances in invariance principles for stationary sequences. Probability Surveys 3, 1–36.
Nelson, D.B. (1991) Conditional heteroskedasticity in asset returns: A new approach. Econometrica 59, 347–370.
Peligrad, M. & S. Utev (1997) Central limit theorem for linear processes. Annals of Probability 25, 443–456.
Peligrad, M. & S. Utev (2005) A new maximal inequality and invariance principle for stationary sequences. Annals of Probability 33, 798–815.
Peligrad, M. & S. Utev (2006) Central limit theorem for stationary linear processes. Annals of Probability 34, 1608–1622.
Phillips, P.C.B. (1987) Time series regression with a unit root. Econometrica 55, 277–301.
Phillips, P.C.B. & T. Magdalinos (2007) Limit theory for moderate deviations from a unit root. Journal of Econometrics 136, 115–130.
Phillips, P.C.B. & V. Solo (1992) Asymptotics for linear processes. Annals of Statistics 20, 971–1001.
Pollard, D. (1984) Convergence of Stochastic Processes. Springer Series in Statistics. Springer-Verlag.
Robinson, P.M. (1991) Testing for strong serial correlation and dynamic conditional heteroskedasticity in multiple regression. Journal of Econometrics 47, 67–84.
Robinson, P.M. (1997) Large-sample inference for nonparametric regression with dependent errors. Annals of Statistics 25, 2054–2083.
Robinson, P.M. (2001) The memory of stochastic volatility models. Journal of Econometrics 101, 195–218.
Silverman, B.W. (1986) Density Estimation. Chapman & Hall.
Stout, W. (1974) Almost Sure Convergence. Academic Press.
Surgailis, D. & M.-C. Viano (2002) Long memory properties and covariance structure of the EGARCH model. ESAIM: Probability and Statistics 6, 311–329.
Taqqu, M.S. (1975) Weak convergence to fractional Brownian motion and to the Rosenblatt process. Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete 31, 287–302.
Wu, W.B. & M. Woodroofe (2004) Martingale approximations for sums of stationary processes. Annals of Probability 32, 1674–1690.