Electronic Journal of Probability
Vol. 4 (1999), Paper no. 7, pages 1-17.
Paper URL
http://www.math.washington.edu/~ejpecp/EjpVol4/paper7.abs.html
Journal URL
http://www.math.washington.edu/~ejpecp/

WEAK CONVERGENCE FOR THE ROW SUMS OF A TRIANGULAR ARRAY
OF EMPIRICAL PROCESSES INDEXED BY A MANAGEABLE TRIANGULAR
ARRAY OF FUNCTIONS
Miguel A. Arcones
Department of Mathematical Sciences
State University of New York
Binghamton, NY 13902
arcones@math.binghamton.edu
http://math.binghamton.edu/arcones/index.html

Abstract: We study the weak convergence of the row sums of a general triangular array of empirical
processes indexed by a manageable class of functions converging to an arbitrary limit. As particular
cases, we consider random series processes and normalized sums of i.i.d. random processes with
Gaussian and stable limits. An application to linear regression is presented. In this application, the
limit of the row sums of a triangular array of empirical processes is a mixture of a Gaussian process
with a random series process.
Keywords: Empirical processes, triangular arrays, manageable classes.
AMS subject classification: 60B12, 60F15.
Submitted to EJP on June 25, 1998. Final version accepted on April 16, 1999.

1. Introduction. Let $(S_{n,j}, \mathcal{S}_{n,j})$, $1 \le j \le k_n$, be measurable spaces, where $\{k_n\}_{n=1}^{\infty}$ is a sequence of positive integers converging to infinity. Let $\{X_{n,j} : 1 \le j \le k_n\}$ be $S_{n,j}$-valued independent r.v.'s defined on $(\prod_{j=1}^{k_n} S_{n,j}, \prod_{j=1}^{k_n} \mathcal{S}_{n,j})$. Let $f_{n,j}(\cdot, t) : S_{n,j} \to \mathbb{R}$ be a measurable function for each $1 \le j \le k_n$ and each $t \in T$. Let $c_n(t)$ be a real number. Let
\[
(1.1)\qquad Z_n(t) := \Bigl( \sum_{j=1}^{k_n} f_{n,j}(X_{n,j}, t) \Bigr) - c_n(t).
\]
We study the weak convergence of the sequence of stochastic processes $\{Z_n(t) : t \in T\}$. Observe that $Z_n(t)$ is a sum of independent random variables minus a shift. As usual, we will use the definition of weak convergence of stochastic processes in Hoffmann-Jørgensen (1991).
As a particular case, we consider normalized sums of i.i.d. random processes. Let $\{X_j\}_{j=1}^{\infty}$ be a sequence of i.i.d.r.v.'s with values in a measurable space $(S, \mathcal{S})$, let $X$ be a copy of $X_1$, let $f(\cdot, t) : S \to \mathbb{R}$ be a measurable function for each $t \in T$, let $\{a_n\}_{n=1}^{\infty}$ be a sequence of positive numbers converging to infinity and let $c_n(t)$ be a real number. The sequence of processes
\[
(1.2)\qquad \Bigl\{ Z_n(t) := \Bigl( a_n^{-1} \sum_{j=1}^{n} f(X_j, t) \Bigr) - c_n(t) : t \in T \Bigr\}, \quad n \ge 1,
\]
is a particular case of the sequence of processes in (1.1).
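For concreteness, the classical empirical process fits the scheme (1.2) with $f(x, t) = I_{x \le t}$, $a_n = n^{1/2}$ and $c_n(t) = n^{1/2} F(t)$. The following minimal Python sketch (an illustration added here, not part of the original argument; the grid of $t$ values and the uniform distribution of $X$ are arbitrary choices) evaluates $Z_n(t)$ on a finite grid:

```python
import numpy as np

rng = np.random.default_rng(0)

def Z_n(X, t_grid):
    """Classical empirical process: a special case of (1.2) with
    f(x, t) = 1{x <= t}, a_n = sqrt(n), c_n(t) = sqrt(n) * F(t),
    here with F the uniform(0, 1) c.d.f., so F(t) = t."""
    n = len(X)
    counts = (X[:, None] <= t_grid[None, :]).sum(axis=0)  # sum_j 1{X_j <= t}
    return counts / np.sqrt(n) - np.sqrt(n) * t_grid

X = rng.uniform(size=1000)          # i.i.d. sample
t_grid = np.linspace(0.0, 1.0, 21)  # finite subset of T = [0, 1]
print(np.round(Z_n(X, t_grid), 3))  # one realization of {Z_n(t) : t in grid}
```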
Let $\{X_j\}_{j=1}^{\infty}$ be a sequence of independent r.v.'s with values in $(S_j, \mathcal{S}_j)$. Let $f_j(\cdot, t) : S_j \to \mathbb{R}$ be a measurable function for each $1 \le j$ and each $t \in T$. Define
\[
(1.3)\qquad \Bigl\{ Z_n(t) := \sum_{j=1}^{n} f_j(X_j, t) : t \in T \Bigr\}.
\]
This sequence of stochastic processes is another particular case of the processes in (1.1). We call the process in (1.3) a random series process.
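As an illustration of a random series process (an example added here, not taken from the paper: the choice $f_j(x, t) = 2^{-j} x \sin(j t)$ with $X_j$ uniform on $[-1, 1]$ is arbitrary, but its envelopes $F_j \equiv 2^{-j}$ are square summable, in line with the conditions used later in Theorem 2.2), one can watch the partial sums in (1.3) stabilize as $n$ grows:

```python
import numpy as np

rng = np.random.default_rng(1)

def partial_sum(X, t_grid, n):
    """Z_n(t) = sum_{j=1}^{n} f_j(X_j, t) with f_j(x, t) = 2**(-j) * x * sin(j*t)."""
    j = np.arange(1, n + 1)
    terms = (2.0 ** -j)[:, None] * X[:n, None] * np.sin(np.outer(j, t_grid))
    return terms.sum(axis=0)

X = rng.uniform(-1.0, 1.0, size=60)     # one realization of the independent X_j's
t_grid = np.linspace(0.0, np.pi, 9)
for n in (5, 20, 60):                   # the printed rows become nearly identical
    print(n, np.round(partial_sum(X, t_grid, n), 6))
```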
We present weak limit theorems for sums of general triangular arrays of independent random variables with an arbitrary limit distribution. Usually, limit theorems for sums of triangular arrays of independent r.v.'s are studied for infinitesimal arrays (see for example Gnedenko and Kolmogorov, 1968). For infinitesimal arrays, the limit distribution is infinitely divisible. In general, random series are not infinitely divisible. The set-up considered here allows limit distributions which are a mixture of an infinitely divisible distribution and a random series. In Section 3, an application of the presented limit theorems is given. In this example, the limit distribution of a certain triangular array of empirical processes is a mixture of a Gaussian process and a random series process.
In Section 2, we prove the weak convergence of the process $\{Z_n(t) : t \in T\}$, as in (1.1), for classes of functions satisfying a uniform bound on packing numbers. Given a set $K \subset \mathbb{R}^n$, the packing number $D(u, K)$ is defined by
\[
(1.4)\qquad D(u, K) := \sup\{ m : \text{there exist } v_1, \ldots, v_m \in K \text{ such that } |v_i - v_j| > u \text{ for } i \ne j \},
\]
where $|v|$ is the Euclidean norm.
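To fix ideas, the following short Python sketch (an illustration added here, not part of the original text) computes $D(u, K)$ for a small finite $K \subset \mathbb{R}^n$ by exhaustive search over subsets; for a finite $K$ the supremum in (1.4) is attained.

```python
import itertools
import numpy as np

def packing_number(K, u):
    """D(u, K): the largest m such that K contains m points whose pairwise
    Euclidean distances are strictly greater than u (definition (1.4))."""
    K = np.asarray(K, dtype=float)
    for m in range(len(K), 0, -1):                 # try the largest sizes first
        for idx in itertools.combinations(range(len(K)), m):
            pts = K[list(idx)]
            dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
            if np.all(dists[np.triu_indices(m, k=1)] > u):
                return m
    return 0

K = [(0, 0), (1, 0), (0, 1), (1, 1)]   # four corners of the unit square
print(packing_number(K, 0.5))          # 4: all pairwise distances exceed 0.5
print(packing_number(K, 1.0))          # 2: only a diagonal pair is more than 1 apart
print(packing_number(K, 1.5))          # 1: no two points are more than sqrt(2) apart
```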
The interest of this concept hinges on the following maximal inequality:
\[
(1.5)\qquad E\Bigl[ \sup_{v \in K} \Bigl| \sum_{j=1}^{n} \sigma_j (v^{(j)} - v_0^{(j)}) \Bigr| \Bigr] \le 9 \int_0^{D} (\log D(u, K))^{1/2} \, du,
\]
for any $K \subset \mathbb{R}^n$ and any $v_0 \in K$, where $\{\sigma_j\}$ is a sequence of Rademacher r.v.'s, $v = (v^{(1)}, \ldots, v^{(n)})$ and $D = \sup_{v \in K} |v|$ (see Theorem II.3.1 in Marcus and Pisier, 1981; see also Pollard, 1990, Theorem 3.5). We consider triangular arrays of functions satisfying the following condition:
Definition 1.1. Given a triangular array of sets $\{S_{n,j} : 1 \le j \le k_n, \, 1 \le n\}$, a parameter set $T$ and functions $\{f_{n,j}(\cdot, t) : 1 \le j \le k_n, \, 1 \le n, \, t \in T\}$, where $f_{n,j}(\cdot, t)$ is defined on $S_{n,j}$, we say that the triangular array of functions $\{f_{n,j}(\cdot, t) : 1 \le j \le k_n, \, 1 \le n, \, t \in T\}$ is manageable with respect to the envelope functions $\{F_{n,j}(\cdot) : 1 \le j \le k_n, \, 1 \le n\}$, where $F_{n,j}$ is a function defined on $S_{n,j}$ such that $\sup_{t \in T} |f_{n,j}(x_{n,j}, t)| \le F_{n,j}(x_{n,j})$ for each $x_{n,j} \in S_{n,j}$, if the function $M(u)$, defined on $(0, 1)$ by
\[
M(u) := \sup \, D\Bigl( u \Bigl( \sum_{j=1}^{k_n} \alpha_{n,j}^2 F_{n,j}^2(x_{n,j}) \Bigr)^{1/2}, \; G_n(x_{n,1}, \ldots, x_{n,k_n}; \alpha_{n,1}, \ldots, \alpha_{n,k_n}) \Bigr),
\]
where
\[
G_n(x_{n,1}, \ldots, x_{n,k_n}; \alpha_{n,1}, \ldots, \alpha_{n,k_n}) = \{ (\alpha_{n,1} f_{n,1}(x_{n,1}, t), \ldots, \alpha_{n,k_n} f_{n,k_n}(x_{n,k_n}, t)) \in \mathbb{R}^{k_n} : t \in T \}
\]
and the sup is taken over $n \ge 1$, $\alpha_{n,1}, \ldots, \alpha_{n,k_n} \in \{0, 1\}$ and $x_{n,1} \in S_{n,1}, \ldots, x_{n,k_n} \in S_{n,k_n}$, satisfies $\int_0^1 (\log M(u))^{1/2} \, du < \infty$.
The last definition is a slight modification of Definition 7.9 in Pollard (1990). The difference between his definition and ours is that he allows $\alpha_{n,1}, \ldots, \alpha_{n,k_n} \ge 0$. Definition 1.1 is a generalization to the triangular array case of the concept of VC subgraph classes, which has been studied by several authors (see for example Vapnik and Červonenkis, 1971, 1981; Dudley, 1978, 1984; Giné and Zinn, 1984, 1986; Pollard, 1984, 1990; and Alexander, 1987a, 1987b). We refer to Pollard (1990) for ways to check Definition 1.1. Observe that by (1.5), for a manageable class and a sequence of Rademacher r.v.'s $\{\sigma_j\}_{j=1}^{\infty}$,
\[
(1.6)\qquad E\Bigl[ \sup_{t \in T} \Bigl| \sum_{j=1}^{k_n} \sigma_j \alpha_{n,j} (f_{n,j}(x_{n,j}, t) - f_{n,j}(x_{n,j}, t_0)) \Bigr| \Bigr]
\le 9 \int_0^1 (\log M(u))^{1/2} \, du \, \Bigl( \sum_{j=1}^{k_n} \alpha_{n,j}^2 F_{n,j}^2(x_{n,j}) \Bigr)^{1/2},
\]
for each $\alpha_{n,1}, \ldots, \alpha_{n,k_n} \in \{0, 1\}$, each $x_{n,1} \in S_{n,1}, \ldots, x_{n,k_n} \in S_{n,k_n}$ and each $t_0 \in T$. This inequality will allow us to obtain the pertinent weak limit theorems.
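As an illustration of how the integrability condition in Definition 1.1 is usually checked (a sketch added here, not part of the original text; the polynomial packing bound below is an assumption, the standard one for VC-type, or "Euclidean", classes in the sense of Pollard, 1990), suppose that there are constants $A \ge 1$ and $W > 0$ such that
\[
D\Bigl( u \Bigl( \sum_{j=1}^{k_n} \alpha_{n,j}^2 F_{n,j}^2(x_{n,j}) \Bigr)^{1/2}, \; G_n(x_{n,1}, \ldots, x_{n,k_n}; \alpha_{n,1}, \ldots, \alpha_{n,k_n}) \Bigr) \le A u^{-W}, \qquad 0 < u \le 1,
\]
uniformly in $n$, the weights and the points. Then $M(u) \le A u^{-W}$ and
\[
\int_0^1 (\log M(u))^{1/2} \, du \le \int_0^1 \bigl( \log A + W \log(1/u) \bigr)^{1/2} \, du
\le (\log A)^{1/2} + W^{1/2} \int_0^1 (\log(1/u))^{1/2} \, du = (\log A)^{1/2} + \frac{(\pi W)^{1/2}}{2} < \infty,
\]
so such a triangular array is manageable.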
Triangular arrays of empirical processes have been considered by several authors. Alexander (1987a) and Pollard (1990, Theorem 10.6) consider triangular arrays of empirical processes whose limit distribution is a Gaussian process. More work on triangular arrays, mostly in relation with partial-sum processes, can be found in Arcones, Gaenssler and Ziegler (1992); Gaenssler and Ziegler (1994); Gaenssler (1994); and Ziegler (1997).
Triangular arrays of empirical processes with a Gaussian limit appear very often in statistics (see for example Pollard, 1984, 1990; Le Cam, 1986; Kim and Pollard, 1990; and Arcones, 1994). For an application of the presented results to M-estimation, see Arcones (1996). In this reference, the convergence of M-estimators to a stable limit distribution is considered. These results are not possible without the contribution in this paper.
2. Weak convergence of row sums of a triangular array of empirical processes indexed by a manageable triangular array of functions. In this section, we give sufficient conditions for the weak convergence of the stochastic processes in (1.1) when the class of functions $\{f_{n,j}(x_{n,j}, t) : 1 \le j \le k_n, \, t \in T\}$ is manageable with respect to some triangular array of envelope functions $\{F_{n,j}(x_{n,j}) : 1 \le j \le k_n\}$. Every $F_{n,j}(x_{n,j})$ is bigger than or equal to $\sup_{t \in T} |f_{n,j}(x_{n,j}, t)|$, but it is not necessarily the smallest r.v. satisfying this property. We call a finite partition $\pi$ of $T$ a map $\pi : T \to T$ such that $\pi(\pi(t)) = \pi(t)$ for each $t \in T$ and such that the cardinality of $\{\pi(t) : t \in T\}$ is finite.
Theorem 2.1. With the above notation, let $b > 0$ and suppose that:
(i) The finite dimensional distributions of $\{Z_n(t) := \sum_{j=1}^{k_n} f_{n,j}(X_{n,j}, t) - c_n(t) : t \in T\}$ converge to those of $\{Z(t) : t \in T\}$.
(ii) The triangular array of functions $\{f_{n,j}(\cdot, t) : 1 \le j \le k_n, \, 1 \le n, \, t \in T\}$ is manageable with respect to the envelope functions $\{F_{n,j}(\cdot) : 1 \le j \le k_n, \, 1 \le n\}$.
(iii) For each $t \in T$, $\sup_{n \ge 1} \sum_{j=1}^{k_n} \Pr\{ |f_{n,j}(X_{n,j}, t)| \ge 2^{-1} b \} < \infty$.
(iv) For each $\eta > 0$, there exists a finite partition $\pi$ of $T$ such that
\[
\limsup_{n \to \infty} \sum_{j=1}^{k_n} \Pr{}^{*}\{ \sup_{t \in T} |f_{n,j}(X_{n,j}, t) - f_{n,j}(X_{n,j}, \pi(t))| \ge \eta \} \le \eta.
\]
(v) $\sup_{n \ge 1} E[\sum_{j=1}^{k_n} F_{n,j}^2(X_{n,j}) I_{F_{n,j}(X_{n,j}) \le b}] < \infty$.
(vi) For each $\eta > 0$, there exists a finite partition $\pi$ of $T$ such that
\[
\limsup_{n \to \infty} \sup_{t \in T} \sum_{j=1}^{k_n} E[(f_{n,j}(X_{n,j}, t) - f_{n,j}(X_{n,j}, \pi(t)))^2 I_{F_{n,j}(X_{n,j}) \le b}] \le \eta.
\]
(vii) For each $\eta > 0$, there exists a finite partition $\pi$ of $T$ such that
\[
\limsup_{n \to \infty} \sup_{t \in T} | E[S_n(t, b) - S_n(\pi(t), b)] - c_n(t) + c_n(\pi(t)) | \le \eta,
\]
where $S_n(t, b) = \sum_{j=1}^{k_n} f_{n,j}(X_{n,j}, t) I_{F_{n,j}(X_{n,j}) \le b}$.
Then,
\[
\{Z_n(t) : t \in T\} \; \stackrel{w}{\rightarrow} \; \{Z(t) : t \in T\}.
\]
Proof. By Theorem 2.1 in Arcones (1998), it suffices to show that for each $\epsilon > 0$, there exists a finite partition $\pi$ of $T$ such that
\[
(2.1)\qquad \limsup_{n \to \infty} \Pr\Bigl\{ \sup_{t \in T} \Bigl| \sum_{j=1}^{k_n} \sigma_j (f_{n,j}(X_{n,j}, t) - f_{n,j}(X_{n,j}, \pi(t))) I_{F_{n,j}(X_{n,j}) \le b} \Bigr| \ge \epsilon \Bigr\} \le 4\epsilon,
\]
where $\{\sigma_j\}$ is a sequence of Rademacher r.v.'s independent of $\{X_{n,j}\}$.
Take $a > 0$, $\delta > 0$, $\eta > 0$ and a finite partition $\pi$ of $T$, in this order, such that
\[
(2.2)\qquad \epsilon^{-1} \limsup_{n \to \infty} E\Bigl[ \sum_{j=1}^{k_n} F_{n,j}^2(X_{n,j}) I_{F_{n,j}(X_{n,j}) \le b} \Bigr] < a,
\]
\[
36 a^{1/2} \int_0^{2^{-1/2} a^{-1/2} \delta} (\log M(u))^{1/2} \, du \le \epsilon^2,
\]
\[
144 \eta \delta^{-2} \limsup_{n \to \infty} E\Bigl[ \Bigl( \sum_{j=1}^{k_n} F_{n,j}^2(X_{n,j}) I_{F_{n,j}(X_{n,j}) \le b} \Bigr)^{1/2} \Bigr] \int_0^1 (\log M(u))^{1/2} \, du < \epsilon,
\]
\[
\limsup_{n \to \infty} \sup_{t \in T} \sum_{j=1}^{k_n} E[(f_{n,j}(X_{n,j}, t) - f_{n,j}(X_{n,j}, \pi(t)))^2 I_{F_{n,j}(X_{n,j}) \le b}] < \delta^2
\]
and
\[
\limsup_{n \to \infty} 8 b^2 \delta^{-2} \sum_{j=1}^{k_n} \Pr{}^{*}\{ \sup_{t \in T} |f_{n,j}(X_{n,j}, t) - f_{n,j}(X_{n,j}, \pi(t))| \ge \eta \} \le \epsilon.
\]
Observe that by taking a refinement of partitions, we can get a partition so that both conditions (iv) and (vi) hold simultaneously. We have that
\begin{align*}
& \Pr\Bigl\{ \sup_{t \in T} \Bigl| \sum_{j=1}^{k_n} \sigma_j (f_{n,j}(X_{n,j}, t) - f_{n,j}(X_{n,j}, \pi(t))) I_{F_{n,j}(X_{n,j}) \le b} \Bigr| \ge \epsilon \Bigr\} \\
& \le \Pr\Bigl\{ \sum_{j=1}^{k_n} F_{n,j}^2(X_{n,j}) I_{F_{n,j}(X_{n,j}) \le b} \ge a \Bigr\} \\
& \quad + \Pr\Bigl\{ \sup_{t \in T} \sum_{j=1}^{k_n} (f_{n,j}(X_{n,j}, t) - f_{n,j}(X_{n,j}, \pi(t)))^2 I_{F_{n,j}(X_{n,j}) \le b} \ge 2\delta^2 \Bigr\} \\
& \quad + \Pr\Bigl\{ A \cap \Bigl\{ \sup_{t \in T} \Bigl| \sum_{j=1}^{k_n} \sigma_j (f_{n,j}(X_{n,j}, t) - f_{n,j}(X_{n,j}, \pi(t))) I_{F_{n,j}(X_{n,j}) \le b} \Bigr| \ge \epsilon \Bigr\} \Bigr\} \\
& =: \mathrm{I} + \mathrm{II} + \mathrm{III},
\end{align*}
where
\[
A := \Bigl\{ \sum_{j=1}^{k_n} F_{n,j}^2(X_{n,j}) I_{F_{n,j}(X_{n,j}) \le b} < a, \;
\sup_{t \in T} \sum_{j=1}^{k_n} (f_{n,j}(X_{n,j}, t) - f_{n,j}(X_{n,j}, \pi(t)))^2 I_{F_{n,j}(X_{n,j}) \le b} < 2\delta^2 \Bigr\}.
\]
By (2.2),
\[
\mathrm{I} \le a^{-1} E\Bigl[ \sum_{j=1}^{k_n} F_{n,j}^2(X_{n,j}) I_{F_{n,j}(X_{n,j}) \le b} \Bigr] \le \epsilon,
\]
for $n$ large enough. We have that
\begin{align*}
\mathrm{II} & \le \Pr\Bigl\{ \sup_{t \in T} \Bigl| \sum_{j=1}^{k_n} \bigl( (f_{n,j}(X_{n,j}, t) - f_{n,j}(X_{n,j}, \pi(t)))^2 I_{F_{n,j}(X_{n,j}) \le b}
- E[(f_{n,j}(X_{n,j}, t) - f_{n,j}(X_{n,j}, \pi(t)))^2 I_{F_{n,j}(X_{n,j}) \le b}] \bigr) \Bigr| \ge \delta^2 \Bigr\} \\
& \le 2 \delta^{-2} E\Bigl[ \sup_{t \in T} \Bigl| \sum_{j=1}^{k_n} \sigma_j (f_{n,j}(X_{n,j}, t) - f_{n,j}(X_{n,j}, \pi(t)))^2 I_{F_{n,j}(X_{n,j}) \le b} \Bigr| \Bigr] \\
& \le 8 b^2 \delta^{-2} \sum_{j=1}^{k_n} \Pr{}^{*}\{ \sup_{t \in T} |f_{n,j}(X_{n,j}, t) - f_{n,j}(X_{n,j}, \pi(t))| \ge \eta \} \\
& \quad + 2 \delta^{-2} E\Bigl[ \sup_{t \in T} \Bigl| \sum_{j=1}^{k_n} \sigma_j (f_{n,j}(X_{n,j}, t) - f_{n,j}(X_{n,j}, \pi(t)))^2 I_{F_{n,j}(X_{n,j}) \le b, \, \sup_{t \in T} |f_{n,j}(X_{n,j}, t) - f_{n,j}(X_{n,j}, \pi(t))| < \eta} \Bigr| \Bigr].
\end{align*}
By (4.19) in Ledoux and Talagrand (1991) and (1.6),
\begin{align*}
& 2 \delta^{-2} E\Bigl[ \sup_{t \in T} \Bigl| \sum_{j=1}^{k_n} \sigma_j (f_{n,j}(X_{n,j}, t) - f_{n,j}(X_{n,j}, \pi(t)))^2 I_{F_{n,j}(X_{n,j}) \le b, \, \sup_{t \in T} |f_{n,j}(X_{n,j}, t) - f_{n,j}(X_{n,j}, \pi(t))| < \eta} \Bigr| \Bigr] \\
& \le 8 \eta \delta^{-2} E\Bigl[ \sup_{t \in T} \Bigl| \sum_{j=1}^{k_n} \sigma_j (f_{n,j}(X_{n,j}, t) - f_{n,j}(X_{n,j}, \pi(t))) I_{F_{n,j}(X_{n,j}) \le b} \Bigr| \Bigr] \\
& \le 16 \eta \delta^{-2} E\Bigl[ \sup_{t \in T} \Bigl| \sum_{j=1}^{k_n} \sigma_j (f_{n,j}(X_{n,j}, t) - f_{n,j}(X_{n,j}, t_0)) I_{F_{n,j}(X_{n,j}) \le b} \Bigr| \Bigr] \\
& \le 144 \eta \delta^{-2} E\Bigl[ \int_0^{D_n} (\log D(u, \mathcal{F}_n))^{1/2} \, du \Bigr],
\end{align*}
where $t_0 \in T$,
\[
\mathcal{F}_n := \{ (f_{n,1}(X_{n,1}, t) I_{F_{n,1}(X_{n,1}) \le b}, \ldots, f_{n,k_n}(X_{n,k_n}, t) I_{F_{n,k_n}(X_{n,k_n}) \le b}) \in \mathbb{R}^{k_n} : t \in T \}
\]
and
\[
D_n^2 := \sup_{t \in T} \sum_{j=1}^{k_n} f_{n,j}^2(X_{n,j}, t) I_{F_{n,j}(X_{n,j}) \le b} \le \sum_{j=1}^{k_n} F_{n,j}^2(X_{n,j}) I_{F_{n,j}(X_{n,j}) \le b}.
\]
By (ii),
\[
D(u, \mathcal{F}_n) \le M\Bigl( u \Bigl( \sum_{j=1}^{k_n} F_{n,j}^2(X_{n,j}) I_{F_{n,j}(X_{n,j}) \le b} \Bigr)^{-1/2} \Bigr).
\]
So,
\[
2 \delta^{-2} E\Bigl[ \sup_{t \in T} \Bigl| \sum_{j=1}^{k_n} \sigma_j (f_{n,j}(X_{n,j}, t) - f_{n,j}(X_{n,j}, \pi(t)))^2 I_{F_{n,j}(X_{n,j}) \le b, \, \sup_{t \in T} |f_{n,j}(X_{n,j}, t) - f_{n,j}(X_{n,j}, \pi(t))| < \eta} \Bigr| \Bigr]
\le 144 \eta \delta^{-2} \int_0^1 (\log M(u))^{1/2} \, du \; E\Bigl[ \Bigl( \sum_{j=1}^{k_n} F_{n,j}^2(X_{n,j}) I_{F_{n,j}(X_{n,j}) \le b} \Bigr)^{1/2} \Bigr] \le \epsilon,
\]
for $n$ large enough.
By (1.5),
\[
\mathrm{III} \le 9 \epsilon^{-1} E\Bigl[ I_A \int_0^{D_n'} (\log D(u, \mathcal{F}_n'))^{1/2} \, du \Bigr],
\]
where
\[
\mathcal{F}_n' := \{ (f_{n,1}(X_{n,1}, t) - f_{n,1}(X_{n,1}, \pi(t)), \ldots, f_{n,k_n}(X_{n,k_n}, t) - f_{n,k_n}(X_{n,k_n}, \pi(t))) \in \mathbb{R}^{k_n} : t \in T \}
\]
and
\[
D_n'^2 := \sup_{t \in T} \sum_{j=1}^{k_n} (f_{n,j}(X_{n,j}, t) - f_{n,j}(X_{n,j}, \pi(t)))^2 I_{F_{n,j}(X_{n,j}) \le b}.
\]
On $A$, $D_n'^2 \le 2\delta^2$. We have that
\[
(\log D(u, \mathcal{F}_n'))^{1/2} \le 2^{1/2} (\log D(2^{-1} u, \mathcal{F}_n))^{1/2} \le 2^{1/2} (\log M(2^{-1} a^{-1/2} u))^{1/2}.
\]
So,
\[
\mathrm{III} \le 18 \epsilon^{-1} \int_0^{2^{1/2} \delta} (\log M(2^{-1} a^{-1/2} u))^{1/2} \, du
\le 36 a^{1/2} \epsilon^{-1} \int_0^{2^{-1/2} a^{-1/2} \delta} (\log M(u))^{1/2} \, du \le \epsilon.
\]
From all these estimations, (2.1) follows. $\Box$
Condition (i) in the previous theorem is a necessary condition. Condition (iii) in Theorem 2.1 is a very weak condition. Under some regularity conditions, conditions (iv) and (vii) in Theorem 2.1 are also necessary (see Theorem 2.2 in Arcones, 1998). Observe that by the Hoffmann-Jørgensen inequality, (v) is equivalent to
\[
\sup_{n \ge 1} E\Bigl[ \Bigl| \sum_{j=1}^{k_n} \sigma_j F_{n,j}(X_{n,j}) I_{F_{n,j}(X_{n,j}) \le b} \Bigr| \Bigr] < \infty.
\]
So, using the second moment, we are not imposing a stronger condition. Under some regularity conditions, the following condition is also necessary: for each $\eta > 0$, there exists a finite partition $\pi$ of $T$ such that
\[
\limsup_{n \to \infty} E^{*}\bigl[ \sup_{t \in T} |S_n(t, b) - S_n(\pi(t), b) - E[S_n(t, b) - S_n(\pi(t), b)]|^2 \bigr] \le \eta.
\]
So, for each $\eta > 0$, there exists a finite partition $\pi$ of $T$ such that
\[
\limsup_{n \to \infty} \sup_{t \in T} \sum_{j=1}^{k_n} \mathrm{Var}\bigl( (f_{n,j}(X_{n,j}, t) - f_{n,j}(X_{n,j}, \pi(t))) I_{F_{n,j}(X_{n,j}) \le b} \bigr) \le \eta.
\]
This means that condition (vi) in the previous theorem is close to being a necessary condition. It is a necessary condition when the r.v.'s are symmetric (under some regularity conditions).
Next, we consider the case of random series processes.
Theorem 2.2. With the notation in (1.3), let $b > 0$. Suppose that:
(i) For each $t \in T$, $\sum_{j=1}^{n} f_j(X_j, t)$ converges in distribution.
(ii) The triangular array of functions $\{f_j(\cdot, t) : 1 \le j \le n, \, t \in T\}$ is manageable with respect to the envelope functions $\{F_j(\cdot) : 1 \le j \le n\}$.
(iii) For each $\eta > 0$, there exists a finite partition $\pi$ of $T$ such that
\[
\sum_{j=1}^{\infty} \Pr{}^{*}\{ \sup_{t \in T} |f_j(X_j, t) - f_j(X_j, \pi(t))| \ge \eta \} \le \eta.
\]
(iv) $\sum_{j=1}^{\infty} E[F_j^2(X_j) I_{F_j(X_j) \le b}] < \infty$.
(v) For each $\eta > 0$, there exists a finite partition $\pi$ of $T$ such that
\[
\sup_{t \in T} \sum_{j=1}^{\infty} E[(f_j(X_j, t) - f_j(X_j, \pi(t)))^2 I_{F_j(X_j) \le b}] \le \eta.
\]
(vi) For each $\eta > 0$, there exists a finite partition $\pi$ of $T$ such that
\[
\sup_{t \in T} \Bigl| \sum_{j=1}^{\infty} E[(f_j(X_j, t) - f_j(X_j, \pi(t))) I_{F_j(X_j) \le b}] \Bigr| \le \eta.
\]
Then, $\{\sum_{j=1}^{n} f_j(X_j, t) : t \in T\}$ converges weakly.
Proof. We apply Theorem 2.1. Conditions (i), (ii), (iv) and (v) in Theorem 2.1 are assumed. Condition (iii) follows from the three series theorem. As to condition (vi), we have to prove that for each $\epsilon > 0$, there exists a finite partition $\pi$ of $T$ such that
\[
\sup_{t \in T} \sum_{j=1}^{\infty} E[(f_j(X_j, t) - f_j(X_j, \pi(t)))^2 I_{F_j(X_j) \le b}] \le \epsilon.
\]
Take $m < \infty$ such that
\[
(2.3)\qquad \sum_{j=m+1}^{\infty} E[F_j^2(X_j) I_{F_j(X_j) \le b}] \le 2^{-2} \epsilon.
\]
Take a finite partition $\pi$ of $T$ such that
\[
\sum_{j=1}^{\infty} \Pr{}^{*}\{ \sup_{t \in T} |f_j(X_j, t) - f_j(X_j, \pi(t))| \ge m^{-1/2} 2^{-1} \epsilon^{1/2} \} \le 2^{-2} \epsilon b^{-2}.
\]
Then,
\[
\sup_{t \in T} \sum_{j=1}^{m} E[(f_j(X_j, t) - f_j(X_j, \pi(t)))^2 I_{F_j(X_j) \le b}] \le 2^{-1} \epsilon.
\]
By (2.3),
\[
\sup_{t \in T} \sum_{j=m+1}^{\infty} E[(f_j(X_j, t) - f_j(X_j, \pi(t)))^2 I_{F_j(X_j) \le b}] \le 2^{-1} \epsilon.
\]
Hence, the claim follows. $\Box$

By the Itô-Nisio theorem in $l^{\infty}(T)$, the convergence of $\{\sum_{j=1}^{n} f_j(X_j, t) : t \in T\}$ in the previous theorem holds outer almost surely. A proof of this fact can be found in Proposition A.13 in van der Vaart and Wellner (1996). We must notice that this proposition in van der Vaart and Wellner (1996) is wrong: it is not true that the outer a.s. convergence of $\{\sum_{j=1}^{n} f_j(X_j, t) : t \in T\}$ implies the weak convergence of this sequence (see the remark after Theorem 2.3 in Arcones, 1998).
The proof of Theorem 2.1 above and Theorem 2.5 in Arcones (1998) give the following:
Theorem 2.3. With the above notation, let $b > 0$ and suppose that:
(i) For each $\eta > 0$,
\[
\sum_{j=1}^{k_n} \Pr\{ F_{n,j}(X_{n,j}) \ge \eta \} \to 0.
\]
(ii) For each $s, t \in T$, the following limit exists:
\[
\lim_{n \to \infty} \sum_{j=1}^{k_n} \mathrm{Cov}\bigl( f_{n,j}(X_{n,j}, s) I_{|f_{n,j}(X_{n,j}, s)| \le b}, \, f_{n,j}(X_{n,j}, t) I_{|f_{n,j}(X_{n,j}, t)| \le b} \bigr).
\]
(iii) The triangular array of functions $\{f_{n,j}(\cdot, t) : 1 \le j \le k_n, \, 1 \le n, \, t \in T\}$ is manageable with respect to the envelope functions $\{F_{n,j}(\cdot) : 1 \le j \le k_n, \, 1 \le n\}$.
(iv) $\sup_{n \ge 1} E[\sum_{j=1}^{k_n} F_{n,j}^2(X_{n,j}) I_{F_{n,j}(X_{n,j}) \le b}] < \infty$.
(v) For each $\eta > 0$, there exists a finite partition $\pi$ of $T$ such that
\[
\limsup_{n \to \infty} \sup_{t \in T} \sum_{j=1}^{k_n} E[(f_{n,j}(X_{n,j}, t) - f_{n,j}(X_{n,j}, \pi(t)))^2 I_{|f_{n,j}(X_{n,j}, t) - f_{n,j}(X_{n,j}, \pi(t))| \le b}] \le \eta.
\]
Then,
\[
\Bigl\{ Z_n(t) := \sum_{j=1}^{k_n} \bigl( f_{n,j}(X_{n,j}, t) - E[f_{n,j}(X_{n,j}, t) I_{|f_{n,j}(X_{n,j}, t)| \le b}] \bigr) : t \in T \Bigr\} \; \stackrel{w}{\rightarrow} \; \{Z(t) : t \in T\},
\]
where $\{Z(t) : t \in T\}$ is a mean-zero Gaussian process with covariance given by
\[
E[Z(s) Z(t)] = \lim_{n \to \infty} \sum_{j=1}^{k_n} \mathrm{Cov}\bigl( f_{n,j}(X_{n,j}, s) I_{|f_{n,j}(X_{n,j}, s)| \le b}, \, f_{n,j}(X_{n,j}, t) I_{|f_{n,j}(X_{n,j}, t)| \le b} \bigr).
\]

Under regularity conditions, (i) and (ii) in Theorem 2.3 are necessary conditions for the weak convergence of $\{Z_n(t) : t \in T\}$ to a Gaussian process. The last theorem is related to Theorem 10.6 in Pollard (1990) (see also Theorems 2.2 and 2.7 in Alexander, 1987a). Observe that under condition (i), (v) is equivalent to:
(v)' For each $\eta > 0$, there exists a finite partition $\pi$ of $T$ such that
\[
\limsup_{n \to \infty} \sup_{t \in T} \sum_{j=1}^{k_n} E[(f_{n,j}(X_{n,j}, t) - f_{n,j}(X_{n,j}, \pi(t)))^2 I_{F_{n,j}(X_{n,j}) \le b}] \le \eta.
\]
The following corollary follows directly from Theorem 2.3.


Corollary 2.4. Let $\{X_j\}_{j=1}^{\infty}$ be a sequence of i.i.d.r.v.'s with values in a measurable space $(S, \mathcal{S})$, let $f(\cdot, t) : S \to \mathbb{R}$ be a measurable function for each $t \in T$, let $\{a_n\}_{n=1}^{\infty}$ be a sequence of positive numbers regularly varying of order $1/2$, let $F(x)$ be a measurable function such that $F(X) \ge \sup_{t \in T} |f(X, t)|$ and let $b > 0$. Suppose that:
(i) For each $\eta > 0$,
\[
n \Pr\{ F(X) \ge \eta a_n \} \to 0.
\]
(ii) For each $s, t \in T$, the following limit exists:
\[
\lim_{n \to \infty} n a_n^{-2} \mathrm{Cov}\bigl( f(X, s) I_{|f(X, s)| \le b a_n}, \, f(X, t) I_{|f(X, t)| \le b a_n} \bigr).
\]
(iii) The triangular array of functions $\{(f(x_1, t), \ldots, f(x_n, t)) : 1 \le n, \, t \in T\}$ is manageable with respect to the envelope functions $\{(F(x_1), \ldots, F(x_n)) : 1 \le n\}$.
(iv) $\sup_{n \ge 1} n a_n^{-2} E[F^2(X) I_{F(X) \le b a_n}] < \infty$.
(v) For each $\eta > 0$, there exists a finite partition $\pi$ of $T$ such that
\[
\limsup_{n \to \infty} \sup_{t \in T} n a_n^{-2} E[(f(X, t) - f(X, \pi(t)))^2 I_{|f(X, t) - f(X, \pi(t))| \le a_n}] \le \eta.
\]
Then,
\[
\Bigl\{ a_n^{-1} \sum_{j=1}^{n} (f(X_j, t) - E[f(X_j, t)]) : t \in T \Bigr\} \; \stackrel{w}{\rightarrow} \; \{Z(t) : t \in T\},
\]
where $\{Z(t) : t \in T\}$ is a mean-zero Gaussian process with
\[
E[Z(s) Z(t)] = \lim_{n \to \infty} n a_n^{-2} \mathrm{Cov}\bigl( f(X, s) I_{|f(X, s)| \le a_n}, \, f(X, t) I_{|f(X, t)| \le a_n} \bigr),
\]
for each $s, t \in T$.
When $a_n = n^{1/2}$, Alexander (1987b) obtained necessary and sufficient conditions for the CLT of empirical processes indexed by VC classes.
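As a numerical illustration of the Gaussian regime (a sketch added here, not part of the original text; the choices $f(x, t) = I_{x \le t}$, $X$ uniform on $[0, 1]$ and $a_n = n^{1/2}$ are ours), the limiting covariance in Corollary 2.4 reduces to $\mathrm{Cov}(I_{X \le s}, I_{X \le t}) = \min(s, t) - st$, which can be compared with an empirical covariance across independent replications of $Z_n$:

```python
import numpy as np

rng = np.random.default_rng(2)

def Z_n(n, t_grid):
    """One replication of Z_n(t) = n**(-1/2) * sum_j (1{X_j <= t} - t), X_j ~ U(0, 1)."""
    X = rng.uniform(size=n)
    return ((X[:, None] <= t_grid[None, :]) - t_grid).sum(axis=0) / np.sqrt(n)

n, reps = 2000, 4000
s, t = 0.3, 0.7
samples = np.array([Z_n(n, np.array([s, t])) for _ in range(reps)])
emp_cov = np.cov(samples.T)[0, 1]
print(emp_cov, min(s, t) - s * t)   # both should be close to 0.3 - 0.21 = 0.09
```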
Next, we consider the weak convergence of $\{Z_n(t) : t \in T\}$ to an infinitely divisible process without Gaussian part.
Theorem 2.5. With the notation corresponding to the processes in (1.1), let $b > 0$ and suppose that:
(i) The finite dimensional distributions of $\{Z_n(t) : t \in T\}$ converge.
(ii) For each $\eta > 0$,
\[
\max_{1 \le j \le k_n} \sup_{t \in T} \Pr\{ |f_{n,j}(X_{n,j}, t)| \ge \eta \} \to 0.
\]
(iii) For each $\eta > 0$, there exists a finite partition $\pi$ of $T$ such that
\[
\limsup_{n \to \infty} \sum_{j=1}^{k_n} \Pr{}^{*}\{ \sup_{t \in T} |f_{n,j}(X_{n,j}, t) - f_{n,j}(X_{n,j}, \pi(t))| \ge \eta \} \le \eta.
\]
(iv) The triangular array of functions $\{f_{n,j}(\cdot, t) : 1 \le j \le k_n, \, 1 \le n, \, t \in T\}$ is manageable with respect to the envelope functions $\{F_{n,j}(\cdot) : 1 \le j \le k_n, \, 1 \le n\}$.
(v)
\[
\lim_{\tau \to 0} \limsup_{n \to \infty} E\Bigl[ \sum_{j=1}^{k_n} F_{n,j}^2(X_{n,j}) I_{F_{n,j}(X_{n,j}) \le \tau} \Bigr] = 0.
\]
(vi) For each $\eta > 0$, there exists a finite partition $\pi$ of $T$ such that
\[
\limsup_{n \to \infty} \sup_{t \in T} | E[S_n(t, b) - S_n(\pi(t), b)] - c_n(t) + c_n(\pi(t)) | \le \eta.
\]
Then,
\[
\{Z_n(t) : t \in T\} \; \stackrel{w}{\rightarrow} \; \{Z(t) : t \in T\}.
\]
Proof. By Theorem 2.9 in Arcones (1998), we have to prove that
\[
\lim_{\tau \to 0} \limsup_{n \to \infty} E^{*}\bigl[ \sup_{t \in T} |S_n(t, \tau) - E[S_n(t, \tau)]| \bigr] = 0.
\]
By (1.6) and conditions (iv) and (v),
\begin{align*}
E^{*}\bigl[ \sup_{t \in T} |S_n(t, \tau) - E[S_n(t, \tau)]| \bigr]
& \le 2 E\Bigl[ \sup_{t \in T} \Bigl| \sum_{j=1}^{k_n} \sigma_j f_{n,j}(X_{n,j}, t) I_{F_{n,j}(X_{n,j}) \le \tau} \Bigr| \Bigr] \\
& \le 2 E\Bigl[ \Bigl| \sum_{j=1}^{k_n} \sigma_j f_{n,j}(X_{n,j}, t_0) I_{F_{n,j}(X_{n,j}) \le \tau} \Bigr| \Bigr]
+ 18 \int_0^1 (\log M(u))^{1/2} \, du \; E\Bigl[ \Bigl( \sum_{j=1}^{k_n} F_{n,j}^2(X_{n,j}) I_{F_{n,j}(X_{n,j}) \le \tau} \Bigr)^{1/2} \Bigr] \\
& \le \Bigl( 2 + 18 \int_0^1 (\log M(u))^{1/2} \, du \Bigr) \Bigl( E\Bigl[ \sum_{j=1}^{k_n} F_{n,j}^2(X_{n,j}) I_{F_{n,j}(X_{n,j}) \le \tau} \Bigr] \Bigr)^{1/2} \to 0,
\end{align*}
where $t_0 \in T$. So, the claim follows. $\Box$
As to the case of stable limits, we have the following:
Corollary 2.6. With the notation corresponding to the processes in (1.2), let $1 < \alpha < 2$ and let $b > 0$. Suppose that:
(i) $a_n \nearrow \infty$ and $a_n$ is regularly varying of order $\alpha^{-1}$.
(ii) For each $\lambda_1, \ldots, \lambda_m \in \mathbb{R}$ and each $t_1, \ldots, t_m \in T$, there exists a finite constant $N(\lambda_1, \ldots, \lambda_m; t_1, \ldots, t_m)$ such that
\[
\lim_{n \to \infty} n \Pr\Bigl\{ \sum_{l=1}^{m} \lambda_l f(X, t_l) \ge u a_n \Bigr\} = \alpha^{-1} u^{-\alpha} N(\lambda_1, \ldots, \lambda_m; t_1, \ldots, t_m),
\]
for each $u > 0$.
(iii) For each $\eta > 0$, there exists a finite partition $\pi$ of $T$ such that
\[
\limsup_{n \to \infty} n \Pr\{ \sup_{t \in T} |f(X, t) - f(X, \pi(t))| \ge \eta a_n \} \le \eta.
\]
(iv) The triangular array of functions $\{(f(x_1, t), \ldots, f(x_n, t)) : 1 \le n, \, t \in T\}$ is manageable with respect to the envelope functions $\{(F(x_1), \ldots, F(x_n)) : 1 \le n\}$.
(v) $\sup_{n \ge 1} n \Pr\{ F(X) \ge b a_n \} < \infty$.
Then, the sequence of stochastic processes
\[
\Bigl\{ Z_n(t) := a_n^{-1} \sum_{j=1}^{n} (f(X_j, t) - E[f(X, t)]) : t \in T \Bigr\}, \quad n \ge 1,
\]
converges weakly.
Proof. We apply Theorem 2.5. It is easy to see that conditions (i)-(iv) in this theorem are satisfied. By Lemma 2.7 in Arcones (1998),
\[
\lim_{\tau \to 0} \limsup_{n \to \infty} n a_n^{-2} E[F^2(X) I_{F(X) \le \tau a_n}] = 0
\]
and condition (v) follows. Condition (vi) in Theorem 2.5 follows similarly. $\Box$
Observe that (ii) and (iii) are necessary conditions for the weak convergence of $\{Z_n(t) : t \in T\}$. Also note that if $F(x) = \sup_{t \in T} |f(x, t)|$, then (v) is implied by (ii) and (iii). The last corollary is related to the work in Romo (1993). Among several differences, Romo (1993) only considered the case $a_n = n^{1/\alpha}$. Observe that it is not clear from the work in Romo (1993) when the sequence $\{n \mathcal{L}(n^{-1/\alpha} X_1) I_{\|x\|_{\mathcal{F}} \ge \epsilon}\}_{n=1}^{\infty}$ is tight (see Theorem 2.1 in the cited reference). Instead, Corollary 2.6 has conditions ready to use. The proof of the following corollary is similar to that of the last corollary and it is omitted.
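As a rough numerical illustration of the stable regime in Corollary 2.6 (a sketch added here, not part of the paper; the Pareto tail with index $\alpha = 1.5$, the single function $f(x, t) = x$ and the grid of sample sizes are our choices), one can check that $a_n^{-1} \sum_{j=1}^{n} (X_j - E[X])$ with $a_n = n^{1/\alpha}$ has a non-degenerate, heavy-tailed distribution whose quantiles stabilize as $n$ grows, in contrast with what a $n^{1/2}$ normalization would do:

```python
import numpy as np

rng = np.random.default_rng(3)
alpha = 1.5                      # tail index, 1 < alpha < 2
EX = alpha / (alpha - 1.0)       # mean of the Pareto(alpha) distribution on [1, infinity)

def normalized_sum(n, reps):
    """Replications of a_n**(-1) * sum_{j<=n} (X_j - E X), with a_n = n**(1/alpha)."""
    X = (1.0 - rng.uniform(size=(reps, n))) ** (-1.0 / alpha)   # Pareto(alpha) by inversion
    return (X - EX).sum(axis=1) / n ** (1.0 / alpha)

for n in (10**2, 10**3, 10**4):
    Z = normalized_sum(n, 1000)
    # upper quantiles settle down (stable limit); under sqrt(n) scaling they would blow up
    print(n, np.round(np.quantile(Z, [0.5, 0.9, 0.99]), 2))
```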
Corollary 2.7. Let $b > 0$. Under the notation corresponding to the processes in (1.2), suppose that:
(i) $a_n \nearrow \infty$ and $a_n$ is regularly varying of order $1$.
(ii) For each $\lambda_1, \ldots, \lambda_m \in \mathbb{R}$ and each $t_1, \ldots, t_m \in T$, there exists a finite constant $N(\lambda_1, \ldots, \lambda_m; t_1, \ldots, t_m)$ such that
\[
\lim_{n \to \infty} n \Pr\Bigl\{ \sum_{l=1}^{m} \lambda_l f(X, t_l) \ge u a_n \Bigr\} = u^{-1} N(\lambda_1, \ldots, \lambda_m; t_1, \ldots, t_m),
\]
for each $u > 0$.
(iii) For each $\lambda_1, \ldots, \lambda_m \in \mathbb{R}$ and each $t_1, \ldots, t_m \in T$, the following limit exists:
\[
\lim_{n \to \infty} \Bigl( n a_n^{-1} E\Bigl[ \sum_{l=1}^{m} \lambda_l f(X, t_l) I_{|\sum_{l=1}^{m} \lambda_l f(X, t_l)| \le b a_n} \Bigr] - \sum_{l=1}^{m} \lambda_l c_n(t_l) \Bigr).
\]
(iv) For each $\eta > 0$, there exists a finite partition $\pi$ of $T$ such that
\[
\limsup_{n \to \infty} n \Pr\{ \sup_{t \in T} |f(X, t) - f(X, \pi(t))| \ge \eta a_n \} \le \eta.
\]
(v) The triangular array of functions $\{(f(x_1, t), \ldots, f(x_n, t)) : 1 \le n, \, t \in T\}$ is manageable with respect to the envelope functions $\{(F(x_1), \ldots, F(x_n)) : 1 \le n\}$.
(vi) $\sup_{n \ge 1} n \Pr\{ F(X) \ge b a_n \} < \infty$.
(vii) For each $\eta > 0$, there exists a finite partition $\pi$ of $T$ such that
\[
\limsup_{n \to \infty} \sup_{t \in T} | n a_n^{-1} E[(f(X, t) - f(X, \pi(t))) I_{F(X) \le a_n}] - c_n(t) + c_n(\pi(t)) | \le \eta.
\]
Then, the sequence of stochastic processes
\[
\Bigl\{ \Bigl( a_n^{-1} \sum_{j=1}^{n} f(X_j, t) \Bigr) - c_n(t) : t \in T \Bigr\}, \quad n \ge 1,
\]
converges weakly.
3. An application to linear regression. In this section, we give an application of Theorem 2.1 to linear regression. We consider the simple linear regression model without a constant term; that is, we assume that $Y_{n,j} = z_{n,j} \beta_0 + U_j$, $1 \le j \le n$, where $\{U_j\}_{j=1}^{\infty}$ is a sequence of i.i.d.r.v.'s, $\{z_{n,j} : 1 \le j \le n\}$ are real numbers and $\beta_0 \in \mathbb{R}$ is a parameter to be estimated. As is well known, this model represents a linear relation between the variables $y$ and $z$, where $U_j$ is an error term. $z_{n,j}$ is called the regressor or predictor variable, and usually it can be chosen arbitrarily. $Y_{n,j}$ is called the response variable. The problem is to estimate $\beta_0$ from the data $(z_{n,1}, Y_{n,1}), \ldots, (z_{n,n}, Y_{n,n})$.
The usual estimator of $\beta_0$ is the least squares (LS) estimator (see for example Draper and Smith, 1981). The problem with the least squares estimator is that it is not robust. A common alternative to the LS estimator is the least absolute deviations (LAD) estimator. The LAD estimator $\hat{\beta}_n$ is defined by
\[
\sum_{i=1}^{n} |Y_{n,i} - z_{n,i} \hat{\beta}_n| = \inf_{\beta \in \mathbb{R}} \sum_{i=1}^{n} |Y_{n,i} - z_{n,i} \beta|.
\]
A nice discussion of these estimators is in Portnoy and Koenker (1997).
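For the no-intercept model the LAD criterion is piecewise linear and convex in $\beta$, so $\hat{\beta}_n$ can be computed exactly; in fact it is a weighted median of the ratios $Y_{n,i}/z_{n,i}$ with weights $|z_{n,i}|$. The following Python sketch (added here for illustration; the simulated design and error distribution are arbitrary choices, not the ones analyzed below) computes it that way:

```python
import numpy as np

rng = np.random.default_rng(4)

def lad_no_intercept(z, Y):
    """LAD estimate for Y = z*beta + U: a weighted median of Y/z with weights |z|.

    The criterion sum_i |Y_i - z_i*beta| is convex and piecewise linear in beta,
    with kinks at the ratios Y_i / z_i, so some kink attains the minimum."""
    z, Y = np.asarray(z, float), np.asarray(Y, float)
    keep = z != 0                         # terms with z_i = 0 do not depend on beta
    ratios, w = Y[keep] / z[keep], np.abs(z[keep])
    order = np.argsort(ratios)
    ratios, w = ratios[order], w[order]
    cum = np.cumsum(w)
    return ratios[np.searchsorted(cum, 0.5 * cum[-1])]   # first ratio past half the weight

beta0, n = 2.0, 500
z = rng.normal(size=n)
Y = z * beta0 + rng.standard_t(df=3, size=n)             # heavy-tailed errors
print(lad_no_intercept(z, Y))                            # close to 2.0
brute = min(np.linspace(1.0, 3.0, 4001), key=lambda b: np.abs(Y - z * b).sum())
print(brute)                                             # brute-force check on a grid
```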
In this section, we obtain the asymptotic distribution of the LAD estimator for a particular choice of the regressor variables $z_{n,1}, \ldots, z_{n,n}$.
We will need the following lemma:
Lemma 3.1. Let $\Theta$ be a Borel subset of $\mathbb{R}^d$. Let $\{G_n(\theta) : \theta \in \Theta\}$ be a sequence of stochastic processes. Let $\{\hat{\theta}_n\}$ be a sequence of $\mathbb{R}^d$-valued random variables. Suppose that:
(i) $G_n(\hat{\theta}_n) \le \inf_{\theta \in \Theta} G_n(\theta) + o_P(1)$.
(ii) $\hat{\theta}_n = O_P(1)$.
(iii) There exists a stochastic process $\{G(\theta) : \theta \in \mathbb{R}^d\}$ such that for each $M < \infty$,
\[
\{G_n(\theta) : |\theta| \le M\}
\]
converges weakly to $\{G(\theta) : |\theta| \le M\}$.
(iv) With probability one, the stochastic process $\{G(\theta) : \theta \in \mathbb{R}^d\}$ has a unique minimum at $\tilde{\theta}$; and for each $\eta > 0$ and each $M < \infty$ with $|\tilde{\theta}| \le M$,
\[
\inf_{|\theta| \le M, \; |\theta - \tilde{\theta}| > \eta} G(\theta) > G(\tilde{\theta}).
\]
Then, $\hat{\theta}_n \stackrel{d}{\rightarrow} \tilde{\theta}$.
The proof of the previous lemma is omitted. Similar results have been used by many authors.
Theorem 3.2. Suppose that the following conditions are satisfied:
(i) $F_U(0) = 2^{-1}$, $F_U(u)$ is differentiable at $u = 0$ and $F_U'(0) > 0$, where $F_U(u) = \Pr\{U \le u\}$.
(ii) $E[|U|] < \infty$.
(iii) For each $j \ge 1$, $\lim_{n \to \infty} y_{n,j} =: y_j$ exists, where $y_{n,j} = a_n^{-1} z_{n,j}$ and $a_n^2 = \sum_{j=1}^{n} z_{n,j}^2$.
(iv) $\lim_{m \to \infty} \limsup_{n \to \infty} \max_{m \le j \le n} |y_{n,j}| = 0$.
(v) $\sum_{j=1}^{\infty} |y_j| < \infty$.
Then, $a_n(\hat{\beta}_n - \beta_0)$ converges in distribution to $\tilde{\theta}$, where $\tilde{\theta}$ is the value of $\theta$ that minimizes
\[
G(\theta) := \sigma g \theta + \sigma^2 F_U'(0) \theta^2 + \sum_{j=1}^{\infty} (|U_j - y_j \theta| - |U_j|),
\]
where $\sigma^2 = 1 - \sum_{j=1}^{\infty} y_j^2$ and $g$ is a standard normal r.v. independent of the sequence $\{U_j\}$.
Proof. We apply Lemma 3.1 with $G_n(\theta) = \sum_{j=1}^{n} (|U_j - y_{n,j}\theta| - |U_j|)$ and $\hat{\theta}_n = a_n(\hat{\beta}_n - \beta_0)$. Observe that since $\hat{\beta}_n$ is the value that minimizes $\sum_{j=1}^{n} (|Y_{n,j} - z_{n,j}\beta| - |U_j|) = \sum_{j=1}^{n} (|U_j - z_{n,j}(\beta - \beta_0)| - |U_j|)$, $a_n(\hat{\beta}_n - \beta_0)$ is the value that minimizes $\sum_{j=1}^{n} (|U_j - y_{n,j}\theta| - |U_j|)$.
First, we prove, by using Theorem 2.1, that for each $M < \infty$, $\{G_n(\theta) : |\theta| \le M\}$ converges weakly to $\{G(\theta) : |\theta| \le M\}$.
The subset of $\mathbb{R}^n$, $\{(U_1 - y_{n,1}\theta, \ldots, U_n - y_{n,n}\theta) : \theta \in \mathbb{R}\}$, lies in a translate of a linear space of dimension one. So, it has pseudodimension 1 (in the sense of Pollard, 1990). Since $|U_j - y_{n,j}\theta| = \max(U_j - y_{n,j}\theta, 0) + \max(-U_j + y_{n,j}\theta, 0)$ and these operations maintain the pseudodimension bounded, we conclude the manageability of the triangular array $\{(|U_1 - y_{n,1}\theta|, \ldots, |U_n - y_{n,n}\theta|) : \theta \in \mathbb{R}\}$.
To prove convergence of the finite dimensional distributions, we need to prove that for each $\lambda_1, \ldots, \lambda_p, \theta_1, \ldots, \theta_p \in \mathbb{R}$,
\begin{align*}
(3.1)\qquad & \sum_{j=1}^{n} \log E\Bigl[ \exp\Bigl( it \sum_{k=1}^{p} \lambda_k (|U_j - y_{n,j}\theta_k| - |U_j|) \Bigr) \Bigr] \\
& \to \sum_{j=1}^{\infty} \log E\Bigl[ \exp\Bigl( it \sum_{k=1}^{p} \lambda_k (|U_j - y_j\theta_k| - |U_j|) \Bigr) \Bigr]
+ it \sigma^2 F_U'(0) \sum_{k=1}^{p} \lambda_k \theta_k^2 - 2^{-1} t^2 \Bigl( \sum_{k=1}^{p} \lambda_k \theta_k \Bigr)^2 \sigma^2.
\end{align*}
We have that
\begin{align*}
\sum_{j=1}^{n} \log E\Bigl[ \exp\Bigl( it \sum_{k=1}^{p} \lambda_k (|U_j - y_{n,j}\theta_k| - |U_j|) \Bigr) \Bigr]
& = \sum_{j=1}^{m} \log E\Bigl[ \exp\Bigl( it \sum_{k=1}^{p} \lambda_k (|U_j - y_{n,j}\theta_k| - |U_j|) \Bigr) \Bigr] \\
& \quad + \sum_{j=m+1}^{n} \log E\Bigl[ \exp\Bigl( it \sum_{k=1}^{p} \lambda_k (|U_j - y_{n,j}\theta_k| - |U_j|) \Bigr) \Bigr].
\end{align*}
By condition (iii),
\[
\sum_{j=1}^{m} \log E\Bigl[ \exp\Bigl( it \sum_{k=1}^{p} \lambda_k (|U_j - y_{n,j}\theta_k| - |U_j|) \Bigr) \Bigr]
\to \sum_{j=1}^{m} \log E\Bigl[ \exp\Bigl( it \sum_{k=1}^{p} \lambda_k (|U_j - y_j\theta_k| - |U_j|) \Bigr) \Bigr],
\]
which for $m$ large enough is approximately equal to
\[
\sum_{j=1}^{\infty} \log E\Bigl[ \exp\Bigl( it \sum_{k=1}^{p} \lambda_k (|U_j - y_j\theta_k| - |U_j|) \Bigr) \Bigr].
\]
It is easy to see that
\[
(3.2)\qquad E[|U + t| - |U|] = t^2 F_U'(0) + o(t^2)
\]
and
\[
(3.3)\qquad E[(|U + t| - |U|)^2] = t^2 + o(t^2),
\]
as $t \to 0$.
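One way to verify (3.2) (a sketch added here, not part of the original text) is to integrate the elementary identity $|u + t| - |u| = t + 2\int_0^{-t} I_{u < s}\,ds$, valid for all real $u$ and $t$, which can be checked case by case. For $t > 0$, say, this gives
\[
E[|U + t| - |U|] = t - 2\int_{-t}^{0} F_U(s)\,ds = t - 2\int_{-t}^{0} \bigl( F_U(0) + F_U'(0)s + o(s) \bigr)\,ds
= t\bigl(1 - 2F_U(0)\bigr) + F_U'(0)t^2 + o(t^2),
\]
which equals $t^2 F_U'(0) + o(t^2)$ because $F_U(0) = 2^{-1}$; the case $t < 0$ is analogous. For (3.3), note that $(|U + t| - |U|)^2 = t^2$ whenever $|U| \ge |t|$ and is at most $t^2$ otherwise (since $||U + t| - |U|| \le |t|$), so $E[(|U + t| - |U|)^2] = t^2 + O(t^2 \Pr\{|U| < |t|\}) = t^2 + o(t^2)$.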
Now, using (3.2) and (3.3),
\begin{align*}
\sum_{j=m+1}^{n} \log E\Bigl[ \exp\Bigl( it \sum_{k=1}^{p} \lambda_k (|U_j - y_{n,j}\theta_k| - |U_j|) \Bigr) \Bigr]
& \approx \sum_{j=m+1}^{n} E\Bigl[ \exp\Bigl( it \sum_{k=1}^{p} \lambda_k (|U_j - y_{n,j}\theta_k| - |U_j|) \Bigr) - 1 \Bigr] \\
& \approx \sum_{j=m+1}^{n} E\Bigl[ it \sum_{k=1}^{p} \lambda_k (|U_j - y_{n,j}\theta_k| - |U_j|)
+ 2^{-1} \Bigl( it \sum_{k=1}^{p} \lambda_k (|U_j - y_{n,j}\theta_k| - |U_j|) \Bigr)^2 \Bigr] \\
& \to it \sigma^2 F_U'(0) \sum_{k=1}^{p} \lambda_k \theta_k^2 - 2^{-1} t^2 \Bigl( \sum_{k=1}^{p} \lambda_k \theta_k \Bigr)^2 \sigma^2.
\end{align*}
The checking of the rest of the conditions in Theorem 2.1 is trivial.
Let $\tau_n = |\sum_{j=1}^{n} y_{n,j} \operatorname{sign}(U_j)| + 3 \sum_{j=1}^{\infty} |y_j|$ and let $\tau_0 = |\sigma g + \sum_{j=1}^{\infty} y_j \operatorname{sign}(U_j)| + 3 \sum_{j=1}^{\infty} |y_j|$. It is easy to see that $\tau_n$ converges in distribution to $\tau_0$ and that this convergence holds jointly with the weak convergence of $G_n$ to $G$. Hence, $G_n(\sigma^{-2}(F_U'(0))^{-1}\tau_n)$ converges in distribution to $G(\sigma^{-2}(F_U'(0))^{-1}\tau_0)$. We have that
\[
\sigma^2 F_U'(0) \, G\bigl( \sigma^{-2}(F_U'(0))^{-1}\tau_0 \bigr) \ge -|\sigma g| \tau_0 + \tau_0^2 - \tau_0 \sum_{j=1}^{\infty} |y_j|
\ge \Bigl( |\sigma g| + 2 \sum_{j=1}^{\infty} |y_j| \Bigr) \sum_{j=1}^{\infty} |y_j| .
\]
We also have that $G_n(-\sigma^{-2}(F_U'(0))^{-1}\tau_n)$ converges in distribution to a r.v. which can be bounded from below by $\sigma^{-2}(F_U'(0))^{-1}(|\sigma g| + 2\sum_{j=1}^{\infty}|y_j|) \sum_{j=1}^{\infty}|y_j|$. Now, $G_n(0) = 0$, the fact that $G_n(\sigma^{-2}(F_U'(0))^{-1}\tau_n)$ and $G_n(-\sigma^{-2}(F_U'(0))^{-1}\tau_n)$ converge to positive r.v.'s, and the convexity of $G_n$ imply that $|\hat{\theta}_n| \le \sigma^{-2}(F_U'(0))^{-1}\tau_n$ for $n$ large. Hence, $|\hat{\theta}_n| = O_P(1)$.
We have that
\[
G(\theta) \ge -|\sigma g| |\theta| + \sigma^2 F_U'(0) \theta^2 - |\theta| \sum_{j=1}^{\infty} |y_j| .
\]
So, $\lim_{|\theta| \to \infty} G(\theta) = \infty$. Hence, in order to check condition (iv) in Lemma 3.1, it suffices to prove that for each $a, b \in \mathbb{Q}$, $a < b$,
\[
P\{ G'(\theta) = 0 \text{ for each } \theta \in (a, b) \} = 0.
\]
Now, if $G'(\theta) = 0$ for each $\theta \in (a, b)$, then $y_j^{-1} U_j \in (a, b)$ for infinitely many $j$'s. By the Borel-Cantelli lemma,
\[
P\{ y_j^{-1} U_j \in (a, b) \text{ for infinitely many } j\text{'s} \} = 0.
\]
Therefore, Lemma 3.1 applies. $\Box$
There are possible choices of $\{z_{n,j}\}$ satisfying the conditions in Theorem 3.2. For example, if $n$ is even, let $z_{n,j} = 2^{n-j}$ for $1 \le j \le 2^{-1}n$, and let $z_{n,j} = n^{-1/2} 2^{n}$ for $2^{-1}n + 1 \le j \le n$. If $n$ is odd, let $z_{n,j} = 2^{n-j}$ for $1 \le j \le 2^{-1}(n+1)$, and let $z_{n,j} = (n+1)^{-1/2} 2^{n}$ for $2^{-1}(n+3) \le j \le n$. Then, if $n$ is even, $a_n^2 = (5/3) 4^{n} - 3^{-1} 2^{n}$; and if $n$ is odd, $a_n^2 = (5/3) 4^{n} - 3^{-1} 2^{n-1}$; $y_j = (3/5)^{1/2} 2^{-j}$ and $\sigma^2 = 1/5$.
Acknowledgement. I would like to thank Professor Klaus Ziegler for pointing out several typos
and for general comments which improved the presentation of the manuscript.
References
[1] Alexander, K. S. (1987a). Central limit theorems for stochastic processes under random entropy conditions. Probab. Theor. Rel. Fields 75 351-378.
[2] Alexander, K. S. (1987b). The central limit theorem for empirical processes on Vapnik-Červonenkis classes. Ann. Probab. 15 178-203.
[3] Arcones, M. A. (1994). Distributional convergence of M-estimators under unusual rates. Statist. Probab. Lett. 21 271-280.
[4] Arcones, M. A. (1996). M-estimators converging to a stable limit. Preprint.
[5] Arcones, M. A. (1998). Weak convergence of the row sums of a triangular array of empirical processes. High Dimensional Probability. Progress in Probability 43 1-25. Birkhäuser-Verlag, Basel.
[6] Arcones, M. A., Gaenssler, P. and Ziegler, K. (1992). Partial-sum processes with random locations and indexed by Vapnik-Červonenkis classes of sets in arbitrary sample spaces. Probability in Banach Spaces, 8. 379-389. Birkhäuser, Boston.
[7] Draper, N. R. and Smith, H. (1981). Applied Regression Analysis. Wiley, New York.
[8] Dudley, R. M. (1978). Central limit theorems for empirical measures. Ann. Probab. 6 899-929.
[9] Dudley, R. M. (1984). A course on empirical processes. Lect. Notes in Math. 1097 1-142. Springer-Verlag, New York.
[10] Gaenssler, P. (1994). On recent developments in the theory of set-indexed processes. Asymptotic Statistics (Prague, 1993). 87-109. Contributions to Statistics, Physica, Heidelberg.
[11] Gaenssler, P. and Ziegler, K. (1994). A uniform law of large numbers for set-indexed processes with applications to empirical and partial-sum processes. Probability in Banach Spaces, 9 (Sandbjerg, 1993). 385-400. Birkhäuser, Boston, MA.
[12] Giné, E. and Zinn, J. (1984). Some limit theorems for empirical processes. Ann. Probab. 12 929-989.
[13] Giné, E. and Zinn, J. (1986). Lectures on the central limit theorem for empirical processes. Lect. Notes in Math. 1221 50-112. Springer-Verlag, New York.
[14] Gnedenko, B. V. and Kolmogorov, A. N. (1968). Limit Distributions for Sums of Independent Random Variables. Addison-Wesley Publishing Company, Reading, Massachusetts.
[15] Hoffmann-Jørgensen, J. (1991). Stochastic Processes on Polish Spaces. Various Publications Series, 39. Aarhus University, Matematisk Institut, Aarhus, Denmark.
[16] Kim, J. and Pollard, D. (1990). Cube root asymptotics. Ann. Statist. 18 191-219.
[17] Le Cam, L. (1986). Asymptotic Methods in Statistical Decision Theory. Springer-Verlag, New York.
[18] Ledoux, M. and Talagrand, M. (1991). Probability in Banach Spaces. Springer-Verlag, New York.
[19] Marcus, M. B. and Pisier, G. (1981). Random Fourier Series with Applications to Harmonic Analysis. Ann. Math. Studies 101. Princeton University Press, Princeton, New Jersey.
[20] Pollard, D. (1984). Convergence of Stochastic Processes. Springer-Verlag, New York.
[21] Pollard, D. (1990). Empirical Processes: Theory and Applications. NSF-CBMS Regional Conference Series in Probab. and Statist., Vol. 2. Institute of Mathematical Statistics, Hayward, California.
[22] Portnoy, S. and Koenker, R. (1997). The Gaussian hare and the Laplacian tortoise: computability of squared-error versus absolute-error estimators. Statist. Science 12 279-300.
[23] Romo, J. A. (1993). Stable limits for empirical processes on Vapnik-Červonenkis classes of functions. J. Mult. Anal. 45 73-88.
[24] Vapnik, V. N. and Červonenkis, A. Ja. (1971). On the uniform convergence of relative frequencies of events to their probabilities. Theory Probab. Appl. 16 264-280.
[25] Vapnik, V. N. and Červonenkis, A. Ja. (1981). Necessary and sufficient conditions for the convergence of means to their expectation. Theory Probab. Appl. 26 532-553.
[26] van der Vaart, A. W. and Wellner, J. A. (1996). Weak Convergence and Empirical Processes with Applications to Statistics. Springer-Verlag, New York.
[27] Ziegler, K. (1997). Functional central limit theorems for triangular arrays of function-indexed processes under uniformly integrable entropy conditions. J. Mult. Anal. 62 233-272.
