
Section 8: Asymptotic Properties of the MLE

In this part of the course, we will consider the asymptotic properties of the maximum likelihood estimator. In particular, we will study issues of consistency, asymptotic normality, and efficiency. Many of the proofs will be rigorous, in order to display techniques that are useful more generally and in later chapters. We suppose that $X^n = (X_1, \ldots, X_n)$, where the $X_i$'s are i.i.d. with common density $p(x; \theta_0) \in \mathcal{P} = \{p(x; \theta) : \theta \in \Theta\}$. We assume that $\theta_0$ is identified in the sense that if $\theta \neq \theta_0$ and $\theta \in \Theta$, then $p(x; \theta) \neq p(x; \theta_0)$ with respect to the dominating measure $\mu$.

For fixed $\theta$, the joint density of $X^n$ is equal to the product of the individual densities, i.e.,

$$p(x^n; \theta) = \prod_{i=1}^n p(x_i; \theta)$$

As usual, when we think of $p(x^n; \theta)$ as a function of $\theta$ with $x^n$ held fixed, we refer to the resulting function as the likelihood function, $L(\theta; x^n)$. The maximum likelihood estimate for observed $x^n$ is the value $\hat{\theta}(x^n)$ which maximizes $L(\theta; x^n)$. Prior to observation, $x^n$ is unknown, so we consider the maximum likelihood estimator (MLE), $\hat{\theta}(X^n)$, to be the value which maximizes $L(\theta; X^n)$. Equivalently, the MLE can be taken to maximize the standardized log-likelihood,

$$\frac{l(\theta; X^n)}{n} = \frac{\log L(\theta; X^n)}{n} = \frac{1}{n} \sum_{i=1}^n \log p(X_i; \theta) = \frac{1}{n} \sum_{i=1}^n l(\theta; X_i)$$
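To make these definitions concrete, here is a minimal sketch (our own illustration, not from the notes; the $N(\theta, 1)$ location model and all names are assumptions) of computing an MLE by numerically maximizing the standardized log-likelihood:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

# Simulated i.i.d. data from N(theta_0, 1); the normal location model
# is an illustrative assumption, not part of the notes.
rng = np.random.default_rng(0)
theta_0 = 2.0
x = rng.normal(theta_0, 1.0, size=500)

def avg_log_lik(theta):
    # Standardized log-likelihood (1/n) * sum_i log p(x_i; theta).
    return np.mean(norm.logpdf(x, loc=theta, scale=1.0))

# The MLE maximizes the average log-likelihood; minimize its negative.
res = minimize_scalar(lambda t: -avg_log_lik(t), bounds=(-10, 10), method="bounded")
print(res.x, x.mean())  # MLE should be close to the sample mean here
```

For the normal location model the maximizer is exactly the sample mean, which provides a check on the numerical answer.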

We will show that the MLE is often

1. consistent: $\hat{\theta}(X^n) \xrightarrow{P} \theta_0$;

2. asymptotically normal: $\sqrt{n}(\hat{\theta}(X^n) - \theta_0) \xrightarrow{D} \text{Normal R.V.}$;

3. asymptotically efficient, i.e., if we want to estimate $\theta_0$ by any other estimator within a reasonable class, the MLE is the most precise.

To show 1-3, we will have to provide some regularity conditions on the probability model and (for 3) on the class of estimators that will be considered. A small simulation illustrating properties 1 and 2 appears below.
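The sketch below is our own illustration, assuming the $N(\theta_0, 1)$ location model, in which the MLE is the sample mean:

```python
import numpy as np

# Simulation sketch: draw many replicated samples, compute the MLE for
# each (the sample mean in this model), and check properties 1 and 2.
rng = np.random.default_rng(1)
theta_0 = 2.0

for n in (50, 500, 5000):
    mles = rng.normal(theta_0, 1.0, size=(1000, n)).mean(axis=1)
    z = np.sqrt(n) * (mles - theta_0)  # should look ~ N(0, 1) here
    # Consistency: mles concentrate at theta_0 as n grows;
    # asymptotic normality: std(z) stabilizes near 1.
    print(n, mles.mean().round(3), mles.std().round(3), z.std().round(3))
```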

Section 8.1: Consistency

We first want to show that if we have a sample of i.i.d. data from a common distribution which belongs to a probability model, then under some regularity conditions on the form of the density, the sequence of estimators $\{\hat{\theta}(X^n)\}$ will converge in probability to $\theta_0$. So far, we have not discussed the issue of whether a maximum likelihood estimator exists or, if one does, whether it is unique. We will get to this, but first we start with a heuristic proof of consistency.

Heuristic Proof: The MLE is the value that maximizes

$$Q(\theta; X^n) := \frac{1}{n} \sum_{i=1}^n l(\theta; X_i).$$

By the WLLN, we know that

$$Q(\theta; X^n) = \frac{1}{n} \sum_{i=1}^n l(\theta; X_i) \xrightarrow{P} Q_0(\theta) := E_0[l(\theta; X)] = E_0[\log p(X; \theta)] = \int \{\log p(x; \theta)\} \, p(x; \theta_0) \, d\mu(x)$$

We expect that, on average, the log-likelihood will be close to the expected log-likelihood. Therefore, we expect that the maximum likelihood estimator will be close to the maximizer of the expected log-likelihood. We will show that the expected log-likelihood $Q_0(\theta)$ is maximized at $\theta_0$ (i.e., the truth).

Lemma 8.1: If $\theta_0$ is identified and $E_0[|\log p(X; \theta)|] < \infty$ for all $\theta \in \Theta$, then $Q_0(\theta)$ is uniquely maximized at $\theta = \theta_0$.

Proof: By Jensen's inequality, we know that for any strictly convex function $g(\cdot)$, $E[g(Y)] > g(E[Y])$ for non-degenerate $Y$ (identification guarantees non-degeneracy here). Take $g(y) = -\log(y)$. So, for $\theta \neq \theta_0$,

$$E_0\left[-\log\left(\frac{p(X; \theta)}{p(X; \theta_0)}\right)\right] > -\log\left(E_0\left[\frac{p(X; \theta)}{p(X; \theta_0)}\right]\right)$$

Note that

$$E_0\left[\frac{p(X; \theta)}{p(X; \theta_0)}\right] = \int \frac{p(x; \theta)}{p(x; \theta_0)} \, p(x; \theta_0) \, d\mu(x) = \int p(x; \theta) \, d\mu(x) = 1$$

So, $E_0[-\log(p(X; \theta)/p(X; \theta_0))] > 0$, or

$$Q_0(\theta_0) = E_0[\log p(X; \theta_0)] > E_0[\log p(X; \theta)] = Q_0(\theta)$$

This inequality holds for all $\theta \neq \theta_0$.
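The lemma can be checked numerically. Here is a sketch (our own illustration, assuming a $N(\theta, 1)$ model with $\theta_0 = 0$) that evaluates $Q_0(\theta)$ by numerical integration and shows the peak at the truth:

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

# Evaluate Q_0(theta) = E_0[log p(X; theta)] for the N(theta, 1) model
# with theta_0 = 0, by integrating against the true density.
theta_0 = 0.0

def Q0(theta):
    integrand = lambda x: norm.logpdf(x, loc=theta) * norm.pdf(x, loc=theta_0)
    return quad(integrand, -10, 10)[0]

for theta in (-1.0, -0.5, 0.0, 0.5, 1.0):
    print(theta, round(Q0(theta), 4))  # maximized at theta = theta_0 = 0
```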

Under technical conditions for the limit of the maximum to be the maximum of the limit, $\hat{\theta}(X^n)$ should converge in probability to $\theta_0$. Sufficient conditions for the maximum of the limit to be the limit of the maximum are that the convergence is uniform and the parameter space is compact.

The discussion so far only allows for a compact parameter space $\Theta$. In theory, compactness requires that one know bounds on the true parameter value, although this constraint is often ignored in practice. It is possible to drop this assumption if the function $Q(\theta; X^n)$ cannot rise too much as $\theta$ becomes unbounded. We will discuss this later.


Definition (Uniform Convergence in Probability): $Q(\theta; X^n)$ converges uniformly in probability to $Q_0(\theta)$ if

$$\sup_{\theta \in \Theta} |Q(\theta; X^n) - Q_0(\theta)| \xrightarrow{P} 0$$

More precisely, we have that for all $\epsilon > 0$,

$$P_0\left[\sup_{\theta \in \Theta} |Q(\theta; X^n) - Q_0(\theta)| > \epsilon\right] \to 0$$

Why isn't pointwise convergence enough? Uniform convergence guarantees that for almost all realizations, the paths in $\theta$ stay within the $\epsilon$-sleeve around $Q_0(\theta)$. This ensures that the maximizer is close to $\theta_0$. For pointwise convergence, we know that at each $\theta$, most of the realizations are in the $\epsilon$-sleeve, but there is no guarantee that for another value of $\theta$ the same set of realizations are in the sleeve. Thus, the maximizer need not be near $\theta_0$. The sketch below constructs a concrete sequence of this kind.
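The following deterministic sketch (our own construction, not from the notes) exhibits functions that converge pointwise to $Q_0(\theta) = -(\theta - 1)^2$ on $[0, 2]$, yet whose maximizers converge to $2$ rather than to the maximizer $1$ of the limit:

```python
import numpy as np

# Q_n equals Q_0 plus a narrow spike of height 2 centered at
# theta_n = 2 - 1/n. For each fixed theta the spike eventually moves
# away, so Q_n(theta) -> Q_0(theta) pointwise; but sup|Q_n - Q_0| = 2
# for every n, and argmax Q_n = theta_n -> 2, while argmax Q_0 = 1.
theta = np.linspace(0.0, 2.0, 20001)
Q0 = -(theta - 1.0) ** 2

for n in (10, 100, 1000):
    theta_n = 2.0 - 1.0 / n
    spike = 2.0 * np.maximum(0.0, 1.0 - 2 * n * np.abs(theta - theta_n))
    Qn = Q0 + spike
    print(n, theta[np.argmax(Qn)].round(4), np.max(np.abs(Qn - Q0)).round(3))
```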


Theorem 8.2: Suppose that $Q(\theta; X^n)$ is continuous in $\theta$ and there exists a function $Q_0(\theta)$ such that

1. $Q_0(\theta)$ is uniquely maximized at $\theta_0$,

2. $\Theta$ is compact,

3. $Q_0(\theta)$ is continuous in $\theta$,

4. $Q(\theta; X^n)$ converges uniformly in probability to $Q_0(\theta)$.

Then $\hat{\theta}(X^n)$, defined for each $X^n = x^n$ as the value of $\theta$ which maximizes the objective function $Q(\theta; X^n)$, satisfies $\hat{\theta}(X^n) \xrightarrow{P} \theta_0$.


Proof: For a positive $\epsilon$, define the $\epsilon$-neighborhood about $\theta_0$ to be

$$\mathcal{N}(\epsilon) = \{\theta : \|\theta - \theta_0\| < \epsilon\}$$

We want to show that $P_0[\hat{\theta}(X^n) \in \mathcal{N}(\epsilon)] \to 1$ as $n \to \infty$. Since $\mathcal{N}(\epsilon)$ is an open set, we know that $\mathcal{N}(\epsilon)^C$, as a closed subset of $\Theta$, is a compact set (Assumption 2). Since $Q_0(\theta)$ is a continuous function (Assumption 3), $\sup_{\theta \in \mathcal{N}(\epsilon)^C} \{Q_0(\theta)\}$ is achieved for a $\theta^*$ in this compact set. Since $\theta_0$ is the unique maximizer, let $\delta := Q_0(\theta_0) - Q_0(\theta^*) > 0$.


Now for any $\theta \in \Theta$, we distinguish between two cases.

Case 1: $\theta \in \mathcal{N}(\epsilon)^C$. Let $A_n$ be the event that

$$\sup_{\theta \in \mathcal{N}(\epsilon)^C} |Q(\theta; X^n) - Q_0(\theta)| < \delta/2.$$

Then,

$$A_n \Rightarrow Q(\theta; X^n) < Q_0(\theta) + \delta/2 \leq Q_0(\theta^*) + \delta/2 = Q_0(\theta_0) - \delta + \delta/2 = Q_0(\theta_0) - \delta/2$$

for all $\theta \in \mathcal{N}(\epsilon)^C$.


Case 2: $\theta \in \mathcal{N}(\epsilon)$. Let $B_n$ be the event that

$$\sup_{\theta \in \mathcal{N}(\epsilon)} |Q(\theta; X^n) - Q_0(\theta)| < \delta/2.$$

Then,

$$B_n \Rightarrow Q(\theta; X^n) > Q_0(\theta) - \delta/2 \text{ for all } \theta \in \mathcal{N}(\epsilon) \Rightarrow Q(\theta_0; X^n) > Q_0(\theta_0) - \delta/2$$

By comparing the last expressions for each of Cases 1 and 2, we conclude that if both $A_n$ and $B_n$ hold, then $\hat{\theta}(X^n) \in \mathcal{N}(\epsilon)$. But by uniform convergence, $\text{pr}(A_n \cap B_n) \to 1$, so $\text{pr}(\hat{\theta}(X^n) \in \mathcal{N}(\epsilon)) \to 1$.


A key element of the above proof is that $Q(\theta; X^n)$ converges uniformly in probability to $Q_0(\theta)$. This is often difficult to prove. A useful condition is given by the following lemma:

Lemma 8.3: If $X_1, \ldots, X_n$ are i.i.d. $p(x; \theta_0) \in \{p(x; \theta) : \theta \in \Theta\}$, $\Theta$ is compact, $\log p(x; \theta)$ is continuous in $\theta$ for all $\theta \in \Theta$ and all $x \in \mathcal{X}$, and if there exists a function $d(x)$ such that $|\log p(x; \theta)| \leq d(x)$ for all $\theta \in \Theta$ and $x \in \mathcal{X}$, and $E_0[d(X)] < \infty$, then

i. $Q_0(\theta) = E_0[\log p(X; \theta)]$ is continuous in $\theta$;

ii. $\sup_{\theta \in \Theta} |Q(\theta; X^n) - Q_0(\theta)| \xrightarrow{P} 0$.

Example: Suicide seasonality and the von Mises distribution (in class); a small fitting sketch appears below.
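Here is a minimal fitting sketch for the von Mises example (simulated angles stand in for the in-class suicide-seasonality data; the parameter values are assumptions):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import vonmises

# Simulate event angles on the circle (one year mapped to [0, 2*pi)).
rng = np.random.default_rng(2)
mu_true, kappa_true = np.pi / 2, 1.5          # assumed peak and concentration
angles = vonmises.rvs(kappa_true, loc=mu_true, size=365, random_state=rng)

def neg_avg_loglik(par):
    mu, log_kappa = par                        # log-kappa keeps kappa > 0
    return -np.mean(vonmises.logpdf(angles, np.exp(log_kappa), loc=mu))

res = minimize(neg_avg_loglik, x0=[0.0, 0.0], method="Nelder-Mead")
print(res.x[0], np.exp(res.x[1]))              # estimates of mu and kappa
```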


Proof: We first prove the continuity of $Q_0(\theta)$. For any $\theta \in \Theta$, choose a sequence $\theta_k$ which converges to $\theta$. By the continuity of $\log p(x; \theta)$, we know that $\log p(x; \theta_k) \to \log p(x; \theta)$. Since $|\log p(x; \theta_k)| \leq d(x)$, the dominated convergence theorem tells us that $Q_0(\theta_k) = E_0[\log p(X; \theta_k)] \to E_0[\log p(X; \theta)] = Q_0(\theta)$. This implies that $Q_0(\theta)$ is continuous.

Next, we work to establish the uniform convergence in probability. We need to show that for any $\epsilon, \eta > 0$ there exists $N(\epsilon, \eta)$ such that for all $n > N(\epsilon, \eta)$,

$$P_0\left[\sup_{\theta \in \Theta} |Q(\theta; X^n) - Q_0(\theta)| > \epsilon\right] < \eta$$


Since $\log p(x; \theta)$ is continuous in $\theta$ and since $\Theta$ is compact, we know that $\log p(x; \theta)$ is uniformly continuous in $\theta$ (see Rudin, page 90). Uniform continuity says that for all $\epsilon > 0$ there exists a $\delta(\epsilon) > 0$ such that $|\log p(x; \theta_1) - \log p(x; \theta_2)| < \epsilon$ for all $\theta_1, \theta_2$ for which $\|\theta_1 - \theta_2\| < \delta(\epsilon)$. This is a property of a function defined on a set of points. In contrast, continuity is defined relative to a particular point. Continuity of a function at a point $\theta^*$ says that for all $\epsilon > 0$, there exists a $\delta(\epsilon, \theta^*)$ such that $|\log p(x; \theta) - \log p(x; \theta^*)| < \epsilon$ for all $\theta$ for which $\|\theta - \theta^*\| < \delta(\epsilon, \theta^*)$. For continuity, $\delta$ depends on $\epsilon$ and $\theta^*$. For uniform continuity, $\delta$ depends only on $\epsilon$. In general, uniform continuity is stronger than continuity; however, they are equivalent on compact sets.


Aside: Uniform Continuity vs. Continuity

Consider $f(x) = 1/x$ for $x \in (0, 1)$. This function is continuous at each $x \in (0, 1)$. However, it is not uniformly continuous. Suppose this function were uniformly continuous. Then, for any $\epsilon > 0$, we can find a $\delta(\epsilon) > 0$ such that $|1/x_1 - 1/x_2| < \epsilon$ for all $x_1, x_2$ such that $|x_1 - x_2| < \delta(\epsilon)$. Given an $\epsilon > 0$, consider the points $x_1$ and $x_2 = x_1 + \delta(\epsilon)/2$. Then,

$$\left|\frac{1}{x_1} - \frac{1}{x_2}\right| = \left|\frac{1}{x_1} - \frac{1}{x_1 + \delta(\epsilon)/2}\right| = \frac{\delta(\epsilon)/2}{x_1 (x_1 + \delta(\epsilon)/2)}$$

Take $x_1$ sufficiently small so that $x_1 < \min\left(\frac{\delta(\epsilon)}{2}, \frac{1}{2\epsilon}\right)$. This implies that

$$\frac{\delta(\epsilon)/2}{x_1 (x_1 + \delta(\epsilon)/2)} > \frac{\delta(\epsilon)/2}{x_1 \, \delta(\epsilon)} = \frac{1}{2 x_1} > \epsilon$$

This is a contradiction.
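A quick numeric check of the same phenomenon (our own illustration): with the gap $|x_1 - x_2|$ held fixed at $\delta/2$, the difference $|1/x_1 - 1/x_2|$ blows up as $x_1 \to 0$, so no single $\delta$ can work for a given $\epsilon$.

```python
# f(x) = 1/x on (0, 1): fix |x1 - x2| = delta / 2 and let x1 shrink.
delta = 0.01
for x1 in (0.1, 0.01, 0.001):
    x2 = x1 + delta / 2
    print(x1, abs(1 / x1 - 1 / x2))  # grows without bound as x1 -> 0
```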


Uniform continuity also implies that

$$\Delta(x, \delta) = \sup_{\{(\theta_1, \theta_2) : \|\theta_1 - \theta_2\| < \delta\}} |\log p(x; \theta_1) - \log p(x; \theta_2)| \to 0 \quad (1)$$

as $\delta \to 0$. By the assumptions of Lemma 8.3, we know that $\Delta(x, \delta) \leq 2 d(x)$ for all $\delta$. By the dominated convergence theorem, we know that $E_0[\Delta(X, \delta)] \to 0$ as $\delta \to 0$.

Now, consider open balls of radius $\delta$ about each $\theta \in \Theta$, i.e., $B(\theta, \delta) = \{\theta' : \|\theta' - \theta\| < \delta\}$. The union of these open balls contains $\Theta$, so this union is an open cover of $\Theta$. Since $\Theta$ is a compact set, we know that there exists a finite subcover, which we denote by $\{B(\theta_j, \delta), j = 1, \ldots, J\}$.


Taking a $\theta \in \Theta$, by the triangle inequality, we know that

$$|Q(\theta; X^n) - Q_0(\theta)| \leq \underbrace{|Q(\theta; X^n) - Q(\theta_j; X^n)|}_{(2)} + \underbrace{|Q(\theta_j; X^n) - Q_0(\theta_j)|}_{(3)} + \underbrace{|Q_0(\theta_j) - Q_0(\theta)|}_{(4)}$$

Choose $j$ so that $\theta \in B(\theta_j, \delta)$. Since $\|\theta - \theta_j\| < \delta$, we know that (2) is equal to

$$\left|\frac{1}{n} \sum_{i=1}^n \{\log p(X_i; \theta) - \log p(X_i; \theta_j)\}\right|$$

and this is less than or equal to

$$\frac{1}{n} \sum_{i=1}^n |\log p(X_i; \theta) - \log p(X_i; \theta_j)| \leq \frac{1}{n} \sum_{i=1}^n \Delta(X_i, \delta)$$


We also know that (4) is less than or equal to

$$\sup_{\{(\theta_1, \theta_2) : \|\theta_1 - \theta_2\| < \delta\}} |Q_0(\theta_1) - Q_0(\theta_2)|$$

Since $Q_0(\theta)$ is uniformly continuous, we know that this bound can be made arbitrarily small by choosing $\delta$ to be small. That is, this bound can be made less than $\epsilon/3$ for a $\delta < \delta(\epsilon/3)$. We also know that (3) is less than or equal to

$$\max_{j = 1, \ldots, J} |Q(\theta_j; X^n) - Q_0(\theta_j)|$$

Putting these results together, we have that for any $\delta < \delta(\epsilon/3)$,

$$\sup_{\theta \in \Theta} |Q(\theta; X^n) - Q_0(\theta)| \leq \frac{1}{n} \sum_{i=1}^n \Delta(X_i, \delta) + \max_{j = 1, \ldots, J} |Q(\theta_j; X^n) - Q_0(\theta_j)| + \epsilon/3$$


So for any $\delta < \delta(\epsilon/3)$, we know that

$$P_0\left[\sup_{\theta \in \Theta} |Q(\theta; X^n) - Q_0(\theta)| > \epsilon\right] \leq P_0\left[\frac{1}{n} \sum_{i=1}^n \Delta(X_i, \delta) + \max_{j=1,\ldots,J} |Q(\theta_j; X^n) - Q_0(\theta_j)| > 2\epsilon/3\right]$$

$$\leq \underbrace{P_0\left[\frac{1}{n} \sum_{i=1}^n \Delta(X_i, \delta) > \epsilon/3\right]}_{(5)} + \underbrace{P_0\left[\max_{j=1,\ldots,J} |Q(\theta_j; X^n) - Q_0(\theta_j)| > \epsilon/3\right]}_{(6)}$$

Now, we can show that we can take $n$ sufficiently large so that (5) and (6) can be made small.


Note that (5) is equal to

$$P_0\left[\frac{1}{n} \sum_{i=1}^n \{\Delta(X_i, \delta) - E_0[\Delta(X, \delta)]\} + E_0[\Delta(X, \delta)] > \epsilon/3\right]$$

We already have demonstrated that $E_0[\Delta(X, \delta)] \to 0$ as $\delta \to 0$. Choose $\delta$ small enough that $E_0[\Delta(X, \delta)] < \epsilon/6$; call this number $\delta_1$. Take $\delta < \min(\delta_1, \delta(\epsilon/3))$. Then (5) is less than or equal to

$$P_0\left[\frac{1}{n} \sum_{i=1}^n \{\Delta(X_i, \delta) - E_0[\Delta(X, \delta)]\} > \epsilon/6\right]$$

By the WLLN, we know that there exists $N_1(\epsilon, \eta)$ so that for all $n > N_1(\epsilon, \eta)$, the above term is less than $\eta/2$. So, (5) is less than $\eta/2$.


For the $\delta$ considered so far, find the finite subcover $\{B(\theta_j, \delta), j = 1, \ldots, J\}$. Now, (6) is equal to

$$P_0\left[\bigcup_{j=1}^J \{|Q(\theta_j; X^n) - Q_0(\theta_j)| > \epsilon/3\}\right] \leq \sum_{j=1}^J P_0\left[|Q(\theta_j; X^n) - Q_0(\theta_j)| > \epsilon/3\right]$$

By the WLLN, we know that for each $j$ and for any $\eta > 0$, there exists $N_{2j}(\epsilon, \eta)$ so that for all $n > N_{2j}(\epsilon, \eta)$,

$$P_0\left[|Q(\theta_j; X^n) - Q_0(\theta_j)| > \epsilon/3\right] \leq \eta/(2J)$$

Let $N_2(\epsilon, \eta) = \max_{j=1,\ldots,J} \{N_{2j}\}$. Then, for all $n > N_2(\epsilon, \eta)$, we know that

$$\sum_{j=1}^J P_0\left[|Q(\theta_j; X^n) - Q_0(\theta_j)| > \epsilon/3\right] < \eta/2$$

This implies that (6) is less than $\eta/2$.


Combining the results for (5) and (6), we have demonstrated that there exists an $N(\epsilon, \eta) = \max(N_1(\epsilon, \eta), N_2(\epsilon, \eta))$ so that for all $n > N(\epsilon, \eta)$,

$$P_0\left[\sup_{\theta \in \Theta} |Q(\theta; X^n) - Q_0(\theta)| > \epsilon\right] < \eta$$

Q.E.D.
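The conclusion of Lemma 8.3(ii) can also be seen empirically. Here is a sketch (our own illustration, assuming a $N(\theta, 1)$ model with $\theta_0 = 0$ and compact $\Theta = [-3, 3]$) showing the sup-distance shrinking as $n$ grows:

```python
import numpy as np
from scipy.stats import norm

# In the N(theta, 1) model with theta_0 = 0,
# Q_0(theta) = -0.5*log(2*pi) - 0.5*(1 + theta^2) in closed form.
rng = np.random.default_rng(3)
theta_grid = np.linspace(-3, 3, 601)
Q0 = -0.5 * np.log(2 * np.pi) - 0.5 * (1 + theta_grid**2)

for n in (100, 1000, 10000):
    x = rng.normal(0.0, 1.0, size=n)
    Qn = norm.logpdf(x[:, None], loc=theta_grid[None, :]).mean(axis=0)
    print(n, np.max(np.abs(Qn - Q0)).round(4))  # sup over the grid shrinks
```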


Other conditions that might be useful to establish uniform convergence in probability are given in the lemmas below. Lemma 8.4 may be useful when the data are not independent.

Lemma 8.4: If $\Theta$ is compact, $Q_0(\theta)$ is continuous in $\theta$, $Q(\theta; X^n) \xrightarrow{P} Q_0(\theta)$ for all $\theta \in \Theta$, and there is an $\alpha > 0$ and $C(X^n)$, which is bounded in probability, such that for all $\theta, \tilde{\theta} \in \Theta$,

$$|Q(\tilde{\theta}; X^n) - Q(\theta; X^n)| \leq C(X^n) \|\tilde{\theta} - \theta\|^{\alpha}$$

then

$$\sup_{\theta \in \Theta} |Q(\theta; X^n) - Q_0(\theta)| \xrightarrow{P} 0$$

Aside: $C(X^n)$ is bounded in probability if for every $\eta > 0$ there exists $N(\eta)$ and $M(\eta) > 0$ such that

$$P_0[|C(X^n)| > M(\eta)] < \eta$$

for all $n > N(\eta)$.


$\Theta$ is Not Compact

Suppose that we are not willing to assume that $\Theta$ is compact. One way around this is to bound the objective function from above, uniformly in parameters that are far away from the truth. For example, suppose that there is a compact set $D$ such that

$$E_0\left[\sup_{\theta \in D^C} \{\log p(X; \theta)\}\right] < Q_0(\theta_0) = E_0[\log p(X; \theta_0)]$$

By the law of large numbers, we know that with probability approaching one,

$$\sup_{\theta \in D^C} \{Q(\theta; X^n)\} \leq \frac{1}{n} \sum_{i=1}^n \sup_{\theta \in D^C} \log p(X_i; \theta) < \frac{1}{n} \sum_{i=1}^n \log p(X_i; \theta_0) = Q(\theta_0; X^n)$$


Therefore, with probability approaching one, we know that $\hat{\theta}(X^n)$ is in the compact set $D$. Then, we can apply the previous theory to show consistency.

The following lemma can be used in cases where the objective function is concave.

Lemma 8.5: If there is a function $Q_0(\theta)$ such that

i. $Q_0(\theta)$ is uniquely maximized at $\theta_0$;

ii. $\theta_0$ is an element of the interior of a convex set $\Theta$ (which does not have to be bounded);

iii. $Q(\theta; x^n)$ is concave in $\theta$ for each $x^n$; and

iv. $Q(\theta; X^n) \xrightarrow{P} Q_0(\theta)$ for all $\theta \in \Theta$,

then $\hat{\theta}(X^n)$ exists with probability approaching one and $\hat{\theta}(X^n) \xrightarrow{P} \theta_0$. A concave example appears in the sketch below.
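Here is a small sketch of a setting covered by Lemma 8.5 (our own example, not from the notes): in the Exponential($\lambda$) model, the average log-likelihood $\log \lambda - \lambda \bar{x}$ is concave in $\lambda$ on the convex set $(0, \infty)$:

```python
import numpy as np

# Exponential(lambda) model: l(lambda; x) = log(lambda) - lambda * x,
# so the average log-likelihood is log(lambda) - lambda * mean(x),
# a concave function whose maximizer is the MLE 1 / mean(x).
rng = np.random.default_rng(4)
lam_0 = 2.0
x = rng.exponential(1.0 / lam_0, size=1000)   # numpy uses scale = 1/lambda

lam_grid = np.linspace(0.1, 6.0, 60)
Q = np.log(lam_grid) - lam_grid * x.mean()    # average log-likelihood
print(np.all(np.diff(Q, 2) < 0))              # concavity: second diffs < 0
print(lam_grid[np.argmax(Q)], 1.0 / x.mean()) # grid argmax vs. exact MLE
```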
