
1.3 Model estimation: background


Let
$$l(\beta) = \log[f(y, \mu, \phi)] = \log\{f[y, h(x^t\beta), \phi]\},$$
where $\mu = h(x^t\beta)$. Let
$$U(\beta) = \frac{\partial l(\beta)}{\partial \beta} = \begin{pmatrix} \partial l(\beta)/\partial \beta_1 \\ \partial l(\beta)/\partial \beta_2 \\ \vdots \\ \partial l(\beta)/\partial \beta_p \end{pmatrix} = \begin{pmatrix} U_1(\beta) \\ U_2(\beta) \\ \vdots \\ U_p(\beta) \end{pmatrix}$$
and

$$A(\beta) = -\frac{\partial U(\beta)}{\partial \beta^t} = -\frac{\partial^2 l(\beta)}{\partial \beta\,\partial \beta^t} = -\begin{pmatrix} \dfrac{\partial^2 l(\beta)}{\partial \beta_1^2} & \dfrac{\partial^2 l(\beta)}{\partial \beta_1 \partial \beta_2} & \cdots & \dfrac{\partial^2 l(\beta)}{\partial \beta_1 \partial \beta_p} \\ \dfrac{\partial^2 l(\beta)}{\partial \beta_2 \partial \beta_1} & \dfrac{\partial^2 l(\beta)}{\partial \beta_2^2} & \cdots & \dfrac{\partial^2 l(\beta)}{\partial \beta_2 \partial \beta_p} \\ \vdots & \vdots & \ddots & \vdots \\ \dfrac{\partial^2 l(\beta)}{\partial \beta_p \partial \beta_1} & \dfrac{\partial^2 l(\beta)}{\partial \beta_p \partial \beta_2} & \cdots & \dfrac{\partial^2 l(\beta)}{\partial \beta_p^2} \end{pmatrix} = \begin{pmatrix} A_{11}(\beta) & A_{12}(\beta) & \cdots & A_{1p}(\beta) \\ A_{21}(\beta) & A_{22}(\beta) & \cdots & A_{2p}(\beta) \\ \vdots & \vdots & \ddots & \vdots \\ A_{p1}(\beta) & A_{p2}(\beta) & \cdots & A_{pp}(\beta) \end{pmatrix}.$$

If $\hat{\beta}$ is the maximum likelihood estimate (MLE), then
$$U(\hat{\beta}) = 0.$$
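As a concrete illustration (an example added here, assuming a Poisson model with log link; it is not part of the original notes), suppose $y_1, \ldots, y_n$ are independent Poisson counts with means $\mu_i = e^{x_i^t\beta}$. Then
$$l(\beta) = \sum_{i=1}^n \left[ y_i x_i^t \beta - e^{x_i^t \beta} - \log(y_i!) \right], \qquad U(\beta) = \sum_{i=1}^n (y_i - \mu_i)\, x_i, \qquad A(\beta) = \sum_{i=1}^n \mu_i\, x_i x_i^t,$$
so $U(\hat{\beta}) = 0$ says the fitted means reproduce $\sum_i y_i x_i$. Note that $A(\beta)$ does not involve $y$ in this case, so $I(\beta) = E[A(\beta)] = A(\beta)$.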

Further, by the mean value theorem,
$$-U(\beta_0) = U(\hat{\beta}) - U(\beta_0) = -A(\tilde{\beta})(\hat{\beta} - \beta_0),$$
where $\tilde{\beta}$ lies between $\beta_0$ and $\hat{\beta}$. Thus,
$$\hat{\beta} - \beta_0 = A^{-1}(\tilde{\beta})\,U(\beta_0) \quad\Rightarrow\quad \hat{\beta} = \beta_0 + A^{-1}(\tilde{\beta})\,U(\beta_0).$$
Motivated by the last equation, two algorithms can be used to obtain the maximum likelihood estimate $\hat{\beta}$.

Let
$$\beta^{(t)} = \begin{pmatrix} \beta_{t1} \\ \beta_{t2} \\ \vdots \\ \beta_{tp} \end{pmatrix} \quad\text{and}\quad \beta^{(t+1)} = \begin{pmatrix} \beta_{(t+1)1} \\ \beta_{(t+1)2} \\ \vdots \\ \beta_{(t+1)p} \end{pmatrix}$$
be the maximum likelihood estimates at the $t$th and $(t+1)$th iterations, respectively.


1. Newton-Raphson method:
$$\beta^{(t+1)} = \beta^{(t)} + A^{-1}(\beta^{(t)})\,U(\beta^{(t)}) \quad\Longleftrightarrow\quad A(\beta^{(t)})\,\beta^{(t+1)} = A(\beta^{(t)})\,\beta^{(t)} + U(\beta^{(t)}), \qquad t = 0, 1, 2, \ldots$$

2. Fisher's scoring method:
$$\beta^{(t+1)} = \beta^{(t)} + I^{-1}(\beta^{(t)})\,U(\beta^{(t)}) \quad\Longleftrightarrow\quad I(\beta^{(t)})\,\beta^{(t+1)} = I(\beta^{(t)})\,\beta^{(t)} + U(\beta^{(t)}), \qquad t = 0, 1, 2, \ldots$$

where
$$I(\beta) = \begin{pmatrix} I_{11}(\beta) & I_{12}(\beta) & \cdots & I_{1p}(\beta) \\ I_{21}(\beta) & I_{22}(\beta) & \cdots & I_{2p}(\beta) \\ \vdots & \vdots & \ddots & \vdots \\ I_{p1}(\beta) & I_{p2}(\beta) & \cdots & I_{pp}(\beta) \end{pmatrix} = E[A(\beta)] = -E\left[\frac{\partial^2 l(\beta)}{\partial \beta\,\partial \beta^t}\right].$$

The above two algorithms stop when
$$\|U(\beta^{(t)})\| < \epsilon \quad\text{or}\quad \|\beta^{(t+1)} - \beta^{(t)}\| < \epsilon,$$
where $\epsilon$ is some prespecified small number.
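To make the iteration concrete, here is a minimal Python sketch (an illustration, not from the original notes): a generic scoring loop implementing $\beta^{(t+1)} = \beta^{(t)} + I^{-1}(\beta^{(t)})\,U(\beta^{(t)})$ with both stopping rules, applied to an assumed Poisson regression with log link, for which $U(\beta) = X^t(y - \mu)$ and $I(\beta) = X^t \mathrm{diag}(\mu)\,X$. The names `fisher_scoring`, `score`, and `info` are hypothetical.

```python
import numpy as np

def fisher_scoring(score, info, beta0, eps=1e-8, max_iter=100):
    """Iterate beta <- beta + I(beta)^{-1} U(beta), stopping when
    ||U(beta)|| < eps or ||beta_new - beta|| < eps."""
    beta = np.asarray(beta0, dtype=float)
    for _ in range(max_iter):
        U = score(beta)
        if np.linalg.norm(U) < eps:               # first rule: score near zero
            break
        step = np.linalg.solve(info(beta), U)     # solve I(beta) step = U(beta)
        beta = beta + step
        if np.linalg.norm(step) < eps:            # second rule: iterates settled
            break
    return beta

# Assumed example: Poisson regression with log link, mu = exp(X @ beta).
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
y = rng.poisson(np.exp(X @ np.array([0.5, -0.3])))

score = lambda b: X.T @ (y - np.exp(X @ b))           # U(beta) = X'(y - mu)
info = lambda b: X.T @ (np.exp(X @ b)[:, None] * X)   # I(beta) = X' diag(mu) X

print(fisher_scoring(score, info, beta0=np.zeros(2)))  # close to (0.5, -0.3)
```

Because the log link is canonical for the Poisson family, $A(\beta) = I(\beta)$ in this example, so the same loop is also the Newton-Raphson iteration; for non-canonical links the two algorithms differ.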

Note: $U(\beta)$ is called the score function, while $I(\beta)$ is called the information matrix.
