
Empirical Bayes

Bayesian Statistics Course

Alvaro Montenegro

November 7, 2011

Outline

Empirical Bayes
  Preliminary


Reconstruction of the Normal Joint Distribution

Let us suppose that

$$y \mid \theta \sim N(\theta, \Sigma), \qquad \theta \sim N(\mu, B).$$

Then

$$\begin{pmatrix} y \\ \theta \end{pmatrix} \sim N\left[ \begin{pmatrix} \mu \\ \mu \end{pmatrix}, \begin{pmatrix} \Sigma + B & B \\ B & B \end{pmatrix} \right]$$
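As a quick numeric check of this joint-distribution result, the following sketch (scalar case, with made-up values for $\mu$, $\Sigma$, $B$, and a hypothetical observation $y$) conditions the bivariate normal on $y$ and recovers the usual posterior mean and variance of $\theta$.

```python
import numpy as np

# Scalar illustration with made-up values: y | theta ~ N(theta, Sigma), theta ~ N(mu, B)
mu, Sigma, B = 1.0, 2.0, 3.0

# Joint covariance of (y, theta) from the result above
joint_cov = np.array([[Sigma + B, B],
                      [B,         B]])

# Conditioning the bivariate normal on an observed y gives the posterior of theta:
#   E[theta | y]   = mu + Cov(theta, y)/Var(y) * (y - mu)
#   Var[theta | y] = Var(theta) - Cov(theta, y)^2 / Var(y)
y_obs = 2.5
post_mean = mu + joint_cov[1, 0] / joint_cov[0, 0] * (y_obs - mu)
post_var = joint_cov[1, 1] - joint_cov[1, 0] ** 2 / joint_cov[0, 0]

print(post_mean, post_var)  # 1.9 and 1.2: y_obs is shrunk toward mu
```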



Introduction to EB

For simplicity, suppose a two-stage Bayesian model.

Assume a likelihood $f(y \mid \theta)$ for the observed data.
For $\theta$, assume a prior distribution with cdf $G(\theta \mid \eta)$ and density or mass function $g(\theta \mid \eta)$, where $\eta$ is a vector of hyperparameters. If $\eta$ is known, Bayes' theorem tells us that

$$p(\theta \mid y, \eta) = \frac{f(y \mid \theta)\, g(\theta \mid \eta)}{m(y \mid \eta)}$$


Introduction to EB II

Here $m(y \mid \eta)$ denotes the marginal distribution of $y$,

$$m(y \mid \eta) = \int f(y \mid \theta)\, g(\theta \mid \eta)\, d\theta$$

If $\eta$ is unknown, the fully Bayesian approach would adopt a hyperprior distribution $h(\eta)$. The posterior distribution is then

$$p(\theta \mid y) = \frac{\int f(y \mid \theta)\, g(\theta \mid \eta)\, h(\eta)\, d\eta}{\int\!\!\int f(y \mid u)\, g(u \mid \eta)\, h(\eta)\, du\, d\eta} = \int p(\theta \mid y, \eta)\, h(\eta \mid y)\, d\eta$$



Introduction to EB III

In empirical Bayes analysis, we use the marginal distribution $m(y \mid \eta)$ to estimate $\eta$ by $\hat\eta = \hat\eta(y)$ (e.g., the marginal MLE).

Inference is then based on the estimated posterior distribution $p(\theta \mid y, \hat\eta)$.

The name "empirical Bayes" arises from the fact that we are using the data to estimate the hyperparameter $\eta$.



Parametric EB (PEB)
If $m(y \mid \eta)$ is available in closed form, we use it directly to find the MMLE of $\eta$.

Gaussian/Gaussian models

Consider the two-stage Gaussian/Gaussian model

$$y_i \mid \theta_i \sim N(\theta_i, \sigma^2), \qquad i = 1, \ldots, k$$
$$\theta_i \sim N(\mu, \tau^2), \qquad i = 1, \ldots, k$$

Assume that both $\sigma^2$ and $\tau^2$ are known. Then $\eta = \mu$. And,

$$\begin{pmatrix} y_i \\ \theta_i \end{pmatrix} \sim N\left[ \begin{pmatrix} \mu \\ \mu \end{pmatrix}, \begin{pmatrix} \sigma^2 + \tau^2 & \tau^2 \\ \tau^2 & \tau^2 \end{pmatrix} \right]$$

Thus, $y_i \sim N(\mu, \sigma^2 + \tau^2)$ and $\mathrm{corr}^2(y_i, \theta_i) = \dfrac{\tau^2}{\sigma^2 + \tau^2}$.


Gaussian/Gaussian model
Hence, the marginal density of $y = (y_1, \ldots, y_k)^t$ is given by

$$m(y \mid \mu) = \frac{1}{[2\pi(\sigma^2 + \tau^2)]^{k/2}} \exp\left[ -\frac{1}{2(\sigma^2 + \tau^2)} \sum_{i=1}^{k} (y_i - \mu)^2 \right]$$

So $\hat\mu = \bar y = \frac{1}{k} \sum_{i=1}^{k} y_i$ is the MMLE of $\mu$.

We conclude that the estimated posterior distribution is

$$p(\theta_i \mid y_i, \hat\mu) = N\left(B \hat\mu + (1 - B) y_i,\; (1 - B)\sigma^2\right)$$

where

$$B = \sigma^2 / (\sigma^2 + \tau^2)$$

Then,

$$\hat\theta_i = B \bar y + (1 - B) y_i = \bar y + (1 - B)(y_i - \bar y)$$
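These formulas take only a few lines to sketch; the data vector and the values of $\sigma^2$ and $\tau^2$ below are made up for illustration.

```python
import numpy as np

# Made-up data and known variances for illustration
sigma2, tau2 = 1.0, 4.0          # sigma^2 and tau^2 both known
y = np.array([2.0, 4.0, 6.0])

mu_hat = y.mean()                # MMLE of mu: ybar
B = sigma2 / (sigma2 + tau2)     # shrinkage factor B = sigma^2 / (sigma^2 + tau^2)

# Posterior point estimates: each y_i is shrunk toward ybar by the factor B
theta_hat = mu_hat + (1 - B) * (y - mu_hat)

print(mu_hat, B)        # 4.0, 0.2
print(theta_hat)        # [2.4, 4.0, 5.6]
```

Note how a small $B$ (prior variance large relative to sampling variance) leaves the $y_i$ nearly untouched, while $B$ close to 1 pulls every estimate toward $\bar y$.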


Gaussian/Gaussian model II
Now, assume that $\tau^2$ is also unknown. Then $\eta = (\mu, \tau^2)$, and we have to decide what estimate to use for $\tau^2$ (or, equivalently, for $B$).

The MMLE for $\tau^2$ in this case is

$$\hat\tau^2 = (s^2 - \sigma^2)_+ = \max\{0,\; s^2 - \sigma^2\}$$

where $s^2 = \frac{1}{k} \sum_{i=1}^{k} (y_i - \bar y)^2$.

The MMLE for $B$ is

$$\hat B = \frac{\sigma^2}{\sigma^2 + \hat\tau^2} = \frac{\sigma^2}{\sigma^2 + (s^2 - \sigma^2)_+}$$

and the shrinkage estimate becomes

$$\hat\theta_i = \bar y + (1 - \hat B)(y_i - \bar y)$$
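A minimal sketch of this case, again with made-up data and a made-up known $\sigma^2$:

```python
import numpy as np

# sigma^2 known; mu and tau^2 unknown. Data values are made up.
sigma2 = 1.0
y = np.array([2.0, 4.0, 6.0])
k = len(y)

ybar = y.mean()
s2 = ((y - ybar) ** 2).sum() / k        # s^2 = (1/k) * sum (y_i - ybar)^2
tau2_hat = max(0.0, s2 - sigma2)        # MMLE of tau^2: (s^2 - sigma^2)_+
B_hat = sigma2 / (sigma2 + tau2_hat)    # estimated shrinkage factor
theta_hat = ybar + (1 - B_hat) * (y - ybar)

print(tau2_hat, B_hat)   # 1.666..., 0.375
print(theta_hat)         # [2.75, 4.0, 5.25]
```

The positive-part truncation matters: when $s^2 \le \sigma^2$, we get $\hat\tau^2 = 0$, $\hat B = 1$, and every $\hat\theta_i$ collapses to $\bar y$.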


EM algorithm for PEB


This alternative is useful if the MMLE for is relatively
straightforward after is observed. The MMLE of can be
computed using the the prior g . Let
S(|) =

log (g (|))

be the score function.


I

I
I

E-Step. Let (j) denote the current estimate of the


hyperparameter at iteration j. Compute the posterior mean of
from the posterior p(|y, (j) ).
(j) ) = E (S(|)|y, (j) )

Compute S(|
M-Step. Uses S to compute a new estimate of the

hyperparameter, that is obtain the MLE of from S.




EM algorithm for the Gaussian/Gaussian model

For simplicity, let T = 2 . Let log (l()) be the likelihood of for


component i. Then
2log (l()) = log (T ) + (i )2 /T
Hence,
1
S(i , ) =
2

Alvaro Montenegro

2 (iT)
(i )2
1
T
T2

Bayesian Statistics Course


EM algorithm for the Gaussian/Gaussian model II

E-Step. Sample (obtain) $\theta_i^{(j)}$ from

$$p(\theta_i \mid y, \mu^{(j)}, T^{(j)}) = N\left(B^{(j)} \mu^{(j)} + (1 - B^{(j)}) y_i,\; \sigma^2 (1 - B^{(j)})\right)$$

where $B^{(j)} = \sigma^2 / (\sigma^2 + T^{(j)})$.

M-Step. Estimate $\mu^{(j+1)}$ as $\mu^{(j+1)} = \frac{1}{k} \sum_{i=1}^{k} \theta_i^{(j)}$. Estimate $T^{(j+1)}$ as

$$T^{(j+1)} = \frac{1}{k} \sum_{i=1}^{k} \left(\theta_i^{(j)} - \mu^{(j)}\right)^2$$
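The iterations above can be sketched as follows. One caveat: rather than sampling each $\theta_i^{(j)}$, this sketch uses the exact E-step expectations, taking the posterior means in closed form and adding the posterior variance to the $T$-update, which is the deterministic EM variant of the scheme described; the data and starting values are made up. Under this update the iterates converge to the MMLEs $\hat\mu = \bar y$ and $\hat T = (s^2 - \sigma^2)_+$ from the previous slides.

```python
import numpy as np

# Deterministic-EM sketch for the Gaussian/Gaussian model (sigma^2 known).
# Data and starting values are made up for illustration.
sigma2 = 1.0
y = np.array([2.0, 4.0, 6.0])
k = len(y)

mu, T = 0.0, 1.0                         # initial mu^(0), T^(0)
for _ in range(200):
    B = sigma2 / (sigma2 + T)            # B^(j)
    theta = B * mu + (1 - B) * y         # E-step: posterior means of theta_i
    v = sigma2 * (1 - B)                 # posterior variance of each theta_i
    mu = theta.mean()                    # M-step: new mu
    T = ((theta - mu) ** 2).mean() + v   # new T, with the posterior variance added

print(mu, T)   # converges to ybar = 4.0 and (s^2 - sigma^2)_+ = 5/3
```

With these numbers, $s^2 = 8/3$, so the fixed point is $\mu = 4$ and $T = 5/3$; the sampled ("stochastic EM") version on the slide would fluctuate around the same values.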

