A Gentle Introduction to
Markov Chain Monte Carlo (MCMC)
Ed George
University of Pennsylvania
Séminaire de Printemps
Villars-sur-Ollon, Switzerland
March 2005
1. MCMC: A New Approach to Simulation
Consider the general problem of trying to calculate characteristics of a complicated multivariate probability distribution f(x) on x = (x_1, . . . , x_p).
For example, suppose we want to calculate the mean of x_1,

E(x_1) = ∫∫ x_1 f(x_1, x_2) dx_1 dx_2

where

f(x_1, x_2) ∝ (1 + x_1^2)^{−1} x_2^{−n} exp{ −(1/(2x_2^2)) Σ_i (y_i − x_1)^2 − x_2 }

(y_1, . . . , y_n are fixed constants). Bad news: This calculation is analytically intractable.
A Monte Carlo approach: Simulate k observations x^(1), . . . , x^(k) from f(x) and use this sample to estimate the characteristics of interest. (Careful: Each x^(j) = (x^(j)_1, . . . , x^(j)_p) is a multivariate observation). For example, we could estimate the mean of x_1 by

x̄_1 = (1/k) Σ_j x^(j)_1.

If x^(1), . . . , x^(k) were independent observations (i.e. an iid sample), we could use standard central limit theorem results to draw inference about the quality of our estimate.

Bad news: In many problems, methods are unavailable for direct simulation of an iid sample from f(x).
Good news: In many problems, methods such as the Gibbs sampler and the Metropolis-Hastings algorithms can be used to simulate a Markov chain x^(1), . . . , x^(k) which is converging in distribution to f(x) (i.e. as k increases, the distribution of x^(k) gets closer and closer to f(x)).

Recall that a Markov chain x^(1), . . . , x^(k) is a sequence such that for each j ≥ 1, x^(j+1) is sampled from a distribution p(x | x^(j)) which depends on x^(j) (but not on x^(1), . . . , x^(j−1)).

The function p(x | x^(j)) is called a Markov transition kernel. If p(x | x^(j)) is time-homogeneous (i.e. p(x | x^(j)) does not depend on j) and the transition kernel satisfies

∫ p(x | x′) f(x′) dx′ = f(x),

then the chain will converge to f(x) if it converges at all.
Simulation of a Markov chain requires a starting value x^(0). If the chain is converging to f(x), then the dependence between x^(j) and x^(0) diminishes as j increases. After a suitable burn-in period of l iterations, x^(l), . . . , x^(k) behaves like a dependent sample from f(x).
Such behavior is illustrated by Figure 1.1 on page 6 of Gilks, Richardson & Spiegelhalter (1995).
The output from such simulated chains can be used to estimate the characteristics of f(x). For example, one can obtain approximate iid samples of size m by taking the final x^(k) values from m separate chains.

It is probably more efficient, however, to use all the simulated values. For example, x̄_1 = (1/k) Σ_j x^(j)_1 will still converge to the mean of x_1.
MCMC is the general procedure of simulating such Markov chains and using them to draw inference about the characteristics of f(x). Methods which have ignited MCMC are the Gibbs sampler and the more general Metropolis-Hastings algorithms. As we will now see, these are simply prescriptions for constructing a Markov transition kernel p(x | x′) whose limiting distribution is f(x).

For example, the Gibbs sampler output x^(1)_2, . . . , x^(m)_2 yields the marginal density estimate

f̂(x_1) = (1/m) Σ_{i=1}^m f(x_1 | x^(i)_2).
Figure 3 of Casella & George (1992) illustrates the improvement obtained by this estimate.

Note that the conditional distributions for the above setup, the Binomial and the Beta, can be simulated by routine methods. This is not always the case. For example, f(x_1 | x_2) from page 2 is not of standard form. Fortunately, such distributions can be simulated using envelope methods such as rejection sampling, the ratio-of-uniforms method or adaptive rejection sampling. As we'll see, Metropolis-Hastings algorithms can also be used for this purpose.
3. Metropolis-Hastings Algorithms (MH)

MH algorithms generate Markov chains which converge to f(x) by successively sampling from an (essentially) arbitrary proposal distribution q(x | x′). The MH algorithm entails simulating x^(1), . . . , x^(k) as follows:

Simulate a transition candidate x^C from q(x | x^(j)).

Set x^(j+1) = x^C with probability

α(x^(j), x^C) = min{ 1, [q(x^(j) | x^C) / q(x^C | x^(j))] [f(x^C) / f(x^(j))] }.

Otherwise set x^(j+1) = x^(j).
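A minimal sketch of this algorithm in Python, run on the posterior f(x_1, x_2) from page 2. The data vector y, the starting value, the step size, and the chain length are illustrative assumptions, not values from the slides:

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(1.0, 2.0, size=20)   # hypothetical fixed constants y_1,...,y_n
n = len(y)

def log_f(mu, sigma):
    # log of f(x1, x2) ∝ (1+x1^2)^(-1) x2^(-n) exp{-sum(y_i-x1)^2/(2 x2^2) - x2},
    # up to an additive constant; the density is zero for sigma <= 0
    if sigma <= 0:
        return -np.inf
    return (-np.log1p(mu**2) - n*np.log(sigma)
            - np.sum((y - mu)**2)/(2*sigma**2) - sigma)

k, step = 50_000, 0.5
chain = np.empty((k, 2))
x = np.array([0.0, 1.0])                    # starting value x^(0)
for j in range(k):
    cand = x + step*rng.normal(size=2)      # symmetric random-walk proposal
    # symmetric q cancels in alpha, leaving min(1, f(cand)/f(x))
    if np.log(rng.uniform()) < log_f(*cand) - log_f(*x):
        x = cand                            # accept the transition candidate
    chain[j] = x

burn = chain[5_000:]                        # discard a burn-in period
print(burn.mean(axis=0))                    # estimates of the means of x1 and x2
```

With a symmetric proposal this is the random walk chain discussed below; the proposal ratio drops out of the acceptance probability.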
The original Metropolis algorithm was based on a symmetric q (i.e. q(x | x′) = q(x′ | x)), for which α(x^(j), x^C) = min{1, f(x^C)/f(x^(j))} depends only on f(x).

When x is continuous, a popular choice for q(x | x′) is x = x′ + z where z ~ N_p(0, Σ). The resulting chain is called a random walk chain. Note that the choice of scale Σ can critically affect the mixing (i.e. movement) of the chain. Figure 1.1 on page 6 of Gilks, Richardson & Spiegelhalter (1995) illustrates this when p = 1. Other distributions for z can also be used.
Another useful choice, called an independence sampler, is obtained when the proposal q(x | x′) = q(x) does not depend on x′. The resulting α is of the form

α(x^(j), x^C) = min{ 1, [q(x^(j)) / q(x^C)] [f(x^C) / f(x^(j))] }.

Such samplers work well when q(x) is a good heavy-tailed approximation to f(x).

It may be preferable to use an MH algorithm which updates the components x^(j)_i of x one at a time. It can be shown that the Gibbs sampler is just a special case of such a single-component MH algorithm where q is chosen so that α ≡ 1.
Finally, to see why MH algorithms work, it is not too hard to show that the implied transition kernel p(x | x′) of any MH algorithm satisfies

p(x | x′) f(x′) = p(x′ | x) f(x),

a condition called detailed balance or reversibility. Integrating both sides of this identity with respect to x′ yields

∫ p(x | x′) f(x′) dx′ = f(x),

showing that f(x) is the limiting distribution when the chain converges.
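Both identities can be checked numerically for a small discrete Metropolis chain. The 4-state target and uniform proposal here are toy choices for illustration:

```python
import numpy as np

# Toy target f on a 4-point state space, with a uniform (symmetric) proposal.
f = np.array([0.1, 0.2, 0.3, 0.4])
m = len(f)

# Build the Metropolis transition kernel P[j, i] = Pr(move to j | at i).
P = np.zeros((m, m))
for i in range(m):
    for j in range(m):
        if j != i:
            P[j, i] = (1/m)*min(1.0, f[j]/f[i])  # propose j, accept w.p. min(1, f_j/f_i)
    P[i, i] = 1.0 - P[:, i].sum()                # stay put when the candidate is rejected

# Detailed balance: p(x|x') f(x') = p(x'|x) f(x) for every pair of states.
assert np.allclose(P*f, (P*f).T)

# Stationarity: summing the kernel against f returns f
# (the discrete analogue of the integral identity).
assert np.allclose(P @ f, f)
print("detailed balance and stationarity hold")
```

The stationarity check follows from detailed balance by summing over the conditioning state, exactly as in the integration step above.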
4. The Model Liberation Movement

Advances in computing technology have unleashed the power of Monte Carlo methods, which in turn are now unleashing the potential of statistical modeling.
Our new ability to simulate from complicated multivariate probability distributions via MCMC is having impact in many areas of Statistics, but most profoundly for Bayesian approaches to statistical modeling.

The Bayesian paradigm uses probability to characterize ALL uncertainty as follows:

Data is a realization from a model p(Data | θ), where θ is an unknown (possibly multivariate) parameter.

θ is treated as a realization from a prior distribution p(θ).

Post-data inference about θ is based on the posterior distribution

p(θ | Data) = p(Data | θ) p(θ) / ∫ p(Data | θ) p(θ) dθ.
In the past, analytical intractability of the expression for p(Data) severely stymied realistic practical Bayesian methods. Unrealistic, oversimplified models were too often used to facilitate calculations. MCMC has changed this, and opened up vast new realms of modeling possibilities.

My initial example

f(x_1, x_2) ∝ (1 + x_1^2)^{−1} x_2^{−n} exp{ −(1/(2x_2^2)) Σ_i (y_i − x_1)^2 − x_2 }

was just a disguised posterior distribution for the Bayesian setup

y_1, . . . , y_n iid N(μ, σ^2), μ ~ Cauchy(0, 1), σ ~ Exponential(1), independently.

The posterior of the parameters μ and σ is

p(μ, σ | Data) ∝ (1 + μ^2)^{−1} σ^{−n} exp{ −(1/(2σ^2)) Σ_i (y_i − μ)^2 − σ }.
In the above example, f(x) can only be specified up to a norming constant. This is typical of Bayesian formulations. A huge attraction of the Gibbs sampler (GS) and MH algorithms is that these norming constants are not needed.

The previous example is just a toy problem. MCMC is in fact enabling posterior calculation for extremely complicated models with hundreds and even thousands of parameters.

Going even further, the Bayesian approach can be used to obtain posterior distributions over model spaces. Under such formulations, MCMC algorithms are leading to new search engines which automatically identify promising models.
References For Getting Started
Casella, G. & George, E.I. (1992). Explaining the Gibbs Sampler, The American Statistician, 46, 167–174.

Chib, S. & Greenberg, E. (1995). Understanding the Metropolis-Hastings Algorithm, The American Statistician, 49, 327–335.

Gilks, W.R., Richardson, S. & Spiegelhalter, D.J. (1995). Markov Chain Monte Carlo in Practice, Chapman & Hall, London.

Robert, C.P. & Casella, G. (2004). Monte Carlo Statistical Methods, 2nd Edition, Springer, New York.
Lecture II
Bayesian Approaches for Model Uncertainty
Ed George
University of Pennsylvania
Séminaire de Printemps
Villars-sur-Ollon, Switzerland
March 2005
1. A Probabilistic Setup for Model Uncertainty
Suppose a set of K models {M_1, . . . , M_K} are under consideration for data Y.

Under M_k, Y has density p(Y | θ_k, M_k) where θ_k is a vector of unknown parameters that indexes the members of M_k. (More precisely, M_k is a model class).

The Bayesian approach proceeds by assigning a prior probability distribution p(θ_k | M_k) to the parameters of each model, and a prior probability p(M_k) to each model.

Intuitively, this complete specification can be understood as a three-stage hierarchical mixture model for generating the data Y: first the model M_k is generated from p(M_1), . . . , p(M_K), second the parameter vector θ_k is generated from p(θ_k | M_k), and third the data Y is generated from p(Y | θ_k, M_k).
Letting Y_f be a future unknown observation, this formulation induces a joint distribution

p(Y_f, Y, θ_k, M_k) = p(Y_f, Y | θ_k, M_k) p(θ_k | M_k) p(M_k).

Conditioning on Y, all remaining uncertainty is captured by the joint posterior distribution p(Y_f, θ_k, M_k | Y). Through conditioning and marginalization, this can be used for a variety of Bayesian inferences and decisions.

For example, for prediction one would margin out both θ_k and M_k and use the predictive distribution p(Y_f | Y), which in effect averages over all the unknown models.
Of particular interest are the posterior model probabilities

p(M_k | Y) = p(Y | M_k) p(M_k) / Σ_j p(Y | M_j) p(M_j)

where

p(Y | M_k) = ∫ p(Y | θ_k, M_k) p(θ_k | M_k) dθ_k

is the marginal or integrated likelihood of M_k.

In terms of the three-stage hierarchical mixture formulation, p(M_k | Y) is the probability that M_k generated the data, i.e. that M_k was generated from p(M_1), . . . , p(M_K) in the first step.

The model posterior distribution p(M_1 | Y), . . . , p(M_K | Y) provides a complete post-data representation of model uncertainty and is the fundamental object of interest for model selection and model averaging.
A natural and simple strategy for model selection is to choose the most probable M_k, the one for which p(M_k | Y) is largest. However, for the purpose of prediction with a single model, it may be better to use the median posterior model. Alternatively one might prefer to report a set of high posterior models along with their probabilities to convey the model uncertainty.

Based on these posterior probabilities, pairwise comparison of models is summarized by the posterior odds

p(M_1 | Y) / p(M_2 | Y) = [p(Y | M_1) / p(Y | M_2)] × [p(M_1) / p(M_2)].

Note how the data, through the Bayes factor p(Y | M_1) / p(Y | M_2), updates the prior odds to yield the posterior odds.
2. Examples

As a first example, consider the problem of choosing between two nonnested models, M_1 and M_2, for discrete count data Y = (y_1, . . . , y_n) where

p(Y | θ_1, M_1) = θ^n (1 − θ)^s, s = Σ y_i,

a geometric distribution where θ_1 = θ, and

p(Y | θ_2, M_2) = e^{−nλ} λ^s / Π_i y_i!,

a Poisson distribution where θ_2 = λ.

Suppose further that uncertainty about θ_1 = θ is described by a uniform prior

p(θ | M_1) = 1 for θ ∈ [0, 1],

and uncertainty about θ_2 = λ is described by an exponential prior

p(λ | M_2) = e^{−λ} for λ ∈ [0, ∞).
Under these priors, the marginal distributions are

p(Y | M_1) = ∫_0^1 θ^n (1 − θ)^s dθ = n! s! / (n + s + 1)!

p(Y | M_2) = ∫_0^∞ [e^{−λ(n+1)} λ^s / Π_i y_i!] dλ = s! / [(n + 1)^{s+1} Π_i y_i!]

The Bayes factor for M_1 vs M_2 is then

p(Y | M_1) / p(Y | M_2) = n! (n + 1)^{s+1} Π_i y_i! / (n + s + 1)!.

When p(M_1) = p(M_2) = 1/2, this equals the posterior odds.

Note that in contrast to the likelihood ratio statistic, which compares maximized likelihoods, the Bayes factor compares averaged likelihoods.

Caution: the choice of priors here can be very influential.
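These closed forms are easy to verify numerically. A sketch with hypothetical count data (the data values, grid ranges and sizes are illustrative assumptions):

```python
import numpy as np
from math import factorial, prod

def trap(vals, grid):
    # simple trapezoidal rule
    return float(np.sum((vals[1:] + vals[:-1])*np.diff(grid))/2)

y = [0, 2, 1, 3, 0, 1]                  # hypothetical count data y_1,...,y_n
n, s = len(y), sum(y)
fy = prod(factorial(v) for v in y)      # product of the y_i!

# Closed-form marginal likelihoods from the slides.
m1 = factorial(n)*factorial(s)/factorial(n + s + 1)
m2 = factorial(s)/((n + 1)**(s + 1)*fy)

# Check m1: integrate the geometric likelihood against the uniform prior.
theta = np.linspace(0, 1, 200_001)
assert abs(trap(theta**n*(1 - theta)**s, theta) - m1)/m1 < 1e-4

# Check m2: integrate the Poisson likelihood against the Exponential(1) prior.
lam = np.linspace(0, 60, 600_001)
assert abs(trap(np.exp(-n*lam)*lam**s/fy*np.exp(-lam), lam) - m2)/m2 < 1e-4

# Bayes factor for M1 vs M2 (the posterior odds when p(M1) = p(M2) = 1/2).
print(m1/m2)
```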
As our second example, consider the problem of testing

H_0: θ = 0 vs H_1: θ ≠ 0 when y_1, . . . , y_n iid N(θ, 1).

This can be treated as a Bayesian model selection problem by letting

p(Y | θ_1, M_1) = p(Y | θ_2, M_2) = (2π)^{−n/2} exp{ −Σ_i (y_i − θ)^2 / 2 }

with θ_1 ≡ 0 under M_1, and θ_2 = θ ~ N(0, τ^2) under M_2. The resulting Bayes factor is

p(Y | M_2) / p(Y | M_1) = (1 + nτ^2)^{−1/2} exp{ n^2 τ^2 ȳ^2 / (2(1 + nτ^2)) }.
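This Bayes factor, taking the point-null model M_1 against a N(0, τ²) prior on θ under M_2, can be sanity-checked by numerical integration. The data, τ², and grid are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n, tau2 = 10, 4.0                    # hypothetical sample size and prior variance
y = rng.normal(0.3, 1.0, size=n)
ybar = y.mean()

# Closed-form Bayes factor of M2 (theta ~ N(0, tau2)) versus M1 (theta = 0).
bf = (1 + n*tau2)**(-0.5)*np.exp(n**2*tau2*ybar**2/(2*(1 + n*tau2)))

# Numerical check: integrate the likelihood ratio against the N(0, tau2) prior.
theta = np.linspace(-8, 8, 400_001)
loglr = n*ybar*theta - n*theta**2/2          # log[p(Y|theta)/p(Y|theta=0)]
vals = np.exp(loglr)*np.exp(-theta**2/(2*tau2))/np.sqrt(2*np.pi*tau2)
num = float(np.sum((vals[1:] + vals[:-1])*np.diff(theta))/2)
assert abs(num - bf)/bf < 1e-4
print(bf)
```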
3. General Considerations for Prior Selection

For a given set of models M, the effectiveness of the Bayesian approach rests firmly on the specification of the parameter priors p(θ_k | M_k) and the model space prior p(M_1), . . . , p(M_K).

The most common and practical approach to prior specification in model uncertainty problems, especially large ones, is to try and construct noninformative, semi-automatic formulations, using subjective and empirical Bayes considerations where needed.

A simple and popular choice for the model space prior is

p(M_k) ≡ 1/K,

which is noninformative in the sense of favoring all models equally. However, this can be deceptive because it may not be uniform over other characteristics such as model size.
Turning to the choice of parameter priors p(θ_k | M_k), the use of improper noninformative priors must be ruled out because their arbitrary norming constants are problematic for posterior odds comparisons.

Proper priors guarantee the internal coherence of the Bayesian formulation and allow for meaningful hyperparameter specifications.

An important consideration for prior specification is the analytical or numerical tractability of obtaining the marginals p(Y | M_k).

For nested model formulations, centering priors is often straightforward. The crucial challenge is setting the prior dispersion. It should be large enough to avoid too much prior influence, but small enough to avoid overly diffuse specifications. Note that in our previous normal example, the Bayes factor in favor of H_0 goes to ∞ as τ → ∞, the Bartlett-Lindley paradox.
4. Extracting Information from the Posterior

When exact calculation of the posterior is not feasible, MCMC methods can often be used to simulate an approximate sample from the posterior. This can be used to estimate posterior characteristics or to search for high probability models.

For a model characteristic η, MCMC methods such as the GS and MH algorithms entail simulation of a Markov chain, say η^(1), η^(2), . . ., that is converging to its posterior distribution p(η | Y).

When p(Y | M_k) can be obtained analytically, the GS and MH algorithms can be applied to directly simulate a model index from

p(M_k | Y) ∝ p(Y | M_k) p(M_k).

Otherwise, one must simulate from p(θ_k, M_k | Y).
Conjugate priors are often used because of the computational advantages of having closed form expressions for p(Y | M_k).

Alternatively, it is sometimes useful to use a computable approximation for p(Y | M_k) such as a Laplace approximation

p(Y | M_k) ≈ (2π)^{d_k/2} |H(θ̂_k)|^{1/2} p(Y | θ̂_k, M_k) p(θ̂_k | M_k)

where d_k is the dimension of θ_k, θ̂_k is the maximum of h(θ_k) ≡ log p(Y | θ_k, M_k) p(θ_k | M_k), and H(θ̂_k) is minus the inverse Hessian of h(θ_k) evaluated at θ̂_k.

This is obtained by substituting the Taylor series approximation

h(θ_k) ≈ h(θ̂_k) − (1/2)(θ_k − θ̂_k)′ H(θ̂_k)^{−1} (θ_k − θ̂_k)

for h(θ_k) in p(Y | M_k) = ∫ exp{h(θ_k)} dθ_k.

Going further, people sometimes use the BIC approximation

log p(Y | M_k) ≈ log p(Y | θ̂_k, M_k) − (d_k/2) log n,

obtained by using the MLE θ̂_k and ignoring the terms that are constant in large samples.
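A quick sketch of the Laplace approximation in a conjugate normal setup where the exact marginal is known. Here h is exactly quadratic, so the approximation is exact; the data and prior are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8
y = rng.normal(0.5, 1.0, size=n)        # hypothetical data; y_i ~ N(theta, 1), theta ~ N(0, 1)

def h(t):
    # h(theta) = log p(Y|theta, M) + log p(theta|M)
    return (-n/2*np.log(2*np.pi) - np.sum((y - t)**2)/2
            - 0.5*np.log(2*np.pi) - t**2/2)

t_hat = n*y.mean()/(n + 1)              # maximizer of h
H = 1.0/(n + 1)                         # minus the inverse Hessian of h at t_hat (d_k = 1)

# Laplace approximation: p(Y|M) ≈ (2 pi)^{d_k/2} |H|^{1/2} exp{h(theta_hat)}
laplace = (2*np.pi)**0.5 * H**0.5 * np.exp(h(t_hat))

# Exact marginal for this conjugate setup.
exact = ((2*np.pi)**(-n/2) * (n + 1)**(-0.5)
         * np.exp(-np.sum(y**2)/2 + n**2*y.mean()**2/(2*(n + 1))))
assert abs(laplace - exact)/exact < 1e-10
print(laplace)
```

For non-Gaussian h the same recipe gives only a large-sample approximation, which is the point of the BIC simplification above.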
References For Getting Started
Chipman, H., George, E.I. and McCulloch, R.E. (2001). The Practical Implementation of Bayesian Model Selection (with discussion). In Model Selection (P. Lahiri, ed.), IMS Lecture Notes Monograph Series, Volume 38, 65–134.

Clyde, M. & George, E.I. (2004). Model Uncertainty, Statistical Science, 19, 1, 81–94.

George, E.I. (1999). Bayesian Model Selection. In Encyclopedia of Statistical Sciences, Update Volume 3 (eds. S. Kotz, C. Read and D. Banks), pp. 39–46, Wiley, N.Y.
Lecture III
Bayesian Variable Selection
Ed George
University of Pennsylvania
Séminaire de Printemps
Villars-sur-Ollon, Switzerland
March 2005
1. The Variable Selection Problem
Suppose one wants to model the relationship between Y, a variable of interest, and a subset of x_1, . . . , x_p, a set of potential explanatory variables or predictors, but there is uncertainty about which subset to use. Such a situation is particularly of interest when p is large and x_1, . . . , x_p is thought to contain many redundant or irrelevant variables.

This problem has received the most attention under the normal linear model

Y = β_1 x_1 + · · · + β_p x_p + ε, ε ~ N_n(0, σ^2 I),

when some unknown subset of regression coefficients are so small that it would be preferable to ignore them.

This normal linear model setup is important not only because of its analytical tractability, but also because it is a canonical version of other important problems such as modern nonparametric regression.
It will be convenient here to index each of the 2^p possible subset choices by

γ = (γ_1, . . . , γ_p)′,

where γ_i = 0 or 1 according to whether β_i is small or large, respectively. The size of the γth subset is denoted q_γ ≡ γ′1. We refer to γ as a model since it plays the same role as the M_k described in Lecture II.
2. Model Space Priors for Variable Selection
For the specification of the model space prior, most Bayesian variable selection implementations have used independence priors of the form

p(γ) = Π_i w_i^{γ_i} (1 − w_i)^{1−γ_i}.

Under this prior, each x_i enters the model independently with probability p(γ_i = 1) = 1 − p(γ_i = 0) = w_i.
A useful simplification of this yields

p(γ) = w^{q_γ} (1 − w)^{p−q_γ},

where w is the expected proportion of x_i's in the model. A special case being the popular uniform prior

p(γ) ≡ 1/2^p.

Note that both of these priors are informative about the size of the model.

Related priors that might also be considered are

p(γ) = B(α + q_γ, β + p − q_γ) / B(α, β),

obtained by putting a Beta prior on w, and more generally

p(γ) = (p choose q_γ)^{−1} h(q_γ),

obtained by putting a prior h(q_γ) on the model size.

3. Parameter Priors

Under the model indexed by γ,

Y = X_γ β_γ + ε, ε ~ N_n(0, σ^2 I),

where X_γ is the n × q_γ matrix of predictors corresponding to the γth subset and β_γ is a q_γ × 1 vector of unknown regression coefficients. Here, (β_γ, σ^2) plays the role of θ_k described in Lecture II.
Perhaps the most commonly applied parameter prior form for this setup is the conjugate normal-inverse-gamma prior

p(β_γ | σ^2, γ) = N_{q_γ}(0, σ^2 Σ_γ),

p(σ^2 | γ) = p(σ^2) = IG(ν/2, νλ/2).

(p(σ^2) here is equivalent to νλ/σ^2 ~ χ^2_ν).
A valuable feature of this prior is its analytical tractability; β_γ and σ^2 can be eliminated by routine integration to yield

p(Y | γ) ∝ |X_γ′ X_γ + Σ_γ^{−1}|^{−1/2} |Σ_γ|^{−1/2} (νλ + S_γ^2)^{−(n+ν)/2}

where

S_γ^2 = Y′Y − Y′ X_γ (X_γ′ X_γ + Σ_γ^{−1})^{−1} X_γ′ Y.

The use of these closed form expressions can substantially speed up posterior evaluation and MCMC exploration, as we will see.

In choosing values for the hyperparameters that control p(σ^2), λ may be thought of as a prior estimate of σ^2, and ν may be thought of as the prior sample size associated with this estimate.
Let σ̂^2_FULL and σ̂^2_Y denote the traditional estimates of σ^2 based on the saturated and null models, respectively. Treating σ̂^2_FULL and σ̂^2_Y as rough under- and overestimates of σ^2, one might choose ν and λ so that p(σ^2) assigns substantial probability to the interval (σ̂^2_FULL, σ̂^2_Y). This should at least avoid gross misspecification.
Alternatively, the explicit choice of ν and λ can be avoided by using p(σ^2) ∝ 1/σ^2, the limit of the inverse-gamma prior as ν → 0.

For choosing the prior covariance matrix Σ_γ that controls p(β_γ | σ^2, γ), specification is substantially simplified by setting Σ_γ = c V_γ, where c is a scalar and V_γ = (X_γ′ X_γ)^{−1} or V_γ = I_{q_γ}, the q_γ × q_γ identity matrix.

Having fixed V_γ, c is then chosen so that p(β_γ | σ^2, γ) is relatively flat over the region of plausible values of β_γ, i.e. so that p(β_γ | σ^2, γ) assigns substantial probability to the range of all plausible values for β_γ. Choices of c between 10 and 10,000 seem to yield good results.
4. Posterior Calculation and Exploration

The previous conjugate prior formulations allow for analytical margining out of β_γ and σ^2 from p(Y, β_γ, σ^2 | γ) to yield a computable, closed form expression

g(γ) ∝ p(Y | γ) p(γ) ∝ p(γ | Y)

that can greatly facilitate posterior calculation and exploration.

For example, when Σ_γ = c (X_γ′ X_γ)^{−1}, we can obtain

g(γ) = (1 + c)^{−q_γ/2} (νλ + Y′Y − (1 + 1/c)^{−1} W′W)^{−(n+ν)/2} p(γ)

where W = T^{−1} X_γ′ Y with T the Cholesky factor satisfying T T′ = X_γ′ X_γ. Fast updating of T then requires only O(q_γ^2) operations per update when γ is changed one component at a time.
The availability of g(γ) ∝ p(γ | Y) allows for the flexible construction of MCMC algorithms that simulate a Markov chain

γ^(1), γ^(2), γ^(3), . . .

converging (in distribution) to p(γ | Y).

A variety of such MCMC algorithms can be conveniently obtained by applying the GS with g(γ), for example by generating each component from the full conditionals

p(γ_i | γ_(i), Y)

(γ_(i) = {γ_j : j ≠ i}), where the γ_i may be drawn in any fixed or random order.

The generation of such components can be obtained rapidly as a sequence of Bernoulli draws using simple functions of the ratio

p(γ_i = 1, γ_(i) | Y) / p(γ_i = 0, γ_(i) | Y) = g(γ_i = 1, γ_(i)) / g(γ_i = 0, γ_(i)).
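A sketch of this Gibbs sampler on simulated data, using g(γ) under the Σ_γ = c(X_γ′X_γ)^{−1} prior. The design, true coefficients, and hyperparameters c, w, ν, λ are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 60, 6
X = rng.normal(size=(n, p))
beta = np.array([2.0, -1.5, 0., 0., 0., 0.])   # only x1 and x2 matter
Y = X @ beta + rng.normal(size=n)

c, w, nu, lam = 100.0, 0.5, 1.0, 1.0           # hypothetical hyperparameters

def log_g(gam):
    # log g(gamma) = log p(Y|gamma) + log p(gamma), up to a constant
    q = int(gam.sum())
    WW = 0.0
    if q > 0:
        Xg = X[:, gam.astype(bool)]
        WW = Y @ Xg @ np.linalg.solve(Xg.T @ Xg, Xg.T @ Y)   # W'W
    return (-q/2*np.log(1 + c)
            - (n + nu)/2*np.log(nu*lam + Y @ Y - WW/(1 + 1/c))
            + q*np.log(w) + (p - q)*np.log(1 - w))

# Gibbs sampler: sweep the components, drawing each gamma_i from its
# full conditional via the ratio g(gamma_i=1, .)/g(gamma_i=0, .).
gam, K = np.zeros(p), 2_000
counts = np.zeros(p)
for _ in range(K):
    for i in range(p):
        g1, g0 = gam.copy(), gam.copy()
        g1[i], g0[i] = 1.0, 0.0
        d = log_g(g1) - log_g(g0)
        prob = 1.0/(1.0 + np.exp(-np.clip(d, -30, 30)))      # Bernoulli probability
        gam[i] = 1.0 if rng.uniform() < prob else 0.0
    counts += gam

print(counts/K)   # estimated marginal inclusion probabilities
```

The Bernoulli probability is the ratio r/(1 + r) with r the g-ratio above, computed on the log scale for stability.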
Such g(γ) also facilitates the use of MH algorithms, because g(γ)/g(γ′) = p(γ | Y)/p(γ′ | Y). Each MH step proceeds as follows:

1. Simulate a candidate γ′ from a proposal q(γ′ | γ^(j)).

2. Set γ^(j+1) = γ′ with probability

α(γ′ | γ^(j)) = min{ [q(γ^(j) | γ′) / q(γ′ | γ^(j))] [g(γ′) / g(γ^(j))], 1 }. (1)

Otherwise, γ^(j+1) = γ^(j).
A useful class of MH algorithms, the Metropolis algorithms, are obtained from the class of symmetric transition kernels of the form

q(γ^1 | γ^0) = q_d if Σ_{i=1}^p |γ_i^1 − γ_i^0| = d, (2)

which simulate a candidate γ′ by randomly changing d components of γ^(j).

Let A be a subset of γ values with g(A) ≡ Σ_{γ∈A} g(γ), so that p(A | Y) = C g(A), where C is the unknown norming constant. Then, a consistent estimate of C is

Ĉ = [1 / (g(A) K)] Σ_{k=1}^K I_A(γ^(k))

where I_A(·) is the indicator of the set A.

This yields improved estimates of the probability of individual γ values,

p̂(γ | Y) = Ĉ g(γ),

as well as an estimate of the total visited probability,

p̂(B | Y) = Ĉ g(B),

where B is the set of visited γ values.
The simulated γ^(1), . . . , γ^(K) can also play an important role in model averaging. For example, suppose one wanted to predict a quantity of interest Δ by the posterior mean

E(Δ | Y) = Σ_{all γ} E(Δ | γ, Y) p(γ | Y).

When p is too large for exhaustive enumeration and p(γ | Y) cannot be computed, E(Δ | Y) is unavailable and is typically approximated by something of the form

Ê(Δ | Y) = Σ_{γ∈S} E(Δ | γ, Y) p̂(γ | Y, S),

where S is a manageable subset of models and p̂(γ | Y, S) is a probability distribution over S. (In some cases, E(Δ | γ, Y) will also need to be approximated).
Letting S be the sampled γ values, a natural and consistent choice for Ê(Δ | Y) is

Ê_f(Δ | Y) = Σ_{γ∈S} E(Δ | γ, Y) p̂_f(γ | Y, S)

where p̂_f(γ | Y, S) is the relative frequency of γ in S. However, it appears that when g(γ) is available, one can do better by using

Ê_g(Δ | Y) = Σ_{γ∈S} E(Δ | γ, Y) p̂_g(γ | Y, S)

where p̂_g(γ | Y, S) = g(γ)/g(S) is the renormalized value of g(γ).

For example, when S is an iid sample from p(γ | Y), Ê_g(Δ | Y) approximates the best unbiased estimator of E(Δ | Y) as the sample size increases. To see this, note that when S is an iid sample, Ê_f(Δ | Y) is unbiased for E(Δ | Y). Since S (together with g) is sufficient, the Rao-Blackwellized estimator E(Ê_f(Δ | Y) | S) is best unbiased. But as the sample size increases, E(Ê_f(Δ | Y) | S) → Ê_g(Δ | Y).
6. Calibration and Empirical Bayes Variable Selection
Let us now focus on the special case where the conjugate normal-inverse-gamma prior,

p(β_γ | σ^2, γ) = N_{q_γ}(0, c σ^2 (X_γ′ X_γ)^{−1}),

is combined with

p(γ) = w^{q_γ} (1 − w)^{p−q_γ}.

Many popular model selection criteria entail maximizing a criterion of the form

C_F(γ) = SS_γ / σ̂^2 − F q_γ,

where SS_γ = Y′ X_γ (X_γ′ X_γ)^{−1} X_γ′ Y, σ̂^2 is an estimate of σ^2, and F is a fixed penalty value for adding a variable.

For orthogonal variables, x_i is added iff t_i^2 > F.

Some choices for F:

F = 0 : Select full model
F = 2 : Cp and AIC
F = log n : BIC
F = 2 log p : RIC
The relationship with C_F(γ) is obtained by re-expressing the model posterior under the prior setup as

p(γ | Y) ∝ exp{ [c / (2(1 + c))] [SS_γ/σ^2 − F(c, w) q_γ] },

where

F(c, w) = [(1 + c)/c] [2 log((1 − w)/w) + log(1 + c)].

As a function of γ for fixed Y, p(γ | Y) is increasing in C_F(γ) when F = F(c, w). Thus, Bayesian model selection based on p(γ | Y) is equivalent to model selection based on the criterion C_{F(c,w)}(γ). For example, by appropriate choice of c, w, the mode of p(γ | Y) can be made to correspond to the best Cp, AIC, BIC or RIC models.

Since c and w control the expected size and proportion of the nonzero components of β, the dependence of F(c, w) on c and w provides an implicit connection between the penalty F and the profile of models for which its value may be appropriate.
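As a quick illustration of this connection (the specific values of n and p below are arbitrary): with w = 0.5 the prior-odds term vanishes and F(c, 0.5) = ((1+c)/c) log(1+c) ≈ log c for large c, so c ≈ n mimics the BIC penalty and c ≈ p² the RIC penalty.

```python
import numpy as np

def F(c, w):
    # F(c, w) = ((1+c)/c) * (2 log((1-w)/w) + log(1+c))
    return (1 + c)/c*(2*np.log((1 - w)/w) + np.log(1 + c))

n, p = 1000, 50
print(F(n, 0.5), np.log(n))          # BIC-like penalty: F(n, 1/2) is close to log n
print(F(p**2, 0.5), 2*np.log(p))     # RIC-like penalty: F(p^2, 1/2) is close to 2 log p
```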
The awful truth: c and w are unknown.

Empirical Bayes idea: Use the ĉ and ŵ which maximize the marginal likelihood

L(c, w | Y) ∝ Σ_γ p(γ | w) p(Y | γ, c)

∝ Σ_γ w^{q_γ} (1 − w)^{p−q_γ} (1 + c)^{−q_γ/2} exp{ c SS_γ / (2σ^2(1 + c)) }.

For orthogonal x_i's (and σ known), this simplifies to

L(c, w | Y) ∝ Π_{i=1}^p [ (1 − w) e^{−t_i^2/2} + w (1 + c)^{−1/2} e^{−t_i^2/(2(1+c))} ]

where t_i = b_i √v_i / σ is the t-statistic associated with x_i (b_i the least squares estimate of β_i and v_i = x_i′ x_i).

At least in the orthogonal case, ĉ and ŵ can be found numerically using Gauss-Seidel, the EM algorithm, etc.
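A sketch of the orthogonal-case computation, replacing Gauss-Seidel/EM with a crude grid search. The simulated t-statistics and grid ranges are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
p, q, c_true = 200, 40, 16.0
# In the orthogonal case, t_i ~ N(0, 1+c) for the q nonzero coefficients
# and t_i ~ N(0, 1) for the rest.
t = np.concatenate([rng.normal(0, np.sqrt(1 + c_true), q),
                    rng.normal(0, 1, p - q)])

def log_L(c, w):
    # log marginal likelihood L(c, w | Y) in the orthogonal case, sigma known
    return np.sum(np.log((1 - w)*np.exp(-t**2/2)
                         + w*(1 + c)**(-0.5)*np.exp(-t**2/(2*(1 + c)))))

cs = np.linspace(0.5, 100, 400)
ws = np.linspace(0.01, 0.99, 99)
ll = np.array([[log_L(cv, wv) for wv in ws] for cv in cs])
i, j = np.unravel_index(ll.argmax(), ll.shape)
c_hat, w_hat = cs[i], ws[j]
print(c_hat, w_hat)   # EB estimates; compare with c_true = 16 and q/p = 0.2
```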
The best marginal maximum likelihood model γ̂_MML is then the one which maximizes the posterior p(γ | Y, ĉ, ŵ, σ̂), or equivalently the criterion

C_MML ≡ C_{F(ĉ, ŵ)}.

In contrast to criteria of the form C_F(γ) with prespecified fixed F, C_MML uses an adaptive penalty F(ĉ, ŵ) that is implicitly based on the estimated distribution of the regression coefficients.

Estimating β_γ after selecting γ̂_MML might then proceed using

E(β_γ | Y, ĉ, ŵ, σ̂, γ̂_MML) = [ĉ / (1 + ĉ)] b_{γ̂_MML},

where b_{γ̂_MML} is the least squares estimate under γ̂_MML.

A computable conditional maximum likelihood approximation C_CML for the nonorthogonal case is available.
Consider the simple model with X = I,

Y = β + ε where ε ~ N_n(0, I),

where β = (β_1, . . . , β_p)′ is such that

β_1, . . . , β_q iid N(0, c), β_{q+1}, . . . , β_p ≡ 0.

For p = n = 1000, and fixed values of c and q, we simulated Y from the above model.

Evaluate performance by estimating the risk

R(β̂, β) ≡ E_{c,q} Σ_i (Y_i I[x_i ∈ γ̂] − β_i)^2.

Figures 1a–b and 2 illustrate the adaptive advantages of the empirical Bayes selection criteria.
[Plot: average loss vs. number of nonzero components.]

Figure 1(a). The average loss of the selection procedures when c = 25 and the number of nonzero components q = 0, 10, 25, 50, 100, 200, 300, 400, 500, 750, 1000. We denote C_MML by MML, C_CML by CML, Cauchy BIC by CBIC and modified RIC by MRIC.
[Plot: average loss vs. number of nonzero components.]

Figure 1(b). The average loss of the selection procedures when c = 25 and the number of nonzero components q = 0, 10, 25, 50, 100. We denote C_MML by MML, C_CML by CML, Cauchy BIC by CBIC and modified RIC by MRIC. RIC and CBIC are virtually identical here and so have been plotted together.
[Plot: average loss vs. number of nonzero components.]

Figure 1(c). The average loss of the selection procedures when c = 5 and the number of nonzero components q = 0, 10, 25, 50, 100, 200, 300, 400, 500, 750, 1000. We denote C_MML by MML, C_CML by CML, Cauchy BIC by CBIC and modified RIC by MRIC.
References For Getting Started
Chipman, H., George, E.I. and McCulloch, R.E. (2001). The Practical Implementation of Bayesian Model Selection (with discussion). In Model Selection (P. Lahiri, ed.), IMS Lecture Notes Monograph Series, Volume 38, 65–134.

George, E.I. and Foster, D.P. (2000). Calibration and empirical Bayes variable selection. Biometrika, 87, 731–748.
Lecture IV
High Dimensional
Predictive Estimation
Ed George
University of Pennsylvania
Séminaire de Printemps
Villars-sur-Ollon, Switzerland
March 2005
1. Estimating a Normal Mean: A Brief History

Observe X | θ ~ N_p(θ, I) and estimate θ by θ̂ under quadratic risk

R_Q(θ, θ̂) = E_θ ‖θ̂(X) − θ‖^2.

θ̂_MLE(X) = X is the MLE, best invariant and minimax with constant risk.

Shocking Fact: θ̂_MLE is inadmissible when p ≥ 3. (Stein 1956)

Bayes rules are a good place to look for improvements: for a prior π(θ), the Bayes rule θ̂_π(X) = E_π(θ | X) minimizes E_π R_Q(θ, θ̂).

Remark: The (formal) Bayes rule under π_U(θ) ≡ 1 is θ̂_U(X) = θ̂_MLE(X) = X.
The Risk Functions of Two Minimax Estimators

θ̂_H(X), the Bayes rule under the Harmonic prior

π_H(θ) = ‖θ‖^{−(p−2)},

dominates θ̂_U when p ≥ 3. (Stein 1974)

θ̂_a(X), the Bayes rule under π_a(θ) where

θ | s ~ N_p(0, s I), s ~ (1 + s)^{a−2},

dominates θ̂_U and is proper Bayes when p = 5 and a ∈ [.5, 1) or when p ≥ 6 and a ∈ [0, 1). (Strawderman 1971)

A Unifying Phenomenon: These domination results can be attributed to properties of the marginal distribution of X under π_H and π_a.
The Bayes rule under π(θ) can be expressed as

θ̂_π(X) = E_π(θ | X) = X + ∇ log m_π(X)

where

m_π(X) ∝ ∫ e^{−‖X−θ‖^2/2} π(θ) dθ

is the marginal of X under π(θ). (∇ = (∂/∂x_1, . . . , ∂/∂x_p)′) (Brown 1971)

The risk improvement of θ̂_π(X) over θ̂_U(X) can be expressed as

R_Q(θ, θ̂_U) − R_Q(θ, θ̂_π) = E_θ[ ‖∇ log m_π(X)‖^2 − 2 ∇^2 m_π(X)/m_π(X) ]

= E_θ[ −4 ∇^2 √m_π(X) / √m_π(X) ]

(∇^2 ≡ Σ_i ∂^2/∂x_i^2) (Stein 1974, 1981)
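The representation is easy to verify in a case where everything is closed form, e.g. the conjugate prior θ ~ N_p(0, τ²I), a stand-in chosen purely for illustration:

```python
import numpy as np

p, tau2 = 5, 3.0
X = np.array([1.0, -2.0, 0.5, 3.0, -1.0])   # an arbitrary observed X

# Under theta ~ N_p(0, tau2 I), the marginal of X is N_p(0, (1+tau2) I),
# so grad log m(X) = -X/(1+tau2).
rule = X + (-X/(1 + tau2))                  # X + grad log m(X)

# The conjugate posterior mean is E(theta|X) = [tau2/(1+tau2)] X.
assert np.allclose(rule, tau2/(1 + tau2)*X)
print(rule)
```

For the harmonic and Strawderman priors the same formula applies, with ∇ log m_π computed from their (non-Gaussian) marginals.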
That θ̂_H(X) dominates θ̂_U when p ≥ 3 follows from the fact that the marginal m_π(X) under π_H is superharmonic, i.e.

∇^2 m_π(X) ≤ 0.

That θ̂_a(X) dominates θ̂_U when p ≥ 5 (and conditions on a) follows from the fact that the square root of the marginal under π_a is superharmonic, i.e.

∇^2 √m_π(X) ≤ 0.

(Fourdrinier, Strawderman and Wells 1998)
2. The Prediction Problem

Observe X | μ ~ N_p(μ, v_x I) and predict Y | μ ~ N_p(μ, v_y I).

Conditionally on μ, Y is independent of X.

v_x and v_y are known (for now).

The Problem: To estimate p(y | μ) by q(y | x).

Measure closeness by Kullback-Leibler loss,

L(μ, q(y | x)) = ∫ p(y | μ) log[ p(y | μ) / q(y | x) ] dy.

Risk function:

R_KL(μ, q) = ∫ L(μ, q(y | x)) p(x | μ) dx.

For a prior π(μ), the Bayes rule

p_π(y | x) = ∫ p(y | μ) π(μ | x) dμ = E_π[ p(y | μ) | X = x ]

minimizes ∫ R_KL(μ, q) π(μ) dμ. (Aitchison 1975)

Let p_U(y | x) denote the Bayes rule under π_U(μ) ≡ 1.

p_U(y | x) dominates p(y | μ = x), the naive plug-in predictive distribution. (Aitchison 1975)

p_U(y | x) is best invariant and minimax with constant risk. (Murray 1977, Ng 1980, Barron and Liang 2003)

Shocking Fact: p_U(y | x) is inadmissible when p ≥ 3.
p_H(y | x), the Bayes rule under the Harmonic prior π_H(μ) = ‖μ‖^{−(p−2)}, dominates p_U(y | x) when p ≥ 3. (Komaki 2001)

p_a(y | x), the Bayes rule under π_a(μ) where

μ | s ~ N_p(0, s v_0 I), s ~ (1 + s)^{a−2},

dominates p_U(y | x) and is proper Bayes when v_x ≤ v_0, and when p = 5 and a ∈ [.5, 1) or when p ≥ 6 and a ∈ [0, 1). (Liang 2002)

Main Question: Are these domination results attributable to the properties of m_π?
4. A Key Representation for p_π(y | x)

Let m_π(x; v_x) denote the marginal of X | μ ~ N_p(μ, v_x I) under π(μ).

Lemma: The Bayes rule p_π(y | x) can be expressed as

p_π(y | x) = [ m_π(w; v_w) / m_π(x; v_x) ] p_U(y | x)

where

W = (v_y X + v_x Y) / (v_x + v_y) ~ N_p(μ, v_w I), v_w = v_x v_y / (v_x + v_y).

Using this, the risk improvement can be expressed as

R_KL(μ, p_U) − R_KL(μ, p_π) = ∫∫ p_{v_x}(x | μ) p_{v_y}(y | μ) log[ p_π(y | x) / p_U(y | x) ] dx dy

= E_{μ,v_w} log m_π(W; v_w) − E_{μ,v_x} log m_π(X; v_x).
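The Lemma can be checked numerically for a prior with closed-form marginals, e.g. a single coordinate with μ ~ N(0, τ²). The values of v_x, v_y, τ², and the evaluation point are illustrative assumptions:

```python
import numpy as np

def norm_pdf(z, var):
    return np.exp(-z**2/(2*var))/np.sqrt(2*np.pi*var)

# Check p_pi(y|x) = [m_pi(w; v_w)/m_pi(x; v_x)] p_U(y|x) for the
# conjugate prior mu ~ N(0, tau2), where every density is closed form (p = 1).
v_x, v_y, tau2 = 1.0, 0.2, 2.5
v_w = v_x*v_y/(v_x + v_y)              # variance of W = (v_y X + v_x Y)/(v_x + v_y)

x, y = 1.3, 0.4                        # arbitrary evaluation point
w = (v_y*x + v_x*y)/(v_x + v_y)

# Left side: exact Bayes predictive density under the N(0, tau2) prior.
b = tau2/(tau2 + v_x)                  # posterior mean shrinkage factor
s = tau2*v_x/(tau2 + v_x)              # posterior variance
lhs = norm_pdf(y - b*x, s + v_y)

# Right side: marginal ratio times the flat-prior predictive p_U(y|x) = N(x, v_x + v_y).
rhs = norm_pdf(w, tau2 + v_w)/norm_pdf(x, tau2 + v_x)*norm_pdf(y - x, v_x + v_y)
assert np.isclose(lhs, rhs)
print(lhs, rhs)
```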
5. An Analogue of Stein's Unbiased Estimate of Risk

Theorem:

(∂/∂v) E_{μ,v} log m_π(Z; v) = E_{μ,v}[ ∇^2 m_π(Z; v)/m_π(Z; v) − (1/2) ‖∇ log m_π(Z; v)‖^2 ]

= E_{μ,v}[ 2 ∇^2 √m_π(Z; v) / √m_π(Z; v) ]

The proof relies on the heat equation

(∂/∂v) m_π(z; v) = (1/2) ∇^2 m_π(z; v).

Remark: This shows that the risk improvement in the quadratic risk estimation problem can be expressed in terms of log m_π as

R_Q(θ, θ̂_U) − R_Q(θ, θ̂_π) = −2 [ (∂/∂v) E_{θ,v} log m_π(Z; v) ]_{v=1}.
6. General Conditions for Minimax Prediction

Theorem: p_π(y | x) will be minimax if either of the following hold for all v ∈ [v_w, v_x]:

(i) √m_π(z; v) is superharmonic

(ii) m_π(z; v) is superharmonic

Corollary: p_π(y | x) will be minimax if π(μ) is superharmonic.

p_π(y | x) will dominate p_U(y | x) in the above results if the superharmonicity is strict on some interval.
7. Sufficient Conditions for Admissibility

Theorem (Blyth's Method): If there is a sequence of finite nonnegative measures π_n satisfying π_n({μ : ‖μ‖ ≤ 1}) ≥ 1 such that

E_{π_n}[ R_KL(μ, q) ] − E_{π_n}[ R_KL(μ, p_{π_n}) ] → 0,

then q(y | x) is admissible.

Theorem: For any two Bayes rules p_π and p_{π_n},

E_{π_n}[ R_KL(μ, p_π) ] − E_{π_n}[ R_KL(μ, p_{π_n}) ] = (1/2) ∫_{v_w}^{v_x} ∫ [ ‖∇ h_n(z; v)‖^2 / h_n(z; v) ] m_π(z; v) dz dv

where h_n(z; v) = m_{π_n}(z; v)/m_π(z; v).

Using the explicit construction of π_n(μ) from Brown and Hwang (1984), we obtain tail behavior conditions that prove admissibility of p_U(y | x) when p ≤ 2, and admissibility of p_H(y | x) when p ≥ 3.
8. Minimax Shrinkage Towards 0

Because m_H and √m_a are superharmonic under suitable conditions, the result that p_H(y | x) and p_a(y | x) dominate p_U(y | x) and are minimax follows immediately from the Theorem.

By the Theorem, any of the improper superharmonic t-priors of Faith (1978) or any of the proper generalized t-priors of Fourdrinier, Strawderman and Wells (1998) yield Bayes rules that dominate p_U(y | x) and are minimax.

The risk functions R_KL(μ, p_H) and R_KL(μ, p_a) take on their minima at μ = 0, and then asymptote up to R_KL(μ, p_U) as ‖μ‖ → ∞.
Figure 1a displays the difference between the risk functions

[ R_KL(μ, p_U) − R_KL(μ, p_H) ]

at μ = (c, . . . , c)′, 0 ≤ c ≤ 4, when v_x = 1 and v_y = 0.2, for dimensions p = 3, 5, 7, 9.

Figure 1b displays the difference between the risk functions

[ R_KL(μ, p_U) − R_KL(μ, p_a) ]

at μ = (c, . . . , c)′, 0 ≤ c ≤ 4, when a = 0.5, v_x = 1 and v_y = 0.2, for dimensions p = 3, 5, 7, 9.
Figure 1a. The risk difference between p_U and p_H: R_KL(μ, p_U) − R_KL(μ, p_H). Here μ = (c, . . . , c)′, v_x = 1, v_y = 0.2.

Figure 1b. The risk difference between p_U and p_a with a = 0.5: R_KL(μ, p_U) − R_KL(μ, p_a). Here μ = (c, . . . , c)′, v_x = 1, v_y = 0.2.
Our Lemma representation
\[
p_H(y \mid x) = \frac{m_H(w; v_w)}{m_H(x; v_x)} \, p_U(y \mid x)
\]
shows how p_H(y | x) shrinks p_U(y | x) towards 0 by an adaptive multiplicative factor of the form
\[
b_H(x, y) = \frac{m_H(w; v_w)}{m_H(x; v_x)}.
\]
Figure 2 illustrates how this shrinkage occurs for various values of
x when p = 5.
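The shrinkage mechanism can also be seen numerically. The sketch below does not use the harmonic-prior marginal m_H (which has no simple closed form); it substitutes a conjugate normal prior μ ~ N(0, aI), for which the marginal m(z; v) is an exact N(0, (a+v)I) density, so the same ratio b(x, y) = m(w; v_w)/m(x; v_x) can be evaluated directly. Here p = 5, v_x = 1 and v_y = 0.2 follow the figures, while a = 1 is an arbitrary choice.

```python
import numpy as np

p, v_x, v_y, a = 5, 1.0, 0.2, 1.0          # v values as in Figure 1; a assumed
v_w = 1.0 / (1.0 / v_x + 1.0 / v_y)

def m(z, var_z):
    # Marginal of N(mu, v I) under mu ~ N(0, a I): a N(0, (a + v) I) density.
    v = a + var_z
    return np.exp(-(z @ z) / (2.0 * v)) / (2.0 * np.pi * v) ** (p / 2.0)

def b(x, y):
    # Adaptive multiplicative factor b(x, y) = m(w; v_w) / m(x; v_x),
    # where w is the precision-weighted average of x and y.
    w = v_w * (x / v_x + y / v_y)
    return m(w, v_w) / m(x, v_x)

x = np.array([2.0, 0.0, 0.0, 0.0, 0.0])
for y1 in (0.0, 2.0, 4.0):
    y = np.zeros(p)
    y[0] = y1
    print(y1, b(x, y))   # b > 1 near 0 and decreases as y moves away from 0
```

The factor up-weights candidate y values near 0 and down-weights distant ones, which is the shrinkage towards 0 illustrated in Figure 2.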
15
Figure 2. Shrinkage of p_U(y | x) to obtain p_H(y | x) when p = 5. Here y = (y_1, y_2, 0, 0, 0)′; panels show x = (2, 0, 0, 0, 0)′, x = (3, 0, 0, 0, 0)′ and x = (4, 0, 0, 0, 0)′.
9. Shrinkage Towards Points or Subspaces
We can trivially modify the previous priors and predictive distributions to shrink towards an arbitrary point b ∈ R^p.
Consider the recentered prior
\[
\pi_b(\mu) = \pi(\mu - b)
\]
and corresponding recentered marginal
\[
m_b(z; v) = m_\pi(z - b; v).
\]
This yields a predictive distribution
\[
p_b(y \mid x) = \frac{m_b(w; v_w)}{m_b(x; v_x)} \, p_U(y \mid x)
\]
that now shrinks p_U(y | x) towards b rather than 0.
16
More generally, we can shrink p_U(y | x) towards any subspace B of R^p whenever π, and hence m_π, is spherically symmetric.
Letting P_B z be the projection of z onto B, shrinkage towards B is obtained by using the recentered prior
\[
\pi_B(\mu) = \pi(\mu - P_B \mu)
\]
which yields the recentered marginal
\[
m_B(z; v) := m_\pi(z - P_B z; v).
\]
This modification yields a predictive distribution
\[
p_B(y \mid x) = \frac{m_B(w; v_w)}{m_B(x; v_x)} \, p_U(y \mid x)
\]
that now shrinks p_U(y | x) towards B.
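Computationally, the recentering step only requires the projection P_B z. A minimal sketch (the subspace, spanned here by the ones vector, and the test point are illustrative choices):

```python
import numpy as np

def project(z, B):
    """Project z onto the column space of the matrix B (least squares)."""
    coef, *_ = np.linalg.lstsq(B, z, rcond=None)
    return B @ coef

# Shrinkage towards the "all coordinates equal" subspace, spanned by (1,...,1)'
p = 5
B = np.ones((p, 1))
z = np.array([3.0, 1.0, 2.0, 0.0, 4.0])
P_B_z = project(z, B)          # the coordinate-wise mean in every slot
residual = z - P_B_z           # argument passed to the recentered marginal m_B
print(P_B_z, residual)
```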
If m_B(z; v) satisfies the superharmonicity conditions of the Theorem, p_B(y | x) will dominate p_U(y | x) and be minimax.
17
10. Minimax Multiple Shrinkage Prediction
For any spherically symmetric prior π, a set of subspaces B_1, …, B_N, and corresponding probabilities w_1, …, w_N, consider the recentered mixture prior
\[
\pi_*(\mu) = \sum_{i=1}^{N} w_i \, \pi_{B_i}(\mu),
\]
and corresponding recentered mixture marginal
\[
m_*(z; v) = \sum_{i=1}^{N} w_i \, m_{B_i}(z; v).
\]
Applying the estimator construction
\[
\hat{\mu}_*(X) = X + \nabla \log m_*(X; v)
\]
yields minimax multiple shrinkage estimators of μ (George 1986).
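For intuition, here is a small sketch of this estimator with v = 1. It assumes normal mixture components recentered at two target points (so that m_* is available in closed form; the spherically symmetric components on the slides would be handled the same way), and computes ∇ log m_* by central differences:

```python
import numpy as np

p, v, a = 5, 1.0, 1.0
b1, b2 = np.full(p, 2.0), np.full(p, -2.0)   # two shrinkage targets (assumed)
weights = np.array([0.5, 0.5])

def log_m_star(z):
    # Mixture marginal m*(z; v) = sum_i w_i N(z; b_i, (a + v) I); the common
    # normalizing constant drops out of the gradient of the log.
    comps = np.array([np.exp(-np.sum((z - b) ** 2) / (2.0 * (a + v)))
                      for b in (b1, b2)])
    return np.log(weights @ comps)

def mu_hat(x, h=1e-5):
    # mu*(X) = X + grad log m*(X; v), gradient by central differences
    grad = np.zeros(p)
    for i in range(p):
        e = np.zeros(p)
        e[i] = h
        grad[i] = (log_m_star(x + e) - log_m_star(x - e)) / (2.0 * h)
    return x + v * grad

x = np.array([3.0, 2.5, 2.0, 1.5, 1.0])      # much closer to b1 than to b2
print(mu_hat(x))   # approximately (2.5, 2.25, 2.0, 1.75, 1.5): pulled to b1
```

Because x lies near b1, almost all posterior weight falls on the first component and the estimate shrinks adaptively towards b1 alone.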
18
Applying the predictive construction with m_*(z; v) yields
\[
p_*(y \mid x) = \sum_{i=1}^{N} p(B_i \mid x) \, p_{B_i}(y \mid x)
\]
where
\[
p(B_i \mid x) = \frac{w_i \, m_{B_i}(x; v_x)}{\sum_{i=1}^{N} w_i \, m_{B_i}(x; v_x)}
\]
is the posterior weight on the ith prior component.
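The posterior weights themselves are a simple normalization. A minimal sketch (the marginal values m_{B_i}(x; v_x) are made-up inputs for illustration):

```python
import numpy as np

def posterior_weights(w, m_vals):
    """p(B_i | x) = w_i m_{B_i}(x; v_x) / sum_j w_j m_{B_j}(x; v_x)."""
    w = np.asarray(w, dtype=float)
    m_vals = np.asarray(m_vals, dtype=float)
    return w * m_vals / np.sum(w * m_vals)

# With equal prior weights, the component whose recentered marginal is larger
# at the observed x receives the larger posterior weight.
print(posterior_weights([0.5, 0.5], [0.03, 0.01]))  # [0.75 0.25]
```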
Theorem: If each m_{B_i}(z; v) is superharmonic, then p_*(y | x) will dominate p_U(y | x) and will be minimax.
19
Figure 3 illustrates the risk reduction
\[
[R_{KL}(\mu, p_U) - R_{KL}(\mu, p_{H*})]
\]
for μ = (c, …, c)′ obtained by p_{H*}, which adaptively shrinks p_U(y | x) towards the closer of the two points b_1 = (2, …, 2)′ and b_2 = (−2, …, −2)′ using equal weights w_1 = w_2 = 0.5.
20
Figure 3. The risk difference between p_U and multiple shrinkage p_{H*}: R_KL(μ, p_U) − R_KL(μ, p_{H*}). Here μ = (c, …, c)′, v_x = 1, v_y = 0.2, a_1 = 2, a_2 = −2, w_1 = w_2 = 0.5.
11. The Case of Unknown Variance
If v_x and v_y are unknown, suppose there exists an available independent estimate of v_x of the form s/k, where
\[
S \sim v_x \, \chi^2_k.
\]
Also assume that v_y = r v_x for a known constant r.
Substitute the estimates
\[
\hat{v}_x = s/k, \qquad \hat{v}_y = rs/k, \qquad \hat{v}_w = \frac{r}{r+1} \, s/k
\]
for v_x, v_y and v_w respectively.
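The substitution step is a one-liner per quantity; a minimal sketch (the values of s, k and r are illustrative):

```python
def plug_in_variances(s, k, r):
    """Plug-in estimates when S ~ v_x * chi^2_k and v_y = r * v_x."""
    v_x_hat = s / k
    v_y_hat = r * s / k
    v_w_hat = (r / (r + 1.0)) * s / k   # since v_w = (1/v_x + 1/v_y)^(-1)
    return v_x_hat, v_y_hat, v_w_hat

print(plug_in_variances(s=5.0, k=10, r=0.2))  # (0.5, 0.1, 0.0833...)
```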
The predictor
\[
\hat{p}_\pi(y \mid x) = \frac{m_\pi(w; \hat{v}_w)}{m_\pi(x; \hat{v}_x)} \, p_U(y \mid x)
\]
will still dominate p_U(y | x) if any of the conditions of the Theorem are satisfied.
Note, however, that p_U(y | x) is no longer best invariant or minimax.
21
12. A Complete Class Theorem
Theorem: In the KL risk problem, the class of all generalized Bayes procedures is a complete class.
A (possibly randomized) decision procedure is a probability distribution G(· | x), for each x, over the action space, namely the set of all densities g(· | x) : R^p → R of Y. The Bayes rule under a prior π can then be denoted
\[
G_\pi(\cdot \mid x) = \int p(y \mid \mu) \, \pi(\mu \mid x) \, d\mu,
\]
which is a nonrandomized rule.
The complete class result is proved by showing:
(i) If G is an admissible procedure, then it is nonrandomized.
(ii) There exists a sequence of priors {π_i} such that G_{π_i}(· | x) → G(· | x) weak* for a.e. x.
(iii) We can find a subsequence {π_{i_j}} of {π_i} and a limiting prior π which satisfy π_{i_j} → π weak* and G_{π_{i_j}}(· | x) → G_π(· | x) = G(· | x) weak* for a.e. x, so that G is a generalized Bayes rule.
22
References For Getting Started
Brown, L.D., George, E.I. and Xu, X. (2005). Admissible Predictive Estimation. Working paper.
George, E.I., Liang, F. and Xu, X. (2005). Improved Minimax Predictive Densities under Kullback-Leibler Loss. Annals of Statistics, to appear.
23