Outline

1. Random Variables
2. Introduction
3. Estimation Techniques
4. Extensions to Complex Vector Parameters
5. Application to Communication Systems
References:

[Kay93] S. M. Kay, Fundamentals of Statistical Signal Processing: Estimation Theory, Prentice-Hall, New Jersey, 1993.
[Cover-Thomas91] T. M. Cover and J. A. Thomas, Elements of Information Theory, Wiley, New York, 1991.
Random Variables

Definitions

A random variable $x$ is a function that assigns a number to every outcome of an experiment. A random variable is completely characterized by:

- Its cumulative distribution function (cdf): $F_x(a) = \Pr\{x \le a\}$
- Its probability density function (pdf): $p_x(a) = \frac{dF_x(a)}{da}$

Properties

- $F_x(a)$ lies between $0$ and $1$; if $a \le b$ then $F_x(a) \le F_x(b)$ (the cdf is non-decreasing).
- $p_x(a) \ge 0$ and $\int_{-\infty}^{\infty} p_x(a)\,da = 1$.
- The mean of $x$ is given by $m_x = E\{x\} = \int_{-\infty}^{\infty} a\,p_x(a)\,da$.
- The variance of $x$ is given by $\mathrm{var}(x) = E\{(x - m_x)^2\} = E\{x^2\} - m_x^2$.
Examples

- Uniform random variable on $[a, b]$. pdf: $p_x(u) = \frac{1}{b-a}$ for $a \le u \le b$ (and $0$ elsewhere); mean $m_x = \frac{a+b}{2}$; $\mathrm{var}(x) = \frac{(b-a)^2}{12}$.
- Gaussian (normal) random variable. pdf: $p_x(u) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left(-\frac{(u-m)^2}{2\sigma^2}\right)$; mean $m_x = m$; $\mathrm{var}(x) = \sigma^2$.

For two random variables $x$ and $y$, we can define:

- the joint cdf $F_{x,y}(a,b) = \Pr\{x \le a, y \le b\}$ and the joint pdf $p_{x,y}(a,b)$;
- the correlation $E\{xy\}$ and the covariance $\mathrm{cov}(x,y) = E\{(x - m_x)(y - m_y)\}$;
- $x$ and $y$ are independent if $p_{x,y}(a,b) = p_x(a)\,p_y(b)$, and uncorrelated if $\mathrm{cov}(x,y) = 0$.
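As a quick empirical check of the Gaussian mean and variance formulas above, the following sketch draws samples and compares the sample statistics to the theoretical values (the particular numbers $m = 1.5$, $\sigma^2 = 4$ are illustrative, not from the notes):

```python
import numpy as np

# Draw many realizations of a Gaussian random variable and verify
# that the sample mean and sample variance match m and sigma^2.
rng = np.random.default_rng(0)
m, sigma2 = 1.5, 4.0                      # illustrative true mean and variance
x = rng.normal(m, np.sqrt(sigma2), size=200_000)

sample_mean = x.mean()                    # estimates m = E{x}
sample_var = x.var()                      # estimates var(x) = E{(x - m)^2}
```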
Function of Random Variables

Suppose $y = g(x)$ is a function of a random variable $x$, e.g., $y = x^2$ or $y = ax + b$. Then $y$ is itself a random variable, and its statistics follow from the pdf of $x$:

- The mean of $y$ is given by $m_y = E\{g(x)\} = \int_{-\infty}^{\infty} g(a)\,p_x(a)\,da$.
- The variance of $y$ is given by $\mathrm{var}(y) = E\{(g(x) - m_y)^2\}$.
Vector Random Variables

A vector random variable $x = [x_1, \dots, x_N]^T$ collects $N$ random variables. Its cdf/pdf is the joint cdf/pdf of all these random variables.

- The mean of $x$ is given by $m_x = E\{x\}$.
- The covariance matrix of $x$ is given by $\mathrm{cov}(x) = C_x = E\{(x - m_x)(x - m_x)^T\}$.
- For two vector random variables $x$ and $y$, the cross-covariance matrix is $\mathrm{cov}(x, y) = E\{(x - m_x)(y - m_y)^T\}$.
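A minimal numerical sketch of the mean vector and covariance matrix of a vector random variable; the 2-D Gaussian used here (its mean and covariance values) is purely illustrative:

```python
import numpy as np

# Estimate the mean vector E{x} and covariance E{(x-m)(x-m)^T}
# of a 2-D Gaussian random vector from many realizations.
rng = np.random.default_rng(1)
m = np.array([1.0, -2.0])                 # illustrative mean vector
C = np.array([[2.0, 0.5],
              [0.5, 1.0]])                # illustrative covariance matrix
x = rng.multivariate_normal(m, C, size=200_000)   # rows are realizations

m_hat = x.mean(axis=0)                    # sample mean vector
d = x - m_hat
C_hat = d.T @ d / len(x)                  # sample covariance matrix
```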
Introduction

Problem Statement

Suppose we have an unknown scalar parameter $\theta$ that we want to estimate from an observed vector $x$, which is related to $\theta$ through the following relationship:

$x = f(\theta, w)$,

where $w$ is a random noise vector. Note that $x$ is therefore itself a vector random variable, and any estimate $\hat{\theta}(x)$ is a random variable.
Special Models

To solve any estimation problem, we need a model. Here, we will look deeper into two specific models:

- The linear model: The relationship between $x$ and $\theta$ is then given by

  $x = h\theta + w$,

  where $h$ is a known vector and $w$ is a noise vector with mean $m_w = 0$ and covariance matrix $C$.

- The linear Gaussian model: This model is a special case of the linear model, where the noise vector $w$ is assumed to be Gaussian (or normal) distributed:

  $p(w) = \frac{1}{(2\pi)^{N/2} \det^{1/2}(C)} \exp\!\left(-\tfrac{1}{2} w^T C^{-1} w\right)$.
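The linear Gaussian model is easy to simulate; the following sketch generates one realization of $x = h\theta + w$ with $w \sim \mathcal{N}(0, C)$ (all numerical values are illustrative):

```python
import numpy as np

# Simulate one observation vector from the linear Gaussian model
# x = h*theta + w, with w ~ N(0, C).
rng = np.random.default_rng(2)
N = 8
theta = 3.0                                   # unknown scalar parameter (illustrative)
h = rng.standard_normal(N)                    # known model vector
A = rng.standard_normal((N, N))
C = A @ A.T + N * np.eye(N)                   # a valid (positive definite) noise covariance
w = rng.multivariate_normal(np.zeros(N), C)   # Gaussian noise realization
x = h * theta + w                             # observed vector
```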
Estimation Techniques

We can view the unknown parameter $\theta$ in two ways:

- $\theta$ is viewed as a deterministic variable:
  - Minimum Variance Unbiased (MVU) Estimator
  - Best Linear Unbiased Estimator (BLUE)
  - Maximum Likelihood Estimator (MLE)
  - Least Squares Estimator (LSE)
- $\theta$ is viewed as a random variable:
  - Minimum Mean Square Error (MMSE) Estimator
  - Linear Minimum Mean Square Error (LMMSE) Estimator
MVU Estimator

A natural performance measure for an estimator $\hat{\theta}$ is the mean square error (MSE):

$\mathrm{mse}(\hat{\theta}) = E\{(\hat{\theta} - \theta)^2\} = \mathrm{var}(\hat{\theta}) + b^2(\theta)$, with bias $b(\theta) = E\{\hat{\theta}\} - \theta$.

The MSE does not only depend on the variance but also on the bias. This means that an estimator that tries to minimize the MSE will often depend on the unknown parameter $\theta$, and is therefore unrealizable. Solution: constrain the bias to zero and minimize the variance, which leads to the so-called Minimum Variance Unbiased (MVU) estimator:

- unbiased: $m_{\hat{\theta}} = E\{\hat{\theta}\} = \theta$ for all $\theta$;
- minimum variance: $\mathrm{var}(\hat{\theta})$ is minimal among all unbiased estimators.

Remark: The MVU does not always exist and is generally difficult to find.
For the linear Gaussian model the MVU exists and its solution can be found by means of the Cramér-Rao lower bound (see notes, [Kay93], [Cover-Thomas91]):

$\hat{\theta} = (h^T C^{-1} h)^{-1} h^T C^{-1} x$.

Properties:

- $m_{\hat{\theta}} = E\{\hat{\theta}\} = \theta$ (unbiased);
- $\mathrm{var}(\hat{\theta}) = (h^T C^{-1} h)^{-1}$, which attains the Cramér-Rao lower bound.
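A minimal Monte Carlo sketch of the MVU estimator for the linear Gaussian model, $\hat{\theta} = (h^T C^{-1} h)^{-1} h^T C^{-1} x$, checking that it is unbiased and that its variance matches $(h^T C^{-1} h)^{-1}$ (all model values are illustrative):

```python
import numpy as np

# Monte Carlo check of the MVU estimator for x = h*theta + w, w ~ N(0, C).
rng = np.random.default_rng(3)
N, theta = 6, 2.0
h = rng.standard_normal(N)
C = np.diag(rng.uniform(0.5, 2.0, N))          # diagonal noise covariance (illustrative)
Cinv = np.linalg.inv(C)
gain = Cinv @ h / (h @ Cinv @ h)               # estimator weights (C^{-1}h)/(h^T C^{-1} h)

trials = 50_000
W = rng.multivariate_normal(np.zeros(N), C, size=trials)
X = theta * h + W                              # many independent observation vectors
est = X @ gain                                 # one estimate per trial

emp_mean = est.mean()                          # should be close to theta (unbiased)
emp_var = est.var()                            # should match 1 / (h^T C^{-1} h)
theory_var = 1.0 / (h @ Cinv @ h)
```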
BLUE

The Best Linear Unbiased Estimator (BLUE) constrains the estimator to be linear in the data, $\hat{\theta} = a^T x$, and imposes:

- unbiasedness: $m_{\hat{\theta}} = E\{\hat{\theta}\} = a^T E\{x\} = \theta$ for all $\theta$;
- minimum variance: $\mathrm{var}(\hat{\theta}) = a^T C a$ is minimal.

The first condition can only be satisfied if we assume a linear model for $m_x = E\{x\}$, i.e., $m_x = h\theta$, in which case it reduces to $a^T h = 1$. The BLUE is thus found by solving

$\min_a\; a^T C a$ subject to $a^T h = 1$.

Solving this constrained minimization (e.g., with a Lagrange multiplier) yields

$a = \frac{C^{-1} h}{h^T C^{-1} h}$, so $\hat{\theta} = (h^T C^{-1} h)^{-1} h^T C^{-1} x$.

Properties:

- $E\{\hat{\theta}\} = \theta$ (unbiased);
- $\mathrm{var}(\hat{\theta}) = (h^T C^{-1} h)^{-1}$.

Remark: For the linear model the BLUE equals the MVU only when the noise is Gaussian.
MLE

Since the pdf of $x$ depends on $\theta$, we often write it as a function that is parametrized on $\theta$: $p(x; \theta)$. This function can also be interpreted as the likelihood function, since it tells us how likely it is to observe a certain $x$. The Maximum Likelihood Estimator (MLE) finds the $\theta$ that maximizes $p(x; \theta)$ for a certain observed $x$:

$\hat{\theta} = \arg\max_{\theta}\; p(x; \theta)$.

The MLE is generally easy to derive. Asymptotically (as the number of observations grows), the MLE has the same mean and variance as the MVU, although for a finite number of observations it is not necessarily equivalent to the MVU.
For the linear Gaussian model, the likelihood function is given by

$p(x; \theta) = \frac{1}{(2\pi)^{N/2} \det^{1/2}(C)} \exp\!\left(-\tfrac{1}{2}(x - h\theta)^T C^{-1} (x - h\theta)\right)$.

Maximizing the likelihood is then equivalent to

$\min_{\theta}\; (x - h\theta)^T C^{-1} (x - h\theta)$.

Setting the derivative with respect to $\theta$ to zero we get

$\hat{\theta} = (h^T C^{-1} h)^{-1} h^T C^{-1} x$.

Remark: For the linear Gaussian model, the MLE is equivalent to the MVU estimator.
LSE

The Least Squares Estimator (LSE) picks the $\hat{\theta}$ for which the squared model error

$\|x - h\theta\|^2 = (x - h\theta)^T (x - h\theta)$

is minimal:

$\hat{\theta} = \arg\min_{\theta}\; \|x - h\theta\|^2 = (h^T h)^{-1} h^T x$.

Properties:

- No probabilistic assumptions are required.
- The performance highly depends on the noise.

Remark: For the linear model the LSE corresponds to the BLUE when the noise is white, and to the MVU when the noise is Gaussian and white.
As before, let us compute the residual $x - h\hat{\theta}$. For the linear model the LSE leads to the following orthogonality condition:

$h^T (x - h\hat{\theta}) = h^T x - h^T h\,(h^T h)^{-1} h^T x = 0$,

i.e., the residual is orthogonal to the model vector $h$.
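A short numerical sketch of the LSE for a scalar parameter and its orthogonality condition $h^T(x - h\hat{\theta}) = 0$ (the data values are illustrative):

```python
import numpy as np

# Least squares estimate theta_hat = (h^T h)^{-1} h^T x and its residual.
rng = np.random.default_rng(4)
N = 10
h = rng.standard_normal(N)                   # model vector
x = 1.7 * h + 0.3 * rng.standard_normal(N)   # noisy observations (true theta = 1.7)

theta_hat = (h @ x) / (h @ h)                # LSE for a scalar parameter
residual = x - h * theta_hat                 # estimation residual

ortho = h @ residual                         # orthogonality: should be (numerically) zero
```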
MMSE Estimator

We now view $\theta$ as a random variable. This allows us to use prior knowledge about $\theta$, i.e., its prior pdf $p(\theta)$. The performance measure is now the Bayesian MSE:

$\mathrm{Bmse}(\hat{\theta}) = E\{(\hat{\theta} - \theta)^2\}$,

where both $x$ and $\theta$ are random, hence the notation Bmse for Bayesian MSE. Compare this with the classical MSE:

$\mathrm{mse}(\hat{\theta}) = E_{x}\{(\hat{\theta} - \theta)^2\}$ versus $\mathrm{Bmse}(\hat{\theta}) = E_{x,\theta}\{(\hat{\theta} - \theta)^2\}$.

Whereas the first MSE depends on $\theta$, the second MSE does not depend on $\theta$, since it averages over the prior.
Minimizing the Bayesian MSE,

$\min_{\hat{\theta}}\; \mathrm{Bmse}(\hat{\theta}) = \min_{\hat{\theta}}\; E\{(\hat{\theta} - \theta)^2\}$,

and setting the derivative with respect to $\hat{\theta}$ to zero, we obtain the Minimum Mean Square Error (MMSE) estimator:

$\hat{\theta} = E\{\theta \mid x\}$,

i.e., the mean of the posterior pdf $p(\theta \mid x)$.

Remarks: In contrast to the MVU estimator, the MMSE estimator always exists. The MMSE estimator has a smaller average MSE (Bayesian MSE) than the MVU estimator, but the MMSE estimator is biased whereas the MVU estimator is unbiased.
For the linear Gaussian model, where $\theta$ is assumed to be Gaussian with mean $0$ and variance $\sigma_\theta^2$, the MMSE estimator becomes

$\hat{\theta} = \sigma_\theta^2 h^T (\sigma_\theta^2 h h^T + C)^{-1} x = \left(\frac{1}{\sigma_\theta^2} + h^T C^{-1} h\right)^{-1} h^T C^{-1} x$,

where the last equality is due to the matrix inversion lemma (see notes):

$(A + BCD)^{-1} = A^{-1} - A^{-1} B (C^{-1} + D A^{-1} B)^{-1} D A^{-1}$.

Remark: Compare this with the MVU for the linear Gaussian model: the prior enters through the extra term $1/\sigma_\theta^2$, and as $\sigma_\theta^2 \to \infty$ the MMSE estimator reduces to the MVU estimator.
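The two forms of the MMSE estimator above are related by the matrix inversion lemma; the following sketch checks that equality numerically (model dimensions and values are illustrative):

```python
import numpy as np

# Numerically verify the two equivalent MMSE forms for the linear
# Gaussian model with prior theta ~ N(0, sigma_theta^2).
rng = np.random.default_rng(5)
N, sigma_theta2 = 5, 2.0
h = rng.standard_normal(N)
C = np.diag(rng.uniform(0.5, 1.5, N))          # illustrative noise covariance
x = rng.standard_normal(N)                     # some observed vector

# Form 1: sigma^2 h^T (sigma^2 h h^T + C)^{-1} x
est1 = sigma_theta2 * h @ np.linalg.solve(sigma_theta2 * np.outer(h, h) + C, x)

# Form 2: (1/sigma^2 + h^T C^{-1} h)^{-1} h^T C^{-1} x
Cinv = np.linalg.inv(C)
est2 = (h @ Cinv @ x) / (1.0 / sigma_theta2 + h @ Cinv @ h)
```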
LMMSE Estimator

As for the BLUE, we now constrain the estimator to have the form $\hat{\theta} = a^T x$. The Bayesian MSE can then be written as

$\mathrm{Bmse}(\hat{\theta}) = E\{(a^T x - \theta)^2\} = a^T E\{x x^T\} a - 2 a^T E\{x \theta\} + E\{\theta^2\}$.

Setting the derivative with respect to $a$ to zero, we obtain

$a = E\{x x^T\}^{-1} E\{x \theta\}$.
Let us compute these moments for the linear model, with $E\{\theta\} = 0$ and $E\{\theta^2\} = \sigma_\theta^2$:

$E\{x x^T\} = \sigma_\theta^2 h h^T + C$ and $E\{x \theta\} = \sigma_\theta^2 h$.

Hence, the LMMSE estimator is given by

$\hat{\theta} = \sigma_\theta^2 h^T (\sigma_\theta^2 h h^T + C)^{-1} x = \left(\frac{1}{\sigma_\theta^2} + h^T C^{-1} h\right)^{-1} h^T C^{-1} x$,

where the last equality is again due to the matrix inversion lemma.

Remark: The LMMSE estimator is equivalent to the MMSE estimator when the noise and the unknown parameter are Gaussian.
Summary

For a scalar parameter $\theta$ in the linear model $x = h\theta + w$:

- MVU, BLUE, MLE: $\hat{\theta} = (h^T C^{-1} h)^{-1} h^T C^{-1} x$, with mean $\theta$ and var. $(h^T C^{-1} h)^{-1}$ (see the earlier remarks for when these three coincide).
- LSE: $\hat{\theta} = (h^T h)^{-1} h^T x$, with mean $\theta$.
- MMSE, LMMSE: $\hat{\theta} = (1/\sigma_\theta^2 + h^T C^{-1} h)^{-1} h^T C^{-1} x$.

Extending to a complex vector parameter $\theta$ in the linear model $x = H\theta + w$, with $(\cdot)^H$ the Hermitian transpose:

- MVU, BLUE, MLE: $\hat{\theta} = (H^H C^{-1} H)^{-1} H^H C^{-1} x$, with mean $\theta$ and cov. $(H^H C^{-1} H)^{-1}$.
- LSE: $\hat{\theta} = (H^H H)^{-1} H^H x$, with mean $\theta$.
- MMSE, LMMSE: $\hat{\theta} = (C_\theta^{-1} + H^H C^{-1} H)^{-1} H^H C^{-1} x$, where $C_\theta$ is the prior covariance of $\theta$.
Application to Communications

Consider a communication system where a symbol sequence $s(n)$ is transmitted over a noisy finite impulse response (FIR) channel $h(l)$. The received samples are given by

$x(n) = \sum_{l=0}^{L-1} h(l)\, s(n-l) + w(n)$,

where the symbol vector $s = [s(0), \dots, s(K-1)]^T$ has length $K$ and the channel vector $h = [h(0), \dots, h(L-1)]^T$ has length $L$.

Stacking the received samples in a vector $x$ and defining the Toeplitz (convolution) matrices $S$ (built from the symbols) and $H$ (built from the channel), we obtain the two equivalent linear models

$x = S h + w = H s + w$.
Most communication systems (GSM, UMTS, WLAN, ...) consist of two periods:

- Training period: During this period we try to estimate the channel by transmitting some known symbols, also known as training symbols or pilots.
- Data period: During this period we use the estimated channel to recover the unknown data symbols that convey the useful information.

What kind of processing do we use in each of these periods?

- During the training period we use one of the previously developed estimation techniques on the channel estimation model, $x = S h + w$, assuming that $S$ is known.
- During the data period we use one of the previously developed estimation techniques on the symbol estimation model, $x = H s + w$, assuming that $H$ is known.
Channel estimation

- BLUE (for white noise also the LSE; when the noise is Gaussian also the MVU and MLE):

  $\hat{h} = (S^H C^{-1} S)^{-1} S^H C^{-1} x$.

- LMMSE (when the noise and channel are Gaussian also the MMSE):

  $\hat{h} = (C_h^{-1} + S^H C^{-1} S)^{-1} S^H C^{-1} x$.

Remark: Note that the LMMSE estimator requires the knowledge of the channel covariance $C_h$, which is generally not available.
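A minimal sketch of least-squares channel estimation from known training symbols, $\hat{h} = (S^H S)^{-1} S^H x$ (the real-valued channel taps, BPSK training sequence, and noise level are illustrative):

```python
import numpy as np

# LS channel estimation: build the Toeplitz convolution matrix S from
# known training symbols and solve x = S h + w for h in the LS sense.
rng = np.random.default_rng(6)
L, K = 3, 40
h = np.array([1.0, 0.5, -0.2])                # illustrative true channel, length L
s = rng.choice([-1.0, 1.0], size=K)           # known BPSK training symbols

x = np.convolve(s, h) + 0.05 * rng.standard_normal(K + L - 1)

# (K+L-1) x L convolution matrix S: column l holds s shifted down by l,
# so that S @ h equals np.convolve(s, h).
S = np.zeros((K + L - 1, L))
for l in range(L):
    S[l:l + K, l] = s

h_hat = np.linalg.lstsq(S, x, rcond=None)[0]  # LS channel estimate
```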
Symbol estimation

- BLUE (for white noise also the LSE; when the noise is Gaussian also the MVU and MLE):

  $\hat{s} = (H^H C^{-1} H)^{-1} H^H C^{-1} x$.

- LMMSE (when the noise and symbols are Gaussian also the MMSE):

  $\hat{s} = (C_s^{-1} + H^H C^{-1} H)^{-1} H^H C^{-1} x$.

Remark: Note that the LMMSE estimator requires the knowledge of the symbol covariance $C_s$, which can be set to $C_s = \sigma_s^2 I$ (uncorrelated symbols of equal power $\sigma_s^2$).
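A minimal sketch of LMMSE symbol estimation, $\hat{s} = (C_s^{-1} + H^H C^{-1} H)^{-1} H^H C^{-1} x$ with $C_s = \sigma_s^2 I$ and white noise $C = \sigma_w^2 I$ (the real-valued channel, BPSK symbols, and noise level are illustrative):

```python
import numpy as np

# LMMSE symbol estimation through a known FIR channel.
rng = np.random.default_rng(7)
L, K = 3, 30
h = np.array([1.0, 0.5, -0.2])                 # known (previously estimated) channel
s = rng.choice([-1.0, 1.0], size=K)            # unknown BPSK data symbols
sigma_w2, sigma_s2 = 0.01, 1.0                 # noise and symbol power (illustrative)

x = np.convolve(s, h) + np.sqrt(sigma_w2) * rng.standard_normal(K + L - 1)

# (K+L-1) x K channel convolution matrix H, so that H @ s = np.convolve(s, h).
H = np.zeros((K + L - 1, K))
for k in range(K):
    H[k:k + L, k] = h

# LMMSE: (C_s^{-1} + H^T C^{-1} H)^{-1} H^T C^{-1} x with C_s = sigma_s2*I, C = sigma_w2*I
A = np.eye(K) / sigma_s2 + H.T @ H / sigma_w2
s_hat = np.linalg.solve(A, H.T @ x / sigma_w2)  # soft LMMSE symbol estimates

bits = np.sign(s_hat)                           # hard BPSK decisions
```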