Outline
1. Random Variables
2. Introduction
3. Estimation Techniques
4. Extensions to Complex Vector Parameters
5. Application to Communication Systems
Random Variables
Definitions
A random variable $X$ is a variable whose value depends on the outcome of a random experiment.
A random variable $X$ is characterized by its probability density function (pdf) $p(x)$.
Properties
$p(x) \ge 0$ and $\int p(x)\, dx = 1$
The probability that $X$ lies between $a$ and $b$ is then given by $P(a \le X \le b) = \int_a^b p(x)\, dx$.
The mean of $X$ is given by $m_x = E\{X\} = \int x\, p(x)\, dx$.
The variance of $X$ is given by $\mathrm{var}(X) = E\{(X - m_x)^2\} = \int (x - m_x)^2\, p(x)\, dx$.
Random Variables
Examples
Uniform on $[a, b]$: pdf $p(x) = \frac{1}{b-a}$ for $a \le x \le b$, mean $\frac{a+b}{2}$, var $\frac{(b-a)^2}{12}$
Gaussian: pdf $p(x) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{(x-m)^2}{2\sigma^2}\right)$, mean $m$, var $\sigma^2$
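These moment formulas can be sanity-checked numerically. The sketch below (numpy, not part of the original slides; the parameter values and sample sizes are arbitrary) compares sample mean and variance against the analytic values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Gaussian with mean m and variance sigma^2
m, sigma = 2.0, 3.0
g = rng.normal(m, sigma, 1_000_000)

# Uniform on [a, b]: mean (a+b)/2, variance (b-a)^2/12
a, b = -1.0, 5.0
u = rng.uniform(a, b, 1_000_000)

gauss_mean, gauss_var = g.mean(), g.var()   # close to 2 and 9
unif_mean, unif_var = u.mean(), u.var()     # close to 2 and 3
```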
Random Variables
Two random variables
Two random variables $X$ and $Y$ are characterized by their joint pdf $p(x, y)$.
For the marginal pdfs we have $p(x) = \int p(x, y)\, dy$ and $p(y) = \int p(x, y)\, dx$.
For independent random variables $X$ and $Y$ we have $p(x, y) = p(x)\, p(y)$.
The conditional pdfs are given by $p(x|y) = \frac{p(x, y)}{p(y)}$ and $p(y|x) = \frac{p(x, y)}{p(x)}$.
For two random variables $X$ and $Y$, we can define the covariance $\mathrm{cov}(X, Y) = E\{(X - m_x)(Y - m_y)\}$.
Random Variables
Function of random variables
A function of a random variable $X$ is again a random variable, e.g., $Y = g(X)$.
Suppose $Y = aX + b$. Then the mean of $Y$ is given by $m_y = E\{Y\} = a\, m_x + b$.
The variance of $Y$ is given by $\mathrm{var}(Y) = E\{(Y - m_y)^2\} = a^2\, \mathrm{var}(X)$.
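As a quick numerical check of these linear-transformation rules (a numpy sketch with arbitrary values, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(1.0, 2.0, 1_000_000)   # X with mean 1 and variance 4
a, b = 3.0, -5.0
y = a * x + b                          # Y = aX + b

y_mean = y.mean()                      # expect a*m_x + b = -2
y_var = y.var()                        # expect a^2 * var(X) = 36
```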
Random Variables
Vector random variables
A vector random variable $\mathbf{x} = [X_1, X_2, \dots, X_N]^T$ collects $N$ random variables.
The mean of $\mathbf{x}$ is given by $\mathbf{m}_x = E\{\mathbf{x}\}$.
The covariance matrix of $\mathbf{x}$ is given by $\mathrm{cov}(\mathbf{x}) = E\{(\mathbf{x} - \mathbf{m}_x)(\mathbf{x} - \mathbf{m}_x)^T\}$.
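The mean vector and covariance matrix can be estimated from samples; the sketch below (numpy, illustrative values of my choosing) checks the sample estimates against the true $\mathbf{m}_x$ and $\mathrm{cov}(\mathbf{x})$.

```python
import numpy as np

rng = np.random.default_rng(2)
m = np.array([1.0, -1.0])                 # mean vector
C = np.array([[2.0, 0.5],
              [0.5, 1.0]])                # covariance matrix

X = rng.multivariate_normal(m, C, size=200_000)   # one 2-d sample per row

m_hat = X.mean(axis=0)                    # sample mean vector
C_hat = np.cov(X, rowvar=False)           # sample covariance matrix
```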
Introduction
Problem Statement
Suppose we have an unknown scalar parameter $\theta$ that we want to estimate from an observed data vector $\mathbf{x} = [x[0], x[1], \dots, x[N-1]]^T$, where the dependence of the data on the parameter is described by the pdf $p(\mathbf{x}; \theta)$.
Note that the estimator $\hat{\theta} = g(\mathbf{x})$ is a function of the data and is therefore itself a random variable.
Introduction
Special Models
To solve any estimation problem, we need a model. Here, we will look deeper into two specific models:
The linear model: the data is then given by $\mathbf{x} = \mathbf{H}\theta + \mathbf{w}$, where $\mathbf{H}$ is a known observation matrix and $\mathbf{w}$ is a noise vector with mean $\mathbf{m}_w = \mathbf{0}$ and covariance $\mathrm{cov}(\mathbf{w}) = \mathbf{C}$.
The linear Gaussian model: This model is a special case of the linear model, where the noise vector $\mathbf{w}$ is Gaussian:
$p(\mathbf{w}) = \frac{1}{(2\pi)^{N/2} \det^{1/2}(\mathbf{C})} \exp\left(-\frac{1}{2} \mathbf{w}^T \mathbf{C}^{-1} \mathbf{w}\right)$
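To make the linear Gaussian model concrete, the following numpy sketch generates one realization of $\mathbf{x} = \mathbf{H}\theta + \mathbf{w}$ for the simplest case $\mathbf{H} = [1, \dots, 1]^T$ and white noise ($\mathbf{C} = \sigma^2\mathbf{I}$); the parameter and noise values are arbitrary choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 1000
theta = 2.5                            # example value of the unknown parameter
H = np.ones(N)                         # simplest observation vector: x[n] = theta + w[n]
sigma2 = 0.25                          # noise variance, i.e., C = sigma2 * I (white noise)

w = rng.normal(0.0, np.sqrt(sigma2), N)
x = H * theta + w                      # one realization of the linear Gaussian model
```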
Estimation Techniques
Minimum Variance Unbiased (MVU) Estimation
A natural performance criterion is the mean squared error:
$\mathrm{mse}(\hat{\theta}) = E\{(\hat{\theta} - \theta)^2\} = \mathrm{var}(\hat{\theta}) + b^2(\theta)$, where $b(\theta) = E\{\hat{\theta}\} - \theta$ is the bias.
The MSE does not only depend on the variance but also on the bias.
This means that an estimator that tries to minimize the MSE will often depend on the unknown parameter $\theta$, and is therefore unrealizable.
Solution: constrain the bias to zero and minimize the variance, which leads to the so-called Minimum Variance Unbiased (MVU) estimator:
unbiased: $m_{\hat{\theta}} = E\{\hat{\theta}\} = \theta$ for all $\theta$
minimum variance: $\mathrm{var}(\hat{\theta})$ is minimal for all $\theta$
Remark: The MVU does not always exist and is generally difficult to find.
For the linear Gaussian model the MVU exists and its solution can be found by means of the Cramér-Rao lower bound (see notes, [Kay93], [Cover-Thomas91]):
$\hat{\theta}_{MVU} = (\mathbf{H}^T \mathbf{C}^{-1} \mathbf{H})^{-1} \mathbf{H}^T \mathbf{C}^{-1} \mathbf{x}$, with $\mathrm{var}(\hat{\theta}_{MVU}) = (\mathbf{H}^T \mathbf{C}^{-1} \mathbf{H})^{-1}$
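A Monte Carlo sketch (numpy, not part of the original slides; dimensions and parameter values are arbitrary) can confirm that this estimator is unbiased and that its variance matches $(\mathbf{H}^T\mathbf{C}^{-1}\mathbf{H})^{-1}$.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 500
theta = 1.7                                  # true parameter (chosen for the demo)
H = np.ones((N, 1))                          # observation matrix (N x 1)
sigma2 = 0.5
C = sigma2 * np.eye(N)                       # white noise covariance
Cinv = np.linalg.inv(C)

# Estimator gain: (H^T C^-1 H)^-1 H^T C^-1
A = np.linalg.inv(H.T @ Cinv @ H) @ H.T @ Cinv
predicted_var = np.linalg.inv(H.T @ Cinv @ H)[0, 0]   # = sigma2 / N

# Monte Carlo: 2000 independent realizations of x = H*theta + w
W = rng.normal(0.0, np.sqrt(sigma2), size=(2000, N))
X = theta * H.ravel() + W                    # each row is one realization
est = (X @ A.T).ravel()                      # one estimate per realization
```

The sample mean of `est` should be close to `theta`, and its sample variance close to `predicted_var`.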
Estimation Techniques
Best Linear Unbiased Estimator (BLUE)
The BLUE constrains the estimator to be linear in the data: $\hat{\theta} = \mathbf{a}^T \mathbf{x}$.
Properties:
Unbiased: $E\{\hat{\theta}\} = \mathbf{a}^T \mathbf{m}_x = \theta$ for all $\theta$
Minimum variance: $\mathrm{var}(\hat{\theta}) = \mathbf{a}^T\, \mathrm{cov}(\mathbf{x})\, \mathbf{a} = \mathbf{a}^T \mathbf{C}\, \mathbf{a}$ is minimal
The first condition can only be satisfied if we assume a linear model for $\mathbf{m}_x$:
$\mathbf{m}_x = \mathbf{H}\theta$
Hence, we have to solve
Problem: $\min_{\mathbf{a}}\ \mathbf{a}^T \mathbf{C}\, \mathbf{a}$ subject to $\mathbf{a}^T \mathbf{H} = 1$
Using a Lagrange multiplier $\lambda$ and setting the derivative of $\mathbf{a}^T \mathbf{C}\, \mathbf{a} + \lambda(\mathbf{a}^T \mathbf{H} - 1)$ with respect to $\mathbf{a}$ to zero we get $\mathbf{a} = -\frac{\lambda}{2} \mathbf{C}^{-1} \mathbf{H}$; imposing the constraint $\mathbf{a}^T \mathbf{H} = 1$ then gives $\mathbf{a} = \frac{\mathbf{C}^{-1} \mathbf{H}}{\mathbf{H}^T \mathbf{C}^{-1} \mathbf{H}}$.
Solution: $\hat{\theta}_{BLUE} = (\mathbf{H}^T \mathbf{C}^{-1} \mathbf{H})^{-1} \mathbf{H}^T \mathbf{C}^{-1} \mathbf{x}$, with $\mathrm{var}(\hat{\theta}_{BLUE}) = (\mathbf{H}^T \mathbf{C}^{-1} \mathbf{H})^{-1}$
Remark: For the linear model the BLUE equals the MVU only when the noise is Gaussian.
Estimation Techniques
Maximum Likelihood Estimator (MLE)
The pdf $p(\mathbf{x}; \theta)$ tells us how likely it is to observe a certain $\mathbf{x}$. The Maximum Likelihood Estimator (MLE) maximizes this likelihood for a certain observed $\mathbf{x}$:
$\hat{\theta}_{ML} = \arg\max_{\theta}\ p(\mathbf{x}; \theta)$
For the linear Gaussian model,
$p(\mathbf{x}; \theta) = \frac{1}{(2\pi)^{N/2} \det^{1/2}(\mathbf{C})} \exp\left(-\frac{1}{2}(\mathbf{x} - \mathbf{H}\theta)^T \mathbf{C}^{-1} (\mathbf{x} - \mathbf{H}\theta)\right)$
so that maximizing the likelihood is equivalent to
Problem: $\min_{\theta}\ (\mathbf{x} - \mathbf{H}\theta)^T \mathbf{C}^{-1} (\mathbf{x} - \mathbf{H}\theta)$
Proof: Rewriting the cost function that we have to minimize, we get
$(\mathbf{x} - \mathbf{H}\theta)^T \mathbf{C}^{-1} (\mathbf{x} - \mathbf{H}\theta) = \mathbf{x}^T \mathbf{C}^{-1} \mathbf{x} - 2\theta\, \mathbf{H}^T \mathbf{C}^{-1} \mathbf{x} + \theta^2\, \mathbf{H}^T \mathbf{C}^{-1} \mathbf{H}$
Setting the derivative with respect to $\theta$ to zero we get $2\theta\, \mathbf{H}^T \mathbf{C}^{-1} \mathbf{H} - 2\mathbf{H}^T \mathbf{C}^{-1} \mathbf{x} = 0$, i.e.,
Solution: $\hat{\theta}_{ML} = (\mathbf{H}^T \mathbf{C}^{-1} \mathbf{H})^{-1} \mathbf{H}^T \mathbf{C}^{-1} \mathbf{x}$
Remark: For the linear Gaussian model, the MLE is equivalent to the MVU estimator.
Estimation Techniques
Least Squares Estimator (LSE)
Properties:
No probabilistic assumptions required
The performance highly depends on the noise
The LSE picks the value of $\theta$ for which the squared distance between the data $\mathbf{x}$ and the model $\mathbf{H}\theta$ is minimal:
Problem: $\min_{\theta}\ \|\mathbf{x} - \mathbf{H}\theta\|^2 = \min_{\theta}\ (\mathbf{x} - \mathbf{H}\theta)^T (\mathbf{x} - \mathbf{H}\theta)$
Proof: As before, setting the derivative with respect to $\theta$ to zero gives
Solution: $\hat{\theta}_{LS} = (\mathbf{H}^T \mathbf{H})^{-1} \mathbf{H}^T \mathbf{x}$
Remark: For the linear model the LSE corresponds to the BLUE when the noise is white, and to the MVU when the noise is Gaussian and white.
Let us compute $\mathbf{H}^T (\mathbf{x} - \mathbf{H}\hat{\theta}_{LS}) = \mathbf{H}^T \mathbf{x} - \mathbf{H}^T \mathbf{H} (\mathbf{H}^T \mathbf{H})^{-1} \mathbf{H}^T \mathbf{x} = \mathbf{0}$.
For the linear model the LSE leads to the following orthogonality condition: the residual $\mathbf{x} - \mathbf{H}\hat{\theta}_{LS}$ is orthogonal to (the columns of) $\mathbf{H}$.
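The least squares solution and its orthogonality condition can be verified numerically; the sketch below (numpy, arbitrary random data of my choosing) computes the LSE via `lstsq` and checks that the residual is orthogonal to the columns of $\mathbf{H}$.

```python
import numpy as np

rng = np.random.default_rng(5)
N, p = 100, 3
H = rng.normal(size=(N, p))      # arbitrary full-rank model matrix
x = rng.normal(size=N)           # arbitrary data vector

# LSE: theta_hat = (H^T H)^-1 H^T x, computed stably via lstsq
theta_ls, *_ = np.linalg.lstsq(H, x, rcond=None)

# Orthogonality condition: H^T (x - H theta_hat) = 0
residual = x - H @ theta_ls
orth = H.T @ residual
```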
Estimation Techniques
Bayesian Estimation: the MMSE Estimator
So far the parameter $\theta$ was deterministic; we now model it as a random variable. This allows us to use prior knowledge about $\theta$, i.e., its prior pdf $p(\theta)$.
The Bayesian MSE is defined as $\mathrm{Bmse}(\hat{\theta}) = E\{(\theta - \hat{\theta})^2\}$, where now both $\mathbf{x}$ and $\theta$ are random, hence the notation Bmse for Bayesian MSE:
$\mathrm{mse}(\hat{\theta}) = \int (\hat{\theta} - \theta)^2\, p(\mathbf{x}; \theta)\, d\mathbf{x}$, whereas $\mathrm{Bmse}(\hat{\theta}) = \iint (\theta - \hat{\theta})^2\, p(\mathbf{x}, \theta)\, d\mathbf{x}\, d\theta$
Whereas the first MSE depends on $\theta$, the second MSE does not depend on $\theta$.
We know that $p(\mathbf{x}, \theta) = p(\theta|\mathbf{x})\, p(\mathbf{x})$, so that
$\mathrm{Bmse}(\hat{\theta}) = \int \left[ \int (\theta - \hat{\theta})^2\, p(\theta|\mathbf{x})\, d\theta \right] p(\mathbf{x})\, d\mathbf{x}$
Since $p(\mathbf{x}) \ge 0$ for all $\mathbf{x}$, it suffices to minimize the inner integral for every $\mathbf{x}$:
Problem: $\min_{\hat{\theta}}\ \int (\theta - \hat{\theta})^2\, p(\theta|\mathbf{x})\, d\theta$
Setting the derivative with respect to $\hat{\theta}$ to zero we obtain:
Solution: $\hat{\theta}_{MMSE} = \int \theta\, p(\theta|\mathbf{x})\, d\theta = E\{\theta|\mathbf{x}\}$
Remarks:
In contrast to the MVU estimator, the MMSE estimator always exists.
The MMSE has a smaller average MSE (Bayesian MSE) than the MVU, but the MMSE estimator is biased whereas the MVU estimator is unbiased.
For the linear Gaussian model where $\theta$ is assumed to be Gaussian with mean 0 and variance $\sigma_\theta^2$, independent of the noise, the MMSE estimator becomes
$\hat{\theta}_{MMSE} = \sigma_\theta^2\, \mathbf{H}^T (\sigma_\theta^2\, \mathbf{H}\mathbf{H}^T + \mathbf{C})^{-1} \mathbf{x} = (\sigma_\theta^{-2} + \mathbf{H}^T \mathbf{C}^{-1} \mathbf{H})^{-1} \mathbf{H}^T \mathbf{C}^{-1} \mathbf{x}$
where the last equality is due to the matrix inversion lemma (see notes):
Remark: Compare this with the MVU for the linear Gaussian model.
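The matrix inversion lemma step can be checked numerically. The sketch below (numpy, with dimensions and values chosen arbitrarily for the demo) verifies that the two forms of the estimator gain coincide.

```python
import numpy as np

rng = np.random.default_rng(6)
N = 50
H = rng.normal(size=(N, 1))                   # observation vector
C = np.diag(rng.uniform(0.5, 2.0, N))         # positive-definite noise covariance
Cinv = np.linalg.inv(C)
sigma2_theta = 4.0                             # prior variance of theta

# Form 1: sigma_theta^2 H^T (sigma_theta^2 H H^T + C)^-1
g1 = sigma2_theta * H.T @ np.linalg.inv(sigma2_theta * (H @ H.T) + C)
# Form 2: (sigma_theta^-2 + H^T C^-1 H)^-1 H^T C^-1
g2 = np.linalg.inv(1.0 / sigma2_theta + H.T @ Cinv @ H) @ H.T @ Cinv
```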
Estimation Techniques
Linear MMSE (LMMSE) Estimator
As for the BLUE, we now constrain the estimator to have the form $\hat{\theta} = \mathbf{a}^T \mathbf{x}$, and minimize the Bayesian MSE:
Problem: $\min_{\mathbf{a}}\ \mathrm{Bmse}(\mathbf{a}^T \mathbf{x}) = \min_{\mathbf{a}}\ E\{(\theta - \mathbf{a}^T \mathbf{x})^2\}$
Let us compute $E\{(\theta - \mathbf{a}^T \mathbf{x})^2\} = \sigma_\theta^2 - 2\mathbf{a}^T E\{\mathbf{x}\theta\} + \mathbf{a}^T E\{\mathbf{x}\mathbf{x}^T\}\, \mathbf{a}$.
Setting the derivative with respect to $\mathbf{a}$ to zero, we obtain $\mathbf{a} = (E\{\mathbf{x}\mathbf{x}^T\})^{-1} E\{\mathbf{x}\theta\}$.
For the linear model, $E\{\mathbf{x}\theta\} = \sigma_\theta^2\, \mathbf{H}$ and $E\{\mathbf{x}\mathbf{x}^T\} = \sigma_\theta^2\, \mathbf{H}\mathbf{H}^T + \mathbf{C}$, so the LMMSE estimator is given by
$\hat{\theta}_{LMMSE} = \sigma_\theta^2\, \mathbf{H}^T (\sigma_\theta^2\, \mathbf{H}\mathbf{H}^T + \mathbf{C})^{-1} \mathbf{x} = (\sigma_\theta^{-2} + \mathbf{H}^T \mathbf{C}^{-1} \mathbf{H})^{-1} \mathbf{H}^T \mathbf{C}^{-1} \mathbf{x}$
where the last equality is again due to the matrix inversion lemma.
Remark: The LMMSE estimator is equivalent to the MMSE estimator when the noise and the unknown parameter are Gaussian.
Summary
Linear model: $\mathbf{x} = \mathbf{H}\boldsymbol{\theta} + \mathbf{w}$, noise mean $\mathbf{0}$, $\mathrm{cov}(\mathbf{w}) = \mathbf{C}$

Deterministic $\boldsymbol{\theta}$:
MVU (Gaussian noise): $\hat{\boldsymbol{\theta}} = (\mathbf{H}^T \mathbf{C}^{-1} \mathbf{H})^{-1} \mathbf{H}^T \mathbf{C}^{-1} \mathbf{x}$
BLUE: $\hat{\boldsymbol{\theta}} = (\mathbf{H}^T \mathbf{C}^{-1} \mathbf{H})^{-1} \mathbf{H}^T \mathbf{C}^{-1} \mathbf{x}$
MLE (Gaussian noise): $\hat{\boldsymbol{\theta}} = (\mathbf{H}^T \mathbf{C}^{-1} \mathbf{H})^{-1} \mathbf{H}^T \mathbf{C}^{-1} \mathbf{x}$
LSE: $\hat{\boldsymbol{\theta}} = (\mathbf{H}^T \mathbf{H})^{-1} \mathbf{H}^T \mathbf{x}$

Random $\boldsymbol{\theta}$ with mean $\mathbf{0}$ and cov. $\mathbf{C}_\theta$:
LMMSE: $\hat{\boldsymbol{\theta}} = \mathbf{C}_\theta \mathbf{H}^T (\mathbf{H} \mathbf{C}_\theta \mathbf{H}^T + \mathbf{C})^{-1} \mathbf{x} = (\mathbf{C}_\theta^{-1} + \mathbf{H}^T \mathbf{C}^{-1} \mathbf{H})^{-1} \mathbf{H}^T \mathbf{C}^{-1} \mathbf{x}$
MMSE (Gaussian noise and parameter): same as the LMMSE

For a scalar parameter, the covariance $\mathbf{C}_\theta$ reduces to the variance $\sigma_\theta^2$.
Application to Communications
Channel model
The received samples are the convolution of the transmitted symbols with the channel, plus noise:
$x[n] = \sum_{l=0}^{L-1} h[l]\, s[n-l] + w[n]$
where the channel $\mathbf{h} = [h[0], \dots, h[L-1]]^T$ has length $L$ and the symbol block $\mathbf{s} = [s[0], \dots, s[K-1]]^T$ has length $K$.
Application to Communications
Stacking the received samples in a vector and writing the convolution as a matrix-vector product, we obtain
$\mathbf{x} = \begin{bmatrix} h[0] & & \\ \vdots & \ddots & \\ h[L-1] & & h[0] \\ & \ddots & \vdots \\ & & h[L-1] \end{bmatrix} \mathbf{s} + \mathbf{w} = \mathcal{H}\mathbf{s} + \mathbf{w}$
and, equivalently, $\mathbf{x} = \mathcal{S}\mathbf{h} + \mathbf{w}$. Defining the Toeplitz convolution matrix $\mathcal{H}$ built from the channel and the Toeplitz matrix $\mathcal{S}$ built from the symbols, the first form is used for symbol estimation and the second for channel estimation.
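The Toeplitz convolution matrix is easy to build explicitly; the numpy sketch below (example channel and symbols of my choosing) checks that the matrix-vector product reproduces the convolution.

```python
import numpy as np

h = np.array([1.0, 0.5, 0.25])    # example channel impulse response, L = 3
L = len(h)
K = 5                              # symbol block length

# Tall (K + L - 1) x K Toeplitz convolution matrix built from h:
# column k carries a copy of h shifted down by k samples
Hmat = np.zeros((K + L - 1, K))
for k in range(K):
    Hmat[k:k + L, k] = h

s = np.array([1.0, -1.0, 1.0, 1.0, -1.0])   # example symbol block
x = Hmat @ s                                 # noiseless received block = h * s
```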
Application to Communications
Most communications systems (GSM, UMTS, WLAN, ...) consist of two periods:
Training period: During this period we try to estimate the channel by transmitting some known symbols, also known as training symbols or pilots.
Data period: During this period we use the estimated channel to recover the unknown data symbols that convey useful information.
What kind of processing do we use in each of these periods?
During the training period we use one of the previously developed estimation techniques to estimate the channel $\mathbf{h}$ from $\mathbf{x} = \mathcal{S}\mathbf{h} + \mathbf{w}$, assuming that $\mathcal{S}$ is known.
During the data period we use one of the previously developed estimation techniques to estimate the symbols $\mathbf{s}$ from $\mathbf{x} = \mathcal{H}\mathbf{s} + \mathbf{w}$, assuming that $\mathcal{H}$ is known.
Application to Communications
Channel estimation
BLUE, LSE (or when the noise is Gaussian also the MVU and MLE):
$\hat{\mathbf{h}} = (\mathcal{S}^T \mathbf{C}^{-1} \mathcal{S})^{-1} \mathcal{S}^T \mathbf{C}^{-1} \mathbf{x}$ (for the LSE, $\mathbf{C}$ is replaced by $\mathbf{I}$)
LMMSE (or when the noise and channel are Gaussian also the MMSE):
$\hat{\mathbf{h}} = \mathbf{C}_h \mathcal{S}^T (\mathcal{S} \mathbf{C}_h \mathcal{S}^T + \mathbf{C})^{-1} \mathbf{x} = (\mathbf{C}_h^{-1} + \mathcal{S}^T \mathbf{C}^{-1} \mathcal{S})^{-1} \mathcal{S}^T \mathbf{C}^{-1} \mathbf{x}$
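A minimal training-based channel estimation sketch (numpy; the channel taps, BPSK training sequence, and noise level are arbitrary choices, not from the slides), using the LSE, which for white noise coincides with the BLUE:

```python
import numpy as np

rng = np.random.default_rng(7)
h = np.array([1.0, 0.5, 0.25])               # channel to be estimated (L = 3)
L = len(h)
K = 20                                        # number of training symbols
N = K + L - 1

s = rng.choice([-1.0, 1.0], size=K)          # known training symbols (BPSK here)

# Toeplitz training matrix S such that x = S @ h + w
S = np.zeros((N, L))
for l in range(L):
    S[l:l + K, l] = s

sigma_w = 0.05
x = S @ h + sigma_w * rng.normal(size=N)     # received training block

# LSE (= BLUE for white noise): h_hat = (S^T S)^-1 S^T x
h_hat, *_ = np.linalg.lstsq(S, x, rcond=None)
```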
Application to Communications
Symbol estimation
BLUE, LSE (or when the noise is Gaussian also the MVU and MLE):
$\hat{\mathbf{s}} = (\mathcal{H}^T \mathbf{C}^{-1} \mathcal{H})^{-1} \mathcal{H}^T \mathbf{C}^{-1} \mathbf{x}$
LMMSE (or when the noise and symbols are Gaussian also the MMSE):
$\hat{\mathbf{s}} = \mathbf{C}_s \mathcal{H}^T (\mathcal{H} \mathbf{C}_s \mathcal{H}^T + \mathbf{C})^{-1} \mathbf{x}$
For i.i.d. symbols with unit variance, $\mathbf{C}_s$ can be set to $\mathbf{I}$, which simplifies the LMMSE estimator to $\hat{\mathbf{s}} = \mathcal{H}^T (\mathcal{H}\mathcal{H}^T + \mathbf{C})^{-1} \mathbf{x}$.
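A matching symbol estimation sketch (numpy; channel, block length, and noise level are arbitrary demo values), applying the LMMSE estimator with $\mathbf{C}_s = \mathbf{I}$ and white noise $\mathbf{C} = \sigma_w^2 \mathbf{I}$:

```python
import numpy as np

rng = np.random.default_rng(8)
h = np.array([1.0, 0.5, 0.25])               # known (estimated) channel
L = len(h)
K = 8                                         # unknown symbols per block
N = K + L - 1

Hmat = np.zeros((N, K))                       # convolution matrix built from h
for k in range(K):
    Hmat[k:k + L, k] = h

s = rng.choice([-1.0, 1.0], size=K)          # unknown unit-variance symbols
sigma_w = 0.1
x = Hmat @ s + sigma_w * rng.normal(size=N)  # received data block

# LMMSE with C_s = I and C = sigma_w^2 I:
# s_hat = H^T (H H^T + sigma_w^2 I)^-1 x
s_lmmse = Hmat.T @ np.linalg.inv(Hmat @ Hmat.T + sigma_w**2 * np.eye(N)) @ x
```

Hard symbol decisions would then be obtained as `np.sign(s_lmmse)`.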