
Estimation Theory

Alireza Karimi
Laboratoire d'Automatique, MEC2 397,
email: alireza.karimi@epfl.ch
Spring 2011
(Introduction) Estimation Theory Spring 2011 1 / 152
Course Objective
Extract information from noisy signals
Parameter Estimation Problem : Given a set of measured data
{x[0], x[1], . . . , x[N−1]}
which depends on an unknown parameter vector θ, determine an estimator
θ̂ = g(x[0], x[1], . . . , x[N−1])
where g is some function.
Applications : Image processing, communications, biomedicine, system
identification, state estimation in control, etc.
(Introduction) Estimation Theory Spring 2011 2 / 152
Some Examples
Range estimation : We transmit a pulse that is reflected by the aircraft.
An echo is received after τ seconds. The range is estimated from the equation
τ = 2R/c, where c is the speed of light.
System identification : The plant is excited with a signal u and the
output signal y is measured. If we have :
y[k] = G(q^{−1}, θ)u[k] + n[k]
where n[k] is the measurement noise, the model parameters are estimated
using u[k] and y[k] for k = 1, . . . , N.
DC level in noise : Consider a set of data {x[0], x[1], . . . , x[N−1]} that
can be modeled as :
x[n] = A + w[n]
where w[n] is some zero-mean noise process. A can be estimated using the
measured data set.
(Introduction) Estimation Theory Spring 2011 3 / 152
Outline
Classical estimation (θ deterministic)
Minimum Variance Unbiased Estimator (MVU)
Cramer-Rao Lower Bound (CRLB)
Best Linear Unbiased Estimator (BLUE)
Maximum Likelihood Estimator (MLE)
Least Squares Estimator (LSE)
Bayesian estimation (θ stochastic)
Minimum Mean Square Error Estimator (MMSE)
Maximum A Posteriori Estimator (MAP)
Linear MMSE Estimator
Kalman Filter
(Introduction) Estimation Theory Spring 2011 4 / 152
References
Main reference :
Fundamentals of Statistical Signal Processing: Estimation Theory,
by Steven M. Kay, Prentice-Hall, 1993 (available in the La Fontaine
Library, RLC). We cover Chapters 1 to 14, skipping Chapters 5 and 9.
Other references :
Lessons in Estimation Theory for Signal Processing, Communications
and Control. By Jerry M. Mendel, Prentice-Hall, 1995.
Probability, Random Processes and Estimation Theory for Engineers.
By Henry Stark and John W. Woods, Prentice-Hall, 1986.
(Introduction) Estimation Theory Spring 2011 5 / 152
Review of Probability and Random Variables
(Probability and Random Variables) Estimation Theory Spring 2011 6 / 152
Random Variables
Random Variable : A rule X(ξ) that assigns to every element ξ of a sample
space Ω a real value is called a RV. So X is not really a variable that varies
randomly but a function whose domain is Ω and whose range is some
subset of the real line.
Example : Consider the experiment of throwing a coin twice. The sample
space (the possible outcomes) is :
Ω = {HH, HT, TH, TT}
We can define a random variable X such that
X(HH) = 1, X(HT) = 1.1, X(TH) = 1.6, X(TT) = 1.8
Random variable X assigns to each event (e.g. E = {HT, TH}) a
subset of the real line (in this case B = {1.1, 1.6}).
(Probability and Random Variables) Estimation Theory Spring 2011 7 / 152
Probability Distribution Function
For any element ξ in Ω, the event {ξ | X(ξ) ≤ x} is an important event.
The probability of this event
Pr[{ξ | X(ξ) ≤ x}] = P_X(x)
is called the probability distribution function of X.
Example : For the random variable defined earlier, we have :
P_X(1.5) = Pr[{ξ | X(ξ) ≤ 1.5}] = Pr[{HH, HT}] = 0.5
P_X(x) can be computed for all x ∈ R. It is clear that 0 ≤ P_X(x) ≤ 1.
Remark :
For the same experiment (throwing a coin twice) we could define
another random variable that would lead to a different P_X(x).
In most engineering problems the sample space is a subset of the
real line, so X(ξ) = ξ and P_X(x) is a continuous function of x.
(Probability and Random Variables) Estimation Theory Spring 2011 8 / 152
Probability Density Function (PDF)
The Probability Density Function, if it exists, is given by :
p_X(x) = dP_X(x)/dx
When we deal with a single random variable the subscripts are removed :
p(x) = dP(x)/dx
Properties :
(i) ∫_{−∞}^{∞} p(x) dx = P(∞) − P(−∞) = 1
(ii) Pr[{ξ | X(ξ) ≤ x}] = Pr[X ≤ x] = P(x) = ∫_{−∞}^{x} p(ξ) dξ
(iii) Pr[x_1 < X ≤ x_2] = ∫_{x_1}^{x_2} p(x) dx
(Probability and Random Variables) Estimation Theory Spring 2011 9 / 152
Gaussian Probability Density Function
A random variable is distributed according to a Gaussian or normal
distribution if the PDF is given by :
p(x) = (1/√(2πσ²)) exp(−(x−μ)²/(2σ²))
The PDF has two parameters : μ, the mean, and σ², the variance.
We write X ∼ N(μ, σ²) when the random variable X has a normal
(Gaussian) distribution with mean μ and standard deviation σ.
Small σ means small variability (uncertainty) and large σ means large
variability.
Remark : The Gaussian distribution is important because, according to the
Central Limit Theorem, the sum of N independent RVs has a PDF that
converges to a Gaussian distribution when N goes to infinity.
(Probability and Random Variables) Estimation Theory Spring 2011 10 / 152
Some other common PDF
Chi-square χ²_n : p(x) = (1/(2^{n/2} Γ(n/2))) x^{n/2−1} exp(−x/2) for x > 0, and 0 for x < 0
Exponential (λ > 0) : p(x) = (1/λ) exp(−x/λ) u(x)
Rayleigh (σ > 0) : p(x) = (x/σ²) exp(−x²/(2σ²)) u(x)
Uniform (b > a) : p(x) = 1/(b−a) for a < x < b, and 0 otherwise
where Γ(z) = ∫_0^∞ t^{z−1} e^{−t} dt and u(x) is the unit step function.
(Probability and Random Variables) Estimation Theory Spring 2011 11 / 152
Joint, Marginal and Conditional PDF
Joint PDF : Consider two random variables X and Y, then :
Pr[x_1 < X ≤ x_2 and y_1 < Y ≤ y_2] = ∫_{x_1}^{x_2} ∫_{y_1}^{y_2} p(x, y) dy dx
Marginal PDF :
p(x) = ∫_{−∞}^{∞} p(x, y) dy  and  p(y) = ∫_{−∞}^{∞} p(x, y) dx
Conditional PDF : p(x|y) is defined as the PDF of X conditioned on
knowing the value of Y.
Bayes Formula : Consider two RVs defined on the same probability space,
then we have :
p(x, y) = p(x|y)p(y) = p(y|x)p(x)  or  p(x|y) = p(x, y)/p(y)
(Probability and Random Variables) Estimation Theory Spring 2011 12 / 152
Independent Random Variables
Two RVs X and Y are independent if and only if :
p(x, y) = p(x)p(y)
A direct conclusion is that :
p(x|y) = p(x, y)/p(y) = p(x)p(y)/p(y) = p(x)  and  p(y|x) = p(y)
which means conditioning does not change the PDF.
Remark : For a joint Gaussian PDF the contours of constant density are
ellipses centered at (μ_x, μ_y). For independent X and Y the major (or
minor) axis is parallel to the x or y axis.
(Probability and Random Variables) Estimation Theory Spring 2011 13 / 152
Expected Value of a Random Variable
The expected value, if it exists, of a random variable X with PDF p(x) is
defined by :
E(X) = ∫_{−∞}^{∞} x p(x) dx
Some properties of the expected value :
E{X + Y} = E{X} + E{Y}
E{aX} = aE{X}
The expected value of Y = g(X) can be computed by :
E(Y) = ∫_{−∞}^{∞} g(x) p(x) dx
Conditional expectation : The conditional expectation of X given that a
specific value of Y has occurred is :
E(X|Y) = ∫_{−∞}^{∞} x p(x|y) dx
(Probability and Random Variables) Estimation Theory Spring 2011 14 / 152
Moments of a Random Variable
The r-th moment of X is defined as :
E(X^r) = ∫_{−∞}^{∞} x^r p(x) dx
The first moment of X is its expected value or the mean (μ = E(X)).
Moments of Gaussian RVs : A Gaussian RV with N(μ, σ²) has
moments of all orders in closed form :
E(X) = μ
E(X²) = μ² + σ²
E(X³) = μ³ + 3μσ²
E(X⁴) = μ⁴ + 6μ²σ² + 3σ⁴
E(X⁵) = μ⁵ + 10μ³σ² + 15μσ⁴
E(X⁶) = μ⁶ + 15μ⁴σ² + 45μ²σ⁴ + 15σ⁶
(Probability and Random Variables) Estimation Theory Spring 2011 15 / 152
Central Moments
The r-th central moment of X is defined as :
E[(X − μ)^r] = ∫_{−∞}^{∞} (x − μ)^r p(x) dx
The second central moment (variance) is denoted by σ² or var(X).
Central Moments of Gaussian RVs :
E[(X − μ)^r] = 0 if r is odd, and σ^r (r − 1)!! if r is even
where n!! denotes the double factorial, that is, the product of every odd
number from n down to 1.
(Probability and Random Variables) Estimation Theory Spring 2011 16 / 152
Some properties of Gaussian RVs
If X ∼ N(μ_x, σ²_x) then Z = (X − μ_x)/σ_x ∼ N(0, 1).
If Z ∼ N(0, 1) then X = σ_x Z + μ_x ∼ N(μ_x, σ²_x).
If X ∼ N(μ_x, σ²_x) then Z = aX + b ∼ N(aμ_x + b, a²σ²_x).
If X ∼ N(μ_x, σ²_x) and Y ∼ N(μ_y, σ²_y) are two independent RVs, then
aX + bY ∼ N(aμ_x + bμ_y, a²σ²_x + b²σ²_y)
The sum of squares of n independent RVs with standard normal
distribution N(0, 1) has a χ²_n distribution with n degrees of freedom.
For large values of n, χ²_n converges to N(n, 2n).
The Euclidean norm √(X² + Y²) of two independent RVs with
standard normal distribution has a Rayleigh distribution.
(Probability and Random Variables) Estimation Theory Spring 2011 17 / 152
Covariance
For two RVs X and Y, the covariance is defined as
σ_xy = E[(X − μ_x)(Y − μ_y)]
σ_xy = ∫∫_{−∞}^{∞} (x − μ_x)(y − μ_y) p(x, y) dx dy
If X and Y are zero mean then σ_xy = E{XY}.
var(X + Y) = σ²_x + σ²_y + 2σ_xy
var(aX) = a²σ²_x
Important formula : The relation between the variance and the mean of
X is given by
σ² = E[(X − μ)²] = E(X²) − 2μE(X) + μ² = E(X²) − μ²
The variance is the mean of the square minus the square of the mean.
(Probability and Random Variables) Estimation Theory Spring 2011 18 / 152
Independence, Uncorrelatedness and Orthogonality
If σ_xy = 0, then X and Y are uncorrelated and
E{XY} = E{X}E{Y}
X and Y are called orthogonal if E{XY} = 0.
If X and Y are independent then they are uncorrelated :
p(x, y) = p(x)p(y) ⇒ E{XY} = E{X}E{Y}
Uncorrelatedness does not imply independence. For example, if X
is a normal RV with zero mean and Y = X², we have p(y|x) ≠ p(y)
but
σ_xy = E{XY} − E{X}E{Y} = E{X³} − 0 = 0
Correlation only captures the linear dependence between two RVs, so
it is weaker than independence.
For jointly Gaussian RVs, independence is equivalent to being
uncorrelated.
(Probability and Random Variables) Estimation Theory Spring 2011 19 / 152
Random Vectors
Random Vector : is a vector of random variables¹ :
x = [x_1, x_2, . . . , x_n]^T
Expectation Vector : μ_x = E(x) = [E(x_1), E(x_2), . . . , E(x_n)]^T
Covariance Matrix : C_x = E[(x − μ_x)(x − μ_x)^T]
C_x is an n × n symmetric matrix which is assumed to be positive
definite and so invertible.
The elements of this matrix are : [C_x]_{ij} = E{[x_i − E(x_i)][x_j − E(x_j)]}.
If the random variables are uncorrelated then C_x is a diagonal matrix.
Multivariate Gaussian PDF :
p(x) = (1/√((2π)^n det(C_x))) exp(−(1/2)(x − μ_x)^T C_x^{−1} (x − μ_x))
1. In some books (including our main reference) there is no distinction between
a random variable X and its specific value x. From now on we adopt the notation
of our reference.
(Probability and Random Variables) Estimation Theory Spring 2011 20 / 152
Random Processes
Discrete Random Process : x[n] is a sequence of random variables
defined for every integer n.
Mean value : is defined as E(x[n]) = μ_x[n].
Autocorrelation Function (ACF) : is defined as
r_xx[k, n] = E(x[n]x[n + k])
Wide Sense Stationary (WSS) : x[n] is WSS if its mean and its
autocorrelation function (ACF) do not depend on n.
Autocovariance function : is defined as
c_xx[k] = E[(x[n] − μ_x)(x[n + k] − μ_x)] = r_xx[k] − μ²_x
Cross-correlation Function (CCF) : is defined as
r_xy[k] = E(x[n]y[n + k])
Cross-covariance function : is defined as
c_xy[k] = E[(x[n] − μ_x)(y[n + k] − μ_y)] = r_xy[k] − μ_x μ_y
(Probability and Random Variables) Estimation Theory Spring 2011 21 / 152
Discrete White Noise
Some properties of ACF and CCF :
r_xx[0] ≥ |r_xx[k]|,  r_xx[−k] = r_xx[k],  r_xy[−k] = r_yx[k]
Power Spectral Density : The Fourier transform of the ACF and CCF gives
the Auto-PSD and Cross-PSD :
P_xx(f) = Σ_{k=−∞}^{∞} r_xx[k] exp(−j2πfk)
P_xy(f) = Σ_{k=−∞}^{∞} r_xy[k] exp(−j2πfk)
Discrete White Noise : is a discrete random process with zero mean and
r_xx[k] = σ²δ[k], where δ[k] is the Kronecker impulse function. The PSD of
white noise is P_xx(f) = σ² and is completely flat with frequency.
(Probability and Random Variables) Estimation Theory Spring 2011 22 / 152
Introduction and Minimum Variance Unbiased
Estimation
(Minimum Variance Unbiased Estimation) Estimation Theory Spring 2011 23 / 152
The Mathematical Estimation Problem
Parameter Estimation Problem : Given a set of measured data
x = {x[0], x[1], . . . , x[N−1]}
which depends on an unknown parameter vector θ, determine an estimator
θ̂ = g(x[0], x[1], . . . , x[N−1])
where g is some function.
The first step is to find the PDF of the data as a function of θ : p(x; θ)
Example : Consider the problem of a DC level in white Gaussian noise with
one observed data point x[0] = θ + w[0], where w[0] has the PDF N(0, σ²).
Then the PDF of x[0] is :
p(x[0]; θ) = (1/√(2πσ²)) exp(−(1/(2σ²))(x[0] − θ)²)
(Minimum Variance Unbiased Estimation) Estimation Theory Spring 2011 24 / 152
The Mathematical Estimation Problem
Example : Consider a data sequence that can be modeled with a linear
trend in white Gaussian noise
x[n] = A + Bn + w[n],  n = 0, 1, . . . , N−1
Suppose that w[n] ∼ N(0, σ²) and is uncorrelated with all the other
samples. Letting θ = [A B]^T and x = [x[0], x[1], . . . , x[N−1]]^T, the PDF is :
p(x; θ) = Π_{n=0}^{N−1} p(x[n]; θ) = (1/(√(2πσ²))^N) exp(−(1/(2σ²)) Σ_{n=0}^{N−1}(x[n] − A − Bn)²)
The quality of any estimator for this problem is related to the assumptions
on the data model : in this example, the linear trend and the WGN PDF
assumption.
(Minimum Variance Unbiased Estimation) Estimation Theory Spring 2011 25 / 152
The Mathematical Estimation Problem
Classical versus Bayesian estimation
If we assume θ is deterministic we have a classical estimation
problem. The following methods will be studied : MVU, MLE, BLUE,
LSE.
If we assume θ is a random variable with a known PDF, then we have
a Bayesian estimation problem. In this case the data are
described by the joint PDF
p(x, θ) = p(x|θ)p(θ)
where p(θ) summarizes our knowledge about θ before any data are
observed and p(x|θ) summarizes the knowledge provided by the data x
conditioned on knowing θ. The following methods will be studied :
MMSE, MAP, Kalman Filter.
(Minimum Variance Unbiased Estimation) Estimation Theory Spring 2011 26 / 152
Assessing Estimator Performance
Consider the problem of estimating a DC level A in uncorrelated noise :
x[n] = A + w[n],  n = 0, 1, . . . , N−1
Consider the following estimators :
Â_1 = (1/N) Σ_{n=0}^{N−1} x[n]
Â_2 = x[0]
Suppose that A = 1, Â_1 = 0.95 and Â_2 = 0.98. Which estimator is better ?
An estimator is a random variable, so its
performance can only be described by its PDF or
statistically (e.g. by Monte-Carlo simulation).
(Minimum Variance Unbiased Estimation) Estimation Theory Spring 2011 27 / 152
Unbiased Estimators
An estimator that on the average yields the true value is unbiased.
Mathematically :
E(θ̂) − θ = 0  for a < θ < b
Let's compute the expectation of the two estimators Â_1 and Â_2 :
E(Â_1) = (1/N) Σ_{n=0}^{N−1} E(x[n]) = (1/N) Σ_{n=0}^{N−1} E(A + w[n]) = (1/N) Σ_{n=0}^{N−1} (A + 0) = A
E(Â_2) = E(x[0]) = E(A + w[0]) = A + 0 = A
Both estimators are unbiased. Which one is better ?
Now, let's compute the variance of the two estimators :
var(Â_1) = var((1/N) Σ_{n=0}^{N−1} x[n]) = (1/N²) Σ_{n=0}^{N−1} var(x[n]) = (1/N²)·Nσ² = σ²/N
var(Â_2) = var(x[0]) = σ² > var(Â_1)
(Minimum Variance Unbiased Estimation) Estimation Theory Spring 2011 28 / 152
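The variance comparison of Â_1 and Â_2 is easy to reproduce by Monte-Carlo simulation; a minimal sketch in Python/NumPy, where the values A = 1, σ = 1, N = 20 and the run count are illustrative assumptions, not from the slides :

```python
import numpy as np

rng = np.random.default_rng(0)
A, sigma, N, runs = 1.0, 1.0, 20, 10_000      # assumed values for illustration

x = A + sigma * rng.standard_normal((runs, N))  # each row is one data record
A1 = x.mean(axis=1)     # sample-mean estimator  A_hat_1
A2 = x[:, 0]            # single-sample estimator A_hat_2

# Both estimators are unbiased, but var(A1) ~ sigma^2/N while var(A2) ~ sigma^2
print(A1.mean(), A1.var())   # approx 1.0 and 0.05
print(A2.mean(), A2.var())   # approx 1.0 and 1.0
```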
Unbiased Estimators
Remark : When several unbiased estimators of the same parameter from
independent sets of data are available, i.e., θ̂_1, θ̂_2, . . . , θ̂_n, a better estimator
can be obtained by averaging :
θ̂ = (1/n) Σ_{i=1}^{n} θ̂_i  ⇒  E(θ̂) = θ
Assuming that the estimators have the same variance, we have :
var(θ̂) = (1/n²) Σ_{i=1}^{n} var(θ̂_i) = (1/n²)·n·var(θ̂_i) = var(θ̂_i)/n
By increasing n, the variance will decrease (if n → ∞, θ̂ → θ).
This is not the case for biased estimators, no matter how many estimators are
averaged.
(Minimum Variance Unbiased Estimation) Estimation Theory Spring 2011 29 / 152
Minimum Variance Criterion
The most logical criterion for estimation is the Mean Square Error (MSE) :
mse(θ̂) = E[(θ̂ − θ)²]
Unfortunately this criterion leads to unrealizable estimators (the
estimator will depend on θ) :
mse(θ̂) = E{[θ̂ − E(θ̂) + E(θ̂) − θ]²} = E{[θ̂ − E(θ̂) + b(θ)]²}
where b(θ) = E(θ̂) − θ is defined as the bias of the estimator. Therefore :
mse(θ̂) = E{[θ̂ − E(θ̂)]²} + 2b(θ)E[θ̂ − E(θ̂)] + b²(θ) = var(θ̂) + b²(θ)
Instead of minimizing the MSE we can minimize the variance of
the unbiased estimators :
Minimum Variance Unbiased Estimator
(Minimum Variance Unbiased Estimation) Estimation Theory Spring 2011 30 / 152
Minimum Variance Unbiased Estimator
Existence of the MVU Estimator : In general the MVU estimator does not
always exist. There may be no unbiased estimator, or none of the unbiased
estimators has a uniformly minimum variance.
Finding the MVU Estimator : There is no known procedure which
always leads to the MVU estimator. Three existing approaches are :
1. Determine the Cramer-Rao lower bound (CRLB) and check to see if
some estimator satisfies it.
2. Apply the Rao-Blackwell-Lehmann-Scheffe theorem (we will skip it).
3. Restrict to linear unbiased estimators.
(Minimum Variance Unbiased Estimation) Estimation Theory Spring 2011 31 / 152
Cramer-Rao Lower Bound
(Cramer-Rao Lower Bound) Estimation Theory Spring 2011 32 / 152
Cramer-Rao Lower Bound
The CRLB is a lower bound on the variance of any unbiased estimator :
var(θ̂) ≥ CRLB(θ)
Note that the CRLB is a function of θ.
It tells us the best performance that can be achieved
(useful in feasibility studies and comparison with other estimators).
It may lead us to compute the MVU estimator.
(Cramer-Rao Lower Bound) Estimation Theory Spring 2011 33 / 152
Cramer-Rao Lower Bound
Theorem (scalar case)
Assume that the PDF p(x; θ) satisfies the regularity condition
E[∂ ln p(x; θ)/∂θ] = 0  for all θ
Then the variance of any unbiased estimator θ̂ satisfies
var(θ̂) ≥ [−E(∂² ln p(x; θ)/∂θ²)]^{−1}
An unbiased estimator that attains the CRLB can be found iff :
∂ ln p(x; θ)/∂θ = I(θ)(g(x) − θ)
for some functions g(x) and I(θ). The estimator is θ̂ = g(x) and the
minimum variance is 1/I(θ).
(Cramer-Rao Lower Bound) Estimation Theory Spring 2011 34 / 152
Cramer-Rao Lower Bound
Example : Consider x[0] = A + w[0] with w[0] ∼ N(0, σ²).
p(x[0]; A) = (1/√(2πσ²)) exp(−(1/(2σ²))(x[0] − A)²)
ln p(x[0]; A) = −ln √(2πσ²) − (1/(2σ²))(x[0] − A)²
Then
∂ ln p(x[0]; A)/∂A = (1/σ²)(x[0] − A)
∂² ln p(x[0]; A)/∂A² = −1/σ²
According to the Theorem :
var(Â) ≥ σ²,  I(A) = 1/σ²,  and  Â = g(x[0]) = x[0]
(Cramer-Rao Lower Bound) Estimation Theory Spring 2011 35 / 152
Cramer-Rao Lower Bound
Example : Consider multiple observations of a DC level in WGN :
x[n] = A + w[n],  n = 0, 1, . . . , N−1,  with w[n] ∼ N(0, σ²)
p(x; A) = (1/(√(2πσ²))^N) exp(−(1/(2σ²)) Σ_{n=0}^{N−1}(x[n] − A)²)
Then
∂ ln p(x; A)/∂A = ∂/∂A [−ln((2πσ²)^{N/2}) − (1/(2σ²)) Σ_{n=0}^{N−1}(x[n] − A)²]
= (1/σ²) Σ_{n=0}^{N−1} (x[n] − A) = (N/σ²)[(1/N) Σ_{n=0}^{N−1} x[n] − A]
According to the Theorem :
var(Â) ≥ σ²/N,  I(A) = N/σ²,  and  Â = g(x) = (1/N) Σ_{n=0}^{N−1} x[n]
(Cramer-Rao Lower Bound) Estimation Theory Spring 2011 36 / 152
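The factorization ∂ ln p(x; A)/∂A = I(A)(g(x) − A) used above can be checked numerically; a small sketch with assumed values (A = 2, σ = 0.5, N = 50), not taken from the slides :

```python
import numpy as np

rng = np.random.default_rng(1)
A_true, sigma, N = 2.0, 0.5, 50              # assumed values
x = A_true + sigma * rng.standard_normal(N)

def dlogp_dA(A):
    # derivative of the log-likelihood of N WGN samples with respect to A
    return np.sum(x - A) / sigma**2

I_A = N / sigma**2       # Fisher information
g_x = x.mean()           # candidate MVU estimator (sample mean)
for A in (1.0, 2.0, 3.0):
    # both expressions agree, confirming the CRLB-attaining factorization
    print(dlogp_dA(A), I_A * (g_x - A))
```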
Transformation of Parameters
If it is desired to estimate α = g(θ), then the CRLB is :
var(α̂) ≥ [−E(∂² ln p(x; θ)/∂θ²)]^{−1} (∂g/∂θ)²
Example : Compute the CRLB for estimation of the power (A²) of a DC
level in noise :
var(Â²) ≥ (σ²/N)(2A)² = 4A²σ²/N
Definition
Efficient estimator : An unbiased estimator that attains the CRLB is said
to be efficient.
Example : Knowing that x̄ = (1/N) Σ_{n=0}^{N−1} x[n] is an efficient estimator for A, is
x̄² an efficient estimator for A² ?
(Cramer-Rao Lower Bound) Estimation Theory Spring 2011 37 / 152
Transformation of Parameters
Solution : Knowing that x̄ ∼ N(A, σ²/N), we have :
E(x̄²) = E²(x̄) + var(x̄) = A² + σ²/N ≠ A²
So the estimator Â² = x̄² is not even unbiased.
Let's look at the variance of this estimator :
var(x̄²) = E(x̄⁴) − E²(x̄²)
but we have from the moments of Gaussian RVs (slide 15) :
E(x̄⁴) = A⁴ + 6A²(σ²/N) + 3(σ²/N)²
Therefore :
var(x̄²) = A⁴ + 6A²σ²/N + 3σ⁴/N² − (A² + σ²/N)² = 4A²σ²/N + 2σ⁴/N²
(Cramer-Rao Lower Bound) Estimation Theory Spring 2011 38 / 152
Transformation of Parameters
Remarks :
The estimator Â² = x̄² is biased and not efficient.
As N → ∞, the bias goes to zero and the variance of the estimator
approaches the CRLB. This type of estimator is called
asymptotically efficient.
General Remarks :
If g(θ) = aθ + b is an affine function of θ, then ĝ(θ) = g(θ̂) is an
efficient estimator. First, it is unbiased : E(aθ̂ + b) = aθ + b = g(θ);
moreover :
var(ĝ(θ)) ≥ (∂g/∂θ)² var(θ̂) = a² var(θ̂)
but var(ĝ(θ)) = var(aθ̂ + b) = a² var(θ̂), so the CRLB is
achieved.
If g(θ) is a nonlinear function of θ and θ̂ is an efficient estimator,
then g(θ̂) is an asymptotically efficient estimator.
(Cramer-Rao Lower Bound) Estimation Theory Spring 2011 39 / 152
Cramer-Rao Lower Bound
Theorem (Vector Parameter)
Assume that the PDF p(x; θ) satisfies the regularity condition
E[∂ ln p(x; θ)/∂θ] = 0  for all θ
Then the covariance matrix of any unbiased estimator θ̂ satisfies C_θ̂ − I^{−1}(θ) ⪰ 0
where ⪰ 0 means that the matrix is positive semidefinite. I(θ) is called the
Fisher information matrix and is given by :
I_{ij}(θ) = −E[∂² ln p(x; θ)/∂θ_i ∂θ_j]
An unbiased estimator that attains the CRLB can be found iff :
∂ ln p(x; θ)/∂θ = I(θ)(g(x) − θ)
(Cramer-Rao Lower Bound) Estimation Theory Spring 2011 40 / 152
CRLB Extension to Vector Parameter
Example : Consider a DC level in WGN with A and σ² unknown.
Compute the CRLB for estimation of θ = [A σ²]^T.
ln p(x; θ) = −(N/2) ln 2π − (N/2) ln σ² − (1/(2σ²)) Σ_{n=0}^{N−1}(x[n] − A)²
The Fisher information matrix is :
I(θ) = −E [ ∂² ln p(x;θ)/∂A²  ∂² ln p(x;θ)/∂A∂σ² ; ∂² ln p(x;θ)/∂σ²∂A  ∂² ln p(x;θ)/∂(σ²)² ]
     = [ N/σ²  0 ; 0  N/(2σ⁴) ]
The matrix is diagonal (just for this example) and can be easily inverted to
yield :
var(Â) ≥ σ²/N,  var(σ̂²) ≥ 2σ⁴/N
Is there any unbiased estimator that achieves these bounds ?
(Cramer-Rao Lower Bound) Estimation Theory Spring 2011 41 / 152
Transformation of Parameters
If it is desired to estimate α = g(θ), and the CRLB for the covariance of
θ̂ is I^{−1}(θ), then :
C_α̂ ⪰ (∂g/∂θ) [−E(∂² ln p(x; θ)/∂θ²)]^{−1} (∂g/∂θ)^T
Example : Consider a DC level in WGN with A and σ² unknown.
Compute the CRLB for estimation of the signal-to-noise ratio α = A²/σ².
We have θ = [A σ²]^T and α = g(θ) = θ₁²/θ₂, then the Jacobian is :
∂g(θ)/∂θ = [∂g(θ)/∂θ₁  ∂g(θ)/∂θ₂] = [2A/σ²  −A²/σ⁴]
So the CRLB is :
var(α̂) ≥ [2A/σ²  −A²/σ⁴] [ N/σ²  0 ; 0  N/(2σ⁴) ]^{−1} [2A/σ²  −A²/σ⁴]^T = (4α + 2α²)/N
(Cramer-Rao Lower Bound) Estimation Theory Spring 2011 42 / 152
Linear Models with WGN
If the N point samples of observed data can be modeled as
x = Hθ + w
where
x = N × 1 observation vector
H = N × p observation matrix (known, rank p)
θ = p × 1 vector of parameters to be estimated
w = N × 1 noise vector with PDF N(0, σ²I)
compute the CRLB and the MVU estimator that achieves this bound.
Step 1 : Compute ln p(x; θ).
Step 2 : Compute I(θ) = −E[∂² ln p(x; θ)/∂θ²] and the covariance
matrix of θ̂ : C_θ̂ = I^{−1}(θ).
Step 3 : Find the MVU estimator g(x) by factoring
∂ ln p(x; θ)/∂θ = I(θ)[g(x) − θ]
(Linear Models) Estimation Theory Spring 2011 43 / 152
Linear Models with WGN
Step 1 : ln p(x; θ) = −ln((√(2πσ²))^N) − (1/(2σ²))(x − Hθ)^T(x − Hθ).
Step 2 :
∂ ln p(x; θ)/∂θ = −(1/(2σ²)) ∂/∂θ [x^T x − 2x^T Hθ + θ^T H^T Hθ] = (1/σ²)[H^T x − H^T Hθ]
Then I(θ) = −E[∂² ln p(x; θ)/∂θ²] = (1/σ²) H^T H
Step 3 : Find the MVU estimator g(x) by factoring
∂ ln p(x; θ)/∂θ = I(θ)[g(x) − θ] = (H^T H/σ²)[(H^T H)^{−1} H^T x − θ]
Therefore :
θ̂ = g(x) = (H^T H)^{−1} H^T x,  C_θ̂ = I^{−1}(θ) = σ²(H^T H)^{−1}
(Linear Models) Estimation Theory Spring 2011 44 / 152
Linear Models with WGN
For a linear model with WGN represented by x = Hθ + w the MVU
estimator is :
θ̂ = (H^T H)^{−1} H^T x
This estimator is efficient and attains the CRLB.
That the estimator is unbiased can be seen easily by :
E(θ̂) = (H^T H)^{−1} H^T E(Hθ + w) = θ
The statistical performance of θ̂ is completely specified because θ̂ is a
linear transformation of a Gaussian vector x and hence has a Gaussian
distribution :
θ̂ ∼ N(θ, σ²(H^T H)^{−1})
(Linear Models) Estimation Theory Spring 2011 45 / 152
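A minimal sketch of this estimator for the linear-trend model x[n] = A + Bn + w[n] of slide 25; the numerical values are illustrative assumptions :

```python
import numpy as np

rng = np.random.default_rng(2)
N, A, B, sigma = 100, 1.0, 0.2, 0.8            # assumed values
n = np.arange(N)
x = A + B * n + sigma * rng.standard_normal(N)

H = np.column_stack([np.ones(N), n])           # N x 2 observation matrix
# MVU estimator theta_hat = (H^T H)^{-1} H^T x (solve instead of explicit inverse)
theta_hat = np.linalg.solve(H.T @ H, H.T @ x)
C_theta = sigma**2 * np.linalg.inv(H.T @ H)    # covariance of the estimate
print(theta_hat)            # close to [A, B]
print(np.diag(C_theta))     # variances of A_hat and B_hat
```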
Example (Curve Fitting)
Consider fitting the data x[n] by a p-th order polynomial function of n :
x[n] = θ_0 + θ_1 n + θ_2 n² + · · · + θ_p n^p + w[n]
We have N data samples, then :
x = [x[0], x[1], . . . , x[N−1]]^T
w = [w[0], w[1], . . . , w[N−1]]^T
θ = [θ_0, θ_1, . . . , θ_p]^T
so x = Hθ + w, where H is the N × (p+1) matrix :
H = [ 1  0  0  · · ·  0 ;
      1  1  1  · · ·  1 ;
      1  2  4  · · ·  2^p ;
      ⋮ ;
      1  N−1  (N−1)²  · · ·  (N−1)^p ]
Hence the MVU estimator is :
θ̂ = (H^T H)^{−1} H^T x
(Linear Models) Estimation Theory Spring 2011 46 / 152
Example (Fourier Analysis)
Consider the Fourier analysis of the data x[n] :
x[n] = Σ_{k=1}^{M} a_k cos(2πkn/N) + Σ_{k=1}^{M} b_k sin(2πkn/N) + w[n]
so we have θ = [a_1, a_2, . . . , a_M, b_1, b_2, . . . , b_M]^T and x = Hθ + w where :
H = [h^a_1, h^a_2, . . . , h^a_M, h^b_1, h^b_2, . . . , h^b_M]
with
h^a_k = [1, cos(2πk/N), cos(2πk·2/N), . . . , cos(2πk(N−1)/N)]^T
h^b_k = [0, sin(2πk/N), sin(2πk·2/N), . . . , sin(2πk(N−1)/N)]^T
Hence the MVU estimate of the Fourier coefficients is :
θ̂ = (H^T H)^{−1} H^T x
(Linear Models) Estimation Theory Spring 2011 47 / 152
Example (Fourier Analysis)
After simplification (noting that (H^T H)^{−1} = (2/N) I), we have :
θ̂ = (2/N)[(h^a_1)^T x, . . . , (h^a_M)^T x, (h^b_1)^T x, . . . , (h^b_M)^T x]^T
which is the same as the standard solution :
â_k = (2/N) Σ_{n=0}^{N−1} x[n] cos(2πkn/N),  b̂_k = (2/N) Σ_{n=0}^{N−1} x[n] sin(2πkn/N)
From the properties of linear models the estimates are unbiased.
The covariance matrix is :
C_θ̂ = σ²(H^T H)^{−1} = (2σ²/N) I
Note that θ̂ is Gaussian and C_θ̂ is diagonal (the amplitude estimates
are independent).
(Linear Models) Estimation Theory Spring 2011 48 / 152
Example (System Identication)
Consider identification of a Finite Impulse Response (FIR) model h[k], for
k = 0, 1, . . . , p−1, with input u[n] and output x[n] provided for
n = 0, 1, . . . , N−1 :
x[n] = Σ_{k=0}^{p−1} h[k]u[n−k] + w[n],  n = 0, 1, . . . , N−1
The FIR model can be represented by the linear model x = Hθ + w where
θ = [h[0], h[1], . . . , h[p−1]]^T  (p × 1)
H = [ u[0]    0       · · ·  0 ;
      u[1]    u[0]    · · ·  0 ;
      ⋮ ;
      u[N−1]  u[N−2]  · · ·  u[N−p] ]  (N × p)
The MVU estimate is θ̂ = (H^T H)^{−1} H^T x with C_θ̂ = σ²(H^T H)^{−1}.
(Linear Models) Estimation Theory Spring 2011 49 / 152
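A minimal sketch of this FIR identification, building the Toeplitz observation matrix from the known input; the tap values, sizes and noise level are illustrative assumptions :

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(3)
N, p, sigma = 200, 4, 0.1                        # assumed sizes
h_true = np.array([1.0, 0.5, -0.3, 0.1])         # assumed FIR taps
u = rng.standard_normal(N)                       # known input
x = np.convolve(u, h_true)[:N] + sigma * rng.standard_normal(N)

# H[i, k] = u[i - k] (zero for i < k): first column is u, first row is [u[0], 0, ..., 0]
H = toeplitz(u, np.r_[u[0], np.zeros(p - 1)])
h_hat = np.linalg.solve(H.T @ H, H.T @ x)        # MVU estimate of the impulse response
print(h_hat)                                     # close to h_true
```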
Linear Models with Colored Gaussian Noise
Determine the MVU estimator for the linear model x = Hθ + w with w a
colored Gaussian noise, w ∼ N(0, C).
Whitening approach : Since C is positive definite, its inverse can be
factored as C^{−1} = D^T D, where D is an invertible matrix. This matrix acts
as a whitening transformation for w :
E[(Dw)(Dw)^T] = E(Dww^T D^T) = DCD^T = D D^{−1} D^{−T} D^T = I
Now we transform the linear model x = Hθ + w to :
x' = Dx = DHθ + Dw = H'θ + w'
where w' = Dw ∼ N(0, I) is white, and we can compute the MVU
estimator as :
θ̂ = (H'^T H')^{−1} H'^T x' = (H^T D^T D H)^{−1} H^T D^T D x
so we have :
θ̂ = (H^T C^{−1} H)^{−1} H^T C^{−1} x  with  C_θ̂ = (H'^T H')^{−1} = (H^T C^{−1} H)^{−1}
(Linear Models) Estimation Theory Spring 2011 50 / 152
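The whitening derivation can be sanity-checked numerically: factor C from an assumed Cholesky factor, whiten, and compare with the direct formula. A sketch under assumed sizes and parameters (none of the numbers come from the slides) :

```python
import numpy as np

rng = np.random.default_rng(4)
N, p = 50, 2                                       # assumed sizes
H = rng.standard_normal((N, p))
theta = np.array([1.0, -2.0])                      # assumed true parameter
L = np.tril(0.3 * rng.standard_normal((N, N))) + np.eye(N)
C = L @ L.T                                        # an arbitrary positive definite noise covariance
w = rng.multivariate_normal(np.zeros(N), C)
x = H @ theta + w

# Whitening: C^{-1} = D^T D with D = inv(L), since C = L L^T
D = np.linalg.inv(L)
Hp, xp = D @ H, D @ x                              # transformed (white-noise) model
theta_white = np.linalg.solve(Hp.T @ Hp, Hp.T @ xp)

# Direct formula: (H^T C^{-1} H)^{-1} H^T C^{-1} x
Ci = np.linalg.inv(C)
theta_direct = np.linalg.solve(H.T @ Ci @ H, H.T @ Ci @ x)
print(np.allclose(theta_white, theta_direct))      # True
```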
Linear Models with known components
Consider a linear model x = Hθ + s + w, where s is a known signal. To
determine the MVU estimator let x' = x − s, so that x' = Hθ + w is a
standard linear model. The MVU estimator is :
θ̂ = (H^T H)^{−1} H^T (x − s)  with  C_θ̂ = σ²(H^T H)^{−1}
Example : Consider a DC level and exponential in WGN :
x[n] = A + r^n + w[n], where r is known. Then we have :
[x[0]; x[1]; . . . ; x[N−1]] = [1; 1; . . . ; 1] A + [1; r; . . . ; r^{N−1}] + [w[0]; w[1]; . . . ; w[N−1]]
The MVU estimator is :
Â = (H^T H)^{−1} H^T (x − s) = (1/N) Σ_{n=0}^{N−1} (x[n] − r^n)  with  var(Â) = σ²/N
(Linear Models) Estimation Theory Spring 2011 51 / 152
Best Linear Unbiased Estimators (BLUE)
Problems with finding the MVU estimator :
The MVU estimator does not always exist or may be impossible to find.
The PDF of the data may be unknown.
BLUE is a suboptimal estimator that :
restricts estimates to be linear in the data : θ̂ = Ax
restricts estimates to be unbiased : E(θ̂) = AE(x) = θ
minimizes the variance of the estimates ;
needs only the mean and the variance of the data (not the PDF). As
a result, in general, the PDF of the estimates cannot be computed.
Remark : The unbiasedness restriction implies a linear model for the data.
However, BLUE may still be used if the data are transformed suitably or the
model is linearized.
(Best Linear Unbiased Estimators) Estimation Theory Spring 2011 52 / 152
Finding the BLUE (Scalar Case)
1. Choose a linear estimator for the observed data x[n], n = 0, 1, . . . , N−1 :
θ̂ = Σ_{n=0}^{N−1} a_n x[n] = a^T x  where  a = [a_0, a_1, . . . , a_{N−1}]^T
2. Restrict the estimate to be unbiased :
E(θ̂) = Σ_{n=0}^{N−1} a_n E(x[n]) = θ
3. Minimize the variance
var(θ̂) = E{[θ̂ − E(θ̂)]²} = E{[a^T x − a^T E(x)]²} = E{a^T[x − E(x)][x − E(x)]^T a} = a^T Ca
(Best Linear Unbiased Estimators) Estimation Theory Spring 2011 53 / 152
Finding the BLUE (Scalar Case)
Consider the problem of amplitude estimation of known signals in noise :
x[n] = θ s[n] + w[n]
1. Choose a linear estimator : θ̂ = Σ_{n=0}^{N−1} a_n x[n] = a^T x
2. Restrict the estimate to be unbiased : E(θ̂) = a^T E(x) = a^T s θ = θ,
then a^T s = 1, where s = [s[0], s[1], . . . , s[N−1]]^T
3. Minimize a^T Ca subject to a^T s = 1.
The constrained optimization can be solved using Lagrange multipliers :
Minimize J = a^T Ca + λ(a^T s − 1)
The optimal solution is :
θ̂ = s^T C^{−1} x / (s^T C^{−1} s)  and  var(θ̂) = 1/(s^T C^{−1} s)
(Best Linear Unbiased Estimators) Estimation Theory Spring 2011 54 / 152
Finding the BLUE (Vector Case)
Theorem (Gauss–Markov)
If the data are of the general linear model form
x = Hθ + w
where w is a noise vector with zero mean and covariance C (the PDF of w
is arbitrary), then the BLUE of θ is :
θ̂ = (H^T C^{−1} H)^{−1} H^T C^{−1} x
and the covariance matrix of θ̂ is
C_θ̂ = (H^T C^{−1} H)^{−1}
Remark : If the noise is Gaussian then the BLUE is the MVU estimator.
(Best Linear Unbiased Estimators) Estimation Theory Spring 2011 55 / 152
Finding the BLUE
Example : Consider the problem of a DC level in noise : x[n] = A + w[n],
where w[n] has an unspecified PDF with var(w[n]) = σ²_n. We have θ = A
and H = 1 = [1, 1, . . . , 1]^T. The covariance matrix is :
C = diag(σ²_0, σ²_1, . . . , σ²_{N−1}),  C^{−1} = diag(1/σ²_0, 1/σ²_1, . . . , 1/σ²_{N−1})
and hence the BLUE is :
θ̂ = (H^T C^{−1} H)^{−1} H^T C^{−1} x = (Σ_{n=0}^{N−1} 1/σ²_n)^{−1} Σ_{n=0}^{N−1} x[n]/σ²_n
and the minimum covariance is :
C_θ̂ = (H^T C^{−1} H)^{−1} = (Σ_{n=0}^{N−1} 1/σ²_n)^{−1}
(Best Linear Unbiased Estimators) Estimation Theory Spring 2011 56 / 152
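The BLUE here is just an inverse-variance weighted average; a minimal sketch comparing it with the plain sample mean, with assumed variances and data (not from the slides) :

```python
import numpy as np

rng = np.random.default_rng(5)
A, N = 3.0, 500                                   # assumed values
var_n = rng.uniform(0.5, 4.0, N)                  # known, sample-dependent noise variances
x = A + np.sqrt(var_n) * rng.standard_normal(N)

# BLUE: weighted average with weights 1/var_n
A_blue = np.sum(x / var_n) / np.sum(1.0 / var_n)
var_blue = 1.0 / np.sum(1.0 / var_n)              # minimum variance of the BLUE
print(A_blue, var_blue)
print(x.mean(), var_n.sum() / N**2)               # plain average: unbiased but larger variance
```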
Maximum Likelihood Estimation
(Maximum Likelihood Estimation) Estimation Theory Spring 2011 57 / 152
Maximum Likelihood Estimation
Problems : The MVU estimator often does not exist or cannot be found.
BLUE is restricted to linear models.
The Maximum Likelihood Estimator (MLE) :
can always be applied if the PDF is known ;
is optimal for large data size ;
is computationally complex and requires numerical methods.
Basic Idea : Choose the parameter value that makes the observed data
the most likely data to have been observed.
Likelihood Function : is the PDF p(x; θ) when θ is regarded as a variable
(not a parameter).
ML Estimate : is the value of θ that maximizes the likelihood function.
Procedure : Find the log-likelihood function ln p(x; θ) ; differentiate w.r.t. θ,
set to zero and solve for θ.
(Maximum Likelihood Estimation) Estimation Theory Spring 2011 58 / 152
Maximum Likelihood Estimation
Example : Consider a DC level in WGN with unknown variance,
x[n] = A + w[n]. Suppose that A > 0 and σ² = A. The PDF is :
p(x; A) = (1/(2πA)^{N/2}) exp(−(1/(2A)) Σ_{n=0}^{N−1}(x[n] − A)²)
Taking the derivative of the log-likelihood function, we have :
∂ ln p(x; A)/∂A = −N/(2A) + (1/A) Σ_{n=0}^{N−1}(x[n] − A) + (1/(2A²)) Σ_{n=0}^{N−1}(x[n] − A)²
What is the CRLB ? Does an MVU estimator exist ?
The MLE can be found by setting the above equation to zero :
Â = −1/2 + √((1/N) Σ_{n=0}^{N−1} x²[n] + 1/4)
(Maximum Likelihood Estimation) Estimation Theory Spring 2011 59 / 152
Stochastic Convergence
Convergence in distribution : Let {p(x_N)} be a sequence of PDFs. If
there exists a PDF p(x) such that
lim_{N→∞} p(x_N) = p(x)
at every point x at which p(x) is continuous, we say that x_N converges
in distribution to x. We also write x_N →d x.
Example
Consider N independent RVs x_1, x_2, . . . , x_N with mean μ and finite variance
σ². Let x̄_N = (1/N) Σ_{n=1}^{N} x_n. Then, according to the Central Limit Theorem
(CLT), z_N = √N (x̄_N − μ)/σ converges in distribution to z ∼ N(0, 1).
(Maximum Likelihood Estimation) Estimation Theory Spring 2011 60 / 152
Stochastic Convergence
Convergence in probability : A sequence of random variables {x_N}
converges in probability to the random variable x if for every ε > 0
lim_{N→∞} Pr{|x_N − x| > ε} = 0
Convergence with probability 1 (almost sure convergence) :
A sequence of random variables {x_N} converges with probability 1 to the
random variable x if and only if for all possible events
Pr{ lim_{N→∞} x_N = x } = 1
Example
Consider N independent RVs x_1, x_2, . . . , x_N with mean μ and finite variance
σ². Let x̄_N = (1/N) Σ_{n=1}^{N} x_n. Then according to the Law of Large Numbers
(LLN), x̄_N converges in probability to μ.
(Maximum Likelihood Estimation) Estimation Theory Spring 2011 61 / 152
Asymptotic Properties of Estimators
Asymptotic Unbiasedness : Estimator θ̂_N is an asymptotically unbiased
estimator of θ if :
lim_{N→∞} E(θ̂_N) − θ = 0
Asymptotic Distribution : It refers to p(x_n) as it evolves for
n = 1, 2, . . ., especially for large values of n (it is not the ultimate form of the
distribution, which may be degenerate).
Asymptotic Variance : is not equal to lim_{N→∞} var(θ̂_N) (which is the
limiting variance). It is defined as :
asymptotic var(θ̂_N) = (1/N) lim_{N→∞} E{N[θ̂_N − lim_{N→∞} E(θ̂_N)]²}
Mean-Square Convergence : Estimator θ̂_N converges to θ in a
mean-squared sense if :
lim_{N→∞} E[(θ̂_N − θ)²] = 0
(Maximum Likelihood Estimation) Estimation Theory Spring 2011 62 / 152
Consistency
Estimator θ̂_N is a consistent estimator of θ if for every ε > 0
plim(θ̂_N) = θ, i.e.  lim_{N→∞} Pr[|θ̂_N − θ| > ε] = 0
Remarks :
If θ̂_N is asymptotically unbiased and its limiting variance is zero then
it converges to θ in mean-square.
If θ̂_N converges to θ in mean-square, then the estimator is consistent.
Asymptotic unbiasedness does not imply consistency and vice versa.
plim can be treated as an operator, e.g. :
plim(xy) = plim(x)plim(y) ;  plim(x/y) = plim(x)/plim(y)
The importance of consistency is that any continuous function of a
consistent estimator is itself a consistent estimator.
(Maximum Likelihood Estimation) Estimation Theory Spring 2011 63 / 152
Maximum Likelihood Estimation
Properties : The MLE may be biased and is not necessarily an efficient
estimator. However :
The MLE is a consistent estimator, meaning that
lim_{N→∞} Pr[|θ̂ − θ| > ε] = 0
The MLE asymptotically attains the CRLB (its asymptotic variance is equal
to the CRLB).
Under some regularity conditions, the MLE is asymptotically
normally distributed
θ̂ ∼a N(θ, I^{−1}(θ))
even if the PDF of x is not Gaussian.
If an MVU estimator exists, then the ML procedure will find it.
(Maximum Likelihood Estimation) Estimation Theory Spring 2011 64 / 152
Maximum Likelihood Estimation
Example
Consider a DC level in WGN with known variance σ².
Sol. Then
p(x; A) = (1/(2πσ²)^{N/2}) exp(−(1/(2σ²)) Σ_{n=0}^{N−1}(x[n] − A)²)
∂ ln p(x; A)/∂A = (1/σ²) Σ_{n=0}^{N−1}(x[n] − A) = 0  ⇒  Σ_{n=0}^{N−1} x[n] − NÂ = 0
which leads to
Â = x̄ = (1/N) Σ_{n=0}^{N−1} x[n]
(Maximum Likelihood Estimation) Estimation Theory Spring 2011 65 / 152
MLE for Transformed Parameters (Invariance Property)
Theorem (Invariance Property of the MLE)
The MLE of the parameter α = g(θ), where the PDF p(x; θ) is
parameterized by θ, is given by
α̂ = g(θ̂)
where θ̂ is the MLE of θ.
It can be proved using the property of consistent estimators.
If α = g(θ) is a one-to-one function, then
α̂ = arg max_α p(x; g^{−1}(α)) = g(θ̂)
If α = g(θ) is not a one-to-one function, then
p̄_T(x; α) = max_{θ : α = g(θ)} p(x; θ)  and  α̂ = arg max_α p̄_T(x; α) = g(θ̂)
(Maximum Likelihood Estimation) Estimation Theory Spring 2011 66 / 152
MLE for Transformed Parameters (Invariance Property)
Example
Consider a DC level in WGN and find the MLE of α = exp(A). Since g(θ) is a
one-to-one function :
α̂ = arg max_α p_T(x; α) = arg max_α p(x; ln α) = exp(x̄)
Example
Consider a DC level in WGN and find the MLE of α = A². Since g(θ) is not a
one-to-one function :
α̂ = arg max_{α ≥ 0} max{p(x; √α), p(x; −√α)}
  = (arg max_{−∞ < A < ∞} p(x; A))² = Â² = x̄²
(Maximum Likelihood Estimation) Estimation Theory Spring 2011 67 / 152
MLE (Extension to Vector Parameter)
Example
Consider a DC level in WGN with unknown variance. The vector parameter
θ = [A σ²]^T should be estimated.
We have :
∂ ln p(x; θ)/∂A = (1/σ²) Σ_{n=0}^{N−1}(x[n] − A)
∂ ln p(x; θ)/∂σ² = −N/(2σ²) + (1/(2σ⁴)) Σ_{n=0}^{N−1}(x[n] − A)²
which leads to the following MLE :
Â = x̄,  σ̂² = (1/N) Σ_{n=0}^{N−1}(x[n] − x̄)²
(Maximum Likelihood Estimation) Estimation Theory Spring 2011 68 / 152
MLE for General Gaussian Case
Consider the general Gaussian case where x ∼ N(μ(θ), C(θ)).
The partial derivative of the log-PDF is :
∂ ln p(x; θ)/∂θ_k = −(1/2) tr[C^{−1}(θ) ∂C(θ)/∂θ_k] + (∂μ(θ)/∂θ_k)^T C^{−1}(θ)(x − μ(θ))
                    − (1/2)(x − μ(θ))^T ∂C^{−1}(θ)/∂θ_k (x − μ(θ))
for k = 1, . . . , p.
By setting the above equations equal to zero, the MLE can be found.
A particular case is when C is known (the first and third terms
become zero).
In addition, if μ(θ) is linear in θ, the general linear model is obtained.
(Maximum Likelihood Estimation) Estimation Theory Spring 2011 69 / 152
MLE for General Linear Models
Consider the general linear model x = Hθ + w where w is a noise vector
with PDF N(0, C) :
p(x; θ) = (1/√((2π)^N det(C))) exp(−(1/2)(x − Hθ)^T C^{−1}(x − Hθ))
Taking the derivative of ln p(x; θ) leads to :
∂ ln p(x; θ)/∂θ = (∂(Hθ)^T/∂θ) C^{−1}(x − Hθ)
Then
H^T C^{−1}(x − Hθ) = 0  ⇒  θ̂ = (H^T C^{−1} H)^{−1} H^T C^{−1} x
which is the same as the MVU estimator. The PDF of θ̂ is :
θ̂ ∼ N(θ, (H^T C^{−1} H)^{−1})
(Maximum Likelihood Estimation) Estimation Theory Spring 2011 70 / 152
MLE (Numerical Method)
Newton–Raphson : A closed-form estimator cannot always be computed
by maximizing the likelihood function. However, the maximum can
be computed by numerical methods like the iterative Newton–Raphson
algorithm :
θ_{k+1} = θ_k − [∂² ln p(x; θ)/∂θ∂θ^T]^{−1} ∂ ln p(x; θ)/∂θ |_{θ = θ_k}
Remarks :
The Hessian can be replaced by the negative of its expectation, the
Fisher information matrix I(θ).
This method suffers from convergence problems (local maxima).
Typically, for large data lengths, the log-likelihood function becomes
more quadratic and the algorithm will produce the MLE.
(Maximum Likelihood Estimation) Estimation Theory Spring 2011 71 / 152
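A minimal Newton–Raphson sketch for the scalar MLE of slide 59 (DC level with σ² = A), where the closed-form solution is available for comparison; the data values are assumptions :

```python
import numpy as np

rng = np.random.default_rng(6)
A_true, N = 2.0, 1000                          # assumed: A > 0 and sigma^2 = A
x = A_true + np.sqrt(A_true) * rng.standard_normal(N)
S2 = np.sum(x**2)

def score(A):        # d ln p / dA for the sigma^2 = A model, simplified
    return S2 / (2 * A**2) - N / (2 * A) - N / 2

def hessian(A):      # d^2 ln p / dA^2
    return -S2 / A**3 + N / (2 * A**2)

A = x.mean()         # initial guess
for _ in range(20):
    A = A - score(A) / hessian(A)              # Newton-Raphson update

A_closed = -0.5 + np.sqrt(S2 / N + 0.25)       # closed-form MLE from slide 59
print(A, A_closed)                             # both near A_true
```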
Least Squares Estimation
(Least Squares) Estimation Theory Spring 2011 72 / 152
The Least Squares Approach
In all the previous methods we assumed that the measured signal x[n] is
the sum of a true signal s[n] and a measurement error w[n] with a known
probabilistic model. In the least squares method
x[n] = s[n, θ] + e[n]
where e[n] represents the modeling and measurement errors. The objective
is to minimize the LS cost :
J(θ) = Σ_{n=0}^{N−1} (x[n] − s[n, θ])²
We do not need a probabilistic assumption but only
a deterministic signal model.
It has a broader range of applications.
No claim about optimality can be made.
The statistical performance cannot be assessed.
(Least Squares) Estimation Theory Spring 2011 73 / 152
The Least Squares Approach
Example
Estimate the DC level of a signal. We observe x[n] = A + e[n] for
n = 0, . . . , N−1 and the LS criterion is :
J(A) = Σ_{n=0}^{N−1} (x[n] − A)²
∂J(A)/∂A = −2 Σ_{n=0}^{N−1} (x[n] − A) = 0  ⇒  Â = (1/N) Σ_{n=0}^{N−1} x[n]
(Least Squares) Estimation Theory Spring 2011 74 / 152
Linear Least Squares
Suppose that the observation model is linear, x = Hθ + e, then
J(θ) = Σ_{n=0}^{N−1}(x[n] − s[n, θ])² = (x − Hθ)^T(x − Hθ) = x^T x − 2x^T Hθ + θ^T H^T Hθ
where H is full rank. The gradient is
∂J(θ)/∂θ = −2H^T x + 2H^T Hθ = 0  ⇒  θ̂ = (H^T H)^{−1} H^T x
The minimum LS cost is :
J_min = (x − Hθ̂)^T(x − Hθ̂) = x^T[I − H(H^T H)^{−1} H^T]x = x^T(x − Hθ̂)
where I − H(H^T H)^{−1} H^T is an idempotent matrix.
(Least Squares) Estimation Theory Spring 2011 75 / 152
Comparing Dierent Estimators for the Linear Model
Consider the following linear model
x = Hθ + w
Estimator | Assumption                    | Estimate
LSE       | no probabilistic assumption   | θ̂ = (H^T H)^{−1} H^T x
BLUE      | w is white with unknown PDF   | θ̂ = (H^T H)^{−1} H^T x
MLE       | w is white Gaussian noise     | θ̂ = (H^T H)^{−1} H^T x
MVUE      | w is white Gaussian noise     | θ̂ = (H^T H)^{−1} H^T x
For the MLE and MVUE the PDF of the estimate is Gaussian.
(Least Squares) Estimation Theory Spring 2011 76 / 152
Weighted Linear Least Squares
The LS criterion can be modified by including a positive definite
(symmetric) weighting matrix W :
J(θ) = (x − Hθ)^T W(x − Hθ)
That leads to the following estimator :
θ̂ = (H^T WH)^{−1} H^T Wx
and minimum LS cost :
J_min = x^T[W − WH(H^T WH)^{−1} H^T W]x
Remark : If we take W = C^{−1}, where C is the covariance of the noise, then
the weighted least squares estimator is the BLUE. However, there is no true
LS-based reason for this choice.
(Least Squares) Estimation Theory Spring 2011 77 / 152
Geometrical Interpretation
Recall the general signal model s = Hθ. If we denote the columns of H by
h_i we have :
s = [h_1 h_2 · · · h_p][θ_1; θ_2; . . . ; θ_p] = Σ_{i=1}^{p} θ_i h_i
The signal model is a linear combination of the vectors {h_i}.
The LS minimizes the length of the error vector between the data and
the signal model, ε = x − s :
J(θ) = (x − Hθ)^T(x − Hθ) = ||x − Hθ||²
The data vector can lie anywhere in R^N, while signal vectors must lie
in a p-dimensional subspace of R^N, termed S^p, which is spanned by
the columns of H.
(Least Squares) Estimation Theory Spring 2011 78 / 152
Geometrical Interpretation (Orthogonal Projection)
Intuitively, it is clear that the LS error is minimized when ŝ = Hθ̂ is
the orthogonal projection of x onto S^p.
So the LS error vector ε = x − ŝ is orthogonal to all columns of H :
H^T ε = 0  ⇒  H^T(x − Hθ̂) = 0  ⇒  θ̂ = (H^T H)^{−1} H^T x
The signal estimate is the projection of x onto S^p :
ŝ = Hθ̂ = H(H^T H)^{−1} H^T x = Px
where P is the orthogonal projection matrix.
Note that if z ∈ Range(H), then Pz = z. Recall that Range(H) is the
subspace spanned by the columns of H.
Now, since Px ∈ S^p, then P(Px) = Px. Therefore any projection
matrix is idempotent, i.e. P² = P.
It can be verified that P is symmetric and singular (with rank p).
(Least Squares) Estimation Theory Spring 2011 79 / 152
Geometrical Interpretation (Orthonormal columns of H)
Recall that
H^T H = [ ⟨h_1, h_1⟩  ⟨h_1, h_2⟩  · · ·  ⟨h_1, h_p⟩ ;
          ⟨h_2, h_1⟩  ⟨h_2, h_2⟩  · · ·  ⟨h_2, h_p⟩ ;
          ⋮ ;
          ⟨h_p, h_1⟩  ⟨h_p, h_2⟩  · · ·  ⟨h_p, h_p⟩ ]
If the columns of H are orthonormal then H^T H = I and θ̂ = H^T x.
In this case we have θ̂_i = h_i^T x, thus
ŝ = Hθ̂ = Σ_{i=1}^{p} θ̂_i h_i = Σ_{i=1}^{p} (h_i^T x) h_i
If we increase the number of parameters (the order of the linear
model), we can easily compute the new estimate.
(Least Squares) Estimation Theory Spring 2011 80 / 152
Choosing the Model Order
Suppose that you have a set of data and the objective is to fit a
polynomial to the data. What is the best polynomial order ?
Remarks :
It is clear that by increasing the order, J_min is monotonically
non-increasing.
By choosing p = N we can perfectly fit the model to the data. However,
we fit the noise as well.
We should choose the simplest model that adequately describes the
data.
We increase the order only if the cost reduction is significant.
If we have an idea about the expected level of J_min, we increase p to
approximately attain this level.
There is an order-recursive LS algorithm to efficiently compute a
(p + 1)-th order model based on a p-th order one (see Section 8.6).
(Least Squares) Estimation Theory Spring 2011 81 / 152
Sequential Least Squares
Suppose that θ̂[N−1] based on x[N−1] = [x[0], . . . , x[N−1]]^T is
available. If we get a new data sample x[N], we want to compute θ̂[N] as a
function of θ̂[N−1] and x[N].
Example
Consider the LS estimate of a DC level : Â[N−1] = (1/N) Σ_{n=0}^{N−1} x[n]. We have :
Â[N] = (1/(N+1)) Σ_{n=0}^{N} x[n] = (1/(N+1)) [N((1/N) Σ_{n=0}^{N−1} x[n]) + x[N]]
     = (N/(N+1)) Â[N−1] + (1/(N+1)) x[N]
     = Â[N−1] + (1/(N+1)) (x[N] − Â[N−1])
new estimate = old estimate + gain × prediction error
(Least Squares) Estimation Theory Spring 2011 82 / 152
Sequential Least Squares
Example (DC level in uncorrelated noise)
x[n] = A + w[n] and var(w[n]) = σ²_n. The WLS estimate (or BLUE) is :
Â[N−1] = (Σ_{n=0}^{N−1} x[n]/σ²_n) / (Σ_{n=0}^{N−1} 1/σ²_n)
Similar to the previous example we can obtain :
Â[N] = Â[N−1] + (1/σ²_N)/(Σ_{n=0}^{N} 1/σ²_n) (x[N] − Â[N−1]) = Â[N−1] + K[N](x[N] − Â[N−1])
The gain factor K[N] can be reformulated as :
K[N] = var(Â[N−1]) / (var(Â[N−1]) + σ²_N)
(Least Squares) Estimation Theory Spring 2011 83 / 152
Sequential Least Squares
Consider the general linear model x = Hθ + w, where w is an uncorrelated
noise with covariance matrix C. The BLUE (or WLS) is :
θ̂ = (H^T C^{−1} H)^{−1} H^T C^{−1} x  and  C_θ̂ = (H^T C^{−1} H)^{−1}
Let's define :
C[n] = diag(σ²_0, σ²_1, . . . , σ²_n)
H[n] = [H[n−1] ; h^T[n]]  (stacking an n × p block over a 1 × p row)
x[n] = [x[0], x[1], . . . , x[n]]^T
The objective is to find θ̂[n], based on n+1 data samples, as a function of
θ̂[n−1] and the new data x[n]. The batch estimator is :
θ̂[n−1] = Σ[n−1] H^T[n−1] C^{−1}[n−1] x[n−1]
with Σ[n−1] = (H^T[n−1] C^{−1}[n−1] H[n−1])^{−1}
(Least Squares) Estimation Theory Spring 2011 84 / 152
Sequential Least Squares
Estimator Update :
θ̂[n] = θ̂[n−1] + K[n](x[n] − h^T[n]θ̂[n−1])
where
K[n] = Σ[n−1]h[n] / (σ²_n + h^T[n]Σ[n−1]h[n])
Covariance Update :
Σ[n] = (I − K[n]h^T[n])Σ[n−1]
The following lemma is used to compute the updates :
Matrix Inversion Lemma
(A + BCD)^{−1} = A^{−1} − A^{−1}B[C^{−1} + DA^{−1}B]^{−1}DA^{−1}
with A = Σ^{−1}[n−1], B = h[n], C = 1/σ²_n, D = h^T[n].
Initialization ?
(Least Squares) Estimation Theory Spring 2011 85 / 152
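A minimal sketch of these recursions for the scalar DC-level case (h[n] = 1, constant σ²), initialized with the first sample; the recursion then reproduces the batch sample mean exactly. The numerical values are assumptions :

```python
import numpy as np

rng = np.random.default_rng(7)
A, sigma, N = 1.5, 1.0, 200                       # assumed values
x = A + sigma * rng.standard_normal(N)

# Sequential LS for theta = A with h[n] = 1 and noise variance sigma^2
theta = x[0]                     # initialize with the first sample
Sigma = sigma**2                 # its variance
for n in range(1, N):
    K = Sigma / (sigma**2 + Sigma)                # gain
    theta = theta + K * (x[n] - theta)            # estimator update
    Sigma = (1 - K) * Sigma                       # covariance update

print(theta, x.mean())           # sequential and batch estimates coincide
print(Sigma, sigma**2 / N)       # final variance equals sigma^2 / N
```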
Constrained Least Squares
Linear constraints of the form Aθ = b can be included in the LS solution.
The LS criterion becomes :
J_c(θ) = (x − Hθ)^T(x − Hθ) + λ^T(Aθ − b)
∂J_c/∂θ = −2H^T x + 2H^T Hθ + A^T λ
Setting the gradient equal to zero produces :
θ̂_c = (H^T H)^{−1}H^T x − (1/2)(H^T H)^{−1}A^T λ = θ̂ − (H^T H)^{−1}A^T (λ/2)
where θ̂ = (H^T H)^{−1}H^T x is the unconstrained LSE.
Now Aθ̂_c = b can be solved to find λ :
λ = 2[A(H^T H)^{−1}A^T]^{−1}(Aθ̂ − b)
(Least Squares) Estimation Theory Spring 2011 86 / 152
Nonlinear Least Squares
Many applications have a nonlinear observation model : s(θ) ≠ Hθ. This
leads to a nonlinear optimization problem that can be solved numerically.
Newton–Raphson Method : Find a zero of the gradient of the criterion
by linearizing the gradient around the current estimate :
θ_{k+1} = θ_k − [∂g(θ)/∂θ]^{−1} g(θ) |_{θ = θ_k}
where
g(θ) = (∂s^T(θ)/∂θ)[x − s(θ)]  and  ∂g(θ)/∂θ = ∂/∂θ{(∂s^T(θ)/∂θ)}[x − s(θ)] − (∂s^T(θ)/∂θ)(∂s(θ)/∂θ^T)
Around the solution, [x − s(θ)] is small, so the first term in the Jacobian
can be neglected. This makes the method equivalent to the Gauss–Newton
algorithm, which is numerically more robust.
(Least Squares) Estimation Theory Spring 2011 87 / 152
Nonlinear Least Squares
Gauss–Newton Method : Linearize the signal model around the current
estimate and solve the resulting linear problem.
s(θ) ≈ s(θ_0) + ∂s(θ)/∂θ|_{θ = θ_0} (θ − θ_0) = s(θ_0) + H(θ_0)(θ − θ_0)
The solution to the linearized problem is :
θ̂ = [H^T(θ_0)H(θ_0)]^{−1}H^T(θ_0)[x − s(θ_0) + H(θ_0)θ_0]
  = θ_0 + [H^T(θ_0)H(θ_0)]^{−1}H^T(θ_0)[x − s(θ_0)]
If we now iterate the solution, it becomes :
θ_{k+1} = θ_k + [H^T(θ_k)H(θ_k)]^{−1}H^T(θ_k)[x − s(θ_k)]
Remark : Both the Newton–Raphson and the Gauss–Newton methods can
have convergence problems.
(Least Squares) Estimation Theory Spring 2011 88 / 152
Nonlinear Least Squares
Transformation of parameters : Transform into a linear problem by
seeking an invertible function α = g(θ) such that :
s(θ) = s(g^{−1}(α)) = Hα
So the nonlinear LSE is θ̂ = g^{−1}(α̂) where α̂ = (H^T H)^{−1}H^T x.
Example (Estimate the amplitude and phase of a sinusoidal signal)
s[n] = A cos(2πf_0 n + φ),  n = 0, 1, . . . , N−1
The LS problem is nonlinear; however, we have :
A cos(2πf_0 n + φ) = A cos φ cos 2πf_0 n − A sin φ sin 2πf_0 n
If we let α_1 = A cos φ and α_2 = A sin φ then the signal model becomes
linear, s = Hα. The LSE is α̂ = (H^T H)^{−1}H^T x and g^{−1}(α̂) gives
Â = √(α̂_1² + α̂_2²)  and  φ̂ = arctan(α̂_2/α̂_1)
(Least Squares) Estimation Theory Spring 2011 89 / 152
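A minimal sketch of this linear reparameterization with f_0 known; the amplitude, phase, noise level and sizes are assumptions chosen for illustration :

```python
import numpy as np

rng = np.random.default_rng(8)
N, f0, A, phi, sigma = 100, 0.08, 1.3, 0.7, 0.3    # assumed values; f0 is known
n = np.arange(N)
x = A * np.cos(2 * np.pi * f0 * n + phi) + sigma * rng.standard_normal(N)

# Linear reparameterization: s[n] = a1*cos(2*pi*f0*n) - a2*sin(2*pi*f0*n),
# with a1 = A cos(phi) and a2 = A sin(phi)
H = np.column_stack([np.cos(2 * np.pi * f0 * n), -np.sin(2 * np.pi * f0 * n)])
a1, a2 = np.linalg.solve(H.T @ H, H.T @ x)

A_hat = np.hypot(a1, a2)            # invert the transformation
phi_hat = np.arctan2(a2, a1)
print(A_hat, phi_hat)               # close to A and phi
```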
Nonlinear Least Squares
Separability of parameters : If the signal model is linear in some of the
parameters, try to write it as s = H(α)β. The LS error can then be minimized
w.r.t. β :
β̂ = [H^T(α)H(α)]^{−1}H^T(α)x
The cost function becomes a function of α only, which can be minimized by a
numerical method (e.g. brute force) :
J(α, β̂) = x^T { I − H(α)[H^T(α)H(α)]^{−1}H^T(α) } x
Then
α̂ = arg max_α x^T { H(α)[H^T(α)H(α)]^{−1}H^T(α) } x
Remark : This method is interesting if the dimension of α is much less
than the dimension of β.
(Least Squares) Estimation Theory Spring 2011 90 / 152
Nonlinear Least Squares
Example (Damped Exponentials)
Consider the following signal model :
s[n] = A_1 r^n + A_2 r^{2n} + A_3 r^{3n},  n = 0, 1, . . . , N−1
with θ = [A_1, A_2, A_3, r]^T. It is known that 0 < r < 1. Let's take
β = [A_1, A_2, A_3]^T so we get s = H(r)β with :
H(r) = [ 1        1           1 ;
         r        r²          r³ ;
         ⋮ ;
         r^{N−1}  r^{2(N−1)}  r^{3(N−1)} ]
Step 1 : Maximize x^T { H(r)[H^T(r)H(r)]^{−1}H^T(r) } x to obtain r̂.
Step 2 : Compute β̂ = [H^T(r̂)H(r̂)]^{−1}H^T(r̂)x.
(Least Squares) Estimation Theory Spring 2011 91 / 152
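A minimal sketch of this separable search: grid-search r over (0, 1), maximize the projection energy of Step 1, then solve the linear amplitude problem of Step 2. All numerical values and the grid are assumptions :

```python
import numpy as np

rng = np.random.default_rng(9)
N, sigma = 60, 0.05                                 # assumed values
A_true, r_true = np.array([1.0, -0.8, 0.4]), 0.9
n = np.arange(N)

def Hmat(r):
    return np.column_stack([r**n, r**(2 * n), r**(3 * n)])   # columns r^n, r^2n, r^3n

x = Hmat(r_true) @ A_true + sigma * rng.standard_normal(N)

def proj_energy(r):
    # x^T H (H^T H)^{-1} H^T x via a least-squares solve (robust to ill-conditioning)
    H = Hmat(r)
    coef = np.linalg.lstsq(H, x, rcond=None)[0]
    return x @ (H @ coef)

r_grid = np.linspace(0.05, 0.99, 941)
r_hat = r_grid[np.argmax([proj_energy(r) for r in r_grid])]       # Step 1
A_hat = np.linalg.lstsq(Hmat(r_hat), x, rcond=None)[0]            # Step 2
print(r_hat, A_hat)                         # close to r_true and A_true
```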
The Bayesian Philosophy
(The Bayesian Philosophy) Estimation Theory Spring 2011 92 / 152
Introduction
Classical Approach :
Assumes θ is unknown but deterministic.
Prior knowledge on θ cannot be used.
Variance of the estimate may depend on θ.
The MVUE may not exist.
In Monte-Carlo simulations, we do M runs for each fixed θ and then
compute the sample mean and variance for each θ (no averaging over θ).
Bayesian Approach :
Assumes θ is random with a known prior PDF, p(θ).
We estimate a realization of θ based on the available data.
Variance of the estimate does not depend on θ.
A Bayesian estimate always exists.
In Monte-Carlo simulations, we do M runs for randomly chosen θ and
then compute the sample mean and variance over all θ values.
(The Bayesian Philosophy) Estimation Theory Spring 2011 93 / 152
Mean Square Error (MSE)
Classical MSE
The classical MSE is a function of the unknown parameter and cannot be
used for constructing estimators :
mse(θ̂) = E{[θ − θ̂(x)]²} = ∫ [θ − θ̂(x)]² p(x; θ) dx
Note that the E{·} is w.r.t. the PDF of x.
Bayesian MSE :
The Bayesian MSE is not a function of θ and can be minimized to find an
estimator :
Bmse(θ̂) = E{[θ − θ̂(x)]²} = ∫∫ [θ − θ̂(x)]² p(x, θ) dx dθ
Note that the E{·} is w.r.t. the joint PDF of x and θ.
(The Bayesian Philosophy) Estimation Theory Spring 2011 94 / 152
Minimum Mean Square Error Estimator
Consider the estimation of A that minimizes the Bayesian MSE, where A is
a random variable with uniform prior PDF p(A) = U[−A_0, A_0], independent
of w[n].
Bmse(Â) = ∫∫ [A − Â]² p(x, A) dx dA = ∫ [∫ [A − Â]² p(A|x) dA] p(x) dx
where we used Bayes' theorem : p(x, A) = p(A|x)p(x).
Since p(x) ≥ 0 for all x, we need only minimize the integral in brackets.
We set the derivative equal to zero :
∂/∂Â ∫ [A − Â]² p(A|x) dA = −2 ∫ A p(A|x) dA + 2Â ∫ p(A|x) dA
which results in
Â = ∫ A p(A|x) dA = E(A|x)
Bayesian MMSE Estimate : is the conditional mean of A given the data x,
or the mean of the posterior PDF p(A|x).
(The Bayesian Philosophy) Estimation Theory Spring 2011 95 / 152
Minimum Mean Square Error Estimator
How to compute the Bayesian MMSE estimate :
Â = E(A|x) = ∫ A p(A|x) dA
The posterior PDF can be computed using Bayes' rule :
p(A|x) = p(x|A)p(A)/p(x) = p(x|A)p(A) / ∫ p(x|A)p(A) dA
p(x) is the marginal PDF defined as p(x) = ∫ p(x, A) dA.
The integral in the denominator acts as a normalization of
p(x|A)p(A) such that the integral of p(A|x) equals 1.
p(x|A) has exactly the same form as p(x; A), which is used in
classical estimation.
The MMSE estimator always exists and can be computed by :
Â = ∫ A p(x|A)p(A) dA / ∫ p(x|A)p(A) dA
(The Bayesian Philosophy) Estimation Theory Spring 2011 96 / 152
Minimum Mean Square Error Estimator
Example : For p(A) = U[−A_0, A_0] we have :
Â = [ (1/2A_0) ∫_{−A_0}^{A_0} A (2πσ²)^{−N/2} exp(−(1/(2σ²)) Σ_{n=0}^{N−1}(x[n]−A)²) dA ]
    / [ (1/2A_0) ∫_{−A_0}^{A_0} (2πσ²)^{−N/2} exp(−(1/(2σ²)) Σ_{n=0}^{N−1}(x[n]−A)²) dA ]
Before collecting the data, the mean of the prior PDF p(A) is the best
estimate, while after collecting the data the best estimate is the mean
of the posterior PDF p(A|x).
The choice of p(A) is crucial for the quality of the estimation.
Only a Gaussian prior PDF leads to a closed-form estimator.
Conclusion : For an accurate estimator choose a prior PDF that can be
physically justified. For a closed-form estimator choose a Gaussian prior
PDF.
(The Bayesian Philosophy) Estimation Theory Spring 2011 97 / 152
Properties of the Gaussian PDF
Theorem (Conditional PDF of Bivariate Gaussian)
If x and y are distributed according to a bivariate Gaussian PDF :
p(x, y) = (1/(2π√(det(C)))) exp(−(1/2) [x − E(x); y − E(y)]^T C^{−1} [x − E(x); y − E(y)])
with covariance matrix : C = [ var(x)  cov(x, y) ; cov(x, y)  var(y) ]
then the conditional PDF p(y|x) is also Gaussian and :
E(y|x) = E(y) + (cov(x, y)/var(x)) [x − E(x)]
var(y|x) = var(y) − cov²(x, y)/var(x)
(The Bayesian Philosophy) Estimation Theory Spring 2011 98 / 152
Properties of the Gaussian PDF
Example (DC level in WGN with Gaussian prior PDF)
Consider the Bayesian model x[0] = A + w[0] (just one observation) with
a prior PDF A ∼ N(μ_A, σ²_A), independent of the noise.
First we compute the covariance cov(A, x[0]) :
cov(A, x[0]) = E{(A − μ_A)(x[0] − E(x[0]))}
             = E{(A − μ_A)(A + w[0] − μ_A − 0)}
             = E{(A − μ_A)² + (A − μ_A)w[0]} = var(A) + 0 = σ²_A
The posterior PDF is also Gaussian, so the Bayesian MMSE estimate is :
Â = μ_{A|x} = μ_A + (cov(A, x[0])/var(x[0]))(x[0] − μ_A) = μ_A + (σ²_A/(σ² + σ²_A))(x[0] − μ_A)
var(Â) = σ²_{A|x} = σ²_A − cov²(A, x[0])/var(x[0]) = σ²_A (1 − σ²_A/(σ² + σ²_A))
(The Bayesian Philosophy) Estimation Theory Spring 2011 99 / 152
Properties of the Gaussian PDF
Theorem (Conditional PDF of Multivariate Gaussian)
If x (with dimension k × 1) and y (with dimension l × 1) are jointly
Gaussian with PDF :
p(x, y) = (1/((2π)^{(k+l)/2} √(det(C)))) exp(−(1/2) [x − E(x); y − E(y)]^T C^{−1} [x − E(x); y − E(y)])
with covariance matrix : C = [ C_xx  C_xy ; C_yx  C_yy ]
then the conditional PDF p(y|x) is also Gaussian and :
E(y|x) = E(y) + C_yx C_xx^{−1} [x − E(x)]
C_{y|x} = C_yy − C_yx C_xx^{−1} C_xy
(The Bayesian Philosophy) Estimation Theory Spring 2011 100 / 152
Bayesian Linear Model
Let the data be modeled as
x = Hθ + w
where θ is a p × 1 random vector with prior PDF N(μ_θ, C_θ) and w is a
noise vector with PDF N(0, C_w).
Since θ and w are independent and Gaussian, they are jointly Gaussian,
so the posterior PDF is also Gaussian. In order to find the MMSE
estimator we should compute the covariance matrices :
C_xx = E{[x − E(x)][x − E(x)]^T}
     = E{(Hθ + w − Hμ_θ)(Hθ + w − Hμ_θ)^T}
     = E{(H(θ − μ_θ) + w)(H(θ − μ_θ) + w)^T}
     = H E{(θ − μ_θ)(θ − μ_θ)^T} H^T + E(ww^T) = HC_θ H^T + C_w
C_θx = E{(θ − μ_θ)[H(θ − μ_θ) + w]^T} = E{(θ − μ_θ)(θ − μ_θ)^T} H^T = C_θ H^T
(The Bayesian Philosophy) Estimation Theory Spring 2011 101 / 152
Bayesian Linear Model
Theorem (Posterior PDF for the Bayesian General Linear Model)
If the observed data can be modeled as
x = Hθ + w
where θ is a p × 1 random vector with prior PDF N(μ_θ, C_θ) and w is a
noise vector with PDF N(0, C_w), then the posterior PDF p(θ|x) is
Gaussian with mean :
E(θ|x) = μ_θ + C_θ H^T(HC_θ H^T + C_w)^{−1}(x − Hμ_θ)
and covariance
C_{θ|x} = C_θ − C_θ H^T(HC_θ H^T + C_w)^{−1}HC_θ
Remark : In contrast to the classical general linear model, H need not be
full rank.
(The Bayesian Philosophy) Estimation Theory Spring 2011 102 / 152
Bayesian Linear Model
Example (DC Level in WGN with Gaussian Prior PDF)
x[n] = A + w[n],  n = 0, . . . , N−1,  A ∼ N(μ_A, σ²_A),  w[n] ∼ N(0, σ²)
We have the general Bayesian linear model x = HA + w with H = 1. Then
E(A|x) = μ_A + σ²_A 1^T(1σ²_A 1^T + σ²I)^{−1}(x − 1μ_A)
var(A|x) = σ²_A − σ²_A 1^T(1σ²_A 1^T + σ²I)^{−1}1σ²_A
We use the Matrix Inversion Lemma to get :
(I + 1(σ²_A/σ²)1^T)^{−1} = I − 11^T/(σ²/σ²_A + 1^T1) = I − 11^T/(σ²/σ²_A + N)
E(A|x) = μ_A + (σ²_A/(σ²_A + σ²/N))(x̄ − μ_A)  and  var(A|x) = (σ²/N)σ²_A/(σ²_A + σ²/N)
(The Bayesian Philosophy) Estimation Theory Spring 2011 103 / 152
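The posterior mean here is a shrinkage between the prior mean and the sample mean; a minimal sketch with assumed prior and noise parameters (not from the slides) :

```python
import numpy as np

rng = np.random.default_rng(10)
mu_A, var_A, sigma2, N = 0.0, 0.25, 1.0, 10       # assumed prior and noise parameters
A = rng.normal(mu_A, np.sqrt(var_A))              # draw one realization of A
x = A + np.sqrt(sigma2) * rng.standard_normal(N)

xbar = x.mean()
alpha = var_A / (var_A + sigma2 / N)              # weight put on the data
A_mmse = mu_A + alpha * (xbar - mu_A)             # posterior mean E(A|x)
post_var = (sigma2 / N) * var_A / (var_A + sigma2 / N)
print(A, xbar, A_mmse, post_var)
```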
Nuisance Parameters
Definition (Nuisance Parameter)
Suppose that θ and α are unknown parameters but we are only interested
in estimating θ. In this case α is called a nuisance parameter.
In the classical approach we have to estimate both, but in the
Bayesian approach we can integrate it out.
Note that in the Bayesian approach we can find p(θ|x) from p(θ, α|x)
as a marginal PDF :
p(θ|x) = ∫ p(θ, α|x) dα
We can also express it as
p(θ|x) = p(x|θ)p(θ) / ∫ p(x|θ)p(θ) dθ
where p(x|θ) = ∫ p(x|θ, α)p(α|θ) dα
Furthermore, if α is independent of θ we have :
p(x|θ) = ∫ p(x|θ, α)p(α) dα
(The Bayesian Philosophy) Estimation Theory Spring 2011 104 / 152
General Bayesian Estimators
(General Bayesian Estimators) Estimation Theory Spring 2011 105 / 152
General Bayesian Estimators
Risk Function
A general Bayesian estimator is obtained by minimizing the Bayes risk
θ̂ = arg min_θ̂ R(θ̂)
where R(θ̂) = E{C(ε)} is the Bayes risk, C(ε) is a cost function and
ε = θ − θ̂ is the estimation error.
Three common risk functions
1. Quadratic : R(θ̂) = E{ε²} = E{(θ − θ̂)²}
2. Absolute : R(θ̂) = E{|ε|} = E{|θ − θ̂|}
3. Hit-or-Miss : R(θ̂) = E{C(ε)}  where  C(ε) = 0 for |ε| < δ and 1 for |ε| ≥ δ
(General Bayesian Estimators) Estimation Theory Spring 2011 106 / 152
General Bayesian Estimators
The Bayes risk is
R(θ̂) = E{C(ε)} = ∫∫ C(θ − θ̂)p(x, θ) dθ dx = ∫ g(θ̂)p(x) dx
where g(θ̂) = ∫ C(θ − θ̂)p(θ|x) dθ should be minimized.
Quadratic
For this case R(θ̂) = E{(θ − θ̂)²} = Bmse(θ̂), which leads to the MMSE
estimator with
θ̂ = E(θ|x)
so θ̂ is the mean of the posterior PDF p(θ|x).
(General Bayesian Estimators) Estimation Theory Spring 2011 107 / 152
General Bayesian Estimators
Absolute
In this case we have :
g(θ̂) = ∫ |θ − θ̂| p(θ|x) dθ = ∫_{−∞}^{θ̂} (θ̂ − θ)p(θ|x) dθ + ∫_{θ̂}^{∞} (θ − θ̂)p(θ|x) dθ
By setting the derivative of g(θ̂) equal to zero and using Leibniz's
rule, we get :
∫_{−∞}^{θ̂} p(θ|x) dθ = ∫_{θ̂}^{∞} p(θ|x) dθ
Then θ̂ is the median (area to the left = area to the right) of p(θ|x).
(General Bayesian Estimators) Estimation Theory Spring 2011 108 / 152
General Bayesian Estimators
Hit-or-Miss
In this case we have
g(θ̂) = ∫_{−∞}^{θ̂−δ} 1·p(θ|x) dθ + ∫_{θ̂+δ}^{∞} 1·p(θ|x) dθ = 1 − ∫_{θ̂−δ}^{θ̂+δ} p(θ|x) dθ
For arbitrarily small δ the optimal estimate is the location of the
maximum of p(θ|x), i.e. the mode of the posterior PDF.
This estimator is called the maximum a posteriori (MAP) estimator.
Remark : For a unimodal and symmetric posterior PDF (e.g. Gaussian
PDF), the mean, the mode and the median are the same.
(General Bayesian Estimators) Estimation Theory Spring 2011 109 / 152
Minimum Mean Square Error Estimators
Extension to the vector parameter case : In general, like the scalar
case, we can write :
θ̂_i = E(θ_i|x) = ∫ θ_i p(θ_i|x) dθ_i,  i = 1, 2, . . . , p
Then p(θ_i|x) can be computed as a marginal conditional PDF. For
example :
θ̂_1 = ∫ θ_1 [∫ · · · ∫ p(θ|x) dθ_2 · · · dθ_p] dθ_1 = ∫ θ_1 p(θ_1|x) dθ_1
In vector form we have :
θ̂ = [∫ θ_1 p(θ|x) dθ ; ∫ θ_2 p(θ|x) dθ ; . . . ; ∫ θ_p p(θ|x) dθ] = ∫ θ p(θ|x) dθ = E(θ|x)
Similarly Bmse(θ̂_i) = ∫ [C_{θ|x}]_{ii} p(x) dx
(General Bayesian Estimators) Estimation Theory Spring 2011 110 / 152
Properties of MMSE Estimators
For the Bayesian linear model, poor prior knowledge leads to the MVU estimator:
θ̂ = E(θ|x) = μ_θ + (C_θθ^{-1} + H^T C_w^{-1} H)^{-1} H^T C_w^{-1} (x - H μ_θ)
For no prior knowledge, μ_θ → 0 and C_θθ^{-1} → 0, and therefore:
θ̂ → [H^T C_w^{-1} H]^{-1} H^T C_w^{-1} x
Commutes over affine transformations.
Suppose that α = Aθ + b; then the MMSE estimator for α is
α̂ = E(α|x) = E(Aθ + b|x) = A E(θ|x) + b = A θ̂ + b
Enjoys the additive property for independent data sets.
Assume that θ, x_1, x_2 are jointly Gaussian with x_1 and x_2 independent:
θ̂ = E(θ) + C_{θx1} C_{x1x1}^{-1} [x_1 - E(x_1)] + C_{θx2} C_{x2x2}^{-1} [x_2 - E(x_2)]
(General Bayesian Estimators) Estimation Theory Spring 2011 111 / 152
Maximum A Posteriori Estimators
In the MAP estimation approach we have:
θ̂ = arg max_θ p(θ|x)
where
p(θ|x) = p(x|θ) p(θ) / p(x)
An equivalent maximization is:
θ̂ = arg max_θ p(x|θ) p(θ)
or
θ̂ = arg max_θ [ln p(x|θ) + ln p(θ)]
If p(θ) is uniform, or is approximately constant around the maximum of p(x|θ) (for large data lengths), we can remove p(θ) to obtain the Bayesian Maximum Likelihood estimator:
θ̂ = arg max_θ p(x|θ)
(General Bayesian Estimators) Estimation Theory Spring 2011 112 / 152
Maximum A Posteriori Estimators
Example (DC Level in WGN with Uniform Prior PDF)
The MMSE estimator cannot be obtained in explicit form due to the need to evaluate the following integrals:
Â = [ (1/2A_0) ∫_{-A_0}^{A_0} A (2πσ²)^{-N/2} exp{ -(1/2σ²) Σ_{n=0}^{N-1} (x[n] - A)² } dA ]
    / [ (1/2A_0) ∫_{-A_0}^{A_0} (2πσ²)^{-N/2} exp{ -(1/2σ²) Σ_{n=0}^{N-1} (x[n] - A)² } dA ]
The MAP estimator is given as:
Â = -A_0  if x̄ < -A_0,    Â = x̄  if |x̄| ≤ A_0,    Â = A_0  if x̄ > A_0
Remark: The main advantage of the MAP estimator is that for non-jointly Gaussian PDFs it may lead to explicit solutions or less computational effort.
(General Bayesian Estimators) Estimation Theory Spring 2011 113 / 152
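A minimal sketch (numpy, with hypothetical values for A_0, σ and the true level) of the MAP estimator above: the sample mean clipped to the prior support [-A_0, A_0].

```python
import numpy as np

def map_dc_uniform(x, A0):
    """MAP estimate of a DC level with uniform prior U[-A0, A0] in WGN."""
    xbar = np.mean(x)
    return np.clip(xbar, -A0, A0)   # -A0 if xbar < -A0, xbar if |xbar| <= A0, A0 otherwise

rng = np.random.default_rng(1)
A0, sigma, A_true = 3.0, 1.0, 2.5             # assumed example values
x = A_true + sigma * rng.standard_normal(100)
print(map_dc_uniform(x, A0))
```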
Maximum A Posteriori Estimators
Extension to the vector parameter case
θ̂_1 = arg max_{θ_1} p(θ_1|x) = arg max_{θ_1} ∫ · · · ∫ p(θ|x) dθ_2 · · · dθ_p
This needs integral evaluation of the marginal conditional PDFs.
An alternative is the following vector MAP estimator, which maximizes the joint conditional PDF:
θ̂ = arg max_θ p(θ|x) = arg max_θ p(x|θ) p(θ)
This corresponds to a circular Hit-or-Miss cost function.
In general, as N → ∞, the MAP estimator → the Bayesian MLE.
If the posterior PDF is Gaussian, the mode is identical to the mean; therefore the MAP estimator is identical to the MMSE estimator.
The invariance property of ML theory does not hold for the MAP estimator.
(General Bayesian Estimators) Estimation Theory Spring 2011 114 / 152
Performance Description
In classical estimation the mean and the variance of the estimate (or its PDF) indicate the performance of the estimator.
In the Bayesian approach the PDF of the estimate is different for each realization of θ. So a good estimator should perform well for every possible value of θ.
The estimation error ε = θ - θ̂ is a function of two random variables (θ and x).
The mean and the variance of the estimation error, with respect to both random variables, indicate the performance of the estimator.
The mean value of the estimation error is zero, so the estimates are unbiased (in the Bayesian sense):
E_{x,θ}(θ - θ̂) = E_{x,θ}(θ - E(θ|x)) = E_x[E_{θ|x}(θ) - E(θ|x)] = E_x(0) = 0
The variance of the estimation error is the Bmse:
var(ε) = E_{x,θ}(ε²) = E_{x,θ}[(θ - θ̂)²] = Bmse(θ̂)
(General Bayesian Estimators) Estimation Theory Spring 2011 115 / 152
Performance Description (Vector Parameter Case)
The vector of the estimation error ε = θ - θ̂ has zero mean. Its covariance matrix is:
M_θ̂ = E_{x,θ}(ε ε^T) = E_{x,θ}{[θ - E(θ|x)][θ - E(θ|x)]^T}
     = E_x{ E_{θ|x}{[θ - E(θ|x)][θ - E(θ|x)]^T} } = E_x(C_θ|x)
If x and θ are jointly Gaussian, we have:
M_θ̂ = C_θ|x = C_θθ - C_θx C_xx^{-1} C_xθ
since C_θ|x does not depend on x.
For a Bayesian linear model we have:
M_θ̂ = C_θ|x = C_θθ - C_θθ H^T (H C_θθ H^T + C_w)^{-1} H C_θθ
In this case ε is a linear transformation of x and θ and thus is Gaussian. Therefore:
ε ~ N(0, M_θ̂)
(General Bayesian Estimators) Estimation Theory Spring 2011 116 / 152
Signal Processing Example
Deconvolution Problem: Estimate a signal transmitted through a channel with known impulse response:
x[n] = Σ_{m=0}^{n_s - 1} h[n - m] s[m] + w[n],   n = 0, 1, . . . , N - 1
where s[n] is a WSS Gaussian process with known ACF and w[n] is WGN with variance σ². In matrix form:

[ x[0]   ]   [ h[0]     0        · · ·  0          ] [ s[0]       ]   [ w[0]   ]
[ x[1]   ]   [ h[1]     h[0]     · · ·  0          ] [ s[1]       ]   [ w[1]   ]
[  ...   ] = [  ...      ...      ...    ...       ] [   ...      ] + [  ...   ]
[ x[N-1] ]   [ h[N-1]   h[N-2]   · · ·  h[N-n_s]   ] [ s[n_s - 1] ]   [ w[N-1] ]

that is, x = Hs + w. The MMSE estimator is
ŝ = C_s H^T (H C_s H^T + σ² I)^{-1} x
where C_s is a symmetric Toeplitz matrix with [C_s]_ij = r_ss[i - j].
(General Bayesian Estimators) Estimation Theory Spring 2011 117 / 152
Signal Processing Example
Consider that H = I (no filtering), so the Bayesian model becomes
x = s + w
Classical Approach: The MVU estimator is ŝ = (H^T H)^{-1} H^T x = x.
Bayesian Approach: The MMSE estimator is ŝ = C_s (C_s + σ² I)^{-1} x.
Scalar case: We estimate s[0] based on x[0]:
ŝ[0] = [ r_ss[0] / (r_ss[0] + σ²) ] x[0] = [ η / (η + 1) ] x[0]
where η = r_ss[0]/σ² is the SNR.
Signal is a Realization of an Auto-Regressive Process: Consider a first order AR process s[n] = -a s[n-1] + u[n], where u[n] is WGN with variance σ_u². The ACF of s is:
r_ss[k] = [ σ_u² / (1 - a²) ] (-a)^|k|,   [C_s]_ij = r_ss[i - j]
(General Bayesian Estimators) Estimation Theory Spring 2011 118 / 152
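The sketch below (numpy/scipy, with assumed values for a, σ_u² and σ²) simulates an AR(1) realization observed in white noise, builds the Toeplitz C_s from the ACF above, and applies ŝ = C_s(C_s + σ²I)^{-1}x; the Bayesian estimate should show a lower MSE than the raw data.

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(2)
N, a, var_u, var_w = 100, 0.8, 1.0, 0.5       # assumed example values

# AR(1) signal s[n] = -a s[n-1] + u[n] and its ACF r_ss[k] = var_u/(1-a^2) (-a)^|k|
s = np.zeros(N)
s[0] = rng.normal(0.0, np.sqrt(var_u / (1 - a**2)))
for n in range(1, N):
    s[n] = -a * s[n - 1] + rng.normal(0.0, np.sqrt(var_u))
x = s + rng.normal(0.0, np.sqrt(var_w), N)    # x = s + w (H = I)

r_ss = var_u / (1 - a**2) * (-a) ** np.arange(N)
C_s = toeplitz(r_ss)                           # [C_s]_{ij} = r_ss[i-j]
s_hat = C_s @ np.linalg.solve(C_s + var_w * np.eye(N), x)   # MMSE estimate

print(np.mean((s_hat - s)**2), np.mean((x - s)**2))  # estimator MSE vs. raw data MSE
```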
Linear Bayesian Estimators
(Linear Bayesian Estimators) Estimation Theory Spring 2011 119 / 152
Introduction
Problems with general Bayesian estimators:
they are difficult to determine in closed form,
they need intensive computations,
they involve multidimensional integration (MMSE) or multidimensional maximization (MAP),
they can be determined only under the jointly Gaussian assumption.
What can we do if the joint PDF is not Gaussian or is unknown?
Keep the MMSE criterion;
Restrict the estimator to be linear.
This leads to the Linear MMSE Estimator:
It is a suboptimal estimator that can be easily implemented.
It needs only the first and second moments of the joint PDF.
It is analogous to the BLUE in classical estimation.
In practice, this estimator is termed the Wiener filter.
(Linear Bayesian Estimators) Estimation Theory Spring 2011 120 / 152
Linear MMSE Estimators (scalar case)
Goal: Estimate θ, given the data vector x. Assume that only the first two moments of the joint PDF of x and θ are available:
mean [E(θ), E(x)^T]^T and covariance
[ C_θθ   C_θx ]
[ C_xθ   C_xx ]
LMMSE Estimator: Take the class of all affine estimators
θ̂ = Σ_{n=0}^{N-1} a_n x[n] + a_N = a^T x + a_N
where a = [a_0, a_1, . . . , a_{N-1}]^T. Then minimize the Bayesian MSE:
Bmse(θ̂) = E[(θ - θ̂)²]
Computing a_N: Let's differentiate the Bmse with respect to a_N:
∂/∂a_N E[(θ - a^T x - a_N)²] = -2 E[θ - a^T x - a_N]
Setting this equal to zero gives: a_N = E(θ) - a^T E(x).
(Linear Bayesian Estimators) Estimation Theory Spring 2011 121 / 152
Linear MMSE Estimators (scalar case)
Minimize the Bayesian MSE:
Bmse(θ̂) = E[(θ - θ̂)²] = E[(θ - a^T x - E(θ) + a^T E(x))²]
         = E{ [a^T (x - E(x)) - (θ - E(θ))]² }
         = E[a^T (x - E(x))(x - E(x))^T a] - E[a^T (x - E(x))(θ - E(θ))]
           - E[(θ - E(θ))(x - E(x))^T a] + E[(θ - E(θ))²]
         = a^T C_xx a - a^T C_xθ - C_θx a + C_θθ
This can be minimized by setting the gradient to zero:
∂ Bmse(θ̂)/∂a = 2 C_xx a - 2 C_xθ = 0
which results in a = C_xx^{-1} C_xθ and leads to (note that C_xθ = C_θx^T):
θ̂ = a^T x + a_N = C_xθ^T C_xx^{-1} x + E(θ) - C_xθ^T C_xx^{-1} E(x)
   = E(θ) + C_θx C_xx^{-1} (x - E(x))
(Linear Bayesian Estimators) Estimation Theory Spring 2011 122 / 152
Linear MMSE Estimators (scalar case)
The minimum Bayesian MSE is obtained as:
Bmse(θ̂) = C_xθ^T C_xx^{-1} C_xx C_xx^{-1} C_xθ - C_xθ^T C_xx^{-1} C_xθ - C_θx C_xx^{-1} C_xθ + C_θθ
         = C_θθ - C_θx C_xx^{-1} C_xθ
Remarks:
The results are identical to those of the MMSE estimator for a jointly Gaussian PDF.
The LMMSE estimator relies on the correlation between random variables.
If a parameter is uncorrelated with the data (but nonlinearly dependent on it), it cannot be estimated with an LMMSE estimator.
(Linear Bayesian Estimators) Estimation Theory Spring 2011 123 / 152
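A small helper (numpy) implementing the scalar LMMSE estimator and its minimum Bmse directly from the first two moments, followed by a DC-level usage example; the variances and the helper name are assumptions for illustration.

```python
import numpy as np

def lmmse_scalar(x, E_theta, E_x, C_theta, C_thetax, C_xx):
    """Scalar LMMSE estimator from the first two moments of (theta, x).

    x        : (N,) data vector
    C_theta  : prior variance of theta
    C_thetax : (N,) cross-covariance vector between theta and x
    C_xx     : (N, N) data covariance matrix
    """
    a = np.linalg.solve(C_xx, C_thetax)            # a = C_xx^{-1} C_xtheta
    theta_hat = E_theta + a @ (x - E_x)            # E(theta) + C_thetax C_xx^{-1} (x - E(x))
    bmse = C_theta - C_thetax @ a                  # C_theta - C_thetax C_xx^{-1} C_xtheta
    return theta_hat, bmse

# Example: zero-mean DC level A (variance var_A) in white noise of variance var_w
N, var_A, var_w = 10, 2.0, 1.0
rng = np.random.default_rng(3)
A = rng.normal(0, np.sqrt(var_A)); x = A + rng.normal(0, np.sqrt(var_w), N)
ones = np.ones(N)
A_hat, bmse = lmmse_scalar(x, 0.0, np.zeros(N), var_A,
                           var_A * ones, var_A * np.outer(ones, ones) + var_w * np.eye(N))
print(A_hat, bmse)
```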
Linear MMSE Estimators (scalar case)
Example (DC Level in WGN with Uniform Prior PDF)
The data model is: x[n] = A + w[n],  n = 0, 1, . . . , N - 1
where A ~ U[-A_0, A_0] is independent of w[n] (WGN with variance σ²).
We have E(A) = 0 and E(x) = 0. The covariances are:
C_xx = E(xx^T) = E[(A1 + w)(A1 + w)^T] = E(A²) 1 1^T + σ² I
C_Ax = E(A x^T) = E[A (A1 + w)^T] = E(A²) 1^T
Therefore the LMMSE estimator is:
Â = C_Ax C_xx^{-1} x = σ_A² 1^T (σ_A² 1 1^T + σ² I)^{-1} x = [ (A_0²/3) / (A_0²/3 + σ²/N) ] x̄
where
σ_A² = E(A²) = ∫_{-A_0}^{A_0} A² (1/2A_0) dA = A³/(6A_0) |_{-A_0}^{A_0} = A_0²/3
(Linear Bayesian Estimators) Estimation Theory Spring 2011 124 / 152
Linear MMSE Estimators (scalar case)
Example (DC Level in WGN with Uniform Prior PDF)
Comparison of different Bayesian estimators:
MMSE:
Â = ∫_{-A_0}^{A_0} A exp{ -(1/2σ²) Σ_{n=0}^{N-1} (x[n] - A)² } dA  /  ∫_{-A_0}^{A_0} exp{ -(1/2σ²) Σ_{n=0}^{N-1} (x[n] - A)² } dA
MAP:
Â = -A_0  if x̄ < -A_0,    Â = x̄  if |x̄| ≤ A_0,    Â = A_0  if x̄ > A_0
LMMSE:
Â = [ (A_0²/3) / (A_0²/3 + σ²/N) ] x̄
(Linear Bayesian Estimators) Estimation Theory Spring 2011 125 / 152
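The sketch below (numpy, with assumed values for A_0, σ and N) evaluates the three estimators above on one data realization: the MMSE estimator by numerical integration on a grid, the MAP estimator by clipping the sample mean, and the LMMSE estimator in closed form.

```python
import numpy as np

rng = np.random.default_rng(4)
A0, sigma, N = 3.0, 2.0, 10                      # assumed example values
A_true = rng.uniform(-A0, A0)
x = A_true + sigma * rng.standard_normal(N)
xbar = np.mean(x)

# MMSE: ratio of two 1-D integrals, evaluated on a uniform grid over [-A0, A0]
A_grid = np.linspace(-A0, A0, 2001)
log_like = -np.sum((x[None, :] - A_grid[:, None])**2, axis=1) / (2 * sigma**2)
w = np.exp(log_like - log_like.max())            # un-normalized posterior (uniform prior)
A_mmse = np.sum(A_grid * w) / np.sum(w)          # grid spacing cancels in the ratio

# MAP: sample mean clipped to the prior support
A_map = np.clip(xbar, -A0, A0)

# LMMSE: (A0^2/3) / (A0^2/3 + sigma^2/N) * xbar
var_A = A0**2 / 3
A_lmmse = var_A / (var_A + sigma**2 / N) * xbar

print(A_true, A_mmse, A_map, A_lmmse)
```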
Geometrical Interpretation
Vector space of random variables: The set of scalar zero mean random variables is a vector space.
The zero-length vector of the set is a RV with zero variance.
For any real scalar a, ax is another zero mean RV in the set.
For two RVs x and y, x + y = y + x is a RV in the set.
For two RVs x and y, the inner product is: <x, y> = E(xy)
The length of x is defined as: ||x|| = √<x, x> = √E(x²)
Two RVs x and y are orthogonal if: <x, y> = E(xy) = 0.
For RVs x_1, x_2, y and real numbers a_1 and a_2 we have:
<a_1 x_1 + a_2 x_2, y> = a_1 <x_1, y> + a_2 <x_2, y>
E[(a_1 x_1 + a_2 x_2) y] = a_1 E(x_1 y) + a_2 E(x_2 y)
The projection of y on x is:
(<y, x> / ||x||²) x = (E(yx) / σ_x²) x
(Linear Bayesian Estimators) Estimation Theory Spring 2011 126 / 152
Geometrical Interpretation
The LMMSE estimator can be determined using the vector space viewpoint. We have:
θ̂ = Σ_{n=0}^{N-1} a_n x[n]
where a_N is zero, because of the zero mean assumption.
θ̂ belongs to the subspace spanned by x[0], x[1], . . . , x[N-1].
θ is not in this subspace.
A good estimator will minimize the MSE:
E[(θ - θ̂)²] = ||ε||²
where ε = θ - θ̂ is the error vector.
Clearly, the length of the error vector is minimized when ε is orthogonal to the subspace spanned by x[0], x[1], . . . , x[N-1], i.e. to each data sample:
E[(θ - θ̂) x[n]] = 0   for n = 0, 1, . . . , N - 1
(Linear Bayesian Estimators) Estimation Theory Spring 2011 127 / 152
Geometrical Interpretation
The LMMSE estimator can be determined by solving the following equations:
E[ (θ - Σ_{m=0}^{N-1} a_m x[m]) x[n] ] = 0,   n = 0, 1, . . . , N - 1
or
Σ_{m=0}^{N-1} a_m E(x[m] x[n]) = E(θ x[n]),   n = 0, 1, . . . , N - 1
In matrix form:

[ E(x²[0])        E(x[0]x[1])     · · ·  E(x[0]x[N-1])   ] [ a_0     ]   [ E(θ x[0])   ]
[ E(x[1]x[0])     E(x²[1])        · · ·  E(x[1]x[N-1])   ] [ a_1     ] = [ E(θ x[1])   ]
[     ...              ...         ...        ...        ] [  ...    ]   [    ...      ]
[ E(x[N-1]x[0])   E(x[N-1]x[1])   · · ·  E(x²[N-1])      ] [ a_{N-1} ]   [ E(θ x[N-1]) ]

Therefore
C_xx a = C_θx^T   ⟹   a = C_xx^{-1} C_θx^T
The LMMSE estimator is:
θ̂ = a^T x = C_θx C_xx^{-1} x
(Linear Bayesian Estimators) Estimation Theory Spring 2011 128 / 152
The vector LMMSE Estimator
We want to estimate θ = [θ_1, θ_2, . . . , θ_p]^T with a linear estimator that minimizes the Bayesian MSE for each element:
θ̂_i = Σ_{n=0}^{N-1} a_{in} x[n] + a_{iN}
Minimize Bmse(θ̂_i) = E[(θ_i - θ̂_i)²]
Therefore:
θ̂_i = E(θ_i) + C_{θ_i x} C_xx^{-1} (x - E(x)),   i = 1, 2, . . . , p
where C_{θ_i x} is 1 × N, C_xx^{-1} is N × N and x - E(x) is N × 1.
The scalar LMMSE estimators can be combined into a vector estimator:
θ̂ = E(θ) + C_θx C_xx^{-1} (x - E(x))
with E(θ) of size p × 1 and C_θx of size p × N. Similarly,
Bmse(θ̂_i) = [M_θ̂]_ii   where   M_θ̂ = C_θθ - C_θx C_xx^{-1} C_xθ
with M_θ̂ and C_θθ of size p × p, C_θx of size p × N and C_xθ of size N × p.
(Linear Bayesian Estimators) Estimation Theory Spring 2011 129 / 152
The vector LMMSE Estimator
Theorem (Bayesian Gauss-Markov Theorem)
If the data are described by the Bayesian linear model form
x = Hθ + w
where θ is a p × 1 random vector with mean E(θ) and covariance C_θθ, and w is a noise vector with zero mean and covariance C_w, uncorrelated with θ (the joint PDF p(θ, w) is otherwise arbitrary), then the LMMSE estimator of θ is:
θ̂ = E(θ) + C_θθ H^T (H C_θθ H^T + C_w)^{-1} (x - H E(θ))
The performance of the estimator is measured by ε = θ - θ̂, whose mean is zero and whose covariance matrix is
C_ε = M_θ̂ = C_θθ - C_θθ H^T (H C_θθ H^T + C_w)^{-1} H C_θθ = (C_θθ^{-1} + H^T C_w^{-1} H)^{-1}
(Linear Bayesian Estimators) Estimation Theory Spring 2011 130 / 152
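A minimal sketch (numpy, with assumed dimensions, matrices and noise levels) of the LMMSE estimator and error covariance given by the theorem; the helper name `lmmse_bayesian_linear` is hypothetical, not from the reference.

```python
import numpy as np

def lmmse_bayesian_linear(x, H, mu_theta, C_theta, C_w):
    """LMMSE estimator for the Bayesian linear model x = H theta + w
    (Bayesian Gauss-Markov theorem); returns the estimate and its error covariance."""
    S = H @ C_theta @ H.T + C_w
    K = C_theta @ H.T @ np.linalg.inv(S)
    theta_hat = mu_theta + K @ (x - H @ mu_theta)
    M = C_theta - K @ H @ C_theta            # = (C_theta^{-1} + H^T C_w^{-1} H)^{-1}
    return theta_hat, M

# Small illustration with assumed dimensions p = 2, N = 20
rng = np.random.default_rng(5)
p, N = 2, 20
H = rng.standard_normal((N, p))
mu_theta, C_theta = np.zeros(p), np.eye(p)
C_w = 0.5 * np.eye(N)
theta = rng.multivariate_normal(mu_theta, C_theta)
x = H @ theta + rng.multivariate_normal(np.zeros(N), C_w)
theta_hat, M = lmmse_bayesian_linear(x, H, mu_theta, C_theta, C_w)
print(theta, theta_hat, np.diag(M))
```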
Sequential LMMSE Estimation
Objective: Given θ̂[n-1] based on the data x[0], . . . , x[n-1], update the estimate to θ̂[n] when the new sample x[n] arrives.
Example (DC Level in White Noise)
Assume that both A and w[n] have zero mean:
x[n] = A + w[n],    Â[N-1] = [ σ_A² / (σ_A² + σ²/N) ] x̄
Estimator Update:
Â[N] = Â[N-1] + K[N] (x[N] - Â[N-1])
where K[N] = Bmse(Â[N-1]) / (Bmse(Â[N-1]) + σ²)
Minimum MSE Update:
Bmse(Â[N]) = (1 - K[N]) Bmse(Â[N-1])
(Linear Bayesian Estimators) Estimation Theory Spring 2011 131 / 152
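The loop below (numpy, assumed σ_A² and σ²) runs the three update equations above on a simulated DC level; after N samples the recursively computed Bmse matches the batch expression (σ²/N)·σ_A²/(σ_A² + σ²/N).

```python
import numpy as np

rng = np.random.default_rng(6)
var_A, var_w, N = 4.0, 1.0, 200                 # assumed example values
A = rng.normal(0.0, np.sqrt(var_A))
x = A + rng.normal(0.0, np.sqrt(var_w), N)

A_hat, bmse = 0.0, var_A                        # prior mean and variance as initialization
for n in range(N):
    K = bmse / (bmse + var_w)                   # gain
    A_hat = A_hat + K * (x[n] - A_hat)          # estimator update
    bmse = (1 - K) * bmse                       # minimum MSE update

print(A, A_hat)
print(bmse, (var_w / N) * var_A / (var_A + var_w / N))   # sequential Bmse equals batch Bmse
```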
Sequential LMMSE Estimation
Vector space view: If two observations are orthogonal, the LMMSE estimate of θ is the sum of the projections of θ on each observation.
1. Find the LMMSE estimator of A based on x[0], yielding Â[0]:
Â[0] = [ E(A x[0]) / E(x²[0]) ] x[0] = [ σ_A² / (σ_A² + σ²) ] x[0]
2. Find the LMMSE estimator of x[1] based on x[0], yielding x̂[1|0]:
x̂[1|0] = [ E(x[0] x[1]) / E(x²[0]) ] x[0] = [ σ_A² / (σ_A² + σ²) ] x[0]
3. Determine the innovation of the new data: x̃[1] = x[1] - x̂[1|0]. This error vector is orthogonal to x[0].
4. Add to Â[0] the LMMSE estimator of A based on the innovation:
Â[1] = Â[0] + [ E(A x̃[1]) / E(x̃²[1]) ] x̃[1]   (the projection of A on x̃[1])
     = Â[0] + K[1] (x[1] - x̂[1|0])
(Linear Bayesian Estimators) Estimation Theory Spring 2011 132 / 152
Sequential LMMSE Estimation
Basic Idea: Generate a sequence of orthogonal RVs, namely the innovations:
{ x̃[0] = x[0],  x̃[1] = x[1] - x̂[1|0],  x̃[2] = x[2] - x̂[2|0,1],  . . . ,  x̃[n] = x[n] - x̂[n|0,1,...,n-1] }
Then, add the individual estimators to yield:
Â[N] = Σ_{n=0}^{N} K[n] x̃[n] = Â[N-1] + K[N] x̃[N]   where   K[n] = E(A x̃[n]) / E(x̃²[n])
It can be shown that:
x̃[N] = x[N] - Â[N-1]   and   K[N] = Bmse(Â[N-1]) / (σ² + Bmse(Â[N-1]))
and the minimum MSE is updated as:
Bmse(Â[N]) = (1 - K[N]) Bmse(Â[N-1])
(Linear Bayesian Estimators) Estimation Theory Spring 2011 133 / 152
General Sequential LMMSE Estimation
Consider the general Bayesian linear model x = Hθ + w, where w is an uncorrelated noise with the diagonal covariance matrix C_w. Let's define:
C_w[n] = diag(σ_0², σ_1², . . . , σ_n²)
H[n] = [ H[n-1] ;  h^T[n] ]   (an n × p block stacked on a 1 × p row)
x[n] = [ x[0], x[1], . . . , x[n] ]^T
Estimator Update:
θ̂[n] = θ̂[n-1] + K[n] ( x[n] - h^T[n] θ̂[n-1] )
where
K[n] = M[n-1] h[n] / ( σ_n² + h^T[n] M[n-1] h[n] )
Minimum MSE Matrix Update:
M[n] = (I - K[n] h^T[n]) M[n-1]
(Linear Bayesian Estimators) Estimation Theory Spring 2011 134 / 152
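A sketch of one update of the general sequential LMMSE estimator (numpy); the line-fit usage example, its prior and its noise level are assumptions chosen for illustration.

```python
import numpy as np

def seq_lmmse_update(theta_hat, M, x_n, h_n, var_n):
    """One update of the general sequential LMMSE estimator for x[n] = h[n]^T theta + w[n]."""
    K = M @ h_n / (var_n + h_n @ M @ h_n)                 # gain (p,)
    theta_hat = theta_hat + K * (x_n - h_n @ theta_hat)   # estimator update
    M = (np.eye(len(theta_hat)) - np.outer(K, h_n)) @ M   # minimum MSE matrix update
    return theta_hat, M

# Usage sketch: estimate theta = [intercept, slope] with an assumed prior and noise level
rng = np.random.default_rng(7)
theta = np.array([1.0, -0.5])
theta_hat, M = np.zeros(2), 10.0 * np.eye(2)              # theta_hat[-1] = E(theta), M[-1] = C_theta
for n in range(50):
    h_n = np.array([1.0, n / 50.0])
    x_n = h_n @ theta + 0.1 * rng.standard_normal()
    theta_hat, M = seq_lmmse_update(theta_hat, M, x_n, h_n, 0.01)
print(theta_hat)
```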
General Sequential LMMSE Estimation
Remarks:
For the initialization of the sequential LMMSE estimator, we can use the prior information: θ̂[-1] = E(θ) and M[-1] = C_θθ.
For no prior knowledge about θ we can let C_θθ → ∞. Then we have the same form as the sequential LSE, although the approaches are fundamentally different.
No matrix inversion is required.
The gain factor K[n] weighs confidence in the new data (measured by σ_n²) against that in all previous data (summarized by M[n-1]).
(Linear Bayesian Estimators) Estimation Theory Spring 2011 135 / 152
Wiener Filtering
Consider the signal model: x[n] = s[n] + w[n],  n = 0, 1, . . . , N - 1
where the signal and noise are zero mean with known covariance matrices.
There are three main problems concerning Wiener filters:
Filtering: Estimate θ = s[n] (scalar) based on the data set x = [x[0], x[1], . . . , x[n]]^T. The signal sample is estimated based on the present and past data only.
Smoothing: Estimate θ = s = [s[0], s[1], . . . , s[N-1]]^T (vector) based on the data set x = [x[0], x[1], . . . , x[N-1]]^T. The signal samples are estimated based on the present, past and future data.
Prediction: Estimate θ = x[N-1+l] for a positive integer l based on the data set x = [x[0], x[1], . . . , x[N-1]]^T.
Remark: All these problems are solved using the LMMSE estimator
θ̂ = C_θx C_xx^{-1} x   with the minimum Bmse   M_θ̂ = C_θθ - C_θx C_xx^{-1} C_xθ
(Linear Bayesian Estimators) Estimation Theory Spring 2011 136 / 152
Wiener Filtering (Smoothing)
Estimate θ = s = [s[0], s[1], . . . , s[N-1]]^T (vector) based on the data set x = [x[0], x[1], . . . , x[N-1]]^T.
C_xx = R_xx = R_ss + R_ww   and   C_θx = E(s x^T) = E(s (s + w)^T) = R_ss
Therefore:
ŝ = C_sx C_xx^{-1} x = R_ss (R_ss + R_ww)^{-1} x = W x
Filter Interpretation: The Wiener smoothing matrix W can be interpreted as an FIR filter. Let's define
W = [a_0, a_1, . . . , a_{N-1}]^T
where a_n^T is the (n+1)-th row of W. Let's also define h^(n) = [h^(n)[0], h^(n)[1], . . . , h^(n)[N-1]]^T, which is just the vector a_n flipped upside down. Then
ŝ[n] = a_n^T x = Σ_{k=0}^{N-1} h^(n)[N-1-k] x[k]
This represents a time-varying, noncausal FIR filter.
(Linear Bayesian Estimators) Estimation Theory Spring 2011 137 / 152
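A short sketch (numpy/scipy, with an assumed AR(1) signal ACF and white noise) that forms the Wiener smoothing matrix W = R_ss(R_ss + R_ww)^{-1} and checks the FIR interpretation above: the n-th row of W, flipped, acts as the time-varying noncausal impulse response.

```python
import numpy as np
from scipy.linalg import toeplitz

# Assumed AR(1) signal ACF and white observation noise (example values)
N, a, var_u, var_w = 50, 0.9, 1.0, 1.0
r_ss = var_u / (1 - a**2) * (-a) ** np.arange(N)
R_ss, R_ww = toeplitz(r_ss), var_w * np.eye(N)

W = R_ss @ np.linalg.inv(R_ss + R_ww)        # Wiener smoothing matrix: s_hat = W x

# FIR interpretation: the (n+1)-th row a_n of W, flipped, gives the time-varying,
# noncausal impulse response h^(n), and s_hat[n] = sum_k h^(n)[N-1-k] x[k]
n = 10
a_n = W[n]
h_n = a_n[::-1]
x = np.random.default_rng(9).standard_normal(N)
print(np.allclose((W @ x)[n], sum(h_n[N - 1 - k] * x[k] for k in range(N))))  # True
```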
Wiener Filtering (Filtering)
Estimate θ = s[n] based on the data set x = [x[0], x[1], . . . , x[n]]^T. We have again C_xx = R_xx = R_ss + R_ww and
C_θx = E( s[n] [x[0], x[1], . . . , x[n]] ) = [ r_ss[n], r_ss[n-1], . . . , r_ss[0] ] = (r'_ss)^T
Therefore:
ŝ[n] = (r'_ss)^T (R_ss + R_ww)^{-1} x = a^T x
Filter Interpretation: If we define h^(n) = [h^(n)[0], h^(n)[1], . . . , h^(n)[n]]^T, which is just the vector a flipped upside down, then:
ŝ[n] = a^T x = Σ_{k=0}^{n} h^(n)[n-k] x[k]
This represents a time-varying, causal FIR filter. The impulse response can be computed from
(R_ss + R_ww) a = r'_ss   ⟺   (R_ss + R_ww) h^(n) = r_ss
(Linear Bayesian Estimators) Estimation Theory Spring 2011 138 / 152
Wiener Filtering (Prediction)
Estimate θ = x[N-1+l] based on the data set x = [x[0], x[1], . . . , x[N-1]]^T. We have again C_xx = R_xx and
C_θx = E( x[N-1+l] [x[0], x[1], . . . , x[N-1]] ) = [ r_xx[N-1+l], r_xx[N-2+l], . . . , r_xx[l] ] = (r'_xx)^T
Therefore:
x̂[N-1+l] = (r'_xx)^T R_xx^{-1} x = a^T x
Filter Interpretation: If we again let h[N-1-k] = a_k, we have
x̂[N-1+l] = a^T x = Σ_{k=0}^{N-1} h[N-1-k] x[k]
This represents a time-invariant, causal FIR filter. The impulse response can be computed from
R_xx h = r'_xx
For l = 1, this coincides with the linear prediction equations of an Auto-Regressive process.
(Linear Bayesian Estimators) Estimation Theory Spring 2011 139 / 152
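The sketch below (numpy/scipy, assumed AR(1) data observed without noise) solves R_xx a = r'_xx for the l-step predictor; for l = 1 and an AR(1) process the resulting prediction coincides with -a x[N-1], as expected.

```python
import numpy as np
from scipy.linalg import toeplitz

# Assumed AR(1) data ACF r_xx[k] = var_u/(1-a^2) (-a)^|k| (example values)
N, a, var_u, l = 20, 0.8, 1.0, 1
r = var_u / (1 - a**2) * (-a) ** np.arange(N + l)    # r_xx[k] for k = 0..N-1+l
R_xx = toeplitz(r[:N])                               # data covariance matrix
r_pred = r[l:l + N][::-1]                            # [r_xx[N-1+l], ..., r_xx[l]] reversed into index order
a_vec = np.linalg.solve(R_xx, r_pred)                # solves R_xx a = r'_xx

rng = np.random.default_rng(10)
x = np.zeros(N + l)
x[0] = rng.normal(0, np.sqrt(var_u / (1 - a**2)))
for n in range(1, N + l):
    x[n] = -a * x[n - 1] + rng.normal(0, np.sqrt(var_u))

x_pred = a_vec @ x[:N]                               # predicted x[N-1+l]
print(x_pred, x[N - 1 + l])
print(-a * x[N - 1])                                 # for l = 1 the predictor reduces to -a x[N-1]
```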
Kalman Filters
(Kalman Filters) Estimation Theory Spring 2011 140 / 152
Introduction
The Kalman filter is a generalization of the Wiener filter.
In the Wiener filter we estimate s[n] based on the noisy observation vector x[n]. We assume that s[n] is a stationary random process with known mean and covariance matrix.
In the Kalman filter we assume that s[n] is a nonstationary random process whose mean and covariance matrix vary according to a known dynamical model.
The Kalman filter is a sequential MMSE estimator of s[n]. If the signal and noise are jointly Gaussian, the Kalman filter is the optimal MMSE estimator; if not, it is the optimal LMMSE estimator.
The Kalman filter has many applications in control theory for state estimation.
It can be generalized to vector signals and noise (in contrast to the Wiener filter).
(Kalman Filters) Estimation Theory Spring 2011 141 / 152
Dynamical Signal Models
Consider a first order dynamical model:
s[n] = a s[n-1] + u[n],   n ≥ 0
where u[n] is WGN with variance σ_u², s[-1] ~ N(μ_s, σ_s²) and s[-1] is independent of u[n] for all n ≥ 0. It can be shown that:
s[n] = a^{n+1} s[-1] + Σ_{k=0}^{n} a^k u[n-k],   E(s[n]) = a^{n+1} μ_s
and
C_s[m, n] = E[ (s[m] - E(s[m])) (s[n] - E(s[n])) ] = a^{m+n+2} σ_s² + σ_u² a^{m-n} Σ_{k=0}^{n} a^{2k}   for m ≥ n
and C_s[m, n] = C_s[n, m] for m < n.
(Kalman Filters) Estimation Theory Spring 2011 142 / 152
Dynamical Signal Models
Theorem (Vector Gauss-Markov Model)
The Gauss-Markov model for a p × 1 vector signal s[n] is:
s[n] = A s[n-1] + B u[n],   n ≥ 0
where A is p × p, B is p × r and all eigenvalues of A are less than 1 in magnitude. The r × 1 vector u[n] ~ N(0, Q) and the initial condition is a p × 1 random vector s[-1] ~ N(μ_s, C_s), independent of the u[n]. Then the signal process is Gaussian with mean E(s[n]) = A^{n+1} μ_s and covariance
C_s[m, n] = A^{m+1} C_s (A^{n+1})^T + Σ_{k=m-n}^{m} A^k B Q B^T (A^{n-m+k})^T
for m ≥ n, and C_s[m, n] = C_s[n, m]^T for m < n. The covariance matrix is C[n] = C_s[n, n], and the mean and covariance propagation equations are:
E(s[n]) = A E(s[n-1]),   C[n] = A C[n-1] A^T + B Q B^T
(Kalman Filters) Estimation Theory Spring 2011 143 / 152
Scalar Kalman Filter
Data model:
State equation:  s[n] = a s[n-1] + u[n]
Observation equation:  x[n] = s[n] + w[n]
Assumptions:
u[n] is zero mean Gaussian noise with independent samples and E(u²[n]) = σ_u².
w[n] is zero mean Gaussian noise with independent samples and E(w²[n]) = σ_n² (time-varying variance).
s[-1] ~ N(μ_s, σ_s²) (for simplicity we suppose that μ_s = 0).
s[-1], u[n] and w[n] are independent.
Objective: Develop a sequential MMSE estimator of s[n] based on the data X[n] = [x[0], x[1], . . . , x[n]]^T. This estimator is the mean of the posterior PDF:
ŝ[n|n] = E( s[n] | x[0], x[1], . . . , x[n] )
(Kalman Filters) Estimation Theory Spring 2011 144 / 152
Scalar Kalman Filter
To develop the equations of the Kalman filter we need the following properties:
Property 1: For two uncorrelated, jointly Gaussian data vectors, the MMSE estimator (assuming zero mean quantities) is given by:
θ̂ = E(θ|x_1, x_2) = E(θ|x_1) + E(θ|x_2)
Property 2: If θ = θ_1 + θ_2, then the MMSE estimator of θ is:
θ̂ = E(θ|x) = E(θ_1 + θ_2|x) = E(θ_1|x) + E(θ_2|x)
Basic Idea: Generate the innovation x̃[n] = x[n] - x̂[n|n-1], which is uncorrelated with the previous samples X[n-1]. Then use x̃[n] instead of x[n] for estimation (X[n] is equivalent to [X[n-1], x̃[n]]).
(Kalman Filters) Estimation Theory Spring 2011 145 / 152
Scalar Kalman Filter
From Property 1, we have:
ŝ[n|n] = E( s[n] | X[n-1], x̃[n] ) = E( s[n] | X[n-1] ) + E( s[n] | x̃[n] )
        = ŝ[n|n-1] + E( s[n] | x̃[n] )
From Property 2, we have:
ŝ[n|n-1] = E( a s[n-1] + u[n] | X[n-1] )
          = a E( s[n-1] | X[n-1] ) + E( u[n] | X[n-1] )
          = a ŝ[n-1|n-1]
The MMSE estimator of s[n] based on x̃[n] is:
E( s[n] | x̃[n] ) = [ E(s[n] x̃[n]) / E(x̃²[n]) ] x̃[n] = K[n] ( x[n] - x̂[n|n-1] )
where x̂[n|n-1] = ŝ[n|n-1] + ŵ[n|n-1] = ŝ[n|n-1].
Finally we have:
ŝ[n|n] = a ŝ[n-1|n-1] + K[n] ( x[n] - ŝ[n|n-1] )
(Kalman Filters) Estimation Theory Spring 2011 146 / 152
Scalar State Scalar Observation Kalman Filter
State Model:  s[n] = a s[n-1] + u[n]
Observation Model:  x[n] = s[n] + w[n]
Prediction:
ŝ[n|n-1] = a ŝ[n-1|n-1]
Minimum Prediction MSE:
M[n|n-1] = a² M[n-1|n-1] + σ_u²
Kalman Gain:
K[n] = M[n|n-1] / ( σ_n² + M[n|n-1] )
Correction:
ŝ[n|n] = ŝ[n|n-1] + K[n] ( x[n] - ŝ[n|n-1] )
Minimum MSE:
M[n|n] = (1 - K[n]) M[n|n-1]
(Kalman Filters) Estimation Theory Spring 2011 147 / 152
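A direct transcription of the five recursions above into a small Python function (numpy). The observation noise variance is taken constant here for simplicity, whereas the slides allow a time-varying σ_n²; all numerical values in the usage example are assumed.

```python
import numpy as np

def scalar_kalman(x, a, var_u, var_w, mu_s=0.0, var_s=1.0):
    """Scalar-state, scalar-observation Kalman filter for
    s[n] = a s[n-1] + u[n],  x[n] = s[n] + w[n]."""
    s_hat, M = mu_s, var_s                       # initialization with the prior of s[-1]
    s_est = np.empty(len(x))
    for n, xn in enumerate(x):
        s_pred = a * s_hat                       # prediction
        M_pred = a**2 * M + var_u                # minimum prediction MSE
        K = M_pred / (var_w + M_pred)            # Kalman gain
        s_hat = s_pred + K * (xn - s_pred)       # correction
        M = (1 - K) * M_pred                     # minimum MSE
        s_est[n] = s_hat
    return s_est

# Usage with assumed example values
rng = np.random.default_rng(11)
a, var_u, var_w, N = 0.95, 0.1, 1.0, 300
s = np.zeros(N)
for n in range(1, N):
    s[n] = a * s[n - 1] + rng.normal(0, np.sqrt(var_u))
x = s + rng.normal(0, np.sqrt(var_w), N)
s_est = scalar_kalman(x, a, var_u, var_w)
print(np.mean((s_est - s)**2), np.mean((x - s)**2))   # filtered MSE vs. raw observation MSE
```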
Properties of Kalman Filter
The Kalman filter is an extension of the sequential MMSE estimator, applied to time-varying parameters that are represented by a dynamical model.
No matrix inversion is required.
The Kalman filter is a time-varying linear filter.
The Kalman filter computes (and uses) its own performance measure, the Bayesian MSE M[n|n].
The prediction stage increases the error, while the correction stage decreases it.
The Kalman filter generates an uncorrelated sequence from the observations, so it can be viewed as a whitening filter.
The Kalman filter is optimal in the Gaussian case and is the optimal LMMSE estimator if the Gaussian assumption is not valid.
In the KF, M[n|n], M[n|n-1] and K[n] do not depend on the measurements and can be computed offline. At steady state (n → ∞) the Kalman filter becomes an LTI filter (M[n|n], M[n|n-1] and K[n] become constant).
(Kalman Filters) Estimation Theory Spring 2011 148 / 152
Vector State Scalar Observation Kalman Filter
State Model:  s[n] = A s[n-1] + B u[n]  with  u[n] ~ N(0, Q)
Observation Model:  x[n] = h^T[n] s[n] + w[n]  with  s[-1] ~ N(μ_s, C_s)
Prediction:
ŝ[n|n-1] = A ŝ[n-1|n-1]
Minimum Prediction MSE (p × p):
M[n|n-1] = A M[n-1|n-1] A^T + B Q B^T
Kalman Gain (p × 1):
K[n] = M[n|n-1] h[n] / ( σ_n² + h^T[n] M[n|n-1] h[n] )
Correction:
ŝ[n|n] = ŝ[n|n-1] + K[n] ( x[n] - h^T[n] ŝ[n|n-1] )
Minimum MSE Matrix (p × p):
M[n|n] = (I - K[n] h^T[n]) M[n|n-1]
(Kalman Filters) Estimation Theory Spring 2011 149 / 152
Vector State Vector Observation Kalman Filter
State Model:  s[n] = A s[n-1] + B u[n]  with  u[n] ~ N(0, Q)
Observation Model:  x[n] = H[n] s[n] + w[n]  with  w[n] ~ N(0, C[n])
Prediction:
ŝ[n|n-1] = A ŝ[n-1|n-1]
Minimum Prediction MSE (p × p):
M[n|n-1] = A M[n-1|n-1] A^T + B Q B^T
Kalman Gain Matrix (p × M) (needs a matrix inversion!):
K[n] = M[n|n-1] H^T[n] ( C[n] + H[n] M[n|n-1] H^T[n] )^{-1}
Correction:
ŝ[n|n] = ŝ[n|n-1] + K[n] ( x[n] - H[n] ŝ[n|n-1] )
Minimum MSE Matrix (p × p):
M[n|n] = (I - K[n] H[n]) M[n|n-1]
(Kalman Filters) Estimation Theory Spring 2011 150 / 152
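A sketch of one prediction/correction cycle of the vector-state, vector-observation filter (numpy); the two-state usage example, its matrices and noise levels are assumptions chosen only to exercise the recursion.

```python
import numpy as np

def kalman_step(s_hat, M, x_n, A, B, Q, H_n, C_n):
    """One prediction/correction cycle of the vector-state, vector-observation Kalman filter."""
    s_pred = A @ s_hat                                     # prediction
    M_pred = A @ M @ A.T + B @ Q @ B.T                     # minimum prediction MSE
    S = C_n + H_n @ M_pred @ H_n.T
    K = M_pred @ H_n.T @ np.linalg.inv(S)                  # Kalman gain matrix
    s_hat = s_pred + K @ (x_n - H_n @ s_pred)              # correction
    M = (np.eye(len(s_hat)) - K @ H_n) @ M_pred            # minimum MSE matrix
    return s_hat, M

# Usage sketch: assumed stable 2-state model with a scalar observation
A = np.array([[0.9, 0.2], [0.0, 0.8]])
B = np.eye(2); Q = 0.01 * np.eye(2)
H = np.array([[1.0, 0.0]]); C = np.array([[1.0]])
rng = np.random.default_rng(12)
s = np.array([1.0, -0.5]); s_hat, M = np.zeros(2), np.eye(2)
for n in range(100):
    s = A @ s + B @ rng.multivariate_normal(np.zeros(2), Q)
    x_n = H @ s + rng.multivariate_normal(np.zeros(1), C)
    s_hat, M = kalman_step(s_hat, M, x_n, A, B, Q, H, C)
print(s, s_hat)
```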
Extended Kalman Filter
The extended Kalman filter is a sub-optimal approach for the case where the state and observation equations are nonlinear.
Nonlinear data model: Consider the following nonlinear data model
s[n] = f(s[n-1]) + B u[n]
x[n] = g(s[n]) + w[n]
where f(·) and g(·) are nonlinear functions (vector-to-vector mappings).
Model linearization: We linearize the model around the estimated value of s using a first order Taylor series:
f(s[n-1]) ≈ f(ŝ[n-1|n-1]) + ∂f/∂s[n-1] ( s[n-1] - ŝ[n-1|n-1] )
g(s[n]) ≈ g(ŝ[n|n-1]) + ∂g/∂s[n] ( s[n] - ŝ[n|n-1] )
We denote the Jacobians by:
A[n-1] = ∂f/∂s[n-1] evaluated at ŝ[n-1|n-1],   H[n] = ∂g/∂s[n] evaluated at ŝ[n|n-1]
(Kalman Filters) Estimation Theory Spring 2011 151 / 152
Extended Kalman Filter
Now we use the linearized models:
s[n] = A[n-1] s[n-1] + B u[n] + f(ŝ[n-1|n-1]) - A[n-1] ŝ[n-1|n-1]
x[n] = H[n] s[n] + w[n] + g(ŝ[n|n-1]) - H[n] ŝ[n|n-1]
There are two differences from the standard Kalman filter:
1. A is now time varying.
2. Both equations have known terms added to them. This does not change the derivation of the Kalman filter.
The extended Kalman filter has exactly the same equations, with A replaced by A[n-1].
A[n-1] and H[n] must be computed at each sampling time.
The MSE matrices and the Kalman gain can no longer be computed offline.
(Kalman Filters) Estimation Theory Spring 2011 152 / 152
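A minimal EKF sketch (numpy) following the recipe above: the standard Kalman cycle with A and H replaced by the Jacobians of f and g evaluated at the current estimates. The scalar-state example model x[n] = s[n]²/2 + w[n] and all numerical values are assumptions for illustration only.

```python
import numpy as np

def ekf_step(s_hat, M, x_n, f, g, F_jac, G_jac, B, Q, C_n):
    """One extended Kalman filter cycle: the KF equations with A and H replaced by
    the Jacobians of f and g evaluated at the current estimates."""
    A = F_jac(s_hat)                                     # A[n-1] = df/ds at s_hat[n-1|n-1]
    s_pred = f(s_hat)                                    # prediction through the nonlinearity
    M_pred = A @ M @ A.T + B @ Q @ B.T
    H = G_jac(s_pred)                                    # H[n] = dg/ds at s_hat[n|n-1]
    S = C_n + H @ M_pred @ H.T
    K = M_pred @ H.T @ np.linalg.inv(S)
    s_hat = s_pred + K @ (x_n - g(s_pred))               # correction uses g(.), not H s
    M = (np.eye(len(s_hat)) - K @ H) @ M_pred
    return s_hat, M

# Assumed scalar-state example: s[n] = 0.9 s[n-1] + u[n], x[n] = s[n]^2/2 + w[n]
f = lambda s: 0.9 * s
g = lambda s: np.array([s[0]**2 / 2])
F_jac = lambda s: np.array([[0.9]])
G_jac = lambda s: np.array([[s[0]]])
rng = np.random.default_rng(13)
B, Q, C = np.eye(1), np.array([[0.01]]), np.array([[0.1]])
s, s_hat, M = np.array([1.0]), np.array([1.0]), np.eye(1)
for n in range(50):
    s = f(s) + rng.multivariate_normal(np.zeros(1), Q)
    x_n = g(s) + rng.multivariate_normal(np.zeros(1), C)
    s_hat, M = ekf_step(s_hat, M, x_n, f, g, F_jac, G_jac, B, Q, C)
print(s, s_hat)
```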