
Detection and Estimation (Spring, 2009)

MVU

Minimum Variance Unbiased Estimation


MVU Estimators
~ Unbiased Estimators
Unbiased estimator: on the average, the estimator will yield the true value of
the unknown parameter.

E(θ̂) = θ  for all θ

Let θ̂ = g(x), where x = [x[0] x[1] … x[N-1]]T

Unbiased ⇒ E(θ̂) = ∫ g(x) p(x; θ) dx = θ

Remark: Generally, we seek unbiased estimators (necessary but not sufficient


for good estimator)
Example: x[n] = A + w[n], n = 0, 1, …, N-1, w[n]: WGN

Â = (1/N) Σ_{n=0}^{N-1} x[n]  (sample mean)

E(Â) = (1/N) Σ_{n=0}^{N-1} E(x[n]) = A ⇒ unbiased!

NCTU EE

P.1


An estimator such as Ǎ = (1/(2N)) Σ_{n=0}^{N-1} x[n] has E(Ǎ) = A/2 ≠ A ⇒ biased!
Bias of estimator: b(θ) = E(θ̂) − θ


~ Minimum Variance Criterion


Need optimality criterion to assess performance
Mean Square Error (MSE):

mse(θ̂) = E[(θ̂ − θ)²]

A good estimator has small MSE.

Unfortunately, adoption of this natural criterion leads to unrealizable
estimators, because the MSE is composed of errors due to the variance of the
estimator as well as the bias:

mse(θ̂) = var(θ̂) + b²(θ)

Example: Consider the estimator Ǎ = a·x̄ for some constant a. Then

mse(Ǎ) = a²σ²/N + (a − 1)²A²

and the minimizing value a_opt = A²/(A² + σ²/N) depends on the unknown
parameter A. The estimator is not realizable.


Any criterion that depends on the bias will lead to an unrealizable estimator!!
Generally, we constrain the bias to be zero and find the estimator which
minimizes the variance.
Minimum Variance Unbiased (MVU) estimator.

~ Existence of MVU Estimator

The MVU estimator must have the smallest variance for all values of θ.

In general, the MVU estimator does not always exist.
Example: Assume we have two independent observations x[0] and x[1] whose
variances depend differently on θ.

The two estimators

θ̂1 = (x[0] + x[1])/2 and θ̂2 = (2/3)x[0] + (1/3)x[1]

are unbiased, but which one has the smaller variance depends on the value of θ,
so neither is uniformly best.


Remark: It's possible that there may not exist even a single unbiased estimator.

~ Finding the MVU Estimator


Approaches:
1. Determine the Cramer-Rao lower bound (CRLB) and check to see if some
estimator satisfies it. (Chapters 3 and 4)

Remarks:
1. If an estimator exists whose variance equals the CRLB for each value of
, then it must be the MVU estimator.
2. It may happen that no estimator exists whose variance equals the bound.


2. Apply the Rao-Blackwell-Lehmann-Scheffe (RBLS) theorem. (Chapter 5)


First find a sufficient statistic
then find a function of the sufficient statistic which is an unbiased estimator
of .
3. Further restrict the class of estimators to be not only unbiased but also linear.
Then, find the minimum variance estimator within this restricted class.
(Chapter 6)

~ Extension to a Vector Parameter


Assume θ = [θ1 θ2 … θp]T is a vector of unknown parameters.

An estimator θ̂ = [θ̂1 θ̂2 … θ̂p]T is unbiased if

E(θ̂i) = θi for i = 1, 2, …, p.

By defining E(θ̂) = [E(θ̂1) E(θ̂2) … E(θ̂p)]T, we define an unbiased estimator
to have the property E(θ̂) = θ for all θ.

MVU estimator: unbiased and var(θ̂i), i = 1, 2, …, p is minimum.


Cramer-Rao Lower Bound (CRLB)


Find a lower bound on the variance of an unbiased estimator.
If the estimator attains the bound for all values of the unknown parameter
its the MVU estimator.
It provides a benchmark for the comparison of performance among unbiased
estimators.

~ Estimator Accuracy Considerations


All our information is embodied in the observation data and the underlying pdf
for that data.
How accurately we can estimate depends directly on the pdf.
The more the pdf is influenced by the unknown parameter, the better we can
estimate the parameter!
Example: x[0] = A + w[0], w[0] ~ N(0, σ²).
Choose Â = x[0].
The estimator accuracy improves as σ² decreases.
Alternative Viewpoint: view p(x[0]; A) as the probability of observing x[0]
when A is the true value.

If x[0] = 3 and σ = 1/3, it is unlikely that A > 4.

The more concentrated the pdf ⇒ the better the parameter accuracy.

When the pdf is viewed as a function of the unknown parameter (with x fixed), it
is termed the likelihood function.
The sharpness of the likelihood function determines how accurately we can
estimate the unknown parameter.

The curvature of ln p increases as σ² decreases.

Note: In general, ∂²ln p/∂θ² is a random variable (it depends on x).

To measure curvature, we use

−E[∂²ln p(x; θ)/∂θ²]

the average curvature of the log-likelihood function.


~ Cramer-Rao Lower Bound

Theorem (CRLB, scalar parameter): Assume the pdf p(x; θ) satisfies the
regularity condition E[∂ln p(x; θ)/∂θ] = 0 for all θ. Then the variance of any
unbiased estimator θ̂ satisfies

var(θ̂) ≥ 1 / (−E[∂²ln p(x; θ)/∂θ²])

Remarks:
1. The expectation is taken with respect to p(x; θ).
2. The regularity condition is satisfied if the order of differentiation and
integration may be interchanged.

This condition is generally true except when the domain of the pdf for
which it is nonzero depends on the unknown parameter.
An example that does not satisfy the regularity condition:
x[n] are iid according to U[0, θ], for n = 0, 1, …, N-1.
3. An unbiased estimator attains the bound if and only if

∂ln p(x; θ)/∂θ = I(θ)(g(x) − θ)

for some functions g and I; then θ̂ = g(x) attains var(θ̂) = 1/I(θ).


4. Fisher Information: I(θ) = −E[∂²ln p(x; θ)/∂θ²]

✓ Nonnegative
✓ Additive for independent observations: I(θ) = Σ_{n=0}^{N-1} i_n(θ),
where i_n(θ) = −E[∂²ln p(x[n]; θ)/∂θ²].

For iid observations, I(θ) = N·i(θ), so as N → ∞, CRLB → 0.

Proof of Scalar Parameter CRLB:
We consider all unbiased estimators of α = g(θ), i.e., with E(α̂) = α = g(θ).
That is, ∫ α̂ p(x; θ) dx = g(θ).
Since ∫ p(x; θ) dx = 1 implies ∫ (∂p(x; θ)/∂θ) dx = 0,
differentiating the unbiasedness condition with respect to θ, we have
∫ α̂ (∂ln p(x; θ)/∂θ) p(x; θ) dx = ∂g(θ)/∂θ.
We now apply the Cauchy-Schwarz inequality


with w(x) = (α̂ − g(θ))·√p(x; θ) and z(x) = (∂ln p(x; θ)/∂θ)·√p(x; θ),
which yields

var(α̂) ≥ (∂g(θ)/∂θ)² / E[(∂ln p(x; θ)/∂θ)²]

On the other hand, differentiating the regularity condition once more, we have

E[(∂ln p(x; θ)/∂θ)²] = −E[∂²ln p(x; θ)/∂θ²] = I(θ)

If α = g(θ) = θ, then var(θ̂) ≥ 1/I(θ).


Moreover, the condition for equality in the Cauchy-Schwarz inequality is

∂ln p(x; θ)/∂θ = (1/c(θ)) (g(x) − g(θ))

where c can depend on θ but not on x.

If α = g(θ) = θ, the efficient estimator is θ̂ = g(x).
When the CRLB is attained, var(θ̂) = c(θ) = 1/I(θ).

Q.E.D.

Example: DC level in white Gaussian noise

x[n] = A + w[n], n = 0, 1, …, N-1, w[n] ~ N(0, σ²)

∂ln p(x; A)/∂A = (N/σ²)(x̄ − A) ⇒ I(A) = N/σ² and var(Â) ≥ σ²/N


The sample mean estimator x̄ attains the bound and must be the MVU estimator.
Summary: The CRLB theorem not only gives us a lower bound on the variance of
any unbiased estimator but also finds the estimator that attains the bound
(if it exists).
Estimators that attain the bound are termed efficient (they make good use of
the data).
efficient ⇒ MVU, but MVU does not imply efficient.
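A quick Monte Carlo sketch of this summary (A, σ², N, and the trial count below are illustrative assumptions): the sample mean is unbiased, and its variance matches the CRLB σ²/N.

```python
import numpy as np

# Monte Carlo check: sample mean of x[n] = A + w[n], w[n] ~ N(0, sigma2)
rng = np.random.default_rng(0)
A, sigma2, N, trials = 3.0, 2.0, 50, 20000

x = A + rng.normal(0.0, np.sqrt(sigma2), size=(trials, N))
A_hat = x.mean(axis=1)            # sample mean estimator, one per trial

bias = A_hat.mean() - A           # ≈ 0 (unbiased)
var_hat = A_hat.var()             # ≈ CRLB
crlb = sigma2 / N
print(bias, var_hat, crlb)
```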


Example: Phase estimation

Assume we wish to estimate the phase φ of a sinusoid embedded in WGN:

x[n] = A cos(2π f0 n + φ) + w[n], n = 0, 1, …, N-1

The amplitude A and frequency f0 are assumed known.

Does an efficient estimator exist? The equality condition for the CRLB cannot
be satisfied here, so no efficient estimator exists.
Does an MVU estimator exist? How to find it?


~ General CRLB for Signals in White Gaussian Noise

Assume a deterministic signal with an unknown parameter θ is observed in WGN:

x[n] = s[n; θ] + w[n], n = 0, 1, …, N-1

var(θ̂) ≥ σ² / Σ_{n=0}^{N-1} (∂s[n; θ]/∂θ)²

Example: for s[n; f0] = A cos(2π f0 n + φ), this formula gives the CRLB for
the frequency f0.


~ Transformation of Parameters
Assume we wish to estimate α = g(θ), a function of θ.

var(α̂) ≥ (∂g(θ)/∂θ)² / I(θ)

Example: DC level in WGN

x[n] = A + w[n]
CRLB for α = g(A) = A²: var(α̂) ≥ (2A)² / (N/σ²) = 4A²σ²/N

Question: The sample mean estimator x̄ is efficient for A.
Is x̄² efficient for A²? NO!!

Since x̄ ~ N(A, σ²/N),

E(x̄²) = A² + σ²/N ≠ A²

x̄² is not even an unbiased estimator!! Not efficient!

In general, the efficiency of an estimator is destroyed by a nonlinear
transformation.
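A sketch of the bias just derived (the parameter values are assumptions for illustration): averaging x̄² over many trials overshoots A² by about σ²/N.

```python
import numpy as np

# E(xbar^2) = A^2 + sigma2/N, so xbar^2 is biased for A^2 by sigma2/N
rng = np.random.default_rng(1)
A, sigma2, N, trials = 2.0, 4.0, 25, 200000

xbar = (A + rng.normal(0.0, np.sqrt(sigma2), size=(trials, N))).mean(axis=1)
bias = (xbar ** 2).mean() - A ** 2   # ≈ sigma2/N
print(bias, sigma2 / N)
```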


The efficiency can be maintained over a linear (or affine) transformation.

Proof: Assume θ̂ is an efficient estimator for θ and we want to estimate
g(θ) = aθ + b.

The CRLB is a²/I(θ).

But for the estimator ĝ = aθ̂ + b we have E(ĝ) = aθ + b (unbiased) and
var(ĝ) = a²·var(θ̂) = a²/I(θ) ⇒ the CRLB is achieved!

If the data record is large enough, the efficiency of an estimator is
approximately maintained over nonlinear transformations!!

Example: DC level in WGN
x[n] = A + w[n]
α = g(A) = A²

E(x̄²) = A² + σ²/N → A² ⇒ asymptotically unbiased!
var(x̄²) = 4A²σ²/N + 2σ⁴/N² → 4A²σ²/N = CRLB ⇒ asymptotically efficient!

~ Extension to a Vector Parameter

Wish to estimate a vector parameter θ = [θ1 θ2 … θp]T.
Assume E(θ̂) = θ (unbiased). Then

var(θ̂i) ≥ [I⁻¹(θ)]ii

where I(θ) is the Fisher information matrix,

[I(θ)]ij = −E[∂²ln p(x; θ)/∂θi ∂θj]

Remark: the bound on each component accounts for the need to estimate all
parameters jointly.

Example: Line Fitting

x[n] = A + Bn + w[n], n = 0, 1, …, N-1, w[n]: WGN, θ = [A B]T

The likelihood function follows from the Gaussian pdf of the data, and the
2 × 2 Fisher information matrix is obtained by differentiating its logarithm.


Remarks:
1. When B is known, we have a lower CRLB for A.

⇒ The CRLB always increases as we estimate more parameters.

2. B is easier to estimate than A for N ≥ 3.


x[n] is more sensitive to changes in B than to changes in A.

Remark: C_θ̂ − I⁻¹(θ) ⪰ 0 (positive semi-definite) for any unbiased θ̂.


Example revisited: Line Fitting

x[n] = A + Bn + w[n], n = 0, 1, …, N-1, w[n]: WGN, θ = [A B]T

The estimator θ̂ = (HTH)⁻¹HTx attains the CRLB.

Efficient and thus MVU!!!


~ Vector Parameter CRLB for Transformations

Wish to estimate α = g(θ), g: an r-dimensional function.

var(α̂i) ≥ [ (∂g(θ)/∂θ) I⁻¹(θ) (∂g(θ)/∂θ)T ]ii

where ∂g(θ)/∂θ is the r × p Jacobian matrix.

Example: x[n] = A + w[n]

w[n]: WGN with variance σ²
A and σ² are unknown.
Wish to estimate the signal-to-noise ratio α = A²/σ².

~ CRLB for the General Gaussian Case

For the scalar parameter case, assume x ~ N(μ(θ), C(θ)). Then we have

I(θ) = [∂μ(θ)/∂θ]T C⁻¹(θ) [∂μ(θ)/∂θ] + (1/2) tr{ [C⁻¹(θ) ∂C(θ)/∂θ]² }


Example: x[n] = s[n; θ] + w[n], n = 0, 1, …, N-1, w[n]: WGN

x ~ N(s(θ), σ²I) ⇒ I(θ) = (1/σ²) Σ_{n=0}^{N-1} (∂s[n; θ]/∂θ)²

Generalizing to a vector signal parameter estimated in the presence of WGN, we
have

[I(θ)]ij = (1/σ²) Σ_{n=0}^{N-1} (∂s[n; θ]/∂θi)(∂s[n; θ]/∂θj)

Example: x[n] = w[n], n = 0, 1, …, N-1

w[n]: WGN with unknown variance θ = σ²
C(σ²) = σ²I ⇒ I(σ²) = N/(2σ⁴), so CRLB = 2σ⁴/N


Example: x[n] = A + w[n], n = 0, 1, …, N-1

w[n]: WGN
A: Gaussian random variable with zero mean and unknown variance σA²
A is independent of w[n].
x = [x[0] x[1] … x[N-1]]T is Gaussian with zero mean and
an N×N covariance matrix C(σA²) whose [i, j] element is σA² + σ²δij.

Use Woodbury's identity to invert C(σA²).

As N → ∞, CRLB → 2σA⁴


~ Asymptotic CRLB for WSS Gaussian Random Processes

At times, it is difficult to analytically compute the CRLB due to the need to
invert the covariance matrix.
An alternative form can be applied to WSS Gaussian processes.
Assume x[n] is a zero-mean WSS Gaussian process with the autocorrelation
function rxx[k] = E[x[n]x[n+k]].
Assume the data record length N is much greater than the correlation time of
the process. Then

I(θ) ≈ (N/2) ∫_{-1/2}^{1/2} [∂ln Pxx(f; θ)/∂θ]² df

Pxx(f; θ) = F{rxx[k]}: PSD (Power Spectral Density) of x[n]

Example: Center Frequency of Process

Pxx(f; fc) = Q(f − fc) + Q(−f − fc) + σ²

Assume Q(f) and σ² are known. Wish to estimate fc.


The possible center frequencies are constrained to be in the interval
[f1, 0.5 − f2] so that Q(f − fc) is always within [0, 0.5], where the band
edges satisfy f1, f2 << 0.5.

For fc in this interval and Q(f) >> σ², the integral simplifies, and the CRLB
becomes inversely proportional to the mean square bandwidth of Q(f).

Narrower PSDs ⇒ better accuracy


~ Signal Processing Examples


Example: Range Estimation

s(t): nonzero over [0, Ts]

x(t) = s(t − τ0) + w(t)

τ0 = 2R/c

R: range, c: the speed of propagation

w(t): Gaussian noise

Assume the maximum time delay is τ0max.

Observation interval: 0 ≤ t ≤ T = Ts + τ0max.


Note that w[n] is WGN with variance σ² = N0/(2Δ), where Δ is the sampling
interval.

M: length of the sampled signal

n0 = τ0/Δ: delay in samples


The CRLB can be written in terms of the mean square bandwidth:

var(τ̂0) ≥ 1 / (η F̄²), where η is the SNR and

F̄² = ∫ (2πF)² |S(F)|² dF / ∫ |S(F)|² dF, S(F) = F{s(t)}

If s(t) is a Gaussian pulse, |S(F)|² is also Gaussian, and F̄² is inversely
proportional to the squared pulse duration: shorter pulses have a larger mean
square bandwidth.

For good estimation:

(1) large mean square bandwidth F̄²
(2) large SNR.


Example: Sinusoidal Parameter Estimation

x[n] = A cos(2π f0 n + φ) + w[n], n = 0, 1, …, N-1

Want to estimate A, f0, and φ, with A > 0 and 0 < f0 < 0.5,
where θ = [A f0 φ]T.

For large N, with η = A²/(2σ²) the SNR,

var(Â) ≥ 2σ²/N
var(f̂0) ≥ 12 / ((2π)² η N(N² − 1))
var(φ̂) ≥ 2(2N − 1) / (η N(N + 1))

Linear Models
~ Definition and Properties
Example: Line Fitting

x = Hθ + w, where H is the known N × p observation matrix.

Linear Model: x = Hθ + w.
Assume w ~ N(0, σ²I).

x ~ N(Hθ, σ²I)

Useful identity for the derivation: ∂(xT A x)/∂x = 2Ax (A is symmetric).


Assume HTH is invertible. Then

θ̂ = (HTH)⁻¹HTx   MVU estimator

Covariance matrix: C_θ̂ = σ²(HTH)⁻¹

Remark: HTH is invertible ⇔ the columns of H are linearly independent.
Counterexample: if one column of H is a linear combination of the others, the
model parameters will not be identifiable even in the absence of noise.

In practice, if HTH is ill-conditioned or H is approximately rank deficient,
we cannot estimate θ reliably.
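A minimal sketch of this estimator for the line-fitting model x[n] = A + Bn + w[n] (the values of A, B, σ, and N below are illustrative assumptions):

```python
import numpy as np

# MVU estimator for the linear model: theta_hat = (H^T H)^-1 H^T x,
# with covariance sigma^2 (H^T H)^-1
rng = np.random.default_rng(2)
N, A, B, sigma = 100, 1.5, -0.3, 0.5

n = np.arange(N)
H = np.column_stack([np.ones(N), n])              # observation matrix
x = H @ np.array([A, B]) + rng.normal(0.0, sigma, N)

theta_hat = np.linalg.solve(H.T @ H, H.T @ x)     # (H^T H)^-1 H^T x
cov = sigma**2 * np.linalg.inv(H.T @ H)           # estimator covariance
print(theta_hat)
print(np.diag(cov))
```

Note that the slope B is estimated far more accurately than the intercept A, consistent with the CRLB remarks for line fitting.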


~ Linear Model Examples


Example: Curve Fitting

Assume w(tn) are i.i.d. Gaussian random variables with zero mean and
variance 2.


Example: Fourier Analysis

w[n]: WGN (N > 2M)

θ̂ ~ Gaussian; the estimates are scaled Discrete Fourier Transform coefficients.

The amplitude estimates are independent!!



Example: System Identification


A common system model is the tapped delay line (TDL) or finite impulse
response (FIR) filter.

We provide u[n] and measure x[n]. Wish to estimate the weights h[n].
If u[n] is provided for n = 0, 1, .., N-1 and u[n] = 0 for n < 0, we observe

If w[n] is WGN with variance 2, w ~ N(0, 2I).


MVU and efficient


Note that how well we can estimate the tap weights depends on the probing
signal u[n]. The signal should be chosen to be a pseudorandom noise (PRN)
sequence.
Proof:

HTH is positive definite and can be factored as DTD with D an invertible
p × p matrix. Writing the variance of each ĥi in terms of D shows that

var(ĥi) ≥ σ² / [HTH]ii

The minimum variance is attained if HTH is diagonal.

⇒ To minimize the variance of the MVU estimator, u[n] should be chosen to
make HTH diagonal.


Since [HTH]ij = Σ_n u[n−i] u[n−j], for large N,

(1/N)[HTH]ij ≈ ruu[i−j] ~ autocorrelation function of u[n]

HTH becomes diagonal when ruu[k] = 0 for k ≠ 0, i.e., when u[n] is white
noise. Actually, we use a pseudorandom noise (PRN) sequence.

⇒ The TDL weight estimators are independent.
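A quick numeric sketch of this claim (p and N below are illustrative assumptions): for a ±1 pseudorandom probing sequence, (1/N)·HᵀH is close to the identity, i.e. ruu[i−j] ≈ δ[i−j].

```python
import numpy as np

# Build the FIR observation matrix H whose column j holds u[n-j],
# then check that (1/N) H^T H is nearly diagonal for a PRN-like input.
rng = np.random.default_rng(3)
N, p = 20000, 4

u = rng.choice([-1.0, 1.0], size=N)            # PRN-like probing signal
u_pad = np.concatenate([np.zeros(p - 1), u])   # enforce u[n] = 0 for n < 0
H = np.column_stack(
    [u_pad[p - 1 - j: p - 1 - j + N] for j in range(p)]  # column j: u[n-j]
)

G = H.T @ H / N                                # ≈ identity (r_uu[0] = 1)
off = np.abs(G - np.diag(np.diag(G))).max()    # largest off-diagonal entry
print(np.diag(G), off)
```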


~ Extension to the Linear Model

A more general linear model allows for noise that is not white:

x = Hθ + w, w ~ N(0, C)

To find the MVU estimator:

(1) Repeat the ∂ln p(x; θ)/∂θ derivation
(2) Whitening approach

Whitening approach
Since C is positive definite, C⁻¹ is positive definite and can be factored as

C⁻¹ = DTD

where D is an N × N invertible matrix.

D acts as a whitening transformation: x' = Dx = DHθ + Dw, with
E[(Dw)(Dw)T] = DCDT = I.

Applying the white-noise result to the transformed model yields

θ̂ = (HTC⁻¹H)⁻¹HTC⁻¹x, C_θ̂ = (HTC⁻¹H)⁻¹

⇒ Estimation of signal parameters in a colored noise environment.
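A sketch of the whitening approach for a DC level in colored noise (the AR(1)-style covariance and all parameter values below are illustrative assumptions). With C = L·Lᵀ (Cholesky), D = L⁻¹ whitens the noise since D·C·Dᵀ = I and C⁻¹ = DᵀD.

```python
import numpy as np

# Whiten a colored-noise linear model and apply the white-noise MVU estimator.
rng = np.random.default_rng(4)
N, A, rho = 200, 2.0, 0.9

idx = np.arange(N)
C = rho ** np.abs(np.subtract.outer(idx, idx))   # colored-noise covariance
L = np.linalg.cholesky(C)                        # C = L L^T
D = np.linalg.inv(L)                             # whitening transformation

H = np.ones((N, 1))                              # DC-level model x = H*A + w
x = A + L @ rng.normal(size=N)                   # w = L z is N(0, C)

Hp, xp = D @ H, D @ x                            # whitened linear model
A_hat = float(np.linalg.solve(Hp.T @ Hp, Hp.T @ xp))
# identical to the direct form (H^T C^-1 H)^-1 H^T C^-1 x
print(A_hat)
```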


Example: x[n] = A + w[n], n = 0, 1, …, N-1

w[n]: colored Gaussian noise with covariance matrix C

Recall Â = (1TC⁻¹1)⁻¹ 1TC⁻¹x, where 1 = [1 1 … 1]T.

The data are first prewhitened to form x'[n] = [Dx]n and then averaged using
prewhitened averaging weights dn.

Another extension to the linear model:

x = Hθ + s + w

s: known signal
Let x' = x − s ⇒ x' = Hθ + w, the standard linear model.


Example: DC Level and Exponential in White Noise

x[n] = A + r^n + w[n], n = 0, 1, …, N-1
r: known
w[n]: WGN
A is to be estimated.

Let x'[n] = x[n] − r^n ⇒ Â = (1/N) Σ_{n=0}^{N-1} (x[n] − r^n), var(Â) = σ²/N.


General Minimum Variance Unbiased Estimation

~ Sufficient Statistics
If an efficient estimator does not exist, it is still of interest to find the
MVU estimator.
Recall the problem of estimating a DC level A in WGN:
Â = x̄ is the MVU estimator with variance σ²/N.
Consider Ǎ = x[0] as an estimator.

E[Ǎ] = A, var[Ǎ] = σ²

The variance is much larger because the data {x[1], x[2], …, x[N-1]} are
not used.
Which data are important or sufficient for estimation?
Consider the sets

S1 = {x[0], x[1], …, x[N-1]},
S2 = {x[0] + x[1], x[2], …, x[N-1]},
S3 = {Σ_{n=0}^{N-1} x[n]}

All sets are sufficient since Â may be found from each of them.

Once the sufficient statistics are known, we no longer need the individual data
values. All information has been summarized in the sufficient statistic.
The sufficient data set that contains the least number of elements is called the
minimal one.

S3: Σ_{n=0}^{N-1} x[n] is the minimal sufficient statistic.


To quantify these ideas, consider the conditional pdf

p(x | Σ_{n=0}^{N-1} x[n] = T0; A)

and assume T0 = Σ_{n=0}^{N-1} x[n] has been observed.

This is the pdf of the observations after the sufficient statistic has been
observed. If the statistic is sufficient, this pdf will not depend on A.
Otherwise, the data would provide additional information about A.

To show that Σ_{n=0}^{N-1} x[n] is a sufficient statistic, we compute the
conditional pdf p(x | T(x) = T0; A) directly, where T(x) = Σ_n x[n], and
verify that the result doesn't depend on A.

~ Finding Sufficient Statistics

Neyman-Fisher Factorization Theorem: T(x) is a sufficient statistic for θ if
and only if the pdf can be factored as

p(x; θ) = g(T(x), θ) h(x)

where g depends on x only through T(x) and h does not depend on θ.

Proof:
(⇐) Assume the factorization holds.


The conditional pdf p(x | T(x) = T0; θ) then does not depend on θ.

⇒ T(x) is a sufficient statistic.

(⇒) Assume T(x) is a sufficient statistic.

Consider the joint pdf of x and T(x).

Since T(x) is a sufficient statistic,

p(x | T(x) = T0; θ) = p(x | T(x) = T0)

Moreover, T(x) = T0 defines a surface in R^N. The conditional pdf is nonzero
only on that surface.

We can therefore write

p(x; θ) = p(x | T(x) = T0) · p(T(x) = T0; θ) = g(T(x), θ) h(x)

where g(T(x), θ) involves the pdf of the sufficient statistic and
h(x) = p(x | T(x)) does not depend on θ.
Moreover, the pdf of the sufficient statistic can be found based on the
factorization.

Example: DC Level in WGN

T(x) = Σ_{n=0}^{N-1} x[n] is a sufficient statistic for A.

Example: Power of WGN

T(x) = Σ_{n=0}^{N-1} x²[n] is a sufficient statistic for σ².

Example: Phase of Sinusoid

x[n] = A cos(2π f0 n + φ) + w[n]; A, f0: known; w[n]: WGN

It doesn't appear that the pdf is factorable as required by the Neyman-Fisher
theorem: no single sufficient statistic exists. However, the pdf can be
factored as

p(x; φ) = g(T1(x), T2(x), φ) h(x)

with T1(x) = Σ_n x[n] cos(2π f0 n) and T2(x) = Σ_n x[n] sin(2π f0 n).
T1(x) and T2(x) are jointly sufficient statistics for the estimation of φ.
The r statistics T1(x), T2(x), …, Tr(x) are jointly sufficient statistics if
the conditional pdf p(x | T1(x), T2(x), …, Tr(x); θ) does not depend on θ.

Remark: The original data x are always sufficient statistics.


~ Using Sufficiency to Find the MVU Estimator

Example: DC Level in WGN
Sufficient statistic: T(x) = Σ_{n=0}^{N-1} x[n]
Based on T(x), there are two ways to find the MVU estimator.

(1) Find any unbiased estimator of A, say Ǎ = x[0], and determine
Â = E[Ǎ | T].
(2) Find some function g so that Â = g(T) is an unbiased estimator of A.

Approach 1: Â = E{x[0] | Σ_{n=0}^{N-1} x[n]}

For a Gaussian random vector [x y]T with mean vector μ = [E(x) E(y)]T and
covariance matrix C,

E(x | y) = E(x) + (Cxy/Cyy)(y − E(y))

Let x = x[0] and y = Σ_{n=0}^{N-1} x[n].

Since [x y]T is a linear transformation of a Gaussian random vector, the pdf
of [x y]T is N(μ, C), with Cxy = σ² and Cyy = Nσ², so

Â = A + (σ²/(Nσ²))(Σ_n x[n] − NA) = (1/N) Σ_{n=0}^{N-1} x[n]


Approach 2: Â = g(Σ_{n=0}^{N-1} x[n])

Choose a function g so that Â is unbiased.
Since E[Σ_n x[n]] = NA, by inspection g(T) = T/N:

Â = (1/N) Σ_{n=0}^{N-1} x[n]

Much easier to apply!!

Remark: A statistic is complete if there is only one function of the statistic
that is unbiased.
Assume T(x) is complete.
(1) Â = g(T(x)) is unique.
(2) All unbiased Ǎ produce the same Â = E(Ǎ | T).
(3) The variance can only decrease (Rao-Blackwell).
⇒ Â is the MVU estimator.


Proof of RBLS Theorem:

(1) θ̂ = E(θ̌ | T(x)) is a valid estimator of θ (not a function of θ):
since p(x | T(x); θ) does not depend on θ, θ̂ is solely a function of T(x).

(2) θ̂ is unbiased: E(θ̂) = E[E(θ̌ | T)] = E(θ̌) = θ.

(3) var(θ̂) ≤ var(θ̌)


Remarks:
1. In general, it is difficult to validate the completeness of a sufficient
statistic.
2. For the exponential family of PDFs, sufficient statistics are complete.
Example: Completeness of a Sufficient Statistic

Let T = Σ_{n=0}^{N-1} x[n].

Wish to show that if E[g(T)] = A for all A, then there is only one
solution for g.
Suppose there exists a second function h with E[h(T)] = A for all A.

Since T ~ N(NA, Nσ²), we get

∫ v(T) p(T; A) dT = 0 for all A

where v(T) = g(T) − h(T).

Letting τ = T/N and v'(τ) = v(Nτ), we obtain an integral which can be
recognized as the convolution of v'(τ) with a Gaussian pulse w(τ),
identically zero for all A.


But W(f) = F{Gaussian pulse} > 0 for all f

⇒ V(f) = 0 for all f
⇒ v(τ) = 0 for all τ
⇒ g = h.

Example: Incomplete Sufficient Statistic

x[0] = A + w[0], where w[0] ~ U[-0.5, 0.5]
x[0] is a sufficient statistic.
x[0] is an unbiased estimator of A: g(x[0]) = x[0].
Assume we have another function h for which E{h(x[0])} = A.
Let v(T) = g(T) − h(T), where T = x[0]. Then

∫_{A−0.5}^{A+0.5} v(T) dT = 0 for all A

The nonzero function v(T) = sin(2πT) will satisfy this condition.

Hence, a solution is v(T) = g(T) − h(T) = sin(2πT), or h(T) = T − sin(2πT).

Ǎ = x[0] − sin(2πx[0]) is also based on the sufficient statistic and is
unbiased for A.
The RBLS theorem no longer holds. It is not possible to assert that
Â = x[0] is the MVU estimator.


A sufficient statistic is complete if the condition

E[v(T)] = 0 for all θ

is satisfied only by v(T) = 0 for all T.

Procedure for finding the MVU estimator:

1. Find a single sufficient statistic for θ by using the Neyman-Fisher
factorization theorem.
2. Determine if the sufficient statistic is complete. If so, proceed;
otherwise, this approach cannot be used.
3. Find a function g of the sufficient statistic that yields an unbiased
estimator θ̂ = g(T(x)). The MVU estimator is then θ̂.
(alternative implementation)

3'. Evaluate θ̂ = E(θ̌ | T(x)), where θ̌ is any unbiased estimator.


Example: Mean of Uniform Noise

x[n] = w[n], n = 0, 1, …, N-1
where w[n] is i.i.d. noise with pdf U[0, β] for β > 0.
Wish to find the MVU estimator for the mean θ = β/2.
Since the pdf doesn't satisfy the regularity condition, we cannot adopt
the CRLB approach.
The sample mean x̄ is an unbiased estimator.
Factoring the pdf,

p(x; β) = (1/β^N) u(β − max x[n]) · h(x)

where u(·) is the unit step and h(x) does not depend on β.

⇒ T(x) = max(x[n]) is a sufficient statistic for β.

It can also be shown that T(x) = max(x[n]) is complete.


Next, we must determine a function g so that g(max x[n]) is unbiased.
Since E[max x[n]] = Nβ/(N + 1) = 2Nθ/(N + 1), the MVU estimator is

θ̂ = ((N + 1)/(2N)) max x[n]
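A quick Monte Carlo comparison (β, N, and the trial count below are illustrative assumptions): both estimators of θ = β/2 are unbiased, but the scaled maximum has the smaller variance.

```python
import numpy as np

# Compare the sample mean with the MVU estimator ((N+1)/(2N)) * max(x[n])
# for the mean of U[0, beta] noise.
rng = np.random.default_rng(5)
beta, N, trials = 2.0, 10, 100000

x = rng.uniform(0.0, beta, size=(trials, N))
mean_est = x.mean(axis=1)                      # sample mean
mvu_est = (N + 1) / (2 * N) * x.max(axis=1)    # MVU estimator

print(mean_est.mean(), mvu_est.mean())         # both near beta/2
print(mean_est.var(), mvu_est.var())           # MVU variance is smaller
```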


Remarks:
1. The sample mean is not the MVU estimator of the mean for uniformly
distributed noise. For the sample mean, var(x̄) = β²/(12N), while the MVU
estimator achieves var(θ̂) = β²/(4N(N+2)) < β²/(12N) for N > 1.
2. In the phase estimation example, we require two sufficient statistics,
T1(x) and T2(x). We need to evaluate E(φ̌ | T1, T2) or find g(T1, T2) that is
unbiased. Difficult!!

~ Extension to a Vector Parameter

Assume θ is a p × 1 vector. A vector statistic T(x) = [T1(x) T2(x) … Tr(x)]T
is sufficient for the estimation of θ if p(x | T(x); θ) does not depend on θ.
It is possible that
(1) r > p: more sufficient statistics than parameters
(2) r = p: as many sufficient statistics as parameters
(3) r < p: fewer sufficient statistics than parameters
r = p allows us to determine the MVU estimator by transforming the
sufficient statistics to be unbiased.


Example: Sinusoidal Parameter Estimation

A sinusoidal signal in WGN.

If A, f0, and σ² are unknown,

θ = [A f0 σ²]T

The pdf cannot be reduced to the form p(x; θ) = g(T(x), θ) h(x).

If f0 is known but A and σ² are unknown,
θ = [A σ²]T
and the factorization succeeds, yielding jointly sufficient statistics.


Remark: In the vector parameter case, completeness means that for v(T), an
arbitrary r × 1 function of T, if E[v(T)] = 0 for all θ, then v(T) = 0.

Example: DC Level in White Noise with Unknown Noise Power

x[n] = A + w[n], n = 0, 1, …, N-1
w[n]: WGN with variance σ²
A, σ²: unknown.
θ = [A σ²]T

p(x; θ) = (2πσ²)^{-N/2} exp[ −(1/(2σ²)) ( Σ_{n=0}^{N-1} x²[n]
− 2A Σ_{n=0}^{N-1} x[n] + NA² ) ]

⇒ T(x) = [Σ_n x[n], Σ_n x²[n]]T are jointly sufficient.
r = p = 2 ⇒ we can find the MVU estimator.


Since this pdf belongs to the vector exponential family of PDFs, the statistic
is complete.

MVU estimator: Â = x̄, σ̂² = (1/(N−1)) Σ_{n=0}^{N-1} (x[n] − x̄)²

However, this MVU estimator is not efficient!

Pf: var(σ̂²) = 2σ⁴/(N−1) > 2σ⁴/N = CRLB, so the bound is not attained for any
finite N. However, it has been shown that the estimator is asymptotically
efficient as N → ∞.


Best Linear Unbiased Estimators (BLUE)

~ Definition
Restrict the estimator to be linear in the data,

θ̂ = Σ_{n=0}^{N-1} a_n x[n]

and find the MVU estimator within this class.

BLUE = MVU estimator if the MVU estimator happens to be linear.
Example: DC Level in WGN
MVU: x̄ (linear) ⇒ BLUE = MVU
Example: Mean of Uniformly Distributed Noise
MVU: ((N+1)/(2N)) max x[n] (nonlinear) ⇒ BLUE is suboptimal


For some estimation problems, the use of a BLUE can be totally inappropriate.
Example: estimation of the power of WGN.
MVU: σ̂² = (1/N) Σ_{n=0}^{N-1} x²[n]
If we force the estimator to be linear, E[Σ a_n x[n]] = 0 for all a_n since
E(x[n]) = 0.

We cannot even find a single linear estimator that is unbiased!

Instead, we may transform the data into y[n] = x²[n].

⇒ The BLUE may still be used on y[n].

~ Finding the BLUE

Unbiased constraint: E(θ̂) = Σ_{n=0}^{N-1} a_n E(x[n]) = θ

Let a = [a0 a1 … aN-1]T.


In order to satisfy the unbiased constraint, E{x[n]} must be linear in θ:

E(x[n]) = s[n] θ

where the s[n] are known.

⇒ The BLUE is applicable to amplitude estimation of signals in noise.

Summary: We minimize the variance

var(θ̂) = aT C a

subject to the constraint aT s = 1, where s = [s[0] s[1] … s[N-1]]T.

Solution: Use the method of Lagrangian multipliers:

a_opt = C⁻¹s / (sT C⁻¹ s) ⇒ θ̂ = sT C⁻¹ x / (sT C⁻¹ s),
var(θ̂) = 1 / (sT C⁻¹ s)


The Lagrangian multiplier is found using the constraint aT s = 1.

To verify that θ̂ is unbiased: E(θ̂) = a_optT s · θ = θ.

To verify that a_opt is indeed the global minimum, note that the objective
aT C a is convex (C is positive definite), so the stationary point is the
global minimum.


To determine the BLUE, we only require knowledge of

1. s, the scaled mean
2. C, the covariance
We don't need the entire pdf!!
Example: DC Level in White Noise
x[n] = A + w[n], n = 0, 1, …, N-1
w[n]: white noise of unspecified pdf with variance σ²
Since E(x[n]) = A, we have s[n] = 1 and thus s = 1 (the all-ones vector):

Â = 1T x / (1T 1) = x̄, var(Â) = σ²/N

Remarks:
1. The sample mean is the BLUE independent of the pdf of the data.
2. For Gaussian data, the sample mean is the MVU estimator.

Example: DC Level in Uncorrelated Noise

x[n] = A + w[n], n = 0, 1, …, N-1
w[n]: zero mean uncorrelated noise with var(w[n]) = σn²

Â = Σ_{n=0}^{N-1} (x[n]/σn²) / Σ_{n=0}^{N-1} (1/σn²),
var(Â) = 1 / Σ_{n=0}^{N-1} (1/σn²)

The BLUE weights most heavily those samples with the smallest variances.
Remark: Our model E(x[n]) = θ s[n] is actually the scalar general linear model
x = sθ + w.
For Gaussian noise, BLUE = MVU estimator.
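A sketch of this weighted average (all values below are illustrative assumptions): the BLUE weights a_n are proportional to 1/σn², normalized so that Σ a_n = 1, and the resulting variance matches 1/Σ(1/σn²).

```python
import numpy as np

# BLUE for a DC level in uncorrelated noise with unequal variances.
rng = np.random.default_rng(6)
A, trials = 5.0, 50000
sig2 = np.array([0.1, 0.5, 1.0, 4.0])          # per-sample noise variances

x = A + rng.normal(0.0, np.sqrt(sig2), size=(trials, sig2.size))

a = (1.0 / sig2) / np.sum(1.0 / sig2)          # BLUE weights, sum to 1
blue = x @ a                                   # weighted average
naive = x.mean(axis=1)                         # unweighted sample mean

print(blue.var(), 1.0 / np.sum(1.0 / sig2))    # ≈ theoretical minimum
print(naive.var())                             # larger
```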

~ Extension to a Vector Parameter

θ = [θ1 θ2 … θp]T

Linear estimator: θ̂ = Ax for a p × N matrix A.

To be unbiased, E(θ̂) = A E(x) = θ for all θ.

To satisfy the unbiased constraint, E(x) must be linear in θ:

E(x) = Hθ, so that the constraint becomes AH = I.

Assume the noise covariance is C. The BLUE estimator is

θ̂ = (HT C⁻¹ H)⁻¹ HT C⁻¹ x, C_θ̂ = (HT C⁻¹ H)⁻¹   (Gauss-Markov theorem)

Remark: The form of the BLUE is identical to the MVU estimator for the
general linear model: x = Hθ + w, where w ~ N(0, C).


~ Signal Processing Example


Example: Source Localization
Determine the position of a source based on the emitted signal of the source.
If R1 = R2 = R3, then t2 − t1 = t3 − t2 = 0.

Assume N antennas have been placed at known locations and the time of
arrival measurements ti for i = 0, 1, …, N-1 are available.
The arrival times are corrupted by noise with zero mean and known covariance
but otherwise unknown pdf, so the BLUE applies.

For a signal emitted by the source at time T0,

ti = T0 + Ri/c + εi

εi: measurement noise (zero mean, uncorrelated, variance σ²)
c: propagation speed.
(xi yi)T: position of the ith antenna
(xs ys)T: unknown source position

The source is near the nominal position (xn yn)T, which has been
obtained from previous measurements.

Let δxs = xs − xn and δys = ys − yn. Linearizing Ri about the nominal
position, we obtain a model linear in the unknown parameters.

Unknown parameters: T0, δxs, δys.

To get rid of T0, we consider time differences of arrival (TDOA).


Taking differences of adjacent arrival times yields noise w that is zero mean
but no longer uncorrelated, so the full covariance C must be used in the BLUE.

e.g., N = 3: C = σ² [[2, −1], [−1, 2]]

