You are on page 1of 54

!

!"#$%&'()(

(
*#'#+&%&'(,-%.+#%./0


)12! 3456789!5374(
(
In Chapter 5, we considered the problem oI detection theory, where the receiver
receives a noisy version oI a signal and decides which hypothesis is true among
the!"!possible hypotheses. In the binary case, the receiver had to decide between
the null hypothesis #
0
and the alternate hypothesis #
1
.
In this chapter, we assume that the receiver has made a decision in Iavor oI the
true hypothesis, but some parameter associated with the signal may not be known.
The goal is to estimate those parameters in an optimum Iashion based on a Iinite
number oI samples oI the signal.
Let
$
% % % , ... , ,
2 1
be $ independent and identically distributed samples oI a
random variable %, with some density Iunction depending on an unknown
parameter T. Let
$
& & & , ... , ,
2 1
be the corresponding values oI samples
$
% % % , ... , ,
2 1
and ) , ... , , (
2 1 $
% % % ' , a Iunction (a statistic) oI the samples used to
estimate the parameter T. We call
) , ... , , (
`
2 1 $
% % % ' T (6.1)
the ()*+,-*./!oI T. The value that the statistic assumes is called the ()*+,-*( oI T
and is equal to ) , ... , , (
`
2 1 $
& & & ' T . In order to avoid any conIusion between a
random variable and its value, it should be noted that
`
, the estimate oI T, is
actually ) , ... , , (
2 1 $
% % % ' . Consequently, when we speak oI the mean oI
`
, |
`
| 0 ,
we are actually reIerring to )| , ... , , ( |
2 1 $
% % % ' 0 .
The parameter to be estimated may be random or nonrandom. The estimation
oI random parameters is known as the 1-&()2! ()*+,-*+.3, while the estimation oI
nonrandom parameters is reIerred to as the maximum likelihood estimation
(MLE).

345
4
Signal Detection and Estimation 346
In Section 6.2, we present the maximum likelihood estimator, then we use this
estimator to compute the likelihood ratio test. This is called the !"#"$%&'(")
&'*"&'+,,)-$%.', ."/.. In Section 6.4, we present the criteria Ior a 'good estimator.
When the parameter to be estimated is a random variable, we use the Bayes`
estimation. SpeciIically, we study the minimum mean-square estimation, the
minimum mean absolute value oI error estimation, and the maximum a posteriori
estimation. The Cramer-Rao lower bound on the estimator is presented in Section
6.6. Then, we generalize the above concepts to multiple parameter estimation.
Based on the Iact that sometimes it is not possible to determine the optimum
mean-square estimate, even iI it exists, we present the best linear unbiased
estimator, which is a suboptimum solution, and discuss the conditions under which
it becomes optimum. In Section 6.9, we present the least-square estimation, which
is diIIerent than the above-mentioned methods, in the sense that it is not based on
an unbiased estimator with minimum variance, but rather on minimizing the
squared diIIerence between the observed data and the signal data. We conclude the
chapter with a brieI section on recursive least-square estimation Ior real-time
applications.


!"#! $%&'$($)*'+,*'-../),01'$%1'.2)
)
As mentioned in the previous Iunction, the procedure commonly used to estimate
nonrandom parameters is the maximum likelihood- (ML) estimation. Let
0
1 1 1 , ... , ,
2 1
be 0- observations oI the random variable !, with sample values
0
2 2 2 , ... , ,
2 1
. These random variables are independent and identically
distributed. Let ) , (
,
T "
!
3 denote the conditional density Iunction oI the random
variable 1. Note that the density Iunction oI ! depends on the parameter T ,
T , which needs to be estimated. The likelihood Iunction, ), ( 4 is

T T T T
0
*
* 1 0 1 1
2 3 3 2 2 2 3 4
* 0
1
, , 2 1 , ,...,
) , ( ) , ( ) , , , , ( ) (
1
"
!
(6.2)
The value
`
that maximizes the likelihood Iunction is called the 5%6'575-
&'*"&'+,,)- "/.'5%.,$ oI T. In order to maximize the likelihood Iunction, standard
techniques oI calculus may be used. Because the logarithmic Iunction 6 ln is a
monotonically increasing Iunction oI 6, as was shown in Chapter 5, maximizing
) ( 4 is equivalent to maximizing ) ( ln 4 . Hence, it can be shown that a
necessary but not suIIicient condition to obtain the ML estimate
`
is to solve the
&'*"&'+,,)-"87%.',#.
0 ) , ( ln

w
w
"
!
3 (6.3)
Parameter Estimation
347
Invariance Propertv. Let ) ( L be the likelihood Iunction oI u and ) ( g be a one-
to-one Iunction oI u ; that is, iI
2 1 2 1
) ( ) ( u = u u = u g g . II
`
is an MLE oI u,
then )
`
( g is an MLE oI ) ( g .

!"#$%&'()*+(

In Example 5.2, the received signal under hypotheses H
1
and H
0
was
K k N Y H
K k N m Y H
k k
k k
, ... , 2 , 1 , :
, ... , 2 , 1 , :
0
1
= =
= + =

(a) Assuming the constant m is not known, obtain the ML estimate
ml
m
`
oI
the mean.
(b) Suppose now that the mean m is known, but the variance
2
is unknown.
Obtain the MLE oI
2
= .

Solution

Detection theory (Chapter 5) was used to determine which oI the two hypotheses
was true. In this chapter oI estimation theory, we assume that H
1
is true. However,
a parameter is not known and needs to be estimated using MLE.

(a) The parameter
`
to be determined in this example is
ml
m
`
, where the
mean M me . Since the samples are independent and identically distributed, the
likelihood Iunction, using (6.2), is
( )
( )
( )
[
= = (
(

o t
=
(
(

o t
=
K
k
K
k
k
K K
k
M
m v m v
m f
1 1
2
2
2 / 2
2
,
2
exp
2
1
2
exp
2
1
) , ( !
"

Taking the logarithm on both sides, we obtain
( )
( )

=
o

(
(

o t
=
K
k
k
K K
M
m v
m f
1
2
2
2 /
,
2 2
1
ln ) , ( ln !
"

The ML estimate is obtained by solving the likelihood equation, as shown in (6.3).
Hence,
Signal Detection and Estimation 348
0
1
) , ( ln
1
2 2
1
2
1
2
,
=
|
|
.
|

\
|

o
=
o

o
=
o

=
c
c

= = =
! "
#
# #! " ! "
!
! $
#
%
%
#
%
%
#
%
%
&
!
"

or

=
=
#
%
%
" % !
1
) / 1 ( . Thus, the ML estimator is . ) / 1 (
`
1

=
=
#
%
% !'
" % !

(b) The likelihood Iunction is
( )
( )
(
(

o t
= o

=
#
%
%
#
#
! "
(
1
2
2
2
2
2
exp
2
1
) (
Taking the logarithm, we obtain
( )

=
o

o t = o
#
%
%
! "
#
#
(
1
2
2
2
2
ln 2 ln
2
) ( ln
Observe that maximizing ) ( ln
2
( with respect to
2
is equivalent to minimizing
( )

=
o

+ o = o
#
%
%
! "
# )
1
2
2
2
2
ln ) (
Using the invariance property, it is easier to diIIerentiate ) (
2
) with respect to o
to obtain
!'

`
the MLE oI o, instead oI
2

`
!'
the*MLE oI .
2
o Hence,
( )

=
=
o

o
=
o
o
#
%
%
! " #
+
+)
1
3
2
2
0
) (
or ( )

=
= o
#
%
%
! "
#
1
2
1
`

Consequently, the MLE oI
2
is ( ) . ) / 1 (
`
1
2 2

=
= o
#
%
% !'
! " #


!"#! $%&%'()*+%,-)*.%)*/00,-'(1*0-1%21-
-
In Example 5.9, we solved the hypothesis testing problem where the alternative
hypothesis was composite. The parameter ! under hypothesis ,
1
was unknown,
although it was known that ! was either positive or negative. When ! was
positive only (negative only), a UMP test existed and the decision rule was
!"#"$%&%#'()&*$"&*+,'
349
1
2
0
1

2
ln

! $
$
-
-
.
Ior positive'$, and
2
2
1
0

2
ln

!
$
$
-
-
.
Ior negative'$. Since the test designed Ior positive $ was not the same as the test
designed Ior negative' $, we concluded that a UMP test did not exist Ior all
possible values oI $; that is, positive and negative. This requires that diIIerent tests
be used. One approach is to use the concepts developed in Section 6.2. That is, we
use the required data to estimate T, as though hypothesis -
1
is true. Then, we use
these estimates in the likelihood ratio test as iI they are the correct values. There
are many ways to estimate T, as will be shown in this chapter. II the estimates used
are the maximum likelihood estimates, then the result is called the /%,%#"0*1%2'
0*3%0*4++2'#"&*+'&%)& and is given by

) , (
) , (
) (
0
1
0 ,
max
0
1 ,
max
1
0
1
-
-
5
5
.
/

!
!
"
"
(6.4)
1
T ! and
0
T are the unknown parameters to be estimated under hypotheses
1
- !
and
0
- , respectively.

!"#$%&'()*+(
Consider the problem oI Example 5.9, where $ is an'6,3,+7,'parameter. Obtain
the generalized likelihood ratio test and compare it to the optimum Neyman-
Pearson test.

8+06&*+,

Since the 9 observations are independent, the conditional density Iunctions under
both hypotheses
0 1
and - - are
Signal Detection and Estimation 350
[
[
=
=
(
(

o t
=
|
|
.
|

\
|
o

o t
=
!
"
"
# $
!
"
"
# $
% &
# % ' #
&
# % ' #
1
2
2
1 , , 1
1
2
2
0 , , 0
2
) (
exp
2
1
) , , ( :
2
exp
2
1
) , , ( :
1
0
!
!
"
"

where(% is an unknown parameter. Since hypothesis #
0
does not contain % (#
0
is
simple), the estimation procedure is applicable to hypothesis #
1
only. From the
likelihood equation given by (6.3), the ML estimate oI % under #
1
is given by
0
) , , ( ln
1 , ,
1
=
c
c
%
# % '
# $
!
"

Substituting Ior ) , , (
1 , ,
1
# % '
# $
!
"
in the above equation, we have
( )
0
2
1
2
2
=
(
(

c
c

=
!
"
"
% &
%
or

=
=
!
"
!
)
!
%
1
1
`
The details are given in Example 6.1. The likelihood ratio test becomes
( )
q
<
>
|
|
.
|

\
|
o

o t
(

o t
=
[
[
=
=
ln
2
exp
2
1
`
2
1
exp
2
1
) (
0
1
1
2
1
2
2
#
#
&
% &
!
"
"
!
"
"
*
!
Substituting Ior the obtained value oI %
` in the above expression, and simpliIying
aIter taking the logarithm, the test becomes
q
<
>
|
|
.
|

\
|
o

=
ln
2
1
0
1
2
1
2
#
#
&
!
!
"
"

Since
2
1
2
) 2 / 1 (
|
|
.
|

\
|
o

=
!
"
"
& ! is nonnegative, the decision will always be #
1
iI q is
less then one ( ln negative) or q is set equal to one. Consequently, q can always
be chosen greater than or equal to one. Thus, an equivalent test is
!"#"$%&%#'()&*$"&*+,'
351
2
1
2
0
1
2
1
ln 2
1
J K V

-
-
.
/
/
0
0

where 0
1
t . Equivalently, we can use the test
1
0
1
1
1
J

-
-
.
/
1
/
0
0

The decision regions are shown in Figure 6.1.
Given the desired probability oI Ialse alarm, the value oI
1
can be
determined. BeIore we can get an expression Ior
2
! , the probability oI Ialse
alarm, we need to determine the density Iunction oI 1. Since

/
0
0
3
/
1
1
1

the mean and variance oI 3 under hypothesis -
0
are zero and ,
2
V respectively. All
the observations are Gaussian and statistically independent. Thus, the
density.Iunction oI

/
0
0
3 1
1
1
is Gaussian with mean zero and variance .
2
V /
Consequently, 1 is Gaussian with mean zero and variance .
2
V That is,

V S

2
2
0 ,
2
exp
2
1
) , (
0
4
- 1 5
- 1

The probability oI Ialse alarm, Irom Figure 6.2, is

!
!
"#$%&'!()*!!Decision regions oI the generalized likelihood ratio test.
-
1
0
1'
J
1
-J
1
-
0
-
1
Signal Detection and Estimation 352












!"#$%&'()*''Density Iunction oI ! under "
0
.
|
|
.
|

\
|
o

=
|
|
.
|

\
|
o

o t
+
|
|
.
|

\
|
o

o t
=
=
} }



1
2
2
2
2
0 1
2
2
exp
2
1
2
exp
2
1
true) , (decide
1
1
# $%
%
$%
%
" " & &
'

We observe that we are able to determine the value
1
Irom the derived
probability oI Ialse alarm without any knowledge oI (. However, the probability
oI detection cannot be determined without (, but can be evaluated with ( as a
parameter. Under hypothesis "
1
,

=
=
)
*
+ !
1
1 1
is Gaussian with mean )( and
variance .
2
o ) Hence, the density Iunction oI ! is Gaussian with mean ( ) and
variance .
2
o That is,
( )
(
(

=
2
2
1 ,
2
exp
2
1
) , (
1
( ) %
" % ,
" !

The probability oI detection Ior a given value oI (, Irom Figure 6.3, is
( ) ( )
|
|
.
|

\
|
o

+
|
|
.
|

\
|
o
+
=
|
|
.
|

\
|
o

+
|
|
.
|

\
|
o

=
(
(

o t
+
(
(

o t
=
=
} }

( )
#
( )
#
( )
#
( )
#
$%
( ) %
$%
( ) %
" " & &
-
1 1 1 1
2
2
2
2
1 1
1
2
exp
2
1
2
exp
2
1
true) , (decide
1
1

0
) , (
0 ,
0
" ,
" !
!
&
'.
&
'.

1.
-
1.
!%.
!"#"$%&%#'()&*$"&*+,'
353











!
!
"#$%&'!()*!!Density Iunction oI - under .
1
.
In Figure 3.31 oI |1|, it is shown that the generalized likelihood ratio test perIorms
nearly as well as the Neyman-Pearson test.


()+! ,-./!0123/124!"-1!5--6!/,32.43-1,!
!
Since the estimator
`
is a random variable and may assume more than one value,
some characteristics oI a 'good estimate need to be determined.

/,0*")%1'()&*$"&% We say
`
is an unbiased estimator Ior T iI
|
`
| ( Ior all T (6.5)
2*")'+3'()&*$"&+# Let
) ( |
`
| 0 ( (6.6)
1. II ) ( 0 does not depend on | ) ( | 0 0 T T , we say that the estimator
`
has a
4,+5,'0*"s. That is, 0
`
is an unbiased estimate.

2. When 0 0 z ) ( , an unbiased estimate cannot be obtained, since T is unknown.
In this case, we say that the estimator has an 6,4,+5,'0*").

When the parameter T to be estimated satisIies (6.5) and is not random (i.e.,
there is no a priori probability distribution Ior T), it is sometimes reIerred to as
"0)+76&%78'6,0*")%1.
!$' 0
) , (
1 ,
1
. 3
. -
!
!!
9'
!!
9'
J
1'
-J
1'
:'
Signal Detection and Estimation 354
The Iact that the estimator is unbiased, which means that the average value oI
the estimate is close to the true value, does not necessarily guarantee that the
estimator is 'good. This is easily seen by the conditional density Iunction oI the
estimator shown in Figure 6.4. We observe that even though the estimate is
unbiased, sizable errors are likely to occur, since the variance oI the estimate is
large. However, iI the variance is small, the variability oI the estimator about its
expected value is also small. Consequently, the variability oI the estimator is close
to the true value, since the estimate is unbiased, which is a desired Ieature. Hence,
we say that the second measure oI quality oI the estimate is to have a small
variance.

Unbiased Minimum Jariance
`
is a minimum variance and unbiased (MVU)
estimate oI u iI, Ior all estimates ' such that | | = ' E , we have | var| |
`
var| ' s
Ior all . u' That is,
`
has the smallest variance among all unbiased estimates oI u.

Consistent Estimate
`
is a consistent estimate oI the parameter u, based on K
observed samples, iI
( ) 0 all Ior 0
`
lim > c = c > u u

P
K
(6.7)
where ( ) P denotes probability.
Applying the above deIinition to veriIy the consistency oI an estimate is not
simple. The Iollowing theorem is used instead.

Theorem. Let
`
be an unbiased estimator oI u based on K observed samples. II
|
`
| lim =

E
K
(6.8)





!"#$%&'()* Density Iunction oI the unbiased estimator u
`
.
u
| var| |
`
var| ' s
)
`
(

`
f
!"#"$%&%#'()&*$"&*+,'
355
and iI
0 |
`
| var lim =

(
-
(6.9)
then
`
is a consistent estimator oI u.

!"#$%&'()*+(
(
(a) VeriIy iI the estimator
$.
$
`
oI Example 6.1 is an unbiased estimate oI $.
(b) Is the estimator
2

`
$.
unbiased?

/+.0&*+,

(a) The estimator
$.
$
`
is unbiased iI $ $ (
$.
= |
`
| . AIter substitution, we obtain
$ -$
-
1 (
-
1
-
( $ (
-
2
-
-
2
- $.
= =
(

=
(

=

= =
1 1 1
|
`
|
1 1

Hence,
$.
$
`
is unbiased.
(b) The estimator
2

`
$.
is unbiased iI . |
`
|
2 2
o = o
$.
( That is,
( )
2
1 1
2 2 2
1
2
1 1
=
(

+ =
(


= = =
-
2
-
2
2 2 2
-
2
1 $ 1 -$ (
-
$ 1
-
(
Hence,
2

`
$.
is unbiased.




)*,! -./!01(!0234.2356(
(
In the Bayes` estimation, we assign a cost )
`
, ( u u ! to all pairs )
`
, ( u u . The cost is a
nonnegative real value Iunction oI the two random variables u and
`
. As in the
Bayes` detection, the risk Iunction is deIined to be the average value oI the cost;
that is,
| )
`
, ( | u u = 9 ! ( (6.10)
Signal Detection and Estimation 356
The goal is to minimize the risk Iunction in order to obtain
`
, which is the
optimum estimate. In many problems, only the error
~
between the estimate and
the true value is oI interest; that is,

`

~
(6.11)
Consequently, we will only consider costs which are a Iunction oI the error. Three
cases will be studied, and their corresponding sketches are shown in Figure 6.5.

1. Squared error

2
)
`
( )
`
, ( ! (6.12)
2. Absolute value oI error
T T T T
`
)
`
, ( ! (6.13)
3. UniIorm cost Iunction

T T
t T T
T T
2

`
, 0
2

`
, 1
)
`
, (
!
! (6.14)
The unknown parameter is assumed to be a continuous random variable with
density Iunction ) (

! . The risk Iunction can then be expressed as











(a) (b) (c)
!"#$%&'()* Cost Iunctions: (a) squared error, (b) absolute value oI error, and (c) uniIorm.

) ,
`
( ! ! ) ,
`
( ! ! ) ,
`
( ! !
T T
T
2

!
2

!
1
!"#"$%&%#'()&*$"&*+,'
357


f
f
f
f
T T T T T T ! !
"
- - . ( ) ( )
`
, ( )|
`
, ( |
,
! ! ! (6.15)
Note that we take the cost average over all possible values oI T and ", where " is
the vector > @ .
2 1
/
0
1 1 1 We now Iind the estimator Ior the three cost
Iunctions considered.

"#$#%! &'(')*)+&,-(./0*-1,+21131+245')-5,+

The estimator that minimizes the risk Iunction Ior the cost given in (6.12) is
reIerred to as a minimum mean-square estimate (MMSE). The corresponding risk
Iunction is denoted by
$)
. We have


f
f
f
f
f
f
f
f
T T T T T T T T T ! ! ! !
" "
- - . - - - .
$)
) ( )
`
( ) ( )
`
(
,
2
,
2
! ! (6.16)
Using (1.91), the risk Iunction can be rewritten as

f
f
f
f

T T T T - . . -
$)
) ( )
`
( ) (
,
2
! ! !
" "
6 (6.17)
Since the density Iunction ) ( !
"
. is nonnegative, minimizing
$)
is equivalent
to minimizing the expression in brackets oI the above equation. Hence, taking the
derivative with respect to
`
and setting it equal to zero, we have
0 ) ( )
`
(
`
,
2
T T T T
T

f
f
- .
-
-
!
"
6 (6.18)
Using Leibniz`s rule given in (1.38), we obtain
| , | ) (
`
,
! !
"
( - .
$)

f
f
6 (6.19)
That is, the minimum mean-square estimate
$)

`
represents the conditional mean
oI T given ". It can easily be shown that the second derivative with respect to
$)

`

is positive-deIinite, which corresponds to a unique minimum oI ,
$)
and is given
by
Signal Detection and Estimation 358
} }


u u u u = 9 ! " " !
#$ #$
) ( )
`
( ) (
,
2
! ! !
" "
[
{ }
} }


u u u u = ! " % " ! ) ( | , | ) (
,
2
! ! ! !
" "
[ (6.20)
The conditional variance oI u given " is
{ }
}


u u u u = u ! " % ) ( | , | | , var|
,
2
! ! !
"
[ (6.21)
Hence,
#$
9 is just the conditional variance oI u given ", averaged over all
possible values oI ". This estimation procedure using the squared error criterion is
sometimes reIerred to as a minimum variance (MV) oI error estimation.

6.5.2! Minimum Mean Absolute Value of Error Estimate

In this case, the cost Iunction is given by (6.13), and the risk is
! !
"
! ! "
&'$
u u u u = 9
} }


) , (
`
,
! ! !
" "
! ! " "
} }


(
(

u u u u = ) (
`
) (
,
[ (6.22)
Using the same arguments as in Section 6.5.1, the risk can be minimized by
minimizing the integral in brackets, which is given by
} }

`
,

`
,
) ( )
`
( ) ( )
`
( ! " ! " ! !
" "
[ [ (6.23)
DiIIerentiating (6.23) with respect to
`
, and setting the result equal to zero, we
obtain

} }


=
&'$
&'$
! " ! "

`
,

`
,
) ( ) ( ! !
" "
[ [ (6.24)
That is, the estimate
&'$

`
is just the #(!)&* oI the conditional density Iunction
) (
,
!
"
[ " . This estimate is also known as the minimum mean absolute value oI+
error (MAVE) estimate, and thus
#&,( &'$

`

`
.
!"#"$%&%#'()&*$"&*+,'
359
!"#"$! %&'()*)+,+-./012(.2(+3/0()&01+
+
For the uniIorm cost Iunction given by (6.14), the Bayes` risk becomes
! ! ! !
" " "
- - . - . .
/,.
) ) ( ) (
2

`
,
2

`
, } } }

+ u
u

(
(
(
(

u u + u u = 9 4 4
! ! !
" "
- - . .
/,.
) ( 1 ) (
2

`
2

`
,
} }


+ u
u
(
(
(
(

u u = 9 4 (6.25)
where
(

+ u s s u = u u
}
+ u
u
! !
"
,
2

`
) (
2

`
2

`
,
! - . 4 (6.26)
| | ! denotes probability. Hence, the risk
/,.
9 is minimized by maximizing
(6.26). Note that in maximizing (6.26) (minimizing
/,.
9 ), we are searching Ior
the estimate
`
, which minimizes ) (
,
!
"
4 . . This is called the maximum a'
posteriori estimate (MAP),
$"0

`
, which is deIined as
0
) (
`
,
=
u c
u c
u = u
$"0
. !
"
4
(6.27)
Using the logarithm, which is a monotonically increasing Iunction, (6.27) becomes
0

) ( ln
,
=
c
c !
"
4 .
(6.28)
Equation (6.28) is called the 12! %3/"&*+,. This is a necessary but not suIIicient
condition, since ) (
,
!
"
4 . may have several local maxima. Using the Bayes` rule

) (
) ( ) (
) (
,
,
!
# !
!
"
"
"
.
. .
.
u u
= u 4
(6.29)
Signal Detection and Estimation 360
and the Iact that
) ( ln ) ( ln ) ( ln ) ( ln
, ,
! " ! !
# # #
! ! ! ! u + u = u ! (6.30)
then the MAP equation may be rewritten as

0
) ( ln
) ( ln ) ( ln

, ,
=
u c
u c
+
u c
u c
=
u c
u c
!
! ! " ! !
# #
!
(6.31)
We always assume that A is suIIiciently small, so that the estimate
"#$

`
is given
by the MAP equation. That is, the cost Iunction shown in Figure 6.5 may be
deIined as
)
`
, ( 1 ) ,
`
( = % (6.32)
"#$%&'()*+,)

Consider the problem where the observed samples are
& ' ( ) *
' '
, ... , 2 , 1 , = + =
)+and (
'
are statistically independent Gaussian random variables with zero mean
and variance
2
. Find
",
"
`
,
"#$
"
` , and
"#-.
"
`
.


/0123405+

From (6.19), the estimate
",
"
`
is the conditional mean oI " given #. The density
Iunction ) , (
,
!
#
" !
)
is expressed as
) (
) ( ) , (
) , (
,
,
!
!
!
#
#
#
!
" ! " !
" !
) )
)
=
where
|
|
.
|

\
|
o

o t
=
2
2
2
exp
2
1
) (
"
" !
)
,
[
= (
(

=
&
'
'
)
" 6
" !
1
2
2
,
2
) (
exp
2
1
) , ( !
#

and the marginal density Iunction ) ( !
#
! is
Parameter Estimation
361
} }


= = dm m f m f dm m f f
M M M
) ( ) , ( ) , ( ) (
, ,
! ! !
" " "

Note that ) , (
,
!
"
m f
M
is a Iunction oI m, but that ) ( !
"
f is a constant with ! as a
parameter needed to maintain the area under the conditional density Iunction equal
to one. That is,
( )

+
o

o t
=

=
+
K
k
k
K
M
m m v
f
m f
1
2 2
2
1
,
2
1
exp
) (
) 2 ( 1
) , (
!
!
"
"

Expanding the exponent, we have
( )
( )



= = =
= =
= = =
+
|
|
.
|

\
|
+

|
|
.
|

\
|
+
+ =
+
(

+
+ =
+ + = + +
K
k
k
K
k
k
K
k
k
K
k
k
K
k
k
K
k
k
K
k
k
K
k
k k
v v
K
v
K
m K
v v
K
m
m K
v v m K m m m m v v
1
2
2
1
2
1
1
2
1
2
1
2
1 1
2 2 2 2
1
1
1
1
1
1
2
1
2 ) 1 ( ) 2 (

The last two terms in the exponent do not involve m, and can be absorbed in the
multiplicative constant to obtain
(
(

|
|
.
|

\
|
+

o
=

=
2
1
2
2
,
1
1
2
1
exp ) ( ) , (
K
k
k
m
M
v
K
m c m f ! !
"

where . 1 + o = o K
m
By inspection, the conditional mean is
| |

=
+
= =
K
k
k ms
v
K
M E m
1
1
1
,
`
!
According to (6.20),
ms
9 is given by
}


= 9 ! ! !
"
d f M
ms
) ( | , var|
Signal Detection and Estimation 362
Hence, since 1 ) ( =
}


! !
"
! " , then
2 2
) (
# # #$
! " = = 9
}


! !
"
.
The% MAP estimate is obtained using (6.28) and (6.29). Taking the logarithm
oI ) , (
,
!
"
# "
&
, we have
2
1
2
,
1
1

1
) ( ln ) , ( ln
|
|
.
|

\
|
+
=

=
'
(
(
#
&
)
'
# * # " ! !
"

ThereIore,
0
1
1 1
) , ( ln
1
2
,
=
|
|
.
|

\
|
+

o
=
c
c

=
'
(
(
#
&
)
'
#
#
# " !
"

=
+
=
'
(
( #+,
)
'
#
1
1
1
`

That is, .
` `
#$ #+,
# # = We could have obtained this result directly by inspection,
since we have shown that ) , (
,
!
"
# "
&
is Gaussian. Consequently, the maximum
oI ) , (
,
!
"
# "
&
occurs at its mean value.
Using the Iact that the Gaussian density Iunction is symmetric, and that
#+-.
#
`

is the median oI the conditional density Iunction ) , (
,
!
"
# "
&
, we conclude

=
+
= = =
'
(
( #+, #$ #+-.
)
'
# # #
1
1
1
` ` `

From (6.31), iI u is assumed to be random with 0 ) (

= " Ior < < , then


the ML estimate can then be considered to be a special case oI the MAP estimate.
Such a density Iunction Ior u connotes zero a priori inIormation about u.
Furthermore, the MAP estimate oI a Gaussian distributed parameter is equivalent
to the ML estimate as the variance increases; that is, the distribution oI the
parameter to be estimated tends to be uniIorm. In general, Ior a symmetric
distribution centered at the maximum, as shown Figure 6.6(a), the mean, mode,
and median are identical. II the distribution oI the parameter is uniIorm, then the
MAP, the MMSE, and the MAVE estimates are identical. In Figure 6.6(b), we
illustrate the diIIerent estimates when the density Iunction is not symmetric. Recall
that the #.!/+0%is the value oI ) Ior which 2 / 1 ) ( ) ( = > = s ) 1 2 ) 1 2 , while the
#3!. is the value that has the greatest probability oI occurring.

!"#"$%&%#'()&*$"&*+,'
363

''''''''''''''''(a) (b)
Figure 6.6 Density Iunctions showing relations to MAP, MAVE, and MMSE estimates:
(a) symmetric pdI, and (b) nonsymmetric pdI. (-#+$: |2|. 2000 John Wiley and Sons, Inc.
Reprinted with permission.)
Example 6.5
`
Find ,
`
$)
. the minimum mean-square error, and
$"/
.
` , the maximum a posteriori
estimators, oI 0 Irom the observation . 1 0 2 + = 0 and 1 are random variables
with density Iunctions
) 1 (
2
1
) (
2
1
) ( + = . . . 3
0
and

>
s
= =

0 ,
2
1
0 ,
2
1
2
1
) (
, %
, %
% , 3
,
,
,
1

4+56&*+,'
'
The estimate
$"/
.
` maximizes the density Iunction ). , (
,
7 . 3
2 0
Since the
conditional probability density Iunction is
. ,
0 2
% 0 7 3

= ) 2 / 1 ( ) , (
,
, the
probability density Iunction oI 2 is
( ) ( ) 8. . . % 8. . 3 . 7 3 7 3
. ,
0 0 2 2
| 1 |
4
1
) ( ) , ( ) (
,
o + o = =
} }



!!!!! { }
( )
( )
( )

> +
< s +
< +
= + =
+


1 ,
4
1
1 0 ,
4
1
0 ,
4
1
4
1
1
1
1
1
7 % %
7 % %
7 % %
% %
, ,
, ,
, ,
, ,

mean, mode,
median
MAVE
MMSE
) , (
,
7 3
2
!
) , (
,
7 3
2
!
MAP
ML
MAP
MAVE
MMSE
mode median mean
!u u
Signal Detection and Estimation 364
The a posteriori density Iunction is, Irom (6.29), given by
1
,
,
)| 1 ( ) ( |
) (
) ( ) , (
) , (


+
o + o
= =
! !
" !
#
$ $ #
# $
% %
" " %
& '
" ' " & '
& " '
) , (
,
& " '
# $
is zero except when 0 = " and 1 = " . The above expression is
maximized when " ! is minimized. Since " can take only two values, but must
be close to !, we have

<
>
=
2
1
Ior 0
2
1
Ior 1
`
!
!
"
()*

The mean-square error estimate is the mean oI the+ a posteriori+ density
Iunction as given by (6.19). Hence,
} }

+
o + o
= = ,"
% %
" " %
" ," & " "' "
! !
" !
# $ (-
1
,
)| 1 ( ) ( |
) , (
`

Since
}


= ), ( ) ( ) (
0 0
. / ,. . / . . the mean-square estimate is
1
1
`


+
=
! !
!
(-
% %
%
"
and we see that
()*
"
` is not identical to .
`
(-
"


!"!! #$%&'$($%)*+,'-.%/+01*
*
From the MAP equation oI (6.31), iI we set the density Iunction oI u to zero, Ior
all u, we obtain the likelihood equation oI (6.3). That is, the ML estimate can be
considered as a special case oI the MAP estimate. In this case, to check whether
the estimate is 'good, we need to compute its bias and error variance and
determine its consistency. It may be very diIIicult to obtain an expression Ior the
error variance. In this case, the 'goodness oI the estimator is studied in terms oI a
lower bound on the error variance. This bound is known as the 01)(%123)4+
!"#"$%&%#'()&*$"&*+,'
365
-+.,/. The Cramer-Rao bound oI a constant parameter is given by the Iollowing
theorem.

01%+#%$. Let the vector
0
2
3 3 3 | , ... , , |
2 1
! represent 2 observations, and
`
be
the unbiased estimator oI T. Then

T w
T w
t T T T
2
,
) , ( ln
1
| , )
`
( var|
"
!
4
(
(6.33)
where

T w
T w

T w
T w
2
,
2
2
,
) , ( ln ) , ( ln " "
! !
4
(
4
( (6.34)
!#++4. For an unbiased estimator
`
, we have
| ,
`
| ( (6.35)
ThereIore,
0 ) , ( )
`
( | , )
`
( |
,
T T T T T T

f
f
" "
!
/ 4 ( (6.36)
DiIIerentiating (6.36) with respect to T, we obtain
" " "
"
!
!
/ 4 /
4

f
f
f
f
T
T w
T w
T T ) , (
) , (
)
`
(
,
,
(6.37)
The second integral is equal to one. Using the Iact that
5
5 6
5 6 5
5 6
w
w

w
w ) (
) (
1 ) ( ln
(6.38)
where ) (5 6 is a Iunction oI 5,'we can express / ) , (
,
w w "
!
4 as
Signal Detection and Estimation 366

) , ( ln
) , (

) , (
,
,
,
w
w

w
w !
!
!
"
"
"
!
!
!
(6.39)
Substituting (6.39) into (6.37), we obtain

f
f

T w
T w
T T T 1
) , ( ln
) , ( )
`
(
,
,
!
!
!
"
"
"
!
! (6.40)
The Schwarz inequality states that
2
2 2
) ( ) ( ) ( ) (


f
f
f
f
f
f
"# # $ # % "# # $ "# # % (6.41)
where ) ( and ) ( # $ # % are two Iunctions oI #. Equality holds iI and only iI
) ( ) ( # &% # $ , with &' a constant. Rewriting (6.39) in order to use the Schwarz
inequality, we have
1 | ) , ( )
`
( | ) , (
) , ( ln
, ,
,
T T T

T
T w
T w

f
f
! ! !
!
" "
"
" ! !
!
(6.42)
or
1 ) , (
) , ( ln
) , ( )
`
(
,
2
,
,
2
t

T w
T w

T T T

f
f
f
f
! !
!
! !
"
"
"
" !
!
" ! (6.43)
The Iirst integral between brackets is actually |. , )
`
var|( T T T Hence, the
inequality becomes

T w
T w
t T T T
2
,
) , ( ln
1
| , )
`
var|(
$ !
(
)
(6.44)
which proves (6.33).
We now prove (6.34), which says that the Cramer-Rao bound can be
expressed in a diIIerent Iorm. We know that
Parameter Estimation
367

f
f
1 ) , (
,
! !
"
d f (6.45)
DiIIerentiating both sides oI the equation with respect to T results in

f
f

w
w
0

) , (
,
!
!
"
d
f
(6.46)
Rewriting (6.46) and using (6.38), we have

f
f

w
w
0 ) , (

) , ( ln
,
,
! !
!
"
"
d f
f
(6.47)
DiIIerentiating again with respect to T, we obtain

f
f
f
f

w
w
w
w

w
w
0

) , (

) , ( ln
) , (

) , ( ln
, ,
,
2
,
2
! !
! !
!
" "
"
"
f f
d f
f
(6.48)
Substituting (6.47) Ior the second term oI the second integral oI (6.48), and
rearranging terms yields

T w
T w

T w
T w
2
,
2
,
2
) , ( ln ) , ( ln ! !
" "
f
E
f
E (6.49)
which is the same as (6.34), and the prooI oI the theorem is complete.
An important observation about (6.43) is that equality holds iI and only iI
|
`
)| (

) , ( ln
,

w
w
c
f !
"
(6.50)
Any unbiased estimator that satisIies the equality in the Cramer-Rao inequality oI
(6.33) is said to be an efficient estimator.
II an eIIicient estimator exists, it can easily be shown that it equals the ML
estimate. The ML equation is given by
0

) , ( ln

w
w

ml
f !
"
(6.51)
Signal Detection and Estimation 368
Using (6.50), provided that an eIIicient estimate exists, we have

!"
!"
#
$

,
|
`
)| (

) , ( ln
=
=
=
c
c !
"
(6.52)
which equals zero when .
` `
!"
u = u

!"#$%&'()*)

Consider % observations, such that
% & ' ! (
& &
, , 2 , 1 , = + =
where ! is unknown and '
&
s are statistically independent zero mean Gaussian
random variables with unknown variance .
2
o
(a) Find the estimates
2

`
and
`
! Ior
2
and ! , respectively.
(b) Is !
`
an eIIicient estimator?
(c) Find the conditional variance oI the error |. , )
`
var|( ! ! !

)*"+,-*.

(a) Using (6.2), we can determine !
`
! and!
2
`
o simultaneously. The conditional
density Iunction oI " given
2
and o ! is
( )
[
= (
(

=
%
&
&
! /
! $
1
2
2
2
2
exp
2
1
) , , ( !
"

Taking the logarithm, we have
( )
( )

=
%
&
&
! / %
! $
1
2
2
2 2
2
2 ln
2
) , , ( ln !
"

We take the derivative oI the above equation with respect to
2
and o ! to obtain
two equations in two unknowns. That is,
0
2
2
) , , ( ln
1
2
2
=

=
c
c

=
%
&
&
! /
!
! $ !
"

Parameter Estimation
369
and
( )

=
=

+ =
c
c
K
k
k
m v K m f
1
4
2
2 2
2
0
2 2
) , , ( ln !
"

Solving Ior
ml
m
`
and
2
`
ml
o simultaneously, we obtain

=
=
K
k
k ml
v
K
m
1
1
`

and
( )

= = =
=
|
|
.
|

\
|
=
K
k
ml k
K
k
K
k
k k ml
m v
K
v
K
v
K
1
2
1
2
1
2
`
1 1 1

`

(b)
ml
m
`
is an unbiased estimator since
m v E
K
m E
K
k
k ml
=
(

=1
1
|
`
|
To check iI the estimator is eIIicient, we use (6.50) to obtain

= =
|
|
.
|

\
|

o
=
o

=
c
o c
K
k
K
k
k
k
m v
K
K m v
m
m f
1 1
2 2
2
1 ) , , ( ln !
"

where .
`
) / 1 (
`
and / ) (
1
2

=
= = o =
K
k
ml k
m v K m K m c Hence, the estimator is
eIIicient.

(c) To determine the conditional variance oI error, we use (6.33) and (6.34).
Taking the derivative oI the likelihood equation with respect to m, we obtain
2 2
2 2

) , , ( ln K
m
m f
=
c
c !
"

Hence,
Signal Detection and Estimation 370
K
m
m f
E
m m m
2
2
2 2

) , , ( ln
1
| , )
`
var|(

w
w

!
"
!
Cramer-Rao Inequalitv for a Random Parameter

We suppose that T is a random parameter, such that the joint density Iunction
) , (
,
!
"
f oI the observation vector " and the parameter T are known. Then,

T
T w
w
t T T
2
,
2
) , ( ln
1
| )
`
var|(
!
"
f E
(6.53)
where

T
T w
w

T
T w
w
) , ( ln ) , ( ln
,
2
2
2
,
! !
" "
f E f E (6.54)
Equality oI (6.53) holds iI and only iI
!!! )
`
( ) , ( ln
,
T T T
T w
w
c f !
"
! (6.55)
where c is independent oI " and T. Furthermore, the lower bound oI (6.53) is
achieved with equality iI and iI ) , (
,
!
"
f is Gaussian.
It also can be shown that iI the lower bound on the nonrandom parameter oI
(6.34) is denoted J and iI the lower bound on the random parameter oI (6.54) is
denoted L, then

w
w

2

) ( ln f
E J L (6.56)
Next, we present the generalization oI the Cramer-Rao bound Ior a vector
parameter on multiple parameter estimation Ior both random and nonrandom
parameters.



!"#"$%&%#'()&*$"&*+,'
371
!"#! $%&'()&*+),-,$*'*-+*.'($,'(/0+
+
In many radar and communication applications, it may be necessary to examine
several parameters simultaneously. For example, in a radar application, a problem
may be to estimate the range and velocity oI a target; while in a communication
application, the problem may be to estimate the amplitude, arrival time, and a
carrier Irequency oI a received signal. ThereIore, we can now extend the parameter
estimation concepts to multiple parameters. The vector to be estimated may be
random (in this case we use the Bayes` estimation) or nonrandom (in this case we
use the maximum likelihood estimation).

!"#"1! T+023453627+
+
In this case, the vector T is
> @
-
.
T T T
2 1
(6.57)
Then, (6.3) becomes the Iollowing set oI simultaneous likelihood equations
0 ) , , , , , , , ( ln
2 1 2 1
1
T T T
T w
w
. .
/ / / 0
! "

0 ) , , , , , , , ( ln
2 1 2 1
2
T T T
T w
w
. .
/ / / 0
! "

0 ) , , , , , , , ( ln
2 1 2 1
T T T
T w
w
. .
.
/ / / 0
! "
(6.58)
In order to write (6.58) in a more compact Iorm, we deIine the partial derivative
column vector by
-
.

T w
w
T w
w
T w
w

2 1

(6.59)
This operation is generally applied to row vectors only. That is, iI
| |
2 1 ,
-
1 1 1 # , then
Signal Detection and Estimation 372
(
(
(
(
(
(
(
(
(
(

u c
c
u c
c
u c
c
u c
c
u c
c
u c
c
u c
c
u c
c
u c
c
=
(
(
(
(
(
(
(
(
(

u c
c
u c
c
u c
c
= V
!
"
! !
"
"
"
!
#
$ $ $
$ $ $
$ $ $
$ $ $

2 1
2 2
2
2
1
1 1
2
1
1
2 1
2
1
| | !


The ML equation is then
!

= V
=
O
) (
`
,
)| , ( |ln
"
#
"
%&
' (6.60)
We saw in Section 6.4 that a measure oI quality oI the estimate is the bias. The
conditional mean oI the estimate given by (6.6) becomes
) ( | , ) (
`
| $ " + = ( (6.61)
II the bias vector ! = ) ( $ , that is, each component oI the bias vector is zero Ior
any u, then the estimate is said to be unbiased. We note that
u u u u u u u = = = )| (
`
| )| (
~
| | , ) ) (
~
|( ) ( " " " $ ( ( ( (6.62)
A second measure oI quality oI the estimate is the conditional variance oI the
error. For multiple parameters, the corresponding conditional covariance matrix oI
the error is
| , )
~ ~
( )
~ ~
|(
~

#
)
#
)
( = % (6.63)
where
)

~
is the bias vector given by
) ( | , ) (
~
|
~
$ " = = (
)
(6.64)
Note that %
~
is a ! ! matrix. The *+th element is
| , )
~
( )
~
|(
~

)+ + )* * *+
( u u u u = % (6.65)
while the,*th diagonal element is the conditional variance given by
Parameter Estimation
373
| , ) ) (
`
var|( | ,
~
var|
~
|
~
var|
i i i ii i
C T T T V ! (6.66)
Cramer-Rao Bound

The extension oI the Cramer-Rao bound is given by the Iollowing theorem.

Theorem. II
`
is any absolutely unbiased estimator oI based on the observation
vector ", then the covariance oI the error in the estimator is bounded by the
inverse, assuming it exists, oI the Fisher information matrix #.
1
| , )
`
)(
`
|(

t #
T
E (6.67)
where

w
w

w
w

w
w

, ) , ( ln ) , ( ln ) , ( ln
,
2
2
, ,
" " " #
" " "
f E f f E
T

(6.68)
1
# is the inverse matrix oI the Fisher inIormation matrix. Equality holds only iI
> @ T T T T
T
`
) ( ) , y ( ln
, Y

w
w
c f
T

(6.69)
The derivatives
2
,
2
,
/ ) , ( and / ) , ( T T T T w w w w ! !
" "
f f are assumed to exist and
to be absolutely integrable. The Fisher inIormation matrix is deIined as
> @ ^ ` > @ ^ ` > @ T T T
T T
, ) , y ( ln ) , y ( ln J
, Y , Y
T
f f E

(6.70)
which can also be rewritten as
> @ ^ ` > @ T T
T T
, ) , y ( ln J
, Y
T
f E

(6.71)
For simplicity, we give the conditional variance on the error
K i
i i i
, , 2 , 1 ,
`

~
, which is bounded by the inequality
Signal Detection and Estimation 374
> @ > @
!!
! ! !
"
!
t V , ) ) y (
`
( var ,
~
var
2

~ (6.72)
!!
" is the !th diagonal element in the # # u square matrix
1
! . The$!%th element
oI ! in (6.70) is given by

w
w
w
w



,

) , ( ln

) , ( ln
, ,
% !
!%
& ' & '
( "
" "
(6.73)
whereas the !%th element oI (6.71) is given by

w w
w

,

) , ( ln
,
2
% !
!%
& '
( "
"
(6.74)
)*++'.$ One way to prove the above theorem without resorting to excessive matrix
operation is the Iollowing. Since the estimations are unbiased (the expected value
oI each estimator is the true value), we can write

! ! !
, ' ( ) , ( ) (
`
| , ) (
`
|
,

f
f
# # # #
"


(6.75)
or
0 ) , ( | ) (
`
|
,

f
f
# # #
"
, '
! !

(6.76)
DiIIerentiating both sides oI (6.76) with respect to
%
, we have

%
!
%
!
,
'

) , ( ln
) (
`
,
w
w

w
w

f
f
#
#
#
"

(6.77)
Using (6.38) Ior the integral, and the Iact that
$
T w T w /
!
is the Kronecker
!%

(unity Ior % ! , and zero otherwise), (6.77) can be rewritten as
!%
%
!
,
'
'

) , ( ln
) , ( ) (
`
,
,

w
w

f
f
#
#
# #
"
"

(6.78)
!"#"$%&%#'()&*$"&*+,'
375
Consider the case when 1 - , and deIine the 1 . dimensional vector ! (.
is the number oI parameters to be estimated) as

T w
w
T w
w
T w
w
T T

.
/
/
/
) , ( ln
) , ( ln
) , ( ln
) (
`
,
2
,
1
,
1 1

"
"
"
"
!
#
#
#

(6.79)
Note that the mean values oI the components oI ! are all zero. The Iirst term is
zero because the estimate is unbiased, while the other terms are zero in light oI
(6.35), which can be written as
0

) , ( ln
) , (

) , ( ln
,
,
,

w
w

w
w

f
f

"
" "
"
#
#
#
/
( 0 /
/
(6.80)
The covariance matrix oI ! is then

V

T
.. . .
.
.
1
2 2 2
2 2 2
2 2 2
(

2 1
2 22 21
1 12 11
2
~
0
0
1
0 0 1
| |
1
!! $
!!
(6.81)
or in partitioned Iorm,

T
0
0
1
0 0 1
2
~
1

%%&
%%%%%
$
!!
(6.82)
Signal Detection and Estimation 376
Since the covariance matrix is nonnegative deIinite, and consequently its
determinant is nonnegative deIinite, the determinant oI (6.81) is given by

(
(
(
(
(

o =
u
KK K K
K
K
J J J
J J J
J J J
i

3 2
2 23 22
1 13 12
2
~
0
0
1
) det( ! "
##
(6.83)
From (4.30), we observe that (6.83) can be written in terms oI the coIactor J
11
.
Hence,
( )
11
2
~
3 2
3 33 32
2 23 22
2
~ coIactor det J
J J J
J J J
J J J
i i
KK K K
K
K
o =
(
(
(
(
(

o =
u u
! ! "
##

(6.84)
Assuming that the Fisher matrix ! is nonsingular, we have
0 coIactor | |
11
2

~ > = = J E
i
T
! ## "
##
(6.85)
or

ii
J
J
i
= >
!
11 2

~
coIactor
(6.86)
which is the desired result given in (6.72).

!"#"$! %&'()*+%,-./*0%
%
In the Bayes` estimation, we minimize the cost Iunction )| (
`
, | $ C . Consider now
the extension oI the mean-square error criterion and the MAP criterion Ior multiple
parameters estimation.

Mean-Square Estimation

In this case, the cost Iunction is the sum oI the squares oI the error samples given
by
Parameter Estimation
377
) (
~ ~
) (
~
)| ( ) (
`
| | ) (
`
| )| (
~
|
1
2 2
1
! ! ! ! ! !
T
K
i
i i
K
i
i
c


! (6.87)
The risk is


f
f
f
f


d d f
ms
! ! !
"
) , ( )| (
~
|
,
! (6.88)
Substituting (6.87) in (6.88) and using the Bayes` rule, the risk becomes

f
f
f
f

d [ f d f
[
K
i
i i ms
) ( | ) (
`
| ) (
1
2
! ! ! !
" "
(6.89)
As beIore, minimizing the risk is equivalent to minimizing the expression in the
brackets oI (6.89). Each term between the brackets is positive, and thus the
minimization is done term-by-term. From (6.19), the ith term ) (
`
!
i
is minimized
Ior

f
f
!
"
d f
i msi
) , ( ) (
`
,
! (6.90)
In vector Iorm, the MMSE is given by

f
f
!
"
d f E
ms
) , ( | , |
`
,
! (6.91)
It can be shown that the mean-square estimation commutes over a linear
transformation to yield
) (
`
) (
`
! # !
ms ms
T I (6.92)
where # is an K Lu matrix.

MAP Estimation

From (6.28), the MAP estimate
map

`
is obtained by minimizing ) , (
,
!
"

f .
Generalizing the result to the estimation oI multiple parameters estimation, we
obtain the Iollowing set oI MAP equations:
Signal Detection and Estimation 378
K i
f
map
i
, , 2 , 1 , 0
) , ( ln
) (
`
,
= =
u c
c
= !
"
!


(6.93)
Using (6.59), the MAP equation can be written in a single vector to be
) (
`
,
)| , ( |ln
!
"
!
map
f
u = u
u
u V (6.94)
Cramer-Rao Bound

The covariance matrix oI the error oI any unbiased estimator
`
oI is bounded
below by the inverse oI the Fisher inIormation matrix, #, and is given by

#
1
> | )
`
)(
`
|(
T
E u u u u (6.95)
where

(
(

c
c
= ) , ( ln
,
2
2


! #
"
f E (6.96)
Note that the equality holds iI and only iI
)
`
( ) , ( ln
,


=
(

c
c
c f
T
!
"
(6.97)
where c is independent oI " and . II the conditional density Iunction
) , (
,

!
"
f is Gaussian, the lower bound oI (6.95) is achieved with equality.
The inIormation matrix # can be written in terms oI $ as
(
(

c
c
= ) ( ln
2
2


f E $ # (6.98)


!"#! $%&'()*+%,-(.+$*,&%/(%&'*0,'1-(

In many practical problems, it may be not possible to determine the MMSE
estimators oI a random or an unknown parameter, even iI it exists. For example,
we do not know the probability density Iunction oI the data, but we know the Iirst-
Parameter Estimation
379
order and second-order moments oI it. In this case, the methods developed in
estimating the parameters and determining the Cramer-Rao lower bound cannot be
applied. However, we still would like to obtain a reasonable (suboptimum) or
'best estimator, in the sense that it is unbiased and has a minimum variance,
usually called MVU estimator. To do so, we limit the estimator to be a linear
function oI the data, and thus it becomes possible to obtain an explicit expression
Ior the best linear unbiased estimator (BLUE).
We Iirst give the one parameter linear minimum mean-square estimation to
present the Iundamental concepts, and then generalize them to multiple
parameters.

!"#"$! %&'()*+*,'-'+(./&'*+(0'*&1234*+'(56-/,*-/7&(
(
The linear minimum-square estimate oI a random parameter is given by
b aY
lms

`
(6.99)
The corresponding risk Iunction is

f
f
f
f
dv d v f C E
Y lms
) , ( )
`
( )|
`
, ( |
,
2



f
f
f
f
dv d v f b av
Y
) , ( ) (
,
2
(6.100)
Following the same procedure as we did in Section 6.5.1, we observe that
minimizing the risk involves Iinding the constants a and b, so that
lms
is
minimum. Hence, taking the derivatives oI
lms
with respect to a and b and
setting them equal to zero, we have
0 ) , ( ) (
,


f
f
f
f
dv d v vf b av
Y
(6.101)
and
0 ) , ( ) (
,


f
f
f
f
! !
"
d d f b av (6.102)
Using (1.45) and (1.108), (6.101) and (6.102) can be rewritten as
| | | | | |
2
Y E Y bE Y aE (6.103)
Signal Detection and Estimation 380
and
| | | | T ! " # $! (6.104)
We have two equations in two unknowns. Solving Ior $ and ", we obtain

| | | |
| | | | | |
2 2
# ! # !
# ! ! # !
$

(6.105)
and

| | | |
| | | | | |
| | | |
2 2
# ! # !
# ! ! # !
# ! ! "

(6.106)
Knowing that the correlation coeIIicient
#
is given by
%
%
%
& # & !
V V
T
U
T
T
T
)| )(
`
|(
`
(6.107)
with | | |, | # ! & ! &
%
T
T
, , ) |(
2
T T
T V & ! and . ) |(
2
% %
& # ! V
Then,
%
%
$
V
V
U
T
T
(6.108)
and

%
% %
& & "
V
V
U
T
T T
(6.109)
The optimal cost Iunction can be obtained to be
) 1 (
2 2
% '&( T T
U V (6.110)
It can be shown that iI the joint density Iunction ) , (
,
% )
#
is Gaussian, then
the conditional mean | , | % ! is linear in the observation data, and thus the
minimum mean-square estimate is linear. In addition, we usually assume Ior
Parameter Estimation
381
convenience that the parameter T and the observation Y have zero means. In this
case,
lms

`
is unbiased, and is given by
v C C
vv blue
1
T
T !! ! !!!! (6.111)
where | | / 1 and | |
2 1
Y E C Y E C
vv v
T

T
. We now can generalize the result oI
(6.111) Ior multiple parameter estimation.

"#$#%! T!&'()*+!,-./*0!
!
II now T is a random vector parameter and T and ! are assumed to have zero
means, then it can be shown that the BLUE that minimizes the mean-square error
(variance minimum) is given by
! " "
!! !
1
`

blue
(6.112)
and the mean-square error is

T T TT
T T
! !! !
" " " "
1
| )
`
( )
`
( |


T
blue blue
E (6.113)
!!
" is the covariance matrix oI the observation vector !,
1
!!
" is its inverse, and
!
"

is the cross-covariance matrix between ! and T. Note that the mean and
covariance oI the data are unknown, and the means oI ! and T are assumed to be
zero, and thus the linear mean-square estimator is unbiased.

Proof. We now give a derivation oI the result given in (6.112). Since
`
is
restricted to be a linear estimator Ior !, that is a linear Iunction oI the data, then
`

can be written as
#!
`
(6.114)
The problem is to select the matrix # so that the mean-square given by (6.113) is
minimized. Equation (6.113) is called the matrix-valued squared error loss
function. Substituting (6.114) into (6.113), we have
| ) ( ) ( | | )
`
( )
`
( |
T T
E E #! #! T T T T T T (6.115)
Using the Iact that
Signal Detection and Estimation 382
| )
`
( )
`
( | tr | )
`
( )
`
( |
! !
" " (6.116)
then, (6.115) becomes
| ) ( ) ( | tr | )
`
( )
`
( |
! !
" " !" !"
! | tr|
! !
! !# !# ! # #
"" " "

T T TT
!!!!! (6.117)
Note that


" "" " " "" " "" ""
"" " "" "" "
# # # ! # # # !# ! !#
# # ! # # # !
1 1
1 1
) (
) ( ) (




! ! !
!

!
" "" " " " ""
# # # ! # !# ! !#
1

! !
! !!!!!!!!!(6.118)
Using (6.118), we can write
| ) ( ) ( tr| | )
`
( )
`
( |
1 1 1 1


" "" " "" " "" "" " ""
# # # # # ! # # # ! #


!
"
(6.119)
We observe that the gain matrix ! appears only in the second term on the right-
hand side oI (6.119). Thus, each diagonal element in the matrix
| )
`
( )
`
( |
!
" is minimized when ! is given by
1

"" "
# # !

(6.120)
Substituting (6.120) in (6.114), we have
" # #
"" "
1
` `




#$%&
(6.121)
and the prooI is complete.
Note that iI " and T are not zero mean, such that
"
$ " | | " and , | |

$ " !
then
% &"
$'(
T
`
(6.122)
where the matrix & and the vector % are given by
^ ` ^ `
"
'
""
# # " " " " "" &
T
T T

| | | | | | | | | | | |
1
! ! ! !
" " " " " " (6.123)
!"#"$%&%#'()&*$"&*+,'
383
and
| | | | ! " # ( ( (6.124)
By direct substitution, we obtain
) (
`
1
! !! !
$ ! % % $

-./%
(6.125)
The BLUE given in (6.121) has several properties oI interest:
!
% !

|
`
|
0
-./%
( (6.126)
-./% -./%
0
-./% -./%
(



` `
1
|
` `
| % % % %
! ! !

(6.127)
'''''''''''
-./% -./%
0
-./% -./%
(


` `
| )
`
)(
`
|( % % ''''''''''''''(6.128)
0 | )
`
|(
0
-./%
( ! (6.129)
0 |
`
)
`
|(
0
-./% -./%
( ' ' (6.130)
We observe that property (6.129) means that the error in the estimate is orthogonal
to the data !, while property (6.130) means that the error in the estimate is
orthogonal to the estimator
-./%

`
. This concept oI orthogonality is an important
result, which will be developed and used extensively in the next chapter on
Iiltering.

!"#"$! %&'()*+),-*./)01233*1+)45*3/)
)
Consider the general problem oI estimating a random vector with 1 parameters
(denoted as the 1-dimensional vectors T), to be estimated Irom 2 observations
(denoted as the 2-dimensional vector !), in white Gaussian noise. The parameters
T and measurements ! are assumed to be related by the so-called .*,%"# $+3%.
& ' ! (6.131)
' is a 1 2 u known mapping matrix, ! is the 1 u 2 observed random vector, T is
an 1 u 1 random vector to be estimated, and & is a 1 u 2 vector representing errors
in the measurement (noise). Assuming that T and &( have zero means, then ! has
zero mean. The covariance matrix oI ! is
Signal Detection and Estimation 384

!! ! ! ""
# $ # $# $ $# ! $ ! $ #
! ! !

| ) )( |( (6.132)
while the cross-covariance matrix oI " and T is

! "
# $# # (6.133)
Substituting (6.132) and (6.133) in (6.121), we obtain the BLUE" estimate oI T! to
be
" # $ # $# $ $# # $ #
!! ! ! !
1
| || |
`


! ! !
#$%&
(6.134)
with error covariance matrix
) ( ~ ~
!
# $ # # #
T TT TT
T T

!

) ( ) (
1
T TT T T TT ! !! ! !
# $# # $ # $# $ $#
! !
(6.135)
When T! and !% are uncorrelated, which is the usual assumed case, , C
N
"
T
and
the BLUE oI T reduces to
" # $ $# $ #
!!
1
) (
`


! !
TT TT
T (6.136)
while the error matrix becomes
TT TT TT TT
T T
$# # $ $# $ # # #
!!
1
~ ~ ) (


! !
(6.137)
Using the '()*+," +-.&*/+0-" $&''( given in Chapter 4, and aIter some matrix
operation, we have
" # $ #
!!
1
~ ~
~

!
#$%&

(6.138a)
where

1 1 1
~ ~ ) (

$ # $ # #
!!
!
TT
T T
(6.138b)
II no a priori inIormation about T is available, and thus iI
1
TT
# is assumed zero, the
BLUE oI
`
is given by
Parameter Estimation
385
! " # # " #
$$ $$
1 1 1
) (
`

T T
(6.139)
Note that in these results, we only assumed that T is a random parameter.
Consider now the problem oI estimating the unknown vector T! but which is
constrained to be a linear Iunction oI the data (measurements).

The Estimator as a Linear Function of Data

In this case, we require


K
k
k k ik
M i b Y a
1
, , 2 , 1 ,
`
T (6.140)
or, in matrix Iorm
% &!
`
(6.141)
where & is an K M u matrix, and ! and % are 1 u K vectors. In order Ior
`
to be
unbiased, we must have
| ,
`
| E (6.142)
Hence,
% &# % $ # & % ! & % &! | , | | , | | , | E E E (6.143)
only iI
' &# (6.144a)
and
" % (6.144b)
The BLUE estimate is then given by
! " # # " #
$$ $$
1 1 1
) (
`

T T
T (6.145)
ThereIore, with the noise Gaussian in the linear model, we can state the
Iollowing result given by the Gauss-Markov theorem.

Signal Detection and Estimation 386
Gauss-Markov Theorem. II the data is oI the general linear model Iorm
! " # (6.146)
where " is a known M K u matrix, is an 1 u M vector oI parameters to be
estimated, and !$ is a 1 u K noise vector with mean zero and covariance matrix
!!
% , then the BLUE oI that minimizes the mean-square error is
# % " " % "
!! !!
1 1 1
) (
`

T T
T (6.147)
with error covariance matrix

1 1
` `
) ( | , )
`
)(
`
|(

" % " %
!!
T T
blue blue
E

(6.148)
The minimum variance oI
k

`
is then
kk
T
k
| ) |( |
`
var|
1 1
" % "
!!
(6.149)
!"#$%&'()*+(
(
Consider the problem oI Example 6.2 where
K k N A Y
k k
, , 2 , 1 ,
where
k
N is a zero mean white noise. Find the BLUE oI M iI:
(a) The variance oI K k N
k
, , 2 , 1 , is .
2

(b) The noise components are correlated with variance . , , 2 , 1 ,
2
K k
k


Solution

(a) The estimator is constrained to be a linear Iunction oI the data. Let


K
k fk k
M f Y A A
1
, , 2 , 1 ,
`
&

where the A
fk
s are the weighting coeIIicients to be determined. From (6.147), the
BLUE is given by
# % " " % "
!! !!
1 1 1
) (
`

T T
A
!"#"$%&%#'()&*$"&*+,'
387
where
| | | | | |
- - - - - -
. ( / 0 . / ( 1 (
Since .
-
must be unbiased, then
- -
. . ( | | , 1
-
/ , and thus . ! !
Substituting, we have

V V

2
-
-
2
-
-
3 3
4
2
4 2 .
1 1
2 1 2
2
1
2
1
) (
1 1
`
"# " ! ! !
Hence, we observe that the BLUE is the sample mean independently oI the
probability density Iunction oI the data, while the minimum variance is
2
.
3
3
2
2
1
1
1
) (
1
|
`
var|
V

! ! "
! $ !
%%

(b) In this case, the variance matrix is

V
V
V

2
2
2
2
1
0 0
0 0
0 0
2

%%
$
AIter substitution, the BLUE is

V
V

2
-
-
2
-
-
-
4
.
0
2
1
2
1
1
`

while the minimum variance is

2
-
-
.
0
2
1
1
)
`
var(

Signal Detection and Estimation 388
!"#! $%&'()'*+&,%-%'(./&(.01-
-
In studying parameter estimation in the previous sections, our criteria were to Iind
a 'good estimator that was unbiased and had minimum variance. In the least-
square estimation, the criterion is only to minimize the squared diIIerence between
the given data (signal plus noise) and the assumed signal data.
Suppose we want to estimate ! parameters, denoting the !-dimensional
vector u, Irom the " measurements, denoting the "-dimensional vector !
with ! " > . The relation between the parameters u and the observed data ! is
given by the linear model
" # ! + = (6.150)
where # is a known ( ) ! " matrix, and " is the unknown ( ) 1 " error vector
that occurs in the measurement oI u.
The least-square estimator (LSE) oI u chooses the values that make # $ =
closest to the observed data !. Hence, we minimize
u u + u u =
u u = = u

=
# # ! # # ! !!
# ! # !
# # # # # #
#
"
$
$ $
% & ' ) ( ) ( ) ( ) (
1
2

!!!!!!!! # # # ! !!
# # # #
+ = 2 ! !!!!!!!!!(6.151)
Note that # !
#
is a scalar. Taking the Iirst-order partial derivative oI the cost
Iunction ) ( ' with respect to (i.e., the gradient) and setting it equal to zero, we
obtain the set oI linear equations
2

= + =
c
c
# # ! #
# #
'
2 2
) (
(6.152)
and the(LSE is Iound to be
! # # #
# #
)*
1
) (
`

= (6.153)
Note that the second-order partial derivative is
# #
#
'
=
c
c
2
2
) (


(6.154)
Parameter Estimation
389
This matrix is positive-deIinite as long as ! is assumed to be oI Iull rank to
guarantee the inversion oI ! !
T
. Thus, the solution (6.153) is unique and
minimizes ) ( J . The equations
" ! ! !
T T
(6.155)
to be solved Ior
ls

` `
are reIerred to as the normal equations.
We observe that the error in the estimator
ls

`
! is a linear Iunction oI the
measurement errors #! since
ls
T
~
| |
`
1 1
# ! ! ! ! " ! ! !

T T T T T
T T T T
ls

# ! ! ! ! ! ! !
T T T T
1 1
T T
!!!!!! # ! ! !
T T
1
(6.156)
The minimum least-square
min
J can be shown, aIter some matrix operation, to be
" ! ! ! ! " "" ! " ! "
T T T T T
ls
J J
1
min
) ( )
`
( )
`
( )
`
(

T T T
)
`
( ! " "
T
! (6.157)
Generali:ation of the Least-Square Problem

The least-square cost Iunction can be generalized by introducing a K K u positive
deIinite weighting matrix $ to yield
) ( ) ( ) ( ! " $ ! "
T
J ! ! !!!!!!!!!(6.158)
The elements oI the weighting can be chosen to emphasize speciIic values oI the
data that are more reliable Ior the estimate
`
.
The general Iorm oI the least-square estimator can be shown to be
$" ! $! !
T T 1
) (
`

(6.159)
while its minimum least-square error is
" $ ! $! ! $! $ " | ) ( |
1
min
T T T
J

(6.160)
The error covariance matrix becomes
Signal Detection and Estimation 390

1 1
) ( ) (

!" " !#!" " !" " $
%%
! ! !
(6.161)
where #
%%
!is a known positive-deIinite covariance matrix given by
| |
!
" %% #
%%
(6.162)
since " | |% " (i.e.,
%% %%
$ # ).
II the measurement errors % are uncorrelated and have identical variance ,
2

then & #
2
!" and iI" ,
2
& ! then (6.159) reduces to (6.153). That is, a
constant scaling has no eIIect on the estimate.
It can also be shown that the least-square estimator and the linear minimum
mean-square estimator are identical when the weighting matrix ! is chosen as
1
# ! (6.163)
that is, the inverse oI the measurement noise covariance matrix.
!
#$%&'()!*+,

Consider again the problem oI Example 6.5 with # $ % & '
$ $
, , 2 , 1 , .
From (6.153), the least-square estimate is ' " " "
! !
&
1
) (
`

. " is the ) 1 ( u #
column matrix denoted > @ 1 1 1
!
- . Hence,


#
$
$
! !
'
#
&
1
1
1
) (
`
' - - -
which is the ()*+,-. *-)/. Observe that Ior this simple operation, instead oI
applying a derived result, we could have started by writing the least-square cost
Iunction


#
$
$
& 0 & 1
1
2
) ( ) ( , then diIIerentiating ) ( & 1 with respect to &, setting
the result equal to zero, and solving Ior
,(
& &
` `
.

#$%&'()!*+.!
!
Suppose that three measurements oI signal ) 2 / exp( $ (
$
, where T is the
parameter to be estimated, are given by , 5 . 1
1
0 " , 3
2
0 "and" . 5
3
0 Find the
least-square estimate oI T.
.
Parameter Estimation
391
Solution

The data can be put in the Iorm, ! " # given by (6.150). Substituting Ior
the values oI k, we have
3
2
1
482 . 4 5
718 . 2 3
648 . 1 5 . 1
N
N
N




where
T
| 5 3 5 . 1 | $ is a realization oI #, , | 482 . 4 718 . 2 648 . 1 |
T
" and
| |
3 2 1
N N N ! a realization oI !. The least-square estimate is given by
$ " " "
T T
ls
1
) (
`


where 192 . 30
3
1
2

k
k
T
H " " , and


3
1
036 . 30
k
k k
T
Y H $ " . Hence,
995 . 0 ) (
`
3
1
2
3
1 1


k
k
k
k k
T T
ls
H
Y H
$ " " "


!"#$! %&'(%)*+&,-&.)/0)1(.%&,&)/*2./3%,
,
In real time estimation problems (Iiltering), it is necessary to write the estimator

`
in a recursive Iorm Ior eIIiciency. For example, consider a situation where an
estimate
`
is determined based on some data
K
# . II new data
1 K
# is to be
processed aIter having determined an estimate based on the data
K
# , it is best to
use the old solution along with the new data to determine the new least-square
estimator. It is clear that discarding the estimate based on the data
K
# and
restarting the computation Ior a solution is ineIIicient. This procedure oI
determining the least-square estimate Irom an estimate based on
K
# and the new
data
1 K
# is reIerred to as sequential least-square estimation, or more commonly
recursive least-square (RLS) estimation.
Consider the problem oI estimating T Irom the data vectors
M
% given by the
linear model
Signal Detection and Estimation 392
! ! !
! " # ! ! !!!!!!!!!!(6.164a)
where
> @
"
! !
$ $ $ #
2 1
(6.164b)
is an ) 1 ( !# collection oI vectors
!
$ $ $ , , ,
2 1
, since each vector
, , , 2 , 1 , ! $
$
$ is a ) 1 ( # vector,
> @
"
! !
% % % !
2 1
(6.164c)
is an ) 1 ( !# error vector, and
> @
"
! !
& & & "
2 1
(6.164d)
is an ) ( % !# u mapping matrix relating
!
# to the ) 1 ( u % parameter vector to
be estimated.
It can be shown that the RLS estimator is given by
|
`
|
` `
1 1

! ! ! ! ! !
T T T " ! ' (6.165)
where
1

!!
"
! !
( " * '
!!
(6.166)
* is the error covariance matrix given by


!!
"
!
"
!
!
"
"
! !
&
( ( (
( ( (
( ( (
! ! *
!!

2 1
2 22 12
11 12 11
| | (6.167)
and
'( '
"
( '
& | | ( % % (6.168)
!"#"$%&%#'()&*$"&*+,'
393
The covariance matrix oI the individual noise vector !" is
**
#
*
# . Equation
(6.170) indicates that the estimator
-
T
`
based on
-
$ is Iormed as a linear
combination oI
1
`
-
T and a correction term! |
`
|
1

- - - -
T % & ' "
II were a random variable, it can be shown that the generalization oI the
recursive least-square estimation leads to the Kalman Iilter

|3|. In the next chapter
on Iiltering, we present an introduction to Kalman Iiltering.


!"##! $%&&'()*
*
In this chapter, we have developed the concept of parameter estimation. We used maximum likelihood estimation to estimate nonrandom parameters. We first obtained the likelihood function in terms of the parameters to be estimated. Then, we maximized the likelihood function to obtain the estimator, which resulted from solving the likelihood equation. We linked this chapter to the previous one by presenting the generalized likelihood ratio test in Section 6.3. In the generalized likelihood ratio test, we used the maximum likelihood estimate of the unknown parameter in the composite hypothesis as its true value and then performed the likelihood ratio test. This provided an alternative for the cases where UMP tests did not exist. Criteria for measuring the estimator, such as bias and consistency, were presented to determine the quality of the estimator.
When the parameter to be estimated was a random variable, we used Bayes' estimation. In Bayes' estimation, we minimized the risk, which is a function of the error between the estimate and the true value. Three cases were considered: the squared error, the absolute value of error, and the uniform cost function. It was shown that the minimum mean-square error estimate is the conditional mean of the parameter to be estimated, given the observation random variable. The resulting minimum risk was the conditional variance. In the absolute value of error case, the estimate turned out to be the median of the conditional density function of the parameter to be estimated, given the observation random variable.
For the uniform Bayes' cost, the estimator was the solution of the MAP equation. In comparing the ML estimate and the MAP estimate, it was observed that the ML estimate is a special case of the MAP estimate, obtained by setting the term involving the a priori density function of the parameter to zero in the MAP equation.
equation. In order to measure the 'goodness oI the estimator, the Cramer-Rao
bound was given as an alternate way to measure the error variance, since an
expression Ior the error variance was diIIicult to obtain. The above results were
generalized to multiple parameter estimation in Section 6.7.
Then, we presented linear mean-square estimation Ior situations where it may
have been diIIicult to Iind the MMSE,' even iI existed. We deIined the BLUE in
the sense that the mean-square value is minimized. We veriIied that Ior a joint
Gaussian density Iunction oI the observation and the parameter to be estimated, the
linear mean-square estimator is the optimum MMSE. An introduction to least-
square estimation was presented. We noted that least-square estimation was not
based on the criterion of an unbiased, minimum-variance estimator, but rather on minimizing the squared difference between the given data and the assumed signal data. We concluded the chapter with a brief section on recursive least-square estimation.


!"#$%&'()
)
6.1 Let Y_1, Y_2, …, Y_N be the observed random variables, such that

    Y_k = A + B c_k + N_k,    k = 1, 2, …, N

The constants c_k, k = 1, 2, …, N, are known, while the constants A and B are not known. The random variables N_k, k = 1, 2, …, N, are statistically independent, each with zero mean and known variance σ². Obtain the ML estimate of (A, B).

6.2 Let Y be a Gaussian random variable with mean zero and variance σ².
(a) Obtain the ML estimate of the variance σ².
(b) Is the estimate efficient?

6.3 Let Y_1 and Y_2 be two statistically independent Gaussian random variables, such that E[Y_1] = m, E[Y_2] = 3m, and var[Y_1] = var[Y_2] = 1; m is unknown.
(a) Obtain the ML estimate of m.
(b) If the estimator of m is of the form a_1 Y_1 + a_2 Y_2, determine a_1 and a_2 so that the estimator is unbiased.

6.4 The observation samples of the envelope of a received signal are given by the following exponential distribution

    f_{Y_k}(y_k) = \frac{1}{\theta} \exp\left( -\frac{y_k}{\theta} \right),    k = 1, 2, …, N

where θ is an unknown parameter and the observations are statistically independent.
(a) Obtain the ML estimate of θ.
(b) Is the estimator unbiased?
(c) Determine the lower bound on the variance of the estimator.
(d) Is the estimator consistent?



!"#"$%&%#'()&*$"&*+,'
395
!"#! Let the observation - satisIy the binomial law, such that the density Iunction
oI - is
. , ,
-
/ /
.
,
0 1

|
|
.
|

\
|
= ) 1 ( ) (
(a) Find an unbiased estimate Ior /.
(b) Is the estimate consistent?

!"!! Obtain the ML estimates oI the mean $ and variance
2
Ior the independent
observations
2
- - - , , ,
2 1
, such that
( )
2 .
$ 0
0 1
.
. -
.
, , 2 , 1 ,
2
exp
2
1
) (
2
2
=
(
(

=
!"$! Let 3 be an unknown deterministic parameter that can have any value in the
interval | 1 , 1 | . Suppose we take two observations oI 3 with independent
samples oI zero-mean Gaussian noise, and with variance
2
superimposed on
each oI the observations.
(a) Obtain the ML estimate oI 3.
(b) Is
$4
3
`
unbiased?

!"%! Let
2
- - - , , ,
2 1
be 2 independent observed random variables, each having a
Poisson distribution given by
. , , 2 , 1 , 0 ,
!
) , (
,
2 . 0
0
% 0 1
.
.
0
. -
.
.
= >
u
= u
u

The parameter u is unknown.
(a) Obtain the ML estimate oI u.
(b) VeriIy that the estimator is unbiased and determine the lower bound.

!"&' Let
2
- - - , , ,
2 1
be 2 independent and identically distributed observations.
The observations are uniIormly distributed between u + u and , where u is
an unknown parameter to be estimated.
(a) Obtain the MLE oI u.
(b)' How is the estimator unbiased?

!"#$! Let
!
" " " , , ,
2 1
be ! independent variables with # " $
%
= = ) 1 ( and
# " $
%
= = 1 ) 0 ( , where 1 0 , < s # # is unknown.
(a) Obtain the ML estimate.
(b) Determine the lower bound on the variance oI the estimator, assuming
that the estimator is unbiased.

!"##! Find
&'
(
`
, the minimum mean-square error, and
&)#
(
` , the maximum a
posteriori*estimators, oI + Irom the observations
, + " + =
+ and , are random variables with density Iunctions
)| 1 ( ) 1 ( |
2
1
) ( + + = ( ( ( -
+
and
|
|
.
|

\
|
o

o t
=
2
2
2
exp
2
1
) (
(
. -
,

!"#%! The conditional density Iunction oI the observed random variable " given a
random parameter + is given by

<
> >
=

0 , 0
0 and 0 ,
) , (
,
/
( / (0
( / -
(/
+ "

The a priori probability density Iunction oI + is

<
>
o
=
o
0 , 0
0 ,
) (
) (
1
(
( 0 (
1
( -
( 1
1
+
&
where o is a parameter, 1 is a positive integer, and ) (1 is the gamma
Iunction.
(a) Obtain the a priori*mean and variance oI +.
(b) For " given,
1. Obtain the minimum mean-square error estimate oI +.
2. What is the variance oI this estimate?
(c) Suppose we take ! independent observations oI , , , 2 , 1 , ! % "
%
= such
that

<
> >
=

0 , 0
0 and 0 ,
) , (
,
%
%
(/
% + "
/
( / (0
( / -
%
%

!"#"$%&%#'()&*$"&*+,'
397
    1. Determine the minimum mean-square error estimate of Θ.
    2. What is the variance of this estimate?
(d) Verify whether the MAP estimate equals the MMSE estimate.

!"#$! Consider the problem where the observation is given by . - / + = ln ,
where -'is the parameter to be estimated . -'is uniIormly distributed over
the interval |, 1 , 0 | and . has an exponential distribution given by

>
=

otherwise , 0
0 ,
) (
, %
, 0
,
.

Obtain
(a) The mean-square estimate,
$)
1
`
.
(b) The MAP estimate,
$"2
1
` .
(c) The MAVE estimate,
$"3%
1
`
.

!"#%! The observation / is given by . - / + = , where - and . are two random
variables. . is normal with mean one and variance
2
, and - is uniIormly
distributed over the interval |0, 2|. Determine the MAP estimate oI the
parameter'-.

!"#&! Show that the mean-square estimation | , |
`
! (
$)
= commutes over a
linear transIormation.

!"#!! Suppose that the joint density Iunction oI the observation / and the
parameter u is Gaussian. The means

and $ $
4
are assumed to be zero. u
can then be expressed as a linear Iorm oI the data. Determine an expression
Ior the conditional density ) , (
,
4 0
/
.

!"#'! Consider the problem oI estimating a parameter u Irom one observation /.
Then, . / + = , where u and the noise . are statistically independent with
( )

s s
= u
otherwise , 0
1 0 , 1

0 and

s s
=
otherwise , 0
2 0 ,
2
) (
,
,
, 0
.

Determine
567%

`
, the best linear unbiased estimate oI u.

(
!"#"$"%&"'(
(
|1| Van Trees, H. L., Detection, Estimation, and Modulation Theorv, Part I, New York: John Wiley
and Sons, 1968, p. 95.
|2| Vaseghi, S. V., Advanced Digital Signal Processing and Noise Reduction, New York: John Wiley
and Sons, 2000.
|3| Sorenson, H. W., Parameter Estimation. Principles and Problems, New York: Marcel Dekker,
1980.


)"*"&+",(-./*.01$2345

Dudewicz, E. J., Introduction to Statistics and Probabilitv, New York: Holt, Rinehart and Winston,
1976.
Gevers, M., and L Vandendorpe, Processus Statistiques, Estimation et Prediction, Universite
Catholique de Louvain, 1996.
Haykin, S., Adaptive Filter Theorv, Englewood CliIIs, NJ: Prentice Hall, 1986.
Helstrom, C. W., Elements of Signal Detection and Estimation, Englewood CliIIs, NJ: Prentice Hall,
1995.
Kay, S. M., Fundamentals of Statistical Signal Processing. Estimation Theorv, Englewood CliIIs, NJ:
Prentice Hall, 1993.
Lewis, T. O., and P. L. Odell, Estimation in Linear Models, Englewood CliIIs, NJ: Prentice Hall, 1971.
Mohanty, N., Signal Processing. Signals, Filtering, and Detection, New York: Van Nostrand Reinhold,
1987.
Sage, A. P., and J. L. Melsa, Estimation Theorv with Applications to Communications and Control,
New York: McGraw-Hill, 1971.
Shanmugan, K. S., and A. M. Breipohl, Random Signals. Detection, Estimation, and Data Analvsis,
New York: John Wiley and Sons, 1988.
Srinath, M. D., and P. K. Rajasekaran, An Introduction to Statistical Signal Processing with
Applications, New York: John Wiley and Sons, 1979.
Stark, H., and J. W. Woods, Probabilitv, Random Processes, and Estimation Theorv for Engineers,
Englewood CliIIs, NJ: Prentice Hall, 1986
Urkowitz, H., Signal Theorv and Random Processes, Dedham, MA: Artech House, 1983.
Whalen, A. D., Detection of Signals in Noise, New York: Academic Press, 1971.
