
EE 636: Detection and Estimation

Estimation theory
Kalpana Dhaka

In estimation theory, the hypotheses are continuous: the unknown takes values in a continuum rather than in a finite set.


Parameter estimation problem
Example: We want to measure a voltage $a$ at a single time instant.
$a$: Parameter to be measured (voltage), where $-V \le a \le +V$
$r$: Observation variable, where $r = a + n$ (measurement corrupted by noise)
The estimation model has the following four components:
1. Parameter space: Output of the source is a parameter (or variable). This output
is a point in the parameter space.
2. Probabilistic mapping from parameter space to observation space: Probability law
that governs the effect of a on the observation.
3. Observation space: Finite-dimensional space. Observation vector R is a point in
observation space.
4. Estimation rule: Mapping of the observation space into an estimate.
In estimation theory, the problem is defined as: observe r and estimate the value of a.
We will address the following cases:
1. Parameter is a random variable whose behavior is governed by a probability
density.
2. Parameter is an unknown quantity, but not a random variable.
Parameter is a random variable whose behavior is governed by a probability
density.
In estimation theory, $a$ and $\hat{a}(R)$ (the estimate of $a$) are continuous variables. We assign a cost to all pairs $[a, \hat{a}(R)]$ over the range of interest. The cost depends on the error of the estimate, defined as
$$a_\epsilon(R) = \hat{a}(R) - a .$$
Thus, the cost function $C(a_\epsilon)$ is a function of a single variable. Different cost functions:
1. $C(a_\epsilon) = a_\epsilon^2$ (square error cost function)
2. $C(a_\epsilon) = |a_\epsilon|$ (absolute value cost function)
3. $C(a_\epsilon) = \begin{cases} 0, & |a_\epsilon| \le \Delta/2 \\ 1, & |a_\epsilon| > \Delta/2 \end{cases}$ (uniform cost function)

We choose a cost function to accomplish two objectives:



1. The cost function measures user satisfaction adequately, that is, it assigns an analytic measure to a subjective quality.
2. The cost function is chosen such that it results in a tractable problem.
Corresponding to the a priori probability in the detection problem, we have an a priori probability density $p_a(A)$ in the random parameter estimation problem. We assume $p_a(A)$ is known. The Bayes risk is defined as
$$\mathcal{R} = E\{C[a, \hat{a}(R)]\} = \int \int C[\hat{a}(R), A]\, p_{a,r}(A, R)\, dA\, dR .$$
The expectation is over the random variable $a$ and the observed variable $r$. The Bayes estimate is the estimate that minimizes the risk. Writing the joint density as $p_{a,r}(A, R) = p_r(R)\, p_{a|r}(A|R)$,
$$\mathcal{R} = E\{C[a, \hat{a}(R)]\} = \int dR\, p_r(R) \int C[\hat{a}(R), A]\, p_{a|r}(A|R)\, dA .$$

Square error cost function or minimum mean square error (MMSE)
$$\mathcal{R}_{ms} = \int dR\, p_r(R) \int \left(\hat{a}(R) - A\right)^2 p_{a|r}(A|R)\, dA$$
Now, our objective is to find the value of $\hat{a}(R)$ such that $\mathcal{R}_{ms}$ is minimum. The inner integral and $p_r(R)$ are non-negative, so $\mathcal{R}_{ms}$ can be minimized by minimizing the inner integral for each $R$. The estimate that minimizes the Bayes risk is obtained by differentiating the inner integral with respect to $\hat{a}(R)$ and equating to zero:
$$\frac{d}{d\hat{a}(R)} \int \left(\hat{a}(R) - A\right)^2 p_{a|r}(A|R)\, dA = -2\int A\, p_{a|r}(A|R)\, dA + 2\hat{a}(R)\int p_{a|r}(A|R)\, dA = 0 .$$
Since $\int p_{a|r}(A|R)\, dA = 1$, on simplifying we get
$$\hat{a}_{ms}(R) = \int A\, p_{a|r}(A|R)\, dA .$$
The estimate of $A$ is the mean of the a posteriori density function.
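For concreteness, here is a minimal grid sketch of the posterior-mean computation. The model values (a zero-mean unit-variance Gaussian prior, noise variance 0.5, and observed value R = 0.8) are assumptions chosen purely for illustration, not values from these notes:

```python
import numpy as np

# Minimal grid sketch of the MMSE estimate (the posterior mean).
# Assumed model: prior a ~ N(0, 1), r = a + n with n ~ N(0, 0.5), observed R = 0.8.
A = np.linspace(-5.0, 5.0, 2001)                  # grid over the parameter space
dA = A[1] - A[0]
prior = np.exp(-A**2 / 2.0)                       # p_a(A), unnormalized
R = 0.8
likelihood = np.exp(-(R - A)**2 / (2.0 * 0.5))    # p_{r|a}(R|A), unnormalized
posterior = prior * likelihood
posterior /= posterior.sum() * dA                 # normalize p_{a|r}(A|R)
a_ms = (A * posterior).sum() * dA                 # posterior mean
print(a_ms)   # analytically (1 / (1 + 0.5)) * R ~= 0.533 for this Gaussian model
```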


Absolute value cost function:
$$\mathcal{R}_{abs} = \int dR\, p_r(R) \int \left|\hat{a}(R) - A\right| p_{a|r}(A|R)\, dA$$
The estimate of $A$ that minimizes the Bayes risk is again obtained from the inner integral. We can write
$$\int \left|\hat{a}(R) - A\right| p_{a|r}(A|R)\, dA = \int_{-\infty}^{\hat{a}(R)} \left(\hat{a}(R) - A\right) p_{a|r}(A|R)\, dA + \int_{\hat{a}(R)}^{\infty} \left(A - \hat{a}(R)\right) p_{a|r}(A|R)\, dA .$$
Now, differentiating the above expression with respect to $\hat{a}(R)$ (the boundary terms from Leibniz's rule vanish) and equating to zero, we get
$$\int_{-\infty}^{\hat{a}_{abs}(R)} p_{a|r}(A|R)\, dA = \int_{\hat{a}_{abs}(R)}^{\infty} p_{a|r}(A|R)\, dA .$$
The estimate of $A$ is the median of the a posteriori density function.
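Continuing the grid sketch above (it reuses the assumed `A`, `dA`, and `posterior` arrays from that snippet), the absolute-error estimate is the point where the numerical posterior CDF crosses 1/2:

```python
# Median of the same grid posterior: where the numerical CDF crosses 1/2.
cdf = np.cumsum(posterior) * dA
a_abs = A[np.searchsorted(cdf, 0.5)]
print(a_abs)   # equals the mean here, since a Gaussian posterior is symmetric
```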


Uniform cost function or maximum a posteriori (MAP)
$$\mathcal{R}_{unf} = \int dR\, p_r(R) \left[ 1 - \int_{\hat{a}_{unf}(R) - \Delta/2}^{\hat{a}_{unf}(R) + \Delta/2} p_{a|r}(A|R)\, dA \right]$$
To minimize $\mathcal{R}_{unf}$, maximize
$$\int_{\hat{a}_{unf}(R) - \Delta/2}^{\hat{a}_{unf}(R) + \Delta/2} p_{a|r}(A|R)\, dA .$$
For small $\Delta$, this integral is maximized by placing $\hat{a}(R)$ at the value of $A$ at which the a posteriori density is maximum, denoted $\hat{a}_{map}(R)$; that is, $\hat{a}_{map}(R) = \arg\max_A p_{a|r}(A|R)$. The estimate of $A$ is the mode of the a posteriori density function. Maximizing $p_{a|r}(A|R)$ is equivalent to maximizing $\ln p_{a|r}(A|R)$, since log is a monotone function. This is represented as
$$\left. \frac{\partial \ln p_{a|r}(A|R)}{\partial A} \right|_{A = \hat{a}(R)} = 0 .$$

Conditions for this equation to locate the maximum:

1. $\ln p_{a|r}(A|R)$ has a continuous first derivative.

2. The maximum is interior to the allowable range of $A$.
Now, $p_{a|r}(A|R) = \dfrac{p_{r|a}(R|A)\, p_a(A)}{p_r(R)}$. This implies $\ln p_{a|r}(A|R) = \ln p_{r|a}(R|A) + \ln p_a(A) - \ln p_r(R)$. Since $\ln p_r(R)$ does not depend on $A$, the maximum of $p_{a|r}(A|R)$ is equivalent to the maximum of $l(A)$, where
$$l(A) = \ln p_{r|a}(R|A) + \ln p_a(A) .$$
This is equivalent to
$$\left. \frac{\partial l(A)}{\partial A} \right|_{A = \hat{a}(R)} = \left. \frac{\partial \ln p_{r|a}(R|A)}{\partial A} \right|_{A = \hat{a}(R)} + \left. \frac{\partial \ln p_a(A)}{\partial A} \right|_{A = \hat{a}(R)} = 0 .$$
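Continuing the same grid sketch as before, the MAP estimate is simply the mode of the tabulated posterior:

```python
# Mode of the same grid posterior: the MAP estimate.
a_map = A[np.argmax(posterior)]
print(a_map)   # mean = median = mode for a Gaussian posterior
```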

Example 1:
$$r_i = a + n_i, \qquad 1 \le i \le N .$$
That is,
$$\begin{bmatrix} r_1 \\ r_2 \\ \vdots \\ r_N \end{bmatrix} = \begin{bmatrix} a \\ a \\ \vdots \\ a \end{bmatrix} + \begin{bmatrix} n_1 \\ n_2 \\ \vdots \\ n_N \end{bmatrix}$$
Given: the $n_i$ are i.i.d. and normally distributed with zero mean and variance $\sigma^2$, and $p_a(A)$ is normally distributed with mean $m_a$ and variance $\sigma_a^2$.
Solution:
$$p_{r_1, r_2, \ldots, r_N | a}(R_1, R_2, \ldots, R_N | A) = \prod_{i=1}^{N} p_{r_i|a}(R_i|A) = \prod_{i=1}^{N} p_{n_i}(R_i - A) = \prod_{i=1}^{N} \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left\{ -\frac{(R_i - A)^2}{2\sigma^2} \right\}$$
$$p_a(A) = \frac{1}{\sqrt{2\pi\sigma_a^2}} \exp\left\{ -\frac{(A - m_a)^2}{2\sigma_a^2} \right\}$$
$$p_{a|r}(A|R) = k(R) \left[ \prod_{i=1}^{N} \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left\{ -\frac{(R_i - A)^2}{2\sigma^2} \right\} \right] \frac{1}{\sqrt{2\pi\sigma_a^2}} \exp\left\{ -\frac{(A - m_a)^2}{2\sigma_a^2} \right\}$$
$$p_{a|r}(A|R) = k'(R) \exp\left\{ -\frac{1}{2} \left( \sum_{i=1}^{N} \frac{(R_i - A)^2}{\sigma^2} + \frac{(A - m_a)^2}{\sigma_a^2} \right) \right\}$$
Collecting terms in $A$,
$$p_{a|r}(A|R) = k'(R) \exp\left\{ -\frac{1}{2} \left[ A^2 \left( \frac{N}{\sigma^2} + \frac{1}{\sigma_a^2} \right) - 2A \left( \frac{\sum_{i=1}^{N} R_i}{\sigma^2} + \frac{m_a}{\sigma_a^2} \right) + \left( \frac{\sum_{i=1}^{N} R_i^2}{\sigma^2} + \frac{m_a^2}{\sigma_a^2} \right) \right] \right\}$$
Completing the square in $A$ and absorbing all terms that do not depend on $A$ into the constant,
$$p_{a|r}(A|R) = k''(R) \exp\left\{ -\frac{1}{2\sigma_0^2} \left[ A^2 - 2A\,\sigma_0^2 \left( \frac{\sum_{i=1}^{N} R_i}{\sigma^2} + \frac{m_a}{\sigma_a^2} \right) + \sigma_0^4 \left( \frac{\sum_{i=1}^{N} R_i}{\sigma^2} + \frac{m_a}{\sigma_a^2} \right)^2 \right] \right\}$$
where $\sigma_0^2 = \left( \dfrac{N}{\sigma^2} + \dfrac{1}{\sigma_a^2} \right)^{-1}$. Thus, we get
$$p_{a|r}(A|R) = k''(R) \exp\left\{ -\frac{1}{2\sigma_0^2} \left[ A - \sigma_0^2 \left( \frac{\sum_{i=1}^{N} R_i}{\sigma^2} + \frac{m_a}{\sigma_a^2} \right) \right]^2 \right\}$$
Since $p_{a|r}(A|R)$ is Gaussian, its mean, median, and mode coincide, and we have
$$\hat{a}_{abs} = \hat{a}_{ms} = \hat{a}_{map} = \sigma_0^2 \left( \frac{\sum_{i=1}^{N} R_i}{\sigma^2} + \frac{m_a}{\sigma_a^2} \right) .$$
Example 2:
$$r = \ln a + n$$
where
$$p_a(A) = \begin{cases} 1, & 0 \le A \le 1 \\ 0, & \text{otherwise} \end{cases}$$
and
$$p_n(N) = e^{-N} u(N) .$$
Solution:
$$p_{r|a}(R|A) = p_n(R - \ln A) = \begin{cases} \exp\{-(R - \ln A)\} = A e^{-R}, & R \ge \ln A \text{ (equivalently } A \le e^R\text{)} \\ 0, & \text{otherwise} \end{cases}$$
Now, $e^R$ can be greater than or less than one, so we have two cases.
Case 1: $e^R > 1$, which implies $R > 0$. The prior restricts $A$ to $[0, 1]$:
$$p_r(R) = \int_{A=0}^{1} p_{r|a}(R|A)\, dA = \int_{A=0}^{1} A e^{-R}\, dA = \frac{e^{-R}}{2} .$$
Case 2: $e^R \le 1$, which implies $R \le 0$. The likelihood restricts $A$ to $[0, e^R]$:
$$p_r(R) = \int_{A=0}^{e^R} p_{r|a}(R|A)\, dA = \int_{A=0}^{e^R} A e^{-R}\, dA = \frac{e^{R}}{2} .$$

Using the above pdfs, we get
$$p_{a|r}(A|R) = \frac{p_{r|a}(R|A)\, p_a(A)}{p_r(R)} = \begin{cases} 2A, & R > 0,\ 0 \le A \le 1 \\ 2A e^{-2R}, & R \le 0,\ 0 \le A \le e^R \end{cases}$$
Next, we determine the estimates of $A$.

1. MSE (posterior mean):
For $R > 0$,
$$\hat{a}_{ms} = \int_0^1 A \cdot 2A\, dA = \frac{2}{3} .$$
For $R \le 0$,
$$\hat{a}_{ms} = \int_0^{e^R} A \cdot 2A e^{-2R}\, dA = \frac{2}{3} e^{R} .$$
2. Absolute error (posterior median):
For $R > 0$, $\hat{a}_{abs}$ satisfies
$$\int_0^{\hat{a}_{abs}} 2A\, dA = \frac{1}{2}, \quad \text{so} \quad \hat{a}_{abs} = \frac{1}{\sqrt{2}} .$$
For $R \le 0$, $\hat{a}_{abs}$ satisfies
$$\int_0^{\hat{a}_{abs}} 2A e^{-2R}\, dA = \frac{1}{2}, \quad \text{so} \quad \hat{a}_{abs} = \frac{e^{R}}{\sqrt{2}} .$$
3. MAP (posterior mode):
For $R > 0$ we have $0 \le A \le 1$; over this range $p_{a|r}(A|R) = 2A$ is increasing in $A$, so its maximum is at $A = 1$ and $\hat{a}_{map} = 1$.
For $R \le 0$ we have $0 \le A \le e^R$; over this range $p_{a|r}(A|R) = 2A e^{-2R}$ is increasing in $A$, so its maximum is at $A = e^R$ and $\hat{a}_{map} = e^{R}$.
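As a numerical check on Example 2, the sketch below evaluates all three estimates on a grid for an assumed observation R = -1 (so the R <= 0 branch applies); the printed values should match (2/3)e^R, e^R/sqrt(2), and e^R:

```python
import numpy as np

# Grid check of Example 2 for an assumed observation R = -1.
R = -1.0
A = np.linspace(0.0, np.exp(R), 10001)
dA = A[1] - A[0]
post = 2.0 * A * np.exp(-2.0 * R)                 # p_{a|r}(A|R) on 0 <= A <= e^R
a_ms = (A * post).sum() * dA                      # posterior mean
cdf = np.cumsum(post) * dA
a_abs = A[np.searchsorted(cdf, 0.5)]              # posterior median
a_map = A[np.argmax(post)]                        # posterior mode
print(a_ms, a_abs, a_map)
print((2/3) * np.exp(R), np.exp(R) / np.sqrt(2), np.exp(R))
```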
