Saravanan Vijayakumaran
sarva@ee.iitb.ac.in
Motivation
System Model used to Derive Optimal Receivers
I want the truth!
You can't handle the truth!
Why Study the Simplified System Model?
Unsimplifying the System Model
Effect of Propagation Delay
Consider a complex baseband signal
$$s(t) = \sum_{n=-\infty}^{\infty} b_n p(t - nT)$$
A propagation delay $\tau$ turns the received signal into a delayed version $s(t - \tau)$, and the delay also appears as a carrier phase shift $\theta = -2\pi f_c \tau$ after downconversion
Unsimplifying the System Model
Effect of Carrier Offset
Frequency of the local oscillator (LO) at the receiver differs from that of the transmitter
Suppose the LO frequency at the transmitter is $f_c$
$$s_p(t) = \operatorname{Re}\left[ \sqrt{2}\, s(t) e^{j2\pi f_c t} \right]$$
If the receiver LO frequency differs from $f_c$ by $\Delta f$, the downconverted complex baseband signal acquires a time-varying phase rotation of $2\pi \Delta f\, t$
Unsimplifying the System Model
Effect of Clock Offset
Frequency of the clock at the receiver differs from that of
the transmitter
The clock frequency determines the sampling instants at
the matched filter output
Suppose the symbol rate at the transmitter is $\frac{1}{T}$ symbols per second
Suppose the receiver sampling rate is $\frac{1+\delta}{T}$ symbols per second, where $|\delta| \ll 1$ and $\delta$ may be positive or negative
The actual sampling instants and the ideal sampling instants will drift apart over time, as the sketch below illustrates
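A minimal numeric sketch of this drift; the symbol period and clock offset below are assumed values chosen only for illustration:

```python
import numpy as np

# Ideal sampling instants are n*T, while a receiver clock running at
# (1 + delta)/T samples per second samples at n*T/(1 + delta).
# The gap between the two grows linearly with n.
T = 1e-6          # symbol period: 1 microsecond (assumed value)
delta = 50e-6     # 50 ppm clock offset (assumed value)
n = np.arange(0, 100001, 20000)

ideal = n * T
actual = n * T / (1 + delta)
drift = ideal - actual

for k, d in zip(n, drift):
    print(f"symbol {k:6d}: drift = {d/T:.3f} symbol periods")
```

After 100000 symbols a 50 ppm offset has already shifted the sampling instants by about five symbol periods, which is why the offset must be tracked rather than ignored.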
The Solution
Estimate the unknown parameters $\tau$, $\theta$, $\Delta f$ and $\delta$
Timing Synchronization: Estimation of $\tau$
Carrier Synchronization: Estimation of $\theta$ and $\Delta f$
Clock Synchronization: Estimation of $\delta$
Perform demodulation after synchronization
Parameter Estimation
Hypothesis testing was about making a choice between
discrete states of nature
Parameter or point estimation is about choosing from a
continuum of possible states
Example
Consider the complex baseband received signal
$$y(t) = A e^{j\theta} s(t - \tau) + n(t)$$
The phase $\theta$ can take any real value in the interval $[0, 2\pi)$
The amplitude $A$ can be any real number
The delay $\tau$ can be any real number
System Model for Parameter Estimation
Consider a family of distributions
$$Y \sim P_\theta, \quad \theta \in \Lambda$$
where $\theta$ is the unknown parameter taking values in the parameter space $\Lambda$
Which is the Optimal Estimator?
Assume there is a cost function $C$ which quantifies the estimation error
$$C : \Lambda \times \Lambda \to \mathbb{R}$$
such that $C[a, \theta]$ is the cost of estimating the true value of $\theta$ as $a$
Examples of cost functions (illustrated in the sketch after this list)
Squared Error: $C[a, \theta] = (a - \theta)^2$
Absolute Error: $C[a, \theta] = |a - \theta|$
Threshold Error: $C[a, \theta] = \begin{cases} 0 & \text{if } |a - \theta| \le \Delta \\ 1 & \text{if } |a - \theta| > \Delta \end{cases}$
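A small Python sketch of the three cost functions; the value of $\Delta$ below is an arbitrary choice:

```python
import numpy as np

# The three cost functions above, vectorized over candidate estimates a.
# delta (the threshold width) is a free parameter of the threshold cost.
def squared_error(a, theta):
    return (a - theta) ** 2

def absolute_error(a, theta):
    return np.abs(a - theta)

def threshold_error(a, theta, delta=0.5):
    return np.where(np.abs(a - theta) <= delta, 0.0, 1.0)

a = np.array([0.9, 1.0, 1.8])
print(squared_error(a, 1.0))    # [0.01 0.   0.64]
print(absolute_error(a, 1.0))   # [0.1 0.  0.8]
print(threshold_error(a, 1.0))  # [0. 0. 1.]
```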
Which is the Optimal Estimator?
With an estimator $\hat{\theta}$ we associate a conditional cost or risk conditioned on $\theta$
$$R_\theta(\hat{\theta}) = E\left\{ C\left[ \hat{\theta}(Y), \theta \right] \right\}$$
Which is the Optimal Estimator?
Given that $\Theta$ is a random variable with a prior distribution,
$$R_\theta(\hat{\theta}) = E\left\{ C\left[ \hat{\theta}(Y), \theta \right] \right\} = E\left\{ C\left[ \hat{\theta}(Y), \Theta \right] \;\middle|\; \Theta = \theta \right\}$$
The Bayes estimator minimizes the average risk $E\{R_\Theta(\hat{\theta})\}$, which is achieved by minimizing the posterior cost $E\{C[\hat{\theta}(y), \Theta] \mid Y = y\}$ for each observation $y$
Minimum-Mean-Squared-Error (MMSE) Estimation
$$C[a, \theta] = (a - \theta)^2$$
The posterior cost is given by
$$E\left[ \left( \hat{\theta}(y) - \Theta \right)^2 \,\middle|\, Y = y \right] = \hat{\theta}(y)^2 - 2\hat{\theta}(y)\, E\left[ \Theta \mid Y = y \right] + E\left[ \Theta^2 \mid Y = y \right]$$
Minimizing over $\hat{\theta}(y)$ gives
$$\hat{\theta}_{MMSE}(y) = E\left[ \Theta \mid Y = y \right]$$
Example 1: MMSE Estimation
Suppose X and Y are jointly Gaussian random variables
Let the joint pdf be given by
$$p_{XY}(x, y) = \frac{1}{2\pi |\Sigma|^{\frac{1}{2}}} \exp\left( -\frac{1}{2} (\mathbf{s} - \boldsymbol{\mu})^T \Sigma^{-1} (\mathbf{s} - \boldsymbol{\mu}) \right)$$
where
$$\mathbf{s} = \begin{bmatrix} x \\ y \end{bmatrix}, \quad \boldsymbol{\mu} = \begin{bmatrix} \mu_x \\ \mu_y \end{bmatrix}, \quad \Sigma = \begin{bmatrix} \sigma_x^2 & \rho \sigma_x \sigma_y \\ \rho \sigma_x \sigma_y & \sigma_y^2 \end{bmatrix}$$
Suppose $Y$ is observed and we want to estimate $X$
The MMSE estimate of $X$ is
$$\hat{X}_{MMSE}(y) = E\left[ X \mid Y = y \right]$$
Example 1: MMSE Estimation
The conditional distribution of $X$ given $Y = y$ is Gaussian with mean
$$\mu_{X|y} = \mu_x + \frac{\rho \sigma_x}{\sigma_y} (y - \mu_y)$$
and variance
$$\sigma^2_{X|y} = (1 - \rho^2) \sigma_x^2$$
Thus the MMSE estimate of $X$ given $Y = y$ is
$$\hat{X}_{MMSE}(y) = \mu_x + \frac{\rho \sigma_x}{\sigma_y} (y - \mu_y)$$
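A simulation sketch that cross-checks this closed form; the means, variances and $\rho$ below are assumed values, and the empirical mean squared error of the estimate should match the conditional variance $(1 - \rho^2)\sigma_x^2$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed parameters for the jointly Gaussian pair (X, Y).
mu_x, mu_y = 1.0, -2.0
sigma_x, sigma_y, rho = 2.0, 1.5, 0.8

# Draw correlated Gaussians: X = mu_x + sigma_x*(rho*U + sqrt(1-rho^2)*V)
U = rng.standard_normal(200000)
V = rng.standard_normal(200000)
Y = mu_y + sigma_y * U
X = mu_x + sigma_x * (rho * U + np.sqrt(1 - rho**2) * V)

# Closed-form MMSE estimate from the slide.
X_hat = mu_x + rho * (sigma_x / sigma_y) * (Y - mu_y)

mse_mmse = np.mean((X - X_hat) ** 2)
print(f"empirical MSE  : {mse_mmse:.4f}")
print(f"(1-rho^2)*sx^2 : {(1 - rho**2) * sigma_x**2:.4f}")  # theoretical value
```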
Example 2: MMSE Estimation
Suppose $A$ is a Gaussian RV with mean $\mu$ and known variance $v^2$
Suppose we observe $Y_i$, $i = 1, 2, \ldots, M$ such that
$$Y_i = A + N_i$$
where the $N_i$ are i.i.d. $\mathcal{N}(0, \sigma^2)$ independent of $A$
The MMSE estimate is
$$\hat{A}_{MMSE}(\mathbf{y}) = \frac{\frac{Mv^2}{\sigma^2}\, \hat{A}_1(\mathbf{y}) + \mu}{\frac{Mv^2}{\sigma^2} + 1}$$
where $\hat{A}_1(\mathbf{y}) = \frac{1}{M} \sum_{i=1}^{M} y_i$
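A short sketch of this estimator with assumed values for $\mu$, $v$, $\sigma$ and $M$; note how the estimate shrinks the sample mean $\hat{A}_1$ toward the prior mean $\mu$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed parameters: prior A ~ N(mu, v^2), noise N_i ~ N(0, sigma^2) i.i.d.
mu, v, sigma, M = 2.0, 1.0, 3.0, 25

A = rng.normal(mu, v)                    # draw the true parameter
y = A + rng.normal(0.0, sigma, size=M)   # observations Y_i = A + N_i

A1 = y.mean()                            # sample-mean estimate
w = M * v**2 / sigma**2
A_mmse = (w * A1 + mu) / (w + 1)         # formula from the slide

print(f"true A      : {A:.3f}")
print(f"sample mean : {A1:.3f}")
print(f"MMSE        : {A_mmse:.3f}")
```

As $M v^2 / \sigma^2$ grows (many observations or a weak prior), the weight $w$ dominates and the MMSE estimate approaches the sample mean.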
Minimum-Mean-Absolute-Error (MMAE) Estimation
$$C[a, \theta] = |a - \theta|$$
The Bayes estimate $\hat{\theta}_{ABS}$ is given by the median of the posterior density $p(\theta \mid Y = y)$:
$$\Pr\left[ \Theta < t \mid Y = y \right] \le \Pr\left[ \Theta > t \mid Y = y \right], \quad t < \hat{\theta}_{ABS}(y)$$
$$\Pr\left[ \Theta < t \mid Y = y \right] \ge \Pr\left[ \Theta > t \mid Y = y \right], \quad t > \hat{\theta}_{ABS}(y)$$
[Figure: the posterior density $p(\theta \mid Y = y)$ with the areas $\Pr[\Theta < t \mid Y = y]$ and $\Pr[\Theta > t \mid Y = y]$ shown on either side of a point $t$ near $\hat{\theta}_{ABS}(y)$]
Minimum-Mean-Absolute-Error (MMAE) Estimation
For $\Pr[X \ge 0] = 1$, $E[X] = \int_0^\infty \Pr[X > x]\, dx$
Since $|\hat{\theta}(y) - \Theta| \ge 0$,
$$\begin{aligned}
E\left[ |\hat{\theta}(y) - \Theta| \mid Y = y \right]
&= \int_0^\infty \Pr\left[ |\hat{\theta}(y) - \Theta| > x \mid Y = y \right] dx \\
&= \int_0^\infty \Pr\left[ \Theta > x + \hat{\theta}(y) \mid Y = y \right] dx + \int_0^\infty \Pr\left[ \Theta < -x + \hat{\theta}(y) \mid Y = y \right] dx \\
&= \int_{\hat{\theta}(y)}^\infty \Pr\left[ \Theta > t \mid Y = y \right] dt + \int_{-\infty}^{\hat{\theta}(y)} \Pr\left[ \Theta < t \mid Y = y \right] dt
\end{aligned}$$
Minimum-Mean-Absolute-Error (MMAE) Estimation
Differentiating $E\left[ |\hat{\theta}(y) - \Theta| \mid Y = y \right]$ with respect to $\hat{\theta}(y)$,
$$\begin{aligned}
\frac{\partial}{\partial \hat{\theta}(y)}\, E\left[ |\hat{\theta}(y) - \Theta| \mid Y = y \right]
&= \frac{\partial}{\partial \hat{\theta}(y)} \left[ \int_{\hat{\theta}(y)}^\infty \Pr\left[ \Theta > t \mid Y = y \right] dt + \int_{-\infty}^{\hat{\theta}(y)} \Pr\left[ \Theta < t \mid Y = y \right] dt \right] \\
&= \Pr\left[ \Theta < \hat{\theta}(y) \mid Y = y \right] - \Pr\left[ \Theta > \hat{\theta}(y) \mid Y = y \right]
\end{aligned}$$
Setting the derivative to zero shows that the minimizing $\hat{\theta}(y)$ makes the two posterior tail probabilities equal, i.e., it is the posterior median
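A numerical sketch of this result, using an arbitrary skewed density as a stand-in for the posterior: the grid point minimizing the mean absolute error coincides with the posterior median, not the posterior mean.

```python
import numpy as np

# A skewed discrete stand-in for a posterior p(theta | Y = y) on a grid.
theta = np.linspace(0.0, 10.0, 2001)
post = np.exp(-theta) * theta**2          # unnormalized Gamma(3,1) shape
post /= post.sum()

# Posterior mean, and median via the CDF.
mean = np.sum(theta * post)
median = theta[np.searchsorted(np.cumsum(post), 0.5)]

# Expected absolute error for every candidate estimate a on the grid.
mae = np.array([np.sum(np.abs(a - theta) * post) for a in theta])
best = theta[np.argmin(mae)]

print(f"posterior mean   : {mean:.3f}")   # minimizes squared error instead
print(f"posterior median : {median:.3f}")
print(f"argmin of MAE    : {best:.3f}")   # matches the median
```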
Maximum A Posteriori (MAP) Estimation
For the threshold cost function, we have
$$\begin{aligned}
E\left[ C\left[ \hat{\theta}(y), \Theta \right] \mid Y = y \right]
&= \int_{-\infty}^{\infty} C\left[ \hat{\theta}(y), \theta \right] p(\theta \mid Y = y)\, d\theta \\
&= \int_{-\infty}^{\hat{\theta}(y) - \Delta} p(\theta \mid Y = y)\, d\theta + \int_{\hat{\theta}(y) + \Delta}^{\infty} p(\theta \mid Y = y)\, d\theta \\
&= 1 - \int_{\hat{\theta}(y) - \Delta}^{\hat{\theta}(y) + \Delta} p(\theta \mid Y = y)\, d\theta
\end{aligned}$$
[Figure: the posterior density with a shaded strip of width $2\Delta$ centered at $\hat{\theta}(y)$; the shaded area is the integral $\int_{\hat{\theta}(y) - \Delta}^{\hat{\theta}(y) + \Delta} p(\theta \mid Y = y)\, d\theta$]
Maximum A Posteriori (MAP) Estimation
The posterior cost is minimized by choosing $\hat{\theta}(y)$ to maximize the shaded-area integral $\int_{\hat{\theta}(y) - \Delta}^{\hat{\theta}(y) + \Delta} p(\theta \mid Y = y)\, d\theta$
For small $\Delta$, this is achieved at the peak of the posterior density
$$\hat{\theta}_{MAP}(y) = \underset{\theta}{\operatorname{argmax}}\ p(\theta \mid Y = y)$$
[Figure: the posterior density $p(\theta \mid Y = y)$ with the maximizing width-$2\Delta$ interval centered at $\hat{\theta}_{MAP}(y)$]
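A grid-search sketch of MAP estimation for the Gaussian prior/likelihood setup of Example 2 (all parameter values below are assumed); since the posterior is Gaussian here, the MAP estimate should agree with the MMSE estimate:

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed setup: Theta ~ N(mu, v^2), Y_i = Theta + N_i, N_i ~ N(0, sigma^2).
mu, v, sigma, M = 0.0, 1.0, 2.0, 10
theta_true = rng.normal(mu, v)
y = theta_true + rng.normal(0.0, sigma, size=M)

# Evaluate the log posterior (up to a constant) on a grid and maximize.
grid = np.linspace(-5, 5, 10001)
log_post = -((grid - mu) ** 2) / (2 * v**2)                          # log prior
log_post -= ((y[:, None] - grid) ** 2).sum(axis=0) / (2 * sigma**2)  # log likelihood

theta_map = grid[np.argmax(log_post)]

# For a Gaussian posterior the MAP and MMSE estimates coincide.
w = M * v**2 / sigma**2
theta_mmse = (w * y.mean() + mu) / (w + 1)
print(f"grid MAP : {theta_map:.4f}")
print(f"MMSE     : {theta_mmse:.4f}")
```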
Maximum Likelihood (ML) Estimation
The ML estimator is given by
$$\hat{\theta}_{ML}(y) = \underset{\theta}{\operatorname{argmax}}\ p(y \mid \theta)$$
It coincides with the MAP estimator when the prior on $\Theta$ is uniform
Example 1: ML Estimation
Suppose we observe $Y_i$, $i = 1, 2, \ldots, M$ such that
$$Y_i \sim \mathcal{N}(\mu, \sigma^2)$$
with $\mu$ unknown. The ML estimate of $\mu$ is
$$\hat{\mu}_{ML}(\mathbf{y}) = \frac{1}{M} \sum_{i=1}^{M} y_i$$
Assignment 5
Example 2: ML Estimation
Suppose we observe $Y_i$, $i = 1, 2, \ldots, M$ such that
$$Y_i \sim \mathcal{N}(\mu, \sigma^2)$$
with both $\mu$ and $\sigma^2$ unknown. The ML estimates are
$$\hat{\mu}_{ML}(\mathbf{y}) = \frac{1}{M} \sum_{i=1}^{M} y_i$$
$$\hat{\sigma}^2_{ML}(\mathbf{y}) = \frac{1}{M} \sum_{i=1}^{M} \left( y_i - \hat{\mu}_{ML}(\mathbf{y}) \right)^2$$
Assignment 5
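A numerical cross-check of these formulas (not the derivation asked for in Assignment 5): minimizing the negative log-likelihood numerically recovers the sample mean and the $\frac{1}{M}$-normalized sample variance. All parameter values below are assumed.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
mu_true, sigma_true, M = 1.5, 2.0, 500    # assumed values
y = rng.normal(mu_true, sigma_true, size=M)

# Negative log-likelihood of N(m, var) for the sample.
def nll(params):
    m, var = params
    return 0.5 * M * np.log(2 * np.pi * var) + np.sum((y - m) ** 2) / (2 * var)

res = minimize(nll, x0=[0.0, 1.0], bounds=[(None, None), (1e-6, None)])
print("numerical ML :", res.x)
print("closed form  :", [y.mean(), y.var()])  # np.var uses the 1/M normalization
```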
Example 3: ML Estimation
Suppose we observe $Y_i$, $i = 1, 2, \ldots, M$ such that
$$Y_i \sim \text{Bernoulli}(p)$$
The ML estimate of $p$ is
$$\hat{p}_{ML}(\mathbf{y}) = \frac{1}{M} \sum_{i=1}^{M} y_i$$
Assignment 5
Example 4: ML Estimation
Suppose we observe $Y_i$, $i = 1, 2, \ldots, M$ such that
$$Y_i \sim \text{Uniform}[0, \theta]$$
Assignment 5
Reference
Chapter 4 of H. V. Poor, An Introduction to Signal Detection and Estimation, Second Edition, Springer-Verlag, 1994.
Parameter Estimation of Random Processes
ML Estimation Requires Conditional Densities
ML estimation involves maximizing the conditional density with respect to the unknown parameters
Example: $Y \sim \mathcal{N}(\mu, \sigma^2)$ where $\mu$ is known and $\sigma^2$ is unknown
$$p(y \mid \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left( -\frac{(y - \mu)^2}{2\sigma^2} \right)$$
For waveform observations in continuous time, such conditional densities are not available, so we maximize likelihood ratios instead
Maximizing Likelihood Ratio for ML Estimation
Consider $Y \sim \mathcal{N}(\mu, \sigma^2)$ where $\mu$ is unknown and $\sigma^2$ is known
$$p(y \mid \mu) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left( -\frac{(y - \mu)^2}{2\sigma^2} \right)$$
Consider the hypothesis testing problem
$$H_1 : Y \sim \mathcal{N}(\mu, \sigma^2)$$
$$H_0 : Y \sim \mathcal{N}(0, \sigma^2)$$
Since the density under $H_0$ does not depend on $\mu$, maximizing the likelihood ratio $L(y \mid \mu) = p(y \mid \mu) / p(y \mid H_0)$ over $\mu$ is equivalent to maximizing $p(y \mid \mu)$
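A one-observation sketch of this equivalence with assumed values; the log-likelihood and the log-likelihood ratio differ by a term that is constant in $\mu$, so their maximizers coincide:

```python
import numpy as np

rng = np.random.default_rng(4)
sigma = 1.0
mu_true = 0.7                              # assumed value
y = rng.normal(mu_true, sigma)             # a single observation

grid = np.linspace(-3, 3, 6001)
log_p = -((y - grid) ** 2) / (2 * sigma**2)     # log p(y | mu) + const
log_L = log_p - (-(y**2) / (2 * sigma**2))      # log [ p(y|mu) / p(y|0) ]

# The denominator p(y | 0) does not depend on mu, so both maximizers agree.
print("argmax p(y|mu) :", grid[np.argmax(log_p)])
print("argmax L(y|mu) :", grid[np.argmax(log_L)])  # identical
```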
Likelihood Ratio of a Signal in AWGN
Let $H_s(\theta)$ be the hypothesis corresponding to the following received signal
$$H_s(\theta) : y(t) = s_\theta(t) + n(t)$$
$$H_0 : y(t) = n(t)$$
Project the received signal onto $s_\theta$ and consider the residual
$$Z = \langle y, s_\theta \rangle, \qquad y^\perp(t) = y(t) - \langle y, s_\theta \rangle \frac{s_\theta(t)}{\|s_\theta\|^2}$$
Likelihood Ratio of a Signal in AWGN
Under both hypotheses $y^\perp(t)$ is equal to $n^\perp(t)$ where
$$n^\perp(t) = n(t) - \langle n, s_\theta \rangle \frac{s_\theta(t)}{\|s_\theta\|^2}$$
so $y^\perp(t)$ is irrelevant to the decision and $Z$ captures all the useful information
$$H_s(\theta) : Z \sim \mathcal{N}\left( \|s_\theta\|^2, \sigma^2 \|s_\theta\|^2 \right)$$
$$H_0 : Z \sim \mathcal{N}\left( 0, \sigma^2 \|s_\theta\|^2 \right)$$
Likelihood Ratio of Signals in AWGN
The likelihood ratio of a signal in real AWGN is
$$L(y \mid s_\theta) = \exp\left( \frac{1}{\sigma^2} \left[ \langle y, s_\theta \rangle - \frac{\|s_\theta\|^2}{2} \right] \right)$$
For a signal in complex AWGN it is
$$L(y \mid s_\theta) = \exp\left( \frac{1}{\sigma^2} \left[ \operatorname{Re}\left( \langle y, s_\theta \rangle \right) - \frac{\|s_\theta\|^2}{2} \right] \right)$$
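A discrete-time sketch that applies this likelihood ratio to delay estimation, tying back to the timing-synchronization problem; the pulse shape, delay and noise level below are all assumed values:

```python
import numpy as np

rng = np.random.default_rng(5)

# Discrete-time stand-in: s_tau is a known pulse delayed by tau samples,
# received in white Gaussian noise.
pulse = np.ones(8)                 # rectangular pulse, 8 samples (assumed)
N, sigma, true_delay = 128, 0.5, 37

s = np.zeros(N)
s[true_delay:true_delay + len(pulse)] = pulse
y = s + sigma * rng.standard_normal(N)

# log L(y | s_tau) = ( <y, s_tau> - ||s_tau||^2 / 2 ) / sigma^2
delays = np.arange(N - len(pulse))
energy = np.sum(pulse**2)          # ||s_tau||^2, constant over tau here
log_L = np.empty(len(delays))
for i, tau in enumerate(delays):
    s_tau = np.zeros(N)
    s_tau[tau:tau + len(pulse)] = pulse
    log_L[i] = (y @ s_tau - energy / 2) / sigma**2

print("true delay :", true_delay)
print("ML delay   :", delays[np.argmax(log_L)])
```

Since $\|s_\tau\|^2$ does not vary with the delay here, maximizing the likelihood ratio reduces to picking the delay with the largest correlator output, which is the classical timing-estimation receiver.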
Thanks for your attention