Random Signals
Assumptions:
• Target signal s[n]: random, zero-mean Gaussian random process with known covariance.

The NP detector decides H1 if

L(x) = p(x; H1) / p(x; H0) > γ
We take the signal to be a zero-mean, white, WSS Gaussian random process with known variance. The resulting test statistic is chi-squared distributed; for a chi-squared random variable x with ν degrees of freedom,

E(x) = ν,  var(x) = 2ν,

and the right-tail probability can be written in closed form (the expression involves the Gaussian Q function for odd ν). For our example we set the degrees of freedom to ν = N and use IID data x = [x[0], ..., x[N−1]]ᵀ.
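As a quick numerical sketch (the parameter values are made up, not from the slides), the chi-squared moments and the threshold/tail relation can be checked with SciPy:

```python
# Illustrative sketch: for a chi-squared variable with nu = N degrees of
# freedom, E = nu and var = 2*nu; the detector threshold follows from the
# right-tail probability.
import numpy as np
from scipy.stats import chi2

N = 10                           # degrees of freedom, nu = N
mean, var = chi2.stats(N, moments="mv")
assert mean == N and var == 2 * N            # E = nu, var = 2*nu

gamma = chi2.isf(0.01, N)        # threshold giving P_FA = 0.01 under H0
assert np.isclose(chi2.sf(gamma, N), 0.01)   # right-tail probability
```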
Generalization to Random Signals with Arbitrary Covariance Matrix
We assume the signal s[n] is a zero-mean Gaussian random process with covariance matrix C_s. Using the matrix inversion lemma

(A + BCD)⁻¹ = A⁻¹ − A⁻¹B(DA⁻¹B + C⁻¹)⁻¹DA⁻¹,

we can write (C_s + σ²I)⁻¹ as

(C_s + σ²I)⁻¹ = (1/σ²) I − (1/σ⁴) (C_s⁻¹ + (1/σ²) I)⁻¹.
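A quick numerical check of this specialization of the lemma (C_s and σ² below are arbitrary made-up values, chosen only for illustration):

```python
# Verify (Cs + sigma2*I)^{-1} = (1/sigma2) I - (1/sigma2^2) (Cs^{-1} + (1/sigma2) I)^{-1}
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
Cs = M @ M.T + np.eye(4)          # a valid (positive definite) covariance
sigma2 = 0.5
I = np.eye(4)

lhs = np.linalg.inv(Cs + sigma2 * I)
rhs = I / sigma2 - np.linalg.inv(np.linalg.inv(Cs) + I / sigma2) / sigma2**2
assert np.allclose(lhs, rhs)      # both sides agree
```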
Conclusions
19/12/16

The NP detector correlates the received data with an estimate of the signal; it is therefore termed an estimator-correlator. The test statistic is T(x) = xᵀŝ, with the MMSE estimate of the signal

ŝ = (C_s + σ²I)⁻¹ C_s x = C_s (C_s + σ²I)⁻¹ x
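A minimal numerical sketch of the estimator-correlator (the covariance and data below are made up for illustration):

```python
# Estimator-correlator sketch: s_hat = Cs (Cs + sigma2 I)^{-1} x, T = x^T s_hat
import numpy as np

rng = np.random.default_rng(1)
N, sigma2 = 5, 1.0
M = rng.standard_normal((N, N))
Cs = M @ M.T                      # signal covariance (positive semidefinite)
x = rng.standard_normal(N)        # received data

W = Cs @ np.linalg.inv(Cs + sigma2 * np.eye(N))   # MMSE (Wiener) matrix
s_hat = W @ x                     # MMSE estimate of the signal
T = x @ s_hat                     # correlate the data with the estimate

# Cs and (Cs + sigma2 I)^{-1} commute, so both orderings give the same s_hat:
W2 = np.linalg.inv(Cs + sigma2 * np.eye(N)) @ Cs
assert np.allclose(W, W2)
```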
Summary (1)

Deterministic signals:

T(x) = xᵀC⁻¹s = xᵀDᵀDs = (Dx)ᵀ(Ds),

where C is the noise covariance and D is a prewhitening matrix satisfying C⁻¹ = DᵀD.
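The equivalence of the two forms can be checked numerically, taking D from a Cholesky factorization of C (one standard way to build a prewhitener; the values below are illustrative):

```python
# Colored-noise correlator x^T C^{-1} s equals the prewhitened correlator
# (Dx)^T (Ds) when C^{-1} = D^T D.
import numpy as np

rng = np.random.default_rng(2)
N = 6
B = rng.standard_normal((N, N))
C = B @ B.T + np.eye(N)            # noise covariance (positive definite)
s = rng.standard_normal(N)         # known deterministic signal
x = rng.standard_normal(N)         # received data

L = np.linalg.cholesky(C)          # C = L L^T
D = np.linalg.inv(L)               # then D^T D = C^{-1}
assert np.allclose(D.T @ D, np.linalg.inv(C))
assert np.isclose(x @ np.linalg.inv(C) @ s, (D @ x) @ (D @ s))
```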
24
What if the PDFs are not completely known?
• A radar signal that returns from a target is delayed by the propagation time through the medium; its arrival time is generally unknown.
• A communication receiver may not have perfect knowledge of the frequency of the transmitted signal.
• The noise characteristics may not be known a priori; e.g., the noise is modeled as white Gaussian noise but with unknown variance.
Detection: chapter 6
Spring 2010
• minimize the Bayesian risk (assign costs to decisions, with prior probabilities on the hypotheses): likelihood ratio test, with the threshold set by the priors and costs
Unknown parameters?
Detection under NP:
H0: x[n] = w[n],      n = 0, 1, ..., N − 1
H1: x[n] = A + w[n],  n = 0, 1, ..., N − 1

where A is unknown, but we know A > 0. Further, we know that w[n] is WGN with variance σ². The NP detector decides H1 if

L(x) = p(x; A, H1) / p(x; A, H0) > γ

The PDFs are parameterized by A: this is called composite hypothesis testing.
P_D increases with increasing A. Over all possible detectors that have a given P_FA, the one that yields the highest P_D is the uniformly most powerful (UMP) test; any other test has poorer performance.
A > 0: (1/N) Σ_{n=0}^{N−1} x[n] > γ'₊
A < 0: (1/N) Σ_{n=0}^{N−1} x[n] < γ'₋
For A > 0:

P_FA = Pr{x̄ > γ'₊; H0} = Q(γ'₊ / √(σ²/N))

For A < 0:

P_FA = Pr{x̄ < γ'₋; H0} = 1 − Q(γ'₋ / √(σ²/N)) = Q(−γ'₋ / √(σ²/N))
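As a sketch with made-up parameter values, the threshold and false-alarm relation for the A > 0 case maps directly onto SciPy's Gaussian tail functions (Q = norm.sf, Q⁻¹ = norm.isf):

```python
# Sample-mean detector, A > 0 case: set the threshold from a target P_FA.
import numpy as np
from scipy.stats import norm

N, sigma2, PFA = 20, 1.0, 0.01
scale = np.sqrt(sigma2 / N)        # standard deviation of x_bar under H0
gamma = scale * norm.isf(PFA)      # gamma'_+ = sqrt(sigma2/N) Q^{-1}(P_FA)
assert np.isclose(norm.sf(gamma / scale), PFA)   # P_FA = Q(gamma'_+/sqrt(sigma2/N))
```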
For A < 0:

P_D = Pr{x̄ < γ'₋; H1} = 1 − Q((γ'₋ − A) / √(σ²/N)) = Q((A − γ'₋) / √(σ²/N)) = Q(Q⁻¹(P_FA) + A / √(σ²/N))
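A Monte Carlo sanity check of the A < 0 expression (the parameter values are illustrative, not from the slides):

```python
# Check P_D = Q(Q^{-1}(P_FA) + A/sqrt(sigma2/N)) for the A < 0 detector.
import numpy as np
from scipy.stats import norm

N, sigma2, PFA, A = 25, 1.0, 0.1, -0.5
scale = np.sqrt(sigma2 / N)
gamma = -scale * norm.isf(PFA)          # decide H1 when x_bar < gamma'_-
PD_formula = norm.sf(norm.isf(PFA) + A / scale)

rng = np.random.default_rng(3)
xbar = A + scale * rng.standard_normal(200_000)  # x_bar samples under H1
PD_mc = np.mean(xbar < gamma)
assert abs(PD_mc - PD_formula) < 0.01            # Monte Carlo agrees
```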
1. Bayesian approach:
Assign priors p(θ₀) and p(θ₁) to the unknown parameters θ₀ and θ₁ under hypotheses H0 and H1, respectively:

p(x; H0) = ∫ p(x|θ₀; H0) p(θ₀) dθ₀
p(x; H1) = ∫ p(x|θ₁; H1) p(θ₁) dθ₁

• Need to choose a prior pdf.
2. GLRT:

L_G(x) = p(x; θ̂₁, H1) / p(x; θ̂₀, H0) > γ

with θ̂ᵢ given by the MLE

θ̂ᵢ = arg max_{θᵢ} p(x; θᵢ, Hi)
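For the DC-level example, approach 1 can be sketched by integrating the likelihood over an assumed Gaussian prior on A (the prior choice here is hypothetical; the slides only note that a prior pdf must be chosen):

```python
# Bayesian approach sketch: p(x; H1) = integral p(x | A; H1) p(A) dA,
# with an assumed prior A ~ N(0, sigmaA2).
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm, multivariate_normal

rng = np.random.default_rng(5)
N, sigma2, sigmaA2 = 8, 1.0, 2.0        # sigmaA2: assumed prior variance on A
x = rng.standard_normal(N)              # illustrative data record

def cond_lik(A):
    # p(x | A; H1) for the DC level in WGN
    return np.prod(norm.pdf(x, loc=A, scale=np.sqrt(sigma2)))

pxH1, _ = quad(lambda A: cond_lik(A) * norm.pdf(A, scale=np.sqrt(sigmaA2)),
               -10, 10)

# Closed-form check: marginally x ~ N(0, sigma2*I + sigmaA2*1*1^T)
C = sigma2 * np.eye(N) + sigmaA2 * np.ones((N, N))
assert np.isclose(pxH1, multivariate_normal(mean=np.zeros(N), cov=C).pdf(x),
                  rtol=1e-4)
```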
H0: x[n] = w[n],      n = 0, 1, ..., N − 1
H1: x[n] = A + w[n],  n = 0, 1, ..., N − 1
p(x; Â, H1) = max_A (1 / (2πσ²)^{N/2}) exp[−(1/(2σ²)) Σ_{n=0}^{N−1} (x[n] − A)²].

This leads to Â = (1/N) Σ_{n=0}^{N−1} x[n] = x̄, and

ln L_G(x) = −(1/(2σ²)) (−2N x̄² + N x̄²) = N x̄² / (2σ²)
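A numerical sketch confirming this reduction (the data are simulated with an arbitrary true A):

```python
# GLRT for the DC-level example: A_hat = x_bar and
# ln L_G(x) = N x_bar^2 / (2 sigma2).
import numpy as np

rng = np.random.default_rng(4)
N, sigma2 = 50, 1.0
x = 0.3 + np.sqrt(sigma2) * rng.standard_normal(N)   # data under H1, A = 0.3

xbar = x.mean()                                      # MLE: A_hat = x_bar
# Direct evaluation of ln [ p(x; A_hat, H1) / p(x; H0) ]:
lnLG_direct = (np.sum(x ** 2) - np.sum((x - xbar) ** 2)) / (2 * sigma2)
lnLG_closed = N * xbar ** 2 / (2 * sigma2)
assert np.isclose(lnLG_direct, lnLG_closed)
```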
The two-sided test decides H1 if |x̄| > γ''. Then

P_FA = Pr{|x̄| > γ''; H0} = 2 Pr{x̄ > γ''; H0} = 2 Q(γ'' / √(σ²/N))

so that

γ'' = √(σ²/N) Q⁻¹(P_FA / 2)

and

P_D = Pr{|x̄| > γ''; H1} = Q(Q⁻¹(P_FA/2) − √(N A²/σ²)) + Q(Q⁻¹(P_FA/2) + √(N A²/σ²))
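A Monte Carlo check of the two-sided expressions with illustrative parameter values:

```python
# Two-sided sample-mean detector: threshold from P_FA, then verify P_D.
import numpy as np
from scipy.stats import norm

N, sigma2, PFA, A = 30, 1.0, 0.05, 0.4
scale = np.sqrt(sigma2 / N)
gamma = scale * norm.isf(PFA / 2)              # gamma'' from the target P_FA
d = np.sqrt(N * A ** 2 / sigma2)               # deflection sqrt(N A^2 / sigma2)
PD = norm.sf(norm.isf(PFA / 2) - d) + norm.sf(norm.isf(PFA / 2) + d)

rng = np.random.default_rng(6)
xbar = A + scale * rng.standard_normal(200_000)  # x_bar samples under H1
PD_mc = np.mean(np.abs(xbar) > gamma)
assert abs(PD_mc - PD) < 0.01                    # Monte Carlo agrees
```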