You are on page 1of 169

Chapter 1

Probability Concepts


1.1 The given sets are:
A = {1,2,3,4} B = {0,1,2,3,4,5,6,7,8}
C = {x | x real and 1 x <3} D = {2,4,7} E = {4,7,8,9,10}.
We observe that:

A is finite and countable. D is finite and countable.
B is finite and countable. E is finite and countable.
C is infinite and uncountable.

1.2 By inspection,
(a) B AI = A = {1,2,3,4}.
(b) E D B A U U U = {1,2,3,4,5,6,7,8,9,10}.
(c) D E B I U ) ( = D = {2,4,7}.
(d) E B = {1,2,3,5,6}.
(e) E D B A I I I ={4}.
1.3 The universal set is U = {0,1,2,3,4,5,6,7,8,9,10,11,12}. The subsets are
A = {0,1,4,6,7,9}, B = {2,4,6,8,10,12} and C = {1,3,5,7,9,11}. By inspection,
(a) B AI = {4,6}. (b) ( B AU ) C I = C AI = {1,7,9}.

1
Signal Detection and Estimation
2
(c) C BU = {0}. (d) A B = {0,1,7,9}.
(e) ) ( ) ( C A B A U I U = ) ( C B A I U = A = {0,1,4,6,7,9}.
(f ) C AI = {0,4,6}. (g) B = C .
(h) C BI = B = {2,4,6,8,10,12}.

1.4 Applying the definitions, we have












U A
B
(a) B A (b) C B A I U ) (
U
A
B
C
U
A B
(e) AB
U
A B


C D

A
U
A d) (
D C B A c I I I ) (
Probability Concepts
3
1.5 This is easily verified by Venn diagrams. Hence, we have

B A and C B , then C A

1.6 By inspection, B and C are mutually exclusive.

1.7 Let R, W and B denote red ball drawn, white ball drawn and blue ball drawn,
respectively

(a) 5 . 0
2
1
7 3 10
10
balls of number Total
balls red of Number
) ( = =
+ +
= = R P .
(b) 15 . 0
20
3
) ( = = W P . (c) 35 . 0
20
7
) ( = = B P .
(d) 5 . 0
2
1
) ( 1 ) ( = = = R P R P . (e) 65 . 0
20
13
20
3 10
) ( = =
+
= W R P U .

1.8 Let B
1
first ball drawn is blue.
W
2
2
nd
ball drawn is white.
R
3
3
rd
ball drawn is red.

(a) The ball is replaced before the next draw the events are independent
and hence,
) | ( ) | ( ) ( ) (
2 1 3 1 2 1 3 2 1
W B R P B W P B P R W B P I I I =
) ( ) ( ) (
3 2 1
R P W P B P =
02625 . 0
8000
210
20
10
20
3
20
7
= = =
A
B
C
U
B C
A
B
10R , 3W,

7B
Signal Detection and Estimation
4
(b) Since the ball is not replaced, the sample size changes and thus, the
events are dependent. Hence,
) | ( ) | ( ) ( ) (
2 1 3 1 2 1 3 2 1
W B R P B W P B P R W B P I I I =
0307 . 0
18
10
19
3
20
7
= =
1.9

Let R
1
and R
2
denote draw a red ball from box B
1
and B
2
respectively, and let
W
1
and W
2
denote also draw a white ball from B
1
and B
2
.

(a) ) ( ) ( ) | ( ) ( ) (
2 1 1 2 1 2 1
R P R P R R P R P R R P = = I since the events are
independent. Hence,
111 . 0
9
1
9
2
20
10
) (
2 1
= = = R R P I .
(b) Similarly, 1 . 0
9
6
20
3
) ( ) ( ) (
2 1 2 1
= = = W P W P W W P I
(c) Since we can have a different color from each box separately, then
25 . 0
20
7
9
6
9
1
20
3
) ( ) ( ) (
1 2 2 1
= + = + = B W P B W P B W P I I I .
1.10 Let B
1
and B
2
denote Box 1 and 2 respectively. Let B denote drawing a black
ball and W a white ball. Then ,



10R , 3W

7B

2R , 6W

1B
B
1
B
2

4W , 2B

3W , 5B
B
1 B
2
Probability Concepts
5
Let B
2
be the larger box, then P(B
2
) = 2P(B
1
). Since 1 ) ( ) (
1 2
= + B P B P , we
obtain
3
2
) ( and
3
1
) (
2 1
= = B P B P .

(a) P(1B | B
2
) = 625 . 0
8
5
= .
(b) P(1B | B
1
) = 3333 . 0
6
2
= .
(c) This is the total probability of drawing a black ball. Hence
. 5278 . 0
3
1
6
2
3
2
8
5
) ( ) | 1 ( ) ( ) | 1 ( ) 1 (
1 1 2 2
= + =
+ =

B P B B P B P B B P B P

(d) Similarly, the probability of drawing a white ball is
. 4722 . 0
3
1
6
4
3
2
8
3
) ( ) | 1 ( ) ( ) | 1 ( ) 1 (
1 1 2 2
= + =
+ =

B P B W P B P B W P W P

1.11 In four tosses:__ __ __ __, we have three 1s and one is not 1. For example
1 111 . Hence, the probability is
6
5
6
1
6
5
6
1
6
1
6
1
3
|
.
|

\
|
= but we have
|
|
.
|

\
|
3
4
ways of
obtaining this. Therefore, the probability of obtaining 3 ones in 4 tosses is
01543 . 0
6
5
6
1
! 1 ! 3
! 4
3
= |
.
|

\
|
.

1.12 Let R, W and G represent drawing a red ball, a white ball, and a green ball
respectively. Note that the probability of selecting Urn A is P(Urn A) = 0.6, Urn B
is P(Urn B) = 0.2 and Urn C is P(Urn C) = 0.2 since
P(Urn A)+P(Urn B)+P(Urn C) =1.
(a) ( P 1W | Urn B) = 3 . 0
100
30
) (Urn
) Urn 1 (
= =
B P
B W P

I
.
(b) ( P 1G | Urn B) = 4 . 0
100
40
= .
(c) P(Urn C | R) =
) (
) (Urn
R P
R C P I
. Also,
Signal Detection and Estimation
6
P(R | Urn C) =
) (
) (Urn ) Urn | (
) | (Urn
) (Urn
(Urn
R P
C P C R P
R C P
C P
R C P
=
) I
.

We need to determine the total probability of drawing a red ball, which is
( ) ( ) ( ) 32 . 0 2 . 0
100
40
2 . 0
100
30
6 . 0
100
30
) (Urn ) Urn | ( ) (Urn ) Urn | ( ) (Urn ) Urn | ( ) (
= + + =
+ + =

C P C R P B P B R P A P A R P R P
Thus, 25 . 0
32 . 0
) 2 . 0 ( ) 4 . 0 (
) | (Urn = = R C P .
1.13 In drawing k balls, the probability that the sample drawn does not contain a
particular ball in the event E
i
, i = 0, 1,2, , 9, is

M
k
j i
k
i
E E P
E P
|
.
|

\
|
=
|
.
|

\
|
=
10
8
) (
10
9
) (

(a) P(A) = P(neither ball 0 nor ball1) = P(E
0
E
1
) =
k
k
10
8
.
(b) P(B) = P( ball 1 does not appear but ball 2 does)
=
k
k k
k
k
k
k
E E P E P
10
8 9
10
8
10
9
) ( ) (
2 1 1

= = .
(c) P(AB) = ) (
2 1 0
E E E P = = ) ( ) (
2 1 0 1 0
E E E P E E P
k
k k
k
k
k
k
10
7 8
10
7
10
8
= .
(d)
k
k k k
AB P B P A P B A P
10
7 8 9
) ( ) ( ) ( ) (
+
= + = U .

1.14 We have

<
+
=

0 , 0
0 , ) 3 (
2
1
2
1
) (
x
x x e
x f
x
X


Probability Concepts
7
(a)




= + = + = + =
0 0 0 0
1
2
1
2
1
) 3 (
2
1
2
1
)] 3 (
2
1
2
1
[ ) ( dx x dx e dx x e dx x f
x x
X
.
Hence, ) (x f
X
is a density function.

(a) P(X = 1) = 0 (the probability at a point of a continuous function is zero).
5 . 0
2
1
) 3 ( = = = X P .
( ) 6839 . 0 1
2
1
2
1
2
1
) ( ) 1 (
1
1 1
= + = + = =



e dx e dx x f X P
x
X
.
1.15

(a) The cumulative distribution function of X for all the range of x is,


+ = = =
x x
X X
x x du du u f x F
3
1 3 for
8
3
8
1
8
1
) ( ) ( ,
and

+ = +
x
x x du
1
1 1 for
2
1
4
1
4
1
4
1
,
and 3 1 for
8
5
8 8
1
4
3
1
+ = +

x
x
du
x
,
(1/2)
1/2
. . x
0 1 2 3
1/8
fX(x)
. . . . x
-3 -2 -1 0 1 2 3
1/4
fx(x)
Signal Detection and Estimation
8
and 3 for 1 ) ( = x x F
X
.
Thus,


(b) Calculating the area from the graph, we obtain
2
1
4
1
2 ) 1 ( = = < X P .

1.16 The density function is as shown


(a) 2 2 for
2
1
4
1
4
1
) ( ) (
2
< + = = =

x x du x F x X P
x
X

(b)
2
1
4
1
) 1 (
1
1
= =

dx X P
(c)
3
4
4
1
2 ] [ , 0 ] [
2
0
2 2 2
=
|
|
.
|

\
|
= = =

dx x X E X E
x
.
(d)

= =

2 sin
2
1
4
] [ ) (
2 2
j
e e
e E
j j
X j
x
.


3 , 1
3 1 ,
8
5
8
1
1 1 ,
2
1
4
1
1 3 ,
8
3
8
1
3 , 0

< +
< +
< +
<
x
x x
x x
x x
x
F
X
(x) =
fX(x)
x
-2 -1 0 2 1
1/4
Probability Concepts
9
1.17 The density function is shown below


(a) 75 . 0
4
3
) 2 (
2
3
2
1
2 / 3
1
1
2 / 1
= = + = |
.
|

\
|
< <

dx x xdx X P .

(b) 1 ) 2 ( ] [
2
1
1
0
2
= + =

xdx x dx x X E as can be seen from the graph.

(c) The MGF of X is
) 1 2 (
1
) 2 ( ] [ ) (
2
2
2
1
1
0
+ = + = =

t t tx tx tX
x
e te
t
dx e x xdx e e E t M .
(d)
3
2
4
2 2 2
0
2 ) 4 ( ) 1 ( 2 ) 1 2 ( 2 ) ( 2 ) (
t
t e t te
t
e te t e e t
dt
t dM
t t t t t t
t
x
+
=
+
=
=


Using L'hopital's rule, we obtain 1 ) 0 ( = = t M
x
.

1.18 (a)
4 2
) (
3
2
] [
1
0
2

+

= + = =

dx x x X E and
1
3
) ( ) (
1
0
2
=

+ = + =

+

dx x dx x f
x
.
Solving the 2 equations in 2 unknowns, we obtain

=
=
2
3 / 1

(b) 511 . 0
45
23
2
3
1
] [
1
0
2 2 2
= = |
.
|

\
|
+ =

dx x x X E .
Then, the variance of X is ( ) 667 . 0
45
3
] [ ] [
2 2 2
= = = X E X E
x
.
fX (x)
1
x
0 1/2 1 3/2 2
Signal Detection and Estimation
10
1.19 (a)

=
j i
j i j i
y x P y x XY E
,
) , ( ] [
0 ] 0 [
6
1
)] 1 )( 1 ( ) 1 )( 1 ( ) 1 )( 1 ( ) 1 )( 1 [(
12
1
= + + + + + + =
( )
3
1
12
4
1 and
3
1
6
2
0 ,
3
1
12
4
1
where, ) ( ] [
= = = = = = = = =
= =

) P(X ) P(X X P
x X P x X E
i
i i


Hence, the mean of X is 0
3
1
1
3
1
0
3
1
1 ] [ = |
.
|

\
|
+ |
.
|

\
|
+ |
.
|

\
|
= X E .
Similarly, 0
3
1
1
3
1
0
3
1
1 ] [ = |
.
|

\
|
+ |
.
|

\
|
+ |
.
|

\
|
= Y E . Therefore, ] [ ] [ ] [ Y E X E XY E = .

(b) We observe that
12
1
) 1 , 1 ( = = = Y X P
9
1
) 1 , 1 ( = = = Y X P , thus X
and Y are not independent.

1.20 (a)

+

+

= = + =
2
0
2
0
8
1
1 ) ( ) , ( k dxdy y x k dxdy y x f
XY
.
(b) The marginal density functions of X and Y are:

+ = + =
2
0
2 0 for
4
1
4
) (
8
1
) ( x
x
dy y x x f
X
.

+ = + =
2
0
2 0 for
4
1
4
) (
8
1
) ( y
y
dx y x y f
Y
.
(c) P(X < 1 | Y < 1)=
3
1
8 / 3
8 / 1
)
4
1
4
1
(
) (
8
1
1
0
1
0
1
0
= =
+
+


dy y
dxdy y x
.
(d) | | | |

= = + =
2
0
6
7
) 1 (
4
Y E dx x
x
X E .
Probability Concepts
11
To determine
xy
, we solve for
| |

= + =
2
0
2
0
3
4
) (
8
dxdy y x
xy
XY E .
6
11
Thus, .
3
5
] [ ] [
2 2
= = = =
y x
Y E X E and the correlation coefficient is
| | | | | |
0909 . 0
11
1
=

=
y x
Y E X E XY E
.
(e) We observe from (d) that X and Y are correlated and thus, they are not
independent.

1.21 (a)

+

+

= = =
4
0
5
1
96
1
1 ) , ( k dy dx kxy dxdy y x f
XY
.

(b)

= = =
2
0
5
3
09375 . 0
32
3
96
) 2 , 3 ( dxdy
xy
Y X P .

03906 . 0
128
5
96
) 3 2 , 2 1 (
3
2
2
1
= = = < < < <

dxdy
xy
Y X P .

(c)

=
< <
< < < <
= < < < <
3
2
) (
128 / 5
) 3 2 (
) 3 2 , 2 1 (
) 3 2 | 2 1 (
dy y f
Y P
Y X P
Y X P
Y
where,

< < = =
5
1
4 0
8 96
) ( y ,
y
dx
xy
y f
Y
. Therefore,
125 0
8
1
16 / 5
128 / 5
) 3 2 | 2 1 ( . Y X P = = = < < < <



(d) | |

= = = = =
5
1
5
1
444 . 3
9
31
12
) ( | dx
x
x dx x xf y Y X E
X
.



Signal Detection and Estimation
12
1.22 (a) We first find the constant k. Hence,

= =
2
1
3
1
6
1
1 k dy dx kxy

(b) The marginal densities of X and Y are

< < = =
2
1
3 1 for
4 6
1
) ( x
x
xydy x f
X

and

< < = =
3
1
2 1 for
3
2
6
1
) ( y y dy dx xy y f
Y
.
Since = = ) , (
6
1
) ( ) ( y x f xy y f x f
xy Y X
X and Y are independent.

1.23 We first determine the marginal densities functions of X and Y to obtain

= =
1
0
3 3
8 16
) (
x
dy
x
y
x f
X
for x > 2.
and

= =
2
3
2
16
) ( y dx
x
y
y f
Y
for 0 < y < 1.
Then, the mean of X is | |

= =
2
4 ) ( dx x xf X E
X
,
and the mean of Y is | |

= =
1
0
2
3
2
2 dy y Y E .



4792 . 0
48
23
6
1
) 3 (
2
1
3
1
= = = < +

y
xydxdy Y X P
y x = 3
1
y
x
2
1 2
3
3
Probability Concepts
13
1.24 We first find the constant of k of ) ( y f
Y
to be

= =
0
3
9 1 k dy kye
y
.
(a)
3 2
1
0
1
0
3 2
14 9 18 1 ) 1 ( 1 ) 1 (


= = + = > +

e e dxdy ye Y X P Y X P
y
y x
.

(b)


= = < <
1
2
1
7 5
4 4 ) , ( ) 1 , 2 1 ( e e dxdy y x f Y X P
XY
.
(c)


= = < <
2
1
4 2 2
2 ) 2 1 ( e e dx e X P
x
.
(d)


= =
1
3 3
4 9 ) 1 ( e dy ye Y P
y
.
(e)
5 2
3
7 5
4
4 4
) 1 (
) 1 , 2 1 (
) 1 | 2 1 (

< <
= < < e e
e
e e
Y P
Y X P
Y X P .

1.25 (a) Using ) (x f
X
, | | | |

+

= = = =
0
2
1 2 2 ) ( ) ( ) ( dx e x dx x f x g X g E Y E
x
X
.

(b) We use the transformation of random variables (the fundamental theorem)
to find the density function of Y to be
y
y
X Y
e e
y
f y f

= = |
.
|

\
|
=
2
2
2
2
1
2 2
1
) ( .
Then, the mean of Y is | |

= =
0
1 dy ye Y E
y
.
x + y = 1
y


1
x
0 1
Signal Detection and Estimation
14
Both results (a) and (b) agree.
1.26 (a) To find the constant k, we solve

+

+

= = =
3
0
3
81
8
1 ) , (
y
XY
k dy dx kxy dxdy y x f .
(b) The marginal density function of X is

= =
0
3
3 0 for
81
4
81
8
) ( x x xydy x f
X
.
(c) The marginal density function of Y is

= =
3
3
3 0 for ) 9 (
81
4
81
8
) (
y
Y
y y y xydx y f .
(d)


= = =
otherwise , 0
0 , 2
81
4
81
8
) (
) , (
) | (
2
3
|

x y
x
y
x
xy
x f
y x f
x y f
X
XY
X Y


and


= =
otherwise , 0
3 ,
9
2
) (
) , (
) | (
2
|
x y
y
x
y f
y x f
Y X f
Y
XY
Y X


1.27 The density function of Y X Z + = is the convolution of X and Y given by

+

= = dx x z f x f Y f x f z f
Y X Y X Z
) ( ) ( ) ( ) ( ) (

0 z - 4 z z-4 0 z
Probability Concepts
15

=
=

otherwize , 0
4 ,
4
) 1 (
4
1
4 0 ,
4
1
4
1
) (
4
4
0
z
e e
dx e
z
e
dx e
z f
z
x
z
z
z z
x
Z

1.28 The density function of Y X Z + = is

=
_
) ( ) ( ) ( dy y z f y f z f
X Y Z
.
Graphically, we have


Hence, we have


fY (y)
0 0.3 0.5 0.2 0 0 0
Z fZ (z)
z = 0 0.4 0.2 0.4 0 0 0 0 0 0 0 0
z = 1 0.4 0.2 0.4 0 0 0 0 0 0 0
z = 2 0.4 0.2 0.4 0 0 0 0 0 0.12
z = 3 0 0.4 0.2 0.4 0 0 0 0 0.26
z = 4 0 0.4 0.2 0.4 0 0 0 0.30
z = 5 0 0 0.4 0.2 0.4 0 0 0.24
z = 6 0 0 0 0.4 0.2 0.4 0 0.08
z = 7 0 0 0 0 0.4 0.2 0.4 0

x
0 1 2 3
0.4 0.4

0.2
fX(x)
y
0 1 2 3
0.5

0.3
0.2
fY(y)
x
-3 -2 -1 0
0.4 0.4

0.2
fX(-x)
Signal Detection and Estimation
16
The plot of ) (z f
Z
is

Note that

= + + + + =
i
i Z
z z f 0 . 1 08 . 0 24 . 0 3 . 0 26 . 0 12 . 0 ) ( as expected.

1.29 (a)

+
= = =
0
/
0
) (
0 for ) ( ) (
y z
y x
Z
z dxdy e z XY P z F XY Z .




= =

0 0 0
/
/
0
1 ) 1 ( dy e dy e e dy e dx e
y
z
y
y y z y
y z
x
.
Therefore,

= =


otherwise , 0
0 ,
1
) ( ) (
0
) (
0
z dy e
y
dy e
dz
d
z F
dz
d
z f
y
z
y
y
z
y
Z Z

.
(b)

+

= + = dy y z f y f z f Y X Z
X Y Z
) ( ) ( ) (


=
z
y z y
dy e e
0
) (


=

otherwise , 0
0 , z ze
z


1.30 The density function of XY Z = is

+

< < = = =
1
1 0 for ln
1
) , (
1
) (
z
XY Z
z z dy
y
dy y
y
z
f
y
z f .
z
0 1 2 3 4 5 6 7
0.3
0.26 0.24
0.12 0.08
0 0

fZ(z)
0 z
Probability Concepts
17
1.31 (a) The marginal density function of X is

=
0
0 for ) ( x e dy e x f
x x
X
.
(b) The marginal density function of Y is

=
0
0 for
1
) ( y dx e y f
x
Y
.
(c) Since = ) , ( ) ( ) ( y x f y f x f
XY Y X
X and Y are statistically independent.
(d)

+

= = + = dy y z f y f y f x f z f Y X Z
X Y Y X Z
) ( ) ( ) ( ) ( ) ( .







y

1

x
e

x

) (x f
X
( ) y f
Y

For < y 0

z- 0 z
( )

=
z
z x
Z
e dx e z f
0
1
1
) (
Signal Detection and Estimation
18











Then,


1.32 ( ) ) ( ) ( ) (
z
x
Y P yz X P z
Y
X
P z Z P z F
Y
X
Z
Z
= = |
.
|

\
|
= = =
0 ,
1
1
1
0 /

> |
.
|

\
|
+ = =




z
z

dxdy e e
z x
y x

Hence, the density function is

<
>
|
.
|

\
|

= =
0 0
0
1
) ( ) (
2
z ,
z ,
z z F
dz
d
z f
Z Z


1.33 (a) Solving the integral

= =
2
1
3
1
2 1 2 1
6
1
1 k dx dx x kx .

(b) The Jacobian of the transformation is

( )

e 1
1

) (z f
z
z
0
0 z- z

For < y
( )
| |

=
z
z
z z x
Z
e e dx e z f
1
) (
Probability Concepts
19
. 2
2
0 1
) , (
2 1
2 1
2
1
2
2
1
2
2
1
1
1
2 1
x x
x x x

x
y
x
y
x
y
x
y
x x J = =

=



Hence,


= =
otherwise , 0
,
12
1
) , (
) , (
) , (
2 1
2 1
2 1
2 1
2 1
2 1

D ,x x
x x J
x x f
Y Y f
X X
Y Y



where D is the domain of definition.


Side 1 :
2
2 2 1 1
1 x y x y = = = , then . 4 1
4 2
1 1
2
2 2
2 2

= =
= =
y
y x
y x


Side 2 :
2
2 2 1 1
3 3 x y x y = = = , then . 12 3
12 2
3 1
2
2 2
2 2

= =
= =
y
y x
y x


Side 3 :
1 1 2 2
4 4 2 y x y x = = = , then

= =
= =
. 12 3
4 1
2 1
2 1
y x
y x


Side 4 :
1 1 2 2
1 y x y x = = = .

Therefore, D is as shown below
x2


3
1 1
x y =
2
2
2 1 2
x x y =
1 2
1
4
x1
0 1 2 3
Signal Detection and Estimation
20


1.34 (a) The marginal density functions of X
1
and X
2
are

+


+

>
= =
0 , 0
0 ,
) (
1
1
2
) ( 2
1
1
2 1
1
x
x e
dx e x f
x
x x
X


and

+


+

>
= =
0 , 0
0 ,
) (
2
2
1
) ( 2
2
2
2 1
2
x
x e
dx e x f
x
x x
X


Since = ) ( ) ( ) , (
2 1 2 1
2 1 2 1
x f x f x x f
X X X X
X
1
and X
2
are independent.
(b) The joint density function of ) , (
2 1
Y Y is given by
( )

=
. ,
D ,x x ,
x x J
x x f
y y f f
X X
Y Y
otherwise 0
,
) , (
) , (
2 1
2 1
2 1
2 1
2 1
2 1



The Jacobian of the transformation is given by
D
+ + + y
1

1 2 3
12 +
+
+
+
+
+
+
4 +
3

1
2
y
Probability Concepts
21
.
1
1
1 1
) , (
2
2
2
1
2
1
2
2
2
1
2
2
1
1
1
2 1
x
x
x
x
x

x

x
y
x
y
x
y
x
y
x x J =

=



Hence,
2
2
2
1
) ( 2
2 1
1
) , (
2 1
2 1
x
x
x
e
y y f
x x
Y Y

=
+
, but
2 1 1
x x y + = and
2 2 1
2
1
2
x y x
x
x
y = = .
Thus, . ) , (
2 1
2
1 2
2 1
1
2 1
x x
x
e y y f
y
Y Y
+
=

Also,
2 2 1 1 1 2
x y y x y x = =
) 1 (
2 1 2
y y x + = .
Making the respective substitutions, we obtain
2
2
1 2
1
2
2
2
1
2
2 1
) 1 (
) 1 (
) , (
1 1
2 1
y
y
e
y
y
y
e y y f
y y
Y Y
+
=
+
=

for 0
1
> y and 0
2
> y .





Chapter 2


Distributions


2.1 Let A = { seven appears} = {(1 ,6) , (2 , 5) , (3 , 4) , (4 , 3) , (5 , 2) , (6 , 1)}.
Then, ( ) .
6
5
6
1
1 and
6
1
36
6
) ( = = = = A P A P

(a) This is Bernoulli trials with k = 2 successes and n = 6 trials. Hence,

2009 0
6
5
6
1
! 2 ! 4
! 6
6
5
6
1
2
6
trials) 6 in successes 2 (
4 2 4 2
. n k P = |
.
|

\
|
|
.
|

\
|
= |
.
|

\
|
|
.
|

\
|
|
|
.
|

\
|
= = =

(b) 3349 . 0
6
5
6
5
6
1
0
6
trials) 6 in successes (no
6 6 0
= |
.
|

\
|
= |
.
|

\
|
|
.
|

\
|
|
|
.
|

\
|
= = n P

2.2 The number of ways of obtaining 4 white balls out of 10 is
|
|
.
|

\
|
4
10
. The other
number of different ways of obtaining 3 other balls (not white) is
|
|
.
|

\
|
3
9
. Hence, the
probability of obtaining the fourth white ball in the seventh trial is
3501 . 0
7
19
3
9
4
10
=
|
|
.
|

\
|
|
|
.
|

\
|
|
|
.
|

\
|

or, using the formula of the hypergeometric distribution without replacement, we
have 19 = N balls, 10 = r and 4 = k in 7 = n trials. Hence,


22
Distributions
23
. 3501 . 0
7
19
3
9
4
10
) 4 ( =
|
|
.
|

\
|
|
|
.
|

\
|
|
|
.
|

\
|
= = X P
2.3 The probability of success is 9 . 0 = p while the probability of failure is
1 . 0 1 = = p q .

(a) ) 9 ( ) 8 ( ) 7 ( ) 6 ( zone) in land 6 least (at = + = + = + = = X P X P X P X P P
) 10 ( = + X P

( ) ( ) ( ) ( ) ( ) ( )
( ) ( ) ( ) . . . . .
. . . . . .
998 0 9 0
10
10
1 0 9 0
9
10
1 0 9 0
8
10
1 0 9 0
7
10
1 0 9 0
6
10
10 9
2 8 3 7 4 6
=
|
|
.
|

\
|
+
|
|
.
|

\
|
+
|
|
.
|

\
|
+
|
|
.
|

\
|
+
|
|
.
|

\
|
=

(b) ( ) ( ) ( ) . 1 . 0 1 . 0 9 . 0
0
10
zone) in lands none ( P
10 10 0
=
|
|
.
|

\
|
=
(c) Probability that at least 70
0
/
0
land in zone is
= = = = = = + = ) 10 ( ) 9 ( ) 8 ( ) 7 ( X P X P X P X P
( ) ( ) ( ) ( ) ( ) ( ) ( ) 0.987. 9 . 0
10
10
1 . 0 9 . 0
9
10
1 . 0 9 . 0
8
10
1 . 0 9 . 0
7
10
10 9 2 8 3 7
=
|
|
.
|

\
|
+
|
|
.
|

\
|
+
|
|
.
|

\
|
+
|
|
.
|

\
|

Hence, the program is successful.
2.4 Substitution for 0 = k in the Poisson density function, we have

= = = e X P 2 . 0 ) 0 ( . Hence, = 1.609.

. .
e
.
e .
X P X P X P X P X P
.
219 0
]
! 2
) 609 1 (
609 1 2 . 0 [ 1
)] 2 ( ) 1 ( ) 0 ( [ 1 ) 2 ( 1 ) 2 (
609 1
2
609 . 1
=
+ + =
= + = + = = = >





2.5 Let X represent the Poisson distribution of the incoming calls with
hour 12 = .

Signal Detection and Estimation
24
(a) The probability of more than 15 calls per a given hour is
( )

= = = >
15
0
12
1556 . 0
!
12
1 ) 15 ( 1 ) 15 (
k
k
k
e X P X P
(b) No calls in 15 minute ( 4 / 1 hour) break ) 0 ( = X P in 15 minutes.
Hence,
0498 . 0 ) 0 (
3
= = =

e X P
2.6 X is Poisson distributed and ) 1 ( ) 3 / 2 ( ) 2 ( = = = X P X P . That is,
! 1 3
2
! 2
2


e e . Solving for we obtain
3
4
0
3
4
= = |
.
|

\
|
since 0 = is
not a solution. Therefore,
2636 . 0 ) 0 (
3 / 4
= = =

e X P and
( )
1041 . 0
! 3
3 / 4
) 3 (
3
3 / 4
= = =

e X P
2.7 The lack of memory property is

( )
( )
( )
( )
( )
( )
2
1
2 1
1
1 2 1
1 2 1
2
1
2 1
) | (
x X P e
e
e
x X P
x x X P
x X P
x X x x X P
x X x x X P
x
x
x x
= = =
>
+
=
>
> +
= > +

+

I

2.8 (a) In this case the parameter
12
1 1
=

= and the density function is

=



otherwise , 0
0 ,
12
1
) (
12

x e
x f
x
X

Hence, 2325 0
12
1
1 ) 15 ( 1 ) 15 (
15
0
12
. e X P X P
k
k

= = = >
(b) 0833 . 0
12
1
) 0 ( = = = X P .
Distributions
25
2.9 X is the standard normal X ~ N (0. 1)

(a) )] 1 P( 2[1 ) 1 ( 2 ) 1 ( ) 1 ( ) 1 ( = > = > + < = > X X P X P X P X P
3174 . 0 ) 8413 . 0 2(1 )] 1 ( 2[1 = = = I



(b) 1587 0 ) 1 ( 1 ) 1 ( 1 ) 1 ( . I X P X P = = = >

2.10 X ~ N (0,1). Then, 0013 0 3 3 1 3 1 3 . ) Q( ) I( ) P(X ) P(X = = = = >

2.11 20
0
/
0
of 200 = 40 times. A success is when
X = 7 {(1,6), (2,5), (3,4), (4,3), (5,2), (6,1)}.
Hence, we have success with .
6
5
1 and
6
1
36
6
= = = = p q p

(a) ) 40 ( time) the of 20 least at (success
0
0
= X P P

|
.
|

\
|
|
.
|

\
|
|
|
.
|

\
|
=
200
40
200
6
5
6
1 200
k
k k
k
= 0.1223.
(b) Using the central limit theorem to obtain the approximation to the normal
distribution, we have
. . ) . I(
npq
X-np
-P X P X P 1423 0 07 1 1
36
1000
6
200
39
1 ) 39 ( 1 ) 40 ( = =
|
|
|
|
|
.
|

\
|

=

+1 -1
x
fX(x)
Signal Detection and Estimation
26
2.12
100 2 1
X X X X S
k
+ + + + + = L L , X
k
is Poisson distributed with =3.2
(a)

= =
= = = =
4
1
2 3
100
5
2 . 3
2602 0
!
) 2 3 (
1 ) 4 ( 1
!
) 2 . 3 (
) 5 (
k
k
.
k
k
. .
k
.
e S P
k
e S P
(b) Using the central limit theorem, S becomes normal with mean 3.2 and
variance 3.2. That is S ~ N(0.032 100, 0.032 100) . Hence,
3264 . 0 ) 45 . 0 ( 45 0 1
2 3
2 3 4
2 3
2 3
1 ) 4 ( 1 ) 5 ( = =
|
|
.
|

\
|

= Q ) . I(
.
.
.
. S-
P S P S P
2.13 X ~ N(1, 2)
(a) From the tables,
|
|
.
|

\
|

= >
2
1 2
2
1
1 2 1 2
X
P ) P(X ) P(X
2399 0 707 0 1 . ) . I( = =
(b)
|
|
.
|

\
|

=
2
2 2 2
2
2
2
2 6 1
) 2 2 6 1 (
. X .
P . X . P
. . ) . I( ) . I( .
X
P .
X
P 166 0 28 0 14 0 28 0
2
2
14 0
2
2
= =
|
|
.
|

\
|

|
|
.
|

\
|

=

2.14


=
otherwise , 0
6 1 ,
5
1
) (

x
x f
X










Using the fundamental theorem, we have
2
2
and
1 1
x
dy
dx
dx
x
dy
x
y = = = . Hence,
+ + + + + + x
1 2 3 4 5 6
fX(x)

5
1

Distributions
27
. 1
6
1
for
5
1
5
1
) (
) (
) (
2
2
= = = = y
y
x
dy
dx
x f
dx
dy
x f
y f
X
X
Y












2.15 (a) We found in Example 1.19 for
2
X Y = that

>
=
0 , 0
0 , ) ( ) (
) (
y
y y F y F
y F
X X
Y


and
| |

>
=
0 , 0
0 , ) ( ) (
2
1
) (
y
y y f y f
y y f
X X
Y


For X uniformly distributed between 0 and 1, we obtain

< <

=
1 , 1
1 0 ,
0 , 0
) (
y
y y
y
y F
Y

and

<
=
otherwise , 0
1 0 ,
2
1
) (

y
y y f
Y



1
FY(y)
y
0 1
y
1
fY(y)
1/2


+ + + + + + y

6
1

6
2

6
3

6
4

6
5
1
5
1

fY(y)
5
36

Signal Detection and Estimation
28
(b) For e Z
X
, =
) ( ) ( z Z P z F
Z
=

> = =

=
0 , ) (ln ) ln ( ) (
0 , 0
z z F z X P z e P
z
X
X

Hence,

>
=
0 , 0
0 , ln
z
z z) ( F
(z) F
X
Z
. The density function is

>
= =
0 , 0
0 , ) (ln
1
) (
) (
z
z z f
z
dz
z dF
z f
X Z
Z


Substituting for z, we obtain

<
<
=
e z
e z z
z
z F
Z
, 1
1 , ln
1 , 0
) ( and


=
otherwise , 0
1 ,
1
) (

e z
z z f
Z


2.16 X and Y are standard normal, that is
2
2
2
1
) (
x
X
e x f

= and
2
2
2
1
) (
y
Y
e y f

= .
(a)

<
>
= =
0 ,
0
Y
Y
X
Y ,
Y
X
Y
X
Z


The distribution function of Z is
|
|
.
|

\
|
< +
|
|
.
|

\
|
> = = 0 0 ) ( ) ( Y z
Y
X
P Y z
Y
X
P z Z P z F
Z

) y yz X P( ) y yz X P( 0 0 < + > = with the regions shown
below.
Distributions
29

( ) ( )




+


=
=
+ =
0
2 2
0
2 2
0
0
0
0
2 2 2 2
2
1
2
1
) , (
) , ( ) , ( ) (
dy e ye

dy e ye


yz,y)dy ( yf dy y yz yf
dxdy y x f dxdy y x f z F
y yz y yz
-
XY XY
yz
XY
y
XY Z


Using Leibnizs rule, we obtain
z.
z
z f
Z
all for
) 1 (
1
) (
2
+
=
(b) For
Y
X
W = , we have .
Y
X
Y
X
Z
Y
X
Z = = = Thus,

<
>
= = =
0
0
Z , Z
, Z Z
Z
Y
X
W .











Using the fundamental theorem of transformation of random variables, we have

x < -yz
y < 0
y
x
x=-yz
y
x
x=yz
x<yz
y>0
w
w
z
z
1
z
2
Signal Detection and Estimation
30

) ( '
) (
) ( '
) (
) (
2
2
1
1
z g
z f
z g
z f
w f
Z Z
W
+ = where w z =
1
and w z =
2
.
Also, 1 ) (
0 1
0 1
) ( ' =

<
> +
= z g'
z ,
z ,
z g .

Substituting, we obtain

( ) ] 1 [
1
] 1 [
1
) ( ) ( ) (
2 2
2 1
+
+
+
= + =
w w
z f z f w f
Z Z W

Therefore, . w
w
w f
W
<
+
= 0 for
) 1 (
2
) (
2


2.17 The joint density function of ) , (
2 1
X X is
( )
2
2
2 1
2 1
2
2
2 1
2
1
) , (

+

=
x x
X X
e x x f with
2
1
2
2
2
2
1 1
and
X
X
Y X X Y = + = .
Solving
2 2 1
2
1
2
2
2
1
and y x x y x x = = + , we obtain
( )
2
2
2 1
1
2
2
1
2
2
1
2
2
2
2
2
1
2
2
2
2
2
2
1
and
1
1
y
y y
x
y
y
x y y x y x y x
+
=
+
= = + = +
By definition, 0
1
y since
2 2 1
y x x = and hence, we have 2 solutions:
2
2
1
2
2
2
1 2
1
2
2
1
2
2
2
1 2
1
1 1
1 1
y
y
x
y
y y
x
y
y
x
y
y y
x
+
=
+
=
+
=
+
=



The Jacobian of the transformation is
Distributions
31
( ) ( )
1
2
2
1 1
2
2
2 / 1
2
2
2
1
2 / 1
2
2
2
1
2
2
2
1
2
2
1
2
2
2
2
1
2
2
2
2
1
1
2 1
1 1 1
1 1
1
) , (
y
y
y y
y
x x x x
x
x

x
x

x

x x
x

x x
x
x x J
+
= =
+

+
=

+ +
=

Therefore,
(
(
(

|
|
|
.
|

\
|
+

+
|
|
|
.
|

\
|
+ +
+
=
2
2
1
2
2
2 1
2
2
1
2
2
2 1
2
2
1
2 1
1
,
1 1
,
1
1
) , (
2 1
y
y
y
y y
f
y
y
y
y y
f
y
y
y y f
XY XY Y Y

Note that ) , ( ) , (
2 1 2 1
2 1 2 1
x x f x x f
X X X X
= . Hence,
) ( ) ( ) (
2
1
1
2
) , (
2 1 1
2
2 2
2
1
2 1
2 1
2 2
1
2 1
y f y f y u e
y
y
y y f
Y Y
y
Y Y
=
+
=


where, ) ( ) (
1
2
1 1
2 2
1
1
y u e ky y f
y
Y

= . We determine the constant k to be

= =
0
2
1
2
1
1
1
2 2
1
k dy e y k
y
. Thus, the density function of Y
1
and Y
2
are
respectively
) (
2
1
) (
1
2
2
1
1
2 2
1
1
y u e
y
y f
y
Y

=
and
2
2
2
1
1 1
) (
2
y
y f
Y
+

=


Signal Detection and Estimation
32
2.18 X is a standard normal
2
2
2
1
) (
x
X
e x f

= .
Y is a chi-square distributed random variable with n degrees of freedom
( )
0 for
2 / 2
1
) (
2
1
2
2 /
> =

y e y
n
y f
y n
n
Y

Let
n Y
X
T
/
= , then the cumulative distribution of T is
( )
( )

= = =
0
/
2
) (
1
2
2 /
2
2 2 / 2
1
/ ) ( ) (
n y t x y n
n
T
dxdy e y
n
n y t X P t T P t F
since the region of integration is the xy-plane with n y t x / . Note that the joint
density function of X and Y is just the product of the individual density functions
since we assume X and Y independent. Making the change of variables
n y u x / = , then du n y dx / = and ) / (
2 2
n y u x = . Substituting in the
integral, we obtain
( )
( )


=

=
|
|
.
|

\
|
+



(
(
(

=
t
u y
n
u y
n
n
t
n
y u y n
n
T
du dy e y
n
dudy e
n
y
e y
n
t F
0
1
2
2
1
2 /
0
2
1
2
2 /
2
2
2 2 / 2
1
2 2 / 2
1
) (


Let dz
n
u
dy z
n
u
y z
n
u y
2 2
2
1
2
and
1
2
1
2
+
=
+
= =
|
|
.
|

\
|
+ . The integral
becomes
Distributions
33
( )

=

=
+

+
(
(
(
(
(
(
(

|
|
.
|

\
|
+

=
t
u z
n
z
n
n
n
T
du dz
n
u
e z
n
t F
0
2
1
2
2
1
2 /
2
1
1
2 2 / 2
2
) (

( )
du
n
u
n n
n
t
u
n

=
+
|
|
.
|

\
|
+

|
.
|

\
| +
=
2
1
2
1
1
2 /
2
1


since .
n
m
n
m m dz e z
z m
2
1
with
2
1
) 1 ( !
0

= |
.
|

\
| +
= + = =



Taking the derivative of F
T
(t) using Leibnizs rule, we obtain the required density
function given by (2.171).

2.19 With = 0, the Cauchy density function is given by .
1
) (
2 2
x
x f
X
+

=
The moment generating function of X is then


+
+

=
+

= = dx
x
x
j dx
x
x
dx
x
e
e E
x j
X j
x
2 2 2 2 2 2
sin cos
] [ ) (
since x j x e
x j
+ =

sin cos . Also, 0 ) ( lim =


p
p
p
dx x f when f(x) is an odd
function of x. Then,

=
0
2 2
cos 2
) ( dx
x
x
x

since
2 2
cos
x
x
+

is even. Using the tables of integrals, we obtain
. 0 and 0 ) ( > > =

, e
x

Signal Detection and Estimation
34
2.20 (a) The mean value of Weibull distribution is given by
. ] [
0
dx e abx X E
b
ax b

=

Let
b
b b b
a
u
bu
du
dx
x
dx
ax b dx abx du ax u
/ 1
1
) ( |
.
|

\
|
= = = =

since
b
a
u
x
/ 1
|
.
|

\
|
= .
Hence, |
.
|

\
|
+ = = |
.
|

\
|
=


b
a du e u a du
a
u
bu
ue b X E
b u b b
b
u
1
1
1
] [
/ 1
0
/ 1 / 1
0
/ 1
.

(b) The variance is ( )
2 2 2
] [ ] [ X E X E = . We need to determine the mean
square value, which is . ] [
0
1 2
dx e abx X E
b
ax b

+
= Following the same approach as
in (a), we obtain

|
.
|

\
|
+ = =

b
a du e u a X E
b
u
b b
2
1 ] [
2
0
2 2
2
. Hence,

|
.
|

\
|
+ |
.
|

\
|
+ =

2 2
2
1
1
2
1
b b
a
b







Chapter 3


Random Processes

3.1 (a) The mean and autocorrelation functions of ) (t X are

+ =
8 /
8 /
0 0
cos
2 4 4
) cos( )] ( [ t
A
d t A t X E
) 2 2 cos(
2
cos
2
)] 2 2 2 [cos(
2
cos
2
)] cos( ) cos( [ ) , (
0 0
2
0
0 0
2
0
0 0 0
+

+ =
+ + + =
+ + + = +
t
A A
t E
A A
A t A E t t R
xx
2
2


(b) E[X(t)] and R
xx
(t + , t) are functions of time, then the process X(t) is not
stationary.

3.2 (a) At ) 0 ( 1 ) ( 0
0
X t s T = = = , and at ) 1 ( 0 ) ( 1
0
X t s T = = = . Then, we
have


(b) ]} ) ( ) ( [ { )] ( ) ( [ ) , (
0 0 0 2 0 1 2 1 2 1
t T T t X T t X E E t X t X E t t R
xx
= = =

35
FX (x ; 0) fX (x ; 0)
1
1
x
x
1 0
2
1
2
1

2
1

Signal Detection and Estimation
36

) 1 ( ) 1 (
2
1
) ( )
2
1
) 1 ( ) 1 ( ) 1 ( ) 0 ( ) 0 ( ) 0 (
2 1 2 1
0 2 1 0 2 1
+ =
= + = =
t s t s t s s(t
T P t s t s T P t s t s




3.3 (a) The time average of ) (t X is
)] ( [ 0 ) cos(
2
1
lim ) (
0
t X E d t A
T
t x
T
T
T
= + = > <


ensemble average.
Therefore, the process ) (t X is not ergodic in the mean
(b)


+ + + = > + <
T
T
T
dt t A t A
T
t x t x ) cos( ) cos(
2
1
lim ) ( ) (
0 0 0

+ = ) , ( cos
2
0
2
t t R
A
xx
The process is not ergodic in
the autocorrelation.

3.4 (a) )] ( ) ( [ ) , (
2 2
t X t X E t t R
yy
+ = +

) 2 cos(
8 4
)] 2 cos( ) 4 2 4 [cos(
8 4
)] 2 2 cos( ) 2 2 2 [cos(
4 4
) 2 2 cos(
2
1
2
1
) 2 2 2 cos(
2
1
2
1
)] ( cos ) ( cos [
0
4 4
0 0 0
4 4
0 0 0
4 4
0 0 0
4
0
2 2
0 0
2 2
+ =
+ + + + =
+ + + + =
)
`

+ +
(

+ + + =
+ + + =
A A
t E
A A
t t E
A A
t t E A
t A t A E



t2
1/2
1/2
-1/2
-1/2
3/2
3/2
Height
2
1

t1
Random Processes

37
(b) )] ( cos [ )] ( [ )] ( [
0
2 2 2
+ = = t A E t X E t Y E

2
)] 2 2 [cos(
2 2
2
0
2 2
A
t E
A A
= + + = = constant. Therefore, Y(t)
is wide-sense stationary.

3.5 (a) ] [ ] [ ] [ )] ( [
) ( ) ( + +
= =
t j t j
e E A E Ae E t X E , where

=
0
2
2
2
2
2
] [
a
da e
a
A E
a
and
. 0 )] ( [ 0
2
1
] [ ] [
2
0
) (
= =

= =

+
t X E d e e e E e e E
j t j j t j t j

(b)
) ( 2 ) ( ) (
2 1
2 1 2 1
] [ ] [ ) , (
t t j t j t j
xx
e A E Ae Ae E t t R
+ +
= = , where

=
0
2
2
2
3
2
2 ] [
2
2
da e
a
A E
a

Let t
1
= t + and t
2
= t ) ( 2 ) , (
2
= = +

xx
j
xx
R e t t R . Therefore, ) (t X is
wide-sense stationary.

3.6 (a) The autocorrelation function of Z(t) is
) ( ) ( ] [ )] ( ) ( ) ( ) ( [ )] ( ) ( [ ) (
2 2
= + + = + =
yy xx zz
R R A E t Y t Y t X t X A E t Z t Z E R ,
since A, X(t) and Y(t) are statistically independent.

. 13 ) 2 ( 9 ] [ ] [
2 2 2 2
= + = + = A E A E
a
Therefore,
) 9 ( cos 26 ) (
3 2
+ = e e R
zz

(b) From(3.31), we have 0 )] ( [ ) ( lim
2
= =

t Z E R
zz
. Therefore, the mean of
) (t Z
0 )] ( [ = t Z E
Signal Detection and Estimation
38
Since 0 =
z
m , then )] ( [
2 2
t Z E
z
= . Hence,
. 260 ) 1 9 ( 26 ) 0 ( )] ( [
2
= + = =
zz
R t Z E
3.7 Let s(t) be the square wave with amplitude A and without the shift t
0
. s(t)is
periodic. From (3.40) we have
). ( ) ( ) ( ) ( ) (
] [
) , (
0
0 0 2
0
0 1
2
2 1
= +

= =
xx
T T
xx
R dt t s t s
T
dt t t s t t s
T
A E
t t R
2

Two possible cases (i)
2
0
T
and (ii) 0
2

T
.
(i) For
2
0
T
, we have




(ii) For 0
2

T
, we have





|
.
|

\
|
=

|
.
|

\
|
+ |
.
|

\
|
= +

T
R
T T T
dt t s t s
xx
T
4
1 ) (
2 2
) 1 (
2
) 1 ( ) ( ) (
2
0
2 2


s(t+)
t
t
T T/2
-1
+1
)
2
(
T

s(t)
Random Processes

39




)
4
1 ( ) (
2
T

R
xx
+ = as shown below


A plot of the autocorrelation function is shown below












3.8 (a) As in the previous problem, s(t) is periodic and T
0
is uniformly distributed
over the period X(t) is stationary in the wide-sense.



s(t+)
t
t
T T/2
-1
)
2
(
T
s(t)

+ + + + = +


)]
2
( )[ 1 ( ]
2
)
2
[( ) 1 ( )] (
2
)[ 1 ( ) )( 1 ( ) ( ) (
0
T
T
T T T
dt t s t s
T


RXX ()
T/2 -T/2 T/4
-3T/4
3T/4
-T/4
-
2

2

Signal Detection and Estimation
40
(b) Consider one period only


A. x
A
x
x F
A
Tx T
t T
T
t P t T
A
Tx
t P
T
T t
A
Tx T
T P
A
Tx
T t T P x t X P
t X , P
A x x t X P t X P x F
A x x F
t
t
t X
t t
t t
t
t t t X
t t X
t
t
t
+ =
+ < + < =
+ < + + + < = <
= =
< < + = =
> =
0 for
4 4
3
) ( Therefore,
].
8 4 4
[ ]
8
[
]
4 8 4
[ ]
8
[ ] ) ( 0 [
and
4
3
] 0 ) ( [ Hence
. 0 for ] ) ( 0 [ ] 0 ) ( [ ) (
and , for 1 ) (
0 0
0 0 0 0





x(t)
X(t)
A
T
t
T0
4
0
T
T +
A
t Tx
T
8
) (
0
+
A
t Tx T
T
8
) (
4
0
+
xt
A
3/4
1
) (
t
t
X
x F
Random Processes

41

(d)


= = =
8 4
) ( )] ( [
0
A
dx
A
x
dx x f x t X E
A
t
t
t t X t
t
and
.
192
13
12
)] ( [
2 2
2
2
A
A
t X E
t
x
= =
(e)

= = > <
T
A
dt t x
T
t x
0
8
) (
1
) ( and
12
) (
2
2
A
t x = > < .
3.9 (a) In fixing 2 / 1
1
= t and 2 / 3
2
= t , we obtain two independent random
variables ( ) 2 / 1 X and ( ) 2 / 3 X with marginal density functions
|
.
|

\
|
= |
.
|

\
|
2
rect
2
1
2
1
;
x
x f
X
and |
.
|

\
|
= |
.
|

\
|
2
rect
2
1
2
3
;
x
x f
X
. Therefore, the joint density
function is just the product of the marginal density functions to yield
4
1
2
1
2
1
) 5 . 1 , 5 . 0 ; 0 , 0 ( = =
X
f .

(b) We observe that the duration of 1 second represents the length of a pulse
and thus, the samples spaced 1 second apart will be independent
4
1
) 5 . 1 , 5 . 0 ; 0 , 0 ( =
Y
f as in (a).

3.10 )] ( ) ( [ ) ( t Y t Y E R
yy
+ =

. R R R
t X t X t X t X E
xx xx xx
) 1 ( ) 1 ( ) ( 2
)]} 1 ( ) ( )][ 1 ( ) ( {[
+ + + =
+ + + + =


) (
t
t
X
x f
xt
A
0
(3/4)
A 4
1
(c)


=
=
otherwise , 0
0 ,
4
1
0 , ) (
4
3
) (

A x
A
x x
x f
t
t t
t X
t


Signal Detection and Estimation
42


3.11 The autocorrelation function of the process ) 1 ( ) ( ) ( = t X t Y t Z is

)]} 1 ( ) ( )][ 1 ( ) ( {[ ) ( + + = t X t Y t X t Y E R
zz


) ( ) (
) ( ) 1 ( ) 1 ( ) (
+ =
+ + =
xx yy
xx xy yx yy
R R
R R R R

since R
yx
= R
xy
= 0 from orthogonality. Therefore, ) ( 2 ) ( 2 ) ( f S f S f S
xx yy zz
= =
as shown below.












3.12 )]. ( [ )] ( [
6 2
t X E t Y E = From Equation (2.80), we have

( )
3
2 6
3
6
6
15 15
2 ! 3
! 6
)] ( [ = =

=


t X E where,
the variance .
2
) (
0
0
0
0 2

= = = =




N
df e N df e
N
df f S
f f
xx

Therefore,
the mean square value .

15 )] ( [
3
0 2
|
|
.
|

\
|
=
N
t Y E

+1 -1 +2 -2
2 2
1
+1 -1 +2 -2

0 0
1
Ryy ()

2
+1 -1
+ 1
f
0
Szz (f)
Random Processes

43
3.13 ) 2 ( ) 1 ( ) ( ) ( ) ( ) ( ) (
2 1
+ = + = t X t X t h t X t h t X t Y . Thus,
) 1 ( ) 1 ( ) ( 2
) 1 ( ) 1 ( ) ( ) (
)]} 2 ( ) 1 ( [ )] 2 ( ) 1 ( {[ ) (
+ + + =
+ + + + =
+ + + + =
xx xx xx
xx xx xx xx
yy
R R R
R R R R
t X t X t X t X E R
















3.14 (a) ) 1 ( ) ( ) ( + = t N t N t Y or, we have





with ) 1 ( ) ( ) ( + = t t t h . From (3.135),
2
) ( ) ( ) ( f H f S f S
nn yy
= where
f j
e f H
2
1 ) (

+ = and thus, ( )( ) ( ) f e e f H
f j f j


+ = + + =
+
2 cos 1 2 1 1 ) (
2 2
2
.

Hence, the output power spectral density is ] 2 cos 1 [ ) rect( 2 ) ( f f f S
yy
+ = .














2 1 0 -1 -2
1
) (
yy
R
) (t h
) (t Y ) (t N
-1/2 1/2 0
f
) ( f S
yy
4
Signal Detection and Estimation
44
(b) )]} ( ) ( )][ ( ) ( {[ )] ( ) ( [ ) ( t N t V t N t U E t Z t W E R
wz

+ + + + = + =
) ( ) ( + =
nn uv
R R

Since ) (t U and ) (t N are statistically independent and zero mean. Hence,
) ( ) ( ) ( f S f S f S
nn uv wz
+ = as shown below.










3.15 ) ( ) ( ) ( ) (
) (
=

h h R R
g
xx yy
4 43 4 42 1


For 0 1 , we have










+ = + =
1 1
) (
) 1 ( ) 1 ( ) ( dt e t e dt e t g
t t
=
2
e

For 1 0 , we have







e e dt e t dt e t g
t t +


+ = + + =

2 ) 2 ( ) 1 ( ) 1 ( ) (
) 1 (
0
1 0
) ( ) (

+1 -1 0
+1 -1 0

) ( f S
wz
f
1/2 -1/2
3/2
Random Processes

45
For 1 , we have






e e e dt e t dt e t g
t t +


+ = + + =

2 ) 1 ( ) 1 ( ) (
) 1 ( ) 1 (
0
1
1
0
) ( ) (

Now, ). ( ) ( ) ( = h g R
yy
In the same manner, we have:

For 1 ,
) e e e e e
dt e e e e
dt e e t e dt e e t R
t t t t
t t t t t
yy
2 1 3 3
1
) 1 ( ) 1 (
0
1
1
0
) 1 ( 2
3
2
3
1
3
1
(
] 2 [
] 2 ) 2 ( [ ) ( ) (

+
+ + =
+ +
+ + =



For 0 1 ,
e e e e e e
dt e e e e
dt e e t e dt e e t R

t t t t
t t t t t
yy
]
3
2
3
1
) 1 [(
] 2 [
) 2 2 ( ) ( ) (
2 1 3 3
1
) 1 ( ) 1 (
0 1
0
1 2

+
+ + + =
+ +
+ + =



For 1 0 ,

e e e e e e e e
dt e e e e dt e e t e R

t t t t t t t
yy
]
2
1
3
2
3
2
2
1
[
] 2 [ ] 2 2 [ ) (
2 1 3 3 1 2
1
) 1 ( ) 1 (
1
) 1 (

+
+ + + =
+ + + =



+1
-1
0
Signal Detection and Estimation
46
For 1 ,
. e e
2
1
e
2
1
] 2 [ ) (
1 1 ) 1 ( ) 1 ( +

+
+ = + =

dt e e e e R
t t t t
yy

3.16 ) ( ) ( ) (
2
f S f H f S
xx yy
= . The transfer function of the RC network is
RC f j
f H
+
=
2 1
1
) ( . Hence,
2 2 2
2
4
2
) (
f
d e e f S
f j
xx
+

= =



) 4 1 )( 4 (
2
) (
2 2 2 2 2 2 2
C R f f
f S
yy
+ +

=
3.17 The transfer function of the RLC network is
RC j LC
C j
L j R
c j
f H
+
=

+ +

=
2
1
1
1
1
) (
The mean of the output is 2 ) 0 ( ) ( ) ( = = =
x y x y
m m H t m t m . Also,
) ( ) ( ) (
2
f S f H f S
xx yy
= where
2 2
4 4
4
) ( 4 ) (
f
f f S
xx
+
+ =
Therefore,

+
+
+
=
2 2 2 2 2
1
1
) ( 4
) ( ) 1 (
1
) (
f
f
RC LC
f S
yy


3.18 The spectrum of ) ( f S
nn
does not contain any impulse at f and thus,
0 )] ( [ )] ( [ = + = t N E t N E . The samples at t and + t are uncorrelated if
0 ) , ( = + t t C
nn
. It follows that the samples are uncorrelated provided
. 0 ) ( =
nn
R Hence,
Random Processes

47
.
2
2 sin
2
) ( )] ( ) ( [ ) (
0
2 0 2






= = = + =
B
B
f j f j
nn nn
B
B
B N df e
N
df e f S t N t N E R


From the plot of ) (
nn
R ,













We observe that 0 ) ( =
nn
R for ... , 2 , 1 ,
2
= = k
B
k
. Therefore, the
sampling rates are ,
2 1
k
B
f
s
= =

K , 3 , 2 , 1 = k .

3.19 (a) Nyquist rate . sec
2
1
2
1
= =
c
f
T
(b)

<
= =
otherwise , 0
1 , 1
) rect( ) rect( ) (
f f
f f f S
xx




sinc sinc sinc ) (
2
. R
xx
= =


or, { } )
2
1
( ) ( ] ) 1 [( ) (
xx xx
R T R T n X nT X E = = + since sec
2
1
= T . Therefore,
2
2
2
2
2
) 2 (
)
2
( sin
)
2
1
( sinc )
2
1
( |
.
|

\
|

= =
xx
R and
2
2
|
.
|

\
|
=

since the process is zero


mean and stationary; that is, a shift in time does not change .

) (
nn
R
B 2
1
B 2
2
B 2
3
B 2
4

B 2
1
B 2
2
B 2
3

B 2
4

B N
0

0
Signal Detection and Estimation
48
3.20 From (3.31), ( ) . 2 )] ( [ 4 ) ( lim )] ( [
2
= = =

t X E R t X E
xx
The mean of
Y(t) is

= = =
t t
t d d X E t Y E
0 0
2 2 )] ( [ )] ( [ Y(t) is not stationary.

3.21 If ), ( 2 ) , (
2 1 2 1
t t t t R
xx
= then

=
1 2
0 0
2 1
) ( 2 ) , (
t t
yy
d d t t R .
We have 2 cases:
2 1
t t > and
2 1
t t < .

Case1:
2 1
t t >

= =
(
(

=
2 2 1
0 0
2
0
2 1
. 2 2 ) ( 2 ) , (
t t t
yy
t d d d t t R









Case2:
2 1
t t <

= =
(
(

=
1 1 2
0 0
1
0
2 1
. 2 2 ) ( 2 ) , (
t t t
yy
t d d d t t R











Therefore,

<
<
= =
1 2 2
2 1 1
2 1 2 1
, 2
, 2
) , min( 2 ) , (
t t t
t t t
t t t t R
yy
.



=
t1 t2
t2
0


=
t1
t1
t2
0
Random Processes

49
3.22 (a)

=
1
0
) ( dt t X I
a
. From (2.80),


=
odd , 0
even ,
2 )! 2 / (
!
] [
2



n
n
n
n
X E
n
n
n
.
Hence,
2
4
4
2 ! 2
! 4
] [ =
a
I E . The variance of I
a
is ] [ ] [
2 2 2
a a i
I E I E
a
= with

= =
1
0
0 )] ( [ ] [ dt t X E I E
a
and .
3
2
] [
2
=
a
I E Hence, .
3
2
] [
2 2
= =
a
I E After
substitution, we obtain .
3
4
4 ! 2
3
2
! 4
] [
4
4
=
|
.
|

\
|
=
a
I E

(b) 0 ] [ ] [ ] [ = =
b a b a
I E I E I I E since 0 ] [ =
a
I E and the random variable I
b
is
obtained independently from I
b
.

(c) The mean of I
c
is

= =
(
(

=
T T
c
dt t X E dt t X E I E
0 0
. 0 )] ( [ ) ( ] [ Hence,
]. [ ] var[
2
c c
I E I = Using (3.203), the variance of I
c
is


=
|
|
.
|

\
|
= =
T
T
T
T
xx xx xx c
T d R T d R
T
T d R T I
1
1
. ) ( ) ( 1 ) ( ) ( ] var[
or,

= = =

1
0
1
0 0
) 1 ( 2 ) 1 ( 2 ) ( ) ( 2 ) ( ) ( ] var[ d d T d R T d R T I
T
xx
T
T
xx c

T T =
3
1
for . 1 >> T

3.23 (a) We first compute the mean of Y(t) to obtain

= =
t t
d X E d X E t Y E
0 0
)] ( [ ] ) ( [ )] ( [
But,
1 )] ( [ 1 ) ( lim )] ( [
2
= = =

t X E R t X E
xx

Signal Detection and Estimation
50
Therefore, t d t Y E
t
= =

0
) 1 ( )] ( [ , which is function of time ) (t Y t is not
stationary.

(b)

=
(
(

= =
1 2 2 1
0 0 0 0
2 1 2 1
) , ( ) ( ) ( )] ( ) ( [ ) , (
t t
xx
t t
yy
d d R d X d X E t Y t Y E t t R


=
1 2
0 0
) (
t t
xx
d d R

3.24 (a) ) (t X and ) (

t X orthogonal . 0 ) (

=
x x
R From (3.225),
) (

) (

) (

= =
xx xx x x
R R R which is not zero for all . (a) is False.

(b) j H j t X = )} (
~
{ H )] (

) (

[ )} (

) ( { t X t X j t X j t X + = + , but ) ( ) (

t X t X = and
hence, j H j t X = )} (
~
{ = + ) (
~
) ( ) (

t X t X t X (b) is true.

(c) If
t f j
e t X t X
0
2
1
) ( ) ( = is an analytic signal 0 ) (
1 1
= f S
x x
for 0 < f .
. ) ( ] ) ( ) ( [ )] ( ) ( [ ) (
0 0 0
1 1
) (
1 1
+
= + = + =
j
xx
t j t j
x x
e R e t X e t X E t X t X E R
The power spectral density of the process ) (
1
t X is then ) ( ) (
0
1 1
f f S f S
xx x x
= ,
which is zero if
c
f f >
0
so that all the spectrum will be shifted to the right.

(d)


= = =
c
f
xx x x x x
df f S df f S R t X E
0
~ ~ ~ ~
2
) ( 4 ) ( ) 0 ( )] (
~
[ , since from (3.235),

<
>
=
0 , 0
0 , ) ( 4
) ( ~ ~
f
f f S
f S
xx
x x
.
Hence,

= =
c c
f
xx
f
xx
t X E df f S df f S
0
2
0
)] ( [ 2 ) ( 2 2 ) ( 4 (d ) is true.

Also,

)] 0 (

) 0 ( [ 2 ) 0 ( )] (
~
[ ~ ~
2
xx xx x x
R j R R t X E + = = from (3.233) possibly true if
, 0 ) 0 (

=
xx
R but ) (

) (

=
xx xx
R R from (3.225). At 0 = , we have
Random Processes

51
) 0 (

) 0 (

xx xx
R R = and thus, = = = )] ( [ 2 ) 0 ( 2 )] (
~
[ 0 ) 0 (

2 2
t X E R t X E R
xx xx

(d ) is true.

3.25 (a) The equivalent circuit using a noiseless resistor is










The transfer function
RC j LC
L j R
C j
C j
j H
n
v v
+
=
+ +

=
) 1 (
1
1
1
) (
2 0
. Hence,
the power spectral density of ) (
0
t v is
RC j LC
kTR
f S j H f S
n n n
v v v v v v
+
= =
) 1 (
2
) ( ) ( ) (
2
2
0 0 0
.
(b) The input impedance is
2 2 2
2 2
2 2 2
) ( ) 1 (
) 1 (
) ( ) 1 (
) (
1
) (
1
) (
RC LC
C R LC L
j
RC LC
R
L j R
C j
jL R
C j
j Z
+

+
+
=
+ +

=
But Nyquist theorem says

kT S
vv
2 ) ( = e
2 2 2
) ( ) 1 (
2
)} ( {
RC LC
kTR
j Z
+
= which agrees with the result
obtained in (a).






R
L
C v0(t)
+
_

Vn (t)
) ( j Z
Signal Detection and Estimation
52
3.26 (a) The equivalent circuit with noiseless resistors is








2
2
2
1
) ( ) ( ) ( ) ( ) ( ) ( ) (
2 2 1 1
2 2 1 1 0 0
f H f S f H f S f S f S f S
e e e e
n n n n v v v v v v
+ = + =
Using superposition principle, the power spectral density at the terminal pairs for
each source is
2
2 1
1 1
)] ( [ 1
1
2 ) (
1 1
R R C
R kT f S
v v
+ +
= and
2
2 1
2 2
)] ( [ 1
1
2 ) (
2 2
R R C
R kT f S
v v
+ +
= .
Hence, the output power spectral density is
2
2 1
2 2 1 1
)] ( [ 1
) ( 2
) (
0 0
R R C
R T R T k
f S
v v
+ +
+
=
(b) In order to determine the autocorrelation function, we rewrite ) (
0 0
f S
v v
as
2 2
2
2 1
2 1
2 1
2 2 1 1
4
] ) [(
1
) (
1
2
) (
) (
) (
0 0
f
C R R
C R R
C R R
R T R T k
f S
v v
+
+
(

+
+
+
=
Hence,
(

+
+
=
C R R C R R
R T R T k
R
vv
) (
exp
) (
) (
) (
2 1 2 1
2 2 1 1

(c) The mean square value is
C R R
R T R T k
R
vv
) (
) (
) 0 (
2 1
2 2 1 1
+
+
=
_
) (
1 1
T R
C
+

) (
1
t V
n


) (
2
t V
n
) (
1 1
T R
v0(t)
Random Processes

53
Substituting for the given values of C T T R R and , , ,
2 1 2 1
, we obtain
10
10 457 . 0 ) 0 (

=
vv
R . Therefore, the root mean square value is
. V 76 . 6 volts 10 76 . 6
6
=



3.27 The equivalent circuit using a noiseless resistor is









(a) The transfer function relating ) (t I to ) (t V
n
is
L j R
j H
n
in
+
=
1
) ( .

Therefore, the power spectral density of ) (t I is
2 2
2
) (
2
) ( ) ( ) (
L R
kTR
S j H S
n n n
v v iv ii
+
= = = .

(b) From (3.244), we need to determine the power spectral density of the
short-circuit current. Hence, we have, for the circuit below,







2 2 2 2
) (

) (

1 1
L R
L
j
L R
R
L j R Z
Y
in
in
+

+
=
+
= =
Therefore, the power spectral density of the short-circuit current is
kT S
ii
2 ) ( = e
2 2
) (
2 } {
L R
R
kT Y
in
+
=
R
Yin
L
R
I(t)
L

) (t V
n
Signal Detection and Estimation
54
3.28 (a)


= ) ( ) ( ) ( d h t X t Y . The mean of Y(t) is


= = ) 0 ( )] ( ) ( [ )] ( [ H m d h t X E t Y E
x

where, . 1 ) 0 (
0

= =

d e H Hence .
x y
m m =

(b) Since , 0 )] ( [ = =
x
m t X E the mean of Y(t) is 0 )] ( [ = t Y E and the variance is


= . ) ( ) ( ) ( )] ( [
2
d d h h R t Y E
xx

The autocorrelation function of the input is ) ( ) ( = k R
xx
, where k is a constant.
Therefore,



= = = = .
2
) ( ) ( ) ( ) ( )] ( [
0
2 2 2
k
d e k d h k d d h h k t Y E
3.29 Since ) ( f S
nn
does not have an impulse at 0 = f , 0 )] ( [ = t N E and the mean
of the output of the linear filter is . 0 ) 0 ( )] ( [ )] ( [ = = H t N E t Y E Hence, the variance
of Y(t) is


= = = df f S R t Y E
yy yy y
) ( ) 0 ( )] ( [
2 2

where,
2
0
2
) (
2
) ( ) ( ) ( f H
N
f H f S f S
nn yy
= = .
The system function is given by

>

|
|
.
|

\
|

=
B f
B f
B
f
K
f H
, 0
, 1
) (
Therefore,
Random Processes

55


|
.
|

\
|
=
(

|
.
|

\
|
+
(

|
.
|

\
|
+ =

B B
B
y
df
B
f
K N df
B
f
K
N
df
B
f
K
N
0
2
2
0
2
0
0
2
0
0 2
. 1 1
2
1
2


3
2
0
BK N
=





Chapter 4


Discrete Time Random Processes


4.1 (a) Using 0
1 3 1
1 1 1
3 2 2
) det( =



= I A , we obtain
3
1
= , 1
2
= and 2
3
= .
Then,
(
(
(

=
(
(
(

(
(
(

=
c
b
a
c
b
a
3
1 3 1
1 1 1
3 2 2
1 1 1
x Ax
Solving for a, b and c, we obtain 1 = = = c b a and thus,
(
(
(

=
1
1
1
1
x
Similarly,
(
(
(

= =
1
1
1
2 2 2 2
x x Ax and
(
(
(

= =
1
071 . 0
786 . 0
3 3 2 3
x x Ax .

The modal matrix is then
(
(
(


=
1 1 1
071 . 0 1 1
786 . 0 1 1
M ,
(
(
(

93 . 0 93 . 0 0
33 . 0 83 . 0 5 . 0
4 . 0 1 . 0 5 . 0
1
M
The Jordan form is



56
Discrete Time Random Processes
57
AM M J
1
=
(
(
(


(
(
(

(
(
(

=
1 1 1
071 . 0 1 1
786 . 0 1 1
1 3 1
1 1 1
3 2 2
93 . 0 93 . 0 0
33 . 0 83 . 0 5 . 0
4 . 0 1 . 0 5 . 0


(
(
(

=
2 0 0
0 1 0
0 0 3

(b)
6 and
3 , 3
0
6 0 0
0 2 1
0 2 4
3
2 1
=
= + =
=



=
j j
I A
Solving
0 and
5 . 0 5 . 0 , 1
) 3 (
6 0 0
0 2 1
0 2 4
1 1 1
=
= =

(
(
(

+ =
(
(
(

(
(
(


=
c
j b a
c
b
a
j
c
b
a
x Ax
Thus,
(
(
(

=
0
2
1
2
1
1
1
j x and
(
(
(

+ =
0
2
1
2
1
1
2
j x .
Again, solving
3 3 3
x Ax = we obtain
(
(
(

=
1
0
0
3
x
The modal matrix is then
(
(
(

=
(
(
(

+ =

1 0 0
0 5 . 0 5 . 0
0 5 . 0 5 . 0
1 0 0
0 5 . 0 5 . 0 5 . 0 5 . 0
0 1 1
1
j j
j j
j j M M
and
(
(
(

+
= =

6 0 0
0 3 0
0 0 3
1
j
j
AM M J


Signal detection and estimation
58
(c) Similarly, we solve
4 0
2 4 0
1 6 0
1 2 4
3 2 1
= = = =



= I A
Note that we have an algebraic multiplicity of 3 = r . Solving, =
=
1
2
I A
degeneracy 2 1 3 = = q ; that is, we have two eigenvectors and one generalized
eigenvector. Thus,
=
1 1
4x Ax
(
(
(

=
0
0
1
1
x or
(
(
(

=
2
1
1
2
x
Solving for the generalized eigenvector, we have
(
(
(

= =
0
0
1
) 4 (
22 2 22
x x x I A
Therefore, the modal matrix is | |
(
(
(

= =
1 2 0
0 1 0
0 1 1
22 2 1
x x x M
(
(
(


=

1 2 0
0 1 0
0 1 1
1
M and the Jordan form is
(
(
(
(

= =

4 0 0
1 4 0
0 0 4
1
AM M J

4.2 (a) Solving for the determinant of A, we have = 0 6 ) det( A the matrix
is of full rank r
A
= 3.
(b) Solving

=
=
=
=



=
4142 . 3
3
5858 . 0
0
5 1 0
3 1 1
2 0 1
0 ) det(
3
2
1
I A
Discrete Time Random Processes
59
We observe that all 0 <
i
, for i = 1, 2, 3, and thus the matrix is negative definite.

(c) Solving v Av = , we obtain
(
(
(

=
1511 . 0
6670 . 0
7296 . 0
1
v ,
(
(
(

=
4082 . 0
8165 . 0
4082 . 0
2
v and
(
(
(

=
4879 . 0
7737 . 0
4042 . 0
3
v .
4.3 The characteristic equation is
) 3 ( ) 2 (
1 0 0 1
1 3 1 1
0 0 2 0
1 0 0 3

3
=

= I A

= =
= =

1 m ty multiplici algebraic with 3


3 ty multiplici algebraic with 2
2
1 1


2
m

Note that the rank of r = = 2
1
I A . Thus, 2 2 4
1
= = = r n q . Thus, for
2
1
= , we have 2 eigenvectors and 1 generalized eigenvector since . 3
1
= m
(
(
(
(

=
(
(
(
(

= = =
0
1
1
0
and
1
0
0
1
2 2
3 1
x x x x A . The generalized eigenvector is
(
(
(
(

= =
1
1
0
0
) 2 (
12 1 12
x x x I A
For
(
(
(
(

= = =
0
1
0
0
3 , 3
4 4 4
x x Ax
Signal detection and estimation
60
Hence, the modal matrix is
(
(
(
(


= =
0 0 1 1
1 1 1 0
0 1 0 0
0 0 0 1
] [
4 3 12 1
x x x x M
Note that
(
(
(
(

= =

3 0 0 0
0 2 0 0
0 0 2 0
0 0 1 2
1
AM M
4.4 Let the M roots of
i
, i = 1, 2, , M, be the eigenvalues of R, then
0 ) det( = I R . Also,
0 ) det( ) det( )] ( det[ ) det(
1 1
= = =

R I R R I R I R
Since the correlation matrix R is nonsingular ) 0 ) (det( R , then
0
1
det 0 ) det(
1 1
=
(

|
.
|

\
|

= =
m
R I R I
The eigenvalues are non-zero for the non trivial solution ) 0 ( and thus,
0

1
det
1
=
(

|
.
|

\
|

I R , which means that M i


i
, , 2 , 1 ,
1
L =

, are eigenvalues of
.
1
R

4.5 From (4.121), two eigenvectors
i
v and
j
v are orthogonal if , 0 =
j
H
i
v v i j.
From the definition,
i i i
v Rv = (1)
and
j j j
v Rv = (2)
Premultiplying both sides of (1) by
H
j
v , the Hermitian vector of
j
v ,we obtain
Discrete Time Random Processes
61
i
H
i i i
H
j
v v Rv v = (3)
Since the correlation matrix R is Hermitian, R R =
H
. Taking the Hermitian of
(2), we have
H
j j
H
j
v R v = (4)
since
i
is real. Postmultiplying (4) by
i
v yields
i
H
j j i
H
j
v v Rv v = (5)
Subtracting (5) from (3), we obtain
0 ) ( =
i
H
j j i
v v (6)
which yields 0 =
i
H
j
v v sine
j i
. Therefore, the eigenvectors
i
v and
j
v are
ortogonal.

4.6 Let
M
v v v ,..., ,
2 1
be the eigenvectors corresponding to M eigenvalues of the
correlation matrix R. From (4.120), the eigenvectors are linearly independent if
= + + +
n n
a a a v v v L
2 2 1 1
0 (1)
for 0
2 1
= = = =
n
a a a L . Let
i i
= I R T , then =
i i
v T 0 and
j i j j i
v v T ) ( = if i j. Multiplying (1) by
1
T gives
= + + +
n n n
a a a v v v ) ( ) ( ) (
1 3 1 3 3 2 1 2 1
L 0 (2)
Similarly multiplying (2) by
2
T and then
3
T and so on until
1 n
T , we obtain
= + +
n n n n
a a v v ) )( ( ) )( (
2 1 1 3 2 3 1 3 3
L 0 (3)
M
L L ) )( ( ) ( ) )( (
2 1 1 2 1 2 1 1 1 1
+
n n n n n n n n n
a a v
=
n n n
v ) (
2
0 (4)

=
n n n n n n n n
a v ) )( ( ) )( (
1 2 2 1
L 0 (5)
Signal detection and estimation
62
From (5), since 0 ) (
i n
for i n. a
n
= 0.

Using (5) and (4), we see again 0
1
=
n
a , and so on going backward until
Equation (1). Hence,
= = = = =
n n
a a a a
1 2 1
L 0
and thus, the eigenvectors are linearly independent.

4.7 From (4.121), since the matrix is symmetric, the normalized eigenvectors x
1
and x
2
corresponding to the eigenvalues
1
and
2
are orthogonal and A has the
form
(

=
22 12
12 11
a a
a a
A since it is symmetric.

Let
2 1
x x X y x + = . Then
2 2 1 2 1
x x x A x A AX y x y x + = + = since
i i
x Ax = . Also,
1 ) )( (
2
2
1
2
2 2 1 1 2 1
= + = + + = y x y x y x x x x x AX X
T
(1)
The equation of the ellipse has the form
1
2
2
2
2
= +
b
y
a
x

Therefore, (1) represents an ellipse for
1
/ 1 = a and
2
/ 1 = b . Assuming
2 1
> , then a is the minor axis and b is the major axes and the ellipse is a shown
below.













2
1


1
1

y
x1 x2
x
Discrete Time Random Processes
63
(b) For
(

=
5 3
3 5
A , we solve for the eigenvalues

max 1
2
8 ) 2 )( 8 ( 9 ) 5 (
5 3
3 5
det ) det( = = = =
(



= A I and
min
2 = = . Solving for the eigenvectors, we have = 0
1
) 8 ( x I A
(

=
1
1
2
1
1
x , and
(

= =
1
1
2
1
0 ) 2 (
2 2

x x I A .

Note that x
1
and x
2
are orthogonal. From (a), the semi-major axis is
( ) ( ) 707 . 0 2 / 1 / 1
min
= = and the semi-minor axis is
( ) ( ) 354 . 0 8 / 1 / 1
max
= = . The ellipse is shown below















4.8 (a) The second order difference equation of the AR process is.
) ( ) 2 ( ) 1 ( ) (
2 1
n e n X a n X a n X + =
and thus, the characteristic equation is
0 1
2
2
1
1
= + +

Z a Z a
y
x2
x1
x
0.354
0.707
Signal detection and estimation
64


(b) Solving for the roots of the second order equation, we obtain
|
.
|

\
|
=
2
2
1 1 1
4
2
1
a a a P and
|
.
|

\
|
+ =
2
2
1 1 2
4
2
1
a a a P
For stability of the system, the poles
1
P and
2
P must be inside the unit circle, that
is 1
1
< P and 1
2
< P . Applying these two conditions, we obtain
1
1
2 1
2 1

+
a a
a a

and
1 1
2
a
4.9 (a) The Yule-Walker equations for the AR(2) process are r R = or
r a R = . Applying this to our system, we have
(

=
(


) 2 (
) 1 (
) 0 ( ) 1 (
) 1 ( ) 0 (
2
1
r
r
r r
r r

For a real-valued stationary process ) 1 ( ) 1 ( r r = , and thus solving the two
equations in two unknowns, we obtain
) 1 ( ) 0 (
)] 2 ( ) 0 ( )[ 1 (
2 2
1 1
r r
r r r
a

= =
Z
-1
Z
-1
) (n X
) 1 ( n X
) 2 ( n X
) (n e
-a1
-a2
Discrete Time Random Processes
65
) 1 ( ) 0 (
) 1 ( ) 2 ( ) 0 (
2 2
2
2 2
r r
r r r
a

= =
where
2
) 0 (
x
r = .
(b) Note that r(1) and r(2) may be expressed in terms of parameters of the
systems as in (4.184) and (4.186) to obtain
2
1
2
2
1
1
) 1 (
x x
a
a
r =
+

= with
2
1
1
1 a
a
+

=
and
2
2
2
2
2
2
1
1
) 2 (
x x
a
a
a
r =
|
|
.
|

\
|

+
= with
2
2
2
1
2
1
a
a
a

+
=
4.10 The state diagram is















We have
S
1
and S
2
: irreducible ergodic.
S
3
: aperiodic and transient.
S
4
: absorbing.

4.11 Let S
1
, S
2
and S
3
represent symbols 1, 2 and 3, respectively. Then, the state
diagram is





S1 S2
1/3
1/2
1/2
2/3
S3 S4
1/4
1/2
1
1/4
Signal detection and estimation
66
















(b) The n-step transition matrix is
(
(
(

= =
3400 . 0 2700 . 0 3900 . 0
3200 . 0 2800 . 0 4000 . 0
3000 . 0 2700 . 0 4300 . 0
) 2 (
2
P P
(
(
(

= =
3220 . 0 2730 . 0 4050 . 0
3200 . 0 2720 . 0 4080 . 0
3140 . 0 2730 . 0 4130 . 0
) 3 (
3
P P
(
(
(

= =
3290 . 0 2727 . 0 4083 . 0
3284 . 0 2728 . 0 4088 . 0
3174 . 0 2727 . 0 4099 . 0
) 4 (
4
P P
(
(
(

= =
3283 . 0 2727 . 0 4089 . 0
3282 . 0 2727 . 0 4090 . 0
3180 . 0 2727 . 0 4093 . 0
) 5 (
5
P P
M
(
(
(

= =
3282 . 0 2727 . 0 4091 . 0
3282 . 0 2727 . 0 4091 . 0
3182 . 0 2727 . 0 4091 . 0
) 6 (
6
P P

0.4
S1 S2
S3
0.3
0.3
0.4
0.2
0.3
0.4
0.5
0.2
Discrete Time Random Processes
67
(
(
(

=
3282 . 0 2727 . 0 4091 . 0
3282 . 0 2727 . 0 4091 . 0
3182 . 0 2727 . 0 4091 . 0
) 20 ( P
(c) The state probabilities are given by
n T
n P p P = ) ( . Thus,
] 3400 . 0 2700 . 0 3900 . 0 [ ) 1 ( = = P p p
T T

with | | 4 . 0 3 . 0 3 . 0 ) 0 ( = = p P
T
.
] 3220 . 0 2730 . 0 4050 . 0 [ ) 2 (
2
= = P p p
T T

] 3150 . 0 2727 . 0 4083 . 0 [ ) 3 (
3
= = P p p
T T

] 3183 . 0 2727 . 0 4089 . 0 [ ) 4 (
4
= = P p p
T T

M
] 3182 . 0 2727 . 0 4091 . 0 [ ) 5 (
4
= = P p p
T T

] 3182 . 0 2727 . 0 4091 . 0 [ ) 20 (
20
= = P p p
T T

4.12 (a)














0.5
S1 R S2 N
S3 S
0.25
0.25
0.5
0.25
0.25
0.5
0.5
Signal detection and estimation
68
(b)
(
(
(

=
500 . 0 250 . 0 250 . 0
500 . 0 000 . 0 500 . 0
250 . 0 250 . 0 500 . 0
Snow
Nice
Rain
) 1 (
Snow Nice Rain
P

(
(
(

=
438 . 0 188 . 0 375 . 0
375 . 0 250 . 0 375 . 0
375 . 0 188 . 0 438 . 0
Snow
Nice
Rain
) 2 (
Snow Nice Rain
P

(
(
(

=
406 . 0 203 . 0 391 . 0
406 . 0 188 . 0 406 . 0
391 . 0 203 . 0 406 . 0
Snow
Nice
Rain
) 3 (
Snow Nice Rain
P

(
(
(

=
402 . 0 199 . 0 398 . 0
398 . 0 203 . 0 398 . 0
398 . 0 199 . 0 402 . 0
Snow
Nice
Rain
) 4 (
Snow Nice Rain
P

(
(
(

=
400 . 0 200 . 0 399 . 0
400 . 0 199 . 0 400 . 0
399 . 0 200 . 0 400 . 0
Snow
Nice
Rain
) 5 (
Snow Nice Rain
P

(
(
(

=
400 . 0 200 . 0 400 . 0
400 . 0 200 . 0 400 . 0
400 . 0 200 . 0 400 . 0
Snow
Nice
Rain
) 6 (
Snow Nice Rain
P

We observe that after 6 days of weather predictions, we have probability of Rain =
0.4, probability of Nice = 0.2 and probability of Snow = 0.4 no matter where the
chain started. Therefore, this chain is a regular Markov chain.
Discrete Time Random Processes
69
(c) Using
n T
n P p P = ) ( , we have
P p p
T T
= ) 1 ( with | | 1 . 0 2 . 0 7 . 0 ) 0 ( = = p P
T
.
Therefore, ] 325 . 0 200 . 0 475 . 0 [ ) 1 ( =
T
p .
] 381 . 0 200 . 0 419 . 0 [ ) 2 (
2
= = P p p
T T

] 395 . 0 200 . 0 404 . 0 [ ) 3 (
3
= = P p p
T T

] 399 . 0 200 . 0 401 . 0 [ ) 4 (
4
= = P p p
T T

M
] 400 . 0 200 . 0 400 . 0 [ ) 5 (
5
= = P p p
T T

] 400 . 0 200 . 0 400 . 0 [ ) 20 (
20
= = P p p
T T

Hence, the steady state distribution vector is
(
(
(

=
(
(
(

=
4 . 0
2 . 0
4 . 0
3
2
1
P
4.13 (a) This is a two-state Markov chain as shown below







(b) To verify that it is true by induction, we must verify that it is true for
1 = n first, then assuming it is true for n yields it is true for 1 + n . That is,
) 1 ( ) 1 ( ) 1 (
n
n P P P = + must be verified. Since ) 1 ( ) (
n
n P P = , for 1 = n , we have
S0 S1
a
b
1-b 1- a





Chapter 5


Statistical Decision Theory

5.1 (a) The LRT is
<
>
=
0
1
0 |
1 |
) | (
) | (
) (
0
1
H
H
H y f
H y f
y
H Y
H Y


We observe that for ) 2 ln(
2 / 1
2 0
2
1
2
1

<
>

<
>


H
H
y
H
H
e
y
y
, while for 2 > y , we
always decide H
0
.

(b) For minimum probability of error criterion 0
11 00
= = C C and
1
10 01
= = C C
(i) = = = 693 . 0 ) 2 ln( 1
2
1
0
P choose H
0
for 693 . 0 0 y ;
otherwise choose H
1
. The minimum probability of error is
355 . 0
2
1
) (
693 . 0
0
0
2
693 . 0
1
= + =


dy P dy e P P
y

72
) | (
0
0
|
H y f
H Y
) | (
1
1
|
H y f
H Y
2
1

0.693 1 2
1
y
Statistical Decision Theory

73
(ii) Similarly, =
3
2
1
P choose H
1
for 2 39 . 1 y and 308 . 0 ) ( = P .
(iii)
3
1
1
= P ,
<
>
0
0
1
H
H
y always decide H
1
and 288 . 0 ) ( = P .

5.2 (a)
<
>
=
0
1
0 |
1 |
) | (
) | (
) (
0
1
H
H
H y f
H y f
y
H Y
H Y




(i)
2
1
< ,


> ) ( y always decide H
1




(ii)
2
1
> ,


< ) ( , 1 0 y y decide H
0

> ) ( , 2 1 y y decide H
1

2
1
1 2

) ( y
2
1
1 2
y
) ( y

2
1
y
) ( y

1 2
Signal detection and estimation
74

(iii) ,
2
1
=



decide H
1
or H
0
in at the range 1 0 y and decide H
1
for 2 1 < y .
(b) (i)
2
1
< , the probability of false alarm is
1 1 ) | (
1
0
0 |
1
0
= = =

dy dy H y f P
Z
H Y F
.
The probability of detection is
0 1 1
2
1
) | (
2
0
1 |
1
1
= = = = =
D M
Z
H Y D
P P dy dy H y f P
(ii)
2
1
> , 0 0 ) | (
2
1
0 |
1
0
= = =

dy dy H y f P
Z
H Y F
and
2
1
2
1
2
1
) | (
2
1
1 |
1
1
= = = =
M
Z
H Y D
P dy dy H y f P
5.3 Minimum probability of error criterion 1 and 0
10 01 11 00
= = = = C C C C .

(a) The conditional density functions are
( )
(
(


=
2
2
0 |
2
exp
2
1
) | (
0
A y
H y f
H Y

( )
(
(


=
2
2
1 |
2
exp
2
1
) | (
1
A y
H y f
H Y

2
1
=
y
) ( y
1 2
Statistical Decision Theory

75
( )
( ) ( )
1
0
2
0
1
1
0
0
1
2
2
2
2
1
0
0
1
2 2
2 2
|
|
ln
2
ln
2 2
ln
] 2 / ) ( exp[
] 2 / ) ( exp[
) (
0
1
P
P
A
H
H
y
P
P
H
H
A y A y
y
P
P
H
H
A y
A y
f
f
y
H Y
H Y

<
>

<
>

+
+

=
=
<
>
+

= =


(b)
A A
H
H
y
P
P
2 2
0
1
0
1
549 . 0
3 ln
2

3
=
<
>
=
0
0
1
0 1
H
H
y P P
<
>
=

A A
H
H
y
P
P
2 2
0
1
0
1
256 405 . 0
3
5
=
<
>
=

As P
1
increases
D
P increases and
F
P increases, but
F
P increases at a faster
rate.

5.4 The received signals under each hypothesis are
N A Y H
N Y H
N A Y H
+ =
=
+ =
:
:
:
2
0
1

-A
H1 H0
A 0
) | (
0
0
|
H y f
H Y
) | (
1
1
|
H y f
H Y
y
Signal detection and estimation
76

(a) By symmetry, we observe that the thresholds are and , and
) | (error ) | (error
2 1
H P H P =
( )


(
(


= dy
A y
H P
2
2
1
2
exp
2
1
) | (error
( )


(
(


= dy
A y
H P
2
2
2
2
exp
2
1
) | (error



|
|
.
|

\
|


=
|
|
.
|

\
|


+
|
|
.
|

\
|


=
dy
y
dy
y
dy
y
H P
2
2
2
2
2
2
0
2
exp
2
1
2
2
exp
2
1
2
exp
2
1
) | (error

But ) | (error ) | (error
2 1
H P H P = and hence,

(
(

|
|
.
|

\
|

+
(
(


dy
A y
dy
y
dy
A y
P
2
2
2
2
2
2
2
) (
exp
2
exp 2
2
) (
exp
2
1
3
1
(error)

-A

H0
A 0
-
H1 H2
y
) | (
0
0
|
H y f
H Y
) | (
2
2
|
H y f
H Y
) | (
1
1
|
H y f
H Y
Statistical Decision Theory

77

(
(

+
|
|
.
|

\
|


dy
A y
dy
y
2
2
2
2
2
) (
exp
2
exp
2
1
3
2

Now,
( )
2
0
2
exp
2
exp 0
2
2
2
2
A A P
e
= =
(
(


+
|
|
.
|

\
|


(b) Substituting for the value of
2
A
= and solving the integrals we obtain
|
|
.
|

\
|

=
|
|
.
|

\
|

=
2
3
4
2 2
erfc
3
2
(error)
A
Q
A
P
5.5 (a)

+

+
= =
otherwise , 0
3 1 ,
8
3
8
1
1 1 ,
4
1
1 3 ,
8
3
8
1
) ( ) ( ) | (
1 |
1
y y
y
y y
n f s f H y f
N S H Y

as shown below




The LRT is then

< <
+

+

=
3 2 ,
2 1 ,
2
3
2
1
1 1 , 1
1 2 ,
2
3
2
1
2 3 ,
) (
y
y y
y
y y
y
y
1/4
1 2 3 -1 -2
-3
y
) | (
1
1
|
H y f
H Y
Signal detection and estimation
78
(i)
4
1
= , we have



(y) > always decide H
1
.
(ii) =1



2 cases: decide H
1
when

< < <


< <
=
3 2 and 1 1 and 2 3 when
2 1 and 1 2 when
) (
1
0
y y y H
y y H
y
or, decide H
0
when

< <
<
=
3 2 and 2 3 when
2 2 when
) (
1
0
y y H
y H
y
(iii) 2 =




decide H
0
when 2 2 y since 2 ) ( = < y
decide H
1
when 2 3 y and 3 2 y since > ) ( y
1/2
1 2 3 -1 -2
-3
y
1

) ( y
4 / 1 =
1/2
1 2 3 -1 -2
-3
y
1

=1
1/2
1 2 3 -1 -2
-3
y
1

=2
) ( y
Statistical Decision Theory

79
(b)

=
1
0
) | (
0 |
Z
H Y F
dy H y f P and

=
1
1
) | (
1 |
Z
H Y D
dy H y f P
(i) 1
4
1
= = =
D F
P P
(ii) 625 . 0 and
2
1
1 = = =
D F
P P or, 125 . 0 and 0 = =
D F
P P
(iii) 125 . 0 and 0 2 = = =
D F
P P

(c) The ROC is shown below

5.6 (a) 0 all for ) , ( ) (
0 0
0 0
0
= =

= =



s e dn e dn e
N
dn n s f s f
s
N
s
N
s
SN S

0
0
0
0
0 ,
1
) ( N n
N
ds e
N
n f
s
N
=


(b) = ) ( ) ( ) , ( n f s f n s f
N S SN
S and N are statistically independent.
(c)

= =
0
) ( ) ( ) ( ) ( ) ( d y f f n f s f y f
S N N S Y

Solving the convolution as shown in Chapter 2 in detail, we obtain

PD
PF
1
1
1/2
1/2
0.625
0.162
2
1
=
1
2
1
< <
1 >
Signal detection and estimation
80
| | { }

<

=

y N y N y
N
N y e
N
y f
y
Y
0 0
0
0
0
, ) exp( ) ( exp
1
0 , ) 1 (
1
) (


5.7 (a) The LRT is
=
|
|
.
|

\
|

<
>
=
<
>

2
ln
2
1
) (
2
1
2
1
) (
0
1
2
0
1
2
2
H
H
y y y T
H
H
e
e
y
y
y

Solving, + = = = 2 1 1 0
2
1
) (
2
y y y y T as shown below


To determine the decisions, we observe that we have 3 cases:
(i)
2
1
, (ii) 0
2
1
< < and (iii) 0 >
T(y)
y
-1/2
0 1 2 -1 -2
fY (y)
y
N0
) 1 (
1
0
0
N
e
N


Statistical Decision Theory

81
(i)
e 2 2
1






> ) ( y T always decide H
1







(ii)
2 2
0
2
1
<

< <
e


+ = 2 1 1
1
y , + + = 2 1 1
2
y , + = 2 1 1
3
y and + + = 2 1 1
4
y

Decide H
1
when
1
y y ,
3 2
y y y and
4
y y .
Decide H
0
when
2 1
y y y < < and
4 3
y y y < < .

(iii)
2
0

> >







y
-1/2
1
2
-1
-2

y1 y2 y3 y4
H1
H1
H1
H0 H0
T(y)
y
-1/2
0 1 2 -1 -2

y
-1/2
0 1
2
-1
-2

y1 y2
H1 H1 H0
Signal detection and estimation
82
+ = 2 1 1
1
y and + + = 2 1 1
2
y .

Decide H
1
when
1
y y and
2
y y
Decide H
0
when
2 1
y y y < <

(b) 47 . 0 2
3
2
1
0
0
= = =
P
P
P
02 . 0 ) 393 . 2 ( 2
2
1
2
1
) | ( ) | (
2 1 1
2
2 1 1
2
0 | 0 1
2 2
1
0
= =

=
= =

+ +

Q dy e dy e
dy H y f H H P P
y y
Z
H Y F

09 . 0
2
1
2
1
) | ( ) | (
2 1 1
2 1 1
1 | 1 1
1
1
= + = = =


+ +

+

+
dy e dy e dy H y f H H P P
y y
Z
H Y D

(c) ) 2 1 1 ( 2 + + = Q P
F

) 2 1 1 exp( 1 1 )] 2 1 1 ( exp[ + = = + + =
D M D
P P P
The optimum threshold
opt
is obtained when
M F
P P = , or
) 2 1 1 ( 2 ) 2 1 1 exp( 1
opt opt
Q + + = +
(d) 2 . 0
2
1
2
1
1
2
1
2
2 2
=

dy e dy e P
y y
F

or, 0 22 . 0 2 . 0
2
erf 1 . 0 ) ( 2 . 0 ) ( 2
1
1
1 1
> =
|
|
.
|

\
|
= = Q Q ; that is the
decision regions as given in (a) part (iii).



Statistical Decision Theory

83
5.8
(
(

=
2
) 1 (
exp
2
1
) | (
2
1 |
1
y
H y f
H Y

|
|
.
|

\
|

=
2
exp
2
1
) | (
2
0 |
0
y
H y f
H Y

(a) The LRT is
2
1
1
2
exp
2
1
2
) 1 (
exp
2
1
) (
0
1
0
1
2
2
H
H
y
H
H
y
y
y
<
>

<
>
(
(

(
(

=
(b) 005 . 0
2
1
) | (
2
0 |
2
1
0
=

= =

dy e dy H y f P
y
Z
H Y F

581 . 2 005 . 0
2
erf 1
2
1
) ( =
(
(

|
|
.
|

\
|
= Q
(c)


=
=

=
(
(

=
1
2
581 . 2
2
013 . 0 ) 581 . 1 (
2
1
2
) 1 (
exp
2
1
2
Q dx e dy
y
P
x
D


5.9 The LRT is

<
>
|
|
.
|

\
|

(
(

= =

=
=
0
1
1
2
2
1
2
2
0 |
1 |
2
exp
2
1
2
) (
exp
2
1
) | (
) | (
) (
0
1
H
H
y
m y
H f
H f
y
K
k
k
K
k
k
H
H
y
y
Y
Y

4 4 3 4 4 2 1
3 2 1

=
+

<
>

2
ln
2
0
1
) (
1
Km
m
H
H
y
y T
K
k
k
as given in Example 5.2. Hence,
<
>
0
1
) (
H
H
y T .

Signal detection and estimation
84
5.10

=
|
|
.
|

\
|


=
K
k
k
H
y
H f
1
2
0
2
0
0 |
2
exp
2
1
) | (
0
y
Y

=
|
|
.
|

\
|


=
K
k
k
H
y
H f
1
2
1
2
1
1 |
2
exp
2
1
) | (
1
y
Y

=

<
>
=
K
k
k
H
H
y T
1
0
1
2
) ( y where,
|
|
.
|

\
|




=
1
0
2
0
2
1
2
1
2
0
ln ln
2
K from Example 5.9.

5.11 (a) The probability of false alarm is
|
|
.
|

\
|

=

= =

0
2
0
0 |
0
2
1
0
2
1
) | ( Q dy e d H f P
y
Z
H F
y y
Y

where, 1 and ln ln
2
1
0
2
0
2
1
2
1
2
0
=
|
|
.
|

\
|




= K .
|
|
.
|

\
|

=
|
|
.
|

\
|

=

= =

1 1
2
1
1
2
1
1
1
2
Q P Q dy e P P
M
y
M D

(b) The ROC is P
D
versus P
F
. For 2 2
2
0
2
1
= = , we have
(
(


=
2
) 2 ln( 4
Q P
D
and )] 2 ln( 4 [ = Q P
F
for various values of . Hence,



1/2
1/2
PD
PF
Statistical Decision Theory

85
(c) The minimax criterion when 0
11 00
= = C C and 1
10 01
= = C C yields
M F
P P = . Hence,
|
|
.
|

\
|

=
|
|
.
|

\
|

0 1
1
opt opt
Q Q .
5.12 (a)
|
|
.
|

\
|


=
2
2
0 |
2
exp
2
1
) | (
0
y
H y f
H Y


(
(


=
2
2
1 |
2
) (
exp
2
1
) | (
1
m y
H y f
H Y


(
(


=
2
2
2 |
2
) (
exp
2
1
) | (
2
m y
H y f
H Y


The receiver based on the minimum probability of error selects the hypothesis
having the largest a posteriori probability ) | ( y H P
j
, where
) (
) ( ) | (
) | (
|
y f
H P H y f
y H P
Y
j j H Y
j
j
=
3
1
) ( =
j
H P and ) ( y f
Y
is common to all a posteriori probabilities We choose
H
j
for which ) | (
| j H Y
H y f
j
is largest. This is equivalent to choosing H
j
for which
) (
j
m y is smallest. Hence, we have

(b) The minimum probability of error is

= =
= =
3
1
3
1
) | (
3
1
) | ( ) ( ) (
j
j
j
j j
H P H P H P P where,
H0 H2 H1
-m m
y
-m/2 m/2
0
Signal detection and estimation
86
|
.
|

\
|

=
(
(


=
|
|
.
|

\
|
> =

2
2
1
2
) (
exp
2
1
2
) | (
2 /
2
2 /
2
2
1
1 1
2
m
Q dx e
dy
m y
H
m
y P H P
m
x
m

By symmetry, ) | ( ) | (
3 1
H P H P = and
dy e dy e
H
m
Y
m
Y P H
m
Y P H P
m
y
m
y


+

=
|
|
.
|

\
|
> < =
|
|
.
|

\
|
> =
2 /
2
2 /
2
0 0 0
2
2
2
2
2
1
2
1
2
and
2 2
) | (

|
.
|

\
|

= |
.
|

\
|

2 3
4
) (
2
2
2
1
2
2 /
2
2
m
Q P
m
Q dx e
m
x

(c) The conditional density functions become
2
0 |
2
0
2
1
) | (
y
H Y
e H y f

=
(
(

=
2
) 1 (
exp
2
1
) | (
2
1 |
1
y
H y f
H Y

8
2 |
2
2
2 2
1
) | (
y
H Y
e H y f

=
The boundary between H
0
and H
1
is
2
1
= y , while the boundary between H
0
and
H
2
is obtained from
36 . 1 ) | ( ) | (
8 2
2 | 0 |
2 2
2 0
= =

y e e H y f H y f
y y
H Y H Y

For the boundary between H
1
and H
2
, we have
Statistical Decision Theory

87
18 . 0 and 85 . 2
2
1
) | ( ) | (
2 1
2 2
) 1 (
2 | 1 |
2 2
2 1
= =

y y e e H y f H y f
y y
H Y H Y



) | ( ) ( ) (
3
1
j
j
j
H P H P P =

=
where,
527 . 0 ) 0 ( ) 36 . 1 (
2
1
2
1
) | ( ) | (
0
2
36 . 1
2
0 | 0
2 2
2 1
0
= + =

+

= =


Q Q
dy e dy e dy H y f H P
y y
Z Z
H Y
U
29 . 0 ) 85 . 1 (
2
1
2
1
2
1
2
1
2
1
) | ( ) | (
85 . 1
2
2 / 1
2
85 . 2
2
) 1 (
2 / 1
2
) 1 (
1 | 1
2 2
2 2
2 0
1
= + |
.
|

\
|
=

= =

Q Q
dx e dx e
dy e dy e dy H y f H P
x x
y y
Z Z
H Y
U

H0 H2
H1
y
2.85 1.36 1 0.5
0 -0.18 -1.36
H2
-1 2
2.85 1.36 1
0.5
y
0 -0.18 -1.36
) | (
1 1 |
H y f
H Y
) | (
0
0
|
H y f
H Y

) | (
2
2
|
H y f
H Y

Signal detection and estimation
88
1 ) 7 . 5 ( ) 72 . 2 ( 1
) 7 . 5 ( ) 72 . 2 (
2
1
2
1
2
1
2 2
1
) | ( ) | (
7 . 5
2
72 . 2
2
7 . 5
72 . 2
2
85 . 2
36 . 1
8
2 | 2
2 2
2 2
1 0
2
=
=

= =

Q Q
Q Q dx e dx e
dx e dy e dy H y f H P
x x
x y
Z Z
H Y
U

6 . 0 )] | ( ) | ( ) | ( [
3
1
) (
2 1 0
+ + = H P H P H P P
5.13

=
|
|
.
|

\
|


=
K
k
k
H
y
H f
1
2
2
0 |
2
exp
2
1
) | (
0
y
Y

= (
(

+
=
K
k
m
k
m
H
y
H f
1
2 2
2
2 2
1 |
) ( 2
exp
1
) | (
1
y
Y

1
) ( 2
exp ) (
) | (
) | (
0
1
2 2 2
2
2 /
2 2
2
0 |
1 |
0
1
H
H
H f
H f
m
m T
K
m
H
H
<
>
(
(

+

|
|
.
|

\
|
+

= y y y
y
y
Y
Y

Taking the logarithm on both sides and rearranging terms, we obtain the decision
rule

|
|
.
|

\
|

+
<
>
2
2 2
2
2 2 2
0
1
ln
) (
m
m
m T
K
H
H
y y
or,

<
>

=
0
1
1
2
H
H
y
K
k
k


Statistical Decision Theory

89
5.14 The conditional density functions are
| |
(
(

+
= y y y
Y
T
m
K
m
H
H f
) 1 ( 2
1
exp
) 1 ( 2
1
) | (
2 2 /
2
1 |
1

where | |
T
K
y y y L
2 1
= y and
|
.
|

\
|

= y y y
Y
T
K
H
H f
2
1
exp
) 2 (
1
) | (
2 /
0 |
0

The LRT is then

<
>

(
(

|
|
.
|

\
|
+
=
0
1
2
2
2 /
2
) 1 ( 2
exp
1
1
) (
H
H
m
m T
K
m
y y y
Taking logarithm
<
>
|
|
.
|

\
|
+
+
+

ln
1
1
ln
2
) 1 ( 2
0
1
2 2
2
H
H
K
m m
m T
y y
1
2
2
2
0
1
) 1 ln(
2
) 1 ( 2
+ +

+
<
>

m
m
m T
K
H
H
y y
or,
1
0
1
1
2

<
>

=
H
H
y
K
k
k

We observe that the LRT does not require knowledge of
2
m
to make a decision.
Therefore, a UMP test exists.

Signal detection and estimation
90
5.15

=
|
|
.
|

\
|

=
K
k
k
H
y
H f
1
2
0 |
2
exp
2
1
) | (
0
y
Y

= (
(

=
K
k
k
H
m y
H f
1
2
1 |
2
) (
exp
2
1
) | (
1
y
Y

(a) |
.
|

\
|
=
<
>
=
=
K
k
k
H
H
m
K
y m
H
H
H f
H f
1
2
0
1
0 |
1 |
2
exp ) (
) | (
) | (
) (
0
1
y
y
y
y
Y
Y

or,
+
<
>

=
m
Km
H
H
y
K
k
k
2
ln 2
2
0
1
1
. Therefore, a test can be conducted without
knowledge of m A UMP test exists.
(b) = 05 . 0
F
P The test decides H
0
when > =

=
K
k
k
y y T
1
) ( , where T is
Gaussian with mean zero and variance K under H
0
. Hence,
05 . 0
2
erf 1
2
1
) | (
0 |
0
=
(
(

|
|
.
|

\
|
=
|
|
.
|

\
|
= =

K K
Q dt H t f P
H T F

Using 9 . 0 ) | (
1 |
1
> =

dt H t f P
H T D
where T is Gaussian with mean Km under H
1
,
we obtain from the table in the appendix 16 K .
5.16 Since the observations are independent, the LRT becomes

<
>
(
(

|
|
.
|

\
|

+ + +
|
|
.
|

\
|

= =
0
1
0 1
2 1
1
0
0 |
1 | 1 1
) ( exp
) | (
) | (
) (
0
1
H
H
y y y
H f
H f
K
K
H
H
L
y
y
y
Y
Y

Statistical Decision Theory

91

<
>
(
(

|
|
.
|

\
|


|
|
.
|

\
|

=

=
0
1
1 0
0 1
1 1
0
exp ) (
H
H
y
K
k
k
K
y
Taking the natural logarithm and simplifying the expression, we obtain
=
|
|
.
|

\
|




<
>
=

=
K
k
k
H
H
y T
1 0
1
0 1
1 0
0
1
ln ln ) ( y
For a UMP test of level , we need
05 . 0 ] | ) ( [
0
= > H T P Y or 95 . 0 ] | ) ( [
0
= H T P Y
We now determine the distribution of the test statistic ) (Y T using the
characteristic function such that
| | | | | | | | | |
) ( ) ( ) (
) (
2 1
2 1 2 1
) (
=
= = =
+ + +
K
K K
y y y
Y j Y j Y j Y Y Y j j
t
e E e E e E e E e E
L
L
L Y

since the Y
k
s, K k , , 2 , 1 L = are statistically independent. From (2.93),
K
j

= ) 1 ( ) (
t
. Hence, from (2.102), ) (Y T is a gamma distribution
) , ( P K G with density function

>
=

otherwise , 0
0 ,
) (
1
) (
/ 1
t e t
K t f
t K
K
T

Therefore, for 21 = K , (see Table 9 page 456, Dudewicz
1
)
62 . 290 062 . 29 95 . 0 ] | ) ( [
0
0
= =

= H T P Y
The test decides H
1
(rejects H
0
) if 62 . 290 ) ( > Y T

1 Dudewicz, E. J., Introduction to Statistics and Probability, Holt, Rinehart and Winston, New York,
1976.
Signal detection and estimation
70
(

=
(
(
(
(

+
+
+
+ +
+
+ +
+
+
=
b b
a a
b a
b ab b a
b a
b ab b b
b a
ab a a a
b a
ab a a b
1
1
) 1 (
2 2
2 2
P
(
(
(
(

+
+
+

+

+
+
(

= = +
b a
b a b a
b a
b a b b
b a
b a a a
b a
b a a b
b b
a a
n
n n
n n
n
) 1 ( ) 1 (
) 1 ( ) 1 (
1
1
) 1 ( ) 1 ( ) 1 ( P P P
Let b a x =1 , then
(
(
(
(

+
+
+

+
+
(

= +
b a
bx a
b a
bx b
b a
ax a
b a
ax b
b b
a a
n
n n
n n
1
1
) 1 ( P
) 1 (
1
) 1 ( ) 1 (
) 1 ( ) 1 ( 1
1
1 1
1 1
2 2 2 2
2 2 2 2
+ =
(
(

+
+
+
=
(
(


+
+
=
(
(

+ + + + +
+ + + + +
+
=
+ +
+ +
n
bx a bx b
ax a ax b
b a
b a bx a b a bx b
b a ax a b a ax b
b a
x b ab bx a abx ab x b b bx b abx b
abx a x a a ax a abx ab x a ab ax b
b a
n n
n n
n n
n n
n n n n n n
n n n n n n
P

and ) 1 ( ) 1 ( ) 1 (
n
n P P P = + is verified.
The limiting transition matrix is
(
(
(

+ +
+ +
=
(

+
=

b a
a
b a
b
b a
a
b a
b
a b
a b
b a
n
n
1
) ( lim P
if 1 < x . Note that b a x = 1 and thus, 1 < x requires 1 0 < < a and
1 0 < < b .

Discrete Time Random Processes
71
(c) For the special case, 0 = = b a , we have
) 0 (
1 0
0 1
) 0 ( ) ( P P P =
(

=
n
n
Also, for 1 = = b a , we have
n
n
(

=
0 1
1 0
) 0 ( ) ( P P
(

= =
1 0
0 1
) 1 ( 1 P n
) 0 (
0 1
1 0
) 2 ( 2 P P =
(

= = n
Continuing for all values of n, we observe that

=
=
odd for ,
0 1
1 0
) 2 (
even for , ) 0 (
) (
n
n
n P
P
P

and thus, the limiting state probabilities do not exist.





Chapter 6


Parameter Estimation


6.1
k k k
Z bx a Y + + = . Y
k
is Gaussian with mean
k
bx a + and variance
2
.
Since
K
Y Y Y , , ,
2 1
L are statistically independent, the likelihood function is

=
= =
K
k
k Y
y f b a L f
k
1
) ( ) , ( ) ( y
Y

( ) )
`


=

=
K
k
k k
K
bx a y b a L
1
2
2
)] ( [
2
1
exp
2
1
) , (
Taking the logarithm, we have

=
K
k
k k
bx a y
K
K b a L
1
2
2
) (
2
1
2 ln
2
ln ) , ( ln

Hence,
0
) , ( ln ) , ( ln
=

b
b a L
a
b a L

=
k
k k k
k k k k
k
x
x x
K
x
x y
K
x y
K
y
K
a
1
1
1 1

2


92
Parameter Estimation

93

=
k k k
k k k k
x x
K
x
x y
K
x y
b
1
1



6.2 The conditional density function is
|
|
.
|

\
|

2
2
2
exp
2
1
) (
y
y f
Y

(a)
2
2
2
ln 2 ln
2
1
) ( ln

y
y f
Y
. Hence,
) (
1 1
) ( ln
2 2
3 3
2

y
y
y f
Y
, which cannot be written as
| | ) ( ) ( y c No efficient estimate exists for .
(b) ) (
1
2
1
) (
) ( ln
2 2
4 2
2
2 2

y
y
y f
Y
| |
2 2 2
) ( ) ( y c
Therefore, an efficient estimate for
2
exists.
Note that | | | | = =
2 2 2 2 2
Y E E The estimate is unbiased.

6.3 (a) The likelihood function is ) ( ) ( ) ( ) (
2 1
2 1
y f y f f m L
Y Y
= = y
Y
since Y
1
and
Y
2
are statistically independent ( ) ( ) | |
)
`

=
2
2
2
1
3
2
1
exp
2
1
) ( m y m y m L .
( ) ( )
2
2
2
1
3
2
1
2
1
2 ln ) ( ln m y m y m L =
10
3
0
) ( ln
2 1
y y
m
m
m L +
= =


The statistic is ) 3 (
10
1
) (
2 1
Y Y m = Y
Signal detection and estimation
94
(b) + = + = + ) 3 ( 3 ] [
2 1 2 1 2 2 1 1
a a m m a m a Y a Y a E m if unbiased. Thus, we
must have 1 ) 3 (
2 1
= + a a .

6.4 (a) The likelihood function is
|
|
.
|

\
|

=

= =

K
k
k
K
K
k
y
y e f
k
1 1

1
exp
1 1
) ( y
Y

Taking the logarithm

=
K
k
k
y K f
1

1
ln ) ( ln y
Y

Hence,

= =
= =

=

K
k
k ml
K
k
k
y
K
y
K
f
1 1
2
1

0
1
) ( ln y
Y


(b) | | = =
(

=
=
K
K
Y
K
E E
K
k
K
1 1

1
. Therefore, the estimator is unbiased.
(c) To determine the Cramer-Rao bound, we solve
2 2 2
1
3 2 2

2
2 2
) ( ln

=
(

=
(
(

=
K K K
Y
K
E
f
E
K
k
k
y
Y

(d) The variance of
ml

is
| | | |


= =
= = =
=
(

=
(
(

|
|
.
|

\
|
= =
K
k
K
k
k k
K
k
K
k
K
k
k
Y Y C
K
Y Y
K
E Y
K
E E
1 1
2
1 1
2
2
1
2
) , (
1
1 1
)

( )

( var
l
l

Since the observations are independent
2
1
]

var[
y ml
K
=
Parameter Estimation

95
where
2 2 2 2
] [ ] [ ] var[ = = = Y E Y E Y
y
. Hence,

=
K
ml
2
]

var[ Cramer-Rao
bound. Hence, the estimator is consistent.

6.5 (a) Y is binomial np Y E = ] [ . An unbiased estimate for p is
Y
n
Y p
n
Y
E Y E
1

[ = =
(

= .
(b) ( )
2
)

var(

>
Y
p Y P where | |
n
p p
p Y E Y
) 1 (
)

( ]

var[
2

= = . Thus,
( )

> n
n
p p
p Y p as 0
) 1 (

2
. Therefore, Y

is consistent.

6.6
( )

= =
(

=
(
(


=
K
k
K
k
k
K
k
m y
m y
f
1 1
2
2 2 /
2
2
2
) (
2
1
exp
2
1
2
) (
exp
2
1
) ( y
Y


Let =
2

=
K
k
k
m y
K
m L
1
2
) (
1
2 ln
2
) , (
We need 0
) , ( ln ) , ( ln
=


=


m
m L m L

Applying

=
= =


K
k
k
y
K
m
m
m L
1
1
0
) , ( ln

and

=
= =


K
k
k
m y
K
m L
1
2
) (
2

0
) , ( ln
where

=
=
K
k
k
y
K
m
1
1


6.7 (a) 0
) (
2
) (
exp
2
1
) (
2
1
2
2
=

(
(

=
x
x f
x y
x f
X
k
k
X
y
y
Y
Y
yields
2
2 1
+ y y . Consequently,
Signal detection and estimation
96

+ +
+
+
=
2 if , ) (
2
1
2 if , 1
2 if , 1

2 1 2 1
2 1
2 1
y y y y
y y
y y
x
ml

(b) | | A x x Y Y E x E
ml
= + = + = ] [
2
1
] [
2
1
) (
2 1
Y . Therefore, ) ( Y
ml
x is
unbiased.
6.8 (a) The likelihood function is given by

=
=

=
= K
k
K
k
k
y
K
k
y
y
e
y
e f
K
k
k
k
1
1

!
!
) (
1
y
Y

Taking the logarithm, we have
|
|
.
|

\
|
+ =

= =
K
k
k
K
k
k
y y K f
1 1
! ln ln ) ( ln y
0
1
) ( ln
1
=

+ =

=
K
k
k
y K
f y

=
=
K
k
k ml
y
K
1
1


(b)
ml

unbiased = =
(

=

=
) (
1 1
]

[
1
K
K
Y E
K
E
K
k
k ml
which is true, since

=
K
k
k
Y
1
is also a Poisson with parameter K .
We have,
(
(

|
|
.
|

\
|

=

=
2
1
2
1
) ( ln
K
k
k
K Y E f E J y
Parameter Estimation

97
( )

=
(

=
(
(

|
|
.
|

\
|

=

= =
K
K K K
K
K E
Y Y
K
K E
K
k
k
K
k
k
2 2
2
2
2
1
2
1
2
1
) (
2
1 2

Hence, | |


K
)

( var is the Cramer-Rao bound.


6.9 (a) The conditional density function is given by

=
=
otherwise , 0
, , 2 , 1 , ,
2
1
) (

K k y
y f
k
k Y
k
L


The likelihood function is
( )

=
=
otherwise , 0
, , 2 , 1 , ,
2
1
) (
K k y
L
k
K
L

Maximizing ) ( L is equivalent to selecting as the smallest possible value while
) ( L is positive. Hence,
k
y and
k
y for K k , , 2 , 1 L = . Note that
1
y ,
2
y , ,
K
y ,
1
y ,
2
y , ,
K
y is written as
) , , , , , , , (
2 1 2 1 K K
y y y y y y L L which is true if and only if
( )
K
y y y , , ,
2 1
L . Therefore, ( )
K ml
y y y , , , max

2 1
L = .


y
fY(y)
1/2
- 0
Signal detection and estimation
98
(b) From the MLE,
( ) | | | | | | | | y y P y y P y y P y y y y P y P
K K
= = L L
2 1 2 1
, , , max )

(
| |

<
< |
.
|

\
|


= =
0 , 0
0 ,
, 1
x
y
y
y
y Y P
n
n

<

y
ny
y f
n
n
0 , ) (
1

. Hence,
1
]

[
0
1
+

=


n
n
dy
ny
y E
n
n
and thus,
the unbiased estimator is |
.
|

\
| +

1
n
n
.
6.10 (a) The likelihood function is

= =
= = =

=

=
otherwise , 0
, , 2 , 1 and 1 , 0 , ) 1 (
) ( ) ( ) (
1
1
K k y p p
p y f p f p L
k
K
k
y k y
K
k
k P
k k
L
y
Y
Ky K Ky
p p

= ) 1 ( , since the Y
k
s are i. i. d.
Taking the logarithm ) 1 ln( ) ( ln ) ( ln p Ky K p Ky p L + = and
y p
p
Ky K
p
Ky
p
p L
ml
= =

0
1
) (
0
) ( ln

(b) Solving for one sample, we have

=
(

= =
|
|
.
|

\
|

=
(

0 ,
) 1 (
1
) 1 ln(
1 ,
1
ln
) ( ln
2
2
2
2
2
y
p
p
p
y
p
p
p
p f
p
y
and
Parameter Estimation

99
) 1 (
1
1
1 1
) 1 (
1
) 1 (
1
) ( ln
2 2
p p p p
p
p
P
P p f
p
E

+ =
(
(

+ |
.
|

\
|
=
(

y
Therefore, the Cramer-Roa bound for the K independent and identical observations
is
K
p p
Y
) 1 (
] var[


6.11


= dx x f x y f y f
X X Y Y
) ( ) ( ) (
| |
)
`

+
(


=
)
`

+ +
(
(


2
2
2
2
2
2
) 1 (
2
1
exp ) 1 (
2
1
exp
2 2
1
) 1 ( ) 1 (
2
1
2
) (
exp
2
1
y y
dx x x
x y

| |
(

+
(

+ +
(

= =
2
2
2
2
2
2
) 1 (
2
1
exp ) 1 (
2
1
exp
) 1 ( ) 1 ( ) (
2
1
exp
) (
) ( ) (
) (
y y
x x x y
y f
x f x y f
y x f
Y
X X Y
Y X

As in Example 6.5

<
+
=
0 if , 1
0 if , 1

y
y
x
map

| |
dx
y y
x x x y
x dx y x xf x
Y X ms

+
(

+ +
(

= =
2
2
2
2
2
2
) 1 (
2
1
exp ) 1 (
2
1
exp
) 1 ( ) 1 ( ) (
2
1
exp
) (
2
2
2 2
2 2
2
2
1
1

=
+

=
y
y
y y
y y
e
e
e e
e e

Therefore,
map ms
x x
Signal detection and estimation
100
6.12 (a)

<

=

0 , 0
0 ,
) ( ) (
1
x
x e x
r x f
x r
r
X

X is a Gamma distribution with mean

=
r
X E ] [ and variance
2
] var[

=
r
X .
(b) (i) The marginal density function of Y is



= = dx x f x y f dx x y f y f
X X Y YX Y
) ( ) ( ) , ( ) (

4 4 4 4 3 4 4 4 4 2 1
1 function density Gamma
0
) (
1
1
0
1
) 1 (
) (
) (
) (

+
+
+

+
+
+

=

=
dx e x
r r
y
y
r
dx e x
r
xe
x y r
r
r
r
x r
r
xy

+

=
+
otherwise , 0
0 ,
) (
) (
1
y
y
r
y f
r
r
Y

Therefore,
| |

+
+ +
= =
+
4 4 4 4 4 3 4 4 4 4 4 2 1
on distributi Gamma
1
) 1 (
) ( exp ) (
) (
) ( ) (
) (
r
x y y x
y f
x f y x f
x y f
r r
Y
X Y X
X Y

MMSE estimate of x is
y
r
y X E x
ms
+
+
= =
1
] [
(ii) The variance of the estimate is ] [ ] [ ] var[
2 2
ms ms ms
x E x E x = where,

=
r
x E
ms
] [ and
) 2 (
) 1 (
] [
2
2
2
+
+
=
r
r r
x E
ms
. Hence,
2
1
] var[
2
+

=
r
r
x
ms

Parameter Estimation

101
(c)


=
=

<
>
= =
K
k
k
k
K
k
k
x K
k X Y X
y
y x y e x
x y f x f
k
1
1
0 , 0
0 , 0 ,
) ( ) ( y
Y

In order to obtain ) ( y x f
Y X
, we need
4 4 4 4 4 4 4 3 4 4 4 4 4 4 4 2 1
L
1 on distributi Gamma
1
0
1 1
1
exp
) (
) ( ) 2 )( 1 (
) ( ) ( ) , ( ) (

+ =
+
=


|
|
.
|

\
|
+
+
+
|
|
.
|

\
|
+
+ +
=
= =


dx x y x
K r
y
y
r K r K r
dx x f x f dx x f f
K
k
k
K r
K
k
k
K r
K
k
k
r
X X X
y y y
Y Y Y

|
|
.
|

\
|
+
+ +
=
+
=

otherwise , 0
0 ,
) ( ) 2 )( 1 (
) (
1
k
K r
K
k
k
r
y
y
r K r K r
f
L
y
Y

4 4 4 4 4 4 4 3 4 4 4 4 4 4 4 2 1
on distributi Gamma
1
1 1
exp
) ( ) (
) ( ) (
) (
(
(

|
|
.
|

\
|
+
+
+
= =

=
+ =
x y x
K r
y
f
x f x f
x f
K
k
k
K r
K
k
k
X X
X
y
y
y
Y
Y
Y

MMSE estimate of X is
| |

=
+
+
= =
K
k
k
ms
y
K r
X E X
1
) (

y y
(ii) The variance of the estimate is
]

[ ]

[ ]

var[
2 2
ms ms ms
X E X E X = where,

=
r
X E
ms
]

[ and
) 1 (
) 1 )( (
]

[
2
2
+ +
+ +
=
K r
K r K r
X E
ms
) 1 (
]

var[
2
+ +
=
K r
rK
X
ms
.
Signal detection and estimation
102
(d) x y x K r
K r
y
x f
K
k
k
K
k
k
X |
|
.
|

\
|
+ + +
(
(
(
(

+
+
=

=
=
1
1
ln ) 1 (
) (
ln ) ( ln y
Y

The MAP estimate is

=
+
+
= =

K
k
k
map
X
y
K r
x
x
x f
1
) 1 (
0
) ( ln y
Y
. Therefore,
ms map
x x
6.13 (a)


=

otherwise , 0
0 ,
) (
n e
n f
n
N


=
otherwise , 0
1 0 , 1
) (
x
x f
X


= dx x f x y f y f
X X Y Y
) ( ) ( ) ( where

>
=
otherwise , 0
ln , )] ln ( exp[
) (
x y x y
x y f
X Y

Hence, the marginal density function of Y is

y
e
Y
y dx x y
y dx x y
y f
0
1
0
0 , )] ln ( exp[
0 , )] ln ( exp[
) (
and
Parameter Estimation

103

<

= =


y
e
X Y ms
y dx x y x
y dx x y x
x y f x x
0
1
0
0 , )] ln ( exp[
0 , )] ln ( exp[
) (
or,

<

=
0 ,
3
2
0 ,
3
2

y
e
y
x
y
ms

(b)
map
map
x x
X
X Y
x x
Y X
x
x f
x
x y f
x
y x f

) ( ln
) ( ln ) ( ln
=
=


and

>
=
otherwise , 1
ln and 1 0 , )] ln ( exp[
) ( ) (
x y x x y
x f x y f
X X Y

The quantity is maximized when 0 ) ln ( > x y
or,

=
0 , 1
0 ,

y
y e
x
y
map

(c)



= =

0 and 0 , 2
0 and 1 0 , 2
) (
) ( ) (
) (
2
y e x xe
y x x
y f
x f x y f
y x f
y y
Y
X X Y
Y X

{
median

2
1
) ( ) ( = =



abs
abs
x
Y X
x
Y X
dx y x f dx y x f
Hence, for
2
1
2
2
1
0

=

abs
x
x dx x y
abs

Signal detection and estimation
104
and
for
2
2
2
1
0

0
2
y
abs
x
y
e
x dx xe y
abs
= =



6.14
(


= =
2
2
) 1 (
2
1
exp
2
1
) ( ) ( x y x y f x y f
N X Y


=
otherwise , 0
2 0 ,
2
1
) (
x
x f
X

2 0 for 1 0
) ( ln
) ( ln
= =

x y x
x
x f
x
x y f
map
X
X Y

6.15 Van Trees
1
shows that the mean square estimation commutes over a linear
transformation. To do so, consider a linear transformation on given by
D =
where D is an K L matrix. The cost function of the random vector is
= =
=
L
i
T
i i
1
2
) ( ) ( ] ) (

[ ] ), (

[ y y y y C
and ) (
~
y is

(
(
(
(
(
(

=
K K

) (

) (

) (

~
2 2
1 1
y
y
y
M

Following the same procedure as we did in estimating ) (

y
ms
, we obtain



1 Van Trees, H. L., Detection, Estimation, and Modulation Theory, Part I, John Wiley and Sons, New
York, 1968.
Parameter Estimation

105


= = d f E
ms
) | ( ] | [ ) (

|
y y y
Y

substituting (6.92) in (6.94), we obtain
] | [ ] | [ y D y E E =
and thus,
) (

) (

y D y
ms ms
=
6.16 can be expressed as b aY + = . Since 0 and 0 ] [ ] [ = = + =
y
m b Y aE E ,
then 0 = b
From (6.108),
y
y
a

= =

, and from (6.110), the conditional variance is given


by
| | ) 1 ( ) ( ] var[
2 2 2
= = =

Y b aY E Y
ms

Note that for this Gaussian case, the linear mean square estimate is the mean
square estimate. The conditional density function is then
(
(

) 1 ( 2
) (
exp
) 1 ( 2
1
) (
2 2
2
2 2
y
ay
y f
y
Y

6.17 The BLUE of is given by
] [ ] [

1
y yy y blue
m Y C C E + =


Using
) (
) ) (
) (
y f
f y f
y f
Y
Y
Y

=

, the conditional density function


) ( y f
Y

is then
Signal detection and estimation
106

otherwise , 0
1
2
1
, ) 8 20 (
7
1
) ( y f
Y

We compute
12
1
and
2
1
2
= = =

C m ,
9
2
and
3
4
2
= = =
n nn n
C m and
6
11
= + =
n y
m m m .
Since Y and are statistically independent, then
36
11
2 2
= + = =
n yy y
C and C C
Hence, after substitution, the best linear unbiased estimate is y
blue
11
3

= .






Chapter 7


Filtering


7.1 (a) The error is orthogonal to the data
=
(
(


, 0 ) ( ) ( ) ( ) (
0
Y d h t Y t t s E
=


, ) ( ) ( ) (
0
d h t R t t R
yy sy

Let = t
< < =
< < =


for ) ( ) (
for ) ( ) ( ) (
0
h R
d h R t R
yy
yy sy

Taking the Fourier transform, we have
) ( ) ( ) (
0
2
f H f S e f S
yy
t f j
sy
=


0
2
0
0
2
2 2
0
2
0
2
2
4 )] / 4 ( [
/ 4
) ( ) (
) (
) (
) (
) (
t f j
t f j
nn ss
ss
yy
t f j
sy
e
f N
N
e
f S f S
f S
f S
e f S
f H



+ +

=
+
= =
4 4 3 4 4 2 1

Taking the inverse Fourier transform, we obtain
107
Signal detection and estimation
108
0
2
0
4
,
2
) (
0
N
e
N
t h
t t
+ =

=




(b) The minimum mean square error is
(
(


) ( ) ( ) ( ) (
0 0
t t s d h t Y t t s E e
m

= =


d h t R R
sy ss
) ( ) ( ) 0 (
0

Using Equation (7.55)

(
(


=


df
f S
f S f S
f S e
yy
sy sy
ss m
) (
) ( ) (
) ( from
Examples 7.3 and 7.4.
7.2

=
5 . 0
) ( e R
ss
and ) ( ) ( =
nn
R

(a) From Equation (7.54) the transfer function of the optimum unrealizable
filter is
) ( ) (
) (
) (
f S f S
f S
f H
nn ss
ss
+
=
where,
2 2
4 25 . 0
5 . 0
) (
f
f S
ss
+
= and 1 ) ( = f S
nn
.
Hence,
2 2 2 2
4 75 . 0
75 . 0 2
75 . 0
25 . 0
4 75 . 0
5 . 0
) (
f f
f H
+
=
+
=

t
t0

0
2
N
h(t)
Filtering
109
Taking the inverse Fourier transform, the impulse response is


75 . 0
29 . 0 ) ( e h
(b) This is similar to Example 7.5 where 5 . 0 = and 1
2
0
=
N
. Hence,
) ( 62 . 0 ) (
12 . 1
=

u e h
(c) The minimum mean-square error for the unrealizable filter is



+
= =
+
= df
f
df f S f H df
f S f S
f S f S
e
nn
nn ss
nn ss
m
2 2
4 75 . 0
5 . 0
) ( ) (
) ( ) (
) ( ) (

Using

=
+
29 . 0
1
2 2
m
e d
x
.

For the realizable filter, the minimum mean-square error is
62 . 0 ) 62 . 0 ( 1 ) ( ) ( ) 0 (
0
12 . 1 5 . 0
= = =



d e e d h R R e
sy ss m
.
7.3 ) ( ) (
) (
2
t N t s
dt
t ds
= + and thus, we have

We see that
2 2 2 2
4 1
2
) (
4 1
1
) (
2 2
f
f S
f
f S
n n ss
+
=
+
= and
2 2
4 1
2
) (
1 1
f
f S
n n
+
= . Thus,
2 2 2 2 2 2
4 1
4
4 1
2
4 1
2
) ( ) ( ) (
1 1
f f f
f S f S f S
n n ss yy
+
=
+
+
+
= + =
1 2
1
+ f j
) (
2 2
f S
n n
) ( f S
ss
Signal detection and estimation
110
Using Laplace transform, we write
1
2
1
2
) 1 )( 1 (
4
) (
+
+

=
+

=
p p p p
p S
yy
, f j p = 2
Then,
1
2
) (
+
=
+
p
p S
yy
and
1
2
) (

p
p S
yy
. Also,
) (
1
1
) (
) (
) (
) (
p B
p
p S
p S
p S
p S
yy
ss
yy
sy
+

=
+
= =
Therefore, the transfer function and the impulse response are
) (
2
1
) (
2
1
) (
) (
) ( t t h
p S
p B
p H
yy
= = =
+
+

7.4 f j p
f
f S f S f S
nn ss yy
= +
+
= + = 2 ,
2
1
4 1
1
) ( ) ( ) (
2 2

) 1 ( 2
3
) 1 ( 2
3
) 1 ( 2
3
) (
2
2

=
+
+
=
p
p
p
p
p
p
p S
yy

2
1 2
1
1
2 2
1
2
1
1
2 /
) (
) (
) ( ) ( ) (

+
+
+
+
=

+
= = =

p
p
p
p
p
p S
p S
p pS p pS p S
yy
y s
ss sy y s

The transfer function
3
1
2 2
2
) (
) (
) (
+ +
= =
+
+
p p S
p B
p H
yy
and thus, the impulse
response is ) (
2 2
2
) (
3
t u e t h
t
+
= .


Filtering
111
7.5
2 2
4 ) 4 / 1 (
3 / 5
) (
f
f S
ss
+
= ,
2 2
4 1
3 / 7
) (
f
f S
nn
+
=
2
4 1
3 / 20
) (
p
p S
ss

= ,
2
1
3 / 7
) (
p
p S
nn

=
and
) 1 )( 4 1 (
16 9
) ( ) ( ) (
2 2
2
p p
p
p S p S p S
nn ss yy


= + =

) 1 )( 2 1 (
4 3
) 1 )( 2 1 (
4 3
p p
p
p p
p
+ +
+


=
) ( ) ( p S e p S
ss
p
sy

=
Also,
|
|
.
|

\
|
+
+

=
+ +

p p
e
p p
p
e
p S
p S
p p
yy
sy
2 1
2
4 3
3 / 2
) 4 3 )( 2 1 (
) 1 (
3
20
) (
) (

Knowing,
2 /
2 1
2
t
e
p

+
and
(

+
+

) (
2
1
exp
2 1
2
t
p
e
p
, the transfer
function is
0 ,
4 3
1
2
) (
) (
) (
2 /
>
+
+
= =

+
+
p
p
e
p S
p B
p H
yy

7.6
) 4 / 1 ( 1
2 / 1
) (

=
n
ss
n R ,

=
=
0 , 0
0 , 2
) (
n
n
n R
nn

Taking the Z-transform, we have
) 2 )]( 2 / 1 ( [
2
3
4
) ( ) (


= =
Z Z
Z
Z S Z S
ss sy
and
2 ) ( = Z S
nn
. Hence,
3
4
) 2 ( )] 2 / 1 ( [
) 186 . 3 )( 314 . 0 (
) 2 )]( 2 / 1 ( [
2 7 2
3
4
) ( ) ( ) (
) ( ) (
2
3 2 1 43 42 1
Z S Z S
nn ss yy
yy yy
Z Z
Z Z
Z Z
Z Z
Z S Z S Z S
+


=

+
= + =
Signal detection and estimation
112
Also,
43 42 1
43 42 1
) (
) (
186 . 3
372 . 2
) 2 / 1 (
372 . 0
) 186 . 3 )]( 2 / 1 ( [
2
) (
) (
Z B
Z B
yy
sy
Z Z Z Z
Z
Z S
Z S

=


=


The pulse transfer function is then
L , 2 , 1 , 0 , ) 314 . 0 ( 372 . 0 ) ( and
314 . 0
372 . 0
) (
) (
) ( = =

= =
+
+
n n h
Z
Z S
Z B
Z H
n
yy

(b) The mean square error is given by
61 . 0
314 . 0 1
1
) 372 . 0 (
3
4
3
4
) 314 . 0 (
2
1
) 372 . 0 (
3
4
3
4
) ( ) ( ) 0 (
0 0
=

=
(

= =


=

= n
n
n
sy ss m
n h n R R e

7.7
) 2 )]( 2 / 1 ( [
2
) (
2
1
) (


= =
Z Z
Z
Z S n R
ss
n
ss

1 ) (
0 , 0
0 , 1
) ( =

=
= Z S
n
n
n R
nn nn

Hence,
) 2 )]( 2 / 1 ( [
1 5 . 4
) ( ) ( ) (
2

+
= + =
Z Z
Z Z
Z S Z S Z S
nn ss yy

3 2 1
43 42 1
) (
) (
2
265 . 4
) 2 / 1 (
234 . 0
) (
Z S
Z S
yy
yy
yy
Z
Z
Z
Z
Z S

=
and
43 42 1
43 42 1
) (
) (
265 . 4
265 . 2
) 2 / 1 (
265 . 0
) 265 . 4 )]( 2 / 1 ( [
2
) (
) (
Z B
Z B
yy
sy
Z Z Z Z
Z
Z S
Z S

=


=


Filtering
113
Hence, L , 2 , 1 , ) 234 . 0 ( 265 . 0 ) (
234 . 0
265 . 0
) (
) (
) ( = =

= =
+
+
n n h
Z
Z S
Z B
Z H
n
yy

(b) The mean-square error is
n
n n
sy ss m
n h n R R e )] 5 . 0 )( 235 . 0 [( 265 . 0 1 ) ( ) ( ) 0 (
0 0


=

=
= =
7 . 0
1175 . 0 1
1
265 . 0 1 =

=
7.8 (a) From (7.113), the optimum weights are given by
ys yy
R R
1
0

=
Computing, we have
(

1456 . 1 5208 . 0
5208 . 0 1456 . 1
1
yy
R and
(

=
(


=
(

=
7853 . 0
8360 . 0
4458 . 0
5272 . 0
1456 . 1 5208 . 0
5208 . 0 1450 . 1
02
01
0

That is, 8360 . 0
01
= and 7853 . 0
02
=
(b) From (7.105), the minimum mean-square error is
0 0 0 0
2
R R R
T
ys
T T
ys s m
e + =
Substituting the values and computing, we obtain 1579 . 0 =
m
e .






Chapter 8


Representation of Signals


8.1 (a) We have,

=
|
|
.
|

\
|

T
dt
T
t k
T
T
0
0 cos
2 1

1
1 1
0
=

T
dt
T T

and

=
=
|
|
.
|

\
|

j k
j k
dt
T
t j
T T
t k
T
T
, 0
, 1
cos
2
cos
2
0

Therefore,


T
t k
T
T
cos
2
,
1
are orthonormal functions.
(b) Similarly, to verify that the set functions is orthonormal in the interval
] 1 , 1 [ , we do 1
2
1
2
1
=

T
T
dt
T T

0 cos
2
1
2 cos
1
2
1
0
=

T T
T
dt
T
t k
T
dt
T
t k
T T

and


114
Representation of Signals
115
kj
T T
T
dt
T
t j
T
t k
T
dt
T
t j
T
T
t k
T
=

=


0
cos cos
2
cos
1
cos
1

Hence, the set is orthonormal on the interval ] 1 , 1 [ .
8.2 (a) We solve 0 ) ( ) (
1
1
1
1
2 1
= =


tdt dt t s t s
2 2 ) (
1
0
1
1
2
1
= =

dt dt t s
and
3
2
2 ) (
1
0
2
1
1
2
1
1
2
2
= =

dt t dt t dt t s
Therefore, ) ( and ) (
2 1
t s t s are orthogonal.
(b) ) (
1
t s orthogonal to 3 0 ) 1 ( 1 ) (
1
1
2
3
= = + +

dt t t t s
) (
2
t s orthogonal to 0 0 ) 1 ( ) (
1
1
2
3
= = + +

dt t t t t s .
Therefore,
2
3
3 1 ) ( t t s = .
8.3 Note that = ) ( 2 ) (
1 3
t s t s We have 2 independent signals.
The energy of ) (
1
t s is thus,
T
T T
dt dt dt t s E
T
T
T T
= + = + = =

2 2
) 1 ( 1 ) (
2 /
2
2 /
0 0
2
1 1

Signal detection and estimation
116



= =
T t
T
T
T
t
T
E
t s
t
2
,
1
2
0 ,
1
) (
) (
1
1

) ( ) ( ) (
1 21 2 2
t s t s t f = where

=
T
dt t t s s
0
1 2 21
) ( ) ( . Then,
2
1
) 2 (
1
) 1 (
2 /
2 /
0
21
T
dt
T
dt
T
s
T
T
T
+ =
|
|
.
|

\
|
+
|
|
.
|

\
|
=


T t t s t s t f = = 0
2
3
) ( ) ( ) (
1 21 2 2

and
T t
T
dt
t
T
=
|
.
|

\
|

0
1
2
3
2 / 3
) (
0
2
2

(b) ) ( ) (
1 1
t T t s =
) (
2
3
) (
2
) (
2 1 2
t T t
T
t s + =
) ( 2 ) (
1 3
t T t s =
Thus, the signal constellation is

T
2
3
2
T T T 2
1

1
s
2
s
3
s
Representation of Signals
117
| | 0 ,
1
T = s
(
(

= T
T
2
3
,
2
2
s | | 0 , 2
3
T = s
8.4 ) (
) ( ) (
) (
) (
2
2
2 2
t
t
n
t
dt
t d
t
dt
t d
t
t
n
t
dt
t d
t
dt
d

|
|
.
|

\
|
+

=
|
|
.
|

\
|
+ |
.
|

\
|

) ( ) (
) ( ) (
2
2
2
2
t n t
dt
t d
t
dt
t d
t +

=
where,
| |

d t n j
dt
t d
) sin ( exp sin
) (

| |

d t n j
dt
t d
) sin ( exp sin
) (
2
2
2

After substitution in the differential equation, we have


= + + + d t n j n jt t n t t t )] sin ( exp[ ) sin cos ( ) (
2 2 2 2 2 2

but,


d t n j jt )] sin ( exp[ sin


= )] sin ( exp[ cos t n j jt


+ d t n j t n t )] sin ( exp[ ) cos ( cos


+ = d t n j nt t )] sin ( exp[ cos) cos ( 0
2
Thus,
0 )] sin ( exp[ ) cos ( ) (
2 2 2
= = = +



n
n
ju
du e n d t n j t n n n t t t
where = sin t n u .
Signal detection and estimation
118
8.5 Given the differential system 0 ) ( ) ( = + t t , 0 ) 1 ( ) 0 ( = = , we first
integrate with respect to t 0 ) ( ) 0 ( ) (
0
= +

t
du u t .

0 ) ( ) ( ) 0 ( ) 0 ( ) (
0
= +

t
du u u t t t
Using

= = =
t
du u u t du u u t
0
1
0
) ( ) ( ) ( ) 1 ( ) ( 0 ) 1 ( ) 0 (
since

=
1
0
) ( ) 1 ( ) 0 ( du u u

+ =
1
0
) ( ) 1 ( ) ( ) 1 ( ) (
t
t
du u u du u t t
Therefore, the kernel is



=
1 , 1
0 , 1
) , (
u t u
t u t
t u k
8.6 The integral equation can be reduced to the differential equation by taking the
derivative with respect to t 0 ) ( ) ( = + t t with ( ) 0 2 / and 0 ) 0 ( = = .
Let
t j t j
e c e c t

+ =
2 1
) ( . Then,
2 1 2 1
0 ) 0 ( c c c c = + = =
t j t j
e j c e j c t

=
2 1
) ( and 0
2
2 2
1
=
|
|
.
|

\
|
+ = |
.
|

\
|

j j
e e j c .
0
1
= c trivial solution
2 2
0
2
cos

=

or
L , 2 , 1 , 0 , ) 1 2 (
2
= +

= k k
Therefore, ( ) L , 2 , 1 , 0 , ) 1 2 sin( ] [
) 1 2 ( ) 1 2 (
1
= + = =
+ +
k t k c e e c t
t k j t k j
and
L , 1 , 0 , ) 1 2 (
2
= + = k k
k
.
Representation of Signals
119
8.7 Differentiating twice with respect to t, the integral equation reduces to the
differential equation
0 ) ( ) ( = + t t with 0 ) ( ) 0 ( = = T
Let
t j t j
e c e c t

+ =
2 1
) ( . Then,
2 1
0 ) 0 ( c c = = and
t j t j
e c e c T

+ = =
2 1
0 ) (
or, 0 0 cos = c T c and L , 2 , 1 , 0 ,
2
) 1 2 (
2
=
+
= +

= k
T
k
k T
Therefore, the eigenfunctions are
L , 2 , 1 , 0 ,
2
) 1 2 (
cos ) ( =
+
= k t
T
k
c t
8.8 t n B t n A t jn + = = = + cos sin ) ( 0
2

For t n A t n A t u t + = cos sin ) ( 0
2 1

2
0 ) 0 ( A = = and t n A t = sin ) (
1

For t n B t n B t T t u + = cos sin ) (
2 1

0 cos sin 0 ) (
2 1
= + = T n B T n B T
Also, ) (t continuous
u n B u n B u n A u u + = + = cos sin sin ) 0 ( ) 0 (
2 1 1

and
1 cos sin cos 1 ) 0 ( ) 0 (
2 1 2
= + = + u n B n u n n B u n n B u u
Solving for the constants, we obtain
Signal detection and estimation
120

=
T t u t T n
T n n
u n
u t t n
T n n
T u n
t
, ) ( sin
sin
sin
0 , sin
sin
) ( sin
) (
8.9 For u t For u t
2 1
) , ( c t c u t k + =
4 3
) , ( c t c u t k + =
0 ) 0 (
2
= = c k 0 ) , (
4 3
= + = c T c u T k
) , ( u t k continuous At u t = , we have
4 3 1
c u c u c + =
1
) , 0 ( c u u k
t
=
3
) , 0 ( c u u k
t
= +
1 3 3 3 1
1 1 ) , 0 ( ) , 0 ( c u c u c u c c u u k u u k
t t
+ = + + = = +
T
u
c u c = =
3 4
, and
T
u
c =1
1

Therefore,

=
t u u t
T
u
u t t
T
u T
u t k
0 ,
0 ,
) , (
8.10 Taking the integral of the second order integro-differential equation
) ( ) ( ) , (
0
2
2
t du u u t k
dt
d
T
=
(
(


we have




(
(

=
(
(

+ + =
(
(

+ +
T
t
T
t
t
T
t
T
t
t
T
t
T u t
dt t du u
T
u
du u
T
u
dt
d
du u
T
u
t
T
t
t du u t t t t du u
T
u
t
T
t
t
dt
d
du u
T
u
t du u t du u u du u
T
u
t
dt
d
) ( ) ( ) (
) ( ) ( ) ( ) ( ) ( ) ( ) (
) ( ) ( ) ( ) (
0
0
0 0 0

Representation of Signals
121
Thus,
) ( ) ( ) , (
0
2
2
t du u u t k
dt
d
T
=
(
(


For 0 ) ( ) ( = + t t , 0 ) ( ) 0 ( = = T , we have

=
T
du u u t k t
0
) ( ) , ( ) ( a solution since 0 ) ( ) ( ) ( ) ( = + = t t t t as
expected.
8.11 For Problem 8.8, we have



=
u t
T n n
t T n u n
u t
T n n
t n u T n
u t k
,
sin
) ( sin sin
,
sin
sin ) ( sin
) , (
and



=
t u
T n n
u T n t n
t u
T n n
u n u T n
t u k
,
sin
) ( sin sin
,
sin
sin ) ( sin
) , (
We verify if u n u T n t n u T n = sin ) ( sin sin ) ( sin
?
.
We know that + = )] cos( ) [cos(
2
1
sin sin b a b a b a
)] ( cos ) ( [cos
2
1
)] ( cos ) ( [cos
2
1
u t T n u t T n t u T n t u T n + = +
Thus, they are equal and therefore ) , ( ) , ( t u k u t k = .
For Problem 8.9, we have
Signal detection and estimation
122

=
u t u t
T
u
u t t
T
u T
u t k
,
,
) , (
and

=
t u t u
T
t
t u u
T
t T
t u k
,
,
) , (
We observe that
T
T t u
t u
T
t
t
T
u
T
+ +
= + =
Therefore, ) , ( ) , ( t u k u t k = .
8.12 Here, we have two methods. We have ( ) ) ( ), , ( t u t k c
n n
, that is

+
+

= =
T
u
u T
n n
dt
T
t n
T T
u ut
dt
T
t n
T
t
T
u T
dt t u t k c sin
2
sin
2
) ( ) , (
0 0

Solving the integrals, we obtain the desired result
T
u n
n
T
T
c
n

= sin
) (
2
2
2

Note that we can use the results of Problems 8.10 and 8.11, that is

=
T
n
T
n n
du u t u k dt t u t k c
0 0
) ( ) , ( ) ( ) , ( from Problem 8.11. Then, from Problem
8.11, =


n
n
T
n
c
t
du u t u k
) (
) ( ) , (
0
L , 2 , 1 , sin
2
) (
2
2
=

= n
T
u n
T
n
T
c
n

8.13 We have,



=
t u
T m m
t m t T m
t u
T m m
t T m u m
u t h
,
sin
sin ) ( sin
,
sin
) ( sin sin
) , (
Representation of Signals
123

=
T
t
t T
du u
T m m
t m u T m
du u
T m m
u T m u m
t du u u t h t
) (
sin
sin ) ( sin
) (
sin
) ( sin sin
) (
1
) ( ) , ( ) (
0 0

and
( )
( )

T
t
t
du u
T m m
u T m
t m m
t
T m m
t T m
t m m du u
T m m
u m
t T m m
t
T m m
t m
t T m m t
) (
sin
) ( sin
sin
) (
sin
) ( sin
cos ) (
sin
sin
) ( sin
) (
sin
sin
) ( cos ) (
1
2
0
2

Simplifying the above equation, we have



+ =

T
t
t
du u
T m m
t m u T m
du u
T m m
t T m u m
T m t t
) (
sin
sin ) ( sin
) (
sin
) ( sin sin
sin ) ( ) (
1
2
0
2

From Problem 8.12, ) ( sin sin sin ) ( sin t T m u m t m u T m =
(
(

+ = +

T T
du u u t h t du u u t h t
0 0
2
) ( ) , ( ) ( ) ( ) , ( ) (
Thus,

=
T
du u u t h t
0
) ( ) , ( ) ( is a solution of
0 ) ( ) 0 ( , 0 ) ( ] ) [( ) (
2
= = = + + T t m t
In the second part of the question, we use the integral equation to obtain ) (u c
n
in
) ( ) ( ) , (
1
t u c u t h
n
n
n
=

=
. Here,
Signal detection and estimation
124


=
T
t n
T
t
n
sin
2
) ( and

|
.
|

\
|
=
2
2
) (m
T
n
n

This gives ( )

= = =
T
n
T
n n n
du u t u h dt t u t h t u t h c
0 0
) ( ) , ( ) ( ) , ( ) ( ), , ( ( from Problem
8.12. Therefore, by Problem 8.10, we have

) (
) ( ) , (
0
t
du u t u h
n
T
n
where,
T
t n
T
t
n

= sin
2
) ( and
2
2
) ( |
.
|

\
|
= m
T
n
and
L , 2 , 1 , sin ) (
2
2
2
=

(
(

|
.
|

\
|
= n
T
t n
m
T
n
T
c
n

8.16 Let
t j t j
e c e c t

+ =
2 1
) ( ,
1 2 2 1
0 ) 0 ( c c c c = + = = and thus,
t c e c e c t
t j t j
= =

sin ) (
1 1

Let 0 ,
2 2
> = , then L , 2 , 1 sin ) ( = = k t c t
k

= = + = +
k
k k k k
tan 0 cos sin 0 ) 1 ( ) 1 (
Therefore, L , 2 , 1 , sin ) ( = = k t t
k
for positive roots of

=
k
k
tan .
Case 1: Let
2 1
) ( 0 c t c t + = =
t c t c
1 2
) ( and 0 0 ) 0 ( = = =
1 0 0 ) 1 ( ) 1 (
3 3
= = + = + c c but is positive and thus, 0 = is not
an eigenvalue.
Case 2: 0 > such that 0 and
2 2
> = . Then,
Representation of Signals
125
t j t j
e c e c t

+ =
2 1
) (
1 2 2 1
0 ) 0 ( c c c c = + = =

= = + tan 0 ) 1 ( ) 1 (
when 0 0 , 0 >

> <

Thus, ) ( sin ) ( t t
k k
= is solution where
k
are consecutive positive roots of

= tan .

+1
-1

tanh
tanh
-/
-/
tan

1 2
Signal detection and estimation
126
Case 3: If 0 < , let ) 0 (
2
> = = j .
Then = sinh ) (t .
From 0 ) 1 ( ) 1 ( = + , we have 0 cosh sinh = +
0 > and 0 0 >

< . So t t
0 0
sinh ) ( = is a solution
where 0 tanh >

= .
8.17 0 ) ( ) ( = + t t , 0 ) ( ) 0 ( = = T
Let
t j t j
e c e c t

+ =
2 1
) ( . Then,
2 1
0 ) 0 ( c c = = and 0 sin ) (
1
= =
(

=

T c e e j c T
T j T j

0 = c trivial solution L , 2 , 1 , 0 sin =

= = = k
T
k
k T T
2
|
.
|

\
|
=
T
k
and L , 3 , 2 , 1 , cos ) ( =

= k
T
t k
t
and
1 ) (
0
= t when 0 0 = = k





Chapter 9


The General Gaussian Problem


9.1 (a) We first diagonalize the matrix C.
5 . 1
2 / 1
0
1 2 / 1
2 / 1 1
2
1
=
=
=


= I C
(

=
(

=
b
a
b
a
2
1
1 2 / 1
2 / 1 1

1 1 1
C
2
2
and
2
2
= = b a
Therefore,
(

=
(
(

=
1
1
2
2
2 / 2
2 / 2
1
.
(

=
(
(

= =
1
1
2
2
2 / 2
2 / 2

2 2 2 2
C
We form the modal matrix | |
(

= =
1 1
1 1
2
2
2 1
M M and
(

1 1
1 1
2
2
1
M .
Therefore, the observation vector y in the new coordinate system is


127
Signal detection and estimation
128
) (
2
2
and ) (
2
2
2
2
2
2
2
2
2
2
1 2 1 2 1 1
2
1
y y y y y y
y
y
= + =
(

(
(
(
(

= = My y
The mean vector
1
m is
) (
2
2
and ) (
2
2
2
2
2
2
2
2
2
2
11 12 12 12 11 11
12
11
12
11
m m m m m m
m
m
m
m
= + =
(

(
(
(
(

=
(


1 0 1
m m m m = = . The sufficient statistic is
) )( (
3
1
) )( (
5 . 1

2 / 1

) (
1 2 11 12 2 1 12 11
2 12 1 11
2
1
y y m m y y m m
y m y m y m
T
k k
k k
+ + + =

+

=

=

=
y

or
1
1
1 1
0
1
2
1
) ( m C m y

+ =
<
>

T
H
H
T .
(b)
1 . 1
9 . 0
1 1 . 0
1 . 0 1
2
1
=
=

= C
Then,
(
(
(
(

=
2
2
2
2
1
,
(
(
(
(

=
2
2
2
2
2
,
(
(
(
(

=
(
(
(
(

=

2
2
2
2
2
2
2
2
,
2
2
2
2
2
2
2
2
1
M M
The General Gaussian Problem
129
and
) (
2
2
) (
2
2
2
2
2
2
2
2
2
2
1 2 1
2 1 1
2
1
y y y
y y y
y
y
=
+ =

(
(
(
(

= = My y
) (
2
2
, ) (
2
2
11 12 12 12 11 11
m m m m m m = + = and
1
m m =
The sufficient statistic is
) )( ( 45 . 0 ) )( ( 55 . 0
1 . 1

9 . 0

) (
1 2 11 12 2 1 12 11
2 12 1 11
2
1
y y m m y y m m
y m y m y m
T
k k
k k
+ + + =

+

=

=

=
y

(c)
9 . 1
1 . 0
1 9 . 0
9 . 0 1
2
1
=
=

= C
Then,
(
(
(
(

=
2
2
2
2
1
,
(
(
(
(

=
2
2
2
2
2

) )( ( 26 . 0 ) )( ( 5
9 . 1 1 . 0

) (
1 2 11 12 2 1 12 11
2 12 1 11
2
1
y y m m y y m m
y m y m y m
T
k k
k k
+ + + =

+

=

=

=
y

9.2
53 . 2
47 . 0
2 9 . 0
9 . 0 1
2
1
=
=

= C
(

=
51 . 0
86 . 0
1
,
(

=
86 . 0
51 . 0
2
,
(


=
(

=

86 . 0 51 . 0
51 . 0 86 . 0
86 . 0 51 . 0
51 . 0 86 . 0
1
M M

Signal detection and estimation
130
Then,
2 1 1
2
1
2
1
51 . 0 86 . 0
86 . 0 51 . 0
51 . 0 86 . 0
y y y
y
y
y
y
+ =
(

=
(

= My y and
1 2 2
51 . 0 86 . 0 y y y =
12 11 12 12 11 11 1 1
86 . 0 51 . 0 and 51 . 0 86 . 0 m m m m m m + = + = = Mm m
56 . 2
) 86 . 0 51 . 0 )( 86 . 0 51 . 0 . (
47 . 0
) 51 . 0 86 . 0 )( 51 . 0 86 . 0 (
) (
2 1 12 11
2 1 12 11
y y m m
y y m m
T
+ +
+
+ +
= y

) 34 . 0 2 . 0 )( 34 . 0 2 . 0 (
) 09 . 1 83 . 1 )( 09 . 1 83 . 1 ( ) (
2 1 12 11
2 1 12 11
y y m m
y y m m T
+ + +
+ + = y

9.3 Noise N ) , 0 (
2
n

(a) 0 = =
=
=
=
0 1
1 , 0
2 , 1
, 0 ] | [ m m
j
k
H Y E
j k

(
(

= = = =
2
2
2
0 0
0
0
:
n
n
n n k k
N Y H I C C
(
(

+
+
= + = + =
2 2
2 2
1 1
0
0
:
n s
n s
n s k k k
N S Y H C C C , since I C
2
s s
= .
From Equation (9.64), the LRT reduces to the following decision rule

=

<
>
+

=
2
1
2
0
1
2
2 2 2
2
) (
) (
k
k
n s n
s
H
H
y T y
where ( )
)
`

+ =
0 1 2
ln ln
2
1
ln 2 C C
The General Gaussian Problem
131
or,
2
2
1
2
2 2 2
3
0
1
2
) (
) (

+
=
<
>
=

= k
s
n s n
k
H
H
y T y
(b)
2
1
0 1
= = P P and minimum probability of error criterion 1 = ,
2
2 2
2
ln 2
n
n s

+
= and
2
2 2
2
2 2 2
3
ln
) (
2
n
n s
s
n s n

+
=
The density functions of the sufficient statistics under H
1
and H
0
, from Equation
(9.71) and (9.72), are

>
=

otherwise , 0
0 ,
2
1
) (
2
1
1
2 /
2
1 1
t e
H t f
t
H T

and

>
=

otherwise , 0
0 ,
2
1
) (
2
0
0
2 /
2
0 0
t e
H t f
t
H T

where
2 2 2
1 n s
+ = and
2 2
0 n
= . Consequently,
2
3
3
2
2 / 2 /
2
2
1
n n
e dt e P
t
n
F

=


and
) ( 2 / 2 / 2 /
2
1
2 2
3
2
1 3
3
2
1
2
1
s n
e e dt e P
t
D
+


= =

=


Signal detection and estimation
132
9.4 (a)
(
(
(
(
(

= = =
2
2
2
2
0
0 0 0
0 0 0
0 0 0
0 0 0
4
n
n
n
n
n
K C C
(
(
(
(
(

+
+
+
+
= + =
2 2
2 2
2 2
2 2
1
0 0 0
0 0 0
0 0 0
0 0 0
n s
n s
n s
n s
n s
C C C
where I C
2
s s
= . Hence,
2
0
1
4
1
2
2 2 2
2
) (
) (
<
>
+

=

=
H
H
y T
k
k
n s n
s
y
or,
2
2
2 2 2
3
0
1
4
1
2
2 2 2
2
) (
) (
) (

+
=
<
>
+

=

=
s
n s n
k
k
n s n
s
H
H
y T y
The statistic is

=
=
4
1
2
) (
k
k
y T y .
(b)
2
2 2
2
ln 4
n
n s

+
= and
2
2 2
2
2 2 2
3
ln
) (
n
n s
s
n s n

+
= .
The conditional density functions are then

>
=

otherwise , 0
0 ,
8
1
) (
2
0
0
2 /
4
0 0
t te
H t f
t
H T

and
The General Gaussian Problem
133

>
=

otherwise , 0
0 ,
8
1
) (
2
1
1
2 /
4
1 1
t te
H t f
t
H T

where
2 2
0 n
= and
2 2 2
1 n s
+ = . The probability of false alarm and detection
are then
2
3
3
2
2 /
2
3 2 /
4
2
1
2
1
8
1
n n
e dt te P
n
t
n
F


|
|
.
|

\
|

+ =

=


2
1 3
3
2
1
2 /
2
1
3 2 /
4
1
2
1
2
1
8
1


|
|
.
|

\
|

+ =

=

e dt te P
t
D

9.5 ROC of Problem 9.3 with 1 = SNR , 2 = SNR and 10 = SNR .













0
0
0.2
0.2
0.4
0.4
0.6
0.6
0.8
0.8
1
1
0.1
0.3
0.5
0.9
0.7
PD
PF
Signal detection and estimation
134
9.6
(
(

=
(
(

=
2
2
2
2
0
0
,
2 0
0
n
n
n
s
s
s
C C
From (9.78), the LRT is
2
2
1
0
1
2
2 2
2
2
1
) (
<
>
+

=

= k
k
n s
s
n
H
H
y T
k
k
y
or,
2
2 2 2 2 2
2
0
1
2
2 2
1
2 2
) 2 )( (
) ( 2 ) 2 (
s
n s n s n
n s n s
H
H
y y

+ +

<
>
+ + +
9.7 (a)
(

+
=
n s
n
C C
C
C
0
0
0
and
(

+
=
s
n s
C
C C
C
0
0
1

where
(

=
1 0
0 1
n
C and
(

=
2 0
0 2
s
C
(
(
(
(

=
3 0 0 0
0 3 0 0
0 0 1 0
0 0 0 1
0
C and
(
(
(
(

=
1 0 0 0
0 1 0 0
0 0 3 0
0 0 0 3
1
C
From (9.88), the optimum test reduces to
3
0
1
4
3
2
2
1
2
) (
<
>
=

= =
H
H
y y T
k
k
k
k
y
where
3
is
The General Gaussian Problem
135
2
2
2 2 2
3
) (

+
=
s
n s n
and ( )
1
2
, ln ln
2
1
ln 2
2
2
0 1 2
=
=
(

+ =
n
s
C C
(b) = 0
3
The test reduces to

= =
<
>
4
3
2
0
1
2
1
2
k
k
k
k
y
H
H
y
From (9.94), (9.95) and (9.96), the probability of error is

=
=
=


0
1 0 1 0 1 1
0 0
1 0 0 0 1 0
1
0 1
1
0 1
) , , ( ) | (
) , , ( ) | (
) (
t
T T
t
T T
dt dt H t t f H P
dt dt H t t f H P
P
where,
6 /
1
1
1
18
1
) (
t
T
e t f

= and
2 /
0
0
0
2
1
) (
t
T
e t f

= . Therefore,
4
1
36
1
) (
1
0 1
0
0
2 /
0
6 /
1
= =

t
t t
dt e e dt P .
9.8 (a)
(
(
(

=
1 1 . 0 5 . 0
1 . 0 1 9 . 0
5 . 0 9 . 0 1
C
0741 . 2
9153 . 0
0105 . 0
0
1 1 . 0 5 . 0
1 . 0 1 9 . 0
5 . 0 9 . 0 1
3
2
1
=
=
=
=



= I C
(
(
(

= =
3009 . 0
6249 . 0
7204 . 0
1 1 1 1
C ,
Similarly,
Signal detection and estimation
136
(
(
(

=
8750 . 0
4812 . 0
0519 . 0
2
, and
(
(
(

=
3792 . 0
6148 . 0
6916 . 0
3

The modal matrix is
(
(
(



=
3792 . 0 8750 . 0 3009 . 0
6148 . 0 4812 . 0 6249 . 0
6916 . 0 0519 . 0 7204 . 0
M
and
3 2 1 3
3 2 1 2
3 2 1 1
38 . 0 875 . 0 3 . 0
615 . 0 48 . 0 625 . 0
69 . 0 052 . 0 72 . 0
y y y y
y y y y
y y y y
+ + =
+ =
+ = = My y

Similarly,
1 1
Mm m = and then we use

= =

=

=
3
1
3
1

) (
k k
k k
k k
k k
y m y m
T y
(b)
(
(
(
(

=
1 8 . 0 6 . 0 2 . 0
8 . 0 1 8 . 0 6 . 0
6 . 0 8 . 0 1 8 . 0
2 . 0 6 . 0 8 . 0 1
C
In this case, 1394 . 0
1
= , 0682 . 0
2
= , 9318 . 2 and 8606 . 0
4 3
= =
whereas,
(
(
(
(

=
(
(
(
(

=
(
(
(
(

=
(
(
(
(

=
4445 . 0
5499 . 0
5499 . 0
4445 . 0
and
6768 . 0
2049 . 0
2049 . 0
6768 . 0
,
5499 . 0
4445 . 0
4445 . 0
5499 . 0
,
2049 . 0
6768 . 0
6768 . 0
2049 . 0
4 3 2 1

and the modal matrix is
The General Gaussian Problem
137
(
(
(
(




=
44 . 0 68 . 0 55 . 0 2 . 0
55 . 0 2 . 0 44 . 0 68 . 0
55 . 0 2 . 0 44 . 0 68 . 0
44 . 0 68 . 0 55 . 0 2 . 0
M






Chapter 10


Detection and Parameter Estimation


10.1 (a) t t s = 2 cos ) (
1

|
.
|

\
|
|
.
|

\
|
= |
.
|

\
|
+ =
3
2
sin ) 2 sin(
3
2
cos ) 2 cos(
3
2
2 cos ) (
2
t t t t s
|
.
|

\
|
+ |
.
|

\
|
= |
.
|

\
|
=
3
2
sin ) 2 sin(
3
2
cos ) 2 cos(
3
2
2 cos ) (
3
t t t t s
2
1
2
1
t
Also, = =


2
1
) ( ) 2 (cos
2 / 1
2 / 1
2
1
2 / 1
2 / 1
2
dt t s dt t
2
1
2
1
, 2 sin
2
2
) (
, 2 cos
2
2
) (
2
1

=
=
t
t t
t t

Therefore,
) ( 2 ) (
1 1
t t s =
) (
2
3
) (
2
2
) (
2 1 2
t t t s =
) (
2
3
) (
2
2
) (
2 1 3
t t t s + =




138
Detection and Parameter Estimation

139












(b) The decision space is



10.2
2 1
1 2
2 1
for
1
) ( ] , [ T t T
T T
t f T T t
T
< <

=

) ( ) ( :
) ( ) ( ) ( :
0
1
t N t Y H
t N t s t Y H
=
+ =

A
t
s(t)
T1 t0 t0+T T2
1

1
s
2
s
3
s
Decide
Decide
Decide
2
2 / 2
2 / 3
2 / 3
3
s
2
s
1
s
Received
signal
) (
1
t
) (
2
t




Choose
largest
variable

2 / 1
2 / 1

2 / 1
2 / 1

Signal detection and estimation
140
where
0
2
/
0
1
) (
N n
N
e
N
n f

= .
The problem may be reduced to

Under H
0
, we have

= =
2
0
) ( ) ( ) (
1
T
t
dt t N t N t Y
0 )] ( [ )] ( [
2
0
1
= =

T
t
dt t N E t N E and

=
(
(

=
2
0
2
0
2
0
2
0
2 1 2 1 2 2 1 1
2
1
)] ( ) ( [ ) ( ) ( )] ( [
T
t
T
t
T
t
T
t
dt dt t N t N E dt t N dt t N E t N E
where

=
=
2 1
2 1
0
2 1
, 0
,
2
)] ( ] ( [
t t
t t
N
t N t N E

= =
2
0
2
0
)] ( [ var ) (
2
) (
2
)] ( [
1 0 2
0
2 1 2 1
0 2
1
T
t
T
t
t N t T
N
dt dt t t
N
t N E
Under H
1
, we have ) ( ) ( ) (
0 1
2
2
0
t T A dt A dt t s t s
T
t
T
t
= = =

. Then,
t N t T A t Y H
t N t Y H
( ) ( ) ( :
) ( ) ( :
1 0 1
1 0
+ =
=

The LRT is

2
0
T
t
LRT
) ( ) ( ) ( t N t s t Y + = ) ( ) ( ) (
1 1
t N t s t Y + =
Detection and Parameter Estimation

141
) (
) ( ) , (
) (
) (
) (
0
1 1 ,
0
1
0
1 1
0
1
H y f
dt H t f H t y f
H y f
H y f
y
H Y
H T H T Y
H Y
H Y

= =
(
(


=

) (
exp
) (
1
1
) (
)] ( [
exp
) (
1
0 2 0
2
0 2 0
1 2 0 2 0
2
0
0 2 0
2
1
t T N
y
t T N
dt
T T t T N
t t A y
t T N
T
T

<
>
(
(

=

0
1
) (
exp
) (
)] ( [
exp
1
0 2 0
2
0 2 0
2
0
1 2
2
1
H
H
t T N
y
dt
t T N
t t A y
T T
T
T

| |
<
>
+
(
(

(
(

+ +

=

0
1
) 1 ( 2 exp
) (
exp
) (
) 2 (
exp
1
2
1
0
2 2
0 2 0
2
0 2 0
0 0
2
1 2
H
H
dt t At A t A
t T N
y
t T N
At At y
T T
T
T

| |
4 4 4 4 4 3 4 4 4 4 4 2 1

+

<
>
(
(

+ +

2
1
) 1 ( 2 exp
) (
0
1
) (
) 2 ( 2
exp
0
2 2
1 2
0 2 0
0 0
2
T
T
dt t At A t A
T T
H
H
t T N
At At y

Therefore,
2
) 2 ( ln ) (
0 0 0 2 0
1
0
2
At At t T N
H
H
y
+
<
>
.
10.3 From (10.85), the probability of error is
|
|
.
|

\
|

=
0
2
2
1
) (
N
Q P where
0 1 2 1
2 E E E E + = .
and
Signal detection and estimation
142
T A dt t s E
T
2
0
2
1 1
) ( = =


T A dt t s E
T
2
0
2
0 0
) ( = =


T A
T A
dt t s t s E E
T
2
2
0
2 1 2 1
2
1
,
2
) ( ) ( = = = =

and
|
|
.
|

\
|
=
0
2
2
2
1
) (
N
T A
Q P
The optimum receiver is shown below

10.4 We have,
) ( ) ( :
) ( ) ( ) (
) ( ) ( ) (
:
0
2
1
1
t W t Y H
t W t s t Y
t W t s t Y
H
=

+ =
+ =

Under H
1
, we have
1 1
0
1 1 1
) ( )] ( ) ( [ W E dt t s t W t s Y
T
+ = + =


2 2
0
2 2 2
) ( )] ( ) ( [ W E dt t s t W t s Y
T
+ = + =

.

T
0

T
0
0
0
1
1
Y
H
H
Y
<
>
H1
H0
Y(t)
) (
1
t s
) (
2
t s
Y1
Y2
Detection and Parameter Estimation

143
The problem reduces to:

+
=
0 0
1 1
1 1 1
1
:
:
:
H W
H W
H W E
Y

+
=
0 0
1 2
1 2 2
2
:
:
:
H W
H W
H W E
Y
Under H
0
, we have
. 2 , 1 , ) ( ) (
0
0
= = =

k W W dt t s t W Y
k
T
k k

The LRT is
) (
) ( ) , ( ) ( ) , (
) (
) (
0
2 2 1 , 1 1 1 ,
0
1
0
2 1 1 1
0
1
H f
s P s H f s P s H f
H f
H f
H
S H S H
H
H
y
y y
y
y
Y
Y Y
Y
Y
+
=
where
(
(

=
0
2
1 1
0
1 1 1 ,
) (
exp
1
) , (
1 1 1
N
E y
N
s H y f
S H Y

T
0

T
0
Y(t)
) (
1
t s
) (
2
t s
Y1
Y2
Signal detection and estimation
144
|
|
.
|

\
|

=
0
2
1
0
2 1 1 ,
exp
1
) , (
2 1 1
N
y
N
s H y f
S H Y

|
|
.
|

\
|

=
0
2
2
0
1 1 2 ,
exp
1
) , (
2 1 2
N
y
N
s H y f
S H Y

(
(

=
0
2
2 2
0
2 1 2 ,
) (
exp
1
) , (
2 1 2
N
E y
N
s H y f
S H Y

and
2 , 1 , exp
1
) (
0
2
0
0
0
=
|
|
.
|

\
|

= k
N
y
N
H y f
k
k H Y
k

Therefore, the LRT becomes
0
2
2 0
2
1
0
2
1 0
2
2
/ /
0
0
2
2 2 /
0
/
0
2
1 1
0
1
2
1 ) (
exp
1
2
1 ) (
exp
1
N y N y
N y N y
e e
N
N
E y
e
N
e
N
E y
N

|
.
|

\
|

(
(

+ |
.
|

\
|

(
(


=
<
>


0
1
2 2
2
2
0
1 1
2
1
0
) 2 (
1
exp ) 2 (
1
exp
2
1
H
H
E y E
N
E y E
N

When 1 = , the LRT becomes
2 exp
2
exp exp
2
exp
0
1
0
2
2
2
0
2
0
2
1
1
0
1
H
H
N
E
y
N
E
N
E
y
N
E
<
>
|
|
.
|

\
|

|
|
.
|

\
|
+
|
|
.
|

\
|

|
|
.
|

\
|

The optimum receiver may be


Detection and Parameter Estimation

145







10.5 (a) The probability of error is given by
|
|
.
|

\
|

=
0
2
2
1
) (
N
Q P where
0 1 0 1
2 E E E E + =
49 . 0 ) 1 (
2
1
4
2
0
2
1
= = =

e dt e E
t

The signals are antipodal
|
|
.
|

\
|
= = =
0
92 . 3
2
1
) ( and 96 . 1 1
N
Q P .
(b) The block diagram is shown below with ) ( 43 . 1 ) (
1
t s t .

10.6 At the receiver, we have
T t t W t s t Y H
T t t W t s t Y H
+ =
+ =
0 , ) ( ) ( ) ( :
0 , ) ( ) ( ) ( :
2 2
1 1

T
0
1
0
0
1
P
P
H
H
<
>
Y(t) y1
) (
1
t
H1
H0
] exp[
] exp[
1
Y
1
Y
0
1
2
N
E

0
/
2
1
N E
e

T
0

T
0
2
0
1
H
H
<
>

Y(t)
H1
) (
1
t s
) (
2
t s
0
2
2
N
E

0
/
2
2
N E
e

H0
Signal detection and estimation
146
2
2 1
T
E E = = and ) ( and ) ( 0 ) ( ) (
2 1
0
2 1 12
t s t s dt t s t s
T
= =

are uncorrelated.
The receiver is

where . 2 , 1 , ) (
2 ) (
) ( = = = k t s
T
E
t s
t
k
k
k
k

The observation variables Y
1
and Y
2
are then

=
+ =
=

1
0
1 0
1 1
0
1 1
1
) ( ) ( :
) ( ) ( :
W dt t t Y H
W E dt t t Y H
Y
T
T

+ =
=
=

2 2
0
1 0
2
0
1 1
2
) ( ) ( :
) ( ) ( :
W E dt t t Y H
W dt t t Y H
Y
T
T

This is the general binary detection case. Then,
(

=
(

=
(

=
22
21
2
12
11
1
2
1
and ,
s
s
s
s
Y
Y
s s Y

T
0

T
0
H1
H0
Y(t)
) (
1
t
) (
2
t



Choose
largest
Y1
Y2
Detection and Parameter Estimation

147
The conditional means are
1
12
11
1
1 1
0
] [ s Y m =
(

=
(
(

= =
s
s
E
H E
2
22
21
2
2 2
0
] [ s Y m =
(

=
(

= =
s
s
E
H E
) (
1
t s and ) (
2
t s uncorrelated the covariance matrix is
C C C = =
(

=
2
0
0
1
2 / 0
0 2 /
N
N

and the probability of error is
|
|
.
|

\
|
=
|
|
.
|

\
|

=
0 0
2
1 2
2
1
) (
N
T
Q
N
Q P where E E E 2
2 1
= + =
10.7 At the receiver, we have
T t t W t s E t Y H
T t t W t s E t Y H
+ =
+ =
0 , ) ( ) ( ) ( :
0 , ) ( ) ( ) ( :
2 2 2
1 1 1

T
0

T
0
Y(t)
) (
1
t
) (
2
t
Y1
Y2
Signal detection and estimation
148
with

=
T
dt
E
t s
t
0 1
1
1
) (
) ( and

= =
T T
dt t s dt
t s
t
0
2
0
2
2
) ( 2
2 / 1
) (
) (
Since the signals are orthogonal, we can have a correlation receiver with two
orthogonal functions or with one orthonormal function ) (

t s given by
(
(

=
+

= ) (
2
1
) (
2
3
) ( ) (
) (
2 1
2 1
2 2 1 1

t s t s
E E
t s E t s E
t s
We obtain the sufficient statistic as follows

The conditional means are
3
2
) (
2
1
) (
3
2
)] ( ) ( [ ] | ) ( [
0
2 1 1 1
=
(
(

(
(

+ =

T
dt t s t s t W t s E H y T E
6
1
) (
2
1
) (
2
2
) ( ) (
2
1
] | ) ( [
0
2 1 2 2
=
(
(

(
(

(
(

+ =

T
dt t s t s t W t s E H y T E
The noise variance is 2 / 1 ] | ) ( var[
0
= H y T . Hence, the performance index is
2
d
{ }
3
2 / 1
) 6 / 1 3 / 2 (
] | ) ( var[
| ) ( [ ] | ) ( [
2
0
2
0 1
=
+
=

H y T
H y T E H y T E

The probabilities of false alarm and detection are
|
.
|

\
|
= |
.
|

\
|
=
2
3
2
Q
d
Q P
F

T
0
) (

t S
y(t)
T(y)
Detection and Parameter Estimation

149
|
|
.
|

\
|
= |
.
|

\
|
=
2
3
2
Q
d
Q P
D

and thus, the achievable probability of error is
|
|
.
|

\
|
=

2
3
2
1
) (
2 / 3
2 /
2
Q dx e P
x

(b) In this case, the two signals will have the same energy E and thus,
E d E
E
d 2 4
2 / 1
2
2
= = =
From
2
1
2
3
2
3
) (
2
) (
(
(

|
|
.
|

\
|
= = = |
.
|

\
|
=

Q E E Q
d
Q P

10.8 We need to find the sufficient statistic. Since ) (
1
t s and ) (
2
t s are
orthogonal, let
2 1
2 2 1 1
1
) ( ) (
) (
E E
t s E t s E
t
+

=
Then,

(
(

(
(

+
= =

T
T
T
dt
E E
t s E t s E
t W H
dt
E E
t s E t s E
t W t s E H
dt t t y Y
0 2 1
2 2 1 1
0
0 2 1
2 2 1 1
1
0
1 1
) ( ) (
) ( :
) ( ) (
)] ( ) ( [ :
) ( ) (
Decision region
E1
E2
E
E
Signal detection and estimation
150
Y
1
is Gaussian with conditional means
0 0 1
0 ] | [ m H Y E = =
and
1
2 1
2
2
2 1
1
1
0
2
2
2 1
2 2
0
2
1
2 1
1 1
0
1 2 2 2
0
1 1 1 1 1 1
) ( ) (
) ( ) ( ) ( ) ( ] | [
m
E E
E
P
E E
E
P
dt t s
E E
E P
dt t s
E E
E P
dt t t s E P dt t t s E P H Y E
T T
T T
=
+

+
=
+

+
=
=



The variance is 2 /
0
N and thus,
|
|
.
|

\
|

=
0
2
1
0
0 1 |
exp
1
) | (
0 1
N
y
N
H y f
H Y

(
(

=
0
2
1 1
0
1 1 |
) (
exp
1
) | (
1 1
N
m y
N
H y f
H Y

Applying the likelihood ratio test, taking the natural logarithm and rearranging
terms, we obtain
2 2
ln
1
1
0
0
1
1
m
m
N
H
H
y +

<
>

For minimum probability of error, 1 = and the decision rule becomes
2 1
2 2 1 1
0
1
1
2
2
E E
E P E P m
H
H
y
+

=
<
>

The optimum receiver is
Detection and Parameter Estimation

151

10.9 (a) The energy B dt t dt t dt t A dt t s E
T T T T
k
+
(
(

+ + = =

0
2
3
0
2
2
0
2
1
2
0
2
) ( ) ( ) ( ) (
where B is the sum involving terms of the form
k j dt t t
T
j j

, ) ( ) (
0

But the s are orthonormal 0 = B and thus,
3
3
2
E
A A E = = .
(b) The signals 7 , , 1 , 0 ), ( L = k t s
k
, span a 3-dimentional space. The
coefficients are
k k
T
k k
T
k k
W s dt t t W t s
k dt t t y y
+ = + =
= =

0
0
) ( )] ( ) ( [
3 , 2 , 1 , ) ( ) (

such that
(
(
(

=
3
2
1
y
y
y
y ,
(
(
(

=
3
2
1
W
W
W
W and
(
(
(

=
3
2
1
k
k
k
k
s
s
s
s
Hence,
(
(
(

=
1
1
1
3
0
E
s ,
(
(
(

=
1
1
1
3
1
E
s ,
(
(
(

=
1
1
1
3
2
E
s ,
(
(
(

=
1
1
1
3
3
E
s ,
(
(
(

=
1
1
1
3
4
E
s ,

T
0
2
1
0
1
m
H
H
<
>
y(t) y1
) (
1
t
H1
H0
Signal detection and estimation
152
(
(
(

=
1
1
1
3
5
E
s ,
(
(
(

=
1
1
1
3
6
E
s ,
(
(
(

=
1
1
1
3
7
E
s .
Since the criterion is minimum probability of error, the receiver is then a
"minimum distance" receiver.
The receiver evaluates the sufficient statistic
7 , , 1 , 0 , )] ( ) ( [
0
2
2
L = = =

k dt t s t y T
T
k k j
s y
and chooses the hypothesis for which
j
T is smallest.
Since the transmitted signals have equal energy, the minimum probability of
error receiver can also be implemented as a "largest of " receiver. The receiver
computes the sufficient statistic
7 , , 1 , 0 , ) ( ) (
0
L = = =

k dt t y t s T
T
k
T
k j
y s
and chooses the hypothesis for which
j
T is largest.
(c)

1

1
s
2
s
3
s
4
s
5
s
6
s
7
s 3

0
s
Detection and Parameter Estimation

153
Using "minimum distance" or "nearest neighbor", the decision regions are
0 , 0 , 0
0 , 0 , 0
0 , 0 , 0
0 , 0 , 0
0 , 0 , 0
0 , 0 , 0
0 , 0 , 0
0 , 0 , 0
3 2 1 7
3 2 1 6
3 2 1 5
3 2 1 4
3 2 1 3
3 2 1 2
3 2 1 1
3 2 1 0
< < <
> < <
< > <
> > <
< < >
> < >
< > >
> > >
y y y H
y y y H
y y y H
y y y H
y y y H
y y y H
y y y H
y y y H

(d) The probability of error is
) ( ) ( ) ( ) (
0
7
0
0
7
0
H P P H P H P P P
j
j
j
j j
= = =

= =

1
Y ,
2
Y and
3
Y are independent Gaussian random variables with conditional
means
3
] [ ] [ ] [
0 3 0 2 0 1
E
H Y E H Y E H Y E = = =
and conditional variances
2
] var[ ] var[ ] var[
0
0 3 0 2 0 1
N
H Y H Y H Y = = =
Therefore, ] 0 , 0 , 0 [ 1 ) | ( ) (
3 2 1 0
> > > = = Y Y Y P H P P

3
0
3
0
0
2
0
3 2 1
3
2
1
) 3 / (
exp
1
1
) 0 ( ) 0 ( ) 0 ( 1
(
(

|
|
.
|

\
|
=

(
(

=
> > > =

N
E
Q
dy
N
E y
N
Y P Y P Y P

Signal detection and estimation
154
10.10 (a) We observe that the dimension of the space is 2 and that we have 4
signal levels per axis Basis functions { }
2 1
, such that 0 ) ( ) (
0
2 1
=

T
dt t t
and
1 ) ( ) (
0
2
2
0
2
1
= =

T T
dt t dt t
The receiver is then

with
t f
T
t
t f
T
t
0 2
0 1
2 sin
2
) (
2 cos
2
) (
=
=

T
0

T
0
Y(t)
) (
1
t
) (
2
t
1
Threshold
Threshold 4-level
signal
4-level
signal
2
3
4
Detection and Parameter Estimation

155

(c) From (b), we observe that the probability of a correct decision is
) along ( ) along (
) along decision correct and along decision (correct ) (
2 1
2 1
=
=
c P c P
P c P

where, ) along (
1
c P is, from the figure below, given by

| | |
.
|

\
|
= + + + =
=

=
q q q q q
P c P
k
k
4
6
1 ) 1 ( ) 2 1 ( ) 2 1 ( ) 1 (
4
1
) decision correct (
4
1
) along (
4
1
1
s

where,
|
|
.
|

\
|
=
0
2N
d
Q q .
1

1
s
2
s
4
s
1

d d
3
s
Signal detection and estimation
156
Similarly, |
.
|

\
|
= q c P
4
6
1 ) along (
2
. Therefore, the probability of a correct
decision is
2
4
6
1 ) ( |
.
|

\
|
= q c P
and the probability of error is
2
4
9
9
3
) ( 1 ) ( q q c P P = =
10.11 From (10.104), we have
$$T_j^2(\mathbf{y}) = \|\mathbf{y} - \mathbf{s}_j\|^2 = (\mathbf{y} - \mathbf{s}_j)^T(\mathbf{y} - \mathbf{s}_j) = \|\mathbf{y}\|^2 - 2\mathbf{s}_j^T\mathbf{y} + \|\mathbf{s}_j\|^2, \quad j = 1, 2, \ldots, M$$
For equal energy signals, $\|\mathbf{y}\|^2$ and $\|\mathbf{s}_j\|^2$ are common to all hypotheses $\Rightarrow$ minimizing $T_j^2(\mathbf{y})$ is equivalent to maximizing $\mathbf{s}_j^T\mathbf{y}$. Therefore, the receiver computes the sufficient statistic
$$T_j(\mathbf{y}) = \mathbf{s}_j^T\mathbf{y} = \int_0^T s_j(t)y(t)\,dt, \quad j = 1, 2, \ldots, M$$
and chooses the hypothesis having the largest dot product. The "largest of" receiver is
[Figure: the "largest of" receiver: a bank of $M$ correlators, $y(t)$ against $s_1(t), s_2(t), \ldots, s_M(t)$ over $[0, T]$, producing $T_1, T_2, \ldots, T_M$, followed by a "choose largest decision variable" stage.]

10.12 We have
$$H_1: Y(t) = A\,s(t) + W(t)$$
$$H_0: Y(t) = W(t)$$
Correlating $Y(t)$ with $s(t)$ over $[0, T]$ yields
$$H_1: Y_1 = A + W_1$$
$$H_0: Y_1 = W_1$$
where $W_1 = \int_0^T W(t)s(t)\,dt$. Then
$$E[Y_1 \mid H_1] = A$$
$$E[Y_1^2 \mid H_1] = E[A^2 + 2AW_1 + W_1^2] = A^2 + \frac{N_0}{2} \;\Rightarrow\; \operatorname{var}[Y_1 \mid H_1] = \frac{N_0}{2}$$
$A$ unknown $\Rightarrow H_1$ is a composite hypothesis, and
$$\Lambda_g(y_1) = \frac{\max_a f_{Y_1|A,H_1}(y_1 \mid a, H_1)}{f_{Y_1|H_0}(y_1 \mid H_0)}$$
We need the estimate $\hat{A}$ of $A$ such that
$$\frac{\partial \ln f_{Y_1|A}(y_1 \mid a)}{\partial a} = 0$$
$\Rightarrow$ the ML estimate is $\hat{A} = Y_1$, the observation itself, i.e., where the density is maximum. Hence,
$$\Lambda_g(y_1) = \frac{\dfrac{1}{\sqrt{\pi N_0}}\exp\left[-\dfrac{(y_1 - \hat{a})^2}{N_0}\right]}{\dfrac{1}{\sqrt{\pi N_0}}\exp\left[-\dfrac{y_1^2}{N_0}\right]} = \exp\left[\frac{1}{N_0}\left(2\hat{a}y_1 - \hat{a}^2\right)\right] \underset{H_0}{\overset{H_1}{\gtrless}} \eta$$
but $\eta = 1$ and $\hat{a} = y_1$, so
$$\Lambda_g(y_1) = \exp\left(\frac{y_1^2}{N_0}\right) \underset{H_0}{\overset{H_1}{\gtrless}} 1 \qquad \text{or} \qquad \frac{y_1^2}{N_0} \underset{H_0}{\overset{H_1}{\gtrless}} 0$$
Therefore, always decide $H_1$, since $y_1^2/N_0 > 0$.
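A toy check (numbers are our own) that this generalized likelihood ratio never falls below the unit threshold:

```python
import numpy as np

N0 = 2.0
y1 = np.linspace(-3.0, 3.0, 7)        # candidate correlator outputs
Lambda_g = np.exp(y1 ** 2 / N0)       # GLRT statistic with a_hat = y1
print(np.all(Lambda_g >= 1.0))        # True: the receiver always decides H1
```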
10.13 $Y_1 = \int_0^T Y(t)\phi(t)\,dt$ is a sufficient statistic, and
$$Y_1 = \theta\sqrt{E} + W_1$$
is Gaussian with mean $\theta\sqrt{E}$ and variance $N_0/2$. The conditional density function becomes
$$f_{Y_1|\Theta}(y_1 \mid \theta) = \frac{1}{\sqrt{\pi N_0}}\exp\left[-\frac{(y_1 - \theta\sqrt{E}\,)^2}{N_0}\right]$$
Hence,
$$\frac{\partial \ln f_{Y_1|\Theta}(y_1 \mid \theta)}{\partial \theta} = \frac{2\sqrt{E}}{N_0}\left(y_1 - \theta\sqrt{E}\right) = 0$$
or $\theta\sqrt{E} = y_1$. Thus,
$$\hat{\theta}_{ml} = \frac{1}{\sqrt{E}}\,y_1$$
and the optimum receiver is shown below.
[Figure: the optimum receiver correlates $y(t)$ with $s(t)/\sqrt{E}$ over $[0, T]$ and scales the output $y_1$ by $1/\sqrt{E}$ to produce $\hat{\theta}_{ml}$.]
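A quick Monte Carlo check of the estimator (a discrete stand-in with parameters of our choosing; the sample mean should approach $\theta$ and the sample variance $N_0/2E$):

```python
import numpy as np

rng = np.random.default_rng(1)
theta, E, N0, trials = 0.7, 4.0, 0.5, 100_000

# Y1 = theta * sqrt(E) + W1, with W1 ~ N(0, N0/2)
Y1 = theta * np.sqrt(E) + rng.normal(0.0, np.sqrt(N0 / 2.0), trials)
theta_ml = Y1 / np.sqrt(E)

print(theta_ml.mean())                 # close to 0.7 (unbiased)
print(theta_ml.var(), N0 / (2 * E))    # both close to 0.0625
```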

10.14 The density function of $\theta$ is
$$f_\Theta(\theta) = \frac{1}{\sqrt{2\pi}\,\sigma_\theta}\,e^{-\theta^2/2\sigma_\theta^2}$$
Hence, from the MAP equation, we have
$$\frac{\partial}{\partial\theta}\left[\ln f_{Y_1|\Theta}(y_1 \mid \theta) + \ln f_\Theta(\theta)\right]_{\theta = \hat{\theta}_{map}} = \frac{2\sqrt{E}}{N_0}\left(y_1 - \hat{\theta}_{map}\sqrt{E}\right) - \frac{\hat{\theta}_{map}}{\sigma_\theta^2} = 0$$
or
$$\frac{2\sqrt{E}}{N_0}\,y_1 - \left(\frac{2E}{N_0} + \frac{1}{\sigma_\theta^2}\right)\hat{\theta}_{map} = 0 \;\Rightarrow\; \hat{\theta}_{map} = \frac{2\sqrt{E}\,\sigma_\theta^2}{2E\sigma_\theta^2 + N_0}\,y_1$$
As $\sigma_\theta^2 \to \infty$, we have
$$\frac{2\sqrt{E}}{N_0}\,y_1 - \frac{2E}{N_0}\,\hat{\theta}_{map} = 0$$
Therefore,
$$\lim_{\sigma_\theta^2 \to \infty} \hat{\theta}_{map} = \frac{1}{\sqrt{E}}\,y_1 = \hat{\theta}_{ml}$$
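The limit is easy to see numerically (values below are our own):

```python
import numpy as np

E, N0, y1 = 4.0, 1.0, 1.2
theta_ml = y1 / np.sqrt(E)

for var_theta in (0.01, 0.1, 1.0, 100.0):
    theta_map = 2 * np.sqrt(E) * var_theta * y1 / (2 * E * var_theta + N0)
    print(var_theta, theta_map)        # approaches theta_ml as the prior widens

print(theta_ml)                        # 0.6
```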
10.15 The ML equation is given by
$$\frac{2}{N_0}\int_0^T \left[y(t) - s(t, \theta)\right]\frac{\partial s(t, \theta)}{\partial \theta}\,dt = 0$$
where $s(t, \theta) = A\cos(\omega_c t + \theta)$ and
$$\frac{\partial s(t, \theta)}{\partial \theta} = -A\sin(\omega_c t + \theta)$$
Substituting into the ML equation, we have
$$-\frac{2A}{N_0}\int_0^T \left[y(t) - A\cos(\omega_c t + \theta)\right]\sin(\omega_c t + \theta)\,dt = 0$$
$$\Rightarrow \int_0^T y(t)\sin(\omega_c t + \theta)\,dt = A\int_0^T \cos(\omega_c t + \theta)\sin(\omega_c t + \theta)\,dt = \frac{A}{2}\int_0^T \sin[2(\omega_c t + \theta)]\,dt$$
Assuming many cycles of the carrier within $[0, T]$, the integral involving the double frequency term is approximately zero. Hence,
$$\int_0^T y(t)\left[\sin\omega_c t\cos\theta + \cos\omega_c t\sin\theta\right]dt \approx 0$$
Therefore,
$$\cos\theta\int_0^T y(t)\sin\omega_c t\,dt = -\sin\theta\int_0^T y(t)\cos\omega_c t\,dt
\;\Rightarrow\;
\tan\theta = -\frac{\displaystyle\int_0^T y(t)\sin\omega_c t\,dt}{\displaystyle\int_0^T y(t)\cos\omega_c t\,dt}$$
or,
$$\hat{\theta}_{ml} = -\tan^{-1}\left[\frac{\displaystyle\int_0^T y(t)\sin\omega_c t\,dt}{\displaystyle\int_0^T y(t)\cos\omega_c t\,dt}\right]$$
(b) Indeed, it can be shown that $\hat{\theta}_{ml}$ is unbiased, and thus we can apply the Cramér-Rao lower bound. The Cramér-Rao inequality is given by
$$\operatorname{var}[\hat{\theta}_{ml}] \geq \left[\frac{2}{N_0}\int_0^T \left(\frac{\partial s(t, \theta)}{\partial \theta}\right)^2 dt\right]^{-1}$$
with
$$\frac{\partial s(t, \theta)}{\partial \theta} = -A\sin(\omega_c t + \theta)$$
$$\Rightarrow \int_0^T A^2\sin^2(\omega_c t + \theta)\,dt = \frac{A^2}{2}\int_0^T \left[1 - \cos(2\omega_c t + 2\theta)\right]dt = \frac{A^2 T}{2} - \frac{A^2}{2}\int_0^T \cos(2\omega_c t + 2\theta)\,dt \approx \frac{A^2 T}{2}$$
Hence,
$$\operatorname{var}[\hat{\theta}_{ml}] \geq \frac{N_0}{A^2 T} \qquad \text{when} \qquad \frac{A^2 T}{N_0} \gg 1$$
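A sampled-data sketch of the estimator (sampling grid and noise level are our own; the sums stand in for the integrals):

```python
import numpy as np

rng = np.random.default_rng(2)
A, theta, fc, T, fs = 1.0, 0.4, 50.0, 1.0, 4000.0
N0 = 0.005

t = np.arange(0.0, T, 1.0 / fs)
# sampled white noise of two-sided level N0/2 has per-sample variance N0*fs/2
w = rng.normal(0.0, np.sqrt(N0 * fs / 2.0), t.size)
y = A * np.cos(2 * np.pi * fc * t + theta) + w

num = np.sum(y * np.sin(2 * np.pi * fc * t)) / fs   # ~ integral of y sin(wc t)
den = np.sum(y * np.cos(2 * np.pi * fc * t)) / fs   # ~ integral of y cos(wc t)
theta_hat = -np.arctan2(num, den)                   # arctan2 fixes the quadrant

print(theta_hat)                # close to 0.4
print(N0 / (A ** 2 * T))        # Cramer-Rao bound on var[theta_hat]
```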
10.16 (a) The matched filters to $s_1(t)$ and $s_2(t)$ are $h_1(t) = s_1(T - t)$ and $h_2(t) = s_2(T - t)$, respectively, as shown below.
[Figure: the pulses $s_1(t)$, $s_2(t)$ and the corresponding matched-filter impulse responses $h_1(t) = s_1(T - t)$, $h_2(t) = s_2(T - t)$.]
(b) The filter outputs as functions of time, when the signal matched to each filter is its input, are the convolutions $y_1(t) = s_1(t) * h_1(t)$ and $y_2(t) = s_2(t) * h_2(t)$, shown below.
[Figure: the matched-filter outputs $y_1(t)$ and $y_2(t)$, each peaking at the sampling instant $t = T$.]
(c) The output of the filter matched to $s_2(t)$ when the input is $s_1(t)$ is $y(t) = s_1(t) * h_2(t)$, as shown below.
[Figure: the cross output $y(t) = s_1(t) * h_2(t)$.]
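These matched-filter responses are easy to reproduce numerically (a sketch with made-up sampled pulses, since the problem's waveforms are given only graphically):

```python
import numpy as np

dt = 0.01
T = 1.0
n = int(T / dt)
t = np.arange(n) * dt

s1 = np.ones(n)                        # stand-in pulse on [0, T)
s2 = np.where(t < T / 2, 1.0, -1.0)    # stand-in bipolar pulse, orthogonal to s1

h1 = s1[::-1]                          # matched filter: h(t) = s(T - t)
h2 = s2[::-1]

y11 = np.convolve(s1, h1) * dt         # filter 1 output when s1 is the input
y12 = np.convolve(s1, h2) * dt         # filter 2 output when s1 is the input

k_T = n - 1                            # sample index of t = T
print(y11[k_T], y12[k_T])              # ~1.0 (the energy of s1) and ~0.0
```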



10.17 (a) The signals $s_1(t)$ and $s_2(t)$ are orthonormal.
[Figure: $s_1(t) = \sqrt{2/T}$ on $[0, T/2]$ and $s_2(t) = \sqrt{2/T}$ on $[T/2, T]$, zero elsewhere.]
Hence,
$$h_1(t) = s_1(T - t) = \begin{cases} \sqrt{2/T}, & T/2 \leq t \leq T \\ 0, & \text{otherwise} \end{cases}
\qquad \text{and} \qquad
h_2(t) = s_2(T - t) = \begin{cases} \sqrt{2/T}, & 0 \leq t \leq T/2 \\ 0, & \text{otherwise} \end{cases}$$
[Figure: the impulse responses $h_1(t)$ and $h_2(t)$.]
(b) The noise-free outputs of the matched filters are $y_k(t) = s_k(t) * h_k(t)$, $k = 1, 2$, the triangular pulses shown below.
[Figure: $y_1(t)$ and $y_2(t)$, triangles peaking with value 1 at $t = T$.]
Note that we sample at $t = T$: when $s_1(t)$ is transmitted, $y_1(T) = 1$, while the output of the filter matched to $s_2(t)$ is $0$ at $t = T$, since the signals are orthogonal.
(c) The SNR at the output of the matched filter is
$$SNR = \frac{d^2}{N_0} = \frac{2E}{N_0} = \frac{2}{N_0} \qquad \text{since } E = 1$$
10.18 $s_1(t) = \cos\omega_c t$, so the signal energy is $E = T/2$ and the first basis function is $\phi_1(t) = \sqrt{2/T}\cos\omega_c t$. Consequently, the first coefficient in the Karhunen-Loève expansion of $Y(t)$ is
$$Y_1 = \int_0^T Y(t)\phi_1(t)\,dt = \begin{cases} H_1: \displaystyle\int_0^T \left[A\cos(\omega_c t + \theta) + W(t)\right]\phi_1(t)\,dt \\[2mm] H_0: \displaystyle\int_0^T W(t)\phi_1(t)\,dt \end{cases}$$
Then, we select a suitable set of functions $\phi_k(t)$, $k = 2, 3, \ldots$, orthogonal to $\phi_1(t)$. We observe that for $k \geq 2$ we always obtain $W_k$, independently of the hypothesis. Only $Y_1$ depends on which hypothesis is true. Thus, $Y_1$ is a sufficient statistic.
$Y_1$ is a Gaussian random variable with conditional means
$$E[Y_1 \mid a, \theta, H_1] = a\sqrt{\frac{T}{2}}\cos\theta = a\sqrt{E}\cos\theta$$
$$E[Y_1 \mid a, \theta, H_0] = E[W_1] = 0$$
and variances
$$\operatorname{var}[Y_1 \mid a, \theta, H_1] = \operatorname{var}[Y_1 \mid a, \theta, H_0] = \frac{N_0}{2}$$
The conditional likelihood ratio is given by
$$\Lambda[y(t) \mid a, \theta] = \frac{f_{Y_1|A,\Theta,H_1}(y_1 \mid a, \theta, H_1)}{f_{Y_1|H_0}(y_1 \mid H_0)} = \exp\left(\frac{2}{N_0}\,y_1\,a\sqrt{E}\cos\theta\right)\exp\left(-\frac{1}{N_0}\,a^2 E\cos^2\theta\right)$$
$f_{A,\Theta}(a, \theta) = f_A(a)f_\Theta(\theta)$, since $A$ and $\Theta$ are independent. Hence,
$$\Lambda[y(t)] = \int_A \int_\Theta \Lambda[y(t) \mid a, \theta]\,f_{A,\Theta}(a, \theta)\,d\theta\,da$$
Substituting for $\Lambda[y(t) \mid a, \theta]$ and $f_{A,\Theta}(a, \theta)$ into the above integral, the decision rule reduces to
$$\Lambda[y(t)] = \left(\frac{N_0}{2\sigma_a^2 E + N_0}\right)^{1/2}\exp\left[\frac{2\sigma_a^2 E\,y_1^2}{N_0\left(2\sigma_a^2 E + N_0\right)}\right] \underset{H_0}{\overset{H_1}{\gtrless}} \eta$$
or,
$$y_1^2 \underset{H_0}{\overset{H_1}{\gtrless}} \gamma \qquad \text{with} \qquad \gamma = \frac{N_0\left(2\sigma_a^2 E + N_0\right)}{2\sigma_a^2 E}\,\ln\!\left[\eta\left(\frac{2\sigma_a^2 E + N_0}{N_0}\right)^{1/2}\right]$$
(b) The receiver can be implemented as follows:
[Figure: $Y(t)$ is correlated with $\sqrt{2/T}\cos\omega_c t$ over $[0, T]$, the result is squared, and $y_1^2$ is compared with $\gamma$ to decide $H_1$ or $H_0$.]
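A small numeric sketch of the threshold computation (parameter values are ours; it assumes the square-root prefactor derived above):

```python
import numpy as np

def threshold(eta, var_a, E, N0):
    # gamma = N0(2 var_a E + N0)/(2 var_a E) * ln(eta * sqrt((2 var_a E + N0)/N0))
    c = 2.0 * var_a * E + N0
    return N0 * c / (2.0 * var_a * E) * np.log(eta * np.sqrt(c / N0))

print(threshold(eta=1.0, var_a=1.0, E=0.5, N0=0.1))   # ~0.13
```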

10.19 Under hypothesis $H_0$, no signal is present and the conditional density function was derived in Example 10.7 to be
$$f_{Y_c Y_s|H_0}(y_c, y_s \mid H_0) = \frac{1}{2\pi\sigma^2}\exp\left(-\frac{y_c^2 + y_s^2}{2\sigma^2}\right)$$
Using the transformations $Y_c = R\cos\Theta$ and $Y_s = R\sin\Theta$, then
$$f_{R|H_0}(r \mid H_0) = \frac{r}{\sigma^2}\exp\left(-\frac{r^2}{2\sigma^2}\right)$$
and the probability of false alarm is
$$P_F = \int_\gamma^\infty \frac{r}{\sigma^2}\exp\left(-\frac{r^2}{2\sigma^2}\right)dr = \exp\left(-\frac{\gamma^2}{2\sigma^2}\right) = \exp\left(-\frac{\gamma^2}{N_0}\right)$$
with $\sigma^2 = N_0/2$. The probability of detection is
$$P_D = \int_A P_D(a)\,f_A(a)\,da$$
where
$$P_D(a) = \int_\gamma^\infty \frac{r}{\sigma^2}\exp\left[-\frac{r^2 + (aT/2)^2}{2\sigma^2}\right] I_0\!\left(\frac{r\,aT}{2\sigma^2}\right) dr$$
Solving for the expressions of $P_D(a)$ and $f_A(a)$, and carrying out the integral, we obtain
$$P_D = \exp\left(-\frac{\gamma^2}{N_0 + \sigma_a^2 T}\right)$$
Expressing $P_D$ in terms of $P_F$, we obtain
$$P_D = P_F^{\,N_0/(N_0 + \sigma_a^2 T)}$$
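The last relation gives the receiver operating characteristic in closed form; a brief sketch with parameters of our choosing:

```python
import numpy as np

N0, var_a, T = 1.0, 0.5, 2.0
exponent = N0 / (N0 + var_a * T)          # here 0.5

for P_F in (1e-6, 1e-4, 1e-2, 1e-1):
    P_D = P_F ** exponent                 # P_D = P_F^(N0/(N0 + sigma_a^2 T))
    print(P_F, P_D)
```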
10.20 (a)
$$E[N_k] = E\left[\int_0^T N(t)\phi_k(t)\,dt\right] = \int_0^T E[N(t)]\,\phi_k(t)\,dt = 0$$
and
$$\operatorname{var}[N_k] = E[N_k^2] = \frac{N_0}{2} + \lambda_k$$
(b) $N_0/2$ is the variance of the white noise process, and $\lambda_k$ may be considered the variance of the colored noise. That is, we assume that the total variance is composed of the white noise variance plus the colored noise variance. The white noise coefficients are independent; the others are Karhunen-Loève coefficients, which are Gaussian and uncorrelated $\Rightarrow$ independent.
(c) $c(t, u) = \dfrac{N_0}{2}\,\delta(t - u) \Rightarrow$ white Gaussian noise $\Rightarrow$
$$E[N_k] = 0 \qquad \text{and} \qquad \operatorname{var}[N_k] = E[N_k^2] = \frac{N_0}{2}$$
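A simulation of part (c) (discretization is our own): projecting sampled white noise onto an orthonormal set gives coefficients of variance approximately $N_0/2$.

```python
import numpy as np

rng = np.random.default_rng(3)
n, trials, N0 = 128, 20_000, 2.0
dt = 1.0 / n                               # time step on [0, 1)

# orthonormal functions: QR columns rescaled so that sum(phi_k^2) * dt = 1
Q, _ = np.linalg.qr(rng.standard_normal((n, 4)))
phi = Q.T / np.sqrt(dt)                    # shape (4, n)

# sampled white noise of level N0/2 has per-sample variance (N0/2)/dt
W = rng.normal(0.0, np.sqrt(N0 / (2.0 * dt)), (trials, n))
Nk = W @ phi.T * dt                        # N_k = integral of N(t) phi_k(t) dt

print(Nk.mean())                           # ~0
print(Nk.var(axis=0))                      # each ~ N0/2 = 1.0
```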
10.21 (a) $N_1(t) = W$ has one eigenfunction, so the noise power lies along a single component of the signal space. It cannot be whitened, since the process has no contribution in any other direction in the signal space.
(b) In this case, the noise $N(t) = N_1(t) + N_2(t)$ can be whitened by:
[Figure: whitening filter, in which $N(t)$ is scaled by $\sqrt{2/N_0}$ and a delay-$T$ branch with gain $1/T$ subtracts the dc component, yielding the whitened noise $\tilde{N}(t)$.]
That is, the whitening is performed by an amplifier and a dc canceller.