Problem 2.1 :

P(A_i) = \sum_{j=1}^{3} P(A_i, B_j), \quad i = 1, 2, 3, 4

Hence :

P(A_1) = \sum_{j=1}^{3} P(A_1, B_j) = 0.10 + 0.08 + 0.13 = 0.31
P(A_2) = \sum_{j=1}^{3} P(A_2, B_j) = 0.05 + 0.03 + 0.09 = 0.17
P(A_3) = \sum_{j=1}^{3} P(A_3, B_j) = 0.05 + 0.12 + 0.14 = 0.31
P(A_4) = \sum_{j=1}^{3} P(A_4, B_j) = 0.11 + 0.04 + 0.06 = 0.21

Similarly :

P(B_1) = \sum_{i=1}^{4} P(A_i, B_1) = 0.10 + 0.05 + 0.05 + 0.11 = 0.31
P(B_2) = \sum_{i=1}^{4} P(A_i, B_2) = 0.08 + 0.03 + 0.12 + 0.04 = 0.27
P(B_3) = \sum_{i=1}^{4} P(A_i, B_3) = 0.13 + 0.09 + 0.14 + 0.06 = 0.42
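The row and column sums above can be checked numerically; the joint probabilities P(A_i, B_j) below are the values used in the solution, arranged as rows i = 1..4 and columns j = 1..3.

```python
# Numerical check of the marginals in Problem 2.1.
P = [
    [0.10, 0.08, 0.13],
    [0.05, 0.03, 0.09],
    [0.05, 0.12, 0.14],
    [0.11, 0.04, 0.06],
]

# Row sums give P(A_i), column sums give P(B_j).
P_A = [round(sum(row), 2) for row in P]
P_B = [round(sum(P[i][j] for i in range(4)), 2) for j in range(3)]

print(P_A)  # [0.31, 0.17, 0.31, 0.21]
print(P_B)  # [0.31, 0.27, 0.42]
# Both sets of marginals must sum to 1.
print(round(sum(P_A), 10))  # 1.0
```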
Problem 2.2 :

The relationship holds for n = 2 (eq. (2-1-34)) : p(x_1, x_2) = p(x_2|x_1) p(x_1)

Suppose it holds for n = k, i.e. :

p(x_1, x_2, ..., x_k) = p(x_k|x_{k-1}, ..., x_1) \, p(x_{k-1}|x_{k-2}, ..., x_1) \cdots p(x_1)

Then for n = k + 1 :

p(x_1, x_2, ..., x_k, x_{k+1}) = p(x_{k+1}|x_k, x_{k-1}, ..., x_1) \, p(x_k, x_{k-1}, ..., x_1)
= p(x_{k+1}|x_k, x_{k-1}, ..., x_1) \, p(x_k|x_{k-1}, ..., x_1) \, p(x_{k-1}|x_{k-2}, ..., x_1) \cdots p(x_1)

Hence the relationship holds for n = k + 1, and by induction it holds for any n.
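The chain-rule factorization can be sanity-checked numerically for a small case; here n = 3 binary variables with an arbitrary random joint pmf (all names below are illustrative).

```python
import itertools, random

# Numerical check of the chain rule of Problem 2.2:
# p(x1,x2,x3) = p(x3|x2,x1) p(x2|x1) p(x1), for a random joint pmf.
random.seed(3)
raw = {xs: random.random() for xs in itertools.product([0, 1], repeat=3)}
Z = sum(raw.values())
p = {xs: v / Z for xs, v in raw.items()}  # joint pmf p(x1, x2, x3)

def marg(xs):
    # marginal pmf of the first len(xs) variables
    k = len(xs)
    return sum(v for key, v in p.items() if key[:k] == xs)

for x1, x2, x3 in itertools.product([0, 1], repeat=3):
    chain = (marg((x1, x2, x3)) / marg((x1, x2))) \
          * (marg((x1, x2)) / marg((x1,))) * marg((x1,))
    assert abs(chain - p[(x1, x2, x3)]) < 1e-12
print("chain rule verified")
```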
Problem 2.3 :

Following the same procedure as in Example 2-1-1, we prove :

p_Y(y) = \frac{1}{|a|} \, p_X\!\left(\frac{y-b}{a}\right)
Problem 2.4 :

Relationship (2-1-44) gives :

p_Y(y) = \frac{1}{3a\,[(y-b)/a]^{2/3}} \, p_X\!\left( \left[\frac{y-b}{a}\right]^{1/3} \right)

X is a Gaussian r.v. with zero mean and unit variance : p_X(x) = \frac{1}{\sqrt{2\pi}} e^{-x^2/2}

Hence :

p_Y(y) = \frac{1}{3a\sqrt{2\pi}\,[(y-b)/a]^{2/3}} \, e^{-\frac{1}{2}[(y-b)/a]^{2/3}}
[Figure: pdf of Y as a function of y, plotted over -10 <= y <= 10, for a = 2, b = 3.]
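The density formula above corresponds to the transformation Y = aX^3 + b (an assumption here, inferred from the form of (2-1-44)); a Monte Carlo sketch can confirm the closed form against simulated samples, for a = 2, b = 3 as in the figure.

```python
import math, random

# Sketch (assuming Y = a*X**3 + b): compare the closed-form pdf of Y
# against a Monte Carlo estimate of P(4 <= Y <= 5), for a = 2, b = 3.
a, b = 2.0, 3.0

def p_Y(y):
    u = abs((y - b) / a)
    return math.exp(-0.5 * u ** (2.0 / 3.0)) / (3.0 * a * math.sqrt(2.0 * math.pi) * u ** (2.0 / 3.0))

random.seed(0)
N = 200_000
samples = [a * random.gauss(0.0, 1.0) ** 3 + b for _ in range(N)]

lo, hi = 4.0, 5.0
empirical = sum(lo <= y <= hi for y in samples) / N

# crude midpoint-rule integral of the closed-form pdf over [4, 5]
M = 1000
theoretical = sum(p_Y(lo + (k + 0.5) * (hi - lo) / M) for k in range(M)) * (hi - lo) / M

print(abs(empirical - theoretical) < 0.01)  # the two estimates agree
```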
Problem 2.5 :

(a) Since (X_r, X_i) are statistically independent :

p_X(x_r, x_i) = p_X(x_r) p_X(x_i) = \frac{1}{2\pi\sigma^2} e^{-(x_r^2+x_i^2)/2\sigma^2}

Also :

Y_r + jY_i = (X_r + jX_i)e^{j\phi} \Rightarrow X_r + jX_i = (Y_r + jY_i)e^{-j\phi} = Y_r\cos\phi + Y_i\sin\phi + j(-Y_r\sin\phi + Y_i\cos\phi)

\Rightarrow \begin{cases} X_r = Y_r\cos\phi + Y_i\sin\phi \\ X_i = -Y_r\sin\phi + Y_i\cos\phi \end{cases}

The Jacobian of the above transformation is :

J = \begin{vmatrix} \frac{\partial X_r}{\partial Y_r} & \frac{\partial X_r}{\partial Y_i} \\ \frac{\partial X_i}{\partial Y_r} & \frac{\partial X_i}{\partial Y_i} \end{vmatrix} = \begin{vmatrix} \cos\phi & \sin\phi \\ -\sin\phi & \cos\phi \end{vmatrix} = 1

Hence, by (2-1-55) :

p_Y(y_r, y_i) = p_X(y_r\cos\phi + y_i\sin\phi, \; -y_r\sin\phi + y_i\cos\phi) = \frac{1}{2\pi\sigma^2} e^{-(y_r^2+y_i^2)/2\sigma^2}
(b) Y = AX and X = A^{-1}Y.

Now, p_X(x) = \frac{1}{(2\pi\sigma^2)^{n/2}} e^{-x'x/2\sigma^2} (the covariance matrix M of the random variables x_1, ..., x_n is M = \sigma^2 I, since they are i.i.d.) and J = 1/\det(A). Hence :

p_Y(y) = \frac{1}{(2\pi\sigma^2)^{n/2}} \frac{1}{|\det(A)|} e^{-y'(A^{-1})'A^{-1}y/2\sigma^2}

For the pdfs of X and Y to be identical we require that :

|\det(A)| = 1 \quad \text{and} \quad (A^{-1})'A^{-1} = I \Rightarrow A^{-1} = A'
Problem 2.6 :

(a)

\psi_Y(jv) = E\left[e^{jvY}\right] = E\left[e^{jv\sum_{i=1}^{n} x_i}\right] = E\left[\prod_{i=1}^{n} e^{jvx_i}\right] = \prod_{i=1}^{n} E\left[e^{jvX_i}\right] = \left[\psi_X(jv)\right]^n

But,

p_X(x) = p\,\delta(x-1) + (1-p)\,\delta(x) \Rightarrow \psi_X(jv) = 1 - p + pe^{jv}

\Rightarrow \psi_Y(jv) = \left(1 - p + pe^{jv}\right)^n
(b)

E(Y) = -j \left. \frac{d\psi_Y(jv)}{dv} \right|_{v=0} = -j \, n\left(1-p+pe^{jv}\right)^{n-1} jpe^{jv} \Big|_{v=0} = np

and

E(Y^2) = -\left. \frac{d^2\psi_Y(jv)}{dv^2} \right|_{v=0} = -\left. \frac{d}{dv}\left[ jn\left(1-p+pe^{jv}\right)^{n-1} pe^{jv} \right] \right|_{v=0} = np + n(n-1)p^2

\Rightarrow E(Y^2) = n^2p^2 + np(1-p)
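Since Y here is a sum of n i.i.d. Bernoulli(p) variables, Y is binomial, and the two moments can be verified directly from the binomial pmf.

```python
import math

# Check of Problem 2.6(b): E(Y) = n*p and E(Y^2) = (n*p)^2 + n*p*(1-p),
# computed directly from P(Y = k) = C(n, k) p^k (1-p)^(n-k).
n, p = 10, 0.3
pmf = [math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]

EY = sum(k * pmf[k] for k in range(n + 1))
EY2 = sum(k * k * pmf[k] for k in range(n + 1))

print(round(EY, 6))   # 3.0  (= n*p)
print(round(EY2, 6))  # 11.1 (= (n*p)^2 + n*p*(1-p) = 9 + 2.1)
```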
Problem 2.7 :

\psi(jv_1, jv_2, jv_3, jv_4) = E\left[ e^{j(v_1x_1+v_2x_2+v_3x_3+v_4x_4)} \right]

E(X_1X_2X_3X_4) = (-j)^4 \left. \frac{\partial^4 \psi(jv_1, jv_2, jv_3, jv_4)}{\partial v_1 \, \partial v_2 \, \partial v_3 \, \partial v_4} \right|_{v_1=v_2=v_3=v_4=0}

From (2-1-151) of the text, and the zero-mean property of the given r.v.'s :

\psi(jv) = e^{-\frac{1}{2} v'Mv}

where v = [v_1, v_2, v_3, v_4]', M = [\mu_{ij}].

We obtain the desired result by bringing the exponent to a scalar form and then performing quadruple differentiation. We can simplify the procedure by noting that :

\frac{\partial \psi(jv)}{\partial v_i} = -\mu_i' v \, e^{-\frac{1}{2}v'Mv}

where \mu_i' = [\mu_{i1}, \mu_{i2}, \mu_{i3}, \mu_{i4}]. Also note that :

\frac{\partial \mu_j' v}{\partial v_i} = \mu_{ij} = \mu_{ji}

Hence :

\left. \frac{\partial^4 \psi(jv_1, jv_2, jv_3, jv_4)}{\partial v_1 \, \partial v_2 \, \partial v_3 \, \partial v_4} \right|_{V=0} = \mu_{12}\mu_{34} + \mu_{23}\mu_{14} + \mu_{24}\mu_{13}
Problem 2.8 :

For the central chi-square with n degrees of freedom :

\psi(jv) = \frac{1}{(1-j2v\sigma^2)^{n/2}}

Now :

\frac{d\psi(jv)}{dv} = \frac{jn\sigma^2}{(1-j2v\sigma^2)^{n/2+1}} \Rightarrow E(Y) = -j\left.\frac{d\psi(jv)}{dv}\right|_{v=0} = n\sigma^2

\frac{d^2\psi(jv)}{dv^2} = \frac{-2n\sigma^4(n/2+1)}{(1-j2v\sigma^2)^{n/2+2}} \Rightarrow E\left[Y^2\right] = -\left.\frac{d^2\psi(jv)}{dv^2}\right|_{v=0} = n(n+2)\sigma^4

The variance is \sigma_Y^2 = E(Y^2) - [E(Y)]^2 = 2n\sigma^4
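The central chi-square moments can be checked by direct simulation: if Y is the sum of squares of n i.i.d. zero-mean Gaussians of variance sigma^2, then E(Y) = n sigma^2 and Var(Y) = 2n sigma^4.

```python
import random

# Monte Carlo sketch of the central chi-square moments in Problem 2.8.
random.seed(1)
n, sigma2 = 4, 2.0
N = 100_000

ys = []
for _ in range(N):
    ys.append(sum(random.gauss(0.0, sigma2 ** 0.5) ** 2 for _ in range(n)))

mean_y = sum(ys) / N
var_y = sum((y - mean_y) ** 2 for y in ys) / N

# Relative errors shrink like 1/sqrt(N); loose tolerances below.
print(abs(mean_y - n * sigma2) / (n * sigma2) < 0.02)
print(abs(var_y - 2 * n * sigma2 ** 2) / (2 * n * sigma2 ** 2) < 0.05)
```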
For the noncentral chi-square with n degrees of freedom :

\psi(jv) = \frac{1}{(1-j2v\sigma^2)^{n/2}} \, e^{jvs^2/(1-j2v\sigma^2)}

where by definition : s^2 = \sum_{i=1}^{n} m_i^2.

\frac{d\psi(jv)}{dv} = \left[ \frac{jn\sigma^2}{(1-j2v\sigma^2)^{n/2+1}} + \frac{js^2}{(1-j2v\sigma^2)^{n/2+2}} \right] e^{jvs^2/(1-j2v\sigma^2)}

Hence, E(Y) = -j \left.\frac{d\psi(jv)}{dv}\right|_{v=0} = n\sigma^2 + s^2

\frac{d^2\psi(jv)}{dv^2} = -\left[ \frac{n\sigma^4(n+2)}{(1-j2v\sigma^2)^{n/2+2}} + \frac{s^2(n+4)\sigma^2 + ns^2\sigma^2}{(1-j2v\sigma^2)^{n/2+3}} + \frac{s^4}{(1-j2v\sigma^2)^{n/2+4}} \right] e^{jvs^2/(1-j2v\sigma^2)}

Hence,

E\left[Y^2\right] = -\left.\frac{d^2\psi(jv)}{dv^2}\right|_{v=0} = 2n\sigma^4 + 4s^2\sigma^2 + \left(n\sigma^2 + s^2\right)^2

and

\sigma_Y^2 = E\left[Y^2\right] - [E(Y)]^2 = 2n\sigma^4 + 4\sigma^2 s^2
Problem 2.9 :

The Cauchy r.v. has : p(x) = \frac{a/\pi}{x^2+a^2}, \quad -\infty < x < \infty

(a)

E(X) = \int_{-\infty}^{\infty} x\,p(x)\,dx = 0

since p(x) is an even function, so x\,p(x) is odd.

E\left[X^2\right] = \int_{-\infty}^{\infty} x^2 p(x)\,dx = \frac{a}{\pi} \int_{-\infty}^{\infty} \frac{x^2}{x^2+a^2}\,dx

Note that for large x, \frac{x^2}{x^2+a^2} \to 1 (i.e. a non-zero value). Hence,

E\left[X^2\right] = \infty, \quad \sigma^2 = \infty
(b)

\psi(jv) = E\left[e^{jvX}\right] = \int_{-\infty}^{\infty} \frac{a/\pi}{x^2+a^2} e^{jvx}\,dx = \int_{-\infty}^{\infty} \frac{a/\pi}{(x+ja)(x-ja)} e^{jvx}\,dx

This integral can be evaluated by using the residue theorem in complex variable theory. Then, for v \ge 0 (closing the contour in the upper half-plane, around the pole at x = ja) :

\psi(jv) = 2\pi j \left[ \frac{a/\pi}{x+ja} e^{jvx} \right]_{x=ja} = e^{-av}

For v < 0 (closing in the lower half-plane, around x = -ja) :

\psi(jv) = -2\pi j \left[ \frac{a/\pi}{x-ja} e^{jvx} \right]_{x=-ja} = e^{av}

Therefore :

\psi(jv) = e^{-a|v|}

Note: an alternative way to find the characteristic function is to use the Fourier transform relationship between p(x) and \psi(jv) and the Fourier pair :

e^{-b|t|} \leftrightarrow \frac{1}{\pi}\frac{c}{c^2+f^2}, \quad c = \frac{b}{2\pi}, \quad f = \frac{v}{2\pi}
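The residue-theorem result psi(jv) = e^{-a|v|} can be confirmed by direct numerical integration of the Cauchy pdf (the imaginary part of the integrand vanishes by symmetry, so only the cosine term is needed).

```python
import math

# Numerical check of Problem 2.9(b): integrate the Cauchy pdf against cos(v*x)
# and compare with exp(-a*|v|).
a, v = 1.5, 2.0

def integrand(x):
    return (a / math.pi) / (x * x + a * a) * math.cos(v * x)

# Midpoint rule on a wide, finely divided interval; the tails decay like 1/x^2
# and oscillate, so their contribution is negligible.
L, M = 500.0, 1_000_000
h = 2 * L / M
psi = sum(integrand(-L + (k + 0.5) * h) for k in range(M)) * h

print(abs(psi - math.exp(-a * abs(v))) < 1e-3)  # True
```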
Problem 2.10 :

(a) Y = \frac{1}{n}\sum_{i=1}^{n} X_i, \quad \psi_{X_i}(jv) = e^{-a|v|}

\psi_Y(jv) = E\left[ e^{jv\frac{1}{n}\sum_{i=1}^{n} X_i} \right] = \prod_{i=1}^{n} E\left[ e^{j\frac{v}{n}X_i} \right] = \prod_{i=1}^{n} \psi_{X_i}(jv/n) = \left( e^{-a|v|/n} \right)^n = e^{-a|v|}

(b) Since \psi_Y(jv) = \psi_{X_i}(jv) \Rightarrow p_Y(y) = p_{X_i}(y) = \frac{a/\pi}{y^2+a^2}.

(c) As n \to \infty, p_Y(y) = \frac{a/\pi}{y^2+a^2}, which is not Gaussian ; hence, the central limit theorem does not hold. The reason is that the Cauchy distribution does not have a finite variance.
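The failure of averaging to concentrate Cauchy samples can be seen empirically: since moments do not exist, we compare interquartile ranges, which should be the same for a single Cauchy variable and for the mean of n of them.

```python
import math, random

# Sketch of Problem 2.10's point: the sample mean of n i.i.d. standard Cauchy
# (a = 1) variables has the same spread as a single one.
random.seed(2)

def cauchy():
    # standard Cauchy via the inverse-CDF method
    return math.tan(math.pi * (random.random() - 0.5))

def iqr(data):
    s = sorted(data)
    return s[3 * len(s) // 4] - s[len(s) // 4]

N, n = 20_000, 50
singles = [cauchy() for _ in range(N)]
means = [sum(cauchy() for _ in range(n)) / n for _ in range(N)]

# Both IQRs should be near the theoretical value 2*tan(pi/4) = 2 for a = 1.
print(abs(iqr(singles) - 2.0) < 0.2, abs(iqr(means) - 2.0) < 0.2)
```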
Problem 2.11 :

We assume that x(t), y(t), z(t) are real-valued stochastic processes. The treatment of complex-valued processes is similar.

(a)

\phi_{zz}(\tau) = E\{[x(t+\tau)+y(t+\tau)][x(t)+y(t)]\} = \phi_{xx}(\tau) + \phi_{xy}(\tau) + \phi_{yx}(\tau) + \phi_{yy}(\tau)

(b) When x(t), y(t) are uncorrelated :

\phi_{xy}(\tau) = E[x(t+\tau)y(t)] = E[x(t+\tau)]E[y(t)] = m_x m_y

Similarly : \phi_{yx}(\tau) = m_x m_y

Hence : \phi_{zz}(\tau) = \phi_{xx}(\tau) + \phi_{yy}(\tau) + 2m_x m_y

(c) When x(t), y(t) are uncorrelated and have zero means :

\phi_{zz}(\tau) = \phi_{xx}(\tau) + \phi_{yy}(\tau)
Problem 2.12 :

The power spectral density of the random process x(t) is :

\Phi_{xx}(f) = \int_{-\infty}^{\infty} \phi_{xx}(\tau) e^{-j2\pi f\tau}\,d\tau = N_0/2.

The power spectral density at the output of the filter will be :

\Phi_{yy}(f) = \Phi_{xx}(f)\,|H(f)|^2 = \frac{N_0}{2}\,|H(f)|^2

Hence, the total power at the output of the filter will be :

\phi_{yy}(\tau=0) = \int_{-\infty}^{\infty} \Phi_{yy}(f)\,df = \frac{N_0}{2} \int_{-\infty}^{\infty} |H(f)|^2\,df = \frac{N_0}{2}(2B) = N_0 B
Problem 2.13 :
M
X
= E [(Xm
x
)(Xm
x
)
] , X =
_
_
X
1
X
2
X
3
_
_ , m
x
is the corresponding vector of mean values.
7
Then :
M
Y
= E [(Ym
y
)(Ym
y
)
]
= E [A(Xm
x
)(A(Xm
x
))
]
= E [A(Xm
x
)(Xm
x
)
]
= AE[(Xm
x
)(Xm
x
)
] A
= AM
x
A
Hence :
M
Y
=
_
11
0
11
+
13
0 4
22
0
11
+
31
0
11
+
13
+
31
+
33
_
_
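The identity M_Y = A M_X A' can be verified numerically. The matrix A is not repeated in this solution; A = [[1,0,0],[0,2,0],[1,0,1]] below is an assumption consistent with the structure of the resulting M_Y, and the mu values are arbitrary test entries of a symmetric covariance matrix with mu_12 = mu_21 = mu_23 = mu_32 = 0.

```python
# Numeric check of M_Y = A * M_X * A' in Problem 2.13 (A assumed, see above).
A = [[1, 0, 0],
     [0, 2, 0],
     [1, 0, 1]]
mu = [[2.0, 0.0, 0.5],
      [0.0, 3.0, 0.0],
      [0.5, 0.0, 1.0]]

def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

At = [[A[j][i] for j in range(3)] for i in range(3)]  # transpose of A
MY = matmul(matmul(A, mu), At)

m11, m13, m22, m31, m33 = mu[0][0], mu[0][2], mu[1][1], mu[2][0], mu[2][2]
expected = [[m11, 0.0, m11 + m13],
            [0.0, 4 * m22, 0.0],
            [m11 + m31, 0.0, m11 + m13 + m31 + m33]]
print(MY == expected)  # True
```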
Problem 2.14 :

Y(t) = X^2(t), \quad \phi_{xx}(\tau) = E[x(t+\tau)x(t)]

\phi_{yy}(\tau) = E[y(t+\tau)y(t)] = E\left[x^2(t+\tau)x^2(t)\right]

Let X_1 = X_2 = x(t), X_3 = X_4 = x(t+\tau). Then, from problem 2.7 :

E(X_1X_2X_3X_4) = E(X_1X_2)E(X_3X_4) + E(X_1X_3)E(X_2X_4) + E(X_1X_4)E(X_2X_3)

Hence :

\phi_{yy}(\tau) = \phi_{xx}^2(0) + 2\phi_{xx}^2(\tau)
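For a zero-mean Gaussian process this moment-factoring result can be checked by simulating pairs of jointly Gaussian samples with correlation rho = phi_xx(tau) and unit variance phi_xx(0) = 1, so that the prediction is 1 + 2*rho^2.

```python
import math, random

# Monte Carlo sketch of phi_yy(tau) = phi_xx(0)^2 + 2*phi_xx(tau)^2 (Problem 2.14).
random.seed(4)
rho = 0.6
N = 400_000

acc = 0.0
for _ in range(N):
    u, w = random.gauss(0, 1), random.gauss(0, 1)
    x1 = u
    x2 = rho * u + math.sqrt(1 - rho * rho) * w   # corr(x1, x2) = rho
    acc += (x1 * x1) * (x2 * x2)
est = acc / N

print(abs(est - (1.0 + 2.0 * rho * rho)) < 0.05)  # expects 1 + 2*rho^2 = 1.72
```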
Problem 2.15 :

p_R(r) = \frac{2}{\Gamma(m)} \left(\frac{m}{\Omega}\right)^m r^{2m-1} e^{-mr^2/\Omega}, \quad X = \frac{1}{\sqrt{\Omega}} R

We know that : p_X(x) = \frac{1}{1/\sqrt{\Omega}}\, p_R\!\left( \frac{x}{1/\sqrt{\Omega}} \right).

Hence :

p_X(x) = \sqrt{\Omega}\,\frac{2}{\Gamma(m)} \left(\frac{m}{\Omega}\right)^m \left( x\sqrt{\Omega} \right)^{2m-1} e^{-m(x\sqrt{\Omega})^2/\Omega} = \frac{2}{\Gamma(m)} m^m x^{2m-1} e^{-mx^2}
Problem 2.16 :

The transfer function of the filter is :

H(f) = \frac{1/j\omega C}{R + 1/j\omega C} = \frac{1}{j\omega RC + 1} = \frac{1}{j2\pi fRC + 1}

(a)

\Phi_{xx}(f) = \sigma^2 \Rightarrow \Phi_{yy}(f) = \Phi_{xx}(f)\,|H(f)|^2 = \frac{\sigma^2}{(2\pi RC)^2 f^2 + 1}

(b)

\phi_{yy}(\tau) = \mathcal{F}^{-1}\{\Phi_{yy}(f)\} = \frac{\sigma^2}{RC} \int_{-\infty}^{\infty} \frac{\frac{1}{RC}}{\left(\frac{1}{RC}\right)^2 + (2\pi f)^2}\, e^{j2\pi f\tau}\,df

Let : a = 1/RC, v = 2\pi f. Then :

\phi_{yy}(\tau) = \frac{\sigma^2}{2RC} \int_{-\infty}^{\infty} \frac{a/\pi}{a^2+v^2}\, e^{jv\tau}\,dv = \frac{\sigma^2}{2RC}\, e^{-a|\tau|} = \frac{\sigma^2}{2RC}\, e^{-|\tau|/RC}

where the last integral is evaluated in the same way as in problem 2.9 . Finally :

E\left[Y^2(t)\right] = \phi_{yy}(0) = \frac{\sigma^2}{2RC}
Problem 2.17 :

If \Phi_X(f) = 0 for |f| > W, then \Phi_X(f)e^{-j2\pi fa} is also bandlimited. The corresponding autocorrelation function can be represented as (remember that \Phi_X(f) is deterministic) :

\phi_X(\tau - a) = \sum_{n=-\infty}^{\infty} \phi_X\!\left(\frac{n}{2W} - a\right) \frac{\sin 2\pi W\!\left(\tau - \frac{n}{2W}\right)}{2\pi W\!\left(\tau - \frac{n}{2W}\right)} \quad (1)

Let us define :

\hat{X}(t) = \sum_{n=-\infty}^{\infty} X\!\left(\frac{n}{2W}\right) \frac{\sin 2\pi W\!\left(t - \frac{n}{2W}\right)}{2\pi W\!\left(t - \frac{n}{2W}\right)}

We must show that :

E\left[ \left|X(t) - \hat{X}(t)\right|^2 \right] = 0

or

E\left[ \left(X(t) - \hat{X}(t)\right) \left( X(t) - \sum_{m=-\infty}^{\infty} X\!\left(\frac{m}{2W}\right) \frac{\sin 2\pi W\!\left(t - \frac{m}{2W}\right)}{2\pi W\!\left(t - \frac{m}{2W}\right)} \right) \right] = 0 \quad (2)

First we have :

E\left[ \left(X(t) - \hat{X}(t)\right) X\!\left(\frac{m}{2W}\right) \right] = \phi_X\!\left(t - \frac{m}{2W}\right) - \sum_{n=-\infty}^{\infty} \phi_X\!\left(\frac{n-m}{2W}\right) \frac{\sin 2\pi W\!\left(t - \frac{n}{2W}\right)}{2\pi W\!\left(t - \frac{n}{2W}\right)}

But the right-hand side of this equation is equal to zero by application of (1) with a = m/2W. Since this is true for any m, it follows that E\left[\left(X(t) - \hat{X}(t)\right)\hat{X}(t)\right] = 0. Also

E\left[\left(X(t) - \hat{X}(t)\right)X(t)\right] = \phi_X(0) - \sum_{n=-\infty}^{\infty} \phi_X\!\left(\frac{n}{2W} - t\right) \frac{\sin 2\pi W\!\left(t - \frac{n}{2W}\right)}{2\pi W\!\left(t - \frac{n}{2W}\right)}

Again, by applying (1) with a = \tau = t, we observe that the right-hand side of the equation is also zero. Hence (2) holds.
Problem 2.18 :

Q(x) = \frac{1}{\sqrt{2\pi}} \int_x^{\infty} e^{-t^2/2}\,dt = P[N \ge x], where N is a Gaussian r.v. with zero mean and unit variance. From the Chernoff bound :

P[N \ge x] \le e^{-\hat{v}x} E\left[ e^{\hat{v}N} \right] \quad (1)

where \hat{v} is the solution to :

E\left[ Ne^{vN} \right] - xE\left[ e^{vN} \right] = 0 \quad (2)

Now :

E\left[ e^{vN} \right] = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{vt} e^{-t^2/2}\,dt
= e^{v^2/2} \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{-(t-v)^2/2}\,dt
= e^{v^2/2}

and

E\left[ Ne^{vN} \right] = \frac{d}{dv} E\left[ e^{vN} \right] = ve^{v^2/2}

Hence (2) gives :

\hat{v} = x

and then :

(1) \Rightarrow Q(x) \le e^{-x^2} e^{x^2/2} \Rightarrow Q(x) \le e^{-x^2/2}
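The bound Q(x) <= e^{-x^2/2} is easy to verify numerically, expressing Q through the standard-library complementary error function: Q(x) = 0.5*erfc(x/sqrt(2)).

```python
import math

# Check of Problem 2.18: the Chernoff bound Q(x) <= exp(-x^2/2) for x >= 0.
def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

for x in [0.0, 0.5, 1.0, 2.0, 3.0, 5.0]:
    bound = math.exp(-x * x / 2.0)
    print(x, Q(x) <= bound)  # the bound holds at every x
```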
Problem 2.19 :

Since H(0) = \sum_n h(n) = 0 \Rightarrow m_y = m_x H(0) = 0

The autocorrelation of the output sequence is

\phi_{yy}(k) = \sum_i \sum_j h(i)h(j)\phi_{xx}(k-j+i) = \sigma_x^2 \sum_{i=-\infty}^{\infty} h(i)h(k+i)

where the last equality stems from the autocorrelation function of X(n) :

\phi_{xx}(k-j+i) = \sigma_x^2 \,\delta(k-j+i) = \begin{cases} \sigma_x^2, & j = k+i \\ 0, & \text{otherwise} \end{cases}

Hence, \phi_{yy}(0) = 6\sigma_x^2, \phi_{yy}(1) = \phi_{yy}(-1) = -4\sigma_x^2, \phi_{yy}(2) = \phi_{yy}(-2) = \sigma_x^2, \phi_{yy}(k) = 0 otherwise.
Finally, the frequency response of the discrete-time system is :

H(f) = \sum_n h(n)e^{-j2\pi fn} = 1 - 2e^{-j2\pi f} + e^{-j4\pi f} = \left(1 - e^{-j2\pi f}\right)^2 = e^{-j2\pi f}\left(e^{j\pi f} - e^{-j\pi f}\right)^2 = -4e^{-j2\pi f}\sin^2 \pi f

which gives the power density spectrum of the output :

\Phi_{yy}(f) = \Phi_{xx}(f)\,|H(f)|^2 = \sigma_x^2 \left(16\sin^4 \pi f\right) = 16\sigma_x^2 \sin^4 \pi f
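Both the autocorrelation values and the PSD can be checked from the filter taps h = (1, -2, 1) implied by H(f) = 1 - 2e^{-j2pi f} + e^{-j4pi f}, taking sigma_x^2 = 1.

```python
import math

# Check of Problem 2.19 with taps h = (1, -2, 1) and sigma_x^2 = 1.
h = [1.0, -2.0, 1.0]

def phi_yy(k):
    # sigma_x^2 * sum_i h(i) h(k+i)
    k = abs(k)
    return sum(h[i] * h[i + k] for i in range(len(h) - k)) if k < len(h) else 0.0

print([phi_yy(k) for k in range(4)])  # [6.0, -4.0, 1.0, 0.0]

# The PSD computed from the taps must match 16*sin^4(pi*f).
f = 0.3
H = sum(h[n] * complex(math.cos(-2 * math.pi * f * n), math.sin(-2 * math.pi * f * n))
        for n in range(3))
print(abs(abs(H) ** 2 - 16 * math.sin(math.pi * f) ** 4) < 1e-12)  # True
```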
Problem 2.20 :

\phi(k) = \left(\frac{1}{2}\right)^{|k|}

The power density spectrum is

\Phi(f) = \sum_{k=-\infty}^{\infty} \phi(k)e^{-j2\pi fk}
= \sum_{k=-\infty}^{-1} \left(\frac{1}{2}\right)^{-k} e^{-j2\pi fk} + \sum_{k=0}^{\infty} \left(\frac{1}{2}\right)^{k} e^{-j2\pi fk}
= \sum_{k=0}^{\infty} \left(\frac{1}{2}e^{j2\pi f}\right)^k + \sum_{k=0}^{\infty} \left(\frac{1}{2}e^{-j2\pi f}\right)^k - 1
= \frac{1}{1 - e^{j2\pi f}/2} + \frac{1}{1 - e^{-j2\pi f}/2} - 1
= \frac{2 - \cos 2\pi f}{5/4 - \cos 2\pi f} - 1
= \frac{3}{5 - 4\cos 2\pi f}
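The closed form can be confirmed by summing the series directly: the terms decay like (1/2)^{|k|}, so a modest truncation already matches 3/(5 - 4 cos 2 pi f) to machine precision.

```python
import math

# Check of Problem 2.20: partial sums of sum_k (1/2)^{|k|} e^{-j 2 pi f k}
# converge to 3 / (5 - 4 cos(2 pi f)).
f = 0.17
K = 60  # (1/2)^60 is negligible

S = sum(0.5 ** abs(k) * complex(math.cos(-2 * math.pi * f * k), math.sin(-2 * math.pi * f * k))
        for k in range(-K, K + 1))
closed = 3.0 / (5.0 - 4.0 * math.cos(2 * math.pi * f))

print(abs(S.real - closed) < 1e-12, abs(S.imag) < 1e-12)  # True True
```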
Problem 2.21 :

We will denote the discrete-time process by the subscript d and the continuous-time (analog) process by the subscript a. Also, f will denote the analog frequency and f_d the discrete-time frequency.

(a)

\phi_d(k) = E\left[X^*(n)X(n+k)\right] = E\left[X^*(nT)X(nT+kT)\right] = \phi_a(kT)

Hence, the autocorrelation function of the sampled signal is equal to the sampled autocorrelation function of X(t).

(b)

\phi_d(k) = \phi_a(kT) = \int_{-\infty}^{\infty} \Phi_a(f)e^{j2\pi fkT}\,df
= \sum_{l=-\infty}^{\infty} \int_{(2l-1)/2T}^{(2l+1)/2T} \Phi_a(f)e^{j2\pi fkT}\,df
= \sum_{l=-\infty}^{\infty} \int_{-1/2T}^{1/2T} \Phi_a\!\left(f + \frac{l}{T}\right)e^{j2\pi fkT}\,df
= \int_{-1/2T}^{1/2T} \left[ \sum_{l=-\infty}^{\infty} \Phi_a\!\left(f + \frac{l}{T}\right) \right] e^{j2\pi fkT}\,df

Let f_d = fT. Then :

\phi_d(k) = \int_{-1/2}^{1/2} \left[ \frac{1}{T} \sum_{l=-\infty}^{\infty} \Phi_a\!\left((f_d + l)/T\right) \right] e^{j2\pi f_d k}\,df_d \quad (1)

We know that the autocorrelation function of a discrete-time process is the inverse Fourier transform of its power spectral density

\phi_d(k) = \int_{-1/2}^{1/2} \Phi_d(f_d)e^{j2\pi f_d k}\,df_d \quad (2)

Comparing (1),(2) :

\Phi_d(f_d) = \frac{1}{T} \sum_{l=-\infty}^{\infty} \Phi_a\!\left(\frac{f_d + l}{T}\right) \quad (3)

(c) From (3) we conclude that :

\Phi_d(f_d) = \frac{1}{T}\,\Phi_a\!\left(\frac{f_d}{T}\right)

if :

\Phi_a(f) = 0, \quad \forall f : |f| > 1/2T

Otherwise, the sum of the shifted copies of \Phi_a (in (3)) will overlap and aliasing will occur.
Problem 2.22 :

(a)

\phi_a(\tau) = \int_{-\infty}^{\infty} \Phi_a(f)e^{j2\pi f\tau}\,df = \int_{-W}^{W} e^{j2\pi f\tau}\,df = \frac{\sin 2\pi W\tau}{\pi\tau}

\Rightarrow \phi_d(k) = \phi_a(kT) = \frac{\sin 2\pi WkT}{\pi kT}

(b) If T = \frac{1}{2W}, then :

\phi_d(k) = \begin{cases} 2W = 1/T, & k = 0 \\ 0, & \text{otherwise} \end{cases}

Thus, the sequence X(n) is a white-noise sequence. The fact that this is the minimum value of T can be shown from the following figure of the power spectral density of the sampled process:

[Figure: periodic replicas of \Phi_a(f) centered at 0 and \pm f_s, with band edges at \pm W, f_s \pm W and -f_s \pm W.]

We see that the maximum sampling rate f_s that gives a spectrally flat sequence is obtained when :

W = f_s - W \Rightarrow f_s = 2W \Rightarrow T = \frac{1}{2W}

(c) The triangular-shaped spectrum \Phi(f) = 1 - \frac{|f|}{W}, |f| \le W, may be obtained by convolving the rectangular-shaped spectrum \Phi_1(f) = 1/\sqrt{W}, |f| \le W/2, with itself. Therefore, sampling X(t) at a rate \frac{1}{T} = W samples/sec produces a white sequence with autocorrelation function :

\phi_d(k) = \frac{1}{W}\left(\frac{\sin \pi WkT}{\pi kT}\right)^2 = W\left(\frac{\sin \pi k}{\pi k}\right)^2 = \begin{cases} W, & k = 0 \\ 0, & \text{otherwise} \end{cases}
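Part (b) can be checked directly: sampling phi_a(tau) = sin(2 pi W tau)/(pi tau) at T = 1/(2W) gives 2W at k = 0 and (up to floating-point noise) zero elsewhere.

```python
import math

# Check of Problem 2.22(b): the sampled autocorrelation is white at T = 1/(2W).
W = 3.0
T = 1.0 / (2.0 * W)

def phi_a(tau):
    return 2.0 * W if tau == 0 else math.sin(2.0 * math.pi * W * tau) / (math.pi * tau)

samples = [phi_a(k * T) for k in range(-5, 6)]
print(all(abs(s) < 1e-12 for k, s in zip(range(-5, 6), samples) if k != 0))  # True
print(samples[5])  # 6.0  (= 2W = 1/T, the k = 0 sample)
```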
Problem 2.23 :

Let's denote : y(t) = f_k(t)f_j^*(t). Then :

\int_{-\infty}^{\infty} f_k(t)f_j^*(t)\,dt = \int_{-\infty}^{\infty} y(t)\,dt = Y(f)\big|_{f=0}

where Y(f) is the Fourier transform of y(t). Since : y(t) = f_k(t)f_j^*(t) \Leftrightarrow Y(f) = F_k(f) \star F_j^*(-f).

But :

F_k(f) = \int_{-\infty}^{\infty} f_k(t)e^{-j2\pi ft}\,dt = \frac{1}{2W} e^{-j2\pi fk/2W}, \quad |f| \le W

Then :

Y(f) = F_k(f) \star F_j^*(-f) = \int_{-\infty}^{\infty} F_k(a)F_j^*(a - f)\,da

and at f = 0 :

Y(f)\big|_{f=0} = \int_{-\infty}^{\infty} F_k(a)F_j^*(a)\,da = \left(\frac{1}{2W}\right)^2 \int_{-W}^{W} e^{-j2\pi a(k-j)/2W}\,da = \begin{cases} 1/2W, & k = j \\ 0, & k \ne j \end{cases}
Problem 2.24 :

B_{eq} = \frac{1}{G} \int_0^{\infty} |H(f)|^2\,df

For the filter shown in Fig. P2-12 we have G = 1 and

B_{eq} = \int_0^{\infty} |H(f)|^2\,df = B

For the lowpass filter shown in Fig. P2-16 we have

H(f) = \frac{1}{1 + j2\pi fRC} \Rightarrow |H(f)|^2 = \frac{1}{1 + (2\pi fRC)^2}

So G = 1 and

B_{eq} = \int_0^{\infty} |H(f)|^2\,df = \frac{1}{2} \int_{-\infty}^{\infty} |H(f)|^2\,df = \frac{1}{4RC}

where the last integral is evaluated in the same way as in problem 2.9 .
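The result B_eq = 1/(4RC) for the RC lowpass can be confirmed by numerical integration; the component values below are illustrative.

```python
import math

# Check of Problem 2.24: B_eq = integral_0^inf |H(f)|^2 df = 1/(4RC)
# for the single-pole RC lowpass filter.
R, C = 1.0e3, 1.0e-6   # example values: 1 kOhm, 1 uF
RC = R * C

def H2(f):
    return 1.0 / (1.0 + (2.0 * math.pi * f * RC) ** 2)

# Midpoint rule over [0, 50 kHz]; the 1/f^2 tail beyond is small.
F, M = 5.0e4, 500_000
h = F / M
B_eq = sum(H2((k + 0.5) * h) for k in range(M)) * h

print(abs(B_eq - 1.0 / (4.0 * RC)) / (1.0 / (4.0 * RC)) < 1e-2)  # True
```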