
Digital Communication Exercises

Contents
1 Converting a Digital Signal to an Analog Signal
2 Decision Criteria and Hypothesis Testing
3 Generalized Decision Criteria
4 Vector Communication Channels
5 Signal Space Representation
6 Optimal Receiver for the Waveform Channel
7 The Probability of Error
8 Bit Error Probability
9 Connection with the Concept of Capacity
10 Continuous Phase Modulations
11 Colored AGN Channel
12 ISI Channels and MLSE
13 Equalization
14 Non-Coherent Reception
1 Converting a Digital Signal to an Analog Signal
1. [1, Problem 4.15].
Consider a four-phase PSK signal represented by the equivalent lowpass signal
$$u(t) = \sum_n I_n g(t - nT)$$
where $I_n$ takes on one of the four possible values $\sqrt{1/2}(\pm 1 \pm j)$ with equal probability. The sequence of information symbols $I_n$ is statistically independent (i.i.d).
(a) Determine the power density spectrum of $u(t)$ when
$$g(t) = \begin{cases} A, & 0 \le t \le T, \\ 0, & \text{otherwise.} \end{cases}$$
(b) Repeat (1a) when
$$g(t) = \begin{cases} A\sin(\pi t/T), & 0 \le t \le T, \\ 0, & \text{otherwise.} \end{cases}$$
(c) Compare the spectra obtained in (1a) and (1b) in terms of the 3dB bandwidth and the bandwidth to the first spectral zero. Here you may find the frequency numerically.
Solution:
We have that
$$S_U(f) = \frac{1}{T}|G(f)|^2 \sum_{m=-\infty}^{\infty} C_I(m) e^{-j2\pi f m T}, \quad E(I_n) = 0, \quad E(|I_n|^2) = 1,$$
hence
$$C_I(m) = \begin{cases} 1, & m = 0, \\ 0, & m \neq 0, \end{cases}$$
therefore
$$\sum_{m=-\infty}^{\infty} C_I(m) e^{-j2\pi f m T} = 1 \;\Rightarrow\; S_U(f) = \frac{1}{T}|G(f)|^2.$$
(a) For the rectangular pulse:
$$G(f) = AT\,\frac{\sin \pi f T}{\pi f T}\, e^{-j2\pi f T/2} \;\Rightarrow\; |G(f)|^2 = A^2 T^2\,\frac{\sin^2 \pi f T}{(\pi f T)^2}$$
where the factor $e^{-j2\pi f T/2}$ is due to the $T/2$ shift of the rectangular pulse from the center. Hence:
$$S_U(f) = A^2 T\,\frac{\sin^2 \pi f T}{(\pi f T)^2}$$
(b) For the sinusoidal pulse: $G(f) = \int_0^T A\sin(\pi t/T)\exp(-j2\pi f t)\,dt$. By using the trigonometric identity $\sin x = \frac{\exp(jx) - \exp(-jx)}{2j}$ it is easily shown that:
$$G(f) = \frac{2AT}{\pi}\,\frac{\cos \pi T f}{1 - 4T^2 f^2}\, e^{-j2\pi f T/2} \;\Rightarrow\; |G(f)|^2 = \left(\frac{2AT}{\pi}\right)^2 \frac{\cos^2 \pi T f}{(1 - 4T^2 f^2)^2}$$
Hence:
$$S_U(f) = \left(\frac{2A}{\pi}\right)^2 T\,\frac{\cos^2 \pi T f}{(1 - 4T^2 f^2)^2}$$
(c) The 3dB frequency for (1a) satisfies:
$$\frac{\sin^2 \pi f_{3dB} T}{(\pi f_{3dB} T)^2} = \frac{1}{2} \;\Rightarrow\; f_{3dB} \approx \frac{0.44}{T}$$
(where this solution is obtained graphically), while the 3dB frequency for the sinusoidal pulse in (1b) is $f_{3dB} \approx \frac{0.59}{T}$.
The rectangular pulse spectrum has the first spectral null at $f = 1/T$, whereas the spectrum of the sinusoidal pulse has the first null at $f = 3/2T$. Clearly the spectrum of the rectangular pulse has a narrower main lobe. However, it has higher sidelobes.
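The two 3 dB frequencies quoted in (c) were found graphically; they can also be checked numerically. Below is a small bisection sketch in plain Python, normalized to $T = 1$ (the bracketing intervals are my own choices, picked to enclose each root):

```python
import math

def rect_psd(x):
    # Normalized rectangular-pulse spectrum sin^2(pi x) / (pi x)^2, x = fT
    return (math.sin(math.pi * x) / (math.pi * x)) ** 2

def sin_psd(x):
    # Normalized sinusoidal-pulse spectrum cos^2(pi x) / (1 - 4 x^2)^2,
    # scaled so its value at x = 0 is 1
    return (math.cos(math.pi * x) / (1 - 4 * x ** 2)) ** 2

def bisect_half(f, lo, hi, iters=60):
    # Find x in [lo, hi] with f(x) = 1/2; f is assumed monotone on the bracket
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.5:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

f3db_rect = bisect_half(rect_psd, 0.1, 0.9)   # expected near 0.44/T
f3db_sin = bisect_half(sin_psd, 0.55, 0.9)    # expected near 0.59/T
print(round(f3db_rect, 3), round(f3db_sin, 3))
```

The brackets avoid the removable singularity of the sinusoidal spectrum at $x = 1/2$, where the ratio must be evaluated by a limit rather than directly.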
2. [1, Problem 4.21].
The lowpass equivalent representation of a PAM signal is
$$u(t) = \sum_n I_n g(t - nT)$$
Suppose $g(t)$ is a rectangular pulse and
$$I_n = a_n - a_{n-2}$$
where $a_n$ is a sequence of uncorrelated ($E\{a_n a_m\} = 0$ for $n \neq m$) binary $(-1, 1)$ random variables that occur with equal probability.
(a) Determine the autocorrelation function of the sequence $I_n$.
(b) Determine the power density spectrum of $u(t)$.
(c) Repeat (2b) if the possible values of $a_n$ are $(0, 1)$.
Solution:
(a)
$$C_I(m) = E\{I_{n+m} I_n\} = E\{(a_{n+m} - a_{n+m-2})(a_n - a_{n-2})\} = \begin{cases} 2, & m = 0, \\ -1, & m = \pm 2, \\ 0, & \text{otherwise,} \end{cases}$$
that is, $C_I(m) = 2\delta(m) - \delta(m-2) - \delta(m+2)$.
(b) $S_U(f) = \frac{1}{T}|G(f)|^2 \sum_{m=-\infty}^{\infty} C_I(m) e^{-j2\pi f m T}$, where
$$\sum_{m=-\infty}^{\infty} C_I(m) e^{-j2\pi f m T} = 2 - 2\cos 4\pi f T = 4\sin^2 2\pi f T,$$
and
$$|G(f)|^2 = (AT)^2\left(\frac{\sin \pi f T}{\pi f T}\right)^2.$$
Therefore:
$$S_U(f) = 4A^2 T\left(\frac{\sin \pi f T}{\pi f T}\right)^2 \sin^2 2\pi f T$$
(c) If $a_n$ takes the values $(0, 1)$ with equal probability then $E\{a_n\} = 1/2$ and $E\{a_{n+m} a_n\} = \frac{1}{4}[1 + \delta(m)]$. Then:
$$C_I(m) = \frac{1}{4}\left[2\delta(m) - \delta(m-2) - \delta(m+2)\right], \qquad \sum_{m=-\infty}^{\infty} C_I(m) e^{-j2\pi f m T} = \sin^2 2\pi f T$$
$$S_U(f) = A^2 T\left(\frac{\sin \pi f T}{\pi f T}\right)^2 \sin^2 2\pi f T$$
Thus, we obtain the same result as in (2b) but the magnitude of the various quantities is reduced by a factor of 4.
3. [2, Problem 1.16].
A zero mean stationary process $x(t)$ is applied to a linear filter whose impulse response is defined by a truncated exponential:
$$h(t) = \begin{cases} a e^{-at}, & 0 \le t \le T, \\ 0, & \text{otherwise.} \end{cases}$$
Show that the power spectral density of the filter output $y(t)$ is defined by
$$S_Y(f) = \frac{a^2}{a^2 + 4\pi^2 f^2}\left(1 - 2\exp(-aT)\cos 2\pi f T + \exp(-2aT)\right) S_X(f)$$
where $S_X(f)$ is the power spectral density of the filter input.
Solution:
The frequency response of the filter is:
$$H(f) = \int_{-\infty}^{\infty} h(t)\exp(-j2\pi f t)\,dt = \int_0^T a\exp(-at)\exp(-j2\pi f t)\,dt = a\int_0^T \exp\left(-(a + j2\pi f)t\right)dt$$
$$= \frac{a}{a + j2\pi f}\left[1 - e^{-aT}(\cos 2\pi f T - j\sin 2\pi f T)\right].$$
The squared magnitude response is:
$$|H(f)|^2 = \frac{a^2}{a^2 + 4\pi^2 f^2}\left(1 - 2e^{-aT}\cos 2\pi f T + e^{-2aT}\right)$$
And the required PSD follows from $S_Y(f) = |H(f)|^2 S_X(f)$.
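The closed-form $|H(f)|^2$ can be sanity-checked against a direct numerical evaluation of the defining integral. A sketch in plain Python; the values $a = 1.5$, $T = 2$ are arbitrary choices made for the check:

```python
import cmath
import math

a, T = 1.5, 2.0   # arbitrary filter parameters for the check

def H_numeric(f, steps=20000):
    # Midpoint-rule approximation of H(f) = int_0^T a e^{-at} e^{-j2 pi f t} dt
    dt = T / steps
    return sum(a * cmath.exp(-(a + 2j * math.pi * f) * (k + 0.5) * dt) * dt
               for k in range(steps))

def H2_closed(f):
    # Closed-form squared magnitude derived above
    w2 = 4 * math.pi ** 2 * f ** 2
    return (a ** 2 / (a ** 2 + w2)
            * (1 - 2 * math.exp(-a * T) * math.cos(2 * math.pi * f * T)
               + math.exp(-2 * a * T)))

for f in (0.0, 0.3, 1.0):
    print(f, round(abs(H_numeric(f)) ** 2, 6), round(H2_closed(f), 6))
```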
4. [1, Problem 4.32].
The information sequence $a_n$ is a sequence of i.i.d random variables, each taking values $+1$ and $-1$ with equal probability. This sequence is to be transmitted at baseband by a biphase coding scheme, described by
$$s(t) = \sum_n a_n g(t - nT)$$
where $g(t)$ is defined by
$$g(t) = \begin{cases} 1, & 0 \le t \le T/2, \\ -1, & T/2 < t \le T. \end{cases}$$
(a) Find the power spectral density of $s(t)$.
(b) Assume that it is desirable to have a zero in the power spectrum at $f = 1/T$. To this end we use a precoding scheme by introducing $b_n = a_n + k a_{n-1}$, where $k$ is some constant, and then transmit the $b_n$ sequence using the same $g(t)$. Is it possible to choose $k$ to produce a frequency null at $f = 1/T$? If yes, what are the appropriate value and the resulting power spectrum?
(c) Now assume we want to have zeros at all multiples of $f_0 = 1/4T$. Is it possible to have these zeros with an appropriate choice of $k$ in the previous part? If not, then what kind of precoding do you suggest to result in the desired nulls?
Solution:
(a) Since $\mu_a = 0$, $\sigma_a^2 = 1$, we have $S_S(f) = \frac{1}{T}|G(f)|^2$.
$$G(f) = \frac{T}{2}\,\frac{\sin(\pi f T/2)}{\pi f T/2}\, e^{-j2\pi f T/4} - \frac{T}{2}\,\frac{\sin(\pi f T/2)}{\pi f T/2}\, e^{-j2\pi f \cdot 3T/4}$$
$$= \frac{T}{2}\,\frac{\sin(\pi f T/2)}{\pi f T/2}\, e^{-j\pi f T}\left(2j\sin(\pi f T/2)\right) = jT\,\frac{\sin^2(\pi f T/2)}{\pi f T/2}\, e^{-j\pi f T}$$
$$\Rightarrow\; |G(f)|^2 = T^2\left(\frac{\sin^2(\pi f T/2)}{\pi f T/2}\right)^2 \;\Rightarrow\; S_S(f) = T\left(\frac{\sin^2(\pi f T/2)}{\pi f T/2}\right)^2$$
(b) For a non-independent information sequence the power spectrum of $s(t)$ is given by $S_S(f) = \frac{1}{T}|G(f)|^2 \sum_{m=-\infty}^{\infty} C_B(m) e^{-j2\pi f m T}$.
$$C_B(m) = E\{b_{n+m} b_n\} = E\{a_{n+m} a_n\} + kE\{a_{n+m-1} a_n\} + kE\{a_{n+m} a_{n-1}\} + k^2 E\{a_{n+m-1} a_{n-1}\}$$
$$= \begin{cases} 1 + k^2, & m = 0, \\ k, & m = \pm 1, \\ 0, & \text{otherwise.} \end{cases}$$
Hence:
$$\sum_{m=-\infty}^{\infty} C_B(m) e^{-j2\pi f m T} = 1 + k^2 + 2k\cos 2\pi f T$$
We want:
$$S_S(1/T) = 0 \;\Rightarrow\; \left.\sum_{m=-\infty}^{\infty} C_B(m) e^{-j2\pi f m T}\right|_{f=1/T} = 0 \;\Rightarrow\; 1 + k^2 + 2k = 0 \;\Rightarrow\; k = -1$$
and the resulting power spectrum is:
$$S_S(f) = 4T\left(\frac{\sin^2(\pi f T/2)}{\pi f T/2}\right)^2 \sin^2 \pi f T$$
(c) The requirement for zeros at $f = l/4T$, $l = \pm 1, \pm 2, \ldots$ means $1 + k^2 + 2k\cos(\pi l/2) = 0$, which cannot be satisfied for all $l$. We can avoid that by using precoding in the form $b_n = a_n + k a_{n-4}$. Then
$$C_B(m) = \begin{cases} 1 + k^2, & m = 0, \\ k, & m = \pm 4, \\ 0, & \text{otherwise,} \end{cases} \qquad \sum_{m=-\infty}^{\infty} C_B(m) e^{-j2\pi f m T} = 1 + k^2 + 2k\cos(2\pi f \cdot 4T)$$
and a value of $k = -1$ will zero this spectrum at all multiples of $1/4T$.
5. [1, Problem 4.29].
Show that 16-QAM on $\{\pm 1, \pm 3\} \times \{\pm 1, \pm 3\}$ can be represented as a superposition of two 4PSK signals where each component is amplified separately before summing, i.e., let
$$s(t) = G[A_n \cos 2\pi f t + B_n \sin 2\pi f t] + [C_n \cos 2\pi f t + D_n \sin 2\pi f t]$$
where $A_n$, $B_n$, $C_n$ and $D_n$ are statistically independent binary sequences with elements from the set $\{+1, -1\}$ and $G$ is the amplifier gain. You need to show $s(t)$ can also be written as
$$s(t) = I_n \cos 2\pi f t + Q_n \sin 2\pi f t$$
and determine $I_n$ and $Q_n$ in terms of $A_n$, $B_n$, $C_n$ and $D_n$.
Solution:
The 16-QAM signal is represented as $s(t) = I_n \cos 2\pi f t + Q_n \sin 2\pi f t$ where $I_n \in \{\pm 1, \pm 3\}$, $Q_n \in \{\pm 1, \pm 3\}$. A superposition of two 4-QAM (4-PSK) signals is:
$$s(t) = G[A_n \cos 2\pi f t + B_n \sin 2\pi f t] + [C_n \cos 2\pi f t + D_n \sin 2\pi f t]$$
where $A_n, B_n, C_n, D_n = \pm 1$. Clearly $I_n = GA_n + C_n$, $Q_n = GB_n + D_n$. From these equations it is easy to see that $G = 2$ gives the required equivalence.
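The claimed equivalence is easy to verify exhaustively: with $G = 2$, the sixteen sign combinations of $(A_n, B_n, C_n, D_n)$ generate exactly the 16-QAM constellation. A short enumeration sketch:

```python
from itertools import product

G = 2  # amplifier gain found in the solution above
points = {(G * A + C, G * B + D)
          for A, B, C, D in product((-1, 1), repeat=4)}
levels = sorted({i for i, _ in points})
print(len(points), levels)  # 16 distinct points, levels {-3, -1, 1, 3}
```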
2 Decision Criteria and Hypothesis Testing
Remark 1. Hypothesis testing is another common name for a decision problem: you have to decide between two or more hypotheses, say $H_0, H_1, H_2, \ldots$, where $H_i$ can be interpreted as "the unknown parameter has value $i$". Decoding a constellation with $K$ symbols can be interpreted as selecting the correct hypothesis from $H_0, H_1, \ldots, H_{K-1}$, where $H_i$ is the hypothesis that $S_i$ was transmitted.
1. Consider an equal probability binary source $p(0) = p(1) = 1/2$, and a continuous output channel:
$$f_{R|M}(r|1) = a e^{-ar}, \quad r \ge 0$$
$$f_{R|M}(r|0) = b e^{-br}, \quad r \ge 0, \quad b > a > 0$$
(a) Find a constant $K$ such that the optimal decision rule is $r \underset{0}{\overset{1}{\gtrless}} K$.
(b) Find the respective error probability.
Solution:
(a) Optimal decision rule:
$$p(0) f_{R|M}(r|0) \underset{1}{\overset{0}{\gtrless}} p(1) f_{R|M}(r|1)$$
Using the defined channel distributions:
$$b e^{-br} \underset{1}{\overset{0}{\gtrless}} a e^{-ar} \;\Rightarrow\; 1 \underset{1}{\overset{0}{\gtrless}} \frac{a}{b} e^{(b-a)r} \;\Rightarrow\; 0 \underset{1}{\overset{0}{\gtrless}} \ln\left(\frac{a}{b}\right) + (b - a)r \;\Rightarrow\; r \underset{0}{\overset{1}{\gtrless}} \frac{\ln(a/b)}{a - b} = K$$
(b)
$$p(e) = p(0)\Pr\{r > K|0\} + p(1)\Pr\{r < K|1\} = \frac{1}{2}\left[\int_K^{\infty} b e^{-bt}\,dt + \int_0^K a e^{-at}\,dt\right] = \frac{1}{2}\left[e^{-bK} + 1 - e^{-aK}\right]$$
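The threshold and error probability can be checked numerically for concrete rates. A sketch in plain Python ($a = 1$, $b = 2$ are example values; the upper tail integral is truncated, which is an approximation):

```python
import math

a, b = 1.0, 2.0                  # example rates with b > a > 0
K = math.log(a / b) / (a - b)    # optimal threshold from part (a); here ln 2

# Closed-form error probability from part (b)
pe_closed = 0.5 * (math.exp(-b * K) + 1 - math.exp(-a * K))

# Numeric check: integrate the two conditional error regions (midpoint rule)
steps, tail = 200000, 40.0
dt = tail / steps
tail_b = sum(b * math.exp(-b * (K + (k + 0.5) * dt)) * dt for k in range(steps))
dt0 = K / steps
head_a = sum(a * math.exp(-a * (k + 0.5) * dt0) * dt0 for k in range(steps))
pe_num = 0.5 * (tail_b + head_a)
print(round(K, 4), round(pe_closed, 6), round(pe_num, 6))
```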
2. Consider a binary source: $\Pr\{x = -2\} = 2/3$, $\Pr\{x = 1\} = 1/3$, and the following channel
$$y = A \cdot x, \quad A \sim N(1, 1)$$
where $x$ and $A$ are independent.
(a) Find the optimal decision rule.
(b) Calculate the respective error probability.
Solution:
(a) First we will find the conditional distribution of $y$ given $x$:
$$(Y|-2) \sim N(-2, 4), \qquad (Y|1) \sim N(1, 1)$$
Hence the decision rule will be:
$$\frac{2}{3}\,\frac{1}{\sqrt{8\pi}}\exp\left(-\frac{(y + 2)^2}{8}\right) \underset{1}{\overset{-2}{\gtrless}} \frac{1}{3}\,\frac{1}{\sqrt{2\pi}}\exp\left(-\frac{(y - 1)^2}{2}\right)$$
$$-\frac{(y + 2)^2}{8} \underset{1}{\overset{-2}{\gtrless}} -\frac{(y - 1)^2}{2} \;\Rightarrow\; 3y(y - 4) \underset{1}{\overset{-2}{\gtrless}} 0$$
$$\hat{x}(y) = \begin{cases} -2, & y < 0 \text{ or } 4 < y, \\ 1, & \text{otherwise.} \end{cases}$$
(b)
$$p(e) = \frac{2}{3}\int_0^4 f(y|-2)\,dy + \frac{1}{3}\left[\int_{-\infty}^0 f(y|1)\,dy + \int_4^{\infty} f(y|1)\,dy\right]$$
$$= \frac{2}{3}\left[Q\left(\frac{0 + 2}{2}\right) - Q\left(\frac{4 + 2}{2}\right)\right] + \frac{1}{3}\left[1 - Q\left(\frac{0 - 1}{1}\right) + Q\left(\frac{4 - 1}{1}\right)\right]$$
$$= Q(1) - \frac{1}{3}Q(3) \approx 0.15821$$
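The final number follows directly from the Q-function. A two-line check using the standard identity $Q(x) = \frac{1}{2}\operatorname{erfc}(x/\sqrt{2})$, together with the intermediate assembly from the line above:

```python
import math

def Q(x):
    # Gaussian tail probability via the complementary error function
    return 0.5 * math.erfc(x / math.sqrt(2))

pe = Q(1) - Q(3) / 3
# Same value assembled term by term as in the derivation
pe2 = (2 / 3) * (Q(1) - Q(3)) + (1 / 3) * (Q(1) + Q(3))
print(round(pe, 5))
```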
3. Decision rules for binary channels.
(a) The Binary Symmetric Channel (BSC) has binary (0 or 1) inputs and outputs. It outputs each bit correctly with probability $1 - p$ and incorrectly with probability $p$. Assume 0 and 1 are equally likely inputs. State the MAP and ML decision rules for the BSC when $p < \frac{1}{2}$. How are the decision rules different when $p > \frac{1}{2}$?
(b) The Binary Erasure Channel (BEC) has binary inputs as with the BSC. However, there are three possible outputs. Given an input of 0, the output is 0 with probability $1 - p_1$ and 2 with probability $p_1$. Given an input of 1, the output is 1 with probability $1 - p_2$ and 2 with probability $p_2$. Assume 0 and 1 are equally likely inputs. State the MAP and ML decision rules for the BEC when $p_1 < p_2 < \frac{1}{2}$. How are the decision rules different when $p_2 < p_1 < \frac{1}{2}$?
Solution:
(a) For equally likely inputs the MAP and ML decision rules are identical. In each case we wish to maximize $p_{y|x}(y|x_i)$ over the possible choices for $x_i$. The decision rules are shown below:
$$p < \frac{1}{2} \;\Rightarrow\; \hat{X} = Y, \qquad p > \frac{1}{2} \;\Rightarrow\; \hat{X} = 1 - Y$$
(b) Again, since we have equiprobable signals, the MAP and ML decision rules are the same. The decision rules are as follows:
$$p_1 < p_2 < \frac{1}{2} \;\Rightarrow\; \hat{X} = \begin{cases} Y, & Y = 0, 1, \\ 1, & Y = 2, \end{cases} \qquad p_2 < p_1 < \frac{1}{2} \;\Rightarrow\; \hat{X} = \begin{cases} Y, & Y = 0, 1, \\ 0, & Y = 2. \end{cases}$$
4. In a binary hypothesis testing problem, the observation $Z$ is Rayleigh distributed under both hypotheses with different parameters, that is,
$$f(z|H_i) = \frac{z}{\sigma_i^2}\exp\left(-\frac{z^2}{2\sigma_i^2}\right), \quad z \ge 0, \quad i = 0, 1$$
You need to decide if the observed variable $Z$ was generated with $\sigma_0^2$ or with $\sigma_1^2$, namely choose between $H_0$ and $H_1$.
(a) Obtain the decision rule for the minimum probability of error criterion. Assume that $H_0$ and $H_1$ are equiprobable.
(b) Extend your results to $N$ independent observations, and derive the expressions for the resulting probability of error.
Note: If $R_i \sim \text{Rayleigh}(\sigma)$ then $\sum_{i=1}^N R_i^2$ has a gamma distribution with parameters $N$ and $2\sigma^2$: $Y = \sum_{i=1}^N R_i^2 \sim \Gamma(N, 2\sigma^2)$.
Solution:
(a)
$$\log f(z|H_i) = \log z - \log \sigma_i^2 - \frac{z^2}{2\sigma_i^2}$$
$$\log f(z|H_1) - \log f(z|H_0) = \log\frac{\sigma_0^2}{\sigma_1^2} + z^2\left(\frac{1}{2\sigma_0^2} - \frac{1}{2\sigma_1^2}\right) \underset{H_0}{\overset{H_1}{\gtrless}} 0$$
$$z^2 \underset{H_0}{\overset{H_1}{\gtrless}} 2\log\left(\frac{\sigma_1^2}{\sigma_0^2}\right)\left(\frac{\sigma_1^2\sigma_0^2}{\sigma_1^2 - \sigma_0^2}\right) \triangleq \eta$$
Since $z \ge 0$, the following decision rule is obtained (assuming $\sigma_1^2 > \sigma_0^2$):
$$\hat{H} = \begin{cases} H_1, & z \ge \sqrt{\eta}, \\ H_0, & z < \sqrt{\eta}. \end{cases}$$
(b) Let $\frac{f(z|H_1)}{f(z|H_0)}$ be denoted the Likelihood Ratio Test (LRT)², so $\log \text{LRT} = \log f(z|H_1) - \log f(z|H_0)$. For $N$ i.i.d observations:
$$\log f(\mathbf{z}|H_i) = \sum_{n=0}^{N-1} \log f(z_n|H_i) = -N\log\sigma_i^2 + \sum_{n=0}^{N-1}\log z_n - \sum_{n=0}^{N-1}\frac{z_n^2}{2\sigma_i^2}$$
The log LRT will be:
$$N\log\left(\frac{\sigma_0^2}{\sigma_1^2}\right) + \left(\frac{1}{2\sigma_0^2} - \frac{1}{2\sigma_1^2}\right)\sum_{n=0}^{N-1} z_n^2 \underset{H_0}{\overset{H_1}{\gtrless}} 0 \;\Rightarrow\; \sum_{n=0}^{N-1} z_n^2 \underset{H_0}{\overset{H_1}{\gtrless}} 2N\log\left(\frac{\sigma_1^2}{\sigma_0^2}\right)\left(\frac{\sigma_1^2\sigma_0^2}{\sigma_1^2 - \sigma_0^2}\right) \triangleq \eta$$
Define $Y = \sum_{n=0}^{N-1} z_n^2$; then $Y|H_i \sim \Gamma(N, 2\sigma_i^2)$.
$$P_{FA} = \Pr\{\text{decoding } H_1 \text{ if } H_0 \text{ was transmitted}\} = \Pr\{Y > \eta|H_0\} = 1 - \frac{\gamma(N, \eta/2\sigma_0^2)}{\Gamma(N)}$$
$$P_M = \Pr\{\text{decoding } H_0 \text{ if } H_1 \text{ was transmitted}\} = 1 - \Pr\{Y > \eta|H_1\} = \frac{\gamma(N, \eta/2\sigma_1^2)}{\Gamma(N)}$$
where $\gamma(s, x)$ is the lower incomplete gamma function³.
² $\text{LRT} \triangleq \frac{f(z|H_1)}{f(z|H_0)}$.
³ $\gamma(s, x) = \int_0^x t^{s-1} e^{-t}\,dt$.
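For integer $N$ the regularized incomplete-gamma expressions have a simple closed form, which gives a quick way to tabulate $P_{FA}$ and $P_M$. A sketch assuming $\sigma_1 > \sigma_0$ as above ($\sigma_0 = 1$, $\sigma_1 = 2$ are example values):

```python
import math

def reg_lower_gamma(n, x):
    # Regularized lower incomplete gamma P(n, x) for integer n:
    # P(n, x) = 1 - e^{-x} * sum_{k=0}^{n-1} x^k / k!
    return 1.0 - math.exp(-x) * sum(x ** k / math.factorial(k) for k in range(n))

def error_probs(sigma0, sigma1, N):
    # Threshold eta on sum z_n^2 and the resulting P_FA, P_M (sigma1 > sigma0)
    v0, v1 = sigma0 ** 2, sigma1 ** 2
    eta = 2 * N * math.log(v1 / v0) * v1 * v0 / (v1 - v0)
    p_fa = 1.0 - reg_lower_gamma(N, eta / (2 * v0))
    p_m = reg_lower_gamma(N, eta / (2 * v1))
    return p_fa, p_m

for N in (1, 4, 16):
    print(N, [round(p, 4) for p in error_probs(1.0, 2.0, N)])
```

As expected, both error probabilities shrink as the number of observations $N$ grows.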
3 Generalized Decision Criteria
1. Bayes decision criteria.
Consider an equiprobable binary symmetric source $m \in \{0, 1\}$. For the observation, $R$, the conditional probability density function is
$$f_{R|M}(r|M = 0) = \begin{cases} \frac{1}{2}, & |r| < 1, \\ 0, & \text{otherwise,} \end{cases} \qquad f_{R|M}(r|M = 1) = \frac{1}{2}e^{-|r|}$$
(a) Obtain the decision rule for the minimum probability of error criterion and the correspondingly minimal probability of error.
(b) For the cost matrix $C = \begin{pmatrix} 0 & 2 \\ 1 & 0 \end{pmatrix}$, obtain the optimal generalized decision rule and the error probability.
Solution:
(a)
$$|r| > 1: \quad f_{R|M}(r|M = 0) = 0 \;\Rightarrow\; \hat{m} = 1.$$
$$|r| < 1: \quad \frac{1}{2}e^{-|r|} \underset{0}{\overset{1}{\gtrless}} \frac{1}{2} \;\Rightarrow\; -|r| \underset{0}{\overset{1}{\gtrless}} 0 \;\Rightarrow\; \hat{m} = 0$$
The probability of error:
$$p(e) = p(0)\cdot 0 + p(1)\int_{-1}^{1}\frac{1}{2}e^{-|r|}\,dr = \frac{1}{2}\left[1 - e^{-1}\right]$$
(b) The decision rule:
$$\frac{f_{R|M}(r|M = 1)}{f_{R|M}(r|M = 0)} \underset{0}{\overset{1}{\gtrless}} \frac{p(0)}{p(1)}\cdot\frac{C_{10} - C_{00}}{C_{01} - C_{11}} = \frac{1}{2}$$
$$|r| > 1: \quad f_{R|M}(r|M = 0) = 0 \;\Rightarrow\; \hat{m} = 1$$
$$|r| < 1: \quad \frac{1}{2}e^{-|r|} \underset{0}{\overset{1}{\gtrless}} \frac{1}{2}\cdot\frac{1}{2} \;\Rightarrow\; |r| \underset{1}{\overset{0}{\gtrless}} \ln 2$$
$$\hat{m} = \begin{cases} 1, & |r| < \ln 2 \text{ or } |r| > 1, \\ 0, & \ln 2 < |r| < 1. \end{cases}$$
Probability of error:
$$P_{FA} = \Pr\{\hat{m} = 1|m = 0\} = \int_{-\ln 2}^{\ln 2}\frac{1}{2}\,dr = \ln 2$$
$$P_M = \Pr\{\hat{m} = 0|m = 1\} = \int_{\ln 2 < |r| < 1}\frac{1}{2}e^{-|r|}\,dr = \frac{1}{2} - e^{-1}$$
$$p(e) = p(0)P_{FA} + p(1)P_M = \frac{1}{2}\left[\ln 2 + \frac{1}{2} - e^{-1}\right]$$
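Both error terms can be verified numerically; the sketch below also recomputes $P_M$ by direct integration of $\frac{1}{2}e^{-|r|}$ over $\ln 2 < |r| < 1$:

```python
import math

p_fa = math.log(2)               # Pr{decide 1 | m = 0}
p_m = 0.5 - math.exp(-1)         # Pr{decide 0 | m = 1}, closed form
pe = 0.5 * (p_fa + p_m)

# Recompute P_M by midpoint integration over ln 2 < |r| < 1 (both sides)
steps = 100000
lo, hi = math.log(2), 1.0
dr = (hi - lo) / steps
p_m_num = 2 * sum(0.5 * math.exp(-(lo + (k + 0.5) * dr)) * dr
                  for k in range(steps))
print(round(pe, 4), round(p_m_num, 6))
```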
2. Non-Gaussian additive noise.
Consider the source $m \in \{-1, 1\}$, $\Pr\{m = 1\} = 0.9$, $\Pr\{m = -1\} = 0.1$. The observation, $y$, obeys
$$y = m + N, \quad N \sim U[-2, 2]$$
(a) Obtain the decision rule for the minimum probability of error criterion and the minimal probability of error.
(b) For the cost matrix $C = \begin{pmatrix} 0 & 1 \\ 100 & 0 \end{pmatrix}$, obtain the optimal Bayes decision rule and the error probability.
Solution:
(a)
$$f(y|1) = \begin{cases} \frac{1}{4}, & -1 < y < 3, \\ 0, & \text{otherwise,} \end{cases} \qquad f(y|-1) = \begin{cases} \frac{1}{4}, & -3 < y < 1, \\ 0, & \text{otherwise.} \end{cases}$$
On the overlap region $-1 < y < 1$ the MAP rule picks the symbol with the larger prior, $m = 1$:
$$\hat{m} = \begin{cases} -1, & -3 < y < -1, \\ 1, & -1 < y < 3. \end{cases}$$
The probability of error:
$$p(e) = p(1)\cdot 0 + p(-1)\int_{-1}^{1}\frac{1}{4}\,dy = 0.05$$
(b) The decision rule:
$$\frac{f(y|1)}{f(y|-1)} \underset{-1}{\overset{1}{\gtrless}} \frac{p(-1)}{p(1)}\cdot\frac{100}{1} \;\Rightarrow\; p(1)f(y|1) \underset{-1}{\overset{1}{\gtrless}} 100\,p(-1)f(y|-1)$$
Since $0.9\cdot\frac{1}{4} < 100\cdot 0.1\cdot\frac{1}{4}$ on the overlap region, the rule becomes:
$$\hat{m} = \begin{cases} -1, & -3 < y < 1, \\ 1, & 1 < y < 3. \end{cases}$$
The probability of error:
$$p(e) = p(-1)\cdot 0 + p(1)\int_{-1}^{1}\frac{1}{4}\,dy = 0.45$$
4 Vector Communication Channels
Remark 2. Vectors are denoted with boldface letters, e.g. $\mathbf{x}$, $\mathbf{y}$.
1. General Gaussian vector channel.
Consider the Gaussian vector channel with the sources $p(m_0) = q$, $p(m_1) = 1 - q$, $\mathbf{s}_0 = [1, 1]^T$, $\mathbf{s}_1 = [-1, -1]^T$. For sending $m_0$ the transmitter sends $\mathbf{s}_0$ and for sending $m_1$ the transmitter sends $\mathbf{s}_1$. The observation, $\mathbf{r}$, obeys
$$\mathbf{r} = \mathbf{s}_i + \mathbf{n}, \quad \mathbf{n} = [n_1, n_2]^T, \quad \mathbf{n} \sim N(\mathbf{0}, \Sigma_n), \quad \Sigma_n = \begin{pmatrix} \sigma_1^2 & 0 \\ 0 & \sigma_2^2 \end{pmatrix}$$
The noise vector, $\mathbf{n}$, and the messages $m_i$ are independent.
(a) Obtain the optimal decision rule using the MAP criterion, and examine it for the following cases:
i. $q = \frac{1}{2}$, $\sigma_1 = \sigma_2$.
ii. $q = \frac{1}{2}$, $\sigma_1^2 = 2\sigma_2^2$.
iii. $q = \frac{1}{3}$, $\sigma_1^2 = 2\sigma_2^2$.
(b) Derive the error probability for the obtained decision rule.
Solution:
(a) The conditional probability distribution function is $\mathbf{R}|\mathbf{s}_i \sim N(\mathbf{s}_i, \Sigma_n)$:
$$f(\mathbf{r}|\mathbf{s}_i) = \frac{1}{\sqrt{(2\pi)^2\det\Sigma_n}}\exp\left(-\frac{1}{2}(\mathbf{r} - \mathbf{s}_i)^T\Sigma_n^{-1}(\mathbf{r} - \mathbf{s}_i)\right)$$
The MAP optimal decision rule:
$$p(m_0)f(\mathbf{r}|\mathbf{s}_0) \underset{m_1}{\overset{m_0}{\gtrless}} p(m_1)f(\mathbf{r}|\mathbf{s}_1)$$
$$q\exp\left(-\frac{1}{2}(\mathbf{r} - \mathbf{s}_0)^T\Sigma_n^{-1}(\mathbf{r} - \mathbf{s}_0)\right) \underset{m_1}{\overset{m_0}{\gtrless}} (1 - q)\exp\left(-\frac{1}{2}(\mathbf{r} - \mathbf{s}_1)^T\Sigma_n^{-1}(\mathbf{r} - \mathbf{s}_1)\right)$$
$$(\mathbf{r} - \mathbf{s}_1)^T\Sigma_n^{-1}(\mathbf{r} - \mathbf{s}_1) - (\mathbf{r} - \mathbf{s}_0)^T\Sigma_n^{-1}(\mathbf{r} - \mathbf{s}_0) \underset{m_1}{\overset{m_0}{\gtrless}} 2\ln\frac{1 - q}{q}$$
Assign $\mathbf{r}^T = [x, y]$:
$$\frac{(x + 1)^2}{\sigma_1^2} + \frac{(y + 1)^2}{\sigma_2^2} - \frac{(x - 1)^2}{\sigma_1^2} - \frac{(y - 1)^2}{\sigma_2^2} \underset{m_1}{\overset{m_0}{\gtrless}} 2\ln\frac{1 - q}{q} \;\Rightarrow\; \frac{x}{\sigma_1^2} + \frac{y}{\sigma_2^2} \underset{m_1}{\overset{m_0}{\gtrless}} \frac{1}{2}\ln\frac{1 - q}{q}$$
i. For the case $q = \frac{1}{2}$, $\sigma_1 = \sigma_2$ the decision rule becomes
$$x + y \underset{m_1}{\overset{m_0}{\gtrless}} 0$$
ii. For the case $q = \frac{1}{2}$, $\sigma_1^2 = 2\sigma_2^2$ the decision rule becomes (after multiplying both sides by $\sigma_1^2$)
$$x + 2y \underset{m_1}{\overset{m_0}{\gtrless}} 0$$
iii. For the case $q = \frac{1}{3}$, $\sigma_1^2 = 2\sigma_2^2$, multiplying both sides by $\sigma_1^2$ gives
$$x + 2y \underset{m_1}{\overset{m_0}{\gtrless}} \frac{\sigma_1^2}{2}\ln 2 = \sigma_2^2\ln 2$$
(b) Denote $K \triangleq \frac{1}{2}\ln\frac{1 - q}{q}$, and define $z = \frac{x}{\sigma_1^2} + \frac{y}{\sigma_2^2}$. The conditional distribution of $Z$ is
$$Z|\mathbf{s}_i \sim N\left((-1)^i\,\frac{\sigma_1^2 + \sigma_2^2}{\sigma_1^2\sigma_2^2},\; \frac{\sigma_1^2 + \sigma_2^2}{\sigma_1^2\sigma_2^2}\right), \quad i = 0, 1$$
The decision rule in terms of $z$, $K$:
$$z \underset{m_1}{\overset{m_0}{\gtrless}} K$$
The error probability:
$$p(e) = p(m_0)\Pr\{z < K|m_0\} + p(m_1)\Pr\{z > K|m_1\}$$
Assigning the conditional distribution:
$$\Pr\{z < K|m_0\} = 1 - Q\left(\frac{K - \frac{\sigma_1^2 + \sigma_2^2}{\sigma_1^2\sigma_2^2}}{\sqrt{\frac{\sigma_1^2 + \sigma_2^2}{\sigma_1^2\sigma_2^2}}}\right), \qquad \Pr\{z > K|m_1\} = Q\left(\frac{K + \frac{\sigma_1^2 + \sigma_2^2}{\sigma_1^2\sigma_2^2}}{\sqrt{\frac{\sigma_1^2 + \sigma_2^2}{\sigma_1^2\sigma_2^2}}}\right)$$
For the case $q = \frac{1}{2}$, $\sigma_1 = \sigma_2$ the error probability equals $Q\left(\sqrt{\frac{2}{\sigma_1^2}}\right)$.
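For case (i) the predicted error probability $Q(\sqrt{2/\sigma_1^2})$ can be compared against a Monte Carlo simulation of the channel. A sketch; the seed, sample size and $\sigma = 1$ are arbitrary choices:

```python
import math
import random

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

random.seed(1)
sigma = 1.0                  # sigma1 = sigma2 = sigma, q = 1/2 (case i)
trials, errors = 200000, 0
for _ in range(trials):
    s = random.choice((1, -1))           # +1 -> s0 = [1,1], -1 -> s1 = [-1,-1]
    x = s + random.gauss(0, sigma)
    y = s + random.gauss(0, sigma)
    decide = 1 if x + y > 0 else -1      # decision rule x + y >< 0
    errors += decide != s
emp = errors / trials
theory = Q(math.sqrt(2) / sigma)         # = Q(sqrt(2 / sigma1^2))
print(round(emp, 4), round(theory, 4))
```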
2. Non-Gaussian additive vector channel.
Consider a binary hypothesis testing problem in which the sources $\mathbf{s}_0 = [1, 2, 3]$, $\mathbf{s}_1 = [1, -1, -3]$ are equiprobable. The observation, $\mathbf{r}$, obeys
$$\mathbf{r} = \mathbf{s}_i + \mathbf{n}, \quad \mathbf{n} = [n_0, n_1, n_2]$$
where the elements of $\mathbf{n}$ are i.i.d with the following probability density function:
$$f_{N_k}(n_k) = \frac{1}{2}e^{-|n_k|}$$
Obtain the optimal decision rule using the MAP criterion.
Solution:
The optimal decision rule using the MAP criterion:
$$p(\mathbf{s}_0)f(\mathbf{r}|\mathbf{s}_0) \underset{1}{\overset{0}{\gtrless}} p(\mathbf{s}_1)f(\mathbf{r}|\mathbf{s}_1) \;\Rightarrow\; f(\mathbf{r}|\mathbf{s}_0) \underset{1}{\overset{0}{\gtrless}} f(\mathbf{r}|\mathbf{s}_1)$$
The conditional probability distribution function:
$$f(\mathbf{r}|\mathbf{s}_i) = f_N(\mathbf{r} - \mathbf{s}_i) = \prod_{k=0}^{2} f_{N_k}(n_k = r_k - s_{i,k}) = \frac{1}{8}e^{-\left[|r_0 - s_{i,0}| + |r_1 - s_{i,1}| + |r_2 - s_{i,2}|\right]}$$
An assignment of the $\mathbf{s}_i$ elements yields:
$$|r_0 - 1| + |r_1 - 2| + |r_2 - 3| \underset{0}{\overset{1}{\gtrless}} |r_0 - 1| + |r_1 + 1| + |r_2 + 3| \;\Rightarrow\; |r_1 - 2| + |r_2 - 3| \underset{0}{\overset{1}{\gtrless}} |r_1 + 1| + |r_2 + 3|$$
Note that the above decision rule compares the $\ell_1$ (sum of absolute differences) distances between the observation and the two hypotheses, unlike the Gaussian vector channel, in which the Euclidean distance is compared.
3. Gaussian two-channel.
Consider the following two-channel problem, in which the observations under the two hypotheses are
$$H_0: \begin{pmatrix} Z_1 \\ Z_2 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & \frac{1}{2} \end{pmatrix}\begin{pmatrix} V_1 \\ V_2 \end{pmatrix} + \begin{pmatrix} -1 \\ -\frac{1}{2} \end{pmatrix}, \qquad H_1: \begin{pmatrix} Z_1 \\ Z_2 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & \frac{1}{2} \end{pmatrix}\begin{pmatrix} V_1 \\ V_2 \end{pmatrix} + \begin{pmatrix} 1 \\ \frac{1}{2} \end{pmatrix}$$
where $V_1$ and $V_2$ are independent, zero-mean Gaussian variables with variance $\sigma^2$.
(a) Find the minimum probability of error receiver if both hypotheses are equally likely. Simplify the receiver structure.
(b) Find the minimum probability of error.
Solution:
Let $\mathbf{Z} = [Z_1, Z_2]^T$. The conditional distribution of $\mathbf{Z}$ is
$$\mathbf{Z}|H_0 \sim N(\mu_0, \Lambda), \quad \mathbf{Z}|H_1 \sim N(\mu_1, \Lambda), \quad \mu_0 = \begin{pmatrix} -1 \\ -\frac{1}{2} \end{pmatrix}, \quad \mu_1 = \begin{pmatrix} 1 \\ \frac{1}{2} \end{pmatrix}, \quad \Lambda = \sigma^2\begin{pmatrix} 1 & 0 \\ 0 & \frac{1}{4} \end{pmatrix}$$
(a) The decision rule:
$$\frac{f(\mathbf{z}|H_1)}{f(\mathbf{z}|H_0)} \underset{H_0}{\overset{H_1}{\gtrless}} \frac{p(H_0)}{p(H_1)} = 1 \;\Rightarrow\; \log f(\mathbf{z}|H_1) - \log f(\mathbf{z}|H_0) \underset{H_0}{\overset{H_1}{\gtrless}} 0$$
$$\frac{2}{\sigma^2}(z_1 + 2z_2) \underset{H_0}{\overset{H_1}{\gtrless}} 0 \;\Rightarrow\; z_1 + 2z_2 \underset{H_0}{\overset{H_1}{\gtrless}} 0$$
(b) Define $X = Z_1 + 2Z_2$. Since $V_1$, $V_2$ are independent, $Z_1$, $Z_2$ are independent as well. A linear combination of $Z_1$, $Z_2$ yields a Gaussian R.V with the following parameters:
$$E\{X|H_0\} = -2, \quad E\{X|H_1\} = 2, \quad \text{Var}\{X|H_0\} = \text{Var}\{X|H_1\} = 2\sigma^2$$
And the probability of error events:
$$P_{FA} = \Pr\{\hat{H} = H_1|H = H_0\} = \int_0^{\infty} f(x|H_0)\,dx = Q\left(\frac{\sqrt{2}}{\sigma}\right)$$
$$P_M = \Pr\{\hat{H} = H_0|H = H_1\} = 1 - \int_0^{\infty} f(x|H_1)\,dx = Q\left(\frac{\sqrt{2}}{\sigma}\right)$$
Hence the minimum probability of error is $p(e) = \frac{1}{2}(P_{FA} + P_M) = Q\left(\frac{\sqrt{2}}{\sigma}\right)$.
5 Signal Space Representation
1. [1, Problem 4.9].
Consider a set of $M$ orthogonal signal waveforms $s_m(t)$, $1 \le m \le M$, $0 \le t \le T$,⁴ all of which have the same energy $\mathcal{E}$.⁵ Define a new set of waveforms as
$$s'_m(t) = s_m(t) - \frac{1}{M}\sum_{k=1}^{M} s_k(t), \quad 1 \le m \le M, \quad 0 \le t \le T$$
Show that the $M$ signal waveforms $s'_m(t)$ have equal energy, given by
$$\mathcal{E}' = (M - 1)\frac{\mathcal{E}}{M}$$
and are equally correlated, with correlation coefficient
$$\rho_{mn} = \frac{1}{\mathcal{E}'}\int_0^T s'_m(t)s'_n(t)\,dt = -\frac{1}{M - 1}$$
Solution:
The energy of the signal waveform $s'_m(t)$ is:
$$\mathcal{E}' = \int_0^T |s'_m(t)|^2\,dt = \int_0^T \left(s_m(t) - \frac{1}{M}\sum_{k=1}^{M} s_k(t)\right)^2 dt$$
$$= \int_0^T s_m^2(t)\,dt + \frac{1}{M^2}\sum_{k=1}^{M}\sum_{l=1}^{M}\int_0^T s_k(t)s_l(t)\,dt - \frac{2}{M}\sum_{k=1}^{M}\int_0^T s_m(t)s_k(t)\,dt$$
$$= \mathcal{E} + \frac{1}{M^2}\sum_{k=1}^{M}\sum_{l=1}^{M}\mathcal{E}\delta_{kl} - \frac{2}{M}\mathcal{E} = \mathcal{E} + \frac{\mathcal{E}}{M} - \frac{2\mathcal{E}}{M} = \frac{M - 1}{M}\mathcal{E}$$
The correlation coefficient is given by:
$$\rho_{mn} = \frac{1}{\mathcal{E}'}\int_0^T s'_m(t)s'_n(t)\,dt = \frac{1}{\mathcal{E}'}\int_0^T \left(s_m(t) - \frac{1}{M}\sum_{k=1}^{M} s_k(t)\right)\left(s_n(t) - \frac{1}{M}\sum_{l=1}^{M} s_l(t)\right)dt$$
$$= \frac{1}{\mathcal{E}'}\left[\int_0^T s_m(t)s_n(t)\,dt + \frac{1}{M^2}\sum_{k=1}^{M}\sum_{l=1}^{M}\int_0^T s_k(t)s_l(t)\,dt - \frac{2}{M}\sum_{k=1}^{M}\int_0^T s_m(t)s_k(t)\,dt\right]$$
$$= \frac{\frac{\mathcal{E}}{M} - \frac{2\mathcal{E}}{M}}{\frac{M - 1}{M}\mathcal{E}} = \frac{-\frac{\mathcal{E}}{M}}{\frac{M - 1}{M}\mathcal{E}} = -\frac{1}{M - 1}$$
⁴ $\langle s_j(t), s_k(t)\rangle = 0$, $j \neq k$, $j, k \in \{1, 2, \ldots, M\}$.
⁵ The energy of the signal waveform $s_m(t)$ is $\mathcal{E} = \int_0^T |s_m(t)|^2\,dt$.
2. [1, Problem 4.10].
Consider the following three waveforms
$$f_1(t) = \begin{cases} \frac{1}{2}, & 0 \le t < 2, \\ -\frac{1}{2}, & 2 \le t < 4, \\ 0, & \text{otherwise,} \end{cases} \qquad f_2(t) = \begin{cases} \frac{1}{2}, & 0 \le t < 4, \\ 0, & \text{otherwise,} \end{cases} \qquad f_3(t) = \begin{cases} \frac{1}{2}, & 0 \le t < 1,\; 2 \le t < 3, \\ -\frac{1}{2}, & 1 \le t < 2,\; 3 \le t < 4, \\ 0, & \text{otherwise.} \end{cases}$$
(a) Show that these waveforms are orthonormal.
(b) Check if you can express $x(t)$ as a weighted linear combination of $f_n(t)$, $n = 1, 2, 3$, if
$$x(t) = \begin{cases} -1, & 0 \le t < 1, \\ 1, & 1 \le t < 3, \\ -1, & 3 \le t < 4, \\ 0, & \text{otherwise,} \end{cases}$$
and if you can, determine the weighting coefficients; otherwise explain.
Solution:
(a) To show that the waveforms $f_n(t)$, $n = 1, 2, 3$ are orthogonal we have to prove that:
$$\int_{-\infty}^{\infty} f_n(t)f_m(t)\,dt = 0, \quad m \neq n$$
For $n = 1$, $m = 2$:
$$\int_0^4 f_1(t)f_2(t)\,dt = \frac{1}{4}\int_0^2 dt - \frac{1}{4}\int_2^4 dt = 0$$
For $n = 1$, $m = 3$:
$$\int_0^4 f_1(t)f_3(t)\,dt = \frac{1}{4}\int_0^1 dt - \frac{1}{4}\int_1^2 dt - \frac{1}{4}\int_2^3 dt + \frac{1}{4}\int_3^4 dt = 0$$
For $n = 2$, $m = 3$:
$$\int_0^4 f_2(t)f_3(t)\,dt = \frac{1}{4}\int_0^1 dt - \frac{1}{4}\int_1^2 dt + \frac{1}{4}\int_2^3 dt - \frac{1}{4}\int_3^4 dt = 0$$
Thus, the signals $f_n(t)$ are orthogonal. It is also straightforward to prove that the signals have unit energy:
$$\int_{-\infty}^{\infty} |f_n(t)|^2\,dt = 1, \quad n = 1, 2, 3$$
Hence, they are orthonormal.
(b) We first determine the weighting coefficients
$$x_n = \int_{-\infty}^{\infty} x(t)f_n(t)\,dt, \quad n = 1, 2, 3$$
$$x_1 = \int_0^4 x(t)f_1(t)\,dt = -\frac{1}{2}\int_0^1 dt + \frac{1}{2}\int_1^2 dt - \frac{1}{2}\int_2^3 dt + \frac{1}{2}\int_3^4 dt = 0$$
$$x_2 = \int_0^4 x(t)f_2(t)\,dt = \frac{1}{2}\int_0^4 x(t)\,dt = 0$$
$$x_3 = \int_0^4 x(t)f_3(t)\,dt = -\frac{1}{2}\int_0^1 dt - \frac{1}{2}\int_1^2 dt + \frac{1}{2}\int_2^3 dt + \frac{1}{2}\int_3^4 dt = 0$$
As is observed, $x(t)$ is orthogonal to the signal waveforms $f_n(t)$, $n = 1, 2, 3$, and thus (being nonzero itself) it cannot be represented as a linear combination of these functions.
3. [1, Problem 4.11].
Consider the following four waveforms
$$s_1(t) = \begin{cases} 2, & 0 \le t < 1, \\ -1, & 1 \le t < 4, \\ 0, & \text{otherwise,} \end{cases} \qquad s_2(t) = \begin{cases} -2, & 0 \le t < 1, \\ 1, & 1 \le t < 3, \\ 0, & \text{otherwise,} \end{cases}$$
$$s_3(t) = \begin{cases} 1, & 0 \le t < 1,\; 2 \le t < 3, \\ -1, & 1 \le t < 2,\; 3 \le t < 4, \\ 0, & \text{otherwise,} \end{cases} \qquad s_4(t) = \begin{cases} 1, & 0 \le t < 1, \\ -2, & 1 \le t < 3, \\ 2, & 3 \le t < 4, \\ 0, & \text{otherwise.} \end{cases}$$
(a) Determine the dimensionality of the waveforms and a set of basis functions.
(b) Use the basis functions to represent the four waveforms by vectors $\mathbf{s}_1$, $\mathbf{s}_2$, $\mathbf{s}_3$ and $\mathbf{s}_4$.
(c) Determine the minimum distance between any pair of vectors.
Solution:
(a) As an orthonormal set of basis functions we consider the set
$$f_1(t) = \begin{cases} 1, & 0 \le t < 1, \\ 0, & \text{otherwise,} \end{cases} \quad f_2(t) = \begin{cases} 1, & 1 \le t < 2, \\ 0, & \text{otherwise,} \end{cases} \quad f_3(t) = \begin{cases} 1, & 2 \le t < 3, \\ 0, & \text{otherwise,} \end{cases} \quad f_4(t) = \begin{cases} 1, & 3 \le t < 4, \\ 0, & \text{otherwise.} \end{cases}$$
In matrix notation, the four waveforms can be represented as
$$\begin{pmatrix} s_1(t) \\ s_2(t) \\ s_3(t) \\ s_4(t) \end{pmatrix} = \begin{pmatrix} 2 & -1 & -1 & -1 \\ -2 & 1 & 1 & 0 \\ 1 & -1 & 1 & -1 \\ 1 & -2 & -2 & 2 \end{pmatrix}\begin{pmatrix} f_1(t) \\ f_2(t) \\ f_3(t) \\ f_4(t) \end{pmatrix}$$
Note that the rank of the transformation matrix is 4 and therefore, the dimensionality of the waveforms is 4.
(b) The representation vectors are
$$\mathbf{s}_1 = [2, -1, -1, -1], \quad \mathbf{s}_2 = [-2, 1, 1, 0], \quad \mathbf{s}_3 = [1, -1, 1, -1], \quad \mathbf{s}_4 = [1, -2, -2, 2]$$
(c) The distance between the first and the second vector is:
$$d_{1,2} = \sqrt{|\mathbf{s}_1 - \mathbf{s}_2|^2} = \sqrt{\left|[4, -2, -2, -1]\right|^2} = \sqrt{25}$$
Similarly we find that:
$$d_{1,3} = \sqrt{\left|[1, 0, -2, 0]\right|^2} = \sqrt{5}, \qquad d_{1,4} = \sqrt{\left|[1, 1, 1, -3]\right|^2} = \sqrt{12}, \qquad d_{2,3} = \sqrt{\left|[-3, 2, 0, 1]\right|^2} = \sqrt{14}$$
$$d_{2,4} = \sqrt{\left|[-3, 3, 3, -2]\right|^2} = \sqrt{31}, \qquad d_{3,4} = \sqrt{\left|[0, 1, 3, -3]\right|^2} = \sqrt{19}$$
Thus, the minimum distance between any pair of vectors is $d_{\min} = \sqrt{5}$.
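The distance table in (c) is easy to reproduce directly from the representation vectors:

```python
import math
from itertools import combinations

s = {1: (2, -1, -1, -1), 2: (-2, 1, 1, 0),
     3: (1, -1, 1, -1), 4: (1, -2, -2, 2)}

# Squared Euclidean distances between all pairs
d2 = {(i, j): sum((a - b) ** 2 for a, b in zip(s[i], s[j]))
      for i, j in combinations(s, 2)}
print(d2)                           # squared distances 25, 5, 12, 14, 31, 19
print(math.sqrt(min(d2.values())))  # minimum distance sqrt(5)
```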
4. [2, Problem 5.4].
(a) Using the Gram-Schmidt orthogonalization procedure, find a set of orthonormal basis functions to represent the following signals
$$s_1(t) = \begin{cases} 2, & 0 \le t < 1, \\ 0, & \text{otherwise,} \end{cases} \qquad s_2(t) = \begin{cases} 4, & 0 \le t < 2, \\ 0, & \text{otherwise,} \end{cases} \qquad s_3(t) = \begin{cases} 3, & 0 \le t < 3, \\ 0, & \text{otherwise.} \end{cases}$$
(b) Express each of the signals $s_i(t)$, $i = 1, 2, 3$ in terms of the basis functions found in (4a).
Solution:
(a) The energy of $s_1(t)$ and the first basis function are
$$E_1 = \int_0^1 |s_1(t)|^2\,dt = \int_0^1 2^2\,dt = 4, \qquad \phi_1(t) = \frac{s_1(t)}{\sqrt{E_1}} = \begin{cases} 1, & 0 \le t < 1, \\ 0, & \text{otherwise.} \end{cases}$$
Define
$$s_{21} = \int_0^T s_2(t)\phi_1(t)\,dt = \int_0^1 4\cdot 1\,dt = 4, \qquad g_2(t) = s_2(t) - s_{21}\phi_1(t) = \begin{cases} 4, & 1 \le t < 2, \\ 0, & \text{otherwise.} \end{cases}$$
Hence, the second basis function is
$$\phi_2(t) = \frac{g_2(t)}{\sqrt{\int_0^T g_2^2(t)\,dt}} = \begin{cases} 1, & 1 \le t < 2, \\ 0, & \text{otherwise.} \end{cases}$$
Define
$$s_{31} = \int_0^T s_3(t)\phi_1(t)\,dt = \int_0^1 3\cdot 1\,dt = 3, \qquad s_{32} = \int_T^{2T} s_3(t)\phi_2(t)\,dt = \int_1^2 3\cdot 1\,dt = 3$$
$$g_3(t) = s_3(t) - s_{31}\phi_1(t) - s_{32}\phi_2(t) = \begin{cases} 3, & 2 \le t < 3, \\ 0, & \text{otherwise.} \end{cases}$$
Hence, the third basis function is
$$\phi_3(t) = \frac{g_3(t)}{\sqrt{\int_0^T g_3^2(t)\,dt}} = \begin{cases} 1, & 2 \le t < 3, \\ 0, & \text{otherwise.} \end{cases}$$
(b)
$$s_1(t) = 2\phi_1(t), \qquad s_2(t) = 4\phi_1(t) + 4\phi_2(t), \qquad s_3(t) = 3\phi_1(t) + 3\phi_2(t) + 3\phi_3(t)$$
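The Gram-Schmidt computation above can be replayed numerically on sampled versions of the signals. A sketch; the sampling grid is an arbitrary choice made fine enough for the inner products to be accurate:

```python
import math

n = 3000
dt = 3.0 / n
t = [k * dt for k in range(n)]   # sample the interval [0, 3)

s1 = [2.0 if x < 1 else 0.0 for x in t]
s2 = [4.0 if x < 2 else 0.0 for x in t]
s3 = [3.0 if x < 3 else 0.0 for x in t]

def inner(u, v):
    # Discrete approximation of the L2 inner product
    return sum(a * b for a, b in zip(u, v)) * dt

def gram_schmidt(signals):
    basis = []
    for s in signals:
        g = list(s)
        for phi in basis:
            c = inner(s, phi)
            g = [a - c * b for a, b in zip(g, phi)]
        norm = math.sqrt(inner(g, g))
        basis.append([a / norm for a in g])
    return basis

phi = gram_schmidt([s1, s2, s3])
coeffs = [[inner(s, p) for p in phi] for s in (s1, s2, s3)]
print([[round(c, 3) for c in row] for row in coeffs])
```

The resulting coefficient rows match part (b): $(2, 0, 0)$, $(4, 4, 0)$ and $(3, 3, 3)$.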
5. Optimum receiver.
Suppose one of $M$ equiprobable signals $x_i(t)$, $i = 0, \ldots, M-1$ is to be transmitted during a period of time $T$ over an AWGN channel. Moreover, each signal is identical to all others in the subinterval $[t_1, t_2]$ where $0 < t_1 < t_2 < T$.
(a) Show that the optimum receiver may ignore the subinterval $[t_1, t_2]$.
(b) Equivalently, show that if $\mathbf{x}_0, \ldots, \mathbf{x}_{M-1}$ all have the same projection in one dimension⁶, then this dimension may be ignored.
(c) Does this result necessarily hold true if the noise is Gaussian but not white? Explain.
Solution:
(a) The data signals $x_i(t)$ being equiprobable, the optimum decision rule is the Maximum Likelihood (ML) rule, given by (in vector form) $\min_i |\mathbf{y} - \mathbf{x}_i|^2$. From the invariance of the inner product, the ML rule is equivalent to
$$\min_i \int_0^T |y(t) - x_i(t)|^2\,dt$$
The integral is then written as a sum of three integrals,
$$\int_0^T |y(t) - x_i(t)|^2\,dt = \int_0^{t_1} |y(t) - x_i(t)|^2\,dt + \int_{t_1}^{t_2} |y(t) - x_i(t)|^2\,dt + \int_{t_2}^T |y(t) - x_i(t)|^2\,dt$$
Since the second integral over the interval $[t_1, t_2]$ is constant as a function of $i$, the optimum decision rule reduces to
$$\min_i \left[\int_0^{t_1} |y(t) - x_i(t)|^2\,dt + \int_{t_2}^T |y(t) - x_i(t)|^2\,dt\right]$$
And therefore, the optimum receiver may ignore the interval $[t_1, t_2]$.
(b) In an appropriate orthonormal basis of dimension $N \le M$, the vectors $\mathbf{x}_i$ and $\mathbf{y}$ are given by
$$\mathbf{x}_i^T = [x_{i1}, x_{i2}, \ldots, x_{iN}], \qquad \mathbf{y}^T = [y_1, y_2, \ldots, y_N]$$
Assume that $x_{im} = x_{1m}$ for all $i$; the optimum decision rule becomes
$$\min_i \sum_{k=1}^{N} |y_k - x_{ik}|^2 = \min_i \left[\sum_{k=1, k\neq m}^{N} |y_k - x_{ik}|^2 + |y_m - x_{im}|^2\right]$$
Since $|y_m - x_{im}|^2$ is constant for all $i$, the optimum decision rule becomes
$$\min_i \sum_{k=1, k\neq m}^{N} |y_k - x_{ik}|^2$$
Therefore, the projection $x_m$ may be ignored by the optimum receiver.
(c) The result does not hold true if the noise is colored Gaussian noise. This is due to the fact that the noise along one component is correlated with other components and hence might not be irrelevant. In such a case, all components turn out to be relevant. Equivalently, by duality, the same result holds in the time domain.
⁶ $\mathbf{x}_i^T = [x_{i1}, x_{i2}, \ldots, x_{iN}]$ are vectors of length $N$; there exists a $k$ such that $x_{ik} = x_{jk}$ for all $i, j \in \{0, \ldots, M-1\}$.
6 Optimal Receiver for the Waveform Channel
1. [1, Problem 5.4].
A binary digital communication system employs the signals
$$s_0(t) = \begin{cases} 0, & 0 \le t \le T, \\ 0, & \text{otherwise,} \end{cases} \qquad s_1(t) = \begin{cases} A, & 0 \le t \le T, \\ 0, & \text{otherwise,} \end{cases}$$
for transmitting the information. This is called on-off signaling. The demodulator cross-correlates the received signal $r(t)$ with $s_i(t)$, $i = 0, 1$ and samples the output of the correlator at $t = T$.
(a) Determine the optimum detector for an AWGN channel and the optimum threshold, assuming that the signals are equally probable.
(b) Determine the probability of error as a function of the SNR. How does on-off signaling compare with antipodal signaling?
Solution:
(a) The correlation type demodulator employs a filter:
$$f(t) = \begin{cases} \frac{1}{\sqrt{T}}, & 0 \le t \le T, \\ 0, & \text{otherwise.} \end{cases}$$
Hence, the sampled outputs of the cross-correlators are:
$$r = s_i + n, \quad i = 0, 1$$
where $s_0 = 0$, $s_1 = A\sqrt{T}$ and the noise term $n$ is a zero-mean Gaussian random variable with variance $\sigma_n^2 = \frac{N_0}{2}$. The probability density function for the sampled output is:
$$f(r|s_0) = \frac{1}{\sqrt{\pi N_0}}e^{-\frac{r^2}{N_0}}, \qquad f(r|s_1) = \frac{1}{\sqrt{\pi N_0}}e^{-\frac{(r - A\sqrt{T})^2}{N_0}}$$
The minimum error decision rule is:
$$\frac{f(r|s_1)}{f(r|s_0)} \underset{s_0}{\overset{s_1}{\gtrless}} 1 \;\Rightarrow\; r \underset{s_0}{\overset{s_1}{\gtrless}} \frac{1}{2}A\sqrt{T}$$
(b) The average probability of error is:
$$p(e) = \frac{1}{2}\int_{\frac{1}{2}A\sqrt{T}}^{\infty} f(r|s_0)\,dr + \frac{1}{2}\int_{-\infty}^{\frac{1}{2}A\sqrt{T}} f(r|s_1)\,dr$$
$$= \frac{1}{2}\int_{\frac{1}{2}\sqrt{\frac{2}{N_0}}A\sqrt{T}}^{\infty} \frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2}}\,dx + \frac{1}{2}\int_{-\infty}^{-\frac{1}{2}\sqrt{\frac{2}{N_0}}A\sqrt{T}} \frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2}}\,dx$$
$$= Q\left(\frac{1}{2}\sqrt{\frac{2}{N_0}}A\sqrt{T}\right) = Q\left(\sqrt{\text{SNR}}\right)$$
where
$$\text{SNR} = \frac{\frac{1}{2}A^2 T}{N_0}$$
Thus, the on-off signaling requires a factor of two more energy to achieve the same probability of error as the antipodal signaling.
2. [2, Problem 5.11].
Consider the optimal detection of the sinusoidal signal
$$s(t) = \sin\left(\frac{8\pi t}{T}\right), \quad 0 \le t \le T$$
in additive white Gaussian noise.
(a) Determine the correlator output (at $t = T$) assuming a noiseless input.
(b) Determine the corresponding matched filter output, assuming that the filter includes a delay $T$ to make it causal.
(c) Hence show that these two outputs are the same at time instant $t = T$.
Solution:
For the noiseless case, the received signal is $r(t) = s(t)$, $0 \le t \le T$.
(a) The correlator output is:
$$y(T) = \int_0^T r(\tau)s(\tau)\,d\tau = \int_0^T s^2(\tau)\,d\tau = \int_0^T \sin^2\left(\frac{8\pi\tau}{T}\right)d\tau = \frac{T}{2}$$
(b) The matched filter is defined by the impulse response $h(t) = s(T - t)$. The matched filter output is therefore:
$$y(t) = \int_{-\infty}^{\infty} r(\tau)h(t - \tau)\,d\tau = \int_{-\infty}^{\infty} s(\tau)s(T - t + \tau)\,d\tau = \int_0^T \sin\left(\frac{8\pi\tau}{T}\right)\sin\left(\frac{8\pi(T - t + \tau)}{T}\right)d\tau$$
$$= \frac{1}{2}\int_0^T \cos\left(\frac{8\pi(T - t)}{T}\right)d\tau - \frac{1}{2}\int_0^T \cos\left(\frac{8\pi(T - t + 2\tau)}{T}\right)d\tau$$
$$= \frac{T}{2}\cos\left(\frac{8\pi(t - T)}{T}\right) - \frac{T}{16\pi}\sin\left(\frac{8\pi(T - t)}{T}\right) - \frac{T}{16\pi}\sin\left(\frac{8\pi t}{T}\right).$$
(c) When the matched filter output is sampled at $t = T$, we get
$$y(T) = \frac{T}{2}$$
which is exactly the same as the correlator output determined in item (2a).
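The correlator output $T/2$ can be confirmed by numerical integration ($T = 1$ here for concreteness):

```python
import math

T = 1.0
n = 100000
dt = T / n

# Midpoint-rule evaluation of the correlator output at t = T:
# integral of sin^2(8 pi tau / T) over [0, T]
y_corr = sum(math.sin(8 * math.pi * (k + 0.5) * dt / T) ** 2 * dt
             for k in range(n))
print(round(y_corr, 6))  # converges to T/2 = 0.5
```

The matched-filter output sampled at $t = T$ is the same integral, so the same check covers part (c).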
3. SNR Maximization with a Matched Filter.
Prove the following theorem:
For the real system shown in Figure 1, the filter $h(t)$ that maximizes the signal-to-noise ratio at sample time $T_s$ is given by the matched filter $h(t) = x(T_s - t)$.
[Figure 1: the signal $x(t)$ plus the noise $n(t)$ passes through the filter $h(t)$, whose output is sampled at $t = T_s$ to give $y(T_s)$.]
Figure 1: SNR maximization by matched filter.
Solution:
Compute the SNR at sample time $t = T_s$ as follows:
$$\text{Signal Energy} = \left[x(t) * h(t)\big|_{t=T_s}\right]^2 = \left[\int_{-\infty}^{\infty} x(t)h(T_s - t)\,dt\right]^2 = \langle x(t), h(T_s - t)\rangle^2$$
The sampled noise at the matched filter output has energy or mean-square
$$\text{Noise Energy} = E\left[\int_{-\infty}^{\infty} n(t)h(T_s - t)\,dt \int_{-\infty}^{\infty} n(s)h(T_s - s)\,ds\right]$$
$$= \int\!\!\int \frac{N_0}{2}\delta(t - s)h(T_s - t)h(T_s - s)\,dt\,ds = \frac{N_0}{2}\int_{-\infty}^{\infty} h^2(T_s - t)\,dt = \frac{N_0}{2}\|h\|^2$$
The signal-to-noise ratio, defined as the ratio of the signal power to the noise power, equals
$$\text{SNR} = \frac{2}{N_0}\cdot\frac{\langle x(t), h(T_s - t)\rangle^2}{\|h\|^2}$$
The Cauchy-Schwarz Inequality states that
$$\langle x(t), h(T_s - t)\rangle^2 \le \|x\|^2\|h\|^2$$
with equality if and only if $x(t) = kh(T_s - t)$ where $k$ is some arbitrary constant. Thus, by inspection, the SNR is maximized over all choices for $h(t)$ when $h(t) = x(T_s - t)$. The filter $h(t)$ is matched to $x(t)$, and the corresponding maximum SNR (for any $k$) is
$$\text{SNR}_{\max} = \frac{2}{N_0}\|x\|^2$$
4. The optimal receiver.
Consider the signals $s_0(t)$, $s_1(t)$ with the respective probabilities $p_0$, $p_1$.
$$s_0(t) = \begin{cases} \sqrt{\frac{E}{T}}, & 0 \le t < aT, \\ -\sqrt{\frac{E}{T}}, & aT \le t < T, \\ 0, & \text{otherwise,} \end{cases} \qquad s_1(t) = \begin{cases} \sqrt{\frac{2E}{T}}\cos\left(\frac{2\pi t}{T}\right), & 0 \le t < T, \\ 0, & \text{otherwise.} \end{cases}$$
The observation, $r(t)$, obeys
$$r(t) = s_i(t) + n(t), \quad i = 0, 1, \qquad E\{n(t)n(\tau)\} = \frac{N_0}{2}\delta(t - \tau)$$
(a) Find the optimal receiver for the above two signals; write the solution in terms of $s_0(t)$ and $s_1(t)$.
(b) Find the error probability of the optimal receiver for equiprobable signals.
(c) Find the parameter $a$ which minimizes the error probability.
Solution:
(a) We will use a type II receiver, which uses filters matched to the signals $s_i(t)$, $i = 0, 1$. The optimal receiver is depicted in Figure 2, where $h_0(t) = s_0(T - t)$, $h_1(t) = s_1(T - t)$.
[Figure 2: $r(t)$ is passed through the two matched filters $h_0(t)$ and $h_1(t)$, each sampled at $t = T$; the bias $\frac{N_0}{2}\ln p_i - \frac{E}{2}$ is added to branch $i$, and the larger of the two branch outputs $y_0$, $y_1$ is selected.]
Figure 2: Optimal receiver - II.
The Max block in Figure 2 can be implemented as follows:
$$y = y_0 - y_1 \underset{s_1(t)}{\overset{s_0(t)}{\gtrless}} 0$$
The R.V $y$ obeys
$$y = \left[h_0(t) * r(t)\right]_{t=T} + \frac{N_0}{2}\ln p_0 - \frac{E}{2} - \left[h_1(t) * r(t)\right]_{t=T} - \frac{N_0}{2}\ln p_1 + \frac{E}{2} = \frac{N_0}{2}\ln\frac{p_0}{p_1} + \left[(h_0(t) - h_1(t)) * r(t)\right]_{t=T}$$
Hence the optimal receiver can be implemented using one convolution operation instead of two convolution operations, as depicted in Figure 3.
[Figure 3: $r(t)$ is passed through the single filter $h_0(t) - h_1(t)$, sampled at $t = T$; the bias $\frac{N_0}{2}\ln\frac{p_0}{p_1}$ is added and the result is fed to the decision rule.]
Figure 3: Optimal receiver - II.
(b) For an equiprobable binary constellation, in an AWGN channel, the probability of error is given by
$$p(e) = Q\left(\frac{d/2}{\sigma}\right), \quad d = \|s_0 - s_1\|, \quad d^2 = \|s_0\|^2 + \|s_1\|^2 - 2\langle s_0, s_1\rangle$$
where $\sigma^2$ is the noise variance.
The correlation coefficient between the two signals, $\rho$, equals
$$\rho = \frac{\langle s_0, s_1\rangle}{\|s_0\|\cdot\|s_1\|} = \frac{\langle s_0, s_1\rangle}{E}$$
and for equal energy signals
$$d^2 = 2E - 2\langle s_0, s_1\rangle = 2E(1 - \rho) \;\Rightarrow\; p(e) = Q\left(\sqrt{\frac{E(1 - \rho)}{N_0}}\right)$$
(c) $\rho$ is the only parameter in $p(e)$ affected by $a$. An explicit calculation of $\rho$ yields
$$\langle s_0, s_1\rangle = \int_0^T s_0(t)s_1(t)\,dt = \int_0^{aT}\sqrt{\frac{E}{T}}\sqrt{\frac{2E}{T}}\cos\frac{2\pi t}{T}\,dt - \int_{aT}^T\sqrt{\frac{E}{T}}\sqrt{\frac{2E}{T}}\cos\frac{2\pi t}{T}\,dt$$
$$= \frac{\sqrt{2}E}{2\pi}\sin 2\pi a + \frac{\sqrt{2}E}{2\pi}\sin 2\pi a = \frac{\sqrt{2}E}{\pi}\sin 2\pi a \;\Rightarrow\; \rho = \frac{\sqrt{2}}{\pi}\sin 2\pi a$$
$$p(e) = Q\left(\sqrt{\frac{E\left(1 - \frac{\sqrt{2}}{\pi}\sin 2\pi a\right)}{N_0}}\right)$$
In order to minimize the probability of error, we maximize the argument of the Q function:
$$\sin 2\pi a = -1 \;\Rightarrow\; a = \frac{3}{4}$$
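The minimizing value $a = 3/4$ can be confirmed by a brute-force sweep of the error-probability expression ($E$ and $N_0$ below are example values; they scale the Q-function argument but do not move the minimizer):

```python
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

E, N0 = 1.0, 0.5   # example energy and noise level

def pe(a):
    # Error probability from part (c) as a function of the switching point a
    rho = (math.sqrt(2) / math.pi) * math.sin(2 * math.pi * a)
    return Q(math.sqrt(E * (1 - rho) / N0))

# Sweep a over a fine grid in (0, 1) and locate the minimum
best = min((pe(k / 1000), k / 1000) for k in range(1, 1000))
print(best)  # minimum attained at a = 0.75, where sin(2 pi a) = -1
```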
7 The Probability of Error

1. [1, Problem 5.10].
A ternary communication system transmits one of three signals, $s(t)$, $0$, or $-s(t)$, every $T$ seconds. The received signal is either $r(t) = s(t) + z(t)$, $r(t) = z(t)$, or $r(t) = -s(t) + z(t)$, where $z(t)$ is white Gaussian noise with $E\{z(t)\} = 0$ and $\phi_{zz}(t, \tau) = \frac{1}{2}E\{z(t)z^*(\tau)\} = N_0\delta(t - \tau)$. The optimum receiver computes the correlation metric
$$U = \mathrm{Re}\left\{\int_0^T r(t)s^*(t)\,dt\right\}$$
and compares $U$ with a threshold $A$ and a threshold $-A$. If $U > A$ the decision is made that $s(t)$ was sent. If $U < -A$, the decision is made in favor of $-s(t)$. If $-A \le U \le A$, the decision is made in favor of $0$.

(a) Determine the three conditional probabilities of error $p(e|s(t))$, $p(e|0)$ and $p(e|-s(t))$.
(b) Determine the average probability of error $p(e)$ as a function of the threshold $A$, assuming that the three symbols are equally probable a priori.
(c) Determine the value of $A$ that minimizes $p(e)$.
Solution:

(a) $U = \mathrm{Re}\left\{\int_0^T r(t)s^*(t)\,dt\right\}$, where $r(t) \in \{s(t) + z(t),\; -s(t) + z(t),\; z(t)\}$ depending on which signal was sent. If we assume that $s(t)$ was sent:
$$U = \mathrm{Re}\left\{\int_0^T s(t)s^*(t)\,dt\right\} + \mathrm{Re}\left\{\int_0^T z(t)s^*(t)\,dt\right\} = 2E + N$$
where $E = \frac{1}{2}\int_0^T s(t)s^*(t)\,dt$ is a constant, and $N = \mathrm{Re}\left\{\int_0^T z(t)s^*(t)\,dt\right\}$ is a Gaussian random variable with zero mean and variance $2EN_0$. Hence, given that $s(t)$ was sent, the probability of error is:
$$p_1(e) = \Pr\{N < A - 2E\} = Q\left(\frac{2E - A}{\sqrt{2EN_0}}\right)$$
When $-s(t)$ is transmitted: $U = -2E + N$, and the corresponding conditional error probability is:
$$p_2(e) = \Pr\{N > -A + 2E\} = Q\left(\frac{2E - A}{\sqrt{2EN_0}}\right)$$
and finally, when $0$ is transmitted: $U = N$, and the corresponding error probability is:
$$p_3(e) = \Pr\{N > A \text{ or } N < -A\} = 2Q\left(\frac{A}{\sqrt{2EN_0}}\right)$$

(b)
$$p(e) = \frac{1}{3}\left[p_1(e) + p_2(e) + p_3(e)\right] = \frac{2}{3}\left[Q\left(\frac{2E - A}{\sqrt{2EN_0}}\right) + Q\left(\frac{A}{\sqrt{2EN_0}}\right)\right]$$

(c) In order to minimize $p(e)$:
$$\frac{dp(e)}{dA} = 0 \quad\Rightarrow\quad A = E$$
where we differentiate $Q(x) = \int_x^\infty \frac{1}{\sqrt{2\pi}}e^{-\frac{t^2}{2}}\,dt$ with respect to $x$, using the Leibnitz rule: $\frac{d}{dx}\left[\int_{f(x)}^\infty g(a)\,da\right] = -\frac{df}{dx}\,g(f(x))$.
Using this threshold:
$$p(e) = \frac{4}{3}Q\left(\sqrt{\frac{E}{2N_0}}\right)$$
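The threshold optimization can be verified numerically; the following sketch (not part of the original solution; the values $E = 1$, $N_0 = 0.25$ are arbitrary choices) sweeps the threshold $A$ and confirms that the minimizer sits at $A = E$:

```python
import math

def Q(x):
    # Gaussian tail probability via the complementary error function.
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def p_error(A, E=1.0, N0=0.25):
    s = math.sqrt(2.0 * E * N0)
    return (2.0 / 3.0) * (Q((2.0 * E - A) / s) + Q(A / s))

# Sweep the threshold A over [0, 2] and locate the minimizer.
grid = [i / 10000.0 for i in range(20001)]
A_opt = min(grid, key=p_error)
print(A_opt)  # expected: 1.0 (= E)
```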
2. [1, Problem 5.19].
Consider a signal detector with an input
$$r = \pm A + n, \qquad A > 0$$
where $+A$ and $-A$ occur with equal probability and the noise variable $n$ is characterized by the Laplacian p.d.f.:
$$f(n) = \frac{1}{\sqrt{2}\,\sigma}e^{-\sqrt{2}|n|/\sigma}$$
(a) Determine the probability of error as a function of the parameters $A$ and $\sigma$.
(b) Determine the SNR required to achieve an error probability of $10^{-5}$. How does the SNR compare with the result for a Gaussian p.d.f.?

Solution:

(a) Let $\lambda = \frac{\sqrt{2}}{\sigma}$. The optimal receiver uses the criterion:
$$\frac{f(r|A)}{f(r|-A)} = e^{-\lambda\left[|r - A| - |r + A|\right]} \underset{-A}{\overset{A}{\gtrless}} 1 \quad\Leftrightarrow\quad r \underset{-A}{\overset{A}{\gtrless}} 0$$
The average probability of error is:
$$p(e) = \frac{1}{2}\Pr\{\text{Error}|A\} + \frac{1}{2}\Pr\{\text{Error}|-A\} = \frac{1}{2}\int_{-\infty}^0 f(r|A)\,dr + \frac{1}{2}\int_0^\infty f(r|-A)\,dr$$
$$= \frac{1}{2}\int_{-\infty}^0 \frac{\lambda}{2}e^{-\lambda|r - A|}\,dr + \frac{1}{2}\int_0^\infty \frac{\lambda}{2}e^{-\lambda|r + A|}\,dr = \frac{\lambda}{4}\int_A^\infty e^{-\lambda x}\,dx + \frac{\lambda}{4}\int_A^\infty e^{-\lambda x}\,dx = \frac{1}{2}e^{-\lambda A} = \frac{1}{2}e^{-\sqrt{2}A/\sigma}$$

(b) The variance of the noise is $\sigma^2$, hence the SNR is:
$$\mathrm{SNR} = \frac{A^2}{\sigma^2}$$
and the probability of error is given by:
$$p(e) = \frac{1}{2}e^{-\sqrt{2\,\mathrm{SNR}}}$$
For $p(e) = 10^{-5}$ we obtain:
$$-\ln\left(2\times 10^{-5}\right) = \sqrt{2\,\mathrm{SNR}} \quad\Rightarrow\quad \mathrm{SNR} = 58.53 = 17.674\ \mathrm{dB}$$
If the noise was Gaussian, then the probability of error for antipodal signalling is:
$$p(e) = Q\left(\sqrt{\mathrm{SNR}}\right)$$
where SNR is the signal-to-noise ratio at the output of the matched filter. With $p(e) = 10^{-5}$ we find $\sqrt{\mathrm{SNR}} = 4.26$ and therefore $\mathrm{SNR} = 12.594$ dB. Thus the required signal-to-noise ratio is about 5 dB less when the additive noise is Gaussian.
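Both required SNRs can be recomputed with a short script (a sketch, not part of the original solution; the bisection bracket is an arbitrary choice):

```python
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

# Laplacian noise: p(e) = 0.5*exp(-sqrt(2*SNR)) = 1e-5  =>  sqrt(2*SNR) = ln(5e4)
snr_lap = math.log(5e4) ** 2 / 2.0
snr_lap_db = 10.0 * math.log10(snr_lap)

# Gaussian noise: p(e) = Q(sqrt(SNR)) = 1e-5, solved by bisection on SNR.
lo, hi = 0.0, 100.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if Q(math.sqrt(mid)) > 1e-5:
        lo = mid
    else:
        hi = mid
snr_gauss_db = 10.0 * math.log10(lo)

print(round(snr_lap_db, 3), round(snr_gauss_db, 3))  # ~17.674 and ~12.6
```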
3. [1, Problem 5.38].
The discrete sequence
$$r_k = \sqrt{E_b}\,c_k + n_k, \qquad k = 1, 2, \ldots, n$$
represents the output sequence of samples from a demodulator, where $c_k = \pm 1$ are elements of one of two possible code words, $C_1 = [1\ 1\ \ldots\ 1]$ and $C_2 = [1\ \ldots\ 1\ {-1}\ \ldots\ {-1}]$. The code word $C_2$ has $w$ elements that are $+1$ and $n - w$ elements that are $-1$, where $w$ is a positive integer. The noise sequence $\{n_k\}$ is white Gaussian with variance $\sigma^2$.

(a) What is the optimum ML detector for the two possible transmitted signals?
(b) Determine the probability of error as a function of the parameters $\sigma^2$, $E_b$, $w$.
(c) What is the value of $w$ that minimizes the error probability?

Solution:

(a) The optimal ML detector selects the sequence $C_i$ that minimizes the quantity:
$$D(r, C_i) = \sum_{k=1}^n \left(r_k - \sqrt{E_b}\,c_{ik}\right)^2$$
The metrics of the two possible transmitted sequences are
$$D(r, C_1) = \sum_{k=1}^w \left(r_k - \sqrt{E_b}\right)^2 + \sum_{k=w+1}^n \left(r_k - \sqrt{E_b}\right)^2$$
$$D(r, C_2) = \sum_{k=1}^w \left(r_k - \sqrt{E_b}\right)^2 + \sum_{k=w+1}^n \left(r_k + \sqrt{E_b}\right)^2$$
Since the first term of the right side is common for the two equations, we conclude that the optimal ML detector can base its decisions only on the last $n - w$ received elements of $r$. That is
$$\sum_{k=w+1}^n \left(r_k - \sqrt{E_b}\right)^2 \underset{C_1}{\overset{C_2}{\gtrless}} \sum_{k=w+1}^n \left(r_k + \sqrt{E_b}\right)^2$$
or equivalently
$$\sum_{k=w+1}^n r_k \underset{C_2}{\overset{C_1}{\gtrless}} 0$$

(b) Since $r_k = \sqrt{E_b}\,c_{ik} + n_k$, the probability of error $\Pr\{\text{Error}|C_1\}$ is
$$\Pr\{\text{Error}|C_1\} = \Pr\left\{\sqrt{E_b}(n - w) + \sum_{k=w+1}^n n_k < 0\right\} = \Pr\left\{\sum_{k=w+1}^n n_k < -(n - w)\sqrt{E_b}\right\}$$
The R.V. $u = \sum_{k=w+1}^n n_k$ is zero-mean Gaussian with variance $\sigma_u^2 = (n - w)\sigma^2$. Hence
$$\Pr\{\text{Error}|C_1\} = p_1(e) = \frac{1}{\sqrt{2\pi\sigma_u^2}}\int_{-\infty}^{-(n - w)\sqrt{E_b}} \exp\left(-\frac{x^2}{2\sigma_u^2}\right)dx = Q\left(\sqrt{\frac{E_b(n - w)}{\sigma^2}}\right)$$
Similarly we find that $\Pr\{\text{Error}|C_1\} = \Pr\{\text{Error}|C_2\}$, and since the two sequences are equiprobable
$$p(e) = Q\left(\sqrt{\frac{E_b(n - w)}{\sigma^2}}\right)$$

(c) The probability of error $p(e)$ is minimized when $\frac{E_b(n - w)}{\sigma^2}$ is maximized, that is for $w = 0$. This implies that $C_1 = -C_2$, and thus the distance between the two sequences is the maximum possible.
4. Sub-optimal receiver.
Consider a binary system transmitting the signals $s_0(t)$, $s_1(t)$ with equal probability.
$$s_0(t) = \begin{cases}\sqrt{\frac{2E}{T}}\sin\frac{2\pi t}{T}, & 0 \le t \le T,\\ 0, & \text{otherwise.}\end{cases} \qquad s_1(t) = \begin{cases}\sqrt{\frac{2E}{T}}\cos\frac{2\pi t}{T}, & 0 \le t \le T,\\ 0, & \text{otherwise.}\end{cases}$$
The observation, $r(t)$, obeys
$$r(t) = s_i(t) + n(t), \qquad i = 0, 1$$
where $n(t)$ is white Gaussian noise with $E\{n(t)\} = 0$ and $E\{n(t)n(\tau)\} = \frac{N_0}{2}\delta(t - \tau)$.

(a) Sketch an optimal and efficient (in the sense of minimal number of filters) receiver. What is the error probability when this receiver is used?
(b) What is the error probability of the following receiver?
$$\int_0^{T/2} r(t)\,dt \;\underset{s_1}{\overset{s_0}{\gtrless}}\; 0$$
(c) Consider the following receiver
$$\int_0^{aT} r(t)\,dt \;\underset{s_1}{\overset{s_0}{\gtrless}}\; K, \qquad 0 \le a \le 1$$
where $K$ is the optimal threshold for $\int_0^{aT} r(t)\,dt$. Find $a$ which minimizes the probability of error. Numerical solution may be used.

Figure 4: Optimal receiver type II (matched filters $s_0(T - t)$ and $s_1(T - t)$, sampled at $t = T$, followed by a Max block).
Solution:

(a) The signals are equiprobable and have equal energy. We will use a type-II receiver, depicted in Figure 4.
The distance between the signals is
$$d^2 = \int_0^T \frac{2E}{T}\left[\sin\left(\frac{2\pi t}{T}\right) - \cos\left(\frac{2\pi t}{T}\right)\right]^2 dt = 2E \quad\Rightarrow\quad d = \sqrt{2E}$$
The receiver depicted in Figure 4 is equivalent to the following (and more efficient) receiver, depicted in Figure 5.

Figure 5: Efficient optimal receiver (single filter $s_0(T - t) - s_1(T - t)$, sampled at $t = T$, followed by a sign decision).

For a binary system with equiprobable signals $s_0(t)$ and $s_1(t)$ the probability of error is given by
$$p(e) = Q\left(\frac{d/2}{\sigma}\right) = Q\left(\frac{d/2}{\sqrt{N_0/2}}\right) = Q\left(\frac{d}{\sqrt{2N_0}}\right)$$
where $d$, the distance between the signals, is given by $d = \|s_0(t) - s_1(t)\| = \|s_0 - s_1\|$. Hence, the probability of error is
$$p(e) = Q\left(\frac{d}{\sqrt{2N_0}}\right) = Q\left(\sqrt{\frac{E}{N_0}}\right)$$

(b) Let us define the random variable $Y = \int_0^{T/2} r(t)\,dt$. $Y$ obeys
$$Y|s_0 = \int_0^{T/2} s_0(t)\,dt + \int_0^{T/2} n(t)\,dt, \qquad Y|s_1 = \int_0^{T/2} s_1(t)\,dt + \int_0^{T/2} n(t)\,dt$$
Let us define the random variable $N = \int_0^{T/2} n(t)\,dt$. $N$ is a zero-mean Gaussian random variable, with variance
$$\mathrm{Var}\{N\} = E\left\{\int_0^{T/2}\!\!\int_0^{T/2} n(\alpha)n(\beta)\,d\alpha\,d\beta\right\} = \int_0^{T/2}\!\!\int_0^{T/2} \frac{N_0}{2}\delta(\alpha - \beta)\,d\alpha\,d\beta = \frac{N_0T}{4}$$
$Y|s_i$ is a Gaussian random variable (note that $Y$ is not Gaussian, but a Gaussian mixture!) with mean:
$$E\{Y|s_0\} = \int_0^{T/2} s_0(t)\,dt = \frac{\sqrt{2ET}}{\pi}, \qquad E\{Y|s_1\} = \int_0^{T/2} s_1(t)\,dt = 0$$
The variance of $Y|s_i$ is identical under both cases, and equal to the variance of $N$. For the given decision rule the error probability is:
$$p(e) = p(s_0)\Pr\{Y < 0|s_0\} + p(s_1)\Pr\{Y > 0|s_1\} = \frac{1}{2}Q\left(\frac{2}{\pi}\sqrt{\frac{2E}{N_0}}\right) + \frac{1}{4}$$

(c) We will use the same derivation procedure as in the previous item. Define the random variables $Y, N$ as follows:
$$Y = \int_0^{aT} r(t)\,dt, \qquad N = \int_0^{aT} n(t)\,dt, \qquad E\{N\} = 0, \qquad \mathrm{Var}\{N\} = \frac{aTN_0}{2}$$
$$E\{Y|s_0\} = \int_0^{aT} s_0(t)\,dt = \frac{\sqrt{2ET}}{2\pi}\left(1 - \cos 2\pi a\right), \qquad E\{Y|s_1\} = \int_0^{aT} s_1(t)\,dt = \frac{\sqrt{2ET}}{2\pi}\sin 2\pi a$$
$$\mathrm{Var}\{Y|s_0\} = \mathrm{Var}\{Y|s_1\} = \mathrm{Var}\{N\}$$
The distance between $E\{Y|s_0\}$ and $E\{Y|s_1\}$ equals
$$d = \frac{\sqrt{2ET}}{2\pi}\left|1 - \cos(2\pi a) - \sin(2\pi a)\right|$$
For an optimal decision rule the probability of error equals $Q\left(\frac{d}{2\sigma}\right)$. Hence the probability of error equals
$$p(e) = Q\left(\frac{1}{2\pi}\sqrt{\frac{E}{N_0}}\,\frac{1}{\sqrt{a}}\left|1 - \cos(2\pi a) - \sin(2\pi a)\right|\right)$$
which is minimized when $\frac{1}{\sqrt{a}}\left|1 - \cos 2\pi a - \sin 2\pi a\right|$ is maximized. Let $a_{\mathrm{opt}}$ denote the $a$ which maximizes the above expression. Numerical solution yields
$$a_{\mathrm{opt}} \approx 0.5885$$
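The numerical maximization can be reproduced with a simple grid search (a hypothetical helper; the grid resolution is an arbitrary choice):

```python
import math

def g(a):
    # Objective to maximize: |1 - cos(2*pi*a) - sin(2*pi*a)| / sqrt(a)
    return abs(1.0 - math.cos(2.0 * math.pi * a) - math.sin(2.0 * math.pi * a)) / math.sqrt(a)

grid = [i / 100000.0 for i in range(1, 100001)]  # a in (0, 1]
a_opt = max(grid, key=g)
print(round(a_opt, 4))  # ~0.5885
```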
8 Bit Error Probability

1. [3, Example 6.2].
Compare the probability of bit error for 8PSK and 16PSK, in an AWGN channel, assuming $\gamma_b = \frac{E_b}{N_0} = 15\ \mathrm{dB}$ and equal a-priori probabilities. Use the following approximations:
- The nearest-neighbor approximation given in class.
- $\gamma_s \approx \gamma_b\log_2 M$.
- The approximation for $P_{e,\mathrm{bit}}$ given in class.

Solution:

The nearest-neighbor approximation for the probability of error, in an AWGN channel, for an M-PSK constellation is
$$P_e \approx 2Q\left(\sqrt{2\gamma_s}\sin\frac{\pi}{M}\right).$$
The approximation for $P_{e,\mathrm{bit}}$ (under Gray mapping, at high enough SNR) is
$$P_{e,\mathrm{bit}} \approx \frac{P_e}{\log_2 M}.$$
For 8PSK we have $\gamma_s = (\log_2 8)\cdot 10^{15/10} = 94.87$. Hence
$$P_e \approx 2Q\left(\sqrt{189.74}\sin(\pi/8)\right) = 1.355\times 10^{-7}.$$
Using the approximation for $P_{e,\mathrm{bit}}$ we get
$$P_{e,\mathrm{bit}} = \frac{P_e}{3} = 4.52\times 10^{-8}.$$
For 16PSK we have $\gamma_s = (\log_2 16)\cdot 10^{15/10} = 126.49$. Hence
$$P_e \approx 2Q\left(\sqrt{252.98}\sin(\pi/16)\right) = 1.916\times 10^{-3}.$$
Using the approximation for $P_{e,\mathrm{bit}}$ we get
$$P_{e,\mathrm{bit}} = \frac{P_e}{4} = 4.79\times 10^{-4}.$$
Note that $P_{e,\mathrm{bit}}$ is much larger for 16PSK than for 8PSK for the same $\gamma_b$. This result is expected, since 16PSK packs more bits per symbol into a given constellation, so for a fixed energy-per-bit the minimum distance between constellation points will be smaller.
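The numbers above can be reproduced directly (a sketch assuming only the two approximations stated in the solution):

```python
import math

def Q(x):
    # Gaussian tail probability.
    return 0.5 * math.erfc(x / math.sqrt(2.0))

gamma_b = 10.0 ** (15.0 / 10.0)  # Eb/N0 = 15 dB
results = {}
for M in (8, 16):
    k = math.log2(M)
    gamma_s = k * gamma_b                                            # gamma_s ~ gamma_b * log2(M)
    pe = 2.0 * Q(math.sqrt(2.0 * gamma_s) * math.sin(math.pi / M))   # nearest-neighbor approx.
    results[M] = (pe, pe / k)                                        # (Pe, Pe_bit)

print(results)
```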
2. Bit error probability for rectangular constellation.
Let $p_0(t)$ and $p_1(t)$ be two orthonormal functions, different from zero in the time interval $[0, T]$. The equiprobable signals defined in Figure 6 are transmitted through a zero-mean AWGN channel with noise PSD equal to $N_0/2$.

(a) Calculate $P_e$ for the optimal receiver.
(b) Calculate $P_{e,\mathrm{bit}}$ for the optimal receiver (optimal in the sense of minimal $P_e$).
(c) Approximate $P_{e,\mathrm{bit}}$ for high SNR ($\frac{d}{2} \gg \sqrt{\frac{N_0}{2}}$). Explain.

Figure 6: 8 signals in rectangular constellation. The points lie at $p_0$-coordinates $\pm d/2, \pm 3d/2$ and $p_1$-coordinates $\pm d/2$; the top row is labeled $(010), (011), (001), (000)$ and the bottom row $(110), (111), (101), (100)$, from left to right.
Solution:

Let $n_0$ denote the noise projection on $p_0(t)$ and $n_1$ the noise projection on $p_1(t)$. Clearly $n_i \sim N(0, N_0/2)$, $i = 0, 1$.

(a) Let $P_c$ denote the probability of correct symbol decision; hence $P_e = 1 - P_c$. For the corner points:
$$\Pr\{\text{correct decision}|(000)\} = \left[1 - Q\left(\frac{d/2}{\sqrt{N_0/2}}\right)\right]^2 \stackrel{(a)}{=} \Pr\{\text{correct decision}|(100)\} \stackrel{(b)}{=} \Pr\{\text{correct decision}|(010)\} \stackrel{(c)}{=} \Pr\{\text{correct decision}|(110)\} = P_1,$$
where (a), (b) and (c) are due to the constellation symmetry. For the inner points:
$$\Pr\{\text{correct decision}|(001)\} = \left[1 - Q\left(\frac{d/2}{\sqrt{N_0/2}}\right)\right]\left[1 - 2Q\left(\frac{d/2}{\sqrt{N_0/2}}\right)\right] \stackrel{(a)}{=} \Pr\{\text{correct decision}|(101)\} \stackrel{(b)}{=} \Pr\{\text{correct decision}|(011)\} \stackrel{(c)}{=} \Pr\{\text{correct decision}|(111)\} = P_2,$$
where (a), (b) and (c) are due to the constellation symmetry. Hence
$$P_c = \frac{1}{2}\left\{\left[1 - Q\left(\frac{d/2}{\sqrt{N_0/2}}\right)\right]^2 + \left[1 - Q\left(\frac{d/2}{\sqrt{N_0/2}}\right)\right]\left[1 - 2Q\left(\frac{d/2}{\sqrt{N_0/2}}\right)\right]\right\}$$
$$P_e = 1 - P_c = \frac{1}{2}\left[5Q\left(\frac{d/2}{\sqrt{N_0/2}}\right) - 3Q^2\left(\frac{d/2}{\sqrt{N_0/2}}\right)\right].$$
(b) Let $b_0$ denote the MSB, $b_2$ denote the LSB and $b_1$ denote the middle bit (footnote 7: for the top left constellation point in Figure 6, $(b_0, b_1, b_2) = (010)$). Let $b_i(s)$, $i = 0, 1, 2$ denote the $i$-th bit of the constellation point $s$. For the LSB, conditioning on a corner point:
$$\Pr\{\text{error in } b_2|(000)\} = \sum_{\hat{s}:\,b_2(\hat{s}) = 1}\Pr\{\hat{s}\text{ was received}|(000)\} = \Pr\left\{-\frac{5d}{2} < n_0 < -\frac{d}{2}\right\} = Q\left(\frac{d/2}{\sqrt{N_0/2}}\right) - Q\left(\frac{5d/2}{\sqrt{N_0/2}}\right)$$
$$= \Pr\{\text{error in } b_2|(100)\} = \Pr\{\text{error in } b_2|(010)\} = \Pr\{\text{error in } b_2|(110)\} = P_1,$$
where the equalities are due to the constellation symmetry. Conditioning on an inner point:
$$\Pr\{\text{error in } b_2|(001)\} = \sum_{\hat{s}:\,b_2(\hat{s}) = 0}\Pr\{\hat{s}\text{ was received}|(001)\} = \Pr\left\{n_0 < -\frac{3d}{2}\right\} + \Pr\left\{\frac{d}{2} < n_0\right\} = Q\left(\frac{d/2}{\sqrt{N_0/2}}\right) + Q\left(\frac{3d/2}{\sqrt{N_0/2}}\right)$$
$$= \Pr\{\text{error in } b_2|(101)\} = \Pr\{\text{error in } b_2|(011)\} = \Pr\{\text{error in } b_2|(111)\} = P_2.$$
Using similar arguments we can calculate the bit error probability for $b_1$:
$$\Pr\{\text{error in } b_1|(000)\} = Q\left(\frac{3d/2}{\sqrt{N_0/2}}\right) = \Pr\{\text{error in } b_1|(100)\} = \Pr\{\text{error in } b_1|(010)\} = \Pr\{\text{error in } b_1|(110)\} = P_3.$$
$$\Pr\{\text{error in } b_1|(001)\} = Q\left(\frac{d/2}{\sqrt{N_0/2}}\right) = \Pr\{\text{error in } b_1|(101)\} = \Pr\{\text{error in } b_1|(011)\} = \Pr\{\text{error in } b_1|(111)\} = P_4.$$
The bit error probability for $b_0$ equals
$$\Pr\{\text{error in } b_0|(000)\} = Q\left(\frac{d/2}{\sqrt{N_0/2}}\right) = P_5.$$
Due to the constellation symmetry and the bits mapping, the bit error probability for $b_0$ is equal for all the constellation points.
Let $P_{e,b_i}$, $i = 0, 1, 2$ denote the averaged (over all signals) bit error probability of the $i$-th bit; then
$$P_{e,b_0} = P_5, \qquad P_{e,b_1} = \frac{1}{2}(P_3 + P_4), \qquad P_{e,b_2} = \frac{1}{2}(P_1 + P_2).$$
The averaged bit error probability, $P_{e,\mathrm{bit}}$, is given by
$$P_{e,\mathrm{bit}} = \frac{1}{3}\sum_{i=0}^2 P_{e,b_i} = \frac{5}{6}Q\left(\frac{d/2}{\sqrt{N_0/2}}\right) + \frac{1}{3}Q\left(\frac{3d/2}{\sqrt{N_0/2}}\right) - \frac{1}{6}Q\left(\frac{5d/2}{\sqrt{N_0/2}}\right)$$

(c) For $\frac{d}{2} \gg \sqrt{\frac{N_0}{2}}$:
$$P_{e,\mathrm{bit}} \approx \frac{5}{6}Q\left(\frac{d/2}{\sqrt{N_0/2}}\right), \qquad P_e \approx \frac{5}{2}Q\left(\frac{d/2}{\sqrt{N_0/2}}\right) \quad\Rightarrow\quad P_{e,\mathrm{bit}} \approx \frac{P_e}{3}.$$
Note that $\frac{P_e}{\log_2 M}$ is the lower bound for $P_{e,\mathrm{bit}}$.
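A short script (hypothetical, with $x = \frac{d/2}{\sqrt{N_0/2}}$ as the normalized half-distance) confirms that the exact ratio $P_{e,\mathrm{bit}}/P_e$ approaches $1/3$ as the SNR grows:

```python
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def pe(x):
    # Exact symbol error probability: (5Q - 3Q^2)/2
    return 0.5 * (5.0 * Q(x) - 3.0 * Q(x) ** 2)

def pe_bit(x):
    # Exact averaged bit error probability: (5/6)Q(x) + (1/3)Q(3x) - (1/6)Q(5x)
    return (5.0 / 6.0) * Q(x) + (1.0 / 3.0) * Q(3.0 * x) - (1.0 / 6.0) * Q(5.0 * x)

ratios = {x: pe_bit(x) / pe(x) for x in (1.0, 2.0, 4.0)}
print(ratios)  # the ratio approaches 1/3 as x grows
```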
9 Connection with the Concept of Capacity

1. [2, Problem 9.29].
A voice-grade channel of the telephone network has a bandwidth of 3.4 kHz. Assume real-valued symbols.

(a) Calculate the capacity of the telephone channel for a signal-to-noise ratio of 30 dB.
(b) Calculate the minimum signal-to-noise ratio required to support information transmission through the telephone channel at the rate of 4800 bits/sec.

Solution:

(a) The channel bandwidth is $W = 3.4$ kHz. The received signal-to-noise ratio is $\mathrm{SNR} = 10^3 = 30$ dB. Hence the channel capacity is
$$C = W\log_2(1 + \mathrm{SNR}) = 3.4\times 10^3\log_2(1 + 10^3) = 33.9\times 10^3\ \mathrm{bits/sec}.$$
(b) The required SNR is the solution of the following equation:
$$4800 = 3.4\times 10^3\log_2(1 + \mathrm{SNR}) \quad\Rightarrow\quad \mathrm{SNR} = 1.66 = 2.2\ \mathrm{dB}.$$
2. [1, Problem 7.17].
Channel $C_1$ is an additive white Gaussian noise channel with a bandwidth $W$, average transmitter power $P$, and noise power spectral density $\frac{N_0}{2}$. Channel $C_2$ is an additive Gaussian noise channel with the same bandwidth and average power as channel $C_1$ but with noise power spectral density $S_n(f)$. It is further assumed that the total noise power for both channels is the same; that is
$$\int_{-W}^W S_n(f)\,df = \int_{-W}^W \frac{N_0}{2}\,df = N_0W.$$
Which channel do you think has larger capacity? Give an intuitive reasoning.

Solution:

The capacity of the additive white Gaussian noise channel is:
$$C = W\log_2\left(1 + \frac{P}{N_0W}\right)$$
For the nonwhite Gaussian noise channel, although the noise power is equal to the noise power in the white Gaussian noise channel, the capacity is higher. The reason is that since the noise samples are correlated, knowledge of the previous noise samples provides partial information on the future noise samples and therefore reduces their effective variance.
3. Capacity of ISI channel.
Consider a channel with Inter-Symbol Interference (ISI) defined as follows:
$$y_k = \sum_{i=0}^{L-1} h_ix_{k-i} + z_k.$$
The channel input obeys an average power constraint $E\{x_k^2\} \le P$, and the noise $z_k$ is i.i.d. Gaussian distributed: $z_k \sim N(0, \sigma_z^2)$. Assume that $H(e^{j2\pi f})$ has no zeros and show that the channel capacity is
$$C = \frac{1}{2}\int_{-W}^W \log\left(1 + \frac{\left[\nu - \sigma_z^2/|H(e^{j2\pi f})|^2\right]^+}{\sigma_z^2/|H(e^{j2\pi f})|^2}\right)df,$$
where $\nu$ is a constant selected such that
$$\int_{-W}^W \left[\nu - \frac{\sigma_z^2}{|H(e^{j2\pi f})|^2}\right]^+ df = P.$$
You may use the following theorem:

Theorem 1. Let the transmitter have a maximum average power constraint of $P$ [Watts]. The capacity of an additive Gaussian noise channel with noise power spectrum $N(f)$ [Watts/Hz] is given by
$$C = \frac{1}{2}\int \log_2\left(1 + \frac{\left[\nu - N(f)\right]^+}{N(f)}\right)df \quad \left[\frac{\mathrm{bits}}{\mathrm{sec}}\right],$$
where $\nu$ is chosen so that $\int\left[\nu - N(f)\right]^+ df = P$.

Solution:

Since $H(e^{j2\pi f})$ has no zeros, the ISI filter is invertible. Inverting the channel results in
$$\tilde{Y}(e^{j2\pi f}) = \frac{Y(e^{j2\pi f})}{H(e^{j2\pi f})} = X(e^{j2\pi f}) + \frac{Z(e^{j2\pi f})}{H(e^{j2\pi f})} = X(e^{j2\pi f}) + \tilde{Z}(e^{j2\pi f}).$$
This is a problem of a colored Gaussian channel with no ISI. The noise PSD is
$$S_{\tilde{Z}\tilde{Z}}(e^{j2\pi f}) = \frac{\sigma_z^2}{|H(e^{j2\pi f})|^2}.$$
The capacity of this channel, using Theorem 1, is given by
$$C = \frac{1}{2}\int_{-W}^W \log\left(1 + \frac{\left[\nu - \sigma_z^2/|H(e^{j2\pi f})|^2\right]^+}{\sigma_z^2/|H(e^{j2\pi f})|^2}\right)df,$$
where $\nu$ is a constant selected such that
$$\int_{-W}^W \left[\nu - \frac{\sigma_z^2}{|H(e^{j2\pi f})|^2}\right]^+ df = P.$$
10 Continuous Phase Modulations

1. [1, Problem 4.14].
Consider an equivalent low-pass digitally modulated signal of the form
$$u(t) = \sum_n \left[a_ng(t - 2nT) - jb_ng(t - 2nT - T)\right]$$
where $\{a_n\}$ and $\{b_n\}$ are two sequences of statistically independent binary digits and $g(t)$ is a sinusoidal pulse defined as
$$g(t) = \begin{cases}\sin\left(\frac{\pi t}{2T}\right), & 0 \le t \le 2T,\\ 0, & \text{otherwise.}\end{cases}$$
This type of signal is viewed as a four-phase PSK signal in which the pulse shape is one-half cycle of a sinusoid. Each of the information sequences $\{a_n\}$ and $\{b_n\}$ is transmitted at a rate of $\frac{1}{2T}$ bits/sec and, hence, the combined transmission rate is $\frac{1}{T}$ bits/sec. The two sequences are staggered in time by $T$ seconds in transmission. Consequently, the signal $u(t)$ is called staggered four-phase PSK.

(a) Show that the envelope $|u(t)|$ is a constant, independent of the information $a_n$ on the in-phase component and information $b_n$ on the quadrature component. In other words, the amplitude of the carrier used in transmitting the signal is constant.
(b) Determine the power density spectrum of $u(t)$.
(c) Compare the power density spectrum obtained from (1b) with the power density spectrum of the MSK signal [1, 4.4.2]. What conclusion can you draw from this comparison?

Solution:

(a) Since the signaling rate is $\frac{1}{2T}$ for each sequence and since $g(t)$ has duration $2T$, for any time instant only $g(t - 2nT)$ and either $g(t - 2nT - T)$ or $g(t - 2nT + T)$ will contribute to $u(t)$. Hence, for $2nT \le t \le 2nT + T$, writing $t' = t - 2nT$:
$$|u(t)|^2 = \left|a_ng(t') - jb_ng(t' - T)\right|^2 = a_n^2g^2(t') + b_n^2g^2(t' - T) = g^2(t') + g^2(t' - T)$$
$$= \sin^2\left(\frac{\pi t'}{2T}\right) + \sin^2\left(\frac{\pi(t' - T)}{2T}\right) = \sin^2\left(\frac{\pi t'}{2T}\right) + \cos^2\left(\frac{\pi t'}{2T}\right) = 1, \qquad \forall t.$$
(b) The power density spectrum is:
$$S_U(f) = \frac{1}{T}|G(f)|^2$$
where $G(f) = \int_{-\infty}^\infty g(t)e^{-j2\pi ft}\,dt = \int_0^{2T}\sin\left(\frac{\pi t}{2T}\right)e^{-j2\pi ft}\,dt$. By using Euler's formula it is easily shown that:
$$G(f) = \frac{4T}{\pi}\,\frac{\cos(2\pi Tf)}{1 - 16T^2f^2}\,e^{-j2\pi fT} \quad\Rightarrow\quad S_U(f) = \frac{16T}{\pi^2}\,\frac{\cos^2(2\pi Tf)}{(1 - 16T^2f^2)^2}$$
(c) The above power density spectrum is identical to that of the MSK signal. Therefore, the MSK signal can be generated as a staggered four-phase PSK signal with a half-period sinusoidal pulse for $g(t)$.
2. [1, Problem 5.29].
In an MSK signal, the initial state for the phase is either 0 or $\pi$ rad. Determine the terminal phase state for the following four pairs of input data $(b_0, b_1)$: (a) 00; (b) 01; (c) 10; (d) 11.

Solution:

We assume that the input bits 0, 1 are mapped to the symbols $-1$ and $1$ respectively. The terminal phase of an MSK signal at time instant $n$ is given by
$$\theta(n; s) = \frac{\pi}{2}\sum_{k=0}^n s_k + \theta_0$$
where $\theta_0$ is the initial phase and $s_k$ is $\pm 1$ depending on the input bit at the time instant $k$. The following table shows $\theta(1; s)$ for the two different values of $\theta_0$ and the four input pairs of data:

theta_0   b_0 b_1   s_0 s_1   theta(1; s)
0         0   0     -1  -1    -pi
0         0   1     -1   1     0
0         1   0      1  -1     0
0         1   1      1   1     pi
pi        0   0     -1  -1     0
pi        0   1     -1   1     pi
pi        1   0      1  -1     pi
pi        1   1      1   1     2*pi
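The table can be generated programmatically (a sketch; the bit-to-symbol map $0 \to -1$, $1 \to +1$ is the one assumed in the solution):

```python
import math

def terminal_phase(theta0, bits):
    # Map bits {0,1} -> symbols {-1,+1} and accumulate pi/2 per symbol.
    return theta0 + (math.pi / 2.0) * sum(2 * b - 1 for b in bits)

table = {(theta0, bits): terminal_phase(theta0, bits)
         for theta0 in (0.0, math.pi)
         for bits in ((0, 0), (0, 1), (1, 0), (1, 1))}
for key, phase in table.items():
    print(key, round(phase / math.pi, 2), "pi")
```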
3. [1, Problem 5.30].
A continuous-phase FSK signal with $h = \frac{1}{2}$ is represented as
$$s(t) = \pm\sqrt{\frac{2\mathcal{E}_b}{T_b}}\cos\left(\frac{\pi t}{2T_b}\right)\cos(2\pi f_ct) \mp \sqrt{\frac{2\mathcal{E}_b}{T_b}}\sin\left(\frac{\pi t}{2T_b}\right)\sin(2\pi f_ct), \qquad 0 \le t \le 2T_b$$
where the signs depend on the information bits transmitted.

(a) Show that this signal has constant amplitude.
(b) Sketch a block diagram of the modulator for synthesizing the signal from the input bit stream.
(c) Sketch a block diagram of the demodulator and detector for recovering the information bit stream from the signal.

Solution:

(a) The envelope of the signal is
$$|s(t)|^2 = |s_c(t)|^2 + |s_s(t)|^2 = \frac{2\mathcal{E}_b}{T_b}\cos^2\left(\frac{\pi t}{2T_b}\right) + \frac{2\mathcal{E}_b}{T_b}\sin^2\left(\frac{\pi t}{2T_b}\right) = \frac{2\mathcal{E}_b}{T_b}$$
so the amplitude is the constant $\sqrt{\frac{2\mathcal{E}_b}{T_b}}$.

(b) The signal $s(t)$ is equivalent to an MSK signal. Figure 7 depicts a block diagram of the modulator for synthesizing the signal; in Figure 7, $x_e$ is the even pulse sequence and $x_o$ is the odd pulse sequence.

Figure 7: Modulator block diagram.

(c) Figure 8 depicts a block diagram of the demodulator: two quadrature branches, each followed by a sampler and a threshold-at-0 device.

Figure 8: Demodulator block diagram.
11 Colored AGN Channel

1. Colored noise.
Consider the following four equiprobable signals:
$$s_0(t) = \frac{1}{\sqrt{\pi}}\cos(t), \quad s_1(t) = \frac{1}{\sqrt{\pi}}\sin(t), \quad s_2(t) = -s_0(t), \quad s_3(t) = -s_1(t), \qquad 0 \le t \le 2\pi$$
The received signal obeys $r(t) = s(t) + n(t)$, where $n(t)$ is a colored Gaussian noise, independent of the signal $s(t)$, with the following power spectral density:
$$S_N(\omega) = \frac{N_0}{2}\,\frac{\omega^2}{1 + \omega^2}, \qquad \omega = \left[\frac{\mathrm{rad}}{\mathrm{sec}}\right]$$

(a) The optimal receiver for this scenario consists of a whitening filter, $H(\omega)$, followed by an optimal receiver for the AWGN channel. What should be the whitening filter amplitude, $|H(\omega)|^2$, so the noise at the filter output will be white?
(b) Find the above $H(\omega)$ and the corresponding $h(t)$, which can be composed of an adder and an integrator.
(c) For a noise-free channel, what are the transmitted signals, $\tilde{s}_i(t)$, $i = 0, \ldots, 3$, at the output of the whitening filter?
(d) Let $\tilde{s}_i(t) \triangleq s_i(t)\ast h(t)$, $i = 0, \ldots, 3$, where $h(t)$ is the impulse response of $H(\omega)$. Find a set of real orthonormal basis functions which span the set $\tilde{\mathcal{S}} = (\tilde{s}_0(t), \ldots, \tilde{s}_3(t))$. Find the projection of each element in the set $\tilde{\mathcal{S}}$ on the basis functions.
(e) Sketch the optimal receiver.

Solution:

(a) The noise at the filter output will be white if
$$|H(\omega)|^2\,\frac{\omega^2}{1 + \omega^2}\,\frac{N_0}{2} = \text{Constant}.$$
(b) Let the constant be $\frac{N_0}{2}$; hence $|H(\omega)|^2 = \frac{1 + \omega^2}{\omega^2}$. One of the filters which obeys $|H(\omega)|^2 = \frac{1 + \omega^2}{\omega^2}$ is
$$H_0(\omega) = \frac{1 + j\omega}{j\omega} = 1 + \frac{1}{j\omega}.$$
The impulse response of $H(\omega)$ is $h(t) = \delta(t) + u(t)$, where $u(t)$ denotes a step function. Hence
$$\tilde{s}_i(t) = s_i(t)\ast\left[\delta(t) + u(t)\right] = s_i(t) + \int_{-\infty}^t s_i(\tau)\,d\tau.$$
Therefore $H(\omega)$ can be implemented using an adder and an integrator.
(c)
$$\tilde{s}_i(t) = s_i(t) + \int_{-\infty}^t s_i(\tau)\,d\tau = s_i(t) + \int_0^{\min\{2\pi, t\}} s_i(\tau)\,d\tau = \begin{cases}0, & t < 0,\\ s_i(t) + \int_0^t s_i(\tau)\,d\tau, & 0 \le t \le 2\pi,\\ \int_0^{2\pi} s_i(\tau)\,d\tau, & t > 2\pi.\end{cases}$$
Therefore (since $\int_0^{2\pi}s_i(\tau)\,d\tau = 0$ for all four signals, they vanish outside $[0, 2\pi]$) the filtered signals are
$$\tilde{s}_0(t) = \frac{1}{\sqrt{\pi}}\left(\cos(t) + \sin(t)\right), \qquad \tilde{s}_1(t) = \frac{1}{\sqrt{\pi}}\left(\sin(t) + 1 - \cos(t)\right)$$
$$\tilde{s}_2(t) = -\tilde{s}_0(t), \qquad \tilde{s}_3(t) = -\tilde{s}_1(t), \qquad t \in [0, 2\pi]$$
(d) The functions $\phi_0 = \sin(t) + \cos(t)$ and $\phi_1 = \sin(t) + 1 - \cos(t)$ are orthogonal on the range $[0, 2\pi]$, hence establish a basis of the signal space. In order to have an orthonormal basis we should normalize $\phi_0$ and $\phi_1$:
$$\|\phi_0(t)\|^2 = \langle\sin(t) + \cos(t),\ \sin(t) + \cos(t)\rangle = 2\pi$$
$$\|\phi_1(t)\|^2 = \langle\sin(t) + 1 - \cos(t),\ \sin(t) + 1 - \cos(t)\rangle = 4\pi$$
Hence the orthonormal basis is
$$\tilde{\phi}_0(t) = \frac{1}{\sqrt{2\pi}}\left(\sin(t) + \cos(t)\right), \qquad \tilde{\phi}_1(t) = \frac{1}{\sqrt{4\pi}}\left(\sin(t) + 1 - \cos(t)\right)$$
The vectors of the whitened signals are
$$\tilde{s}_0 = \begin{bmatrix}\sqrt{2}\\ 0\end{bmatrix}, \qquad \tilde{s}_1 = \begin{bmatrix}0\\ 2\end{bmatrix}, \qquad \tilde{s}_2 = -\tilde{s}_0, \qquad \tilde{s}_3 = -\tilde{s}_1.$$
(e) A type-II receiver is depicted in Figure 9: the whitening filter $1 + \frac{1}{j\omega}$ is followed by filters matched to $\tilde{s}_0(T - t)$ and $\tilde{s}_1(T - t)$ and their negations, each branch sampled at $t = T$ with the bias $-\frac{\|\tilde{s}_0\|^2}{2} = -1$ or $-\frac{\|\tilde{s}_1\|^2}{2} = -2$ added, followed by a Max block.

Figure 9: Type-II receiver for ACGN.

where $T = 2\pi$.
The whitening filter can be integrated into the matched filters. Since $\cos(2\pi - t) = \cos(t)$ and $\sin(2\pi - t) = -\sin(t)$,
$$\tilde{s}_0(2\pi - t) = \begin{cases}\frac{1}{\sqrt{\pi}}\left(\cos(t) - \sin(t)\right), & 0 \le t \le 2\pi,\\ 0, & \text{otherwise.}\end{cases} \qquad \tilde{s}_1(2\pi - t) = \begin{cases}\frac{1}{\sqrt{\pi}}\left(1 - \cos(t) - \sin(t)\right), & 0 \le t \le 2\pi,\\ 0, & \text{otherwise.}\end{cases}$$
Hence
$$\left[\delta(t) + u(t)\right]\ast\tilde{s}_0(2\pi - t) = \frac{1}{\sqrt{\pi}}\left(2\cos(t) - 1\right)$$
$$\left[\delta(t) + u(t)\right]\ast\tilde{s}_1(2\pi - t) = \frac{1}{\sqrt{\pi}}\left(f(t) - 2\sin(t)\right)$$
where the definition range of the functions $\cos$, $\sin$ and $1$ is $0 \le t \le 2\pi$, and the function $f(t)$ is defined as follows:
$$f(t) \triangleq \begin{cases}0, & t < 0,\\ t, & 0 \le t \le 2\pi,\\ 2\pi, & t > 2\pi.\end{cases}$$
Figure 10 depicts an optimal receiver in which the whitening filter is integrated into the matched filters.

Figure 10: Type-II receiver for ACGN with whitening filter integrated into the matched filters.
12 ISI Channels and MLSE

1. [1, Problem 10.2].
In a binary PAM system, the clock that specifies the sampling of the correlator (matched filter) output is offset from the optimum sampling time by 10%.

(a) If the signal pulse used is rectangular, $p(t) = \begin{cases}A, & 0 \le t < T,\\ 0, & \text{otherwise,}\end{cases}$ determine the loss in SNR of the desired signal component sampled at the output of the MF due to the mistiming.
(b) Determine the ISI coefficients, $f_k$, due to the mistiming and determine their effect on the probability of error, assuming per-symbol decoding designed for binary PAM over AWGN (no ISI) with equal a-priori probabilities.

Solution:

(a) If the transmitted signal is:
$$r(t) = \sum_{n=-\infty}^\infty I_np(t - nT) + n(t)$$
then the output of the receiving filter is
$$y(t) = \sum_{n=-\infty}^\infty I_nx(t - nT) + v(t)$$
where $x(t) = p(t)\ast p^*(-t)$ and $v(t) = n(t)\ast p^*(-t)$. If the sampling time is off by 10%, then the samples at the output of the correlator are taken at $t = \left(m - \frac{1}{10}\right)T$. Assuming that $t = \left(m - \frac{1}{10}\right)T$ without loss of generality, the sampled sequence is:
$$y_m = \sum_{n=-\infty}^\infty I_nx\left(\left(m - \tfrac{1}{10}\right)T - nT\right) + v\left(\left(m - \tfrac{1}{10}\right)T\right).$$
If the signal pulse is rectangular with amplitude $A$ and duration $T$, then $\sum_n I_nx\left(\left(m - \frac{1}{10}\right)T - nT\right)$ is nonzero only for $n = m$ and $n = m - 1$ and therefore, the sampled sequence is given by:
$$y_m = I_mx\left(-\tfrac{1}{10}T\right) + I_{m-1}x\left(T - \tfrac{1}{10}T\right) + v\left(\left(m - \tfrac{1}{10}\right)T\right) = \frac{9}{10}A^2TI_m + \frac{1}{10}A^2TI_{m-1} + v\left(\left(m - \tfrac{1}{10}\right)T\right)$$
The variance of the noise is:
$$\sigma_v^2 = \frac{N_0}{2}A^2T$$
and therefore, the SNR is:
$$\mathrm{SNR} = \left(\frac{9}{10}\right)^2\frac{2(A^2T)^2}{N_0A^2T} = \frac{81}{100}\,\frac{2A^2T}{N_0}.$$
As is observed, there is a loss of $-10\log_{10}\frac{81}{100} = 0.9151$ dB due to the mistiming.
(b) Recall from item (1a) that the sampled sequence is:
$$y_m = \frac{9}{10}A^2TI_m + \frac{1}{10}A^2TI_{m-1} + v_m.$$
The term $\frac{1}{10}A^2TI_{m-1}$ expresses the ISI introduced to the system. If $I_m = 1$ is transmitted, then the probability of error is
$$\Pr\{e|I_m = 1\} = \frac{1}{2}\Pr\{e|I_m = 1, I_{m-1} = 1\} + \frac{1}{2}\Pr\{e|I_m = 1, I_{m-1} = -1\}$$
$$= \frac{1}{2}\frac{1}{\sqrt{\pi N_0A^2T}}\int_{-\infty}^{-A^2T}e^{-\frac{v^2}{N_0A^2T}}\,dv + \frac{1}{2}\frac{1}{\sqrt{\pi N_0A^2T}}\int_{-\infty}^{-\frac{8}{10}A^2T}e^{-\frac{v^2}{N_0A^2T}}\,dv$$
$$= \frac{1}{2}Q\left(\sqrt{\frac{2A^2T}{N_0}}\right) + \frac{1}{2}Q\left(\sqrt{\left(\frac{8}{10}\right)^2\frac{2A^2T}{N_0}}\right)$$
Since the symbols of the binary PAM system are equiprobable, the previously derived expression is the probability of error when a symbol-by-symbol detector is employed. Comparing this with the probability of error of a system with no ISI, we observe that there is an increase of the probability of error by
$$\frac{1}{2}Q\left(\sqrt{\left(\frac{8}{10}\right)^2\frac{2A^2T}{N_0}}\right) - \frac{1}{2}Q\left(\sqrt{\frac{2A^2T}{N_0}}\right).$$
2. [1, Problem 10.8].
A binary antipodal signal is transmitted over a nonideal band-limited channel, which introduces ISI over two adjacent symbols:
$$y_m = \sum_k I_kx_{m-k} + v_m = I_m + \frac{1}{4}I_{m-1} + v_m,$$
where $v_m$ is an additive noise.

(a) Determine the average probability of error, assuming equiprobable signals and that the additive noise is white and Gaussian, using a decoder designed for antipodal signals over AWGN (no ISI).
(b) By plotting the error probability obtained in (2a) and that for the case of no ISI, determine the relative difference in SNR at an error probability of $10^{-6}$.

Solution:

(a) The output of the matched filter at the time instant $mT$ is:
$$y_m = \sum_k I_kx_{m-k} + v_m = I_m + \frac{1}{4}I_{m-1} + v_m.$$
The autocorrelation function of the noise samples $v_m$ is:
$$E\{v_kv_j\} = \frac{N_0}{2}x_{k-j}$$
thus, the variance of the noise is
$$\sigma_v^2 = \frac{N_0}{2}x_0 = \frac{N_0}{2}.$$
If a symbol-by-symbol detector is employed and we assume that the symbols $I_m = I_{m-1} = \sqrt{\mathcal{E}_b}$ have been transmitted, then the probability of error $\Pr\{e|I_m = I_{m-1} = \sqrt{\mathcal{E}_b}\}$ is:
$$\Pr\{e|I_m = I_{m-1} = \sqrt{\mathcal{E}_b}\} = \Pr\{y_m < 0|I_m = I_{m-1} = \sqrt{\mathcal{E}_b}\} = \Pr\left\{v_m < -\frac{5}{4}\sqrt{\mathcal{E}_b}\right\} = Q\left(\frac{5}{4}\sqrt{\frac{2\mathcal{E}_b}{N_0}}\right)$$
If however $I_{m-1} = -\sqrt{\mathcal{E}_b}$, then:
$$\Pr\{e|I_m = \sqrt{\mathcal{E}_b}, I_{m-1} = -\sqrt{\mathcal{E}_b}\} = \Pr\left\{v_m < -\frac{3}{4}\sqrt{\mathcal{E}_b}\right\} = Q\left(\frac{3}{4}\sqrt{\frac{2\mathcal{E}_b}{N_0}}\right).$$
Since the symbols are equiprobable, we conclude that:
$$p(e) = \frac{1}{2}Q\left(\frac{5}{4}\sqrt{\frac{2\mathcal{E}_b}{N_0}}\right) + \frac{1}{2}Q\left(\frac{3}{4}\sqrt{\frac{2\mathcal{E}_b}{N_0}}\right).$$
(b) Figure 11 depicts the error probability obtained in item (2a) vs. the SNR per bit, together with the error probability for the case of no ISI. As is observed, the relative difference in SNR at an error probability of $10^{-6}$ is 2 dB.
Figure 11: Probability of error comparison ($\log_{10}P(e)$, from $-7$ to $-2$, vs. SNR per bit, 6 to 14 dB).
3. [1, Problem 10.24].
Consider a four-level PAM system with possible transmitted levels 3, 1, $-1$, and $-3$. The channel through which the data are transmitted introduces intersymbol interference over two successive symbols. The equivalent discrete-time channel model obeys
$$y_k = \begin{cases}0.8I_k + n_k, & k = 1,\\ 0.8I_k - 0.6I_{k-1} + n_k, & k > 1,\end{cases}$$
where $\{n_k\}$ is a sequence of real-valued independent zero-mean Gaussian noise variables with variance $\sigma^2 = N_0$.

(a) Sketch the tree structure, showing the possible signal sequences for the received signals $y_1, y_2$, and $y_3$.
(b) Suppose the Viterbi algorithm is used to detect the information sequence. How many metrics must be computed at each stage of the algorithm?
(c) How many surviving sequences are there in the Viterbi algorithm for this channel?
(d) Suppose that the received signals are
$$y_1 = 0.5, \qquad y_2 = 2.0, \qquad y_3 = -1.0$$
Determine the surviving sequences through stage $y_3$ and the corresponding metrics.

Solution:

(a) Figure 12 depicts part of the tree: from the root, each of the values $I_1 \in \{3, 1, -1, -3\}$ branches into the four values of $I_2$, and each of those into the four values of $I_3$.

Figure 12: Tree structure.

(b) There are four states in the trellis (corresponding to the four possible values of the symbol $I_{k-1}$), and from each one there are four paths (corresponding to the four possible values of the symbol $I_k$). Hence, 16 metrics must be computed at each stage of the Viterbi algorithm.

(c) Since there are four states, the number of surviving sequences is also four.
(d) The metrics are
$$\mu_k = \begin{cases}(y_1 - 0.8I_1)^2, & k = 1,\\ \mu_{k-1} + (y_k - 0.8I_k + 0.6I_{k-1})^2, & k > 1.\end{cases}$$
Table 1 details the metrics for the first stage.

I_1    mu_1
 3     3.61
 1     0.09
-1     1.69
-3     8.41

Table 1: First stage metrics.

Table 2 details the metrics for the second stage.

I_2  I_1   mu_2(I_2, I_1)
 3    3     5.57
 3    1     0.13
 3   -1     2.69
 3   -3    13.25
 1    3    12.61
 1    1     3.33
 1   -1     2.05
 1   -3     8.77
-1    3    24.77
-1    1    11.65
-1   -1     6.53
-1   -3     9.41
-3    3    42.05
-3    1    25.09
-3   -1    16.13
-3   -3    15.17

Table 2: Second stage metrics.

The four surviving paths at this stage are $\min_{I_1}\left\{\mu_2(x, I_1)\right\}$, $x = 3, 1, -1, -3$:
$$(I_2, I_1) = (3, 1):\ \mu_2(3, 1) = 0.13$$
$$(I_2, I_1) = (1, -1):\ \mu_2(1, -1) = 2.05$$
$$(I_2, I_1) = (-1, -1):\ \mu_2(-1, -1) = 6.53$$
$$(I_2, I_1) = (-3, -3):\ \mu_2(-3, -3) = 15.17$$
Table 3 details the metrics for the third stage, obtained by extending each survivor.

I_3  I_2  I_1   mu_3(I_3, I_2, I_1)
 3    3    1     2.69
 3    1   -1     9.89
 3   -1   -1    22.53
 3   -3   -3    42.21
 1    3    1     0.13
 1    1   -1     3.49
 1   -1   -1    12.29
 1   -3   -3    28.13
-1    3    1     2.69
-1    1   -1     2.21
-1   -1   -1     7.17
-1   -3   -3    19.17
-3    3    1    10.37
-3    1   -1     6.05
-3   -1   -1     7.17
-3   -3   -3    15.33

Table 3: Third stage metrics.

The four surviving paths at this stage are $\min_{I_2, I_1}\left\{\mu_3(x, I_2, I_1)\right\}$, $x = 3, 1, -1, -3$:
$$(I_3, I_2, I_1) = (3, 3, 1):\ \mu_3(3, 3, 1) = 2.69$$
$$(I_3, I_2, I_1) = (1, 3, 1):\ \mu_3(1, 3, 1) = 0.13$$
$$(I_3, I_2, I_1) = (-1, 1, -1):\ \mu_3(-1, 1, -1) = 2.21$$
$$(I_3, I_2, I_1) = (-3, 1, -1):\ \mu_3(-3, 1, -1) = 6.05$$
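The recursion can be re-run programmatically; the sketch below (a hypothetical helper, with survivor paths stored in transmission order $(I_1, I_2, I_3)$) computes the four stage-3 survivors and their metrics directly from the metric recursion, independently of any hand-computed table:

```python
# A small Viterbi pass over the 4-PAM example.
levels = (3, 1, -1, -3)
y = [0.5, 2.0, -1.0]

# Stage 1: branch metric (y1 - 0.8*I1)^2, one survivor per state I1.
surv = {I1: ((y[0] - 0.8 * I1) ** 2, (I1,)) for I1 in levels}

# Stages 2 and 3: mu_k = mu_{k-1} + (y_k - 0.8*I_k + 0.6*I_{k-1})^2
for yk in y[1:]:
    new = {}
    for Ik in levels:
        # Extend every previous survivor to state Ik and keep the best path.
        new[Ik] = min(
            (m + (yk - 0.8 * Ik + 0.6 * path[-1]) ** 2, path + (Ik,))
            for m, path in surv.values()
        )
    surv = new

for Ik in levels:
    m, path = surv[Ik]
    print(path, round(m, 2))
```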
13 Equalization
1. [3, Problem 11.13].
This problem illustrates the noise enhancement of zero-forcing equalizers, and how this enhance-
ment can be mitigated using an MMSE approach. Consider a frequency-selective fading channel
with baseband frequency response
H(f) =
_

_
1, 0 [f[ < 10KHz,
1/2, 10KHz [f[ < 20KHz,
1/3, 20KHz [f[ < 30KHz,
1/4, 30KHz [f[ < 40KHz,
1/5, 40KHz [f[ < 50KHz,
0, otherwise.
The frequency response is symmetric in positive and negative frequencies. Assume an AWGN
channel with noise PSD N
0
= 10
9
W/Hz.
(a) Find a ZF analog equalizer that completely removes the ISI introduced by H(f).
(b) Find the total noise power at the output of the equalizer from item (1a).
(c) Assume a MMSE analog equalizer of the form H
eq
(f) =
1
H(f)+
. Find the total noise power
at the output of this equalizer for an AWGN input with PSD N
0
for = 0.5 and for = 1.
(d) Describe qualitatively two eects on a signal that is transmitted over channel H(f) and
then passed through the MMSE equalizer H
eq
(f) =
1
H(f)+
with > 0. What design
considerations should go into the choice of ?
(e) What happens to the total noise power for the MMSE equalizer in item (1c) as ?
What is the disadvantage of letting in this equalizer design?
52
Solution:

(a) H_zf(f) = 1/H(f) =
  1, 0 ≤ |f| < 10 kHz,
  2, 10 kHz ≤ |f| < 20 kHz,
  3, 20 kHz ≤ |f| < 30 kHz,
  4, 30 kHz ≤ |f| < 40 kHz,
  5, 40 kHz ≤ |f| < 50 kHz.

(b) The noise spectrum at the output of the filter is given by N(f) = N_0 |H_eq(f)|², and the noise power is given by the integral of N(f) from −50 kHz to 50 kHz:

N = ∫_{−50 kHz}^{50 kHz} N(f) df = 2N_0 ∫_0^{50 kHz} |H_eq(f)|² df
  = 2N_0 (1 + 4 + 9 + 16 + 25)(10 kHz)
  = 1.1 mW.

(c) The noise spectrum at the output of the filter is given by N(f) = N_0/(H(f) + α)², and the noise power is given by the integral of N(f) from −50 kHz to 50 kHz. For α = 0.5 we get

N = 2N_0 (0.44 + 1 + 1.44 + 1.78 + 2.04)(10 kHz) = 0.134 mW.

For α = 1 we get

N = 2N_0 (0.25 + 0.44 + 0.56 + 0.64 + 0.69)(10 kHz) = 0.0516 mW.

(d) As α increases, the frequency response H_eq(f) decreases for all f. Thus the noise power decreases, but the signal power decreases as well. The factor α should be chosen to balance maximizing the SNR and minimizing distortion, which also depends on the spectrum of the input signal (not given here).

(e) As α → ∞, the noise power goes to 0 because H_eq(f) → 0 for all f. However, the signal power also goes to zero.
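The band-by-band sums above are easy to check numerically. A minimal sketch (numpy assumed; the band edges, N_0 and α values are taken from the problem):

```python
import numpy as np

# Channel magnitudes on the five positive-frequency 10 kHz bands
H = np.array([1, 1/2, 1/3, 1/4, 1/5])
N0 = 1e-9       # noise PSD, W/Hz
band = 10e3     # width of each band, Hz

def noise_power(alpha):
    """Output noise power of H_eq(f) = 1/(H(f) + alpha); alpha = 0 gives the ZF case."""
    # Factor 2 accounts for the symmetric negative-frequency bands
    return 2 * N0 * band * np.sum(1.0 / (H + alpha) ** 2)

print(noise_power(0.0))   # ZF: 1.1e-3 W = 1.1 mW
print(noise_power(0.5))   # ~1.34e-4 W = 0.134 mW
print(noise_power(1.0))   # ~5.2e-5 W ~ 0.052 mW
```

Evaluating α = 0 reproduces the 1.1 mW zero-forcing figure; the small discrepancies against the hand-rounded sums in (c) come only from the two-decimal rounding used there.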
2. [1, Problem 10.10].
Binary PAM is used to transmit information over an unequalized linear filter channel. When a = 1 is transmitted, the noise-free output of the demodulator is

x_m =
  0.3, m = 1,
  0.9, m = 0,
  0.3, m = −1,
  0,   otherwise.

(a) Design a three-tap zero-forcing linear equalizer so that the output is

q_m =
  1, m = 0,
  0, m = ±1.

Remark 3. q_m does not have to be causal.

(b) Determine q_m for m = ±2, ±3, by convolving the impulse response of the equalizer with the channel response.
Solution:

(a) If by c_n we denote the coefficients of the FIR equalizer, then the equalized signal is

q_m = Σ_{n=−1}^{1} c_n x_{m−n}

which in matrix notation is written as

  [ 0.9  0.3  0   ] [ c_{−1} ]   [ 0 ]
  [ 0.3  0.9  0.3 ] [ c_0    ] = [ 1 ]
  [ 0    0.3  0.9 ] [ c_1    ]   [ 0 ]

The coefficients of the zero-forcing equalizer can be found by solving the above matrix equation. Thus

  [ c_{−1} ]   [ −0.4762 ]
  [ c_0    ] = [  1.4286 ]
  [ c_1    ]   [ −0.4762 ]

(b) The values of q_m for m = ±2, ±3 are given by

q_2 = Σ_{n=−1}^{1} c_n x_{2−n} = c_1 x_1 = −0.1429
q_{−2} = Σ_{n=−1}^{1} c_n x_{−2−n} = c_{−1} x_{−1} = −0.1429
q_3 = Σ_{n=−1}^{1} c_n x_{3−n} = 0
q_{−3} = Σ_{n=−1}^{1} c_n x_{−3−n} = 0.
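The 3×3 system and the residual ISI can be verified with a few lines of numpy (a sketch; the tap ordering [c_{−1}, c_0, c_1] is assumed):

```python
import numpy as np

# Channel samples: x_0 = 0.9, x_{+-1} = 0.3
x = np.array([0.3, 0.9, 0.3])

# Zero-forcing conditions q_m = delta_m for m = -1, 0, 1
X = np.array([[0.9, 0.3, 0.0],
              [0.3, 0.9, 0.3],
              [0.0, 0.3, 0.9]])
c = np.linalg.solve(X, [0.0, 1.0, 0.0])   # [c_-1, c_0, c_1]
print(c)          # ~ [-0.4762, 1.4286, -0.4762]

# Full equalized response q_m for m = -2..2, by convolution
q = np.convolve(c, x)
print(q)          # q_0 = 1, q_{+-1} = 0, q_{+-2} ~ -0.1429
```

The convolution confirms that the three-tap ZF equalizer nulls the ISI only at m = ±1; the residual ±0.1429 terms at m = ±2 remain.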
3. [1, Problem 10.15].
Repeat problem (2) using the MMSE as the criterion for optimizing the tap coefficients. Assume that the noise power spectral density is 0.1 W/Hz.

Solution:
A discrete-time transversal filter equivalent to the cascade of the transmitting filter g_T(t), the channel c(t), the matched filter at the receiver g_R(t) and the sampler has tap gain coefficients x_m, where

x_m =
  0.9, m = 0,
  0.3, m = ±1,
  0,   otherwise.

The noise ν_k at the output of the sampler is a zero-mean Gaussian sequence with autocorrelation function:

E[ν_k ν_l] = σ² x_{k−l}, |k − l| ≤ 1.

If the Z-transform of the sequence x_m, X(z), assumes the factorization

X(z) = F(z) F*(1/z*)

then the filter 1/F*(1/z*) can follow the sampler to whiten the noise sequence. In this case the output of the whitening filter, and input to the MSE equalizer, is the sequence

u_n = Σ_k I_k f_{n−k} + n_k

where n_k is zero-mean white Gaussian noise with variance σ². The optimum coefficients of the MSE equalizer, c_k, satisfy:

Σ_{n=−1}^{1} c_n ψ_{nk} = ξ_k, k = −1, 0, 1

where

ψ_{nk} = x_{n−k} + σ² δ_{n,k} for |n − k| ≤ 1, and 0 otherwise,
ξ_k = f_{−k} for −1 ≤ k ≤ 0, and 0 otherwise.

With

X(z) = 0.3z + 0.9 + 0.3z^{−1} = (f_0 + f_1 z^{−1})(f_0* + f_1* z)

we obtain the parameters f_0 and f_1 as:

f_0 = ±√0.7854 or ±√0.1146,  f_1 = ±√0.1146 or ±√0.7854.

The parameters f_0 and f_1 should have the same sign, since f_0 f_1 = 0.3. To have a stable inverse system 1/F*(1/z*), we select f_0 and f_1 in such a way that the zero of the system F*(1/z*) = f_0* + f_1* z is inside the unit circle. Thus, we choose f_0 = √0.1146 and f_1 = √0.7854, and therefore the desired system for the equalizer's coefficients is:

  [ 0.9 + 0.1  0.3        0         ] [ c_{−1} ]   [ √0.7854 ]
  [ 0.3        0.9 + 0.1  0.3       ] [ c_0    ] = [ √0.1146 ]
  [ 0          0.3        0.9 + 0.1 ] [ c_1    ]   [ 0       ]

Solving this system, we obtain

c_{−1} = 0.8596, c_0 = 0.0886, c_1 = −0.0266.
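The spectral factorization and the resulting MMSE system can be checked numerically (a sketch; numpy assumed):

```python
import numpy as np

sigma2 = 0.1                               # noise PSD
# f_0^2, f_1^2 are the roots of t^2 - 0.9 t + 0.09 = 0, from
# X(z) = 0.3 z + 0.9 + 0.3 z^{-1} = (f_0 + f_1 z^{-1})(f_0 + f_1 z)
f0, f1 = np.sqrt(0.1146), np.sqrt(0.7854)  # |f0| < |f1|: stable inverse of F*(1/z*)
assert abs(f0 * f1 - 0.3) < 1e-3 and abs(f0**2 + f1**2 - 0.9) < 1e-3

A = np.array([[0.9 + sigma2, 0.3, 0.0],
              [0.3, 0.9 + sigma2, 0.3],
              [0.0, 0.3, 0.9 + sigma2]])
b = np.array([f1, f0, 0.0])                # xi_k = f_{-k} for k = -1, 0; else 0
c = np.linalg.solve(A, b)                  # [c_-1, c_0, c_1]
print(c)   # ~ [0.8596, 0.0886, -0.0266]
```

Note how the σ² term on the diagonal regularizes the system relative to the ZF solution of problem (2), pulling the outer taps toward smaller magnitudes.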
4. [1, Problem 10.21]⁸.
Consider the following channel

y_n = (1/√2) I_n + (1/√2) I_{n−1} + v_n

where v_n is a real-valued white Gaussian noise sequence with zero mean and variance N_0. Suppose the channel is to be equalized by a DFE having a two-tap feedforward filter (c_{−1}, c_0) and a one-tap feedback filter (c_1). The c_i are optimized using the MSE criterion.

(a) Determine exactly the optimum coefficients as a function of N_0 and approximate their values for N_0 ≪ 1.

(b) Determine the exact value of the minimum MSE and find a first-order approximation (in terms of N_0) appropriate to the case N_0 ≪ 1. Assume E[I_n²] = 1.

(c) Determine the exact value of the output SNR for the three-tap equalizer as a function of N_0 and find a first-order approximation appropriate to the case N_0 ≪ 1.

(d) Compare the results in items (4b) and (4c) with the performance of the infinite-tap DFE.

(e) Evaluate and compare the exact values of the output SNR for the three-tap and infinite-tap DFE in the special cases N_0 = 0.1 and N_0 = 0.01. Comment on how well the three-tap equalizer performs relative to the infinite-tap equalizer.

⁸ Read [1, Sub-section 10.3.2] and [1, Example 10.3.1].
Solution:

(a) The tap coefficients of the feedforward filter are given by the following equations:

Σ_{j=−K_1}^{0} c_j ψ_{lj} = f*_{−l}, −K_1 ≤ l ≤ 0,

where

ψ_{lj} = Σ_{m=0}^{−l} f*_m f_{m+l−j} + N_0 δ_{lj}, −K_1 ≤ l, j ≤ 0.

The tap coefficients of the feedback filter of the DFE are given in terms of the coefficients of the feedforward section by the following equations:

c_k = −Σ_{j=−K_1}^{0} c_j f_{k−j}, 1 ≤ k ≤ K_2.

In this case K_1 = 1, resulting in the following two equations:

ψ_{0,0} c_0 + ψ_{0,−1} c_{−1} = f*_0
ψ_{−1,0} c_0 + ψ_{−1,−1} c_{−1} = f*_1

From the definition of ψ_{lj}, with f_0 = f_1 = 1/√2, the above system can be written as:

  [ 1/2 + N_0  1/2     ] [ c_0    ]   [ 1/√2 ]
  [ 1/2        1 + N_0 ] [ c_{−1} ] = [ 1/√2 ]

so:

  [ c_0    ]            1/√2            [ 1/2 + N_0 ]    [ √2      ]
  [ c_{−1} ]  =  ─────────────────────  [ N_0       ]  ≈ [ 2√2 N_0 ],  for N_0 ≪ 1.
                 N_0² + (3/2)N_0 + 1/4

The coefficient for the feedback section is:

c_1 = −c_0 f_1 = −(1/√2) c_0 ≈ −1, for N_0 ≪ 1.

(b)

J_min(1) = 1 − Σ_{j=−K_1}^{0} c_j f_{−j} = (2N_0² + N_0) / (2(N_0² + (3/2)N_0 + 1/4)) ≈ 2N_0, for N_0 ≪ 1.

(c)

γ = (1 − J_min(1)) / J_min(1) = (1 + 4N_0) / (2N_0(1 + 2N_0)) ≈ 1/(2N_0), for N_0 ≪ 1.

(d) For the infinite-tap DFE, we have from [1, Example 10.3.1]:

J_min = 2N_0 / (1 + N_0 + √((1 + N_0)² − 1)) ≈ 2N_0, for N_0 ≪ 1,

γ_∞ = (1 − J_min) / J_min = (1 + N_0 + √((1 + N_0)² − 1)) / (2N_0) − 1.

(e) For N_0 = 0.1 we have:

J_min(1) = 0.146, γ = 5.83 (7.66 dB)
J_min = 0.128, γ_∞ = 6.8 (8.32 dB)

For N_0 = 0.01 we have:

J_min(1) = 0.0193, γ = 51 (17.1 dB)
J_min = 0.0174, γ_∞ = 56.6 (17.5 dB)

The three-tap equalizer performs very well compared to the infinite-tap equalizer. The difference in performance is 0.6 dB for N_0 = 0.1 and 0.4 dB for N_0 = 0.01.
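The exact three-tap and infinite-tap expressions from items (4b)-(4e) can be evaluated numerically; a sketch (numpy assumed, f_0 = f_1 = 1/√2):

```python
import numpy as np

def three_tap_dfe(N0):
    """MSE-optimal DFE with feedforward taps (c_-1, c_0) and feedback tap c_1."""
    f = 1 / np.sqrt(2)
    Psi = np.array([[0.5 + N0, 0.5],       # rows l = 0, -1 of psi_{lj}
                    [0.5, 1.0 + N0]])
    c0, cm1 = np.linalg.solve(Psi, [f, f])
    Jmin = 1 - c0 * f - cm1 * f            # J_min(1) = 1 - sum_j c_j f_{-j}
    return Jmin, (1 - Jmin) / Jmin         # (minimum MSE, output SNR gamma)

def infinite_dfe(N0):
    Jmin = 2 * N0 / (1 + N0 + np.sqrt((1 + N0)**2 - 1))
    return Jmin, (1 - Jmin) / Jmin

for N0 in (0.1, 0.01):
    (J3, g3), (Ji, gi) = three_tap_dfe(N0), infinite_dfe(N0)
    print(f"N0={N0}: J(1)={J3:.4f}, {10*np.log10(g3):.2f} dB;"
          f" J_inf={Ji:.4f}, {10*np.log10(gi):.2f} dB")
```

Running this reproduces the 7.66/8.32 dB and 17.1/17.5 dB pairs quoted in item (4e).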
14 Non-Coherent Reception
1. Minimal frequency difference for orthogonality.

(a) Consider the signals

s_i(t) = √(2E/T) cos(2π f_i t) for 0 ≤ t ≤ T, and 0 otherwise, i = 0, 1.

Both frequencies obey f_i T ≫ 1, i = 0, 1. What is the minimal frequency difference, |f_0 − f_1|, required for the two signals, s_0(t) and s_1(t), to be orthogonal?

(b) Now an unknown phase is added to one of the signals:

s_0(t) = √(2E/T) cos(2π f_0 t), s_1(t) = √(2E/T) cos(2π f_1 t + φ), both for 0 ≤ t ≤ T and 0 otherwise.

Find the minimal frequency difference required for the two signals to be orthogonal, for an unknown φ.
Solution:
We first solve for the general case, and then assign φ = 0 for item 1a.

⟨s_0(t), s_1(t)⟩ = (2E/T) ∫_0^T cos(2π f_0 t) cos(2π f_1 t + φ) dt
= (1/2)(2E/T) ∫_0^T [cos(2π(f_0 + f_1)t + φ) + cos(2π(f_0 − f_1)t − φ)] dt
= E [ sin(2π(f_0 + f_1)t + φ) / (2π(f_0 + f_1)T) + sin(2π(f_0 − f_1)t − φ) / (2π(f_0 − f_1)T) ]_0^T

The first term is negligible because f_i T ≫ 1, so orthogonality demands

E · sin(2π(f_0 − f_1)t − φ) / (2π(f_0 − f_1)T) |_0^T = 0.

We now consider the special cases.

(a) For φ = 0:

⟨s_0(t), s_1(t)⟩ = 0 ⟹ sin(2π(f_0 − f_1)T) = 0 ⟹ 2π(f_0 − f_1)T = πn

where n is an integer, hence

|f_0 − f_1|_min = 1/(2T).

(b) For unknown φ:

⟨s_0(t), s_1(t)⟩ = 0 ⟹ sin(2π(f_0 − f_1)t − φ) |_0^T = 0
⟹ sin(2π(f_0 − f_1)T − φ) − sin(−φ) = 0
⟹ (2π(f_0 − f_1)T − φ) − (−φ) = n · 2π

where the last step follows from the demand that the result be zero for any φ; hence we require that the difference between (2π(f_0 − f_1)T − φ) and (−φ) equal n · 2π, where n is an integer.

Hence, the minimal frequency difference for the non-coherent scenario is

|f_0 − f_1|_min = 1/T.

We conclude that the non-coherent scenario requires double the bandwidth compared to the coherent scenario.
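Both minimal spacings can be confirmed by numerical integration of the inner product (a sketch; the choices T = 1 ms, f_0 T = 100 and unit amplitudes are arbitrary, and the E normalization is dropped):

```python
import numpy as np

T = 1e-3
t = np.linspace(0, T, 200_001)
dt = t[1] - t[0]

def inner(f0, f1, phi):
    """Trapezoidal approximation of the inner product of the two tones."""
    x = np.cos(2 * np.pi * f0 * t) * np.cos(2 * np.pi * f1 * t + phi)
    return np.sum((x[:-1] + x[1:]) / 2) * dt

f0 = 100e3                                   # f0*T = 100 >> 1
print(inner(f0, f0 + 1/(2*T), 0.0))          # ~0: spacing 1/(2T) works when phi = 0
print(inner(f0, f0 + 1/(2*T), 1.0))          # clearly nonzero for phi != 0
print(inner(f0, f0 + 1/T, 1.0))              # ~0: spacing 1/T works for any phi
```

The middle line shows why coherent (1/2T) tone spacing fails once an unknown phase enters, matching the factor-of-two bandwidth conclusion above.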
2. Non-coherent receiver for M orthogonal signals.
Consider the following M orthogonal signals

s_i(t) = √(2E/T) sin(ω_i t), 0 ≤ t ≤ T, i = 0, 1, . . . , M − 1.

The received signal is

r(t) = √(2E/T) sin(ω_i t + φ) + n(t)

where φ ∼ U[0, 2π) and n(t) is white Gaussian noise with power spectral density N_0/2.
The set {r_{s,i}, r_{c,i}}_{i=0}^{M−1} is a sufficient statistic for decoding r(t), where

r_{c,i} = ∫_0^T r(t) √(2/T) cos(ω_i t) dt,  r_{s,i} = ∫_0^T r(t) √(2/T) sin(ω_i t) dt.

In class it was obtained that the optimal receiver for equiprobable a-priori probabilities finds the maximal r_i² = r_{c,i}² + r_{s,i}², and chooses the respective s_i(t).
The pdfs of r_0 and r_i, i = 1, . . . , M − 1, given that s_0(t) was transmitted, are:

f(r_0|s_0) = (2r_0/N_0) e^{−r_0²/N_0} e^{−E/N_0} I_0(2√E r_0/N_0), r_0 ≥ 0
f(r_i|s_0) = (2r_i/N_0) e^{−r_i²/N_0}, r_i ≥ 0, i = 1, . . . , M − 1.

For equiprobable a-priori probabilities and M = 2, the error probability of the optimal receiver is

p(e) = (1/2) e^{−E/(2N_0)}.

Show that for equiprobable a-priori probabilities and general M, the error probability of the optimal receiver is

p(e) = Σ_{i=1}^{M−1} C(M−1, i) (−1)^{i+1} (1/(i+1)) e^{−(i/(i+1)) (E/N_0)}.

Guideline: Let A, B and C be i.i.d. RVs with pdf f_Y(y). Let X = max{A, B, C}. Derive the pdf f_X(x).
Solution:
Due to symmetry,

p(e) = Σ_{i=0}^{M−1} p(e|s_i) p(s_i) = p(e|s_0).

The probability of error given that s_0(t) was transmitted obeys

p(e|s_0) = Pr{r_max = max{r_1, . . . , r_{M−1}} > r_0 | s_0}.

Note: the r_i, i = 1, . . . , M − 1, are i.i.d.
For i.i.d. random variables y_1, . . . , y_n with pdf f_Y(y) and cdf F_Y(y), the cdf of y_max = max{y_1, . . . , y_n} obeys

F_{y_max}(y) = Pr{y_max < y} = Pr{y_1, . . . , y_n ≤ y} (a)= [F_Y(y)]^n
⟹ f_{y_max}(y) = n [F_Y(y)]^{n−1} f_Y(y)

where (a) follows from the fact that the random variables are i.i.d.
In order to find f(r_max|s_0) we need to find F(r_i|s_0):

F(r_i|s_0) = ∫_0^{r_i} (2t/N_0) e^{−t²/N_0} dt = 1 − e^{−r_i²/N_0}.

Hence

f(r_max|s_0) = (M − 1) (1 − e^{−r_max²/N_0})^{M−2} (2r_max/N_0) e^{−r_max²/N_0}.

f(r_max|s_0) can be expanded as follows:

f(r_max|s_0) = (M − 1) Σ_{j=0}^{M−2} C(M−2, j) (−e^{−r_max²/N_0})^j (2r_max/N_0) e^{−r_max²/N_0}
= Σ_{j=0}^{M−2} (M − 1)(−1)^j C(M−2, j) e^{−(j+1) r_max²/N_0} (2r_max/N_0)
(i = j + 1) = Σ_{i=1}^{M−1} (−1)^{i+1} C(M−1, i) e^{−i r_max²/N_0} (2 i r_max/N_0)

where the substitution i = j + 1 uses the identity (M − 1) C(M−2, i−1) = i C(M−1, i).

In order to calculate p(e|s_0) we need to integrate over the whole region in which r_max > r_0:

p(e|s_0) = ∫_{r_0=0}^{∞} f(r_0|s_0) ∫_{r_max=r_0}^{∞} f(r_max|s_0) dr_max dr_0.

Assigning f(r_max|s_0) to the inner integral yields

∫_{r_max=r_0}^{∞} f(r_max|s_0) dr_max = Σ_{i=1}^{M−1} (−1)^{i+1} C(M−1, i) ∫_{r_0}^{∞} (2 i r_max/N_0) e^{−i r_max²/N_0} dr_max   [Rayleigh tail]
= Σ_{i=1}^{M−1} (−1)^{i+1} C(M−1, i) e^{−i r_0²/N_0}.

Hence

p(e|s_0) = ∫_{r_0=0}^{∞} (2r_0/N_0) e^{−r_0²/N_0} e^{−E/N_0} I_0(2√E r_0/N_0) · Σ_{i=1}^{M−1} (−1)^{i+1} C(M−1, i) e^{−i r_0²/N_0} dr_0.

Multiplying each summand of p(e|s_0) by

((i+1)/(i+1)) · e^{(E/(i+1)²)/(N_0/(i+1))} e^{−(E/(i+1)²)/(N_0/(i+1))} = 1

and rearranging the summation elements yields

p(e|s_0) = Σ_{i=1}^{M−1} C(M−1, i) (−1)^{i+1} (1/(i+1)) e^{−(i/(i+1)) (E/N_0)}
           × ∫_0^∞ (2(i+1)r_0/N_0) e^{−r_0²/(N_0/(i+1))} e^{−(E/(i+1)²)/(N_0/(i+1))} I_0( (2√(E/(i+1)²) / (N_0/(i+1))) r_0 ) dr_0

where each remaining integral is over a Rice pdf (with parameters E/(i+1)² and N_0/(i+1)) and therefore equals 1, so

p(e) = p(e|s_0) = Σ_{i=1}^{M−1} C(M−1, i) (−1)^{i+1} (1/(i+1)) e^{−(i/(i+1)) (E/N_0)}.
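The closed form can be sanity-checked against the quoted M = 2 expression and against a direct Monte Carlo simulation of the square-law receiver (a sketch; the values E = 4, N_0 = 1, M = 4 and the seed are arbitrary choices):

```python
import numpy as np
from math import comb, exp

def pe(M, EN0):
    """p(e) = sum_{i=1}^{M-1} C(M-1,i) (-1)^{i+1} e^{-i/(i+1) E/N0} / (i+1)."""
    return sum(comb(M - 1, i) * (-1)**(i + 1) / (i + 1) * exp(-i / (i + 1) * EN0)
               for i in range(1, M))

# M = 2 collapses to the quoted (1/2) e^{-E/(2 N0)}
assert abs(pe(2, 4.0) - 0.5 * exp(-2.0)) < 1e-15

# Monte Carlo: square-law receiver, s_0 sent with a random uniform phase
rng = np.random.default_rng(0)
M, E, N0, n = 4, 4.0, 1.0, 200_000
sig = np.sqrt(N0 / 2)                       # per-component noise std
phi = rng.uniform(0, 2 * np.pi, n)
rc = sig * rng.standard_normal((n, M))
rs = sig * rng.standard_normal((n, M))
rc[:, 0] += np.sqrt(E) * np.cos(phi)        # signal projects with envelope sqrt(E)
rs[:, 0] += np.sqrt(E) * np.sin(phi)
p_hat = np.mean(np.argmax(rc**2 + rs**2, axis=1) != 0)
print(p_hat, pe(M, E / N0))                 # both ~ 0.146
```

Because the envelope metric r_i² = r_{c,i}² + r_{s,i}² is phase-invariant, the simulated error rate should match the closed form to within Monte Carlo noise.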
References
[1] J. G. Proakis, Digital Communications, 4th Edition, John Wiley and Sons, 2000.
[2] S. Haykin, Communication Systems, 4th Edition, John Wiley and Sons, 2000.
[3] A. Goldsmith, Wireless Communications, Cambridge University Press, 2006.