
Lecture1 Introduction

Digital Communications
College of Computer and Information
2
Lecturer: Dr. Yueheng Li
Office: Qinxue Building #4415
Email: Yueheng_li@hhu.edu.cn
Reference books
1) John G. Proakis, Digital Communications, 4th Edition.
   (English or Chinese version serves as the textbook.)
2) T. S. Rappaport, Wireless Communications: Principles and Practice, 2nd Edition.
Prerequisites
Probability, Random Processes, Matrix Theory, and
Principles of Communications
Pre-course (1)
Pre-course (2)
Course Contents
Chapter 1: Introduction
Chapter 2: Probability and Stochastic Processes
Chapter 4: Characteristics of Communication Signals
Chapter 5: Optimum Receivers for Additive White
Gaussian Noise Channel
Scoring method
Course attendance (30%) + final end-of-term exam (70%)
Pre-course (3)
Course Schedule
Introduction
Fig. 1-1 Simplified block diagram of a digital communication system:

Information source → Source encoder → Channel encoder → Digital modulator → Channel
Channel → Digital demodulator → Channel decoder → Source decoder → Output message
Block element: function explanation
Information source: speech, data, and/or images
Source encoder/decoder: maps the information source to binary data
  (using as few binary digits as possible)
Channel encoder/decoder: maps binary data to code words (for error protection)
Channel: the physical medium that carries the transmitted signals
Digital modulator/demodulator: converts code words into constellation signals
  for transmission over the channel
Introduction
Communication channels
Wireline channels: telephone network, coaxial cable, waveguide, etc.
Fiber-optic channels: optical fibers
Wireless electromagnetic channels (the main focus here):
  Ground-wave propagation (MF band)
  Sky-wave propagation (HF band)
  With/without line-of-sight (LOS) propagation (VHF band and higher)
Underwater acoustic channels
Storage channels: tape, CD, DVD, etc.
Introduction
Radio Band Designations
Introduction
Frequency        Wavelength      Radio band designation
30 - 300 Hz      10 - 1 Mm       ELF (extremely low frequency)
300 - 3000 Hz    1000 - 100 km   ULF (ultra low frequency)
3 - 30 kHz       100 - 10 km     VLF (very low frequency)
30 - 300 kHz     10 - 1 km       LF (low frequency)
300 - 3000 kHz   1000 - 100 m    MF (medium frequency)
3 - 30 MHz       100 - 10 m      HF (high frequency)
30 - 300 MHz     10 - 1 m        VHF (very high frequency)
300 - 3000 MHz   100 - 10 cm     UHF (ultra high frequency)
3 - 30 GHz       10 - 1 cm       SHF (super high frequency)
30 - 300 GHz     10 - 1 mm       EHF (extremely high frequency)
Commonly used channel models

Additive white Gaussian noise (AWGN) channel: the received signal is
$r(t) = s(t) + n(t)$
where $s(t)$ is the transmitted signal and $n(t)$ is white Gaussian noise.

Linear filter channel: the received signal is
$r(t) = s(t) \star c(t) + n(t)$
where $c(t)$ is the channel impulse response and $\star$ denotes convolution.

Introduction
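The AWGN model can be exercised numerically; below is a minimal sketch (NumPy assumed; the BPSK signal and noise variance are illustrative choices, not part of the slides):

```python
import numpy as np

rng = np.random.default_rng(0)

# Transmitted signal s(t): illustrative +/-1 (BPSK) symbols.
s = rng.choice([-1.0, 1.0], size=100_000)

# AWGN channel: r(t) = s(t) + n(t), with noise variance sigma^2.
sigma = 0.5
n = sigma * rng.standard_normal(s.size)
r = s + n

# Sanity checks: the noise is zero-mean with the chosen variance.
noise_mean = float(n.mean())
noise_var = float(n.var())
```

The received samples `r` can then be fed to any detector; the empirical noise statistics should match the model parameters.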
Introduction
Linear time-variant filter channel
(e.g., underwater acoustic channels or ionospheric radio channels):
$r(t) = s(t) \star c(\tau; t) + n(t) = \int_{-\infty}^{\infty} c(\tau; t)\, s(t - \tau)\, d\tau + n(t)$

A typical example: the time-variant multipath propagation channel in a mobile
communication environment,
$c(\tau; t) = \sum_{k=1}^{L} a_k(t)\, \delta(\tau - \tau_k(t))$
so that
$r(t) = \sum_{k=1}^{L} a_k(t)\, s(t - \tau_k(t)) + n(t)$
where $a_k(t)$ is the attenuation of the $k$th path and $\tau_k(t)$ its delay.
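A discrete-time snapshot of the multipath sum above can be sketched as follows (NumPy assumed; the three path gains and sample delays are illustrative, and the taps are frozen in time for simplicity):

```python
import numpy as np

rng = np.random.default_rng(1)

# Snapshot of r(t) = sum_k a_k s(t - tau_k) + n(t) with L = 3 paths.
gains = [1.0, 0.5, 0.25]          # a_k (illustrative)
delays = [0, 3, 7]                # tau_k in samples (illustrative)

s = rng.standard_normal(1000)     # transmitted signal samples
r = np.zeros(s.size + max(delays))
for a, d in zip(gains, delays):
    r[d:d + s.size] += a * s      # delayed, attenuated copy of s
r += 0.01 * rng.standard_normal(r.size)   # small additive noise

# The direct path dominates, so r correlates strongly with s at lag 0.
rho = float(np.corrcoef(r[:s.size], s)[0, 1])
```

A time-variant channel would additionally let `gains` evolve from sample to sample (e.g., with a fading process).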
Fig.1-2 Multipath fading channel
Introduction
End of Lecture1
Digital Communications
Lecture2 Probability Review Part I
Digital Communications
Probability
Some basic definitions
Probability or cumulative distribution function (c.d.f.):
$F(x) = P(X \le x) = \int_{-\infty}^{x} p(u)\, du, \qquad -\infty < x < \infty$
(physical meaning: the probability accumulated up to $x$)

Probability density function (p.d.f.):
$p(x) = dF(x)/dx$

Joint probability distribution and density functions:
$F_{XY}(x, y) = P(X \le x, Y \le y)$
$p_{XY}(x, y) = \dfrac{\partial^2 F_{XY}(x, y)}{\partial x\, \partial y}$  (partial derivatives)
Probability
Conditional probability density function:
$p_{X|Y}(x \mid y) = \dfrac{p_{XY}(x, y)}{p_Y(y)}$

Statistical independence:
$p_{XY}(x, y) = p_X(x)\, p_Y(y)$  (two variables)
$p_{X_1 X_2 \cdots X_N}(x_1, x_2, \ldots, x_N) = \prod_{i=1}^{N} p_{X_i}(x_i)$  (multiple variables)
Functions of random variables
Single-variable case
Let $X$ be a random variable (r.v.), and let $Y = g(X)$ be a function of the
r.v. $X$. Then the c.d.f. of $Y$ is
$F_Y(y) = P\{Y \le y\} = P\{g(x) \le y\} = \int_{g(x) \le y} p_X(x)\, dx$
Functions of random variables - Example 1
Let $Y = aX$ and $Z = X^2$, where $p_X(x) = e^{-x}$ for $x \ge 0$.
Find the p.d.f.s $p_Y(y)$ and $p_Z(z)$.

Solution: (1) For $a > 0$ and $y \ge 0$,
$F_Y(y) = P\{aX \le y\} = P\{X \le y/a\} = \int_0^{y/a} p_X(x)\, dx$
and taking the derivative,
$p_Y(y) = \frac{d}{dy} F_Y(y) = \frac{1}{a}\, p_X(y/a) = \frac{1}{a}\, e^{-y/a}, \qquad y \ge 0.$
Functions of random variables - Example 1
Solution (cont.): (2) For $a < 0$ and $y \le 0$,
$F_Y(y) = P\{aX \le y\} = P\{X \ge y/a\} = \int_{y/a}^{\infty} p_X(x)\, dx = 1 - \int_0^{y/a} p_X(x)\, dx$
and taking the derivative,
$p_Y(y) = \frac{d}{dy} F_Y(y) = -\frac{1}{a}\, p_X(y/a) = \frac{1}{|a|}\, e^{-y/a}, \qquad y \le 0.$

Combining (1) and (2):
$p_Y(y) = \frac{1}{|a|}\, e^{-|y/a|}$
Functions of random variables - Example 1
Solution (cont.): For $z \ge 0$,
$F_Z(z) = P\{X^2 \le z\} = P\{X \le \sqrt{z}\} = \int_0^{\sqrt{z}} p_X(x)\, dx$
and taking the derivative,
$p_Z(z) = \frac{1}{2\sqrt{z}}\, p_X(\sqrt{z}) = \frac{1}{2\sqrt{z}}\, e^{-\sqrt{z}}, \qquad z \ge 0.$

Please also see Examples 2-1-1 to 2-1-3 in the textbook for more exercises.
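The derived densities can be sanity-checked by Monte Carlo; a minimal sketch (NumPy assumed; $a = 2$ is an illustrative choice) compares sample means of $Y$ and $Z$ with the moments implied by the closed-form p.d.f.s:

```python
import numpy as np

rng = np.random.default_rng(2)

# X with p_X(x) = e^{-x}, x >= 0: the unit-rate exponential.
x = rng.exponential(scale=1.0, size=200_000)

a = 2.0
y = a * x        # p_Y(y) = (1/a) e^{-y/a}: exponential with mean a
z = x ** 2       # p_Z(z) = e^{-sqrt(z)} / (2 sqrt(z))

# Moments implied by the derived pdfs: E[Y] = a = 2, E[Z] = E[X^2] = 2.
mean_y = float(y.mean())
mean_z = float(z.mean())
```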
Functions of random variables
Multidimensional case
Let $X_1, X_2, \ldots, X_N$ be random variables with joint p.d.f.
$p_{\mathbf{X}}(x_1, x_2, \ldots, x_N)$.
If $Y_i = g_i(X_1, X_2, \ldots, X_N)$ and $X_i = g_i^{-1}(Y_1, Y_2, \ldots, Y_N)$ for
$i = 1, 2, \ldots, N$, then the joint p.d.f. of the multiple r.v.s $Y_i$ is
$p_{\mathbf{Y}}(y_1, y_2, \ldots, y_N) = p_{\mathbf{X}}(x_1 = g_1^{-1}, x_2 = g_2^{-1}, \ldots, x_N = g_N^{-1})\, |J|$
where $J$ is the Jacobian of the transformation with respect to
$y_1, y_2, \ldots, y_N$, defined by the determinant that follows.
$J = \det \begin{pmatrix}
\dfrac{\partial g_1^{-1}}{\partial y_1} & \cdots & \dfrac{\partial g_N^{-1}}{\partial y_1} \\
\vdots & \ddots & \vdots \\
\dfrac{\partial g_1^{-1}}{\partial y_N} & \cdots & \dfrac{\partial g_N^{-1}}{\partial y_N}
\end{pmatrix}$

Functions of random variables - multidimensional case
Functions of random variables - Example 2
Let $X, Y$ be two independent Gaussian variables with zero mean and the same
variance $\sigma^2$. Find the distributions of the magnitude $R$ and the angle
$\Theta$ of the variable $Z = X + jY = R e^{j\Theta}$.

Solution: with
$X = R\cos\Theta = g_1^{-1}(R, \Theta), \qquad Y = R\sin\Theta = g_2^{-1}(R, \Theta)$
the Jacobian is
$J = \begin{vmatrix} \cos\theta & \sin\theta \\ -r\sin\theta & r\cos\theta \end{vmatrix} = r\cos^2\theta + r\sin^2\theta = r.$

Since
$p_{X,Y}(x, y) = \frac{1}{2\pi\sigma^2} \exp\left\{-\frac{x^2 + y^2}{2\sigma^2}\right\}$
we obtain
$p_{R,\Theta}(r, \theta) = \frac{r}{2\pi\sigma^2} \exp\left\{-\frac{r^2\cos^2\theta + r^2\sin^2\theta}{2\sigma^2}\right\} = \frac{r}{2\pi\sigma^2}\, e^{-r^2/2\sigma^2}$

The marginals are
$p_R(r) = \int_0^{2\pi} p_{R,\Theta}(r, \theta)\, d\theta = \frac{r}{\sigma^2}\, e^{-r^2/2\sigma^2}$  (Rayleigh distribution)
$p_\Theta(\theta) = \int_0^{\infty} p_{R,\Theta}(r, \theta)\, dr = \frac{1}{2\pi}$  (uniform distribution)

Therefore $p_{R,\Theta}(r, \theta) = p_R(r)\, p_\Theta(\theta)$; that is, the r.v.s
$R$ and $\Theta$ are independent.
End of Lecture2
Digital Communications
Lecture3 Probability Review Part 2
Digital Communications
Statistical Averages
Mean or expected value of $X$:
$m_X = E(X) = \int_{-\infty}^{\infty} x\, p(x)\, dx$

Expected value of $Y = g(X)$:
$E(Y) = \int_{-\infty}^{\infty} g(x)\, p(x)\, dx$

Variance:
$\sigma_X^2 = E[(X - m_X)^2] = \int_{-\infty}^{\infty} (x - m_X)^2\, p(x)\, dx$

$n$th-order central moment:
$E[(X - m_X)^n] = \int_{-\infty}^{\infty} (x - m_X)^n\, p(x)\, dx$

$n$th-order moment:
$E(X^n) = \int_{-\infty}^{\infty} x^n\, p(x)\, dx$
Statistical Averages (cont.)
Joint moment:
$E(X_1^k X_2^n) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} x_1^k x_2^n\, p(x_1, x_2)\, dx_1\, dx_2$

Correlation ($k = n = 1$):
$E(X_1 X_2) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} x_1 x_2\, p(x_1, x_2)\, dx_1\, dx_2$

Joint central moment:
$E[(X_1 - m_1)^k (X_2 - m_2)^n] = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} (x_1 - m_1)^k (x_2 - m_2)^n\, p(x_1, x_2)\, dx_1\, dx_2$

Covariance ($k = n = 1$):
$E[(X_1 - m_1)(X_2 - m_2)] = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} (x_1 - m_1)(x_2 - m_2)\, p(x_1, x_2)\, dx_1\, dx_2 = E(X_1 X_2) - m_1 m_2$
Orthogonal, Uncorrelated and Independent
Orthogonal: $E(X_1 X_2) = 0$; that is, the correlation value is zero.
Uncorrelated: $E(X_1 X_2) = E(X_1)\, E(X_2)$
Independent: $p_{X_1 X_2}(x_1, x_2) = p_{X_1}(x_1)\, p_{X_2}(x_2)$

Uncorrelated $\Rightarrow$ orthogonal when $m_1 = 0$ or $m_2 = 0$ (obviously)
Independent $\Rightarrow$ uncorrelated (easy)
Independent $\nLeftarrow$ uncorrelated (see the example on the next page)
Orthogonal, Uncorrelated and Independent (cont.)
Example: Assume the variables $(X, Y)$ are uniformly distributed on the disk
$x^2 + y^2 \le r^2$; that is, the joint p.d.f. is
$p(x, y) = \begin{cases} \dfrac{1}{\pi r^2}, & x^2 + y^2 \le r^2 \\ 0, & x^2 + y^2 > r^2 \end{cases}$

The marginal p.d.f.s of $X$ and $Y$ are
$p(x) = \int_{-\sqrt{r^2 - x^2}}^{\sqrt{r^2 - x^2}} p(x, y)\, dy = \begin{cases} \dfrac{2\sqrt{r^2 - x^2}}{\pi r^2}, & |x| \le r \\ 0, & |x| > r \end{cases}$
$p(y) = \int_{-\sqrt{r^2 - y^2}}^{\sqrt{r^2 - y^2}} p(x, y)\, dx = \begin{cases} \dfrac{2\sqrt{r^2 - y^2}}{\pi r^2}, & |y| \le r \\ 0, & |y| > r \end{cases}$

It is easy to see that $p(x, y) \ne p(x)\, p(y)$  (not independent).
Orthogonal, Uncorrelated and Independent (cont.)
Example (cont.): However, the expectations of the variables $(X, Y)$ satisfy
$E(X) = \int_{-r}^{r} x\, p(x)\, dx = 0, \qquad E(Y) = \int_{-r}^{r} y\, p(y)\, dy = 0$
and
$E(XY) = \iint x y\, p(x, y)\, dx\, dy = \frac{1}{\pi r^2} \int_{-r}^{r} x \left( \int_{-\sqrt{r^2 - x^2}}^{\sqrt{r^2 - x^2}} y\, dy \right) dx = 0.$

Obviously we have
$E(XY) = E(X)\, E(Y) = 0$  (uncorrelated).
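The "uncorrelated but not independent" behaviour on the disk can be seen numerically; a minimal sketch (NumPy assumed; unit radius and the conditioning threshold 0.9 are illustrative) draws points by rejection sampling and shows both $E[XY] \approx 0$ and that knowing $|X|$ is large shrinks the spread of $Y$:

```python
import numpy as np

rng = np.random.default_rng(4)

# (X, Y) uniform on the unit disk via rejection sampling from the square.
pts = rng.uniform(-1.0, 1.0, size=(400_000, 2))
pts = pts[(pts ** 2).sum(axis=1) <= 1.0]
x, y = pts[:, 0], pts[:, 1]

# Uncorrelated: E[XY] = E[X]E[Y] = 0 ...
exy = float((x * y).mean())

# ... but not independent: given |X| large, |Y| is forced to be small.
var_y_all = float(y.var())                    # ~ 1/4 on the unit disk
var_y_edge = float(y[np.abs(x) > 0.9].var())  # markedly smaller
```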
Characteristic Functions
Definition:
$\psi(jv) = E(e^{jvX}) = \int_{-\infty}^{\infty} e^{jvx}\, p_X(x)\, dx$  (Fourier transform)
$p(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} \psi(jv)\, e^{-jvx}\, dv$  (inverse Fourier transform)

Property I:
$\psi(jv) = \sum_{n=0}^{\infty} \frac{(jv)^n}{n!}\, E(X^n), \qquad E(X^n) = (-j)^n \left. \frac{d^n \psi(jv)}{dv^n} \right|_{v=0}$
(More details: see textbook pp. 34-35.)

Property II: If $X_1, X_2, \ldots, X_N$ are independent and $Y = \sum_{i=1}^{N} X_i$, then
$\psi_Y(jv) = E(e^{jvY}) = \prod_{i=1}^{N} \psi_{X_i}(jv)$
(More details: see textbook p. 35.)
Characteristic Functions - Examples
Problem: Let the $X_i$ be independent and identically distributed (i.i.d.)
r.v.s with $P\{X_i = 0\} = 1 - p$ and $P\{X_i = 1\} = p$ (Bernoulli r.v.s).
Find the distribution of $Y = \sum_{i=1}^{n} X_i$.

Solution:
$\psi_Y(jv) = E(e^{jvY}) = \prod_{i=1}^{n} E(e^{jvX_i}) = \prod_{i=1}^{n} [(1-p) + p\,e^{jv}] = [(1-p) + p\,e^{jv}]^n = \sum_{k=0}^{n} \binom{n}{k} p^k (1-p)^{n-k}\, e^{jvk}$
using the binomial theorem $(a + b)^n = \sum_{k=0}^{n} \binom{n}{k} a^k b^{n-k}$.

Taking the inverse transform,
$p_Y(y) = \frac{1}{2\pi} \int \psi_Y(jv)\, e^{-jvy}\, dv = \sum_{k=0}^{n} \binom{n}{k} p^k (1-p)^{n-k}\, \frac{1}{2\pi} \int e^{jv(k-y)}\, dv = \sum_{k=0}^{n} \binom{n}{k} p^k (1-p)^{n-k}\, \delta(y - k)$
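The characteristic-function argument predicts that a sum of Bernoulli r.v.s is binomial; a minimal numeric check (NumPy assumed; $n = 10$, $p = 0.3$ and the probe value $Y = 3$ are illustrative):

```python
import math
import numpy as np

rng = np.random.default_rng(5)
n, p = 10, 0.3

# Y = sum of n i.i.d. Bernoulli(p) r.v.s should be Binomial(n, p).
y = rng.binomial(1, p, size=(100_000, n)).sum(axis=1)

# Empirical frequency of Y = 3 vs. the binomial pmf C(10,3) p^3 (1-p)^7.
emp = float(np.mean(y == 3))
theory = math.comb(n, 3) * p**3 * (1 - p) ** (n - 3)
```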
Characteristic Functions - Examples (cont.)
$P\{Y = k\} = P\{k - \varepsilon \le Y < k + \varepsilon\} = \int_{k-\varepsilon}^{k+\varepsilon} p_Y(y)\, dy = \binom{n}{k} p^k (1-p)^{n-k} \int_{k-\varepsilon}^{k+\varepsilon} \delta(y - k)\, dy = \binom{n}{k} p^k (1-p)^{n-k}$
($\varepsilon$ is a very small positive number.)

That is,
$p_Y(y) = (1-p)^n\, \delta(y) + n p (1-p)^{n-1}\, \delta(y - 1) + \cdots + \binom{n}{k} p^k (1-p)^{n-k}\, \delta(y - k) + \cdots$
Binomial Distribution
Bernoulli random variable:
$X = 0$ with probability $1 - p$ and $X = 1$ with probability $p$.

Binomial random variable:
If the $X_i$ are i.i.d. Bernoulli random variables, then $Y = \sum_{i=1}^{n} X_i$ is
a binomial random variable with
$P\{Y = k\} = \binom{n}{k} p^k (1-p)^{n-k}, \qquad k = 0, 1, \ldots, n$
or, equivalently,
$p_Y(y) = \sum_{k=0}^{n} \binom{n}{k} p^k (1-p)^{n-k}\, \delta(y - k)$  (previous example).
Gaussian Distribution
A Gaussian random variable $X \sim N(m, \sigma^2)$ has the p.d.f.
$p(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-(x-m)^2 / 2\sigma^2}$
where $m = E(X)$ is the mean and $\sigma^2 = E[(X - m)^2]$ is the variance.
The r.v. $X \sim N(0, 1)$ has a standard normal density.

The probability or cumulative distribution function (c.d.f.) of $X$, $F_X(x)$, is
$F_X(x) = \int_{-\infty}^{x} \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-(u-m)^2 / 2\sigma^2}\, du$

The tail of the c.d.f. of a standard normal distribution defines the so-called
$Q$ function:
$Q(x) = \int_{x}^{\infty} \frac{1}{\sqrt{2\pi}}\, e^{-u^2/2}\, du$
and the c.d.f. defines the $\Phi$ function:
$\Phi(x) = 1 - Q(x) = \int_{-\infty}^{x} \frac{1}{\sqrt{2\pi}}\, e^{-u^2/2}\, du$
Gaussian Distribution (cont.)
If $X \sim N(m, \sigma^2)$ is a non-standard normal r.v., then
$F_X(x) = \Phi\!\left(\frac{x - m}{\sigma}\right), \qquad F_X^c(x) = Q\!\left(\frac{x - m}{\sigma}\right)$

The error function $\mathrm{erf}(x)$ and the complementary error function
$\mathrm{erfc}(x)$ are defined by
$\mathrm{erf}(x) = \frac{2}{\sqrt{\pi}} \int_0^{x} e^{-t^2}\, dt, \qquad \mathrm{erfc}(x) = \frac{2}{\sqrt{\pi}} \int_x^{\infty} e^{-t^2}\, dt$

The complementary error function and the $Q$ function are related as follows:
$\mathrm{erfc}(x) = 2\, Q(\sqrt{2}\, x), \qquad Q(x) = \tfrac{1}{2}\, \mathrm{erfc}\!\left(\frac{x}{\sqrt{2}}\right)$
(Please derive these as homework.)
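The $Q$-$\mathrm{erfc}$ relation is also how $Q(x)$ is typically computed in practice, since `erfc` is a standard special function; a minimal sketch (SciPy assumed):

```python
import numpy as np
from scipy.special import erfc

def qfunc(x):
    """Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * erfc(x / np.sqrt(2.0))

# Spot checks: Q(0) = 1/2, and Q(-x) + Q(x) = 1 by symmetry.
q0 = float(qfunc(0.0))
sym = float(qfunc(-1.0) + qfunc(1.0))
```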
Multivariate Gaussian Distribution
Let $X_i \sim N(m_i, \sigma_i^2)$, $i = 1, 2, \ldots, n$, be correlated Gaussian
r.v.s with covariances
$r_{i,j} = E[(X_i - m_i)(X_j - m_j)] = E(X_i X_j) - m_i m_j, \qquad i, j = 1, \ldots, n.$

Let
$\mathbf{x} = (X_1, X_2, \ldots, X_n)^T, \qquad \mathbf{m} = (m_1, m_2, \ldots, m_n)^T$
$\mathbf{M} = E\{(\mathbf{x} - \mathbf{m})(\mathbf{x} - \mathbf{m})^T\} = \begin{pmatrix} r_{11} & r_{12} & \cdots & r_{1n} \\ \vdots & & \ddots & \vdots \\ r_{n1} & r_{n2} & \cdots & r_{nn} \end{pmatrix}$
where the superscript $T$ represents the transposition operation.
Multivariate Gaussian Distribution (cont.)
The joint p.d.f. of the vector $\mathbf{x}$ defines the so-called multivariate
Gaussian distribution:
$p(\mathbf{x}) = \frac{1}{(2\pi)^{n/2}\, |\mathbf{M}|^{1/2}} \exp\{-\tfrac{1}{2} (\mathbf{x} - \mathbf{m})^T \mathbf{M}^{-1} (\mathbf{x} - \mathbf{m})\}$
where $|\mathbf{M}|$ is the determinant of the matrix $\mathbf{M}$.

For the case of two Gaussian random variables, $\mathbf{m} = (m_1, m_2)^T$ and
$\mathbf{M} = \begin{pmatrix} \sigma_1^2 & r_{12} \\ r_{21} & \sigma_2^2 \end{pmatrix}, \qquad \rho = \frac{r_{12}}{\sigma_1 \sigma_2} = \frac{r_{21}}{\sigma_1 \sigma_2}$
so that
$|\mathbf{M}| = \sigma_1^2 \sigma_2^2 (1 - \rho^2), \qquad \mathbf{M}^{-1} = \frac{1}{\sigma_1^2 \sigma_2^2 (1 - \rho^2)} \begin{pmatrix} \sigma_2^2 & -r_{12} \\ -r_{21} & \sigma_1^2 \end{pmatrix}$
Multivariate Gaussian Distribution (cont.)
For zero mean, we have
$p_{X_1, X_2}(x_1, x_2) = \frac{1}{2\pi \sigma_1 \sigma_2 \sqrt{1 - \rho^2}} \exp\left\{ -\frac{1}{2(1 - \rho^2)} \left[ \frac{x_1^2}{\sigma_1^2} - \frac{2\rho\, x_1 x_2}{\sigma_1 \sigma_2} + \frac{x_2^2}{\sigma_2^2} \right] \right\}$

The remaining r.v.s that we are interested in are all just functions of
independent Gaussian r.v.s. For example:
Rayleigh: $R = \sqrt{X_1^2 + X_2^2}$, where $X_{1,2} \sim N(0, \sigma^2)$
Rice: $R = \sqrt{X_1^2 + X_2^2}$, where $X_1 \sim N(m_1, \sigma^2)$ and $X_2 \sim N(m_2, \sigma^2)$
Central chi-square: $R = \sum_{i=1}^{n} X_i^2$, where $X_i \sim N(0, \sigma^2)$
Non-central chi-square: $R = \sum_{i=1}^{n} X_i^2$, where $X_i \sim N(m_i, \sigma^2)$
Rayleigh Distribution
Let $X_1, X_2 \sim N(0, \sigma^2)$ be two i.i.d. r.v.s. Then the magnitude
$R = \sqrt{X_1^2 + X_2^2}$ and the phase $\Theta = \arctan(X_2 / X_1)$ of the
complex r.v. $Z = X_1 + jX_2 = R e^{j\Theta}$ are independent
(as proved in Lecture 2, Example 2).

The variable $R$ is called a Rayleigh variable, with p.d.f.
$p_R(r) = \frac{r}{\sigma^2}\, e^{-r^2 / 2\sigma^2}, \qquad 0 \le r < \infty$
and $\Theta$ is uniformly distributed over $[0, 2\pi]$.
Rice Distribution
Let $X_1, X_2$ be independent r.v.s with distributions $X_1 \sim N(m_1, \sigma^2)$
and $X_2 \sim N(m_2, \sigma^2)$. Then $R = \sqrt{X_1^2 + X_2^2}$ has a Rice
distribution.

The p.d.f. of $R$ is
$p_R(r) = \frac{r}{\sigma^2}\, e^{-(r^2 + s^2)/2\sigma^2}\, I_0\!\left(\frac{rs}{\sigma^2}\right), \qquad r \ge 0$
where $s^2 = m_1^2 + m_2^2$ and $I_0(x)$ is the zero-order modified Bessel
function of the first kind, defined as
$I_0(x) \stackrel{\mathrm{def}}{=} \frac{1}{\pi} \int_0^{\pi} e^{x \cos\theta}\, d\theta = \frac{1}{2\pi} \int_0^{2\pi} e^{x \cos\theta}\, d\theta$
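The closed-form Rice p.d.f. can be checked against simulation; a minimal sketch (NumPy assumed, including its built-in `np.i0`; the means, variance, probe point $r = s$, and bin half-width are illustrative) compares an empirical density estimate with the formula:

```python
import numpy as np

rng = np.random.default_rng(6)
m1, m2, sigma = 1.0, 1.0, 0.8
s = float(np.hypot(m1, m2))

# R = sqrt(X1^2 + X2^2) with X_i ~ N(m_i, sigma^2): Rice distributed.
x1 = m1 + sigma * rng.standard_normal(300_000)
x2 = m2 + sigma * rng.standard_normal(300_000)
r = np.hypot(x1, x2)

def rice_pdf(r, s, sigma):
    # p_R(r) = (r / sigma^2) exp(-(r^2 + s^2) / 2 sigma^2) I_0(r s / sigma^2)
    return (r / sigma**2) * np.exp(-(r**2 + s**2) / (2 * sigma**2)) * np.i0(r * s / sigma**2)

# Empirical density near r = s vs. the closed form.
h = 0.05
emp = float(np.mean(np.abs(r - s) < h) / (2 * h))
theory = float(rice_pdf(s, s, sigma))
```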
Rice Distribution (cont.)
Proof: Define $V = \arctan(X_2 / X_1)$ and $\theta_t = \arctan(m_2 / m_1)$, so
that $m_1 = s\cos\theta_t$ and $m_2 = s\sin\theta_t$. The transformation
$r = \sqrt{x_1^2 + x_2^2}, \qquad v = \arctan(x_2 / x_1)$
has inverse $x_1 = r\cos v$, $x_2 = r\sin v$, with Jacobian
$J = \begin{vmatrix} \cos v & \sin v \\ -r\sin v & r\cos v \end{vmatrix} = r\cos^2 v + r\sin^2 v = r.$

Therefore, for $v \in [0, 2\pi]$ and $0 \le \theta_t \le 2\pi$,
$p_{R,V}(r, v) = p_{X_1, X_2}(x_1, x_2)\, |J| = \frac{r}{2\pi\sigma^2} \exp\left\{-\frac{(x_1 - m_1)^2 + (x_2 - m_2)^2}{2\sigma^2}\right\} = \frac{r}{2\pi\sigma^2} \exp\left\{-\frac{r^2 + s^2 - 2rs(\cos v \cos\theta_t + \sin v \sin\theta_t)}{2\sigma^2}\right\}$
Rice Distribution (cont.)
$p_{R,V}(r, v) = \frac{r}{2\pi\sigma^2} \exp\left\{-\frac{r^2 + s^2 - 2rs\cos(v - \theta_t)}{2\sigma^2}\right\}$

The marginal p.d.f. of $R$ is
$p_R(r) = \int_0^{2\pi} p_{R,V}(r, v)\, dv = \frac{r}{\sigma^2}\, e^{-(r^2 + s^2)/2\sigma^2} \cdot \underbrace{\frac{1}{2\pi} \int_0^{2\pi} e^{rs\cos(v - \theta_t)/\sigma^2}\, dv}_{I_0(rs/\sigma^2)} = \frac{r}{\sigma^2}\, e^{-(r^2 + s^2)/2\sigma^2}\, I_0\!\left(\frac{rs}{\sigma^2}\right)$

Using the definition of the zero-order modified Bessel function of the first
kind, we get the final result.
Central Chi-Square Distribution
Central chi-square: $R = \sum_{i=1}^{n} X_i^2$, where $X_i \sim N(0, \sigma^2)$.
The p.d.f. of $R$ is
$p_R(r) = \frac{1}{(2\sigma^2)^{n/2}\, \Gamma(n/2)}\, r^{n/2 - 1}\, e^{-r/2\sigma^2}, \qquad r \ge 0$
where $\Gamma(p)$ is the Gamma function,
$\Gamma(p) = \int_0^{\infty} t^{p-1} e^{-t}\, dt; \quad \Gamma(\tfrac{1}{2}) = \sqrt{\pi}, \quad \Gamma(\tfrac{3}{2}) = \tfrac{1}{2}\sqrt{\pi}, \quad \Gamma(p) = (p-1)! \text{ for integer } p.$

When $n/2$ is an integer $m$, the c.d.f. of $R$ is
$F_R(r) = 1 - e^{-r/2\sigma^2} \sum_{k=0}^{m-1} \frac{1}{k!} \left(\frac{r}{2\sigma^2}\right)^k, \qquad r \ge 0$

The exponential distribution is the special case $n = 2$:
$p_R(r) = \frac{1}{2\sigma^2}\, e^{-r/2\sigma^2}, \qquad F_R(r) = 1 - e^{-r/2\sigma^2}$
Nakagami m-Distribution
$p_R(r) = \frac{2}{\Gamma(m)} \left(\frac{m}{\Omega}\right)^m r^{2m-1}\, e^{-m r^2 / \Omega}, \qquad m \ge 1/2$
where
$\Omega = E(R^2), \qquad m = \frac{\Omega^2}{E[(R^2 - \Omega)^2]}$  (the fading figure)
and $\Gamma(p) = \int_0^{\infty} t^{p-1} e^{-t}\, dt$ is the Gamma function, with
$\Gamma(\tfrac{1}{2}) = \sqrt{\pi}$, $\Gamma(\tfrac{3}{2}) = \tfrac{1}{2}\sqrt{\pi}$, and $\Gamma(p) = (p-1)!$ for integer $p$.

By setting $m = 1$, the Nakagami-m distribution reduces to the common Rayleigh
fading p.d.f.
Upper Bounds on the Tail Probability - Chebyshev Inequality
Chebyshev inequality:
$P\{|X - m_x| \ge \delta\} \le \frac{\sigma^2}{\delta^2}$
where $m_x = E[X]$, $\sigma^2 = E[(X - m_x)^2]$, and $\delta$ is any positive
number.

Proof:
$P\{|X - m_x| \ge \delta\} = \int_{|x - m_x| \ge \delta} p(x)\, dx \le \int_{|x - m_x| \ge \delta} \frac{(x - m_x)^2}{\delta^2}\, p(x)\, dx \le \frac{1}{\delta^2} \int_{-\infty}^{\infty} (x - m_x)^2\, p(x)\, dx = \frac{\sigma^2}{\delta^2}$
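Because the bound uses only the variance, it holds for any distribution but is usually loose; a minimal sketch (NumPy assumed; the exponential test distribution and the values of $\delta$ are illustrative) compares the bound with the actual tail:

```python
import numpy as np

rng = np.random.default_rng(7)

# Chebyshev: P{|X - m| >= delta} <= sigma^2 / delta^2 for any distribution.
x = rng.exponential(scale=1.0, size=200_000)   # mean 1, variance 1
m, var = 1.0, 1.0

for delta in (1.5, 2.0, 3.0):
    tail = np.mean(np.abs(x - m) >= delta)
    bound = var / delta**2
    assert tail <= bound      # the bound holds, but is quite loose

# For delta = 2 the exact tail is P{X >= 3} = e^{-3} ~ 0.0498 vs bound 0.25.
tail_2 = float(np.mean(np.abs(x - 1.0) >= 2.0))
```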
Upper Bounds on the Tail Probability - Chernoff Bound
Chernoff bound:
$P\{X \ge \delta\} \le e^{-s\delta}\, E(e^{sX}) \qquad \text{for all } s > 0$

Proof:
$P\{X \ge \delta\} = \int_{\delta}^{\infty} p(x)\, dx \le \int_{\delta}^{\infty} e^{s(x - \delta)}\, p(x)\, dx \le \int_{-\infty}^{\infty} e^{s(x - \delta)}\, p(x)\, dx = e^{-s\delta}\, E(e^{sX})$

The tightest upper bound is obtained by selecting the value of the parameter
$s$ satisfying
$\frac{d}{ds} E[e^{s(X - \delta)}] = 0.$
(See Problem 2-18 for the bound $Q(x) \le e^{-x^2/2}$.)
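For a standard Gaussian, $E[e^{sX}] = e^{s^2/2}$, and optimizing over $s$ gives $s = x$ and the bound $Q(x) \le e^{-x^2/2}$ quoted above; a minimal numeric check (SciPy assumed; the grid of test points is illustrative):

```python
import numpy as np
from scipy.special import erfc

def qfunc(x):
    return 0.5 * erfc(x / np.sqrt(2.0))

# Chernoff bound for the Gaussian tail: Q(x) <= exp(-x^2 / 2), x > 0,
# from e^{-s x} E[e^{sX}] = e^{-s x + s^2/2} minimized at s = x.
xs = np.linspace(0.5, 5.0, 10)
q = qfunc(xs)
bound = np.exp(-xs**2 / 2)
ok = bool(np.all(q <= bound))
```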
End of Lecture3
Digital Communications
Lecture 4 Stochastic Process Part 1
Digital Communications
Stochastic Process - Definition
A random process or stochastic process, $X(t)$, is an ensemble of sample
functions $\{X_1(t), X_2(t), \ldots, X_\zeta(t)\}$ together with a probability
rule which assigns a probability to any event associated with the observation
of these functions.

The sample function $X_1(t)$ corresponds to the sample point $s_1$ in the
sample space and occurs with probability $P(s_1)$.
$\zeta$, the number of different sample functions, may be finite or infinite.
Sample functions may be defined at discrete or continuous time instants $t$,
which determines whether the stochastic process is discrete-time or
continuous-time.
Sample-function values may likewise be discrete or continuous.
Since $X_i(t_1)$ occurs with probability $P(s_i)$, the collection of numbers
$\{X_i(t_1)\}, i = 1, 2, \ldots$, forms a random variable, denoted by $X(t_1)$.
The ensemble of the sample functions constitutes the sample space.
Stochastic Process (S.P.)
(Figure: sample functions $X_1(t), X_2(t), \ldots$ of a stochastic process.)
Stochastic Process
The collection of $n$ random variables $X(t_1), X(t_2), \ldots, X(t_n)$ has the
joint c.d.f.
$F_{X(t_1), \ldots, X(t_n)}(x_1, x_2, \ldots, x_n) = P\{X(t_1) \le x_1, \ldots, X(t_n) \le x_n\}$

A more compact notation can be obtained by defining the vectors
$\mathbf{X}(\mathbf{t}) = (X(t_1), X(t_2), \ldots, X(t_n)), \qquad \mathbf{x} = (x_1, x_2, \ldots, x_n)$
Then the joint c.d.f. and joint p.d.f. are defined respectively as
$F_{\mathbf{X}(\mathbf{t})}(\mathbf{x}) = P\{\mathbf{X}(\mathbf{t}) \le \mathbf{x}\}, \qquad p_{\mathbf{X}(\mathbf{t})}(\mathbf{x}) = \frac{\partial^n F_{\mathbf{X}(\mathbf{t})}(\mathbf{x})}{\partial x_1\, \partial x_2 \cdots \partial x_n}$

A random process is strictly stationary if and only if
$p_{\mathbf{X}(\mathbf{t})}(\mathbf{x}) = p_{\mathbf{X}(\mathbf{t} + \tau)}(\mathbf{x})$
holds for every set of time instants $\{t_1, t_2, \ldots, t_n\}$ and every time
shift $\tau$.
Moments of S.P.
For a random process, we define the following operator:
$E[\cdot] = \text{ensemble average}$

The ensemble mean of a random process $X(t)$ at time $t$ is
$m_X(t) = E[X(t)] = \int_{-\infty}^{+\infty} x\, p_{X(t)}(x)\, dx$

The ensemble variance of a random process $X(t)$ at time $t$ is
$\sigma_X^2(t) = E[(X(t) - m_X(t))^2] = \int_{-\infty}^{+\infty} (x - m_X(t))^2\, p_{X(t)}(x)\, dx$
Autocorrelation and Auto-covariance
The autocorrelation of a random process $X(t)$ is
$\phi_{XX}(t_1, t_2) = E[X(t_1)\, X(t_2)]$

The autocovariance of a random process $X(t)$ is
$C_{XX}(t_1, t_2) = E[(X(t_1) - m_X(t_1))(X(t_2) - m_X(t_2))] = \phi_{XX}(t_1, t_2) - m_X(t_1)\, m_X(t_2)$

For complex random processes, the autocorrelation and autocovariance are
defined as
$\phi_{XX}(t_1, t_2) = E[X(t_1)\, X^*(t_2)]$
$C_{XX}(t_1, t_2) = E[(X(t_1) - m_X(t_1))(X(t_2) - m_X(t_2))^*] = \phi_{XX}(t_1, t_2) - m_X(t_1)\, m_X^*(t_2)$
Cross-correlation and Cross-covariance
Consider two random processes $X(t)$ and $Y(t)$. The cross-correlations of
$X(t)$ and $Y(t)$ are
$\phi_{XY}(t_1, t_2) = E[X(t_1)\, Y(t_2)], \qquad \phi_{YX}(t_1, t_2) = E[Y(t_1)\, X(t_2)]$
and $\phi_{XY}(t_1, t_2) = E[X(t_1)\, Y(t_2)] = \phi_{YX}(t_2, t_1)$.

The correlation matrix of $X(t)$ and $Y(t)$ is
$\boldsymbol{\Phi}(t_1, t_2) = E\left[ \begin{pmatrix} X(t_1) \\ Y(t_1) \end{pmatrix} \begin{pmatrix} X(t_2) & Y(t_2) \end{pmatrix} \right] = \begin{pmatrix} \phi_{XX}(t_1, t_2) & \phi_{XY}(t_1, t_2) \\ \phi_{YX}(t_1, t_2) & \phi_{YY}(t_1, t_2) \end{pmatrix}$

The cross-covariance of $X(t)$ and $Y(t)$ is
$C_{XY}(t_1, t_2) = E[(X(t_1) - m_X(t_1))(Y(t_2) - m_Y(t_2))] = \phi_{XY}(t_1, t_2) - m_X(t_1)\, m_Y(t_2)$

The covariance matrix of $X(t)$ and $Y(t)$ is
$\mathbf{C}(t_1, t_2) = \begin{pmatrix} C_{XX}(t_1, t_2) & C_{XY}(t_1, t_2) \\ C_{YX}(t_1, t_2) & C_{YY}(t_1, t_2) \end{pmatrix}$
Wide Sense Stationary
A random process $X(t)$ is wide-sense stationary (WSS) if and only if
$m_X(t) = m_X = \text{a constant}$
$\phi_{XX}(t_1, t_2) = \phi_{XX}(\tau), \qquad \text{where } \tau = t_1 - t_2$
That is, the autocorrelation depends only on the time difference $\tau$.

If a random process is strictly stationary, then it is WSS provided the
second-order statistics exist.
Proof: since $p_{\mathbf{X}(\mathbf{t})}(\mathbf{x}) = p_{\mathbf{X}(\mathbf{t} + \tau)}(\mathbf{x})$ holds for all $\tau$, we have
$E[X(t_1)] = \int x\, p_{X(t_1)}(x)\, dx = \int x\, p_{X(0)}(x)\, dx = \text{const.}$
$\phi_{XX}(t_1, t_2) = \iint x_1 x_2\, p_{X(t_1), X(t_2)}(x_1, x_2)\, dx_1\, dx_2 = \iint x_1 x_2\, p_{X(t_1 - t_2), X(0)}(x_1, x_2)\, dx_1\, dx_2 = \phi_{XX}(t_1 - t_2)$
Wide Sense Stationary (cont.)
The converse is not true; that is, WSS does not imply strictly stationary.
For a Gaussian random process,
strictly stationary $\Leftrightarrow$ WSS.
Proof: according to the multivariate Gaussian distribution shown in Lecture 3,
pp. 13-14, the joint p.d.f.
$p(\mathbf{x}) = \frac{1}{(2\pi)^{n/2}\, |\mathbf{M}|^{1/2}} \exp\{-\tfrac{1}{2} (\mathbf{x} - \mathbf{m})^T \mathbf{M}^{-1} (\mathbf{x} - \mathbf{m})\}$
depends only on the mean vector and the second-order statistics (the
covariance matrix).

If $X(t)$ and $Y(t)$ are each wide-sense stationary and jointly wide-sense
stationary, then
$\phi_{XY}(t_1, t_2) = \phi_{XY}(t_1 - t_2) = \phi_{XY}(\tau)$
$C_{XY}(t_1, t_2) = C_{XY}(t_1 - t_2) = C_{XY}(\tau)$
Example 1
Example 1: Consider the random process
$X(t) = A \cos(2\pi f_c t + \Theta)$
where $A$ and $f_c$ are constants, and $\Theta$ is a uniformly distributed
random phase; that is,
$p(\theta) = \begin{cases} \dfrac{1}{2\pi}, & 0 \le \theta \le 2\pi \\ 0, & \text{otherwise.} \end{cases}$
Show that $X(t)$ is a WSS random process.

Proof: The ensemble mean of $X(t)$ is
$m_X(t) = E[A \cos(2\pi f_c t + \Theta)] = \int_0^{2\pi} A \cos(2\pi f_c t + \theta)\, \frac{1}{2\pi}\, d\theta = \frac{A}{2\pi} \sin(2\pi f_c t + \theta) \Big|_0^{2\pi} = 0$
Example 1 (cont.)
The autocorrelation (which is also the autocovariance, since the mean is
zero) is
$\phi_{XX}(t_1, t_2) = E[X(t_1)\, X(t_2)] = E[A^2 \cos(2\pi f_c t_1 + \Theta) \cos(2\pi f_c t_2 + \Theta)]$
$= \frac{A^2}{2}\, \underbrace{E[\cos(2\pi f_c (t_1 + t_2) + 2\Theta)]}_{= 0} + \frac{A^2}{2}\, E[\cos(2\pi f_c (t_1 - t_2))]$
$= \frac{A^2}{2} \cos[2\pi f_c (t_1 - t_2)] = \frac{A^2}{2} \cos(2\pi f_c \tau), \qquad \tau = t_1 - t_2.$
Therefore, $X(t)$ is a WSS random process.
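The two ensemble-average claims (zero mean, time-independent power $A^2/2$) can be checked by averaging over many random-phase realizations; a minimal sketch (NumPy assumed; amplitude, frequency, grid, and ensemble size are illustrative):

```python
import numpy as np

rng = np.random.default_rng(8)
A, fc = 2.0, 5.0
t = np.arange(0, 1, 1e-3)                  # 1 s at 1 kHz (illustrative)

# Ensemble of realizations of X(t) = A cos(2 pi fc t + Theta), Theta ~ U[0, 2pi).
theta = rng.uniform(0, 2 * np.pi, size=(5000, 1))
X = A * np.cos(2 * np.pi * fc * t + theta)

# Ensemble averages: m_X(t) ~ 0 and phi_XX(t, t) ~ A^2 / 2 = 2 for every t.
mean_t = X.mean(axis=0)
power_t = (X ** 2).mean(axis=0)
```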
Example 2
Example 2: Consider the random process
$Y(t) = X \cos t, \qquad X \sim N(0, 1)$
where the r.v. $X$ has p.d.f. $p_X(x) = \frac{1}{\sqrt{2\pi}}\, e^{-x^2/2}$.
Find:
1) the p.d.f. of $Y(0)$;
2) the joint p.d.f. of $Y(0)$ and $Y(\pi)$;
3) determine whether or not $Y(t)$ is stationary.

Solution:
1) To find the p.d.f. of $Y(0)$, we note that
$Y(0) = X \cos 0 = X.$
Therefore,
$p_{Y(0)}(y_0) = \frac{1}{\sqrt{2\pi}}\, e^{-y_0^2 / 2}$
Example 2 (cont.)
2) To find the joint p.d.f. of $Y(0)$ and $Y(\pi)$, we note that
$Y(0) = X = -Y(\pi).$
The conditional c.d.f. is a unit step,
$F_{Y(0)|Y(\pi)}(y_0 \mid y_\pi) = P\{Y(0) \le y_0 \mid Y(\pi) = y_\pi\} = u(y_0 + y_\pi)$
so the conditional p.d.f. is
$p_{Y(0)|Y(\pi)}(y_0 \mid y_\pi) = \frac{\partial F_{Y(0)|Y(\pi)}(y_0 \mid y_\pi)}{\partial y_0} = \delta(y_0 + y_\pi).$
Example 2 (cont.)
Hence the joint p.d.f. is
$p_{Y(0), Y(\pi)}(y_0, y_\pi) = p_{Y(0)|Y(\pi)}(y_0 \mid y_\pi)\, p_{Y(\pi)}(y_\pi) = \frac{1}{\sqrt{2\pi}}\, e^{-y_\pi^2/2}\, \delta(y_0 + y_\pi)$
(using $p_Y(y) = p_X(x)\, |J|$ for the change of variables).

3) To determine whether or not $Y(t)$ is stationary, note that
$E[Y(t)] = E[X] \cos t = 0$
$\phi_{YY}(t, t) = E[Y^2(t)] = E[X^2] \cos^2 t = \cos^2 t$
Since the second-order moment (the mean-square value) of $Y(t)$ varies with
time $t$, this random process is not stationary.
Properties of Autocorrelation
The autocorrelation function $\phi_{XX}(\tau)$ of a WSS random process
satisfies the following properties:
1. $\phi_{XX}(0) = E[X^2(t)] \ge 0$.  (This is obvious.)
2. $\phi_{XX}(\tau) = \phi_{XX}(-\tau)$, since $E[X(t)X(t+\tau)] = E[X(t+\tau)X(t)]$.
3. $|\phi_{XX}(\tau)| \le \phi_{XX}(0)$  (the Cauchy-Schwarz inequality).
4. If $X(t) = X(t + T)$, then $\phi_{XX}(\tau) = \phi_{XX}(\tau + T)$;
   i.e., if $X(t)$ is periodic, then $\phi_{XX}(\tau)$ is also periodic. (Not difficult.)

Next, we will prove property 3.
Properties of Autocorrelation (cont.)
The inequality $|\phi_{XX}(\tau)| \le \phi_{XX}(0)$ can be established through
the following steps:
$0 \le E\{[X(t) \pm X(t + \tau)]^2\} = E[X^2(t)] + E[X^2(t + \tau)] \pm 2 E[X(t)\, X(t + \tau)] = 2\phi_{XX}(0) \pm 2\phi_{XX}(\tau)$
Therefore,
$-\phi_{XX}(0) \le \phi_{XX}(\tau) \le \phi_{XX}(0), \qquad \text{i.e.} \quad |\phi_{XX}(\tau)| \le \phi_{XX}(0).$
Properties of Cross-correlation
The cross-correlation function $\phi_{XY}(\tau)$ has the following properties:
1. $\phi_{XY}(\tau) = \phi_{YX}(-\tau)$.
2. $|\phi_{XY}(\tau)| \le \frac{1}{2} [\phi_{XX}(0) + \phi_{YY}(0)]$.
3. $|\phi_{XY}(\tau)|^2 \le \phi_{XX}(0)\, \phi_{YY}(0)$, if $X(t)$ and $Y(t)$ have zero mean.
Properties of Cross-correlation (cont.)
Proof:
1. $\phi_{XY}(\tau) = E\{X(t)\, Y(t + \tau)\} = E\{Y(t')\, X(t' - \tau)\} = \phi_{YX}(-\tau)$, with $t' = t + \tau$.

2. $0 \le E\{[X(t) \pm Y(t + \tau)]^2\} = E\{X^2(t)\} \pm 2 E\{X(t)\, Y(t + \tau)\} + E\{Y^2(t + \tau)\} = \phi_{XX}(0) \pm 2\phi_{XY}(\tau) + \phi_{YY}(0)$
so that
$-[\phi_{XX}(0) + \phi_{YY}(0)] \le 2\phi_{XY}(\tau) \le \phi_{XX}(0) + \phi_{YY}(0)$
$\Rightarrow |\phi_{XY}(\tau)| \le \frac{1}{2} [\phi_{XX}(0) + \phi_{YY}(0)].$
Properties of Cross-correlation (cont.)
3. To show $|\phi_{XY}(\tau)|^2 \le \phi_{XX}(0)\, \phi_{YY}(0)$: for any real
number $a$,
$0 \le E\{[a X(t) + Y(t + \tau)]^2\} = a^2 E\{X^2(t)\} + 2a\, E\{X(t)\, Y(t + \tau)\} + E\{Y^2(t + \tau)\} = a^2 \phi_{XX}(0) + 2a\, \phi_{XY}(\tau) + \phi_{YY}(0)$
Since this quadratic in $a$,
$f(a) = a^2 \phi_{XX}(0) + 2a\, \phi_{XY}(\tau) + \phi_{YY}(0) \ge 0,$
is never negative, its discriminant cannot be positive:
$4\phi_{XY}^2(\tau) - 4\phi_{XX}(0)\, \phi_{YY}(0) \le 0 \quad \Rightarrow \quad |\phi_{XY}(\tau)|^2 \le \phi_{XX}(0)\, \phi_{YY}(0).$
(Recall: for $f(a) = \alpha a^2 + \beta a + c$ with $\alpha > 0$, if $f$ has no
real root, i.e. is nonnegative everywhere, then $\beta^2 - 4\alpha c \le 0$.)
End of Lecture 4
Digital Communications
Lecture 5 Stochastic Process Part 2
Digital Communications
Classifications of Stochastic Process
Two WSS random processes $X(t)$ and $Y(t)$ are said to be:
Uncorrelated if and only if $C_{XY}(\tau) = 0$ for all $\tau$.
Orthogonal if and only if $\phi_{XY}(\tau) = 0$ for all $\tau$.
Statistically independent if and only if
$p_{X(t), Y(t+\tau)}(x, y) = p_{X(t)}(x)\, p_{Y(t+\tau)}(y)$ for all $t$ and $\tau$.

Furthermore, if $m_x = 0$ or $m_y = 0$, then
uncorrelated $\Rightarrow$ orthogonal, since
$C_{XY}(\tau) = E\{[X(t) - m_X][Y(t + \tau) - m_Y]\} = E[X(t)\, Y(t + \tau)] - m_X m_Y = 0.$
Statistically independent $\Rightarrow$ uncorrelated.
For a Gaussian (normal) process,
statistically independent $\Leftrightarrow$ uncorrelated.
Power Spectral Density
The power spectral density (p.s.d.) of a WSS random process $X(t)$ is the
Fourier transform of the autocorrelation function, i.e.
$\Phi_{XX}(f) = \int_{-\infty}^{\infty} \phi_{XX}(\tau)\, e^{-j2\pi f \tau}\, d\tau, \qquad \phi_{XX}(\tau) = \int_{-\infty}^{\infty} \Phi_{XX}(f)\, e^{j2\pi f \tau}\, df$

If $X(t)$ is real, then $\phi_{XX}(\tau)$ is real and even, and $\Phi_{XX}(f)$
is also real and even:
$\phi_{XX}(-\tau) = E[X(t)\, X(t - \tau)] = E[X(t + \tau)\, X(t)] = \phi_{XX}(\tau)$

If $X(t)$ is complex, then $\phi_{XX}(-\tau) = \phi_{XX}^*(\tau)$ and
$\Phi_{XX}^*(f) = \Phi_{XX}(f)$, which implies that $\Phi_{XX}(f)$ is still
real but not necessarily even.

The power $P$ of the random process $X(t)$ is
$P = E[X^2(t)] = \phi_{XX}(0) = \int_{-\infty}^{\infty} \Phi_{XX}(f)\, df$
which is known as Parseval's theorem for calculating the power of a
stochastic process.
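The transform pair and the power relation can be checked on a sampled grid; a minimal sketch (NumPy assumed; $\phi(\tau) = e^{-|\tau|}$ and the grid spacing are illustrative) computes the p.s.d. with an FFT and verifies that its integral recovers $\phi(0)$:

```python
import numpy as np

# Sample phi(tau) = exp(-|tau|) on a fine symmetric grid.
dt = 0.01
tau = np.arange(-50, 50, dt)
phi = np.exp(-np.abs(tau))

# P.s.d. samples: FFT approximates the continuous Fourier transform
# (ifftshift puts tau = 0 first, as the FFT expects).
Phi = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(phi))).real * dt
f = np.fft.fftshift(np.fft.fftfreq(tau.size, d=dt))

# Parseval: integral of the p.s.d. equals phi(0) = 1.
power_from_psd = float(np.sum(Phi) * (f[1] - f[0]))
```

The peak value `Phi.max()` should also match the continuous-time transform $2/(1 + (2\pi f)^2)$ at $f = 0$, i.e. 2.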
Linear Systems
Suppose that the input to a linear system $h(t)$ is a WSS random process
$X(t)$ with mean $m_x$ and autocorrelation $\phi_{XX}(\tau)$. The input and
output waveforms are related by the convolution integral
$Y(t) = \int_{-\infty}^{+\infty} h(\tau)\, X(t - \tau)\, d\tau = X(t) * h(t)$
and the output is also a WSS random process.

Proof: The output mean is
$m_y = E[Y(t)] = \int_{-\infty}^{+\infty} h(\tau)\, E[X(t - \tau)]\, d\tau = m_x \int_{-\infty}^{+\infty} h(\tau)\, d\tau = m_x H(0)$
where $H(f) = \int_{-\infty}^{\infty} h(\tau)\, e^{-j2\pi f \tau}\, d\tau$; the
output mean is just the input mean multiplied by the direct-current (DC)
filter gain.
Linear Systems (cont.)
The output autocorrelation is
$\phi_{YY}(\tau) = E[Y(t)\, Y(t + \tau)]$
$= E\left[ \int h(\alpha)\, X(t - \alpha)\, d\alpha \int h(\beta)\, X(t + \tau - \beta)\, d\beta \right]$
$= \iint h(\alpha)\, h(\beta)\, E[X(t - \alpha)\, X(t + \tau - \beta)]\, d\alpha\, d\beta$
$= \iint h(\alpha)\, h(\beta)\, \phi_{XX}(\tau + \alpha - \beta)\, d\alpha\, d\beta$
$= \left\{ \int h(\alpha)\, \phi_{XX}(\tau + \alpha)\, d\alpha \right\} * h(\tau)$
$= \phi_{XX}(\tau) * h(\tau) * h(-\tau)$

Taking Fourier transforms, the output p.s.d. is
$\Phi_{YY}(f) = \Phi_{XX}(f)\, H(f)\, H^*(f) = |H(f)|^2\, \Phi_{XX}(f)$
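A discrete-time analogue of $\Phi_{YY} = |H|^2 \Phi_{XX}$ is easy to verify by filtering white noise; a minimal sketch (NumPy assumed; the 2-tap FIR filter is an illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(9)

# Filter unit-variance white noise (flat p.s.d.) with h = [1, 1].
x = rng.standard_normal(500_000)
h = np.array([1.0, 1.0])
y = np.convolve(x, h, mode="valid")

# For white input with unit variance, the output power is
# phi_YY(0) = sum_k h[k]^2 * var(X) = 1 + 1 = 2,
# consistent with integrating |H(f)|^2 over one period.
out_power = float(y.var())
```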
Example 1
Problem: Consider a linear system with impulse response $h(t)$. We wish to
find the cross-correlation between the input and output random processes
$X(t)$ and $Y(t)$, where $Y(t) = h(t) * X(t) = \int h(\alpha)\, X(t - \alpha)\, d\alpha$.

Solution: The cross-correlation is given by
$\phi_{YX}(\tau) = E[Y(t + \tau)\, X(t)] = E\left[ \int h(\alpha)\, X(t + \tau - \alpha)\, d\alpha\; X(t) \right] = \int h(\alpha)\, E[X(t + \tau - \alpha)\, X(t)]\, d\alpha = \int h(\alpha)\, \phi_{XX}(\tau - \alpha)\, d\alpha = h(\tau) * \phi_{XX}(\tau)$

Also, in the frequency domain,
$\Phi_{YX}(f) = H(f)\, \Phi_{XX}(f)$
Example 2
Problem: Consider a linear system with impulse response
$h(t) = e^{-\beta t}\, u(t)$, $\beta > 0$. Find the autocorrelation and p.s.d.
of the system output $Y(t)$ when the system input $X(t)$ is white with
correlation $\phi_{XX}(\tau) = N_0\, \delta(\tau)$.

Preliminary (Fourier transform pairs):
$e^{-\beta t} u(t) \;\leftrightarrow\; \frac{1}{\beta + j2\pi f}, \qquad e^{-\beta|t|} = e^{-\beta t} u(t) + e^{\beta t} u(-t) \;\leftrightarrow\; \frac{1}{\beta + j2\pi f} + \frac{1}{\beta - j2\pi f} = \frac{2\beta}{\beta^2 + (2\pi f)^2}$
and if $f(t) \leftrightarrow F(f)$, then $f(-t) \leftrightarrow F(-f)$.
Example 2 (cont.)
Solution:
The p.s.d. of the system input is
$\Phi_{XX}(f) = \int_{-\infty}^{\infty} N_0\, \delta(\tau)\, e^{-j2\pi f \tau}\, d\tau = N_0$
and the frequency response of the system is
$H(f) = \int_{-\infty}^{\infty} e^{-\beta t}\, u(t)\, e^{-j2\pi f t}\, dt = \frac{1}{\beta + j2\pi f}$
Therefore, the p.s.d. of the system output $Y(t)$ is
$\Phi_{YY}(f) = |H(f)|^2\, \Phi_{XX}(f) = \frac{N_0}{\beta^2 + (2\pi f)^2}$
and the autocorrelation function of $Y(t)$ is
$\phi_{YY}(\tau) = F^{-1}\{\Phi_{YY}(f)\} = F^{-1}\left\{ \frac{N_0}{2\beta} \cdot \frac{2\beta}{\beta^2 + (2\pi f)^2} \right\} = \frac{N_0}{2\beta}\, e^{-\beta|\tau|}$
Complex Stochastic Process
A complex-valued random process is given by
$Z(t) = X(t) + jY(t)$
where $X(t)$ and $Y(t)$ are real random processes. The autocorrelation
function is
$\phi_{ZZ}(t_1, t_2) = \tfrac{1}{2} E[Z(t_1)\, Z^*(t_2)] = \tfrac{1}{2} E[(X(t_1) + jY(t_1))(X(t_2) - jY(t_2))] = \tfrac{1}{2} \{\phi_{XX}(t_1, t_2) + \phi_{YY}(t_1, t_2) + j[\phi_{YX}(t_1, t_2) - \phi_{XY}(t_1, t_2)]\}$

$Z(t)$ is wide-sense stationary if and only if $X(t)$ and $Y(t)$ are WSS and
jointly WSS, i.e. $\phi_{YX}(t_1, t_2) = \phi_{YX}(t_1 - t_2)$ and
$\phi_{XY}(t_1, t_2) = \phi_{XY}(t_1 - t_2)$. In this case,
$\phi_{ZZ}(\tau) = \tfrac{1}{2} E[Z(t + \tau)\, Z^*(t)]$
and
$\phi_{ZZ}^*(\tau) = \tfrac{1}{2} E[Z^*(t + \tau)\, Z(t)] = \phi_{ZZ}(-\tau).$

The p.s.d. is
$\Phi_{ZZ}(f) = \int_{-\infty}^{\infty} \phi_{ZZ}(\tau)\, e^{-j2\pi f \tau}\, d\tau$
and
$\Phi_{ZZ}^*(f) = \int_{-\infty}^{\infty} \phi_{ZZ}^*(\tau)\, e^{j2\pi f \tau}\, d\tau = \int_{-\infty}^{\infty} \phi_{ZZ}(-\tau)\, e^{j2\pi f \tau}\, d\tau = \Phi_{ZZ}(f)$  (real).
Non-negativity of PSD
Let $X(t)$ be a WSS random process with p.s.d. $\Phi_{XX}(f)$. Consider a
linear system with input $X(t)$ and frequency response
$H(f) = \begin{cases} 1, & |f - f_0| \le \Delta f / 2 \\ 0, & \text{otherwise} \end{cases}$
(a commonly used band-limited channel). The output of the system is another
WSS random process, $Y(t)$, with p.s.d. $\Phi_{YY}(f) = |H(f)|^2\, \Phi_{XX}(f)$.
Furthermore, by Parseval's theorem,
$0 \le E[Y^2(t)] = \int_{-\infty}^{+\infty} \Phi_{YY}(f)\, df = \int_{-\infty}^{+\infty} \Phi_{XX}(f)\, |H(f)|^2\, df = \int_{f_0 - \Delta f/2}^{f_0 + \Delta f/2} \Phi_{XX}(f)\, df$
Therefore, for any $f_0$,
$\Phi_{XX}(f_0) = \lim_{\Delta f \to 0} \frac{1}{\Delta f} \int_{f_0 - \Delta f/2}^{f_0 + \Delta f/2} \Phi_{XX}(f)\, df \ge 0$
Sampling Theorem for Stochastic Process
Band-limited processes: a real WSS random process $X(t)$ with
$\Phi_{XX}(f) = 0$ for all $|f| > W$.

Sampling theorem for a deterministic signal: if $f(t)$ is $W$-limited, then
$f(t) = \sum_{n=-\infty}^{+\infty} f\!\left(\frac{n}{2W}\right) \frac{\sin 2\pi W (t - \frac{n}{2W})}{2\pi W (t - \frac{n}{2W})}$  (interpolation formula)

Sampling theorem for a random signal: if $X(t)$ is $W$-limited, then
$X(t) = \sum_{n=-\infty}^{+\infty} X\!\left(\frac{n}{2W}\right) \frac{\sin 2\pi W (t - \frac{n}{2W})}{2\pi W (t - \frac{n}{2W})}$
(in the sense that the mean-square error of the expansion is zero).
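The interpolation formula can be exercised numerically on a deterministic band-limited signal; a minimal sketch (NumPy assumed; the test signal, band limit $W = 1$, and truncation to $|n| \le 200$ are illustrative) reconstructs the signal from its samples at rate $2W$:

```python
import numpy as np

rng = np.random.default_rng(10)

# Reconstruct a W-limited signal from samples at rate 2W (truncated series).
W = 1.0
n = np.arange(-200, 201)
tn = n / (2 * W)                          # sample instants n / 2W

def sig(t):
    # An exactly W-limited test signal: shifted sinc pulses.
    return np.sinc(2 * W * (t - 0.3)) + 0.5 * np.sinc(2 * W * (t + 1.2))

t = rng.uniform(-5, 5, size=200)          # points at which to reconstruct
samples = sig(tn)
# X(t) = sum_n X(n/2W) * sinc(2W (t - n/2W)); np.sinc(x) = sin(pi x)/(pi x).
recon = samples @ np.sinc(2 * W * (t[None, :] - tn[:, None]))
err = float(np.max(np.abs(recon - sig(t))))
```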
Computer and Information College
12
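The interpolation formula above can be checked numerically. In the sketch below, the bandwidth W, the test tones, and the truncation of the series to |n| <= 200 are all assumptions for illustration; a W-limited signal is reconstructed at an arbitrary instant from its rate-2W samples.

```python
import numpy as np

W = 4.0                      # assumed bandwidth in Hz
fs = 2 * W                   # Nyquist sampling rate
n = np.arange(-200, 201)     # sample indices (truncated series)
t = 0.137                    # an arbitrary reconstruction instant

def f(t):
    # a W-limited test signal: two tones below W
    return np.cos(2 * np.pi * 1.5 * t) + 0.5 * np.sin(2 * np.pi * 3.0 * t)

arg = 2 * np.pi * W * (t - n / fs)
sinc = np.where(np.abs(arg) < 1e-12, 1.0, np.sin(arg) / arg)
f_hat = np.sum(f(n / fs) * sinc)   # interpolation formula
err = abs(f_hat - f(t))            # small truncation error only
print(err)
```

The residual error comes purely from truncating the infinite sum.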
Sampling Theorem for Stochastic Process (cont.)
Clues for proving the sampling theorem for the random signal:
Clue 1. Since \phi_{XX}(\tau) is itself W-limited, applying the deterministic sampling theorem to f(t) = \phi_{XX}(t + \tau) gives
    \phi_{XX}(t + \tau) = \sum_{n=-\infty}^{+\infty} \phi_{XX}\left(\frac{n}{2W} + \tau\right) \frac{\sin 2\pi W (t - n/2W)}{2\pi W (t - n/2W)} .    (1)
Clue 2. To prove
    E\left\{ \left[ X(t) - \sum_n X\left(\frac{n}{2W}\right) \frac{\sin 2\pi W(t - n/2W)}{2\pi W (t - n/2W)} \right] X(t) \right\} = 0 ,
set \tau = -t in (1), which gives
    \phi_{XX}(0) - \sum_n \phi_{XX}\left(\frac{n}{2W} - t\right) \frac{\sin 2\pi W(t - n/2W)}{2\pi W(t - n/2W)} = 0 .
Sampling Theorem for Stochastic Process (cont.)
Clue 3. To prove, for every sampling instant k/2W,
    E\left\{ \left[ X(t) - \sum_n X\left(\frac{n}{2W}\right) \frac{\sin 2\pi W(t-n/2W)}{2\pi W(t-n/2W)} \right] X\left(\frac{k}{2W}\right) \right\} = 0 ,
set \tau = -k/2W in (1), which gives
    \phi_{XX}\left(t - \frac{k}{2W}\right) - \sum_n \phi_{XX}\left(\frac{n}{2W} - \frac{k}{2W}\right) \frac{\sin 2\pi W(t-n/2W)}{2\pi W(t-n/2W)} = 0 .
Clue 4. Expanding the mean square error and using Clues 2 and 3, prove
    E\left\{ \left| X(t) - \sum_n X\left(\frac{n}{2W}\right) \frac{\sin 2\pi W(t-n/2W)}{2\pi W(t-n/2W)} \right|^2 \right\} = 0 .
Discrete Time Random Processes
Let X_n \equiv X(n), where n is an integer time variable, be a complex discrete-time random process.
The m-th moment of X_n is
    E[X_n^m] = \int_{-\infty}^{+\infty} x_n^m\, p_{X_n}(x_n)\, dx_n .
When m = 1,
    \mu_x[n] = E[X_n] = \int_{-\infty}^{+\infty} x_n\, p_{X_n}(x_n)\, dx_n .
The autocorrelation of X_n is
    \phi[n, k] = \frac{1}{2} E[X_n X_k^*] = \frac{1}{2} \iint x_n x_k^*\, p_{X_n, X_k}(x_n, x_k)\, dx_n\, dx_k ,
and the autocovariance is
    C[n, k] = \phi[n, k] - \frac{1}{2}\mu_x[n]\mu_x^*[k] .
If X_n is a wide-sense stationary random process, then
    \mu_x[n] = \mu_x = \text{constant},  \phi[n, k] = \phi[n - k],  C[n, k] = C[n - k] = \phi[n - k] - \frac{1}{2}|\mu_x|^2 .
Discrete Time Random Processes (cont.)
The p.s.d. of a discrete-time random process is
    \Phi(f) = \sum_{n=-\infty}^{+\infty} \phi[n]\, e^{-j2\pi f n}    (discrete-time Fourier transform),
where
    \phi[n] = \int_{-1/2}^{1/2} \Phi(f)\, e^{j2\pi f n}\, df    (discrete-time inverse Fourier transform).
Note that \Phi(f + k) = \Phi(f) for any integer k:
    \Phi(f + k) = \sum_n \phi[n]\, e^{-j2\pi (f+k) n} = \sum_n \phi[n]\, e^{-j2\pi f n}\, \underbrace{e^{-j2\pi k n}}_{=1} = \Phi(f) .
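The period-1 property can be verified directly. A minimal sketch, with an assumed short real and even autocorrelation sequence φ[n] (not from the lecture):

```python
import numpy as np

phi = {0: 1.0, 1: 0.5, -1: 0.5, 2: 0.25, -2: 0.25}  # assumed φ[n], real and even

def Phi(f):
    # DTFT of the autocorrelation sequence
    return sum(p * np.exp(-2j * np.pi * f * n) for n, p in phi.items())

f0 = 0.17
period_gap = abs(Phi(f0) - Phi(f0 + 1))   # should vanish: Φ(f + k) = Φ(f)
imag_part = abs(Phi(f0).imag)             # should vanish: φ[n] real and even
print(period_gap, imag_part)
```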
Cyclo-stationary Process
Consider the random process
    X(t) = \sum_{n=-\infty}^{+\infty} a_n\, g(t - nT)    (a common representation for baseband transmit signals),
where \{a_n\} is a sequence of complex random variables with mean \mu_a and autocorrelation \phi_{aa}[n], and g(t) is a real, deterministic shaping function.
Note that the mean of X(t),
    \mu_x(t) = E[X(t)] = \mu_a \sum_{n=-\infty}^{+\infty} g(t - nT) ,
is periodic with period T.
Cyclo-stationary Process (cont.)
The autocorrelation of X(t) is
    \phi_{XX}(t + \tau, t) = \frac{1}{2} E[X(t+\tau) X^*(t)]
        = \frac{1}{2} \sum_n \sum_m E[a_n a_m^*]\, g(t + \tau - nT)\, g(t - mT)
        = \sum_n \sum_m \phi_{aa}[n - m]\, g(t + \tau - nT)\, g(t - mT) .
It can be shown that
    \phi_{XX}(t + \tau + kT, t + kT) = \phi_{XX}(t + \tau, t) ,
that is, the autocorrelation is periodic in t with period T. Therefore the random process X(t) is called a cyclostationary random process.
The time-averaged p.s.d. of X(t) can be computed by first determining the time-averaged autocorrelation and then taking the Fourier transform:
    \bar{\phi}_{XX}(\tau) = \frac{1}{T} \int_{-T/2}^{T/2} \phi_{XX}(t + \tau, t)\, dt ,
    \Phi_{XX}(f) = \int_{-\infty}^{+\infty} \bar{\phi}_{XX}(\tau)\, e^{-j2\pi f \tau}\, d\tau .
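The periodicity of the mean μ_x(t) can be illustrated numerically. The shaping pulse g(t) and the numbers below are assumptions, not from the lecture:

```python
import numpy as np

T = 1.0
mu_a = 0.7

def g(t):
    # assumed shaping pulse: raised-cosine-in-time on [0, T)
    return np.where((t >= 0) & (t < T), 0.5 * (1 - np.cos(2 * np.pi * t / T)), 0.0)

def mu_x(t):
    # mu_x(t) = mu_a * sum_n g(t - nT), truncated to |n| <= 50
    n = np.arange(-50, 51)
    return mu_a * np.sum(g(t - n * T))

gap = abs(mu_x(0.3) - mu_x(0.3 + T))   # periodicity of the mean with period T
print(gap)
```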
End of Lecture 5
Lecture 6 Baseband and Passband - Part 1
Digital Communications
Review of Fourier Transform
If S(f) = \int_{-\infty}^{+\infty} s(t)\, e^{-j2\pi f t}\, dt, then S^*(-f) = \int_{-\infty}^{+\infty} s^*(t)\, e^{-j2\pi f t}\, dt .
If s(t) is real, that is, s(t) = s^*(t), then S(-f) = S^*(f) .
Transform pairs for the unit step:
    u(t)  \leftrightarrow  \frac{1}{2}\delta(f) + \frac{1}{j2\pi f} ,
    u(-t) \leftrightarrow  \frac{1}{2}\delta(f) - \frac{1}{j2\pi f} .
By the symmetry (duality) property, that is,
    F^{-1}[u(f)]  = \frac{1}{2}\delta(t) + \frac{j}{2\pi t} ,
    F^{-1}[u(-f)] = \frac{1}{2}\delta(t) - \frac{j}{2\pi t} .
Review of Fourier Transform (cont.)
If
    H(f) = -j  (f > 0),   0  (f = 0),   j  (f < 0) ,
then
    h(t) = F^{-1}[H(f)] = \int_{-\infty}^{+\infty} H(f)\, e^{j2\pi f t}\, df
         = -j \int_{-\infty}^{+\infty} u(f)\, e^{j2\pi f t}\, df + j \int_{-\infty}^{+\infty} u(-f)\, e^{j2\pi f t}\, df
         = -j \left[ \frac{1}{2}\delta(t) + \frac{j}{2\pi t} \right] + j \left[ \frac{1}{2}\delta(t) - \frac{j}{2\pi t} \right]
         = \frac{1}{\pi t} .
Analytic Signal - Concept
u(f): a unit step function.
s_p(t): a real band-pass signal.
s_t(t): a complex analytic signal.
(Figure: S_p(f) occupies bands of amplitude A around -f_c and +f_c; |S_p(f)| must be an even function, see page 2. S_t(f) keeps only the positive-frequency band, with amplitude 2A around +f_c.)
Analytic Signal - Math Description
Frequency domain:
    S_t(f) = 2\, u(f)\, S_p(f) .
Time domain:
    s_t(t) = \int_{-\infty}^{+\infty} S_t(f)\, e^{j2\pi f t}\, df
           = F^{-1}\{2u(f)\} * F^{-1}\{S_p(f)\}
           = \left[ \delta(t) + \frac{j}{\pi t} \right] * s_p(t)
           = s_p(t) + j\,\hat{s}_p(t) ,
where
    \hat{s}_p(t) = \frac{1}{\pi t} * s_p(t) = \frac{1}{\pi} \int_{-\infty}^{+\infty} \frac{s_p(\tau)}{t - \tau}\, d\tau
is the Hilbert Transform of s_p(t).
Hilbert Transform
Definition:
    \hat{s}(t) = \frac{1}{\pi t} * s(t) = \frac{1}{\pi} \int_{-\infty}^{+\infty} \frac{s(\tau)}{t - \tau}\, d\tau .
Impulse response:
    h(t) = \frac{1}{\pi t} .
Frequency response:
    H(f) = -j  (f > 0),   0  (f = 0),   j  (f < 0) .
Hilbert Transform - Example
Example 1: Prove that if x(t) = \cos \omega_0 t, then \hat{x}(t) = \sin \omega_0 t.
Proof:
    x(t) = \frac{1}{2}\left[ e^{j\omega_0 t} + e^{-j\omega_0 t} \right] = \cos \omega_0 t ,
so, with \omega_0 = 2\pi f_0,
    X(f) = \frac{1}{2}\left[ \delta(f - f_0) + \delta(f + f_0) \right] .
Applying the Hilbert transformer H(f),
    \hat{X}(f) = H(f) X(f) = -\frac{j}{2}\delta(f - f_0) + \frac{j}{2}\delta(f + f_0) = \frac{1}{2j}\left[ \delta(f - f_0) - \delta(f + f_0) \right] ,
hence
    \hat{x}(t) = \frac{1}{2j}\left[ e^{j\omega_0 t} - e^{-j\omega_0 t} \right] = \sin \omega_0 t .
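The result of Example 1 can be checked numerically by building the analytic signal in the discrete-frequency domain (the same 2u(f) weighting as above, applied to FFT bins); the tone frequency and FFT length are assumptions:

```python
import numpy as np

N = 1024
t = np.arange(N) / N                 # one second, tone at f0 = 4 cycles
x = np.cos(2 * np.pi * 4 * t)

X = np.fft.fft(x)
U = np.zeros(N)
U[0] = 1.0; U[N // 2] = 1.0          # keep DC and Nyquist once
U[1:N // 2] = 2.0                    # double the positive frequencies (2 u(f))
x_a = np.fft.ifft(X * U)             # analytic signal x + j x_hat
x_hat = x_a.imag                     # discrete Hilbert transform of x

err = np.max(np.abs(x_hat - np.sin(2 * np.pi * 4 * t)))
print(err)
```

This is the standard discrete analytic-signal construction (the same idea as `scipy.signal.hilbert`).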
Passband to Baseband
Time domain:
    s_B(t) = s_t(t)\, e^{-j2\pi f_c t} = [\, s_p(t) + j\hat{s}_p(t)\, ]\, e^{-j2\pi f_c t} .
Frequency domain:
    S_B(f) = S_t(f + f_c) = 2\, u(f + f_c)\, S_p(f + f_c) .
s_p(t): passband signal (real), with S_p(-f) = S_p^*(f).
s_t(t): analytic signal (complex); its spectrum is the positive-frequency part of S_p(f), and this part can represent the whole information of the transmit signal.
s_B(t): baseband signal (complex), the equivalent lowpass signal.
(Figure: S_p(f) with amplitude A around \pm f_c; S_B(f) with amplitude 2A around f = 0.)
Passband to Baseband (cont.)
Question: is \tilde{s}(t) = \mathrm{Re}\{ s_B(t)\, e^{j2\pi f_c t} \} = s_p(t) ?
    \tilde{s}(t) = \frac{1}{2}\left[ s_B(t)\, e^{j2\pi f_c t} + s_B^*(t)\, e^{-j2\pi f_c t} \right] .
Using F\{ \tilde{s}^*(t) \} = \left[ \int \tilde{s}(t)\, e^{j2\pi f t}\, dt \right]^* = \tilde{S}^*(-f), the spectrum is
    \tilde{S}(f) = \frac{1}{2}\left[ S_B(f - f_c) + S_B^*(-f - f_c) \right]
        = \frac{1}{2}\left[ 2u(f - f_c + f_c) S_p(f - f_c + f_c) + 2u(-f - f_c + f_c) S_p^*(-f - f_c + f_c) \right]
        = u(f) S_p(f) + u(-f) S_p^*(-f)
        = u(f) S_p(f) + u(-f) S_p(f)            (since S_p^*(-f) = S_p(f))
        = S_p(f) .
Hence \tilde{s}(t) = s_p(t).
Signal Representations
(1) Complex envelope representation: let
    s_B(t) = a(t)\, e^{j\theta(t)} = x(t) + j\, y(t) ,
so that
    s_p(t) = \mathrm{Re}\{ s_B(t)\, e^{j2\pi f_c t} \} = x(t) \cos 2\pi f_c t - y(t) \sin 2\pi f_c t ,
where
    x(t), y(t): quadrature components,
    a(t) = |s_B(t)|: amplitude,
    \theta(t): phase.
Signal Representations (cont.)
(2) Amplitude-phase representation:
    s_p(t) = a(t) \cos[2\pi f_c t + \theta(t)] = \mathrm{Re}\{ \underbrace{a(t)\, e^{j\theta(t)}}_{s_B(t)}\, e^{j2\pi f_c t} \} .
Information is contained in a(t) and \theta(t).
(3) Quadrature representation:
    s_p(t) = a(t)\cos\theta(t)\, \cos 2\pi f_c t - a(t)\sin\theta(t)\, \sin 2\pi f_c t
           = x(t) \cos 2\pi f_c t - y(t) \sin 2\pi f_c t .
Signal Representations - Example
Example 2: Let the passband signal s_p(t) = \cos(10\pi t)\cos(100\pi t).
Find: (a) the baseband signal with respect to f_c = 50 Hz, and (b) the baseband signal with respect to f_c = 60 Hz.
Solution:
a)  s_p(t) = \frac{1}{2}\left( e^{j10\pi t} + e^{-j10\pi t} \right) \cdot \frac{1}{2}\left( e^{j100\pi t} + e^{-j100\pi t} \right)
           = \frac{1}{4}\left( e^{j110\pi t} + e^{j90\pi t} + e^{-j90\pi t} + e^{-j110\pi t} \right) ,
    S_p(f) = \frac{1}{4}\left[ \delta(f - 55) + \delta(f - 45) + \delta(f + 45) + \delta(f + 55) \right] .
    S_B(f) = 2\, u(f + 50)\, S_p(f + 50)
           = \frac{1}{2}\, u(f + 50)\left[ \delta(f - 5) + \delta(f + 5) + \delta(f + 95) + \delta(f + 105) \right]
           = \frac{1}{2}\left[ \delta(f - 5) + \delta(f + 5) \right] ,
    s_B(t) = \frac{1}{2}\left( e^{j10\pi t} + e^{-j10\pi t} \right) = \cos(10\pi t) .
Signal Representations - Example (cont.)
b) With f_c = 60 Hz,
    S_B(f) = 2\, u(f + 60)\, S_p(f + 60)
           = \frac{1}{2}\, u(f + 60)\left[ \delta(f + 5) + \delta(f + 15) + \delta(f + 105) + \delta(f + 115) \right]
           = \frac{1}{2}\left[ \delta(f + 5) + \delta(f + 15) \right] ,
    s_B(t) = \frac{1}{2}\left( e^{-j10\pi t} + e^{-j30\pi t} \right) = \frac{1}{2}\left( e^{j10\pi t} + e^{-j10\pi t} \right) e^{-j20\pi t} = \cos(10\pi t)\, e^{-j20\pi t} .
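Part (a) can be reproduced numerically: form the analytic signal by the 2u(f) weighting of FFT bins, then down-convert by e^{-j2πf_c t}. The sampling rate and one-second duration below are assumptions:

```python
import numpy as np

fs = 1000.0
N = 1000                                 # exactly 1 s, so all tones fall on FFT bins
t = np.arange(N) / fs
sp = np.cos(10 * np.pi * t) * np.cos(100 * np.pi * t)

S = np.fft.fft(sp)
U = np.zeros(N)
U[0] = 1.0; U[N // 2] = 1.0
U[1:N // 2] = 2.0                        # 2 u(f) on the FFT bins
st = np.fft.ifft(S * U)                  # analytic signal s_t(t)
sB = st * np.exp(-2j * np.pi * 50 * t)   # baseband with respect to f_c = 50 Hz

err = np.max(np.abs(sB - np.cos(10 * np.pi * t)))
print(err)
```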
Signal Energy
Consider a signal s_p(t) that has a duration of T. The energy in s_p(t) is
    E = \int_0^T s_p^2(t)\, dt = \int_0^T \left( \mathrm{Re}\{ s_B(t)\, e^{j2\pi f_c t} \} \right)^2 dt .
Use the identity \mathrm{Re}\{x\} = \frac{x + x^*}{2}; then
    E = \frac{1}{4} \int_0^T \left( s_B(t)\, e^{j2\pi f_c t} + s_B^*(t)\, e^{-j2\pi f_c t} \right)^2 dt
      = \frac{1}{4} \int_0^T \left( s_B^2(t)\, e^{j4\pi f_c t} + 2|s_B(t)|^2 + s_B^{*2}(t)\, e^{-j4\pi f_c t} \right) dt .
Writing s_B(t) = a(t)\, e^{j\theta(t)},
    E = \frac{1}{4} \int_0^T \left( a^2(t)\, e^{j[4\pi f_c t + 2\theta(t)]} + a^2(t)\, e^{-j[4\pi f_c t + 2\theta(t)]} + 2|s_B(t)|^2 \right) dt ,
so
    E = \frac{1}{2} \int_0^T |s_B(t)|^2\, dt + \frac{1}{2} \int_0^T a^2(t) \cos[4\pi f_c t + 2\theta(t)]\, dt .    (1)
For example, suppose that s_B(t) = x(t) + j\, y(t) = u_T(t), where u_T(t) = u(t) - u(t - T). Then a(t) = u_T(t) and \theta(t) = 0, and the second term of Eq. (1) is
    \frac{1}{2} \int_0^T u_T^2(t) \cos(4\pi f_c t)\, dt = \frac{\sin(4\pi f_c t)}{8\pi f_c}\Big|_0^T = \frac{T}{2} \cdot \frac{\sin(2\omega_c T)}{2\omega_c T} \approx 0 ,
where \omega_c = 2\pi f_c. Recall that s_B(t) is a low-pass signal. If the duration T satisfies the condition \omega_c T \gg 1, then there are many cycles of the carrier within the time duration T. In this case the second term of Eq. (1) integrates to (approximately) zero, and
    E \approx \frac{1}{2} \int_0^T |s_B(t)|^2\, dt .
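A quick numerical sketch of this approximation; the rectangular envelope and the value f_c = 100/T are assumptions:

```python
import numpy as np

T, fc = 1.0, 100.0                      # many carrier cycles in [0, T]
t = np.linspace(0, T, 200001)
dt = t[1] - t[0]
sp = np.cos(2 * np.pi * fc * t)         # s_p(t) = Re{ s_B(t) e^{j2πfc t} }, s_B(t) = 1

E = np.sum(sp**2) * dt                  # numerical ∫_0^T s_p^2(t) dt
gap = abs(E - T / 2)                    # size of the second term of Eq. (1)
print(E, gap)
```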
End of Lecture 6
Lecture 7 Baseband and Passband - Part 2
Digital Communications
Linear Bandpass Systems
Consider the linear bandpass system shown below.
Real bandpass system:
    h_p(t):  s_p(t) (bandpass input)  \to  r_p(t) (bandpass output),   r_p(t) = s_p(t) * h_p(t) ,
with s_p(t) = \mathrm{Re}[\, s_B(t)\, e^{j2\pi f_c t}\, ].
Complex lowpass equivalent system:
    h_B(t):  s_B(t) (input)  \to  r_B(t) (output),   r_B(t) = s_B(t) * h_B(t) .
Linear Bandpass Systems (cont.)
Since h_p(t) is real, we have
    H_p^*(-f) = \left[ \int_{-\infty}^{+\infty} h_p(t)\, e^{j2\pi f t}\, dt \right]^* = \int_{-\infty}^{+\infty} h_p(t)\, e^{-j2\pi f t}\, dt = H_p(f) .
We define the (two-sided band) equivalent lowpass response by
    H_B(f - f_c) = H_p(f) for f > 0, and 0 for f < 0 .
Then
    H_B^*(-f - f_c) = 0 for f > 0, and H_B^*(-f - f_c) = H_p^*(-f) = H_p(f) for f < 0 .
Hence
    H_p(f) = H_B(f - f_c) + H_B^*(-f - f_c) .
Linear Bandpass Systems (cont.)
(Figure: H_p(f) consists of the band H_B(f - f_c), centered at +f_c, and the band H_B^*(-f - f_c), centered at -f_c. The lowpass response is written in magnitude-phase form as)
    H_B(f) = |H_B(f)|\, e^{j \arg H_B(f)} .
Linear Bandpass Systems (cont.)
Therefore, letting f' = f - f_c,
    F^{-1}\{ H_B(f - f_c) \} = \int_{-\infty}^{+\infty} H_B(f - f_c)\, e^{j2\pi f t}\, df = e^{j2\pi f_c t} \int_{-\infty}^{+\infty} H_B(f')\, e^{j2\pi f' t}\, df' = h_B(t)\, e^{j2\pi f_c t} ;
similarly, letting f' = -f - f_c,
    F^{-1}\{ H_B^*(-f - f_c) \} = e^{-j2\pi f_c t} \left[ \int_{-\infty}^{+\infty} H_B(f')\, e^{j2\pi f' t}\, df' \right]^* = h_B^*(t)\, e^{-j2\pi f_c t} .
Hence
    h_p(t) = h_B(t)\, e^{j2\pi f_c t} + h_B^*(t)\, e^{-j2\pi f_c t} = 2\,\mathrm{Re}\{ h_B(t)\, e^{j2\pi f_c t} \} ,
and h_B(t) is the equivalent complex lowpass impulse response of the bandpass system h_p(t).
Linear Bandpass Systems (cont.)
With R_p(f) = H_p(f)\, S_p(f) (Lecture 6, p.9),
    R_p(f) = \left[ H_B(f - f_c) + H_B^*(-f - f_c) \right] \cdot \frac{1}{2}\left[ S_B(f - f_c) + S_B^*(-f - f_c) \right]
           = \frac{1}{2}\Big[ H_B(f - f_c) S_B(f - f_c) + H_B^*(-f - f_c) S_B^*(-f - f_c)
             + \underbrace{H_B(f - f_c) S_B^*(-f - f_c)}_{=0} + \underbrace{H_B^*(-f - f_c) S_B(f - f_c)}_{=0} \Big]
           = \frac{1}{2}\left[ R_B(f - f_c) + R_B^*(-f - f_c) \right] ,
where the cross terms vanish because the positive- and negative-frequency bands do not overlap. Hence
    R_B(f) = H_B(f)\, S_B(f) ,
or, in the time domain,
    r_B(t) = h_B(t) * s_B(t) .
In the last equation, all the quantities are complex.
Linear Bandpass Systems - Example
Example: Consider a linear bandpass system with
    H_p(f) = \frac{A}{\beta + j2\pi(f - f_c)} + \frac{A}{\beta - j2\pi(f + f_c)} .
Find the corresponding baseband system description.
Solution: Since H_B(f - f_c) = H_p(f) for f > 0, and the second term is negligible for f > 0 when f_c \gg \beta / 2\pi,
    H_B(f) = \frac{A}{\beta + j2\pi f} ,
so that
    h_B(t) = A\, e^{-\beta t}\, u(t) .
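The relation H_p(f) = H_B(f - f_c) + H_B*(-f - f_c) can be checked for a lowpass response of the form h_B(t) = A e^{-βt} u(t), comparing the formula against the FFT of h_p(t) = 2 Re{h_B(t) e^{j2πf_c t}}. All parameter values below are assumptions:

```python
import numpy as np

A, beta, fc = 1.0, 5.0, 200.0
fs, N = 4096.0, 65536
t = np.arange(N) / fs
hp = 2 * A * np.exp(-beta * t) * np.cos(2 * np.pi * fc * t)   # h_p(t)

f = np.fft.fftfreq(N, 1 / fs)
Hp_fft = np.fft.fft(hp) / fs          # approximates ∫ h_p(t) e^{-j2πft} dt
HB = lambda f: A / (beta + 2j * np.pi * f)                    # H_B(f)
Hp_formula = HB(f - fc) + np.conj(HB(-f - fc))

err = np.max(np.abs(Hp_fft - Hp_formula))                     # sampling/aliasing error only
print(err)
```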
Bandpass Stationary Process
Assume a bandpass process N(t) can be described as
    N(t) = a(t) \cos[2\pi f_c t + \theta(t)]
         = a(t)\cos\theta(t)\, \cos 2\pi f_c t - a(t)\sin\theta(t)\, \sin 2\pi f_c t
         = X(t) \cos(2\pi f_c t) - Y(t) \sin(2\pi f_c t)
         = \mathrm{Re}\{ Z(t)\, e^{j2\pi f_c t} \} ,
where
    Z(t) = X(t) + jY(t) = a(t)\, e^{j\theta(t)} ,
f_c is the carrier frequency and Z(t) is the complex envelope process.
Question: When is N(t) stationary?
Bandpass Stationary Process - Average
    E[N(t)] = E[X(t)] \cos(2\pi f_c t) - E[Y(t)] \sin(2\pi f_c t)
            = u_X(t) \cos(2\pi f_c t) - u_Y(t) \sin(2\pi f_c t) .
Analysis (1): If the frequency components in u_X(t) and u_Y(t) are much less than the carrier frequency f_c, which means that u_X(t) and u_Y(t) can be considered constant over a carrier period T_c, then we must have u_X(t) = u_Y(t) = 0 to make the mean value of N(t), that is E[N(t)], a constant.
Bandpass Stationary Process - Correlation
    \phi_{NN}(t + \tau, t) = E\{ N(t+\tau) N(t) \}
        = E\{ [X(t+\tau)\cos(2\pi f_c (t+\tau)) - Y(t+\tau)\sin(2\pi f_c (t+\tau))]\,[X(t)\cos(2\pi f_c t) - Y(t)\sin(2\pi f_c t)] \}
        = \phi_{XX}(\tau) \cos[2\pi f_c (t+\tau)]\cos(2\pi f_c t)
        + \phi_{YY}(\tau) \sin[2\pi f_c (t+\tau)]\sin(2\pi f_c t)
        - \phi_{XY}(\tau) \cos[2\pi f_c (t+\tau)]\sin(2\pi f_c t)
        - \phi_{YX}(\tau) \sin[2\pi f_c (t+\tau)]\cos(2\pi f_c t) .
Analysis (2): Using the triangular (product-to-sum) formulas,
    \phi_{NN}(t + \tau, t) = \frac{1}{2}\Big\{ [\phi_{XX}(\tau) + \phi_{YY}(\tau)]\cos(2\pi f_c \tau)
        + \underbrace{[\phi_{XX}(\tau) - \phi_{YY}(\tau)]}_{=0}\cos[2\pi f_c (2t+\tau)]
        - [\phi_{YX}(\tau) - \phi_{XY}(\tau)]\sin(2\pi f_c \tau)
        - \underbrace{[\phi_{YX}(\tau) + \phi_{XY}(\tau)]}_{=0}\sin[2\pi f_c (2t+\tau)] \Big\} .
Bandpass Stationary Process - Condition
So the necessary and sufficient conditions for N(t) to be WSS are:
    u_X = u_Y = 0  (hence u_N = 0),
X(t) and Y(t) are WSS, and
    \phi_{XX}(\tau) = \phi_{YY}(\tau),   \phi_{YX}(\tau) = \phi_{XY}(-\tau) = -\phi_{XY}(\tau) .
Then
    \phi_{NN}(\tau) = \phi_{XX}(\tau)\cos(2\pi f_c \tau) - \phi_{YX}(\tau)\sin(2\pi f_c \tau) .
Bandpass Stationary Process - Envelope
For the complex envelope Z(t) = X(t) + jY(t) (a WSS baseband process),
    \phi_{ZZ}(\tau) = \frac{1}{2} E[Z(t+\tau) Z^*(t)]
                   = \frac{1}{2}\{ \phi_{XX}(\tau) + \phi_{YY}(\tau) + j[\phi_{YX}(\tau) - \phi_{XY}(\tau)] \}
                   = \phi_{XX}(\tau) + j\,\phi_{YX}(\tau) .
Since N(t) = X(t)\cos(2\pi f_c t) - Y(t)\sin(2\pi f_c t) = \mathrm{Re}\{ Z(t)\, e^{j2\pi f_c t} \},
    \phi_{NN}(\tau) = \phi_{XX}(\tau)\cos(2\pi f_c \tau) - \phi_{YX}(\tau)\sin(2\pi f_c \tau)
                   = \mathrm{Re}\{ \phi_{ZZ}(\tau)\, e^{j2\pi f_c \tau} \}
                   = \frac{1}{2}\{ \phi_{ZZ}(\tau)\, e^{j2\pi f_c \tau} + \phi_{ZZ}^*(\tau)\, e^{-j2\pi f_c \tau} \} .
Taking Fourier transforms,
    \Phi_{NN}(f) = \frac{1}{2}\left[ \Phi_{ZZ}(f - f_c) + \Phi_{ZZ}^*(-f - f_c) \right] ,
    \Phi_{ZZ}(f) = 2\, u(f + f_c)\, \Phi_{NN}(f + f_c)    (equivalent baseband signal, the complex envelope).
Bandpass Stationary Process - Properties of the Quadrature Components
    \Phi_{ZZ}(f) = \Phi_{XX}(f) + j\,\Phi_{YX}(f) .
Since \phi_{XX}(\tau) = \phi_{XX}(-\tau) is real and even, \Phi_{XX}(f) is real and even; since \phi_{YX}(\tau) = -\phi_{YX}(-\tau) is real and odd, \Phi_{YX}(f) is imaginary and odd, so j\Phi_{YX}(f) is real and odd.
Therefore \Phi_{ZZ}(f) is real, but not necessarily even.
When \phi_{XY}(\tau) = 0, \Phi_{ZZ}(f) is real and even. (As homework.)
Representation of Bandpass White Gaussian Noise
For bandpass white Gaussian noise, \phi_{XX}(\tau) = \phi_{YY}(\tau) = \phi_{ZZ}(\tau) (real) and \phi_{YX}(\tau) = 0, with
    \Phi_{ZZ}(f) = N_0 for |f| \le B, and 0 otherwise    (equivalent lowpass noise).
The autocorrelation function of the noise is then
    \phi_{ZZ}(\tau) = F^{-1}\{ \Phi_{ZZ}(f) \} = N_0\, \frac{\sin(2\pi B \tau)}{\pi \tau} .
As B \to \infty, we have \phi_{ZZ}(\tau) = N_0\, \delta(\tau)    (AWGN).
(Figure: \Phi_{NN}(f) has height N_0 / 2 over bands of width B around \pm f_c.)
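The inverse transform giving φ_ZZ(τ) = N_0 sin(2πBτ)/(πτ) can be verified by numerical integration of the flat lowpass spectrum; N_0, B, and τ below are assumed values:

```python
import numpy as np

N0, B = 2.0, 3.0
tau = 0.21
f = np.linspace(-B, B, 400001)
integrand = N0 * np.cos(2 * np.pi * f * tau)   # imaginary part integrates to zero
df = f[1] - f[0]
# trapezoidal rule for ∫_{-B}^{B} Φ_ZZ(f) e^{j2πfτ} df
phi_num = (integrand.sum() - 0.5 * (integrand[0] + integrand[-1])) * df
phi_formula = N0 * np.sin(2 * np.pi * B * tau) / (np.pi * tau)
print(phi_num, phi_formula)
```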
End of Lecture 7
Lecture 8 Signal Space Representation - Part 1
Digital Communications
Vector Spaces
Let \vec{u} = (u_1, u_2, \ldots, u_n) and \vec{v} = (v_1, v_2, \ldots, v_n) be two n-dimensional vectors. We define the following operations:
Inner product of \vec{u} and \vec{v}:
    \langle \vec{u}, \vec{v} \rangle = \sum_{i=1}^n u_i v_i^* .
Norm of \vec{u}:
    \| \vec{u} \| = \sqrt{ \langle \vec{u}, \vec{u} \rangle } = \sqrt{ \sum_{i=1}^n |u_i|^2 } .
Linear independence of a set of vectors: no one vector can be represented as a linear combination of the remaining vectors.
Vector Spaces (cont.)
Triangle Inequality:
    \| \vec{u} + \vec{v} \| \le \| \vec{u} \| + \| \vec{v} \| .
Cauchy-Schwarz Inequality:
    | \langle \vec{u}, \vec{v} \rangle | \le \| \vec{u} \| \cdot \| \vec{v} \| .
Gram-Schmidt Procedure: will be discussed later.
Vector Spaces (cont.)
Define the inner product between two waveforms u(t) and v(t) as
    \langle u(t), v(t) \rangle = \int_{-\infty}^{+\infty} u(t)\, v^*(t)\, dt ,
with \langle u(t), v(t) \rangle = \langle v(t), u(t) \rangle^*, and define the norm of the waveform u(t) as
    \| u(t) \| = \sqrt{ \langle u(t), u(t) \rangle } .
Note that the squared norm
    \| u(t) \|^2 = \langle u(t), u(t) \rangle = \int_{-\infty}^{+\infty} |u(t)|^2\, dt
is the energy contained in waveform u(t).
Vector Spaces (cont.)
Cauchy-Schwarz Inequality:
    | \langle u(t), v(t) \rangle | \le \| u(t) \| \cdot \| v(t) \| ,  i.e.
    \left| \int u(t)\, v^*(t)\, dt \right| \le \left( \int |u(t)|^2\, dt \right)^{1/2} \left( \int |v(t)|^2\, dt \right)^{1/2} .
Triangle Inequality:
    \| u(t) + v(t) \| \le \| u(t) \| + \| v(t) \| ,  i.e.
    \left( \int |u(t) + v(t)|^2\, dt \right)^{1/2} \le \left( \int |u(t)|^2\, dt \right)^{1/2} + \left( \int |v(t)|^2\, dt \right)^{1/2} .
Proof of Cauchy-Schwarz Inequality
Let \alpha = r\, e^{j \arg[\langle u(t), v(t) \rangle]} be a scalar, where r is an arbitrary real number. Then
    0 \le \| u(t) + \alpha v(t) \|^2 = \langle u(t) + \alpha v(t),\ u(t) + \alpha v(t) \rangle
       = \| u(t) \|^2 + \alpha^* \langle u(t), v(t) \rangle + \alpha \langle u(t), v(t) \rangle^* + |\alpha|^2 \| v(t) \|^2
       = \| u(t) \|^2 + 2 r\, |\langle u(t), v(t) \rangle| + r^2 \| v(t) \|^2 .
Since this holds for all real r, let r = -\| u(t) \| / \| v(t) \|:
    \| u(t) \|^2 - 2\, \frac{\| u(t) \|}{\| v(t) \|}\, |\langle u(t), v(t) \rangle| + \| u(t) \|^2 \ge 0 ,
hence
    | \langle u(t), v(t) \rangle | \le \| u(t) \| \cdot \| v(t) \| .
Proof of Triangle Inequality
    \| u(t) + v(t) \|^2 = \| u(t) \|^2 + \langle u(t), v(t) \rangle + \langle u(t), v(t) \rangle^* + \| v(t) \|^2
        = \| u(t) \|^2 + 2\,\mathrm{Re}\langle u(t), v(t) \rangle + \| v(t) \|^2
        \le \| u(t) \|^2 + 2\, |\langle u(t), v(t) \rangle| + \| v(t) \|^2
        \le \| u(t) \|^2 + 2\, \| u(t) \| \| v(t) \| + \| v(t) \|^2    (Cauchy-Schwarz Inequality)
        = \left( \| u(t) \| + \| v(t) \| \right)^2 ,
hence
    \| u(t) + v(t) \| \le \| u(t) \| + \| v(t) \| .
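Both inequalities can be sanity-checked on arbitrary complex vectors; a quick numerical aside (the vectors are random, not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.normal(size=8) + 1j * rng.normal(size=8)
v = rng.normal(size=8) + 1j * rng.normal(size=8)

inner = np.vdot(v, u)   # <u, v> = sum u_i v_i*  (note: np.vdot conjugates its first argument)
cs_ok = abs(inner) <= np.linalg.norm(u) * np.linalg.norm(v)          # Cauchy-Schwarz
tri_ok = np.linalg.norm(u + v) <= np.linalg.norm(u) + np.linalg.norm(v)  # triangle
print(cs_ok, tri_ok)
```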
Orthogonal Expansions
Consider a real signal s(t) with finite energy
    E_s = \| s(t) \|^2 = \int_{-\infty}^{+\infty} s^2(t)\, dt .
Suppose there exists a set of orthonormal functions \{ f_n(t) \}, n = 0, \ldots, N-1. By orthonormal we mean
    \langle f_n(t), f_k(t) \rangle = \int_{-\infty}^{+\infty} f_n(t)\, f_k(t)\, dt = \delta[n - k] ,
where \delta[n] = 1 for n = 0 and \delta[n] = 0 for n \ne 0.
We now approximate s(t) as the weighted linear sum
    \hat{s}(t) = \sum_{k=0}^{N-1} s_k\, f_k(t)
and wish to determine the s_k, k = 0, 1, \ldots, N-1, to minimize the square error
    \varepsilon = \int_{-\infty}^{+\infty} [\, s(t) - \hat{s}(t)\, ]^2\, dt = \int_{-\infty}^{+\infty} \left[ s(t) - \sum_{k=0}^{N-1} s_k\, f_k(t) \right]^2 dt .
Orthogonal Expansions (cont.)
To minimize the mean square error, we take the partial derivative with respect to each s_n and set it equal to zero; i.e., for the n-th term we solve
    \frac{\partial \varepsilon}{\partial s_n} = -2 \int \left[ s(t) - \sum_{k=0}^{N-1} s_k\, f_k(t) \right] f_n(t)\, dt = 0 .
Using the orthonormal property of the basis functions, we have
    s_n = \int s(t)\, f_n(t)\, dt = \langle s(t), f_n(t) \rangle .
The minimum square error is then
    \varepsilon_{\min} = \int s^2(t)\, dt - 2 \sum_{k=0}^{N-1} s_k \int s(t)\, f_k(t)\, dt + \sum_{k=0}^{N-1} \sum_{l=0}^{N-1} s_k s_l \int f_k(t)\, f_l(t)\, dt
        = E_s - 2 \sum_{k=0}^{N-1} s_k^2 + \sum_{k=0}^{N-1} s_k^2
        = E_s - \sum_{k=0}^{N-1} s_k^2 .
For a complete set of basis functions, \varepsilon_{\min} = 0 and
    E_s = \sum_{k=0}^{N-1} s_k^2 = \| s(t) \|^2 = \| \hat{s}(t) \|^2 .
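The projection rule s_n = <s, f_n> and the error formula ε_min = E_s - Σ s_k² can be checked numerically; the basis {1, √2 cos 2πt} on [0, 1] and the test signal below are assumptions for illustration:

```python
import numpy as np

t = np.linspace(0, 1, 100001)
dt = t[1] - t[0]
f0 = np.ones_like(t)                       # orthonormal pair on [0, 1]
f1 = np.sqrt(2) * np.cos(2 * np.pi * t)
# test signal: 3 f0 + 2 f1 plus a component outside the span
s = 3.0 + 2.0 * np.sqrt(2) * np.cos(2 * np.pi * t) + np.sin(4 * np.pi * t)

s0 = np.sum(s * f0) * dt                   # projection coefficients <s, f_k>
s1 = np.sum(s * f1) * dt
Es = np.sum(s**2) * dt                     # signal energy
s_hat = s0 * f0 + s1 * f1
eps = np.sum((s - s_hat)**2) * dt          # residual square error
print(s0, s1, eps, Es - (s0**2 + s1**2))
```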
Span of a Set of Functions
Let f_k(t), for k = 1, 2, 3, \ldots, N, be a set of functions, not necessarily orthogonal. Then the span of them is the set containing all their linear combinations, that is,
    \mathrm{Span}\{ f_k(t), k = 1, 2, 3, \ldots, N \} = \left\{ s(t) : s(t) = \sum_{k=1}^N s_k f_k(t),\ s_k \in \mathbb{C} \right\} .
The above definition can also be written in vector form as
    \left\{ s(t) : s(t) = \vec{s}^{\,T} \vec{f}(t),\ \vec{s} \in \mathbb{C}^N \right\},  where  \vec{f}(t) = ( f_1(t), \ldots, f_N(t) )^T .
An Example
Let, for 0 \le t \le T,
    s_{2n-1}(t) = \sqrt{\frac{2}{T}} \cos\frac{2\pi n t}{T},   n = 1, 2, 3, \ldots ,
    s_{2n}(t)   = \sqrt{\frac{2}{T}} \sin\frac{2\pi n t}{T},   n = 1, 2, 3, \ldots ,
together with s_0(t) = \sqrt{1/T} (the complete trigonometric function set). Then \mathrm{Span}\{ s_k(t), k = 0, 1, 2, \ldots \} contains all continuous functions in the duration [0, T], and this means
    s(t) = \sum_{k=0}^{\infty} a_k s_k(t) = a_0 s_0(t) + \sum_{n=1}^{\infty} \left[ a_{2n-1} s_{2n-1}(t) + a_{2n} s_{2n}(t) \right] .
(See Eg. 4-2-1 on page 167.)
Properties of Signal Vectors
Let f_k(t), for k = 1, 2, 3, \ldots, N, be a set of orthonormal functions. Then for any
    s(t),\ s_j(t),\ s_k(t) \in \mathrm{Span}\{ f_k(t), k = 1, 2, 3, \ldots, N \}
we have
    s(t) = \vec{s}^{\,T} \vec{f}(t),  s_j(t) = \vec{s}_j^{\,T} \vec{f}(t),  s_k(t) = \vec{s}_k^{\,T} \vec{f}(t),  where  \vec{f}(t) = ( f_1(t), \ldots, f_N(t) )^T .
Then:
I.   \| s(t) \|^2 = \| \vec{s} \|^2 = E_s    (signal energy).
II.  \rho_{jk} = \frac{ \int s_j(t)\, s_k^*(t)\, dt }{ \| s_j(t) \| \cdot \| s_k(t) \| } = \frac{ \langle \vec{s}_j, \vec{s}_k \rangle }{ \| \vec{s}_j \| \cdot \| \vec{s}_k \| }    (signal correlation).
III. d_{jk} = \| s_j(t) - s_k(t) \| = \| \vec{s}_j - \vec{s}_k \|    (Euclidean distance).
Properties of Signal Vectors (cont.)
Proof (I): Let
    s(t) = \sum_{k=0}^{N-1} s_k f_k(t),   \vec{s} = ( s_0, \ldots, s_{N-1} )^T ,
where \{ f_k(t) \} is an orthonormal set. Then the signal energy is
    E_s = \int_0^T | s(t) |^2\, dt = \sum_{k=0}^{N-1} \sum_{l=0}^{N-1} s_k s_l^* \int_0^T f_k(t)\, f_l^*(t)\, dt = \sum_{k=0}^{N-1} | s_k |^2 = \| \vec{s} \|^2 .
The energy in s(t) is just the squared length of its signal vector \vec{s}.
Properties of Signal Vectors (cont.)
(II) Signal correlation: the cross-correlation, or similarity, between the signals s_j(t) and s_k(t) is
    \rho_{jk} = \frac{ \int_0^T s_j(t)\, s_k^*(t)\, dt }{ \| s_j(t) \| \| s_k(t) \| }
             = \frac{ \sum_{n=0}^{N-1} \sum_{m=0}^{N-1} s_{jn} s_{km}^* \int_0^T f_n(t)\, f_m^*(t)\, dt }{ \| \vec{s}_j \| \| \vec{s}_k \| }    (the integral vanishes for n \ne m)
             = \frac{ \sum_{n=0}^{N-1} s_{jn} s_{kn}^* }{ \| \vec{s}_j \| \| \vec{s}_k \| }
             = \frac{ \langle \vec{s}_j, \vec{s}_k \rangle }{ \| \vec{s}_j \| \| \vec{s}_k \| } .
Note that \rho_{jk} = 0 if s_j(t) and s_k(t) are orthogonal, and \rho_{jk} = 1 if s_j(t) = c\, s_k(t) with c a scalar.
(III) Euclidean Distance : The Euclidean distance between
two waveforms and is ( ) t s
j
( ) t s
k
( ) ( )
( ) ( )
( ) ( )
( ) ( ) ( ) ( )
( ) ( )
( )
k j
N
n
kn jn
N
n
kn jn
N
m
km
N
n
jn
N
m
m km
N
n
n jn
T
N
n
n kn
N
m
m km
N
l
l jl
N
n
n jn
N
m
m km
T
N
n
n jn
T
k j
k j jk
s s s s
s s s s
dt t s t s
t s t s t s t s
dt t s t s
dt t s t s
t s t s d
, ,
= =
+ =

+ =
=
=
=

=
2
1
1
0
2
2
1
1
0
1
0
2
1
0
2
2
1
1
0
1
0
0
1
0
1
0
1
0
1
0
2
1
2
1
0
0
1
0
2
1
0
2
} {
) 2 (
} ] 2
[ {
} ] [ {
} ] [ {



End of Lecture 8
Lecture 9 Signal Space Representation - Part 2
Digital Communications
Vector and Signal Spaces - Review
Inner product:
    \langle \vec{u}, \vec{v} \rangle = \sum_{i=1}^n u_i v_i^* ,   \langle u(t), v(t) \rangle = \int u(t)\, v^*(t)\, dt .
Norm:
    \| \vec{u} \| = \sqrt{ \langle \vec{u}, \vec{u} \rangle } = \sqrt{ \sum_{i=1}^n |u_i|^2 } ,   \| u(t) \| = \sqrt{ \int |u(t)|^2\, dt } .
Linearly independent (definition).
Triangle Inequality:
    \| \vec{u} + \vec{v} \| \le \| \vec{u} \| + \| \vec{v} \| ,   \| u(t) + v(t) \| \le \| u(t) \| + \| v(t) \| .
Cauchy-Schwarz Inequality:
    | \langle \vec{u}, \vec{v} \rangle | \le \| \vec{u} \| \| \vec{v} \| ,   | \langle u(t), v(t) \rangle | \le \| u(t) \| \| v(t) \| .
Gram-Schmidt Procedure (discussed in this lecture).
Vector and Signal Spaces - Review (cont.)
Let f_k(t), for k = 1, 2, \ldots, N, be a set of orthonormal functions. Then for any function or waveform
    s(t),\ s_j(t),\ s_k(t) \in \mathrm{Span}\{ f_k(t), k = 1, 2, 3, \ldots, N \}
we have
    s(t) = \vec{s}^{\,T} \vec{f}(t),  s_j(t) = \vec{s}_j^{\,T} \vec{f}(t),  s_k(t) = \vec{s}_k^{\,T} \vec{f}(t),  \vec{f}(t) = ( f_1(t), \ldots, f_N(t) )^T .
Then
    E_s = \| s(t) \|^2 = \| \vec{s} \|^2 ,
    \rho_{jk} = \frac{ \int s_j(t)\, s_k^*(t)\, dt }{ \| s_j(t) \| \| s_k(t) \| } = \frac{ \langle \vec{s}_j, \vec{s}_k \rangle }{ \| \vec{s}_j \| \| \vec{s}_k \| } ,
    d_{jk} = \| s_j(t) - s_k(t) \| = \| \vec{s}_j - \vec{s}_k \| .
New Definitions for Baseband Signals
As we know, s_p(t) is the passband signal and s_B(t) the baseband signal. The energy of s_p(t) is
    E = \int_{-\infty}^{+\infty} s_p^2(t)\, dt = \frac{1}{2} \int_{-\infty}^{+\infty} | s_B(t) |^2\, dt
(see Signal Energy in Lecture 6, pp. 14-15). New definitions for the equivalent baseband signals (all the signals are in baseband form):
    E = \frac{1}{2} \| s_B(t) \|^2 ,
    \rho_{jk} = \frac{ \mathrm{Re}\left[ \int_0^T s_{Bj}(t)\, s_{Bk}^*(t)\, dt \right] }{ \| s_{Bj}(t) \| \cdot \| s_{Bk}(t) \| } ,
    d_{jk} = \sqrt{ \frac{1}{2} \| s_{Bj}(t) - s_{Bk}(t) \|^2 }    (as homework).
Proof of the Cross-correlation Formula
Claim:
    \rho_{jk} = \frac{ \int_0^T s_j(t)\, s_k(t)\, dt }{ \| s_j(t) \| \| s_k(t) \| } = \frac{ \mathrm{Re}\left[ \int_0^T s_{Bj}(t)\, s_{Bk}^*(t)\, dt \right] }{ \| s_{Bj}(t) \| \| s_{Bk}(t) \| } .
Proof: The two passband signals have
    s_j(t) = \mathrm{Re}[\, s_{Bj}(t)\, e^{j2\pi f_c t}\, ] = \frac{1}{2}\left[ s_{Bj}(t)\, e^{j2\pi f_c t} + s_{Bj}^*(t)\, e^{-j2\pi f_c t} \right] ,
    s_k(t) = \mathrm{Re}[\, s_{Bk}(t)\, e^{j2\pi f_c t}\, ] = \frac{1}{2}\left[ s_{Bk}(t)\, e^{j2\pi f_c t} + s_{Bk}^*(t)\, e^{-j2\pi f_c t} \right] ,
so
    s_j(t)\, s_k(t) = \frac{1}{4}\left\{ 2\,\mathrm{Re}[\, s_{Bj}(t)\, s_{Bk}^*(t)\, ] + 2\,\mathrm{Re}[\, s_{Bj}(t)\, s_{Bk}(t)\, e^{j4\pi f_c t}\, ] \right\} ,
where the second term integrates to (approximately) zero, as shown on the next page. Since \| s_j(t) \|^2 = \frac{1}{2}\| s_{Bj}(t) \|^2 and \| s_k(t) \|^2 = \frac{1}{2}\| s_{Bk}(t) \|^2,
    \rho_{jk} = \frac{ \frac{1}{2}\, \mathrm{Re}\left[ \int_0^T s_{Bj}(t)\, s_{Bk}^*(t)\, dt \right] }{ \frac{1}{\sqrt{2}}\| s_{Bj}(t) \| \cdot \frac{1}{\sqrt{2}}\| s_{Bk}(t) \| } = \frac{ \mathrm{Re}\left[ \int_0^T s_{Bj}(t)\, s_{Bk}^*(t)\, dt \right] }{ \| s_{Bj}(t) \| \| s_{Bk}(t) \| } .
Proof of the Cross-correlation Formula (cont.)
Consider the second term. With s_{Bj}(t) = a_j(t)\, e^{j\theta_j(t)} and s_{Bk}(t) = a_k(t)\, e^{j\theta_k(t)},
    \mathrm{Re}\left[ \int_0^T s_{Bj}(t)\, s_{Bk}(t)\, e^{j4\pi f_c t}\, dt \right] = \int_0^T a_j(t)\, a_k(t) \cos[\, 4\pi f_c t + \theta_j(t) + \theta_k(t)\, ]\, dt .
Splitting [0, T] into carrier-period intervals of length T_c = 1/f_c, within which the slowly varying a_j, a_k, \theta_j, \theta_k are approximately constant,
    \approx \sum_n a_j(nT_c)\, a_k(nT_c) \int_{nT_c}^{(n+1)T_c} \cos[\, 4\pi f_c t + \theta_j(nT_c) + \theta_k(nT_c)\, ]\, dt
    = \sum_n a_j(nT_c)\, a_k(nT_c) \cdot \frac{ \sin[\, 4\pi f_c t + \theta_j(nT_c) + \theta_k(nT_c)\, ] }{ 4\pi f_c } \Big|_{nT_c}^{(n+1)T_c}
    = 0 ,
since the sine term takes the same value at both ends of each interval (4\pi f_c T_c = 4\pi).
Example - Orthonormal Set and Functions
(Figure: the orthonormal functions f_0(t), f_1(t), f_2(t) are rectangular pulses on [0, T/3], [T/3, 2T/3], [2T/3, T] respectively, and the signals are rectangular waveforms: s_0(t) = 1 on [0, T/3], s_1(t) = 1 on [0, 2T/3], s_2(t) = 1 on [T/3, T], s_3(t) = 1 on [0, T].)

=
) (
) (
) (
) (
2
1
0



Orthonormal basis

=
=
=
=
) ( ) (
) ( ) (
) ( ) (
) ( ) (
3 3
2 2
1 1
0 0



T
T
T
T
s s
s s
s s
s s

=
3 /
3 /
3 /
,
3 /
3 /
0
,
0
3 /
3 /
,
0
0
3 /
3 2 1 0
T
T
T
s
T
T s T
T
s
T
s

Projection
a
Computer and Information College
9
Example - Energy, Correlation and Distance
For the example, we have
    E_0 = \frac{1}{2} \| s_0(t) \|^2 = \frac{1}{2} \| \vec{s}_0 \|^2 = T/6 ,
    E_1 = \frac{1}{2} \| s_1(t) \|^2 = \frac{1}{2} \| \vec{s}_1 \|^2 = T/3 ,
    E_2 = \frac{1}{2} \| s_2(t) \|^2 = \frac{1}{2} \| \vec{s}_2 \|^2 = T/3 ,
    E_3 = \frac{1}{2} \| s_3(t) \|^2 = \frac{1}{2} \| \vec{s}_3 \|^2 = T/2 .
The correlation between s_1(t) and s_2(t) is
    \rho_{12} = \frac{ \mathrm{Re}\langle s_1(t), s_2(t) \rangle }{ \| s_1(t) \| \| s_2(t) \| } = \frac{ \mathrm{Re}[\, \vec{s}_1^{\,T} \vec{s}_2^{\,*}\, ] }{ \| \vec{s}_1 \| \| \vec{s}_2 \| } = \frac{ T/3 }{ \sqrt{2T/3}\, \sqrt{2T/3} } = 0.5 .
The Euclidean distance between s_0(t) and s_2(t) is
    d_{02} = \sqrt{ \frac{1}{2} \| s_0(t) - s_2(t) \|^2 } = \sqrt{ \frac{1}{2} \| \vec{s}_0 - \vec{s}_2 \|^2 } = \sqrt{T/2} .
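The numbers above can be re-derived directly from the signal vectors (taking T = 1), using the baseband conventions E = ||s||²/2 and d = sqrt(||s_j − s_k||²/2):

```python
import numpy as np

T = 1.0
c = np.sqrt(T / 3)
s0 = np.array([c, 0, 0]); s1 = np.array([c, c, 0])
s2 = np.array([0, c, c]); s3 = np.array([c, c, c])

E0 = 0.5 * np.dot(s0, s0)                                            # T/6
E3 = 0.5 * np.dot(s3, s3)                                            # T/2
rho12 = np.dot(s1, s2) / (np.linalg.norm(s1) * np.linalg.norm(s2))   # 0.5
d02 = np.sqrt(0.5 * np.sum((s0 - s2)**2))                            # sqrt(T/2)
print(E0, E3, rho12, d02)
```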
Gram-Schmidt Orthonormalization
Given a finite set of signals \{ s_0(t), s_1(t), \ldots, s_{M-1}(t) \} with finite energies, an orthonormal set of basis functions \{ f_0(t), f_1(t), \ldots, f_{N-1}(t) \} can be constructed according to the following algorithm:
Step 1: Set g_0(t) = s_0(t), and define
    f_0(t) = \frac{ g_0(t) }{ \| g_0(t) \| } .    (1)
Step 2: Set g_1(t) = s_1(t) - \langle s_1(t), f_0(t) \rangle\, f_0(t), and define
    f_1(t) = \frac{ g_1(t) }{ \| g_1(t) \| } .    (2)
Step 3: Set g_i(t) = s_i(t) - \sum_{j=0}^{i-1} \langle s_i(t), f_j(t) \rangle\, f_j(t), and define
    f_i(t) = \frac{ g_i(t) }{ \| g_i(t) \| } .    (3)
Gram-Schmidt Orthonormalization (cont.)
Repeat Step 3 until all the s_i(t)'s have been used. If one or more of the above steps yields g_i(t) = 0, then omit these from consideration.
In the end, a set of N \le M complex orthonormal basis functions \{ f_0(t), f_1(t), \ldots, f_{N-1}(t) \} is obtained.
The dimension N of the complex vector space equals M if and only if the set of waveforms \{ s_0(t), s_1(t), \ldots, s_{M-1}(t) \} is linearly independent, that is, none of the above waveforms is a linear combination of the others.
Comments on the Gram-Schmidt Approach
Two questions:
1. What does g_i(t) = 0 imply in the Gram-Schmidt approach?
   A: s_i(t) is a linear combination of s_0(t), s_1(t), \ldots, s_{i-1}(t).
2. Why are \{ f_0(t), f_1(t), \ldots, f_{N-1}(t) \} orthonormal?
   A: Verify by taking the inner products of the basis functions with each other.
An Example
Express the following functions or waveforms in terms of a set of orthonormal basis functions. Please note that although the waveforms are real, we will assume that they are the complex envelopes of a set of real bandpass waveforms.
(Figure: s_0(t) = 1 on [0, T/3], s_1(t) = 1 on [0, 2T/3], s_2(t) = 1 on [T/3, T], s_3(t) = 1 on [0, T].)
An Example (cont.)
1. Set g_0(t) = s_0(t). Then
    f_0(t) = \frac{ g_0(t) }{ \| g_0 \| } = \sqrt{3/T} for 0 \le t \le T/3, and 0 elsewhere.
2. Set g_1(t) = s_1(t) - \langle s_1(t), f_0(t) \rangle\, f_0(t), where
    \langle s_1, f_0 \rangle = \int_0^{T/3} s_1(t)\, f_0^*(t)\, dt = \int_0^{T/3} \sqrt{3/T}\, dt = \sqrt{T/3} ,
so g_1(t) = s_1(t) - \sqrt{T/3}\, f_0(t) = 1 for T/3 \le t \le 2T/3, and 0 elsewhere. Then
    f_1(t) = \frac{ g_1(t) }{ \| g_1 \| } = \sqrt{3/T} for T/3 \le t \le 2T/3, and 0 elsewhere.
An Example (cont.)
3. Set g_2(t) = s_2(t) - \langle s_2(t), f_0(t) \rangle\, f_0(t) - \langle s_2(t), f_1(t) \rangle\, f_1(t), where
    \langle s_2, f_0 \rangle = \int_0^{T} s_2(t)\, f_0^*(t)\, dt = 0 ,
    \langle s_2, f_1 \rangle = \int_{T/3}^{2T/3} \sqrt{3/T}\, dt = \sqrt{T/3} .
Then
    f_2(t) = \frac{ g_2(t) }{ \| g_2 \| } = \sqrt{3/T} for 2T/3 \le t \le T, and 0 elsewhere.
4. Set g_3(t) = s_3(t) - \langle s_3, f_0 \rangle f_0(t) - \langle s_3, f_1 \rangle f_1(t) - \langle s_3, f_2 \rangle f_2(t), with each \langle s_3, f_k \rangle = \sqrt{T/3}. But g_3(t) = 0, so ignore Step 4.
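The four steps above can be run mechanically on sampled waveforms (T = 1 and the grid size are assumptions); three orthonormal functions result, and g_3(t) = 0 because s_3(t) = s_0(t) + s_2(t):

```python
import numpy as np

T, M = 1.0, 30000
t = np.linspace(0, T, M, endpoint=False)
dt = T / M
rect = lambda a, b: ((t >= a) & (t < b)).astype(float)
sigs = [rect(0, T/3), rect(0, 2*T/3), rect(T/3, T), rect(0, T)]   # s0..s3

basis = []
for s in sigs:
    g = s.copy()
    for f in basis:
        g -= (np.sum(s * f) * dt) * f        # subtract projections <s, f_j> f_j
    norm = np.sqrt(np.sum(g**2) * dt)
    if norm > 1e-9:                          # omit g_i(t) = 0
        basis.append(g / norm)

n_basis = len(basis)                         # 3, since s3 = s0 + s2
gram = np.array([[np.sum(a * b) * dt for b in basis] for a in basis])
err = np.max(np.abs(gram - np.eye(n_basis)))  # deviation from orthonormality
print(n_basis, err)
```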
An Example (cont.)
Orthonormal basis functions:
(Figure: f_0(t), f_1(t), f_2(t) are rectangular pulses of height \sqrt{3/T} on [0, T/3], [T/3, 2T/3], [2T/3, T] respectively.)
An Example (cont.)
Projections on the orthonormal basis functions:
    s_0(t) \to \vec{s}_0 = ( \sqrt{T/3},\ 0,\ 0 ) ,
    s_1(t) \to \vec{s}_1 = ( \sqrt{T/3},\ \sqrt{T/3},\ 0 ) ,
    s_2(t) \to \vec{s}_2 = ( 0,\ \sqrt{T/3},\ \sqrt{T/3} ) ,
    s_3(t) \to \vec{s}_3 = ( \sqrt{T/3},\ \sqrt{T/3},\ \sqrt{T/3} ) .
(Figure: the four signal vectors plotted in the three-dimensional signal space with axes f_0, f_1, f_2.)
End of Lecture 9
Lecture 10 Linearly Modulated Signals
Digital Communications
Modulation Block Diagram
    \{ b_k \} \to [mapper] \to \{ a_k \} \to [pulse shaping] \to s_B(t) \to [up-conversion] \to s_P(t)
\{ b_k \}: the information sequence to be transmitted.
\{ a_k \}: the symbol data sequence.
s_B(t): complex baseband signal.
s_P(t): real passband signal, with
    s_P(t) = \mathrm{Re}[\, s_B(t)\, e^{j2\pi f_c t}\, ] .
Modulation Classification
Memoryless modulation: the mapping from \{ a_n \} to the waveforms \{ s_m(t), m = 1, 2, \ldots, M \} is done without any constraint from the previously transmitted waveforms.
Linear modulation: the principle of superposition applies in the mapping of the digital sequence into successive waveforms, e.g.
    s_P(t) = \sum_{n=-\infty}^{+\infty} a_n\, p(t - nT) \cos(2\pi f_c t) .
Non-linear modulation: the principle of superposition does not apply to the signals transmitted in successive time intervals, e.g.
    s_P(t) = A \cos\left[ 2\pi f_c t + \sum_{n=-\infty}^{+\infty} a_n\, p(t - nT) \right] .
Modulation with memory: not memoryless (e.g., a coded signal), e.g.
    s_P(t) = \sum_{n=-\infty}^{+\infty} \left[ a_n\, p_1(t - nT) + a_{n-1}\, p_2(t - (n-1)T) \right] \cos(2\pi f_c t) .
Gray Mapping
Which mapping is better? Why?
(Figure: two candidate bit-to-symbol mappings for the same constellation.)
Gray Mapping (cont.)
Optimum mapping principle: between adjacent constellation points, only one bit changes (Gray mapping).
Linear Memoryless Modulation
    \sum_k a_k\, \delta(t - kT) \to [\, p(t)\, ] \to s_B(t) = \sum_{k=-\infty}^{+\infty} a_k\, p(t - kT)
\{ a_k \}: complex information sequence.
p(t): pulse shaping waveform (real).
T: symbol duration.
1/T: baud rate, or symbol rate.
Power Spectrum
The power spectrum of the equivalent baseband signal s_B(t) is obtained by first calculating its autocorrelation function:
    \phi_{BB}(t + \tau, t) = \frac{1}{2} E[\, s_B(t + \tau)\, s_B^*(t)\, ]
        = \frac{1}{2} \sum_{n=-\infty}^{+\infty} \sum_{m=-\infty}^{+\infty} E[\, a_n a_m^*\, ]\, p(t + \tau - nT)\, p(t - mT)
        = \sum_{n=-\infty}^{+\infty} \sum_{m=-\infty}^{+\infty} \phi_{aa}[n - m]\, p(t + \tau - nT)\, p(t - mT) .
Power Spectrum (cont.)
Next, compute the average autocorrelation over a duration T:
    \bar{\phi}_{BB}(\tau) = \frac{1}{T} \int_{-T/2}^{T/2} \phi_{BB}(t + \tau, t)\, dt
        = \frac{1}{T} \sum_n \sum_m \phi_{aa}[n - m] \int_{-T/2}^{T/2} p(t + \tau - nT)\, p(t - mT)\, dt .
Substituting k = n - m and t' = t - mT, the sum over m stitches the integration intervals together:
    \bar{\phi}_{BB}(\tau) = \frac{1}{T} \sum_{k=-\infty}^{+\infty} \phi_{aa}[k] \int_{-\infty}^{+\infty} p(t + \tau - kT)\, p(t)\, dt .
Power Spectrum (cont.)

Taking the Fourier transform of the average autocorrelation gives the power spectrum:

$\Phi_{BB}(f) = \int_{-\infty}^{+\infty}\bar{\phi}_{BB}(\tau)\,e^{-j2\pi f\tau}\,d\tau$
$= \frac{1}{T}\sum_k \phi_{aa}(k)\int\!\!\int p(t+\tau-kT)\,p(t)\,e^{-j2\pi f\tau}\,dt\,d\tau$

(let $\tau' = t+\tau-kT$)

$= \frac{1}{T}\sum_k \phi_{aa}(k)\,e^{-j2\pi fkT}\int p(\tau')\,e^{-j2\pi f\tau'}\,d\tau'\int p(t)\,e^{+j2\pi ft}\,dt$
$= \frac{1}{T}\sum_k \phi_{aa}(k)\,e^{-j2\pi fkT}\,P(f)\,P^*(f)$
$= \frac{1}{T}\,\Phi_{aa}(fT)\,|P(f)|^2$

where $P(f) = \int p(t)e^{-j2\pi ft}dt$ and $\Phi_{aa}(fT) = \sum_k \phi_{aa}(k)\,e^{-j2\pi fkT}$.
Power Spectrum (cont.)

If the symbols $\{a_n\}$ are uncorrelated with zero mean, then (from the discussion in the previous lecture, Lecture 7, p.12)

$\phi_{aa}[m] = \frac{1}{2}E[a_{n+m}a_n^*] = \frac{1}{2}\sigma_a^2\,\delta[m], \qquad \sigma_a^2 = E[|a_n|^2]$

so $\Phi_{aa}(fT) = \frac{1}{2}\sigma_a^2$ and

$\Phi_{BB}(f) = \frac{1}{T}\,\Phi_{aa}(fT)\,|P(f)|^2 = \frac{\sigma_a^2}{2T}\,|P(f)|^2$

which is band-limited whenever $p(t)$ is. For the passband signal $s_P(t) = \mathrm{Re}\{s_B(t)e^{j2\pi f_c t}\}$,

$\Phi_{PP}(f) = \int \phi_{PP}(\tau)e^{-j2\pi f\tau}d\tau = \frac{1}{2}\big[\Phi_{BB}(f-f_c) + \Phi_{BB}(-f-f_c)\big]$
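The uncorrelated-symbol assumption behind $\Phi_{aa}(fT) = \sigma_a^2/2$ can be checked numerically. This is an illustrative sketch (QPSK symbols and the sample sizes are my own choices): the sample estimate of $\phi_{aa}[m]$ should be $\sigma_a^2/2$ at lag 0 and near zero elsewhere.

```python
import numpy as np

# Check that uncorrelated, zero-mean QPSK symbols satisfy
# phi_aa[m] = (1/2) E[a_{n+m} a_n^*] = (sigma_a^2 / 2) delta[m],
# so the baseband power spectrum reduces to (sigma_a^2 / 2T) |P(f)|^2.

rng = np.random.default_rng(0)
N = 200_000
# QPSK symbols: a_n in {(+-1 +- j)/sqrt(2)}, so sigma_a^2 = E[|a_n|^2] = 1
a = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)

def phi_aa(m):
    """Sample estimate of (1/2) E[a_{n+m} a_n^*]."""
    if m == 0:
        return 0.5 * np.mean(np.abs(a) ** 2)
    return 0.5 * np.mean(a[m:] * np.conj(a[:N - m]))

print(phi_aa(0), phi_aa(1), phi_aa(2))
```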
Example of Shaping Pulse

Example 1. Strictly band-limited pulse:

$p(t) = A\,\dfrac{\sin(\pi t/T)}{\pi t/T}, \qquad P(f) = AT, \quad |f| \le \dfrac{1}{2T}$ (and $P(f) = 0$ otherwise)
Example of Shaping Pulse (cont.)

Example 2. Strictly time-limited pulse:

$p(t) = \dfrac{A}{2}\Big[1 + \cos\dfrac{2\pi}{T}\Big(t - \dfrac{T}{2}\Big)\Big], \qquad 0 \le t \le T$

$P(f) = \dfrac{AT}{2}\,\dfrac{\sin(\pi fT)}{\pi fT\,(1 - f^2T^2)}\,e^{-j\pi fT}$

Example of Shaping Pulse (cont.)

Questions:
Is there any pulse that is both time- and band-limited?
How to simulate and/or implement the modulator?
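The closed-form spectrum of the time-limited (raised-cosine-in-time) pulse in Example 2 can be verified by direct numerical integration. A small sketch, with $A = T = 1$ and illustrative frequency points chosen to avoid the removable singularities at $fT = 0$ and $fT = 1$:

```python
import numpy as np

# Numerical check of P(f) for p(t) = (A/2)[1 + cos(2*pi*(t - T/2)/T)] on [0, T]:
# P(f) = (A*T/2) * sin(pi*f*T) / (pi*f*T*(1 - (f*T)^2)) * exp(-j*pi*f*T)

A, T = 1.0, 1.0
N = 50_000
t = (np.arange(N) + 0.5) * (T / N)            # midpoint sample times on [0, T]
p = (A / 2) * (1 + np.cos(2 * np.pi * (t - T / 2) / T))

def P_numeric(f):
    """P(f) by midpoint-rule integration of p(t) e^{-j 2 pi f t}."""
    return np.sum(p * np.exp(-2j * np.pi * f * t)) * (T / N)

def P_formula(f):
    x = np.pi * f * T
    return (A * T / 2) * np.sin(x) / (x * (1 - (f * T) ** 2)) * np.exp(-1j * x)

for f in (0.3 / T, 0.7 / T, 2.5 / T):
    print(f, P_numeric(f), P_formula(f))
```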
Amplitude Shift Keying (ASK)

Amplitude shift keyed (ASK) signals transmit information via the amplitude of the signals. Two types of ASK signals:
Pulse amplitude modulation (PAM)
Quadrature amplitude modulation (QAM)

For a PAM or a QAM signal, the transmitted band-pass waveform is

$s_P(t) = \mathrm{Re}\Big\{\sum_k a_k\,p(t-kT)\,e^{j2\pi f_c t}\Big\}$
$= \sum_k a_k^{(r)}\,p(t-kT)\cos(2\pi f_c t) - \sum_k a_k^{(i)}\,p(t-kT)\sin(2\pi f_c t)$

where
$\{a_k\}$: complex information sequence
$p(t)$: real shaping pulse waveform
$f_c$: carrier frequency
$T$: symbol duration
Pulse Amplitude Modulation (PAM)

For PAM, $a_k \in \{\pm 1, \pm 3, \ldots, \pm(M-1)\}$.

During any given baud interval, the band-pass waveform $s(t)$ can take on one of the $M$ possible values, i.e.,

$s_m(t) = A\,a_m\,p(t)\cos(2\pi f_c t), \qquad m = 1, \ldots, M$

Usually $M = 2^k$ for some integer $k$, i.e., $k = \log_2 M$.

Note that one M-ary symbol is transmitted every $T$ seconds, and $T$ is called the symbol or baud duration. The baud rate is $R = 1/T$. Since each M-ary symbol corresponds to $k$ information bits, the bit rate is $R_b = kR$ and the bit duration is $T_b = T/k$.
Pulse Amplitude Modulation (PAM, cont.)

Assuming that $f_c T \gg 1$, the energy in the signal $s_m(t)$ is

$E_m = \int_0^T s_m^2(t)\,dt = A^2 a_m^2\int_0^T p^2(t)\cos^2(2\pi f_c t)\,dt$
$= \frac{A^2 a_m^2}{2}\int_0^T p^2(t)\,[1 + \cos(4\pi f_c t)]\,dt$
$= \frac{A^2 a_m^2}{2}\Big[\int_0^T p^2(t)\,dt + \int_0^T p^2(t)\cos(4\pi f_c t)\,dt\Big]$
$= \frac{A^2 a_m^2}{2}\,E_p$

where $E_p = \int_0^T p^2(t)\,dt$ is the energy of the pulse $p(t)$, and the last equality uses the fact that $f_c T \gg 1$, so the $\cos(4\pi f_c t)$ term integrates to approximately zero.

End of Lecture 10
Digital Communications
Lecture 11  QAM, PAM and PSK

Digital Modulation (review)

Baud Rate: number of symbols transmitted per second, $1/T$
Bit Rate: number of information bits transmitted per second, $\log_2 M\,/\,T$

Linear Modulation (Lecture 10, pp.7-9):
$s_B(t) = \sum_k a_k\,p(t-kT), \qquad a_k = a_{kc} + ja_{ks}$
$s_P(t) = \mathrm{Re}\{s_B(t)e^{j2\pi f_c t}\} = \sum_k a_{kc}\,p(t-kT)\cos(2\pi f_c t) - \sum_k a_{ks}\,p(t-kT)\sin(2\pi f_c t)$

Power Spectrum:
$\Phi_{BB}(f) = \frac{1}{T}\,\Phi_{aa}(fT)\,|P(f)|^2, \qquad P(f) = \int p(t)e^{-j2\pi ft}\,dt$
$\Phi_{aa}(fT) = \sum_m \phi_{aa}[m]\,e^{-j2\pi fmT}, \qquad \phi_{aa}[m] = \frac{1}{2}E\{a_{n+m}a_n^*\}$

Different Modulations: linear & non-linear, memoryless & with memory
Mapping from Data to Symbol: Gray mapping rule
Amplitude Shift Keying - ASK

Amplitude shift keying (ASK) signals transmit information via the amplitude of the signals. There are two types of ASK signals:
Pulse amplitude modulation (PAM)
Quadrature amplitude modulation (QAM)

$s_B(t) = \sum_k a_k\,p(t-kT)$
$s_P(t) = \sum_k a_{kc}\,p(t-kT)\cos(2\pi f_c t) - \sum_k a_{ks}\,p(t-kT)\sin(2\pi f_c t)$

Pulse Amplitude Modulation (PAM)

$a_m \in \{\pm A, \pm 3A, \ldots, \pm(M-1)A\}$
$s_m(t) = a_m\,p(t)\cos(2\pi f_c t)$

The signal energy for every symbol is

$E_m = \frac{a_m^2}{2}\,E_p, \qquad E_p = \int_0^T p^2(t)\,dt \quad (f_c T \gg 1)$
PAM Average Energy

Assuming equally likely symbols, the average symbol energy is

$E_{av} = \frac{1}{M}\sum_{m=1}^{M}E_m = \frac{E_p}{2M}\sum_{m=1}^{M}a_m^2 = \frac{E_p A^2}{2M}\sum_{m=1}^{M}(2m-1-M)^2$
$= \frac{E_p A^2}{2M}\Big[4\sum_{m=1}^{M}m^2 - 4(M+1)\sum_{m=1}^{M}m + M(M+1)^2\Big]$
$= \frac{E_p A^2}{2M}\Big[4\cdot\frac{M(M+1)(2M+1)}{6} - 4(M+1)\cdot\frac{M(M+1)}{2} + M(M+1)^2\Big]$
$= \frac{A^2 E_p}{2}\cdot\frac{M^2-1}{3}$

The last equation used the identities

$\sum_{k=1}^{n}k = \frac{n(n+1)}{2}; \qquad \sum_{k=1}^{n}k^2 = \frac{n(n+1)(2n+1)}{6}$
PAM Average Energy (cont.)

So the final average symbol energy is

$E_{av} = \frac{E_p}{2M}\sum_{m=1}^{M}a_m^2 = \frac{(M^2-1)A^2E_p}{6}, \qquad A^2 = \frac{6E_{av}}{(M^2-1)E_p}$

The average energy per bit is

$E_b = \frac{E_{av}}{k} = \frac{E_{av}}{\log_2 M}$

The average transmitted power is given by $P_{av} = E_{av}/T$.
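The closed form $E_{av} = (M^2-1)A^2E_p/6$ can be confirmed by brute force over the level set $a_m = (2m-1-M)A$. A small check using exact rational arithmetic (the normalization $A = E_p = 1$ is an illustrative assumption):

```python
from fractions import Fraction

# Brute-force check of the M-ary PAM average-energy formula
# E_av = (M^2 - 1) A^2 E_p / 6 for equally likely levels a_m = (2m - 1 - M) A.

def E_av_bruteforce(M, A=Fraction(1), E_p=Fraction(1)):
    levels = [(2 * m - 1 - M) * A for m in range(1, M + 1)]
    return sum(a * a * E_p / 2 for a in levels) / M   # mean of E_m = a_m^2 E_p / 2

def E_av_formula(M, A=Fraction(1), E_p=Fraction(1)):
    return (M * M - 1) * A * A * E_p / 6

for M in (2, 4, 8, 16):
    print(M, E_av_bruteforce(M), E_av_formula(M))
```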
Base Function and Signal Vector

The PAM signals can be expressed in terms of signal vectors. Since all the $s_m(t)$ are linearly dependent (they just differ in a scale factor), there is just one basis function. Starting from $s_1(t) = A\,p(t)\cos 2\pi f_c t$ specially, we have the basis function

$\phi(t) = \sqrt{\frac{2}{E_p}}\,p(t)\cos 2\pi f_c t$

Then

$s_m(t) = \sqrt{\frac{E_p}{2}}\,a_m\,\phi(t)$

Hence, the relation between the original signal $s_m(t)$ and its projection $s_m$ on the basis is

$s_m = \sqrt{\frac{E_p}{2}}\,a_m, \qquad a_m \in \{\pm A, \pm 3A, \ldots, \pm(M-1)A\}$

where $a_m$ already includes the amplitude factor $A$.
PAM Signal Space Diagram

With M-ary PAM, $a_k$ takes one of the $M$ possible values $\{\pm A, \pm 3A, \ldots, \pm(M-1)A\}$. The signal space diagram for 8-PAM is a set of 8 equally spaced points on a line, with spacing $2A\sqrt{E_p/2}$.

The minimum distance is directly

$d_{\min} = \sqrt{\frac{E_p}{2}}\cdot 2A = A\sqrt{2E_p} = \sqrt{\frac{12\,E_{av}}{M^2-1}}$

using $E_{av} = \frac{(M^2-1)A^2E_p}{6}$. The normalized distance is defined as

$d_{\min}\big|_{E_{av}=1} = \sqrt{\frac{12}{M^2-1}}, \qquad \text{e.g. } d_{\min, M=16} \approx 0.217$

Why do we set $E_{av} = 1$? (So that different constellations can be compared at the same average energy.)
Quadrature Amplitude Modulation (QAM)

Quadrature amplitude modulated (QAM) signals can be thought of as two independent amplitude modulations on the inphase and quadrature carrier components. The QAM signal has the form

$s_P(t) = \sum_k a_{kc}\,p(t-kT)\cos(2\pi f_c t) - \sum_k a_{ks}\,p(t-kT)\sin(2\pi f_c t)$
$= \mathrm{Re}\Big\{\sum_k a_k\,p(t-kT)\,e^{j2\pi f_c t}\Big\}, \qquad a_k = a_{kc} + ja_{ks}$

where
$\{a_{kc}\}$ = inphase information sequence
$\{a_{ks}\}$ = quadrature information sequence

With QAM, the $a_{kc}$ and $a_{ks}$ take on discrete values from the set

$a_{kc}, a_{ks} \in \{\pm A, \pm 3A, \ldots, \pm(\sqrt{M}-1)A\}$
QAM Base Function

QAM signals can be expressed in terms of signal vectors. Since the functions $p(t)\cos(2\pi f_c t)$ and $p(t)\sin(2\pi f_c t)$, with $f_c T \gg 1$, are orthogonal, we have two basis functions

$\phi_1(t) = \sqrt{\frac{2}{E_p}}\,p(t)\cos(2\pi f_c t), \qquad \phi_2(t) = -\sqrt{\frac{2}{E_p}}\,p(t)\sin(2\pi f_c t)$

The transmitted QAM passband waveforms are

$s_m(t) = a_{mc}\,p(t)\cos(2\pi f_c t) - a_{ms}\,p(t)\sin(2\pi f_c t) = \mathrm{Re}[(a_{mc}+ja_{ms})\,p(t)\,e^{j2\pi f_c t}], \qquad m = 1,\ldots,M$

The complex envelopes, or baseband signals, are

$s_m^B(t) = (a_{mc}+ja_{ms})\,p(t) = a_m\,p(t), \qquad m = 1,\ldots,M$
QAM Signal Vector

For the case when $M = (\sqrt{M})^2$, the resulting signal space diagram has a squared constellation. In this case, the QAM signal can be thought of as 2 PAM signals in quadrature, with one-half the average power in each of the quadrature components. Then

$s_m(t) = \sqrt{\frac{E_p}{2}}\,a_{mc}\,\phi_1(t) + \sqrt{\frac{E_p}{2}}\,a_{ms}\,\phi_2(t), \qquad m = 1,\ldots,M, \quad 0 \le t \le T$

Hence

$\vec{s}_m = \sqrt{\frac{E_p}{2}}\,(a_{mc},\,a_{ms})$
16QAM Signal Constellation

Example: signal constellation for 16-QAM ($M = 16$, $\sqrt{M} = 4$), with Gray-coded labels:

0000 0001 0011 0010
0100 0101 0111 0110
1100 1101 1111 1110
1000 1001 1011 1010

The distance between QAM signals is

$d_{mn}^2 = \|\vec{s}_m - \vec{s}_n\|^2 = \frac{E_p}{2}\big[(a_{mc}-a_{nc})^2 + (a_{ms}-a_{ns})^2\big]$

The minimum occurs when only the I or Q component differs by $2A$, e.g. $a_{mc}-a_{nc} = 2A$, $a_{ms} = a_{ns}$:

$d_{\min} = \sqrt{\frac{E_p}{2}\cdot 4A^2} = A\sqrt{2E_p}$
16QAM Minimum Distance

The energy in the waveform $s_m(t)$ is

$E_m = \|\vec{s}_m\|^2 = \frac{E_p}{2}\big(a_{mc}^2 + a_{ms}^2\big), \qquad a_{mc}, a_{ms} \in \{(2m-1-\sqrt{M})A\}$

The average energy is

$E_{av} = \frac{1}{M}\sum_m E_m = \frac{E_p}{2}\Big[\frac{1}{\sqrt{M}}\sum_m a_{mc}^2 + \frac{1}{\sqrt{M}}\sum_m a_{ms}^2\Big] = \frac{E_p}{2}\cdot 2\cdot\frac{(M-1)A^2}{3} = \frac{(M-1)A^2E_p}{3}$

using directly the PAM average-energy result (surrounded by the red line on p.5), with $\sqrt{M}$ levels per dimension.

The minimum distance and the normalized minimum distance between signals in terms of the average energy are

$d_{\min} = A\sqrt{2E_p} = \sqrt{\frac{6E_{av}}{M-1}}, \qquad d_{\min}\big|_{E_{av}=1} = \sqrt{\frac{6}{M-1}}, \quad \text{e.g. } d_{\min, M=16} \approx 0.632$
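The 16-QAM numbers above can be reproduced by enumerating the constellation. A small sketch (the choice $E_p = 2$, so that $\sqrt{E_p/2} = 1$ and the vectors are just $(a_{mc}, a_{ms})$, is an assumption made for convenience):

```python
import math, itertools

# Check the square-QAM results E_av = (M - 1) A^2 E_p / 3 and
# d_min|_{E_av=1} = sqrt(6 / (M - 1))  (~0.632 for M = 16),
# with E_p = 2 so the signal vectors reduce to (a_mc, a_ms).

def qam_points(M, A=1.0):
    L = int(round(math.sqrt(M)))
    levels = [(2 * m - 1 - L) * A for m in range(1, L + 1)]  # {..., -3A, -A, A, 3A, ...}
    return [complex(i, q) for i in levels for q in levels]

M = 16
pts = qam_points(M)
E_av = sum(abs(p) ** 2 for p in pts) / M
d_min = min(abs(p - q) for p, q in itertools.combinations(pts, 2))
d_min_normalized = d_min / math.sqrt(E_av)
print(E_av, d_min, d_min_normalized)   # E_av = 10 = (M-1)*1*2/3, d_min/sqrt(E_av) ~ 0.632
```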
Phase Shift Keying (PSK)

Phase shift keying (PSK) signals transmit the information via the phase of the signals. The transmitted bandpass waveform is

$s_P(t) = A\sum_k p(t-kT)\cos(2\pi f_c t + \theta_k), \qquad \theta_k \in \Big\{\frac{2\pi(m-1)}{M},\ m = 1,\ldots,M\Big\}$

The baseband signal, or complex envelope, is

$s_B(t) = A\sum_k e^{j\theta_k}\,p(t-kT)$

During any given baud interval $T$, the waveform $s(t)$ can take on one of the $M$ possible values, i.e.,

$s_m(t) = A\,p(t)\cos(2\pi f_c t + \theta_m) = A\,p(t)\cos\theta_m\cos 2\pi f_c t - A\,p(t)\sin\theta_m\sin 2\pi f_c t, \qquad m = 1,\ldots,M$

The PSK signals all have equal energy:

$E_m = \frac{A^2E_p}{2} = E_{av} \quad \text{for all } m$
PSK Base Function and Vector Form

As in the case of QAM, we can express the $s_m(t)$ in terms of the orthogonal basis functions

$\phi_1(t) = \sqrt{\frac{2}{E_p}}\,p(t)\cos(2\pi f_c t), \qquad \phi_2(t) = -\sqrt{\frac{2}{E_p}}\,p(t)\sin(2\pi f_c t)$

so that

$s_m(t) = A\sqrt{\frac{E_p}{2}}\cos\theta_m\,\phi_1(t) + A\sqrt{\frac{E_p}{2}}\sin\theta_m\,\phi_2(t)$

And the projection vector is

$\vec{s}_m = A\sqrt{\frac{E_p}{2}}\,(\cos\theta_m,\,\sin\theta_m) = A\sqrt{\frac{E_p}{2}}\Big(\cos\frac{2\pi}{M}(m-1),\ \sin\frac{2\pi}{M}(m-1)\Big)$
PSK Minimum Distance

$d_{\min} = \sqrt{2A^2E_p}\,\sin\frac{\pi}{M} = 2\sqrt{E_{av}}\,\sin\frac{\pi}{M}, \qquad d_{\min}\big|_{E_{av}=1} = 2\sin\frac{\pi}{M}, \quad \text{e.g. } d_{\min, M=16} = 2\sin\frac{\pi}{16} \approx 0.390$

Example of 8-PSK signal constellation ($M = 8$): eight points equally spaced on a circle of radius $A\sqrt{E_p/2}$ in the $(\phi_1(t), \phi_2(t))$ plane. Please prove that the minimum distance between any two adjacent signal points is $A\sqrt{2E_p}\,\sin(\pi/8)$.
PSK Minimum Distance (cont.)

Proof:

$\vec{s}_m = A\sqrt{\frac{E_p}{2}}\Big(\cos\frac{2\pi}{M}(m-1),\ \sin\frac{2\pi}{M}(m-1)\Big), \qquad \vec{s}_n = A\sqrt{\frac{E_p}{2}}\Big(\cos\frac{2\pi}{M}(n-1),\ \sin\frac{2\pi}{M}(n-1)\Big)$

$d_{mn}^2 = \|\vec{s}_m - \vec{s}_n\|^2$
$= \frac{A^2E_p}{2}\Big[\Big(\cos\frac{2\pi}{M}(m-1)-\cos\frac{2\pi}{M}(n-1)\Big)^2 + \Big(\sin\frac{2\pi}{M}(m-1)-\sin\frac{2\pi}{M}(n-1)\Big)^2\Big]$
$= \frac{A^2E_p}{2}\Big[1 + 1 - 2\cos\frac{2\pi}{M}(m-1)\cos\frac{2\pi}{M}(n-1) - 2\sin\frac{2\pi}{M}(m-1)\sin\frac{2\pi}{M}(n-1)\Big]$
$= A^2E_p\Big[1 - \cos\frac{2\pi}{M}(m-n)\Big]$

Hence

$d_{mn} = \sqrt{A^2E_p\Big[1 - \cos\frac{2\pi}{M}(m-n)\Big]}$

$d_{\min} = d_{mn}\big|_{|m-n|=1} = \sqrt{A^2E_p\Big[1-\cos\frac{2\pi}{M}\Big]} = \sqrt{2A^2E_p}\,\sin\frac{\pi}{M} = 2\sqrt{E_{av}}\,\sin\frac{\pi}{M}$
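The result $d_{\min} = 2\sqrt{E_{av}}\sin(\pi/M)$ can be verified by enumerating the constellation directly. A minimal sketch with $E_{av} = 1$ (an assumed normalization), which also shows where 16-PSK sits between 16-PAM (0.217) and 16-QAM (0.632) at the same average energy:

```python
import math

# Place M PSK points on a circle of radius sqrt(E_av) = 1 and take the
# minimum pairwise distance; it should equal 2*sin(pi/M).

def psk_dmin(M):
    pts = [complex(math.cos(2 * math.pi * m / M), math.sin(2 * math.pi * m / M))
           for m in range(M)]
    return min(abs(pts[i] - pts[j]) for i in range(M) for j in range(i + 1, M))

for M in (4, 8, 16):
    print(M, psk_dmin(M), 2 * math.sin(math.pi / M))
# For M = 16 this gives 2*sin(pi/16) ~ 0.390.
```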
Summary of PAM, QAM and PSK

Baseband and passband signals:

$s_B(t) = \sum_k a_k\,p(t-kT)$
$s_P(t) = \sum_k a_{kc}\,p(t-kT)\cos(2\pi f_c t) - \sum_k a_{ks}\,p(t-kT)\sin(2\pi f_c t)$

PAM and PSK are special cases of QAM:
QAM: $a_{kc}$ and $a_{ks}$ are random;
PAM is real, that is, $a_{kc} = a_k$ and $a_{ks} = 0$;
PSK: $a_{kc} = \cos\theta_k$, $a_{ks} = \sin\theta_k$.

Which is which? PAM, PSK, or QAM? (Identify the constellations shown in the figure.)
Supplementary Homework

Consider two passband waveforms

$s_1(t) = A\cos(2\pi f_c t + \theta_1)\,u_T(t)$
$s_2(t) = A\cos[2\pi(f_c + \Delta f)t + \theta_2]\,u_T(t)$

where $\theta_1$ and $\theta_2$ are arbitrary random carrier phases and the condition $f_c T \gg 1$ is satisfied. Find conditions on $\Delta f$ so that the signals $s_1(t)$ and $s_2(t)$ are orthogonal for all values of $\theta_1$ and $\theta_2$.

End of Lecture 11
Digital Communications
Lecture 12  Other Digital Modulations

Frequency Shift Keying (FSK)

Frequency shift keying (FSK) signals transmit information via the frequency of the signals. The transmitted bandpass waveform is

$s(t) = A\sum_k p(t-kT)\cos(2\pi f_c t + 2\pi f_k t)$

where $f_k = m_k\,\Delta f$, $m_k \in \{1,\ldots,M\}$.

During any given symbol interval, the transmitted signal is

$s_m(t) = A\,p(t)\cos(2\pi f_c t + 2\pi f_m t)$

A special case is orthogonal FSK, when $p(t) = u_T(t)$ and $\Delta f$ is chosen so that the $s_m(t)$'s are orthogonal.

Question: What choice of $\Delta f$ yields orthogonal waveforms? The condition on $\Delta f$ follows from the derivation on the next slide.
FSK Correlation Coefficients

For $p(t) = u_T(t)$, we can write

$s_m(t) = \sqrt{\frac{2E_{av}}{T}}\,\cos(2\pi f_c t + 2\pi f_m t), \qquad E_{av} = \frac{A^2E_p}{2} = \frac{A^2}{2}\int_0^T p^2(t)\,dt = \frac{A^2T}{2}$

The normalized correlation between two waveforms is

$\rho_{jk} = \frac{1}{E_{av}}\int_0^T s_j(t)\,s_k(t)\,dt = \frac{2}{T}\int_0^T \cos(2\pi f_c t + 2\pi f_j t)\cos(2\pi f_c t + 2\pi f_k t)\,dt$
$= \frac{1}{T}\int_0^T \cos[2\pi(f_j-f_k)t]\,dt + \frac{1}{T}\int_0^T \cos[4\pi f_c t + 2\pi(f_j+f_k)t]\,dt$
$= \frac{\sin[2\pi(f_j-f_k)T]}{2\pi(f_j-f_k)T} + \frac{\sin[4\pi f_c T + 2\pi(f_j+f_k)T]}{4\pi f_c T + 2\pi(f_j+f_k)T}$

where the second term approaches zero because $f_c T \gg 1$. Hence $\rho_{jk} = 0$ whenever $(f_j-f_k)T$ is a nonzero integer multiple of $1/2$.

FSK Correlation Coefficients (cont.)

The choice of $\Delta f = 1/2T$ is the smallest possible frequency separation that yields orthogonal waveforms.
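The minimum separation $\Delta f = 1/(2T)$ can be confirmed numerically. A sketch with illustrative parameters ($T = 1$, $f_c T = 50$): tones separated by $1/(2T)$ or $1/T$ are essentially orthogonal, while a separation of $1/(4T)$ leaves a correlation of about $2/\pi$.

```python
import numpy as np

# Normalized correlation of two FSK tones cos(2*pi*(fc + df)*t) on [0, T].
T, fc = 1.0, 50.0                           # symbol duration, carrier (fc*T = 50)
t = (np.arange(100_000) + 0.5) * T / 100_000  # midpoint sample times

def rho(df):
    s1 = np.cos(2 * np.pi * fc * t)
    s2 = np.cos(2 * np.pi * (fc + df) * t)
    return np.mean(s1 * s2) / 0.5           # divide by tone power (1/2)

print(rho(1 / (2 * T)), rho(1 / T), rho(1 / (4 * T)))
```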
FSK Vector Representation

Equal-energy M-ary orthogonal FSK waveforms have the following vector representation:

$\vec{s}_1 = \sqrt{E_{av}}\,\vec{e}_1, \quad \vec{s}_2 = \sqrt{E_{av}}\,\vec{e}_2, \quad \ldots, \quad \vec{s}_M = \sqrt{E_{av}}\,\vec{e}_M$

where $\vec{e}_i$ is the unit vector along the $i$-th basis direction, with basis functions

$\phi_i(t) = \sqrt{\frac{2}{T}}\cos(2\pi f_c t + 2\pi i\,\Delta f\,t), \qquad i = 1, \ldots, M$

The distance between any two orthogonal signals is

$d_{ij} = \|\vec{s}_i - \vec{s}_j\| = \sqrt{\|\vec{s}_i\|^2 + \|\vec{s}_j\|^2} = \sqrt{2E_{av}}$

so that $d_{\min} = \sqrt{2E_{av}}$ and $d_{\min}|_{E_{av}=1} = \sqrt{2} \approx 1.414$.

FSK Signal Constellation

Example of orthogonal 3-FSK signal constellation ($M = 3$): three points at distance $\sqrt{E_{av}}$ from the origin along the orthogonal axes $\phi_1(t)$, $\phi_2(t)$, $\phi_3(t)$.
Biorthogonal Signals

A set of $M$ biorthogonal signals can be easily constructed from a set of $M/2$ orthogonal signals by including the negatives of the orthogonal signals. M-ary biorthogonal waveforms have the vector representation

$\vec{s}_1 = \sqrt{E_{av}}\,\vec{e}_1, \ \ldots, \ \vec{s}_{M/2} = \sqrt{E_{av}}\,\vec{e}_{M/2}, \qquad \vec{s}_{M/2+1} = -\sqrt{E_{av}}\,\vec{e}_1, \ \ldots, \ \vec{s}_M = -\sqrt{E_{av}}\,\vec{e}_{M/2}$

where the vectors $\vec{e}_i$ have length $M/2$. The distance between any two biorthogonal signals is either $\sqrt{2E_{av}}$ (an orthogonal pair) or $2\sqrt{E_{av}}$ (an antipodal pair).

Biorthogonal Signals (cont.)

Example of a biorthogonal signal constellation ($M = 6$): the points $\pm\sqrt{E_{av}}\,\vec{e}_i$ along the three orthogonal axes $\phi_1(t)$, $\phi_2(t)$, $\phi_3(t)$.

End of Lecture 12
Digital Communications
Lecture 13  Introduction to receiver and coherent detection

Digital Communication Systems

Simplified block diagram of a communication system:
$a_k \rightarrow \text{modulator} \rightarrow s(t) \rightarrow \text{channel} \rightarrow r(t) \rightarrow \text{demodulator/detector} \rightarrow \hat{a}_k$

$a_k$: information sequence to be transmitted
$s(t)$: waveform to be transmitted over the channel
$r(t)$: received signal
$\hat{a}_k$: demodulated and detected sequence

Digital Modulation

Baseband signal: $s_B(t) = \sum_k a_k\,p(t-kT)$, $a_k = a_{kc} + ja_{ks}$

QAM ($a_k$ is complex):
$s_P(t) = \sum_k a_{kc}\,p(t-kT)\cos(2\pi f_c t) - \sum_k a_{ks}\,p(t-kT)\sin(2\pi f_c t)$

PAM ($a_k$ is real):
$s_P(t) = \sum_k a_k\,p(t-kT)\cos(2\pi f_c t)$

PSK ($a_k = e^{j\theta_k}$, $a_{kc} = \cos\theta_k$, $a_{ks} = \sin\theta_k$):
$s_P(t) = \sum_k a_{kc}\,p(t-kT)\cos(2\pi f_c t) - \sum_k a_{ks}\,p(t-kT)\sin(2\pi f_c t)$

FSK and the other modulations are based on the waveforms shown in Lecture 12.

Channel Models

Additive white Gaussian noise (AWGN) channel.
Quadrature Demodulator

The received passband signal $r(t)$ is multiplied by $2\cos 2\pi f_c t$ and $-2\sin 2\pi f_c t$ and lowpass filtered, producing the I-branch baseband signal $r_I(t)$ and the Q-branch baseband signal $r_Q(t)$. The equivalent complex baseband signal is

$r_B(t) = r_I(t) + j\,r_Q(t)$
Baseband and Passband Signals

Write $s_B(t) = \sum_k a_k\,p(t-kT) = s_I(t) + js_Q(t)$, so that

$s_P(t) = \mathrm{Re}\{s_B(t)e^{j2\pi f_c t}\} = s_I(t)\cos(2\pi f_c t) - s_Q(t)\sin(2\pi f_c t)$

Multiplying the passband signal by the quadrature carriers gives

$s_P(t)\cdot 2\cos(2\pi f_c t) = 2s_I(t)\cos^2(2\pi f_c t) - 2s_Q(t)\sin(2\pi f_c t)\cos(2\pi f_c t)$
$= s_I(t) + \underbrace{s_I(t)\cos(4\pi f_c t) - s_Q(t)\sin(4\pi f_c t)}_{\text{high frequency components}}$

so, after the ideal lowpass filter with impulse response $h(t)$,

$[s_P(t)\cdot 2\cos(2\pi f_c t)] * h(t) = s_I(t)$

Similarly,

$s_P(t)\cdot[-2\sin(2\pi f_c t)] = 2s_Q(t)\sin^2(2\pi f_c t) - 2s_I(t)\sin(2\pi f_c t)\cos(2\pi f_c t)$
$= s_Q(t)\ \underbrace{-\ s_Q(t)\cos(4\pi f_c t) - s_I(t)\sin(4\pi f_c t)}_{\text{high frequency components}}$

$[s_P(t)\cdot(-2\sin(2\pi f_c t))] * h(t) = s_Q(t)$
Noise Term

The received passband signal is

$r(t) = s_P(t) + n(t), \qquad E\{n(t)n(t+\tau)\} = \frac{N_0}{2}\,\delta(\tau)$

After the quadrature demodulator,

$r_B(t) = s_B(t) + n_B(t), \qquad n_B(t) = n_I(t) + j\,n_Q(t)$

where $h(t)$ is the lowpass filter impulse response and

$n_I(t) = [n(t)\cdot 2\cos(2\pi f_c t)] * h(t) = 2\int_{-\infty}^{+\infty} h(t-\tau)\,n(\tau)\cos(2\pi f_c\tau)\,d\tau$
$n_Q(t) = [n(t)\cdot(-2\sin(2\pi f_c t))] * h(t) = -2\int_{-\infty}^{+\infty} h(t-\tau)\,n(\tau)\sin(2\pi f_c\tau)\,d\tau$

Question: Are $n_I(t)$, $n_Q(t)$, and $n_B(t)$ stationary?
Noise - Lemma

Lemma: Let $h(t)$ be the impulse response of a linear system whose frequency response $H(\omega) = \int h(t)e^{-j\omega t}\,dt$ is band-limited, that is, $H(\omega) = 0$ for $|\omega| > 2\pi B$. Then for any $\omega_0 > 4\pi B$ and any $t_0$,

$g(t_0, \omega_0) = \int_{-\infty}^{+\infty} h(t_0+t)\,h(t_0-t)\,e^{j\omega_0 t}\,dt = 0$

Proof: writing $h(t_0-t) = \frac{1}{2\pi}\int H(\omega)\,e^{j\omega(t_0-t)}\,d\omega$,

$g(t_0,\omega_0) = \frac{1}{2\pi}\int H(\omega)\,e^{j\omega t_0}\Big[\int h(t_0+t)\,e^{j(\omega_0-\omega)t}\,dt\Big]d\omega$

With the substitution $t' = t_0+t$,

$\int h(t_0+t)\,e^{j(\omega_0-\omega)t}\,dt = e^{-j(\omega_0-\omega)t_0}\int h(t')\,e^{j(\omega_0-\omega)t'}\,dt' = e^{-j(\omega_0-\omega)t_0}\,H(\omega-\omega_0)$

so that

$g(t_0,\omega_0) = \frac{1}{2\pi}\int H(\omega)\,H(\omega-\omega_0)\,e^{j(2\omega-\omega_0)t_0}\,d\omega$

Noise Lemma (cont.)

The last integral is zero because the supports of $H(\omega)$ and $H(\omega-\omega_0)$ do not overlap when $\omega_0 > 4\pi B$: $H(\omega)$ is confined to $|\omega| \le 2\pi B$, while $H(\omega-\omega_0)$ is confined to $|\omega-\omega_0| \le 2\pi B$, and for $\omega_0 > 2\cdot 2\pi B$ these two intervals cannot intersect. Hence

$g(t_0,\omega_0) = \int_{-\infty}^{+\infty} h(t_0+t)\,h(t_0-t)\,e^{j\omega_0 t}\,dt = 0$
Noise Lemma (cont.)

Applying the lemma with $e^{+j\omega_0 t}$ and $e^{-j\omega_0 t}$,

$g_1(t_0,\omega_0) = \int_{-\infty}^{+\infty} h(t_0+t)\,h(t_0-t)\,e^{j\omega_0 t}\,dt = 0$
$g_2(t_0,\omega_0) = \int_{-\infty}^{+\infty} h(t_0+t)\,h(t_0-t)\,e^{-j\omega_0 t}\,dt = 0$

and therefore

$\int_{-\infty}^{+\infty} h(t_0+t)\,h(t_0-t)\cos(\omega_0 t)\,dt = 0, \qquad \int_{-\infty}^{+\infty} h(t_0+t)\,h(t_0-t)\sin(\omega_0 t)\,dt = 0$

Now we can study the stationarity of the filtered noises.
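The lemma is deterministic, so it can be checked numerically. A sketch with an assumed band-limited impulse response $h(t) = \mathrm{sinc}(2Bt)$ (ideal lowpass): the integral vanishes for $\omega_0 = 2\pi f_0 > 4\pi B$, i.e. $f_0 > 2B$, but not below that.

```python
import numpy as np

# Check g(t0, w0) = Int h(t0 + t) h(t0 - t) e^{j w0 t} dt = 0 for w0 > 4*pi*B,
# using h(t) = sinc(2 B t), which is band-limited to B.

B = 1.0
t = np.linspace(-100.0, 100.0, 200_001)      # wide grid so truncation error is small
dt = t[1] - t[0]

def g(t0, f0):
    h_plus = np.sinc(2 * B * (t0 + t))       # np.sinc(x) = sin(pi x)/(pi x)
    h_minus = np.sinc(2 * B * (t0 - t))
    return np.sum(h_plus * h_minus * np.exp(2j * np.pi * f0 * t)) * dt

# f0 > 2B  <=>  w0 > 4*pi*B: the integral vanishes; for f0 <= 2B it need not.
print(abs(g(0.3, 2.5 * B)), abs(g(0.3, 1.0 * B)))
```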
Noise Stationary?

The mean of the filtered noise is

$E\{n_I(t_0)\} = 2\int h(t_0-t)\,E[n(t)]\cos(2\pi f_c t)\,dt = 0$

and likewise $E\{n_Q(t_0)\} = 0$. For the cross-correlation between the two branches,

$E\{n_I(t_0)\,n_Q(t_0+\tau)\} = -4\int\!\!\int h(t_0-t_1)\,h(t_0+\tau-t_2)\,E\{n(t_1)n(t_2)\}\cos(2\pi f_c t_1)\sin(2\pi f_c t_2)\,dt_1\,dt_2$
$= -4\cdot\frac{N_0}{2}\int h(t_0-t)\,h(t_0+\tau-t)\cos(2\pi f_c t)\sin(2\pi f_c t)\,dt$
$= -N_0\int h(t_0-t)\,h(t_0+\tau-t)\sin(4\pi f_c t)\,dt = 0$

The last integral vanishes by the above lemma: the product of two shifted copies of the band-limited $h(t)$ has spectral content confined to $|f| \le 2B$, while $2f_c > 2B$. Hence the noises of the I and Q branches are uncorrelated and, being jointly Gaussian, independent.
Noise Stationary? (cont.)

The autocorrelation of the I-branch noise is

$E\{n_I(t_0)\,n_I(t_0+\tau)\} = 4\int\!\!\int h(t_0-t_1)\,h(t_0+\tau-t_2)\,E\{n(t_1)n(t_2)\}\cos(2\pi f_c t_1)\cos(2\pi f_c t_2)\,dt_1\,dt_2$
$= 4\cdot\frac{N_0}{2}\int h(t_0-t)\,h(t_0+\tau-t)\cos^2(2\pi f_c t)\,dt$
$= N_0\int h(t_0-t)\,h(t_0+\tau-t)\,[1+\cos(4\pi f_c t)]\,dt$
$= N_0\int h(t_0-t)\,h(t_0+\tau-t)\,dt + N_0\int h(t_0-t)\,h(t_0+\tau-t)\cos(4\pi f_c t)\,dt$
$= N_0\int h(t)\,h(t+\tau)\,dt$

where the $\cos(4\pi f_c t)$ term again vanishes by the lemma. The result depends only on $\tau$, so $n_I(t)$ is wide-sense stationary.
Noise Stationary? (cont.)

$E\{n_I(t)\} = E\{n_Q(t)\} = 0$
$E\{n_Q(t)\,n_Q(t+\tau)\} = E\{n_I(t)\,n_I(t+\tau)\} = N_0\int h(t)\,h(t+\tau)\,dt$
$E\{n_I(t)\,n_Q(t+\tau)\} = 0$

For the complex baseband noise $n_B(t) = n_I(t) + jn_Q(t)$,

$\phi_{nn}(\tau) = \frac{1}{2}E\{n_B^*(t)\,n_B(t+\tau)\} = N_0\int h(t)\,h(t+\tau)\,dt = N_0\,h(\tau)*h(-\tau)$
$\Phi_{nn}(f) = N_0\,|H(f)|^2$

If $H(f)$ is ideal lowpass, that is,

$H(f) = \begin{cases}1, & |f| \le B\\ 0, & |f| > B\end{cases}$
then
$\Phi_{nn}(f) = N_0|H(f)|^2 = \begin{cases}N_0, & |f| \le B\\ 0, & |f| > B\end{cases}$

If $B$ is large compared with the signal bandwidth, then $n_B(t)$ can be regarded as white, i.e., $\phi_{nn}(\tau) = N_0\,\delta(\tau)$, and $n_B(t)$ is WSS (Ref. Lecture 5, p.9).
Optimum Coherent Detection

Suppose that we have a set of $M$ bandpass signals $\{s_0(t), \ldots, s_{M-1}(t)\}$ with complex envelopes $\{\tilde{s}_0(t), \ldots, \tilde{s}_{M-1}(t)\}$ that are defined over the time interval $0 \le t \le T$:

$s_i(t) = \mathrm{Re}[\tilde{s}_i(t)\,e^{j2\pi f_c t}], \qquad i = 0, \ldots, M-1$

Transmit one of the $M$ signals, say $\tilde{s}_m(t)$, over an AWGN channel. Then the received complex envelope, or baseband signal, is

$\tilde{r}(t) = g\,\tilde{s}_m(t-t_0) + \tilde{n}(t)$

where $g = \alpha e^{j\phi}$ and $\tilde{n}(t)$ represents $n_B(t)$ here. The ideal channel has an impulse response with the form $\tilde{h}(t) = g\,\delta(t-t_0)$: it just attenuates and delays the signal, but otherwise leaves it undistorted.

Problem: Suppose at the receiver we have known $g$ and $t_0$ exactly; getting these parameters is another issue. By observing $\tilde{r}(t)$, determine which signal was transmitted.
Correlation Detector

We have already seen that the signal set $\{\tilde{s}_0(t), \ldots, \tilde{s}_{M-1}(t)\}$ can be expressed in terms of a set of orthonormal basis functions $\{\phi_0(t), \ldots, \phi_{N-1}(t)\}$, where $N$ is the dimension of the signal space.

The basis functions do not span the noise space, i.e., the noise term $\tilde{n}(t)$ cannot be represented exactly in terms of the basis functions. However, we will show later that the component of the noise process that falls outside of the signal space is irrelevant to the detection of the signal.

The projection of the received signal onto the signal space yields the vector $\vec{\tilde{r}} = (\tilde{r}_0, \tilde{r}_1, \ldots, \tilde{r}_{N-1})$, where (taking $t_0 = 0$ here)

$\tilde{r}_k = \int_0^T \tilde{r}(t)\,\phi_k^*(t)\,dt = g\int_0^T \tilde{s}_m(t)\,\phi_k^*(t)\,dt + \int_0^T \tilde{n}(t)\,\phi_k^*(t)\,dt = g\,\tilde{s}_{mk} + \tilde{n}_k$
Correlation Detector (cont.)

The baseband received signal $\tilde{r}(t)$ is fed into a bank of $N$ correlators: the $k$-th branch multiplies by $\phi_k^*(t)$ and integrates over $[0, T]$, producing $\tilde{r}_k$ and hence the vector $\vec{\tilde{r}} = (\tilde{r}_0, \tilde{r}_1, \ldots, \tilde{r}_{N-1})$.
Noise Statistics

The noise components have mean value

$E[\tilde{n}_k] = \int_0^T E[\tilde{n}(t)]\,\phi_k^*(t)\,dt = 0$

and covariance

$\frac{1}{2}E[\tilde{n}_j\,\tilde{n}_k^*] = \frac{1}{2}\int_0^T\!\!\int_0^T E[\tilde{n}(t)\,\tilde{n}^*(s)]\,\phi_j^*(t)\,\phi_k(s)\,dt\,ds$
$= \int_0^T\!\!\int_0^T N_0\,\delta(t-s)\,\phi_j^*(t)\,\phi_k(s)\,dt\,ds$
$= N_0\int_0^T \phi_j^*(t)\,\phi_k(t)\,dt = N_0\,\delta_{jk}$

using $\phi_{nn}(\tau) = N_0\,\delta(\tau)$ (p.13, noise stationarity). Therefore, the $\tilde{n}_k$ are independent complex Gaussian random variables, and $\tilde{r}_k$ has mean $g\tilde{s}_{mk}$ and variance $N_0$.
Joint Conditional Density

The noise vector $\vec{\tilde{n}}$ has the joint multivariate Gaussian density function

$p(\vec{\tilde{n}}) = \frac{1}{(2\pi N_0)^N}\exp\Big\{-\frac{1}{2N_0}\sum_{k=0}^{N-1}|\tilde{n}_k|^2\Big\} = \frac{1}{(2\pi N_0)^N}\exp\Big\{-\frac{1}{2N_0}\|\vec{\tilde{n}}\|^2\Big\}$

Hence, the received vector $\vec{\tilde{r}}$ has the joint conditional density function

$p(\vec{\tilde{r}}\,|\,g\vec{\tilde{s}}_m) = \frac{1}{(2\pi N_0)^N}\exp\Big\{-\frac{1}{2N_0}\,\|\vec{\tilde{r}} - g\vec{\tilde{s}}_m\|^2\Big\}$

which is a multivariate Gaussian distribution with mean $g\vec{\tilde{s}}_m$.

End of Lecture 13
Digital Communications
Lecture 14  Correlation Demodulator & Matched Filter Demodulator

Quadrature Demodulator

The passband signal $r(t)$ is multiplied by $2\cos 2\pi f_c t$ and $-2\sin 2\pi f_c t$ and lowpass filtered, giving $r_I(t)$ and $r_Q(t)$ and the equivalent baseband signal

$r_B(t) = r_I(t) + j\,r_Q(t)$

Correlation Detector

The equivalent baseband signal $r_B(t)$ is correlated with each orthonormal basis function $\phi_k(t)$, $k = 0, \ldots, N-1$, over $[0, T]$, producing the outputs $r_0, r_1, \ldots, r_{N-1}$, i.e., the vector $\vec{r}$.
Correlation Detector (cont.)

Output after the correlation detector:

$r_k = g\,s_{mk} + n_k \qquad (0 \le k \le N-1)$

The noise components $n_k$ have mean and covariance

$E[n_k] = \int_0^T E[n(t)]\,\phi_k^*(t)\,dt = 0, \qquad \frac{1}{2}E[n_j\,n_k^*] = N_0\,\delta_{jk}$

The vector $\vec{r}$ has the joint conditional density function (Lecture 13, p.17):

$p(\vec{r}\,|\,g\vec{s}_m) = \frac{1}{(2\pi N_0)^N}\exp\Big\{-\frac{1}{2N_0}\,\|\vec{r}-g\vec{s}_m\|^2\Big\}$
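Because the conditional density above is Gaussian with mean $g\vec{s}_m$ and equal variance per dimension, maximizing $p(\vec{r}\,|\,g\vec{s}_m)$ over $m$ is the same as minimizing $\|\vec{r}-g\vec{s}_m\|^2$. A minimal sketch of this decision rule (the QPSK signal set, channel gain, and noise level are illustrative assumptions):

```python
import numpy as np

# ML detection with known g reduces to the minimum-distance rule
# m_hat = argmin_m ||r - g s_m||^2.

rng = np.random.default_rng(1)
g = 0.8 * np.exp(1j * 0.3)                         # known complex channel gain
s = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j])   # QPSK signal vectors (N = 1)
N0 = 0.01

m = 2                                               # transmitted index
n = np.sqrt(N0) * (rng.standard_normal() + 1j * rng.standard_normal())
r = g * s[m] + n

m_hat = int(np.argmin(np.abs(r - g * s) ** 2))      # ML = minimum distance
print(m, m_hat)
```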
Correlation Detector - Example

Example: Consider an M-ary baseband PAM signal

$s_B(t) = \sum_n a_n\,g(t-nT)$

where $g(t)$ is a rectangular pulse of amplitude $a$ on $[0, T]$. The channel has unit gain and AWGN with zero mean and p.s.d. $N_0$. Find:
1) the basis function;
2) the output of the correlation detector;
3) the mean and variance of the output noise;
4) the probability density function (PDF) of the sampled output.
Power Spectral Density of Noise

Real case: $n(t)$ is real,

$\phi_n(\tau) = E\{n(t)\,n(t+\tau)\} = N_0\,\delta(\tau), \qquad \Phi_n(f) = \int \phi_n(\tau)\,e^{-j2\pi f\tau}\,d\tau = N_0$

Complex case: $n_B(t) = n_r(t) + j\,n_i(t)$ is complex,

$\phi_n(\tau) = \frac{1}{2}E\{n_B^*(t)\,n_B(t+\tau)\} = N_0\,\delta(\tau), \qquad \Phi_n(f) = N_0$

where $n_r(t)$ and $n_i(t)$ are two random processes with the same correlation $N_0\,\delta(\tau)$ and the same p.d.f. (Lecture 13, p.13).
Correlation Detector Example

1. Basis function:

$E_g = \int_0^T g^2(t)\,dt = a^2 T$

$f(t) = \frac{1}{\sqrt{E_g}}\,g(t) = \begin{cases}1/\sqrt{T}, & 0 \le t \le T\\ 0, & \text{otherwise}\end{cases}$

$s_{Bm}(t) = a_m\sqrt{E_g}\,f(t)$

2. Output of the correlation detector:

$r = \int_0^T r_B(t)\,f(t)\,dt = \int_0^T [a_m\sqrt{E_g}\,f(t) + n_B(t)]\,f(t)\,dt$
$= a_m\sqrt{E_g}\int_0^T f^2(t)\,dt + \int_0^T n_B(t)\,f(t)\,dt = a_m\sqrt{E_g} + n$

(here $f(t)$ is real).
Correlation Detector Example (cont.)

3. Mean and variance of the noise:

$E\{n\} = \int_0^T E\{n(t)\}\,f(t)\,dt = 0$

$E\{n^2\} = E\Big\{\int_0^T n(t_1)f(t_1)\,dt_1\int_0^T n(t_2)f(t_2)\,dt_2\Big\} = \int_0^T\!\!\int_0^T E[n(t_1)n(t_2)]\,f(t_1)f(t_2)\,dt_1\,dt_2$
$= \int_0^T\!\!\int_0^T N_0\,\delta(t_1-t_2)\,f(t_1)f(t_2)\,dt_1\,dt_2 = N_0\int_0^T f^2(t)\,dt = N_0$

4. Probability density function (p.d.f.) of the sampled output:

$p(r\,|\,a_m) = \frac{1}{\sqrt{2\pi N_0}}\exp\Big\{-\frac{(r - a_m\sqrt{E_g})^2}{2N_0}\Big\}$
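The example above can be simulated to confirm the mean $a_m\sqrt{E_g}$ and variance $N_0$ of the sampled output. A Monte-Carlo sketch (discretization and all parameter values are illustrative assumptions; discrete white noise of p.s.d. $N_0$ has per-sample variance $N_0/\Delta t$):

```python
import numpy as np

# Simulate r = Int r_B(t) f(t) dt for the rectangular-pulse PAM example and
# check E[r] = a_m*sqrt(E_g) and Var[r] = N0.

rng = np.random.default_rng(2)
T, Ns, N0, a, a_m = 1.0, 1000, 0.5, 2.0, 3.0
dt = T / Ns
f = np.full(Ns, 1.0 / np.sqrt(T))              # basis function f(t)
E_g = a * a * T                                # pulse energy, g(t) = a on [0, T]

trials = 20_000
outs = np.empty(trials)
for i in range(trials):
    noise = rng.standard_normal(Ns) * np.sqrt(N0 / dt)
    r_B = a_m * np.sqrt(E_g) * f + noise       # received baseband signal
    outs[i] = np.sum(r_B * f) * dt             # correlate with f(t)

print(outs.mean(), outs.var())                 # ~ a_m*sqrt(E_g) = 6, ~ N0 = 0.5
```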
Matched Filter Receiver

Suppose that we filter the received signal $r_B(t)$ with a bank of matched filters having the impulse responses

$h_i(t) = \phi_i^*(T-t), \qquad 0 \le t \le T$

and sample the filter outputs at time $t = T$. The filter outputs are

$y_i(t) = r_B(t) * h_i(t) = \int_0^t r_B(\tau)\,h_i(t-\tau)\,d\tau = \int_0^t r_B(\tau)\,\phi_i^*(T-t+\tau)\,d\tau$

$y_i(T) = \int_0^T r_B(\tau)\,\phi_i^*(\tau)\,d\tau = r_i$

Note that $y_i(T) = r_i$, that is, the matched filter outputs are identical to the correlation detector outputs.

Matched Filter Receiver - Diagram

$r_B(t)$ is passed through the bank of filters $\phi_0^*(T-t), \phi_1^*(T-t), \ldots, \phi_{N-1}^*(T-t)$, each sampled at $t = T$, producing $r_0, r_1, \ldots, r_{N-1}$, i.e., the vector $\vec{r}$.
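The identity $y_i(T) = r_i$ can be checked directly in discrete time. A minimal sketch (signal, basis waveform, and discretization are assumptions): convolving with the time-reversed conjugate basis and sampling at $t = T$ reproduces the correlator output.

```python
import numpy as np

# Matched filter h(t) = phi*(T - t), sampled at t = T, equals the correlator
# output Int_0^T r_B(tau) phi*(tau) dtau.

rng = np.random.default_rng(3)
Ns = 500
dt = 1.0 / Ns
r_B = rng.standard_normal(Ns) + 1j * rng.standard_normal(Ns)  # arbitrary baseband signal
phi = np.exp(2j * np.pi * 3 * np.arange(Ns) * dt)             # a basis waveform on [0, T)

# correlator: integrate r_B(t) phi*(t) over [0, T)
r_corr = np.sum(r_B * np.conj(phi)) * dt

# matched filter: h(t) = phi*(T - t), output y = r_B * h, sampled at t = T
h = np.conj(phi[::-1])
y = np.convolve(r_B, h) * dt
y_T = y[Ns - 1]                                               # index corresponding to t = T

print(r_corr, y_T)
```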
Matched Filter Properties

Consider a finite energy signal $s(t)$, $0 \le t \le T$. The filter matched to $s(t)$ is

$h(t) = s^*(T-t)$

The output of the matched filter is

$y(t) = \int_0^t s(\tau)\,h(t-\tau)\,d\tau = \int_0^t s(\tau)\,s^*(T-t+\tau)\,d\tau$

This is just the time-autocorrelation of the pulse $s(t)$. The filter output at time $t = T$ is

$y(T) = \int_0^T |s(\tau)|^2\,d\tau$

Fact: if $s(t)$ is corrupted by AWGN, the matched filter maximizes the signal-to-noise ratio at the sampling instant $t = T$.
Matched Filter Properties - Proof

Let the impulse response of the filter be $h(t)$. Then the signal and noise components of the filter output at $t = T$ are

$s_T = s(t)*h(t)\big|_{t=T} = \int_0^T s(\tau)\,h(T-\tau)\,d\tau, \qquad n_T = \int_0^T n(\tau)\,h(T-\tau)\,d\tau$

The power of the noise is

$E\{|n_T|^2\} = \int_0^T\!\!\int_0^T E[n(\tau_1)\,n^*(\tau_2)]\,h(T-\tau_1)\,h^*(T-\tau_2)\,d\tau_1\,d\tau_2 = N_0\int_0^T |h(\tau)|^2\,d\tau$

The signal-to-noise ratio is

$\mathrm{SNR}_0 = \frac{\big|\int_0^T s(\tau)\,h(T-\tau)\,d\tau\big|^2}{N_0\int_0^T |h(\tau)|^2\,d\tau} \le \frac{\int_0^T |s(\tau)|^2\,d\tau\,\int_0^T |h(T-\tau)|^2\,d\tau}{N_0\int_0^T |h(\tau)|^2\,d\tau} = \frac{\int_0^T |s(\tau)|^2\,d\tau}{N_0}$

(Cauchy-Schwarz inequality, Lecture 8, p.5), with equality only if $h(t) = s^*(T-t)$.
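The Cauchy-Schwarz argument can be illustrated numerically: no filter does better than the matched one. A sketch (pulse shape, discretization, and noise level $N_0$ are illustrative assumptions):

```python
import numpy as np

# Check that h(t) = s*(T - t) maximizes
# SNR_0 = |Int s(tau) h(T - tau) dtau|^2 / (N0 * Int |h(tau)|^2 dtau).

rng = np.random.default_rng(4)
Ns, N0 = 400, 1.0
dt = 1.0 / Ns
s = np.sin(np.pi * np.arange(Ns) * dt) ** 2         # an arbitrary finite-energy pulse

def snr(h):
    sig = np.abs(np.sum(s * h[::-1]) * dt) ** 2      # |Int s(tau) h(T - tau) dtau|^2
    return sig / (N0 * np.sum(np.abs(h) ** 2) * dt)

h_matched = np.conj(s[::-1])                         # matched filter h(t) = s*(T - t)
snr_max = snr(h_matched)
print(snr_max, np.sum(np.abs(s) ** 2) * dt / N0)     # equals Int |s|^2 dtau / N0

for _ in range(100):                                 # random filters never do better
    assert snr(rng.standard_normal(Ns)) <= snr_max + 1e-12
```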
Frequency Domain Property

Let the impulse response of the filter be $h(t) = s^*(T-t)$. Then its frequency response is

$H(f) = \int_0^T h(t)\,e^{-j2\pi ft}\,dt = \int_0^T s^*(T-t)\,e^{-j2\pi ft}\,dt$
$\overset{t'=T-t}{=} e^{-j2\pi fT}\int_0^T s^*(t')\,e^{j2\pi ft'}\,dt' = e^{-j2\pi fT}\Big[\int_0^T s(t')\,e^{-j2\pi ft'}\,dt'\Big]^* = S^*(f)\,e^{-j2\pi fT}$

If $s(t)$ is passed through the matched filter, then the output has the spectrum

$Y(f) = S(f)\,H(f) = |S(f)|^2\,e^{-j2\pi fT}$

The output at $t = T$ is

$y(T) = \int_{-\infty}^{+\infty} Y(f)\,e^{j2\pi fT}\,df = \int_{-\infty}^{+\infty} |S(f)|^2\,e^{-j2\pi fT}\,e^{j2\pi fT}\,df = \int_{-\infty}^{+\infty} |S(f)|^2\,df$

The noise p.s.d. and the total noise power at the filter output are

$\Phi_n(f) = N_0\,|H(f)|^2 = N_0\,|S(f)|^2, \qquad E\{|n|^2\} = N_0\int_{-\infty}^{+\infty} |S(f)|^2\,df$
Noise Remainder Process

We know that $r_k = s_{Bk} + n_k$ $(0 \le k \le N-1)$. Note that

$r_B(t) = s_B(t) + n_B(t) = s_{B0}\,\phi_0(t) + s_{B1}\,\phi_1(t) + \cdots + s_{B,N-1}\,\phi_{N-1}(t) + n_B(t) = \sum_{k=0}^{N-1} s_{Bk}\,\phi_k(t) + n_B(t)$

where

$s_{Bk} = \int s_B(t)\,\phi_k^*(t)\,dt$

Noise Remainder Process (cont.)

$r_B(t) = \sum_{k=0}^{N-1}(s_{Bk}+n_k)\,\phi_k(t) + \Big[n_B(t) - \sum_{k=0}^{N-1}n_k\,\phi_k(t)\Big] = \sum_{k=0}^{N-1} r_k\,\phi_k(t) + z(t)$

where

$z(t) = n_B(t) - \sum_{k=0}^{N-1} n_k\,\phi_k(t)$

is called the remainder process. The remainder process is outside the vector space spanned by the basis functions $\{\phi_k\}$.

Now the problem: Is $z(t)$ important for the final detection?
Irrelevance

We have

$E[z(t)\,r_j^*] = E[z(t)]\,g^*s_{mj}^* + E[z(t)\,n_j^*]$
$= E\Big[\Big(n_B(t) - \sum_{k=0}^{N-1}n_k\,\phi_k(t)\Big)\,n_j^*\Big]$
$= E[n_B(t)\,n_j^*] - \sum_{k=0}^{N-1}E[n_k\,n_j^*]\,\phi_k(t)$
$= 2N_0\,\phi_j(t) - 2N_0\,\phi_j(t) = 0$

where $r_j = g\,s_{mj} + n_j$ and $n_j = \int_0^T n_B(\tau)\,\phi_j^*(\tau)\,d\tau$ (Lecture 13, p.17).

Hence, the vector $\vec{r}$ is uncorrelated with $z(t)$ and, therefore, $z(t)$ is irrelevant to the final decision since it does not contain any information about $\vec{r}$.
Example - Baseband QAM

Recall that the complex envelopes of the QAM signals can be expressed as

$s_m(t) = a_m\,p(t), \qquad m = 0, 1, \ldots, M-1, \quad 0 \le t \le T$

The QAM signal vectors are

$s_m(t) \leftrightarrow s_{m0} = \sqrt{2E_p}\,a_m$

where

$\phi_0(t) = \frac{1}{\sqrt{2E_p}}\,p(t), \qquad E_p = \frac{1}{2}\int_0^T p^2(t)\,dt$

At the output of the correlation detector, we have $\vec{r} = r_0$, where

$r_0 = g\,s_{m0} + n_0$

End of Lecture 14

Digital Communications
Lecture 15  Matched Filter & Optimum Detector
Correlation Detector

The baseband signal $r_B(t)$ is correlated with $\phi_0^*(t), \phi_1^*(t), \ldots, \phi_{N-1}^*(t)$ over $[0, T]$, producing $r_0, r_1, \ldots, r_{N-1}$, i.e., the vector $\vec{r}$.
Matched Filter Receiver

Equivalently, $r_B(t)$ is passed through the bank of filters $\phi_0^*(T-t), \phi_1^*(T-t), \ldots, \phi_{N-1}^*(T-t)$, each sampled at $t = T$, producing the same outputs $r_0, r_1, \ldots, r_{N-1}$, i.e., the vector $\vec{r}$.
Matched Filter Properties


=
+ = =
= =
T
i B
T t
T
i B T t i B i
dt t t r
d t T r t T t r r
0
*
0
* *
) ( ) (
| ) ( ) ( | ) ( * ) (


Matched filter and Correlation demodulator are equivalent equivalent. The
output of the branch is
If s(t) is corrupted by AWGN, the matched filter maximizes the
signal-to-noise ratio at the sampling instant t=T. The signal-to-
noise power is
2 2
0 0
2 0
0
0
[ ( ) ( ) ] ( )
( )
T T
T
s T h d s d
SNR
N
N h d

With equality only if ) ( * ) ( t T s t h =


th i
(Ref. Lecture14, (Ref. Lecture14,
p.12 for details) p.12 for details)
Computer and Information College
Noise Remainder Process

Let the transmitted baseband signal be $s_B(t) = \sum_{k=0}^{N-1} s_k\,\phi_k(t)$. Then the received signal $r_B(t) = g\,s_B(t) + n(t)$ can be expressed as
$$r_B(t) = \sum_{k=0}^{N-1}\underbrace{(g s_k + n_k)}_{r_k}\,\phi_k(t) + \underbrace{n(t) - \sum_{k=0}^{N-1} n_k\,\phi_k(t)}_{\tilde n(t)}$$
where
$$r_k = \int_0^T r_B(t)\,\phi_k^*(t)\,dt \quad\text{is the $k$-th branch output of the demodulator,}$$
$$n_k = \int_0^T n(t)\,\phi_k^*(t)\,dt \quad\text{is the $k$-th branch filtered noise, and}$$
$$\tilde n(t) = n(t) - \sum_{k=0}^{N-1} n_k\,\phi_k(t) \quad\text{is the noise remainder process.}$$
The remainder process is outside the vector space spanned by the basis functions $\{\phi_k(t)\}$.
Irrelevance

Is $\tilde n(t)$ important for the final detection? We have
$$E\big[\tilde n^*(t)\,r_j\big] = E\big[\tilde n^*(t)\big]\,g s_j + E\big[\tilde n^*(t)\,n_j\big]$$
$$E\big[\tilde n^*(t)\,n_j\big] = E\big[n^*(t)\,n_j\big] - \sum_{k=0}^{N-1} E\big[n_k^*\,n_j\big]\,\phi_k^*(t) = 2N_0\,\phi_j^*(t) - 2N_0\,\phi_j^*(t) = 0$$
where $n_j = \int_0^T n_B(\tau)\,\phi_j^*(\tau)\,d\tau$.
Hence, the vector $\vec r$ is uncorrelated with $\tilde n(t)$ and, therefore, $\tilde n(t)$ is irrelevant to the final decision since it does not contain any information about $\vec r$.
Example – QAM

Recall that the complex envelopes of the QAM signals can be expressed as
$$s_{Bm}(t) = a_m\,p(t),\qquad m = 0, 1, \dots, M-1,\quad 0 \le t \le T$$
The QAM signal vectors are
$$s_{Bm}(t)\ \Rightarrow\ s_{m0} = \sqrt{2E_p}\,a_m$$
where
$$\phi_0(t) = \frac{1}{\sqrt{2E_p}}\,p(t),\qquad E_p = \frac{1}{2}\int_0^T p^2(t)\,dt$$
At the output of the correlation detector, we have $\vec r = r_0$, where $r_0 = g s_{m0} + n_0$.
MAP and ML Receiver

Suppose that $s_m(t)$ is transmitted and the correlation or matched filter receiver outputs the vector $\vec r = \vec s_m + \vec n$.
The maximum a posteriori (MAP) receiver observes the vector $\vec r$ and decides in favour of the message $\vec s_k$ that maximizes the a posteriori probability $P(\vec s_k\ \text{sent}\mid\vec r)$.
The maximum likelihood (ML) receiver observes $\vec r$ and decides in favour of the message $\vec s_k$ that maximizes the likelihood probability $P(\vec r\mid\vec s_k\ \text{sent})$.
Question: For the MAP and ML receivers, which has the better detection performance? And which is easier to implement?
Optimum Decision Rule

If $\vec r$ is received and the decision is made that $\vec s_k$ was sent, then the conditional probability of decision error is
$$P_e = P(\vec s_k\ \text{not sent}\mid\vec r) = 1 - \underbrace{P(\vec s_k\ \text{sent}\mid\vec r)}_{\text{a posteriori prob.}}$$
And the average probability of error is
$$P_e = \int\big[1 - P(\vec s_k\ \text{sent}\mid\vec r)\big]\,p(\vec r)\,d\vec r$$
Since the MAP receiver maximizes $P(\vec s_k\ \text{sent}\mid\vec r)$ for all $\vec r$, it minimizes the probability of error.
Conclusion: MAP ⇒ minimum decision error probability.
Bayes Rule Applied

Using Bayes' rule, the a posteriori probability (APP) can be written in the form
$$\underbrace{P(\vec s_m\mid\vec r)}_{\text{APP}} = \frac{\overbrace{P(\vec r\mid\vec s_m)}^{\text{likelihood function}}\,P_m}{p(\vec r)},\qquad m = 1, \dots, M$$
Therefore, the MAP decision rule is: choose $\vec s_m$ if
$$P(\vec r\mid\vec s_m)\,P_m \ge P(\vec r\mid\vec s_{m'})\,P_{m'},\qquad \forall\, m' \ne m$$
while the ML decision rule is: choose $\vec s_m$ if
$$P(\vec r\mid\vec s_m) \ge P(\vec r\mid\vec s_{m'}),\qquad \forall\, m' \ne m$$
* If the sent messages are equally likely, i.e., $P_m = 1/M$, then the $\vec s_m$ that maximizes $P(\vec r\mid\vec s_m)$ also maximizes $P(\vec s_m\mid\vec r)$. Under this condition, the ML receiver also minimizes the probability of error.
ML Receiver for AWGN Channels

For a channel with AWGN, the $n_k$ are i.i.d. Gaussian random variables with the same variance $N_0$ (lecture 14, p.4). The joint conditional density is multivariate Gaussian with the form
$$P(\vec r\mid\vec s_m) = \frac{1}{(2\pi N_0)^{N/2}}\,e^{-\frac{1}{2N_0}\|\vec r - g\vec s_m\|^2}$$
The choice of $\vec s_m$ that maximizes the likelihood function $P(\vec r\mid\vec s_m)$ is equivalent to minimizing the squared Euclidean distance
$$\|\vec r - g\vec s_m\|^2 = \|\vec r\|^2 - 2\,\mathrm{Re}\big\{\langle\vec r, g\vec s_m\rangle\big\} + \|g\vec s_m\|^2$$
Since $\|\vec r\|^2$ is independent of the choice of $\vec s_m$, we just need to maximize
$$\mu(\vec s_m) = \mathrm{Re}\big\{\langle\vec r, g\vec s_m\rangle\big\} - \frac{1}{2}\|g\vec s_m\|^2 = \mathrm{Re}\big\{\langle\vec r, g\vec s_m\rangle\big\} - \frac{1}{2}|g|^2 E_m$$
(Lecture 9, p.4)
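The minimum-distance ML rule can be sketched in code (an illustrative Python snippet, not part of the lecture; the 4-point constellation is a made-up example):

```python
import math

# Illustrative 4-point constellation (a QPSK-like set); any complex list works.
constellation = [1+1j, -1+1j, -1-1j, 1-1j]

def ml_detect(r, signals, g=1.0):
    """Pick the index maximizing mu(s) = Re{<r, g s>} - |g s|^2 / 2,
    which is equivalent to minimizing ||r - g s||^2."""
    def mu(s):
        gs = g * s
        return (r * gs.conjugate()).real - 0.5 * abs(gs)**2
    return max(range(len(signals)), key=lambda m: mu(signals[m]))
```

A noiseless or lightly perturbed received point is decided in favour of the nearest constellation point, e.g. `ml_detect(1.1 + 0.9j, constellation)` returns index 0.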
ML Receiver Equivalent Form

The inner product can be written in an integral form to give
$$\mu(\vec s_m) = \mathrm{Re}\Big\{g^*\int_0^T r_B(t)\,s_{Bm}^*(t)\,dt\Big\} - \frac{1}{2}|g|^2 E_m$$
If all the $\vec s_m$ have equal energy, then with $g = \alpha e^{j\phi}$ we can delete the common factor $\alpha$ and just maximize
$$\mu(\vec s_m) = \mathrm{Re}\big\{\langle\vec r, e^{j\phi}\vec s_m\rangle\big\} = \mathrm{Re}\Big\{e^{-j\phi}\int_0^T r_B(t)\,s_{Bm}^*(t)\,dt\Big\}$$
ML Receiver Equivalent Form (cont.)

Proof: Expanding $r_B(t) = \sum_{k=0}^{N-1} r_k\,\phi_k(t) + \tilde n(t)$ and $s_{Bm}(t) = \sum_{k=0}^{N-1} s_{mk}\,\phi_k(t)$,
$$g^*\int_0^T r_B(t)\,s_{Bm}^*(t)\,dt = g^*\sum_{k=0}^{N-1}\sum_{l=0}^{N-1} r_k\,s_{ml}^*\int_0^T\phi_k(t)\,\phi_l^*(t)\,dt + g^*\int_0^T \tilde n(t)\,s_{Bm}^*(t)\,dt$$
The second term vanishes because $\tilde n(t)$ is orthogonal to every basis function, and the orthonormality $\int_0^T\phi_k(t)\,\phi_l^*(t)\,dt = \delta_{kl}$ reduces the first term to
$$g^*\sum_{k=0}^{N-1} r_k\,s_{mk}^* = \langle\vec r, g\vec s_m\rangle$$
MAP Receiver for AWGN Channels

For a channel with AWGN, the $n_k$ are i.i.d. Gaussian random variables with the same variance $N_0$, and the joint conditional density is multivariate Gaussian with the form
$$P(\vec r\mid\vec s_m) = \frac{1}{(2\pi N_0)^{N/2}}\,e^{-\frac{1}{2N_0}\|\vec r - g\vec s_m\|^2}$$
The choice of $\vec s_m$ that maximizes $P(\vec r\mid\vec s_m)\,P_m$ equivalently maximizes
$$\ln\big[P(\vec r\mid\vec s_m)\,P_m\big] = -\frac{N}{2}\ln(2\pi N_0) - \frac{1}{2N_0}\|\vec r - g\vec s_m\|^2 + \ln P_m$$
i.e., it minimizes the quantity
$$\|\vec r - g\vec s_m\|^2 - 2N_0\ln P_m = \|\vec r\|^2 - 2\,\mathrm{Re}\big\{\langle\vec r, g\vec s_m\rangle\big\} + \|g\vec s_m\|^2 - 2N_0\ln P_m$$
End of Lecture 15

Digital Communications
Lecture 16  Optimum Detector
MAP Receiver

The maximum a posteriori (MAP) receiver observes $\vec r$ and decides in favour of the message $\vec s_k$ that maximizes the a posteriori probability $P(\vec s_k\ \text{sent}\mid\vec r)$.
Choose $\vec s_m$ if $P(\vec r\mid\vec s_m)\,P_m \ge P(\vec r\mid\vec s_{m'})\,P_{m'}$, $\forall\, m' \ne m$.
Maximize:
$$P_m\,P(\vec r\mid\vec s_m) = P_m\,\frac{1}{(2\pi N_0)^{N/2}}\,e^{-\frac{1}{2N_0}\|\vec r - g\vec s_m\|^2}$$
Minimize:
$$\|\vec r - g\vec s_m\|^2 - 2N_0\ln P_m = \|\vec r\|^2 - 2\,\mathrm{Re}\big\{\langle\vec r, g\vec s_m\rangle\big\} + \|g\vec s_m\|^2 - 2N_0\ln P_m$$
Equivalently, maximize:
$$\eta(\vec s_m) = \mathrm{Re}\big\{\langle\vec r, g\vec s_m\rangle\big\} - \frac{1}{2}\|g\vec s_m\|^2 + N_0\ln P_m$$
ML Receiver

The maximum likelihood (ML) receiver observes $\vec r$ and decides in favour of the message $\vec s_k$ that maximizes the likelihood function $P(\vec r\mid\vec s_k\ \text{sent})$.
Choose $\vec s_m$ if $P(\vec r\mid\vec s_m) \ge P(\vec r\mid\vec s_{m'})$, $\forall\, m' \ne m$.
Maximize:
$$P(\vec r\mid\vec s_m) = \frac{1}{(2\pi N_0)^{N/2}}\,e^{-\frac{1}{2N_0}\|\vec r - g\vec s_m\|^2}$$
Minimize:
$$\|\vec r - g\vec s_m\|^2 = \|\vec r\|^2 - 2\,\mathrm{Re}\big\{\langle\vec r, g\vec s_m\rangle\big\} + \|g\vec s_m\|^2$$
Equivalently, maximize:
$$\mu(\vec s_m) = \mathrm{Re}\big\{\langle\vec r, g\vec s_m\rangle\big\} - \frac{1}{2}\|g\vec s_m\|^2$$
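The MAP metric with priors can be sketched as follows (illustrative Python; the binary example values in the usage note are assumptions, not from the lecture):

```python
import math

def map_detect(r, signals, priors, N0, g=1.0):
    """MAP decision: maximize eta(s_m) = Re{<r, g s_m>} - ||g s_m||^2/2 + N0*ln(P_m).
    With equal priors the N0*ln(P_m) term is common to all m, giving the ML rule."""
    best_idx, best_eta = 0, -math.inf
    for m, (s, P) in enumerate(zip(signals, priors)):
        gs = g * s
        eta = (r * gs.conjugate()).real - 0.5 * abs(gs)**2 + N0 * math.log(P)
        if eta > best_eta:
            best_idx, best_eta = m, eta
    return best_idx
```

For antipodal signals `[1, -1]` with priors `[0.9, 0.1]` and `N0 = 1`, a slightly negative observation such as `r = -0.05` is still decided as the first (more probable) signal, whereas equal priors give the ML decision.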
Example – Decision Region for QAM

A communication system is designed to transmit binary i.i.d. data with $P\{b_n = 0\} = p$ and $P\{b_n = 1\} = 1 - p$ using QAM with the constellation shown below. After demodulation, the channel has unit gain and the variance of the complex additive Gaussian noise is $N_0$. Find the decision regions for the ML and MAP receivers respectively.

[Figure: 16-QAM constellation with Gray-coded 4-bit labels; the in-phase and quadrature coordinates take the values $\pm 1/\sqrt5$ and $\pm 3/\sqrt5$. The points $s_1$ (label 1111) and $s_2$ (label 1110) are marked.]
Example – ML Decision Region for QAM

With $g = 1$, $s_1(1111) = \frac{3}{\sqrt5} + j\frac{3}{\sqrt5}$ with $P_1 = (1-p)^4$, and $s_2(1110) = \frac{3}{\sqrt5} + j\frac{1}{\sqrt5}$ with $P_2 = (1-p)^3 p$. (See lecture 15, p.7 for the dimension of the vector being 1.)
Using
$$\langle\vec r, \vec s_1\rangle = (r_I + j r_Q)\Big(\frac{3}{\sqrt5} - j\frac{3}{\sqrt5}\Big) = \frac{3}{\sqrt5}(r_I + r_Q) + j\frac{3}{\sqrt5}(r_Q - r_I)$$
the ML metrics are
$$\mu(\vec s_1) = \mathrm{Re}\big\{\langle\vec r, g\vec s_1\rangle\big\} - \frac12\|g\vec s_1\|^2 = \frac{3}{\sqrt5}\,\mathrm{Re}\{r\} + \frac{3}{\sqrt5}\,\mathrm{Im}\{r\} - \frac95$$
$$\mu(\vec s_2) = \mathrm{Re}\big\{\langle\vec r, g\vec s_2\rangle\big\} - \frac12\|g\vec s_2\|^2 = \frac{3}{\sqrt5}\,\mathrm{Re}\{r\} + \frac{1}{\sqrt5}\,\mathrm{Im}\{r\} - 1$$
The boundary between the two decision regions follows from $\mu(\vec s_1) = \mu(\vec s_2)$:
$$\mathrm{Im}\{r\} = \frac{2}{\sqrt5}$$
so $\mu(\vec s_1) \gtrless \mu(\vec s_2)$ exactly when $\mathrm{Im}\{r\} \gtrless \frac{2}{\sqrt5}$.
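The boundary derived in this example is easy to check numerically (illustrative Python; the test points are arbitrary):

```python
import math

s1 = (3 + 3j) / math.sqrt(5)   # constellation point labelled 1111
s2 = (3 + 1j) / math.sqrt(5)   # constellation point labelled 1110

def mu(r, s):
    """ML metric with unit channel gain g = 1."""
    return (r * s.conjugate()).real - 0.5 * abs(s)**2

# On the boundary Im{r} = 2/sqrt(5) the two metrics coincide; above it, s1 wins.
r_boundary = 0.3 + (2 / math.sqrt(5)) * 1j
r_above = 0.3 + 1.0j
```

Evaluating `mu(r_boundary, s1)` and `mu(r_boundary, s2)` gives equal values, confirming that the boundary lies midway between the two rows of points.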
Example – ML Decision Region for QAM (cont.)

[Figure: the 16-QAM constellation with the ML decision boundaries; the horizontal boundary between $s_1$ and $s_2$ lies at $\mathrm{Im}\{r\} = 2/\sqrt5$, midway between the two rows of points.]
Example – MAP Decision Region for QAM

With $g = 1$, $s_1 = \frac{3}{\sqrt5} + j\frac{3}{\sqrt5}$, $P_1 = (1-p)^4$, and $s_2 = \frac{3}{\sqrt5} + j\frac{1}{\sqrt5}$, $P_2 = (1-p)^3 p$. The MAP metrics are
$$\eta(\vec s_1) = \mathrm{Re}\big\{\langle\vec r, g\vec s_1\rangle\big\} - \frac12\|g\vec s_1\|^2 + N_0\ln P_1 = \frac{3}{\sqrt5}\,\mathrm{Re}\{r\} + \frac{3}{\sqrt5}\,\mathrm{Im}\{r\} - \frac95 + N_0\ln\big[(1-p)^4\big]$$
$$\eta(\vec s_2) = \mathrm{Re}\big\{\langle\vec r, g\vec s_2\rangle\big\} - \frac12\|g\vec s_2\|^2 + N_0\ln P_2 = \frac{3}{\sqrt5}\,\mathrm{Re}\{r\} + \frac{1}{\sqrt5}\,\mathrm{Im}\{r\} - 1 + N_0\ln\big[(1-p)^3 p\big]$$
The boundary $\eta(\vec s_1) = \eta(\vec s_2)$ is now
$$\mathrm{Im}\{r\} = \frac{2}{\sqrt5} - \frac{\sqrt5\,N_0}{2}\ln\frac{1-p}{p}$$
so $\eta(\vec s_1) \gtrless \eta(\vec s_2)$ exactly when $\mathrm{Im}\{r\}$ is above or below this threshold.
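The prior-dependent threshold of this example can be sketched directly (illustrative Python):

```python
import math

def map_boundary_im(N0, p):
    """MAP boundary between s1 (1111) and s2 (1110) from the example:
    Im{r} = 2/sqrt(5) - (sqrt(5)*N0/2) * ln((1-p)/p), with p = P{bit = 0}."""
    return 2 / math.sqrt(5) - (math.sqrt(5) * N0 / 2) * math.log((1 - p) / p)
```

With equal bit probabilities (`p = 0.5`) the boundary reduces to the ML value $2/\sqrt5$; for `p < 0.5` (ones more likely, so $s_1$ more probable) the boundary shifts downward, enlarging the region of $s_1$.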
Example – MAP Decision Region for QAM (cont.)

[Figure: the 16-QAM constellation with the MAP decision boundaries. Each ML boundary is shifted by $\Delta = \frac{\sqrt5\,N_0}{2}\ln\frac{1-p}{p}$: the horizontal boundaries move to $\mathrm{Im}\{r\} = \pm\frac{2}{\sqrt5} - \Delta$ and $\mathrm{Im}\{r\} = -\Delta$, and the vertical boundaries to the corresponding values of $\mathrm{Re}\{r\}$. For $p = P\{b_n = 0\} < 0.5$ we have $\Delta > 0$, so every boundary shifts toward the less probable points.]
Simulation of Digital Communications Systems

Transmitter:
- Generating binary data $a_k$
- Mapping from data to symbols
- Generating the shaping pulse
- Generating the baseband waveform $s(t)$

Channel:
- AWGN channel, output $r(t)$

Receiver:
- Matched filter or correlation demodulator
- ML or MAP detector, output $\hat a_k$
- Investigating system performance
End of Lecture 16

Digital Communications
Lecture 17  Pairwise Error Probability and Union Bound
ML and MAP Receivers – Review

The maximum a posteriori (MAP) receiver observes $\vec r$ and decides in favour of the message $\vec s_k$ that maximizes the a posteriori probability (APP) $P(\vec s_k\mid\vec r)$; equivalently, it maximizes
$$\eta(\vec s_m) = \mathrm{Re}\big\{\langle\vec r, g\vec s_m\rangle\big\} - \frac12\|g\vec s_m\|^2 + N_0\ln P_m$$
The maximum likelihood (ML) receiver observes $\vec r$ and decides in favour of the message $\vec s_k$ that maximizes the likelihood function
$$P(\vec r\mid\vec s_m) = \frac{1}{(2\pi N_0)^{N/2}}\,e^{-\frac{1}{2N_0}\|\vec r - g\vec s_m\|^2}$$
i.e., it minimizes the Euclidean distance $\|\vec r - g\vec s_m\|^2$.
The MAP receiver minimizes the error probability; however, it requires the noise and symbol probability distributions.
The ML receiver is equivalent to the MAP receiver when all the transmitted symbols are equally probable.
Pairwise Error Probability

Consider two baseband signal points $\vec s_1$ and $\vec s_2$ in a signal space that contains M signal points. Suppose that coherent detection is used with ML decisions, where the receiver decides in favour of the signal point that is closest in Euclidean distance to the received signal point (matched filter output).
The probability of error associated with two signal points taken at a time is called the pairwise error probability.
The pairwise error probability between $\vec s_1$ and $\vec s_2$ is
$$P(\vec s_1, \vec s_2) = Q\left(\sqrt{\frac{d_{12}^2}{4N_0}}\right)$$
where $d_{12}^2 = \|\vec s_1 - \vec s_2\|^2$ is the squared Euclidean distance between the signal points $\vec s_1$ and $\vec s_2$, and
$$Q(d) = \frac{1}{\sqrt{2\pi}}\int_d^{+\infty} e^{-x^2/2}\,dx$$
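The pairwise error probability is a one-line computation once the Gaussian tail function is available (illustrative Python; unit channel gain assumed):

```python
import math

def Q(x):
    """Gaussian tail probability Q(x) via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def pairwise_error(s1, s2, N0):
    """P(s1, s2) = Q( sqrt(d12^2 / (4*N0)) ) with d12 = |s1 - s2| (unit gain)."""
    d2 = abs(s1 - s2)**2
    return Q(math.sqrt(d2 / (4 * N0)))
```

For antipodal points $\pm\sqrt{2E}$ this reproduces $Q\big(\sqrt{2E/N_0}\big)$, and for coincident points it degenerates to $Q(0) = 1/2$.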
Pairwise Error Probability – Proof

ML decision rule:
$$\|\vec r - g\vec s_1\|^2 < \|\vec r - g\vec s_2\|^2:\ \text{decide } \vec s_1;\qquad \|\vec r - g\vec s_1\|^2 > \|\vec r - g\vec s_2\|^2:\ \text{decide } \vec s_2$$
When does a decision error happen? When $\vec s_1$ is sent while $\|\vec r - g\vec s_1\|^2 > \|\vec r - g\vec s_2\|^2$, or $\vec s_2$ is sent while $\|\vec r - g\vec s_1\|^2 < \|\vec r - g\vec s_2\|^2$.
Pairwise Error Probability – Proof (cont.)

Suppose $\vec s_1$ is sent. Expanding,
$$\|\vec r - g\vec s_2\|^2 = \big\|(\vec r - g\vec s_1) + g(\vec s_1 - \vec s_2)\big\|^2 = \|\vec r - g\vec s_1\|^2 + 2\,\mathrm{Re}\big\{\langle\vec r - g\vec s_1,\ g(\vec s_1 - \vec s_2)\rangle\big\} + d_{12}^2$$
where $d_{12}^2 = \|g(\vec s_1 - \vec s_2)\|^2$ (for unit channel gain $|g| = 1$ this coincides with $\|\vec s_1 - \vec s_2\|^2$). A decision error, $\|\vec r - g\vec s_1\|^2 > \|\vec r - g\vec s_2\|^2$, therefore occurs exactly when
$$2\,\mathrm{Re}\big\{\langle\vec r - g\vec s_1,\ g(\vec s_1 - \vec s_2)\rangle\big\} + d_{12}^2 < 0,\quad\text{i.e.,}\quad \mathrm{Re}\big\{\langle\vec r - g\vec s_1,\ g(\vec s_1 - \vec s_2)\rangle\big\} < -\frac12\,d_{12}^2$$
Pairwise Error Probability – Proof (cont.)

Let $z = \langle\vec r - g\vec s_1,\ g(\vec s_1 - \vec s_2)\rangle$ and write $z = x + jy$; the error event is $\{x < -\frac12 d_{12}^2\}$. What is the p.d.f. of $x = \mathrm{Re}[z]$?
Given that $\vec s_1$ is sent, $\vec r - g\vec s_1 = \vec n$, so
$$z = \langle\vec n,\ g(\vec s_1 - \vec s_2)\rangle = \sum_{k} n_k\,g^*\,(s_{1k} - s_{2k})^*$$
with $E\{n_k\} = 0$ and $E\{|n_k|^2\} = 2N_0$. Hence $z$ is complex Gaussian, both $x$ and $y$ are real Gaussian, and
$$E\{z\} = \sum_k E\{n_k\}\,g^*\,(s_{1k} - s_{2k})^* = 0 \quad\Rightarrow\quad E\{x\} = E\{y\} = 0$$
Pairwise Error Probability – Proof (cont.)

We know that $n_k = \int_0^T n_B(t)\,\phi_k^*(t)\,dt$, so
$$E[n_k^2] = \int_0^T\!\!\int_0^T E[n_B(t)\,n_B(\tau)]\,\phi_k^*(t)\,\phi_k^*(\tau)\,dt\,d\tau$$
Writing $n_B = n_I + j n_Q$ with $E[n_I(t)n_I(\tau)] = E[n_Q(t)n_Q(\tau)] = N_0\,\delta(t-\tau)$ and $n_I$, $n_Q$ independent,
$$E[n_B(t)\,n_B(\tau)] = N_0\,\delta(t-\tau) - N_0\,\delta(t-\tau) + j\big(E[n_I(t)n_Q(\tau)] + E[n_Q(t)n_I(\tau)]\big) = 0$$
hence $E[n_k^2] = 0$. (See lecture 13, p.13.)
Pairwise Error Probability – Proof (cont.)

The second moment of $|z|$ is
$$E\{|z|^2\} = \sum_k\sum_{k'} E\{n_k\,n_{k'}^*\}\,|g|^2\,(s_{1k} - s_{2k})^*\,(s_{1k'} - s_{2k'})$$
Since $E\{n_k\,n_{k'}^*\} = 2N_0\,\delta_{kk'}$,
$$E\{|z|^2\} = 2N_0\sum_k |g|^2\,|s_{1k} - s_{2k}|^2 = 2N_0\,d_{12}^2$$
Pairwise Error Probability – Proof (cont.)

Similarly, since $E[n_k\,n_{k'}] = 0$ for all $k, k'$,
$$E\{z^2\} = \sum_k\sum_{k'} E\{n_k\,n_{k'}\}\,\big(g^*(s_{1k} - s_{2k})^*\big)\big(g^*(s_{1k'} - s_{2k'})^*\big) = 0$$
Combining the two results,
$$0 = E\{z^2\} = E\{x^2\} - E\{y^2\} + 2j\,E\{xy\} \;\Rightarrow\; E\{x^2\} = E\{y^2\},\quad E\{xy\} = 0$$
$$2N_0\,d_{12}^2 = E\{|z|^2\} = E\{x^2\} + E\{y^2\} \;\Rightarrow\; E\{x^2\} = E\{y^2\} = N_0\,d_{12}^2$$
Pairwise Error Probability – Proof (cont.)

Therefore $x = \mathrm{Re}\big\{\langle\vec r - g\vec s_1,\ g(\vec s_1 - \vec s_2)\rangle\big\}$ is zero-mean Gaussian with variance $N_0\,d_{12}^2$:
$$p_X(x) = \frac{1}{\sqrt{2\pi N_0\,d_{12}^2}}\,\exp\left(-\frac{x^2}{2N_0\,d_{12}^2}\right)$$
The pairwise error probability follows by the change of variable $t = x/\sqrt{N_0\,d_{12}^2}$:
$$P(\vec s_1, \vec s_2) = P\Big\{x < -\tfrac12 d_{12}^2\ \Big|\ \vec s_1\ \text{sent}\Big\} = \int_{-\infty}^{-d_{12}^2/2} p_X(x)\,dx = \frac{1}{\sqrt{2\pi}}\int_{\sqrt{d_{12}^2/4N_0}}^{+\infty} e^{-t^2/2}\,dt = Q\left(\sqrt{\frac{d_{12}^2}{4N_0}}\right)$$
Error Probability of BPSK

BPSK (antipodal signals): the passband signal is
$$s_m(t) = A\,p(t)\cos(2\pi f_c t + \theta_k) = \mathrm{Re}\big[A\,p(t)\,e^{j\theta_k}\,e^{j2\pi f_c t}\big],\qquad \theta_k \in \{0, \pi\}$$
so the baseband signal is $s_{Bk}(t) = A\,a_k\,p(t)$ with $a_k = e^{j\theta_k} \in \{+1, -1\}$. With the basis $f(t) = p(t)/\sqrt{E_p}$, the signal vectors are
$$s_k = A\sqrt{E_p}\,a_k = \sqrt{2E}\,a_k,\qquad E = \int_0^T s_m^2(t)\,dt = \frac{A^2 E_p}{2}\ \ \text{(passband signal energy)}$$
In this case
$$d_{12}^2 = \big(2\sqrt{2E}\big)^2 = 8E$$
and the error probability is
$$P_{e\_BPSK} = P_1\,P_{e|1} + P_0\,P_{e|0} = P(\vec s_1, \vec s_2) = Q\left(\sqrt{\frac{d_{12}^2}{4N_0}}\right) = Q\left(\sqrt{\frac{2E}{N_0}}\right)$$
Error Probability of BFSK

BFSK (orthogonal signals): the passband signal is $A\,p(t)\cos\big(2\pi(f_c + m\,\Delta f)t\big)$ with $p(t) = U_T(t)$ and tone spacing $\Delta f = 1/T$, so that the two baseband signals are orthogonal. The passband energy is
$$E = \int_0^T A^2\cos^2\big(2\pi(f_c + \Delta f)t\big)\,dt = \frac{A^2 T}{2}\quad\Rightarrow\quad A = \sqrt{\frac{2E}{T}}$$
With the orthonormal basis $\phi_1(t) = \sqrt{1/T}$, $\phi_2(t) = \sqrt{1/T}\,e^{j2\pi\Delta f\,t}$, the signal vectors are
$$\vec s_1 = \big(\sqrt{2E},\ 0\big),\qquad \vec s_2 = \big(0,\ \sqrt{2E}\big)$$
In this case
$$d_{12}^2 = \|\vec s_1 - \vec s_2\|^2 = 4E\quad\Rightarrow\quad P_{e\_BFSK} = Q\left(\sqrt{\frac{E}{N_0}}\right)\qquad\text{versus}\qquad P_{e\_BPSK} = Q\left(\sqrt{\frac{2E}{N_0}}\right)$$
Conclusion: With coherent detection, antipodal signals are 3 dB more power efficient than orthogonal signals.
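The 3 dB gap between the two formulas is visible directly in code (illustrative Python):

```python
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

def pe_bpsk(E_over_N0):
    """Antipodal: d12^2 = 8E, so Pe = Q(sqrt(2E/N0))."""
    return Q(math.sqrt(2 * E_over_N0))

def pe_bfsk(E_over_N0):
    """Orthogonal: d12^2 = 4E, so Pe = Q(sqrt(E/N0))."""
    return Q(math.sqrt(E_over_N0))
```

Doubling $E/N_0$ (i.e., adding 3 dB) for BFSK reproduces the BPSK error probability exactly: `pe_bfsk(2*x) == pe_bpsk(x)`.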
Union Bound on Symbol Error Probability

Sometimes it may be difficult to compute the exact symbol error probability. Suppose that $\vec s_k$ is transmitted and let $E_j$ denote the event that the receiver chooses $\vec s_j$ instead.
The probability of error for symbol $\vec s_k$ is
$$P_{s_k} = P\Big(\bigcup_{j\ne k} E_j\Big)$$
By using the union bound, we have
$$P_{s_k} \le \sum_{j\ne k} P(E_j) = \sum_{j\ne k} P(\vec s_j, \vec s_k) = P(\vec s_1, \vec s_k) + P(\vec s_2, \vec s_k) + \dots + P(\vec s_M, \vec s_k)$$
where $P(\vec s_j, \vec s_k)$ is the pairwise error probability.
Union Bound on Symbol Error Probability (cont.)

A further upper bound can be obtained by first computing the minimum Euclidean distance between any two signal points,
$$d_{\min}^2 = \min_{m\ne n}\|\vec s_m - \vec s_n\|^2$$
Thus the pairwise error probability between $\vec s_j$ and $\vec s_k$ is bounded by
$$P(\vec s_j, \vec s_k) \le Q\left(\sqrt{\frac{d_{\min}^2}{4N_0}}\right)$$
Hence, for a system with M symbols in total, we can write
$$P_M \le (M-1)\,Q\left(\sqrt{\frac{d_{\min}^2}{4N_0}}\right)$$
For M-ary orthogonal signals (MFSK), all signals are separated by the distance $d_{\min} = 2\sqrt{E}$ (see p.12) and, hence, $P_M$ has the union bound
$$P_M \le (M-1)\,Q\left(\sqrt{\frac{E}{N_0}}\right)$$
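Both the exact-pairwise union bound and the looser $d_{\min}$ bound can be computed for any constellation (illustrative Python; unit gain and equiprobable symbols assumed):

```python
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

def union_bound(signals, N0):
    """Average of the per-symbol union bounds: sum of pairwise terms
    Q(d_jk / (2*sqrt(N0))) over all j != k, divided by M."""
    M = len(signals)
    total = sum(Q(abs(sj - sk) / (2 * math.sqrt(N0)))
                for k, sk in enumerate(signals)
                for j, sj in enumerate(signals) if j != k)
    return total / M

def dmin_bound(signals, N0):
    """Looser bound (M-1) * Q(d_min / (2*sqrt(N0)))."""
    dmin = min(abs(a - b) for i, a in enumerate(signals)
               for b in signals[i+1:])
    return (len(signals) - 1) * Q(dmin / (2 * math.sqrt(N0)))
```

For a QPSK-like set `[1+1j, -1+1j, -1-1j, 1-1j]`, the pairwise union bound is always at most the $(M-1)\,Q(d_{\min}/2\sqrt{N_0})$ bound, since every pairwise term is largest at $d_{\min}$.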
End of Lecture 17

Digital Communications
Lecture 18  Error Probability for Binary PAM with MAP Receiver and Orthogonal Modulation with ML Receiver
Pairwise Error Probability

Consider two baseband signal points $\vec s_1$ and $\vec s_2$ in a signal space that contains M signal points. Suppose that coherent detection is used with ML decisions, where the receiver decides in favor of the signal point that is closest in Euclidean distance to the received signal point $\vec r$.
The probability of error associated with two signal points taken at a time is called the pairwise error probability. The pairwise error probability between $\vec s_1$ and $\vec s_2$ is
$$P(\vec s_1, \vec s_2) = Q\left(\sqrt{\frac{d_{12}^2}{4N_0}}\right)$$
where $d_{12}^2 = \|\vec s_1 - \vec s_2\|^2$ is the squared Euclidean distance between $\vec s_1$ and $\vec s_2$, and
$$Q(d) = \frac{1}{\sqrt{2\pi}}\int_d^{+\infty} e^{-x^2/2}\,dx$$
Union Bound on Symbol Error Probability

A further upper bound can be obtained by first computing the minimum Euclidean distance between any two signal points, $d_{\min}^2 = \min_{m\ne n}\|\vec s_m - \vec s_n\|^2$. Thus the pairwise error probability between $\vec s_j$ and $\vec s_k$ is bounded by
$$P(\vec s_j, \vec s_k) \le Q\left(\sqrt{\frac{d_{\min}^2}{4N_0}}\right)$$
Hence, we can write the symbol error probability of an M-ary system as
$$P_M \le (M-1)\,Q\left(\sqrt{\frac{d_{\min}^2}{4N_0}}\right)$$
For M-ary orthogonal signals, all signals are separated by the distance $d_{\min} = 2\sqrt{E}$ and, hence, $P_M$ has the union bound
$$P_M \le (M-1)\,Q\left(\sqrt{\frac{E}{N_0}}\right)$$
P.E. of BPSK & BFSK with ML Detector

BPSK (antipodal signals): $s_1 = \sqrt{2E}$ and $s_2 = -\sqrt{2E}$, so
$$d_{12}^2 = \big(2\sqrt{2E}\big)^2 = 8E\quad\Rightarrow\quad P_{e\_BPSK} = Q\left(\sqrt{\frac{2E}{N_0}}\right)$$
BFSK (orthogonal signals): with passband signals
$$\sqrt{\frac{2E}{T}}\cos(2\pi f_c t)\quad\text{and}\quad \sqrt{\frac{2E}{T}}\cos\big(2\pi f_c t + 2\pi\Delta f\,t\big)$$
and baseband basis $\phi_1(t) = \sqrt{1/T}$, $\phi_2(t) = \sqrt{1/T}\,e^{j2\pi\Delta f\,t}$, the signal vectors are $\vec s_1 = (\sqrt{2E},\,0)$ and $\vec s_2 = (0,\,\sqrt{2E})$, so
$$d_{12}^2 = 4E\quad\Rightarrow\quad P_{e\_BFSK} = Q\left(\sqrt{\frac{E}{N_0}}\right)$$
P.E. of BPSK & BFSK with MAP Detector

Consider BPSK or BPAM signals having the signal vectors $s_1 = \sqrt{2E}$ and $s_2 = -\sqrt{2E}$. Suppose further that $P_1 = p$ and $P_2 = 1 - p$, i.e., the signals are not transmitted with equal probability. Then $\vec r = r = g s_i + n$ (there is only one basis) and the conditional density of $r$ is
$$P(r\mid g s_i) = \frac{1}{\sqrt{2\pi N_0}}\,\exp\left(-\frac{1}{2N_0}\,|r - g s_i|^2\right)$$
The MAP decision rule with minimum error probability chooses $\vec s_1$ if (APP metric)
$$P(r\mid g s_1)\,P_1 > P(r\mid g s_2)\,P_2\ \Longleftrightarrow\ \exp\left(-\frac{|r - g s_1|^2}{2N_0}\right)p > \exp\left(-\frac{|r - g s_2|^2}{2N_0}\right)(1-p)$$
$$\Longleftrightarrow\ \exp\left(\frac{1}{2N_0}\big(|r - g s_2|^2 - |r - g s_1|^2\big)\right) > \frac{1-p}{p}$$
P.E. of BPSK & BFSK with MAP Detector (cont.)

Using $|r - g s_i|^2 = |r|^2 - 2\,\mathrm{Re}\{r g^* s_i^*\} + |g|^2|s_i|^2$ and $s_1 = -s_2 = \sqrt{2E}$,
$$|r - g s_2|^2 - |r - g s_1|^2 > 2N_0\ln\frac{1-p}{p}\ \Longleftrightarrow\ 2\,\mathrm{Re}\{r g^* s_1^*\} - 2\,\mathrm{Re}\{r g^* s_2^*\} > 2N_0\ln\frac{1-p}{p}$$
$$\Longleftrightarrow\ \mathrm{Re}\{r g^*\} > \frac{N_0}{\sqrt{8E}}\ln\frac{1-p}{p}$$
When $p_1 = p_2 = 1/2$, $\frac{N_0}{\sqrt{8E}}\ln\frac{1-p}{p} = 0$. In this case ML = MAP, and we choose $\vec s_1$ if $\mathrm{Re}\{r g^*\} > 0$.
Otherwise, the optimal threshold moves away from the signal point with the larger transmission probability.
P.E. of BPSK & BFSK with MAP Detector (cont.)

Define $x \triangleq \mathrm{Re}\{r g^*\}$ and the threshold $\tau \triangleq \frac{N_0}{\sqrt{8E}}\ln\frac{1-p}{p}$. An error happens if $\vec s_1$ is sent while $x < \tau$, or if $\vec s_2$ is sent while $x > \tau$.
If $\vec s_2$ is sent, then $r = -\sqrt{2E}\,g + n$ with $p(n) = \frac{1}{2\pi N_0}\,e^{-|n|^2/2N_0}$. With $|g| = 1$,
$$x = \mathrm{Re}\{r g^*\} = -\sqrt{2E}\,|g|^2 + \underbrace{\mathrm{Re}\{g^* n\}}_{u},\qquad E[u^2] = \big(g_I^2 + g_Q^2\big)N_0 = N_0$$
since $\mathrm{Re}\{(g_I - j g_Q)(n_I + j n_Q)\} = g_I n_I + g_Q n_Q$ (Lecture 13, pp.13 & 17). Therefore
$$P(u) = \frac{1}{\sqrt{2\pi N_0}}\,e^{-u^2/2N_0},\qquad P(x\mid\vec s_2) = \frac{1}{\sqrt{2\pi N_0}}\,e^{-(x+\sqrt{2E})^2/2N_0}$$
P.E. of BPSK & BFSK with MAP Detector (cont.)

Integrating the Gaussian tail of $P(x\mid\vec s_2)$ beyond the threshold $\tau$,
$$P_{e2} = P\{x > \tau\mid\vec s_2\} = Q\left(\sqrt{\frac{2E}{N_0}} + \sqrt{\frac{N_0}{8E}}\ln\frac{1-p}{p}\right)$$
Similarly,
$$P_{e1} = P\{x < \tau\mid\vec s_1\} = Q\left(\sqrt{\frac{2E}{N_0}} - \sqrt{\frac{N_0}{8E}}\ln\frac{1-p}{p}\right)$$
so the average error probability is
$$P_{e\_MAP} = p\,P_{e1} + (1-p)\,P_{e2} = p\,Q\left(\sqrt{\frac{2E}{N_0}} - \sqrt{\frac{N_0}{8E}}\ln\frac{1-p}{p}\right) + (1-p)\,Q\left(\sqrt{\frac{2E}{N_0}} + \sqrt{\frac{N_0}{8E}}\ln\frac{1-p}{p}\right)$$
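The MAP error probability formula is easy to evaluate numerically (illustrative Python; unit channel gain assumed):

```python
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

def pe_map_bpsk(E, N0, p):
    """Average error probability of binary antipodal signalling with priors
    P(s1) = p, P(s2) = 1 - p and the MAP threshold
    tau = (N0/sqrt(8E)) * ln((1-p)/p) (unit channel gain)."""
    shift = math.sqrt(N0 / (8 * E)) * math.log((1 - p) / p)
    base = math.sqrt(2 * E / N0)
    return p * Q(base - shift) + (1 - p) * Q(base + shift)
```

At `p = 0.5` the shift vanishes and the result equals the ML value $Q\big(\sqrt{2E/N_0}\big)$; for skewed priors the MAP receiver does strictly better.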
P.E. of BPSK & BFSK with MAP Detector (cont.)

The ML receiver ignores the priors and always thresholds at zero, giving
$$P_{e\_ML} = Q\left(\sqrt{\frac{2E}{N_0}}\right)$$
Expanding the two $Q$-function terms of $P_{e\_MAP}$ around $\sqrt{2E/N_0}$ and comparing term by term shows that the prior-weighted combination never exceeds the ML value:
$$P_{e\_MAP} \le P_{e\_ML}$$
P.E. of BPSK & BFSK with MAP Detector (cont.)

Generally speaking, define
$$y = \sqrt{\frac{2E}{N_0}} > 0,\qquad z = \sqrt{\frac{N_0}{8E}}\ln\frac{1-p}{p},\qquad \varphi(t) = \frac{1}{\sqrt{2\pi}}\,e^{-t^2/2}$$
and note that $2yz = \ln\frac{1-p}{p}$ exactly, so that $p\,\varphi(y-z) = (1-p)\,\varphi(y+z)$. Provided $y > |z|$ (the usual high-SNR situation $y \gg |z|$):
(1) If $p < 0.5$, then $z > 0$ and, since $\varphi$ is decreasing on $[y-z,\ y+z] \subset [0,\infty)$,
$$P_{e\_MAP} - P_{e\_ML} = p\int_{y-z}^{y}\varphi(t)\,dt - (1-p)\int_{y}^{y+z}\varphi(t)\,dt < z\big[p\,\varphi(y-z) - (1-p)\,\varphi(y+z)\big] = 0$$
(2) If $p > 0.5$, then $z < 0$ and the symmetric argument gives the same conclusion.
Therefore
$$P_{e\_MAP} \le P_{e\_ML}$$
with equality when $p = 0.5$.
P.E. of M-ary Orthogonal Signals with ML Detector

Suppose that we have the M-ary orthogonal signal set $\vec s_i = \sqrt{2E}\,\vec e_i$, $i = 1, \dots, M$, where $\vec e_i$ is a length-M vector with a 1 in the $i$-th coordinate.
If the signal $\vec s_1$ is transmitted, the received signal vector at the output of the correlation detector is (see lecture 17, p.12)
$$\vec r = g\,\big(\sqrt{2E},\,0,\,\dots,\,0\big) + (n_1,\,n_2,\,\dots,\,n_M),\qquad g = e^{j\phi}$$
where the $n_i$ are i.i.d. zero-mean complex Gaussian r.v.s with $E\{|n_i|^2\} = 2N_0$.
Since the transmit signals have equal energy (see lecture 12, p.3), the ML receiver computes the decision variables
$$\mu(\vec s_i) = \mathrm{Re}\big\{\langle\vec r, g\vec s_i\rangle\big\} - \underbrace{\frac12\|g\vec s_i\|^2}_{\text{constant}}\ \Rightarrow\ \text{only consider } \mathrm{Re}\big\{e^{-j\phi}\langle\vec r, \vec s_i\rangle\big\}$$
and chooses the signal with the largest $\mu(\vec s_i)$.
P.E. of M-ary Orthogonal Signals with ML Detector (cont.)

Since
$$\mu(\vec s_1) = 2E + \sqrt{2E}\,\mathrm{Re}\{e^{-j\phi} n_1\},\qquad \mu(\vec s_i) = \sqrt{2E}\,\mathrm{Re}\{e^{-j\phi} n_i\},\quad i = 2, \dots, M$$
we have
$$\mu(\vec s_1) \sim N(2E,\ 2EN_0),\qquad \mu(\vec s_i) \sim N(0,\ 2EN_0),\quad i = 2, \dots, M$$
Let $P_{c\mid y}$ be the probability of correct decision conditioned on the decision value being $\mu(\vec s_1) = y$. Then the probabilities of correct decision and error decision are
$$P_c = \int_{-\infty}^{+\infty} P_{c\mid y}\;p_Y(y)\,dy,\qquad P_e = 1 - P_c$$
P.E. of M-ary Orthogonal Signals with ML Detector (cont.)

Since the $\mu(\vec s_i)$, $i = 2, \dots, M$, are i.i.d. (conditioned on the signal component of $\vec r$),
$$P_{c\mid y} = P\big\{\mu(\vec s_2) < y,\ \dots,\ \mu(\vec s_M) < y\big\} = \prod_{i=2}^{M} P\{\mu(\vec s_i) < y\} = \left[1 - Q\left(\frac{y}{\sqrt{2EN_0}}\right)\right]^{M-1}$$
Hence, with $\mu(\vec s_1) \sim N(2E,\ 2EN_0)$,
$$P_c = \int_{-\infty}^{+\infty}\frac{1}{\sqrt{4\pi EN_0}}\,\exp\left(-\frac{(y - 2E)^2}{4EN_0}\right)\left[1 - Q\left(\frac{y}{\sqrt{2EN_0}}\right)\right]^{M-1}dy$$
P.E. of M-ary Orthogonal Signals with ML Detector (cont.)

Now let $x = (y - 2E)/\sqrt{2EN_0}$; then $y = \sqrt{2EN_0}\,x + 2E$ and $dy = \sqrt{2EN_0}\,dx$. Hence,
$$P_c = \int_{-\infty}^{+\infty}\left[1 - Q\left(x + \sqrt{\frac{2E}{N_0}}\right)\right]^{M-1}\frac{1}{\sqrt{2\pi}}\,e^{-x^2/2}\,dx$$
Finally, the probability of symbol error is
$$P_e = 1 - P_c = 1 - \int_{-\infty}^{+\infty}\left[1 - Q\left(x + \sqrt{\gamma_s}\right)\right]^{M-1}\frac{1}{\sqrt{2\pi}}\,e^{-x^2/2}\,dx \qquad (1)$$
where $\gamma_s = 2E/N_0$ is the symbol energy-to-noise ratio.
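Eq. (1) has no closed form for general M, but it is straightforward to evaluate numerically (illustrative Python; the trapezoidal rule over a finite interval is accurate here because the integrand decays like a Gaussian):

```python
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

def pe_orthogonal(M, E_over_N0, n=4000, lim=10.0):
    """Trapezoidal evaluation of Eq. (1):
    Pe = 1 - Int phi(x) * [1 - Q(x + sqrt(2E/N0))]^(M-1) dx over [-lim, lim]."""
    a = math.sqrt(2 * E_over_N0)
    h = 2 * lim / n
    acc = 0.0
    for i in range(n + 1):
        x = -lim + i * h
        w = 0.5 if i in (0, n) else 1.0
        phi = math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
        acc += w * phi * (1 - Q(x + a)) ** (M - 1)
    return 1 - acc * h
```

For `M = 2` this reproduces the binary orthogonal result $Q\big(\sqrt{E/N_0}\big)$, and the error probability grows with M at fixed $E/N_0$.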
Alternative Derivation

We can also derive an alternative expression for the error probability by first conditioning on the event that one of the decision variables $\mu(\vec s_i)$, $i = 2, \dots, M$, is the largest. The $M-1$ such error events are disjoint and, by symmetry, equally probable, so conditioning on $\mu(\vec s_2) = y$,
$$P_e = (M-1)\int_{-\infty}^{+\infty} P\{\mu(\vec s_1) < y\}\,\prod_{i=3}^{M} P\{\mu(\vec s_i) < y\}\;\frac{1}{\sqrt{4\pi EN_0}}\,\exp\left(-\frac{y^2}{4EN_0}\right)dy$$
$$= (M-1)\int_{-\infty}^{+\infty}\left[1 - Q\left(\frac{y - 2E}{\sqrt{2EN_0}}\right)\right]\left[1 - Q\left(\frac{y}{\sqrt{2EN_0}}\right)\right]^{M-2}\frac{1}{\sqrt{4\pi EN_0}}\,\exp\left(-\frac{y^2}{4EN_0}\right)dy$$
Now let $x = y/\sqrt{2EN_0}$. Then $y = \sqrt{2EN_0}\,x$ and $dy = \sqrt{2EN_0}\,dx$. Hence
$$P_e = (M-1)\int_{-\infty}^{+\infty}\left[1 - Q\left(x - \sqrt{\frac{2E}{N_0}}\right)\right]\big[1 - Q(x)\big]^{M-2}\,\frac{1}{\sqrt{2\pi}}\,e^{-x^2/2}\,dx \qquad (2)$$
Alternative Derivation (cont.)

Exercise: Show that Eq. (2) above is equal to the previous expression (1).
Proof sketch: With $\Phi(x) = 1 - Q(x)$ and $\varphi(x) = \frac{1}{\sqrt{2\pi}}\,e^{-x^2/2}$, note that
$$d\big[\Phi^{M-1}(x)\big] = (M-1)\,\Phi^{M-2}(x)\,\varphi(x)\,dx$$
Integrating (2) by parts therefore gives
$$P_e = \Big[\Phi\big(x - \sqrt{2E/N_0}\big)\,\Phi^{M-1}(x)\Big]_{-\infty}^{+\infty} - \int_{-\infty}^{+\infty}\Phi^{M-1}(x)\,\varphi\big(x - \sqrt{2E/N_0}\big)\,dx = 1 - \int_{-\infty}^{+\infty}\Phi^{M-1}\big(u + \sqrt{2E/N_0}\big)\,\varphi(u)\,du$$
where the last step substitutes $u = x - \sqrt{2E/N_0}$; this is exactly $1 - P_c$ in (1).
End of Lecture 18

Digital Communications
Lecture 19  Error Probability for PSK, PAM, QAM & Comparison of Modulation Techniques
Bit Energy and Bit Error Probability

Note that $\gamma_s = 2E/N_0$ is the symbol energy-to-noise ratio. The bit energy-to-noise ratio is $\gamma_b = \gamma_s/\log_2 M$, where M is the number of waveforms.
Since $P_e$ is the symbol error probability, what about the bit error probability?
Orthogonal signals are all equally distant from each other (see lecture 12, p.5) and, therefore, when an error occurs, it may occur to any other symbol with equal probability. Consequently, the mapping of bits onto symbols is arbitrary. With $M = 2^k$, we may use the mapping
$$s_1 \leftarrow 000\ldots0,\qquad s_2 \leftarrow 000\ldots1,\qquad \ldots,\qquad s_M \leftarrow 111\ldots1\qquad(k\ \text{bits each})$$
Bit Error Probability for M-ary Orthogonal Signals

Assume that the symbol $\vec s_1 \leftarrow (0, 0, \dots, 0)$ is sent. Given the occurrence of a symbol error at the receiver output, each of the $M-1$ incorrect symbols will be chosen with equal probability. This means that if $\vec s_1$ is sent, the probability of deciding on any one of the set $\{\vec s_2, \vec s_3, \dots, \vec s_M\}$ is
$$\frac{P_e}{M-1} = \frac{P_e}{2^k - 1}$$
Among the $2^k - 1$ incorrect symbols there are $\binom{k}{n}$ labels that differ from the transmitted one in exactly n bits, so the average number of bit errors per decision is
$$\sum_{n=1}^{k} n\,\binom{k}{n}\,\frac{P_e}{2^k - 1} = k\,\frac{2^{k-1}}{2^k - 1}\,P_e$$
Hence, the average probability of bit error for M-ary orthogonal signals is obtained by dividing the above by k (the number of bits per symbol):
$$P_b = \frac{2^{k-1}}{2^k - 1}\,P_e \;\xrightarrow{\ k\ \text{large}\ }\; \frac{P_e}{2}$$
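The conversion factor is a one-liner (illustrative Python):

```python
def pb_from_pe_orthogonal(k, pe):
    """Bit error probability of M = 2^k orthogonal signalling:
    Pb = 2^(k-1) / (2^k - 1) * Pe, which tends to Pe/2 for large k."""
    return (2 ** (k - 1)) / (2 ** k - 1) * pe
```

For `k = 1` (binary) the factor is 1, so $P_b = P_e$; for large k it approaches $1/2$.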
M-ary PAM Signals

M-ary PAM signals have the low-pass signal vectors
$$s_m(t) = A\,a_m\,p(t)\cos 2\pi f_c t;\qquad s_{Bm}(t) = A\,a_m\,p(t);\qquad f(t) = \frac{1}{\sqrt{E_p}}\,p(t)$$
$$\vec s_m = A\sqrt{E_p}\,a_m,\qquad a_m \in \{\pm 1,\ \pm 3,\ \dots,\ \pm(M-1)\}$$
Notice that the squared Euclidean distance between any adjacent signal points is
$$d_{\min}^2 = 4A^2 E_p$$
We have seen earlier (lecture 11, p.7) that the average symbol energy is
$$E_{av} = \frac{A^2 E_p\,(M^2 - 1)}{6}$$
Hence,
$$d_{\min}^2 = \frac{24\,E_{av}}{M^2 - 1}$$
M-ary PAM Signals (cont.)

For M-ary PAM signals, we must consider two cases: (i) the $M-2$ inner signal points, each surrounded by two neighbouring signal points; (ii) the 2 signal points on the ends. We have the average probability of adjacent errors
$$P_e = \frac{M-2}{M}\cdot 2\,Q\left(\sqrt{\frac{d_{\min}^2}{4N_0}}\right) + \frac{2}{M}\,Q\left(\sqrt{\frac{d_{\min}^2}{4N_0}}\right) = \frac{2(M-1)}{M}\,Q\left(\sqrt{\frac{6\,E_{av}}{(M^2-1)\,N_0}}\right)$$
With Gray coding, $P_b \approx \frac{P_e}{\log_2 M}$. Also, $E_{av} = (\log_2 M)\,E_{b,av}$; therefore,
$$P_b \approx \frac{2(M-1)}{M\log_2 M}\,Q\left(\sqrt{\frac{6\,(\log_2 M)\,E_{b,av}}{(M^2-1)\,N_0}}\right)$$
The approximation is accurate for large $\gamma_b$ but will underestimate $P_b$ at low $\gamma_b$.
M-ary QAM Signals

For any given signal set dimension M, many different signal constellations can be constructed that transmit information in the amplitude and phase of the carrier.
For all these signal constellations, the error probability is dominated (at high $E/N_0$) by the minimum Euclidean distance $d_{\min}$ between the signal points. Recall that (lecture 17, p.14)
$$P_e < (M-1)\,Q\left(\sqrt{\frac{d_{\min}^2}{4N_0}}\right)$$
We can approximate the error probability by replacing $M-1$ with the number $M_{\min}$ of signal points at distance $d_{\min}$, i.e.,
$$P_e \approx M_{\min}\,Q\left(\sqrt{\frac{d_{\min}^2}{4N_0}}\right)$$
Two signal constellations having the same $d_{\min}$ have the same upper bound on error probability. However, the one achieving that value of $d_{\min}$ with the smaller average symbol energy is the more efficient.
M-ary QAM Signals (cont.)

A rectangular M-QAM constellation, where $M = 2^k$ and k is even, can be viewed as two $\sqrt M$-PAM systems in quadrature, each having one half of the average power of the M-QAM system (see lecture 11, pp.5 & 12). Since independent decisions can be made on the quadrature components, the probability of correct symbol reception for the M-QAM system is
$$P_c = \big(1 - P_{\sqrt M}\big)^2$$
and the probability of symbol error is
$$P_e = 1 - \big(1 - P_{\sqrt M}\big)^2$$
where $P_{\sqrt M}$ is the error probability of a $\sqrt M$-PAM system with one half of the average power of the M-QAM system, given by (see p.5)
$$P_{\sqrt M} = 2\left(1 - \frac{1}{\sqrt M}\right)Q\left(\sqrt{\frac{3\,E_{av}}{(M-1)\,N_0}}\right)$$
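The PAM and QAM formulas compose naturally in code (illustrative Python; `Eav_over_N0` denotes the average symbol energy-to-noise ratio $E_{av}/N_0$):

```python
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

def pe_pam(M, Eav_over_N0):
    """M-ary PAM symbol error: (2(M-1)/M) * Q(sqrt(6*Eav/((M^2-1)*N0)))."""
    return 2 * (M - 1) / M * Q(math.sqrt(6 * Eav_over_N0 / (M * M - 1)))

def pe_qam(M, Eav_over_N0):
    """Rectangular M-QAM (M = 2^k, k even) as two sqrt(M)-PAM systems in
    quadrature, each with half the average power: Pe = 1 - (1 - P_sqrtM)^2."""
    p_half = pe_pam(int(math.isqrt(M)), Eav_over_N0 / 2)
    return 1 - (1 - p_half) ** 2
```

Sanity checks: `pe_pam(2, x)` reduces to the BPSK result $Q\big(\sqrt{2x}\big)$, and `pe_qam(4, x)` reproduces the QPSK expression $2Q(\sqrt{x}) - Q^2(\sqrt{x})$.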
M-ary PSK Signals

For PSK, the signal vectors are (see lecture 17, p.11)
$$\vec s_m = \sqrt{2E}\,e^{j\theta_m} = \sqrt{2E}\,(\cos\theta_m + j\sin\theta_m),\qquad \theta_m = 2\pi(m-1)/M,\quad 1 \le m \le M$$
Suppose that $\vec s_1 = \sqrt{2E} = \sqrt{2E}\,(1 + j0)$ is transmitted. Then
$$r = g\,\vec s_1 + n = \sqrt{2E}\,e^{j\phi} + n_I + j n_Q$$
where $n_I$ and $n_Q$ are i.i.d. Gaussian r.v.s with distribution $N(0, N_0)$. The coherent receiver forms
$$\tilde r = e^{-j\phi}\,r = \sqrt{2E} + n_I + j n_Q$$
where we have ignored the phase rotation on the noise, since it does not affect the noise statistics and hence does not affect the error probability.
M-ary PSK Signals (cont.)

The phase of the rotated received signal point $e^{-j\phi} r$ is
$$\Theta_r = \tan^{-1}\frac{n_Q}{\sqrt{2E} + n_I}$$
As long as $-\pi/M \le \Theta_r \le \pi/M$, a correct decision will be made. So, to compute the final symbol error probability, we need the p.d.f. of the parameter $\Theta_r$.

[Figure: complex signal-space diagram for 8-PSK, points $S_0, \dots, S_7$ on a circle of radius $\sqrt{2E}$, with the associated pie-slice decision regions of angular width $2\pi/8$.]
End of Lecture 19

Digital Communications
Lecture 20  Coherent Detection of M-ary and Differential Detection of Binary PSK
2
P.E. for M-ary PAM & QAM Signals
M-ary PAM Signals
    P_M = \frac{2(M-1)}{M}\, Q\!\left(\sqrt{\frac{6\gamma_s}{M^2-1}}\right), \qquad \gamma_s = \frac{E_{av}}{N_0}
For M = 2 (2-PAM or BPSK),
    P_e = Q(\sqrt{2\gamma_s})
M-ary QAM Signals
    P_M = 1 - (1 - P_{\sqrt{M}})^2, \qquad P_{\sqrt{M}} = 2\left(1 - \frac{1}{\sqrt{M}}\right) Q\!\left(\sqrt{\frac{3\gamma_s}{M-1}}\right)
For M = 4 (QPSK), P_{\sqrt{4}} = Q(\sqrt{\gamma_s}), so
    P_e = 2\,Q(\sqrt{\gamma_s})\left[1 - \tfrac{1}{2} Q(\sqrt{\gamma_s})\right]
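The M = 4 special case is easy to confirm by simulation. The sketch below is not from the slides: it assumes a unit-energy 4-QAM constellation with per-component noise variance N_0/2, and compares a Monte Carlo estimate against P_e = 2Q(\sqrt{\gamma_s})[1 - \tfrac{1}{2}Q(\sqrt{\gamma_s})].

```python
import random
from math import erfc, sqrt

def Q(x):
    return 0.5 * erfc(x / sqrt(2))

random.seed(1)
gamma_s = 2.0                  # E_s / N0, linear (an illustrative value)
N0 = 1.0 / gamma_s             # symbol energy E_s = 1
sigma = sqrt(N0 / 2)           # per-component noise standard deviation
a = sqrt(0.5)                  # component amplitude: E_s = 2 a^2 = 1

errors, trials = 0, 200_000
for _ in range(trials):
    # send the symbol (a + j a); decide by the sign of each component
    xi = a + random.gauss(0.0, sigma)
    yq = a + random.gauss(0.0, sigma)
    if xi < 0 or yq < 0:
        errors += 1

p_sim = errors / trials
p_theory = 2 * Q(sqrt(gamma_s)) * (1 - 0.5 * Q(sqrt(gamma_s)))
print(p_sim, p_theory)         # the two estimates should be close
```

The independent per-component decision in the loop is exactly the quadrature decomposition argument used above.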
M-ary PSK Signals
For PSK, the signal vectors are
    s_m = \sqrt{2E}\, e^{j\theta_m} = \sqrt{2E}(\cos\theta_m + j\sin\theta_m)
where \theta_m = 2\pi(m-1)/M, \; m = 1, \dots, M.
Suppose that s_1 = \sqrt{2E}(1 + j0) is transmitted. Then
    r = g s_1 + n = g\sqrt{2E} + n_I + j n_Q
where n_I and n_Q are i.i.d. with p.d.f. N(0, N_0).
The coherent receiver forms
    \tilde r = e^{-j\phi} r = s_1 + e^{-j\phi} n \triangleq x + j y, \qquad x = \sqrt{2E} + n_I, \; y = n_Q
where we have ignored the phase rotation on the noise term since it does
not affect the noise statistics and hence does not affect the final error
probability.
M-ary PSK Signals (cont.)
The phase of the rotated received signal point is
    \Theta = \tan^{-1}\!\frac{y}{x} = \tan^{-1}\!\frac{n_Q}{\sqrt{2E} + n_I}
As long as -\pi/M < \Theta < \pi/M, a correct decision will be made. To compute the
probability of symbol error, we need the p.d.f. of \Theta.
[Figure: Complex signal-space diagram for 8-PSK with associated decision regions; the signal points s_1, ..., s_8 lie on a circle of radius \sqrt{2E} and are labelled with Gray-coded bit triples.]
M-ary PSK Signals (cont.)
Define R = \sqrt{x^2 + y^2} and \Theta = \tan^{-1}(y/x). Then x = R\cos\Theta and y = R\sin\Theta.
The joint p.d.f. of X and Y is
    p_{XY}(x, y) = \frac{1}{2\pi N_0} \exp\!\left\{-\frac{(x - \sqrt{2E})^2 + y^2}{2N_0}\right\}
By using a bivariate transformation with Jacobian
    |J| = \begin{vmatrix} \cos\theta & -r\sin\theta \\ \sin\theta & r\cos\theta \end{vmatrix} = r(\cos^2\theta + \sin^2\theta) = r
it follows that the joint p.d.f. of R and \Theta is
    p_{R,\Theta}(r, \theta) = r\, p_{XY}(r\cos\theta, r\sin\theta)
      = \frac{r}{2\pi N_0} \exp\!\left\{-\frac{(r\cos\theta - \sqrt{2E})^2 + r^2\sin^2\theta}{2N_0}\right\}
      = \frac{r}{2\pi N_0} \exp\!\left\{-\frac{r^2 - 2\sqrt{2E}\, r\cos\theta + 2E}{2N_0}\right\}
      = \frac{r}{2\pi N_0}\, e^{-E\sin^2\theta / N_0} \exp\!\left\{-\frac{(r - \sqrt{2E}\cos\theta)^2}{2N_0}\right\}
M-ary PSK Signals (cont.)
The marginal p.d.f. of \Theta is
    p_\Theta(\theta) = \int_0^\infty p_{R,\Theta}(r, \theta)\, dr
      = \frac{e^{-\gamma_s \sin^2\theta}}{2\pi} \int_0^\infty x \exp\!\left\{-\frac{(x - \sqrt{2\gamma_s}\cos\theta)^2}{2}\right\} dx, \qquad \gamma_s = \frac{E}{N_0}
where the last line uses the change of variable x = r/\sqrt{N_0}.
M-ary PSK Signals (cont.)
Finally, for general M-ary PSK,
    P_c = \int_{-\pi/M}^{\pi/M} p_\Theta(\theta)\, d\theta, \qquad P_M = 1 - \int_{-\pi/M}^{\pi/M} p_\Theta(\theta)\, d\theta
    P_b \approx \frac{P_M}{\log_2 M} \quad \text{(Gray coding)}
For BPSK (2-PAM or antipodal, see Lecture 17, p. 11),
    P_e = Q\!\left(\sqrt{\frac{2E}{N_0}}\right) = Q(\sqrt{2\gamma_s})
For 4-PSK (equivalent to 4-QAM in performance, see p. 2),
    P_e = 2\,Q(\sqrt{\gamma_s})\left[1 - \tfrac{1}{2} Q(\sqrt{\gamma_s})\right]
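Since P_M has no simple closed form for general M, a numeric sketch is useful. The code below is my own discretization (trapezoid and midpoint rules; step counts are arbitrary): it integrates the p.d.f. p_\Theta(\theta) over the correct-decision region and checks that the M = 2 case reproduces Q(\sqrt{2\gamma_s}).

```python
from math import erfc, exp, sqrt, pi, cos, sin

def Q(x):
    return 0.5 * erfc(x / sqrt(2))

def p_theta(theta, snr):
    # marginal pdf of the received phase (snr = gamma_s = E/N0);
    # the inner integral over x is evaluated with the trapezoid rule
    a = sqrt(2 * snr) * cos(theta)
    n, xmax = 800, max(a, 0.0) + 10.0
    h = xmax / n
    s = sum((0.5 if i in (0, n) else 1.0) * (i * h) * exp(-((i * h) - a) ** 2 / 2)
            for i in range(n + 1))
    return exp(-snr * sin(theta) ** 2) / (2 * pi) * s * h

def psk_symbol_error(M, snr, steps=400):
    # P_M = 1 - integral of p_theta over (-pi/M, pi/M), midpoint rule
    h = (2 * pi / M) / steps
    pc = sum(p_theta(-pi / M + (k + 0.5) * h, snr) for k in range(steps)) * h
    return 1 - pc

snr = 4.0
print(psk_symbol_error(2, snr), Q(sqrt(2 * snr)))  # BPSK: both ~ Q(sqrt(2*snr))
print(psk_symbol_error(8, snr))                    # 8-PSK at the same SNR
```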
Comparison of Modulation Techniques
We compare different systems on the basis of bandwidth efficiency and
power efficiency.
(1) Power efficiency is measured by the E_b/N_0 required to achieve a
given error probability.
(2) Bandwidth efficiency can be measured by R/W. For example,
    R/W = \log_2 M          (M-PSK, DSB; M-QAM, DSB)
    R/W = 2\log_2 M         (M-PAM, SSB)
    R/W = 2\log_2 M / M     (M-ary orthogonal FSK)
(See Section 5-2-10 of the textbook for details.)
Receiver for Binary DPSK
[Figure: Block diagram of the binary DPSK receiver. The quadrature inputs r_I(t) and r_Q(t) are each correlated with A h_a(t) over one symbol interval to produce X_n and Y_n; delay-by-T elements supply X_{nd} and Y_{nd}; the decision device tests U_n = X_n X_{nd} + Y_n Y_{nd} > 0.]
Differential Detection of Binary PSK
With differential PSK (DPSK), information can be transmitted in the
differential carrier phase between successive symbols.
DPSK can be detected noncoherently by using differentially coherent
detection, where the receiver compares the phase of the received signal
between two successive signaling intervals.
Suppose that binary DPSK is used. Let \theta_n denote the absolute carrier
phase for the nth symbol, and \Delta\theta_n = \theta_n - \theta_{n-1} denote the differential carrier
phase. Several mappings exist between the differential carrier phase and
source symbols. Here we consider the mapping
    \Delta\theta_n = \begin{cases} 0, & x_n = +1 \\ \pi, & x_n = -1 \end{cases}
The transmitted bandpass waveform is
    s(t) = A \sum_n h_a(t - nT)\cos(2\pi f_c t + \theta_n)
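The mapping above is straightforward to implement in discrete time. A minimal sketch (function names are mine, not from the slides): differentially encode a ±1 sequence into absolute phases, then recover the symbols from phase differences.

```python
from math import pi

def dpsk_encode(symbols, theta0=0.0):
    # accumulate the differential phase: delta = 0 for x_n = +1, pi for x_n = -1
    phases, theta = [], theta0
    for x in symbols:
        theta = (theta + (0.0 if x == +1 else pi)) % (2 * pi)
        phases.append(theta)
    return phases

def dpsk_decode(phases, theta0=0.0):
    # recover x_n from the phase difference between successive symbols
    out, prev = [], theta0
    for th in phases:
        d = (th - prev) % (2 * pi)
        # d is ideally 0 or pi; decide whichever is closer
        out.append(+1 if min(d, 2 * pi - d) < pi / 2 else -1)
        prev = th
    return out

data = [+1, -1, -1, +1, -1, +1]
assert dpsk_decode(dpsk_encode(data)) == data
```

Note that the decoder never needs the absolute phase reference, which is the whole point of differential encoding.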
Receiver for DPSK
The received signal is
    r(t) = A \sum_n h_a(t - nT)\cos(2\pi f_c t + \theta_n + \phi) + n(t)
where
    n(t) = n_I(t)\cos(2\pi f_c t) - n_Q(t)\sin(2\pi f_c t)
After quadrature demodulation we have
    r_I(t) = \left[2 r(t)\cos(2\pi f_c t)\right]_{LP} = A \sum_n h_a(t - nT)\cos(\theta_n + \phi) + n_I(t)
    r_Q(t) = -\left[2 r(t)\sin(2\pi f_c t)\right]_{LP} = A \sum_n h_a(t - nT)\sin(\theta_n + \phi) + n_Q(t)
Receiver for DPSK (cont.)
The values of X_n, X_{nd}, Y_n and Y_{nd} are
    X_n = \int_0^T r_I(t + nT)\, A h_a(t)\, dt = \int_0^T \left[A h_a(t)\cos(\theta_n + \phi) + n_I(t + nT)\right] A h_a(t)\, dt
        = 2E\cos(\theta_n + \phi) + n_I
    X_{nd} = 2E\cos(\theta_{n-1} + \phi) + n_{Id}
    Y_n = 2E\sin(\theta_n + \phi) + n_Q
    Y_{nd} = 2E\sin(\theta_{n-1} + \phi) + n_{Qd}
where
    E = \frac{A^2}{2} \int_0^T h_a^2(t)\, dt
is the symbol energy, which is also the bit energy for binary PSK.
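The signal part of the correlator output is easy to verify numerically. The sketch below (my own choices: a rectangular h_a(t), a noise-free r_I, an arbitrary phase) checks that the correlation yields 2E\cos(\theta_n + \phi).

```python
from math import cos

# discrete-time, noise-free check that correlating r_I(t + nT) with A*h_a(t)
# yields 2*E*cos(theta_n + phi), for a rectangular pulse h_a on [0, T)
A, T, n = 2.0, 1.0, 10_000
dt = T / n
h = [1.0] * n                                 # samples of h_a(t)
E = (A * A / 2) * sum(v * v for v in h) * dt  # E = (A^2/2) * integral of h_a^2
angle = 0.7                                   # theta_n + phi (arbitrary)
r_I = [A * v * cos(angle) for v in h]         # r_I(t + nT) without noise
X = sum(r * A * v for r, v in zip(r_I, h)) * dt
print(X, 2 * E * cos(angle))                  # equal up to rounding
```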
Receiver for DPSK (cont.)
The noise terms are
    n_I = A \int_{nT}^{(n+1)T} n_I(t)\, h_a(t - nT)\, dt, \qquad n_{Id} = A \int_{(n-1)T}^{nT} n_I(t)\, h_a(t - (n-1)T)\, dt
    n_Q = A \int_{nT}^{(n+1)T} n_Q(t)\, h_a(t - nT)\, dt, \qquad n_{Qd} = A \int_{(n-1)T}^{nT} n_Q(t)\, h_a(t - (n-1)T)\, dt
These are all independent zero-mean Gaussian random variables with
variance 2EN_0.
Receiver for DPSK (cont.)
In the absence of noise we have
    U_n = X_n X_{nd} + Y_n Y_{nd}
        = 4E^2\left[\cos(\theta_n + \phi)\cos(\theta_{n-1} + \phi) + \sin(\theta_n + \phi)\sin(\theta_{n-1} + \phi)\right]
        = 4E^2\cos(\theta_n - \theta_{n-1})
        = 4E^2\cos\Delta\theta_n
        = 4E^2 x_n
To evaluate the error probability in noise, we need the conditional
densities
    p_{U|x_n}(u \mid x_n = +1) \quad \text{and} \quad p_{U|x_n}(u \mid x_n = -1)
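A Monte Carlo sketch of this receiver (my own parameter choices: E/N_0 = 3 and an arbitrary unknown channel phase \phi) exercises the statistic U_n = X_n X_{nd} + Y_n Y_{nd} in noise; the sign decision recovers x_n, and the measured error rate matches the DPSK result \tfrac{1}{2}\exp(-E/N_0) derived later in this lecture.

```python
import random
from math import cos, sin, pi, sqrt, exp

random.seed(7)
E, N0, phi = 1.0, 1.0 / 3.0, 0.9   # E/N0 = 3; phi is an unknown channel phase
sigma = sqrt(2 * E * N0)           # each noise term has variance 2*E*N0

theta = 0.0                        # previous absolute phase
Xd = 2 * E * cos(theta + phi) + random.gauss(0.0, sigma)
Yd = 2 * E * sin(theta + phi) + random.gauss(0.0, sigma)

errors, trials = 0, 100_000
for _ in range(trials):
    x = random.choice((+1, -1))
    theta += 0.0 if x == +1 else pi
    X = 2 * E * cos(theta + phi) + random.gauss(0.0, sigma)
    Y = 2 * E * sin(theta + phi) + random.gauss(0.0, sigma)
    U = X * Xd + Y * Yd            # decision variable U_n
    if (+1 if U > 0 else -1) != x:
        errors += 1
    Xd, Yd = X, Y

print(errors / trials, 0.5 * exp(-E / N0))  # simulated vs. theoretical BER
```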
Receiver for DPSK (cont.)
The values of X_n, X_{nd}, Y_n and Y_{nd} are
    X_n = 2E\cos(\theta_n + \phi) + n_I
    X_{nd} = 2E\cos(\theta_{n-1} + \phi) + n_{Id}
    Y_n = 2E\sin(\theta_n + \phi) + n_Q
    Y_{nd} = 2E\sin(\theta_{n-1} + \phi) + n_{Qd}
where
    E = \frac{A^2}{2} \int_0^T h_a^2(t)\, dt
is the symbol energy, and n_I, n_{Id}, n_Q and n_{Qd} are independent Gaussian
(real) random variables with zero mean and variance 2EN_0.
Error Probability for DPSK
To evaluate the error probability in noise, we need the conditional
densities
    p_{U|x_n}(u \mid x_n = +1) \quad \text{and} \quad p_{U|x_n}(u \mid x_n = -1)
To determine these conditional p.d.f.s, it is convenient to express U as
    U = \mathrm{Re}\{Z_n Z_{nd}^*\} = \tfrac{1}{2}\left(Z_n Z_{nd}^* + Z_n^* Z_{nd}\right)
where
    Z_n = X_n + jY_n, \qquad Z_{nd} = X_{nd} + jY_{nd}
Then U is a special case of the general quadratic form given in Proakis,
Appendix B, Eq. (B-1).
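The identity behind this quadratic form is elementary and easy to verify on arbitrary complex values:

```python
# verify Re{Z_n * conj(Z_nd)} = X_n*X_nd + Y_n*Y_nd on arbitrary values
z_n, z_nd = 3.0 + 4.0j, -1.0 + 2.0j
u = (z_n * z_nd.conjugate()).real
assert u == z_n.real * z_nd.real + z_n.imag * z_nd.imag
print(u)  # 3*(-1) + 4*2 = 5.0
```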
Error Probability for DPSK (cont.)
The conditional p.d.f. of U is
    p_{U|x_n}(u \mid x_n) = \begin{cases}
      \dfrac{1}{4EN_0} \exp\!\left(\dfrac{x_n u - 2E^2}{2EN_0}\right), & -\infty < x_n u \le 0 \\[2mm]
      \dfrac{1}{4EN_0} \exp\!\left(\dfrac{x_n u - 2E^2}{2EN_0}\right) Q\!\left(\sqrt{\dfrac{2E}{N_0}},\, \sqrt{\dfrac{2 x_n u}{EN_0}}\right), & 0 < x_n u < \infty
    \end{cases}
where Q(a, b) is the Marcum Q-function.
The bit-error probability is
    P_b = \int_{-\infty}^{0} p_{U|x_n}(u \mid x_n = +1)\, du
        = \int_{-\infty}^{0} \frac{1}{4EN_0} \exp\!\left(\frac{u - 2E^2}{2EN_0}\right) du
        = \frac{1}{2}\exp\!\left(-\frac{E}{N_0}\right)
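The last integral can be checked numerically. The sketch below (my own discretization, with an illustrative \gamma_b = E/N_0 = 2) integrates the u < 0 branch of the conditional p.d.f. and compares it with \tfrac{1}{2}\exp(-\gamma_b).

```python
from math import exp

E, N0 = 1.0, 0.5                 # gamma_b = E / N0 = 2 (illustrative)
# midpoint-rule integral of the u < 0 branch of p(u | x_n = +1)
n, u_min = 100_000, -60.0 * E * N0
h = -u_min / n
total = sum(exp(((u_min + (k + 0.5) * h) - 2 * E * E) / (2 * E * N0))
            for k in range(n)) * h / (4 * E * N0)
print(total, 0.5 * exp(-E / N0))  # both ~ 0.5 * e^{-2}
```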
Comparison of Coherent PSK & DPSK
Probability of bit error for coherent PSK:
    P_e = Q\!\left(\sqrt{\frac{2E_b}{N_0}}\right) = Q(\sqrt{2\gamma_b})
    \gamma_b = 0\ \text{dB}: P_e = 0.0786; \qquad \gamma_b = 5\ \text{dB}: P_e = 0.0060
Probability of bit error for differential PSK:
    P_e = \frac{1}{2}\exp\!\left(-\frac{E_b}{N_0}\right) = \frac{1}{2}\exp(-\gamma_b)
    \gamma_b = 0\ \text{dB}: P_e = 0.1839; \qquad \gamma_b = 5\ \text{dB}: P_e = 0.0212
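The four numbers above are easy to reproduce with the standard library (Q built from erfc):

```python
from math import erfc, exp, sqrt

def Q(x):
    return 0.5 * erfc(x / sqrt(2))

for db in (0, 5):
    g = 10 ** (db / 10)   # gamma_b in linear scale
    print(f"{db} dB  coherent PSK: {Q(sqrt(2 * g)):.4f}   DPSK: {0.5 * exp(-g):.4f}")
```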
End of Lecture 20