
Chapter 1 Random Process

1.0 Probability is considered an important background for the analysis and design of communication systems.


1.1 Introduction (Physical phenomenon)
Deterministic model: no uncertainty about its time-dependent behavior at any instant of time, e.g. cos(ωt).
Random model: the future value is subject to chance (probability), e.g. cos(ωt + Θ), where Θ is a random variable, observed in the interval (-T, T).
Examples: thermal noise, a random data stream.





2012/9/10
1.1.1 Example of Stochastic Models
Channel noise and interference
Source of information, such as voice
1.1.2 Relative Frequency
How do we determine the probability of a head appearing when tossing a coin?
Answer: relative frequency. Specifically, by carrying out n coin-tossing experiments, the relative frequency of heads is N_n(A)/n, where N_n(A) is the number of heads observed in these n random experiments.
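The relative-frequency idea above can be checked with a short simulation (a sketch; the sample sizes and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toss a fair coin n times; N_n(A) is the number of heads (coded as 1).
for n in (10, 100, 10_000):
    tosses = rng.integers(0, 2, size=n)
    relative_freq = tosses.sum() / n      # N_n(A) / n
    print(n, relative_freq)
```

As n grows, the relative frequency settles near 1/2, which is why it is used as an estimate of the true probability.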
A1.1 Relative Frequency
Is the relative frequency close to the true probability (of a head appearing)?
It can happen that 4 out of 10 tosses of a fair coin come up heads!
Can one guarantee that the true head probability remains unchanged (i.e., time-invariant) in each experiment (performed at different time instants)?
A1.1 Relative Frequency
Similarly, the previous question extends to communication systems: can we estimate the noise by repetitive measurements at consecutive but different time instants?
Some assumptions on the statistical models are necessary!
1.1.2 Axioms of Probability
Definition of a probability system (S, F, P) (also called a probability space):
1. Sample space S: all possible outcomes (sample points) of the experiment.
2. Event space F: subsets of the sample space that can be probabilistically measured.
   A ∈ F and B ∈ F implies A ∪ B ∈ F.
3. Probability measure P.
1.1.2 Axioms of Probability
3. Probability measure P
A probability measure satisfies:
P(S) = 1 and P(∅) = 0.
For any A in F, 0 ≤ P(A) ≤ 1.
For any two mutually exclusive events A and B, P(A ∪ B) = P(A) + P(B).
1.1.3 Properties from Axioms
1.1.4 Conditional Probability
1.1.5 Random Variable
1.1.6 Random Vector


1.2 Mathematical Definition of a Random Process (RP)
Properties of an RP:
a. It is a function of time.
b. It is random in the sense that, before conducting the experiment, it is not possible to define the waveform.
Each sample point s of the sample space S is mapped to a function of time X(t, s), t ∈ (-T, T).
A random variable (RV) is a real-valued function defined on the elements of the sample space.
{X(t, s), -T ≤ t ≤ T, s ∈ S}    (1.1)
2T: the total observation interval.
x_j(t) = X(t, s_j)    (1.2)
x_j(t) is a sample function, which is deterministic.
At t = t_k, the set {x_j(t_k)} constitutes a random variable (RV).
To simplify the notation, let X(t, s) = X(t).
X(t): a random process, an ensemble of time functions together with a probability rule.
Difference between an RV and an RP:
RV: the outcome is mapped into a real number.
RP: the outcome is mapped into a function of time.
Figure 1.1 An ensemble of sample functions {x_j(t) | j = 1, 2, …, n}.
Note: X(t_1), X(t_2), …, X(t_n) are statistically independent for any t_1, t_2, …, t_n.
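The ensemble picture of Figure 1.1 can be sketched numerically. A minimal example, assuming a random-phase sinusoid as the process (the amplitude, frequency, and ensemble size are arbitrary choices, not from the text):

```python
import numpy as np

rng = np.random.default_rng(1)
T = 1.0
t = np.linspace(-T, T, 201)                  # observation interval (-T, T)
n = 5                                        # ensemble size: n sample functions

# Each outcome s_j fixes a phase; x_j(t) = X(t, s_j) is then a deterministic
# function of time (a sample function).
phases = rng.uniform(-np.pi, np.pi, size=n)
ensemble = np.cos(2 * np.pi * 2.0 * t[None, :] + phases[:, None])

# At a fixed time t_k, the column {x_j(t_k), j = 1..n} is a set of realizations
# of the random variable X(t_k).
k = 100                                      # t[k] = 0
values_at_tk = ensemble[:, k]
print(values_at_tk)
```

Each row of `ensemble` is one sample function; each column is one realization set of the RV obtained at that time instant.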
1.3 Stationary Process
Stationary process: the statistical characterization of the process is independent of the time at which observation of the process is initiated.
Nonstationary process: not a stationary process (an unstable phenomenon).
Consider X(t) initiated at t = -∞, and let X(t_1), X(t_2), …, X(t_k) denote the RVs obtained at t_1, t_2, …, t_k.
For the RP to be stationary in the strict sense (strictly stationary), the joint distribution function must be independent of the time shift τ:
F_{X(t_1+τ),…,X(t_k+τ)}(x_1, …, x_k) = F_{X(t_1),…,X(t_k)}(x_1, …, x_k)    (1.3)
for all time shifts τ, all k, and all possible choices of t_1, t_2, …, t_k.
X(t) and Y(t) are jointly strictly stationary if the joint finite-dimensional distributions of {X(t_1), …, X(t_k)} and {Y(t'_1), …, Y(t'_j)} are invariant with respect to the origin t = 0.

Special cases of Eq. (1.3):
1. k = 1:  F_{X(t)}(x) = F_{X(t+τ)}(x) = F_X(x)  for all t and τ.    (1.4)
2. k = 2, τ = -t_1:  F_{X(t_1),X(t_2)}(x_1, x_2) = F_{X(0),X(t_2-t_1)}(x_1, x_2),    (1.5)
which depends only on t_2 - t_1 (the time difference).
1.3 (Strictly) Stationary
Why introduce stationarity?
With stationarity, we can be certain that observations made at different time instants have the same distributions!
1.4 Mean, Correlation, and Covariance Functions
Let X(t) be a strictly stationary RP.
The mean of X(t) is
μ_X(t) = E[X(t)] = ∫_{-∞}^{∞} x f_{X(t)}(x) dx    (1.6)
       = μ_X  for all t (independent of t),    (1.7)
where f_{X(t)}(x) is the first-order pdf, which is independent of time.
The autocorrelation function of X(t) is
R_X(t_1, t_2) = E[X(t_1)X(t_2)]
             = ∫_{-∞}^{∞} ∫_{-∞}^{∞} x_1 x_2 f_{X(t_1),X(t_2)}(x_1, x_2) dx_1 dx_2
             = ∫_{-∞}^{∞} ∫_{-∞}^{∞} x_1 x_2 f_{X(0),X(t_2-t_1)}(x_1, x_2) dx_1 dx_2   (by stationarity)
             = R_X(t_2 - t_1)  for all t_1 and t_2,    (1.8)
where f_{X(t_1),X(t_2)}(x_1, x_2) is the second-order pdf.
The autocovariance function is
C_X(t_1, t_2) = E[(X(t_1) - μ_X)(X(t_2) - μ_X)] = R_X(t_2 - t_1) - μ_X²,    (1.10)
which is a function of the time difference (t_2 - t_1).
We can determine C_X(t_1, t_2) if μ_X and R_X(t_2 - t_1) are known.
Note that:
1. μ_X and R_X(t_2 - t_1) provide only a partial description of the process.
2. If μ_X(t) = μ_X and R_X(t_1, t_2) = R_X(t_2 - t_1), then X(t) is wide-sense stationary (WSS).
3. The class of strictly stationary processes with finite second-order moments is a subclass of the class of wide-sense stationary processes.
4. The first- and second-order moments may not exist (e.g. f_{X(t)}(x) = 1/(π(1 + x²)), -∞ < x < ∞, has no mean).
1.4 Wide-Sense Stationary (WSS)
1.4 Cyclostationarity
Properties of the autocorrelation function
For convenience of notation, we redefine
R_X(τ) = E[X(t + τ)X(t)]  for all t.    (1.11)
1. The mean-square value: R_X(0) = E[X²(t)]  (τ = 0).    (1.12)
2. R_X(τ) = R_X(-τ).    (1.13)
3. |R_X(τ)| ≤ R_X(0).    (1.14)
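Properties 1-3 can be spot-checked numerically on a closed-form autocorrelation. A sketch, using the random-phase sinusoid's R_X(τ) = (A²/2)cos(2πf_cτ) (cf. Example 1.2) with arbitrary A and f_c:

```python
import numpy as np

A, fc = 2.0, 5.0
tau = np.linspace(-1.0, 1.0, 2001)            # symmetric grid about tau = 0
R = (A**2 / 2) * np.cos(2 * np.pi * fc * tau)

R0 = R[len(tau) // 2]                         # R_X(0): the mean-square value
print(R0, np.abs(R).max())
```

R0 equals A²/2, the grid is symmetric so R_X(τ) = R_X(-τ) can be checked by reversal, and no sample of |R_X(τ)| exceeds R_X(0).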
Proof of property 3:
Consider E[(X(t + τ) ± X(t))²] ≥ 0.
Expanding, E[X²(t + τ)] ± 2E[X(t + τ)X(t)] + E[X²(t)] ≥ 0,
so 2R_X(0) ± 2R_X(τ) ≥ 0,
hence -R_X(0) ≤ R_X(τ) ≤ R_X(0), i.e. |R_X(τ)| ≤ R_X(0).
R_X(τ) provides a measure of the interdependence of two random variables obtained by observing X(t) at times τ seconds apart.
Example 1.2
X(t) = A cos(2π f_c t + Θ)    (1.15)
f_Θ(θ) = 1/(2π),  -π ≤ θ ≤ π;  0, elsewhere.    (1.16)
R_X(τ) = E[X(t + τ)X(t)] = (A²/2) cos(2π f_c τ).    (1.17)
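Equation (1.17) can be verified by replacing the ensemble average with a Monte Carlo average over the random phase (the time instants, parameters, and seed below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
A, fc = 1.0, 3.0
theta = rng.uniform(-np.pi, np.pi, size=200_000)   # draws of the random phase

t, tau = 0.37, 0.11                                # arbitrary fixed instants
# Ensemble average E[X(t + tau) X(t)] over Theta.
est = np.mean(A * np.cos(2 * np.pi * fc * (t + tau) + theta)
              * A * np.cos(2 * np.pi * fc * t + theta))
analytic = (A**2 / 2) * np.cos(2 * np.pi * fc * tau)
print(est, analytic)
```

The estimate does not depend on the choice of t, only on τ, consistent with stationarity.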
Appendix 2.1 Fourier Transform
We refer to |G(f)| as the magnitude spectrum of the signal g(t), and to arg{G(f)} as its phase spectrum.
DIRAC DELTA FUNCTION
Strictly speaking, the theory of the Fourier transform is applicable only to time functions that satisfy the Dirichlet conditions. Such functions include energy signals. However, it would be highly desirable to extend this theory in two ways:
1. To combine the Fourier series and Fourier transform into a unified theory, so that the Fourier series may be treated as a special case of the Fourier transform.
2. To include power signals (i.e., signals for which the average power is finite) in the list of signals to which we may apply the Fourier transform.
The Dirac delta function, or just delta function, denoted by δ(t), is defined as having zero amplitude everywhere except at t = 0, where it is infinitely large in such a way that it contains unit area under its curve; that is,
δ(t) = 0,  t ≠ 0    (A2.3)
∫_{-∞}^{∞} δ(t) dt = 1    (A2.4)
∫_{-∞}^{∞} g(t) δ(t - t_0) dt = g(t_0)    (A2.5)
∫_{-∞}^{∞} g(τ) δ(t - τ) dτ = g(t)    (A2.6)
Example 1.3 Random Binary Wave / Pulse
1. The pulses are represented by ±A volts (mean = 0).
2. The first complete pulse starts at t_d, where
   f_{T_d}(t_d) = 1/T,  0 ≤ t_d ≤ T;  0, elsewhere.
3. During (n - 1)T < t - t_d < nT, the presence of +A or -A is random.
4. When |t_k - t_i| > T, t_k and t_i are not in the same pulse interval; hence X(t_k) and X(t_i) are independent, and
   E[X(t_k)X(t_i)] = E[X(t_k)] E[X(t_i)] = 0.
Figure 1.6 Sample function of the random binary wave.
Solution 1
For t_k < t_i with t_i - t_k < T, X(t_k) and X(t_i) occur in the same pulse interval if and only if t_d + t_i - t_k < T, i.e. t_d < T - (t_i - t_k). Then
E[X(t_k)X(t_i) | t_d] = A²,  t_d < T - (t_i - t_k);  0, elsewhere,
so
E[X(t_k)X(t_i)] = ∫_0^{T-(t_i - t_k)} A² f_{T_d}(t_d) dt_d
                = ∫_0^{T-(t_i - t_k)} (A²/T) dt_d
                = A² (1 - (t_i - t_k)/T),   t_i - t_k < T.
Similar reasoning holds for any other value of t_k, so with τ = t_i - t_k,
R_X(τ) = A² (1 - |τ|/T),  |τ| < T;  0,  |τ| ≥ T.
What is the Fourier transform of R_X(τ)? It is the power spectral density
S_X(f) = A² T sinc²(fT).
Reference: A. Papoulis, Probability, Random Variables and Stochastic Processes, McGraw-Hill.
Solution 2
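The triangular autocorrelation can be reproduced by simulation: draw the random delay t_d and the random pulse signs, evaluate X at two instants less than T apart, and average the product (all numerical choices below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
A, T = 1.0, 1.0
n_trials = 100_000

td = rng.uniform(0.0, T, size=n_trials)            # random start delay per realization
signs = rng.choice([-A, A], size=(n_trials, 8))    # enough pulse levels to cover t_k, t_i

tk, ti = 2.0, 2.4                                  # |t_k - t_i| = 0.4 < T
rows = np.arange(n_trials)
xk = signs[rows, np.floor((tk - td) / T).astype(int)]   # X(t_k) per realization
xi = signs[rows, np.floor((ti - td) / T).astype(int)]   # X(t_i) per realization

est = np.mean(xk * xi)
analytic = A**2 * (1 - abs(tk - ti) / T)           # triangular R_X
print(est, analytic)
```

When the two instants fall in the same pulse interval the product is A²; otherwise the independent signs average it to zero, giving the triangular shape.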
Cross-correlation Functions of X(t) and Y(t)
R_XY(t, u) = E[X(t)Y(u)]    (1.19)
R_YX(t, u) = E[Y(t)X(u)]    (1.20)
Note that R_XY(t, u) and R_YX(t, u) are not in general even functions. R_X(t, u) and R_Y(t, u) are the autocorrelation functions. The correlation matrix is
R(t, u) = [ R_X(t, u)   R_XY(t, u)
            R_YX(t, u)  R_Y(t, u) ].
If X(t) and Y(t) are jointly stationary,
R(τ) = [ R_X(τ)   R_XY(τ)
         R_YX(τ)  R_Y(τ) ],  where τ = t - u.    (1.21)
Proof of R_XY(τ) = R_YX(-τ):
R_XY(τ) = E[X(t)Y(t - τ)].
Let t' = t - τ; then
R_XY(τ) = E[X(t' + τ)Y(t')] = E[Y(t')X(t' + τ)] = R_YX(-τ).    (1.22)
Example 1.4 Quadrature-Modulated Processes
X_1(t) = X(t) cos(2π f_c t + Θ)
X_2(t) = X(t) sin(2π f_c t + Θ),
where X(t) is a stationary process and Θ is uniformly distributed over [0, 2π], independent of X(t).
R_12(τ) = E[X_1(t)X_2(t - τ)]
        = E[X(t)X(t - τ)] E[cos(2π f_c t + Θ) sin(2π f_c t - 2π f_c τ + Θ)]
        = (1/2) R_X(τ) E[sin(4π f_c t - 2π f_c τ + 2Θ) - sin(2π f_c τ)]
        = -(1/2) R_X(τ) sin(2π f_c τ).
At τ = 0, R_12(0) = 0: X_1(t) and X_2(t) are orthogonal at any fixed t.
1.5 Ergodic Processes
Ensemble averages of X(t) are averages across the process (in the sample space). Long-term averages (time averages) are averages along the process (in the time domain).
The DC value of X(t) (a random variable) is
μ_x(T) = (1/2T) ∫_{-T}^{T} x(t) dt.    (1.24)
If X(t) is stationary,
E[μ_x(T)] = (1/2T) ∫_{-T}^{T} E[x(t)] dt = (1/2T) ∫_{-T}^{T} μ_X dt = μ_X.    (1.25)
μ_x(T) represents an unbiased estimate of μ_X.
The process X(t) is ergodic in the mean if
a. lim_{T→∞} μ_x(T) = μ_X,
b. lim_{T→∞} var[μ_x(T)] = 0.
The time-averaged autocorrelation function is
R_x(τ, T) = (1/2T) ∫_{-T}^{T} x(t + τ) x(t) dt,    (1.26)
which is itself a random variable. X(t) is ergodic in the autocorrelation function if
lim_{T→∞} R_x(τ, T) = R_X(τ),
lim_{T→∞} var[R_x(τ, T)] = 0.
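Ergodicity in the mean says one long sample function suffices to recover μ_X. A sketch with a hypothetical process (a DC level plus a random-phase sinusoid; all constants are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
mu_X = 2.0
T = 50.0
t = np.linspace(-T, T, 100_001)

# One sample function: the ensemble mean is mu_X, since the random phase
# averages the sinusoid to zero across the ensemble.
x = mu_X + np.cos(2 * np.pi * 1.0 * t + rng.uniform(-np.pi, np.pi))

# Eq. (1.24): time average over (-T, T), approximated on a uniform grid.
mu_hat = x.mean()
print(mu_hat, mu_X)
```

The sinusoid contributes at most a few oscillation periods' worth of residual to the integral, which vanishes as T grows.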
Linear Time-Invariant Systems (stable)
a. The principle of superposition holds.
b. The impulse response is defined as the response of the system, with zero initial conditions, to a unit impulse δ(t) applied to the input of the system.
c. If the system is time-invariant, the impulse response is the same no matter when the unit impulse is applied.
d. The system can be characterized by its impulse response h(t).
e. The Fourier transform of h(t) is denoted by H(f).
1.6 Transmission of a Random Process Through a Linear Time-Invariant Filter (System)
Y(t) = ∫_{-∞}^{∞} h(τ_1) X(t - τ_1) dτ_1,
where h(t) is the impulse response of the system. In the frequency domain, Y(f) = H(f)X(f).
If E[X(t)] is finite and the system is stable,
μ_Y(t) = E[Y(t)] = E[∫_{-∞}^{∞} h(τ_1) X(t - τ_1) dτ_1]    (1.27)
       = ∫_{-∞}^{∞} h(τ_1) E[X(t - τ_1)] dτ_1 = ∫_{-∞}^{∞} h(τ_1) μ_X(t - τ_1) dτ_1.    (1.28)
If X(t) is stationary,
μ_Y = μ_X ∫_{-∞}^{∞} h(τ_1) dτ_1 = μ_X H(0),    (1.29)
where H(0) is the DC response of the system.
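Equation (1.29) has a direct discrete-time analogue: the output mean of an FIR filter equals the input mean times the DC gain (the sum of the taps). A sketch with arbitrary taps and input statistics:

```python
import numpy as np

rng = np.random.default_rng(5)
mu_X = 3.0
x = mu_X + rng.normal(0.0, 1.0, size=200_000)  # stationary input with mean mu_X

h = np.array([0.5, 0.3, 0.2, 0.1])             # hypothetical impulse response
y = np.convolve(x, h, mode='valid')            # filter output

H0 = h.sum()                                   # DC response H(0)
print(y.mean(), H0 * mu_X)
```

The sample mean of the output matches H(0)·μ_X up to estimation noise.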
Consider the autocorrelation function of Y(t):
R_Y(t, u) = E[Y(t)Y(u)] = E[∫∫ h(τ_1) X(t - τ_1) h(τ_2) X(u - τ_2) dτ_1 dτ_2].    (1.30)
If E[X²(t)] is finite and the system is stable,
R_Y(t, u) = ∫∫ h(τ_1) h(τ_2) R_X(t - τ_1, u - τ_2) dτ_1 dτ_2.    (1.31)
If X(t) is stationary, R_X depends only on the time difference; with τ = t - u,
R_Y(τ) = ∫∫ h(τ_1) h(τ_2) R_X(τ - τ_1 + τ_2) dτ_1 dτ_2.    (1.32)
Stationary input implies stationary (WSS) output, and
R_Y(0) = E[Y²(t)] = ∫∫ h(τ_1) h(τ_2) R_X(τ_2 - τ_1) dτ_1 dτ_2.    (1.33)
1.7 Power Spectral Density (PSD)
Consider the Fourier transform pair of g(t):
G(f) = ∫_{-∞}^{∞} g(t) exp(-j2πft) dt,   g(t) = ∫_{-∞}^{∞} G(f) exp(j2πft) df.
Let H(f) denote the frequency response of the filter, so that
h(τ_1) = ∫_{-∞}^{∞} H(f) exp(j2πfτ_1) df.    (1.34)
Recalling (1.33),
E[Y²(t)] = ∫∫∫ H(f) exp(j2πfτ_1) h(τ_2) R_X(τ_2 - τ_1) df dτ_1 dτ_2    (1.35)
         = ∫_{-∞}^{∞} df H(f) ∫_{-∞}^{∞} dτ_2 h(τ_2) exp(j2πfτ_2) ∫_{-∞}^{∞} R_X(τ) exp(-j2πfτ) dτ,    (1.36)
where τ = τ_2 - τ_1. The middle integral equals H*(f), the complex conjugate response of the filter.
Thus, with |H(f)| the magnitude response,
E[Y²(t)] = ∫_{-∞}^{∞} |H(f)|² [∫_{-∞}^{∞} R_X(τ) exp(-j2πfτ) dτ] df.    (1.37)
Define the power spectral density (the Fourier transform of R_X(τ)):
S_X(f) = ∫_{-∞}^{∞} R_X(τ) exp(-j2πfτ) dτ,    (1.38)
E[Y²(t)] = ∫_{-∞}^{∞} |H(f)|² S_X(f) df.    (1.39)
Let |H(f)| be the magnitude response of an ideal narrowband filter:
|H(f)| = 1,  ||f| - f_c| < Δf/2;  0,  ||f| - f_c| > Δf/2,    (1.40)
where Δf is the filter bandwidth. If Δf << f_c and S_X(f) is continuous,
E[Y²(t)] ≈ 2Δf S_X(f_c),  with S_X(f_c) in W/Hz.
Properties of the PSD
Einstein-Wiener-Khintchine relations:
S_X(f) = ∫_{-∞}^{∞} R_X(τ) exp(-j2πfτ) dτ    (1.42)
R_X(τ) = ∫_{-∞}^{∞} S_X(f) exp(j2πfτ) df    (1.43)
S_X(f) is often more useful than R_X(τ)!
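The pair (1.42)-(1.43) can be checked on a known transform pair. For R(τ) = exp(-a|τ|) (a standard example, not from the text), S(f) = 2a/(a² + (2πf)²); a direct numerical integration of (1.42):

```python
import numpy as np

a = 1.0
tau = np.linspace(-40.0, 40.0, 400_001)
dtau = tau[1] - tau[0]
R = np.exp(-a * np.abs(tau))                   # R_X(tau) = exp(-a|tau|)

f = 0.3
# Eq. (1.42): S_X(f) = integral of R_X(tau) exp(-j 2 pi f tau) dtau
S_num = np.sum(R * np.exp(-2j * np.pi * f * tau)) * dtau
S_analytic = 2 * a / (a**2 + (2 * np.pi * f)**2)
print(S_num.real, S_analytic)
```

Since R is real and even, the imaginary part of the numerical integral cancels, and S_X(f) comes out real and nonnegative, consistent with the PSD properties below.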
a. S_X(0) = ∫_{-∞}^{∞} R_X(τ) dτ.    (1.44)
b. E[X²(t)] = ∫_{-∞}^{∞} S_X(f) df.    (1.45)
c. If X(t) is stationary, S_X(f) ≥ 0 for all f.    (1.46)
d. S_X(-f) = ∫_{-∞}^{∞} R_X(τ) exp(j2πfτ) dτ = ∫_{-∞}^{∞} R_X(-u) exp(-j2πfu) du = S_X(f),    (1.47)
   since R_X(τ) = R_X(-τ).
e. The PSD can be associated with a pdf:
   p_X(f) = S_X(f) / ∫_{-∞}^{∞} S_X(f) df.    (1.48)
Example 1.5 Sinusoidal Wave with Random Phase
X(t) = A cos(2π f_c t + Θ),  Θ ~ U(-π, π).
From Example 1.2, R_X(τ) = (A²/2) cos(2π f_c τ), so
S_X(f) = ∫_{-∞}^{∞} R_X(τ) exp(-j2πfτ) dτ
       = (A²/4) ∫_{-∞}^{∞} [exp(j2π f_c τ) + exp(-j2π f_c τ)] exp(-j2πfτ) dτ
       = (A²/4) [δ(f - f_c) + δ(f + f_c)],
using ∫ exp(-j2π(f ∓ f_c)τ) dτ = δ(f ∓ f_c) (Appendix 2).
A spectrum analyzer cannot detect the phase, so the phase information is lost.
Example 1.6 Random Binary Wave (Example 1.3)
X(t) takes the value +A (binary 1) or -A (binary 0).
R_X(τ) = A² (1 - |τ|/T),  |τ| < T;  0,  |τ| ≥ T,
S_X(f) = ∫_{-T}^{T} A² (1 - |τ|/T) exp(-j2πfτ) dτ = A² T sinc²(fT).    (1.50)
Define the energy spectral density of the rectangular pulse g(t) as
ε_g(f) = A² T² sinc²(fT),    (1.51)
so that
S_X(f) = ε_g(f) / T.    (1.52)
Example 1.7 Mixing of a Random Process with a Sinusoidal Process
Y(t) = X(t) cos(2π f_c t + Θ),  Θ ~ U(0, 2π).    (1.53)
R_Y(τ) = E[Y(t + τ)Y(t)]
       = E[X(t + τ)X(t)] E[cos(2π f_c t + 2π f_c τ + Θ) cos(2π f_c t + Θ)]
       = (1/2) R_X(τ) E[cos(2π f_c τ) + cos(4π f_c t + 2π f_c τ + 2Θ)]
       = (1/2) R_X(τ) cos(2π f_c τ).    (1.54)
S_Y(f) = ∫_{-∞}^{∞} R_Y(τ) exp(-j2πfτ) dτ
       = (1/4) ∫_{-∞}^{∞} R_X(τ) [exp(-j2π(f - f_c)τ) + exp(-j2π(f + f_c)τ)] dτ
       = (1/4) [S_X(f - f_c) + S_X(f + f_c)].    (1.55)
We shift S_X(f) to the right by f_c, shift it to the left by f_c, add the two, and divide by 4.
Relation Between the PSDs of the Input and Output Random Processes
Recall (1.32):
R_Y(τ) = ∫∫ h(τ_1) h(τ_2) R_X(τ - τ_1 + τ_2) dτ_1 dτ_2.
S_Y(f) = ∫∫∫ h(τ_1) h(τ_2) R_X(τ - τ_1 + τ_2) exp(-j2πfτ) dτ_1 dτ_2 dτ.
Let τ_0 = τ - τ_1 + τ_2, i.e. τ = τ_0 + τ_1 - τ_2:
S_Y(f) = ∫∫∫ h(τ_1) h(τ_2) R_X(τ_0) exp(-j2πfτ_0) exp(-j2πfτ_1) exp(j2πfτ_2) dτ_1 dτ_2 dτ_0
       = H(f) H*(f) S_X(f)
       = |H(f)|² S_X(f).    (1.58)
Schematically:  X(t), S_X(f) → h(t) → Y(t), S_Y(f).
Relation Between the PSD and the Magnitude Spectrum of a Sample Function
Let x(t) be a sample function of a stationary, ergodic process X(t). In general, the condition for x(t) to be Fourier transformable is
∫_{-∞}^{∞} |x(t)| dt < ∞.    (1.59)
This condition can never be satisfied by any stationary x(t) of infinite duration. We may instead use the truncated transform
X(f, T) = ∫_{-T}^{T} x(t) exp(-j2πft) dt.    (1.60)
By ergodicity, take the time average
R_X(τ) = lim_{T→∞} (1/2T) ∫_{-T}^{T} x(t + τ) x(t) dt.    (1.61)
If x(t) is a power signal (finite average power), the time-averaged autocorrelation corresponds to the periodogram:
(1/2T) ∫_{-T}^{T} x(t + τ) x(t) dt ↔ (1/2T) |X(f, T)|².    (1.62)
For fixed f, the periodogram is a random variable (it varies from one sample function to another).
Taking the inverse Fourier transform of the right side of (1.62),
(1/2T) ∫_{-T}^{T} x(t + τ) x(t) dt = ∫_{-∞}^{∞} (1/2T) |X(f, T)|² exp(j2πfτ) df.    (1.63)
From (1.61) and (1.63),
R_X(τ) = lim_{T→∞} ∫_{-∞}^{∞} (1/2T) |X(f, T)|² exp(j2πfτ) df.    (1.64)
Note that for any given x(t) the periodogram does not converge as T → ∞. Since x(t) is ergodic,
R_X(τ) = E[R_X(τ)] = lim_{T→∞} ∫_{-∞}^{∞} (1/2T) E[|X(f, T)|²] exp(j2πfτ) df.    (1.66)
Recalling (1.43), R_X(τ) = ∫ S_X(f) exp(j2πfτ) df, we identify
S_X(f) = lim_{T→∞} (1/2T) E[|X(f, T)|²]
       = lim_{T→∞} (1/2T) E[|∫_{-T}^{T} x(t) exp(-j2πft) dt|²].    (1.67)
Equation (1.67) is used to estimate the PSD of x(t).
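Estimator (1.67) can be sketched in discrete time: average the periodogram (1/2T)|X(f,T)|² over independent realizations. For white noise of variance N_0/2 per sample, the estimate should be flat at N_0/2 (the record length and trial count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(6)
N0_over_2 = 1.0                        # target two-sided PSD level
n, dt, trials = 1024, 1.0, 400         # record length, sample spacing, realizations

psd_acc = np.zeros(n)
for _ in range(trials):
    x = rng.normal(0.0, np.sqrt(N0_over_2), size=n)
    X = np.fft.fft(x) * dt             # discrete analogue of X(f, T)
    psd_acc += np.abs(X)**2 / (n * dt) # periodogram |X|^2 / (2T), with 2T = n*dt
psd = psd_acc / trials                 # averaged periodogram: PSD estimate
print(psd.mean())
```

Averaging over realizations is essential: a single periodogram does not converge, exactly as noted above.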
Cross-Spectral Densities
S_XY(f) = ∫_{-∞}^{∞} R_XY(τ) exp(-j2πfτ) dτ    (1.68)
S_YX(f) = ∫_{-∞}^{∞} R_YX(τ) exp(-j2πfτ) dτ    (1.69)
S_XY(f) and S_YX(f) may not be real.
R_XY(τ) = ∫_{-∞}^{∞} S_XY(f) exp(j2πfτ) df
R_YX(τ) = ∫_{-∞}^{∞} S_YX(f) exp(j2πfτ) df
From (1.22), R_XY(τ) = R_YX(-τ), hence
S_XY(f) = S_YX(-f) = S*_YX(f).    (1.72)
Example 1.8 X(t) and Y(t) are uncorrelated, zero-mean stationary processes. Consider
Z(t) = X(t) + Y(t);  then
S_Z(f) = S_X(f) + S_Y(f).    (1.75)
Example 1.9 X(t) and Y(t) are jointly stationary; V(t) and Z(t) are the outputs of filters h_1(t) and h_2(t) driven by X(t) and Y(t), respectively.
R_VZ(t, u) = E[V(t)Z(u)]
           = E[∫ h_1(τ_1) X(t - τ_1) dτ_1 ∫ h_2(τ_2) Y(u - τ_2) dτ_2]
           = ∫∫ h_1(τ_1) h_2(τ_2) R_XY(t - τ_1, u - τ_2) dτ_1 dτ_2.
Let τ = t - u:
R_VZ(τ) = ∫∫ h_1(τ_1) h_2(τ_2) R_XY(τ - τ_1 + τ_2) dτ_1 dτ_2,    (1.77)
and by Fourier transform,
S_VZ(f) = H_1(f) H_2*(f) S_XY(f).
1.8 Gaussian Process
Define Y as a linear functional of X(t):
Y = ∫_0^T g(t) X(t) dt    (1.79)
(g(t) is some function for which the integral exists, e.g. g(t) = δ(t)).
The process X(t) is a Gaussian process if every linear functional of X(t) is a Gaussian random variable:
f_Y(y) = 1/(√(2π) σ_Y) exp(-(y - μ_Y)²/(2σ_Y²)).    (1.80)
Normalized (μ_Y = 0, σ_Y = 1):
f_Y(y) = 1/√(2π) exp(-y²/2),  i.e. Y ~ N(0, 1).    (1.81)
Fig. 1.13 Normalized Gaussian distribution.
Central Limit Theorem
Let X_i, i = 1, 2, …, N be (a) statistically independent RVs that (b) have mean μ_X and variance σ_X², i.e. they are independently and identically distributed (i.i.d.).
Normalized: Y_i = (X_i - μ_X)/σ_X,  i = 1, 2, …, N.
Hence E[Y_i] = 0 and Var[Y_i] = 1.
Define V_N = (1/√N) Σ_{i=1}^{N} Y_i.
The central limit theorem: the probability distribution of V_N approaches N(0, 1) as N approaches infinity.
Note: for some random variables, the approximation is poor even when N is quite large.
Properties of a Gaussian Process
1. If a Gaussian process X(t) is applied to a stable linear filter, the output Y(t) is also Gaussian:
Y(t) = ∫_0^T h(t - τ) X(τ) dτ,  0 ≤ t < ∞.
Consider any linear functional of the output,
Z = ∫_0^T g_Y(t) Y(t) dt = ∫_0^T g_Y(t) ∫_0^T h(t - τ) X(τ) dτ dt
  = ∫_0^T [∫_0^T g_Y(t) h(t - τ) dt] X(τ) dτ
  = ∫_0^T g(τ) X(τ) dτ,  where g(τ) = ∫_0^T g_Y(t) h(t - τ) dt.
By definition (1.81), Z is a Gaussian random variable, so Y(t) is Gaussian:
X(t) Gaussian in → h(t) → Y(t) Gaussian out.
2. If X(t) is Gaussian, then X(t_1), X(t_2), …, X(t_n) are jointly Gaussian.
Let μ_{X(t_i)} = E[X(t_i)], i = 1, 2, …, n, and let the set of covariance functions be
C_X(t_k, t_i) = E[(X(t_k) - μ_{X(t_k)})(X(t_i) - μ_{X(t_i)})],  k, i = 1, 2, …, n.
Then, with x = (x_1, …, x_n)^T,
f_{X(t_1),…,X(t_n)}(x_1, …, x_n) = 1/((2π)^{n/2} Δ^{1/2}) exp(-(1/2)(x - μ)^T Σ^{-1} (x - μ)),    (1.85)
where μ = (μ_{X(t_1)}, …, μ_{X(t_n)})^T is the mean vector, Σ = {C_X(t_k, t_i)}_{k,i=1}^{n} is the covariance matrix, and Δ is the determinant of the covariance matrix.
Supplemental Material
3. If a Gaussian process is stationary, then it is strictly stationary. (This follows from Property 2.)
4. If X(t_1), X(t_2), …, X(t_n) are uncorrelated, i.e.
E[(X(t_k) - μ_{X(t_k)})(X(t_i) - μ_{X(t_i)})] = 0,  k ≠ i,
then they are independent.
Proof: uncorrelated means the covariance matrix is diagonal,
Σ = diag(σ_1², σ_2², …, σ_n²),  where σ_i² = E[(X(t_i) - E[X(t_i)])²],  i = 1, 2, …, n.
Then Σ^{-1} is also diagonal and Δ = σ_1² σ_2² ⋯ σ_n², so (1.85) factors:
f_{X(t_1),…,X(t_n)}(x_1, …, x_n) = Π_{i=1}^{n} f_{X(t_i)}(x_i),
where f_{X(t_i)}(x_i) = 1/(√(2π) σ_i) exp(-(x_i - μ_i)²/(2σ_i²)),
which is the definition of independence.
1.9 Noise
Shot noise
Thermal noise:
E[V_TN²] = 4kTRΔf  volts²,
E[I_TN²] = (1/R²) E[V_TN²] = 4kTGΔf  amps²,
where k is Boltzmann's constant = 1.38 × 10⁻²³ joules/K, T is the absolute temperature in kelvin, R is the resistance, G = 1/R is the conductance, and Δf is the bandwidth.
White noise:
S_W(f) = N_0/2,    (1.93)
N_0 = kT_e,    (1.94)
where T_e is the equivalent noise temperature of the receiver.
R_W(τ) = ∫_{-∞}^{∞} S_W(f) exp(j2πfτ) df = (N_0/2) δ(τ).    (1.95)
Example 1.10 Ideal Low-Pass Filtered White Noise
S_N(f) = N_0/2,  -B < f < B;  0,  |f| > B.    (1.96)
R_N(τ) = ∫_{-B}^{B} (N_0/2) exp(j2πfτ) df = N_0 B sinc(2Bτ).    (1.97)
Example 1.11 Correlation of White Noise with a Sinusoidal Wave
w'(T) = √(2/T) ∫_0^T w(t) cos(2π f_c t) dt,  where f_c = k/T, k an integer.    (1.98)
The variance of w'(T) is
E[w'²(T)] = (2/T) ∫_0^T ∫_0^T E[w(t_1)w(t_2)] cos(2π f_c t_1) cos(2π f_c t_2) dt_1 dt_2
          = (2/T) ∫_0^T ∫_0^T (N_0/2) δ(t_1 - t_2) cos(2π f_c t_1) cos(2π f_c t_2) dt_1 dt_2   (from (1.95))
          = (N_0/T) ∫_0^T cos²(2π f_c t) dt
          = N_0/2.    (1.99)
1.10 Narrowband Noise (NBN)
Two representations:
a. in-phase and quadrature components (cos(2π f_c t), sin(2π f_c t));
b. envelope and phase.
1.11 In-Phase and Quadrature Representation
n(t) = n_I(t) cos(2π f_c t) - n_Q(t) sin(2π f_c t),    (1.100)
where n_I(t) and n_Q(t) are low-pass signals (sample functions).
Important Properties
1. n_I(t) and n_Q(t) have zero mean.
2. If n(t) is Gaussian, then n_I(t) and n_Q(t) are jointly Gaussian.
3. If n(t) is stationary, then n_I(t) and n_Q(t) are jointly stationary.
4. S_{N_I}(f) = S_{N_Q}(f) = S_N(f - f_c) + S_N(f + f_c),  -B ≤ f ≤ B;  0, otherwise.    (1.101)
5. n_I(t) and n_Q(t) have the same variance as n(t).
6. The cross-spectral density is purely imaginary (Problem 1.28):
S_{N_I N_Q}(f) = -S_{N_Q N_I}(f) = j[S_N(f + f_c) - S_N(f - f_c)],  -B ≤ f ≤ B;  0, otherwise.    (1.102)
7. If n(t) is Gaussian and its PSD is symmetric about f_c, then n_I(t) and n_Q(t) are statistically independent (Problem 1.29).
Supplement
Proofs of Eqs. (1.101) and (1.102) follow from Figure 1.19(a).
Example 1.12 Ideal Band-Pass Filtered White Noise
R_N(τ) = (N_0/2) ∫_{-f_c-B}^{-f_c+B} exp(j2πfτ) df + (N_0/2) ∫_{f_c-B}^{f_c+B} exp(j2πfτ) df
       = N_0 B sinc(2Bτ) [exp(-j2π f_c τ) + exp(j2π f_c τ)]
       = 2N_0 B sinc(2Bτ) cos(2π f_c τ).    (1.103)
Compare with (1.97), the low-pass filtered case (a factor of 2 apart):
R_{N_I}(τ) = R_{N_Q}(τ) = 2N_0 B sinc(2Bτ).
1.12 Representation in Terms of Envelope and Phase Components
n(t) = r(t) cos(2π f_c t + ψ(t)).    (1.105)
Envelope: r(t) = [n_I²(t) + n_Q²(t)]^{1/2}.    (1.106)
Phase: ψ(t) = tan⁻¹(n_Q(t)/n_I(t)).    (1.107)
Let N_I and N_Q be RVs obtained (at some fixed time) from n_I(t) and n_Q(t). N_I and N_Q are independent Gaussian RVs with zero mean and variance σ².
f_{N_I,N_Q}(n_I, n_Q) = 1/(2πσ²) exp(-(n_I² + n_Q²)/(2σ²)).    (1.108)
f_{N_I,N_Q}(n_I, n_Q) dn_I dn_Q = 1/(2πσ²) exp(-(n_I² + n_Q²)/(2σ²)) dn_I dn_Q.    (1.109)
Let
n_I = r cos ψ,    (1.110)
n_Q = r sin ψ,    (1.111)
dn_I dn_Q = r dr dψ.    (1.112)
Substituting (1.110)-(1.112) into (1.109),
f_{N_I,N_Q}(n_I, n_Q) dn_I dn_Q = f_{R,Ψ}(r, ψ) dr dψ = (r/(2πσ²)) exp(-r²/(2σ²)) dr dψ,
f_{R,Ψ}(r, ψ) = (r/(2πσ²)) exp(-r²/(2σ²)),  0 ≤ r < ∞, 0 ≤ ψ ≤ 2π;  0, elsewhere.    (1.113)
Integrating out r,
f_Ψ(ψ) = 1/(2π),  0 ≤ ψ ≤ 2π;  0, elsewhere.    (1.114)
Integrating out ψ,
f_R(r) = (r/σ²) exp(-r²/(2σ²)),  r ≥ 0;  0, elsewhere.    (1.115)
f_R(r) is the Rayleigh distribution.
For convenience, let v = r/σ (normalized); then f_V(v) = σ f_R(r) and
f_V(v) = v exp(-v²/2),  v ≥ 0;  0, elsewhere.    (1.118)
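The Rayleigh result can be confirmed by drawing independent zero-mean Gaussians for N_I and N_Q and checking the envelope's moments against the Rayleigh values E[R] = σ√(π/2) and E[R²] = 2σ² (standard facts, not derived in the text):

```python
import numpy as np

rng = np.random.default_rng(8)
sigma = 1.5
n = 500_000

nI = rng.normal(0.0, sigma, size=n)       # in-phase component samples
nQ = rng.normal(0.0, sigma, size=n)       # quadrature component samples
r = np.sqrt(nI**2 + nQ**2)                # envelope, Eq. (1.106)

print(r.mean(), sigma * np.sqrt(np.pi / 2))
print((r**2).mean(), 2 * sigma**2)
```

A histogram of r/σ would likewise track the normalized density (1.118).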
Figure 1.22 Normalized Rayleigh distribution.
1.13 Sine Wave Plus Narrowband Noise
x(t) = A cos(2π f_c t) + n(t)    (1.119)
     = [A + n_I(t)] cos(2π f_c t) - n_Q(t) sin(2π f_c t).
Let n_I'(t) = A + n_I(t).
If n(t) is Gaussian with zero mean and variance σ²:
1. n_I'(t) and n_Q(t) are Gaussian and statistically independent.
2. The mean of n_I'(t) is A and that of n_Q(t) is zero.
3. The variance of both n_I'(t) and n_Q(t) is σ².
f_{N_I',N_Q}(n_I', n_Q) = 1/(2πσ²) exp(-[(n_I' - A)² + n_Q²]/(2σ²)).    (1.123)
Let
r(t) = [n_I'²(t) + n_Q²(t)]^{1/2},
ψ(t) = tan⁻¹(n_Q(t)/n_I'(t)).
Following a procedure similar to that of Section 1.12, we have
f_{R,Ψ}(r, ψ) = (r/(2πσ²)) exp(-(r² + A² - 2Ar cos ψ)/(2σ²)).    (1.124)
R and Ψ are now dependent.
f_R(r) = ∫_0^{2π} f_{R,Ψ}(r, ψ) dψ
       = (r/(2πσ²)) exp(-(r² + A²)/(2σ²)) ∫_0^{2π} exp((Ar/σ²) cos ψ) dψ.    (1.126)
The modified Bessel function of the first kind of zero order is defined as (Appendix 3)
I_0(x) = (1/2π) ∫_0^{2π} exp(x cos ψ) dψ.    (1.127)
Let x = Ar/σ²; then
f_R(r) = (r/σ²) exp(-(r² + A²)/(2σ²)) I_0(Ar/σ²).    (1.128)
This is called the Rician distribution.
If A = 0, I_0(0) = (1/2π) ∫_0^{2π} dψ = 1, and it reduces to the Rayleigh distribution.
Normalized: let v = r/σ and a = A/σ; then
f_V(v) = σ f_R(r) = v exp(-(v² + a²)/2) I_0(av).    (1.132)
Figure 1.23 Normalized Rician distribution.
