
Review of Random Variables and Random Processes
Uniform PDF
The uniform PDF is

    p(x) = \frac{1}{b - a}, \quad a \le x \le b

with mean and variance

    E[X] = \frac{a + b}{2}, \qquad \mathrm{Var}[X] = E[(X - E[X])^2] = \frac{(b - a)^2}{12}

[Figure: p(x) is a rectangle of height 1/(b - a) between x = a and x = b.]
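As a quick numerical sanity check of these moments, here is a minimal sketch with assumed endpoints a = 2 and b = 5 (illustrative values only, not from the slides):

```python
# Check E[X] = (a+b)/2 and Var[X] = (b-a)^2/12 for a uniform RV.
import numpy as np

rng = np.random.default_rng(0)
a, b = 2.0, 5.0
x = rng.uniform(a, b, size=1_000_000)

print(x.mean(), (a + b) / 2)          # sample mean vs (a+b)/2
print(x.var(), (b - a) ** 2 / 12)     # sample variance vs (b-a)^2/12
```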
Gaussian PDF
The Gaussian PDF is

    p(x) = \frac{1}{\sqrt{2\pi\sigma^2}} \, e^{-(x - \mu)^2 / 2\sigma^2}, \quad -\infty \le x \le \infty

with mean and variance

    E[X] = \mu, \qquad \mathrm{Var}[X] = E[(X - E[X])^2] = \sigma^2

[Figure: bell-shaped Gaussian density p(x) plotted for -5 ≤ x ≤ 5.]
The nth-order central moment of a Gaussian random variable is

    E[(X - \mu)^n] =
    \begin{cases}
    1 \cdot 3 \cdot 5 \cdots (n - 1)\,\sigma^n, & n \text{ even} \\
    0, & n \text{ odd}
    \end{cases}
If \mu = 0 and \sigma = 1, then the PDF is termed a standard normal PDF:

    p(x) = \frac{1}{\sqrt{2\pi}} \, e^{-x^2/2}, \quad -\infty \le x \le \infty

Notation: N(\mu, \sigma^2)
Standard Normal PDF - N(0,1)
Cumulative distribution function:

    \Phi(x) = \int_{-\infty}^{x} \frac{1}{\sqrt{2\pi}} \, e^{-t^2/2} \, dt

Right-tail probability (the probability of exceeding a value):

    Q(x) = \int_{x}^{\infty} \frac{1}{\sqrt{2\pi}} \, e^{-t^2/2} \, dt

Q(x) = 1 - \Phi(x); Q(x) is also referred to as the complementary cumulative distribution function.

Q(x) can be approximated as

    Q(x) \approx \frac{1}{x\sqrt{2\pi}} \, e^{-x^2/2}
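A minimal sketch comparing this approximation against the exact value, using the standard identity Q(x) = erfc(x/√2)/2; consistent with the plots later in this section, the two agree closely once x is large:

```python
# Exact right-tail probability vs. the large-x approximation.
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))   # exact Q(x)

def Q_approx(x):
    # Q(x) ~ exp(-x^2/2) / (x*sqrt(2*pi)) for large x
    return math.exp(-x * x / 2.0) / (x * math.sqrt(2.0 * math.pi))

for x in (1.0, 2.0, 3.0, 4.0, 5.0):
    print(f"x={x}: Q={Q(x):.3e}, approx={Q_approx(x):.3e}")
```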
Cumulative Probability Function \Phi(x)

[Figure: \Phi(x) plotted for -5 ≤ x ≤ 5 on a linear scale, and for 0 ≤ x ≤ 5 on a logarithmic scale.]
Right Tail Probability Function Q(x)

[Figure: Q(x) plotted for -5 ≤ x ≤ 5 on linear and logarithmic scales.]
Actual Q(x) and its Approximation

[Figure: Q(x) and its approximation for 0.5 ≤ x ≤ 5 on a logarithmic scale; the approximation is accurate for x > 4.]
Use of right tail probability
The right-tail probability can be used to determine whether a random variable has a PDF that is approximately Gaussian: if the RV has a Gaussian PDF, the following diagram is a straight line; otherwise it is not.

[Figure: probability plot of sample data for -3 ≤ x ≤ 3, with the probability axis running from 0.1 to 0.9.]
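A minimal sketch of this check, assuming scipy is available: scipy.stats.probplot builds this kind of normal probability plot, and the returned correlation coefficient r quantifies how straight the line is.

```python
# Gaussianity check: if the data are Gaussian, the ordered samples plotted
# against standard-normal quantiles fall close to a straight line.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = rng.normal(loc=0.0, scale=1.0, size=500)   # assumed test data

(quantiles, ordered), (slope, intercept, r) = stats.probplot(data, dist="norm")
print(f"straight-line fit: r = {r:.4f}")          # r close to 1 => approximately Gaussian
```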
Multivariate Gaussian PDF
The multivariate Gaussian PDF of an n×1 random vector x is

    p(x) = \frac{1}{(2\pi)^{n/2} \det^{1/2}(C)} \, e^{-\frac{1}{2}(x - \mu)^T C^{-1} (x - \mu)}

where C is the covariance matrix of size n×n and \mu is the vector of means.

The covariance matrix is

    C_{ij} = E[(x_i - E(x_i))(x_j - E(x_j))], \quad i = 1, 2, \ldots, n; \; j = 1, 2, \ldots, n

that is, C = E[(x - E(x))(x - E(x))^T].

When \mu = 0,

    E(x_i x_j x_k x_l) = E(x_i x_j) E(x_k x_l) + E(x_i x_k) E(x_j x_l) + E(x_i x_l) E(x_j x_k)
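The sketch below evaluates this density for an assumed 2×1 example (the mean vector and covariance matrix are illustrative, not from the slides), comparing scipy's built-in multivariate normal against a direct evaluation of the formula above:

```python
# Evaluate the multivariate Gaussian PDF two ways and compare.
import numpy as np
from scipy.stats import multivariate_normal

mu = np.array([0.0, 1.0])                 # assumed mean vector
C = np.array([[2.0, 0.5],
              [0.5, 1.0]])                # assumed covariance matrix

rv = multivariate_normal(mean=mu, cov=C)
x = np.array([0.5, 0.5])

# Direct evaluation of the formula, for comparison.
d = x - mu
quad = d @ np.linalg.inv(C) @ d
direct = np.exp(-0.5 * quad) / ((2 * np.pi) ** (len(mu) / 2) * np.sqrt(np.linalg.det(C)))

print(rv.pdf(x), direct)                  # the two values agree
```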
Covariance Matrix Calculation: Example

Given three random variables X1, X2 and X3 taking the values shown below:

    X1   X2   X3
    -1    1    2
    -2    3    1
     4    0    3

Compute the covariance matrix, assuming that all three RVs are distributed according to a uniform PDF.

Covariance Matrix Calculation: Example (Contd.)

    \mathrm{cov}[X_i, X_j] = E[(X_i - \mu_{X_i})(X_j - \mu_{X_j})]

Covariance matrix =

    \begin{pmatrix}
    \mathrm{cov}(X_1, X_1) & \mathrm{cov}(X_1, X_2) & \mathrm{cov}(X_1, X_3) \\
    \mathrm{cov}(X_2, X_1) & \mathrm{cov}(X_2, X_2) & \mathrm{cov}(X_2, X_3) \\
    \mathrm{cov}(X_3, X_1) & \mathrm{cov}(X_3, X_2) & \mathrm{cov}(X_3, X_3)
    \end{pmatrix}

Correlation coefficient matrix =

    \begin{pmatrix}
    \mathrm{cov}(X_1, X_1)/\sigma_{X_1}\sigma_{X_1} & \mathrm{cov}(X_1, X_2)/\sigma_{X_1}\sigma_{X_2} & \mathrm{cov}(X_1, X_3)/\sigma_{X_1}\sigma_{X_3} \\
    \mathrm{cov}(X_2, X_1)/\sigma_{X_2}\sigma_{X_1} & \mathrm{cov}(X_2, X_2)/\sigma_{X_2}\sigma_{X_2} & \mathrm{cov}(X_2, X_3)/\sigma_{X_2}\sigma_{X_3} \\
    \mathrm{cov}(X_3, X_1)/\sigma_{X_3}\sigma_{X_1} & \mathrm{cov}(X_3, X_2)/\sigma_{X_3}\sigma_{X_2} & \mathrm{cov}(X_3, X_3)/\sigma_{X_3}\sigma_{X_3}
    \end{pmatrix}
Covariance Matrix Calculation: Example (Contd.)

Covariance matrix =

    \begin{pmatrix}
     6.88 & -2.78 &  2    \\
    -2.78 &  1.56 & -1    \\
     2    & -1    &  0.67
    \end{pmatrix}

Correlation coefficient matrix =

    \begin{pmatrix}
     1       & -0.8486 &  0.9333 \\
    -0.8486  &  1      & -0.9819 \\
     0.9333  & -0.9819 &  1
    \end{pmatrix}

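A quick way to reproduce these matrices with numpy (a sketch; ddof=0 selects the population normalization implied by the uniform-over-observed-values assumption):

```python
# Reproduce the covariance and correlation matrices of the example.
import numpy as np

data = np.array([[-1, 1, 2],
                 [-2, 3, 1],
                 [ 4, 0, 3]], dtype=float)    # rows: observations of (X1, X2, X3)

C = np.cov(data, rowvar=False, ddof=0)        # population covariance matrix
rho = np.corrcoef(data, rowvar=False)         # correlation coefficient matrix

print(np.round(C, 2))
print(np.round(rho, 4))
```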
Central Limit Theorem

Provides mathematical justification for using a Gaussian process as a model for a large number of different physical phenomena.

Suppose X represents the sum of N statistically independent random variables; then the pdf of X approaches Gaussian as N tends to infinity.

CLT An Illustration

Assume the following distribution for a die:

[Figure: pmf p(x) over the faces x = 1, ..., 6, with the probability of getting two and five dots equal to 0.]

CLT An Illustration (Contd.)

Consider samples of size 4:

    S1 = [1 1 5 4], mean = 2.75
    S2 = [3 1 6 1], mean = 2.75
    S3 = [3 1 6 4], mean = 3.5, etc.

Plot the frequency distribution of these sample means; it resembles the normal distribution. Further, increasing the sample size (> 4) results in closer resemblance to the normal distribution.
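A simulation sketch of this illustration, assuming (to match the pmf above) a die on which faces 2 and 5 never occur and the remaining faces are equally likely; the spread of the sample mean shrinks like 1/√n and its histogram becomes increasingly bell-shaped:

```python
# CLT illustration: distribution of sample means of die rolls.
import numpy as np

rng = np.random.default_rng(2)
faces = np.array([1, 3, 4, 6])        # assumed pmf: P(2) = P(5) = 0
probs = np.full(4, 0.25)              # remaining faces equally likely (assumption)

for n in (4, 16, 64):
    means = rng.choice(faces, size=(100_000, n), p=probs).mean(axis=1)
    print(f"n={n:3d}: mean of means = {means.mean():.3f}, std = {means.std():.3f}")
```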

Random Processes
Random Variable: the outcome of a random experiment is mapped to a number.

Random Process: the outcome of a random experiment is mapped to a waveform that is a function of time.

The sample space or ensemble comprised of functions of time is called a random or stochastic process, denoted by X(t,s) or X(t).
Random Processes (Contd.)

Each sample point s is a function of time.

For a fixed sample point s_j, the graph of the function X(t, s_j) versus time is called a realization or sample function x_j(t) of the random process.

The set of sample-function values at a fixed time t_k constitutes a RV.
Random Process: An Example

Experiment: the tossing of a fair coin. The random process X(t) is defined as

    X(t) = sin(t) if a head shows up
    X(t) = 2t if a tail shows up

There are two sample functions: x_1(t) and x_2(t).

Random (Stochastic) Process: Another Example

The experiment is repeated: a throw of 6 dice every day. Min value = 6; max value = 36. The number of sample functions is 6, corresponding to the 6 dice. Observing each die at time t gives a RV.

[Figure: bar chart of the observed values over 16 days.]
Stationarity

Consider a RP X(t) initiated at t = -∞. Let X(t_1), X(t_2), ..., X(t_k) denote the RVs obtained by observing the RP X(t) at times t_1, t_2, ..., t_k respectively; the joint distribution function of this set of RVs is F_{X(t_1), X(t_2), ..., X(t_k)}(x_1, x_2, ..., x_k).

If all the observation times t_1, t_2, ..., t_k are shifted by a fixed time τ to obtain a new set of RVs X(t_1 + τ), X(t_2 + τ), ..., X(t_k + τ), then the joint distribution function of this new set of RVs is F_{X(t_1+τ), X(t_2+τ), ..., X(t_k+τ)}(x_1, x_2, ..., x_k).

The RP X(t) is said to be stationary in the strict sense if

    F_{X(t_1+τ), X(t_2+τ), ..., X(t_k+τ)}(x_1, x_2, ..., x_k) = F_{X(t_1), X(t_2), ..., X(t_k)}(x_1, x_2, ..., x_k)

for all time shifts τ, all k, and all possible choices of observation times t_1, t_2, ..., t_k.

Formal Definition of Stationarity

A RP X(t), initiated at time t = -∞, is strictly stationary if the joint distribution of any set of RVs obtained by observing the RP X(t) is invariant with respect to the location of the origin t = 0.


Points to note, based on F_{X(t_1+τ), ..., X(t_k+τ)}(x_1, ..., x_k) = F_{X(t_1), ..., X(t_k)}(x_1, ..., x_k):

For k = 1, F_{X(t+τ)}(x) = F_{X(t)}(x) = F_X(x) for all t and τ: the first-order distribution function of a stationary RP is independent of time.

For k = 2 and τ = -t_1, we have F_{X(t_1), X(t_2)}(x_1, x_2) = F_{X(0), X(t_2 - t_1)}(x_1, x_2) for all t_1 and t_2: the second-order distribution function of a stationary RP depends only on the time difference between the observation times and not on the particular times at which the random process is observed.


Mean of a RP X(t)

The mean is

    \mu_X(t) = E[X(t)] = \int_{-\infty}^{\infty} x \, f_{X(t)}(x) \, dx

where f_{X(t)}(x) is the first-order probability density function of the process. For a stationary RP, f_{X(t)}(x) is independent of time t, so the mean of a stationary RP is a constant:

    \mu_X(t) = \mu_X for all t

(Recall: the mean of a RV X is E[X].)
Correlation of a RP X(t)

The autocorrelation function of the RP X(t) is given by

    R_X(t_1, t_2) = E[X(t_1) X(t_2)] = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} x_1 x_2 \, f_{X(t_1), X(t_2)}(x_1, x_2) \, dx_1 \, dx_2

where f_{X(t_1), X(t_2)}(x_1, x_2) is the second-order probability density function of the process.

Correlation corresponds to k = 2, where F_{X(t_1), X(t_2)}(x_1, x_2) = F_{X(0), X(t_2 - t_1)}(x_1, x_2). The autocorrelation function of a stationary random process therefore depends only on the time difference t_2 - t_1:

    R_X(t_1, t_2) = R_X(t_2 - t_1)

(Recall: the ACF of a RV X is E[XX] = E[X^2].)
Covariance of a stationary RP X(t)

The autocovariance function of a stationary RP X(t) is given by

    C_X(t_1, t_2) = E[(X(t_1) - \mu_X)(X(t_2) - \mu_X)]
                  = R_X(t_2 - t_1) - \mu_X^2

(Recall: the autocovariance of a RV is given by Cov[XX] = E[(X - \mu_X)(X - \mu_X)] = E[XX] - \mu_X^2.)
Few points to note

The autocovariance function of a stationary process X(t) depends only on the time difference t_2 - t_1. So, if we know the mean and ACF of the process, we can compute the autocovariance function.

However, the mean and ACF provide only a partial description of the distribution of a RP X(t). The conditions

    \mu_X(t) = \mu_X for all t, and R_X(t_1, t_2) = R_X(t_2 - t_1)

are not sufficient to guarantee that the random process X(t) is stationary.
Few points to note (Contd.)

A RP for which the above two conditions hold is said to be wide-sense stationary.

All strictly stationary random processes are wide-sense stationary, but not all wide-sense stationary processes are strictly stationary.


Problems

[Figure: sketches of three first-order densities f_{X1}(x), f_{X2}(x) and f_{X3}(x), each of height 1/2, with supports marked by sin(1) and 2, sin(2) and 4, and sin(·) and 2 respectively.]
Properties of ACF

The ACF of a stationary RP X(t) is

    R_X(τ) = E[X(t + τ) X(t)] for all t

Properties:

The mean-square value of the process may be obtained by putting τ = 0:

    R_X(0) = E[X^2(t)]

R_X(τ) is an even function of τ:

    R_X(τ) = R_X(-τ)

so R_X(τ) can also be defined as R_X(τ) = E[X(t) X(t - τ)].

R_X(τ) has its maximum magnitude at τ = 0:

    |R_X(τ)| ≤ R_X(0)
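A sketch that checks these properties numerically on an assumed discrete-time example (an AR(1) process, WSS in steady state), estimating the ACF by time-averaging one long realization:

```python
# Empirical check of the ACF properties on an assumed AR(1) process
# x[k] = 0.9*x[k-1] + w[k], generated by filtering white noise.
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(3)
w = rng.normal(size=200_000)
x = lfilter([1.0], [1.0, -0.9], w)   # one long AR(1) realization

def acf(x, lag):
    lag = abs(lag)                   # the time-average estimate is even in the lag
    return np.mean(x[:len(x) - lag] * x[lag:])

print(acf(x, 0), np.mean(x ** 2))                               # R_X(0) = E[X^2(t)]
print(max(abs(acf(x, k)) for k in range(1, 50)) <= acf(x, 0))   # |R_X(tau)| <= R_X(0)
```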
Physical Significance of ACF

The ACF provides a means of describing the interdependence of two RVs obtained by observing a RP X(t) at times τ seconds apart.

The more rapidly the RP X(t) changes with time, the more rapidly R_X(τ) decreases from its maximum R_X(0) as τ increases.


Cross-Correlation Functions

Consider two RPs X(t) and Y(t) with ACFs R_X(t, u) and R_Y(t, u) respectively. Two cross-correlation functions of X(t) and Y(t) are defined by

    R_{XY}(t, u) = E[X(t) Y(u)]  and  R_{YX}(t, u) = E[Y(t) X(u)]

where t and u are two values of time at which the two RPs are observed.

The correlation matrix for the two RPs X(t) and Y(t) is given by

    R(t, u) = \begin{pmatrix} R_X(t, u) & R_{XY}(t, u) \\ R_{YX}(t, u) & R_Y(t, u) \end{pmatrix}
Cross-correlation Functions (Contd.)

If the two RPs X(t) and Y(t) are each wide-sense stationary and, in addition, jointly wide-sense stationary, then the correlation matrix can be written as

    R(τ) = \begin{pmatrix} R_X(τ) & R_{XY}(τ) \\ R_{YX}(τ) & R_Y(τ) \end{pmatrix}

where τ = t - u. The cross-correlation functions of two RPs X(t) and Y(t) satisfy

    R_{XY}(τ) = R_{YX}(-τ)
Two different averages

Ensemble average (averaging across the process): E[X(t_k)] is the mean of a stochastic process X(t) at some time instant t_k, i.e., the expectation of the RV X(t_k) obtained by observing the RP at time instant t_k.

Time average (averaging along the process): the average taken across time.

Few points to note: can we relate them? Can we compute the ensemble average if we know the time average, or when can we substitute the time average for ensemble averages?
Time average of a sample function x(t): <x(t)>

Consider the sample function x(t) of a wide-sense stationary RP X(t), with the observation interval defined as -T ≤ t ≤ T.

The time-averaged mean of the sample function x(t) of a RP X(t) is defined as

    \langle x(t) \rangle = \frac{1}{2T} \int_{-T}^{T} x(t) \, dt

The value of the time average <x(t)> is viewed as a RV which takes different values depending on which sample function of the RP X(t) is picked to compute the time average.
Autocorrelation function of a sample function x(t): <x(t)x(t+τ)>

    \langle x(t) x(t + τ) \rangle = \frac{1}{2T} \int_{-T}^{T} x(t) \, x(t + τ) \, dt

View this autocorrelation function also as a RV with its own mean and variance.
Few points to note

For a stationary process, if the ensemble average turns out to be the same as the time average for all the moments (first order, second order, second-order central, etc.), then the stationary process is called an ergodic process; a numerical sketch follows below.

[Diagram: nested sets, from outermost to innermost: random processes ⊃ wide-sense stationary processes ⊃ stationary processes ⊃ ergodic processes.]
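The sketch below illustrates what ergodicity buys us, using an assumed AR(1) process with true mean 2.0 (ergodic, so the time average of one long realization matches the ensemble average over many realizations):

```python
# Time average vs. ensemble average for an assumed ergodic AR(1) process.
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(4)
true_mean = 2.0

def sample_path(n):
    # fluctuation around true_mean: x[k] - m = 0.5*(x[k-1] - m) + w[k]
    return true_mean + lfilter([1.0], [1.0, -0.5], rng.normal(size=n))

time_avg = sample_path(100_000).mean()                # one long realization
ensemble_avg = np.mean([sample_path(200)[-1] for _ in range(5_000)])  # many realizations, fixed time

print(time_avg, ensemble_avg)                         # both converge to 2.0
```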
Transmission of a RP through a linear filter

X(t) is the RP applied as input to an LTI filter with impulse response h(t); Y(t) is the RP at the filter output.

Compute the mean and autocorrelation functions of the output RP Y(t), assuming that X(t) is a wide-sense stationary RP.
Computation of \mu_Y

For the system with impulse response h(t) to be stable,

    \int_{-\infty}^{\infty} |h(\tau_1)| \, d\tau_1 < \infty

Then

    \mu_Y(t) = E[Y(t)] = E\left[\int_{-\infty}^{\infty} h(\tau_1) \, X(t - \tau_1) \, d\tau_1\right]

Assuming E[X(t)] is finite for all t and the system with impulse response h(t) is stable,

    \mu_Y(t) = \int_{-\infty}^{\infty} h(\tau_1) \, E[X(t - \tau_1)] \, d\tau_1 = \int_{-\infty}^{\infty} h(\tau_1) \, \mu_X(t - \tau_1) \, d\tau_1

Because X(t) is a wide-sense stationary RP, \mu_X(t - \tau_1) = \mu_X, so

    \mu_Y = \mu_X \int_{-\infty}^{\infty} h(\tau_1) \, d\tau_1 = \mu_X H(0)

where H(0) is the zero-frequency or DC response of the system.


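A discrete-time sketch of this result: filtering an input with constant mean μ_X through an assumed FIR filter and checking that the output mean equals μ_X times the DC gain (the sum of the taps):

```python
# Checks E[Y(t)] = mu_X * H(0) for a discrete-time analogue.
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(5)
mu_x = 3.0
x = mu_x + rng.normal(size=500_000)        # WSS input with mean mu_x
h = np.array([0.5, 0.3, 0.2, 0.1])         # assumed (stable) impulse response

y = lfilter(h, [1.0], x)
print(y[100:].mean(), mu_x * h.sum())      # output mean vs mu_X * H(0)
```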
Computation of R_Y(τ)

R_Y(t, u) = E[Y(t) Y(u)], where t and u are the two time instants at which the output RP is observed:

    R_Y(t, u) = E\left[\int h(\tau_1) \, X(t - \tau_1) \, d\tau_1 \int h(\tau_2) \, X(u - \tau_2) \, d\tau_2\right]

Provided that the mean-square value E[X^2(t)] is finite for all t and the system is stable,

    R_Y(t, u) = \int \int h(\tau_1) h(\tau_2) \, E[X(t - \tau_1) X(u - \tau_2)] \, d\tau_1 \, d\tau_2
              = \int \int h(\tau_1) h(\tau_2) \, R_X(t - \tau_1, u - \tau_2) \, d\tau_1 \, d\tau_2

When the input RP X(t) is WSS, R_X(t_1, t_2) = R_X(t_2 - t_1), and

    R_Y(τ) = \int \int h(\tau_1) h(\tau_2) \, R_X(τ - \tau_1 + \tau_2) \, d\tau_1 \, d\tau_2, \quad \text{where } τ = t - u
Computation of R_Y(τ) (Contd.)

Since R_Y(0) = E[Y^2(t)], the mean-square value of the RP Y(t) is obtained by substituting τ = 0:

    E[Y^2(t)] = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} h(\tau_1) h(\tau_2) \, R_X(\tau_2 - \tau_1) \, d\tau_1 \, d\tau_2

E[Y(t)] and E[Y^2(t)] are constants, so the output RP Y(t) is said to be WSS.
Power Spectral Density

Till now: wide-sense stationary random processes in the time domain. Now: characterization of a random process in the frequency domain.

The Fourier transform of the autocorrelation function of a wide-sense stationary random process X(t) is called the power spectral density or power spectrum:

    S_X(f) = \int_{-\infty}^{\infty} R_X(τ) \, e^{-j 2\pi f τ} \, dτ
Power Spectral Density (Contd.)

    S_X(f) = \int_{-\infty}^{\infty} R_X(τ) \, e^{-j 2\pi f τ} \, dτ

    R_X(τ) = \int_{-\infty}^{\infty} S_X(f) \, e^{j 2\pi f τ} \, df

These are the basic relations in the theory of spectral analysis of random processes, and together they are called the Einstein-Wiener-Khintchine relations.
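A numerical sketch of the forward relation for an assumed ACF R_X(τ) = e^{-|τ|}, whose transform has the known closed form S_X(f) = 2/(1 + (2πf)^2):

```python
# Numerically evaluate S_X(f) = FT{R_X(tau)} for R_X(tau) = exp(-|tau|)
# and compare with the closed form 2 / (1 + (2*pi*f)^2).
import numpy as np

dtau = 0.001
tau = np.arange(-50.0, 50.0, dtau)
R = np.exp(-np.abs(tau))

for f in (0.0, 0.1, 0.5):
    S_num = (np.sum(R * np.exp(-2j * np.pi * f * tau)) * dtau).real
    S_exact = 2.0 / (1.0 + (2.0 * np.pi * f) ** 2)
    print(f"f={f}: numeric {S_num:.4f} vs exact {S_exact:.4f}")
```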
PSD of a Sinusoidal-Wave Random Process

    X(t) = A \cos(2\pi f_c t + \Theta)

where \Theta is a uniformly distributed RV over the interval (-\pi, \pi).

Recall:

    R_X(τ) = \frac{A^2}{2} \cos(2\pi f_c τ)

Taking the Fourier transform on both sides,

    S_X(f) = \frac{A^2}{4} \left[ \delta(f - f_c) + \delta(f + f_c) \right]
Relationship among the PSDs of the I/P and O/P RPs

Let S_Y(f) denote the PSD of the output RP Y(t) obtained by passing the RP X(t) through a linear filter with transfer function H(f).

    E[Y^2(t)] = R_Y(0) = \int \int h(\tau_1) h(\tau_2) \, R_X(\tau_2 - \tau_1) \, d\tau_1 \, d\tau_2

    h(\tau_1) = \int_{-\infty}^{\infty} H(f) \, e^{j 2\pi f \tau_1} \, df, \qquad H(f) = \int_{-\infty}^{\infty} h(\tau_1) \, e^{-j 2\pi f \tau_1} \, d\tau_1
Relationship among the PSDs of the I/P and O/P RPs (Contd.)

Substituting h(\tau_1) by its inverse Fourier transform,

    E[Y^2(t)] = \int df \, H(f) \int d\tau_2 \, h(\tau_2) \int R_X(\tau_2 - \tau_1) \, e^{j 2\pi f \tau_1} \, d\tau_1

Let τ = \tau_2 - \tau_1; then

    E[Y^2(t)] = \int df \, H(f) \int d\tau_2 \, h(\tau_2) \, e^{j 2\pi f \tau_2} \int R_X(τ) \, e^{-j 2\pi f τ} \, dτ

    E[Y^2(t)] = R_Y(0) = \int H(f) H^*(f) \, S_X(f) \, df = \int |H(f)|^2 \, S_X(f) \, df

Recall: S_X(f) = \int R_X(τ) \, e^{-j 2\pi f τ} \, dτ
Relationship among the PSDs of the I/P and O/P RPs (Contd.)

    S_Y(f) = H(f) H^*(f) \, S_X(f) = |H(f)|^2 \, S_X(f)

The PSD of Y(t) is equal to the PSD of X(t) multiplied by the squared magnitude of the transfer function H(f).
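A sketch that verifies this relation numerically for an assumed FIR filter driven by white noise (for which S_X(f) is flat and equal to the noise variance), comparing a Welch estimate of S_Y(f) against |H(f)|^2 S_X(f):

```python
# Verify S_Y(f) = |H(f)|^2 * S_X(f) for white noise through an FIR filter.
import numpy as np
from scipy.signal import lfilter, welch, freqz

rng = np.random.default_rng(6)
sigma2 = 1.0
x = rng.normal(scale=np.sqrt(sigma2), size=1_000_000)  # white input: S_X(f) = sigma2
h = np.array([0.25, 0.5, 0.25])                        # assumed filter taps

y = lfilter(h, [1.0], x)
f, S_y = welch(y, fs=1.0, nperseg=4096, return_onesided=False)  # two-sided PSD
_, H = freqz(h, worN=2 * np.pi * f)                    # H(f) at the same frequencies

err = np.abs(S_y - np.abs(H) ** 2 * sigma2)
print(err.mean())   # small: the estimate matches |H(f)|^2 * S_X(f) up to estimation error
```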
