
Random Variable

119

F_X(∞) = 1

F_X(∞) = (a/b)(2 − e^(−b·∞)) = 2a/b

2a = b, so a/b = 1/2

(iii) P[1 < X ≤ 2] = F_X(2) − F_X(1)

= (1/2)(2 − e^(−2b)) − (1/2)(2 − e^(−b))

= (1/2)[2 − e^(−2b) − 2 + e^(−b)]

P[1 < X ≤ 2] = (1/2)[e^(−b) − e^(−2b)]
Example 2.19

If the probability density of a random variable is given by

f_X(x) = x        for 0 < x < 1
       = 2 − x    for 1 < x < 2

find the probabilities that the random variable having this probability density will take on a value
(i) between 0.2 and 0.8
(ii) between 0.6 and 1.2.
(Aug/Sep 2006)
Solution

Given

f_X(x) = x        for 0 < x < 1
       = 2 − x    for 1 < x < 2
       = 0        otherwise

(i) The probability between 0.2 and 0.8 is given by

P{0.2 < X < 0.8} = ∫ from 0.2 to 0.8 of f_X(x)dx = ∫ from 0.2 to 0.8 of x dx

= [x²/2] from 0.2 to 0.8 = (1/2)[0.8² − 0.2²] = (1/2)(0.64 − 0.04)

P{0.2 < X < 0.8} = 0.3

(ii) The probability between 0.6 and 1.2 is

P{0.6 < X < 1.2} = ∫ from 0.6 to 1 of x dx + ∫ from 1 to 1.2 of (2 − x)dx

= [x²/2] from 0.6 to 1 + [2x − x²/2] from 1 to 1.2

= (1 − 0.36)/2 + (2.4 − 0.72) − (2 − 0.5)

= 0.32 + 1.68 − 1.5

P{0.6 < X < 1.2} = 0.5
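Both probabilities are easy to cross-check numerically. The following is a minimal sketch (plain Python, no external libraries) that integrates the triangular pdf with a midpoint Riemann sum:

```python
def f(x):
    # Triangular pdf of Example 2.19
    if 0 < x < 1:
        return x
    if 1 <= x < 2:
        return 2 - x
    return 0.0

def integrate(a, b, n=100_000):
    # Midpoint Riemann sum of f over [a, b]
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

p1 = integrate(0.2, 0.8)   # closed form gives 0.3
p2 = integrate(0.6, 1.2)   # closed form gives 0.5
```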
Example 2.20

A continuous random variable X has the pdf f_X(x) = 3x², 0 < x < 1. Find a and b such that (i) P{X = a} = P{X > a} and (ii) P{X > b} = 0.05.

(Nov 2006)
Solution

Given the probability density function fx (x) = 3x2 , 0 < x < 1.


(i) P{X > a} = 1 − P{X ≤ a} = 1 − F_X(a)

= 1 − ∫ from 0 to a of 3x² dx = 1 − a³

Reading P{X = a} through the density, P{X = a} = f_X(a) = 3a²

Given that P{X = a} = P{X > a},

3a² = 1 − a³, i.e. a²(a + 3) = 1, which gives a ≈ 0.532

(ii) P{X > b} = 1 − P{X ≤ b} = 0.05

1 − F_X(b) = 0.05, where F_X(b) = ∫ from 0 to b of 3x² dx = b³

1 − b³ = 0.05

b³ = 0.95

b = (0.95)^(1/3) = 0.983
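As a quick numerical check of both answers (a minimal sketch using only the standard library), the cubic a²(a + 3) = 1 can be solved by bisection and b follows directly from 1 − b³ = 0.05:

```python
def bisect(g, lo, hi, tol=1e-12):
    # Simple bisection root finder; assumes g(lo) and g(hi) bracket a root
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# (i) 3a^2 = 1 - a^3  =>  a^2 (a + 3) - 1 = 0, with a root in (0, 1)
a = bisect(lambda x: x * x * (x + 3) - 1, 0.0, 1.0)   # ~0.532
# (ii) 1 - b^3 = 0.05  =>  b = 0.95 ** (1/3)
b = 0.95 ** (1 / 3)                                   # ~0.983
```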


Example 2.21

An analog signal received at the detector (measured in microvolts) may be modelled as a Gaussian random variable N(200, 256) at a fixed point in time. What is the probability that the signal is larger than 240 µV, given that it is larger than 210 µV?
(May 2005)
Solution

Given an analog signal modelled as a Gaussian random variable N(200, 256).

This indicates that the mean value m_X = 200 µV and the variance σ_X² = 256, i.e. σ_X = 16 µV.

The required probabilities are

P(X > 240) = 1 − P(X ≤ 240) = 1 − F((240 − 200)/16) = 1 − F(2.5) = Q(2.5)

Using the Q-function approximation,

Q(2.5) = (1/√(2π)) e^(−2.5²/2) / [(0.66)(2.5) + 0.34√(2.5² + 5.51)] = 0.0062

Similarly,

P(X > 210) = Q((210 − 200)/16) = Q(0.625) = 0.266

The probability that the signal is larger than 240 µV, given that it is larger than 210 µV, is

P(X > 240 | X > 210) = P(X > 240)/P(X > 210) = 0.0062/0.266 = 0.0233
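The conditional probability can be cross-checked against the exact Gaussian tail, Q(x) = erfc(x/√2)/2, instead of the rational approximation used in this book (a sketch, standard library only):

```python
import math

def Q(x):
    # Exact Gaussian tail probability Q(x) = P(N(0,1) > x)
    return 0.5 * math.erfc(x / math.sqrt(2))

mx, sx = 200.0, 16.0          # mean 200 uV, sigma = sqrt(256) = 16 uV
p240 = Q((240 - mx) / sx)     # P(X > 240) = Q(2.5)
p210 = Q((210 - mx) / sx)     # P(X > 210) = Q(0.625)
p_cond = p240 / p210          # P(X > 240 | X > 210)
```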

Example 2.22

The lifetime of IC chips manufactured by a semiconductor manufacturer is approximately normally distributed with mean 5 × 10^6 hours and standard deviation 5 × 10^6 hours. A mainframe manufacturer requires that at least 95% of a batch should have a lifetime greater than 4 × 10^6 hours. Will the deal be made?

(May 2005)

Solution

The lifetimes of the ICs are normally distributed, i.e. have a Gaussian distribution, with

Mean value µ_X = 5 × 10^6 hours
Standard deviation σ_X = 5 × 10^6 hours

The probability of a lifetime greater than 4 × 10^6 hours is

P{X > 4 × 10^6} = 1 − P{X ≤ 4 × 10^6} = 1 − F_X(4 × 10^6)

= 1 − F((4 × 10^6 − 5 × 10^6)/(5 × 10^6))

= 1 − F(−0.2) = 1 − (1 − F(0.2)) = F(0.2) = 1 − Q(0.2)

Using the Q-function approximation,

Q(0.2) = (1/√(2π)) e^(−0.2²/2) / [(0.66)(0.2) + 0.34√(0.2² + 5.51)] = 0.419

∴ P{X > 4 × 10^6} = 1 − 0.419 = 0.581

Since only 58.1% of a batch is expected to last longer than 4 × 10^6 hours, which is well below the required 95%, the deal will not be made.
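The same probability follows from the exact standard normal CDF via the error function (a minimal sketch; the small difference from 0.581 comes from the book's rational Q-approximation):

```python
import math

def phi(z):
    # Standard normal CDF via the error function
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

mu, sigma = 5e6, 5e6
p = 1 - phi((4e6 - mu) / sigma)   # P(lifetime > 4e6 hours) = 1 - F(-0.2)
```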

Example 2.23

For a Gaussian random variable with a = 0 and σ = 1, what are P(|X| > 2) and P(X > 2)?


(May 2011)

Solution

Given a Gaussian random variable X with mean value µ_X = 0 and spread σ = 1.

(i) The probability

P(|X| > 2) = P(X < −2) + P(X > 2)

Now P(X < −2) = F(−2) = 1 − F(2), and P(X > 2) = 1 − P(X ≤ 2) = 1 − F(2)

∴ P(|X| > 2) = 1 − F(2) + 1 − F(2) = 2(1 − F(2)) = 2Q(2)

Using the Q-function approximation,

Q(2) = (1/√(2π)) e^(−2²/2) / [0.66 × 2 + 0.34√(2² + 5.51)] = 0.0228

∴ P(|X| > 2) = 2 × 0.0228 = 0.0456

(ii) P(X > 2) = 1 − P(X < 2) = 1 − F(2) = Q(2)

∴ P(X > 2) = 0.0228

Example 2.24

A Rayleigh density function is given by

f(x) = x e^(−x²/2)    x ≥ 0
     = 0              x < 0

(a) Prove that f(x) satisfies the properties of a pdf: (i) f(x) ≥ 0 for all x and (ii) ∫ f(x)dx = 1.
(b) Find the distribution function F(x).
(c) Find P(0.5 ≤ X ≤ 2).
(d) Find P(0.5 < X < 2).
(Sept 2003)

Solution

(a) Given f_X(x) = x e^(−x²/2) for x ≥ 0, and 0 for x < 0.

(i) For x ≥ 0 both factors are non-negative, so f_X(x) ≥ 0 for all x.

(ii) Evaluating the integral for x ≥ 0, with the substitution t = x²/2, dt = x dx,

∫ f_X(x)dx = ∫ from 0 to ∞ of x e^(−x²/2) dx = ∫ from 0 to ∞ of e^(−t) dt = 1

Hence the given function f(x) satisfies the properties of a pdf.

(b) The distribution function is

F_X(x) = ∫ from 0 to x of u e^(−u²/2) du = 1 − e^(−x²/2) for x ≥ 0, and F_X(x) = 0 for x < 0.

(c) P(0.5 ≤ X ≤ 2) = F_X(2) − F_X(0.5)

= (1 − e^(−2)) − (1 − e^(−1/8)) = e^(−1/8) − e^(−2) = 0.8825 − 0.1353

P(0.5 ≤ X ≤ 2) = 0.7472

(d) Since X is a continuous random variable, P(X = 0.5) = P(X = 2) = 0, so the open interval has the same probability as the closed one:

P(0.5 < X < 2) = P(0.5 ≤ X ≤ 2) = 0.7472
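A short numerical sketch (standard library only) confirms both the normalisation of this Rayleigh pdf and the closed-form interval probability:

```python
import math

def F(x):
    # Rayleigh CDF for f(x) = x * exp(-x^2 / 2), x >= 0
    return 1 - math.exp(-x * x / 2) if x >= 0 else 0.0

p_closed = F(2) - F(0.5)      # = e^(-1/8) - e^(-2), about 0.7472

def integrand(x):
    return x * math.exp(-x * x / 2)

# Midpoint Riemann sum of the pdf over [0, 20] (the tail beyond 20 is negligible)
n, hi = 100_000, 20.0
h = hi / n
total = sum(integrand((k + 0.5) * h) for k in range(n)) * h
```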

Example 2.25

Assume that the number of automobiles arriving at a gasoline station is Poisson distributed, at an average rate of 50 per hour. The station has only one gasoline pump. If all cars are assumed to require one minute to obtain fuel, what is the probability that a waiting line will occur at the pump?
(Nov 2007)


Solution

Given that the number of automobiles arriving at the station is Poisson distributed, with

Average rate λ = 50/hour
Time duration T = 1 minute

so that b = λT = 50/60 = 5/6 arrivals per minute. A waiting line occurs if two or more cars arrive in any one-minute interval. The probability of a waiting line is

P{k ≥ 2} = 1 − P{k ≤ 1} = 1 − F_X(1)

But F_X(1) = e^(−b)[b⁰/0! + b¹/1!] = e^(−5/6)[1 + 5/6]

∴ P{k ≥ 2} = 1 − e^(−5/6)(1 + 5/6) = 0.2032

Waiting line occurs at about 20.32% of the time.
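The Poisson computation above takes one line to verify (a minimal sketch):

```python
import math

lam_per_min = 50 / 60    # b = lambda * T = 5/6 arrivals per one-minute service
# P{k >= 2} = 1 - P{k = 0} - P{k = 1}
p_wait = 1 - math.exp(-lam_per_min) * (1 + lam_per_min)
```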


Example 2.26

If a mass function is given by

P(x) = Ax            x = 1, 2, ..., 50
     = A(100 − x)    x = 51, 52, ..., 100
     = 0             otherwise

(i) Find A that makes the function a probability mass function and sketch its graph.
(ii) Find P(x > 50), P(x < 50), P(25 < x < 75), and P(x: odd numbers).
(iii) If the events indicated in (ii) are A, B, C, D respectively, find P(A | B), P(A | C), P(A | D), P(C | D). Are the pairs A, B; A, C; A, D; C, D independent events?
(Dec 2002)

Solution

Given the mass function of a random variable X is

f_X(x) = Ax            x = 1, 2, ..., 50
       = A(100 − x)    x = 51, 52, ..., 100
       = 0             otherwise

(i) If the function is a probability mass function, we know that Σ f_X(x) = 1:

Σ from x = 1 to 50 of Ax + Σ from x = 51 to 100 of A(100 − x) = 1

A[(1 + 2 + ... + 50) + (49 + 48 + ... + 1)] = 1

A(1275 + 1225) = 1    (since the sum of the first n numbers is n(n + 1)/2)

2500A = 1, so A = 1/2500

The mass function is

f_X(x) = x/2500            x = 1, 2, ..., 50
       = (100 − x)/2500    x = 51, 52, ..., 100
       = 0                 otherwise

The graph of the probability mass function is shown in Fig. 2.21.

Fig. 2.21 Probability mass function

(ii) (a) P(X > 50) = Σ from x = 51 to 100 of (100 − x)/2500 = (1/2500)[49 + 48 + ... + 1]

= (49 × 50)/(2 × 2500) = 49/100 = 0.49

(b) P(X < 50) = Σ from x = 1 to 49 of x/2500 = (1/2500)[1 + 2 + 3 + ... + 49]

= (1/2500) × (49 × 50)/2 = 49/100 = 0.49

(c) P(X = 50) = 50/2500 = 1/50 = 0.02

(d) P(25 < X < 75) = Σ from x = 26 to 50 of x/2500 + Σ from x = 51 to 74 of (100 − x)/2500

= (1/2500)[(26 + 27 + ... + 50) + (49 + 48 + ... + 26)]

= (1/2500)[950 + 900] = 0.38 + 0.36 = 0.74

(e) The probability P{x: odd number} is

P[odd number] = Σ over odd x ≤ 49 of Ax + Σ over odd x from 51 to 99 of A(100 − x)

= (1/2500)[(1 + 3 + 5 + ... + 49) + (49 + 47 + ... + 1)]

= (2/2500) Σ from m = 1 to 25 of (2m − 1) = (2 × 25 × 25)/2500

since the sum of the first n odd numbers is n², i.e. 1 + 3 + ... + 49 = ((49 + 1)/2)² = 25 × 25 = 625.

P[odd number] = 1/2 = 0.5

Given the events

P(A) = P(X > 50) = 0.49

P(B) = P(X < 50) = 0.49

P(C) = P(X = 50) = 1/50 = 0.02

P(D) = P(25 < X < 75) = 74/100 = 0.74

P(E) = P[odd number] = 1/2 = 0.5

Now

(a) P(A | B) = P(A ∩ B)/P(B) = P(X > 50 ∩ X < 50)/P(B) = 0

Here P(A ∩ B) = 0 while P(A) ≠ 0 and P(B) ≠ 0, so P(A ∩ B) ≠ P(A)P(B).

∴ A and B are mutually exclusive events, and therefore not independent.

(b) P(A | C) = P(A ∩ C)/P(C) = P(X > 50 ∩ X = 50)/P(C) = 0

Again P(A ∩ C) = 0 with P(A) ≠ 0 and P(C) ≠ 0.

∴ A and C are mutually exclusive events, and therefore not independent.

(c) P(A | D) = P(X > 50 ∩ 25 < X < 75)/P(25 < X < 75) = P(50 < X < 75)/P(D)

P(A ∩ D) = Σ from x = 51 to 74 of (100 − x)/2500 = [49 + 48 + ... + 26]/2500 = 900/2500 = 36/100

P(A | D) = (36/100)/(74/100) = 36/74 = 0.486

Since P(A ∩ D) = 0.36 while P(A)P(D) = 0.49 × 0.74 = 0.3626, P(A ∩ D) ≠ P(A)P(D), so the events A and D are not independent.

(d) P(C | D) = P(C ∩ D)/P(D) = P[X = 50 ∩ 25 < X < 75]/P(D)

Since C ⊂ D, P(C ∩ D) = P(C) = 0.02, so

P(C | D) = 0.02/0.74 = 2/74 = 0.027

and P(C ∩ D) = 0.02 ≠ P(C)P(D) = 0.02 × 0.74 = 0.0148.

∴ The events C and D are not independent.
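All of the probabilities and independence checks in this example can be verified with exact rational arithmetic. A minimal sketch using the standard library's `fractions` module:

```python
from fractions import Fraction

A = Fraction(1, 2500)
pmf = {x: A * x for x in range(1, 51)}
pmf.update({x: A * (100 - x) for x in range(51, 101)})

assert sum(pmf.values()) == 1                      # valid pmf, so A = 1/2500
P_A = sum(p for x, p in pmf.items() if x > 50)     # P(X > 50)
P_B = sum(p for x, p in pmf.items() if x < 50)     # P(X < 50)
P_C = pmf[50]                                      # P(X = 50)
P_D = sum(p for x, p in pmf.items() if 25 < x < 75)
P_AD = sum(p for x, p in pmf.items() if 50 < x < 75)
```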

Example 2.27

If the probability density of a random variable is given by

f_X(x) = c exp(−x/4)    0 ≤ x < 1
       = 0              otherwise

find the value that c must have and evaluate F_X(0.5).
(May 2010)

Solution

Given the probability density function

f_X(x) = c exp(−x/4)    0 ≤ x < 1
       = 0              otherwise

If f_X(x) is a valid density function, then

∫ f_X(x)dx = 1, or ∫ from 0 to 1 of c e^(−x/4) dx = 1

−4c[e^(−x/4)] from 0 to 1 = 4c(1 − e^(−1/4)) = 1

c = 1/(4(1 − e^(−1/4))) = 1.13

and

F_X(0.5) = ∫ from 0 to 0.5 of 1.13 exp(−x/4)dx = 1.13 × 4[1 − exp(−0.5/4)]

F_X(0.5) = 0.5311

Example 2.28

A random variable X has the distribution function

F_X(x) = Σ from n = 1 to 12 of (n²/650) u(x − n)

Find the probabilities (a) P{−∞ < X ≤ 6.5}, (b) P{X > 4} and (c) P{6 < X ≤ 9}.

(Nov 2010, May 2010)

Solution

Given F_X(x) = Σ from n = 1 to 12 of (n²/650) u(x − n)

(a) Probability P{−∞ < X ≤ 6.5}: since the steps occur only at integers,

F_X(6.5) = F_X(6) = (1/650) Σ from n = 1 to 6 of n² = (1/650) × n(n + 1)(2n + 1)/6 evaluated at n = 6

= (1/650) × (6 × 7 × 13)/6 = 91/650 = 0.14

(b) P{X > 4} = 1 − P{X ≤ 4} = 1 − F_X(4)

= 1 − (1/650) Σ from n = 1 to 4 of n² = 1 − (4 × 5 × 9)/(650 × 6) = 1 − 30/650 = 0.9538

(c) P{6 < X ≤ 9} = F_X(9) − F_X(6)

= (1/650)[(9 × 10 × 19)/6 − (6 × 7 × 13)/6] = (1/650)[285 − 91] = 194/650 = 0.2985
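The three staircase-CDF sums take a few lines to confirm (a minimal sketch):

```python
def cdf(k):
    # F_X at integer k for the staircase CDF with jumps n^2/650 at n = 1..12
    return sum(n * n for n in range(1, k + 1)) / 650

p_a = cdf(6)             # P{X <= 6.5} = F(6) = 91/650
p_b = 1 - cdf(4)         # P{X > 4}
p_c = cdf(9) - cdf(6)    # P{6 < X <= 9}
```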

Example 2.29

The cdf of a random variable Y is

F_Y(y) = 1 − exp(−0.4√y)    y ≥ 0
       = 0                  otherwise

Evaluate P{2.5 < Y ≤ 6.2}.

(May 2010)

Solution

P{2.5 < Y ≤ 6.2} = F_Y(6.2) − F_Y(2.5)

= 1 − exp(−0.4√6.2) − (1 − exp(−0.4√2.5))

= exp(−0.4√2.5) − exp(−0.4√6.2)

= 0.5313 − 0.3694 = 0.162
Example 2.30

Find a value for the constant A such that

f_X(x) = 0                       x < −1
       = A(1 − x²)cos(πx/2)      −1 ≤ x ≤ 1
       = 0                       1 < x

is a valid probability density function.
(May 2011)

Solution

If the function is a valid probability density function, then

∫ f_X(x)dx = 1, i.e. ∫ from −1 to 1 of A(1 − x²)cos(πx/2)dx = 1

Since the integrand is even, this is

2A ∫ from 0 to 1 of (1 − x²)cos(πx/2)dx = 1

Now ∫ from 0 to 1 of cos(πx/2)dx = (2/π)[sin(πx/2)] from 0 to 1 = 2/π

and, integrating by parts twice,

∫ from 0 to 1 of x²cos(πx/2)dx = [(2/π)x²sin(πx/2) + (8/π²)x cos(πx/2) − (16/π³)sin(πx/2)] from 0 to 1 = 2/π − 16/π³

Hence

2A[2/π − (2/π − 16/π³)] = 2A × 16/π³ = 32A/π³ = 1

A = π³/32
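A quick numerical sketch confirms that A = π³/32 makes the density integrate to one:

```python
import math

A = math.pi ** 3 / 32

def f(x):
    # Candidate pdf of Example 2.30 with the computed constant
    return A * (1 - x * x) * math.cos(math.pi * x / 2) if -1 <= x <= 1 else 0.0

# Midpoint Riemann sum over the support [-1, 1]
n = 100_000
h = 2.0 / n
area = sum(f(-1 + (k + 0.5) * h) for k in range(n)) * h
```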


Example 2.6

Find a constant b > 0 so that the function

f_X(x) = (1/10)e^(3x)    0 ≤ x ≤ b
       = 0               otherwise

is a valid pdf.

Solution

Given the function

f_X(x) = (1/10)e^(3x)    0 ≤ x ≤ b
       = 0               otherwise

If it is a valid density function, then f_X(x) ≥ 0 is true, and

∫ from 0 to b of (1/10)e^(3x) dx = 1, or (1/30)[e^(3x)] from 0 to b = (1/30)[e^(3b) − 1] = 1

e^(3b) = 31, so 3b = ln(31)

b = (1/3)ln(31) = 1.1446
Example 2.7

A Gaussian random variable X with µ_X = 4 and σ_X = 3 is generated. Find the probability of X ≤ 7.75. Write down the density function and draw the graph.

Solution

Given a Gaussian random variable with µ_X = 4, σ_X = 3 and the event {X ≤ 7.75},

P{X ≤ 7.75} = F_X(7.75)

We know that F_X(x) = F((x − µ_X)/σ_X), so

F_X(7.75) = F((7.75 − 4)/3) = F(3.75/3) = F(1.25)

Using the Q-function approximation,

F(1.25) = 1 − Q(1.25)

= 1 − (1/√(2π)) e^(−1.25²/2) / [(0.66)(1.25) + 0.34√(1.25² + 5.51)]

= 0.8944

P{X ≤ 7.75} = 0.8944

The Gaussian density function is

f_X(x) = (1/√(2πσ_X²)) e^(−(x−µ_X)²/(2σ_X²)) = (1/√(2π × 9)) e^(−(x−4)²/18) = 0.133 e^(−(x−4)²/18)

Figure 2.16 shows the Gaussian density function.

Fig. 2.16 Gaussian density function for µ_X = 4, σ_X = 3

Example 2.8

Assume that the height of clouds above the ground at some location is a Gaussian random variable X with mean value 2 km and σ_X = 0.25 km. Find the probability of clouds higher than 2.5 km.

Solution

Given a Gaussian random variable X with

Mean value µ_X = 2 km
Spread σ_X = 0.25 km

The favourable event is {X > 2.5 km}.

∴ P{X > 2.5 km} = 1 − P{X ≤ 2.5 km} = 1 − F_X(2.5)

= 1 − F((2.5 − 2)/0.25) = 1 − F(2)

= 1 − (1 − Q(2)) = Q(2)

Using the Q-function approximation,

P{X > 2.5 km} = Q(2) = (1/√(2π)) e^(−2²/2) / [0.66 × 2 + 0.34√(2² + 5.51)]

P{X > 2.5 km} = 0.0228

The probability that clouds are higher than 2500 m is therefore about 2.28%.

Example 2.9
A production line manufactures 1 kΩ resistors that must satisfy 10% tolerance.

(a) If a resistor is described by the Gaussian random variable X, for which µ_X = 1000 Ω and σ_X = 40 Ω, what fraction of the resistors is expected to be rejected?

(b) If a machine is not properly adjusted, the product resistances change to a new value where µ_X = 1050 Ω. What fraction is now rejected?

Solution

Given that the resistor value is described by the Gaussian distribution. The accepted resistor value is 1 kΩ ± 10% tolerance, that is, {900 Ω to 1100 Ω}. The resistor is rejected if its value X is {X < 900} or {X > 1100}.

(a) The probability of rejection is the fraction of the resistors rejected.

P{resistor rejected} = P{X < 900} + P{X > 1100}

Given mean value µ_X = 1000 Ω, σ_X = 40 Ω. Then

P{X < 900} = F_X(900) = F((900 − 1000)/40) = F(−2.5) = 1 − F(2.5) = Q(2.5)

and

P{X > 1100} = 1 − F_X(1100) = 1 − F((1100 − 1000)/40) = 1 − F(2.5) = Q(2.5)

P{resistors rejected} = Q(2.5) + Q(2.5) = 2Q(2.5)

Using the Q-function approximation,

P{resistors rejected} = 2 × 0.0062 = 0.0124

= 1.24% resistors are rejected

(b) If the mean value is shifted to µ_X = 1050 Ω, with σ_X = 40 Ω, then

P{X < 900} = F_X(900) = F((900 − 1050)/40) = F(−3.75) = 1 − F(3.75) = Q(3.75)

and

P{X > 1100} = 1 − F_X(1100) = 1 − F((1100 − 1050)/40) = 1 − F(1.25) = Q(1.25)

P{resistors rejected} = Q(3.75) + Q(1.25)

= (1/√(2π)) e^(−3.75²/2) / [(0.66)(3.75) + 0.34√(3.75² + 5.51)]
+ (1/√(2π)) e^(−1.25²/2) / [(0.66)(1.25) + 0.34√(1.25² + 5.51)]

= 0.1057

= 10.57% resistors are rejected
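Both rejection fractions can be cross-checked with the exact Q-function (a sketch, standard library only):

```python
import math

def Q(x):
    # Exact Gaussian tail probability Q(x) = P(N(0,1) > x)
    return 0.5 * math.erfc(x / math.sqrt(2))

mu, sigma = 1000.0, 40.0
rej_a = Q((mu - 900) / sigma) + Q((1100 - mu) / sigma)   # 2 Q(2.5)
rej_b = Q((1050 - 900) / 40) + Q((1100 - 1050) / 40)     # Q(3.75) + Q(1.25)
```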


Example 2.10

The power reflected from an aircraft, received by a radar, is described by an exponential distribution. The pdf is given by f_X(x) = (1/10)e^(−x/10), x > 0. The average power is 10 W. What is the probability that the received power is greater than the average power?

Solution

Given that the reflected power has an exponential distribution with pdf

f_X(x) = (1/10)e^(−x/10), x > 0

The distribution function is

F_X(x) = ∫ from 0 to x of (1/10)e^(−u/10)du = 1 − e^(−x/10), x > 0

The required probability is

P{X > 10} = 1 − P{X ≤ 10} = 1 − F_X(10) = 1 − (1 − e^(−1))

P{X > 10} = e^(−1) ≈ 0.368

Example 2.11

The amplitude of the output signal of a radar system receiving only noise is a Rayleigh random variable with a = 0, b = 4 V. The system shows a false target detection if the signal exceeds V volts. What is the value of V if the probability of false detection is 0.001?

Solution

Given that the noise can be described by Rayleigh's distribution. The distribution function is

F_X(x) = 1 − e^(−(x−a)²/b)    x ≥ a
       = 0                    x < a

Given a = 0, b = 4 volts,

F_X(x) = 1 − e^(−x²/4)    x ≥ 0
       = 0                x < 0

If the noise exceeds V volts, the system shows false target detection. Given that the probability of false detection is 10⁻³,

10⁻³ = P{X > V} = 1 − P{X ≤ V} = 1 − F_X(V) = e^(−V²/4)

or V² = 4 ln(1000)

V = √(4 ln(1000)) = 5.256 volts


Example 2.12

The lifetime of a system expressed in weeks is a Rayleigh random variable X for which

f(x) = (x/200)exp(−x²/400)    x ≥ 0
     = 0                      x < 0

(a) What is the probability that the system will not last a full week?
(b) What is the probability that the system lifetime will exceed one year?

Solution

The Rayleigh density function of X is

f_X(x) = (x/200)e^(−x²/400)    x ≥ 0
       = 0                     x < 0

where x is given in weeks.

(a) The probability that the system will not last a full week is

P{X ≤ 1} = F(1) = ∫ from 0 to 1 of (x/200)e^(−x²/400)dx = [−e^(−x²/400)] from 0 to 1

= 1 − e^(−1/400) ≈ 0.0025

(b) The probability that the system lifetime will exceed one year (one year = 52 weeks) is

P{X > 52} = 1 − P{X ≤ 52} = 1 − F_X(52)

But F_X(52) = ∫ from 0 to 52 of (x/200)e^(−x²/400)dx = 1 − e^(−52²/400)

P{X > 52} = 1 − (1 − e^(−52²/400)) = e^(−2704/400) = e^(−6.76) ≈ 0.00116
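Both lifetime probabilities follow from the closed-form Rayleigh CDF; a minimal sketch:

```python
import math

def F(x):
    # Rayleigh CDF for f(x) = (x/200) exp(-x^2/400), i.e. b = 400
    return 1 - math.exp(-x * x / 400) if x >= 0 else 0.0

p_week = F(1)          # P{X <= 1} = 1 - e^(-1/400), about 0.0025
p_year = 1 - F(52)     # P{X > 52} = e^(-6.76), about 0.00116
```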

Example 2.13

A certain large city experiences, on an average, three murders per week. Their occurrence follows a Poisson distribution.

(a) What is the probability that there are 5 or more murders in a given week?
(b) On an average, how many weeks in a year can this city expect to have no murders?
(c) How many weeks per year (average) can the city expect the number of murders per week to equal or exceed the average number per week?

Solution

Given that in the city the occurrence of murders per week follows a Poisson distribution, with average number of murders b = λT = 3.

(a) The probability that there will be five or more murders in a given week is

P{k ≥ 5} = 1 − P{k ≤ 4} = 1 − e^(−b) Σ from k = 0 to 4 of b^k/k!

= 1 − e^(−3)[1 + 3 + 3²/2! + 3³/3! + 3⁴/4!]

= 1 − e^(−3)[1 + 3 + 9/2 + 9/2 + 27/8] = 1 − 16.375 e^(−3) = 0.1847

(b) The probability of zero (no) murders is

P{k = 0} = e^(−b) b⁰/0! = e^(−3) = 0.0498

The average number of weeks in a year that the city has no murders is

52 × 0.0498 = 2.59 weeks

(c) The probability that the number of murders per week is equal to or greater than the average number per week is

P{k ≥ 3} = 1 − P{k ≤ 2} = 1 − [P(0) + P(1) + P(2)]

= 1 − e^(−3)[1 + 3 + 9/2] = 1 − (17/2)e^(−3) = 0.5768

∴ The average number of weeks in a year when the number of murders equals or exceeds the average value is 52 × 0.5768 ≈ 30 weeks.
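All three Poisson tail sums are quick to verify (a minimal sketch, standard library only):

```python
import math

def poisson_cdf(k, lam):
    # P{K <= k} for a Poisson(lam) count
    return math.exp(-lam) * sum(lam ** i / math.factorial(i) for i in range(k + 1))

lam = 3.0
p5 = 1 - poisson_cdf(4, lam)      # P{k >= 5}
p0 = math.exp(-lam)               # P{k = 0}
p3 = 1 - poisson_cdf(2, lam)      # P{k >= 3}
weeks_no_murder = 52 * p0
weeks_ge_avg = 52 * p3
```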


Solution

We know that for a given random variable, the binomial density function is given by

f_X(x) = Σ from K = 0 to N of NC_K p^K (1 − p)^(N−K) δ(x − K)

For the random variable {X | x = K}, the probability mass function is

P(x) = NC_x p^x q^(N−x) for x = 0, 1, 2, ..., N, where q = 1 − p

The mean value of X is

E[X] = Σ from x = 0 to N of x P(x) = Σ from x = 0 to N of x NC_x p^x q^(N−x)

We know that NC_x = N(N − 1)(N − 2).../[x(x − 1)(x − 2)...], so that

x NC_x = N (N−1)C_(x−1)

Since the x = 0 term contributes nothing,

E[X] = Np Σ from x = 1 to N of (N−1)C_(x−1) p^(x−1) q^(N−1−(x−1))

Since Σ from x = 1 to N of (N−1)C_(x−1) p^(x−1) q^(N−1−(x−1)) = (p + q)^(N−1) = 1,

E[X] = Np(1) = Np

The mean square value of X is

E[X²] = Σ from x = 0 to N of x² P(x) = Σ from x = 0 to N of x² NC_x p^x q^(N−x)

Operations on One Random Variable

189

Example 3.22
The pdf of a random variable X is given by

f_X(x) = x/20    2 ≤ x ≤ 5
       = 0       otherwise

Find the pdf of Y = 3X − 5.

Solution

Given

f_X(x) = x/20    2 ≤ x ≤ 5
       = 0       otherwise

and the transformation Y = 3X − 5, or X = (Y + 5)/3, so that dx/dy = 1/3.

We know that f_Y(y) = f_X(x)|dx/dy|

Now, f_Y(y) = (1/3) f_X((y + 5)/3) = (1/3) × ((y + 5)/3)/20 = (y + 5)/180

The limits are: for x = 2, y = 1 and for x = 5, y = 10.

f_Y(y) = (y + 5)/180    1 ≤ y ≤ 10
       = 0              otherwise

Example 3.23
A Gaussian random variable X with variance 10 and mean 5 is transformed to Y = e^X. Find the pdf of Y.

Solution

Given σ_X² = 10, m_X = 5. We know that the pdf of a Gaussian random variable is

f_X(x) = (1/√(2π × 10)) exp(−(x − 5)²/20)

The transformation Y = e^X gives x = ln(y), with dx/dy = 1/y for y > 0.

We know that f_Y(y) = |dx/dy| f_X(ln y) = (1/y) f_X(ln y)

f_Y(y) = (1/(y√(20π))) exp(−(ln(y) − 5)²/20), y > 0

More Solved Examples


Example 3.24

Let X and Y be random variables such that Y ≤ X. Show that E[Y] ≤ E[X], provided the expectations exist.

Solution

Given two random variables X and Y with Y ≤ X, we have Y − X ≤ 0.

Taking expectations on both sides,

E[Y − X] ≤ 0

From the addition theorem, E[Y] − E[X] ≤ 0

or E[Y] ≤ E[X]
Example 3.25

What is the mathematical expectation of winning Rs 10 on a balanced coin coming up heads, or losing Rs 10 on its coming up tails?

Solution

Let the winnings be

x₁ = +10 for heads
x₂ = −10 for tails

Since the coin is balanced, the probability of heads is P(x₁) = 1/2 and the probability of tails is P(x₂) = 1/2.

We know that the expectation is

E[X] = Σ xᵢ P(xᵢ) = x₁P(x₁) + x₂P(x₂) = (1/2)(+10) + (1/2)(−10) = 0

∴ The mathematical expectation is zero.
Example 3.26

A fair coin is tossed until a tail appears. Find the mathematical expectation of the number of tosses.

Solution

Let X be the random variable "number of tosses". Since the coin is fair, the probability of getting a head is 1/2 and the probability of getting a tail is 1/2.

The possible outcomes of tossing until a tail appears are T, HT, HHT, HHHT, HHHHT, ... and so on. The probabilities of the outcomes are

P(T) = 1/2
P(HT) = (1/2)(1/2) = 1/4
P(HHT) = (1/2)(1/2)(1/2) = 1/8
P(HHHT) = (1/2)(1/2)(1/2)(1/2) = 1/16
...
P(n tosses) = 1/2ⁿ

The expectation of X is

E[X] = Σ from n = 1 to ∞ of n P(xₙ) = 1 × 1/2 + 2 × 1/4 + 3 × 1/8 + 4 × 1/16 + ... + n × 1/2ⁿ + ...

Let S = Σ from n = 1 to ∞ of n/2ⁿ; then S/2 = Σ from n = 1 to ∞ of n/2^(n+1).

Subtracting the two series term by term,

S − S/2 = Σ from n = 1 to ∞ of 1/2ⁿ = (1/2)/(1 − 1/2) = 1

so S/2 = 1 and S = 2.

∴ The required expectation is 2.
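The partial sums of the series converge rapidly to 2, as a one-line sketch shows:

```python
# Partial sum of E[X] = sum over n of n / 2^n; terms beyond n = 59 are negligible
s = sum(n / 2 ** n for n in range(1, 60))
```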


Example 3.27

A box contains 4 red and 2 green balls. Two balls are drawn together. Find the expected value of the number of red balls drawn.

Solution

Let X be the random variable for the number of red balls drawn. When two balls are drawn together, the possible events are

No red ball and two green balls: x₁ = 0
One red ball and one green ball: x₂ = 1
Two red balls and no green ball: x₃ = 2

The probabilities of the events are

P(x₁) = 4C0 × 2C2 / 6C2 = (1 × 1)/15 = 1/15
P(x₂) = 4C1 × 2C1 / 6C2 = (4 × 2)/15 = 8/15
P(x₃) = 4C2 × 2C0 / 6C2 = 6/15

where 6C2 = (6 × 5)/2 = 15. The expected value is

E[X] = Σ xᵢP(xᵢ) = 0 × 1/15 + 1 × 8/15 + 2 × 6/15 = 20/15 = 4/3

E[X] = 1.33
Example 3.28

When two unbiased dice are thrown, find the expected value of the product of the numbers
shown on the dice.
Solution

When two dice are thrown, the sample space is


S

{(1,1), (1,2), ... ,(1,6),(2,1),(2,2), ... ,(2,6),(3,1 ),(3,2), ... , (3,6), ... ,(1,6)..... ,(6,6)}

Let X be the random variable for the events of product of the numbers shown on dices
The possible outcomes areX {I, 2, 3,4, 5, 6, 2,4,6,8,10,12, 3,6,9,12,15,18
4,8,12,16,20,24,5,1O,15,20,25,30,6,12,18,24,30}

Here, products 1,9, 16,36 appear one time, products 2,3,5,8, 10, 15, 18,20,24,25,30
appear two times, products 4,12 appear three times and product 6 appears 4 times.
:. The probabilities are shown in Table 3.5.

Table 3.5 Values of X and P(x)


X=X 1
P(x)

3
36

36 36

36

36

~ r2

10

12

15

16

18

20

24

25

30

36

3
36

-1

36

36

-36

Then
E(X]

= LX,P(x,)
i

- -

36 36

-36 -36 -36 -36

36 36

194

Probability Theory and Stochastic Processes

1
36

2
36

2
36

3
36

2
36

= lx-+ 2x-+3x-+4x-+5x-+ ...... 36x


E[X]

1
36

= 12.61
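Brute-force enumeration over the 36 equally likely outcomes settles the value of this expectation (a minimal sketch):

```python
from itertools import product

# Enumerate all 36 equally likely outcomes of two dice
products = [i * j for i, j in product(range(1, 7), repeat=2)]
expected = sum(products) / 36    # 441/36 = 12.25 = 3.5 * 3.5
```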

Example 3.29

When two unbiased dice are thrown, find the expected value of the sum of the numbers shown on the dice.

Solution

When two dice are thrown, the sample space is

S = {(1,1), (1,2), ..., (1,6), (2,1), ..., (2,6), ..., (6,1), ..., (6,6)}

Let X be the random variable for the sum of the numbers shown on the dice. The possible sums are 2 through 12, with the probabilities shown in Table 3.6.

Table 3.6 Values of X and P(x)

X = x:   2     3     4     5     6     7     8     9     10    11    12
P(x):    1/36  2/36  3/36  4/36  5/36  6/36  5/36  4/36  3/36  2/36  1/36

E[X] = 2×1/36 + 3×2/36 + 4×3/36 + 5×4/36 + 6×5/36 + 7×6/36 + 8×5/36 + 9×4/36 + 10×3/36 + 11×2/36 + 12×1/36 = 252/36

E[X] = 7

Example 3.30

Let X be a random variable with probabilities as shown in Table 3.7.

Table 3.7 Values of X and P(x)

X(xᵢ):  −1    1    2
P(x):   1/6  1/3  1/2

Find (a) E[X], (b) E[X²], (c) E[(2X + 1)²] and (d) σ_X².

Solution

Given a discrete random variable with probabilities shown in Table 3.7.

(a) E[X] = Σ xᵢP(xᵢ) = (−1)(1/6) + (1)(1/3) + (2)(1/2) = 7/6 ≈ 1.167

(b) E[X²] = Σ xᵢ²P(xᵢ) = (−1)²(1/6) + (1)²(1/3) + 2²(1/2) = 1/6 + 1/3 + 2 = 5/2 = 2.5

(c) E[(2X + 1)²] = Σ (2xᵢ + 1)²P(xᵢ)

= (2(−1) + 1)²(1/6) + (2(1) + 1)²(1/3) + (2(2) + 1)²(1/2)

= 1/6 + 9/3 + 25/2 = 47/3

(or) E[(2X + 1)²] = E[4X² + 4X + 1] = 4E[X²] + 4E[X] + 1 = 4(5/2) + 4(7/6) + 1 = 10 + 14/3 + 1 = 47/3

(d) σ_X² = E[X²] − (E[X])² = 5/2 − (7/6)² = 90/36 − 49/36 = 41/36 ≈ 1.139

Example 3.31

Consider a random variable X with E[X] = 5 and σ_X² = 9. Another random variable is given as Y = −8X + 10. Find (a) E[X²], (b) E[XY], (c) E[Y²] and (d) σ_Y².

Solution

Given E[X] = 5, σ_X² = 9 and Y = −8X + 10.

(a) E[X²] = σ_X² + (E[X])² = 9 + 5² = 9 + 25 = 34

(b) E[XY] = E[X(−8X + 10)] = E[−8X² + 10X] = −8E[X²] + 10E[X]

= −8(34) + 10(5) = −272 + 50 = −222

(c) E[Y²] = E[(−8X + 10)²] = E[64X² − 160X + 100] = 64E[X²] − 160E[X] + 100

= 64 × 34 − 160 × 5 + 100 = 2176 − 800 + 100 = 1476

(d) σ_Y² = E[Y²] − (E[Y])²

Now E[Y] = E[−8X + 10] = −8E[X] + 10 = −8 × 5 + 10 = −30

and σ_Y² = 1476 − (−30)² = 1476 − 900 = 576, which equals 64σ_X², as expected for Y = −8X + 10.
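The moment bookkeeping for a linear transformation is easy to verify; the sketch below reads the given spread as σ_X² = 9 (the value the arithmetic above uses):

```python
# Moments of Y = -8 X + 10 given E[X] = 5 and var(X) = 9
EX, varX = 5.0, 9.0
EX2 = varX + EX ** 2                # E[X^2] = 34
EXY = -8 * EX2 + 10 * EX            # E[X(-8X+10)] = -222
EY = -8 * EX + 10                   # -30
EY2 = 64 * EX2 - 160 * EX + 100     # E[(-8X+10)^2] = 1476
varY = EY2 - EY ** 2                # 576 = 64 * var(X)
```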

Example 3.32

Consider that the pdf of a random variable X is

f_X(x) = 1/K    −2 ≤ x ≤ 3
       = 0      otherwise

and another random variable Y = 2X. Find (a) the value K, (b) E[X], (c) E[Y] and (d) E[XY].

Solution

(a) Given

f_X(x) = 1/K    −2 ≤ x ≤ 3
       = 0      otherwise

Since f_X(x) is a valid density function, ∫ f_X(x)dx = 1, as shown in Fig. 3.7.

Fig. 3.7 Uniform density function

∫ from −2 to 3 of (1/K)dx = 1

(3 − (−2))/K = 1

or K = 5

(b) E[X] = ∫ x f_X(x)dx = ∫ from −2 to 3 of (x/5)dx = (1/5)[x²/2] from −2 to 3

= (1/10)(3² − (−2)²) = (9 − 4)/10 = 0.5

(c) E[Y] = E[2X] = 2 × 0.5 = 1

(d) E[XY] = E[X(2X)] = 2E[X²] = 2 ∫ from −2 to 3 of (x²/5)dx = (2/15)[x³] from −2 to 3

= (2/15)(27 + 8) = 70/15 ≈ 4.67
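The uniform-density moments above follow from the closed-form antiderivatives; a minimal sketch:

```python
# Uniform X on [-2, 3] with density 1/5; Y = 2X
EX = (3 ** 2 - (-2) ** 2) / (2 * 5)     # integral of x/5 over [-2, 3] = 0.5
EX2 = (3 ** 3 - (-2) ** 3) / (3 * 5)    # integral of x^2/5 over [-2, 3] = 7/3
EY = 2 * EX                             # E[Y] = 1
EXY = 2 * EX2                           # E[2 X^2] = 14/3, about 4.67
```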
Example 3.33

The pdf of a random variable X is given as

f_X(x) = 0.3507√x    0 < x < 3
       = 0           otherwise

Find the (i) mean, (ii) mean of the square and (iii) variance of the random variable X.
(Nov 2010)

Solution

Given f_X(x) = 0.3507√x, 0 < x < 3.

(i) The mean value is

E[X] = ∫ x f_X(x)dx = ∫ from 0 to 3 of x(0.3507√x)dx = 0.3507 × (2/5)[x^(5/2)] from 0 to 3

E[X] = 0.14 × 3^(5/2) = 2.1867

(ii) The mean square value is

E[X²] = ∫ from 0 to 3 of x²(0.3507√x)dx = 0.3507 × (2/7)[x^(7/2)] from 0 to 3 = 4.6859

(iii) Variance

σ_X² = E[X²] − (E[X])² = 4.6859 − (2.1867)² = −0.096

The negative value signals that the stated constant does not normalise the density: ∫ from 0 to 3 of 0.3507√x dx = 1.215 ≠ 1. With the normalising constant 1/(2√3) ≈ 0.2887 in place of 0.3507, the moments become E[X] = 9/5 = 1.8, E[X²] = 27/7 ≈ 3.857 and σ_X² = 27/7 − (9/5)² = 108/175 ≈ 0.617.

Example 3.34

A random variable X has a pdf

f_X(x) = (1/2)cos x    −π/2 < x < π/2
       = 0             elsewhere

Find the mean value of the function g(X) = 4X².
(May 2011)

Solution

Given

f_X(x) = (1/2)cos x    −π/2 < x < π/2
       = 0             elsewhere
This can be written, using x² = x(x − 1) + x, as

E[X²] = Σ from x = 0 to N of [x(x − 1) + x] NC_x p^x q^(N−x)

= Σ from x = 0 to N of x(x − 1) NC_x p^x q^(N−x) + Σ from x = 0 to N of x NC_x p^x q^(N−x)

Now NC_x can be written as

x NC_x = N (N−1)C_(x−1), so x(x − 1) NC_x = N(N − 1) (N−2)C_(x−2)

Since the x = 0 and x = 1 terms of the first sum vanish,

E[X²] = N(N − 1)p² Σ from x = 2 to N of (N−2)C_(x−2) p^(x−2) q^(N−2−(x−2)) + Np

Since Σ from x = 2 to N of (N−2)C_(x−2) p^(x−2) q^(N−2−(x−2)) = (p + q)^(N−2) = 1,

E[X²] = N(N − 1)p² + Np

∴ The variance of X is

σ_X² = E[X²] − (E[X])² = N(N − 1)p² + Np − (Np)²

= N²p² − Np² + Np − N²p²

= Np − Np² = Np(1 − p)

σ_X² = Npq


and the function g(X) = 4X². The mean value of the function g(X) is

E[g(X)] = ∫ g(x)f_X(x)dx = ∫ from −π/2 to π/2 of (4x²)(1/2)cos x dx = 2 ∫ from −π/2 to π/2 of x²cos x dx

To evaluate the integration, integrate by parts twice:

∫ x²cos x dx = x²sin x − ∫ 2x sin x dx = x²sin x − 2[x(−cos x) − ∫(−cos x)dx]

∫ x²cos x dx = x²sin x + 2x cos x − 2 sin x

or

E[g(X)] = 2[x²sin x + 2x cos x − 2 sin x] from −π/2 to π/2

= 2[(π²/4 + 0 − 2) − (−π²/4 − 0 + 2)]

E[g(X)] = π² − 8 ≈ 1.87
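A direct numerical integration of g(x)f(x) confirms the closed form π² − 8 (a minimal sketch):

```python
import math

def g_times_f(x):
    # g(x) f(x) = 4 x^2 * (1/2) cos x on (-pi/2, pi/2)
    return 4 * x * x * 0.5 * math.cos(x)

# Midpoint Riemann sum over the support
n = 200_000
a, b = -math.pi / 2, math.pi / 2
h = (b - a) / n
Eg = sum(g_times_f(a + (k + 0.5) * h) for k in range(n)) * h
```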

Example 3.35

Given the Rayleigh random variable with density function

f(x) = (2/b)(x − a) e^(−(x−a)²/b) u(x − a)

show that the mean and variance are

E[X] = a + √(πb/4),  σ_X² = b(1 − π/4)

(May 2011)

Solution

Given Rayleigh's density function

f(x) = (2/b)(x − a) e^(−(x−a)²/b) u(x − a)

(i) The mean value is

E[X] = ∫ x f(x)dx = (2/b) ∫ from a to ∞ of x(x − a) e^(−(x−a)²/b) dx

To evaluate the integration, write

x(x − a) = (x − a)² + a(x − a)

so that

E[X] = (2/b) ∫ from a to ∞ of (x − a)² e^(−(x−a)²/b) dx + a (2/b) ∫ from a to ∞ of (x − a) e^(−(x−a)²/b) dx

The second integral is the total area under the pdf:

(2/b) ∫ from a to ∞ of (x − a) e^(−(x−a)²/b) dx = ∫ f(x)dx = 1

For the first, let t = (x − a)/√b, dx = √b dt; the limit x = a gives t = 0:

(2/b) ∫ from a to ∞ of (x − a)² e^(−(x−a)²/b) dx = 2√b ∫ from 0 to ∞ of t² e^(−t²) dt = 2√b × √π/4 = √(πb)/2    (from Appendix A)

E[X] = a + √(πb)/2 = a + √(πb/4)

(ii) For the variance we first need

E[X²] = (2/b) ∫ from a to ∞ of x²(x − a) e^(−(x−a)²/b) dx

The term x²(x − a) can be expressed in terms of (x − a) as

x²(x − a) = (x − a + a)²(x − a) = [(x − a)² + 2a(x − a) + a²](x − a)

= (x − a)³ + 2a(x − a)² + a²(x − a)

so

E[X²] = (2/b) ∫ from a to ∞ of (x − a)³ e^(−(x−a)²/b) dx + 2a (2/b) ∫ from a to ∞ of (x − a)² e^(−(x−a)²/b) dx + a² (2/b) ∫ from a to ∞ of (x − a) e^(−(x−a)²/b) dx

For the first integral, let t = (x − a)²/b, so that (2/b)(x − a)dx = dt:

(2/b) ∫ from a to ∞ of (x − a)³ e^(−(x−a)²/b) dx = b ∫ from 0 to ∞ of t e^(−t) dt = b

The second and third integrals follow from part (i):

2a (2/b) ∫ from a to ∞ of (x − a)² e^(−(x−a)²/b) dx = 2a × √(πb)/2 = a√(πb)

a² (2/b) ∫ from a to ∞ of (x − a) e^(−(x−a)²/b) dx = a²

E[X²] = b + a√(πb) + a²

The variance of X is

σ_X² = E[X²] − (E[X])² = a² + a√(πb) + b − (a + √(πb)/2)²

= a² + a√(πb) + b − a² − a√(πb) − πb/4

= b − πb/4 = b(1 − π/4)

Proved.
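A Monte Carlo sketch supports both formulas; the values a = 1 and b = 4 below are assumed purely for illustration, and sampling uses the inverse of the Rayleigh CDF F(x) = 1 − exp(−(x − a)²/b):

```python
import math
import random

a_p, b_p = 1.0, 4.0     # assumed illustration parameters
random.seed(0)

# Inverse-CDF sampling: U uniform in (0,1]  =>  x = a + sqrt(-b ln U)
xs = [a_p + math.sqrt(-b_p * math.log(1 - random.random())) for _ in range(200_000)]
mean = sum(xs) / len(xs)
var = sum((x - mean) ** 2 for x in xs) / len(xs)

mean_theory = a_p + math.sqrt(math.pi * b_p / 4)   # a + sqrt(pi b / 4)
var_theory = b_p * (1 - math.pi / 4)               # b (1 - pi/4)
```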

Example 3.36
Find the characteristic function for a random variable with density function

f_X(x) = x for 0 ≤ x ≤ 1

Solution

The characteristic function of X is given by

φ_X(ω) = E[e^(jωX)] = ∫ e^(jωx) f_X(x)dx = ∫ from 0 to 1 of x e^(jωx) dx

Integrating by parts (equivalently, using ∫ x e^(cx)dx = e^(cx)(cx − 1)/c² with c = jω),

φ_X(ω) = [e^(jωx)(jωx − 1)/(jω)²] from 0 to 1

φ_X(ω) = [e^(jω)(1 − jω) − 1]/ω²


Example 3.37

The density function of a random variable is given as

f_X(x) = a e^(−bx), x ≥ 0

Find the characteristic function and the first two moments.

Solution

Given f_X(x) = a e^(−bx), x ≥ 0. The characteristic function is

φ_X(ω) = E[e^(jωX)] = ∫ e^(jωx) f_X(x)dx = ∫ from 0 to ∞ of a e^(−bx) e^(jωx) dx

= a ∫ from 0 to ∞ of e^(−(b−jω)x) dx = a [−e^(−(b−jω)x)/(b − jω)] from 0 to ∞

φ_X(ω) = a/(b − jω)

The moments are

m₁ = (1/j) dφ_X/dω at ω = 0 = (1/j) × ja/(b − jω)² at ω = 0 = a/b²

m₂ = (1/j²) d²φ_X/dω² at ω = 0 = (1/j²) × (−2a)/(b − jω)³ at ω = 0 = 2a/b³

Example 3.38

Find the characteristic function for f_X(x) = e^(−|x|).

Solution

Given f_X(x) = e^(−|x|). The characteristic function is

φ_X(ω) = ∫ e^(−|x|) e^(jωx) dx = ∫ from −∞ to 0 of e^((1+jω)x) dx + ∫ from 0 to ∞ of e^(−(1−jω)x) dx

= [e^((1+jω)x)/(1 + jω)] from −∞ to 0 + [−e^(−(1−jω)x)/(1 − jω)] from 0 to ∞

= (1/(1 + jω))[1 − 0] − (1/(1 − jω))[0 − 1]

φ_X(ω) = 1/(1 + jω) + 1/(1 − jω) = 2/(1 + ω²)
Example 3.39

Show that the characteristic function of a Gaussian random variable with zero mean and variance σ² is

φ_X(ω) = e^(−σ²ω²/2)

Solution

We know that the probability density function of a zero-mean Gaussian random variable is

f_X(x) = (1/√(2πσ²)) e^(−x²/(2σ²))

The characteristic function is

φ_X(ω) = E[e^(jωX)] = ∫ f_X(x) e^(jωx) dx = (1/√(2πσ²)) ∫ e^(−x²/(2σ²) + jωx) dx

Now take the exponent −x²/(2σ²) + jωx and complete the square:

−x²/(2σ²) + jωx = −[x/(√2 σ) − jωσ/√2]² − σ²ω²/2

Then the characteristic function is

φ_X(ω) = e^(−σ²ω²/2) (1/√(2πσ²)) ∫ e^(−(x − jωσ²)²/(2σ²)) dx

Let y = (x − jωσ²)/σ, so that dx = σ dy:

φ_X(ω) = e^(−σ²ω²/2) (1/√(2π)) ∫ e^(−y²/2) dy

We know that the area under the Gaussian pdf is

(1/√(2π)) ∫ e^(−y²/2) dy = 1

∴ φ_X(ω) = e^(−σ²ω²/2)

Proved.
Example 3.40

Find the density function of the random variable X if the characteristic function is

φ_X(ω) = 1 − |ω|    |ω| ≤ 1
        = 0          otherwise

Solution

Given the characteristic function

φ_X(ω) = 1 − |ω|    |ω| ≤ 1
        = 0          otherwise

The density function is the inverse Fourier transform of the characteristic function with (−ω):

f_X(x) = (1/2π) ∫ φ_X(ω) e^(−jωx) dω = (1/2π) ∫ from −1 to 1 of (1 − |ω|) e^(−jωx) dω

Since (1 − |ω|) is even, this reduces to

f_X(x) = (1/π) ∫ from 0 to 1 of (1 − ω) cos(ωx) dω

Now ∫ from 0 to 1 of cos(ωx)dω = sin x/x, and integrating by parts,

∫ from 0 to 1 of ω cos(ωx)dω = sin x/x − (1 − cos x)/x²

so

f_X(x) = (1/π)[sin x/x − sin x/x + (1 − cos x)/x²]

f_X(x) = (1 − cos x)/(πx²) = (1/(πx²))[1 − (e^(jx) + e^(−jx))/2]

Example 3.41
Show that the distribution function for which the characteristic function is e^(−|ω|) has the density function

f_X(x) = 1/(π(1 + x²)), −∞ < x < ∞

Solution

Given the characteristic function φ_X(ω) = e^(−|ω|).

The density function f_X(x) is the inverse Fourier transform of the characteristic function with (−ω). Therefore,

f_X(x) = (1/2π) ∫ e^(−|ω|) e^(−jωx) dω

= (1/2π)[∫ from −∞ to 0 of e^((1−jx)ω) dω + ∫ from 0 to ∞ of e^(−(1+jx)ω) dω]

= (1/2π)[1/(1 − jx) + 1/(1 + jx)]

= (1/2π) × 2/(1 + x²)

f_X(x) = 1/(π(1 + x²))

Proved.
Example 3.42

The characteristic function for a Gaussian random variable X, having a mean value of zero, is

φ_X(ω) = exp(−σ_X²ω²/2)

Find all the moments of X using φ_X(ω).
(Nov 2010)

Solution

Given the characteristic function φ_X(ω) = exp(−σ_X²ω²/2).

We know that the nth moment is

m_n = (1/jⁿ) dⁿφ_X(ω)/dωⁿ evaluated at ω = 0

Example 3.15

Find the mean and variance of the Poisson distribution function. (May 2011)

Solution

We know that for a given random variable, the Poisson density function is given by

f_X(x) = Σ_{K=0}^{N} e^(−λ) (λ^K/K!) δ(x − K)

For the random variable {X | x = K}, the probability mass function is

P(x) = e^(−λ) λ^x / x!   for x = 0, 1, 2, …, N, …, ∞

where λ = Np.

The mean value of X is

E[X] = Σ_{x=0}^{∞} x P(x)

Since the x = 0 term is zero,

E[X] = Σ_{x=1}^{∞} x e^(−λ) λ^x / x! = λ Σ_{x=1}^{∞} e^(−λ) λ^(x−1) / (x − 1)!

Since

Σ_{x=1}^{∞} e^(−λ) λ^(x−1) / (x − 1)! = 1 (sum of all probabilities)

E[X] = λ × 1

E[X] = λ = Np

The mean square value of X is

E[X²] = Σ_{x=0}^{∞} x² P(x)

The first moment is

m₁ = (−j) dφ_X/dω |_(ω=0) = (−j)(−σ_X²ω) e^(−σ_X²ω²/2) |_(ω=0) = 0

The second moment is

m₂ = (−j)² d²φ_X/dω² |_(ω=0) = −[−σ_X² + σ_X⁴ω²] e^(−σ_X²ω²/2) |_(ω=0) = σ_X²

and m₃ = 0.

Similarly, we can obtain all other moments: every odd-order moment is zero, and for even n,

m_n = 1·3·5⋯(n − 1) σ_Xⁿ

Example 3.43

Show that the characteristic function of a random variable having a binomial density is

φ_X(ω) = [1 − p + p e^(jω)]^N   (Nov 2010)

Solution

The binomial density function is

P(x) = ᴺC_x p^x q^(N−x)   for x = 0, 1, 2, …, N, where q = 1 − p

The characteristic function is

φ_X(ω) = E[e^(jωX)] = Σ_{x=0}^{N} e^(jωx) P(x)

φ_X(ω) = Σ_{x=0}^{N} ᴺC_x p^x q^(N−x) e^(jωx) = Σ_{x=0}^{N} ᴺC_x (p e^(jω))^x (1 − p)^(N−x)

Since

(a + b)^N = Σ_{x=0}^{N} ᴺC_x a^x b^(N−x)

φ_X(ω) = [1 − p + p e^(jω)]^N

Proved.
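The binomial characteristic function can be checked directly against its definition. In this sketch (not part of the text), N = 10 and p = 0.3 are arbitrary parameter choices.

```python
import cmath
from math import comb

def cf_direct(omega, N, p):
    # E[e^{j*omega*X}] computed term by term from the binomial pmf
    return sum(comb(N, x) * p ** x * (1 - p) ** (N - x) * cmath.exp(1j * omega * x)
               for x in range(N + 1))

def cf_closed(omega, N, p):
    # closed form derived above
    return (1 - p + p * cmath.exp(1j * omega)) ** N

N, p = 10, 0.3
errs = [abs(cf_direct(w, N, p) - cf_closed(w, N, p)) for w in (0.0, 0.7, 2.1)]
print(errs)
```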

Example 3.44

Find the moment generating function and the characteristic function of a Poisson distribution.

Solution

Given that the Poisson density function of X is

P(x) = e^(−λ) λ^x / x!,   x ≥ 0

The moment generating function of X is

M_X(v) = E[e^(vX)] = Σ_{x=0}^{∞} e^(vx) P(x)

M_X(v) = e^(−λ) Σ_{x=0}^{∞} (λe^v)^x / x!

We know that e^z = Σ_{n=0}^{∞} zⁿ/n!.

∴ The moment generating function is M_X(v) = e^(−λ) e^(λe^v) = e^(λ(e^v − 1)).

Similarly, the characteristic function of X is given by

φ_X(ω) = E[e^(jωX)] = Σ_{x=0}^{∞} e^(jωx) P(x)

∴ The characteristic function is φ_X(ω) = e^(λ(e^(jω) − 1)).
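A quick numeric confirmation of the Poisson mgf (not part of the text): sum the defining series, truncated where the tail is negligible, and compare with e^(λ(e^v − 1)); λ = 3.5 is an arbitrary choice.

```python
import math

def mgf_direct(v, lam, terms=100):
    # E[e^{vX}] from the Poisson pmf, truncated at `terms` (tail is negligible)
    return sum(math.exp(v * x) * math.exp(-lam) * lam ** x / math.factorial(x)
               for x in range(terms))

def mgf_closed(v, lam):
    return math.exp(lam * (math.exp(v) - 1))

lam = 3.5
errs = [abs(mgf_direct(v, lam) - mgf_closed(v, lam)) for v in (0.0, 0.2, 0.5)]
print(errs)
```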

Example 3.45

A random variable X has pdf

f(x) = 1/2^x,   x = 1, 2, 3, …, n

Find the moment generating function M_X(v). (May 2011)

Solution

Since X is a discrete random variable,

P(x) = f(x) = 1/2^x,   x = 1, 2, 3, …, n

The moment generating function is

M_X(v) = E[e^(vX)] = Σ_{x=1}^{n} e^(vx) P(x) = Σ_{x=1}^{n} (e^v/2)^x

This is a geometric series with ratio e^v/2, so

M_X(v) = (e^v/2) · [1 − (e^v/2)^n] / [1 − e^v/2],   e^v ≠ 2
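The geometric-series closed form can be checked against the direct sum. In this sketch (not part of the text), n = 8 and a few arbitrary values of v are used, avoiding v = ln 2 where the ratio equals 1.

```python
import math

def mgf_direct(v, n):
    # E[e^{vX}] for P(x) = 1/2^x, x = 1..n, summed term by term
    return sum(math.exp(v * x) / 2 ** x for x in range(1, n + 1))

def mgf_closed(v, n):
    r = math.exp(v) / 2
    return r * (1 - r ** n) / (1 - r)

n = 8
errs = [abs(mgf_direct(v, n) - mgf_closed(v, n)) for v in (-0.5, 0.1, 0.6)]
print(errs)
```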

Since x² = x(x − 1) + x,

E[X²] = Σ_{x=0}^{∞} x(x − 1) P(x) + Σ_{x=0}^{∞} x P(x)

= λ² Σ_{x=2}^{∞} e^(−λ) λ^(x−2) / (x − 2)! + λ

Since

Σ_{x=2}^{∞} e^(−λ) λ^(x−2) / (x − 2)! = 1 (sum of all probabilities) and Σ x P(x) = mean value = λ

So,

E[X²] = λ²(1) + λ

E[X²] = λ² + λ

The variance of X is

σ_X² = E[X²] − E[X]² = λ² + λ − λ² = λ

∴ For the Poisson distribution, E[X] = σ_X² = λ = Np.
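The result E[X] = σ_X² = λ can be confirmed numerically from the pmf itself. This sketch (not part of the text) uses λ = 4.2, an arbitrary choice, with the series truncated where the tail is negligible.

```python
import math

lam = 4.2
terms = 100
pmf = [math.exp(-lam) * lam ** x / math.factorial(x) for x in range(terms)]
mean = sum(x * p for x, p in enumerate(pmf))
second = sum(x * x * p for x, p in enumerate(pmf))
var = second - mean ** 2
print(mean, var)
```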

Example 3.16

Five coins are thrown simultaneously. Find the mean and variance of the probability of getting a head.

Solution

When coins are thrown simultaneously, the number of heads follows a binomial distribution. Assume the coins are fair.

The probability of getting a head is p = 1/2, so q = 1 − p = 1/2, and N = number of coins = 5.

The mean value of X is

E[X] = Np = 5 × 1/2 = 2.5

The variance is

σ_X² = Npq = 5 × 1/2 × 1/2 = 1.25

Example 3.17

The mean and variance of a binomial distribution are given by 6 and 2.4 respectively. Find P(X > 2).

Solution

Given that it is a binomial distribution with mean = Np = 6 and variance = Npq = 2.4.

Then 6q = 2.4, so q = 2.4/6 = 0.4

and p = 1 − q = 1 − 0.4 = 0.6

N = 6/p = 6/0.6 = 10

The probability

P(X > 2) = 1 − P[X ≤ 2]

= 1 − Σ_{x=0}^{2} ¹⁰C_x p^x q^(10−x)

= 1 − [0.4¹⁰ + 10 × 0.6 × 0.4⁹ + 45 × 0.6² × 0.4⁸]

= 1 − [1.049 × 10⁻⁴ + 1.573 × 10⁻³ + 0.0106]

P(X > 2) = 1 − 0.0123 = 0.9877
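The tail probability can be verified with a short computation (not part of the text), recovering N and p from the given mean and variance first.

```python
from math import comb

# recover parameters: Np = 6, Npq = 2.4  ->  q = 0.4, p = 0.6, N = 10
N, p = 10, 0.6
q = 1 - p
p_le_2 = sum(comb(N, x) * p ** x * q ** (N - x) for x in range(3))
p_gt_2 = 1 - p_le_2
print(round(p_gt_2, 4))  # -> 0.9877
```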

Example 3.18

The probability mass function of a discrete random variable is given as

x:      −1/2    1/2
P(x):    3/4    1/4

(a) Find the moment generating function and the moments, and (b) the moments for equiprobable events.

Solution

(a) The moment generating function for a discrete random variable is

M_X(v) = E[e^(vX)] = Σ_i e^(vx_i) P(x_i)

From the given table,

M_X(v) = (3/4) e^(−v/2) + (1/4) e^(v/2) = (1/4)[3 e^(−v/2) + e^(v/2)]

Using the infinite series expansion e^(±v/2) = 1 ± (v/2) + (v/2)²/2! ± (v/2)³/3! + …,

M_X(v) = Σ_{n=0}^{∞} (1/2)ⁿ · [3(−1)ⁿ + 1]/4 · vⁿ/n!

But we know that

M_X(v) = 1 + v m₁ + (v²/2!) m₂ + (v³/3!) m₃ + … + (vⁿ/n!) m_n + …

Equating the nth-order terms of M_X(v),

m_n = (1/2)ⁿ [3(−1)ⁿ + 1]/4

The moments are

m₁ = (1/2)(1/4 − 3/4) = −1/4
m₂ = (1/4)(3/4 + 1/4) = 1/4
m₃ = (1/8)(1/4 − 3/4) = −1/16
m₄ = (1/16)(3/4 + 1/4) = 1/16, and so on.

(b) From the nth moment, it is observed that

m_n = x₁ⁿ P(x₁) + x₂ⁿ P(x₂)

If P(x₁) = P(x₂) = 1/2 (equiprobable), then

m_n = (1/2)ⁿ [(−1)ⁿ + 1]/2

Hence

m_n = (1/2)ⁿ,  n even
m_n = 0,       n odd

so m₁ = m₃ = 0, m₂ = 1/4, m₄ = 1/16, …
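The closed-form moments can be checked against the direct expectation over the two-point pmf. This sketch (not part of the text) compares both expressions for the first few orders.

```python
# two-point pmf from Example 3.18: x = -1/2 w.p. 3/4, x = +1/2 w.p. 1/4
values = {-0.5: 0.75, 0.5: 0.25}

def moment_direct(n):
    return sum((x ** n) * p for x, p in values.items())

def moment_formula(n):
    # m_n = (1/2)^n [3(-1)^n + 1] / 4
    return (0.5 ** n) * (3 * (-1) ** n + 1) / 4

errs = [abs(moment_direct(n) - moment_formula(n)) for n in range(1, 7)]
print(moment_direct(1), moment_direct(2), errs)
```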

Example 3.19

If the pdf of a random variable X is given by f_X(x) = b e^(−a|x|), where a and b are real positive constants, find the moment generating function, mean and variance.

Solution

Given f_X(x) = b e^(−a|x|).

(i) The moment generating function is

M_X(v) = E[e^(vX)] = b [∫_{−∞}^{0} e^(vx) e^(ax) dx + ∫_{0}^{∞} e^(vx) e^(−ax) dx]

= b [∫_{−∞}^{0} e^((a+v)x) dx + ∫_{0}^{∞} e^(−(a−v)x) dx]

= b [1/(a + v) + 1/(a − v)]

M_X(v) = 2ab/(a² − v²),   |v| < a

(ii) The mean value is

m₁ = dM_X(v)/dv |_(v=0) = 2ab · 2v/(a² − v²)² |_(v=0) = 0

(iii) The second moment is

m₂ = d²M_X(v)/dv² |_(v=0) = 4ab [1/(a² − v²)² + 4v²/(a² − v²)³] |_(v=0) = 4b/a³

so the variance is

σ_X² = m₂ − m₁² = 4b/a³

(For a valid pdf, b = a/2; then M_X(v) = a²/(a² − v²) and σ_X² = 2/a².)

Example 3.20

Find the mgf for a discrete random variable X = [X₁, X₂, …, X_n] with pmf P(x_i), i = 1, 2, …, n, and hence show that the nth moment is

m_n = x₁ⁿ P(x₁) + x₂ⁿ P(x₂) + … + x_nⁿ P(x_n)

Solution

The moment generating function for a discrete random variable is

M_X(v) = E[e^(vX)] = Σ_i e^(vx_i) P(x_i) = e^(vx₁) P(x₁) + e^(vx₂) P(x₂) + e^(vx₃) P(x₃) + …

Using the infinite series expansion of each exponential, and since P(x₁) + P(x₂) + … = 1,

M_X(v) = 1 + v[x₁P(x₁) + x₂P(x₂) + …]
       + (v²/2!)[x₁²P(x₁) + x₂²P(x₂) + …]
       + … + (vⁿ/n!)[x₁ⁿP(x₁) + x₂ⁿP(x₂) + …] + …

We know that

M_X(v) = 1 + v m₁ + (v²/2!) m₂ + (v³/3!) m₃ + … + (vⁿ/n!) m_n + …

Comparing the above two equations, the nth-order moment is

m_n = x₁ⁿ P(x₁) + x₂ⁿ P(x₂) + x₃ⁿ P(x₃) + … + x_nⁿ P(x_n)

Proved.

Example 3.21

If f_X(x) = (1/√(2π)) e^(−x²/2), find the density function for Y = X²/9.

Solution

Given f_X(x) = (1/√(2π)) e^(−x²/2) and the transformation Y = X²/9.

Then x = ±3√y, so the roots are

x₁ = 3√y   and   x₂ = −3√y

So,

|dx₁/dy| = |dx₂/dy| = 3/(2√y)

We know that the density function of a transformed random variable Y is

f_Y(y) = f_X(x₁)|dx₁/dy| + f_X(x₂)|dx₂/dy|

= (3/(2√y)) (1/√(2π)) e^(−9y/2) + (3/(2√y)) (1/√(2π)) e^(−9y/2)

f_Y(y) = (3/√(2πy)) e^(−9y/2),   y > 0
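The transformed density can be sanity-checked two ways (not part of the text): Monte Carlo samples of Y = X²/9 should have mean E[X²]/9 = 1/9, and the closed-form f_Y should integrate to 1. The substitution y = u² tames the integrable 1/√y singularity at the origin.

```python
import math
import random

random.seed(0)
n = 200000
ys = [(random.gauss(0, 1) ** 2) / 9 for _ in range(n)]
mc_mean = sum(ys) / n  # theoretical mean: E[X^2]/9 = 1/9

# integral of f_Y over (0, inf) via midpoint rule with y = u^2, dy = 2u du
du = 5.0 / 100000
total = 0.0
for k in range(100000):
    u = (k + 0.5) * du
    y = u * u
    fy = 3 / math.sqrt(2 * math.pi * y) * math.exp(-9 * y / 2)
    total += fy * 2 * u * du

print(mc_mean, total)
```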

Operations on Multiple Random Variables

More Solved Examples

Example 5.24

Random variables X and Y have the joint density f_{X,Y}(x,y) = 1/24, 0 < x < 6, 0 < y < 4. What is the expected value of the function g(X,Y) = (XY)²? (Nov 2008, May 2010, May 2011)

Solution

Given the joint density function

f_{X,Y}(x,y) = 1/24,   0 < x < 6, 0 < y < 4

The expected value of the function g(X,Y) = (XY)² is

E[g(X,Y)] = ∫∫ g(x,y) f_{X,Y}(x,y) dx dy

= (1/24) ∫₀⁶ x² dx ∫₀⁴ y² dy

= (1/24) [x³/3]₀⁶ [y³/3]₀⁴

= (1/24) × (216/3) × (64/3) = 64
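The value 64 can be confirmed by a brute-force double midpoint sum over the rectangle (a sketch, not part of the text).

```python
# Midpoint-rule evaluation of E[(XY)^2] for the uniform joint density 1/24 on (0,6)x(0,4)
n = 400
dx, dy = 6 / n, 4 / n
total = 0.0
for i in range(n):
    x = (i + 0.5) * dx
    for j in range(n):
        y = (j + 0.5) * dy
        total += (x * y) ** 2 * (1 / 24) * dx * dy
print(total)
```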

Example 5.25

Two random variables X and Y have the joint characteristic function

φ_{XY}(ω₁, ω₂) = exp(−2ω₁² − 8ω₂²)

(i) Show that X and Y are zero-mean random variables.
(ii) Are X and Y correlated? (Nov 2008, May 2010, Nov 2010)

Solution

Given the characteristic function φ_{XY}(ω₁, ω₂) = exp(−2ω₁² − 8ω₂²).

The joint moments of order n + k are

m_{nk} = (−j)^(n+k) ∂^(n+k) φ_{XY}(ω₁, ω₂) / ∂ω₁ⁿ ∂ω₂ᵏ |_(ω₁ = ω₂ = 0)

(i) The first order moments are

m₁₀ = E[X] = (−j) ∂φ_{XY}/∂ω₁ |_(ω₁ = ω₂ = 0) = (−j)(−4ω₁) exp(−2ω₁² − 8ω₂²) |_(ω₁ = ω₂ = 0) = 0

∴ The mean value of X is zero.

m₀₁ = E[Y] = (−j) ∂φ_{XY}/∂ω₂ |_(ω₁ = ω₂ = 0) = (−j)(−16ω₂) exp(−2ω₁² − 8ω₂²) |_(ω₁ = ω₂ = 0) = 0

∴ The mean value of Y is zero.

(ii) The correlation of X and Y is

R_XY = E[XY] = m₁₁ = (−j)² ∂²φ_{XY}/∂ω₁∂ω₂ |_(ω₁ = ω₂ = 0)

= −(−4ω₁)(−16ω₂) exp(−2ω₁² − 8ω₂²) |_(ω₁ = ω₂ = 0) = 0

m₁₁ = 0 and C_XY = R_XY − E[X]E[Y] = 0

∴ X and Y are uncorrelated.

Example 5.26

Show that two random variables X₁ and X₂ with joint pdf

f_{X₁X₂}(x₁, x₂) = 1/16,   −4 < x₁ < 4 and 2 < x₂ < 4
                 = 0,     elsewhere

are independent and orthogonal. (May 2009)

Solution

Given the joint pdf as above, the marginal density functions are

f_{X₁}(x₁) = ∫ f_{X₁X₂}(x₁, x₂) dx₂ = ∫₂⁴ (1/16) dx₂ = (4 − 2)/16 = 1/8,   −4 < x₁ < 4

and

f_{X₂}(x₂) = ∫_{−4}^{4} (1/16) dx₁ = (4 + 4)/16 = 1/2,   2 < x₂ < 4

Since f_{X₁}(x₁) f_{X₂}(x₂) = (1/8)(1/2) = 1/16 = f_{X₁X₂}(x₁, x₂),

the two random variables are statistically independent. For orthogonality, the condition is E[X₁X₂] = 0.

Now

E[X₁X₂] = ∫∫ x₁x₂ f_{X₁X₂}(x₁, x₂) dx₁ dx₂ = (1/16) [x₁²/2]_{−4}^{4} [x₂²/2]₂⁴ = (1/32)[16 − 16][16 − 4]/2 = 0

E[X₁X₂] = 0

∴ X₁ and X₂ are orthogonal. Hence the random variables X₁ and X₂ are independent and orthogonal.
Example 5.27

If X and Y are two independent random variables such that E[X] = λ₁, variance of X = σ₁², E[Y] = λ₂ and variance of Y = σ₂², prove that

var[XY] = σ₁²σ₂² + λ₁²σ₂² + λ₂²σ₁²   (May 2010)

Solution

Given E[X] = λ₁, variance of X = σ₁², E[Y] = λ₂, variance of Y = σ₂².

Now the variance of XY is

var[XY] = E[(XY)²] − E[XY]² = E[X²Y²] − E[XY]E[XY]

Since X and Y are independent,

E[XY] = E[X]E[Y]   and   E[X²Y²] = E[X²]E[Y²]

We know that

variance of X = σ₁² = E[X²] − E[X]², so E[X²] = σ₁² + λ₁²

and variance of Y = σ₂² = E[Y²] − E[Y]², so E[Y²] = σ₂² + λ₂²

So,

var[XY] = (σ₁² + λ₁²)(σ₂² + λ₂²) − λ₁²λ₂²

var[XY] = σ₁²σ₂² + λ₁²σ₂² + λ₂²σ₁²

Proved.

Example 5.28

Three statistically independent random variables X₁, X₂ and X₃ have mean values X̄₁ = 3, X̄₂ = 6 and X̄₃ = −2. Find the mean values of the following functions:

(a) g(X₁, X₂, X₃) = X₁ + 3X₂ + 4X₃
(b) g(X₁, X₂, X₃) = X₁X₂X₃
(c) g(X₁, X₂, X₃) = −2X₁X₂ − 3X₁X₃ + 4X₂X₃
(d) g(X₁, X₂, X₃) = X₁ + X₂ + X₃   (Nov 2007, May 2009)

Solution

Given X₁, X₂ and X₃ are statistically independent, with X̄₁ = 3, X̄₂ = 6, X̄₃ = −2.

(a) E[g] = E[X₁ + 3X₂ + 4X₃] = E[X₁] + 3E[X₂] + 4E[X₃] = 3 + 3 × 6 + 4 × (−2) = 13

(b) Since X₁, X₂ and X₃ are independent,

E[X₁X₂X₃] = E[X₁]E[X₂]E[X₃] = 3 × 6 × (−2) = −36

(c) E[−2X₁X₂ − 3X₁X₃ + 4X₂X₃]

= −2E[X₁]E[X₂] − 3E[X₁]E[X₃] + 4E[X₂]E[X₃]

= −2 × 3 × 6 − 3 × 3 × (−2) + 4 × 6 × (−2) = −36 + 18 − 48 = −66

(d) E[X₁ + X₂ + X₃] = E[X₁] + E[X₂] + E[X₃] = 3 + 6 − 2 = 7
Example 5.29

A joint density is given as

f_{X,Y}(x,y) = x(y + 1.5),   0 < x < 1 and 0 < y < 1
             = 0,           elsewhere

Find all the joint moments m_{nk}, where n and k = 0, 1, 2, … (May 2010)

Solution

Given the joint density above, the joint moments m_{nk} are

m_{nk} = E[XⁿYᵏ] = ∫∫ xⁿyᵏ f_{X,Y}(x,y) dx dy

= ∫₀¹ x^(n+1) dx ∫₀¹ (y^(k+1) + 1.5yᵏ) dy

= [1/(n + 2)] · [1/(k + 2) + 1.5/(k + 1)]

m_{nk} = (5k + 8) / [2(n + 2)(k + 1)(k + 2)],   n and k = 0, 1, 2, …

The moments are, for example,

m₀₀ = 1, m₁₀ = 2/3, m₀₁ = 13/24, m₁₁ = 13/36, m₂₀ = 1/2, …
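The general formula factorizes, so it can be checked with two one-dimensional midpoint sums (a sketch, not part of the text).

```python
def m_formula(n, k):
    return (5 * k + 8) / (2 * (n + 2) * (k + 1) * (k + 2))

def m_numeric(n, k, steps=20000):
    # m_nk = (int_0^1 x^{n+1} dx) * (int_0^1 y^k (y + 1.5) dy), midpoint rule
    d = 1 / steps
    sx = sum(((i + 0.5) * d) ** (n + 1) for i in range(steps)) * d
    sy = sum(((i + 0.5) * d) ** k * (((i + 0.5) * d) + 1.5) for i in range(steps)) * d
    return sx * sy

errs = [abs(m_numeric(n, k) - m_formula(n, k)) for n in range(3) for k in range(3)]
print(m_formula(1, 1), errs)
```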

Example 5.30

Statistically independent random variables X and Y have moments m₁₀ = 2, m₂₀ = 14, m₀₂ = 12 and m₁₁ = −6. Find the moment μ₂₂. (Feb 2007, Feb 2008, May 2011)

Solution

Given the moments m₁₀ = 2, m₂₀ = 14, m₀₂ = 12 and m₁₁ = −6.

The moment μ₂₂ is a central moment and is given by

μ₂₂ = E[(X − X̄)²(Y − Ȳ)²]

Since X and Y are independent,

μ₂₂ = [E[X²] − E[X]²][E[Y²] − E[Y]²] = (m₂₀ − m₁₀²)(m₀₂ − m₀₁²)

We also know that, by independence, m₁₁ = m₁₀ m₀₁, or

m₀₁ = m₁₁/m₁₀ = −6/2 = −3

∴ μ₂₂ = (14 − 2²)(12 − (−3)²) = (10)(3)

μ₂₂ = 30

Example 5.31

For two random variables X and Y,

f_{XY}(x,y) = 0.3δ(x + 1)δ(y) + 0.1δ(x)δ(y) + 0.1δ(x)δ(y − 2)
            + 0.15δ(x − 1)δ(y + 2) + 0.2δ(x − 1)δ(y − 1) + 0.15δ(x − 1)δ(y − 3)

Find
(a) The correlation
(b) The covariance
(c) The correlation coefficient of X and Y
(d) Are X and Y either uncorrelated or orthogonal? (Nov 2006, May 2009)

Solution

Given X and Y are discrete random variables. The density function f_{XY}(x,y) corresponds to a probability mass function P_{XY}(x,y), with the values put in Table 5.4.

Table 5.4 Probability P_{XY}(x,y)

(x,y):        (−1,0)  (0,0)  (0,2)  (1,−2)  (1,1)  (1,3)
P_{XY}(x,y):   0.3     0.1    0.1    0.15    0.2    0.15

(a) The correlation of X and Y is

R_XY = E[XY] = ΣΣ xy P_{XY}(x,y)

= (1)(−2)(0.15) + (1)(1)(0.2) + (1)(3)(0.15) = −0.3 + 0.2 + 0.45

∴ R_XY = 0.35

(b) The covariance of X and Y is C_XY = R_XY − E[X]E[Y].

The marginal pmf of X is

P_X(x) = 0.3δ(x + 1) + 0.2δ(x) + 0.5δ(x − 1)

E[X] = (0.3)(−1) + (0.2)(0) + (0.5)(1) = −0.3 + 0 + 0.5 = 0.2

The marginal pmf of Y is

P_Y(y) = 0.4δ(y) + 0.1δ(y − 2) + 0.15δ(y + 2) + 0.2δ(y − 1) + 0.15δ(y − 3)

E[Y] = 0.4(0) + 0.1(2) + 0.15(−2) + 0.2(1) + 0.15(3) = 0 + 0.2 − 0.3 + 0.2 + 0.45 = 0.55

C_XY = R_XY − E[X]E[Y] = 0.35 − (0.2)(0.55) = 0.24

(c) The correlation coefficient of X and Y is ρ_XY = C_XY/(σ_X σ_Y).

Now

E[X²] = (0.3)(−1)² + (0.2)(0) + (0.5)(1)² = 0.3 + 0.5 = 0.8

E[Y²] = (0.4)(0) + (0.1)(2)² + (0.15)(−2)² + (0.2)(1)² + (0.15)(3)² = 0.4 + 0.6 + 0.2 + 1.35 = 2.55

Then

σ_X² = E[X²] − E[X]² = 0.8 − 0.2² = 0.76

σ_Y² = E[Y²] − E[Y]² = 2.55 − 0.55² = 2.2475

ρ_XY = 0.24 / (√0.76 √2.2475) = 0.24/1.307 = 0.1836

(d) Since C_XY = 0.24 ≠ 0 and R_XY = 0.35 ≠ 0, the given random variables are neither orthogonal nor uncorrelated.
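The discrete computation is easy to replicate in a few lines (a sketch, not part of the text): store the mass points in a dictionary and take expectations directly.

```python
import math

pmf = {(-1, 0): 0.3, (0, 0): 0.1, (0, 2): 0.1,
       (1, -2): 0.15, (1, 1): 0.2, (1, 3): 0.15}
EX = sum(x * p for (x, y), p in pmf.items())
EY = sum(y * p for (x, y), p in pmf.items())
EXY = sum(x * y * p for (x, y), p in pmf.items())
EX2 = sum(x * x * p for (x, y), p in pmf.items())
EY2 = sum(y * y * p for (x, y), p in pmf.items())
C = EXY - EX * EY
rho = C / math.sqrt((EX2 - EX ** 2) * (EY2 - EY ** 2))
print(EXY, C, round(rho, 4))
```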

Example 5.32

Two random variables X and Y have means X̄ = 1 and Ȳ = 2, variances σ_X² = 4 and σ_Y² = 1, and ρ_XY = 0.4. New random variables V and W are defined by

V = −X + 2Y   and   W = X + 3Y

Find (a) the means, (b) the variances, (c) the correlations and (d) the correlation coefficient ρ_VW of V and W.

Solution

Given X̄ = 1, Ȳ = 2, σ_X² = 4, σ_Y² = 1, ρ_XY = 0.4, V = −X + 2Y and W = X + 3Y.

(a) Mean values

E[V] = E[−X + 2Y] = −E[X] + 2E[Y] = −1 + 2 × 2 = 3

E[W] = E[X + 3Y] = E[X] + 3E[Y] = 1 + 3 × 2 = 7

(b) Variances

σ_V² = E[V²] − E[V]² = E[(−X + 2Y)²] − 3² = E[X² + 4Y² − 4XY] − 9

Now we know that

E[X²] = σ_X² + E[X]² = 4 + 1 = 5

and E[Y²] = σ_Y² + E[Y]² = 1 + 4 = 5

Also ρ_XY = C_XY/(σ_X σ_Y), where C_XY = E[XY] − E[X]E[Y], so

E[XY] = ρ_XY σ_X σ_Y + E[X]E[Y] = 0.4 × 2 × 1 + 1 × 2 = 2.8

Substituting the values,

σ_V² = 5 + 4 × 5 − 4 × 2.8 − 9 = 4.8

and

σ_W² = E[W²] − E[W]² = E[(X + 3Y)²] − 7² = E[X² + 9Y² + 6XY] − 49

= 5 + 9 × 5 + 6 × 2.8 − 49 = 17.8

(c) Correlation

E[VW] = E[(−X + 2Y)(X + 3Y)] = E[−X² − XY + 6Y²]

= −E[X²] − E[XY] + 6E[Y²] = −5 − 2.8 + 30 = 22.2

(d) Correlation coefficient

ρ_VW = C_VW/(σ_V σ_W) = (E[VW] − E[V]E[W])/(σ_V σ_W)

= (22.2 − 3 × 7)/(√4.8 √17.8) = 1.2/9.243 = 0.13
Example 5.33

Two random variables X and Y have the density function

f_{X,Y}(x,y) = (2/43)(x + 0.5y)²,   0 < x < 2 and 0 < y < 3
             = 0,                   elsewhere

(i) Find the first and second order moments.
(ii) Find the covariance.
(iii) Are X and Y uncorrelated? (May 2011)

Solution

(i) The joint moments m_{nk} are

m_{nk} = E[XⁿYᵏ] = (2/43) ∫₀² ∫₀³ xⁿyᵏ (x + 0.5y)² dx dy

Now the first order moments are

m₁₀ = (2/43)[12 + 12 + 4.5] = 57/43 = 1.326

and

m₀₁ = (2/43)[12 + 18 + 81/8] = 1.866

Second order moments:

m₂₀ = (2/43)[96/5 + 18 + 6] = 2.009

m₀₂ = (2/43)[24 + 81/2 + 243/10] = 4.13

m₁₁ = (2/43)[18 + 24 + 81/8] = 2.424

(ii) Covariance

C_XY = μ₁₁ = E[(X − X̄)(Y − Ȳ)] = m₁₁ − m₁₀ m₀₁

C_XY = 2.424 − (1.326)(1.866) = −0.05

(iii) Since the covariance C_XY ≠ 0, X and Y are not uncorrelated.
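These moments can be confirmed by direct two-dimensional numerical integration (a sketch, not part of the text); the covariance follows from m₁₁ − m₁₀m₀₁.

```python
def moment(n, k, sx=400, sy=600):
    # midpoint-rule evaluation of E[X^n Y^k] for f(x,y) = (2/43)(x + 0.5y)^2
    dx, dy = 2 / sx, 3 / sy
    total = 0.0
    for i in range(sx):
        x = (i + 0.5) * dx
        for j in range(sy):
            y = (j + 0.5) * dy
            total += x ** n * y ** k * (2 / 43) * (x + 0.5 * y) ** 2 * dx * dy
    return total

m00, m10, m01, m11 = moment(0, 0), moment(1, 0), moment(0, 1), moment(1, 1)
cov = m11 - m10 * m01
print(m00, m10, m01, cov)
```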
Example 5.34

If X, Y and Z are uncorrelated, independent random variables with the same variance σ² and zero mean, find the correlation coefficient between (X + Y) and (Y + Z).

Solution

Given E[X] = E[Y] = E[Z] = 0 and σ_X² = σ_Y² = σ_Z² = σ².

So,

E[X²] = E[Y²] = E[Z²] = σ²

Since X, Y, Z are independent and zero mean,

E[XY] = E[YZ] = E[XZ] = 0   and   E[X + Y] = E[Y + Z] = 0

Now the variance of X + Y is

σ²_{X+Y} = E[(X + Y)²] − E[X + Y]² = E[X²] + 2E[XY] + E[Y²] = 2σ²

and similarly

σ²_{Y+Z} = E[(Y + Z)²] − E[Y + Z]² = E[Y²] + E[Z²] = 2σ²

The covariance is

cov[(X + Y), (Y + Z)] = E[(X + Y)(Y + Z)] − E[X + Y]E[Y + Z]

= E[XY] + E[Y²] + E[XZ] + E[YZ] = σ²

∴ The correlation coefficient between (X + Y) and (Y + Z) is

ρ_{(X+Y)(Y+Z)} = cov[(X + Y), (Y + Z)] / (σ_{X+Y} σ_{Y+Z}) = σ²/(√(2σ²)√(2σ²)) = σ²/2σ² = 1/2


Example 5.35

The joint density function of two random variables is given by

f_{XY}(x,y) = 18y²/x³,   for 2 < x < ∞ and 0 < y < 1

Find (a) E[X], (b) E[Y] and (c) E[XY].

Solution

Given f_{X,Y}(x,y) = 18y²/x³, 2 < x < ∞ and 0 < y < 1.

The marginal density functions are

f_X(x) = ∫₀¹ (18y²/x³) dy = (18/x³)[y³/3]₀¹

f_X(x) = 6/x³,   2 < x < ∞

and

f_Y(y) = ∫₂^∞ (18y²/x³) dx = 18y² [−1/2x²]₂^∞ = 18y²/8

f_Y(y) = (9/4)y²,   0 < y < 1

(a) The mean of X is

E[X] = ∫₂^∞ x f_X(x) dx = ∫₂^∞ x (6/x³) dx = 6[−1/x]₂^∞

E[X] = 3

(b) The mean of Y is

E[Y] = ∫₀¹ y f_Y(y) dy = ∫₀¹ (9/4)y³ dy = (9/4)[y⁴/4]₀¹

E[Y] = 9/16 = 0.5625

(c) The mean of XY is

E[XY] = ∫₂^∞ ∫₀¹ xy f_{X,Y}(x,y) dx dy = 18 ∫₂^∞ (dx/x²) ∫₀¹ y³ dy

= 18 × (1/2) × (1/4)

E[XY] = 2.25

Example 5.36

Two random variables have a uniform density f_{XY}(x,y) over a circular region, and zero elsewhere.

Multiple Random Variables

Now

f_X(x) = ∫₀^∞ e^(−x) e^(−y) dy · u(x) = e^(−x) u(x)

Similarly,

f_Y(y) = e^(−y) u(y)

f_{X,Y}(x,y) = f_X(x) f_Y(y)

∴ X and Y are statistically independent.

Example 4.7

If the joint pdf of X and Y is

f_{X,Y}(x,y) = (1/2πσ²) e^(−(x² + y²)/2σ²)

find the mass (probability) in the circular region x² + y² ≤ a² shown in Fig. 4.5.

Solution

[Fig. 4.5: the circular region x² + y² = a²]

P_region = ∫∫_region f_{X,Y}(x,y) dx dy

Converting x, y into polar coordinates, with r ≤ a and 0 ≤ θ ≤ 2π:

x = r cos θ,  y = r sin θ,  r² = x² + y²,  dx dy = r dr dθ

P_region = (1/2πσ²) ∫₀^{2π} ∫₀^{a} r e^(−r²/2σ²) dr dθ

= (2π/2πσ²) [−σ² e^(−r²/2σ²)]₀^{a}

∴ The mass in the given region is 1 − e^(−a²/2σ²).

Additional Problems

Example 4.8

(a) Find a constant b (in terms of a) so that the function

f_{X,Y}(x,y) = b e^(−(x+y)),   0 < x < a and 0 < y < ∞
             = 0,              elsewhere

is a valid joint density function.

Solution

Given the joint density function

f_{X,Y}(x,y) = b e^(−(x+y)),   0 < x < a and 0 < y < ∞; 0 elsewhere

Since it is a valid density function,

∫₀^{a} ∫₀^{∞} f_{X,Y}(x,y) dx dy = 1

b [−e^(−x)]₀^{a} [−e^(−y)]₀^{∞} = b[1 − e^(−a)][1]


The limits are:

If x = 1, y = 2(1) − 3 = −1

If x = 5, y = 2(5) − 3 = 7

f_Y(y) = (1/48)(y + 3),   −1 < y < 7
       = 0,               elsewhere

Questions

(1) State and prove the properties of variance of a random variable. (Nov 2006, Feb 2007)
(2) Explain the following: (i) variance and (ii) skew.
(3) Find the nth moment of a uniform random variable and hence its mean. (Feb 2008)
(4) State and prove the properties of the characteristic function of a random variable X. (Nov 2007)
(5) Define the moment generating function of a random variable. (Nov 2007, Feb 2008)
(6) State the properties of a moment generating function. (Nov 2007, Feb 2008)
(7) Find the moment generating function about the origin of Poisson's distribution. (Nov 2007, Feb 2008)
(8) Explain transformations. (May/June 2004)
(9) Explain the concept of transformation of a random variable X. (Nov 2007)

Problems

3.1 Let X be a random variable which can take on the values 1, 2 and 3 with respective probabilities 1/3, 1/6 and 1/2. Find its 3rd moment about the mean.
3.2 A random variable X has distribution P(X = 0) = P(X = 2) = p and P(X = 1) = 1 − 2p for 0 ≤ p ≤ 1/2. For what value of p will X have maximum variance?
3.3 Calculate the coefficient of variation between X and Y from the data shown in Table P3.1.

Now dy = sin θ dθ, or |dθ/dy| = 1/sin θ.

Example 3.50

A random variable X is uniformly distributed in (0, 6). If X is transformed to a new random variable Y = 2(X − 3)² − 4, find (i) the density of Y, (ii) Ȳ and (iii) σ_Y². (Nov 2010)

Solution

Given

f_X(x) = 1/6,   0 < x < 6
       = 0,     otherwise

The transformed variable is

Y = 2(X − 3)² − 4 = 2(X² + 9 − 6X) − 4 = 2X² − 12X + 14

or

X² − 6X + 7 − Y/2 = 0

The roots are

X = [6 ± √(36 − 28 + 2Y)]/2 = 3 ± (1/2)√(8 + 2Y)

x₁ = 3 + 0.707√(y + 4),   x₂ = 3 − 0.707√(y + 4)

(i)

|dx₁/dy| = 0.707/(2√(y + 4)),   |dx₂/dy| = 0.707/(2√(y + 4))

We know that the density function of Y is

f_Y(y) = f_X(x₁)|dx₁/dy| + f_X(x₂)|dx₂/dy| = [0.707/(2√(y + 4))](1/6 + 1/6)

The limits for y: for x = 0 or x = 6, y = 14, and for x = 3, y = −4.

f_Y(y) = 0.1178/√(y + 4),   −4 < y < 14

(ii) The mean of Y is

Ȳ = ∫_{−4}^{14} y f_Y(y) dy = 0.1178 ∫_{−4}^{14} y (y + 4)^(−1/2) dy

With u = y + 4,

Ȳ = 0.1178 [(2/3)(y + 4)^(3/2) − 8(y + 4)^(1/2)]_{−4}^{14} = 0.1178[50.91 − 33.94]

Ȳ = 2

(iii) Since Y = 2(X − 3)² − 4 with X − 3 uniform on (−3, 3), E[(X − 3)²] = 3 and E[(X − 3)⁴] = 81/5, so

E[Y²] = 4E[(X − 3)⁴] − 16E[(X − 3)²] + 16 = 4(81/5) − 48 + 16 = 32.8

σ_Y² = E[Y²] − Ȳ² = 32.8 − 4 = 28.8

Example 3.51

Let X be a continuous random variable with pdf

f_X(x) = x/12,   1 < x < 5
       = 0,      elsewhere

Find the probability density function of Y = 2X − 3. (May 2011)

Solution

Given f_X(x) = x/12, 1 < x < 5, and Y = 2X − 3, or

X = (Y + 3)/2,   so   |dx/dy| = 1/2

Now the pdf of Y is

f_Y(y) = f_X(x)|dx/dy| = (1/12) · ((y + 3)/2) · (1/2)

f_Y(y) = (1/48)(y + 3)

Therefore b[1 − e^(−a)] = 1, so

b = 1/(1 − e^(−a)),   or equivalently   a = ln[b/(b − 1)]

Example 4.9

Discrete random variables X and Y have the joint distribution function

F_{X,Y}(x,y) = 0.1u(x + 4)u(y − 1) + 0.15u(x + 3)u(y − 1) + 0.22u(x)u(y − 3)
             + 0.18u(x − 2)u(y + 2) + 0.23u(x − 4)u(y + 2) + 0.12u(x − 4)u(y + 3)

Find (a) the marginal distribution functions F_X(x) and F_Y(y); (b) P{−1 < X ≤ 4, −3 < Y ≤ 3}.

Solution

(a) The marginal distribution functions are

F_X(x) = F_{X,Y}(x, ∞)

= 0.1u(x + 4) + 0.15u(x + 3) + 0.22u(x) + 0.18u(x − 2) + 0.23u(x − 4) + 0.12u(x − 4)

F_X(x) = 0.1u(x + 4) + 0.15u(x + 3) + 0.22u(x) + 0.18u(x − 2) + 0.35u(x − 4)

and

F_Y(y) = F_{X,Y}(∞, y)

= 0.1u(y − 1) + 0.15u(y − 1) + 0.22u(y − 3) + 0.18u(y + 2) + 0.23u(y + 2) + 0.12u(y + 3)

F_Y(y) = 0.25u(y − 1) + 0.22u(y − 3) + 0.41u(y + 2) + 0.12u(y + 3)

[Fig. 4.6: staircase plots of the marginal distribution functions (a) F_X(x) and (b) F_Y(y)]

(b) P{−1 < X ≤ 4, −3 < Y ≤ 3} = 0.22 + 0.18 + 0.23 = 0.63
Example 4.10

Show that the function

G_{X,Y}(x,y) = 0,   x < y
             = 1,   x ≥ y

cannot be a valid joint distribution function.

Solution

For a valid joint distribution function, the joint probability of any rectangle must be non-negative:

P{x₁ < X ≤ x₂, y₁ < Y ≤ y₂} = G_{X,Y}(x₂, y₂) + G_{X,Y}(x₁, y₁) − G_{X,Y}(x₁, y₂) − G_{X,Y}(x₂, y₁) ≥ 0

Consider two points (x₁, y₁) and (x₂, y₂) in the sample space such that x₁ < y₁, x₂ < y₂ and x₂ ≥ y₁. From the given function,

G_{X,Y}(x₁, y₁) = 0   since x₁ < y₁
G_{X,Y}(x₂, y₂) = 0   since x₂ < y₂
G_{X,Y}(x₁, y₂) = 0   since x₁ < y₂
G_{X,Y}(x₂, y₁) = 1   since x₂ ≥ y₁

∴ P{x₁ < X ≤ x₂, y₁ < Y ≤ y₂} = 0 + 0 − 0 − 1 = −1

A probability of −1 is not valid. Hence the given function is not a valid distribution function.

Example 4.11

The joint probability density function of X and Y is

f_{X,Y}(x,y) = 1/(ab),   0 < x < a and 0 < y < b
             = 0,        elsewhere

(a) Find and sketch F_{X,Y}(x,y).
(b) If a < b, find P{X + Y ≤ 3a/4}. (May 2011, May 2010)

Solution

(a) The distribution function is

F_{X,Y}(x,y) = ∫₀^{y} ∫₀^{x} (1/ab) dx dy = xy/(ab),   0 < x < a and 0 < y < b

If 0 < x < a and y ≥ b, then F_{X,Y}(x,y) = xb/(ab) = x/a

If 0 < y < b and x ≥ a, then F_{X,Y}(x,y) = ay/(ab) = y/b

If y ≥ b and x ≥ a, then F_{X,Y}(x,y) = ab/(ab) = 1

If y < 0 or x < 0, then F_{X,Y}(x,y) = 0

The distribution function is

F_{X,Y}(x,y) = 0,        x < 0 or y < 0
             = xy/ab,   0 < x < a and 0 < y < b
             = x/a,     0 < x < a and y ≥ b
             = y/b,     x ≥ a and 0 < y < b
             = 1,       x ≥ a and y ≥ b

[Fig. 4.7: sketch of the joint distribution function F_{X,Y}(x,y)]

(b) If a < b, the line x + y = 3a/4, or x = 3a/4 − y, lies inside the rectangle.

∴ The probability is

P{X + Y ≤ 3a/4} = ∫_{y=0}^{3a/4} ∫_{x=0}^{3a/4 − y} (1/ab) dx dy

= (1/ab) ∫₀^{3a/4} (3a/4 − y) dy

= (1/ab) [(3a/4)y − y²/2]₀^{3a/4} = (1/ab)(9a²/32)

∴ P{X + Y ≤ 3a/4} = 9a/(32b)

Example 4.12

Determine a constant b such that the given function is a valid joint density function.

f_{X,Y}(x,y) = b(x² + 4y²),   0 ≤ |x| < 1 and 0 ≤ y < 2
             = 0,             elsewhere

Solution

Since f_{X,Y}(x,y) is a valid joint density function,

∫_{−1}^{1} ∫₀^{2} b(x² + 4y²) dy dx = 1

b ∫_{−1}^{1} [x²y + (4/3)y³]₀^{2} dx = b ∫_{−1}^{1} (2x² + 32/3) dx

= b [(2/3)x³ + (32/3)x]_{−1}^{1} = b[4/3 + 64/3]

68b/3 = 1

or b = 3/68

Example 4.13

Given the function

f_{X,Y}(x,y) = (x² + y²)/8π,   x² + y² < b
             = 0,              elsewhere

(a) Find a constant b so that this is a valid joint density function.
(b) Find P{0.2b < X² + Y² ≤ 0.6b}.

Solution

x² + y² < b represents the area of the xy plane within a circle of radius √b with centre at the origin. Converting (x,y) into polar coordinates, as shown in Fig. 4.8:

[Fig. 4.8: the circular area of the density function, radius √b]

r² = x² + y²,  θ = tan⁻¹(y/x),  dx dy = r dr dθ,   with r ≤ √b and 0 ≤ θ ≤ 2π

(a) Since the function is a valid density function,

∫∫ (x² + y²)/8π dx dy = ∫₀^{2π} ∫₀^{√b} (r²/8π) r dr dθ = (2π/8π)[r⁴/4]₀^{√b} = b²/16 = 1

So b² = 16, b = 4.

(b) The probability P{0.2b < X² + Y² ≤ 0.6b} is, after converting into polar coordinates (√(0.2b) < r ≤ √(0.6b)):

∫₀^{2π} ∫_{√(0.2b)}^{√(0.6b)} (r³/8π) dr dθ = (r⁴/32π) × 2π |_{√(0.2b)}^{√(0.6b)}

= (1/16)[(0.6b)² − (0.2b)²] = (b²/16)(0.36 − 0.04) = (16/16)(0.32)

∴ P{0.2b < X² + Y² ≤ 0.6b} = 0.32

Example 4.14

The joint density function of X and Y is given by

f_{X,Y}(x,y) = ax²y,   0 < y < x < 1
             = 0,      elsewhere

(a) Find a so that the function is a valid density function.
(b) Find the marginal density functions.

Solution

(a) Since it is a valid density function,

∫₀¹ ∫₀^{x} ax²y dy dx = a ∫₀¹ x² (x²/2) dx = (a/2)[x⁵/5]₀¹ = a/10 = 1

a = 10

(b) The marginal density functions are

f_X(x) = ∫₀^{x} 10x²y dy = 10x² (x²/2)

f_X(x) = 5x⁴,   0 < x < 1
       = 0,     otherwise

and

f_Y(y) = ∫_{y}^{1} 10x²y dx = (10/3)y [x³]_{y}^{1}

f_Y(y) = (10/3)y(1 − y³),   0 < y < 1
       = 0,                 otherwise

Example 4.15

The joint distribution function of a bivariate random variable (X, Y) is given by

F_{X,Y}(x,y) = 0,     x < 0 or y < 0
             = 0.2,   0 ≤ x < a, 0 ≤ y < b
             = 0.4,   x ≥ a,     0 ≤ y < b
             = 0.8,   0 ≤ x < a, y ≥ b
             = 1,     x ≥ a,     y ≥ b

Find the marginal distributions of X and Y.

Solution

The marginal distribution functions are

F_X(x) = F_{X,Y}(x, ∞) = 0,     x < 0
                        = 0.8,  0 ≤ x < a
                        = 1,    x ≥ a

and

F_Y(y) = F_{X,Y}(∞, y) = 0,     y < 0
                        = 0.4,  0 ≤ y < b
                        = 1,    y ≥ b

Example 4.16

The joint pdf of a bivariate random variable (X, Y) is given by

f_{X,Y}(x,y) = Kxy,   0 < x < y < 1
             = 0,     otherwise

where K is a constant.
(a) Find the value of K.
(b) Are X and Y independent?

Solution

(a) Since f_{X,Y}(x,y) is a valid density function, we know that

∫₀¹ ∫₀^{y} Kxy dx dy = K ∫₀¹ y (y²/2) dy = K[y⁴/8]₀¹ = K/8 = 1

K = 8

(b) The marginal pdfs are

f_X(x) = ∫_{x}^{1} 8xy dy = 8x [y²/2]_{x}^{1}

f_X(x) = 4x(1 − x²),   0 < x < 1
       = 0,            otherwise

and

f_Y(y) = ∫₀^{y} 8xy dx = 8y (y²/2)

f_Y(y) = 4y³,   0 < y < 1
       = 0,     otherwise

If X and Y were independent, then

f_X(x) f_Y(y) = 4x(1 − x²) · 4y³ = 16xy³(1 − x²) ≠ f_{X,Y}(x,y)

Therefore X and Y are not independent.


Example 4.17

Consider that the joint pdf of random variables X and Y is

f_{X,Y}(x,y) = (1/8)(x + y),   0 < x < 2 and 0 < y < 2
             = 0,              otherwise

Find (a) the conditional density functions and (b) P{(0 < Y < 1/2) | X = 1}.

Solution

(a) The marginal density functions are

f_X(x) = ∫₀² (1/8)(x + y) dy = (1/8)[xy + y²/2]₀² = (1/8)(2x + 2)

f_X(x) = (1/4)(x + 1),   0 < x < 2

Similarly, f_Y(y) = (1/4)(y + 1),   0 < y < 2

The conditional density functions are

f_{X,Y}(x | y) = f_{X,Y}(x,y)/f_Y(y) = (x + y)/(2(y + 1)),   0 < x < 2, 0 < y < 2
              = 0,                                            otherwise

and

f_{X,Y}(y | x) = f_{X,Y}(x,y)/f_X(x) = (x + y)/(2(x + 1)),   0 < x < 2, 0 < y < 2
              = 0,                                            otherwise

(b) P{(0 < Y < 1/2) | X = 1}:

f_{X,Y}(y | x = 1) = (1 + y)/(2(1 + 1)) = (1 + y)/4

P = (1/4) ∫₀^{1/2} (y + 1) dy = (1/4)[y²/2 + y]₀^{1/2} = (1/4)[1/8 + 1/2] = 5/32 = 0.156

Random Processes

Since E[X₁²] = E[X₂²] = 1 and E[X₁X₂] = 0,

Now

R_{Y₁Y₂}(τ) = cos ωt sin ω(t + τ) + sin ωt cos ω(t + τ) = sin(ωt + ωτ + ωt)

R_{Y₁Y₂}(τ) = sin(2ωt + ωτ) = sin ω(2t + τ)

The cross-correlation is a function of time t. Therefore, Y₁(t) and Y₂(t) are not jointly WSS.
Example 6.6

For a given random process X(t), the mean value is X̄ = 6 and the autocorrelation is

R_XX(τ) = 36 + 25e^(−|τ|)

Find (a) the average power of the process X(t) and (b) the variance of X(t).

Solution

Given R_XX(τ) = 36 + 25e^(−|τ|).

(a) The average power is

E[X²(t)] = R_XX(0) = 36 + 25e⁰ = 36 + 25 = 61

∴ The average power is 61 watts.

(b) The variance of X(t) is

σ_X² = R_XX(0) − X̄² = 61 − 6² = 61 − 36 = 25

Example 6.7

The autocorrelation function of a stationary random process X(t) is given by

R_XX(τ) = 36 + 16/(1 + 8τ²)

Find the mean, mean square and variance of the process.

Solution

Given the autocorrelation function R_XX(τ) = 36 + 16/(1 + 8τ²).

(a) Since the process is stationary, we know that

X̄² = lim_{τ→∞} R_XX(τ) = lim_{τ→∞} [36 + 16/(1 + 8τ²)] = 36 + 0 = 36

The mean value is X̄ = 6.

(b) We know that the mean square value is

E[X²] = R_XX(0) = 36 + 16/(1 + 0) = 36 + 16 = 52

(c) The variance of X is given by

σ_X² = E[X²] − X̄² = 52 − 36 = 16

∴ σ_X² = 16
Example 6.8

Given a WSS Gaussian random process X(t_i), t_i = 1, 2, 3, the autocorrelation function is

R_XX(τ) = 16e^{-2|τ|}

with mean X̄ = 3. Find the covariance matrix.

Solution

Given R_XX(τ) = 16e^{-2|τ|}, X̄ = 3, and the random process X(t_i), t_i = 1, 2, 3. Since it is wide sense stationary,

R_XX(τ) = R_XX(t_k - t_i) = 16e^{-2|t_k - t_i|}

The autocorrelation matrix is

[R_X] = | 16         16e^{-2}   16e^{-4} |   | 16     2.16   0.29 |
        | 16e^{-2}   16         16e^{-2} | = | 2.16   16     2.16 |
        | 16e^{-4}   16e^{-2}   16       |   | 0.29   2.16   16   |

The covariance of X(t) is

C_XX(t_k - t_i) = R_XX(t_k - t_i) - X̄^2 = 16e^{-2|t_k - t_i|} - 9

∴ The covariance matrix is

[C_X] = | 16-9         16e^{-2}-9   16e^{-4}-9 |   |  7      -6.83   -8.71 |
        | 16e^{-2}-9   16-9         16e^{-2}-9 | = | -6.83    7      -6.83 |
        | 16e^{-4}-9   16e^{-2}-9   16-9       |   | -8.71   -6.83    7    |
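The covariance matrix above can be generated directly from the sample instants; the sketch below assumes NumPy is available and uses variable names of our own choosing.

```python
import numpy as np

# Sample instants and moments from Example 6.8
t = np.array([1.0, 2.0, 3.0])
mean = 3.0

# C_XX(t_k - t_i) = R_XX(t_k - t_i) - mean^2 = 16 exp(-2|t_k - t_i|) - 9
lag = np.abs(t[:, None] - t[None, :])   # matrix of |t_k - t_i|
C = 16 * np.exp(-2 * lag) - mean ** 2

print(np.round(C, 2))
```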

More Solved Examples


Example 6.9

A random process is described by X(t) = A, where A is a continuous random variable uniformly distributed on (0, 1). Show that X(t) is a stationary process. (Feb 2007)

Solution

Given X(t) = A, where A is a continuous random variable with uniform distribution on (0, 1):

f_A(a) = 1 for 0 ≤ a ≤ 1; 0 otherwise

(i) The mean value of X(t) is

E[X(t)] = ∫ X(t) f_A(a)da = ∫_0^1 a da = 1/2 (constant)

(ii) The autocorrelation of X(t) is

R_XX(τ) = E[X(t)X(t+τ)] = ∫_0^1 a^2 da = [a^3/3]_0^1 = 1/3

R_XX(τ) is independent of time.

∴ The given process X(t) is a stationary process.
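A Monte Carlo check of the two moments just computed; the sample count and seed are arbitrary choices of ours.

```python
import random

random.seed(0)
N = 200_000
samples = [random.random() for _ in range(N)]    # A ~ Uniform(0, 1)

mean = sum(samples) / N                          # should approach E[X(t)] = 1/2
autocorr = sum(a * a for a in samples) / N       # should approach R_XX(tau) = E[A^2] = 1/3

print(round(mean, 3), round(autocorr, 3))
```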

Example 6.10

Consider the random processes X(t) = A cos(ω_1 t + θ) and Y(t) = B cos(ω_2 t + φ), where A, B, ω_1 and ω_2 are constants, while θ and φ are statistically independent random variables, each uniformly distributed on (0, 2π).

(i) Show that X(t) and Y(t) are jointly WSS.
(ii) If θ = φ, show that X(t) and Y(t) are not jointly WSS unless ω_1 = ω_2. (Feb 2008)

Solution

Given the random processes X(t) = A cos(ω_1 t + θ) and Y(t) = B cos(ω_2 t + φ), with θ and φ uniformly distributed over (0, 2π):

f_θ(θ) = 1/2π for 0 ≤ θ ≤ 2π; 0 otherwise
f_φ(φ) = 1/2π for 0 ≤ φ ≤ 2π; 0 otherwise

(i) For the processes to be jointly WSS,

E[X(t)] = E[A cos(ω_1 t + θ)] = ∫_0^{2π} A cos(ω_1 t + θ)(1/2π)dθ
= (A/2π)[sin(ω_1 t + θ)]_0^{2π} = (A/2π)[sin ω_1 t - sin ω_1 t] = 0

∴ E[X(t)] = 0, and the autocorrelation of X(t) is

R_XX(τ) = E[X(t)X(t+τ)] = A^2 ∫_0^{2π} cos(ω_1 t + θ)cos(ω_1(t+τ) + θ)(1/2π)dθ
= (A^2/4π)∫_0^{2π}[cos ω_1 τ + cos(2ω_1 t + ω_1 τ + 2θ)]dθ

R_XX(τ) = (A^2/2)cos ω_1 τ

Similarly, E[Y(t)] = 0 and R_YY(τ) = (B^2/2)cos ω_2 τ.

The cross-correlation between X(t) and Y(t) is

R_XY(τ) = E[X(t)Y(t+τ)]

Since θ and φ are statistically independent, f_{θ,φ}(θ,φ) = f_θ(θ)f_φ(φ), so

R_XY(τ) = (AB/4π^2)∫_0^{2π}cos(ω_1 t + θ)dθ ∫_0^{2π}cos(ω_2(t+τ) + φ)dφ = 0

Therefore, the mean values of X(t) and Y(t) are constant, the autocorrelations of both X(t) and Y(t) are independent of t, and the cross-correlation is also independent of t. Hence the random processes are jointly WSS.

(ii) If θ = φ, the cross-correlation between X(t) and Y(t) is

R_XY(τ) = E[A cos(ω_1 t + θ) B cos(ω_2(t+τ) + θ)]
= (AB/2)E[cos((ω_2 - ω_1)t + ω_2 τ) + cos((ω_2 + ω_1)t + ω_2 τ + 2θ)]
= (AB/4π)∫_0^{2π}cos((ω_2 - ω_1)t + ω_2 τ)dθ + (AB/4π)∫_0^{2π}cos((ω_2 + ω_1)t + ω_2 τ + 2θ)dθ

The second term is equal to zero, so

R_XY(τ) = (AB/2)cos((ω_2 - ω_1)t + ω_2 τ)

The cross-correlation function is a function of both t and τ. Therefore the given X(t) and Y(t) are not jointly WSS.

But if ω_2 = ω_1, then

R_XY(τ) = (AB/2)cos ω_2 τ

The cross-correlation function is independent of time t. Therefore the given processes are jointly WSS when θ = φ and ω_1 = ω_2.

Example 6.11

A random process Y(t) = X(t) - X(t+τ) is defined in terms of a process X(t) that is at least WSS.

(a) Show that the mean value of Y(t) is zero even if X(t) has a non-zero mean value.
(b) Show that σ_Y^2 = 2[R_XX(0) - R_XX(τ)].
(c) If Y(t) = X(t) + X(t+τ), find E[Y(t)] and σ_Y^2. (Nov 2007)

Solution

Given the random process Y(t) = X(t) - X(t+τ), where X(t) is a WSS random process.

(a) The mean value of Y(t) is

E[Y(t)] = E[X(t) - X(t+τ)] = E[X(t)] - E[X(t+τ)]

For a stationary process, E[X(t)] = E[X(t+τ)], even when E[X(t)] ≠ 0.

∴ E[Y(t)] = 0

(b) The variance of Y(t) is σ_Y^2 = E[Y^2(t)] - E[Y(t)]^2 = E[Y^2(t)] - 0, so

σ_Y^2 = E[(X(t) - X(t+τ))^2]
= E[X^2(t)] + E[X^2(t+τ)] - 2E[X(t)X(t+τ)]
= R_XX(0) + R_XX(0) - 2R_XX(τ)

σ_Y^2 = 2[R_XX(0) - R_XX(τ)]

(c) If the random process is Y(t) = X(t) + X(t+τ), then the mean value is

E[Y(t)] = E[X(t)] + E[X(t+τ)] = 2X̄

The variance of Y(t) is

σ_Y^2 = E[(X(t) + X(t+τ))^2] - (2X̄)^2
= E[X^2(t)] + E[X^2(t+τ)] + 2E[X(t)X(t+τ)] - 4X̄^2
= R_XX(0) + R_XX(0) + 2R_XX(τ) - 4X̄^2

σ_Y^2 = 2[R_XX(0) + R_XX(τ)] - 4X̄^2

Example 6.12

Determine whether the random process X(t) = A, where A is a random variable with mean Ā and variance σ_A^2, is mean ergodic. (Feb 2007)

Solution

Given X(t) = A, the ensemble mean is E[X(t)] = Ā. If X(t) is mean ergodic, then E[X(t)] = A[X(t)].

The time average of X(t) is

A[X(t)] = lim_{T→∞} (1/2T)∫_{-T}^{T} X(t)dt = lim_{T→∞} (1/2T)∫_{-T}^{T} A dt = A

The time average is the random variable A itself, which equals the ensemble mean Ā only when σ_A^2 = 0.

∴ The random process is not ergodic in the mean (unless A is a constant).


Example 6.13

Statistically independent zero-mean random processes X(t) and Y(t) have autocorrelation functions

R_XX(τ) = e^{-|τ|} and R_YY(τ) = cos(2πτ) respectively.

(a) Find the autocorrelation function of the sum W_1(t) = X(t) + Y(t).
(b) Find the autocorrelation function of the difference W_2(t) = X(t) - Y(t).
(c) Find the cross-correlation function of W_1(t) and W_2(t). (Nov 2007)

Solution

Given X(t) and Y(t) are two statistically independent zero-mean random processes with R_XX(τ) = e^{-|τ|} and R_YY(τ) = cos(2πτ).

(a) Given W_1(t) = X(t) + Y(t). The autocorrelation function is

R_{W1W1}(τ) = E[W_1(t)W_1(t+τ)]
= E[(X(t) + Y(t))(X(t+τ) + Y(t+τ))]
= E[X(t)X(t+τ)] + E[X(t)Y(t+τ)] + E[Y(t)X(t+τ)] + E[Y(t)Y(t+τ)]
= R_XX(τ) + R_XY(τ) + R_YX(τ) + R_YY(τ)

Since X(t) and Y(t) are statistically independent with zero means, R_XY(τ) = R_YX(τ) = 0.

∴ R_{W1W1}(τ) = R_XX(τ) + R_YY(τ) = e^{-|τ|} + cos(2πτ)

(b) Given W_2(t) = X(t) - Y(t). The autocorrelation function of W_2(t) is

R_{W2W2}(τ) = E[W_2(t)W_2(t+τ)]
= E[(X(t) - Y(t))(X(t+τ) - Y(t+τ))]
= R_XX(τ) - R_XY(τ) - R_YX(τ) + R_YY(τ)
= R_XX(τ) + R_YY(τ) = e^{-|τ|} + cos(2πτ)

(c) The cross-correlation of W_1(t) and W_2(t) is

R_{W1W2}(τ) = E[W_1(t)W_2(t+τ)]
= E[(X(t) + Y(t))(X(t+τ) - Y(t+τ))]
= R_XX(τ) - R_XY(τ) + R_YX(τ) - R_YY(τ)
= R_XX(τ) - R_YY(τ) = e^{-|τ|} - cos(2πτ)
Example 6.14

A random process is described by X(t) = A^2 cos^2(ω_c t + θ), where A and ω_c are constants and θ is a random variable uniformly distributed between ±π. Is X(t) wide sense stationary? (May 2010)

Solution

Given X(t) = A^2 cos^2(ω_c t + θ), with θ uniformly distributed between ±π. The density function is

f_θ(θ) = 1/2π for -π ≤ θ ≤ π; 0 otherwise

The mean value of X(t) is

E[X(t)] = ∫ X(t) f_θ(θ)dθ = (A^2/4π)∫_{-π}^{π}[1 + cos 2(ω_c t + θ)]dθ
= (A^2/4π)[2π + (1/2)sin 2(ω_c t + θ)|_{-π}^{π}]
= (A^2/4π)[2π + 0] = A^2/2 (constant)

The autocorrelation function is

R_XX(t, t+τ) = E[X(t)X(t+τ)] = A^4 E[cos^2(ω_c t + θ)cos^2(ω_c(t+τ) + θ)]

Writing cos^2 u cos^2 v = (1/4)[1 + cos 2u + cos 2v + (1/2)cos(2v - 2u) + (1/2)cos(2u + 2v)] with u = ω_c t + θ and v = ω_c(t+τ) + θ, every term containing θ averages to zero over (-π, π), while 2v - 2u = 2ω_c τ. Hence

R_XX(τ) = A^4[1/4 + (1/8)cos 2ω_c τ]

Since the mean is constant and the autocorrelation depends only on τ, X(t) is wide sense stationary.

Example 6.15

Prove that the random process X(t) = A cos(ω_c t + θ) is wide sense stationary if it is assumed that ω_c is a constant and θ is a uniformly distributed variable in the interval (0, 2π).

Solution

Given X(t) = A cos(ω_c t + θ), where θ is uniformly distributed between (0, 2π). The density function is

f_θ(θ) = 1/2π for 0 < θ < 2π; 0 elsewhere

The mean value of X(t) is

E[X(t)] = ∫ X(t) f_θ(θ)dθ = (A/2π)∫_0^{2π}cos(ω_c t + θ)dθ = (A/2π)[sin(ω_c t + θ)]_0^{2π} = 0

and the autocorrelation of X(t) is

R_XX(t, t+τ) = E[X(t)X(t+τ)]
= ∫_0^{2π} A cos(ω_c t + θ) A cos(ω_c(t+τ) + θ)(1/2π)dθ
= (A^2/4π)[∫_0^{2π}cos ω_c τ dθ + ∫_0^{2π}cos(2ω_c t + ω_c τ + 2θ)dθ]
= (A^2/4π)(cos ω_c τ)(2π) = (A^2/2)cos ω_c τ

Since the mean value is constant and the autocorrelation is independent of time, X(t) is wide sense stationary.
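The result R_XX(τ) = (A^2/2)cos ω_c τ can be checked by simulation; the constants A, ω_c, t and τ below are illustrative assumptions of ours, not values from the book.

```python
import math
import random

random.seed(1)
A, wc = 2.0, 1.5          # assumed illustrative constants
t, tau = 0.7, 0.4
N = 200_000

mean_acc = corr_acc = 0.0
for _ in range(N):
    th = random.uniform(0.0, 2.0 * math.pi)      # theta ~ U(0, 2*pi)
    x_t = A * math.cos(wc * t + th)
    x_t_tau = A * math.cos(wc * (t + tau) + th)
    mean_acc += x_t
    corr_acc += x_t * x_t_tau

mean_est = mean_acc / N
R_est = corr_acc / N
R_theory = (A ** 2 / 2) * math.cos(wc * tau)     # (A^2/2) cos(wc * tau)
print(round(mean_est, 3), round(R_est, 3), round(R_theory, 3))
```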
Example 6.16

Let X(t) be a stationary continuous random process that is differentiable. Denote its time derivative by Ẋ(t).

(a) Show that E[Ẋ(t)] = 0. (b) Find R_ẊẊ(τ) in terms of R_XX(τ). (Feb 2007, Feb 2008)

Solution

(a) Given X(t) is a stationary random process. From derivative principles, the time derivative of X(t) is

Ẋ(t) = lim_{Δ→0} [X(t+Δ) - X(t)]/Δ

The mean value is

E[Ẋ(t)] = lim_{Δ→0} (E[X(t+Δ)] - E[X(t)])/Δ

We know that if X(t) is stationary, then E[X(t+Δ)] = E[X(t)].

∴ E[Ẋ(t)] = 0

(b) R_ẊẊ(τ) = E[Ẋ(t_1)Ẋ(t_2)] with t_2 = t_1 + τ. Writing the expectation as an integral over the joint density and interchanging integration and differentiation,

R_ẊẊ(t_1, t_2) = ∂^2/∂t_1 ∂t_2 E[X(t_1)X(t_2)] = ∂^2/∂t_1 ∂t_2 R_XX(t_2 - t_1)

Since R_XX depends only on τ = t_2 - t_1, differentiation with respect to t_2 gives d/dτ and with respect to t_1 gives -d/dτ. Hence

R_ẊẊ(τ) = -d^2 R_XX(τ)/dτ^2
Example 6.17

Given X̄ = 6 and R_XX(t, t+τ) = 36 + 25exp(-|τ|) for a random process X(t), indicate which of the following statements are true based on what is known, with certainty: X(t)

(a) is first order stationary
(b) has total average power of 61 W
(c) is ergodic
(d) is wide sense stationary
(e) has a periodic component
(f) has an AC power of 36 W (Feb 2007, May 2009)

Solution

Given the random process X(t): mean value X̄ = 6, autocorrelation R_XX(t, t+τ) = 36 + 25exp(-|τ|).

(a) First order stationary: since the mean value X̄ = 6 is a constant, the first order density function does not change with time t. ∴ X(t) is first order stationary.

(b) The average power is

P_avg = R_XX(0) = 36 + 25exp(0) = 36 + 25 = 61 W

∴ The average power is 61 W.

(c) Ergodicity requires the time averages of a sample function to equal the ensemble averages, i.e. A[X(t)] = X̄ = 6 and A[R_XX(t, t+τ)] = R_XX(τ). Nothing is known here about the time averages, so ergodicity cannot be asserted with certainty.

(d) Wide sense stationary (WSS): the conditions are E[X] = 6 = constant and R_XX(t, t+τ) = R_XX(τ) = 36 + 25e^{-|τ|}, which is independent of the absolute time t. ∴ The process is WSS.

(e) The process has no periodic components, since

lim_{τ→∞} R_XX(t, t+τ) = lim_{τ→∞} (36 + 25e^{-|τ|}) = 36 = E[X]^2

(f) The AC power of the process is the variance of X(t):

σ_X^2 = E[X(t)^2] - E[X(t)]^2 = R_XX(0) - X̄^2 = 61 - 36 = 25

The AC power is 25 watts, not 36 W, so statement (f) is false.
Example 6.18

A random process is defined by X(t) = X_0 + Vt, where X_0 and V are statistically independent random variables, uniformly distributed on the intervals [X_01, X_02] and [V_1, V_2] respectively. Find the (a) mean, (b) autocorrelation and (c) autocovariance functions. (d) Is X(t) stationary in any sense? If so, state the type. (Feb 2008)

Solution

The given random process is X(t) = X_0 + Vt, where X_0 and V are statistically independent, with

f_{X0}(x_0) = 1/(X_02 - X_01) for X_01 ≤ x_0 ≤ X_02; 0 elsewhere
f_V(v) = 1/(V_2 - V_1) for V_1 ≤ v ≤ V_2; 0 elsewhere

(a) The mean value of X(t) is

E[X(t)] = E[X_0 + Vt] = E[X_0] + tE[V]
= (X_02^2 - X_01^2)/(2(X_02 - X_01)) + t(V_2^2 - V_1^2)/(2(V_2 - V_1))

E[X(t)] = (X_01 + X_02)/2 + t(V_1 + V_2)/2

(b) The autocorrelation function of X(t) is

R_XX(t, t+τ) = E[X(t)X(t+τ)] = E[(X_0 + Vt)(X_0 + V(t+τ))]
= E[X_0^2 + (2t+τ)X_0 V + t(t+τ)V^2]
= E[X_0^2] + (2t+τ)E[X_0 V] + t(t+τ)E[V^2]

where E[X_0^2] = (X_02^2 + X_01^2 + X_01 X_02)/3, E[V^2] = (V_2^2 + V_1^2 + V_1 V_2)/3 and, since X_0 and V are independent,

E[X_0 V] = E[X_0]E[V] = (X_02 + X_01)(V_2 + V_1)/4

∴ R_XX(t, t+τ) = (X_02^2 + X_01^2 + X_01 X_02)/3 + (2t+τ)(X_02 + X_01)(V_2 + V_1)/4 + t(t+τ)(V_2^2 + V_1^2 + V_1 V_2)/3

(c) The autocovariance function of X(t) is

C_XX(t, t+τ) = R_XX(t, t+τ) - E[X(t)]E[X(t+τ)]

Now E[X(t+τ)] = (1/2)[(X_02 + X_01) + (t+τ)(V_2 + V_1)], so

E[X(t)]E[X(t+τ)] = (1/4)[X_02 + X_01 + t(V_2 + V_1)][X_02 + X_01 + (t+τ)(V_2 + V_1)]

Expanding and subtracting,

C_XX(t, t+τ) = [(X_02^2 + X_01^2 + X_01 X_02)/3 - (X_02 + X_01)^2/4] + t(t+τ)[(V_2^2 + V_1^2 + V_1 V_2)/3 - (V_2 + V_1)^2/4]

C_XX(t, t+τ) = (X_02 - X_01)^2/12 + t(t+τ)(V_2 - V_1)^2/12

(d) (1) The mean of X(t) is not constant. (2) The autocorrelation function depends on time t. (3) The autocovariance function depends on time t. Hence the given random process is not stationary in any sense.

Example 6.19

Let X(t) be a WSS process with autocorrelation function R_XX(τ) = e^{-a|τ|}, where a > 0 is a constant. Assume X(t) amplitude modulates a carrier cos(ω_0 t + θ) as shown in Fig. 6.4, where ω_0 is a constant and θ is a random variable uniformly distributed on (-π, π). If X(t) and θ are statistically independent, determine the autocorrelation function of Y(t). (Feb 2008)

Fig. 6.4 Modulation system: Y(t) = X(t)cos(ω_0 t + θ)

Solution

Given a WSS random process X(t) with autocorrelation R_XX(τ) = e^{-a|τ|}, a > 0, where θ is a uniformly distributed random variable on (-π, π):

f_θ(θ) = 1/2π for -π ≤ θ ≤ π; 0 elsewhere

and X(t) and θ are statistically independent. From Fig. 6.4, Y(t) = X(t)cos(ω_0 t + θ). The autocorrelation of Y(t) is

R_YY(t, t+τ) = E[Y(t)Y(t+τ)]
= E[X(t)cos(ω_0 t + θ) X(t+τ)cos(ω_0(t+τ) + θ)]
= E[X(t)X(t+τ)] E[cos(ω_0 t + θ)cos(ω_0(t+τ) + θ)]
= R_XX(τ)·(1/2)E[cos ω_0 τ + cos(2ω_0 t + 2θ + ω_0 τ)]

∴ R_YY(τ) = (1/2)e^{-a|τ|} cos ω_0 τ
Example 6.20

Consider a random process X(t) = A cos ωt, where ω is a constant and A is a random variable uniformly distributed over (0, 1). Find the autocorrelation and autocovariance of X(t). (May 2010)

Solution

Given a random process X(t) = A cos ωt with

f_A(a) = 1 for 0 < a < 1; 0 otherwise

(a) The autocorrelation is

R_XX(t, t+τ) = E[X(t)X(t+τ)] = ∫_0^1 a cos ωt · a cos ω(t+τ) da
= cos ωt cos ω(t+τ) [a^3/3]_0^1

R_XX(t, t+τ) = (1/3)cos ωt cos ω(t+τ)

(b) The autocovariance of X(t) is

C_XX(t, t+τ) = R_XX(t, t+τ) - E[X(t)]E[X(t+τ)]

Now E[X(t)] = ∫_0^1 a cos ωt da = (1/2)cos ωt and E[X(t+τ)] = (1/2)cos ω(t+τ), so

C_XX(t, t+τ) = (1/3)cos ωt cos ω(t+τ) - (1/4)cos ωt cos ω(t+τ)

C_XX(t, t+τ) = (1/12)cos ωt cos ω(t+τ)
Example 6.21

A stationary random process X(t) with mean 2 has the autocorrelation function R_XX(τ) = 4 + exp(-|τ|/10). Find the mean and variance of Y = ∫_0^1 X(t)dt.

Solution

Given E[X(t)] = 2, R_XX(τ) = 4 + e^{-|τ|/10} and Y = ∫_0^1 X(t)dt.

(a) The mean value of Y is

E[Y] = E[∫_0^1 X(t)dt] = ∫_0^1 E[X(t)]dt = ∫_0^1 2 dt = 2

(b) For Y = ∫_0^T X(t)dt we know that

E[Y^2] = ∫_{-T}^{T}(T - |τ|)R_XX(τ)dτ

With T = 1,

E[Y^2] = ∫_{-1}^{1}(1 - |τ|)(4 + e^{-|τ|/10})dτ = 4∫_{-1}^{1}(1 - |τ|)dτ + 2∫_0^1(1 - τ)e^{-τ/10}dτ

Now ∫_{-1}^{1}(1 - |τ|)dτ = 1,

∫_0^1 e^{-τ/10}dτ = 10(1 - e^{-0.1}) = 0.9516

∫_0^1 τe^{-τ/10}dτ = -10e^{-0.1} + 100(1 - e^{-0.1}) = 0.4679

∴ E[Y^2] = 4 + 2(0.9516 - 0.4679) = 4.967

The variance of Y is

σ_Y^2 = E[Y^2] - E[Y]^2 = 4.967 - 4 = 0.967
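The integral for E[Y^2] is easy to verify numerically; the sketch below applies the trapezoid rule (step count is an arbitrary choice of ours).

```python
import math

# R_XX(tau) = 4 + exp(-|tau|/10); Y = integral of X(t) over [0, 1]
def R_xx(tau):
    return 4 + math.exp(-abs(tau) / 10)

# E[Y^2] = integral_{-1}^{1} (1 - |tau|) R_XX(tau) dtau, by the trapezoid rule
n = 100_000
h = 2.0 / n
total = 0.0
for k in range(n + 1):
    tau = -1.0 + k * h
    w = 0.5 if k in (0, n) else 1.0
    total += w * (1 - abs(tau)) * R_xx(tau)
EY2 = total * h

var_Y = EY2 - 2.0 ** 2           # E[Y] = 2
print(round(EY2, 3), round(var_Y, 3))
```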

Example 6.22

A stationary process has an autocorrelation function given by

R(τ) = (25τ^2 + 36)/(6.25τ^2 + 4)

Find the mean value, mean square value and variance of the process. (May 2011)

Solution

Given R(τ) = (25τ^2 + 36)/(6.25τ^2 + 4).

(i) The mean square value is

E[X^2] = R(τ)|_{τ=0} = 36/4 = 9

(ii) The mean value follows from

X̄^2 = lim_{τ→∞} R(τ) = 25/6.25 = 4, so X̄ = 2

(iii) The variance is

σ_X^2 = E[X^2] - X̄^2 = 9 - 4 = 5
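A one-liner check of the three values; the large argument below stands in for the τ → ∞ limit.

```python
# R(tau) from Example 6.22
R = lambda tau: (25 * tau ** 2 + 36) / (6.25 * tau ** 2 + 4)

mean_sq = R(0.0)              # E[X^2] = R(0)
mean_sq_limit = R(1e8)        # X-bar^2 = lim R(tau), tau -> infinity
variance = mean_sq - mean_sq_limit

print(mean_sq, round(mean_sq_limit, 6), round(variance, 6))
```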

Example 6.23

Assume that an ergodic random process X(t) has the autocorrelation function

R_XX(τ) = 18 + [2/(6 + τ^2)](1 + 4cos(2τ))

(i) Find |X̄|. (ii) Does this process have a periodic component? (iii) What is the average power in X(t)? (May 2011)

Solution

Given R_XX(τ) = 18 + [2/(6 + τ^2)](1 + 4cos(2τ)).

(i) Mean value: as τ → ∞ the factor 2/(6 + τ^2) → 0, so

X̄^2 = lim_{τ→∞} R_XX(τ) = 18

|X̄| = √18 = 4.2426

(ii) The cosine term is damped by the factor 2/(6 + τ^2), so R_XX(τ) settles to the constant 18 as τ → ∞ rather than oscillating. ∴ The process has no periodic component.

(iii) The average power of X(t) is

E[X^2] = R_XX(0) = 18 + (2/6)(1 + 4cos(0)) = 18 + 10/6 = 19.667

Example 6.24

A stationary ergodic random process has the autocorrelation function, with no periodic components,

R_XX(τ) = 25 + 4/(1 + 6τ^2)

Find the mean and variance of X(t). (May 2011)

Solution

Given R_XX(τ) = 25 + 4/(1 + 6τ^2).

The mean value: X̄^2 = lim_{τ→∞} R_XX(τ) = 25, so X̄ = 5.

The mean square value is E[X^2] = R_XX(0) = 25 + 4 = 29.

∴ Variance σ_X^2 = E[X^2] - X̄^2 = 29 - 25 = 4.
Example 6.25

A Gaussian random process is known to be WSS with a mean of X̄ = 4 and autocorrelation R_XX(τ) = 25e^{-3|τ|/2}. Find the covariance function necessary to specify the joint density of the random process, and give the covariance matrix for X_i = X(t_i), where i = 1, 2, ..., N.

Solution

Given a WSS Gaussian random process X(t) with X̄ = 4 and R_XX(τ) = 25e^{-3|τ|/2}. The N random variables are X_i = X(t_i), i = 1, 2, ..., N, with t_k - t_i = (k - i) for i, k = 1, 2, ..., N.

The autocorrelation function for the N random variables is

R_XX(t_i, t_k) = R_XX(t_k - t_i) = R_XX(τ) = 25e^{-3|t_k - t_i|/2}

The covariance function is defined as

C_XX(t_i, t_k) = R_XX(t_i, t_k) - E[X(t_i)]E[X(t_k)]

But given E[X(t_i)] = E[X(t_k)] = E[X(t)] = 4,

C_XX(t_i, t_k) = 25e^{-3|t_k - t_i|/2} - 4 × 4 = 25e^{-3|t_k - t_i|/2} - 16

∴ The covariance matrix is

[C_X] = | 25-16                25e^{-3/2}-16         25e^{-3}-16    ...  25e^{-3(N-1)/2}-16 |
        | 25e^{-3/2}-16        25-16                 25e^{-3/2}-16  ...  25e^{-3(N-2)/2}-16 |
        |   ...                  ...                   ...                 ...              |
        | 25e^{-3(N-1)/2}-16   25e^{-3(N-2)/2}-16      ...               25-16             |

For example, if N = 2,

[C_X] = |  9       -10.42 |
        | -10.42     9    |

Example 6.26

Telephone calls are initiated through an exchange at an average rate of 75 per minute and are described by a Poisson process. Find the probability that more than 3 calls are initiated in any 5-second period. (Feb 2007)

Solution

Given λ = 75 per minute = 75/60 per second, and time period t = 5 seconds:

λt = (75/60) × 5 = 6.25

Number of calls N = 3. Since the process is a Poisson process, the probability of the number of calls initiated being ≤ N is

P[X(t) ≤ N] = Σ_{k=0}^{N} (λt)^k e^{-λt}/k!, k = 0, 1, 2, 3, ...

Thus, the probability that more than 3 calls are initiated is

P[X(t) > 3] = 1 - P[X(t) ≤ 3]
= 1 - e^{-6.25}[1 + 6.25 + (6.25)^2/2! + (6.25)^3/3!]
= 1 - 0.1302

P[X(t) > 3] = 0.8698 ≈ 0.87
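The Poisson tail probability can be recomputed directly:

```python
import math

rate = 75 / 60                 # calls per second
t = 5.0                        # seconds
mu = rate * t                  # lambda * t = 6.25 expected calls

# P[more than 3 calls] = 1 - sum_{k=0}^{3} mu^k e^{-mu} / k!
p_at_most_3 = math.exp(-mu) * sum(mu ** k / math.factorial(k) for k in range(4))
p_more_than_3 = 1 - p_at_most_3
print(round(p_more_than_3, 3))
```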

Questions

(1) Explain the concept of a random process. (Nov 2010, May 2004)
(2) Explain how random processes are classified, with neat sketches. (Nov 2003, Nov 2006)
(3) Briefly explain the distribution and density functions in the context of stationary and independent random processes.
(4) Distinguish between stationary and non-stationary random processes. (Nov 2010, May 2009, May 2004)

Proved.

P_XX = R_XX(0)

The average power P_XX is the autocorrelation of X(t) at the origin.

7.3 Properties of the Power Density Spectrum

The properties of the power density spectrum S_XX(ω) for a wide sense stationary random process X(t) are given as follows.

(1) S_XX(ω) ≥ 0    (7.14)

Proof From the definition, S_XX(ω) is obtained from the expected value of the non-negative quantity |X_T(ω)|^2, which is always non-negative. Hence S_XX(ω) ≥ 0.

(2) The power spectral density at zero frequency is equal to the area under the curve of the autocorrelation R_XX(τ). That is,

S_XX(0) = ∫_{-∞}^{∞} R_XX(τ)dτ    (7.15)

Proof From the definition we know that S_XX(ω) = ∫_{-∞}^{∞} R_XX(τ)e^{-jωτ}dτ. At ω = 0,

S_XX(0) = ∫_{-∞}^{∞} R_XX(τ)dτ

(3) The power density spectrum of a real process X(t) is an even function, i.e.,

S_XX(-ω) = S_XX(ω), X(t) real    (7.16)

Proof Consider a WSS real process X(t). Then

S_XX(-ω) = ∫_{-∞}^{∞} R_XX(τ)e^{jωτ}dτ

Substituting τ = -τ,

S_XX(-ω) = ∫_{-∞}^{∞} R_XX(-τ)e^{-jωτ}dτ

Since X(t) is real, from the properties of autocorrelation of (6.47) we know that R_XX(-τ) = R_XX(τ), so

S_XX(-ω) = ∫_{-∞}^{∞} R_XX(τ)e^{-jωτ}dτ = S_XX(ω)    (7.17)

∴ S_XX(ω) is an even function.

(4) S_XX(ω) is always a real function.

Proof From (7.3) we know that S_XX(ω) = lim_{T→∞} E[|X_T(ω)|^2]/2T. Since |X_T(ω)|^2 is a real function, S_XX(ω) is always real.

(5) If S_XX(ω) is the psd of the WSS random process X(t), then

(1/2π)∫_{-∞}^{∞} S_XX(ω)dω = A{E[X^2(t)]}    (7.18)

or: the time average of the mean square value of a WSS random process equals the area under the curve of the power spectral density (scaled by 1/2π).

Proof From (6.34) we know that

R_XX(τ) = A{E[X(t+τ)X(t)]} = (1/2π)∫_{-∞}^{∞} S_XX(ω)e^{jωτ}dω

Now at τ = 0,

R_XX(0) = A{E[X^2(t)]} = (1/2π)∫_{-∞}^{∞} S_XX(ω)dω

(6) If X(t) is a WSS random process with psd S_XX(ω), then the psd of the derivative of X(t) is equal to ω^2 times the psd S_XX(ω):

S_ẊẊ(ω) = ω^2 S_XX(ω)    (7.19)

Proof From (7.3), S_XX(ω) = lim_{T→∞} E[|X_T(ω)|^2]/2T with X_T(ω) = ∫_{-T}^{T} X(t)e^{-jωt}dt. Integrating by parts, the truncated transform of the derivative is

Ẋ_T(ω) = ∫_{-T}^{T} Ẋ(t)e^{-jωt}dt = [X(t)e^{-jωt}]_{-T}^{T} + jω∫_{-T}^{T} X(t)e^{-jωt}dt

The boundary term does not contribute in the limit, so effectively Ẋ_T(ω) = (jω)X_T(ω) and

S_ẊẊ(ω) = lim_{T→∞} E[|(jω)X_T(ω)|^2]/2T = ω^2 S_XX(ω)

Proved.

(7) The power density spectrum and the time average of the autocorrelation function form a Fourier transform pair:

S_XX(ω) = ∫_{-∞}^{∞} A[R_XX(t, t+τ)]e^{-jωτ}dτ    (7.20)

A[R_XX(t, t+τ)] = (1/2π)∫_{-∞}^{∞} S_XX(ω)e^{jωτ}dω    (7.21)

Proof Consider a WSS random process X(t). We know that the time average of the autocorrelation function is

A[R_XX(t, t+τ)] = A{E[X(t)X(t+τ)]} = R_XX(τ)

Now the power spectral density is

S_XX(ω) = lim_{T→∞} E[|X_T(ω)|^2]/2T

But |X_T(ω)|^2 = X_T(ω)X_T*(ω), with

X_T(ω) = ∫_{-T}^{T} X(t_1)e^{-jωt_1}dt_1 and X_T*(ω) = ∫_{-T}^{T} X(t)e^{jωt}dt

Hence

S_XX(ω) = lim_{T→∞} (1/2T)∫_{-T}^{T}∫_{-T}^{T} E[X(t_1)X(t)]e^{-jω(t_1-t)}dt dt_1

We know that E[X(t_1)X(t)] = R_XX(t, t_1), so

S_XX(ω) = lim_{T→∞} (1/2T)∫_{-T}^{T}∫_{-T}^{T} R_XX(t, t_1)e^{-jω(t_1-t)}dt_1 dt    (7.22)

Taking the inverse Fourier transform and using (1/2π)∫ e^{jω(τ-(t_1-t))}dω = δ(t_1 - t - τ),

(1/2π)∫ S_XX(ω)e^{jωτ}dω = lim_{T→∞} (1/2T)∫_{-T}^{T}∫_{-T}^{T} R_XX(t, t_1)δ(t_1 - t - τ)dt_1 dt    (7.23)

Since ∫_{-T}^{T} δ(t_1 - t - τ)dt_1 = 1 selects t_1 = t + τ,

(1/2π)∫ S_XX(ω)e^{jωτ}dω = lim_{T→∞} (1/2T)∫_{-T}^{T} R_XX(t, t+τ)dt = A[R_XX(t, t+τ)]    (7.24)

Taking the Fourier transform gives (7.20). For a WSS process, A[R_XX(t, t+τ)] = R_XX(τ).

∴ R_XX(τ) and S_XX(ω) are a Fourier transform pair.

Note 1 For a WSS random process X(t), the Fourier transform pairs

S_XX(ω) = ∫_{-∞}^{∞} R_XX(τ)e^{-jωτ}dτ and R_XX(τ) = (1/2π)∫_{-∞}^{∞} S_XX(ω)e^{jωτ}dω

are called Wiener-Khintchine relations.

Note 2 The knowledge of the power spectral density of a process X(t) gives the complete form of the autocorrelation function when X(t) is at least wide sense stationary. For a non-stationary random process, we can get only the time average of the autocorrelation function.
Example 7.1

Determine which of the following functions are valid power density spectra and why.

(a) cos^8(ω)/(2 + ω^4)  (b) e^{-(ω-1)^2}  (c) ω^2/(ω^6 + 3ω^2 + 3)

Solution

(a) Given S_XX(ω) = cos^8(ω)/(2 + ω^4). From the properties of the psd:
(i) the given function is real and non-negative;
(ii) S_XX(-ω) = cos^8(-ω)/(2 + (-ω)^4) = S_XX(ω), so the function is even.
Hence the given function is a valid psd.

(b) Given S_XX(ω) = e^{-(ω-1)^2}. From the properties of the psd,

S_XX(-ω) = e^{-(ω+1)^2} ≠ S_XX(ω)

∴ The given function is not even and hence not a valid psd.

(c) Given S_XX(ω) = ω^2/(ω^6 + 3ω^2 + 3). From the properties of the psd:
(i) the given function is real and non-negative;
(ii) S_XX(-ω) = S_XX(ω), an even function.
∴ The given function is a valid psd.

7.4 Bandwidth of the Power Density Spectrum

Baseband process Assume that X(t) is a lowpass process, that is, its spectral components are clustered at the origin ω = 0. As frequency increases, the spectrum magnitude decreases, as shown in Fig. 7.1. A normalised measure of the spread of the power spectrum is called the rms bandwidth, obtained from

W_rms^2 = ∫_{-∞}^{∞} ω^2 S_XX(ω)dω / ∫_{-∞}^{∞} S_XX(ω)dω    (7.25)

Fig. 7.1 Baseband process

Bandpass process In a bandpass process, the spectral components are clustered near some frequencies ω_0 and -ω_0, as shown in Fig. 7.2. The mean frequency ω_0 can be expressed as

ω_0 = ∫_0^{∞} ω S_XX(ω)dω / ∫_0^{∞} S_XX(ω)dω    (7.26)

Fig. 7.2 Spectrum of a bandpass process

and the rms bandwidth is

W_rms^2 = 4∫_0^{∞} (ω - ω_0)^2 S_XX(ω)dω / ∫_0^{∞} S_XX(ω)dω    (7.27)
Example 7.2

The power density spectrum of a baseband random process X(t) is

S_XX(ω) = 2/[1 + (ω/2)^2]^2 = 32/(4 + ω^2)^2

Find the rms bandwidth.

Solution

Given the power density spectrum S_XX(ω) = 32/(4 + ω^2)^2.

The numerator integral is

∫_{-∞}^{∞} ω^2 S_XX(ω)dω = 32∫_{-∞}^{∞} ω^2/(4 + ω^2)^2 dω
= 32[ -ω/(2(4 + ω^2)) + (1/4)tan^{-1}(ω/2) ]_{-∞}^{∞} = 32[0 + π/4] = 8π

and the denominator is

∫_{-∞}^{∞} S_XX(ω)dω = 32∫_{-∞}^{∞} dω/(4 + ω^2)^2
= 32[ ω/(8(4 + ω^2)) + (1/16)tan^{-1}(ω/2) ]_{-∞}^{∞} = 32[0 + π/16] = 2π

Hence

W_rms^2 = 8π/2π = 4

W_rms = 2 rad/s
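Under the reconstructed spectrum S_XX(ω) = 32/(4+ω^2)^2, the rms bandwidth can be checked numerically. The truncation range and step count below are arbitrary choices of ours; the heavy 1/ω^2 tail of ω^2 S_XX(ω) makes convergence slow, hence the loose tolerance.

```python
import math

def S_xx(w):
    return 32.0 / (4.0 + w * w) ** 2

# W_rms^2 = (integral of w^2 S(w)) / (integral of S(w)), truncated at +/- 1000
n, W = 200_000, 1000.0
h = 2 * W / n
num = den = 0.0
for k in range(n + 1):
    w = -W + k * h
    wt = 0.5 if k in (0, n) else 1.0   # trapezoid end-point weights
    num += wt * w * w * S_xx(w)
    den += wt * S_xx(w)
num *= h
den *= h

W_rms = math.sqrt(num / den)
print(round(num / den, 3), round(W_rms, 3))
```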

7.5 Cross Power Density Spectrum

Definition 1
Consider two real random processes X(t) and Y(t). If X(t) and Y(t) are jointly wide sense stationary random processes, then the cross power density spectrum is defined as the Fourier transform of the cross-correlation function of X(t) and Y(t). It is expressed as

S_XY(ω) = ∫_{-∞}^{∞} R_XY(τ)e^{-jωτ}dτ    (7.28)

and

S_YX(ω) = ∫_{-∞}^{∞} R_YX(τ)e^{-jωτ}dτ    (7.29)

By inverse Fourier transformation, we can obtain the cross-correlation functions, i.e.,

R_XY(τ) = (1/2π)∫_{-∞}^{∞} S_XY(ω)e^{jωτ}dω    (7.30)

R_YX(τ) = (1/2π)∫_{-∞}^{∞} S_YX(ω)e^{jωτ}dω    (7.31)

Therefore, the cross psd and cross-correlation functions are a Fourier transform pair.

Definition 2
If X_T(ω) and Y_T(ω) are the Fourier transforms of X(t) and Y(t) respectively in the interval [-T, T], then the cross power density spectrum is defined as

S_XY(ω) = lim_{T→∞} E[X_T*(ω)Y_T(ω)]/2T    (7.32)

and

S_YX(ω) = lim_{T→∞} E[Y_T*(ω)X_T(ω)]/2T    (7.33)

7.5.1 Average Cross Power

The average cross power, P_XY, of the WSS random processes X(t) and Y(t) is defined as the cross-correlation function at τ = 0.

The cross-correlation function is

R_XY(τ) = inverse Fourier transform of S_XY(ω) = F^{-1}[1/(a + jω)^2]

We know from the Fourier transform that

1/(a + jω)^2 corresponds to te^{-at}u(t)

∴ R_XY(τ) = τe^{-aτ}u(τ)

Additional Problems

Example 7.4

Consider the random process

X(t) = A cos(ωt + θ)

where A and ω are real constants and θ is a random variable uniformly distributed over [0, 2π]. Find the average power P_XX.

Solution

We know that P_XX = A{E[X^2(t)]}. Now X(t) = A cos(ωt + θ) and

f_θ(θ) = 1/2π for 0 < θ ≤ 2π; 0 otherwise

E[X^2(t)] = ∫_0^{2π} A^2 cos^2(ωt + θ)(1/2π)dθ
= (A^2/4π)∫_0^{2π}[1 + cos(2ωt + 2θ)]dθ
= (A^2/4π)[2π + (1/2)sin(2ωt + 2θ)|_0^{2π}]
= (A^2/4π)[2π + 0] = A^2/2

The time average of this constant is the constant itself, so

P_XX = A{E[X^2(t)]} = lim_{T→∞} (1/2T)∫_{-T}^{T}(A^2/2)dt = A^2/2

Example 7.5

The psd of X(t) is given by

S_XX(ω) = 1 + ω^2 for |ω| < 1; 0 otherwise

Find the autocorrelation function. (May 2011)

Solution

The autocorrelation function is

R_XX(τ) = (1/2π)∫_{-1}^{1}(1 + ω^2)e^{jωτ}dω

Now

∫_{-1}^{1} e^{jωτ}dω = (e^{jτ} - e^{-jτ})/(jτ) = 2 sin τ/τ

and, integrating by parts twice,

∫_{-1}^{1} ω^2 e^{jωτ}dω = [ω^2 e^{jωτ}/(jτ)]_{-1}^{1} - (2/(jτ))∫_{-1}^{1} ω e^{jωτ}dω
= 2 sin τ/τ + 4 cos τ/τ^2 - 4 sin τ/τ^3

Hence

R_XX(τ) = (1/2π)[4 sin τ/τ + 4 cos τ/τ^2 - 4 sin τ/τ^3]

R_XX(τ) = (2/π)[sin τ/τ + cos τ/τ^2 - sin τ/τ^3]

Example 7.6

The autocorrelation of a WSS random process X(t) is given by

R_XX(τ) = A cos(ω_0 τ)

where A and ω_0 are constants. Find the psd.

Solution

We know that the power spectral density is the Fourier transform of R_XX(τ):

S_XX(ω) = ∫_{-∞}^{∞} R_XX(τ)e^{-jωτ}dτ = (A/2)∫_{-∞}^{∞}(e^{jω_0 τ} + e^{-jω_0 τ})e^{-jωτ}dτ
= (A/2)[2πδ(ω - ω_0) + 2πδ(ω + ω_0)]

S_XX(ω) = Aπ[δ(ω - ω_0) + δ(ω + ω_0)]

The power density spectrum is shown in Fig. 7.3.

Fig. 7.3 Power density spectrum of A cos ω_0 τ

Example 7.7

Find the psd of a WSS random process X(t) whose autocorrelation function is R_XX(τ) = ae^{-b|τ|}.

Solution

We know that the power spectral density is the Fourier transform of R_XX(τ):

S_XX(ω) = ∫_{-∞}^{∞} ae^{-b|τ|}e^{-jωτ}dτ
= a∫_{-∞}^{0} e^{(b-jω)τ}dτ + a∫_{0}^{∞} e^{-(b+jω)τ}dτ
= a[e^{(b-jω)τ}/(b-jω)]_{-∞}^{0} + a[e^{-(b+jω)τ}/(-(b+jω))]_{0}^{∞}
= (a/(b-jω))[1 - 0] - (a/(b+jω))[0 - 1]
= a/(b-jω) + a/(b+jω) = a(b+jω+b-jω)/(b^2+ω^2)

S_XX(ω) = 2ab/(b^2 + ω^2)
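A numerical check of the pair ae^{-b|τ|} to 2ab/(b^2+ω^2), for one illustrative set of constants (our assumptions, not the book's):

```python
import math

a, b, w = 2.0, 3.0, 1.5        # assumed illustrative constants

# S_XX(w): the integrand is even in tau, so the transform reduces to
# 2 * integral_0^inf a exp(-b*tau) cos(w*tau) dtau  (trapezoid rule, truncated)
n, T = 200_000, 20.0
h = T / n
s = 0.0
for k in range(n + 1):
    tau = k * h
    wt = 0.5 if k in (0, n) else 1.0
    s += wt * a * math.exp(-b * tau) * math.cos(w * tau)
S_numeric = 2 * s * h
S_closed = 2 * a * b / (b ** 2 + w ** 2)
print(round(S_numeric, 6), round(S_closed, 6))
```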

Example 7.8

A random process has autocorrelation function

R_XX(τ) = 1 - |τ| for |τ| ≤ 1; 0 otherwise

Find the psd and draw plots. (Nov 2010)

Solution

The power spectral density is

S_XX(ω) = ∫_{-1}^{1}(1 - |τ|)e^{-jωτ}dτ = 2∫_0^{1}(1 - τ)cos ωτ dτ

Now ∫_0^1 cos ωτ dτ = sin ω/ω and ∫_0^1 τ cos ωτ dτ = sin ω/ω + (cos ω - 1)/ω^2, so

S_XX(ω) = 2[(sin ω/ω) - (sin ω/ω + (cos ω - 1)/ω^2)] = 2(1 - cos ω)/ω^2

Using 1 - cos ω = 2 sin^2(ω/2),

S_XX(ω) = 4 sin^2(ω/2)/ω^2 = [sin(ω/2)/(ω/2)]^2

S_XX(ω) = Sa^2(ω/2)

Plots of the autocorrelation function R_XX(τ) and the power density spectrum are shown in Fig. 7.4.

Fig. 7.4 (a) Autocorrelation function
Fig. 7.4 (b) Power spectral density
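The triangle-to-Sa^2 pair can be verified numerically at a few frequencies (the frequencies and step count are our own choices):

```python
import math

def S_numeric(w, n=20_000):
    # integral_{-1}^{1} (1 - |tau|) cos(w*tau) dtau; the imaginary part
    # vanishes by symmetry, so only the cosine part is needed
    h = 2.0 / n
    s = 0.0
    for k in range(n + 1):
        tau = -1.0 + k * h
        wt = 0.5 if k in (0, n) else 1.0
        s += wt * (1 - abs(tau)) * math.cos(w * tau)
    return s * h

max_err = max(abs(S_numeric(w) - (math.sin(w / 2) / (w / 2)) ** 2)
              for w in (0.5, 2.0, 5.0))
print(max_err < 1e-6)
```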

Example 7.9

The cross power spectrum of X(t) and Y(t) is defined as

S_XY(ω) = K + jω for -W < ω < W; 0 elsewhere

where W > 0 and K are constants. Find the cross-correlation function.

Solution

We know that the cross-correlation function is

R_XY(τ) = (1/2π)∫_{-W}^{W}(K + jω)e^{jωτ}dω

Now the integrations give

∫_{-W}^{W} e^{jωτ}dω = 2 sin Wτ/τ

and

∫_{-W}^{W} ω e^{jωτ}dω = 2j[sin Wτ/τ^2 - W cos Wτ/τ]

Hence

R_XY(τ) = (K/π)(sin Wτ/τ) + (j/2π)·2j[sin Wτ/τ^2 - W cos Wτ/τ]
= (K/πτ)sin Wτ - sin Wτ/(πτ^2) + (W/πτ)cos Wτ

R_XY(τ) = (1/πτ)[(K - 1/τ)sin Wτ + W cos Wτ]

More Solved Examples

Example 7.10

Find the autocorrelation function and power spectral density of the random process X(t) = A cos(ω_0 t + θ), where θ is a random variable over the ensemble and is uniformly distributed over the range (0, 2π). (June 2003)

Solution

Given the random process X(t) = A cos(ω_0 t + θ) and

f_θ(θ) = 1/2π for 0 ≤ θ ≤ 2π

(i) The autocorrelation function is

R_XX(τ) = E[X(t)X(t+τ)] = E[A cos(ω_0 t + θ) A cos(ω_0(t+τ) + θ)]
= A^2 E[cos(ω_0 t + θ)cos(ω_0 t + ω_0 τ + θ)]
= (A^2/2)E[cos(2ω_0 t + ω_0 τ + 2θ) + cos ω_0 τ]
= (A^2/4π)∫_0^{2π} cos ω_0 τ dθ + (A^2/4π)∫_0^{2π} cos(2ω_0 t + ω_0 τ + 2θ)dθ

∴ R_XX(τ) = (A^2/2)cos ω_0 τ

(ii) The power spectral density is the Fourier transform of R_XX(τ):

S_XX(ω) = (A^2/2)∫_{-∞}^{∞} cos ω_0 τ e^{-jωτ}dτ = (A^2/4)∫_{-∞}^{∞}(e^{jω_0 τ} + e^{-jω_0 τ})e^{-jωτ}dτ
= (A^2/4)·2π[δ(ω - ω_0) + δ(ω + ω_0)]

S_XX(ω) = (A^2 π/2)[δ(ω - ω_0) + δ(ω + ω_0)]

Example 7.11

The autocorrelation function of a 'vVSS random process is Rxx(1') = aexp(-(1' I bi). Find
the power spectral density and normalised average power of the signal.
(Nov 2002, May 2011)
Solution

Given the autocorrelation

Then the power spectral density is


Sxx(m) = (Rxx(1')e-/())Td1'

460

Probability Theory and Stochastic Processes

.r:

exp (

IF- j{fJ<

J.n

Converting the exponential term to get a perfect square

((~+ J~. b)2J exp (.J~b )12 dr

=a [exp -

bdz

J2
abe

J2

-z'

[e T

We know that

(ii)

A vcragt;; puwer of the signal is


Pxx =Rxx(O)

Given

Rxx('r;)

= aexp (

_;2 J

At-r =0, Rxx(O) =aexp(O) = a watts

dz

Random Processes: Spectral Characteristics 461

(or) We can also find it from Sxx(ω). The average power is

Pxx = (1/2π) ∫ Sxx(ω) dω = (ab√π/2π) ∫ e^{−ω²b²/4} dω

Let z = ωb/√2, or ω = √2 z/b, so that dω = (√2/b) dz. Then

Pxx = (ab√π/2π)(√2/b) ∫ e^{−z²/2} dz = (a/√(2π)) √(2π)

Pxx = a(1) = a watts
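The closed form Sxx(ω) = ab√π e^{−ω²b²/4} can be cross-checked by integrating the autocorrelation directly on a fine grid. The sketch below is not part of the text; a, b and the test frequency are arbitrary values chosen for the check.

```python
import numpy as np

# Numeric sketch (assumed values, not from the text): evaluate
# Sxx(w) = int a*exp(-(tau/b)**2) * exp(-1j*w*tau) dtau on a grid and
# compare with the closed form a*b*sqrt(pi)*exp(-(w*b)**2/4).
a, b, w = 3.0, 0.8, 2.5
tau = np.linspace(-20.0, 20.0, 400_001)
dtau = tau[1] - tau[0]
integrand = a * np.exp(-(tau / b) ** 2) * np.exp(-1j * w * tau)
numeric = integrand.real.sum() * dtau          # Riemann sum of the real part
closed_form = a * b * np.sqrt(np.pi) * np.exp(-(w * b) ** 2 / 4.0)
print(numeric, closed_form)
```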

Example 7.12

Two independent stationary random processes X(t) and Y(t) have power spectral densities
Sxx(ω) = 16/(ω² + 16) and Syy(ω) = ω²/(ω² + 1) respectively, with zero means. Let another
random process be U(t) = X(t) + Y(t). Then find (i) the psd of U(t), (ii) Sxy(ω) and (iii) Sxu(ω).

(June 1999)
Solution

Given X(t) and Y(t) are independent random processes with

X̄ = Ȳ = 0

(i) The power density of U(t) is

Suu(ω) = Sxx(ω) + Syy(ω) = 16/(ω² + 16) + ω²/(ω² + 1)

(ii) Since X(t) and Y(t) are independent with zero means, they are uncorrelated, so

Sxy(ω) = 2π X̄ Ȳ δ(ω) = 0

(iii) Sxu(ω) = Fourier transform of Rxu(τ)

Now

Rxu(τ) = E[X(t)U(t + τ)]
= E[X(t)(X(t + τ) + Y(t + τ))]
= E[X(t)X(t + τ)] + E[X(t)Y(t + τ)]
= Rxx(τ) + Rxy(τ)

Sxu(ω) = Fourier transform of [Rxx(τ) + Rxy(τ)] = Sxx(ω) + Sxy(ω)

∴ Sxu(ω) = 16/(ω² + 16)

Example 7.13

Find the cross-correlation function for the psd Sxy(ω) = 1/(25 + ω²).

Solution

Given the psd

Sxy(ω) = 1/(25 + ω²)

The cross-correlation function is

Rxy(τ) = F⁻¹[Sxy(ω)] = F⁻¹[1/(25 + ω²)]

We know from the Fourier transform that

e^{−a|τ|} ↔ 2a/(a² + ω²),  a > 0, a constant

∴ Rxy(τ) = (1/(2 × 5)) e^{−5|τ|} = 0.1 e^{−5|τ|}

Example 7.14

Find the cross power spectral density if (a) Rxy(τ) = (A²/2) sin(ω₀τ) and (b) Rxy(τ) = (A²/2) cos(ω₀τ).

Solution

(a) Given Rxy(τ) = (A²/2) sin(ω₀τ). The cross power spectral density is

Sxy(ω) = F[(A²/2) sin ω₀τ] = (jπA²/2) [δ(ω + ω₀) − δ(ω − ω₀)]

(b) Given Rxy(τ) = (A²/2) cos(ω₀τ).

Sxy(ω) = F[(A²/2) cos ω₀τ] = (πA²/2) [δ(ω − ω₀) + δ(ω + ω₀)]

Both power density spectra are shown in Fig. 7.5.

Fig. 7.5 (a) psd for (A²/2) sin ω₀τ  (b) psd for (A²/2) cos ω₀τ

Example 7.15

X(t) is a stationary random process with spectral density Sxx(ω). Y(t) is another independent
random process, Y(t) = A cos(ω_c t + θ), where θ is a random variable uniformly distributed
over (−π, π). Find the spectral density function of Z(t) = X(t)Y(t).

Solution

Given two independent random processes X(t) and Y(t) = A cos(ω_c t + θ).

The power spectral density of X(t) is Sxx(ω), and θ is uniformly distributed over (−π, π):

f_Θ(θ) = 1/2π for −π < θ < π, and 0 otherwise

The autocorrelation of Y(t) is

Ryy(τ) = E[Y(t)Y(t + τ)] = ∫ y(t) y(t + τ) f_Θ(θ) dθ = (A²/2) cos ω_c τ

Now Z(t) = X(t)Y(t), so

Rzz(τ) = E[X(t)Y(t)X(t + τ)Y(t + τ)]

Since X(t) and Y(t) are independent,

Rzz(τ) = E[X(t)X(t + τ)] E[Y(t)Y(t + τ)] = Rxx(τ) (A²/2) cos ω_c τ

The power density spectrum is

Szz(ω) = F[Rzz(τ)] = (A²/2) F[Rxx(τ) cos ω_c τ]

Szz(ω) = (A²/4) [Sxx(ω + ω_c) + Sxx(ω − ω_c)]

Example 7.16

If the autocorrelation of a WSS process is Rxx(τ) = K e^{−K|τ|}, show that its spectral density is
given by

S(ω) = 2 / (1 + (ω/K)²)

(Nov 2010)

Solution

Given the random process is WSS. The autocorrelation function is Rxx(τ) = K e^{−K|τ|}. We
know that the power spectral density is

Sxx(ω) = ∫ K e^{−K|τ|} e^{−jωτ} dτ

= K [∫_{−∞}^{0} e^{(K−jω)τ} dτ + ∫_{0}^{∞} e^{−(K+jω)τ} dτ]

= K [1/(K − jω) + 1/(K + jω)]

= 2K²/(K² + ω²)

Sxx(ω) = 2 / (1 + (ω/K)²)
Example 7.17

Consider a WSS random process X(t) with power spectral density Sxx(ω). Another random
process is given by Y(t) = X(t + T) + X(t − T), where T is a constant. Find the power
spectrum of Y(t).

Solution

Given a WSS random process X(t) with psd Sxx(ω), and

Y(t) = X(t + T) + X(t − T)

Let the autocorrelation function of X(t) be Rxx(τ). We know that the autocorrelation
function of Y(t) is

Ryy(τ) = E[Y(t)Y(t + τ)]
= E[(X(t + T) + X(t − T))(X(t + T + τ) + X(t − T + τ))]
= E[X(t + T)X(t + T + τ)] + E[X(t + T)X(t − T + τ)]
  + E[X(t − T)X(t + T + τ)] + E[X(t − T)X(t − T + τ)]

Let t + T = t₁ and t − T = t₂. Then

Ryy(τ) = E[X(t₁)X(t₁ + τ)] + E[X(t₁)X(t₁ + τ − 2T)]
  + E[X(t₂)X(t₂ + τ + 2T)] + E[X(t₂)X(t₂ + τ)]

or

Ryy(τ) = 2Rxx(τ) + Rxx(τ − 2T) + Rxx(τ + 2T)

Now the power density spectrum is

Syy(ω) = F[Ryy(τ)]

= ∫ [2Rxx(τ) + Rxx(τ − 2T) + Rxx(τ + 2T)] e^{−jωτ} dτ

= 2∫ Rxx(τ) e^{−jωτ} dτ + ∫ Rxx(τ − 2T) e^{−jω(τ−2T)} e^{−jω2T} dτ
  + ∫ Rxx(τ + 2T) e^{−jω(τ+2T)} e^{+jω2T} dτ

= 2Sxx(ω) + e^{−jω2T} Sxx(ω) + e^{jω2T} Sxx(ω)

= (2 + e^{jω2T} + e^{−jω2T}) Sxx(ω)

= (2 + 2cos 2ωT) Sxx(ω)

Syy(ω) = 4cos²(ωT) Sxx(ω)
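The factor 4cos²(ωT) is simply |e^{jωT} + e^{−jωT}|², the squared magnitude of the system's frequency response. A quick numerical sketch (not from the text; the ω grid and the value of T are arbitrary) confirms the identity:

```python
import numpy as np

# Sketch (assumed values, not from the text): Y(t) = X(t+T) + X(t-T) has
# frequency response H(w) = exp(1j*w*T) + exp(-1j*w*T), so |H(w)|^2 should
# equal 4*cos(w*T)**2 at every frequency.
w = np.linspace(-10.0, 10.0, 501)
T = 0.3
H = np.exp(1j * w * T) + np.exp(-1j * w * T)
lhs = np.abs(H) ** 2
rhs = 4.0 * np.cos(w * T) ** 2
print(np.max(np.abs(lhs - rhs)))
```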
Example 7.18

Consider the random process X(t) = A cos(ω₀t + θ), where A and ω₀ are real constants
and θ is a random variable uniformly distributed on the interval (0, π/2). Find the average
power of X(t).

Solution

Given the random process X(t) = A cos(ω₀t + θ), with θ uniformly distributed over (0, π/2):

f_Θ(θ) = 2/π for 0 < θ < π/2, and 0 otherwise

The power is

Pxx(t) = E[X²(t)] = A² ∫₀^{π/2} cos²(ω₀t + θ) (2/π) dθ

= (A²/π) ∫₀^{π/2} [1 + cos(2ω₀t + 2θ)] dθ

= (A²/π) [π/2 + (1/2)(sin(2ω₀t + π) − sin 2ω₀t)]

= (A²/π) [π/2 − sin 2ω₀t]

= A²/2 − (A²/π) sin 2ω₀t

Since this power is a function of time, the time-average power is

Pxx = A[Pxx(t)] = lim_{T→∞} (1/2T) ∫_{−T}^{T} [A²/2 − (A²/π) sin 2ω₀t] dt

The sine term averages to zero, so

Pxx = A²/2 watts


Example 7.19

The power spectral density of a stationary random process is given by

Sxx(ω) = A for −K < ω < K, and 0 otherwise

Find the autocorrelation function.

(Feb 2008)

Solution

We know that the autocorrelation function is

Rxx(τ) = (1/2π) ∫_{−K}^{K} A e^{jωτ} dω = (A/πτ) sin(Kτ) = (AK/π) · sin(Kτ)/(Kτ)

Example 7.20

A stationary random process X(t) has the autocorrelation function Rxx(τ) = 10 + 5cos(2τ)
+ 10e^{−2|τ|}. Find the dc and ac average powers of X(t).

(Nov 2003)

Solution

Given

Rxx(τ) = 10 + 5cos(2τ) + 10e^{−2|τ|}

(a) The average power of the dc component (the term without τ) is

P_dc = 10 watts

(b) The average power of the ac components (the τ-dependent terms at τ = 0) is

P_ac = 5 + 10 = 15 watts

P_ac is also called the variance of the process X(t).

(c) The average power of the process X(t) is

Pxx = Rxx(0) = 10 + 5 + 10 = 25 watts


Example 7.21

Assume that the ergodic random process X(t) has an autocorrelation function

Rxx(τ) = 18 + [2/(6 + τ²)][1 + 4cos(12τ)]

What is the average power of X(t)?

(May 2010, May 2011)

Solution

Given X(t) is an ergodic process with

Rxx(τ) = 18 + [2/(6 + τ²)][1 + 4cos(12τ)]

The average power in X(t) is

Pxx = Rxx(τ)|_{τ=0} = 18 + (2/(6 + 0))(1 + 4cos 0) = 18 + (1/3)(1 + 4) = 18 + 5/3

Pxx = 19.67 watts

Example 7.22

The cross spectral density of two random processes X(t) and Y(t) is given as

Sxy(ω) = 1 / ((jω)² + j4ω + 4)

Find the cross-correlation function.

Solution

Given the cross spectral density

Sxy(ω) = 1 / ((jω)² + j4ω + 4) = 1 / (2 + jω)²

The cross-correlation function is

Rxy(τ) = F⁻¹[Sxy(ω)] = F⁻¹[1/(2 + jω)²] = τ e^{−2τ} u(τ)   (From Appendix B)
Example 7.23

The power density spectrum of a WSS process is

Sxx(ω) = 4πδ(ω) + 3πδ(ω − 5π) + 3πδ(ω + 5π) + 2πδ(ω − 4) + 2πδ(ω + 4)

(a) What are the frequencies in X(t)?
(b) Find the mean, variance and average power of X(t).

Solution

Given the random process is WSS with psd

Sxx(ω) = 4πδ(ω) + 3πδ(ω − 5π) + 3πδ(ω + 5π) + 2πδ(ω − 4) + 2πδ(ω + 4)

Using the inverse Fourier transform, the autocorrelation function is

Rxx(τ) = 2 + 3cos(5πτ) + 2cos(4τ)

(a) ∴ The frequencies in X(t) are ω₁ = 5π and ω₂ = 4, that is,

f₁ = 5π/2π = 2.5 Hz and f₂ = 4/2π = 2/π Hz

(b) The mean value of X(t) follows from the dc component of Rxx(τ):

E[X(t)]² = 2, so E[X(t)] = ±√2

The average power or the mean square value of X(t) is Rxx(0) = 2 + 3 + 2 = 7 watts.

The variance of X(t) is

σ_X² = Rxx(0) − E[X(t)]² = 7 − 2 = 5

Example 7.24

Given a random process X(t) = A cos ω₀t, where ω₀ is a constant and A is a uniformly
distributed random variable with mean 5 and variance 2. Find the average power of X(t).

Solution

Given

X(t) = A cos ω₀t,  E[A] = 5,  σ_A² = 2

Then

E[A²] = σ_A² + E[A]² = 2 + 5² = 27

The autocorrelation of X(t) is

Rxx(t, t + τ) = E[X(t)X(t + τ)]
= E[A cos(ω₀t) A cos ω₀(t + τ)]
= E[A²] cos(ω₀t) cos ω₀(t + τ)

Rxx(t, t + τ) = 27 cos(ω₀t) cos ω₀(t + τ)

We know that the mean square value is

E[X²(t)] = lim_{τ→0} Rxx(t, t + τ) = 27 cos² ω₀t

The average power is

Pxx = lim_{T→∞} (1/2T) ∫_{−T}^{T} E[X²(t)] dt

= lim_{T→∞} (27/2T) ∫_{−T}^{T} [1 + cos 2ω₀t]/2 dt = 27/2

Pxx = 13.5 watts
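A Monte Carlo sketch of this result (not from the text; the uniform distribution's half-width √6 is chosen so the variance of A comes out to 2, and ω₀ and the averaging interval are arbitrary):

```python
import numpy as np

# Monte Carlo sketch (assumed values, not from the text): X(t) = A*cos(w0*t)
# with A uniform, E[A]=5, Var[A]=2, so E[A^2]=27.  Time-averaging
# E[X^2(t)] = E[A^2]*cos^2(w0*t) over whole periods should give 27/2 = 13.5 W.
rng = np.random.default_rng(1)
w0 = 2.0
half_width = np.sqrt(6.0)      # uniform on 5 +/- sqrt(6) has variance 2
A = rng.uniform(5.0 - half_width, 5.0 + half_width, size=100_000)
t = np.linspace(0.0, 50.0 * np.pi, 20_001)   # whole number of periods of cos^2
power = np.mean(A**2) * np.mean(np.cos(w0 * t) ** 2)
print(power)   # should be close to 13.5
```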


Example 7.25

Find the average power of the random process X(t) which has the power spectral density

Sxx(ω) = 1 for −4π ≤ ω ≤ 4π, and 0 otherwise

Solution

Given the psd of X(t) is

Sxx(ω) = 1 for −4π ≤ ω ≤ 4π, and 0 otherwise

We know that the average power is

Pxx = E[X²(t)] = (1/2π) ∫ Sxx(ω) dω

= (1/2π) ∫_{−4π}^{4π} 1 dω = (1/2π) [ω]_{−4π}^{4π} = (1/2π)[4π − (−4π)] = (1/2π)(8π) = 4

Pxx = 4 watts

Example 7.26

A random process has the power density spectrum S̄xx(ω) = 6ω²/(1 + ω²). Find the average
power in the process.

(May 2011)

Solution

Given

S̄xx(ω) = 6ω²/(1 + ω²)

The average power of X(t) is

Pxx = (1/2π) ∫ S̄xx(ω) dω = (6/2π) ∫ ω²/(1 + ω²) dω   (From Appendix A)

Pxx = 1.06 watts


Example 7.27

Find the average power of the WSS random process X(t) which has the power spectral density

Sxx(ω) = (ω² − 17) / ((ω² + 49)(ω² + 16))

Solution

We know that the average power of X(t) is

Pxx = E[X²(t)] = (1/2π) ∫ Sxx(ω) dω

= (1/2π) ∫ (ω² − 17)/((ω² + 49)(ω² + 16)) dω

= (1/2π) ∫ [2/(ω² + 49) − 1/(ω² + 16)] dω

= (1/2π) [(2/7) tan⁻¹(ω/7) − (1/4) tan⁻¹(ω/4)]_{−∞}^{∞}

= (1/2π) [(2/7)π − (1/4)π] = (1/2π)(π/28)

Pxx = 1/56 watts
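The partial-fraction step and the final value 1/56 can be sanity-checked by direct numerical integration. This sketch is not in the text; the finite integration limits are an assumption of the check (the tail beyond them is negligible).

```python
import numpy as np

# Numeric sketch (assumed limits, not from the text): average power of
# Sxx(w) = (w^2 - 17)/((w^2+49)(w^2+16)), expected Pxx = 1/56.
w = np.linspace(-5000.0, 5000.0, 2_000_001)
dw = w[1] - w[0]
sxx = (w**2 - 17.0) / ((w**2 + 49.0) * (w**2 + 16.0))
pxx = sxx.sum() * dw / (2.0 * np.pi)
print(pxx, 1.0 / 56.0)
```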

Example 7.28

A WSS random process X(t) has psd

Sxx(ω) = ω² / (ω⁴ + 10ω² + 9)

Find the autocorrelation and mean square value of the process.

Solution

Given the WSS random process with psd

Sxx(ω) = ω² / ((ω² + 1)(ω² + 9))

The autocorrelation function is

Rxx(τ) = F⁻¹[Sxx(ω)] = F⁻¹[ω²/((ω² + 1)(ω² + 9))]

= F⁻¹[(1/8)(9/(ω² + 9) − 1/(ω² + 1))]

We know that e^{−a|τ|} ↔ 2a/(a² + ω²)   (From Appendix A)

∴ Rxx(τ) = (3/16) e^{−3|τ|} − (1/16) e^{−|τ|}

The mean square value of the process X(t) is

E[X²(t)] = Rxx(0) = 3/16 − 1/16 = 1/8 watts

Example 7.29

The spectral density of a WSS random process X(t) is given by

Sxx(ω) = ω² / (ω⁴ + 13ω² + 36)

Find the autocorrelation and average power of the process.

Solution

Given the WSS random process with psd

Sxx(ω) = ω² / ((ω² + 4)(ω² + 9)) = (1/5)[9/(ω² + 9) − 4/(ω² + 4)]

The autocorrelation function is

Rxx(τ) = F⁻¹[Sxx(ω)]

We know that e^{−a|τ|} ↔ 2a/(a² + ω²), so

Rxx(τ) = F⁻¹[(9/30) · 6/(ω² + 9) − (1/5) · 4/(ω² + 4)]

Rxx(τ) = (3/10) e^{−3|τ|} − (1/5) e^{−2|τ|}

The average power is

Pxx = Rxx(τ)|_{τ=0} = 3/10 − 1/5 = 1/10

Pxx = 0.1 watts
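Both the autocorrelation formula and the average power of this example can be verified by numerically inverting the psd. The sketch below is not from the text; the integration limits and the test value of τ are arbitrary choices for the check.

```python
import numpy as np

# Numeric sketch (assumed values, not from the text): for
# Sxx(w) = w^2/((w^2+4)(w^2+9)), check Pxx = 0.1 W and the formula
# Rxx(tau) = 0.3*exp(-3|tau|) - 0.2*exp(-2|tau|) at one point.
w = np.linspace(-5000.0, 5000.0, 2_000_001)
dw = w[1] - w[0]
sxx = w**2 / ((w**2 + 4.0) * (w**2 + 9.0))
pxx = sxx.sum() * dw / (2.0 * np.pi)
tau = 0.1                                    # arbitrary test point
rxx_numeric = (sxx * np.cos(w * tau)).sum() * dw / (2.0 * np.pi)
rxx_formula = 0.3 * np.exp(-3.0 * abs(tau)) - 0.2 * np.exp(-2.0 * abs(tau))
print(pxx, rxx_numeric, rxx_formula)
```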

Questions
(1)

State and prove any four properties ofthe power spectral density of a random process.

(Feb 2007, Feb 2008)


(2) State and prove the Wiener - Khintchin relations.
or
Power spectrum and autocorrelation functions are a Fourier transform pair. Prove
this statement.
(Nov 2006, Feb 2007, Nov 2007, May 2011)
(3) The psd of a random process is given by

Sxx(ω) = 1 for |ω| < 1, and 0 elsewhere

Find its autocorrelation function.

(Feb 2008)


8.4 Spectral Characteristics of a System Response

Consider that the random process X(t) is a wide sense stationary process with the auto-
correlation function Rxx(τ) applied through an LTI system. It is noted that the output
response Y(t) is also WSS, and the processes X(t) and Y(t) are jointly WSS. We can obtain
the power spectral characteristics of the output process Y(t) by taking the Fourier transform of
the correlation functions.

8.4.1 Power Density Spectrum of Response


Consider that a random process X(t) is applied on an LTI system having a transfer function
H(ω). The output response is Y(t).
If the power spectrum of the input process is Sxx(ω), then the power spectrum of the
output response is given by

Syy(ω) = |H(ω)|² Sxx(ω)   (8.33)

Proof

Let Ryy(τ) be the autocorrelation of the output response Y(t). Then the power spectrum of
the response is the Fourier transform of Ryy(τ):

Syy(ω) = ∫ Ryy(τ) e^{−jωτ} dτ   (8.34)

We know that

Ryy(τ) = ∫∫ Rxx(τ + τ₁ − τ₂) h(τ₁) h(τ₂) dτ₁ dτ₂

Then

Syy(ω) = ∫∫∫ Rxx(τ + τ₁ − τ₂) h(τ₁) h(τ₂) dτ₁ dτ₂ e^{−jωτ} dτ

= ∫ h(τ₁) e^{jωτ₁} [∫ h(τ₂) e^{−jωτ₂} [∫ Rxx(τ + τ₁ − τ₂) e^{−jω(τ+τ₁−τ₂)} dτ] dτ₂] dτ₁

Let t = τ + τ₁ − τ₂, so that dτ = dt. Then

Syy(ω) = ∫ h(τ₁) e^{jωτ₁} dτ₁ ∫ h(τ₂) e^{−jωτ₂} dτ₂ ∫ Rxx(t) e^{−jωt} dt

Linear System with Random Processes

501

From (8.8),

Syy(ω) = H*(ω) H(ω) Sxx(ω) = H(−ω) H(ω) Sxx(ω)

Syy(ω) = |H(ω)|² Sxx(ω)

Proved.

Similarly, we can prove that the cross power spectral density functions are

Sxy(ω) = Sxx(ω) H(ω)   (8.35)

and

Syx(ω) = Sxx(ω) H(−ω)   (8.36)

∴ From (8.27),

Syy(ω) = Sxy(ω) H(−ω)   (8.37)

and

Syy(ω) = Syx(ω) H(ω)   (8.38)

Example 8.4

A random process X(t) whose mean value is 2 and autocorrelation function is Rxx(τ) = 4e^{−2|τ|}
is applied to a system whose transfer function is 1/(2 + jω). Find the mean value, autocorrelation,
power density spectrum and average power of the output signal Y(t).
Solution

Given

Rxx(τ) = 4e^{−2|τ|}, X̄ = 2 and H(ω) = 1/(2 + jω)

∴ Sxx(ω) = 16/(ω² + 4)

The mean value of Y(t) is E[Y(t)] = E[X] H(0), so

Ȳ = 2 × (1/2) = 1

The power density spectrum of Y(t) is

Syy(ω) = |H(ω)|² Sxx(ω) = [1/(ω² + 4)] × [16/(ω² + 4)] = 16/(ω² + 4)²

The autocorrelation of Y(t) is

Ryy(τ) = F⁻¹[Syy(ω)] = F⁻¹[16/(ω² + 4)²] = (1/2)(1 + 2|τ|) e^{−2|τ|}

Average power of input: Pxx = Rxx(0) = 4 watts

Average power of output: Pyy = Ryy(0) = 0.5 watts
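The output power Pyy = Ryy(0) = 0.5 W can be confirmed by integrating the output psd numerically. This sketch is not in the text; the finite integration limits are an assumption of the check.

```python
import numpy as np

# Numeric sketch (assumed limits, not from the text): output power of
# Example 8.4, Pyy = (1/2pi) * int 16/(w^2+4)^2 dw, expected 0.5 W.
w = np.linspace(-500.0, 500.0, 1_000_001)
syy = 16.0 / (w**2 + 4.0) ** 2
pyy = syy.sum() * (w[1] - w[0]) / (2.0 * np.pi)
print(pyy)
```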

8.4.2 Spectrum Bandwidth


The power spectral density is mostly concentrated at a certain frequency value. It decreases
at other frequencies. The bandwidth of the spectrum is the range of frequencies having
significant values. It is defined as "the measure of spread of the spectral density" and is also
called the rms or normalised bandwidth. It is given by

W_rms² = ∫ ω² Sxx(ω) dω / ∫ Sxx(ω) dω   (8.39)

8.5 Types of Random Processes

In practical situations, random processes can be categorised into different types depending
on their frequency components. For example, information-bearing signals such as audio,
video and modulated waveforms carry the information within a specified frequency
band.
Some important types of random processes are
(1) Lowpass random processes
(2) Bandpass random processes
(3) Band limited random processes
(4) Narrow band random processes

8.13 Equivalent Noise Temperature

It is defined as the noise temperature with respect to the available noise power or power
spectral density. For a passive RLC network, the noise temperature is given by

T = N/(kB) kelvin   (8.70)

This is the same as the physical temperature. For active devices, where the power or power
spectral density is amplified, the temperature increases as the density increases. This
temperature is called the equivalent or effective noise temperature.
The effective noise temperature is

T_e = N/(kB)

Also

S_NN(ω) = kT_n/2 watts/Hz   (8.71)

or

T_n = 2S_NN(ω)/k

The equivalent noise temperature depends on the power spectral density of the noise.

Series resistance

If two resistors are in series and operating at different temperatures as shown in Fig. 8.18(a),
then the noise temperature at the output is given by

T_n = (R₁T₁ + R₂T₂)/(R₁ + R₂)

Fig. 8.18 (a) Two series resistors at T₁ and T₂

Proof

We know that for resistor R₁ the mean square noise voltage is

v²_n1 = 4kT₁R₁B   (8.72)

For R₂,

v²_n2 = 4kT₂R₂B   (8.73)

If T_n is the noise temperature of the series combination R = R₁ + R₂, then

4kT_nBR = 4kB(R₁T₁ + R₂T₂)

or

T_n = (R₁T₁ + R₂T₂)/R = (R₁T₁ + R₂T₂)/(R₁ + R₂)   (8.74)

Parallel resistance

If two resistors R₁ and R₂ at temperatures T₁ and T₂ respectively are connected in parallel,
the noise temperature of the combination can be obtained as follows.

Fig. 8.18 (b) Two parallel resistors at T₁ and T₂

From Fig. 8.18(b), the mean square noise current of each resistor is i²_ni = 4kT_iG_iB, where
G_i = 1/R_i. Equating the total noise current to that of the combination,

4kT_nGB = 4kB(G₁T₁ + G₂T₂), with G = G₁ + G₂

so that

T_n = (G₁T₁ + G₂T₂)/(G₁ + G₂) = (R₂T₁ + R₁T₂)/(R₁ + R₂)
Example 8.7

Two resistors of 3 kΩ and 6 kΩ are at temperatures 400 K and 300 K respectively. Find
the psd of the noise voltage at the output when the resistances are in (a) series and (b) parallel.

Solution

Given

R₁ = 3 kΩ, T₁ = 400 K; R₂ = 6 kΩ, T₂ = 300 K

(a) When the resistors are in series (Fig. 8.19(a)):

Fig. 8.19 (a) Series resistance

The noise temperature is

T_n = (R₁T₁ + R₂T₂)/(R₁ + R₂) = (3000 × 400 + 6000 × 300)/(3000 + 6000)

= 3000000/9000 = 1000/3 = 333.33 K

Also

R_eq = R₁ + R₂ = 9000 Ω

∴ The power spectral density is

S_n(ω) = 2kT_nR_eq = 2 × 1.38 × 10⁻²³ × (1000/3) × 9000

S_n(ω) = 8.28 × 10⁻¹⁷ watts/Hz

(b) When the resistances are in parallel (Fig. 8.19(b)):

Fig. 8.19 (b) Parallel resistance

For parallel resistance, the noise temperature is

T_n = (R₂T₁ + R₁T₂)/(R₁ + R₂) = (400 × 6000 + 300 × 3000)/9000

= 3300000/9000 = 1100/3 = 366.67 K

and R_eq = R₁R₂/(R₁ + R₂) = 2000 Ω

S_n(ω) = 2kT_nR_eq = 2 × 1.38 × 10⁻²³ × 366.67 × 2000 = 2.024 × 10⁻¹⁷ watts/Hz
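The arithmetic of Example 8.7 can be reproduced in a few lines. This sketch is not from the text; it simply evaluates the series and parallel formulas with the example's values.

```python
# Arithmetic sketch (values from Example 8.7): series and parallel noise
# temperatures and output noise psds for R1 = 3 kOhm at 400 K and
# R2 = 6 kOhm at 300 K.
k = 1.38e-23
R1, T1 = 3000.0, 400.0
R2, T2 = 6000.0, 300.0

# Series: Tn = (R1*T1 + R2*T2)/(R1+R2), Req = R1 + R2
Tn_series = (R1 * T1 + R2 * T2) / (R1 + R2)
psd_series = 2.0 * k * Tn_series * (R1 + R2)

# Parallel: Tn = (T1/R1 + T2/R2)/(1/R1 + 1/R2), Req = R1*R2/(R1+R2)
Tn_par = (T1 / R1 + T2 / R2) / (1.0 / R1 + 1.0 / R2)
psd_par = 2.0 * k * Tn_par * (R1 * R2 / (R1 + R2))
print(Tn_series, psd_series, Tn_par, psd_par)
```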


8.14 Noise Through Two Port Networks

Consider a two port network with input spectral density S_ni(ω), output spectral density
S_no(ω), and network transfer function H(ω), as shown in Fig. 8.20.

Fig. 8.20 Two port network

When the message signal is passed through the two port network, the noise available at
the output is contributed by (i) input noise and (ii) noise generated internally by the system.
Since the resistors and active devices in a two port network act as independent noise
sources, the signal quality degrades at the output. The signal which is corrupted by the noise
at the output can be measured in terms of signal-to-noise ratio, noise figure, equivalent noise
temperature, etc.
The output noise spectral density is

S_no(ω) = S_ni(ω) |H(ω)|²

The output noise power is

N_o = (1/2π) ∫ S_no(ω) dω

and the rms noise voltage is

v_n = [(1/2π) ∫ S_no(ω) dω]^{1/2}   (8.75)

8.15 Signal-to-Noise Ratio

The ratio of the signal power to the accompanying noise power is called the signal-to-noise
ratio. It is given by

S/N = Signal power / Noise power = v_s²/v_n²

or

S/N = S_s(ω)/S_n(ω) = Signal power spectral density / Noise power spectral density

The input signal-to-noise ratio is

(S/N)_i = S_si(ω)/S_ni(ω)

The output signal-to-noise ratio is

(S/N)_o = S_so(ω)/S_no(ω)

where

S_si(ω) = input signal power spectral density
S_so(ω) = output signal power spectral density
S_ni(ω) = input noise power spectral density
S_no(ω) = output noise power spectral density

The signal-to-noise ratio at the output of a noise-free two port network is the same as at
the input. But for noisy networks, since the noise power increases, the signal-to-noise ratio at
the output deteriorates.

8.16 Available Power Gain

The available power gain of a system is defined as

G_a(ω) = Maximum psd of the signal at the output / Maximum psd of the signal at the input   (8.76)

The available output power is

P_ao = (1/2π) ∫ S_so(ω) dω = (1/2π) ∫ G_a(ω) S_si(ω) dω

8.17 Equivalent Noise Bandwidth

The equivalent noise bandwidth of a system is defined as the bandwidth of the ideal system
which has the same output power as the actual system with infinite bandwidth.
It is expressed as

W_N = ∫₀^∞ |H(ω)|² dω / |H(0)|²  rad/s   (8.77)

and

B_N = W_N/2π  Hz

Proof

Consider a low pass filter as shown in Fig. 8.21, and assume that white noise of density
N₀/2 is applied at the input of the filter.

Fig. 8.21 Low pass filter

The output power is

P_yy = (1/2π) ∫ (N₀/2) |H(ω)|² dω = (N₀/2π) ∫₀^∞ |H(ω)|² dω   (8.78)

Now consider an ideal system with bandwidth 2W_N. Its transfer function is

H(ω) = H(0) for |ω| < W_N, and 0 otherwise

The output power is

P_ideal = (1/2π) ∫_{−W_N}^{W_N} (N₀/2) |H(0)|² dω = (N₀/4π) |H(0)|² (2W_N)   (8.79)

Equating (8.78) and (8.79),

W_N = ∫₀^∞ |H(ω)|² dω / |H(0)|²

Similarly, for a band pass filter, the equivalent noise bandwidth is the bandwidth of the
ideal band pass filter which produces the same output power as the actual system with
infinite bandwidth:

W_N = ∫₀^∞ |H(ω)|² dω / |H(ω₀)|²   (8.80)

where ω₀ is the resonant frequency.

Note: The noise bandwidth is always greater than the 3-dB bandwidth.

Example 8.8

Show that for the RC low pass filter shown in Fig. 8.22 the noise bandwidth is equal to π/2
times the 3-dB bandwidth.

Fig. 8.22 RC low pass filter

Solution

The 3-dB bandwidth is defined as the bandwidth of a system at which the power becomes
half, or the voltage becomes 1/√2 times the maximum voltage.
From Fig. 8.22, the system transfer function is

H(ω) = 1/(1 + jωRC)

Let W_c = 1/RC, which is the 3-dB bandwidth, and note that |H(0)| = 1. Then

|H(ω)|² = 1/(1 + (ω/W_c)²)

We know that the noise bandwidth is

W_N = ∫₀^∞ |H(ω)|² dω / |H(0)|² = ∫₀^∞ dω/(1 + (ω/W_c)²)

= W_c [tan⁻¹(ω/W_c)]₀^∞ = W_c (π/2)

∴ The noise bandwidth is equal to π/2 times the 3-dB bandwidth:

W_N = π/(2RC) rad/s, or B_N = W_N/2π = 1/(4RC) Hz

Solution

Given T = 300 K and T_e = 400 K. We know that

T_e = T(F − 1)

Noise figure:

F = 1 + T_e/T = 1 + 400/300 = 2.33

8.21 Noise in Cascade Amplifiers

Noise figure

Consider two amplifiers that are cascaded, with

G_a1: power gain of amplifier 1
G_a2: power gain of amplifier 2
T_e1: equivalent input noise temperature of amplifier 1
T_e2: equivalent input noise temperature of amplifier 2
F₁: noise figure of amplifier 1
F₂: noise figure of amplifier 2

The overall gain is

G_a = G_a1 × G_a2   (8.99)

The total output noise power available is

N_o = N_o1 + N_o2 + N_o3   (8.100)

where N_o1 = output noise power due to the input noise power N_i,

N_o2 = output noise power due to the noise generated internally by the first amplifier:

N_o2 = G_a1 N_i (F₁ − 1) G_a2   (8.101)

and N_o3 = output noise power due to the noise generated by the second amplifier:


N_o3 = G_a2 N_i (F₂ − 1)   (8.102)

We know that the overall noise figure F is defined through N_o = G_a N_i F, so

G_a N_i F = G_a N_i + G_a N_i (F₁ − 1) + G_a2 N_i (F₂ − 1)

F = 1 + F₁ − 1 + (F₂ − 1) G_a2/(G_a1 G_a2)

or

F = F₁ + (F₂ − 1)/G_a1   (8.103)

Similarly, for three amplifiers cascaded,

F = F₁ + (F₂ − 1)/G_a1 + (F₃ − 1)/(G_a1 G_a2)

For N amplifiers cascaded as shown in Fig. 8.23,

Fig. 8.23 Cascaded networks

the overall noise figure is

F = F₁ + (F₂ − 1)/G_a1 + (F₃ − 1)/(G_a1 G_a2) + ... + (F_N − 1)/(G_a1 G_a2 ... G_a(N−1))   (8.104)

This equation is called Friis's formula. It shows that the contribution to the overall noise
figure comes mainly from the first stage.

Equivalent noise temperature

We know that F = 1 + T_e/T and F_i = 1 + T_ei/T.

The overall noise figure for two amplifiers cascaded is F = F₁ + (F₂ − 1)/G_a1, so

1 + T_e/T = 1 + T_e1/T + (T_e2/T)/G_a1

∴ The equivalent noise temperature is

T_e = T_e1 + T_e2/G_a1   (8.105)

For N amplifiers cascaded, the equivalent noise temperature is

T_e = T_e1 + T_e2/G_a1 + T_e3/(G_a1 G_a2) + ...   (8.106)
Example 8.10

Find the overall noise figure and equivalent input noise temperature of the system shown in
Fig. 8.24.

Fig. 8.24 Two stage amplifier: Amp 1 with G₁ = 15 dB, T_e1 = 100 K, followed by Amp 2
with G₂ = 25 dB, T_e2 = 150 K, at room temperature T = 300 K

Solution

Given

G₁ = 15 dB = 10^{1.5} = 31.62
G₂ = 25 dB = 10^{2.5} = 316.23
T_e1 = 100 K, T_e2 = 150 K, T = 300 K

Noise figures:

F₁ = 1 + T_e1/T = 1 + 100/300 = 1.33

F₂ = 1 + T_e2/T = 1 + 150/300 = 1.5

Overall noise figure:

F = F₁ + (F₂ − 1)/G₁ = 1.33 + (1.5 − 1)/31.62 = 1.33 + 0.02 = 1.35

The equivalent noise temperature is

T_e = T_e1 + T_e2/G₁ = 100 + 150/31.62 = 104.74 K

Also, F = 1 + T_e/T = 1.35.
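The cascade computation can be sketched in code. This is not from the text; it just evaluates Friis's formula with the values of Example 8.10.

```python
# Arithmetic sketch (values from Example 8.10): Friis's formula for two
# cascaded amplifiers at room temperature T = 300 K.
T = 300.0
G1 = 10 ** 1.5             # 15 dB -> about 31.62
F1 = 1.0 + 100.0 / T       # Te1 = 100 K
F2 = 1.0 + 150.0 / T       # Te2 = 150 K
F = F1 + (F2 - 1.0) / G1   # overall noise figure
Te = 100.0 + 150.0 / G1    # equivalent noise temperature
print(F, Te)
```

Note that the second stage's 150 K contributes only about 4.7 K once divided by the first stage's gain, which is the point Friis's formula makes.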

8.22 Antenna Noise Temperature

Antenna noise temperature is the equivalent temperature of the noise with respect to the
available spectral density of the noise at the antenna. It is denoted by T_ant.
When the antenna is connected to the receiver, the input spectral density of the noise is
given by

S_ni(ω) = k(T_ant + T_e)/2   (8.107)

and the input noise power is

N_i = k(T_ant + T_e) B_N   (8.108)

8.23 Narrow Band Noise

If white noise is passed through a bandpass filter whose bandwidth is W_N ≪ ω₀, the noise
process appearing at the output is called narrow band noise. The spectral components of a
narrow band noise are concentrated about some mid-band frequency ω₀.
We can represent the narrow band noise in canonical form as

N(t) = N_I(t) cos(ω₀t) − N_Q(t) sin(ω₀t)   (8.109)

where N_I(t) = in-phase component and N_Q(t) = quadrature phase component, given by

N_I(t) = N(t) cos(ω₀t) + N̂(t) sin(ω₀t)

N_Q(t) = N̂(t) cos(ω₀t) − N(t) sin(ω₀t)

where N̂(t) = Hilbert transform of N(t).


Additional Problems

Example 8.11

Find the transfer function of the network shown in Fig. 8.29.

Fig. 8.29 RC network

Solution

Let X(ω) and Y(ω) be the input and output functions. The network is redrawn with the
reactance values 1/jωC₁ and 1/jωC₂ as shown in Fig. 8.30(a).

Fig. 8.30 (a) RC network with reactances  (b) With impedances

The impedances are

Z₁ = R₁ (1/jωC₁) / (R₁ + 1/jωC₁) = R₁/(1 + jωR₁C₁)

and

Z₂ = R₂ (1/jωC₂) / (R₂ + 1/jωC₂) = R₂/(1 + jωR₂C₂)

Now from Fig. 8.30(b),

Y(ω) = [Z₂/(Z₁ + Z₂)] X(ω)

∴ The transfer function is

H(ω) = Y(ω)/X(ω) = Z₂/(Z₁ + Z₂)

Substituting the values,

H(ω) = [R₂/(1 + jωR₂C₂)] / [R₁/(1 + jωR₁C₁) + R₂/(1 + jωR₂C₂)]

Note

If R₁C₁ = R₂C₂, then H(ω) = R₂/(R₁ + R₂). The network then acts as a pure resistive
attenuator and is independent of frequency.
Example 8.12

The power density spectrum of a random process is given by Sxx(ω) = 16/(ω² + 16). Find
whether it is a valid spectral density or not. If it is transmitted through a system as shown in
Fig. 8.31, find the output power density spectrum.

Fig. 8.31 A delay system: Y(t) = X(t + 2T) + X(t − 2T)

Solution

Given

Sxx(ω) = 16/(ω² + 16)

For a valid spectral density, Sxx(ω) should satisfy

(1) Sxx(ω) > 0 for any value of ω
(2) Sxx(ω) = Sxx(−ω)

It is observed that Sxx(ω) satisfies both conditions. ∴ Sxx(ω) is a valid spectral density.

From Fig. 8.31, the output is

Y(t) = X(t + 2T) + X(t − 2T)

The Fourier transform is

Y(ω) = e^{j2Tω} X(ω) + e^{−j2Tω} X(ω) = X(ω)(e^{j2Tω} + e^{−j2Tω})

We know that e^{j2Tω} + e^{−j2Tω} = 2cos(2ωT), so

H(ω) = Y(ω)/X(ω) = 2cos(2ωT)

Syy(ω) = |H(ω)|² Sxx(ω) = 4cos²(2ωT) × 16/(ω² + 16)

Syy(ω) = 64cos²(2ωT)/(ω² + 16)

Example 8.13

Stationary random processes X(t) and Y(t) have spectral density functions Sxx(ω) = 16/(ω² + 16)
and Syy(ω) = ω²/(ω² + 16). Let another stationary random process be U(t) = X(t) + Y(t). Find
the spectral densities Suu(ω), Sxu(ω) and Syu(ω). Assume that X(t) and Y(t) are uncorrelated
with zero mean values.

Solution

Given that

U(t) = X(t) + Y(t)

Suu(ω) = F[Ruu(τ)]

The autocorrelation function is

Ruu(τ) = E[U(t)U(t + τ)]
= E[(X(t) + Y(t))(X(t + τ) + Y(t + τ))]
= E[X(t)X(t + τ) + X(t)Y(t + τ) + Y(t)X(t + τ) + Y(t)Y(t + τ)]
= Rxx(τ) + Rxy(τ) + Ryx(τ) + Ryy(τ)

∴ The power spectral density is

Suu(ω) = Sxx(ω) + Sxy(ω) + Syx(ω) + Syy(ω)

Since X(t) and Y(t) are uncorrelated and have zero mean values,

Suu(ω) = 16/(ω² + 16) + 0 + 0 + ω²/(ω² + 16) = 1

Now

Rxu(τ) = E[X(t)U(t + τ)] = E[X(t)(X(t + τ) + Y(t + τ))] = Rxx(τ) + Rxy(τ)

or

Sxu(ω) = Sxx(ω) + Sxy(ω) = Sxx(ω) = 16/(ω² + 16)

Similarly,

Syu(ω) = Syy(ω) = ω²/(ω² + 16)


Example 8.14

A random process Y(t) is obtained from the system shown in Fig. 8.32. Find the output
spectral density in terms of the input spectral density, where A₀ and ω₀ are constants, X(t)
is the input random process and its power spectral density is Sxx(ω).

Fig. 8.32 Product modulator

Solution

From Fig. 8.32, Y(t) is given by

Y(t) = X(t) A₀ cos ω₀t

Ryy(t, t + τ) = E[Y(t)Y(t + τ)]
= E[X(t) A₀ cos(ω₀t) X(t + τ) A₀ cos ω₀(t + τ)]
= A₀² E[X(t)X(t + τ)] cos(ω₀t) cos ω₀(t + τ)
= A₀² Rxx(τ) cos(ω₀t) cos ω₀(t + τ)

Ryy(t, t + τ) = A₀² Rxx(τ) (1/2)[cos ω₀τ + cos(2ω₀t + ω₀τ)]

Since Ryy depends on t, the output is not a wide sense stationary random process. The
output spectral density is taken as Syy(ω) = F{A[Ryy(t, t + τ)]}, where A[·] denotes the
time average. We know that

A[Ryy(t, t + τ)] = lim_{T→∞} (1/2T) ∫_{−T}^{T} Ryy(t, t + τ) dt

= lim_{T→∞} (1/2T) (A₀²/2) Rxx(τ) [t cos ω₀τ + sin(2ω₀t + ω₀τ)/(2ω₀)]_{−T}^{T}

= (A₀²/2) Rxx(τ) cos ω₀τ

The power spectral density is

Syy(ω) = F[(A₀²/2) Rxx(τ) cos ω₀τ] = (A₀²/4)[Sxx(ω − ω₀) + Sxx(ω + ω₀)]   (From Appendix B)

Example 8.15

If the input autocorrelation function is Rxx(τ) = A e^{−a|τ|}, where A and a are constants,
find the output spectral density and draw the output power spectrum for

Syy(ω) = (1/4)[Sxx(ω − ω₀) + Sxx(ω + ω₀)]

Solution

Sxx(ω) = ∫ A e^{−a|τ|} e^{−jωτ} dτ

= A [∫_{−∞}^{0} e^{aτ} e^{−jωτ} dτ + ∫_{0}^{∞} e^{−aτ} e^{−jωτ} dτ]

= A [1/(a − jω) + 1/(a + jω)]

Sxx(ω) = 2Aa/(a² + ω²)

so that

Sxx(ω ± ω₀) = 2Aa/((ω ± ω₀)² + a²)

Given the output power density Syy(ω) = (1/4)[Sxx(ω − ω₀) + Sxx(ω + ω₀)],

Syy(ω) = (Aa/2)[1/((ω − ω₀)² + a²) + 1/((ω + ω₀)² + a²)]

Figure 8.33 shows the input autocorrelation and its density spectrum, and Fig. 8.34 shows
the output power density spectrum.

Fig. 8.33 (a) Input correlation function  (b) Input power spectral density (peak 2A/a at ω = 0)

Fig. 8.34 Output power density spectrum

Example 8.16

A WSS random process X(t) with psd Sxx(ω) is applied at the input of the delay system
shown in Fig. 8.35. Find the psd of Y(t).

Fig. 8.35 Delay system: Y(t) = X(t) + X(t − 2T)

Solution

Given the psd of X(t) is Sxx(ω). From Fig. 8.35,

Y(t) = X(t) + X(t − 2T)

Taking the Fourier transform,

Y(ω) = X(ω) + X(ω) e^{−jω2T} = X(ω)[1 + e^{−j2ωT}]

The system transfer function is

H(ω) = Y(ω)/X(ω) = 1 + e^{−j2ωT}

The output psd is

Syy(ω) = |H(ω)|² Sxx(ω) = |1 + e^{−j2ωT}|² Sxx(ω)

= (1 + cos 2ωT − j sin 2ωT)(1 + cos 2ωT + j sin 2ωT) Sxx(ω)

= [(1 + cos 2ωT)² + sin² 2ωT] Sxx(ω)

= [1 + cos² 2ωT + 2cos 2ωT + sin² 2ωT] Sxx(ω)

= 2[1 + cos 2ωT] Sxx(ω)

Syy(ω) = 4cos²(ωT) Sxx(ω)
Example 8.17

A random process net) has a psd G( m) = No for 2


transfer function H (m)

{2
. 0

-%

~m~mo

.
otherwIse

00

< m ~ 00. It is processed through a LPF

. Find the psd of the output.

(Nuv 2002, Nov 2003)


Solution

Given

N
G(m) =_0

The output psd is

-oo<m<oo and H(m) =

{20

mo <m <mo
otherwise


Example 8.18

A random noise X(t) having power spectrum Sxx(ω) = 3/(49 + ω²) is applied to a network
for which h(t) = u(t) t² e^{−7t}. The network response is denoted by Y(t).

(a) What is the average power in X(t)?
(b) Find the power spectrum of Y(t).
(c) Find the average power in Y(t).

(Feb 2007, Feb 2009)

Solution

Given the input power spectrum Sxx(ω) = 3/(49 + ω²) and h(t) = t² e^{−7t} u(t).

(a) The average power in X(t) is

Pxx = (1/2π) ∫ Sxx(ω) dω = (3/2π) × (π/7) = 0.2143

Pxx = 0.2143 watts

(b) The output power spectrum is

Syy(ω) = |H(ω)|² Sxx(ω)

Now

H(ω) = F[h(t)] = F[t² e^{−7t} u(t)] = 2/(7 + jω)³   (From Appendix A)

Syy(ω) = |2/(7 + jω)³|² × 3/(49 + ω²) = 12/(49 + ω²)⁴

(c) The average power in Y(t) is

Pyy = (1/2π) ∫ Syy(ω) dω = (12/2π) × 5π/(16 × 7⁷)   (From Appendix A)

Pyy ≈ 2.28 × 10⁻⁶ watts
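Both powers of this example can be cross-checked numerically. The sketch is not from the text; the finite integration limits are an assumption of the check.

```python
import numpy as np

# Numeric sketch (assumed limits, not from the text): Example 8.18 with
# Sxx = 3/(49+w^2) and |H(w)|^2 = 4/(49+w^2)^3.  Expected Pxx = 3/14 and
# Pyy = (12/2pi) * 5*pi/(16*7**7).
w = np.linspace(-5000.0, 5000.0, 2_000_001)
dw = w[1] - w[0]
sxx = 3.0 / (49.0 + w**2)
syy = sxx * 4.0 / (49.0 + w**2) ** 3
pxx = sxx.sum() * dw / (2.0 * np.pi)
pyy = syy.sum() * dw / (2.0 * np.pi)
print(pxx, pyy)
```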

Example 8.19

A white noise with spectral density N₀/2 is transmitted through a linear system as shown in
Fig. 8.36. Find the output spectral density and average power.

Fig. 8.36 Linear RL system

Solution

Given that

Sxx(ω) = N₀/2

H(ω) = Y(ω)/X(ω) = R/(R + jωL)

|H(ω)|² = R²/(R² + ω²L²)

The output spectral density is

Syy(ω) = |H(ω)|² Sxx(ω) = [R²/(R² + ω²L²)] N₀/2

The average power is

Pyy = (1/2π) ∫ Syy(ω) dω = (N₀R²/4π) (1/(RL)) [tan⁻¹(ωL/R)]_{−∞}^{∞}

= (N₀R²/4π) (π/(RL))

∴ The average power is Pyy = N₀R/(4L) watts

Example 8.20

Find the noise bandwidth of a system having the power transfer function

|H(ω)|² = 1/(1 + (ω/ω₀)⁴), where ω₀ is a real constant.

Solution

Given |H(ω)|² = 1/(1 + (ω/ω₀)⁴), so |H(0)|² = 1.

We know that the noise bandwidth is

B_N = (1/2π) ∫ |H(ω)|² dω = (1/2π) ∫ dω/(1 + (ω/ω₀)⁴)

Substituting x = ω/ω₀,

B_N = (ω₀/2π) ∫ dx/(1 + x⁴)

Since ∫ dx/(1 + x⁴) = π/√2 over −∞ < x < ∞ (From Appendix A),

B_N = (ω₀/2π)(π/√2) = ω₀/(2√2)
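The substitution above can be verified numerically; this sketch (with ω₀ = 1 chosen as an assumption and a finite grid standing in for the infinite limits) reproduces B_N = ω₀/(2√2):

```python
import numpy as np

# Check B_N = (1/2pi) * Int dw / (1 + (w/w0)^4) = w0 / (2*sqrt(2)).
w0 = 1.0
w = np.linspace(-200.0, 200.0, 400_001)
h2 = 1.0 / (1.0 + (w / w0)**4)
bn = np.sum(0.5 * (h2[1:] + h2[:-1]) * np.diff(w)) / (2 * np.pi)

print(bn, w0 / (2 * np.sqrt(2)))   # both near 0.35355
```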

Example 8.21

A white noise X(t) of psd N₀/2 is applied to an LTI system having impulse response h(t). If Y(t) is the output, find: (a) E[Y²(t)]; (b) Rxy(τ); (c) Ryx(τ); and (d) Ryy(τ).

Solution

Given Sxx(ω) = psd of white noise = N₀/2.

Then Rxx(τ) = (N₀/2)δ(τ), i.e., Rxx(τ₁, τ₂) = (N₀/2)δ(τ₁ − τ₂).

(a)

E[Y²(t)] = ∫∫ Rxx(τ₁, τ₂)h(τ₁)h(τ₂)dτ₁dτ₂ = ∫∫ (N₀/2)δ(τ₁ − τ₂)h(τ₁)h(τ₂)dτ₁dτ₂

E[Y²(t)] = (N₀/2) ∫ h²(τ)dτ

Hence the output power is proportional to the area under the curve of h²(t).

(b) We know that

Rxy(τ) = Rxx(τ) * h(τ) = ∫ (N₀/2)δ(τ − τ₁)h(τ₁)dτ₁

Rxy(τ) = (N₀/2)h(τ)

(c)

Ryx(τ) = Rxx(τ) * h(−τ) = (N₀/2)h(−τ)

(d)

Ryy(τ) = Rxx(τ) * h(τ) * h(−τ) = ∫∫ Rxx(τ + τ₁ − τ₂)h(τ₁)h(τ₂)dτ₁dτ₂ = (N₀/2) ∫ h(τ₁)h(τ₁ + τ)dτ₁

At τ = 0 this reduces to the result of (a):

Ryy(0) = E[Y²(t)] = (N₀/2) ∫ h²(τ)dτ
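The relation E[Y²(t)] = (N₀/2)∫h²(τ)dτ can be checked with a discrete-time simulation; the impulse response h(t) = e⁻ᵗu(t), the step size, and the seed below are assumed for illustration, not part of the problem:

```python
import numpy as np

# Sampled white noise of two-sided psd N0/2 has per-sample variance (N0/2)/dt.
rng = np.random.default_rng(1)
n0 = 2.0                       # so N0/2 = 1
dt = 0.01
t = np.arange(0.0, 10.0, dt)
h = np.exp(-t)                 # assumed example impulse response

n = 200_000
x = rng.normal(0.0, np.sqrt((n0 / 2) / dt), n)
y = np.convolve(x, h)[:n] * dt          # discrete convolution integral

theory = (n0 / 2) * np.sum(h**2) * dt   # (N0/2) * Int h^2 dt, ~0.5 here
print(np.mean(y**2), theory)
```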

Example 8.22

The bandwidth of a system is 10 MHz. Find the thermal noise voltage across an 800 Ω resistor at room temperature.

Solution

Given B_N = 10 MHz, R = 800 Ω, T = 300 K.

The mean square value of the noise voltage is

V̄n² = 4kTRB_N = 4 × 1.38 × 10⁻²³ × 300 × 800 × 10 × 10⁶ = 132.48 × 10⁻¹² V²

The rms noise voltage is

Vn = √(132.48 × 10⁻¹²) ≈ 11.5 μV
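The same arithmetic can be wrapped in a small helper (a sketch; the function name and defaults are assumptions):

```python
import math

# Thermal (Johnson) noise: mean-square open-circuit voltage is 4kTRB.
def thermal_noise_vrms(r_ohm, t_kelvin, bw_hz, k=1.38e-23):
    """rms thermal noise voltage of a resistor."""
    return math.sqrt(4 * k * t_kelvin * r_ohm * bw_hz)

vn = thermal_noise_vrms(800.0, 300.0, 10e6)
print(vn)   # about 11.5 microvolts
```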

Example 8.23

Find the noise voltage across the capacitor shown in Fig. 8.37 at room temperature.

Fig. 8.37 RC network

Solution

The impedance across terminals 1, 2 is

Z₁₂ = [R × 1/(jωC)] / [R + 1/(jωC)] = R/(1 + jωCR) = R(1 − jωCR)/(1 + (ωCR)²)

Re[Z₁₂] = R/(1 + (ωCR)²)

The power density spectrum is

S(ω) = 2kT Re[Z₁₂] = 2kTR/(1 + ω²C²R²)

We know that the noise power is

V̄n² = (1/2π) ∫ 2kTR/(1 + ω²C²R²) dω = (kTR/π) × (1/CR)[tan⁻¹(ωCR)] from −∞ to ∞ = (kTR/π) × (π/CR)

The noise power across the capacitor is therefore

V̄n² = kT/C

Given k = 1.38 × 10⁻²³, T = 300 K, C = 47 pF:

V̄n² = (1.38 × 10⁻²³ × 300)/(47 × 10⁻¹²) = 8.8 × 10⁻¹¹ V²

The rms noise voltage is

Vn = √(8.8 × 10⁻¹¹) = 9.38 × 10⁻⁶ volts = 9.38 μV
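A notable feature of the kT/C result is that R cancels. Evaluating the arctan antiderivative for several resistor values (the values 100 Ω, 1 kΩ, 1 MΩ and the large cutoff are arbitrary assumptions) always returns kT/C:

```python
import math

k, t, c = 1.38e-23, 300.0, 47e-12
w_max = 1e18   # large upper limit standing in for infinity

for r in (100.0, 1e3, 1e6):
    # (1/2pi) * Int 2kTR/(1 + (wCR)^2) dw via the arctan antiderivative
    v2 = (k * t * r / math.pi) * (1 / (c * r)) * 2 * math.atan(w_max * c * r)
    print(r, v2)              # every row is ~kT/C = 8.8e-11 V^2

print(math.sqrt(k * t / c))   # rms ~ 9.38e-6 V
```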


Example 8.24

The noise present at the input of a two-port network is 5 μW. The noise figure is 0.5 dB. The gain is 10⁶. Calculate
(a) the available noise power contributed by the two-port system;
(b) the o/p available noise power.

Solution

Given Ni = 5 μW = 5 × 10⁻⁶ watts,

F = 0.5 dB or F = 10^0.05 = 1.122, G = 10⁶

(a) The available noise power contributed by the two-port system is

Nsys = G(F − 1)Ni = 10⁶(1.122 − 1) × 5 × 10⁻⁶ = 0.61 watts

(b) Noise power available at the output:

No = GFNi = 10⁶ × 1.122 × 5 × 10⁻⁶ = 5.61 watts
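The noise-figure bookkeeping used here can be sketched in a few lines (variable names are assumptions):

```python
# Noise figure for a two-port: F in dB -> ratio, then the excess noise
# G(F-1)Ni contributed by the network and the total output noise GFNi.
f_db, gain, ni = 0.5, 1e6, 5e-6

f = 10 ** (f_db / 10)          # 0.5 dB -> 1.122
n_sys = gain * (f - 1) * ni    # noise added by the two-port, ~0.61 W
n_out = gain * f * ni          # total available output noise, ~5.61 W
print(f, n_sys, n_out)
```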

Example 8.25

An antenna having noise temperature 300 K is connected to the iJp of a receiver. Equivalent
iJp noise temperature is 2700 K. The mid-band available power gain is 1010 The noise bandwidth
is 1.5 MHz..Find the available output noise power.
Solution

Given

T.nt =30 K,

r: = 270

The input noise power density

K, Ga

= 1010 , BN 1.5 MHz

Linear System with Ra.ndom Processes

23
1.38 X 10- x (30 + 270)
2

553

2.07 x 10-21

Output noise power density

Sno(co)

= GaSni(co)
2.07 X 10-21

GaSni(co)

1010 =2.07xlO- 11 watts/Hz

No =Sno(co)x2BN 2.07xlO-1i x2x1.5x106

Output noise power

.No = 62 f1 watts
Example 8.26

The noise figure of a system is 0.3 dB and the power gain is 10⁸. Find the o/p noise power at room temperature. Bandwidth is 10 MHz.

Solution

Given F = 0.3 dB or F = 10^0.03 = 1.072,

Ga = 10⁸, BN = 10 MHz, T = 300 K

No = GaNi + GaNi(F − 1) = GaNiF = GaBN kT F

No = 10⁸ × 10 × 10⁶ × 1.38 × 10⁻²³ × 300 × 1.072 = 4.43 × 10⁻⁶

No = 4.43 μW

Example 8.27

A communication receiver is shown in Fig. 8.38. Find the effective noise temperature of the receiver, the overall noise figure, and the available noise power at the output of the receiver.

Antenna: Tant = 28 K
RF amplifier: Te1 = 20 K, G1 = 25 dB
Mixer/IF amplifier: F2 = 6 dB, G2 = 20 dB
IF amplifier: F3 = 12 dB, G3 = 40 dB, BN = 5 MHz

Fig. 8.38 Communication receiver

Solution

Given

G1 = 25 dB, G1 = 10^2.5 = 316.23
G2 = 20 dB, G2 = 10² = 100
G3 = 40 dB, G3 = 10⁴ = 10000
F2 = 6 dB, F2 = 10^0.6 = 3.98
F3 = 12 dB, F3 = 10^1.2 = 15.85
T0 = 300 K, Te1 = 20 K, BN = 5 × 10⁶ Hz, Tant = 28 K

We know that

Te2 = T0(F2 − 1) = 300(3.98 − 1) = 894 K

Te3 = T0(F3 − 1) = 300(15.85 − 1) = 4455 K

∴ Effective noise temperature:

Te = Te1 + Te2/G1 + Te3/(G1G2) = 20 + 894/316.23 + 4455/31623 = 22.96 K

Overall noise figure:

F = 1 + Te/T0 = 1 + 22.96/300 = 1.077

Overall gain G = G1G2G3 = 316.23 × 10⁶

Output noise power:

No = GFk(Tant + Te)BN = 316.23 × 10⁶ × 1.077 × 1.38 × 10⁻²³ × (28 + 22.96) × 5 × 10⁶

No = 1.198 μW

Available noise voltage Vn = √No = 1.0945 mV
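The effective-temperature step follows Friis' cascade formula; a small sketch (function name is an assumption) reproduces the 22.96 K figure from the stage data above:

```python
# Friis formula: Te = Te1 + Te2/G1 + Te3/(G1*G2) + ...
def cascade_temperature(temps, gains):
    te, g = 0.0, 1.0
    for t_stage, g_stage in zip(temps, gains):
        te += t_stage / g      # each stage's noise referred to the input
        g *= g_stage
    return te

# Stage data of Example 8.27 (Te2, Te3 already converted from noise figures).
te = cascade_temperature([20.0, 894.0, 4455.0], [316.23, 100.0, 1e4])
print(te)   # ~22.96 K
```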

Example 8.28

The noise output of a resistor is amplified by a noiseless amplifier of gain 60 at a bandwidth of 20 kHz. A meter connected at the output of the amplifier reads 1 mV rms. The bandwidth of the amplifier is reduced to 5 kHz, keeping its gain constant. Find the reading of the meter.

Solution

Given amplifier gain Ga = 60, bandwidth BN1 = 20 kHz, voltmeter reading Vrms1 = 1 mV, bandwidth reduced to BN2 = 5 kHz.

Since the amplifier is noiseless, the output noise power is proportional to the bandwidth:

V²rms2 / V²rms1 = BN2/BN1 = 5/20 = 1/4

V²rms2 = (1/4)(1 × 10⁻³)² = 0.25 × 10⁻⁶

Vrms2 = √(0.25 × 10⁻⁶) = 0.5 mV

The meter reading is 0.5 mV.

More Solved Examples


Example 8.29

Determine which of the following impulse responses does not correspond to a system that is stable or realisable, or both, and state why.
(i) h(t) = u(t + 3)
(ii) h(t) = u(t)e^(−t²)
(iii) h(t) = eᵗ sin(ω₀t), ω₀: real constant
(iv) h(t) = u(t)e^(−3t)

(Feb 2006, Feb 2007)

Solution

(i) Given h(t) = u(t + 3).

The system is unstable and unrealisable because:
(1) For a stable system we require y(t) = ∫ h(τ)x(t − τ)dτ < ∞ for every bounded input. Let x(t) = A, a finite constant. Then the output

y(t) = ∫ A u(τ + 3)dτ = A ∫ from −3 to ∞ dτ = ∞

(2) Also, for the system to be causal, h(t) = 0 for t < 0, but h(−1) = u(2) ≠ 0.

The system is non-causal. Hence the given system is unstable and unrealisable.

(ii) Given h(t) = u(t)e^(−t²).

The system is stable and realisable because:
(1) For a bounded input x(t) = A,

y(t) = ∫ A u(τ)e^(−τ²)dτ = A ∫ from 0 to ∞ e^(−τ²)dτ = A√π/2

Bounded output.
(2) For the system to be causal, h(t) = 0 for t < 0. Here u(t) = 0 for t < 0.

The system is causal. ∴ The system is stable and realisable.

(iii) Given h(t) = eᵗ sin(ω₀t), ω₀: real constant.

The system is unstable and unrealisable because:
(1) For a stable system,

y(t) = ∫ A e^τ sin(ω₀τ)dτ = ∞

since the exponential function grows without bound.

(2) For a causal system, h(t) = 0 for t < 0. Here eᵗ sin(ω₀t) ≠ 0 for t < 0. The system is non-causal.

∴ The system is unstable and unrealisable.

(iv) Given h(t) = u(t)e^(−3t).

The system is stable and realisable because:
(1) For a bounded input x(t) = A,

y(t) = ∫ A u(τ)e^(−3τ)dτ = A ∫ from 0 to ∞ e^(−3τ)dτ = A/3

Bounded output.
(2) For a causal system, h(t) = 0 for t < 0. Here e^(−3t)u(t) = 0 for t < 0. The system is causal.

∴ The system is stable and realisable.
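The stability tests above all reduce to checking whether ∫|h(t)|dt is finite. A numerical sketch over growing windows separates the stable from the divergent cases (ω₀ = 5 and the window sizes are assumptions):

```python
import numpy as np

# BIBO stability <=> the integral of |h(t)| is finite.  Integrate over
# growing windows; stable responses converge, unstable ones keep growing.
def abs_area(h, t_max, dt=1e-3):
    t = np.arange(-t_max, t_max, dt)
    return np.sum(np.abs(h(t))) * dt

w0 = 5.0   # assumed value for the real constant omega_0 in case (iii)
cases = {
    "i":   lambda t: (t >= -3.0).astype(float),                    # u(t+3)
    "ii":  lambda t: (t >= 0) * np.exp(-np.clip(t, 0, None)**2),   # u(t)e^(-t^2)
    "iii": lambda t: np.exp(t) * np.sin(w0 * t),                   # e^t sin(w0 t)
    "iv":  lambda t: (t >= 0) * np.exp(-3 * np.clip(t, 0, None)),  # u(t)e^(-3t)
}
for name, h in cases.items():
    print(name, abs_area(h, 20.0), abs_area(h, 40.0))
# "ii" stays near sqrt(pi)/2 ~ 0.886 and "iv" near 1/3; "i" and "iii" keep growing.
```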

Example 8.30

A signal x(t) = u(t)e^(−at) is applied to a network having an impulse response h(t) = Wu(t)e^(−Wt). Here a and W are real positive constants and u(t) is a unit step function. Find (a) y(t) and (b) the output spectrum.

(Nov 2006, Nov 2008)

Solution

Given x(t) = u(t)e^(−at), h(t) = Wu(t)e^(−Wt)

(a) We know that the system response is

y(t) = x(t) * h(t) = ∫ h(τ)x(t − τ)dτ = ∫ W u(τ)e^(−Wτ) u(t − τ)e^(−a(t−τ)) dτ

Since u(τ)u(t − τ) = 1 for 0 < τ ≤ t and 0 otherwise,

y(t) = We^(−at) ∫ from 0 to t e^(−τ(W−a)) dτ = We^(−at) [e^(−τ(W−a)) / −(W − a)] from 0 to t

y(t) = [W/(W − a)] [e^(−at) − e^(−Wt)] for t > 0

or

y(t) = [W/(W − a)] [e^(−at) − e^(−Wt)] u(t)

(b) The input spectrum is

X(ω) = F[x(t)] = F[e^(−at)u(t)] = 1/(a + jω)

and

H(ω) = F[h(t)] = F[We^(−Wt)u(t)] = W/(W + jω)

The output spectrum is

Y(ω) = X(ω)H(ω) = W / [(a + jω)(W + jω)]

We can also find the output spectrum by taking the Fourier transform of y(t):

Y(ω) = F[y(t)] = [W/(W − a)] [1/(a + jω) − 1/(W + jω)] = [W/(W − a)] × (W + jω − a − jω) / [(a + jω)(W + jω)]

Y(ω) = W / [(a + jω)(W + jω)]
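The closed-form response in (a) can be compared against a direct numerical convolution; a = 1 and W = 3 below are assumed example values:

```python
import numpy as np

# Compare y(t) = W/(W-a) * (exp(-a t) - exp(-W t)) u(t) with a direct
# discrete convolution of x(t) = exp(-a t) u(t) and h(t) = W exp(-W t) u(t).
a, w = 1.0, 3.0
dt = 1e-3
t = np.arange(0.0, 10.0, dt)
x = np.exp(-a * t)
h = w * np.exp(-w * t)

y_num = np.convolve(x, h)[:t.size] * dt
y_exact = w / (w - a) * (np.exp(-a * t) - np.exp(-w * t))

print(np.max(np.abs(y_num - y_exact)))   # small discretization error
```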


Example 8.31

Two systems have transfer functions H₁(ω) and H₂(ω).

(a) Show that the transfer function H(ω) of the cascade of the two, which means that the output of the first feeds the input of the second system, is H(ω) = H₁(ω)H₂(ω).
(b) For a cascade of N systems with transfer functions Hₙ(ω), n = 1, 2, 3, ..., N, show that

H(ω) = ∏ Hₙ(ω), n = 1 to N

(Nov 2006, May 2009)

Solution

(a) Consider the cascade system shown in Fig. 8.39(a).

Fig. 8.39 (a) Cascade of two systems

We know that

Y₁(ω) = H₁(ω)X(ω) and Y₂(ω) = H₂(ω)Y₁(ω)

or

Y₂(ω) = H₂(ω)H₁(ω)X(ω)

If H(ω) is the transfer function of the overall system, then

H(ω) = Y₂(ω)/X(ω)

So H(ω) = H₁(ω)H₂(ω)

(b) If a third system, shown in Fig. 8.39(b), is cascaded to the first two systems, then

Y₃(ω) = Y₂(ω)H₃(ω) = H₃(ω)H₂(ω)H₁(ω)X(ω)

or

H(ω) = Y₃(ω)/X(ω) = H₁(ω)H₂(ω)H₃(ω)

Fig. 8.39 (b) Third system

Similarly, for a cascade of N systems as shown in Fig. 8.39(c), the output function is

Fig. 8.39 (c) Cascade of N systems

Y(ω) = H₁(ω)H₂(ω)H₃(ω)...H_N(ω)X(ω)

If H(ω) is the overall transfer function, then Y(ω) = H(ω)X(ω)

or

H(ω) = ∏ Hₙ(ω), n = 1 to N

Proved.

Example 8.32

A random process n(t) has a psd G(f) = 10⁻⁴ for −∞ ≤ f ≤ ∞. The random process is passed through an LPF whose transfer function is

H(f) = 100 for −fm ≤ f ≤ fm, and 0 otherwise.

Find the psd of the waveform at the o/p of the filter.

(May 2011)

Solution

Given G(f) = 10⁻⁴ for −∞ ≤ f ≤ ∞ and H(f) = 100 for −fm ≤ f ≤ fm.

The output psd is

Go(f) = |H(f)|² G(f) = 100² × 10⁻⁴ = 1 for −fm ≤ f ≤ fm, and 0 otherwise.
Example 8.33

X(t), a stationary random process with zero mean and autocorrelation Rxx(τ) = e^(−2|τ|), is applied to a system with transfer function H(ω) = 1/(2 + jω). Find the mean and psd of its output.

(May 2011)

Solution

Given Rxx(τ) = e^(−2|τ|), H(ω) = 1/(2 + jω).

Then the psd is

Sxx(ω) = F[Rxx(τ)] = F[e^(−2|τ|)] = 2(2)/(ω² + 4) = 4/(ω² + 4)

The psd of the output is

Syy(ω) = Sxx(ω)|H(ω)|² = [4/(ω² + 4)] × [1/(ω² + 4)]

Syy(ω) = 4/(ω² + 4)²

The mean value of X(t) follows from

X̄² = lim Rxx(τ) as τ → ∞ = lim e^(−2|τ|) = 0

or X̄ = 0.

∴ Since the input mean is zero, the output mean is also zero.


Example 8.34

Consider a linear system as shown in Fig. 8.40, with H(ω) = 1/(6 + jω).

Fig. 8.40 Linear system

x(t) is the input and y(t) is the output of the system. The autocorrelation of x(t) is Rxx(τ) = 5δ(τ). Find the psd, autocorrelation function and mean square value of the output y(t).

(Nov 2010)

Solution

Given H(ω) = 1/(6 + jω), Rxx(τ) = 5δ(τ)

(i) The psd of the input is

Sxx(ω) = F[Rxx(τ)] = 5

The psd of the output is

Syy(ω) = |H(ω)|² Sxx(ω) = 5/(ω² + 36)

(ii) Output autocorrelation function:

Ryy(τ) = F⁻¹[Syy(ω)] = F⁻¹[5/(ω² + 36)]

Since F⁻¹[2a/(ω² + a²)] = e^(−a|τ|), with a = 6,

Ryy(τ) = (5/12)e^(−6|τ|)

(iii) Mean square value:

Ryy(0) = 5/12 = 0.4167 watts
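The mean-square value can also be obtained directly by integrating the output psd; this numerical sketch (finite grid limits are an assumption) recovers 5/12:

```python
import numpy as np

# Mean-square output: Ryy(0) = (1/2pi) * Int Syy(w) dw, Syy = 5/(w^2 + 36).
w = np.linspace(-1e4, 1e4, 2_000_001)
syy = 5.0 / (w**2 + 36.0)
ryy0 = np.sum(0.5 * (syy[1:] + syy[:-1]) * np.diff(w)) / (2 * np.pi)
print(ryy0)   # ~5/12 = 0.4167
```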


Example 8.35

A received SSB signal has a power spectrum which extends over the frequency range from fc = 1 MHz to fc + fm = 1.003 MHz. The signal is accompanied by noise with uniform spectral density 10⁻⁹ watts/Hz.

(a) The noise n(t) is expressed as

n(t) = n_I(t)cos2πfct − n_Q(t)sin2πfct

Find the power spectral densities of the quadrature components in the spectral range specified as fc ≤ f ≤ fc + fm.
(b) The signal plus its accompanying noise is multiplied by a local carrier cos(2πfct). Plot the psd of the noise at the o/p of the multiplier.

(May 1998)

Solution

Given the SSB signal has frequencies fc = 1 MHz to fc + fm = 1.003 MHz.

∴ fm = 3 kHz

Noise psd S_N(f) = 10⁻⁹ watts/Hz

(a) Given the noise n(t) = n_I(t)cos2πfct − n_Q(t)sin2πfct.

The power densities of the in-phase and quadrature-phase components are

S_NI(f) = S_NQ(f) = S_N(f − fc) + S_N(f + fc) = 10⁻⁹ + 10⁻⁹ = 2 × 10⁻⁹ watts/Hz

(b) When the signal is multiplied by a local carrier cos(2πfct), its power is reduced by 4 times. The output noise has uniform power spectral density 10⁻⁹/4 watts/Hz, with frequency components at

(i) fc + fc = 2fc   (ii) fc − fc = 0   (iii) fc + fm + fc = 2fc + fm   (iv) fc + fm − fc = fm

The plot of the output psd is shown in Fig. 8.41.

Fig. 8.41 Output psd

Example 8.36

A stationary random process X(t) having an autocorrelation function Rxx(τ) = 2e^(−4|τ|) is applied to the network shown in Fig. 8.42. Find (i) Sxx(ω), (ii) |H(ω)|² and (iii) Syy(ω).

(Nov 2007)

Fig. 8.42 RC network (R = 1.5 Ω, C₁ = 4 F, C₂ = 2 F)

Solution

(i)

Sxx(ω) = ∫ 2e^(−4|τ|) e^(−jωτ) dτ = 2[1/(4 − jω) + 1/(4 + jω)] = 2 × 8/(ω² + 16)

Sxx(ω) = 16/(ω² + 16)

(ii) The circuit with reactances and impedances is shown in Fig. 8.43.

Fig. 8.43 (a) RC network with reactances  (b) RC network with impedances

The impedances are

Z₁ = R/(1 + jωC₁R) and Z₂ = 1/(jωC₂)

The transfer function is

H(ω) = Y(ω)/X(ω) = Z₂/(Z₁ + Z₂) = [1/(jωC₂)] / [R/(1 + jωC₁R) + 1/(jωC₂)]

H(ω) = (1 + jωC₁R)/(1 + jωC₁R + jωC₂R) = (1 + jωC₁R)/(1 + jωR(C₁ + C₂))

Now, given R = 1.5 Ω, C₁ = 4 F, C₂ = 2 F,

H(ω) = (1 + jω6)/(1 + jω9)

|H(ω)|² = H(ω)H*(ω) = (1 + jω6)(1 − jω6) / [(1 + jω9)(1 − jω9)] = (1 + 36ω²)/(1 + 81ω²)

(iii) The output power density spectrum is

Syy(ω) = |H(ω)|² Sxx(ω) = [(1 + 36ω²)/(1 + 81ω²)] × 16/(ω² + 16)

Example 8.37

White noise with power density N₀/2 is applied to a network with impulse response h(t) = u(t)Wt exp(−Wt), where W > 0 is a constant. Find the cross-correlation of the input and output.

(May 2011)

Solution

Given Sxx(ω) = N₀/2 and h(t) = Wte^(−Wt)u(t), W > 0

H(ω) = W/(W + jω)²

The cross power spectral density is

Sxy(ω) = Sxx(ω)H(ω) = (N₀/2) × W/(W + jω)²

The cross-correlation of the input and output is

Rxy(τ) = F⁻¹[Sxy(ω)] = (N₀W/2) τe^(−Wτ) u(τ)

Example 8.38

White noise with a two-sided spectral density of η/2 is passed through a lowpass RC network with time constant τ₀ = RC and thereafter through an ideal amplifier with a voltage gain of 10.
(a) Write the expression for the autocorrelation function R_N(τ) of the white noise.
(b) Write the expression for the power spectral density of the noise at the output of the amplifier.
(c) Write the expression for the autocorrelation of the output noise in (b).

(Nov 1998)

Solution

Given the white noise is applied to an LPF and amplifier as shown in Fig. 8.44.

Fig. 8.44 Filter and amplifier (white noise η/2 → LPF with τ₀ = RC → ideal amp, G = 10)

The spectral density of the white noise is S_NN(ω) = η/2.

The transfer function of the LPF is H₁(ω) = 1/(1 + jωRC) = 1/(1 + jωτ₀).

The transfer function of the amplifier is H₂(ω) = 10. The overall transfer function is

H(ω) = H₁(ω)H₂(ω) = 10/(1 + jωτ₀)

(a) The autocorrelation function R_N(τ) of the white noise is

R_N(τ) = F⁻¹[η/2] = (η/2)δ(τ)

(b) The output psd is

S₀(ω) = |H(ω)|² S_NN(ω) = |10/(1 + jωτ₀)|² × η/2 = 50η/(1 + ω²τ₀²)

(c) The autocorrelation function of the output noise is

R₀(τ) = F⁻¹[S₀(ω)] = F⁻¹[50η/(1 + ω²τ₀²)] = (25η/τ₀) × F⁻¹[(2/τ₀)/(ω² + (1/τ₀)²)]

R₀(τ) = (25η/τ₀) e^(−|τ|/τ₀)

Since τ₀ = RC,

R₀(τ) = (25η/RC) e^(−|τ|/RC)
Example 8.39

Find the noise bandwidth of the RC lowpass filter shown in Fig. 8.45.

Fig. 8.45 RC lowpass filter (R = 10 Ω, C = 2 μF)

Solution

The transfer function is

H(ω) = Y(ω)/X(ω) = [1/(jωC)] / [R + 1/(jωC)] = 1/(1 + jωRC)

We know that the noise bandwidth is

B_N = [(1/2π) ∫ from 0 to ∞ |H(ω)|² dω] / |H(0)|², and |H(0)| = 1

B_N = (1/2π) ∫ from 0 to ∞ dω/(1 + (ωCR)²) = (1/2π) × (1/RC)[tan⁻¹(ωCR)] from 0 to ∞

B_N = (1/2π) × (1/RC) × (π/2) = 1/(4RC)

Now R = 10 Ω, C = 2 μF:

B_N = 1/(4 × 10 × 2 × 10⁻⁶) = 10⁶/80

∴ Noise bandwidth B_N = 12.5 kHz
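The 1/(4RC) result can be cross-checked by numerical integration of |H(ω)|² (the finite upper limit below is an assumption standing in for infinity):

```python
import numpy as np

# Noise bandwidth of the RC LPF: (1/2pi) * Int_0^inf dw/(1+(wRC)^2) = 1/(4RC).
r, c = 10.0, 2e-6
rc = r * c
w = np.linspace(0.0, 1e8, 1_000_001)
h2 = 1.0 / (1.0 + (w * rc)**2)
bn = np.sum(0.5 * (h2[1:] + h2[:-1]) * np.diff(w)) / (2 * np.pi)
print(bn, 1 / (4 * rc))   # both ~12.5 kHz
```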


Example 8.40

If two resistors R₁ = 600 Ω and R₂ = 200 Ω at temperatures T₁ = 30 °C and T₂ = 100 °C respectively are connected in parallel, find the noise temperature of the combination.

Solution

If two resistors are connected in parallel, the equivalent noise temperature is

Tn = (T₁R₂ + T₂R₁)/(R₁ + R₂)

Now

T₁ = 30 °C = 30 + 273 = 303 K

T₂ = 100 °C = 100 + 273 = 373 K

Tn = (303 × 200 + 373 × 600)/800 = (60600 + 223800)/800

Tn = 355.5 K = 82.5 °C
Example 8.41

An amplifier has input and output impedances of 75 ohms, 60 dB power gain and a noise equivalent bandwidth of 15 kHz. When a 75 Ω resistor at 290 K is connected to the input, the output rms noise voltage is 15 μV. Determine the effective noise temperature of the amplifier, assuming that the meter is impedance matched to the amplifier.

Solution

Given power gain Ga = 60 dB or Ga = 10⁶

Bandwidth BN = 15 kHz, T₀ = 290 K

rms noise voltage at the output Vrms = 15 μV, so V²rms = 22.5 × 10⁻¹¹

Output noise power No = 22.5 × 10⁻¹¹ watts

Input noise power Ni = kT₀BN = 1.38 × 10⁻²³ × 290 × 15 × 10³ = 6 × 10⁻¹⁷ watts

∴ Noise figure

F = No/(GaNi) = 22.5 × 10⁻¹¹ / (10⁶ × 6 × 10⁻¹⁷) = 3.75

We know that

Te = (F − 1)T₀ = (3.75 − 1)290 = 797.5 K

Example 8.42

An amplifier has 3 dB noise figure. Find the equivalent i/p noise temperature at 30C.
Solution

Given

Now

F =3dB or F=10o. 3 =2
T

= 30C = 30+ 273 = 303 OK

=T(F-1)

=303(2 -1) = 303

K.

Example 8.43

For the satellite receiver shown in Fig. 8.46, calculate the effective noise temperature.

(Nov 2011)

Waveguide: L = 0.4 dB
Maser: g₂ = 26 dB, Te2 = 4 K
Mixer/IF amplifier: g₃ = 17 dB, F₃ = 6 dB, B_N = 25 MHz

Fig. 8.46 Satellite receiver

Solution

Given the loss in the waveguide is L = 0.4 dB or L = 10^0.04 = 1.096.

The gain of the waveguide is g₁ = 1/L = 1/1.096 = 0.91.

The noise figure of the waveguide is F₁ = L = 1.096.

The effective temperature of the waveguide is

Te1 = (F₁ − 1)T₀ = (1.096 − 1)290 ≈ 28 K

Also given g₂ = 26 dB or g₂ = 398.1

F₃ = 6 dB or F₃ = 3.98

Te3 = (F₃ − 1)T₀ = (3.98 − 1)290 = 864.2 K

Now the effective noise temperature of the receiver is

Te = Te1 + Te2/g₁ + Te3/(g₁g₂)

Te = 28 + 4/0.91 + 864.2/(0.91 × 398.1) ≈ 34.8 K

Example 8.44

An amplifier has 3 stages for which Te1 = 200 K (first stage), Te2 = 450 K and Te3 = 1000 K (last stage). If the available power gain of the second stage is 5, what gain must the first stage have to guarantee an effective input noise temperature of 250 K?

(May 2011)

Solution

Given Te1 = 200 K, Te2 = 450 K, Te3 = 1000 K, G₂ = 5, Te = 250 K.

We know that the effective input temperature is

Te = Te1 + Te2/G₁ + Te3/(G₁G₂)

250 = 200 + (1/G₁)(450 + 1000/5) = 200 + 650/G₁

G₁ = 650/50 = 13
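Rearranging the cascade formula gives the required first-stage gain directly; a minimal sketch (variable names are assumptions):

```python
# Required first-stage gain from the cascade formula:
# Te = Te1 + Te2/G1 + Te3/(G1*G2)  =>  G1 = (Te2 + Te3/G2) / (Te - Te1)
te1, te2, te3, g2, te_target = 200.0, 450.0, 1000.0, 5.0, 250.0
g1 = (te2 + te3 / g2) / (te_target - te1)
print(g1)   # 13.0
```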

Questions
1. Define the following systems: (i) LTI system (ii) Causal system (iii) Stable system (iv) Noise bandwidth. (Nov 2006, Feb 2008)
2. Two systems have transfer functions H₁(ω) and H₂(ω). Show that the transfer function H(ω) of the cascade of the two is H(ω) = H₁(ω)H₂(ω). (Nov 2006, Nov 2007)
3. For a cascade of N systems with transfer functions Hₙ(ω), n = 1, 2, ..., N, show that H(ω) = ∏Hₙ(ω). (Nov 2007, Nov 2006)
4. How is the autocorrelation function of white noise represented? What is its significance? (Nov 2003, May 2009)
5. Derive the relation between the psds of the input and output random processes of an LTI system. (May 2004, Nov 2004, May 2005, May 2009, May 2010)
6. Define the following random processes: (i) Band pass (ii) Band limited (iii) Narrow band. (Feb 2007, Nov 2007, Nov 2010)
7. Write short notes on different types of noise. (Nov 2007)
8. What is thermal noise? How is it quantified? or Write a short note on Johnson's noise. (May 2005, Nov 2010, Nov 2004, May/June 2004)
9. What are the causes of thermal noise? (May 2009)
10. Discuss the spectral distribution of thermal noise.
11. What are the precautions to be taken in the cascading stages of a network from the point of view of noise reduction? or Bring out the significance of noise figure in determining the performance of a communication system. (June 2003, Nov 2003, Nov 2004, May 2009)
12. Explain the concept of effective input noise temperature. (May 2009, 2011)
13. Derive the mathematical description of noise figure. (May 2004, 2009)
14. Derive the expression for the effective input noise temperature of a cascaded system in terms of its individual input noise temperatures. (Feb 2007, Nov 2010)
15. Prove that the output power spectral density equals the input power spectral density multiplied by the squared magnitude of the transfer function of the filter. (Nov 2010)

Problems
8.1 It is desired to generate a random signal X(t) with autocorrelation function Rx(τ), by passing white noise n(t), with power spectral density Sn(f) = η/2 watts/Hz, through an LTI system. Obtain an expression for the transfer function H(f) of the LTI system.
8.2 A WSS random process X(t) is applied to the i/p of an LTI system whose impulse response is 5te^(−2t)u(t). The mean of X(t) is 3. Find the mean of the o/p of the system.
8.3 A WSS process X(t) with Rxx(τ) = Ae^(−a|τ|), where A and a are real positive constants, is applied to the i/p of an LTI system with h(t) = e^(−bt)u(t), where b is a real positive constant. Find the psd of the o/p of the system.
8.4 The o/p of a filter is given by Y(t) = X(t + T) − X(t − T), where X(t) is a WSS process with power spectrum Sxx(ω) and T a constant. Find the power spectrum of Y(t).
8.5 X(t) is a stationary random process with spectral density Sxx(ω). Y(t) is another independent random process, where Y(t) = A cos(ωct + θ), with θ a random variable uniformly distributed over (−π, π). Find the spectral density function of Z(t) = X(t)Y(t).
8.6 A random process n(t) has a psd G(f) = 10⁻⁴ for −∞ ≤ f ≤ ∞. The random process is passed through an LPF whose transfer function is H(f) = 100 for −fM ≤ f ≤ fM and 0 otherwise. Find the psd of the waveform at the o/p of the filter. (May 2011)
8.7 A system's power transfer function is |H(ω)|² = 25;~
(a) What is its noise bandwidth?
(b) If white noise with power density 6 mW/Hz is applied to the input, find the noise power in the system's output.
8.8 White noise with power density N/2 is applied to a lowpass network for which |H(0)| = 2; it has a noise bandwidth of 2 MHz. If the average output noise power is 0.1 W in a 1 Ω resistor, what is N?
8.9 Consider a linear system having transfer function H(ω) = 1/(6 + jω). X(t) is the input and Y(t) is the output of the system. The autocorrelation of X(t) is Rxx(τ) = 3δ(τ). Find the psd, autocorrelation function and mean square value of the output Y(t).
8.10 Noise from a 10 kΩ resistor at room temperature is passed through an ideal LPF with bandwidth 2.5 MHz and unity gain. Assuming the input noise is Gaussian with zero mean, write the expression for the pdf of the output.
8.11 Find the rms noise voltage across a 1 μF capacitor over the entire frequency band, when the capacitor is shunted by a 1 kΩ resistor maintained at 300 K.
8.12 A TV receiver has a 4 kΩ input resistance and operates on the frequency range of 54–60 MHz. Estimate the rms noise voltage at the input at an ambient temperature of 27 °C.
8.13 Determine the rms value of the shot noise current in the case of a p-n junction carrying 10 mA of average current, if the operating temperature is 290 K and B = 10 kHz.
8.14 Compute the input thermal noise voltage of a TV receiver whose equivalent noise resistance is 200 Ω and source resistance is 300 Ω. The receiver bandwidth is 6 MHz and the temperature is 27 °C.
8.15 In a certain receiving system, the noise power available from the aerial within a 10 kHz band is 10⁻¹⁷ watts. Evaluate the aerial noise temperature.
8.16 Find the available noise power per unit bandwidth at the i/p of an antenna with an effective noise temperature of 15 K, feeding into a microwave amplifier with an effective noise temperature of 20 K.
8.17 Calculate the rms noise voltage generated in a bandwidth of 10 kHz by a resistor of 1000 ohms maintained at 17 °C. Also find the available noise power over this bandwidth. Determine the corresponding values when the resistor value is increased to 10 k ohms.
8.18 White noise of two-sided spectral density 2 × 10⁻⁶ watts/Hz is applied to a simple RC LPF whose 3 dB cutoff frequency is 4 kHz. Find the mean squared value of the noise output.
8.19 An amplifier has three stages for which Te1 = 200 K (1st stage), Te2 = 450 K and Te3 = 1000 K (last stage). If the available power gain of the second stage is 5, what gain must the first stage have to guarantee an effective input noise temperature of 250 K? (May 2011)
8.20 An amplifier with Ga = 40 dB and BN = 20 kHz is found to have Te = 10 K. Find the noise figure.
8.21 Determine the overall noise figure referred to the input in the case of an amplifier with a noise figure of 9 dB and an available power gain of 15 dB, followed by a mixer stage with a noise figure of 20 dB.
8.22 The noise figure of an amplifier is 0.2 dB. Find the equivalent temperature Te.
8.23 An amplifier has a standard spot noise figure F₀ = 6.31 (8.0 dB). An engineer uses the amplifier to amplify the output of an antenna that is known to have an antenna temperature of Ta = 180 K.
(a) What is the effective input noise temperature of the amplifier?
(b) What is the operating spot noise figure?
8.24 An antenna is connected to a receiver having an equivalent noise temperature Te = 10 K. The available gain of the receiver is Ga = 10⁸ and the noise BW is BN = 10 MHz. If the available output noise power is 10 μW, find the antenna temperature.
8.25 A designer requires an amplifier to give an operating spot noise figure of not more than 1.8 when operating with a 160 K source. What is the largest value of standard spot noise figure that will be available in a purchased amplifier?
8.26 Two amplifiers have standard spot noise figures of Fo1 = 1.6 and Fo2 = 1.4. They have respective available power gains of Go1 = 12 and Go2 = 8. The two amplifiers are to be used in a cascade driven from an antenna.
(a) For best performance, which amplifier should be driven by the antenna?
(b) What is the standard spot noise figure of the best cascade?
8.27 A satellite receiver has the following specifications:
Te1 = 4 K, G1 = 20 dB, F2 = 3 dB, G2 = 40 dB, F3 = 10 dB, G3 = 60 dB, T0 = 290 K, BN = 1 GHz.
Find the available noise power at the receiver output.
8.28 Compute the overall noise figure in the case of four amplifier stages cascaded, given the following data:

                      1st stage   2nd stage   3rd stage   4th stage
Available gain (dB):      10           5           8          12
Noise figure (dB):        50          20          10

8.29 An amplifier has three stages for which Te1 = 150 K (first stage), Te2 = 350 K and Te3 = 600 K (output stage). The available power gain of the first stage is 10 and the overall input effective noise temperature is 190 K.
(a) What is the available power gain of the second stage?
(b) What is the cascade's standard spot noise figure?
(c) What is the cascade's operating spot noise figure when used with a source of noise temperature Ts = 50 K?
8.30 An engineer purchases an amplifier that has a noise bandwidth of 1 kHz and a standard spot noise figure of 3.8 at its frequency of operation. The amplifier's available output noise power is 0.1 mW when its input is connected to a radio receiving antenna having an antenna temperature of 80 K. Find:
(a) the input effective noise temperature Te;
(b) its operating spot noise figure Fop;
(c) its available power gain Ga.
8.31 The noise present at the input to a two-port network is 1 μW. The noise figure F is 0.5 dB. The receiver gain is ga = 10¹⁰. Calculate:
(a) the available noise power contributed by the two-port;
(b) the output available power.
8.32 Find the psd of the thermal noise voltage across the terminals 1 and 2 for the circuit shown in Fig. P8.1.

Fig. P8.1 Passive network
