
Yates and Goodman: Probability and Stochastic Processes Solutions Manual


Problem Solutions Chapter 5


Problem 5.1.1
(a) The probability P[X ≤ 2, Y ≤ 3] can be found by evaluating the joint CDF F_{X,Y}(x, y) at x = 2 and y = 3. This yields

P[X ≤ 2, Y ≤ 3] = F_{X,Y}(2, 3) = (1 − e^{−2})(1 − e^{−3})

(b) To find the marginal CDF of X, F_X(x), we simply evaluate the joint CDF at y = ∞:

F_X(x) = F_{X,Y}(x, ∞) =
    1 − e^{−x}   x ≥ 0
    0            otherwise

(c) Likewise, for the marginal CDF of Y, we evaluate the joint CDF at x = ∞:

F_Y(y) = F_{X,Y}(∞, y) =
    1 − e^{−y}   y ≥ 0
    0            otherwise
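The result in part (a) can be checked numerically. The snippet below is our own sketch, not part of the original solution: it assumes the product-form CDF above belongs to a pair of independent unit exponential random variables with joint PDF f_{X,Y}(x, y) = e^{−(x+y)} for x, y ≥ 0, and compares a midpoint-rule double integral over [0, 2] × [0, 3] against the closed form. The helper name `integrate2d` is hypothetical.

```python
import math

def integrate2d(f, x0, x1, y0, y1, n=400):
    """Midpoint-rule approximation of a double integral over a rectangle."""
    hx, hy = (x1 - x0) / n, (y1 - y0) / n
    return sum(
        f(x0 + (i + 0.5) * hx, y0 + (j + 0.5) * hy)
        for i in range(n) for j in range(n)
    ) * hx * hy

# Assumed joint PDF consistent with the product-form CDF above:
# two independent unit exponentials.
pdf = lambda x, y: math.exp(-(x + y))

numeric = integrate2d(pdf, 0, 2, 0, 3)
closed_form = (1 - math.exp(-2)) * (1 - math.exp(-3))
```

The midpoint rule on this smooth integrand agrees with the closed form to several decimal places.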

Problem 5.1.2
(a) Because the probability that any random variable is less than −∞ is zero, we have

F_{X,Y}(x, −∞) = P[X ≤ x, Y ≤ −∞] ≤ P[Y ≤ −∞] = 0

(b) The probability that any random variable is less than infinity is always one:

F_{X,Y}(x, ∞) = P[X ≤ x, Y ≤ ∞] = P[X ≤ x] = F_X(x)

(c) Although P[Y ≤ ∞] = 1, P[X ≤ −∞] = 0. Therefore the following is true:

F_{X,Y}(−∞, ∞) = P[X ≤ −∞, Y ≤ ∞] ≤ P[X ≤ −∞] = 0

(d) Part (d) follows the same logic as that of part (a):

F_{X,Y}(−∞, y) = P[X ≤ −∞, Y ≤ y] ≤ P[X ≤ −∞] = 0

(e) Analogous to part (b), we find that

F_{X,Y}(∞, y) = P[X ≤ ∞, Y ≤ y] = P[Y ≤ y] = F_Y(y)



Problem 5.1.3
We wish to find P[x_1 ≤ X ≤ x_2 or y_1 ≤ Y ≤ y_2]. Define events A = {x_1 ≤ X ≤ x_2} and B = {y_1 ≤ Y ≤ y_2}. Then

P[A ∪ B] = P[A] + P[B] − P[AB]

where the intersection AB consists of all outcomes for which both A and B occur; specifically, AB = {x_1 ≤ X ≤ x_2, y_1 ≤ Y ≤ y_2}. Hence

P[A ∪ B] = P[x_1 ≤ X ≤ x_2] + P[y_1 ≤ Y ≤ y_2] − P[x_1 ≤ X ≤ x_2, y_1 ≤ Y ≤ y_2]

By Theorem 5.3,

P[x_1 ≤ X ≤ x_2, y_1 ≤ Y ≤ y_2] = F_{X,Y}(x_2, y_2) − F_{X,Y}(x_2, y_1) − F_{X,Y}(x_1, y_2) + F_{X,Y}(x_1, y_1)

Expressed in terms of the marginal and joint CDFs,

P[A ∪ B] = F_X(x_2) − F_X(x_1) + F_Y(y_2) − F_Y(y_1)
           − F_{X,Y}(x_2, y_2) + F_{X,Y}(x_2, y_1) + F_{X,Y}(x_1, y_2) − F_{X,Y}(x_1, y_1)
Problem 5.1.4
It's easy to show that the properties of Theorem 5.1 are satisfied. However, those properties are necessary but not sufficient to show that F(x, y) is a CDF. To convince ourselves that F(x, y) is a valid CDF, we show that for all x_1 ≤ x_2 and y_1 ≤ y_2,

P[x_1 < X ≤ x_2, y_1 < Y ≤ y_2] ≥ 0

In this case, for x_1 ≤ x_2 and y_1 ≤ y_2, Theorem 5.3 yields

P[x_1 < X ≤ x_2, y_1 < Y ≤ y_2] = F(x_2, y_2) − F(x_1, y_2) − F(x_2, y_1) + F(x_1, y_1)
    = F_X(x_2) F_Y(y_2) − F_X(x_1) F_Y(y_2) − F_X(x_2) F_Y(y_1) + F_X(x_1) F_Y(y_1)
    = [F_X(x_2) − F_X(x_1)][F_Y(y_2) − F_Y(y_1)]
    ≥ 0

Hence, F_X(x) F_Y(y) is a valid joint CDF.


Problem 5.1.5
In this problem, we prove Theorem 5.3, which states

P[x_1 < X ≤ x_2, y_1 < Y ≤ y_2] = F_{X,Y}(x_2, y_2) − F_{X,Y}(x_2, y_1) − F_{X,Y}(x_1, y_2) + F_{X,Y}(x_1, y_1)

(a) The events A, B, and C (shown in the figure as disjoint rectangular regions of the X,Y plane) are

A = {X ≤ x_1, y_1 < Y ≤ y_2}
B = {x_1 < X ≤ x_2, Y ≤ y_1}
C = {x_1 < X ≤ x_2, y_1 < Y ≤ y_2}

(b) In terms of the joint CDF F_{X,Y}(x, y), we can write

P[A] = F_{X,Y}(x_1, y_2) − F_{X,Y}(x_1, y_1)
P[B] = F_{X,Y}(x_2, y_1) − F_{X,Y}(x_1, y_1)
P[A ∪ B ∪ C] = F_{X,Y}(x_2, y_2) − F_{X,Y}(x_1, y_1)

(c) Since A, B, and C are mutually exclusive,

P[A ∪ B ∪ C] = P[A] + P[B] + P[C]

However, since we want to express P[C] = P[x_1 < X ≤ x_2, y_1 < Y ≤ y_2] in terms of the joint CDF F_{X,Y}(x, y), we write

P[C] = P[A ∪ B ∪ C] − P[A] − P[B]
     = F_{X,Y}(x_2, y_2) − F_{X,Y}(x_1, y_2) − F_{X,Y}(x_2, y_1) + F_{X,Y}(x_1, y_1)

which completes the proof of the theorem.
Problem 5.1.6
The given function is

F_{X,Y}(x, y) =
    1 − e^{−(x+y)}   x, y ≥ 0
    0                otherwise

First, we find the marginal CDFs F_X(x) and F_Y(y):

F_X(x) = F_{X,Y}(x, ∞) =
    1   x ≥ 0
    0   otherwise

F_Y(y) = F_{X,Y}(∞, y) =
    1   y ≥ 0
    0   otherwise

Hence, for any x ≥ 0 or y ≥ 0,

P[X > x] = 0        P[Y > y] = 0

For x ≥ 0 and y ≥ 0, this implies

P[{X > x} ∪ {Y > y}] ≤ P[X > x] + P[Y > y] = 0

However,

P[{X > x} ∪ {Y > y}] = 1 − P[X ≤ x, Y ≤ y] = 1 − (1 − e^{−(x+y)}) = e^{−(x+y)}

Thus we have the contradiction that e^{−(x+y)} ≤ 0 for all x, y ≥ 0. We can conclude that the given function is not a valid CDF.


Problem 5.2.1
(a) The joint PDF of X and Y is

f_{X,Y}(x, y) =
    c   x + y ≤ 1, x, y ≥ 0
    0   otherwise

The region of nonzero probability is the triangle bounded by the axes and the line y = 1 − x. To find the constant c, we integrate over this region:

∫_0^1 ∫_0^{1−x} c dy dx = ∫_0^1 c(1 − x) dx = [cx − cx²/2]_0^1 = c/2 = 1

Therefore c = 2.

(b) To find P[X ≤ Y], we integrate over the part of the triangle where x ≤ y:

P[X ≤ Y] = ∫_0^{1/2} ∫_x^{1−x} 2 dy dx = ∫_0^{1/2} (2 − 4x) dx = 1/2

(c) The probability P[X + Y ≤ 1/2] is the integral over the smaller triangle below the line x + y = 1/2:

P[X + Y ≤ 1/2] = ∫_0^{1/2} ∫_0^{1/2−x} 2 dy dx = ∫_0^{1/2} (1 − 2x) dx = 1/2 − 1/4 = 1/4
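The three results above (c = 2, P[X ≤ Y] = 1/2, and P[X + Y ≤ 1/2] = 1/4) can be reproduced by brute-force integration over the unit square. This is our own sketch, not part of the text; `integrate2d` is a hypothetical midpoint-rule helper, so the indicator boundaries limit the accuracy to a few thousandths.

```python
def integrate2d(f, x0, x1, y0, y1, n=500):
    """Midpoint-rule approximation of a double integral over a rectangle."""
    hx, hy = (x1 - x0) / n, (y1 - y0) / n
    return sum(
        f(x0 + (i + 0.5) * hx, y0 + (j + 0.5) * hy)
        for i in range(n) for j in range(n)
    ) * hx * hy

# Joint PDF with c = 2 on the triangle x + y <= 1, x, y >= 0.
pdf = lambda x, y: 2.0 if x + y <= 1 else 0.0

total = integrate2d(pdf, 0, 1, 0, 1)                                       # ~ 1
p_x_le_y = integrate2d(lambda x, y: pdf(x, y) * (x <= y), 0, 1, 0, 1)      # ~ 1/2
p_half = integrate2d(lambda x, y: pdf(x, y) * (x + y <= 0.5), 0, 1, 0, 1)  # ~ 1/4
```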


Problem 5.2.2
Given the joint PDF

f_{X,Y}(x, y) =
    cxy²   0 ≤ x ≤ 1, 0 ≤ y ≤ 1
    0      otherwise

(a) To find the constant c, integrate f_{X,Y}(x, y) over all possible values of X and Y:

1 = ∫_0^1 ∫_0^1 cxy² dx dy = c/6

Therefore c = 6.

(b) The probability P[X ≥ Y] is the integral of the joint PDF f_{X,Y}(x, y) over the region where x ≥ y:

P[X ≥ Y] = ∫_0^1 ∫_0^x 6xy² dy dx = ∫_0^1 2x⁴ dx = 2/5

Similarly, to find P[Y ≤ X²], we integrate over the region below the curve y = x²:

P[Y ≤ X²] = ∫_0^1 ∫_0^{x²} 6xy² dy dx = ∫_0^1 2x⁷ dx = 1/4

(c) Here we can either integrate f_{X,Y}(x, y) over the region where min(X, Y) ≤ 1/2, which would require evaluating two integrals, or we can perform one integral over the complementary square [1/2, 1] × [1/2, 1] by recognizing

P[min(X, Y) ≤ 1/2] = 1 − P[min(X, Y) > 1/2]
    = 1 − ∫_{1/2}^1 ∫_{1/2}^1 6xy² dx dy
    = 1 − ∫_{1/2}^1 (9y²/4) dy = 11/32

(d) P[max(X, Y) ≤ 3/4] can be found by integrating over the square 0 ≤ x, y ≤ 3/4:

P[max(X, Y) ≤ 3/4] = P[X ≤ 3/4, Y ≤ 3/4]
    = ∫_0^{3/4} ∫_0^{3/4} 6xy² dx dy
    = [x²]_0^{3/4} [y³]_0^{3/4} = (3/4)⁵ ≈ 0.237
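All four quantities above can be confirmed with the same brute-force approach. This sketch is ours (the `integrate2d` helper is hypothetical); note that parts (c) and (d) need no indicator function at all, since they reduce to integrals over axis-aligned rectangles.

```python
def integrate2d(f, x0, x1, y0, y1, n=500):
    """Midpoint-rule approximation of a double integral over a rectangle."""
    hx, hy = (x1 - x0) / n, (y1 - y0) / n
    return sum(
        f(x0 + (i + 0.5) * hx, y0 + (j + 0.5) * hy)
        for i in range(n) for j in range(n)
    ) * hx * hy

pdf = lambda x, y: 6 * x * y * y      # c = 6 on the unit square

total = integrate2d(pdf, 0, 1, 0, 1)                                   # ~ 1
p_x_ge_y = integrate2d(lambda x, y: pdf(x, y) * (x >= y), 0, 1, 0, 1)  # ~ 2/5
p_min = 1 - integrate2d(pdf, 0.5, 1, 0.5, 1)                           # ~ 11/32
p_max = integrate2d(pdf, 0, 0.75, 0, 0.75)                             # ~ (3/4)^5
```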

Problem 5.2.3
The joint PDF of X and Y is

f_{X,Y}(x, y) =
    6e^{−(2x+3y)}   x ≥ 0, y ≥ 0
    0               otherwise

(a) The probability that X ≥ Y is

P[X ≥ Y] = ∫_0^∞ ∫_0^x 6e^{−(2x+3y)} dy dx
    = ∫_0^∞ 2e^{−2x} [−e^{−3y}]_{y=0}^{y=x} dx
    = ∫_0^∞ (2e^{−2x} − 2e^{−5x}) dx = 1 − 2/5 = 3/5

The probability P[X + Y ≤ 1] is found by integrating over the region where x + y ≤ 1:

P[X + Y ≤ 1] = ∫_0^1 ∫_0^{1−x} 6e^{−(2x+3y)} dy dx
    = ∫_0^1 2e^{−2x} [−e^{−3y}]_{y=0}^{y=1−x} dx
    = ∫_0^1 2e^{−2x} (1 − e^{−3(1−x)}) dx
    = [−e^{−2x} − 2e^{x−3}]_0^1
    = 1 + 2e^{−3} − 3e^{−2}

(b) The event min(X, Y) ≥ 1 is the same as the event {X ≥ 1, Y ≥ 1}. Thus,

P[min(X, Y) ≥ 1] = ∫_1^∞ ∫_1^∞ 6e^{−(2x+3y)} dy dx = e^{−(2+3)} = e^{−5}

(c) The event max(X, Y) ≤ 1 is the same as the event {X ≤ 1, Y ≤ 1}, so that

P[max(X, Y) ≤ 1] = ∫_0^1 ∫_0^1 6e^{−(2x+3y)} dy dx = (1 − e^{−2})(1 − e^{−3})
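A numerical cross-check of the exponential results above (our sketch; `integrate2d` is a hypothetical midpoint-rule helper). The infinite range is truncated at 6, which discards only a tail mass on the order of e^{−12}.

```python
import math

def integrate2d(f, x0, x1, y0, y1, n=600):
    """Midpoint-rule approximation of a double integral over a rectangle."""
    hx, hy = (x1 - x0) / n, (y1 - y0) / n
    return sum(
        f(x0 + (i + 0.5) * hx, y0 + (j + 0.5) * hy)
        for i in range(n) for j in range(n)
    ) * hx * hy

pdf = lambda x, y: 6 * math.exp(-(2 * x + 3 * y))

# Truncate the infinite domain at 6 in each coordinate.
p_x_ge_y = integrate2d(lambda x, y: pdf(x, y) * (x >= y), 0, 6, 0, 6)    # ~ 3/5
p_sum = integrate2d(lambda x, y: pdf(x, y) * (x + y <= 1), 0, 1, 0, 1)
p_max = integrate2d(pdf, 0, 1, 0, 1)
```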

Problem 5.2.4
The only difference between this problem and Example 5.2, whose joint PDF is f_{X,Y}(x, y) = 8xy for 0 ≤ y ≤ x ≤ 1, is that here we must integrate the joint PDF over each region to find the probabilities. Just as in Example 5.2, there are five cases. We use u and v as dummy variables for x and y.

x < 0 or y < 0

In this case, the region of integration doesn't overlap the region of nonzero probability, and

F_{X,Y}(x, y) = ∫_{−∞}^y ∫_{−∞}^x f_{X,Y}(u, v) du dv = 0

0 < y ≤ x ≤ 1

In this case, the region where the integral has a nonzero contribution is 0 ≤ v ≤ y, v ≤ u ≤ x:

F_{X,Y}(x, y) = ∫_0^y ∫_v^x 8uv du dv
    = ∫_0^y 4(x² − v²)v dv
    = [2x²v² − v⁴]_{v=0}^{v=y} = 2x²y² − y⁴

0 < x ≤ y and 0 ≤ x ≤ 1

F_{X,Y}(x, y) = ∫_0^x ∫_0^u 8uv dv du = ∫_0^x 4u³ du = x⁴

0 < y ≤ 1 and x ≥ 1

F_{X,Y}(x, y) = ∫_0^y ∫_v^1 8uv du dv = ∫_0^y 4v(1 − v²) dv = 2y² − y⁴

x ≥ 1 and y ≥ 1

In this case, the region of integration completely covers the region of nonzero probability, and

F_{X,Y}(x, y) = 1

The complete answer for the joint CDF is

F_{X,Y}(x, y) =
    0            x < 0 or y < 0
    2x²y² − y⁴   0 < y ≤ x ≤ 1
    x⁴           0 ≤ x ≤ y, 0 ≤ x ≤ 1
    2y² − y⁴     0 ≤ y ≤ 1, x ≥ 1
    1            x ≥ 1, y ≥ 1
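The piecewise CDF can be spot-checked against direct numerical integration of the joint PDF 8uv. In this sketch (ours, with a hypothetical `integrate2d` helper), clamping x and y at 1 before applying the two nontrivial formulas reproduces all five cases of the piecewise answer.

```python
def integrate2d(f, x0, x1, y0, y1, n=600):
    """Midpoint-rule approximation of a double integral over a rectangle."""
    hx, hy = (x1 - x0) / n, (y1 - y0) / n
    return sum(
        f(x0 + (i + 0.5) * hx, y0 + (j + 0.5) * hy)
        for i in range(n) for j in range(n)
    ) * hx * hy

pdf = lambda u, v: 8 * u * v if 0 <= v <= u <= 1 else 0.0

def cdf(x, y):
    """Piecewise joint CDF derived above; clamping handles x > 1 and y > 1."""
    if x < 0 or y < 0:
        return 0.0
    x, y = min(x, 1.0), min(y, 1.0)
    return 2 * x * x * y * y - y ** 4 if y <= x else x ** 4

points = [(0.5, 0.25), (0.25, 0.5), (1.5, 0.5), (2.0, 3.0)]
pairs = [(integrate2d(pdf, 0, x, 0, y), cdf(x, y)) for (x, y) in points]
```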

Problem 5.3.1
(a) The joint PDF (and the corresponding region of nonzero probability, the triangle with −1 ≤ x ≤ y ≤ 1) are

f_{X,Y}(x, y) =
    1/2   −1 ≤ x ≤ y ≤ 1
    0     otherwise

(b)

P[X > 0] = ∫_0^1 ∫_x^1 (1/2) dy dx = ∫_0^1 (1 − x)/2 dx = 1/4

This result can also be deduced by geometry: the part of the X,Y plane corresponding to the event X > 0 is 1/4 of the total shaded area.

(c) For x > 1 or x < −1, f_X(x) = 0. For −1 ≤ x ≤ 1,

f_X(x) = ∫ f_{X,Y}(x, y) dy = ∫_x^1 (1/2) dy = (1 − x)/2

The complete expression for the marginal PDF is

f_X(x) =
    (1 − x)/2   −1 ≤ x ≤ 1
    0           otherwise

(d) From the marginal PDF f_X(x), the expected value of X is

E[X] = ∫ x f_X(x) dx = (1/2) ∫_{−1}^1 x(1 − x) dx = [x²/4 − x³/6]_{−1}^1 = −1/3
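A one-dimensional check of parts (c) and (d) (our own sketch; `integrate1d` is a hypothetical midpoint-rule helper): the marginal f_X(x) = (1 − x)/2 should integrate to 1 over [−1, 1] and give E[X] = −1/3.

```python
def integrate1d(f, a, b, n=20000):
    """Midpoint-rule approximation of a one-dimensional integral."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

f_x = lambda x: (1 - x) / 2                       # marginal PDF on [-1, 1]

mass = integrate1d(f_x, -1, 1)                    # ~ 1
mean = integrate1d(lambda x: x * f_x(x), -1, 1)   # ~ -1/3
```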

Problem 5.3.2
The joint PDF is

f_{X,Y}(x, y) =
    2   x + y ≤ 1, x, y ≥ 0
    0   otherwise

Using the triangular region of nonzero probability, we can find the marginal PDFs by integrating over the appropriate ranges. For 0 ≤ x ≤ 1,

f_X(x) = ∫_0^{1−x} 2 dy = 2(1 − x)

Likewise, for 0 ≤ y ≤ 1,

f_Y(y) = ∫_0^{1−y} 2 dx = 2(1 − y)

The complete expressions are

f_X(x) =
    2(1 − x)   0 ≤ x ≤ 1
    0          otherwise

f_Y(y) =
    2(1 − y)   0 ≤ y ≤ 1
    0          otherwise

Problem 5.3.3
Random variables X and Y have joint PDF

f_{X,Y}(x, y) =
    1/(πr²)   0 ≤ x² + y² ≤ r²
    0         otherwise

The marginal PDF of X is

f_X(x) = ∫_{−√(r²−x²)}^{√(r²−x²)} (1/(πr²)) dy =
    2√(r²−x²)/(πr²)   −r ≤ x ≤ r
    0                 otherwise

And similarly for f_Y(y):

f_Y(y) = ∫_{−√(r²−y²)}^{√(r²−y²)} (1/(πr²)) dx =
    2√(r²−y²)/(πr²)   −r ≤ y ≤ r
    0                 otherwise

Problem 5.3.4
The joint PDF of X and Y and the region of nonzero probability are

f_{X,Y}(x, y) =
    5x²/2   −1 ≤ x ≤ 1, 0 ≤ y ≤ x²
    0       otherwise

We can find the marginal PDFs by integrating the joint PDF.

(a) The marginal PDF of X is

f_X(x) = ∫_0^{x²} (5x²/2) dy =
    5x⁴/2   −1 ≤ x ≤ 1
    0       otherwise

(b) Note that f_Y(y) = 0 for y > 1 or y < 0. For 0 ≤ y ≤ 1,

f_Y(y) = ∫ f_{X,Y}(x, y) dx = ∫_{−1}^{−√y} (5x²/2) dx + ∫_{√y}^1 (5x²/2) dx = 5(1 − y^{3/2})/3

The complete expression for the marginal PDF of Y is

f_Y(y) =
    5(1 − y^{3/2})/3   0 ≤ y ≤ 1
    0                  otherwise

Problem 5.3.5
In this problem, the joint PDF is

f_{X,Y}(x, y) =
    2|xy|/r⁴   0 ≤ x² + y² ≤ r²
    0          otherwise

(a) Since |xy| = |x||y|, for −r ≤ x ≤ r we can write

f_X(x) = ∫ f_{X,Y}(x, y) dy = (2|x|/r⁴) ∫_{−√(r²−x²)}^{√(r²−x²)} |y| dy

Since |y| is symmetric about the origin, we can simplify the integral to

f_X(x) = (4|x|/r⁴) ∫_0^{√(r²−x²)} y dy = (2|x|/r⁴) [y²]_0^{√(r²−x²)} = 2|x|(r² − x²)/r⁴

Note that for |x| > r, f_X(x) = 0. Hence the complete expression for the PDF of X is

f_X(x) =
    2|x|(r² − x²)/r⁴   −r ≤ x ≤ r
    0                  otherwise

(b) Note that the joint PDF is symmetric in x and y, so that f_Y(y) = f_X(y).


Problem 5.3.6
(a) The joint PDF of X and Y and the region of nonzero probability are

f_{X,Y}(x, y) =
    cy   0 ≤ y ≤ x ≤ 1
    0    otherwise

(b) To find the value of the constant c, we integrate the joint PDF over all x and y:

∫∫ f_{X,Y}(x, y) dx dy = ∫_0^1 ∫_0^x cy dy dx = ∫_0^1 (cx²/2) dx = [cx³/6]_0^1 = c/6

Thus c = 6.

(c) We can find the CDF F_X(x) = P[X ≤ x] by integrating the joint PDF over the event X ≤ x. For x < 0, F_X(x) = 0. For x > 1, F_X(x) = 1. For 0 ≤ x ≤ 1,

F_X(x) = ∫∫_{x′ ≤ x} f_{X,Y}(x′, y′) dy′ dx′ = ∫_0^x ∫_0^{x′} 6y′ dy′ dx′ = ∫_0^x 3(x′)² dx′ = x³

The complete expression for the CDF of X is

F_X(x) =
    0    x < 0
    x³   0 ≤ x ≤ 1
    1    x > 1

(d) Similarly, we find the CDF of Y by integrating f_{X,Y}(x, y) over the event Y ≤ y. For y < 0, F_Y(y) = 0, and for y > 1, F_Y(y) = 1. For 0 ≤ y ≤ 1,

F_Y(y) = ∫∫_{y′ ≤ y} f_{X,Y}(x′, y′) dx′ dy′
    = ∫_0^y ∫_{y′}^1 6y′ dx′ dy′
    = ∫_0^y 6y′(1 − y′) dy′ = [3(y′)² − 2(y′)³]_0^y = 3y² − 2y³

The complete expression for the CDF of Y is

F_Y(y) =
    0           y < 0
    3y² − 2y³   0 ≤ y ≤ 1
    1           y > 1

(e) To find P[Y ≤ X/2], we integrate the joint PDF f_{X,Y}(x, y) over the region y ≤ x/2:

P[Y ≤ X/2] = ∫_0^1 ∫_0^{x/2} 6y dy dx = ∫_0^1 [3y²]_0^{x/2} dx = ∫_0^1 (3x²/4) dx = 1/4
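The CDF values and the probability in part (e) can be cross-checked numerically (our own sketch; `integrate2d` is a hypothetical midpoint-rule helper). We evaluate F_X and F_Y at sample points and compare with the closed forms x³ and 3y² − 2y³.

```python
def integrate2d(f, x0, x1, y0, y1, n=500):
    """Midpoint-rule approximation of a double integral over a rectangle."""
    hx, hy = (x1 - x0) / n, (y1 - y0) / n
    return sum(
        f(x0 + (i + 0.5) * hx, y0 + (j + 0.5) * hy)
        for i in range(n) for j in range(n)
    ) * hx * hy

pdf = lambda x, y: 6 * y if y <= x else 0.0   # c = 6 on 0 <= y <= x <= 1

fx_06 = integrate2d(pdf, 0, 0.6, 0, 1)        # ~ 0.6**3
fy_05 = integrate2d(pdf, 0, 1, 0, 0.5)        # ~ 3*(0.5)**2 - 2*(0.5)**3
p_e = integrate2d(lambda x, y: pdf(x, y) * (y <= x / 2), 0, 1, 0, 1)  # ~ 1/4
```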

Problem 5.4.1
(a) The minimum value of W is W = 0, which occurs when X = 0 and Y = 0. The maximum value of W is W = 1, which occurs when X = 1 or Y = 1. The range of W is therefore S_W = {w | 0 ≤ w ≤ 1}.

(b) For 0 ≤ w ≤ 1, the CDF of W is

F_W(w) = P[max(X, Y) ≤ w] = P[X ≤ w, Y ≤ w] = ∫_0^w ∫_0^w f_{X,Y}(x, y) dy dx

Substituting f_{X,Y}(x, y) = x + y yields

F_W(w) = ∫_0^w ∫_0^w (x + y) dy dx = ∫_0^w [xy + y²/2]_{y=0}^{y=w} dx = ∫_0^w (wx + w²/2) dx = w³

The complete expression for the CDF is

F_W(w) =
    0    w < 0
    w³   0 ≤ w ≤ 1
    1    otherwise

(c) The PDF of W is found by differentiating the CDF:

f_W(w) = dF_W(w)/dw =
    3w²   0 ≤ w ≤ 1
    0     otherwise


Problem 5.4.2
(a) Since the joint PDF f_{X,Y}(x, y) is nonzero only for 0 ≤ y ≤ x ≤ 1, we observe that W = Y − X ≤ 0 since Y ≤ X. In addition, the most negative value of W occurs when Y = 0 and X = 1, yielding W = −1. Hence the range of W is S_W = {w | −1 ≤ w ≤ 0}.

(b) For w < −1, F_W(w) = 0. For w > 0, F_W(w) = 1. For −1 ≤ w ≤ 0, the CDF of W is

F_W(w) = P[Y − X ≤ w] = ∫_{−w}^1 ∫_0^{x+w} 6y dy dx = ∫_{−w}^1 3(x + w)² dx = [(x + w)³]_{−w}^1 = (1 + w)³

(The region of integration lies below the line y = x + w.) Therefore, the complete CDF of W is

F_W(w) =
    0          w < −1
    (1 + w)³   −1 ≤ w ≤ 0
    1          w > 0

(c) By taking the derivative of F_W(w) with respect to w, we obtain the PDF

f_W(w) =
    3(w + 1)²   −1 ≤ w ≤ 0
    0           otherwise

Problem 5.4.3
Random variables X and Y have joint PDF

f_{X,Y}(x, y) =
    2   0 ≤ y ≤ x ≤ 1
    0   otherwise

(a) Since X and Y are both nonnegative, W = Y/X ≥ 0. Since Y ≤ X, W = Y/X ≤ 1. Note that W = 0 can occur if Y = 0. Thus the range of W is S_W = {w | 0 ≤ w ≤ 1}.

(b) For 0 ≤ w ≤ 1, the CDF of W is

F_W(w) = P[Y/X ≤ w] = P[Y ≤ wX] = w

since the region {y ≤ wx} covers the fraction w of the triangle's area. The complete expression for the CDF is

F_W(w) =
    0   w < 0
    w   0 ≤ w < 1
    1   w ≥ 1

(c) By taking the derivative of the CDF, we find that the PDF of W is

f_W(w) =
    1   0 ≤ w < 1
    0   otherwise

(d) We see that W has a uniform PDF over [0, 1]. Thus E[W] = 1/2.
Problem 5.4.4
Random variables X and Y have joint PDF

f_{X,Y}(x, y) =
    2   0 ≤ y ≤ x ≤ 1
    0   otherwise

(a) Since f_{X,Y}(x, y) = 0 for y > x, we can conclude that Y ≤ X and that W = X/Y ≥ 1. Since Y can be arbitrarily small but positive, W can be arbitrarily large. Hence the range of W is S_W = {w | w ≥ 1}.

(b) For w ≥ 1, the CDF of W is

F_W(w) = P[X/Y ≤ w] = 1 − P[X/Y > w] = 1 − P[Y < X/w] = 1 − 1/w

Note that we have used the fact that P[Y < X/w] equals the PDF value 2 times the area 1/(2w) of the corresponding triangle below the line y = x/w. The complete expression for the CDF is

F_W(w) =
    0         w < 1
    1 − 1/w   w ≥ 1

(c) The PDF of W is found by differentiating the CDF:

f_W(w) = dF_W(w)/dw =
    1/w²   w ≥ 1
    0      otherwise
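The CDF F_W(w) = 1 − 1/w is easy to check by simulation (our own sketch): sorting a uniform pair on the unit square yields a uniform point on the triangle 0 ≤ y ≤ x ≤ 1 with density 2, which is exactly the joint PDF above.

```python
import random

random.seed(1)
N = 200_000
count2 = count4 = 0
for _ in range(N):
    u, v = random.random(), random.random()
    x, y = max(u, v), min(u, v)        # uniform on the triangle 0 <= y <= x <= 1
    w = x / y if y > 0 else float("inf")
    count2 += (w <= 2)
    count4 += (w <= 4)

fw_2 = count2 / N    # ~ 1 - 1/2
fw_4 = count4 / N    # ~ 1 - 1/4
```

With 200,000 samples the standard error is about 0.001, so agreement to two decimal places is expected.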


Problem 5.4.5
The position of the mobile phone is equally likely to be anywhere within a circle of radius 4 km around the base station. Let X and Y denote the position of the mobile. Since the cell has a radius of 4 km, we will measure X and Y in kilometers. Assuming the base station is at the origin of the X,Y plane, the joint PDF of X and Y is

f_{X,Y}(x, y) =
    1/(16π)   x² + y² ≤ 16
    0         otherwise

Since the radial distance of the mobile from the base station is R = √(X² + Y²), the CDF of R is

F_R(r) = P[R ≤ r] = P[X² + Y² ≤ r²]

By changing to polar coordinates, we see that for 0 ≤ r ≤ 4 km,

F_R(r) = ∫_0^{2π} ∫_0^r (r′/(16π)) dr′ dθ′ = r²/16

So

F_R(r) =
    0       r < 0
    r²/16   0 ≤ r < 4
    1       r ≥ 4

Then by taking the derivative with respect to r, we arrive at the PDF

f_R(r) =
    r/8   0 ≤ r ≤ 4
    0     otherwise

Problem 5.5.1
The joint PDF of X and Y is

f_{X,Y}(x, y) =
    (x + y)/3   0 ≤ x ≤ 1, 0 ≤ y ≤ 2
    0           otherwise

Before calculating moments, we first find the marginal PDFs of X and Y. For 0 ≤ x ≤ 1,

f_X(x) = ∫ f_{X,Y}(x, y) dy = ∫_0^2 (x + y)/3 dy = [xy/3 + y²/6]_{y=0}^{y=2} = (2x + 2)/3

For 0 ≤ y ≤ 2,

f_Y(y) = ∫ f_{X,Y}(x, y) dx = ∫_0^1 (x + y)/3 dx = [x²/6 + xy/3]_{x=0}^{x=1} = (2y + 1)/6

Complete expressions for the marginal PDFs are

f_X(x) =
    (2x + 2)/3   0 ≤ x ≤ 1
    0            otherwise

f_Y(y) =
    (2y + 1)/6   0 ≤ y ≤ 2
    0            otherwise

(a) The expected value of X is

E[X] = ∫ x f_X(x) dx = ∫_0^1 x(2x + 2)/3 dx = [2x³/9 + x²/3]_0^1 = 5/9

The second moment of X is

E[X²] = ∫ x² f_X(x) dx = ∫_0^1 x²(2x + 2)/3 dx = [x⁴/6 + 2x³/9]_0^1 = 7/18

The variance of X is Var[X] = E[X²] − (E[X])² = 7/18 − (5/9)² = 13/162.

(b) The expected value of Y is

E[Y] = ∫ y f_Y(y) dy = ∫_0^2 y(2y + 1)/6 dy = [y³/9 + y²/12]_0^2 = 11/9

The second moment of Y is

E[Y²] = ∫ y² f_Y(y) dy = ∫_0^2 y²(2y + 1)/6 dy = [y⁴/12 + y³/18]_0^2 = 16/9

The variance of Y is Var[Y] = E[Y²] − (E[Y])² = 16/9 − (11/9)² = 23/81.

(c) The correlation of X and Y is

E[XY] = ∫∫ xy f_{X,Y}(x, y) dx dy
    = ∫_0^1 ∫_0^2 xy(x + y)/3 dy dx
    = ∫_0^1 [x²y²/6 + xy³/9]_{y=0}^{y=2} dx
    = ∫_0^1 (2x²/3 + 8x/9) dx = [2x³/9 + 4x²/9]_0^1 = 2/3

The covariance is Cov[X, Y] = E[XY] − E[X]E[Y] = 2/3 − (5/9)(11/9) = −1/81.

(d) The expected value of X + Y is

E[X + Y] = E[X] + E[Y] = 5/9 + 11/9 = 16/9

(e) By Theorem 5.10,

Var[X + Y] = Var[X] + Var[Y] + 2 Cov[X, Y] = 13/162 + 23/81 − 2/81 = 55/162
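All the moments above follow from smooth polynomial integrands, so a midpoint-rule check converges essentially to machine precision. This is our own sketch (`integrate2d` is a hypothetical helper), not part of the text.

```python
def integrate2d(f, x0, x1, y0, y1, n=400):
    """Midpoint-rule approximation of a double integral over a rectangle."""
    hx, hy = (x1 - x0) / n, (y1 - y0) / n
    return sum(
        f(x0 + (i + 0.5) * hx, y0 + (j + 0.5) * hy)
        for i in range(n) for j in range(n)
    ) * hx * hy

pdf = lambda x, y: (x + y) / 3       # on [0,1] x [0,2]

ex  = integrate2d(lambda x, y: x * pdf(x, y), 0, 1, 0, 2)
ex2 = integrate2d(lambda x, y: x * x * pdf(x, y), 0, 1, 0, 2)
ey  = integrate2d(lambda x, y: y * pdf(x, y), 0, 1, 0, 2)
ey2 = integrate2d(lambda x, y: y * y * pdf(x, y), 0, 1, 0, 2)
exy = integrate2d(lambda x, y: x * y * pdf(x, y), 0, 1, 0, 2)

var_sum = (ex2 - ex ** 2) + (ey2 - ey ** 2) + 2 * (exy - ex * ey)
```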



Problem 5.5.2
Random variables X and Y have joint PDF f_{X,Y}(x, y) = 4xy for 0 ≤ x ≤ 1, 0 ≤ y ≤ 1 (and 0 otherwise).

(a) The first moment of X is

E[X] = ∫_0^1 ∫_0^1 x · 4xy dy dx = ∫_0^1 2x² dx = 2/3

The second moment of X is

E[X²] = ∫_0^1 ∫_0^1 x² · 4xy dy dx = ∫_0^1 2x³ dx = 1/2

The variance of X is Var[X] = E[X²] − (E[X])² = 1/2 − (2/3)² = 1/18.

(b) The mean of Y is

E[Y] = ∫_0^1 ∫_0^1 y · 4xy dy dx = ∫_0^1 (4x/3) dx = 2/3

The second moment of Y is

E[Y²] = ∫_0^1 ∫_0^1 4xy³ dy dx = ∫_0^1 x dx = 1/2

The variance of Y is Var[Y] = E[Y²] − (E[Y])² = 1/2 − (2/3)² = 1/18.

(c) To find the covariance, we first find the correlation

E[XY] = ∫_0^1 ∫_0^1 4x²y² dy dx = ∫_0^1 (4x²/3) dx = 4/9

The covariance is thus

Cov[X, Y] = E[XY] − E[X]E[Y] = 4/9 − (2/3)² = 0

(d)

E[X + Y] = E[X] + E[Y] = 2/3 + 2/3 = 4/3

(e) By Theorem 5.10, the variance of X + Y is

Var[X + Y] = Var[X] + Var[Y] + 2 Cov[X, Y] = 1/18 + 1/18 + 0 = 1/9


Problem 5.5.3
The joint PDF of X and Y and the region of nonzero probability are

f_{X,Y}(x, y) =
    5x²/2   −1 ≤ x ≤ 1, 0 ≤ y ≤ x²
    0       otherwise

(a) The first moment of X is

E[X] = ∫_{−1}^1 ∫_0^{x²} x (5x²/2) dy dx = ∫_{−1}^1 (5x⁵/2) dx = [5x⁶/12]_{−1}^1 = 0

Since E[X] = 0, the variance of X and the second moment are both

Var[X] = E[X²] = ∫_{−1}^1 ∫_0^{x²} x² (5x²/2) dy dx = ∫_{−1}^1 (5x⁶/2) dx = [5x⁷/14]_{−1}^1 = 5/7

(b) The first and second moments of Y are

E[Y] = ∫_{−1}^1 ∫_0^{x²} y (5x²/2) dy dx = ∫_{−1}^1 (5x⁶/4) dx = 5/14

E[Y²] = ∫_{−1}^1 ∫_0^{x²} y² (5x²/2) dy dx = ∫_{−1}^1 (5x⁸/6) dx = 5/27

Therefore, Y has variance

Var[Y] = 5/27 − (5/14)² ≈ 0.0576

(c) Since E[X] = 0, Cov[X, Y] = E[XY] − E[X]E[Y] = E[XY]. Thus,

Cov[X, Y] = E[XY] = ∫_{−1}^1 ∫_0^{x²} xy (5x²/2) dy dx = ∫_{−1}^1 (5x⁷/4) dx = 0

(d) The expected value of the sum X + Y is

E[X + Y] = E[X] + E[Y] = 5/14

(e) By Theorem 5.10, the variance of X + Y is

Var[X + Y] = Var[X] + Var[Y] + 2 Cov[X, Y] = 5/7 + 0.0576 ≈ 0.7719


Problem 5.5.4
Random variables X and Y have joint PDF

f_{X,Y}(x, y) =
    2   0 ≤ y ≤ x ≤ 1
    0   otherwise

Before finding moments, it is helpful to first find the marginal PDFs. For 0 ≤ x ≤ 1,

f_X(x) = ∫ f_{X,Y}(x, y) dy = ∫_0^x 2 dy = 2x

Note that f_X(x) = 0 for x < 0 or x > 1. For 0 ≤ y ≤ 1,

f_Y(y) = ∫ f_{X,Y}(x, y) dx = ∫_y^1 2 dx = 2(1 − y)

Also, for y < 0 or y > 1, f_Y(y) = 0. Complete expressions for the marginal PDFs are

f_X(x) =
    2x   0 ≤ x ≤ 1
    0    otherwise

f_Y(y) =
    2(1 − y)   0 ≤ y ≤ 1
    0          otherwise

(a) The first two moments of X are

E[X] = ∫ x f_X(x) dx = ∫_0^1 2x² dx = 2/3

E[X²] = ∫ x² f_X(x) dx = ∫_0^1 2x³ dx = 1/2

The variance of X is Var[X] = E[X²] − (E[X])² = 1/2 − 4/9 = 1/18.

(b) The expected value and second moment of Y are

E[Y] = ∫ y f_Y(y) dy = ∫_0^1 2y(1 − y) dy = [y² − 2y³/3]_0^1 = 1/3

E[Y²] = ∫ y² f_Y(y) dy = ∫_0^1 2y²(1 − y) dy = [2y³/3 − y⁴/2]_0^1 = 1/6

The variance of Y is Var[Y] = E[Y²] − (E[Y])² = 1/6 − 1/9 = 1/18.

(c) Before finding the covariance, we find the correlation

E[XY] = ∫_0^1 ∫_0^x 2xy dy dx = ∫_0^1 x³ dx = 1/4

The covariance is Cov[X, Y] = E[XY] − E[X]E[Y] = 1/4 − 2/9 = 1/36.

(d) E[X + Y] = E[X] + E[Y] = 2/3 + 1/3 = 1

(e) By Theorem 5.10, Var[X + Y] = Var[X] + Var[Y] + 2 Cov[X, Y] = 1/18 + 1/18 + 2/36 = 1/6.
Problem 5.5.5
Random variables X and Y have joint PDF

f_{X,Y}(x, y) =
    1/2   −1 ≤ x ≤ y ≤ 1
    0     otherwise

The region of possible pairs (x, y) is the triangle above the line y = x.

(a)

E[XY] = ∫_{−1}^1 ∫_x^1 (xy/2) dy dx = ∫_{−1}^1 (x/4)(1 − x²) dx = [x²/8 − x⁴/16]_{−1}^1 = 0

(b)

E[e^{X+Y}] = ∫_{−1}^1 ∫_x^1 (1/2) eˣ eʸ dy dx
    = (1/2) ∫_{−1}^1 eˣ (e − eˣ) dx
    = [(1/2) e^{1+x} − (1/4) e^{2x}]_{−1}^1
    = e²/4 + e^{−2}/4 − 1/2

Problem 5.6.1
(a) Given the event A = {X + Y ≤ 1}, we wish to find f_{X,Y|A}(x, y). First we find

P[A] = ∫_0^1 ∫_0^{1−x} 6e^{−(2x+3y)} dy dx = 1 − 3e^{−2} + 2e^{−3}

So then

f_{X,Y|A}(x, y) =
    6e^{−(2x+3y)} / (1 − 3e^{−2} + 2e^{−3})   x + y ≤ 1, x ≥ 0, y ≥ 0
    0                                         otherwise


Problem 5.6.2
The joint PDF of X and Y is

f_{X,Y}(x, y) =
    (x + y)/3   0 ≤ x ≤ 1, 0 ≤ y ≤ 2
    0           otherwise

(a) The probability that Y ≤ 1 is

P[A] = P[Y ≤ 1] = ∫∫_{y ≤ 1} f_{X,Y}(x, y) dx dy
    = ∫_0^1 ∫_0^1 (x + y)/3 dy dx
    = ∫_0^1 [xy/3 + y²/6]_{y=0}^{y=1} dx
    = ∫_0^1 (2x + 1)/6 dx = [x²/6 + x/6]_0^1 = 1/3

(b) By Definition 5.5, the conditional joint PDF of X and Y given A is

f_{X,Y|A}(x, y) =
    f_{X,Y}(x, y)/P[A]   (x, y) ∈ A
    0                    otherwise

which here becomes

f_{X,Y|A}(x, y) =
    x + y   0 ≤ x ≤ 1, 0 ≤ y ≤ 1
    0       otherwise

(c) From f_{X,Y|A}(x, y), we find the conditional marginal PDF f_{X|A}(x). For 0 ≤ x ≤ 1,

f_{X|A}(x) = ∫ f_{X,Y|A}(x, y) dy = ∫_0^1 (x + y) dy = [xy + y²/2]_{y=0}^{y=1} = x + 1/2

The complete expression is

f_{X|A}(x) =
    x + 1/2   0 ≤ x ≤ 1
    0         otherwise

(d) For 0 ≤ y ≤ 1, the conditional marginal PDF of Y is

f_{Y|A}(y) = ∫ f_{X,Y|A}(x, y) dx = ∫_0^1 (x + y) dx = [x²/2 + xy]_{x=0}^{x=1} = y + 1/2

The complete expression is

f_{Y|A}(y) =
    y + 1/2   0 ≤ y ≤ 1
    0         otherwise


Problem 5.6.3
Random variables X and Y have joint PDF

f_{X,Y}(x, y) =
    (4x + 2y)/3   0 ≤ x ≤ 1, 0 ≤ y ≤ 1
    0             otherwise

(a) The probability that Y ≤ 1/2 is

P[A] = P[Y ≤ 1/2] = ∫∫_{y ≤ 1/2} f_{X,Y}(x, y) dy dx
    = ∫_0^1 ∫_0^{1/2} (4x + 2y)/3 dy dx
    = ∫_0^1 [(4xy + y²)/3]_{y=0}^{y=1/2} dx
    = ∫_0^1 (2x + 1/4)/3 dx = [x²/3 + x/12]_0^1 = 5/12

(b) The conditional joint PDF of X and Y given A is

f_{X,Y|A}(x, y) =
    f_{X,Y}(x, y)/P[A]   (x, y) ∈ A
    0                    otherwise

which here becomes

f_{X,Y|A}(x, y) =
    8(2x + y)/5   0 ≤ x ≤ 1, 0 ≤ y ≤ 1/2
    0             otherwise

(c) For 0 ≤ x ≤ 1, the PDF of X given A is

f_{X|A}(x) = ∫ f_{X,Y|A}(x, y) dy = (8/5) ∫_0^{1/2} (2x + y) dy = (8/5)[2xy + y²/2]_{y=0}^{y=1/2} = (8x + 1)/5

The complete expression is

f_{X|A}(x) =
    (8x + 1)/5   0 ≤ x ≤ 1
    0            otherwise

(d) For 0 ≤ y ≤ 1/2, the conditional marginal PDF of Y given A is

f_{Y|A}(y) = ∫ f_{X,Y|A}(x, y) dx = (8/5) ∫_0^1 (2x + y) dx = (8/5)[x² + xy]_{x=0}^{x=1} = (8y + 8)/5

The complete expression is

f_{Y|A}(y) =
    (8y + 8)/5   0 ≤ y ≤ 1/2
    0            otherwise
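A quick check of this problem (our own sketch; `integrate1d` and `integrate2d` are hypothetical midpoint-rule helpers): P[A] should come out to 5/12, and both conditional marginal PDFs must integrate to 1 over their supports.

```python
def integrate1d(f, a, b, n=20000):
    """Midpoint-rule approximation of a one-dimensional integral."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

def integrate2d(f, x0, x1, y0, y1, n=400):
    """Midpoint-rule approximation of a double integral over a rectangle."""
    hx, hy = (x1 - x0) / n, (y1 - y0) / n
    return sum(
        f(x0 + (i + 0.5) * hx, y0 + (j + 0.5) * hy)
        for i in range(n) for j in range(n)
    ) * hx * hy

pdf = lambda x, y: (4 * x + 2 * y) / 3

p_a = integrate2d(pdf, 0, 1, 0, 0.5)                       # ~ 5/12
mass_x = integrate1d(lambda x: (8 * x + 1) / 5, 0, 1)      # f_{X|A} integrates to ~ 1
mass_y = integrate1d(lambda y: (8 * y + 8) / 5, 0, 0.5)    # f_{Y|A} integrates to ~ 1
```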


Problem 5.6.4
The joint PDF is

f_{X,Y}(x, y) =
    5x²/2   −1 ≤ x ≤ 1, 0 ≤ y ≤ x²
    0       otherwise

(a) The event A = {Y ≤ 1/4} has probability

P[A] = 2 ∫_0^{1/2} ∫_0^{x²} (5x²/2) dy dx + 2 ∫_{1/2}^1 ∫_0^{1/4} (5x²/2) dy dx
    = ∫_0^{1/2} 5x⁴ dx + ∫_{1/2}^1 (5x²/4) dx
    = [x⁵]_0^{1/2} + [5x³/12]_{1/2}^1 = 19/48

This implies

f_{X,Y|A}(x, y) =
    f_{X,Y}(x, y)/P[A]   (x, y) ∈ A
    0                    otherwise

which here becomes

f_{X,Y|A}(x, y) =
    120x²/19   −1 ≤ x ≤ 1, 0 ≤ y ≤ x², y ≤ 1/4
    0          otherwise

(b) For 0 ≤ y ≤ 1/4,

f_{Y|A}(y) = ∫ f_{X,Y|A}(x, y) dx = 2 ∫_{√y}^1 (120x²/19) dx = (80/19)(1 − y^{3/2})

The complete expression is

f_{Y|A}(y) =
    (80/19)(1 − y^{3/2})   0 ≤ y ≤ 1/4
    0                      otherwise

(c) The conditional expectation of Y given A is

E[Y|A] = ∫_0^{1/4} y (80/19)(1 − y^{3/2}) dy = (80/19)[y²/2 − 2y^{7/2}/7]_0^{1/4} = 65/532

(d) To find f_{X|A}(x), we can write f_{X|A}(x) = ∫ f_{X,Y|A}(x, y) dy. However, when we substitute f_{X,Y|A}(x, y), the limits will depend on the value of x. When |x| ≤ 1/2,

f_{X|A}(x) = ∫_0^{x²} (120x²/19) dy = 120x⁴/19

When −1 ≤ x ≤ −1/2 or 1/2 ≤ x ≤ 1,

f_{X|A}(x) = ∫_0^{1/4} (120x²/19) dy = 30x²/19

The complete expression for the conditional PDF of X given A is

f_{X|A}(x) =
    30x²/19    −1 ≤ x ≤ −1/2
    120x⁴/19   −1/2 ≤ x ≤ 1/2
    30x²/19    1/2 ≤ x ≤ 1
    0          otherwise

(e) The conditional mean of X given A is

E[X|A] = ∫_{−1}^{−1/2} x(30x²/19) dx + ∫_{−1/2}^{1/2} x(120x⁴/19) dx + ∫_{1/2}^1 x(30x²/19) dx = 0

since each integrand is odd and the region is symmetric about x = 0.
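Problem 5.6.4's three key numbers, P[A] = 19/48, the normalization of f_{Y|A}, and E[Y|A] = 65/532, can be checked numerically (our own sketch; the `integrate1d`/`integrate2d` helpers are hypothetical midpoint-rule routines, so the curved boundary y = x² limits the 2-D check to a few thousandths).

```python
def integrate1d(f, a, b, n=20000):
    """Midpoint-rule approximation of a one-dimensional integral."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

def integrate2d(f, x0, x1, y0, y1, n=600):
    """Midpoint-rule approximation of a double integral over a rectangle."""
    hx, hy = (x1 - x0) / n, (y1 - y0) / n
    return sum(
        f(x0 + (i + 0.5) * hx, y0 + (j + 0.5) * hy)
        for i in range(n) for j in range(n)
    ) * hx * hy

pdf = lambda x, y: 2.5 * x * x if 0 <= y <= x * x else 0.0

p_a = integrate2d(lambda x, y: pdf(x, y) * (y <= 0.25), -1, 1, 0, 1)   # ~ 19/48

f_y_given_a = lambda y: (80 / 19) * (1 - y ** 1.5)
mass = integrate1d(f_y_given_a, 0, 0.25)                               # ~ 1
ey_a = integrate1d(lambda y: y * f_y_given_a(y), 0, 0.25)              # ~ 65/532
```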

Problem 5.7.1
The joint PDF is

f_{X,Y}(x, y) =
    x + y   0 ≤ x ≤ 1, 0 ≤ y ≤ 1
    0       otherwise

(a) The conditional PDF f_{X|Y}(x|y) is defined for all y such that 0 ≤ y ≤ 1.

(b) For 0 ≤ y ≤ 1,

f_{X|Y}(x|y) = f_{X,Y}(x, y)/f_Y(y) = (x + y) / ∫_0^1 (x + y) dx =
    (x + y)/(y + 1/2)   0 ≤ x ≤ 1
    0                   otherwise

(c) The conditional PDF f_{Y|X}(y|x) is defined for all values of x in the interval [0, 1].

(d) For 0 ≤ x ≤ 1,

f_{Y|X}(y|x) = f_{X,Y}(x, y)/f_X(x) = (x + y) / ∫_0^1 (x + y) dy =
    (x + y)/(x + 1/2)   0 ≤ y ≤ 1
    0                   otherwise


Problem 5.7.2
Random variables X and Y have joint PDF

f_{X,Y}(x, y) =
    2   0 ≤ y ≤ x ≤ 1
    0   otherwise

(a) For 0 ≤ y ≤ 1,

f_Y(y) = ∫ f_{X,Y}(x, y) dx = ∫_y^1 2 dx = 2(1 − y)

Also, for y < 0 or y > 1, f_Y(y) = 0. The complete expression for the marginal PDF is

f_Y(y) =
    2(1 − y)   0 ≤ y ≤ 1
    0          otherwise

(b) By Theorem 5.13, the conditional PDF of X given Y is

f_{X|Y}(x|y) = f_{X,Y}(x, y)/f_Y(y) =
    1/(1 − y)   y ≤ x ≤ 1
    0           otherwise

That is, since Y ≤ X ≤ 1, X is uniform over [y, 1] when Y = y.

(c) The conditional expectation of X given Y = y can be calculated as

E[X|Y = y] = ∫ x f_{X|Y}(x|y) dx = ∫_y^1 x/(1 − y) dx = [x²/(2(1 − y))]_y^1 = (1 + y)/2

In fact, since we know that the conditional PDF of X is uniform over [y, 1] when Y = y, it wasn't really necessary to perform the calculation.
Problem 5.7.3
Random variables X and Y have joint PDF

f_{X,Y}(x, y) =
    2   0 ≤ y ≤ x ≤ 1
    0   otherwise

(a) For 0 ≤ x ≤ 1, the marginal PDF for X satisfies

f_X(x) = ∫ f_{X,Y}(x, y) dy = ∫_0^x 2 dy = 2x

Note that f_X(x) = 0 for x < 0 or x > 1. Hence the complete expression for the marginal PDF of X is

f_X(x) =
    2x   0 ≤ x ≤ 1
    0    otherwise

(b) The conditional PDF of Y given X = x is

f_{Y|X}(y|x) = f_{X,Y}(x, y)/f_X(x) =
    1/x   0 ≤ y ≤ x
    0     otherwise

(c) Given X = x, Y has a uniform PDF over [0, x] and thus has conditional expected value E[Y|X = x] = x/2. Another way to obtain this result is to calculate ∫ y f_{Y|X}(y|x) dy.
Problem 5.7.4
We are told in the problem statement that if we know r, the number of feet a student sits from the blackboard, then we also know that that student's grade is a Gaussian random variable with mean 80 − r and standard deviation r. This is exactly

f_{X|R}(x|r) = (1/√(2πr²)) e^{−(x − [80 − r])²/(2r²)}

Problem 5.7.5
Random variables X and Y have joint PDF

f_{X,Y}(x, y) =
    1/2   −1 ≤ x ≤ y ≤ 1
    0     otherwise

(a) For −1 ≤ y ≤ 1, the marginal PDF of Y is

f_Y(y) = ∫ f_{X,Y}(x, y) dx = ∫_{−1}^y (1/2) dx = (y + 1)/2

The complete expression for the marginal PDF of Y is

f_Y(y) =
    (y + 1)/2   −1 ≤ y ≤ 1
    0           otherwise

(b) The conditional PDF of X given Y is

f_{X|Y}(x|y) = f_{X,Y}(x, y)/f_Y(y) =
    1/(1 + y)   −1 ≤ x ≤ y
    0           otherwise

(c) Given Y = y, the conditional PDF of X is uniform over [−1, y]. Hence the conditional expected value is E[X|Y = y] = (y − 1)/2.
Problem 5.7.6
We are given that the joint PDF of X and Y is

f_{X,Y}(x, y) =
    1/(πr²)   0 ≤ x² + y² ≤ r²
    0         otherwise

(a) The marginal PDF of X is

f_X(x) = ∫_{−√(r²−x²)}^{√(r²−x²)} (1/(πr²)) dy =
    2√(r²−x²)/(πr²)   −r ≤ x ≤ r
    0                 otherwise

The conditional PDF of Y given X is

f_{Y|X}(y|x) = f_{X,Y}(x, y)/f_X(x) =
    1/(2√(r²−x²))   y² ≤ r² − x²
    0               otherwise

(b) Given X = x, we observe that over the interval [−√(r²−x²), √(r²−x²)], Y has a uniform PDF. Since the conditional PDF f_{Y|X}(y|x) is symmetric about y = 0,

E[Y|X = x] = 0

Problem 5.8.1
X and Y are independent random variables with PDFs

f_X(x) =
    (1/3)e^{−x/3}   x ≥ 0
    0               otherwise

f_Y(y) =
    (1/2)e^{−y/2}   y ≥ 0
    0               otherwise

(a) To calculate P[X > Y], we use the joint PDF f_{X,Y}(x, y) = f_X(x) f_Y(y):

P[X > Y] = ∫∫_{x > y} f_X(x) f_Y(y) dx dy
    = ∫_0^∞ (1/2)e^{−y/2} ∫_y^∞ (1/3)e^{−x/3} dx dy
    = ∫_0^∞ (1/2)e^{−y/2} e^{−y/3} dy
    = ∫_0^∞ (1/2)e^{−(1/2 + 1/3)y} dy = (1/2)/(1/2 + 1/3) = 3/5

(b) Since X and Y are exponential random variables with parameters λ_X = 1/3 and λ_Y = 1/2, Appendix A tells us that E[X] = 1/λ_X = 3 and E[Y] = 1/λ_Y = 2. Since X and Y are independent, the correlation is E[XY] = E[X]E[Y] = 6.

(c) Since X and Y are independent, Cov[X, Y] = 0.
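Part (a) is easy to confirm by simulation (our own sketch). Note that `random.expovariate` takes the rate parameter, so rates 1/3 and 1/2 give means 3 and 2, matching the PDFs above.

```python
import random

random.seed(7)
N = 200_000
hits = sum(
    random.expovariate(1 / 3) > random.expovariate(1 / 2)   # X has mean 3, Y has mean 2
    for _ in range(N)
)
p_hat = hits / N   # ~ 3/5
```

With 200,000 trials the standard error is roughly 0.001, so the estimate lands within about 0.01 of 3/5.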
Problem 5.8.2
(a) Since E[−X_2] = −E[X_2], we can use Theorem 5.8 to write

E[X_1 − X_2] = E[X_1 + (−X_2)] = E[X_1] + E[−X_2] = E[X_1] − E[X_2] = 0

(b) By Theorem 4.6(f), Var[−X_2] = (−1)² Var[X_2] = Var[X_2]. Since X_1 and X_2 are independent, Theorem 5.15 says that

Var[X_1 − X_2] = Var[X_1 + (−X_2)] = Var[X_1] + Var[−X_2] = 2 Var[X]
Problem 5.8.3
Random variables X_1 and X_2 are independent and identically distributed with the following PDF:

f_X(x) =
    x/2   0 ≤ x ≤ 2
    0     otherwise

(a) Since X_1 and X_2 are identically distributed, they will share the same CDF F_X(x):

F_X(x) = ∫_{−∞}^x f_X(x′) dx′ =
    0      x < 0
    x²/4   0 ≤ x ≤ 2
    1      x > 2

(b) Since X_1 and X_2 are independent, we can say that

P[X_1 ≤ 1, X_2 ≤ 1] = P[X_1 ≤ 1] P[X_2 ≤ 1] = F_{X_1}(1) F_{X_2}(1) = [F_X(1)]² = 1/16

(c) For W = max(X_1, X_2),

F_W(1) = P[max(X_1, X_2) ≤ 1] = P[X_1 ≤ 1, X_2 ≤ 1]

Since X_1 and X_2 are independent,

F_W(1) = P[X_1 ≤ 1] P[X_2 ≤ 1] = [F_X(1)]² = 1/16

(d) More generally,

F_W(w) = P[max(X_1, X_2) ≤ w] = P[X_1 ≤ w, X_2 ≤ w]

Since X_1 and X_2 are independent,

F_W(w) = P[X_1 ≤ w] P[X_2 ≤ w] = [F_X(w)]² =
    0       w < 0
    w⁴/16   0 ≤ w ≤ 2
    1       w > 2
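The CDF F_W(w) = w⁴/16 can be checked by inverse-CDF simulation (our own sketch): since F_X(x) = x²/4 on [0, 2], a sample is x = 2√u with u uniform on [0, 1].

```python
import math
import random

random.seed(3)
N = 200_000
w_le_1 = w_le_15 = 0
for _ in range(N):
    x1 = 2 * math.sqrt(random.random())   # inverse-CDF sample of F_X(x) = x^2/4
    x2 = 2 * math.sqrt(random.random())
    w = max(x1, x2)
    w_le_1 += (w <= 1.0)
    w_le_15 += (w <= 1.5)

fw_1 = w_le_1 / N      # ~ 1/16
fw_15 = w_le_15 / N    # ~ 1.5**4 / 16
```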


Problem 5.8.4
X and Y are independent random variables with PDFs

f_X(x) =
    2x   0 ≤ x ≤ 1
    0    otherwise

f_Y(y) =
    3y²   0 ≤ y ≤ 1
    0     otherwise

For the event A = {X > Y}, this problem asks us to calculate the conditional expectations E[X|A] and E[Y|A]. We will do this using the conditional joint PDF f_{X,Y|A}(x, y). Since X and Y are independent, it is tempting to argue that the event X > Y does not alter the probability model for X and Y. Unfortunately, this is not the case. When we learn that X > Y, it increases the probability that X is large and Y is small. We will see this when we compare the conditional expectations E[X|A] and E[Y|A] to E[X] and E[Y].

(a) We can calculate the unconditional expectations, E[X] and E[Y], using the marginal PDFs f_X(x) and f_Y(y):

E[X] = ∫ x f_X(x) dx = ∫_0^1 2x² dx = 2/3

E[Y] = ∫ y f_Y(y) dy = ∫_0^1 3y³ dy = 3/4

(b) First, we need to calculate the conditional joint PDF f_{X,Y|A}(x, y). The first step is to write down the joint PDF of X and Y:

f_{X,Y}(x, y) = f_X(x) f_Y(y) =
    6xy²   0 ≤ x ≤ 1, 0 ≤ y ≤ 1
    0      otherwise

The event A has probability

P[A] = ∫∫_{x > y} f_{X,Y}(x, y) dy dx = ∫_0^1 ∫_0^x 6xy² dy dx = ∫_0^1 2x⁴ dx = 2/5

The conditional joint PDF of X and Y given A is

f_{X,Y|A}(x, y) =
    f_{X,Y}(x, y)/P[A]   (x, y) ∈ A
    0                    otherwise

which here becomes

f_{X,Y|A}(x, y) =
    15xy²   0 ≤ y ≤ x ≤ 1
    0       otherwise

The triangular region of nonzero probability is a signal that given A, X and Y are no longer independent. The conditional expected value of X given A is

E[X|A] = ∫∫ x f_{X,Y|A}(x, y) dy dx = 15 ∫_0^1 x² ∫_0^x y² dy dx = 5 ∫_0^1 x⁵ dx = 5/6

The conditional expected value of Y given A is

E[Y|A] = ∫∫ y f_{X,Y|A}(x, y) dy dx = 15 ∫_0^1 x ∫_0^x y³ dy dx = (15/4) ∫_0^1 x⁵ dx = 5/8

We see that E[X|A] > E[X] while E[Y|A] < E[Y]. That is, learning X > Y gives us a clue that X may be larger than usual while Y may be smaller than usual.
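The conditional expectations can be verified by rejection sampling (our own sketch): draw X and Y by inverse-CDF sampling (F_X(x) = x² and F_Y(y) = y³ on [0, 1]), keep only pairs with X > Y, and average. The acceptance rate should also come out near P[A] = 2/5.

```python
import random

random.seed(11)
N = 200_000
xs, ys = [], []
for _ in range(N):
    x = random.random() ** 0.5         # f_X(x) = 2x  via inverse CDF x = sqrt(u)
    y = random.random() ** (1 / 3)     # f_Y(y) = 3y^2 via inverse CDF y = u^(1/3)
    if x > y:                          # condition on the event A = {X > Y}
        xs.append(x)
        ys.append(y)

accept_rate = len(xs) / N        # ~ 2/5
ex_a = sum(xs) / len(xs)         # ~ 5/6
ey_a = sum(ys) / len(ys)         # ~ 5/8
```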
Problem 5.8.5
This problem is quite straightforward. From Theorem 5.2, the joint PDF of X and Y is

f_{X,Y}(x,y) = \frac{\partial^2 F_{X,Y}(x,y)}{\partial x \, \partial y}

If F_{X,Y}(x,y) = F_X(x) F_Y(y), then

f_{X,Y}(x,y) = \frac{\partial^2 [F_X(x) F_Y(y)]}{\partial x \, \partial y} = \frac{\partial [f_X(x) F_Y(y)]}{\partial y} = f_X(x) f_Y(y)

Hence F_{X,Y}(x,y) = F_X(x) F_Y(y) implies that X and Y are independent.

Conversely, if X and Y are independent, then f_{X,Y}(x,y) = f_X(x) f_Y(y). By Definition 5.2,

F_{X,Y}(x,y) = \int_{-\infty}^{x} \int_{-\infty}^{y} f_{X,Y}(u,v) \, dv \, du = \left( \int_{-\infty}^{x} f_X(u) \, du \right) \left( \int_{-\infty}^{y} f_Y(v) \, dv \right) = F_X(x) F_Y(y)
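The factorization can also be checked numerically for a concrete pair of independent random variables. As an illustration (our own, not in the manual), take X and Y independent exponential (1) and compare the double integral of f_X(u) f_Y(v) with the product F_X(x) F_Y(y):

```python
import math

# Numerically verify F_{X,Y}(x,y) = F_X(x) F_Y(y) for independent
# exponential(1) X and Y: integrate f_X(u) f_Y(v) over [0,x] x [0,y]
# with the midpoint rule.
def joint_cdf(x, y, n=400):
    hx, hy = x / n, y / n
    total = 0.0
    for i in range(n):
        u = (i + 0.5) * hx
        for j in range(n):
            v = (j + 0.5) * hy
            total += math.exp(-u) * math.exp(-v) * hx * hy
    return total

x, y = 2.0, 3.0
approx = joint_cdf(x, y)
product = (1 - math.exp(-x)) * (1 - math.exp(-y))  # F_X(x) F_Y(y)
print(approx, product)
```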

Problem 5.8.6
Random variables X and Y have joint PDF

f_{X,Y}(x,y) = \begin{cases} \lambda^2 e^{-\lambda y} & 0 \le x \le y \\ 0 & \text{otherwise} \end{cases}


For W = Y - X we can find f_W(w) by first finding F_W(w), integrating the joint PDF over the strip X < Y < X + w indicated in the original figure, then taking the derivative with respect to w. Since Y \ge X, W = Y - X is nonnegative. Hence F_W(w) = 0 for w < 0. For w \ge 0,

F_W(w) = 1 - P[W > w] = 1 - P[Y > X + w]
= 1 - \int_0^{\infty} \int_{x+w}^{\infty} \lambda^2 e^{-\lambda y} \, dy \, dx
= 1 - \int_0^{\infty} \lambda e^{-\lambda(x+w)} \, dx
= 1 - e^{-\lambda w}

The complete expressions for the CDF of W and the corresponding PDF are

F_W(w) = \begin{cases} 0 & w < 0 \\ 1 - e^{-\lambda w} & w \ge 0 \end{cases}
\qquad
f_W(w) = \begin{cases} 0 & w < 0 \\ \lambda e^{-\lambda w} & w \ge 0 \end{cases}

That is, W is an exponential (\lambda) random variable.
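This result can be checked by simulation. One construction (ours, not the manual's): if E1 and E2 are iid exponential (\lambda), then X = E1, Y = E1 + E2 has exactly the joint PDF \lambda^2 e^{-\lambda y} on 0 \le x \le y, so W = Y - X should again be exponential (\lambda):

```python
import math
import random

random.seed(7)

# With lam = 1, the pair X = E1, Y = E1 + E2 (E1, E2 iid exponential)
# has joint PDF lam^2 * exp(-lam*y) on 0 <= x <= y.
lam = 1.0
n = 100_000
ws = []
for _ in range(n):
    e1 = random.expovariate(lam)
    e2 = random.expovariate(lam)
    x, y = e1, e1 + e2
    ws.append(y - x)                    # W = Y - X

mean_w = sum(ws) / n                    # exponential(lam) mean is 1/lam
tail_w = sum(w > 1.0 for w in ws) / n   # P[W > 1] = exp(-lam)
print(mean_w, tail_w)
```

With \lambda = 1 the sample mean is near 1 and the empirical tail P[W > 1] is near e^{-1} \approx 0.368, consistent with W being exponential.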

Problem 5.8.7
(a) To find if W and X are independent, we must be able to factor the joint density function f_{X,W}(x,w) into the product f_X(x) f_W(w) of marginal density functions. To verify this, we must find the joint PDF of X and W. First we find the joint CDF:

F_{X,W}(x,w) = P[X \le x, W \le w] = P[X \le x, Y - X \le w] = P[X \le x, Y \le X + w]

Since Y \ge X, the CDF of W satisfies F_{X,W}(x,w) = P[X \le x, X \le Y \le X + w]. Thus, for x \ge 0 and w \ge 0, integrating over the region \{X \le x\} \cap \{X \le Y \le X + w\} shown in the original figure,

F_{X,W}(x,w) = \int_0^x \int_{x'}^{x'+w} \lambda^2 e^{-\lambda y} \, dy \, dx'
= \int_0^x \left[ -\lambda e^{-\lambda y} \right]_{x'}^{x'+w} dx'
= \int_0^x \left( \lambda e^{-\lambda x'} - \lambda e^{-\lambda(x'+w)} \right) dx'
= (1 - e^{-\lambda x})(1 - e^{-\lambda w})

We see that F_{X,W}(x,w) = F_X(x) F_W(w). Moreover, by applying Theorem 5.2,

f_{X,W}(x,w) = \frac{\partial^2 F_{X,W}(x,w)}{\partial x \, \partial w} = \lambda e^{-\lambda x} \cdot \lambda e^{-\lambda w} = f_X(x) f_W(w)

Since we have our desired factorization, W and X are independent.


(b) Following the same procedure, we find the joint CDF of W and Y:

F_{W,Y}(w,y) = P[W \le w, Y \le y] = P[Y - X \le w, Y \le y] = P[Y \le X + w, Y \le y]

The region of integration corresponding to the event \{Y \le X + w, Y \le y\} depends on whether y < w or y \ge w. Keep in mind that although W = Y - X \le Y, the dummy arguments y and w of f_{W,Y}(w,y) need not obey the same constraints. In any case, we must consider each case separately. For y > w, the region \{Y \le y\} \cap \{Y \le X + w\} (sketched in the original figure) forces the x-integration to split at x = y - w, and the integration is

F_{W,Y}(w,y) = \int_0^{y-w} \int_u^{u+w} \lambda^2 e^{-\lambda v} \, dv \, du + \int_{y-w}^{y} \int_u^{y} \lambda^2 e^{-\lambda v} \, dv \, du
= \int_0^{y-w} \lambda \left[ e^{-\lambda u} - e^{-\lambda(u+w)} \right] du + \int_{y-w}^{y} \lambda \left[ e^{-\lambda u} - e^{-\lambda y} \right] du
= \left[ -e^{-\lambda u} + e^{-\lambda(u+w)} \right]_0^{y-w} + \left[ -e^{-\lambda u} - \lambda u e^{-\lambda y} \right]_{y-w}^{y}
= 1 - e^{-\lambda w} - \lambda w e^{-\lambda y}

For y \le w,

F_{W,Y}(w,y) = \int_0^{y} \int_u^{y} \lambda^2 e^{-\lambda v} \, dv \, du = \int_0^{y} \lambda \left[ e^{-\lambda u} - e^{-\lambda y} \right] du = 1 - (1 + \lambda y) e^{-\lambda y}

The complete expression for the joint CDF is

F_{W,Y}(w,y) = \begin{cases} 1 - e^{-\lambda w} - \lambda w e^{-\lambda y} & 0 \le w \le y \\ 1 - (1 + \lambda y) e^{-\lambda y} & 0 \le y \le w \\ 0 & \text{otherwise} \end{cases}


Applying Theorem 5.2 yields

f_{W,Y}(w,y) = \frac{\partial^2 F_{W,Y}(w,y)}{\partial w \, \partial y} = \begin{cases} \lambda^2 e^{-\lambda y} & 0 \le w \le y \\ 0 & \text{otherwise} \end{cases}

The joint PDF f_{W,Y}(w,y) doesn't factor, and thus W and Y are dependent.
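Both conclusions are easy to see empirically. Reusing the construction X = E1, Y = E1 + E2, W = E2 from the sketch above (our own construction, which matches this joint PDF), the sample correlation of X and W should be near zero while W and Y should be clearly correlated (the exact value for this model works out to 1/\sqrt{2}):

```python
import math
import random

random.seed(3)

# Sample (X, Y) with joint PDF lam^2 exp(-lam*y), 0 <= x <= y, via
# X = E1, Y = E1 + E2 with E1, E2 iid exponential(lam); then W = Y - X.
lam, n = 1.0, 100_000
xs, ys, ws = [], [], []
for _ in range(n):
    e1, e2 = random.expovariate(lam), random.expovariate(lam)
    xs.append(e1)
    ys.append(e1 + e2)
    ws.append(e2)

def corr(a, b):
    """Sample correlation coefficient of two equal-length lists."""
    m = len(a)
    ma, mb = sum(a) / m, sum(b) / m
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b)) / m
    va = sum((u - ma) ** 2 for u in a) / m
    vb = sum((v - mb) ** 2 for v in b) / m
    return cov / math.sqrt(va * vb)

print(corr(xs, ws), corr(ws, ys))  # near 0 and near 1/sqrt(2)
```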
Problem 5.8.8
We need to define the events A = \{U \le u\} and B = \{V \le v\}. In this case,

F_{U,V}(u,v) = P[AB] = P[B] - P[A^c B] = P[V \le v] - P[U > u, V \le v]

Note that U = \min(X,Y) > u if and only if X > u and Y > u. In the same way, since V = \max(X,Y), V \le v if and only if X \le v and Y \le v. Thus, for u \le v,

P[U > u, V \le v] = P[X > u, Y > u, X \le v, Y \le v] = P[u < X \le v, u < Y \le v]

Thus, the joint CDF of U and V satisfies

F_{U,V}(u,v) = P[V \le v] - P[U > u, V \le v] = P[X \le v, Y \le v] - P[u < X \le v, u < Y \le v]

Since X and Y are independent random variables,

F_{U,V}(u,v) = P[X \le v] P[Y \le v] - P[u < X \le v] P[u < Y \le v]
= F_X(v) F_Y(v) - (F_X(v) - F_X(u))(F_Y(v) - F_Y(u))
= F_X(v) F_Y(u) + F_X(u) F_Y(v) - F_X(u) F_Y(u)

The joint PDF is

f_{U,V}(u,v) = \frac{\partial^2 F_{U,V}(u,v)}{\partial u \, \partial v} = \frac{\partial}{\partial u} \left[ f_X(v) F_Y(u) + F_X(u) f_Y(v) \right] = f_X(v) f_Y(u) + f_X(u) f_Y(v)
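For a concrete check (an illustration of ours, not in the manual), take X and Y iid Uniform(0,1). Then for u \le v the derived joint CDF reduces to F_{U,V}(u,v) = 2uv - u^2, which we can compare against an empirical frequency:

```python
import random

random.seed(5)

# For X, Y iid Uniform(0,1) and u <= v, the derived joint CDF gives
# F_{U,V}(u,v) = F_X(v)F_Y(u) + F_X(u)F_Y(v) - F_X(u)F_Y(u) = 2uv - u^2.
u, v = 0.3, 0.7
n = 200_000
hits = 0
for _ in range(n):
    x, y = random.random(), random.random()
    lo, hi = min(x, y), max(x, y)
    if lo <= u and hi <= v:            # event {U <= u, V <= v}
        hits += 1

empirical = hits / n
formula = 2 * u * v - u ** 2           # 0.33 for u = 0.3, v = 0.7
print(empirical, formula)
```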

Problem 5.9.1

f_{X,Y}(x,y) = c e^{-(x^2/8 + y^2/18)}

The omission of any limits for the PDF indicates that it is defined over all x and y. We know that f_{X,Y}(x,y) is in the form of the bivariate Gaussian distribution, so we look to Definition 5.10 and attempt to find values for \sigma_X, \sigma_Y, E[X], E[Y] and \rho.


(a) First, we know that the constant is

c = \frac{1}{2\pi \sigma_X \sigma_Y \sqrt{1 - \rho^2}}

Because the exponent of f_{X,Y}(x,y) doesn't contain any cross terms, we know that \rho must be zero, and we are left to solve the following for E[X], E[Y], \sigma_X and \sigma_Y:

\frac{1}{2} \left( \frac{x - E[X]}{\sigma_X} \right)^2 = \frac{x^2}{8}
\qquad
\frac{1}{2} \left( \frac{y - E[Y]}{\sigma_Y} \right)^2 = \frac{y^2}{18}

From which we can conclude that

E[X] = E[Y] = 0 \qquad \sigma_X = \sqrt{8/2} = 2 \qquad \sigma_Y = \sqrt{18/2} = 3

Putting all the pieces together, we find that c = \frac{1}{2\pi(2)(3)} = \frac{1}{12\pi}.

(b) Since \rho = 0, we also find that X and Y are independent.
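The value of c can be confirmed numerically: since the exponent separates, the normalization integral is a product of two one-dimensional Gaussian integrals. A short check (ours) with the midpoint rule:

```python
import math

# Check that c = 1/(12*pi) normalizes f(x,y) = c*exp(-(x^2/8 + y^2/18)).
# The integrand separates, so integrate each factor over a wide interval.
def gauss_integral(denom, lim=40.0, n=4000):
    """Midpoint-rule approximation of the integral of exp(-x^2/denom)."""
    h = 2 * lim / n
    return sum(math.exp(-((-lim + (i + 0.5) * h) ** 2) / denom) * h
               for i in range(n))

total = gauss_integral(8.0) * gauss_integral(18.0) / (12 * math.pi)
print(total)  # should be very close to 1
```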


Problem 5.9.2

f_{X,Y}(x,y) = c e^{-(2x^2 - 4xy + 4y^2)}

Proceeding as in Problem 5.9.1, we attempt to find values for \sigma_X, \sigma_Y, E[X], E[Y] and \rho.

(a) First, we try to solve the following equations:

\left( \frac{x - E[X]}{\sigma_X} \right)^2 = 4(1 - \rho^2) x^2
\qquad
\left( \frac{y - E[Y]}{\sigma_Y} \right)^2 = 8(1 - \rho^2) y^2
\qquad
\frac{2\rho}{\sigma_X \sigma_Y} = 8(1 - \rho^2)

The first two equations yield E[X] = E[Y] = 0.

(b) To find the correlation coefficient \rho, we observe that

\sigma_X = \frac{1}{\sqrt{4(1 - \rho^2)}} \qquad \sigma_Y = \frac{1}{\sqrt{8(1 - \rho^2)}}

Using \sigma_X and \sigma_Y in the third equation yields \rho = 1/\sqrt{2}.


(c) Since \rho = 1/\sqrt{2}, now we can solve for \sigma_X and \sigma_Y:

\sigma_X = \frac{1}{\sqrt{4(1 - \rho^2)}} = \frac{1}{\sqrt{2}} \qquad \sigma_Y = \frac{1}{\sqrt{8(1 - \rho^2)}} = \frac{1}{2}

(d) From here we can solve for c:

c = \frac{1}{2\pi \sigma_X \sigma_Y \sqrt{1 - \rho^2}} = \frac{2}{\pi}

(e) X and Y are dependent because \rho \ne 0.
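The constant c = 2/\pi can be double-checked without any of the \sigma, \rho bookkeeping: for a positive definite quadratic form with matrix [[2,-2],[-2,4]], the integral of e^{-(2x^2-4xy+4y^2)} equals \pi/\sqrt{\det} = \pi/2, so c must be 2/\pi. A brute-force numeric check (our illustration):

```python
import math

# Integrate exp(-(2x^2 - 4xy + 4y^2)) over a wide square; the result
# should be pi / sqrt(det [[2,-2],[-2,4]]) = pi/2, confirming c = 2/pi.
lim, n = 6.0, 600
h = 2 * lim / n
total = 0.0
for i in range(n):
    x = -lim + (i + 0.5) * h
    for j in range(n):
        y = -lim + (j + 0.5) * h
        total += math.exp(-(2 * x * x - 4 * x * y + 4 * y * y)) * h * h

print(total, math.pi / 2)
```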


Problem 5.9.3
From the problem statement, we learn that

\mu_X = \mu_Y = 0 \qquad \sigma_X^2 = \sigma_Y^2 = 1

From Theorem 5.18, the conditional expectation of Y given X is

E[Y|X] = \tilde{\mu}_Y(X) = \mu_Y + \rho \frac{\sigma_Y}{\sigma_X}(X - \mu_X) = \rho X

In the problem statement, we learn that E[Y|X] = X/2. Hence \rho = 1/2. From Definition 5.10, the joint PDF is

f_{X,Y}(x,y) = \frac{1}{\sqrt{3\pi^2}} e^{-2(x^2 - xy + y^2)/3}
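Since \mu_X = \mu_Y = 0 and \sigma_X = \sigma_Y = 1, the relation E[Y|X] = X/2 says the least-squares slope of Y on X equals \rho = 1/2. A quick simulation check (ours), building the bivariate Gaussian from independent standard normals:

```python
import math
import random

random.seed(11)

# Bivariate Gaussian with zero means, unit variances, rho = 1/2, built
# from independent standard normals: Y = rho*X + sqrt(1 - rho^2)*Z.
# E[Y|X] = X/2 means the regression slope of Y on X is 1/2.
rho, n = 0.5, 200_000
sxy = sxx = 0.0
for _ in range(n):
    x = random.gauss(0, 1)
    z = random.gauss(0, 1)
    y = rho * x + math.sqrt(1 - rho * rho) * z
    sxy += x * y
    sxx += x * x

slope = sxy / sxx                  # estimates rho = 1/2
print(slope)
```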

Problem 5.9.4
The given joint PDF is

f_{X,Y}(x,y) = d e^{-(a^2 x^2 + bxy + c^2 y^2)}

In order to be an example of the bivariate Gaussian PDF given in Definition 5.10, we must have

a^2 = \frac{1}{2\sigma_X^2 (1 - \rho^2)} \qquad c^2 = \frac{1}{2\sigma_Y^2 (1 - \rho^2)} \qquad b = \frac{-\rho}{\sigma_X \sigma_Y (1 - \rho^2)} \qquad d = \frac{1}{2\pi \sigma_X \sigma_Y \sqrt{1 - \rho^2}}

We can solve for \sigma_X and \sigma_Y, yielding

\sigma_X = \frac{1}{a\sqrt{2(1 - \rho^2)}} \qquad \sigma_Y = \frac{1}{c\sqrt{2(1 - \rho^2)}}


Thus,

b = \frac{-\rho}{\sigma_X \sigma_Y (1 - \rho^2)} = -2ac\rho

Hence,

\rho = \frac{-b}{2ac}

This implies

d^2 = \frac{1}{4\pi^2 \sigma_X^2 \sigma_Y^2 (1 - \rho^2)} = (1 - \rho^2)\frac{a^2 c^2}{\pi^2} = \frac{a^2 c^2 - b^2/4}{\pi^2}

Since |\rho| \le 1, we see that |b| \le 2ac. Further, for any choice of a, b and c that meets this constraint, choosing d = \sqrt{a^2 c^2 - b^2/4}\,/\,\pi yields a valid PDF.
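As a spot check of the formula for d (our illustration), take a = c = 1 and b = 1, so d = \sqrt{3}/(2\pi). Numerically integrating e^{-(x^2+xy+y^2)} should then give 1/d:

```python
import math

# Check d = sqrt(a^2 c^2 - b^2/4) / pi for a = c = 1, b = 1:
# the integral of exp(-(x^2 + xy + y^2)) should equal 1/d.
a, b, c = 1.0, 1.0, 1.0
d = math.sqrt(a**2 * c**2 - b**2 / 4) / math.pi

lim, n = 8.0, 800
h = 2 * lim / n
total = 0.0
for i in range(n):
    x = -lim + (i + 0.5) * h
    for j in range(n):
        y = -lim + (j + 0.5) * h
        total += math.exp(-(a**2 * x * x + b * x * y + c**2 * y * y)) * h * h

print(total, 1 / d)
```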
Problem 5.9.5
From Equation (5.5), we can write the bivariate Gaussian PDF as

f_{X,Y}(x,y) = \frac{1}{\sigma_X \sqrt{2\pi}} e^{-(x - \mu_X)^2 / 2\sigma_X^2} \cdot \frac{1}{\tilde{\sigma}_Y \sqrt{2\pi}} e^{-(y - \tilde{\mu}_Y(x))^2 / 2\tilde{\sigma}_Y^2}

where

\tilde{\mu}_Y(x) = \mu_Y + \rho \frac{\sigma_Y}{\sigma_X}(x - \mu_X) \qquad \tilde{\sigma}_Y = \sigma_Y \sqrt{1 - \rho^2}

However, the definitions of \tilde{\mu}_Y(x) and \tilde{\sigma}_Y are not particularly important for this exercise. When we integrate the joint PDF over all x and y, we obtain

\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f_{X,Y}(x,y) \, dx \, dy = \int_{-\infty}^{\infty} \frac{1}{\sigma_X \sqrt{2\pi}} e^{-(x - \mu_X)^2 / 2\sigma_X^2} \underbrace{\int_{-\infty}^{\infty} \frac{1}{\tilde{\sigma}_Y \sqrt{2\pi}} e^{-(y - \tilde{\mu}_Y(x))^2 / 2\tilde{\sigma}_Y^2} \, dy}_{1} \, dx

The marked integral equals 1 because for each value of x, it is the integral of a Gaussian PDF of one variable over all possible values. In fact, it is the integral of the conditional PDF f_{Y|X}(y|x) over all possible y. To complete the proof, we see that

\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f_{X,Y}(x,y) \, dx \, dy = \int_{-\infty}^{\infty} \frac{1}{\sigma_X \sqrt{2\pi}} e^{-(x - \mu_X)^2 / 2\sigma_X^2} \, dx = 1

since the remaining integral is the integral of the marginal Gaussian PDF f_X(x) over all possible x.


Problem 5.10.1
The mean value of a sum of random variables is always the sum of their individual means:

E[Y] = \sum_{i=1}^{n} E[X_i] = 0

The variance of any sum of random variables can be expressed in terms of the individual variances and covariances. Since E[Y] is zero, \text{Var}[Y] = E[Y^2]. Thus,

\text{Var}[Y] = E\left[ \left( \sum_{i=1}^{n} X_i \right)^2 \right] = E\left[ \sum_{i=1}^{n} \sum_{j=1}^{n} X_i X_j \right] = \sum_{i=1}^{n} E[X_i^2] + \sum_{i=1}^{n} \sum_{j \ne i} E[X_i X_j]

Since E[X_i] = 0, E[X_i^2] = \text{Var}[X_i] = 1, and for i \ne j,

E[X_i X_j] = \text{Cov}[X_i, X_j] = \rho

Thus, \text{Var}[Y] = n + n(n-1)\rho.
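One way to see this numerically (our construction, not the manual's) is to build variables with unit variance and pairwise covariance \rho from a shared factor, X_i = \sqrt{\rho} Z_0 + \sqrt{1-\rho} Z_i, and measure the variance of their sum:

```python
import math
import random

random.seed(13)

# X_i = sqrt(rho)*Z0 + sqrt(1-rho)*Z_i with iid standard normals gives
# Var[X_i] = 1 and Cov[X_i, X_j] = rho for i != j, so
# Var[Y] should be n + n(n-1)*rho.
rho, n_vars, trials = 0.3, 5, 100_000
ys = []
for _ in range(trials):
    z0 = random.gauss(0, 1)
    y = sum(math.sqrt(rho) * z0 + math.sqrt(1 - rho) * random.gauss(0, 1)
            for _ in range(n_vars))
    ys.append(y)

mean_y = sum(ys) / trials
var_y = sum((y - mean_y) ** 2 for y in ys) / trials
print(var_y, n_vars + n_vars * (n_vars - 1) * rho)  # both near 11.0
```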

Problem 5.10.2
The best approach to this problem is to find the complementary CDF of W. Since all X_i are iid with PDF f_X(x) and CDF F_X(x),

P[W > w] = P[\min\{X_1, \ldots, X_n\} > w]
= P[X_1 > w] P[X_2 > w] \cdots P[X_n > w]
= [1 - F_X(w)]^n

The CDF is thus

F_W(w) = 1 - P[W > w] = 1 - [1 - F_X(w)]^n

The PDF f_W(w) can be found by taking the derivative with respect to w:

f_W(w) = n[1 - F_X(w)]^{n-1} f_X(w)

Problem 5.10.3
In this problem, W = \max(X_1, \ldots, X_n) where all X_i are independent and identically distributed. The CDF of W is

F_W(w) = P[\max\{X_1, \ldots, X_n\} \le w] = P[X_1 \le w] P[X_2 \le w] \cdots P[X_n \le w] = [F_X(w)]^n

And the PDF of W can be found by taking the derivative with respect to w:

f_W(w) = n[F_X(w)]^{n-1} f_X(w)
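Both order-statistic CDFs are easy to test empirically. For X_i iid Uniform(0,1) they become P[\min \le w] = 1 - (1-w)^n and P[\max \le w] = w^n; a short simulation (ours) with n = 3 and w = 0.5:

```python
import random

random.seed(17)

# For n iid Uniform(0,1) variables, the derived CDFs give
# P[min <= w] = 1 - (1 - w)^n  and  P[max <= w] = w^n.
n, w, trials = 3, 0.5, 200_000
min_hits = max_hits = 0
for _ in range(trials):
    xs = [random.random() for _ in range(n)]
    min_hits += min(xs) <= w
    max_hits += max(xs) <= w

print(min_hits / trials, 1 - (1 - w) ** n)   # near 0.875
print(max_hits / trials, w ** n)             # near 0.125
```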


Problem 5.10.4
Let A denote the event X_n = \max(X_1, \ldots, X_n). We can find P[A] by conditioning on the value of X_n:

P[A] = P[X_1 \le X_n, X_2 \le X_n, \ldots, X_{n-1} \le X_n]
= \int_{-\infty}^{\infty} P[X_1 < X_n, X_2 < X_n, \ldots, X_{n-1} < X_n \mid X_n = x] f_{X_n}(x) \, dx
= \int_{-\infty}^{\infty} P[X_1 < x, X_2 < x, \ldots, X_{n-1} < x] f_X(x) \, dx

Since X_1, \ldots, X_{n-1} are iid,

P[A] = \int_{-\infty}^{\infty} P[X_1 \le x] P[X_2 \le x] \cdots P[X_{n-1} \le x] f_X(x) \, dx
= \int_{-\infty}^{\infty} [F_X(x)]^{n-1} f_X(x) \, dx
= \frac{1}{n} \left[ F_X(x) \right]^n \Big|_{-\infty}^{\infty} = \frac{1}{n}(1 - 0) = \frac{1}{n}

Since the X_i are identically distributed, symmetry suggests that X_n is as likely as any of the other X_i to be the largest, so P[A] = 1/n should not be surprising.
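The 1/n answer is simple to confirm by simulation (our illustration) for any continuous distribution; with uniforms and n = 4:

```python
import random

random.seed(19)

# Estimate P[X_n = max(X_1, ..., X_n)] for n = 4 iid continuous
# random variables; the derivation gives exactly 1/n.
n, trials = 4, 100_000
hits = 0
for _ in range(trials):
    xs = [random.random() for _ in range(n)]
    hits += xs[-1] == max(xs)      # ties occur with probability 0

print(hits / trials)  # near 1/4
```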