(b) To find the marginal CDF of X, F_X(x), we simply evaluate the joint CDF at y = ∞.

F_X(x) = F_{X,Y}(x, ∞) =
    1 − e^{−x}   x ≥ 0
    0            otherwise

(c) Likewise, for the marginal CDF of Y we evaluate the joint CDF at x = ∞.

F_Y(y) = F_{X,Y}(∞, y) =
    1 − e^{−y}   y ≥ 0
    0            otherwise
Problem 5.1.2

(a) Because the probability that any random variable is less than −∞ is zero, we have

F_{X,Y}(x, −∞) = P[X ≤ x, Y ≤ −∞] ≤ P[Y ≤ −∞] = 0

(b) The probability that any random variable is less than infinity is always one.

F_{X,Y}(x, ∞) = P[X ≤ x, Y ≤ ∞] = P[X ≤ x] = F_X(x)

(c) Although P[Y ≤ ∞] = 1, we still evaluate the joint CDF with the event written out:

F_{X,Y}(∞, y) = P[X ≤ ∞, Y ≤ y] = P[Y ≤ y] = F_Y(y)

(d) Part (d) follows the same logic as that of part (a).

F_{X,Y}(−∞, y) = P[X ≤ −∞, Y ≤ y] ≤ P[X ≤ −∞] = 0
Problem 5.1.3

We can write the probability of the union of the two events {x₁ ≤ X ≤ x₂} and {y₁ ≤ Y ≤ y₂} as

P[{x₁ ≤ X ≤ x₂} ∪ {y₁ ≤ Y ≤ y₂}] = P[x₁ ≤ X ≤ x₂] + P[y₁ ≤ Y ≤ y₂] − P[x₁ ≤ X ≤ x₂, y₁ ≤ Y ≤ y₂]

By Theorem 5.3,

P[x₁ < X ≤ x₂, y₁ < Y ≤ y₂] ≥ 0

(Figure: the rectangle x₁ < x ≤ x₂, y₁ < y ≤ y₂ in the x, y plane.)

Writing P[x₁ < X ≤ x₂, y₁ < Y ≤ y₂] in terms of the joint CDF,

P[x₁ < X ≤ x₂, y₁ < Y ≤ y₂] = F_{X,Y}(x₂, y₂) − F_{X,Y}(x₂, y₁) − F_{X,Y}(x₁, y₂) + F_{X,Y}(x₁, y₁)
Problem 5.1.4

The given function is

F_{X,Y}(x, y) =
    1 − e^{−(x+y)}   x, y ≥ 0
    0                otherwise

First we find the marginal CDFs. Letting y → ∞ gives F_X(x) = F_{X,Y}(x, ∞) = 1 for x ≥ 0, and similarly F_Y(y) = F_{X,Y}(∞, y) = 1 for y ≥ 0. That is,

F_X(x) =
    1   x ≥ 0
    0   otherwise

F_Y(y) =
    1   y ≥ 0
    0   otherwise

These marginals imply that P[X > x] = 0 and P[Y > y] = 0 for all x, y ≥ 0, so that

P[{X > x} ∪ {Y > y}] ≤ P[X > x] + P[Y > y] = 0

However,

P[{X > x} ∪ {Y > y}] = 1 − P[X ≤ x, Y ≤ y] = 1 − (1 − e^{−(x+y)}) = e^{−(x+y)}

Thus, we have the contradiction that e^{−(x+y)} ≤ 0 for all x, y ≥ 0. We can conclude that the given function is not a valid CDF.
Problem 5.2.1

(a) The joint PDF of X and Y is

f_{X,Y}(x, y) =
    c   x + y ≤ 1, x, y ≥ 0
    0   otherwise

(Figure: the triangular region bounded by the axes and the line Y + X = 1.)

To find the constant c we integrate over the region shown. This gives

∫_0^1 ∫_0^{1−x} c dy dx = [cx − cx²/2]_0^1 = c/2 = 1

Therefore c = 2.

(b) To find P[X ≤ Y], we integrate the joint PDF over the part of the triangle above the line X = Y:

P[X ≤ Y] = ∫_0^{1/2} ∫_x^{1−x} 2 dy dx = ∫_0^{1/2} (2 − 4x) dx = 1/2

(c) Similarly,

P[X + Y ≤ 1/2] = ∫_0^{1/2} ∫_0^{1/2−x} 2 dy dx = ∫_0^{1/2} (1 − 2x) dx = 1/2 − 1/4 = 1/4
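The three answers above can be sanity-checked numerically. The following sketch (a Monte Carlo estimate, not part of the original solution) samples (X, Y) uniformly on the triangle by rejection from the unit square, which matches the constant joint PDF f(x, y) = 2:

```python
import random

random.seed(7)

# Rejection sampling: (X, Y) uniform on the triangle x + y <= 1, x, y >= 0.
def sample_triangle():
    while True:
        x, y = random.random(), random.random()
        if x + y <= 1:
            return x, y

n = 200_000
count_x_le_y = 0
count_sum_le_half = 0
for _ in range(n):
    x, y = sample_triangle()
    count_x_le_y += (x <= y)
    count_sum_le_half += (x + y <= 0.5)

p_x_le_y = count_x_le_y / n            # analytic answer: 1/2
p_sum_le_half = count_sum_le_half / n  # analytic answer: 1/4
print(p_x_le_y, p_sum_le_half)
```

With 200,000 samples the estimates typically agree with 1/2 and 1/4 to within a few thousandths.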
Problem 5.2.2
Given the joint PDF

f_{X,Y}(x, y) =
    cxy²   0 ≤ x, y ≤ 1
    0      otherwise

(a) To find the constant c, integrate f_{X,Y}(x, y) over all possible values of X and Y to get

1 = ∫_0^1 ∫_0^1 cxy² dx dy = c/6

Therefore c = 6.

(b) The probability P[X ≥ Y] is found by integrating over the region below the line Y = X:

P[X ≥ Y] = ∫_0^1 ∫_0^x 6xy² dy dx = ∫_0^1 2x⁴ dx = 2/5

Likewise, integrating over the region below the curve Y = X²,

P[Y ≤ X²] = ∫_0^1 ∫_0^{x²} 6xy² dy dx = ∫_0^1 2x⁷ dx = 1/4

(c) Here we can choose to either integrate f_{X,Y}(x, y) over the lighter shaded region, which would require the evaluation of two integrals, or we can perform one integral over the darker region by recognizing that

P[min(X, Y) ≤ 1/2] = 1 − P[min(X, Y) > 1/2]
                   = 1 − ∫_{1/2}^1 ∫_{1/2}^1 6xy² dx dy
                   = 1 − ∫_{1/2}^1 (9/4) y² dy = 1 − 21/32 = 11/32

(d) P[max(X, Y) ≤ 3/4] can be found by integrating over the shaded square shown below.

P[max(X, Y) ≤ 3/4] = P[X ≤ 3/4, Y ≤ 3/4]
                   = ∫_0^{3/4} ∫_0^{3/4} 6xy² dx dy
                   = [x²]_0^{3/4} [y³]_0^{3/4} = (3/4)⁵ ≈ 0.237
Problem 5.2.3
The joint PDF is

f_{X,Y}(x, y) =
    6e^{−(2x+3y)}   x ≥ 0, y ≥ 0
    0               otherwise

(a) The probability that X ≥ Y is

P[X ≥ Y] = ∫_0^∞ ∫_0^x 6e^{−(2x+3y)} dy dx
         = ∫_0^∞ 2e^{−2x} [−e^{−3y}]_{y=0}^{y=x} dx
         = ∫_0^∞ (2e^{−2x} − 2e^{−5x}) dx = 1 − 2/5 = 3/5

(b) The probability that X + Y ≤ 1 is

P[X + Y ≤ 1] = ∫_0^1 ∫_0^{1−x} 6e^{−(2x+3y)} dy dx
             = ∫_0^1 2e^{−2x} [−e^{−3y}]_{y=0}^{y=1−x} dx
             = ∫_0^1 2e^{−2x} (1 − e^{−3(1−x)}) dx
             = [−e^{−2x} − 2e^{x−3}]_0^1
             = 1 + 2e^{−3} − 3e^{−2}

(c) The event {min(X, Y) ≥ 1} is the same as {X ≥ 1, Y ≥ 1}. Thus,

P[min(X, Y) ≥ 1] = ∫_1^∞ ∫_1^∞ 6e^{−(2x+3y)} dy dx = e^{−2} · e^{−3} = e^{−(2+3)}
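Since 6e^{−(2x+3y)} factors as (2e^{−2x})(3e^{−3y}), X and Y are independent exponential random variables with rates 2 and 3, which makes a simulation check easy. This sketch is an illustration, not part of the original solution:

```python
import math
import random

random.seed(1)

n = 200_000
hits_x_ge_y = 0
hits_sum_le_1 = 0
for _ in range(n):
    x = random.expovariate(2.0)  # PDF 2e^{-2x}
    y = random.expovariate(3.0)  # PDF 3e^{-3y}
    hits_x_ge_y += (x >= y)
    hits_sum_le_1 += (x + y <= 1)

p_a = hits_x_ge_y / n    # analytic: 3/5
p_b = hits_sum_le_1 / n  # analytic: 1 + 2e^{-3} - 3e^{-2}
exact_b = 1 + 2 * math.exp(-3) - 3 * math.exp(-2)
print(p_a, p_b, exact_b)
```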
Problem 5.2.4
The only difference between this problem and Example 5.2 is that in this problem we must integrate the joint PDF over the regions to find the probabilities. Just as in Example 5.2, there are five cases, and the joint PDF is f_{X,Y}(x, y) = 8xy for 0 ≤ y ≤ x ≤ 1. We will use u and v as dummy variables for x and y.

x < 0 or y < 0

In this case, the region of integration doesn't overlap the region of nonzero probability, and

F_{X,Y}(x, y) = ∫_{−∞}^y ∫_{−∞}^x f_{X,Y}(u, v) du dv = 0

0 < y ≤ x ≤ 1

F_{X,Y}(x, y) = ∫_0^y ∫_v^x 8uv du dv = ∫_0^y 4(x² − v²)v dv = 2x²y² − y⁴

0 < x ≤ y and 0 ≤ x ≤ 1

F_{X,Y}(x, y) = ∫_0^x ∫_0^u 8uv dv du = ∫_0^x 4u³ du = x⁴

0 < y ≤ 1 and x ≥ 1

F_{X,Y}(x, y) = ∫_0^y ∫_v^1 8uv du dv = ∫_0^y 4v(1 − v²) dv = 2y² − y⁴

x ≥ 1 and y ≥ 1

F_{X,Y}(x, y) = ∫_{−∞}^∞ ∫_{−∞}^∞ f_{X,Y}(u, v) du dv = 1

The complete expression for the joint CDF is

F_{X,Y}(x, y) =
    0             x < 0 or y < 0
    2x²y² − y⁴    0 < y ≤ x ≤ 1
    x⁴            0 ≤ x ≤ y, 0 ≤ x ≤ 1
    2y² − y⁴      0 ≤ y ≤ 1, x ≥ 1
    1             x ≥ 1, y ≥ 1
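Each branch of this CDF can be checked against a direct numerical integration of 8uv over the corresponding region. The grid sum below is a rough illustrative check, not part of the original solution:

```python
def cdf_numeric(x, y, steps=400):
    """Midpoint-rule approximation of the integral of 8uv over
    {0 <= v <= u <= 1, u <= x, v <= y}."""
    total = 0.0
    du = dv = 1.0 / steps
    for i in range(steps):
        u = (i + 0.5) * du
        for j in range(steps):
            v = (j + 0.5) * dv
            if v <= u and u <= x and v <= y:
                total += 8 * u * v * du * dv
    return total

def cdf_formula(x, y):
    # Closed form from the solution above.
    if x < 0 or y < 0:
        return 0.0
    x, y = min(x, 1.0), min(y, 1.0)
    if y <= x:
        return 2 * x**2 * y**2 - y**4
    return x**4

# Compare at one point per nontrivial branch.
for (x, y) in [(0.8, 0.5), (0.3, 0.9), (2.0, 0.7), (1.5, 1.5)]:
    print(x, y, cdf_formula(x, y), cdf_numeric(x, y))
```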
Problem 5.3.1

(a) The joint PDF (and the corresponding region of nonzero probability) are

f_{X,Y}(x, y) =
    1/2   −1 ≤ x ≤ y ≤ 1
    0     otherwise

(b)

P[X > 0] = ∫_0^1 ∫_x^1 (1/2) dy dx = ∫_0^1 ((1 − x)/2) dx = 1/4

This result can be deduced by geometry. The shaded triangle of the X, Y plane corresponding to the event X > 0 is 1/4 of the total shaded area.

(c) For x > 1 or x < −1, f_X(x) = 0. For −1 ≤ x ≤ 1,

f_X(x) = ∫_{−∞}^∞ f_{X,Y}(x, y) dy = ∫_x^1 (1/2) dy = (1 − x)/2

The complete expression for the marginal PDF is

f_X(x) =
    (1 − x)/2   −1 ≤ x ≤ 1
    0           otherwise

(d) From the marginal PDF, the expected value of X is

E[X] = ∫_{−∞}^∞ x f_X(x) dx = (1/2) ∫_{−1}^1 x(1 − x) dx = [x²/4 − x³/6]_{−1}^1 = −1/3
Problem 5.3.2
Random variables X and Y have joint PDF

f_{X,Y}(x, y) =
    2   x + y ≤ 1, x, y ≥ 0
    0   otherwise

(Figure: the triangle bounded by the axes and the line Y + X = 1.)

Using the figure, we can find the marginal PDFs by integrating over the appropriate regions.

f_X(x) = ∫_0^{1−x} 2 dy =
    2(1 − x)   0 ≤ x ≤ 1
    0          otherwise

Likewise,

f_Y(y) = ∫_0^{1−y} 2 dx =
    2(1 − y)   0 ≤ y ≤ 1
    0          otherwise
Problem 5.3.3
Random variables X and Y have joint PDF

f_{X,Y}(x, y) =
    1/(πr²)   0 ≤ x² + y² ≤ r²
    0         otherwise

The marginal PDF of X is

f_X(x) = ∫_{−√(r²−x²)}^{√(r²−x²)} (1/(πr²)) dy =
    2√(r² − x²)/(πr²)   −r ≤ x ≤ r
    0                   otherwise

And similarly for f_Y(y),

f_Y(y) = ∫_{−√(r²−y²)}^{√(r²−y²)} (1/(πr²)) dx =
    2√(r² − y²)/(πr²)   −r ≤ y ≤ r
    0                   otherwise
Problem 5.3.4
The joint PDF of X and Y and the region of nonzero probability are

f_{X,Y}(x, y) =
    5x²/2   −1 ≤ x ≤ 1, 0 ≤ y ≤ x²
    0       otherwise

We can find the appropriate marginal PDFs by integrating the joint PDF.

(a) The marginal PDF of X is

f_X(x) = ∫_0^{x²} (5x²/2) dy =
    5x⁴/2   −1 ≤ x ≤ 1
    0       otherwise

(b) For 0 ≤ y ≤ 1, the marginal PDF of Y is found by integrating over the two pieces of the region where x² ≥ y:

f_Y(y) = ∫_{−∞}^∞ f_{X,Y}(x, y) dx = ∫_{−1}^{−√y} (5x²/2) dx + ∫_{√y}^1 (5x²/2) dx = 5(1 − y^{3/2})/3

The complete expression is

f_Y(y) =
    5(1 − y^{3/2})/3   0 ≤ y ≤ 1
    0                  otherwise
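As a quick numerical check (illustrative only, not part of the original solution), both marginal PDFs should integrate to 1:

```python
def fx(x):
    # marginal PDF of X: 5x^4/2 on [-1, 1]
    return 2.5 * x**4 if -1 <= x <= 1 else 0.0

def fy(y):
    # marginal PDF of Y: 5(1 - y^{3/2})/3 on [0, 1]
    return 5 * (1 - y**1.5) / 3 if 0 <= y <= 1 else 0.0

def midpoint(f, a, b, n=100_000):
    # Midpoint-rule quadrature on [a, b].
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

area_x = midpoint(fx, -1, 1)
area_y = midpoint(fy, 0, 1)
print(area_x, area_y)
```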
Problem 5.3.5
In this problem, the joint PDF is

f_{X,Y}(x, y) =
    2|xy|/(πr⁴)   0 ≤ x² + y² ≤ r²
    0             otherwise

(a) The marginal PDF of X is

f_X(x) = ∫_{−√(r²−x²)}^{√(r²−x²)} (2|x|/(πr⁴)) |y| dy

Since |y| is symmetric about the origin, we can simplify the integral to

f_X(x) = (4|x|/(πr⁴)) ∫_0^{√(r²−x²)} y dy = (2|x|/(πr⁴)) [y²]_0^{√(r²−x²)} = 2|x|(r² − x²)/(πr⁴)

Note that for |x| > r, f_X(x) = 0. Hence the complete expression for the PDF of X is

f_X(x) =
    2|x|(r² − x²)/(πr⁴)   −r ≤ x ≤ r
    0                     otherwise

(b) Note that the joint PDF is symmetric in x and y, so that f_Y(y) = f_X(y).
Problem 5.3.6

(a) The joint PDF of X and Y and the region of nonzero probability are

f_{X,Y}(x, y) =
    cy   0 ≤ y ≤ x ≤ 1
    0    otherwise

(b) To find the value of the constant c, we integrate the joint PDF over all x and y.

∫_{−∞}^∞ ∫_{−∞}^∞ f_{X,Y}(x, y) dx dy = ∫_0^1 ∫_0^x cy dy dx = ∫_0^1 (cx²/2) dx = [cx³/6]_0^1 = c/6

Thus c = 6.

(c) We can find the CDF F_X(x) = P[X ≤ x] by integrating the joint PDF over the event X ≤ x. For x < 0, F_X(x) = 0. For x > 1, F_X(x) = 1. For 0 ≤ x ≤ 1,

F_X(x) = ∫∫_{x′ ≤ x} f_{X,Y}(x′, y′) dy′ dx′ = ∫_0^x ∫_0^{x′} 6y′ dy′ dx′ = ∫_0^x 3(x′)² dx′ = x³

The complete expression is

F_X(x) =
    0    x < 0
    x³   0 ≤ x ≤ 1
    1    x > 1

(d) Similarly, we find the CDF of Y by integrating f_{X,Y}(x, y) over the event Y ≤ y. For y < 0, F_Y(y) = 0 and for y > 1, F_Y(y) = 1. For 0 ≤ y ≤ 1,

F_Y(y) = ∫∫_{y′ ≤ y} f_{X,Y}(x′, y′) dx′ dy′ = ∫_0^y ∫_{y′}^1 6y′ dx′ dy′ = ∫_0^y 6y′(1 − y′) dy′ = 3y² − 2y³

F_Y(y) =
    0           y < 0
    3y² − 2y³   0 ≤ y ≤ 1
    1           y > 1

(e) To find P[Y ≤ X/2], we integrate the joint PDF f_{X,Y}(x, y) over the region y ≤ x/2.

P[Y ≤ X/2] = ∫_0^1 ∫_0^{x/2} 6y dy dx = ∫_0^1 [3y²]_0^{x/2} dx = ∫_0^1 (3x²/4) dx = 1/4
Problem 5.4.1

(a) The minimum value of W = max(X, Y) is W = 0, which occurs when X = 0 and Y = 0. The maximum value of W is W = 1, which occurs when X = 1 or Y = 1. The range of W is S_W = {w | 0 ≤ w ≤ 1}.

(b) For 0 ≤ w ≤ 1, the CDF of W is

F_W(w) = P[max(X, Y) ≤ w] = P[X ≤ w, Y ≤ w] = ∫_0^w ∫_0^w f_{X,Y}(x, y) dy dx

With f_{X,Y}(x, y) = x + y on the unit square,

F_W(w) = ∫_0^w ∫_0^w (x + y) dy dx = ∫_0^w [xy + y²/2]_{y=0}^{y=w} dx = ∫_0^w (wx + w²/2) dx = w³

The complete expression is

F_W(w) =
    0    w < 0
    w³   0 ≤ w ≤ 1
    1    otherwise

(c) The PDF of W is found by taking the derivative of the CDF:

f_W(w) = dF_W(w)/dw =
    3w²   0 ≤ w ≤ 1
    0     otherwise
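The result F_W(w) = w³ can be checked by simulation. The sketch below (not part of the original solution) samples from f(x, y) = x + y by rejection, using the bound f ≤ 2 on the unit square:

```python
import random

random.seed(3)

def sample_xy():
    # Rejection sampling from f(x, y) = x + y on the unit square (max value 2).
    while True:
        x, y = random.random(), random.random()
        if 2 * random.random() <= x + y:
            return x, y

n = 100_000
w_values = [max(sample_xy()) for _ in range(n)]

# Empirical CDF of W = max(X, Y) at w = 0.5 should be near w^3 = 0.125.
w = 0.5
empirical = sum(v <= w for v in w_values) / n
print(empirical, w**3)
```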
Problem 5.4.2

(a) Since the joint PDF f_{X,Y}(x, y) is nonzero only for 0 ≤ y ≤ x ≤ 1, we observe that W = Y − X ≤ 0 since Y ≤ X. In addition, the most negative value of W occurs when Y = 0 and X = 1, so that W = −1. Hence the range of W is S_W = {w | −1 ≤ w ≤ 0}.

(b) For w < −1, F_W(w) = 0. For w > 0, F_W(w) = 1. For −1 ≤ w ≤ 0, the CDF of W is found by integrating the joint PDF over the region below the line Y = X + w:

F_W(w) = P[Y − X ≤ w] = ∫_{−w}^1 ∫_0^{x+w} 6y dy dx = ∫_{−w}^1 3(x + w)² dx = (1 + w)³

The complete expression is

F_W(w) =
    0          w < −1
    (1 + w)³   −1 ≤ w ≤ 0
    1          w > 0

(c) By taking the derivative of F_W(w) with respect to w, we obtain the PDF

f_W(w) =
    3(w + 1)²   −1 ≤ w ≤ 0
    0           otherwise
Problem 5.4.3
Random variables X and Y have joint PDF

f_{X,Y}(x, y) =
    2   0 ≤ y ≤ x ≤ 1
    0   otherwise

(a) Since 0 ≤ Y ≤ X, the ratio W = Y/X satisfies 0 ≤ W ≤ 1, so the range of W is S_W = {w | 0 ≤ w ≤ 1}.

(b) For 0 ≤ w < 1, the event {Y/X ≤ w} = {Y ≤ wX} is the triangle below the line Y = wX, which has area w/2. Since the joint PDF equals 2 on its support,

F_W(w) = P[Y/X ≤ w] = P[Y ≤ wX] = w

The complete expression for the CDF is

F_W(w) =
    0   w < 0
    w   0 ≤ w < 1
    1   w ≥ 1

(c) By taking the derivative of the CDF, we find that the PDF of W is

f_W(w) =
    1   0 ≤ w < 1
    0   otherwise

(d) We see that W has a uniform PDF over [0, 1]. Thus E[W] = 1/2.
Problem 5.4.4
Random variables X and Y have joint PDF

f_{X,Y}(x, y) =
    2   0 ≤ y ≤ x ≤ 1
    0   otherwise

(a) Since f_{X,Y}(x, y) = 0 for y > x, we can conclude that Y ≤ X and that W = X/Y ≥ 1. Since Y can be arbitrarily small but positive, W can be arbitrarily large. Hence the range of W is S_W = {w | w ≥ 1}.

(b) For w ≥ 1, the CDF of W is

F_W(w) = P[X/Y ≤ w] = 1 − P[X/Y > w] = 1 − P[Y < X/w] = 1 − 1/w

Note that we have used the fact that P[Y < X/w] equals the area 1/(2w) of the triangle below the line Y = X/w times the constant PDF value 2. The complete expression for the CDF is

F_W(w) =
    0         w < 1
    1 − 1/w   w ≥ 1

(c) By taking the derivative of the CDF, the PDF of W is

f_W(w) = dF_W(w)/dw =
    1/w²   w ≥ 1
    0      otherwise
Problem 5.4.5
The position of the mobile phone is equally likely to be anywhere within a circle of radius 4 km. Let X and Y denote the position of the mobile. Since we are given that the cell has a radius of 4 km, we will measure X and Y in kilometers. Assuming the base station is at the origin of the X, Y plane, the joint PDF of X and Y is

f_{X,Y}(x, y) =
    1/(16π)   x² + y² ≤ 16
    0         otherwise

Since the radial distance of the mobile from the base station is R = √(X² + Y²), the CDF of R is

F_R(r) = P[R ≤ r] = P[X² + Y² ≤ r²]

For 0 ≤ r < 4, converting to polar coordinates,

F_R(r) = ∫_0^{2π} ∫_0^r (r′/(16π)) dr′ dθ′ = r²/16

So

F_R(r) =
    0       r < 0
    r²/16   0 ≤ r < 4
    1       r ≥ 4

Taking the derivative yields the PDF

f_R(r) =
    r/8   0 ≤ r ≤ 4
    0     otherwise

Problem 5.5.1
The joint PDF of X and Y is

f_{X,Y}(x, y) =
    (x + y)/3   0 ≤ x ≤ 1, 0 ≤ y ≤ 2
    0           otherwise
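The CDF F_R(r) = r²/16 can be checked by simulating a uniform position on the disk. This Monte Carlo sketch is an illustration, not part of the original solution:

```python
import math
import random

random.seed(5)

def sample_position():
    # Uniform over the disk of radius 4 (rejection from the bounding square).
    while True:
        x = random.uniform(-4, 4)
        y = random.uniform(-4, 4)
        if x * x + y * y <= 16:
            return x, y

n = 100_000
radii = [math.hypot(*sample_position()) for _ in range(n)]

r = 2.0
empirical = sum(v <= r for v in radii) / n
print(empirical, r**2 / 16)  # F_R(2) = 4/16 = 1/4
```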
Before calculating moments, we first find the marginal PDFs of X and Y. For 0 ≤ x ≤ 1,

f_X(x) = ∫_{−∞}^∞ f_{X,Y}(x, y) dy = ∫_0^2 ((x + y)/3) dy = [xy/3 + y²/6]_{y=0}^{y=2} = (2x + 2)/3

For 0 ≤ y ≤ 2,

f_Y(y) = ∫_{−∞}^∞ f_{X,Y}(x, y) dx = ∫_0^1 ((x + y)/3) dx = [x²/6 + xy/3]_{x=0}^{x=1} = (2y + 1)/6

Complete expressions for the marginal PDFs are

f_X(x) =
    (2x + 2)/3   0 ≤ x ≤ 1
    0            otherwise

f_Y(y) =
    (2y + 1)/6   0 ≤ y ≤ 2
    0            otherwise

(a) The first and second moments of X are

E[X] = ∫ x f_X(x) dx = ∫_0^1 x(2x + 2)/3 dx = [2x³/9 + x²/3]_0^1 = 5/9
E[X²] = ∫_0^1 x²(2x + 2)/3 dx = [x⁴/6 + 2x³/9]_0^1 = 7/18

The variance of X is Var[X] = E[X²] − (E[X])² = 7/18 − (5/9)² = 13/162.

(b) The first and second moments of Y are

E[Y] = ∫ y f_Y(y) dy = ∫_0^2 y(2y + 1)/6 dy = [y³/9 + y²/12]_0^2 = 11/9
E[Y²] = ∫_0^2 y²(2y + 1)/6 dy = [y⁴/12 + y³/18]_0^2 = 16/9

The variance of Y is Var[Y] = E[Y²] − (E[Y])² = 16/9 − (11/9)² = 23/81.

(c) The correlation of X and Y is

E[XY] = ∫∫ xy f_{X,Y}(x, y) dx dy
      = ∫_0^1 ∫_0^2 xy(x + y)/3 dy dx
      = ∫_0^1 [x²y²/6 + xy³/9]_{y=0}^{y=2} dx
      = ∫_0^1 (2x²/3 + 8x/9) dx = [2x³/9 + 4x²/9]_0^1 = 2/3

The covariance is Cov[X, Y] = E[XY] − E[X]E[Y] = 2/3 − (5/9)(11/9) = −1/81.

(d) E[X + Y] = E[X] + E[Y] = 5/9 + 11/9 = 16/9.

(e) The variance of the sum is

Var[X + Y] = Var[X] + Var[Y] + 2 Cov[X, Y] = 13/162 + 23/81 − 2/81 = 55/162
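The moments above can be verified by a direct two-dimensional midpoint-rule integration of the joint PDF; this sketch is a numerical check, not part of the original solution:

```python
def f(x, y):
    # joint PDF (x + y)/3 on 0 <= x <= 1, 0 <= y <= 2
    return (x + y) / 3 if 0 <= x <= 1 and 0 <= y <= 2 else 0.0

steps = 500
dx, dy = 1 / steps, 2 / steps
ex = ey = exy = 0.0
for i in range(steps):
    x = (i + 0.5) * dx
    for j in range(steps):
        y = (j + 0.5) * dy
        w = f(x, y) * dx * dy
        ex += x * w
        ey += y * w
        exy += x * y * w

print(ex, ey, exy)  # analytic: 5/9, 11/9, 2/3
```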
Problem 5.5.2
Random variables X and Y have joint PDF

f_{X,Y}(x, y) =
    4xy   0 ≤ x ≤ 1, 0 ≤ y ≤ 1
    0     otherwise

(a) The first and second moments of X are

E[X] = ∫_0^1 ∫_0^1 4x²y dy dx = ∫_0^1 2x² dx = 2/3
E[X²] = ∫_0^1 ∫_0^1 4x³y dy dx = ∫_0^1 2x³ dx = 1/2

The variance of X is Var[X] = E[X²] − (E[X])² = 1/2 − (2/3)² = 1/18.

(b) By the symmetry of the joint PDF in x and y,

E[Y] = ∫_0^1 ∫_0^1 4xy² dy dx = ∫_0^1 (4x/3) dx = 2/3
E[Y²] = ∫_0^1 ∫_0^1 4xy³ dy dx = ∫_0^1 x dx = 1/2

The variance of Y is Var[Y] = E[Y²] − (E[Y])² = 1/2 − (2/3)² = 1/18.

(c) The correlation is

E[XY] = ∫_0^1 ∫_0^1 4x²y² dy dx = ∫_0^1 (4x²/3) dx = 4/9

The covariance is

Cov[X, Y] = E[XY] − E[X]E[Y] = 4/9 − (2/3)(2/3) = 0

(d)

E[X + Y] = E[X] + E[Y] = 2/3 + 2/3 = 4/3
Problem 5.5.3
The joint PDF of X and Y and the region of nonzero probability are

f_{X,Y}(x, y) =
    5x²/2   −1 ≤ x ≤ 1, 0 ≤ y ≤ x²
    0       otherwise

(a) The expected value of X is

E[X] = ∫_{−1}^1 ∫_0^{x²} x(5x²/2) dy dx = ∫_{−1}^1 (5x⁵/2) dx = [5x⁶/12]_{−1}^1 = 0

(b) The first and second moments of Y are

E[Y] = ∫_{−1}^1 ∫_0^{x²} y(5x²/2) dy dx = ∫_{−1}^1 (5x⁶/4) dx = 5/14
E[Y²] = ∫_{−1}^1 ∫_0^{x²} y²(5x²/2) dy dx = ∫_{−1}^1 (5x⁸/6) dx = 5/27

The variance of Y is

Var[Y] = E[Y²] − (E[Y])² = 5/27 − (5/14)² ≈ 0.0576

(c) Since E[X] = 0, the covariance equals the correlation:

Cov[X, Y] = E[XY] = ∫_{−1}^1 ∫_0^{x²} xy(5x²/2) dy dx = ∫_{−1}^1 (5x⁷/4) dx = 0
Problem 5.5.4
Random variables X and Y have joint PDF

f_{X,Y}(x, y) =
    2   0 ≤ y ≤ x ≤ 1
    0   otherwise

Before finding moments, it is helpful to first find the marginal PDFs. For 0 ≤ x ≤ 1,

f_X(x) = ∫_{−∞}^∞ f_{X,Y}(x, y) dy = ∫_0^x 2 dy = 2x

Likewise, for 0 ≤ y ≤ 1,

f_Y(y) = ∫_{−∞}^∞ f_{X,Y}(x, y) dx = ∫_y^1 2 dx = 2(1 − y)

Also, for y < 0 or y > 1, f_Y(y) = 0. Complete expressions for the marginal PDFs are

f_X(x) =
    2x   0 ≤ x ≤ 1
    0    otherwise

f_Y(y) =
    2(1 − y)   0 ≤ y ≤ 1
    0          otherwise

(a) The first and second moments of X are

E[X] = ∫_0^1 x f_X(x) dx = ∫_0^1 2x² dx = 2/3
E[X²] = ∫_0^1 x² f_X(x) dx = ∫_0^1 2x³ dx = 1/2

The variance of X is Var[X] = E[X²] − (E[X])² = 1/2 − 4/9 = 1/18.

(b) The first and second moments of Y are

E[Y] = ∫_0^1 y f_Y(y) dy = ∫_0^1 2y(1 − y) dy = [y² − 2y³/3]_0^1 = 1/3
E[Y²] = ∫_0^1 y² f_Y(y) dy = ∫_0^1 2y²(1 − y) dy = [2y³/3 − y⁴/2]_0^1 = 1/6

The variance of Y is Var[Y] = E[Y²] − (E[Y])² = 1/6 − 1/9 = 1/18.

(c) The correlation is

E[XY] = ∫_0^1 ∫_0^x 2xy dy dx = ∫_0^1 x³ dx = 1/4
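The moments for this triangular region are easy to confirm by Monte Carlo, since the constant PDF f(x, y) = 2 corresponds to a uniform distribution over 0 ≤ y ≤ x ≤ 1. An illustrative sketch, not part of the original solution:

```python
import random

random.seed(11)

def sample_triangle():
    # Uniform over 0 <= y <= x <= 1, matching the constant PDF f(x, y) = 2.
    while True:
        x, y = random.random(), random.random()
        if y <= x:
            return x, y

n = 200_000
sx = sy = sxy = 0.0
for _ in range(n):
    x, y = sample_triangle()
    sx += x
    sy += y
    sxy += x * y

print(sx / n, sy / n, sxy / n)  # analytic: 2/3, 1/3, 1/4
```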
Problem 5.5.5
Random variables X and Y have joint PDF

f_{X,Y}(x, y) =
    1/2   −1 ≤ x ≤ y ≤ 1
    0     otherwise

The region of possible pairs (x, y) is shown to the right of the joint PDF.

(a)

E[XY] = ∫_{−1}^1 ∫_x^1 (xy/2) dy dx = ∫_{−1}^1 (x(1 − x²)/4) dx = [x²/8 − x⁴/16]_{−1}^1 = 0

(b)

E[e^{X+Y}] = ∫_{−1}^1 ∫_x^1 (1/2) eˣ e^y dy dx
           = (1/2) ∫_{−1}^1 eˣ (e − eˣ) dx
           = [e^{1+x}/2 − e^{2x}/4]_{−1}^1
           = e²/4 + e^{−2}/4 − 1/2
Problem 5.6.1

(a) Given the event A = {X + Y ≤ 1},

P[A] = ∫_0^1 ∫_0^{1−x} 6e^{−(2x+3y)} dy dx = 1 − 3e^{−2} + 2e^{−3}

So then

f_{X,Y|A}(x, y) =
    6e^{−(2x+3y)} / (1 − 3e^{−2} + 2e^{−3})   x + y ≤ 1, x ≥ 0, y ≥ 0
    0                                          otherwise
Problem 5.6.2
The joint PDF of X and Y is

f_{X,Y}(x, y) =
    (x + y)/3   0 ≤ x ≤ 1, 0 ≤ y ≤ 2
    0           otherwise

(a) The probability of the event A = {Y ≤ 1} is

P[A] = P[Y ≤ 1] = ∫∫_{y ≤ 1} f_{X,Y}(x, y) dx dy
     = ∫_0^1 ∫_0^1 ((x + y)/3) dy dx
     = ∫_0^1 [xy/3 + y²/6]_{y=0}^{y=1} dx
     = ∫_0^1 ((2x + 1)/6) dx = [x²/6 + x/6]_0^1 = 1/3

(b) By the definition of the conditional joint PDF,

f_{X,Y|A}(x, y) =
    f_{X,Y}(x, y)/P[A]   (x, y) ∈ A
    0                    otherwise
  =
    x + y   0 ≤ x ≤ 1, 0 ≤ y ≤ 1
    0       otherwise

(c) From f_{X,Y|A}(x, y), we find the conditional marginal PDF f_{X|A}(x). For 0 ≤ x ≤ 1,

f_{X|A}(x) = ∫_{−∞}^∞ f_{X,Y|A}(x, y) dy = ∫_0^1 (x + y) dy = [xy + y²/2]_{y=0}^{y=1} = x + 1/2

The complete expression is

f_{X|A}(x) =
    x + 1/2   0 ≤ x ≤ 1
    0         otherwise

(d) Likewise, for 0 ≤ y ≤ 1, the conditional marginal PDF of Y is

f_{Y|A}(y) = ∫_{−∞}^∞ f_{X,Y|A}(x, y) dx = ∫_0^1 (x + y) dx = [x²/2 + xy]_{x=0}^{x=1} = y + 1/2

f_{Y|A}(y) =
    y + 1/2   0 ≤ y ≤ 1
    0         otherwise
Problem 5.6.3
Random variables X and Y have joint PDF

f_{X,Y}(x, y) =
    (4x + 2y)/3   0 ≤ x ≤ 1, 0 ≤ y ≤ 1
    0             otherwise

(a) The probability of the event A = {Y ≤ 1/2} is

P[A] = P[Y ≤ 1/2] = ∫∫_{y ≤ 1/2} f_{X,Y}(x, y) dy dx
     = ∫_0^1 ∫_0^{1/2} ((4x + 2y)/3) dy dx
     = ∫_0^1 [(4xy + y²)/3]_{y=0}^{y=1/2} dx
     = ∫_0^1 ((2x + 1/4)/3) dx = [x²/3 + x/12]_0^1 = 5/12

(b) The conditional joint PDF is

f_{X,Y|A}(x, y) =
    f_{X,Y}(x, y)/P[A]   (x, y) ∈ A
    0                    otherwise
  =
    (8/5)(2x + y)   0 ≤ x ≤ 1, 0 ≤ y ≤ 1/2
    0               otherwise

(c) For 0 ≤ x ≤ 1, the conditional marginal PDF of X is

f_{X|A}(x) = ∫ f_{X,Y|A}(x, y) dy = (8/5) ∫_0^{1/2} (2x + y) dy = (8/5)[2xy + y²/2]_{y=0}^{y=1/2} = (8x + 1)/5

f_{X|A}(x) =
    (8x + 1)/5   0 ≤ x ≤ 1
    0            otherwise

(d) For 0 ≤ y ≤ 1/2, the conditional marginal PDF of Y is

f_{Y|A}(y) = ∫ f_{X,Y|A}(x, y) dx = (8/5) ∫_0^1 (2x + y) dx = (8/5)[x² + xy]_{x=0}^{x=1} = (8y + 8)/5

f_{Y|A}(y) =
    (8y + 8)/5   0 ≤ y ≤ 1/2
    0            otherwise
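A quick numeric check (illustrative, not part of the original solution): P[A] should equal 5/12, and both conditional marginals should integrate to 1 over their supports:

```python
def joint(x, y):
    # joint PDF (4x + 2y)/3 on the unit square
    return (4 * x + 2 * y) / 3 if 0 <= x <= 1 and 0 <= y <= 1 else 0.0

steps = 1000
d = 1.0 / steps

# P[A] = P[Y <= 1/2] by a 2-D midpoint rule (y midpoints below 1/2 only).
p_a = sum(joint((i + 0.5) * d, (j + 0.5) * d) * d * d
          for i in range(steps) for j in range(steps // 2))
print(p_a)  # analytic: 5/12

# The conditional marginals from parts (c) and (d) each integrate to 1.
area_x = sum((8 * ((i + 0.5) * d) + 1) / 5 * d for i in range(steps))
dy = 0.5 / steps
area_y = sum((8 * ((j + 0.5) * dy) + 8) / 5 * dy for j in range(steps))
print(area_x, area_y)
```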
Problem 5.6.4
The joint PDF of X and Y and the region of nonzero probability are

f_{X,Y}(x, y) =
    5x²/2   −1 ≤ x ≤ 1, 0 ≤ y ≤ x²
    0       otherwise

(a) The event A = {Y ≤ 1/4} has probability

P[A] = 2 ∫_0^{1/2} ∫_0^{x²} (5x²/2) dy dx + 2 ∫_{1/2}^1 ∫_0^{1/4} (5x²/2) dy dx
     = ∫_0^{1/2} 5x⁴ dx + ∫_{1/2}^1 (5x²/4) dx
     = [x⁵]_0^{1/2} + [5x³/12]_{1/2}^1 = 19/48

This implies

f_{X,Y|A}(x, y) =
    f_{X,Y}(x, y)/P[A] = 120x²/19   (x, y) ∈ A
    0                               otherwise

(b) For 0 ≤ y ≤ 1/4, using the symmetry in x,

f_{Y|A}(y) = ∫ f_{X,Y|A}(x, y) dx = 2 ∫_{√y}^1 (120x²/19) dx = (80/19)(1 − y^{3/2})

The complete expression is

f_{Y|A}(y) =
    (80/19)(1 − y^{3/2})   0 ≤ y ≤ 1/4
    0                      otherwise

The conditional expectation of Y is

E[Y|A] = ∫_0^{1/4} y (80/19)(1 − y^{3/2}) dy = (80/19)[y²/2 − 2y^{7/2}/7]_0^{1/4} = 65/532

(c) The conditional marginal PDF of X is f_{X|A}(x) = ∫ f_{X,Y|A}(x, y) dy. However, when we substitute f_{X,Y|A}(x, y), the limits will depend on the value of x. When |x| ≤ 1/2,

f_{X|A}(x) = ∫_0^{x²} (120x²/19) dy = 120x⁴/19

When 1/2 ≤ |x| ≤ 1,

f_{X|A}(x) = ∫_0^{1/4} (120x²/19) dy = 30x²/19

The complete expression is

f_{X|A}(x) =
    30x²/19    −1 ≤ x ≤ −1/2
    120x⁴/19   −1/2 ≤ x ≤ 1/2
    30x²/19    1/2 ≤ x ≤ 1
    0          otherwise

The conditional expectation of X is

E[X|A] = ∫_{−1}^{−1/2} (30x³/19) dx + ∫_{−1/2}^{1/2} (120x⁵/19) dx + ∫_{1/2}^1 (30x³/19) dx = 0

since each integrand is odd over limits that are symmetric about the origin.
Problem 5.7.1
Random variables X and Y have joint PDF

f_{X,Y}(x, y) =
    x + y   0 ≤ x, y ≤ 1
    0       otherwise

(a) The conditional PDF f_{X|Y}(x|y) is defined for all y such that 0 ≤ y ≤ 1.

(b) For 0 ≤ y ≤ 1, the conditional PDF of X given Y = y is

f_{X|Y}(x|y) = f_{X,Y}(x, y)/f_Y(y) = (x + y) / ∫_0^1 (x + y) dx =
    (x + y)/(y + 1/2)   0 ≤ x ≤ 1
    0                   otherwise

(c) The conditional PDF f_{Y|X}(y|x) is defined for all values of x in the interval [0, 1].

(d) For 0 ≤ x ≤ 1, the conditional PDF of Y given X = x is

f_{Y|X}(y|x) = f_{X,Y}(x, y)/f_X(x) = (x + y) / ∫_0^1 (x + y) dy =
    (x + y)/(x + 1/2)   0 ≤ y ≤ 1
    0                   otherwise
Problem 5.7.2
Random variables X and Y have joint PDF

f_{X,Y}(x, y) =
    2   0 ≤ y ≤ x ≤ 1
    0   otherwise

(a) For 0 ≤ y ≤ 1,

f_Y(y) = ∫_{−∞}^∞ f_{X,Y}(x, y) dx = ∫_y^1 2 dx = 2(1 − y)

Also, for y < 0 or y > 1, f_Y(y) = 0. The complete expression for the marginal PDF is

f_Y(y) =
    2(1 − y)   0 ≤ y ≤ 1
    0          otherwise

(b) For 0 ≤ y < 1, the conditional PDF of X given Y = y is

f_{X|Y}(x|y) = f_{X,Y}(x, y)/f_Y(y) =
    1/(1 − y)   y ≤ x ≤ 1
    0           otherwise

(c) The conditional expectation E[X|Y = y] can be calculated as

E[X|Y = y] = ∫ x f_{X|Y}(x|y) dx = ∫_y^1 (x/(1 − y)) dx = [x²/(2(1 − y))]_y^1 = (1 + y)/2

In fact, since we know that the conditional PDF of X is uniform over [y, 1] when Y = y, it wasn't really necessary to perform the calculation.
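The uniform-conditional observation can be confirmed directly by simulation (an illustrative sketch, not part of the original solution):

```python
import random

random.seed(13)

# Given Y = y, X is uniform on [y, 1], so E[X | Y = y] = (1 + y)/2.
y = 0.4
n = 100_000
mean_x = sum(random.uniform(y, 1.0) for _ in range(n)) / n
print(mean_x, (1 + y) / 2)
```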
Problem 5.7.3
Random variables X and Y have joint PDF

f_{X,Y}(x, y) =
    2   0 ≤ y ≤ x ≤ 1
    0   otherwise

(a) For 0 ≤ x ≤ 1, the marginal PDF of X is

f_X(x) = ∫_{−∞}^∞ f_{X,Y}(x, y) dy = ∫_0^x 2 dy = 2x

Note that f_X(x) = 0 for x < 0 or x > 1. The complete expression for the marginal PDF of X is

f_X(x) =
    2x   0 ≤ x ≤ 1
    0    otherwise

(b) The conditional PDF of Y given X = x is

f_{Y|X}(y|x) = f_{X,Y}(x, y)/f_X(x) =
    1/x   0 ≤ y ≤ x
    0     otherwise

(c) Given X = x, Y has a uniform PDF over [0, x] and thus has conditional expected value E[Y|X = x] = x/2. Another way to obtain this result is to calculate ∫_{−∞}^∞ y f_{Y|X}(y|x) dy.
Problem 5.7.4
We are told in the problem statement that if we know r, the number of feet a student sits from the blackboard, then we also know that that student's grade is a Gaussian random variable with mean 80 − r and standard deviation r. This is exactly the conditional PDF

f_{X|R}(x|r) = (1/√(2πr²)) e^{−(x − [80 − r])²/(2r²)}
Problem 5.7.5
Random variables X and Y have joint PDF

f_{X,Y}(x, y) =
    1/2   −1 ≤ x ≤ y ≤ 1
    0     otherwise

(a) For −1 ≤ y ≤ 1, the marginal PDF of Y is

f_Y(y) = ∫_{−∞}^∞ f_{X,Y}(x, y) dx = (1/2) ∫_{−1}^y dx = (y + 1)/2

The complete expression is

f_Y(y) =
    (y + 1)/2   −1 ≤ y ≤ 1
    0           otherwise

(b) The conditional PDF of X given Y = y is

f_{X|Y}(x|y) = f_{X,Y}(x, y)/f_Y(y) =
    1/(1 + y)   −1 ≤ x ≤ y
    0           otherwise

(c) Given Y = y, the conditional PDF of X is uniform over [−1, y]. Hence the conditional expected value is E[X|Y = y] = (y − 1)/2.
Problem 5.7.6
We are given that the joint PDF of X and Y is

f_{X,Y}(x, y) =
    1/(πr²)   0 ≤ x² + y² ≤ r²
    0         otherwise

(a) The marginal PDF of X is

f_X(x) = 2 ∫_0^{√(r²−x²)} (1/(πr²)) dy =
    2√(r² − x²)/(πr²)   −r ≤ x ≤ r
    0                   otherwise

The conditional PDF of Y given X = x is

f_{Y|X}(y|x) = f_{X,Y}(x, y)/f_X(x) =
    1/(2√(r² − x²))   y² ≤ r² − x²
    0                 otherwise

(b) Given X = x, we observe that over the interval [−√(r² − x²), √(r² − x²)], Y has a uniform PDF. Since the conditional PDF f_{Y|X}(y|x) is symmetric about y = 0,

E[Y|X = x] = 0
Problem 5.8.1
X and Y are independent random variables with PDFs

f_X(x) =
    (1/3)e^{−x/3}   x ≥ 0
    0               otherwise

f_Y(y) =
    (1/2)e^{−y/2}   y ≥ 0
    0               otherwise

(a) To calculate P[X > Y], we integrate the joint PDF f_X(x) f_Y(y) over the region x > y:

P[X > Y] = ∫∫_{x > y} f_X(x) f_Y(y) dx dy
         = ∫_0^∞ (1/2)e^{−y/2} ∫_y^∞ (1/3)e^{−x/3} dx dy
         = ∫_0^∞ (1/2)e^{−y/2} e^{−y/3} dy
         = ∫_0^∞ (1/2)e^{−(1/2 + 1/3)y} dy = (1/2)/(1/2 + 1/3) = 3/5

(b) Since X and Y are exponential random variables with parameters λ_X = 1/3 and λ_Y = 1/2, Appendix A tells us that E[X] = 1/λ_X = 3 and E[Y] = 1/λ_Y = 2. Since X and Y are independent, the correlation is E[XY] = E[X]E[Y] = 6.

(c) Since X and Y are independent, Cov[X, Y] = 0.
Problem 5.8.2

(a) Since E[−X₂] = −E[X₂], we can use Theorem 5.8 to write

E[X₁ − X₂] = E[X₁ + (−X₂)] = E[X₁] + E[−X₂] = E[X₁] − E[X₂] = 0

since X₁ and X₂ are identically distributed and therefore have the same expected value.

(b) By Theorem 4.6(f), Var[−X₂] = (−1)² Var[X₂] = Var[X₂]. Since X₁ and X₂ are independent, Theorem 5.15 says that

Var[X₁ − X₂] = Var[X₁ + (−X₂)] = Var[X₁] + Var[−X₂] = 2 Var[X]
Problem 5.8.3
Random variables X₁ and X₂ are independent and identically distributed with the following PDF:

f_X(x) =
    x/2   0 ≤ x ≤ 2
    0     otherwise

(a) Since X₁ and X₂ are identically distributed, they will share the same CDF F_X(x):

F_X(x) = ∫_{−∞}^x f_X(x′) dx′ =
    0      x ≤ 0
    x²/4   0 ≤ x ≤ 2
    1      x ≥ 2

(d) For W = max(X₁, X₂),

F_W(w) = P[max(X₁, X₂) ≤ w] = P[X₁ ≤ w, X₂ ≤ w]

Since X₁ and X₂ are independent,

F_W(w) = P[X₁ ≤ w] P[X₂ ≤ w] = [F_X(w)]² =
    0       w ≤ 0
    w⁴/16   0 ≤ w ≤ 2
    1       w ≥ 2
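Since F_X(x) = x²/4 on [0, 2], the inverse-CDF method gives X = 2√U for U uniform on [0, 1]. The sketch below (illustrative, not part of the original solution) uses this to check the CDF of W = max(X₁, X₂):

```python
import random

random.seed(17)

def sample_x():
    # Inverse-CDF sampling: F_X(x) = x^2/4 on [0, 2] inverts to x = 2*sqrt(u).
    return 2 * random.random() ** 0.5

n = 200_000
w0 = 1.5
w_le = 0
for _ in range(n):
    w = max(sample_x(), sample_x())
    w_le += (w <= w0)

empirical = w_le / n
print(empirical, w0**4 / 16)  # F_W(1.5) = 1.5^4/16 = 0.31640625
```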
Problem 5.8.4
X and Y are independent random variables with PDFs

f_X(x) =
    2x   0 ≤ x ≤ 1
    0    otherwise

f_Y(y) =
    3y²   0 ≤ y ≤ 1
    0     otherwise

For the event A = {X > Y}, this problem asks us to calculate the conditional expectations E[X|A] and E[Y|A]. We will do this using the conditional joint PDF f_{X,Y|A}(x, y). Since X and Y are independent, it is tempting to argue that the event X > Y does not alter the probability model for X and Y. Unfortunately, this is not the case. When we learn that X > Y, it increases the probability that X is large and Y is small. We will see this when we compare the conditional expectations E[X|A] and E[Y|A] to E[X] and E[Y].

(a) We can calculate the unconditional expectations, E[X] and E[Y], using the marginal PDFs f_X(x) and f_Y(y):

E[X] = ∫ x f_X(x) dx = ∫_0^1 2x² dx = 2/3
E[Y] = ∫ y f_Y(y) dy = ∫_0^1 3y³ dy = 3/4

(b) First, we need to calculate the conditional joint PDF f_{X,Y|A}(x, y). The first step is to write down the joint PDF of X and Y:

f_{X,Y}(x, y) = f_X(x) f_Y(y) =
    6xy²   0 ≤ x ≤ 1, 0 ≤ y ≤ 1
    0      otherwise

The event A = {X > Y} has probability

P[A] = ∫∫_{x > y} f_{X,Y}(x, y) dy dx = ∫_0^1 ∫_0^x 6xy² dy dx = ∫_0^1 2x⁴ dx = 2/5

The conditional joint PDF is

f_{X,Y|A}(x, y) =
    f_{X,Y}(x, y)/P[A]   (x, y) ∈ A
    0                    otherwise
  =
    15xy²   0 ≤ y ≤ x ≤ 1
    0       otherwise

The triangular region of nonzero probability is a signal that given A, X and Y are no longer independent. The conditional expected value of X given A is

E[X|A] = ∫∫ x f_{X,Y|A}(x, y) dy dx = 15 ∫_0^1 x² ∫_0^x y² dy dx = 5 ∫_0^1 x⁵ dx = 5/6

The conditional expected value of Y given A is

E[Y|A] = ∫∫ y f_{X,Y|A}(x, y) dy dx = 15 ∫_0^1 x ∫_0^x y³ dy dx = (15/4) ∫_0^1 x⁵ dx = 5/8

We see that E[X|A] > E[X] while E[Y|A] < E[Y]. That is, learning X > Y gives us a clue that X may be larger than usual while Y may be smaller than usual.
Problem 5.8.5
This problem is quite straightforward. From Theorem 5.2, the joint PDF of X and Y is

f_{X,Y}(x, y) = ∂²F_{X,Y}(x, y)/∂x∂y = ∂[f_X(x) F_Y(y)]/∂y = f_X(x) f_Y(y)

Hence the joint CDF factors as well:

F_{X,Y}(x, y) = ∫_{−∞}^x ∫_{−∞}^y f_{X,Y}(u, v) dv du
             = (∫_{−∞}^x f_X(u) du)(∫_{−∞}^y f_Y(v) dv)
             = F_X(x) F_Y(y)
Problem 5.8.6
Random variables X and Y have joint PDF

f_{X,Y}(x, y) =
    λ²e^{−λy}   0 ≤ x ≤ y
    0           otherwise

For W = Y − X we can find f_W(w) by integrating over the region indicated in the figure below (the strip X < Y < X + w) to get F_W(w), then taking the derivative with respect to w. Since Y ≥ X, W = Y − X is nonnegative. Hence F_W(w) = 0 for w < 0. For w ≥ 0,

F_W(w) = 1 − P[W > w] = 1 − P[Y > X + w]
       = 1 − ∫_0^∞ ∫_{x+w}^∞ λ²e^{−λy} dy dx
       = 1 − ∫_0^∞ λe^{−λ(x+w)} dx
       = 1 − e^{−λw}

The complete expressions for the CDF and corresponding PDF are

F_W(w) =
    0             w < 0
    1 − e^{−λw}   w ≥ 0

f_W(w) =
    0           w < 0
    λe^{−λw}    w ≥ 0
Problem 5.8.7

(a) To find if W and X are independent, we must be able to factor the joint density function f_{X,W}(x, w) into the product f_X(x) f_W(w) of marginal density functions. To verify this, we must find the joint PDF of X and W. First we find the joint CDF:

F_{X,W}(x, w) = P[X ≤ x, W ≤ w] = P[X ≤ x, Y − X ≤ w] = P[X ≤ x, Y ≤ X + w]

Since Y ≥ X, the CDF of W satisfies F_{X,W}(x, w) = P[X ≤ x, X ≤ Y ≤ X + w]. Thus, for x ≥ 0 and w ≥ 0,

F_{X,W}(x, w) = ∫_0^x ∫_{x′}^{x′+w} λ²e^{−λy} dy dx′
             = ∫_0^x λ(e^{−λx′} − e^{−λ(x′+w)}) dx′
             = [−e^{−λx′} + e^{−λ(x′+w)}]_0^x
             = (1 − e^{−λx})(1 − e^{−λw})

We see that F_{X,W}(x, w) = F_X(x) F_W(w), so X and W are independent. Moreover, by applying Theorem 5.2,

f_{X,W}(x, w) = ∂²F_{X,W}(x, w)/∂x∂w = λ²e^{−λ(x+w)}

(b) Following the same procedure, we find the joint CDF of Y and W:

F_{W,Y}(w, y) = P[W ≤ w, Y ≤ y] = P[Y − X ≤ w, Y ≤ y] = P[Y ≤ X + w, Y ≤ y]

The region of integration corresponding to the event {Y ≤ X + w, Y ≤ y} depends on whether y < w or y ≥ w. Keep in mind that although W = Y − X ≤ Y, the dummy arguments y and w of f_{W,Y}(w, y) need not obey the same constraints. In any case, we must consider each case separately. For y > w, the region of integration resembles the intersection {Y ≤ y} ∩ {Y ≤ X + w} shown in the figure, and

F_{W,Y}(w, y) = ∫_0^{y−w} ∫_u^{u+w} λ²e^{−λv} dv du + ∫_{y−w}^y ∫_u^y λ²e^{−λv} dv du
             = ∫_0^{y−w} λ(e^{−λu} − e^{−λ(u+w)}) du + ∫_{y−w}^y λ(e^{−λu} − e^{−λy}) du
             = [−e^{−λu} + e^{−λ(u+w)}]_0^{y−w} + [−e^{−λu} − λu e^{−λy}]_{y−w}^y
             = 1 − e^{−λw} − λw e^{−λy}

For y ≤ w,

F_{W,Y}(w, y) = ∫_0^y ∫_u^y λ²e^{−λv} dv du
             = ∫_0^y λ(e^{−λu} − e^{−λy}) du
             = [−e^{−λu} − λu e^{−λy}]_0^y
             = 1 − (1 + λy)e^{−λy}

The complete expression for the joint CDF is

F_{W,Y}(w, y) =
    1 − e^{−λw} − λw e^{−λy}   0 ≤ w ≤ y
    1 − (1 + λy)e^{−λy}        0 ≤ y ≤ w
    0                          otherwise

Applying Theorem 5.2 yields

f_{W,Y}(w, y) = ∂²F_{W,Y}(w, y)/∂w∂y =
    λ²e^{−λy}   0 ≤ w ≤ y
    0           otherwise

The joint PDF f_{W,Y}(w, y) doesn't factor into a product of marginal PDFs, and thus W and Y are dependent.
Problem 5.8.8
We need to define the events A = {U > u} and B = {V ≤ v}. Note that U = min(X, Y) > u if and only if X > u and Y > u. In the same way, since V = max(X, Y), V ≤ v if and only if X ≤ v and Y ≤ v. Thus

P[U > u, V ≤ v] = P[X > u, Y > u, X ≤ v, Y ≤ v] = P[u < X ≤ v, u < Y ≤ v]

The joint CDF of U and V then satisfies

F_{U,V}(u, v) = P[V ≤ v] − P[U > u, V ≤ v] = P[X ≤ v, Y ≤ v] − P[u < X ≤ v, u < Y ≤ v]

The joint PDF follows by differentiation:

f_{U,V}(u, v) = ∂²F_{U,V}(u, v)/∂u∂v
Problem 5.9.1
The joint PDF is

f_{X,Y}(x, y) = c e^{−(x²/8 + y²/18)}

The omission of any limits for the PDF indicates that it is defined over all x and y. We know that f_{X,Y}(x, y) is in the form of the bivariate Gaussian distribution, so we look to Definition 5.10 and attempt to find values for σ_X, σ_Y, E[X], E[Y], and ρ. Because the exponent of f_{X,Y}(x, y) doesn't contain any cross terms, we know that ρ must be zero, and we are left to solve the following for E[X], E[Y], σ_X, and σ_Y:

(x − E[X])²/(2σ_X²) = x²/8,    (y − E[Y])²/(2σ_Y²) = y²/18

From these equations we conclude that E[X] = E[Y] = 0, σ_X = √8/√2 = 2, and σ_Y = √18/√2 = 3. Finally, the constant is

c = 1/(2πσ_Xσ_Y√(1 − ρ²)) = 1/(12π)
Problem 5.9.2
Here the exponent does contain a cross term, so ρ ≠ 0. With E[X] = E[Y] = 0, matching the squared terms of the exponent to Definition 5.10 requires

((x − E[X])/σ_X)² = 4(1 − ρ²)x²,    ((y − E[Y])/σ_Y)² = 8(1 − ρ²)y²

so that

σ_X = 1/√(4(1 − ρ²)),    σ_Y = 1/√(8(1 − ρ²))

Matching the cross term 2ρxy/(σ_Xσ_Y) to the remaining term of the exponent and solving then yields ρ = 1/√2, and hence

σ_X = 1/√2,    σ_Y = 1/2
Problem 5.9.3
From the problem statement, X and Y are jointly Gaussian with

μ_X = μ_Y = 0,    σ_X = σ_Y = 1

For a bivariate Gaussian pair, the conditional expectation of Y given X is

E[Y|X] = μ_Y + ρ(σ_Y/σ_X)(X − μ_X) = ρX

In the problem statement, we learn that E[Y|X] = X/2. Hence ρ = 1/2. From Definition 5.10, the joint PDF is

f_{X,Y}(x, y) = (1/√(3π²)) e^{−2(x² − xy + y²)/3}
Problem 5.9.4
The given joint PDF is

f_{X,Y}(x, y) = d e^{−(a²x² + bxy + c²y²)}

In order to be an example of the bivariate Gaussian PDF given in Definition 5.10, we must have

a² = 1/(2σ_X²(1 − ρ²))
c² = 1/(2σ_Y²(1 − ρ²))
b = −ρ/(σ_Xσ_Y(1 − ρ²))
d = 1/(2πσ_Xσ_Y√(1 − ρ²))

Solving the first two equations gives

σ_X = 1/(a√(2(1 − ρ²))),    σ_Y = 1/(c√(2(1 − ρ²)))

Thus,

b = −ρ/(σ_Xσ_Y(1 − ρ²)) = −2acρ

Hence,

ρ = −b/(2ac)

This implies

d² = 1/(4π²σ_X²σ_Y²(1 − ρ²)) = a²c²(1 − ρ²)/π² = (a²c² − b²/4)/π²

Since |ρ| ≤ 1, we see that |b| ≤ 2ac. Further, for any choice of a, b, and c that meets this constraint, choosing d = √(a²c² − b²/4)/π yields a valid PDF.
Problem 5.9.5
From Equation (5.5), we can write the bivariate Gaussian PDF as

f_{X,Y}(x, y) = (1/(σ_X√(2π))) e^{−(x−μ_X)²/(2σ_X²)} · (1/(σ̃_Y√(2π))) e^{−(y−μ̃_Y(x))²/(2σ̃_Y²)}

where

μ̃_Y(x) = μ_Y + ρ(σ_Y/σ_X)(x − μ_X),    σ̃_Y = σ_Y√(1 − ρ²)

However, the definitions of μ̃_Y(x) and σ̃_Y are not particularly important for this exercise. When we integrate the joint PDF over all x and y, we obtain

∫_{−∞}^∞ ∫_{−∞}^∞ f_{X,Y}(x, y) dx dy = ∫_{−∞}^∞ (1/(σ_X√(2π))) e^{−(x−μ_X)²/(2σ_X²)} [∫_{−∞}^∞ (1/(σ̃_Y√(2π))) e^{−(y−μ̃_Y(x))²/(2σ̃_Y²)} dy] dx

The bracketed integral equals 1 because for each value of x, it is the integral of a Gaussian PDF of one variable over all possible values. In fact, it is the integral of the conditional PDF f_{Y|X}(y|x) over all possible y. To complete the proof, we see that

∫_{−∞}^∞ ∫_{−∞}^∞ f_{X,Y}(x, y) dx dy = ∫_{−∞}^∞ (1/(σ_X√(2π))) e^{−(x−μ_X)²/(2σ_X²)} dx = 1

since the remaining integral is the integral of the marginal Gaussian PDF f_X(x) over all possible x.
Problem 5.10.1
The mean value of a sum of random variables is always the sum of their individual means:

E[Y] = Σ_{i=1}^n E[X_i] = 0

The variance of the sum is

Var[Y] = E[(Σ_{i=1}^n X_i)²] = E[Σ_{i=1}^n Σ_{j=1}^n X_i X_j] = Σ_{i=1}^n E[X_i²] + Σ_{i=1}^n Σ_{j≠i} E[X_i X_j]

Since E[X_i] = 0, we have E[X_i²] = Var[X_i] = 1, and for i ≠ j, E[X_i X_j] = Cov[X_i, X_j]. Thus

Var[Y] = n + Σ_{i=1}^n Σ_{j≠i} Cov[X_i, X_j]
Problem 5.10.2
The best approach to this problem is to find the complementary CDF of W = min(X₁, …, X_n). Since all the X_i are iid with PDF f_X(x) and CDF F_X(x),

P[W > w] = P[X₁ > w, …, X_n > w] = [1 − F_X(w)]ⁿ

so the CDF of W is

F_W(w) = 1 − [1 − F_X(w)]ⁿ

The PDF f_W(w) can be found by taking the derivative with respect to w:

f_W(w) = n[1 − F_X(w)]^{n−1} f_X(w)
Problem 5.10.3
In this problem, W = max(X₁, …, X_n) where all the X_i are independent and identically distributed. The CDF of W is

F_W(w) = P[max{X₁, …, X_n} ≤ w]
       = P[X₁ ≤ w] P[X₂ ≤ w] ⋯ P[X_n ≤ w]
       = [F_X(w)]ⁿ

And the PDF of W, f_W(w), can be found by taking the derivative with respect to w:

f_W(w) = n[F_X(w)]^{n−1} f_X(w)
Problem 5.10.4
Let A denote the event X_n = max(X₁, …, X_n). We can find P[A] by conditioning on the value of X_n:

P[A] = P[X₁ ≤ X_n, X₂ ≤ X_n, …, X_{n−1} ≤ X_n]
     = ∫_{−∞}^∞ P[X₁ < X_n, X₂ < X_n, …, X_{n−1} < X_n | X_n = x] f_{X_n}(x) dx
     = ∫_{−∞}^∞ P[X₁ < x, X₂ < x, …, X_{n−1} < x] f_X(x) dx

Since the X_i are iid,

P[A] = ∫_{−∞}^∞ [F_X(x)]^{n−1} f_X(x) dx = [[F_X(x)]ⁿ/n]_{−∞}^∞ = (1 − 0)/n = 1/n

Not surprisingly, since the X_i are identical, symmetry would suggest that X_n is as likely as any of the other X_i to be the largest. Hence P[A] = 1/n should not be surprising.
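The symmetry argument can be illustrated by simulation (a sketch, not part of the original solution): with n = 5 iid uniform variables, the last one should be the maximum about 1/5 of the time.

```python
import random

random.seed(19)

n_vars = 5
trials = 100_000
count = 0
for _ in range(trials):
    xs = [random.random() for _ in range(n_vars)]
    # Event A: the last variable is the largest of the n iid samples.
    count += (xs[-1] == max(xs))

print(count / trials, 1 / n_vars)  # should be close to 1/5
```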