Chapter 2
Review of Probability
Solutions to Exercises
1. The probability distribution of Y is

   Outcome        Y = 0   Y = 1   Y = 2
   Probability    0.25    0.50    0.25

   and the cumulative probability distribution of Y is

   Outcome        Y < 0   0 ≤ Y < 1   1 ≤ Y < 2   Y ≥ 2
   Probability    0       0.25        0.75        1.0
2. (a) The means are

   μ_Y = E(Y) = 0 × Pr(Y = 0) + 1 × Pr(Y = 1) = 0 × 0.22 + 1 × 0.78 = 0.78,
   μ_X = E(X) = 0 × Pr(X = 0) + 1 × Pr(X = 1) = 0 × 0.30 + 1 × 0.70 = 0.70.

(b) The variances are

   σ_X² = E[(X − μ_X)²] = (0 − 0.70)² × Pr(X = 0) + (1 − 0.70)² × Pr(X = 1)
        = (−0.70)² × 0.30 + 0.30² × 0.70 = 0.21,
   σ_Y² = E[(Y − μ_Y)²] = (0 − 0.78)² × Pr(Y = 0) + (1 − 0.78)² × Pr(Y = 1)
        = (−0.78)² × 0.22 + 0.22² × 0.78 = 0.1716.

(c) With σ_XY = 0.084, the correlation is

   cor(X, Y) = σ_XY/(σ_X σ_Y) = 0.084/(0.21 × 0.1716)^(1/2) = 0.4425.
3. For W = 3 + 6X and V = 20 − 7Y, σ_W² = 6² × 0.21 = 7.56, σ_V² = (−7)² × 0.1716 = 8.4084, and σ_WV = 6 × (−7) × σ_XY = −3.528, so

   cor(W, V) = σ_WV/(σ_W σ_V) = −3.528/(7.56 × 8.4084)^(1/2) = −0.4425.
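The arithmetic above can be cross-checked numerically. A minimal Python sketch, using the variances and covariance from the solution (the variable names are illustrative):

```python
import math

# Values taken from the solution to Exercises 2 and 3.
var_x, var_y = 0.21, 0.1716      # variances of X and Y
cov_xy = 0.084                   # covariance of X and Y

cor_xy = cov_xy / math.sqrt(var_x * var_y)
print(round(cor_xy, 4))          # 0.4425

# W = 3 + 6X and V = 20 - 7Y: variances scale by a^2, covariance by a*b.
var_w = 6**2 * var_x             # 7.56
var_v = (-7)**2 * var_y          # 8.4084
cov_wv = 6 * (-7) * cov_xy       # -3.528
cor_wv = cov_wv / math.sqrt(var_w * var_v)
print(round(cor_wv, 4))          # -0.4425
```

The magnitudes of the two correlations agree because correlation is invariant to linear transformations; only the sign flips, from the negative coefficient on Y.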
4. (a) E(X³) = 0³ × (1 − p) + 1³ × p = p.

(b) E(X^k) = 0^k × (1 − p) + 1^k × p = p.

(c) E(X) = 0.3, and
   var(X) = E(X²) − [E(X)]² = 0.3 − 0.09 = 0.21.
Thus σ = √0.21 = 0.46. To compute the skewness, use the formula from Exercise 2.21:
   E(X − μ)³ = E(X³) − 3[E(X²)][E(X)] + 2[E(X)]³
             = 0.3 − 3 × 0.3² + 2 × 0.3³ = 0.084.
Alternatively, E(X − μ)³ = [(1 − 0.3)³ × 0.3] + [(0 − 0.3)³ × 0.7] = 0.084.
Thus, skewness = E(X − μ)³/σ³ = 0.084/0.46³ = 0.87.
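The same Bernoulli moments can be checked directly, summing over the two outcomes (a numerical sketch; `m3` is an illustrative name for the third central moment):

```python
# Check of Exercise 4(c): moments of a Bernoulli(p = 0.3) random variable.
p = 0.3
mean = p                      # E(X) = p
var = p * (1 - p)             # 0.21
sd = var ** 0.5               # ~0.458

# Third central moment, computed directly over the two outcomes 0 and 1:
m3 = (1 - mean) ** 3 * p + (0 - mean) ** 3 * (1 - p)
print(round(m3, 3))           # 0.084

skew = m3 / sd ** 3
print(round(skew, 2))         # 0.87
```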
5. Let X denote temperature in °F and Y denote temperature in °C. Recall that Y = 0 when X = 32 and Y = 100 when X = 212; this implies Y = (100/180) × (X − 32), or Y = −17.78 + (5/9)X. Using Key Concept 2.3, μ_X = 70°F implies that μ_Y = −17.78 + (5/9) × 70 = 21.11°C, and σ_X = 7°F implies σ_Y = (5/9) × 7 = 3.89°C.
6. The unemployment rate is Pr(Y = 0). Conditional on X, the relevant probabilities are

   Pr(Y = 0|X = 0) = Pr(X = 0, Y = 0)/Pr(X = 0) = 0.045/0.754 = 0.0597,
   Pr(Y = 1|X = 0) = Pr(X = 0, Y = 1)/Pr(X = 0) = 0.709/0.754 = 0.9403,
   Pr(Y = 0|X = 1) = Pr(X = 1, Y = 0)/Pr(X = 1) = 0.005/0.246 = 0.0203,
   Pr(Y = 1|X = 1) = Pr(X = 1, Y = 1)/Pr(X = 1) = 0.241/0.246 = 0.9797.

Also,

   Pr(X = 1|Y = 0) = Pr(X = 1, Y = 0)/Pr(Y = 0) = 0.005/0.050 = 0.1.
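These conditional probabilities follow mechanically from the joint distribution; a short Python sketch of the same calculation (the `joint` dictionary restates the four joint probabilities used above):

```python
# Conditional probabilities for Exercise 6 from the joint distribution of
# (X, Y), stored as {(x, y): probability}.
joint = {(0, 0): 0.045, (0, 1): 0.709, (1, 0): 0.005, (1, 1): 0.241}

pr_x0 = joint[(0, 0)] + joint[(0, 1)]      # Pr(X = 0) = 0.754
pr_x1 = joint[(1, 0)] + joint[(1, 1)]      # Pr(X = 1) = 0.246
pr_y0 = joint[(0, 0)] + joint[(1, 0)]      # Pr(Y = 0) = 0.050

print(round(joint[(0, 0)] / pr_x0, 4))     # Pr(Y=0|X=0) = 0.0597
print(round(joint[(1, 0)] / pr_x1, 4))     # Pr(Y=0|X=1) = 0.0203
print(round(joint[(1, 0)] / pr_y0, 2))     # Pr(X=1|Y=0) = 0.1
```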
9. The joint probability distribution of X and Y:

                        Value of X            Probability
   Value of Y        1      5      8      distribution of Y
       14          0.02   0.17   0.02          0.21
       22          0.05   0.15   0.03          0.23
       30          0.10   0.05   0.15          0.30
       40          0.03   0.02   0.10          0.15
       65          0.01   0.01   0.09          0.11
   Probability
   distribution
   of X            0.21   0.40   0.39          1.00

   The conditional probability distribution of Y given X = 8:

   Value of Y           14         22         30         40         65
   Pr(Y = y|X = 8)   0.02/0.39  0.03/0.39  0.15/0.39  0.10/0.39  0.09/0.39
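The table's internal consistency (marginals summing correctly, conditional probabilities summing to one) can be verified with a short sketch; the data structure below simply restates the joint table:

```python
# The Exercise 9 joint distribution: rows indexed by the value of Y,
# columns (list positions) corresponding to X = 1, 5, 8.
joint = {
    14: [0.02, 0.17, 0.02],
    22: [0.05, 0.15, 0.03],
    30: [0.10, 0.05, 0.15],
    40: [0.03, 0.02, 0.10],
    65: [0.01, 0.01, 0.09],
}

marg_y = {y: round(sum(row), 2) for y, row in joint.items()}
marg_x = [round(sum(row[j] for row in joint.values()), 2) for j in range(3)]
print(marg_y)   # {14: 0.21, 22: 0.23, 30: 0.3, 40: 0.15, 65: 0.11}
print(marg_x)   # [0.21, 0.4, 0.39]

# Conditional distribution of Y given X = 8: third column / Pr(X = 8).
cond = {y: round(row[2] / 0.39, 4) for y, row in joint.items()}
print(cond)     # {14: 0.0513, 22: 0.0769, 30: 0.3846, 40: 0.2564, 65: 0.2308}
```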
10. If Y ∼ N(μ_Y, σ_Y²), then (Y − μ_Y)/σ_Y ∼ N(0, 1).

(a) Y ∼ N(1, 4), so
   Pr(Y ≤ 3) = Pr((Y − 1)/2 ≤ (3 − 1)/2) = Φ(1) = 0.8413.

(b) Y ∼ N(3, 9), so
   Pr(Y > 0) = 1 − Pr(Y ≤ 0) = 1 − Pr((Y − 3)/3 ≤ (0 − 3)/3) = 1 − Φ(−1) = Φ(1) = 0.8413.

(c) Y ∼ N(50, 25), so
   Pr(40 ≤ Y ≤ 52) = Pr((40 − 50)/5 ≤ (Y − 50)/5 ≤ (52 − 50)/5)
                   = Φ(0.4) − Φ(−2) = Φ(0.4) − [1 − Φ(2)]
                   = 0.6554 − 1 + 0.9772 = 0.6326.

(d) Y ∼ N(5, 2), so
   Pr(6 ≤ Y ≤ 8) = Pr((6 − 5)/√2 ≤ (Y − 5)/√2 ≤ (8 − 5)/√2)
                 = Φ(2.1213) − Φ(0.7071)
                 = 0.9831 − 0.7602 = 0.2229.
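These table look-ups can be reproduced without tables, since Φ is available through the error function in the Python standard library (the helper name `Phi` is illustrative):

```python
import math

def Phi(z):
    """Standard normal CDF, via the error function in the stdlib."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Parts (a)-(d) of Exercise 10, standardizing each Y before applying Phi.
print(round(Phi((3 - 1) / 2), 4))                         # (a) 0.8413
print(round(1 - Phi((0 - 3) / 3), 4))                     # (b) 0.8413
print(round(Phi((52 - 50) / 5) - Phi((40 - 50) / 5), 4))  # (c) 0.6327
print(round(Phi((8 - 5) / math.sqrt(2))
            - Phi((6 - 5) / math.sqrt(2)), 4))            # (d) 0.2228
```

Parts (c) and (d) differ from the table-based answers in the last digit only because the tables round each Φ value to four decimals before subtracting.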
11. (a) 0.90
(b) 0.05
(c) 0.05
(d) When Y ∼ χ²₁₀, then Y/10 ∼ F₁₀,∞.
(e) Y = Z², where Z ∼ N(0, 1); thus Pr(Y > 1) = 1 − Pr(−1 ≤ Z ≤ 1) = 0.32.
12. (a) 0.05
(b) 0.950
(c) 0.953
(d) The t_df distribution and N(0, 1) are approximately the same when df is large.
(e) 0.10
(f) 0.01
14. (a) σ_Ȳ² = σ_Y²/n = 43/100 = 0.43, and
   Pr(Ȳ ≤ 101) = Pr((Ȳ − 100)/√0.43 ≤ (101 − 100)/√0.43) = Φ(1.525) = 0.9364.

(b) n = 165, σ_Ȳ² = σ_Y²/n = 43/165 = 0.2606, and
   Pr(Ȳ > 98) = 1 − Pr(Ȳ ≤ 98) = 1 − Pr((Ȳ − 100)/√0.2606 ≤ (98 − 100)/√0.2606)
              = 1 − Φ(−3.9178) = Φ(3.9178) = 1.000 (rounded to four decimal places).

(c) n = 64, σ_Ȳ² = σ_Y²/64 = 43/64 = 0.6719, and
   Pr(101 ≤ Ȳ ≤ 103) = Pr((101 − 100)/√0.6719 ≤ (Ȳ − 100)/√0.6719 ≤ (103 − 100)/√0.6719)
                     = Φ(3.6599) − Φ(1.2200) = 0.9999 − 0.8888 = 0.1111.
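A sketch of the same three calculations, assuming (as in the exercise) that Ȳ is approximately N(100, 43/n); `pr_below` is an illustrative helper name:

```python
import math

def Phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Exercise 14: Ybar is approximately N(mu, var/n) with mu = 100, var = 43.
def pr_below(c, n, mu=100.0, var=43.0):
    return Phi((c - mu) / math.sqrt(var / n))

print(round(pr_below(101, 100), 4))                      # (a) 0.9364
print(round(1 - pr_below(98, 165), 4))                   # (b) 1.0
print(round(pr_below(103, 64) - pr_below(101, 64), 4))   # (c) 0.1111
```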
15. (a)
   Pr(9.6 ≤ Ȳ ≤ 10.4) = Pr((9.6 − 10)/√(4/n) ≤ (Ȳ − 10)/√(4/n) ≤ (10.4 − 10)/√(4/n))
                      = Pr((9.6 − 10)/√(4/n) ≤ Z ≤ (10.4 − 10)/√(4/n)),
where Z ∼ N(0, 1). Thus,

   (i) n = 20: Pr(−0.89 ≤ Z ≤ 0.89) = 0.63;
   (ii) n = 100: Pr(−2.00 ≤ Z ≤ 2.00) = 0.954;
   (iii) n = 1000: Pr(−6.32 ≤ Z ≤ 6.32) = 1.000.

(b)
   Pr(10 − c ≤ Ȳ ≤ 10 + c) = Pr(−c/√(4/n) ≤ (Ȳ − 10)/√(4/n) ≤ c/√(4/n))
                           = Pr(−c/√(4/n) ≤ Z ≤ c/√(4/n)).
As n gets large, c/√(4/n) = c√n/2 gets large, so this probability converges to 1.

(c) This follows from (b) and the definition of convergence in probability given in Key Concept 2.6: Pr(|Ȳ − 10| ≤ c) → 1 for every c > 0, so Ȳ converges in probability to 10.
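The convergence in part (b) is easy to see numerically. A sketch assuming var(Y) = 4, as in the exercise (`pr_within` is an illustrative name):

```python
import math

def Phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Exercise 15(a): Pr(9.6 <= Ybar <= 10.4) when Ybar ~ N(10, 4/n).
def pr_within(n, c=0.4):
    z = c / math.sqrt(4.0 / n)
    return Phi(z) - Phi(-z)

for n in (20, 100, 1000):
    print(n, round(pr_within(n), 3))
# 20 0.629
# 100 0.954
# 1000 1.0
```

The probability rises toward 1 as n grows, which is exactly the behavior part (b) derives.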
16. There are several ways to do this; here is one. Generate n draws of Y: Y_1, Y_2, …, Y_n. Let X_i = 1 if Y_i < 3.6, and X_i = 0 otherwise. Notice that X_i is a Bernoulli random variable with μ_X = Pr(X = 1) = Pr(Y < 3.6). Compute X̄. Because X̄ converges in probability to μ_X = Pr(X = 1) = Pr(Y < 3.6), X̄ will be an accurate approximation if n is large.
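The procedure above can be sketched in a few lines. The distribution of Y is not restated in this solution, so Y ∼ N(0, 4) is assumed here purely for illustration; any distribution of Y works the same way:

```python
import random

# Monte Carlo sketch of Exercise 16. ASSUMPTION: Y ~ N(0, 4), chosen only
# to make the sketch runnable; substitute the distribution of Y as needed.
random.seed(0)
n = 100_000
draws = (random.gauss(0, 2) for _ in range(n))      # n draws of Y
x_bar = sum(1 if y < 3.6 else 0 for y in draws) / n  # mean of the X_i
print(round(x_bar, 2))  # ~0.96, close to Pr(Y < 3.6) = Phi(1.8) = 0.9641
```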
17. μ_Y = 0.4 and σ_Y² = 0.4 × 0.6 = 0.24.

(a) (i) With n = 100,
   Pr(Ȳ ≥ 0.43) = Pr((Ȳ − 0.4)/√(0.24/n) ≥ (0.43 − 0.4)/√(0.24/n)) = Pr(Z ≥ 0.6124) = 0.27.

   (ii) With n = 400,
   Pr(Ȳ ≤ 0.37) = Pr((Ȳ − 0.4)/√(0.24/n) ≤ (0.37 − 0.4)/√(0.24/n)) = Pr(Z ≤ −1.22) = 0.11.
(b) The calculation requires (0.41 − 0.4)/√(0.24/n) ≥ 1.96, so that Pr(0.39 ≤ Ȳ ≤ 0.41) ≥ 0.95; solving gives n ≥ 0.24 × (1.96/0.01)² ≈ 9220.
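The standardizations in Exercise 17(a) can be checked with the same Φ helper as before (`z_stat` is an illustrative name):

```python
import math

def Phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Exercise 17(a): Ybar is approximately N(0.4, 0.24/n).
def z_stat(c, n, mu=0.4, var=0.24):
    return (c - mu) / math.sqrt(var / n)

print(round(z_stat(0.43, 100), 4))            # 0.6124
print(round(1 - Phi(z_stat(0.43, 100)), 2))   # (i)  0.27
print(round(Phi(z_stat(0.37, 400)), 2))       # (ii) 0.11
```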
18. σ_Y² = E[(Y − μ_Y)²] = 1.9 × 10⁵.

(b) (i) E(Ȳ) = μ_Y = $1000, σ_Ȳ² = σ_Y²/n = 1.9 × 10⁵/100 = 1.9 × 10³, and
   Pr(Ȳ > 1100) = 1 − Pr(Ȳ ≤ 1100) = 1 − Φ((1100 − 1000)/√(1.9 × 10³))
               = 1 − Φ(2.2942) = 1 − 0.9891 = 0.0109.
19. (a) Pr(Y = y_j) = Σ_{i=1}^{l} Pr(X = x_i, Y = y_j).

(b)
   E(Y) = Σ_{j=1}^{k} y_j Pr(Y = y_j)
        = Σ_{j=1}^{k} y_j Σ_{i=1}^{l} Pr(Y = y_j|X = x_i) Pr(X = x_i)
        = Σ_{i=1}^{l} [Σ_{j=1}^{k} y_j Pr(Y = y_j|X = x_i)] Pr(X = x_i)
        = Σ_{i=1}^{l} E(Y|X = x_i) Pr(X = x_i).

(c) When X and Y are independent,
   σ_XY = E[(X − μ_X)(Y − μ_Y)]
        = Σ_{i=1}^{l} Σ_{j=1}^{k} (x_i − μ_X)(y_j − μ_Y) Pr(X = x_i, Y = y_j)
        = Σ_{i=1}^{l} Σ_{j=1}^{k} (x_i − μ_X)(y_j − μ_Y) Pr(X = x_i) Pr(Y = y_j)
          (using Pr(X = x_i, Y = y_j) = Pr(X = x_i) Pr(Y = y_j) by independence)
        = [Σ_{i=1}^{l} (x_i − μ_X) Pr(X = x_i)] [Σ_{j=1}^{k} (y_j − μ_Y) Pr(Y = y_j)]
        = E(X − μ_X) E(Y − μ_Y) = 0 × 0 = 0,
so
   cor(X, Y) = σ_XY/(σ_X σ_Y) = 0/(σ_X σ_Y) = 0.
20. (a) Pr(Y = y_i) = Σ_{j=1}^{l} Σ_{h=1}^{m} Pr(Y = y_i|X = x_j, Z = z_h) Pr(X = x_j, Z = z_h).

(b)
   E(Y) = Σ_{i=1}^{k} y_i Pr(Y = y_i)
        = Σ_{i=1}^{k} y_i Σ_{j=1}^{l} Σ_{h=1}^{m} Pr(Y = y_i|X = x_j, Z = z_h) Pr(X = x_j, Z = z_h)
        = Σ_{j=1}^{l} Σ_{h=1}^{m} [Σ_{i=1}^{k} y_i Pr(Y = y_i|X = x_j, Z = z_h)] Pr(X = x_j, Z = z_h)
        = Σ_{j=1}^{l} Σ_{h=1}^{m} E(Y|X = x_j, Z = z_h) Pr(X = x_j, Z = z_h),
where the first line uses the definition of the mean, the second uses (a), the third is a rearrangement, and the final line uses the definition of the conditional expectation.
21. (a) With μ = E(X),
   E(X − μ)³ = E[(X − μ)²(X − μ)] = E[X³ − 2μX² + μ²X − μX² + 2μ²X − μ³]
             = E(X³) − 3μE(X²) + 3μ²E(X) − μ³
             = E(X³) − 3E(X²)E(X) + 3E(X)[E(X)]² − [E(X)]³
             = E(X³) − 3E(X²)E(X) + 2[E(X)]³.

(b)
   E(X − μ)⁴ = E[(X³ − 3μX² + 3μ²X − μ³)(X − μ)]
             = E[X⁴ − 3μX³ + 3μ²X² − μ³X − μX³ + 3μ²X² − 3μ³X + μ⁴]
             = E(X⁴) − 4μE(X³) + 6μ²E(X²) − 4μ³E(X) + μ⁴
             = E(X⁴) − 4[E(X)][E(X³)] + 6[E(X)]²[E(X²)] − 3[E(X)]⁴.
22. The mean and variance of R are given by
   μ = w × 0.08 + (1 − w) × 0.05,
   σ² = w² × 0.07² + (1 − w)² × 0.04² + 2 × w × (1 − w) × [0.07 × 0.04 × 0.25],
where 0.07 × 0.04 × 0.25 = Cov(R_s, R_b) follows from the definition of the correlation between R_s and R_b.

(a) μ = 0.065; σ = 0.044.
(b) μ = 0.0725; σ = 0.056.
(c) w = 1 maximizes μ; σ = 0.07 for this value of w.
(d) The derivative of σ² with respect to w is
   dσ²/dw = 2w × 0.07² − 2(1 − w) × 0.04² + (2 − 4w) × [0.07 × 0.04 × 0.25]
          = 0.0102w − 0.0018.
Setting this derivative to zero and solving for w yields w = 18/102 = 0.18. (Notice that the second derivative is positive, so this is the global minimum.) With w = 0.18, σ_R = 0.038.
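A numerical sketch of the portfolio calculations, using the means, standard deviations, and correlation stated above (`mean_sd` is an illustrative helper name):

```python
import math

# Exercise 22: R = w*Rs + (1 - w)*Rb with E(Rs) = 0.08, sd(Rs) = 0.07,
# E(Rb) = 0.05, sd(Rb) = 0.04, and cor(Rs, Rb) = 0.25.
cov_sb = 0.07 * 0.04 * 0.25

def mean_sd(w):
    mu = w * 0.08 + (1 - w) * 0.05
    var = w**2 * 0.07**2 + (1 - w)**2 * 0.04**2 + 2 * w * (1 - w) * cov_sb
    return mu, math.sqrt(var)

print(tuple(round(v, 4) for v in mean_sd(0.5)))     # (a) (0.065, 0.0444)
print(tuple(round(v, 4) for v in mean_sd(0.75)))    # (b) (0.0725, 0.0558)

w_min = 0.0018 / 0.0102                             # zero of d(var)/dw
print(round(w_min, 2), round(mean_sd(w_min)[1], 3)) # (d) 0.18 0.038
```

The four-decimal standard deviations here match the solution's 0.044, 0.056, and 0.038 after rounding.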
23. X and Z are two independently distributed standard normal random variables, so
   μ_X = μ_Z = 0, σ_X² = σ_Z² = 1, σ_XZ = 0.

(a) Because of the independence between X and Z, Pr(Z = z|X = x) = Pr(Z = z), and E(Z|X) = E(Z) = 0. Thus
   E(Y|X) = E(X² + Z|X) = E(X²|X) + E(Z|X) = X² + 0 = X².

(b) E(X²) = σ_X² + μ_X² = 1, and μ_Y = E(X² + Z) = E(X²) + μ_Z = 1 + 0 = 1.

(c) E(XY) = E(X³ + ZX) = E(X³) + E(ZX). Using the fact that the odd moments of a standard normal random variable are all zero, we have E(X³) = 0. Using the independence between X and Z, we have E(ZX) = μ_Z μ_X = 0. Thus E(XY) = E(X³) + E(ZX) = 0.

(d)
   Cov(X, Y) = E[(X − μ_X)(Y − μ_Y)] = E[(X − 0)(Y − 1)]
             = E(XY − X) = E(XY) − E(X)
             = 0 − 0 = 0,
so
   cor(X, Y) = σ_XY/(σ_X σ_Y) = 0/(σ_X σ_Y) = 0.
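The striking part of this exercise, that Y depends on X yet is uncorrelated with it, can be seen in a Monte Carlo sketch:

```python
import random

# Exercise 23 by simulation: Y = X^2 + Z with X, Z independent standard
# normals. Y is a function of X, yet cov(X, Y) = 0, as derived above.
random.seed(1)
n = 200_000
xs = [random.gauss(0, 1) for _ in range(n)]
ys = [x * x + random.gauss(0, 1) for x in xs]

mx = sum(xs) / n
my = sum(ys) / n
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
print(round(my, 1))      # 1.0  (matches mu_Y = 1 from part (b))
print(abs(cov) < 0.05)   # True (sample covariance is ~0 up to noise)
```

This illustrates that zero correlation rules out only linear association, not dependence.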
24. (a) E(Y_i²) = var(Y_i) + [E(Y_i)]² = σ² + 0 = σ², and the result follows directly.

(b) Y_i/σ is distributed i.i.d. N(0, 1), W = Σ_{i=1}^{n} (Y_i/σ)², and the result follows from the definition of a χ²_n random variable.

(c) E(W) = E[Σ_{i=1}^{n} (Y_i/σ)²] = Σ_{i=1}^{n} E(Y_i²)/σ² = n.

(d) Write
   V = Y_1/√(Σ_{i=2}^{n} Y_i²/(n − 1)) = (Y_1/σ)/√(Σ_{i=2}^{n} (Y_i/σ)²/(n − 1)),
which follows from dividing the numerator and denominator by σ. Y_1/σ ∼ N(0, 1), Σ_{i=2}^{n} (Y_i/σ)² ∼ χ²_{n−1}, and Y_1/σ and Σ_{i=2}^{n} (Y_i/σ)² are independent. The result then follows from the definition of the t distribution with n − 1 degrees of freedom.
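Part (c)'s claim that E(W) = n can be checked by simulation. A sketch in which σ = 3 and n = 5 are arbitrary illustrative choices:

```python
import random

# Monte Carlo check of Exercise 24(c): with Yi i.i.d. N(0, sigma^2),
# W = sum_i (Yi/sigma)^2 is chi-squared with n degrees of freedom, E(W) = n.
random.seed(2)
sigma, n, reps = 3.0, 5, 100_000
w_bar = sum(
    sum((random.gauss(0, sigma) / sigma) ** 2 for _ in range(n))
    for _ in range(reps)
) / reps
print(round(w_bar))  # 5
```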