Solutions to Problems
Lecture 1
1. $P\{\max(X,Y,Z) \le t\} = P\{X \le t \text{ and } Y \le t \text{ and } Z \le t\} = P\{X \le t\}^3$ by independence. Thus the distribution function of the maximum is $(t^6)^3 = t^{18}$, and the density is $18t^{17}$, $0 \le t \le 1$.
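As a quick numerical sanity check of this solution, the sketch below simulates the maximum directly; the only assumption added here is the sampling method (since each variable has distribution function $t^6$ on $[0,1]$, it can be sampled as $U^{1/6}$ with $U$ uniform).

```python
import random

# Each of X, Y, Z has distribution function t^6 on [0, 1],
# so sample X as U**(1/6) with U uniform(0, 1).
random.seed(0)
N = 200_000
t0 = 0.9
count = 0
for _ in range(N):
    m = max(random.random() ** (1 / 6) for _ in range(3))
    count += (m <= t0)
empirical = count / N
theoretical = t0 ** 18   # the claimed distribution function (t^6)^3 at t = 0.9
```

The empirical probability should be close to $0.9^{18} \approx .150$.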
2. See Figure S1.1. We have
$$P\{Z \le z\} = \iint_{y \le zx} f_{XY}(x,y)\,dx\,dy = \int_{x=0}^{\infty}\int_{y=0}^{zx} e^{-x}e^{-y}\,dy\,dx$$
$$F_Z(z) = \int_0^\infty e^{-x}(1 - e^{-zx})\,dx = 1 - \frac{1}{1+z}, \quad z \ge 0$$
$$f_Z(z) = \frac{1}{(z+1)^2}, \quad z \ge 0.$$
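A Monte Carlo sketch of this result, under the assumption (implicit in the region $y \le zx$) that $Z = Y/X$ with $X$, $Y$ independent exponential(1):

```python
import random

# F_Z(z) = 1 - 1/(1 + z) for Z = Y/X, X and Y iid exponential(1).
random.seed(1)
N = 200_000
z0 = 2.0
hits = sum(random.expovariate(1.0) / random.expovariate(1.0) <= z0
           for _ in range(N))
empirical = hits / N
theoretical = 1 - 1 / (1 + z0)   # = 2/3
```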
$$f_Y(y) = \frac{f_X(y^{1/3})}{3y^{2/3}} = \frac{3y^{2/3}/b^3}{3y^{2/3}} = \frac{1}{b^3}, \quad 0 < y < b^3.$$
$$f_Y(y) = \frac{f_X(\tan^{-1}y)}{|dy/dx|_{x=\tan^{-1}y}} = \frac{1/\pi}{\sec^2 x}\bigg|_{x=\tan^{-1}y} = \frac{1}{\pi(1+y^2)}.$$
Lecture 2
1. We have $y_1 = 2x_1$, $y_2 = x_2 - x_1$, so $x_1 = y_1/2$, $x_2 = (y_1/2) + y_2$, and
$$\frac{\partial(y_1,y_2)}{\partial(x_1,x_2)} = \begin{vmatrix} 2 & 0 \\ -1 & 1 \end{vmatrix} = 2.$$
Figure S1.1: the region $y \le zx$ in the $(x,y)$ plane.
Figure S1.2: the curve $Y = X^3$; $Y \le y$ corresponds to $X \le y^{1/3}$.
Thus $f_{Y_1Y_2}(y_1,y_2) = (1/2)f_{X_1X_2}(x_1,x_2) = e^{-x_1-x_2} = \exp[-(y_1/2) - (y_1/2) - y_2] = e^{-y_1}e^{-y_2}$. As indicated in the comments, the range of the $y$'s is $0 < y_1 < \infty$, $0 < y_2 < \infty$. Therefore the joint density of $Y_1$ and $Y_2$ is the product of a function of $y_1$ alone and a function of $y_2$ alone, which forces independence.
2. We have $y_1 = x_1/x_2$, $y_2 = x_2$, so $x_1 = y_1y_2$, $x_2 = y_2$, and
$$\frac{\partial(x_1,x_2)}{\partial(y_1,y_2)} = \begin{vmatrix} y_2 & y_1 \\ 0 & 1 \end{vmatrix} = y_2.$$
Thus $f_{Y_1Y_2}(y_1,y_2) = f_{X_1X_2}(x_1,x_2)\,|\partial(x_1,x_2)/\partial(y_1,y_2)| = (8y_1y_2)(y_2)(y_2) = (2y_1)(4y_2^3)$. Since $0 < x_1 < x_2 < 1$ is equivalent to $0 < y_1 < 1$, $0 < y_2 < 1$, it follows just as in Problem 1 that $Y_1$ and $Y_2$ are independent.
3. The Jacobian $\partial(x_1,x_2,x_3)/\partial(y_1,y_2,y_3)$ is given by
$$\begin{vmatrix} y_2y_3 & y_1y_3 & y_1y_2 \\ -y_2y_3 & y_3 - y_1y_3 & y_2 - y_1y_2 \\ 0 & -y_3 & 1-y_2 \end{vmatrix}
= (y_2y_3^2 - y_1y_2y_3^2)(1-y_2) + y_1y_2^2y_3^2 + y_3(y_2 - y_1y_2)y_2y_3 + (1-y_2)y_1y_2y_3^2 = y_2y_3^2.$$
Thus $f_{Y_1Y_2Y_3}(y_1,y_2,y_3) = e^{-y_3}\,y_2y_3^2$, which can be expressed as $(1)(2y_2)(y_3^2e^{-y_3}/2)$, and since $x_1, x_2, x_3 > 0$ is equivalent to $0 < y_1 < 1$, $0 < y_2 < 1$, $y_3 > 0$, it follows as before that $Y_1, Y_2, Y_3$ are independent.
Lecture 3
1. $M_{X_2}(t) = M_Y(t)/M_{X_1}(t) = (1-2t)^{-r/2}/(1-2t)^{-r_1/2} = (1-2t)^{-(r-r_1)/2}$, which is $\chi^2(r - r_1)$.
Figure S1.3: the arctan transformation.
4. $M_Y(t) = \prod_{i=1}^n M_i(t) = \prod_{i=1}^n \exp[\lambda_i(e^t - 1)] = \exp\Big[\sum_{i=1}^n \lambda_i(e^t - 1)\Big]$,
which is Poisson $(\lambda_1 + \cdots + \lambda_n)$.
5. Since the coin is unbiased, X2 has the same distribution as the number of heads in the
second experiment. Thus X1 + X2 has the same distribution as the number of heads
in n1 + n2 tosses, namely binomial with n = n1 + n2 and p = 1/2.
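The claim that two independent binomials with the same $p$ add to a binomial can be verified deterministically by convolving the probability functions (the values $n_1 = 5$, $n_2 = 7$ below are arbitrary illustration choices):

```python
from math import comb

# Binomial(n1, 1/2) + Binomial(n2, 1/2) should be Binomial(n1 + n2, 1/2).
n1, n2 = 5, 7

def binom_pmf(n, k):
    return comb(n, k) / 2 ** n   # p = 1/2

# Convolution of the two pmfs.
conv = [sum(binom_pmf(n1, j) * binom_pmf(n2, k - j)
            for j in range(max(0, k - n2), min(n1, k) + 1))
        for k in range(n1 + n2 + 1)]
direct = [binom_pmf(n1 + n2, k) for k in range(n1 + n2 + 1)]
```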
Lecture 4
1. Let $\Phi$ be the normal (0,1) distribution function, and recall that $\Phi(-x) = 1 - \Phi(x)$. Then
$$P\{\mu - c < \overline{X} < \mu + c\} = P\Big\{-c\frac{\sqrt n}{\sigma} < \frac{\overline{X} - \mu}{\sigma/\sqrt n} < c\frac{\sqrt n}{\sigma}\Big\}
= \Phi(c\sqrt n/\sigma) - \Phi(-c\sqrt n/\sigma) = 2\Phi(c\sqrt n/\sigma) - 1 \ge .954.$$
Thus $\Phi(c\sqrt n/\sigma) \ge 1.954/2 = .977$. From tables, $c\sqrt n/\sigma \ge 2$, so $n \ge 4\sigma^2/c^2$.
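The same computation can be sketched in code using the standard library's normal distribution; the particular values of $\sigma$ and $c$ below are illustrative assumptions, not part of the problem.

```python
from math import ceil
from statistics import NormalDist

# 2*Phi(c*sqrt(n)/sigma) - 1 >= .954 forces c*sqrt(n)/sigma >= z,
# where Phi(z) = .977, i.e. z is approximately 2, giving n >= 4*sigma^2/c^2.
z = NormalDist().inv_cdf(0.977)       # close to 2
sigma, c = 3.0, 1.5                   # illustrative values
n_min = ceil((z * sigma / c) ** 2)    # smallest n meeting the requirement
```

With these values $4\sigma^2/c^2 = 16$, matching the tabular shortcut $c\sqrt n/\sigma \ge 2$.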
2. If $Z = \overline{X} - \overline{Y}$, we want $P\{Z > 0\}$. But $Z$ is normal with mean $\mu = \mu_1 - \mu_2$ and variance $\sigma^2 = (\sigma_1^2/n_1) + (\sigma_2^2/n_2)$. Thus
$$P\{Z > 0\} = P\Big\{\frac{Z - \mu}{\sigma} > \frac{-\mu}{\sigma}\Big\} = 1 - \Phi(-\mu/\sigma) = \Phi(\mu/\sigma).$$
3. We have
$$E[e^{tS^2}] = E\Big[\exp\Big(\frac{t\sigma^2}{n}\cdot\frac{nS^2}{\sigma^2}\Big)\Big] = E[\exp(t\sigma^2X/n)]$$
where the random variable $X$ is $\chi^2(n-1)$, and therefore has moment-generating function $M(t) = (1-2t)^{-(n-1)/2}$. Replacing $t$ by $t\sigma^2/n$ we get
$$M_{S^2}(t) = \Big(1 - \frac{2t\sigma^2}{n}\Big)^{-(n-1)/2}.$$
Lecture 5
1. By definition of the beta density,
$$E(X) = \frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)}\int_0^1 x^a(1-x)^{b-1}\,dx = \frac{a}{a+b},$$
and similarly $E(X^2) = a(a+1)/[(a+b)(a+b+1)]$. Thus
$$\operatorname{Var} X = \frac{1}{(a+b)^2(a+b+1)}[(a+1)a(a+b) - a^2(a+b+1)] = \frac{ab}{(a+b)^2(a+b+1)}.$$
Lecture 6
1. Apply the formula for the joint density of $Y_j$ and $Y_k$ with $j = 1$, $k = 3$, $n = 3$, $F(x) = x$, $f(x) = 1$, $0 < x < 1$. The result is $f_{Y_1Y_3}(x,y) = 6(y-x)$, $0 < x < y < 1$. Now let $Z = Y_3 - Y_1$, $W = Y_3$. The Jacobian of the transformation has absolute value 1, so $f_{ZW}(z,w) = f_{Y_1Y_3}(y_1,y_3) = 6(y_3 - y_1) = 6z$, $0 < z < w < 1$. Thus
$$f_Z(z) = \int_{w=z}^1 6z\,dw = 6z(1-z), \quad 0 < z < 1.$$
2. The probability that more than one random variable falls in [x, x + dx] need not be
negligible. For example, there can be a positive probability that two observations
coincide with x.
3. The density of $Y_k$ is
$$f_{Y_k}(x) = \frac{n!}{(k-1)!(n-k)!}x^{k-1}(1-x)^{n-k}, \quad 0 < x < 1$$
and
$$P\{Y_k > p\} = \sum_{i=0}^{k-1}\binom{n}{i}p^i(1-p)^{n-i}.$$
Lecture 7
1. Let $W_n = (S_n - E(S_n))/n$; then $E(W_n) = 0$ for all $n$, and
$$\operatorname{Var} W_n = \frac{\operatorname{Var} S_n}{n^2} = \frac{1}{n^2}\sum_{i=1}^n \sigma_i^2 \le \frac{nM}{n^2} = \frac{M}{n} \to 0.$$
It follows that $W_n \xrightarrow{P} 0$.
2. All $X_n$ and $X$ have the same distribution ($p(1) = p(0) = 1/2$), so $X_n \xrightarrow{d} X$. But if $0 < \varepsilon < 1$ then $P\{|X_n - X| \ge \varepsilon\} = P\{X_n \ne X\}$, which is 0 for $n$ odd and 1 for $n$ even. Therefore $P\{|X_n - X| \ge \varepsilon\}$ oscillates and has no limit as $n \to \infty$.
3. By the weak law of large numbers, $\overline{X}_n$ converges in probability to $\mu$, hence converges in distribution to $\mu$. Thus we can take $X$ to have a distribution function $F$ that is degenerate at $\mu$, in other words,
$$F(x) = \begin{cases} 0, & x < \mu \\ 1, & x \ge \mu. \end{cases}$$
4. Let $F_n$ be the distribution function of $X_n$. For all $x$, $F_n(x) = 0$ for sufficiently large $n$. Since the identically zero function cannot be a distribution function, there is no limiting distribution.
Lecture 8
1. Note that $M_{X_n}(t) = 1/(1-\beta t)^n$, where $1/(1-\beta t)$ is the moment-generating function of an exponential random variable (which has mean $\beta$). By the weak law of large numbers, $X_n/n \xrightarrow{P} \beta$, hence $X_n/n \xrightarrow{d} \beta$.
2. $\chi^2(n) = \sum_{i=1}^n X_i^2$, where the $X_i$ are iid, each normal (0,1). Thus the central limit theorem applies.
3. We have $n$ Bernoulli trials, with probability of success $p = \int_a^b f(x)\,dx$ on a given trial. Thus $Y_n$ is binomial $(n,p)$. If $n$ and $p$ satisfy the sufficient condition given in the text, the normal approximation with $E(Y_n) = np$ and $\operatorname{Var} Y_n = np(1-p)$ should work well in practice.
4. We have $E(X_i) = 0$ and
$$\operatorname{Var} X_i = E(X_i^2) = \int_{-1/2}^{1/2} x^2\,dx = 2\int_0^{1/2} x^2\,dx = \frac{1}{12}.$$
By the central limit theorem, $Y_n$ is approximately normal with $E(Y_n) = 0$ and $\operatorname{Var} Y_n = n/12$.
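This rounding-error model is easy to simulate; the sketch below checks that a sum of $n$ iid uniform$(-1/2, 1/2)$ errors has mean near 0 and variance near $n/12$.

```python
import random

# Sum of n iid uniform(-1/2, 1/2) "rounding errors".
random.seed(3)
n, trials = 100, 20_000
sums = [sum(random.uniform(-0.5, 0.5) for _ in range(n))
        for _ in range(trials)]
mean = sum(sums) / trials
var = sum((s - mean) ** 2 for s in sums) / trials   # should be near n/12
```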
5. Let $W_n = n(1 - F(Y_n))$. Then
$$P\{W_n \le w\} = 1 - \Big(1 - \frac{w}{n}\Big)^n, \quad 0 \le w \le n,$$
hence $P\{W_n \le w\} \to 1 - e^{-w}$ as $n \to \infty$, the exponential limit.
Lecture 9
1. (a) We have
$$f(x_1,\dots,x_n) = \frac{e^{-n\lambda}}{x_1!\cdots x_n!}\lambda^{x_1+\cdots+x_n}.$$
With $x = x_1 + \cdots + x_n$, take logarithms and differentiate to get
$$\frac{\partial}{\partial\lambda}(x\ln\lambda - n\lambda) = \frac{x}{\lambda} - n = 0, \quad \hat\lambda = \overline{X}.$$
Similarly,
$$\frac{\partial}{\partial\beta}\Big(n\ln\beta + (\beta - 1)\sum_{i=1}^n\ln x_i\Big) = \frac{n}{\beta} + \sum_{i=1}^n\ln x_i = 0, \quad \hat\beta = \frac{-n}{\sum_{i=1}^n\ln x_i}.$$
(d) $f(x_1,\dots,x_n) = (1/2)^n\exp[-\sum_{i=1}^n|x_i - \theta|]$. We must minimize $\sum_{i=1}^n|x_i - \theta|$, and we must be careful when differentiating because of the absolute values. If the order statistics of the $x_i$ are $y_i$, $i = 1,\dots,n$, and $y_k < \theta < y_{k+1}$, then the sum to be minimized is
$$(\theta - y_1) + \cdots + (\theta - y_k) + (y_{k+1} - \theta) + \cdots + (y_n - \theta).$$
The derivative of the sum with respect to $\theta$ is the number of $y_i$'s less than $\theta$ minus the number of $y_i$'s greater than $\theta$. Thus as $\theta$ increases, $\sum_{i=1}^n|x_i - \theta|$ decreases until the number of $y_i$'s less than $\theta$ equals the number of $y_i$'s greater than $\theta$. We conclude that $\hat\theta$ is the median of the $X_i$.
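The median-minimizes-absolute-deviation conclusion can be checked numerically; the data set below is an arbitrary illustration.

```python
import statistics

# The sum of absolute deviations is minimized at the sample median.
xs = [2.0, 7.0, 1.0, 4.0, 9.0]

def total_abs_dev(theta):
    return sum(abs(x - theta) for x in xs)

grid = [i / 100 for i in range(0, 1001)]   # theta in [0, 10]
best = min(grid, key=total_abs_dev)
med = statistics.median(xs)
```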
n
(e) f (x1 , . . . , xn ) = exp[ i=1 xi ]en if all xi , and 0 elsewhere. Thus
n
f (x1 , . . . , xn ) = exp[ xi ]en I[ min(x1 , . . . , xn )].
i=1
$$P\{a \le X \le b\} = \sum_{k=a}^{b}\binom{n}{k}\theta^k(1-\theta)^{n-k}.$$
Lecture 10
1. Set $2\Phi(b) - 1$ equal to the desired confidence level. This, along with the table of the normal (0,1) distribution function, determines $b$. The length of the confidence interval is $2b\sigma/\sqrt n$.
2. Set $2F_T(b) - 1$ equal to the desired confidence level. This, along with the table of the $T(n-1)$ distribution function, determines $b$. The length of the confidence interval is $2bS/\sqrt{n-1}$.
3. In order to compute the expected length of the confidence interval, we must compute $E(S)$, and the key observation is
$$S = \frac{\sigma}{\sqrt n}\sqrt{\frac{nS^2}{\sigma^2}}, \qquad \frac{nS^2}{\sigma^2} \sim \chi^2(n-1).$$
If $f(x)$ is the chi-square density with $r = n-1$ degrees of freedom [see (3.8)], then the expected length is
$$\frac{2b\sigma}{\sqrt{n-1}\,\sqrt n}\int_0^\infty x^{1/2}f(x)\,dx$$
and an appropriate change of variable reduces the integral to a gamma function which can be evaluated explicitly.
4. The statistic
$$\frac{\overline{X} - \mu}{\sigma/\sqrt n} = \frac{\overline{X} - \mu}{\mu/\sqrt n}$$
is approximately normal (0,1) by the central limit theorem. With $c = 1/\sqrt n$ we have
$$P\Big\{-b < \frac{\overline{X} - \mu}{\mu c} < b\Big\} = \Phi(b) - \Phi(-b) = 2\Phi(b) - 1$$
and if we set this equal to the desired level of confidence, then $b$ is determined. The confidence interval is given by $\mu(1 - bc) < \overline{X} < \mu(1 + bc)$, or
$$\frac{\overline{X}}{1 + bc} < \mu < \frac{\overline{X}}{1 - bc}$$
where $c \to 0$ as $n \to \infty$.
5. A confidence interval of length $L$ corresponds to $|(Y_n/n) - p| < L/2$, an event with probability
$$2\Phi\Big(\frac{L\sqrt n/2}{\sqrt{p(1-p)}}\Big) - 1.$$
Setting this probability equal to the desired confidence level gives an inequality of the form
$$\frac{L\sqrt n/2}{\sqrt{p(1-p)}} > c.$$
As in the text, we can replace $p(1-p)$ by its maximum value 1/4. We find the minimum value of $n$ by squaring both sides.
In the first example in (10.1), we have $L = .02$, $L/2 = .01$ and $c = 1.96$. This problem essentially reproduces the analysis in the text in a more abstract form. Specifying how close to $p$ we want our estimate to be (at the desired level of confidence) is equivalent to specifying the length of the confidence interval.
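The numbers in the first example in (10.1) can be reproduced in a few lines: with $p(1-p)$ replaced by $1/4$, the inequality becomes $L\sqrt n > c$, i.e. $n > (c/L)^2$.

```python
from math import ceil
from statistics import NormalDist

# L = .02, 95% confidence, so c = 1.96; after replacing p(1-p) by 1/4,
# L*sqrt(n) > c gives n > (c/L)^2.
L = 0.02
c = NormalDist().inv_cdf(0.975)   # approximately 1.96
n_min = ceil((c / L) ** 2)
```

This recovers the familiar minimum sample size of 9604.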
Lecture 11
1. Proceed as in (11.1):
$$Z = \frac{\overline{X} - \overline{Y} - (\mu_1 - \mu_2)}{\sqrt{\dfrac{\sigma_1^2}{n} + \dfrac{\sigma_2^2}{m}}}$$
is normal (0,1), and $W = (nS_1^2/\sigma_1^2) + (mS_2^2/\sigma_2^2)$ is $\chi^2(n+m-2)$. Thus $\sqrt{n+m-2}\,Z/\sqrt W$ is $T(n+m-2)$, but the unknown variances cannot be eliminated.
2. If $\sigma_1^2 = c\sigma_2^2$, then
$$\frac{\sigma_1^2}{n} + \frac{\sigma_2^2}{m} = c\sigma_2^2\Big(\frac{1}{n} + \frac{1}{cm}\Big)$$
and
$$\frac{nS_1^2}{\sigma_1^2} + \frac{mS_2^2}{\sigma_2^2} = \frac{nS_1^2 + cmS_2^2}{c\sigma_2^2}.$$
Thus $\sigma_2^2$ can again be eliminated, and confidence intervals can be constructed, assuming $c$ known.
Lecture 12
1. The given test is an LRT and is completely determined by $c$, independent of the particular $\theta > \theta_0$.
2. The likelihood ratio is $L(x) = f_1(x)/f_0(x) = (1/4)/(1/6) = 3/2$ for $x = 1, 2$, and $L(x) = (1/8)/(1/6) = 3/4$ for $x = 3, 4, 5, 6$. If $\lambda < 3/4$, we reject for all $x$, and $\alpha = 1$, $\beta = 0$. If $3/4 < \lambda < 3/2$, we reject for $x = 1, 2$ and accept for $x = 3, 4, 5, 6$, with $\alpha = 1/3$ and $\beta = 1/2$. If $\lambda > 3/2$, we accept for all $x$, with $\alpha = 0$, $\beta = 1$.
For $\alpha = .1$, set $\lambda = 3/2$, accept when $x = 3, 4, 5, 6$, and reject with probability $a$ when $x = 1, 2$. Then $\alpha = (1/3)a = .1$, so $a = .3$, and $\beta = (1/2) + (1/2)(1-a) = .85$.
3. Since $(220 - 200)/10 = 2$, it follows that when $c$ reaches 2, the null hypothesis is accepted. The associated type 1 error probability is $\alpha = 1 - \Phi(2) = 1 - .977 = .023$. Thus the given result is significant even at the significance level .023. If we were to take additional observations, enough to drive the probability of a type 1 error down to .023, we would still reject $H_0$. Thus the p-value is a concise way of conveying a lot of information about the test.
Lecture 13
1. We sum $(X_i - np_i)^2/np_i$, $i = 1, 2, 3$, where the $X_i$ are the observed frequencies and the $np_i = 50, 30, 20$ are the expected frequencies.

2. The expected frequencies are

     A    B    C
1   49  147   98
2   51  153  102

For example, to find the entry in the 2C position, we can multiply the row 2 sum by the column 3 sum and divide by the total number of observations (namely 600) to get $306 \times 200/600 = 102$. The chi-square statistic is
$$\frac{(33-49)^2}{49} + \frac{(147-147)^2}{147} + \frac{(114-98)^2}{98} + \frac{(67-51)^2}{51} + \frac{(153-153)^2}{153} + \frac{(86-102)^2}{102}$$
which is $5.224 + 0 + 2.612 + 5.020 + 0 + 2.510 = 15.366$. There are $(h-1)(k-1) = 1 \times 2 = 2$ degrees of freedom, and $P\{\chi^2(2) > 5.99\} = .05$. Since $15.366 > 5.99$, we reject $H_0$.
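The contingency-table arithmetic above can be recomputed mechanically from the observed counts alone:

```python
# 2x3 contingency table: observed counts for rows 1-2, columns A-C.
observed = [[33, 147, 114],
            [67, 153, 86]]
row = [sum(r) for r in observed]                  # row totals
col = [sum(c) for c in zip(*observed)]            # column totals
total = sum(row)                                  # 600
expected = [[row[i] * col[j] / total for j in range(3)]
            for i in range(2)]
chi2 = sum((observed[i][j] - expected[i][j]) ** 2 / expected[i][j]
           for i in range(2) for j in range(3))
```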
3. The observed frequencies minus the expected frequencies are
$$a - \frac{(a+b)(a+c)}{a+b+c+d} = \frac{ad-bc}{a+b+c+d}, \qquad b - \frac{(a+b)(b+d)}{a+b+c+d} = \frac{bc-ad}{a+b+c+d},$$
$$c - \frac{(a+c)(c+d)}{a+b+c+d} = \frac{bc-ad}{a+b+c+d}, \qquad d - \frac{(c+d)(b+d)}{a+b+c+d} = \frac{ad-bc}{a+b+c+d}.$$
The chi-square statistic is
$$\frac{(ad-bc)^2}{a+b+c+d}\cdot\frac{1}{(a+b)(c+d)(a+c)(b+d)}\Big[(c+d)(b+d) + (c+d)(a+c) + (a+b)(b+d) + (a+b)(a+c)\Big]$$
and the expression in square brackets simplifies to $(a+b+c+d)^2$, and the result follows.
Lecture 14
1. The joint probability function is
$$f(x_1,\dots,x_n) = \prod_{i=1}^n\frac{e^{-\theta}\theta^{x_i}}{x_i!} = \frac{e^{-n\theta}\theta^{u(x)}}{x_1!\cdots x_n!}$$
where $u(x) = \sum_{i=1}^n x_i$, and the factorization theorem applies.

2. Here $I$ is an indicator. We take $g(\theta, u(x)) = A^n(\theta)I[\max x_i < \theta]$ and $h(x) = \prod_{i=1}^n B(x_i)$.
3. $f(x_1,\dots,x_n) = \theta^n(1-\theta)^{u(x)}$, and the factorization theorem applies with $h(x) = 1$.
4. $f(x_1,\dots,x_n) = \theta^{-n}\exp[-(\sum_{i=1}^n x_i)/\theta]$, and the factorization theorem applies with $h(x) = 1$.
5. The joint density has the form
$$f(x_1,\dots,x_n) = c(\theta)^n\prod_{i=1}^n x_i^{\theta}\prod_{i=1}^n(1-x_i)^{\theta}$$
and the factorization theorem applies with $h(x) = 1$.
6. With $u(x) = \prod_{i=1}^n x_i$,
$$f(x_1,\dots,x_n) = \frac{1}{[\Gamma(\alpha)]^n\theta^{n\alpha}}\,u(x)^{\alpha-1}\exp\Big[-\sum_{i=1}^n x_i/\theta\Big]$$
and the factorization theorem applies with $h(x) = \exp[-\sum_i x_i/\theta]$ and $g(\alpha, u(x))$ equal to the remaining factors.
7. We can drop the subscript $\theta$ since $Y$ is sufficient, and we can replace the $X_i$ by the $X_i'$ by definition of B's experiment; the result then follows as desired.
Lecture 17
1. Take u(X) = X.
2. The joint density is
$$f(x_1,\dots,x_n) = \exp\Big[-\sum_{i=1}^n(x_i - \theta)\Big]I[\min x_i > \theta]$$
so $\min X_i - \theta$ has density $ne^{-nz}$, $z > 0$, and
$$E[\min X_i] = \int_0^\infty z\,n\exp(-nz)\,dz + \theta = \frac{1}{n} + \theta.$$
7. (a) $E[E(I|Y)] = E(I) = P\{X_1 \le 1\}$, and the result follows by completeness.
(b) We compute
$$P\{X_1 = r \mid X_1 + \cdots + X_n = s\} = \frac{P\{X_1 = r,\ X_2 + \cdots + X_n = s - r\}}{P\{X_1 + \cdots + X_n = s\}}.$$
The numerator is
$$\frac{e^{-\lambda}\lambda^r}{r!}\cdot\frac{e^{-(n-1)\lambda}[(n-1)\lambda]^{s-r}}{(s-r)!}$$
and the denominator is
$$\frac{e^{-n\lambda}(n\lambda)^s}{s!},$$
so given $Y = X_1 + \cdots + X_n = s$, $X_1$ is binomial with parameters $s$ and $1/n$. Hence
$$E(I \mid Y) = P\{X_1 \le 1 \mid Y\} = \Big(\frac{n-1}{n}\Big)^Y + Y\,\frac{1}{n}\Big(\frac{n-1}{n}\Big)^{Y-1} = \Big(\frac{n-1}{n}\Big)^Y\Big(1 + \frac{Y}{n-1}\Big).$$
Since
$$\frac{1}{2\theta_2}\sum_{i=1}^n(x_i - \theta_1)^2 = \frac{1}{2\theta_2}\sum_{i=1}^n x_i^2 - \frac{\theta_1}{\theta_2}\sum_{i=1}^n x_i + \frac{n\theta_1^2}{2\theta_2},$$
the factorization theorem shows that $(\sum_i X_i, \sum_i X_i^2)$ is sufficient for $(\theta_1, \theta_2)$.
Lecture 18
1. By (18.4), the numerator of $\delta(x)$ is
$$\int_0^1\binom{n}{x}\theta\,\theta^{r-1}(1-\theta)^{s-1}\theta^x(1-\theta)^{n-x}\,d\theta$$
and the denominator is
$$\int_0^1\binom{n}{x}\theta^{r-1}(1-\theta)^{s-1}\theta^x(1-\theta)^{n-x}\,d\theta.$$
Thus $\delta(x)$ is
$$\frac{B(r+x+1,\,n-x+s)}{B(r+x,\,n-x+s)} = \frac{\Gamma(r+x+1)}{\Gamma(r+x)}\cdot\frac{\Gamma(r+s+n)}{\Gamma(r+s+n+1)} = \frac{r+x}{r+s+n}.$$
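The beta-function ratio can be verified numerically via the gamma function (the parameter values below are arbitrary illustration choices):

```python
from math import gamma

# Check B(r+x+1, n-x+s)/B(r+x, n-x+s) = (r+x)/(r+s+n).
r, s, n, x = 2.0, 3.0, 10, 4

def beta_fn(a, b):
    return gamma(a) * gamma(b) / gamma(a + b)

delta = beta_fn(r + x + 1, n - x + s) / beta_fn(r + x, n - x + s)
closed_form = (r + x) / (r + s + n)   # = 0.4 for these values
```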
2. The risk function is
$$R_\delta(\theta) = E\Big[\Big(\frac{r+X}{r+s+n} - \theta\Big)^2\Big] = \frac{1}{(r+s+n)^2}E[(X - n\theta + r - r\theta - s\theta)^2]$$
with $E(X - n\theta) = 0$, $E[(X - n\theta)^2] = \operatorname{Var} X = n\theta(1-\theta)$. Thus
$$R_\delta(\theta) = \frac{1}{(r+s+n)^2}[n\theta(1-\theta) + (r - r\theta - s\theta)^2].$$
The quantity in brackets is
$$n\theta - n\theta^2 + r^2 + r^2\theta^2 + s^2\theta^2 - 2r^2\theta - 2rs\theta + 2rs\theta^2$$
which simplifies to
$$((r+s)^2 - n)\theta^2 + (n - 2r(r+s))\theta + r^2$$
and the result follows.
3. If $r = s = \sqrt n/2$, then $(r+s)^2 - n = 0$ and $n - 2r(r+s) = 0$, so
$$R_\delta(\theta) = \frac{r^2}{(r+s+n)^2}.$$
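The constant-risk claim is easy to check numerically with the risk formula from Problem 2 (here $n = 16$ is an arbitrary illustration choice, giving $r = s = 2$):

```python
from math import sqrt

# With r = s = sqrt(n)/2 the risk
# R(theta) = [n*theta*(1-theta) + (r - r*theta - s*theta)^2]/(r+s+n)^2
# should not depend on theta.
n = 16
r = s = sqrt(n) / 2   # = 2

def risk(theta):
    return ((n * theta * (1 - theta) + (r - r * theta - s * theta) ** 2)
            / (r + s + n) ** 2)

values = [risk(t / 10) for t in range(11)]
constant = r ** 2 / (r + s + n) ** 2   # = 4/400 = .01
```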
4. The average loss using $\delta$ is $B(\delta) = \int h(\theta)R_\delta(\theta)\,d\theta$. If $\delta'(x)$ has a smaller maximum risk than $\delta(x)$, then since $R_\delta$ is constant, we have $R_{\delta'}(\theta) < R_\delta(\theta)$ for all $\theta$. Therefore $B(\delta') < B(\delta)$, contradicting the fact that $\delta$ is a Bayes estimate.
Lecture 20
1.
$$\operatorname{Var}(XY) = E[(XY)^2] - (EX\,EY)^2 = E(X^2)E(Y^2) - (EX)^2(EY)^2$$
$$= (\sigma_X^2 + \mu_X^2)(\sigma_Y^2 + \mu_Y^2) - \mu_X^2\mu_Y^2 = \sigma_X^2\sigma_Y^2 + \mu_X^2\sigma_Y^2 + \mu_Y^2\sigma_X^2.$$
2.
$$\operatorname{Var}(aX + bY) = a^2\sigma_X^2 + b^2\sigma_Y^2 + 2ab\rho\sigma_X\sigma_Y.$$
3.
$$\operatorname{Cov}(X, X+Y) = \operatorname{Cov}(X,X) + \operatorname{Cov}(X,Y) = \operatorname{Var} X + 0 = \sigma_X^2.$$
4. By Problem 3,
$$\rho_{X,X+Y} = \frac{\sigma_X^2}{\sigma_X\sigma_{X+Y}} = \frac{\sigma_X}{\sqrt{\sigma_X^2 + \sigma_Y^2}}.$$
5. With $X$ and $Y$ independent,
$$\operatorname{Cov}(XY, X) = E(X^2Y) - E(XY)E(X) = (\sigma_X^2 + \mu_X^2)\mu_Y - \mu_X^2\mu_Y = \sigma_X^2\mu_Y.$$
6. We can assume without loss of generality that $E(X^2) > 0$ and $E(Y^2) > 0$. We will have equality iff the discriminant $b^2 - 4ac = 0$, which holds iff $h(\lambda) = 0$ for some $\lambda$. Equivalently, $\lambda X + Y = 0$ for some $\lambda$. We conclude that equality holds if and only if $X$ and $Y$ are linearly dependent.
Lecture 21
1. Let $Y_i = X_i - E(X_i)$; then $E[(\sum_{i=1}^n t_iY_i)^2] \ge 0$ for all $t$. But this expectation is
$$E\Big[\sum_i t_iY_i\sum_j t_jY_j\Big] = \sum_{i,j} t_i\sigma_{ij}t_j = t'Kt.$$

2.
$$E(e^{tY}) = E\exp\Big(\sum_{i=1}^n c_itX_i\Big) = M_X(c_1t,\dots,c_nt) = \exp\Big(t\sum_{i=1}^n c_i\mu_i\Big)\exp\Big(\frac{1}{2}t^2\sum_{i,j=1}^n c_ia_{ij}c_j\Big),$$
which is the moment-generating function of a normal random variable with mean $\sum_i c_i\mu_i$ and variance $\sum_{i,j} c_ia_{ij}c_j$.
Lecture 22
1. If $y$ is the best estimate of $Y$ given $X = x$, then
$$y - \mu_Y = \rho\frac{\sigma_Y}{\sigma_X}(x - \mu_X)$$
and [see (20.1)] the minimum mean square error is $\sigma_Y^2(1-\rho^2)$, which in this case is 28. We are given that $\rho\sigma_Y/\sigma_X = 3$, so $\rho\sigma_Y = 3\sigma_X = 3 \cdot 2 = 6$ and $\rho^2 = 36/\sigma_Y^2$. Therefore
$$\sigma_Y^2(1 - \rho^2) = \sigma_Y^2 - 36 = 28, \quad \sigma_Y = 8, \quad \rho^2 = \frac{36}{64}, \quad \rho = .75.$$
Finally, $y = \mu_Y + 3x - 3\mu_X = \mu_Y + 3x + 3 = 3x + 7$, so $\mu_Y = 4$.
2. The bivariate normal density has exponential-family form, so
$$\Big(\sum_i X_i,\ \sum_i Y_i,\ \sum_i X_i^2,\ \sum_i Y_i^2,\ \sum_i X_iY_i\Big)$$
is a complete sufficient statistic for $\theta = (\mu_X, \mu_Y, \sigma_X^2, \sigma_Y^2, \rho)$. Note also that any statistic in one-to-one correspondence with this one is also complete and sufficient.
Lecture 23
1. The probability of any event is found by integrating the density over the set defined by the event. Thus
$$P\{a \le f(X) \le b\} = \int_A f(x)\,dx, \quad A = \{x : a \le f(x) \le b\}.$$
2. Bernoulli:
$$\frac{\partial}{\partial\theta}\ln f(x) = \frac{\partial}{\partial\theta}[x\ln\theta + (1-x)\ln(1-\theta)] = \frac{x}{\theta} - \frac{1-x}{1-\theta}$$
$$\frac{\partial^2}{\partial\theta^2}\ln f(x) = -\frac{x}{\theta^2} - \frac{1-x}{(1-\theta)^2}$$
$$I(\theta) = E\Big[\frac{X}{\theta^2} + \frac{1-X}{(1-\theta)^2}\Big] = \frac{1}{\theta} + \frac{1}{1-\theta} = \frac{1}{\theta(1-\theta)}$$
$$\operatorname{Var} Y \ge \frac{1}{nI(\theta)} = \frac{\theta(1-\theta)}{n}.$$
But
$$\operatorname{Var}\overline{X} = \frac{1}{n^2}\operatorname{Var}[\text{binomial}(n,\theta)] = \frac{n\theta(1-\theta)}{n^2} = \frac{\theta(1-\theta)}{n}$$
so $\overline{X}$ is a UMVUE of $\theta$.
Normal:
$$f(x) = \frac{1}{\sigma\sqrt{2\pi}}\exp[-(x-\theta)^2/2\sigma^2]$$
$$\frac{\partial}{\partial\theta}\ln f(x) = -\frac{\partial}{\partial\theta}\frac{(x-\theta)^2}{2\sigma^2} = \frac{x-\theta}{\sigma^2}$$
$$\frac{\partial^2}{\partial\theta^2}\ln f(x) = -\frac{1}{\sigma^2}, \quad I(\theta) = \frac{1}{\sigma^2}, \quad \operatorname{Var} Y \ge \frac{\sigma^2}{n} = \operatorname{Var}\overline{X}$$
so $\overline{X}$ is a UMVUE of $\theta$.
Poisson:
$$\frac{\partial}{\partial\theta}\ln f(x) = \frac{\partial}{\partial\theta}(-\theta + x\ln\theta) = -1 + \frac{x}{\theta}$$
$$\frac{\partial^2}{\partial\theta^2}\ln f(x) = -\frac{x}{\theta^2}, \quad I(\theta) = E\Big[\frac{X}{\theta^2}\Big] = \frac{\theta}{\theta^2} = \frac{1}{\theta}$$
$$\operatorname{Var} Y \ge \frac{1}{nI(\theta)} = \frac{\theta}{n} = \operatorname{Var}\overline{X}$$
so $\overline{X}$ is a UMVUE of $\theta$.
Lecture 25
1.
$$K(p) = \sum_{k=0}^{c}\binom{n}{k}p^k(1-p)^{n-k}.$$

2.
$$M_W(t) = \frac{1}{8}\big(e^{6t} + e^{4t} + e^{2t} + 1 + 1 + e^{-2t} + e^{-4t} + e^{-6t}\big).$$
Therefore $P\{W = k\} = 1/8$ for $k = -6, -4, -2, 2, 4, 6$, $P\{W = 0\} = 1/4$, and $P\{W = k\} = 0$ for other values of $k$.