Problem 1
Let $X_1, \ldots, X_n$ be a random sample (i.i.d.) of size $n$ from a population with distribution $f(x)$ with mean $\mu$ and variance $\sigma^2$ (both finite). Consider a statistic formed by taking a linear combination of the $X_i$'s, $c_1 X_1 + \ldots + c_n X_n$, where $c_i \geq 0$. For example, the sample mean, $\bar{X}$, is the linear combination with $c_i = \frac{1}{n}$ for all $i$.
a. Under what condition(s) is $c_1 X_1 + \ldots + c_n X_n$ an unbiased estimator of $\mu$?
c. Find the linear combination (derive the $c_i$'s) that minimizes the variance without violating the condition(s) you derived in part a (i.e. find the most efficient linear unbiased estimator of $\mu$).
Problem 2
Suppose that you have a sample of size $n$, $X_1, \ldots, X_n$, from a population with mean $\mu$ and variance $\sigma^2$. You know that your draws from the population, the $X_i$, are identically distributed; however, they are not independent and have $\mathrm{Cov}(X_i, X_j) = \rho\sigma^2 > 0$ for all $i \neq j$.
a. What is $E(\bar{X})$? Is the sample mean an unbiased estimator of $\mu$?
b. What is the MSE of $\bar{X}$? You can use the fact that $\mathrm{Var}\left(\sum_i X_i\right) = \sum_i \mathrm{Var}(X_i) + \sum_i \sum_{j \neq i} \mathrm{Cov}(X_i, X_j)$.
14.30 PROBLEM SET 7
Problem 3
Suppose that $Z_1, Z_2, \ldots, Z_n$ is a random sample from an exponential distribution with parameter $\lambda$, so
$$f(x) = \begin{cases} \lambda e^{-\lambda x} & x \geq 0 \\ 0 & \text{elsewhere} \end{cases}$$
a. Find the Method of Moments estimator for $\lambda$.
Problem 4
Assume a sample of continuous random variables $X_1, X_2, \ldots, X_n$, where $E[X_i] = \mu$ and $\mathrm{Var}[X_i] = \sigma^2 > 0$. Consider the following estimators: $\hat{\theta}_{1,n} = X_n$ and $\hat{\theta}_{2,n} = \frac{1}{n+1}\sum_{i=1}^{n} X_i$.
a. Are $\hat{\theta}_{1,n}$ and $\hat{\theta}_{2,n}$ unbiased?
d. For what values of $n$ does $\hat{\theta}_{2,n}$ have a lower mean square error than $\hat{\theta}_{1,n}$?
Problem 5
You have a sample of size $n$, $X_1, \ldots, X_n$, from a $U[\theta_l, \theta_u]$ distribution.
a. Assume that it is known that $\theta_l = 0$. Find the MM and MLE estimators of $\theta_u$.
Problem 6
Let $X_1, \ldots, X_n$ be a random sample of size $n$ from a $U[0, \theta]$ population. Consider the following estimators of $\theta$:
$$\hat{\theta}_1 = k_1 X_2, \qquad \hat{\theta}_2 = k_2 \bar{X}, \qquad \hat{\theta}_3 = k_3 X_{(n)}$$
a. Find the cdf of $X_{(n)}$.
b. Find the pdf of $X_{(n)}$.
c. Find $k_1$, $k_2$, and $k_3$ so that each estimator is unbiased, and compare $\mathrm{Var}(\hat{k}_2 \bar{X})$ and $\mathrm{Var}(\hat{k}_3 X_{(n)})$.
14.30 PROBLEM SET 7 SUGGESTED ANSWERS
Problem 1
a. For the statistic to be unbiased we must have
$$\mu = E\left(\sum_{i=1}^{n} c_i X_i\right) = \sum_{i=1}^{n} c_i E(X_i) = \mu \sum_{i=1}^{n} c_i$$
So the statistic will be unbiased if and only if $\sum_{i=1}^{n} c_i = 1$.
b. Given that all of the $X_i$ are independent, the variance of the statistic is
$$\mathrm{Var}\left(\sum_{i=1}^{n} c_i X_i\right) = \sum_{i=1}^{n} c_i^2 \mathrm{Var}(X_i) = \sigma^2 \sum_{i=1}^{n} c_i^2$$
c. We want to minimize
$$\sigma^2 \sum_{i=1}^{n} c_i^2$$
subject to the constraint
$$\sum_{i=1}^{n} c_i = 1$$
The Lagrangian of this optimization problem is
$$\mathcal{L} = \sigma^2 \sum_{i=1}^{n} c_i^2 - \lambda\left(\sum_{i=1}^{n} c_i - 1\right)$$
The first-order condition for each $c_i$ is $2\sigma^2 c_i - \lambda = 0$, so $c_i = \frac{\lambda}{2\sigma^2}$ is the same for every $i$; imposing the constraint then gives $c_i = \frac{1}{n}$. The most efficient linear unbiased estimator of $\mu$ is therefore the sample mean.
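As a quick numeric sketch of this result (not part of the original answer key; the function name is mine), the objective $\sigma^2 \sum_i c_i^2$ can be evaluated at the equal weights $c_i = 1/n$ and compared against other weight vectors that also sum to one:

```python
def lincomb_variance(c, sigma2=1.0):
    """Variance of sum(c_i * X_i) for independent X_i with common variance sigma2.

    Uses Var(sum c_i X_i) = sigma2 * sum c_i^2, from part b.
    """
    return sigma2 * sum(ci ** 2 for ci in c)

n = 5
equal = [1.0 / n] * n  # the sample mean's weights
# other weight vectors that also sum to 1 (so each is still unbiased)
others = [[0.5, 0.5, 0.0, 0.0, 0.0],
          [0.4, 0.3, 0.1, 0.1, 0.1]]
best = lincomb_variance(equal)  # equals sigma2 / n
assert all(lincomb_variance(c) > best for c in others)
print(best)
```

Any unequal unbiased weights strictly increase the variance, consistent with the Lagrangian solution.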
Problem 2
a.
$$E(\bar{X}) = E\left(\frac{1}{n}\sum_{i=1}^{n} X_i\right) = \frac{1}{n}\sum_{i=1}^{n} E(X_i) = \frac{1}{n}\sum_{i=1}^{n} \mu = \mu$$
So $\bar{X}$ is an unbiased estimator of $\mu$.
b.
$$\begin{aligned}
MSE(\bar{X}) &= \mathrm{Bias}(\bar{X})^2 + \mathrm{Var}(\bar{X}) \\
&= 0 + \mathrm{Var}\left(\frac{1}{n}\sum_{i=1}^{n} X_i\right) \\
&= \frac{1}{n^2}\,\mathrm{Var}\left(\sum_{i=1}^{n} X_i\right) \\
&= \frac{1}{n^2}\left(\sum_i \mathrm{Var}(X_i) + \sum_i \sum_{j \neq i} \mathrm{Cov}(X_i, X_j)\right) \\
&= \frac{1}{n^2}\left(\sum_i \sigma^2 + \sum_i \sum_{j \neq i} \rho\sigma^2\right) \\
&= \frac{\sigma^2}{n} + \frac{(n-1)\rho\sigma^2}{n}
\end{aligned}$$
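The formula above can be checked by simulation (a sketch, not part of the original answers; the construction of the equicorrelated draws and all names are mine). Writing $X_i = \mu + \sqrt{\rho\sigma^2}\,W + \sqrt{(1-\rho)\sigma^2}\,\varepsilon_i$ with $W, \varepsilon_i$ i.i.d. standard normal gives exactly $\mathrm{Var}(X_i) = \sigma^2$ and $\mathrm{Cov}(X_i, X_j) = \rho\sigma^2$:

```python
import random

def sample_mean_mse(mu=0.0, sigma2=1.0, rho=0.5, n=10, reps=20000, seed=0):
    """Monte Carlo MSE of the sample mean when Cov(X_i, X_j) = rho * sigma2.

    X_i = mu + a*W + b*e_i with a^2 = rho*sigma2 and b^2 = (1-rho)*sigma2,
    so Var(X_i) = sigma2 and Cov(X_i, X_j) = rho*sigma2 for i != j.
    """
    rng = random.Random(seed)
    a = (rho * sigma2) ** 0.5
    b = ((1 - rho) * sigma2) ** 0.5
    total = 0.0
    for _ in range(reps):
        w = rng.gauss(0, 1)  # the common shock shared by all n draws
        xbar = sum(mu + a * w + b * rng.gauss(0, 1) for _ in range(n)) / n
        total += (xbar - mu) ** 2
    return total / reps

n, sigma2, rho = 10, 1.0, 0.5
theory = sigma2 / n + (n - 1) * rho * sigma2 / n  # formula from part b
print(sample_mean_mse(n=n, sigma2=sigma2, rho=rho), theory)
```

Note that the MSE does not vanish as $n \to \infty$: the $(n-1)\rho\sigma^2/n$ term converges to $\rho\sigma^2$, which is why correlated sampling breaks the usual consistency of $\bar{X}$.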
Problem 3
a. We know that the first moment of $Z$ is
$$E(Z) = \int_0^\infty z f(z)\, dz = \int_0^\infty z \lambda e^{-\lambda z}\, dz = \frac{1}{\lambda}$$
Setting this equal to the sample mean and solving gives
$$\hat{\lambda}_{MM} = \frac{n}{\sum_{i=1}^{n} z_i} = \frac{1}{\bar{Z}}$$
Note that if we had instead used the version of the exponential pdf that replaces $\lambda$ with $\frac{1}{\beta}$, and estimated the parameter by either method, we would have found $\hat{\beta} = \bar{Z}$, which is both unbiased (since $E(Z) = \frac{1}{\lambda} = \beta$) and consistent (by the law of large numbers).
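A minimal sketch of the MM estimator in code (function and variable names are mine, not from the answer key):

```python
import random

def lambda_mm(z):
    """Method of Moments estimator for an Exponential(lambda) sample:
    set 1/lambda equal to the sample mean and solve, giving n / sum(z)."""
    return len(z) / sum(z)

# simulate a large sample from Exponential(lambda = 2) and recover lambda
rng = random.Random(1)
true_lam = 2.0
z = [rng.expovariate(true_lam) for _ in range(100000)]
print(lambda_mm(z))  # close to true_lam for large n, by the LLN
```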
Problem 4
a. $\mathrm{Bias}(\hat{\theta}_{1,n}) = E(\hat{\theta}_{1,n}) - \mu = E[X_n] - \mu = \mu - \mu = 0$, so $\hat{\theta}_{1,n}$ is unbiased for every $n$.
$\mathrm{Bias}(\hat{\theta}_{2,n}) = E(\hat{\theta}_{2,n}) - \mu = E\left[\frac{1}{n+1}\sum_{i=1}^{n} X_i\right] - \mu = \frac{1}{n+1}\sum_{i=1}^{n} E[X_i] - \mu = \frac{n}{n+1}\mu - \mu = -\frac{\mu}{n+1} \neq 0$ (for $\mu \neq 0$), so $\hat{\theta}_{2,n}$ is biased for every $n$.
b. $\lim_{n \to \infty} P\left(\left|\hat{\theta}_{1,n} - \mu\right| < \varepsilon\right) = \lim_{n \to \infty} P(|X_n - \mu| < \varepsilon) = P(\mu - \varepsilon < X_n < \mu + \varepsilon) = P(X_n \leq \mu + \varepsilon) - P(X_n \leq \mu - \varepsilon)$ (because the random variables are continuous) $= F(\mu + \varepsilon) - F(\mu - \varepsilon) \neq 1$ for some $\varepsilon > 0$. Thus $\hat{\theta}_{1,n}$ is not consistent.
As for $\hat{\theta}_{2,n}$: $\mathrm{Var}(\hat{\theta}_{2,n}) = \mathrm{Var}\left(\frac{1}{n+1}\sum_{i=1}^{n} X_i\right) = \left(\frac{1}{n+1}\right)^2 \sum_{i=1}^{n} \mathrm{Var}[X_i] = \left(\frac{1}{n+1}\right)^2 n\sigma^2$, so
$$\lim_{n \to \infty} MSE(\hat{\theta}_{2,n}) = \lim_{n \to \infty} \mathrm{Var}(\hat{\theta}_{2,n}) + \lim_{n \to \infty} \mathrm{Bias}(\hat{\theta}_{2,n})^2 = 0,$$
proving that $\hat{\theta}_{2,n}$ is consistent.
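The part-d comparison can be sketched from the bias and variance derived above (the closed forms below are assembled by me from parts a and b, not copied from the answer key): $MSE(\hat{\theta}_{1,n}) = \mathrm{Var}(X_n) = \sigma^2$, while $MSE(\hat{\theta}_{2,n}) = \frac{n\sigma^2}{(n+1)^2} + \frac{\mu^2}{(n+1)^2}$.

```python
def mse1(mu, sigma2, n):
    """MSE of theta_hat_{1,n} = X_n: unbiased, so MSE = Var(X_n) = sigma2."""
    return sigma2

def mse2(mu, sigma2, n):
    """MSE of theta_hat_{2,n} = (1/(n+1)) * sum(X_i):
    Var = n*sigma2/(n+1)^2 and Bias = -mu/(n+1)."""
    return n * sigma2 / (n + 1) ** 2 + (mu / (n + 1)) ** 2

# With mu = 0 the biased estimator wins for every n, since n/(n+1)^2 < 1;
# more generally mse2 < mse1 whenever mu^2 < (n^2 + n + 1) * sigma2.
assert all(mse2(0.0, 1.0, n) < mse1(0.0, 1.0, n) for n in range(1, 100))
```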
Problem 5
a. MM: Since we know that $\theta_l = 0$, we only need to use the first moment equation:
$$E(X) = \frac{0 + \theta_u}{2}$$
Then, the MM estimator is obtained by solving
$$\frac{\hat{\theta}_u}{2} = \frac{1}{n}\sum_{i=1}^{n} X_i = \bar{X} \quad \Longrightarrow \quad \hat{\theta}_u = \frac{2}{n}\sum_{i=1}^{n} X_i = 2\bar{X}$$
Because the bias is zero and the variance approaches zero as $n$ gets large, the MSE also approaches zero, and the estimator is consistent.
For the MLE estimator, $\hat{\theta}_u = X_{(n)}$, we use the fact (shown in Problem 6) that, for this estimator, $f(x) = \frac{n x^{n-1}}{\theta_u^n}$. Then we can see that the MLE estimator is biased:
$$E(\hat{\theta}_u) = \int_0^{\theta_u} x \cdot \frac{n x^{n-1}}{\theta_u^n}\, dx = \frac{n}{n+1}\theta_u$$
However, as $n$ gets large, $\frac{n}{n+1} \to 1$, so the bias approaches zero, and we know that in general, MLE estimators are consistent.
For the MM estimators in part b, finding the expected value is not particularly tractable nor particularly interesting, so I will retract this portion of the question.
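The two estimators can be compared by simulation (a sketch of my own, not from the answer key): the MM estimator $2\bar{X}$ averages to $\theta_u$, while the MLE $X_{(n)}$ averages to $\frac{n}{n+1}\theta_u$.

```python
import random

def mm_estimate(x):
    """Method of Moments estimator: theta_u_hat = 2 * sample mean."""
    return 2.0 * sum(x) / len(x)

def mle_estimate(x):
    """MLE: theta_u_hat = max(x); biased downward by the factor n/(n+1)."""
    return max(x)

rng = random.Random(0)
theta_u, n, reps = 1.0, 20, 5000
mm_avg = sum(mm_estimate([rng.uniform(0, theta_u) for _ in range(n)])
             for _ in range(reps)) / reps
mle_avg = sum(mle_estimate([rng.uniform(0, theta_u) for _ in range(n)])
              for _ in range(reps)) / reps
print(mm_avg, mle_avg)  # mm near theta_u = 1; mle near n/(n+1) = 20/21
```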
Problem 6
Note that for a $U[0, \theta]$ distribution, $f(x) = \frac{1}{\theta}$, $F(x) = \frac{x}{\theta}$, $E(X) = \frac{\theta}{2}$, and $\mathrm{Var}(X) = \frac{\theta^2}{12}$.
a. It will be important that the sample draws are independent (and identically distributed). Then,
$$\Pr\left(X_{(n)} \leq x\right) = \Pr\left(\max_i X_i \leq x\right) = \prod_{i=1}^{n} \Pr(X_i \leq x) = F(x)^n = \left(\frac{x}{\theta}\right)^n$$
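This cdf is easy to verify empirically (a sketch; the function name and parameters are mine):

```python
import random

def empirical_cdf_max(x0, theta=1.0, n=5, reps=20000, seed=7):
    """Fraction of simulated samples whose maximum is <= x0.

    Should match the derived cdf (x0 / theta) ** n.
    """
    rng = random.Random(seed)
    hits = sum(max(rng.uniform(0, theta) for _ in range(n)) <= x0
               for _ in range(reps))
    return hits / reps

print(empirical_cdf_max(0.8), 0.8 ** 5)  # both near 0.33
```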
b. In part a, we found the cdf of $X_{(n)}$, so the pdf is just the derivative of this:
$$f_{(n)}(x) = \frac{d}{dx} F_{(n)}(x) = \frac{d}{dx}\left(\frac{x}{\theta}\right)^n = \frac{n x^{n-1}}{\theta^n}$$
c. Setting the expectation of each estimator equal to $\theta$:
$$E(\hat{k}_1 X_2) = \theta \;\Rightarrow\; \hat{k}_1 E(X_2) = \theta \;\Rightarrow\; \hat{k}_1 \frac{\theta}{2} = \theta \;\Rightarrow\; \hat{k}_1 = 2$$
$$E(\hat{k}_2 \bar{X}) = \theta \;\Rightarrow\; \hat{k}_2 E(\bar{X}) = \theta \;\Rightarrow\; \hat{k}_2 \frac{\theta}{2} = \theta \;\Rightarrow\; \hat{k}_2 = 2$$
$$E(\hat{k}_3 X_{(n)}) = \theta \;\Rightarrow\; \hat{k}_3 \int_0^{\theta} x \frac{n x^{n-1}}{\theta^n}\, dx = \theta \;\Rightarrow\; \hat{k}_3 \frac{n}{n+1}\theta = \theta \;\Rightarrow\; \hat{k}_3 = \frac{n+1}{n}$$
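The bias correction $\hat{k}_3 = \frac{n+1}{n}$ can be verified by simulation (a sketch of mine, not from the answer key):

```python
import random

def corrected_max(x):
    """theta_hat_3 = k3 * X_(n) with k3 = (n+1)/n, the unbiased correction
    for the sample maximum of a U[0, theta] sample."""
    n = len(x)
    return (n + 1) / n * max(x)

rng = random.Random(42)
theta, n, reps = 3.0, 10, 20000
avg = sum(corrected_max([rng.uniform(0, theta) for _ in range(n)])
          for _ in range(reps)) / reps
print(avg)  # close to theta = 3.0, confirming unbiasedness
```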