
14.30 PROBLEM SET 7

TA: Tonja Bowen Bishop

Due: Tuesday, April 25, 2006

Note: The first three problems are required, and the remaining three are
practice problems. If you choose to do the practice problems now, you will
receive feedback from the grader. Alternatively, you may use them later in
the course to study for exams. Credit may not be awarded for solutions
that do not use methods discussed in class.

Problem 1
Let X_1, ..., X_n be a random sample (i.i.d.) of size n from a population with
distribution f(x) with mean \mu and variance \sigma^2 (both finite). Consider a
statistic formed by taking a linear combination of the X_i's, c_1 X_1 + ... + c_n X_n,
where c_i \geq 0. For example, the sample mean, \bar{X}, is the linear combination
with c_i = 1/n for all i.
a. Under what condition(s) is c_1 X_1 + ... + c_n X_n an unbiased estimator
of \mu?

b. What is the variance of c_1 X_1 + ... + c_n X_n?

c. Find the linear combination (derive the c_i's) that minimizes the
variance without violating the condition(s) you derived in part a (i.e., find
the most efficient linear unbiased estimator of \mu).

Problem 2
Suppose that you have a sample of size n, X_1, ..., X_n, from a population
with mean \mu and variance \sigma^2. You know that your draws from the
population, the X_i, are identically distributed; however, they are not independent
and have Cov(X_i, X_j) = \rho \sigma^2 > 0 for all i \neq j.
a. What is E(\bar{X})? Is the sample mean an unbiased estimator of \mu?

b. What is the MSE of \bar{X}? You can use the fact that

Var\left( \sum_i X_i \right) = \sum_i Var(X_i) + \sum_i \sum_{j \neq i} Cov(X_i, X_j)

c. Is the sample mean a consistent estimator of \mu? Would it be
consistent if \rho = 0?

Problem 3
Suppose that Z_1, Z_2, ..., Z_n is a random sample from an exponential
distribution with parameter \lambda, so

f(x) = \lambda e^{-\lambda x} for x \geq 0, and f(x) = 0 elsewhere.

a. Find the Method of Moments estimator for \lambda.

b. Find the Maximum Likelihood estimator for \lambda.

c. Find the Maximum Likelihood estimator for \sqrt{\lambda}.

d. Is the estimator in part b unbiased? Is it consistent?

Problem 4
Assume a sample of continuous random variables X_1, X_2, ..., X_n, where
E[X_i] = \mu and Var[X_i] = \sigma^2 > 0. Consider the following estimators:

\hat{b}_{1,n} = X_n,    \hat{b}_{2,n} = \frac{1}{n+1} \sum_{i=1}^n X_i

a. Are \hat{b}_{1,n} and \hat{b}_{2,n} unbiased?

b. Are \hat{b}_{1,n} and \hat{b}_{2,n} consistent?

c. What do you conclude about the relation between unbiased and
consistent estimators?

d. For what values of n does \hat{b}_{2,n} have a lower mean square error than
\hat{b}_{1,n}?

Problem 5
You have a sample of size n, X_1, ..., X_n, from a U[\theta_l, \theta_u] distribution.
a. Assume that it is known that \theta_l = 0. Find the MM and MLE
estimators of \theta_u.

b. Now assume that both \theta_l and \theta_u are unknown. Find the MM
estimators of \theta_l and \theta_u.

c. Determine whether the estimators in parts a and b are unbiased
and consistent.

Problem 6
Let X_1, ..., X_n be a random sample of size n from a U[0, \theta] population.
Consider the following estimators of \theta:

\hat{\theta}_1 = k_1 X_2
\hat{\theta}_2 = k_2 \bar{X}
\hat{\theta}_3 = k_3 X_{(n)}

where the k_i are constants and

X_{(n)} = \max(X_1, ..., X_n)

X_{(n)} is known as the nth order statistic.
a. Calculate Pr(X_{(n)} \leq x), given our knowledge of the population
distribution.

b. Calculate the pdf of X(n) .

c. Choose k_1, k_2, and k_3 such that all three estimators are unbiased;
call these values \hat{k}_1, \hat{k}_2, and \hat{k}_3 (in other words, solve E[\hat{k}_1 X_2] = \theta, and so
forth).

d. Calculate the following variances and compare them:

Var(\hat{k}_1 X_2),    Var(\hat{k}_2 \bar{X}),    Var(\hat{k}_3 X_{(n)})
14.30 PROBLEM SET 7 SUGGESTED ANSWERS

TA: Tonja Bowen Bishop

Problem 1
a. For the statistic to be unbiased we must have

\mu = E\left( \sum_{i=1}^n c_i X_i \right) = \sum_{i=1}^n c_i E(X_i) = \mu \sum_{i=1}^n c_i

So the statistic will be unbiased if and only if \sum_{i=1}^n c_i = 1.

b. Given that all of the X_i are independent, the variance of the
statistic is

Var\left( \sum_{i=1}^n c_i X_i \right) = \sum_{i=1}^n c_i^2 Var(X_i) = \sigma^2 \sum_{i=1}^n c_i^2

c. We want to minimize

\sigma^2 \sum_{i=1}^n c_i^2

subject to the constraint

\sum_{i=1}^n c_i = 1

The Lagrangian of this optimization problem is

L = \sigma^2 \sum_{i=1}^n c_i^2 - \lambda \left( \sum_{i=1}^n c_i - 1 \right)

And we have the first-order conditions

\frac{\partial L}{\partial c_i} = 2 \sigma^2 c_i - \lambda = 0  \Rightarrow  c_i = \frac{\lambda}{2 \sigma^2}

This implies that all of the c_i are the same, that is, c_1 = c_2 = ... = c_n = c.
We can then solve for this common value c by substituting back into the
constraint \sum_{i=1}^n c_i = 1, which implies that c = 1/n.
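The closed-form result can be spot-checked numerically. The sketch below (all names hypothetical) compares the Monte Carlo variance of the equal-weight combination against another weight vector that also sums to one, using i.i.d. standard-normal draws for simplicity:

```python
import random

# Monte Carlo spot-check (hypothetical sketch): among weight vectors that
# sum to 1, the equal weights c_i = 1/n should give the smallest variance.
random.seed(0)

n, reps = 10, 20000
equal_w = [1.0 / n] * n
lopsided_w = [0.5] + [0.5 / (n - 1)] * (n - 1)  # also sums to 1, so still unbiased

def simulated_variance(weights):
    """Monte Carlo variance of sum_i c_i X_i with X_i i.i.d. N(0, 1)."""
    draws = [sum(c * random.gauss(0, 1) for c in weights) for _ in range(reps)]
    m = sum(draws) / reps
    return sum((d - m) ** 2 for d in draws) / reps

var_equal = simulated_variance(equal_w)        # theory: sigma^2 * n * (1/n)^2 = 0.1
var_lopsided = simulated_variance(lopsided_w)  # theory: 0.25 + 0.25/9 ~ 0.278
print(var_equal, var_lopsided)
```

Any departure from equal weights inflates \sum c_i^2 above 1/n, which is exactly what the simulated variances show.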

Problem 2
a.

E(\bar{X}) = E\left( \frac{1}{n} \sum_{i=1}^n X_i \right) = \frac{1}{n} \sum_{i=1}^n E(X_i) = \frac{1}{n} \sum_{i=1}^n \mu = \mu

So \bar{X} is an unbiased estimator of \mu.

b.

MSE(\bar{X}) = \left[ E(\bar{X}) - \mu \right]^2 + Var(\bar{X})
= 0 + Var\left( \frac{1}{n} \sum_{i=1}^n X_i \right)
= \frac{1}{n^2} Var\left( \sum_{i=1}^n X_i \right)
= \frac{1}{n^2} \left( \sum_i Var(X_i) + \sum_i \sum_{j \neq i} Cov(X_i, X_j) \right)
= \frac{1}{n^2} \left( \sum_i \sigma^2 + \sum_i \sum_{j \neq i} \rho \sigma^2 \right)
= \frac{\sigma^2}{n} + \frac{(n-1) \rho \sigma^2}{n}

c. To determine whether the sample mean is a consistent estimator,
we need to see whether the MSE goes to zero as n approaches infinity.

\lim_{n \to \infty} MSE(\bar{X}) = \lim_{n \to \infty} \frac{\sigma^2}{n} + \lim_{n \to \infty} \frac{(n-1) \rho \sigma^2}{n} = 0 + \rho \sigma^2 = \rho \sigma^2

Thus (assuming a positive variance) the sample mean is not a consistent
estimator of the population mean unless \rho = 0.
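A quick simulation illustrates this. The construction below is one standard way (an assumption, not part of the problem set) to generate equicorrelated draws: X_i = \mu + \sqrt{\rho} W + \sqrt{1-\rho} e_i with W and the e_i independent standard normals, giving Var(X_i) = 1 and Cov(X_i, X_j) = \rho:

```python
import random

# Hypothetical sketch: equicorrelated draws built as
# X_i = mu + sqrt(rho)*W + sqrt(1-rho)*e_i  (W, e_i independent N(0,1)),
# so Var(X_i) = 1 and Cov(X_i, X_j) = rho for i != j.
random.seed(1)

mu, rho, reps = 0.0, 0.4, 20000

def mse_of_mean(n):
    """Monte Carlo MSE of the sample mean of n equicorrelated draws."""
    total = 0.0
    for _ in range(reps):
        w = random.gauss(0, 1)  # component shared by every draw
        xbar = sum(mu + rho ** 0.5 * w + (1 - rho) ** 0.5 * random.gauss(0, 1)
                   for _ in range(n)) / n
        total += (xbar - mu) ** 2
    return total / reps

m5, m50 = mse_of_mean(5), mse_of_mean(50)
# theory: sigma^2/n + (n-1)/n * rho * sigma^2, i.e. 0.52 and 0.412
print(m5, m50)
```

Note that the MSE falls from n = 5 to n = 50 but stays pinned near \rho \sigma^2 = 0.4 rather than vanishing.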

Problem 3
a. We know that the first moment of Z is

E(Z) = \int_0^\infty z f(z) dz = \int_0^\infty z \lambda e^{-\lambda z} dz = \frac{1}{\lambda}

Then, we calculate our method of moments estimator by setting E(Z) = \bar{Z}
and solving for \lambda:

\frac{1}{\lambda} = \frac{1}{n} \sum_{i=1}^n Z_i  \Rightarrow  \hat{\lambda}_{MM} = \frac{n}{\sum_{i=1}^n Z_i} = \frac{1}{\bar{Z}}

b. The exponential pdf is f(z) = \lambda e^{-\lambda z}. Thus, the likelihood function
for \lambda is

L(\lambda; z) = \prod_{i=1}^n \lambda e^{-\lambda z_i}

Then, the log likelihood function is

\ln L(\lambda; z) = \sum_{i=1}^n (-\lambda z_i) + n \ln \lambda

Differentiating the function with respect to \lambda and setting it to 0:

\frac{\partial \ln L(\lambda; z)}{\partial \lambda} = \sum_{i=1}^n (-z_i) + \frac{n}{\lambda} = 0  \Rightarrow  \sum_{i=1}^n z_i = \frac{n}{\lambda}

So we find that the MLE estimator is the same as the MM estimator:

\hat{\lambda}_{MLE} = \frac{n}{\sum_{i=1}^n z_i} = \frac{1}{\bar{Z}}
c. Using the invariance property, the MLE for \sqrt{\lambda} is simply

\widehat{\sqrt{\lambda}}_{MLE} = \sqrt{\hat{\lambda}_{MLE}} = \frac{1}{\sqrt{\bar{Z}}} = \sqrt{\frac{n}{\sum_{i=1}^n Z_i}}

d. We know that \hat{\lambda}_{MLE} will be consistent because MLE estimators
are consistent under standard regularity conditions. But is it unbiased? A direct
calculation of E(\hat{\lambda}_{MLE}) = E(1/\bar{Z}) is intractable (note that
E(1/\bar{Z}) \neq 1/E(\bar{Z}), in general). But we can use Jensen's inequality
to help us out, which tells us that for any random variable X, if g(x) is a convex
function, then

E(g(X)) \geq g(E(X))

with a strict inequality if g(x) is strictly convex. Thus, using \bar{Z} as our
random variable and the strictly convex function g(\bar{Z}) = 1/\bar{Z}, we have

E\left( \frac{1}{\bar{Z}} \right) > \frac{1}{E(\bar{Z})} = \frac{1}{E(Z)} = \lambda

and we know that our estimator is biased.

Note that if we had instead used the version of the exponential pdf that
replaces \lambda with 1/\beta, and estimated the parameter \beta by either method, we
would have found \hat{\beta} = \bar{Z}, which is both unbiased (since E(Z) = 1/\lambda = \beta) and
consistent (by the law of large numbers).
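The Jensen bias can also be seen numerically. The sketch below (hypothetical names) uses the known finite-sample mean of the exponential MLE, E[n / \sum Z_i] = n\lambda/(n-1), as a reference point:

```python
import random

# Hypothetical check of the Jensen bias: for exponential samples the MLE
# 1/Zbar has the known finite-sample mean n*lambda/(n-1) > lambda.
random.seed(2)

lam, n, reps = 2.0, 5, 40000
total = 0.0
for _ in range(reps):
    zs = [random.expovariate(lam) for _ in range(n)]
    total += n / sum(zs)  # MLE: 1 / sample mean

avg_mle = total / reps
print(avg_mle)  # near n*lambda/(n-1) = 2.5, strictly above lambda = 2.0
```

With n = 5 the upward bias is large (about 25%), but since n/(n-1) -> 1 it disappears as the sample grows, consistent with the consistency claim above.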

Problem 4
a. Bias(\hat{b}_{1,n}) = E(\hat{b}_{1,n}) - \mu = E[X_n] - \mu = \mu - \mu = 0, so \hat{b}_{1,n} is
unbiased for every n.

Bias(\hat{b}_{2,n}) = E(\hat{b}_{2,n}) - \mu = E\left[ \frac{1}{n+1} \sum_{i=1}^n X_i \right] - \mu = \frac{1}{n+1} \sum_{i=1}^n E[X_i] - \mu = \frac{n}{n+1} \mu - \mu = -\frac{\mu}{n+1} \neq 0

(assuming \mu \neq 0), so \hat{b}_{2,n} is biased for every n.

b.

\lim_{n \to \infty} P(|\hat{b}_{1,n} - \mu| < \varepsilon) = \lim_{n \to \infty} P(|X_n - \mu| < \varepsilon) = P(\mu - \varepsilon < X_n < \mu + \varepsilon)
= P(X_n \leq \mu + \varepsilon) - P(X_n \leq \mu - \varepsilon) (because the R.V. are continuous)
= F(\mu + \varepsilon) - F(\mu - \varepsilon) < 1 for some \varepsilon > 0.

Thus \hat{b}_{1,n} is not consistent.

As for \hat{b}_{2,n}:

Var(\hat{b}_{2,n}) = Var\left( \frac{1}{n+1} \sum_{i=1}^n X_i \right) = \left( \frac{1}{n+1} \right)^2 \sum_{i=1}^n Var[X_i] = \left( \frac{1}{n+1} \right)^2 n \sigma^2

so:
\lim_{n \to \infty} MSE(\hat{b}_{2,n}) = \lim_{n \to \infty} Var(\hat{b}_{2,n}) + \lim_{n \to \infty} Bias(\hat{b}_{2,n})^2 = 0 + 0 = 0,

proving that \hat{b}_{2,n} is consistent.
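The biased-but-consistent behavior of \hat{b}_{2,n} is easy to see in a simulation; the sketch below (hypothetical names, normal draws assumed for convenience) tracks its Monte Carlo MSE as n grows:

```python
import random

# Hypothetical sketch: b2 = (1/(n+1)) * sum(X_i) is biased for every n
# (when mu != 0) but its MSE still shrinks toward zero.
random.seed(3)

mu, sigma, reps = 3.0, 1.0, 20000

def mse_b2(n):
    """Monte Carlo MSE of b2 for a sample of n N(mu, sigma^2) draws."""
    total = 0.0
    for _ in range(reps):
        s = sum(random.gauss(mu, sigma) for _ in range(n))
        total += (s / (n + 1) - mu) ** 2
    return total / reps

mse_small, mse_large = mse_b2(5), mse_b2(100)
# theory: n*sigma^2/(n+1)^2 + mu^2/(n+1)^2, i.e. ~0.389 and ~0.0107
print(mse_small, mse_large)
```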

c. An unbiased estimator is not necessarily consistent; a consistent
estimator is not necessarily unbiased.

Problem 5
a. MM: Since we know that \theta_l = 0, we only need to use the first
moment equation:

E(X) = \frac{0 + \theta_u}{2}

Then, the MM estimator is obtained by solving

\frac{\hat{\theta}_u}{2} = \frac{1}{n} \sum_{i=1}^n X_i = \bar{X}  \Rightarrow  \hat{\theta}_u = \frac{2}{n} \sum_{i=1}^n X_i = 2\bar{X}

MLE: The uniform pdf is

f(x) = \frac{1}{\theta_u - \theta_l} = \frac{1}{\theta_u}

and the likelihood function and log likelihood function are given (respectively)
by

L(0, \theta_u; x_1, x_2, ..., x_n) = \left( \frac{1}{\theta_u} \right)^n

\ln L(0, \theta_u; x) = -n \ln \theta_u

In order to maximize the log likelihood function above, we need to
minimize \theta_u subject to the constraint x_i \leq \theta_u for all x_i.
Then, it must be that

\hat{\theta}_u = \max(x_i)

b. The first two moments are

E(X) = \frac{\theta_l + \theta_u}{2},

E(X^2) = \frac{\theta_l^2 + \theta_l \theta_u + \theta_u^2}{3}.

Then, the MM estimators are obtained by solving

\frac{\hat{\theta}_l + \hat{\theta}_u}{2} = \frac{1}{n} \sum_{i=1}^n X_i = \bar{X},

\frac{\hat{\theta}_l^2 + \hat{\theta}_l \hat{\theta}_u + \hat{\theta}_u^2}{3} = \frac{1}{n} \sum_{i=1}^n X_i^2 = \overline{X^2}.

They are given by

\hat{\theta}_l = \bar{X} - \sqrt{3\left( \overline{X^2} - \bar{X}^2 \right)},

\hat{\theta}_u = \bar{X} + \sqrt{3\left( \overline{X^2} - \bar{X}^2 \right)}.

c. We begin with the MM estimator from part a:

E(\hat{\theta}_u) = E(2\bar{X}) = 2 \cdot \frac{\theta_u}{2} = \theta_u

So this estimator is unbiased. Then we consider the variance.

Var(\hat{\theta}_u) = Var(2\bar{X}) = \frac{4}{n^2} \sum_{i=1}^n Var(X_i) = \frac{4}{n} \cdot \frac{\theta_u^2}{12} = \frac{\theta_u^2}{3n}

Because the bias is zero and the variance approaches zero as n gets large,
the MSE also approaches zero, and the estimator is consistent.

For the MLE estimator, we use the fact (shown in problem 6) that, for
this estimator, f(x) = \frac{n x^{n-1}}{\theta_u^n}. Then we can see that the MLE estimator is
biased:

E(\hat{\theta}_u) = \int_0^{\theta_u} x \cdot \frac{n x^{n-1}}{\theta_u^n} dx = \frac{n}{n+1} \theta_u

However, as n gets large, \frac{n}{n+1} \to 1, so the bias approaches zero, and we
know that, in general, MLE estimators are consistent.
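The bias formula E[\max(X_i)] = \frac{n}{n+1}\theta_u is easy to verify by simulation; the sketch below (hypothetical names) checks it at two sample sizes:

```python
import random

# Hypothetical check that E[max(X_i)] = n/(n+1) * theta for U[0, theta],
# so the MLE's downward bias vanishes as n grows.
random.seed(4)

theta, reps = 10.0, 20000

def mean_of_max(n):
    """Monte Carlo mean of the sample maximum of n U[0, theta] draws."""
    return sum(max(random.uniform(0, theta) for _ in range(n))
               for _ in range(reps)) / reps

m_small, m_large = mean_of_max(5), mean_of_max(50)
# theory: 5/6 * 10 = 8.33... and 50/51 * 10 = 9.80...
print(m_small, m_large)
```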

For the MM estimators in part b, finding the expected value is not
particularly tractable nor particularly interesting, so we will retract this
portion of the question.

Problem 6
Note that for a U[0, \theta] distribution, f(x) = \frac{1}{\theta}, F(x) = \frac{x}{\theta}, E(X) = \frac{\theta}{2},
and Var(X) = \frac{\theta^2}{12}.
a. It will be important that the sample draws are independent (and
identically distributed). Then,

Pr(X_{(n)} \leq x) = Pr(\max(X_i) \leq x) = \prod_{i=1}^n Pr(X_i \leq x) = F(x)^n = \left( \frac{x}{\theta} \right)^n

b. In part a, we found the cdf of X_{(n)}, so the pdf is just the derivative
of this:

f_{(n)}(x) = \frac{d}{dx} F_{(n)}(x) = \frac{d}{dx} \left( \frac{x}{\theta} \right)^n = \frac{n x^{n-1}}{\theta^n}

c. We want to choose constants k_1, k_2, and k_3 such that our estimators
are unbiased. For the first estimator,

E(\hat{k}_1 X_2) = \theta  \Rightarrow  \hat{k}_1 E(X_2) = \theta  \Rightarrow  \hat{k}_1 \frac{\theta}{2} = \theta  \Rightarrow  \hat{k}_1 = 2

Then, for the second estimator,

E(\hat{k}_2 \bar{X}) = \theta  \Rightarrow  \hat{k}_2 E(\bar{X}) = \theta  \Rightarrow  \hat{k}_2 \frac{\theta}{2} = \theta  \Rightarrow  \hat{k}_2 = 2

And for the third estimator,

E(\hat{k}_3 X_{(n)}) = \theta  \Rightarrow  \hat{k}_3 E(X_{(n)}) = \theta

\hat{k}_3 \int_0^\theta x \cdot \frac{n x^{n-1}}{\theta^n} dx = \hat{k}_3 \frac{n}{n+1} \theta = \theta  \Rightarrow  \hat{k}_3 = \frac{n+1}{n}
n

d. Now we calculate the variances of our estimators, using the
constants that we found in part c.

Var(2X_2) = 4 Var(X_2) = \frac{\theta^2}{3}

Var(2\bar{X}) = 4 Var(\bar{X}) = \frac{\theta^2}{3n}

Var\left( \frac{n+1}{n} X_{(n)} \right) = \left( \frac{n+1}{n} \right)^2 Var(X_{(n)})

To find Var(X_{(n)}), we will need E(X_{(n)}^2):

E(X_{(n)}^2) = \int_0^\theta x^2 \cdot \frac{n x^{n-1}}{\theta^n} dx = \frac{n \theta^2}{n+2}

so

Var(X_{(n)}) = \frac{n \theta^2}{n+2} - \left( \frac{n \theta}{n+1} \right)^2 = \frac{n \theta^2}{(n+2)(n+1)^2}

and

Var\left( \frac{n+1}{n} X_{(n)} \right) = \left( \frac{n+1}{n} \right)^2 \frac{n \theta^2}{(n+2)(n+1)^2} = \frac{\theta^2}{n(n+2)}

Thus, whenever n > 1,

Var(\hat{k}_3 X_{(n)}) < Var(\hat{k}_2 \bar{X}) < Var(\hat{k}_1 X_2)
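The variance ranking of the three unbiased estimators can be confirmed by simulation; the sketch below (hypothetical names) estimates all three variances from the same simulated samples:

```python
import random

# Hypothetical comparison of the three unbiased estimators of theta
# for U[0, theta]: 2*X_2, 2*Xbar, and (n+1)/n * max(X_i).
random.seed(5)

theta, n, reps = 1.0, 10, 20000
e1, e2, e3 = [], [], []
for _ in range(reps):
    xs = [random.uniform(0, theta) for _ in range(n)]
    e1.append(2 * xs[1])               # 2 * X_2 (the second draw)
    e2.append(2 * sum(xs) / n)         # 2 * sample mean
    e3.append((n + 1) / n * max(xs))   # (n+1)/n * nth order statistic

def variance(vals):
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)

v1, v2, v3 = variance(e1), variance(e2), variance(e3)
# theory: theta^2/3, theta^2/(3n), theta^2/(n(n+2))
print(v1, v2, v3)
```

With n = 10 the theoretical values are 1/3, 1/30, and 1/120, matching the ranking derived above.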
