Marcia Schafgans
Department of Economics
Michaelmas 2002
(a) linearly dependent if $|\rho_{XY}| = 1$, where the correlation coefficient is

$$\rho_{XY} = \frac{\mathrm{Cov}(X,Y)}{\sqrt{\mathrm{Var}(X)\,\mathrm{Var}(Y)}}; \qquad (1)$$

in particular, if $Y = a + bX$ with $b \neq 0$, then

$$\rho_{XY} = \frac{b}{|b|} = \begin{cases} 1, & b > 0 \\ -1, & b < 0; \end{cases} \qquad (2)$$
(b) functionally dependent if there exists a function $g$ such that either $Y = g(X)$ or $X = g(Y)$;
(c) stochastically independent if

$$F_{X,Y}(x,y) = F_X(x)\,F_Y(y) \quad \text{for all } x, y, \qquad (3)$$

that is, in the discrete case,

$$P(X = x, Y = y) = P(X = x)\,P(Y = y), \qquad (4)$$

and in the continuous case

$$f_{X,Y}(x,y) = f_X(x)\,f_Y(y), \qquad (5)$$

where $f_X(x)$, $f_Y(y)$ are the marginal density functions of $X$ and $Y$ respectively.
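As a numerical aside (not part of the original answer), the following Python sketch illustrates (1) and (2): for an exactly linear relation $Y = a + bX$ the sample correlation is $+1$ or $-1$ with the sign of $b$. The values of $a$ and $b$ are arbitrary choices, and numpy is assumed to be available.

    # Illustration of (1)-(2): for Y = a + bX the sample correlation is
    # +1 or -1 according to the sign of b. a = 2.0, b = -3.0 are
    # arbitrary choices.
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=100_000)
    a, b = 2.0, -3.0
    y = a + b * x                    # exact linear dependence

    rho = np.corrcoef(x, y)[0, 1]    # sample analogue of (1)
    print(rho)                       # -1.0 up to rounding, as in (2) for b < 0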
Stochastic independence implies that the correlation coefficient equals 0. We only give a proof for the case where $(X, Y)$ has a density function. Since

$$E(XY) = \int\!\!\int xy\, f_{X,Y}(x,y)\,dx\,dy = \int\!\!\int xy\, f_X(x)\,f_Y(y)\,dx\,dy = \int x f_X(x)\,dx \int y f_Y(y)\,dy = E(X)\,E(Y),$$

we have that

$$\mathrm{Cov}(X,Y) = E(XY) - E(X)E(Y) = E(X)E(Y) - E(X)E(Y) = 0.$$

Hence, $\rho_{XY} = 0$.
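The result just proved can also be seen by simulation. In the sketch below (an illustration, not a proof; the exponential and uniform distributions are arbitrary choices), independently drawn $X$ and $Y$ give a sample correlation near 0.

    # Independent draws give a sample correlation near zero. The
    # exponential and uniform distributions are arbitrary choices.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 1_000_000
    x = rng.exponential(scale=1.0, size=n)     # X drawn on its own
    y = rng.uniform(-1.0, 1.0, size=n)         # Y drawn independently of X

    print(np.corrcoef(x, y)[0, 1])             # close to 0 (order 1/sqrt(n))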
Note: The definition in (a) is not quite satisfactory. Suppose $|\rho_{XY}| < 1$; we would not call this linear independence, a term which is more appropriate when $\rho_{XY} = 0$. An analogous shortcoming applies to definition (b).
2. If the random variables $X$ and $Y$ have a bivariate normal distribution, show that $\mathrm{Cov}(X, Y) = 0$ implies that $X$ and $Y$ are statistically independent.
Answer: We have that $(X, Y) \sim N(\mu, \Sigma)$, where $\mu = (\mu_X, \mu_Y)'$ is the mean vector and

$$\Sigma = \begin{pmatrix} \sigma_X^2 & 0 \\ 0 & \sigma_Y^2 \end{pmatrix}$$

is the covariance matrix; here we have used that $\mathrm{Cov}(X, Y) = 0$. We now prove that (5) holds in our case. First, we note that the determinant of $\Sigma$, $|\Sigma|$, satisfies $|\Sigma| = \sigma_X^2 \sigma_Y^2$ since $\Sigma$ is diagonal. Thereby,
$$\begin{aligned}
f_{X,Y}(x,y) &= \frac{1}{2\pi\sqrt{|\Sigma|}} \exp\left[ -\frac{1}{2} (x - \mu_X,\, y - \mu_Y)\, \Sigma^{-1} \begin{pmatrix} x - \mu_X \\ y - \mu_Y \end{pmatrix} \right] \\
&= \frac{1}{2\pi\sigma_X\sigma_Y} \exp\left[ -\frac{1}{2} (x - \mu_X,\, y - \mu_Y) \begin{pmatrix} \sigma_X^{-2} & 0 \\ 0 & \sigma_Y^{-2} \end{pmatrix} \begin{pmatrix} x - \mu_X \\ y - \mu_Y \end{pmatrix} \right] \\
&= \frac{1}{2\pi\sigma_X\sigma_Y} \exp\left[ -\frac{1}{2\sigma_X^2}(x - \mu_X)^2 - \frac{1}{2\sigma_Y^2}(y - \mu_Y)^2 \right] \\
&= \frac{1}{\sqrt{2\pi\sigma_X^2}} \exp\left[ -\frac{1}{2\sigma_X^2}(x - \mu_X)^2 \right] \frac{1}{\sqrt{2\pi\sigma_Y^2}} \exp\left[ -\frac{1}{2\sigma_Y^2}(y - \mu_Y)^2 \right] \\
&= f_X(x)\, f_Y(y).
\end{aligned}$$
This proves that $\mathrm{Cov}(X,Y) = 0 \Rightarrow X$ and $Y$ are statistically independent. This result together with the result from Question 1, (c) gives us the following useful result: if $X$ and $Y$ have a bivariate normal distribution,

$$\mathrm{Cov}(X,Y) = 0 \iff X \text{ and } Y \text{ are statistically independent.}$$
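As a numerical aside (not part of the original answer), the sketch below checks independence beyond zero correlation: for jointly normal $(X, Y)$ with a diagonal covariance matrix, $E[g(X)h(Y)]$ factorises as $E[g(X)]\,E[h(Y)]$ even for nonlinear $g$ and $h$. The means, variances, and test functions are arbitrary choices.

    # For jointly normal (X, Y) with Cov(X, Y) = 0, expectations of
    # products factorise even for nonlinear functions, consistent with
    # full independence. mu, Sigma, g, h are arbitrary choices.
    import numpy as np

    rng = np.random.default_rng(2)
    mu = np.array([1.0, -2.0])                 # (mu_X, mu_Y)
    Sigma = np.diag([4.0, 9.0])                # diagonal: Cov(X, Y) = 0
    x, y = rng.multivariate_normal(mu, Sigma, size=1_000_000).T

    g = lambda t: np.abs(t)                    # nonlinear test functions
    h = lambda t: t ** 2
    print(np.mean(g(x) * h(y)))                # these two numbers
    print(np.mean(g(x)) * np.mean(h(y)))       # nearly coincide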
3. The mean square error (M.S.E.) of an estimator $\hat\theta$ of a scalar parameter $\theta$ is defined as $\mathrm{MSE}(\hat\theta) = E(\hat\theta - \theta)^2$. Show that:

$$\mathrm{MSE}(\hat\theta) = \mathrm{Var}(\hat\theta) + \left[\mathrm{bias}(\hat\theta)\right]^2.$$
Answer:

$$\begin{aligned}
\mathrm{MSE}(\hat\theta) &= E(\hat\theta - \theta)^2 \\
&= E\left[ \left(\hat\theta - E\hat\theta\right) + \left(E\hat\theta - \theta\right) \right]^2 \\
&= E\left(\hat\theta - E\hat\theta\right)^2 + \left(E\hat\theta - \theta\right)^2 + 2\left(E\hat\theta - \theta\right) E\left(\hat\theta - E\hat\theta\right) \\
&= \mathrm{Var}(\hat\theta) + \left(E\hat\theta - \theta\right)^2 \\
&= \mathrm{Var}(\hat\theta) + \left[\mathrm{bias}(\hat\theta)\right]^2,
\end{aligned}$$

where the cross term vanishes because $E(\hat\theta - E\hat\theta) = E\hat\theta - E\hat\theta = 0$.
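The decomposition can also be checked numerically. The sketch below (an illustration only; the shrinkage factor 0.9, the sample size, and the population parameters are arbitrary choices) estimates the MSE, variance, and bias of a deliberately biased estimator by simulation.

    # Simulation check of MSE = Var + bias^2 for a deliberately biased
    # estimator (0.9 times the sample mean). All numbers are arbitrary.
    import numpy as np

    rng = np.random.default_rng(3)
    theta, n, reps = 5.0, 20, 200_000
    samples = rng.normal(loc=theta, scale=2.0, size=(reps, n))
    theta_hat = 0.9 * samples.mean(axis=1)     # biased estimator of theta

    mse = np.mean((theta_hat - theta) ** 2)
    var = np.var(theta_hat)
    bias = np.mean(theta_hat) - theta
    print(mse, var + bias ** 2)                # the two agree up to noise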
4. Consider drawing independently two random samples from a population distributed as $N(\mu, \sigma^2)$. The first sample has $n_1$ observations, and its sample mean is given by $\bar X_1 = \frac{1}{n_1}\sum_{i=1}^{n_1} X_{1i}$; the second sample has $n_2$ observations, and its sample mean is given by $\bar X_2 = \frac{1}{n_2}\sum_{i=1}^{n_2} X_{2i}$. Two estimators of $\mu$ are proposed:

$$\hat\mu_1 = \tfrac{1}{2}\left(\bar X_1 + \bar X_2\right), \qquad \hat\mu_2 = \left(n_1 \bar X_1 + n_2 \bar X_2\right) / (n_1 + n_2).$$

Compare the properties of these estimators (unbiasedness, efficiency, consistency).
Answer: First observe that

$$E\bar X_1 = \frac{1}{n_1}\sum_{i=1}^{n_1} E X_{1i} = \frac{1}{n_1}\sum_{i=1}^{n_1} \mu = \frac{n_1}{n_1}\mu = \mu,$$

and

$$\mathrm{Var}\,\bar X_1 = \frac{1}{n_1^2}\,\mathrm{Var}\left(\sum_{i=1}^{n_1} X_{1i}\right) = \frac{1}{n_1^2}\sum_{i=1}^{n_1} \mathrm{Var}(X_{1i}) = \frac{1}{n_1^2}\, n_1\sigma^2 = \frac{\sigma^2}{n_1},$$

where in the second equality of the variance we have used the fact that we deal with a random sample. Similarly, we get that

$$E\bar X_2 = \mu, \qquad \mathrm{Var}\,\bar X_2 = \frac{\sigma^2}{n_2}.$$
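As a quick simulation check (not part of the original answer; $\mu$, $\sigma$, and $n_1$ are arbitrary choices), the sample mean indeed has mean $\mu$ and variance $\sigma^2/n_1$:

    # The sample mean of n1 draws from N(mu, sigma^2) has mean mu and
    # variance sigma^2 / n1. mu, sigma, n1 are arbitrary choices.
    import numpy as np

    rng = np.random.default_rng(4)
    mu, sigma, n1, reps = 3.0, 2.0, 10, 500_000
    xbar1 = rng.normal(mu, sigma, size=(reps, n1)).mean(axis=1)

    print(xbar1.mean(), mu)                    # close to mu
    print(xbar1.var(), sigma ** 2 / n1)        # close to sigma^2 / n1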
Thus,

$$E\hat\mu_1 = \tfrac{1}{2}\left(E\bar X_1 + E\bar X_2\right) = \tfrac{1}{2}(\mu + \mu) = \mu,$$

$$E\hat\mu_2 = \left(n_1 E\bar X_1 + n_2 E\bar X_2\right)/(n_1 + n_2) = (n_1 + n_2)\,\mu/(n_1 + n_2) = \mu;$$

therefore we conclude that both estimators are unbiased.
Regarding efficiency,

$$\mathrm{Var}(\hat\mu_1) = \tfrac{1}{4}\left(\mathrm{Var}\,\bar X_1 + \mathrm{Var}\,\bar X_2\right) = \frac{\sigma^2}{4}\left(\frac{1}{n_1} + \frac{1}{n_2}\right),$$

$$\mathrm{Var}(\hat\mu_2) = \left(n_1^2\,\mathrm{Var}\,\bar X_1 + n_2^2\,\mathrm{Var}\,\bar X_2\right)/(n_1 + n_2)^2 = \left(n_1^2\,\frac{\sigma^2}{n_1} + n_2^2\,\frac{\sigma^2}{n_2}\right)/(n_1 + n_2)^2 = \frac{\sigma^2}{n_1 + n_2}.$$

Hence

$$\begin{aligned}
\mathrm{Var}(\hat\mu_1) - \mathrm{Var}(\hat\mu_2) &= \sigma^2\left[\frac{1}{4}\left(\frac{1}{n_1} + \frac{1}{n_2}\right) - \frac{1}{n_1 + n_2}\right] \\
&= \sigma^2\,\frac{n_2(n_1 + n_2) + n_1(n_1 + n_2) - 4 n_1 n_2}{4 n_1 n_2 (n_1 + n_2)} \\
&= \sigma^2\,\frac{n_1^2 + n_2^2 - 2 n_1 n_2}{4 n_1 n_2 (n_1 + n_2)} \\
&= \sigma^2\,\frac{(n_1 - n_2)^2}{4 n_1 n_2 (n_1 + n_2)} \geq 0,
\end{aligned}$$

so $\hat\mu_2$ is at least as efficient as $\hat\mu_1$, with equality if and only if $n_1 = n_2$.
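A Monte Carlo comparison (an illustration only; $\mu$, $\sigma$, and the unequal sample sizes are arbitrary choices) reproduces these variance formulas and shows the smaller variance of $\hat\mu_2$ when $n_1 \neq n_2$.

    # Monte Carlo comparison of the two estimators with unequal sample
    # sizes (mu, sigma, n1, n2 are arbitrary choices); mu_hat_2 shows
    # the smaller variance, matching the algebra above.
    import numpy as np

    rng = np.random.default_rng(5)
    mu, sigma, n1, n2, reps = 3.0, 2.0, 10, 40, 200_000
    x1 = rng.normal(mu, sigma, size=(reps, n1)).mean(axis=1)   # X-bar_1
    x2 = rng.normal(mu, sigma, size=(reps, n2)).mean(axis=1)   # X-bar_2

    mu1_hat = 0.5 * (x1 + x2)
    mu2_hat = (n1 * x1 + n2 * x2) / (n1 + n2)
    print(mu1_hat.mean(), mu2_hat.mean())              # both near mu
    print(mu1_hat.var(), sigma ** 2 / 4 * (1 / n1 + 1 / n2))
    print(mu2_hat.var(), sigma ** 2 / (n1 + n2))       # the smaller one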
where $\Lambda = \mathrm{diag}(\lambda_1, \ldots, \lambda_n)$ contains the eigenvalues of $A$, which are either 0 or 1, and $S$ is a matrix whose columns are the corresponding eigenvectors (of length 1) of $A$. With $v = \frac{1}{\sigma} S'u$ we have

$$E v = \frac{1}{\sigma}\, S'\, E u = 0,$$

$$\mathrm{Var}(v) = \mathrm{Var}\left(\frac{1}{\sigma}\, S'u\right) = \frac{1}{\sigma^2}\, S'\,\mathrm{Var}(u)\, S = \frac{1}{\sigma^2}\, S' \left(\sigma^2 I\right) S = S'S = I.$$

In total, $v \sim N(0, I_n)$. Therefore, $v'\Lambda v = \sum_{i=1}^{n} \lambda_i v_i^2 = \sum_{i=1}^{p} v_i^2 \sim \chi^2_p$, as $v'\Lambda v$ equals the sum of the squares of $p$ independent $N(0,1)$ random variables.
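As a final numerical aside (not part of the original answer), the sketch below builds an arbitrary idempotent matrix $A$ of rank $p$ and confirms by simulation that $u'Au/\sigma^2$ behaves like a $\chi^2_p$ variable, whose mean is $p$ and variance $2p$.

    # Simulation of the quadratic-form result: A is an arbitrary
    # idempotent (projection) matrix of rank p, u ~ N(0, sigma^2 I),
    # and u'Au / sigma^2 should behave like a chi-squared(p) draw.
    import numpy as np

    rng = np.random.default_rng(6)
    n, p, sigma, reps = 5, 2, 1.5, 500_000
    Z = rng.normal(size=(n, p))
    A = Z @ np.linalg.inv(Z.T @ Z) @ Z.T       # idempotent, rank p

    u = rng.normal(0.0, sigma, size=(reps, n))
    q = np.einsum('ri,ij,rj->r', u, A, u) / sigma ** 2

    print(q.mean(), q.var())                   # about p and 2p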