
College of Business

City University of Hong Kong

EF5470 Econometrics
Semester A 2010-11
Due: 4 October 2010
Exercise 2

SW = Stock and Watson (2007)

1. SW problem 4.5. Add the following question.


e. Suppose the professor assigned students to one of the two examination times
based on their past GPA, with low-GPA students getting more time to complete
the exam. Which of the three assumptions in Key Concept 4.3 is unlikely to
hold? Will the professor tend to underestimate or overestimate the effect of time
pressure on final exam scores? Explain.

2. SW problem 5.9. Add the following questions.


c. Suppose the random errors ui are homoskedastic. What does the Gauss-Markov
theorem tell you about the variance of β̃, conditional on X1, X2, …, Xn?
d. Derive the variance of β̃ and that of the OLS estimator, conditional on
X1, X2, …, Xn. Verify that the conditional variance of β̃ is at least as large as that
of the OLS estimator. Hint: Make use of the Cauchy-Schwarz inequality from
section 4 of the lecture notes “Linear regression”.
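A Monte Carlo sketch can make the Gauss-Markov comparison concrete. The specific model yi = β·xi + ui and the ratio-of-means estimator β̃ = ȳ/x̄ below are assumptions (a guess at the setup of SW 5.9), not taken from the problem text; the Cauchy-Schwarz step mirrors the hint in part d.

```python
import numpy as np

# Monte Carlo sketch of the Gauss-Markov comparison. The model y_i = beta*x_i + u_i
# and the ratio-of-means estimator beta_tilde = ybar/xbar are assumptions here
# (a guess at the setup of SW 5.9), not taken from the problem text.
rng = np.random.default_rng(0)
n, beta, sigma = 50, 2.0, 1.0
x = rng.uniform(1.0, 3.0, n)            # fixed design: we condition on X

reps = 20000
u = rng.normal(0.0, sigma, (reps, n))   # homoskedastic errors
y = beta * x + u

b_tilde = y.mean(axis=1) / x.mean()     # alternative linear unbiased estimator
b_ols = (y @ x) / (x @ x)               # OLS in the no-intercept model

# Cauchy-Schwarz gives (sum x)^2 <= n * sum(x^2), hence
# Var(b_tilde | X) = sigma^2 * n / (sum x)^2 >= sigma^2 / sum(x^2) = Var(b_ols | X).
print(b_tilde.var(), b_ols.var())
```

Both simulated estimators center on β, but the simulated variance of β̃ exceeds that of OLS, as the theorem predicts.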

3. Consider the regression model yi = β0 + β1 xi + ui. Let ι = (1, 1, …, 1)′ be an n-vector
of 1’s, Pι = ι(ι′ι)⁻¹ι′, Mι = I − Pι, x = (x1, x2, …, xn)′, and y = (y1, y2, …, yn)′.¹ Write
y = ι·b0 + x·b1 + e, where (b0, b1) are the OLS estimators and e is the n-vector of OLS
residuals.
a. The geometry of least squares tells us that e ⊥ ι and e ⊥ x. Derive the formulas
for the OLS estimators (b0, b1) from these two orthogonality conditions.
Note: See SW Appendix 4.2 for an alternative derivation that relies on calculus. In
fact you can check that the first order conditions from the calculus approach are
identical to the orthogonality conditions from the geometry approach.
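The two orthogonality conditions in part a are easy to check numerically. A minimal sketch, with made-up data chosen purely for illustration:

```python
import numpy as np

# Numerical sketch: the OLS residual vector is orthogonal to iota and to x.
# The data below are made up purely for illustration.
rng = np.random.default_rng(1)
n = 30
x = rng.normal(size=n)
y = 1.0 + 0.5 * x + rng.normal(size=n)

iota = np.ones(n)
X = np.column_stack([iota, x])
b0, b1 = np.linalg.lstsq(X, y, rcond=None)[0]
e = y - (b0 * iota + b1 * x)            # OLS residuals

print(e @ iota, e @ x)                  # both are numerically zero
```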
b. Verify that Pι x = (x̄, x̄, …, x̄)′ and Mι x = (x1 − x̄, x2 − x̄, …, xn − x̄)′, where x̄ is the
sample mean of the x’s. That is, Mι transforms a data vector into deviations
from the sample mean.
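The averaging and demeaning actions of Pι and Mι can be seen directly on a small vector. A sketch, with a made-up data vector:

```python
import numpy as np

# Numerical sketch of part (b): P_iota replaces every entry by the sample mean,
# while M_iota = I - P_iota demeans. The data vector is made up for illustration.
n = 5
x = np.array([1.0, 2.0, 4.0, 8.0, 10.0])
iota = np.ones((n, 1))
P = iota @ np.linalg.inv(iota.T @ iota) @ iota.T   # equals (1/n) * (matrix of ones)
M = np.eye(n) - P

print(P @ x)   # (5., 5., 5., 5., 5.) since xbar = 5
print(M @ x)   # (-4., -3., -1., 3., 5.), the deviations from the mean
```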
c. Why is it that Pι e = 0? Explain and illustrate your answer with a picture.
d. SW p.146 lists four algebraic facts about OLS. (4.32) – (4.35) say that
(i) The sample mean of OLS residuals must be zero,
(ii) ŷ (fitted values) and y share the same sample mean,
(iii) OLS residuals and x must be uncorrelated in the sample, and

¹ ι is the Greek letter ‘iota’.
(iv) TSS = ESS + SSR; see pp. 123–124 for the definitions of TSS, ESS, and
SSR.
Make use of the geometrical properties of least squares to establish these four
facts about OLS. You should be able to find that geometrical reasoning is much
more elegant and actually easier than the brute force algebraic approach in SW.
Hint for (iv): Start by pre-multiplying the identity y = ι·b0 + x·b1 + e by Mι.
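Fact (iv) is also easy to confirm numerically before proving it geometrically. A sketch on simulated data, for illustration only:

```python
import numpy as np

# Numerical check of fact (iv): demeaning the OLS identity and applying
# Pythagoras yields TSS = ESS + SSR. Simulated data, for illustration only.
rng = np.random.default_rng(2)
n = 40
x = rng.normal(size=n)
y = 2.0 - 1.5 * x + rng.normal(size=n)

X = np.column_stack([np.ones(n), x])
yhat = X @ np.linalg.lstsq(X, y, rcond=None)[0]
e = y - yhat

TSS = np.sum((y - y.mean()) ** 2)
ESS = np.sum((yhat - y.mean()) ** 2)
SSR = np.sum(e ** 2)
print(TSS, ESS + SSR)                   # agree up to rounding error
```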
e. Section 5 of the lecture notes “Linear regression” discusses the uncentered
R-squared Ru², whereas SW pp. 123–124 discuss the R², usually referred to as the
centered R-squared Rc². Both are goodness-of-fit measures for a linear regression,
and their definitions are

Ru² ≡ ‖PX y‖² / ‖y‖² = ∑ ŷi² / ∑ yi²,    Rc² ≡ ‖PX Mι y‖² / ‖Mι y‖² = ∑ (ŷi − ȳ)² / ∑ (yi − ȳ)²,

where all sums run over i = 1, …, n.
Establish the following results with the help of geometrical reasoning. Draw
pictures to illustrate your answer.
(i) Ru² can be made arbitrarily close to 1 by adding a sufficiently large constant
to the dependent variable. Why is this an undesirable property for a
goodness-of-fit measure? Provide an example.
Hint: Let y* = y + c·ι, where c is a constant. Draw the three vectors y, ι, and y*.
What happens to y* as c increases?
(ii) Rc² is invariant to adding a constant to the dependent variable. Therefore Rc²
avoids the undesirable property in (i).
(iii) Both Ru² and Rc² are invariant to multiplying the dependent variable by a
constant. Why is this a desirable property? Provide an example.
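The behaviors claimed in (i)–(iii) can be checked numerically before the geometric argument is written down. A sketch on simulated data, for illustration only:

```python
import numpy as np

# Numerical sketch of (i)-(iii): how the uncentered and centered R-squared react
# to shifting and scaling the dependent variable. Simulated data for illustration.
rng = np.random.default_rng(3)
n = 40
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(size=n)
X = np.column_stack([np.ones(n), x])

def r2s(dep):
    yhat = X @ np.linalg.lstsq(X, dep, rcond=None)[0]
    Ru = (yhat @ yhat) / (dep @ dep)                                         # uncentered
    Rc = np.sum((yhat - dep.mean()) ** 2) / np.sum((dep - dep.mean()) ** 2)  # centered
    return Ru, Rc

for c in [0.0, 10.0, 100.0]:
    print(c, r2s(y + c))    # Ru creeps toward 1 as c grows; Rc stays put
print(r2s(5.0 * y))         # both are invariant to rescaling y
```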

4. Verbeek (2004, 2008) Chapter 2, Exercise 2.3 (Asset Pricing – Empirical).
