
Please read points (i) to (iv) first.

(i) If you do not intend to take more advanced courses, for example HE3021, HE4021, or
MH3510, then you can ignore the questions below.

(ii) The average difficulty level of the questions is 4 to 5.

(iii) If you are very good at math and statistics and want to prepare for advanced econometrics topics, then you can try to solve the questions.

(iv) Knowing how to solve the questions here is not a necessary condition for getting an A in this course (HE2005).

1. (Difficulty level: 4; this was a final exam question I wrote in 2011 for MAEC students)

Consider the linear regression model $y = X\beta + u$, where $y$ is an $n \times 1$ vector of observations, $X$ is an $n \times K$ matrix of nonstochastic explanatory variables such that $\operatorname{rank}(X) = K < n$, $\beta$ is a $K \times 1$ vector of unknown parameters, and $u$ is an $n \times 1$ vector of unobserved disturbances with $E(u) = 0$ and $E(uu') = \sigma^2 I$.

(a) Show that the OLS estimator is $\hat{\beta} = (X'X)^{-1}X'y$. Check the second-order condition.

(b) Show that the estimator is unbiased and that $\operatorname{Var}(\hat{\beta}) = \sigma^2 (X'X)^{-1}$.
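A numerical sanity check can complement the algebra in (a) and (b). The following is a minimal Monte Carlo sketch (my addition, not part of the question; the values of $n$, $K$, $\beta$, and $\sigma$ are made up) that holds $X$ fixed across replications and checks that the OLS estimates average to $\beta$ and scatter like $\sigma^2(X'X)^{-1}$:

```python
import numpy as np

# Monte Carlo sketch (illustrative values): simulate y = X beta + u with a
# fixed nonstochastic X and u ~ (0, sigma^2 I), then compare the sample mean
# and covariance of the OLS estimates with beta and sigma^2 (X'X)^{-1}.
rng = np.random.default_rng(0)

n, K, sigma = 50, 3, 2.0
beta = np.array([1.0, -0.5, 0.25])      # hypothetical true parameters
X = rng.normal(size=(n, K))             # drawn once, then held fixed

XtX_inv = np.linalg.inv(X.T @ X)
reps = 20_000
estimates = np.empty((reps, K))
for r in range(reps):
    u = rng.normal(scale=sigma, size=n)
    y = X @ beta + u
    estimates[r] = XtX_inv @ X.T @ y    # beta_hat = (X'X)^{-1} X'y

print("mean of beta_hat:", estimates.mean(axis=0))   # ~ beta (unbiasedness)
print("empirical covariance:\n", np.cov(estimates, rowvar=False))
print("sigma^2 (X'X)^{-1}:\n", sigma**2 * XtX_inv)
```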

2. (Difficulty level: 5; this was a question in 2010 for mathematics students)

This is based on Baltagi (1995). For the simple regression model without a constant, $y_i = \beta_1 x_i + u_i$, $i = 1, \dots, n$, where $u_i \sim \mathrm{IID}(0, \sigma^2)$ independent of $x_i$, consider the following three unbiased estimators of $\beta_1$:

$$\hat{\beta}_1 = \frac{\sum_{i=1}^{n} x_i y_i}{\sum_{i=1}^{n} x_i^2}, \qquad \tilde{\beta}_1 = \frac{\bar{y}}{\bar{x}}, \qquad \text{and} \qquad \bar{\bar{\beta}}_1 = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}{\sum_{i=1}^{n} (x_i - \bar{x})^2},$$

where $\bar{x} = \sum_{i=1}^{n} x_i / n$ and $\bar{y} = \sum_{i=1}^{n} y_i / n$.

(a) Show that $\operatorname{Cov}(\hat{\beta}_1, \tilde{\beta}_1) = \operatorname{Var}(\hat{\beta}_1) > 0$, and that $\rho_{12}$, the correlation coefficient of $\hat{\beta}_1$ and $\tilde{\beta}_1$, is given by $\rho_{12} = \left[ \operatorname{Var}(\hat{\beta}_1) / \operatorname{Var}(\tilde{\beta}_1) \right]^{1/2}$.

(b) Show that the optimal combination of $\hat{\beta}_1$ and $\tilde{\beta}_1$, given by $\hat{\beta} = \alpha \hat{\beta}_1 + (1 - \alpha) \tilde{\beta}_1$ where $-\infty < \alpha < \infty$, occurs at $\alpha^* = 1$. Optimality here refers to minimizing the variance. [Hint: read the paper by Samuel-Cahn (1994).]
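Both results can be illustrated numerically. Here is a minimal simulation sketch (my addition; $n$, $\beta_1$, $\sigma$, and the $x_i$ are made-up values, with the $x_i$ kept away from zero so $\tilde{\beta}_1$ is well behaved) that estimates the variances of the three estimators and checks that $\operatorname{Cov}(\hat{\beta}_1, \tilde{\beta}_1) \approx \operatorname{Var}(\hat{\beta}_1)$ and $\rho_{12} \approx [\operatorname{Var}(\hat{\beta}_1)/\operatorname{Var}(\tilde{\beta}_1)]^{1/2}$:

```python
import numpy as np

# Monte Carlo sketch (illustrative values): compare the three unbiased
# estimators of beta_1 in y_i = beta_1 x_i + u_i (no constant).
rng = np.random.default_rng(1)

n, beta1, sigma = 30, 0.7, 1.5
x = rng.uniform(1.0, 5.0, size=n)       # fixed regressor, bounded away from 0
reps = 50_000

b_hat = np.empty(reps)     # sum(x*y) / sum(x^2): OLS through the origin
b_tilde = np.empty(reps)   # y_bar / x_bar
b_bbar = np.empty(reps)    # deviation-from-means slope
xd = x - x.mean()
for r in range(reps):
    y = beta1 * x + rng.normal(scale=sigma, size=n)
    b_hat[r] = (x @ y) / (x @ x)
    b_tilde[r] = y.mean() / x.mean()
    b_bbar[r] = (xd @ (y - y.mean())) / (xd @ xd)

print("means (all ~ beta1):", b_hat.mean(), b_tilde.mean(), b_bbar.mean())
print("Var(b_hat), Var(b_tilde), Var(b_bbar):",
      b_hat.var(), b_tilde.var(), b_bbar.var())
print("Cov(b_hat, b_tilde):", np.cov(b_hat, b_tilde)[0, 1])  # ~ Var(b_hat)
print("rho_12:", np.corrcoef(b_hat, b_tilde)[0, 1])
print("sqrt(Var ratio):", np.sqrt(b_hat.var() / b_tilde.var()))  # ~ rho_12
```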

3. (Difficulty level: 5; this was a question in 2008 for statistics students)

Consider a simple regression with no constant: $y_i = \beta_1 x_i + u_i$, $i = 1, \dots, n$, where $u_i \sim \mathrm{IID}\, N(0, \sigma^2)$ independent of $x_i$. Theil (1971) showed that among all linear estimators in $y_i$, the minimum mean square error estimator for $\beta_1$, i.e., that which minimizes $E(\tilde{\beta}_1 - \beta_1)^2$, is given by

$$\tilde{\beta}_1 = \beta_1^2 \sum_{i=1}^{n} x_i y_i \Big/ \left( \beta_1^2 \sum_{i=1}^{n} x_i^2 + \sigma^2 \right).$$

(a) Show that $E(\tilde{\beta}_1) = \beta_1 / (1 + c)$, where $c = \sigma^2 \big/ \beta_1^2 \sum_{i=1}^{n} x_i^2$.

(b) Conclude that $\operatorname{Bias}(\tilde{\beta}_1) = E(\tilde{\beta}_1) - \beta_1 = -\frac{c}{1+c} \beta_1$. Note that this bias is positive (negative) when $\beta_1$ is negative (positive). This also means that $\tilde{\beta}_1$ is biased towards zero.

(c) Show that $\operatorname{MSE}(\tilde{\beta}_1) = E(\tilde{\beta}_1 - \beta_1)^2 = \sigma^2 \big/ \left[ \sum_{i=1}^{n} x_i^2 + \sigma^2 / \beta_1^2 \right]$. Conclude that it is smaller than $\operatorname{MSE}(\hat{\beta}_{1,\mathrm{OLS}}) = \sigma^2 \big/ \sum_{i=1}^{n} x_i^2$.
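Because $\tilde{\beta}_1$ uses the true $\beta_1$ and $\sigma^2$, it is not a feasible estimator, but the MSE comparison can still be illustrated numerically. A minimal sketch, again with made-up values:

```python
import numpy as np

# Monte Carlo sketch (illustrative values): compare the MSE of Theil's
# minimum mean square error estimator (infeasible, since it uses the true
# beta_1 and sigma^2) with the MSE of OLS through the origin.
rng = np.random.default_rng(2)

n, beta1, sigma = 20, 0.5, 2.0
x = rng.uniform(0.5, 2.0, size=n)
Sxx = x @ x
reps = 100_000

err_theil = np.empty(reps)
err_ols = np.empty(reps)
for r in range(reps):
    y = beta1 * x + rng.normal(scale=sigma, size=n)
    Sxy = x @ y
    err_theil[r] = beta1**2 * Sxy / (beta1**2 * Sxx + sigma**2) - beta1
    err_ols[r] = Sxy / Sxx - beta1

print("MSE(Theil) simulated:", (err_theil**2).mean())
print("MSE(Theil) formula  :", sigma**2 / (Sxx + sigma**2 / beta1**2))
print("MSE(OLS)   simulated:", (err_ols**2).mean())
print("MSE(OLS)   formula  :", sigma**2 / Sxx)
```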
