Walter Sosa-Escudero
Econ 507. Econometric Analysis. Spring 2009
All models are wrong, but some are useful (George E. P. Box)
Box, G. E. P. and Draper, N., 1987, Empirical Model-Building and Response Surfaces, Wiley, New York, p. 424.
Motivation
Our last attempt with the linear model: so far we have assumed that we know the model and its structure, so that the OLS (or GMM) estimator consistently estimates the unknown parameters.

What is OLS estimating if the underlying model is completely unknown (possibly non-linear, endogenous, heteroskedastic, etc.)? We will argue that the OLS estimator provides a good linear approximation to the (possibly non-linear) conditional expectation.
Note: this lecture draws heavily on Angrist and Pischke (2009).
E(y|x) gives the expected value of y for given values of x. It provides a reasonable representation of how x alters y. If x is random, E(y|x) is itself a random variable. Law of Iterated Expectations (LIE): E(y) = E[E(y|x)].
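A quick numerical check, not from the original slides: the Python sketch below verifies the LIE in an assumed model where the CEF is known in closed form, y = x^2 + u with u independent of x.

```python
# Hypothetical model for illustration: y = x^2 + u, so E(y|x) = x^2 exactly.
# The LIE says E(y) = E[E(y|x)]; both sides are estimated by simulation.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
x = rng.normal(size=n)
y = x**2 + rng.normal(size=n)

print(y.mean())       # direct estimate of E(y); ~1, since E(x^2) = 1
print((x**2).mean())  # estimate of E[E(y|x)]; matches by the LIE
```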
Decomposition property: any random variable y can be expressed as

y = E(y|x) + ε

where ε is a random variable satisfying i) E(ε|x) = 0, and ii) E(h(x)ε) = 0, where h(.) is any function of x.

Intuition: any variable can be decomposed into two parts: the conditional expectation and an orthogonal error term. We are not claiming E(y|x) is linear.
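Under the same assumed model as above, this sketch checks the decomposition property: the CEF error ε = y - E(y|x) has mean zero and is orthogonal to functions of x, here h(x) = x and h(x) = exp(x).

```python
# Same hypothetical model: y = x^2 + u, true CEF E(y|x) = x^2.
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
x = rng.normal(size=n)
y = x**2 + rng.normal(size=n)

eps = y - x**2                    # CEF decomposition error
print(eps.mean())                 # E(eps)        ~ 0
print((x * eps).mean())           # E(x eps)      ~ 0
print((np.exp(x) * eps).mean())   # E(exp(x) eps) ~ 0
```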
Prediction property: let m(x) be any function of x. Then

E(y|x) = argmin_{m(x)} E[(y - m(x))^2].
Intuition: the conditional expectation is the best prediction, where best means minimum mean squared error.
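A simulation sketch of the prediction property, using the same assumed model: the true CEF attains a strictly smaller mean squared error than two arbitrary competing predictors.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000
x = rng.normal(size=n)
y = x**2 + rng.normal(size=n)     # true CEF: E(y|x) = x^2

# Compare the MSE of the CEF against two other candidate predictors m(x).
for label, m in [("CEF:      m(x) = x^2", x**2),
                 ("linear:   m(x) = 1 + x", 1 + x),
                 ("constant: m(x) = 1", np.ones(n))]:
    print(label, np.mean((y - m)**2))   # the CEF attains the minimum (~1)
```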
Proof: write

(y - m(x))^2 = [(y - E(y|x)) + (E(y|x) - m(x))]^2
             = (y - E(y|x))^2 + (E(y|x) - m(x))^2 + 2(y - E(y|x))(E(y|x) - m(x))

Now: the first term is not affected by the choice of m(x). The third term is (y - E(y|x))(E(y|x) - m(x)) = ε h(x), with h(x) ≡ E(y|x) - m(x), and its expectation is equal to zero by the decomposition property. Hence the expectation of the whole expression is minimized if m(x) = E(y|x), which sets the second term to zero.
The best linear predictor: define r(x) ≡ x'β, where β solves the minimum mean squared linear prediction problem, β = argmin_b E[(y - x'b)^2]. Technically, r(x) is the orthogonal projection of the random variable y on the space spanned by the elements of the random vector x. This is like the population version of what we did before with the data.
Orthogonality and population coefficients: if E(xx') is invertible, the condition E[x(y - x'β)] = 0 is necessary and sufficient for x'β to exist and be the unique solution to the best linear prediction problem. Corollary: β = E(xx')^{-1} E(xy).

Note: this is the population version of the OLS estimator.
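The corollary can be checked numerically: computing β = E(xx')^{-1} E(xy) from simulated moments gives the same answer as running OLS on the same large sample, since OLS is the sample analogue of the population formula. The data-generating process below is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000
x1 = rng.normal(size=n)
X = np.column_stack([np.ones(n), x1])        # regressors, including a constant
y = 1 + 2 * x1 + rng.normal(size=n)          # assumed DGP: true beta = (1, 2)

Exx = X.T @ X / n                            # sample analogue of E(xx')
Exy = X.T @ y / n                            # sample analogue of E(xy)
print(np.linalg.solve(Exx, Exy))             # population formula ~ (1, 2)
print(np.linalg.lstsq(X, y, rcond=None)[0])  # OLS: numerically identical
```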
Sketchy proof: let x'b be any linear predictor. Then

E[(y - x'b)^2] = E[((y - x'β) + x'(β - b))^2]
               = E[(y - x'β)^2] + 2(β - b)'E[x(y - x'β)] + E[(x'(β - b))^2]
               = E[(y - x'β)^2] + E[(x'(β - b))^2]
               ≥ E[(y - x'β)^2],

where the cross term vanishes by the orthogonality condition E[x(y - x'β)] = 0.
Linear CEF: if E(y|x) is linear in x, then it coincides with r(x).

Proof: if E(y|x) is linear, then E(y|x) = x'γ for some K-vector γ. By the decomposition property, E[x(y - E(y|x))] = E[x(y - x'γ)] = 0. Solving gives γ = β, hence E(y|x) = r(x).
Go back to the starting point of this course:

y = x'β + u

with E(u|x) = 0, among other assumptions. Then, trivially, E(y|x) = r(x) = x'β.

Note that we got r(x) = E(y|x) by imposing structure on u (predeterminedness). In the population regression approach we first imposed structure on r(x) (we forced it to solve a minimum prediction problem) and we got error orthogonality as a consequence.
1. If we are interested in E(y|x) and this is a linear function of x, then it coincides with the linear regression function.
2. r(x) provides the best linear representation of y given x.
3. r(x) provides the best linear representation of E(y|x) given x.
This says that if the CEF m(x) = E(y|x) is a non-linear function, the OLS method consistently estimates the parameters of r(x), which provides the best approximation of m(x) in the MMSE sense.
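The following sketch illustrates this with an assumed non-linear CEF, E(y|x) = exp(x) with x standard normal. OLS converges not to the CEF but to the coefficients of r(x), which in this particular example are (sqrt(e), sqrt(e)) ≈ (1.65, 1.65).

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1_000_000
x = rng.normal(size=n)
y = np.exp(x) + rng.normal(size=n)       # non-linear CEF: E(y|x) = exp(x)

X = np.column_stack([np.ones(n), x])
b = np.linalg.lstsq(X, y, rcond=None)[0]
print(b)   # ~ (1.65, 1.65): the population projection coefficients,
           # i.e. the best linear approximation to exp(x), not exp(x) itself
```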
Empirical Illustration
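The original illustration is not reproduced here. As a hypothetical stand-in, the sketch below plots a cell-mean estimate of the CEF against the OLS line, the kind of figure Angrist and Pischke (2009) use for the log-wage/schooling example; the data are simulated, not real.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(5)
n = 50_000
educ = rng.integers(8, 21, size=n)              # years of schooling (simulated)
logw = 5 + 0.10 * educ + 0.01 * (educ - 14)**2 \
       + rng.normal(scale=0.5, size=n)          # mildly non-linear CEF

levels = np.unique(educ)
cef = [logw[educ == s].mean() for s in levels]  # CEF estimate: mean by cell

X = np.column_stack([np.ones(n), educ])
a, b = np.linalg.lstsq(X, logw, rcond=None)[0]  # OLS line

plt.plot(levels, cef, "o", label="CEF (cell means)")
plt.plot(levels, a + b * levels, label="OLS fit")
plt.xlabel("years of schooling")
plt.ylabel("log wage")
plt.legend()
plt.show()
```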