
ECONOMETRICS

Bruce E. Hansen
© 2000, 2008¹
University of Wisconsin
www.ssc.wisc.edu/~bhansen
This Revision: January 17, 2008
Comments Welcome

¹ This manuscript may be printed and reproduced for individual or instructional use, but may not be printed for commercial purposes.
Contents

1 Introduction
1.1 Economic Data
1.2 Observational Data
1.3 Economic Data

2 Regression and Projection
2.1 Variables
2.2 Conditional Density and Mean
2.3 Regression Equation
2.4 Conditional Variance
2.5 Linear Regression
2.6 Best Linear Predictor
2.7 Technical Proofs
2.8 Exercises

3 Least Squares Estimation
3.1 Random Sample
3.2 Estimation
3.3 Least Squares
3.4 Normal Regression Model
3.5 Model in Matrix Notation
3.6 Projection Matrices
3.7 Residual Regression
3.8 Bias and Variance
3.9 Gauss-Markov Theorem
3.10 Semiparametric Efficiency
3.11 Multicollinearity
3.12 Influential Observations
3.13 Technical Proofs
3.14 Exercises

4 Inference
4.1 Sampling Distribution
4.2 Consistency
4.3 Asymptotic Normality
4.4 Covariance Matrix Estimation
4.5 Alternative Covariance Matrix Estimators
4.6 Functions of Parameters
4.7 t tests
4.8 Confidence Intervals
4.9 Wald Tests
4.10 F Tests
4.11 Normal Regression Model
4.12 Semiparametric Efficiency in the Projection Model
4.13 Semiparametric Efficiency in the Homoskedastic Regression Model
4.14 Problems with Tests of NonLinear Hypotheses
4.15 Monte Carlo Simulation
4.16 Estimating a Wage Equation
4.17 Technical Proofs
4.18 Exercises

5 Additional Regression Topics
5.1 Generalized Least Squares
5.2 Testing for Heteroskedasticity
5.3 Forecast Intervals
5.4 NonLinear Least Squares
5.5 Least Absolute Deviations
5.6 Quantile Regression
5.7 Testing for Omitted NonLinearity
5.8 Omitted Variables
5.9 Irrelevant Variables
5.10 Model Selection
5.11 Technical Proofs
5.12 Exercises

6 The Bootstrap
6.1 Definition of the Bootstrap
6.2 The Empirical Distribution Function
6.3 Nonparametric Bootstrap
6.4 Bootstrap Estimation of Bias and Variance
6.5 Percentile Intervals
6.6 Percentile-t Equal-Tailed Interval
6.7 Symmetric Percentile-t Intervals
6.8 Asymptotic Expansions
6.9 One-Sided Tests
6.10 Symmetric Two-Sided Tests
6.11 Percentile Confidence Intervals
6.12 Bootstrap Methods for Regression Models
6.13 Exercises

7 Generalized Method of Moments
7.1 Overidentified Linear Model
7.2 GMM Estimator
7.3 Distribution of GMM Estimator
7.4 Estimation of the Efficient Weight Matrix
7.5 GMM: The General Case
7.6 Over-Identification Test
7.7 Hypothesis Testing: The Distance Statistic
7.8 Conditional Moment Restrictions
7.9 Bootstrap GMM Inference
7.10 Exercises

8 Empirical Likelihood
8.1 Non-Parametric Likelihood
8.2 Asymptotic Distribution of EL Estimator
8.3 Overidentifying Restrictions
8.4 Testing
8.5 Numerical Computation
8.6 Technical Proofs

9 Endogeneity
9.1 Instrumental Variables
9.2 Reduced Form
9.3 Identification
9.4 Estimation
9.5 Special Cases: IV and 2SLS
9.6 Bekker Asymptotics
9.7 Identification Failure
9.8 Exercises

10 Univariate Time Series
10.1 Stationarity and Ergodicity
10.2 Autoregressions
10.3 Stationarity of AR(1) Process
10.4 Lag Operator
10.5 Stationarity of AR(k)
10.6 Estimation
10.7 Asymptotic Distribution
10.8 Bootstrap for Autoregressions
10.9 Trend Stationarity
10.10 Testing for Omitted Serial Correlation
10.11 Model Selection
10.12 Autoregressive Unit Roots
10.13 Technical Proofs

11 Multivariate Time Series
11.1 Vector Autoregressions (VARs)
11.2 Estimation
11.3 Restricted VARs
11.4 Single Equation from a VAR
11.5 Testing for Omitted Serial Correlation
11.6 Selection of Lag Length in a VAR
11.7 Granger Causality
11.8 Cointegration
11.9 Cointegrated VARs

12 Limited Dependent Variables
12.1 Binary Choice
12.2 Count Data
12.3 Censored Data
12.4 Sample Selection

13 Panel Data
13.1 Individual-Effects Model
13.2 Fixed Effects
13.3 Dynamic Panel Regression

14 Nonparametrics
14.1 Kernel Density Estimation
14.2 Asymptotic MSE for Kernel Estimates

A Matrix Algebra
A.1 Notation
A.2 Matrix Addition
A.3 Matrix Multiplication
A.4 Trace
A.5 Rank and Inverse
A.6 Determinant
A.7 Eigenvalues
A.8 Positive Definiteness
A.9 Matrix Calculus
A.10 Kronecker Products and the Vec Operator
A.11 Vector and Matrix Norms

B Probability
B.1 Foundations
B.2 Random Variables
B.3 Expectation
B.4 Gamma Function
B.5 Common Distributions
B.6 Multivariate Random Variables
B.7 Conditional Distributions and Expectation
B.8 Transformations
B.9 Normal and Related Distributions

C Asymptotic Theory
C.1 Inequalities
C.2 Weak Law of Large Numbers
C.3 Convergence in Distribution
C.4 Asymptotic Transformations

D Maximum Likelihood

E Numerical Optimization
E.1 Grid Search
E.2 Gradient Methods
E.3 Derivative-Free Methods
Chapter 1
Introduction
Econometrics is the study of estimation and inference for economic models using economic
data. Econometric theory concerns the study and development of tools and methods for applied
econometric applications. Applied econometrics concerns the application of these tools to economic
data.
1.1 Economic Data
An econometric study requires data for analysis. The quality of the study will be largely
determined by the data available. There are three major types of economic data sets: cross-sectional,
time-series, and panel. They are distinguished by the dependence structure across observations.
Cross-sectional data sets are characterized by mutually independent observations. Surveys are
a typical source for cross-sectional data. The individuals surveyed may be persons, households, or
corporations.
Time-series data is indexed by time. Typical examples include macroeconomic aggregates,
prices and interest rates. This type of data is characterized by serial dependence.
Panel data combines elements of cross-section and time-series. These data sets consist of surveys
of a set of individuals, repeated over time. Each individual (person, household or corporation) is
surveyed on multiple occasions.
1.2 Observational Data
A common econometric question is to quantify the impact of one set of variables on another
variable. For example, a concern in labor economics is the returns to schooling: the change in
earnings induced by increasing a worker's education, holding other variables constant. Another
issue of interest is the earnings gap between men and women.

Ideally, we would use experimental data to answer these questions. To measure the returns
to schooling, an experiment might randomly divide children into groups, mandate different levels
of education to the different groups, and then follow the children's wage path as they mature and
enter the labor force. The differences between the groups could be attributed to the different levels
of education. However, experiments such as this are infeasible, even immoral!

Instead, most economic data is observational. To continue the above example, what we observe
(through data collection) is the level of a person's education and their wage. We can measure the
joint distribution of these variables, and assess the joint dependence. But we cannot infer causality,
as we are not able to manipulate one variable to see the direct effect on the other. For example,
a person's level of education is (at least partially) determined by that person's choices and their
achievement in education. These factors are likely to be affected by their personal abilities and
attitudes towards work. The fact that a person is highly educated suggests a high level of ability.
This is an alternative explanation for an observed positive correlation between educational levels
and wages. High ability individuals do better in school, and therefore choose to attain higher levels
of education, and their high ability is the fundamental reason for their high wages. The point is
that multiple explanations are consistent with a positive correlation between schooling levels and
wages. Knowledge of the joint distribution cannot distinguish between these explanations.

This discussion means that causality cannot be inferred from observational data alone. Causal
inference requires identification, and this is based on strong assumptions. We will return to a
discussion of some of these issues in Chapter 9.
1.3 Economic Data
Fortunately for economists, the development of the internet has provided a convenient forum for
dissemination of economic data. Many large-scale economic datasets are available without charge
from governmental agencies. An excellent starting point is the Resources for Economists Data
Links, available at http://rfe.wustl.edu/Data/index.html.
Some other excellent data sources are listed below.
Bureau of Labor Statistics: http://www.bls.gov/
Federal Reserve Bank of St. Louis: http://research.stlouisfed.org/fred2/
Board of Governors of the Federal Reserve System: http://www.federalreserve.gov/releases/
National Bureau of Economic Research: http://www.nber.org/
US Census: http://www.census.gov/econ/www/
Current Population Survey (CPS): http://www.bls.census.gov/cps/cpsmain.htm
Survey of Income and Program Participation (SIPP): http://www.sipp.census.gov/sipp/
Panel Study of Income Dynamics (PSID): http://psidonline.isr.umich.edu/
U.S. Bureau of Economic Analysis: http://www.bea.doc.gov/
CompuStat: http://www.compustat.com/www/
International Financial Statistics (IFS): http://ifs.apdi.net/imf/
Chapter 2
Regression and Projection
2.1 Variables
The most commonly applied econometric tool is regression. This is used when the goal is to
quantify the impact of one set of variables (the regressors, conditioning variables, or covariates)
on another variable (the dependent variable). We let y denote the dependent variable and
(x_1, x_2, ..., x_k) denote the k regressors. It is convenient to write the set of regressors as a vector in R^k:

$$ x = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_k \end{pmatrix}. \qquad (2.1) $$

Following mathematical convention, real numbers (elements of the real line R) are written using
lower case italics such as y, and vectors (elements of R^k) by lower case bold italics such as x. Upper
case bold italics such as X will be used for matrices.

The random variables (y, x) have a distribution F which we call the population. This population
is infinitely large. This abstraction can be a source of confusion as it does not correspond to
a physical population in the real world. The distribution F is unknown, and the goal of statistical
inference is to learn about features of F from the sample.

For most of our analysis it is unimportant whether the observations y and x come from continuous
or discrete distributions. For example, many regressors in econometric practice are binary,
taking on only the values 0 and 1. Binary variables are often called dummy variables.
2.2 Conditional Density and Mean
To study how the distribution of y varies with the variables x in the population, we start with
f(y | x), the conditional density of y given x.

To illustrate, Figure 2.1 displays the density¹ of hourly wages for men and women, from the
population of white non-military wage earners with a college degree and 10-15 years of potential
work experience. These are conditional density functions: the density of hourly wages conditional
on race, gender, education and experience. The two density curves show the effect of gender on the
distribution of wages, holding the other variables constant.

While it is easy to observe that the two densities are unequal, it is useful to have numerical
measures of the difference. An important summary measure is the conditional mean

$$ m(x) = E(y \mid x) = \int_{-\infty}^{\infty} y \, f(y \mid x) \, dy. \qquad (2.2) $$

¹ These are nonparametric density estimates using a Gaussian kernel with the bandwidth selected by cross-validation. See Chapter 14. The data are from the 2004 Current Population Survey.

Figure 2.1: Wage Densities for White College Grads with 10-15 Years Work Experience

In general, m(x) can take any form, and exists so long as E|y| < ∞. In the example presented in
Figure 2.1, the mean wage for men is $27.22, and that for women is $20.73. These are indicated in
Figure 2.1 by the arrows drawn to the x-axis.

Take a closer look at the density functions displayed in Figure 2.1. You can see that the right tail
of the density is much thicker than the left tail. These are asymmetric (skewed) densities, which is
a common feature of wage distributions. When a distribution is skewed, the mean is not necessarily
a good summary of the central tendency. In this context it is often convenient to transform the
data by taking the (natural) logarithm. Figure 2.2 shows the density of log hourly wages for the
same population, with mean log hourly wages drawn in with the arrows. The difference in the log
mean wage between men and women is 0.30, which implies a 30% average wage difference for this
population. This is a more robust measure of the typical wage gap between men and women than
the difference in the untransformed wage means. For this reason, wage regressions typically use log
wages as a dependent variable rather than the level of wages.

The comparisons in Figures 2.1 and 2.2 are facilitated by the fact that the control variable
(gender) is binary. When the distribution of the control variable takes on multiple values or
is continuous, then comparisons become more complicated. To illustrate, Figure 2.3 displays a
scatter plot² of log wages against education levels. Assuming for simplicity that this is the true
joint distribution, the solid line displays the conditional expectation of log wages varying with
education. The conditional expectation function is close to linear; the dashed line is a linear
projection approximation which will be discussed in Section 2.6. The main point to be learned
from Figure 2.3 is that the conditional expectation describes the central tendency of the conditional
distribution. Of particular interest to graduate students may be the observation that the difference
between a B.A. and a Ph.D. degree in mean log hourly wages is 0.36, implying an average 36%
difference in wage levels.

² White non-military male wage earners with 10-15 years of potential work experience.
Figure 2.2: Log Wage Densities
2.3 Regression Equation
The regression error e is defined to be the difference between y and its conditional mean (2.2)
evaluated at the observed value of x:

e = y − m(x).

By construction, this yields the formula

y = m(x) + e.    (2.3)

Theorem 2.3.1 Properties of the regression error e.
1. E(e | x) = 0.
2. E(e) = 0.
3. E(h(x)e) = 0 for any function h(·).
4. E(xe) = 0.

To show the first statement, by the definition of e and the linearity of conditional expectations,

$$ E(e \mid x) = E((y - m(x)) \mid x) = E(y \mid x) - E(m(x) \mid x) = m(x) - m(x) = 0. $$

Proofs of the remaining parts of the Theorem are left as an exercise.

The equations

y = m(x) + e
E(e | x) = 0

are often stated jointly as the regression framework. It is important to understand that this is a
framework, not a model, because no restrictions have been placed on the joint distribution of the
data. These equations hold true by definition. A regression model imposes further restrictions on
the permissible class of regression functions m(x).

Figure 2.3: Conditional Mean of Wages Given Education

The conditional mean has the property of being the best predictor of y in the sense of
achieving the lowest mean squared error. To see this, let g(x) be an arbitrary predictor of y given
x. The expected squared error using this prediction function is

$$ E(y - g(x))^2 = E(e + m(x) - g(x))^2 = Ee^2 + 2E(e(m(x) - g(x))) + E(m(x) - g(x))^2 = Ee^2 + E(m(x) - g(x))^2 \ge Ee^2 $$

where the third equality uses Theorem 2.3.1.3. The right-hand-side is minimized by setting g(x) =
m(x). Thus the mean squared error is minimized by the conditional mean.
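To make this result concrete, the following short simulation compares the mean squared error of the conditional mean against an arbitrary alternative predictor. The code is an illustration added here, not part of the original text; the data-generating process (x ~ N(0,1), m(x) = 1 + 2x, unit-variance error) is a hypothetical choice made only for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Hypothetical design: x ~ N(0,1), y = m(x) + e with m(x) = 1 + 2x and e ~ N(0,1)
x = rng.normal(size=n)
m = 1 + 2 * x                                  # conditional mean m(x) = E(y|x)
y = m + rng.normal(size=n)

# Mean squared error of the conditional mean versus some other predictor g(x)
mse_m = np.mean((y - m) ** 2)
mse_g = np.mean((y - (0.5 + 2.5 * x)) ** 2)    # an arbitrary alternative g(x)

print(f"MSE of m(x): {mse_m:.3f}")             # approximately Var(e) = 1
print(f"MSE of g(x): {mse_g:.3f}")             # strictly larger, as the theory predicts
```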
2.4 Conditional Variance
While the conditional mean is a good measure of the location of a conditional distribution,
it does not provide information about the spread of the distribution. A common measure of the
dispersion is the conditional variance

$$ \sigma^2(x) = \mathrm{var}(y \mid x) = E(e^2 \mid x). $$

Generally, σ²(x) is a non-trivial function of x and can take any form subject to the restriction that
it is non-negative. The conditional standard deviation is its square root σ(x) = √σ²(x).

In the special case where σ²(x) is a constant and independent of x, so that

E(e² | x) = σ²    (2.4)

we say that the error e is homoskedastic. In the general case where σ²(x) depends on x we say
that the error e is heteroskedastic.

Some textbooks describe heteroskedasticity as the case where "the variance of e varies across
observations". This concept is less helpful than defining heteroskedasticity as the dependence of
the conditional variance σ²(x) on the observables x.

As an example, take the conditional wage densities displayed in Figure 2.1. The conditional
standard deviation for men is 12.1 and that for women is 10.5. So while men have higher average
wages, they are also somewhat more dispersed. Thus the wage distribution is heteroskedastic: its
variance depends on observables.
2.5 Linear Regression
An important special case of (2.3) is when the conditional mean function m(x) is linear in x
(or linear in functions of x). Notationally it is convenient to augment the regressor vector x by
listing the number 1 as an element. We call this the "constant" or "intercept". Equivalently, we
assume that x_1 = 1, where x_1 is the first element of the vector x defined in (2.1). Thus (2.1) has
been redefined as the k × 1 vector

$$ x = \begin{pmatrix} 1 \\ x_2 \\ \vdots \\ x_k \end{pmatrix}. \qquad (2.5) $$

When m(x) is linear in x, we can write the mean equation as

$$ m(x) = x'\beta = \beta_1 + x_2\beta_2 + \cdots + x_k\beta_k \qquad (2.6) $$

where

$$ \beta = \begin{pmatrix} \beta_1 \\ \vdots \\ \beta_k \end{pmatrix} \qquad (2.7) $$

is a k × 1 parameter vector.

In this case (2.3) can be written as

y = x'β + e    (2.8)
E(e | x) = 0.    (2.9)

Equation (2.8) is called the linear regression model.

An important special case is the homoskedastic linear regression model

y = x'β + e    (2.10)
E(e | x) = 0    (2.11)
E(e² | x) = σ².    (2.12)
2.6 Best Linear Predictor
While the conditional mean m(x) = E(y | x) is the best predictor of y among all functions of
x, its functional form is typically unknown, and the linear assumption of the previous section is
empirically unlikely to be accurate. Instead, it is more realistic to view the linear specification (2.6)
as an approximation. We derive an appropriate approximation in this section.

A linear predictor for y given x is any linear function of the form x'β where β ∈ R^k. If β is
selected arbitrarily then x'β will likely be a poor predictor for y. We can measure the accuracy of
the predictor by the expected squared prediction error

$$ S(\beta) = E\left(y - x'\beta\right)^2 $$

(or mean squared error (MSE)). There is a unique β which minimizes this expression. The corresponding
linear predictor x'β has the property that it has the lowest expected squared prediction
error S(β) in the class of all linear predictors. We define this as the best linear predictor of y
given x, and this defines the linear projection model.

The MSE can be written out as

$$ S(\beta) = Ey^2 - 2\beta' E(xy) + \beta' E(xx')\beta $$

which is a quadratic function of β. The first-order condition for minimization (from Appendix A.9)
is

$$ 0 = \frac{\partial}{\partial\beta} S(\beta) = -2E(xy) + 2E(xx')\beta. $$

Solving for β we find

$$ \beta = \left(E(xx')\right)^{-1} E(xy). \qquad (2.13) $$

This is the unique best linear predictor.

It is worth taking the time to understand the notation involved in this expression. E(xx') is a
k × k matrix and E(xy) is a k × 1 column vector. Therefore, alternative expressions such as
E(xy)/E(xx') or E(xy)(E(xx'))^{-1} are incoherent and incorrect. Appendix A provides a comprehensive review
of matrix notation and operation.
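As a numerical sketch of the population formula (2.13), added here for illustration only: with simulated draws, the sample analogs of E(xx') and E(xy) recover the projection coefficient, and the implied error is (nearly) uncorrelated with x. The data-generating process below is an assumption made purely for this demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500_000

# Hypothetical population: x = (1, x2) with x2 ~ N(0,1), y = 0.5 + 1.5*x2 + noise
x2 = rng.normal(size=n)
x = np.column_stack([np.ones(n), x2])          # regressor vector includes an intercept
y = 0.5 + 1.5 * x2 + rng.normal(size=n)

Qxx = x.T @ x / n                              # sample analog of Q = E(xx')
Qxy = x.T @ y / n                              # sample analog of E(xy)
beta = np.linalg.solve(Qxx, Qxy)               # beta = (E(xx'))^{-1} E(xy)

e = y - x @ beta                               # projection error
print("beta:", beta)                           # close to (0.5, 1.5)
print("E(xe):", x.T @ e / n)                   # close to zero, as in (2.17) below
```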
The vector (2.13) exists and is unique as long as the k × k matrix Q = E(xx') is invertible. The
matrix Q plays an important role in least-squares theory so we will discuss some of its properties
in detail. Observe that for any non-zero α ∈ R^k,

$$ \alpha' Q \alpha = E\left(\alpha' x x' \alpha\right) = E\left(\alpha' x\right)^2 \ge 0 $$

so Q by construction is positive semi-definite. It is invertible if and only if it is positive definite,
which requires that for all non-zero α, E(α'x)² > 0. Equivalently, there cannot exist a non-zero
vector α such that α'x = 0 identically. This occurs when redundant variables are included in x.
In order for β to be uniquely defined, this situation must be excluded.

Given the definition of β in (2.13), x'β is the best linear predictor for y. The error is

e = y − x'β.    (2.14)

Notice that the error e from the linear prediction equation is equal to the error from the regression
equation when (and only when) the conditional mean is linear in x; otherwise they are distinct.

Rewriting, we obtain a decomposition of y into linear predictor and error

y = x'β + e.    (2.15)

This completes the derivation of the model. We call x'β alternatively the best linear predictor of y
given x, or the linear projection of y onto x. In general we call equation (2.15) the linear projection
model.

We now summarize the assumptions necessary for its derivation and list the implications in
Theorem 2.6.1.

Assumption 2.6.1
1. x contains an intercept;
2. Ey² < ∞;
3. Ex_j² < ∞ for j = 1, ..., k;
4. Q = E(xx') is invertible.
Theorem 2.6.1 Under Assumption 2.6.1, (2.13) and (2.14) are well defined. Furthermore,

E(e²) < ∞    (2.16)
E(xe) = 0    (2.17)

and

E(e) = 0.    (2.18)

A complete proof of Theorem 2.6.1 is presented in Section 2.7.

It is useful to note that the facts that E(xe) = 0 and E(e) = 0 mean that the variables x and
e are uncorrelated.

The two equations (2.15) and (2.17) summarize the linear projection model. Let's compare
it with the linear regression model (2.8)-(2.9). Since from Theorem 2.3.1.4 we know that the
regression error has the property E(xe) = 0, it follows that linear regression is a special case of the
projection model. However, the converse is not true as the projection error does not necessarily
satisfy E(e | x) = 0.

To see this in a simple example, suppose we take a normally distributed random variable
x ~ N(0, 1) and set y = x². Consider the linear projection of y on x and an intercept:

$$ \beta = \begin{pmatrix} 1 & E(x) \\ E(x) & E(x^2) \end{pmatrix}^{-1} \begin{pmatrix} E(y) \\ E(xy) \end{pmatrix}
         = \begin{pmatrix} 1 & E(x) \\ E(x) & E(x^2) \end{pmatrix}^{-1} \begin{pmatrix} E(x^2) \\ E(x^3) \end{pmatrix}
         = \begin{pmatrix} 1 \\ 0 \end{pmatrix} $$

Thus the linear projection equation takes the form

y = β_1 + xβ_2 + e

where β_1 = 1, β_2 = 0 and e = y − 1 = x² − 1. Observe that E(e) = E(x²) − 1 = 0 and
E(xe) = E(x³) − E(x) = 0, yet E(e | x) = x² − 1 ≠ 0. In this simple example e is a deterministic
function of x, yet e and x are uncorrelated!
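The following simulation, added here purely as an illustration, checks this example numerically: the fitted projection coefficients are close to (1, 0), the sample covariance between x and e is near zero, yet the average error over a region of x far from zero is clearly non-zero, so E(e | x) ≠ 0.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000

x = rng.normal(size=n)                     # x ~ N(0,1)
y = x ** 2                                 # y = x^2

X = np.column_stack([np.ones(n), x])       # intercept and x
beta = np.linalg.solve(X.T @ X, X.T @ y)   # sample linear projection coefficients
e = y - X @ beta

print("projection coefficients:", beta)    # approximately (1, 0)
print("sample E(xe):", np.mean(x * e))     # approximately 0 (uncorrelated)

# But E(e | x) = x^2 - 1 is not zero: average e over a region where |x| is large
print("mean of e when |x| > 2:", e[np.abs(x) > 2].mean())   # clearly positive
```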
The conditions listed in Assumption 2.6.1 are weak. The finite second moment Assumptions
2.6.1.2 and 2.6.1.3 are called regularity conditions. Assumption 2.6.1.4 is required to ensure that
β is uniquely defined. Assumption 2.6.1.1 is employed to guarantee that (2.18) holds.

We have shown that under mild regularity conditions, for any pair (y, x) we can define a linear
equation (2.15) with the properties listed in Theorem 2.6.1. No additional assumptions are required.
However, it is important not to misinterpret the generality of this statement. The linear equation
(2.15) is defined by the definition of the best linear predictor and the coefficient definition (2.13).
In contrast, in many economic models the parameter β may be defined within the model. In this
case (2.13) may not hold and the implications of Theorem 2.6.1 may be false. These structural
models require alternative estimation methods, and are discussed in Chapter 9.
Returning to the joint distribution displayed in Figure 2.3, the dashed line is the projection of log
wages onto education. In this example the linear predictor is a close approximation to the conditional
mean. In other cases the two may be quite different. Figure 2.4 displays the relationship³
between mean log hourly wages and labor market experience. The solid line is the conditional
mean, and the straight dashed line is the linear projection. In this case the linear projection is a
poor approximation to the conditional mean. It over-predicts wages for young and old workers, and
under-predicts for the rest. Most importantly, it misses the strong downturn in expected wages for
those above 35 years work experience (equivalently, for those over 53 in age).

³ In the population of Caucasian non-military male wage earners with 12 years of education.

Figure 2.4: Hourly Wage as a Function of Experience

This defect in the best linear predictor can be partially corrected through a careful selection of
regressors. In the example just presented, we can augment the regressor vector x to include both
experience and experience². The best linear predictor of log wages given these two variables can
be called a quadratic projection, since the resulting function is quadratic in experience. Other than
the redefinition of the regressor vector, there are no changes in our methods or analysis. In Figure
2.4 we display as well the quadratic projection. In this example it is a much better approximation
to the conditional mean than the linear projection.

Another defect of linear projection is that it is sensitive to the marginal distribution of the
regressors when the conditional mean is non-linear. We illustrate the issue in Figure 2.5 for a
constructed⁴ joint distribution of y and x. The solid line is the non-linear conditional mean of
y given x. The data are divided in two groups, Group 1 and Group 2, which have different marginal
distributions for the regressor x, and Group 1 has a lower mean value of x than Group 2. The
separate linear projections of y on x for these two groups are displayed in the Figure by the dashed
lines. These two projections are distinct approximations to the conditional mean. A defect with
linear projection is that it leads to the incorrect conclusion that the effect of x on y is different for
individuals in the two groups. This conclusion is incorrect because in fact there is no difference in
the conditional mean function. The apparent difference is a by-product of a linear approximation
to a non-linear mean, combined with different marginal distributions for the conditioning variables.

⁴ The x_i in Group 1 are N(2, 1) and those in Group 2 are N(4, 1); and the conditional distribution of y given x is N(m(x), 1) where m(x) = 2x − x²/6.

Figure 2.5: Conditional Mean and Two Linear Projections
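Using the data-generating process described in the footnote (Group 1: x ~ N(2,1); Group 2: x ~ N(4,1); y | x ~ N(m(x),1) with m(x) = 2x − x²/6), the sketch below reproduces the phenomenon numerically: the two groups share the same conditional mean, yet their separate linear projections have noticeably different slopes. The code is an illustration added here, not part of the original text.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000

def simulate(mean_x):
    x = rng.normal(mean_x, 1.0, size=n)
    y = 2 * x - x**2 / 6 + rng.normal(size=n)    # same m(x) for both groups
    return x, y

def projection_slope(x, y):
    X = np.column_stack([np.ones_like(x), x])
    b = np.linalg.solve(X.T @ X, X.T @ y)
    return b[1]

x1, y1 = simulate(2.0)   # Group 1
x2, y2 = simulate(4.0)   # Group 2

print("Group 1 projection slope:", projection_slope(x1, y1))   # about 2 - 2/3 = 1.33
print("Group 2 projection slope:", projection_slope(x2, y2))   # about 2 - 4/3 = 0.67
```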
2.7 Technical Proofs
Proof of Theorem 2.6.1. We first show that the moments E(xy) and E(xx') are finite and well
defined. First, it is useful to note that Assumption 2.6.1.3 implies that

$$ E\|x\|^2 = E(x'x) = \sum_{j=1}^{k} E x_j^2 < \infty. \qquad (2.19) $$

Note that for j = 1, ..., k, by the Cauchy-Schwarz Inequality (C.3) and Assumptions 2.6.1.2 and
2.6.1.3,

$$ E|x_j y| \le \left(E x_j^2\right)^{1/2}\left(E y^2\right)^{1/2} < \infty. $$

Thus the elements in the vector E(xy) are well defined and finite. Next, note that the jl-th element
of E(xx') is E(x_j x_l). Observe that

$$ E|x_j x_l| \le \left(E x_j^2\right)^{1/2}\left(E x_l^2\right)^{1/2} < \infty $$

under Assumption 2.6.1.3. Thus all elements of the matrix E(xx') are finite.

Equation (2.13) states that β = (E(xx'))^{-1} E(xy), which is well defined since (E(xx'))^{-1} exists
under Assumption 2.6.1.4. It follows that e = y − x'β as defined in (2.14) is also well defined.

Note the Schwarz Inequality (A.7) implies (x'β)² ≤ ||x||² ||β||² and therefore combined with
(2.19) we see that

$$ E\left(x'\beta\right)^2 \le E\|x\|^2 \|\beta\|^2 < \infty. \qquad (2.20) $$

Using Minkowski's Inequality (C.5), Assumption 2.6.1.2 and (2.20) we find

$$ \left(E e^2\right)^{1/2} = \left(E\left(y - x'\beta\right)^2\right)^{1/2} \le \left(E y^2\right)^{1/2} + \left(E\left(x'\beta\right)^2\right)^{1/2} < \infty, $$

establishing (2.16).

An application of the Cauchy-Schwarz Inequality (C.3) shows that for any j,

$$ E|x_j e| \le \left(E x_j^2\right)^{1/2}\left(E e^2\right)^{1/2} < \infty $$

and therefore the elements in the vector E(xe) are well defined and finite.

Using the definitions (2.14) and (2.13), and the matrix properties that AA^{-1} = I and Ia = a,

$$ E(xe) = E\left(x\left(y - x'\beta\right)\right) = E(xy) - E(xx')\left(E(xx')\right)^{-1} E(xy) = 0. $$

Finally, equation (2.18) follows from (2.17) and Assumption 2.6.1.1.
2.8 Exercises
1. Prove parts 2, 3 and 4 of Theorem 2.3.1.

2. Suppose that the random variables y and x only take the values 0 and 1, and have the
   following joint probability distribution

            x = 0   x = 1
     y = 0   .1      .2
     y = 1   .4      .3

   Find E(y | x), E(y² | x) and var(y | x) for x = 0 and x = 1.

3. Suppose that y is discrete-valued, taking values only on the non-negative integers, and the
   conditional distribution of y given x is Poisson:

   $$ P(y = j \mid x) = \frac{\exp(-x'\beta)\,(x'\beta)^j}{j!}, \qquad j = 0, 1, 2, ... $$

   Compute E(y | x) and var(y | x). Does this justify a linear regression model of the form
   y = x'β + e?

   Hint: If P(y = j) = exp(−λ)λ^j / j!, then Ey = λ and var(y) = λ.

4. Let x and y have the joint density f(x, y) = (3/2)(x² + y²) on 0 ≤ x ≤ 1, 0 ≤ y ≤ 1. Compute
   the coefficients of the best linear predictor y = β_1 + β_2 x + e. Compute the conditional mean
   m(x) = E(y | x). Are they different?

5. Take the bivariate linear projection model

   y = β_1 + β_2 x + e
   E(e) = 0
   E(xe) = 0

   Define μ_y = Ey, μ_x = Ex, σ²_x = var(x), σ²_y = var(y) and σ_xy = cov(x, y). Show that
   β_2 = σ_xy / σ²_x and β_1 = μ_y − β_2 μ_x.

6. True or False. If y = xβ + e, x ∈ R, and E(e | x) = 0, then E(x²e) = 0.

7. True or False. If y = x'β + e and E(e | x) = 0, then e is independent of x.

8. True or False. If y = x'β + e, E(e | x) = 0, and E(e² | x) = σ², a constant, then e is
   independent of x.

9. True or False. If y = xβ + e, x ∈ R, and E(x_i e_i) = 0, then E(x²e) = 0.

10. True or False. If y = x'β + e and E(xe) = 0, then E(e | x) = 0.

11. Let x be a random variable with μ = Ex and σ² = var(x). Define

    $$ g\left(x \mid \mu, \sigma^2\right) = \begin{pmatrix} x - \mu \\ (x - \mu)^2 - \sigma^2 \end{pmatrix}. $$

    Show that Eg(x | m, s) = 0 if and only if m = μ and s = σ².
Chapter 3
Least Squares Estimation
3.1 Random Sample
In a typical application, an econometrician has a set of observed measurements on a set of
variables for a group of individuals. These individuals may be persons, households, firms or other
economic agents. We call this information the data, dataset, or sample. We denote the number
of individuals in the dataset by the natural number n, and call this the number of observations.
Each observation consists of a set of measurements on a list of variables. Ideally all variables are
measured for all observations, but in some cases some variables are not measured, and we describe
these variables as unobserved or missing. The variables include the dependent variable y ∈ R and
the regressor x ∈ R^k.

We use the index i to indicate the i-th individual in the dataset. The observation for the i-th
individual will be written as the pair (y_i, x_i). y_i is the observed value of y for individual i and x_i
is the observed value of x for the same individual.

If the data is cross-sectional (each observation is a different individual) it is often reasonable to
assume the observations are mutually independent. This means that the pair (y_i, x_i) is independent
of (y_j, x_j) for i ≠ j. (Sometimes the label "independent" is misconstrued. It is not a statement
about the relationship between y_i and x_i.) Furthermore, if the data is randomly gathered, it is
reasonable to model each observation as a random draw from the same probability distribution. In
this case we say that the data are independent and identically distributed, or iid. We call
this a random sample.

Assumption 3.1.1 The observations (y_i, x_i), i = 1, ..., n, are mutually independent across observations
i and identically distributed.

In Chapter 2 we derived and discussed the best linear predictor of y given x for a pair of random
variables (y, x) ∈ R × R^k, and called this the linear projection model. Applied to observations from
a random sample, the linear projection model takes the form

y_i = x_i'β + e_i    (3.1)
E(x_i e_i) = 0    (3.2)
β = (E(x_i x_i'))^{-1} E(x_i y_i).    (3.3)

This chapter explores estimation and inference in this context.

In Sections 3.8 and 3.9, we narrow the focus to the linear regression model, but for most of the
chapter we retain the broader focus on the projection model.
3.2 Estimation
Equation (3.3) writes the projection coefficient β as an explicit function of the population moments
E(x_i y_i) and E(x_i x_i'). Their moment estimators are the sample moments

$$ \hat{E}(x_i y_i) = \frac{1}{n}\sum_{i=1}^{n} x_i y_i $$
$$ \hat{E}(x_i x_i') = \frac{1}{n}\sum_{i=1}^{n} x_i x_i'. $$

The moment estimator of β replaces the population moments in (3.3) with the sample moments:

$$ \hat\beta = \left(\hat{E}(x_i x_i')\right)^{-1}\hat{E}(x_i y_i)
            = \left(\frac{1}{n}\sum_{i=1}^{n} x_i x_i'\right)^{-1}\left(\frac{1}{n}\sum_{i=1}^{n} x_i y_i\right)
            = \left(\sum_{i=1}^{n} x_i x_i'\right)^{-1}\left(\sum_{i=1}^{n} x_i y_i\right) \qquad (3.4) $$

Another way to derive β̂ is as follows. Observe that (3.2) can be written in the parametric form
g(β) = E(x_i(y_i − x_i'β)) = 0. The function g(β) can be estimated by

$$ \hat{g}(\beta) = \frac{1}{n}\sum_{i=1}^{n} x_i\left(y_i - x_i'\beta\right). $$

This is a set of k equations which are linear in β. The estimator β̂ is the value which jointly sets
these equations equal to zero:

$$ 0 = \hat{g}(\hat\beta) = \frac{1}{n}\sum_{i=1}^{n} x_i\left(y_i - x_i'\hat\beta\right)
     = \frac{1}{n}\sum_{i=1}^{n} x_i y_i - \frac{1}{n}\sum_{i=1}^{n} x_i x_i'\hat\beta \qquad (3.5) $$

whose solution is (3.4).

To illustrate, consider the data used to generate Figure 2.3. These are white male wage earners
from the March 2004 Current Population Survey, excluding military, with 10-15 years of potential
work experience. This sample has 988 observations. Let y_i be log wages and x_i be an intercept
and years of education. Then

$$ \frac{1}{n}\sum_{i=1}^{n} x_i y_i = \begin{pmatrix} 2.95 \\ 42.40 \end{pmatrix} $$
$$ \frac{1}{n}\sum_{i=1}^{n} x_i x_i' = \begin{pmatrix} 1 & 14.14 \\ 14.14 & 205.83 \end{pmatrix}. $$

Thus

$$ \hat\beta = \begin{pmatrix} 1 & 14.14 \\ 14.14 & 205.83 \end{pmatrix}^{-1}\begin{pmatrix} 2.95 \\ 42.40 \end{pmatrix}
            = \begin{pmatrix} 34.94 & -2.40 \\ -2.40 & 0.170 \end{pmatrix}\begin{pmatrix} 2.95 \\ 42.40 \end{pmatrix}
            = \begin{pmatrix} 1.313 \\ 0.128 \end{pmatrix}. $$

We often write the estimated equation using the format

$$ \widehat{\log(Wage)} = 1.313 + 0.128 \; Education. $$

An interpretation of the estimated equation is that each year of education is associated with a
12.8% increase in mean wages.
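The arithmetic in this example can be checked directly from the reported sample moments. The following sketch is an illustration added here, not part of the original text; because the displayed moment matrices are rounded, the computed coefficients only approximate the estimates reported above.

```python
import numpy as np

# Sample moments reported in the text (1/n times the sums)
Qxy = np.array([2.95, 42.40])                  # (1/n) sum of x_i y_i
Qxx = np.array([[1.0, 14.14],
                [14.14, 205.83]])              # (1/n) sum of x_i x_i'

beta_hat = np.linalg.solve(Qxx, Qxy)
print(beta_hat)   # roughly [1.30, 0.12]; rounding of the displayed moments means
                  # this does not exactly reproduce the reported (1.313, 0.128)
```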
3.3 Least Squares
Least squares is another classic motivation for the estimator (3.4). Define the sum-of-squared
errors (SSE) function

$$ S_n(\beta) = \sum_{i=1}^{n}\left(y_i - x_i'\beta\right)^2
             = \sum_{i=1}^{n} y_i^2 - 2\beta'\sum_{i=1}^{n} x_i y_i + \beta'\sum_{i=1}^{n} x_i x_i'\beta. $$

This is a quadratic function of β. To visualize this function, Figure 3.1 displays an example
sum-of-squared errors function S_n(β) for the case k = 2.

Figure 3.1: Sum-of-Squared Errors Function

The Ordinary Least Squares (OLS) estimator is the value of β which minimizes S_n(β).
Matrix calculus (see Appendix A.9) gives the first-order conditions for minimization:

$$ 0 = \frac{\partial}{\partial\beta} S_n(\hat\beta) = -2\sum_{i=1}^{n} x_i y_i + 2\sum_{i=1}^{n} x_i x_i'\hat\beta $$

whose solution is (3.4). Following convention we will call β̂ the OLS estimator of β.

As a by-product of OLS estimation, we define the predicted value

ŷ_i = x_i'β̂

and the residual

ê_i = y_i − ŷ_i = y_i − x_i'β̂.

Note that y_i = ŷ_i + ê_i. It is important to understand the distinction between the error e_i and the
residual ê_i. The error is unobservable while the residual is a by-product of estimation. These two
variables are frequently mislabeled, which can cause confusion.

Equation (3.5) implies that

$$ \frac{1}{n}\sum_{i=1}^{n} x_i\hat{e}_i = 0. $$

Since x_i contains a constant, one implication is that

$$ \frac{1}{n}\sum_{i=1}^{n}\hat{e}_i = 0. $$

Thus the residuals have a sample mean of zero and the sample correlation between the regressors
and the residuals is zero. These are algebraic results, and hold true for all linear regression estimates.

The error variance σ² = Ee_i² is also a parameter of interest. It measures the variation in the
"unexplained" part of the regression. Its method of moments estimator is the sample average of
the squared residuals

$$ \hat\sigma^2 = \frac{1}{n}\sum_{i=1}^{n}\hat{e}_i^2. \qquad (3.6) $$

An alternative estimator uses the formula

$$ s^2 = \frac{1}{n-k}\sum_{i=1}^{n}\hat{e}_i^2. \qquad (3.7) $$

A justification for the latter choice will be provided in Section 3.8.

A measure of the explained variation relative to the total variation is the coefficient of
determination or R-squared:

$$ R^2 = \frac{\sum_{i=1}^{n}(\hat{y}_i - \bar{y})^2}{\sum_{i=1}^{n}(y_i - \bar{y})^2} = 1 - \frac{\hat\sigma^2}{\hat\sigma_y^2} $$

where

$$ \hat\sigma_y^2 = \frac{1}{n}\sum_{i=1}^{n}(y_i - \bar{y})^2 $$

is the sample variance of y_i. The R² is frequently mislabeled as a measure of "fit". It is an
inappropriate label as the value of R² does not help interpret the parameter estimates β̂ or test
statistics concerning β. Instead, it should be viewed as an estimator of the population parameter

$$ \rho^2 = \frac{\mathrm{var}(x_i'\beta)}{\mathrm{var}(y_i)} = 1 - \frac{\sigma^2}{\sigma_y^2} $$

where σ_y² = var(y_i). An alternative estimator of ρ² proposed by Theil, called "R-bar-squared", is

$$ \bar{R}^2 = 1 - \frac{s^2}{\tilde\sigma_y^2} $$

where

$$ \tilde\sigma_y^2 = \frac{1}{n-1}\sum_{i=1}^{n}(y_i - \bar{y})^2. $$

Theil's estimator R̄² is a ratio of adjusted variance estimators, and therefore is expected to be a
better estimator of ρ² than the unadjusted estimator R².
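To make these definitions concrete, here is a brief sketch, added for illustration only, that computes the OLS fit, both variance estimators (3.6) and (3.7), and both R-squared measures; the data-generating process is a made-up assumption for the example.

```python
import numpy as np

rng = np.random.default_rng(4)
n, k = 100, 3

# Hypothetical data: intercept plus two regressors
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
beta = np.array([1.0, 0.5, -0.25])
y = X @ beta + rng.normal(size=n)

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)      # OLS estimator (3.4)
e_hat = y - X @ beta_hat                          # residuals

sigma2_hat = e_hat @ e_hat / n                    # method of moments estimator (3.6)
s2 = e_hat @ e_hat / (n - k)                      # degrees-of-freedom adjusted (3.7)

sigma2_y = np.mean((y - y.mean()) ** 2)
R2 = 1 - sigma2_hat / sigma2_y                    # R-squared
R2_bar = 1 - s2 / (np.sum((y - y.mean()) ** 2) / (n - 1))   # Theil's R-bar-squared

print(beta_hat, sigma2_hat, s2, R2, R2_bar)
```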
3.4 Normal Regression Model
Another motivation for the least-squares estimator can be obtained from the normal regression
model. This is the linear regression model with the additional assumption that the error e_i is
independent of x_i and has the distribution N(0, σ²). This is a parametric model, where likelihood
methods can be used for estimation, testing, and distribution theory.

The log-likelihood function for the normal regression model is

$$ \log L(\beta,\sigma^2) = \sum_{i=1}^{n}\log\left(\frac{1}{(2\pi\sigma^2)^{1/2}}\exp\left(-\frac{1}{2\sigma^2}\left(y_i - x_i'\beta\right)^2\right)\right)
                          = -\frac{n}{2}\log\left(2\pi\sigma^2\right) - \frac{1}{2\sigma^2} S_n(\beta). $$

The MLE (β̂, σ̂²) maximizes log L(β, σ²). Since log L(β, σ²) is a function of β only through the sum
of squared errors S_n(β), maximizing the likelihood is identical to minimizing S_n(β). Hence the
MLE for β equals the OLS estimator.

Plugging β̂ into the log-likelihood we obtain

$$ \log L\left(\hat\beta,\sigma^2\right) = -\frac{n}{2}\log\left(2\pi\sigma^2\right) - \frac{1}{2\sigma^2}\sum_{i=1}^{n}\hat{e}_i^2. $$

Maximization with respect to σ² yields the first-order condition

$$ \frac{\partial}{\partial\sigma^2}\log L\left(\hat\beta,\hat\sigma^2\right) = -\frac{n}{2\hat\sigma^2} + \frac{1}{2\left(\hat\sigma^2\right)^2}\sum_{i=1}^{n}\hat{e}_i^2 = 0. $$

Solving for σ̂² yields the method of moments estimator (3.6). Thus the MLE (β̂, σ̂²) for the normal
regression model is identical to the method of moments estimators. Due to this equivalence, the
OLS estimator β̂ is frequently referred to as the Gaussian MLE.
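A short numerical check, added here as an illustration with hypothetical data: the log-likelihood below is largest at the OLS coefficients, and the implied variance estimate equals the method of moments estimator (3.6).

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=n)

def log_likelihood(beta, sigma2):
    # Normal regression log-likelihood for given (beta, sigma2)
    e = y - X @ beta
    return -n / 2 * np.log(2 * np.pi * sigma2) - (e @ e) / (2 * sigma2)

beta_ols = np.linalg.solve(X.T @ X, X.T @ y)
sigma2_mle = np.mean((y - X @ beta_ols) ** 2)        # equals (3.6)

# Perturbing beta away from the OLS value lowers the likelihood
print(log_likelihood(beta_ols, sigma2_mle))
print(log_likelihood(beta_ols + 0.1, sigma2_mle))    # smaller
```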
3.5 Model in Matrix Notation
For many purposes, including computation, it is convenient to write the model and statistics in
matrix notation. The linear equation (2.15) is a system of n equations, one for each observation.
We can stack these n equations together as

y_1 = x_1'β + e_1
y_2 = x_2'β + e_2
⋮
y_n = x_n'β + e_n.

Now define

$$ y = \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix}, \quad
   X = \begin{pmatrix} x_1' \\ x_2' \\ \vdots \\ x_n' \end{pmatrix}, \quad
   e = \begin{pmatrix} e_1 \\ e_2 \\ \vdots \\ e_n \end{pmatrix}. $$

Observe that y and e are n × 1 vectors, and X is an n × k matrix. Then the system of n equations
can be compactly written in the single equation

y = Xβ + e.

Sample sums can also be written in matrix notation. For example

$$ \sum_{i=1}^{n} x_i x_i' = X'X, \qquad \sum_{i=1}^{n} x_i y_i = X'y. $$

Thus the estimator (3.4), residual vector, and sample error variance can be written as

β̂ = (X'X)^{-1}(X'y)
ê = y − Xβ̂
σ̂² = n^{-1} ê'ê.

A useful result is obtained by inserting y = Xβ + e into the formula for β̂ to obtain

$$ \hat\beta = \left(X'X\right)^{-1}\left(X'(X\beta + e)\right) = \left(X'X\right)^{-1}X'X\beta + \left(X'X\right)^{-1}\left(X'e\right) = \beta + \left(X'X\right)^{-1}X'e. \qquad (3.8) $$
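In matrix form the estimator is a one-line computation. The following minimal sketch is an illustration added here (the data are simulated under assumptions chosen only for the example) and also verifies the decomposition (3.8).

```python
import numpy as np

rng = np.random.default_rng(6)
n, k = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
e = rng.normal(size=n)
beta = np.array([0.5, 1.0, -1.0])
y = X @ beta + e

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)   # (X'X)^{-1} X'y
e_hat = y - X @ beta_hat                       # residual vector
sigma2_hat = e_hat @ e_hat / n                 # sample error variance

# Decomposition (3.8): beta_hat = beta + (X'X)^{-1} X'e
print(np.allclose(beta_hat, beta + np.linalg.solve(X.T @ X, X.T @ e)))   # True
```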
3.6 Projection Matrices
Define the matrices

$$ P = X\left(X'X\right)^{-1}X' $$

and

$$ M = I_n - X\left(X'X\right)^{-1}X' = I_n - P $$

where I_n is the n × n identity matrix. P and M are called projection matrices due to the
property that for any matrix Z which can be written as Z = XΓ for some matrix Γ (we say that
Z lies in the range space of X), then

$$ PZ = PX\Gamma = X\left(X'X\right)^{-1}X'X\Gamma = X\Gamma = Z $$

and

$$ MZ = (I_n - P)Z = Z - PZ = Z - Z = 0. $$

As an important example of this property, partition the matrix X into two matrices X_1 and
X_2 so that

X = [X_1  X_2].

Then PX_1 = X_1 and MX_1 = 0. It follows that MX = 0 and MP = 0, so M and P are
orthogonal.

The matrices P and M are symmetric and idempotent¹. To see that P is symmetric,

$$ P' = \left(X\left(X'X\right)^{-1}X'\right)' = \left(X'\right)'\left(\left(X'X\right)^{-1}\right)'\left(X\right)' = X\left(\left(X'X\right)'\right)^{-1}X' = X\left(\left(X\right)'\left(X'\right)'\right)^{-1}X' = P. $$

To establish that it is idempotent,

$$ PP = \left(X\left(X'X\right)^{-1}X'\right)\left(X\left(X'X\right)^{-1}X'\right) = X\left(X'X\right)^{-1}X'X\left(X'X\right)^{-1}X' = X\left(X'X\right)^{-1}X' = P. $$

Similarly,

$$ M' = (I_n - P)' = I_n - P = M $$

and

$$ MM = M(I_n - P) = M - MP = M, $$

since MP = 0.

Another useful property is that

tr P = k    (3.9)
tr M = n − k    (3.10)

(See Appendix A.4 for definition and properties of the trace operator.) To show (3.9) and (3.10),

$$ \mathrm{tr}\, P = \mathrm{tr}\left(X\left(X'X\right)^{-1}X'\right) = \mathrm{tr}\left(\left(X'X\right)^{-1}X'X\right) = \mathrm{tr}\left(I_k\right) = k, $$

and

$$ \mathrm{tr}\, M = \mathrm{tr}\left(I_n - P\right) = \mathrm{tr}\left(I_n\right) - \mathrm{tr}\left(P\right) = n - k. $$

Given the definitions of P and M, observe that

$$ \hat{y} = X\hat\beta = X\left(X'X\right)^{-1}X'y = Py $$

and

$$ \hat{e} = y - X\hat\beta = y - Py = My. \qquad (3.11) $$

Furthermore, since y = Xβ + e and MX = 0, then

$$ \hat{e} = M(X\beta + e) = Me. \qquad (3.12) $$

Another way of writing (3.11) is

$$ y = (P + M)y = Py + My = \hat{y} + \hat{e}. $$

This decomposition is orthogonal, that is

$$ \hat{y}'\hat{e} = (Py)'(My) = y'PMy = 0. $$

¹ A matrix P is symmetric if P' = P. A matrix P is idempotent if PP = P. See Appendix A.8.
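The algebraic properties of P and M are easy to verify numerically. The sketch below is an illustration added here (hypothetical simulated design), checking symmetry, idempotency, the trace results (3.9)-(3.10), and the orthogonal decomposition y = ŷ + ê.

```python
import numpy as np

rng = np.random.default_rng(7)
n, k = 20, 4
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
y = rng.normal(size=n)

P = X @ np.linalg.solve(X.T @ X, X.T)    # P = X (X'X)^{-1} X'
M = np.eye(n) - P                        # M = I_n - P

print(np.allclose(P, P.T), np.allclose(P @ P, P))                    # symmetric, idempotent
print(np.isclose(np.trace(P), k), np.isclose(np.trace(M), n - k))    # (3.9) and (3.10)

y_hat, e_hat = P @ y, M @ y
print(np.allclose(y, y_hat + e_hat), np.isclose(y_hat @ e_hat, 0.0)) # orthogonal split
```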
3.7 Residual Regression
Partition

X = [X_1  X_2]

and

$$ \beta = \begin{pmatrix} \beta_1 \\ \beta_2 \end{pmatrix}. $$

Then the regression model can be rewritten as

$$ y = X_1\beta_1 + X_2\beta_2 + e. \qquad (3.13) $$

Observe that the OLS estimator of β = (β_1', β_2')' can be obtained by regression of y on X = [X_1
X_2]. OLS estimation can be written as

$$ y = X_1\hat\beta_1 + X_2\hat\beta_2 + \hat{e}. \qquad (3.14) $$

Suppose that we are primarily interested in β_2, not in β_1, so we are only interested in obtaining
the OLS sub-component β̂_2. In this section we derive an alternative expression for β̂_2 which does
not involve estimation of the full model.

Define

$$ M_1 = I_n - X_1\left(X_1'X_1\right)^{-1}X_1'. $$

Recalling the definition M = I_n − X(X'X)^{-1}X', observe that X_1'M = 0 and thus

$$ M_1 M = M - X_1\left(X_1'X_1\right)^{-1}X_1'M = M. $$

It follows that

$$ M_1\hat{e} = M_1 M e = M e = \hat{e}. $$

Using this result, if we premultiply (3.14) by M_1 we obtain

$$ M_1 y = M_1 X_1\hat\beta_1 + M_1 X_2\hat\beta_2 + M_1\hat{e} = M_1 X_2\hat\beta_2 + \hat{e} \qquad (3.15) $$

the second equality since M_1 X_1 = 0. Premultiplying by X_2' and recalling that X_2'ê = 0, we
obtain

$$ X_2'M_1 y = X_2'M_1 X_2\hat\beta_2 + X_2'\hat{e} = X_2'M_1 X_2\hat\beta_2. $$

Solving,

$$ \hat\beta_2 = \left(X_2'M_1 X_2\right)^{-1}\left(X_2'M_1 y\right), $$

an alternative expression for β̂_2.

Now, define

X̃_2 = M_1 X_2    (3.16)
ỹ = M_1 y,    (3.17)

the least-squares residuals from the regression of X_2 and y, respectively, on the matrix X_1 only.

Since the matrix M_1 is idempotent, M_1 = M_1 M_1 and thus

$$ \hat\beta_2 = \left(X_2'M_1 X_2\right)^{-1}\left(X_2'M_1 y\right)
              = \left(X_2'M_1 M_1 X_2\right)^{-1}\left(X_2'M_1 M_1 y\right)
              = \left(\tilde{X}_2'\tilde{X}_2\right)^{-1}\left(\tilde{X}_2'\tilde{y}\right). $$

This shows that β̂_2 can be calculated by the OLS regression of ỹ on X̃_2. This technique is called
residual regression.

Furthermore, using the definitions (3.16) and (3.17), expression (3.15) can be equivalently written
as

$$ \tilde{y} = \tilde{X}_2\hat\beta_2 + \hat{e}. $$

Since β̂_2 is precisely the OLS coefficient from a regression of ỹ on X̃_2, this shows that the residual
vector from this regression is ê, numerically the same residual as from the joint regression (3.14).
We have proven the following theorem.

Theorem 3.7.1 (Frisch-Waugh-Lovell). In the model (3.13), the OLS estimator of β_2 and the
OLS residuals ê may be equivalently computed by either the OLS regression (3.14) or via the following
algorithm:

1. Regress y on X_1, obtain residuals ỹ;
2. Regress X_2 on X_1, obtain residuals X̃_2;
3. Regress ỹ on X̃_2, obtain OLS estimates β̂_2 and residuals ê.

In some contexts, the FWL theorem can be used to speed computation, but in most cases
there is little computational advantage to using the two-step algorithm. Rather, the primary use
is theoretical.

A common application of the FWL theorem, which you may have seen in an introductory
econometrics course, is the demeaning formula for regression. Partition X = [X_1 X_2] where
X_1 = ι is a vector of ones, and X_2 is the vector of observed regressors. In this case,

$$ M_1 = I_n - \iota\left(\iota'\iota\right)^{-1}\iota'. $$

Observe that

$$ \tilde{X}_2 = M_1 X_2 = X_2 - \iota\left(\iota'\iota\right)^{-1}\iota'X_2 = X_2 - \bar{X}_2 $$

and

$$ \tilde{y} = M_1 y = y - \iota\left(\iota'\iota\right)^{-1}\iota'y = y - \bar{y}, $$

which are "demeaned". The FWL theorem says that β̂_2 is the OLS estimate from a regression of
y_i − ȳ on x_{2i} − x̄_2:

$$ \hat\beta_2 = \left(\sum_{i=1}^{n}\left(x_{2i}-\bar{x}_2\right)\left(x_{2i}-\bar{x}_2\right)'\right)^{-1}\left(\sum_{i=1}^{n}\left(x_{2i}-\bar{x}_2\right)\left(y_i-\bar{y}\right)\right). $$

Thus the OLS estimator for the slope coefficients is a regression with demeaned data.
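The Frisch-Waugh-Lovell result is easy to confirm numerically. The sketch below is an illustration added here with hypothetical data: it compares the full-regression estimate of β_2 with the residual-regression estimate and checks that the residual vectors coincide.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 200
X1 = np.column_stack([np.ones(n), rng.normal(size=n)])   # includes the intercept
X2 = rng.normal(size=(n, 2))
y = X1 @ np.array([1.0, 0.5]) + X2 @ np.array([2.0, -1.0]) + rng.normal(size=n)

# Full regression of y on [X1 X2]
X = np.column_stack([X1, X2])
b_full = np.linalg.solve(X.T @ X, X.T @ y)
e_full = y - X @ b_full

# Residual regression: partial out X1, then regress the residuals
M1 = np.eye(n) - X1 @ np.linalg.solve(X1.T @ X1, X1.T)
X2_t, y_t = M1 @ X2, M1 @ y
b2 = np.linalg.solve(X2_t.T @ X2_t, X2_t.T @ y_t)
e_fwl = y_t - X2_t @ b2

print(np.allclose(b_full[2:], b2))      # same beta_2
print(np.allclose(e_full, e_fwl))       # same residual vector
```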
3.8 Bias and Variance
In this and the following section we consider the special case of the linear regression model
(2.8)-(2.9). In this section we derive the small sample conditional mean and variance of the OLS
estimator.
By the independence of the observations and (2.9), observe that

$$E(e \mid X) = \begin{pmatrix} \vdots \\ E(e_i \mid X) \\ \vdots \end{pmatrix} = \begin{pmatrix} \vdots \\ E(e_i \mid x_i) \\ \vdots \end{pmatrix} = 0. \qquad (3.18)$$

Using (3.8), the properties of conditional expectations, and (3.18), we can calculate

$$E\left(\hat{\beta} - \beta \mid X\right) = E\left(\left(X'X\right)^{-1}X'e \mid X\right) = \left(X'X\right)^{-1}X'E(e \mid X) = 0.$$

We have shown that

$$E\left(\hat{\beta} \mid X\right) = \beta \qquad (3.19)$$

which implies

$$E\left(\hat{\beta}\right) = \beta$$

and thus the OLS estimator $\hat{\beta}$ is unbiased for $\beta$.

Next, for any random vector $Z$ define the covariance matrix

$$\mathrm{var}(Z) = E(Z - EZ)(Z - EZ)' = EZZ' - (EZ)(EZ)'$$

and for any pair $(Z, X)$ define the conditional covariance matrix

$$\mathrm{var}(Z \mid X) = E\left[\left(Z - E(Z \mid X)\right)\left(Z - E(Z \mid X)\right)' \mid X\right].$$

Then given (3.19) we see that

$$\mathrm{var}\left(\hat{\beta} \mid X\right) = E\left[\left(\hat{\beta} - \beta\right)\left(\hat{\beta} - \beta\right)' \mid X\right] = \left(X'X\right)^{-1}X'DX\left(X'X\right)^{-1}$$

where

$$D = E\left(ee' \mid X\right).$$

The $i$th diagonal element of $D$ is

$$E\left(e_i^2 \mid X\right) = E\left(e_i^2 \mid x_i\right) = \sigma_i^2$$

while the $ij$th off-diagonal element of $D$ is

$$E\left(e_i e_j \mid X\right) = E\left(e_i \mid x_i\right)E\left(e_j \mid x_j\right) = 0.$$

Thus $D$ is a diagonal matrix with $i$th diagonal element $\sigma_i^2$:

$$D = \mathrm{diag}\left(\sigma_1^2, \ldots, \sigma_n^2\right) = \begin{pmatrix} \sigma_1^2 & 0 & \cdots & 0 \\ 0 & \sigma_2^2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \sigma_n^2 \end{pmatrix}. \qquad (3.20)$$

It is useful to note that

$$X'DX = \sum_{i=1}^n x_i x_i' \sigma_i^2,$$

a weighted version of $X'X$.

In the special case of the linear homoskedastic regression model, $\sigma_i^2 = \sigma^2$ and we have the simplifications $D = I_n\sigma^2$, $X'DX = X'X\sigma^2$, and

$$\mathrm{var}\left(\hat{\beta} \mid X\right) = \left(X'X\right)^{-1}\sigma^2.$$

We now calculate the finite sample bias of the method of moments estimator $\hat{\sigma}^2$ for $\sigma^2$. From (3.12), the properties of projection matrices and the trace operator observe that

$$\hat{\sigma}^2 = \frac{1}{n}\hat{e}'\hat{e} = \frac{1}{n}e'MMe = \frac{1}{n}e'Me = \frac{1}{n}\mathrm{tr}\left(e'Me\right) = \frac{1}{n}\mathrm{tr}\left(Mee'\right).$$

Then

$$E\left(\hat{\sigma}^2 \mid X\right) = \frac{1}{n}\mathrm{tr}\left(E\left(Mee' \mid X\right)\right) = \frac{1}{n}\mathrm{tr}\left(M E\left(ee' \mid X\right)\right) = \frac{1}{n}\mathrm{tr}\left(MD\right). \qquad (3.21)$$

Adding the assumption of conditional homoskedasticity $E\left(e_i^2 \mid x_i\right) = \sigma^2$, then $D = I_n\sigma^2$ so this simplifies to

$$E\left(\hat{\sigma}^2 \mid X\right) = \frac{1}{n}\mathrm{tr}\left(M\sigma^2\right) = \sigma^2\left(\frac{n-k}{n}\right),$$

the final equality by (3.10). Thus $\hat{\sigma}^2$ is biased towards zero. As an alternative, the estimator $s^2$ defined in (3.7) is unbiased for $\sigma^2$ by this calculation. This is the justification for the common preference of $s^2$ over $\hat{\sigma}^2$ in empirical practice. It is important to remember, however, that this estimator is only unbiased in the special case of the homoskedastic linear regression model. It is not unbiased in the absence of homoskedasticity or in the projection model.
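A small simulation makes the bias calculation concrete: under homoskedastic errors the average of $\hat{\sigma}^2$ across samples should be close to $\sigma^2(n-k)/n$, while the average of $s^2$ should be close to $\sigma^2$. The sketch below (Python/NumPy; the simulation design and numbers are illustrative assumptions) checks this.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, sigma2, reps = 30, 5, 2.0, 5000
sig2_hat, s2 = np.empty(reps), np.empty(reps)

for r in range(reps):
    X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
    e = np.sqrt(sigma2) * rng.normal(size=n)
    y = X @ np.ones(k) + e                       # true beta = (1, ..., 1)
    ehat = y - X @ np.linalg.solve(X.T @ X, X.T @ y)
    sig2_hat[r] = ehat @ ehat / n                # method of moments estimator
    s2[r] = ehat @ ehat / (n - k)                # bias-corrected estimator

print(sig2_hat.mean(), sigma2 * (n - k) / n)     # approximately equal: biased toward zero
print(s2.mean(), sigma2)                         # approximately equal: unbiased
```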
3.9 Gauss-Markov Theorem
In this section we restrict attention to the homoskedastic linear regression model, which is (2.10)-(2.12). Now consider the class of estimators of $\beta$ which are linear functions of the vector $y$, and thus can be written as

$$\tilde{\beta} = A'y$$

where $A$ is an $n \times k$ function of $X$. The least-squares estimator is the special case obtained by setting $A = X(X'X)^{-1}$. What is the best choice of $A$? The Gauss-Markov theorem, which we now present, says that the least-squares estimator is the best choice, as it yields the smallest variance among all unbiased linear estimators.

By a calculation similar to those of the previous section,

$$E\left(\tilde{\beta} \mid X\right) = A'X\beta,$$

so $\tilde{\beta}$ is unbiased if (and only if) $A'X = I_k$. In this case, we can write

$$\tilde{\beta} = A'y = A'(X\beta + e) = \beta + A'e.$$

Thus since $\mathrm{var}(e \mid X) = I_n\sigma^2$ under homoskedasticity,

$$\mathrm{var}\left(\tilde{\beta} \mid X\right) = A'\mathrm{var}(e \mid X)A = A'A\sigma^2.$$

The best linear estimator is obtained by finding the matrix $A$ for which this variance is the smallest in the positive definite sense. The Gauss-Markov theorem is the famous solution to this problem.

Theorem 3.9.1 Gauss-Markov. In the homoskedastic linear regression model, the best (minimum-variance) unbiased linear estimator is OLS.

The Gauss-Markov theorem is an efficiency justification for the least-squares estimator, but it is quite limited in scope. Not only has the class of models been restricted to homoskedastic linear regressions, the class of potential estimators has been restricted to linear unbiased estimators. This latter restriction is particularly unsatisfactory as the theorem leaves open the possibility that a non-linear or biased estimator could have lower mean squared error than the least-squares estimator.
3.10 Semiparametric Efficiency

In the previous section we presented the Gauss-Markov theorem as a limited efficiency justification for the least-squares estimator. A broader justification is provided in Chamberlain (1987), who established that in the projection model the OLS estimator has the smallest asymptotic mean-squared error among feasible estimators. This property is called semiparametric efficiency, and is a strong justification for the least-squares estimator. We discuss the intuition behind his result in this section.

Suppose that the joint distribution of $(y_i, x_i)$ is discrete. That is, for finite $r$,

$$\mathrm{P}\left(y_i = \tau_j, \ x_i = \xi_j\right) = \pi_j, \quad j = 1, \ldots, r$$

for some constants $\tau_j$, $\xi_j$, and $\pi_j$. Assume that the $\tau_j$ and $\xi_j$ are known, but the $\pi_j$ are unknown. (We know the values $y_i$ and $x_i$ can take, but we don't know the probabilities.)

In this discrete setting, the definition (3.3) can be rewritten as

$$\beta = \left(\sum_{j=1}^r \pi_j \xi_j \xi_j'\right)^{-1}\left(\sum_{j=1}^r \pi_j \xi_j \tau_j\right). \qquad (3.22)$$

Thus $\beta$ is a function of $(\pi_1, \ldots, \pi_r)$.

As the data are multinomial, the maximum likelihood estimator (MLE) is

$$\hat{\pi}_j = \frac{1}{n}\sum_{i=1}^n 1\left(y_i = \tau_j\right)1\left(x_i = \xi_j\right)$$

for $j = 1, \ldots, r$, where $1(\cdot)$ is the indicator function. That is, $\hat{\pi}_j$ is the percentage of the observations which fall in each category. The MLE $\hat{\beta}_{mle}$ for $\beta$ is then the analog of (3.22) with the parameters $\pi_j$ replaced by the estimates $\hat{\pi}_j$:

$$\hat{\beta}_{mle} = \left(\sum_{j=1}^r \hat{\pi}_j \xi_j \xi_j'\right)^{-1}\left(\sum_{j=1}^r \hat{\pi}_j \xi_j \tau_j\right).$$

Substituting in the expressions for $\hat{\pi}_j$,

$$\sum_{j=1}^r \hat{\pi}_j \xi_j \xi_j' = \sum_{j=1}^r \frac{1}{n}\sum_{i=1}^n 1\left(y_i = \tau_j\right)1\left(x_i = \xi_j\right)\xi_j\xi_j' = \frac{1}{n}\sum_{i=1}^n\sum_{j=1}^r 1\left(y_i = \tau_j\right)1\left(x_i = \xi_j\right)x_ix_i' = \frac{1}{n}\sum_{i=1}^n x_ix_i'$$

and

$$\sum_{j=1}^r \hat{\pi}_j \xi_j \tau_j = \sum_{j=1}^r \frac{1}{n}\sum_{i=1}^n 1\left(y_i = \tau_j\right)1\left(x_i = \xi_j\right)\xi_j\tau_j = \frac{1}{n}\sum_{i=1}^n\sum_{j=1}^r 1\left(y_i = \tau_j\right)1\left(x_i = \xi_j\right)x_iy_i = \frac{1}{n}\sum_{i=1}^n x_iy_i.$$

Thus

$$\hat{\beta}_{mle} = \left(\frac{1}{n}\sum_{i=1}^n x_ix_i'\right)^{-1}\left(\frac{1}{n}\sum_{i=1}^n x_iy_i\right) = \hat{\beta}_{ols}.$$

In other words, if the data have a discrete distribution, the maximum likelihood estimator is identical to the OLS estimator. Since this is a regular parametric model the MLE is asymptotically efficient (see Appendix D). It follows that the OLS estimator is asymptotically efficient.

The hard part of the argument (which was rigorously developed in Chamberlain's paper, but we do not present it here) is the extension to the case of continuously-distributed data. The intuition is that all continuous distributions can be arbitrarily well approximated by some multinomial distribution, and for any multinomial distribution the moment estimator is asymptotically efficient. Formalizing this intuition using a rigorous mathematical argument, Chamberlain proved that the OLS estimator (3.4) is asymptotically semiparametrically efficient for the parameter $\beta$ defined in (2.13) for the class of models satisfying Assumption 2.6.1.
3.11 Multicollinearity

If $\mathrm{rank}(X'X) < k$, then $\hat{\beta}$ is not defined.² This is called strict multicollinearity. This happens when the columns of $X$ are linearly dependent, i.e., there is some $\alpha \neq 0$ such that $X\alpha = 0$. Most commonly, this arises when sets of regressors are included which are identically related. For example, if $X$ includes both the logs of two prices and the log of the relative prices, $\log(p_1)$, $\log(p_2)$ and $\log(p_1/p_2)$. When this happens, the applied researcher quickly discovers the error as the statistical software will be unable to construct $(X'X)^{-1}$. Since the error is discovered quickly, this is rarely a problem for applied econometric practice.

The more relevant issue is near multicollinearity, which is often called "multicollinearity" for brevity. This is the situation when the $X'X$ matrix is near singular, when the columns of $X$ are close to linearly dependent. This definition is not precise, because we have not said what it means for a matrix to be "near singular". This is one difficulty with the definition and interpretation of multicollinearity.

One implication of near singularity of matrices is that the numerical reliability of the calculations is reduced. In extreme cases it is possible that the reported calculations will be in error.

A more relevant implication of near multicollinearity is that individual coefficient estimates will be imprecise. We can see this most simply in a homoskedastic linear regression model with two regressors

$$y_i = x_{1i}\beta_1 + x_{2i}\beta_2 + e_i,$$

and

$$\frac{1}{n}X'X = \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix}.$$

In this case

$$\mathrm{var}\left(\hat{\beta} \mid X\right) = \frac{\sigma^2}{n}\begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix}^{-1} = \frac{\sigma^2}{n\left(1 - \rho^2\right)}\begin{pmatrix} 1 & -\rho \\ -\rho & 1 \end{pmatrix}.$$

The correlation $\rho$ indexes collinearity, since as $\rho$ approaches 1 the matrix becomes singular. We can see the effect of collinearity on precision by observing that the asymptotic variance of a coefficient estimate $\sigma^2\left(1 - \rho^2\right)^{-1}$ approaches infinity as $\rho$ approaches 1. Thus the more collinear are the regressors, the worse the precision of the individual coefficient estimates.

What is happening is that when the regressors are highly dependent, it is statistically difficult to disentangle the impact of $\beta_1$ from that of $\beta_2$. As a consequence, the precision of individual estimates is reduced.

² See Appendix A.5 for the definition of the rank of a matrix.
3.12 Influential Observations

The $i$th observation is influential on the least-squares estimate if the deletion of the observation from the sample results in a meaningful change in $\hat{\beta}$. To investigate the possibility of influential observations, we define the leave-one-out estimator as that obtained from the sample excluding the $i$th observation. The leave-one-out OLS estimator of $\beta$ is

$$\hat{\beta}_{(-i)} = \left(X_{(-i)}'X_{(-i)}\right)^{-1}X_{(-i)}'y_{(-i)} \qquad (3.23)$$

where $X_{(-i)}$ and $y_{(-i)}$ are the data matrices omitting the $i$th row. A convenient alternative expression (derived in Section 3.13) is

$$\hat{\beta}_{(-i)} = \hat{\beta} - (1 - h_i)^{-1}\left(X'X\right)^{-1}x_i\hat{e}_i \qquad (3.24)$$

where

$$h_i = x_i'\left(X'X\right)^{-1}x_i$$

is the $i$th diagonal element of the projection matrix $X(X'X)^{-1}X'$.

We can also define the leave-one-out residual

$$\hat{e}_{i,-i} = y_i - x_i'\hat{\beta}_{(-i)} = (1 - h_i)^{-1}\hat{e}_i. \qquad (3.25)$$

A simple comparison yields that

$$\hat{e}_{i,-i} - \hat{e}_i = (1 - h_i)^{-1}h_i\hat{e}_i. \qquad (3.26)$$

As we can see, the change in the coefficient estimate by deletion of the $i$th observation depends critically on the magnitude of $h_i$. The $h_i$ take values in $[0,1]$ and sum to $k$. If the $i$th observation has a large value of $h_i$, then this observation is a leverage point and has the potential to be an influential observation. Investigations into the presence of influential observations can plot the values of (3.26), which is considerably more informative than plots of the uncorrected residuals $\hat{e}_i$.
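The leverage values and leave-one-out residuals can be computed directly from formulas (3.24)-(3.25) without running $n$ separate regressions. The sketch below (Python/NumPy; the function and variable names are illustrative) computes $h_i$ from the projection matrix and the leave-one-out residuals via the shortcut formula.

```python
import numpy as np

def leverage_and_loo_residuals(X, y):
    """Return leverage values h_i and leave-one-out residuals (1 - h_i)^{-1} * e_i."""
    XtX_inv = np.linalg.inv(X.T @ X)
    beta_hat = XtX_inv @ X.T @ y
    e_hat = y - X @ beta_hat
    h = np.einsum('ij,jk,ik->i', X, XtX_inv, X)    # diagonal of X (X'X)^{-1} X'
    return h, e_hat / (1.0 - h)

rng = np.random.default_rng(2)
X = np.column_stack([np.ones(50), rng.normal(size=(50, 2))])
y = X @ np.array([1.0, 0.5, -0.2]) + rng.normal(size=50)
h, e_loo = leverage_and_loo_residuals(X, y)
print(h.sum())    # the leverage values sum to k = 3
```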
3.13 Technical Proofs

Proof of Theorem 3.9.1. Let $A$ be any $n \times k$ function of $X$ such that $A'X = I_k$. The variance of the least-squares estimator is $(X'X)^{-1}\sigma^2$ and that of $A'y$ is $A'A\sigma^2$. It is sufficient to show that the difference $A'A - (X'X)^{-1}$ is positive semi-definite. Set $C = A - X(X'X)^{-1}$. Note that $X'C = 0$. Then we calculate that

$$A'A - \left(X'X\right)^{-1} = \left(C + X\left(X'X\right)^{-1}\right)'\left(C + X\left(X'X\right)^{-1}\right) - \left(X'X\right)^{-1}$$
$$= C'C + C'X\left(X'X\right)^{-1} + \left(X'X\right)^{-1}X'C + \left(X'X\right)^{-1}X'X\left(X'X\right)^{-1} - \left(X'X\right)^{-1}$$
$$= C'C.$$

The matrix $C'C$ is positive semi-definite (see Appendix A.7) as required.

Proof of Equation (3.24). Equation (A.2) in Appendix A.5 states that for nonsingular $A$ and vector $b$

$$\left(A - bb'\right)^{-1} = A^{-1} + \left(1 - b'A^{-1}b\right)^{-1}A^{-1}bb'A^{-1}.$$

This implies

$$\left(X'X - x_ix_i'\right)^{-1} = \left(X'X\right)^{-1} + (1 - h_i)^{-1}\left(X'X\right)^{-1}x_ix_i'\left(X'X\right)^{-1}$$

and thus

$$\hat{\beta}_{(-i)} = \left(X'X - x_ix_i'\right)^{-1}\left(X'y - x_iy_i\right)$$
$$= \left(X'X\right)^{-1}X'y - \left(X'X\right)^{-1}x_iy_i + (1 - h_i)^{-1}\left(X'X\right)^{-1}x_ix_i'\left(X'X\right)^{-1}\left(X'y - x_iy_i\right)$$
$$= \hat{\beta} - \left(X'X\right)^{-1}x_iy_i + (1 - h_i)^{-1}\left(X'X\right)^{-1}x_i\left(x_i'\hat{\beta} - h_iy_i\right)$$
$$= \hat{\beta} - (1 - h_i)^{-1}\left(X'X\right)^{-1}x_i\left((1 - h_i)y_i - x_i'\hat{\beta} + h_iy_i\right)$$
$$= \hat{\beta} - (1 - h_i)^{-1}\left(X'X\right)^{-1}x_i\hat{e}_i,$$

the third equality making the substitutions $\hat{\beta} = (X'X)^{-1}X'y$ and $h_i = x_i'(X'X)^{-1}x_i$, and the remainder collecting terms.
3.14 Exercises

1. Let $y$ be a random variable with $\mu = Ey$ and $\sigma^2 = \mathrm{var}(y)$. Define

$$g\left(y, \mu, \sigma^2\right) = \begin{pmatrix} y - \mu \\ (y - \mu)^2 - \sigma^2 \end{pmatrix}.$$

Let $(\hat{\mu}, \hat{\sigma}^2)$ be the values such that $\bar{g}_n(\hat{\mu}, \hat{\sigma}^2) = 0$ where $\bar{g}_n(m, s) = n^{-1}\sum_{i=1}^n g\left(y_i, m, s\right)$. Show that $\hat{\mu}$ and $\hat{\sigma}^2$ are the sample mean and variance.

2. Consider the OLS regression of the $n \times 1$ vector $y$ on the $n \times k$ matrix $X$. Consider an alternative set of regressors $Z = XC$, where $C$ is a $k \times k$ non-singular matrix. Thus, each column of $Z$ is a mixture of some of the columns of $X$. Compare the OLS estimates and residuals from the regression of $y$ on $X$ to the OLS estimates from the regression of $y$ on $Z$.

3. Let $\hat{e}$ be the OLS residual from a regression of $y$ on $X = [X_1 \ X_2]$. Find $X_2'\hat{e}$.

4. Let $\hat{e}$ be the OLS residual from a regression of $y$ on $X$. Find the OLS coefficient estimate from a regression of $\hat{e}$ on $X$.

5. Let $\hat{y} = X(X'X)^{-1}X'y$. Find the OLS coefficient estimate from a regression of $\hat{y}$ on $X$.

6. Prove that $R^2$ is the square of the simple correlation between $y$ and $\hat{y}$.

7. Explain the difference between $\frac{1}{n}\sum_{i=1}^n x_ix_i'$ and $E\left(x_ix_i'\right)$.

8. Let $\hat{\beta}_n = (X_n'X_n)^{-1}X_n'y_n$ denote the OLS estimate when $y_n$ is $n \times 1$ and $X_n$ is $n \times k$. A new observation $(y_{n+1}, x_{n+1})$ becomes available. Prove that the OLS estimate computed using this additional observation is

$$\hat{\beta}_{n+1} = \hat{\beta}_n + \frac{1}{1 + x_{n+1}'\left(X_n'X_n\right)^{-1}x_{n+1}}\left(X_n'X_n\right)^{-1}x_{n+1}\left(y_{n+1} - x_{n+1}'\hat{\beta}_n\right).$$

9. True or False. If $y_i = x_i\beta + e_i$, $x_i \in \mathbb{R}$, $E(e_i \mid x_i) = 0$, and $\hat{e}_i$ is the OLS residual from the regression of $y_i$ on $x_i$, then $\sum_{i=1}^n x_i^2\hat{e}_i = 0$.

10. A dummy variable takes on only the values 0 and 1. It is used for categorical data, such as an individual's gender. Let $d_1$ and $d_2$ be vectors of 1's and 0's, with the $i$th element of $d_1$ equaling 1 and that of $d_2$ equaling 0 if the person is a man, and the reverse if the person is a woman. Suppose that there are $n_1$ men and $n_2$ women in the sample. Consider the three regressions

$$y = \mu + d_1\alpha_1 + d_2\alpha_2 + e \qquad (3.27)$$
$$y = d_1\alpha_1 + d_2\alpha_2 + e \qquad (3.28)$$
$$y = \mu + d_1\phi + e \qquad (3.29)$$

(a) Can all three regressions (3.27), (3.28), and (3.29) be estimated by OLS? Explain if not.

(b) Compare regressions (3.28) and (3.29). Is one more general than the other? Explain the relationship between the parameters in (3.28) and (3.29).

(c) Compute $\iota'd_1$ and $\iota'd_2$, where $\iota$ is an $n \times 1$ vector of ones.

(d) Letting $\alpha = (\alpha_1 \ \alpha_2)'$, write equation (3.28) as $y = X\alpha + e$. Consider the assumption $E(x_ie_i) = 0$. Is there any content to this assumption in this setting?

11. Let $d_1$ and $d_2$ be defined as in the previous exercise.

(a) In the OLS regression

$$y = d_1\hat{\mu}_1 + d_2\hat{\mu}_2 + \hat{u},$$

show that $\hat{\mu}_1$ is the sample mean of the dependent variable among the men of the sample ($\bar{y}_1$), and that $\hat{\mu}_2$ is the sample mean among the women ($\bar{y}_2$).

(b) Describe in words the transformations

$$y^* = y - d_1\bar{y}_1 - d_2\bar{y}_2$$
$$X^* = X - d_1\bar{X}_1 - d_2\bar{X}_2.$$

(c) Compare $\tilde{\beta}$ from the OLS regression

$$y^* = X^*\tilde{\beta} + \tilde{e}$$

with $\hat{\beta}$ from the OLS regression

$$y = d_1\hat{\alpha}_1 + d_2\hat{\alpha}_2 + X\hat{\beta} + \hat{e}.$$

12. The data file cps85.dat contains a random sample of 528 individuals from the 1985 Current Population Survey by the U.S. Census Bureau. The file contains observations on nine variables, listed in the file cps85.pdf.

V1 = education (in years)
V2 = region of residence (coded 1 if South, 0 otherwise)
V3 = (coded 1 if nonwhite and non-Hispanic, 0 otherwise)
V4 = (coded 1 if Hispanic, 0 otherwise)
V5 = gender (coded 1 if female, 0 otherwise)
V6 = marital status (coded 1 if married, 0 otherwise)
V7 = potential labor market experience (in years)
V8 = union status (coded 1 if in union job, 0 otherwise)
V9 = hourly wage (in dollars)

Estimate a regression of wage $y_i$ on education $x_{1i}$, experience $x_{2i}$, and experience-squared $x_{3i} = x_{2i}^2$ (and a constant). Report the OLS estimates.

Let $\hat{e}_i$ be the OLS residual and $\hat{y}_i$ the predicted value from the regression. Numerically calculate the following:

(a) $\sum_{i=1}^n \hat{e}_i$
(b) $\sum_{i=1}^n x_{1i}\hat{e}_i$
(c) $\sum_{i=1}^n x_{2i}\hat{e}_i$
(d) $\sum_{i=1}^n x_{1i}^2\hat{e}_i$
(e) $\sum_{i=1}^n x_{2i}^2\hat{e}_i$
(f) $\sum_{i=1}^n \hat{y}_i\hat{e}_i$
(g) $\sum_{i=1}^n \hat{e}_i^2$
(h) $R^2$

Are these calculations consistent with the theoretical properties of OLS? Explain.

13. Use the data from the previous problem, and re-estimate the slope on education using the residual regression approach. Regress $y_i$ on $(1, x_{2i}, x_{2i}^2)$, regress $x_{1i}$ on $(1, x_{2i}, x_{2i}^2)$, and regress the residuals on the residuals. Report the estimate from this regression. Does it equal the value from the first OLS regression? Explain.

In the second-stage residual regression (the regression of the residuals on the residuals), calculate the equation $R^2$ and sum of squared errors. Do they equal the values from the initial OLS regression? Explain.
Chapter 4

Inference

4.1 Sampling Distribution

The least-squares estimator is a random vector, since it is a function of the random data, and therefore has a sampling distribution. In general, its distribution is a complicated function of the joint distribution of $(y_i, x_i)$ and the sample size $n$.

[Figure 4.1: Sampling Density of $\hat{\beta}_2$]

To illustrate the possibilities in one example, let $y_i$ and $x_i$ be drawn from the joint density

$$f(x, y) = \frac{1}{2\pi xy}\exp\left(-\frac{1}{2}\left(\log y - \log x\right)^2\right)\exp\left(-\frac{1}{2}\left(\log x\right)^2\right)$$

and let $\hat{\beta}_2$ be the slope coefficient estimate computed on observations from this joint density. Using simulation methods, the density function of $\hat{\beta}_2$ was computed and plotted in Figure 4.1 for sample sizes of $n = 25$, $n = 100$ and $n = 800$. The vertical line marks the true value of the projection coefficient.

From the figure we can see that the density functions are dispersed and highly non-normal. As the sample size increases the density becomes more concentrated about the population coefficient. To characterize the sampling distribution more fully, we will use the methods of asymptotic approximation. A review of the most important tools in asymptotic theory is contained in Appendix C.
4.2 Consistency

As discussed in Section 4.1, the OLS estimator $\hat{\beta}$ has a statistical distribution which is unknown. Asymptotic (large sample) methods approximate sampling distributions based on the limiting experiment that the sample size $n$ tends to infinity. A preliminary step in this approach is the demonstration that estimators converge in probability to the true parameters as the sample size gets large. This is illustrated in Figure 4.1 by the fact that the sampling densities become more concentrated as $n$ gets larger.

This derivation is based on three key components. First, the OLS estimator can be written as a continuous function of a set of sample moments. Second, the weak law of large numbers (WLLN, Theorem C.2.1) shows that sample moments converge in probability to population moments. And third, the continuous mapping theorem (Theorem C.4.1) states that continuous functions preserve convergence in probability. We now explain each step.

First, observe that the OLS estimator

$$\hat{\beta} = \left(\frac{1}{n}\sum_{i=1}^n x_ix_i'\right)^{-1}\left(\frac{1}{n}\sum_{i=1}^n x_iy_i\right)$$

is a function of the sample moments $\frac{1}{n}\sum_{i=1}^n x_ix_i'$ and $\frac{1}{n}\sum_{i=1}^n x_iy_i$.

Second, the WLLN states that for any iid random variable $u_i$ such that $E|u_i| < \infty$,

$$\frac{1}{n}\sum_{i=1}^n u_i \overset{p}{\longrightarrow} E(u_i)$$

as $n \to \infty$. We want to apply the WLLN to the elements of the sample moments $\frac{1}{n}\sum_{i=1}^n x_ix_i'$ and $\frac{1}{n}\sum_{i=1}^n x_iy_i$. This is okay since these are sample averages of the random variables $x_{ji}x_{li}$ and $x_{ji}y_i$. The WLLN says that for any $j$ and $l$,

$$\frac{1}{n}\sum_{i=1}^n x_{ji}x_{li} \overset{p}{\longrightarrow} E\left(x_{ji}x_{li}\right),$$

and

$$\frac{1}{n}\sum_{i=1}^n x_{ji}y_i \overset{p}{\longrightarrow} E\left(x_{ji}y_i\right).$$

Since this holds for all elements in the matrix $\frac{1}{n}\sum_{i=1}^n x_ix_i'$ and the vector $\frac{1}{n}\sum_{i=1}^n x_iy_i$, it follows that

$$\frac{1}{n}\sum_{i=1}^n x_ix_i' \overset{p}{\longrightarrow} E\left(x_ix_i'\right) = Q \qquad (4.1)$$

and

$$\frac{1}{n}\sum_{i=1}^n x_iy_i \overset{p}{\longrightarrow} E\left(x_iy_i\right). \qquad (4.2)$$

The continuous mapping theorem then implies that

$$\hat{\beta} = \left(\frac{1}{n}\sum_{i=1}^n x_ix_i'\right)^{-1}\left(\frac{1}{n}\sum_{i=1}^n x_iy_i\right) \overset{p}{\longrightarrow} \left(E\left(x_ix_i'\right)\right)^{-1}E\left(x_iy_i\right) = \beta. \qquad (4.3)$$

We have shown that $\hat{\beta} \overset{p}{\longrightarrow} \beta$. In words, the OLS estimator converges in probability to the true parameter vector $\beta$ as the sample size $n$ gets large.

For a slightly different demonstration of this result, recall that (3.8) implies that

$$\hat{\beta} - \beta = \left(\frac{1}{n}\sum_{i=1}^n x_ix_i'\right)^{-1}\left(\frac{1}{n}\sum_{i=1}^n x_ie_i\right). \qquad (4.4)$$

The WLLN and (2.17) imply

$$\frac{1}{n}\sum_{i=1}^n x_ie_i \overset{p}{\longrightarrow} E\left(x_ie_i\right) = 0. \qquad (4.5)$$

Therefore

$$\hat{\beta} - \beta = \left(\frac{1}{n}\sum_{i=1}^n x_ix_i'\right)^{-1}\left(\frac{1}{n}\sum_{i=1}^n x_ie_i\right) \overset{p}{\longrightarrow} Q^{-1}\cdot 0 = 0$$

which is the same as $\hat{\beta} \overset{p}{\longrightarrow} \beta$.

Theorem 4.2.1 Under Assumptions 2.6.1 and 3.1.1, $\hat{\beta} \overset{p}{\longrightarrow} \beta$ as $n \to \infty$. For further details on the proof see Section 4.17.

Theorem 4.2.1 states that the OLS estimator $\hat{\beta}$ converges in probability to $\beta$ as $n$ diverges to positive infinity. When an estimator converges in probability to the true value as the sample size diverges, we say that the estimator is consistent. This is a good property for an estimator to possess. It means that for any given joint distribution of $(y_i, x_i)$, there is a sample size $n$ sufficiently large such that the estimator $\hat{\beta}$ will be arbitrarily close to the true value $\beta$ with high probability. Consistency is also an important preliminary step in establishing other important asymptotic approximations.

We can similarly show that the estimators $\hat{\sigma}^2$ and $s^2$ are consistent for $\sigma^2$.

Theorem 4.2.2 Under Assumptions 2.6.1 and 3.1.1, $\hat{\sigma}^2 \overset{p}{\longrightarrow} \sigma^2$ and $s^2 \overset{p}{\longrightarrow} \sigma^2$ as $n \to \infty$. The proof is given in Section 4.17.

One implication of this theorem is that multiple estimators can be consistent for the same population parameter. While $\hat{\sigma}^2$ and $s^2$ are unequal in any given application, they are close in value when $n$ is very large.
4.3 Asymptotic Normality

We started this chapter discussing the need for an approximation to the distribution of the OLS estimator $\hat{\beta}$. In the previous section we showed that $\hat{\beta}$ converges in probability to $\beta$. This is a useful first step, but in itself does not provide a useful approximation to the distribution of the estimator. In this section we derive an approximation typically called the asymptotic distribution of the estimator.

The derivation starts by writing the estimator as a function of sample moments. One of the moments must be written as a sum of zero-mean random vectors and normalized so that the central limit theory can be applied. The steps are as follows.

Take equation (4.4) and multiply it by $\sqrt{n}$. This yields the expression

$$\sqrt{n}\left(\hat{\beta} - \beta\right) = \left(\frac{1}{n}\sum_{i=1}^n x_ix_i'\right)^{-1}\left(\frac{1}{\sqrt{n}}\sum_{i=1}^n x_ie_i\right). \qquad (4.6)$$

This shows that the normalized and centered estimator $\sqrt{n}\left(\hat{\beta} - \beta\right)$ is a function of the sample average $\frac{1}{n}\sum_{i=1}^n x_ix_i'$ and the normalized sample average $\frac{1}{\sqrt{n}}\sum_{i=1}^n x_ie_i$. Furthermore, the latter has mean zero so the central limit theorem (CLT) applies. Recall, the CLT (Theorem C.3.1) states that if $u_i \in \mathbb{R}^k$ is iid, $Eu_i = 0$ and $Eu_{ji}^2 < \infty$ for $j = 1, \ldots, k$, then as $n \to \infty$

$$\frac{1}{\sqrt{n}}\sum_{i=1}^n u_i \overset{d}{\longrightarrow} \mathrm{N}\left(0, E\left(u_iu_i'\right)\right).$$

For our application, $u_i = x_ie_i$, which is iid and mean zero since $E(u_i) = E(x_ie_i) = 0$. We calculate that $E(u_iu_i') = E\left(x_ix_i'e_i^2\right)$ and define this matrix by $\Omega$. By the CLT we conclude

$$\frac{1}{\sqrt{n}}\sum_{i=1}^n x_ie_i \overset{d}{\longrightarrow} \mathrm{N}(0, \Omega), \qquad (4.7)$$

where

$$\Omega = E\left(x_ix_i'e_i^2\right).$$

Then using (4.6), (4.1), and (4.7),

$$\sqrt{n}\left(\hat{\beta} - \beta\right) \overset{d}{\longrightarrow} Q^{-1}\mathrm{N}(0, \Omega) = \mathrm{N}\left(0, Q^{-1}\Omega Q^{-1}\right),$$

where the final equality follows from Theorem B.9.1.

A formal statement of this result requires the following strengthening of the moment conditions.

Assumption 4.3.1 In addition to Assumption 2.6.1, $E\left(e_i^4\right) < \infty$ and for $j = 1, \ldots, k$, $Ex_{ji}^4 < \infty$.

Theorem 4.3.1 Under Assumptions 3.1.1 and 4.3.1, as $n \to \infty$

$$\sqrt{n}\left(\hat{\beta} - \beta\right) \overset{d}{\longrightarrow} \mathrm{N}(0, V)$$

where $V = Q^{-1}\Omega Q^{-1}$. Further details of the proof are given in Section 4.17.

As $V$ is the variance of the asymptotic distribution of $\sqrt{n}\left(\hat{\beta} - \beta\right)$, $V$ is often referred to as the asymptotic covariance matrix of $\hat{\beta}$. The expression $V = Q^{-1}\Omega Q^{-1}$ is called a sandwich form.

Theorem 4.3.1 states that the sampling distribution of the least-squares estimator, after rescaling, is approximately normal when the sample size $n$ is sufficiently large. This holds true for all joint distributions of $(y_i, x_i)$ which satisfy the conditions of Assumption 4.3.1. However, for any fixed $n$ the sampling distribution of $\hat{\beta}$ can be arbitrarily far from the normal distribution. In Figure 4.1 we have already seen a simple example where the least-squares estimate is quite asymmetric and non-normal even for reasonably large sample sizes.

There is a special case where $\Omega$ and $V$ simplify. We say that $e_i$ is a Homoskedastic Projection Error when

$$\mathrm{cov}\left(x_ix_i', e_i^2\right) = 0. \qquad (4.8)$$

Condition (4.8) holds when $x_i$ and $e_i$ are independent, but this is not a necessary condition. Under (4.8) the asymptotic variance formulas simplify as

$$\Omega = E\left(x_ix_i'\right)E\left(e_i^2\right) = Q\sigma^2 \qquad (4.9)$$
$$V = Q^{-1}\Omega Q^{-1} = Q^{-1}\sigma^2 \equiv V^0. \qquad (4.10)$$

In (4.10) we define $V^0 = Q^{-1}\sigma^2$ whether (4.8) is true or false. When (4.8) is true then $V = V^0$, otherwise $V \neq V^0$. We call $V^0$ the homoskedastic covariance matrix.

The asymptotic distribution of Theorem 4.3.1 is commonly used to approximate the finite sample distribution of $\sqrt{n}\left(\hat{\beta} - \beta\right)$. The approximation may be poor when $n$ is small. How large should $n$ be in order for the approximation to be useful? Unfortunately, there is no simple answer to this reasonable question. The trouble is that no matter how large is the sample size, the normal approximation is arbitrarily poor for some data distribution satisfying the assumptions. We illustrate this problem using a simulation. Let $y_i = \beta_0 + \beta_1 x_i + e_i$ where $x_i$ is $\mathrm{N}(0,1)$, and $e_i$ is independent of $x_i$ with the Double Pareto density $f(e) = \frac{\alpha}{2}|e|^{-\alpha-1}$, $|e| \geq 1$. If $\alpha > 2$ the error $e_i$ has zero mean and variance $\alpha/(\alpha - 2)$. As $\alpha$ approaches 2, however, its variance diverges to infinity. In this context the normalized least-squares slope estimator $\sqrt{n\frac{\alpha-2}{\alpha}}\left(\hat{\beta}_2 - \beta_2\right)$ has the $\mathrm{N}(0,1)$ asymptotic distribution for any $\alpha > 2$. In Figure 4.2 we display the finite sample densities of the normalized estimator $\sqrt{n\frac{\alpha-2}{\alpha}}\left(\hat{\beta}_2 - \beta_2\right)$, setting $n = 100$ and varying the parameter $\alpha$. For $\alpha = 3.0$ the density is very close to the $\mathrm{N}(0,1)$ density. As $\alpha$ diminishes the density changes significantly, concentrating most of the probability mass around zero.

[Figure 4.2: Density of Normalized OLS estimator]

Another example is shown in Figure 4.3. Here the model is $y_i = \beta_1 + e_i$ where

$$e_i = \frac{u_i^k - E\left(u_i^k\right)}{\left(E\left(u_i^{2k}\right) - \left(E\left(u_i^k\right)\right)^2\right)^{1/2}}$$

and $u_i \sim \mathrm{N}(0,1)$. We show the sampling distribution of $\sqrt{n}\left(\hat{\beta}_1 - \beta_1\right)$ setting $n = 100$, for $k = 1$, 4, 6 and 8. As $k$ increases, the sampling distribution becomes highly skewed and non-normal. The lesson from Figures 4.2 and 4.3 is that the $\mathrm{N}(0,1)$ asymptotic approximation is never guaranteed to be accurate.

[Figure 4.3: Sampling distribution]
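The Double Pareto experiment behind Figure 4.2 is straightforward to replicate. The sketch below (Python/NumPy; plotting is omitted, and the inverse-CDF draw of the Double Pareto error is my implementation choice rather than part of the text) simulates the normalized slope estimator and reports its tail frequency against the nominal normal value.

```python
import numpy as np

rng = np.random.default_rng(3)
n, reps, alpha = 100, 2000, 2.2
beta0, beta1 = 0.0, 1.0
t = np.empty(reps)

for r in range(reps):
    x = rng.normal(size=n)
    # Double Pareto error: |e| is Pareto(alpha) on [1, inf), with a symmetric random sign
    e = rng.uniform(size=n) ** (-1.0 / alpha) * rng.choice([-1.0, 1.0], size=n)
    y = beta0 + beta1 * x + e
    X = np.column_stack([np.ones(n), x])
    b = np.linalg.solve(X.T @ X, X.T @ y)
    t[r] = np.sqrt(n * (alpha - 2) / alpha) * (b[1] - beta1)

# Under the asymptotic approximation these draws should look N(0,1); for alpha
# near 2 the finite-sample distribution is visibly different.
print(np.mean(np.abs(t) > 1.96))    # compare with the nominal 0.05
```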
4.4 Covariance Matrix Estimation

Let

$$\hat{Q} = \frac{1}{n}\sum_{i=1}^n x_ix_i'$$

be the method of moments estimator for $Q$. The homoskedastic covariance matrix $V^0 = Q^{-1}\sigma^2$ is typically estimated by

$$\hat{V}^0 = \hat{Q}^{-1}s^2. \qquad (4.11)$$

Since $\hat{Q} \overset{p}{\longrightarrow} Q$ and $s^2 \overset{p}{\longrightarrow} \sigma^2$ (see (4.1) and Theorem 4.2.2) it is clear that $\hat{V}^0 \overset{p}{\longrightarrow} V^0$. The estimator $\hat{\sigma}^2$ may also be substituted for $s^2$ in (4.11) without changing this result.

To estimate $V = Q^{-1}\Omega Q^{-1}$, we need an estimate of $\Omega = E\left(x_ix_i'e_i^2\right)$. The MME estimator is

$$\hat{\Omega} = \frac{1}{n}\sum_{i=1}^n x_ix_i'\hat{e}_i^2 \qquad (4.12)$$

where $\hat{e}_i$ are the OLS residuals. The estimator of $V$ is then

$$\hat{V} = \hat{Q}^{-1}\hat{\Omega}\hat{Q}^{-1}.$$

This estimator was introduced to the econometrics literature by White (1980).

Theorem 4.4.1 Under Assumptions 3.1.1 and 4.3.1, as $n \to \infty$, $\hat{\Omega} \overset{p}{\longrightarrow} \Omega$ and $\hat{V} \overset{p}{\longrightarrow} V$.

The estimator $\hat{V}^0$ was the dominant covariance estimator used before 1980, and was still the standard choice for much empirical work done in the early 1980s. The methods switched during the late 1980s and early 1990s, so that by the late 1990s the White estimate $\hat{V}$ emerged as the standard covariance matrix estimator. When reading and reporting applied work, it is important to pay attention to the distinction between $\hat{V}^0$ and $\hat{V}$, as it is not always clear which has been computed. When $\hat{V}$ is used rather than the traditional choice $\hat{V}^0$, many authors will state that their standard errors have been corrected for heteroskedasticity, or that they use a heteroskedasticity-robust covariance matrix estimator, or that they use the White formula, the Eicker-White formula, the Huber formula, the Huber-White formula or the GMM covariance matrix. In most cases, these all mean the same thing.

The variance estimator $\hat{V}$ is an estimate of the variance of the asymptotic distribution of $\hat{\beta}$. A more easily interpretable measure of spread is its square root, the standard deviation. This motivates the definition of a standard error.

Definition 4.4.1 A standard error $s(\hat{\beta})$ for an estimator $\hat{\beta}$ is an estimate of the standard deviation of the distribution of $\hat{\beta}$.

When $\beta$ is scalar, and $\hat{V}$ is an estimator of the variance of $\sqrt{n}\left(\hat{\beta} - \beta\right)$, we set $s(\hat{\beta}) = n^{-1/2}\sqrt{\hat{V}}$. When $\beta$ is a vector, we focus on individual elements of $\beta$ one-at-a-time, viz., $\beta_j$, $j = 0, 1, \ldots, k$. Thus

$$s(\hat{\beta}_j) = n^{-1/2}\sqrt{\hat{V}_{jj}}.$$

Generically, standard errors are not unique, as there may be more than one estimator of the variance of the estimator. It is therefore important to understand what formula and method is used by an author when studying their work. It is also important to understand that a particular standard error may be relevant under one set of model assumptions, but not under another set of assumptions, just as any other estimator.

From a computational standpoint, the standard method to calculate the standard errors is to first calculate $n^{-1}\hat{V}$, then take the diagonal elements, and then the square roots.

To illustrate, we return to the log wage regression of Section 3.2. We calculate that $s^2 = 0.20$ and

$$\hat{\Omega} = \begin{pmatrix} 0.199 & 2.80 \\ 2.80 & 40.6 \end{pmatrix}.$$

Therefore the two covariance matrix estimates are

$$\hat{V}^0 = \begin{pmatrix} 1 & 14.14 \\ 14.14 & 205.83 \end{pmatrix}^{-1}0.20 = \begin{pmatrix} 6.98 & -0.480 \\ -0.480 & 0.039 \end{pmatrix}$$

and

$$\hat{V} = \begin{pmatrix} 1 & 14.14 \\ 14.14 & 205.83 \end{pmatrix}^{-1}\begin{pmatrix} .199 & 2.80 \\ 2.80 & 40.6 \end{pmatrix}\begin{pmatrix} 1 & 14.14 \\ 14.14 & 205.83 \end{pmatrix}^{-1} = \begin{pmatrix} 7.20 & -0.493 \\ -0.493 & 0.035 \end{pmatrix}.$$

In this case the two estimates are quite similar. The (White) standard error for $\hat{\beta}_0$ is $\sqrt{7.2/988} = .085$ and that for $\hat{\beta}_1$ is $\sqrt{.035/988} = .006$. We can write the estimated equation with standard errors using the format

$$\widehat{\log(Wage)} = \underset{(.085)}{1.313} + \underset{(.006)}{0.128}\ Education.$$
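Both covariance matrix estimates are one-line computations once the residuals are in hand. The sketch below (Python/NumPy; the simulated data are an illustrative stand-in for the wage regression, not the actual cps85 data) computes $\hat{V}^0$, the White estimate $\hat{V}$, and the corresponding standard errors.

```python
import numpy as np

def ols_covariances(X, y):
    """Return beta_hat and the homoskedastic and White estimates of the asymptotic variance."""
    n, k = X.shape
    Q_hat = X.T @ X / n
    Q_inv = np.linalg.inv(Q_hat)
    beta_hat = Q_inv @ (X.T @ y / n)
    e_hat = y - X @ beta_hat
    s2 = e_hat @ e_hat / (n - k)
    omega_hat = (X * e_hat[:, None] ** 2).T @ X / n   # (1/n) sum x_i x_i' e_i^2
    V0 = Q_inv * s2                                   # homoskedastic formula (4.11)
    V = Q_inv @ omega_hat @ Q_inv                     # White (1980) estimator
    return beta_hat, V0, V

rng = np.random.default_rng(4)
n = 988
educ = rng.integers(8, 21, size=n).astype(float)
e = rng.normal(size=n) * (0.3 + 0.02 * educ)          # heteroskedastic errors (illustrative)
y = 1.3 + 0.13 * educ + e
X = np.column_stack([np.ones(n), educ])
b, V0, V = ols_covariances(X, y)
print(b)
print(np.sqrt(np.diag(V0) / n))   # homoskedastic standard errors
print(np.sqrt(np.diag(V) / n))    # White standard errors
```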
4.5 Alternative Covariance Matrix Estimators

MacKinnon and White (1985) suggested a small-sample corrected version of $\hat{V}$ based on the jackknife principle. Recall from Section 3.12 the definition of $\hat{\beta}_{(-i)}$ as the least-squares estimator with the $i$th observation deleted. From equation (3.13) of Efron (1982), the jackknife estimator of the variance matrix for $\hat{\beta}$ is

$$\hat{V}^* = (n-1)\sum_{i=1}^n\left(\hat{\beta}_{(-i)} - \bar{\beta}\right)\left(\hat{\beta}_{(-i)} - \bar{\beta}\right)' \qquad (4.13)$$

where

$$\bar{\beta} = \frac{1}{n}\sum_{i=1}^n\hat{\beta}_{(-i)}.$$

Using formula (3.24), you can show that

$$\hat{V}^* = \left(\frac{n-1}{n}\right)\hat{Q}^{-1}\hat{\Omega}^*\hat{Q}^{-1} \qquad (4.14)$$

where

$$\hat{\Omega}^* = \frac{1}{n}\sum_{i=1}^n(1 - h_i)^{-2}x_ix_i'\hat{e}_i^2 - \left(\frac{1}{n}\sum_{i=1}^n(1 - h_i)^{-1}x_i\hat{e}_i\right)\left(\frac{1}{n}\sum_{i=1}^n(1 - h_i)^{-1}x_i\hat{e}_i\right)'$$

and $h_i = x_i'(X'X)^{-1}x_i$. MacKinnon and White (1985) present numerical (simulation) evidence that $\hat{V}^*$ works better than $\hat{V}$ as an estimator of $V$. They also suggest that the scaling factor $(n-1)/n$ in (4.14) can be omitted.

Andrews (1991) suggested a similar estimator based on cross-validation, which is defined by replacing the OLS residual $\hat{e}_i$ in (4.12) with the leave-one-out estimator $\hat{e}_{i,-i} = (1 - h_i)^{-1}\hat{e}_i$ presented in (3.25). Using this substitution, Andrews' proposed estimator is

$$\hat{V}^{**} = \hat{Q}^{-1}\hat{\Omega}^{**}\hat{Q}^{-1}$$

where

$$\hat{\Omega}^{**} = \frac{1}{n}\sum_{i=1}^n(1 - h_i)^{-2}x_ix_i'\hat{e}_i^2.$$

It is similar to the MacKinnon-White estimator $\hat{V}^*$, but omits the mean correction. Andrews (1991) argues that simulation evidence indicates that $\hat{V}^{**}$ is an improvement on $\hat{V}^*$.
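These corrections only require the leverage values, so no leave-one-out regressions are needed. The sketch below (Python/NumPy; function names are illustrative) computes the Andrews cross-validation estimator $\hat{V}^{**}$; adding the mean-correction term and the $(n-1)/n$ factor would give the MacKinnon-White $\hat{V}^*$.

```python
import numpy as np

def andrews_covariance(X, y):
    """Cross-validation covariance estimator V** = Qhat^{-1} Omega** Qhat^{-1}."""
    n = X.shape[0]
    XtX_inv = np.linalg.inv(X.T @ X)
    e_hat = y - X @ (XtX_inv @ X.T @ y)
    h = np.einsum('ij,jk,ik->i', X, XtX_inv, X)       # leverage values h_i
    e_loo = e_hat / (1.0 - h)                         # leave-one-out residuals (3.25)
    Q_inv = np.linalg.inv(X.T @ X / n)
    omega_ss = (X * e_loo[:, None] ** 2).T @ X / n    # Omega**: (4.12) with e_loo in place of e_hat
    return Q_inv @ omega_ss @ Q_inv

# Usage: V_ss = andrews_covariance(X, y); standard errors are np.sqrt(np.diag(V_ss) / n).
```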
4.6 Functions of Parameters

Sometimes we are interested in some lower-dimensional function of the parameter vector $\beta = (\beta_1, \ldots, \beta_k)$. For example, we may be interested in a single coefficient $\beta_j$ or a ratio $\beta_j/\beta_l$. In these cases we can write the parameter of interest as a function of $\beta$. Let $h : \mathbb{R}^k \to \mathbb{R}^q$ denote this function and let

$$\theta = h(\beta)$$

denote the parameter of interest. The estimate of $\theta$ is

$$\hat{\theta} = h(\hat{\beta}).$$

What is an appropriate standard error for $\hat{\theta}$? Assume that $h(\beta)$ is differentiable at the true value of $\beta$. By a first-order Taylor series approximation:

$$h(\hat{\beta}) \simeq h(\beta) + H_\beta'\left(\hat{\beta} - \beta\right),$$

where

$$H_\beta = \frac{\partial}{\partial\beta}h(\beta), \quad k \times q.$$

Thus

$$\sqrt{n}\left(\hat{\theta} - \theta\right) = \sqrt{n}\left(h(\hat{\beta}) - h(\beta)\right) \simeq H_\beta'\sqrt{n}\left(\hat{\beta} - \beta\right) \overset{d}{\longrightarrow} H_\beta'\,\mathrm{N}(0, V) = \mathrm{N}(0, V_\theta), \qquad (4.15)$$

where

$$V_\theta = H_\beta'VH_\beta.$$

If $\hat{V}$ is the estimated covariance matrix for $\hat{\beta}$, then the natural estimate for the variance of $\hat{\theta}$ is

$$\hat{V}_\theta = \hat{H}_\beta'\hat{V}\hat{H}_\beta$$

where

$$\hat{H}_\beta = \frac{\partial}{\partial\beta}h(\hat{\beta}).$$

In many cases, the function $h(\beta)$ is linear:

$$h(\beta) = R'\beta$$

for some $k \times q$ matrix $R$. In this case, $H_\beta = R$ and $\hat{H}_\beta = R$, so $\hat{V}_\theta = R'\hat{V}R$.

For example, if $R$ is a "selector matrix"

$$R = \begin{pmatrix} I \\ 0 \end{pmatrix}$$

so that if $\beta = (\beta_1, \beta_2)$, then $\theta = R'\beta = \beta_1$ and

$$\hat{V}_\theta = \begin{pmatrix} I & 0 \end{pmatrix}\hat{V}\begin{pmatrix} I \\ 0 \end{pmatrix} = \hat{V}_{11},$$

the upper-left block of $\hat{V}$.

When $q = 1$ (so $h(\beta)$ is real-valued), the standard error for $\hat{\theta}$ is the square root of $n^{-1}\hat{V}_\theta$, that is, $s(\hat{\theta}) = n^{-1/2}\sqrt{\hat{H}_\beta'\hat{V}\hat{H}_\beta}$.
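As a concrete illustration of the delta method, suppose the parameter of interest is the ratio of two coefficients. The sketch below (Python/NumPy; the example function, indices, and the plugged-in numbers for $\hat{\beta}$ and $\hat{V}$ are hypothetical) forms the gradient $\hat{H}_\beta$ numerically and computes $s(\hat{\theta}) = n^{-1/2}(\hat{H}_\beta'\hat{V}\hat{H}_\beta)^{1/2}$.

```python
import numpy as np

def delta_method_se(h, beta_hat, V_hat, n, eps=1e-6):
    """Standard error of h(beta_hat) via a forward-difference gradient of h."""
    k = beta_hat.size
    H = np.empty(k)
    for j in range(k):
        step = np.zeros(k)
        step[j] = eps
        H[j] = (h(beta_hat + step) - h(beta_hat)) / eps
    V_theta = H @ V_hat @ H                 # H' V H
    return np.sqrt(V_theta / n)

# Example: theta = beta_1 / beta_2 (indices and values are illustrative)
ratio = lambda b: b[1] / b[2]
beta_hat = np.array([1.3, 0.9, 0.45])
V_hat = np.diag([7.2, 0.04, 0.03])          # hypothetical covariance estimate
print(delta_method_se(ratio, beta_hat, V_hat, n=988))
```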
4.7 t tests

Let $\theta = h(\beta) : \mathbb{R}^k \to \mathbb{R}$ be any parameter of interest (for example, $\theta$ could be a single element of $\beta$), $\hat{\theta}$ its estimate and $s(\hat{\theta})$ its asymptotic standard error. Consider the statistic

$$t_n(\theta) = \frac{\hat{\theta} - \theta}{s(\hat{\theta})} \qquad (4.16)$$

which different writers alternatively label as a t-statistic, a z-statistic or a studentized statistic. We won't be making such distinctions and will typically refer to $t_n(\theta)$ as a t-statistic. We also often suppress the parameter dependence, writing it as $t_n$. The t-statistic is a simple function of the estimate, its standard error, and the parameter.

Theorem 4.7.1 $t_n(\theta) \overset{d}{\longrightarrow} \mathrm{N}(0,1)$

Thus the asymptotic distribution of the t-ratio $t_n(\theta)$ is the standard normal. Since this distribution does not depend on the parameters, we say that $t_n(\theta)$ is asymptotically pivotal. In special cases (such as the normal regression model, see Section 3.4), the statistic $t_n$ has an exact t distribution, and is therefore exactly free of unknowns. In this case, we say that $t_n$ is exactly pivotal. In general, however, pivotal statistics are unavailable and so we must rely on asymptotically pivotal statistics.

The t-test is routinely used to test hypotheses on $\theta$. A simple null and composite hypothesis takes the form

$$H_0 : \theta = \theta_0$$
$$H_1 : \theta \neq \theta_0$$

where $\theta_0$ is some pre-specified value. A t-test rejects $H_0$ in favor of $H_1$ when $|t_n(\theta_0)|$ is large. By "large" we mean that the observed value of the t-statistic would be unlikely if $H_0$ were true.

Formally, we first pick an asymptotic significance level $\alpha$. We then find $z_{\alpha/2}$, the upper $\alpha/2$ quantile of the standard normal distribution which has the property that if $Z \sim \mathrm{N}(0,1)$ then

$$\mathrm{P}\left(|Z| > z_{\alpha/2}\right) = \alpha.$$

For example, $z_{.025} = 1.96$ and $z_{.05} = 1.645$. A test of asymptotic significance $\alpha$ rejects $H_0$ if $|t_n| > z_{\alpha/2}$. Otherwise the test does not reject, or "accepts" $H_0$.

The asymptotic significance level is $\alpha$ because Theorem 4.7.1 implies that

$$\mathrm{P}(\text{reject } H_0 \mid H_0 \text{ true}) = \mathrm{P}\left(|t_n| > z_{\alpha/2} \mid \theta = \theta_0\right) \to \mathrm{P}\left(|Z| > z_{\alpha/2}\right) = \alpha.$$

The rejection/acceptance dichotomy is associated with the Neyman-Pearson approach to hypothesis testing.

While there is no objective scientific basis for choice of significance level $\alpha$, the common practice is to set $\alpha = .05$ or 5%. This implies a critical value of $z_{.025} = 1.96 \approx 2$. When $|t_n| > 2$ it is common to say that the t-statistic is statistically significant, and if $|t_n| < 2$ it is common to say that the t-statistic is statistically insignificant. It is helpful to remember that this is simply a shorthand way of saying "Using a t-test, the hypothesis that $\theta = \theta_0$ can [cannot] be rejected at the asymptotic 5% level."

A related statistic is the asymptotic p-value, which can be interpreted as a measure of the evidence against the null hypothesis. The asymptotic p-value of the statistic $t_n$ is

$$p_n = p(t_n)$$

where $p(t)$ is the tail probability function

$$p(t) = \mathrm{P}(|Z| > |t|) = 2\left(1 - \Phi(|t|)\right).$$

The closer the p-value is to zero, the stronger the evidence is against $H_0$. Significance tests can be deduced directly from the p-value since for any $\alpha$, $p_n < \alpha$ if and only if $|t_n| > z_{\alpha/2}$.

If the p-value $p_n$ is small (close to zero) then the evidence against $H_0$ is strong. In a sense, p-values and hypothesis tests are equivalent since $p_n < \alpha$ if and only if $|t_n| > z_{\alpha/2}$. Thus an equivalent statement of a Neyman-Pearson test is to reject at the $\alpha\%$ level if and only if $p_n < \alpha$. The p-value is more general, however, in that the reader is allowed to pick the level of significance $\alpha$, in contrast to Neyman-Pearson rejection/acceptance reporting where the researcher picks the significance level.

Another helpful observation is that the p-value function has simply made a unit-free transformation of the test statistic. That is, under $H_0$, $p_n \overset{d}{\longrightarrow} \mathrm{U}[0,1]$, so the "unusualness" of the test statistic can be compared to the easy-to-understand uniform distribution, regardless of the complication of the distribution of the original test statistic. To see this fact, note that the asymptotic distribution of $|t_n|$ is $F(x) = 1 - p(x)$. Thus

$$\mathrm{P}\left(1 - p_n \leq u\right) = \mathrm{P}\left(1 - p(t_n) \leq u\right) = \mathrm{P}\left(F(t_n) \leq u\right) = \mathrm{P}\left(|t_n| \leq F^{-1}(u)\right) \to F\left(F^{-1}(u)\right) = u,$$

establishing that $1 - p_n \overset{d}{\longrightarrow} \mathrm{U}[0,1]$, from which it follows that $p_n \overset{d}{\longrightarrow} \mathrm{U}[0,1]$.
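The t-statistic and its asymptotic p-value involve only the estimate, its standard error, and the normal tail function. A minimal sketch (Python with SciPy for the normal CDF; the numbers plugged into the call are illustrative):

```python
from scipy.stats import norm

def t_test(theta_hat, se, theta0=0.0, alpha=0.05):
    """Asymptotic t-test of H0: theta = theta0; returns the statistic, p-value, and decision."""
    t = (theta_hat - theta0) / se
    p = 2.0 * (1.0 - norm.cdf(abs(t)))     # p(t) = P(|Z| > |t|)
    return t, p, p < alpha                 # reject iff p < alpha, i.e. iff |t| > z_{alpha/2}

print(t_test(0.128, 0.006, theta0=0.10))   # illustrative values in the spirit of the wage regression
```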
4.8 Confidence Intervals

A confidence interval $C_n$ is an interval estimate of $\theta \in \mathbb{R}$. It is a function of the data and hence is random. It is designed to cover $\theta$ with high probability. Either $\theta \in C_n$ or $\theta \notin C_n$. The coverage probability is $\mathrm{P}(\theta \in C_n)$.

We typically cannot calculate the exact coverage probability $\mathrm{P}(\theta \in C_n)$. However we often can calculate the asymptotic coverage probability $\lim_{n\to\infty}\mathrm{P}(\theta \in C_n)$. We say that $C_n$ has asymptotic $(1-\alpha)\%$ coverage for $\theta$ if $\mathrm{P}(\theta \in C_n) \to 1 - \alpha$ as $n \to \infty$.

A good method for construction of a confidence interval is the collection of parameter values which are not rejected by a statistical test. The t-test of the previous section rejects $H_0 : \theta = \theta_0$ if $|t_n(\theta)| > z_{\alpha/2}$ where $t_n(\theta)$ is the t-statistic (4.16) and $z_{\alpha/2}$ is the upper $\alpha/2$ quantile of the standard normal distribution. A confidence interval is then constructed as the values of $\theta$ for which this test does not reject:

$$C_n = \left\{\theta : |t_n(\theta)| \leq z_{\alpha/2}\right\} = \left\{\theta : -z_{\alpha/2} \leq \frac{\hat{\theta} - \theta}{s(\hat{\theta})} \leq z_{\alpha/2}\right\} = \left[\hat{\theta} - z_{\alpha/2}s(\hat{\theta}),\ \hat{\theta} + z_{\alpha/2}s(\hat{\theta})\right]. \qquad (4.17)$$

While there is no hard-and-fast guideline for choosing the coverage probability $1 - \alpha$, the most common professional choice is 95%, or $\alpha = .05$. This corresponds to selecting the confidence interval $\left[\hat{\theta} \pm 1.96 s(\hat{\theta})\right] \approx \left[\hat{\theta} \pm 2 s(\hat{\theta})\right]$. Thus values of $\theta$ within two standard errors of the estimated $\hat{\theta}$ are considered "reasonable" candidates for the true value $\theta$, and values of $\theta$ outside two standard errors of the estimated $\hat{\theta}$ are considered unlikely or unreasonable candidates for the true value.

The interval has been constructed so that as $n \to \infty$,

$$\mathrm{P}(\theta \in C_n) = \mathrm{P}\left(|t_n(\theta)| \leq z_{\alpha/2}\right) \to \mathrm{P}\left(|Z| \leq z_{\alpha/2}\right) = 1 - \alpha,$$

and $C_n$ is an asymptotic $(1-\alpha)\%$ confidence interval.
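Constructing the interval in (4.17) is a one-line calculation. A sketch (Python with SciPy; the inputs are illustrative):

```python
from scipy.stats import norm

def confidence_interval(theta_hat, se, alpha=0.05):
    """Asymptotic (1 - alpha) confidence interval from test inversion, equation (4.17)."""
    z = norm.ppf(1.0 - alpha / 2.0)      # upper alpha/2 quantile, e.g. 1.96 for alpha = .05
    return theta_hat - z * se, theta_hat + z * se

print(confidence_interval(0.128, 0.006))  # roughly (0.116, 0.140)
```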
4.9 Wald Tests

Sometimes $\theta = h(\beta)$ is a $q \times 1$ vector, and it is desired to test the joint restrictions simultaneously. In this case the t-statistic approach does not work. We have the null and alternative

$$H_0 : \theta = \theta_0$$
$$H_1 : \theta \neq \theta_0.$$

The natural estimate of $\theta$ is $\hat{\theta} = h(\hat{\beta})$ and has asymptotic covariance matrix estimate

$$\hat{V}_\theta = \hat{H}_\beta'\hat{V}\hat{H}_\beta$$

where

$$\hat{H}_\beta = \frac{\partial}{\partial\beta}h(\hat{\beta}).$$

The Wald statistic for $H_0$ against $H_1$ is

$$W_n = n\left(\hat{\theta} - \theta_0\right)'\hat{V}_\theta^{-1}\left(\hat{\theta} - \theta_0\right) = n\left(h(\hat{\beta}) - \theta_0\right)'\left(\hat{H}_\beta'\hat{V}\hat{H}_\beta\right)^{-1}\left(h(\hat{\beta}) - \theta_0\right). \qquad (4.18)$$

When $h$ is a linear function of $\beta$, $h(\beta) = R'\beta$, then the Wald statistic takes the form

$$W_n = n\left(R'\hat{\beta} - \theta_0\right)'\left(R'\hat{V}R\right)^{-1}\left(R'\hat{\beta} - \theta_0\right).$$

The delta method (4.15) showed that $\sqrt{n}\left(\hat{\theta} - \theta\right) \overset{d}{\longrightarrow} Z \sim \mathrm{N}(0, V_\theta)$, and Theorem 4.4.1 showed that $\hat{V} \overset{p}{\longrightarrow} V$. Furthermore, $H_\beta(\beta)$ is a continuous function of $\beta$, so by the continuous mapping theorem, $H_\beta(\hat{\beta}) \overset{p}{\longrightarrow} H_\beta$. Thus $\hat{V}_\theta = \hat{H}_\beta'\hat{V}\hat{H}_\beta \overset{p}{\longrightarrow} H_\beta'VH_\beta = V_\theta > 0$ if $H_\beta$ has full rank $q$. Hence

$$W_n = n\left(\hat{\theta} - \theta_0\right)'\hat{V}_\theta^{-1}\left(\hat{\theta} - \theta_0\right) \overset{d}{\longrightarrow} Z'V_\theta^{-1}Z = \chi^2_q,$$

by Theorem B.9.3. We have established:

Theorem 4.9.1 Under $H_0$ and Assumption 4.3.1, if $\mathrm{rank}(H_\beta) = q$, then $W_n \overset{d}{\longrightarrow} \chi^2_q$, a chi-square random variable with $q$ degrees of freedom.

An asymptotic Wald test rejects $H_0$ in favor of $H_1$ if $W_n$ exceeds $\chi^2_q(\alpha)$, the upper-$\alpha$ quantile of the $\chi^2_q$ distribution. For example, $\chi^2_1(.05) = 3.84 = z_{.025}^2$. The Wald test fails to reject if $W_n$ is less than $\chi^2_q(\alpha)$. As with t-tests, it is conventional to describe a Wald test as "significant" if $W_n$ exceeds the 5% critical value.

Notice that the asymptotic distribution in Theorem 4.9.1 depends solely on $q$ — the number of restrictions being tested. It does not depend on $k$ — the number of parameters estimated.

The asymptotic p-value for $W_n$ is $p_n = p(W_n)$, where $p(x) = \mathrm{P}\left(\chi^2_q \geq x\right)$ is the tail probability function of the $\chi^2_q$ distribution. The Wald test rejects at the $\alpha\%$ level if and only if $p_n < \alpha$, and $p_n$ is asymptotically $\mathrm{U}[0,1]$ under $H_0$. In reporting applied work it is good practice to report the p-value.
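For a linear hypothesis $R'\beta = \theta_0$ the Wald statistic (4.18) is a quadratic form in $R'\hat{\beta} - \theta_0$. A sketch (Python/NumPy with SciPy for the chi-square tail; the plugged-in $R$, $\hat{\beta}$, and $\hat{V}$ are hypothetical values for illustration):

```python
import numpy as np
from scipy.stats import chi2

def wald_test(R, beta_hat, V_hat, theta0, n):
    """Wald test of H0: R' beta = theta0 using the asymptotic covariance estimate V_hat."""
    diff = R.T @ beta_hat - theta0
    V_theta = R.T @ V_hat @ R
    W = n * diff @ np.linalg.solve(V_theta, diff)
    q = R.shape[1]
    return W, 1.0 - chi2.cdf(W, df=q)      # statistic and asymptotic p-value

# Example: test that coefficients 1 and 2 are jointly zero (k = 3, q = 2)
R = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
beta_hat = np.array([1.3, 0.02, -0.01])
V_hat = np.diag([7.2, 0.04, 0.03])         # hypothetical covariance estimate
print(wald_test(R, beta_hat, V_hat, np.zeros(2), n=988))
```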
4.10 F Tests

Take the linear model

$$y = X_1\beta_1 + X_2\beta_2 + e$$

where $X_1$ is $n \times k_1$ and $X_2$ is $n \times k_2$ and $k = k_1 + k_2$. The null hypothesis is

$$H_0 : \beta_2 = 0.$$

In this case, $\theta = \beta_2$, and there are $q = k_2$ restrictions. Also $h(\beta) = R'\beta$ is linear with $R = \begin{pmatrix} 0 \\ I \end{pmatrix}$ a selector matrix. We know that the Wald statistic takes the form

$$W_n = n\hat{\theta}'\hat{V}_\theta^{-1}\hat{\theta} = n\hat{\beta}_2'\left(R'\hat{V}R\right)^{-1}\hat{\beta}_2.$$

Now suppose that the covariance matrix is computed under the assumption of homoskedasticity, so that $\hat{V}$ is replaced with $\hat{V}^0 = \hat{\sigma}^2\left(n^{-1}X'X\right)^{-1}$. We define the homoskedastic Wald statistic

$$W_n^0 = n\hat{\theta}'\left(R'\hat{V}^0R\right)^{-1}\hat{\theta} = n\hat{\beta}_2'\left(R'\hat{V}^0R\right)^{-1}\hat{\beta}_2.$$

What we show in this section is that this Wald statistic can be written very simply using the formula

$$W_n^0 = n\left(\frac{\tilde{\sigma}^2 - \hat{\sigma}^2}{\hat{\sigma}^2}\right) \qquad (4.19)$$

where

$$\tilde{\sigma}^2 = \frac{1}{n}\tilde{e}'\tilde{e}, \quad \tilde{e} = y - X_1\tilde{\beta}_1, \quad \tilde{\beta}_1 = \left(X_1'X_1\right)^{-1}X_1'y$$

are from OLS of $y$ on $X_1$, and

$$\hat{\sigma}^2 = \frac{1}{n}\hat{e}'\hat{e}, \quad \hat{e} = y - X\hat{\beta}, \quad \hat{\beta} = \left(X'X\right)^{-1}X'y$$

are from OLS of $y$ on $X = (X_1, X_2)$.

The elegant feature about (4.19) is that it is directly computable from the standard output from two simple OLS regressions, as the sum of squared errors is a typical output from statistical packages. This statistic is typically reported as an "F-statistic" which is defined as

$$F_n = \frac{n-k}{n}\frac{W_n^0}{k_2} = \frac{\left(\tilde{\sigma}^2 - \hat{\sigma}^2\right)/k_2}{\hat{\sigma}^2/(n-k)}.$$

While it should be emphasized that equality (4.19) only holds if $\hat{V}^0 = \hat{\sigma}^2\left(n^{-1}X'X\right)^{-1}$, still this formula often finds good use in reading applied papers. Because of this connection we call (4.19) the F form of the Wald statistic. (We can also call $W_n^0$ a homoskedastic form of the Wald statistic.)

We now derive expression (4.19). First, note that by partitioned matrix inversion (A.3)

$$R'\left(X'X\right)^{-1}R = R'\begin{pmatrix} X_1'X_1 & X_1'X_2 \\ X_2'X_1 & X_2'X_2 \end{pmatrix}^{-1}R = \left(X_2'M_1X_2\right)^{-1}$$

where $M_1 = I - X_1\left(X_1'X_1\right)^{-1}X_1'$. Thus

$$\left(R'\hat{V}^0R\right)^{-1} = \hat{\sigma}^{-2}n^{-1}\left(R'\left(X'X\right)^{-1}R\right)^{-1} = \hat{\sigma}^{-2}n^{-1}\left(X_2'M_1X_2\right)$$

and

$$W_n^0 = n\hat{\beta}_2'\left(R'\hat{V}^0R\right)^{-1}\hat{\beta}_2 = \frac{\hat{\beta}_2'\left(X_2'M_1X_2\right)\hat{\beta}_2}{\hat{\sigma}^2}.$$

To simplify this expression further, note that if we regress $y$ on $X_1$ alone, the residual is $\tilde{e} = M_1y$. Now consider the residual regression of $\tilde{e}$ on $\tilde{X}_2 = M_1X_2$. By the FWL theorem, $\tilde{e} = \tilde{X}_2\hat{\beta}_2 + \hat{e}$ and $\tilde{X}_2'\hat{e} = 0$. Thus

$$\tilde{e}'\tilde{e} = \left(\tilde{X}_2\hat{\beta}_2 + \hat{e}\right)'\left(\tilde{X}_2\hat{\beta}_2 + \hat{e}\right) = \hat{\beta}_2'\tilde{X}_2'\tilde{X}_2\hat{\beta}_2 + \hat{e}'\hat{e} = \hat{\beta}_2'X_2'M_1X_2\hat{\beta}_2 + \hat{e}'\hat{e},$$

or alternatively,

$$\hat{\beta}_2'X_2'M_1X_2\hat{\beta}_2 = \tilde{e}'\tilde{e} - \hat{e}'\hat{e}.$$

Also, since

$$\hat{\sigma}^2 = n^{-1}\hat{e}'\hat{e}$$

we conclude that

$$W_n^0 = n\left(\frac{\tilde{e}'\tilde{e} - \hat{e}'\hat{e}}{\hat{e}'\hat{e}}\right) = n\left(\frac{\tilde{\sigma}^2 - \hat{\sigma}^2}{\hat{\sigma}^2}\right),$$

as claimed.

In many statistical packages, when an OLS regression is estimated, an "F-statistic" is reported. This is

$$F_n = \frac{\left(\tilde{\sigma}_y^2 - \hat{\sigma}^2\right)/(k-1)}{\hat{\sigma}^2/(n-k)},$$

where

$$\tilde{\sigma}_y^2 = \frac{1}{n}\left(y - \bar{y}\right)'\left(y - \bar{y}\right)$$

is the sample variance of $y_i$, equivalently the residual variance from an intercept-only model. This special F statistic is testing the hypothesis that all slope coefficients (all coefficients other than the intercept) are zero. This was a popular statistic in the early days of econometric reporting, when sample sizes were very small and researchers wanted to know if there was "any explanatory power" to their regression. This is rarely an issue today, as sample sizes are typically sufficiently large that this F statistic is highly significant. While there are special cases where this F statistic is useful, these cases are atypical.
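The F form requires only the residual variances from the restricted and unrestricted regressions. A sketch (Python/NumPy; the data-generating process is an illustrative assumption) that computes $W_n^0$ and the packaged F-statistic from two regressions:

```python
import numpy as np

def sigma2_from_ols(X, y):
    """Residual variance (1/n) e'e from an OLS regression of y on X."""
    e = y - X @ np.linalg.solve(X.T @ X, X.T @ y)
    return e @ e / len(y)

rng = np.random.default_rng(5)
n = 300
X1 = np.column_stack([np.ones(n), rng.normal(size=n)])   # k1 = 2
X2 = rng.normal(size=(n, 2))                             # k2 = 2, true beta2 = 0
y = X1 @ np.array([1.0, 0.5]) + rng.normal(size=n)

sig2_tilde = sigma2_from_ols(X1, y)                          # restricted: y on X1
sig2_hat = sigma2_from_ols(np.column_stack([X1, X2]), y)     # unrestricted: y on (X1, X2)

W0 = n * (sig2_tilde - sig2_hat) / sig2_hat                  # F form of the Wald statistic (4.19)
k, k2 = 4, 2
F = ((sig2_tilde - sig2_hat) / k2) / (sig2_hat / (n - k))    # packaged F-statistic
print(W0, F)
```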
4.11 Normal Regression Model

As an alternative to asymptotic distribution theory, there is an exact distribution theory available for the normal linear regression model introduced in Section 3.4. The modelling assumption that the error $e_i$ is independent of $x_i$ and $\mathrm{N}\left(0, \sigma^2\right)$ can be used to calculate a set of exact distribution results.

In particular, under the normality assumption the error vector $e$ is independent of $X$ and has distribution $\mathrm{N}\left(0, I_n\sigma^2\right)$. Since linear functions of normals are also normal, this implies that conditional on $X$

$$\begin{pmatrix} \hat{\beta} - \beta \\ \hat{e} \end{pmatrix} = \begin{pmatrix} \left(X'X\right)^{-1}X' \\ M \end{pmatrix}e \sim \mathrm{N}\left(0, \begin{pmatrix} \sigma^2\left(X'X\right)^{-1} & 0 \\ 0 & \sigma^2M \end{pmatrix}\right)$$

where $M = I - X\left(X'X\right)^{-1}X'$. Since uncorrelated normal variables are independent, it follows that $\hat{\beta}$ is independent of any function of the OLS residuals, including the estimated error variance $s^2$.

The spectral decomposition of $M$ yields

$$M = H\begin{pmatrix} I_{n-k} & 0 \\ 0 & 0 \end{pmatrix}H'$$

(see equation (A.5)) where $H'H = I_n$. Let $u = \sigma^{-1}H'e \sim \mathrm{N}\left(0, H'H\right) \sim \mathrm{N}\left(0, I_n\right)$. Then

$$\frac{(n-k)s^2}{\sigma^2} = \frac{1}{\sigma^2}\hat{e}'\hat{e} = \frac{1}{\sigma^2}e'Me = \frac{1}{\sigma^2}e'H\begin{pmatrix} I_{n-k} & 0 \\ 0 & 0 \end{pmatrix}H'e = u'\begin{pmatrix} I_{n-k} & 0 \\ 0 & 0 \end{pmatrix}u \sim \chi^2_{n-k},$$

a chi-square distribution with $n-k$ degrees of freedom. Furthermore, if standard errors are calculated using the homoskedastic formula (4.11)

$$\frac{\hat{\beta}_j - \beta_j}{s(\hat{\beta}_j)} = \frac{\hat{\beta}_j - \beta_j}{s\sqrt{\left[\left(X'X\right)^{-1}\right]_{jj}}} \sim \frac{\mathrm{N}\left(0, \sigma^2\left[\left(X'X\right)^{-1}\right]_{jj}\right)}{\sqrt{\frac{\sigma^2}{n-k}\chi^2_{n-k}\left[\left(X'X\right)^{-1}\right]_{jj}}} = \frac{\mathrm{N}(0,1)}{\sqrt{\frac{\chi^2_{n-k}}{n-k}}} \sim t_{n-k},$$

a t distribution with $n-k$ degrees of freedom.

We summarize these findings

Theorem 4.11.1 If $e_i$ is independent of $x_i$ and distributed $\mathrm{N}\left(0, \sigma^2\right)$, and standard errors are calculated using the homoskedastic formula (4.11), then

$$\hat{\beta} - \beta \sim \mathrm{N}\left(0, \sigma^2\left(X'X\right)^{-1}\right)$$
$$\frac{(n-k)s^2}{\sigma^2} \sim \chi^2_{n-k}$$
$$\frac{\hat{\beta}_j - \beta_j}{s(\hat{\beta}_j)} \sim t_{n-k}$$

In Theorem 4.3.1 and Theorem 4.7.1 we showed that in large samples, $\hat{\beta}$ and $t$ are approximately normally distributed. In contrast, Theorem 4.11.1 shows that under the strong assumption of normality, $\hat{\beta}$ has an exact normal distribution and $t_n$ has an exact t distribution. As inference (confidence intervals) is based on the t-ratio, the notable distinction is between the $\mathrm{N}(0,1)$ and $t_{n-k}$ distributions. The critical values are quite close if $n - k \geq 30$, so as a practical matter it does not matter which distribution is used. (Unless the sample size is unreasonably small.)

Now let us partition $\beta = (\beta_1, \beta_2)$ and consider tests of the linear restriction

$$H_0 : \beta_2 = 0$$
$$H_1 : \beta_2 \neq 0$$

In the context of parametric models, a good testing procedure is based on the likelihood ratio statistic, which is twice the difference in the log-likelihood function evaluated under the null and alternative hypotheses. The estimator under the alternative is the unrestricted estimator $(\hat{\beta}_1, \hat{\beta}_2, \hat{\sigma}^2)$ discussed above. The Gaussian log-likelihood at these estimates is

$$\log L(\hat{\beta}_1, \hat{\beta}_2, \hat{\sigma}^2) = -\frac{n}{2}\log\left(2\pi\hat{\sigma}^2\right) - \frac{1}{2\hat{\sigma}^2}\hat{e}'\hat{e} = -\frac{n}{2}\log\left(\hat{\sigma}^2\right) - \frac{n}{2}\log\left(2\pi\right) - \frac{n}{2}.$$

The MLE of the model under the null hypothesis is $(\tilde{\beta}_1, 0, \tilde{\sigma}^2)$ where $\tilde{\beta}_1$ is the OLS estimate from a regression of $y_i$ on $x_{1i}$ only, with residual variance $\tilde{\sigma}^2$. The log-likelihood of this model is

$$\log L(\tilde{\beta}_1, 0, \tilde{\sigma}^2) = -\frac{n}{2}\log\left(\tilde{\sigma}^2\right) - \frac{n}{2}\log\left(2\pi\right) - \frac{n}{2}.$$

The LR statistic for $H_0$ is

$$LR_n = 2\left(\log L(\hat{\beta}_1, \hat{\beta}_2, \hat{\sigma}^2) - \log L(\tilde{\beta}_1, 0, \tilde{\sigma}^2)\right) = n\left(\log\left(\tilde{\sigma}^2\right) - \log\left(\hat{\sigma}^2\right)\right) = n\log\left(\frac{\tilde{\sigma}^2}{\hat{\sigma}^2}\right).$$

By a first-order Taylor series approximation

$$LR_n = n\log\left(1 + \frac{\tilde{\sigma}^2}{\hat{\sigma}^2} - 1\right) \simeq n\left(\frac{\tilde{\sigma}^2}{\hat{\sigma}^2} - 1\right) = W_n^0,$$

the homoskedastic Wald statistic. This shows that the two statistics ($LR_n$ and $W_n^0$) will be numerically close. It also shows that the homoskedastic Wald statistic for linear hypotheses can also be interpreted as an appropriate likelihood ratio test under normality.
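The practical content of Theorem 4.11.1 for inference is how close the $t_{n-k}$ critical values are to the normal ones. A quick check (Python with SciPy):

```python
from scipy.stats import norm, t

for df in (10, 30, 100, 1000):
    # upper .025 quantiles: exact-t versus asymptotic-normal critical values
    print(df, round(t.ppf(0.975, df), 3), round(norm.ppf(0.975), 3))
# The t critical value approaches 1.96 as the degrees of freedom grow,
# so for n - k of roughly 30 or more the choice of distribution matters little.
```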
4.12 Semiparametric Efficiency in the Projection Model

In this section we return to the question of semiparametric efficiency as raised in Section 3.10. There we presented the intuition behind Chamberlain's demonstration of the asymptotic efficiency of the least-squares estimator. In this section we provide an alternative demonstration. It is based on the rich but technically challenging theory of semiparametric efficiency bounds. An excellent accessible review has been provided by Newey (1990).

Our treatment covers what is known as the smooth function model, which includes the projection model as a special case. Let $z \in \mathbb{R}^m$ be a random vector with finite mean $\mu = Ez$ and finite variance matrix $\Sigma = \mathrm{var}(z)$, and let $z_1, \ldots, z_n$ be an iid sample from this distribution. The parameter of interest is $\beta = g(\mu)$ where $g(\cdot)$ is a continuously differentiable function. The standard moment estimator for $\mu$ is the sample mean $\hat{\mu} = n^{-1}\sum_{i=1}^n z_i$ and that for $\beta$ is $\hat{\beta} = g(\hat{\mu})$. This setting includes the least-squares estimator for the projection model $y = x'\beta + e$ by letting $z$ be the vector with elements $x_jy$ and $x_jx_l$ for all $j \leq k$ and $l \leq k$.

The sample mean has the asymptotic distribution $\sqrt{n}\left(\hat{\mu} - \mu\right) \overset{d}{\longrightarrow} \mathrm{N}(0, \Sigma)$. Applying the Delta Method (Theorem C.4.3), we see that the moment estimator $\hat{\beta}$ has the asymptotic distribution $\sqrt{n}\left(\hat{\beta} - \beta\right) \overset{d}{\longrightarrow} \mathrm{N}(0, V)$ where $V = \frac{\partial}{\partial\mu'}g(\mu)\,\Sigma\,\frac{\partial}{\partial\mu}g(\mu)'$. We want to know if $\hat{\beta}$ is the best feasible estimator. Is there another estimator with a smaller asymptotic variance? While it seems intuitively unlikely that another estimator could have a smaller asymptotic variance than $\hat{\beta}$, how do we know that this is not the case?

To show that the answer is not immediately obvious, it might be helpful to review a setting where the sample mean is inefficient. Suppose that $z \in \mathbb{R}$ has the density $f(z \mid \mu) = 2^{-1/2}\exp\left(-|z - \mu|\sqrt{2}\right)$. Since $\mathrm{var}(z) = 1$ we see that the sample mean satisfies $\sqrt{n}\left(\hat{\mu} - \mu\right) \overset{d}{\longrightarrow} \mathrm{N}(0, 1)$. In this model the maximum likelihood estimator (MLE) $\tilde{\mu}$ for $\mu$ is different than the sample mean (and happens to be the sample median). Recall from the theory of maximum likelihood that the MLE satisfies $\sqrt{n}\left(\tilde{\mu} - \mu\right) \overset{d}{\longrightarrow} \mathrm{N}\left(0, \left(ES_\mu^2\right)^{-1}\right)$ where $S_\mu = \frac{\partial}{\partial\mu}\log f(z \mid \mu) = \sqrt{2}\,\mathrm{sgn}(z - \mu)$ is the score. We can calculate that $ES_\mu^2 = 2$ and thus conclude that $\sqrt{n}\left(\tilde{\mu} - \mu\right) \overset{d}{\longrightarrow} \mathrm{N}(0, 1/2)$. The asymptotic variance of the MLE is half that of the sample mean. In this setting the sample mean is inefficient.

But the question at hand is whether or not the sample mean is efficient when the form of the distribution is unknown. We call this setting semiparametric as the parameter of interest (the mean) is finite dimensional while the remaining features of the distribution are unspecified. In the semiparametric context an estimator is called semiparametrically efficient if it has the smallest asymptotic variance among all semiparametric estimators.

The mathematical trick is to reduce the semiparametric model to a set of parametric "submodels". The classic Cramer-Rao variance bound can be found for each parametric submodel. The variance bound for the semiparametric model (the union of the submodels) is then computed by taking the supremum of the individual variance bounds.

Formally, suppose that the true density of $z$ is the unknown function $f(z)$. A parametric submodel for $z$ is a density $f(z \mid \theta)$ which is a smooth function of a parameter $\theta \in \mathbb{R}^m$, and there is some $\theta_0$ such that $f(z \mid \theta_0) = f(z)$. This means that the submodel class passes through the true density, so the submodel is a true model. Since each submodel is parametric we can calculate its Cramer-Rao bound. By the Cramer-Rao theorem no estimator (and in particular no semiparametric estimator) has an asymptotic variance smaller than this bound. This comparison is true for all submodels, so the asymptotic variance of any semiparametric estimator cannot be smaller than the Cramer-Rao bound for any parametric submodel. The semiparametric asymptotic variance bound (which is sometimes called the semiparametric efficiency bound) is the supremum of the Cramer-Rao bounds from all conceivable submodels. It is a lower bound for the asymptotic variance of any semiparametric estimator. If the asymptotic variance of a specific semiparametric estimator equals this bound we say that the estimator is semiparametrically efficient.

For many statistical problems it is quite challenging to calculate the semiparametric variance bound. However the solution is straightforward in the present setting. As the semiparametric variance bound cannot be smaller than the Cramer-Rao bound for any submodel, and cannot be larger than the asymptotic variance of any feasible semiparametric estimator, it follows that if the asymptotic variance of a feasible semiparametric estimator equals the Cramer-Rao bound for at least one submodel, then this is the semiparametric asymptotic variance bound, and the aforementioned feasible semiparametric estimator must be semiparametrically efficient. In these cases, it is sufficient to construct a parametric submodel for which the Cramer-Rao bound (equivalently, the asymptotic variance of the MLE) equals that of a known semiparametric estimator.

We now show this for the moment estimator $\hat{\beta} = g(\hat{\mu})$ discussed above. As $\hat{\beta}$ has asymptotic variance $V$, our goal is to find a parametric submodel whose Cramer-Rao bound for estimation of $\beta$ is $V$. The solution involves creating a tilted version of the true density. Consider the parametric submodel

$$f(z \mid \theta) = f(z)\left(1 + \theta'\Sigma^{-1}(z - \mu)\right) \qquad (4.20)$$

where $f(z)$ is the true density and $\mu = Ez$. Note that

$$\int f(z \mid \theta)\,dz = \int f(z)\,dz + \theta'\Sigma^{-1}\int f(z)(z - \mu)\,dz = 1$$

and for all $\theta$ close to zero $f(z \mid \theta) \geq 0$. Thus $f(z \mid \theta)$ is a valid density function. It is a parametric submodel since $f(z \mid \theta_0) = f(z)$ when $\theta_0 = 0$. This parametric submodel has the mean

$$\mu(\theta) = \int zf(z \mid \theta)\,dz = \int zf(z)\,dz + \int f(z)z(z - \mu)'\Sigma^{-1}\theta\,dz = \mu + \theta$$

and parameter of interest $\beta(\theta) = g(\mu + \theta)$, both of which are smooth functions of $\theta$.

Since

$$\frac{\partial}{\partial\theta}\log f(z \mid \theta) = \frac{\partial}{\partial\theta}\log\left(1 + \theta'\Sigma^{-1}(z - \mu)\right) = \frac{\Sigma^{-1}(z - \mu)}{1 + \theta'\Sigma^{-1}(z - \mu)}$$

it follows that the score function for $\theta$ is

$$s = \frac{\partial}{\partial\theta}\log f(z \mid \theta_0) = \Sigma^{-1}(z - \mu). \qquad (4.21)$$

By classic theory the asymptotic variance of the MLE $\hat{\theta}$ for $\theta$ is the Cramer-Rao bound $\left(E\left(ss'\right)\right)^{-1} = \left(\Sigma^{-1}E\left[(z - \mu)(z - \mu)'\right]\Sigma^{-1}\right)^{-1} = \Sigma$. The MLE for $\beta$ is $\beta(\hat{\theta}) = g\left(\mu + \hat{\theta}\right)$ which by the delta method has asymptotic variance $V = \frac{\partial}{\partial\mu'}g(\mu)\,\Sigma\,\frac{\partial}{\partial\mu}g(\mu)'$, which is identical to the asymptotic variance of the moment estimator $\hat{\beta}$. This shows that moment estimators are semiparametrically efficient, and this includes the OLS estimator in the projection model. We have established the following theorem.

Theorem 4.12.1 Under Assumptions 3.1.1 and 4.3.1, the semiparametric variance bound for estimation of $\beta$ is $V = Q^{-1}\Omega Q^{-1}$, and the OLS estimator is semiparametrically efficient.
4.13 Semiparametric Efficiency in the Homoskedastic Regression Model

In Section 3.9 we presented the Gauss-Markov theorem, which stated that in the homoskedastic regression model (2.10)-(2.12), in the class of linear unbiased estimators the one with the smallest variance is least-squares. As we noted in that section, the restriction to linear unbiased estimators is unsatisfactory as it leaves open the possibility that an alternative (non-linear) estimator could have a smaller asymptotic variance. In Sections 3.10 and 4.12 we showed that the OLS estimator is efficient in the projection model, but this does not address the question of whether or not OLS is efficient in the homoskedastic regression model (2.10)-(2.12). In this section we return to the question of efficient estimation in this model using the theory of semiparametric variance bounds as presented in the previous section.

Recall that in the homoskedastic regression model (2.10)-(2.12) the asymptotic variance of the OLS estimator $\hat{\beta}$ for $\beta$ is $V^0 = Q^{-1}\sigma^2$. Therefore, as described in the previous section, it is sufficient to find a parametric submodel whose Cramer-Rao bound for estimation of $\beta$ is $V^0$. This would establish that $V^0$ is the semiparametric variance bound and the OLS estimator $\hat{\beta}$ is semiparametrically efficient for $\beta$.

Let the joint density of $y$ and $x$ be written as $f(y, x) = f_1(y \mid x)f_2(x)$, the product of the conditional density of $y$ given $x$, and the marginal density of $x$. Now consider the parametric submodel

$$f(y, x \mid \theta) = f_1(y \mid x)\left(1 + \left(y - x'\beta\right)\left(x'\theta\right)/\sigma^2\right)f_2(x). \qquad (4.22)$$

You can check that in this submodel, the marginal density of $x$ is $f_2(x)$, and the conditional density of $y$ given $x$ is $f_1(y \mid x)\left(1 + (y - x'\beta)(x'\theta)/\sigma^2\right)$. To see that the latter is a valid conditional density, observe that the regression assumption implies that $\int yf_1(y \mid x)\,dy = x'\beta$ and therefore

$$\int f_1(y \mid x)\left(1 + \left(y - x'\beta\right)\left(x'\theta\right)/\sigma^2\right)dy = \int f_1(y \mid x)\,dy + \int f_1(y \mid x)\left(y - x'\beta\right)dy\left(x'\theta\right)/\sigma^2 = 1.$$

In this parametric submodel the conditional mean of $y$ given $x$ is

$$E_\theta(y \mid x) = \int yf_1(y \mid x)\left(1 + \left(y - x'\beta\right)\left(x'\theta\right)/\sigma^2\right)dy$$
$$= \int yf_1(y \mid x)\,dy + \int yf_1(y \mid x)\left(y - x'\beta\right)\left(x'\theta\right)/\sigma^2\,dy$$
$$= \int yf_1(y \mid x)\,dy + \int\left(y - x'\beta\right)^2f_1(y \mid x)\left(x'\theta\right)/\sigma^2\,dy + \int\left(y - x'\beta\right)f_1(y \mid x)\,dy\left(x'\beta\right)\left(x'\theta\right)/\sigma^2$$
$$= x'\left(\beta + \theta\right),$$

using the homoskedasticity assumption that $\int\left(y - x'\beta\right)^2f_1(y \mid x)\,dy = \sigma^2$. This means that in this parametric submodel, the conditional mean is linear in $x$ and the regression coefficient is $\beta(\theta) = \beta + \theta$.

We now calculate the score for estimation of $\theta$. Since

$$\frac{\partial}{\partial\theta}\log f(y, x \mid \theta) = \frac{\partial}{\partial\theta}\log\left(1 + \left(y - x'\beta\right)\left(x'\theta\right)/\sigma^2\right) = \frac{x\left(y - x'\beta\right)/\sigma^2}{1 + \left(y - x'\beta\right)\left(x'\theta\right)/\sigma^2},$$

the score is

$$s = \frac{\partial}{\partial\theta}\log f(y, x \mid \theta_0) = xe/\sigma^2.$$

The Cramer-Rao bound for estimation of $\theta$ (and therefore $\beta(\theta)$ as well) is

$$\left(E\left(ss'\right)\right)^{-1} = \left(\sigma^{-4}E\left[(xe)(xe)'\right]\right)^{-1} = \sigma^2Q^{-1} = V^0.$$

We have shown that there is a parametric submodel (4.22) whose Cramer-Rao bound for estimation of $\beta$ is identical to the asymptotic variance of the least-squares estimator, which therefore is the semiparametric variance bound.

Theorem 4.13.1 In the homoskedastic regression model (2.10)-(2.12), the semiparametric variance bound for estimation of $\beta$ is $V^0 = \sigma^2Q^{-1}$ and the OLS estimator is semiparametrically efficient.

This result is similar to the Gauss-Markov theorem, in that it asserts the efficiency of the least-squares estimator in the context of the homoskedastic regression model. The difference is that the Gauss-Markov theorem states that OLS has the smallest variance among the set of unbiased linear estimators, while Theorem 4.13.1 states that OLS has the smallest asymptotic variance among regular estimators. This is a much more powerful statement.
4.14 Problems with Tests of NonLinear Hypotheses
While the t and Wald tests work well when the hypothesis is a linear restriction on d, they
can work quite poorly when the restrictions are nonlinear. This can be seen by a simple example
introduced by Lafontaine and White (1986). Take the model
j
i
= , +c
i
c
i
~ N(0, o
2
)
and consider the hypothesis
H
0
: , = 1.
49
Let
^
, and ^ o
2
be the sample mean and variance of j
i
. Then the standard Wald test for H
0
is
\
a
= :
_
^
, 1
_
2
^ o
2
.
Now notice that $H_0$ is equivalent to the hypothesis
$$ H_0(r) : \beta^r = 1 $$
for any positive integer $r$. Letting $h(\beta) = \beta^r$, and noting $H_\beta = r\beta^{r-1}$, we find that the standard Wald test for $H_0(r)$ is
$$ W_n(r) = n\,\frac{\left(\hat\beta^r - 1\right)^2}{\hat\sigma^2 r^2\hat\beta^{2r-2}}. $$
While the hypothesis $\beta^r = 1$ is unaffected by the choice of $r$, the statistic $W_n(r)$ varies with $r$. This is an unfortunate feature of the Wald statistic.
To demonstrate this effect, we have plotted in Figure 4.4 the Wald statistic $W_n(r)$ as a function of $r$, setting $n/\hat\sigma^2 = 10$. The increasing solid line is for the case $\hat\beta = 0.8$. The decreasing dashed line is for the case $\hat\beta = 1.6$. It is easy to see that in each case there are values of $r$ for which the test statistic is significant relative to asymptotic critical values, while there are other values of $r$ for which the test statistic is insignificant. This is distressing since the choice of $r$ is arbitrary and irrelevant to the actual hypothesis.

Figure 4.4: Wald Statistic as a function of r
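The sensitivity of $W_n(r)$ to $r$ is easy to verify numerically. The following is a minimal sketch (my own illustration, not code from the text) that evaluates the formula above at the two values of $\hat\beta$ used in Figure 4.4, assuming $n/\hat\sigma^2 = 10$.

    # Sketch: the Wald statistic W_n(r) for H_0: beta^r = 1, evaluated at two estimates.
    import numpy as np

    def wald_stat(beta_hat, r, n_over_s2=10.0):
        # W_n(r) = n * (beta^r - 1)^2 / (sigma^2 * r^2 * beta^(2r-2))
        return n_over_s2 * (beta_hat**r - 1.0)**2 / (r**2 * beta_hat**(2*r - 2))

    for beta_hat in (0.8, 1.6):
        stats = [wald_stat(beta_hat, r) for r in range(1, 11)]
        print(beta_hat, np.round(stats, 2))
    # The chi-square(1) 5% critical value is 3.84; whether W_n(r) exceeds it depends on r.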
Our first-order asymptotic theory is not useful to help pick $r$, as $W_n(r) \to_d \chi^2_1$ under $H_0$ for any $r$. This is a context where Monte Carlo simulation can be quite useful as a tool to study and compare the exact distributions of statistical procedures in finite samples. The method uses random simulation to create an artificial dataset to apply the statistical tools of interest. This produces random draws from the sampling distribution of interest. Through repetition, features of this distribution can be calculated.

In the present context of the Wald statistic, one feature of importance is the Type I error of the test using the asymptotic 5% critical value 3.84, that is, the probability of a false rejection, $P(W_n(r) > 3.84 \mid \beta = 1)$. Given the simplicity of the model, this probability depends only on $r$, $n$, and $\sigma^2$. In Table 4.1 we report the results of a Monte Carlo simulation where we vary these three parameters. The value of $r$ is varied from 1 to 10, $n$ is varied among 20, 100 and 500, and $\sigma$ is varied among 1 and 3. Table 4.1 reports the simulation estimate of the Type I error probability from 50,000 random samples. Each row of the table corresponds to a different value of $r$, and thus corresponds to a particular choice of test statistic. The second through seventh columns contain the Type I error probabilities for different combinations of $n$ and $\sigma$. These probabilities are calculated as the percentage of the 50,000 simulated Wald statistics $W_n(r)$ which are larger than 3.84. The null hypothesis $\beta^r = 1$ is true, so these probabilities are Type I error.

To interpret the table, remember that the ideal Type I error probability is 5% (.05) with deviations indicating distortion. Typically, Type I error rates between 3% and 8% are considered reasonable. Error rates above 10% are considered excessive. Rates above 20% are unacceptable. When comparing statistical procedures, we compare the rates row by row, looking for tests for which rejection rates are close to 5% and rarely fall outside of the 3%-8% range. For this particular example the only test which meets this criterion is the conventional $W_n = W_n(1)$ test. Any other choice of $r$ leads to a test with unacceptable Type I error probabilities.

In Table 4.1 you can also see the impact of variation in sample size. In each case, the Type I error probability improves towards 5% as the sample size $n$ increases. There is, however, no magic choice of $n$ for which all tests perform uniformly well. Test performance deteriorates as $r$ increases, which is not surprising given the dependence of $W_n(r)$ on $r$ as shown in Figure 4.4.
Table 4.1
Type I error Probability of Asymptotic 5% $W_n(r)$ Test

                    sigma = 1                      sigma = 3
  r     n = 20   n = 100   n = 500      n = 20   n = 100   n = 500
  1      .06       .05       .05          .07       .05       .05
  2      .08       .06       .05          .15       .08       .06
  3      .10       .06       .05          .21       .12       .07
  4      .13       .07       .06          .25       .15       .08
  5      .15       .08       .06          .28       .18       .10
  6      .17       .09       .06          .30       .20       .11
  7      .19       .10       .06          .31       .22       .13
  8      .20       .12       .07          .33       .24       .14
  9      .22       .13       .07          .34       .25       .15
 10      .23       .14       .08          .35       .26       .16

Note: Rejection frequencies from 50,000 simulated random samples
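A simulation of this kind is straightforward to code. The following is a minimal sketch (my own loop structure, not the author's program) estimating one entry of Table 4.1; the true value $\beta = 1$ is imposed, so the rejection frequency estimates the Type I error.

    # Sketch: Monte Carlo Type I error of the asymptotic 5% W_n(r) test when beta = 1.
    import numpy as np

    rng = np.random.default_rng(0)

    def type1_error(r, n, sigma, B=50_000):
        rejections = 0
        for _ in range(B):
            y = 1.0 + sigma * rng.standard_normal(n)    # true beta = 1
            b, s2 = y.mean(), y.var()                   # sample mean and variance
            w = n * (b**r - 1.0)**2 / (s2 * r**2 * b**(2*r - 2))
            rejections += (w > 3.84)
        return rejections / B

    print(type1_error(r=4, n=20, sigma=3))   # should be near the .25 entry of Table 4.1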
In this example it is not surprising that the choice $r = 1$ yields the best test statistic. Other choices are arbitrary and would not be used in practice. While this is clear in this particular example, in other examples natural choices are not always obvious and the best choices may in fact appear counter-intuitive at first.

This point can be illustrated through another example which is similar to one developed in Gregory and Veall (1985). Take the model
$$ y_i = \beta_0 + x_{1i}\beta_1 + x_{2i}\beta_2 + e_i, \qquad E(x_i e_i) = 0 \qquad (4.23) $$
and the hypothesis
$$ H_0 : \frac{\beta_1}{\beta_2} = r $$
where $r$ is a known constant. Equivalently, define $\theta = \beta_1/\beta_2$, so the hypothesis can be stated as $H_0 : \theta = r$.

Let $\hat\beta = (\hat\beta_0, \hat\beta_1, \hat\beta_2)$ be the least-squares estimates of (4.23), let $\hat V$ be an estimate of the asymptotic variance matrix for $\hat\beta$ and set $\hat\theta = \hat\beta_1/\hat\beta_2$. Define
$$ \hat H_1 = \begin{pmatrix} 0 \\ 1/\hat\beta_2 \\ -\hat\beta_1/\hat\beta_2^2 \end{pmatrix} $$
so that the standard error for $\hat\theta$ is $s(\hat\theta) = \left(n^{-1}\hat H_1'\hat V\hat H_1\right)^{1/2}$. In this case a t-statistic for $H_0$ is
$$ t_{1n} = \frac{\hat\beta_1/\hat\beta_2 - r}{s(\hat\theta)}. $$
An alternative statistic can be constructed through reformulating the null hypothesis as
$$ H_0 : \beta_1 - r\beta_2 = 0. $$
A t-statistic based on this formulation of the hypothesis is
$$ t_{2n} = \frac{\hat\beta_1 - r\hat\beta_2}{\left(n^{-1}H_2'\hat V H_2\right)^{1/2}}, $$
where
$$ H_2 = \begin{pmatrix} 0 \\ 1 \\ -r \end{pmatrix}. $$
To compare $t_{1n}$ and $t_{2n}$ we perform another simple Monte Carlo simulation. We let $x_{1i}$ and $x_{2i}$ be mutually independent N(0,1) variables, $e_i$ be an independent N(0, $\sigma^2$) draw with $\sigma = 3$, and normalize $\beta_0 = 0$ and $\beta_1 = 1$. This leaves $\beta_2$ as a free parameter, along with sample size $n$. We vary $\beta_2$ among .1, .25, .50, .75, and 1.0 and $n$ among 100 and 500.
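The two t-ratios are easy to compare in code. Below is a minimal sketch (my own implementation under the stated design; the covariance estimator and replication count are my choices, not specified in the text) producing one cell of the comparison.

    # Sketch: Monte Carlo comparison of t_1n (ratio form) and t_2n (linear form).
    import numpy as np

    rng = np.random.default_rng(1)

    def one_draw(n, beta2, sigma=3.0):
        x1, x2 = rng.standard_normal(n), rng.standard_normal(n)
        y = 1.0 * x1 + beta2 * x2 + sigma * rng.standard_normal(n)
        X = np.column_stack([np.ones(n), x1, x2])
        b = np.linalg.solve(X.T @ X, X.T @ y)
        e = y - X @ b
        XX_inv = np.linalg.inv(X.T @ X / n)
        V = XX_inv @ ((X.T * e**2) @ X / n) @ XX_inv     # White estimate for sqrt(n)(b - beta)
        r = 1.0 / beta2                                  # true ratio beta1/beta2
        H1 = np.array([0.0, 1/b[2], -b[1]/b[2]**2])
        H2 = np.array([0.0, 1.0, -r])
        t1 = (b[1]/b[2] - r) / np.sqrt(H1 @ V @ H1 / n)
        t2 = (b[1] - r*b[2]) / np.sqrt(H2 @ V @ H2 / n)
        return t1, t2

    draws = np.array([one_draw(100, beta2=0.25) for _ in range(5000)])
    print((draws < -1.645).mean(axis=0), (draws > 1.645).mean(axis=0))   # tail rejection rates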
Table 4.2
Type I error Probability of Asymptotic 5% t-tests

                        n = 100                                  n = 500
          P(t_n < -1.645)   P(t_n > 1.645)       P(t_n < -1.645)   P(t_n > 1.645)
 beta_2    t_1n    t_2n      t_1n    t_2n          t_1n    t_2n      t_1n    t_2n
  .10       .47     .06       .00     .06           .28     .05       .00     .05
  .25       .26     .06       .00     .06           .15     .05       .00     .05
  .50       .15     .06       .00     .06           .10     .05       .00     .05
  .75       .12     .06       .00     .06           .09     .05       .00     .05
 1.00       .10     .06       .00     .06           .07     .05       .02     .05
The one-sided Type I error probabilities $P(t_n < -1.645)$ and $P(t_n > 1.645)$ are calculated from 50,000 simulated samples. The results are presented in Table 4.2. Ideally, the entries in the table should be 0.05. However, the rejection rates for the $t_{1n}$ statistic diverge greatly from this value, especially for small values of $\beta_2$. The left tail probabilities $P(t_{1n} < -1.645)$ greatly exceed 5%, while the right tail probabilities $P(t_{1n} > 1.645)$ are close to zero in most cases. In contrast, the rejection rates for the linear $t_{2n}$ statistic are invariant to the value of $\beta_2$, and are close to the ideal 5% rate for both sample sizes. The implication of Table 4.2 is that the two t-ratios have dramatically different sampling behavior.

The common message from both examples is that Wald statistics are sensitive to the algebraic formulation of the null hypothesis. In all cases, if the hypothesis can be expressed as a linear restriction on the model parameters, this formulation should be used. If no linear formulation is feasible, then the "most linear" formulation should be selected (as suggested by the theory of Park and Phillips (1988)), and alternatives to asymptotic critical values should be considered. It is also prudent to consider alternative tests to the Wald statistic, such as the GMM distance statistic which will be presented in Section 7.7 (as advocated by Hansen (2006)).
4.15 Monte Carlo Simulation

In the previous section we introduced the method of Monte Carlo simulation to illustrate the small sample problems with tests of nonlinear hypotheses. In this section we describe the method in more detail.

Recall, our data consist of observations $(y_i, x_i)$ which are random draws from a population distribution $F$. Let $\theta$ be a parameter and let $T_n = T_n((y_1,x_1),\ldots,(y_n,x_n),\theta)$ be a statistic of interest, for example an estimator $\hat\theta$ or a t-statistic $(\hat\theta - \theta)/s(\hat\theta)$. The exact distribution of $T_n$ is
$$ G_n(u, F) = P(T_n \le u \mid F). $$
While the asymptotic distribution of $T_n$ might be known, the exact (finite sample) distribution $G_n$ is generally unknown.

Monte Carlo simulation uses numerical simulation to compute $G_n(u,F)$ for selected choices of $F$. This is useful to investigate the performance of the statistic $T_n$ in reasonable situations and sample sizes. The basic idea is that for any given $F$, the distribution function $G_n(u,F)$ can be calculated numerically through simulation. The name Monte Carlo derives from the famous Mediterranean gambling resort where games of chance are played.

The method of Monte Carlo is quite simple to describe. The researcher chooses $F$ (the distribution of the data) and the sample size $n$. A "true" value of $\theta$ is implied by this choice, or equivalently the value $\theta$ is selected directly by the researcher, which implies restrictions on $F$. Then the following experiment is conducted:

1. $n$ independent random pairs $(y_i^*, x_i^*)$, $i = 1,\ldots,n$, are drawn from the distribution $F$ using the computer's random number generator.

2. The statistic $T_n = T_n((y_1^*,x_1^*),\ldots,(y_n^*,x_n^*),\theta)$ is calculated on this pseudo data.

For step 1, most computer packages have built-in procedures for generating U[0,1] and N(0,1) random numbers, and from these most random variables can be constructed. (For example, a chi-square can be generated by sums of squares of normals.)

For step 2, it is important that the statistic be evaluated at the "true" value of $\theta$ corresponding to the choice of $F$.

The above experiment creates one random draw from the distribution $G_n(u,F)$. This is one observation from an unknown distribution. Clearly, from one observation very little can be said. So the researcher repeats the experiment $B$ times, where $B$ is a large number. Typically, we set $B = 1000$ or $B = 5000$. We will discuss this choice later.

Notationally, let the $b$'th experiment result in the draw $T_{nb}$, $b = 1,\ldots,B$. These results are stored. They constitute a random sample of size $B$ from the distribution $G_n(u,F) = P(T_{nb} \le u) = P(T_n \le u \mid F)$.
From a random sample, we can estimate any feature of interest using (typically) a method of moments estimator. For example:

Suppose we are interested in the bias, mean-squared error (MSE), or variance of the distribution of $\hat\theta - \theta$. We then set $T_n = \hat\theta - \theta$, run the above experiment, and calculate
$$ \widehat{\mathrm{Bias}}(\hat\theta) = \frac{1}{B}\sum_{b=1}^B T_{nb} = \frac{1}{B}\sum_{b=1}^B \hat\theta_b - \theta $$
$$ \widehat{\mathrm{MSE}}(\hat\theta) = \frac{1}{B}\sum_{b=1}^B (T_{nb})^2 = \frac{1}{B}\sum_{b=1}^B\left(\hat\theta_b - \theta\right)^2 $$
$$ \widehat{\mathrm{var}}(\hat\theta) = \widehat{\mathrm{MSE}}(\hat\theta) - \left(\widehat{\mathrm{Bias}}(\hat\theta)\right)^2 $$
Suppose we are interested in the Type I error associated with an asymptotic 5% two-sided t-test. We would then set $T_n = \left|\hat\theta - \theta\right|/s(\hat\theta)$ and calculate
$$ \hat P = \frac{1}{B}\sum_{b=1}^B 1\left(T_{nb} \ge 1.96\right), \qquad (4.24) $$
the percentage of the simulated t-ratios which exceed the asymptotic 5% critical value.

Suppose we are interested in the 5% and 95% quantile of $T_n = \hat\theta$. We then compute the 5% and 95% sample quantiles of the sample $\{T_{nb}\}$. The $\alpha\%$ sample quantile is a number $q_\alpha$ such that $\alpha\%$ of the sample are less than $q_\alpha$. A simple way to compute sample quantiles is to sort the sample $\{T_{nb}\}$ from low to high. Then $q_\alpha$ is the $N$'th number in this ordered sequence, where $N = (B+1)\alpha$. It is therefore convenient to pick $B$ so that $N$ is an integer. For example, if we set $B = 999$, then the 5% sample quantile is the 50'th sorted value and the 95% sample quantile is the 950'th sorted value.
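These method-of-moments summaries are one-liners once the draws are stored. The sketch below uses placeholder arrays (my own stand-ins): in practice theta_draws would hold $B$ simulated values of $\hat\theta - \theta$ and t_draws the simulated t-ratios from a loop like the one sketched earlier.

    # Sketch: summaries of stored Monte Carlo draws (placeholder inputs).
    import numpy as np

    rng = np.random.default_rng(0)
    B = 999
    theta_draws = rng.standard_normal(B) / 10.0        # placeholder draws of theta_hat - theta
    t_draws = rng.standard_normal(B)                   # placeholder simulated t-ratios

    bias = theta_draws.mean()                          # estimated bias
    mse = (theta_draws**2).mean()                      # estimated MSE
    var = mse - bias**2                                # estimated variance
    p_hat = (np.abs(t_draws) >= 1.96).mean()           # estimate (4.24) of the Type I error
    srt = np.sort(theta_draws)
    q05, q95 = srt[int(0.05*(B + 1)) - 1], srt[int(0.95*(B + 1)) - 1]   # 5% and 95% quantiles
    print(bias, mse, var, p_hat, q05, q95)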
The typical purpose of a Monte Carlo simulation is to investigate the performance of a statistical procedure (estimator or test) in realistic settings. Generally, the performance will depend on $n$ and $F$. In many cases, an estimator or test may perform wonderfully for some values, and poorly for others. It is therefore useful to conduct a variety of experiments, for a selection of choices of $n$ and $F$.

As discussed above, the researcher must select the number of experiments, $B$. Often this is called the number of replications. Quite simply, a larger $B$ results in more precise estimates of the features of interest of $G_n$, but requires more computational time. In practice, therefore, the choice of $B$ is often guided by the computational demands of the statistical procedure. Since the results of a Monte Carlo experiment are estimates computed from a random sample of size $B$, it is straightforward to calculate standard errors for any quantity of interest. If the standard error is too large to make a reliable inference, then $B$ will have to be increased.

In particular, it is simple to make inferences about rejection probabilities from statistical tests, such as the percentage estimate reported in (4.24). The random variable $1(T_{nb} \ge 1.96)$ is iid Bernoulli, equalling 1 with probability $p = E\,1(T_{nb} \ge 1.96)$. The average (4.24) is therefore an unbiased estimator of $p$ with standard error $s(\hat p) = \sqrt{p(1-p)/B}$. As $p$ is unknown, this may be approximated by replacing $p$ with $\hat p$ or with an hypothesized value. For example, if we are assessing an asymptotic 5% test, then we can set $s(\hat p) = \sqrt{(.05)(.95)/B} \simeq .22/\sqrt{B}$. Hence, standard errors for $B = 100$, 1000, and 5000, are, respectively, $s(\hat p) = .022$, .007, and .003.
4.16 Estimating a Wage Equation

We again return to our wage equation. We use the sample of wage earners from the March 2004 Current Population Survey, excluding military. For the dependent variable we use the natural log of wages so that coefficients may be interpreted as semi-elasticities. For regressors we include years of education, potential work experience, experience squared, and dummy variable indicators for the following: married, female, union member, immigrant, hispanic, and non-white. Furthermore, we included a dummy variable for state of residence (including the District of Columbia, this adds 50 regressors). The available sample is 18,808 so the parameter estimates are quite precise and reported in Table 4.1, excluding the coefficients on the state dummy variables.

Table 4.1 displays the parameter estimates in a standard format. The Table clearly states the estimation method (OLS), the dependent variable (log(Wage)), and the regressors are clearly labeled. Parameter estimates are both reported for the coefficients of interest (the coefficients on the state dummy variables are omitted) and standard errors are reported for all reported coefficient estimates. In addition to the coefficient estimates, the table also reports the estimated error standard deviation, the sample size and the regression $R^2$. These are useful summary measures of fit which aid readers.
Table 4.1
OLS Estimates of Linear Equation for Log(Wage)

                        beta_hat     s(beta_hat)
 Intercept                1.027         .032
 Education                 .101         .002
 Experience                .033         .001
 Experience^2            -.00057        .00002
 Married                   .102         .008
 Female                   -.232         .007
 Union Member              .097         .010
 Immigrant                -.121         .013
 Hispanic                 -.102         .014
 Non-White                -.070         .010
 sigma_hat                 .4877
 Sample Size              18,808
 R^2                       .34
Note: Equation also includes state dummy variables.
As a general rule, it is best to always report standard errors along with parameter estimates (as done in Table 4.1). This allows readers to assess the precision of the parameter estimates, and form confidence intervals and t-tests on individual coefficients if desired. For example, if you are interested in the difference in mean wages between men and women, you can read from the table that the estimated coefficient on the Female dummy variable is $-0.232$, implying a mean wage difference of 23%. To assess the precision, you can see that the standard error for this coefficient estimate is 0.007. This implies a 95% asymptotic confidence interval for the coefficient estimate of $[-.246, -.218]$. This means that we have estimated the difference in mean wages between men and women to lie between 22% and 25%. I interpret this as a precise estimate because there is not an important difference between the lower and upper bound.

Instead of reporting standard errors, some empirical researchers report t-ratios for each parameter estimate. t-ratios are t-statistics which test the hypothesis that the coefficient equals zero. An example is reported in Table 4.2. In this example, all the t-ratios are highly significant, ranging in magnitude from 9.3 to 50. What we learn from these statistics is that these coefficients are non-zero, but not much more. In a sample of this size this finding is rather uninteresting; consequently the reporting of t-ratios is a waste of space. Again consider the male-female wage difference. Table 4.2 reports that the t-ratio is 33, enabling us to reject the hypothesis that the coefficient is zero. But how precise is the reported estimate of a wage gap of 23%? It is hard to assess from a quick reading of Table 4.2. Standard errors are much more useful, for they enable quick and easy assessment of the degree of estimation uncertainty.
Table 4.2
OLS Estimates of Linear Equation for Log(Wage)
Improper Reporting: t-ratios replacing standard errors

                        beta_hat       t
 Intercept                1.027        32
 Education                 .101        50
 Experience                .033        33
 Experience^2            -.00057      -28
 Married                   .102        12.8
 Female                   -.232       -33
 Union Member              .097        9.7
 Immigrant                -.121       -9.3
 Hispanic                 -.102       -7.3
 Non-White                -.070       -7
Returning to the estimated wage equation, one might question whether or not the state dummy variables are relevant. Computing the Wald statistic (4.18) that the state coefficients are jointly zero, we find $W_n = 550$. Alternatively, re-estimating the model with the 50 state dummies excluded, the restricted standard deviation estimate is $\tilde\sigma = .4945$. The F form of the Wald statistic (4.19) is
$$ W_n = n\left(\frac{\tilde\sigma^2}{\hat\sigma^2} - 1\right) = 18{,}808\left(\frac{.4945^2}{.4877^2} - 1\right) = 528. $$
Notice that the two statistics are close, but not equal. Using either statistic the hypothesis is easily rejected, as the 1% critical value for the $\chi^2_{50}$ distribution is 76.

Another interesting question which can be addressed from these estimates is the maximal impact of experience on mean wages. Ignoring the other coefficients, we can write this effect as
$$ \log(\mathrm{Wage}) = \beta_2\,\mathrm{Experience} + \beta_3\,\mathrm{Experience}^2 + \cdots $$
Our question is: At which level of experience $\theta$ do workers achieve the highest wage? In this quadratic model, if $\beta_2 > 0$ and $\beta_3 < 0$ the solution is
$$ \theta = -\frac{\beta_2}{2\beta_3}. $$
From Table 4.1 we find the point estimate
$$ \hat\theta = -\frac{\hat\beta_2}{2\hat\beta_3} = 28.69. $$
Using the Delta Method, we can calculate a standard error of $s(\hat\theta) = .40$, implying a 95% confidence interval of $[27.9, 29.5]$.
However, this is a poor choice, as the coverage probability of this confidence interval is one minus the Type I error of the hypothesis test based on the t-test. In the previous section we discovered that such t-tests had very poor Type I error rates. Instead, we found better Type I error rates by reformulating the hypothesis as a linear restriction. These t-statistics take the form
$$ t_n(\theta) = \frac{\hat\beta_2 + 2\hat\beta_3\theta}{\left(h_\theta'\hat V h_\theta\right)^{1/2}} $$
where
$$ h_\theta = \begin{pmatrix} 1 \\ 2\theta \end{pmatrix} $$
and $\hat V$ is the covariance matrix for $(\hat\beta_2, \hat\beta_3)$.

In the present context we are interested in forming a confidence interval, not testing a hypothesis, so we have to go one step further. Our desired confidence interval will be the set of parameter values $\theta$ which are not rejected by the hypothesis test. This is the set of $\theta$ such that $|t_n(\theta)| \le 1.96$. Since $t_n(\theta)$ is a non-linear function of $\theta$, there is not a simple expression for this set, but it can be found numerically quite easily. This set is $[27.0, 29.5]$. Notice that the upper end of the confidence interval is the same as that from the delta method, but the lower end is substantially lower.
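The numerical inversion is simple: evaluate $t_n(\theta)$ on a grid and keep the values that are not rejected. The sketch below uses the point estimates from Table 4.1 but an assumed covariance matrix Vhat (the off-diagonal term is not reported in the text), so its output is only illustrative.

    # Sketch: test-inversion confidence set {theta : |t_n(theta)| <= 1.96} by grid search.
    import numpy as np

    b2, b3 = 0.033, -0.00057               # point estimates from Table 4.1
    Vhat = np.array([[1.0e-6, -1.5e-8],    # assumed covariance matrix for (b2, b3);
                     [-1.5e-8, 4.0e-10]])  # the true off-diagonal value is not reported

    grid = np.linspace(20.0, 40.0, 2001)
    accepted = []
    for theta in grid:
        h = np.array([1.0, 2.0*theta])
        t = (b2 + 2.0*b3*theta) / np.sqrt(h @ Vhat @ h)
        if abs(t) <= 1.96:
            accepted.append(theta)
    print(min(accepted), max(accepted))    # the test-inversion confidence interval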
4.17 Technical Proofs

Proof of Theorem 4.2.1. In order to apply the WLLN to the sample moments $\frac{1}{n}\sum_{i=1}^n x_i x_i'$, $\frac{1}{n}\sum_{i=1}^n x_i y_i$, or $\frac{1}{n}\sum_{i=1}^n x_i e_i$ we need to verify that the random variables $x_{ji}x_{li}$, $x_{ji}y_i$ and $x_{ji}e_i$ are iid with finite absolute first moments. These moment conditions are covered by Assumption 2.6.1 and Theorem 2.6.1. Assumption 3.1.1 states that the observations $(y_i, x_i)$ are mutually independent and identically distributed. It follows that any function $z_i = h(y_i, x_i)$ is also iid. This includes $e_i$, $x_{ji}x_{li}$, $x_{ji}y_i$ and $x_{ji}e_i$. We have verified that the random variables $x_{ji}x_{li}$, $x_{ji}y_i$ and $x_{ji}e_i$ are iid with finite absolute first moments and therefore the conditions for the WLLN hold. Equations (4.1), (4.2) and (4.5) are therefore valid.

The final step of the proof is the application of the continuous mapping theorem to obtain (4.3). To fully understand this application we now walk through it in more detail. Using (4.4) we can write
$$ \hat\beta = \left(\frac{1}{n}\sum_{i=1}^n x_i x_i'\right)^{-1}\left(\frac{1}{n}\sum_{i=1}^n x_i y_i\right) = g\left(\frac{1}{n}\sum_{i=1}^n x_i x_i',\ \frac{1}{n}\sum_{i=1}^n x_i y_i\right) $$
where $g(A, b) = A^{-1}b$ is a function of $A$ and $b$. This function is a continuous function of $A$ and $b$ at all values of the arguments such that $A^{-1}$ exists. Assumption 2.6.1.4 implies that $Q^{-1}$ exists and thus $g(A, b)$ is continuous at $A = Q$. Hence by the continuous mapping theorem (Theorem C.4.1),
$$ \hat\beta = g\left(\frac{1}{n}\sum_{i=1}^n x_i x_i',\ \frac{1}{n}\sum_{i=1}^n x_i y_i\right) \to_p g\left(Q, E(x_i y_i)\right) = E\left(x_i x_i'\right)^{-1} E(x_i y_i) = \beta. \qquad \blacksquare $$
Proof of Theorem 4.2.2. Note that
$$ \hat e_i = y_i - x_i'\hat\beta = e_i + x_i'\beta - x_i'\hat\beta = e_i - x_i'\left(\hat\beta - \beta\right). $$
Thus
$$ \hat e_i^2 = e_i^2 - 2 e_i x_i'\left(\hat\beta - \beta\right) + \left(\hat\beta - \beta\right)' x_i x_i'\left(\hat\beta - \beta\right) \qquad (4.25) $$
and
$$ \hat\sigma^2 = \frac{1}{n}\sum_{i=1}^n \hat e_i^2 = \frac{1}{n}\sum_{i=1}^n e_i^2 - 2\left(\frac{1}{n}\sum_{i=1}^n e_i x_i'\right)\left(\hat\beta - \beta\right) + \left(\hat\beta - \beta\right)'\left(\frac{1}{n}\sum_{i=1}^n x_i x_i'\right)\left(\hat\beta - \beta\right) \to_p \sigma^2, $$
the last line using the WLLN, (4.1), (4.5) and Theorem 4.2.1. Thus $\hat\sigma^2$ is consistent for $\sigma^2$.

Finally, since $n/(n - k) \to 1$ as $n \to \infty$, it follows that
$$ s^2 = \left(\frac{n}{n - k}\right)\hat\sigma^2 \to_p \sigma^2. \qquad \blacksquare $$
Proof of Theorem 4.3.1. The discussion preceding the statement of the theorem, together with the proof of Theorem 4.2.1, covered all points of the derivation except for the demonstration that the elements of the vector $x_i e_i$ have finite second moments, which we now show. For any $j = 1,\ldots,k$, by the Cauchy-Schwarz Inequality (C.3) and Assumption 4.3.1,
$$ E\left|x_{ji} e_i\right|^2 = E\left(x_{ji}^2 e_i^2\right) \le \left(E x_{ji}^4\right)^{1/2}\left(E e_i^4\right)^{1/2} < \infty. \qquad (4.26) \qquad \blacksquare $$
Proof of Theorem 4.4.1. We now show $\hat\Omega \to_p \Omega$, from which it follows that $\hat V \to_p V$ as $n \to \infty$. Using (4.25),
$$ \hat\Omega = \frac{1}{n}\sum_{i=1}^n x_i x_i'\hat e_i^2 = \frac{1}{n}\sum_{i=1}^n x_i x_i' e_i^2 - \frac{2}{n}\sum_{i=1}^n x_i x_i'\left(\hat\beta - \beta\right)' x_i e_i + \frac{1}{n}\sum_{i=1}^n x_i x_i'\left(\left(\hat\beta - \beta\right)' x_i\right)^2. \qquad (4.27) $$
We now examine each $k \times k$ sum on the right-hand-side of (4.27) in turn.

Take the first term on the right-hand-side of (4.27). The $jl$'th element of $x_i x_i' e_i^2$ is $x_{ji}x_{li}e_i^2$. Using the Cauchy-Schwarz Inequality (C.3) twice and Assumption 4.3.1,
$$ E\left|x_{ji}x_{li}e_i^2\right| \le \left(E x_{ji}^2 x_{li}^2\right)^{1/2}\left(E e_i^4\right)^{1/2} \le \left(E x_{ji}^4\right)^{1/4}\left(E x_{li}^4\right)^{1/4}\left(E e_i^4\right)^{1/2} < \infty. $$
Since this expectation is finite, we can apply the WLLN (Theorem C.2.1) to find that
$$ \frac{1}{n}\sum_{i=1}^n x_i x_i' e_i^2 \to_p E\left(x_i x_i' e_i^2\right) = \Omega. $$
Now take the second term on the right-hand-side of (4.27). Applying the Triangle Inequality (A.9) to the matrix Euclidean norm, the Matrix Schwarz Inequality (A.8), equation (A.6) and the Schwarz Inequality (A.7),
$$ \left\|\frac{2}{n}\sum_{i=1}^n x_i x_i'\left(\hat\beta - \beta\right)' x_i e_i\right\| \le \frac{2}{n}\sum_{i=1}^n\left\|x_i x_i'\left(\hat\beta - \beta\right)' x_i e_i\right\| \le \frac{2}{n}\sum_{i=1}^n\left\|x_i x_i'\right\|\left|\left(\hat\beta - \beta\right)' x_i\right| |e_i| \le \frac{2}{n}\sum_{i=1}^n\|x_i\|^3 |e_i|\left\|\hat\beta - \beta\right\|. \qquad (4.28) $$
Using Holder's inequality (C.4) and Assumption 4.3.1,
$$ E\left(\|x_i\|^3 |e_i|\right) \le \left(E\|x_i\|^4\right)^{3/4}\left(E e_i^4\right)^{1/4} < \infty. $$
By the WLLN
$$ \frac{1}{n}\sum_{i=1}^n\|x_i\|^3 |e_i| \to_p E\left(\|x_i\|^3 |e_i|\right) < \infty. $$
Since $\hat\beta - \beta \to_p 0$ it follows that (4.28) converges in probability to zero. This shows that the second term on the right-hand-side of (4.27) converges in probability to zero.

We now take the third term in (4.27). Again by the Triangle Inequality, the Matrix Schwarz Inequality, (A.6) and the Schwarz Inequality,
$$ \left\|\frac{1}{n}\sum_{i=1}^n x_i x_i'\left(\left(\hat\beta - \beta\right)' x_i\right)^2\right\| \le \frac{1}{n}\sum_{i=1}^n\left\|x_i x_i'\right\|\left(\left(\hat\beta - \beta\right)' x_i\right)^2 \le \frac{1}{n}\sum_{i=1}^n\|x_i\|^4\left\|\hat\beta - \beta\right\|^2 \to_p 0, $$
the final convergence since $\hat\beta - \beta \to_p 0$ and $\frac{1}{n}\sum_{i=1}^n\|x_i\|^4 \to_p E\|x_i\|^4 < \infty$ under Assumption 4.3.1. This shows that the third term on the right-hand-side of (4.27) converges in probability to zero.

Considering the three terms on the right-hand-side of (4.27), we have shown that the first term converges in probability to $\Omega$, and the second and third converge in probability to zero. We conclude that $\hat\Omega \to_p \Omega$ as claimed. $\blacksquare$
Proof of Theorem 4.7.1. By (4.15),
$$ t_n(\theta) = \frac{\hat\theta - \theta}{s(\hat\theta)} = \frac{\sqrt{n}\left(\hat\theta - \theta\right)}{\sqrt{\hat V_\theta}} \to_d \frac{\mathrm{N}(0, V_\theta)}{\sqrt{V_\theta}} = \mathrm{N}(0, 1). \qquad \blacksquare $$
4.18 Exercises

For exercises 1-4, the following definition is used. In the model $y = X\beta + e$, the least-squares estimate of $\beta$ subject to the restriction $h(\beta) = 0$ is
$$ \tilde\beta = \operatorname*{argmin}_{h(\beta)=0} S_n(\beta), \qquad S_n(\beta) = (y - X\beta)'(y - X\beta). $$
That is, $\tilde\beta$ minimizes the sum of squared errors $S_n(\beta)$ over all $\beta$ such that the restriction holds.
1. In the model $y = X_1\beta_1 + X_2\beta_2 + e$, show that the least-squares estimate of $\beta = (\beta_1, \beta_2)$ subject to the constraint that $\beta_2 = 0$ is the OLS regression of $y$ on $X_1$.

2. In the model $y = X_1\beta_1 + X_2\beta_2 + e$, show that the least-squares estimate of $\beta = (\beta_1, \beta_2)$, subject to the constraint that $\beta_1 = c$ (where $c$ is some given vector) is simply the OLS regression of $y - X_1 c$ on $X_2$.

3. In the model $y = X_1\beta_1 + X_2\beta_2 + e$, with $X_1$ and $X_2$ each $n \times k$, find the least-squares estimate of $\beta = (\beta_1, \beta_2)$, subject to the constraint that $\beta_1 = -\beta_2$.
4. Take the model $y = X\beta + e$ with the restriction $H'\beta = r$ where $H$ is a known $k \times s$ matrix, $r$ is a known $s \times 1$ vector, $0 < s < k$, and $\mathrm{rank}(H) = s$. Explain why $\tilde\beta$ solves the minimization of the Lagrangian
$$ \mathcal{L}(\beta, \lambda) = \frac{1}{2} S_n(\beta) + \lambda'\left(H'\beta - r\right) $$
where $\lambda$ is $s \times 1$.

(a) Show that the solution is
$$ \tilde\beta = \hat\beta - \left(X'X\right)^{-1} H\left(H'\left(X'X\right)^{-1} H\right)^{-1}\left(H'\hat\beta - r\right) $$
$$ \hat\lambda = \left(H'\left(X'X\right)^{-1} H\right)^{-1}\left(H'\hat\beta - r\right) $$
where $\hat\beta = \left(X'X\right)^{-1} X'y$ is the unconstrained OLS estimator.

(b) Verify that $H'\tilde\beta = r$.

(c) Show that if $H'\beta = r$ is true, then
$$ \tilde\beta - \beta = \left(I_k - \left(X'X\right)^{-1} H\left(H'\left(X'X\right)^{-1} H\right)^{-1} H'\right)\left(X'X\right)^{-1} X'e. $$

(d) Under the standard assumptions plus $H'\beta = r$, find the asymptotic distribution of $\sqrt{n}\left(\tilde\beta - \beta\right)$ as $n \to \infty$.

(e) Find an appropriate formula to calculate standard errors for the elements of $\tilde\beta$.
5. You have two independent samples $(y_1, X_1)$ and $(y_2, X_2)$ which satisfy $y_1 = X_1\beta_1 + e_1$ and $y_2 = X_2\beta_2 + e_2$, where $E(x_{1i}e_{1i}) = 0$ and $E(x_{2i}e_{2i}) = 0$, and both $X_1$ and $X_2$ have $k$ columns. Let $\hat\beta_1$ and $\hat\beta_2$ be the OLS estimates of $\beta_1$ and $\beta_2$. For simplicity, you may assume that both samples have the same number of observations $n$.

(a) Find the asymptotic distribution of $\sqrt{n}\left(\left(\hat\beta_2 - \hat\beta_1\right) - (\beta_2 - \beta_1)\right)$ as $n \to \infty$.

(b) Find an appropriate test statistic for $H_0 : \beta_2 = \beta_1$.

(c) Find the asymptotic distribution of this statistic under $H_0$.
6. The model is
$$ y_i = x_i'\beta + e_i, \qquad E(x_i e_i) = 0, \qquad \Omega = E\left(x_i x_i' e_i^2\right). $$

(a) Find the method of moments estimators $(\hat\beta, \hat\Omega)$ for $(\beta, \Omega)$.

(b) In this model, are $(\hat\beta, \hat\Omega)$ efficient estimators of $(\beta, \Omega)$?

(c) If so, in what sense are they efficient?
7. Take the model $y_i = x_{1i}'\beta_1 + x_{2i}'\beta_2 + e_i$ with $E(x_i e_i) = 0$. Suppose that $\beta_1$ is estimated by regressing $y_i$ on $x_{1i}$ only. Find the probability limit of this estimator. In general, is it consistent for $\beta_1$? If not, under what conditions is this estimator consistent for $\beta_1$?

8. Verify that equation (4.13) equals (4.14) as claimed in Section 4.5.

9. Prove that if an additional regressor $X_{k+1}$ is added to $X$, Theil's adjusted $\bar R^2$ increases if and only if $|t_{k+1}| > 1$, where $t_{k+1} = \hat\beta_{k+1}/s(\hat\beta_{k+1})$ is the t-ratio for $\hat\beta_{k+1}$ and
$$ s(\hat\beta_{k+1}) = \left(s^2\left[(X'X)^{-1}\right]_{k+1,k+1}\right)^{1/2} $$
is the homoskedasticity-formula standard error.
10. Let $y$ be $n \times 1$, $X$ be $n \times k$ (rank $k$), $y = X\beta + e$ with $E(x_i e_i) = 0$. Define the ridge regression estimator
$$ \hat\beta = \left(\sum_{i=1}^n x_i x_i' + \lambda I_k\right)^{-1}\left(\sum_{i=1}^n x_i y_i\right) $$
where $\lambda > 0$ is a fixed constant. Find the probability limit of $\hat\beta$ as $n \to \infty$. Is $\hat\beta$ consistent for $\beta$?
11. Of the variables $(y_i^*, y_i, x_i)$ only the pair $(y_i, x_i)$ are observed. In this case, we say that $y_i^*$ is a latent variable. Suppose
$$ y_i^* = x_i'\beta + e_i, \qquad E(x_i e_i) = 0, \qquad y_i = y_i^* + u_i $$
where $u_i$ is a measurement error satisfying
$$ E(x_i u_i) = 0, \qquad E(y_i^* u_i) = 0. $$
Let $\hat\beta$ denote the OLS coefficient from the regression of $y_i$ on $x_i$.

(a) Is $\beta$ the coefficient from the linear projection of $y_i$ on $x_i$?

(b) Is $\hat\beta$ consistent for $\beta$ as $n \to \infty$?

(c) Find the asymptotic distribution of $\sqrt{n}\left(\hat\beta - \beta\right)$ as $n \to \infty$.
12. The data set invest.dat contains data on 565 U.S. firms extracted from Compustat for the year 1987. The variables, in order, are

    $I_i$   Investment to Capital Ratio (multiplied by 100).
    $Q_i$   Total Market Value to Asset Ratio (Tobin's Q).
    $C_i$   Cash Flow to Asset Ratio.
    $D_i$   Long Term Debt to Asset Ratio.

The flow variables are annual sums for 1987. The stock variables are beginning of year.

(a) Estimate a linear regression of $I_i$ on the other variables. Calculate appropriate standard errors.

(b) Calculate asymptotic confidence intervals for the coefficients.

(c) This regression is related to Tobin's q theory of investment, which suggests that investment should be predicted solely by $Q_i$. Thus the coefficient on $Q_i$ should be positive and the others should be zero. Test the joint hypothesis that the coefficients on $C_i$ and $D_i$ are zero. Test the hypothesis that the coefficient on $Q_i$ is zero. Are the results consistent with the predictions of the theory?

(d) Now try a non-linear (quadratic) specification. Regress $I_i$ on $Q_i$, $C_i$, $D_i$, $Q_i^2$, $C_i^2$, $D_i^2$, $Q_iC_i$, $Q_iD_i$, $C_iD_i$. Test the joint hypothesis that the six interaction and quadratic coefficients are zero.
13. In a paper in 1963, Marc Nerlove analyzed a cost function for 145 American electric companies. (The problem is discussed in Example 8.3 of Greene, section 1.7 of Hayashi, and the empirical exercise in Chapter 1 of Hayashi.) The data file nerlov.dat contains his data. The variables are described on page 77 of Hayashi. Nerlov was interested in estimating a cost function: $TC = f(Q, PL, PF, PK)$.

(a) First estimate an unrestricted Cobb-Douglass specification
$$ \log TC_i = \beta_1 + \beta_2\log Q_i + \beta_3\log PL_i + \beta_4\log PK_i + \beta_5\log PF_i + e_i. \qquad (4.29) $$
Report parameter estimates and standard errors. You should obtain the same OLS estimates as in Hayashi's equation (1.7.7), but your standard errors may differ.

(b) Using a Wald statistic, test the hypothesis $H_0 : \beta_3 + \beta_4 + \beta_5 = 1$.

(c) Estimate (4.29) by least-squares imposing this restriction by substitution. Report your parameter estimates and standard errors.

(d) Estimate (4.29) subject to $\beta_3 + \beta_4 + \beta_5 = 1$ using the restricted least-squares estimator from problem 4. Do you obtain the same estimates as in part (c)?
Chapter 5

Additional Regression Topics

5.1 Generalized Least Squares

In the projection model, we know that the least-squares estimator is semi-parametrically efficient for the projection coefficient. However, in the linear regression model
$$ y_i = x_i'\beta + e_i, \qquad E(e_i \mid x_i) = 0, $$
the least-squares estimator is inefficient. The theory of Chamberlain (1987) can be used to show that in this model the semiparametric efficiency bound is obtained by the Generalized Least Squares (GLS) estimator
$$ \tilde\beta = \left(X'D^{-1}X\right)^{-1}\left(X'D^{-1}y\right) \qquad (5.1) $$
where $D = \mathrm{diag}\{\sigma_1^2,\ldots,\sigma_n^2\}$ and $\sigma_i^2 = \sigma^2(x_i) = E\left(e_i^2 \mid x_i\right)$. The GLS estimator is sometimes called the Aitken estimator. The GLS estimator (5.1) is infeasible since the matrix $D$ is unknown. A feasible GLS (FGLS) estimator replaces the unknown $D$ with an estimate $\hat D = \mathrm{diag}\{\hat\sigma_1^2,\ldots,\hat\sigma_n^2\}$. We now discuss this estimation problem.
Suppose that we model the conditional variance using the parametric form
$$ \sigma_i^2 = \alpha_0 + z_{1i}'\alpha_1 = \alpha'z_i, $$
where $z_{1i}$ is some $q \times 1$ function of $x_i$. Typically, $z_{1i}$ are squares (and perhaps levels) of some (or all) elements of $x_i$. Often the functional form is kept simple for parsimony.

Let $\eta_i = e_i^2$. Then
$$ E(\eta_i \mid x_i) = \alpha_0 + z_{1i}'\alpha_1 $$
and we have the regression equation
$$ \eta_i = \alpha_0 + z_{1i}'\alpha_1 + \xi_i, \qquad E(\xi_i \mid x_i) = 0. \qquad (5.2) $$
This regression error $\xi_i$ is generally heteroskedastic and has the conditional variance
$$ \mathrm{var}(\xi_i \mid x_i) = \mathrm{var}\left(e_i^2 \mid x_i\right) = E\left(\left(e_i^2 - E\left(e_i^2 \mid x_i\right)\right)^2 \mid x_i\right) = E\left(e_i^4 \mid x_i\right) - \left(E\left(e_i^2 \mid x_i\right)\right)^2. $$
Suppose $e_i$ (and thus $\eta_i$) were observed. Then we could estimate $\alpha$ by OLS:
$$ \hat\alpha = \left(Z'Z\right)^{-1} Z'\eta \to_p \alpha $$
and
$$ \sqrt{n}(\hat\alpha - \alpha) \to_d \mathrm{N}(0, V_\alpha) $$
where
$$ V_\alpha = \left(E\left(z_i z_i'\right)\right)^{-1} E\left(z_i z_i'\xi_i^2\right)\left(E\left(z_i z_i'\right)\right)^{-1}. \qquad (5.3) $$
While $e_i$ is not observed, we have the OLS residual $\hat e_i = y_i - x_i'\hat\beta = e_i - x_i'(\hat\beta - \beta)$. Thus
$$ \phi_i \equiv \hat\eta_i - \eta_i = \hat e_i^2 - e_i^2 = -2 e_i x_i'\left(\hat\beta - \beta\right) + (\hat\beta - \beta)' x_i x_i'(\hat\beta - \beta). $$
1
_
:
a

i=1
z
i
c
i
=
2
:
a

i=1
z
i
c
i
i
t
i
_
:
_
`
d d
_
+
1
:
a

i=1
z
i
(
`
d d)
t
i
i
i
t
i
(
`
d d)
_
:
j
0
Let
o =
_
Z
t
Z
_
1
Z
t
` (5.4)
be from OLS regression of ^ j
i
on z
i
. Then
_
:( oo) =
_
:(` oo) +
_
:
1
Z
t
Z
_
1
:
12
Z
t

o
N(0, X
o
) (5.5)
Thus the fact that $\eta_i$ is replaced with $\hat\eta_i$ is asymptotically irrelevant. We may call (5.4) the skedastic regression, as it is estimating the conditional variance of the regression of $y_i$ on $x_i$. We have shown that $\alpha$ is consistently estimated by a simple procedure, and hence we can estimate $\sigma_i^2 = z_i'\alpha$ by
$$ \tilde\sigma_i^2 = \tilde\alpha'z_i. \qquad (5.6) $$
Suppose that $\tilde\sigma_i^2 > 0$ for all $i$. Then set
$$ \tilde D = \mathrm{diag}\{\tilde\sigma_1^2,\ldots,\tilde\sigma_n^2\} $$
and
$$ \tilde\beta = \left(X'\tilde D^{-1}X\right)^{-1} X'\tilde D^{-1}y. $$
This is the feasible GLS, or FGLS, estimator of $\beta$. Since there is not a unique specification for the conditional variance the FGLS estimator is not unique, and will depend on the model (and estimation method) for the skedastic regression.

One typical problem with implementation of FGLS estimation is that in the linear specification (5.2), there is no guarantee that $\tilde\sigma_i^2 > 0$ for all $i$. If $\tilde\sigma_i^2 < 0$ for some $i$, then the FGLS estimator is not well defined. Furthermore, if $\tilde\sigma_i^2 \approx 0$ for some $i$ then the FGLS estimator will force the regression equation through the point $(y_i, x_i)$, which is typically undesirable. This suggests that there is a need to bound the estimated variances away from zero. A trimming rule might make sense:
$$ \bar\sigma_i^2 = \max[\tilde\sigma_i^2,\ \underline{\sigma}^2] $$
for some $\underline{\sigma}^2 > 0$.

It is possible to show that if the skedastic regression is correctly specified, then FGLS is asymptotically equivalent to GLS. As the proof is tricky, we just state the result without proof.
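Before turning to the formal result, here is a minimal sketch of the FGLS procedure just described (my own code and simulated data; the choice $z_i = (1, x_i^2)$ and the trimming floor are illustrative assumptions, not prescriptions from the text).

    # Sketch: OLS, skedastic regression (5.4), fitted variances (5.6), then FGLS.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 2000
    x = rng.standard_normal(n)
    e = rng.standard_normal(n) * np.sqrt(0.5 + 1.5*x**2)   # heteroskedastic errors
    y = 1.0 + 2.0*x + e
    X = np.column_stack([np.ones(n), x])

    # Step 1: OLS and squared residuals
    b_ols = np.linalg.solve(X.T @ X, X.T @ y)
    ehat2 = (y - X @ b_ols)**2

    # Step 2: skedastic regression and fitted variances, trimmed away from zero
    Z = np.column_stack([np.ones(n), x**2])
    alpha = np.linalg.solve(Z.T @ Z, Z.T @ ehat2)
    sig2 = np.maximum(Z @ alpha, 1e-3)

    # Step 3: FGLS = weighted least squares with weights 1/sig2
    W = X / sig2[:, None]
    b_fgls = np.linalg.solve(W.T @ X, W.T @ y)
    print(b_ols, b_fgls)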
Theorem 5.1.1 If the skedastic regression is correctly specified,
$$ \sqrt{n}\left(\tilde\beta_{GLS} - \tilde\beta_{FGLS}\right) \to_p 0, $$
and thus
$$ \sqrt{n}\left(\tilde\beta_{FGLS} - \beta\right) \to_d \mathrm{N}(0, V), $$
where
$$ V = \left(E\left(\sigma_i^{-2}x_i x_i'\right)\right)^{-1}. $$
Examining the asymptotic distribution of Theorem 5.1.1, the natural estimator of the asymptotic variance of $\tilde\beta$ is
$$ \tilde V^0 = \left(\frac{1}{n}\sum_{i=1}^n\tilde\sigma_i^{-2}x_i x_i'\right)^{-1} = \left(\frac{1}{n}X'\tilde D^{-1}X\right)^{-1}, $$
which is consistent for $V$ as $n \to \infty$. This estimator $\tilde V^0$ is appropriate when the skedastic regression (5.2) is correctly specified.

It may be the case that $\alpha'z_i$ is only an approximation to the true conditional variance $\sigma_i^2 = E(e_i^2 \mid x_i)$. In this case we interpret $\alpha'z_i$ as a linear projection of $e_i^2$ on $z_i$. $\tilde\beta$ should perhaps be called a quasi-FGLS estimator of $\beta$. Its asymptotic variance is not that given in Theorem 5.1.1. Instead,
$$ V = \left(E\left(\left(\alpha'z_i\right)^{-1}x_i x_i'\right)\right)^{-1}\left(E\left(\left(\alpha'z_i\right)^{-2}\sigma_i^2 x_i x_i'\right)\right)\left(E\left(\left(\alpha'z_i\right)^{-1}x_i x_i'\right)\right)^{-1}. $$
$V$ takes a sandwich form similar to the covariance matrix of the OLS estimator. Unless $\sigma_i^2 = \alpha'z_i$, $\tilde V^0$ is inconsistent for $V$.
An appropriate solution is to use a White-type estimator in place of $\tilde V^0$. This may be written as
$$ \tilde V = \left(\frac{1}{n}\sum_{i=1}^n\tilde\sigma_i^{-2}x_i x_i'\right)^{-1}\left(\frac{1}{n}\sum_{i=1}^n\tilde\sigma_i^{-4}\hat e_i^2 x_i x_i'\right)\left(\frac{1}{n}\sum_{i=1}^n\tilde\sigma_i^{-2}x_i x_i'\right)^{-1} = n\left(X'\tilde D^{-1}X\right)^{-1}\left(X'\tilde D^{-1}\hat D\tilde D^{-1}X\right)\left(X'\tilde D^{-1}X\right)^{-1} $$
where $\hat D = \mathrm{diag}\{\hat e_1^2,\ldots,\hat e_n^2\}$. This estimator is robust to misspecification of the conditional variance, and was proposed by Cragg (1992).
In the linear regression model, FGLS is asymptotically superior to OLS. Why then do we not exclusively estimate regression models by FGLS? This is a good question. There are three reasons.

First, FGLS estimation depends on specification and estimation of the skedastic regression. Since the form of the skedastic regression is unknown, and it may be estimated with considerable error, the estimated conditional variances may contain more noise than information about the true conditional variances. In this case, FGLS will do worse than OLS in practice.

Second, individual estimated conditional variances may be negative, and this requires trimming to solve. This introduces an element of arbitrariness which is unsettling to empirical researchers.

Third, OLS is a robust estimator of the parameter vector. It is consistent not only in the regression model, but also under the assumptions of linear projection. The GLS and FGLS estimators, on the other hand, require the assumption of a correct conditional mean. If the equation of interest is a linear projection, and not a conditional mean, then the OLS and FGLS estimators will converge in probability to different limits, as they will be estimating two different projections. And the FGLS probability limit will depend on the particular function selected for the skedastic regression. The point is that the efficiency gains from FGLS are built on the stronger assumption of a correct conditional mean, and the cost is a loss of robustness to misspecification.
5.2 Testing for Heteroskedasticity

The hypothesis of homoskedasticity is that $E\left(e_i^2 \mid x_i\right) = \sigma^2$, or equivalently that
$$ H_0 : \alpha_1 = 0 $$
in the regression (5.2). We may therefore test this hypothesis by the estimation (5.4) and constructing a Wald statistic. At this point it is typical to impose the stronger assumption that $e_i$ is independent of $x_i$, in which case $\xi_i$ is independent of $x_i$ and the asymptotic variance (5.3) for $\tilde\alpha$ simplifies to
$$ V_\alpha = \left(E\left(z_i z_i'\right)\right)^{-1} E\left(\xi_i^2\right). \qquad (5.7) $$
Hence the standard test of $H_0$ is a classic F (or Wald) test for exclusion of all regressors from the skedastic regression (5.4). The asymptotic distribution (5.5) and the asymptotic variance (5.7) under independence show that this test has an asymptotic chi-square distribution.

Theorem 5.2.1 Under $H_0$ and $e_i$ independent of $x_i$, the Wald test of $H_0$ is asymptotically $\chi^2_q$.
Most tests for heteroskedasticity take this basic form. The main differences between popular tests are which transformations of $x_i$ enter $z_i$. Motivated by the form of the asymptotic variance of the OLS estimator $\hat\beta$, White (1980) proposed that the test for heteroskedasticity be based on setting $z_i$ to equal all non-redundant elements of $x_i$, its squares, and all cross-products. Breusch-Pagan (1979) proposed what might appear to be a distinct test, but the only difference is that they allowed for general choice of $z_i$, and replaced $E\left(\xi_i^2\right)$ with $2\sigma^4$ which holds when $e_i$ is N(0, $\sigma^2$). If this simplification is replaced by the standard formula (under independence of the error), the two tests coincide.

It is important not to misuse tests for heteroskedasticity. They should not be used to determine whether to estimate a regression equation by OLS or FGLS, nor to determine whether classic or White standard errors should be reported. Hypothesis tests are not designed for these purposes. Rather, tests for heteroskedasticity should be used to answer the scientific question of whether or not the conditional variance is a function of the regressors. If this question is not of economic interest, then there is no value in conducting a test for heteroskedasticity.
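As an illustration, the following is a minimal sketch (my own construction) of a White-style test under the independence simplification above. Rather than the Wald form, it uses the asymptotically equivalent $nR^2$ statistic from the skedastic regression, a common shortcut; the construction of $z_i$ follows White's suggestion of levels, squares, and cross-products.

    # Sketch: n*R^2 heteroskedasticity test from the skedastic regression of ehat^2 on z.
    import numpy as np
    from scipy import stats

    def hetero_test(y, X):
        n, k = X.shape                      # X is assumed to contain a constant in column 0
        b = np.linalg.solve(X.T @ X, X.T @ y)
        e2 = (y - X @ b)**2
        cols = [X[:, j] for j in range(1, k)]
        cols += [X[:, j] * X[:, l] for j in range(1, k) for l in range(j, k)]
        Z = np.column_stack([np.ones(n)] + cols)
        a = np.linalg.lstsq(Z, e2, rcond=None)[0]
        u = e2 - Z @ a
        R2 = 1.0 - u.var() / e2.var()
        q = Z.shape[1] - 1
        W = n * R2                          # approximately chi-square(q) under H0
        return W, 1 - stats.chi2.cdf(W, q)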
5.3 Forecast Intervals

In the linear regression model the conditional mean of $y_i$ given $x_i = x$ is
$$ m(x) = E(y_i \mid x_i = x) = x'\beta. $$
In some cases, we want to estimate $m(x)$ at a particular point $x$. Notice that this is a (linear) function of $\beta$. Letting $h(\beta) = x'\beta$ and $\theta = h(\beta)$, we see that $\hat m(x) = \hat\theta = x'\hat\beta$ and $H_\beta = x$, so $s(\hat\theta) = \sqrt{n^{-1}x'\hat Vx}$. Thus an asymptotic 95% confidence interval for $m(x)$ is
$$ \left[x'\hat\beta \pm 2\sqrt{n^{-1}x'\hat Vx}\right]. $$
It is interesting to observe that if this is viewed as a function of $x$, the width of the confidence set is dependent on $x$.

For a given value of $x_i = x$, we may want to forecast (guess) $y_i$ out-of-sample. A reasonable rule is the conditional mean $m(x)$ as it is the mean-square-minimizing forecast. A point forecast is the estimated conditional mean $\hat m(x) = x'\hat\beta$. We would also like a measure of uncertainty for the forecast.
The forecast error is $\hat e_i = y_i - \hat m(x) = e_i - x'\left(\hat\beta - \beta\right)$. As the out-of-sample error $e_i$ is independent of the in-sample estimate $\hat\beta$, this has variance
$$ E\hat e_i^2 = E\left(e_i^2 \mid x_i = x\right) + x'E\left(\hat\beta - \beta\right)\left(\hat\beta - \beta\right)'x = \sigma^2(x) + n^{-1}x'Vx. $$
Assuming $E\left(e_i^2 \mid x_i\right) = \sigma^2$, the natural estimate of this variance is $\hat\sigma^2 + n^{-1}x'\hat Vx$, so a standard error for the forecast is $\hat s(x) = \sqrt{\hat\sigma^2 + n^{-1}x'\hat Vx}$. Notice that this is different from the standard error for the conditional mean. If we have an estimate of the conditional variance function, e.g. $\tilde\sigma^2(x) = \tilde\alpha'z$ from (5.6), then the forecast standard error is $\hat s(x) = \sqrt{\tilde\sigma^2(x) + n^{-1}x'\hat Vx}$.

It would appear natural to conclude that an asymptotic 95% forecast interval for $y_i$ is
$$ \left[x'\hat\beta \pm 2\hat s(x)\right], $$
but this turns out to be incorrect. In general, the validity of an asymptotic confidence interval is based on the asymptotic normality of the studentized ratio. In the present case, this would require the asymptotic normality of the ratio
$$ \frac{e_i - x'\left(\hat\beta - \beta\right)}{\hat s(x)}. $$
But no such asymptotic approximation can be made. The only special exception is the case where $e_i$ has the exact distribution N(0, $\sigma^2$), which is generally invalid.

To get an accurate forecast interval, we need to estimate the conditional distribution of $e_i$ given $x_i = x$, which is a much more difficult task. Given the difficulty, many applied forecasters focus on the simple approximate interval $\left[x'\hat\beta \pm 2\hat s(x)\right]$.
5.4 NonLinear Least Squares

In some cases we might use a parametric regression function $m(x,\theta) = E(y_i \mid x_i = x)$ which is a non-linear function of the parameters $\theta$. We describe this setting as non-linear regression. Examples of nonlinear regression functions include
$$ m(x,\theta) = \theta_1 + \theta_2\frac{x}{1 + \theta_3 x} $$
$$ m(x,\theta) = \theta_1 + \theta_2 x^{\theta_3} $$
$$ m(x,\theta) = \theta_1 + \theta_2\exp(\theta_3 x) $$
$$ m(x,\theta) = G(x'\theta), \quad G \text{ known} $$
$$ m(x,\theta) = \theta_1'x_1 + \left(\theta_2'x_1\right)\Phi\left(\frac{x_2 - \theta_3}{\theta_4}\right) $$
$$ m(x,\theta) = \theta_1 + \theta_2 x + \theta_3(x - \theta_4)1(x > \theta_4) $$
$$ m(x,\theta) = \left(\theta_1'x_1\right)1(x_2 < \theta_3) + \left(\theta_2'x_1\right)1(x_2 > \theta_3) $$
In the first five examples, $m(x,\theta)$ is (generically) differentiable in the parameters $\theta$. In the final two examples, $m$ is not differentiable with respect to $\theta_4$ and $\theta_3$ which alters some of the analysis. When it exists, let
$$ m_\theta(x,\theta) = \frac{\partial}{\partial\theta} m(x,\theta). $$
Nonlinear regression is frequently adopted because the functional form $m(x,\theta)$ is suggested by an economic model. In other cases, it is adopted as a flexible approximation to an unknown regression function.
The least squares estimator $\hat\theta$ minimizes the sum-of-squared-errors
$$ S_n(\theta) = \sum_{i=1}^n\left(y_i - m(x_i,\theta)\right)^2. $$
When the regression function is nonlinear, we call this the nonlinear least squares (NLLS) estimator. The NLLS residuals are $\hat e_i = y_i - m(x_i,\hat\theta)$.

One motivation for the choice of NLLS as the estimation method is that the parameter $\theta$ is the solution to the population problem $\min_\theta E\left(y_i - m(x_i,\theta)\right)^2$.

Since the sum-of-squared-errors function $S_n(\theta)$ is not quadratic, $\hat\theta$ must be found by numerical methods. See Appendix E. When $m(x,\theta)$ is differentiable, then the FOC for minimization are
$$ 0 = \sum_{i=1}^n m_\theta\left(x_i,\hat\theta\right)\hat e_i. \qquad (5.8) $$

Theorem 5.4.1 If the model is identified and $m(x,\theta)$ is differentiable with respect to $\theta$,
$$ \sqrt{n}\left(\hat\theta - \theta_0\right) \to_d \mathrm{N}(0, V) $$
$$ V = \left(E\left(m_{\theta i}m_{\theta i}'\right)\right)^{-1}\left(E\left(m_{\theta i}m_{\theta i}'e_i^2\right)\right)\left(E\left(m_{\theta i}m_{\theta i}'\right)\right)^{-1} $$
where $m_{\theta i} = m_\theta(x_i,\theta_0)$.

Based on Theorem 5.4.1, an estimate of the asymptotic variance $V$ is
$$ \hat V = \left(\frac{1}{n}\sum_{i=1}^n\hat m_{\theta i}\hat m_{\theta i}'\right)^{-1}\left(\frac{1}{n}\sum_{i=1}^n\hat m_{\theta i}\hat m_{\theta i}'\hat e_i^2\right)\left(\frac{1}{n}\sum_{i=1}^n\hat m_{\theta i}\hat m_{\theta i}'\right)^{-1} $$
where $\hat m_{\theta i} = m_\theta(x_i,\hat\theta)$ and $\hat e_i = y_i - m(x_i,\hat\theta)$.
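A minimal sketch of the numerical minimization and the sandwich variance estimate follows (my own illustration with simulated data; the exponential specification, starting values, and use of scipy's general-purpose least_squares routine are all my assumptions, not prescriptions from the text).

    # Sketch: NLLS for m(x, theta) = theta1 + theta2*exp(theta3*x) plus the variance estimate.
    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(0)
    n = 500
    x = rng.uniform(0, 2, n)
    y = 1.0 + 0.5*np.exp(0.8*x) + 0.3*rng.standard_normal(n)    # simulated data

    def m(theta, x):
        return theta[0] + theta[1]*np.exp(theta[2]*x)

    fit = least_squares(lambda th: y - m(th, x), x0=np.array([0.5, 0.5, 0.5]))
    theta_hat = fit.x
    e_hat = y - m(theta_hat, x)

    # numerical gradient m_theta(x_i, theta_hat), one column per parameter
    eps = 1e-6
    M = np.column_stack([(m(theta_hat + eps*np.eye(3)[j], x) - m(theta_hat, x)) / eps
                         for j in range(3)])
    A = M.T @ M / n
    B = (M.T * e_hat**2) @ M / n
    V = np.linalg.inv(A) @ B @ np.linalg.inv(A)      # sandwich estimate of V
    se = np.sqrt(np.diag(V) / n)
    print(theta_hat, se)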
Identification is often tricky in nonlinear regression models. Suppose that
$$ m(x_i,\theta) = \beta_1'z_i + \beta_2'x_i(\gamma) $$
where $x_i(\gamma)$ is a function of $x_i$ and the unknown parameter $\gamma$. Examples include $x_i(\gamma) = x_i^\gamma$, $x_i(\gamma) = \exp(\gamma x_i)$, and $x_i(\gamma) = x_i 1(g(x_i) > \gamma)$. The model is linear when $\beta_2 = 0$, and this is often a useful hypothesis (sub-model) to consider. Thus we want to test
$$ H_0 : \beta_2 = 0. $$
However, under $H_0$, the model is
$$ y_i = \beta_1'z_i + e_i $$
and both $\beta_2$ and $\gamma$ have dropped out. This means that under $H_0$, $\gamma$ is not identified. This renders the distribution theory presented in the previous section invalid. Thus when the truth is that $\beta_2 = 0$, the parameter estimates are not asymptotically normally distributed. Furthermore, tests of $H_0$ do not have asymptotic normal or chi-square distributions.

The asymptotic theory of such tests has been worked out by Andrews and Ploberger (1994) and B. Hansen (1996). In particular, Hansen shows how to use simulation (similar to the bootstrap) to construct the asymptotic critical values (or p-values) in a given application.
5.5 Least Absolute Deviations

We stated that a conventional goal in econometrics is estimation of the impact of variation in $x_i$ on the central tendency of $y_i$. We have discussed projections and conditional means, but these are not the only measures of central tendency. An alternative good measure is the conditional median.

To recall the definition and properties of the median, let $y$ be a continuous random variable. The median $\theta_0 = \mathrm{med}(y)$ is the value such that $P(y \le \theta_0) = P(y \ge \theta_0) = .5$. Two useful facts about the median are that
$$ \theta_0 = \operatorname*{argmin}_\theta E\left|y - \theta\right| \qquad (5.9) $$
and
$$ E\,\mathrm{sgn}(y - \theta_0) = 0 $$
where
$$ \mathrm{sgn}(u) = \begin{cases} 1 & \text{if } u \ge 0 \\ -1 & \text{if } u < 0 \end{cases} $$
is the sign function.

These facts and definitions motivate three estimators of $\theta$. The first definition is the 50'th empirical quantile. The second is the value which minimizes $\frac{1}{n}\sum_{i=1}^n|y_i - \theta|$, and the third definition is the solution to the moment equation $\frac{1}{n}\sum_{i=1}^n\mathrm{sgn}(y_i - \theta)$. These distinctions are illusory, however, as these estimators are indeed identical.

Now let's consider the conditional median of $y$ given a random vector $x$. Let $m(x) = \mathrm{med}(y \mid x)$ denote the conditional median of $y$ given $x$. The linear median regression model takes the form
$$ y_i = x_i'\beta + e_i, \qquad \mathrm{med}(e_i \mid x_i) = 0. $$
In this model, the linear function $\mathrm{med}(y_i \mid x_i = x) = x'\beta$ is the conditional median function, and the substantive assumption is that the median function is linear in $x$.

Conditional analogs of the facts about the median are
$$ P(y_i \le x'\beta_0 \mid x_i = x) = P(y_i > x'\beta_0 \mid x_i = x) = .5 $$
$$ E\left(\mathrm{sgn}(e_i) \mid x_i\right) = 0 $$
$$ E\left(x_i\,\mathrm{sgn}(e_i)\right) = 0 $$
$$ \beta_0 = \operatorname*{argmin}_\beta E\left|y_i - x_i'\beta\right| $$
These facts motivate the following estimator. Let
$$ LAD_n(\beta) = \frac{1}{n}\sum_{i=1}^n\left|y_i - x_i'\beta\right| $$
be the average of absolute deviations. The least absolute deviations (LAD) estimator of $\beta$ minimizes this function:
$$ \hat\beta = \operatorname*{argmin}_\beta LAD_n(\beta). $$
Equivalently, it is a solution to the moment condition
$$ \frac{1}{n}\sum_{i=1}^n x_i\,\mathrm{sgn}\left(y_i - x_i'\hat\beta\right) = 0. \qquad (5.10) $$
The LAD estimator has the asymptotic distribution
Theorem 5.5.1 $\sqrt{n}\left(\hat\beta - \beta_0\right) \to_d \mathrm{N}(0, V)$, where
$$ V = \frac{1}{4}\left(E\left(x_i x_i' f(0 \mid x_i)\right)\right)^{-1}\left(E\,x_i x_i'\right)\left(E\left(x_i x_i' f(0 \mid x_i)\right)\right)^{-1} $$
and $f(e \mid x)$ is the conditional density of $e_i$ given $x_i = x$.

The variance of the asymptotic distribution inversely depends on $f(0 \mid x)$, the conditional density of the error at its median. When $f(0 \mid x)$ is large, then there are many innovations near to the median, and this improves estimation of the median. In the special case where the error is independent of $x_i$, then $f(0 \mid x) = f(0)$ and the asymptotic variance simplifies to
$$ V = \frac{\left(E\,x_i x_i'\right)^{-1}}{4 f(0)^2}. \qquad (5.11) $$
This simplification is similar to the simplification of the asymptotic covariance of the OLS estimator under homoskedasticity.

Computation of standard errors for LAD estimates typically is based on equation (5.11). The main difficulty is the estimation of $f(0)$, the height of the error density at its median. This can be done with kernel estimation techniques. See Chapter 14. While a complete proof of Theorem 5.5.1 is advanced, we provide a sketch here for completeness.
5.6 Quantile Regression

The method of quantile regression has become quite popular in recent econometric practice. For $\tau \in [0,1]$ the $\tau$'th quantile $Q_\tau$ of a random variable with distribution function $F(u)$ is defined as
$$ Q_\tau = \inf\{u : F(u) \ge \tau\}. $$
When $F(u)$ is continuous and strictly monotonic, then $F(Q_\tau) = \tau$, so you can think of the quantile as the inverse of the distribution function. The quantile $Q_\tau$ is the value such that $\tau$ (percent) of the mass of the distribution is less than $Q_\tau$. The median is the special case $\tau = .5$.

The following alternative representation is useful. If the random variable $U$ has $\tau$'th quantile $Q_\tau$, then
$$ Q_\tau = \operatorname*{argmin}_\theta E\,\rho_\tau(U - \theta), \qquad (5.12) $$
where $\rho_\tau(q)$ is the piecewise linear function
$$ \rho_\tau(q) = \begin{cases} -q(1-\tau) & q < 0 \\ q\tau & q \ge 0 \end{cases} = q\left(\tau - 1(q < 0)\right). \qquad (5.13) $$
This generalizes representation (5.9) for the median to all quantiles.

For the random variables $(y_i, x_i)$ with conditional distribution function $F(y \mid x)$ the conditional quantile function $Q_\tau(x)$ is
$$ Q_\tau(x) = \inf\{y : F(y \mid x) \ge \tau\}. $$
Again, when $F(y \mid x)$ is continuous and strictly monotonic in $y$, then $F(Q_\tau(x) \mid x) = \tau$. For fixed $\tau$, the quantile regression function $Q_\tau(x)$ describes how the $\tau$'th quantile of the conditional distribution varies with the regressors.

As functions of $x$, the quantile regression functions can take any shape. However for computational convenience it is typical to assume that they are (approximately) linear in $x$ (after suitable transformations). This linear specification assumes that $Q_\tau(x) = \beta_\tau'x$ where the coefficients $\beta_\tau$ vary across the quantiles $\tau$. We then have the linear quantile regression model
$$ y_i = x_i'\beta_\tau + e_i $$
where $e_i$ is the error defined to be the difference between $y_i$ and its $\tau$'th conditional quantile $x_i'\beta_\tau$. By construction, the $\tau$'th conditional quantile of $e_i$ is zero; otherwise its properties are unspecified without further restrictions.

Given the representation (5.12), the quantile regression estimator $\hat\beta_\tau$ for $\beta_\tau$ solves the minimization problem
$$ \hat\beta_\tau = \operatorname*{argmin}_\beta S_n^\tau(\beta) $$
where
$$ S_n^\tau(\beta) = \frac{1}{n}\sum_{i=1}^n\rho_\tau\left(y_i - x_i'\beta\right) $$
and $\rho_\tau(q)$ is defined in (5.13).

Since the quantile regression criterion function $S_n^\tau(\beta)$ does not have an algebraic solution, numerical methods are necessary for its minimization. Furthermore, since it has discontinuous derivatives, conventional Newton-type optimization methods are inappropriate. Fortunately, fast linear programming methods have been developed for this problem, and are widely available.
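The following is a minimal sketch (my own example, with simulated data) of the criterion $S_n^\tau(\beta)$ and of fitting a quantile regression with statsmodels' QuantReg; treating that routine as the off-the-shelf implementation of the linear-programming style approach mentioned above is an assumption of this sketch, not a statement from the text.

    # Sketch: the check-function criterion and an off-the-shelf quantile regression fit.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n, tau = 1000, 0.75
    x = rng.standard_normal(n)
    y = 1.0 + 2.0*x + rng.standard_normal(n) * (1 + 0.5*np.abs(x))   # heteroskedastic errors
    X = sm.add_constant(x)

    def check_loss(beta, tau):
        q = y - X @ beta
        return np.mean(q * (tau - (q < 0)))     # rho_tau(q) averaged over the sample

    res = sm.QuantReg(y, X).fit(q=tau)
    print(res.params, check_loss(res.params, tau))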
An asymptotic distribution theory for the quantile regression estimator can be derived using similar arguments as those for the LAD estimator in Theorem 5.5.1.

Theorem 5.6.1 $\sqrt{n}\left(\hat\beta_\tau - \beta_\tau\right) \to_d \mathrm{N}(0, V_\tau)$, where
$$ V_\tau = \tau(1-\tau)\left(E\left(x_i x_i' f(0 \mid x_i)\right)\right)^{-1}\left(E\,x_i x_i'\right)\left(E\left(x_i x_i' f(0 \mid x_i)\right)\right)^{-1} $$
and $f(e \mid x)$ is the conditional density of $e_i$ given $x_i = x$.

In general, the asymptotic variance depends on the conditional density of the quantile regression error. When the error $e_i$ is independent of $x_i$, then $f(0 \mid x_i) = f(0)$, the unconditional density of $e_i$ at 0, and we have the simplification
$$ V_\tau = \frac{\tau(1-\tau)}{f(0)^2}\left(E\left(x_i x_i'\right)\right)^{-1}. $$
A recent monograph on the details of quantile regression is Koenker (2005).
5.7 Testing for Omitted NonLinearity

If the goal is to estimate the conditional expectation $E(y_i \mid x_i)$, it is useful to have a general test of the adequacy of the specification.

One simple test for neglected nonlinearity is to add nonlinear functions of the regressors to the regression, and test their significance using a Wald test. Thus, if the model $y_i = x_i'\hat\beta + \hat e_i$ has been fit by OLS, let $z_i = h(x_i)$ denote functions of $x_i$ which are not linear functions of $x_i$ (perhaps squares of non-binary regressors) and then fit $y_i = x_i'\tilde\beta + z_i'\tilde\gamma + \tilde e_i$ by OLS, and form a Wald statistic for $\gamma = 0$.

Another popular approach is the RESET test proposed by Ramsey (1969). The null model is
$$ y_i = x_i'\beta + e_i $$
which is estimated by OLS, yielding predicted values $\hat y_i = x_i'\hat\beta$. Now let
$$ z_i = \begin{pmatrix} \hat y_i^2 \\ \vdots \\ \hat y_i^m \end{pmatrix} $$
be an $(m-1)$-vector of powers of $\hat y_i$. Then run the auxiliary regression
$$ y_i = x_i'\tilde\beta + z_i'\tilde\gamma + \tilde e_i \qquad (5.14) $$
by OLS, and form the Wald statistic $W_n$ for $\gamma = 0$. It is easy (although somewhat tedious) to show that under the null hypothesis, $W_n \to_d \chi^2_{m-1}$. Thus the null is rejected at the $\alpha\%$ level if $W_n$ exceeds the upper $\alpha\%$ tail critical value of the $\chi^2_{m-1}$ distribution.

To implement the test, $m$ must be selected in advance. Typically, small values such as $m = 2$, 3, or 4 seem to work best.
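The auxiliary regression is easy to implement. Here is a minimal sketch (my own construction) of the RESET test with $m = 3$; for simplicity it uses the homoskedastic form of the Wald statistic for the two added coefficients, which is one of several valid choices rather than the only one.

    # Sketch: RESET test with powers yhat^2, yhat^3 and a homoskedastic Wald statistic.
    import numpy as np
    from scipy import stats

    def reset_test(y, X, m=3):
        n = len(y)
        bhat = np.linalg.solve(X.T @ X, X.T @ y)
        yhat = X @ bhat
        Z = np.column_stack([yhat**p for p in range(2, m + 1)])
        XZ = np.column_stack([X, Z])
        coef = np.linalg.solve(XZ.T @ XZ, XZ.T @ y)
        e = y - XZ @ coef
        s2 = e @ e / (n - XZ.shape[1])
        Vg = s2 * np.linalg.inv(XZ.T @ XZ)[X.shape[1]:, X.shape[1]:]   # block for gamma
        g = coef[X.shape[1]:]
        W = g @ np.linalg.solve(Vg, g)
        return W, 1 - stats.chi2.cdf(W, m - 1)       # statistic and asymptotic p-value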
The RESET test appears to work well as a test of functional form against a wide range of smooth alternatives. It is particularly powerful at detecting single-index models of the form
$$ y_i = G(x_i'\beta) + e_i $$
where $G(\cdot)$ is a smooth "link" function. To see why this is the case, note that (5.14) may be written as
$$ y_i = x_i'\tilde\beta + \left(x_i'\hat\beta\right)^2\tilde\gamma_1 + \left(x_i'\hat\beta\right)^3\tilde\gamma_2 + \cdots + \left(x_i'\hat\beta\right)^m\tilde\gamma_{m-1} + \tilde e_i, $$
which has essentially approximated $G(\cdot)$ by an $m$'th order polynomial.
5.8 Omitted Variables

Let the regressors be partitioned as
$$ x_i = \begin{pmatrix} x_{1i} \\ x_{2i} \end{pmatrix}. $$
Suppose we are interested in the coefficient on $x_{1i}$ alone in the regression of $y_i$ on the full set $x_i$. We can write the model as
$$ y_i = x_{1i}'\beta_1 + x_{2i}'\beta_2 + e_i, \qquad E(x_i e_i) = 0, \qquad (5.15) $$
where the parameter of interest is $\beta_1$.

Now suppose that instead of estimating equation (5.15) by least-squares, we regress $y_i$ on $x_{1i}$ only. This is estimation of the equation
$$ y_i = x_{1i}'\gamma_1 + u_i, \qquad E(x_{1i}u_i) = 0. \qquad (5.16) $$
Notice that we have written the coefficient on $x_{1i}$ as $\gamma_1$ rather than $\beta_1$ and the error as $u_i$ rather than $e_i$. This is because the model being estimated is different than (5.15). Goldberger (1991) calls (5.15) the long regression and (5.16) the short regression to emphasize the distinction.

Typically, $\beta_1 \ne \gamma_1$, except in special cases. To see this, we calculate
$$ \gamma_1 = \left(E\left(x_{1i}x_{1i}'\right)\right)^{-1} E\left(x_{1i}y_i\right) = \left(E\left(x_{1i}x_{1i}'\right)\right)^{-1} E\left(x_{1i}\left(x_{1i}'\beta_1 + x_{2i}'\beta_2 + e_i\right)\right) = \beta_1 + \left(E\left(x_{1i}x_{1i}'\right)\right)^{-1} E\left(x_{1i}x_{2i}'\right)\beta_2 = \beta_1 + \Gamma\beta_2 $$
where
$$ \Gamma = \left(E\left(x_{1i}x_{1i}'\right)\right)^{-1} E\left(x_{1i}x_{2i}'\right) $$
is the coefficient from a regression of $x_{2i}$ on $x_{1i}$.

Observe that $\gamma_1 \ne \beta_1$ unless $\Gamma = 0$ or $\beta_2 = 0$. Thus the short and long regressions have the same coefficient on $x_{1i}$ only under one of two conditions. First, the regression of $x_{2i}$ on $x_{1i}$ yields a set of zero coefficients (they are uncorrelated), or second, the coefficient on $x_{2i}$ in (5.15) is zero. In general, least-squares estimation of (5.16) is an estimate of $\gamma_1 = \beta_1 + \Gamma\beta_2$ rather than $\beta_1$. The difference $\Gamma\beta_2$ is known as omitted variable bias. It is the consequence of omission of a relevant correlated variable.

To avoid omitted variables bias the standard advice is to include potentially relevant variables in the estimated model. By construction, the general model will be free of the omitted variables problem. Typically there are limits, as many desired variables are not available in a given dataset. In this case, the possibility of omitted variables bias should be acknowledged and discussed in the course of an empirical investigation.
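A short simulation makes the formula $\gamma_1 = \beta_1 + \Gamma\beta_2$ concrete. The design below is my own illustration: with $\Gamma = 0.5$ built in, the short regression converges to $\beta_1 + 0.5\,\beta_2$, not to $\beta_1$.

    # Sketch: omitted variable bias in the short regression.
    import numpy as np

    rng = np.random.default_rng(0)
    n, beta1, beta2 = 100_000, 1.0, 2.0
    x1 = rng.standard_normal(n)
    x2 = 0.5 * x1 + rng.standard_normal(n)     # Gamma = 0.5 by construction
    y = beta1*x1 + beta2*x2 + rng.standard_normal(n)

    short = (x1 @ y) / (x1 @ x1)               # OLS of y on x1 only
    print(short)                               # close to beta1 + 0.5*beta2 = 2.0, not 1.0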
5.9 Irrelevant Variables

In the model
$$ y_i = x_{1i}'\beta_1 + x_{2i}'\beta_2 + e_i, \qquad E(x_i e_i) = 0, $$
$x_{2i}$ is "irrelevant" if $\beta_1$ is the parameter of interest and $\beta_2 = 0$. One estimator of $\beta_1$ is to regress $y_i$ on $x_{1i}$ alone, $\tilde\beta_1 = (X_1'X_1)^{-1}(X_1'y)$. Another is to regress $y_i$ on $x_{1i}$ and $x_{2i}$ jointly, yielding $(\hat\beta_1, \hat\beta_2)$. Under which conditions is $\hat\beta_1$ or $\tilde\beta_1$ superior?

It is easy to see that both estimators are consistent for $\beta_1$. However, they will (typically) have different asymptotic variances.

The comparison between the two estimators is straightforward when the error is conditionally homoskedastic $E\left(e_i^2 \mid x_i\right) = \sigma^2$. In this case
$$ \lim_{n\to\infty} n\,\mathrm{var}(\tilde\beta_1) = \left(E\,x_{1i}x_{1i}'\right)^{-1}\sigma^2 = Q_{11}^{-1}\sigma^2, $$
say, and
$$ \lim_{n\to\infty} n\,\mathrm{var}(\hat\beta_1) = \left(E\,x_{1i}x_{1i}' - E\,x_{1i}x_{2i}'\left(E\,x_{2i}x_{2i}'\right)^{-1} E\,x_{2i}x_{1i}'\right)^{-1}\sigma^2 = \left(Q_{11} - Q_{12}Q_{22}^{-1}Q_{21}\right)^{-1}\sigma^2, $$
say. If $Q_{12} = 0$ (so the variables are orthogonal) then these two variance matrices are equal, and the two estimators have equal asymptotic efficiency. Otherwise, since $Q_{12}Q_{22}^{-1}Q_{21} > 0$, then $Q_{11} > Q_{11} - Q_{12}Q_{22}^{-1}Q_{21}$, and consequently
$$ Q_{11}^{-1}\sigma^2 < \left(Q_{11} - Q_{12}Q_{22}^{-1}Q_{21}\right)^{-1}\sigma^2. $$
This means that $\tilde\beta_1$ has a lower asymptotic variance matrix than $\hat\beta_1$. We conclude that the inclusion of irrelevant variables reduces estimation efficiency if these variables are correlated with the relevant variables.

For example, take the model $y_i = \beta_0 + \beta_1 x_i + e_i$ and suppose that $\beta_0 = 0$. Let $\hat\beta_1$ be the estimate of $\beta_1$ from the unconstrained model, and $\tilde\beta_1$ be the estimate under the constraint $\beta_0 = 0$ (the least-squares estimate with the intercept omitted). Let $E\,x_i = \mu$, and $E(x_i - \mu)^2 = \sigma_x^2$. Then under (4.8),
$$ \lim_{n\to\infty} n\,\mathrm{var}(\tilde\beta_1) = \frac{\sigma^2}{\sigma_x^2 + \mu^2} $$
while
$$ \lim_{n\to\infty} n\,\mathrm{var}(\hat\beta_1) = \frac{\sigma^2}{\sigma_x^2}. $$
When $\mu \ne 0$, we see that $\tilde\beta_1$ has a lower asymptotic variance.

However, this result can be reversed when the error is conditionally heteroskedastic. In the absence of the homoskedasticity assumption, there is no clear ranking of the efficiency of the restricted estimator $\tilde\beta_1$ versus the unrestricted estimator.
5.10 Model Selection

In earlier sections we discussed the costs and benefits of inclusion/exclusion of variables. How does a researcher go about selecting an econometric specification, when economic theory does not provide complete guidance? This is the question of model selection. It is important that the model selection question be well-posed. For example, the question: "What is the right model for $y$?" is not well-posed, because it does not make clear the conditioning set. In contrast, the question, "Which subset of $(x_1,\ldots,x_K)$ enters the regression function $E(y_i \mid x_{1i} = x_1,\ldots,x_{Ki} = x_K)$?" is well posed.

In many cases the problem of model selection can be reduced to the comparison of two nested models, as the larger problem can be written as a sequence of such comparisons. We thus consider the question of the inclusion of $X_2$ in the linear regression
$$ y = X_1\beta_1 + X_2\beta_2 + e, $$
where $X_1$ is $n \times k_1$ and $X_2$ is $n \times k_2$. This is equivalent to the comparison of the two models
$$ \mathcal{M}_1 : \quad y = X_1\beta_1 + e, \qquad E(e \mid X_1, X_2) = 0 $$
$$ \mathcal{M}_2 : \quad y = X_1\beta_1 + X_2\beta_2 + e, \qquad E(e \mid X_1, X_2) = 0. $$
Note that $\mathcal{M}_1 \subset \mathcal{M}_2$. To be concrete, we say that $\mathcal{M}_2$ is true if $\beta_2 \ne 0$.

To fix notation, models 1 and 2 are estimated by OLS, with residual vectors $\hat e_1$ and $\hat e_2$, estimated variances $\hat\sigma_1^2$ and $\hat\sigma_2^2$, etc., respectively. To simplify some of the statistical discussion, we will on occasion use the homoskedasticity assumption $E\left(e_i^2 \mid x_{1i}, x_{2i}\right) = \sigma^2$.

A model selection procedure is a data-dependent rule which selects one of the two models. We can write this as $\hat{\mathcal{M}}$. There are many possible desirable properties for a model selection procedure. One useful property is consistency, that it selects the true model with probability one if the sample is sufficiently large. A model selection procedure is consistent if
$$ P\left(\hat{\mathcal{M}} = \mathcal{M}_1 \mid \mathcal{M}_1\right) \to 1 $$
$$ P\left(\hat{\mathcal{M}} = \mathcal{M}_2 \mid \mathcal{M}_2\right) \to 1 $$
However, this rule only makes sense when the true model is finite dimensional. If the truth is infinite dimensional, it is more appropriate to view model selection as determining the best finite sample approximation.

A common approach to model selection is to base the decision on a statistical test such as the Wald $W_n$. The model selection rule is as follows. For some critical level $\alpha$, let $c_\alpha$ satisfy $P\left(\chi^2_{k_2} > c_\alpha\right) = \alpha$. Then select $\mathcal{M}_1$ if $W_n \le c_\alpha$, else select $\mathcal{M}_2$.

A major problem with this approach is that the critical level $\alpha$ is indeterminate. The reasoning which helps guide the choice of $\alpha$ in hypothesis testing (controlling Type I error) is not relevant for model selection. That is, if $\alpha$ is set to be a small number, then $P\left(\hat{\mathcal{M}} = \mathcal{M}_1 \mid \mathcal{M}_1\right) \approx 1 - \alpha$ but $P\left(\hat{\mathcal{M}} = \mathcal{M}_2 \mid \mathcal{M}_2\right)$ could vary dramatically, depending on the sample size, etc. Another problem is that if $\alpha$ is held fixed, then this model selection procedure is inconsistent, as $P\left(\hat{\mathcal{M}} = \mathcal{M}_1 \mid \mathcal{M}_1\right) \to 1 - \alpha < 1$.
Another common approach to model selection is to use a selection criterion. One popular choice
is the Akaike Information Criterion (AIC). The AIC under normality for model : is
1C
n
= log
_
^ o
2
n
_
+ 2
/
n
:
. (5.17)
where ^ o
2
n
is the variance estimate for model :, and /
n
is the number of coecients in the model.
The AIC can be derived as an estimate of the KullbackLeibler information distance 1(/) =
E(log )( [ A) log )( [ A, /)) between the true density and the model density. The rule is
to select /
1
if 1C
1
< 1C
2
, else select /
2
. AIC selection is inconsistent, as the rule tends to
overt. Indeed, since under /
1
,
11
a
= :
_
log ^ o
2
1
log ^ o
2
2
_
\
a
o

2
I
2
, (5.18)
then
P
_

/= /
1
[ /
1
_
= P(1C
1
< 1C
2
[ /
1
)
= P
_
log(^ o
2
1
) + 2
/
1
:
< log(^ o
2
2
) + 2
/
1
+/
2
:
[ /
1
_
= P(11
a
< 2/
2
[ /
1
)
P
_

2
I
2
< 2/
2
_
< 1.
While many criterions similar to the AIC have been proposed, the most popular is one proposed
by Schwarz based on Bayesian arguments. His criterion, known as the BIC, is
11C
n
= log
_
^ o
2
n
_
+ log(:)
/
n
:
. (5.19)
Since log(:) 2 (if : 8), the BIC places a larger penalty than the AIC on the number of
estimated parameters and is more parsimonious.
In contrast to the AIC, BIC model selection is consistent. Indeed, since (5.18) holds under /
1
,
11
a
log(:)
j
0,
so
P
_

/= /
1
[ /
1
_
= P(11C
1
< 11C
2
[ /
1
)
= P(11
a
< log(:)/
2
[ /
1
)
= P
_
11
a
log(:)
< /
2
[ /
1
_
P(0 < /
2
) = 1.
Also under /
2
, one can show that
11
a
log(:)
j
,
thus
P
_

/= /
2
[ /
2
_
= P
_
11
a
log(:)
/
2
[ /
2
_
1.
75
We have discussed model selection between two models. The methods extend readily to the
issue of selection among multiple regressors. The general problem is the model
j
i
= ,
1
r
1i
+,
2
r
2i
+ +,
1
r
1i
+c
i
, E(c
i
[ i
i
) = 0
and the question is which subset of the coecients are non-zero (equivalently, which regressors
enter the regression).
There are two leading cases: ordered regressors and unordered.
In the ordered case, the models are
/
1
: ,
1
,= 0, ,
2
= ,
3
= = ,
1
= 0
/
2
: ,
1
,= 0, ,
2
,= 0, ,
3
= = ,
1
= 0
.
.
.
/
1
: ,
1
,= 0, ,
2
,= 0, . . . , ,
1
,= 0.
which are nested. The AIC selection criteria estimates the 1 models by OLS, stores the residual
variance ^ o
2
for each model, and then selects the model with the lowest AIC (5.17). Similarly for
the BIC, selecting based on (5.19).
In the unordered case, a model consists of any possible subset of the regressors r
1i
, ..., r
1i
,
and the AIC or BIC in principle can be implemented by estimating all possible subset models.
However, there are 2
1
such models, which can be a very large number. For example, 2
10
= 1024,
and 2
20
= 1, 048, 576. In the latter case, a full-blown implementation of the BIC selection criterion
would seem computationally prohibitive.
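The ordered-case selection rule is easy to code. The sketch below is not from the text; the data are simulated and illustrative, and the variance estimate $\hat\sigma_m^2$ is taken to be the residual sum of squares divided by $n$ (the maximum-likelihood version implicit in (5.17) and (5.19)).

```python
import numpy as np

def aic_bic_ordered(y, X):
    """AIC and BIC over the nested models using the first m columns of X, m = 1..K."""
    n, K = X.shape
    aic, bic = np.empty(K), np.empty(K)
    for m in range(1, K + 1):
        Xm = X[:, :m]
        beta = np.linalg.lstsq(Xm, y, rcond=None)[0]
        e = y - Xm @ beta
        sigma2 = (e @ e) / n                       # ML variance estimate for model m
        aic[m - 1] = np.log(sigma2) + 2 * m / n
        bic[m - 1] = np.log(sigma2) + np.log(n) * m / n
    return aic, bic

# Hypothetical example: only the first two regressors matter.
rng = np.random.default_rng(1)
n, K = 200, 6
X = rng.standard_normal((n, K))
y = 1.0 * X[:, 0] + 0.5 * X[:, 1] + rng.standard_normal(n)
aic, bic = aic_bic_ordered(y, X)
print("model selected by AIC:", 1 + int(np.argmin(aic)))
print("model selected by BIC:", 1 + int(np.argmin(bic)))
```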
5.11 Technical Proofs

Proof of Theorem 5.4.1 (Sketch). First, it must be shown that $\hat\theta \xrightarrow{p} \theta_0$. This can be done using arguments for optimization estimators, but we won't cover that argument here. Since $\hat\theta \xrightarrow{p} \theta_0$, $\hat\theta$ is close to $\theta_0$ for $n$ large, so the minimization of $S_n(\theta)$ only needs to be examined for $\theta$ close to $\theta_0$. Let
$$ y_i^0 = e_i + m_{\theta i}'\theta_0. $$
For $\theta$ close to the true value $\theta_0$, by a first-order Taylor series approximation,
$$ m(x_i, \theta) \simeq m(x_i, \theta_0) + m_{\theta i}'\left( \theta - \theta_0 \right). $$
Thus
$$ y_i - m(x_i, \theta) \simeq \left( e_i + m(x_i, \theta_0) \right) - \left( m(x_i, \theta_0) + m_{\theta i}'\left( \theta - \theta_0 \right) \right) = e_i - m_{\theta i}'\left( \theta - \theta_0 \right) = y_i^0 - m_{\theta i}'\theta. $$
Hence the sum of squared errors function is
$$ S_n(\theta) = \sum_{i=1}^n \left( y_i - m(x_i, \theta) \right)^2 \simeq \sum_{i=1}^n \left( y_i^0 - m_{\theta i}'\theta \right)^2 $$
and the right-hand-side is the SSE function for a linear regression of $y_i^0$ on $m_{\theta i}$. Thus the NLLS estimator $\hat\theta$ has the same asymptotic distribution as the (infeasible) OLS regression of $y_i^0$ on $m_{\theta i}$, which is that stated in the theorem.

Proof of Theorem 5.5.1: The first step is to show that $\hat\beta \xrightarrow{p} \beta_0$. We sketch the main argument. For any fixed $\beta$, by the WLLN, $LAD_n(\beta) \xrightarrow{p} E\left| y_i - x_i'\beta \right|$. Furthermore, it can be shown that this convergence is uniform in $\beta$. It follows that $\hat\beta$, the minimizer of $LAD_n(\beta)$, converges in probability to $\beta_0$, the minimizer of $E\left| y_i - x_i'\beta \right|$.

Since $\mathrm{sgn}(a) = 1 - 2\cdot 1(a \le 0)$, (5.10) is equivalent to $g_n(\hat\beta) = 0$, where $g_n(\beta) = n^{-1}\sum_{i=1}^n g_i(\beta)$ and $g_i(\beta) = x_i\left( 1 - 2\cdot 1\left( y_i \le x_i'\beta \right) \right)$. Let $g(\beta) = E g_i(\beta)$. We need three preliminary results. First, by the central limit theorem (Theorem C.3.1)
$$ \sqrt{n}\left( g_n(\beta_0) - g(\beta_0) \right) = n^{-1/2}\sum_{i=1}^n g_i(\beta_0) \xrightarrow{d} N\left( 0, E x_i x_i' \right) $$
since $E g_i(\beta_0) g_i(\beta_0)' = E x_i x_i'$. Second, using the law of iterated expectations and the chain rule of differentiation,
$$ \frac{\partial}{\partial\beta'} g(\beta) = \frac{\partial}{\partial\beta'} E\, x_i\left( 1 - 2\cdot 1\left( y_i \le x_i'\beta \right) \right) = -2\frac{\partial}{\partial\beta'} E\left[ x_i E\left( 1\left( e_i \le x_i'\beta - x_i'\beta_0 \right) \mid x_i \right) \right] = -2\frac{\partial}{\partial\beta'} E\left[ x_i \int_{-\infty}^{x_i'\beta - x_i'\beta_0} f(e \mid x_i)\, de \right] = -2 E\left[ x_i x_i'\, f\left( x_i'\beta - x_i'\beta_0 \mid x_i \right) \right], $$
so
$$ \frac{\partial}{\partial\beta'} g(\beta_0) = -2 E\left[ x_i x_i'\, f(0 \mid x_i) \right]. $$
Third, by a Taylor series expansion and the fact $g(\beta_0) = 0$,
$$ g(\hat\beta) \simeq \frac{\partial}{\partial\beta'} g(\beta_0)\left( \hat\beta - \beta_0 \right). $$
Together,
$$ \sqrt{n}\left( \hat\beta - \beta_0 \right) \simeq \left( \frac{\partial}{\partial\beta'} g(\beta_0) \right)^{-1}\sqrt{n}\, g(\hat\beta) = \left( -2 E\left[ x_i x_i' f(0 \mid x_i) \right] \right)^{-1}\sqrt{n}\left( g(\hat\beta) - g_n(\hat\beta) \right) \simeq \frac{1}{2}\left( E\left[ x_i x_i' f(0 \mid x_i) \right] \right)^{-1}\sqrt{n}\left( g_n(\beta_0) - g(\beta_0) \right) \xrightarrow{d} \frac{1}{2}\left( E\left[ x_i x_i' f(0 \mid x_i) \right] \right)^{-1} N\left( 0, E x_i x_i' \right) = N(0, V). $$
The third approximation follows from an asymptotic empirical process argument and the fact that $\hat\beta \xrightarrow{p} \beta_0$. $\blacksquare$
5.12 Exercises

1. For any predictor $g(x_i)$ for $y_i$, the mean absolute error (MAE) is
$$ E\left| y_i - g(x_i) \right|. $$
Show that the function $g(x)$ which minimizes the MAE is the conditional median $m(x) = \mathrm{med}(y_i \mid x_i = x)$.

2. Define
$$ g(u) = \tau - 1(u < 0), $$
where $1(\cdot)$ is the indicator function (takes the value 1 if the argument is true, else equals zero). Let $\theta$ satisfy $E g(y_i - \theta) = 0$. Is $\theta$ a quantile of the distribution of $y_i$?

3. Verify equation (5.12).

4. In the homoskedastic regression model $y = X\beta + e$ with $E(e_i \mid x_i) = 0$ and $E(e_i^2 \mid x_i) = \sigma^2$, suppose $\hat\beta$ is the OLS estimate of $\beta$ with covariance matrix $\hat V$, based on a sample of size $n$. Let $\hat\sigma^2$ be the estimate of $\sigma^2$. You wish to forecast an out-of-sample value of $y_{n+1}$ given that $x_{n+1} = x$. Thus the available information is the sample $(y, X)$, the estimates $(\hat\beta, \hat V, \hat\sigma^2)$, the residuals $\hat e$, and the out-of-sample value of the regressors, $x_{n+1}$.
(a) Find a point forecast of $y_{n+1}$.
(b) Find an estimate of the variance of this forecast.

5. In a linear model
$$ y = X\beta + e, \qquad E(e \mid X) = 0, \qquad \mathrm{var}(e \mid X) = \sigma^2\Sigma $$
with $\Sigma$ known, the GLS estimator is
$$ \tilde\beta = \left( X'\Sigma^{-1}X \right)^{-1}\left( X'\Sigma^{-1}y \right), $$
the residual vector is $\hat e = y - X\tilde\beta$, and an estimate of $\sigma^2$ is
$$ s^2 = \frac{1}{n - k}\hat e'\Sigma^{-1}\hat e. $$
(a) Why is this a reasonable estimator for $\sigma^2$?
(b) Prove that $\hat e = M_1 e$, where $M_1 = I - X\left( X'\Sigma^{-1}X \right)^{-1}X'\Sigma^{-1}$.
(c) Prove that $M_1'\Sigma^{-1}M_1 = \Sigma^{-1} - \Sigma^{-1}X\left( X'\Sigma^{-1}X \right)^{-1}X'\Sigma^{-1}$.

6. Let $(y_i, x_i)$ be a random sample with $E(y \mid X) = X\beta$. Consider the Weighted Least Squares (WLS) estimator of $\beta$,
$$ \tilde\beta = \left( X'WX \right)^{-1}\left( X'Wy \right), $$
where $W = \mathrm{diag}(w_1, \ldots, w_n)$ and $w_i = x_{ji}^{-2}$, where $x_{ji}$ is one of the $x_i$.
(a) In which contexts would $\tilde\beta$ be a good estimator?
(b) Using your intuition, in which situations would you expect that $\tilde\beta$ would perform better than OLS?

7. Suppose that $y_i = g(x_i, \theta) + e_i$ with $E(e_i \mid x_i) = 0$, $\hat\theta$ is the NLLS estimator, and $\hat V$ is the estimate of $\mathrm{var}(\hat\theta)$. You are interested in the conditional mean function $E(y_i \mid x_i = x) = g(x)$ at some $x$. Find an asymptotic 95% confidence interval for $g(x)$.

8. The model is
$$ y_i = x_i\beta + e_i, \qquad E(e_i \mid x_i) = 0, $$
where $x_i \in \mathbb{R}$. Consider the two estimators
$$ \hat\beta = \frac{\sum_{i=1}^n x_i y_i}{\sum_{i=1}^n x_i^2}, \qquad \tilde\beta = \frac{1}{n}\sum_{i=1}^n \frac{y_i}{x_i}. $$
(a) Under the stated assumptions, are both estimators consistent for $\beta$?
(b) Are there conditions under which either estimator is efficient?

9. In Chapter 6, Exercise 13, you estimated a cost function on a cross-section of electric companies. The equation you estimated was
$$ \log TC_i = \beta_1 + \beta_2\log Q_i + \beta_3\log PL_i + \beta_4\log PK_i + \beta_5\log PF_i + e_i. \tag{5.20} $$
(a) Following Nerlove, add the variable $(\log Q_i)^2$ to the regression. Do so. Assess the merits of this new specification using (i) a hypothesis test; (ii) the AIC criterion; (iii) the BIC criterion. Do you agree with this modification?
(b) Now try a non-linear specification. Consider model (5.20) plus the extra term $\beta_6 z_i$, where
$$ z_i = \log Q_i\left( 1 + \exp\left( -\left( \log Q_i - \beta_7 \right) \right) \right)^{-1}. $$
In addition, impose the restriction $\beta_3 + \beta_4 + \beta_5 = 1$. This model is called a smooth threshold model. For values of $\log Q_i$ much below $\beta_7$, the variable $\log Q_i$ has a regression slope of $\beta_2$. For values much above $\beta_7$, the regression slope is $\beta_2 + \beta_6$, and the model imposes a smooth transition between these regimes. The model is non-linear because of the parameter $\beta_7$.
The model works best when $\beta_7$ is selected so that several values (in this example, at least 10 to 15) of $\log Q_i$ are both below and above $\beta_7$. Examine the data and pick an appropriate range for $\beta_7$.
(c) Estimate the model by non-linear least squares. I recommend the concentration method: Pick 10 (or more if you like) values of $\beta_7$ in this range. For each value of $\beta_7$, calculate $z_i$ and estimate the model by OLS. Record the sum of squared errors, and find the value of $\beta_7$ for which the sum of squared errors is minimized.
(d) Calculate standard errors for all the parameters $(\beta_1, \ldots, \beta_7)$.

10. The data file cps78.dat contains 550 observations on 20 variables taken from the May 1978 current population survey. Variables are listed in the file cps78.pdf. The goal of the exercise is to estimate a model for the log of earnings (variable LNWAGE) as a function of the conditioning variables.
(a) Start by an OLS regression of LNWAGE on the other variables. Report coefficient estimates and standard errors.
(b) Consider augmenting the model by squares and/or cross-products of the conditioning variables. Estimate your selected model and report the results.
(c) Are there any variables which seem to be unimportant as a determinant of wages? You may re-estimate the model without these variables, if desired.
(d) Test whether the error variance is different for men and women. Interpret.
(e) Test whether the error variance is different for whites and nonwhites. Interpret.
(f) Construct a model for the conditional variance. Estimate such a model, test for general heteroskedasticity and report the results.
(g) Using this model for the conditional variance, re-estimate the model from part (c) using FGLS. Report the results.
(h) Do the OLS and FGLS estimates differ greatly? Note any interesting differences.
(i) Compare the estimated standard errors. Note any interesting differences.
Chapter 6

The Bootstrap

6.1 Definition of the Bootstrap

Let $F$ denote a distribution function for the population of observations $(y_i, x_i)$. Let
$$ T_n = T_n\left( (y_1, x_1), \ldots, (y_n, x_n), F \right) $$
be a statistic of interest, for example an estimator $\hat\theta$ or a t-statistic $(\hat\theta - \theta)/s(\hat\theta)$. Note that we write $T_n$ as possibly a function of $F$. For example, the t-statistic is a function of the parameter $\theta$ which itself is a function of $F$.

The exact CDF of $T_n$ when the data are sampled from the distribution $F$ is
$$ G_n(u, F) = P(T_n \le u \mid F). $$
In general, $G_n(u, F)$ depends on $F$, meaning that $G$ changes as $F$ changes.

Ideally, inference would be based on $G_n(u, F)$. This is generally impossible since $F$ is unknown.

Asymptotic inference is based on approximating $G_n(u, F)$ with $G(u, F) = \lim_{n\to\infty} G_n(u, F)$. When $G(u, F) = G(u)$ does not depend on $F$, we say that $T_n$ is asymptotically pivotal and use the distribution function $G(u)$ for inferential purposes.

In a seminal contribution, Efron (1979) proposed the bootstrap, which makes a different approximation. The unknown $F$ is replaced by a consistent estimate $F_n$ (one choice is discussed in the next section). Plugged into $G_n(u, F)$ we obtain
$$ G_n^*(u) = G_n(u, F_n). \tag{6.1} $$
We call $G_n^*$ the bootstrap distribution. Bootstrap inference is based on $G_n^*(u)$.

Let $(y_i^*, x_i^*)$ denote random variables with the distribution $F_n$. A random sample from this distribution is called the bootstrap data. The statistic $T_n^* = T_n\left( (y_1^*, x_1^*), \ldots, (y_n^*, x_n^*), F_n \right)$ constructed on this sample is a random variable with distribution $G_n^*$. That is, $P(T_n^* \le u) = G_n^*(u)$. We call $T_n^*$ the bootstrap statistic. The distribution of $T_n^*$ is identical to that of $T_n$ when the true CDF is $F_n$ rather than $F$.

The bootstrap distribution is itself random, as it depends on the sample through the estimator $F_n$.

In the next sections we describe computation of the bootstrap distribution.
6.2 The Empirical Distribution Function

Recall that $F(y, x) = P(y_i \le y, x_i \le x) = E\left( 1(y_i \le y)\, 1(x_i \le x) \right)$, where $1(\cdot)$ is the indicator function. This is a population moment. The method of moments estimator is the corresponding sample moment:
$$ F_n(y, x) = \frac{1}{n}\sum_{i=1}^n 1(y_i \le y)\, 1(x_i \le x). \tag{6.2} $$
$F_n(y, x)$ is called the empirical distribution function (EDF). $F_n$ is a nonparametric estimate of $F$. Note that while $F$ may be either discrete or continuous, $F_n$ is by construction a step function.

The EDF is a consistent estimator of the CDF. To see this, note that for any $(y, x)$, $1(y_i \le y)\, 1(x_i \le x)$ is an iid random variable with expectation $F(y, x)$. Thus by the WLLN (Theorem C.2.1), $F_n(y, x) \xrightarrow{p} F(y, x)$. Furthermore, by the CLT (Theorem C.3.1),
$$ \sqrt{n}\left( F_n(y, x) - F(y, x) \right) \xrightarrow{d} N\left( 0, F(y, x)\left( 1 - F(y, x) \right) \right). $$

To see the effect of sample size on the EDF, in Figure 6.1 I have plotted the EDF and true CDF for random samples of size $n = 25$, 50, 100, and 500. The random draws are from the $N(0, 1)$ distribution. For $n = 25$ the EDF is only a crude approximation to the CDF, but the approximation appears to improve for the larger $n$. In general, as the sample size gets larger, the EDF step function gets uniformly close to the true CDF.

Figure 6.1: Empirical Distribution Functions

The EDF is a valid discrete probability distribution which puts probability mass $1/n$ at each pair $(y_i, x_i)$, $i = 1, \ldots, n$. Notationally, it is helpful to think of a random pair $(y_i^*, x_i^*)$ with the distribution $F_n$. That is,
$$ P(y_i^* \le y, x_i^* \le x) = F_n(y, x). $$
We can easily calculate the moments of functions of $(y_i^*, x_i^*)$:
$$ E h(y_i^*, x_i^*) = \int h(y, x)\, dF_n(y, x) = \sum_{i=1}^n h(y_i, x_i)\, P\left( y_i^* = y_i, x_i^* = x_i \right) = \frac{1}{n}\sum_{i=1}^n h(y_i, x_i), $$
the empirical sample average.
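The EDF calculation is short in code. A minimal sketch (not from the text; univariate case and simulated data for clarity):

```python
import numpy as np

def edf(sample):
    """Return the empirical distribution function x -> F_n(x) of a univariate sample."""
    sample = np.sort(np.asarray(sample))
    n = sample.size
    def F_n(x):
        # proportion of observations <= x (a right-continuous step function)
        return np.searchsorted(sample, x, side="right") / n
    return F_n

rng = np.random.default_rng(2)
y = rng.standard_normal(100)
F_n = edf(y)
print(F_n(0.0))   # close to the true CDF value Phi(0) = 0.5
```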
6.3 Nonparametric Bootstrap

The nonparametric bootstrap is obtained when the bootstrap distribution (6.1) is defined using the EDF (6.2) as the estimate $F_n$ of $F$.

Since the EDF $F_n$ is a multinomial (with $n$ support points), in principle the distribution $G_n^*$ could be calculated by direct methods. However, as there are $2^n$ possible samples $\{(y_1^*, x_1^*), \ldots, (y_n^*, x_n^*)\}$, such a calculation is computationally infeasible. The popular alternative is to use simulation to approximate the distribution. The algorithm is identical to our discussion of Monte Carlo simulation, with the following points of clarification:

• The sample size $n$ used for the simulation is the same as the sample size.
• The random vectors $(y_i^*, x_i^*)$ are drawn randomly from the empirical distribution. This is equivalent to sampling a pair $(y_i, x_i)$ randomly from the sample.

The bootstrap statistic $T_n^* = T_n\left( (y_1^*, x_1^*), \ldots, (y_n^*, x_n^*), F_n \right)$ is calculated for each bootstrap sample. This is repeated $B$ times. $B$ is known as the number of bootstrap replications. A theory for the determination of the number of bootstrap replications $B$ has been developed by Andrews and Buchinsky (2000). It is desirable for $B$ to be large, so long as the computational costs are reasonable. $B = 1000$ typically suffices.

When the statistic $T_n$ is a function of $F$, it is typically through dependence on a parameter. For example, the t-ratio $(\hat\theta - \theta)/s(\hat\theta)$ depends on $\theta$. As the bootstrap statistic replaces $F$ with $F_n$, it similarly replaces $\theta$ with $\theta_n$, the value of $\theta$ implied by $F_n$. Typically $\theta_n = \hat\theta$, the parameter estimate. (When in doubt use $\hat\theta$.)

Sampling from the EDF is particularly easy. Since $F_n$ is a discrete probability distribution putting probability mass $1/n$ at each sample point, sampling from the EDF is equivalent to random sampling a pair $(y_i, x_i)$ from the observed data with replacement. In consequence, a bootstrap sample $\{(y_1^*, x_1^*), \ldots, (y_n^*, x_n^*)\}$ will necessarily have some ties and multiple values, which is generally not a problem.
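A minimal sketch of the simulation loop, for the case where $T_n$ is the OLS slope in a simple regression. None of this is from the text; the data, the choice of statistic, and $B = 1000$ are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n, B = 100, 1000
x = rng.standard_normal(n)
y = 1.0 + 0.5 * x + rng.standard_normal(n)

def ols_slope(y, x):
    X = np.column_stack([np.ones(len(x)), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

theta_hat = ols_slope(y, x)

# Draw B bootstrap samples (pairs resampled with replacement) and compute the statistic on each.
theta_star = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, size=n)      # n indices drawn with replacement
    theta_star[b] = ols_slope(y[idx], x[idx])

print("estimate:", theta_hat)
print("first five bootstrap draws:", theta_star[:5])
```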
6.4 Bootstrap Estimation of Bias and Variance

The bias of $\hat\theta$ is $\tau_n = E(\hat\theta - \theta_0)$. Let $T_n(\theta) = \hat\theta - \theta$. Then $\tau_n = E(T_n(\theta_0))$. The bootstrap counterparts are $\hat\theta^* = \hat\theta\left( (y_1^*, x_1^*), \ldots, (y_n^*, x_n^*) \right)$ and $T_n^* = \hat\theta^* - \theta_n = \hat\theta^* - \hat\theta$. The bootstrap estimate of $\tau_n$ is
$$ \tau_n^* = E(T_n^*). $$
If this is calculated by the simulation described in the previous section, the estimate of $\tau_n^*$ is
$$ \hat\tau_n^* = \frac{1}{B}\sum_{b=1}^B T_{nb}^* = \frac{1}{B}\sum_{b=1}^B \hat\theta_b^* - \hat\theta = \overline{\hat\theta^*} - \hat\theta, $$
where $\overline{\hat\theta^*}$ denotes the average of the bootstrap draws.

If $\hat\theta$ is biased, it might be desirable to construct a bias-corrected estimator (one with reduced bias). Ideally, this would be
$$ \tilde\theta = \hat\theta - \tau_n, $$
but $\tau_n$ is unknown. The (estimated) bootstrap bias-corrected estimator is
$$ \tilde\theta^* = \hat\theta - \hat\tau_n^* = \hat\theta - \left( \overline{\hat\theta^*} - \hat\theta \right) = 2\hat\theta - \overline{\hat\theta^*}. $$
Note, in particular, that the bias-corrected estimator is not $\overline{\hat\theta^*}$. Intuitively, the bootstrap makes the following experiment. Suppose that $\hat\theta$ is the truth. Then what is the average value of $\hat\theta$ calculated from such samples? The answer is $\overline{\hat\theta^*}$. If this is lower than $\hat\theta$, this suggests that the estimator is downward-biased, so a bias-corrected estimator of $\theta$ should be larger than $\hat\theta$, and the best guess is the difference between $\hat\theta$ and $\overline{\hat\theta^*}$. Similarly if $\overline{\hat\theta^*}$ is higher than $\hat\theta$, then the estimator is upward-biased and the bias-corrected estimator should be lower than $\hat\theta$.

Let $T_n = \hat\theta$. The variance of $\hat\theta$ is
$$ V_n = E(T_n - E T_n)^2. $$
Let $T_n^* = \hat\theta^*$. It has variance
$$ V_n^* = E(T_n^* - E T_n^*)^2. $$
The simulation estimate is
$$ \hat V_n^* = \frac{1}{B}\sum_{b=1}^B \left( \hat\theta_b^* - \overline{\hat\theta^*} \right)^2. $$
A bootstrap standard error for $\hat\theta$ is the square root of the bootstrap estimate of variance,
$$ s^*(\hat\theta) = \sqrt{\hat V_n^*}. $$
While this standard error may be calculated and reported, it is not clear if it is useful. The primary use of asymptotic standard errors is to construct asymptotic confidence intervals, which are based on the asymptotic normal approximation to the t-ratio. However, the use of the bootstrap presumes that such asymptotic approximations might be poor, in which case the normal approximation is suspect. It appears superior to calculate bootstrap confidence intervals, and we turn to this next.
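Given a vector of bootstrap replicates, the bias and variance estimates are one line each. A self-contained sketch (not from the text; the statistic is the sample variance with divisor $n$, chosen as a hypothetical example because its small-sample bias is known):

```python
import numpy as np

rng = np.random.default_rng(4)
n, B = 50, 2000
y = rng.standard_normal(n)

def theta(sample):
    # a deliberately biased estimator: the variance with divisor n
    return np.mean((sample - sample.mean()) ** 2)

theta_hat = theta(y)
theta_star = np.array([theta(y[rng.integers(0, n, size=n)]) for _ in range(B)])

bias_hat = theta_star.mean() - theta_hat          # bootstrap bias estimate
theta_bc = theta_hat - bias_hat                   # = 2*theta_hat - mean(theta_star)
se_boot = theta_star.std(ddof=1)                  # bootstrap standard error

print("estimate:", theta_hat)
print("bias-corrected estimate:", theta_bc)
print("bootstrap standard error:", se_boot)
```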
6.5 Percentile Intervals

For a distribution function $G_n(u, F)$, let $q_n(\alpha, F)$ denote its quantile function. This is the function which solves
$$ G_n\left( q_n(\alpha, F), F \right) = \alpha. $$
[When $G_n(u, F)$ is discrete, $q_n(\alpha, F)$ may be non-unique, but we will ignore such complications.] Let $q_n(\alpha) = q_n(\alpha, F_0)$ denote the quantile function of the true sampling distribution, and $q_n^*(\alpha) = q_n(\alpha, F_n)$ denote the quantile function of the bootstrap distribution. Note that this function will change depending on the underlying statistic $T_n$ whose distribution is $G_n$.

Let $T_n = \hat\theta$, an estimate of a parameter of interest. In $(1 - \alpha)\%$ of samples, $\hat\theta$ lies in the region $[q_n(\alpha/2),\; q_n(1 - \alpha/2)]$. This motivates a confidence interval proposed by Efron:
$$ C_1 = [q_n^*(\alpha/2),\; q_n^*(1 - \alpha/2)]. $$
This is often called the percentile confidence interval.

Computationally, the quantile $q_n^*(\alpha)$ is estimated by $\hat q_n^*(\alpha)$, the $\alpha$'th sample quantile of the simulated statistics $\{T_{n1}^*, \ldots, T_{nB}^*\}$, as discussed in the section on Monte Carlo simulation. The $(1 - \alpha)\%$ Efron percentile interval is then $[\hat q_n^*(\alpha/2),\; \hat q_n^*(1 - \alpha/2)]$.

The interval $C_1$ is a popular bootstrap confidence interval often used in empirical practice. This is because it is easy to compute, simple to motivate, was popularized by Efron early in the history of the bootstrap, and also has the feature that it is translation invariant. That is, if we define $\phi = f(\theta)$ as the parameter of interest for a monotonic function $f$, then the percentile method applied to this problem will produce the confidence interval $[f(q_n^*(\alpha/2)),\; f(q_n^*(1 - \alpha/2))]$, which is a naturally good property.

However, as we show now, $C_1$ is in a deep sense very poorly motivated.

It will be useful if we introduce an alternative definition of $C_1$. Let $T_n(\theta) = \hat\theta - \theta$ and let $q_n(\alpha)$ be the quantile function of its distribution. (These are the original quantiles, with $\theta$ subtracted.) Then $C_1$ can alternatively be written as
$$ C_1 = [\hat\theta + q_n^*(\alpha/2),\; \hat\theta + q_n^*(1 - \alpha/2)]. $$
This is a bootstrap estimate of the "ideal" confidence interval
$$ C_1^0 = [\hat\theta + q_n(\alpha/2),\; \hat\theta + q_n(1 - \alpha/2)]. $$
The latter has coverage probability
$$ P\left( \theta_0 \in C_1^0 \right) = P\left( \hat\theta + q_n(\alpha/2) \le \theta_0 \le \hat\theta + q_n(1 - \alpha/2) \right) = P\left( -q_n(1 - \alpha/2) \le \hat\theta - \theta_0 \le -q_n(\alpha/2) \right) = G_n\left( -q_n(\alpha/2), F_0 \right) - G_n\left( -q_n(1 - \alpha/2), F_0 \right) $$
which generally is not $1 - \alpha$! There is one important exception. If $\hat\theta - \theta_0$ has a symmetric distribution, then $G_n(-u, F_0) = 1 - G_n(u, F_0)$, so
$$ P\left( \theta_0 \in C_1^0 \right) = \left( 1 - G_n\left( q_n(\alpha/2), F_0 \right) \right) - \left( 1 - G_n\left( q_n(1 - \alpha/2), F_0 \right) \right) = \left( 1 - \frac{\alpha}{2} \right) - \left( 1 - \left( 1 - \frac{\alpha}{2} \right) \right) = 1 - \alpha $$
and this idealized confidence interval is accurate. Therefore, $C_1^0$ and $C_1$ are designed for the case that $\hat\theta$ has a symmetric distribution about $\theta_0$.

When $\hat\theta$ does not have a symmetric distribution, $C_1$ may perform quite poorly.

However, by the translation invariance argument presented above, it also follows that if there exists some monotonic transformation $f(\cdot)$ such that $f(\hat\theta)$ is symmetrically distributed about $f(\theta_0)$, then the idealized percentile bootstrap method will be accurate.

Based on these arguments, many argue that the percentile interval should not be used unless the sampling distribution is close to unbiased and symmetric.

The problems with the percentile method can be circumvented by an alternative method. Let $T_n(\theta) = \hat\theta - \theta$. Then
$$ 1 - \alpha = P\left( q_n(\alpha/2) \le T_n(\theta_0) \le q_n(1 - \alpha/2) \right) = P\left( \hat\theta - q_n(1 - \alpha/2) \le \theta_0 \le \hat\theta - q_n(\alpha/2) \right), $$
so an exact $(1 - \alpha)\%$ confidence interval for $\theta_0$ would be
$$ C_2^0 = [\hat\theta - q_n(1 - \alpha/2),\; \hat\theta - q_n(\alpha/2)]. $$
This motivates a bootstrap analog
$$ C_2 = [\hat\theta - q_n^*(1 - \alpha/2),\; \hat\theta - q_n^*(\alpha/2)]. $$
Notice that generally this is very different from the Efron interval $C_1$! They coincide in the special case that $G_n^*(u)$ is symmetric about $\hat\theta$, but otherwise they differ.

Computationally, this interval can be estimated from a bootstrap simulation by sorting the bootstrap statistics $T_n^* = \hat\theta^* - \hat\theta$, which are centered at the sample estimate $\hat\theta$. These are sorted to yield the quantile estimates $\hat q_n^*(.025)$ and $\hat q_n^*(.975)$. The 95% confidence interval is then $[\hat\theta - \hat q_n^*(.975),\; \hat\theta - \hat q_n^*(.025)]$.

This confidence interval is discussed in most theoretical treatments of the bootstrap, but is not widely used in practice.
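Both $C_1$ and $C_2$ can be read off the sorted bootstrap draws. A sketch, assuming skewed hypothetical data so that the two intervals visibly differ; none of the numbers are from the text.

```python
import numpy as np

rng = np.random.default_rng(5)
n, B, alpha = 100, 2000, 0.05
y = rng.exponential(scale=1.0, size=n)        # skewed data, to make the contrast visible
theta_hat = y.mean()
theta_star = np.array([y[rng.integers(0, n, size=n)].mean() for _ in range(B)])

lo, hi = np.quantile(theta_star, [alpha / 2, 1 - alpha / 2])

# Efron percentile interval C1: quantiles of theta* used directly.
C1 = (lo, hi)

# Alternative percentile interval C2: quantiles of (theta* - theta_hat), reflected about theta_hat.
q_lo, q_hi = lo - theta_hat, hi - theta_hat
C2 = (theta_hat - q_hi, theta_hat - q_lo)

print("Efron percentile interval:      ", C1)
print("alternative percentile interval:", C2)
```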
6.6 Percentile-t Equal-Tailed Interval

Suppose we want to test $H_0 : \theta = \theta_0$ against $H_1 : \theta < \theta_0$ at size $\alpha$. We would set $T_n(\theta) = (\hat\theta - \theta)/s(\hat\theta)$ and reject $H_0$ in favor of $H_1$ if $T_n(\theta_0) < c$, where $c$ would be selected so that
$$ P\left( T_n(\theta_0) < c \right) = \alpha. $$
Thus $c = q_n(\alpha)$. Since this is unknown, a bootstrap test replaces $q_n(\alpha)$ with the bootstrap estimate $q_n^*(\alpha)$, and the test rejects if $T_n(\theta_0) < q_n^*(\alpha)$.

Similarly, if the alternative is $H_1 : \theta > \theta_0$, the bootstrap test rejects if $T_n(\theta_0) > q_n^*(1 - \alpha)$.

Computationally, these critical values can be estimated from a bootstrap simulation by sorting the bootstrap t-statistics $T_n^* = (\hat\theta^* - \hat\theta)/s(\hat\theta^*)$. Note, and this is important, that the bootstrap test statistic is centered at the estimate $\hat\theta$, and the standard error $s(\hat\theta^*)$ is calculated on the bootstrap sample. These t-statistics are sorted to find the estimated quantiles $\hat q_n^*(\alpha)$ and/or $\hat q_n^*(1 - \alpha)$.

Let $T_n(\theta) = (\hat\theta - \theta)/s(\hat\theta)$. Then taking the intersection of two one-sided intervals,
$$ 1 - \alpha = P\left( q_n(\alpha/2) \le T_n(\theta_0) \le q_n(1 - \alpha/2) \right) = P\left( q_n(\alpha/2) \le \frac{\hat\theta - \theta_0}{s(\hat\theta)} \le q_n(1 - \alpha/2) \right) = P\left( \hat\theta - s(\hat\theta)\, q_n(1 - \alpha/2) \le \theta_0 \le \hat\theta - s(\hat\theta)\, q_n(\alpha/2) \right), $$
so an exact $(1 - \alpha)\%$ confidence interval for $\theta_0$ would be
$$ C_3^0 = [\hat\theta - s(\hat\theta)\, q_n(1 - \alpha/2),\; \hat\theta - s(\hat\theta)\, q_n(\alpha/2)]. $$
This motivates a bootstrap analog
$$ C_3 = [\hat\theta - s(\hat\theta)\, q_n^*(1 - \alpha/2),\; \hat\theta - s(\hat\theta)\, q_n^*(\alpha/2)]. $$
This is often called a percentile-t confidence interval. It is equal-tailed or central since the probability that $\theta_0$ is below the left endpoint approximately equals the probability that $\theta_0$ is above the right endpoint, each $\alpha/2$.

Computationally, this is based on the critical values from the one-sided hypothesis tests, discussed above.
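For the sample mean the standard error is easy to recompute on each bootstrap sample, so the percentile-t construction is only a few lines. A sketch, not from the text, with illustrative data:

```python
import numpy as np

rng = np.random.default_rng(6)
n, B, alpha = 100, 2000, 0.05
y = rng.exponential(scale=1.0, size=n)

theta_hat = y.mean()
se_hat = y.std(ddof=1) / np.sqrt(n)

# Bootstrap t-statistics: centered at theta_hat, standard error recomputed on each bootstrap sample.
t_star = np.empty(B)
for b in range(B):
    yb = y[rng.integers(0, n, size=n)]
    se_b = yb.std(ddof=1) / np.sqrt(n)
    t_star[b] = (yb.mean() - theta_hat) / se_b

q_lo, q_hi = np.quantile(t_star, [alpha / 2, 1 - alpha / 2])
C3 = (theta_hat - se_hat * q_hi, theta_hat - se_hat * q_lo)
print("percentile-t equal-tailed interval:", C3)
```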
6.7 Symmetric Percentile-t Intervals

Suppose we want to test $H_0 : \theta = \theta_0$ against $H_1 : \theta \neq \theta_0$ at size $\alpha$. We would set $T_n(\theta) = (\hat\theta - \theta)/s(\hat\theta)$ and reject $H_0$ in favor of $H_1$ if $|T_n(\theta_0)| > c$, where $c$ would be selected so that
$$ P\left( |T_n(\theta_0)| > c \right) = \alpha. $$
Note that
$$ P\left( |T_n(\theta_0)| < c \right) = P\left( -c < T_n(\theta_0) < c \right) = G_n(c) - G_n(-c) \equiv \overline{G}_n(c), $$
which is a symmetric distribution function. The ideal critical value $c = q_n(\alpha)$ solves the equation
$$ \overline{G}_n\left( q_n(\alpha) \right) = 1 - \alpha. $$
Equivalently, $q_n(\alpha)$ is the $1 - \alpha$ quantile of the distribution of $|T_n(\theta_0)|$.

The bootstrap estimate is $q_n^*(\alpha)$, the $1 - \alpha$ quantile of the distribution of $|T_n^*|$, or the number which solves the equation
$$ \overline{G}_n^*\left( q_n^*(\alpha) \right) = G_n^*\left( q_n^*(\alpha) \right) - G_n^*\left( -q_n^*(\alpha) \right) = 1 - \alpha. $$
Computationally, $q_n^*(\alpha)$ is estimated from a bootstrap simulation by sorting the bootstrap t-statistics $|T_n^*| = \left| \hat\theta^* - \hat\theta \right|/s(\hat\theta^*)$, and taking the upper $\alpha\%$ quantile. The bootstrap test rejects if $|T_n(\theta_0)| > q_n^*(\alpha)$.

Let
$$ C_4 = [\hat\theta - s(\hat\theta)\, q_n^*(\alpha),\; \hat\theta + s(\hat\theta)\, q_n^*(\alpha)], $$
where $q_n^*(\alpha)$ is the bootstrap critical value for a two-sided hypothesis test. $C_4$ is called the symmetric percentile-t interval. It is designed to work well since
$$ P\left( \theta_0 \in C_4 \right) = P\left( \hat\theta - s(\hat\theta)\, q_n^*(\alpha) \le \theta_0 \le \hat\theta + s(\hat\theta)\, q_n^*(\alpha) \right) = P\left( |T_n(\theta_0)| < q_n^*(\alpha) \right) \simeq P\left( |T_n(\theta_0)| < q_n(\alpha) \right) = 1 - \alpha. $$

If $\theta$ is a vector, then to test $H_0 : \theta = \theta_0$ against $H_1 : \theta \neq \theta_0$ at size $\alpha$, we would use a Wald statistic
$$ W_n(\theta) = n\left( \hat\theta - \theta \right)'\hat V_\theta^{-1}\left( \hat\theta - \theta \right) $$
or some other asymptotically chi-square statistic. Thus here $T_n(\theta) = W_n(\theta)$. The ideal test rejects if $W_n \ge q_n(\alpha)$, where $q_n(\alpha)$ is the $(1 - \alpha)\%$ quantile of the distribution of $W_n$. The bootstrap test rejects if $W_n \ge q_n^*(\alpha)$, where $q_n^*(\alpha)$ is the $(1 - \alpha)\%$ quantile of the distribution of
$$ W_n^* = n\left( \hat\theta^* - \hat\theta \right)'\hat V_\theta^{*-1}\left( \hat\theta^* - \hat\theta \right). $$
Computationally, the critical value $q_n^*(\alpha)$ is found as the quantile from simulated values of $W_n^*$. Note in the simulation that the Wald statistic is a quadratic form in $\left( \hat\theta^* - \hat\theta \right)$, not $\left( \hat\theta^* - \theta_0 \right)$. [This is a typical mistake made by practitioners.]
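The symmetric version of the scalar construction differs from the equal-tailed one only in that it sorts the absolute t-statistics. Continuing the same hypothetical sample-mean example (illustrative data, not from the text):

```python
import numpy as np

rng = np.random.default_rng(7)
n, B, alpha = 100, 2000, 0.05
y = rng.exponential(scale=1.0, size=n)

theta_hat = y.mean()
se_hat = y.std(ddof=1) / np.sqrt(n)

abs_t_star = np.empty(B)
for b in range(B):
    yb = y[rng.integers(0, n, size=n)]
    abs_t_star[b] = abs(yb.mean() - theta_hat) / (yb.std(ddof=1) / np.sqrt(n))

q_sym = np.quantile(abs_t_star, 1 - alpha)      # bootstrap critical value q_n*(alpha)
C4 = (theta_hat - se_hat * q_sym, theta_hat + se_hat * q_sym)
print("symmetric percentile-t interval:", C4)
```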
6.8 Asymptotic Expansions

Let $T_n \in \mathbb{R}$ be a statistic such that
$$ T_n \xrightarrow{d} N(0, \sigma^2). \tag{6.3} $$
In some cases, such as when $T_n$ is a t-ratio, then $\sigma^2 = 1$. In other cases $\sigma^2$ is unknown. Equivalently, writing $T_n \sim G_n(u, F)$, then
$$ \lim_{n\to\infty} G_n(u, F) = \Phi\left( \frac{u}{\sigma} \right), $$
or
$$ G_n(u, F) = \Phi\left( \frac{u}{\sigma} \right) + o(1). \tag{6.4} $$
While (6.4) says that $G_n$ converges to $\Phi\left( \frac{u}{\sigma} \right)$ as $n \to \infty$, it says nothing, however, about the rate of convergence, or the size of the divergence for any particular sample size $n$. A better asymptotic approximation may be obtained through an asymptotic expansion.

The following notation will be helpful. Let $a_n$ be a sequence.

Definition 6.8.1 $a_n = o(1)$ if $a_n \to 0$ as $n \to \infty$.

Definition 6.8.2 $a_n = O(1)$ if $|a_n|$ is uniformly bounded.

Definition 6.8.3 $a_n = o(n^{-r})$ if $n^r |a_n| \to 0$ as $n \to \infty$.

Basically, $a_n = O(n^{-r})$ if it declines to zero like $n^{-r}$.

We say that a function $g(u)$ is even if $g(-u) = g(u)$, and a function $h(u)$ is odd if $h(-u) = -h(u)$. The derivative of an even function is odd, and vice-versa.

Theorem 6.8.1 Under regularity conditions and (6.3),
$$ G_n(u, F) = \Phi\left( \frac{u}{\sigma} \right) + \frac{1}{n^{1/2}} g_1(u, F) + \frac{1}{n} g_2(u, F) + O(n^{-3/2}) $$
uniformly over $u$, where $g_1$ is an even function of $u$, and $g_2$ is an odd function of $u$. Moreover, $g_1$ and $g_2$ are differentiable functions of $u$ and continuous in $F$ relative to the supremum norm on the space of distribution functions.

We can interpret Theorem 6.8.1 as follows. First, $G_n(u, F)$ converges to the normal limit at rate $n^{1/2}$. To a second order of approximation,
$$ G_n(u, F) \approx \Phi\left( \frac{u}{\sigma} \right) + n^{-1/2} g_1(u, F). $$
Since the derivative of $g_1$ is odd, the density function is skewed. To a third order of approximation,
$$ G_n(u, F) \approx \Phi\left( \frac{u}{\sigma} \right) + n^{-1/2} g_1(u, F) + n^{-1} g_2(u, F) $$
which adds a symmetric non-normal component to the approximate density (for example, adding leptokurtosis).
6.9 One-Sided Tests

Using the expansion of Theorem 6.8.1, we can assess the accuracy of one-sided hypothesis tests and confidence regions based on an asymptotically normal t-ratio $T_n$. An asymptotic test is based on $\Phi(u)$.

To the second order, the exact distribution is
$$ P(T_n < u) = G_n(u, F_0) = \Phi(u) + \frac{1}{n^{1/2}} g_1(u, F_0) + O(n^{-1}) $$
since $\sigma = 1$. The difference is
$$ \Phi(u) - G_n(u, F_0) = -\frac{1}{n^{1/2}} g_1(u, F_0) + O(n^{-1}) = O(n^{-1/2}), $$
so the order of the error is $O(n^{-1/2})$.

A bootstrap test is based on $G_n^*(u)$, which from Theorem 6.8.1 has the expansion
$$ G_n^*(u) = G_n(u, F_n) = \Phi(u) + \frac{1}{n^{1/2}} g_1(u, F_n) + O(n^{-1}). $$
Because $\Phi(u)$ appears in both expansions, the difference between the bootstrap distribution and the true distribution is
$$ G_n^*(u) - G_n(u, F_0) = \frac{1}{n^{1/2}}\left( g_1(u, F_n) - g_1(u, F_0) \right) + O(n^{-1}). $$
Since $F_n$ converges to $F$ at rate $\sqrt{n}$, and $g_1$ is continuous with respect to $F$, the difference $\left( g_1(u, F_n) - g_1(u, F_0) \right)$ converges to 0 at rate $\sqrt{n}$. Heuristically,
$$ g_1(u, F_n) - g_1(u, F_0) \approx \frac{\partial}{\partial F} g_1(u, F_0)\left( F_n - F_0 \right) = O(n^{-1/2}). $$
The "derivative" $\frac{\partial}{\partial F} g_1(u, F)$ is only heuristic, as $F$ is a function. We conclude that
$$ G_n^*(u) - G_n(u, F_0) = O(n^{-1}), $$
or
$$ P(T_n^* \le u) = P(T_n \le u) + O(n^{-1}), $$
which is an improved rate of convergence over the asymptotic test (which converged at rate $O(n^{-1/2})$). This rate can be used to show that one-tailed bootstrap inference based on the t-ratio achieves a so-called asymptotic refinement: the Type I error of the test converges at a faster rate than an analogous asymptotic test.
6.10 Symmetric Two-Sided Tests

If a random variable $y$ has distribution function $H(u) = P(y \le u)$, then the random variable $|y|$ has distribution function
$$ \overline{H}(u) = H(u) - H(-u) $$
since
$$ P(|y| \le u) = P(-u \le y \le u) = P(y \le u) - P(y \le -u) = H(u) - H(-u). $$
For example, if $Z \sim N(0, 1)$, then $|Z|$ has distribution function
$$ \overline{\Phi}(u) = \Phi(u) - \Phi(-u) = 2\Phi(u) - 1. $$
Similarly, if $T_n$ has exact distribution $G_n(u, F)$, then $|T_n|$ has the distribution function
$$ \overline{G}_n(u, F) = G_n(u, F) - G_n(-u, F). $$
A two-sided hypothesis test rejects $H_0$ for large values of $|T_n|$. Since $T_n \xrightarrow{d} Z$, then $|T_n| \xrightarrow{d} |Z| \sim \overline{\Phi}$. Thus asymptotic critical values are taken from the $\overline{\Phi}$ distribution, and exact critical values are taken from the $\overline{G}_n(u, F_0)$ distribution. From Theorem 6.8.1, we can calculate that
$$ \overline{G}_n(u, F) = G_n(u, F) - G_n(-u, F) = \left( \Phi(u) + \frac{1}{n^{1/2}} g_1(u, F) + \frac{1}{n} g_2(u, F) \right) - \left( \Phi(-u) + \frac{1}{n^{1/2}} g_1(-u, F) + \frac{1}{n} g_2(-u, F) \right) + O(n^{-3/2}) = \overline{\Phi}(u) + \frac{2}{n} g_2(u, F) + O(n^{-3/2}), \tag{6.5} $$
where the simplifications are because $g_1$ is even and $g_2$ is odd. Hence the difference between the asymptotic distribution and the exact distribution is
$$ \overline{\Phi}(u) - \overline{G}_n(u, F_0) = -\frac{2}{n} g_2(u, F_0) + O(n^{-3/2}) = O(n^{-1}). $$
The order of the error is $O(n^{-1})$.

Interestingly, the asymptotic two-sided test has a better coverage rate than the asymptotic one-sided test. This is because the first term in the asymptotic expansion, $g_1$, is an even function, meaning that the errors in the two directions exactly cancel out.

Applying (6.5) to the bootstrap distribution, we find
$$ \overline{G}_n^*(u) = \overline{G}_n(u, F_n) = \overline{\Phi}(u) + \frac{2}{n} g_2(u, F_n) + O(n^{-3/2}). $$
Thus the difference between the bootstrap and exact distributions is
$$ \overline{G}_n^*(u) - \overline{G}_n(u, F_0) = \frac{2}{n}\left( g_2(u, F_n) - g_2(u, F_0) \right) + O(n^{-3/2}) = O(n^{-3/2}), $$
the last equality because $F_n$ converges to $F_0$ at rate $\sqrt{n}$, and $g_2$ is continuous in $F$. Another way of writing this is
$$ P(|T_n^*| < u) = P(|T_n| < u) + O(n^{-3/2}) $$
so the error from using the bootstrap distribution (relative to the true unknown distribution) is $O(n^{-3/2})$. This is in contrast to the use of the asymptotic distribution, whose error is $O(n^{-1})$. Thus a two-sided bootstrap test also achieves an asymptotic refinement, similar to a one-sided test.

A reader might get confused between the two simultaneous effects. Two-sided tests have better rates of convergence than one-sided tests, and bootstrap tests have better rates of convergence than asymptotic tests.

The analysis shows that there may be a trade-off between one-sided and two-sided tests. Two-sided tests will have more accurate size (reported Type I error), but one-sided tests might have more power against alternatives of interest. Confidence intervals based on the bootstrap can be asymmetric if based on one-sided tests (equal-tailed intervals) and can therefore be more informative and have smaller length than symmetric intervals. Therefore, the choice between symmetric and equal-tailed confidence intervals is unclear, and needs to be determined on a case-by-case basis.
6.11 Percentile Confidence Intervals

To evaluate the coverage rate of the percentile interval, set $T_n = \sqrt{n}\left( \hat\theta - \theta_0 \right)$. We know that $T_n \xrightarrow{d} N(0, V)$, which is not pivotal, as it depends on the unknown $V$. Theorem 6.8.1 shows that a first-order approximation is
$$ G_n(u, F) = \Phi\left( \frac{u}{\sigma} \right) + O(n^{-1/2}), $$
where $\sigma = \sqrt{V}$, and for the bootstrap
$$ G_n^*(u) = G_n(u, F_n) = \Phi\left( \frac{u}{\hat\sigma} \right) + O(n^{-1/2}), $$
where $\hat\sigma = V(F_n)^{1/2}$ is the bootstrap estimate of $\sigma$. The difference is
$$ G_n^*(u) - G_n(u, F_0) = \Phi\left( \frac{u}{\hat\sigma} \right) - \Phi\left( \frac{u}{\sigma} \right) + O(n^{-1/2}) \approx -\phi\left( \frac{u}{\sigma} \right)\frac{u}{\sigma}\,\frac{\hat\sigma - \sigma}{\sigma} + O(n^{-1/2}) = O(n^{-1/2}). $$
Hence the order of the error is $O(n^{-1/2})$.

The good news is that the percentile-type methods (if appropriately used) can yield $\sqrt{n}$-convergent asymptotic inference. Yet these methods do not require the calculation of standard errors! This means that in contexts where standard errors are not available or are difficult to calculate, the percentile bootstrap methods provide an attractive inference method.

The bad news is that the rate of convergence is disappointing. It is no better than the rate obtained from an asymptotic one-sided confidence region. Therefore if standard errors are available, it is unclear if there are any benefits from using the percentile bootstrap over simple asymptotic methods.

Based on these arguments, the theoretical literature (e.g. Hall, 1992; Horowitz, 2001) tends to advocate the use of the percentile-t bootstrap methods rather than percentile methods.
6.12 Bootstrap Methods for Regression Models

The bootstrap methods we have discussed have set $G_n^*(u) = G_n(u, F_n)$, where $F_n$ is the EDF. Any other consistent estimate of $F$ may be used to define a feasible bootstrap estimator. The advantage of the EDF is that it is fully nonparametric, it imposes no conditions, and works in nearly any context. But since it is fully nonparametric, it may be inefficient in contexts where more is known about $F$. We discuss some bootstrap methods appropriate for the case of a regression model where
$$ y_i = x_i'\beta + e_i, \qquad E(e_i \mid x_i) = 0. $$
The non-parametric bootstrap resamples the observations $(y_i^*, x_i^*)$ from the EDF, which implies
$$ y_i^* = x_i^{*\prime}\hat\beta + e_i^*, \qquad E(x_i^* e_i^*) = 0, $$
but generally
$$ E(e_i^* \mid x_i^*) \neq 0. $$
The bootstrap distribution does not impose the regression assumption, and is thus an inefficient estimator of the true distribution (when in fact the regression assumption is true).

One approach to this problem is to impose the very strong assumption that the error $e_i$ is independent of the regressor $x_i$. The advantage is that in this case it is straightforward to construct bootstrap distributions. The disadvantage is that the bootstrap distribution may be a poor approximation when the error is not independent of the regressors.

To impose independence, it is sufficient to sample the $x_i^*$ and $e_i^*$ independently, and then create $y_i^* = x_i^{*\prime}\hat\beta + e_i^*$. There are different ways to impose independence. A non-parametric method is to sample the bootstrap errors $e_i^*$ randomly from the OLS residuals $\{\hat e_1, \ldots, \hat e_n\}$. A parametric method is to generate the bootstrap errors $e_i^*$ from a parametric distribution, such as the normal $e_i^* \sim N(0, \hat\sigma^2)$.

For the regressors $x_i^*$, a nonparametric method is to sample the $x_i^*$ randomly from the EDF or sample values $\{x_1, \ldots, x_n\}$. A parametric method is to sample $x_i^*$ from an estimated parametric distribution. A third approach sets $x_i^* = x_i$. This is equivalent to treating the regressors as fixed in repeated samples. If this is done, then all inferential statements are made conditionally on the observed values of the regressors, which is a valid statistical approach. It does not really matter, however, whether or not the $x_i$ are really "fixed" or random.

The methods discussed above are unattractive for most applications in econometrics because they impose the stringent assumption that $x_i$ and $e_i$ are independent. Typically what is desirable is to impose only the regression condition $E(e_i \mid x_i) = 0$. Unfortunately this is a harder problem.

One proposal which imposes the regression condition without independence is the Wild Bootstrap. The idea is to construct a conditional distribution for $e_i^*$ so that
$$ E(e_i^* \mid x_i) = 0, \qquad E(e_i^{*2} \mid x_i) = \hat e_i^2, \qquad E(e_i^{*3} \mid x_i) = \hat e_i^3. $$
A conditional distribution with these features will preserve the main important features of the data. This can be achieved using a two-point distribution of the form
$$ P\left( e_i^* = \left( \frac{1 + \sqrt{5}}{2} \right)\hat e_i \right) = \frac{\sqrt{5} - 1}{2\sqrt{5}}, \qquad P\left( e_i^* = \left( \frac{1 - \sqrt{5}}{2} \right)\hat e_i \right) = \frac{\sqrt{5} + 1}{2\sqrt{5}}. $$
For each $x_i$, you sample $e_i^*$ using this two-point distribution.
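A sketch of one wild-bootstrap replication using this two-point distribution. Nothing here is from the text: the heteroskedastic design is simulated for illustration, and in practice $\hat\beta$ and $\hat e_i$ come from the actual OLS fit.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 200
x = rng.standard_normal(n)
X = np.column_stack([np.ones(n), x])
y = 1.0 + 0.5 * x + (0.5 + 0.5 * np.abs(x)) * rng.standard_normal(n)  # heteroskedastic errors

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
e_hat = y - X @ beta_hat

# Two-point ("golden ratio") weights: E(w) = 0, E(w^2) = 1, E(w^3) = 1,
# so e*_i = w_i * e_hat_i matches the first three conditional moments of e_hat_i.
s5 = np.sqrt(5.0)
values = np.array([(1 + s5) / 2, (1 - s5) / 2])
probs = np.array([(s5 - 1) / (2 * s5), (s5 + 1) / (2 * s5)])

def wild_bootstrap_draw():
    w = rng.choice(values, size=n, p=probs)
    y_star = X @ beta_hat + w * e_hat            # regressors held fixed
    beta_star, *_ = np.linalg.lstsq(X, y_star, rcond=None)
    return beta_star

print("one wild-bootstrap draw of beta:", wild_bootstrap_draw())
```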
6.13 Exercises

1. Let $F_n(x)$ denote the EDF of a random sample. Show that
$$ \sqrt{n}\left( F_n(x) - F_0(x) \right) \xrightarrow{d} N\left( 0, F_0(x)\left( 1 - F_0(x) \right) \right). $$

2. Take a random sample $\{y_1, \ldots, y_n\}$ with $\mu = E y_i$ and $\sigma^2 = \mathrm{var}(y_i)$. Let the statistic of interest be the sample mean $T_n = \bar y_n$. Find the population moments $E T_n$ and $\mathrm{var}(T_n)$. Let $\{y_1^*, \ldots, y_n^*\}$ be a random sample from the empirical distribution function and let $T_n^* = \bar y_n^*$ be its sample mean. Find the bootstrap moments $E T_n^*$ and $\mathrm{var}(T_n^*)$.

3. Consider the following bootstrap procedure for a regression of $y_i$ on $x_i$. Let $\hat\beta$ denote the OLS estimator from the regression of $y$ on $X$, and $\hat e = y - X\hat\beta$ the OLS residuals.
(a) Draw a random vector $(x^*, e^*)$ from the pair $\{(x_i, \hat e_i) : i = 1, \ldots, n\}$. That is, draw a random integer $i'$ from $[1, 2, \ldots, n]$, and set $x^* = x_{i'}$ and $e^* = \hat e_{i'}$. Set $y^* = x^{*\prime}\hat\beta + e^*$. Draw (with replacement) $n$ such vectors, creating a random bootstrap data set $(y^*, X^*)$.
(b) Regress $y^*$ on $X^*$, yielding OLS estimates $\hat\beta^*$ and any other statistic of interest.
Show that this bootstrap procedure is (numerically) identical to the non-parametric bootstrap.

4. Consider the following bootstrap procedure. Using the non-parametric bootstrap, generate bootstrap samples, calculate the estimate $\hat\theta^*$ on these samples and then calculate
$$ T_n^* = \frac{\hat\theta^* - \hat\theta}{s(\hat\theta)}, $$
where $s(\hat\theta)$ is the standard error in the original data. Let $q_n^*(.05)$ and $q_n^*(.95)$ denote the 5% and 95% quantiles of $T_n^*$, and define the bootstrap confidence interval
$$ C = \left[ \hat\theta - s(\hat\theta)\, q_n^*(.95),\; \hat\theta - s(\hat\theta)\, q_n^*(.05) \right]. $$
Show that $C$ exactly equals the Alternative percentile interval (not the percentile-t interval).

5. You want to test $H_0 : \theta = 0$ against $H_1 : \theta > 0$. The test for $H_0$ is to reject if $T_n = \hat\theta/s(\hat\theta) > c$ where $c$ is picked so that Type I error is $\alpha$. You do this as follows. Using the non-parametric bootstrap, you generate bootstrap samples, calculate the estimates $\hat\theta^*$ on these samples and then calculate
$$ T_n^* = \hat\theta^*/s(\hat\theta^*). $$
Let $q_n^*(.95)$ denote the 95% quantile of $T_n^*$. You replace $c$ with $q_n^*(.95)$, and thus reject $H_0$ if $T_n = \hat\theta/s(\hat\theta) > q_n^*(.95)$. What is wrong with this procedure?

6. Suppose that in an application, $\hat\theta = 1.2$ and $s(\hat\theta) = .2$. Using the non-parametric bootstrap, 1000 samples are generated from the bootstrap distribution, and $\hat\theta^*$ is calculated on each sample. The $\hat\theta^*$ are sorted, and the 2.5% and 97.5% quantiles of the $\hat\theta^*$ are .75 and 1.3, respectively.
(a) Report the 95% Efron Percentile interval for $\theta$.
(b) Report the 95% Alternative Percentile interval for $\theta$.
(c) With the given information, can you report the 95% Percentile-t interval for $\theta$?

7. The data file hprice1.dat contains data on house prices (sales), with variables listed in the file hprice1.pdf. Estimate a linear regression of price on the number of bedrooms, lot size, size of house, and the colonial dummy. Calculate 95% confidence intervals for the regression coefficients using both the asymptotic normal approximation and the percentile-t bootstrap.
Chapter 7

Generalized Method of Moments

7.1 Overidentified Linear Model

Consider the linear model
$$ y_i = x_i'\beta + e_i = x_{1i}'\beta_1 + x_{2i}'\beta_2 + e_i, \qquad E(x_i e_i) = 0, $$
where $x_{1i}$ is $k \times 1$ and $x_{2i}$ is $r \times 1$ with $\ell = k + r$. We know that without further restrictions, an asymptotically efficient estimator of $\beta$ is the OLS estimator. Now suppose that we are given the information that $\beta_2 = 0$. Now we can write the model as
$$ y_i = x_{1i}'\beta_1 + e_i, \qquad E(x_i e_i) = 0. $$
In this case, how should $\beta_1$ be estimated? One method is OLS regression of $y_i$ on $x_{1i}$ alone. This method, however, is not necessarily efficient, as there are $\ell$ restrictions in $E(x_i e_i) = 0$, while $\beta_1$ is of dimension $k < \ell$. This situation is called overidentified. There are $\ell - k = r$ more moment restrictions than free parameters. We call $r$ the number of overidentifying restrictions.

This is a special case of a more general class of moment condition models. Let $g(y, z, x, \beta)$ be an $\ell \times 1$ function of a $k \times 1$ parameter $\beta$ with $\ell \ge k$ such that
$$ E g(y_i, z_i, x_i, \beta_0) = 0 \tag{7.1} $$
where $\beta_0$ is the true value of $\beta$. In our previous example, $g(y, x, \beta) = x(y - x_1'\beta)$. In econometrics, this class of models are called moment condition models. In the statistics literature, these are known as estimating equations.

As an important special case we will devote special attention to linear moment condition models, which can be written as
$$ y_i = z_i'\beta + e_i, \qquad E(x_i e_i) = 0, $$
where the dimensions of $z_i$ and $x_i$ are $k \times 1$ and $\ell \times 1$, with $\ell \ge k$. If $k = \ell$ the model is just identified, otherwise it is overidentified. The variables $z_i$ may be components and functions of $x_i$, but this is not required. This model falls in the class (7.1) by setting
$$ g(y, z, x, \beta_0) = x(y - z'\beta). \tag{7.2} $$
7.2 GMM Estimator

Define the sample analog of (7.2)
$$ g_n(\beta) = \frac{1}{n}\sum_{i=1}^n g_i(\beta) = \frac{1}{n}\sum_{i=1}^n x_i\left( y_i - z_i'\beta \right) = \frac{1}{n}\left( X'y - X'Z\beta \right). \tag{7.3} $$
The method of moments estimator for $\beta$ is defined as the parameter value which sets $g_n(\beta) = 0$. This is generally not possible when $\ell > k$, as there are more equations than free parameters. The idea of the generalized method of moments (GMM) is to define an estimator which sets $g_n(\beta)$ "close" to zero.

For some $\ell \times \ell$ weight matrix $W_n > 0$, let
$$ J_n(\beta) = n\, g_n(\beta)' W_n\, g_n(\beta). $$
This is a non-negative measure of the "length" of the vector $g_n(\beta)$. For example, if $W_n = I$, then $J_n(\beta) = n\, g_n(\beta)' g_n(\beta) = n\, \|g_n(\beta)\|^2$, the square of the Euclidean length. The GMM estimator minimizes $J_n(\beta)$.

Definition 1 $\hat\beta_{GMM} = \mathrm{argmin}_\beta\; J_n(\beta)$.

Note that if $k = \ell$, then $g_n(\hat\beta) = 0$, and the GMM estimator is the method of moments estimator. The first order conditions for the GMM estimator are
$$ 0 = \frac{\partial}{\partial\beta} J_n(\hat\beta) = 2\frac{\partial}{\partial\beta} g_n(\hat\beta)' W_n\, g_n(\hat\beta) = -2\left( \frac{1}{n} Z'X \right) W_n\left( \frac{1}{n} X'\left( y - Z\hat\beta \right) \right), $$
so
$$ 2\left( Z'X \right) W_n\left( X'Z \right)\hat\beta = 2\left( Z'X \right) W_n\left( X'y \right), $$
which establishes the following.

Proposition 7.2.1 $\hat\beta_{GMM} = \left( \left( Z'X \right) W_n\left( X'Z \right) \right)^{-1}\left( Z'X \right) W_n\left( X'y \right)$.

While the estimator depends on $W_n$, the dependence is only up to scale, for if $W_n$ is replaced by $cW_n$ for some $c > 0$, $\hat\beta_{GMM}$ does not change.
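Proposition 7.2.1 is a single matrix expression. A sketch, not from the text: the over-identified design (one endogenous regressor, two instruments) is simulated purely for illustration, and the identity weight matrix is just one admissible choice of $W_n$.

```python
import numpy as np

def gmm_linear(y, Z, X, W):
    """beta_GMM = ((Z'X) W (X'Z))^{-1} (Z'X) W (X'y)."""
    ZX = Z.T @ X
    A = ZX @ W @ ZX.T          # (Z'X) W (X'Z)
    b = ZX @ W @ (X.T @ y)     # (Z'X) W (X'y)
    return np.linalg.solve(A, b)

# Illustrative over-identified design: one endogenous regressor, two instruments.
rng = np.random.default_rng(9)
n = 500
x1, x2 = rng.standard_normal(n), rng.standard_normal(n)
u = rng.standard_normal(n)
z = 0.8 * x1 + 0.5 * x2 + u                 # endogenous regressor
e = 0.5 * u + rng.standard_normal(n)        # correlated with z, uncorrelated with (x1, x2)
y = 1.0 * z + e

Z = z.reshape(-1, 1)                        # regressors (k = 1)
X = np.column_stack([x1, x2])               # instruments (l = 2)
print("GMM estimate of beta:", gmm_linear(y, Z, X, np.eye(2)))
```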
7.3 Distribution of GMM Estimator

Assume that $W_n \xrightarrow{p} W > 0$. Let
$$ Q = E\left( x_i z_i' \right) $$
and
$$ \Omega = E\left( x_i x_i' e_i^2 \right) = E\left( g_i g_i' \right), $$
where $g_i = x_i e_i$. Then
$$ \left( \frac{1}{n} Z'X \right) W_n\left( \frac{1}{n} X'Z \right) \xrightarrow{p} Q'WQ $$
and
$$ \left( \frac{1}{n} Z'X \right) W_n\left( \frac{1}{\sqrt{n}} X'e \right) \xrightarrow{d} Q'W\, N(0, \Omega). $$
We conclude:

Theorem 7.3.1 $\sqrt{n}\left( \hat\beta - \beta \right) \xrightarrow{d} N(0, V)$, where
$$ V = \left( Q'WQ \right)^{-1}\left( Q'W\Omega WQ \right)\left( Q'WQ \right)^{-1}. $$

In general, GMM estimators are asymptotically normal with "sandwich form" asymptotic variances.

The optimal weight matrix $W_0$ is one which minimizes $V$. This turns out to be $W_0 = \Omega^{-1}$. The proof is left as an exercise. This yields the efficient GMM estimator:
$$ \hat\beta = \left( Z'X\Omega^{-1}X'Z \right)^{-1} Z'X\Omega^{-1}X'y. $$
Thus we have

Theorem 7.3.2 For the efficient GMM estimator, $\sqrt{n}\left( \hat\beta - \beta \right) \xrightarrow{d} N\left( 0, \left( Q'\Omega^{-1}Q \right)^{-1} \right)$.

$W_0 = \Omega^{-1}$ is not known in practice, but it can be estimated consistently. For any $W_n \xrightarrow{p} W_0$, we still call $\hat\beta$ the efficient GMM estimator, as it has the same asymptotic distribution.

By "efficient", we mean that this estimator has the smallest asymptotic variance in the class of GMM estimators with this set of moment conditions. This is a weak concept of optimality, as we are only considering alternative weight matrices $W_n$. However, it turns out that the GMM estimator is semiparametrically efficient, as shown by Gary Chamberlain (1987).

If it is known that $E(g_i(\beta)) = 0$, and this is all that is known, this is a semi-parametric problem, as the distribution of the data is unknown. Chamberlain showed that in this context, no semiparametric estimator (one which is consistent globally for the class of models considered) can have a smaller asymptotic variance than $\left( G'\Omega^{-1}G \right)^{-1}$ where $G = E\frac{\partial}{\partial\beta'} g_i(\beta)$. Since the GMM estimator has this asymptotic variance, it is semiparametrically efficient.

This result shows that in the linear model, no estimator has greater asymptotic efficiency than the efficient linear GMM estimator. No estimator can do better (in this first-order asymptotic sense), without imposing additional assumptions.
7.4 Estimation of the Efficient Weight Matrix

Given any weight matrix $W_n > 0$, the GMM estimator $\hat\beta$ is consistent yet inefficient. For example, we can set $W_n = I_\ell$. In the linear model, a better choice is $W_n = (X'X)^{-1}$. Given any such first-step estimator, we can define the residuals $\hat e_i = y_i - z_i'\hat\beta$ and moment equations $\hat g_i = x_i\hat e_i = g(y_i, z_i, x_i, \hat\beta)$. Construct
$$ \bar g_n = g_n(\hat\beta) = \frac{1}{n}\sum_{i=1}^n \hat g_i, \qquad \hat g_i^* = \hat g_i - \bar g_n, $$
and define
$$ W_n = \left( \frac{1}{n}\sum_{i=1}^n \hat g_i^*\hat g_i^{*\prime} \right)^{-1} = \left( \frac{1}{n}\sum_{i=1}^n \hat g_i\hat g_i' - \bar g_n\bar g_n' \right)^{-1}. \tag{7.4} $$
Then $W_n \xrightarrow{p} \Omega^{-1} = W_0$, and GMM using $W_n$ as the weight matrix is asymptotically efficient.

A common alternative choice is to set
$$ W_n = \left( \frac{1}{n}\sum_{i=1}^n \hat g_i\hat g_i' \right)^{-1}, $$
which uses the uncentered moment conditions. Since $E g_i = 0$, these two estimators are asymptotically equivalent under the hypothesis of correct specification. However, Alastair Hall (2000) has shown that the uncentered estimator is a poor choice. When constructing hypothesis tests, under the alternative hypothesis the moment conditions are violated, i.e. $E g_i \neq 0$, so the uncentered estimator will contain an undesirable bias term and the power of the test will be adversely affected. A simple solution is to use the centered moment conditions to construct the weight matrix, as in (7.4) above.

Here is a simple way to compute the efficient GMM estimator for the linear model. First, set $W_n = (X'X)^{-1}$, estimate $\hat\beta$ using this weight matrix, and construct the residual $\hat e_i = y_i - z_i'\hat\beta$. Then set $\hat g_i = x_i\hat e_i$, and let $\hat g$ be the associated $n \times \ell$ matrix. Then the efficient GMM estimator is
$$ \hat\beta = \left( Z'X\left( \hat g'\hat g - n\bar g_n\bar g_n' \right)^{-1}X'Z \right)^{-1} Z'X\left( \hat g'\hat g - n\bar g_n\bar g_n' \right)^{-1}X'y. $$
In most cases, when we say "GMM", we actually mean "efficient GMM". There is little point in using an inefficient GMM estimator when the efficient estimator is easy to compute.

An estimator of the asymptotic variance of $\hat\beta$ can be seen from the above formula. Set
$$ \hat V = n\left( Z'X\left( \hat g'\hat g - n\bar g_n\bar g_n' \right)^{-1}X'Z \right)^{-1}. $$
Asymptotic standard errors for $\hat\beta$ are given by the square roots of the diagonal elements of $n^{-1}\hat V$.
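The two-step recipe just described translates directly into code. A sketch under illustrative assumptions (simulated endogenous-regressor design as in the earlier example; the centered weight matrix (7.4) is used in the second step):

```python
import numpy as np

def efficient_gmm(y, Z, X):
    """Two-step efficient GMM for y = Z beta + e with instruments X, E(x_i e_i) = 0."""
    n, l = X.shape
    ZX = Z.T @ X
    # Step 1: first-step weight matrix W = (X'X)^{-1}.
    W1 = np.linalg.inv(X.T @ X)
    beta1 = np.linalg.solve(ZX @ W1 @ ZX.T, ZX @ W1 @ (X.T @ y))
    # Step 2: centered weight matrix built from first-step residuals.
    e1 = y - Z @ beta1
    g = X * e1[:, None]                          # n x l matrix of moments g_i = x_i e_i
    gbar = g.mean(axis=0)
    W2 = np.linalg.inv(g.T @ g / n - np.outer(gbar, gbar))
    beta2 = np.linalg.solve(ZX @ W2 @ ZX.T, ZX @ W2 @ (X.T @ y))
    # Asymptotic variance estimate and standard errors.
    V = n * np.linalg.inv(ZX @ np.linalg.inv(g.T @ g - n * np.outer(gbar, gbar)) @ ZX.T)
    se = np.sqrt(np.diag(V / n))
    return beta2, se

rng = np.random.default_rng(10)
n = 500
x1, x2 = rng.standard_normal(n), rng.standard_normal(n)
u = rng.standard_normal(n)
z = 0.8 * x1 + 0.5 * x2 + u
y = 1.0 * z + 0.5 * u + rng.standard_normal(n)

beta_hat, se = efficient_gmm(y, z.reshape(-1, 1), np.column_stack([x1, x2]))
print("efficient GMM estimate:", beta_hat, "standard error:", se)
```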
There is an important alternative to the two-step GMM estimator just described. Instead, we can let the weight matrix be considered as a function of $\beta$. The criterion function is then
$$ J(\beta) = n\, g_n(\beta)'\left( \frac{1}{n}\sum_{i=1}^n g_i^*(\beta)g_i^*(\beta)' \right)^{-1} g_n(\beta), $$
where
$$ g_i^*(\beta) = g_i(\beta) - g_n(\beta). $$
The $\hat\beta$ which minimizes this function is called the continuously-updated GMM estimator, and was introduced by L. Hansen, Heaton and Yaron (1996).

The estimator appears to have some better properties than traditional GMM, but can be numerically tricky to obtain in some cases. This is a current area of research in econometrics.
7.5 GMM: The General Case

In its most general form, GMM applies whenever an economic or statistical model implies the $\ell \times 1$ moment condition
$$ E(g_i(\beta)) = 0. $$
Often, this is all that is known. Identification requires $\ell \ge k = \dim(\beta)$. The GMM estimator minimizes
$$ J(\beta) = n\, g_n(\beta)' W_n\, g_n(\beta) $$
where
$$ g_n(\beta) = \frac{1}{n}\sum_{i=1}^n g_i(\beta) $$
and
$$ W_n = \left( \frac{1}{n}\sum_{i=1}^n \hat g_i\hat g_i' - \bar g_n\bar g_n' \right)^{-1}, $$
with $\hat g_i = g_i(\tilde\beta)$ constructed using a preliminary consistent estimator $\tilde\beta$, perhaps obtained by first setting $W_n = I$. Since the GMM estimator depends upon the first-stage estimator, often the weight matrix $W_n$ is updated, and then $\hat\beta$ recomputed. This estimator can be iterated if needed.

Theorem 7.5.1 Under general regularity conditions, $\sqrt{n}\left( \hat\beta - \beta \right) \xrightarrow{d} N\left( 0, \left( G'\Omega^{-1}G \right)^{-1} \right)$, where $\Omega = E\left( g_i g_i' \right)$ and $G = E\frac{\partial}{\partial\beta'} g_i(\beta)$. The variance of $\hat\beta$ may be estimated by $\left( \hat G'\hat\Omega^{-1}\hat G \right)^{-1}$ where $\hat\Omega = n^{-1}\sum_i \hat g_i^*\hat g_i^{*\prime}$ and $\hat G = n^{-1}\sum_i \frac{\partial}{\partial\beta'} g_i(\hat\beta)$.

The general theory of GMM estimation and testing was exposited by L. Hansen (1982).
7.6 Over-Identification Test

Overidentified models ($\ell > k$) are special in the sense that there may not be a parameter value $\beta$ such that the moment condition
$$ E g(y_i, z_i, x_i, \beta) = 0 $$
holds. Thus the overidentifying restrictions are testable.

For example, take the linear model $y_i = \beta_1'x_{1i} + \beta_2'x_{2i} + e_i$ with $E(x_{1i}e_i) = 0$ and $E(x_{2i}e_i) = 0$. It is possible that $\beta_2 = 0$, so that the linear equation may be written as $y_i = \beta_1'x_{1i} + e_i$. However, it is possible that $\beta_2 \neq 0$, and in this case it would be impossible to find a value of $\beta_1$ so that both $E\left( x_{1i}\left( y_i - x_{1i}'\beta_1 \right) \right) = 0$ and $E\left( x_{2i}\left( y_i - x_{1i}'\beta_1 \right) \right) = 0$ hold simultaneously. In this sense an exclusion restriction can be seen as an overidentifying restriction.

Note that $g_n \xrightarrow{p} E g_i$, and thus $g_n$ can be used to assess whether or not the hypothesis that $E g_i = 0$ is true or not. The criterion function at the parameter estimates is
$$ J = n\, g_n' W_n\, g_n = n^2\, g_n'\left( \hat g'\hat g - n\bar g_n\bar g_n' \right)^{-1} g_n. $$
$J$ is a quadratic form in $g_n$, and is thus a natural test statistic for $H_0 : E g_i = 0$.

Theorem 7.6.1 (Sargan-Hansen). Under the hypothesis of correct specification, and if the weight matrix is asymptotically efficient,
$$ J = J(\hat\beta) \xrightarrow{d} \chi^2_{\ell - k}. $$

The proof of the theorem is left as an exercise. This result was established by Sargan (1958) for a specialized case, and by L. Hansen (1982) for the general case.

The degrees of freedom of the asymptotic distribution are the number of overidentifying restrictions. If the statistic $J$ exceeds the chi-square critical value, we can reject the model. Based on this information alone, it is unclear what is wrong, but it is typically cause for concern. The GMM overidentification test is a very useful by-product of the GMM methodology, and it is advisable to report the statistic $J$ whenever GMM is the estimation method.

When over-identified models are estimated by GMM, it is customary to report the $J$ statistic as a general test of model adequacy.
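A sketch of the $J$ statistic with its chi-square p-value, reusing the kind of two-step calculation from Section 7.4. The simulated design is illustrative only; it satisfies the moment conditions, so $J$ should be moderate with df $= \ell - k = 1$.

```python
import numpy as np
from scipy.stats import chi2

def gmm_j_test(y, Z, X):
    """Two-step efficient GMM and the Sargan-Hansen J statistic with its p-value."""
    n, l = X.shape
    k = Z.shape[1]
    ZX = Z.T @ X
    # first step with W = (X'X)^{-1}
    W1 = np.linalg.inv(X.T @ X)
    beta1 = np.linalg.solve(ZX @ W1 @ ZX.T, ZX @ W1 @ (X.T @ y))
    g = X * (y - Z @ beta1)[:, None]
    gbar = g.mean(axis=0)
    W2 = np.linalg.inv(g.T @ g / n - np.outer(gbar, gbar))   # centered efficient weight
    # second step and criterion evaluated at the estimate
    beta2 = np.linalg.solve(ZX @ W2 @ ZX.T, ZX @ W2 @ (X.T @ y))
    gn = X.T @ (y - Z @ beta2) / n
    J = n * gn @ W2 @ gn
    return beta2, J, chi2.sf(J, df=l - k)

rng = np.random.default_rng(11)
n = 500
x1, x2 = rng.standard_normal(n), rng.standard_normal(n)
u = rng.standard_normal(n)
z = 0.8 * x1 + 0.5 * x2 + u
y = 1.0 * z + 0.5 * u + rng.standard_normal(n)

beta_hat, J, pval = gmm_j_test(y, z.reshape(-1, 1), np.column_stack([x1, x2]))
print("J statistic:", J, "p-value:", pval)   # df = l - k = 1 overidentifying restriction
```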
7.7 Hypothesis Testing: The Distance Statistic

We described before how to construct estimates of the asymptotic covariance matrix of the GMM estimates. These may be used to construct Wald tests of statistical hypotheses.

If the hypothesis is non-linear, a better approach is to directly use the GMM criterion function. This is sometimes called the GMM Distance statistic, and sometimes called a LR-like statistic (the LR is for likelihood-ratio). The idea was first put forward by Newey and West (1987).

For a given weight matrix $W_n$, the GMM criterion function is
$$ J(\beta) = n\, g_n(\beta)' W_n\, g_n(\beta). $$
For $h : \mathbb{R}^k \to \mathbb{R}^r$, the hypothesis is
$$ H_0 : h(\beta) = 0. $$
The estimates under $H_1$ are
$$ \hat\beta = \mathrm{argmin}_\beta\; J(\beta) $$
and those under $H_0$ are
$$ \tilde\beta = \mathrm{argmin}_{h(\beta) = 0}\; J(\beta). $$
The two minimizing criterion functions are $J(\hat\beta)$ and $J(\tilde\beta)$. The GMM distance statistic is the difference
$$ D = J(\tilde\beta) - J(\hat\beta). $$

Proposition 7.7.1 If the same weight matrix $W_n$ is used for both null and alternative,

1. $D \ge 0$
2. $D \xrightarrow{d} \chi^2_r$
3. If $h$ is linear in $\beta$, then $D$ equals the Wald statistic.

If $h$ is non-linear, the Wald statistic can work quite poorly. In contrast, current evidence suggests that the $D$ statistic appears to have quite good sampling properties, and is the preferred test statistic.

Newey and West (1987) suggested to use the same weight matrix $W_n$ for both null and alternative, as this ensures that $D \ge 0$. This reasoning is not compelling, however, and some current research suggests that this restriction is not necessary for good performance of the test.

This test shares the useful feature of LR tests in that it is a natural by-product of the computation of alternative models.
7.8 Conditional Moment Restrictions

In many contexts, the model implies more than an unconditional moment restriction of the form $E g_i(\beta) = 0$. It implies a conditional moment restriction of the form
$$ E(e_i(\beta) \mid x_i) = 0, $$
where $e_i(\beta)$ is some $s \times 1$ function of the observation and the parameters. In many cases, $s = 1$.

It turns out that this conditional moment restriction is much more powerful, and restrictive, than the unconditional moment restriction discussed above.

Our linear model $y_i = z_i'\beta + e_i$ with instruments $x_i$ falls into this class under the stronger assumption $E(e_i \mid x_i) = 0$. Then $e_i(\beta) = y_i - z_i'\beta$.

It is also helpful to realize that conventional regression models also fall into this class, except that in this case $z_i = x_i$. For example, in linear regression, $e_i(\beta) = y_i - x_i'\beta$, while in a nonlinear regression model $e_i(\beta) = y_i - g(x_i, \beta)$. In a joint model of the conditional mean and variance,
$$ e_i(\beta, \gamma) = \begin{pmatrix} y_i - x_i'\beta \\ \left( y_i - x_i'\beta \right)^2 - f(x_i)'\gamma \end{pmatrix}. $$
Here $s = 2$.

Given a conditional moment restriction, an unconditional moment restriction can always be constructed. That is, for any $\ell \times 1$ function $\phi(x_i, \beta)$, we can set $g_i(\beta) = \phi(x_i, \beta) e_i(\beta)$, which satisfies $E g_i(\beta) = 0$ and hence defines a GMM estimator. The obvious problem is that the class of functions $\phi$ is infinite. Which should be selected?

This is equivalent to the problem of selection of the best instruments. If $x_i \in \mathbb{R}$ is a valid instrument satisfying $E(e_i \mid x_i) = 0$, then $x_i$, $x_i^2$, $x_i^3$, ..., etc., are all valid instruments. Which should be used?

One solution is to construct an infinite list of potent instruments, and then use the first $k$ instruments. How is $k$ to be determined? This is an area of theory still under development. A recent study of this problem is Donald and Newey (2001).

Another approach is to construct the optimal instrument. The form was uncovered by Chamberlain (1987). Take the case $s = 1$. Let
$$ R_i = E\left( \frac{\partial}{\partial\beta} e_i(\beta) \mid x_i \right) $$
and
$$ \sigma_i^2 = E\left( e_i(\beta)^2 \mid x_i \right). $$
Then the "optimal instrument" is
$$ A_i = -\sigma_i^{-2} R_i, $$
so the optimal moment is
$$ g_i(\beta) = A_i e_i(\beta). $$
Setting $g_i(\beta)$ to be this choice (which is $k \times 1$, so is just-identified) yields the best GMM estimator possible.

In practice, $A_i$ is unknown, but its form does help us think about construction of optimal instruments.

In the linear model $e_i(\beta) = y_i - z_i'\beta$, note that
$$ R_i = -E(z_i \mid x_i) $$
and
$$ \sigma_i^2 = E\left( e_i^2 \mid x_i \right), $$
so
$$ A_i = \sigma_i^{-2} E(z_i \mid x_i). $$
In the case of linear regression, $z_i = x_i$, so $A_i = \sigma_i^{-2} x_i$. Hence efficient GMM is GLS, as we discussed earlier in the course.

In the case of endogenous variables, note that the efficient instrument $A_i$ involves the estimation of the conditional mean of $z_i$ given $x_i$. In other words, to get the best instrument for $z_i$, we need the best conditional mean model for $z_i$ given $x_i$, not just an arbitrary linear projection. The efficient instrument is also inversely proportional to the conditional variance of $e_i$. This is the same as the GLS estimator; namely, that improved efficiency can be obtained if the observations are weighted inversely to the conditional variance of the errors.
7.9 Bootstrap GMM Inference

Let $\hat\beta$ be the 2SLS or GMM estimator of $\beta$. Using the EDF of $(y_i, x_i, z_i)$, we can apply the bootstrap methods discussed in Chapter 6 to compute estimates of the bias and variance of $\hat\beta$, and construct confidence intervals for $\beta$, identically as in the regression model. However, caution should be applied when interpreting such results.

A straightforward application of the nonparametric bootstrap works in the sense of consistently achieving the first-order asymptotic distribution. This has been shown by Hahn (1996). However, it fails to achieve an asymptotic refinement when the model is over-identified, jeopardizing the theoretical justification for percentile-t methods. Furthermore, the bootstrap applied $J$ test will yield the wrong answer.

The problem is that in the sample, $\hat\beta$ is the "true" value and yet $\bar{g}_n(\hat\beta) \neq 0$. Thus, according to random variables $(y_i^*, x_i^*, z_i^*)$ drawn from the EDF $F_n$,
$$E\left(g_i\left(\hat\beta\right)\right) = \bar{g}_n(\hat\beta) \neq 0.$$
This means that $(y_i^*, x_i^*, z_i^*)$ do not satisfy the same moment conditions as the population distribution.

A correction suggested by Hall and Horowitz (1996) can solve the problem. Given the bootstrap sample $(y^*, X^*, Z^*)$, define the bootstrap GMM criterion
$$J^*(\beta) = n\left(\bar{g}_n^*(\beta) - \bar{g}_n(\hat\beta)\right)' W_n^* \left(\bar{g}_n^*(\beta) - \bar{g}_n(\hat\beta)\right),$$
where $\bar{g}_n(\hat\beta)$ is from the in-sample data, not from the bootstrap data.

Let $\hat\beta^*$ minimize $J^*(\beta)$, and define all statistics and tests accordingly. In the linear model, this implies that the bootstrap estimator is
$$\hat\beta^* = \left(Z^{*\prime} X^* W_n^* X^{*\prime} Z^*\right)^{-1}\left(Z^{*\prime} X^* W_n^*\right)\left(X^{*\prime} y^* - X'\hat{e}\right),$$
where $\hat{e} = y - Z\hat\beta$ are the in-sample residuals. The bootstrap $J$ statistic is $J^*(\hat\beta^*)$.

Brown and Newey (2002) have an alternative solution. They note that we can sample from the observations with the empirical likelihood probabilities $\hat{p}_i$ described in Chapter 8. Since $\sum_{i=1}^n \hat{p}_i\, g_i(\hat\beta) = 0$, this sampling scheme preserves the moment conditions of the model, so no recentering or adjustment is needed. Brown and Newey argue that this bootstrap procedure will be more efficient than the Hall-Horowitz GMM bootstrap.
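The recentering step is mechanical and easy to overlook in code. The following minimal sketch (hypothetical helper names, linear IV moments, user-supplied bootstrap weight matrix assumed) shows how the in-sample moment $\bar g_n(\hat\beta)$ enters the Hall-Horowitz bootstrap criterion.

```python
import numpy as np

def recentered_boot_criterion(beta, y_b, Z_b, X_b, W_b, gbar_hat):
    """Hall-Horowitz bootstrap GMM criterion: recenter the bootstrap moment
    by the in-sample moment gbar_hat = X'(y - Z beta_hat)/n, so the bootstrap
    "population" satisfies its own moment condition at beta_hat."""
    n = len(y_b)
    gbar_star = X_b.T @ (y_b - Z_b @ beta) / n
    diff = gbar_star - gbar_hat
    return n * diff @ W_b @ diff

def one_bootstrap_draw(rng, y, Z, X):
    """One nonparametric bootstrap sample: resample (y_i, z_i, x_i) rows jointly."""
    idx = rng.integers(0, len(y), len(y))
    return y[idx], Z[idx], X[idx]
```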
7.10 Exercises

1. Take the model
$$y_i = x_i'\beta + e_i, \qquad E(x_i e_i) = 0$$
$$e_i^2 = z_i'\gamma + \eta_i, \qquad E(z_i \eta_i) = 0.$$
Find the method of moments estimators $(\hat\beta, \hat\gamma)$ for $(\beta, \gamma)$.

2. Take the single equation
$$y = Z\beta + e, \qquad E(e \mid X) = 0.$$
Assume $E\left(e_i^2 \mid x_i\right) = \sigma^2$. Show that if $\hat\beta$ is estimated by GMM with weight matrix $W_n = (X'X)^{-1}$, then
$$\sqrt{n}\left(\hat\beta - \beta\right) \to_d N\left(0, \sigma^2\left(Q'M^{-1}Q\right)^{-1}\right)$$
where $Q = E(x_i z_i')$ and $M = E(x_i x_i')$.

3. Take the model $y_i = z_i'\beta + e_i$ with $E(x_i e_i) = 0$. Let $\hat{e}_i = y_i - z_i'\hat\beta$ where $\hat\beta$ is consistent for $\beta$ (e.g. a GMM estimator with arbitrary weight matrix). Define the estimate of the optimal GMM weight matrix
$$W_n = \left(\frac{1}{n}\sum_{i=1}^n x_i x_i'\hat{e}_i^2\right)^{-1}.$$
Show that $W_n \to_p \Omega^{-1}$ where $\Omega = E\left(x_i x_i' e_i^2\right)$.

4. In the linear model estimated by GMM with general weight matrix $W$, the asymptotic variance of $\hat\beta_{GMM}$ is
$$V = \left(Q'WQ\right)^{-1} Q'W\Omega W Q\left(Q'WQ\right)^{-1}.$$
(a) Let $V_0$ be this matrix when $W = \Omega^{-1}$. Show that $V_0 = \left(Q'\Omega^{-1}Q\right)^{-1}$.
(b) We want to show that for any $W$, $V - V_0$ is positive semi-definite (for then $V_0$ is the smallest possible covariance matrix and $W = \Omega^{-1}$ is the efficient weight matrix). To do this, start by finding matrices $A$ and $B$ such that $V = A'\Omega A$ and $V_0 = B'\Omega B$.
(c) Show that $B'\Omega A = B'\Omega B$ and therefore that $B'\Omega(A - B) = 0$.
(d) Use the expressions $V = A'\Omega A$, $A = B + (A - B)$, and $B'\Omega(A - B) = 0$ to show that $V \ge V_0$.

5. The equation of interest is
$$y_i = g(x_i, \beta) + e_i, \qquad E(z_i e_i) = 0.$$
The observed data is $(y_i, x_i, z_i)$. $z_i$ is $\ell \times 1$ and $\beta$ is $k \times 1$, $\ell \ge k$. Show how to construct an efficient GMM estimator for $\beta$.

6. In the linear model $y = X\beta + e$ with $E(x_i e_i) = 0$, the Generalized Method of Moments (GMM) criterion function for $\beta$ is defined as
$$J_n(\beta) = \frac{1}{n}\left(y - X\beta\right)'X\,\hat\Omega_n^{-1}\,X'\left(y - X\beta\right) \qquad (7.5)$$
where $\hat{e}_i$ are the OLS residuals and $\hat\Omega_n = \frac{1}{n}\sum_{i=1}^n x_i x_i'\hat{e}_i^2$. The GMM estimator of $\beta$, subject to the restriction $h(\beta) = 0$, is defined as
$$\tilde\beta = \operatorname*{argmin}_{h(\beta)=0} J_n(\beta).$$
The GMM test statistic (the distance statistic) of the hypothesis $h(\beta) = 0$ is
$$D = J_n(\tilde\beta) = \min_{h(\beta)=0} J_n(\beta). \qquad (7.6)$$
(a) Show that you can rewrite $J_n(\beta)$ in (7.5) as
$$J_n(\beta) = \left(\beta - \hat\beta\right)'\hat{V}_n^{-1}\left(\beta - \hat\beta\right)$$
where
$$\hat{V}_n = \left(X'X\right)^{-1}\left(\sum_{i=1}^n x_i x_i'\hat{e}_i^2\right)\left(X'X\right)^{-1}.$$
(b) Now focus on linear restrictions: $h(\beta) = R'\beta - r$. Thus
$$\tilde\beta = \operatorname*{argmin}_{R'\beta - r = 0} J_n(\beta)$$
and hence $R'\tilde\beta = r$. Define the Lagrangian $\mathcal{L}(\beta, \lambda) = \frac{1}{2}J_n(\beta) + \lambda'(R'\beta - r)$ where $\lambda$ is $s \times 1$ (the number of restrictions). Show that the minimizer is
$$\tilde\beta = \hat\beta - \hat{V}_n R\left(R'\hat{V}_n R\right)^{-1}\left(R'\hat\beta - r\right)$$
$$\hat\lambda = \left(R'\hat{V}_n R\right)^{-1}\left(R'\hat\beta - r\right).$$
(c) Show that if $R'\beta = r$ then $\sqrt{n}\left(\tilde\beta - \beta\right) \to_d N(0, V_R)$ where
$$V_R = V - V R\left(R'V R\right)^{-1} R'V.$$
(d) Show that in this setting, the distance statistic $D$ in (7.6) equals the Wald statistic.

7. Take the linear model
$$y_i = z_i'\beta + e_i, \qquad E(x_i e_i) = 0,$$
and consider the GMM estimator $\hat\beta$ of $\beta$. Let
$$J_n = n\,\bar{g}_n(\hat\beta)'\hat\Omega^{-1}\bar{g}_n(\hat\beta)$$
denote the test of overidentifying restrictions. Show that $J_n \to_d \chi^2_{\ell-k}$ as $n \to \infty$ by demonstrating each of the following:
(a) Since $\Omega > 0$, we can write $\Omega^{-1} = CC'$ and $\Omega = C'^{-1}C^{-1}$.
(b) $J_n = n\left(C'\bar{g}_n(\hat\beta)\right)'\left(C'\hat\Omega C\right)^{-1} C'\bar{g}_n(\hat\beta)$.
(c) $C'\bar{g}_n(\hat\beta) = D_n\, C'\bar{g}_n(\beta_0)$ where
$$D_n = I_\ell - C'\left(\frac{1}{n}X'Z\right)\left[\left(\frac{1}{n}Z'X\right)\hat\Omega^{-1}\left(\frac{1}{n}X'Z\right)\right]^{-1}\left(\frac{1}{n}Z'X\right)\hat\Omega^{-1} C'^{-1}$$
and $\bar{g}_n(\beta_0) = \frac{1}{n}X'e$.
(d) $D_n \to_p I_\ell - R(R'R)^{-1}R'$ where $R = C'E(x_i z_i')$.
(e) $n^{1/2}\,C'\bar{g}_n(\beta_0) \to_d Z \sim N(0, I_\ell)$.
(f) $J_n \to_d Z'\left(I_\ell - R(R'R)^{-1}R'\right)Z$.
(g) $Z'\left(I_\ell - R(R'R)^{-1}R'\right)Z \sim \chi^2_{\ell-k}$.
Hint: $I_\ell - R(R'R)^{-1}R'$ is a projection matrix.
Chapter 8

Empirical Likelihood

8.1 Non-Parametric Likelihood

An alternative to GMM is empirical likelihood. The idea is due to Art Owen (1988, 2001) and has been extended to moment condition models by Qin and Lawless (1994). It is a non-parametric analog of likelihood estimation.

The idea is to construct a multinomial distribution $F(p_1, \ldots, p_n)$ which places probability $p_i$ at each observation. To be a valid multinomial distribution, these probabilities must satisfy the requirements that $p_i \ge 0$ and
$$\sum_{i=1}^n p_i = 1. \qquad (8.1)$$
Since each observation is observed once in the sample, the log-likelihood function for this multinomial distribution is
$$\log L(p_1, \ldots, p_n) = \sum_{i=1}^n \log(p_i). \qquad (8.2)$$
First let us consider a just-identified model. In this case the moment condition places no additional restrictions on the multinomial distribution. The maximum likelihood estimators of the probabilities $(p_1, \ldots, p_n)$ are those which maximize the log-likelihood subject to the constraint (8.1). This is equivalent to maximizing
$$\sum_{i=1}^n \log(p_i) - \mu\left(\sum_{i=1}^n p_i - 1\right)$$
where $\mu$ is a Lagrange multiplier. The $n$ first order conditions are $0 = p_i^{-1} - \mu$. Combined with the constraint (8.1) we find that the MLE is $p_i = n^{-1}$, yielding the log-likelihood $-n\log(n)$.

Now consider the case of an overidentified model with moment condition
$$E g_i(\beta_0) = 0$$
where $g$ is $\ell \times 1$ and $\beta$ is $k \times 1$, and for simplicity we write $g_i(\beta) = g(y_i, z_i, x_i, \beta)$. The multinomial distribution which places probability $p_i$ at each observation $(y_i, x_i, z_i)$ will satisfy this condition if and only if
$$\sum_{i=1}^n p_i\, g_i(\beta) = 0. \qquad (8.3)$$
The empirical likelihood estimator is the value of $\beta$ which maximizes the multinomial log-likelihood (8.2) subject to the restrictions (8.1) and (8.3).

The Lagrangian for this maximization problem is
$$\mathcal{L}(\beta, p_1, \ldots, p_n, \lambda, \mu) = \sum_{i=1}^n \log(p_i) - \mu\left(\sum_{i=1}^n p_i - 1\right) - n\lambda'\sum_{i=1}^n p_i\, g_i(\beta)$$
where $\lambda$ and $\mu$ are Lagrange multipliers. The first-order conditions of $\mathcal{L}$ with respect to $p_i$, $\mu$, and $\lambda$ are
$$\frac{1}{p_i} = \mu + n\lambda' g_i(\beta),$$
$$\sum_{i=1}^n p_i = 1,$$
$$\sum_{i=1}^n p_i\, g_i(\beta) = 0.$$
Multiplying the first equation by $p_i$, summing over $i$, and using the second and third equations, we find $\mu = n$ and
$$p_i = \frac{1}{n\left(1 + \lambda' g_i(\beta)\right)}.$$
Substituting into $\mathcal{L}$ we find
$$R_n(\beta, \lambda) = -n\log(n) - \sum_{i=1}^n \log\left(1 + \lambda' g_i(\beta)\right). \qquad (8.4)$$
For given $\beta$, the Lagrange multiplier $\lambda(\beta)$ minimizes $R_n(\beta, \lambda)$:
$$\lambda(\beta) = \operatorname*{argmin}_{\lambda} R_n(\beta, \lambda). \qquad (8.5)$$
This minimization problem is the dual of the constrained maximization problem. The solution (when it exists) is well defined since $R_n(\beta, \lambda)$ is a convex function of $\lambda$. The solution cannot be obtained explicitly, but must be obtained numerically (see Section 6.5). This yields the (profile) empirical log-likelihood function for $\beta$:
$$R_n(\beta) = R_n(\beta, \lambda(\beta)) = -n\log(n) - \sum_{i=1}^n \log\left(1 + \lambda(\beta)' g_i(\beta)\right).$$
The EL estimate $\hat\beta$ is the value which maximizes $R_n(\beta)$, or equivalently minimizes its negative:
$$\hat\beta = \operatorname*{argmin}_{\beta}\left[-R_n(\beta)\right]. \qquad (8.6)$$
Numerical methods are required for the calculation of $\hat\beta$ (see Section 8.5).

As a by-product of estimation, we also obtain the Lagrange multiplier $\hat\lambda = \lambda(\hat\beta)$, probabilities
$$\hat{p}_i = \frac{1}{n\left(1 + \hat\lambda' g_i(\hat\beta)\right)},$$
and maximized empirical likelihood
$$R_n(\hat\beta) = \sum_{i=1}^n \log(\hat{p}_i). \qquad (8.7)$$
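Since $\lambda(\beta)$ in (8.5) has no closed form, it helps to see the inner minimization written out. The following is a minimal sketch (not the text's Gauss code): full Newton steps are used, `g_fn` is a hypothetical user function returning the $n \times \ell$ matrix of moments, and the steplength and probability constraints of Section 8.5 are omitted.

```python
import numpy as np

def inner_loop(beta, g_fn, data, tol=1e-10, max_iter=100):
    """Solve (8.5): lambda(beta) = argmin_lambda R_n(beta, lambda), by Newton steps.
    Returns lambda(beta) and the profile value R_n(beta)."""
    g = g_fn(beta, data)                     # n x l matrix with rows g_i(beta)
    n, l = g.shape
    lam = np.zeros(l)
    for _ in range(max_iter):
        d = 1.0 + g @ lam                    # 1 + lambda' g_i
        R_lam = -(g / d[:, None]).sum(axis=0)              # gradient in lambda
        R_ll = (g / d[:, None]).T @ (g / d[:, None])        # Hessian (PSD, so convex)
        lam = lam - np.linalg.solve(R_ll, R_lam)            # Newton step
        if np.max(np.abs(R_lam)) < tol:
            break
    return lam, -n * np.log(n) - np.log(1.0 + g @ lam).sum()
```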
8.2 Asymptotic Distribution of EL Estimator

Define
$$G_i(\beta) = \frac{\partial}{\partial\beta'} g_i(\beta) \qquad (8.8)$$
$$G = E\, G_i(\beta_0)$$
$$\Omega = E\left(g_i(\beta_0)\, g_i(\beta_0)'\right)$$
and
$$V = \left(G'\Omega^{-1}G\right)^{-1} \qquad (8.9)$$
$$V_\lambda = \Omega - G\left(G'\Omega^{-1}G\right)^{-1}G'. \qquad (8.10)$$
For example, in the linear model, $G_i(\beta) = -x_i z_i'$, $G = -E(x_i z_i')$, and $\Omega = E\left(x_i x_i' e_i^2\right)$.

Theorem 8.2.1 Under regularity conditions,
$$\sqrt{n}\left(\hat\beta - \beta_0\right) \to_d N(0, V)$$
$$\sqrt{n}\,\hat\lambda \to_d \Omega^{-1} N(0, V_\lambda)$$
where $V$ and $V_\lambda$ are defined in (8.9) and (8.10), and $\sqrt{n}(\hat\beta - \beta_0)$ and $\sqrt{n}\hat\lambda$ are asymptotically independent.

The proof is given in Section 8.6.

The theorem shows that the asymptotic variance $V$ for $\hat\beta$ is the same as for efficient GMM. Thus the EL estimator is asymptotically efficient.

Chamberlain (1987) showed that $V$ is the semiparametric efficiency bound for $\beta$ in the overidentified moment condition model. This means that no consistent estimator for this class of models can have a lower asymptotic variance than $V$. Since the EL estimator achieves this bound, it is an asymptotically efficient estimator for $\beta$.
8.3 Overidentifying Restrictions

In a parametric likelihood context, tests are based on the difference in the log likelihood functions. The same statistic can be constructed for empirical likelihood. Twice the difference between the unrestricted empirical log-likelihood $-n\log(n)$ and the maximized empirical log-likelihood for the model (8.7) is
$$LR_n = \sum_{i=1}^n 2\log\left(1 + \hat\lambda' g_i(\hat\beta)\right). \qquad (8.11)$$

Theorem 8.3.1 If $E g_i(\beta_0) = 0$ then $LR_n \to_d \chi^2_{\ell-k}$.

The proof is given in Section 8.6.

The EL overidentification test is similar to the GMM overidentification test. They are asymptotically first-order equivalent, and have the same interpretation. The overidentification test is a very useful by-product of EL estimation, and it is advisable to report the statistic $LR_n$ whenever EL is the estimation method.
8.4 Testing

Let the maintained model be
$$E g_i(\beta) = 0 \qquad (8.12)$$
where $g$ is $\ell \times 1$ and $\beta$ is $k \times 1$. By "maintained" we mean that the overidentifying restrictions contained in (8.12) are assumed to hold and are not being challenged (at least for the test discussed in this section). The hypothesis of interest is
$$h(\beta) = 0,$$
where $h : \mathbb{R}^k \to \mathbb{R}^a$. The restricted EL estimator and likelihood are the values which solve
$$\tilde\beta = \operatorname*{argmax}_{h(\beta)=0} R_n(\beta)$$
$$R_n(\tilde\beta) = \max_{h(\beta)=0} R_n(\beta).$$
Fundamentally, the restricted EL estimator $\tilde\beta$ is simply an EL estimator with $\ell - k + a$ overidentifying restrictions, so there is no fundamental change in the distribution theory for $\tilde\beta$ relative to $\hat\beta$. To test the hypothesis $h(\beta) = 0$ while maintaining (8.12), the simple overidentifying restrictions test (8.11) is not appropriate. Instead we use the difference in log-likelihoods:
$$LR_n = 2\left(R_n(\hat\beta) - R_n(\tilde\beta)\right).$$
This test statistic is a natural analog of the GMM distance statistic.

Theorem 8.4.1 Under (8.12) and $H_0 : h(\beta) = 0$, $LR_n \to_d \chi^2_a$.

The proof of this result is more challenging and is omitted.
8.5 Numerical Computation

Gauss code which implements the methods discussed below can be found at
http://www.ssc.wisc.edu/~bhansen/progs/elike.prc

Derivatives

The numerical calculations depend on derivatives of the dual likelihood function (8.4). Define
$$g_i^*(\beta, \lambda) = \frac{g_i(\beta)}{1 + \lambda' g_i(\beta)}$$
$$G_i^*(\beta, \lambda) = \frac{G_i(\beta)'\lambda}{1 + \lambda' g_i(\beta)}.$$
The first derivatives of (8.4) are
$$R_\lambda = \frac{\partial}{\partial\lambda} R_n(\beta, \lambda) = -\sum_{i=1}^n g_i^*(\beta, \lambda)$$
$$R_\beta = \frac{\partial}{\partial\beta} R_n(\beta, \lambda) = -\sum_{i=1}^n G_i^*(\beta, \lambda).$$
The second derivatives are
$$R_{\lambda\lambda} = \frac{\partial^2}{\partial\lambda\,\partial\lambda'} R_n(\beta, \lambda) = \sum_{i=1}^n g_i^*(\beta, \lambda)\, g_i^*(\beta, \lambda)'$$
$$R_{\lambda\beta} = \frac{\partial^2}{\partial\lambda\,\partial\beta'} R_n(\beta, \lambda) = \sum_{i=1}^n \left(g_i^*(\beta, \lambda)\, G_i^*(\beta, \lambda)' - \frac{G_i(\beta)}{1 + \lambda' g_i(\beta)}\right)$$
$$R_{\beta\beta} = \frac{\partial^2}{\partial\beta\,\partial\beta'} R_n(\beta, \lambda) = \sum_{i=1}^n \left(G_i^*(\beta, \lambda)\, G_i^*(\beta, \lambda)' - \frac{\frac{\partial^2}{\partial\beta\,\partial\beta'}\left(g_i(\beta)'\lambda\right)}{1 + \lambda' g_i(\beta)}\right).$$

Inner Loop

The so-called "inner loop" solves (8.5) for given $\beta$. The modified Newton method takes a quadratic approximation to $R_n(\beta, \lambda)$, yielding the iteration rule
$$\lambda_{j+1} = \lambda_j - \delta\left(R_{\lambda\lambda}(\beta, \lambda_j)\right)^{-1} R_\lambda(\beta, \lambda_j), \qquad (8.13)$$
where $\delta > 0$ is a scalar steplength (to be discussed next). The starting value $\lambda_1$ can be set to the zero vector. The iteration (8.13) is continued until the gradient $R_\lambda(\beta, \lambda_j)$ is smaller than some prespecified tolerance.

Efficient convergence requires a good choice of steplength $\delta$. One method uses the following quadratic approximation. Set $\delta_0 = 0$, $\delta_1 = \frac{1}{2}$ and $\delta_2 = 1$. For $p = 0, 1, 2$, set
$$\lambda_p = \lambda_j - \delta_p\left(R_{\lambda\lambda}(\beta, \lambda_j)\right)^{-1} R_\lambda(\beta, \lambda_j)$$
$$R_p = R_n(\beta, \lambda_p).$$
A quadratic function can be fit exactly through these three points. The value of $\delta$ which minimizes this quadratic is
$$\hat\delta = \frac{R_2 + 3R_0 - 4R_1}{4R_2 + 4R_0 - 8R_1},$$
yielding the steplength to be plugged into (8.13).

A complication is that $\lambda$ must be constrained so that $0 \le p_i \le 1$, which holds if
$$n\left(1 + \lambda' g_i(\beta)\right) \ge 1 \qquad (8.14)$$
for all $i$. If (8.14) fails, the stepsize $\delta$ needs to be decreased.

Outer Loop

The outer loop is the minimization (8.6). This can be done by the modified Newton method described in the previous section. The gradient for (8.6) is
$$L_\beta = -\frac{\partial}{\partial\beta} R_n(\beta) = -\frac{\partial}{\partial\beta} R_n(\beta, \lambda(\beta)) = -R_\beta - \lambda_\beta' R_\lambda = -R_\beta,$$
since $R_\lambda(\beta, \lambda) = 0$ at $\lambda = \lambda(\beta)$, where
$$\lambda_\beta = \frac{\partial}{\partial\beta'}\lambda(\beta) = -R_{\lambda\lambda}^{-1} R_{\lambda\beta},$$
the second equality following from the implicit function theorem applied to $R_\lambda(\beta, \lambda(\beta)) = 0$.

The Hessian for (8.6) is
$$L_{\beta\beta} = -\frac{\partial^2}{\partial\beta\,\partial\beta'} R_n(\beta)
= -\frac{\partial}{\partial\beta'}\left[R_\beta(\beta, \lambda(\beta)) + \lambda_\beta' R_\lambda(\beta, \lambda(\beta))\right]
= -\left(R_{\beta\beta} + R_{\lambda\beta}'\lambda_\beta + \lambda_\beta' R_{\lambda\beta} + \lambda_\beta' R_{\lambda\lambda}\lambda_\beta\right)
= R_{\lambda\beta}' R_{\lambda\lambda}^{-1} R_{\lambda\beta} - R_{\beta\beta}.$$
It is not guaranteed that $L_{\beta\beta} > 0$. If not, the eigenvalues of $L_{\beta\beta}$ should be adjusted so that all are positive. The Newton iteration rule is
$$\beta_{j+1} = \beta_j - \delta\, L_{\beta\beta}^{-1} L_\beta$$
where $\delta$ is a scalar stepsize, and the rule is iterated until convergence.
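The quadratic steplength rule above is short enough to state directly in code. This is a minimal sketch of that one formula only (the function name is hypothetical); the fallback when the fitted quadratic is not convex is an added assumption, not part of the text.

```python
def quadratic_steplength(R0, R1, R2):
    """Fit a quadratic through (0, R0), (1/2, R1), (1, R2) and return its minimizer,
    used as the steplength delta in the modified Newton iteration (8.13)."""
    denom = 4.0 * R2 + 4.0 * R0 - 8.0 * R1
    if denom <= 0.0:                 # quadratic not convex: use a conservative step
        return 0.5
    return (R2 + 3.0 * R0 - 4.0 * R1) / denom
```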
8.6 Technical Proofs

Proof of Theorem 8.2.1. $(\hat\beta, \hat\lambda)$ jointly solve
$$0 = \frac{\partial}{\partial\lambda} R_n(\beta, \lambda) = -\sum_{i=1}^n \frac{g_i(\hat\beta)}{1 + \hat\lambda' g_i(\hat\beta)} \qquad (8.15)$$
$$0 = \frac{\partial}{\partial\beta} R_n(\beta, \lambda) = -\sum_{i=1}^n \frac{G_i(\hat\beta)'\hat\lambda}{1 + \hat\lambda' g_i(\hat\beta)}. \qquad (8.16)$$
Let $G_n = \frac{1}{n}\sum_{i=1}^n G_i(\beta_0)$, $\bar{g}_n = \frac{1}{n}\sum_{i=1}^n g_i(\beta_0)$ and $\Omega_n = \frac{1}{n}\sum_{i=1}^n g_i(\beta_0)\, g_i(\beta_0)'$.

Expanding (8.16) around $\beta = \beta_0$ and $\lambda = \lambda_0 = 0$ yields
$$0 \simeq G_n'\left(\hat\lambda - \lambda_0\right). \qquad (8.17)$$
Expanding (8.15) around $\beta = \beta_0$ and $\lambda = \lambda_0 = 0$ yields
$$0 \simeq -\bar{g}_n - G_n\left(\hat\beta - \beta_0\right) + \Omega_n\hat\lambda. \qquad (8.18)$$
Premultiplying by $G_n'\Omega_n^{-1}$ and using (8.17) yields
$$0 \simeq -G_n'\Omega_n^{-1}\bar{g}_n - G_n'\Omega_n^{-1}G_n\left(\hat\beta - \beta_0\right) + G_n'\Omega_n^{-1}\Omega_n\hat\lambda
= -G_n'\Omega_n^{-1}\bar{g}_n - G_n'\Omega_n^{-1}G_n\left(\hat\beta - \beta_0\right).$$
Solving for $\hat\beta$ and using the WLLN and CLT yields
$$\sqrt{n}\left(\hat\beta - \beta_0\right) \simeq -\left(G_n'\Omega_n^{-1}G_n\right)^{-1}G_n'\Omega_n^{-1}\sqrt{n}\,\bar{g}_n \qquad (8.19)$$
$$\to_d -\left(G'\Omega^{-1}G\right)^{-1}G'\Omega^{-1} N(0, \Omega) = N(0, V).$$
Solving (8.18) for $\hat\lambda$ and using (8.19) yields
$$\sqrt{n}\,\hat\lambda \simeq \Omega_n^{-1}\left(I - G_n\left(G_n'\Omega_n^{-1}G_n\right)^{-1}G_n'\Omega_n^{-1}\right)\sqrt{n}\,\bar{g}_n \qquad (8.20)$$
$$\to_d \Omega^{-1}\left(I - G\left(G'\Omega^{-1}G\right)^{-1}G'\Omega^{-1}\right)N(0, \Omega) = \Omega^{-1} N(0, V_\lambda).$$
Furthermore, since
$$G'\left(I - \Omega^{-1}G\left(G'\Omega^{-1}G\right)^{-1}G'\right) = 0,$$
$\sqrt{n}(\hat\beta - \beta_0)$ and $\sqrt{n}\hat\lambda$ are asymptotically uncorrelated and hence independent.

Proof of Theorem 8.3.1. First, by a Taylor expansion, (8.19), and (8.20),
$$\frac{1}{\sqrt{n}}\sum_{i=1}^n g_i(\hat\beta) \simeq \sqrt{n}\left(\bar{g}_n + G_n\left(\hat\beta - \beta_0\right)\right)
\simeq \left(I - G_n\left(G_n'\Omega_n^{-1}G_n\right)^{-1}G_n'\Omega_n^{-1}\right)\sqrt{n}\,\bar{g}_n
\simeq \Omega_n\sqrt{n}\,\hat\lambda.$$
Second, since $\log(1 + u) \simeq u - u^2/2$ for $u$ small,
$$LR_n = \sum_{i=1}^n 2\log\left(1 + \hat\lambda' g_i(\hat\beta)\right)
\simeq 2\hat\lambda'\sum_{i=1}^n g_i(\hat\beta) - \hat\lambda'\sum_{i=1}^n g_i(\hat\beta)\, g_i(\hat\beta)'\hat\lambda
\simeq n\,\hat\lambda'\Omega_n\hat\lambda
\to_d N(0, V_\lambda)'\,\Omega^{-1}\, N(0, V_\lambda) = \chi^2_{\ell-k},$$
where the proof of the final equality is left as an exercise.
Chapter 9

Endogeneity

We say that there is endogeneity in the linear model $y_i = z_i'\beta + e_i$ if $\beta$ is the parameter of interest and $E(z_i e_i) \neq 0$. This cannot happen if $\beta$ is defined by linear projection, so requires a structural interpretation. The coefficient $\beta$ must have meaning separately from the definition of a conditional mean or linear projection.

Example: Measurement error in the regressor. Suppose that $(y_i, x_i^*)$ are joint random variables, $E(y_i \mid x_i^*) = x_i^{*\prime}\beta$ is linear, $\beta$ is the parameter of interest, and $x_i^*$ is not observed. Instead we observe $x_i = x_i^* + u_i$ where $u_i$ is a $k \times 1$ measurement error, independent of $y_i$ and $x_i^*$. Then
$$y_i = x_i^{*\prime}\beta + e_i = (x_i - u_i)'\beta + e_i = x_i'\beta + v_i$$
where
$$v_i = e_i - u_i'\beta.$$
The problem is that
$$E(x_i v_i) = E\left((x_i^* + u_i)\left(e_i - u_i'\beta\right)\right) = -E\left(u_i u_i'\right)\beta \neq 0$$
if $\beta \neq 0$ and $E(u_i u_i') \neq 0$. It follows that if $\hat\beta$ is the OLS estimator, then
$$\hat\beta \to_p \beta^* = \beta - \left(E\left(x_i x_i'\right)\right)^{-1} E\left(u_i u_i'\right)\beta \neq \beta.$$
This is called measurement error bias.
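The attenuation in the limit above is easy to see numerically. The following is a minimal simulation sketch (assumes a scalar regressor with no intercept; the variable names and chosen variances are illustrative only).

```python
import numpy as np

# Measurement-error (attenuation) bias in OLS, scalar case.
rng = np.random.default_rng(0)
n, beta = 100_000, 1.0
x_star = rng.normal(size=n)                  # true regressor x*
y = x_star * beta + rng.normal(size=n)       # E(y | x*) = x* beta
x = x_star + rng.normal(scale=0.5, size=n)   # observed regressor with error u
beta_ols = (x @ y) / (x @ x)                 # OLS of y on x
# beta_ols is close to beta * var(x*)/(var(x*)+var(u)) = 1/1.25 = 0.8, not 1.0
print(beta_ols)
```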
Example: Supply and Demand. The variables $q_i$ and $p_i$ (quantity and price) are determined jointly by the demand equation
$$q_i = -\beta_1 p_i + e_{1i}$$
and the supply equation
$$q_i = \beta_2 p_i + e_{2i}.$$
Assume that $e_i = \begin{pmatrix} e_{1i} \\ e_{2i} \end{pmatrix}$ is iid, $E e_i = 0$, $\beta_1 + \beta_2 = 1$ and $E e_i e_i' = I_2$ (the latter for simplicity). The question is: if we regress $q_i$ on $p_i$, what happens?

It is helpful to solve for $q_i$ and $p_i$ in terms of the errors. In matrix notation,
$$\begin{pmatrix} 1 & \beta_1 \\ 1 & -\beta_2 \end{pmatrix}\begin{pmatrix} q_i \\ p_i \end{pmatrix} = \begin{pmatrix} e_{1i} \\ e_{2i} \end{pmatrix}$$
so
$$\begin{pmatrix} q_i \\ p_i \end{pmatrix} = \begin{pmatrix} 1 & \beta_1 \\ 1 & -\beta_2 \end{pmatrix}^{-1}\begin{pmatrix} e_{1i} \\ e_{2i} \end{pmatrix} = \begin{pmatrix} \beta_2 & \beta_1 \\ 1 & -1 \end{pmatrix}\begin{pmatrix} e_{1i} \\ e_{2i} \end{pmatrix} = \begin{pmatrix} \beta_2 e_{1i} + \beta_1 e_{2i} \\ e_{1i} - e_{2i} \end{pmatrix}.$$
The projection of $q_i$ on $p_i$ yields
$$q_i = \beta^* p_i + \varepsilon_i, \qquad E(p_i\varepsilon_i) = 0,$$
where
$$\beta^* = \frac{E(p_i q_i)}{E\left(p_i^2\right)} = \frac{\beta_2 - \beta_1}{2}.$$
Hence if it is estimated by OLS, $\hat\beta \to_p \beta^*$, which does not equal either $\beta_1$ or $\beta_2$. This is called simultaneous equations bias.
9.1 Instrumental Variables

Let the equation of interest be
$$y_i = z_i'\beta + e_i \qquad (9.1)$$
where $z_i$ is $k \times 1$, and assume that $E(z_i e_i) \neq 0$, so there is endogeneity. We call (9.1) the structural equation. In matrix notation, this can be written as
$$y = Z\beta + e. \qquad (9.2)$$
Any solution to the problem of endogeneity requires additional information, which we call instruments.

Definition 9.1.1 The $\ell \times 1$ random vector $x_i$ is an instrumental variable for (9.1) if $E(x_i e_i) = 0$.

In a typical set-up, some regressors in $z_i$ will be uncorrelated with $e_i$ (for example, at least the intercept). Thus we make the partition
$$z_i = \begin{pmatrix} z_{1i} \\ z_{2i} \end{pmatrix}\begin{matrix} k_1 \\ k_2 \end{matrix} \qquad (9.3)$$
where $E(z_{1i}e_i) = 0$ yet $E(z_{2i}e_i) \neq 0$. We call $z_{1i}$ exogenous and $z_{2i}$ endogenous. By the above definition, $z_{1i}$ is an instrumental variable for (9.1), so should be included in $x_i$. So we have the partition
$$x_i = \begin{pmatrix} z_{1i} \\ x_{2i} \end{pmatrix}\begin{matrix} k_1 \\ \ell_2 \end{matrix} \qquad (9.4)$$
where $z_{1i} = x_{1i}$ are the included exogenous variables, and $x_{2i}$ are the excluded exogenous variables. That is, $x_{2i}$ are variables which could be included in the equation for $y_i$ (in the sense that they are uncorrelated with $e_i$) yet can be excluded, as they would have true zero coefficients in the equation.

The model is just-identified if $\ell = k$ (i.e., if $\ell_2 = k_2$) and over-identified if $\ell > k$ (i.e., if $\ell_2 > k_2$).

We have noted that any solution to the problem of endogeneity requires instruments. This does not mean that valid instruments actually exist.
9.2 Reduced Form

The reduced form relationship between the variables or "regressors" $z_i$ and the instruments $x_i$ is found by linear projection. Let
$$\Gamma = E\left(x_i x_i'\right)^{-1} E\left(x_i z_i'\right)$$
be the $\ell \times k$ matrix of coefficients from a projection of $z_i$ on $x_i$, and define
$$u_i = z_i - \Gamma' x_i$$
as the projection error. Then the reduced form linear relationship between $z_i$ and $x_i$ is
$$z_i = \Gamma' x_i + u_i. \qquad (9.5)$$
In matrix notation, we can write (9.5) as
$$Z = X\Gamma + U \qquad (9.6)$$
where $U$ is $n \times k$.

By construction,
$$E(x_i u_i') = 0,$$
so (9.5) is a projection and can be estimated by OLS:
$$Z = X\hat\Gamma + \hat{U}, \qquad \hat\Gamma = \left(X'X\right)^{-1}\left(X'Z\right).$$
Substituting (9.6) into (9.2), we find
$$y = (X\Gamma + U)\beta + e = X\lambda + v, \qquad (9.7)$$
where
$$\lambda = \Gamma\beta \qquad (9.8)$$
and
$$v = U\beta + e.$$
Observe that
$$E(x_i v_i) = E\left(x_i u_i'\right)\beta + E(x_i e_i) = 0.$$
Thus (9.7) is a projection equation and may be estimated by OLS. This is
$$y = X\hat\lambda + \hat{v}, \qquad \hat\lambda = \left(X'X\right)^{-1}\left(X'y\right).$$
The equation (9.7) is the reduced form for $y$. (9.6) and (9.7) together are the reduced form equations for the system
$$y = X\lambda + v$$
$$Z = X\Gamma + U.$$
As we showed above, OLS yields the reduced-form estimates $\left(\hat\lambda, \hat\Gamma\right)$.
9.3 Identification

The structural parameter $\beta$ relates to $(\lambda, \Gamma)$ through (9.8). The parameter $\beta$ is identified, meaning that it can be recovered from the reduced form, if
$$\operatorname{rank}(\Gamma) = k. \qquad (9.9)$$
Assume that (9.9) holds. If $\ell = k$, then $\beta = \Gamma^{-1}\lambda$. If $\ell > k$, then for any $W > 0$, $\beta = \left(\Gamma'W\Gamma\right)^{-1}\Gamma'W\lambda$.

If (9.9) is not satisfied, then $\beta$ cannot be recovered from $(\lambda, \Gamma)$. Note that a necessary (although not sufficient) condition for (9.9) is $\ell \ge k$.

Since $X$ and $Z$ have the common variables $X_1$, we can rewrite some of the expressions. Using (9.3) and (9.4) to make the matrix partitions $X = [X_1, X_2]$ and $Z = [X_1, Z_2]$, we can partition $\Gamma$ as
$$\Gamma = \begin{pmatrix} \Gamma_{11} & \Gamma_{12} \\ \Gamma_{21} & \Gamma_{22} \end{pmatrix} = \begin{pmatrix} I & \Gamma_{12} \\ 0 & \Gamma_{22} \end{pmatrix},$$
and (9.6) can be rewritten as
$$Z_1 = X_1$$
$$Z_2 = X_1\Gamma_{12} + X_2\Gamma_{22} + U_2. \qquad (9.10)$$
$\beta$ is identified if $\operatorname{rank}(\Gamma) = k$, which is true if and only if $\operatorname{rank}(\Gamma_{22}) = k_2$ (by the upper-diagonal structure of $\Gamma$). Thus the key to identification of the model rests on the $\ell_2 \times k_2$ matrix $\Gamma_{22}$ in (9.10).
9.4 Estimation

The model can be written as
$$y_i = z_i'\beta + e_i, \qquad E(x_i e_i) = 0,$$
or
$$E g_i(\beta) = 0, \qquad g_i(\beta) = x_i\left(y_i - z_i'\beta\right).$$
This is a moment condition model. Appropriate estimators include GMM and EL. The estimators and distribution theory developed in Chapters 7 and 8 directly apply. Recall that the GMM estimator, for given weight matrix $W_n$, is
$$\hat\beta = \left(Z'X W_n X'Z\right)^{-1} Z'X W_n X'y.$$
9.5 Special Cases: IV and 2SLS

If the model is just-identified, so that $k = \ell$, then the formula for GMM simplifies. We find that
$$\hat\beta = \left(Z'X W_n X'Z\right)^{-1} Z'X W_n X'y
= \left(X'Z\right)^{-1} W_n^{-1}\left(Z'X\right)^{-1} Z'X W_n X'y
= \left(X'Z\right)^{-1} X'y.$$
This estimator is often called the instrumental variables estimator (IV) of $\beta$, where $X$ is used as an instrument for $Z$. Observe that the weight matrix $W_n$ has disappeared. In the just-identified case, the weight matrix plays no role. This is also the MME estimator of $\beta$, and the EL estimator.

Another interpretation stems from the fact that since $\beta = \Gamma^{-1}\lambda$, we can construct the Indirect Least Squares (ILS) estimator:
$$\hat\beta = \hat\Gamma^{-1}\hat\lambda
= \left(\left(X'X\right)^{-1}\left(X'Z\right)\right)^{-1}\left(\left(X'X\right)^{-1}\left(X'y\right)\right)
= \left(X'Z\right)^{-1}\left(X'X\right)\left(X'X\right)^{-1}\left(X'y\right)
= \left(X'Z\right)^{-1}\left(X'y\right),$$
which again is the IV estimator.

Recall that the optimal weight matrix is an estimate of the inverse of $\Omega = E\left(x_i x_i' e_i^2\right)$. In the special case that $E\left(e_i^2 \mid x_i\right) = \sigma^2$ (homoskedasticity), then $\Omega = E(x_i x_i')\sigma^2 \propto E(x_i x_i')$, suggesting the weight matrix $W_n = (X'X)^{-1}$. Using this choice, the GMM estimator equals
$$\hat\beta_{2SLS} = \left(Z'X\left(X'X\right)^{-1}X'Z\right)^{-1} Z'X\left(X'X\right)^{-1}X'y.$$
This is called the two-stage-least squares (2SLS) estimator. It was originally proposed by Theil (1953) and Basmann (1957), and is the classic estimator for linear equations with instruments. Under the homoskedasticity assumption, the 2SLS estimator is efficient GMM, but otherwise it is inefficient.

It is useful to observe that writing
$$P = X\left(X'X\right)^{-1}X', \qquad \hat{Z} = PZ = X\hat\Gamma,$$
then
$$\hat\beta = \left(Z'PZ\right)^{-1} Z'Py = \left(\hat{Z}'\hat{Z}\right)^{-1}\hat{Z}'y.$$
The source of the "two-stage" name is that the estimator can be computed as follows:

First, regress $Z$ on $X$, vis. $\hat\Gamma = (X'X)^{-1}(X'Z)$ and $\hat{Z} = X\hat\Gamma = PZ$.

Second, regress $y$ on $\hat{Z}$, vis. $\hat\beta = \left(\hat{Z}'\hat{Z}\right)^{-1}\hat{Z}'y$.

It is useful to scrutinize the projection $\hat{Z}$. Recall, $Z = [Z_1, Z_2]$ and $X = [Z_1, X_2]$. Then
$$\hat{Z} = \left[\hat{Z}_1, \hat{Z}_2\right] = [PZ_1, PZ_2] = [Z_1, PZ_2] = \left[Z_1, \hat{Z}_2\right],$$
since $Z_1$ lies in the span of $X$. Thus in the second stage, we regress $y$ on $Z_1$ and $\hat{Z}_2$. So only the endogenous variables $Z_2$ are replaced by their fitted values:
$$\hat{Z}_2 = X_1\hat\Gamma_{12} + X_2\hat\Gamma_{22}.$$
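A minimal computational sketch of 2SLS follows (hypothetical function name; it assumes the exogenous regressors $Z_1$ are included among the columns of $X$, as in the partition above). It uses the projection form $\hat\beta = (Z'PZ)^{-1}Z'Py$, which is numerically identical to running the two stages explicitly.

```python
import numpy as np

def tsls(y, Z, X):
    """Two-stage least squares via the projection formula.
    First stage: Zhat = P Z = X (X'X)^{-1} X'Z.  Second stage: (Zhat'Z)^{-1} Zhat'y."""
    PZ = X @ np.linalg.solve(X.T @ X, X.T @ Z)   # fitted values of the regressors
    return np.linalg.solve(PZ.T @ Z, PZ.T @ y)

# Equivalent direct GMM formula with weight matrix (X'X)^{-1}:
#   beta_2sls = (Z'X (X'X)^{-1} X'Z)^{-1} Z'X (X'X)^{-1} X'y
```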
9.6 Bekker Asymptotics

Bekker (1994) used an alternative asymptotic framework to analyze the finite-sample bias in the 2SLS estimator. Here we present a simplified version of one of his results. In our notation, the model is
$$y = Z\beta + e \qquad (9.11)$$
$$Z = X\Gamma + U \qquad (9.12)$$
$$\xi = (e, U)$$
$$E(\xi \mid X) = 0$$
$$E\left(\xi'\xi \mid X\right) = S.$$
As before, $X$ is $n \times \ell$, so there are $\ell$ instruments.

First, let's analyze the approximate bias of OLS applied to (9.11). Using (9.12),
$$E\left(\frac{1}{n}Z'e\right) = E(z_i e_i) = \Gamma'E(x_i e_i) + E(u_i e_i) = s_{21}$$
and
$$E\left(\frac{1}{n}Z'Z\right) = E\left(z_i z_i'\right) = \Gamma'E\left(x_i x_i'\right)\Gamma + E\left(u_i x_i'\right)\Gamma + \Gamma'E\left(x_i u_i'\right) + E\left(u_i u_i'\right) = \Gamma'Q\Gamma + S_{22}$$
where $Q = E(x_i x_i')$. Hence by a first-order approximation
$$E\left(\hat\beta_{OLS} - \beta\right) \approx \left(E\left(\frac{1}{n}Z'Z\right)\right)^{-1} E\left(\frac{1}{n}Z'e\right) = \left(\Gamma'Q\Gamma + S_{22}\right)^{-1} s_{21}, \qquad (9.13)$$
which is zero only when $s_{21} = 0$ (when $Z$ is exogenous).

We now derive a similar result for the 2SLS estimator,
$$\hat\beta_{2SLS} = \left(Z'PZ\right)^{-1}\left(Z'Py\right).$$
Let $P = X(X'X)^{-1}X'$. By the spectral decomposition of an idempotent matrix, $P = H\Lambda H'$ where $\Lambda = \operatorname{diag}(I_\ell, 0)$. Let $q = H'\xi S^{-1/2}$, whose rows are mutually uncorrelated with unit covariance, and partition $q = (q_1'\; q_2')'$ where $q_1$ has $\ell$ rows. Hence
$$E\left(\frac{1}{n}\xi'P\xi \mid X\right) = \frac{1}{n}S^{1/2\prime} E\left(q'\Lambda q \mid X\right)S^{1/2} = \frac{1}{n}S^{1/2\prime}E\left(q_1'q_1\right)S^{1/2} = \frac{\ell}{n}S^{1/2\prime}S^{1/2} = \alpha S$$
where
$$\alpha = \frac{\ell}{n}.$$
Using (9.12) and this result,
$$\frac{1}{n}E\left(Z'Pe\right) = \frac{1}{n}E\left(\Gamma'X'e\right) + \frac{1}{n}E\left(U'Pe\right) = \alpha s_{21},$$
and
$$\frac{1}{n}E\left(Z'PZ\right) = \Gamma'E\left(x_i x_i'\right)\Gamma + \Gamma'E(x_i u_i') + E\left(u_i x_i'\right)\Gamma + \frac{1}{n}E\left(U'PU\right) = \Gamma'Q\Gamma + \alpha S_{22}.$$
Together
$$E\left(\hat\beta_{2SLS} - \beta\right) \approx \left(E\left(\frac{1}{n}Z'PZ\right)\right)^{-1} E\left(\frac{1}{n}Z'Pe\right) = \alpha\left(\Gamma'Q\Gamma + \alpha S_{22}\right)^{-1} s_{21}. \qquad (9.14)$$
In general this is non-zero, except when $s_{21} = 0$ (when $Z$ is exogenous). It is also close to zero when $\alpha \simeq 0$. Bekker (1994) pointed out that it also has the reverse implication — that when $\alpha = \ell/n$ is large, the bias in the 2SLS estimator will be large. Indeed as $\alpha \to 1$, the expression in (9.14) approaches that in (9.13), indicating that the bias in 2SLS approaches that of OLS as the number of instruments increases.

Bekker (1994) showed further that under the alternative asymptotic approximation that $\alpha$ is fixed as $n \to \infty$ (so that the number of instruments goes to infinity proportionately with sample size), the expression in (9.14) is the probability limit of $\hat\beta_{2SLS} - \beta$.
9.7 Identification Failure

Recall the reduced form equation
$$Z_2 = X_1\Gamma_{12} + X_2\Gamma_{22} + U_2.$$
The parameter $\beta$ fails to be identified if $\Gamma_{22}$ has deficient rank. The consequences of identification failure for inference are quite severe.

Take the simplest case where $k = \ell = 1$ (so there is no $X_1$). Then the model may be written as
$$y_i = z_i\beta + e_i$$
$$z_i = x_i\gamma + u_i$$
and $\Gamma_{22} = \gamma = E(x_i z_i)/E x_i^2$. We see that $\beta$ is identified if and only if $\gamma \neq 0$, which occurs when $E(z_i x_i) \neq 0$. Thus identification hinges on the existence of correlation between the excluded exogenous variable and the included endogenous variable.

Suppose this condition fails, so $E(z_i x_i) = 0$. Then by the CLT
$$\frac{1}{\sqrt{n}}\sum_{i=1}^n x_i e_i \to_d N_1 \sim N\left(0, E\left(x_i^2 e_i^2\right)\right) \qquad (9.15)$$
$$\frac{1}{\sqrt{n}}\sum_{i=1}^n x_i z_i = \frac{1}{\sqrt{n}}\sum_{i=1}^n x_i u_i \to_d N_2 \sim N\left(0, E\left(x_i^2 u_i^2\right)\right), \qquad (9.16)$$
therefore
$$\hat\beta - \beta = \frac{\frac{1}{\sqrt{n}}\sum_{i=1}^n x_i e_i}{\frac{1}{\sqrt{n}}\sum_{i=1}^n x_i z_i} \to_d \frac{N_1}{N_2} \sim \text{Cauchy},$$
since the ratio of two normals is Cauchy. This is particularly nasty, as the Cauchy distribution does not have a finite mean. This result carries over to more general settings, and was examined by Phillips (1989) and Choi and Phillips (1992).

Suppose that identification does not completely fail, but is weak. This occurs when $\Gamma_{22}$ is full rank, but small. This can be handled in an asymptotic analysis by modeling it as local-to-zero, viz.
$$\Gamma_{22} = n^{-1/2}C,$$
where $C$ is a full rank matrix. The $n^{-1/2}$ is picked because it provides just the right balancing to allow a rich distribution theory.

To see the consequences, once again take the simple case $k = \ell = 1$. Here, the instrument $x_i$ is weak for $z_i$ if
$$\gamma = n^{-1/2}c.$$
Then (9.15) is unaffected, but (9.16) instead takes the form
$$\frac{1}{\sqrt{n}}\sum_{i=1}^n x_i z_i = \frac{1}{n}\sum_{i=1}^n x_i^2\, c + \frac{1}{\sqrt{n}}\sum_{i=1}^n x_i u_i \to_d Qc + N_2,$$
therefore
$$\hat\beta - \beta \to_d \frac{N_1}{Qc + N_2}.$$
As in the case of complete identification failure, we find that $\hat\beta$ is inconsistent for $\beta$ and the asymptotic distribution of $\hat\beta$ is non-normal. In addition, standard test statistics have non-standard distributions, meaning that inferences about parameters of interest can be misleading.

The distribution theory for this model was developed by Staiger and Stock (1997) and extended to nonlinear GMM estimation by Stock and Wright (2000). Further results on testing were obtained by Wang and Zivot (1998).

The bottom line is that it is highly desirable to avoid identification failure. Once again, the equation to focus on is the reduced form
$$Z_2 = X_1\Gamma_{12} + X_2\Gamma_{22} + U_2$$
and identification requires $\operatorname{rank}(\Gamma_{22}) = k_2$. If $k_2 = 1$, this requires $\Gamma_{22} \neq 0$, which is straightforward to assess using a hypothesis test on the reduced form. Therefore in the case of $k_2 = 1$ (one RHS endogenous variable), one constructive recommendation is to explicitly estimate the reduced form equation for $Z_2$, construct the test of $\Gamma_{22} = 0$, and at a minimum check that the test rejects $H_0 : \Gamma_{22} = 0$ (a code sketch of this check is given below).

When $k_2 > 1$, $\Gamma_{22} \neq 0$ is not sufficient for identification. It is not even sufficient that each column of $\Gamma_{22}$ is non-zero (each column corresponds to a distinct endogenous variable in $Z_2$). So while a minimal check is to test that each column of $\Gamma_{22}$ is non-zero, this cannot be interpreted as definitive proof that $\Gamma_{22}$ has full rank. Unfortunately, tests of deficient rank are difficult to implement. In any event, it appears reasonable to explicitly estimate and report the reduced form equations for $Z_2$, and attempt to assess the likelihood that $\Gamma_{22}$ has deficient rank.
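The following minimal sketch (hypothetical function name, homoskedastic covariance for simplicity) implements the recommended $k_2 = 1$ check: estimate the reduced form for the endogenous regressor by OLS and compute a Wald statistic for the excluded instruments.

```python
import numpy as np

def first_stage_test(z2, X1, X2):
    """OLS of the endogenous regressor z2 on (X1, X2) and a Wald statistic for
    H0: Gamma22 = 0 (coefficients on the excluded instruments X2).
    Under H0 the statistic is approximately chi-square with X2.shape[1] degrees
    of freedom; dividing by that number gives the usual first-stage F."""
    X = np.column_stack([X1, X2])
    n, k = X.shape
    l2 = X2.shape[1]
    XtX_inv = np.linalg.inv(X.T @ X)
    coef = XtX_inv @ X.T @ z2
    resid = z2 - X @ coef
    s2 = resid @ resid / (n - k)
    V = s2 * XtX_inv
    g22 = coef[-l2:]
    W = g22 @ np.linalg.solve(V[-l2:, -l2:], g22)
    return coef, W
```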
9.8 Exercises

1. Consider the single equation model
$$y_i = z_i\beta + e_i,$$
where $y_i$ and $z_i$ are both real-valued ($1 \times 1$). Let $\hat\beta$ denote the IV estimator of $\beta$ using as an instrument a dummy variable $d_i$ (takes only the values 0 and 1). Find a simple expression for the IV estimator in this context.

2. In the linear model
$$y_i = z_i'\beta + e_i, \qquad E(e_i \mid z_i) = 0,$$
suppose $\sigma_i^2 = E\left(e_i^2 \mid z_i\right)$ is known. Show that the GLS estimator of $\beta$ can be written as an IV estimator using some instrument $x_i$. (Find an expression for $x_i$.)

3. Take the linear model
$$y = Z\beta + e.$$
Let the OLS estimator for $\beta$ be $\hat\beta$ and the OLS residual be $\hat{e} = y - Z\hat\beta$. Let the IV estimator for $\beta$ using some instrument $X$ be $\tilde\beta$ and the IV residual be $\tilde{e} = y - Z\tilde\beta$. If $X$ is indeed endogenous, will IV "fit" better than OLS, in the sense that $\tilde{e}'\tilde{e} < \hat{e}'\hat{e}$, at least in large samples?

4. The reduced form between the regressors $z_i$ and instruments $x_i$ takes the form
$$z_i = \Gamma'x_i + u_i \qquad \text{or} \qquad Z = X\Gamma + U,$$
where $z_i$ is $k \times 1$, $x_i$ is $\ell \times 1$, $Z$ is $n \times k$, $X$ is $n \times \ell$, $U$ is $n \times k$, and $\Gamma$ is $\ell \times k$. The parameter $\Gamma$ is defined by the population moment condition
$$E\left(x_i u_i'\right) = 0.$$
Show that the method of moments estimator for $\Gamma$ is $\hat\Gamma = (X'X)^{-1}(X'Z)$.

5. In the structural model
$$y = Z\beta + e$$
$$Z = X\Gamma + U$$
with $\Gamma$ $\ell \times k$, $\ell \ge k$, we claim that $\beta$ is identified (can be recovered from the reduced form) if $\operatorname{rank}(\Gamma) = k$. Explain why this is true. That is, show that if $\operatorname{rank}(\Gamma) < k$ then $\beta$ cannot be identified.

6. Take the linear model
$$y_i = x_i\beta + e_i, \qquad E(e_i \mid x_i) = 0,$$
where $x_i$ and $\beta$ are $1 \times 1$.
(a) Show that $E(x_i e_i) = 0$ and $E\left(x_i^2 e_i\right) = 0$. Is $z_i = (x_i \;\; x_i^2)'$ a valid instrumental variable for estimation of $\beta$?
(b) Define the 2SLS estimator of $\beta$, using $z_i$ as an instrument for $x_i$. How does this differ from OLS?
(c) Find the efficient GMM estimator of $\beta$ based on the moment condition
$$E\left(z_i\left(y_i - x_i\beta\right)\right) = 0.$$
Does this differ from 2SLS and/or OLS?

7. Suppose that price and quantity are determined by the intersection of the linear demand and supply curves
$$\text{Demand:} \quad Q = a_0 + a_1 P + a_2 Y + e_1$$
$$\text{Supply:} \quad Q = b_0 + b_1 P + b_2 W + e_2$$
where income ($Y$) and wage ($W$) are determined outside the market. In this model, are the parameters identified?

8. The data file card.dat is taken from David Card "Using Geographic Variation in College Proximity to Estimate the Return to Schooling" in Aspects of Labour Market Behavior (1995). There are 2215 observations with 29 variables, listed in card.pdf. We want to estimate a wage equation
$$\log(Wage) = \beta_0 + \beta_1 Educ + \beta_2 Exper + \beta_3 Exper^2 + \beta_4 South + \beta_5 Black + e$$
where $Educ$ = Education (Years), $Exper$ = Experience (Years), and $South$ and $Black$ are regional and racial dummy variables.
(a) Estimate the model by OLS. Report estimates and standard errors.
(b) Now treat Education as endogenous, and the remaining variables as exogenous. Estimate the model by 2SLS, using the instrument near4, a dummy indicating that the observation lives near a 4-year college. Report estimates and standard errors.
(c) Re-estimate by 2SLS (report estimates and standard errors) adding three additional instruments: near2 (a dummy indicating that the observation lives near a 2-year college), fatheduc (the education, in years, of the father) and motheduc (the education, in years, of the mother).
(d) Re-estimate the model by efficient GMM. I suggest that you use the 2SLS estimates as the first-step to get the weight matrix, and then calculate the GMM estimator from this weight matrix without further iteration. Report the estimates and standard errors.
(e) Calculate and report the J statistic for overidentification.
(f) Discuss your findings.
Chapter 10

Univariate Time Series

A time series $y_t$ is a process observed in sequence over time, $t = 1, \ldots, T$. To indicate the dependence on time, we adopt new notation, and use the subscript $t$ to denote the individual observation, and $T$ to denote the number of observations.

Because of the sequential nature of time series, we expect that $y_t$ and $y_{t-1}$ are not independent, so classical assumptions are not valid.

We can separate time series into two categories: univariate ($y_t \in \mathbb{R}$ is scalar) and multivariate ($y_t \in \mathbb{R}^m$ is vector-valued). The primary model for univariate time series is autoregressions (ARs). The primary model for multivariate time series is vector autoregressions (VARs).

10.1 Stationarity and Ergodicity

Definition 10.1.1 $y_t$ is covariance (weakly) stationary if
$$E(y_t) = \mu$$
is independent of $t$, and
$$\operatorname{cov}(y_t, y_{t-k}) = \gamma(k)$$
is independent of $t$ for all $k$. $\gamma(k)$ is called the autocovariance function, and
$$\rho(k) = \gamma(k)/\gamma(0) = \operatorname{corr}(y_t, y_{t-k})$$
is the autocorrelation function.

Definition 10.1.2 $y_t$ is strictly stationary if the joint distribution of $(y_t, \ldots, y_{t-k})$ is independent of $t$ for all $k$.

Definition 10.1.3 A stationary time series is ergodic if $\gamma(k) \to 0$ as $k \to \infty$.

The following two theorems are essential to the analysis of stationary time series. Their proofs are rather difficult, however.

Theorem 10.1.1 If $y_t$ is strictly stationary and ergodic and $x_t = f(y_t, y_{t-1}, \ldots)$ is a random variable, then $x_t$ is strictly stationary and ergodic.

Theorem 10.1.2 (Ergodic Theorem). If $y_t$ is strictly stationary and ergodic and $E|y_t| < \infty$, then as $T \to \infty$,
$$\frac{1}{T}\sum_{t=1}^T y_t \to_p E(y_t).$$

This allows us to consistently estimate parameters using time-series moments:
The sample mean:
$$\hat\mu = \frac{1}{T}\sum_{t=1}^T y_t.$$
The sample autocovariance:
$$\hat\gamma(k) = \frac{1}{T}\sum_{t=1}^T (y_t - \hat\mu)(y_{t-k} - \hat\mu).$$
The sample autocorrelation:
$$\hat\rho(k) = \frac{\hat\gamma(k)}{\hat\gamma(0)}.$$

Theorem 10.1.3 If $y_t$ is strictly stationary and ergodic and $E y_t^2 < \infty$, then as $T \to \infty$,
1. $\hat\mu \to_p E(y_t)$;
2. $\hat\gamma(k) \to_p \gamma(k)$;
3. $\hat\rho(k) \to_p \rho(k)$.

A proof is given in Section 10.13.
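As a quick computational companion to these estimators, the following is a minimal sketch (hypothetical function name) of the sample autocovariance and autocorrelation using the $1/T$ convention above.

```python
import numpy as np

def sample_acf(y, k):
    """Sample autocovariance gamma_hat(k) and autocorrelation rho_hat(k)."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    mu = y.mean()
    gamma0 = ((y - mu) ** 2).sum() / T
    gammak = ((y[k:] - mu) * (y[:T - k] - mu)).sum() / T
    return gammak, gammak / gamma0
```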
10.2 Autoregressions

In time-series, the series $\{\ldots, y_1, y_2, \ldots, y_T, \ldots\}$ are jointly random. We consider the conditional expectation
$$E(y_t \mid \mathcal{F}_{t-1})$$
where $\mathcal{F}_{t-1} = \{y_{t-1}, y_{t-2}, \ldots\}$ is the past history of the series.

An autoregressive (AR) model specifies that only a finite number of past lags matter:
$$E(y_t \mid \mathcal{F}_{t-1}) = E(y_t \mid y_{t-1}, \ldots, y_{t-k}).$$
A linear AR model (the most common type used in practice) specifies linearity:
$$E(y_t \mid \mathcal{F}_{t-1}) = \alpha + \rho_1 y_{t-1} + \rho_2 y_{t-2} + \cdots + \rho_k y_{t-k}.$$
Letting
$$e_t = y_t - E(y_t \mid \mathcal{F}_{t-1}),$$
then we have the autoregressive model
$$y_t = \alpha + \rho_1 y_{t-1} + \rho_2 y_{t-2} + \cdots + \rho_k y_{t-k} + e_t$$
$$E(e_t \mid \mathcal{F}_{t-1}) = 0.$$
The last property defines a special time-series process.

Definition 10.2.1 $e_t$ is a martingale difference sequence (MDS) if $E(e_t \mid \mathcal{F}_{t-1}) = 0$.

Regression errors are naturally a MDS. Some time-series processes may be a MDS as a consequence of optimizing behavior. For example, some versions of the life-cycle hypothesis imply that either changes in consumption, or consumption growth rates, should be a MDS. Most asset pricing models imply that asset returns should be the sum of a constant plus a MDS.

The MDS property for the regression error plays the same role in a time-series regression as does the conditional mean-zero property for the regression error in a cross-section regression. In fact, it is even more important in the time-series context, as it is difficult to derive distribution theories without this property.

A useful property of a MDS is that $e_t$ is uncorrelated with any function of the lagged information $\mathcal{F}_{t-1}$. Thus for $k > 0$, $E(y_{t-k}e_t) = 0$.
10.3 Stationarity of AR(1) Process

A mean-zero AR(1) is
$$y_t = \rho y_{t-1} + e_t.$$
Assume that $e_t$ is iid, $E(e_t) = 0$ and $E e_t^2 = \sigma^2 < \infty$.

By back-substitution, we find
$$y_t = e_t + \rho e_{t-1} + \rho^2 e_{t-2} + \cdots = \sum_{k=0}^\infty \rho^k e_{t-k}.$$
Loosely speaking, this series converges if the sequence $\rho^k e_{t-k}$ gets small as $k \to \infty$. This occurs when $|\rho| < 1$.

Theorem 10.3.1 If $|\rho| < 1$ then $y_t$ is strictly stationary and ergodic.

We can compute the moments of $y_t$ using the infinite sum:
$$E y_t = \sum_{k=0}^\infty \rho^k E(e_{t-k}) = 0$$
$$\operatorname{var}(y_t) = \sum_{k=0}^\infty \rho^{2k}\operatorname{var}(e_{t-k}) = \frac{\sigma^2}{1 - \rho^2}.$$
If the equation for $y_t$ has an intercept, the above results are unchanged, except that the mean of $y_t$ can be computed from the relationship
$$E y_t = \alpha + \rho E y_{t-1},$$
and solving for $E y_t = E y_{t-1}$ we find $E y_t = \alpha/(1 - \rho)$.
10.4 Lag Operator

An algebraic construct which is useful for the analysis of autoregressive models is the lag operator.

Definition 10.4.1 The lag operator L satisfies $L y_t = y_{t-1}$.

Defining $L^2 = LL$, we see that $L^2 y_t = L y_{t-1} = y_{t-2}$. In general, $L^k y_t = y_{t-k}$.

The AR(1) model can be written in the format
$$y_t - \rho y_{t-1} = e_t$$
or
$$(1 - \rho L)\, y_t = e_t.$$
The operator $\rho(L) = (1 - \rho L)$ is a polynomial in the operator $L$. We say that the root of the polynomial is $1/\rho$, since $\rho(z) = 0$ when $z = 1/\rho$. We call $\rho(L)$ the autoregressive polynomial of $y_t$.

From Theorem 10.3.1, an AR(1) is stationary iff $|\rho| < 1$. Note that an equivalent way to say this is that an AR(1) is stationary iff the root of the autoregressive polynomial is larger than one (in absolute value).
10.5 Stationarity of AR(k)

The AR(k) model is
$$y_t = \rho_1 y_{t-1} + \rho_2 y_{t-2} + \cdots + \rho_k y_{t-k} + e_t.$$
Using the lag operator,
$$y_t - \rho_1 L y_t - \rho_2 L^2 y_t - \cdots - \rho_k L^k y_t = e_t,$$
or
$$\rho(L)\, y_t = e_t$$
where
$$\rho(L) = 1 - \rho_1 L - \rho_2 L^2 - \cdots - \rho_k L^k.$$
We call $\rho(L)$ the autoregressive polynomial of $y_t$.

The Fundamental Theorem of Algebra says that any polynomial can be factored as
$$\rho(z) = \left(1 - \lambda_1^{-1}z\right)\left(1 - \lambda_2^{-1}z\right)\cdots\left(1 - \lambda_k^{-1}z\right)$$
where the $\lambda_1, \ldots, \lambda_k$ are the complex roots of $\rho(z)$, which satisfy $\rho(\lambda_j) = 0$.

We know that an AR(1) is stationary iff the absolute value of the root of its autoregressive polynomial is larger than one. For an AR(k), the requirement is that all roots are larger than one. Let $|\lambda|$ denote the modulus of a complex number $\lambda$.

Theorem 10.5.1 The AR(k) is strictly stationary and ergodic if and only if $|\lambda_j| > 1$ for all $j$.

One way of stating this is that "all roots lie outside the unit circle."

If one of the roots equals 1, we say that $\rho(L)$, and hence $y_t$, "has a unit root." This is a special case of non-stationarity, and is of great interest in applied time series.
10.6 Estimation

Let
$$x_t = \begin{pmatrix} 1 & y_{t-1} & y_{t-2} & \cdots & y_{t-k} \end{pmatrix}'$$
$$\beta = \begin{pmatrix} \alpha & \rho_1 & \rho_2 & \cdots & \rho_k \end{pmatrix}'.$$
Then the model can be written as
$$y_t = x_t'\beta + e_t.$$
The OLS estimator is
$$\hat\beta = \left(X'X\right)^{-1}X'y.$$
To study $\hat\beta$, it is helpful to define the process $u_t = x_t e_t$. Note that $u_t$ is a MDS, since
$$E(u_t \mid \mathcal{F}_{t-1}) = E(x_t e_t \mid \mathcal{F}_{t-1}) = x_t E(e_t \mid \mathcal{F}_{t-1}) = 0.$$
By Theorem 10.1.1, it is also strictly stationary and ergodic. Thus
$$\frac{1}{T}\sum_{t=1}^T x_t e_t = \frac{1}{T}\sum_{t=1}^T u_t \to_p E(u_t) = 0. \qquad (10.1)$$
The vector $x_t$ is strictly stationary and ergodic, and by Theorem 10.1.1, so is $x_t x_t'$. Thus by the Ergodic Theorem,
$$\frac{1}{T}\sum_{t=1}^T x_t x_t' \to_p E\left(x_t x_t'\right) = Q.$$
Combined with (10.1) and the continuous mapping theorem, we see that
$$\hat\beta = \beta + \left(\frac{1}{T}\sum_{t=1}^T x_t x_t'\right)^{-1}\left(\frac{1}{T}\sum_{t=1}^T x_t e_t\right) \to_p \beta + Q^{-1}\,0 = \beta.$$
We have shown the following:

Theorem 10.6.1 If the AR(k) process $y_t$ is strictly stationary and ergodic and $E y_t^2 < \infty$, then $\hat\beta \to_p \beta$ as $T \to \infty$.
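The estimator is just OLS on a lag matrix, which the following minimal sketch (hypothetical function name) makes explicit; the lag construction is the only step requiring care.

```python
import numpy as np

def ar_ols(y, k):
    """Estimate an AR(k) with intercept by OLS: regress y_t on (1, y_{t-1}, ..., y_{t-k}).
    Returns (beta_hat, residuals)."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    X = np.column_stack([np.ones(T - k)] +
                        [y[k - j: T - j] for j in range(1, k + 1)])   # lag j column
    yy = y[k:]
    beta = np.linalg.solve(X.T @ X, X.T @ yy)
    return beta, yy - X @ beta
```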
10.7 Asymptotic Distribution

Theorem 10.7.1 (MDS CLT). If $u_t$ is a strictly stationary and ergodic MDS and $E(u_t u_t') = \Omega < \infty$, then as $T \to \infty$,
$$\frac{1}{\sqrt{T}}\sum_{t=1}^T u_t \to_d N(0, \Omega).$$
Since $x_t e_t$ is a MDS, we can apply Theorem 10.7.1 to see that
$$\frac{1}{\sqrt{T}}\sum_{t=1}^T x_t e_t \to_d N(0, \Omega),$$
where
$$\Omega = E\left(x_t x_t' e_t^2\right).$$

Theorem 10.7.2 If the AR(k) process $y_t$ is strictly stationary and ergodic and $E y_t^4 < \infty$, then as $T \to \infty$,
$$\sqrt{T}\left(\hat\beta - \beta\right) \to_d N\left(0, Q^{-1}\Omega Q^{-1}\right).$$
This is identical in form to the asymptotic distribution of OLS in cross-section regression. The implication is that asymptotic inference is the same. In particular, the asymptotic covariance matrix is estimated just as in the cross-section case.
10.8 Bootstrap for Autoregressions

In the non-parametric bootstrap, we constructed the bootstrap sample by randomly resampling from the data values $\{y_t, x_t\}$. This creates an iid bootstrap sample. Clearly, this cannot work in a time-series application, as this imposes inappropriate independence.

Briefly, there are two popular methods to implement bootstrap resampling for time-series data.

Method 1: Model-Based (Parametric) Bootstrap. (A code sketch of this recursion is given at the end of this section.)

1. Estimate $\hat\beta$ and residuals $\hat{e}_t$.
2. Fix an initial condition $(y_{-k+1}, y_{-k+2}, \ldots, y_0)$.
3. Simulate iid draws $e_t^*$ from the empirical distribution of the residuals $\{\hat{e}_1, \ldots, \hat{e}_T\}$.
4. Create the bootstrap series $y_t^*$ by the recursive formula
$$y_t^* = \hat\alpha + \hat\rho_1 y_{t-1}^* + \hat\rho_2 y_{t-2}^* + \cdots + \hat\rho_k y_{t-k}^* + e_t^*.$$

This construction imposes homoskedasticity on the errors $e_t^*$, which may be different than the properties of the actual $e_t$. It also presumes that the AR(k) structure is the truth.

Method 2: Block Resampling

1. Divide the sample into $T/m$ blocks of length $m$.
2. Resample complete blocks. For each simulated sample, draw $T/m$ blocks.
3. Paste the blocks together to create the bootstrap time-series $y_t^*$.
4. This allows for arbitrary stationary serial correlation, heteroskedasticity, and for model misspecification.
5. The results may be sensitive to the block length, and the way that the data are partitioned into blocks.
6. May not work well in small samples.
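As promised above, here is a minimal sketch of the Method 1 recursion (hypothetical function name; assumes $\hat\beta = (\hat\alpha, \hat\rho_1, \ldots, \hat\rho_k)$ and that `y_init` supplies the $k$ initial conditions with the most recent value last).

```python
import numpy as np

def ar_bootstrap_series(beta_hat, resid, y_init, T, rng):
    """Model-based bootstrap for an AR(k): draw e*_t iid from the empirical
    residuals and build y*_t recursively from the estimated coefficients."""
    k = len(beta_hat) - 1
    alpha, rho = beta_hat[0], beta_hat[1:]
    e_star = rng.choice(resid, size=T, replace=True)
    y_star = list(y_init[-k:])
    for t in range(T):
        lags = y_star[-1: -k - 1: -1]                # y*_{t-1}, ..., y*_{t-k}
        y_star.append(alpha + float(np.dot(rho, lags)) + e_star[t])
    return np.array(y_star[k:])

# Usage sketch: rng = np.random.default_rng(0)
# y_boot = ar_bootstrap_series(beta_hat, e_hat, y[:k], len(y), rng)
```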
10.9 Trend Stationarity
$$y_t = \mu_0 + \mu_1 t + S_t \qquad (10.2)$$
$$S_t = \rho_1 S_{t-1} + \rho_2 S_{t-2} + \cdots + \rho_k S_{t-k} + e_t, \qquad (10.3)$$
or
$$y_t = \alpha_0 + \alpha_1 t + \rho_1 y_{t-1} + \rho_2 y_{t-2} + \cdots + \rho_k y_{t-k} + e_t. \qquad (10.4)$$
There are two essentially equivalent ways to estimate the autoregressive parameters $(\rho_1, \ldots, \rho_k)$:

- You can estimate (10.4) by OLS.
- You can estimate (10.2)-(10.3) sequentially by OLS. That is, first estimate (10.2), get the residual $\hat{S}_t$, and then perform regression (10.3) replacing $S_t$ with $\hat{S}_t$. This procedure is sometimes called Detrending.

The reason why these two procedures are (essentially) the same is the Frisch-Waugh-Lovell theorem.

Seasonal Effects

There are three popular methods to deal with seasonal data.

- Include dummy variables for each season. This presumes that seasonality does not change over the sample.
- Use "seasonally adjusted" data. The seasonal factor is typically estimated by a two-sided weighted average of the data for that season in neighboring years. Thus the seasonally adjusted data is a "filtered" series. This is a flexible approach which can extract a wide range of seasonal factors. The seasonal adjustment, however, also alters the time-series correlations of the data.
- First apply a seasonal differencing operator. If $s$ is the number of seasons (typically $s = 4$ or $s = 12$),
$$\Delta_s y_t = y_t - y_{t-s},$$
or the season-to-season change. The series $\Delta_s y_t$ is clearly free of seasonality. But the long-run trend is also eliminated, and perhaps this was of relevance.
10.10 Testing for Omitted Serial Correlation

For simplicity, let the null hypothesis be an AR(1):
$$y_t = \alpha + \rho y_{t-1} + u_t. \qquad (10.5)$$
We are interested in whether the error $u_t$ is serially correlated. We model this as an AR(1):
$$u_t = \theta u_{t-1} + e_t \qquad (10.6)$$
with $e_t$ a MDS. The hypothesis of no omitted serial correlation is
$$H_0 : \theta = 0$$
$$H_1 : \theta \neq 0.$$
We want to test $H_0$ against $H_1$.

To combine (10.5) and (10.6), we take (10.5) and lag the equation once:
$$y_{t-1} = \alpha + \rho y_{t-2} + u_{t-1}.$$
We then multiply this by $\theta$ and subtract from (10.5), to find
$$y_t - \theta y_{t-1} = \alpha - \theta\alpha + \rho y_{t-1} - \theta\rho y_{t-2} + u_t - \theta u_{t-1},$$
or
$$y_t = \alpha(1 - \theta) + (\rho + \theta)\, y_{t-1} - \theta\rho\, y_{t-2} + e_t = \text{AR}(2).$$
Thus under $H_0$, $y_t$ is an AR(1), and under $H_1$ it is an AR(2). $H_0$ may be expressed as the restriction that the coefficient on $y_{t-2}$ is zero.

An appropriate test of $H_0$ against $H_1$ is therefore a Wald test that the coefficient on $y_{t-2}$ is zero. (A simple exclusion test.)

In general, if the null hypothesis is that $y_t$ is an AR(k), and the alternative is that the error is an AR(m), this is the same as saying that under the alternative $y_t$ is an AR(k+m), and this is equivalent to the restriction that the coefficients on $y_{t-k-1}, \ldots, y_{t-k-m}$ are jointly zero. An appropriate test is the Wald test of this restriction.
10.11 Model Selection

What is the appropriate choice of $k$ in practice? This is a problem of model selection.

One approach to model selection is to choose $k$ based on Wald tests.

Another is to minimize the AIC or BIC information criterion, e.g.
$$AIC(k) = \log\hat\sigma^2(k) + \frac{2k}{T},$$
where $\hat\sigma^2(k)$ is the estimated residual variance from an AR(k). (A code sketch of this rule follows below.)

One ambiguity in defining the AIC criterion is that the sample available for estimation changes as $k$ changes. (If you increase $k$, you need more initial conditions.) This can induce strange behavior in the AIC. The best remedy is to fix an upper value $\bar{k}$, and then reserve the first $\bar{k}$ observations as initial conditions, and then estimate the models AR(1), AR(2), ..., AR($\bar{k}$) on this (unified) sample.
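The following minimal sketch (hypothetical function name) implements the AIC rule on the common sample recommended above, so that all candidate models are compared on the same observations.

```python
import numpy as np

def aic_lag_selection(y, kmax):
    """Choose the AR lag by AIC(k) = log(sigma2_hat(k)) + 2k/T, estimating every
    AR(k), k = 1..kmax, on the sample that reserves the first kmax observations
    as initial conditions. Returns (best k, dict of AIC values)."""
    y = np.asarray(y, dtype=float)
    T_eff = len(y) - kmax
    aics = {}
    for k in range(1, kmax + 1):
        X = np.column_stack([np.ones(T_eff)] +
                            [y[kmax - j: len(y) - j] for j in range(1, k + 1)])
        yy = y[kmax:]
        res = yy - X @ np.linalg.solve(X.T @ X, X.T @ yy)
        aics[k] = np.log(res @ res / T_eff) + 2 * k / T_eff
    return min(aics, key=aics.get), aics
```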
10.12 Autoregressive Unit Roots

The AR(k) model is
$$\rho(L)\, y_t = \mu + e_t$$
$$\rho(L) = 1 - \rho_1 L - \cdots - \rho_k L^k.$$
As we discussed before, $y_t$ has a unit root when $\rho(1) = 0$, or
$$\rho_1 + \rho_2 + \cdots + \rho_k = 1.$$
In this case, $y_t$ is non-stationary. The ergodic theorem and MDS CLT do not apply, and test statistics are asymptotically non-normal.

A helpful way to write the equation is the so-called Dickey-Fuller reparameterization:
$$\Delta y_t = \mu + \alpha_0 y_{t-1} + \alpha_1\Delta y_{t-1} + \cdots + \alpha_{k-1}\Delta y_{t-(k-1)} + e_t. \qquad (10.7)$$
These models are equivalent linear transformations of one another. The DF parameterization is convenient because the parameter $\alpha_0$ summarizes the information about the unit root, since $\rho(1) = -\alpha_0$. To see this, observe that the lag polynomial for the $y_t$ computed from (10.7) is
$$(1 - L) - \alpha_0 L - \alpha_1\left(L - L^2\right) - \cdots - \alpha_{k-1}\left(L^{k-1} - L^k\right).$$
But this must equal $\rho(L)$, as the models are equivalent. Thus
$$\rho(1) = (1 - 1) - \alpha_0 - (1 - 1) - \cdots - (1 - 1) = -\alpha_0.$$
Hence, the hypothesis of a unit root in $y_t$ can be stated as
$$H_0 : \alpha_0 = 0.$$
Note that the model is stationary if $\alpha_0 < 0$. So the natural alternative is
$$H_1 : \alpha_0 < 0.$$
Under $H_0$, the model for $y_t$ is
$$\Delta y_t = \mu + \alpha_1\Delta y_{t-1} + \cdots + \alpha_{k-1}\Delta y_{t-(k-1)} + e_t,$$
which is an AR(k-1) in the first-difference $\Delta y_t$. Thus if $y_t$ has a (single) unit root, then $\Delta y_t$ is a stationary AR process. Because of this property, we say that if $y_t$ is non-stationary but $\Delta^d y_t$ is stationary, then $y_t$ is "integrated of order $d$," or $I(d)$. Thus a time series with a unit root is $I(1)$.

Since $\alpha_0$ is the parameter of a linear regression, the natural test statistic is the t-statistic for $H_0$ from OLS estimation of (10.7). Indeed, this is the most popular unit root test, and is called the Augmented Dickey-Fuller (ADF) test for a unit root.

It would seem natural to assess the significance of the ADF statistic using the normal table. However, under $H_0$, $y_t$ is non-stationary, so conventional normal asymptotics are invalid. An alternative asymptotic framework has been developed to deal with non-stationary data. We do not have the time to develop this theory in detail, but simply assert the main results.

Theorem 10.12.1 (Dickey-Fuller Theorem). Assume $\alpha_0 = 0$. As $T \to \infty$,
$$T\hat\alpha_0 \to_d \left(1 - \alpha_1 - \alpha_2 - \cdots - \alpha_{k-1}\right) DF_\alpha$$
$$ADF = \frac{\hat\alpha_0}{s(\hat\alpha_0)} \to_d DF_t.$$
The limit distributions $DF_\alpha$ and $DF_t$ are non-normal. They are skewed to the left, and have negative means.

The first result states that $\hat\alpha_0$ converges to its true value (of zero) at rate $T$, rather than the conventional rate of $T^{1/2}$. This is called a "super-consistent" rate of convergence.

The second result states that the t-statistic for $\hat\alpha_0$ converges to a limit distribution which is non-normal, but does not depend on the parameters $\alpha$. This distribution has been extensively tabulated, and may be used for testing the hypothesis $H_0$. Note: The standard error $s(\hat\alpha_0)$ is the conventional ("homoskedastic") standard error. But the theorem does not require an assumption of homoskedasticity. Thus the Dickey-Fuller test is robust to heteroskedasticity.

Since the alternative hypothesis is one-sided, the ADF test rejects $H_0$ in favor of $H_1$ when $ADF < c$, where $c$ is the critical value from the ADF table. If the test rejects $H_0$, this means that the evidence points to $y_t$ being stationary. If the test does not reject $H_0$, a common conclusion is that the data suggests that $y_t$ is non-stationary. This is not really a correct conclusion, however. All we can say is that there is insufficient evidence to conclude whether the data are stationary or not.

We have described the test for the setting with an intercept. Another popular setting includes as well a linear time trend. This model is
$$\Delta y_t = \mu_1 + \mu_2 t + \alpha_0 y_{t-1} + \alpha_1\Delta y_{t-1} + \cdots + \alpha_{k-1}\Delta y_{t-(k-1)} + e_t. \qquad (10.8)$$
This is natural when the alternative hypothesis is that the series is stationary about a linear time trend. If the series has a linear trend (e.g. GDP, Stock Prices), then the series itself is non-stationary, but it may be stationary around the linear time trend. In this context, it is a silly waste of time to fit an AR model to the level of the series without a time trend, as the AR model cannot conceivably describe this data. The natural solution is to include a time trend in the fitted OLS equation. When conducting the ADF test, this means that it is computed as the t-ratio for $\alpha_0$ from OLS estimation of (10.8).

If a time trend is included, the test procedure is the same, but different critical values are required. The ADF test has a different distribution when the time trend has been included, and a different table should be consulted.

Most texts include as well the critical values for the extreme polar case where the intercept has been omitted from the model. These are included for completeness (from a pedagogical perspective) but have no relevance for empirical practice where intercepts are always included.
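Because the ADF statistic is just a t-ratio from the regression (10.7), it is easy to compute directly. The following minimal sketch (hypothetical function name, intercept-only case) illustrates it; remember that the resulting statistic must be compared with Dickey-Fuller critical values, not the normal table.

```python
import numpy as np

def adf_stat(y, k):
    """ADF regression (10.7): regress dy_t on (1, y_{t-1}, dy_{t-1}, ..., dy_{t-k+1})
    and return the conventional t-ratio on y_{t-1}."""
    y = np.asarray(y, dtype=float)
    dy = np.diff(y)
    T = len(dy)
    X = np.column_stack(
        [np.ones(T - k), y[k:T]] +                      # intercept, y_{t-1}
        [dy[k - j: T - j] for j in range(1, k)])        # lagged differences
    yy = dy[k:]
    XtX_inv = np.linalg.inv(X.T @ X)
    b = XtX_inv @ X.T @ yy
    e = yy - X @ b
    s2 = e @ e / (len(yy) - X.shape[1])
    return b[1] / np.sqrt(s2 * XtX_inv[1, 1])
```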
10.13 Technical Proofs

Proof of Theorem 10.1.3. Part (1) is a direct consequence of the Ergodic Theorem. For Part (2), note that
$$\hat\gamma(k) = \frac{1}{T}\sum_{t=1}^T (y_t - \hat\mu)(y_{t-k} - \hat\mu)
= \frac{1}{T}\sum_{t=1}^T y_t y_{t-k} - \frac{1}{T}\sum_{t=1}^T y_t\hat\mu - \frac{1}{T}\sum_{t=1}^T y_{t-k}\hat\mu + \hat\mu^2.$$
By Theorem 10.1.1 above, the sequence $y_t y_{t-k}$ is strictly stationary and ergodic, and it has a finite mean by the assumption that $E y_t^2 < \infty$. Thus an application of the Ergodic Theorem yields
$$\frac{1}{T}\sum_{t=1}^T y_t y_{t-k} \to_p E(y_t y_{t-k}).$$
Thus
$$\hat\gamma(k) \to_p E(y_t y_{t-k}) - \mu^2 - \mu^2 + \mu^2 = E(y_t y_{t-k}) - \mu^2 = \gamma(k).$$
Part (3) follows by the continuous mapping theorem: $\hat\rho(k) = \hat\gamma(k)/\hat\gamma(0) \to_p \gamma(k)/\gamma(0) = \rho(k)$.
Chapter 11

Multivariate Time Series

A multivariate time series $y_t$ is a vector process $m \times 1$. Let $\mathcal{F}_{t-1} = (y_{t-1}, y_{t-2}, \ldots)$ be all lagged information at time $t$. The typical goal is to find the conditional expectation $E(y_t \mid \mathcal{F}_{t-1})$. Note that since $y_t$ is a vector, this conditional expectation is also a vector.

11.1 Vector Autoregressions (VARs)

A VAR model specifies that the conditional mean is a function of only a finite number of lags:
$$E(y_t \mid \mathcal{F}_{t-1}) = E\left(y_t \mid y_{t-1}, \ldots, y_{t-k}\right).$$
A linear VAR specifies that this conditional mean is linear in the arguments:
$$E\left(y_t \mid y_{t-1}, \ldots, y_{t-k}\right) = a_0 + A_1 y_{t-1} + A_2 y_{t-2} + \cdots + A_k y_{t-k}.$$
Observe that $a_0$ is $m \times 1$, and each of $A_1$ through $A_k$ are $m \times m$ matrices.

Defining the $m \times 1$ regression error
$$e_t = y_t - E(y_t \mid \mathcal{F}_{t-1}),$$
we have the VAR model
$$y_t = a_0 + A_1 y_{t-1} + A_2 y_{t-2} + \cdots + A_k y_{t-k} + e_t$$
$$E(e_t \mid \mathcal{F}_{t-1}) = 0.$$
Alternatively, defining the $mk + 1$ vector
$$x_t = \begin{pmatrix} 1 \\ y_{t-1} \\ y_{t-2} \\ \vdots \\ y_{t-k} \end{pmatrix}$$
and the $m \times (mk + 1)$ matrix
$$A = \begin{pmatrix} a_0 & A_1 & A_2 & \cdots & A_k \end{pmatrix},$$
then
$$y_t = A x_t + e_t.$$
The VAR model is a system of $m$ equations. One way to write this is to let $a_j'$ be the $j$th row of $A$. Then the VAR system can be written as the equations
$$y_{jt} = a_j' x_t + e_{jt}.$$
Unrestricted VARs were introduced to econometrics by Sims (1980).
11.2 Estimation

Consider the moment conditions
$$E(x_t e_{jt}) = 0,$$
$j = 1, \ldots, m$. These are implied by the VAR model, either as a regression, or as a linear projection.

The GMM estimator corresponding to these moment conditions is equation-by-equation OLS
$$\hat{a}_j = (X'X)^{-1}X'y_j.$$
An alternative way to compute this is as follows. Note that
$$\hat{a}_j' = y_j'X(X'X)^{-1}.$$
And if we stack these to create the estimate $\hat{A}$, we find
$$\hat{A} = \begin{pmatrix} y_1' \\ y_2' \\ \vdots \\ y_m' \end{pmatrix} X(X'X)^{-1} = Y'X(X'X)^{-1},$$
where
$$Y = \begin{pmatrix} y_1 & y_2 & \cdots & y_m \end{pmatrix}$$
is the $T \times m$ matrix of the stacked $y_t'$.

This (system) estimator is known as the SUR (Seemingly Unrelated Regressions) estimator, and was originally derived by Zellner (1962).
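Since the system estimator is just equation-by-equation OLS on a common regressor matrix, it can be written compactly; the following is a minimal sketch (hypothetical function name) for a VAR(k) with intercept.

```python
import numpy as np

def var_ols(Y, k):
    """Equation-by-equation OLS for a VAR(k). Y is T x m. Returns A_hat of
    dimension m x (mk+1), with columns ordered as (a0, A1, ..., Ak)."""
    Y = np.asarray(Y, dtype=float)
    T, m = Y.shape
    X = np.array([np.concatenate([[1.0]] + [Y[t - j] for j in range(1, k + 1)])
                  for t in range(k, T)])                 # (T-k) x (mk+1)
    Yk = Y[k:]                                           # (T-k) x m
    return np.linalg.solve(X.T @ X, X.T @ Yk).T          # m x (mk+1)
```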
11.3 Restricted VARs
The unrestricted VAR is a system of : equations, each with the same set of regressors. A
restricted VAR imposes restrictions on the system. For example, some regressors may be excluded
from some of the equations. Restrictions may be imposed on individual equations, or across equa-
tions. The GMM framework gives a convenient method to impose such restrictions on estimation.
11.4 Single Equation from a VAR
Often, we are only interested in a single equation out of a VAR system. This takes the form
j
)t
= u
t
)
i
t
+c
t
,
and i
t
consists of lagged values of j
)t
and the other j
t
|t
:. In this case, it is convenient to re-dene
the variables. Let j
t
= j
)t
, and z
t
be the other variables. Let c
t
= c
)t
and , = a
)
. Then the single
equation takes the form
j
t
= i
t
t
d +c
t
, (11.1)
and
i
t
=
_
_
1
t1

tI
z
t
t1
z
t
tI
_
t
_
.
This is just a conventional regression with time series data.
133
11.5 Testing for Omitted Serial Correlation
Consider the problem of testing for omitted serial correlation in equation (11.1). Suppose that
c
t
is an AR(1). Then
j
t
= i
t
t
d +c
t
c
t
= 0c
t1
+n
t
(11.2)
E(n
t
[ T
t1
) = 0.
Then the null and alternative are
H
0
: 0 = 0 H
1
: 0 ,= 0.
Take the equation j
t
= i
t
t
d +c
t
, and subtract o the equation once lagged multiplied by 0, to get
j
t
0j
t1
=
_
i
t
t
d +c
t
_
0
_
i
t
t1
d +c
t1
_
= i
t
t
d 0i
t1
d +c
t
0c
t1
,
or
j
t
= 0j
t1
+i
t
t
d +i
t
t1
_ +n
t
, (11.3)
which is a valid regression model.
So testing H
0
versus H
1
is equivalent to testing for the signicance of adding (j
t1
, i
t1
) to
the regression. This can be done by a Wald test. We see that an appropriate, general, and simple
way to test for omitted serial correlation is to test the signicance of extra lagged values of the
dependent variable and regressors.
You may have heard of the Durbin-Watson test for omitted serial correlation, which once was
very popular, and is still routinely reported by conventional regression packages. The DW test is
appropriate only when regression j
t
= i
t
t
d +c
t
is not dynamic (has no lagged values on the RHS),
and c
t
is iid N(0, o
2
). Otherwise it is invalid.
Another interesting fact is that (11.2) is a special case of (11.3), under the restriction = d0.
This restriction, which is called a common factor restriction, may be tested if desired. If valid,
the model (11.2) may be estimated by iterated GLS. (A simple version of this estimator is called
Cochrane-Orcutt.) Since the common factor restriction appears arbitrary, and is typically rejected
empirically, direct estimation of (11.2) is uncommon in recent applications.
11.6 Selection of Lag Length in a VAR

If you want a data-dependent rule to pick the lag length k in a VAR, you may either use a testing-based approach (using, for example, the Wald statistic), or an information criterion approach. The formulae for the AIC and BIC are

    AIC(k) = log det( \hat\Omega(k) ) + 2 p / T
    BIC(k) = log det( \hat\Omega(k) ) + p log(T) / T
    \hat\Omega(k) = (1/T) \sum_{t=1}^T \hat{e}_t(k) \hat{e}_t(k)'
    p = m(km + 1),

where p is the number of parameters in the model, and \hat{e}_t(k) is the OLS residual vector from the model with k lags. The log determinant is the criterion from the multivariate normal likelihood.
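The criteria are easy to compute once the VAR residuals are available. The sketch below (again using numpy; the common-sample convention and the function name are assumptions of the illustration) evaluates AIC(k) and BIC(k) for k = 1, ..., kmax so that the criteria are comparable across lag lengths.

    import numpy as np

    def var_ic(Y, kmax):
        """Return a list of (k, AIC, BIC) for VAR(k), k = 1, ..., kmax."""
        T, m = Y.shape
        out = []
        for k in range(1, kmax + 1):
            rows = [np.concatenate([[1.0], *[Y[t - j] for j in range(1, k + 1)]])
                    for t in range(kmax, T)]
            X = np.asarray(rows)
            Yk = Y[kmax:]
            A = np.linalg.lstsq(X, Yk, rcond=None)[0]
            E = Yk - X @ A                      # residual matrix, rows e_t(k)'
            n = len(Yk)
            Omega = E.T @ E / n
            p = m * (k * m + 1)
            logdet = np.linalg.slogdet(Omega)[1]
            out.append((k, logdet + 2 * p / n, logdet + p * np.log(n) / n))
        return out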
11.7 Granger Causality
Partition the data vector into (y_t, z_t). Define the two information sets

    F_{1t} = ( y_t, y_{t-1}, y_{t-2}, ... )
    F_{2t} = ( y_t, z_t, y_{t-1}, z_{t-1}, y_{t-2}, z_{t-2}, ... ).

The information set F_{1t} is generated only by the history of y_t, and the information set F_{2t} is generated by both y_t and z_t. The latter has more information.

We say that z_t does not Granger-cause y_t if

    E( y_t | F_{1,t-1} ) = E( y_t | F_{2,t-1} ).

That is, conditional on information in lagged y_t, lagged z_t does not help to forecast y_t. If this condition does not hold, then we say that z_t Granger-causes y_t.

The reason why we call this "Granger Causality" rather than "causality" is because this is not a physical or structural definition of causality. If z_t is some sort of forecast of the future, such as a futures price, then z_t may help to forecast y_t even though it does not "cause" y_t. This definition of causality was developed by Granger (1969) and Sims (1972).

In a linear VAR, the equation for y_t is

    y_t = \alpha + \rho_1 y_{t-1} + ... + \rho_k y_{t-k} + z_{t-1}'\gamma_1 + ... + z_{t-k}'\gamma_k + e_t.

In this equation, z_t does not Granger-cause y_t if and only if

    H_0 : \gamma_1 = \gamma_2 = ... = \gamma_k = 0.

This may be tested using an exclusion (Wald) test.

This idea can be applied to blocks of variables. That is, y_t and/or z_t can be vectors. The hypothesis can be tested by using the appropriate multivariate Wald test.

If it is found that z_t does not Granger-cause y_t, then we deduce that our time-series model of E( y_t | F_{t-1} ) does not require the use of z_t. Note, however, that z_t may still be useful to explain other features of y_t, such as the conditional variance.
11.8 Cointegration
The idea of cointegration is due to Granger (1981), and was articulated in detail by Engle and Granger (1987).

Definition 11.8.1  The m x 1 series y_t is cointegrated if y_t is I(1) yet there exists \beta, m x r, of rank r, such that z_t = \beta' y_t is I(0). The r vectors in \beta are called the cointegrating vectors.

If the series y_t is not cointegrated, then r = 0. If r = m, then y_t is I(0). For 0 < r < m, y_t is I(1) and cointegrated.

In some cases, it may be believed that \beta is known a priori. Often, \beta = (1, -1)'. For example, if y_t is a pair of interest rates, then \beta = (1, -1)' specifies that the spread (the difference in returns) is stationary. If y_t = ( log(Consumption), log(Income) )', then \beta = (1, -1)' specifies that log(Consumption/Income) is stationary.

In other cases, \beta may not be known.

If y_t is cointegrated with a single cointegrating vector (r = 1), then it turns out that \beta can be consistently estimated by an OLS regression of one component of y_t on the others. Thus write y_t = (Y_{1t}, Y_{2t}) and \beta = (\beta_1, \beta_2), and normalize \beta_1 = 1. Then \hat\beta_2 = (Y_2'Y_2)^{-1} Y_2'Y_1 converges in probability to \beta_2. Furthermore this estimation is super-consistent: T( \hat\beta_2 - \beta_2 ) converges in distribution to a limit random variable, as first shown by Stock (1987). This is not, in general, a good method to estimate \beta, but it is useful in the construction of alternative estimators and tests.

We are often interested in testing the hypothesis of no cointegration:

    H_0 : r = 0
    H_1 : r > 0.

Suppose that \beta is known, so z_t = \beta' y_t is known. Then under H_0, z_t is I(1), yet under H_1, z_t is I(0). Thus H_0 can be tested using a univariate ADF test on z_t.

When \beta is unknown, Engle and Granger (1987) suggested using an ADF test on the estimated residual \hat{z}_t = \hat\beta' y_t, from OLS of y_{1t} on y_{2t}. Their justification was Stock's result that \hat\beta is super-consistent under H_1. Under H_0, however, \hat\beta is not consistent, so the ADF critical values are not appropriate. The asymptotic distribution was worked out by Phillips and Ouliaris (1990).

When the data have time trends, it may be necessary to include a time trend in the estimated cointegrating regression. Whether or not the time trend is included, the asymptotic distribution of the test is affected by the presence of the time trend. The asymptotic distribution was worked out in B. Hansen (1992).
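For concreteness, here is a minimal sketch of the two steps just described: the first-stage cointegrating regression and an ADF-style regression on its residual. It is an illustration only; the resulting t-ratio must be compared with the Phillips-Ouliaris critical values (not standard normal or Dickey-Fuller ones), and the function and argument names are assumptions of the sketch.

    import numpy as np

    def engle_granger_stat(y1, y2, p=1):
        """Cointegrating regression of y1 on (1, y2), then an ADF-type
           regression of the residual change on its lagged level and p
           lagged changes.  Returns the t-ratio on the lagged level."""
        y2 = y2.reshape(-1, 1) if y2.ndim == 1 else y2
        Z = np.column_stack([np.ones(len(y1)), y2])
        bhat = np.linalg.lstsq(Z, y1, rcond=None)[0]
        z = y1 - Z @ bhat                       # estimated residual z_t
        dz = np.diff(z)
        rows = [np.concatenate([[z[t]], dz[t - p:t][::-1]])  # lagged level, lagged changes
                for t in range(p, len(dz))]
        X, dep = np.asarray(rows), dz[p:]
        rho = np.linalg.lstsq(X, dep, rcond=None)[0]
        e = dep - X @ rho
        s2 = e @ e / (len(dep) - X.shape[1])
        se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[0, 0])
        return rho[0] / se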
11.9 Cointegrated VARs
We can write a VAR as

    A(L) y_t = e_t
    A(L) = I - A_1 L - A_2 L^2 - ... - A_k L^k,

or alternatively as

    \Delta y_t = \Pi y_{t-1} + D(L) \Delta y_{t-1} + e_t,

where

    \Pi = -A(1) = -I + A_1 + A_2 + ... + A_k.

Theorem 11.9.1 (Granger Representation Theorem).  y_t is cointegrated with m x r \beta if and only if rank(\Pi) = r and \Pi = \alpha\beta' where \alpha is m x r, rank(\alpha) = r.

Thus cointegration imposes a restriction upon the parameters of a VAR. The restricted model can be written as

    \Delta y_t = \alpha\beta' y_{t-1} + D(L) \Delta y_{t-1} + e_t
    \Delta y_t = \alpha z_{t-1} + D(L) \Delta y_{t-1} + e_t.

If \beta is known, this can be estimated by OLS of \Delta y_t on z_{t-1} and the lags of \Delta y_t.

If \beta is unknown, then estimation is done by reduced rank regression, which is least-squares subject to the stated restriction. Equivalently, this is the MLE of the restricted parameters under the assumption that e_t is iid N(0, \Omega).

One difficulty is that \beta is not identified without normalization. When r = 1, we typically just normalize one element to equal unity. When r > 1, this does not work, and different authors have adopted different identification schemes.

In the context of a cointegrated VAR estimated by reduced rank regression, it is simple to test for cointegration by testing the rank of \Pi. These tests are constructed as likelihood ratio (LR) tests. As they were discovered by Johansen (1988, 1991, 1995), they are typically called the Johansen Max and Trace tests. Their asymptotic distributions are non-standard, and are similar to the Dickey-Fuller distributions.
Chapter 12
Limited Dependent Variables
A limited dependent variable y is one which takes a limited set of values. The most common cases are

    Binary:        y in {0, 1}
    Multinomial:   y in {0, 1, 2, ..., k}
    Integer:       y in {0, 1, 2, ...}
    Censored:      y in R_+

The traditional approach to the estimation of limited dependent variable (LDV) models is parametric maximum likelihood. A parametric model is constructed, allowing the construction of the likelihood function. A more modern approach is semi-parametric, eliminating the dependence on a parametric distributional assumption. We will discuss only the first (parametric) approach, due to time constraints. Parametric models still constitute the majority of LDV applications. If, however, you were to write a thesis involving LDV estimation, you would be advised to consider employing a semi-parametric estimation approach.

For the parametric approach, estimation is by MLE. A major practical issue is construction of the likelihood function.
12.1 Binary Choice
The dependent variable y_i in {0, 1}. This represents a Yes/No outcome. Given some regressors x_i, the goal is to describe P( y_i = 1 | x_i ), as this is the full conditional distribution.

The linear probability model specifies that

    P( y_i = 1 | x_i ) = x_i'\beta.

As P( y_i = 1 | x_i ) = E( y_i | x_i ), this yields the regression y_i = x_i'\beta + e_i, which can be estimated by OLS. However, the linear probability model does not impose the restriction that 0 <= P( y_i = 1 | x_i ) <= 1. Even so, estimation of a linear probability model is a useful starting point for subsequent analysis.

The standard alternative is to use a function of the form

    P( y_i = 1 | x_i ) = F( x_i'\beta ),

where F(.) is a known CDF, typically assumed to be symmetric about zero, so that F(u) = 1 - F(-u). The two standard choices for F are

    Logistic:  F(u) = (1 + e^{-u})^{-1}.
    Normal:    F(u) = \Phi(u).

If F is logistic, we call this the logit model, and if F is normal, we call this the probit model.

This model is identical to the latent variable model

    y_i^* = x_i'\beta + e_i,    e_i ~ F(.)
    y_i = 1 if y_i^* > 0, and y_i = 0 otherwise.

For then

    P( y_i = 1 | x_i ) = P( y_i^* > 0 | x_i )
                       = P( x_i'\beta + e_i > 0 | x_i )
                       = P( e_i > -x_i'\beta | x_i )
                       = 1 - F( -x_i'\beta )
                       = F( x_i'\beta ).

Estimation is by maximum likelihood. To construct the likelihood, we need the conditional distribution of an individual observation. Recall that if y is Bernoulli, such that P(y = 1) = p and P(y = 0) = 1 - p, then we can write the density of y as

    f(y) = p^y (1 - p)^{1-y},    y = 0, 1.

In the binary choice model, y_i is conditionally Bernoulli with P( y_i = 1 | x_i ) = p_i = F( x_i'\beta ). Thus the conditional density is

    f( y_i | x_i ) = p_i^{y_i} (1 - p_i)^{1-y_i} = F( x_i'\beta )^{y_i} ( 1 - F( x_i'\beta ) )^{1-y_i}.

Hence the log-likelihood function is

    log L(\beta) = \sum_{i=1}^n log f( y_i | x_i )
                 = \sum_{i=1}^n log [ F( x_i'\beta )^{y_i} ( 1 - F( x_i'\beta ) )^{1-y_i} ]
                 = \sum_{i=1}^n [ y_i log F( x_i'\beta ) + (1 - y_i) log( 1 - F( x_i'\beta ) ) ]
                 = \sum_{y_i=1} log F( x_i'\beta ) + \sum_{y_i=0} log( 1 - F( x_i'\beta ) ).

The MLE \hat\beta is the value of \beta which maximizes log L(\beta). Standard errors and test statistics are computed by asymptotic approximations. Details of such calculations are left to more advanced courses.
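As a practical illustration, the log-likelihood above can be maximized numerically. The sketch below uses scipy's general-purpose optimizer; the clipping of fitted probabilities and the function name are assumptions of this illustration rather than prescriptions of the text.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    def binary_mle(y, X, model="probit"):
        """Maximize sum_i [ y_i log F(x_i'b) + (1-y_i) log(1 - F(x_i'b)) ]."""
        F = norm.cdf if model == "probit" else (lambda u: 1.0 / (1.0 + np.exp(-u)))
        def negloglik(b):
            p = np.clip(F(X @ b), 1e-10, 1 - 1e-10)   # avoid log(0)
            return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
        res = minimize(negloglik, np.zeros(X.shape[1]), method="BFGS")
        return res.x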
12.2 Count Data
If y in {0, 1, 2, ...}, a typical approach is to employ Poisson regression. This model specifies that

    P( y_i = k | x_i ) = exp(-\lambda_i) \lambda_i^k / k!,    k = 0, 1, 2, ...
    \lambda_i = exp( x_i'\beta ).

The conditional density is the Poisson with parameter \lambda_i. The functional form for \lambda_i has been picked to ensure that \lambda_i > 0.

The log-likelihood function is

    log L(\beta) = \sum_{i=1}^n log f( y_i | x_i ) = \sum_{i=1}^n [ -exp( x_i'\beta ) + y_i x_i'\beta - log( y_i! ) ].

The MLE is the value \hat\beta which maximizes log L(\beta).

Since

    E( y_i | x_i ) = \lambda_i = exp( x_i'\beta )

is the conditional mean, this motivates the label Poisson "regression."

Also observe that the model implies that

    var( y_i | x_i ) = \lambda_i = exp( x_i'\beta ),

so the model imposes the restriction that the conditional mean and variance of y_i are the same. This may be considered restrictive. A generalization is the negative binomial.
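A minimal numerical sketch of the Poisson MLE follows, maximizing the log-likelihood written above with a numerical optimizer; the use of scipy and the function name are assumptions of the illustration.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import gammaln

    def poisson_mle(y, X):
        """Maximize sum_i [ -exp(x_i'b) + y_i x_i'b - log(y_i!) ]."""
        def negloglik(b):
            xb = X @ b
            return -np.sum(-np.exp(xb) + y * xb - gammaln(y + 1))
        res = minimize(negloglik, np.zeros(X.shape[1]), method="BFGS")
        return res.x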
12.3 Censored Data
The idea of "censoring" is that some data above or below a threshold are mis-reported at the threshold. Thus the model is that there is some latent process y_i^* with unbounded support, but we observe only

    y_i = y_i^*  if y_i^* >= 0
    y_i = 0      if y_i^* < 0.                                       (12.1)

(This is written for the case of the threshold being zero; any known value can substitute.) The observed data y_i therefore come from a mixed continuous/discrete distribution.

Censored models are typically applied when the data set has a meaningful proportion (say 5% or higher) of data at the boundary of the sample support. The censoring process may be explicit in data collection, or it may be a by-product of economic constraints.

An example of data-collection censoring is top-coding of income. In surveys, incomes above a threshold are typically reported at the threshold.

The first censored regression model was developed by Tobin (1958) to explain consumption of durable goods. Tobin observed that for many households, the consumption level (purchases) in a particular period was zero. He proposed the latent variable model

    y_i^* = x_i'\beta + e_i,    e_i ~ iid N(0, \sigma^2),

with the observed variable y_i generated by the censoring equation (12.1). This model (now called the Tobit) specifies that the latent (or ideal) value of consumption may be negative (the household would prefer to sell than buy). All that is reported is that the household purchased zero units of the good.

The naive approach to estimate \beta is to regress y_i on x_i. This does not work because regression estimates E( y_i | x_i ), not E( y_i^* | x_i ) = x_i'\beta, and the latter is of interest. Thus OLS will be biased for the parameter of interest \beta.

[Note: it is still possible to estimate E( y_i | x_i ) by LS techniques. The Tobit framework postulates that this is not inherently interesting, that the parameter \beta is defined by an alternative statistical structure.]

Consistent estimation will be achieved by the MLE. To construct the likelihood, observe that the probability of being censored is

    P( y_i = 0 | x_i ) = P( y_i^* < 0 | x_i )
                       = P( x_i'\beta + e_i < 0 | x_i )
                       = P( e_i/\sigma < -x_i'\beta/\sigma | x_i )
                       = \Phi( -x_i'\beta/\sigma ).

The conditional distribution function above zero is Gaussian:

    P( y_i = y | x_i ) = \int_0^y \sigma^{-1} \phi( (z - x_i'\beta)/\sigma ) dz,    y > 0.

Therefore, the density function can be written as

    f( y | x_i ) = \Phi( -x_i'\beta/\sigma )^{1(y=0)} [ \sigma^{-1} \phi( (y - x_i'\beta)/\sigma ) ]^{1(y>0)},

where 1(.) is the indicator function.

Hence the log-likelihood is a mixture of the probit and the normal:

    log L(\beta) = \sum_{i=1}^n log f( y_i | x_i )
                 = \sum_{y_i=0} log \Phi( -x_i'\beta/\sigma ) + \sum_{y_i>0} log [ \sigma^{-1} \phi( (y_i - x_i'\beta)/\sigma ) ].

The MLE is the value \hat\beta which maximizes log L(\beta).
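The Tobit log-likelihood can be maximized numerically in the same way as the binary-choice models. The sketch below parameterizes the scale as log(sigma) so that it stays positive during optimization; this reparameterization, the starting values and the function name are assumptions of the illustration.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    def tobit_mle(y, X):
        """Censored-regression (Tobit) MLE with censoring at zero."""
        d0 = (y == 0)
        def negloglik(theta):
            b, s = theta[:-1], np.exp(theta[-1])
            xb = X @ b
            ll0 = norm.logcdf(-xb[d0] / s)                          # censored obs
            ll1 = norm.logpdf((y[~d0] - xb[~d0]) / s) - np.log(s)   # uncensored obs
            return -(ll0.sum() + ll1.sum())
        theta0 = np.concatenate([np.zeros(X.shape[1]), [np.log(y.std() + 1e-8)]])
        res = minimize(negloglik, theta0, method="BFGS")
        return res.x[:-1], np.exp(res.x[-1])    # beta-hat, sigma-hat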
12.4 Sample Selection
The problem of sample selection arises when the sample is a non-random selection of potential observations. This occurs when the observed data are systematically different from the population of interest. For example, if you ask for volunteers for an experiment, and you wish to extrapolate the effects of the experiment to a general population, you should worry that the people who volunteer may be systematically different from the general population. This has great relevance for the evaluation of anti-poverty and job-training programs, where the goal is to assess the effect of training on the general population, not just on the volunteers.

A simple sample selection model can be written as the latent model

    y_i = x_i'\beta + e_{1i}
    T_i = 1( z_i'\gamma + e_{0i} > 0 ),

where 1(.) is the indicator function. The dependent variable y_i is observed if (and only if) T_i = 1. Else it is unobserved.

For example, y_i could be a wage, which can be observed only if a person is employed. The equation for T_i is an equation specifying the probability that the person is employed.

The model is often completed by specifying that the errors are jointly normal

    ( e_{0i}, e_{1i} )' ~ N( 0, [ 1  \rho ; \rho  \sigma^2 ] ).

It is presumed that we observe { x_i, z_i, T_i } for all observations.

Under the normality assumption,

    e_{1i} = \rho e_{0i} + v_i,

where v_i is independent of e_{0i} ~ N(0, 1). A useful fact about the standard normal distribution is that

    E( e_{0i} | e_{0i} > -x ) = \lambda(x) = \phi(x)/\Phi(x),

and the function \lambda(x) is called the inverse Mills ratio.

The naive estimator of \beta is OLS regression of y_i on x_i for those observations for which y_i is available. The problem is that this is equivalent to conditioning on the event { T_i = 1 }. However,

    E( e_{1i} | T_i = 1, z_i ) = E( e_{1i} | { e_{0i} > -z_i'\gamma }, z_i )
                               = \rho E( e_{0i} | { e_{0i} > -z_i'\gamma }, z_i ) + E( v_i | { e_{0i} > -z_i'\gamma }, z_i )
                               = \rho \lambda( z_i'\gamma ),

which is non-zero. Thus

    e_{1i} = \rho \lambda( z_i'\gamma ) + u_i,

where

    E( u_i | T_i = 1, z_i ) = 0.

Hence

    y_i = x_i'\beta + \rho \lambda( z_i'\gamma ) + u_i               (12.2)

is a valid regression equation for the observations for which T_i = 1.

Heckman (1979) observed that we could consistently estimate \beta and \rho from this equation, if \gamma were known. It is unknown, but also can be consistently estimated by a Probit model for selection. The "Heckit" estimator is thus calculated as follows.

    Estimate \hat\gamma from a Probit, using regressors z_i. The binary dependent variable is T_i.
    Estimate ( \hat\beta, \hat\rho ) from OLS of y_i on x_i and \lambda( z_i'\hat\gamma ).
    The OLS standard errors will be incorrect, as this is a two-step estimator. They can be corrected using a more complicated formula. Or, alternatively, by viewing the Probit/OLS estimation equations as a large joint GMM problem.

The Heckit estimator is frequently used to deal with problems of sample selection. However, the estimator is built on the assumption of normality, and the estimator can be quite sensitive to this assumption. Some modern econometric research is exploring how to relax the normality assumption.

The estimator can also work quite poorly if \lambda( z_i'\hat\gamma ) does not have much in-sample variation. This can happen if the Probit equation does not explain much about the selection choice. Another potential problem is that if z_i = x_i, then \lambda( z_i'\hat\gamma ) can be highly collinear with x_i, so the second-step OLS estimator will not be able to precisely estimate \beta. Based on this observation, it is typically recommended to find a valid exclusion restriction: a variable should be in z_i which is not in x_i. If this is valid, it will ensure that \lambda( z_i'\hat\gamma ) is not collinear with x_i, and hence improve the second stage estimator's precision.
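The two steps listed above are short to code. The sketch below (probit by numerical MLE, then OLS with the inverse Mills ratio added) is an illustration only: the function and variable names are assumptions, and the reported second-step standard errors would need the correction discussed in the text.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    def heckit(y, X, Z, T):
        """Two-step Heckit: probit of T on Z, then OLS of y on (X, lambda)."""
        def negll(g):                                       # probit log-likelihood
            p = np.clip(norm.cdf(Z @ g), 1e-10, 1 - 1e-10)
            return -np.sum(T * np.log(p) + (1 - T) * np.log(1 - p))
        ghat = minimize(negll, np.zeros(Z.shape[1]), method="BFGS").x
        lam = norm.pdf(Z @ ghat) / norm.cdf(Z @ ghat)       # inverse Mills ratio
        sel = (T == 1)
        W = np.column_stack([X[sel], lam[sel]])
        coef = np.linalg.lstsq(W, y[sel], rcond=None)[0]
        return coef[:-1], coef[-1], ghat                    # beta, coefficient on lambda, gamma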
Chapter 13
Panel Data
A panel is a set of observations on individuals, collected over time. An observation is the pair { y_{it}, x_{it} }, where the i subscript denotes the individual, and the t subscript denotes time. A panel may be balanced:

    { y_{it}, x_{it} } : t = 1, ..., T; i = 1, ..., n,

or unbalanced:

    { y_{it}, x_{it} } : for i = 1, ..., n,  t = t_i, ..., \bar{t}_i.
13.1 Individual-Effects Model

The standard panel data specification is that there is an individual-specific effect which enters linearly in the regression

    y_{it} = x_{it}'\beta + u_i + e_{it}.

The typical maintained assumptions are that the individuals i are mutually independent, that u_i and e_{it} are independent, that e_{it} is iid across individuals and time, and that e_{it} is uncorrelated with x_{it}.

OLS of y_{it} on x_{it} is called pooled estimation. It is consistent if

    E( x_{it} u_i ) = 0.                                             (13.1)

If this condition fails, then OLS is inconsistent. (13.1) fails if the individual-specific unobserved effect u_i is correlated with the observed explanatory variables x_{it}. This is often believed to be plausible if u_i is an omitted variable.

If (13.1) is true, however, OLS can be improved upon via a GLS technique. In either event, OLS appears a poor estimation choice.

Condition (13.1) is called the random effects hypothesis. It is a strong assumption, and most applied researchers try to avoid its use.
13.2 Fixed Effects

This is the most common technique for estimation of non-dynamic linear panel regressions.

The motivation is to allow u_i to be arbitrary, and to have arbitrary correlation with x_{it}. The goal is to eliminate u_i from the estimator, and thus achieve invariance.

There are several derivations of the estimator.

First, let

    d_{ij} = 1 if i = j, and 0 else,

and

    d_i = ( d_{i1}, ..., d_{in} )',

an n x 1 dummy vector with a "1" in the i'th place. Let

    u = ( u_1, ..., u_n )'.

Then note that

    u_i = d_i' u,

and

    y_{it} = x_{it}'\beta + d_i'u + e_{it}.                          (13.2)

Observe that

    E( e_{it} | x_{it}, d_i ) = 0,

so (13.2) is a valid regression, with d_i as a regressor along with x_{it}.

OLS on (13.2) yields the estimator ( \hat\beta, \hat{u} ). Conventional inference applies.

Observe that

    This is generally consistent.
    If x_{it} contains an intercept, it will be collinear with d_i, so the intercept is typically omitted from x_{it}.
    Any regressor in x_{it} which is constant over time for all individuals (e.g., their gender) will be collinear with d_i, so will have to be omitted.
    There are n + k regression parameters, which is quite large as typically n is very large.

Computationally, you do not want to actually implement conventional OLS estimation, as the parameter space is too large. OLS estimation of \beta proceeds by the FWL theorem. Stacking the observations together:

    y = X\beta + Du + e,

then by the FWL theorem,

    \hat\beta = ( X'(I - P_D)X )^{-1} ( X'(I - P_D)y )
              = ( X^{*'}X^* )^{-1} ( X^{*'}y^* ),

where

    y^* = y - D(D'D)^{-1}D'y
    X^* = X - D(D'D)^{-1}D'X.

Since the regression of y_{it} on d_i is a regression onto individual-specific dummies, the predicted value from these regressions is the individual-specific mean \bar{y}_i, and the residual is the demeaned value

    y^*_{it} = y_{it} - \bar{y}_i.

The fixed effects estimator \hat\beta is OLS of y^*_{it} on x^*_{it}, the dependent variable and regressors in deviation-from-mean form.

Another derivation of the estimator is to take the equation

    y_{it} = x_{it}'\beta + u_i + e_{it},

and then take individual-specific means by taking the average for the i'th individual:

    (1/T_i) \sum_{t=t_i}^{\bar{t}_i} y_{it} = (1/T_i) \sum_{t=t_i}^{\bar{t}_i} x_{it}'\beta + u_i + (1/T_i) \sum_{t=t_i}^{\bar{t}_i} e_{it},

or

    \bar{y}_i = \bar{x}_i'\beta + u_i + \bar{e}_i.

Subtracting, we find

    y^*_{it} = x^{*'}_{it}\beta + e^*_{it},

which is free of the individual effect u_i. The fixed effects estimator is OLS applied to this equation, the dependent variable and regressors in deviation-from-mean form.
13.3 Dynamic Panel Regression
A dynamic panel regression has a lagged dependent variable

    y_{it} = \alpha y_{i,t-1} + x_{it}'\beta + u_i + e_{it}.         (13.3)

This is a model suitable for studying dynamic behavior of individual agents.

Unfortunately, the fixed effects estimator is inconsistent, at least if T is held finite as n grows. This is because the sample mean of y_{i,t-1} is correlated with that of e_{it}.

The standard approach to estimate a dynamic panel is to combine first-differencing with IV or GMM. Taking first-differences of (13.3) eliminates the individual-specific effect:

    \Delta y_{it} = \alpha \Delta y_{i,t-1} + \Delta x_{it}'\beta + \Delta e_{it}.   (13.4)

However, if e_{it} is iid, then \Delta e_{it} will be correlated with \Delta y_{i,t-1}:

    E( \Delta y_{i,t-1} \Delta e_{it} ) = E( (y_{i,t-1} - y_{i,t-2})(e_{it} - e_{i,t-1}) ) = -E( y_{i,t-1} e_{i,t-1} ) = -\sigma_e^2.

So OLS on (13.4) will be inconsistent.

But if there are valid instruments, then IV or GMM can be used to estimate the equation. Typically, we use lags of the dependent variable, two periods back, as y_{i,t-2} is uncorrelated with \Delta e_{it}. Thus values of y_{i,t-k}, k >= 2, are valid instruments.

Hence a valid estimator of \alpha and \beta is to estimate (13.4) by IV using y_{i,t-2} as an instrument for \Delta y_{i,t-1} (which is just identified). Alternatively, GMM using y_{i,t-2} and y_{i,t-3} as instruments (which is overidentified, but loses a time-series observation).

A more sophisticated GMM estimator recognizes that for time-periods later in the sample, there are more instruments available, so the instrument list should be different for each equation. This is conveniently organized by the GMM principle, as this enables the moments from the different time-periods to be stacked together to create a list of all the moment conditions. A simple application of GMM yields the parameter estimates and standard errors.
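The just-identified IV estimator described above (sometimes associated with Anderson and Hsiao) can be sketched as follows; it assumes the data are sorted by individual and time, and the function and argument names are illustrative only.

    import numpy as np

    def dynamic_panel_iv(y, X, ids):
        """IV for Δy_it = α Δy_{i,t-1} + Δx_it'β + Δe_it,
           using y_{i,t-2} as the instrument for Δy_{i,t-1}."""
        Dy, W, Z = [], [], []
        for i in np.unique(ids):
            m = np.where(ids == i)[0]
            yi, Xi = y[m], X[m]
            for t in range(2, len(m)):
                Dy.append(yi[t] - yi[t - 1])
                W.append(np.concatenate([[yi[t - 1] - yi[t - 2]], Xi[t] - Xi[t - 1]]))
                Z.append(np.concatenate([[yi[t - 2]], Xi[t] - Xi[t - 1]]))
        Dy, W, Z = map(np.asarray, (Dy, W, Z))
        return np.linalg.solve(Z.T @ W, Z.T @ Dy)   # (Z'W)^{-1} Z'Δy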
Chapter 14
Nonparametrics
14.1 Kernel Density Estimation
Let X be a random variable with continuous distribution F(x) and density f(x) = (d/dx)F(x). The goal is to estimate f(x) from a random sample (X_1, ..., X_n). While F(x) can be estimated by the EDF \hat{F}(x) = n^{-1} \sum_{i=1}^n 1( X_i <= x ), we cannot define (d/dx)\hat{F}(x) since \hat{F}(x) is a step function. The standard nonparametric method to estimate f(x) is based on smoothing using a kernel.

While we are typically interested in estimating the entire function f(x), we can simply focus on the problem where x is a specific fixed number, and then see how the method generalizes to estimating the entire function.

Definition 2  K(u) is a second-order kernel function if it is a symmetric zero-mean density function.

Three common choices for kernels include the Gaussian

    K(u) = (1/\sqrt{2\pi}) exp( -u^2/2 ),

the Epanechnikov

    K(u) = (3/4)(1 - u^2) for |u| <= 1,    0 for |u| > 1,

and the Biweight or Quartic

    K(u) = (15/16)(1 - u^2)^2 for |u| <= 1,    0 for |u| > 1.

In practice, the choice between these three rarely makes a meaningful difference in the estimates.

The kernel functions are used to smooth the data. The amount of smoothing is controlled by the bandwidth h > 0. Let

    K_h(u) = (1/h) K( u/h )

be the kernel K rescaled by the bandwidth h. The kernel density estimator of f(x) is

    \hat{f}(x) = (1/n) \sum_{i=1}^n K_h( X_i - x ).

This estimator is the average of a set of weights. If a large number of the observations X_i are near x, then the weights are relatively large and \hat{f}(x) is larger. Conversely, if only a few X_i are near x, then the weights are small and \hat{f}(x) is small. The bandwidth h controls the meaning of "near".
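The estimator is a one-line computation. The sketch below evaluates a Gaussian-kernel density estimate on a grid of points; the vectorized layout and the function name are choices of this illustration.

    import numpy as np

    def kernel_density(x_grid, X, h):
        """Kernel density estimate f(x) = n^{-1} sum_i K_h(X_i - x),
           with a Gaussian kernel, evaluated at each point of x_grid."""
        u = (X[None, :] - x_grid[:, None]) / h          # (X_i - x)/h for each grid point
        K = np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)  # Gaussian kernel
        return K.mean(axis=1) / h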
Interestingly, \hat{f}(x) is a valid density. That is, \hat{f}(x) >= 0 for all x, and

    \int_{-\infty}^{\infty} \hat{f}(x) dx = \int_{-\infty}^{\infty} (1/n) \sum_{i=1}^n K_h( X_i - x ) dx
                                          = (1/n) \sum_{i=1}^n \int_{-\infty}^{\infty} K_h( X_i - x ) dx
                                          = (1/n) \sum_{i=1}^n \int_{-\infty}^{\infty} K(u) du = 1,

where the second-to-last equality makes the change-of-variables u = (X_i - x)/h.

We can also calculate the moments of the density \hat{f}(x). The mean is

    \int_{-\infty}^{\infty} x \hat{f}(x) dx = (1/n) \sum_{i=1}^n \int_{-\infty}^{\infty} x K_h( X_i - x ) dx
                                            = (1/n) \sum_{i=1}^n \int_{-\infty}^{\infty} ( X_i + uh ) K(u) du
                                            = (1/n) \sum_{i=1}^n X_i \int_{-\infty}^{\infty} K(u) du + (1/n) \sum_{i=1}^n h \int_{-\infty}^{\infty} u K(u) du
                                            = (1/n) \sum_{i=1}^n X_i,

the sample mean of the X_i, where the second-to-last equality used the change-of-variables u = (X_i - x)/h, which has Jacobian h.

The second moment of the estimated density is

    \int_{-\infty}^{\infty} x^2 \hat{f}(x) dx = (1/n) \sum_{i=1}^n \int_{-\infty}^{\infty} x^2 K_h( X_i - x ) dx
                                              = (1/n) \sum_{i=1}^n \int_{-\infty}^{\infty} ( X_i + uh )^2 K(u) du
                                              = (1/n) \sum_{i=1}^n X_i^2 + (2/n) \sum_{i=1}^n X_i h \int_{-\infty}^{\infty} u K(u) du + (1/n) \sum_{i=1}^n h^2 \int_{-\infty}^{\infty} u^2 K(u) du
                                              = (1/n) \sum_{i=1}^n X_i^2 + h^2 \sigma_K^2,

where

    \sigma_K^2 = \int_{-\infty}^{\infty} u^2 K(u) du

is the variance of the kernel. It follows that the variance of the density \hat{f}(x) is

    \int_{-\infty}^{\infty} x^2 \hat{f}(x) dx - ( \int_{-\infty}^{\infty} x \hat{f}(x) dx )^2 = (1/n) \sum_{i=1}^n X_i^2 + h^2 \sigma_K^2 - ( (1/n) \sum_{i=1}^n X_i )^2 = \hat\sigma^2 + h^2 \sigma_K^2.

Thus the variance of the estimated density is inflated by the factor h^2 \sigma_K^2 relative to the sample moment.
14.2 Asymptotic MSE for Kernel Estimates
For fixed x and bandwidth h observe that

    E K_h( X - x ) = \int_{-\infty}^{\infty} K_h( z - x ) f(z) dz = \int_{-\infty}^{\infty} K(u) f( x + hu ) du.

The second equality uses the change-of-variables u = (z - x)/h. The last expression shows that the expected value is an average of f(z) locally about x.

This integral (typically) is not analytically solvable, so we approximate it using a second-order Taylor expansion of f(x + hu) in the argument hu about hu = 0, which is valid as h -> 0. Thus

    f( x + hu ) ~ f(x) + f'(x) hu + (1/2) f''(x) h^2 u^2,

and therefore

    E K_h( X - x ) ~ \int_{-\infty}^{\infty} K(u) [ f(x) + f'(x) hu + (1/2) f''(x) h^2 u^2 ] du
                   = f(x) \int K(u) du + f'(x) h \int K(u) u du + (1/2) f''(x) h^2 \int K(u) u^2 du
                   = f(x) + (1/2) f''(x) h^2 \sigma_K^2.

The bias of \hat{f}(x) is then

    Bias(x) = E\hat{f}(x) - f(x) = (1/n) \sum_{i=1}^n E K_h( X_i - x ) - f(x) = (1/2) f''(x) h^2 \sigma_K^2.

We see that the bias of \hat{f}(x) at x depends on the second derivative f''(x). The sharper the derivative, the greater the bias. Intuitively, the estimator \hat{f}(x) smooths data local to X_i = x, so it is estimating a smoothed version of f(x). The bias results from this smoothing, and is larger the greater the curvature in f(x).

We now examine the variance of \hat{f}(x). Since it is an average of iid random variables, using first-order Taylor approximations and the fact that n^{-1} is of smaller order than (nh)^{-1},

    var( \hat{f}(x) ) = (1/n) var( K_h( X_i - x ) )
                      = (1/n) E K_h( X_i - x )^2 - (1/n) ( E K_h( X_i - x ) )^2
                      ~ (1/(n h^2)) \int_{-\infty}^{\infty} K( (z - x)/h )^2 f(z) dz - (1/n) f(x)^2
                      = (1/(nh)) \int_{-\infty}^{\infty} K(u)^2 f( x + hu ) du
                      ~ ( f(x)/(nh) ) \int_{-\infty}^{\infty} K(u)^2 du
                      = f(x) R(K) / (nh),

where R(K) = \int_{-\infty}^{\infty} K(u)^2 du is called the roughness of K.

Together, the asymptotic mean-squared error (AMSE) for fixed x is the sum of the approximate squared bias and approximate variance

    AMSE_h(x) = (1/4) f''(x)^2 h^4 \sigma_K^4 + f(x) R(K) / (nh).

A global measure of precision is the asymptotic mean integrated squared error (AMISE)

    AMISE_h = \int AMSE_h(x) dx = h^4 \sigma_K^4 R(f'')/4 + R(K)/(nh),          (14.1)

where R(f'') = \int ( f''(x) )^2 dx is the roughness of f''. Notice that the first term (the squared bias) is increasing in h and the second term (the variance) is decreasing in nh. Thus for the AMISE to decline with n, we need h -> 0 but nh -> infinity. That is, h must tend to zero, but at a slower rate than n^{-1}.

Equation (14.1) is an asymptotic approximation to the MSE. We define the asymptotically optimal bandwidth h_0 as the value which minimizes this approximate MSE. That is,

    h_0 = argmin_h AMISE_h.

It can be found by solving the first-order condition

    (d/dh) AMISE_h = h^3 \sigma_K^4 R(f'') - R(K)/(n h^2) = 0,

yielding

    h_0 = ( R(K) / ( \sigma_K^4 R(f'') ) )^{1/5} n^{-1/5}.                       (14.2)

This solution takes the form h_0 = c n^{-1/5} where c is a function of K and f, but not of n. We thus say that the optimal bandwidth is of order O(n^{-1/5}). Note that this h declines to zero, but at a very slow rate.

In practice, how should the bandwidth be selected? This is a difficult problem, and there is a large and continuing literature on the subject. The asymptotically optimal choice given in (14.2) depends on R(K), \sigma_K^2, and R(f''). The first two are determined by the kernel function. Their values for the three functions introduced in the previous section are given here.

    K               \sigma_K^2 = \int u^2 K(u) du     R(K) = \int K(u)^2 du
    Gaussian        1                                 1/(2\sqrt{\pi})
    Epanechnikov    1/5                               1/5
    Biweight        1/7                               5/7

An obvious difficulty is that R(f'') is unknown. A classic simple solution proposed by Silverman (1986) has come to be known as the reference bandwidth or Silverman's Rule-of-Thumb. It uses formula (14.2) but replaces R(f'') with \hat\sigma^{-5} R(\phi''), where \phi is the N(0,1) density and \hat\sigma^2 is an estimate of \sigma^2 = var(X). This choice for h gives an optimal rule when f(x) is normal, and gives a nearly optimal rule when f(x) is close to normal. The downside is that if the density is very far from normal, the rule-of-thumb h can be quite inefficient. We can calculate that R(\phi'') = 3/(8\sqrt{\pi}). Together with the above table, we find the reference rules for the three kernel functions introduced earlier.

    Gaussian Kernel:            h_rule = 1.06 \hat\sigma n^{-1/5}
    Epanechnikov Kernel:        h_rule = 2.34 \hat\sigma n^{-1/5}
    Biweight (Quartic) Kernel:  h_rule = 2.78 \hat\sigma n^{-1/5}

Unless you delve more deeply into kernel estimation methods, the rule-of-thumb bandwidth is a good practical bandwidth choice, perhaps adjusted by visual inspection of the resulting estimate \hat{f}(x). There are other approaches, but implementation can be delicate. I now discuss some of these choices. The plug-in approach is to estimate R(f'') in a first step, and then plug this estimate into the formula (14.2). This is more treacherous than may first appear, as the optimal h for estimation of the roughness R(f'') is quite different than the optimal h for estimation of f(x). However, there are modern versions of this estimator which work well, in particular the iterative method of Sheather and Jones (1991). Another popular choice for selection of h is cross-validation. This works by constructing an estimate of the MISE using leave-one-out estimators. There are some desirable properties of cross-validation bandwidths, but they are also known to converge very slowly to the optimal values. They are also quite ill-behaved when the data has some discretization (as is common in economics), in which case the cross-validation rule can sometimes select very small bandwidths, leading to dramatically undersmoothed estimates. Fortunately there are remedies, which are known as smoothed cross-validation, which is a close cousin of the bootstrap.
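The reference rules above are trivial to compute; the following one-function sketch returns the rule-of-thumb bandwidth for any of the three kernels, to be passed to a density estimator such as the kernel_density sketch earlier in this chapter. The function name is an assumption of the illustration.

    import numpy as np

    def rule_of_thumb_bandwidth(X, kernel="gaussian"):
        """Silverman-style reference bandwidth h_rule = c * sigma-hat * n^{-1/5}."""
        c = {"gaussian": 1.06, "epanechnikov": 2.34, "biweight": 2.78}[kernel]
        return c * X.std(ddof=1) * len(X) ** (-1 / 5)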
Appendix A
Matrix Algebra
A.1 Notation
A scalar a is a single number.

A vector a is a k x 1 list of numbers, typically arranged in a column. We write this as

    a = ( a_1, a_2, ..., a_k )'.

Equivalently, a vector a is an element of Euclidean k-space, written as a in R^k. If k = 1 then a is a scalar.

A matrix A is a k x r rectangular array of numbers, written as

    A = [ a_11  a_12  ...  a_1r
          a_21  a_22  ...  a_2r
          ...
          a_k1  a_k2  ...  a_kr ].

By convention a_ij refers to the element in the i'th row and j'th column of A. If r = 1 then A is a column vector. If k = 1 then A is a row vector. If r = k = 1, then A is a scalar.

A standard convention (which we will follow in this text whenever possible) is to denote scalars by lower-case italics (a), vectors by lower-case bold italics (a), and matrices by upper-case bold italics (A). Sometimes a matrix A is denoted by the symbol (a_ij).

A matrix can be written as a set of column vectors or as a set of row vectors. That is,

    A = [ a_1  a_2  ...  a_r ] = [ \alpha_1' ; \alpha_2' ; ... ; \alpha_k' ],

where

    a_i = ( a_1i, a_2i, ..., a_ki )'

are column vectors and

    \alpha_j = ( a_j1  a_j2  ...  a_jr )

are row vectors.

The transpose of a matrix, denoted A', is obtained by flipping the matrix on its diagonal. Thus

    A' = [ a_11  a_21  ...  a_k1
           a_12  a_22  ...  a_k2
           ...
           a_1r  a_2r  ...  a_kr ].

Alternatively, letting B = A', then b_ij = a_ji. Note that if A is k x r, then A' is r x k. If a is a k x 1 vector, then a' is a 1 x k row vector. An alternative notation for the transpose of A is A^T.

A matrix is square if k = r. A square matrix is symmetric if A = A', which requires a_ij = a_ji. A square matrix is diagonal if the off-diagonal elements are all zero, so that a_ij = 0 if i is not equal to j. A square matrix is upper (lower) diagonal if all elements below (above) the diagonal equal zero.

An important diagonal matrix is the identity matrix, which has ones on the diagonal. The k x k identity matrix is denoted as

    I_k = [ 1 0 ... 0
            0 1 ... 0
            ...
            0 0 ... 1 ].

A partitioned matrix takes the form

    A = [ A_11  A_12  ...  A_1r
          A_21  A_22  ...  A_2r
          ...
          A_k1  A_k2  ...  A_kr ],

where the A_ij denote matrices, vectors and/or scalars.
A.2 Matrix Addition
If the matrices A = (a_ij) and B = (b_ij) are of the same order, we define the sum

    A + B = ( a_ij + b_ij ).

Matrix addition follows the commutative and associative laws:

    A + B = B + A
    A + (B + C) = (A + B) + C.
A.3 Matrix Multiplication
If A is k x r and c is real, we define their product as

    Ac = cA = ( a_ij c ).

If a and b are both k x 1, then their inner product is

    a'b = a_1 b_1 + a_2 b_2 + ... + a_k b_k = \sum_{j=1}^k a_j b_j.

Note that a'b = b'a. We say that two vectors a and b are orthogonal if a'b = 0.

If A is k x r and B is r x s, so that the number of columns of A equals the number of rows of B, we say that A and B are conformable. In this event the matrix product AB is defined. Writing A as a set of row vectors and B as a set of column vectors (each of length r), then the matrix product is defined as

    AB = [ a_1' ; a_2' ; ... ; a_k' ] [ b_1  b_2  ...  b_s ]
       = [ a_1'b_1  a_1'b_2  ...  a_1'b_s
           a_2'b_1  a_2'b_2  ...  a_2'b_s
           ...
           a_k'b_1  a_k'b_2  ...  a_k'b_s ].

Matrix multiplication is not commutative: in general AB is not equal to BA. However, it is associative and distributive:

    A(BC) = (AB)C
    A(B + C) = AB + AC.

An alternative way to write the matrix product is to use matrix partitions. For example,

    AB = [ A_11  A_12 ; A_21  A_22 ] [ B_11  B_12 ; B_21  B_22 ]
       = [ A_11 B_11 + A_12 B_21    A_11 B_12 + A_12 B_22
           A_21 B_11 + A_22 B_21    A_21 B_12 + A_22 B_22 ].

As another example,

    AB = [ A_1  A_2  ...  A_r ] [ B_1 ; B_2 ; ... ; B_r ] = A_1 B_1 + A_2 B_2 + ... + A_r B_r = \sum_{j=1}^r A_j B_j.

An important property of the identity matrix is that if A is k x r, then A I_r = A and I_k A = A.

The k x r matrix A, r <= k, is called orthogonal if A'A = I_r.
A.4 Trace
The trace of a k x k square matrix A is the sum of its diagonal elements

    tr(A) = \sum_{i=1}^k a_ii.

Some straightforward properties for square matrices A and B and real c are

    tr(cA) = c tr(A)
    tr(A') = tr(A)
    tr(A + B) = tr(A) + tr(B)
    tr(I_k) = k.

Also, for k x r A and r x k B we have

    tr(AB) = tr(BA).

Indeed,

    tr(AB) = tr [ a_1'b_1  a_1'b_2  ...  a_1'b_k
                  a_2'b_1  a_2'b_2  ...  a_2'b_k
                  ...
                  a_k'b_1  a_k'b_2  ...  a_k'b_k ]
           = \sum_{i=1}^k a_i'b_i = \sum_{i=1}^k b_i'a_i = tr(BA).
A.5 Rank and Inverse
The rank of the k x r matrix (r <= k)

    A = [ a_1  a_2  ...  a_r ]

is the number of linearly independent columns a_j, and is written as rank(A). We say that A has full rank if rank(A) = r.

A square k x k matrix A is said to be nonsingular if it has full rank, i.e. rank(A) = k. This means that there is no k x 1 c, c not equal to 0, such that Ac = 0.

If a square k x k matrix A is nonsingular then there exists a unique k x k matrix A^{-1}, called the inverse of A, which satisfies

    A A^{-1} = A^{-1} A = I_k.

For non-singular A and C, some important properties include

    A A^{-1} = A^{-1} A = I_k
    ( A^{-1} )' = ( A' )^{-1}
    ( AC )^{-1} = C^{-1} A^{-1}
    ( A + C )^{-1} = A^{-1} ( A^{-1} + C^{-1} )^{-1} C^{-1}
    A^{-1} - ( A + C )^{-1} = A^{-1} ( A^{-1} + C^{-1} )^{-1} A^{-1}.

Also, if A is an orthogonal matrix, then A^{-1} = A'.

Another useful result for non-singular A is

    ( A + BCD )^{-1} = A^{-1} - A^{-1} BC ( C + CDA^{-1}BC )^{-1} CDA^{-1}.       (A.1)

In particular, for C = -1, B = b and D = b' for vector b we find

    ( A - bb' )^{-1} = A^{-1} + ( 1 - b'A^{-1}b )^{-1} A^{-1} bb' A^{-1}.          (A.2)

The following fact about inverting partitioned matrices is quite useful. If A - BD^{-1}C and D - CA^{-1}B are non-singular, then

    [ A  B ; C  D ]^{-1} = [  ( A - BD^{-1}C )^{-1}            -( A - BD^{-1}C )^{-1} BD^{-1}
                             -( D - CA^{-1}B )^{-1} CA^{-1}     ( D - CA^{-1}B )^{-1}          ].   (A.3)

Even if a matrix A does not possess an inverse, we can still define the generalized inverse A^- as a matrix which satisfies

    A A^- A = A.                                                                   (A.4)

The matrix A^- is not necessarily unique. The Moore-Penrose generalized inverse A^- satisfies (A.4) plus the following three conditions:

    A^- A A^- = A^-
    A A^- is symmetric
    A^- A is symmetric.

For any matrix A, the Moore-Penrose generalized inverse A^- exists and is unique.
A.6 Determinant
The determinant is a measure of the volume of a square matrix.

While the determinant is widely used, its precise definition is rarely needed. However, we present the definition here for completeness. Let A = (a_ij) be a general k x k matrix. Let \pi = ( j_1, ..., j_k ) denote a permutation of (1, ..., k). There are k! such permutations. There is a unique count of the number of inversions of the indices of such permutations (relative to the natural order (1, ..., k)), and let \epsilon_\pi = +1 if this count is even and \epsilon_\pi = -1 if the count is odd. Then the determinant of A is defined as

    det A = \sum_\pi \epsilon_\pi a_{1 j_1} a_{2 j_2} ... a_{k j_k}.

For example, if A is 2 x 2, then the two permutations of (1, 2) are (1, 2) and (2, 1), for which \epsilon_(1,2) = 1 and \epsilon_(2,1) = -1. Thus

    det A = \epsilon_(1,2) a_11 a_22 + \epsilon_(2,1) a_21 a_12 = a_11 a_22 - a_12 a_21.

Some properties include

    det(A) = det(A')
    det(cA) = c^k det A
    det(AB) = (det A)(det B)
    det( A^{-1} ) = ( det A )^{-1}
    det [ A  B ; C  D ] = (det D) det( A - BD^{-1}C )  if det D is non-zero
    det A is non-zero if and only if A is nonsingular
    If A is triangular (upper or lower), then det A = \prod_{i=1}^k a_ii
    If A is orthogonal, then det A = +1 or -1.
A.7 Eigenvalues
The characteristic equation of a square matrix A is

    det( A - \lambda I_k ) = 0.

The left side is a polynomial of degree k in \lambda, so it has exactly k roots, which are not necessarily distinct and may be real or complex. They are called the latent roots or characteristic roots or eigenvalues of A. If \lambda_i is an eigenvalue of A, then A - \lambda_i I_k is singular, so there exists a non-zero vector h_i such that

    ( A - \lambda_i I_k ) h_i = 0.

The vector h_i is called a latent vector or characteristic vector or eigenvector of A corresponding to \lambda_i.

We now state some useful properties. Let \lambda_i and h_i, i = 1, ..., k, denote the k eigenvalues and eigenvectors of a square matrix A. Let \Lambda be a diagonal matrix with the characteristic roots on the diagonal, and let H = [ h_1 ... h_k ].

    det(A) = \prod_{i=1}^k \lambda_i
    tr(A) = \sum_{i=1}^k \lambda_i
    A is non-singular if and only if all its characteristic roots are non-zero.
    If A has distinct characteristic roots, there exists a nonsingular matrix P such that \Lambda = P^{-1}AP and P\Lambda P^{-1} = A.
    If A is symmetric, then A = H\Lambda H' and H'AH = \Lambda, and the characteristic roots are all real. A = H\Lambda H' is called the spectral decomposition of a matrix.
    The characteristic roots of A^{-1} are \lambda_1^{-1}, \lambda_2^{-1}, ..., \lambda_k^{-1}.
A.8 Positive Definiteness

We say that a symmetric square matrix A is positive semi-definite if for all c not equal to 0, c'Ac >= 0. This is written as A >= 0. We say that A is positive definite if for all c not equal to 0, c'Ac > 0. This is written as A > 0.

Some properties include:

    If A = C'C for some matrix C, then A is positive semi-definite. (For any c not equal to 0, c'Ac = \alpha'\alpha >= 0 where \alpha = Cc.) If C has full rank, then A is positive definite.
    If A is positive definite, then A is non-singular and A^{-1} exists. Furthermore, A^{-1} > 0.
    A > 0 if and only if it is symmetric and all its characteristic roots are positive.
    If A > 0 we can find a matrix B such that A = BB'. We call B a matrix square root of A. The matrix B need not be unique. One way to construct B is to use the spectral decomposition A = H\Lambda H', where \Lambda is diagonal, and then set B = H\Lambda^{1/2}.

A square matrix A is idempotent if AA = A. If A is idempotent and symmetric then all its characteristic roots equal either zero or one, and it is thus positive semi-definite. To see this, note that we can write A = H\Lambda H' where H is orthogonal and \Lambda contains the (real) characteristic roots. Then

    A = AA = H\Lambda H' H\Lambda H' = H\Lambda^2 H'.

By the uniqueness of the characteristic roots, we deduce that \Lambda^2 = \Lambda and \lambda_i^2 = \lambda_i for i = 1, ..., k. Hence they must equal either 0 or 1. It follows that the spectral decomposition of idempotent A takes the form

    A = H [ I_{n-k}  0 ; 0  0 ] H'                                    (A.5)

with H'H = I_n. Additionally, tr(A) = rank(A).
A.9 Matrix Calculus
Let x = ( x_1, ..., x_k )' be k x 1 and g(x) = g( x_1, ..., x_k ) : R^k -> R. The vector derivative is

    (d/dx) g(x) = ( (d/dx_1) g(x), ..., (d/dx_k) g(x) )'

and

    (d/dx') g(x) = ( (d/dx_1) g(x)  ...  (d/dx_k) g(x) ).

Some properties are now summarized.

    (d/dx)( a'x ) = (d/dx)( x'a ) = a
    (d/dx')( Ax ) = A
    (d/dx)( x'Ax ) = ( A + A' ) x
    (d^2/dx dx')( x'Ax ) = A + A'.
A.10 Kronecker Products and the Vec Operator
Let A = [ a_1  a_2  ...  a_n ] be m x n. The vec of A, denoted by vec(A), is the mn x 1 vector

    vec(A) = ( a_1 ; a_2 ; ... ; a_n ).

Let A = (a_ij) be an m x n matrix and let B be any matrix. The Kronecker product of A and B, denoted A (x) B, is the matrix

    A (x) B = [ a_11 B  a_12 B  ...  a_1n B
                a_21 B  a_22 B  ...  a_2n B
                ...
                a_m1 B  a_m2 B  ...  a_mn B ].

Some important properties are now summarized. These results hold for matrices for which all matrix multiplications are conformable.

    ( A + B ) (x) C = A (x) C + B (x) C
    ( A (x) B )( C (x) D ) = AC (x) BD
    A (x) ( B (x) C ) = ( A (x) B ) (x) C
    ( A (x) B )' = A' (x) B'
    tr( A (x) B ) = tr(A) tr(B)
    If A is m x m and B is n x n, det( A (x) B ) = ( det(A) )^n ( det(B) )^m
    ( A (x) B )^{-1} = A^{-1} (x) B^{-1}
    If A > 0 and B > 0 then A (x) B > 0
    vec(ABC) = ( C' (x) A ) vec(B)
    tr(ABCD) = vec(D')' ( C' (x) A ) vec(B).
A.11 Vector and Matrix Norms
The Euclidean norm of an m x 1 vector a is

    ||a|| = ( a'a )^{1/2} = ( \sum_{i=1}^m a_i^2 )^{1/2}.

The Euclidean norm of an m x n matrix A is

    ||A|| = ||vec(A)|| = ( tr( A'A ) )^{1/2} = ( \sum_{i=1}^m \sum_{j=1}^n a_ij^2 )^{1/2}.

A useful calculation is for any m x 1 vectors a and b,

    ||ab'|| = ||a|| ||b||,

and in particular

    ||aa'|| = ||a||^2.                                               (A.6)

Some useful inequalities are now given:

Schwarz Inequality: For any m x 1 vectors a and b,

    |a'b| <= ||a|| ||b||.                                            (A.7)

Schwarz Matrix Inequality: For any m x n matrices A and B,

    ||A'B|| <= ||A|| ||B||.                                          (A.8)

Triangle Inequality: For any m x n matrices A and B,

    ||A + B|| <= ||A|| + ||B||.                                      (A.9)

Proof of Schwarz Inequality: First, suppose that ||b|| = 0. Then b = 0, and both |a'b| = 0 and ||a|| ||b|| = 0, so the inequality is true. Second, suppose that ||b|| > 0 and define c = a - b( b'b )^{-1} b'a. Since c is a vector, c'c >= 0. Thus

    0 <= c'c = a'a - ( a'b )^2 / ( b'b ).

Rearranging, this implies that

    ( a'b )^2 <= ( a'a )( b'b ).

Taking the square root of each side yields the result.

Proof of Schwarz Matrix Inequality: Partition A = [ a_1, ..., a_n ] and B = [ b_1, ..., b_n ]. Then by partitioned matrix multiplication, the definition of the matrix Euclidean norm and the Schwarz inequality

    ||A'B|| = || [ a_1'b_1  a_1'b_2  ... ; a_2'b_1  a_2'b_2  ... ; ... ] ||
           <= || [ ||a_1|| ||b_1||  ||a_1|| ||b_2||  ... ; ||a_2|| ||b_1||  ||a_2|| ||b_2||  ... ; ... ] ||
            = ( \sum_{i=1}^n \sum_{j=1}^n ||a_i||^2 ||b_j||^2 )^{1/2}
            = ( \sum_{i=1}^n ||a_i||^2 )^{1/2} ( \sum_{i=1}^n ||b_i||^2 )^{1/2}
            = ( \sum_{i=1}^n \sum_{j=1}^m a_{ji}^2 )^{1/2} ( \sum_{i=1}^n \sum_{j=1}^m b_{ji}^2 )^{1/2}
            = ||A|| ||B||.

Proof of Triangle Inequality: Let a = vec(A) and b = vec(B). Then by the definition of the matrix norm and the Schwarz Inequality

    ||A + B||^2 = ||a + b||^2 = a'a + 2 a'b + b'b
               <= a'a + 2 |a'b| + b'b
               <= ||a||^2 + 2 ||a|| ||b|| + ||b||^2
                = ( ||a|| + ||b|| )^2 = ( ||A|| + ||B|| )^2.
Appendix B
Probability
B.1 Foundations
The set S of all possible outcomes of an experiment is called the sample space for the experiment. Take the simple example of tossing a coin. There are two outcomes, heads and tails, so we can write S = {H, T}. If two coins are tossed in sequence, we can write the four outcomes as S = {HH, HT, TH, TT}.

An event A is any collection of possible outcomes of an experiment. An event is a subset of S, including S itself and the null set, denoted by the empty set. Continuing the two-coin example, one event is A = {HH, HT}, the event that the first coin is heads. We say that A and B are disjoint or mutually exclusive if A and B have empty intersection. For example, the sets {HH, HT} and {TH} are disjoint. Furthermore, if the sets A_1, A_2, ... are pairwise disjoint and their union is S, then the collection A_1, A_2, ... is called a partition of S.

The following are elementary set operations:
Union: A union B = { x : x in A or x in B }.
Intersection: A intersect B = { x : x in A and x in B }.
Complement: A^c = { x : x not in A }.

The following are useful properties of set operations.
Commutativity: A union B = B union A; A intersect B = B intersect A.
Associativity: A union (B union C) = (A union B) union C; A intersect (B intersect C) = (A intersect B) intersect C.
Distributive Laws: A intersect (B union C) = (A intersect B) union (A intersect C); A union (B intersect C) = (A union B) intersect (A union C).
DeMorgan's Laws: (A union B)^c = A^c intersect B^c; (A intersect B)^c = A^c union B^c.

A probability function assigns probabilities (numbers between 0 and 1) to events A in S. This is straightforward when S is countable; when S is uncountable we must be somewhat more careful. A set E is called a sigma algebra (or Borel field) if the empty set is in E, A in E implies A^c in E, and A_1, A_2, ... in E implies their union is in E. A simple example is the collection consisting of the empty set and S, which is known as the trivial sigma algebra. For any sample space S, let E be the smallest sigma algebra which contains all of the open sets in S. When S is countable, E is simply the collection of all subsets of S, including the empty set and S. When S is the real line, then E is the collection of all open and closed intervals. We call E the sigma algebra associated with S. We only define probabilities for events contained in E.

We now can give the axiomatic definition of probability. Given S and E, a probability function P satisfies P(S) = 1, P(A) >= 0 for all A in E, and if A_1, A_2, ... in E are pairwise disjoint, then P( union of the A_i ) = \sum_{i=1}^\infty P(A_i).

Some important properties of the probability function include the following:

    P(empty set) = 0
    P(A) <= 1
    P(A^c) = 1 - P(A)
    P(B intersect A^c) = P(B) - P(A intersect B)
    P(A union B) = P(A) + P(B) - P(A intersect B)
    If A is contained in B, then P(A) <= P(B)
    Bonferroni's Inequality: P(A intersect B) >= P(A) + P(B) - 1
    Boole's Inequality: P(A union B) <= P(A) + P(B).

For some elementary probability models, it is useful to have simple rules to count the number of objects in a set. These counting rules are facilitated by using the binomial coefficients, which are defined for nonnegative integers n and r, n >= r, as

    ( n choose r ) = n! / ( r! (n - r)! ).

When counting the number of objects in a set, there are two important distinctions. Counting may be with replacement or without replacement. Counting may be ordered or unordered. For example, consider a lottery where you pick six numbers from the set 1, 2, ..., 49. This selection is without replacement if you are not allowed to select the same number twice, and is with replacement if this is allowed. Counting is ordered or not depending on whether the sequential order of the numbers is relevant to winning the lottery. Depending on these two distinctions, we have four expressions for the number of objects (possible arrangements) of size r from n objects.

                   Without Replacement    With Replacement
    Ordered        n!/(n-r)!              n^r
    Unordered      (n choose r)           (n+r-1 choose r)

In the lottery example, if counting is unordered and without replacement, the number of potential combinations is (49 choose 6) = 13,983,816.

If P(B) > 0, the conditional probability of the event A given the event B is

    P(A | B) = P(A intersect B) / P(B).

For any B, the conditional probability function is a valid probability function where S has been replaced by B. Rearranging the definition, we can write

    P(A intersect B) = P(A | B) P(B),

which is often quite useful. We can say that the occurrence of B has no information about the likelihood of event A when P(A | B) = P(A), in which case we find

    P(A intersect B) = P(A) P(B).                                    (B.1)

We say that the events A and B are statistically independent when (B.1) holds. Furthermore, we say that the collection of events A_1, ..., A_k are mutually independent when for any subset { A_i : i in I },

    P( intersection over i in I of A_i ) = \prod_{i in I} P(A_i).

Theorem 3 (Bayes' Rule). For any set B and any partition A_1, A_2, ... of the sample space, then for each i = 1, 2, ...

    P(A_i | B) = P(B | A_i) P(A_i) / \sum_{j=1}^\infty P(B | A_j) P(A_j).
B.2 Random Variables
A random variable X is a function from a sample space S into the real line. This induces a new sample space (the real line) and a new probability function on the real line. Typically, we denote random variables by uppercase letters such as X, and use lower case letters such as x for potential values and realized values. (This is in contrast to the notation adopted for most of the textbook.) For a random variable X we define its cumulative distribution function (CDF) as

    F(x) = P( X <= x ).                                              (B.2)

Sometimes we write this as F_X(x) to denote that it is the CDF of X. A function F(x) is a CDF if and only if the following three properties hold:

    1. lim_{x -> -infinity} F(x) = 0 and lim_{x -> infinity} F(x) = 1
    2. F(x) is nondecreasing in x
    3. F(x) is right-continuous.

We say that the random variable X is discrete if F(x) is a step function. In this case, the range of X consists of a countable set of real numbers \tau_1, ..., \tau_r. The probability function for X takes the form

    P( X = \tau_j ) = \pi_j,    j = 1, ..., r,                       (B.3)

where 0 <= \pi_j <= 1 and \sum_{j=1}^r \pi_j = 1.

We say that the random variable X is continuous if F(x) is continuous in x. In this case P( X = \tau ) = 0 for all \tau in R, so the representation (B.3) is unavailable. Instead, we represent the relative probabilities by the probability density function (PDF)

    f(x) = (d/dx) F(x),

so that

    F(x) = \int_{-\infty}^x f(u) du

and

    P( a <= X <= b ) = \int_a^b f(u) du.

These expressions only make sense if F(x) is differentiable. While there are examples of continuous random variables which do not possess a PDF, these cases are unusual and are typically ignored.

A function f(x) is a PDF if and only if f(x) >= 0 for all x in R and \int_{-\infty}^{\infty} f(x) dx = 1.
B.3 Expectation
For any measurable real function g, we define the mean or expectation E g(X) as follows. If X is discrete,

    E g(X) = \sum_{j=1}^r g(\tau_j) \pi_j,

and if X is continuous

    E g(X) = \int_{-\infty}^{\infty} g(x) f(x) dx.

The latter is well defined and finite if

    \int_{-\infty}^{\infty} |g(x)| f(x) dx < \infty.                 (B.4)

If (B.4) does not hold, evaluate

    I_1 = \int_{g(x)>0} g(x) f(x) dx
    I_2 = -\int_{g(x)<0} g(x) f(x) dx.

If I_1 = \infty and I_2 < \infty then we define E g(X) = \infty. If I_1 < \infty and I_2 = \infty then we define E g(X) = -\infty. If both I_1 = \infty and I_2 = \infty then E g(X) is undefined.

Since E( a + bX ) = a + b EX, we say that expectation is a linear operator.

For m > 0, we define the m'th moment of X as EX^m and the m'th central moment as E( X - EX )^m.

Two special moments are the mean \mu = EX and variance \sigma^2 = E( X - \mu )^2 = EX^2 - \mu^2. We call \sigma = \sqrt{\sigma^2} the standard deviation of X. We can also write \sigma^2 = var(X). For example, this allows the convenient expression var( a + bX ) = b^2 var(X).

The moment generating function (MGF) of X is

    M(\lambda) = E exp( \lambda X ).

The MGF does not necessarily exist. However, when it does and E|X|^m < \infty, then

    (d^m/d\lambda^m) M(\lambda) evaluated at \lambda = 0 equals E( X^m ),

which is why it is called the moment generating function.

More generally, the characteristic function (CF) of X is

    C(\lambda) = E exp( i \lambda X ),

where i = \sqrt{-1} is the imaginary unit. The CF always exists, and when E|X|^m < \infty,

    (d^m/d\lambda^m) C(\lambda) evaluated at \lambda = 0 equals i^m E( X^m ).

The L_p norm, p >= 1, of the random variable X is

    ||X||_p = ( E|X|^p )^{1/p}.
B.4 Gamma Function
The gamma function is defined for \alpha > 0 as

    \Gamma(\alpha) = \int_0^{\infty} x^{\alpha-1} exp(-x) dx.

It satisfies the property

    \Gamma(1 + \alpha) = \alpha \Gamma(\alpha),

so for positive integers n,

    \Gamma(n) = (n - 1)!

Special values include

    \Gamma(1) = 1

and

    \Gamma(1/2) = \pi^{1/2}.

Stirling's formula is an expansion for its logarithm

    log \Gamma(\alpha) = (1/2) log(2\pi) + ( \alpha - 1/2 ) log \alpha - \alpha + 1/(12\alpha) - 1/(360\alpha^3) + 1/(1260\alpha^5) + ...
B.5 Common Distributions
For reference, we now list some important discrete distribution functions.

Bernoulli

    P(X = x) = p^x (1 - p)^{1-x},    x = 0, 1;    0 <= p <= 1
    EX = p
    var(X) = p(1 - p)

Binomial

    P(X = x) = (n choose x) p^x (1 - p)^{n-x},    x = 0, 1, ..., n;    0 <= p <= 1
    EX = np
    var(X) = np(1 - p)

Geometric

    P(X = x) = p(1 - p)^{x-1},    x = 1, 2, ...;    0 <= p <= 1
    EX = 1/p
    var(X) = (1 - p)/p^2

Multinomial

    P(X_1 = x_1, X_2 = x_2, ..., X_m = x_m) = ( n! / ( x_1! x_2! ... x_m! ) ) p_1^{x_1} p_2^{x_2} ... p_m^{x_m},
        x_1 + ... + x_m = n;    p_1 + ... + p_m = 1
    EX_i = n p_i
    var(X_i) = n p_i (1 - p_i)
    cov(X_i, X_j) = -n p_i p_j

Negative Binomial

    P(X = x) = ( \Gamma(x + r) / ( x! \Gamma(r) ) ) p^r (1 - p)^x,    x = 0, 1, 2, ...;    0 <= p <= 1
    EX = r(1 - p)/p
    var(X) = r(1 - p)/p^2

Poisson

    P(X = x) = exp(-\lambda) \lambda^x / x!,    x = 0, 1, 2, ...;    \lambda > 0
    EX = \lambda
    var(X) = \lambda

We now list some important continuous distributions.

Beta

    f(x) = ( \Gamma(\alpha + \beta) / ( \Gamma(\alpha)\Gamma(\beta) ) ) x^{\alpha-1} (1 - x)^{\beta-1},    0 <= x <= 1;    \alpha > 0, \beta > 0
    \mu = \alpha / (\alpha + \beta)
    var(X) = \alpha\beta / ( (\alpha + \beta + 1)(\alpha + \beta)^2 )

Cauchy

    f(x) = 1 / ( \pi (1 + x^2) ),    -\infty < x < \infty
    EX does not exist
    var(X) = \infty

Exponential

    f(x) = (1/\theta) exp( -x/\theta ),    0 <= x < \infty;    \theta > 0
    EX = \theta
    var(X) = \theta^2

Logistic

    f(x) = exp(-x) / ( 1 + exp(-x) )^2,    -\infty < x < \infty
    EX = 0
    var(X) = \pi^2/3

Lognormal

    f(x) = ( 1 / ( \sqrt{2\pi} \sigma x ) ) exp( -(log x - \mu)^2 / (2\sigma^2) ),    0 <= x < \infty;    \sigma > 0
    EX = exp( \mu + \sigma^2/2 )
    var(X) = exp( 2\mu + 2\sigma^2 ) - exp( 2\mu + \sigma^2 )

Pareto

    f(x) = \beta \alpha^\beta / x^{\beta+1},    \alpha <= x < \infty;    \alpha > 0, \beta > 0
    EX = \beta\alpha / (\beta - 1),    \beta > 1
    var(X) = \beta\alpha^2 / ( (\beta - 1)^2 (\beta - 2) ),    \beta > 2

Uniform

    f(x) = 1/(b - a),    a <= x <= b
    EX = (a + b)/2
    var(X) = (b - a)^2/12

Weibull

    f(x) = (\gamma/\beta) x^{\gamma-1} exp( -x^\gamma/\beta ),    0 <= x < \infty;    \gamma > 0, \beta > 0
    EX = \beta^{1/\gamma} \Gamma( 1 + 1/\gamma )
    var(X) = \beta^{2/\gamma} ( \Gamma( 1 + 2/\gamma ) - \Gamma( 1 + 1/\gamma )^2 )

Gamma

    f(x) = ( 1 / ( \Gamma(\alpha) \theta^\alpha ) ) x^{\alpha-1} exp( -x/\theta ),    0 <= x < \infty;    \alpha > 0, \theta > 0
    EX = \alpha\theta
    var(X) = \alpha\theta^2

Chi-Square

    f(x) = ( 1 / ( \Gamma(r/2) 2^{r/2} ) ) x^{r/2-1} exp( -x/2 ),    0 <= x < \infty;    r > 0
    EX = r
    var(X) = 2r

Normal

    f(x) = ( 1 / ( \sqrt{2\pi} \sigma ) ) exp( -(x - \mu)^2 / (2\sigma^2) ),    -\infty < x < \infty;    -\infty < \mu < \infty, \sigma^2 > 0
    EX = \mu
    var(X) = \sigma^2

Student t

    f(x) = ( \Gamma( (r+1)/2 ) / ( \sqrt{r\pi} \Gamma( r/2 ) ) ) ( 1 + x^2/r )^{-(r+1)/2},    -\infty < x < \infty;    r > 0
    EX = 0 if r > 1
    var(X) = r/(r - 2) if r > 2
B.6 Multivariate Random Variables
A pair of bivariate random variables (X, Y) is a function from the sample space into R^2. The joint CDF of (X, Y) is

    F(x, y) = P( X <= x, Y <= y ).

If F is continuous, the joint probability density function is

    f(x, y) = (d^2/dx dy) F(x, y).

For a Borel measurable set A in R^2,

    P( (X, Y) in A ) = \int\int_A f(x, y) dx dy.

For any measurable function g(x, y),

    E g(X, Y) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} g(x, y) f(x, y) dx dy.

The marginal distribution of X is

    F_X(x) = P( X <= x ) = lim_{y -> \infty} F(x, y) = \int_{-\infty}^x \int_{-\infty}^{\infty} f(x, y) dy dx,

so the marginal density of X is

    f_X(x) = (d/dx) F_X(x) = \int_{-\infty}^{\infty} f(x, y) dy.

Similarly, the marginal density of Y is

    f_Y(y) = \int_{-\infty}^{\infty} f(x, y) dx.

The random variables X and Y are defined to be independent if f(x, y) = f_X(x) f_Y(y). Furthermore, X and Y are independent if and only if there exist functions g(x) and h(y) such that f(x, y) = g(x) h(y).

If X and Y are independent, then

    E( g(X) h(Y) ) = \int\int g(x) h(y) f(y, x) dy dx
                   = \int\int g(x) h(y) f_Y(y) f_X(x) dy dx
                   = \int g(x) f_X(x) dx \int h(y) f_Y(y) dy
                   = E g(X) E h(Y),                                  (B.5)

if the expectations exist. For example, if X and Y are independent then

    E(XY) = EX EY.

Another implication of (B.5) is that if X and Y are independent and Z = X + Y, then

    M_Z(\lambda) = E exp( \lambda (X + Y) )
                 = E( exp(\lambda X) exp(\lambda Y) )
                 = E exp(\lambda X) E exp(\lambda Y)
                 = M_X(\lambda) M_Y(\lambda).                        (B.6)

The covariance between X and Y is

    cov(X, Y) = \sigma_{XY} = E( (X - EX)(Y - EY) ) = EXY - EX EY.

The correlation between X and Y is

    corr(X, Y) = \rho_{XY} = \sigma_{XY} / ( \sigma_X \sigma_Y ).

The Cauchy-Schwarz Inequality implies that |\rho_{XY}| <= 1. The correlation is a measure of linear dependence, free of units of measurement.

If X and Y are independent, then \sigma_{XY} = 0 and \rho_{XY} = 0. The reverse, however, is not true. For example, if EX = 0 and EX^3 = 0, then cov(X, X^2) = 0.

A useful fact is that

    var(X + Y) = var(X) + var(Y) + 2 cov(X, Y).

An implication is that if X and Y are independent, then

    var(X + Y) = var(X) + var(Y),

the variance of the sum is the sum of the variances.

A k x 1 random vector X = ( X_1, ..., X_k )' is a function from S to R^k. Let x = ( x_1, ..., x_k )' denote a vector in R^k. (In this Appendix, we use bold to denote vectors. Bold capitals X are random vectors and bold lower case x are nonrandom vectors. Again, this is in distinction to the notation used in the bulk of the text.) The vector X has the distribution and density functions

    F(x) = P( X <= x )
    f(x) = ( d^k / dx_1 ... dx_k ) F(x).

For a measurable function g : R^k -> R^s, we define the expectation

    E g(X) = \int_{R^k} g(x) f(x) dx,

where the symbol dx denotes dx_1 ... dx_k. In particular, we have the k x 1 multivariate mean

    \mu = EX

and k x k covariance matrix

    \Sigma = E( (X - \mu)(X - \mu)' ) = EXX' - \mu\mu'.

If the elements of X are mutually independent, then \Sigma is a diagonal matrix and

    var( \sum_{i=1}^k X_i ) = \sum_{i=1}^k var( X_i ).
B.7 Conditional Distributions and Expectation
The conditional density of Y given X = x is defined as

    f_{Y|X}( y | x ) = f(x, y) / f_X(x)

if f_X(x) > 0. One way to derive this expression from the definition of conditional probability is

    f_{Y|X}( y | x ) = (d/dy) lim_{eps -> 0} P( Y <= y | x <= X <= x + eps )
                     = (d/dy) lim_{eps -> 0} P( {Y <= y} intersect {x <= X <= x + eps} ) / P( x <= X <= x + eps )
                     = (d/dy) lim_{eps -> 0} ( F(x + eps, y) - F(x, y) ) / ( F_X(x + eps) - F_X(x) )
                     = (d/dy) lim_{eps -> 0} (d/dx) F(x + eps, y) / f_X(x + eps)
                     = ( d^2/dx dy ) F(x, y) / f_X(x)
                     = f(x, y) / f_X(x).

The conditional mean or conditional expectation is the function

    m(x) = E( Y | X = x ) = \int_{-\infty}^{\infty} y f_{Y|X}( y | x ) dy.

The conditional mean m(x) is a function, meaning that when X equals x, then the expected value of Y is m(x).

Similarly, we define the conditional variance of Y given X = x as

    \sigma^2(x) = var( Y | X = x ) = E( (Y - m(x))^2 | X = x ) = E( Y^2 | X = x ) - m(x)^2.

Evaluated at x = X, the conditional mean m(X) and conditional variance \sigma^2(X) are random variables, functions of X. We write this as E( Y | X ) = m(X) and var( Y | X ) = \sigma^2(X). For example, if E( Y | X = x ) = \alpha + \beta'x, then E( Y | X ) = \alpha + \beta'X, a transformation of X.

The following are important facts about conditional expectations.

Simple Law of Iterated Expectations:

    E( E( Y | X ) ) = E(Y).                                          (B.7)

Proof:

    E( E( Y | X ) ) = E( m(X) )
                    = \int_{-\infty}^{\infty} m(x) f_X(x) dx
                    = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} y f_{Y|X}( y | x ) f_X(x) dy dx
                    = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} y f(y, x) dy dx
                    = E(Y).

Law of Iterated Expectations:

    E( E( Y | X, Z ) | X ) = E( Y | X ).                             (B.8)

Conditioning Theorem. For any function g(x),

    E( g(X) Y | X ) = g(X) E( Y | X ).                               (B.9)

Proof: Let

    h(x) = E( g(X) Y | X = x )
         = \int_{-\infty}^{\infty} g(x) y f_{Y|X}( y | x ) dy
         = g(x) \int_{-\infty}^{\infty} y f_{Y|X}( y | x ) dy
         = g(x) m(x),

where m(x) = E( Y | X = x ). Thus h(X) = g(X) m(X), which is the same as E( g(X) Y | X ) = g(X) E( Y | X ).
B.8 Transformations
Suppose that X in R^k has continuous distribution function F_X(x) and density f_X(x). Let Y = g(X) where g(x) : R^k -> R^k is one-to-one, differentiable, and invertible. Let h(y) denote the inverse of g(x). The Jacobian is

    J(y) = det( (d/dy') h(y) ).

Consider the univariate case k = 1. If g(x) is an increasing function, then g(X) <= y if and only if X <= h(y), so the distribution function of Y is

    F_Y(y) = P( g(X) <= y ) = P( X <= h(y) ) = F_X( h(y) ).

Taking the derivative, the density of Y is

    f_Y(y) = (d/dy) F_Y(y) = f_X( h(y) ) (d/dy) h(y).

If g(x) is a decreasing function, then g(X) <= y if and only if X >= h(y), so

    F_Y(y) = P( g(X) <= y ) = 1 - P( X < h(y) ) = 1 - F_X( h(y) ),

and the density of Y is

    f_Y(y) = -f_X( h(y) ) (d/dy) h(y).

We can write these two cases jointly as

    f_Y(y) = f_X( h(y) ) |J(y)|.                                     (B.10)

This is known as the change-of-variables formula. This same formula (B.10) holds for k > 1, but its justification requires deeper results from analysis.

As one example, take the case X ~ U[0, 1] and Y = -log(X). Here, g(x) = -log(x) and h(y) = exp(-y), so the Jacobian is J(y) = -exp(-y). As the range of X is [0, 1], that for Y is [0, infinity). Since f_X(x) = 1 for 0 <= x <= 1, (B.10) shows that

    f_Y(y) = exp(-y),    0 <= y < \infty,

an exponential density.
169
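The change-of-variables example can be checked by simulation. The sketch below (Python with numpy; the seed, sample size, and evaluation points are arbitrary choices) draws $X \sim U[0,1]$, forms $Y = -\log(X)$, and compares the empirical distribution of $Y$ with the exponential distribution function $1 - \exp(-y)$ implied by (B.10).

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 500_000)
y = -np.log(x)                      # the transformation Y = -log(X)

for t in (0.5, 1.0, 2.0):
    empirical = np.mean(y <= t)     # empirical distribution function at t
    exact = 1.0 - np.exp(-t)        # exponential(1) distribution function
    print(f"t={t}: empirical {empirical:.4f} vs exact {exact:.4f}")

# The mean and variance of an exponential(1) variable are both 1.
print("mean:", y.mean(), "variance:", y.var())
```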
B.9 Normal and Related Distributions

The standard normal density is
\[
\phi(x) = \frac{1}{\sqrt{2\pi}} \exp\left( -\frac{x^2}{2} \right), \qquad -\infty < x < \infty.
\]
It is conventional to write $X \sim \mathrm{N}(0, 1)$, and to denote the standard normal density function by $\phi(x)$ and its distribution function by $\Phi(x)$. The latter has no closed-form solution. The normal density has all moments finite. Since it is symmetric about zero all odd moments are zero. By iterated integration by parts, we can also show that $EX^2 = 1$ and $EX^4 = 3$. In fact, for any positive integer $m$, $EX^{2m} = (2m-1)!! = (2m-1) \cdot (2m-3) \cdots 1$. Thus $EX^4 = 3$, $EX^6 = 15$, $EX^8 = 105$, and $EX^{10} = 945$.
If $Z$ is standard normal and $X = \mu + \sigma Z$, then using the change-of-variables formula, $X$ has density
\[
f(x) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left( -\frac{(x - \mu)^2}{2\sigma^2} \right), \qquad -\infty < x < \infty,
\]
which is the univariate normal density. The mean and variance of the distribution are $\mu$ and $\sigma^2$, and it is conventional to write $X \sim \mathrm{N}(\mu, \sigma^2)$.
For $\mathbf{x} \in \mathbb{R}^k$, the multivariate normal density is
\[
f(\mathbf{x}) = \frac{1}{(2\pi)^{k/2} \det(\boldsymbol{\Sigma})^{1/2}} \exp\left( -\frac{(\mathbf{x} - \boldsymbol{\mu})' \boldsymbol{\Sigma}^{-1} (\mathbf{x} - \boldsymbol{\mu})}{2} \right), \qquad \mathbf{x} \in \mathbb{R}^k.
\]
The mean and covariance matrix of the distribution are $\boldsymbol{\mu}$ and $\boldsymbol{\Sigma}$, and it is conventional to write $\mathbf{X} \sim \mathrm{N}(\boldsymbol{\mu}, \boldsymbol{\Sigma})$.
The MGF and CF of the multivariate normal are $\exp\left( \boldsymbol{\lambda}'\boldsymbol{\mu} + \boldsymbol{\lambda}'\boldsymbol{\Sigma}\boldsymbol{\lambda}/2 \right)$ and $\exp\left( i\boldsymbol{\lambda}'\boldsymbol{\mu} - \boldsymbol{\lambda}'\boldsymbol{\Sigma}\boldsymbol{\lambda}/2 \right)$, respectively.
If $\mathbf{X} \in \mathbb{R}^k$ is multivariate normal and the elements of $\mathbf{X}$ are mutually uncorrelated, then $\boldsymbol{\Sigma} = \mathrm{diag}\{\sigma_j^2\}$ is a diagonal matrix. In this case the density function can be written as
\begin{align*}
f(\mathbf{x}) &= \frac{1}{(2\pi)^{k/2} \sigma_1 \cdots \sigma_k} \exp\left( -\frac{(x_1 - \mu_1)^2/\sigma_1^2 + \cdots + (x_k - \mu_k)^2/\sigma_k^2}{2} \right) \\
&= \prod_{j=1}^{k} \frac{1}{(2\pi)^{1/2} \sigma_j} \exp\left( -\frac{(x_j - \mu_j)^2}{2\sigma_j^2} \right)
\end{align*}
which is the product of marginal univariate normal densities. This shows that if $\mathbf{X}$ is multivariate normal with uncorrelated elements, then they are mutually independent.
Theorem B.9.1 If $\mathbf{X} \sim \mathrm{N}(\boldsymbol{\mu}, \boldsymbol{\Sigma})$ and $\mathbf{Y} = \mathbf{a} + \mathbf{B}\mathbf{X}$ with $\mathbf{B}$ an invertible matrix, then $\mathbf{Y} \sim \mathrm{N}(\mathbf{a} + \mathbf{B}\boldsymbol{\mu}, \mathbf{B}\boldsymbol{\Sigma}\mathbf{B}')$.

Theorem B.9.2 Let $\mathbf{X} \sim \mathrm{N}(\mathbf{0}, \mathbf{I}_r)$. Then $Q = \mathbf{X}'\mathbf{X}$ is distributed chi-square with $r$ degrees of freedom, written $\chi^2_r$.

Theorem B.9.3 If $\mathbf{Z} \sim \mathrm{N}(\mathbf{0}, \mathbf{A})$ with $\mathbf{A} > 0$, $q \times q$, then $\mathbf{Z}'\mathbf{A}^{-1}\mathbf{Z} \sim \chi^2_q$.

Theorem B.9.4 Let $Z \sim \mathrm{N}(0, 1)$ and $Q \sim \chi^2_r$ be independent. Then $T_r = Z/\sqrt{Q/r}$ is distributed as student's $t$ with $r$ degrees of freedom.
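As an informal check on Theorems B.9.2 and B.9.4, the sketch below (Python/numpy; the dimension $r$ and the simulation sizes are arbitrary assumptions) builds $Q = \mathbf{X}'\mathbf{X}$ from $r$ independent standard normals and $T = Z/\sqrt{Q/r}$, then compares their simulated moments with the chi-square moments (mean $r$, variance $2r$) and the student $t$ variance $r/(r-2)$.

```python
import numpy as np

rng = np.random.default_rng(0)
r, reps = 5, 400_000

z = rng.standard_normal((reps, r))
q = (z**2).sum(axis=1)              # Q = X'X, chi-square with r degrees of freedom
print("E[Q]   ~", q.mean(), " (theory:", r, ")")
print("var(Q) ~", q.var(),  " (theory:", 2 * r, ")")

z0 = rng.standard_normal(reps)      # independent N(0,1)
t = z0 / np.sqrt(q / r)             # student t with r degrees of freedom
print("var(T) ~", t.var(), " (theory:", r / (r - 2), ")")
```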
Proof of Theorem B.9.1. By the change-of-variables formula, the density of $\mathbf{Y} = \mathbf{a} + \mathbf{B}\mathbf{X}$ is
\[
f(\mathbf{y}) = \frac{1}{(2\pi)^{k/2} \det(\boldsymbol{\Sigma}_Y)^{1/2}} \exp\left( -\frac{(\mathbf{y} - \boldsymbol{\mu}_Y)' \boldsymbol{\Sigma}_Y^{-1} (\mathbf{y} - \boldsymbol{\mu}_Y)}{2} \right), \qquad \mathbf{y} \in \mathbb{R}^k,
\]
where $\boldsymbol{\mu}_Y = \mathbf{a} + \mathbf{B}\boldsymbol{\mu}$ and $\boldsymbol{\Sigma}_Y = \mathbf{B}\boldsymbol{\Sigma}\mathbf{B}'$, where we used the fact that $\det\left( \mathbf{B}\boldsymbol{\Sigma}\mathbf{B}' \right)^{1/2} = \det(\boldsymbol{\Sigma})^{1/2} \det(\mathbf{B})$. $\blacksquare$
Proof of Theorem B.9.2. First, suppose a random variable $Q$ is distributed chi-square with $r$ degrees of freedom. It has the MGF
\[
E \exp(tQ) = \int_0^{\infty} \frac{1}{\Gamma\left( \frac{r}{2} \right) 2^{r/2}}\, x^{r/2 - 1} \exp(tx) \exp(-x/2)\, dx = (1 - 2t)^{-r/2}
\]
where the second equality uses the fact that $\int_0^{\infty} y^{a-1} \exp(-by)\, dy = b^{-a} \Gamma(a)$, which can be found by applying change-of-variables to the gamma function. Our goal is to calculate the MGF of $Q = \mathbf{X}'\mathbf{X}$ and show that it equals $(1 - 2t)^{-r/2}$, which will establish that $Q \sim \chi^2_r$.
Note that we can write $Q = \mathbf{X}'\mathbf{X} = \sum_{j=1}^{r} Z_j^2$ where the $Z_j$ are independent $\mathrm{N}(0, 1)$. The distribution of each of the $Z_j^2$ is
\begin{align*}
P\left( Z_j^2 \le y \right) &= 2 P\left( 0 \le Z_j \le \sqrt{y} \right) \\
&= 2 \int_0^{\sqrt{y}} \frac{1}{\sqrt{2\pi}} \exp\left( -\frac{x^2}{2} \right) dx \\
&= \int_0^{y} \frac{1}{\Gamma\left( \frac{1}{2} \right) 2^{1/2}}\, s^{-1/2} \exp\left( -\frac{s}{2} \right) ds
\end{align*}
using the change-of-variables $s = x^2$ and the fact $\Gamma\left( \frac{1}{2} \right) = \sqrt{\pi}$. Thus the density of $Z_j^2$ is
\[
f_1(x) = \frac{1}{\Gamma\left( \frac{1}{2} \right) 2^{1/2}}\, x^{-1/2} \exp\left( -\frac{x}{2} \right)
\]
which is the $\chi^2_1$ density, and by our above calculation it has the MGF $E \exp\left( t Z_j^2 \right) = (1 - 2t)^{-1/2}$.
Since the $Z_j^2$ are mutually independent, (B.6) implies that the MGF of $Q = \sum_{j=1}^{r} Z_j^2$ is $\left[ (1 - 2t)^{-1/2} \right]^r = (1 - 2t)^{-r/2}$, which is the MGF of the $\chi^2_r$ density as desired. $\blacksquare$
Proof of Theorem B.9.3. The fact that $\mathbf{A} > 0$ means that we can write $\mathbf{A} = \mathbf{C}\mathbf{C}'$ where $\mathbf{C}$ is non-singular. Then $\mathbf{A}^{-1} = \mathbf{C}^{-1\prime} \mathbf{C}^{-1}$ and
\[
\mathbf{C}^{-1} \mathbf{Z} \sim \mathrm{N}\left( \mathbf{0}, \mathbf{C}^{-1} \mathbf{A} \mathbf{C}^{-1\prime} \right) = \mathrm{N}\left( \mathbf{0}, \mathbf{C}^{-1} \mathbf{C}\mathbf{C}' \mathbf{C}^{-1\prime} \right) = \mathrm{N}(\mathbf{0}, \mathbf{I}_q).
\]
Thus
\[
\mathbf{Z}'\mathbf{A}^{-1}\mathbf{Z} = \mathbf{Z}' \mathbf{C}^{-1\prime} \mathbf{C}^{-1} \mathbf{Z} = \left( \mathbf{C}^{-1}\mathbf{Z} \right)' \left( \mathbf{C}^{-1}\mathbf{Z} \right) \sim \chi^2_q. \qquad \blacksquare
\]
Proof of Theorem B.9.4. Using the simple law of iterated expectations, $T_r = Z/\sqrt{Q/r}$ has distribution function
\begin{align*}
F(x) &= P\left( \frac{Z}{\sqrt{Q/r}} \le x \right) \\
&= E\left[ 1\left( Z \le x \sqrt{\frac{Q}{r}} \right) \right] \\
&= E\left[ P\left( Z \le x \sqrt{\frac{Q}{r}} \,\Big|\, Q \right) \right] \\
&= E\left[ \Phi\left( x \sqrt{\frac{Q}{r}} \right) \right].
\end{align*}
Thus its density is
\begin{align*}
f(x) &= E \frac{d}{dx} \Phi\left( x \sqrt{\frac{Q}{r}} \right) \\
&= E\left[ \phi\left( x \sqrt{\frac{Q}{r}} \right) \sqrt{\frac{Q}{r}} \right] \\
&= \int_0^{\infty} \left( \frac{1}{\sqrt{2\pi}} \exp\left( -\frac{q x^2}{2r} \right) \right) \sqrt{\frac{q}{r}} \left( \frac{1}{\Gamma\left( \frac{r}{2} \right) 2^{r/2}}\, q^{r/2 - 1} \exp\left( -q/2 \right) \right) dq \\
&= \frac{\Gamma\left( \frac{r+1}{2} \right)}{\sqrt{r\pi}\, \Gamma\left( \frac{r}{2} \right)} \left( 1 + \frac{x^2}{r} \right)^{-\left( \frac{r+1}{2} \right)}
\end{align*}
which is that of the student $t$ with $r$ degrees of freedom. $\blacksquare$
Appendix C

Asymptotic Theory

C.1 Inequalities

The following inequalities are frequently used in asymptotic distribution theory.

Jensen's Inequality. If $g(\cdot) : \mathbb{R} \to \mathbb{R}$ is convex, then for any random variable $x$ for which $E|x| < \infty$ and $E|g(x)| < \infty$,
\[
g(E(x)) \le E(g(x)). \tag{C.1}
\]

Expectation Inequality. For any random variable $x$ for which $E|x| < \infty$,
\[
|E(x)| \le E|x|. \tag{C.2}
\]

Cauchy-Schwarz Inequality. For any random $m \times n$ matrices $\mathbf{X}$ and $\mathbf{Y}$,
\[
E\left\| \mathbf{X}'\mathbf{Y} \right\| \le \left( E\|\mathbf{X}\|^2 \right)^{1/2} \left( E\|\mathbf{Y}\|^2 \right)^{1/2}. \tag{C.3}
\]

Holder's Inequality. If $p > 1$ and $q > 1$ and $\frac{1}{p} + \frac{1}{q} = 1$, then for any random $m \times n$ matrices $\mathbf{X}$ and $\mathbf{Y}$,
\[
E\left\| \mathbf{X}'\mathbf{Y} \right\| \le \left( E\|\mathbf{X}\|^p \right)^{1/p} \left( E\|\mathbf{Y}\|^q \right)^{1/q}. \tag{C.4}
\]

Minkowski's Inequality. For any random $m \times n$ matrices $\mathbf{X}$ and $\mathbf{Y}$,
\[
\left( E\|\mathbf{X} + \mathbf{Y}\|^p \right)^{1/p} \le \left( E\|\mathbf{X}\|^p \right)^{1/p} + \left( E\|\mathbf{Y}\|^p \right)^{1/p}. \tag{C.5}
\]

Markov's Inequality. For any random vector $\mathbf{x}$ and non-negative function $g(\mathbf{x}) \ge 0$,
\[
P(g(\mathbf{x}) > \alpha) \le \alpha^{-1} E g(\mathbf{x}). \tag{C.6}
\]
Proof of Jensen's Inequality. Let $a + bu$ be the tangent line to $g(u)$ at $u = Ex$. Since $g(u)$ is convex, tangent lines lie below it. So for all $u$, $g(u) \ge a + bu$, yet $g(Ex) = a + bEx$ since the curve is tangent at $Ex$. Applying expectations, $Eg(x) \ge a + bEx = g(Ex)$, as stated. $\blacksquare$

Proof of Expectation Inequality. Follows from an application of Jensen's Inequality, noting that the function $g(u) = |u|$ is convex. $\blacksquare$
Proof of Holder's Inequality. Since $\frac{1}{p} + \frac{1}{q} = 1$, an application of Jensen's Inequality shows that for any real $a$ and $b$
\[
\exp\left( \frac{1}{p} a + \frac{1}{q} b \right) \le \frac{1}{p} \exp(a) + \frac{1}{q} \exp(b).
\]
Setting $u = \exp(a)$ and $v = \exp(b)$ this implies
\[
u^{1/p} v^{1/q} \le \frac{u}{p} + \frac{v}{q}
\]
and this inequality holds for any $u > 0$ and $v > 0$.
Set $u = \|\mathbf{X}\|^p / E\|\mathbf{X}\|^p$ and $v = \|\mathbf{Y}\|^q / E\|\mathbf{Y}\|^q$. Note that $Eu = Ev = 1$. By the matrix Schwarz Inequality (A.8), $\|\mathbf{X}'\mathbf{Y}\| \le \|\mathbf{X}\| \|\mathbf{Y}\|$. Thus
\begin{align*}
\frac{E\|\mathbf{X}'\mathbf{Y}\|}{\left( E\|\mathbf{X}\|^p \right)^{1/p} \left( E\|\mathbf{Y}\|^q \right)^{1/q}} &\le \frac{E\left( \|\mathbf{X}\| \|\mathbf{Y}\| \right)}{\left( E\|\mathbf{X}\|^p \right)^{1/p} \left( E\|\mathbf{Y}\|^q \right)^{1/q}} \\
&= E\left( u^{1/p} v^{1/q} \right) \\
&\le E\left( \frac{u}{p} + \frac{v}{q} \right) \\
&= \frac{1}{p} + \frac{1}{q} = 1,
\end{align*}
which is (C.4). $\blacksquare$
Proof of Minkowski's Inequality. Note that by rewriting, using the triangle inequality (A.9), and then applying Holder's Inequality to the two expectations
\begin{align*}
E\|\mathbf{X} + \mathbf{Y}\|^p &= E\left( \|\mathbf{X} + \mathbf{Y}\| \, \|\mathbf{X} + \mathbf{Y}\|^{p-1} \right) \\
&\le E\left( \|\mathbf{X}\| \, \|\mathbf{X} + \mathbf{Y}\|^{p-1} \right) + E\left( \|\mathbf{Y}\| \, \|\mathbf{X} + \mathbf{Y}\|^{p-1} \right) \\
&\le \left( E\|\mathbf{X}\|^p \right)^{1/p} E\left( \|\mathbf{X} + \mathbf{Y}\|^{q(p-1)} \right)^{1/q} + \left( E\|\mathbf{Y}\|^p \right)^{1/p} E\left( \|\mathbf{X} + \mathbf{Y}\|^{q(p-1)} \right)^{1/q} \\
&= \left( \left( E\|\mathbf{X}\|^p \right)^{1/p} + \left( E\|\mathbf{Y}\|^p \right)^{1/p} \right) E\left( \|\mathbf{X} + \mathbf{Y}\|^p \right)^{(p-1)/p}
\end{align*}
where the second inequality picks $q$ to satisfy $1/p + 1/q = 1$, and the final equality uses this fact to make the substitution $q = p/(p-1)$ and then collects terms. Dividing both sides by $E\left( \|\mathbf{X} + \mathbf{Y}\|^p \right)^{(p-1)/p}$, we obtain (C.5). $\blacksquare$
Proof of Markov's Inequality. Let $f$ denote the density function of $\mathbf{x}$. Then
\begin{align*}
P(g(\mathbf{x}) \ge \alpha) &= \int_{\{\mathbf{u} :\, g(\mathbf{u}) \ge \alpha\}} f(\mathbf{u})\, d\mathbf{u} \\
&\le \int_{\{\mathbf{u} :\, g(\mathbf{u}) \ge \alpha\}} \frac{g(\mathbf{u})}{\alpha}\, f(\mathbf{u})\, d\mathbf{u} \\
&\le \alpha^{-1} \int_{-\infty}^{\infty} g(\mathbf{u}) f(\mathbf{u})\, d\mathbf{u} \\
&= \alpha^{-1} E(g(\mathbf{x})),
\end{align*}
the first inequality using the region of integration $\{g(\mathbf{u}) \ge \alpha\}$. $\blacksquare$
C.2 Weak Law of Large Numbers

Let $z_n \in \mathbb{R}$ be a random variable. We say that $z_n$ converges in probability to $z$ as $n \to \infty$, denoted $z_n \stackrel{p}{\to} z$ as $n \to \infty$, if for all $\delta > 0$,
\[
\lim_{n \to \infty} P(|z_n - z| > \delta) = 0.
\]
This is a probabilistic way of generalizing the mathematical definition of a limit. The limit $z$ may be a constant or may be random.
If $\mathbf{Z}_n \in \mathbb{R}^{k \times r}$ is a matrix, we say that $\mathbf{Z}_n \stackrel{p}{\to} \mathbf{Z}$ as $n \to \infty$ if each element of $\mathbf{Z}_n$ converges in probability to the corresponding element of $\mathbf{Z}$.
The WLLN shows that sample averages converge in probability to the population average.
Theorem C.2.1 Weak Law of Large Numbers (WLLN). If $x_i \in \mathbb{R}$ is iid and $E|x_i| < \infty$, then as $n \to \infty$
\[
\bar{x}_n = \frac{1}{n} \sum_{i=1}^{n} x_i \stackrel{p}{\to} E(x_i).
\]
Proof: Without loss of generality, we can set $E(x_i) = 0$ by recentering $x_i$ on its expectation.
We need to show that for all $\delta > 0$ and $\eta > 0$ there is some $N < \infty$ so that for all $n \ge N$, $P(|\bar{x}_n| > \delta) \le \eta$. Fix $\delta$ and $\eta$. Set $\varepsilon = \delta\eta/3$. Pick $C < \infty$ large enough so that
\[
E\left( |x_i| \, 1(|x_i| > C) \right) \le \varepsilon \tag{C.7}
\]
(where $1(\cdot)$ is the indicator function) which is possible since $E|x_i| < \infty$. Define the random variables
\begin{align*}
w_i &= x_i 1(|x_i| \le C) - E\left( x_i 1(|x_i| \le C) \right) \\
z_i &= x_i 1(|x_i| > C) - E\left( x_i 1(|x_i| > C) \right).
\end{align*}
By the Triangle Inequality (A.9), the Expectation Inequality (C.2), and (C.7),
\begin{align*}
E|\bar{z}_n| &= E\left| \frac{1}{n} \sum_{i=1}^{n} z_i \right| \\
&\le \frac{1}{n} \sum_{i=1}^{n} E|z_i| \\
&= E|z_i| \\
&\le E|x_i| 1(|x_i| > C) + \left| E\left( x_i 1(|x_i| > C) \right) \right| \\
&\le 2 E|x_i| 1(|x_i| > C) \\
&\le 2\varepsilon. \tag{C.8}
\end{align*}
By Jensen's Inequality (C.1), the fact that the $w_i$ are iid and mean zero, and the bound $|w_i| \le 2C$,
\[
\left( E|\bar{w}_n| \right)^2 \le E\bar{w}_n^2 = \frac{E w_i^2}{n} \le \frac{4C^2}{n} \le \varepsilon^2, \tag{C.9}
\]
the final inequality holding for $n \ge 4C^2/\varepsilon^2 = 36C^2/\delta^2\eta^2$.
Finally, by Markov's Inequality (C.6), the fact that $\bar{x}_n = \bar{w}_n + \bar{z}_n$, the triangle inequality, (C.8) and (C.9),
\[
P(|\bar{x}_n| > \delta) \le \frac{E|\bar{x}_n|}{\delta} \le \frac{E|\bar{w}_n| + E|\bar{z}_n|}{\delta} \le \frac{3\varepsilon}{\delta} = \eta,
\]
the equality by the definition of $\varepsilon$. We have shown that for any $\delta > 0$ and $\eta > 0$, then for all $n \ge 36C^2/\delta^2\eta^2$, $P(|\bar{x}_n| > \delta) \le \eta$, as needed. $\blacksquare$
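The WLLN is easy to visualize by simulation. The sketch below (Python/numpy; the exponential design, the threshold $\delta$, and the sample sizes are illustrative assumptions, not part of the text) shows $P(|\bar{x}_n - Ex_i| > \delta)$ shrinking as $n$ grows.

```python
import numpy as np

rng = np.random.default_rng(0)
delta, reps = 0.1, 20_000
mu = 1.0                                   # E(x_i) for an exponential(1) draw

for n in (10, 100, 1000, 10_000):
    xbar = rng.exponential(1.0, (reps, n)).mean(axis=1)
    prob = np.mean(np.abs(xbar - mu) > delta)
    print(f"n={n:6d}  P(|xbar - mu| > {delta}) ~ {prob:.4f}")
```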
C.3 Convergence in Distribution

Let $\mathbf{z}_n$ be a random vector with distribution $F_n(\mathbf{u}) = P(\mathbf{z}_n \le \mathbf{u})$. We say that $\mathbf{z}_n$ converges in distribution to $\mathbf{z}$ as $n \to \infty$, denoted $\mathbf{z}_n \stackrel{d}{\to} \mathbf{z}$, where $\mathbf{z}$ has distribution $F(\mathbf{u}) = P(\mathbf{z} \le \mathbf{u})$, if for all $\mathbf{u}$ at which $F(\mathbf{u})$ is continuous, $F_n(\mathbf{u}) \to F(\mathbf{u})$ as $n \to \infty$.

Theorem C.3.1 Central Limit Theorem (CLT). If $\mathbf{x}_i \in \mathbb{R}^k$ is iid and $E x_{ji}^2 < \infty$ for $j = 1, ..., k$, then as $n \to \infty$
\[
\sqrt{n}\left( \bar{\mathbf{x}}_n - \boldsymbol{\mu} \right) = \frac{1}{\sqrt{n}} \sum_{i=1}^{n} \left( \mathbf{x}_i - \boldsymbol{\mu} \right) \stackrel{d}{\to} \mathrm{N}(\mathbf{0}, \mathbf{V})
\]
where $\boldsymbol{\mu} = E\mathbf{x}_i$ and $\mathbf{V} = E\left( \mathbf{x}_i - \boldsymbol{\mu} \right)\left( \mathbf{x}_i - \boldsymbol{\mu} \right)'$.
Proof: The moment bound $E x_{ji}^2 < \infty$ is sufficient to guarantee that the elements of $\boldsymbol{\mu}$ and $\mathbf{V}$ are well defined and finite. Without loss of generality, it is sufficient to consider the case $\boldsymbol{\mu} = \mathbf{0}$ and $\mathbf{V} = \mathbf{I}_k$.
For $\boldsymbol{\lambda} \in \mathbb{R}^k$, let $C(\boldsymbol{\lambda}) = E \exp\left( i \boldsymbol{\lambda}' \mathbf{x}_i \right)$ denote the characteristic function of $\mathbf{x}_i$ and set $c(\boldsymbol{\lambda}) = \log C(\boldsymbol{\lambda})$. Then observe
\begin{align*}
\frac{\partial}{\partial \boldsymbol{\lambda}} C(\boldsymbol{\lambda}) &= i E\left( \mathbf{x}_i \exp\left( i \boldsymbol{\lambda}' \mathbf{x}_i \right) \right) \\
\frac{\partial^2}{\partial \boldsymbol{\lambda} \partial \boldsymbol{\lambda}'} C(\boldsymbol{\lambda}) &= i^2 E\left( \mathbf{x}_i \mathbf{x}_i' \exp\left( i \boldsymbol{\lambda}' \mathbf{x}_i \right) \right)
\end{align*}
so when evaluated at $\boldsymbol{\lambda} = \mathbf{0}$
\begin{align*}
C(\mathbf{0}) &= 1 \\
\frac{\partial}{\partial \boldsymbol{\lambda}} C(\mathbf{0}) &= i E(\mathbf{x}_i) = \mathbf{0} \\
\frac{\partial^2}{\partial \boldsymbol{\lambda} \partial \boldsymbol{\lambda}'} C(\mathbf{0}) &= -E\left( \mathbf{x}_i \mathbf{x}_i' \right) = -\mathbf{I}_k.
\end{align*}
Furthermore,
\begin{align*}
c_{\boldsymbol{\lambda}}(\boldsymbol{\lambda}) &= \frac{\partial}{\partial \boldsymbol{\lambda}} c(\boldsymbol{\lambda}) = C(\boldsymbol{\lambda})^{-1} \frac{\partial}{\partial \boldsymbol{\lambda}} C(\boldsymbol{\lambda}) \\
c_{\boldsymbol{\lambda}\boldsymbol{\lambda}}(\boldsymbol{\lambda}) &= \frac{\partial^2}{\partial \boldsymbol{\lambda} \partial \boldsymbol{\lambda}'} c(\boldsymbol{\lambda}) = C(\boldsymbol{\lambda})^{-1} \frac{\partial^2}{\partial \boldsymbol{\lambda} \partial \boldsymbol{\lambda}'} C(\boldsymbol{\lambda}) - C(\boldsymbol{\lambda})^{-2} \frac{\partial}{\partial \boldsymbol{\lambda}} C(\boldsymbol{\lambda}) \frac{\partial}{\partial \boldsymbol{\lambda}'} C(\boldsymbol{\lambda})
\end{align*}
so when evaluated at $\boldsymbol{\lambda} = \mathbf{0}$
\[
c(\mathbf{0}) = 0, \qquad c_{\boldsymbol{\lambda}}(\mathbf{0}) = \mathbf{0}, \qquad c_{\boldsymbol{\lambda}\boldsymbol{\lambda}}(\mathbf{0}) = -\mathbf{I}_k.
\]
By a second-order Taylor series expansion of $c(\boldsymbol{\lambda})$ about $\boldsymbol{\lambda} = \mathbf{0}$,
\[
c(\boldsymbol{\lambda}) = c(\mathbf{0}) + c_{\boldsymbol{\lambda}}(\mathbf{0})' \boldsymbol{\lambda} + \frac{1}{2} \boldsymbol{\lambda}' c_{\boldsymbol{\lambda}\boldsymbol{\lambda}}(\boldsymbol{\lambda}^*) \boldsymbol{\lambda} = \frac{1}{2} \boldsymbol{\lambda}' c_{\boldsymbol{\lambda}\boldsymbol{\lambda}}(\boldsymbol{\lambda}^*) \boldsymbol{\lambda} \tag{C.10}
\]
where $\boldsymbol{\lambda}^*$ lies on the line segment joining $\mathbf{0}$ and $\boldsymbol{\lambda}$.
We now compute $C_n(\boldsymbol{\lambda}) = E \exp\left( i \boldsymbol{\lambda}' \sqrt{n}\, \bar{\mathbf{x}}_n \right)$, the characteristic function of $\sqrt{n}\, \bar{\mathbf{x}}_n$. By the properties of the exponential function, the independence of the $\mathbf{x}_i$, the definition of $c(\boldsymbol{\lambda})$ and (C.10)
\begin{align*}
\log C_n(\boldsymbol{\lambda}) &= \log E \exp\left( i \frac{1}{\sqrt{n}} \sum_{i=1}^{n} \boldsymbol{\lambda}' \mathbf{x}_i \right) \\
&= \log E \prod_{i=1}^{n} \exp\left( i \frac{1}{\sqrt{n}} \boldsymbol{\lambda}' \mathbf{x}_i \right) \\
&= \log \prod_{i=1}^{n} E \exp\left( i \frac{1}{\sqrt{n}} \boldsymbol{\lambda}' \mathbf{x}_i \right) \\
&= n\, c\!\left( \frac{\boldsymbol{\lambda}}{\sqrt{n}} \right) \\
&= \frac{1}{2} \boldsymbol{\lambda}' c_{\boldsymbol{\lambda}\boldsymbol{\lambda}}(\boldsymbol{\lambda}_n) \boldsymbol{\lambda}
\end{align*}
where $\boldsymbol{\lambda}_n \to \mathbf{0}$ lies on the line segment joining $\mathbf{0}$ and $\boldsymbol{\lambda}/\sqrt{n}$. Since $c_{\boldsymbol{\lambda}\boldsymbol{\lambda}}(\boldsymbol{\lambda}_n) \to c_{\boldsymbol{\lambda}\boldsymbol{\lambda}}(\mathbf{0}) = -\mathbf{I}_k$, we see that as $n \to \infty$,
\[
C_n(\boldsymbol{\lambda}) \to \exp\left( -\frac{1}{2} \boldsymbol{\lambda}' \boldsymbol{\lambda} \right)
\]
the characteristic function of the $\mathrm{N}(\mathbf{0}, \mathbf{I}_k)$ distribution. This is sufficient to establish the theorem. $\blacksquare$
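A quick Monte Carlo illustration of Theorem C.3.1 (a sketch in Python/numpy; the skewed chi-square-type design, sample size, and quantiles are arbitrary assumptions): the standardized sample mean of iid draws is compared with the N(0,1) benchmark through a few quantiles.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 200, 100_000

# Skewed iid draws: x_i = (w^2 - 1)/sqrt(2) with w ~ N(0,1), so mu = 0 and V = 1.
w = rng.standard_normal((reps, n))
x = (w**2 - 1.0) / np.sqrt(2.0)
z = np.sqrt(n) * x.mean(axis=1)        # sqrt(n) * (xbar_n - mu)

for p, q_normal in [(0.05, -1.645), (0.5, 0.0), (0.95, 1.645)]:
    print(f"quantile {p}: simulated {np.quantile(z, p):+.3f}  vs N(0,1) {q_normal:+.3f}")
```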
C.4 Asymptotic Transformations

Theorem C.4.1 Continuous Mapping Theorem 1 (CMT). If $z_n \stackrel{p}{\to} c$ as $n \to \infty$ and $g(\cdot)$ is continuous at $c$, then $g(z_n) \stackrel{p}{\to} g(c)$ as $n \to \infty$.

Proof: Since $g$ is continuous at $c$, for all $\varepsilon > 0$ we can find a $\delta > 0$ such that if $\|z_n - c\| < \delta$ then $|g(z_n) - g(c)| \le \varepsilon$. Recall that $A \subset B$ implies $P(A) \le P(B)$. Thus $P(|g(z_n) - g(c)| \le \varepsilon) \ge P(\|z_n - c\| < \delta) \to 1$ as $n \to \infty$ by the assumption that $z_n \stackrel{p}{\to} c$. Hence $g(z_n) \stackrel{p}{\to} g(c)$ as $n \to \infty$. $\blacksquare$

Theorem C.4.2 Continuous Mapping Theorem 2. If $z_n \stackrel{d}{\to} z$ as $n \to \infty$ and $g(\cdot)$ is continuous, then $g(z_n) \stackrel{d}{\to} g(z)$ as $n \to \infty$.
Theorem C.4.3 Delta Method: If $\sqrt{n}\left( \theta_n - \theta_0 \right) \stackrel{d}{\to} \mathrm{N}(0, \Sigma)$, where $\theta$ is $m \times 1$ and $\Sigma$ is $m \times m$, and $g(\theta) : \mathbb{R}^m \to \mathbb{R}^k$, $k \le m$, then
\[
\sqrt{n}\left( g(\theta_n) - g(\theta_0) \right) \stackrel{d}{\to} \mathrm{N}\left( 0, g_\theta \Sigma g_\theta' \right)
\]
where $g_\theta(\theta) = \frac{\partial}{\partial \theta'} g(\theta)$ and $g_\theta = g_\theta(\theta_0)$.

Proof: By a vector Taylor series expansion, for each element of $g$,
\[
g_j(\theta_n) = g_j(\theta_0) + g_{j\theta}(\theta_{jn}^*) \left( \theta_n - \theta_0 \right)
\]
where $\theta_{jn}^*$ lies on the line segment between $\theta_n$ and $\theta_0$ and therefore converges in probability to $\theta_0$. It follows that $a_{jn} = g_{j\theta}(\theta_{jn}^*) - g_{j\theta} \stackrel{p}{\to} 0$. Stacking across elements of $g$, we find
\[
\sqrt{n}\left( g(\theta_n) - g(\theta_0) \right) = \left( g_\theta + a_n \right) \sqrt{n}\left( \theta_n - \theta_0 \right) \stackrel{d}{\to} g_\theta\, \mathrm{N}(0, \Sigma) = \mathrm{N}\left( 0, g_\theta \Sigma g_\theta' \right). \qquad \blacksquare
\]
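The delta method can also be checked numerically. The following sketch (Python/numpy; the exponential design, the choice $g(\theta) = \log \theta$, and the simulation sizes are illustrative assumptions) compares the simulated variance of $\sqrt{n}\left( g(\hat{\theta}_n) - g(\theta_0) \right)$ with the delta-method value $g_\theta \Sigma g_\theta'$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps, theta0 = 500, 50_000, 2.0

x = rng.exponential(theta0, (reps, n))     # E x_i = theta0, var x_i = theta0^2
theta_hat = x.mean(axis=1)                 # sqrt(n)(theta_hat - theta0) -> N(0, theta0^2)

# g(theta) = log(theta), so g_theta = 1/theta0 and the limit variance is theta0^2/theta0^2 = 1.
z = np.sqrt(n) * (np.log(theta_hat) - np.log(theta0))
print("simulated variance:", z.var(), " (delta-method value: 1.0)")
```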
Appendix D

Maximum Likelihood

If the distribution of $y_i$ is $F(y, \theta)$ where $F$ is a known distribution function and $\theta \in \Theta$ is an unknown $m \times 1$ vector, we say that the distribution is parametric and that $\theta$ is the parameter of the distribution $F$. The space $\Theta$ is the set of permissible values for $\theta$. In this setting the method of maximum likelihood is the appropriate technique for estimation and inference on $\theta$.
If the distribution $F$ is continuous then the density of $y_i$ can be written as $f(y, \theta)$ and the joint density of a random sample $(y_1, ..., y_n)$ is
\[
f_n(y_1, ..., y_n, \theta) = \prod_{i=1}^{n} f(y_i, \theta).
\]
The likelihood of the sample is this joint density evaluated at the observed sample values, viewed as a function of $\theta$. The log-likelihood function is its natural log
\[
\log L(\theta) = \sum_{i=1}^{n} \log f(y_i, \theta).
\]
If the distribution $F$ is discrete, the likelihood and log-likelihood are constructed by setting $f(y, \theta) = P(y_i = y, \theta)$.
Define the Hessian
\[
H = -E \frac{\partial^2}{\partial \theta \partial \theta'} \log f(y_i, \theta_0) \tag{D.1}
\]
and the outer product matrix
\[
\Omega = E\left( \frac{\partial}{\partial \theta} \log f(y_i, \theta_0)\, \frac{\partial}{\partial \theta} \log f(y_i, \theta_0)' \right). \tag{D.2}
\]
Two important features of the likelihood are

Theorem D.0.4
\[
\frac{\partial}{\partial \theta} E \log f(y_i, \theta) \Big|_{\theta = \theta_0} = 0 \tag{D.3}
\]
\[
H = \Omega = \mathcal{I}_0 \tag{D.4}
\]

The matrix $\mathcal{I}_0$ is called the information, and the equality (D.4) is often called the information matrix equality.

Theorem D.0.5 Cramer-Rao Lower Bound. If $\tilde{\theta}$ is an unbiased estimator of $\theta \in \mathbb{R}$, then $\mathrm{var}(\tilde{\theta}) \ge (n \mathcal{I}_0)^{-1}$.
The Cramer-Rao Theorem gives a lower bound for estimation. However, the restriction to unbiased estimators means that the theorem has little direct relevance for finite sample efficiency.
The maximum likelihood estimator or MLE $\hat{\theta}$ is the parameter value which maximizes the likelihood (equivalently, which maximizes the log-likelihood). We can write this as
\[
\hat{\theta} = \underset{\theta \in \Theta}{\mathrm{argmax}} \, \log L(\theta).
\]
In some simple cases, we can find an explicit expression for $\hat{\theta}$ as a function of the data, but these cases are rare. More typically, the MLE $\hat{\theta}$ must be found by numerical methods.
Why do we believe that the MLE $\hat{\theta}$ is estimating the parameter $\theta$? Observe that when standardized, the log-likelihood is a sample average
\[
\frac{1}{n} \log L(\theta) = \frac{1}{n} \sum_{i=1}^{n} \log f(y_i, \theta) \stackrel{p}{\to} E \log f(y_i, \theta).
\]
As the MLE $\hat{\theta}$ maximizes the left-hand side, we can see that it is an estimator of the maximizer of the right-hand side. The first-order condition for the latter problem is
\[
0 = \frac{\partial}{\partial \theta} E \log f(y_i, \theta)
\]
which holds at $\theta = \theta_0$ by (D.3). In fact, under conventional regularity conditions, $\hat{\theta}$ is consistent for this value, $\hat{\theta} \stackrel{p}{\to} \theta_0$ as $n \to \infty$.
Theorem D.0.6 Under regularity conditions, $\sqrt{n}\left( \hat{\theta} - \theta_0 \right) \stackrel{d}{\to} \mathrm{N}\left( 0, \mathcal{I}_0^{-1} \right)$.

Thus in large samples, the approximate variance of the MLE is $(n \mathcal{I}_0)^{-1}$, which is the Cramer-Rao lower bound. Thus in large samples the MLE has approximately the best possible variance. Therefore the MLE is called asymptotically efficient.
Typically, to estimate the asymptotic variance of the MLE we use an estimate based on the Hessian formula (D.1)
\[
\hat{H} = -\frac{1}{n} \sum_{i=1}^{n} \frac{\partial^2}{\partial \theta \partial \theta'} \log f\left( y_i, \hat{\theta} \right). \tag{D.5}
\]
We then set $\hat{\mathcal{I}}_0^{-1} = \hat{H}^{-1}$. Asymptotic standard errors for $\hat{\theta}$ are then the square roots of the diagonal elements of $n^{-1} \hat{\mathcal{I}}_0^{-1}$.
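As a concrete illustration of (D.5), the sketch below (Python/numpy; the exponential model $f(y, \theta) = \theta^{-1}\exp(-y/\theta)$ is an assumed example, not taken from the text, and the seed and sample size are arbitrary) computes the MLE, the Hessian estimate $\hat{H}$, and the implied asymptotic standard error.

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true, n = 2.0, 1_000
y = rng.exponential(theta_true, n)

# Model: log f(y, theta) = -log(theta) - y/theta, so the MLE is the sample mean.
theta_hat = y.mean()

# Hessian estimate (D.5): minus the average second derivative of log f at theta_hat.
second_deriv = 1.0 / theta_hat**2 - 2.0 * y / theta_hat**3
H_hat = -np.mean(second_deriv)              # equals 1/theta_hat^2 in this model

# Asymptotic standard error: square root of n^{-1} * H_hat^{-1}.
se = np.sqrt(1.0 / (n * H_hat))
print(f"theta_hat = {theta_hat:.4f}, standard error = {se:.4f}")
```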
Sometimes a parametric density function $f(y, \theta)$ is used to approximate the true unknown density $f(y)$, but it is not literally believed that the model $f(y, \theta)$ is necessarily the true density. In this case, we refer to $\log L(\theta)$ as a quasi-likelihood and its maximizer $\hat{\theta}$ as a quasi-MLE or QMLE.
In this case there is not a true value of the parameter $\theta$. Instead we define the pseudo-true value $\theta_0$ as the maximizer of
\[
E \log f(y_i, \theta) = \int f(y) \log f(y, \theta)\, dy
\]
which is the same as the minimizer of
\[
KLIC = \int f(y) \log\left( \frac{f(y)}{f(y, \theta)} \right) dy,
\]
the Kullback-Leibler information distance between the true density $f(y)$ and the parametric density $f(y, \theta)$. Thus the QMLE $\theta_0$ is the value which makes the parametric density closest to the true density according to this measure of distance. The QMLE is consistent for the pseudo-true value, but has a different covariance matrix than in the pure MLE case, since the information matrix equality (D.4) does not hold. A minor adjustment to Theorem D.0.6 yields the asymptotic distribution of the QMLE:
\[
\sqrt{n}\left( \hat{\theta} - \theta_0 \right) \stackrel{d}{\to} \mathrm{N}(0, V), \qquad V = H^{-1} \Omega H^{-1}.
\]
The moment estimator for $V$ is
\[
\hat{V} = \hat{H}^{-1} \hat{\Omega} \hat{H}^{-1}
\]
where $\hat{H}$ is given in (D.5) and
\[
\hat{\Omega} = \frac{1}{n} \sum_{i=1}^{n} \frac{\partial}{\partial \theta} \log f\left( y_i, \hat{\theta} \right)\, \frac{\partial}{\partial \theta} \log f\left( y_i, \hat{\theta} \right)'.
\]
Asymptotic standard errors (sometimes called QMLE standard errors) are then the square roots of the diagonal elements of $n^{-1} \hat{V}$.
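A minimal sketch of the sandwich (QMLE) standard error, under the same assumed exponential quasi-likelihood as in the MLE example above, but with data generated from a different (lognormal) distribution so that the information matrix equality fails; the data-generating choice and sample size are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2_000
y = rng.lognormal(mean=0.0, sigma=1.0, size=n)   # true density is not exponential

# Quasi-likelihood: log f(y, theta) = -log(theta) - y/theta; the QMLE is still the sample mean.
theta_hat = y.mean()

score = -1.0 / theta_hat + y / theta_hat**2                       # d/dtheta log f at theta_hat
H_hat = -np.mean(1.0 / theta_hat**2 - 2.0 * y / theta_hat**3)     # Hessian estimate (D.5)
Omega_hat = np.mean(score**2)                                     # outer-product estimate

V_hat = Omega_hat / H_hat**2                  # H^{-1} Omega H^{-1} in the scalar case
se_qmle = np.sqrt(V_hat / n)
se_hessian = np.sqrt(1.0 / (n * H_hat))       # would be valid only if the model were true
print(f"QMLE se = {se_qmle:.4f}, Hessian-only se = {se_hessian:.4f}")
```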
Proof of Theorem D.0.4. To see (D.3),
\begin{align*}
\frac{\partial}{\partial \theta} E \log f(y_i, \theta) \Big|_{\theta = \theta_0} &= \frac{\partial}{\partial \theta} \int \log f(y, \theta)\, f(y, \theta_0)\, dy \Big|_{\theta = \theta_0} \\
&= \int \frac{\partial}{\partial \theta} f(y, \theta)\, \frac{f(y, \theta_0)}{f(y, \theta)}\, dy \Big|_{\theta = \theta_0} \\
&= \frac{\partial}{\partial \theta} \int f(y, \theta)\, dy \Big|_{\theta = \theta_0} \\
&= \frac{\partial}{\partial \theta} 1 \Big|_{\theta = \theta_0} = 0.
\end{align*}
Similarly, we can show that
\[
E\left( \frac{\frac{\partial^2}{\partial \theta \partial \theta'} f(y_i, \theta_0)}{f(y_i, \theta_0)} \right) = 0.
\]
By direct computation,
\begin{align*}
\frac{\partial^2}{\partial \theta \partial \theta'} \log f(y_i, \theta_0) &= \frac{\frac{\partial^2}{\partial \theta \partial \theta'} f(y_i, \theta_0)}{f(y_i, \theta_0)} - \frac{\frac{\partial}{\partial \theta} f(y_i, \theta_0)\, \frac{\partial}{\partial \theta} f(y_i, \theta_0)'}{f(y_i, \theta_0)^2} \\
&= \frac{\frac{\partial^2}{\partial \theta \partial \theta'} f(y_i, \theta_0)}{f(y_i, \theta_0)} - \frac{\partial}{\partial \theta} \log f(y_i, \theta_0)\, \frac{\partial}{\partial \theta} \log f(y_i, \theta_0)'.
\end{align*}
Taking expectations yields (D.4). $\blacksquare$
Proof of Theorem D.0.5. Let $Y = (y_1, ..., y_n)$ be the sample, and set
\[
S = \frac{\partial}{\partial \theta} \log f_n(Y, \theta_0) = \sum_{i=1}^{n} \frac{\partial}{\partial \theta} \log f(y_i, \theta_0)
\]
which by Theorem D.0.4 has mean zero and variance $nH$. Write the estimator $\tilde{\theta} = \tilde{\theta}(Y)$ as a function of the data. Since $\tilde{\theta}$ is unbiased for any $\theta$,
\[
\theta = E\tilde{\theta} = \int \tilde{\theta}(Y)\, f(Y, \theta)\, dY.
\]
Differentiating with respect to $\theta$ and evaluating at $\theta_0$ yields
\[
1 = \int \tilde{\theta}(Y)\, \frac{\partial}{\partial \theta} f(Y, \theta)\, dY = \int \tilde{\theta}(Y)\, \frac{\partial}{\partial \theta} \log f(Y, \theta)\, f(Y, \theta_0)\, dY = E\left( \tilde{\theta} S \right).
\]
By the Cauchy-Schwarz inequality
\[
1 = \left| E\left( \tilde{\theta} S \right) \right|^2 \le \mathrm{var}(S)\, \mathrm{var}\left( \tilde{\theta} \right)
\]
so
\[
\mathrm{var}\left( \tilde{\theta} \right) \ge \frac{1}{\mathrm{var}(S)} = \frac{1}{nH}. \qquad \blacksquare
\]
Proof of Theorem D.0.6. Taking the first-order condition for maximization of $\log L(\theta)$, and making a first-order Taylor series expansion,
\begin{align*}
0 &= \frac{\partial}{\partial \theta} \log L(\theta) \Big|_{\theta = \hat{\theta}} \\
&= \sum_{i=1}^{n} \frac{\partial}{\partial \theta} \log f\left( y_i, \hat{\theta} \right) \\
&\simeq \sum_{i=1}^{n} \frac{\partial}{\partial \theta} \log f(y_i, \theta_0) + \sum_{i=1}^{n} \frac{\partial^2}{\partial \theta \partial \theta'} \log f(y_i, \theta_n) \left( \hat{\theta} - \theta_0 \right),
\end{align*}
where $\theta_n$ lies on a line segment joining $\hat{\theta}$ and $\theta_0$. (Technically, the specific value of $\theta_n$ varies by row in this expansion.) Rewriting this equation, we find
\[
\left( \hat{\theta} - \theta_0 \right) = \left( -\sum_{i=1}^{n} \frac{\partial^2}{\partial \theta \partial \theta'} \log f(y_i, \theta_n) \right)^{-1} \left( \sum_{i=1}^{n} \frac{\partial}{\partial \theta} \log f(y_i, \theta_0) \right).
\]
Since $\frac{\partial}{\partial \theta} \log f(y_i, \theta_0)$ is mean-zero with covariance matrix $\Omega$, an application of the CLT yields
\[
\frac{1}{\sqrt{n}} \sum_{i=1}^{n} \frac{\partial}{\partial \theta} \log f(y_i, \theta_0) \stackrel{d}{\to} \mathrm{N}(0, \Omega).
\]
The analysis of the sample Hessian is somewhat more complicated due to the presence of $\theta_n$. Let $H(\theta) = -\frac{\partial^2}{\partial \theta \partial \theta'} \log f(y_i, \theta)$. If it is continuous in $\theta$, then since $\theta_n \stackrel{p}{\to} \theta_0$ we find $H(\theta_n) \stackrel{p}{\to} H$ and so
\[
-\frac{1}{n} \sum_{i=1}^{n} \frac{\partial^2}{\partial \theta \partial \theta'} \log f(y_i, \theta_n) = \frac{1}{n} \sum_{i=1}^{n} \left( -\frac{\partial^2}{\partial \theta \partial \theta'} \log f(y_i, \theta_n) - H(\theta_n) \right) + H(\theta_n) \stackrel{p}{\to} H
\]
by an application of a uniform WLLN. Together,
\[
\sqrt{n}\left( \hat{\theta} - \theta_0 \right) \stackrel{d}{\to} H^{-1} \mathrm{N}(0, \Omega) = \mathrm{N}\left( 0, H^{-1} \Omega H^{-1} \right) = \mathrm{N}\left( 0, H^{-1} \right),
\]
the final equality using Theorem D.0.4. $\blacksquare$
Appendix E

Numerical Optimization

Many econometric estimators are defined by an optimization problem of the form
\[
\hat{\theta} = \underset{\theta \in \Theta}{\mathrm{argmin}} \, Q(\theta) \tag{E.1}
\]
where the parameter is $\theta \in \Theta \subset \mathbb{R}^m$ and the criterion function is $Q(\theta) : \Theta \to \mathbb{R}$. For example NLLS, GLS, MLE and GMM estimators take this form. In most cases, $Q(\theta)$ can be computed for given $\theta$, but $\hat{\theta}$ is not available in closed form. In this case, numerical methods are required to obtain $\hat{\theta}$.
E.1 Grid Search

Many optimization problems are either one dimensional ($m = 1$) or involve one-dimensional optimization as a sub-problem (for example, a line search). In this context grid search may be employed.
Grid Search. Let $\Theta = [a, b]$ be an interval. Pick some $\varepsilon > 0$ and set $G = (b - a)/\varepsilon$ to be the number of gridpoints. Construct an equally spaced grid on the region $[a, b]$ with $G$ gridpoints, which is $\{\theta(j) = a + j(b - a)/G : j = 0, ..., G\}$. At each point evaluate the criterion function and find the gridpoint which yields the smallest value of the criterion, which is $\theta(\hat{\jmath})$ where $\hat{\jmath} = \mathrm{argmin}_{0 \le j \le G} Q(\theta(j))$. This value $\theta(\hat{\jmath})$ is the gridpoint estimate of $\hat{\theta}$. If the grid is sufficiently fine to capture small oscillations in $Q(\theta)$, the approximation error is bounded by $\varepsilon$, that is, $\left| \theta(\hat{\jmath}) - \hat{\theta} \right| \le \varepsilon$. Plots of $Q(\theta(j))$ against $\theta(j)$ can help diagnose errors in grid selection. This method is quite robust but potentially costly.
Two-Step Grid Search. The grid search method can be refined by a two-step execution. For an error bound of $\varepsilon$ pick $G$ so that $G^2 = (b - a)/\varepsilon$. For the first step define an equally spaced grid on the region $[a, b]$ with $G$ gridpoints, which is $\{\theta(j) = a + j(b - a)/G : j = 0, ..., G\}$. At each point evaluate the criterion function and let $\hat{\jmath} = \mathrm{argmin}_{0 \le j \le G} Q(\theta(j))$. For the second step define an equally spaced grid on $[\theta(\hat{\jmath} - 1), \theta(\hat{\jmath} + 1)]$ with $G$ gridpoints, which is $\{\theta'(k) = \theta(\hat{\jmath} - 1) + 2k(b - a)/G^2 : k = 0, ..., G\}$. Let $\hat{k} = \mathrm{argmin}_{0 \le k \le G} Q(\theta'(k))$. The estimate of $\hat{\theta}$ is $\theta'(\hat{k})$. The advantage of the two-step method over a one-step grid search is that the number of function evaluations has been reduced from $(b - a)/\varepsilon$ to $2\sqrt{(b - a)/\varepsilon}$, which can be substantial. The disadvantage is that if the function $Q(\theta)$ is irregular, the first-step grid may not bracket $\hat{\theta}$, which thus would be missed.
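The following sketch (Python/numpy; the quadratic-plus-oscillation test criterion, the interval $[a, b]$, and the error bound are arbitrary placeholders) implements the two-step grid search just described.

```python
import numpy as np

def two_step_grid_search(Q, a, b, eps):
    """Two-step grid search for the minimizer of Q on [a, b] with error bound roughly eps."""
    G = int(np.ceil(np.sqrt((b - a) / eps)))           # choose G so that G^2 = (b - a)/eps
    # First step: coarse grid on [a, b].
    grid1 = a + np.arange(G + 1) * (b - a) / G
    j_hat = np.argmin([Q(t) for t in grid1])
    # Second step: fine grid on [theta(j_hat - 1), theta(j_hat + 1)].
    lo = grid1[max(j_hat - 1, 0)]
    hi = grid1[min(j_hat + 1, G)]
    grid2 = lo + np.arange(G + 1) * (hi - lo) / G
    k_hat = np.argmin([Q(t) for t in grid2])
    return grid2[k_hat]

# Example criterion: a quadratic plus a small oscillation (an arbitrary test function).
Q = lambda theta: (theta - 1.23) ** 2 + 0.1 * np.cos(5.0 * theta)
print(two_step_grid_search(Q, a=0.0, b=3.0, eps=1e-4))
```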
E.2 Gradient Methods

Gradient methods are iterative methods which produce a sequence $\theta_i : i = 1, 2, ...$ which are designed to converge to $\hat{\theta}$. All require the choice of a starting value $\theta_1$, and all require the computation of the gradient of $Q(\theta)$
\[
g(\theta) = \frac{\partial}{\partial \theta} Q(\theta)
\]
and some require the Hessian
\[
H(\theta) = \frac{\partial^2}{\partial \theta \partial \theta'} Q(\theta).
\]
If the functions $g(\theta)$ and $H(\theta)$ are not analytically available, they can be calculated numerically. Take the $j$'th element of $g(\theta)$. Let $\delta_j$ be the $j$'th unit vector (zeros everywhere except for a one in the $j$'th row). Then for $\varepsilon$ small
\[
g_j(\theta) \simeq \frac{Q(\theta + \delta_j \varepsilon) - Q(\theta)}{\varepsilon}.
\]
Similarly,
\[
g_{jk}(\theta) \simeq \frac{Q(\theta + \delta_j \varepsilon + \delta_k \varepsilon) - Q(\theta + \delta_k \varepsilon) - Q(\theta + \delta_j \varepsilon) + Q(\theta)}{\varepsilon^2}.
\]
In many cases, numerical derivatives can work well but can be computationally costly relative to analytic derivatives. In some cases, however, numerical derivatives can be quite unstable.
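A minimal sketch of these forward-difference formulas (Python/numpy; the test criterion and step sizes are arbitrary placeholders, not from the text):

```python
import numpy as np

def numerical_gradient(Q, theta, eps=1e-6):
    """Forward-difference approximation to g(theta) = dQ/dtheta."""
    m = len(theta)
    g = np.zeros(m)
    q0 = Q(theta)
    for j in range(m):
        ej = np.zeros(m); ej[j] = eps          # the j'th unit vector scaled by eps
        g[j] = (Q(theta + ej) - q0) / eps
    return g

def numerical_hessian(Q, theta, eps=1e-4):
    """Forward-difference approximation to H(theta) = d^2 Q / dtheta dtheta'."""
    m = len(theta)
    H = np.zeros((m, m))
    q0 = Q(theta)
    for j in range(m):
        ej = np.zeros(m); ej[j] = eps
        for k in range(m):
            ek = np.zeros(m); ek[k] = eps
            H[j, k] = (Q(theta + ej + ek) - Q(theta + ek) - Q(theta + ej) + q0) / eps**2
    return H

Q = lambda t: (t[0] - 1.0) ** 2 + 2.0 * (t[1] + 0.5) ** 2 + t[0] * t[1]
theta = np.array([0.0, 0.0])
print(numerical_gradient(Q, theta))   # exact gradient at (0, 0) is [-2, 2]
print(numerical_hessian(Q, theta))    # exact Hessian is [[2, 1], [1, 4]]
```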
Most gradient methods are a variant of Newton's method, which is based on a quadratic approximation. By a Taylor expansion for $\theta$ close to $\hat{\theta}$,
\[
0 = g(\hat{\theta}) \simeq g(\theta) + H(\theta)\left( \hat{\theta} - \theta \right)
\]
which implies
\[
\hat{\theta} = \theta - H(\theta)^{-1} g(\theta).
\]
This suggests the iteration rule
\[
\hat{\theta}_{i+1} = \theta_i - H(\theta_i)^{-1} g(\theta_i).
\]
One problem with Newton's method is that it will send the iterations in the wrong direction if $H(\theta_i)$ is not positive definite. One modification to prevent this possibility is quadratic hill-climbing, which sets
\[
\hat{\theta}_{i+1} = \theta_i - \left( H(\theta_i) + \alpha_i I_m \right)^{-1} g(\theta_i)
\]
where $\alpha_i$ is set just above the smallest eigenvalue of $H(\theta_i)$ if $H(\theta)$ is not positive definite.
Another productive modification is to add a scalar steplength $\lambda_i$. In this case the iteration rule takes the form
\[
\theta_{i+1} = \theta_i - D_i g_i \lambda_i \tag{E.2}
\]
where $g_i = g(\theta_i)$ and $D_i = H(\theta_i)^{-1}$ for Newton's method and $D_i = \left( H(\theta_i) + \alpha_i I_m \right)^{-1}$ for quadratic hill-climbing.
Allowing the steplength to be a free parameter allows for a line search, a one-dimensional optimization. To pick $\lambda_i$ write the criterion function as a function of $\lambda$,
\[
Q(\lambda) = Q(\theta_i - D_i g_i \lambda),
\]
a one-dimensional optimization problem. There are two common methods to perform a line search. A quadratic approximation evaluates the first and second derivatives of $Q(\lambda)$ with respect to $\lambda$, and picks $\lambda_i$ as the value minimizing this approximation. The half-step method considers the sequence $\lambda = 1, 1/2, 1/4, 1/8, \ldots$ Each value in the sequence is considered and the criterion $Q(\theta_i - D_i g_i \lambda)$ evaluated. If the criterion has improved over $Q(\theta_i)$, use this value, otherwise move to the next element in the sequence.
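The sketch below (Python/numpy; the criterion, starting value, and tolerances are arbitrary choices) combines the Newton update with the half-step line-search rule just described.

```python
import numpy as np

def newton_half_step(Q, grad, hess, theta, tol=1e-8, max_iter=100):
    """Newton iterations combined with the half-step line search on the steplength lambda."""
    for _ in range(max_iter):
        g = grad(theta)
        if np.linalg.norm(g) < tol:
            break
        step = np.linalg.solve(hess(theta), g)    # D_i g_i with D_i = H(theta_i)^{-1}
        lam, q_old = 1.0, Q(theta)
        while Q(theta - lam * step) >= q_old and lam > 1e-8:
            lam /= 2.0                            # half-step sequence 1, 1/2, 1/4, ...
        theta = theta - lam * step
    return theta

# Arbitrary smooth test criterion with minimizer (1, -0.5).
Q    = lambda t: (t[0] - 1.0) ** 2 + 2.0 * (t[1] + 0.5) ** 2
grad = lambda t: np.array([2.0 * (t[0] - 1.0), 4.0 * (t[1] + 0.5)])
hess = lambda t: np.array([[2.0, 0.0], [0.0, 4.0]])
print(newton_half_step(Q, grad, hess, theta=np.array([5.0, 5.0])))
```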
Newton's method does not perform well if $Q(\theta)$ is irregular, and it can be quite computationally costly if $H(\theta)$ is not analytically available. These problems have motivated alternative choices for the weight matrix $D_i$. These methods are called Quasi-Newton methods. Two popular methods are due to Davidson-Fletcher-Powell (DFP) and Broyden-Fletcher-Goldfarb-Shanno (BFGS).
Let
\begin{align*}
\Delta g_i &= g_i - g_{i-1} \\
\Delta \theta_i &= \theta_i - \theta_{i-1}.
\end{align*}
The DFP method sets
\[
D_i = D_{i-1} + \frac{\Delta \theta_i \Delta \theta_i'}{\Delta \theta_i' \Delta g_i} + \frac{D_{i-1} \Delta g_i \Delta g_i' D_{i-1}}{\Delta g_i' D_{i-1} \Delta g_i}.
\]
The BFGS method sets
\[
D_i = D_{i-1} + \frac{\Delta \theta_i \Delta \theta_i'}{\Delta \theta_i' \Delta g_i} - \frac{\Delta \theta_i \Delta \theta_i'}{\left( \Delta \theta_i' \Delta g_i \right)^2}\, \Delta g_i' D_{i-1} \Delta g_i + \frac{\Delta \theta_i \Delta g_i' D_{i-1}}{\Delta \theta_i' \Delta g_i} + \frac{D_{i-1} \Delta g_i \Delta \theta_i'}{\Delta \theta_i' \Delta g_i}.
\]
For any of the gradient methods, the iterations continue until the sequence has converged in some sense. This can be defined by examining whether $|\theta_i - \theta_{i-1}|$, $|Q(\theta_i) - Q(\theta_{i-1})|$ or $|g(\theta_i)|$ has become small.
E.3 Derivative-Free Methods

All gradient methods can be quite poor in locating the global minimum when $Q(\theta)$ has several local minima. Furthermore, the methods are not well defined when $Q(\theta)$ is non-differentiable. In these cases, alternative optimization methods are required. One example is the simplex method of Nelder-Mead (1965).
A more recent innovation is the method of simulated annealing (SA). For a review see Goffe, Ferrier, and Rogers (1994). The SA method is a sophisticated random search. Like the gradient methods, it relies on an iterative sequence. At each iteration, a random variable is drawn and added to the current value of the parameter. If the resulting criterion is decreased, this new value is accepted. If the criterion is increased, it may still be accepted depending on the extent of the increase and another randomization. The latter property is needed to keep the algorithm from selecting a local minimum. As the iterations continue, the variance of the random innovations is shrunk. The SA algorithm stops when a large number of iterations is unable to improve the criterion. The SA method has been found to be successful at locating global minima. The downside is that it can take considerable computer time to execute.
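A bare-bones version of the simulated annealing idea (a sketch in Python/numpy; the cooling schedule, proposal scale, acceptance rule, and test criterion are all arbitrary choices, not those of Goffe, Ferrier, and Rogers):

```python
import numpy as np

def simulated_annealing(Q, theta0, n_iter=20_000, scale0=1.0, seed=0):
    rng = np.random.default_rng(seed)
    theta, q = np.asarray(theta0, dtype=float), Q(theta0)
    best_theta, best_q = theta.copy(), q
    for i in range(n_iter):
        temp = scale0 / (1.0 + 0.01 * i)                  # shrink the innovation variance
        proposal = theta + temp * rng.standard_normal(theta.shape)
        q_new = Q(proposal)
        # Accept improvements always; accept increases with a probability that
        # decreases in the size of the increase and as the iterations proceed.
        if q_new < q or rng.uniform() < np.exp(-(q_new - q) / max(temp, 1e-12)):
            theta, q = proposal, q_new
            if q < best_q:
                best_theta, best_q = theta.copy(), q
    return best_theta

# Test criterion with many local minima; the global minimizer is at the origin.
Q = lambda t: float(np.sum(t**2) + 2.0 * np.sum(1.0 - np.cos(3.0 * t)))
print(simulated_annealing(Q, theta0=np.array([4.0, -3.0])))
```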
Bibliography
[1] Aitken, A.C. (1935): On least squares and linear combinations of observations, Proceedings
of the Royal Statistical Society, 55, 42-48.
[2] Akaike, H. (1973): Information theory and an extension of the maximum likelihood prin-
ciple. In B. Petroc and F. Csake, eds., Second International Symposium on Information
Theory.
[3] Anderson, T.W. and H. Rubin (1949): Estimation of the parameters of a single equation in
a complete system of stochastic equations, The Annals of Mathematical Statistics, 20, 46-63.
[4] Andrews, D.W.K. (1988): Laws of large numbers for dependent non-identically distributed
random variables, Econometric Theory, 4, 458-467.
[5] Andrews, D.W.K. (1991), Asymptotic normality of series estimators for nonparameric and
semiparametric regression models, Econometrica, 59, 307-345.
[6] Andrews, D.W.K. (1993), Tests for parameter instability and structural change with un-
known change point, Econometrica, 61, 821-8516.
[7] Andrews, D.W.K. and M. Buchinsky: (2000): A three-step method for choosing the number
of bootstrap replications, Econometrica, 68, 23-51.
[8] Andrews, D.W.K. and W. Ploberger (1994): Optimal tests when a nuisance parameter is
present only under the alternative, Econometrica, 62, 1383-1414.
[9] Basmann, R. L. (1957): A generalized classical method of linear estimation of coefficients
in a structural equation, Econometrica, 25, 77-83.
[10] Bekker, P.A. (1994): Alternative approximations to the distributions of instrumental vari-
able estimators, Econometrica, 62, 657-681.
[11] Billingsley, P. (1968): Convergence of Probability Measures. New York: Wiley.
[12] Billingsley, P. (1979): Probability and Measure. New York: Wiley.
[13] Bose, A. (1988): Edgeworth correction by bootstrap in autoregressions, Annals of Statistics,
16, 1709-1722.
[14] Breusch, T.S. and A.R. Pagan (1979): The Lagrange multiplier test and its application to
model specication in econometrics, Review of Economic Studies, 47, 239-253.
[15] Brown, B.W. and W.K. Newey (2002): GMM, efficient bootstrapping, and improved infer-
ence , Journal of Business and Economic Statistics.
[16] Carlstein, E. (1986): The use of subseries methods for estimating the variance of a general
statistic from a stationary time series, Annals of Statistics, 14, 1171-1179.
[17] Chamberlain, G. (1987): Asymptotic efficiency in estimation with conditional moment re-
strictions, Journal of Econometrics, 34, 305-334.
[18] Choi, I. and P.C.B. Phillips (1992): Asymptotic and finite sample distribution theory for IV
estimators and tests in partially identified structural equations, Journal of Econometrics,
51, 113-150.
[19] Chow, G.C. (1960): Tests of equality between sets of coefficients in two linear regressions,
Econometrica, 28, 591-603.
[20] Cragg, John (1992): Quasi-Aitken Estimation for Heteroskedasticity of Unknown Form,
Journal of Econometrics, 54, 179-201.
[21] Davidson, J. (1994): Stochastic Limit Theory: An Introduction for Econometricians. Oxford:
Oxford University Press.
[22] Davison, A.C. and D.V. Hinkley (1997): Bootstrap Methods and their Application. Cambridge
University Press.
[23] Dickey, D.A. and W.A. Fuller (1979): Distribution of the estimators for autoregressive time
series with a unit root, Journal of the American Statistical Association, 74, 427-431.
[24] Donald S.G. and W.K. Newey (2001): Choosing the number of instruments, Econometrica,
69, 1161-1191.
[25] Dufour, J.M. (1997): Some impossibility theorems in econometrics with applications to
structural and dynamic models, Econometrica, 65, 1365-1387.
[26] Efron, Bradley (1979): Bootstrap methods: Another look at the jackknife, Annals of Sta-
tistics, 7, 1-26.
[27] Efron, Bradley (1982): The Jackknife, the Bootstrap, and Other Resampling Plans. Society
for Industrial and Applied Mathematics.
[28] Efron, Bradley and R.J. Tibshirani (1993): An Introduction to the Bootstrap, New York:
Chapman-Hall.
[29] Eicker, F. (1963): Asymptotic normality and consistency of the least squares estimators for
families of linear regressions, Annals of Mathematical Statistics, 34, 447-456.
[30] Engle, R.F. and C.W.J. Granger (1987): Co-integration and error correction: Representa-
tion, estimation and testing, Econometrica, 55, 251-276.
[31] Frisch, R. and F. Waugh (1933): Partial time regressions as compared with individual
trends, Econometrica, 1, 387-401.
[32] Gallant, A.F. and D.W. Nychka (1987): Seminonparametric maximum likelihood estima-
tion, Econometrica, 55, 363-390.
[33] Gallant, A.R. and H. White (1988): A Unified Theory of Estimation and Inference for Non-
linear Dynamic Models. New York: Basil Blackwell.
[34] Goldberger, Arthur S. (1991): A Course in Econometrics. Cambridge: Harvard University
Press.
[35] Goffe, W.L., G.D. Ferrier and J. Rogers (1994): Global optimization of statistical functions
with simulated annealing, Journal of Econometrics, 60, 65-99.
[36] Gauss, K.F. (1809): Theoria motus corporum coelestium, in Werke, Vol. VII, 240-254.
[37] Granger, C.W.J. (1969): Investigating causal relations by econometric models and cross-
spectral methods, Econometrica, 37, 424-438.
[38] Granger, C.W.J. (1981): Some properties of time series data and their use in econometric
specification, Journal of Econometrics, 16, 121-130.
[39] Granger, C.W.J. and T. Teräsvirta (1993): Modelling Nonlinear Economic Relationships,
Oxford University Press, Oxford.
[40] Gregory, A. and M. Veall (1985): On formulating Wald tests of nonlinear restrictions,
Econometrica, 53, 1465-1468,
[41] Hall, A. R. (2000): Covariance matrix estimation and the power of the overidentifying
restrictions test, Econometrica, 68, 1517-1527,
[42] Hall, P. (1992): The Bootstrap and Edgeworth Expansion, New York: Springer-Verlag.
[43] Hall, P. (1994): Methodology and theory for the bootstrap, Handbook of Econometrics,
Vol. IV, eds. R.F. Engle and D.L. McFadden. New York: Elsevier Science.
[44] Hall, P. and J.L. Horowitz (1996): Bootstrap critical values for tests based on Generalized-
Method-of-Moments estimation, Econometrica, 64, 891-916.
[45] Hahn, J. (1996): A note on bootstrapping generalized method of moments estimators,
Econometric Theory, 12, 187-197.
[46] Hansen, B.E. (1992): Efficient estimation and testing of cointegrating vectors in the presence
of deterministic trends, Journal of Econometrics, 53, 87-121.
[47] Hansen, B.E. (1996): Inference when a nuisance parameter is not identified under the null
hypothesis, Econometrica, 64, 413-430.
[48] Hansen, B.E. (2006): Edgeworth expansions for the Wald and GMM statistics for nonlinear
restrictions, Econometric Theory and Practice: Frontiers of Analysis and Applied Research,
edited by Dean Corbae, Steven N. Durlauf and Bruce E. Hansen. Cambridge University Press.
[49] Hansen, L.P. (1982): Large sample properties of generalized method of moments estimators,
Econometrica, 50, 1029-1054.
[50] Hansen, L.P., J. Heaton, and A. Yaron (1996): Finite sample properties of some alternative
GMM estimators, Journal of Business and Economic Statistics, 14, 262-280.
[51] Hausman, J.A. (1978): Specification tests in econometrics, Econometrica, 46, 1251-1271.
[52] Heckman, J. (1979): Sample selection bias as a specification error, Econometrica, 47, 153-
161.
[53] Horowitz, Joel (2001): The Bootstrap, Handbook of Econometrics, Vol. 5, J.J. Heckman
and E.E. Leamer, eds., Elsevier Science, 3159-3228.
[54] Imbens, G.W. (1997): One step estimators for over-identified generalized method of moments
models, Review of Economic Studies, 64, 359-383.
[55] Imbens, G.W., R.H. Spady and P. Johnson (1998): Information theoretic approaches to
inference in moment condition models, Econometrica, 66, 333-357.
[56] Jarque, C.M. and A.K. Bera (1980): Efficient tests for normality, homoskedasticity and
serial independence of regression residuals, Economic Letters, 6, 255-259.
[57] Johansen, S. (1988): Statistical analysis of cointegrating vectors, Journal of Economic
Dynamics and Control, 12, 231-254.
[58] Johansen, S. (1991): Estimation and hypothesis testing of cointegration vectors in the pres-
ence of linear trend, Econometrica, 59, 1551-1580.
[59] Johansen, S. (1995): Likelihood-Based Inference in Cointegrated Vector Auto-Regressive Mod-
els, Oxford University Press.
[60] Johansen, S. and K. Juselius (1992): Testing structural hypotheses in a multivariate cointe-
gration analysis of the PPP and the UIP for the UK, Journal of Econometrics, 53, 211-244.
[61] Kitamura, Y. (2001): Asymptotic optimality and empirical likelihood for testing moment
restrictions, Econometrica, 69, 1661-1672.
[62] Kitamura, Y. and M. Stutzer (1997): An information-theoretic alternative to generalized
method of moments, Econometrica, 65, 861-874..
[63] Koenker, Roger (2005): Quantile Regression. Cambridge University Press.
[64] Kunsch, H.R. (1989): The jackknife and the bootstrap for general stationary observations,
Annals of Statistics, 17, 1217-1241.
[65] Kwiatkowski, D., P.C.B. Phillips, P. Schmidt, and Y. Shin (1992): Testing the null hypoth-
esis of stationarity against the alternative of a unit root: How sure are we that economic time
series have a unit root? Journal of Econometrics, 54, 159-178.
[66] Lafontaine, F. and K.J. White (1986): Obtaining any Wald statistic you want, Economics
Letters, 21, 35-40.
[67] Lovell, M.C. (1963): Seasonal adjustment of economic time series, Journal of the American
Statistical Association, 58, 993-1010.
[68] MacKinnon, J.G. (1990): Critical values for cointegration, in Engle, R.F. and C.W. Granger
(eds.) Long-Run Economic Relationships: Readings in Cointegration, Oxford, Oxford Univer-
sity Press.
[69] MacKinnon, J.G. and H. White (1985): Some heteroskedasticity-consistent covariance ma-
trix estimators with improved finite sample properties, Journal of Econometrics, 29, 305-325.
[70] Magnus, J. R., and H. Neudecker (1988): Matrix Differential Calculus with Applications in
Statistics and Econometrics, New York: John Wiley and Sons.
[71] Muirhead, R.J. (1982): Aspects of Multivariate Statistical Theory. New York: Wiley.
[72] Nelder, J. and R. Mead (1965): A simplex method for function minimization, Computer
Journal, 7, 308-313.
[73] Newey, W.K. (1990): Semiparametric eciency bounds, Journal of Applied Econometrics,
5, 99-135.
[74] Newey, W.K. and K.D. West (1987): Hypothesis testing with ecient method of moments
estimation, International Economic Review, 28, 777-787.
[75] Owen, Art B. (1988): Empirical likelihood ratio confidence intervals for a single functional,
Biometrika, 75, 237-249.
[76] Owen, Art B. (2001): Empirical Likelihood. New York: Chapman & Hall.
[77] Park, J.Y. and P.C.B. Phillips (1988): On the formulation of Wald tests of nonlinear re-
strictions, Econometrica, 56, 1065-1083,
[78] Phillips, P.C.B. (1989): Partially identified econometric models, Econometric Theory, 5,
181-240.
[79] Phillips, P.C.B. and S. Ouliaris (1990): Asymptotic properties of residual based tests for
cointegration, Econometrica, 58, 165-193.
[80] Politis, D.N. and J.P. Romano (1996): The stationary bootstrap, Journal of the American
Statistical Association, 89, 1303-1313.
[81] Pötscher, B.M. (1991): Effects of model selection on inference, Econometric Theory, 7,
163-185.
[82] Qin, J. and J. Lawless (1994): Empirical likelihood and general estimating equations, The
Annals of Statistics, 22, 300-325.
[83] Ramsey, J. B. (1969): Tests for specification errors in classical linear least-squares regression
analysis, Journal of the Royal Statistical Society, Series B, 31, 350-371.
[84] Rudin, W. (1987): Real and Complex Analysis, 3rd edition. New York: McGraw-Hill.
[85] Said, S.E. and D.A. Dickey (1984): Testing for unit roots in autoregressive-moving average
models of unknown order, Biometrika, 71, 599-608.
[86] Shao, J. and D. Tu (1995): The Jackknife and Bootstrap. NY: Springer.
[87] Sargan, J.D. (1958): The estimation of economic relationships using instrumental variables,
Econometrica, 26, 393-415.
[88] Sheather, S.J. and M.C. Jones (1991): A reliable data-based bandwidth selection method
for kernel density estimation, Journal of the Royal Statistical Society, Series B, 53, 683-690.
[89] Shin, Y. (1994): A residual-based test of the null of cointegration against the alternative of
no cointegration, Econometric Theory, 10, 91-115.
[90] Silverman, B.W. (1986): Density Estimation for Statistics and Data Analysis. London: Chap-
man and Hall.
[91] Sims, C.A. (1972): Money, income and causality, American Economic Review, 62, 540-552.
[92] Sims, C.A. (1980): Macroeconomics and reality, Econometrica, 48, 1-48.
[93] Staiger, D. and J.H. Stock (1997): Instrumental variables regression with weak instruments,
Econometrica, 65, 557-586.
[94] Stock, J.H. (1987): Asymptotic properties of least squares estimators of cointegrating vec-
tors, Econometrica, 55, 1035-1056.
[95] Stock, J.H. (1991): Confidence intervals for the largest autoregressive root in U.S. macro-
economic time series, Journal of Monetary Economics, 28, 435-460.
[96] Stock, J.H. and J.H. Wright (2000): GMM with weak identification, Econometrica, 68,
1055-1096.
[97] Theil, H. (1953): Repeated least squares applied to complete equation systems, The Hague,
Central Planning Bureau, mimeo.
[98] Tobin, J. (1958): Estimation of relationships for limited dependent variables, Econometrica,
26, 24-36.
[99] Wald, A. (1943): Tests of statistical hypotheses concerning several parameters when the
number of observations is large, Transactions of the American Mathematical Society, 54,
426-482.
[100] Wang, J. and E. Zivot (1998): Inference on structural parameters in instrumental variables
regression with weak instruments, Econometrica, 66, 1389-1404.
[101] White, H. (1980): A heteroskedasticity-consistent covariance matrix estimator and a direct
test for heteroskedasticity, Econometrica, 48, 817-838.
[102] White, H. (1984): Asymptotic Theory for Econometricians, Academic Press.
[103] Zellner, A. (1962): An efficient method of estimating seemingly unrelated regressions, and
tests for aggregation bias, Journal of the American Statistical Association, 57, 348-368.