Frontier Functions:
Stochastic Frontier Analysis (SFA) &
Data Envelopment Analysis (DEA)
Production and cost functions
A researcher wishes to estimate a production function or a cost function.
The object is to estimate not the average production or average cost, but the maximum
possible production given a set of inputs or the minimum possible cost of a set of
outputs.
OLS regression estimates the mean of the dependent variable conditional on the
explanatory variables;
Quantile regression is based on a quantile (e.g. 10th, 25th, median, 75th, 90th), not the
maximum or minimum;
Sample selection models do not apply, because the max or min cannot be observed
directly and used to define the selected sample;
Limited dependent variable models truncate or censor the dependent variable at known
categories or limits, not at the maximum or minimum.
Frontier functions: definition
None of those standard econometric models is the answer.
The answer is frontier functions, econometric stochastic frontier analysis (SFA) or linear
programming data envelopment analysis (DEA).
Frontier functions estimate maxima or minima of a dependent variable given
explanatory variables, usually to estimate production or cost functions.
All frontier functions come from one paper, Aigner and Chu (1968).
Aigner and Chu (1968)
D.J. Aigner and S.F. Chu (AER 1968), On Estimating the Industry Production
Function invented this area.
"A viable distinction between the average and frontier functions as predictors of
capacity derives from a probability interpretation of alternative forecasts. ... the
frontier we construct is truly a surface of maximum points." This became Stochastic
Frontier Analysis; stochastic = probability interpretation.
Estimation, for primary metals production in state aggregates:
one-stage and two-stage least squares,
quadratic programming (now rarely estimated), and
linear programming, developed into Data Envelopment Analysis in Charnes, Cooper,
and Rhodes (1978) and subsequent research.
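The linear programming route can be sketched directly. Below is a minimal input-oriented CCR (constant returns to scale) DEA model of the Charnes, Cooper, and Rhodes type, solved with scipy's linprog; the function name and the two-unit example data are illustrative, not from the original.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_input_efficiency(X, Y, j0):
    """Input-oriented CCR (CRS) efficiency score of unit j0.
    X: (m, n) inputs, Y: (s, n) outputs; columns are units.
    Solves: min theta s.t. X @ lam <= theta * x0, Y @ lam >= y0, lam >= 0."""
    m, n = X.shape
    s = Y.shape[0]
    # decision variables: [theta, lambda_1, ..., lambda_n]
    c = np.concatenate(([1.0], np.zeros(n)))
    # -theta * x0 + X @ lam <= 0
    A1 = np.hstack((-X[:, [j0]], X))
    b1 = np.zeros(m)
    # -Y @ lam <= -y0   (i.e. Y @ lam >= y0)
    A2 = np.hstack((np.zeros((s, 1)), -Y))
    b2 = -Y[:, j0]
    res = linprog(c, A_ub=np.vstack((A1, A2)),
                  b_ub=np.concatenate((b1, b2)),
                  bounds=[(None, None)] + [(0, None)] * n,
                  method="highs")
    return res.x[0]

# Illustrative data: unit 0 makes 2 units of output from 2 of input,
# unit 1 makes the same output from 4 of input (so it is dominated).
X = np.array([[2.0, 4.0]])
Y = np.array([[2.0, 2.0]])
print(ccr_input_efficiency(X, Y, 0))  # efficient unit, score 1.0
print(ccr_input_efficiency(X, Y, 1))  # dominated unit, score 0.5
```

For each unit, the program finds the smallest theta such that some nonnegative combination of all units produces at least the unit's output using at most theta times its inputs.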
Varian (1984)
Varian shows how to estimate and test for the Weak Axiom of Cost Minimization
(WACM) and other microeconomic assumptions
Varian suggests using either regression (SFA) or linear programming (DEA)
The WACM applies to for-profit, not-for-profit, private, and public producers
The only requirement is that minimum inputs are intended to be used to produce desired
output, or maximum output is intended from inputs used
Profit maximization is not required
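As a sketch of the programming (non-regression) route to Varian's test, WACM can be checked by direct enumeration: observation t violates it if some other observation producing at least as much output would have cost less at t's prices. The function name and data layout below are assumptions for illustration.

```python
import numpy as np

def wacm_violations(W, X, y):
    """Check the Weak Axiom of Cost Minimization by enumeration.
    W, X: (T, m) arrays of input prices and input bundles; y: (T,) outputs.
    Observation t violates WACM if some s with y[s] >= y[t] is cheaper
    at t's own prices: W[t] @ X[s] < W[t] @ X[t]."""
    bad = []
    for t in range(len(y)):
        cost_t = W[t] @ X[t]
        for s in range(len(y)):
            if y[s] >= y[t] and W[t] @ X[s] < cost_t - 1e-9:
                bad.append((t, s))
    return bad

# Illustrative data: both observations produce y = 1 at equal prices,
# but observation 1 uses three times the inputs, so it violates WACM.
W = np.array([[1.0, 1.0], [1.0, 1.0]])
X = np.array([[1.0, 1.0], [3.0, 3.0]])
y = np.array([1.0, 1.0])
print(wacm_violations(W, X, y))  # [(1, 0)]
```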
pdf(ε) = θ^P exp(−θε) ε^(P−1) / Γ(P), for ε > 0,
with different shapes for the shape parameter P > 0, in ranges: P less than 1, P = 1, P
between 1 and 2, P = 2, P greater than 2.
A graph follows; P > 2 is required for the pdf and its derivatives to be zero at the limits.
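These shape regimes are easy to verify numerically; a sketch with scipy's gamma distribution, assuming the rate parameterization above with θ = 1:

```python
import numpy as np
from scipy.stats import gamma

theta = 1.0  # rate; scipy's scale parameter is 1/theta
for P in (0.5, 1.0, 1.5, 2.0, 2.5):
    # behaviour of the density just above the lower limit epsilon = 0:
    # P < 1 blows up, P = 1 approaches theta, P > 1 approaches 0
    print(P, gamma.pdf(1e-6, a=P, scale=1/theta))
```

Only for P > 2 are both the density and its first derivative zero at ε = 0, which is the restriction the slide refers to.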
Gamma Distribution: shapes
Ok, so a Gamma Distribution?
No, not really.
The parameters are restricted mathematically. That really annoys researchers.
Some other distribution? No, no other one-sided distribution has the required properties
at the limits.
This is why no one uses just one disturbance ε.
Composite disturbances
The disturbance has two parts
Stochastic frontier (v), unlimited range as usual. The limits of the production or cost
function are at infinity, not a function of the parameters
Inefficiency (u), one sided, non-positive for production, non-negative for cost
Finally, yj = xj′β + uj + vj , that is, εj = uj + vj
So there are two disturbance terms to keep the parameters from affecting the limits
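A quick simulation illustrates why the composite disturbance is detectable: for a production function the one-sided part skews ε = u + v to the left, which is the usual OLS-residual diagnostic for the presence of inefficiency. The parameter values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
sigma_u, sigma_v = 1.0, 0.5
u = -np.abs(rng.normal(0, sigma_u, n))  # one-sided inefficiency, non-positive for production
v = rng.normal(0, sigma_v, n)           # symmetric stochastic-frontier noise
eps = u + v

m = eps.mean()                          # negative: average shortfall below the frontier
skew = np.mean((eps - m) ** 3) / np.std(eps) ** 3  # negative: left-skewed composed error
print(round(m, 2), round(skew, 2))
```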
Panel data: Fixed effects
Panel data researchers would like to include fixed or random effects in everything, so
why not frontier models?
Greene (2005) addresses this in detail.
Fixed effects have special problems in non-linear models, but they can work
Random effects are offered by Stata.
Now there are three disturbance terms!
yjt = xjt′β + αj + ujt + vjt
Fixed effects in non-linear models
Fixed effects have well known advantages in linear models but in non-linear models
they:
are inconsistent (the sample for each fixed effect is too small: the incidental
parameters problem),
cannot be differenced out (differences of non-linear models are still non-linear),
spread their inconsistency to other coefficients (assuming correlation with other
explanatory variables, which is the motivation for fixed rather than random effects).
Wait, maybe fixed effects are ok
With few units and many observations, fixed effects work because the sample size for
each fixed effect might be large enough. Greene (2005) points this out.
Stata refuses to enter fixed effects in the model.
The user can enter fixed effects directly as indicator (dummy) variables.
Random effects, normally distributed, are offered by Stata. As always, they must be
assumed to be uncorrelated with explanatory variables.
The independence assumption cannot be tested by Stata, and there is no Hausman test,
but
Estimate fixed effects by direct inclusion and regress the fixed effects on explanatory
variables to test the independence required for consistent random effects.
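A minimal sketch of that test in a linear panel, with made-up data in which the unit effects are correlated with x: estimate the within (fixed-effects) slope, recover the effects, and regress them on the unit means of x. A clearly nonzero slope argues against the independence that consistent random effects require.

```python
import numpy as np

rng = np.random.default_rng(1)
G, T = 50, 20
alpha = rng.normal(0, 1, G)                       # unit effects
x = alpha[:, None] + rng.normal(0, 1, (G, T))     # x correlated with the effects
y = 2.0 * x + alpha[:, None] + rng.normal(0, 1, (G, T))

# within (fixed-effects) estimate of the slope
xd = x - x.mean(axis=1, keepdims=True)
yd = y - y.mean(axis=1, keepdims=True)
beta = (xd * yd).sum() / (xd ** 2).sum()

# recover the estimated fixed effects, then regress them on unit-mean x
alpha_hat = y.mean(axis=1) - beta * x.mean(axis=1)
xbar = x.mean(axis=1)
slope = np.polyfit(xbar, alpha_hat, 1)[0]
print(round(beta, 2), round(slope, 2))  # beta near 2; slope clearly nonzero here
```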
Stata: all MLE, all the time
Stata offers MLE with composite disturbances.
The one-sided distribution is half-normal, truncated normal, or exponential (restricted
Gamma)
frontier dependent explanatory, d(hn) or d(tn) or d(e)
In Stata, u is one-sided inefficiency and v is the two-sided stochastic frontier. Stata
uses notation from Greene (1990) in which λ = σu/σv, the ratio of standard deviations, so
that λ = 0 means there is no inefficiency.
Fixed effects sneaked in by the user under frontier, or random effects by Stata (normally
distributed).
xtfrontier dependent explanatory, re i(group_id)
For minimization, use the option , cost
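What the MLE does in the half-normal case can be sketched by hand. For a production frontier, writing ε = v − u with u = |N(0, σu²)| (equivalent to the non-positive-u convention above), the composed-error density is f(ε) = (2/σ) φ(ε/σ) Φ(−ελ/σ), with σ² = σu² + σv² and λ = σu/σv. The simulated data and starting values below are illustrative, not Stata's internals.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(2)
n = 5000
x = rng.normal(0, 1, n)
u = np.abs(rng.normal(0, 1.0, n))   # inefficiency, sigma_u = 1.0
v = rng.normal(0, 0.5, n)           # noise, sigma_v = 0.5
y = 1.0 + 0.8 * x - u + v           # production frontier, eps = v - u

def negll(p):
    """Negative log-likelihood of the half-normal frontier model.
    Log standard deviations keep the scale parameters positive."""
    b0, b1, ls_u, ls_v = p
    su, sv = np.exp(ls_u), np.exp(ls_v)
    s = np.hypot(su, sv)            # sigma = sqrt(su^2 + sv^2)
    lam = su / sv
    e = y - b0 - b1 * x
    return -np.sum(np.log(2 / s) + norm.logpdf(e / s)
                   + norm.logcdf(-e * lam / s))

res = minimize(negll, [0.0, 0.0, 0.0, 0.0], method="BFGS")
b0, b1, ls_u, ls_v = res.x
print(round(b1, 2), round(np.exp(ls_u), 2), round(np.exp(ls_v), 2))
```

The fitted slope and the two standard deviations should land near the values used in the simulation; estimating λ (or testing λ = 0) is then just a transformation of the two scale estimates.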
Stata: heteroscedasticity
Stata offers a lot of heteroscedasticity: either u or v can be heteroscedastic, or both.
Heteroscedastic u (one-sided error, inefficiency)