Lecture 3
BLUE Properties of OLS Estimators
Unbiasedness of Estimates-1

Unbiasedness of estimates: $E(\hat\beta_1) = \beta_1$.

Proof:

For the model $y_i = \beta_1 + \beta_2 x_i + e_i$, the OLS slope estimator is

$\hat\beta_2 = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\sum_i (x_i - \bar{x})^2} = \sum_i w_i y_i$,  where $w_i = \frac{x_i - \bar{x}}{\sum_i (x_i - \bar{x})^2}$,

and the intercept estimator can be written as

$\hat\beta_1 = \bar{y} - \hat\beta_2 \bar{x} = \frac{1}{n}\sum_i y_i - \bar{x}\sum_i w_i y_i = \sum_i \left(\frac{1}{n} - \bar{x} w_i\right) y_i$.

Taking expectations, with $E(y_i) = \beta_1 + \beta_2 x_i$,

$E(\hat\beta_1) = \sum_i \left(\frac{1}{n} - \bar{x} w_i\right)(\beta_1 + \beta_2 x_i) = \beta_1 \sum_i \left(\frac{1}{n} - \bar{x} w_i\right) + \beta_2 \sum_i \left(\frac{1}{n} - \bar{x} w_i\right) x_i$.

Since $\sum_i w_i = 0$ and $\sum_i w_i x_i = 1$, the first sum equals $1$ and the second equals $\bar{x} - \bar{x} = 0$, so $E(\hat\beta_1) = \beta_1$.
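The identities $\sum_i w_i = 0$, $\sum_i w_i x_i = 1$ and $\sum_i w_i^2 = 1/\sum_i (x_i - \bar{x})^2$ that the proof relies on can be checked numerically. A minimal sketch in Python; the x values are arbitrary illustrative data, not from the notes:

```python
# Numerical check of the weight identities used in the unbiasedness proof:
#   sum(w_i) = 0,  sum(w_i * x_i) = 1,  sum(w_i^2) = 1 / sum((x_i - xbar)^2).
# The x values below are made up for illustration.
import numpy as np

x = np.array([1.0, 2.0, 4.0, 7.0, 11.0])
xbar = x.mean()
sxx = np.sum((x - xbar) ** 2)          # sum of squared deviations
w = (x - xbar) / sxx                   # OLS weights w_i

print(np.isclose(w.sum(), 0.0))            # True
print(np.isclose(np.sum(w * x), 1.0))      # True
print(np.isclose(np.sum(w ** 2), 1.0 / sxx))  # True
```

The same identities hold for any x values with at least two distinct points, since they follow algebraically from the definition of $w_i$.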
Unbiasedness of Estimates-2

Unbiasedness of the slope estimate: $E(\hat\beta_2) = \beta_2$.

Proof:

$\hat\beta_2 = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\sum_i (x_i - \bar{x})^2} = \sum_i w_i y_i$

$E(\hat\beta_2) = E\left(\sum_i w_i y_i\right) = E\left(\sum_i w_i (\beta_1 + \beta_2 x_i + e_i)\right) = \beta_1 \sum_i w_i + \beta_2 \sum_i w_i x_i + \sum_i w_i E(e_i) = \beta_2$,

using $\sum_i w_i = 0$, $\sum_i w_i x_i = 1$ and $E(e_i) = 0$.
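Unbiasedness can also be illustrated by simulation: averaging the OLS estimates over many artificial samples should recover the true parameters. A sketch under assumed, purely illustrative parameter values (the betas, sigma and x grid below are not from the notes):

```python
# Monte Carlo illustration of unbiasedness: the average of the OLS
# estimates over many simulated samples is close to the true parameters.
import numpy as np

rng = np.random.default_rng(0)
beta1, beta2, sigma = -2.0, 1.0, 1.0          # assumed true values
x = np.array([1.0, 3.0, 5.0, 8.0, 10.0, 13.0])
n_reps = 20000

b1s, b2s = [], []
for _ in range(n_reps):
    y = beta1 + beta2 * x + rng.normal(0.0, sigma, size=x.size)
    xbar, ybar = x.mean(), y.mean()
    b2 = np.sum((x - xbar) * (y - ybar)) / np.sum((x - xbar) ** 2)
    b1 = ybar - b2 * xbar
    b1s.append(b1)
    b2s.append(b2)

print(np.mean(b1s), np.mean(b2s))  # close to beta1 = -2 and beta2 = 1
```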
Variance of Estimator-1

Variance of the estimated parameters and the dependent variable:

$\mathrm{var}(\hat\beta_1) = \sigma^2 \frac{\sum_i x_i^2}{n \sum_i (x_i - \bar{x})^2}$

Since the $y_i$ are independent with common variance $\sigma^2$,

$\mathrm{var}(\hat\beta_1) = \mathrm{var}\left(\sum_i \left(\frac{1}{n} - \bar{x} w_i\right) y_i\right) = \sum_i \left(\frac{1}{n} - \bar{x} w_i\right)^2 \mathrm{var}(y_i) = \sigma^2 \sum_i \left(\frac{1}{n} - \bar{x} w_i\right)^2 = \sigma^2 \left(\frac{1}{n} + \bar{x}^2 \sum_i w_i^2\right)$,

where the cross term drops out because $\sum_i w_i = 0$. Using $\sum_i w_i^2 = \frac{1}{\sum_i (x_i - \bar{x})^2}$ and $\sum_i x_i^2 = \sum_i (x_i - \bar{x})^2 + n\bar{x}^2$,

$\mathrm{var}(\hat\beta_1) = \sigma^2 \left(\frac{1}{n} + \frac{\bar{x}^2}{\sum_i (x_i - \bar{x})^2}\right) = \sigma^2 \frac{\sum_i x_i^2}{n \sum_i (x_i - \bar{x})^2}$.

Similarly,

$\mathrm{var}(\hat\beta_2) = \frac{\sigma^2}{\sum_i (x_i - \bar{x})^2}$.
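The two expressions for $\mathrm{var}(\hat\beta_1)$ above are the same number; this can be verified numerically. A small sketch with made-up x values and an arbitrary $\sigma^2$:

```python
# Check that the two forms of var(bhat1) coincide:
#   sigma^2 * (1/n + xbar^2 / Sxx)  ==  sigma^2 * sum(x^2) / (n * Sxx),
# which follows from sum(x^2) = Sxx + n * xbar^2. Data are illustrative.
import numpy as np

x = np.array([2.0, 5.0, 6.0, 9.0, 13.0])
sigma2 = 0.9
n, xbar = x.size, x.mean()
sxx = np.sum((x - xbar) ** 2)

form_a = sigma2 * (1.0 / n + xbar ** 2 / sxx)
form_b = sigma2 * np.sum(x ** 2) / (n * sxx)
print(np.isclose(form_a, form_b))  # True
```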
Variance-2

$\mathrm{var}(\hat\beta_2) = \frac{\sigma^2}{\sum_i (x_i - \bar{x})^2}$

Proof: substituting $y_i = \beta_1 + \beta_2 x_i + e_i$ into $\hat\beta_2 = \sum_i w_i y_i$ gives

$\hat\beta_2 = \beta_1 \sum_i w_i + \beta_2 \sum_i w_i x_i + \sum_i w_i e_i = \beta_2 + \sum_i w_i e_i$,

so $E(\hat\beta_2) = \beta_2$ and, since the errors are uncorrelated with $E(e_i^2) = \sigma^2$,

$\mathrm{var}(\hat\beta_2) = E\left(\hat\beta_2 - E(\hat\beta_2)\right)^2 = E\left(\sum_i w_i e_i\right)^2 = \sum_i w_i^2 E(e_i^2) = \sigma^2 \sum_i w_i^2 = \frac{\sigma^2}{\sum_i (x_i - \bar{x})^2}$.

Covariance of the two estimators: using $\hat\beta_1 = \bar{y} - \hat\beta_2 \bar{x}$,

$\mathrm{cov}(\hat\beta_1, \hat\beta_2) = E\left[\left(\hat\beta_1 - E(\hat\beta_1)\right)\left(\hat\beta_2 - E(\hat\beta_2)\right)\right] = -\bar{x}\,\mathrm{var}(\hat\beta_2)$.

Variance of the dependent variable: $\mathrm{var}(y_i) = \sigma^2$.

Proof:

$\mathrm{var}(y_i) = E\left(y_i - E(y_i)\right)^2 = E\left((\beta_1 + \beta_2 x_i + e_i) - (\beta_1 + \beta_2 x_i)\right)^2 = E(e_i^2) = \sigma^2$.
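The variance and covariance formulas can be checked by simulation: the sample variance of $\hat\beta_2$ across many replications should approach $\sigma^2 / \sum_i (x_i - \bar{x})^2$, and the sample covariance of $(\hat\beta_1, \hat\beta_2)$ should approach $-\bar{x}\,\mathrm{var}(\hat\beta_2)$. A sketch with illustrative parameter values:

```python
# Monte Carlo check of var(bhat2) = sigma^2 / Sxx and
# cov(bhat1, bhat2) = -xbar * var(bhat2). All values are illustrative.
import numpy as np

rng = np.random.default_rng(1)
beta1, beta2, sigma = 1.0, 0.5, 2.0
x = np.array([1.0, 2.0, 4.0, 6.0, 9.0, 12.0])
xbar = x.mean()
sxx = np.sum((x - xbar) ** 2)
n_reps = 50000

b1s = np.empty(n_reps)
b2s = np.empty(n_reps)
for r in range(n_reps):
    y = beta1 + beta2 * x + rng.normal(0.0, sigma, size=x.size)
    b2s[r] = np.sum((x - xbar) * (y - y.mean())) / sxx
    b1s[r] = y.mean() - b2s[r] * xbar

print(b2s.var(), sigma**2 / sxx)                  # should be close
print(np.cov(b1s, b2s)[0, 1], -xbar * sigma**2 / sxx)  # should be close
```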
Descriptive statistics and regression output for the example data:

  N           8
  MEAN        11.375
  VARIANCE    40.554
  MINIMUM     4.0000
  MAXIMUM     22.000

  |_ols y x / pcov
  REQUIRED MEMORY IS PAR=     1 CURRENT PAR=  2000
  OLS ESTIMATION
         8 OBSERVATIONS     DEPENDENT VARIABLE= Y
  ...NOTE..SAMPLE RANGE SET TO:    1,    8

  R-SQUARE =    0.9807     R-SQUARE ADJUSTED =    0.9775
  VARIANCE OF THE ESTIMATE-SIGMA**2 =  0.91402
  STANDARD ERROR OF THE ESTIMATE-SIGMA =  0.95604
  SUM OF SQUARED ERRORS-SSE=   5.4841
  MEAN OF DEPENDENT VARIABLE =    11.375
  LOG OF THE LIKELIHOOD FUNCTION = -9.84116

  VARIABLE   ESTIMATED    STANDARD    T-RATIO
  NAME       COEFFICIENT  ERROR       6 DF
  X            0.95873    0.5493E-01   17.45
  CONSTANT    -1.9274     0.8338      -2.312
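Every quantity in the printout follows from the formulas derived above. The actual (x, y) data behind the output are not reproduced in the notes, so the sketch below uses made-up data purely to show how the coefficients, standard errors, t-ratio, sigma**2 and R-square are computed:

```python
# How the regression-output quantities are computed from (x, y) data.
# The data below are invented for illustration; they are NOT the data
# behind the printout in the notes.
import numpy as np

x = np.array([4.0, 6.0, 7.0, 10.0, 13.0, 15.0, 18.0, 21.0])
y = np.array([2.0, 4.0, 5.0, 8.0, 10.0, 13.0, 16.0, 19.0])
n = x.size
xbar, ybar = x.mean(), y.mean()
sxx = np.sum((x - xbar) ** 2)

b2 = np.sum((x - xbar) * (y - ybar)) / sxx       # slope coefficient
b1 = ybar - b2 * xbar                            # intercept (CONSTANT)
e = y - (b1 + b2 * x)                            # residuals
sse = np.sum(e ** 2)                             # SUM OF SQUARED ERRORS-SSE
sigma2 = sse / (n - 2)                           # SIGMA**2 (2 parameters)
se_b2 = np.sqrt(sigma2 / sxx)                    # STANDARD ERROR of slope
se_b1 = np.sqrt(sigma2 * np.sum(x ** 2) / (n * sxx))
t_b2 = b2 / se_b2                                # T-RATIO, n-2 = 6 DF
r2 = 1.0 - sse / np.sum((y - ybar) ** 2)         # R-SQUARE
r2_adj = 1.0 - (1.0 - r2) * (n - 1) / (n - 2)    # R-SQUARE ADJUSTED
print(b1, b2, sigma2, r2, r2_adj)
```

With the true data from the notes in place of the invented arrays, this would reproduce the printed coefficients 0.95873 and -1.9274.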
Coefficient of determination

The coefficient of determination is a measure in regression analysis that
shows the explanatory power of the independent variables (regressors) in
explaining the variation in the dependent variable (regressand). The total
variation in the dependent variable can be decomposed as follows.

Writing $y_i - \bar{y} = (\hat{y}_i - \bar{y}) + e_i$, squaring and summing,

$\sum_i (y_i - \bar{y})^2 = \sum_i (\hat{y}_i - \bar{y})^2 + \sum_i e_i^2 + 2\sum_i (\hat{y}_i - \bar{y}) e_i = \sum_i (\hat{y}_i - \bar{y})^2 + \sum_i e_i^2$,

where the cross-product term is zero because the residuals are orthogonal to the explanatory variables.

[Total variation] = [Explained variation] + [Residual variation]
df:     T-1      =        K-1            +        T-K
Coefficient of determination

$R^2 = \frac{\sum_i (\hat{y}_i - \bar{y})^2}{\sum_i (y_i - \bar{y})^2} = 1 - \frac{\sum_i e_i^2}{\sum_i (y_i - \bar{y})^2}; \quad 0 \le R^2 \le 1$

In the regression output above, R-SQUARE = 0.9807 and R-SQUARE ADJUSTED = 0.9775.
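The decomposition and the equivalence of the two $R^2$ formulas can be verified numerically. A sketch on made-up data:

```python
# Check that SST = SSR (explained) + SSE (residual) for OLS with an
# intercept, and that the two R^2 formulas agree. Data are invented.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 5.0, 8.0, 9.0])
y = np.array([2.1, 2.9, 4.2, 6.1, 8.8, 9.7])
xbar, ybar = x.mean(), y.mean()
b2 = np.sum((x - xbar) * (y - ybar)) / np.sum((x - xbar) ** 2)
b1 = ybar - b2 * xbar
yhat = b1 + b2 * x
e = y - yhat

sst = np.sum((y - ybar) ** 2)       # total variation
ssr = np.sum((yhat - ybar) ** 2)    # explained variation
sse = np.sum(e ** 2)                # residual variation
print(np.isclose(sst, ssr + sse))               # True: cross term vanishes
print(np.isclose(ssr / sst, 1.0 - sse / sst))   # True: same R^2
```

The cross term vanishes here because $\hat{y}_i - \bar{y} = \hat\beta_2 (x_i - \bar{x})$ and the residuals satisfy $\sum_i e_i = 0$ and $\sum_i e_i x_i = 0$.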
Example: a weighted linear estimator

a) $b_w = \sum_i a_i Y_i$, where the linear weights $a_i = \frac{1}{2}, \frac{1}{3}, \frac{1}{6}$ are applied to $Y_1$, $Y_2$ and $Y_3$. Therefore $b_w$ is a linear estimator.

b) $b_w$ is unbiased because $E(b_w) = \frac{1}{2} E(Y_1) + \frac{1}{3} E(Y_2) + \frac{1}{6} E(Y_3) = \left(\frac{1}{2} + \frac{1}{3} + \frac{1}{6}\right)\mu = \mu$.

c) Given that $Y_1$, $Y_2$ and $Y_3$ have the $N(\mu, \sigma^2)$ distribution, the variance of $b_w$ can be computed as

$\mathrm{var}(b_w) = \frac{1}{4}\mathrm{var}(Y_1) + \frac{1}{9}\mathrm{var}(Y_2) + \frac{1}{36}\mathrm{var}(Y_3) = \left(\frac{1}{4} + \frac{1}{9} + \frac{1}{36}\right)\sigma^2 = \frac{14}{36}\sigma^2 = \frac{7}{18}\sigma^2$.

Since $\mathrm{var}(b_w) = \frac{7}{18}\sigma^2 = \frac{3.5}{9}\sigma^2 > \frac{3}{9}\sigma^2 = \frac{\sigma^2}{3} = \mathrm{var}(b)$, the least squares
estimator $b$ (the sample mean, with equal weights $\frac{1}{3}$) has smaller variance than the weighted least
squares estimator $b_w$.
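The variance comparison in part c) can be checked both by the weight arithmetic and by simulation. A sketch with illustrative values of $\mu$ and $\sigma$:

```python
# Check that var(b_w) = (7/18) sigma^2 exceeds var(b) = sigma^2 / 3,
# analytically (sum of squared weights) and by simulation.
import numpy as np

a = np.array([1/2, 1/3, 1/6])   # weights of b_w; they sum to 1
var_bw = np.sum(a ** 2)         # var(b_w) in units of sigma^2
var_b = 1.0 / 3.0               # sample mean of 3 observations
print(var_bw, 7/18, var_b)      # var_bw = 7/18 > 1/3

rng = np.random.default_rng(2)
mu, sigma = 10.0, 2.0           # illustrative values
Y = rng.normal(mu, sigma, size=(100000, 3))
bw = Y @ a                      # weighted estimator for each sample
b = Y.mean(axis=1)              # least squares estimator (sample mean)
print(bw.var() / sigma**2, b.var() / sigma**2)  # about 7/18 and 1/3
```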