
The Rules of Summation

$\sum_{i=1}^{n} x_i = x_1 + x_2 + \cdots + x_n$

$\sum_{i=1}^{n} a = na$

$\sum_{i=1}^{n} a x_i = a \sum_{i=1}^{n} x_i$

$\sum_{i=1}^{n} (x_i + y_i) = \sum_{i=1}^{n} x_i + \sum_{i=1}^{n} y_i$

$\sum_{i=1}^{n} (a x_i + b y_i) = a \sum_{i=1}^{n} x_i + b \sum_{i=1}^{n} y_i$

$\sum_{i=1}^{n} (a + b x_i) = na + b \sum_{i=1}^{n} x_i$

$\bar{x} = \dfrac{\sum_{i=1}^{n} x_i}{n} = \dfrac{x_1 + x_2 + \cdots + x_n}{n}$

$\sum_{i=1}^{n} (x_i - \bar{x}) = 0$

$\sum_{i=1}^{2} \sum_{j=1}^{3} f(x_i, y_j) = \sum_{i=1}^{2} \left[ f(x_i, y_1) + f(x_i, y_2) + f(x_i, y_3) \right]$
$\qquad = f(x_1, y_1) + f(x_1, y_2) + f(x_1, y_3) + f(x_2, y_1) + f(x_2, y_2) + f(x_2, y_3)$

Expectations, Variances & Covariances

$\operatorname{cov}(X, Y) = E\left[ (X - E[X])(Y - E[Y]) \right] = \sum_x \sum_y (x - E[X])(y - E[Y]) f(x, y)$

$\rho = \dfrac{\operatorname{cov}(X, Y)}{\sqrt{\operatorname{var}(X)\operatorname{var}(Y)}}$

$E(c_1 X + c_2 Y) = c_1 E(X) + c_2 E(Y)$

$E(X + Y) = E(X) + E(Y)$

$\operatorname{var}(aX + bY + cZ) = a^2 \operatorname{var}(X) + b^2 \operatorname{var}(Y) + c^2 \operatorname{var}(Z) + 2ab \operatorname{cov}(X, Y) + 2ac \operatorname{cov}(X, Z) + 2bc \operatorname{cov}(Y, Z)$

If X, Y, and Z are independent, or uncorrelated, random variables, then the covariance terms are zero and:

$\operatorname{var}(aX + bY + cZ) = a^2 \operatorname{var}(X) + b^2 \operatorname{var}(Y) + c^2 \operatorname{var}(Z)$

Expected Values & Variances

$E(X) = x_1 f(x_1) + x_2 f(x_2) + \cdots + x_n f(x_n) = \sum_{i=1}^{n} x_i f(x_i) = \sum_x x f(x)$

$E[g(X)] = \sum_x g(x) f(x)$

$E[g_1(X) + g_2(X)] = \sum_x [g_1(x) + g_2(x)] f(x) = \sum_x g_1(x) f(x) + \sum_x g_2(x) f(x) = E[g_1(X)] + E[g_2(X)]$

$E(c) = c$

$E(cX) = cE(X)$

$E(a + cX) = a + cE(X)$

$\operatorname{var}(X) = \sigma^2 = E[X - E(X)]^2 = E(X^2) - [E(X)]^2$

$\operatorname{var}(a + cX) = E[(a + cX) - E(a + cX)]^2 = c^2 \operatorname{var}(X)$

Normal Probabilities

If $X \sim N(\mu, \sigma^2)$, then $Z = \dfrac{X - \mu}{\sigma} \sim N(0, 1)$

If $X \sim N(\mu, \sigma^2)$ and $a$ is a constant, then
$P(X \ge a) = P\!\left( Z \ge \dfrac{a - \mu}{\sigma} \right)$

If $X \sim N(\mu, \sigma^2)$ and $a$ and $b$ are constants, then
$P(a \le X \le b) = P\!\left( \dfrac{a - \mu}{\sigma} \le Z \le \dfrac{b - \mu}{\sigma} \right)$

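A short sketch of the standardization rule, assuming SciPy is available; the numbers are made up for illustration:

```python
from scipy.stats import norm

# Hypothetical example: X ~ N(mu=10, sigma^2=4); find P(8 <= X <= 13) by standardizing.
mu, sigma = 10.0, 2.0
a, b = 8.0, 13.0
z_a, z_b = (a - mu) / sigma, (b - mu) / sigma

# P(a <= X <= b) = P(z_a <= Z <= z_b), where Z ~ N(0, 1)
p = norm.cdf(z_b) - norm.cdf(z_a)
print(p)  # same as norm.cdf(b, loc=mu, scale=sigma) - norm.cdf(a, loc=mu, scale=sigma)
```
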
Marginal and Conditional Distributions

$f(x) = \sum_y f(x, y)$ for each value $X$ can take

$f(y) = \sum_x f(x, y)$ for each value $Y$ can take

$f(x \mid y) = P(X = x \mid Y = y) = \dfrac{f(x, y)}{f(y)}$

If X and Y are independent random variables, then $f(x, y) = f(x) f(y)$ for each and every pair of values x and y. The converse is also true.

If X and Y are independent random variables, then the conditional probability density function of X given that Y = y is
$f(x \mid y) = \dfrac{f(x, y)}{f(y)} = \dfrac{f(x) f(y)}{f(y)} = f(x)$
for each and every pair of values x and y. The converse is also true.

Assumptions of the Simple Linear Regression Model

SR1 The value of y, for each value of x, is $y = \beta_1 + \beta_2 x + e$.
SR2 The average value of the random error e is $E(e) = 0$, since we assume that $E(y) = \beta_1 + \beta_2 x$.
SR3 The variance of the random error e is $\operatorname{var}(e) = \sigma^2 = \operatorname{var}(y)$.
SR4 The covariance between any pair of random errors $e_i$ and $e_j$ is $\operatorname{cov}(e_i, e_j) = \operatorname{cov}(y_i, y_j) = 0$.
SR5 The variable x is not random and must take at least two different values.
SR6 (optional) The values of e are normally distributed about their mean: $e \sim N(0, \sigma^2)$.

Least Squares Estimation

If $b_1$ and $b_2$ are the least squares estimates, then
$\hat{y}_i = b_1 + b_2 x_i$
$\hat{e}_i = y_i - \hat{y}_i = y_i - b_1 - b_2 x_i$

The Normal Equations
$N b_1 + \Sigma x_i \, b_2 = \Sigma y_i$
$\Sigma x_i \, b_1 + \Sigma x_i^2 \, b_2 = \Sigma x_i y_i$

Least Squares Estimators
$b_2 = \dfrac{\Sigma (x_i - \bar{x})(y_i - \bar{y})}{\Sigma (x_i - \bar{x})^2}$
$b_1 = \bar{y} - b_2 \bar{x}$

Elasticity

$\eta = \dfrac{\text{percentage change in } y}{\text{percentage change in } x} = \dfrac{\Delta y / y}{\Delta x / x} = \dfrac{\Delta y}{\Delta x} \cdot \dfrac{x}{y}$

$\eta = \dfrac{\Delta E(y)/E(y)}{\Delta x / x} = \dfrac{\Delta E(y)}{\Delta x} \cdot \dfrac{x}{E(y)} = \beta_2 \cdot \dfrac{x}{E(y)}$

Least Squares Expressions Useful for Theory

$b_2 = \beta_2 + \Sigma w_i e_i, \qquad w_i = \dfrac{x_i - \bar{x}}{\Sigma (x_i - \bar{x})^2}$

$\Sigma w_i = 0, \qquad \Sigma w_i x_i = 1, \qquad \Sigma w_i^2 = 1 / \Sigma (x_i - \bar{x})^2$

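The estimator formulas translate directly into code. A minimal Python sketch on simulated data; all values are hypothetical:

```python
import numpy as np

# Simulate a hypothetical data set with true beta1 = 2, beta2 = 0.5.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=50)
y = 2.0 + 0.5 * x + rng.normal(0, 1, size=50)

# b2 = sum((x - xbar)(y - ybar)) / sum((x - xbar)^2), b1 = ybar - b2 * xbar
b2 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b1 = y.mean() - b2 * x.mean()

y_hat = b1 + b2 * x   # fitted values
e_hat = y - y_hat     # least squares residuals
print(b1, b2)
```
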

Properties of the Least Squares Estimators

$\operatorname{var}(b_1) = \sigma^2 \left[ \dfrac{\Sigma x_i^2}{N \Sigma (x_i - \bar{x})^2} \right] \qquad \operatorname{var}(b_2) = \dfrac{\sigma^2}{\Sigma (x_i - \bar{x})^2}$

$\operatorname{cov}(b_1, b_2) = \sigma^2 \left[ \dfrac{-\bar{x}}{\Sigma (x_i - \bar{x})^2} \right]$

Gauss-Markov Theorem: Under the assumptions SR1–SR5 of the linear regression model, the estimators b1 and b2 have the smallest variance of all linear and unbiased estimators of β1 and β2. They are the Best Linear Unbiased Estimators (BLUE) of β1 and β2.

If we make the normality assumption, assumption SR6, about the error term, then the least squares estimators are normally distributed:

$b_1 \sim N\!\left( \beta_1, \ \dfrac{\sigma^2 \Sigma x_i^2}{N \Sigma (x_i - \bar{x})^2} \right) \qquad b_2 \sim N\!\left( \beta_2, \ \dfrac{\sigma^2}{\Sigma (x_i - \bar{x})^2} \right)$

Estimated Error Variance

$\hat{\sigma}^2 = \dfrac{\Sigma \hat{e}_i^2}{N - 2}$

Estimator Standard Errors

$\operatorname{se}(b_1) = \sqrt{\widehat{\operatorname{var}}(b_1)} \qquad \operatorname{se}(b_2) = \sqrt{\widehat{\operatorname{var}}(b_2)}$

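A self-contained sketch that computes the estimated error variance and standard errors from the formulas above, again on simulated, hypothetical data:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=50)
y = 2.0 + 0.5 * x + rng.normal(0, 1, size=50)

sxx = np.sum((x - x.mean()) ** 2)
b2 = np.sum((x - x.mean()) * (y - y.mean())) / sxx
b1 = y.mean() - b2 * x.mean()
e_hat = y - (b1 + b2 * x)

N = x.size
sigma2_hat = np.sum(e_hat ** 2) / (N - 2)          # sigma^2-hat = sum(e_i^2)/(N - 2)
var_b1 = sigma2_hat * np.sum(x ** 2) / (N * sxx)   # var(b1)
var_b2 = sigma2_hat / sxx                          # var(b2)
print(np.sqrt(var_b1), np.sqrt(var_b2))            # se(b1), se(b2)
```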

Prediction

$y_0 = \beta_1 + \beta_2 x_0 + e_0, \qquad \hat{y}_0 = b_1 + b_2 x_0, \qquad f = \hat{y}_0 - y_0$

$\widehat{\operatorname{var}(f)} = \hat{\sigma}^2 \left[ 1 + \dfrac{1}{N} + \dfrac{(x_0 - \bar{x})^2}{\Sigma (x_i - \bar{x})^2} \right], \qquad \operatorname{se}(f) = \sqrt{\widehat{\operatorname{var}(f)}}$

A $(1 - \alpha) \times 100\%$ confidence interval, or prediction interval, for $y_0$:
$\hat{y}_0 \pm t_c \operatorname{se}(f)$

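A sketch of the prediction interval, assuming SciPy for the t critical value; the data and the point $x_0$ are hypothetical:

```python
import numpy as np
from scipy.stats import t

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=50)
y = 2.0 + 0.5 * x + rng.normal(0, 1, size=50)

sxx = np.sum((x - x.mean()) ** 2)
b2 = np.sum((x - x.mean()) * (y - y.mean())) / sxx
b1 = y.mean() - b2 * x.mean()
N = x.size
sigma2_hat = np.sum((y - b1 - b2 * x) ** 2) / (N - 2)

x0 = 5.0
y0_hat = b1 + b2 * x0
# var(f) = sigma2_hat * (1 + 1/N + (x0 - xbar)^2 / Sxx)
se_f = np.sqrt(sigma2_hat * (1 + 1 / N + (x0 - x.mean()) ** 2 / sxx))
tc = t.ppf(0.975, df=N - 2)       # critical value for a 95% interval
print(y0_hat - tc * se_f, y0_hat + tc * se_f)
```
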
Goodness of Fit

$\Sigma (y_i - \bar{y})^2 = \Sigma (\hat{y}_i - \bar{y})^2 + \Sigma \hat{e}_i^2$
$SST = SSR + SSE$

$R^2 = \dfrac{SSR}{SST} = 1 - \dfrac{SSE}{SST} = \left[ \operatorname{corr}(y, \hat{y}) \right]^2$
Log-Linear Model

$\ln(y) = \beta_1 + \beta_2 x + e, \qquad \widehat{\ln(y)} = b_1 + b_2 x$

$100 \times b_2 \approx$ % change in y given a one-unit change in x

Natural predictor: $\hat{y}_n = \exp(b_1 + b_2 x)$
Corrected predictor: $\hat{y}_c = \exp(b_1 + b_2 x) \exp(\hat{\sigma}^2 / 2)$

Prediction interval:
$\left[ \exp\!\left( \widehat{\ln(y)} - t_c \operatorname{se}(f) \right), \ \exp\!\left( \widehat{\ln(y)} + t_c \operatorname{se}(f) \right) \right]$

Generalized goodness-of-fit measure: $R_g^2 = \left[ \operatorname{corr}(y, \hat{y}_n) \right]^2$

t-distribution

If assumptions SR1–SR6 of the simple linear regression model hold, then

$t = \dfrac{b_k - \beta_k}{\operatorname{se}(b_k)} \sim t_{(N-2)}, \qquad k = 1, 2$

Interval Estimates
$P[\, b_2 - t_c \operatorname{se}(b_2) \le \beta_2 \le b_2 + t_c \operatorname{se}(b_2) \,] = 1 - \alpha$

Hypothesis Testing
Components of Hypothesis Tests
1. A null hypothesis, H0
2. An alternative hypothesis, H1
3. A test statistic
4. A rejection region
5. A conclusion
If the null hypothesis $H_0: \beta_2 = c$ is true, then
$t = \dfrac{b_2 - c}{\operatorname{se}(b_2)} \sim t_{(N-2)}$
Rejection rule for a two-tail test: If the value of the test statistic falls in the rejection region, either tail of the t-distribution, then we reject the null hypothesis and accept the alternative.
Type I error: The null hypothesis is true and we decide to reject it.
Type II error: The null hypothesis is false and we decide not to reject it.
p-value rejection rule: When the p-value of a hypothesis test is smaller than the chosen value of α, then the test procedure leads to rejection of the null hypothesis.
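A sketch of the two-tail t test of $H_0: \beta_2 = c$, again on simulated data; all values are hypothetical:

```python
import numpy as np
from scipy.stats import t

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=50)
y = 2.0 + 0.5 * x + rng.normal(0, 1, size=50)

sxx = np.sum((x - x.mean()) ** 2)
b2 = np.sum((x - x.mean()) * (y - y.mean())) / sxx
b1 = y.mean() - b2 * x.mean()
N = x.size
sigma2_hat = np.sum((y - b1 - b2 * x) ** 2) / (N - 2)
se_b2 = np.sqrt(sigma2_hat / sxx)

c = 0.0                                            # H0: beta2 = 0
t_stat = (b2 - c) / se_b2
p_value = 2 * (1 - t.cdf(abs(t_stat), df=N - 2))   # two-tail p-value
print(t_stat, p_value)                             # reject H0 when p_value < alpha
```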

Assumptions of the Multiple Regression Model

MR1 $y_i = \beta_1 + \beta_2 x_{i2} + \cdots + \beta_K x_{iK} + e_i$
MR2 $E(y_i) = \beta_1 + \beta_2 x_{i2} + \cdots + \beta_K x_{iK} \Leftrightarrow E(e_i) = 0$
MR3 $\operatorname{var}(y_i) = \operatorname{var}(e_i) = \sigma^2$
MR4 $\operatorname{cov}(y_i, y_j) = \operatorname{cov}(e_i, e_j) = 0$
MR5 The values of $x_{ik}$ are not random and are not exact linear functions of the other explanatory variables.
MR6 $y_i \sim N\!\left( \beta_1 + \beta_2 x_{i2} + \cdots + \beta_K x_{iK}, \ \sigma^2 \right) \Leftrightarrow e_i \sim N(0, \sigma^2)$
Least Squares Estimates in MR Model

Least squares estimates $b_1, b_2, \ldots, b_K$ minimize
$S(\beta_1, \beta_2, \ldots, \beta_K) = \Sigma (y_i - \beta_1 - \beta_2 x_{i2} - \cdots - \beta_K x_{iK})^2$

Estimated Error Variance and Estimator Standard Errors

$\hat{\sigma}^2 = \dfrac{\Sigma \hat{e}_i^2}{N - K}, \qquad \operatorname{se}(b_k) = \sqrt{\widehat{\operatorname{var}}(b_k)}$

Hypothesis Tests and Interval Estimates for Single Parameters

Use the t-distribution: $t = \dfrac{b_k - \beta_k}{\operatorname{se}(b_k)} \sim t_{(N-K)}$

t-test for More than One Parameter
$H_0: \beta_2 + c\beta_3 = a$
When H0 is true,
$t = \dfrac{b_2 + c b_3 - a}{\operatorname{se}(b_2 + c b_3)} \sim t_{(N-K)}$
$\operatorname{se}(b_2 + c b_3) = \sqrt{ \widehat{\operatorname{var}}(b_2) + c^2 \, \widehat{\operatorname{var}}(b_3) + 2c \, \widehat{\operatorname{cov}}(b_2, b_3) }$

Joint F-tests

To test J joint hypotheses,
$F = \dfrac{(SSE_R - SSE_U)/J}{SSE_U/(N - K)}$
To test the overall significance of the model the null and alternative hypotheses and F statistic are
$H_0: \beta_2 = 0, \ \beta_3 = 0, \ \ldots, \ \beta_K = 0$
$H_1:$ at least one of the $\beta_k$ is nonzero
$F = \dfrac{(SST - SSE)/(K - 1)}{SSE/(N - K)}$

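A sketch of the overall-significance F test; with the intercept-only restricted model, $SSE_R = SST$ and $J = K - 1$, so the two F formulas coincide (simulated, hypothetical data):

```python
import numpy as np
from scipy.stats import f

rng = np.random.default_rng(1)
N = 100
x2, x3 = rng.uniform(0, 10, N), rng.uniform(0, 5, N)
y = 1.0 + 0.8 * x2 - 0.3 * x3 + rng.normal(0, 1, N)

X = np.column_stack([np.ones(N), x2, x3])
b = np.linalg.solve(X.T @ X, X.T @ y)
K = X.shape[1]
sse = np.sum((y - X @ b) ** 2)        # unrestricted SSE
sst = np.sum((y - y.mean()) ** 2)     # SSE of the restricted, intercept-only model

F = ((sst - sse) / (K - 1)) / (sse / (N - K))
p_value = 1 - f.cdf(F, K - 1, N - K)
print(F, p_value)
```
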

RESET: A Specification Test

$y_i = \beta_1 + \beta_2 x_{i2} + \beta_3 x_{i3} + e_i, \qquad \hat{y}_i = b_1 + b_2 x_{i2} + b_3 x_{i3}$

$y_i = \beta_1 + \beta_2 x_{i2} + \beta_3 x_{i3} + \gamma_1 \hat{y}_i^2 + e_i; \qquad H_0: \gamma_1 = 0$

$y_i = \beta_1 + \beta_2 x_{i2} + \beta_3 x_{i3} + \gamma_1 \hat{y}_i^2 + \gamma_2 \hat{y}_i^3 + e_i; \qquad H_0: \gamma_1 = \gamma_2 = 0$

Model Selection
$AIC = \ln(SSE/N) + 2K/N$
$SC = \ln(SSE/N) + K \ln(N)/N$

Collinearity and Omitted Variables
$y_i = \beta_1 + \beta_2 x_{i2} + \beta_3 x_{i3} + e_i$
$\operatorname{var}(b_2) = \dfrac{\sigma^2}{(1 - r_{23}^2) \, \Sigma (x_{i2} - \bar{x}_2)^2}$
When $x_3$ is omitted,
$\operatorname{bias}(b_2^*) = E(b_2^*) - \beta_2 = \beta_3 \, \dfrac{\widehat{\operatorname{cov}}(x_2, x_3)}{\widehat{\operatorname{var}}(x_2)}$

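The two model-selection criteria differ only in the penalty term, as a small sketch makes clear; the example inputs are made up:

```python
import numpy as np

# AIC = ln(SSE/N) + 2K/N, SC = ln(SSE/N) + K ln(N)/N; smaller is preferred.
def aic(sse: float, N: int, K: int) -> float:
    return np.log(sse / N) + 2 * K / N

def sc(sse: float, N: int, K: int) -> float:
    return np.log(sse / N) + K * np.log(N) / N

print(aic(120.5, 100, 3), sc(120.5, 100, 3))  # hypothetical SSE, N, K
```
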

Heteroskedasticity
$\operatorname{var}(y_i) = \operatorname{var}(e_i) = \sigma_i^2$
General variance function:
$\sigma_i^2 = \exp(\alpha_1 + \alpha_2 z_{i2} + \cdots + \alpha_S z_{iS})$
Breusch-Pagan and White tests for $H_0: \alpha_2 = \alpha_3 = \cdots = \alpha_S = 0$:
when H0 is true, $\chi^2 = N \times R^2 \sim \chi^2_{(S-1)}$
Goldfeld-Quandt test for $H_0: \sigma_M^2 = \sigma_R^2$ versus $H_1: \sigma_M^2 \ne \sigma_R^2$:
when H0 is true, $F = \hat{\sigma}_M^2 / \hat{\sigma}_R^2 \sim F_{(N_M - K_M, \, N_R - K_R)}$
Transformed model for $\operatorname{var}(e_i) = \sigma_i^2 = \sigma^2 x_i$:
$y_i / \sqrt{x_i} = \beta_1 (1/\sqrt{x_i}) + \beta_2 (x_i / \sqrt{x_i}) + e_i / \sqrt{x_i}$
Estimating the variance function:
$\ln(\hat{e}_i^2) = \ln(\sigma_i^2) + v_i = \alpha_1 + \alpha_2 z_{i2} + \cdots + \alpha_S z_{iS} + v_i$
Grouped data:
$\operatorname{var}(e_i) = \sigma_i^2 = \begin{cases} \sigma_M^2 & i = 1, 2, \ldots, N_M \\ \sigma_R^2 & i = 1, 2, \ldots, N_R \end{cases}$
Transformed model for feasible generalized least squares:
$y_i / \hat{\sigma}_i = \beta_1 (1/\hat{\sigma}_i) + \beta_2 (x_i / \hat{\sigma}_i) + e_i / \hat{\sigma}_i$

Regression with Stationary Time Series Variables
Finite distributed lag model:
$y_t = \alpha + \beta_0 x_t + \beta_1 x_{t-1} + \beta_2 x_{t-2} + \cdots + \beta_q x_{t-q} + v_t$
Correlogram:
$r_k = \Sigma (y_t - \bar{y})(y_{t-k} - \bar{y}) / \Sigma (y_t - \bar{y})^2$
For $H_0: \rho_k = 0$, $z = \sqrt{T} \, r_k \sim N(0, 1)$
LM test:
$y_t = \beta_1 + \beta_2 x_t + \rho \hat{e}_{t-1} + \hat{v}_t$; test $H_0: \rho = 0$ with a t-test
$\hat{e}_t = \gamma_1 + \gamma_2 x_t + \rho \hat{e}_{t-1} + \hat{v}_t$; test using $LM = T \times R^2$
AR(1) error:
$y_t = \beta_1 + \beta_2 x_t + e_t, \qquad e_t = \rho e_{t-1} + v_t$
Nonlinear least squares estimation:
$y_t = \beta_1 (1 - \rho) + \beta_2 x_t + \rho y_{t-1} - \beta_2 \rho x_{t-1} + v_t$
ARDL(p, q) model:
$y_t = \delta + \delta_0 x_t + \delta_1 x_{t-1} + \cdots + \delta_q x_{t-q} + \theta_1 y_{t-1} + \cdots + \theta_p y_{t-p} + v_t$
AR(p) forecasting model:
$y_t = \delta + \theta_1 y_{t-1} + \theta_2 y_{t-2} + \cdots + \theta_p y_{t-p} + v_t$
Exponential smoothing: $\hat{y}_t = \alpha y_{t-1} + (1 - \alpha) \hat{y}_{t-1}$
Multiplier analysis:
$\delta_0 + \delta_1 L + \delta_2 L^2 + \cdots + \delta_q L^q = (1 - \theta_1 L - \theta_2 L^2 - \cdots - \theta_p L^p)(\beta_0 + \beta_1 L + \beta_2 L^2 + \cdots)$

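A sketch of the correlogram computation and the $z = \sqrt{T} \, r_k$ statistic for $H_0: \rho_k = 0$, on simulated white noise; all values are hypothetical:

```python
import numpy as np

# Sample autocorrelation r_k = sum((y_t - ybar)(y_{t-k} - ybar)) / sum((y_t - ybar)^2)
def autocorr(y: np.ndarray, k: int) -> float:
    yb = y - y.mean()
    return np.sum(yb[k:] * yb[:-k]) / np.sum(yb ** 2)

rng = np.random.default_rng(2)
y = rng.normal(size=200)          # white noise, so every rho_k = 0
T = y.size
for k in range(1, 4):
    r_k = autocorr(y, k)
    z = np.sqrt(T) * r_k          # approximately N(0, 1) under H0: rho_k = 0
    print(k, round(r_k, 3), round(z, 2))
```
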
Unit Roots and Cointegration
Unit Root Test for Stationarity: Null hypothesis: $H_0: \gamma = 0$
Dickey-Fuller Test 1 (no constant and no trend):
$\Delta y_t = \gamma y_{t-1} + v_t$
Dickey-Fuller Test 2 (with constant but no trend):
$\Delta y_t = \alpha + \gamma y_{t-1} + v_t$
Dickey-Fuller Test 3 (with constant and with trend):
$\Delta y_t = \alpha + \gamma y_{t-1} + \lambda t + v_t$
Augmented Dickey-Fuller Tests:
$\Delta y_t = \alpha + \gamma y_{t-1} + \sum_{s=1}^{m} a_s \Delta y_{t-s} + v_t$
Test for cointegration:
$\Delta \hat{e}_t = \gamma \hat{e}_{t-1} + v_t$
Random walk: $y_t = y_{t-1} + v_t$
Random walk with drift: $y_t = \alpha + y_{t-1} + v_t$
Random walk model with drift and time trend: $y_t = \alpha + \delta t + y_{t-1} + v_t$

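A sketch of the Dickey-Fuller Test 2 regression on a simulated random walk; note that the t-ratio on $\gamma$ must be compared against Dickey-Fuller critical values, not the usual t table (data hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)
y = np.cumsum(rng.normal(size=300))   # a random walk, so H0: gamma = 0 is true

dy = np.diff(y)                       # delta y_t
y_lag = y[:-1]                        # y_{t-1}
X = np.column_stack([np.ones(y_lag.size), y_lag])
b = np.linalg.solve(X.T @ X, X.T @ dy)   # [alpha-hat, gamma-hat]

resid = dy - X @ b
sigma2 = resid @ resid / (X.shape[0] - X.shape[1])
se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
tau = b[1] / se[1]                    # test statistic for gamma
print(tau)                            # compare with Dickey-Fuller critical values
```
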
Panel Data
Pooled least squares regression:
$y_{it} = \beta_1 + \beta_2 x_{2it} + \beta_3 x_{3it} + e_{it}$
Cluster-robust standard errors: $\operatorname{cov}(e_{it}, e_{is}) = \psi_{ts}$
Fixed effects model:
$y_{it} = \beta_{1i} + \beta_2 x_{2it} + \beta_3 x_{3it} + e_{it}$, where $\beta_{1i}$ is not random
$(y_{it} - \bar{y}_i) = \beta_2 (x_{2it} - \bar{x}_{2i}) + \beta_3 (x_{3it} - \bar{x}_{3i}) + (e_{it} - \bar{e}_i)$
Random effects model:
$y_{it} = \beta_{1i} + \beta_2 x_{2it} + \beta_3 x_{3it} + e_{it}$, where $\beta_{1i} = \beta_1 + u_i$ is random
$(y_{it} - \alpha \bar{y}_i) = \beta_1 (1 - \alpha) + \beta_2 (x_{2it} - \alpha \bar{x}_{2i}) + \beta_3 (x_{3it} - \alpha \bar{x}_{3i}) + v_{it}^*$
$\alpha = 1 - \dfrac{\sigma_e}{\sqrt{T \sigma_u^2 + \sigma_e^2}}$
Hausman test:
$t = \dfrac{b_{FE,k} - b_{RE,k}}{\left[ \widehat{\operatorname{var}}(b_{FE,k}) - \widehat{\operatorname{var}}(b_{RE,k}) \right]^{1/2}}$

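A sketch of the fixed effects within transformation: demeaning each variable by its individual mean removes $\beta_{1i}$, so pooled least squares on the deviations estimates $\beta_2$ (simulated panel, hypothetical values):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
n_id, T = 30, 5
ids = np.repeat(np.arange(n_id), T)
u = np.repeat(rng.normal(size=n_id), T)     # individual effects beta_1i
x2 = rng.uniform(0, 10, n_id * T)
y = 1.0 + u + 0.6 * x2 + rng.normal(0, 1, n_id * T)

df = pd.DataFrame({"id": ids, "y": y, "x2": x2})
means = df.groupby("id")[["y", "x2"]].transform("mean")   # ybar_i, xbar_2i
within = df[["y", "x2"]] - means                          # deviations from individual means

b2 = np.sum(within["x2"] * within["y"]) / np.sum(within["x2"] ** 2)
print(b2)   # the fixed effects beta_1i drop out of the demeaned equation
```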

You might also like