The conditional point forecast, given an assumed future value $x^*_{T+h}$, is

$y_{T+h,T} \mid x^*_{T+h} = \beta_0 + \beta_1 x^*_{T+h}$

Operationally, we replace the unknown parameters with estimates,

$\hat{y}_{T+h,T} = \hat{\beta}_0 + \hat{\beta}_1 x^*_{T+h}$

and the corresponding forecast-error variance is

$\hat{\sigma}^2_c = \mathrm{var}(\hat{\beta}_0 + \hat{\beta}_1 x^*_{T+h}) + \hat{\sigma}^2$
Conditional Forecasting Models (cont)

In the latter expression, the first term accounts for parameter uncertainty, while the second accounts for the usual innovation uncertainty. Taken together we get an operational density forecast that accounts for parameter uncertainty:

$N\big(\hat{\beta}_0 + \hat{\beta}_1 x^*_{T+h},\; \mathrm{var}(\hat{\beta}_0 + \hat{\beta}_1 x^*_{T+h}) + \hat{\sigma}^2\big)$

from which interval forecasts may be constructed as well.
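A minimal numpy sketch of this calculation, assuming a bivariate regression estimated by OLS on simulated data (all names and values are illustrative, not part of the slides):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated sample: y_t = 1.0 + 0.5 x_t + eps_t, eps ~ N(0, 1)
T = 200
x = rng.normal(size=T)
y = 1.0 + 0.5 * x + rng.normal(size=T)

# OLS: beta_hat = (X'X)^{-1} X'y
X = np.column_stack([np.ones(T), x])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_hat
sigma2_hat = resid @ resid / (T - 2)      # innovation variance estimate

# Conditional point forecast given an assumed future value x*
x_star = np.array([1.0, 2.0])             # [1, x*_{T+h}]
y_hat = x_star @ beta_hat

# Parameter-uncertainty term: var(b0 + b1 x*) = sigma2 * x*' (X'X)^{-1} x*
XtX_inv = np.linalg.inv(X.T @ X)
param_var = sigma2_hat * x_star @ XtX_inv @ x_star

# Operational forecast variance: parameter + innovation uncertainty
forecast_var = param_var + sigma2_hat
print(y_hat, forecast_var)
```

The two components of `forecast_var` map directly to the two terms of the expression above; an interval forecast follows by taking $\hat{y} \pm z_{\alpha/2}$ times the square root of this variance.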
(b) Unconditional Forecasting Models

Often we do not want to make forecasts of y conditional upon assumptions about x; rather, we just want the best possible forecast of y: an unconditional forecast. To get an unconditional forecast from a regression model, we often encounter the forecasting-the-right-hand-side-variables problem. That is, to get an optimal unconditional point forecast for y, we cannot insert an arbitrary value for future x; rather, we need to insert the optimal point forecast, $x_{T+h,T}$, which yields the unconditional forecast

$y_{T+h,T} = \hat{\beta}_0 + \hat{\beta}_1 x_{T+h,T}$

We usually don't have such a forecast for x, and the regression model at hand doesn't help us. Assuming this variable follows an ARIMA representation, you will learn how to produce these forecasts in the next set of slides: FORECASTING II.
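As a small preview of that idea, here is a sketch assuming x follows an AR(1) (a special case of ARIMA); the data, coefficients, and names are simulated and illustrative only:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative data: x follows an AR(1), and y depends on x
T = 300
x = np.zeros(T)
for t in range(1, T):
    x[t] = 0.7 * x[t - 1] + rng.normal()
y = 1.0 + 0.5 * x + rng.normal(size=T)

# Step 1: point forecast of the right-hand-side variable, here from an AR(1)
phi = (x[1:] @ x[:-1]) / (x[:-1] @ x[:-1])  # OLS slope of x_t on x_{t-1}
h = 3
x_fcst = (phi ** h) * x[-1]                 # AR(1) h-step-ahead forecast

# Step 2: plug x_{T+h,T} into the estimated regression
X = np.column_stack([np.ones(T), x])
b, *_ = np.linalg.lstsq(X, y, rcond=None)
y_fcst = b[0] + b[1] * x_fcst               # unconditional point forecast
print(y_fcst)
```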
There are many ways of making forecasts, but all of them need the following common ingredients in order to succeed:
(i) that there are regularities to capture,
(ii) that such regularities are informative about the future,
(iii) that they are encapsulated in the selected forecasting method, and
(iv) that non-regularities are excluded.
The main alternatives are (for some of them see Reading I):
1) Guessing
2) Extrapolation
3) Leading Indicators
4) Surveys
5) Time-Series Models
6) Econometric Models
Evaluation of Forecasts
The most common overall accuracy measures are:
mean squared error,
root mean squared error, and
mean absolute error,
where $e_{t+h,t} = y_{t+h} - y_{t+h,t}$ are the forecast errors.
Evaluation of Forecasts (cont)
$\mathrm{MSE} = \frac{1}{T} \sum_{t=1}^{T} e_{t+h,t}^2$

$\mathrm{RMSE} = \sqrt{\frac{1}{T} \sum_{t=1}^{T} e_{t+h,t}^2}$

$\mathrm{MAE} = \frac{1}{T} \sum_{t=1}^{T} |e_{t+h,t}|$
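These three measures translate directly into code; a quick sketch, where the error values are made up for illustration:

```python
import numpy as np

# Hypothetical h-step-ahead forecast errors e_{t+h,t} = y_{t+h} - y_{t+h,t}
e = np.array([0.5, -1.2, 0.3, 0.8, -0.4])

mse = np.mean(e ** 2)          # mean squared error
rmse = np.sqrt(mse)            # root mean squared error
mae = np.mean(np.abs(e))       # mean absolute error
print(mse, rmse, mae)          # 0.516, ~0.718, 0.64
```

Note that RMSE is just the square root of MSE, so the two always rank competing forecasts identically; MAE can rank them differently because it penalizes large errors less heavily.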
(a) Comparing Forecast Accuracy

Suppose two competing forecasting procedures produce errors $e_t^{(1)}$ and $e_t^{(2)}$ for $t = 1, \dots, T$. Then, if expected squared error is to be the criterion, the procedure yielding the lower MSE over the sample period will be judged superior.

How can we test MSE(1) = MSE(2) versus the opposite?

Assume that the individual forecast errors are unbiased and not autocorrelated. Consider, now, the pair of random variables $e_t^{(1)} + e_t^{(2)}$ and $e_t^{(1)} - e_t^{(2)}$. Now

$E\big[(e^{(1)} + e^{(2)})(e^{(1)} - e^{(2)})\big] = \sigma_1^2 - \sigma_2^2$

so the two expected squared errors will be equal iff this pair of random variables is uncorrelated.

Q2: Find an easy way of testing this hypothesis (Hint: use regression analysis).
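The cross-product identity $E[(e^{(1)} + e^{(2)})(e^{(1)} - e^{(2)})] = \sigma_1^2 - \sigma_2^2$ can be checked by simulation; a minimal sketch with simulated, illustrative error series:

```python
import numpy as np

rng = np.random.default_rng(2)

# Two unbiased, serially uncorrelated error series with different variances
T = 100_000
e1 = rng.normal(scale=1.0, size=T)   # sigma1^2 = 1
e2 = rng.normal(scale=2.0, size=T)   # sigma2^2 = 4

# Sample analogue of E[(e1 + e2)(e1 - e2)] = sigma1^2 - sigma2^2
s = e1 + e2
d = e1 - e2
sample_mean = np.mean(s * d)
print(sample_mean)                   # close to 1 - 4 = -3
```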
Forecast combination

Let $f_t^{(1)}$ and $f_t^{(2)}$ be two forecasts of $y_t$ with errors

$e_t^{(j)} = y_t - f_t^{(j)}$ for $j = 1, 2$,

and such that $E(e_t^{(j)}) = 0$, $E\big((e_t^{(j)})^2\big) = \sigma_j^2$, and $E(e_t^{(1)} e_t^{(2)}) = \rho\,\sigma_1 \sigma_2$.

Consider now a combined forecast, taken to be a weighted average of the two individual forecasts,

$C_t = k f_t^{(1)} + (1 - k) f_t^{(2)}$

The forecast error is

$e_t^{(c)} = y_t - C_t = k e_t^{(1)} + (1 - k) e_t^{(2)}$
Forecast combination (cont)

Hence the error variance is

$\sigma_c^2 = k^2 \sigma_1^2 + (1 - k)^2 \sigma_2^2 + 2 k (1 - k) \rho \sigma_1 \sigma_2$

This expression is minimized for the value of k given by

$k^* = \dfrac{\sigma_2^2 - \rho \sigma_1 \sigma_2}{\sigma_1^2 + \sigma_2^2 - 2 \rho \sigma_1 \sigma_2}$

and substituting in the top expression, the minimum achievable error variance is

$\sigma_{c,0}^2 = \dfrac{\sigma_1^2 \sigma_2^2 (1 - \rho^2)}{\sigma_1^2 + \sigma_2^2 - 2 \rho \sigma_1 \sigma_2}$

Note that $\sigma_{c,0}^2 < \min(\sigma_1^2, \sigma_2^2)$, unless $\rho = \sigma_1/\sigma_2$ or $\rho = \sigma_2/\sigma_1$. If either equality holds, then the variance of the combined forecast is equal to the smaller of the two error variances.
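The optimal weight and minimum variance formulas translate directly into code; a minimal sketch (the function name and input values are illustrative):

```python
def optimal_combination(sigma1, sigma2, rho):
    """Optimal weight k* and minimized error variance for combining
    two forecasts with error std devs sigma1, sigma2 and correlation rho."""
    denom = sigma1 ** 2 + sigma2 ** 2 - 2 * rho * sigma1 * sigma2
    k = (sigma2 ** 2 - rho * sigma1 * sigma2) / denom
    var_min = sigma1 ** 2 * sigma2 ** 2 * (1 - rho ** 2) / denom
    return k, var_min

k, var_min = optimal_combination(1.0, 2.0, 0.3)
print(k, var_min)  # k ~ 0.895, var_min ~ 0.958

# Combining never does worse than the better individual forecast
assert var_min <= min(1.0 ** 2, 2.0 ** 2)
```

Here $\sigma_1^2 = 1 < \sigma_2^2 = 4$ and $\rho \ne \sigma_1/\sigma_2$, so the combined variance falls strictly below the better forecast's variance.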
Problems on Forecast combination

P1: Show that $\sigma_{c,0}^2 \le \min(\sigma_1^2, \sigma_2^2)$, and that $0 \le k^* \le 1$ if and only if $\rho \le \sigma_2/\sigma_1$ and $\rho \le \sigma_1/\sigma_2$.

P2: Explain what happens with $\sigma_{c,0}^2$ as $\rho$ approaches $-1$ or $+1$.