
Forecasting Volatility in Financial Markets:

A Short Review of Common Models


Hoc Leng Chung (344465)
1 Introduction
Volatility forecasting is an essential part of financial econometrics. It is used to
measure financial risk. According to Poon and Granger (2003), there are at least 93
published and working papers that study the forecasting performance of volatility
models. Furthermore, several times that number of papers had, at the time of that
survey, been written on volatility modelling without considering the forecasting
aspect. We are now ten years further on, and many more papers have appeared on
the subject. This abundance of research shows how important volatility forecasting
is. It plays an important role in investment, security valuation, risk management
and monetary policy making. With good volatility forecasts we can, for example,
adjust our portfolios to maximize return and minimize risk.
Volatility is not the same as risk. It is a measure of uncertainty. It is an important
input to many investment decisions and portfolio creation. Investors and portfolio
managers have a certain tolerance for risk. Good volatility forecasts are then used
to proxy the risk over the holding period of the investment. Typically, the standard
deviation is used to proxy the volatility.
Investors and portfolio managers who use volatility information need volatility
forecasts. But volatility is not constant over time, so different approaches are
needed for different time periods. A simple approach called historical volatility
emerged. In this method, volatility is estimated using the sample standard deviation
of returns over a short recent period. Is this the right proxy for future volatility?
And how long should the lookback period be? If the period is too long, the estimate
does not capture today's volatility. If the period is too short, estimates will be very
noisy. Historical volatility estimators are, as the name suggests, historical: their
primary focus is to measure the volatility in the past. Historical volatility does
provide some models to forecast volatility, but it was designed to measure an
(assumed constant) volatility in the past adequately.
The inability of historical volatility to cope with volatility that changes over time
led to the development of volatility models that do account for this. Thus the
ARCH Class Conditional Volatility Models and their many extensions were born.
This paper aims to provide an overview of the core historical development of
historical and ARCH class volatility forecasting models in the existing literature.
We will now discuss the historical and ARCH class volatility models in more depth.
Section 2 describes what volatility actually is. Section 3 introduces some historical
volatility models and some ARCH Class Conditional Volatility Models with a
well-known extension. Section 4 introduces some popular forecast performance
measures. Section 5 discusses the historical development of these models.
2 Volatility Definitions and Measurement
Many investors and finance students often do not understand the differences be-
tween volatility, standard deviation and risk. As stated before, volatility is not
the same as risk. Investment risk is the uncertainty of a return and the potential
for financial loss or gain. A high volatility means that the price or return of a
security fluctuates a lot. High volatility does imply high investment risk of the
corresponding security. Volatility is a good measure of market risk, i.e. the risk
of losses in positions arising from movements in market prices. In addition to
market risk we also have credit risk, liquidity risk, asset-backed risk, reputational
risk, legal risk and several other kinds of risk in finance. So risk is much broader
than just market risk. In finance, volatility is often referred to as the standard
deviation or variance. It is estimated from a set of observations as
\[
\hat{\sigma}^2 = \frac{1}{n-1} \sum_{i=1}^{n} \left( R_i - \bar{R} \right)^2 , \qquad (1)
\]
where $R_i$ is the return of the security in period $i$ and $\bar{R}$ is the mean return. This
is called the sample variance. The standard deviation is just the square root of
the variance.
Notice that the standard deviation takes deviations around the sample mean.
Figlewski (1997) argues that the sample mean is not a very accurate estimate of
the true mean, especially in small samples. Taking deviations from zero instead
often increases the accuracy of volatility forecasts. This leads to the use of the
squared returns as a proxy for volatility; the squared returns are also known as the
realized volatility. Besides Figlewski (1997), Lopez (2001) also proposes to use the
squared returns to proxy the volatility. They found that (1) is 50 percent greater
or smaller than the true volatility nearly 75% of the time.
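For illustration, here is a minimal Python sketch (not part of the original literature) of the two estimators discussed above: the sample variance of equation (1) and the squared-return proxy that takes deviations from zero. The function names and the simulated return series are purely illustrative assumptions.

import numpy as np

def sample_variance(returns):
    """Equation (1): sample variance of returns around the sample mean."""
    r = np.asarray(returns, dtype=float)
    return np.sum((r - r.mean()) ** 2) / (len(r) - 1)

def squared_return_proxy(returns):
    """Volatility proxy that takes deviations from zero instead of the
    sample mean, i.e. the mean squared return (realized-volatility style)."""
    r = np.asarray(returns, dtype=float)
    return np.mean(r ** 2)

# Hypothetical usage with simulated daily returns.
rng = np.random.default_rng(0)
daily_returns = rng.normal(0.0, 0.01, size=250)
print(sample_variance(daily_returns), squared_return_proxy(daily_returns))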
3 Volatility models
This section discusses some core volatility forecasting models. The volatility $\sigma_t$ in
this section is the sample standard deviation of the return in period $t$, and $\hat{\sigma}_t$ is
the forecast of $\sigma_t$. Period $t$ could be any period. If $t$ is a month, then $\sigma_t^2$ is often
calculated as the sum of the squared daily returns within that month. If $t$ is a day,
we often use the squared return of that day to proxy the volatility. More recently,
the cumulative intra-day squared returns are used to proxy the daily volatility. As
stated before, the squared returns are used because they give a more precise estimator
of volatility, but it is also possible to just use the standard deviation. In the papers
discussed below this choice is usually motivated.
3.1 Historical volatility models
The simplest way to forecast the volatility of a series is to take its historical volatil-
ity using some fixed window. We now introduce some historical volatility models:
Random Walk Model
The best forecast of the volatility in period $t$ is just the last period's realized
volatility:
\[
\hat{\sigma}_t^2 = \sigma_{t-1}^2 \qquad (2)
\]
Historical Mean Model
The best forecast of the volatility in period $t$ is the historical average of the past
observed volatilities:
\[
\hat{\sigma}_t^2 = \frac{1}{t-1} \sum_{i=1}^{t-1} \sigma_i^2 \qquad (3)
\]
Moving Average/Historical volatility Model
The historical mean model uses all observations to estimate the volatility, even the
very old ones. The very old volatilities are given the same weight as the new
ones. If volatility clustering occurs, recent returns should be given more weight,
since returns of the recent past contain more information than returns of a long
time ago. Financial variables such as interest rates and spreads, and economic
variables, are often periodic. In order to account for this we need models that place
more weight on recent returns. One solution is the moving average model. The
moving average model uses a moving average instead of the full-sample average to
estimate the volatility. The moving average is defined as the equally weighted
average of the realized volatilities of the past $m$ periods:
\[
\hat{\sigma}_t^2 = \frac{1}{m} \sum_{i=1}^{m} \sigma_{t-i}^2 \qquad (4)
\]
Exponentially Weighted Moving Average
Another solution to this problem is the Exponentially Weighted Moving Average
(EWMA). This method adjusts the forecast based on past forecast errors: the
forecast is calculated as a weighted average of the most recently observed volatility
and the forecast made for that same period,
\[
\hat{\sigma}_t^2 = \lambda \sigma_{t-1}^2 + (1-\lambda)\hat{\sigma}_{t-1}^2 \qquad (5)
\]
Here $\lambda$ is known as the smoothing parameter and is constrained to $0 < \lambda < 1$. If
$\lambda$ is close to 1, the most recent observed volatility is given more weight.
Simple Regression
It is also possible to use a simple regression to forecast volatility:
\[
\hat{\sigma}_t^2 = \alpha + \beta \sigma_{t-1}^2 \qquad (6)
\]
where the parameters $\alpha$ and $\beta$ can be estimated by least squares.
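To make these forecasters concrete, the following minimal Python sketch implements equations (2) through (5) on a series of past realized variances; the array past_var, the window length m and the smoothing parameter lam are illustrative assumptions, and the regression forecast (6) would additionally require estimating its coefficients, e.g. by least squares.

import numpy as np

def random_walk_forecast(past_var):
    """Equation (2): forecast equals last period's realized variance."""
    return past_var[-1]

def historical_mean_forecast(past_var):
    """Equation (3): forecast equals the average of all past realized variances."""
    return np.mean(past_var)

def moving_average_forecast(past_var, m):
    """Equation (4): equally weighted average of the last m realized variances."""
    return np.mean(past_var[-m:])

def ewma_forecast(past_var, lam):
    """Equation (5): recursive exponentially weighted moving average."""
    sigma2_hat = past_var[0]              # initialise with the first observation
    for sigma2 in past_var[1:]:
        sigma2_hat = lam * sigma2 + (1.0 - lam) * sigma2_hat
    return sigma2_hat

# Hypothetical usage on simulated realized variances.
rng = np.random.default_rng(1)
past_var = rng.uniform(0.5, 2.0, size=60)
print(random_walk_forecast(past_var),
      historical_mean_forecast(past_var),
      moving_average_forecast(past_var, m=12),
      ewma_forecast(past_var, lam=0.9))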
3.2 ARCH Class Conditional Volatility Models
In this section, we will discuss the two historically most well-known models of this
family: ARCH and GARCH. These models utilize the conditional variance.
For all models discussed in this section the returns follow the process
\[
r_t = \mu + \varepsilon_t, \qquad \varepsilon_t = \sigma_t z_t,
\]
with $z_t$ a white noise process, i.e. independent draws from a normal distribution
with zero mean. We now proceed to discuss the models.
ARCH(p)
Engle (1982) was the first to introduce the autoregressive conditional heteroscedastic-
ity (ARCH) model for estimating and forecasting volatility. ARCH models are
designed to capture time-varying volatility and volatility clustering. The ARCH(p)
model is given as
\[
\sigma_t^2 = \alpha_0 + \sum_{i=1}^{p} \alpha_i \varepsilon_{t-i}^2 \qquad (7)
\]
where $\alpha_0 > 0$ and $\alpha_i \geq 0$ for $i > 0$. The weights of this equation can then be obtained
using ordinary least squares (OLS). This is a huge advantage over the naive histor-
ical models: the weights are no longer chosen arbitrarily but determined optimally
by OLS.
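As a rough sketch of the OLS idea described above, the snippet below regresses the squared (mean-adjusted) return on a constant and its own lags. This is only an illustration of the estimation step mentioned in the text; the variable names are hypothetical, the regression does not enforce the non-negativity constraints, and in practice ARCH models are usually estimated by maximum likelihood.

import numpy as np

def arch_weights_ols(returns, p):
    """Regress eps_t^2 on a constant and its p lags (equation (7)) by OLS.

    A simple least-squares sketch: eps_t is proxied by the mean-adjusted
    return, and no alpha_i >= 0 constraint is imposed.
    """
    r = np.asarray(returns, dtype=float)
    eps2 = (r - r.mean()) ** 2                      # squared mean-adjusted returns
    y = eps2[p:]                                    # dependent variable eps_t^2
    X = np.column_stack([np.ones(len(y))] +
                        [eps2[p - i:-i] for i in range(1, p + 1)])  # constant + lags
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef                                      # [alpha_0, alpha_1, ..., alpha_p]

# Hypothetical usage on simulated returns.
rng = np.random.default_rng(2)
print(arch_weights_ols(rng.normal(0, 0.01, 500), p=2))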
GARCH(p,q)
Bollerslev (1986) extended the ARCH(p) model and introduced the well-known
generalized autoregressive conditional heteroskedasticity (GARCH) model. The idea
is that ARCH(p) is too restrictive because it implies that shocks further back than
the lag parameter $p$ do not influence the current volatility. GARCH(p,q) accounts for
this by adding autoregressive terms in the conditional variance. The GARCH
conditional variance is given by
\[
\sigma_t^2 = \alpha_0 + \sum_{i=1}^{q} \alpha_i \varepsilon_{t-i}^2 + \sum_{j=1}^{p} \beta_j \sigma_{t-j}^2 \qquad (8)
\]
where $\alpha_0 > 0$ and $\sum_i \alpha_i + \sum_j \beta_j < 1$.
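To make the recursion concrete, here is a small sketch of a GARCH(1,1) variance filter, the $p = q = 1$ special case of equation (8), together with a one-step-ahead forecast. The parameter values below are purely illustrative assumptions; in practice they would be estimated by maximum likelihood (for example with the Python arch package).

import numpy as np

def garch11_filter(eps, alpha0, alpha1, beta1):
    """GARCH(1,1) special case of equation (8):
    sigma_t^2 = alpha0 + alpha1 * eps_{t-1}^2 + beta1 * sigma_{t-1}^2.

    Returns the filtered conditional variances and the one-step-ahead forecast.
    Parameters are assumed to satisfy alpha0 > 0 and alpha1 + beta1 < 1.
    """
    eps = np.asarray(eps, dtype=float)
    sigma2 = np.empty(len(eps))
    sigma2[0] = eps.var()                           # initialise at the sample variance
    for t in range(1, len(eps)):
        sigma2[t] = alpha0 + alpha1 * eps[t - 1] ** 2 + beta1 * sigma2[t - 1]
    forecast = alpha0 + alpha1 * eps[-1] ** 2 + beta1 * sigma2[-1]
    return sigma2, forecast

# Hypothetical usage with illustrative (not estimated) parameter values.
rng = np.random.default_rng(3)
eps = rng.normal(0, 0.01, 1000)
sigma2, sigma2_next = garch11_filter(eps, alpha0=1e-6, alpha1=0.05, beta1=0.90)
print(sigma2_next)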
In addition to the models discussed here, many more extensions of the ARCH class
have been developed. All of these models build on the ideas of the original ARCH
and GARCH models. An incomplete list includes: AARCH, APARCH,
EGARCH, FIGARCH, FIEGARCH, STARCH, SWARCH, STGARCH, GJR-GARCH,
TARCH, TGARCH, MARCH, NARCH, SNPARCH, SPARCH, SQGARCH, CES-
GARCH, Component ARCH, Asymmetric Component ARCH, Taylor-Schwert,
Student-t-ARCH, GED-ARCH and many more.
4 Forecast Evaluation
Forecasts are typically evaluated by their forecast errors, i.e. the difference
between the realized and the predicted value. To obtain the realized value, the
squared return is again used as a proxy. Poon and Granger (2003) name a
few popular simple evaluation measures used in the literature, including the Mean
Squared Error (MSE), Root Mean Squared Error (RMSE), Mean Absolute Error
(MAE) and Mean Absolute Percentage Error (MAPE).
\[
\mathrm{MSE} = \frac{1}{n} \sum_{i=1}^{n} \left( \hat{\sigma}_i - \sigma_i \right)^2 \qquad (9)
\]
\[
\mathrm{RMSE} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \left( \hat{\sigma}_i - \sigma_i \right)^2} \qquad (10)
\]
\[
\mathrm{MAE} = \frac{1}{n} \sum_{i=1}^{n} \left| \hat{\sigma}_i - \sigma_i \right| \qquad (11)
\]
\[
\mathrm{MAPE} = \frac{1}{n} \sum_{i=1}^{n} \left| \frac{\hat{\sigma}_i - \sigma_i}{\sigma_i} \right| \qquad (12)
\]
For all performance measures discussed here, the lower the value, the better the
forecast.
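As an illustration, the following short Python sketch computes the four measures for a vector of forecasts and realized volatilities (assumed to be on the same scale and, for the MAPE, nonzero); the function name and sample inputs are illustrative.

import numpy as np

def evaluate_forecasts(forecast, realized):
    """Equations (9)-(12): MSE, RMSE, MAE and MAPE of volatility forecasts."""
    f = np.asarray(forecast, dtype=float)
    r = np.asarray(realized, dtype=float)
    err = f - r
    mse = np.mean(err ** 2)
    rmse = np.sqrt(mse)
    mae = np.mean(np.abs(err))
    mape = np.mean(np.abs(err / r))                 # assumes realized values are nonzero
    return {"MSE": mse, "RMSE": rmse, "MAE": mae, "MAPE": mape}

# Hypothetical usage.
print(evaluate_forecasts([1.1, 0.9, 1.0], [1.0, 1.0, 1.2]))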
5 Past and present developments
This section reviews some core findings on the development of volatility forecasting
models.
5.1 Pre-ARCH/GARCH Era
According to Poon and Granger (2003), one of the first to forecast volatility before
ARCH/GARCH became widespread in the finance literature was Taylor (1987),
who uses open, high and low prices to construct one- to twenty-day volatility
forecasts for DM/$ futures. He found that the best forecasts were a weighted average
of present and past high, low and close prices. Dimson and Marsh (1990) find that
out-of-sample forecasts from the more sophisticated EWMA and simple regression
models, with time-varying optimized parameters, are inferior to those of the simple
moving average, while the random walk and historical mean were the worst of them
all. Sill (1993) investigated whether macroeconomic variables could predict the
volatility of stock returns, and found that S&P 500 volatility is higher during
recessions and that commercial paper–T-bill spreads help to predict stock market
volatility. Alford and Boatsman (1995) used the historical volatility of a firm to
predict its volatility over a five-year period, using a sample of 6,879 firms' stocks.
They find that adjusting a firm's historical volatility towards that of comparable
firms yields the best results; they called this the shrinkage forecast. Figlewski (1997)
and Alford and Boatsman (1995) both emphasize that a long estimation period for
the historical volatility is needed to produce more accurate forecasts.
5.2 ARCH/GARCH Era
One of the first papers to evaluate the predictive power of ARCH class models, and
one cited by many later papers, was Akgiray (1989); an even earlier contribution is
Diebold (1986). Akgiray (1989) finds that the GARCH(1,1) provided the best
forecasts of stock market index volatility in all periods, compared to EWMA, ARCH
and the historical volatility estimate: it had the lowest forecast evaluation measures
(see Section 4 for examples of such measures).
In 1988 Nelson introduced the EGARCH (Exponential GARCH) model in an un-
published manuscript. The EGARCH model specifies the conditional variance in loga-
rithmic form, which means that there is no need to impose estimation constraints in
order to avoid negative variances. With appropriate parameter values, this
specification captures the stylized fact that a negative shock leads to a higher
conditional variance in the subsequent period than a positive shock of the same size
would. The EGARCH was thus the first model to account for an asymmetric
volatility reaction. Another model which accounts for this asymmetry, the GJR-
GARCH, was developed by Glosten et al. (1993). Many papers began to circulate
testing the performance of these models. Among those in favor of EGARCH
for stock indexes and exchange rates are Cao and Tsay (1992), Heynen and Kat
(1994), Lee (1991) and Pagan and Schwert (1990). Among those favoring GJR-
GARCH are Brailsford and Faff (1996), who find that GJR-GARCH has superior
out-of-sample performance when forecasting stock market volatility; Engle and
Ng (1993) share the same opinion. Engle and Ng (1993) also introduced their own
model, the NGARCH, which accounts for volatility asymmetry, and Zakoian (1994)
introduced the TGARCH. So many new models accounting for an asymmetric
volatility reaction emerged.
Another class of ARCH models accounting for asymmetric volatility also began
to surface. This approach models the asymmetry as a consequence of the existence
of two states: a state with low volatility, where volatility adjusts more slowly and is
more persistent, and a state with high volatility, where volatility adjusts faster and
is less persistent. The existence of these two states was documented by, among
others, Friedman, Laibson and Minsky (1989) for the stock market and Ederington
and Lee (2001) for the foreign exchange market. The regime-switching GARCH
model (RSGARCH) was generalized by Gray (1996) using a first-order Markov
process with state-dependent transition probabilities. Gray (1996) uses this model,
which allows for different levels of mean reversion in the returns, to forecast the
volatility of the U.S. one-month T-bill rate, and finds that the RSGARCH
outperforms single-regime models.
More recently, models that capture long memory (LM) in volatility began to emerge.
We already discussed the stylized fact of volatility clustering. Volatility is said to
have very long memory when a shock to the volatility series affects the size of the
volatility over a long period. A truly permanent effect, however, is not desirable.
There are models such as the integrated GARCH (IGARCH) of Engle and Bollerslev
(1986) that are able to capture such an effect, but the resulting unconditional
variance is not finite, which implies that shocks have a permanent effect on the time
series. We usually take the first difference of a time series to transform it into a
stationary one. This differencing filter is called integration of order 1 [I(1)]. More
generally we have integration of order d [I(d)], and it is even possible to have
fractional integration. Thus models like the FIGARCH of Baillie et al. (1996) and
the FIEGARCH of Bollerslev and Mikkelsen (1996) were born. These models allow
for fractional d, and provided that $0 \leq d < 0.5$ the model is covariance stationary.
Vilasuso (2002) finds that the forecasting performance of FIGARCH is superior to
that of GARCH and IGARCH models for foreign exchange rates. Kang et al.
(2009) find that FIGARCH performs better than GARCH or IGARCH on the crude
oil market.
Currently, many new ARCH class models are being developed; for an incomplete
list see Section 3.2. Discussing all recent ARCH class models is beyond the scope
of this paper. We have now introduced the core historical development of the
ARCH class and historical volatility models.
6 Conclusion
This paper attempted to describe the historical development of volatility forecast-
ing models. We elaborated on the definition of volatility and discussed some core
volatility forecasting models and evaluation measures. We then traced the histor-
ical development of these models through the results reported in various papers
around their introduction. In doing so we have introduced the core historical
development of the ARCH class and historical volatility models. The development
of ARCH class models is still in progress, and many other ARCH class models have
already been developed besides those discussed here. It is expected that this will
continue in the future.
References
Akgiray, V. (1989), Conditional heteroscedasticity in time series of stock returns:
Evidence and forecasts, Journal of Business pp. 55–80.

Alford, A. W. and Boatsman, J. R. (1995), Predicting long-term stock return
volatility: Implications for accounting and valuation of equity derivatives, Ac-
counting Review pp. 599–618.

Baillie, R. T., Bollerslev, T. and Mikkelsen, H. O. (1996), Fractionally integrated
generalized autoregressive conditional heteroskedasticity, Journal of Economet-
rics 74(1), 3–30.

Bollerslev, T. (1986), Generalized autoregressive conditional heteroskedasticity,
Journal of Econometrics 31(3), 307–327.

Bollerslev, T. and Mikkelsen, H. O. (1996), Modeling and pricing long memory
in stock market volatility, Journal of Econometrics 73(1), 151–184.

Brailsford, T. J. and Faff, R. W. (1996), An evaluation of volatility forecasting
techniques, Journal of Banking & Finance 20(3), 419–438.

Cao, C. Q. and Tsay, R. S. (1992), Nonlinear time-series analysis of stock volatil-
ities, Journal of Applied Econometrics 7(S1), S165–S185.

Diebold, F. X. (1986), Modeling the persistence of conditional variances: A
comment, Econometric Reviews 5(1), 51–56.

Dimson, E. and Marsh, P. (1990), Volatility forecasting without data-snooping,
Journal of Banking & Finance 14(2), 399–421.

Ederington, L. and Lee, J. H. (2001), Intraday volatility in interest-rate and
foreign-exchange markets: ARCH, announcement, and seasonality effects, Jour-
nal of Futures Markets 21(6), 517–552.

Engle, R. F. (1982), Autoregressive conditional heteroscedasticity with estimates
of the variance of United Kingdom inflation, Econometrica: Journal of the
Econometric Society pp. 987–1007.

Engle, R. F. and Bollerslev, T. (1986), Modelling the persistence of conditional
variances, Econometric Reviews 5(1), 1–50.

Engle, R. F. and Ng, V. K. (1993), Measuring and testing the impact of news on
volatility, The Journal of Finance 48(5), 1749–1778.

Figlewski, S. (1997), Forecasting volatility, Financial Markets, Institutions &
Instruments 6(1), 1–88.

Friedman, B. M., Laibson, D. I. and Minsky, H. P. (1989), Economic implications
of extraordinary movements in stock prices, Brookings Papers on Economic
Activity 1989(2), 137–189.

Glosten, L. R., Jagannathan, R. and Runkle, D. E. (1993), On the relation be-
tween the expected value and the volatility of the nominal excess return on
stocks, The Journal of Finance 48(5), 1779–1801.

Gray, S. F. (1996), Modeling the conditional distribution of interest rates as a
regime-switching process, Journal of Financial Economics 42(1), 27–62.

Heynen, R. and Kat, H. (1994), Partial barrier options, The Journal of Financial
Engineering 3(3).

Kang, S. H., Kang, S.-M. and Yoon, S.-M. (2009), Forecasting volatility of crude
oil markets, Energy Economics 31(1), 119–125.
URL: http://www.sciencedirect.com/science/article/pii/S0140988308001539

Lee, K. Y. (1991), Are the GARCH models best in out-of-sample performance?,
Economics Letters 37(3), 305–308.

Lopez, J. A. (2001), Evaluating the predictive accuracy of volatility models, Jour-
nal of Forecasting 20(2), 87–109.

Pagan, A. R. and Schwert, G. W. (1990), Alternative models for conditional stock
volatility, Journal of Econometrics 45(1), 267–290.

Poon, S.-H. and Granger, C. W. (2003), Forecasting volatility in financial markets:
A review, Journal of Economic Literature 41(2), 478–539.

Sill, D. K. (1993), Predicting stock-market volatility, Business Review (Jan), 15–28.

Taylor, S. J. (1987), Forecasting the volatility of currency exchange rates, In-
ternational Journal of Forecasting 3(1), 159–170. Special Issue on Exchange
Rate Forecasting.
URL: http://www.sciencedirect.com/science/article/pii/0169207087900859

Vilasuso, J. (2002), Forecasting exchange rate volatility, Economics Letters
76(1), 59–64.

Zakoian, J.-M. (1994), Threshold heteroskedastic models, Journal of Economic
Dynamics and Control 18(5), 931–955.