J.P. Morgan/Reuters
RiskMetrics News
J.P. Morgan and Reuters will collaborate to develop a more powerful version of RiskMetrics.
This Monitor is the first edition under the new collaboration between both firms.
RiskMetrics News also introduces FourFifteen, an Excel-based VaR calculator and report
generator developed and distributed by J.P. Morgan.
We list some new additions to the RiskMetrics software vendor list.
We review changes to the government yield curve compounding method used for volatility
estimation.
Reuters Ltd
International Marketing
Martin Spencer
(44-171) 542-3260
martin.spencer@reuters.com
Most Value-at-Risk (VaR) methodologies show their limitations when dealing with events that
are relatively infrequent. These shortcomings appear when risk managers select confidence intervals above 99%, and this applies not only to variance-covariance VaR techniques such as
RiskMetrics, but also to historical simulation.
In the case of RiskMetrics, the lack of coverage in the tail of the distributions has been attributed to the assumption that returns follow a conditional normal distribution. Since the distributions of many observed financial return series have tails that are fatter than those implied by
conditional normality, risk managers may underestimate the effective level of risk. The purpose
of this article is to describe a RiskMetrics VaR methodology that allows for a more realistic
model of financial return tail distributions.
A Value-at-Risk analysis of currency exposures
26
In this article, we compute the VaR of a portfolio of foreign exchange flows that consist of exposures to OECD and emerging market currencies, most of which are not yet covered by the RiskMetrics data sets. In doing so, we underscore the limitations of standard VaR practices when
underlying market return distributions deviate significantly from normality.
Estimating index tracking error for equity portfolios
34
RiskMetrics Monitor
Second quarter 1996
page 2
RiskMetrics News
Jacques Longerstaey
Morgan Guaranty Trust Company
Risk Management Advisory
(1-212) 648-4936
riskmetrics@jpmorgan.com
Martin Spencer
Reuters Ltd
International Marketing
(44-171) 542-3260
martin.spencer@reuters.com
Methodology
J.P. Morgan will continue to develop the RiskMetrics set of VaR methodologies and publish them in
the quarterly RiskMetrics Monitor and in the annual RiskMetrics Technical Document.
RiskMetrics data sets
Reuters will take over the responsibility for data sourcing as well as production and delivery of the risk
data sets.
The current RiskMetrics data sets will continue to be available on the Internet and will be further improved as a benchmark tool designed to broaden the understanding of the principles of market risk measurement.
When J.P. Morgan first launched RiskMetrics in October 1994, the objective was to go for broad market coverage initially and follow up with more granularity in the markets and instruments covered. This, over time, would reduce the need for proxies and would provide additional data to measure
more accurately the risk associated with non-linear instruments.
The partnership will address these new markets and products and will also introduce a new customizable service, which will be available over the ReutersWeb service. The customizable RiskMetrics approach will give risk managers the ability to scale data to meet the needs of their individual trading
profiles. Its capabilities will range from providing customized covariance matrices needed to run VaR
calculations, to supplying data for historical simulation and stress-testing scenarios.
More details on these plans will be discussed in later editions of the RiskMetrics Monitor.
Systems
Both J.P. Morgan and Reuters, through its Sailfish subsidiary, have developed client-site RiskMetrics
VaR applications. These products, together with the expanding suite of third-party applications, will
continue to provide RiskMetrics implementations.
ValueRisk monitors price series and cross-correlation of virtually all traded financial products.
The system calculates volatility and determines the total risk level as well as the stop-out
probability based on historical cross-correlations of all products for various portfolios.
Accounting for unprecedented market dynamics, the system offers quick and powerful simulation capabilities and stress-testing functions. It is fully compliant with the latest BIS (Basel
Committee) and BAK (Bundesaufsichtsamt für das Kreditwesen) risk controlling requirements.
Midas-Kapiti International
45 Broadway, New York, N.Y. 10006
Abby Friedman (1-212) 898 9500, FAX (1 212) 898 9510
The TMark trader analytics system provides comprehensive high-speed pricing, portfolio analysis
and hedging facilities to support trading of off-balance sheet instruments. Recently introduced
Release 2.4 supports the J.P. Morgan RiskMetrics VaR methodology. Specific features of
Release 2.4 include multi-user processing, deal ticket generation, increased number of currencies and rates, and revaluation of cash flows from basis swaps. TMark is a PC-based system
and runs under Microsoft Windows.
Midas-Kapiti International is one of the world's leading suppliers of software solutions to
banks, financial institutions, and corporations. Applications software for derivatives, market data integration, international banking, trade finance, and risk management has
been our focus for more than 20 years. With over 24 international offices serving 1,000 customers in 90 countries, we have an unparalleled distribution network that allows us to offer
sales and support to our customers wherever their offices may be located.
Value & Risk GmbH
Quellenweg 5a, 61348 Bad Homburg, Germany
Tako Ossentjuk (49-6172) 685051, FAX (49-6172) 685053, 100714.3446@compuserve.com
Value & Risk calculates Value-at-Risk with several selectable methodologies, such as covariance methodology, quantile simulation, user-defined/stress-test simulation, worst case, principal component analysis, Monte Carlo, historical simulation, and RiskMetrics. It supports
various distributions and manifold position-equivalent calculations as well as stochastic and
non-stochastic hedge proposals. It also offers substantial drill-down and reporting functionality, including numerous graphics.
Value & Risk co-operates with SAS Institute and is able to implement complete solutions from
data integration to decision support for traders, managers and controllers in a multi-user environment.
Besides supplying the above-mentioned methodologies, Value & Risk offers the standard
method for capital requirements according to the Basel Committee and CAD. All markets and
their instruments are covered, including a variety of exotic options.
Chart 2
Change in average estimated VaR
RiskMetrics data, Jan-92 to Feb-96
[Chart not reproduced: changes ranging from -10.0% to 15.0% across USD, JPY, FRF, NZD, LIR, DEM, ESP, DKK, and CAD and the 2-, 10-, 20-, and 30-year points of the curve.]
Since its release in October 1994, RiskMetrics has inspired an important discussion on VaR methodologies. A focal point of this discussion has been a RiskMetrics assumption that returns follow a conditional normal distribution. Since the distributions of many observed financial return series have tails
that are fatter than those implied by conditional normality, risk managers may underestimate the risk
of their positions if they assume returns follow a conditional normal distribution. In other words, large
financial returns are observed to occur more frequently than predicted by the conditional normal distribution. Therefore, it is important to be able to modify the current RiskMetrics model to account for
the possibility of such large returns.
The purpose of this article is to describe a RiskMetrics VaR methodology that allows for a more realistic model of financial return tail distributions. The article is organized as follows: the first section
reviews the fundamental assumptions behind the current RiskMetrics calculations, in particular, the
assumption that returns follow a conditional normal distribution. The second section (A new VaR
methodology on page 10) introduces a simple model that allows us to incorporate fatter-tailed distributions. The third section (A statistical model for estimating return distributions and probabilities on
page 18) shows how we estimate the unknown parameters of this model so that they can be used in conjunction with current RiskMetrics volatilities and correlations. The fourth section (on page 23) is the
conclusion to this article.
1 Darryl Hendricks, "Evaluation of Value-at-Risk Models Using Historical Data," FRBNY Economic Policy Review, April 1996.
Extending the RiskMetrics VaR framework to include a larger probability of large returns begins
with an understanding of the dynamics of market returns. Chart 1 illustrates such dynamics by showing
a time series of returns for the Nikkei 225 index over the period April 1990 through April 1996.
Chart 1
Nikkei index returns, r(t)
Returns (in percent), April 1990 to April 1996
[Chart not reproduced: daily returns ranging roughly from -10 to 10 percent, with annotated periods of high and low volatility.]
In addition to the typical feature of volatility clustering (i.e., periods of high and low volatility), Chart 1
displays large returns that appear to be inconsistent with the remainder of the data series. To show how
the observed returns, r(t), differ from their standardized counterparts, r(t)/σ(t), the standardized returns
for the Nikkei index are presented in Chart 2 on page 8. (To create the standard errors for scaling the
returns, we used the current RiskMetrics daily forecasting methods.)
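The parenthetical above can be made concrete: the RiskMetrics daily volatility forecast is an exponentially weighted moving average (EWMA) of squared returns with decay factor 0.94. A minimal sketch of the standardization step, run on synthetic data since the Nikkei series itself is not reproduced here:

```python
import math
import random

def ewma_standardize(returns, decay=0.94):
    """Divide each return by the prior period's EWMA volatility forecast.

    sigma2(t+1) = decay * sigma2(t) + (1 - decay) * r(t)^2
    """
    sigma2 = returns[0] ** 2 or 1e-8   # seed the recursion
    standardized = []
    for r in returns:
        standardized.append(r / math.sqrt(sigma2))
        sigma2 = decay * sigma2 + (1 - decay) * r * r
    return standardized

# Synthetic illustration: a quiet period, then one large "surprise" return.
random.seed(7)
rets = [random.gauss(0, 0.3) for _ in range(250)] + [-3.0]
std = ewma_standardize(rets)
# Scaled by the low preceding volatility, the raw -3% move becomes a very
# large standardized return -- the "conditional event" effect in Chart 2.
print(round(rets[-1], 2), round(std[-1], 2))
```

The point of the sketch is that the same observed return can be unremarkable or extreme depending on the volatility that preceded it.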
Chart 2
Standardized returns (r(t)/σ(t)) of the Nikkei index
April 4, 1990 to March 26, 1996
[Chart not reproduced: standardized returns ranging roughly from -10 to 8, with the large negative observation labeled "Conditional event".]
Comparing Charts 1 and 2, note the large negative standardized return (called Conditional event in
Chart 2) and that it results in part from the low level of volatility that preceded the corresponding observed return (Chart 1). We can interpret such large conditional non-normal returns as surprises because immediately prior to the observed return there was a period of low return volatility. Hence, the
conditional event return is unexpected.
Conversely, observed returns that appear large may no longer do so when standardized because their
values were expected. In other words, these large returns occur in periods of relatively high volatility.
This scenario is demonstrated in Chart 3, which shows an observed time series of spot gold contract
returns and their standardized values.
Chart 3
Observed spot gold returns (A) and standardized returns (B)
April 4, 1990 to March 26, 1996
[Chart not reproduced: panel (A) shows observed gold returns and panel (B) their standardized values, both ranging roughly from -8 to 4.]
Chart 3 demonstrates the effect of standardization on return magnitudes. For example, observed returns
(A) show more volatility clustering relative to their standardized counterparts (B).
To summarize, RiskMetrics assumes that financial returns divided by their respective volatility forecasts are normally distributed with mean 0 and variance 1. This assumption is crucial because it recognizes that volatility changes over time.
[1]
PDF = p_1 N_1(μ_1, σ_1) + p_2 N_2(μ_2, σ_2)
Eq. [1] is known as a normal mixture model. An interesting feature of this model is that it allows us to
assign large returns a larger probability (compared to the standard normal model). Chart 4 on page 11
shows two simulated densities, one from the standard normal distribution, the other from a normal mixture model.
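Since the exact mixture parameters behind Chart 4 are not reproduced here, the following sketch uses illustrative values (a 98%/2% mix of a unit-variance component and a wide, zero-mean "event" component) to show how Eq. [1] assigns more probability to large standardized returns than the standard normal does:

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

def mixture_pdf(x, p1, mu1, s1, p2, mu2, s2):
    # Eq. [1]: a probability-weighted sum of two normal densities.
    return p1 * normal_pdf(x, mu1, s1) + p2 * normal_pdf(x, mu2, s2)

# Illustrative parameters, not those used for Chart 4.
for x in (0.0, 2.0, 4.0):
    n = normal_pdf(x)
    m = mixture_pdf(x, 0.98, 0.0, 1.0, 0.02, 0.0, 5.0)
    print(x, round(n, 5), round(m, 5))
```

At x = 4 the mixture density is nearly ten times the normal density, while near zero the two are almost indistinguishable: fatter tails at essentially no cost in the center.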
Chart 4
Standard normal and normal mixture probability density functions (PDF)
[Chart not reproduced: both densities plotted over standardized returns from -5.00 to 4.50; the normal density peaks near 0.40 at zero, while the mixture shows visibly fatter tails.]
Since the normal mixture model can assign relatively larger-than-normal probabilities to big returns,
we choose to model standardized returns as the sum of a normal return, n(t), with mean zero and variance σ_n², and another normal return, Δ(t), with mean μ_Δ and variance σ_Δ², that occurs each period with probability p. Note that if we set σ_n = 1, then n(t) represents the part of the return that is modeled correctly
according to RiskMetrics. It then follows that we can write a standardized return, R(t), as generated
from the following model:
[2]
R(t) = n(t) + δ(t)·Δ(t)

where δ(t) = 1 with probability p and 0 otherwise.
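A sketch of simulating standardized returns from Eq. [2]: the indicator δ(t) is drawn as a Bernoulli(p) event flag, and the parameter values below are illustrative, of the size reported later in Table 1:

```python
import random

def simulate_mixture_returns(n, p, mu_delta, sigma_delta, sigma_n=1.0, seed=42):
    """Draw R(t) = n(t) + delta(t) * Delta(t) for t = 1..n.

    n(t)     ~ N(0, sigma_n^2)   normal part (sigma_n = 1 recovers the
                                 standard RiskMetrics model)
    delta(t) ~ Bernoulli(p)      event indicator
    Delta(t) ~ N(mu_delta, sigma_delta^2)   event return
    """
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        r = rng.gauss(0.0, sigma_n)
        if rng.random() < p:
            r += rng.gauss(mu_delta, sigma_delta)
        out.append(r)
    return out

rets = simulate_mixture_returns(10_000, p=0.013, mu_delta=0.14, sigma_delta=6.4)
big = sum(1 for r in rets if abs(r) > 4.0)
# Typically dozens of returns beyond 4 standard deviations, versus less than
# one expected in a N(0,1) sample of this size.
print("returns beyond 4 std devs:", big)
```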
We then conduct a Monte Carlo study on single instruments to determine how accurately our new model captures fat tails. The experiment is performed in three steps. First, we simulate 10,000 observations
from a t-distribution and then compute the simulated percentiles at the 0.5%, 1%, 2.5%, and 5% probability levels. Second, we estimate the parameters of Eq. [2] by using the simulated data, and determine
the percentiles implied by Eq. [2]. (We refer to the percentiles generated from Eq. [2] as "mix" since
its associated PDF is essentially a normal mixture model.) Third, we compare the actual percentiles
generated from the t-distribution to those produced by Eq. [2] and by the standard normal model (standard RiskMetrics).
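The three-step experiment can be sketched as follows. Step two (estimating the parameters from the simulated sample) is skipped here; instead the four-degrees-of-freedom estimates from Table 1 are plugged in directly, so the numbers only loosely echo the table:

```python
import math
import random

rng = random.Random(0)

def draw_std_t4():
    """Student-t with 4 df, scaled to unit variance (the variance of t4 is 2)."""
    z = rng.gauss(0.0, 1.0)
    v = -2.0 * (math.log(rng.random()) + math.log(rng.random()))  # chi-square(4)
    return (z / math.sqrt(v / 4.0)) / math.sqrt(2.0)

def draw_mix():
    """Eq. [2] with the four-degrees-of-freedom estimates from Table 1."""
    r = rng.gauss(0.0, 1.148)
    if rng.random() < 0.0066:
        r += rng.gauss(0.0753, 4.394)
    return r

def pct(xs, q):
    """Empirical lower-tail percentile."""
    return sorted(xs)[int(q * len(xs))]

n = 10_000
t_sample = [draw_std_t4() for _ in range(n)]
mix_sample = [draw_mix() for _ in range(n)]
for q in (0.005, 0.01, 0.025, 0.05):
    # The N(0,1) percentiles at these levels run only from -2.58 to -1.64;
    # both fat-tailed samples reach noticeably further into the lower tail.
    print(q, round(pct(t_sample, q), 3), round(pct(mix_sample, q), 3))
```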
Table 1 reports the results of our study. Data in column 2 is simulated from t-distributions with 2, 4, 10
and 100 degrees of freedom. Notice that the smaller the degrees of freedom, the fatter the tails of the
simulated distribution (compare columns 4 and 5).
Table 1
Comparing percentiles of t-distribution and normal mixture
Simulated data from t-distribution with 2, 4, 10, and 100 degrees of freedom

Two degrees of freedom
Parameters:1  μ_Δ = 0.1357,  σ_Δ = 6.414,  σ_n = 1.440,  p = 1.3%

Percentile (%)   t-dist.   mix     relative error      relative error
   (1)             (2)     (3)     (mix-t-dist)/t-dist (normal-t-dist)/t-dist
                                        (4)                  (5)
0.5              5.113     4.453        13%                  50%
1.0              3.832     3.601         6%                  40%
2.5              2.796     2.930         5%                  30%
5.0              2.195     2.428        10%                  25%

Four degrees of freedom
Parameters:1  μ_Δ = 0.0753,  σ_Δ = 4.394,  σ_n = 1.148,  p = 0.66%

Percentile (%)   t-dist.   mix     relative error      relative error
                                   (mix-t-dist)/t-dist (normal-t-dist)/t-dist
0.5              3.130     3.121         0%                  17%
1.0              2.719     2.753         1%                  14%
2.5              2.239     2.288         2%                  13%
5.0              2.811     1.910        32%                  40%

1 Parameters are defined as follows:
μ_Δ = mean of the normal distribution, Δ_t, describing the standardized return
σ_Δ = standard deviation of the normal distribution, Δ_t
σ_n = standard deviation of the normal return, n_t
p = probability that the standardized return is generated from the normal distribution Δ_t
Table 1 (continued)
Comparing percentiles of t-distribution and normal mixture
Simulated data from t-distribution with 2, 4, 10, and 100 degrees of freedom

Ten degrees of freedom
Parameters:1  μ_Δ = 0.226,  σ_Δ = 3.397,  σ_n = 1.029,  p = 0.32%

Percentile (%)   t-dist.   mix     relative error      relative error
                                   (mix-t-dist)/t-dist (normal-t-dist)/t-dist
0.5              2.896     2.710         6%                  12%
1.0              2.496     2.428         3%                   7%
2.5              2.053     2.034         0.1%                 5%
5.0              1.677     1.703         2%                   2%

One hundred degrees of freedom
Parameters:1  μ_Δ = 0.4189,  σ_Δ = 2.518,  σ_n = 1.026,  p = 0.2%

Percentile (%)   t-dist.   mix     relative error      relative error
                                   (mix-t-dist)/t-dist (normal-t-dist)/t-dist
0.5              2.579     2.626         2%                   0%
1.0              2.392     2.366         1%                   3%
2.5              1.972     1.990         1%                   0%
5.0              1.648     1.669         1%                   1%
The results in Table 1 (columns 4 and 5) clearly show how the mixture model is superior to the normal
model in recovering the tail percentiles of the t-distribution (notably in the cases of 2 and 4 degrees of
freedom). Also, reading from the top of the table downwards, notice how the estimates of σ_Δ, σ_n, and p
change as the distribution becomes more normal. When there are very fat tails, there is a greater chance
of observing a return from the normal distribution with the large variance (1.3% for 2 degrees of freedom vs. 0.2% with 100 degrees of freedom). Furthermore, the estimated standard deviation σ_Δ becomes smaller as the simulated distribution becomes more and more normal.
Next, we consider the case of calculating the VaR of a portfolio of returns. Unlike the single-instrument
case described above, when aggregating returns we must grapple with issues such as the correlation
between the different instruments' δ_t's and Δ_t's. For example, consider a portfolio that consists of two instruments with returns R_1t and R_2t expressed as follows:

[3]
R_1t = n_1t + δ_1t Δ_1t
R_2t = n_2t + δ_2t Δ_2t

Let w_1 and w_2 denote the amount invested in instruments 1 and 2, respectively. We can write the portfolio return, R_pt, as
[4]
R_pt = w_1 σ_1t R_1t + w_2 σ_2t R_2t
     = (w_1 σ_1t n_1t + w_2 σ_2t n_2t)              [standard RiskMetrics component]
     + (w_1 σ_1t δ_1t Δ_1t + w_2 σ_2t δ_2t Δ_2t)    [new portfolio component]

Notice that when aggregating returns we must multiply the returns by the RiskMetrics standard deviations (σ_1t and σ_2t) to preserve the proper scale.
Now, to compute VaR it is simple to evaluate the standard RiskMetrics component, since all that is
required are the RiskMetrics standard deviations and correlations, which are readily available. However, things are not so straightforward with the new portfolio component. For example, we must determine whether we should use estimates of the correlation between Δ_1t and Δ_2t or simply assume how
they are correlated. Similar issues apply to δ_1t and δ_2t, although, according to our estimation procedure, it is not possible to estimate their correlation. Through extensive experimentation with portfolios of various sizes, we have concluded that treating both the δ_t's and Δ_t's as independent offers the
best results relative to the standard normal model and to the assumption that the Δ_t's are perfectly correlated. The implication of this result is that for any portfolio of arbitrary size, all that is required to compute its VaR are the RiskMetrics volatilities and correlations, the probabilities p, the mean estimates μ_Δ,
and the standard deviations σ_Δ. Note that there is one p, μ_Δ, and σ_Δ for each time series. We now offer
an example to show how we reached the conclusion to treat the δ_t's and Δ_t's as independent.
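A Monte Carlo sketch of the Eq. [4] aggregation for two instruments, treating the event indicators and event returns as independent across instruments. The weights, volatilities, and mixture parameters are illustrative, and the correlation between the normal parts is set to zero for brevity (in practice it would come from the RiskMetrics correlation matrix):

```python
import random

rng = random.Random(1)

def portfolio_draw(weights, sigmas, p, mu, sigma_delta, sigma_n=1.0):
    """One draw of R_p = sum_j w_j * sigma_j * (n_j + delta_j * Delta_j),
    with event indicators and event returns independent across instruments."""
    total = 0.0
    for w, s, pj, mj, sdj in zip(weights, sigmas, p, mu, sigma_delta):
        r = rng.gauss(0.0, sigma_n)
        if rng.random() < pj:
            r += rng.gauss(mj, sdj)
        total += w * s * r
    return total

# Illustrative two-instrument portfolio: unit weights, 1% daily RiskMetrics
# volatilities, and mixture parameters of the size estimated in Table 3.
draws = sorted(
    portfolio_draw([1.0, 1.0], [0.01, 0.01],
                   p=[0.014, 0.014], mu=[0.5, 0.5], sigma_delta=[3.2, 3.2])
    for _ in range(20_000)
)
var_95 = -draws[int(0.05 * len(draws))]
var_99 = -draws[int(0.01 * len(draws))]
print(f"95% VaR: {var_95:.4f}  99% VaR: {var_99:.4f}")
```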
Consider the case of a portfolio that consists of five assets. We assume that the true returns are generated
from the following model:

[5]
R_jt = n_jt + δ_jt Δ_jt,   for j = 1, 2, 3, 4, 5

where the vector Δ = (Δ_1, …, Δ_5) is distributed multivariate normal with mean vector (0, 0, 0, 0, 0),
standard deviation vector (10, 10, 10, 10, 10), and correlation matrix
[6]
[ 1.0  0.5  0.5  0.5  0.5 ]
[ 0.5  1.0  0.5  0.5  0.5 ]
[ 0.5  0.5  1.0  0.5  0.5 ]
[ 0.5  0.5  0.5  1.0  0.5 ]
[ 0.5  0.5  0.5  0.5  1.0 ]
Recall that we ignore the correlations between the Δ_t's because, in practice, it is very difficult to get
good estimates of the correlation matrix in Eq. [6].
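Simulating Eq. [5]-[6] requires correlated normal event returns. One standard way, sketched below, is a Cholesky factorization of the equicorrelated covariance matrix implied by Eq. [6] (correlation 0.5, standard deviations 10):

```python
import math
import random

def cholesky(a):
    """Lower-triangular L with L L' = a, for a symmetric positive definite."""
    n = len(a)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(a[i][i] - s)
            else:
                L[i][j] = (a[i][j] - s) / L[j][j]
    return L

n_assets, rho, sd = 5, 0.5, 10.0
cov = [[(sd * sd if i == j else rho * sd * sd) for j in range(n_assets)]
       for i in range(n_assets)]
L = cholesky(cov)

rng = random.Random(2)
def correlated_events():
    """One joint draw of the five event returns Delta_1..Delta_5."""
    z = [rng.gauss(0.0, 1.0) for _ in range(n_assets)]
    return [sum(L[i][k] * z[k] for k in range(i + 1)) for i in range(n_assets)]

sample = correlated_events()
print([round(x, 2) for x in sample])
```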
Table 2
Testing assumptions on correlation between Δ_t's
5,000 simulated returns from correlated mixture model

                                 Percentiles:
Confidence interval   True    Normal   Independent Δ_t's   Perfectly correlated Δ_t's
5.0%                  1.63    1.65     1.88                1.74
2.5%                  2.14    1.96     2.40                2.15
1.0%                  3.32    2.32     3.32                3.10
0.5%                  5.08    2.57     4.20                8.54
Next, we conduct the same experiment, but this time on real data. We form a portfolio of returns from
5 foreign exchange series, again weighting each return series by one unit. Table 3 reports the parameter
estimates after fitting each of the five return series to Eq. [2] on page 11. The percentiles implied by
Eq. [2] under different aggregation assumptions are reported in Table 4.
Table 3
Parameter estimates of the normal mixture model
Fitting the model (Eq. [2]) to 5 foreign exchange return series

                     Parameter estimates:
Currencies    μ_Δt    σ_Δt    σ_nt    p (%)
Austria       0.80    3.09    1.01    1.2
Australia     0.33    3.48    1.10    1.5
Belgium       0.68    3.46    1.03    1.5
Canada        0.25    3.20    1.01    1.4
Denmark       1.34    3.00    1.08    1.4
Table 4
Testing assumptions on correlation between Δ_t's
Portfolio returns generated from 5 foreign exchange series

                              Percentiles:
Confidence interval   Normal   Independent Δ_t's   Perfectly correlated Δ_t's
5.0%                  1.65     1.71                1.69
2.5%                  1.96     2.05                2.10
1.0%                  2.32     2.46                2.56
0.5%                  2.57     2.77                3.16
Tables 2 and 4 show that the independence and perfect-correlation assumptions give similar results.
However, at the smaller percentiles, say 1% and smaller, the assumption that the Δ_t's are independent
tends to be more accurate.
To help summarize how to compute VaR under this new proposed methodology, we refer the reader to
the flow chart presented in Chart 5 on page 17. The gray shaded boxes represent the data required
(which we would supply) to estimate VaR under the new methodology. In addition to the RiskMetrics
volatilities and correlations, we would supply p, μ_Δ, and σ_Δ for each time series and the formulae that
would take as inputs the portfolio weights, the RiskMetrics volatilities and correlations, and p, μ_Δ,
and σ_Δ to produce a VaR estimate at a prespecified confidence level. The italicized words tell when the
data would be updated.
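The "adjusted percentile" step can be made concrete by inverting the mixture CDF numerically. A sketch using bisection, with illustrative parameters of the size reported in Table 3 (note that when an event occurs, R is the sum of two independent normals and so is itself normal with variance σ_n² + σ_Δ²):

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def mixture_cdf(x, p, mu, sigma_delta, sigma_n=1.0):
    # With prob (1-p), R ~ N(0, sigma_n^2); with prob p, R is the sum of two
    # independent normals, i.e. N(mu, sigma_n^2 + sigma_delta^2).
    s_event = math.sqrt(sigma_n ** 2 + sigma_delta ** 2)
    return (1.0 - p) * norm_cdf(x / sigma_n) + p * norm_cdf((x - mu) / s_event)

def adjusted_percentile(q, p, mu, sigma_delta, sigma_n=1.0):
    """Invert the mixture CDF by bisection: find x with F(x) = q."""
    lo, hi = -50.0, 50.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mixture_cdf(mid, p, mu, sigma_delta, sigma_n) < q:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

z = adjusted_percentile(0.05, p=0.014, mu=0.5, sigma_delta=3.2)
# New VaR = |adjusted percentile| x RiskMetrics volatility x position size;
# the adjusted percentile sits a little below the normal percentile of -1.645.
print(round(z, 3))
```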
Chart 5
Flow chart of VaR calculation
[Flow chart not reproduced: the RiskMetrics covariance matrix (updated daily) feeds the computation of an adjusted percentile, which yields the new VaR alongside the standard RiskMetrics VaR.]
Thus far, there has been no mention of how p, μ_Δ, and σ_Δ are estimated. The next section describes
the statistical model used to estimate p, μ_Δ, and σ_Δ. The estimation process is rather technical, and
uninterested readers should skip to Conclusions on page 23.
[7]
R_t = n_t + δ_t Δ_t

where

n_t ~ N(0, σ_n²)
Prob(δ_t = 1) = p
Prob(δ_t = 0) = (1 - p)

and

Δ_t ~ N(0, σ_Δ²)
We analyze Eq. [7] within a Bayesian framework. Given prior distributions and values for
δ_t, p, Δ_t, and σ_n², we derive the marginal posterior distributions of Δ_t, p, and σ_n², as well as a time series
of the posterior probabilities of events, i.e., Prob(δ_t = 1) at each point in time. The basic computational tool used is the Gibbs sampler, which uses random draws from the conditional distributions of
each variable of a random vector given all other variables to obtain samples from the marginal distributions. The sampler thus only requires the ability to draw random samples from the conditional distributions of the variables involved. This minimum requirement makes the sampler particularly useful,
as the joint distribution of the variables Δ_t, p, and σ_n² is complicated.
As previous research has shown, traditional maximum likelihood analysis of models such as Eq. [7] is
complicated because the random mechanism, p, can give rise to unknown numbers of shifts at arbitrary
time points. Alternatively, formulating Eq. [7] by using a Bayesian approach and applying the Gibbs
sampler is an effective way to obtain the marginal posterior distributions. A major advantage of the
Bayesian approach is that there is no need to consider the number of level shifts in the series; the shifts
are in effect governed by the probability p.
Following the Bayesian paradigm, in order to estimate Eq. [7] we must first specify the prior distributions for p, Δ_t, and σ_n². The prior distribution for the variance of the standard normal random variable
n_t is

[8]
σ_n² ~ νλ χ_ν⁻²
where χ_ν⁻² is an inverse chi-square random variable with ν degrees of freedom, so that σ_n² has mean and variance
given by

E[σ_n²] = νλ / (ν - 2),   ν > 2
V[σ_n²] = 2ν²λ² / [(ν - 2)²(ν - 4)],   ν > 4

The prior distribution for the event returns is

Δ_t ~ NID(0, σ_Δ²)

and the prior for the event probability p is Beta(α_1, α_2), with variance

V(p) = α_1 α_2 / [(α_1 + α_2)²(α_1 + α_2 + 1)]

The hyperparameters, i.e., the parameters (λ, ν, α_1, α_2, σ_Δ²) for the prior distributions, are assumed
known. In practice, these hyperparameters can be specified by using substantive prior information on
the series under study. The purpose of the Gibbs sampler is to find the conditional posterior distributions of subsets of the unknown parameters δ, Δ, p, σ_n². Denoting the conditional probability density
of x given y by p(x|y) and using some standard Bayesian techniques, we obtain the posterior distributions, i.e., the distributions after we update our priors with data.3
To quantify our a priori beliefs, we set the priors to the following values:

α_1 = 2;  α_2 = 98
ν = 3;  λ = 1/3
σ_Δ² = 100

These settings imply that before we estimate the parameters of Eq. [7], we believe an event occurs 2%
of the time, the expected value of the standard deviation of n_t is one, and event returns, Δ_t, are distributed normally with mean of zero and a standard deviation of 10. Chart 6 shows how the prior distribution on event returns compares to the prior distribution of standardized returns.
3 The posterior distributions are given in the appendix to this paper (page 24).
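The prior moments implied by these settings can be checked directly from the beta and inverse chi-square formulas above. Taking λ = 1/3 so that E[σ_n²] = νλ/(ν - 2) = 1 is an assumption consistent with the text's statement that the expected standard deviation of n_t is one:

```python
# Prior implications of the hyperparameter settings alpha1 = 2, alpha2 = 98,
# nu = 3, sigma_delta^2 = 100; lam = 1/3 is an assumed value chosen so that
# E[sigma_n^2] = nu*lam/(nu - 2) = 1.
alpha1, alpha2 = 2.0, 98.0
nu, lam = 3.0, 1.0 / 3.0
sigma_delta2 = 100.0

prior_mean_p = alpha1 / (alpha1 + alpha2)            # beta prior mean
prior_var_p = (alpha1 * alpha2 /
               ((alpha1 + alpha2) ** 2 * (alpha1 + alpha2 + 1)))
prior_mean_sigma_n2 = nu * lam / (nu - 2.0)          # inverse chi-square mean
event_sd = sigma_delta2 ** 0.5                       # prior std of event returns

print(prior_mean_p, round(prior_var_p, 6),
      round(prior_mean_sigma_n2, 6), event_sd)       # → 0.02 0.000194 1.0 10.0
```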
Chart 6
Prior distributions for standardized returns and event returns
[Chart not reproduced: the N(0,1) density for standardized returns plotted against the much flatter N(0,100) prior density for event returns, over the range -10 to 10.]
Note that by choosing a large standard deviation we are implying that we do not have strong a priori
beliefs on the values of event returns. Also, assuming a mean of zero implies that there is no bias toward
either positive or negative events.
Combining the observed returns with our prior settings, we estimate the marginal distributions of
Δ, p, and σ_n², which represent the probability distributions of the event returns, the probability of an
event, and the variance of the normal standardized random variable, respectively. Chart 7 presents the
estimated posterior distribution of event returns for gold spot contracts. Specifically, given our priors
we fit Eq. [7] on page 18 to gold spot contract returns for the period April 4, 1990 through March 26,
1996 and estimate the distribution of Δ.
Chart 7
Posterior distribution of event returns (Δ) of gold spot contracts
X-axis is in units of standardized returns
[Chart not reproduced: bimodal PDF over the range -10 to 8, with modes near -4.5 and 4.3.]
Notice how the event returns have a symmetric bimodal distribution. Effectively, what this shape tells
us is that there is important information in the data. The data has transformed the prior distribution into
something very different. When event returns are negative, values around -4.5 occur most frequently.
Similarly, when event returns are positive, values around 4.3 occur most often. This is in stark contrast to the prior distribution, which assumes that zero is the most commonly observed value for event
returns.
In fact, when fitting Eq. [7] to 215 time series in the RiskMetrics database, which includes foreign
exchange, money market rates, government bonds, commodities, and equities, we have found similar
shapes for the event return distributions. For example, Charts 8 and 9 show event return distributions
for the DEM/USD foreign exchange series and the US S&P 500 equity index for the period April 4,
1990 through March 26, 1996.
Chart 8
Posterior distribution of DEM/USD foreign exchange event returns
X-axis is in units of standardized returns
[Chart not reproduced: bimodal PDF over the range -8.00 to 8.00.]
Note that whereas the DEM/USD event returns have a symmetric bimodal distribution, the S&P 500's
event return distribution has a pronounced mode at -5.0. As Chart 9 on page 22 shows, when events in
the S&P 500 occur, they are more likely to be negative than positive.
Chart 9
Posterior distribution of S&P 500 event returns
X-axis is in units of standardized returns
[Chart not reproduced: PDF over the range -10.00 to 8.00, with a pronounced mode near -5.0.]
Charts 7, 8, and 9 presented the distribution of Δ for three different return series. Similarly, we can estimate the posterior distribution of p, the probability that an event occurs. Chart 10 shows the distribution of the probability of an event occurring for the S&P 500.
Chart 10
Posterior distribution of the probability of an event (p) for the S&P 500
April 4, 1990 through March 26, 1996
[Chart not reproduced: histogram of posterior draws, positively skewed over probabilities of roughly 0.004 to 0.044 and peaking near 0.01.]
Recall that the prior probability of observing an event followed a beta distribution and it was expected
that, on average, an event would occur 2% of the time. However, after using the data to update our priors we see that for the S&P 500, the estimated distribution of p is positively skewed with a peak
around 1%.
RiskMetrics Monitor
Second quarter 1996
page 23
Finally, note that throughout this section we derived probability distributions of the parameters of interest. However, our VaR methodology requires point estimates, not entire distributions. When computing VaR we take the means of the posterior distributions of p, μ_Δ, and σ_Δ as their point estimates.
Conclusions
This article has developed a new methodology to measure VaR that explicitly accounts for fat-tailed distributions. Building on the current RiskMetrics model, VaR under the new methodology takes as inputs the portfolio weights and two sets of parameters: the current RiskMetrics volatilities and correlations, and
estimates of p, μ_Δ, and σ_Δ. This model was developed keeping in mind three requirements: (1) build
a VaR model that continues to use RiskMetrics volatilities and correlations, (2) use the conditional
normal distribution as the baseline model, and (3) keep the number of additional parameter estimates
required to compute VaR to a minimum. The upshot of meeting these criteria is a relatively simple VaR
framework that goes a long way towards modeling large portfolio losses. Furthermore, the new VaR
methodology serves as a basis from which a risk manager can perform structured Monte Carlo.
In closing, we are interested in your feedback related to this proposed methodology. Send the author
E-mail with any comments or questions you may have regarding the contents of this article.
Appendix
Conditional posterior distributions
The conditional posterior distribution of σ_n² is

[A.1]
σ_n² ~ (νλ + s²) χ⁻²_(ν+T)

where

s² = Σ_t (R_t - R̄)²

and

R̄ = sample mean of R_t

At any point in time, the conditional posterior probability of δ_t is given by

[A.2]
Prob(δ_t = 1 | R_t, Δ_t, σ_n, p) = p g_1(R_t) / [p g_1(R_t) + (1 - p) g_0(R_t)]

where

[A.3]
g_1(R_t) = (2π σ_n²)^(-1/2) exp{ -½ [(R_t - Δ_t)/σ_n]² }    if δ_t = 1

[A.4]
g_0(R_t) = (2π σ_n²)^(-1/2) exp{ -½ (R_t/σ_n)² }    if δ_t = 0

Equivalently, by Bayes' rule,

[A.5]
Prob(δ_t = 1 | R_t) = Prob(δ_t = 1) Prob(R_t | δ_t = 1) / [Prob(δ_t = 1) Prob(R_t | δ_t = 1) + Prob(δ_t = 0) Prob(R_t | δ_t = 0)]

If δ_t = 0, there is no information on Δ_t except its prior, so that Δ_t ~ NID(0, σ_Δ²).
If δ_t = 1, we use standard results on the relation between a normal prior and likelihood to
derive the posterior distribution of Δ_t:

[A.6]
Δ_t | (δ_t = 1) ~ NID(Δ_t*, σ*²)
where

Δ_t* = σ_Δ² R_t / (σ_Δ² + σ_n²)

and

σ*² = σ_Δ² σ_n² / (σ_Δ² + σ_n²)

Finally, the conditional posterior distribution of p depends only on δ_t. Let k be the number of 1s in
the T x 1 vector δ = (δ_1, δ_2, …, δ_T); that is, k is the number of events in the time series. Because the
prior of p is Beta(α_1, α_2), the conditional posterior distribution of p is Beta(α_1 + k, α_2 + T - k).
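Taken together, the conditional posteriors above suggest a Gibbs sampler of the following shape. This is a sketch on synthetic data, not the article's own implementation; in particular, the variance draw below uses residuals net of the event component, an implementation choice the appendix does not spell out:

```python
import math
import random

rng = random.Random(3)

def gibbs(R, sweeps=500, alpha1=2.0, alpha2=98.0, nu=3.0, lam=1.0 / 3.0,
          sigma_delta2=100.0):
    """Gibbs sampler for the event model R_t = n_t + delta_t * Delta_t.

    Cycles through the conditional posteriors [A.1]-[A.6] and returns
    posterior-mean point estimates of p and sigma_n^2.
    """
    T = len(R)
    delta = [0] * T
    Delta = [0.0] * T
    p = alpha1 / (alpha1 + alpha2)
    p_draws, s_draws = [], []
    for sweep in range(sweeps):
        # [A.1] sigma_n^2 | rest ~ (nu*lam + s^2) / chi-square(nu + T)
        s2 = sum((r - d * D) ** 2 for r, d, D in zip(R, delta, Delta))
        sig_n2 = (nu * lam + s2) / rng.gammavariate((nu + T) / 2.0, 2.0)
        sig_n = math.sqrt(sig_n2)
        for t in range(T):
            # [A.2]-[A.4]: the (2 pi sig_n^2)^(-1/2) factors cancel in the ratio.
            g1 = math.exp(-0.5 * ((R[t] - Delta[t]) / sig_n) ** 2)
            g0 = math.exp(-0.5 * (R[t] / sig_n) ** 2)
            prob1 = p * g1 / (p * g1 + (1.0 - p) * g0)
            delta[t] = 1 if rng.random() < prob1 else 0
            # [A.6]: prior draw if no event, normal posterior if event.
            if delta[t]:
                post_var = sigma_delta2 * sig_n2 / (sigma_delta2 + sig_n2)
                post_mean = sigma_delta2 * R[t] / (sigma_delta2 + sig_n2)
                Delta[t] = rng.gauss(post_mean, math.sqrt(post_var))
            else:
                Delta[t] = rng.gauss(0.0, math.sqrt(sigma_delta2))
        # Beta posterior for p given the event count k.
        k = sum(delta)
        p = rng.betavariate(alpha1 + k, alpha2 + T - k)
        if sweep >= sweeps // 2:               # discard burn-in
            p_draws.append(p)
            s_draws.append(sig_n2)
    return (sum(p_draws) / len(p_draws), sum(s_draws) / len(s_draws))

# Synthetic standardized returns with a 2% event rate and wide event returns.
data = []
for _ in range(1000):
    r = rng.gauss(0.0, 1.0)
    if rng.random() < 0.02:
        r += rng.gauss(0.0, 6.0)
    data.append(r)
p_hat, sn2_hat = gibbs(data)
print(round(p_hat, 3), round(sn2_hat, 3))
```

On data generated this way, the posterior means land near the true values p = 0.02 and σ_n² = 1.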
In this article, we compute the VaR of a portfolio of foreign exchange flows that consists of the exposures provided in Table A.1 in the Appendix on page 30. In so doing, we underscore the limitations of standard VaR analysis when return distributions deviate significantly from normality. All exposures are assumed to have been converted to U.S. dollar equivalents at the current spot rate.
Briefly, for a given forecast horizon and confidence level, VaR is the maximum expected loss of a portfolio's current value. Based on the standard RiskMetrics methodology, Table 1 reports the portfolio's VaR over different forecast horizons and confidence levels. For example, over the next year, there is a 95% chance that the current portfolio value of USD 2,502MM will not fall by more than USD 129.42MM.
Table 1
Portfolio VaR estimates (USD MM)

Confidence      Time Horizon                            Annual Risk
Interval        1 Quarter   2 Quarters   3 Quarters     Diversified   Undiversified
99%                 91.24       129.03       157.92          182.48          469.57
95%                 64.71        91.51       112.08          129.42          333.03
90%                 50.21        71.01        86.97          100.43          258.43
The RiskMetrics methodology is based on the precept that risks across instruments are not perfectly additive, given the lack of perfect positive correlation. As a result, the total risk in a portfolio of positions is often less than the sum of the instrument risks taken separately. This diversification benefit can be estimated as the difference between VaR assuming all correlations between exposures are 1 (no diversification) and VaR based on estimated correlations (as presented in Table 1). Consider the VaR estimates (95% confidence) on the individual exposures for a one-year horizon. The sum of these numbers is USD 333.03MM (the sum of annual VaR estimates in Table A.1 on page 30). This is equivalent to the VaR estimate of USD 129.42MM reported in Table 1 plus the diversification benefit of USD 203.61MM.
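The diversified/undiversified comparison amounts to evaluating portfolio VaR once with the estimated correlation matrix and once with all correlations set to 1. The three exposures and correlations below are hypothetical, chosen only to illustrate the arithmetic, not taken from Table A.1:

```python
import math

def portfolio_var(individual_vars, corr):
    """Diversified VaR: sqrt(v' C v) over the individual VaR amounts."""
    n = len(individual_vars)
    total = sum(individual_vars[i] * individual_vars[j] * corr[i][j]
                for i in range(n) for j in range(n))
    return math.sqrt(total)

v = [10.0, 20.0, 15.0]                  # individual annual VaRs, USD MM (hypothetical)
corr = [[1.0, 0.4, 0.2],
        [0.4, 1.0, 0.5],
        [0.2, 0.5, 1.0]]

undiversified = sum(v)                  # all correlations forced to 1
diversified = portfolio_var(v, corr)
benefit = undiversified - diversified   # the diversification benefit
```

With correlations below 1, the diversified figure is strictly smaller than the simple sum of the individual VaRs.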
In addition to the portfolio's VaR, we compute the VaR of each foreign exchange exposure for a forecast horizon of one year and a 95% confidence level. Table A.1 reports VaR estimates for the 52 positions. To further help understand the riskiness of the individual foreign exchange positions, Table A.1 also reports the volatility forecasts of foreign exchange returns (i.e., not weighted by the size of the foreign exchange positions).
To obtain the aforementioned risk estimates, we conducted the VaR analysis closely following the RiskMetrics methodology. For each of the 52 time series (Table A.1), we used 86 historical weekly prices for the period 7/15/95 through 4/30/96. Missing observations, due to country-specific holidays, were forecast using the statistical routine known as the EM algorithm. The VaR estimates reported in Tables A.1 and A.2 were computed using the standard RiskMetrics delta valuation methodology.
This requires the computation of volatilities for each exposure and correlations between exposures. Volatilities and correlations were computed using exponentially weighted averages with a decay factor of 0.94, which implies that our volatility and correlation forecasts effectively use 75 historical data points. Recall that exponential weighting is a simple way of capturing the dynamics of returns. Its key feature is that it weights recent data more heavily than data recorded in the distant past.
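The exponentially weighted variance estimator can be sketched as follows (an illustration of the decay-factor idea with a zero-mean assumption, not the exact production formula):

```python
def ewma_volatility(returns, decay=0.94):
    """Exponentially weighted volatility forecast with RiskMetrics-style decay.

    The return k periods ago gets weight (1 - decay) * decay**k, so recent
    observations count far more than distant ones.
    """
    var = 0.0
    weight_sum = 0.0
    for k, r in enumerate(reversed(returns)):   # most recent observation first
        w = (1.0 - decay) * decay ** k
        var += w * r * r                        # squared return, mean taken as zero
        weight_sum += w
    return (var / weight_sum) ** 0.5            # renormalize the truncated weights
```

With a decay of 0.94, the weights on observations more than roughly 75 periods old are negligible, which is the sense in which the forecasts "effectively use 75 historical data points."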
See the RiskMetrics Technical Document for exact formulae.
When computing VaR, RiskMetrics assumes that portfolio returns are conditionally normally distributed. That is, it is assumed that returns divided by their respective standard deviations are normally distributed. It is important to distinguish this assumption from simply assuming that currency returns are normally distributed. Table A.2 on page 32 reports various test statistics to determine the validity of this assumption. All results were computed from normalized returns, that is, returns divided by their standard errors.
RiskMetrics also assumes that returns are independent over time. Finally, to generate VaR estimates
over different time horizons, we simply scale the weekly VaR forecast by the square root of the time
horizon in weeks. For example, since we define 60 weeks to be one year,1 we obtain the one-year VaR
forecast by multiplying the one-week VaR forecast by the square root of 60.
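Under the independence assumption, square-root-of-time scaling is one line of code; the horizon table encodes the 15/30/45/60-week conventions from the footnote:

```python
import math

# Weeks per horizon, using the article's convention of 15 weeks per quarter
# and 60 weeks per year.
WEEKS = {"1 quarter": 15, "2 quarters": 30, "3 quarters": 45, "1 year": 60}

def scale_var(weekly_var, horizon):
    """Scale a one-week VaR to a longer horizon by the square root of time."""
    return weekly_var * math.sqrt(WEEKS[horizon])
```

For example, a weekly diversified VaR of about USD 16.7MM scales to roughly the USD 129.42MM annual figure reported in Table 1 (16.7 × √60 ≈ 129.4).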
Discussion
Table A.2 on page 32 presents evidence that several return series are clearly not conditionally normal. For example, Mexico's skew statistic is 13.71, which is very non-normal. When returns are aggregated into a portfolio, it would not be unreasonable to expect the portfolio's return distribution to become more normal. However, as the sample statistics for the portfolio show in Table A.2, this is not the case. In fact, the non-normality of the portfolio's return is a result of the relatively high weights on returns that are very non-normal (e.g., China has a weight of 74). Fortunately, there are advanced statistical methods that allow us to adjust our VaR estimates to reflect the portfolio's skewness and kurtosis (see RiskMetrics Monitor, 1st quarter, 1996). In the current analysis, we did not adjust the VaR estimates for skewness and kurtosis.
In addition to the general deviations from conditional normality, there is the issue of event risk. For various reasons, events appear as very large returns that occur with only a small probability. While we are currently developing a VaR methodology that allows users to explicitly account for event risk in their VaR calculations, it is not included in this analysis. For a discussion of fat-tailed distributions, see the preceding paper in this edition, "An improved methodology for measuring VaR," on page 7.
One way to measure how sensitive the VaR estimates in Tables A.1 and A.2 are to their underlying assumptions is to compare these values to VaR estimates produced by historical simulation. Under historical simulation, no statistical distribution for returns is assumed. Instead, sample returns over the 85-week historical sample period and the portfolio weights are used to construct the portfolio's profit and loss (P&L) distribution. It is then assumed that this distribution holds in the future. Chart 1 on page 28 shows the P&L distribution of the portfolio for a one-year horizon. For a 95% confidence level, VaR is given by the 5th percentile of this distribution, which is USD 174MM.
1 In this analysis, we use the convention of 5 weeks per month, 15 weeks per quarter, and 60 weeks per year.
Chart 1
Portfolio profit & loss distribution over one year based on historical simulation
VaR = USD 174MM at 95% confidence level
[Histogram omitted: frequency (0 to 14) of one-year P&L outcomes; portfolio value (mm$) on the horizontal axis, spanning roughly −147 to +147.]
Table 2 presents portfolio VaR estimates based on historical simulation for various forecast horizons.
Table A.1 on page 30 reports annual VaR estimates for individual foreign exchange exposures.
Table 2
Portfolio VaR estimates (USD MM)
Historical simulation

Confidence       Time Horizon
interval1        1 Quarter   2 Quarters   3 Quarters   1 Year
95%                  84          119          146        174
90%                  80          113          138        160

1 Due to an insufficient amount of data, we do not report results for the 99% confidence level.
It should be noted that the accuracy of the VaR estimates produced by historical simulation is very dependent upon the sample size. In general, when no statistical distribution is assumed for returns, it is difficult to obtain accurate estimates of the 1st, 5th, and 10th percentiles without a sufficient amount of data.
For example, to find the 5th percentile of the profit and loss (P&L) distribution consisting of 85 data points, we first sort the data and then select the 4th observation of the sorted series. (Actually, the 5th percentile of 85 observations falls at position 4.25, and using the fourth observation is a rough approximation to the true percentile value.) Similarly, the 1st and 10th percentiles are represented by the first and ninth data points of the sorted P&L series, respectively. Table 3 presents the first nine P&L values from the sorted distribution as well as their corresponding percentiles.
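The interpolation described above can be sketched as follows (a simple estimator for interior percentiles, using 1-based order statistics as in the text):

```python
def empirical_percentile(pnl, q):
    """Interpolated q-th percentile of a P&L sample.

    For 85 observations the 5th percentile falls at fractional rank 4.25,
    so the estimate mixes the 4th and 5th sorted values.
    """
    data = sorted(pnl)
    pos = q / 100.0 * len(data)              # fractional rank, e.g. 4.25
    lo = int(pos)                            # lower order statistic (1-based)
    frac = pos - lo
    return (1.0 - frac) * data[lo - 1] + frac * data[lo]
```

With the 4th and 5th sorted P&L values at −177 and −165, the 5th percentile is 0.75 × (−177) + 0.25 × (−165) = −174, i.e., a VaR of USD 174MM.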
Table 3
First nine values of the portfolio's profit and loss (P&L) distribution
One-year forecast horizon

Order    P&L Value    Approximate Percentile
1st        −288         1st
2nd        −279
3rd        −258
4th        −177         5th
5th        −165
6th        −160
7th        −160
8th        −160
9th        −140         10th
Note that the 5th percentile in Table 3 does not match its counterpart in Table 2 because there, the percentile is computed by interpolation as 0.75 × (−177) + 0.25 × (−165) = −174. Nevertheless, we present Table 3 to show how percentile estimates are very sensitive to the number of data points used to construct the P&L distribution. More specifically, by computing confidence intervals for the estimated percentiles, we find that there is a 20% chance that the estimated 5th percentile will be less than −279 or greater than −160; there is only a 67% chance that the 5th percentile lies between −258 and −165. The fact that it is difficult to get robust estimates of the percentiles in the above analysis is one reason for the differences between RiskMetrics VaR and VaR according to historical simulation. Another reason for the difference is related to the relative data weighting schemes used by the two methodologies. In historical simulation, all occurrences have equal weights. Under the standard RiskMetrics approach, market movements are exponentially weighted.
Appendix

Table A.1
Portfolio composition and VaR1

Country             Weight   Annual Volatility   VaR (RiskMetrics)   VaR (Hist. Simulation)

OECD
Australia              35         12.361               4.330               4.340
Austria                66         22.565              14.890              13.360
Belgium                32         16.592               5.310               6.220
Denmark                59         15.850               9.350              11.070
France                 28         15.616               4.370               5.280
Germany                37         16.847               6.230               7.370
Greece                 80         15.220              12.180              14.460
Holland                30         16.738               5.020               5.880
Italy                  82         11.190               9.180              12.470
New Zealand            98         10.948              10.730               7.360
Portugal               28         15.208               4.260               5.270
Spain                  48         14.843               7.120               8.120
Turkey                 68         30.132              20.490              18.450
UK                     81         11.641               9.430              14.170
Switzerland            41         19.810               8.120               8.330

(illegible)             ·          6.637               0.400               0.680
Chile                   ·         10.770               0.430               0.520
Colombia               83         13.462              11.170               7.540
Costa Rica             53          4.353               2.310               1.460
Dominican Rep.         77          8.974               6.910              12.630
El Salvador            29          0.637               0.180               0.260
Equador                51         12.132               6.190               8.370
Guatemala              90         10.521               9.470              10.870
Honduras               10         11.272               1.130               1.840
Jamaica                64         15.298               9.790               8.000
Mexico                 99         33.727              33.390              53.560
Nicaragua               ·          4.030               0.360               0.160
Peru                   27         10.536               2.840               1.870
Trinidad               12         16.731               2.010               2.020
Uruguay                74          7.491               5.540               8.320

ASEAN
Malaysia               54          5.065               2.730               2.250
Philippines            79          5.521               4.360               9.850
Thailand               34          2.813               0.960               0.950

Fiji                   84          5.651               4.750               4.980
Hong Kong              79          0.346               0.270               0.360
Reunion Island         41         15.614               6.400               7.730
Malawi                 55         18.211              10.020              19.350
South Africa           85          7.953               6.760               8.120
Zambia                 11         19.112               2.100               3.130
Zimbabwe               57          6.968               3.970               5.100
Ivory Coast            53         15.607               8.270               9.990
Uganda                  ·         18.700               0.370               0.300

Others
Czech Repub.           64         12.784               8.180               8.300
Hungary                83         12.984              10.780              11.560
India                  94         17.787              16.720              13.960
Romania                43         25.866              11.120               4.520
Russia                 82         14.730              12.080              20.030
China                   ·          3.964               0.040               0.020

Total               2,502

1 Countries are grouped by major economic groupings as defined in Political Handbook of the World: 1995–1996. New York: CSA Publishing, State University of New York, 1996. Countries not formally part of an economic group are listed in their respective geographic areas.
Table A.2
Testing for conditional normality1
Normalized return series; 85 total observations

                                                                    Tail Prob. (%)8      Tail Value9
Country          Skew.2    Kurt.3     SW4    BL(18)5   Mean6  Std.Dev.7  <−1.65  >+1.65   <−1.65  >+1.65

OECD
Australia         0.314     3.397   0.958     6.105    0.120    0.943    2.900    5.700   −2.586   2.306
Austria           0.369     0.673   0.961    13.517    0.085    1.037    8.600    5.700   −1.975   2.499
Belgium           0.157     2.961   0.943    19.172    0.089    0.866    8.600    2.900   −1.859   2.493
Denmark           0.650     4.399   0.932    17.510    0.077    0.903   11.400    2.900   −1.915   2.576
France            0.068     3.557   0.950    17.642    0.063    0.969    8.600    2.900   −2.140   2.852
Germany           0.096     4.453   0.937    18.064    0.085    0.872    5.700    2.900   −1.821   2.703
Greece            0.098     2.259   0.940    15.678    0.154    0.943   11.400    2.900   −1.971   2.658
Holland           0.067     4.567   0.939    18.360    0.086    0.865    5.700    2.900   −1.834   2.671
Italy             0.480     0.019   0.984     7.661    0.101    0.763    2.900       ·    −1.853      ·
New Zealand       1.746     7.829   0.963     8.808    0.068    1.075    2.900    2.900   −2.739   3.633
Portugal          1.747     0.533   0.947    21.201    0.062    0.889   11.400    2.900   −1.909   2.188
Spain             6.995     1.680   0.935    14.062    0.044    0.957    8.600    2.900   −2.293   1.845
Turkey           30.566   118.749   0.865     2.408    0.761    1.162   11.400       ·    −2.944      ·
UK                7.035     2.762   0.936    11.711    0.137    0.955    8.600    2.900   −2.516   1.811
Switzerland       0.009     0.001   0.992     6.376    0.001    0.995    2.900    5.700   −2.415   2.110

(illegible)       0.880     1.549   0.976    10.900    0.224    0.282       ·        ·        ·       ·
Chile             1.049     0.512   0.983    11.035    0.291    0.904    8.600       ·    −2.057      ·
Colombia          2.010     4.231   0.927     4.041    0.536    1.289   11.400    2.900   −3.305   2.958
Costa Rica        0.093    33.360   0.878    19.893    0.865    0.425    5.700       ·    −2.011      ·
Dominican Rep.    0.026    41.011   0.872    10.796    0.050    1.183    5.700    5.700   −3.053   3.013
El Salvador       2.708    49.717   0.672     9.626    0.014    0.504    2.900       ·    −1.776      ·
Equador           0.002    50.097   0.852    10.463    0.085    1.162    5.700    5.700   −3.053   3.013
Guatemala         0.026     1.946   0.959    12.276    0.280    1.036    8.600    5.700   −2.365   2.237
Honduras         42.420    77.277   0.705     5.794    0.575    1.415   14.300       ·    −3.529      ·
Jamaica          81.596   451.212   0.674     6.030    0.301    1.137    2.900    2.900   −6.163   1.869
Mexico           13.71     30.237   0.930    15.156    0.158    0.597    2.900       ·    −2.500      ·
Nicaragua         0.051     2.847   0.977   132.183    0.508    0.117    0.000    0.000       ·       ·
Peru            122.807   672.453   0.560     2.713    0.278    1.365    5.700       ·    −5.069      ·
Trinidad          0.813     0.339   0.980    10.271    0.146    1.063    8.600   11.400   −2.171   1.915
Uruguay           0.724     0.106   0.989     9.464    0.625    0.371       ·        ·        ·       ·

ASEAN
Malaysia          1.495     0.265   0.977    28.815    0.318    0.926    8.600       ·    −2.366      ·
Philippines       1.654     0.494   0.975    22.944    0.082    0.393       ·        ·        ·       ·
Thailand          0.077     0.069   0.987    10.099    0.269    0.936    8.600    2.900   −2.184   1.955

Fiji              4.073     6.471   0.965     6.752    0.129    0.868    2.900    2.900   −3.102   1.737
Hong Kong         5.360    29.084   0.906    12.522    0.032    1.001    5.700    5.700   −2.233   2.726
Reunion Island    0.068     3.558   0.950    17.641    0.063    0.969    8.600    2.900   −2.140   2.853
Malawi            9.454        ·    0.870    14.143    0.001    0.250       ·        ·        ·       ·
South Africa     34.464    58.844   0.837     7.925    0.333    1.555    8.600       ·    −4.480      ·
Zambia           22.686    39.073   0.891     9.462    0.007    0.011       ·        ·        ·       ·
Zimbabwe         20.831    29.234   0.895     9.142    0.487    0.762    5.700       ·    −2.682      ·
Ivory Coast       0.068     3.564   0.950    17.643    0.064    0.970    8.600    2.900   −2.144   2.857
Uganda           40.815    80.115   0.767     9.629    0.203    1.399    8.600    2.900   −4.092   1.953

Others
Czech Repub.      0.167    12.516   0.937     2.761    0.108    0.824    5.700    2.900   −2.088   2.619
Hungary           1.961     0.006   0.984     8.054    0.342    0.741    5.700       ·    −2.135      ·
India            80.314   567.012   0.557     5.268    0.107    1.521    2.900    2.900   −3.616   8.092
Romania          89.973   452.501   0.726     5.187    1.249    1.721   14.300       ·    −4.078      ·
Russia            0.248     2.819   0.959     5.061    0.120    0.369       ·        ·        ·       ·
China             5.633     3.622   0.950     9.683    0.462    1.336   17.100    5.700   −2.715   1.980

Portfolio        21.010    11.057   0.951    11.940    0.340    0.926    9.200       ·    −2.451      ·
Normal            0.000     3.000   1.000   <18.000    0.000    1.000    5.000    5.000   −2.067   2.067

1 Countries are grouped by major economic groupings as defined in Political Handbook of the World: 1995–1996. New York: CSA Publishing, State University of New York, 1996. Countries not formally part of an economic group are listed in their respective geographic areas.
2 If returns are conditionally normal, the skewness value is zero.
3 If returns are conditionally normal, the kurtosis value is three.
4 If returns are conditionally normal, this value is one. (SW stands for the Shapiro-Wilks test.)
5 If there is autocorrelation in the time series, this value is greater than 18.31. (BL stands for the Box-Ljung test statistic.)
6 Sample mean of the return series.
7 Sample standard deviation of the normalized return series.
8 Tail probabilities give the observed probabilities of normalized returns falling below −1.65 and above +1.65. Under conditional normality, these values are 5%.
9 Tail values give the observed average value of normalized returns falling below −1.65 and above +1.65. Under conditional normality, these values are −2.067 and +2.067, respectively.
Estimating index tracking error for equity portfolios
In the RiskMetrics Technical Document, we outlined a single-index equity VaR approach to estimate the systematic market risk of equity portfolios. In this paper, we discuss the principal variables influencing the process of portfolio diversification, and suggest an approach to quantifying expected tracking error to market indices.1

For a single equity position, VaR is estimated as

[1]   VaR_S = MV_S × 1.65 σ_R_S

Since RiskMetrics does not publish volatility estimates for the universe of international stocks, equity positions are mapped to their respective local indices. This methodology maps the return of a stock to the return of a stock (market) index in order to forecast the correlation structure between securities. Let the return of a stock, R_S, be defined as

[2]   R_S = β_S R_M + α_S + ε_S

where β_S is the stock's beta with respect to the market index,2 α_S is the stock-specific expected return, and ε_S is the residual firm-specific component, with E(ε_S) = 0. Dropping the constant, the stock return can be written

[3]   R_S = β_S R_M + ε_S

Since the firm-specific component can be diversified away by increasing the number of different equities that comprise a given portfolio, the market risk, VaR_S, of the stock can be expressed as a function of the stock index:

[4]   R_S = β_S R_M

1 This paper is an addendum to the RiskMetrics Technical Document. (3rd ed.) New York, May 1995, Section C: Mapping to describe positions, pp. 107–156.
2 A number of equity analytics firms provide estimates of stock betas across a large number of markets.
[5]   VaR_S = MV_S × β_S × 1.65 σ_R_M

where

1.65 σ_R_M = RiskMetrics volatility estimate for the appropriate stock index.

For a portfolio of M stocks mapped to the same index, the systematic VaR estimates add linearly:

[6]   VaR_p = 1.65 σ_R_M × Σ_{i=1}^{M} MV_S_i β_S_i
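Equations [5] and [6] can be sketched as follows (hypothetical positions; the 1.65 σ index volatility is supplied by the user, and Stock C's beta of 1.0 is implied by its reported systematic risk in Example 1 below):

```python
def stock_var(market_value, beta, index_vol_165):
    """Eq. [5]: systematic VaR of one stock mapped to its index."""
    return market_value * beta * index_vol_165

def portfolio_systematic_var(positions, index_vol_165):
    """Eq. [6]: within one index, systematic VaRs add linearly."""
    return index_vol_165 * sum(mv * beta for mv, beta in positions)

# Four USD 25MM positions with betas 0.5, 1.5, 1.0, 0.8 and a 1.65-sigma
# index volatility of 5.72%, echoing Example 1 below.
positions = [(25.0, 0.5), (25.0, 1.5), (25.0, 1.0), (25.0, 0.8)]
systematic = portfolio_systematic_var(positions, 0.0572)   # about USD 5.43MM
```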
[Chart omitted: portfolio risk (y-axis, 0.0 to 2.5) versus the number of securities in the portfolio (x-axis, 1 to 700).]
3 Edwin J. Elton and Martin J. Gruber. Modern Portfolio Theory and Investment Analysis. (4th ed.) New York: John Wiley and Sons, Inc., 1991. p. 33.
In Modern Portfolio Theory and Investment Analysis, Elton and Gruber derive a formula that illustrates
the process of diversification with respect to the number of different stocks in a portfolio.
The variance of a portfolio of N stocks is

[7]   σ_P² = Σ_{J=1}^{N} X_J² σ_J² + Σ_{J=1}^{N} Σ_{K=1, K≠J}^{N} X_J X_K σ_JK

where

X_J = proportion of the portfolio invested in stock J
σ_J² = variance of stock J
σ_JK = covariance between the returns of stocks J and K
For an equally weighted portfolio of N assets (i.e., the proportion held in each security is X_J = 1/N), the formula for portfolio variance becomes

σ_P² = (1/N) σ̄_J² + ((N − 1)/N) σ̄_JK = (1/N)(σ̄_J² − σ̄_JK) + σ̄_JK

where

σ̄_J² = average variance of the individual securities
σ̄_JK = average covariance between securities
The first term of the equation (1/N times the difference between the average variance of individual securities and the average covariance) corresponds to the residual firm-specific risk of a portfolio. The second term (the average covariance) represents the undiversifiable market risk component. Now we can see that as the number of stocks in a portfolio increases, the firm-specific component declines to zero, and we are left only with undiversifiable market risk. Therefore, the variance of a broad market index, such as the S&P 500, should approximate the average covariance of stock returns (σ̄_JK ≈ σ²_R_M). Substituting the market index variance for the average covariance yields

[8]   σ_P² = (1/N)(σ̄_J² − σ²_R_M) + σ²_R_M
Applications
Elton and Gruber's derivation clarifies the process of diversification and underlines the key variables of portfolio risk: (a) average variance, (b) average covariance, and (c) the number of elements in a portfolio. Using these key variables, we can compare the effect of diversification across different equity markets. For example, Elton and Gruber show that significantly more risk is diversifiable in the Netherlands or Belgium (76% and 80%, respectively) than in the more correlated equity markets of Germany and Switzerland (56%). In general, significant risk reduction through diversification is possible when the average covariance of a population is small compared to the average variance.
Elton and Gruber's derivation could be applied for estimating the tracking error to a broad market index
when a portfolio is not fully diversified. For example, we could calculate a Diversification Scaling Factor (i.e., proportion of expected total risk to systematic risk) to estimate the incremental risk given the
number of elements within a portfolio.
[9]   Diversification Scaling Factor = σ_P / σ_R_M = sqrt[(1/N)(σ̄_J² / σ²_R_M − 1) + 1]
Using this formula, RiskMetrics users could adjust their equity VaR estimate upward to reflect firm-specific risk:

Adjusted VaR_p = Diversification Scaling Factor × VaR_p
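Equation [9] and the adjustment can be sketched as follows (raw, unscaled volatilities as inputs; the figures reproduce the 155% factor derived in Example 1 below):

```python
import math

def diversification_scaling_factor(n, avg_stock_vol, index_vol):
    """Eq. [9]: ratio of expected total risk to systematic risk."""
    return math.sqrt((avg_stock_vol ** 2 / index_vol ** 2 - 1.0) / n + 1.0)

# 4 stocks, 8.9% average stock volatility, 3.46% index volatility.
dsf = diversification_scaling_factor(4, 0.089, 0.0346)   # about 1.55, i.e. 155%
adjusted_var = dsf * 5.43      # scales up a USD 5.43MM systematic VaR
```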
The potential applications of this technique are broad. For example, the diversification scaling factor could be used as a back-of-the-envelope estimate of how much residual risk to expect in a stock portfolio, given the number of different stocks held. The advantage of Elton and Gruber's derivation lies in its simplicity. Using basic variables (the number of securities in a portfolio and the proportion of firm-specific to diversified risk), one can get an estimate of index tracking error.
Practical considerations
To estimate the average volatility of stocks within an index, one can take either an evenly weighted or
a market-cap weighted volatility of each component security.4 Depending on client demand and resource availability, J.P. Morgan could potentially include this volatility estimate in future releases of
the RiskMetrics data set. Its integration into the RiskMetrics data set would be relatively straightforward because the overall correlation matrix would be unaffected (we assume that firm specific risk
is independent).
4 Component securities of broad market indices are available from a number of sources, for example, Bloomberg or BARRA.
Example 1
Consider a portfolio consisting of four U.S. stocks, with market values of USD 25MM each, a one
month standard deviation for the S&P500 Index of 3.46%, and an average standard deviation of stocks
within the S&P 500 of 8.9%.
Example summary
Average equity volatility (1.65 σ):      14.69%
Diversified index volatility (1.65 σ):    5.72%
Number of securities:                     4

Security    Market Value    Beta    Systematic Risk
Stock A        $25.00        0.5        $0.71
Stock B        $25.00        1.5        $2.14
Stock C        $25.00        1.0        $1.43
Stock D        $25.00        0.8        $1.14
Total systematic risk                   $5.43
Diversification scaling factor           155%
Total VaR                               $8.41

The systematic risk of the portfolio is

VaR_p = 1.65 σ_R_M × Σ_{i=1}^{4} MV_S_i β_S_i
      = 1.65 σ_S&P500 × [MV_A β_A + MV_B β_B + MV_C β_C + MV_D β_D]
The scaling factor follows from equation [9]:

DSF_p = sqrt[(1/N)(σ̄_J² / σ²_S&P500 − 1) + 1]
      = sqrt[(1/4)((8.9%)² / (3.46%)² − 1) + 1]
      = 155%

where systematic risk is aggregated as 1.65 σ_R_M × Σ_{i=1}^{N} MV_S_i β_S_i and residual risk as 1.65 × sqrt[Σ_{i=1}^{N} MV_S_i² (σ̄_J² − β_S_i² σ²_R_M)]. Market risk for individual stocks is aggregated linearly (correlation = 1), while residual risk is aggregated assuming independence (i.e., as the square root of the sum of the squares).
Example 2
Consider the same parameters outlined in Example 1, except that we hold different proportions of the
same stocks in this portfolio.
Example summary
Average equity volatility (1.65 σ):      14.69%
Diversified index volatility (1.65 σ):    5.72%
Number of securities:                     4

Security    Market Value    Beta    Systematic Risk    Residual Risk
Stock A        $10.00        0.5        $0.29              $1.44
Stock B        $20.00        1.5        $1.71              $2.39
Stock C        $30.00        1.0        $1.71              $4.06
Stock D        $40.00        0.8        $1.83              $5.58
Total                                   $5.54              $7.44
Total VaR                                                  $9.28

The systematic risk of the portfolio is

VaR_p = 1.65 σ_R_M × Σ_{i=1}^{4} MV_S_i β_S_i
      = 1.65 σ_S&P500 × [MV_A β_A + MV_B β_B + MV_C β_C + MV_D β_D]
The residual risk is

Residual VaR = 1.65 × sqrt[Σ_{i=1}^{4} MV_S_i² (σ̄_J² − β_S_i² σ²_R_M)]
 = 1.65 × sqrt[MV_A²(σ̄_J² − β_A² σ²_R_M) + MV_B²(σ̄_J² − β_B² σ²_R_M) + MV_C²(σ̄_J² − β_C² σ²_R_M) + MV_D²(σ̄_J² − β_D² σ²_R_M)]
 = 1.65 × sqrt[10²(8.9² − 0.5²(3.46)²) + 20²(8.9² − 1.5²(3.46)²) + 30²(8.9² − 1²(3.46)²) + 40²(8.9² − 0.8²(3.46)²)]
 = USD 7.44MM

Combining the two components gives the total VaR:

VaR_p = sqrt[(5.54)² + (7.44)²]
      = USD 9.28MM
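The two-step aggregation in Examples 1 and 2 (systematic risk added linearly, residual risk combined in quadrature) can be sketched as follows; the 1.65 multiplier is applied inside the function, and Stock C's beta is taken as 1.0, implied by its reported systematic risk:

```python
import math

def total_equity_var(positions, avg_stock_vol, index_vol):
    """Combine systematic risk (linear sum) and residual risk (quadrature)."""
    systematic = 1.65 * index_vol * sum(mv * b for mv, b in positions)
    residual = 1.65 * math.sqrt(sum(mv ** 2 * (avg_stock_vol ** 2 - b ** 2 * index_vol ** 2)
                                    for mv, b in positions))
    return math.sqrt(systematic ** 2 + residual ** 2)

# Example 2's positions (USD MM, beta), with 8.9% average stock volatility
# and 3.46% index volatility.
positions = [(10.0, 0.5), (20.0, 1.5), (30.0, 1.0), (40.0, 0.8)]
total = total_equity_var(positions, 0.089, 0.0346)   # about USD 9.28MM
```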
Multi-Market Portfolio
VaR for a portfolio consisting of equities from several different markets follows the same methodology of aggregating market and residual risk. The difference is that the correlations between the different market indices are incorporated, as well as FX risk.
A look at two methodologies that use a basic delta-gamma parametric VaR precept but achieve
results similar to simulation.
RiskMetrics products

North America: New York, Chicago, Mexico, San Francisco, Toronto
Europe: London, Brussels, Paris, Frankfurt, Milan, Madrid, Zurich
Asia: Singapore, Tokyo, Hong Kong, Australia
RiskMetrics is based on, but differs significantly from, the market risk management systems developed by J.P. Morgan for its own use. J.P. Morgan does not warrant any results
obtained from use of the RiskMetrics data, methodology, documentation or any information derived from the data (collectively the Data) and does not guarantee its sequence,
timeliness, accuracy, completeness or continued availability. The Data is calculated on the basis of historical observations and should not be relied upon to predict future market
movements. The Data is meant to be used with systems developed by third parties. J.P. Morgan does not guarantee the accuracy or quality of such systems.
Additional information is available upon request. Information herein is believed to be reliable, but J.P. Morgan does not warrant its completeness or accuracy. Opinions and estimates constitute our judgement and are
subject to change without notice. Past performance is not indicative of future results. This material is not intended as an offer or solicitation for the purchase or sale of any financial instrument. J.P. Morgan may hold a
position or act as market maker in the financial instruments of any issuer discussed herein or act as advisor or lender to such issuer. Morgan Guaranty Trust Company is a member of FDIC and SFA. Copyright 1996 J.P.
Morgan & Co. Incorporated. Clients should contact analysts at and execute transactions through a J.P. Morgan entity in their home jurisdiction unless governing law permits otherwise.