J.P. Morgan/Reuters
RiskMetrics News
Methodology
J.P. Morgan will continue to develop the RiskMetrics set of VaR methodologies and publish them in the quarterly RiskMetrics Monitor and in the annual RiskMetrics—Technical Document.
The current RiskMetrics data sets will continue to be available on the Internet and will be further improved as a benchmark tool designed to broaden the understanding of the principles of market risk measurement.
When J.P. Morgan first launched RiskMetrics in October 1994, the objective was to go for broad market coverage initially, and follow up with more granularity in terms of the markets and instruments covered. This, over time, would reduce the need for proxies and would provide additional data to measure more accurately the risk associated with nonlinear instruments.
The partnership will address these new markets and products and will also introduce a new customizable service, which will be available over the ReutersWeb service. The customizable RiskMetrics approach will give risk managers the ability to scale data to meet the needs of their individual trading profiles. Its capabilities will range from providing customized covariance matrices needed to run VaR calculations, to supplying data for historical simulation and stress-testing scenarios.
More details on these plans will be discussed in later editions of the RiskMetrics Monitor.
Systems
Both J.P. Morgan and Reuters, through its Sailfish subsidiary, have developed client-site RiskMetrics VaR applications. These products, together with the expanding suite of third-party applications, will continue to provide RiskMetrics implementations.
RiskMetrics Monitor
Second quarter 1996
page 3
FourFifteen is a practical tool for measuring the market risk in complex portfolios of financial instruments, and runs on both Windows and Macintosh platforms.
J.P. Morgan designed FourFifteen to use the unique database of volatilities and correlations, which is updated daily and needed to generate key market risk information. FourFifteen performs VaR analyses of portfolios containing financial products including fixed income, foreign exchange, equities, and their derivatives in 23 currencies. Users can specify the base currency, horizon, and risk threshold.
FourFifteen also provides an array of standard reports to present risk information in a clear and useful manner. Users with more specific reporting needs can create customized reports to aid in the identification of their key sources of risk.
Why FourFifteen? FourFifteen is named after J.P. Morgan’s market risk report produced at 4:15
p.m. each day. The “4:15 Report”, a single sheet of paper, summarizes the daily earnings at risk for
J.P. Morgan worldwide.
FourFifteen allows for a more informed perspective on a wide range of risk management issues in the areas of trading management, benchmark performance evaluation, asset/liability management, resource allocation, and regulatory reporting. It is a useful aid to risk analysis and decision-making at micro, macro, and strategic levels. The system's inherent flexibility delivers risk analysis tailored to various users, from senior executives to financial analysts.
For more information on FourFifteen, contact your local J.P. Morgan representative.
• microcomp GmbH
Robert-Heuser-Strasse 15, 50968 Koeln, Germany
Peter Jumpertz, (49-221) 937 08020, Fax (49-221) 937 08015,
100276.3233@compuserve.com
ValueRisk is a front and middle office tool that can be easily connected to position keeping systems and other back office applications. It helps to manage portfolios of all kinds of financial products, including fixed income, equities, commodities, foreign exchange, and derivatives. The system is targeted to the individual trader, fund manager, or treasurer, and to the manager of a trading or treasury department as well.
ValueRisk introduces “Risk Percentage,” a common measurement for risk in different kinds of financial markets. Risk is measured as a percentage relative to a fully leveraged position. It is based on the accepted return-on-capital concept at mark-to-market prices. The concept is a basis for clear and communicable capital allocation decisions. It helps to implement well-defined diversification strategies, and limits the loss of a risk-taking unit. The system also serves as the basis for performance measurement.
ValueRisk monitors price series and cross-correlations of virtually all traded financial products. The system calculates volatility and determines the total risk level as well as the stop-out probability based on historical cross-correlations of all products for various portfolios. Accounting for unprecedented market dynamics, the system offers quick and powerful simulation capabilities and a stress testing function. It is fully compliant with the latest BIS (Basel Committee) and BAK (Bundesaufsichtsamt für das Kreditwesen) risk controlling requirements.
• MidasKapiti International
45 Broadway, New York, N.Y. 10006
Abby Friedman, (1-212) 898 9500, Fax (1-212) 898 9510
TMark trader analytics system provides comprehensive high-speed pricing, portfolio analysis, and hedging facilities to support trading of off-balance-sheet instruments. Recently introduced Release 2.4 supports the J.P. Morgan RiskMetrics VaR methodology. Specific features of Release 2.4 include multiuser processing, deal ticket generation, an increased number of currencies and rates, and revaluation of cash flows from basis swaps. TMark is a PC-based system and runs under Microsoft Windows.
Value & Risk calculates Value-at-Risk with several selectable methodologies, such as covariance methodology, quantile simulation, user-defined/stress test simulation, worst case, principal component analysis, Monte Carlo, historical simulation, and RiskMetrics. It supports various distributions and manifold position-equivalent calculations as well as stochastic and nonstochastic hedge proposals. It also offers substantial drill-down and reporting functionality, including numerous graphics.
Value & Risk cooperates with SAS Institute and is able to implement complete solutions, from data integration to decision support, for traders, managers, and controllers in a multiuser environment.
Besides supplying the above-mentioned methodology, Value & Risk offers the standard method for capital requirements according to the Basel Committee and CAD. All markets and their instruments are covered, including a variety of exotic options.
For the current list of software vendors that support RiskMetrics, please refer to the J.P. Morgan Web
page at:
http://www.jpmorgan.com/MarketDataInd/RiskMetrics/Third_party_directory.html
However, continuous compounding, i.e., price = exp(−rt), should have been used instead, given the yield calculation method of the term structure model published in RiskMetrics—Technical Document. We will switch to continuous compounding for government zeros beginning June 17th.
The effect of this change on estimated VaR varies according to maturity interval and a currency's yield levels. Chart 1 shows that the average price volatility for most currencies will increase an average of 5 to 8%. Higher-yielding currencies, for example, the Lira or Peseta, will increase an average of 10 to 12%. On the other hand, the volatility for low-yielding currencies, for example, the Yen, will increase an average of 2 to 5%.
The increase in average price volatility, however, does not translate into a direct, corresponding increase in VaR. The change in VaR is also a function of the time to maturity. The reduction in a cash flow’s present value using continuous versus discrete compounding is greater the longer the time to maturity. Thus, a nearer government interval, for example, 2 years, will have more of an increase in VaR than a farther one. This is because the reduction in present value has a dampening effect vis-à-vis the increase in price volatility. This is particularly noticeable when looking at the overall effect on VaR of high- versus low-yielding currencies. Chart 2 on page 6 shows the change in VaR for the 30-year Lira and the 20-year Yen. In fact, the reduction in present value for the 30-year Lira more than offsets the increase in its price volatility, and the revised VaR estimate is actually less than the current estimate. Chart 2 also compares the change in present value of the 20-year Yen versus the 30-year Lira.
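The interplay between the present-value reduction and the volatility increase can be checked with a quick sketch. The yields below are illustrative, not RiskMetrics data:

```python
import math

def zero_price_discrete(y, t):
    """Price of a zero-coupon bond under annual (discrete) compounding."""
    return (1.0 + y) ** (-t)

def zero_price_continuous(y, t):
    """Price under continuous compounding, price = exp(-y * t)."""
    return math.exp(-y * t)

# Present-value reduction from switching to continuous compounding
# grows with yield and with time to maturity:
for label, y, t in [("low-yield, 20-year", 0.03, 20), ("high-yield, 30-year", 0.10, 30)]:
    pd_, pc = zero_price_discrete(y, t), zero_price_continuous(y, t)
    print(f"{label}: PV reduction = {1 - pc / pd_:.1%}")
```

For these inputs the reduction is under 1% for the low-yield 20-year zero but roughly 13% for the high-yield 30-year zero, which is why the revised long-maturity Lira VaR can fall even though its price volatility rises.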
Chart 1
Changes in average volatility estimates
RiskMetrics data, Jan-92 to Feb-96
[Bar chart: change in average price volatility, 0.0% to 15.0%, by currency: USD, JPY, FRF, NZD, LIR, DEM, ESP, DKK, CAD.]
Chart 2
Change in average estimated VaR
RiskMetrics data, Jan-92 to Feb-96
[Bar chart: change in average estimated VaR, −10.0% to 15.0%, by currency: USD, JPY, FRF, NZD, LIR, DEM, ESP, DKK, CAD.]
Peter Zangari
Morgan Guaranty Trust Company
Risk Management Advisory
(1-212) 648-8641
zangari_p@jpmorgan.com

Since its release in October 1994, RiskMetrics has inspired an important discussion on VaR methodologies. A focal point of this discussion has been a RiskMetrics assumption that returns follow a conditional normal distribution. Since the distributions of many observed financial return series have tails that are “fatter” than those implied by conditional normality, risk managers may underestimate the risk of their positions if they assume returns follow a conditional normal distribution. In other words, large financial returns are observed to occur more frequently than predicted by the conditional normal distribution. Therefore, it is important to be able to modify the current RiskMetrics model to account for the possibility of such large returns.
The purpose of this article is to describe a RiskMetrics VaR methodology that allows for a more realistic model of financial return tail distributions. The article is organized as follows: the first section reviews the fundamental assumptions behind the current RiskMetrics calculations, in particular, the assumption that returns follow a conditional normal distribution. The second section (“A new VaR methodology” on page 10) introduces a simple model that allows us to incorporate fatter-tailed distributions. The third section (“A statistical model for estimating return distributions and probabilities” on page 18) shows how we estimate the unknown parameters of this model so that they can be used in conjunction with current RiskMetrics volatilities and correlations. The fourth section (on page 23) is the conclusion to this article.
Before we can build on the current RiskMetrics methodology, it is important to understand exactly what RiskMetrics assumes about the distribution of financial returns. RiskMetrics assumes that returns follow a conditional normal distribution. This means that while returns themselves are not normal, returns divided by their respective forecasted standard deviations are normally distributed with mean 0 and variance 1. For example, let rt denote the time t return, i.e., the return on an asset over a one-day period. Further, let σt denote the forecast of the standard deviation of returns for time t based on historical data. It then follows from our assumptions that while rt is not necessarily normal, the standardized return, rt ⁄ σt, is normally distributed. The distinction between these two types of returns is that, unlike conditional (standardized) returns, unconditional returns have fat tails.¹ Moreover, time-varying, persistent volatility, which is a feature of the conditional assumption, is a real phenomenon.
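Concretely, the standardization divides each return by the RiskMetrics one-day volatility forecast, an exponentially weighted moving average of squared returns with decay factor λ = 0.94. A minimal sketch (the seeding of the recursion with the sample variance is an illustrative choice):

```python
import math

LAMBDA = 0.94  # RiskMetrics daily decay factor

def standardize(returns, sigma0_sq=None):
    """Standardize returns by one-day EWMA volatility forecasts.

    sigma_t^2 = LAMBDA * sigma_{t-1}^2 + (1 - LAMBDA) * r_{t-1}^2,
    so each forecast uses only information available before time t.
    """
    if sigma0_sq is None:
        # Illustrative seed: the full-sample mean of squared returns.
        sigma0_sq = sum(r * r for r in returns) / len(returns)
    std, var = [], sigma0_sq
    for r in returns:
        std.append(r / math.sqrt(var))       # r_t / sigma_t
        var = LAMBDA * var + (1 - LAMBDA) * r * r
    return std
```

A 3% move after a long quiet period standardizes to a very large value, while the same move during a volatile period standardizes to roughly 1 — exactly the “conditional event” distinction discussed below.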
¹ See Darryl Hendricks, “Evaluation of Value-at-Risk Models Using Historical Data,” FRBNY Economic Policy Review, April 1996.
Extending the RiskMetrics VaR framework to assign a larger probability to large returns begins with an understanding of the dynamics of market returns. Chart 1 illustrates such dynamics by showing a time series of returns for the Nikkei 225 index over the period April 1990 through April 1996.
Chart 1
Nikkei index returns, r(t)
Returns (in percent) for April 1990–April 1996
[Time series chart of Nikkei index returns, −10 to 10 percent, March 1990 to February 1997, with annotated periods of high and low volatility.]
In addition to the typical feature of volatility clustering (i.e., periods of high and low volatility), Chart 1
displays large returns that appear to be inconsistent with the remainder of the data series. To show how
the observed returns, r(t), differ from their standardized counterparts, r(t)/σt, the standardized returns
for the Nikkei index are presented in Chart 2 on page 8. (To create the standard errors for scaling the
returns, we used the current RiskMetrics daily forecasting methods.)
Chart 2
Standardized returns (rt ⁄ σt) of the Nikkei index
April 4, 1990–March 26, 1996
[Time series chart of standardized Nikkei returns, −10 to 8, March 1990 to February 1997; one large negative observation is labeled “Conditional event.”]
Comparing Charts 1 and 2, note the large negative standardized return (called “Conditional event” in Chart 2) and that it results in part from the low level of volatility that preceded the corresponding observed return (Chart 1). We can interpret such large conditional non-normal returns as “surprises” because immediately prior to the observed return there was a period of low return volatility. Hence, the conditional event return is unexpected.
Conversely, observed returns that appear large may no longer do so when standardized because their
values were “expected.” In other words, these large returns occur in periods of relatively high volatility.
This scenario is demonstrated in Chart 3, which shows an observed time series of spot gold contract
returns and their standardized values.
Chart 3
Observed spot gold returns (A) and standardized returns (B)
April 4, 1990–March 26, 1996
[Two-panel time series chart, March 1990 to February 1997: (A) observed gold returns, −8 to 4; (B) standardized gold returns, −8 to 6.]
Chart 3 demonstrates the effect of standardization on return magnitudes. For example, observed returns (A) show more volatility clustering relative to their standardized counterparts (B).
To summarize, RiskMetrics assumes that financial returns divided by their respective volatility forecasts are normally distributed with mean 0 and variance 1. This assumption is crucial because it recognizes that volatility changes over time.
• First, continue using the current RiskMetrics volatilities and correlations because users have
shown a strong interest in using these estimates when computing VaR. (We believe that this
interest results from the intuitive interpretation of volatility and correlation.)
• Second, use the conditional normal distribution as the baseline model.
• Third, set up conditions such that VaR is easily computed. Therefore, the number of new parameters required to compute VaR must not be so large as to impede implementation.
We motivate the development of the new VaR methodology as follows: instead of assuming that standardized returns are normally distributed with mean 0 and variance 1, we assume that the standardized returns are generated from a mixture of two different normal distributions.
For example, suppose that we believe that on most days standardized returns are generated according to the conditional normal distribution with a zero mean and variance close to 1. However, on other days, say, on days when a large return is observed, we assume that standardized returns are still normally distributed but with a different mean and variance. In fact, we would expect the variance of this latter normal distribution to be large. To be more specific, let p1 be the probability that a standardized return was generated from the normal distribution N1, where N1 is identified by its mean µ1 and variance σ1². Similarly, let p2 be the probability that a standardized return was generated from the normal distribution N2, where N2 is identified by its mean µ2 and variance σ2². Mathematically, the standardized return distribution is generated according to the following probability density function:

[1] PDF = p1 ⋅ N1(µ1, σ1) + p2 ⋅ N2(µ2, σ2)
Eq. [1] is known as a normal mixture model. An interesting feature of this model is that it allows us to assign large returns a larger probability (compared to the standard normal model). Chart 4 on page 11 shows two simulated densities, one from the standard normal distribution, the other from a normal mixture model with the following parameters:

p1 = 0.98, p2 = 0.02, µ1 = µ2 = 0, σ1² = 1, and σ2² = 100
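Under these parameters the tail probabilities can be computed in closed form from the normal CDF. A small check using only the standard library (P(|R| > 3) is roughly 0.3% under the normal but close to 2% under the mixture):

```python
import math

def norm_cdf(x, mu=0.0, sigma=1.0):
    """Normal CDF expressed via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def mixture_tail(x, p1=0.98, p2=0.02, s1=1.0, s2=10.0):
    """P(|R| > x) under the two-component normal mixture of Eq. [1]
    with zero means (s2 = 10 corresponds to sigma_2^2 = 100)."""
    return 2.0 * (p1 * norm_cdf(-x, 0.0, s1) + p2 * norm_cdf(-x, 0.0, s2))

normal_tail = 2.0 * norm_cdf(-3.0)
mix_tail = mixture_tail(3.0)
```

Even with an event probability of only 2%, the mixture assigns roughly six times more probability to a move beyond three standard deviations than the standard normal does.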
Chart 4
Standard normal and normal mixture probability density functions (PDF)
[Chart of the two densities over standardized returns from −5.00 to 4.50; the mixture density has a lower peak and fatter tails than the normal density. Y-axis: PDF, 0.00 to 0.40.]
Since the normal mixture model can assign relatively larger-than-normal probabilities to big returns, we choose to model standardized returns as the sum of a normal return, nt, with mean zero and variance σn², and another normal return, βt, with mean µβ and variance σβ², that occurs each period with probability p. Note that if we set σn² = 1, then nt represents the part of the return that is modeled correctly according to RiskMetrics. It then follows that we can write a standardized return, Rt, as generated from the following model:

[2] Rt = nt + δt βt

where δt = 1 with probability p and δt = 0 with probability 1 − p.
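A quick simulation of Eq. [2] shows the fat tails it produces. With the illustrative parameter values p = 0.02, σn = 1, µβ = 0, σβ = 10, the kurtosis of Rt is far above the normal value of 3:

```python
import random
import statistics

random.seed(7)
P, SIGMA_N, SIGMA_B = 0.02, 1.0, 10.0   # illustrative parameter values

draws = []
for _ in range(100_000):
    n = random.gauss(0.0, SIGMA_N)
    delta = 1 if random.random() < P else 0
    beta = random.gauss(0.0, SIGMA_B)
    draws.append(n + delta * beta)       # Eq. [2]: R_t = n_t + delta_t * beta_t

mean = statistics.fmean(draws)
var = statistics.variance(draws)          # theoretical value: 1 + p * sigma_b^2 = 3
kurt = statistics.fmean((x - mean) ** 4 for x in draws) / var ** 2
```

The sample kurtosis comes out far above 3 (the theoretical value for these parameters is about 68), which is the sense in which occasional events from the high-variance component fatten the tails.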
Recall that Eq. [2] on page 11 was motivated by the need to have our VaR model account for large returns, i.e., returns that occur less than 5% of the time according to the RiskMetrics model. Unfortunately, due to data limitations, it is very difficult in practice to determine the accuracy of forecasting returns that occur less than 2.5% of the time. One way to get around the situation of not having enough data to properly test Eq. [2] is to perform a Monte Carlo simulation. Under controlled settings we can simulate as much data as we need, and then conduct inference based on the simulated data. The only requirement is that the simulated data have fat tails so as to reflect observed financial return distributions. One commonly used distribution that is known to have fat tails, and that is easy to simulate, is the t-distribution.
We then conduct a Monte Carlo study on single instruments to determine how accurately our new model captures fat tails. The experiment is performed in three steps. First, we simulate 10,000 observations from a t-distribution and then compute the simulated percentiles at the 0.5%, 1%, 2.5%, and 5% probability levels. Second, we estimate the parameters of Eq. [2] by using the simulated data, and determine the percentiles implied by Eq. [2]. (We refer to the percentiles generated from Eq. [2] as “mix” since its associated PDF is essentially a normal mixture model.) Third, we compare the actual percentiles generated from the t-distribution to those produced by Eq. [2] and by the standard normal model (standard RiskMetrics).
Table 1 reports the results of our study. Data in column 2 is simulated from t-distributions with 2, 4, 10, and 100 degrees of freedom. Notice that the smaller the degrees of freedom, the fatter the tails of the simulated distribution (compare columns 4 and 5).
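The first and third steps of this experiment are easy to reproduce; here is a sketch using only the standard library (the t draws use the classical Z ⁄ √(χ²ν ⁄ ν) representation):

```python
import math
import random

def t_samples(df, n, seed=11):
    """Simulate Student-t draws as Z / sqrt(chi2_df / df)."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        chi2 = rng.gammavariate(df / 2.0, 2.0)   # chi-square with df degrees of freedom
        out.append(z / math.sqrt(chi2 / df))
    return out

def percentile(xs, q):
    """Empirical q-percentile (q in [0, 1])."""
    xs = sorted(xs)
    return xs[int(q * len(xs))]

for df in (4, 100):
    xs = t_samples(df, 10_000)
    print(df, round(percentile(xs, 0.01), 2))    # compare with the normal's -2.33
```

With 4 degrees of freedom the simulated 1% percentile sits well below the normal value of −2.33, while with 100 degrees of freedom it is close to it, mirroring the pattern in Table 1.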
Table 1
Comparing percentiles of tdistribution and normal mixture
Simulated data from tdistribution with 2, 4, 10, and 100 degrees of freedom
Two degrees of freedom
¹ Parameters are defined as follows:
µβ = mean of the normal distribution, βt, describing the standardized return
σβ = standard deviation of the normal distribution, βt
σn = standard deviation of the normal return, nt
p = probability that the standardized return is generated from the normal distribution
Table 1 (continued)
Comparing percentiles of tdistribution and normal mixture
Simulated data from tdistribution with 2, 4, 10, and 100 degrees of freedom
Ten degrees of freedom
The results in Table 1 (columns 4 and 5) clearly show how the mixture model is superior to the normal model in recovering the tail percentiles of the t-distribution (notably in the cases of 2 and 4 degrees of freedom). Also, reading from the top of the table downwards, notice how the estimates of σβ, σn, and p change as the distribution becomes more normal. When there are very fat tails, there is a greater chance of observing a return from the normal distribution with the large variance (1.3% for 2 degrees of freedom vs. 0.2% with 100 degrees of freedom). Furthermore, the estimated standard deviation σβ becomes smaller as the simulated distribution becomes more and more normal.
Next, we consider the case of calculating the VaR of a portfolio of returns. Unlike the single instrument case described above, when aggregating returns we must grapple with issues such as the correlation between the different returns’ δt’s and βt’s. For example, consider a portfolio that consists of two instruments with returns R1t and R2t expressed as follows:

[3] R1t = n1t + δ1t β1t
    R2t = n2t + δ2t β2t
Let w1 and w2 denote the amount invested in instruments 1 and 2, respectively. We can write the portfolio return, Rpt, as

[4] Rpt = w1 σ1t R1t + w2 σ2t R2t
        = (w1 σ1t δ1t β1t + w2 σ2t δ2t β2t) + (w1 σ1t n1t + w2 σ2t n2t)
           New portfolio component              Standard RiskMetrics component

Notice that when aggregating returns we must multiply the returns by the RiskMetrics standard deviations (σ1t and σ2t) to preserve the proper scale.
Now, to compute VaR it is simple to evaluate the “standard RiskMetrics component” since all that is required are the RiskMetrics standard deviations and correlations, which are readily available. However, things are not so straightforward with the “new portfolio component.” For example, we must determine whether we should use estimates of the correlation between δ1t and δ2t or simply assume how they are correlated. Similar issues apply to β1t and β2t, although, according to our estimation procedure, it is not possible to estimate their correlation. Through extensive experimentation with portfolios of various sizes, we have concluded that treating both the δt's and βt's as independent offers the best results relative to the standard normal model and the assumption that the δt's are perfectly correlated. The implication of this result is that for any portfolio of arbitrary size, all that is required to compute its VaR are the RiskMetrics volatilities and correlations, the probabilities p, the mean estimates µβ, and the standard deviations σβ. Note that there is one p, µβ, and σβ for each time series. We now offer an example to show how we reached the conclusion to treat the δt's and βt's as independent.
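As a sketch of what this aggregation implies, the following simulation computes portfolio percentiles under the independence assumption for a two-asset case. All parameter values here (weights, volatilities, correlation, p, σβ) are illustrative, not estimated:

```python
import math
import random

random.seed(3)

# Illustrative inputs for a two-asset portfolio:
w = [1.0, 1.0]                           # positions
sigma = [0.6, 0.8]                       # RiskMetrics daily volatilities (percent)
rho = 0.5                                # RiskMetrics correlation between the n's
p = [0.02, 0.03]                         # per-series event probabilities
sigma_b = [8.0, 10.0]                    # per-series sigma_beta (mu_beta = 0)

def simulate(n_sims, with_events=True):
    """Simulate portfolio P&L per Eq. [4]; deltas and betas independent."""
    pnl = []
    for _ in range(n_sims):
        z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
        n = [z1, rho * z1 + math.sqrt(1 - rho ** 2) * z2]   # correlated normal parts
        r = 0.0
        for j in range(2):
            delta = 1 if (with_events and random.random() < p[j]) else 0
            beta = random.gauss(0.0, sigma_b[j])
            r += w[j] * sigma[j] * (n[j] + delta * beta)
        pnl.append(r)
    pnl.sort()
    return pnl

mix = simulate(50_000)
normal_only = simulate(50_000, with_events=False)   # standard RiskMetrics component
q = lambda xs, a: xs[int(a * len(xs))]
print("1% quantile, mixture:", round(q(mix, 0.01), 2))
print("1% quantile, normal :", round(q(normal_only, 0.01), 2))
```

Even with small event probabilities, the 1% quantile of the mixture portfolio is materially more negative than that of the normal-only portfolio, which is the effect the new component is designed to capture.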
Consider the case of a portfolio that consists of five assets. We assume that the true returns are generated from the following model:

[5] Rjt = njt + δjt βjt for j = 1, 2, 3, 4, 5

where the vector β = (β1, …, β5) is distributed multivariate normal with mean vector (0,0,0,0,0), standard deviation vector (10,10,10,10,10), and correlation matrix Σβ. Recall that we ignore the correlations between the βt's because, in practice, it is very difficult to get good estimates of the correlation matrix Σβ.
Table 2
Testing assumptions on correlation between δt’s
5,000 simulated returns from correlated mixture model

Percentiles:
Confidence Interval    True     Normal   Independent δt’s   Perfectly correlated δt’s
5.0%                   −1.63    −1.65    −1.88              −1.74
Next, we conduct the same experiment, but this time on real data. We form a portfolio of returns from 5 foreign exchange series, again weighting each return series by one unit. Table 3 reports the parameter estimates after fitting each of the five return series to Eq. [2] on page 11. The percentiles implied by Eq. [2] under different aggregation assumptions are reported in Table 4.
Table 3
Parameter estimates of the normal mixture model
Fitting the model (Eq. [2]) to 5 foreign exchange return series
Parameter estimates:
Currencies µβt σβt σnt p (%)
Table 4
Testing assumptions on correlation between δt’s
Portfolio returns generated from 5 foreign exchange series

Percentiles:
Confidence Interval    Normal   Independent δt’s   Perfectly correlated δt’s
5.0%                   −1.65    −1.71              −1.69
Tables 2 and 4 show that the independence and perfect correlation assumptions give similar results. However, at the smaller percentiles, say 1% and smaller, the assumption that the δt’s are independent tends to be more accurate.
To help summarize how to compute VaR under this new proposed methodology, we refer the reader to the flow chart presented in Chart 5 on page 17. The gray shaded boxes represent the data required (which we would supply) to estimate VaR under the new methodology. In addition to the RiskMetrics volatilities and correlations, we would supply p, µβ, and σβ for each time series, and the formulae that would take as inputs the portfolio weights, the RiskMetrics volatilities and correlations, and p, µβ, and σβ to produce a VaR estimate at a prespecified confidence level. The italicized words tell when the data would be updated.
Chart 5
Flow chart of VaR calculation
[Flow chart; the final step is to compute the adjusted percentile.]
Thus far, there has been no mention of how p, µβ, and σβ are estimated. The next section describes the statistical model used to estimate p, µβ, and σβ. The estimation process is rather technical, and uninterested readers should skip to “Conclusions” on page 23.
[7] Rt = nt + δt βt

where

nt ~ N(0, σn²)
Prob(δt = 1) = p
Prob(δt = 0) = 1 − p

and

βt ~ N(0, ξ²)
We analyze Eq. [7] within a Bayesian framework. Given prior distributions and values on δt, p, βt, and σn², we derive the marginal posterior distributions of p, βt, and σn², as well as a time series of the posterior probabilities of events, i.e., Prob(δt = 1) at each point in time. The basic computational tool used is the Gibbs sampler, which uses random draws from the conditional distributions of each variable of a random vector given all other variables to obtain samples from the marginal distributions.² The sampler thus only requires the ability to draw random samples from the conditional distributions of the variables involved. This minimum requirement makes the sampler particularly useful, as the joint distribution of the variables p, βt, and σn² is complicated.
As previous research has shown, traditional maximum likelihood analysis of models such as Eq. [7] is
complicated because the random mechanism, p, can give rise to unknown numbers of shifts at arbitrary
time points. Alternatively, formulating Eq. [7] by using a Bayesian approach and applying the Gibbs
sampler is an effective way to obtain the marginal posterior distributions. A major advantage of the
Bayesian approach is that there is no need to consider the number of level shifts in the series; the shifts
are in effect governed by the probability p.
Following the Bayesian paradigm, in order to estimate Eq. [7], we must first specify the prior distributions for p, βt, and σn². The prior distribution for the variance of the standard normal random variable nt is

[8] σn² ~ νλ ⋅ χν⁻²

where χν⁻² is an inverse chi-square random variable with ν degrees of freedom, with mean and variance given by

E(σn²) = νλ ⁄ (ν − 2),  ν > 2
V(σn²) = 2 ⋅ (νλ)² ⁄ [(ν − 2)² (ν − 4)],  ν > 4

The prior distribution for the event returns, βt, is

[9] βt ~ NID(0, ξ²)
Finally, it is assumed that the prior probability that an event occurs, p, follows a beta distribution, i.e., p ~ Beta(γ1, γ2), with mean and variance

E(p) = γ1 ⁄ (γ1 + γ2)
V(p) = γ1 γ2 ⁄ [(γ1 + γ2)² (γ1 + γ2 + 1)]

The hyperparameters, i.e., the parameters (ν, λ, γ1, γ2, ξ²) for the prior distributions, are assumed known. In practice, these hyperparameters can be specified by using substantive prior information on the series under study. The purpose of the Gibbs sampler is to find the conditional posterior distributions of subsets of the unknown parameters δ, β, p, σn². Denoting the conditional probability density of ω given ρ by p(ω|ρ) and using some standard Bayesian techniques, we obtain the posterior distributions, i.e., the distributions after we update our priors with data.³
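These moment formulas make it easy to check what a given hyperparameter choice implies; for instance, with γ1 = 2 and γ2 = 98 (the values adopted below), the prior on p has mean 2% and standard deviation of about 1.4%:

```python
import math

def beta_prior_moments(g1, g2):
    """Mean and standard deviation of p ~ Beta(g1, g2)."""
    mean = g1 / (g1 + g2)
    var = g1 * g2 / ((g1 + g2) ** 2 * (g1 + g2 + 1))
    return mean, math.sqrt(var)

mean, sd = beta_prior_moments(2, 98)   # -> (0.02, ~0.014)
```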
To quantify our a priori beliefs, we set the priors to the following values:

γ1 = 2; γ2 = 98
ν = 3
λ = σ² ⁄ 3
ξ² = 100

These settings imply that before we estimate the parameters of Eq. [7], we believe an event occurs 2% of the time, the expected value of the standard deviation of nt is one, and event returns, βt, are distributed normally with mean of zero and a standard deviation of 10. Chart 6 shows how the prior distribution on event returns compares to the prior distribution of standardized returns, nt.
³ The posterior distributions are given in the appendix to this paper (page 24).
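To make the estimation procedure concrete, here is a minimal Gibbs sampler for Eq. [7] under the priors above. The conditional draws are the standard conjugate updates; this is an illustrative sketch, not the production estimator applied to the RiskMetrics database:

```python
import math
import random

rng = random.Random(42)

# Priors from the text: p ~ Beta(2, 98); sigma_n^2 ~ nu*lam * inv-chi2(nu); beta_t ~ N(0, xi2).
# LAM = 1/3 so that E[sigma_n^2] = nu*lam/(nu - 2) = 1 (i.e., sigma^2 = 1 in lam = sigma^2/3).
G1, G2, NU, LAM, XI2 = 2.0, 98.0, 3.0, 1.0 / 3.0, 100.0

def gibbs(r, sweeps=300, burn=150):
    """Return posterior means of p and sigma_n^2 for the mixture model Eq. [7]."""
    n = len(r)
    p, s2 = 0.02, 1.0                    # starting values
    beta = [0.0] * n
    delta = [0] * n
    p_draws, s2_draws = [], []
    for sweep in range(sweeps):
        # 1) delta_t | rest: Bernoulli with odds p*N(r; beta, s2) vs (1-p)*N(r; 0, s2)
        for t in range(n):
            l1 = p * math.exp(-(r[t] - beta[t]) ** 2 / (2 * s2))
            l0 = (1 - p) * math.exp(-r[t] ** 2 / (2 * s2))
            delta[t] = 1 if rng.random() < l1 / (l1 + l0) else 0
        # 2) beta_t | rest: conjugate normal update if an event occurred, prior draw otherwise
        for t in range(n):
            if delta[t]:
                v = 1.0 / (1.0 / XI2 + 1.0 / s2)
                beta[t] = rng.gauss(v * r[t] / s2, math.sqrt(v))
            else:
                beta[t] = rng.gauss(0.0, math.sqrt(XI2))
        # 3) p | delta: Beta(G1 + #events, G2 + n - #events)
        k = sum(delta)
        p = rng.betavariate(G1 + k, G2 + n - k)
        # 4) sigma_n^2 | rest: scaled inverse chi-square with NU + n degrees of freedom
        resid = sum((r[t] - delta[t] * beta[t]) ** 2 for t in range(n))
        s2 = (NU * LAM + resid) / rng.gammavariate((NU + n) / 2.0, 2.0)
        if sweep >= burn:
            p_draws.append(p)
            s2_draws.append(s2)
    return sum(p_draws) / len(p_draws), sum(s2_draws) / len(s2_draws)
```

Each sweep cycles through δ, β, p, and σn²; averaging the post-burn-in draws gives the posterior means that the methodology uses as point estimates.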
Chart 6
Prior distributions for standardized returns and event returns
[Chart of two densities over the range −10 to 10: the N(0,1) prior for standardized returns and the N(0,100) prior for event returns β. Y-axis: PDF, 0 to 0.4.]
Note that by choosing a large standard deviation we are implying that we do not have strong a priori beliefs on the values of event returns. Also, assuming a mean of zero implies that there is no bias toward either positive or negative events.
Combining the observed returns with our prior settings, we estimate the marginal distributions of β, p, and σn², which represent the probability distributions of the event returns, the probability of an event, and the variance of the normal standardized random variable, respectively. Chart 7 presents the estimated posterior distribution of event returns for gold spot contracts. Specifically, given our priors, we fit Eq. [7] on page 18 to gold spot contract returns for the period April 4, 1990 through March 26, 1996 and estimate the distribution of β.
Chart 7
Posterior distribution of event returns (β) of gold spot contracts
X-axis is in units of standardized returns
[Bimodal density over spot gold standardized returns from −10.00 to 8.00; Y-axis: PDF, 0 to 0.12.]
Notice how the event returns have a symmetric bimodal distribution. Effectively, what this shape tells us is that there is important information in the data. The data has transformed the prior distribution into something very different. When event returns are negative, values around −4.5 occur most frequently. Similarly, when event returns are positive, the values around 4.3 occur most often. This is in stark contrast to the prior distribution, which assumes that zero is the most commonly observed value for event returns.
In fact, when fitting Eq. [7] to 215 time series in the RiskMetrics database, which includes foreign exchange, money market rates, government bonds, commodities, and equities, we have found similar shapes for the event return distributions. For example, Charts 8 and 9 show event return distributions for the DEM/USD foreign exchange series and the US S&P 500 equity index for the period April 4, 1990 through March 26, 1996.
Chart 8
Posterior distribution of DEM/USD foreign exchange event returns
X-axis is in units of standardized returns
[Density over DEM/USD standardized returns from −8.00 to 8.00; Y-axis: PDF, 0 to 0.14.]
Note that whereas the DEM/USD event returns have a symmetric bimodal distribution, the S&P 500’s
event return distribution has a pronounced mode at −5.0. As Chart 9 on page 22 shows, when events in
the S&P 500 occur, they are more likely to be negative than positive.
Chart 9
Posterior distribution of S&P 500 event returns
X-axis is in units of standardized returns
[Density over S&P 500 standardized returns from −10.00 to 8.00; Y-axis: PDF, 0 to 0.12. The distribution has a pronounced mode near −5.0.]
Charts 7, 8, and 9 presented the distribution of β for three different return series. Similarly, we can estimate the posterior distribution of p, the probability that an event occurs. Chart 10 shows the distribution of the probability of an event occurring for the S&P 500.
Chart 10
Posterior distribution of the probability of an event (p) for the S&P 500
April 4, 1990 through March 26, 1996
[chart omitted: PDF versus probability of event, 0.004 to 0.044]
Recall that the prior probability of observing an event followed a beta distribution, and that an event was expected to occur, on average, 2% of the time. However, after using the data to update our priors, we see that for the S&P 500 the estimated distribution of p is positively skewed, with a peak around 1%.
Finally, note that throughout this section we derived probability distributions of the parameters of interest. However, our VaR methodology requires point estimates, not entire distributions. When computing VaR we take the means of the posterior distributions of p, µ_β, and σ_β as their point estimates.
Conclusions
This article has developed a new methodology to measure VaR that explicitly accounts for fat-tailed distributions. Building on the current RiskMetrics model, VaR under the new methodology takes as inputs portfolio weights and two sets of parameters: the current RiskMetrics volatilities and correlations, and estimates of p, µ_βt, and σ_βt. This model was developed with three requirements in mind: (1) build a VaR model that continues to use RiskMetrics volatilities and correlations, (2) use the conditional normal distribution as the baseline model, and (3) keep the number of additional parameter estimates required to compute VaR to a minimum. The upshot of meeting these criteria is a relatively simple VaR framework that goes a long way toward modeling large portfolio losses. Furthermore, the new VaR methodology serves as a basis from which a risk manager can perform structured Monte Carlo.
In closing, we are interested in your feedback on this proposed methodology. Send the author e-mail with any comments or questions you may have regarding the contents of this article.
Appendix
[A.1]  $\sigma_n^2 \rightarrow \left(\gamma\lambda^2 + \hat{s}^2\right)\cdot\chi^{-2}_{\gamma+T}$

where

$\hat{s}^2 = \sum_t \left(R_t - \bar{R}\right)^2$

and

$\bar{R}$ = sample mean of $R_t$

[A.2]  $\mathrm{Prob}\left(\delta_t = 1 \mid R_t, \beta_t, \sigma_n, p\right) = \dfrac{p\,g_1(R_t)}{p\,g_1(R_t) + (1-p)\,g_0(R_t)}$

where

[A.3]  $g_1(R_t) = \left(2\pi\sigma_n^2\right)^{-1/2}\exp\left\{-\tfrac{1}{2}\left[(R_t - \beta_t)/\sigma_n\right]^2\right\}$  if $\delta_t = 1$

[A.4]  $g_0(R_t) = \left(2\pi\sigma_n^2\right)^{-1/2}\exp\left\{-\tfrac{1}{2}\left(R_t/\sigma_n\right)^2\right\}$  if $\delta_t = 0$

[A.5]  $\mathrm{Prob}\left(\delta_t = 1 \mid R_t\right) = \dfrac{\mathrm{Prob}(\delta_t = 1)\,\mathrm{Prob}(R_t \mid \delta_t = 1)}{\mathrm{Prob}(\delta_t = 1)\,\mathrm{Prob}(R_t \mid \delta_t = 1) + \mathrm{Prob}(\delta_t = 0)\,\mathrm{Prob}(R_t \mid \delta_t = 0)}$

If $\delta_t = 1$, we use standard results on the relation between a normal prior and likelihood to derive the posterior distribution of $\beta_t$:

[A.6]  $\beta_t \mid (\delta_t = 1) \rightarrow \mathrm{NID}\left(\beta_t^*, \sigma_\beta^2\right)$
where

$\beta_t^* = \dfrac{R_t\,\xi^2}{\xi^2 + \sigma_n^2}$

and

$\sigma_\beta^2 = \dfrac{\sigma_n^2\,\xi^2}{\xi^2 + \sigma_n^2}$

Finally, the conditional posterior distribution of p depends only on $\delta_t$. Let k be the number of 1's in the $T \times 1$ vector $\delta = (\delta_1, \delta_2, \ldots, \delta_T)$; that is, k is the number of events in the time series. Because the prior of p is $\mathrm{Beta}(\gamma_1, \gamma_2)$, the conditional posterior distribution of p is $\mathrm{Beta}(\gamma_1 + k, \gamma_2 + T - k)$.
The Gibbs sampler simulates random variables from marginal and joint distributions as follows. Consider a set of random variables $r_1, r_2, \ldots, r_N$. It is assumed that the joint distribution is determined uniquely by the full conditional distributions $P(r_j \mid r_i, i \neq j)$, $j = 1, \ldots, N$, and that it is possible to sample from these conditional distributions. For each j, let $P(r_j)$ denote the marginal distribution. Given an arbitrary set of starting values $r_1^{(0)}, r_2^{(0)}, \ldots, r_N^{(0)}$, draw $r_1^{(1)}$ from $P(r_1 \mid r_2^{(0)}, \ldots, r_N^{(0)})$, then $r_2^{(1)}$ from $P(r_2 \mid r_1^{(1)}, r_3^{(0)}, \ldots, r_N^{(0)})$, and so on up to $r_N^{(1)}$ to complete one iteration of the scheme. After A such iterations the result is $r_1^{(A)}, r_2^{(A)}, \ldots, r_N^{(A)}$. It can be shown that for a large enough sample, $r_1^{(A)}, r_2^{(A)}, \ldots, r_N^{(A)}$ is a simulated draw from the joint cumulative distribution function $P(r_1, r_2, \ldots, r_N)$. In this paper we run the sampler 1000 times, discard the first 500 draws, and use the last 500.
Peter Zangari
Morgan Guaranty Trust Company
Risk Management Advisory
(1-212) 648-8641
zangari_p@jpmorgan.com

In this article, we compute the VaR of a portfolio of foreign exchange flows that consists of the exposures provided in Table A.1 in the Appendix on page 30. In so doing, we underscore the limitations of standard VaR analysis when return distributions deviate significantly from normality. All exposures are assumed to have been converted to U.S. dollar equivalents at the current spot rate.
Briefly, for a given forecast horizon and confidence level, VaR is the maximum loss in a portfolio's current value that is expected at that confidence level. Based on the standard RiskMetrics methodology, Table 1 reports the portfolio's VaR over different forecast horizons and confidence levels. For example, over the next year, there is a 95% chance that the current portfolio value of USD 2,502MM will not fall by more than USD 129.42MM.
Table 1
Portfolio VaR estimates (USD MM)

                        Time Horizon                              Annual Risk
Confidence Interval     1 Quarter   2 Quarters   3 Quarters       Diversified   Undiversified
The RiskMetrics methodology is based on the precept that risks across instruments are not perfectly additive, given the lack of perfect positive correlation. As a result, the total risk in a portfolio of positions is often less than the sum of the instrument risks taken separately. This diversification benefit can be estimated by taking the difference between VaR assuming all correlations between exposures are 1 (no diversification) and VaR estimates based on estimated correlations (as presented in Table 1). Consider the VaR estimates (95% confidence) on the individual exposures for a one-year horizon. The sum of these numbers is USD 333.03MM (sum of annual VaR estimates in Table A.1 on page 30). This is equivalent to the VaR estimate of USD 129.42MM reported in Table 1 plus the diversification benefit of USD 203.61MM.
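The diversification-benefit arithmetic can be sketched as follows (a minimal illustration using the figures from the text):

```python
def diversification_benefit(undiversified_var, diversified_var):
    """Diversification benefit: sum of stand-alone VaRs (correlation = 1)
    minus the portfolio VaR computed with estimated correlations."""
    return undiversified_var - diversified_var

# Figures from the text: USD 333.03MM undiversified vs. USD 129.42MM diversified
benefit = diversification_benefit(333.03, 129.42)   # USD 203.61MM
```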
In addition to the portfolio's VaR, we compute the VaR of each foreign exchange exposure for a forecast horizon of one year and a 95% confidence level. Table A.1 reports VaR estimates for the 52 positions. To further illustrate the riskiness of the individual foreign exchange positions, Table A.1 also reports the volatility forecasts of foreign exchange returns (i.e., not weighted by the size of the foreign exchange positions).
To obtain the aforementioned risk estimates, we conducted the VaR analysis closely following the RiskMetrics methodology. For each of the 52 time series (Table A.1) we used 86 historical weekly prices for the period 7/15/95 through 4/30/96. Missing observations, due to country-specific holidays, were forecast using the statistical routine known as the EM algorithm. The VaR estimates reported in Tables A.1 and A.2 were computed using the standard RiskMetrics delta valuation methodology. This requires the computation of volatilities for each exposure and correlations between exposures. Volatilities and correlations were computed using exponentially weighted averages with a decay factor of 0.94, which implies that our volatility and correlation forecasts effectively use 75 historical data points. Recall that exponential weighting is a simple way of capturing the dynamics of returns. Its key feature is that it weighs recent data more heavily than data recorded in the distant past.
See RiskMetrics—Technical Document for exact formulae.
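The recursion behind this weighting scheme can be sketched as follows; this is a minimal illustration of an exponentially weighted moving-average volatility with the text's decay factor of 0.94, not the exact production formula:

```python
def ewma_volatility(returns, decay=0.94):
    """Exponentially weighted volatility: each step shrinks the previous
    variance estimate by the decay factor and mixes in the newest squared
    return, so recent observations carry more weight than distant ones."""
    var = returns[0] ** 2          # seed the recursion with the first squared return
    for r in returns[1:]:
        var = decay * var + (1.0 - decay) * r ** 2
    return var ** 0.5
```

With a decay of 0.94, roughly 75 observations carry almost all of the weight (0.94^75 ≈ 1%), which is the sense in which the forecasts "effectively use 75 historical data points."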
When computing VaR, RiskMetrics assumes that portfolio returns are conditionally normally distributed. That is, it is assumed that returns divided by their respective standard deviations are normally distributed. It is important to distinguish this assumption from simply assuming that currency returns are normally distributed. Table A.2 on page 32 reports various test statistics to determine the validity of this assumption. All results were computed from normalized returns, that is, returns divided by their standard errors.
RiskMetrics also assumes that returns are independent over time. Finally, to generate VaR estimates over different time horizons, we simply scale the weekly VaR forecast by the square root of the time horizon in weeks. For example, since we define 60 weeks to be one year,1 we obtain the one-year VaR forecast by multiplying the one-week VaR forecast by the square root of 60.
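The square-root-of-time scaling can be sketched as:

```python
import math

def scale_var(one_week_var, horizon_weeks):
    """Scale a one-week VaR to a longer horizon: with independent returns,
    variance grows linearly in time, so VaR grows with the square root
    of the horizon."""
    return one_week_var * math.sqrt(horizon_weeks)
```

Under the 60-week-year convention used in this analysis, the annual VaR is `scale_var(weekly_var, 60)`.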
Discussion
Table A.2 on page 32 presents evidence that several return series are clearly not conditionally normal. For example, Mexico's skew statistic is 13.71, which is very non-normal. When returns are aggregated into a portfolio, it would not be unreasonable to expect the portfolio's return distribution to become more normal. However, as the sample statistics for the portfolio in Table A.2 show, this is not the case. In fact, the non-normality of the portfolio's return is a result of the relatively high weights on returns that are very non-normal (e.g., China has a weight of 74). Fortunately, there are advanced statistical methods that allow us to adjust our VaR estimates to reflect the portfolio's skewness and kurtosis (see RiskMetrics Monitor, 1st quarter, 1996). In the current analysis, we did not adjust the VaR estimates for skewness and kurtosis.
In addition to the general deviations from conditional normality, there is the issue of event risk. For various reasons, events appear as very large returns that occur with only a small probability. While we are currently developing a VaR methodology that allows users to explicitly account for event risk in their VaR calculations, it is not included in this analysis. For a discussion of fat-tailed distributions, see the preceding paper in this edition, “An improved methodology for measuring VaR,” on page 7.
One way to measure how sensitive the VaR estimates in Tables A.1 and A.2 are to their underlying assumptions is to compare these values to VaR estimates produced by historical simulation. Under historical simulation, no statistical distribution for returns is assumed. Instead, the sample returns over the 85-week historical sample period and the portfolio weights are used to construct the portfolio's profit and loss (P&L) distribution. It is then assumed that this distribution holds in the future. Chart 1 on page 28 shows the P&L distribution of the portfolio for a one-year horizon. For a 95% confidence level, VaR is given by the 5th percentile of this distribution, which is USD 174MM.
1 In this analysis, we use the convention of 5 weeks per month, 15 weeks per quarter, and 60 weeks per year.
Chart 1
Portfolio profit & loss distribution over one year based on historical simulation
VaR = USD 174MM at 95% confidence level
[chart omitted: frequency versus P&L]
Table 2 presents portfolio VaR estimates based on historical simulation for various forecast horizons.
Table A.1 on page 30 reports annual VaR estimates for individual foreign exchange exposures.
Table 2
Portfolio VaR estimates (USD MM)
Historical simulation; 95% confidence level

Confidence interval1     1 Quarter   2 Quarters   3 Quarters   1 Year
It should be noted that the accuracy of the VaR estimates produced by historical simulation is very dependent upon the sample size. In general, when no statistical distribution is assumed for returns, it is difficult to obtain accurate estimates of the 1st, 5th, and 10th percentiles without a sufficient amount of data.
For example, to find the 5th percentile of the profit and loss (P&L) distribution consisting of 85 data points, we first sort the data and then select the 4th value of the sorted series. (Strictly, the 5th percentile of 85 observations falls at order statistic 4.25, so using the 4th observation is only a rough approximation to the true percentile value.) Similarly, the 1st and 10th percentiles are represented by the first and ninth data points of the sorted P&L series, respectively. Table 3 presents the first nine P&L values from the sorted distribution as well as their corresponding percentiles.
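The order-statistic arithmetic can be sketched as follows; this interpolating variant is an assumption on our part (the simple rule in the text just takes the 4th sorted value):

```python
def empirical_percentile(pnl, pct):
    """Empirical percentile of a P&L sample by sorting and linearly
    interpolating between adjacent order statistics. With 85 observations,
    the 5th percentile falls at order statistic 0.05 * 85 = 4.25, i.e. one
    quarter of the way from the 4th to the 5th sorted value."""
    data = sorted(pnl)
    pos = pct / 100.0 * len(data)      # fractional (1-based) order statistic
    if pos <= 1:
        return data[0]
    lo = int(pos)                      # lower neighbouring observation (1-based)
    frac = pos - lo
    return (1.0 - frac) * data[lo - 1] + frac * data[lo]
```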
Table 3
First nine values of the portfolio’s profit and loss (P&L) distribution
One year forecast horizon
Order P&L Values Approximate Percentiles
1st −288 1st
2nd −279 
3rd −258 
5th −165 
6th −160 
7th −160 
8th −160 
Note that the 5th percentile in Table 3 does not match its counterpart in Table 2 because there the percentile is interpolated between the 4th and 5th sorted values (0.75 × the 4th value + 0.25 × the 5th value = 174). Nevertheless, we present Table 3 to show how percentile estimates are very sensitive to the number of data points used to construct the P&L distribution. More specifically, by computing the confidence intervals for the estimated percentiles, we find that there is a 20% chance that the estimated 5th percentile will be less than −279 or greater than −160; there is only a 67% chance that the 5th percentile lies between −258 and −165. The fact that it is difficult to get robust estimates of the percentiles in the above analysis is one reason for the differences between RiskMetrics VaR and VaR according to historical simulation. Another reason for the difference is related to the data weighting schemes used by the two methodologies. In historical simulation, all occurrences have equal weights. Under the standard RiskMetrics approach, market movements are exponentially weighted.
Appendix
Table A.1
Portfolio composition and VaR1
Annual Volatility Annual Value at Risk
Weight 1.65 Std. Dev. RiskMetrics Hist. Simulation
OECD
Australia 35 12.361 4.330 4.340
Austria 66 22.565 14.890 13.360
Belgium 32 16.592 5.310 6.220
Denmark 59 15.850 9.350 11.070
France 28 15.616 4.370 5.280
Germany 37 16.847 6.230 7.370
Greece 80 15.220 12.180 14.460
Holland 30 16.738 5.020 5.880
Italy 82 11.190 9.180 12.470
New Zealand 98 10.948 10.730 7.360
Portugal 28 15.208 4.260 5.270
Spain 48 14.843 7.120 8.120
Turkey 68 30.132 20.490 18.450
UK 81 11.641 9.430 14.170
Others
China 1 3.964 0.040 0.020
Czech Repub 64 12.784 8.180 8.300
Hungary 83 12.984 10.780 11.560
India 94 17.787 16.720 13.960
Romania 43 25.866 11.120 4.520
Russia 82 14.730 12.080 20.030
Total 2,502
1Countries are grouped by major economic groupings as defined in Political Handbook of the World: 1995–1996. New
York: CSA Publishing, State University of New York, 1996. Countries not formally part of an economic group are
listed in their respective geographic areas.
Table A.2
Testing for conditional normality1
Normalized return series; 85 total observations
Tail Probability (%)8 Tail value9
Skewness2 Kurtosis3 SW4 BL(18)5 Mean6 Std. Dev.7 < −1.65 > 1.65 < −1.65 > 1.65
OECD
Australia 0.314 3.397 0.958 6.105 0.120 0.943 2.900 5.700 −2.586 2.306
Austria 0.369 0.673 0.961 13.517 −0.085 1.037 8.600 5.700 −1.975 2.499
Belgium 0.157 2.961 0.943 19.172 −0.089 0.866 8.600 2.900 −1.859 2.493
Denmark 0.650 4.399 0.932 17.510 −0.077 0.903 11.400 2.900 −1.915 2.576
France 0.068 3.557 0.950 17.642 −0.063 0.969 8.600 2.900 −2.140 2.852
Germany 0.096 4.453 0.937 18.064 −0.085 0.872 5.700 2.900 −1.821 2.703
Greece 0.098 2.259 0.940 15.678 −0.154 0.943 11.400 2.900 −1.971 2.658
Holland 0.067 4.567 0.939 18.360 −0.086 0.865 5.700 2.900 −1.834 2.671
Italy 0.480 0.019 0.984 7.661 0.101 0.763 0 2.900 0 1.853
New Zealand 1.746 7.829 0.963 8.808 0.068 1.075 2.900 2.900 −2.739 3.633
Portugal 1.747 0.533 0.947 21.201 −0.062 0.889 11.400 2.900 −1.909 2.188
Spain 6.995 1.680 0.935 14.062 −0.044 0.957 8.600 2.900 −2.293 1.845
Turkey 30.566 118.749 0.865 2.408 −0.761 1.162 11.400 0 −2.944 0
UK 7.035 2.762 0.936 11.711 −0.137 0.955 8.600 2.900 −2.516 1.811
Switzerland 0.009 0.001 0.992 6.376 −0.001 0.995 2.900 5.700 −2.415 2.110
Fiji 4.073 6.471 0.965 6.752 −0.129 0.868 2.900 2.900 −3.102 1.737
Hong Kong 5.360 29.084 0.906 12.522 0.032 1.001 5.700 5.700 −2.233 2.726
Reunion Island 0.068 3.558 0.950 17.641 −0.063 0.969 8.600 2.900 −2.140 2.853
Ivory Coast 0.068 3.564 0.950 17.643 −0.064 0.970 8.600 2.900 −2.144 2.857
Uganda 40.815 80.115 0.767 9.629 −0.203 1.399 8.600 2.900 −4.092 1.953
Others
China 80.314 567.012 0.557 5.268 0.107 1.521 2.900 2.900 −3.616 8.092
Czech Repub 0.167 12.516 0.937 2.761 −0.108 0.824 5.700 2.900 −2.088 2.619
Hungary 1.961 0.006 0.984 8.054 −0.342 0.741 5.700 0 −2.135 0
India 5.633 3.622 0.950 9.683 −0.462 1.336 17.100 5.700 −2.715 1.980
Romania 89.973 452.501 0.726 5.187 −1.249 1.721 14.300 0 −4.078 0
Russia 0.248 2.819 0.959 5.061 −0.120 0.369 0 0 0 0
Alan Laubsch
Morgan Guaranty Trust Company
Risk Management Advisory
(1-212) 648-8369
laubsch_alan@jpmorgan.com

In the RiskMetrics—Technical Document, we outlined a single-index equity VaR approach to estimate the systematic market risk of equity portfolios. In this paper, we discuss the principal variables influencing the process of portfolio diversification, and suggest an approach to quantifying expected tracking error to market indices.1

Since RiskMetrics does not publish volatility estimates for the universe of international stocks, equity positions are mapped to their respective local indices. This methodology “maps” the return of a stock to the return of a stock (market) index in order to attempt to forecast the correlation structure between securities. Let the return of a stock, $R_S$, be defined as
[2]  $R_S = \beta_S R_M + \alpha_S + \varepsilon_S$

where

[3]  $\sigma^2_{R_S} = \beta_S^2 \cdot \sigma^2_{R_M} + \sigma^2_{\varepsilon_S}$

Since the firm-specific component can be diversified away by increasing the number of different equities that comprise a given portfolio, the market risk of the stock can be expressed as a function of the stock index:

[4]  $\sigma_{R_S} = \beta_S \cdot \sigma_{R_M}$
1 This paper is an addendum to RiskMetrics—Technical Document. (3rd ed.) New York, May 1995, “Section C: Mapping to describe positions,” pp. 107–156.
2 A number of equity analytics firms provide estimates of stock betas across a large number of markets.
where

[6]  $\mathrm{VaR}_p = 1.65\,\sigma_{R_M} \cdot \sum_{i=1}^{N} MV_{S_i} \cdot \beta_{S_i}$
Chart 1 shows the effect of diversification for U.S. equities, based on a study of monthly volatilities for all stocks listed on the New York Stock Exchange.3 Total risk rapidly declines and approaches the index (or fully diversified) volatility level.
Chart 1
Portfolio diversification effect
[chart omitted: portfolio standard deviation versus number of assets in portfolio, 1 to ∞]
3 Edwin J. Elton and Martin J. Gruber. Modern Portfolio Theory and Investment Analysis. (4th ed.) New York: John Wiley and Sons, Inc., 1991. p. 33.
In Modern Portfolio Theory and Investment Analysis, Elton and Gruber derive a formula that illustrates the process of diversification with respect to the number of different stocks in a portfolio.

[7]  $\sigma_P^2 = \sum_{J=1}^{N} X_J^2\,\sigma_J^2 + \sum_{J=1}^{N}\sum_{\substack{K=1 \\ K \neq J}}^{N} X_J X_K\,\sigma_{JK}$
where
$X_J$ = proportion of stock J
$\sigma_J^2$ = variance of returns for stock J
$\sigma_{JK}$ = covariance of returns between stocks J and K
For an equally weighted portfolio of N assets (i.e., the proportion held in each security is $X_J = 1/N$), the formula for portfolio variance becomes

$\sigma_P^2 = \frac{1}{N}\,\bar{\sigma}_J^2 + \frac{N-1}{N}\,\bar{\sigma}_{JK}$
where
$\sigma_P^2$ = stock portfolio variance of returns
$N$ = number of equities in the portfolio
$\bar{\sigma}_J^2$ = average variance of returns for individual securities
$\bar{\sigma}_{JK}$ = average covariance of returns ($\approx$ diversified index variance)
To better illustrate the process of diversification, we can rearrange the terms in the following manner:

$\sigma_P^2 = \frac{1}{N}\left(\bar{\sigma}_J^2 - \bar{\sigma}_{JK}\right) + \bar{\sigma}_{JK}$
The first term of the equation (1/N times the difference between the average variance of individual securities and the average covariance) corresponds to the residual firm-specific risk of a portfolio. The second term (the average covariance) represents the diversified market risk component. Now we can see that as the number of stocks in a portfolio increases, the firm-specific component declines to zero, and we are left only with undiversifiable market risk. Therefore, the variance of a broad market index, such as the S&P 500, should approximate the average covariance of stock returns ($\bar{\sigma}_{JK} \approx \sigma^2_{R_M}$). Substituting the market index variance for the average covariance yields

[8]  $\sigma_P^2 = \frac{1}{N}\left(\bar{\sigma}_J^2 - \sigma^2_{R_M}\right) + \sigma^2_{R_M}$
Applications
Elton and Gruber's derivation clarifies the process of diversification and underlines the key variables of portfolio risk: (a) average variance, (b) average covariance, and (c) the number of elements in a portfolio. Using these key variables, we can compare the effect of diversification for different equity markets. For example, Elton and Gruber show that significantly more risk is diversifiable in the Netherlands or Belgium (76% and 80%, respectively) than in the more correlated equity markets of Germany and Switzerland (56%). In general, significant risk reduction through diversification is possible when the average covariance of a population is small compared to the average variance.
Elton and Gruber's derivation could be applied to estimate the tracking error to a broad market index when a portfolio is not fully diversified. For example, we could calculate a Diversification Scaling Factor (i.e., the ratio of expected total risk to systematic risk) to estimate the incremental risk given the number of elements within a portfolio.
[9]  $\text{Diversification Scaling Factor} = \frac{\sigma_P}{\sigma_{R_M}} = \sqrt{\frac{\bar{\sigma}_J^2 - \sigma^2_{R_M}}{N\,\sigma^2_{R_M}} + 1}$
Using this formula, RiskMetrics users could adjust their equity VaR estimate upward to reflect firm-specific risk.
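A sketch of this adjustment, reading the scaling factor as the ratio of portfolio volatility to index volatility (an interpretation on our part, which reproduces the 155% figure of Example 1 below):

```python
import math

def diversification_scaling_factor(avg_stock_var, index_var, n):
    """Expected portfolio volatility relative to index volatility for an
    equally weighted portfolio of n randomly selected stocks."""
    return math.sqrt((avg_stock_var - index_var) / (n * index_var) + 1.0)

# Example 1 parameters: average stock sigma 8.9%, index sigma 3.46%, 4 stocks
factor = diversification_scaling_factor(8.9 ** 2, 3.46 ** 2, 4)   # ~1.55
```

As n grows, the factor declines toward 1, i.e. the portfolio approaches the fully diversified index volatility.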
The potential applications of this technique are broad. For example, the diversification scaling factor could be used as a “back of the envelope” estimate of how much residual risk to expect in a stock portfolio, given the number of different stocks held. The advantage of Elton and Gruber's derivation lies in its simplicity. Using basic variables (the number of securities in a portfolio, and the proportion of firm-specific to diversified risk), one can get an estimate of index tracking error.
Practical considerations
To estimate the average volatility of stocks within an index, one can take either an evenly weighted or a market-cap-weighted volatility of each component security.4 Depending on client demand and resource availability, J.P. Morgan could potentially include this volatility estimate in future releases of the RiskMetrics data set. Its integration into the RiskMetrics data set would be relatively straightforward because the overall correlation matrix would be unaffected (we assume that firm-specific risk is independent).
4 Component securities of broad market indices are available from a number of sources, for example, Bloomberg or
BARRA.
Example 1
Consider a portfolio consisting of four U.S. stocks with market values of USD 25MM each, a one-month standard deviation for the S&P 500 Index of 3.46%, and an average standard deviation of stocks within the S&P 500 of 8.9%.
Example summary
Average equity volatility (1.65σ) 14.69%
Diversified index volatility (1.65σ) 5.72%
Number of securities 4
$\mathrm{VaR}_p = 1.65\,\sigma_{R_M} \cdot \sum_{i=1}^{N} MV_{S_i} \cdot \beta_{S_i}$
$= 1.65\,\sigma_{R_{S\&P500}}\left[MV_A\,\beta_A + MV_B\,\beta_B + MV_C\,\beta_C + MV_D\,\beta_D\right]$
$= \text{USD } 5.43\text{MM}$
$\frac{\sigma_P}{\sigma_{R_{S\&P500}}} = \sqrt{\frac{\bar{\sigma}_J^2 - \sigma^2_{R_{S\&P500}}}{N\,\sigma^2_{R_{S\&P500}}} + 1}$
$= \sqrt{\frac{(8.9\%)^2 - (3.46\%)^2}{4\,(3.46\%)^2} + 1}$
$= 155\%$
Finally, adjust the systematic risk to reflect the expected total risk in the portfolio:
Adjusted Equity VaR_p = Diversification scaling factor × Total systematic equity VaR
= 155% × USD 5.43MM
= USD 8.41MM
Note that Elton and Gruber's derivation assumes that stocks are randomly selected from a population and that portfolios are evenly weighted. This technique becomes less accurate for asymmetric portfolios, or if there are significant concentrations. For portfolios with significant industry concentrations, one could apply a subindex that more closely reflects the portfolio composition (for example, an oil stock index or a bank stock index). Alternatively, one could depart from a one-factor CAPM approach to a multi-factor approach for equity VaR.
$\mathrm{VaR} = \sqrt{(\text{market risk})^2 + (\text{residual risk})^2}$

where

$\text{market risk} = 1.65\,\sigma_{R_M} \cdot \sum_{i=1}^{N} MV_{S_i} \cdot \beta_{S_i}$

$\text{residual risk} = 1.65\,\sqrt{\sum_{i=1}^{N} MV_{S_i}^2\left(\bar{\sigma}_J^2 - \beta_{S_i}^2\,\sigma^2_{R_M}\right)}$

Market risk for individual stocks is aggregated linearly (correlation = 1), while residual risk is aggregated assuming independence (i.e., square root of the sum of the squares).
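This aggregation rule can be sketched as follows (a minimal illustration; volatilities in percent, market values in USD MM):

```python
import math

def equity_var(positions, index_sigma, avg_stock_sigma, z=1.65):
    """Combine systematic and residual equity risk: market risk adds
    linearly across stocks (correlation = 1); residual risk adds in
    quadrature (independence). positions = [(market value, beta), ...]."""
    market = z * index_sigma / 100.0 * sum(mv * b for mv, b in positions)
    resid_var = sum(mv ** 2 * (avg_stock_sigma ** 2 - b ** 2 * index_sigma ** 2)
                    for mv, b in positions)
    residual = z * math.sqrt(resid_var) / 100.0
    return math.sqrt(market ** 2 + residual ** 2)
```

With the Example 2 positions, `equity_var([(10, 0.5), (20, 1.5), (30, 1.0), (40, 0.8)], 3.46, 8.9)` returns approximately USD 9.28MM.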
Example 2
Consider the same parameters outlined in Example 1, except that we hold different proportions of the
same stocks in this portfolio.
Example summary
Average equity volatility (1.65σ) 14.69%
Diversified index volatility (1.65σ) 5.72%
Number of securities 4
$\text{Systematic VaR}_p = 1.65\,\sigma_{R_M} \cdot \sum_{i=1}^{N} MV_{S_i} \cdot \beta_{S_i}$
$= 1.65\,\sigma_{R_{S\&P500}}\left[MV_A\,\beta_A + MV_B\,\beta_B + MV_C\,\beta_C + MV_D\,\beta_D\right]$
$= \text{USD } 5.54\text{MM}$
$\text{Total residual risk} = 1.65\,\sqrt{\sum_{i=1}^{N} MV_{S_i}^2\left(\bar{\sigma}_J^2 - \beta_{S_i}^2\,\sigma^2_{R_M}\right)}$

$= 1.65\,\sqrt{MV_A^2\left(\bar{\sigma}_J^2 - \beta_A^2\,\sigma^2_{R_M}\right) + MV_B^2\left(\bar{\sigma}_J^2 - \beta_B^2\,\sigma^2_{R_M}\right) + MV_C^2\left(\bar{\sigma}_J^2 - \beta_C^2\,\sigma^2_{R_M}\right) + MV_D^2\left(\bar{\sigma}_J^2 - \beta_D^2\,\sigma^2_{R_M}\right)}$

$= 1.65\,\sqrt{10^2\left(8.9^2 - (0.5)^2(3.46)^2\right) + 20^2\left(8.9^2 - (1.5)^2(3.46)^2\right) + 30^2\left(8.9^2 - (1)^2(3.46)^2\right) + 40^2\left(8.9^2 - (0.8)^2(3.46)^2\right)}$

$= \text{USD } 7.44\text{MM}$
$\text{Total equity VaR}_p = \sqrt{(\text{market risk})^2 + (\text{total residual risk})^2}$
$= \sqrt{(5.54)^2 + (7.44)^2}$
$= \text{USD } 9.28\text{MM}$
Multi-Market Portfolio
VaR for a portfolio consisting of equities from several different markets follows the same methodology of aggregating market and residual risk. The difference is that the correlation between different market indices is incorporated, as well as FX risk.
• A look at two methodologies that use a basic delta-gamma parametric VaR precept but achieve results similar to simulation.
• Exploring alternative volatility forecasting methods for the standard RiskMetrics monthly horizon.
• How accurate are the risk estimates in portfolios that contain Treasury bills proxied by LIBOR data?
• A solution to the standard cash flow mapping algorithm, which sometimes leads to imaginary roots.
RiskMetrics Directory: Available exclusively online, a list of consulting practices and software products that incorporate the RiskMetrics methodology and data sets.

RiskMetrics Monitor: A quarterly publication that discusses broad market risk management issues and statistical questions as well as new software products built by third-party vendors to support RiskMetrics.

RiskMetrics data sets: Two sets of daily estimates of future volatilities and correlations of approximately 420 rates and prices, with each data set totaling 88,000+ data points. One set is for computing short-term trading risks, the other for medium-term investment risks. The data sets currently cover foreign exchange, government bond, swap, and equity markets in up to 22 currencies. Eleven commodities are also included. A RiskMetrics Regulatory data set, which incorporates the latest recommendations from the Basel Committee on the use of internal models to measure market risk, is now available.

Bond Index Cash Flow Maps: A monthly insert into the Government Bond Index Monitor outlining synthetic cash flow maps of J.P. Morgan's bond indices.

Trouble accessing the Internet? If you encounter any difficulties in either accessing the J.P. Morgan home page at http://www.jpmorgan.com or in downloading the RiskMetrics data files, you can call 1-800-JPM-INET in the United States.

Worldwide RiskMetrics contacts

New York: Jacques Longerstaey, (1-212) 648-4936, longerstaey_j@jpmorgan.com
Chicago: Michael Moore, (1-312) 541-3511, moore_mike@jpmorgan.com
Mexico: Beatrice Sibblies, (52-5) 540-9554, sibblies_beatrice@jpmorgan.com
San Francisco: Paul Schoffelen, (1-415) 954-3240, schoffelen_paul@jpmorgan.com
Toronto: Dawn Desjardins, (1-416) 981-9264, desjardins_dawn@jpmorgan.com

Europe
London: Benny Cheung, (44-71) 325-4210, cheung_benny@jpmorgan.com
Brussels: Isabelle Vanderstricht, (32-2) 508-8060, vanderstricht_i@jpmorgan.com
Paris: Ciaran O'Hagan, (33-1) 4015-4058, ohagan_c@jpmorgan.com
Frankfurt: Robert Bierich, (49-69) 712-4331, bierich_r@jpmorgan.com
Milan: Roberto Fumagalli, (39-2) 774-4230, fumagalli_r@jpmorgan.com
Madrid: Jose Luis Albert, (34-1) 577-1722, albert_jl@jpmorgan.com
Zurich: Viktor Tschirky, (41-1) 206-8686, tschirky_v@jpmorgan.com

Asia
Singapore: Michael Wilson, (65) 326-9901, wilson_mike@jpmorgan.com
RiskMetrics is based on, but differs significantly from, the market risk management systems developed by J.P. Morgan for its own use. J.P. Morgan does not warrant any results
obtained from use of the RiskMetrics data, methodology, documentation or any information derived from the data (collectively the “Data”) and does not guarantee its sequence,
timeliness, accuracy, completeness or continued availability. The Data is calculated on the basis of historical observations and should not be relied upon to predict future market
movements. The Data is meant to be used with systems developed by third parties. J.P. Morgan does not guarantee the accuracy or quality of such systems.
Additional information is available upon request. Information herein is believed to be reliable, but J.P. Morgan does not warrant its completeness or accuracy. Opinions and estimates constitute our judgement and are subject to change without notice. Past performance is not indicative of future results. This material is not intended as an offer or solicitation for the purchase or sale of any financial instrument. J.P. Morgan may hold a position or act as market maker in the financial instruments of any issuer discussed herein or act as advisor or lender to such issuer. Morgan Guaranty Trust Company is a member of FDIC and SFA. Copyright 1996 J.P. Morgan & Co. Incorporated. Clients should contact analysts at, and execute transactions through, a J.P. Morgan entity in their home jurisdiction unless governing law permits otherwise.