
RiskMetrics Monitor™
J.P. Morgan/Reuters
Second quarter 1996

RiskMetrics News (page 2)


New York, August 13, 1996

• J.P. Morgan and Reuters will collaborate to develop a more powerful version of RiskMetrics. This Monitor is the first edition under the new collaboration between the two firms.
• This issue also introduces FourFifteen, an Excel-based VaR calculator and report generator developed and distributed by J.P. Morgan.
• We list some new additions to the RiskMetrics software vendor list.
• We review changes to the government yield curve compounding method used for volatility estimation.

Morgan Guaranty Trust Company
Research, Development, and Applications
Risk Management Advisory
Jacques Longerstaey
(1-212) 648-4936
riskmetrics@jpmorgan.com

Reuters Ltd
International Marketing
Martin Spencer
(44-171) 542-3260
martin.spencer@reuters.com

This quarter, we address the following subjects:

• An improved methodology for measuring VaR (page 7)

Most Value-at-Risk (VaR) methodologies show their limitations when dealing with events that are relatively infrequent. These shortcomings appear when risk managers select confidence intervals above 99%, and this applies not only to variance-covariance VaR techniques such as RiskMetrics, but also to historical simulation.

In the case of RiskMetrics, the lack of coverage in the tail of the distributions has been attributed to the assumption that returns follow a conditional normal distribution. Since the distributions of many observed financial return series have tails that are “fatter” than those implied by conditional normality, risk managers may underestimate the effective level of risk. The purpose of this article is to describe a RiskMetrics VaR methodology that allows for a more realistic model of financial return tail distributions.

• A Value-at-Risk analysis of currency exposures (page 26)

In this article, we compute the VaR of a portfolio of foreign exchange flows that consist of exposures to OECD and emerging market currencies, most of which are not yet covered by the RiskMetrics data sets. In doing so, we underscore the limitations of standard VaR practices when underlying market return distributions deviate significantly from normality.

• Estimating index tracking error for equity portfolios (page 34)

In our RiskMetrics—Technical Document, we outlined a single-index equity VaR approach to estimate the systematic market risk of equity portfolios. In this article, we discuss the principal variables influencing the process of portfolio diversification, and suggest an approach to quantifying expected tracking error to market indices.
RiskMetrics Monitor
Second quarter 1996
page 2

RiskMetrics News

Jacques Longerstaey J.P. Morgan and Reuters team up on RiskMetrics


Morgan Guaranty Trust Company
Risk Management Advisory J.P. Morgan and Reuters have recently announced their decision to collaborate in the development of a
(1-212) 648-4936 more powerful and sophisticated version of RiskMetrics.
riskmetrics@jpmorgan.com
Since the launch of RiskMetrics in October 1994, we have received numerous requests to add new
Martin Spencer
products, instruments, and markets to the daily volatility and correlation data sets. We have also per-
Reuters Ltd ceived the need in the market for a more flexible VaR data tool than the standard matrices that are cur-
International Marketing rently distributed over the Internet.
(44-171) 542-3260
martin.spencer@reuters.com The new partnership with Reuters, which will be based on the precept that both firms will focus on their
respective strengths, will help us achieve these things:

Methodology
J.P. Morgan will continue to develop the RiskMetrics set of VaR methodologies and publish them in
the quarterly RiskMetrics Monitor and in the annual RiskMetrics—Technical Document.

RiskMetrics data sets


Reuters will take over the responsibility for data sourcing as well as production and delivery of the risk
data sets.

The current RiskMetrics data sets will continue to be available on the Internet and will be further im-
proved as a benchmark tool designed to broaden the understanding of the principles of market risk mea-
surement.

When J.P. Morgan first launched RiskMetrics in October 1994, the objective was to go for broad market coverage initially, and follow up with more granularity in terms of the markets and instruments covered. This, over time, would reduce the need for proxies and would provide additional data to measure more accurately the risk associated with non-linear instruments.

The partnership will address these new markets and products and will also introduce a new customizable service, which will be available over the ReutersWeb service. The customizable RiskMetrics approach will give risk managers the ability to scale data to meet the needs of their individual trading profiles. Its capabilities will range from providing customized covariance matrices needed to run VaR calculations, to supplying data for historical simulation and stress-testing scenarios.

More details on these plans will be discussed in later editions of the RiskMetrics Monitor.

Systems
Both J.P. Morgan and Reuters, through its Sailfish subsidiary, have developed client-site RiskMetrics VaR applications. These products, together with the expanding suite of third-party applications, will continue to provide RiskMetrics implementations.
RiskMetrics Monitor
Second quarter 1996
page 3

RiskMetrics News (continued)

J.P. Morgan launches FourFifteen


On April 15th, 1996, J.P. Morgan launched FourFifteen, an Excel-based VaR calculator and report generator that uses the RiskMetrics methodology and data sets.

FourFifteen is a practical tool for measuring the market risk in complex portfolios of financial instruments, and runs on both Windows and Macintosh platforms.

J.P. Morgan designed FourFifteen to use the unique database of volatilities and correlations, which is updated daily and needed to generate key market risk information. FourFifteen performs VaR analyses of portfolios containing financial products including fixed income, foreign exchange, equities, and their derivatives in 23 currencies. Users can specify the base currency, horizon, and risk threshold. FourFifteen also provides an array of standard reports to present risk information in a clear and useful manner. Users with more specific reporting needs can create customized reports to aid in the identification of their key sources of risk.

Why FourFifteen? FourFifteen is named after J.P. Morgan’s market risk report produced at 4:15
p.m. each day. The “4:15 Report”, a single sheet of paper, summarizes the daily earnings at risk for
J.P. Morgan worldwide.

FourFifteen allows for a more informed perspective on a wide range of risk management issues in
the areas of trading management, benchmark performance evaluation, asset/liability management, re-
source allocation, and regulatory reporting. It is a useful aid to risk analysis and decision-making at
micro, macro and strategic levels. The system's inherent flexibility delivers risk analysis tailored to var-
ious users, from senior executives to financial analysts.

For more information on FourFifteen, contact your local J.P. Morgan representative.

Additions to the list of RiskMetrics systems developers

• microcomp GmbH
Robert-Heuser-Strasse 15, 50968 Koeln, Germany
Peter Jumpertz, (49-221) 937 08020, Fax (49-221) 937 08015,
100276.3233@compuserve.com

ValueRisk is a front- and middle-office tool that can be easily connected to position-keeping systems and other back-office applications. It helps to manage portfolios of all kinds of financial products, including fixed income, equities, commodities, foreign exchange, and derivatives. The system is targeted at the individual trader, fund manager, or treasurer, as well as at the manager of a trading or treasury department.

ValueRisk introduces “Risk Percentage,” a common measurement for risk in different kinds of financial markets. Risk is measured as a percentage relative to a fully leveraged position. It is based on the accepted returns-on-capital concept at mark-to-market prices. The concept is a basis for clear and communicable capital allocation decisions. It helps to implement well-defined diversification strategies, and limits the loss of a risk-taking unit. The system also serves as the basis for performance measurement.
RiskMetrics Monitor
Second quarter 1996
page 4

RiskMetrics News (continued)

ValueRisk monitors price series and cross-correlations of virtually all traded financial products. The system calculates volatility and determines the total risk level as well as the stop-out probability based on historical cross-correlations of all products for various portfolios. Accounting for unprecedented market dynamics, the system offers quick and powerful simulation capabilities and stress-testing functions. It is fully compliant with the latest BIS (Basel Committee) and BAK (Bundesaufsichtsamt für das Kreditwesen) risk-controlling requirements.

• Midas-Kapiti International
45 Broadway, New York, N.Y. 10006
Abby Friedman (1-212) 898 9500, FAX (1 212) 898 9510

The TMark trader analytics system provides comprehensive high-speed pricing, portfolio analysis, and hedging facilities to support trading of off-balance-sheet instruments. The recently introduced Release 2.4 supports the J.P. Morgan RiskMetrics VaR methodology. Specific features of Release 2.4 include multi-user processing, deal ticket generation, an increased number of currencies and rates, and revaluation of cash flows from basis swaps. TMark is a PC-based system and runs under Microsoft Windows.

Midas-Kapiti International is one of the world's leading suppliers of software solutions to banks, financial institutions, and corporations. Applications software for derivatives, market data integration, international banking, trade finance, and risk management has been our focus for more than 20 years. With over 24 international offices serving 1,000 customers in 90 countries, we have an unparalleled distribution network which allows us to offer sales and support to our customers wherever their offices may be located.

• Value & Risk GmbH


Quellenweg 5a, 61348 Bad Homburg, Germany
Tako Ossentjuk (49-6172) 685051, FAX (49-6172) 685053, 100714.3446@compuserve.com

Value & Risk calculates Value-at-Risk with several selectable methodologies, such as covariance methodology, quantile simulation, user-defined/stress-test simulation, worst case, principal component analysis, Monte Carlo, historical simulation, and RiskMetrics. It supports various distributions and manifold position-equivalent calculations as well as stochastic and non-stochastic hedge proposals. It also offers substantial drill-down and reporting functionality, including numerous graphics.

Value & Risk co-operates with SAS Institute and is able to implement complete solutions, from data integration to decision support, for traders, managers, and controllers in a multi-user environment.

Besides supplying the above-mentioned methodologies, Value & Risk offers the standard method for capital requirements according to the Basel Committee and CAD. All markets and their instruments are covered, including a variety of exotic options.
RiskMetrics Monitor
Second quarter 1996
page 5

RiskMetrics News (continued)

For the current list of software vendors that support RiskMetrics, please refer to the J.P. Morgan Web
page at:

http://www.jpmorgan.com/MarketDataInd/RiskMetrics/Third_party_directory.html

Change to the RiskMetrics yield calculation


Until now, our price volatility for government zeros has been estimated by using discrete compounding (i.e., price = 1/(1 + r)^t, where r = yield rate, and t = term) to first establish the current price for each zero yield interval.

However, continuous compounding, i.e., price = exp(−rt), should have been used instead, given the yield calculation method of the term structure model published in RiskMetrics—Technical Document.

We will switch to continuous compounding for government zeros beginning June 17th.
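The switch is easy to illustrate by pricing the same zero under both conventions; a minimal sketch in Python (the yields and terms below are illustrative only, not RiskMetrics data):

```python
from math import exp

def zero_price_discrete(r, t):
    """Discrete compounding: price = 1 / (1 + r)**t."""
    return 1.0 / (1.0 + r) ** t

def zero_price_continuous(r, t):
    """Continuous compounding: price = exp(-r * t)."""
    return exp(-r * t)

# Continuous compounding gives the lower present value (for r > 0), and
# the proportional reduction grows with both yield and maturity, as the
# text notes.
for r, t in [(0.02, 2), (0.02, 30), (0.10, 2), (0.10, 30)]:
    pd_ = zero_price_discrete(r, t)
    pc_ = zero_price_continuous(r, t)
    print(f"r={r:.0%} t={t:2d}y  discrete={pd_:.4f}  continuous={pc_:.4f}")
```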

The effect of this change on estimated VaR varies according to maturity interval and a currency's yield
levels. Chart 1 shows that the average price volatility for most currencies will increase an average of 5
to 8%. Higher yielding currencies, for example, Lira or Peseta, will increase an average of 10 to 12%.
On the other hand, the volatility for low yielding currencies, for example, Yen, will average an increase
of 2 to 5%.

The increase in average price volatility, however, does not translate into a direct, corresponding increase in VaR. The change in VaR is also a function of the time to maturity. The reduction in a cash flow's present value using continuous versus discrete compounding is greater the longer the time to maturity. Thus, a nearer government interval, for example, 2 years, will have more of an increase in VaR than a farther one. This is because the reduction in present value has a dampening effect vis-a-vis the increase in price volatility. This is particularly noticeable when looking at the overall effect on VaR of high versus low yielding currencies. Chart 2 on page 6 shows the change in VaR for the 30-year Lira and the 20-year Yen. In fact, the reduction in present value for the 30-year Lira more than offsets the increase in its price volatility, and the revised VaR estimate is actually less than the current estimate. Chart 2 also compares the change in present value of the 20-year Yen versus the 30-year Lira.

Chart 1
Changes in average volatility estimates
RiskMetrics data, Jan-92 to Feb-96

[Grouped bar chart, 0% to 15% scale: average volatility increases for USD, JPY, FRF, NZD, LIR, DEM, ESP, DKK, and CAD at the 2-, 10-, 20-, and 30-year maturities.]


RiskMetrics Monitor
Second quarter 1996
page 6

RiskMetrics News (continued)

Chart 2
Change in average estimated VaR
RiskMetrics data, Jan-92 to Feb-96

[Grouped bar chart, −10% to +15% scale: average VaR changes for USD, JPY, FRF, NZD, LIR, DEM, ESP, DKK, and CAD at the 2-, 10-, 20-, and 30-year maturities.]

Revision to series used in EM algorithm for swap and government rates


When applying the Expectation Maximization (EM) algorithm to fill in missing data, we predefine a grouping of instruments related to the instrument type. Until the middle of May, when we filled in missing swap or government data, we used only the swap or government series, respectively, for each currency. This, however, led to instances of discontinuity in the swap spread over governments. On the day EM was applied, the filled-in swap yields would hardly move even though the corresponding government yields might have changed +/− 20 basis points. As a result, a previously high correlation between swaps and governments, particularly in the same currency, would decline for a period before recovering. To remedy this, we now include all the swap and government series for each currency when using the EM algorithm for either missing swap or government yields.
RiskMetrics Monitor
Second quarter 1996
page 7

An improved methodology for measuring VaR

Peter Zangari
Morgan Guaranty Trust Company
Risk Management Advisory
(1-212) 648-8641
zangari_p@jpmorgan.com

Since its release in October 1994, RiskMetrics has inspired an important discussion on VaR methodologies. A focal point of this discussion has been the RiskMetrics assumption that returns follow a conditional normal distribution. Since the distributions of many observed financial return series have tails that are “fatter” than those implied by conditional normality, risk managers may underestimate the risk of their positions if they assume returns follow a conditional normal distribution. In other words, large financial returns are observed to occur more frequently than predicted by the conditional normal distribution. Therefore, it is important to be able to modify the current RiskMetrics model to account for the possibility of such large returns.

The purpose of this article is to describe a RiskMetrics VaR methodology that allows for a more realistic model of financial return tail distributions. The article is organized as follows: the first section reviews the fundamental assumptions behind the current RiskMetrics calculations, in particular, the assumption that returns follow a conditional normal distribution. The second section (“A new VaR methodology” on page 10) introduces a simple model that allows us to incorporate fatter-tailed distributions. The third section (“A statistical model for estimating return distributions and probabilities” on page 18) shows how we estimate the unknown parameters of this model so that they can be used in conjunction with current RiskMetrics volatilities and correlations. The fourth section (on page 23) is the conclusion to this article.

A review of the implications of the current RiskMetrics assumptions about return distributions
In a normal market environment, RiskMetrics VaR forecasts are given by the bands of a confidence interval that is symmetric around zero. These bands represent the largest expected change in the value of a portfolio with a specified level of probability. For example, the VaR bands associated with a 90% confidence interval are given by {−1.65σp, +1.65σp}, where −/+1.65 are the 5th/95th percentiles of the standardized normal distribution, and σp is the portfolio standard deviation, which may depend on correlations between returns on individual instruments. The scale factors −/+1.65 result from the assumption that standardized returns (i.e., a mean-centered return divided by its standard deviation) are normally distributed. When this is true we expect 5% of the standardized observations to lie below −1.65 and 5% to lie above +1.65. Often, whether complying with regulatory requirements or internal policy, risk managers compute VaR at different probability levels such as 95% and 98%. Under the assumption that returns are conditionally normal, the scale factors associated with these confidence intervals are −/+1.96 and −/+2.33, respectively. It is our experience that while RiskMetrics VaR estimates provide reasonable results for the 90% confidence interval, the methodology does not do as well at the 95% and 98% confidence levels.1 Therefore, our goal is to extend the RiskMetrics model to provide better VaR estimates at these larger confidence levels.
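The scale factors quoted above follow directly from standard normal percentiles; a minimal sketch using Python's standard library (the function name is ours):

```python
from statistics import NormalDist

def var_band(sigma_p, confidence):
    """Symmetric VaR band {-z*sigma_p, +z*sigma_p} under conditional
    normality, where confidence is the two-sided level (e.g., 0.90)."""
    tail = (1.0 - confidence) / 2.0        # probability mass in each tail
    z = NormalDist().inv_cdf(1.0 - tail)   # standard normal percentile
    return (-z * sigma_p, z * sigma_p)

# Prints z of about 1.645, 1.960, and 2.326, which the text rounds to
# 1.65, 1.96, and 2.33.
for conf in (0.90, 0.95, 0.98):
    print(conf, round(var_band(1.0, conf)[1], 3))
```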

Before we can build on the current RiskMetrics methodology, it is important to understand exactly what RiskMetrics assumes about the distribution of financial returns. RiskMetrics assumes that returns follow a conditional normal distribution. This means that while returns themselves are not normal, returns divided by their respective forecasted standard deviations are normally distributed with mean 0 and variance 1. For example, let rt denote the time t return, i.e., the return on an asset over a one-day period. Further, let σt denote the forecast of the standard deviation of returns for time t based on historical data. It then follows from our assumptions that while rt is not necessarily normal, the standardized return, rt/σt, is normally distributed. The distinction between these two types of returns is that, unlike conditional (standardized) returns, unconditional returns have fat tails. Moreover, time-varying, persistent volatility, which is a feature of the conditional assumption, is a real phenomenon.
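This standardization can be sketched as follows, using the RiskMetrics exponentially weighted volatility forecast (daily decay factor 0.94, per the RiskMetrics—Technical Document); the function names and the seeding of the recursion are our choices:

```python
def ewma_sigmas(returns, decay=0.94):
    """One-day-ahead volatility forecasts: at each date t the forecast
    uses only returns observed before t, via
        sigma_t^2 = decay * sigma_{t-1}^2 + (1 - decay) * r_{t-1}^2.
    The recursion is seeded here with the first squared return (assumed
    nonzero), a simplification for illustration."""
    var = returns[0] ** 2
    sigmas = []
    for r in returns:
        sigmas.append(var ** 0.5)   # forecast made before observing r
        var = decay * var + (1.0 - decay) * r * r
    return sigmas

def standardize(returns, decay=0.94):
    """Standardized returns r_t / sigma_t, as plotted in Chart 2."""
    return [r / s for r, s in zip(returns, ewma_sigmas(returns, decay))]
```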

1 See Darryl Hendricks, “Evaluation of Value-at-Risk Models Using Historical Data,” FRBNY Economic Policy Review, April 1996.
RiskMetrics Monitor
Second quarter 1996
page 8

An improved methodology for measuring VaR (continued)

Extending the RiskMetrics VaR framework to include the large probability of large returns begins
with an understanding of the dynamics of market returns. Chart 1 illustrates such dynamics by showing
a time series of returns for the Nikkei 225 index over the period April 1990 through April 1996.

Chart 1
Nikkei index returns, r(t)
Returns (in percent) for April 1990–April 1996

[Line chart, Mar-90 to Feb-97: daily Nikkei index returns between roughly −10% and +10%, with annotated periods of high and low volatility.]

In addition to the typical feature of volatility clustering (i.e. periods of high and low volatility), Chart 1
displays large returns that appear to be inconsistent with the remainder of the data series. To show how
the observed returns, r(t), differ from their standardized counterparts, r(t)/σt, the standardized returns
for the Nikkei index are presented in Chart 2 on page 8. (To create the standard errors for scaling the
returns, we used the current RiskMetrics daily forecasting methods.)

Chart 2
Standardized returns ( r t ⁄ σ t ) of the Nikkei index
April 4, 1990–March 26, 1996

[Line chart, Mar-90 to Feb-97: standardized Nikkei returns between roughly −10 and +8, with one large negative observation labeled “Conditional event.”]

Comparing Charts 1 and 2, note the large negative standardized return (called “Conditional event” in Chart 2) and that it results in part from the low level of volatility that preceded the corresponding observed return (Chart 1). We can interpret such large conditional non-normal returns as “surprises” because immediately prior to the observed return there was a period of low return volatility. Hence, the conditional event return is unexpected.

Conversely, observed returns that appear large may no longer do so when standardized because their
values were “expected.” In other words, these large returns occur in periods of relatively high volatility.
This scenario is demonstrated in Chart 3, which shows an observed time series of spot gold contract
returns and their standardized values.

Chart 3
Observed spot gold returns (A) and standardized returns (B)
April 4, 1990–March 26, 1996

[Two-panel line chart, Mar-90 to Feb-97: (A) observed spot gold returns; (B) standardized spot gold returns.]

Chart 3 demonstrates the effect of standardization on return magnitudes. For example, observed returns (A) show more volatility clustering relative to their standardized counterparts (B).

To summarize, RiskMetrics assumes that financial returns divided by their respective volatility fore-
casts are normally distributed with mean 0 and variance 1. This assumption is crucial because it recog-
nizes that volatility changes over time.
RiskMetrics Monitor
Second quarter 1996
page 10

An improved methodology for measuring VaR (continued)

A new VaR methodology


In developing a new way to measure VaR, we must meet three criteria that relate to RiskMetrics VaR:

• First, continue using the current RiskMetrics volatilities and correlations because users have
shown a strong interest in using these estimates when computing VaR. (We believe that this
interest results from the intuitive interpretation of volatility and correlation.)

• Second, build a model in which it is fairly straightforward to aggregate the risks of individual instruments. Keeping this in mind, we must develop a model that builds on the conditional normal distribution.

• And third, set up conditions such that VaR is easily computed. Therefore, the number of new
parameters required to compute VaR must not be so large as to impede implementation.

We motivate the development of the new VaR methodology as follows: Instead of assuming that stan-
dardized returns are normally distributed with mean 0 and variance 1, we assume that the standardized
returns are generated from a mixture of two different normal distributions.

For example, suppose that we believe that on most days standardized returns are generated according to the conditional normal distribution with a zero mean and variance close to 1. However, on other days, say on days where a large return is observed, we assume that standardized returns are still normally distributed but with a different mean and variance. In fact, we would expect the variance of this latter normal distribution to be large. To be more specific, let p1 be the probability that a standardized return was generated from the normal distribution N1, where N1 is identified by its mean µ1 and variance σ1². Similarly, let p2 be the probability that a standardized return was generated from the normal distribution N2, where N2 is identified by its mean µ2 and variance σ2². Mathematically, the standardized return distribution is generated according to the following probability density function:

[1]  PDF = p1 ⋅ N1(µ1, σ1²) + p2 ⋅ N2(µ2, σ2²)

Eq. [1] is known as a normal mixture model. An interesting feature of this model is that it allows us to assign large returns a larger probability (compared to the standard normal model). Chart 4 on page 11 shows two simulated densities, one from the standard normal distribution, the other from a normal mixture model with the following parameters:

p1 = 0.98, p2 = 0.02, µ1 = µ2 = 0, σ1² = 1, and σ2² = 100
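With these parameters, the extra tail mass the mixture assigns is easy to quantify; a minimal sketch (the function name is ours):

```python
from statistics import NormalDist

def mixture_cdf(x, p1=0.98, p2=0.02, mu1=0.0, mu2=0.0, sd1=1.0, sd2=10.0):
    """CDF of the Eq. [1] normal mixture; sd1 and sd2 are standard
    deviations (variances 1 and 100, matching the parameters above)."""
    return p1 * NormalDist(mu1, sd1).cdf(x) + p2 * NormalDist(mu2, sd2).cdf(x)

# Probability of a standardized return below -4:
p_normal = NormalDist().cdf(-4.0)   # about 3.2e-05
p_mix = mixture_cdf(-4.0)           # about 6.9e-03, over 200 times larger
```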
RiskMetrics Monitor
Second quarter 1996
page 11

An improved methodology for measuring VaR (continued)

Chart 4
Standard normal and normal mixture probability density functions (PDF)

[Overlaid plot of the two PDFs over standardized returns from −5 to +4.5, y-axis 0.00 to 0.40: the “Normal” and “Mixture” densities nearly coincide near zero, with the mixture assigning visibly more mass to the tails.]

Since the normal mixture model can assign relatively larger-than-normal probabilities to big returns, we choose to model standardized returns as the sum of a normal return, nt, with mean zero and variance σn², and another normal return, βt, with mean µβ and variance σβ², that occurs each period with probability p. Note that if we set σn² = 1, then nt represents the part of the return that is modeled correctly according to RiskMetrics. It then follows that we can write a standardized return, Rt, as generated from the following model:

[2]  Rt = nt + δt βt

where δt = 1 with probability p, or δt = 0 with probability 1 − p. When δt = 1, the standardized return is normally distributed with mean µβ and variance σβ² + σn²; otherwise it is normally distributed with mean zero and variance σn². Also, we assume that δt and βt are both independent of nt. For each observed time series we can estimate µβ, σβ², p, and σn² using historical data. A description of this estimation process is provided in “A statistical model for estimating return distributions and probabilities” on page 18.
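Eq. [2] is straightforward to simulate; a sketch with illustrative parameter values (p = 2%, σβ = 10, which are not estimates from any actual series):

```python
import random

def simulate_eq2(n, p, mu_beta, sigma_beta, sigma_n=1.0, seed=1):
    """Draw n standardized returns R_t = n_t + delta_t * beta_t, where
    delta_t is Bernoulli(p) and beta_t ~ N(mu_beta, sigma_beta^2), both
    drawn independently of n_t ~ N(0, sigma_n^2)."""
    rng = random.Random(seed)
    draws = []
    for _ in range(n):
        n_t = rng.gauss(0.0, sigma_n)
        delta_t = 1 if rng.random() < p else 0
        beta_t = rng.gauss(mu_beta, sigma_beta)
        draws.append(n_t + delta_t * beta_t)
    return draws

# With p = 2%, mu_beta = 0, and sigma_beta = 10, the unconditional
# variance is sigma_n^2 + p * sigma_beta^2 = 1 + 0.02 * 100 = 3, with
# occasional very large draws producing the fat tails we are after.
sample = simulate_eq2(20000, 0.02, 0.0, 10.0)
```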

Recall that Eq. [2] on page 11 was motivated by the need to have our VaR model account for large returns, i.e., returns that occur less than 5% of the time according to the RiskMetrics model. Unfortunately, due to data limitations, it is very difficult in practice to determine the accuracy of forecasting returns that occur less than 2.5% of the time. One way to get around the situation of not having enough data to properly test Eq. [2] is to perform a Monte Carlo simulation. Under controlled settings we can simulate as much data as we need, and then conduct inference based on the simulated data. The only requirement is that the simulated data have fat tails so as to reflect observed financial return distributions. One commonly used distribution that is known to have fat tails, and that is easy to simulate, is the t-distribution.
RiskMetrics Monitor
Second quarter 1996
page 12

An improved methodology for measuring VaR (continued)

We then conduct a Monte Carlo study on single instruments to determine how accurately our new model captures fat tails. The experiment is performed in three steps. First, we simulate 10,000 observations from a t-distribution and then compute the simulated percentiles at the 0.5%, 1%, 2.5%, and 5% probability levels. Second, we estimate the parameters of Eq. [2] by using the simulated data, and determine the percentiles implied by Eq. [2]. (We refer to the percentiles generated from Eq. [2] as “mix” since its associated PDF is essentially a normal mixture model.) Third, we compare the actual percentiles generated from the t-distribution to those produced by Eq. [2] and by the standard normal model (standard RiskMetrics).

Table 1 reports the results of our study. Data in column 2 is simulated from t-distributions with 2, 4, 10, and 100 degrees of freedom. Notice that the smaller the degrees of freedom, the fatter the tails of the simulated distribution (compare columns 4 and 5).
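Steps 1 and 3 of the experiment can be sketched as follows; step 2, fitting the parameters of Eq. [2], uses the estimation procedure described later and is omitted here (function names are ours):

```python
import random
from statistics import NormalDist

def simulate_t(n, df, seed=7):
    """Student-t draws built as z / sqrt(chi2_df / df) from Gaussians."""
    rng = random.Random(seed)
    draws = []
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        chi2 = sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(df))
        draws.append(z / (chi2 / df) ** 0.5)
    return draws

def empirical_percentile(data, prob):
    return sorted(data)[int(prob * len(data))]

# Step 1: 10,000 t(4) observations and their lower-tail percentiles.
sample = simulate_t(10000, 4)
# Step 3: compare with standard normal percentiles; the normal tails are
# too thin, mirroring the negative relative errors in column 5 of Table 1.
for prob in (0.005, 0.01, 0.025, 0.05):
    print(prob, round(empirical_percentile(sample, prob), 3),
          round(NormalDist().inv_cdf(prob), 3))
```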

Table 1
Comparing percentiles of the t-distribution and the normal mixture
Simulated data from t-distributions with 2, 4, 10, and 100 degrees of freedom

Two degrees of freedom
Parameters:1  µβ = −0.1357   σβ = 6.414   σn = 1.440   p = 1.3%

(1)             (2)       (3)      (4)                    (5)
Percentile (%)  t-dist.   mix      (mix−t-dist)/t-dist    (normal−t-dist)/t-dist
0.5             −5.113    −4.453   −13%                   −50%
1               −3.832    −3.601    −6%                   −40%
2.5             −2.796    −2.930     5%                   −30%
5               −2.195    −2.428    10%                   −25%

Four degrees of freedom
Parameters:1  µβ = −0.0753   σβ = 4.394   σn = 1.148   p = 0.66%

Percentile (%)  t-dist.   mix      (mix−t-dist)/t-dist    (normal−t-dist)/t-dist
0.5             −3.130    −3.121     0%                   −17%
1               −2.719    −2.753     1%                   −14%
2.5             −2.239    −2.288     2%                   −13%
5               −2.811    −1.910   −32%                   −40%

1 Parameters are defined as follows: µβ = mean of the normal distribution, βt, describing the standardized return; σβ = standard deviation of βt; σn = standard deviation of the normal return, nt; p = probability that the standardized return is generated from the normal distribution βt.
RiskMetrics Monitor
Second quarter 1996
page 13

An improved methodology for measuring VaR (continued)

Table 1 (continued)
Comparing percentiles of the t-distribution and the normal mixture

Ten degrees of freedom
Parameters:1  µβ = −0.226   σβ = 3.397   σn = 1.029   p = 0.32%

(1)             (2)       (3)      (4)                    (5)
Percentile (%)  t-dist.   mix      (mix−t-dist)/t-dist    (normal−t-dist)/t-dist
0.5             −2.896    −2.710    −6%                   −12%
1               −2.496    −2.428    −3%                    −7%
2.5             −2.053    −2.034   −0.1%                   −5%
5               −1.677    −1.703     2%                    −2%

One hundred degrees of freedom (close to the standard normal distribution)
Parameters:1  µβ = −0.4189   σβ = 2.518   σn = 1.026   p = 0.2%

Percentile (%)  t-dist.   mix      (mix−t-dist)/t-dist    (normal−t-dist)/t-dist
0.5             −2.579    −2.626     2%                    0%
1               −2.392    −2.366    −1%                   −3%
2.5             −1.972    −1.990     1%                    0%
5               −1.648    −1.669     1%                   −1%

1 Parameters are defined as in the first panel of Table 1.

The results in Table 1 (columns 4 and 5) clearly show how the mixture model is superior to the normal model in recovering the tail percentiles of the t-distribution (notably in the cases of 2 and 4 degrees of freedom). Also, reading from the top of the table downwards, notice how the estimates of σβ, σn, and p change as the distribution becomes more normal. When there are very fat tails, there is a greater chance of observing a return from the normal distribution with the large variance (1.3% for 2 degrees of freedom vs. 0.2% with 100 degrees of freedom). Furthermore, the estimated standard deviation σβ becomes smaller as the simulated distribution becomes more and more normal.

Next, we consider the case of calculating the VaR of a portfolio of returns. Unlike the single-instrument case described above, when aggregating returns we must grapple with issues such as the correlation between the different δt's and βt's. For example, consider a portfolio that consists of two instruments with returns R1t and R2t expressed as follows:

[3]  R1t = n1t + δ1t β1t
     R2t = n2t + δ2t β2t

Let w1 and w2 denote the amounts invested in instruments 1 and 2, respectively. We can write the portfolio return, Rpt, as
RiskMetrics Monitor
Second quarter 1996
page 14

An improved methodology for measuring VaR (continued)

R pt = w 1 σ 1t R 1t + w 2 σ 2t R 2t
[4] w 1 σ 1t δ 1t β 1t + w 2 σ 2t δ 2t β 2t w 1 σ 1t n 1t + w 2 σ 2t n 2t
= +




















New portfolio component Standard RiskMetrics component

Notice that when aggregating returns we must multiply the returns by the RiskMetrics standard de-
viations ( σ 1t and σ 2t ) to preserve the proper scale.
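The decomposition in Eq. [4] can be sketched numerically. In this illustrative Python fragment the weights, volatilities, and mixture draws are hypothetical values, not figures from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inputs for two instruments: amounts invested, RiskMetrics
# volatilities, and one draw of each mixture component.
w = np.array([1.0, 1.0])                      # amounts invested, w_i
sigma = np.array([0.012, 0.009])              # RiskMetrics volatilities (assumed)
n = rng.standard_normal(2)                    # standardized normal returns n_it
delta = (rng.random(2) < 0.02).astype(float)  # event indicators delta_it
beta = rng.normal(0.0, 10.0, 2)               # event returns beta_it

new_component = np.sum(w * sigma * delta * beta)  # "new portfolio component"
standard_component = np.sum(w * sigma * n)        # "standard RiskMetrics component"
R_p = new_component + standard_component          # Eq. [4]
print(R_p)
```

On most days the event indicators are zero and the portfolio return reduces to the standard RiskMetrics component.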

Now, to compute VaR it is simple to evaluate the "standard RiskMetrics component" since all that is required are the RiskMetrics standard deviations and correlations, which are readily available. However, things are not so straightforward with the "new portfolio component". For example, we must determine whether we should use estimates of the correlation between δ1t and δ2t or simply assume how they are correlated. Similar issues may apply to β1t and β2t although, according to our estimation procedure, it is not possible to estimate their correlation. Through extensive experimentation with portfolios of various sizes, we have concluded that treating both the δt's and βt's as independent offers the best results relative to the standard normal model and the assumption that the δt's are perfectly correlated. The implication of this result is that for any portfolio of arbitrary size, all that is required to compute its VaR are the RiskMetrics volatilities and correlations, the probabilities p, the mean estimates µβ, and the standard deviations σβ. Note that there is one p, µβ, and σβ for each time series. We now offer an example to show how we reached the conclusion to treat the δt's and βt's as independent.

Consider the case of a portfolio that consists of five assets. We assume that the true returns are generated from the following model:

[5]  Rjt = njt + δjt βjt   for j = 1, 2, 3, 4, 5

where the vector β = (β1, …, β5) is distributed multivariate normal with mean vector (0,0,0,0,0), standard deviation vector (10,10,10,10,10), and correlation matrix

[6]  Σβ =  1    0.5  0.5  0.5  0.5
           0.5  1    0.5  0.5  0.5
           0.5  0.5  1    0.5  0.5
           0.5  0.5  0.5  1    0.5
           0.5  0.5  0.5  0.5  1

The vector δ = (δ1, …, δ5) is distributed according to a correlated multinomial distribution with probabilities pj = 0.02 for j = 1, 2, 3, 4, 5 and a correlation matrix that is the same as Σβ. Finally, nt is multivariate normal with mean vector (0,0,0,0,0), standard deviation vector (1,1,1,1,1), and correlation matrix equal to Σβ. We form the portfolio return series weighting each return series by 1 unit (i.e., wj = 1 for j = 1, 2, 3, 4, 5). After generating 5000 portfolio returns according to Eq. [5] on page 14, we calculate the percentiles at the 0.5%, 1%, 2.5%, and 5% probability levels. These percentiles are reported in the first column of Table 2 on page 15. Table 2 also reports percentiles computed under the following conditions: (1) the standard normal assumption, i.e., just accounting for nt, (2) the assumption that the δt's are independent, and (3) the assumption that the δt's are perfectly correlated.
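A simulation along these lines can be reproduced in a few lines of Python. The text does not specify how the correlated multinomial δ's are generated; the sketch below uses a Gaussian-copula threshold as one plausible construction (the indicator correlations it induces are not exactly 0.5), so its percentiles will not match Table 2 exactly:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(7)
n_sims, p = 5000, 0.02
corr = np.full((5, 5), 0.5) + 0.5 * np.eye(5)   # correlation matrix, Eq. [6]
L = np.linalg.cholesky(corr)

# normal components n_t: std 1, correlation matrix [6]
n = rng.standard_normal((n_sims, 5)) @ L.T
# event returns beta_t: std 10, same correlation matrix
beta = 10.0 * (rng.standard_normal((n_sims, 5)) @ L.T)
# correlated event indicators delta_t via a Gaussian-copula threshold
z = rng.standard_normal((n_sims, 5)) @ L.T
delta = z < NormalDist().inv_cdf(p)

port = (n + delta * beta).sum(axis=1)   # unit weights, Eq. [5]
for q in (0.5, 1.0, 2.5, 5.0):
    print(q, round(float(np.percentile(port, q)), 2))
```

The simulated tail percentiles sit well below those of the normal component alone, which is the point of the exercise.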
RiskMetrics Monitor
Second quarter 1996
page 15

An improved methodology for measuring VaR (continued)

Recall that we ignore the correlations between the β t 's because, in practice, it is very difficult to get
good estimates of the correlation matrix Σ β .

Table 2
Testing assumptions on correlation between δt's
5000 simulated returns from correlated mixture model

Percentiles:
Confidence Interval    True     Normal   Independent δt's   Perfectly correlated δt's
5.0%                  −1.63    −1.65         −1.88                  −1.74
2.5%                  −2.14    −1.96         −2.40                  −2.15
1.0%                  −3.32    −2.32         −3.32                  −3.10
0.5%                  −5.08    −2.57         −4.20                  −8.54

Next, we conduct the same experiment but this time on real data. We form a portfolio of returns from
5 foreign exchange series again weighting each return series by one unit. Table 3 reports the parameter
estimates after fitting each of the five return series to Eq. [2] on page 11. The percentiles implied by
Eq. [2] under different aggregation assumptions are reported in Table 4.

Table 3
Parameter estimates of the normal mixture model
Fitting the model (Eq. [2]) to 5 foreign exchange return series

Parameter estimates:
Currencies     µβt      σβt      σnt     p (%)
Austria       −0.80     3.09     1.01     1.2
Australia      0.33     3.48     1.10     1.5
Belgium        0.68     3.46     1.03     1.5
Canada        −0.25     3.20     1.01     1.4
Denmark       −1.34     3.00     1.08     1.4

Table 4
Testing assumptions on correlation between δt's
Portfolio returns generated from 5 foreign exchange series

Percentiles:
Confidence Interval    Normal   Independent δt's   Perfectly correlated δt's
5.0%                  −1.65         −1.71                  −1.69
2.5%                  −1.96         −2.05                  −2.10
1.0%                  −2.32         −2.46                  −2.56
0.5%                  −2.57         −2.77                  −3.16

Tables 2 and 4 show that the independence and perfect correlation assumptions give similar results. However, at the smaller percentiles, say 1% and smaller, the assumption that the δt's are independent tends to be more accurate.

To help summarize how to compute VaR under this new proposed methodology, we refer the reader to the flow chart presented in Chart 5 on page 17. The gray shaded boxes represent the data required (which we would supply) to estimate VaR under the new methodology. In addition to the RiskMetrics volatilities and correlations, we would supply p, µβ, and σβ for each time series and the formulae that would take as inputs the portfolio weights, the RiskMetrics volatilities and correlations, and p, µβ, and σβ to produce a VaR estimate at a prespecified confidence level. The italicized words tell when the data would be updated.
RiskMetrics Monitor
Second quarter 1996
page 17

An improved methodology for measuring VaR (continued)

Chart 5
Flow chart of VaR calculation

[Flow chart: Start with RiskMetrics™ price returns → Map positions to standard RiskMetrics™ vertices → Estimate volatilities and correlations. The standard path uses the RiskMetrics™ covariance matrix (updated daily) to produce the standard RiskMetrics™ VaR. The new path fits the individual standardized return series to a statistical model, estimates the standard deviation σβ, mean µβ, and probabilities p (updated monthly), and computes an adjusted percentile to produce the new VaR.]
RiskMetrics Monitor
Second quarter 1996
page 18

An improved methodology for measuring VaR (continued)

Thus far, there has been no mention of how p, µβ, and σβ are estimated. The next section describes the statistical model used to estimate p, µβ, and σβ. The estimation process is rather technical and uninterested readers should skip to "Conclusions" on page 23.

A statistical model for estimating return distributions and probabilities


Recall from the previous section that returns divided by the RiskMetrics volatility forecasts are assumed to be generated from a mixture of normal distributions. Using the notation "→," which is interpreted "distributed as," we write such returns as being generated from the following model:

[7]  Rt = nt + δt βt

where

nt → N(0, σn²)
Prob(δt = 1) = p
Prob(δt = 0) = (1 − p)

and

βt → N(0, ξ²)
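A quick simulation illustrates why Eq. [7] produces fat tails. The parameter values below (p = 2%, σn = 1, ξ = 10) mirror the prior settings discussed later in this section; the kurtosis of the simulated returns lands far above the normal value of 3:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 100_000
p, sigma_n, xi = 0.02, 1.0, 10.0        # illustrative parameter values

n = rng.normal(0.0, sigma_n, T)         # normal component n_t
delta = rng.random(T) < p               # event indicator delta_t
beta = rng.normal(0.0, xi, T)           # event return beta_t
R = n + delta * beta                    # Eq. [7]

# sample kurtosis: far above the value of 3 for a normal distribution
kurt = np.mean((R - R.mean()) ** 4) / np.var(R) ** 2
print(round(float(kurt), 1))
```

Even though events arrive only 2% of the time, their large variance dominates the fourth moment of the mixture.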

We analyze Eq. [7] within a Bayesian framework. Given prior distributions and values on δt, p, βt, and σn², we derive the marginal posterior distributions of p, βt, and σn², as well as a time series of the posterior probabilities of events, i.e., Prob(δt = 1) at each point in time. The basic computational tool used is the Gibbs sampler, which uses random draws from the conditional distributions of each variable of a random vector given all other variables to obtain samples from the marginal distributions.2 The sampler thus only requires the ability to draw random samples from the conditional distributions of the variables involved. This minimum requirement makes the sampler particularly useful as the joint distribution of the variables p, βt, and σn² is complicated.

As previous research has shown, traditional maximum likelihood analysis of models such as Eq. [7] is
complicated because the random mechanism, p, can give rise to unknown numbers of shifts at arbitrary
time points. Alternatively, formulating Eq. [7] by using a Bayesian approach and applying the Gibbs
sampler is an effective way to obtain the marginal posterior distributions. A major advantage of the
Bayesian approach is that there is no need to consider the number of level shifts in the series; the shifts
are in effect governed by the probability p.

Following the Bayesian paradigm, in order to estimate Eq. [7], we must first specify the prior distributions for p, βt, and σn². The prior distribution for the variance of the standard normal random variable nt is

[8]  σn² → νλ · χν⁻²

2 See “Appendix” on page 24 for a description of this technique.


RiskMetrics Monitor
Second quarter 1996
page 19

An improved methodology for measuring VaR (continued)

where χν⁻² is an inverse chi-square random variable with ν degrees of freedom, with mean and variance given by

E[σn²] = νλ/(ν − 2),    ν > 2
V[σn²] = 2(νλ)²/[(ν − 2)²(ν − 4)],    ν > 4
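These moment formulas can be verified by Monte Carlo. The hyperparameter values below are illustrative (chosen with ν > 4 so that the variance exists):

```python
import numpy as np

rng = np.random.default_rng(2)
nu, lam = 10.0, 2.0                     # illustrative hyperparameters

# A draw from nu*lam times an inverse chi-square with nu degrees of
# freedom is nu*lam divided by an ordinary chi-square draw.
draws = nu * lam / rng.chisquare(nu, size=1_000_000)

mean_theory = nu * lam / (nu - 2)                            # = 2.5
var_theory = 2 * (nu * lam) ** 2 / ((nu - 2) ** 2 * (nu - 4))
print(round(float(draws.mean()), 2), round(mean_theory, 2))
print(round(float(draws.var()), 2), round(var_theory, 2))
```

The simulated mean and variance agree with the closed-form expressions to within sampling error.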

The prior distribution for the event returns, βt, is normal, i.e.,

[9]  βt → NID(0, ξ²)

where NID stands for normal, independently distributed.

Finally, it is assumed that the prior probability that an event occurs, p, follows a beta distribution, i.e., p → Beta(γ1, γ2), with mean and variance

E(p) = γ1/(γ1 + γ2)
V(p) = γ1γ2/[(γ1 + γ2)²(γ1 + γ2 + 1)]

The hyperparameters, i.e., the parameters (ν, λ, γ1, γ2, ξ²) for the prior distributions, are assumed known. In practice, these hyperparameters can be specified by using substantive prior information on the series under study. The purpose of the Gibbs sampler is to find the conditional posterior distributions of subsets of the unknown parameters (δ, β, p, σn²). Denoting the conditional probability density of ω given ρ by p(ω|ρ) and using some standard Bayesian techniques, we obtain the posterior distributions, i.e., the distributions after we update our priors with data.3

To quantify our a priori beliefs, we set the priors to the following values:

γ1 = 2; γ2 = 98
ν = 3
λ = σ²/3 (with σ² = 1, so that E[σn²] = νλ/(ν − 2) = 1)
ξ² = 100

These settings imply that before we estimate the parameters of Eq. [7], we believe an event occurs 2% of the time, the expected value of the standard deviation of nt is one, and event returns, βt, are distributed normally with mean of zero and a standard deviation of 10. Chart 6 shows how the prior distribution on event returns compares to the prior distribution of standardized returns et.

3 The posterior distributions are given in the appendix to this paper (page 24).
RiskMetrics Monitor
Second quarter 1996
page 20

An improved methodology for measuring VaR (continued)

Chart 6
Prior distributions for standardized returns and event returns

[Chart: the N(0,1) prior density of standardized returns, a tall narrow bell curve, plotted against the much flatter N(0,100) prior density of event returns β, over the range −10 to 10.]

Note that by choosing a large standard deviation we are implying that we do not have strong a priori
beliefs on the values of event returns. Also, assuming a mean of zero implies that there is no bias toward
either positive or negative events.

Combining the observed returns with our prior settings, we estimate the marginal distributions of β, p, and σn², which represent the probability distributions of the event returns, the probability of an event, and the variance of the normal standardized random variable, respectively. Chart 7 presents the estimated posterior distribution of event returns for gold spot contracts. Specifically, given our priors we fit Eq. [7] on page 18 to gold spot contract returns for the period April 4, 1990 through March 26, 1996 and estimate the distribution of β.

Chart 7
Posterior distribution of event returns (β) of gold spot contracts
X-axis is in units of standardized returns

[Chart: estimated posterior density (PDF) of β for spot gold standardized returns, plotted over the range −10 to 8.]

Notice how the event returns have a symmetric bimodal distribution. Effectively, what this shape tells us is that there is important information in the data. The data has transformed the prior distribution into something very different. When event returns are negative, values around −4.5 occur most frequently. Similarly, when event returns are positive, the values around 4.3 occur most often. This is in stark contrast to the prior distribution, which assumes that zero is the most commonly observed value for event returns.

In fact, when fitting Eq. [7] to 215 time series in the RiskMetrics database, which includes foreign exchange, money market rates, government bonds, commodities, and equities, we have found similar shapes for the event return distributions. For example, Charts 8 and 9 show event return distributions for the DEM/USD foreign exchange series and the US S&P 500 equity index for the period April 4, 1990 through March 26, 1996.

Chart 8
Posterior distribution of DEM/USD foreign exchange event returns
X-axis is in units of standardized returns

[Chart: estimated posterior density (PDF) of β for DEM/USD standardized returns, plotted over the range −8 to 8.]

Note that whereas the DEM/USD event returns have a symmetric bimodal distribution, the S&P 500’s
event return distribution has a pronounced mode at −5.0. As Chart 9 on page 22 shows, when events in
the S&P 500 occur, they are more likely to be negative than positive.
RiskMetrics Monitor
Second quarter 1996
page 22

An improved methodology for measuring VaR (continued)

Chart 9
Posterior distribution of S&P 500 event returns
X-axis is in units of standardized returns

[Chart: estimated posterior density (PDF) of β for S&P 500 standardized returns, plotted over the range −10 to 8.]

Charts 7, 8, and 9 presented the distribution of β for three different return series; similarly, we can estimate the posterior distribution of p, the probability that an event occurs. Chart 10 shows the distribution of the probability of an event occurring for the S&P 500.

Chart 10
Posterior distribution of the probability of an event (p) for the S&P 500
April 4, 1990 through March 26, 1996

[Chart: estimated posterior density (PDF) of p, plotted over the range 0.004 to 0.044.]

Recall that the prior probability of observing an event followed a beta distribution and it was expected that, on average, an event would occur 2% of the time. However, after using the data to update our priors, we see that for the S&P 500, the estimated distribution of p is positively skewed with a peak around 1%.
RiskMetrics Monitor
Second quarter 1996
page 23

An improved methodology for measuring VaR (continued)

Finally, note that throughout this section we derived probability distributions of the parameters of interest. However, our VaR methodology requires point estimates, not entire distributions. When computing VaR we take the means of the posterior distributions of p, µβ, and σβ as their point estimates.

Conclusions
This article has developed a new methodology to measure VaR that explicitly accounts for fat-tailed distributions. Building on the current RiskMetrics model, VaR under the new methodology takes as inputs the portfolio weights and two sets of parameters: the current RiskMetrics volatilities and correlations, and estimates of p, µβ, and σβ. This model was developed keeping in mind three requirements: (1) build a VaR model that continues to use RiskMetrics volatilities and correlations, (2) use the conditional normal distribution as the baseline model, and (3) keep the number of additional parameter estimates required to compute VaR to a minimum. The upshot of meeting these criteria is a relatively simple VaR framework that goes a long way towards modeling large portfolio losses. Furthermore, the new VaR methodology serves as a basis from which a risk manager can perform structured Monte Carlo.

In closing, we are interested in your feedback related to this proposed methodology. Send the author
E-mail with any comments or questions you may have regarding the contents of this article.
RiskMetrics Monitor
Second quarter 1996
page 24

An improved methodology for measuring VaR (continued)

Appendix

Conditional posterior distributions


The conditional posterior of σn² is inverse chi-squared:

[A.1]  σn² → (νλ + ŝ²) · χν+T⁻²

where χν+T⁻² is an inverse chi-square random variable with ν + T degrees of freedom,

ŝ² = Σt (Rt − R̄)²

and R̄ is the sample mean of Rt.

At any point in time, the conditional posterior probability of δt is given by

[A.2]  Prob(δt = 1 | Rt, βt, σn, p) = p·g1(Rt) / [p·g1(Rt) + (1 − p)·g0(Rt)]

where

[A.3]  g1(Rt) = (2πσn²)^(−1/2) · exp(−½ [(Rt − βt)/σn]²)    if δt = 1

[A.4]  g0(Rt) = (2πσn²)^(−1/2) · exp(−½ [Rt/σn]²)    if δt = 0

This result follows from a direct application of Bayes' rule, i.e.,

[A.5]  Prob(δt = 1 | Rt) =
       Prob(δt = 1)·Prob(Rt | δt = 1) / [Prob(δt = 1)·Prob(Rt | δt = 1) + Prob(δt = 0)·Prob(Rt | δt = 0)]
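Eq. [A.2] is easy to evaluate directly. The sketch below uses hypothetical values of βt, σn, and p to show that a return near the event mean receives a posterior event probability close to one, while a small return receives a probability close to zero:

```python
import math

def event_prob(R, beta, sigma_n, p):
    """Posterior probability that delta_t = 1, per Eqs. [A.2]-[A.4]."""
    norm = 1.0 / (sigma_n * math.sqrt(2.0 * math.pi))
    g1 = norm * math.exp(-0.5 * ((R - beta) / sigma_n) ** 2)  # Eq. [A.3]
    g0 = norm * math.exp(-0.5 * (R / sigma_n) ** 2)           # Eq. [A.4]
    return p * g1 / (p * g1 + (1.0 - p) * g0)

# A return near the event mean is almost surely an event...
print(event_prob(R=-4.5, beta=-4.5, sigma_n=1.0, p=0.02))
# ...while a small return is almost surely not.
print(event_prob(R=0.1, beta=-4.5, sigma_n=1.0, p=0.02))
```

Note that the common normalizing constant cancels in the ratio, so only the exponential terms matter in practice.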

The conditional posterior distribution of βt is as follows:

If δt = 0, there is no information on βt except its prior, so that βt → NID(0, ξ²).

If δt = 1, we use standard results on the relation between a normal prior and likelihood to derive the posterior distribution of βt:

[A.6]  βt | (δt = 1) → NID(βt*, σβ²)
RiskMetrics Monitor
Second quarter 1996
page 25

An improved methodology for measuring VaR (continued)

where

βt* = Rt ξ² / (ξ² + σn²)

and

σβ² = σn² ξ² / (ξ² + σn²)

Finally, the conditional posterior distribution of p depends only on δt. Let k be the number of 1's in the T × 1 vector δ = (δ1, δ2, …, δT); that is, k is the number of events in the time series. Because the prior of p is Beta(γ1, γ2), the conditional posterior distribution of p is Beta(γ1 + k, γ2 + T − k).
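As a small worked example of this conjugate update (with a hypothetical sample size and event count, not figures from the text):

```python
# Prior Beta(2, 98): mean 2/(2+98) = 2% chance of an event per period.
g1, g2 = 2.0, 98.0

# Hypothetical sample: T observations with k sampled event indicators equal to 1.
T, k = 1500, 12

# Conditional posterior is Beta(g1 + k, g2 + T - k); its mean pulls the
# prior toward the observed event frequency k/T.
post_mean = (g1 + k) / (g1 + g2 + T)
print(post_mean)   # 14/1600 = 0.00875
```

The posterior mean lies between the prior mean (2%) and the sample frequency (0.8%), with the data dominating as T grows.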

The Gibbs sampler


The Gibbs sampler is a Monte Carlo integration algorithm which is an adaptation of Markov Chain Monte Carlo (MCMC) methods. It has been the focus of many academic papers in the statistics literature. A main reason for its popularity is its conceptual simplicity. The Gibbs sampler generates random variables from a distribution indirectly, without having to calculate the density. In other words, rather than having to compute or approximate the probability density function p(x) directly, the Gibbs sampler makes it possible to generate a sample X1, …, XN from the cumulative distribution function P(x) without knowing p(x).

The Gibbs sampler simulates random variables from marginal and joint distributions as follows. Consider a set of random variables r = (r1, …, rN) which has a joint distribution function P(r1, …, rN). It is assumed that the joint distribution is determined uniquely by the full conditional distributions P(rj | ri, i ≠ j), j = 1, …, N, and that it is possible to sample from these conditional distributions. For each j, let P(rj) denote the marginal distribution. Given an arbitrary set of starting values (r1^(0), r2^(0), …, rN^(0)), draw r1^(1) from P(r1 | r2^(0), …, rN^(0)), then r2^(1) from P(r2 | r1^(1), r3^(0), …, rN^(0)), and so on up to rN^(1) to complete one iteration of the scheme. After A such iterations the result is (r1^(A), r2^(A), …, rN^(A)). It can be shown that for a large enough A, (r1^(A), r2^(A), …, rN^(A)) is a simulated draw from the joint cumulative distribution function P(r1, r2, …, rN). In this paper we run the sampler 1,000 times, discard the first 500 iterations, and use the last 500 simulated samples for statistical inference.
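The full scheme for Eq. [7] can be sketched as follows. This is an illustrative implementation, not the authors' production code: the data are simulated rather than market series, and the σn² update conditions on the residuals Rt − δtβt, the natural analogue of Eq. [A.1] inside the sampler:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated data from the mixture: true p = 2%, sigma_n = 1, event std 4.
T = 2000
true_delta = rng.random(T) < 0.02
R = rng.standard_normal(T) + true_delta * rng.normal(0.0, 4.0, T)

# Hyperparameters as in the text.
g1, g2, nu, lam, xi2 = 2.0, 98.0, 3.0, 1.0 / 3.0, 100.0

# Starting values.
p, sig2 = 0.02, 1.0
beta = np.zeros(T)
keep_p, keep_s = [], []

for it in range(1000):
    # delta | rest (Eq. [A.2]); the common normalizing constants cancel
    w1 = p * np.exp(-0.5 * (R - beta) ** 2 / sig2)
    w0 = (1.0 - p) * np.exp(-0.5 * R ** 2 / sig2)
    delta = rng.random(T) < w1 / (w1 + w0)

    # beta | rest (Eq. [A.6]); draw from the prior where delta = 0
    m = np.where(delta, R * xi2 / (xi2 + sig2), 0.0)
    v = np.where(delta, sig2 * xi2 / (xi2 + sig2), xi2)
    beta = rng.normal(m, np.sqrt(v))

    # sigma_n^2 | rest (cf. Eq. [A.1], using residuals R - delta*beta)
    resid = R - delta * beta
    sig2 = (nu * lam + np.sum(resid ** 2)) / rng.chisquare(nu + T)

    # p | rest: Beta(g1 + k, g2 + T - k)
    k = int(delta.sum())
    p = rng.beta(g1 + k, g2 + T - k)

    if it >= 500:                       # discard the first 500 as burn-in
        keep_p.append(p)
        keep_s.append(np.sqrt(sig2))

print(round(float(np.mean(keep_p)), 3), round(float(np.mean(keep_s)), 2))
```

With this simulated data the retained draws recover an event probability near 2% and a normal standard deviation near 1, illustrating the convergence behavior described above.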


RiskMetrics Monitor
Second quarter 1996
page 26

A Value-at-Risk analysis of currency exposures

Peter Zangari
Morgan Guaranty Trust Company
Risk Management Advisory
(1-212) 648-8641
zangari_p@jpmorgan.com

In this article, we compute the VaR of a portfolio of foreign exchange flows that consists of the exposures that are provided in Table A.1 in the Appendix on page 30. In so doing, we underscore the limitations of standard VaR analysis when return distributions deviate significantly from normality. All exposures are assumed to have been converted to U.S. dollar equivalents at the current spot rate.

Briefly, for a given forecast horizon and confidence level, VaR is the maximum expected loss of a portfolio's current value. Based on the standard RiskMetrics methodology, Table 1 reports the portfolio's VaR over different forecast horizons and confidence levels. For example, over the next year, there is a 95% chance that the current portfolio value of USD 2,502MM will not fall by more than USD 129.42MM.

Table 1
Portfolio VaR estimates (USD MM)

Confidence                 Time Horizon                      Annual Risk
Interval      1 Quarter   2 Quarters   3 Quarters    Diversified   Undiversified
99%             91.24       129.03       157.92         182.48        469.57
95%             64.71        91.51       112.08         129.42        333.03
90%             50.21        71.01        86.97         100.43        258.43

The RiskMetrics methodology is based on the precept that risks across instruments are not perfectly
additive given the lack of perfect positive correlation. As a result, the total risk in a portfolio of posi-
tions is often less than the sum of the instrument risks taken separately. This diversification benefit can
be estimated by taking the difference in VaR assuming all correlations between exposures are 1 (no di-
versification) and VaR estimates based on estimated correlations (as presented in Table 1). Consider the
VaR estimates (95% confidence) on the individual exposures for a one year horizon. The total sum of
these numbers is USD 333.03MM (sum of annual VaR estimates in Table A.1 on page 30). This is
equivalent to the VaR estimate of USD 129.42MM reported in Table 1 plus the diversification benefit
of USD 203.61MM.
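The arithmetic of the diversification benefit is simply:

```python
# Figures from Table 1 and Table A.1 (USD MM, annual horizon, 95% confidence).
undiversified_var = 333.03   # sum of the individual exposure VaRs
diversified_var = 129.42     # portfolio VaR using estimated correlations

diversification_benefit = undiversified_var - diversified_var
print(round(diversification_benefit, 2))   # 203.61
```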

In addition to the portfolio’s VaR, we compute the VaR of each foreign exchange exposure for a forecast
horizon of one year, and a 95% confidence level. Table A.1 reports VaR estimates for the 52 positions.

To further help understand the riskiness of the individual foreign exchange positions, Table A.1 also
reports the volatility forecasts of foreign exchange returns (i.e. not weighted by the size of the foreign
exchange positions).

To obtain the aforementioned risk estimates, we conducted the VaR analysis closely following the RiskMetrics methodology. For each of the 52 time series (Table A.1) we used 86 historical weekly prices for the period 7/15/95 through 4/30/96. Missing observations, due to country-specific holidays, were forecast using the statistical routine known as the EM algorithm. The VaR estimates reported in Tables A.1 and A.2 were computed using the standard RiskMetrics delta valuation methodology. This requires the computation of volatilities for each exposure and correlations between exposures. Volatilities and correlations were computed using exponentially weighted averages with a decay factor of 0.94, which implies that our volatility and correlation forecasts effectively use 75 historical data points. Recall that exponential weighting is a simple way of capturing the dynamics of returns. Its key feature is that it weights recent data more heavily than data recorded in the distant past. See RiskMetrics—Technical Document for exact formulae.
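The exponentially weighted estimator can be sketched as follows. The recursion and the seeding convention here are one common formulation (not the exact formulae of the Technical Document), and the simulated returns are illustrative:

```python
import numpy as np

def ewma_volatility(returns, decay=0.94):
    """Exponentially weighted volatility forecast, RiskMetrics-style.

    Recursion: var_t = decay * var_{t-1} + (1 - decay) * r_t**2.
    Seeding with the first squared return is one common convention.
    """
    var = returns[0] ** 2
    for r in returns[1:]:
        var = decay * var + (1.0 - decay) * r ** 2
    return float(np.sqrt(var))

rng = np.random.default_rng(4)
weekly_returns = rng.normal(0.0, 2.0, 75)   # 75 simulated weekly returns, true vol 2
print(ewma_volatility(weekly_returns))
```

Because the weights decay geometrically, observations more than about 75 periods back contribute almost nothing at a decay factor of 0.94.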

When computing VaR, RiskMetrics assumes that portfolio returns are distributed conditionally normal. That is, it is assumed that returns divided by their respective standard deviations are normally distributed. It is important to distinguish this assumption from simply assuming that currency returns are normally distributed. Table A.2 on page 32 reports various test statistics to determine the validity of this assumption. All results were computed from normalized returns, that is, returns divided by their standard errors.

RiskMetrics also assumes that returns are independent over time. Finally, to generate VaR estimates
over different time horizons, we simply scale the weekly VaR forecast by the square root of the time
horizon in weeks. For example, since we define 60 weeks to be one year,1 we obtain the one-year VaR
forecast by multiplying the one-week VaR forecast by the square root of 60.
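The scaling convention can be expressed in a couple of lines; the one-week VaR figure below is hypothetical:

```python
import math

# Hypothetical one-week VaR forecast in USD MM.
weekly_var = 16.70

# With the convention of 60 weeks per year and 15 weeks per quarter,
# scale by the square root of the horizon in weeks.
annual_var = weekly_var * math.sqrt(60)
quarterly_var = weekly_var * math.sqrt(15)

print(round(annual_var, 1), round(quarterly_var, 1))
```

This square-root-of-time scaling relies on the assumption, stated above, that returns are independent over time.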

Discussion
Table A.2 on page 32 presents evidence that several return series are clearly not conditionally normal. For example, Mexico's skew statistic is 13.71, far from the zero skewness implied by normality. When returns are aggregated into a portfolio, it would not be unreasonable to expect the portfolio's return distribution to become more normal. However, as the sample statistics for the portfolio show in Table A.2, this is not the case. In fact, the non-normality of the portfolio's return is a result of the relatively high weights on returns that are very non-normal (e.g. China has a weight of 74). Fortunately, there are advanced statistical methods that allow us to adjust our VaR estimates to reflect the portfolio's skewness and kurtosis (see RiskMetrics Monitor, 1st quarter, 1996). In the current analysis, we did not adjust the VaR estimates for skewness and kurtosis.

In addition to the general deviations from conditional normality, there is the issue of event risk. For
various reasons, events appear as very large returns that occur with only a small probability. While we
are currently developing a VaR methodology that allows users to explicitly account for event risk in
their VaR calculations, it is not included in this analysis. For a discussion of fat-tail distributions, see
the preceding paper in this edition, “An improved methodology for measuring VaR” on page 7.

One way to measure how sensitive the VaR estimates in Tables A.1 and A.2 are to their underlying assumptions is to compare these values to VaR estimates produced by historical simulation. Under historical simulation, no statistical distribution for returns is assumed. Instead, sample returns over the 85-week historical sample period and the portfolio weights are used to construct the portfolio's profit and loss (P&L) distribution. It is then assumed that this distribution holds in the future. Chart 1 on page 28 shows the P&L distribution of the portfolio for a one-year horizon. For a 95% confidence level, VaR is given by the 5th percentile of this distribution, which is USD 174MM.

1 In this analysis, we use the convention of 5 weeks per month, 15 weeks per quarter, and 60 weeks per year.
RiskMetrics Monitor
Second quarter 1996
page 28

A Value-at-Risk analysis of currency exposures (continued)

Chart 1
Portfolio profit & loss distribution over one year based on historical simulation
VaR = USD 174MM at 95% confidence level

[Chart: histogram of the portfolio P&L distribution (frequency vs. portfolio value in USD MM), spanning roughly −147 to +147.]

Table 2 presents portfolio VaR estimates based on historical simulation for various forecast horizons.
Table A.1 on page 30 reports annual VaR estimates for individual foreign exchange exposures.

Table 2
Portfolio VaR estimates (USD MM)
Historical simulation

Confidence interval1    1 Quarter   2 Quarters   3 Quarters   1 Year
95%                         84          119          146        174
90%                         80          113          138        160

1 Due to an insufficient amount of data, we do not report results for the 99% confidence level.

It should be noted that the accuracy of the VaR estimates produced by historical simulation is very dependent upon the sample size. In general, when no statistical distribution is assumed for returns, it is difficult to obtain accurate estimates of the 1st, 5th and 10th percentiles without a sufficient amount of data.

For example, to find the 5th percentile of the profit and loss (P&L) distribution consisting of 85 data points, we first sort the data and then select the 4th data point of the sorted series. (Actually, the 5th percentile of 85 observations falls at 4.25, and using the fourth observation is a rough approximation to the true percentile value.) Similarly, the 1st and 10th percentiles are represented by the first and ninth data points of the sorted P&L series, respectively. Table 3 presents the first nine P&L values from the sorted distribution as well as their corresponding percentiles.
RiskMetrics Monitor
Second quarter 1996
page 29

A Value-at-Risk analysis of currency exposures (continued)

Table 3
First nine values of the portfolio's profit and loss (P&L) distribution
One year forecast horizon

Order   P&L Values   Approximate Percentiles
1st       −288              1st
2nd       −279              -
3rd       −258              -
4th       −177              5th
5th       −165              -
6th       −160              -
7th       −160              -
8th       −160              -
9th       −140              10th

Note that the 5th percentile in Table 3 does not match its counterpart in Table 2 because there, the percentile is computed by interpolating between the 4th and 5th order statistics: 0.75 × 177 + 0.25 × 165 = 174. Nevertheless, we present Table 3 to show how percentile estimates are very sensitive to the number of data points used to construct the P&L distribution.
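The interpolation between order statistics can be checked directly from the Table 3 values:

```python
# Nine smallest P&L values from Table 3 (USD MM).
pnl = [-288, -279, -258, -177, -165, -160, -160, -160, -140]

# The 5th percentile of 85 observations falls at order statistic 4.25,
# so interpolate one quarter of the way from the 4th to the 5th value.
idx = 4.25
lo, hi = pnl[3], pnl[4]          # 4th and 5th order statistics (0-based list)
pctl = lo + (idx - 4) * (hi - lo)
print(pctl)                      # -174.0, i.e. a VaR of USD 174MM
```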

More specifically, by computing the confidence intervals for the estimated percentiles, we find that there is a 20% chance that the estimated 5th percentile will be less than −279 or greater than −160; there is only a 67% chance that the 5th percentile lies between −258 and −165. The fact that it is difficult to get robust estimates of the percentiles in the above analysis is one reason for the differences between RiskMetrics VaR and VaR according to historical simulation. Another reason for the difference is related to the relative data weighting schemes used by the two methodologies. In historical simulation, all occurrences have equal weights. Under the standard RiskMetrics approach, market movements are exponentially weighted.
RiskMetrics Monitor
Second quarter 1996
page 30

A Value-at-Risk analysis of currency exposures (continued)

Appendix
Table A.1
Portfolio composition and VaR1

              Annual Volatility        Annual Value at Risk
   Weight      (1.65 Std. Dev.)    RiskMetrics    Hist. Simulation

OECD
Australia 35 12.361 4.330 4.340
Austria 66 22.565 14.890 13.360
Belgium 32 16.592 5.310 6.220
Denmark 59 15.850 9.350 11.070
France 28 15.616 4.370 5.280
Germany 37 16.847 6.230 7.370
Greece 80 15.220 12.180 14.460
Holland 30 16.738 5.020 5.880
Italy 82 11.190 9.180 12.470
New Zealand 98 10.948 10.730 7.360
Portugal 28 15.208 4.260 5.270
Spain 48 14.843 7.120 8.120
Turkey 68 30.132 20.490 18.450
UK 81 11.641 9.430 14.170

Switzerland 41 19.810 8.120 8.330

Latin Amer. Econ. System


Brazil 6 6.637 0.400 0.680
Chile 4 10.770 0.430 0.520
Colombia 83 13.462 11.170 7.540
Costa Rica 53 4.353 2.310 1.460
Dominican Rep. 77 8.974 6.910 12.630
El Salvador 29 0.637 0.180 0.260
Ecuador 51 12.132 6.190 8.370
Guatemala 90 10.521 9.470 10.870
Honduras 10 11.272 1.130 1.840
Jamaica 64 15.298 9.790 8.000
Mexico 99 33.727 33.390 53.560
Nicaragua 9 4.030 0.360 0.160
Peru 27 10.536 2.840 1.870
Trinidad 12 16.731 2.010 2.020

Uruguay 74 7.491 5.540 8.320


RiskMetrics Monitor
Second quarter 1996
page 31

A Value-at-Risk analysis of currency exposures (continued)

Table A.1 (continued)
Portfolio composition and VaR1

              Annual Volatility        Annual Value at Risk
   Weight      (1.65 Std. Dev.)    RiskMetrics    Hist. Simulation
ASEAN
Malaysia 54 5.065 2.730 2.250
Philippines 79 5.521 4.360 9.850
Thailand 34 2.813 0.960 0.950

Fiji 84 5.651 4.750 4.980


Hong Kong 79 0.346 0.270 0.360
Reunion Island 41 15.614 6.400 7.730

Southern African Dev. Comm.


Malawi 55 18.211 10.020 19.350
South Africa 85 7.953 6.760 8.120
Zambia 11 19.112 2.100 3.130
Zimbabwe 57 6.968 3.970 5.100

Ivory Coast 53 15.607 8.270 9.990


Uganda 2 18.700 0.370 0.300

Others
China 1 3.964 0.040 0.020
Czech Repub 64 12.784 8.180 8.300
Hungary 83 12.984 10.780 11.560
India 94 17.787 16.720 13.960
Romania 43 25.866 11.120 4.520
Russia 82 14.730 12.080 20.030

Total 2,502
1Countries are grouped by major economic groupings as defined in Political Handbook of the World: 1995–1996. New
York: CSA Publishing, State University of New York, 1996. Countries not formally part of an economic group are
listed in their respective geographic areas.
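To rounding, each entry in the RiskMetrics VaR column of Table A.1 is the position's weight multiplied by its annual 1.65-standard-deviation volatility, i.e., the undiversified VaR of the position taken alone. A minimal sketch of that arithmetic, using the Australia and Austria rows:

```python
# Sketch: each RiskMetrics VaR entry in Table A.1 is, to rounding, the
# position's weight times its annual 1.65-sigma volatility (the undiversified
# VaR of the position taken in isolation).

def position_var(weight, annual_vol_pct):
    """Undiversified annual VaR of one FX position; volatility in percent."""
    return weight * annual_vol_pct / 100.0

australia = position_var(35, 12.361)   # about 4.33, vs. 4.330 in the table
austria = position_var(66, 22.565)     # about 14.89, vs. 14.890 in the table
```

The portfolio total VaR is smaller than the sum of these stand-alone figures because cross-currency correlations are below one.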
RiskMetrics Monitor
Second quarter 1996
page 32

A Value-at-Risk analysis of currency exposures (continued)

Table A.2
Testing for conditional normality1
Normalized return series; 85 total observations
Tail Probability (%)8 Tail value9
Skewness2 Kurtosis3 SW4 BL(18)5 Mean6 Std. Dev.7 < −1.65 > 1.65 < −1.65 > 1.65

OECD
Australia 0.314 3.397 0.958 6.105 0.120 0.943 2.900 5.700 −2.586 2.306
Austria 0.369 0.673 0.961 13.517 −0.085 1.037 8.600 5.700 −1.975 2.499
Belgium 0.157 2.961 0.943 19.172 −0.089 0.866 8.600 2.900 −1.859 2.493
Denmark 0.650 4.399 0.932 17.510 −0.077 0.903 11.400 2.900 −1.915 2.576
France 0.068 3.557 0.950 17.642 −0.063 0.969 8.600 2.900 −2.140 2.852
Germany 0.096 4.453 0.937 18.064 −0.085 0.872 5.700 2.900 −1.821 2.703
Greece 0.098 2.259 0.940 15.678 −0.154 0.943 11.400 2.900 −1.971 2.658
Holland 0.067 4.567 0.939 18.360 −0.086 0.865 5.700 2.900 −1.834 2.671
Italy 0.480 0.019 0.984 7.661 0.101 0.763 0 2.900 0 1.853
New Zealand 1.746 7.829 0.963 8.808 0.068 1.075 2.900 2.900 −2.739 3.633
Portugal 1.747 0.533 0.947 21.201 −0.062 0.889 11.400 2.900 −1.909 2.188
Spain 6.995 1.680 0.935 14.062 −0.044 0.957 8.600 2.900 −2.293 1.845
Turkey 30.566 118.749 0.865 2.408 −0.761 1.162 11.400 0 −2.944 0
UK 7.035 2.762 0.936 11.711 −0.137 0.955 8.600 2.900 −2.516 1.811

Switzerland 0.009 0.001 0.992 6.376 −0.001 0.995 2.900 5.700 −2.415 2.110

Latin Amer. Econ. System


Brazil 0.880 1.549 0.976 10.900 −0.224 0.282 0 0 0 0
Chile 1.049 0.512 0.983 11.035 −0.291 0.904 8.600 0 −2.057 0
Colombia 2.010 4.231 0.927 4.041 −0.536 1.289 11.400 2.900 −3.305 2.958
Costa Rica 0.093 33.360 0.878 19.893 −0.865 0.425 5.700 0 −2.011 0
Dominican Rep. 0.026 41.011 0.872 10.796 0.050 1.183 5.700 5.700 −3.053 3.013
El Salvador 2.708 49.717 0.672 9.626 0.014 0.504 0 2.900 0 1.776
Ecuador 0.002 50.097 0.852 10.463 0.085 1.162 5.700 5.700 −3.053 3.013
Guatemala 0.026 1.946 0.959 12.276 −0.280 1.036 8.600 5.700 −2.365 2.237
Honduras 42.420 77.277 0.705 5.794 −0.575 1.415 14.300 0 −3.529 0
Jamaica 81.596 451.212 0.674 6.030 −0.301 1.137 2.900 2.900 −6.163 1.869
Mexico 13.71 30.237 0.930 15.156 −0.158 0.597 2.900 0 −2.500 0
Nicaragua 0.051 2.847 0.977 132.183 −0.508 0.117 0 0 0 0
Peru 122.807 672.453 0.560 2.713 −0.278 1.365 5.700 0 −5.069 0
Trinidad 0.813 0.339 0.980 10.271 0.146 1.063 8.600 11.400 −2.171 1.915

Uruguay 0.724 0.106 0.989 9.464 −0.625 0.371 0 0 0 0


RiskMetrics Monitor
Second quarter 1996
page 33

A Value-at-Risk analysis of currency exposures (continued)

Table A.2 (continued)


Testing for conditional normality1
Normalized return series; 85 total observations
Tail Probability (%)8 Tail value9
Skewness2 Kurtosis3 SW4 BL(18)5 Mean6 Std. Dev.7 < −1.65 > 1.65 < −1.65 > 1.65
ASEAN
Malaysia 1.495 0.265 0.977 28.815 −0.318 0.926 8.600 0 −2.366 0
Philippines 1.654 0.494 0.975 22.944 −0.082 0.393 0 0 0 0
Thailand 0.077 0.069 0.987 10.099 −0.269 0.936 8.600 2.900 −2.184 1.955

Fiji 4.073 6.471 0.965 6.752 −0.129 0.868 2.900 2.900 −3.102 1.737
Hong Kong 5.360 29.084 0.906 12.522 0.032 1.001 5.700 5.700 −2.233 2.726
Reunion Island 0.068 3.558 0.950 17.641 −0.063 0.969 8.600 2.900 −2.140 2.853

Southern African Dev. Comm.


Malawi 0.157 9.454 0.870 14.143 −0.001 0.250 0 0 0 0
South Africa 34.464 58.844 0.837 7.925 −0.333 1.555 8.600 0 −4.480 0
Zambia 22.686 39.073 0.891 9.462 −0.007 0.011 0 0 0 0
Zimbabwe 20.831 29.234 0.895 9.142 −0.487 0.762 5.700 0 −2.682 0

Ivory Coast 0.068 3.564 0.950 17.643 −0.064 0.970 8.600 2.900 −2.144 2.857
Uganda 40.815 80.115 0.767 9.629 −0.203 1.399 8.600 2.900 −4.092 1.953

Others
China 80.314 567.012 0.557 5.268 0.107 1.521 2.900 2.900 −3.616 8.092
Czech Republic 0.167 12.516 0.937 2.761 −0.108 0.824 5.700 2.900 −2.088 2.619
Hungary 1.961 0.006 0.984 8.054 −0.342 0.741 5.700 0 −2.135 0
India 5.633 3.622 0.950 9.683 −0.462 1.336 17.100 5.700 −2.715 1.980
Romania 89.973 452.501 0.726 5.187 −1.249 1.721 14.300 0 −4.078 0
Russia 0.248 2.819 0.959 5.061 −0.120 0.369 0 0 0 0

Portfolio 21.010 11.057 0.951 11.940 −0.340 0.926 9.200 0 −2.451 0


Normal 0.000 3.000 1.000 < 18.000 - 1.000 5.000 5.000 −2.067 2.067
1 Countries are grouped by major economic groupings as defined in Political Handbook of the World: 1995–1996. New York: CSA Publishing, State University of New York, 1996. Countries not formally part of an economic group are listed after their respective geographic areas.
2 If returns are conditionally normal, the skewness value is zero.
3 If returns are conditionally normal, the kurtosis value is three.
4 If returns are conditionally normal, this value is one. (SW stands for the Shapiro-Wilk test.)
5 If there is autocorrelation in the time series, this value is greater than 18.31. (BL stands for the Box-Ljung test statistic.)
6 Sample mean of the return series.
7 Sample standard deviation of the normalized return series.
8 Tail probabilities give the observed probabilities of normalized returns falling below −1.65 and above +1.65. Under conditional normality, these values are 5%.
9 Tail values give the observed average value of normalized returns falling below −1.65 and above +1.65. Under conditional normality, these values are −2.067 and +2.067, respectively.
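The benchmarks quoted in footnotes 8 and 9 for the Normal row (a 5% tail probability and tail values of ±2.067) follow in closed form from the standard normal distribution. A small sketch using only the standard library:

```python
import math

# Sketch: the Normal-row benchmarks of Table A.2 from closed form.

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def normal_pdf(x):
    """Standard normal density."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

# Footnote 8 benchmark: P(Z < -1.65), roughly 5%.
tail_prob = normal_cdf(-1.65)

# Footnote 9 benchmark: E[Z | Z < -1.65] = -pdf(1.65) / P(Z < -1.65),
# which is about -2.067 (+2.067 for the upper tail, by symmetry).
tail_value = -normal_pdf(1.65) / normal_cdf(-1.65)
```

Observed tail probabilities and tail values well outside these benchmarks are the signature of the fat-tailed return series discussed in the article.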
RiskMetrics Monitor
Second quarter 1996
page 34

Estimating index tracking error for equity portfolios

Alan Laubsch
Morgan Guaranty Trust Company
Risk Management Advisory
(1-212) 648-8369
laubsch_alan@jpmorgan.com

In the RiskMetrics—Technical Document, we outlined a single-index equity VaR approach to estimate the systematic market risk of equity portfolios. In this paper, we discuss the principal variables influencing the process of portfolio diversification, and suggest an approach to quantifying expected tracking error to market indices.1

The current RiskMetrics framework


The market risk of a stock, VaR_S, is defined as the market value of the investment in that stock, MV_S, multiplied by the price volatility estimate of that stock's returns, 1.65σ_RS:

[1]  VaR_S = MV_S × 1.65σ_RS

Since RiskMetrics does not publish volatility estimates for the universe of international stocks,
equity positions are mapped to their respective local indices. This methodology “maps” the return of a
stock to the return of a stock (market) index in order to attempt to forecast the correlation structure be-
tween securities. Let the return of a stock, R S , be defined as

[2] RS = βS R M + αS + εS

where

R M = the return of a stock (market) index


β S = a measure of the expected change in R S given a change in R M (beta)
α S = the expected value of the stock′s return that is firm-specific
ε S = the random element of the firm specific normal return
E [ εS] = 0
and
2 2
E εS = Sε
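RiskMetrics does not supply the betas in Eq. [2]; as footnote 2 notes, they are available from equity analytics vendors, but they can also be estimated directly as cov(R_S, R_M)/var(R_M). A minimal sketch with made-up return series (illustrative values only, not market data):

```python
# Sketch: estimating the beta of Eq. [2] from return histories.
# The return series below are made-up illustrations, not market data.

def beta(stock_returns, market_returns):
    """OLS beta: cov(R_S, R_M) / var(R_M), using sample means."""
    n = len(stock_returns)
    mean_s = sum(stock_returns) / n
    mean_m = sum(market_returns) / n
    cov = sum((s - mean_s) * (m - mean_m)
              for s, m in zip(stock_returns, market_returns)) / n
    var_m = sum((m - mean_m) ** 2 for m in market_returns) / n
    return cov / var_m

# A stock that always moves twice as much as the index has a beta of 2.
r_m = [0.01, -0.02, 0.015, 0.005, -0.01]
r_s = [2.0 * r for r in r_m]
```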

As such, the returns of assets are explained by market-specific2 (β_S·R_M) and stock-specific (α_S + ε_S) components. Similarly, the total variance of a stock is a function of the market- and firm-specific variances:

[3]  σ²_RS = β_S²·σ²_RM + σ²_εS

Since the firm-specific component can be diversified away by increasing the number of different equities that comprise a given portfolio, the market risk of the stock can be expressed as a function of the stock index:

[4]  σ_RS = β_S·σ_RM

1 This paper is an addendum to RiskMetrics—Technical Document. (3rd ed.) New York, May 1995, "Section C: Mapping to describe positions," pp. 107–156.

2 A number of equity analytics firms provide estimates of stock betas across a large number of markets.
RiskMetrics Monitor
Second quarter 1996
page 35

Estimating index tracking error for equity portfolios (continued)

Substituting Eq. [4] into Eq. [1] yields

[5]  VaR_S = MV_S × β_S × 1.65σ_RM

where

1.65σ_RM = RiskMetrics volatility estimate for the appropriate stock index.

Using this framework, the VaR of a portfolio of stocks becomes:

[6]  VaR_p = 1.65σ_RM × Σ(i=1 to N) MV_Si·β_Si
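Eq. [6] is a straightforward beta-weighted sum. A minimal sketch, using the position sizes and index price volatility from Example 1 later in this article (four USD 25MM positions and an S&P 500 price VaR, 1.65σ, of 5.72%):

```python
# Sketch of Eq. [6]: portfolio systematic VaR as the index price VaR times
# the sum of beta-weighted market values.

def systematic_var(index_price_var_pct, positions):
    """positions is a list of (market value, beta) pairs;
    index_price_var_pct is 1.65 sigma of the index, in percent."""
    return index_price_var_pct / 100.0 * sum(mv * b for mv, b in positions)

# Example 1 inputs: four USD 25MM positions, S&P 500 price VaR of 5.72%.
var_p = systematic_var(5.72, [(25, 0.5), (25, 1.5), (25, 1.0), (25, 0.8)])
# var_p is about 5.43 (USD MM), matching Example 1.
```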

The process of portfolio diversification


For well diversified equity portfolios, the current RiskMetrics technique should yield reasonable risk
estimates. For example, as a rule of thumb, in the U.S. equity market, portfolios with 25 or more stocks
are generally well diversified (although the level of diversification will vary depending on whether
there are any significant concentrations, for example to a particular industry sector).

Chart 1 shows the effect of diversification for U.S. equities, based on a study of monthly volatilities for
all stocks listed on the New York Stock Exchange.3 Total risk rapidly declines and approaches the index
(or fully diversified) volatility level.

Chart 1
Portfolio diversification effect
[Chart: portfolio standard deviation (vertical axis; 1.0 represents fully diversified market risk) plotted against the number of assets in the portfolio (horizontal axis, 1 through ∞); total risk declines rapidly toward the fully diversified level.]

3 Edwin J. Elton and Martin J. Gruber. Modern Portfolio Theory and Investment Analysis. (4th ed.) New York: John Wiley and Sons, Inc., 1991. p. 33.
RiskMetrics Monitor
Second quarter 1996
page 36

Estimating index tracking error for equity portfolios (continued)

In Modern Portfolio Theory and Investment Analysis, Elton and Gruber derive a formula that illustrates
the process of diversification with respect to the number of different stocks in a portfolio.

The variance of a portfolio of N stocks is

[7]  σ²_P = Σ(J=1 to N) X_J²·σ_J² + Σ(J=1 to N) Σ(K=1 to N, K≠J) X_J·X_K·σ_JK

where

X_J = proportion held in stock J
σ_J² = variance of returns for stock J
σ_JK = covariance of returns between stocks J and K

For an equally weighted portfolio of N assets (i.e., the proportion held in each security is X_J = 1/N), the formula for portfolio variance becomes

σ²_P = (1/N)·σ̄_J² + ((N−1)/N)·σ̄_JK

where

σ²_P = stock portfolio variance of returns
N = number of securities in the portfolio
σ̄_J² = average variance of returns for individual securities
σ̄_JK = average covariance of returns (≈ diversified index variance)

To better illustrate the process of diversification, we can rearrange the terms in the following manner:

σ²_P = (1/N)·(σ̄_J² − σ̄_JK) + σ̄_JK

The first term of the equation (1/N times the difference between the average variance of individual securities and the average covariance) corresponds to the residual firm-specific risk of a portfolio. The second term (the average covariance) represents the diversified market risk component. Now we can see that as the number of stocks in a portfolio increases, the firm-specific component declines to zero, and we are left only with undiversifiable market risk. Therefore, the variance of a broad market index, such as the S&P 500, should approximate the average covariance of stock returns (σ̄_JK ≈ σ²_RM). Substituting the average covariance with the market index variance yields

[8]  σ²_P = (1/N)·(σ̄_J² − σ²_RM) + σ²_RM
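The 1/N decay in Eq. [8] is easy to see numerically. A sketch using the volatility figures from Example 1 below (average single-stock σ of 8.9%, index σ of 3.46%):

```python
# Sketch of Eq. [8]: equally weighted portfolio variance as a function of N.

def portfolio_variance(n, avg_stock_var, index_var):
    """Eq. [8]: (1/N)*(avg stock variance - index variance) + index variance."""
    return (avg_stock_var - index_var) / n + index_var

AVG_STOCK_VAR = 8.9 ** 2   # average single-stock variance (percent squared)
INDEX_VAR = 3.46 ** 2      # diversified index variance (percent squared)

v1 = portfolio_variance(1, AVG_STOCK_VAR, INDEX_VAR)    # one stock: 79.21
v25 = portfolio_variance(25, AVG_STOCK_VAR, INDEX_VAR)  # already close to 11.97
```

With 25 stocks the residual term has shrunk to 1/25 of its single-stock size, which is why portfolios of roughly that size are treated as well diversified above.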
RiskMetrics Monitor
Second quarter 1996
page 37

Estimating index tracking error for equity portfolios (continued)

Applications
Elton and Gruber's derivation clarifies the process of diversification and underlines the key variables of portfolio risk: (a) average variance, (b) average covariance, and (c) number of elements in a portfolio. Using these key variables, we can compare the effect of diversification for different equity markets. For example, Elton and Gruber show that significantly more risk is diversifiable in the Netherlands or Belgium (76% and 80%, respectively) than in the more correlated equity markets of Germany and Switzerland (56%). In general, significant risk reduction through diversification is possible when the average covariance of a population is small compared to the average variance.

Elton and Gruber's derivation could be applied to estimating the tracking error to a broad market index when a portfolio is not fully diversified. For example, we could calculate a Diversification Scaling Factor (i.e., the ratio of expected total risk to systematic risk) to estimate the incremental risk given the number of elements within a portfolio:

[9]  Diversification Scaling Factor = σ_P / σ_RM = √[ (1/(N·σ²_RM))·(σ̄_J² − σ²_RM) + 1 ]

Using this formula, RiskMetrics users could adjust their equity VaR estimate upward to reflect firm-specific risk:

Adjusted VaR_p = Diversification Scaling Factor × VaR_p

The potential applications of this technique are broad. For example, the diversification scaling factor
could be used as a “back of the envelope” estimate of how much residual risk to expect in a stock port-
folio, given the number of different stocks held. The advantage of Elton and Gruber's derivation lies in
its simplicity. Using basic variables—the number of securities in a portfolio, and the proportion of firm
specific to diversified risk—one can get an estimate of index tracking error.
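The scaling factor applied to VaR in the examples below is the ratio of total to index volatility, i.e., the square root of the variance ratio implied by Eq. [8]. A minimal sketch:

```python
import math

# Sketch: diversification scaling factor as the ratio of expected total
# portfolio volatility to index volatility (square root of variance ratio).

def diversification_scaling_factor(n, avg_stock_sigma, index_sigma):
    """n stocks, equally weighted; sigmas in percent."""
    variance_ratio = ((avg_stock_sigma ** 2 - index_sigma ** 2)
                      / (n * index_sigma ** 2)) + 1.0
    return math.sqrt(variance_ratio)

# Example 1 inputs: 4 stocks, average stock sigma 8.9%, index sigma 3.46%.
dsf = diversification_scaling_factor(4, 8.9, 3.46)   # about 1.55
```

The factor tends to 1 as N grows, recovering the pure single-index VaR for well diversified portfolios.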

Practical considerations
To estimate the average volatility of stocks within an index, one can take either an evenly weighted or
a market-cap weighted volatility of each component security.4 Depending on client demand and re-
source availability, J.P. Morgan could potentially include this volatility estimate in future releases of
the RiskMetrics data set. Its integration into the RiskMetrics data set would be relatively straight-
forward because the overall correlation matrix would be unaffected (we assume that firm specific risk
is independent).

4 Component securities of broad market indices are available from a number of sources, for example, Bloomberg or
BARRA.
RiskMetrics Monitor
Second quarter 1996
page 38

Estimating index tracking error for equity portfolios (continued)

Example 1
Consider a portfolio consisting of four U.S. stocks, with market values of USD 25MM each, a one-month standard deviation for the S&P 500 Index of 3.46%, and an average standard deviation of stocks within the S&P 500 of 8.9%.

Example summary
Average equity volatility (1.65σ)        14.69%
Diversified index volatility (1.65σ)      5.72%
Number of securities                          4

Security    Market Value    Beta    Systematic Risk
Stock A     $25.00          0.5     $0.71
Stock B     $25.00          1.5     $2.14
Stock C     $25.00          1.0     $1.43
Stock D     $25.00          0.8     $1.14

Total systematic equity VaR (1)1,2       $5.43
Diversification scaling factor (2)1       155%
Adjusted equity VaR (3)1                 $8.41

1 This parameter is calculated as shown in its respective section below.
2 Total systematic equity VaR assumes a perfect correlation of 1 and therefore is additive: it equals the sum of the values in the "Systematic Risk" column.

(1) Total systematic equity VaR

VaR_p = 1.65σ_RM × Σ(i=1 to N) MV_Si·β_Si
      = 1.65σ_R,S&P500 × [MV_A·β_A + MV_B·β_B + MV_C·β_C + MV_D·β_D]
      = 1.65 × (3.46%) × (25MM) × [0.5 + 1.5 + 1.0 + 0.8]
      = 5.72% × (25MM) × (3.8)
      = USD 5.43MM
RiskMetrics Monitor
Second quarter 1996
page 39

Estimating index tracking error for equity portfolios (continued)

(2) Diversification scaling factor

σ_P / σ_R,S&P500 = √[ (1/(N·σ²_R,S&P500))·(σ̄_J² − σ²_R,S&P500) + 1 ]
                 = √[ (1/(4·(3.46%)²))·((8.9%)² − (3.46%)²) + 1 ]
                 = 155%

(3) Adjusted equity VaR

Finally, adjust the systematic risk to reflect the expected total risk in the portfolio:

Adjusted equity VaR_p = Diversification scaling factor × Total systematic equity VaR
                      = 155% × (5.43MM)
                      = USD 8.41MM

Note that Elton and Gruber's derivation assumes that stocks are randomly selected from a population and that portfolios are evenly distributed. This technique becomes less accurate for asymmetric portfolios, or if there are significant concentrations. For portfolios with significant industry concentrations, one could apply a sub-index that more closely reflects the portfolio composition (for example, an oil stock index or a bank stock index). Alternatively, one could depart from a one-factor CAPM approach to a multi-factor approach for equity VaR.

Total equity VaR for asymmetrically distributed portfolios

For a more precise calculation of portfolio volatility, one should consider the exact weighting of assets. The following shows how to calculate VaR for a single-market portfolio, assuming an average variance for equities:

VaR = √[ (market risk)² + (residual risk)² ]

where

market risk = 1.65σ_RM × Σ(i=1 to N) MV_Si·β_Si

residual risk = 1.65 × √[ Σ(i=1 to N) MV_Si²·(σ̄_J² − β_Si²·σ²_RM) ]

Market risk for individual stocks is aggregated linearly (correlation = 1), while residual risk is aggregated assuming independence (i.e., as the square root of the sum of the squares).
RiskMetrics Monitor
Second quarter 1996
page 40

Estimating index tracking error for equity portfolios (continued)

Example 2
Consider the same parameters outlined in Example 1, except that we hold different proportions of the same stocks in this portfolio.

Example summary
Average equity volatility (1.65σ)        14.69%
Diversified index volatility (1.65σ)      5.72%
Number of securities                          4

Security    Market Value    Beta    Systematic Risk    Residual Risk
Stock A     $10.00          0.5     $0.29              $1.44
Stock B     $20.00          1.5     $1.71              $2.39
Stock C     $30.00          1.0     $1.71              $4.06
Stock D     $40.00          0.8     $1.83              $5.58

Total systematic equity VaR (1)1,2       $5.54
Total residual risk (2)1,3               $7.44
Total equity VaR (3)1                    $9.28

1 This parameter is calculated as shown in its respective section below.
2 Total systematic equity VaR assumes a perfect correlation of 1 and therefore is additive: it equals the sum of the values in the "Systematic Risk" column.
3 Total residual risk assumes a correlation of zero and therefore is aggregated as the square root of the sum of squares, as shown below.

(1) Total systematic equity VaR

Systematic VaR_p = 1.65σ_RM × Σ(i=1 to N) MV_Si·β_Si
                 = 1.65σ_R,S&P500 × [MV_A·β_A + MV_B·β_B + MV_C·β_C + MV_D·β_D]
                 = 1.65 × (3.46%) × [(10MM)(0.5) + (20MM)(1.5) + (30MM)(1.0) + (40MM)(0.8)]
                 = 5.72% × (97MM)
                 = USD 5.54MM
RiskMetrics Monitor
Second quarter 1996
page 41

Estimating index tracking error for equity portfolios (continued)

(2) Total residual risk

Total residual risk = 1.65 × √[ Σ(i=1 to N) MV_Si²·(σ̄_J² − β_Si²·σ²_RM) ]
                    = 1.65 × √[ MV_A²·(σ̄_J² − β_A²·σ²_RM) + MV_B²·(σ̄_J² − β_B²·σ²_RM)
                              + MV_C²·(σ̄_J² − β_C²·σ²_RM) + MV_D²·(σ̄_J² − β_D²·σ²_RM) ]
                    = 1.65 × √[ 10²·(8.9² − (0.5)²·(3.46)²) + 20²·(8.9² − (1.5)²·(3.46)²)
                              + 30²·(8.9² − (1)²·(3.46)²) + 40²·(8.9² − (0.8)²·(3.46)²) ]
                    = USD 7.44MM

(3) Total equity VaR

Total equity VaR_p = √[ (market risk)² + (total residual risk)² ]
                   = √[ (5.54)² + (7.44)² ]
                   = USD 9.28MM
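Example 2 can be replicated end-to-end in a few lines. A sketch using the same inputs (σ̄_J = 8.9%, σ_RM = 3.46%, positions as market value and beta):

```python
import math

# End-to-end replication of Example 2 from the article's figures.
SIGMA_J, SIGMA_RM = 8.9, 3.46   # average stock and index sigma, in percent
positions = [(10, 0.5), (20, 1.5), (30, 1.0), (40, 0.8)]  # (MV in MM, beta)

# Systematic risk aggregates linearly (correlation = 1).
market_risk = 1.65 * SIGMA_RM / 100.0 * sum(mv * b for mv, b in positions)

# Residual risk aggregates as the square root of the sum of squares.
residual_risk = 1.65 / 100.0 * math.sqrt(
    sum(mv ** 2 * (SIGMA_J ** 2 - (b * SIGMA_RM) ** 2) for mv, b in positions))

total_var = math.sqrt(market_risk ** 2 + residual_risk ** 2)
# market_risk ~ 5.54, residual_risk ~ 7.44, total_var ~ 9.28 (USD MM)
```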

Multi-market portfolios
VaR for a portfolio consisting of equities from several different markets follows the same methodology
of aggregating market and residual risk. The difference is that the correlation between different market
indices is incorporated, as well as FX risk.
RiskMetrics Monitor
Second quarter 1996
page 42
RiskMetrics  Monitor
Second quarter 1996
page 43

Previous editions of the RiskMetrics Monitor

1st Quarter 1996: January 23, 1996

• Basel Committee revises market risk supplement to 1988 Capital Accord.

• A look at two methodologies that use a basic delta-gamma parametric VaR precept but achieve
results similar to simulation.

4th Quarter 1995: October 12, 1995

• Exploring alternative volatility forecasting methods for the standard RiskMetrics monthly
horizon.

• How accurate are the risk estimates in portfolios that contain Treasury bills proxied by LIBOR
data?

• A solution to the standard cashflow mapping algorithm, which sometimes leads to imaginary
roots.

3rd Quarter 1995: July 5, 1995

• Mapping and estimating VaR for interest rate swaps.

• Adjusting correlations obtained from nonsynchronous data.


RiskMetrics Monitor
Second quarter 1996
page 44

RiskMetrics products Worldwide RiskMetrics contacts


Introduction to RiskMetrics: An eight-page document that For more information about RiskMetrics, please contact the
broadly describes the RiskMetrics methodology for author or any other person listed below.
measuring market risks.
North America

RiskMetrics Directory: Available exclusively on-line, a list New York Jacques Longerstaey (1-212) 648-4936
of consulting practices and software products that incorporate longerstaey_j@jpmorgan.com
the RiskMetrics methodology and data sets. Chicago Michael Moore (1-312) 541-3511
moore_mike@jpmorgan.com
RiskMetrics Monitor: A quarterly publication that Mexico Beatrice Sibblies (52-5) 540-9554
discusses broad market risk management issues and statistical sibblies_beatrice@jpmorgan.com
questions as well as new software products built by third-party
San Francisco Paul Schoffelen (1-415) 954-3240
vendors to support RiskMetrics.
schoffelen_paul@jpmorgan.com

RiskMetrics data sets: Two sets of daily estimates of future Toronto Dawn Desjardins (1-416) 981-9264
desjardins_dawn@jpmorgan.com
volatilities and correlations of approximately 420 rates and
prices, with each data set totaling 88,000+ data points. One set Europe
is for computing short-term trading risks, the other for medium-
London Benny Cheung (44-71) 325-4210
term investment risks. The data sets currently cover foreign
cheung_benny@jpmorgan.com
exchange, government bond, swap, and equity markets in up to
22 currencies. Eleven commodities are also included. Brussels Isabelle Vanderstricht (32-2) 508-8060
vanderstricht_i@jpmorgan.com
A RiskMetrics Regulatory data set, which incorporates the Paris Ciaran O’Hagan (33-1) 4015-4058
latest recommendations from the Basel Committee on the use ohagan_c@jpmorgan.com
of internal models to measure market risk, is now available. Frankfurt Robert Bierich (49-69) 712-4331
bierich_r@jpmorgan.com
Bond Index Cash Flow Maps: A monthly insert into the
Milan Roberto Fumagalli (39-2) 774-4230
Government Bond Index Monitor outlining synthetic cash flow fumagalli_r@jpmorgan.com
maps of J.P. Morgan’s bond indices.
Madrid Jose Luis Albert (34-1) 577-1722
albert_j-l@jpmorgan.com
Trouble accessing the Internet? If you encounter any
difficulties in either accessing the J.P. Morgan home page at Zurich Viktor Tschirky (41-1) 206-8686
http://www.jpmorgan.com or in downloading the tschirky_v@jpmorgan.com
RiskMetrics data files, you can call 1-800-JPM-INET in the
Asia
United States.
Singapore Michael Wilson (65) 326-9901
wilson_mike@jpmorgan.com

Tokyo Yuri Nagai (81-3) 5573-1168


nagai_y@jpmorgan.com

Hong Kong Martin Matsui (85-2) 973-5480


matsui_martin@jpmorgan.com

Australia Debra Robertson (61-2) 551-6200


robertson_d@jpmorgan.com

RiskMetrics is based on, but differs significantly from, the market risk management systems developed by J.P. Morgan for its own use. J.P. Morgan does not warrant any results
obtained from use of the RiskMetrics data, methodology, documentation or any information derived from the data (collectively the “Data”) and does not guarantee its sequence,
timeliness, accuracy, completeness or continued availability. The Data is calculated on the basis of historical observations and should not be relied upon to predict future market
movements. The Data is meant to be used with systems developed by third parties. J.P. Morgan does not guarantee the accuracy or quality of such systems.

Additional information is available upon request. Information herein is believed to be reliable, but J.P. Morgan does not warrant its completeness or accuracy. Opinions and estimates constitute our judgement and are
subject to change without notice. Past performance is not indicative of future results. This material is not intended as an offer or solicitation for the purchase or sale of any financial instrument. J.P. Morgan may hold a
position or act as market maker in the financial instruments of any issuer discussed herein or act as advisor or lender to such issuer. Morgan Guaranty Trust Company is a member of FDIC and SFA. Copyright 1996 J.P.
Morgan & Co. Incorporated. Clients should contact analysts at and execute transactions through a J.P. Morgan entity in their home jurisdiction unless governing law permits otherwise.