School of Economics
Lund University
Sweden Spring 2000
Tutor: Authors:
Conclusions: We can conclude that most VaR methods work well at the 95%
confidence level, while at the 99% level the results are more
ambiguous. The methods based on the assumption of normally
distributed returns produce attractive results for the OMX
portfolio, but for the small-cap and mixed portfolios these methods
tend to underestimate the VaR. Furthermore, our study shows that
the portfolio returns are not normally distributed. For this reason
we recommend the historical simulation approach, which does not
rest on the assumption of normality. In addition, the historical
simulation with a window size of 250 trading days produces the
most attractive results for the small-cap and mixed portfolios.
However, none of the VaR methods seems to produce perfectly
reliable results. Therefore VaR can be questioned as a
useful tool for asset managers managing Swedish equity portfolios.
1. Introduction 1
1.1 Background…………………………………………………………………………... 1
1.2 Problem Discussion…………………………………………………………………... 2
1.3 Purpose………………………………………………………………………………. 3
1.4 Target Group ………………………………………………………………………… 4
1.5 Disposition…………………………………………………………………………… 4
2. Methodology 5
2.1 General methodology………………………………………………………………… 5
2.1.1 Choice of Subject…………………………………………………………………... 5
2.1.2 Perspective………………………………………………………………………5
2.1.3 Scientific Approach…………………………………………………………….. 6
2.1.4 Theory and Object……………………………………………………………….7
2.2 Practical Methodology……………………………………………………………….. 7
2.2.1 Primary Data……………………………………………………………………. 7
2.2.2 Secondary Data…………………………………………………………………. 8
2.2.3 Criticism of the Sources…………………………………………………………8
2.3 Validity……………………………………………………………………………….. 9
2.4 Reliability…………………………………………………………………………….. 9
2.5 Empirical Study………………………………………………………………………. 9
2.6 Criticism of Chosen Methodology…………………………………………………… 10
2.7 Alternative Methodology…………………………………………………………….. 10
3. VaR Theory 11
3.1 Risks………………………………………………………………………………….. 11
3.1.1 Value at Risk (VaR)……………………………………………………………..13
3.2 The Implications of VaR for Asset Managers………………………………………... 14
3.3 Normal Distribution of Financial Returns……………………………………………. 16
3.3.1 Skewness and Kurtosis…………………………………………………………. 18
3.4 VaR Approaches………………………………………………………………………19
3.4.1 The Historical Simulation Approach…………………………………………… 19
3.4.1.1 Advantages and Disadvantages of the HS Approach…………………….. 20
3.4.2 The Equally Weighted Average Approach……………………………………... 21
3.4.2.1 Advantages and Disadvantages of the EqWMA Approach………………. 21
3.4.3 The Exponentially Weighted Average Approach………………………………. 22
3.4.3.1 What Value of λ Should Be Used?……………………………………….. 24
3.4.3.2 Advantages and Disadvantages of the ExpWMA Approach……………... 24
3.4.4 The New Improved VaR Methodology………………………………………… 25
3.4.4.1 Advantages and Disadvantages of the Improved VaR Methodology…….. 26
3.4.5 Monte Carlo Simulation…………………………………………………………26
3.4.5.1 Advantages and Disadvantages with the MCS Approach………………... 27
3.4.6 Semi-parametric VaR Approach……………………………………………….. 27
3.4.6.1 Advantages and Disadvantages with the Semi-parametric VaR Approach.28
3.4.7 The Stress Testing Approach…………………………………………………… 29
3.4.7.1 Advantages and Disadvantages with Stress Testing……………………… 29
3.5 Multi-day VaR Prediction……………………………………………………………. 30
4. Statistical Methodology 31
4.1 Working Process………………………………………………………………………31
4.2 VaR Methods Used in the Study……………………………………………………... 32
4.3 Calculation Procedure for the HS Approach…………………………………………. 33
4.4 Calculation Procedure for the EqWMA Approach…………………………………... 34
4.5 Calculation Procedure for the ExpWMA Approach…………………………………. 34
4.6 Performance Evaluation Criteria……………………………………………………... 35
4.6.1 Mean Relative Bias……………………………………………………………... 35
4.6.1.1 Calculation Procedure…………………………………………………….. 35
4.6.2 Root Mean Squared Relative Bias……………………………………………… 36
4.6.2.1 Calculation Procedure…………………………………………………….. 36
4.6.3 Annualized Percentage Volatility………………………………………………. 36
4.6.3.1 Calculation Procedure…………………………………………………….. 36
4.6.4 Fraction of Outcomes Covered…………………………………………………. 37
4.6.4.1 Calculation Procedure…………………………………………………….. 37
4.6.5 Multiple Needed to Attain Desired Coverage………………………………….. 37
4.6.5.1 Calculation Procedure…………………………………………………….. 38
4.6.6 Average Multiple of Tail Event to Risk Measure………………………………. 38
4.6.6.1 Calculation Procedure…………………………………………………….. 38
4.6.7 Maximum Multiple of Tail Event to Risk Measure……………………………..38
4.6.7.1 Calculation Procedure…………………………………………………….. 39
4.6.8 Correlation between Risk Measure and Absolute Value of Outcome………….. 39
4.6.8.1 Calculation Procedure…………………………………………………….. 39
4.6.9 Mean Relative Bias for Risk Measures Scaled to Desired Level of Coverage…. 40
4.6.9.1 Calculation procedure…………………………………………………….. 40
4.7 Hypothesis Testing…………………………………………………………………… 40
4.7.1 Actual Portion of Fraction of Outcomes Covered……………………………… 41
4.7.2 Difference in FoOC between OMX and Small-cap Portfolios…………………. 42
4.7.3 Significance Test of the Correlation Coefficients……………………………….42
4.8 Normality Tests………………………………………………………………………. 43
4.9 Criticism of Primary Data……………………………………………………………. 44
5. Results 45
5.1 Mean Relative Bias…………………………………………………………………... 45
5.2 Root Mean Squared Relative Bias……………………………………………………. 46
5.3 Annualized Percentage Volatility…………………………………………………….. 47
5.4 Fraction of Outcomes Covered………………………………………………………..47
5.4.1 Significance Testing……………………………………………………………..48
5.4.1.1 Fraction of Outcomes Covered………………………………………….... 49
5.4.1.2 Difference between the OMX and Small-cap Portfolios…………………. 49
5.5 Multiple Needed to Attain Desired Coverage………………………………………... 50
5.6 Average Multiple of Tail Events to Risk Measure…………………………………… 50
5.7 Maximum Multiple of Tail Event to Risk Measure………………………………….. 51
5.8 Correlation between Risk Measure and Absolute Value of Outcome………………...52
5.8.1 Significance Testing……………………………………………………………. 52
5.9 Mean Relative Bias for Risk Measures Scaled to Desired Level of Coverage………. 52
5.10 Normality Tests……………………………………………………………………... 53
5.10.1 Results from the Normality Tests……………………………………………... 53
6. Conclusions 57
6.1 Evaluation of VaR Methods………………………………………………………….. 57
6.2 The Distribution of Financial Returns………………………………………………... 58
6.3 Implications of VaR for Asset Managers…………………………………………….. 59
6.4 Suggestions for further Research……………………………………………………...61
List of references
Appendices
List of Tables
Table 1. The number of historical observations used by the ExpWMA approach…………... 23
Table 2. Results from normality tests…………………………………………………………54
List of Figures
Figure 1. Normal vs leptokurtic distribution…………………………………………………. 17
Figure 2. Normal distributions with different variances……………………………………... 25
Figure 3. A random normal distribution plotted against the OMX portfolio returns…………54
Figure 4. A random normal distribution plotted against the small-cap portfolio returns……..55
Figure 5. A random normal distribution plotted against the mixed portfolio returns………... 55
Chapter 1 - Introduction
1.1 Background
While finance is about risk, return and risk management, the specialized study of
risk is a rather recent phenomenon.1 It has become a critical issue over the last
decade since organizations have suffered great losses, often from risks they never
should have taken in the first place.2 The most well-known example of this is
probably the collapse of Barings Bank in 1995, which was caused by the Singapore-
based derivatives trader Nick Leeson, who took large positions in futures and
options on Asian stock exchanges.3 Other internationally well-known companies
that have been seriously hurt by insufficient risk management techniques are the
German commodity trading firm Metallgesellschaft in 1993 and Sumitomo Corp.
in 1996, which lost more than 1.8 billion USD through unauthorized copper trades.4
In Sweden, Electrolux lost 250 million SEK on currency trading and
MeritaNordbanken lost 290 million SEK taking short positions in stocks. Although
these losses to a large extent can be labelled as fraud, they are also the results of an
unsatisfactory risk communication system.5
Today the financial system is very vulnerable, since the equity-to-assets ratio
(solidity) in the banking industry is as low as a few percentage points. One way to
address this would be to raise the capital base in the financial sector, so that banks
could more easily cope with unanticipated disturbances and falls in the financial
markets. On the other hand, keeping excess capital is costly and has to be paid for
by someone, most likely the clients of the banks. Solidity has become a less
important measure, however, since many risks are off the balance sheet. Therefore
a measure of the total risk exposure is needed.6
The number one tool in this respect has become Value-at-Risk (from now on
referred to as VaR), which today is used by all US commercial banks to monitor
trading portfolios on a daily basis.7 The VaR method is basically a statistical
estimate which measures, at a certain confidence level, the amount that may be
lost within a certain time period due to potential changes in the market prices of
the underlying assets.8
1. Danielsson, J., "Class notes Corporate Finance & Financial Markets", (1998-1999), p. 1.
2. Thornberg, J., "Derivative users lack refined controls of risk", (1998), p. 2.
3. Koupparis, P., "Barings – A Random Walk to Self-Destruction", (1995), p. 3.
4. Danielsson, J., "Class notes Corporate Finance & Financial Markets", (1998-1999), p. 4-5.
5. Björklund, M., "Bristande kontroll möjliggör svindlerier", (2000), p. 7.
6. Bäckström, U., "Betydelsen av riskhantering", (2000), p. 1-2.
7. Jorion, P., "In Defense of VAR", (1997), p. 1.
8. Yiehmin Lui, R., "VaR and VaR derivatives", (1996), p. 2.
1.2 Problem Discussion
To further complicate the VaR calculations, the volatility on the stock market is
not constant over time. Evidence shows that financial returns exhibit clusters of
high volatility, i.e. a day with a large absolute outcome is followed by another day
with a large absolute outcome.12 In addition, the volatility on the Stockholm stock
exchange has increased during the second half of the 90s and today there is a
larger portion of "glamour" stocks, i.e. stocks with low book-to-market and high
P/E ratios.13,14 These stocks have a more volatile share price development, since
the time horizon for their expected profits is longer than for other companies and
they are more dependent on future expectations.

In addition, there might be a difference in how applicable the VaR models are for
equities with different market capitalisation (market cap). Smaller companies'
distribution of returns might differ from that of larger companies, since their
shares might not be as frequently traded, and new information regarding these
stocks can have a greater impact on the share price.
9. Bäckström, U., "Betydelsen av riskhantering", (2000), p. 3.
10. See for instance JPMorgan/Reuters, RiskMetrics – Technical Document, (1996), p. 64, or Dowd, K., "A Value at Risk Approach to Risk-Return Analysis", (1999), p. 66.
11. Danielsson, J., "Class notes Corporate Finance and Financial Markets", (1998-1999), p. 11.
12. JPMorgan/Reuters, RiskMetrics – Monitor, (1996), p. 9.
13. See appendix 1.
14. Affärsvärlden, "Riktningen och värde avgör svängningarna", (1998), p. 28.
From an asset manager's perspective, the portfolio risk is one of the most decisive
parameters to have perfect control over. A well-functioning VaR measurement
method could therefore be a superior way to supervise the portfolio risk and
quantify potential losses. However, as stated before, there are several potential
problems that can make VaR an unstable and perhaps unreliable method, where
the risks most crucial to capture, i.e. extreme events, are the most difficult to
cover. A well-functioning VaR measurement method could also serve as a
communication tool between customers/management and the asset manager.
Even if the VaR calculations are quite complex for the general public, the results
per se are easy to understand. For an asset manager, discretion regarding the
portfolio holdings is important, since the holdings determine the level of
competitiveness. VaR could therefore be a way of comparing portfolio risks
between asset managers without unmasking too much of the holdings.
Some previous studies have been made in this area, but these have mostly
focused on the American financial markets. In addition, none of the studies we
have seen has tested different asset characteristics. We have tried to take one
step further by looking at possible impacts of equity market cap on the reliability
of VaR.
1.3 Purpose
The purpose of this Master thesis is to examine the applicability of different VaR
methods for Swedish equity portfolios. In addition, we will analyse whether equity
market cap has any impact on how well-functioning and reliable the VaR methods
are. Based on these results we will discuss the implications of VaR for asset
managers.
3
1.4 Target Group
This Master thesis is intended for people with at least a basic knowledge of
finance. The target group is mainly people working with asset management and
others with an interest in financial economics.
1.5 Disposition
In the initial stage of chapter 3 different types of risk are presented. Then we
describe the usefulness of VaR for asset managers and present theories regarding
the distribution of financial returns. Finally, we outline seven different VaR
approaches, with a particular focus on the three we use in our study.
Chapter 4 describes how the study was conducted. We present in detail our
procedures for data collection, portfolio construction, choice of VaR methods and
calculations. In addition, the performance criteria and the normality and
statistical significance tests used in the study are described.
In chapter 5 the results of the study are presented. Based on the performance
criteria and the normality and significance tests presented in the previous chapter,
we analyse and comment on our results.
In chapter 6 we will present the conclusions of our study. The applicability of the
VaR methods as well as the implications for asset managers are discussed.
4
Chapter 2 - Methodology
The methodology chapter is one of the elementary parts of an academic paper,
and what is written should be open to evaluation and replication.15 This means
that the content of the paper should be open to questioning and that repeating the
same investigation should yield the same result. This chapter describes our
approach to reaching the purpose and goal of this Master thesis and how we have
tackled the subject. We discuss the validity as well as the reliability of the sources,
and round off by proposing other ways in which the subject could have
been addressed. A detailed description of the methodology and data used in the
VaR tests is presented in chapter 4.
2.1 General Methodology

2.1.1 Choice of Subject

VaR is a risk measurement approach that since its breakthrough in the beginning
of the 90s has become increasingly popular, especially in the banking industry.
Several examinations of the VaR concept have been made from many different
perspectives, but we have not identified any research on VaR with a focus on the
Swedish equity market. In addition, the topics of stock market risk and equity risk
management are very timely, both due to recent large losses and the increasing
volatility on the Swedish stock market during the last five years. We find the
combination of VaR for Swedish equities and the risk topics per se to be very
appealing, which explains the choice of subject for this Master thesis.
2.1.2 Perspective
We have chosen to write this Master thesis from the perspective of an asset
manager managing equity portfolios. We find this perspective to be the most
interesting one, since asset managers are likely the actors on the stock market
that will benefit the most from a risk measurement tool as stock markets become
increasingly volatile.
15. Backman, J., Att skriva och läsa vetenskapliga rapporter, (1985), p. 27.
2.1.3 Scientific Approach

There are two main scientific ways to view a problem: the positivistic and the
hermeneutic.16 The positivistic is basically a rationalistic view with its roots in the
growing scientific society of the 18th century. The central point is that there is
a reality about which we can gain knowledge by observation. This knowledge is
neutral and totally objective, i.e. free of personal conceptions.17 Through experiment,
quantitative measurement and logical reasoning, theories are built and can
then be converted into hypotheses that can be tested. Statements should be
presented with clear definitions and a logical as well as analytical approach.18
The hermeneutic approach, on the other hand, is based on the view that, in
contrast to the laws of nature, there are no constant laws for human behaviour or
for society.19 The dialogue between people plays a central role, and the
hermeneutic approach means that the scientist should try to understand other
scientists' actions and get a general picture of the subject.20
Both of these approaches can be criticized, and many scientific studies combine
the two. The positivistic approach could be regarded as too simplistic, but can in
some cases be a complement to the hermeneutic approach, whose reliability is
hard to control.21
In this Master thesis we base our research on a quantitative study and
therefore take a positivistic approach. Hence, we will try to logically
analyze the results of our study of VaR for stocks listed on the Stockholm
stock exchange. In the interpretation of the study we will try to raise our
findings to a more general level and discuss how they can be of assistance to asset
managers. However, asset managers are also influenced by many other variables,
for example risk attitude and how they view the stock market. Therefore, we
adopt a more hermeneutic approach in our discussion of the implications of
VaR for asset managers.
16. Svenning, C., Metodboken, (1996), p. 25.
17. Halvorsen, K., Samhällsvetenskaplig metod, (1992), p. 14.
18. Wiedersheim-Paul, F. & Eriksson, L., Att utreda forska och rapportera, (1991), p. 150.
19. Halvorsen, K., Samhällsvetenskaplig metod, (1992), p. 14.
20. Wiedersheim-Paul, F. & Eriksson, L., Att utreda forska och rapportera, (1991), p. 151.
21. Ibid, p. 151.
2.1.4 Theory and Object

To simplify reality it is common to use different kinds of theories and models.
These theories and models also facilitate the evaluation of our findings. Theories
are often more general than models, since theories can incorporate more variables
and refer to longer periods of time. Models, on the other hand, are developments
of theories and aim at specifying what the theories have built up, so that the
models can be used in practice. Models can also be built without any support from
theories, especially in undeveloped areas.22
The object we have chosen to study is, as mentioned before, the Stockholm stock
exchange. Stocks on the most traded list and the A-list, as well as smaller
companies traded on the over-the-counter (OTC) list and the O-list, are included
in the study.
2.2 Practical Methodology

Practical methodology involves how data is collected and how it is evaluated and
analyzed. This data can be of two kinds: primary and secondary data.23

2.2.1 Primary Data

The primary data used in this Master thesis is exclusively the processed
historical time series of stock data used to compute the different VaR
measures. Put in its context, this data gives information on how applicable
different VaR methods are for equities listed on the Stockholm stock exchange.
The data has been collected from the Bloomberg database and was then processed
into data files that could be handled in Excel. A more thorough description of
how this research was done is found in chapter 4.
22. Halvorsen, K., Samhällsvetenskaplig metod, (1992), p. 44.
23. Wiedersheim-Paul, F. & Eriksson, L., Att utreda forska och rapportera, (1991), p. 76.
2.2.3 Criticism of the Sources

Primary data – The criticism of the primary sources is discussed in chapter 4,
section 4.9.
Secondary data – To evaluate the secondary sources, three criteria can be used.24
The first criterion is to analyze how current the sources are. Since VaR is a
relatively new topic, it is all the more important that the sources are up to date. In
our case, we believe this criterion is met because most of the articles and other
sources we have used were published in the mid or late 90s.

The second criterion is to evaluate whether the authors of the sources have any
interests of their own in the subject they are writing about. We have mainly used
scientific research articles and hence believe that we meet this criterion.
However, some of our sources are published by the RiskMetrics group, and since
this is a profit-making company it is possible that they try to promote their own
way of viewing VaR models. Therefore, these sources have to be interpreted carefully.
The third criterion is to investigate whether the sources are related to one
another. Some of our articles are written by the same person or organization. In
addition, the sources reference each other and hence are not totally independent.
To minimize this problem we have used numerous different sources and authors
that are well known and accepted.
A large part of the secondary data is collected from American studies, which is
natural since VaR is a frequently used method in the US. This can make these
studies difficult to translate to Swedish circumstances. However, there are no
proper alternatives, since VaR is a relatively new phenomenon and the American
financial markets are normally at the forefront regarding new issues.
24. Wiedersheim-Paul, F. & Eriksson, L., Att utreda forska och rapportera, (1991), p. 82.
2.3 Validity
In an evaluation of the sources, other terms that should be discussed are validity
and reliability. Validity can be defined as a measurement tool's ability to measure
what it is intended to measure.25 In our case this means that we have to consider
whether the theory we have used to build models, and the data we have collected
for the study, actually allow us to reach the purpose of this Master thesis.
VaR research is developing rapidly and hence, there are no absolute truths about
which theories and models are the correct ones. However, we have tried to use the
latest findings in the area to incorporate what is known about VaR so far. In
addition, the theories and models that our study is based on are to a large extent
well known and very well accepted in finance.
2.4 Reliability
The term reliability means that the measurement tools should give trustworthy and
stable results.26 The methodology we have used should be reusable by others, and
the same results should be reached. We believe that the data we have used is
reliable, and the fact that it had to be somewhat processed, e.g. filling in
missing values on days when no shares changed hands and correcting for
dividends, does not change this opinion. Another thing that could be questioned
is the randomness of the companies chosen for the mixed and small-cap portfolios.
We have used Excel to construct the portfolios and hence no bias should enter
them.
2.5 Empirical Study

We have put together three different stock portfolios, each containing ten stocks.
The VaR for these portfolios is estimated using three different approaches:
historical simulation, the equally weighted moving average and the exponentially
weighted moving average. In addition, we have used different window lengths,
i.e. numbers of observations, to calculate the volatility. One approach in
combination with a specific window is referred to as a method.
25. Wiedersheim-Paul, F. & Eriksson, L., Att utreda forska och rapportera, (1991), p. 28.
26. Ibid, p. 29.
The total time series of historical stock data ranges from 1994-01-01 to
1999-12-31. To evaluate our findings, nine performance criteria are used.27 A
detailed description of how the study was performed and evaluated can be found
in chapter 4.
2.6 Criticism of Chosen Methodology

By using numbers and statistical methods there is a chance of obtaining precise
information, but also of obtaining distorted and therefore inappropriate
information.28 To be able to use the findings in the right way, the data of course
has to be collected and treated in an appropriate manner, since mistakes can lead
to misinterpretations that affect the whole study.
2.7 Alternative Methodology

To reach the purpose of this Master thesis we believe there is no proper
alternative to a quantitative study. A qualitative study would never give us
the results we are looking for. However, it is possible that other approaches
could have been included in calculating the VaR. We have used the approaches
that are most established in financial theory today and that have been used in
studies with similar purposes. Regarding the discussion of whether VaR is a useful
measure for an asset manager, an opinion poll could perhaps be of interest to get a
direct viewpoint on the usefulness of VaR in practice. Instead we chose to focus
on whether VaR calculations are trustworthy in the first place, rather than
examine the interest from the market before we know if VaR per se is a useful
tool.
27. These are found in Hendricks, D., "Evaluation of Value-at-Risk Models Using Historical Data", (1996).
28. Wiedersheim-Paul, F. & Eriksson, L., Att utreda forska och rapportera, (1991), p. 69.
Chapter 3 - VaR Theory
In this chapter the theories on which the study is built are outlined. First, a broad
view of different types of risk is given. Next the chapter focuses on market risk,
in particular the term VaR is presented, and the implications of VaR for asset
managers are described. Further, the distribution of financial returns is discussed,
since this is a fundamental point for VaR. Finally, different approaches for
measuring VaR are presented in detail.
3.1 Risks
Risk per se can be defined as the volatility of unexpected outcomes, generally for
values of assets and liabilities.29 Specialized studies of risk are a rather new
phenomenon, but recent large losses such as Orange County, Barings and
Metallgesellschaft have motivated a rapid development of specialized techniques
for risk management.30 This Master thesis specializes in a financial risk called
market risk, which involves the uncertainty of earnings resulting from changes in
market conditions such as asset prices, interest rates, volatility, and market
liquidity.31 However, market risk is just one form of risk to which participants in
the financial market are subject. The major types of risk can briefly be defined
as:32,33
29. Jorion, P., Value at Risk, (1997), p. 3.
30. Danielsson, J., "Class notes Corporate Finance & Financial Markets", (1998-1999), p. 1.
31. JP Morgan/RiskMetrics group, Introduction to RiskMetrics, (1995), p. 2.
32. Danielsson, J., "Class notes Corporate Finance & Financial Markets", (1998-1999), p. 1-2.
33. Jorion, P., Value at Risk, (1997), p. 3-18.
• Market risk – the risk of losses arising from changes in market
conditions. Market risk includes basis risk and gamma risk.34 Furthermore,
market risk can be absolute, the loss measured in dollar terms, or relative,
the loss relative to a benchmark index.
• Credit risk – the risk of a loss due to the inability of a
counterparty to meet its obligations. Credit risk can also lead to losses when
debtors are downgraded by credit agencies, usually leading to a fall in the
market value of their obligations. Furthermore, credit risk includes
sovereign risk and settlement risk. The former arises for instance when a
country imposes a foreign-exchange control system that limits
counterparties' obligations. The latter refers to the risk that a counterparty
cannot fulfil its obligations after the other party has already made payment.
• Liquidity risk – can take two forms: market/product liquidity and cash
flow/funding. The former type of risk arises when a transaction cannot be
conducted at prevailing market prices due to insufficient market activity and
poor depth and resiliency in the market. The latter type of risk is associated
with the inability of a firm to fund illiquid assets or to meet cash flow
obligations, which may force early liquidation.
• Operational risk – the risk from the failure of internal systems such as
management failure, fraud, and errors made in instructing payments or
settling transactions.
• Legal risk – risk of changes in regulations or when a counterparty does not
have the legal or regulatory authority to engage in a transaction.
The first proposal on market risk was constructed in 1993 and called the "Basle
proposal on market risk". It was a building-block approach and a start by the
authorities to set up rules and regulate market risks. The proposal was extended
in April 1995 to become the "1995 Basle proposal on market risk".35 This proposal
reflects the authorities' willingness to prevent systemic risk, and it contains a
recommendation on how to calculate VaR, which states that:36,37
34. Basis risk = the risk of a change or breakdown of a relationship between products used to hedge each other. Gamma risk = the risk due to nonlinear relationships.
35. Styblo Beder, T., "VaR: Seductive but Dangerous", (1995), p. 17.
36. Maymin, Z., "VaR variations: is multiplication factor still too high?", (1998), p. 1.
37. Danielsson, J., "Class notes Corporate Finance & Financial Markets", (1998-1999), p. 7.
3.1.1 Value at Risk (VaR)
The primary tool for market risk is VaR, which is a method of assessing risk
through standard statistical techniques. Philippe Jorion defines VaR as a measure
of the worst expected loss over a given time interval under normal market
conditions at a given confidence level.38 Formally VaR is defined as:

α = ∫_(−∞)^(VaR) f(x) dx   or   Pr[x ≤ VaR] = α      (1)

where x stands for the change in the market value of a given portfolio over a given
time horizon and α for the probability. Either of the equations states that a loss
equal to, or larger than, the specified VaR occurs with probability α.39
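Definition (1) is what the historical simulation approach of section 3.4.1 exploits directly: instead of assuming a distributional form for f(x), the VaR estimate is taken as the α-quantile of the observed returns. A minimal Python sketch (the order-statistic rule for picking the quantile is one of several possible conventions):

```python
def historical_var(returns, alpha=0.05, window=250):
    """Historical-simulation VaR: the alpha-quantile of the last `window` returns.

    The value returned is a (negative) portfolio return; under the empirical
    distribution, a loss at least this severe occurs with probability alpha.
    """
    sample = sorted(returns[-window:])          # ascending: worst outcomes first
    k = max(int(alpha * len(sample)) - 1, 0)    # index of the alpha-quantile
    return sample[k]
```

With α = 0.05 and a 250-day window this is roughly the 12th worst daily return of the past year, which is why historical simulation needs no normality assumption but reacts slowly to regime shifts.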
The inputs used to calculate VaR for a certain asset are the volatility, the time
horizon and a choice of confidence level. The volatility is estimated implicitly
from option pricing or through statistical models. In practice, past observations
are often used to estimate the future volatility. The time period chosen affects
the measured volatility and therefore also the VaR: a longer time period gives a
higher volatility measure and hence a higher VaR. The chosen confidence level
states how often the loss on the specific asset will be greater than the VaR. The
most commonly used confidence levels are 95% and 99%.40 The formula for
calculating VaR for one asset is:

VaR = CI * V * MV      (2)

where CI is the multiplier corresponding to the chosen confidence level, V the
estimated volatility and MV the market value of the asset.41
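Formula (2) is straightforward to apply. In the sketch below, the multipliers 1.645 (95%) and 2.326 (99%) are the standard normal quantiles, so the example inherits the normality assumption discussed in section 3.3; the position size and volatility are purely illustrative.

```python
# Illustrative standard-normal multipliers for the two usual confidence levels.
CI_95, CI_99 = 1.645, 2.326

def single_asset_var(ci, volatility, market_value):
    """Formula (2): VaR = confidence multiplier * volatility * market value."""
    return ci * volatility * market_value

# A hypothetical 1 MSEK position with 1.5% daily volatility:
var_95 = single_asset_var(CI_95, 0.015, 1_000_000)   # about 24 700 SEK
```

Note how the 99% figure is mechanically larger than the 95% one: only the multiplier changes, not the volatility estimate.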
Further, for a portfolio of multiple assets the correlation between the portfolio
assets has to be taken into consideration. For two assets the portfolio VaR is
calculated with formula (3) below:
38. Jorion, P., Value at Risk, (1997), p. xiii.
39. Danielsson, J. & de Vries, G. C., "Value-at-Risk and Extreme Returns", (1997), p. 9.
40. Söderlind, L., Att mäta ränterisker, (1996), p. 70-75.
41. Ibid, p. 77.
VaR_P = √(VaR1² + VaR2² + 2 · Corr1,2 · VaR1 · VaR2)      (3)

where VaR1 is the VaR for the first asset and VaR2 is the VaR for the second asset.
To calculate VaR for a portfolio of more than two assets, a row vector (4), which is
transposed to a column vector (6), and a correlation matrix (5) are used in formula
(7), as illustrated below:
R = [ VaR1   VaR2   VaR3 ]      (4)

      |    1       Corr1,2   Corr1,3 |
Ω =   | Corr2,1       1      Corr2,3 |      (5)
      | Corr3,1   Corr3,2       1    |

      | VaR1 |
R' =  | VaR2 |      (6)
      | VaR3 |

The formulas above give the VaR of the portfolio as:

VaR_P = √(R · Ω · R')      (7)
To calculate the VaR for a portfolio of more than three assets these are simply
added to the vectors and matrices. 42
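The matrix calculation described above can be sketched as follows; a minimal illustration in Python, where the function name and the example figures are our own and not from the source:

```python
import math

def portfolio_var(asset_vars, corr):
    """Portfolio VaR from individual asset VaRs and a correlation
    matrix: VaR_p = sqrt(R * Omega * R'), i.e. the matrix form of (7)."""
    n = len(asset_vars)
    total = sum(asset_vars[i] * corr[i][j] * asset_vars[j]
                for i in range(n) for j in range(n))
    return math.sqrt(total)

# Two perfectly correlated assets add linearly:
# portfolio_var([3.0, 4.0], [[1, 1], [1, 1]]) gives 7.0,
# while zero correlation gives sqrt(3**2 + 4**2) = 5.0.
```

Note how diversification enters only through the off-diagonal correlation terms, which is why low correlations reduce the portfolio VaR.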
Regarding the perspective of this Master thesis, i.e. whether the VaR concept is a
useful tool to forecast and measure portfolio risk for asset managers managing
Sweden-based equity portfolios, we have used Culp’s, Mensink’s and Neves’ article
“VaR for Asset Managers” as a guideline.43
42 Söderlind, L., Att mäta ränterisker, (1996), p. 81.
43 Culp, L. C., Mensink, R. & Neves, M. P. A., “VaR for Asset Managers”, (1999), p. 1.
the 1990s have further motivated and increased the interest in the understanding
of risk.44
VaR will never tell an asset manager how much risk to take; it will only tell how
much risk is being taken. VaR can be a useful tool for helping asset managers
determine whether the risks they are exposed to are the risks they think they
are, and want to be, exposed to. From an investor’s perspective VaR is a concept
that is easy to understand and thereby a way to monitor the level of risk
exposure the asset managers are undertaking. Culp, Mensink and Neves outline
four concrete applications of VaR for asset management. These applications
involve:
Risk Limits and Risk Budgets48 – a risk limit, also known as a risk budget, is an
extreme version of risk targets and risk thresholds. In a risk budget, the
portfolio’s total VaR is calculated and then allocated to asset classes. Asset
managers are not
44 Danielsson, J., “Class notes Corporate Finance & Financial Markets”, (1998-1999), p. 1.
45 Culp, L. C., Mensink, R. & Neves, M. P. A., “VaR for Asset Managers”, (1999), p. 14-19.
46 Ibid, p. 19-20.
47 Ibid, p. 20-21.
48 Ibid, p. 21-22.
allowed to exceed this risk budget as long as the risk dimensions have not been
altered.
However, the greatest benefit of VaR for an asset manager, according to Philippe
Jorion, probably lies in the imposition of a structured methodology for critically
thinking about risk. Institutions applying VaR are forced to confront their
exposure to financial risk. A well-functioning supervision of VaR should logically
also imply less risk of unexpected and uncontrolled losses. Jorion also states that
“there is no doubt that VaR is here to stay”, but at the same time highlights that
the process and methodology of calculating VaR may be as important as the
number itself. 49
f(rt) = (1 / √(2πσ²)) · exp(−(rt − µ)² / (2σ²)) (8)
However, these advantages have to be weighed against research showing that the
distribution of returns in financial markets exhibits fat tails.52 Financial
returns generally exhibit leptokurtic behaviour, and extreme price movements
occur more frequently than implied by the normal distribution.53 A leptokurtic
49 Jorion, P., Value at Risk, (1997), p. xv.
50 Lucas, A. & Klaassen, P., “Extreme Returns, Downside Risk and Optimal Asset Allocation”, (1998), p. 71.
51 Hill, C., Griffiths, W. & Judge, G., Undergraduate Econometrics, (1997), p. 30.
52 Hendricks, D., “Evaluation of Value-at-Risk Models Using Historical Data”, (1996), p. 41.
53 JPMorgan/Reuters, RiskMetrics – Technical Document, (1996), p. 64.
distribution implies that the distribution has a high peak, the sides are low and the
tails are fat, see further 3.3.1 54 . This is illustrated in figure 1 below:
[Figure 1: distributional functions – a normal and a leptokurtic distribution; the x-axis is in standard deviations]
Since VaR is concerned with unusual outcomes, e.g. five or one percent, the fact
that the tails are fat poses a problem. More outcomes than predicted by the
normal distribution will fall into the category that exceeds the VaR measures
generated under the normal distribution, i.e. the assumption of normality
underestimates the VaR. This is shown by Lucas and Klaassen, who find that the
normal distribution underestimates VaR by more than 30 percent at the 99% level.55
However, fat tails do not necessarily lead to a higher VaR at all confidence
intervals, because two effects work in opposite directions. Firstly, the higher
probability of tail events leads to more careful asset allocations. Secondly, the
increased precision of the distribution gives higher certainty about the spread
of outcomes and therefore leads to a more aggressive strategy. Lucas and Klaassen
show that the latter effect dominates at the 95% level, while the first effect
dominates at the 99% level. This means that with a leptokurtic function the asset
allocation becomes more risky when using the 95% level, but more careful when
using the 99% level.56
54 Danielsson, J., “Class notes Corporate Finance & Financial Markets”, (1998-1999), p. 4.
55 Lucas, A. & Klaassen, P., “Extreme Returns, Downside Risk and Optimal Asset Allocation”, (1998), p. 72.
56 Ibid, p. 75.
3.3.1 Skewness and Kurtosis
The normal distribution is symmetric, with the mean equal to the median; it is
also called mesokurtic. Departure from symmetry implies a skewed distribution,
while departure from the normal degree of peakedness implies kurtosis.
Sk(x) = (n / (n − 2)) · (1 / (n − 1)) · Σi=1..n ((xi − x̄) / sx)³ (9)

Kur(x) = (n(n + 1) / ((n − 2)(n − 3))) · (1 / (n − 1)) · Σi=1..n ((xi − x̄) / sx)⁴ (10)
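Formulas (9) and (10) can be computed directly; a minimal sketch with our own function name, where x̄ is the sample mean and sx the sample standard deviation:

```python
import math

def sample_skew_kurt(xs):
    """Sample skewness (9) and kurtosis (10) as defined in the text."""
    n = len(xs)
    mean = sum(xs) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in xs) / (n - 1))  # sample std dev
    z3 = sum(((x - mean) / s) ** 3 for x in xs)
    z4 = sum(((x - mean) / s) ** 4 for x in xs)
    skew = n / ((n - 1) * (n - 2)) * z3
    kurt = n * (n + 1) / ((n - 1) * (n - 2) * (n - 3)) * z4
    return skew, kurt

# A symmetric sample has skewness approximately zero:
# sample_skew_kurt([-2.0, -1.0, 0.0, 1.0, 2.0]) gives roughly (0.0, 6.8)
```

Large positive values of Kur(x) relative to the normal benchmark indicate the fat tails discussed above.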
57 Aczel, A. D., Complete Business Statistics, (1993), p. 19.
58 Kleinbaum, D. G., Kupper, L. L. & Muller, K. E., Applied Regression Analysis and Other Multivariable Methods, (1988), p. 188.
59 Aczel, A. D., Complete Business Statistics, (1993), p. 20.
60 Kleinbaum, D. G., Kupper, L. L. & Muller, K. E., Applied Regression Analysis and Other Multivariable Methods, (1988), p. 188.
61 Afifi, A. A. & Clark, V., Computer-Aided Multivariate Analysis, (1990), p. 66.
62 Berenson, M. L. & Levine, D. M., Basic Business Statistics, (1992), p. 73.
3.4 VaR approaches
In the following sections we will present different approaches for VaR estimation.
Firstly, the three approaches performed in our study are outlined in detail and
secondly, other approaches that can be used for VaR estimation are presented
more briefly.
3.4.1 The Historical Simulation Approach
A sample length, a window, for the estimation is chosen, and for each day t in the
sample the portfolio return ∆Πt is evaluated w.r.t. historical prices and
portfolio weights w according to:
∆Πt = Σi=1..N wi · yi,t (11)
where N is the number of assets in the portfolio and yi,t is the return on asset i
at time t. The portfolio returns ∆Πt are then sorted in ascending order. A
confidence level and the associated probability π are chosen, and Pr(∆Πt < VaR) = π
states that a loss equal to, or larger than, the specific VaR occurs with
probability π. For example, if there were 100 observations, the 5th-lowest
observation value would be the VaR for a 95% confidence interval.64,65
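The sorting procedure above can be sketched as follows; a minimal illustration with our own function name, which reads the empirical percentile off the sorted returns:

```python
def historical_var(returns, confidence=0.95):
    """Historical-simulation VaR: sort past portfolio returns in ascending
    order and pick the observation at the empirical percentile implied by
    the confidence level."""
    ordered = sorted(returns)
    # e.g. 100 observations at 95% confidence -> the 5th-lowest value
    k = max(round(len(ordered) * (1 - confidence)), 1)
    return ordered[k - 1]
```

No distributional assumption enters anywhere, which is precisely the property of HS emphasised below.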
There is a trade-off regarding the length of the observation period chosen. Clearly
the choice of for example 125 days is motivated by the desire to capture short-
term movements in the underlying risk of the portfolio and in contrast the choice
of 1250 days may be driven by the desire to estimate the historical percentiles as
accurately as possible 66 . While longer intervals increase the accuracy of estimates
63 Danielsson, J. & de Vries, C. G., “Value-at-Risk and Extreme Returns”, (1997), p. 10.
64 Danielsson, J., “Class notes Corporate Finance & Financial Markets”, (1998-1999), p. 18.
65 Danielsson, J. & de Vries, C. G., “Value-at-Risk and Extreme Returns”, (1997), p. 9.
66 Hendricks, D., “Evaluation of Value-at-Risk Models Using Historical Data”, (1996), p. 43.
it could use irrelevant data, thereby missing important changes in the underlying
process 67 .
The advantages of HS are mainly that it is intuitive, easy to implement and does
not rely on specific assumptions about valuation models or the underlying
stochastic structure of the market. Further, the method is relatively easy to
perform in a spreadsheet program, and the same data can be stored and reused for
later estimations of VaR. Furthermore, HS forms the basis for the Basel 1995
proposals on market risk.69,70
Disadvantages are that past extreme returns can be a poor predictor of extreme
events; for example, Danielsson and de Vries show that HS is unable to address
losses outside the sample. This drawback is linked to the discreteness of extreme
returns, which also makes the VaR discrete, so that HS can over- or underestimate
observations in the tails and over- or underpredict VaR.71 The sample size, or
window length, is another decisive aspect to consider, since the inclusion or
exclusion of only one or two days at the beginning of the sample can cause large
swings in the VaR estimate.72 As mentioned in 3.1, the Basel Committee proposes a
window of at least one year of past returns.73 Another criticism is that HS
assumes that past returns represent the immediate future fairly, but risk
contains significant and predictable time variation, which makes the HS approach
miss situations with temporarily elevated volatility. Finally, for large
portfolios with numerous assets and exposures the historical approach quickly
becomes cumbersome.74,75
67 Jorion, P., Value at Risk, (1997), p. 195.
68 Schachter, B., “Value at Risk Resources – An Irreverent Guide to Value at Risk”, (1997), p. 2.
69 Jorion, P., Value at Risk, (1997), p. 195.
70 Danielsson, J., “Class notes Corporate Finance & Financial Markets”, (1998-1999).
71 Danielsson, J. & de Vries, C. G., “Value-at-Risk and Extreme Returns”, (1997), p. 11.
72 Ibid, p. 11.
73 Danielsson, J., Hartmann, P. & de Vries, C. G., “The Cost of Conservatism”, (1998), p. 2.
74 Smithson, C., “Class notes of CIBC School of Financial Products”, (1998), p. 3.
75 Jorion, P., Value at Risk, (1997), p. 196.
3.4.2 The Equally Weighted Moving Average Approach
The equally weighted moving average (EqWMA) approach assumes that the
distribution of outcomes follows a normal distribution and uses a fixed amount of
historical data to calculate the standard deviation. There are different opinions
on how wide the data window should be. On the one hand, only very recent data,
e.g. 50 observations, should be used to incorporate changes in the volatility
over time. On the other hand, to estimate potential movements accurately and the
variance with precision, a much wider data window should be used.76
The calculation of the standard deviation is shown below:
σt = √( (1 / (k − 1)) · Σs=t−k..t−1 (xs − µ)² ) (12)
where σt is the estimated standard deviation at time t, and k specifies the number
of observations included in the moving average. xs is the change in the value of
the asset on day s and µ is the mean change in asset value during the estimation
period.77 With shorter time periods the standard deviation becomes more irregular
and reacts faster to changes in asset price movements.
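Formula (12) amounts to a rolling sample standard deviation; a minimal sketch with our own function name:

```python
import math

def eqwma_std(changes, k):
    """Equally weighted moving-average standard deviation (formula 12):
    the sample standard deviation over the last k observed changes."""
    window = changes[-k:]                      # the k most recent changes x_s
    mu = sum(window) / k                       # mean change over the window
    return math.sqrt(sum((x - mu) ** 2 for x in window) / (k - 1))
```

Every observation in the window carries the same weight, which is exactly what the exponential scheme in the next section relaxes.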
Other parameters that have to be set are the confidence interval and the
covariance of the asset returns. The most commonly used confidence intervals are
the 95th and the 99th percentile, although VaR is calculated with confidence
intervals from the 90th to the 99.9th percentile.78
An advantage with the EqWMA approach is that it is easy to use, since the normal
distribution is only characterised by its mean and variance. The VaR estimation
can be derived directly from the portfolio standard deviation by using a
multiplicative factor that depends on the confidence level. 79 In addition, many
statistical formulas are based on a normal distribution assumption and these
facilitate the analysis of the results. Finally, all moments of positive order exist
and the normal distribution can model VaR estimations outside the sample
range. 80
76 Jorion, P., Value at Risk, (1997), p. 99.
77 Hendricks, D., “Evaluation of Value-at-Risk Models Using Historical Data”, (1996), p. 41.
78 Ibid, p. 40.
79 Jorion, P., Value at Risk, (1997), p. 88.
80 Lucas, A. & Klaassen, P., “Extreme Returns, Downside Risk, and Optimal Asset Allocation”, (1998), p. 71.
The most obvious disadvantage of the EqWMA approach, as mentioned above, is that
financial returns exhibit fat tails. Using a normal distribution therefore
underestimates the true VaR, which of course is a very serious drawback.81
Another disadvantage is that the quality of the VaR estimate calculated with the
EqWMA approach degrades if non-linear instruments, like options, are included in
the portfolio.82 Moreover, low correlations between assets reduce the portfolio
risk, but evidence shows that correlations increase in periods of instability on
the financial markets, and therefore the normal distribution may underestimate
the true VaR measure.83
3.4.3 The Exponentially Weighted Moving Average Approach

σt² = (1 − λ) · Σs=t−k..t−1 λ^(t−s−1) · (xs − µ)² (13)

σt² = λ · σt−1² + (1 − λ) · (xt−1 − µ)² (14)
Formula (14) shows that on any given day the variance, calculated as an
exponentially weighted moving average, is made up of two components: firstly, the
weighted estimate from the previous day and secondly, yesterday’s squared
deviation, which is given a weight of (1 − λ). This means that a lower value of λ
makes the importance of past observations decline more rapidly.86
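The recursion (14) can be sketched as below; the seed value for the variance is our own assumption, since the text does not specify one:

```python
def ewma_variance(changes, lam=0.94, mu=0.0):
    """Exponentially weighted moving-average variance, updated with the
    recursion (14): var_t = lam * var_{t-1} + (1 - lam) * (x_{t-1} - mu)^2."""
    var = (changes[0] - mu) ** 2        # seed: first squared deviation (assumption)
    for x in changes[1:]:
        var = lam * var + (1 - lam) * (x - mu) ** 2
    return var
```

A lower λ lets the latest squared deviation dominate the estimate, which is the behaviour described above.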
81 Danielsson, J., “Value-at-Risk and Extreme Returns”, (1997), p. 14.
82 Schachter, B., “Value at Risk Resources – An Irreverent Guide to Value at Risk”, (1997), p. 2.
83 Jorion, P., Value at Risk, (1997), p. 178.
84 Ibid, p. 177.
85 Hendricks, D., “Evaluation of Value-at-Risk Models Using Historical Data”, (1996), p. 42.
86 Ibid, p. 42.
The parameter λ is often referred to as the “decay factor” and determines how fast
the observation weights decline. For the weights to sum to one, an infinite
number of observations would be needed, but in practice a limited number can be
used, since the sum of the weights converges to one.87 By setting a tolerance
level, i.e. how close to one the sum has to be, the number of observations that
has to be used in the calculation of the standard deviation can be determined.88
The number of observations required thus depends on the decay factor and the
chosen tolerance level.
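The required number of observations follows a RiskMetrics-style rule, k = ln(tolerance) / ln(λ); a sketch under that assumption, with our own function name:

```python
import math

def obs_for_tolerance(lam, tol):
    """Number of observations k needed so that the omitted exponential
    weights sum to at most `tol` (i.e. the kept weights sum to 1 - tol):
    k = ln(tol) / ln(lam), rounded up."""
    return math.ceil(math.log(tol) / math.log(lam))

# e.g. lam = 0.94 with a 1% tolerance requires 75 observations (ceil of 74.4)
```

A higher λ decays more slowly, so it demands a longer history for the same tolerance.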
87 Hendricks, D., “Evaluation of Value-at-Risk Models Using Historical Data”, (1996), p. 42.
88 JPMorgan/Reuters, RiskMetrics – Technical Document, (1996), p. 93.
89 Hendricks, D., “Evaluation of Value-at-Risk Models Using Historical Data”, (1996), p. 42.
90 JPMorgan/RiskMetrics group, Introduction to RiskMetrics, (1995), p. 2.
91 JPMorgan/Reuters, RiskMetrics – Technical Document, (1996), p. 59.
3.4.3.1 What value of λ should be used?
Using a low decay factor, e.g. 0.94, implies that almost the entire VaR measure
is derived from the most recent observations, which makes the VaR measure very
volatile over time. On the one hand, relying on the most recent observations is
important for capturing short-term movements in volatility. On the other hand, a
smaller effective sample size increases the possibility of measurement error.92
The decay factor may not only be used to estimate the volatility of a single asset,
but also to calculate the covariance matrix, which is shown below:
Σ = [ σ1²(λ1)   σ12(λ3)
      σ21(λ3)   σ2²(λ2) ] (15)
As can be seen above, the covariance matrix is a function of three decay
factors.93 Although it is possible in theory to estimate all possible decay
factors, it is too complex in practice to calculate them all. Therefore, some
structure has to be imposed on the values of the decay factors, and the most
practical choice is to use one lambda for the entire matrix. Still, different
values of the decay factor can be used for different time periods. RiskMetrics
has found 0.94 to be the optimal value for daily returns and 0.97 for monthly
returns.94
The advantages are very much the same as with the EqWMA approach, but the
volatility estimate is much more responsive to variations over time. By using an
exponential moving average the standard deviation reacts to market shocks and
then declines gradually in the volatility forecast; a simple moving average does
not react fast enough to changes in the volatility.95 In addition, the ExpWMA
approach smooths the standard deviation over time, whereas in the EqWMA approach
the standard deviation varies more, since the estimate is more affected when an
observation falls out of the estimation window.96
92 Hendricks, D., “Evaluation of Value-at-Risk Models Using Historical Data”, (1996), p. 43.
93 JPMorgan/Reuters, RiskMetrics – Technical Document, (1996), p. 97.
94 Ibid, p. 97.
95 JPMorgan, Introduction to RiskMetrics, (1995), p. 4.
96 Ibid, p. 7.
The disadvantages, in addition to what was mentioned in section 3.4.2.1, are that
the computations are somewhat more difficult and that the volatility over time is
more unstable than with the EqWMA approach. 97
3.4.5 The Improved VaR Methodology
This methodology is a product of the RiskMetrics group, developed to overcome the
problems with financial returns exhibiting fat tails. The new approach, which is
still under development, tries to estimate the volatility by assuming that
returns are generated from a mixture of two different normal distributions. This
is shown in the formula below:
PDF(rt) = p1 · N1(µ1, σ1²) + p2 · N2(µ2, σ2²) (16)

where p1 + p2 = 1. PDF stands for the probability density function and p1 is the
probability that the return is generated from the normal distribution N1, which
is characterised by its mean µ1 and variance σ1². Similarly, p2 is the
probability that the return is generated from the normal distribution N2, which
has mean µ2 and variance σ2². This model makes it possible to assign large
returns a higher probability than under the normal distribution. To generate the
PDF one can then assign p1 a value close to 1, with µ1 = 0 and σ1² = 1. The mean
of N2 is also set to zero, but its variance is assigned a value higher than 1.
The mixture of the two normal distributions then has fatter tails compared to
N1. This can be illustrated graphically as below:
[Figure: the normal distributions N1 and N2; the x-axis is in standard deviations]
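The mixture density can be sketched numerically as follows; the values of p1 and of the variance of N2 are our own illustrative assumptions, not taken from the source:

```python
import math

def normal_pdf(x, mu, var):
    """Density of a normal distribution with mean mu and variance var."""
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def mixture_pdf(x, p1=0.98, var2=9.0):
    """Two-normal mixture (16): N1 standard, N2 zero-mean with a larger
    variance; p1 close to one, as described in the text."""
    return p1 * normal_pdf(x, 0.0, 1.0) + (1 - p1) * normal_pdf(x, 0.0, var2)

# Far out in the tail the mixture assigns a higher density than N1 alone,
# i.e. mixture_pdf(4.0) > normal_pdf(4.0, 0.0, 1.0).
```

Even a small weight on the wide component N2 is enough to fatten the tails noticeably, which is the point of the approach.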
97 JPMorgan/Reuters, RiskMetrics – Technical Document, (1996), p. 80.
98 JPMorgan/Reuters, RiskMetrics – Monitor, (1996), p. 7-19.
3.4.5.1 Advantages and disadvantages of the improved VaR methodology
The advantage with the improved VaR methodology is that it describes reality
more accurately than the traditional VaR models, since it is able to deal with the
fat tails of financial returns. This gives a more precise VaR estimation.
The disadvantages are firstly that the calculations become more complex and the
VaR methodology loses its intuitive appeal, since very sophisticated statistical
techniques are used to calculate the VaR with this model. Secondly, this
methodology has not yet been thoroughly tested and hence, it is uncertain how
well it really works.
Monte Carlo simulation (MCS) is mainly a method used by risk specialists and risk
analysts to value complex derivatives such as exotics, but nowadays MCS is also a
very useful tool for VaR calculation.99 The MCS method approximates the behaviour
of financial prices by using computer simulations to generate random price paths.
Before the generation of random prices is started, one has to select the
distribution the prices should be generated from, as well as the volatility of
prices and the correlation between assets.100 The first step in the simulation is
to choose a stochastic model for the behaviour of prices; in line with the Black
and Scholes option pricing model, a geometric Brownian motion can be used.101,102
To generate random numbers Jorion proposes either a two-step process or a method
called the bootstrap. The former implies selecting a uniform distribution for the
random number generator over the interval (0, 1), which produces a random
variable. The next step is to transform the uniform random numbers into the
desired distribution, for example the normal, which can be done by inverting the
cumulative probability distribution function. The bootstrap method, briefly,
implies that random numbers are generated by sampling from historical data with
replacement.103 When the random numbers have been generated, the VaR is
calculated with (11).
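The simulation steps above can be sketched for a single asset as follows; the parameter values and the function name are our own illustrative assumptions:

```python
import math
import random

def monte_carlo_var(s0, mu, sigma, horizon_days, n_paths, confidence=0.95):
    """Monte Carlo VaR sketch: simulate terminal prices under a geometric
    Brownian motion and read the VaR off the simulated P&L distribution."""
    dt = horizon_days / 250.0                     # horizon in trading-day years
    pnl = []
    for _ in range(n_paths):
        z = random.gauss(0.0, 1.0)                # standard normal draw
        s_t = s0 * math.exp((mu - 0.5 * sigma ** 2) * dt
                            + sigma * math.sqrt(dt) * z)
        pnl.append(s_t - s0)
    pnl.sort()                                    # ascending: worst losses first
    return pnl[int(n_paths * (1 - confidence))]   # empirical percentile

# For a 100 SEK position with 20% annual volatility, the one-day 95% VaR
# comes out around -2 SEK (the exact value varies with the random draws).
```

A full portfolio implementation would draw correlated shocks for all assets, which is where the computational cost discussed below arises.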
99 Söderlind, L., Att mäta ränterisker, (1996), p. 102.
100 Ibid, p. 101.
101 See for example Hull, J. C., Options, Futures, and Other Derivatives, (1997).
102 Jorion, P., Value at Risk, (1997), p. 232.
103 Ibid, p. 236.
The MCS method is similar to the HS method, except that the hypothetical price
changes are created by random draws from a stochastic process in the MCS method,
while HS is based directly on actual historical price changes.104
An advantage with the MCS approach is that it can account for a wide range of
risks, including price, volatility and credit risk, and by using different models it
can even account for model risk. The MCS-method is considered to be the most
powerful method to compute VaR. 105, 106
Disadvantages are mainly its complexity and that the method involves costly
investments in intellectual and systems development. For example if 1000 sample
paths are generated with a portfolio of 10 assets, the total number of valuations
amounts to 10,000. Another disadvantage of the model is that it relies on a
specific stochastic model for the underlying risk factors and pricing models for
securities. Hence, there is a sensitivity to model risk: if, for instance, the
stochastic process chosen for the price is unrealistic, the estimated VaR will
also be misleading.
104 Söderlind, L., Att mäta ränterisker, (1996), p. 100.
105 Jorion, P., Value at Risk, (1997), p. 231.
106 Söderlind, L., Att mäta ränterisker, (1996), p. 106.
107 Jorion, P., Value at Risk, (1997), p. 201.
108 Danielsson, J. & de Vries, C. G., “Value-at-Risk and Extreme Returns”, (1997), p. 19-25.
combined with a parametric estimation of the tails of the return distribution.109
Parametric methods, which are based on conditional normality, are not well suited
for analysing large risks, since the normality assumption probably leads to an
underestimation of the risk of heavy losses. The main purpose of this mixed
method is to accurately estimate the tails of a distribution and thereby overcome
the problem that methods based on normality assumptions underpredict infrequent
tail events.
The results from research by Danielsson and de Vries on the extreme value
approach show that this method performs better than both RiskMetrics and HS far
out in the tails. At the 5th percentile RiskMetrics is superior, but further out
in the tails it consistently underpredicts the tail. HS has the opposite problem,
i.e. it consistently overpredicts the tails. Further, HS is unable to address
losses that fall outside the sample.
The extreme value approach is relatively more sensitive to a small sample than,
for example, RiskMetrics and HS, and Danielsson and de Vries argue that a sample
as short as one year is not appropriate for the extreme value approach.
There are several advantages in using the estimated tail distribution for VaR
estimation. For example, the extreme value method has a smoothing effect on
events like the ’87 crash compared with, for example, HS. For HS an event like
the ’87 crash will cause an excessively large VaR estimate and hence impose too
conservative capital provisions, while the extreme value method will give a
better VaR estimate. Furthermore, one can easily obtain the lowest return that
occurs with a given probability by sampling from the tail of the distribution,
which facilitates sensitivity experiments; this is not possible with HS. In
addition, no restrictive assumptions are needed, and the method can be used for
large portfolios without endless computation time.
However, as with the MCS method, the extreme value method is complex and
relatively less intuitive, even if there are many advantages.
109 To view how to estimate the tails, see Danielsson, J. & de Vries, C. G., “Value-at-Risk and Extreme Returns”, (1997) and Danielsson, J. & de Vries, C. G., “Beyond the Sample: Extreme Quantile and Probability Estimation”, (1997).
3.4.7 The Stress testing approach
Stress testing is more or less a scenario analysis, where the effects of
different events and movements are assessed for an asset or a portfolio of
assets.110 For an asset manager, an example of a scenario could be what happens
to the equity portfolio if the interest rate fluctuates heavily or if a currency
suddenly devalues by 30 percent. When scenarios have been selected, the portfolio
is revalued according to:
Rp,s = Σi=1..N wi,t · Ri,s (17)
where the portfolio return under the new scenario s is derived from the
hypothetical component returns Ri,s. Hence, various portfolio returns are
generated, each weighted by the pre-specified probability of its scenario. VaR
can then be measured from this generated distribution of portfolio returns.111
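Formula (17) can be sketched as a simple revaluation; the scenario name and numbers below are our own illustrative assumptions:

```python
def stress_test_returns(weights, scenario_returns):
    """Revalue the portfolio under each scenario (formula 17):
    R_{p,s} = sum_i w_i * R_{i,s}."""
    return {name: sum(w * r for w, r in zip(weights, returns))
            for name, returns in scenario_returns.items()}

# e.g. a 30% devaluation hitting the second of two equally weighted stocks:
# stress_test_returns([0.5, 0.5], {"devaluation": [-0.10, -0.30]})
# gives about -0.20 for that scenario.
```

Everything hinges on the hypothetical Ri,s supplied for each scenario, which is exactly the subjectivity criticised below.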
The advantages of stress testing are mainly that it may cover situations that are
completely absent from the historical data. For instance, the EMS breakdown is an
example of an event for which an advance scenario analysis would have been
beneficial. Hence, stress testing is a way to force management to consider events
that they otherwise might ignore. Stress testing is also relatively easy to
implement and communicate.112
One serious drawback with stress testing, compared to the methods mentioned
before, is the sensitivity to the choice or creation of scenarios. The method is
completely subjective, and an untenable scenario will lead to an incorrect
measure of VaR. According to Jorion, stress testing does not account for
correlation between assets, which is a crucial component of risk diversification;
the EMS breakdown mentioned above is a good illustration of this neglect of
correlations. Furthermore, stress testing does not specify any trustworthy
probabilities for, or the likelihood of, worst-case situations actually
occurring.113
110 Jorion, P., Value at Risk, (1997), p. 196.
111 Ibid, p. 197.
112 Ibid, p. 203.
113 Ibid, p. 196-199.
To conclude, stress testing should be considered a complement rather than a
stand-alone VaR approach. As a complement it tries to capture what is going on in
the tails, but as stated before the stress values are subjectively defined
without specified likelihoods.114
Most financial firms use one-day VaR for internal risk management, but regulators
require that VaR is also calculated for longer time periods. This can be done in
two ways: one can look at past t-day returns, or extrapolate the one-day VaR to
t days.115 RiskMetrics uses the square-root-of-t rule, where the one-day VaR is
multiplied by the square root of t to obtain the VaR for t days.116 However, this
might produce an overestimation of the multi-day VaR, since multi-day extreme
outcomes are smaller for fat-tailed returns than for normally distributed
returns. Danielsson found that a scaling factor of around 1.7 should be used for
a ten-day VaR estimate, which is significantly less than the square root of ten
(√10 ≈ 3.2).117
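The two scaling rules can be compared as below; the function name is our own:

```python
import math

def scale_var(one_day_var, horizon_days, factor=None):
    """Scale a one-day VaR to a multi-day horizon: by default the
    square-root-of-time rule, otherwise an explicit factor such as the
    1.7 Danielsson suggests for ten days under fat tails."""
    scaling = factor if factor is not None else math.sqrt(horizon_days)
    return one_day_var * scaling

# scale_var(1.0, 10) gives sqrt(10), about 3.16, while scale_var(1.0, 10, 1.7)
# gives 1.7, i.e. the fat-tail adjustment roughly halves the ten-day estimate.
```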
114 Longin, F. M., “Stress Testing: A Method based on Extreme Value Theory”, (1999), p. 3.
115 Danielsson, J., “Value-at-Risk and Extreme Returns”, (1997), p. 22.
116 JPMorgan/Reuters, RiskMetrics – Technical Document, (1996), p. 84.
117 Danielsson, J., “Value-at-Risk and Extreme Returns”, (1997), p. 8.
Chapter 4 - Statistical Methodology
The very first step in the statistical process was to identify which small- and
mid-cap stocks were listed at the beginning of the test period, i.e. 1994-01-01,
and still listed at the end of the test period, i.e. 1999-12-30. Out of this
sample, stocks with extremely poor volume, which implies a low frequency of
pricing and poor data, were excluded; as a rule of thumb, stocks changing hands
on less than two thirds of the total sample of trading days were excluded. The
final sample consisted of 122 stocks, which were divided into three groups w.r.t.
market cap criteria, see appendix 2. To set the market cap limits, the Carnegie
Small-Cap and Mid-Cap index ranges were used. At the end of the test period the
Carnegie Small-Cap range was 0-5.2 billion SEK and the Mid-Cap range was 5.2-18.1
billion SEK.118 The small- and mid-cap index values were used to adjust these
ranges backward over the test period. After the stocks were divided into three
groups, each stock was assigned a number, to be used in the random number
generation later on.
Three different kinds of equity portfolios, each containing ten stocks, were
constructed. The first portfolio is an OMX portfolio, i.e. a portfolio consisting
of the ten largest stocks listed on the Stockholm stock exchange over the test
period.119 Every six months during the test period the portfolio is reallocated
to maintain the criterion of consisting of the ten largest stocks.120 The
reallocation dates are the 1st of January and the 1st of July between 1995 and
1999. The second portfolio is a small-cap portfolio, consisting of ten stocks
with market cap in accordance with the Carnegie Small-Cap index range. The stocks
in the portfolio are randomly selected from the total sample of small-cap stocks,
and every six months new small-cap stocks are randomly selected so that the
reallocation of the portfolio is performed in a statistically correct way. The
third portfolio is a mixed portfolio consisting of the five largest OMX stocks,
two randomly selected small-cap stocks, and three randomly selected mid-cap
stocks; the procedure is identical to that for the first two portfolios. In the
portfolios every stock gets a weight of ten percent on every day in the sample,
i.e. the portfolios are equally weighted.
The first two portfolios, the OMX and the small-cap portfolio, are motivated by
the purpose of examining the applicability of VaR methods w.r.t. differences in
equity market cap. The mixed portfolio is constructed since it gives a more
realistic reflection of a portfolio held by an asset manager. The portfolios can
be viewed in appendix 3.
118 Segerström, T., Carnegie Asset Management, (00-04-11).
119 The size is measured as market capitalisation, i.e. the share price multiplied by the total number of outstanding shares.
120 OM-Gruppen, Shares in OMX, (1995-1999).
Last-price-paid data was collected from the Bloomberg database. Especially for
the small-cap stocks the time series were not entirely complete; for example, not
all stocks were traded every day in the test period. Hence, the data was
corrected so that the time series are consistent over the sample, i.e. if a stock
was not traded on a specific date, the price from when the stock last changed
hands was used. The formulas below are then applied to the last-price-paid series
to obtain daily percentage returns for each stock in all portfolios.
Rt = (Pt − Pt−1) / Pt−1 (18)
where Rt is the percentage return at time t and Pt-1 is the price at time t-1. 121
Equation (19) is used for dividend replacement on the ex-dividend dates.122
Rt = ((Pt + Dt) − Pt−1) / Pt−1 (19)
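Formulas (18) and (19) can be combined into one helper; a minimal sketch with our own function name:

```python
def daily_return(p_t, p_prev, dividend=0.0):
    """Percentage return (18); on ex-dividend dates the dividend is
    added back to the price, as in (19)."""
    return (p_t + dividend - p_prev) / p_prev

# daily_return(102.0, 100.0) gives 0.02; with a 1.0 SEK dividend, 0.03.
```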
All in all, seven different methods have been used in the study. These are
divided into three subgroups: the Historical Simulation approach (HS), the
Equally Weighted Moving Average approach (EqWMA) and the Exponentially Weighted
Moving Average approach (ExpWMA). These three approaches have been chosen since
they are the methods most widely used in empirical finance today.123 No
semi-parametric approaches are performed, since none of these are fully developed
and accepted yet, and also because they require an estimation window longer than
one year. Monte Carlo simulation is well accepted, but too complex, demanding and
costly to perform. Stress testing is easy to perform, but scenario analysis is
not included in the purpose of this Master
thesis. The most commonly used window sizes for the HS approach are six
months, one year, two years and five years, i.e. window sizes of 125, 250, 500 and
1250 observations.124 In our study, windows of 125 and 250 trading days have
been used, since the data quality for small-cap stocks was very poor, with many
121
JPMorgan/Reuters, RiskMetrics - Technical Document, (1996), p. 46.
122
Ross, S.A., Westerfield, R.W. & Jaffe, J., Corporate Finance, (1996), p. 224.
123
See for instance Danielsson, J. & de Vries, C.G.,“Value-at-Risk and Extreme Returns”, (1997)
or Hendricks D., “Evaluation of Value-at-Risk Models Using Historical Returns”, (1996).
124
See for instance Hendricks, D., “Evaluation of Value-at-Risk Models Using Historical
Returns”, (1996).
32
shares not changing hands frequently in the beginning of the 90s, and in addition
few small-cap shares were listed. Regarding the EqWMA approach the most
commonly used window sizes are the same as for the HS approach, with the only
difference that 50 days is also often used 125 . Therefore, we have chosen window
sizes of 50, 125 and 250 days for this approach. For the ExpWMA the most
widely accepted decay factors are 0.94 and 0.97, and these are the decay factors
applied in this study126 . For all of the approaches mentioned the calculations are
performed at confidence levels of both 95% and 99%. These levels are by far the
most used in statistical testing.
§ HS with a window size of 250 trading days at both the 95% and 99%
confidence interval (CI).
§ HS with a window size of 125 trading days at 95% and 99% CI.
§ EqWMA with a window size of 50 trading days at 95% and 99% CI.
§ EqWMA with a window size of 125 trading days at 95% and 99% CI.
§ EqWMA with a window size of 250 trading days at 95% and 99% CI.
§ ExpWMA with a decay factor of 0.94 at 95% and 99% CI.
§ ExpWMA with a decay factor of 0.97 at 95% and 99% CI.
4.3 Calculation procedure for the HS approach
First, the portfolio return is calculated for each day according to formula (11)
presented in chapter 3. As stated previously, windows of 250 and 125 days are used
both for the 95% and 99% confidence interval. Thus, the 1st percentile, for the
99% confidence interval, and the 5th percentile, for the 95% confidence interval
are calculated using windows of the latest 125 or 250 trading days. For example,
to calculate the 95% confidence interval using the 250-day window at time t the
5th percentile of the window ranging from t-250 to t-1 is calculated. To calculate
the confidence interval for the next day at time t+1, the window from t-249 to t
is used. The results, i.e. the daily VaR measures, are then compared to the actual
outcomes of the portfolio. The results are then evaluated using the performance
evaluation criteria presented in 4.6.
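The rolling procedure described above can be sketched as follows; the nearest-rank percentile and the function names are simplifying assumptions of ours, with q=5 giving the 95% confidence interval and q=1 the 99% interval.

```python
# A sketch of the rolling historical-simulation VaR: for each day t, the
# q-th percentile of the previous `window` portfolio returns.

def percentile(sample, q):
    """Nearest-rank empirical percentile, q in (0, 100)."""
    ordered = sorted(sample)
    rank = max(0, min(len(ordered) - 1, int(q / 100.0 * len(ordered))))
    return ordered[rank]

def hs_var(returns, window=250, q=5):
    """Daily VaR series: the q-th percentile of the window t-window .. t-1."""
    return [
        percentile(returns[t - window:t], q)
        for t in range(window, len(returns))
    ]
```

The window slides by one day at a time, so each daily VaR measure depends only on the most recent 125 or 250 observations.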
125 See for instance Hendricks, D., “Evaluation of Value-at-Risk Models Using Historical Data”, (1996).
126 For instance RiskMetrics uses 0.94 for daily observations, and 0.97 for monthly observations.
4.4 Calculation procedure for the EqWMA approach
The EqWMA approach is tested over 50, 125 and 250 days both at the 95% and
the 99% confidence interval, as stated in 4.2. For each day the variance for each
portfolio is calculated. In the case of a portfolio with ten assets the variance is
obtained by using the following formula:
Var(rp) = σp² = Σi=1..N γi²σi² + Σi,j=1..N, i≠j 2γiγjσi,j    (20)
where γ is the weight, σi2 variance, and σ1,2 the covariance 127 . The square root is
taken of formula (20) to obtain the daily standard deviation of the portfolio. We
have used formula (12) to calculate the standard deviation for each asset in the
portfolio for each day, and the covariance according to formula (21) 128 :
cov(r1, r2) = σ1,2 = (1/N) Σt=1..N (r1,t − r̄1)(r2,t − r̄2)    (21)
where
r̄i = (1/N) Σt=1..N ri,t    (22)
The daily VaR measures are finally obtained by multiplying the standard
deviation by -1.645 for the 95% confidence level and -2.327 for the 99% level129 .
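The steps above can be sketched as below. The double sum runs over all pairs (i, j); since cov(i, i) is the variance, this is equivalent to the split variance/covariance form of equation (20). The function names are illustrative.

```python
import math

# A sketch of equations (20)-(22) and the final scaling step: window
# covariances combined into a portfolio standard deviation, multiplied by
# the normal multiple (-1.645 at 95%, -2.327 at 99%).

def mean(xs):
    return sum(xs) / len(xs)

def cov(x, y):
    """Equation (21): covariance of two return windows, divided by N."""
    mx, my = mean(x), mean(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)

def eqwma_var(asset_windows, weights, z=-1.645):
    """Portfolio VaR: square root of equation (20) times the multiple z."""
    n = len(asset_windows)
    var_p = sum(
        weights[i] * weights[j] * cov(asset_windows[i], asset_windows[j])
        for i in range(n) for j in range(n)
    )
    return z * math.sqrt(var_p)
```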
4.5 Calculation procedure for the ExpWMA approach
The decay factors 0.94 and 0.97 were chosen, and according to table 1 in chapter
three 74 and 151 observations are used respectively at the 1% tolerance level.
Thus, two time-series are created ranging from 1 to 74 and 1 to 151. When these
lambda weights are obtained we use formula (13), presented in chapter 3, to
calculate the standard deviation. The results are then multiplied by the left-tail
value from the standard normal distribution, which is –1.645 at the 95% level
and –2.327 at the 99% level. The results, the daily VaR
measures, are then evaluated against the performance criteria as for the other two
approaches.
127 Benninga, S., Financial Modeling, (1998), p. 74.
128 Gustavsson, M. & Svernlöv, M., Ekonomi & Kalkyler, (1994), p. 706.
129 Körner, S., Tabeller och formler för statistiska beräkningar, (1986), p. 14.
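The ExpWMA calculation can be sketched as below, assuming the RiskMetrics-style scheme of weighting squared returns by powers of the decay factor and normalising the weights over the finite windows (74 and 151 observations) mentioned above; formula (13) of chapter 3 is not reproduced in this chapter, so the details here are our assumption.

```python
import math

# A sketch of an exponentially weighted volatility estimate with decay
# factor lam (0.94 or 0.97): squared returns weighted by lam^i, the most
# recent observation receiving weight 1, normalised over the window.

def expwma_sigma(returns, lam=0.94):
    """Exponentially weighted daily volatility of a return window."""
    weights = [lam ** i for i in range(len(returns))]
    total = sum(weights)
    recent_first = list(reversed(returns))  # newest observation first
    variance = sum(w / total * r * r for w, r in zip(weights, recent_first))
    return math.sqrt(variance)

def expwma_var(returns, lam=0.94, z=-1.645):
    """Daily VaR: the volatility times the normal multiple (-2.327 at 99%)."""
    return z * expwma_sigma(returns, lam)
```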
4.6 Performance evaluation criteria
Darryll Hendricks uses nine performance criteria to evaluate the quality and
performance of the VaR-approaches EqWMA, ExpWMA, and HS 130 . We have
selected to use all of these criteria to evaluate our VaR examination. Every
criterion is calculated for each VaR method, portfolio and confidence level, i.e.
42 calculations are performed for each criterion (7 VaR methods*3 portfolios*2
confidence levels). The criteria are in order:
4.6.1 Mean Relative Bias131
This criterion estimates whether each VaR method produces risk measures of
similar average size. The VaR measures are here compared to each other and not
to the actual portfolio outcomes. The mean relative bias is measured in percentage
terms, where for example a value of 0.15 implies that a given VaR method on
average is 15 percent larger than the average of all methods for the same portfolio
and confidence level.
For each VaR method, portfolio and confidence level an average over the VaR
measures is calculated over all observations in the sample. An average is then
calculated over all averages, to obtain a “total average” over all methods for each
portfolio and confidence level separately. Then we divide the individual average
with the total average and subtract one, to get the percentage difference. For
example, an average is calculated for the OMX-HS250d at the 95% confidence
level over the who le sample period from 1995-1999. This average is then divided
by the average of all seven methods for the OMX portfolio using a 95%
confidence interval. From this number, one is subtracted and the mean relative
bias is obtained.
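The calculation procedure above can be sketched as follows; the function and argument names are our own illustration.

```python
# A sketch of the mean relative bias: one method's average VaR divided by
# the average over all methods for the same portfolio and confidence level,
# minus one.

def average(xs):
    return sum(xs) / len(xs)

def mean_relative_bias(method_series, all_series):
    """method_series: one method's daily VaRs; all_series: every method's
    daily VaR series for the same portfolio and confidence level."""
    total_average = average([average(s) for s in all_series])
    return average(method_series) / total_average - 1.0
```

With three methods averaging 1, 2 and 3, the third method's mean relative bias is 3/2 − 1 = 0.5, i.e. it is 50 percent larger than the cross-method average.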
130 Hendricks, D., “Evaluation of Value-at-Risk Models Using Historical Data”, (1996), p. 46.
131 Ibid, p. 46.
4.6.2 Root Mean Squared Relative Bias132
This criterion assesses to what extent the VaR measures tend to vary around the
average VaR measures for a given date. This measure can be compared to a
standard deviation for the mean relative bias.
A daily mean relative bias is calculated by taking the daily VaR figure and divide
it by the average over all methods using the same confidence level and portfolio
for that day, and subtract one. Then we obtain a mean relative bias figure for each
day, which is squared, and an average is calculated over the entire sample. To get
the root mean squared relative bias we simply take the square root of this squared
mean.
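The procedure above can be sketched as follows, with our own naming.

```python
import math

# A sketch of the root mean squared relative bias: a daily relative bias
# against the cross-method average for that day, squared, averaged over the
# sample, and finally square-rooted.

def rms_relative_bias(method_series, all_series):
    n = len(method_series)
    squared_sum = 0.0
    for t in range(n):
        day_average = sum(s[t] for s in all_series) / len(all_series)
        squared_sum += (method_series[t] / day_average - 1.0) ** 2
    return math.sqrt(squared_sum / n)
```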
4.6.3 Annualized Percentage Volatility133
This criterion evaluates the tendency of the VaR measures to fluctuate over time.
An annualized percentage volatility is calculated for each portfolio and VaR
method at both confidence levels.
A new column of numbers has been calculated for each day in the sample period
according to:
VaRt − VaRt −1
% ∆VaRt = (23)
VaRt −1
Hence, a new time-series for each method, portfolio and confidence level is
created, which views the percentage change in the VaR measures. For these time-
series the standard deviation is calculated. To obtain the annualized percentage
volatility we multiply the standard deviation by the square root of 250 134 .
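The two steps, equation (23) and the annualisation, can be sketched as:

```python
import math

# A sketch of equation (23) and the annualisation step: the standard
# deviation of daily percentage changes in a VaR series, times sqrt(250).

def annualized_volatility(var_series, trading_days=250):
    changes = [
        (var_series[t] - var_series[t - 1]) / var_series[t - 1]
        for t in range(1, len(var_series))
    ]
    m = sum(changes) / len(changes)
    sd = math.sqrt(sum((c - m) ** 2 for c in changes) / len(changes))
    return sd * math.sqrt(trading_days)
```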
132 Hendricks, D., “Evaluation of Value-at-Risk Models Using Historical Data”, (1996), p. 47.
133 Ibid, p. 48.
134 In financial theory 250 days is considered to be the number of trading days in a year.
4.6.4 Fraction of Outcomes Covered135
This criterion is more of a fundamental test, where the VaR methods are examined
whether they cover the portfolio outcomes that they are intended to capture. For
example, to achieve the desired level the coverage should be 95 percent for the
methods using a 95% confidence interval.
FoOC = 1 − (Σt=1..N violations(1;0)) / N    (24)
where violations takes the value (1) or (0) depending on whether a violation is
present or not, i.e. no violation implies a (0) and a violation a (1). For example, if the VaR
measure exceeds the portfolio outcome on five trading days out of a hundred the
fraction of outcomes covered would be 0.95 (1-5/100).
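Equation (24) can be sketched as follows, assuming VaR is stored as a negative return so that a violation is a day on which the portfolio outcome falls below the VaR measure.

```python
# A sketch of equation (24): the share of days on which the portfolio
# outcome did not fall below the (negative) daily VaR measure.

def fraction_covered(outcomes, var_measures):
    violations = sum(1 for r, v in zip(outcomes, var_measures) if r < v)
    return 1.0 - violations / len(outcomes)
```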
4.6.5 Multiple Needed to Attain Desired Coverage
The multiple needed to attain desired coverage is the factor by which each VaR
measure would have to be scaled to attain the desired level of
coverage. This criterion focuses on the size of the adjustments in the risk
measurement required to achieve this perfect coverage. This measure is important
because shortcomings in VaR measures that seem small in probability terms may
be much more significant when considered in terms of the changes required to
remedy them.
135 Hendricks, D., “Evaluation of Value-at-Risk Models Using Historical Data”, (1996), p. 49-50.
4.6.5.1 Calculation procedure
First, we divide the daily portfolio returns by the VaR measure for each day.
Out of these generated numbers the percentiles for a 95% and a 99% confidence
interval are calculated, i.e. the 5th and 1st percentile. These percentile values are
the multiples that would have been required for each VaR measure to attain the
desired level of coverage. A value below one implies that the VaR method
overstates the risk and hence a value above one understates the risk. Thus, a
multiple of exactly one is to be preferred.
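One reading of the scaling procedure, with illustrative names, is sketched below. Because VaR is a negative number, dividing returns by it flips the ordering: the worst losses give the largest ratios, so the multiple at a 95% confidence level sits at the 95th percentile of the ratios; this orientation of the percentile is our assumption.

```python
# A sketch of the multiple needed to attain desired coverage: the tail
# percentile of the return-to-VaR ratios, using a nearest-rank percentile.

def required_multiple(outcomes, var_measures, q=5):
    """Factor by which the VaR series must be scaled for (100-q)% coverage."""
    ratios = sorted(r / v for r, v in zip(outcomes, var_measures))
    rank = max(0, min(len(ratios) - 1, int((100 - q) / 100.0 * len(ratios))))
    return ratios[rank]
```

A result above one means the VaR method understates the risk, below one that it overstates it, in line with the interpretation above.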
4.6.6 Average Multiple of Tail Event to Risk Measure
This evaluation method relates to the median size of outcomes not covered by the
VaR measures. The average multiple of tail events is calculated and compared
with the VaR measure, where tail events are defined as the largest percentage of
losses measured relative to the confidence level chosen.
4.6.7 Maximum Multiple of Tail Event to Risk Measure
This performance criterion assesses the size of the maximum portfolio loss. The
maximum multiples are likely to be highly dependent on the length of the sample
period, e.g. for shorter periods the maximum multiple is likely to be lower.
136 Hendricks, D., “Evaluation of Value-at-Risk Models Using Historical Data”, (1996), p. 50.
137 Ibid, p. 51.
138 Ibid, p. 52.
4.6.7.1 Calculation procedure
Out of the generated multiple series in 4.6.5 the maximum value is found, which
is the maximum multiple of tail event to risk measure.
4.6.8 Correlation between Risk Measure and Absolute Value of Outcome
This criterion examines to what extent the VaR measures adjust to changes in the
portfolio risks over time.139 Briefly, the correlation coefficient can be considered
as an index showing the degree of linear co-variation between two variables. A
correlation coefficient equal to (–1) implies perfect negative correlation, i.e. the
variables are moving exactly in the opposite direction. A coefficient of (+1)
implies perfect positive correlation and hence the variables are moving in exactly
the same direction. 140 According to Hendricks, even a perfect VaR measure
cannot guarantee a correlation of (1) between the risk measure and the portfolio
outcome, which is an important statement to bear in mind 141 . Despite this, a value
close to one is desired.
First the absolute values of the portfolio returns are calculated. These values are
then compared with the generated VaR measures using the following correlation
formula:
ρ1,2 = σ1,2 / (σ1σ2)    (25)
σi = √( Σt=1..N (ri,t − r̄i)² / (N − 1) )    (26)
139 Hendricks, D., “Evaluation of Value-at-Risk Models Using Historical Data”, (1996), p. 53.
140 Söderlind, L., Att mäta ränterisker, (1996), p. 79.
141 Hendricks, D., “Evaluation of Value-at-Risk Models Using Historical Data”, (1996), p. 53.
142 Ross, S.A., Westerfield, R.W. & Jaffe, J., Corporate Finance, (1996), p. 253.
4.6.9 Mean Relative Bias for Risk Measures Scaled to Desired
Level of Coverage144
Here the mean relative bias that results when VaR measures are scaled to either 95
percent or 99 percent coverage is assessed. The purpose is to determine which
approach could provide the desired level of coverage with the smallest
average VaR measures.
The scaling is performed by multiplying the VaR measures for each method by
the multiple needed to attain desired coverage.
143 Gustavsson, M. & Svernlöv, M., Ekonomi & Kalkyler, (1994), p. 706.
144 Hendricks, D., “Evaluation of Value-at-Risk Models Using Historical Data”, (1996), p. 54.
4.7.1 Actual portion of Fraction of Outcomes Covered
To test if the VaR methods cover the portfolio outcomes as is intended, i.e. if the
VaR measures at the 95% level actually cover 95 percent of the outcomes and the
VaR measures at the 99% level cover 99 percent of the outcomes, we set up the
following hypotheses:
H0 : FoOC=0.95
H1 : FoOC≠0.95
The hypotheses for the 99% confidence interval are set up accordingly. Hence, the
null hypothesis, H0 , states that the VaR measures cover the portion of outcomes
that they are intended to. The alternative hypothesis, H1 , states that the VaR
measures differ significantly from the portions of outcomes they are supposed to
cover. These tests are then performed for all seven VaR methods at both
confidence levels and for all three portfolios, which sum up to 42 tests. The
formula used for the calculation is shown below:
Z = (P − π) / √(π(1 − π)/n)    (27)
where P is the observed portion of the measured variable in the sample, π is the
portion that the test is performed against, i.e. 95% for tests on VaR95% and 99%
for tests on VaR99%, and n is the number of observations. The test statistic Z is
approximately normally distributed, with zero mean and a standard deviation of
one, if nπ(1−π) > 5.145 If the absolute value of Z exceeds 1.96, H0 is rejected
at the 5% level. Similarly, if the absolute value of Z exceeds 2.57, H0 is rejected at
the 1% level, and if the value exceeds 3.3, H0 is rejected at the 0.1% level. For
absolute values of Z below 1.96, H0 is accepted. These are the Z-values used, since
they correspond to the different significance levels.146
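Equation (27) and the decision rule can be sketched as follows; the variable names follow the text.

```python
import math

# A sketch of the significance test in equation (27): P is the observed
# fraction, pi the tested fraction (0.95 or 0.99), n the sample size.

def proportion_z(p_observed, pi, n):
    """Z = (P - pi) / sqrt(pi * (1 - pi) / n)."""
    return (p_observed - pi) / math.sqrt(pi * (1.0 - pi) / n)

def reject_h0(z, critical=1.96):
    """Two-sided decision: 1.96 at the 5% level, 2.57 at 1%, 3.3 at 0.1%."""
    return abs(z) > critical
```

For instance, a coverage of 93 percent over 1254 observations tested against 95 percent gives |Z| above 1.96, so H0 is rejected at the 5% level.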
145 Körner, S., Statistisk dataanalys, (1987), p. 283.
146 Körner, S., Tabeller och formler för statistiska beräkningar, (1986), p. 14.
4.7.2 Difference in FoOC between OMX and small-cap shares
H0 : VaROMX=VaRSMALL
H1 : VaROMX>VaRSMALL
For an explanation of why this test is one-sided, see section 5.4.1.2. This test is
then performed for all seven methods at both confidence levels. The formula
below is used for the calculation of the significance test:
Z = (P1 − P2) / √( P(1 − P)(1/n1 + 1/n2) )    (28)
where P1 is the portion of the variable measured in the first sample, i.e. the
fraction of outcomes covered for the OMX portfolios, and P2 is the portion of the
variable measured in the second sample, i.e. the fraction of outcomes covered for
the small-cap portfolios. n1 is the number of observations in the OMX sample and
n2 is the number of observations in the small-cap sample. P is the weighted
average of the two samples and Z is approximately normally distributed with zero
mean and a standard deviation of one. 147 Since H1 is one-sided the critical Z
values become 1.645 for the 5%- level, 2.33 at the 1%- level and 3.1 at the 0.1%-
level. For all Z values below 1.645 H0 is accepted. 148
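Equation (28) can be sketched as below, with P computed as the observation-weighted average of the two fractions, as the text describes.

```python
import math

# A sketch of the two-sample test in equation (28); one-sided, so the
# critical value is 1.645 at the 5% level (2.33 at 1%, 3.1 at 0.1%).

def two_sample_z(p1, n1, p2, n2):
    """Z statistic for H1: p1 > p2, with a pooled proportion P."""
    p_pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(p_pooled * (1.0 - p_pooled) * (1.0 / n1 + 1.0 / n2))
    return (p1 - p2) / se
```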
4.7.3 Significance of the correlation coefficients
To test if the correlation coefficients differ significantly from zero the following
hypotheses are set up:
H0 : correlation=0
H1 : correlation≠0
147 Körner, S., Statistisk dataanalys, (1987), p. 285.
148 Körner, S., Tabeller och formler för statistiska beräkningar, (1986), p. 14.
The test is performed for all seven methods at both confidence levels and for all
three portfolios. The formula below is used for the calculation:
t = r√(n − 2) / √(1 − r²)    (29)
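Equation (29) is a one-liner; r is the correlation coefficient, n the number of observations, and the statistic follows a t-distribution with n − 2 degrees of freedom.

```python
import math

# A sketch of equation (29): t statistic for testing whether a correlation
# coefficient differs from zero.

def corr_t(r, n):
    """t = r * sqrt(n - 2) / sqrt(1 - r^2)."""
    return r * math.sqrt(n - 2) / math.sqrt(1.0 - r * r)
```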
4.8 Test of normality
According to formulas (9) and (10) in chapter 3, the skewness and kurtosis of the
portfolio returns are computed for each portfolio. When the skewness and
kurtosis are obtained, we use them in the Jarque-Bera formula (30) below to test
whether our data follows a normal distribution or not. Jarque-Bera is a test
statistic for testing whether a series is normally distributed. The statistic in formula
(30) follows a Chi-squared distribution (χ²), which is an asymmetric distribution,
with two degrees of freedom:
Jarque-Bera = ((N − k)/6)·(Sk² + (1/4)(Kur − 3)²)    (30)
The results from our tests can be viewed in chapter 5.10. The χ2 generated
probability values are the probabilities that a Jarque-Bera statistic exceeds the
observed value under the null, where a small value leads to the rejection of the
null. 152
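Equation (30) can be sketched as follows, assuming skewness and kurtosis are the usual moment ratios of formulas (9) and (10) in chapter 3, and that k is the number of estimated coefficients (zero for a raw return series).

```python
# A sketch of the Jarque-Bera statistic in equation (30), built from the
# sample moments of the return series.

def jarque_bera(returns, k=0):
    n = len(returns)
    m = sum(returns) / n
    devs = [r - m for r in returns]
    sd = (sum(d * d for d in devs) / n) ** 0.5
    skew = sum(d ** 3 for d in devs) / n / sd ** 3
    kurt = sum(d ** 4 for d in devs) / n / sd ** 4
    return (n - k) / 6.0 * (skew ** 2 + 0.25 * (kurt - 3.0) ** 2)
```

A large statistic relative to the χ²(2) distribution leads to rejection of the null of normality.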
149 Körner, S., Statistisk dataanalys, (1987), p. 287.
150 Körner, S., Tabeller och formler för statistiska beräkningar, (1986), p. 17.
151 Eviews 3, User’s Guide, (1998), p. 165.
152 Ibid, p. 165.
4.9 Criticism of primary data
The most decisive point in our study is that the collected data is correct and that
no price information is missing. Two things that could have an adverse
effect on our study are that the data was not corrected for dividends and that
companies have merged or been acquired during the period of our study. To cope
with these problems the data has been corrected for dividend payouts. When
firms have merged and one stock has been exchanged for another we have corrected
the data for this as well.
Another thing that is of great importance is that the samples we have used for the
mid-cap and the small-cap portfolios are random. To make sure of this we have
used Excel to generate random numbers that represent different companies.
Some of the small-cap shares have not changed hands on every trading day. To
handle this problem we have used the last price paid on the day when the stock
was last traded. Companies whose shares have traded on less than two thirds of the
trading days have been excluded from the study to avoid distorted results.
Chapter 5 - Results
In this chapter the results from the study of VaR over three different approaches,
seven different methods and three different kinds of portfolios are presented for
both the 95% and the 99% confidence levels. We regard the performance criteria
fraction of outcomes covered and correlation between risk measure and absolute
value of outcome as the most intuitive when analysing the results, and hence we
give these criteria the most attention. In addition, the results from the significance
and normality tests are presented, see appendix 10-11 and 16. Tables and
diagrams over the results can be found in appendix 4-9, 12-15 and 17-18. In each
section the expectations are discussed first, followed by the results per se.
Finally, comments are made on how the results correspond with our expectations.
5.1 Mean Relative Bias
It is hard to have any qualified expectations for this criterion. However, previous
research has found that longer windows normally give higher VaR measures, i.e.
methods with long windows have a mean relative bias above zero.1
When interpreting the mean relative bias it is important to keep in mind that this
criterion only measures how the size of the VaR measures relates to the mean of all
VaR measures for the specific category. For instance, a VaR measure of one
method for the OMX portfolio is compared to other methods’ VaR measures for
the same portfolio and level of confidence. The mean relative bias is measured in
percentage terms, so a value of 0.10 implies that the VaR measure is 10 percent
larger than the average.
For the vast majority of portfolios the mean relative bias is between –0.1 and
+0.1, indicating that most VaR measures vary up to 10 percent from the mean.
The results are presented in appendix 6, and for the 95% confidence level the HS
approach seems to give the lowest VaR measures and the EqWMA approach the
highest. At the 99% confidence interval HS tends to give the highest values,
especially with a window of 250 trading days. Further, there is a distinct
indication that longer windows give higher VaR measures. One possible
explanation for this is Jensen’s inequality theorem, which states that if the true
conditional variance is changing frequently, then the average of a concave
function, i.e. the VaR measure, will tend to be less than the same concave function
of the average variance. Briefly the implication of the theorem for VaR is that
VaR measures with short windows should on average be smaller than VaR
measures with longer windows.2,3
1 Hendricks, D., “Evaluation of Value-at-Risk Models Using Historical Data”, (1996), p. 46.
For this criterion we can conclude that the results are in line with our expectations.
In addition, the average VaR of different methods deviates as much as ten percent
from the mean in both directions, indicating that two methods on average can
differ 20 percent from each other.
5.2 Root Mean Squared Relative Bias
As with the mean relative bias it is hard to make predictions about the results of
the root mean squared relative bias. However, this criterion is sensitive to
window size and therefore the VaR methods using windows of average sizes are
expected to differ the least from the mean, i.e. give the lowest root mean squared
relative bias.
As can be seen in appendix 7, for both confidence levels and all three portfolios
the EqWMA approach with a window size of 125 trading days and the ExpWMA
approach with a decay factor of 0.97, i.e. 151 days – see table 1, give the lowest
results. It is also interesting to note that on any given day differences in the range
of 30-40 percent between two VaR methods are not uncommon. Many portfolios
have values of 0.15-0.20, which means that one method can produce a VaR
measure 15-20 percent above the average, while another produces values 15-20
percent below the average. Clearly, it is a serious drawback for VaR as a concept
that different methods give remarkably different VaR measures.
The methods that give the lowest values are methods with windows of average
length, i.e. 125 and 151 observations. The HS with a window size of 125 trading
days does not give a low value, which could be due to the fact that the HS
approach is very dependent on a few observations. Since the HS approach only
relies on five or one percent of the observations, depending on the confidence
level, the rest of the observations are irrelevant for the calculations. Therefore it
can be questioned whether HS125d produces reliable results.
2 Hendricks, D., “Evaluation of Value-at-Risk Models Using Historical Data”, (1996), p. 66.
3 http://mathworld.wolfram.com/JensensInequality.html
5.3 Annualized Percentage Volatility
We expect methods using short windows to have high volatility measures, since
they depend on fewer observations that leave the window more quickly. The
ExpWMA methods should have higher volatility measures than the other
approaches, since they put higher weights on recent observations through the
decay factor. In addition, the small-cap portfolio could probably have a higher
volatility measure compared to the other portfolios, since small-cap stocks
generally have a lower liquidity measured as depth. Poor liquidity can sometimes
imply higher volatility since even a small amount of trading can affect prices
heavily.
It is important to note that the values for approaches using the normal distribution
are exactly the same for both confidence levels, since these are multiples of the
same standard deviation. From the diagrams, see appendix 8, it is clear that longer
windows give lower volatility and vice versa. In addition, the ExpWMA
approach, especially with a decay factor of 0.94, gives the highest volatility
measures. This is not surprising due to the fact that the VaR measure varies as
new observations enter the window.
There seems to be a tendency for the small-cap portfolios to have a slightly higher
volatility than the OMX shares even though the differences are small. Another
conclusion that can be drawn is that, for the same window length, the HS
approach tends to have a higher volatility than the EqWMA approach. This is
probably due to the fact that the HS to a high degree only depends on a couple of
observations.
The conclusions from this criterion are very much what we expected. Short
windows give high volatility measures, in particular the ExpWMA. The small-cap
portfolios give higher volatility measures in 13 cases out of 14, so even though the
differences are small there is a tendency for these portfolios to have a higher
volatility measure.
5.4 Fraction of Outcomes Covered
This criterion is the most important and intuitive for evaluating VaR methods. To
be able to use the VaR approach in practice the VaR methods using confidence
levels of 95% and 99% should also give a 95 percent and 99 percent coverage
respectively. However, financial returns experience fat tails and previous research
has shown that VaR models find it hard to cover these confidence levels,
especially the 99% level.4 The RiskMetrics group regards the ExpWMA approach
as superior to the others, especially with a decay factor of 0.94, since this is
used for daily VaR estimation and is receptive to short-term volatility swings.
Furthermore, RiskMetrics considers the HS125d to produce unsatisfactory results,
since the window is too short.5
The results in appendix 9 show that HS125d sharply underestimates the VaR for
all portfolios at both confidence levels. At the 95% confidence interval the
EqWMA approach, for which seven out of nine portfolios overestimate the VaR,
tends to produce VaR measures that cover more than 95 percent of the outcomes.
For the ExpWMA approach five out of six portfolios overestimate the VaR, while
only one out of six portfolios for the HS approach produces too high VaR
measures. There is no apparent difference in the VaR measures for the small-cap
and OMX portfolios.
Furthermore, at the 99% confidence level only two out of 21 portfolios actually
cover 99 percent of the outcomes. Generally the OMX portfolios have a higher
fraction of outcomes covered than both the small-cap and mixed portfolios. At the
95% confidence level four out of seven methods produce higher VaR measures
for the OMX portfolios, while two methods have exactly the same coverage. At
the 99% level six out of seven methods give higher results for the OMX portfolios
compared to the small-cap.
To some extent the results for this criterion are in line with our expectations. It
seems the VaR methods have a hard time covering 99 percent of the outcomes, and
the HS125d clearly produces unsatisfactory results. In addition, the fraction of
outcomes covered for the OMX portfolios seems to be higher than for the small-cap
portfolios. However, there is no evidence indicating that the ExpWMA approach
produces superior results compared to the other methods.
5.4.1 Significance tests
To further evaluate the fraction of outcomes covered, significance testing has been
performed both to check if the portfolios cover the portion of outcomes that they
are supposed to, and if there is any difference between small-cap and OMX
portfolios. The tests are described in section 4.7, and the results are presented in
appendix 10-11.
4 Hendricks, D., “Evaluation of Value-at-Risk Models Using Historical Data”, (1996), p. 49.
5 JPMorgan/Reuters, RiskMetrics – Technical Document, (1996), p. 78.
5.4.1.1 Fraction of Outcomes Covered
For the OMX portfolio at the 95% confidence level all H0 are accepted, i.e. all
methods give coverage of 95 percent. On the other hand, at the 99% level both HS
methods underestimate the VaR and especially the HS125d produces too low VaR
measures.
The small-cap portfolios all produce attractive results at the 95% confidence
interval. However, at the 99% level the picture is quite the opposite. Six out of
seven portfolios significantly underestimate the VaR and only H0 for the HS250d
is accepted. This is especially interesting, since the Basel 1995 proposal for
market risk recommends using HS with a 99% confidence level. In addition, five
out of the six portfolios that underestimate the VaR do so at the lowest, i.e.
0.1%, significance level. Apparently this is a very serious drawback of the VaR
concept.
For the mixed portfolios no method underestimates the VaR at the 95%
confidence level. A surprising result is that one method, the EqWMA250d,
overestimates the VaR. At the 99% level only two methods produce results for
which H0 is accepted, the HS and the EqWMA with windows of 250 trading days.
All of the other methods underestimate the VaR at the 1% significance level or
less.
5.4.1.2 Difference in FoOC between OMX and small-cap shares
On the 95% confidence level none of the methods signals that there is a difference
between the small-cap and the OMX portfolios. On the other hand, for the 99%
confidence interval all methods using a normal distribution get their H0 rejected,
i.e. these methods produce a significantly higher fraction of outcomes covered for
the OMX portfolios compared to the small-cap portfolios. For the HS methods no
difference could be detected and hence H0 for these methods is accepted.
5.5 Multiple Needed to Attain Desired Coverage
The multiple needed to attain desired coverage is more or less a mirror image
of the fraction of outcomes covered. Hence, the methods that tend to overestimate
VaR measured as fraction of outcomes covered will get a multiple less than one in
this test, and vice versa. The larger the deviation from one, the greater the risk
measure has to adjust to achieve a perfect coverage. Since we expect a poor
coverage for the HS125d in the previous test, the multiple is consequently
expected to be relatively high. Similarly we expect values above one for the 99%
confidence level, and the ExpWMA to produce values close to one.
The diagrams in appendix 12 show that the multiples are a reflection of the
fraction of outcomes covered, both at the 95% and the 99% confidence level.
Worth noting is the very poor result by the HS125d at the 99% level, which
underestimates the VaR by approximately 40 percent. Another interesting observation
is the sharp overestimation of VaR by the EqWMA250d for the mixed portfolio.
Further on, at the 95% level no method seems to be superior to the other methods,
but for the 99% level HS250d and EqWMA250d appear to have somewhat more
attractive values. The small-cap portfolios at the 99% level are the worst
performers, while it is more difficult to distinguish among the portfolios at the
95% level. For all seven methods and both confidence levels the multiples stretch
over a range of 0.9 – 1.45.
5.6 Average Multiple of Tail Event to Risk Measure
This performance criterion views the size of outcomes not covered by the VaR
measure. To evaluate the test we use the normal distribution as a benchmark. In
calculating the benchmark we use the table-value for the average tail percentile, i.e.
97.5 for the 95% confidence level and 99.5 for the 99% level, which is 1.96 and
2.575 respectively. These are then divided by the table-values for the 95% and
the 99% confidence levels. Hence, the benchmark for the 95% level is
(1.96/1.645)=1.19 and for the 99% level (2.575/2.327)=1.11. This benchmark is
not applicable for the HS, since the method does not rely on normally distributed
returns.
Thus, the EqWMA and the ExpWMA approaches should present values close to
the benchmark if the portfolio returns are to be regarded as normally
distributed. At the same time, we are aware of the fact that financial theory states
that returns hardly follow a normal distribution.
The diagrams in appendix 13 indicate that the OMX portfolios, for all methods
and both confidence levels, have multiples close to the benchmarks relative to the
other portfolios. This result implies that the normal distribution works relatively
better for OMX stocks. Later on, in section 5.10, we will see that this is
an ambiguous result. At the same time the small-cap portfolio consistently deviates
the most from the benchmarks. Otherwise we cannot conclude that one method is
superior to the other methods.
The results are somewhat in line with the expectations, since the OMX portfolios
produce values close to the benchmarks. On the other hand, the small-cap and
mixed portfolios have values significantly above the benchmarks. Even though the
values for this criterion ideally should lie close to the benchmarks, financial
theory states that returns experience fat tails, and hence the normal distribution is
a poor approximation.
5.7 Maximum Multiple of Tail Event to Risk Measure
Basically the maximum multiple value is the largest multiple for each method
over all observations. Regarding the ExpWMA approach, which is known as a
method able to cover short-term volatility well, we expect a relatively low max
multiple.
The results confirm our expectations regarding that the ExpWMA method
properly covers the short-term volatility, which implies a relatively low max
multiple. Hence, the ExpWMA tends to be the superior method in this respect.
However, this conclusion can definitely be too hasty, because the performance
criterion is only based on one single observation over the total sample of 1254
observations
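As an illustration, the average and maximum multiples can be computed from daily losses and VaR estimates roughly as follows; the series below are invented for the example, not taken from the study:

```python
# Multiples of tail event to risk measure: for days where the loss exceeds
# the VaR estimate, record loss/VaR; then take the average and the maximum.
def tail_multiples(losses, var_estimates):
    return [l / v for l, v in zip(losses, var_estimates) if l > v]

losses = [1.2, 0.4, 2.9, 0.8, 1.6]    # illustrative daily losses
var_est = [1.0, 1.0, 2.0, 1.0, 1.5]   # illustrative daily VaR estimates

mults = tail_multiples(losses, var_est)
avg_multiple = sum(mults) / len(mults)  # average multiple over exceedances
max_multiple = max(mults)               # maximum multiple (a single day)
print(round(avg_multiple, 3), max_multiple)  # 1.239 1.45
```

Note that the maximum rests on one single day, which is exactly why the text warns against hasty conclusions from this criterion.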
5.8 Correlation between Risk Measure and Absolute
Value of Outcome
We expect the methods using short windows to have the highest correlation
coefficients, because these are superior in capturing volatility fluctuations. In
addition, the ExpWMA approach should produce high values, since the strength
of this approach is that it is more responsive to changes in the portfolio
volatility.
As can be seen in appendix 15, when comparing methods using the same approach
but different window sizes, for example within the EqWMA family, it is apparent
that a shorter window gives a higher correlation coefficient. Furthermore, the
ExpWMA approach produces the highest correlation coefficients. It is also
interesting to note that for all 14 methods, the OMX portfolios have higher
correlation coefficients than the small-cap portfolios.
As we expected, the methods with short windows and the methods using the
ExpWMA approach produce the highest correlation coefficients.
To test whether the correlation coefficients differ significantly from zero we set
up the hypotheses described in section 4.7.3. The test results are presented in
appendix 16. As can be seen in the appendix, all correlation coefficients differ
from zero at the lowest significance level, i.e. the 0.1% level. Hence, all VaR
methods are responsive to changes in the market volatility.
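The significance test can be sketched as follows. We assume, as our reading of section 4.7.3, the standard t-test for a correlation coefficient; the value r = 0.12 is illustrative:

```python
# t-test for H0: correlation coefficient = 0.
# With n observations, t = r * sqrt(n - 2) / sqrt(1 - r^2) follows a
# t-distribution with n - 2 degrees of freedom under the null.
import math

def corr_t_stat(r, n):
    return r * math.sqrt(n - 2) / math.sqrt(1.0 - r * r)

n = 1254                  # sample size used in the study
t = corr_t_stat(0.12, n)  # illustrative correlation coefficient
# With n this large, even modest correlations exceed the ~3.3 two-sided
# critical value of the 0.1% level.
print(round(t, 2))  # 4.28
```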
5.9 Mean Relative Bias for Risk Measures Scaled to Desired Level of Coverage
This performance criterion assesses which method can provide the desired level
of coverage with the smallest average VaR measure. Previous research indicates
that the ExpWMA approach should have a relatively low mean relative bias for
risk measures scaled to the desired level of coverage, and vice versa for the HS
approach.⁶ In general, as stated in section 5.1, the mean relative bias criteria are
difficult to make predictions for prior to the investigation.
The diagrams in appendix 17 indicate that the HS method gives a poor value at
the 99% confidence level, especially for the small-cap portfolios. The ExpWMA
approach appears to be superior at both confidence levels, except for the mixed
portfolio with decay factor 0.94 at the 95% level. Moreover, for the EqWMA
approach the small-cap portfolios produce the lowest figures in two out of three
cases at the 99% level, but the highest values in two out of three cases at the
95% level.
The results follow the expectation, based on previous research, that the
ExpWMA approach has a relatively low mean relative bias for risk measures
scaled to the desired level of coverage.
5.10 Test for Normality
As discussed in section 3.3, the distribution of financial returns can take many
different shapes. Since both the EqWMA and the ExpWMA approaches rest on
the assumption of normally distributed returns, we find it interesting to examine
whether our data are normally distributed or not. In a first stage we test our data
for skewness and kurtosis, and secondly we perform a Jarque-Bera test for
normality, see section 4.8. It is important to stress that the normal distribution is
a well-functioning approximation for smaller samples, i.e. 30-150 observations,
but can be an insufficient approximation for larger samples such as ours of 1254
observations.⁷
If the portfolio returns were perfectly normally distributed, the computed
skewness would be close to zero and the kurtosis close to three. That would
produce a Chi-square probability value of one, and hence the null hypothesis
could not be rejected, i.e. the portfolio returns would be regarded as normally
distributed. As can be seen in table 2 below, the test values for skewness indicate
that the returns from the OMX portfolio are right-skewed, since 0.488 is a
relatively high and positive number. Furthermore, for the small-cap portfolio the
test indicates left-skewed returns, while the mixed portfolio shows results close
to zero.
⁶ See for instance Hendricks, D., “Evaluation of Value-at-Risk Models Using Historical Data”
(1996).
⁷ Hagnell, M., Lecturer in Econometrics at Lund University, (00-17-05).
A distribution is considered leptokurtic if the kurtosis coefficient is above three
and platykurtic if it is below three. The kurtosis calculations show that all three
portfolios' returns are significantly leptokurtic, especially the returns of the
OMX portfolio with a value of 9.86, but the small-cap and mixed portfolio
returns are also significantly leptokurtic.
The Chi-square probability values in table 2 clearly indicate that the null
hypotheses set up in section 4.8 are rejected for all portfolios, which implies that
the portfolio returns cannot be regarded as normally distributed.
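The skewness, kurtosis and Jarque-Bera statistics used here can be sketched with the Python standard library; the sample below is a toy series and the helper names are ours:

```python
# Sample skewness and kurtosis, and the Jarque-Bera statistic
# JB = n/6 * (S^2 + (K - 3)^2 / 4), chi-squared with 2 df under normality.
def skew_kurt(x):
    n = len(x)
    m = sum(x) / n
    m2 = sum((v - m) ** 2 for v in x) / n
    m3 = sum((v - m) ** 3 for v in x) / n
    m4 = sum((v - m) ** 4 for v in x) / n
    return m3 / m2 ** 1.5, m4 / m2 ** 2  # skewness, kurtosis

def jarque_bera(x):
    n = len(x)
    s, k = skew_kurt(x)
    return n / 6.0 * (s ** 2 + (k - 3.0) ** 2 / 4.0)

toy = [-2.0, -1.0, 0.0, 1.0, 2.0]   # symmetric toy sample
s, k = skew_kurt(toy)
print(s, k)                         # 0.0 and 1.7 (symmetric, platykurtic)
print(round(jarque_bera(toy), 3))   # 0.352, below the 5% chi2(2) value 5.99
```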
To find explanations for this result we plotted the returns of each portfolio
against normally distributed random numbers for the same sample size, i.e. 1254
observations. The results are shown in figures 3-5 below, which demonstrate the
difference in distribution between the portfolio returns and a normal distribution
for all three portfolios. It is difficult to see the kurtosis and skewness, even for
the OMX portfolio with its very high kurtosis value, but compared with the
leptokurtic distribution shown in section 3.3 all figures appear similarly
leptokurtic.
[Figure 3. A random normal distribution plotted against the OMX portfolio returns.
Frequency vs. standard deviation; series OMX and NORMAL.]
In accordance with the negative skewness coefficient in table 2, more values
should lean towards negative numbers for the small-cap portfolio, and the
opposite holds for the OMX portfolio. The mixed portfolio has a skewness
coefficient close to zero, and hence its distribution can be considered symmetric.
[Figure 4. A random normal distribution plotted against the small-cap portfolio returns.
Frequency vs. standard deviation; series SMALL and NORMAL.]
[Figure 5. A random normal distribution plotted against the mixed portfolio returns.
Frequency vs. standard deviation; series MIX and NORMAL.]
From the diagrams it is difficult to see the fat tails of the portfolio returns, and
therefore the frequency tables are attached in appendix 18. The kurtosis is
significant, and an explanation for this result can be found at the heart of
portfolio theory. The figures indicate that, compared with the generated normal
distribution, too many return observations are clustered around relatively small
deviations from the mean for all three portfolios. We believe this is an effect of
diversification. However, it is important to stress that the clusters around the
mean for the small-cap and, to some extent, the mixed portfolio can partly be
explained by the filling in of missing prices described in sections 2.4 and 4.1.
The main reason to hold a portfolio in the first place is to lower the investment
risk by distributing the capital over different business lines and sectors. If the
diversification works, the fluctuation in returns decreases and hence more
observations are clustered around the mean. In addition, there is a tendency for
the correlation between stocks to increase in periods of global turbulence, and
that might be the reason why financial returns exhibit fat tails. These two effects
can explain why the portfolio outcomes are not normally distributed.
Since the normal distribution appears to be a poor approximation of the portfolio
returns in our study, the use of another approximation would be beneficial. Here
the t-distribution could be an option, since it captures fat tails well. However, the
t-distribution depends on the number of degrees of freedom, which is an
uncertain estimate and might make the VaR process both unreliable and
complicated. Hence, VaR with an approximated t-distribution could be a less
appealing tool in practice.
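To illustrate the fat-tail point, one can compare tail probabilities of the normal and the Student-t distribution. The sketch below integrates the t density numerically with the standard library only; df = 5 is an arbitrary choice for the uncertain degrees-of-freedom parameter the text mentions:

```python
# P(T > x) for a Student-t with df degrees of freedom, by midpoint
# integration of the density -- crude, but fine for an illustration.
import math
from statistics import NormalDist

def t_pdf(x, df):
    c = math.gamma((df + 1) / 2) / (math.sqrt(df * math.pi) * math.gamma(df / 2))
    return c * (1.0 + x * x / df) ** (-(df + 1) / 2)

def t_tail(x, df, upper=50.0, steps=20000):
    h = (upper - x) / steps
    return sum(t_pdf(x + (i + 0.5) * h, df) for i in range(steps)) * h

x = 2.326  # the normal 99th-percentile quantile used for 99% VaR
normal_tail = 1.0 - NormalDist().cdf(x)  # about 1%
t5_tail = t_tail(x, df=5)                # noticeably larger, roughly 3%
print(t5_tail > normal_tail)  # True: the t-distribution has fatter tails
```

So a normal-based 99% VaR cut-off leaves roughly three times the intended tail probability if returns are actually t-distributed with few degrees of freedom.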
Another interesting conclusion that can be drawn from the normality test is
linked to the results from the performance criterion average multiple of tail
event to risk measure. To evaluate those results we used the normal distribution
as a benchmark, see section 5.6, where a value close to the benchmark implies
that the portfolio returns can be regarded as following a normal distribution. The
results indicated that the OMX portfolio followed a normal distribution
relatively better than the other portfolios, which appears ambiguous with respect
to the results from the normality tests. However, the average multiple of tail
event to risk measure only compares the 97.5th percentile with the 95th, and the
99.5th with the 99th, and does not take the whole normal distribution into
consideration.
Chapter 6 - Conclusions
We regard methods producing results that do not differ significantly from the
intended fraction of outcomes covered, at the 95% significance level, as
satisfying. However, it can be questioned whether estimations that come close to
having their null hypothesis rejected, such as the OMX-HS125d at the 95%
confidence level, can be considered a reliable and trustworthy VaR method.
Most of our VaR methods work well at the 95% confidence level, while at the
99% level only portfolios consisting of OMX stocks produce satisfying results.
For the small-cap and mixed portfolios, 11 out of 14 VaR methods significantly
underestimate the fraction of outcomes covered at the 99% confidence level.
This indicates a serious shortcoming of the VaR concept: differences in market
cap affect the applicability of VaR. These results suggest that the VaR methods
are mainly applicable to large-cap shares and cannot be used with small-cap or
mixed portfolios, at least not at the 99% significance level.
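The coverage test behind these statements can be sketched as a binomial test on the number of VaR exceedances; the normal approximation below is our simplification and the exceedance counts are illustrative:

```python
# Under H0 the number of exceedances in n days is Binomial(n, p) with
# p = 1 - confidence level; we use the normal approximation to obtain a
# two-sided p-value for the observed exceedance count.
import math
from statistics import NormalDist

def coverage_pvalue(exceedances, n, p):
    mean = n * p
    sd = math.sqrt(n * p * (1.0 - p))
    z = (exceedances - mean) / sd
    return 2.0 * (1.0 - NormalDist().cdf(abs(z)))

n = 1254  # trading days in the study's sample
# At the 99% level we expect about 12.5 exceedances over 1254 days.
print(coverage_pvalue(25, n, 0.01) < 0.05)  # True: 25 exceedances -> rejected
print(coverage_pvalue(13, n, 0.01) < 0.05)  # False: 13 is consistent with H0
```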
In our study we can conclude that the portfolio returns do not follow a normal
distribution. Therefore, we argue that VaR approaches based on the normal
distribution should not be used for VaR estimations, even though the EqWMA
and the ExpWMA approaches do well for the OMX portfolio measured as
fraction of outcomes covered. Hence, assuming a normal distribution despite its
imperfections cannot be recommended.
The HS approach, which does not rest on the assumption of normally distributed
returns, indeed produces very unsatisfying results with a window of 125 trading
days. However, with a longer window, i.e. 250 days, the results are much more
promising. For HS the estimations are acceptable at the 95% level, even though
the OMX and small-cap portfolios do not cover exactly 95 percent of the
outcomes. For the mixed portfolio, on the other hand, the HS250d slightly
overestimates the VaR. All deviations from the 95 percent of outcomes covered
are within the margin of error. At the 99% level the HS250d method clearly
produces the best results for the small-cap and mixed portfolios. However, it
significantly underestimates the VaR at the 99% level for the OMX portfolio.
Still, we argue that HS with a window size of 250 trading days is the VaR
method producing the most attractive results in our study. In addition, there is no
significant difference in the fraction of outcomes covered between the small-cap
and OMX portfolios for the HS methods.
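A minimal sketch of a historical simulation VaR of this kind, with the window and confidence level as parameters; the returns below are simulated, not the thesis data, and the quantile indexing is one common convention among several:

```python
# Historical simulation: the one-day VaR is an empirical quantile of the
# last `window` daily returns, reported as a positive loss.
import random

def hs_var(returns, window=250, confidence=0.95):
    recent = sorted(returns[-window:])      # ascending: worst returns first
    idx = int((1.0 - confidence) * window)  # e.g. 5% of 250 -> index 12
    return -recent[idx]

random.seed(42)
rets = [random.gauss(0.0005, 0.015) for _ in range(1000)]  # simulated returns
print(hs_var(rets, window=250, confidence=0.95) > 0)       # a positive loss level
```

Because no distribution is assumed, lengthening the window simply adds more empirical observations to the tail, which is why the thesis expects longer windows to help.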
The approaches based on the normal distribution, i.e. the EqWMA and the
ExpWMA approach, seem to do very well at the 95% level. No method comes
close to having its null hypothesis rejected except for the MIX-EqWMA250d,
which overestimates the VaR. At the 99% level, on the other hand, the results
are not as good. These results are in line with Lucas & Klaassen's findings, i.e.
VaR methods based on the normal distribution produce attractive VaR measures
at the 95% level but poor ones at the 99% level, see section 3.3. The approaches
work well with the OMX portfolio, but the results for the small-cap and mixed
portfolios are not acceptable. All methods except the MIX-EqWMA250d
underestimate the VaR, with probability values close to zero. We expected the
ExpWMA to be superior to the EqWMA approach, since it is favoured by
RiskMetrics and is known to capture volatility fluctuations well. However, there
is no evidence pointing in this direction in our findings. On the other hand, the
ExpWMA approach generates superior results regarding the mean relative bias
for risk measures scaled to desired level of coverage and the correlation between
risk measure and absolute value of outcome. We can conclude that a high
correlation does not automatically give attractive results; even if the correlation
coefficient had been one (1), the VaR method could not be regarded as totally
perfect.
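The ExpWMA variance recursion with RiskMetrics' decay factor can be sketched as follows; the returns are illustrative and seeding the recursion with the first squared return is our choice:

```python
# ExpWMA / RiskMetrics update: var_t = lam * var_{t-1} + (1 - lam) * r_{t-1}^2,
# with decay factor lam = 0.94 for daily data.
import math

def ewma_vol(returns, lam=0.94):
    var = returns[0] ** 2          # seed with the first squared return
    for r in returns[1:]:
        var = lam * var + (1.0 - lam) * r ** 2
    return math.sqrt(var)

rets = [0.01, -0.02, 0.015, -0.03, 0.01]  # illustrative daily returns
sigma = ewma_vol(rets)
var_95 = 1.645 * sigma  # one-day 95% VaR under the normality assumption
print(round(sigma, 4))  # 0.0129
```

The low decay factor makes the estimate react quickly to recent shocks, which is why the ExpWMA tracks short-term volatility well.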
There is a tendency for the VaR methods with long windows to produce superior
results compared with the methods using shorter windows. It is possible that
windows longer than 250 trading days would produce even more attractive VaR
measures. It might be the case that a one-year window is too short to capture the
risk of infrequent market crashes; with a longer window this risk would be
captured more accurately.
As we showed in section 5.10, the portfolio returns used in our study do not
follow a normal distribution. The returns for all three portfolios exhibit
leptokurtic behaviour with high peaks, thin sides and fat tails. This might be due
to the diversification effect, i.e. portfolio outcomes clustering around the mean,
while increasing correlations between portfolio stocks in times of high volatility
create the fat tails. Finally, the normal distribution is not a well-functioning
approximation for large samples; that it cannot handle large samples properly
favours the HS approach even more, since for HS an increasing window size
only improves the VaR estimates.
In summary, it is obvious that financial returns are not normally distributed, and
a VaR approach should not be based on assumptions that do not hold. Therefore,
we recommend that the HS approach be used for VaR estimations.
The stock markets of today are increasingly more volatile and risky than only a
couple of years ago. One of the main reasons is the fast-paced intrusion of the
“new economy” at the expense of the “old economy”. The “new economy” is
reflected in the stock market, which now to a large extent consists of a relatively
new and untested sector, telecom and IT. The uncertainty regarding the sector
itself and the companies' ability to live up to expectations is significant and
contributes to the volatility increase. The companies in this sector are also
relatively more difficult to analyse, which results in a significant dispersion in
motivated valuations among market actors. Contradictory opinions regarding the
valuation of stocks and a high valuation of the stock market per se are both
driving forces for volatility increases. The high valuation, the booming telecom
and IT sectors and the launch of the EMU, which forces asset managers to
reallocate their portfolio holdings in the region, are all factors that make higher
volatility natural.
Asset managers are highly exposed to these increased risks, and are more or less
forced to invest in the telecom and IT sector to gain and maintain exposure to
the “new economy”. In brief, this implies increased risk taking. Given the
enhanced portfolio risk, a way to capture and measure these risks is called for,
even if asset managers typically are in the business of taking risk, not avoiding
it. The question is therefore whether we can recommend VaR as a risk
measurement tool for asset managers focused on the Swedish equity market.
The primary goals of VaR are to show how much risk is being taken and to
determine whether asset managers are exposed to the risks they intend to be.
Even if the need for a risk management tool is great, especially in the stock
market of today, reliability and stability are the first two characteristics that have
to be considered. A VaR method that is not reliable and does not give stable
results is not a risk tool an asset manager should consider. Even though a VaR
method will never tell an asset manager how much risk to undertake, it must
accurately reflect how much risk is being taken. Furthermore, if a specific VaR
method produces measures that underestimate the risk, an asset manager may,
for instance, accept additional risk being undertaken. This has serious
implications for the portfolio, which then carries more risk than the threshold is
actually intended to allow. Hence, the threshold or risk target defined in terms of
maximum tolerable VaR is not consistent and reliable. On the other hand, a
well-functioning and reliable VaR method can be a superior way to avoid
unexpected and uncontrolled losses.
As stated in chapter 3, Philippe Jorion is of the opinion that there is no doubt
that VaR is here to stay, but he also stresses that the process and methodology of
calculating VaR are just as important as the VaR number itself. On the whole,
this goes hand in hand with the results from this study. As long as a VaR method
is reliable and stable it is probably the superior risk measurement tool for asset
managers managing Swedish equity portfolios. At the same time, it is definitely
too much to conclude that any of the VaR methods in this study can be regarded
as totally reliable and stable. Previously we concluded that the normal
distribution is incorrect as an underlying assumption for a VaR approach, which
automatically excludes the EqWMA and the ExpWMA approaches. The HS
seems to work well with a window of 250 trading days, and it would have been
interesting to see the results of a similar study with longer windows.
One of the purposes of our study was to analyse whether the market cap of
stocks has any impact on how well-functioning and reliable the VaR models are.
From a practical viewpoint, both the OMX and the small-cap portfolios can be
considered extremes, because they consist only of stocks from a certain range of
market caps. Of course specialised equity funds, such as small-cap and index
funds, can have a concentration similar to our OMX and small-cap portfolios,
but these concentrations are not likely holdings for a typical asset manager. On
the contrary, the mixed portfolio appears to be a more natural kind of
diversification among stocks with respect to market caps. Therefore, the VaR
results for the mixed portfolio are perhaps the most interesting, where the
HS250d and the EqWMA50d are the superior methods at the 95% level. At the
99% level the HS250d and the EqWMA250d seem to achieve the most attractive
results measured as fraction of outcomes covered. Hence, the HS250d method
can be regarded as the most useful tool for risk management of Swedish equity
portfolios.
6.4 Suggestions for Further Research
We believe that there are a number of VaR areas that have not yet received
enough attention. Our study concludes that the normal distribution is an
unsatisfactory approximation of portfolio returns, and it would therefore be
interesting to examine how applicable the newer VaR approaches are to stock
portfolios, for example the semi-parametric approach, the improved VaR
methodology and the Monte Carlo simulation approach.
References
Articles
Culp, Christopher L., Mensink, Ron & Neves, Andrea M. P. (1999), “VaR for
Asset Managers”. Derivatives Quarterly, Vol. 5, No. 2, January 1999.
Danielsson, Jon & de Vries, Casper G. (1997), “Value-at-Risk and Extreme
Returns”. London School of Economics, Financial Market Group Discussion
Paper, No. 273, 1997.
Danielsson, Jon, Hartmann, Philipp & de Vries, Casper G. (1998), “The Cost of
Conservatism”. http://www.hag.hi.is/~jond/research/.
Danielsson, Jon & de Vries, Casper G. (1997), “Beyond the Sample: Extreme
Quantile and Probability Estimation”. http://www.hag.hi.is/~jond/research/.
Lucas, André & Klaassen, Pieter (1998), “Extreme Returns, Downside Risk, and
Optimal Asset Allocation”. The Journal of Portfolio Management, Fall 1998.
Maymin, Zak (1998), “VaR variations: is multiplication factor still too high?”.
http://www.gloriamundi.org/var/.
Styblo Beder, Tanya (1995), “VaR: Seductive but Dangerous”. Financial Analysts
Journal, September/October 1995.
Thornberg, John (1998), “Derivative users lack refined controls of risk”. Working
Paper, University of Paisley, December 1998.
Textbooks
Aczel, Amir D. (1993), Complete Business Statistics. Irwin.
Afifi, A. A. & Clark, Virginia (1990), Computer-Aided Multivariate Analysis.
Van Nostrand Reinhold Company.
Gustavsson, Michael & Svernlöv, Magnus (1994), Ekonomi & Kalkyler.
Liber-Hermods.
Hull, John (1997), Options, Futures, and Other Derivatives. Prentice-Hall
International.
Wiedersheim-Paul, Finn & Eriksson, Lars Torsten (1991), Att utreda, forska och
rapportera. Liber Ekonomi.
Electronic Sources
AffärsData
EconLit
Personal contacts
Order | Small-Cap stocks | Order | Small-Cap stocks | Order | Mid-Cap stocks | OMX stocks
1 Bulten b 42 Lap Power b 1 Allgon b ABB a
2 Celsius b 43 Scribona 2 Avesta Sheffield ABB Ltd. a
3 Consilium 44 Lindab 3 Höganäs b Asea a
4 Finnveden b 45 Nea b 4 SSAB b Astra b
5 Gunnebo 46 Active b 5 Seco-Tools b AstraZeneca
6 HL Display b 47 Sintercast a 6 Svedala Electrolux b
7 Haldex 48 Havsfrun 7 Assidomän Ericsson b
8 Itab b 49 Skanditek 8 JM b FSPB a
9 KMT 50 Nordifa 9 Lundbergs b H&M b
10 Kabe b 51 Hexagon b 10 NCC b Investor b
11 Kalmar industrier 52 Midway b 11 Tieto-Enator Nokia
12 Nolato 53 Tivox b 12 Invik b Nordbanken Holding
13 Klippan 54 Westergyllen b 13 Kinnevik b Pharmacia a
14 Munksjö 55 Borås Wäfveri b 14 Perstorp b Pharmacia & Upjohn
15 Rottneros 56 Brio b 15 Trelleborg Sandvik b
16 Rörvik Timber 57 Cloetta b 16 Graninge SEB a
17 Bergman &Beving 58 Spendrups b 17 SAS SHB a
18 Bilia b 59 TV4 b 18 Atle Skandia b
19 Doro 60 VLT b 19 Bure Skanska b
20 Elgruppen b 61 Artema b 20 Custos a Sparbanken a
21 Fjällräven b 62 Elekta b 21 Latour Volvo b
22 Folkebolagen 63 Getinge 22 Ratos
23 OEM b 64 Nobel Biocare 23 OM-gruppen
24 Diös 65 J&W
25 Fastpartner 66 KM b
26 Heba b 67 Scandiaconsult
27 Hufvudstaden 68 Ångpanneföreningen b
28 Ljungbergruppen b 69 Senea
29 Norrporten 70 Bong Ljundahl b
30 Peab b 71 Elanders b
31 Piren b 72 Esselte b
32 Platzer 73 Graphium
33 Realia b 74 Strålfors b
34 Wallenstam b 75 Tryckindustri b
35 Wihlborg 76 Geveko
36 B&N b 77 Svolder b
37 Concordia b 78 Öresund
38 Stena-Line b 79 H&Q
39 IMS 80 Matteus
40 Måldata b 81 NH Nordiska
41 IBS b
Appendix 3 – Equity Portfolios
Portfolio 1 – Swedish equity OMX portfolio (large-cap)
Reallocation period / Stocks: 95:1 | 95:2 | 96:1 | 96:2 | 97:1
Astra b Ericsson b Astra b Astra b Ericsson b
Ericsson b Astra b Ericsson b Ericsson b Astra b
Volvo b Volvo b Volvo b Volvo b ABB a
Asea a Asea a Asea a ABB a Volvo b
Electrolux b Sandvik b SHB a Sandvik b SHB a
Sandvik b Stora a SEB a P&U Sandvik b
Stora a Pharmacia a Skanska b SHB a Skanska b
SEB a Electrolux b Sandvik b Investor b H&M b
SHB a SHB a Stora a Skanska b SEB a
Skanska b Skanska b Electrolux b SEB a Investor b
Appendix 4 – Results at the 95% confidence level
OMX-HS 250d -0,033 0,146 0,173 0,943 1,039 1,282 3,038 0,232 0,013
Small-HS 250d -0,031 0,160 0,195 0,943 1,056 1,463 4,031 0,166 0,016
Mix-HS 250d -0,012 0,161 0,155 0,953 0,980 1,322 3,980 0,221 -0,013
OMX-HS 125d -0,083 0,131 0,401 0,939 1,076 1,303 2,843 0,303 -0,006
Small-HS 125d -0,049 0,164 0,378 0,941 1,069 1,532 5,149 0,150 0,010
Mix-HS 125d -0,077 0,151 0,388 0,939 1,063 1,453 3,857 0,247 0,000
OMX-EqWMA 50d 0,006 0,139 0,474 0,952 0,983 1,185 2,762 0,352 -0,004
Small-EqWMA 50d -0,001 0,148 0,498 0,947 1,025 1,371 2,986 0,244 0,017
Mix-EqWMA 50d -0,004 0,140 0,467 0,953 0,987 1,326 3,241 0,279 0,003
OMX-EqWMA 125d 0,050 0,107 0,230 0,953 0,953 1,146 2,585 0,255 0,008
Small-EqWMA 125d 0,040 0,129 0,264 0,951 0,996 1,334 4,149 0,151 0,028
Mix-EqWMA 125d 0,043 0,122 0,208 0,957 0,926 1,280 3,698 0,191 -0,015
OMX-EqWMA 250d 0,065 0,187 0,102 0,957 0,965 1,184 2,778 0,204 0,036
Small-EqWMA 250d 0,058 0,172 0,110 0,957 0,946 1,274 3,652 0,184 0,001
Mix-EqWMA 250d 0,076 0,185 0,118 0,966 0,899 1,217 3,500 0,183 -0,013
OMX-ExpWMA 0.94 -0,020 0,161 0,745 0,951 0,990 1,200 2,499 0,397 -0,023
Small-ExpWMA 0.94 -0,025 0,180 0,919 0,950 0,997 1,397 2,698 0,306 -0,034
Mix-ExpWMA 0.94 -0,032 0,165 0,808 0,943 1,069 1,300 3,012 0,335 0,055
OMX-ExpWMA 0.97 0,017 0,086 0,398 0,956 0,953 1,159 2,519 0,367 -0,024
Small-ExpWMA 0.97 0,009 0,094 0,472 0,955 0,966 1,327 2,693 0,266 -0,032
Mix-ExpWMA 0.97 0,006 0,089 0,427 0,955 0,959 1,264 2,987 0,300 -0,016
Appendix 5 – Results at the 99% confidence level
OMX-HS 250d 0,001 0,167 0,228 0,984 1,052 1,173 2,221 0,252 0,024
Small-HS 250d 0,161 0,160 0,316 0,986 1,164 1,268 1,911 0,186 0,096
Mix-HS 250d 0,093 0,198 0,287 0,988 1,050 1,159 2,575 0,200 0,003
OMX-HS 125d -0,088 0,154 0,416 0,979 1,151 1,317 2,300 0,287 0,022
Small-HS 125d 0,027 0,196 0,640 0,974 1,432 1,566 2,230 0,177 0,192
Mix-HS 125d 0,003 0,183 0,571 0,978 1,280 1,489 2,621 0,204 0,123
OMX-EqWMA 50d 0,000 0,142 0,474 0,989 1,014 1,154 1,952 0,352 -0,013
Small-EqWMA 50d -0,053 0,159 0,498 0,977 1,221 1,418 2,111 0,244 -0,063
Mix-EqWMA 50d -0,040 0,144 0,467 0,982 1,181 1,298 2,291 0,279 -0,009
OMX-EqWMA 125d 0,043 0,105 0,230 0,989 1,009 1,161 1,827 0,255 0,024
Small-EqWMA 125d -0,015 0,106 0,264 0,977 1,232 1,447 2,933 0,151 -0,016
Mix-EqWMA 125d 0,005 0,103 0,208 0,980 1,177 1,361 2,614 0,191 0,034
OMX-EqWMA 250d 0,059 0,186 0,102 0,991 0,964 1,211 1,964 0,204 -0,008
Small-EqWMA 250d 0,002 0,148 0,110 0,982 1,159 1,534 2,582 0,184 -0,058
Mix-EqWMA 250d 0,037 0,174 0,118 0,987 1,032 1,244 2,474 0,183 -0,065
OMX-ExpWMA 0.94 -0,026 0,163 0,745 0,986 1,029 1,093 1,767 0,397 -0,024
Small-ExpWMA 0.94 -0,077 0,192 0,919 0,976 1,257 1,409 1,907 0,306 -0,059
Mix-ExpWMA 0.94 -0,068 0,173 0,808 0,980 1,169 1,319 2,129 0,335 -0,047
OMX-ExpWMA 0.97 0,011 0,090 0,398 0,990 0,992 1,096 1,781 0,367 -0,025
Small-ExpWMA 0.97 -0,044 0,108 0,472 0,978 1,169 1,409 1,903 0,266 0,000
Mix-ExpWMA 0.97 -0,031 0,097 0,427 0,982 1,134 1,271 2,111 0,300 -0,039
Appendix 6 – Mean Relative Bias
[Two bar charts: mean relative bias (MRB) by portfolio-method.]
Appendix 7 – Root Mean Squared Relative Bias
[Two bar charts: root mean squared relative bias (RMSRB) by portfolio-method.]
Appendix 8 – Annualized Percentage Volatility
[Two bar charts: annualized percentage volatility (APV) by portfolio-method.]
Appendix 9 – Fraction of Outcomes Covered
[Two bar charts: fraction of outcomes covered (FoOC) by portfolio-method.]
Appendix 10
[Two bar charts: MNtADC by portfolio-method.]
Appendix 13 – Average Multiple of Tail Event to Risk Measure
[Bar chart: average multiple of tail event to risk measure (AMoTEtRM) by portfolio-method.]
Appendix 14 – Maximum Multiple of Tail Event to Risk Measure
[Bar chart: maximum multiple of tail event to risk measure (MMoTEtRM) by portfolio-method.]
Appendix 15 – Correlation between Risk Measure and Absolute
Value of Outcome
[Two bar charts: correlation between risk measure and absolute value of outcome (CbRMaAVoO) by portfolio-method.]
Appendix 16
[Two bar charts: MRBfRM-StDLoC by portfolio-method.]
Appendix 18 – Frequencies of Portfolio vs Normal Distribution
Left-tail