
Introduction

Credit risk can be defined as the risk of loss arising from a borrower's or counterparty's
inability to meet its obligations. The majority of a financial institution's credit risk arises from
its lending activities. Borrowers expect to use future cash flows to pay current debts; however,
it is nearly impossible to ensure that borrowers will have the funds to repay their debts (Young
and Associates, 2009).
For this reason, credit risk management has become an important tool for lenders to
mitigate losses. It does so by assessing the adequacy of both a bank's capital and its loan
loss reserves at any given time, a process that has long been a challenge for
financial institutions (Credit risk management, n.d.).
The high rate of bank failures and the significant credit problems banks faced during the Global
Financial Crisis put credit risk management into the regulatory spotlight (Credit risk
management, n.d.). This caused regulators to demand more transparency: they wanted to
ensure that a bank has thorough knowledge of its customers and their associated
credit risk. The new Basel III regulations have further increased the regulatory burden for
banks.
With a variety of credit modelling techniques available, banks face the dilemma of
deciding which model to choose (Allen and Powell, 2011). The two main approaches to
modelling credit risk in the finance literature are the structural and the reduced form approaches:
Structural approach
Merton (1974) was one of the first to present the idea of structural models for
modelling credit risk. Structural models make assumptions about the dynamics of a firm's
assets, its capital structure, its debt and its shareholders. The structural approach treats the
firm's liabilities as a contingent claim issued against the underlying assets; that is, a liability is
characterized as an option on the firm's assets. It therefore provides an explicit relationship
between default risk and capital structure (Allen and Powell, 2011).
Reduced form approach
Reduced form models are based on the assumption that default is a rare event. A company's
time to default is modelled as a stochastic process whose parameters are estimated by
fitting the model to past bond price data. It is called a reduced form model because it
models default as a statistical event without providing an economic explanation for why a
default occurs. Reduced form models originated to overcome the weaknesses of structural models
(Phelps, 2006).

Merton Model
Merton proposed a model based on the Black-Scholes option pricing theory. The real insight
of the Merton model is that a company's equity can be treated as a call option on its assets,
which allows Black-Scholes option pricing methods to be applied. Merton (1974) assumes that
the default event occurs at the maturity date of the debt if the value of the assets is less than
the value of the debt. To better understand this model we will review the following scenario:

Suppose that at time t a company has assets A_t financed by equity E_t and zero-coupon debt D_t with a
face value of K maturing at time T > t, so that the capital structure is given by the balance-sheet
relationship:

$A_t = E_t + D_t$    (1)

The debt maturity T is chosen such that all debts are mapped into a single zero-coupon bond. In the
situation where A_T > K, the company's debtholders can be paid the full amount K, and the
shareholders' equity still has value A_T − K. On the other hand, the company defaults on its
debt at T if A_T < K, in which case debtholders have the first claim on the residual asset A_T and
shareholders are left with nothing. Therefore, the equity value at time T can be written as:

$E_T = \max(A_T - K,\ 0)$    (2)

The above equation is exactly the payoff of a European call option written on the underlying
asset A_t with strike price K maturing at T. The Black-Scholes option pricing formulas can be
applied if the corresponding modelling assumptions are made. Assume that the asset value
follows a geometric Brownian motion (GBM) process, with risk-neutral dynamics given by the
stochastic differential equation:

$dA_t = r A_t\, dt + \sigma_A A_t\, dW_t$    (3)

where r denotes the continuously compounded risk-free interest rate, W_t is a standard
Brownian motion under the risk-neutral measure, and σ_A is the asset return volatility. Note that
A_t grows at the risk-free rate under the risk-neutral measure and thus has drift r in (3), which
implicitly assumes continuous tradability of the corporate assets. Applying the
Black-Scholes formula for a European call option now gives:

$E_t = A_t N(d_1) - K e^{-r(T-t)} N(d_2)$    (4)

where d_1 and d_2 are given by:

$d_1 = \dfrac{\ln(A_t/K) + (r + \sigma_A^2/2)(T-t)}{\sigma_A \sqrt{T-t}}$    (5)

$d_2 = d_1 - \sigma_A \sqrt{T-t}$    (6)

Taking this framework into consideration, a credit default at time T is triggered by the event
that the shareholders' call option matures out of the money, with risk-neutral probability:

$P(A_T < K) = N(-d_2)$    (7)

This risk-neutral probability can be converted into a real-world probability by extracting the underlying market
price of risk.
Even though debtholders are exposed to default risk, their position can be hedged
completely by purchasing a European put option written on the same underlying asset A_t
with strike price K. Such a put option is worth K − A_T if A_T < K and nothing
if A_T > K. Combining the two positions (debt plus put option) guarantees a
payoff of K for the debtholders at time T, thus forming a risk-free position:

$D_t + P_t = K e^{-r(T-t)}$    (8)

Here, P_t denotes the put option price at time t. It can be determined by applying the Black-Scholes formula for a European put option:

$P_t = K e^{-r(T-t)} N(-d_2) - A_t N(-d_1)$    (9)

Corporate debt is represented by a risky bond and should therefore be valued at a credit spread
(risk premium). Let s denote the continuously compounded credit spread; the bond price D_t
can then be written as:

$D_t = K e^{-(r+s)(T-t)}$    (10)

Combining (8), (9) and (10) together gives a closed-form formula for s:

$s = -\dfrac{1}{T-t} \ln\!\left( N(d_2) + \dfrac{A_t}{K e^{-r(T-t)}}\, N(-d_1) \right)$    (11)

The above equation allows us to solve for the credit spread when the asset level and asset return
volatility (A_t and σ_A) are available for given T, K and r. A common way of extracting A_t and σ_A
uses the following relationship between equity and asset volatility:

$\sigma_E E_t = N(d_1)\, \sigma_A A_t$    (12)

The equity price E_t and its return volatility σ_E are both observed from the equity market. Finally,
(4) and (12) can be solved simultaneously for A_t and σ_A, which are then used in (11) to determine
the credit spread s.

KMV Model
The KMV model was developed by Keaholfer, McQuown and Vasicek in 1974. It is an
extension of Mertons model. Models such as the KMV have only begun to achieve a certain
popularity in the last few years, largely due to the fact that the empirical application of these
types of model is relatively recent (Miankov, Katarna and Koiov, 2014).
As in the Merton model, default happens when the value of assets falls below a certain
level, called the default point. The firm's assets earn a return and trend with a given mean
and volatility over time. Under the KMV model, the value of the firm's assets is assumed to be
log-normally distributed, and option pricing theory can be used to derive both the market value
and the volatility of the assets.
KMV follows a three-step process in order to calculate credit risk:

Step 1: Determine the value of the assets (A) and their volatility (σ_A)

The value of a firm's equity is driven by five variables:
1. E - the market value of the firm's equity
2. σ_E - the volatility of the firm's equity
3. L - the structure of the company's liabilities, defined by the leverage ratio
4. c - the coupon paid on the long-term debt
5. r - the risk-free rate of interest

As previously discussed, the analyst can solve equations (4) and (12) simultaneously for
A_t and σ_A. Of the variables above, the last three are known, and so is the stock
price. KMV then uses an iterative approach to find the asset value and asset volatility,
given knowledge of the equity value E, L, c and r.
Step 2: Calculate the distance to default (DD)
A key concept underlying the KMV approach is the recognition that a firm does not have to default
the moment its asset value falls below the face value of its debt. Default occurs when the
value of the firm's assets falls somewhere between the value of the short-term debt and the
value of the total debt. In other words, default may not occur even if the value of
the assets has fallen below the total debt. This makes sense intuitively, since it is
generally the current cash needs (driven by short-term debt) that cause default. Even though
total liabilities may be greater than total assets, the firm may have enough cash to
keep paying its liabilities as they come due.

The KMV model sets the default point somewhere between the short-term debt (STD) and
the total debt, namely as the short-term debt plus half the value of the long-term debt:

$DPT = STD + \tfrac{1}{2}\, LTD$    (13)

where:
DPT - default point
STD - short-term debt
LTD - long-term debt
The KMV approach then determines the distance to default, which can be defined as the
number of standard deviations the asset value has to lose before reaching the default point (DPT).
It is calculated as follows:

$DD = \dfrac{\ln(A/DPT_T) + (r_f - \sigma_A^2/2)\,T}{\sigma_A \sqrt{T}}$    (14)

where:
A - current market value of assets
DPT_T - default point at time horizon T
r_f - risk-free rate
σ_A - annualized asset volatility
The DD can also be expressed in absolute dollar terms: the numerator E(A) − DPT
represents the dollar value that needs to be lost to hit the default point, and dividing it by the
standard deviation of the asset value (in dollars) gives the number of standard deviations away
from the default point:

$DD = \dfrac{E(A) - DPT}{\sigma_A}$

NB: the greater the distance to default, the lower the probability of default, and
vice versa.
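The following short sketch, under assumed balance-sheet figures, shows how the default point (13) and distance to default (14) would be computed once the asset value and volatility have been estimated.

```python
# A short sketch of KMV steps 1-2 under assumed inputs: take an asset value and volatility
# (for example from solving equations (4) and (12)), then compute the default point (13)
# and the distance to default (14).
import numpy as np

A, sigma_A = 13.7, 0.18    # market value of assets and asset volatility (assumed)
STD, LTD = 6.0, 4.0        # short-term and long-term debt (assumed)
r_f, T = 0.03, 1.0         # risk-free rate and horizon in years (assumed)

DPT = STD + 0.5 * LTD                                                            # equation (13)
DD = (np.log(A / DPT) + (r_f - 0.5 * sigma_A**2) * T) / (sigma_A * np.sqrt(T))   # equation (14)

print(f"Default point = {DPT:.2f}")
print(f"Distance to default = {DD:.2f} standard deviations")
```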

Step 3: Determination of the EDFs


According to Zheng (2005), KMV tries to obtain an empirical value of the EDF (expected default
frequency) rather than the theoretical value implied by the model. This is because, in practice,
the distribution of asset values is difficult to measure, which leads KMV to reject the normal
and log-normal density functions.

The KMV Company studied the relationship between the distance to default and the
probability of default over a historical series of bankruptcies in order to determine the
probability of insolvency. They collected and analysed data on close to 250,000 companies,
of which nearly 4,700 experienced insolvency or default problems. From these data, they
generated a frequency table relating the probability of default to different levels of distance to
default, which is easy to obtain once the market value of the company, its volatility and
the point of insolvency are known. This yields an "empirical" probability.
So, to calculate the probability of default one year ahead for a company whose
distance to default (DD) is seven times the variability of its asset value, we refer to the
historical series of companies in the same situation and identify those that became insolvent in
the following year. Dividing the number of insolvent companies by the total population of
companies with the same DD gives the probability of default.
Strengths of the KMV model

- It provides accurate and timely information from the equity market, allowing for a continuous credit monitoring process that is difficult and expensive to duplicate using traditional credit analysis.
- EDFs calculated on a monthly or daily basis provide a greater degree of vigilance than annual reviews and other traditional credit processes can maintain.
- Prior to Moody's purchase of KMV, changes in EDF tended to anticipate the downgrading of an issuer by rating agencies such as Moody's and S&P by at least one year.

Weaknesses of the KMV model

- The only way to calculate EDFs for private firms is by using some form of comparability analysis based on accounting data.
- The KMV model does not distinguish among different types of long-term bonds according to their seniority, collateral, covenants, or convertibility.

CreditMetrics
CreditMetrics was introduced in 1997 by JP Morgan to evaluate credit risk. The first of its
kind, this framework was put together with the help of Co-sponsors in the name of the
following 5 leading banks; Bank of America, BZW, Deutsche Morgan Grenfell, Swiss Bank
corporation and Union Bank of Switzerland and a Leading credit risk analytics firm, KMV
Corporation.
CreditMetrics is a framework that can quantify credit risk, arising from a wide range of instruments,
for any firm that carries some form of credit risk over the life of its business. Its
main outputs are the standard deviation (a measure of symmetrical dispersion around the mean
portfolio value) and percentile levels (the likelihood that the portfolio value will fall below a
specified value).
JP Morgan identified at the time that the global economy was growing rapidly and that
companies were more willing to take on credit, including more complex credit instruments.
This created the need for a more sophisticated credit framework, which came about in the form of
JP Morgan's CreditMetrics.

[Figure: growth of credit (debt) over time, Reuters (McBride, 2009)]
The graph above, from Reuters, shows the growth of credit (debt) over the specified
period. Looking more closely at the 1990s, there was a clear rise in debt that
created the need to measure the exposure that people and companies were taking on at
the time.
The specific aims of JP Morgan's CreditMetrics are to:
1. Create a benchmark for credit risk measurement, making risk comparable.
2. Promote credit risk transparency and better risk management tools, leading to improved market liquidity.
3. Encourage a regulatory capital framework that more closely reflects economic risk; a model
which encourages regulators to look closely at the credit factors that drive economic
risk rather than at blunt capital requirements.
4. Complement other elements of credit risk management decisions by creating a
systematic approach for measuring portfolio risk which takes into account the
relationship between assets and existing portfolios. It attempts to complement other
risk analyses, not replace them.
CreditMetrics takes a portfolio approach to risk analysis, which enables the model to quantify
the benefits of diversification and the costs of concentration. This is done by (1) restating
and aggregating the credit risk of each obligor across the portfolio so that exposures are
treated consistently regardless of asset class, and (2) taking into account the correlation of
credit quality across obligors. This approach quantifies the concentration risk from an
additional obligor; considers concentrations along most dimensions; creates a benchmark
comparable with market risk; evaluates investment decisions, credit extensions, and risk-mitigating
actions more precisely; sets consistent risk-based credit limits based on
exposure amounts; and makes rational risk-based capital allocations.
The framework uses a mark-to-market approach, which is important as it includes upgrades
and downgrades in the credit quality of the obligor, not just outright defaults, and it captures the
value at risk caused by these movements, not just expected losses.
An important feature of the model is that it makes credit risk comparable with market
risk. As can be seen below, the two distributions are quite different, so the model
looks into the future and estimates values across both market and credit
outcome distributions.

[Figure: comparison of market and credit return distributions (Morgan, 1997)]
The fat tail in the credit return distribution causes one of the problems encountered in
modelling portfolio credit risk: it is neither analytically nor practically easy, because
deriving the fat tails of credit returns is much more complex than working with means
and standard deviations. The other difficulty in modelling credit risk is
calculating correlations, as they must be derived from other sources such as equity prices, or
tabulated at a relatively high level of aggregation.
The CreditMetrics methodology can be seen in the road map below.

[Figure: CreditMetrics road map (Morgan, 1997)]

- Exposure profiles: CreditMetrics incorporates exposures for many types of instruments, including bonds, loans, swaps and forwards; in the case of undrawn instruments such as commitments, the exposure is based on changes in the drawn amount upon default, upgrade or downgrade.
- Volatility of each exposure from defaults, upgrades and downgrades: the likelihood of an obligor defaulting or shifting from one rating to another is estimated using credit spread data and, in default, recovery rates. The values are weighted and combined into a distribution in order to obtain the standard deviation and expected value.
- Correlations: the individual distributions are combined to yield the portfolio results.

Certain modelling features have been incorporated into CreditMetrics to make it more
precise. It (1) takes into account the volatility of recovery rates, (2) evaluates
market-driven exposures at the risk horizon, derived from market rates and volatilities, and (3) simulates
values to estimate the distribution of the credit portfolio.
Estimating portfolio risk faces several challenges:
1. Unexpected loss is the volatility of the portfolio; it is the loss that cannot be predicted
and has to be diversified away. Expected loss, on the other hand, is a function of the
probability of loss and the expected size of loss.
2. The skewness of the credit return distribution is a problem, as it cannot be derived
simply from mean and standard deviation calculations; simulation therefore computes the
distribution by sampling random outcomes across all possibilities.
3. Low default correlations in credit portfolios mean that much of the risk is diversifiable
rather than systematic; capturing this requires a very large number of obligors, and there is a
risk of over-diversifying, which leads to lower returns.
4. The consequences of illiquidity are much higher for credit portfolios than for equity
portfolios, since the probability of default for an individual debt
obligation increases as more debt obligations default.

The CreditMetrics Methodology

The difference that CreditMetrics brought at the time was its ability to measure
value at risk at the risk horizon (it uses a one-year risk horizon) arising from events
such as upgrades, downgrades and defaults. It is a probabilistic model based on the
assumption that rating migrations can occur at all levels, and it computes the
likelihood of each migration.

The detailed road map of the CreditMetrics methodology is shown below.

[Figure: detailed CreditMetrics road map (Morgan, 1997)]
Step 1- Calculating credit exposure amounts

When looking at exposure, the change in value can be attributed to direct changes in the
characteristics of the obligation (i.e. upgrades or downgrades) or to market-related
changes, such as rate movements for swaps and forwards. The model is flexible and can handle
all types of instruments, but it focuses on receivables, bonds, loans, commitments to
lend, financial letters of credit, and swaps and forwards.

Types of exposure:
1. Receivables - these have a horizon shorter than a year and their exposure is the full face value, so the change in value is based on default or no default. Revaluation upon upgrade or downgrade is only applicable for horizons greater than one year.
2. Bonds and loans - the value at risk is the present value of the cash flows; the exposure is to changes in rates, as they move the current value away from par.
3. Commitments - the exposure on loan commitments is linked to the underlying credit rating/quality. CreditMetrics captures this exposure through changes in the quality of the commitment (downgrade, upgrade or default).
4. Financial letters of credit - these are guarantees against default, so the exposure is the whole amount, whether drawn or not.
5. Market-driven instruments - instruments such as swaps, forwards and fixed-rate bonds whose exposure depends on underlying market rates. This exposure changes constantly and unpredictably over the life of the instrument.

Step 2- Calculating volatility of value due to credit quality changes

Step A: Estimating credit quality migrations (transition matrices) - the table below, which
JP Morgan sourced from Standard & Poor's CreditWeek (15 April 1996), shows the likelihood of
debt migrating between the different rating categories, or defaulting, over a one-year horizon
(a small lookup sketch follows the table).

[Table: one-year rating transition matrix, Standard & Poor's CreditWeek (Morgan, 1997)]
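As a small illustration of how such a matrix is used, the sketch below stores the one-year migration probabilities for a BBB-rated obligor and reads off the probabilities of staying at BBB or defaulting. The row values are illustrative approximations of the published matrix, not figures taken from this text.

```python
# A small illustration of using a one-year transition matrix row for a BBB-rated obligor.
# The probabilities are illustrative approximations of the published S&P-based matrix.
ratings = ["AAA", "AA", "A", "BBB", "BB", "B", "CCC", "Default"]
bbb_row = [0.02, 0.33, 5.95, 86.93, 5.30, 1.17, 0.12, 0.18]   # one-year migration probabilities in percent

migration = dict(zip(ratings, bbb_row))
print(f"Probability a BBB obligor stays BBB over one year: {migration['BBB']:.2f}%")
print(f"Probability a BBB obligor defaults over one year:  {migration['Default']:.2f}%")
```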

Step B: Estimating changes in value upon credit quality migration

1. Revaluation in default - recovery rates are assigned to the different seniority classes of debt
based on historical data; the table below shows the percentage of par recovered after a default
in each class.

[Table: recovery rates by seniority class (Morgan, 1997)]

From the table it can be seen that the more senior the debt, the higher the value recovered at
default, although senior debt also comes with the greatest standard deviation of recovery.
2. Revaluation upon upgrades and downgrades (credit spreads) - in order to obtain the
values at the horizon after an upgrade or downgrade, one must derive the forward zero curve
for each rating category, running from the risk horizon to the bond's maturity. These curves are
used to revalue the remaining cash flows, after an upgrade or downgrade, at the risk horizon.

Step C: Compute the distribution of bond values - the next three diagrams illustrate
how to compute the distribution of bond values (a small numerical sketch follows the diagrams):

1. The first shows the probabilities of a BBB-rated bond migrating to each rating category over
the one-year horizon, and revalues the bond for each category should the migration occur.
2. The second computes the mean, standard deviation and variance of value due to the changes in credit
quality, which is useful in describing the distribution for that single exposure.
3. The third illustrates the distribution of the 5-year bond's value at the one-year horizon. Note that this
distribution does not incorporate the uncertainty in recovery rates, nor the uncertainty in
the value of the default state caused by volatility in credit spreads.

[Diagram 1: migration probabilities and revalued bond prices (Morgan, 1997)]

[Diagram 2: mean and standard deviation of horizon value (Morgan, 1997)]

[Diagram 3: distribution of the 5-year bond value at the one-year horizon (Morgan, 1997)]
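The sketch below carries out the Step C calculation with illustrative migration probabilities and revalued bond prices (in the spirit of the BBB bond example in the CreditMetrics technical document): the probability-weighted mean and standard deviation of the horizon value are computed directly from the two vectors.

```python
# A sketch of the Step C calculation with illustrative inputs: combine the migration
# probabilities with the revalued bond prices to obtain the mean and standard deviation of
# the bond's value at the one-year horizon.
import numpy as np

# One-year migration probabilities for a BBB obligor (illustrative, in decimals, default last)
probs  = np.array([0.0002, 0.0033, 0.0595, 0.8693, 0.0530, 0.0117, 0.0012, 0.0018])
# Revalued bond prices at the horizon for each end-of-year rating (illustrative)
values = np.array([109.37, 109.19, 108.66, 107.55, 102.02, 98.10, 83.64, 51.13])

mean_value = probs @ values
std_value = np.sqrt(probs @ (values - mean_value) ** 2)

print(f"Mean horizon value = {mean_value:.2f}")
print(f"Standard deviation of value = {std_value:.2f}")
```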


Step 3- Estimating credit quality correlations and calculating portfolio risk

The rating outcomes of different instruments are not independent of one another, as
they are affected by similar economic factors. CreditMetrics incorporates measures
of correlation (interdependence) between rating outcomes when estimating the joint
likelihood of changes. There are a number of approaches to calculating correlations, because
the available inputs are not ideal (they are complex, controversial, sparse and of
poor quality). CreditMetrics allows for any of the following correlation approaches:
actual rating and default correlations, bond spread correlations, a uniform constant
correlation, and equity price correlations. The risk of default arising from a decline in
asset value is part of an extension of CreditMetrics, in which asset
volatility is also found to drive the joint default probability between firms.

When calculating portfolio correlations, the model maps each credit to a
set of industries and countries that are most likely to affect its performance. This
mapping is desirable because it can also be used for unlisted instruments: as long as they
participate in some industry or country, managers can relate them to it.
In order to obtain the distribution, CreditMetrics uses simulation, as it would be infeasible to
enumerate every portfolio state. The simulation is still complex
and time consuming, as it tracks the mean, variance, skewness, kurtosis and percentile
levels. The graph below shows the simulated distribution of credit returns with its famous fat
tails.

[Figure: simulated distribution of portfolio credit returns (Morgan, 1997)]
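A minimal sketch of such a simulation, reduced for brevity to two obligors and to default/no-default outcomes only, is given below. Correlated standardized asset returns are drawn from a one-factor Gaussian model, compared against default thresholds implied by the obligors' default probabilities, and portfolio losses are accumulated across trials. All inputs (default probabilities, exposures, recoveries and the asset correlation) are assumed.

```python
# A minimal sketch of the simulation approach described above, reduced to two obligors
# and default/no-default outcomes. Correlated standardized asset returns are drawn from a
# one-factor Gaussian model and compared against default thresholds. All inputs are assumed.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n_trials = 200_000

pd_vec   = np.array([0.0018, 0.0120])   # one-year default probabilities (assumed)
exposure = np.array([100.0, 100.0])     # exposures at the horizon (assumed)
recovery = np.array([0.51, 0.32])       # recovery rates in default (assumed)
rho      = 0.30                         # asset return correlation between the two obligors (assumed)

# Default occurs when the standardized asset return falls below this threshold
thresholds = norm.ppf(pd_vec)

# One-factor structure: corr(z_i, z_j) = rho for i != j
common = rng.standard_normal(n_trials)
idio   = rng.standard_normal((n_trials, 2))
z = np.sqrt(rho) * common[:, None] + np.sqrt(1.0 - rho) * idio

defaulted = z < thresholds                                 # shape (n_trials, 2)
loss = (defaulted * exposure * (1.0 - recovery)).sum(axis=1)

print(f"Expected loss: {loss.mean():.2f}")
print(f"99th percentile loss: {np.percentile(loss, 99):.2f}")
```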
Risk measures (model output)

- Standard deviation: likelihoods can now be derived using Diagram 2, where the standard deviation is multiplied by z-values from the normal distribution. What this exercise shows is that, for a given number of standard deviations of portfolio value, risk is understated relative to a normally distributed market portfolio.

- Percentile levels: these can be interpreted as, for example, "the likelihood that the portfolio value falls below the 5th percentile value is 5%". The amount of risk due to credit is obtained by adding up the probabilities of the state values, starting from the lowest, until the required percentile is reached; that bond value subtracted from the mean gives the value at risk due to credit (a short sketch of this calculation follows below).
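The sketch below illustrates the percentile calculation with the same illustrative state probabilities and values used earlier: probabilities are accumulated from the worst outcome upward until the target percentile is reached, and the corresponding value is subtracted from the mean to give the value at risk due to credit.

```python
# A sketch of the percentile-level calculation described above, reusing the illustrative
# state probabilities and horizon values from the earlier Step C sketch.
import numpy as np

probs  = np.array([0.0002, 0.0033, 0.0595, 0.8693, 0.0530, 0.0117, 0.0012, 0.0018])
values = np.array([109.37, 109.19, 108.66, 107.55, 102.02, 98.10, 83.64, 51.13])

def credit_var(probs, values, percentile=0.01):
    """Mean value minus the state value at which cumulative probability first reaches the percentile."""
    order = np.argsort(values)                    # worst outcomes first
    cum = np.cumsum(probs[order])
    cutoff_value = values[order][np.searchsorted(cum, percentile)]
    return probs @ values - cutoff_value

print(f"1% value at risk due to credit: {credit_var(probs, values):.2f}")
```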

CreditMetrics also uses standard deviation and percentile levels to calculate the
marginal risk to a portfolio from an additional credit exposure.

Practical applications
1. Prioritizing risk-reducing actions: credit risk managers must balance their portfolios
based on the percentage risk of an asset and its exposure within the portfolio. Typically,
managers prefer high-percentage-risk assets to have a low absolute size, and vice versa.
2. Risk-based exposure limits: based on the risk-return premise of the portfolio, managers
have a responsibility to limit absolute risk by selecting a balanced portfolio.
3. Risk-based capital allocation: percentile levels can be used to estimate the maximum
expected loss relative to capital. With this approach, a manager can decide how much more
risk to take on.
Weaknesses of CreditMetrics (Crouhy, Galai, & Mark, 2000)

- It relies heavily on transition probabilities based on average historical frequencies of defaults and credit migrations.
- The accuracy of the model rests on two key assumptions (which also apply to the transition probabilities): (1) all firms within the same rating class have the same default rate, and (2) the actual default rate is equal to the historical average default rate.

Conclusion

Credit ratings from firms such as Moody's and S&P have become a key indicator in
determining the risk levels of debt. This can be seen as an advantage, as ratings can guide
investors in limiting their risk and exposure, but it can also be a disadvantage, since heavy
dependence on these models has proved detrimental in the past.

Griffin and Tang (2012) show how credit rating agencies played a part in the global
financial crisis of 2007. The paper attributes this to the downgrades and upgrades of
certain debt instruments that left some banks more exposed than they had accounted
for, and thus unprepared for the defaults of many of those instruments. It
concludes that model inputs should be transparent and consistent, as changes in
credit quality, as seen in the CreditMetrics discussion above, can substantially affect
default rates.

The weaknesses found in CreditMetrics also call for models such as KMV and the Merton
model to be used alongside it in order to obtain more accurate readings.

Reference list
Allen, D. E. and Powell, R. J. (2011). Credit risk measurement methodologies. Retrieved from
http://mssanz.org.au/modsim2011

Article Base (2009). Credit risk management. Retrieved from
http://riskarticles.com/credit-risk-management/

Phelps, B. D. (2006). Reduced form vs. structural models of credit risk: A case study of three
models. Retrieved from http://www.cfapubs.org/doi/pdf/10.2469/dig.v36.n2.4103

Crouhy, M., Galai, D., & Mark, R. (2000). A comparative analysis of current credit risk
models. Journal of Banking & Finance, 24, 59-117.

Griffin, J. M., & Tang, D. Y. (2012). Did subjectivity play a role in CDO credit ratings? The
Journal of Finance.

McBride, B. (2009, December 15). Calculated Risk: Finance & Economics. Retrieved from
http://www.calculatedriskblog.com/2009/12/

Mišanková, M., Kočišová, K. and Kliestik, T. (2014). Comparison of Merton's model, Black and
Cox model and KMV model. Retrieved from
http://www.bm.vgtu.lt/index.php/bm/bm_2014/paper/viewFile/228/489

Morgan, J. P. (1997). Introduction to CreditMetrics: The benchmark for understanding credit
risk. New York: JP Morgan.

SAS (n.d.). Credit risk management. Retrieved from
http://www.sas.com/en_za/insights/risk-fraud/credit-risk-management.html
