
IRC and CRM: Modelling Approaches for New Market Risk Measures

Sascha Wilkens*, Jean-Baptiste Brunac** and Vladimir Chorniy***

Version: 22nd April 2011

Abstract. Against the background of new capital requirements for market risks in a bank's trading book (Basel 2.5), this paper discusses modelling approaches for the Incremental Risk Charge (IRC) and Comprehensive Risk Measure (CRM). Both are Value-at-Risk-type measures over a one-year capital horizon at a 99.9% confidence level, applicable to vanilla credit and credit correlation products, respectively. With industry-wide standards for corresponding internal models missing so far, the article presents selected risk factor models that can serve as a basis to derive simulation-based P&L distributions and hence the associated risk figures. Example calculations and implementation aspects complement the discussion.

JEL classification: G13; G18. Keywords: Incremental Risk; IRC; Comprehensive Risk; CRM; Model; Basel Committee.

* BNP Paribas, Market and Counterparty Risk Analytics, London. Corresponding author. Address: BNP Paribas, Group Risk Management, 10 Harewood Avenue, London NW1 6AA, United Kingdom. E-Mail:


** BNP Paribas, Market and Counterparty Risk Analytics, London. E-Mail:


*** BNP Paribas, Market and Counterparty Risk Analytics, London. E-Mail:

We appreciate helpful input from Fakher Ben Atig, Maria Teresa Cardoso, Laurent Carlier, Peter Dobransky, Andrei Greenberg, Tom Hanney, Vera Minina and Sébastien Rérolle. The views expressed in this paper are those of the authors and do not necessarily reflect the views and policies of BNP Paribas.


Introduction and regulatory background

With banks holding more and more structured and potentially illiquid credit products in their trading books, the Basel Committee has suggested new capital charges to complement existing market (and credit) risk measures and capital requirements based thereon. The proposed framework (Basel Committee on Banking Supervision (2009, 2011)) has evolved since 2007, accompanied and driven by extensive Quantitative Impact Studies (QIS). Targeted shortcomings are, amongst others, differences in the underlying liquidity of trading book positions where market losses can be driven by the potential for large cumulative price moves over several weeks or months, as well as the limitation of 99% one-day or ten-day Value-at-Risk (VaR), which likely does not adequately reflect large default losses that occur less frequently and thus cannot be scaled to longer time horizons.

One essential part of the new charges are internal models, which require validation by home regulators and are to be introduced together with other requirements (e.g., designated stress tests) by the end of 2011. The Incremental Risk Charge (IRC) is supposed to cover market risks from credit rating migrations and defaults for flow products such as bonds and credit default swaps (CDS).1 More complex instruments such as collateralised debt obligations (CDO) and their hedges (including CDS), excluding re-securitisation positions (e.g., CDO²) and Leveraged Super Senior (LSS) tranches, can become subject to a separate market risk model, the Comprehensive Risk Measure (CRM).2 CRM has to cover all price risks (e.g., credit spread volatility, volatility of implied correlations), in addition to credit migrations and defaults as in IRC. Without a designated CRM model, the credit correlation

1 IRC may also include equity positions (not considered here).

2 CDS hedges that are recognised in CRM are not to be included in IRC.


trading is subject to capital requirements based on the standardised measurement method, which is essentially a banking book-type calculation that hardly recognises risk reduction via hedging.3, 4 This method is to be applied in any case to all securitisation positions not covered by CRM.

Both IRC and CRM are measures based on the 99.9% loss quantile at a one-year capital horizon. Rebalancing may be taken into account via shorter liquidity horizons (not shorter than three months), coupled with a constant level of risk concept. This contrasts with VaR, which is a 99% quantile measure and has a short-term focus.

Despite the importance of the topic, with the exception of a few conference presentations (see, for example, Oehler et al. (2010) or Kainth et al. (2010)), hardly any literature is available on the subject of modelling IRC and CRM. This paper thus aims at closing a research gap by discussing potential modelling approaches for both. The development of risk factor models and their correlation structures is complemented by example calculations and the discussion of implementation aspects. It becomes transparent that high confidence levels and long projection horizons, in conjunction with limited backtesting feasibility, leave substantial model risk.

The paper is organised as follows. Section 2 provides modelling concepts for IRC. The subsequent Section 3 extends the risk factor landscape to cover the requirements

3 The method can be very punitive for the complex credit correlation trading business, which features large offsetting positions that are typically well-hedged against credit spread and correlation risks; it would thus render the business hardly feasible from a capital requirement perspective.

4 Notably, the CRM figure is floored according to the standardised measurement method for the credit correlation book (see also the example in Section 4.3).

for CRM. Section 4 is dedicated to the repricing of the products using market scenarios from the IRC and CRM models and the associated derivation of P&L distributions and loss estimates. Practical aspects such as convergence and sensitivity analyses are covered. Section 5 concludes.

IRC: Capturing default and migration risk

The IRC is supposed to capture default and rating migration risk, for which the most straightforward approach is the use of an asset return model. The products concerned are bonds as well as vanilla credit derivatives such as CDS on single names and indices.

In the simplest case of a one-factor Gaussian copula model, the asset return $X_i$ of issuer $i$ is driven by one systematic factor $Y$ common to all issuers and one idiosyncratic factor $\epsilon_i$. The distribution of the risk factors is assumed to be standard normal. The relative weight of the systematic and idiosyncratic factors is governed by a parameter $\rho$:5

$$X_i = \sqrt{\rho}\,Y + \sqrt{1-\rho}\,\epsilon_i \tag{1}$$

This implicitly defines the asset correlation $\rho$ between each pair of issuers $i$ and $j$. The asset return model assumes that a migration or default event occurs whenever the asset return of an issuer falls below a determined threshold over a certain period.

Regarding the economic assumptions of the model, it is worth noting that the effective default and migration correlations implied by the constant asset correlations decrease as the number of intermediate simulation steps increases. This is a known property of a Gaussian copula model (see, for example, Andersen (2006)) and in line with expectations. An asset model is supposed to reflect mainly the medium- and long-term asset values of a firm. On shorter horizons, a default event of one firm, for example, is supposed to be rather an idiosyncratic event, in line with a low overall default correlation at the beginning that increases over time.6

5 The notation in equation (1), as well as in all equations throughout the paper, allows only nonnegative correlations, but can be adapted easily for the more general case should this be necessary.
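As an illustration of how equation-(1)-type asset returns translate into joint rating events, a minimal Monte Carlo sketch follows; the threshold values and the parameter rho are illustrative assumptions, not calibrated figures:

```python
import math
import random

def simulate_asset_returns(n_issuers, rho, rng):
    """Equation (1): X_i = sqrt(rho)*Y + sqrt(1-rho)*eps_i."""
    y = rng.gauss(0.0, 1.0)  # systematic factor, common to all issuers
    return [math.sqrt(rho) * y + math.sqrt(1.0 - rho) * rng.gauss(0.0, 1.0)
            for _ in range(n_issuers)]

def classify(x, default_thr, downgrade_thr):
    """Map an asset return to a rating event via fixed thresholds."""
    if x < default_thr:
        return "default"
    if x < downgrade_thr:
        return "downgrade"
    return "unchanged"

rng = random.Random(42)
# Illustrative thresholds: roughly 0.2% default and 5% downgrade probability.
returns = simulate_asset_returns(100, rho=0.25, rng=rng)
events = [classify(x, default_thr=-2.878, downgrade_thr=-1.645) for x in returns]
```

Repeating this across many trials and several intermediate steps reproduces the correlation decay discussed above.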

In order to increase the granularity of the asset dependence, one can assume the asset return to be driven by a more detailed set of systematic factors. Reflecting industry standard practice, an example would be the decomposition into global ($Y^G$), regional/country-specific ($Y^C_c$) and industry factors ($Y^I_s$):

$$X_i = \beta_{c,s}\,\big(a_c\,\sigma_G\,Y^G + \sigma_{C,c}\,Y^C_c + b_s\,\sigma_G\,Y^G + \sigma_{I,s}\,Y^I_s\big) + \sqrt{1-\beta_{c,s}^2}\;\epsilon_i \tag{2}$$

with
$\beta_{c,s}$ as the weight given to the systematic factors for an issuer from country $c$ and industry $s$;
$a_c$ as the weight of the global factor for an issuer from country $c$;
$b_s$ as the weight of the global factor for an issuer from industry $s$;
$\sigma_G$ as the volatility of the global factor;
$\sigma_{C,c}$ as the volatility of the country factor for an issuer from country $c$;
$\sigma_{I,s}$ as the volatility of the industry factor for an issuer from industry $s$ (in the simplest case, the volatilities can be assumed to be identical for all factors).

6 In case of a portfolio rebalancing at the liquidity horizon, the asset process is de facto reset and default correlations of the underlyings in the then newly introduced positions are expected to start from a low regime again, in line with the model properties.




The distribution of the risk factors is again assumed to be standard normal.7

For the actual calibration, the migration and default thresholds reflecting migration and default probabilities over a certain time horizon need to be determined. They can be implied from the bond and credit derivatives markets or calculated from historical observations. A shortcoming of the first approach is that rating transition probabilities usually cannot be derived from the market. Furthermore, the market-implied probabilities embed risk premia, which would bias the prediction of the actual default frequency. Therefore, historically calculated transition probabilities as provided, for example, by Moody's are to be preferred (for the generation of thresholds from migration matrices see, for example, Israel et al. (2001)).8 Finally, in order to increase accuracy, different transition matrices for different types of issuers (e.g., corporates, sovereigns) can be used.
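The threshold generation from a transition matrix row reduces to applying the inverse normal distribution function to cumulative probabilities; a minimal sketch with illustrative (not actual Moody's) numbers:

```python
from statistics import NormalDist

def migration_thresholds(transition_row):
    """Convert one row of a rating transition matrix (ordered from default
    up to the best end state) into asset-return thresholds: the event
    'end in state k or worse' occurs when X_i < Phi^-1(cumulative prob)."""
    nd = NormalDist()
    thresholds = []
    cum = 0.0
    for p in transition_row[:-1]:  # no threshold needed above the best state
        cum += p
        thresholds.append(nd.inv_cdf(cum))
    return thresholds

# Illustrative one-year row for a mid-grade issuer:
# P(default), P(downgrade), P(stay), P(upgrade).
row = [0.002, 0.048, 0.90, 0.05]
thr = migration_thresholds(row)
```

The simulated asset return of an issuer is then compared against its rating-specific threshold vector to determine the end-of-horizon state.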

For the translation of a rating move into a P&L, one needs to convert the event into a move in credit spreads. This can be accomplished by means of a spread-to-rating map that is based on market means or medians. Figure 1 illustrates such a mapping. In order to reflect the fact that credit spreads for a certain issuer usually differ at least slightly from the rating median and to allow an averaging of the mappings over time,9 relative changes are the preferred choice. Assume, for instance, that an A3-

7 The asset correlation between pairs of issuers $i$ and $j$ can easily be expressed explicitly in analytical form.

8 Furthermore, these default probabilities exhibit a through-the-cycle character, with the advantage of avoiding fluctuations in the charge due to short-term market moves.

9 This averaging allows the use of a through-the-cycle spread-to-rating mapping. See also footnote 8.

rated issuer with a current market credit spread of 82 bps migrated to Ba3 during the simulation. The spread-to-rating map provides a move in median credit spreads from 72 bps to 272 bps, equivalent to an increase by a factor of 3.78. The new credit spread of the actual issuer would then be set to 310 bps, thus accounting for the fact that the concrete name is currently trading above the rating-equivalent median credit spread.10

[Insert Figure 1 about here]
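The relative remapping above reduces to a one-line calculation; a minimal sketch using the numbers from the text:

```python
def remapped_spread(current_spread, old_median, new_median):
    """Apply the relative move in rating medians to the issuer's own spread (bps)."""
    return current_spread * new_median / old_median

# A3 issuer at 82 bps; rating medians move from 72 bps (A3) to 272 bps (Ba3).
new_spread = remapped_spread(82.0, 72.0, 272.0)  # approximately 310 bps
```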

For some products covered by IRC there might be no credit spread available. For those, the P&L due to a rating move can be approximated from a move in yields, for example.


CRM: Integration of additional risk factors


CRM is an IRC-type charge applicable to credit correlation trading products and requires additional risk factors to be captured (Basel Committee on Banking Supervision (2011)):

Cumulative risk from multiple defaults, including the ordering of defaults;

Credit spread risk, including the gamma and cross-gamma effects;

Volatility of implied correlations, including the cross effect between credit spreads and implied correlations;


10 In practice, it is advisable to apply the credit spread multiples to hazard rates, where available, to prevent the possibility of inconsistent (i.e., arbitrageable) credit curves. Within the framework of a piecewise flat hazard rate model, the difference is usually minimal.

Basis risk, including both the basis between the credit spread of an index and those of its constituent single names, and the basis between the implied correlation of an index tranche and that of a tranche of a bespoke portfolio;

Recovery rate volatility, as it relates to the propensity for recovery rates to affect tranche prices.

Furthermore, CRM may reflect benefits from dynamic hedging, but only in combination with the recognition of the risk of hedge slippage and the potential costs of rebalancing the hedges.

Since not all these parameters driving the market risks of correlation products are amenable to simulation, the target should be a framework that is capable of jointly modelling as many factors as practical.11 As a general point, the calculations pertinent to the charges are based on a single horizon (or a series of liquidity horizons); see also Section 4. As such, it is not necessary to model the whole time evolution of parameters exactly, as long as the distributions at the fixed horizons are correct. Therefore, the models can be limited to finite-horizon distributions of the risk factors, rather than stochastic processes, even though, frequently, the motivation for the former does come from the latter.


Cumulative risk from multiple defaults

The discussed asset return model for IRC already deals with the cumulative risk from multiple defaults. If at the liquidity horizon several names have defaulted, the cumulative P&L from these events will be captured. Some further considerations are


11 The remaining risks can be captured via parameter sensitivity analysis or dealt with by establishing P&L or model reserves, for example.

necessary to take into account the ordering of defaults. Sensitive products are Nth-to-default baskets if names have different recovery rates.12


Credit spread risk

The rating transition-based simulation can be carried over from IRC since CRM is required to reflect similar migration and default risks for the credit correlation book as for the credit flow products within IRC. Since the regulatory requirements for CRM explicitly refer to credit spread risk, however, one might argue that purely rating-based spread simulations do not achieve sufficient richness of credit spread variation (volatility).13 Due to the high probability of staying at the same rating, especially for highly-rated names, a large fraction of simulation paths will not have any realised credit spread movement.

This issue can be addressed by adding a separate stochastic credit spread component for each rating so that credit spreads can change even in the absence of migrations.

Figure 2 shows the evolution of credit spreads for the iTraxx and CDX investment grade indices. An example to reflect their dynamics would be a Black-Karasinski process (see, for example, Brigo/Mercurio (2006), pp. 82-86) for the short-term credit spread $s_i(t)$ of issuer $i$:


12 If the ordering of defaults is not available in the simulation, a solution is to take the most conservative scenario by selecting the name with the highest or lowest recovery rate, depending on the protection side, i.e., selling or buying.


13 This point lacks clear guidance from the regulatory framework. Since IRC, for instance, is supposed to complement the VaR as a market risk measure, pure credit spread variation is assumed to be captured sufficiently in the latter and therefore not modelled in IRC. For CRM, this valid double counting argument is countered by an explicit prescription in the modelling requirements.

$$\ln s_i(t) = \ln s_i(0)\,e^{-\kappa_R t} + \theta_R\,\big(1-e^{-\kappa_R t}\big) + \mu_R(t) + \Sigma_R(t)\,Z_i \tag{3}$$

with $\sigma_R$, $\theta_R$ and $\kappa_R$ as rating-dependent volatility, mean reversion level and speed parameters, and $\Sigma_R(t) = \sigma_R\sqrt{(1-e^{-2\kappa_R t})/(2\kappa_R)}$. $\mu_R(t)$ represents a drift adjustment such that $\mathrm{E}[s_i(t)] = s_i(0)$. $Z_i$ is a standard normal random variable. In order to introduce a correlation $\rho_s$ between the credit spreads of different issuers, a decomposition into a systematic and an idiosyncratic factor is advisable:14

$$Z_i = \sqrt{\rho_s}\,Z^{\text{sys}} + \sqrt{1-\rho_s}\,Z_i^{\text{idio}} \tag{4}$$

Given the empirical characteristics of credit spread moves, additional jump components should be introduced. This is particularly important to capture risk at high quantiles. One simple example are upward and downward jumps with different deterministic sizes and Poisson-distributed frequencies. Systematic jump sizes and frequencies can be assumed to be equal across all names, while the parameters for idiosyncratic jumps are rating-specific.

[Insert Figure 2 about here]


If $s_i(t)$ is given by (3) and (4), the total credit spread value $\tilde{s}_i(t)$ at time horizon $t$ can be written as

$$\tilde{s}_i(t) = s_i(t)\;\big(1+J^{\text{up}}_{\text{sys}}\big)^{N^{\text{up}}_{\text{sys}}(t)}\,\big(1+J^{\text{down}}_{\text{sys}}\big)^{N^{\text{down}}_{\text{sys}}(t)}\,\big(1+J^{\text{up}}_{R}\big)^{N^{\text{up}}_{R}(t)}\,\big(1+J^{\text{down}}_{R}\big)^{N^{\text{down}}_{R}(t)} \tag{5}$$

with, for each jump type,

$$N(t) = \max\Big\{n : \textstyle\sum_{i=1}^{n} \tau_i \le t\Big\} \tag{6}$$

The time periods between jumps, $\tau_i$, are exponentially distributed random variables, and $N(t)$ is the associated Poisson-distributed number of jumps before horizon $t$.

14 A possible extension to this approach would be to split the systematic factor further into rating, country and industry subfactors, thus rendering the credit spread a multifactor process as in the case of asset returns.
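The diffusion-plus-jumps construction above can be sketched as follows; all parameter values are illustrative, the function names are ours, and the systematic/idiosyncratic split is collapsed into a single pair of jump processes for brevity:

```python
import math
import random

def poisson(lam, rng):
    """Knuth's method, suitable for small intensities."""
    if lam <= 0.0:
        return 0
    l, k, p = math.exp(-lam), 0, 1.0
    while p > l:
        k += 1
        p *= rng.random()
    return k - 1

def simulate_spread(s0, t, kappa, theta, sigma, j_up, lam_up, j_down, lam_down, rng):
    """Mean-reverting lognormal spread at horizon t, with a drift adjustment
    chosen so that E[s(t)] = s(0) (one possible convention), multiplied by
    Poisson-driven upward and downward jumps."""
    decay = math.exp(-kappa * t)
    var_t = sigma ** 2 * (1.0 - math.exp(-2.0 * kappa * t)) / (2.0 * kappa)
    mean_ln = math.log(s0) * decay + theta * (1.0 - decay)
    mu = math.log(s0) - mean_ln - 0.5 * var_t  # drift adjustment
    s_diff = math.exp(mean_ln + mu + math.sqrt(var_t) * rng.gauss(0.0, 1.0))
    n_up = poisson(lam_up * t, rng)
    n_down = poisson(lam_down * t, rng)
    return s_diff * (1.0 + j_up) ** n_up * (1.0 + j_down) ** n_down

rng = random.Random(7)
# Illustrative parameters (not calibrated); spread in bps.
s1 = simulate_spread(100.0, 1.0, kappa=0.5, theta=math.log(120.0), sigma=0.6,
                     j_up=0.5, lam_up=0.3, j_down=-0.3, lam_down=0.3, rng=rng)
```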

It should be noted that the interaction between rating migrations and credit spread dynamics is important since both affect the same simulated risk factor. Firstly, one might argue that the rating migration and the credit spread process should be connected. While the former is supposed to model medium- to long-term changes in economic circumstances, however, the latter should reflect short-term, market-driven expectations. It is unclear whether there should be any connection between the two and, if yes, how it should be captured or calibrated. Therefore, a suitable assumption could be independence between both processes.15 Secondly, in case of very few (maybe only one) discrete time points of the simulation, and if the parameters for the credit spread dynamics are markedly different across ratings, then effectively the sequence of applying stochastic credit spread moves and rating transitions defines the overall spread change. In the absence of further insights, a randomised allocation of the sequence between rating migrations and credit spread moves would be a suitable choice.

Gamma and cross-gamma effects are best captured if the P&L distributions are generated by means of fully revaluing the products (see Section 4.2.1). Otherwise, second-order sensitivities could be employed in a Taylor expansion; at longer horizons and a 99.9% quantile, however, this approach has obvious limitations.


15 This assumption is also in line with the fact that credit spread risk is already captured in the VaR. The introduction of dependence between ratings and credit spreads would aggravate the double counting effect.


Basis risk

The first basis risk to be captured relates to the differential between the market price of a CDS index (such as iTraxx and CDX) and the theoretical value from its reconstitution via single-name CDS (see, for example, O'Kane (2008), pp. 192-195). Define the theoretical index credit spread $s^{\text{theo}}$ as the mean credit spread of the index constituents, weighted by their risky levels16 and index contributions. With $s^{\text{mkt}}$ as the index credit spread from the market, let

$$B = s^{\text{mkt}} - s^{\text{theo}} \tag{7}$$

denote the index basis. For example, if the market index credit spread is lower than the one reconstructed from its constituents (the predominant case in practice), $B$ is negative. The historical evolution of $B$ for iTraxx and CDX as the main credit indices is depicted in Figure 3. Time-series analysis suggests evidence of mean reversion in $B$, which leads to a proposed basis model with mean-reverting Ornstein-Uhlenbeck dynamics:

$$dB_{m,j}(t) = \kappa_{m,j}\,\big[\theta_{m,j} - B_{m,j}(t)\big]\,dt + \sigma_{m,j}\,\sqrt{dt}\,Z_{m,j} \tag{8}$$

for each maturity $m$ and index $j$, with $\theta_{m,j}$, $\kappa_{m,j}$ and $\sigma_{m,j}$ representing the long-term mean, mean reversion speed and volatility, respectively. $Z_{m,j}$ stands for a standard normal random variable.


16 The risky level is defined through the price of the premium leg with unit spread.

[Insert Figure 3 about here]

In practice, one can observe correlation between the bases and the market credit spreads. To capture these dependencies one can introduce a decomposition of the stochastic term such that

$$Z_{m,j} = \gamma_{m,j}\,Z^{\text{sys}} + \sqrt{1-\gamma_{m,j}^2}\;\tilde{Z}_{m,j} \tag{9}$$

where $Z^{\text{sys}}$ is the systematic factor of the credit spread moves, $\gamma_{m,j}$ stands for the correlation between the basis of index $j$ and maturity $m$ and the systematic credit spread factor, and $\tilde{Z}_{m,j}$ is an independent standard normal random variable. One can use a standard approach such as Maximum Likelihood estimation for the historical calibration of (8) and (9).17
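Because the OU process is Gaussian, equation (8) admits an exact one-step simulation at any horizon; a sketch with illustrative parameters (basis in bps):

```python
import math
import random

def simulate_basis(b0, t, theta, kappa, sigma, rng):
    """Exact one-step simulation of OU dynamics: B(t) is Gaussian with
    mean-reverting mean and a variance bounded by the stationary level."""
    decay = math.exp(-kappa * t)
    mean = theta + (b0 - theta) * decay
    std = sigma * math.sqrt((1.0 - math.exp(-2.0 * kappa * t)) / (2.0 * kappa))
    return mean + std * rng.gauss(0.0, 1.0)

rng = random.Random(1)
# Illustrative parameters: basis starting at -10 bps, mean-reverting to -4 bps.
paths = [simulate_basis(-10.0, 1.0, theta=-4.0, kappa=2.0, sigma=5.0, rng=rng)
         for _ in range(20000)]
avg = sum(paths) / len(paths)
```

The sample mean converges to theta + (b0 - theta) * exp(-kappa * t), i.e. roughly -4.8 bps here, illustrating the pull towards the long-term mean.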

The second basis risk prescribed by the regulatory framework for CRM relates to the difference between the implied correlation of bespoke and standard CDO tranches (Ahluwalia et al. (2004)). Implied correlation refers to the empirical phenomenon that otherwise identical CDO tranches with different attachment and detachment levels cannot be priced consistently with the standard one-factor Gaussian copula model (Li (2000)). Similar to the implied volatility in the option pricing context, CDO tranches empirically exhibit a correlation skew. For ease of handling, market prices for CDO tranches are usually quoted via base correlations (McGinty et al. (2004)).


17 It is worth noting that $\gamma_{m,j}$ reflects the instantaneous correlation between the systematic factors of a lognormal process and a mean-reverting process, which differs from the realised correlation between the two (depending on the length of the return window $\Delta t$). See Schwartz/Smith (2000) for details.

Given that historical values for this basis can hardly be derived from reliable time series, and since the correlation basis is a model-dependent, second-order implied value (whereas the market implied correlation is a first-order implied value), the recommendation is not to include it as a risk factor, but rather to address it with a P&L or model reserve.


Implied correlations

Modelling implied correlations is a complex task for a number of reasons. Firstly, for each index-maturity combination, there is not a single number, as for the CDS-index basis, but a whole curve across different strikes. Secondly, there are constraints not only on the values (which must remain within [0,1]), but also on the slope and curvature of a given base correlation curve if inconsistencies in pricing are to be avoided. Furthermore, since base correlations are usually calibrated from standard tranche spreads, given index credit spread and recovery information, the evolution of the latter imposes not easily identifiable constraints on the dynamics of the former. Thirdly, there is abundant empirical evidence of a strong dependence between both the correlation shape parameters and their changes, over time, across maturities of the same index and even across indices. Finally, it is also worth considering the relationship between historical asset return correlations (cf. Section 2) and the implied correlations from the CDO tranche market, reflected in the base correlation curves. Although it is obvious that they do not reflect the same thing, an ultimate link between historical and implied correlations is expected.18 It is not entirely clear, however, how to introduce such a link between these two correlation


18 Otherwise, if completely decoupled, one can imagine situations of tranches consistently priced with high implied default correlation, while defaults are generated with very weakly correlated asset returns.

measures.19 From a technical and calibration point of view, a link between implied correlations and credit spreads is the most straightforward approach, which will be relied upon in the following.20

Figure 4 shows the time series for base correlations at different standard strikes for two of the most liquid indices. The similarity in the moves across different points on the capital structure is striking, especially for iTraxx, where it appears that almost all variation can be explained by the level. Even more highly linked are base correlations across maturities of the same index, which prompts an analysis of the whole base correlation surfaces, one per liquid index.

[Insert Figure 4 about here]

The time series patterns suggest that only a limited number of factors govern the distribution of the base correlation moves. One way of reflecting this would be the parameterisation of the surfaces comparable to yield curves, whose shapes are usually expressed by factors such as level, slope and curvature. While this approach


19 Certain empirical evidence suggests dependence between extreme events in historical asset correlation, implied correlation and credit spreads, but the exact nature is not clear.


20 Some examples in the literature allow for a unified approach to default-driving and pricing correlations, without the separation into an exogenous asset return correlation structure and simulated implied correlations. For example, a combination of a random correlation and a random recovery model (see, for instance, Andersen/Sidenius (2004)) can impose the dependence structure and either take the existing credit spread (and basis) simulations as inputs or even modify the asset return generation to use the same correlation structure as for pricing. The disadvantages of this approach are the problems with fitting market data for tranche prices and the decoupling from the CDO pricing.


has the advantage of attributing an economic meaning to each factor and thus facilitates the associated risk management, it proves too inflexible to reflect the complex moves of the base correlation surfaces and the corresponding dependence structures. Principal component analysis (PCA) on the moves in excess of the mean can serve as a purely numerical alternative. It can generally be based on:

each curve corresponding to each tenor of each index;

each surface of each index;

all surfaces together.

The first PCA approach renders it difficult to ensure correct correlation between the moves of each tenor and each index. The third approach, even though the closest representation of the empirical data structure, would render any analysis difficult since all the parameters would be inferred from the PCA. The second approach is the most favourable since it allows keeping control of the correlation between the indices through adequate parameterisation.

For the base correlation moves across maturities and strikes of the same index, one will usually concentrate on the first principal components.21 In practice, the distribution of the factor scores tends to be approximately symmetric, but the peak is too high to approximate with a Gaussian distribution. The Normal Inverse Gaussian (NIG) distribution can be used instead, with a density given by

$$f(x) = \frac{\alpha\,\delta}{\pi}\;\frac{K_1\!\big(\alpha\sqrt{\delta^2+(x-\mu)^2}\big)}{\sqrt{\delta^2+(x-\mu)^2}}\;e^{\delta\gamma+\beta(x-\mu)} \tag{10}$$

where $K_1$ is the modified Bessel function of the third kind and $\gamma=\sqrt{\alpha^2-\beta^2}$. The NIG allows for fatter tails than the Gaussian and is also an infinitely divisible distribution. The latter is important for inferring properties of the distribution of shifts at a longer horizon (e.g., 1 year) from the distribution of shifts at a shorter horizon, where more data is available for calibration. For an NIG distribution to be symmetric about zero one must have $\beta=\mu=0$. In this case, the mean and skewness vanish, while the variance and excess kurtosis are given, respectively, by

$$\mathrm{Var} = \frac{\delta}{\alpha}, \qquad \mathrm{ExKurt} = \frac{3}{\delta\,\alpha} \tag{11}$$

21 Empirically, it was found that in the case of iTraxx and CDX, this choice explains more than 95% of the variance for either index.

The implementation of the model22 can be achieved as follows. For each index $j$, pre-compute the principal components of historical base correlation surface moves and store them in a matrix $P_j$.23 Then simulate independent NIG random variables

$$X_k \sim \mathrm{NIG}(\alpha_k, \beta_k = 0, \mu_k = 0, \delta_k) \tag{12}$$

with $\alpha_k$ and $\delta_k$ calibrated to the historical time series of the scores for base correlation moves by matching the first two non-vanishing moments. Each set $\{X_k\}$ represents a realisation of the scores associated with the principal components of base correlation moves for index $j$. Then generate the corresponding correlation moves for all strikes and tenors of the index,

$$\Delta c_j = P_j\,X \tag{13}$$

22 See, for example, Kalemanova/Werner (2006) for the main properties of the NIG and an efficient implementation.


23 In practice, a matrix $P_j$ representing 5 strikes each across 4 tenors is a suitable choice.


which can be converted into correlations by adding the mean and using the initial values.24, 25 The fitting of the NIG for each PCA factor of each base correlation curve using (11) is straightforward. On a technical note, within the NIG framework, the variance scales linearly with the horizon (see, for example, Albrecher et al. (2006)); empirical analysis of base correlation shifts for different return windows $\Delta t$, however, shows evidence that volatility actually increases much more slowly with time, comparable to a mean-reverting process, and thus suggests a volatility fitting on several return sets for different $\Delta t$ across strikes and tenors.
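The moment matching for the symmetric NIG inverts equation (11) in closed form; a minimal sketch (the variance and excess kurtosis below are illustrative sample figures):

```python
import math

def fit_symmetric_nig(variance, excess_kurtosis):
    """Moment matching for a symmetric NIG (beta = mu = 0):
    Var = delta/alpha and excess kurtosis = 3/(delta*alpha)."""
    prod = 3.0 / excess_kurtosis  # equals delta * alpha
    delta = math.sqrt(prod * variance)
    alpha = delta / variance
    return alpha, delta

alpha, delta = fit_symmetric_nig(variance=4.0, excess_kurtosis=1.5)
```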

Regarding the dependency between base correlation moves of different indices and credit spreads, let $F_k^{-1}$ denote the inverse of the NIG distribution function and $\Phi$ the standard normal distribution function. Then one can sample the NIG factors according to

$$x_k = F_k^{-1}\Big(\Phi\big(\lambda\,Z^{\text{sys}} + \eta\,Z^{\text{idx}} + \sqrt{1-\lambda^2-\eta^2}\;\varepsilon_k\big)\Big) \tag{14}$$

where $x_k$ denotes a realisation of variable $X_k$, and $Z^{\text{sys}}$, $Z^{\text{idx}}$ and $\varepsilon_k$ are independent standard normal random variables. $Z^{\text{sys}}$ is the systematic factor of the credit spread moves, $Z^{\text{idx}}$ stands for an additional systematic factor to create correlation between base correlation moves across different indices (iTraxx and CDX) and $\varepsilon_k$ denotes an idiosyncratic factor. $\lambda$ is used to link the base correlation moves to those of the credit spreads and $\eta$ is used to reproduce the correlation across different indices.


24 Recall that the simulated moves relate to changes in excess of the mean, so one needs to add the historical averages prior to returning the values.


25 If the described procedure generates a base correlation value outside the [0,1] bounds, a corresponding cap or floor can be applied in the pricing step (see Section 4).

In the context of introducing correlation amongst base correlation moves of several indices, extra care is required. Within the set $\{\varepsilon_k\}$, all random variables must be uncorrelated in order to reconstruct the correlation moves within each index in accordance with the principal components framework. If any correlation is introduced within this set of random variables, the variance and the correlation between the strikes of the same index will not be reproduced correctly. Using a one-factor approach to correlate the moves refers to the first principal components only, i.e., in the case of two indices, one would correlate the first scores of either index via $Z^{\text{idx}}$.26
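A sketch of the score simulation and the reconstruction in equation (13); the symmetric NIG is sampled via its normal variance-mixture representation with an inverse-Gaussian subordinator, and the two-factor loadings are illustrative placeholders:

```python
import math
import random

def sample_symmetric_nig(alpha, delta, rng):
    """Normal variance-mixture representation: X = sqrt(Z)*N with
    Z ~ InverseGaussian(mean=delta/alpha, shape=delta^2), giving
    Var[X] = E[Z] = delta/alpha."""
    mu, lam = delta / alpha, delta ** 2
    # Michael-Schucany-Haas inverse-Gaussian sampler
    v = rng.gauss(0.0, 1.0) ** 2
    y = mu + mu * mu * v / (2.0 * lam) - mu / (2.0 * lam) * math.sqrt(
        4.0 * mu * lam * v + mu * mu * v * v)
    z = y if rng.random() <= mu / (mu + y) else mu * mu / y
    return math.sqrt(z) * rng.gauss(0.0, 1.0)

def correlation_moves(components, scores):
    """Equation (13): map factor scores back to base correlation moves;
    components[s][k] is the loading of factor k on strike/tenor point s."""
    return [sum(row[k] * scores[k] for k in range(len(scores)))
            for row in components]

rng = random.Random(3)
# Illustrative two-factor loadings for three strike/tenor points.
P = [[0.6, 0.2], [0.5, -0.1], [0.4, 0.3]]
scores = [sample_symmetric_nig(alpha=2.0, delta=0.5, rng=rng) for _ in range(2)]
moves = correlation_moves(P, scores)
```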

In the context of implied correlations, it is worth pointing out that the market for Nth-to-default (NtD) baskets is much less liquid than the CDO market. In particular, standardisation was never realised. Even though some typical baskets are quoted occasionally by major dealers, it has not reached the volumes and intensity of the index tranche market. As a result, market data for implied correlations of baskets is scarce, and therefore, building and calibrating an accurate model describing their evolution is much more challenging.


Recovery rates

In order to model stochastic recovery rates, which are restricted to the interval [0,1], one can start with the following building block:

$$R = \frac{1}{1+e^{-(\mu_R+\sigma_R X)}} \tag{15}$$

where $X$ is a standard Gaussian random variable and $\sigma_R > 0$. This approach (logit-normal) has been suggested before in the literature (see, for example, Schönbucher (2001)). One can split the Gaussian variable further into a systematic and idiosyncratic part with correlation $\rho_R$, and also remap the values to another, potentially smaller, interval $[R_{\min}, R_{\max}]$ to obtain the recovery rate of issuer $i$:

$$R_i = R_{\min} + (R_{\max}-R_{\min})\,\frac{1}{1+e^{-\left(\mu_R+\sigma_R\left(\sqrt{\rho_R}\,Z^{\text{sys}}+\sqrt{1-\rho_R}\,Z_i\right)\right)}} \tag{16}$$

with $Z^{\text{sys}}$ and $Z_i$ as standard Gaussian random variables.

26 Empirically, it was found that this approach is suitable, but whether a correlation amongst more than one component is necessary should be cross-checked at least periodically in practice.

With respect to the calibration of (16), one needs to recognise that, empirically, recovery rates used for pricing purposes (e.g., in CDS) usually do not exhibit a regular marking for non-distressed names, but are rather set at standard levels such as 40% for senior debt and 20% for subordinated debt. The same applies to instruments such as recovery rate swaps. With the derivation of market-implied recovery rates being challenging, even setting the mean equal to the expected market recovery rate leaves the model in (16) substantially underfitted. As an alternative, one possible proxy for the calibration of a distribution of recovery rates are actually observed values in the past. Figure 5 shows an example of the historical distribution of recovery rates on corporate defaults, mostly in the US, together with the fitted distribution according to (15). In this case, one finds a fairly wide distribution of realised recovery rates across the spectrum, with a mean of around 50%.

[Insert Figure 5 about here]

Another challenge relates to the link between recovery rates and the other model components. Based on the discussion above, a direct link to credit spreads is not obvious: even though recovery rates tend to be monitored more closely by the market as credit spreads increase, a simple directional link between the two is not encountered in practice. A conceptually more appealing link exists between the recovery rates and the systematic driver of the ratings evolution, which reflects the overall credit state of the world. One possibility to achieve this connection is to replace the systematic factor in (16) with an aggregation of the global factors in (2).27

Regarding the actual use of the simulated recovery rates in the deal repricing (see Section 4), one can either employ them only in the default case or, more conservatively, already in case of a substantial rating deterioration when markets tend to move away from standard recovery rate assumptions.


Benefits from dynamic hedging

The regulatory guidelines specify that CRM needs to reflect benefits from dynamic hedging, the risk of hedge slippage and the potential costs of rebalancing such hedges (Basel Committee on Banking Supervision (2009), p. 21). While the recognition of a dynamic hedging strategy might help to reduce the CRM figure, the justification and proof that this strategy will actually be carried out in practice are challenging. Furthermore, the calculation requirements for the P&L generation as outlined in Section 4 pose further restrictions. It is also unclear how a dynamic hedging strategy should be aligned with portfolio resets (rebalancing) for liquidity horizons shorter than one year. Therefore, the assumption of a static portfolio over the liquidity horizon(s) is one option, although it is likely very conservative.


27 In such a setup, \rho changes its characteristic and acts as the correlation between the asset and the recovery process.

4 P&L generation and distributions


In order to generate P&L distributions and derive the associated loss quantiles, joint realisations of the risk factors are to be applied to the deals in the IRC and CRM coverage.

One or more liquidity horizons, such as three months (the minimum according to the rulebook), can be defined within the capital horizon. According to Basel Committee on Banking Supervision (2009), the IRC model should be based on "the assumption of a constant level of risk over the one-year capital horizon. This [...] implies that a bank rebalances, or rolls over, its trading positions over the one-year capital horizon in a manner that maintains the initial risk level [...]". Correspondingly, the CRM model is subject to the same framework (Basel Committee on Banking Supervision (2011)). The interpretation of this guideline is ambiguous. With the general approach of rebalancing positions at the beginning of the liquidity horizon, it is not obvious, for example, whether instruments maturing or defaulting before the end of the liquidity horizon are to be replaced as well.

4.2 Implementation aspects

4.2.1 Full repricing vs. approximation

In a short-term market risk projection (e.g., one to ten days), first- and second-order sensitivities are usually suitable to calculate P&Ls by means of Taylor expansion without fully repricing the trades. In the context of IRC/CRM, however, the simulation over liquidity horizons up to one year implies extreme risk factor moves, which prevents the use of sensitivities in most circumstances.


Since full revaluation is very costly in terms of computational effort, any acceptable approximation should be aimed for.28 In some circumstances, it might not only be costly to reprice deals, but a full repricing might not even be easily accessible for all kinds of trades. In the case of IRC, one might apply a fallback pricing in the form of an issuer risk approximation. This approach is based on the assumption that the bank is holding par bonds issued by each issuer (issuer risk). The loss due to migration is assumed to be equivalent to the change in the bond's value derived from the change in credit spread produced by a downgrade or an upgrade; similarly for the default case. The value of such a par bond with unit notional, assuming continuous fixed coupon payments and continuous compounding, reads as follows:

PV = (c + R\lambda)\, \frac{1 - e^{-(r+\lambda)T}}{r+\lambda} + e^{-(r+\lambda)T}, \qquad \lambda = \frac{s}{1-R},   (17)

where c denotes the fixed coupon paid, T the maturity, s the credit spread, r the interest rate and R the recovery rate.

Several assumptions on the recovery rate can be made: it can be applied to (a) the bond's PV or (b) the bond's notional. Assumption (a) represents a proportional recovery rate model, while (b) reflects the assumption of a fixed recovery rate, as usually applied in the pricing of CDS. For a par bond, the two assumptions result in the same loss on default. However, in the case of distressed bonds or bonds trading far above par, the difference can be significant.
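Under the fixed-recovery assumption (b) and a flat hazard rate obtained from the credit triangle, one possible reconstruction of such a par bond valuation and of the resulting migration P&L reads as follows (a sketch, not the paper's exact pricing function):

```python
import math

def par_bond_pv(c, T, r, s, R):
    """PV of a unit-notional bond with continuous coupon c, maturity T,
    flat interest rate r, credit spread s and recovery rate R applied to
    the notional; the hazard rate follows the credit triangle
    lam = s / (1 - R).  A coupon of c = r + s prices the bond at par."""
    lam = s / (1.0 - R)
    d = r + lam
    annuity = (1.0 - math.exp(-d * T)) / d   # risky annuity, continuous
    return (c + R * lam) * annuity + math.exp(-d * T)

def issuer_risk_pnl(notional, c, T, r, s_old, s_new, R):
    """Issuer-risk approximation: migration P&L as the change in the
    par bond's value when the spread moves from s_old to s_new."""
    return notional * (par_bond_pv(c, T, r, s_new, R)
                       - par_bond_pv(c, T, r, s_old, R))
```

A spread widening (downgrade) then produces a negative P&L on a long bond position, a narrowing (upgrade) a positive one; the default case corresponds to replacing the bond value with R.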

Another potential simplification concerns the repricing over time: With a projection horizon of up to one year, each deal is to be repriced at a future time point. The discounted value in comparison with the initial value, in conjunction with any intermediate cash flows, then leads to the P&L. Since this moving forward in time might not be straightforward to implement on a bank-wide scale (for example, forward curves for all market parameters and the associated pricing functions with cash flow recognition must be available), a simplification consists in a repricing as of today, i.e., without an ageing of the deals. However, the conservatism of this approach needs to be assessed carefully.29

28 In the case of CRM, for example, one might speed up the pricing functions for CDOs by the simplifying assumption of flat hazard rates for each name.


4.2.2 Pre-computation vs. ad-hoc repricing

In order to speed up the actual simulation, it is preferable to pre-compute P&Ls if these are used several times. In the case of IRC, where rating migrations and defaults are simulated, the number of possible outcomes is limited. For example, if K ratings are defined, one requires only K + 1 different P&Ls (i.e., including default) that can be pre-computed for each issuer.

Unfortunately, in the case of CRM, numerous risk factors are simulated in addition to default and migration, which renders the pre-calculation of P&Ls difficult, even if multi-dimensional grids are employed. With the use of sensitivities being no alternative, full repricing seems to be the only practical solution.
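The look-up table approach for IRC can be sketched as follows; the rating scale and the `reprice` callback are placeholders for a bank's own conventions and pricing function:

```python
import numpy as np

RATINGS = ["AAA", "AA", "A", "BBB", "BB", "B", "CCC"]  # illustrative scale

def precompute_migration_pnls(issuers, reprice):
    """For each issuer, pre-compute the P&L of migrating into each of
    the K ratings plus default (K + 1 outcomes), so that the IRC Monte
    Carlo loop reduces to simple table look-ups.  reprice(issuer, state)
    is a placeholder returning the P&L conditional on that end state."""
    states = RATINGS + ["D"]  # default as the (K + 1)-th state
    return {iss: np.array([reprice(iss, st) for st in states])
            for iss in issuers}
```

In the simulation loop, a path then only needs the simulated end state of each issuer to index into the pre-computed array, avoiding any repricing call.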


4.2.3 Multiple liquidity horizons

In case IRC or CRM is calculated assuming liquidity horizons shorter than the capital horizon, some amendments of the simulation algorithm are necessary.


29 Since a bank is usually short credit risk, the corresponding risk premia are realised over time. In this case, ignoring the timing aspect in the repricing will tend to be conservative, i.e., the change in value of the IRC/CRM perimeter will be overestimated.

Same liquidity horizons for all issuers. Here, it is assumed that there are n liquidity horizons per year, identical for all issuers. Each simulation path then consists of n steps. At each step, the P&L is determined for each issuer; the portfolio is then assumed to be rebalanced (see Section 4.1). The process is repeated until the capital horizon is reached.

Different liquidity horizons per issuer. In this case, the highest common factor of liquidity horizons across issuers is chosen to perform the simulation. At each step, all risk factor outcomes are simulated, but the P&L for each issuer is determined only when the time steps are equivalent to the corresponding liquidity horizon. For example, if the simulation is performed using monthly steps and the liquidity horizon for a particular issuer is three months, the P&L for this issuer is defined only after three monthly steps have been simulated.

Several liquidity horizons for one issuer. In this case, products or even deals associated with one issuer can have different liquidity horizons. The highest common factor of liquidity horizons across issuers and products/deals is chosen to perform the simulation. The systematic factors are the same for all entities (i.e., products, deals) of the same issuer. Idiosyncratic factors remain the same across all entities of an issuer until they reach their respective liquidity horizons. As before, at the liquidity horizon of an entity its P&L due to risk factor moves is determined and the position is rebalanced. The idiosyncratic factors for this entity are then simulated separately going forward. This process reflects the assumption that, at the liquidity horizon, (a) the position is unwound, (b) the P&L recorded and (c) a position with a (potentially) new issuer is entered into so that the credit quality of the portfolio is reset to the initial level. In case of a default event at a certain liquidity horizon, the corresponding entities of the same issuer but with a longer liquidity horizon are subject to default as well, although the loss will be realised at a later date, at the respective liquidity horizon.30
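The stepping logic for issuer-specific liquidity horizons on a common grid can be sketched as follows; `step_pnl` is a placeholder for the per-horizon P&L computation (including migration/default draws), and a monthly grid is assumed to be the highest common factor of all horizons:

```python
def simulate_one_path(issuers, horizon_months, step_pnl, capital_months=12):
    """One Monte Carlo path over the one-year capital horizon with
    issuer-specific liquidity horizons.  Issuer i's P&L is realised only
    at multiples of its own liquidity horizon (in months), after which
    the position is rebalanced to the initial risk level; step_pnl(i, k)
    returns the P&L of issuer i over the horizon ending at month k."""
    total = 0.0
    for k in range(1, capital_months + 1):
        for i in issuers:
            if k % horizon_months[i] == 0:   # liquidity horizon reached
                total += step_pnl(i, k)      # realise P&L, then rebalance
    return total
```

An issuer with a three-month horizon is thus repriced and rebalanced four times per year, while an issuer with a one-year horizon is repriced only once, at the capital horizon.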



4.3 Example calculations

For illustration purposes, P&L distributions under IRC and CRM and the corresponding loss quantiles are calculated for the following portfolios, assuming a one-year liquidity horizon:

IRC
o iTraxx Europe, five-year long protection on all 125 constituents, ten million EUR;
o iTraxx Europe, five-year short protection on all 125 constituents, ten million EUR;
o iTraxx Europe, five-year long protection on the first 63 constituents and short protection on the remaining 62 constituents, ten million EUR.31

CRM
o Five-year long protection on a bespoke 3-7% mezzanine tranche,32 ten million EUR;
o Five-year short protection on a bespoke 3-7% mezzanine tranche, ten million EUR;


30 Notably, the default at a certain liquidity horizon does not trigger a default for the entities with a shorter liquidity horizon. The rationale behind this is that the products/deals of an issuer will have been replaced by those of an equivalent issuer reflecting the original rating and according to the constant-level-of-risk assumption.

31 Series 9 of the iTraxx is used for this analysis.

32 The constituents are those of the 3-7% mezzanine tranche of the Global1 portfolio from Totem, which comprises North American, European and Japanese names.

o Five-year long protection on a bespoke 3-7% mezzanine tranche, ten million EUR, with additional credit spread hedge;
o Five-year long protection on a bespoke 3-7% mezzanine tranche, ten million EUR, with additional credit spread and base correlation hedge.

Selected quantiles of the P&L distributions including the IRC/CRM figures are provided in Table 1, assuming a typical model calibration based on ISDA working group discussions at the end of 2010. The results demonstrate that, in the IRC case, the short protection position in CDS results in a much higher 99.9% market loss projection than the corresponding long position. This is in line with expectations given the limited loss potential in the second case (rating upgrade and resulting credit spread narrowing of the names). The long/short position results in an IRC figure between the long-only and short-only case and is mainly driven by the idiosyncratic risk of the portfolio constituents. The comparison of the quantiles shows that the P&L distributions have a heavier loss tail than a normal distribution in all cases.

In the case of CRM, hedging a long protection position in the tranche with regard to credit spread moves shows a substantial reduction of the loss estimate at 90% and 99%, but much less so at 99.9%. In the latter case, moves in risk factors other than credit spread drive the CRM figure. This becomes transparent from the case where base correlation is hedged additionally. Notably, the loss distributions show heavier loss tails in the hedged cases compared to the outright long or short protection positions. Comparing the long and short protection position in the tranche shows a substantially higher loss projection for the latter, which is expected given the potential defaults of names and resulting losses in the CDO basket that the tranche protection seller needs to cover.

[Insert Table 1 about here]


In the context of the CRM figures, note that a floor proportional to the portfolio charge according to the standardised measurement method (e.g., 8%) might be applied by regulators. For the three unhedged, delta-hedged and delta-/correlation-hedged CRM portfolios in Table 1, for example, 8% of the equivalent standardised charge amounts to 0.2 million, 4.5 million and 5.5 million EUR, respectively. This underestimates the risk on an outright position and significantly overestimates the risk on a hedged position, thus providing completely wrong hedging incentives. The problem lies in the notional-based calculation of the standardised charge, with no netting applicable across different instruments.

4.4 Model properties

4.4.1 Convergence

Stability and convergence of a simulation model are important properties to investigate. One way of conducting such an analysis is by running the simulation several times with changing random number sequences; this allows an empirical determination of the standard error for the estimators of certain values of a distribution such as the mean or a given quantile. The theoretical standard error for the estimator of the \alpha-quantile can also be approximated via

se(\hat{x}_\alpha) = \frac{1}{f(x_\alpha)} \sqrt{\frac{\alpha (1-\alpha)}{N}}   (18)

with N as the number of simulations and f as the density function of the theoretical distribution, evaluated at the point x_\alpha that gives the \alpha-quantile of the distribution (see Jorion (2006), pp. 126-128); the function f can thereby be approximated by means of the empirical P&L distribution itself.
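The quantile standard error can be evaluated directly on the simulated P&L sample; the density estimate below uses a simple window count around the empirical quantile and is only one of several possible choices (`window_frac` is an illustrative smoothing parameter):

```python
import numpy as np

def quantile_std_error(pnl, alpha, window_frac=0.01):
    """Approximate the theoretical standard error (18) of the empirical
    alpha-quantile: sqrt(alpha (1 - alpha) / N) / f(x_alpha), with the
    density f at the quantile estimated from the simulated P&L sample
    itself via a histogram-style window count."""
    pnl = np.asarray(pnl, dtype=float)
    n = len(pnl)
    x_a = np.quantile(pnl, alpha)
    h = window_frac * (pnl.max() - pnl.min())        # window half-width
    f = np.mean(np.abs(pnl - x_a) <= h) / (2.0 * h)  # density estimate
    return np.sqrt(alpha * (1.0 - alpha) / n) / f
```

Since the density in the far tail is itself estimated from few observations, a kernel density estimate or repeated runs with different seeds can serve as a cross-check.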


Table 2 shows theoretical standard errors for IRC and CRM figures for the example portfolios from Section 4.3. It is evident that with about 100,000 simulations the standard error lies in the magnitude of 1-2% in the test cases considered here. At a 95% confidence level, this means that the IRC and CRM loss estimates are accurate to about 3%. With one million simulations, this can be reduced to about 1%. It should be noted that the actual position (hedged or non-hedged, for example) drives the accuracy via the shape of the P&L distribution.

[Insert Table 2 about here]

For large-scale bank portfolios, with IRC stretching over many business lines, the overall P&L distribution can be asymmetric, fat-tailed and non-smooth, thus requiring a high number of simulations.33 In the case of CRM, the underlying credit correlation book is usually well hedged with respect to many risk factors and thus allows one to use fewer simulations.


4.4.2 Parameter sensitivity

Besides the inherent IRC and CRM model risk, the parameterisation itself poses an important challenge. Amongst the model parameters discussed in Section 3, Table 3 shows selected sensitivity analyses of the IRC/CRM figures with respect to changes in (a) the spread-to-rating mapping, (b) average pairwise asset correlations and (c) the stochastic credit spread component (CRM only).

[Insert Table 3 about here]


33 Another driver of the required number of simulations is the liquidity horizon. As a tendency, the shorter the liquidity horizon, the less stable the overall one-year P&L distribution.

In case (a), a positive sensitivity of the IRC figures to an increase in the slope of the spread-to-rating mapping becomes transparent. This is in line with expectations since relative credit spread moves upon a rating migration are more extreme in this case, both in the spread widening and in the narrowing case. For the balanced long/short portfolio, the sensitivity is much less pronounced. For CRM, the sensitivity to the spread-to-rating mapping increases when a delta hedge is introduced. This seems counterintuitive at first glance since the delta hedging is supposed to reduce the exposure against credit spread moves. In the example where the long protection position in the mezzanine tranche is hedged, the portfolio includes short positions in CDS. Although the delta sensitivity is reduced for small credit spread moves, the hedged portfolio is much more exposed to default and large credit spread moves affecting the CDS trades. This can lead to situations where the portfolio is no longer hedged and deteriorates to a wiped-out tranche with outright CDS. When the correlation hedge is added, this phenomenon is reduced compared to the delta-hedge-only case.

With regard to case (b), an increase (decrease) in the mean asset correlations increases (decreases) the IRC figures for both the long protection and the short protection position. Credit spread moves amongst names, driven by rating migrations, exhibit a higher (lower) correlation in these cases. While this behaviour is a straightforward result from a lower (higher) degree of portfolio diversification, the results for the balanced long/short protection position are less obvious. In the example considered here, a large increase or decrease both lead to a slight reduction of the IRC figure. Hence, IRC is not necessarily a monotonic function of mean asset correlation. Results from the CRM case confirm this. For example, in the two cases of the hedged tranche, a large change in the asset correlation structure increases the CRM figure, irrespective of the direction of the correlation change.


From (c), it becomes clear that the stochastic credit spread component that complements the rating-induced moves is an important driver of the CRM figure. An increase in the credit spread volatility influences especially the hedged cases. Again the presence of long CDS positions exposes the portfolio to extreme spread moves. The loss quantile then captures scenarios where the rating-induced move brought the portfolio into an unhedged zone where the stochastic credit spread has an important effect. As explained for case (a), this effect is reduced when introducing the additional correlation hedge.



4.4.3 Backtesting

Given the long forecasting horizon and very high loss quantile, the backtesting of an IRC or CRM model in the classical sense is hardly feasible. One simplified way to address the forecasting ability of the models is to monitor at least the cumulative P&L of snapshots of constant portfolios over periods of one year each. Taking potential cash flows into account, the P&L should hardly ever exceed the predicted 99.9% IRC/CRM figures. Notably, these thresholds should also not be exceeded on any day during the one-year period. While this clean backtesting approach provides a high-level check of the magnitude of the IRC/CRM figures, it does not allow statistical conclusions as in one-day VaR backtesting.

Another, simplified check of the model behaviour is a historical portfolio repricing. For this purpose, the IRC or CRM portfolios as of today are repriced over a historical period with the then current market parameters. One would hardly expect any exceedance of the IRC and CRM figures.34 Figure 6 illustrates this approach for a typical, hypothetical CRM portfolio.


34 Notably, to render this comparison meaningful, both figures should be modified to ignore default events (migration only) since these will not be reflected in such a repricing exercise.

[Insert Figure 6 about here]

In the example, the static, well-hedged CRM portfolio is repriced daily over a historical time period and a sliding one-year P&L is derived from that. The figure shows the isolated P&Ls for the credit correlation products and their hedges, as well as the overall P&L. By comparing the latter to the projected P&L quantiles from the CRM model (ignoring defaults), one finds that the extremes in the form of the 99.9% and 0.1% quantiles were not exceeded, in spite of the financial crisis of 2007/08 falling into the test period.
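Such a sliding one-year P&L check can be sketched as follows, with `loss_threshold` standing for the (migration-only) IRC/CRM figure as a positive number:

```python
import numpy as np

def sliding_one_year_pnl(daily_pnl, loss_threshold, days_per_year=252):
    """Clean backtesting check: cumulative P&L of a constant portfolio
    over sliding one-year windows, compared against the predicted 99.9%
    IRC/CRM loss figure.  Returns the windowed P&Ls and the number of
    windows breaching the loss threshold."""
    pnl = np.asarray(daily_pnl, dtype=float)
    cum = np.concatenate(([0.0], np.cumsum(pnl)))
    windows = cum[days_per_year:] - cum[:-days_per_year]  # rolling sums
    breaches = int(np.sum(windows < -loss_threshold))
    return windows, breaches
```

Given the 99.9% confidence level, essentially any breach over the available history would call the calibration into question, even though no formal test statistic is available.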

5 Summary and outlook

The introduction and regulatory approval of new internal market risk models (Basel 2.5) poses significant challenges for banks. The regulatory auditing is still ongoing in many jurisdictions. So far, industry standards for consistent, compliant models have not been established. This paper is the first to discuss risk factor models for IRC and CRM and to shed light on implementation aspects to arrive at P&L distributions and loss figures. While it should fulfil regulatory requirements, this framework uses a range of simplifications and still has potential for extension. Especially for CRM, an avenue of future research could be the use of extreme value theory (EVT) to fit the joint tail behaviour of the various risk factors, either parametrically or non-parametrically. Although theoretically appealing, the fact that, in a well-hedged trading book, extreme PV moves do not tend to coincide with joint extreme realisations of the risk factors35 does not render such an approach straightforward. As a general conclusion, high confidence levels and long projection horizons in IRC and CRM, in conjunction with limited backtesting feasibility, leave a substantial model risk.

35 Consider a situation where the correlation hedge slippage (gamma) is larger than that for credit spreads. In this case, it is entirely possible that an extreme scenario involves a big move in correlations, but hardly any move in credit spreads.

While medium- to long-term market risk measures might also prove valuable as risk management tools, some aspects require further consideration. Lobbying of the industry and work-intense QIS over the past years have helped to achieve more coherent risk measures compared to original plans. With concepts like the CRM floor based on the standardised measurement method still questioned by the industry, the ongoing fundamental review of the trading book regulations might help to derive a capital framework that better reflects economic reality and re-aligns capital requirements on the one side with risk measurement and management on the other side.

References

Ahluwalia, R.; McGinty, L.; Beinstein, E. (2004), A Relative Value Framework for Credit Correlation, Research Paper, JP Morgan.
Albrecher, H.; Ladoucette, S.A.; Schoutens, W. (2006), A Generic One-Factor Lévy Model for Pricing Synthetic CDOs, Katholieke Universiteit Leuven preprint, September.
Andersen, L. (2006), Portfolio Losses in Factor Models: Term Structures and Intertemporal Loss Dependence, Working Paper, September.
Andersen, L.; Sidenius, J. (2004), Extensions to the Gaussian Copula: Random Recovery and Random Factor Loadings, Journal of Credit Risk, Vol. 1, Winter, pp. 29-70.
Basel Committee on Banking Supervision (2009), Guidelines for Computing Capital for Incremental Risk in the Trading Book, July.
Basel Committee on Banking Supervision (2011), Revisions to the Basel II Market Risk Framework, February.
Brigo, D.; Mercurio, F. (2006), Interest Rate Models: Theory and Practice. With Smile, Inflation and Credit, 2nd edition, Heidelberg.
Israel, R.B.; Rosenthal, J.S.; Wei, J.Z. (2001), Finding Generators for Markov Chains via Empirical Transition Matrices, with Application to Credit Ratings, Mathematical Finance, Vol. 11, pp. 245-265.
Jorion, P. (2006), Value at Risk: The New Benchmark for Managing Financial Risk, 3rd edition, New York.
Kainth, D.; Kwiatowski, J.; Muirden, D. (2010), Modelling the CRM for the Correlation Trading Portfolio, Presentation, Global Derivatives and Risk Management, Paris, 19th May 2010.
Kalemanova, A.; Werner, R. (2006), A Short Note on the Efficient Implementation of the Normal Inverse Gaussian Distribution, Risklab and Hypo Real Estate Holding, November.
Li, D. (2000), On Default Correlation: A Copula Function Approach, The Journal of Fixed Income, Vol. 9, March, pp. 43-54.
McGinty, L.; Beinstein, E.; Ahluwalia, R.; Watts, M. (2004), Introducing Base Correlations, Research Paper, JP Morgan.
O'Kane, D. (2008), Modelling Single-Name and Multi-Name Credit Derivatives, Chichester.
Oehler, C.; Appasamy, B.; Stapper, G. (2010), Modelling Incremental Risk Charge, Presentation, Capital Allocation and Management, London, 13th September 2010.
Schönbucher, P. (2001), Credit Derivatives Pricing Models, Chichester.
Schwartz, E.; Smith, J. (2000), Short-Term Variations and Long-Term Dynamics in Commodity Prices, Management Science, Vol. 46, pp. 893-911.


Table 1: P&L distributions and IRC/CRM figures for selected example portfolios

                                                Loss quantile of P&L distribution
                                                     90%          99%        99.9%
A. IRC
iTraxx constituents long protection               56,370      158,504      234,801
iTraxx constituents short protection             544,690    1,162,844    1,857,464
iTraxx constituents long/short protection        134,024      366,700      620,077

B. CRM
Mezzanine tranche long protection              1,798,108    3,175,423    3,792,517
Mezzanine tranche short protection             3,437,593    5,364,264    6,339,288
Mezzanine tranche long protection
  plus credit spread hedge                       467,069    1,427,435    3,000,093
Mezzanine tranche long protection
  plus credit spread and base corr. hedge            ...          ...          ...

The figures (in EUR) are derived by means of 100,000 simulations each.

Table 2: Convergence behaviour of IRC/CRM simulations

                                     Theoretical standard error of 99.9% loss quantile
                                            50,000        100,000      1,000,000
                                        simulations    simulations    simulations
A. IRC
iTraxx constituents long protection           2.06%          1.45%          0.51%
iTraxx constituents short protection          1.60%          1.13%          0.40%
iTraxx constituents long/short protection     1.81%          1.28%          0.45%

B. CRM
Mezzanine tranche long protection             0.72%          0.51%          0.16%
Mezzanine tranche short protection            0.62%          0.44%          0.14%
Mezzanine tranche long protection
  plus credit spread hedge                    2.96%          2.09%          0.66%
Mezzanine tranche long protection
  plus credit spread and base corr. hedge      ...            ...            ...

The standard errors (relative to the respective loss quantile) are derived by obtaining the values of the density function from simulations.


Table 3: Selected sensitivity analyses for IRC/CRM figures

                                  Spread-to-rating mapping*   Mean asset correlations    Stochastic credit spreads
                                  Steepening    Flattening    Increase     Decrease      Vol. increase  Vol. decrease
                                  by 50%        by 50%        by 100%      by 50%        by 50%         by 50%
A. IRC
iTraxx constituents
  long protection                  17.99%       -27.24%        47.51%      -23.49%            -              -
iTraxx constituents
  short protection                  7.50%       -12.52%        70.11%      -23.95%            -              -
iTraxx constituents
  long/short protection             1.36%        -1.36%        -1.84%       -1.19%            -              -

B. CRM
Mezzanine tranche
  long protection                   1.39%        -2.10%         0.87%        0.54%          5.96%         -5.19%
Mezzanine tranche
  short protection                 -0.65%         0.97%         6.96%       -0.65%          6.35%         -5.98%
Mezzanine tranche long protection
  plus credit spread hedge         -6.99%         8.76%        11.98%        2.52%         37.00%        -26.16%
Mezzanine tranche long protection
  plus cs and base corr. hedge       ...           ...           ...          ...            ...            ...

The figures show the differences relative to the base cases in Table 1.

* The steepening and flattening is obtained by multiplying dSpread/dRating in the spread-to-rating table by 1.5 and 0.5, respectively.








[Figure: migration event vs. relative change in median credit spread]

Fig. 1: Spread-to-rating mapping

[Figure: credit spread time series, 01/2007-07/2010; left: iTraxx 3Y/5Y/10Y, right: CDX 3Y/5Y/7Y]

iTraxx Europe (Series 6 through 9) and CDX.NA.IG (Series 7 through 9). Source: Bloomberg.

Fig. 2: Credit spreads of iTraxx (left) and CDX (right)







[Figure: index basis time series, 01/2007-07/2010; left: iTraxx, right: CDX]

iTraxx Europe (Series 6 through 9) and CDX.NA.IG (Series 7 through 9). Source: Bloomberg, own calculations.

Fig. 3: Index basis of iTraxx (left) and CDX (right)


[Figure: base correlation time series, 08/2007-08/2010; left: iTraxx 3%/9%/22% strikes, right: CDX 7%/10%/15% strikes]

iTraxx Europe (5Y, Series 6 through 9) and CDX.NA.IG (5Y, Series 7 through 9). The strikes are adjusted for pricing purposes if a default occurs in the basket. Source: Bloomberg, own calculations.

Fig. 4: Base correlations for standard tranches of iTraxx (left) and CDX (right)


[Figure: histogram of recovery rates (0-100%); series: historically realised recovery rates and fitted logit-normal distribution]

Source: BNP Paribas-internal sample data.

Fig. 5: Historical realised recovery rates and fitted distribution

[Figure: sliding one-year P&L, 11/2007-09/2008; series: P&L correlation products, P&L hedges, total P&L, CRM model (migration only) 99.9% and 0.1% quantiles]

Fig. 6: Model backtesting: Sliding one-year P&L of a CRM portfolio over a historical period in comparison to model P&L quantiles