
Operational Risk Management: A Review

Dean Fantazzini

Moscow
Overview of the Presentation

• Introduction

• The Basic Indicator Approach

• The Standardized Approach

• Advanced Measurement Approaches

• The Standard LDA Approach with Comonotonic Losses

• The Canonical Aggregation Model via Copulas

• The Poisson Shock Model

• Bayesian Approaches

Dean Fantazzini 2-g


Introduction

• What are operational risks?

The term “operational risks” is used to denote all financial risks that are
not classified as market or credit risks. They may include all losses due to
human errors, technical or procedural problems, etc.

→ To estimate the required capital for operational risks, the Basel
Committee on Banking Supervision (1998-2005) allows for both a simple
“top-down” approach, which includes all the models that consider
operational risks at a central level, so that local Business Lines (BLs) are
not involved,

→ and a more complex “bottom-up” approach, which measures
operational risks at the BL level and then aggregates them, thus allowing
for better control at the local level.

Dean Fantazzini 3
Introduction

In particular, following BIS (2003), banks are allowed to choose among three
different approaches:
• the Basic Indicator Approach (BI),
• the Standardized Approach (SA),
• the Advanced Measurement Approach (AMA).

If the Basic Indicator Approach is chosen, banks are required to hold capital
equal to a flat percentage of the average positive gross income over the past
three years.

If the Standardized Approach is chosen, banks’ activities are separated into a
number of business lines. A flat percentage is then applied to the three-year
average gross income of each business line.

Instead, if the Advanced Measurement Approach is chosen, banks are allowed to
develop more sophisticated internal models that consider the interactions
between different BLs and Event Types (ETs), and they have to put forward
risk mitigation strategies.

Dean Fantazzini 4
The Basic Indicator Approach

Banks using the basic indicator (BI) approach are required to hold a
capital charge set equal to a fixed percentage (denoted by α) of the
positive annual gross income (GI).

If the annual gross income is negative or zero, it has to be excluded when


calculating the average. Hence, the capital charge for operational risk in
year t is given by
RC_{BI}^{t} = \alpha \, \frac{1}{Z_t} \sum_{i=1}^{3} \max(GI_{t-i}, 0)                    (1)

where Z_t = \sum_{i=1}^{3} I_{[GI_{t-i} > 0]} and GI_{t-i} stands for gross income in year t − i.
Note that the operational risk capital charge is calculated on a yearly basis.

The Basel Committee has suggested α = 15%.

→ This is a straightforward, volume-based, one-size-fits-all capital charge.
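A minimal sketch of the computation in (1), in Python, with purely hypothetical
gross income figures (the only input taken from the text is α = 15%; years with
non-positive gross income are dropped from both the sum and the count Z_t):

# Basic Indicator approach: capital charge as in equation (1).
ALPHA = 0.15  # flat percentage suggested by the Basel Committee

def rc_basic_indicator(gross_income, alpha=ALPHA):
    """Average of the positive gross income figures of the last 3 years, scaled by alpha."""
    positive = [gi for gi in gross_income if gi > 0]
    if not positive:          # Z_t = 0: no year with positive gross income
        return 0.0
    return alpha * sum(positive) / len(positive)

# Hypothetical gross income for years t-1, t-2, t-3 (in euro):
print(rc_basic_indicator([120_000_000, -15_000_000, 95_000_000]))  # 16,125,000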

Dean Fantazzini 5
The Standardized Approach

The BI is designed to be implemented by the least sophisticated banks. Moving


to the Standardized Approach requires the bank to collect gross income data by
business lines.

The model specifies eight business lines: Corporate Finance, Trading and Sales,
Retail Banking, Commercial Banking, Payment and Settlement, Agency Services
and Custody, Asset Management, and Retail Broker.

For each business line, the capital charge is calculated by multiplying the gross
income by a factor denoted by β assigned to that business line.

The total capital charge is then calculated as a three-year average over positive
gross incomes, resulting in the following capital charge formula:

RC_{SA}^{t} = \frac{1}{3} \sum_{i=1}^{3} \max\left( \sum_{j=1}^{8} \beta_j \, GI_j^{t-i}, \; 0 \right)                    (2)

Dean Fantazzini 6
The Standardized Approach

We remark that in formula (2), in any given year t − i, negative capital charges
resulting from negative gross income in some business line j may offset positive
capital charges in other business lines (albeit at the discretion of the national
supervisor).

⇒ This kind of netting should induce banks to go from the basic indicator to the
standardized approach.

Table 1.3 gives the beta factors for each business line:

Business Line Beta factors


Corporate Finance 18%
Trading and Sales 18%
Retail Banking 12%
Commercial Banking 15%
Payment and Settlement 18%
Agency Services and Custody 15%
Asset Management 12%
Retail Broker 12%

Table 1: Beta factors for the standardized approach.
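A minimal sketch of formula (2) using the beta factors in Table 1; the gross
income figures per business line below are hypothetical:

# Standardized Approach: capital charge as in equation (2).
BETAS = {
    "Corporate Finance": 0.18, "Trading and Sales": 0.18,
    "Retail Banking": 0.12, "Commercial Banking": 0.15,
    "Payment and Settlement": 0.18, "Agency Services and Custody": 0.15,
    "Asset Management": 0.12, "Retail Broker": 0.12,
}

def rc_standardized(gi_by_year):
    """gi_by_year: three dicts {business line: gross income}, one per year t-1, t-2, t-3.
    Negative charges within a year may offset positive ones, but each yearly sum is floored at zero."""
    yearly = []
    for gi in gi_by_year:
        charge = sum(BETAS[bl] * gi.get(bl, 0.0) for bl in BETAS)
        yearly.append(max(charge, 0.0))
    return sum(yearly) / 3.0

# Hypothetical example with only two active business lines:
years = [{"Retail Banking": 50e6, "Trading and Sales": -10e6}] * 3
print(rc_standardized(years))  # 0.12*50e6 - 0.18*10e6 = 4,200,000 per year on average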

Dean Fantazzini 7
Advanced Measurement Approaches

In the 2001 version of the Basel 2 agreement, the Committee described three
specific methods within the AMA framework:
• Internal Measurement Approach (IMA): according to this method, the OR
capital charge depends on the sum of the unexpected and expected losses:
the expected losses are computed by using the bank’s historical data, while
the unexpected losses are obtained by multiplying the expected losses by a
factor γ derived from sector analysis.

• Loss Distribution Approach (LDA): using internal data, it is possible to


compute, for every BL/ET combination, the probability distribution for the
frequency of the loss event as well as for its impact (severity) over a specific
time horizon.

By convolving the frequency with the severity distribution, analytically or
numerically, the probability distribution of the total loss can be retrieved.
The final capital charge will be equal to a given percentile of that
distribution.

Dean Fantazzini 8
Advanced Measurement Approaches

• Scorecard: an expert panel has to go through a structured process of


identifying the drivers for each risk category, and then forming these into
questions that could be put on scorecards.

These questions are selected to cover drivers of both the probability and
impact of operational events, and the actions that the bank has taken to
mitigate them. In parallel with the scorecard development and piloting, the
bank’s total economic capital for operational risk is calculated and then
allocated to risk categories.

In the last version of the Basel 2 agreement, these models are no longer
mentioned, in order to allow for more flexibility in the choice of internal
measurement methods.

Given its increasing importance (see e.g. Cruz, 2002) and the possibility of
applying econometric methods, we will focus here only on the LDA approach.

Dean Fantazzini 9
The Standard LDA Approach with Comonotonic Losses

The actuarial approach employs two types of distributions:


• The one that describes the frequency of risky events;
• The one that describes the severity of the losses.

Formally, for each type of risk i = 1, . . . , R and for a given time period,
operational losses can be defined as the sum S_i of a random number n_i of
individual losses X_{ij}:

S_i = X_{i1} + X_{i2} + . . . + X_{i n_i}                    (3)

A widespread statistical model is the actuarial model. In this model, the
probability distribution of S_i is obtained by compounding frequency and severity,
F_i(S_i) = F_i(n_i) ⊗ F_i(X_{ij}), where
• F_i(S_i) = probability distribution of the aggregate loss for risk i;
• F_i(n_i) = probability of event (frequency) for risk i;
• F_i(X_{ij}) = loss given event (severity) for risk i,
and ⊗ denotes the compounding (convolution) of the two components.

Dean Fantazzini 10
The Standard LDA Approach with Comonotonic Losses

The underlying assumptions for the actuarial model are:

• the losses are random variables, independent and identically


distributed (i.i.d.);

• the distribution of ni (frequency) is independent of the distribution of


Xij (severity).

Moreover,

• The frequency can be modelled by a Poisson or a Negative Binomial distribution.

• The severity is modelled by an Exponential, a Pareto, or a Gamma distribution.

→ The distribution F_i of the losses S_i for each intersection i among
business lines and event types is then obtained by the convolution of the
frequency and severity distributions.

Dean Fantazzini 11
The Standard LDA Approach with Comonotonic Losses

However, the analytic representation of this distribution is computationally
difficult or impossible to obtain. For this reason, the distribution is usually
approximated by Monte Carlo simulation:

→ We generate a large number of possible losses (e.g. 100,000) by drawing at
random from the theoretical distributions that describe frequency and
severity. We thus obtain a loss scenario for each loss S_i.

→ A risk measure like Value at Risk (VaR) or Expected Shortfall (ES) is
then estimated to evaluate the capital requirement for the loss S_i.

• The VaR at confidence level α is the α-quantile of the loss
distribution for the i-th risk: VaR(S_i; α) = inf{ s : Pr(S_i > s) ≤ 1 − α }

• The Expected Shortfall at confidence level α is defined as the
expected loss for intersection i, given that the loss has exceeded the VaR
at level α: ES(S_i; α) ≡ E[S_i | S_i ≥ VaR(S_i; α)]
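A minimal sketch of this Monte Carlo convolution for a single intersection,
assuming (purely for illustration) a Poisson frequency and a Gamma severity with
arbitrary parameters:

import numpy as np

rng = np.random.default_rng(42)
N_SCEN = 100_000                           # number of simulated loss scenarios
LAM, SHAPE, SCALE = 0.08, 0.2, 759_717     # illustrative Poisson / Gamma parameters

# Compound (frequency x severity) simulation of the aggregate loss S_i.
n_events = rng.poisson(LAM, size=N_SCEN)
losses = np.array([rng.gamma(SHAPE, SCALE, size=n).sum() for n in n_events])

alpha = 0.99
var = np.quantile(losses, alpha)           # VaR: the alpha-quantile of the simulated losses
es = losses[losses >= var].mean()          # ES: mean loss beyond the VaR
print(f"VaR(99%) = {var:,.0f}   ES(99%) = {es:,.0f}")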

Dean Fantazzini 12
The Standard LDA Approach with Comonotonic Losses

Once the risk measures for each loss S_i are estimated, the global VaR (or
ES) is usually computed as the simple sum of these individual measures:

• a perfect dependence among the different losses Si is assumed...

• ... but this is absolutely not realistic!

• If we used Sklar’s theorem (1959) and the Fréchet-Hoeffding
bounds, the multivariate distribution of the R losses would be given by

H(S_{1,t}, . . . , S_{R,t}) = min(F(S_{1,t}), . . . , F(S_{R,t}))                    (4)

where H is the joint distribution of the vector of losses S_{i,t}, i = 1, . . . , R,
and F(·) are the cumulative distribution functions of the losses’
marginals. Needless to say, such an assumption is quite unrealistic.

Dean Fantazzini 13-b


The Canonical Aggregation Model via Copulas

• A brief recap of copula theory:

→ A copula is a multivariate distribution function C of random variables
X_1, . . . , X_n with standard uniform marginal distributions, defined on the
unit n-cube [0, 1]^n.

(Sklar’s theorem): Let H denote an n-dimensional distribution function
with margins F_1, . . . , F_n. Then there exists an n-copula C such that, for all
real (x_1, . . . , x_n),

H(x_1, . . . , x_n) = C(F_1(x_1), . . . , F_n(x_n))                    (5)

If all the margins are continuous, then the copula is unique; otherwise C is
uniquely determined on RanF_1 × RanF_2 × . . . × RanF_n, where Ran denotes the
range of a marginal. Conversely, if C is a copula and F_1, . . . , F_n are
distribution functions, then the function H defined in (5) is a joint
distribution function with margins F_1, . . . , F_n.

Dean Fantazzini 14
The Canonical Aggregation Model via Copulas

By applying Sklar’s theorem and using the relation between the
distribution and the density function, we can derive the multivariate
copula density c(F_1(x_1), . . . , F_n(x_n)) associated to a copula function
C(F_1(x_1), . . . , F_n(x_n)):

f(x_1, . . . , x_n) = \frac{\partial^n [C(F_1(x_1), . . . , F_n(x_n))]}{\partial F_1(x_1) \cdots \partial F_n(x_n)} \cdot \prod_{i=1}^{n} f_i(x_i) = c(F_1(x_1), . . . , F_n(x_n)) \cdot \prod_{i=1}^{n} f_i(x_i)

where

c(F_1(x_1), . . . , F_n(x_n)) = \frac{f(x_1, . . . , x_n)}{\prod_{i=1}^{n} f_i(x_i)}                    (6)

By using this procedure, we can derive the Normal and the T-copula...

Dean Fantazzini 15
The Canonical Aggregation Model via Copulas

1. Normal copula:

c(\Phi(x_1), . . . , \Phi(x_n)) = \frac{f^{Gaussian}(x_1, . . . , x_n)}{\prod_{i=1}^{n} f_i^{Gaussian}(x_i)} = \frac{\frac{1}{(2\pi)^{n/2} |\Sigma|^{1/2}} \exp\left(-\frac{1}{2} x' \Sigma^{-1} x\right)}{\prod_{i=1}^{n} \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{1}{2} x_i^2\right)} = \frac{1}{|\Sigma|^{1/2}} \exp\left(-\frac{1}{2} \zeta' (\Sigma^{-1} - I) \zeta\right)

where \zeta = (\Phi^{-1}(u_1), . . . , \Phi^{-1}(u_n))' is the vector of univariate Gaussian
inverse distribution functions, u_i = \Phi(x_i), while \Sigma is the correlation
matrix.

2. T-copula:

c(t_\nu(x_1), . . . , t_\nu(x_n)) = \frac{f^{Student}(x_1, . . . , x_n)}{\prod_{i=1}^{n} f_i^{Student}(x_i)} = |\Sigma|^{-1/2} \, \frac{\Gamma\left(\frac{\nu+n}{2}\right)}{\Gamma\left(\frac{\nu}{2}\right)} \left[\frac{\Gamma\left(\frac{\nu}{2}\right)}{\Gamma\left(\frac{\nu+1}{2}\right)}\right]^{n} \frac{\left(1 + \frac{\zeta' \Sigma^{-1} \zeta}{\nu}\right)^{-\frac{\nu+n}{2}}}{\prod_{i=1}^{n} \left(1 + \frac{\zeta_i^2}{\nu}\right)^{-\frac{\nu+1}{2}}}

where \zeta = (t_\nu^{-1}(u_1), . . . , t_\nu^{-1}(u_n))' is the vector of univariate Student's
t inverse distribution functions, \nu are the degrees of freedom, u_i = t_\nu(x_i),
while \Sigma is the correlation matrix.

Dean Fantazzini 16
The Canonical Aggregation Model via Copulas

Di Clemente and Romano (2004) and Fantazzini et al. (2007,


2008) proposed to use copulas to model the dependence among
operational risk losses:

→ By using Sklar’s Theorem, the joint distribution H of a vector of losses
S_i, i = 1, . . . , R, can be written as a copula evaluated at the cumulative
distribution functions of the losses’ marginals:

H(S1 , . . . , SR ) = C(F1 (S1 ), . . . , FR (SR )) (7)

...moving to densities, we get:

h(S1 , . . . , SR ) = c(F1 (S1 ), . . . , FR (SR )) · f1 (S1 ) · . . . · fR (SR )

→ The analytic representation of the multivariate distribution of all losses
S_i with copula functions is generally not available in closed form, so an
approximate solution based on Monte Carlo methods is necessary.
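A minimal sketch of this Monte Carlo aggregation for two risks, assuming a
Gaussian copula with an illustrative correlation and toy Poisson–Gamma marginals
(the empirical quantiles of two simulated LDA losses play the role of the
marginal inverse c.d.f.’s):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
N_SCEN, RHO = 100_000, 0.25
corr = np.array([[1.0, RHO], [RHO, 1.0]])

# Dependent uniforms from a Gaussian copula.
z = rng.multivariate_normal([0.0, 0.0], corr, size=N_SCEN)
u = stats.norm.cdf(z)

# Marginal aggregate losses of each intersection, simulated as in the standard LDA.
def simulate_marginal(lam, shape, scale, n=N_SCEN):
    counts = rng.poisson(lam, size=n)
    return np.sort([rng.gamma(shape, scale, size=k).sum() for k in counts])

S1 = simulate_marginal(0.08, 0.2, 759_717)
S2 = simulate_marginal(0.33, 0.2, 230_817)

# Empirical inverse c.d.f.: map each uniform to a quantile of the simulated marginal.
total = S1[(u[:, 0] * (N_SCEN - 1)).astype(int)] + S2[(u[:, 1] * (N_SCEN - 1)).astype(int)]

var99 = np.quantile(total, 0.99)
es99 = total[total >= var99].mean()
print(f"Copula-aggregated VaR(99%) = {var99:,.0f}   ES(99%) = {es99:,.0f}")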

Dean Fantazzini 17
The Canonical Aggregation Model via Copulas

• Simulation studies: Small sample properties - Marginals


estimators [from Fantazzini et al. (2007, 2008)]

→ The simulation Data Generating Processes (DGPs) are designed to


reflect the stylized facts about real operational risks: we chose the
parameters of the DGPs among the ones estimated in the empirical section.

We consider two DGPs for the frequency:

F_i(n_i) ∼ Poisson(0.08)                    (8)
F_i(n_i) ∼ Negative Binomial(0.33; 0.80)                    (9)

and three DGPs for the severity:

F_i(X_{ij}) ∼ Exponential(153304)                    (10)
F_i(X_{ij}) ∼ Gamma(0.2; 759717)                    (11)
F_i(X_{ij}) ∼ Pareto(2.51; 230817)                    (12)

In addition to the five DGPs, we consider four possible data situations: 1)


T = 72; 2) T = 500; 3) T = 1000; 4) T = 2000.
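A minimal sketch of a single replication of this experiment for the Poisson–Gamma
pair, with scipy’s maximum likelihood fit standing in for the estimators used in
the papers:

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
T = 72                                        # smallest sample size considered

# Simulate from the frequency DGP (8) and the severity DGP (11).
freq = rng.poisson(0.08, size=T)
sev = rng.gamma(0.2, 759_717, size=max(freq.sum(), 1))

lam_hat = freq.mean()                                      # Poisson MLE = sample mean of the counts
shape_hat, _, scale_hat = stats.gamma.fit(sev, floc=0)     # Gamma MLE, location fixed at 0
print(f"lambda = {lam_hat:.3f}, shape = {shape_hat:.3f}, scale = {scale_hat:,.0f}")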

Dean Fantazzini 18
The Canonical Aggregation Model via Copulas

→ Simulation results:

1. As for the frequency distributions, the Poisson distribution already gives
consistent estimates with 72 observations, while the Negative Binomial
shows dramatic results, with 40% of cases yielding negative estimates and
very high MSE and variation coefficients. Moreover, even with a dataset of
2000 observations, the estimates are not yet stable: datasets of 5000
observations or higher are required.

2. As for the severity distributions, we again have mixed results.
The Exponential and Gamma distributions already give consistent
estimates with 72 observations.
The Pareto, instead, has problems in small samples, with 2% of cases of
negative coefficients and very high MSE and VC.
As for the Negative Binomial, a sample size of at least T = 5000 is
required.

Dean Fantazzini 19
The Canonical Aggregation Model via Copulas

• Empirical Analysis

The model we described was applied to an (anonymous) banking loss
dataset, ranging from January 1999 to December 2004, for a total of 72
monthly observations.

→ The dataset contains 407 loss events overall, organized in 2 business
lines and 4 event types, so that we have 8 possible risky combinations (or
intersections) to deal with.

→ The overall average monthly loss was 202,158 euro, the minimum was 0
(September 2001), while the maximum was 4,570,852 euro (which took place
in July 2003).

Dean Fantazzini 20
The Canonical Aggregation Model via Copulas

Table 1: Excerpts from the banking losses dataset


Frequency 1999 1999 1999 1999 ... 2004 2004
January February March April ... November December
Intersection 1 2 0 0 0 ... 5 0
Intersection 2 6 1 1 1 ... 3 1
Intersection 3 0 2 0 0 ... 0 0
Intersection 4 0 1 0 0 ... 0 0
Intersection 5 0 0 0 0 ... 0 1
Intersection 6 0 0 0 0 ... 2 4
Intersection 7 0 0 0 0 ... 1 0
Intersection 8 0 0 0 0 ... 0 0

Severity 1999 1999 1999 1999 ... 2004 2004


January February March April ... November December
Intersection 1 35753 0 0 0 ... 27538 0
Intersection 2 121999 1550 3457 5297 ... 61026 6666
Intersection 3 0 33495 0 0 ... 0 0
Intersection 4 0 6637 0 0 ... 0 0
Intersection 5 0 0 0 0 ... 0 11280
Intersection 6 0 0 0 0 ... 57113 11039
Intersection 7 0 0 0 0 ... 2336 0
Intersection 8 0 0 0 0 ... 0 0

Dean Fantazzini 21
The Canonical Aggregation Model via Copulas

Figure 1: Global Loss Distribution


(Negative Binomial - Pareto - Normal copula)

Dean Fantazzini 22
The Canonical Aggregation Model via Copulas

Table 2: Correlation Matrix of the risky Intersections


(Normal Copula)
Int. 1 Int. 2 Int. 3 Int. 4 Int. 5 Int. 6 Int. 7 Int. 8
Inters. 1 1 -0.050 -0.142 0.051 -0.204 0.252 0.140 -0.155
Inters. 2 -0.050 1 -0.009 0.055 0.023 0.115 0.061 0.048
Inters. 3 -0.142 -0.009 1 0.139 -0.082 -0.187 -0.193 -0.090
Inters. 4 0.051 0.055 0.139 1 -0.008 0.004 -0.073 -0.045
Inters. 5 -0.204 0.023 -0.082 -0.008 1 0.118 -0.102 -0.099
Inters. 6 0.252 0.115 -0.187 0.004 0.118 1 -0.043 0.078
Inters. 7 0.140 0.061 -0.193 -0.073 -0.102 -0.043 1 -0.035
Inters. 8 -0.155 0.048 -0.090 -0.045 -0.099 0.078 -0.035 1

Dean Fantazzini 23
The Canonical Aggregation Model via Copulas

Table 3: Global VaR and ES for different marginals convolutions,


dependence structures, and confidence levels
VaR 95% VaR 99% ES 95% ES 99%
Poisson Exponential Perfect Dep. 925,218 1,940,229 1,557,315 2,577,085
Normal Copula 656,068 1,086,725 920,446 1,340,626
T copula (9 d.o.f.) 673,896 1,124,606 955,371 1,414,868
Poisson Gamma Perfect Dep. 861,342 3,694,768 2,640,874 6,253,221
Normal Copula 767,074 2,246,150 1,719,463 3,522,009
T copula (9 d.o.f.) 789,160 2,366,876 1,810,302 3,798,321
Poisson Pareto Perfect Dep. 860,066 2,388,649 2,016,241 4,661,986
Normal Copula 663,600 1,506,466 1,294,654 2,785,706
T copula (9 d.o.f.) 672,942 1,591,337 1,329,130 2,814,176
Negative Bin. Exponential Perfect Dep. 965,401 2,120,145 1,676,324 2,810,394
Normal Copula 672,356 1,109,768 942,311 1,359,876
T copula (9 d.o.f.) 686,724 1,136,445 975,721 1,458,298
Negative Bin. Gamma Perfect Dep. 907,066 3,832,311 2,766,384 6,506,154
Normal Copula 784,175 2,338,642 1,769,653 3,643,691
T copula (9 d.o.f.) 805,747 2,451,994 1,848,483 3,845,292
Negative Bin. Pareto Perfect Dep. 859,507 2,486,971 2,027,962 4,540,441
Normal Copula 672,826 1,547,267 1,311,610 2,732,197
T copula (9 d.o.f.) 694,038 1,567,208 1,329,281 2,750,097

Dean Fantazzini 24
The Canonical Aggregation Model via Copulas

Table 4: Backtesting results with different marginals and copulas

                                                      Exceedances N/T
Marginals                     Dependence              at VaR 99%   at VaR 95%
Poisson - Exponential         Perfect Dep.            1.39%        4.17%
                              Normal Copula           2.78%        6.94%
                              T Copula (9 d.o.f.)     2.78%        6.94%
Poisson - Gamma               Perfect Dep.            1.39%        6.94%
                              Normal Copula           1.39%        6.94%
                              T Copula (9 d.o.f.)     1.39%        6.94%
Poisson - Pareto              Perfect Dep.            1.39%        6.94%
                              Normal Copula           1.39%        6.94%
                              T Copula (9 d.o.f.)     1.39%        6.94%
Negative Bin. - Exponential   Perfect Dep.            1.39%        4.17%
                              Normal Copula           2.78%        6.94%
                              T Copula (9 d.o.f.)     2.78%        6.94%
Negative Bin. - Gamma         Perfect Dep.            1.39%        4.17%
                              Normal Copula           1.39%        6.94%
                              T Copula (9 d.o.f.)     1.39%        6.94%
Negative Bin. - Pareto        Perfect Dep.            1.39%        6.94%
                              Normal Copula           1.39%        6.94%
                              T Copula (9 d.o.f.)     1.39%        6.94%

Dean Fantazzini 25
The Canonical Aggregation Model via Copulas

- The empirical analysis in Di Clemente and Romano (2004) and Fantazzini et
al. (2007, 2008) showed that it is not the choice of the copula, but that of
the marginals, which matters most.

- Among the marginal distributions, those used to model the loss severity
are particularly important.

- The best distribution for severity modelling turned out to be the Gamma
distribution, while no remarkable differences were found between the Poisson
and the Negative Binomial for frequency modelling.

- However, we have to remember that the Poisson is much easier to
estimate, especially with small samples.

- Copula functions allow us to reduce the capital requirements implied by the
risk measures.

Dean Fantazzini 26
The Poisson Shock Model

Lindskog and McNeil (2003), Embrechts and Puccetti (2008) and


Rachedi and Fantazzini (2009) proposed a different aggregation model.
In this model, the dependence is modelled among severities and among
frequencies, using Poisson processes.

Suppose there are m different types of shock or event and, for e = 1, . . . , m, let
n_t^e be a Poisson process with intensity λ_e recording the number of events of
type e occurring in (0, t].

Assume further that these shock counting processes are independent. Consider
losses of R different types and, for i = 1, . . . , R, let n_{it} be a counting process
that records the frequency of losses of the i-th type occurring in (0, t].

At the r-th occurrence of an event of type e, the Bernoulli variable I_{i,r}^e
indicates whether a loss of type i occurs. The vectors

I_r^e = (I_{1,r}^e, . . . , I_{R,r}^e)'

for r = 1, . . . , n_t^e are considered to be independent and identically distributed
with a multivariate Bernoulli distribution.

Dean Fantazzini 27
The Poisson Shock Model

⇒ In other words, each new event represents a new independent opportunity to


incur a loss but, for a fixed event, the loss trigger variables for losses of different
types may be dependent. The form of the dependence depends on the
specification of the multivariate Bernoulli distribution and independence is a
special case.

According to the Poisson shock model, the loss processes n_{it} are clearly Poisson
themselves, since they are obtained by superposing the m independent Poisson
processes generated by the m underlying event processes.

⇒ Therefore, (n_{1t}, . . . , n_{Rt}) can be thought of as having a multivariate Poisson
distribution. However, it follows that the total number of losses is not itself a
Poisson process, but rather a compound Poisson process:

n_t = \sum_{e=1}^{m} \sum_{r=1}^{n_t^e} \sum_{i=1}^{R} I_{i,r}^e

These shocks cause a certain number of losses in the i-th ET/BL, whose severities
are (X_{ir}^e), r = 1, . . . , n_t^e, where the (X_{ir}^e) are i.i.d. with distribution
function F_{it} and independent of n_t^e.

Dean Fantazzini 28
The Poisson Shock Model

⇒ As should be immediately apparent from the previous discussion, the key point of
this approach is to identify the underlying m Poisson processes: unfortunately,
this field of study is quite recent and more research is needed in this
regard. Moreover, the paucity of data limits any precise identification.

⇒ A simple approach is to identify the m processes with the R risky intersections
(BLs or ETs or both), so that we are back to the standard framework of the LDA
approach. This is the “soft model” proposed in Embrechts and Puccetti (2008)
and later applied to a real OR dataset by Rachedi and Fantazzini (2009).

Embrechts and Puccetti (2008) and Rachedi and Fantazzini (2009) allow for
positive/negative dependence among the shocks (n_{it}) and also among the loss
severities (X_{ij}), but the numbers of shocks and the loss severities are
independent of each other:

H^f(n_{1t}, . . . , n_{Rt}) = C^f(F(n_{1t}), . . . , F(n_{Rt}))
H^s(X_{1j}, . . . , X_{Rj}) = C^s(F(X_{1j}), . . . , F(X_{Rj}))
H^f ⊥ H^s

Dean Fantazzini 29
The Poisson Shock Model

Equivalently, if we use the mean loss for the period, i.e. s_{it}, we have

H^f(n_{1t}, . . . , n_{Rt}) = C^f(F(n_{1t}), . . . , F(n_{Rt}))
H^s(s_{1t}, . . . , s_{Rt}) = C^s(F(s_{1t}), . . . , F(s_{Rt}))
H^f ⊥ H^s

The operative procedure of this approach is the following:

1. Fit the frequency and severity distributions as in the standard LDA
approach, and compute the corresponding cumulative distribution functions.

2. Fit a copula C^f to the frequency c.d.f.’s (see Remark 1 below for an
important caveat on this issue).

3. Fit a copula C^s to the severity c.d.f.’s.

4. Generate a random vector u^f = (u^f_{1t}, . . . , u^f_{Rt}) from the copula C^f.

5. Invert each component u^f_{it} with the respective inverse distribution function
F^{-1}(u^f_{it}), to determine a random vector (n_{1t}, . . . , n_{Rt}) describing the
number of loss observations.

Dean Fantazzini 30
The Poisson Shock Model

6. Generate a random vector u^s = (u^s_1, . . . , u^s_R) from the copula C^s.

7. Invert each component u^s_i with the respective inverse distribution function
F^{-1}(u^s_i), to determine a random vector (X_{1j}, . . . , X_{Rj}) describing the loss
severities.

8. Convolve the frequencies’ vector (n_{1t}, . . . , n_{Rt}) with the severities’ vector
(X_{1j}, . . . , X_{Rj}).

9. Repeat the previous steps a great number of times, e.g. 10^6 times.

In this way it is possible to obtain a new matrix of aggregate losses which can
then be used to compute the usual risk measures such as the VaR and ES.

Note: copula modelling for discrete marginals is an open problem, see Genest and
Nešlehová (2007, “A primer on copulas for count data”, Astin Bulletin), for a
recent discussion. Therefore, some care has to be taken when considering the
estimated risk measures.
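A minimal sketch of steps 4-8 above for two risk types, assuming Gaussian copulas
for C^f and C^s with illustrative correlations, Poisson frequencies and Gamma
severities; for brevity the mean-loss formulation is used, i.e. each period's
count is multiplied by one dependent severity draw:

import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
N_SIM = 100_000
rho_f, rho_s = 0.2, 0.4                                  # illustrative copula correlations
lam = np.array([0.08, 0.33])                             # Poisson intensities
shape, scale = np.array([0.2, 0.2]), np.array([759_717, 230_817])

def gaussian_copula_uniforms(rho, n):
    cov = np.array([[1.0, rho], [rho, 1.0]])
    return stats.norm.cdf(rng.multivariate_normal([0.0, 0.0], cov, size=n))

u_f = gaussian_copula_uniforms(rho_f, N_SIM)             # step 4
u_s = gaussian_copula_uniforms(rho_s, N_SIM)             # step 6
n_losses = stats.poisson.ppf(u_f, lam).astype(int)       # step 5: invert the frequencies
sev = stats.gamma.ppf(u_s, shape, scale=scale)           # step 7: invert the severities
agg = (n_losses * sev).sum(axis=1)                       # step 8: convolve and aggregate

var999 = np.quantile(agg, 0.999)
es999 = agg[agg >= var999].mean()
print(f"VaR(99.9%) = {var999:,.0f}   ES(99.9%) = {es999:,.0f}")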

Dean Fantazzini 31
The Poisson Shock Model

Remark 1: Estimating Copulas With Discrete Distributions

According to Sklar (1959), in the case where certain components of the joint
distribution are discrete (as in our case), the copula function is not uniquely
defined on [0, 1]^n, but only on the Cartesian product of the ranges of the n
marginal distribution functions.

Two approaches have been proposed to overcome this problem. The first method
has been proposed by Cameron et al. (2004) and is based on finite-difference
approximations of the derivatives of the copula function,

f(x_1, . . . , x_n) = \Delta_n . . . \Delta_1 C(F(x_1), . . . , F(x_n))

where \Delta_k, for k = 1, . . . , n, denotes the first-order differencing operator for
the k-th component, defined through

\Delta_k C[F(x_1), . . . , F(x_k), . . . , F(x_n)] = C[F(x_1), . . . , F(x_k), . . . , F(x_n)] − C[F(x_1), . . . , F(x_k − 1), . . . , F(x_n)]
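A minimal sketch of this construction for the bivariate case (n = 2), with a
Gaussian copula and Poisson margins chosen purely for illustration; the double
difference Δ_2 Δ_1 reduces to the familiar four-term rectangle formula:

import numpy as np
from scipy import stats

rho, lam1, lam2 = 0.3, 0.08, 0.33                  # illustrative parameters
biv = stats.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]])

def C(u, v):
    """Bivariate Gaussian copula cdf; C(u, 0) = C(0, v) = 0 handled explicitly."""
    if u <= 0.0 or v <= 0.0:
        return 0.0
    return float(biv.cdf([stats.norm.ppf(u), stats.norm.ppf(v)]))

def joint_pmf(x1, x2):
    """P(X1 = x1, X2 = x2) via the double first-order difference of the copula."""
    F1 = lambda x: stats.poisson.cdf(x, lam1)
    F2 = lambda x: stats.poisson.cdf(x, lam2)
    return (C(F1(x1), F2(x2)) - C(F1(x1 - 1), F2(x2))
            - C(F1(x1), F2(x2 - 1)) + C(F1(x1 - 1), F2(x2 - 1)))

print(joint_pmf(0, 0), joint_pmf(1, 0))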

Dean Fantazzini 32
The Poisson Shock Model

The second method is the continuization method suggested by Stevens (1950) and
Denuit and Lambert (2005): artificially continued variables x*_1, . . . , x*_n are
generated by adding independent random variables u_1, . . . , u_n, each uniformly
distributed on [0, 1], to the discrete count variables x_1, . . . , x_n; this
transformation does not change the concordance measures between the variables.

⇒ The empirical literature clearly shows that maximization of likelihood with


discrete margins often runs into computational difficulties, reflected in the failure
of the algorithm to converge.

⇒ In such cases, it may be helpful to first apply the continuization
transformation and then estimate a model based on copulas for continuous
variables. This is why we advise relying on the second method.
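A minimal sketch of the continuization step on two hypothetical count series; the
Gaussian-copula correlation is then estimated from the normal scores of the
jittered pseudo-observations (a moment-type shortcut rather than full maximum
likelihood):

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
T = 72
x1 = rng.poisson(0.5, size=T)                    # hypothetical monthly counts, risk 1
x2 = rng.poisson(0.8, size=T)                    # hypothetical monthly counts, risk 2

# Continuization: add independent U[0,1) noise to each discrete count.
x1_star = x1 + rng.uniform(size=T)
x2_star = x2 + rng.uniform(size=T)

# Pseudo-observations of the continued variables and normal-score correlation.
u1 = stats.rankdata(x1_star) / (T + 1)
u2 = stats.rankdata(x2_star) / (T + 1)
rho_hat = np.corrcoef(stats.norm.ppf(u1), stats.norm.ppf(u2))[0, 1]
print(f"estimated Gaussian copula correlation: {rho_hat:.3f}")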

Dean Fantazzini 33
The Poisson Shock Model

Remark 2: EVT for Modelling Severities

In short, EVT states that the distribution of the losses exceeding a given high
threshold u converges asymptotically to the Generalized Pareto Distribution
(GPD), whose cumulative distribution function is usually expressed as follows:

GPD_{ξ,β}(y) = 1 − (1 + ξ y / β)^{−1/ξ}    if ξ ≠ 0
GPD_{ξ,β}(y) = 1 − exp(−y / β)             if ξ = 0                    (13)

where y = x − u, with y ≥ 0 if ξ ≥ 0 and 0 ≤ y ≤ −β/ξ if ξ < 0; the y are called
excesses, whereas the x are the exceedances.

It is possible to determine the conditional distribution function of the excesses
y as a function of x = u + y:

F_u(y) = P(X − u ≤ y | X > u) = \frac{F_X(x) − F_X(u)}{1 − F_X(u)}                    (14)
In these representations the parameter ξ is crucial: when ξ = 0 we have an
Exponential distribution; when ξ < 0 we have a Pareto Distribution - II Type and
when ξ > 0 we have a Pareto Distribution - I Type.

Dean Fantazzini 34
The Poisson Shock Model

Moreover, this parameter has a direct connection with the existence of finite
moments of the loss distributions. We have that

E[X^k] = ∞    if k ≥ 1/ξ

Hence, in the case of a GPD of Pareto Type I, when ξ ≥ 1 we have infinite-mean
models.

Di Clemente and Romano (2004) and Rachedi and Fantazzini (2009) suggest
modelling the mean loss severity s_{it} using the lognormal distribution for the
body of the distribution and EVT for the tail, in the following way:

F_i(s_{it}) = \Phi\left(\frac{\ln s_{it} − \mu_i}{\sigma_i}\right)                                                for 0 < s_{it} < u_i
F_i(s_{it}) = 1 − \frac{N_{u,i}}{N_i}\left(1 + \xi_i \frac{s_{it} − u_i}{\beta_i}\right)^{−1/\xi_i}    for s_{it} ≥ u_i                    (15)

where \Phi is the standard normal cumulative distribution function, N_{u,i} is the
number of losses exceeding the threshold u_i, N_i is the number of loss data
observed in the i-th ET, whereas \beta_i and \xi_i denote the scale and the shape
parameters of the GPD, respectively.
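A minimal sketch of the spliced c.d.f. in (15) for one event type; the threshold,
the sample counts and all parameters below are placeholders, not values taken
from the paper:

import numpy as np
from scipy import stats

mu_i, sigma_i = 9.5, 1.4          # lognormal body parameters (placeholders)
u_i = 50_000.0                    # EVT threshold (placeholder)
xi_i, beta_i = 0.4, 30_000.0      # GPD shape and scale for the tail (placeholders)
N_i, Nu_i = 400, 35               # number of losses and of threshold exceedances

def severity_cdf(s):
    """Spliced cdf: lognormal body below u_i, GPD tail above, as in equation (15)."""
    s = np.asarray(s, dtype=float)
    body = stats.norm.cdf((np.log(s) - mu_i) / sigma_i)
    z = 1.0 + xi_i * (np.maximum(s, u_i) - u_i) / beta_i      # >= 1 on the tail branch
    tail = 1.0 - (Nu_i / N_i) * z ** (-1.0 / xi_i)
    return np.where(s < u_i, body, tail)

print(severity_cdf([10_000, 50_000, 500_000]))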

Dean Fantazzini 35
The Poisson Shock Model

For example, the graphical analysis for the ET3 in Rachedi and Fantazzini
(2009) reported in Figures 1-2 clearly shows that operational risk losses are
characterized by high frequency – low severity and low frequency – high
severity losses.

⇒ Hence the behavior of losses is twofold: one process underlying small


and frequent losses and another one underlying jumbo losses.

⇒ Splitting the model in two parts allows us to estimate the impact of


such extreme losses in a more robust way.

Dean Fantazzini 36
The Poisson Shock Model

Figure 1: Scatter plot of ET3 losses. The dotted lines represent, respectively,
the mean and the 90%, 95% and 99.9% empirical quantiles.

Dean Fantazzini 37
The Poisson Shock Model

Figure 2: Histogram of ET3 losses

Dean Fantazzini 38
The Poisson Shock Model

Rachedi and Fantazzini (2009) analyzed a large dataset consisting of 6 years
of loss observations, from 2002 to 2007, containing the data of seven
ETs.

⇛ They compared the comonotonic approach proposed by Basel II, the
canonical aggregation model via copulas, and the Poisson shock model.
The resulting total operational risk capital charge for the three models is
reported below:

                                  VaR (99.9%)    ES (99.9%)
Comonotonic                           308,861       819,325
Copula (Canonical aggregation)        273,451       671,577
Shock Model                           231,790       655,460

Table 2: VaR and ES final estimates

Dean Fantazzini 39
Bayesian Approaches

An important limitation of the advanced measurement approaches (AMAs)
is the inaccuracy and scarcity of data, which is basically due to the relatively
recent definition and management of operational risk.

This makes the process of data recovery generally more difficult, since
financial institutions only started to collect operational loss data a few
years ago.

⇒ In this context, the employment of Bayesian and simulation methods


appears to be a natural solution to the problem.

⇒ In fact, they allow us to combine the use of quantitative information


(coming from the time series of losses collected by the bank) and
qualitative data (coming from experts’ opinions), taking the form of prior
information.

Dean Fantazzini 40
Bayesian Approaches

Besides, simulation methods represent a widely used statistical tool to
overcome computational problems. The combination of the described
methodologies leads to Markov chain Monte Carlo (MCMC) methods,
which combine the main advantages of both Bayesian and simulation
methods.

⇒ Interesting Bayesian approaches for the marginal loss distributions have been
recently proposed in Dalla Valle and Giudici (2008), and Bayesian copulas in
Dalla Valle (2008). We refer the reader to those papers for more details.

⇒ A word of caution: these methods work fine if genuine prior
information (such as experts’ opinions) is available.

⇒ If, instead, the prior is chosen merely to “close” the model, the resulting
estimates may be very poor or unrealistic (also generating numerical
errors), as clearly reported in Tables 12-14 in Dalla Valle and Giudici
(2008), where some ES estimates are of the order of 10^27 or even 10^39!

Dean Fantazzini 41
References

Basel Committee on Banking Supervision (1998). Amendment to the Capital Accord


to Incorporate Market Risks, Basel.
Basel Committee on Banking Supervision (2003). The 2002 loss data collection
exercise for operational risk: summary of the data collected, Bank for International
Settlements document.
Basel Committee on Banking Supervision (2005). Basel II: International
Convergence of Capital Measurement and Capital Standards: a Revised Framework,
Bank for International Settlements document.
Cameron, C., Li, T., Trivedi, P., and Zimmer, D. (2004). Modelling the Differences in
Counted Outcomes Using Bivariate Copula Models with Application to Mismeasured
Counts, Econometrics Journal, 7, 566-584.
Cruz, M.G. (2002). Modeling, Measuring and Hedging Operational Risk. Wiley, New
York.
Dalla Valle, L., and Giudici, P. (2008). A Bayesian approach to estimate the
marginal loss distributions in operational risk management, Computational Statistics
and Data Analysis, 52, 3107-3127.
Dalla Valle, L. (2008). Bayesian Copulae Distributions, with Application to
Operational Risk Management, Methodology and Computing in Applied Probability,
11(1), 95-115.
Denuit, M. and Lambert, P. (2005). Constraints on Concordance Measures in
Bivariate Discrete Data, Journal of Multivariate Analysis, 93 , 40-57.

Dean Fantazzini 42
References

Di Clemente, A., and Romano, C. (2004). A Copula-Extreme Value Theory Approach


for Modelling Operational Risk. In: Operational Risk Modelling and Analysis:
Theory and Practice, Risk Books, London.
Embrechts, P., and Puccetti, G. (2008). Aggregating operational risk across matrix
structured loss data, Journal of Operational Risk, 3(2), 29-44.
Fantazzini, D., L. Dallavalle and P. Giudici (2007). Empirical Studies with
Operational Loss Data: DallaValle, Fantazzini and Giudici Study. In: Operational
Risk: A Guide to Basel II Capital Requirements, Models, and Analysis, Wiley, New
Jersey.
Fantazzini, D., Dallavalle, L. and P. Giudici (2008). Copulae and operational risks,
International Journal of Risk Assessment and Management, 9(3), 238-257.
Lindskog, F. and McNeil, A. (2003). Common Poisson shock models: applications
to insurance and credit risk modelling, ASTIN Bulletin, 33(2), 209-238.
Rachedi, O., and Fantazzini, D. (2009). Multivariate Models for Operational Risk: A
Copula Approach using Extreme Value Theory and Poisson Shock Models, In:
Operational Risk towards Basel III: Best Practices and Issues in Modelling,
Management and Regulation, 197-216, Wiley, New York.
Stevens, W. L. (1950). Fiducial Limits of the parameter of a discontinuous
distribution, Biometrika, 37, 117-129.

⇛ ... the book I’m writing with prof. Aivazian (CEMI)... STAY TUNED!

Dean Fantazzini 43
