
C Formula Sheet

Probability

f(x) = F′(x)
F(x) = ∫ f(x) dx
F(x) = 1 − S(x)
S(x) = 1 − F(x)
S(x) = e^(−H(x))
H(x) = −ln(S(x))
h(x) = H′(x)
H(x) = ∫ h(x) dx
f(x) = −S′(x)
h(x) = f(x) / S(x)

E[X] = Σ x p(x) or ∫ x f(x) dx

μ′_n = E[X^n]  (nth Raw Moment)
μ_n = E[(X − μ)^n]  (nth Central Moment)
μ′_1 = μ is the mean
μ_2 = μ′_2 − μ² = σ² is the variance
μ_3 = μ′_3 − 3μ′_2 μ + 2μ³
μ_4 = μ′_4 − 4μ′_3 μ + 6μ′_2 μ² − 3μ⁴

Var(X) = σ² = μ_2
Standard Deviation = σ
Skewness is γ_1 = μ_3 / σ³
  = 0 if X is symmetric
  Skew(X) = Skew(cX)
Kurtosis is γ_2 = μ_4 / σ⁴
Coefficient of Variation is σ / μ

Combinations: C(n, k) = n! / (k!(n − k)!)

Percentiles
A 100p-th percentile of a random variable X is a number π_p satisfying:
Pr(X < π_p) ≤ p
Pr(X ≤ π_p) ≥ p

Conditional Probability
Pr(A | B) = Pr(A ∩ B) / Pr(B) = Pr(B | A) Pr(A) / Pr(B)

Scaling
To scale a lognormal:
X ~ lognormal(μ, σ)
cX ~ lognormal(μ + ln c, σ)
Let Y = cX
Then F_Y(y) = Pr(Y ≤ y) = Pr(cX ≤ y) = Pr(X ≤ y/c) = F_X(y/c)

Variance

E[XY] = E[X] E[Y] if independent
E[aX + bY] = aE[X] + bE[Y]
Var(aX + b) = a² Var(X)
Var(aX + bY) = a² Var(X) + 2ab Cov(X, Y) + b² Var(Y)
Var(X) = E[X²] − E(X)²
Var(X) = Var(1 − X)
Cov(X, Y) = E[XY] − E(X) E(Y)
Cov(A + B, C + D) = Cov(A, C) + Cov(A, D) + Cov(B, C) + Cov(B, D)
Cov(A, A) = Var(A)
Cov(A, B) = 0 if independent
ρ_XY = Cov(X, Y) / (σ_X σ_Y)  (Correlation Coefficient)
Bernoulli Shortcut: For any RV with only 2 values a and b:
Var(X) = (a − b)² pq
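The Bernoulli shortcut above can be sanity-checked numerically. This is a minimal sketch (the values 10, 4, 0.3 are arbitrary illustrations, not from the sheet):

```python
def var_direct(a, b, p):
    """Var(X) from first principles for X = a w.p. p, X = b w.p. q = 1 - p."""
    q = 1 - p
    mean = a * p + b * q
    return (a**2 * p + b**2 * q) - mean**2

def var_shortcut(a, b, p):
    """Bernoulli shortcut: Var(X) = (a - b)^2 * p * q."""
    return (a - b)**2 * p * (1 - p)
```

The two agree for any two-point distribution, which is why the shortcut is safe to use on exam problems with two-valued random variables.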

Parametric Distributions

Distribution: f(x); moments

Binomial (n trials): f(k) = C(n, k) p^k (1 − p)^(n−k);  E(X) = np;  Var(X) = npq
Bernoulli (1 trial): f(k) = p^k (1 − p)^(1−k);  E(X) = p;  Var(X) = pq
Uniform (continuous on [d, u]): f(x) = 1/(u − d);  F(x) = (x − d)/(u − d);
  E(X) = (d + u)/2;  Var(X) = (u − d)²/12;  E[X²] = u²/3 if d = 0
Beta: f(x) = c x^(a−1) (θ − x)^(b−1);  E(X) = θa/(a + b);  Var(X) = θ² ab / ((a + b)²(a + b + 1))
Exponential (memoryless): f(x) = (1/θ) e^(−x/θ);  E(X) = θ;  Var(X) = θ²
Weibull: f(x) = c x^(τ−1) e^(−(x/θ)^τ)
Single-Parameter Pareto: f(x) = αθ^α / x^(α+1), x > θ;  E(X) = αθ/(α − 1);  E[X²] = αθ²/(α − 2)
Lognormal: f(x) = c e^(−(ln x − μ)²/(2σ²)) / x;  E(X) = e^(μ + 0.5σ²);  E[X²] = e^(2μ + 2σ²);  Median = e^μ
Gamma: E(X) = αθ;  Var(X) = αθ²
Two-Parameter Pareto: f(x) = αθ^α / (x + θ)^(α+1);  E(X) = θ/(α − 1);  E[X²] = 2θ² / ((α − 1)(α − 2))

Frailty

Let h(x | Λ) = Λ a(x)
If a(x) is a constant then the conditional hazard rate is exponential, otherwise Weibull
Λ can be Gamma or Inverse Gaussian
A(x) = ∫₀^x a(t) dt
H(x | Λ) = Λ A(x)
S(x | Λ) = e^(−Λ A(x))
S(x) = M_Λ(−A(x))  (use the MGF of Λ)

With a Gamma frailty:
Exponential a(x) leads to a Pareto distribution
Weibull a(x) leads to a Burr distribution
Conditional Variance
Var(X) = E[Var(X | I)] + Var(E[X | I])
E[X] = E[E[X | I]] = Σ P(A_i) E[X | A_i]
E[X²] = E[E[X² | I]]
Var(X) ≥ E[Var(X | I)]

Splices

1) The sum of the functions must integrate to 1
2) To be continuous, the functions must be equal at the break point
Shifting
If f(x) = 3e^(−3(x − 1)), x > 1,
the mean = 1/3 + 1 (the mean of the unshifted exponential plus the shift)

Policy Limits
X ^ d is the LIMITED EXPECTED VALUE
X ^ u = X if X < u;  u if X ≥ u
(Cost to Customer)

Definition: E[X^k] = ∫ x^k f(x) dx = ∫ k x^(k−1) S(x) dx

E(X) = ∫ x f(x) dx = ∫₀^∞ S(x) dx

E(X ^ d) = ∫₀^d x f(x) dx + d S(d) = ∫₀^d S(x) dx

E[(X ^ d)^k] = ∫₀^d x^k f(x) dx + d^k S(d) = ∫₀^d k x^(k−1) S(x) dx

If Y = (1 + r)X then E[Y ^ d] = (1 + r) E[X ^ d/(1 + r)]
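As a numerical check of the identity E(X ^ d) = ∫₀^d S(x) dx, here is a hedged sketch for an exponential severity, for which the closed form is θ(1 − e^(−d/θ)). The parameter values are illustrative only:

```python
import math

def lev_exponential(theta, d):
    """Closed form for an exponential: E[X ^ d] = theta * (1 - exp(-d/theta))."""
    return theta * (1 - math.exp(-d / theta))

def lev_numeric(theta, d, steps=200000):
    """E[X ^ d] = integral_0^d S(x) dx, approximated with the midpoint rule."""
    h = d / steps
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) * h            # midpoint of each subinterval
        total += math.exp(-x / theta) * h   # S(x) for the exponential
    return total
```

Both routes give the same limited expected value, confirming the survival-function form of the definition.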

Deductibles
Ordinary Deductible of d pays max(0, X − d)
For d = 500, a loss ≤ 500 pays nothing; a loss of 700 pays 200
Franchise Deductible of d pays nothing if the loss is less than d, and the full amount if the loss is > d

Payment per Loss with Deductible

Ordinary Deductible
Payment from Ins. Co. = 0 if X ≤ d;  X − d if X > d  (written (X − d)₊)

E[(X − d)₊] = ∫_d^∞ (x − d) f(x) dx = ∫_d^∞ S(x) dx = E[X] − E[X ^ d]

E[(X − d)₊^k] = ∫_d^∞ (x − d)^k f(x) dx

Payment per Payment with Deductible

Ordinary Deductible
Y^P = X − d | X > d
F_{Y^P}(x) = (F_X(x + d) − F_X(d)) / (1 − F_X(d))
S_{Y^P}(x) = S_X(x + d) / S_X(d)

e(d) = E[X − d | X > d] = E[Y^P]
e(d) = E[(X − d)₊] / S(d)
     = ∫_d^∞ (x − d) f(x) dx / S(d)
     = (E[X] − E[X ^ d]) / S(d)
     = ∫_d^∞ S(x) dx / S(d)

e(d) = Mean Excess Loss

E[X] = E[X ^ d] + E[(X − d)₊]
       (pmt from customer + pmt from ins. co.)

E[X ^ d] = E[X ^ d | X ≤ d] Pr(X ≤ d) + E[X ^ d | X > d] Pr(X > d)
         = (Average loss below d) Pr(X ≤ d) + d Pr(X > d)
Average loss < d Pr X d d Pr X d
Franchise Deductibles

Expected Payment per Loss = E[(X − d)₊] + d S(d)
Expected Payment per Payment = e(d) + d

Special Cases for e(d)

Distribution: e(d)
Exponential: θ
Uniform(0, θ): (θ − d)/2
2-Parameter Pareto: (θ + d)/(α − 1)
1-Parameter Pareto (d ≥ θ): d/(α − 1)

If X ~ Uniform(0, θ), then X − d | X > d ~ Uniform(0, θ − d)
If X ~ Pareto(α, θ), then X − d | X > d ~ Pareto(α, θ + d)
If X ~ 1-Parameter Pareto(α, θ) and d ≥ θ, then X − d | X > d ~ Pareto(α, d)


Loss Elimination Ratio

LER(d) = E[X ^ d] / E[X]
LER(d) = Expected % of loss not included in the payment

Special Cases for LER(d)

Distribution: LER(d)
Exponential: 1 − e^(−d/θ)
2-Parameter Pareto: 1 − (θ/(θ + d))^(α−1)
1-Parameter Pareto (d ≥ θ): 1 − (1/α)(θ/d)^(α−1)
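The closed-form LER expressions can be cross-checked against the definition LER(d) = (∫₀^d S(x) dx)/E[X]. A hedged sketch (parameter values are arbitrary illustrations):

```python
import math

def ler_exponential(theta, d):
    """Exponential: LER(d) = 1 - e^(-d/theta)."""
    return 1 - math.exp(-d / theta)

def ler_pareto(alpha, theta, d):
    """Two-parameter Pareto: LER(d) = 1 - (theta/(theta + d))^(alpha - 1)."""
    return 1 - (theta / (theta + d)) ** (alpha - 1)

def ler_numeric(survival, mean, d, steps=100000):
    """Generic check: LER(d) = (integral_0^d S(x) dx) / E[X], midpoint rule."""
    h = d / steps
    area = sum(survival((i + 0.5) * h) * h for i in range(steps))
    return area / mean
```

For a Pareto with α = 3, θ = 1000 (so E[X] = 500) the closed form and the integral agree.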

Properties of Risk Measures

Translation Invariance: ρ(X + c) = ρ(X) + c
Positive Homogeneity: ρ(cX) = c ρ(X)
Subadditivity: ρ(X + Y) ≤ ρ(X) + ρ(Y)
Monotonicity: ρ(X) ≤ ρ(Y) if X ≤ Y
A coherent risk measure satisfies all 4 properties
VaR_p fails subadditivity
TVaR_p is coherent
E[X] is coherent

Value-at-Risk
VaR_p(X) = π_p = F_X^(−1)(p)
VaR_0.99 is the 99th percentile

Tail-Value-at-Risk
TVaR_p(X) = E[X | X > VaR_p(X)]
          = ∫_{F_X^(−1)(p)}^∞ x f(x) dx / (1 − p)
          = (1/(1 − p)) ∫_p^1 VaR_y(X) dy
          = VaR_p(X) + e_X(VaR_p(X))

Distribution: VaR_p(X); TVaR_p(X)
Normal: μ + z_p σ;  μ + σ φ(z_p)/(1 − p), where φ(x) = e^(−0.5x²)/√(2π)
Lognormal: e^(μ + z_p σ);  E[X] Φ(σ − z_p)/(1 − p)

If given a mixture, use the survival function and solve for x

Maximum Covered Loss

For deductible d and maximum covered loss u:
Payment per Loss = 0 if X ≤ d;  X − d if d < X ≤ u;  u − d if X > u

E[Payment per Loss] = E[X ^ u] − E[X ^ d]
                    = ∫_d^u S(x) dx
                    = E[(X − d)₊] − E[(X − u)₊]

Policy Limit: the maximum amount the coverage will pay
- If Policy Limit = 10,000 and d = 500
- Pays 10,000 for a loss of 10,500 or higher
- Pays (Loss − 500) for losses b/w 500 and 10,500
Maximum Covered Loss: the maximum loss amount that is covered
- If MCL = 10,000 and d = 500
- Pays 9,500 for a loss of 10,000 or higher
- Pays (Loss − 500) for losses b/w 500 and 10,000

Coinsurance

E[Payment per Loss] = c (E[X ^ u] − E[X ^ d])
where c = coinsurance, u = MCL, d = deductible

Coinsurance of 80% means that the insurance pays 80% of the costs

Inflation

E[Payment per Loss] = (1 + r) ( E[X ^ u/(1 + r)] − E[X ^ d/(1 + r)] )

Variance of Payment per Loss with a deductible
X = Loss RV
Y^L = payment per loss RV
E[Y^L] = 0 · Pr(X ≤ d) + E[Y^P] Pr(X > d)
Var(Y^L) = E[Var(Y^L | Case)] + Var(E[Y^L | Case])
         = 0 · Pr(X ≤ d) + Var(Y^P) Pr(X > d) + (E[Y^P] − 0)² Pr(X ≤ d) Pr(X > d)

Bonus
Pay a bonus of 50% of (500 − X) if X < 500
B = 0.5 Max(0, 500 − X)
  = 0.5 Max(500 − 500, 500 − X)
  = 0.5 (500 − Min(500, X))
  = 0.5 (500 − X ^ 500)
E[B] = 250 − 0.5 E[X ^ 500]

Discrete Distributions
The (a,b,0) class

Distribution: a; b; Variance vs. Mean
Poisson: a = 0;  b = λ;  Variance = Mean
Negative Binomial (Geometric is NB with r = 1): a = β/(1 + β);  b = (r − 1)β/(1 + β);  Variance > Mean
Binomial: a = −q/(1 − q);  b = (m + 1)q/(1 − q);  Variance < Mean

Geometric Distribution (memoryless): Pr(N = n) = β^n / (1 + β)^(n+1)

A sum of n independent Negative Binomial random variables having the same β
and parameters r_1, ..., r_n has a Negative Binomial distribution with parameters
β and Σ_{i=1}^n r_i

A sum of n independent Binomial random variables having the same q
and parameters m_1, ..., m_n has a Binomial distribution with parameters q
and Σ_{i=1}^n m_i

p_k / p_{k−1} = a + b/k,  k = 1, 2, ...

E[N] = (a + b) / (1 − a)
Var(N) = (a + b) / (1 − a)²

C(x, n) = x(x − 1) ··· (x − n + 1) / n!
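The (a,b,0) recursion p_k = p_{k−1}(a + b/k) can be sketched in a few lines; with a = 0, b = λ it reproduces the Poisson pmf, which makes a convenient self-test:

```python
import math

def ab0_pmf(a, b, p0, kmax):
    """Generate p_0..p_kmax from the (a,b,0) recursion p_k = p_{k-1} * (a + b/k)."""
    p = [p0]
    for k in range(1, kmax + 1):
        p.append(p[-1] * (a + b / k))
    return p

# Poisson(lambda): a = 0, b = lambda, p0 = e^{-lambda}
lam = 2.0
probs = ab0_pmf(0.0, lam, math.exp(-lam), 20)
```

The recursion matches the direct Poisson probabilities term by term, and the generated probabilities sum to 1 up to a negligible tail.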

Probability Generating Functions
p_n = P^(n)(0) / n!
p_0 = P(0)
μ_(n) = P^(n)(1)  (nth factorial moment)
P′(1) = E[X]
P″(1) = E[X(X − 1)]
P‴(1) = E[X(X − 1)(X − 2)]
If given a primary and secondary pgf, substitute the secondary
pgf for z in the primary pgf.

The (a,b,1) class

p_k / p_{k−1} = a + b/k,  k = 2, 3, 4, ...

Zero-Truncated Distributions

p₀^T = 0
p_k^T = p_k / (1 − p₀)

Zero-Modified Distributions

p₀^M ≥ 0
p_k^M = ((1 − p₀^M) / (1 − p₀)) p_k = (1 − p₀^M) p_k^T

E[M] = ((1 − p₀^M) / (1 − p₀)) E[Orig] = (1 − p₀^M) E[Zero-Truncated]

E[N] = cm
Var(N) = c(1 − c)m² + cv
c = 1 − p₀^M
m is the mean of the corresponding zero-truncated distribution
v is the variance of the corresponding zero-truncated distribution

Sibuya
ETNB with −1 < r < 0, taking the limit as β → ∞
a = 1
b = r − 1
p₁^T = −r
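The zero-modification rules can be sketched numerically, assuming a Poisson base distribution (the λ and p₀^M values are arbitrary illustrations):

```python
import math

def zero_modified_pmf(p, p0M):
    """Zero-modify a base pmf p[0..K]: p_0 -> p0M, and each p_k (k >= 1)
    scales by (1 - p0M) / (1 - p[0])."""
    c = (1 - p0M) / (1 - p[0])
    return [p0M] + [c * pk for pk in p[1:]]

lam = 1.5
base = [math.exp(-lam) * lam ** k / math.factorial(k) for k in range(30)]
zm = zero_modified_pmf(base, 0.4)
mean_base = sum(k * pk for k, pk in enumerate(base))
mean_zm = sum(k * pk for k, pk in enumerate(zm))
```

The modified pmf still sums to 1, and the means satisfy E[M] = (1 − p₀^M)/(1 − p₀) · E[Orig] as stated above.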

Poisson/Gamma
The Negative Binomial is a Gamma mixture of Poissons
N | λ ~ Poisson(λ)
λ ~ Gamma(α, θ)
Negative Binomial r = Gamma α
Negative Binomial β = Gamma θ
Gamma: Mean αθ;  Variance αθ²
Negative Binomial: Mean rβ;  Variance rβ(1 + β)
Negative Binomial with r = 1 is Geometric
Gamma with α = 1 is Exponential
Weibull with τ = 1 is Exponential
Var(N) = Var(λ) + E[λ], where λ ~ Gamma

Coverage Modifications

Frequency Model: Original Parameters (Exposure n₁, Pr(X > 0) = 1);
Exposure Modification (Exposure n₂, Pr(X > 0) = 1);
Coverage Modification (Exposure n₁, Pr(X > 0) = v)

Poisson: λ  →  (n₂/n₁)λ  →  vλ
Binomial: m, q  →  (n₂/n₁)m, q  →  m, vq
Negative Binomial: r, β  →  (n₂/n₁)r, β  →  r, vβ

(a,b,0) and (a,b,1) adjustments:

(1 − p₀^{M*}) / (1 − p₀*) = (1 − p₀^M) / (1 − p₀)
where * indicates revised parameters

Aggregate Loss Models

Compound Variance

S = aggregate losses
N = frequency RV
X = severity RV
E[S] = E[N] E[X]
Var(S) = E[N] Var(X) + Var(N) E[X]²
Can only be used when N and X are independent

Var(S) = λ E[X²] if primary distribution (# claims) is Poisson

Collective Risk vs Individual Risk

Convolution Method

p_n = Pr(N = n) = f_N(n)
f_n = Pr(X = n) = f_X(n)
g_n = Pr(S = n) = f_S(n)
F_S(x) = Σ_{n ≤ x} g_n
g_n = Σ_k p_k Σ_{i₁+···+i_k = n} f_{i₁} f_{i₂} ··· f_{i_k}

When given severity distributions with Pr(X = 0) > 0:
1) Modify the frequency to eliminate 0
2) Adjust the severity probabilities after removing 0
2) Adjust the severity probabilities after removing 0

Aggregate Deductibles

Assume severity is discrete, with support on multiples of h
d = stop-loss or reinsurance deductible
Assume Premium = E[(S − d)₊]
E[(S − d)₊] = E[S] − E[S ^ d]  (Net Stop-Loss Premium)

Method 1 - Definition of E[S ^ d]
E[S ^ d] = Σ_{j=0}^{u} (hj) g_{hj} + d Pr(S ≥ d)
where u is the largest j with hj < d (the sum runs over all multiples of h less than d)

Method 2 - Integrate the Survival Function
E[S ^ d] = Σ_{j=0}^{u−1} h S(hj) + (d − hu) S(hu)
where hu is the largest multiple of h less than d
To find E[S ^ 2.8] where x can be 2, 4, 6 or 8...
Method 1
E[S ^ 2.8] = 0 · Pr(S = 0) + 2 · Pr(S = 2) + 2.8 · Pr(S > 2)
           = 0 · g(0) + 2 · g(2) + 2.8 (1 − g(0) − g(2))
E[S ^ 4] = 0 · Pr(S = 0) + 2 · Pr(S = 2) + 4 · Pr(S ≥ 4)
         = 0 · g(0) + 2 · g(2) + 4 (1 − g(0) − g(2))
Method 2
E[S ^ 2.8] = 2 · Pr(S > 0) + 0.8 · Pr(S > 2)
  2 = distance between values of x
  0.8 = distance b/w the highest value of x below d, and d
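The two methods for E[S ^ d] can be coded and shown to agree. A hedged sketch (the aggregate pmf g is an arbitrary illustration on multiples of h = 2):

```python
def lev_method1(g, d):
    """Method 1: E[S ^ d] = sum_{s < d} s * Pr(S=s) + d * Pr(S >= d)."""
    return (sum(s * p for s, p in g.items() if s < d)
            + d * sum(p for s, p in g.items() if s >= d))

def lev_method2(g, d, h):
    """Method 2: step through multiples of h below d, adding
    (width of step) * Pr(S > grid point)."""
    total, point = 0.0, 0.0
    while point < d:
        step = min(h, d - point)
        total += step * sum(p for s, p in g.items() if s > point)
        point += h
    return total

g = {0: 0.5, 2: 0.3, 4: 0.2}   # hypothetical aggregate pmf
```

With d = 2.8 and h = 2, method 2 reduces to 2·Pr(S > 0) + 0.8·Pr(S > 2), exactly the worked example above, and both methods return the same value.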

Aggregate Coverage Modifications

If there is a per-policy deductible and you want Aggregate Payments:
1) Expected Payment per Loss × Expected Number of Losses per Year
E[S] = E[N] E[(X − d)₊]
OR
2) Expected Payment per Payment × Expected Number of Payments per Year
E[S] = E[N′] E[X − d | X > d]
where N′ is the number of positive payments = frequency × Pr(X > d)

1) is better for discrete severity distributions
2) is better if severity is Exponential, Pareto or Uniform
Exact Calculation of Aggregate Loss Distribution
Two cases for which the sum of independent random variables has a simple distribution:
1) Normal Distribution. If the X_i are Normal with mean μ and variance σ², their sum is normal.
2) Exponential or Gamma Distribution. If the X_i are exponential or gamma, their sum has a gamma
distribution.

Normal Distributions
If n random variables X_i are independent and normally distributed with parameters μ and σ²,
their sum is normally distributed with parameters nμ and nσ².
Calculate Pr(S ≤ c | n = 1) using μ and σ²
Calculate Pr(S ≤ c | n = 2) using 2μ and 2σ², etc.
Then multiply each of these probabilities by their respective p₁, p₂, etc.

Exponential and Gamma Distributions

The sum of n exponential random variables with common mean θ
is a Gamma distribution with parameters α = n and θ. When a gamma
distribution's α parameter is an integer, the gamma distribution is called
an Erlang distribution.
In a Poisson Process with parameter λ, the number of events occurring by time t
has a Poisson distribution with mean λt.
In a Poisson Process with parameter λ, the time between events follows
an exponential distribution with mean θ = 1/λ.
In a Poisson Process with parameter λ = 1/θ, the time between events is exponential
with mean θ. Therefore, the time until the nth event occurs is Erlang with parameters n and θ.
The probability of exactly n events occurring before time x is e^(−x/θ) (x/θ)^n / n!

Gamma CDF (α = n an integer)

F_X(x) = 1 − Σ_{j=0}^{n−1} e^(−x/θ) (x/θ)^j / j!

If α = 1: F(x) = 1 − e^(−x/θ)
If α = 2: F(x) = 1 − e^(−x/θ) − (x/θ) e^(−x/θ)

Empirical Models
Bias
bias = E[θ̂] − θ
θ̂ is the estimator, θ is the parameter being estimated
Bias is the expected value of the estimator minus its true value
Estimator is unbiased if bias = 0 for all θ
Estimator is asymptotically unbiased if lim_{n→∞} bias = 0

The sample variance (with division by n − 1) is an unbiased estimator of the variance
The sample mean is an unbiased estimator of the true mean

Consistency (weak consistency)
Definition: Estimator is consistent if lim_{n→∞} Pr(|θ̂_n − θ| < ε) = 1 for all ε > 0
1) An estimator is consistent if it is asymptotically unbiased
   and Var(θ̂_n) → 0 as n → ∞
2) The MLE is always consistent
3) If MSE → 0 then θ̂ is consistent

Mean Square Error
MSE = E[(θ̂ − θ)²]
MSE = Var(θ̂) + bias²
MSE is a function of the true value of the parameter

Complete Data
Grouped Data
f_n(x) = n_j / (n (c_j − c_{j−1}))  where x is in (c_{j−1}, c_j]
n_j = # points in the interval
n = total points
f_n(x) → Histogram
F_n(x) → Ogive

To find the 2nd raw moment, calculate ∫ f_n(x) x² dx
If there is a policy limit (say 8000) and the last interval runs from 5000 to 10,000,
then E[(X ^ 8000)²] would have as its last 2 terms:
∫_{5000}^{8000} f_n(x) x² dx + ∫_{8000}^{10000} f_n(x) 8000² dx

Variance of Empirical Estimators

Binomial:
Variance = mq(1 − q)
If Y = X/m (Binomial Proportion), Variance = q(1 − q)/m

Multinomial:
Variance = m q_i (1 − q_i), i = category
Covariance = −m q_i q_j
If Y = X/m: Variance = q_i (1 − q_i)/m;  Covariance = −q_i q_j / m

Individual Data

Var(S_n(x)) = S(x)(1 − S(x)) / n  if S is known
Var(S_n(x)) = S_n(x)(1 − S_n(x)) / n
Var(S_n(x)) = n_x (n − n_x) / n³, where n_x is the # of survivors past time x
Var(ᵧ₋ₓp̂ₓ | n_x) = Var(ᵧ₋ₓq̂ₓ | n_x) = n_y (n_x − n_y) / n_x³,
  where n_y is the # of survivors past time y

The empirical estimators of S(x) and f(x) are unbiased

Individual Data (variance of an estimator, general approach):
1) Determine the estimator
2) Determine what's random and what's not
3) Write an expression for the estimator with symbols for random variables
4) Calculate the variance of the random variables
5) Calculate the variance for the whole expression
(i.e. Var(aX + bY), Var(aX), etc.)

Kaplan Meier Product Limit

S_n(t) = Π_{i=1}^{j−1} (1 − s_i / r_i)  for t in [y_{j−1}, y_j)

Exponential Extrapolation (beyond the end of the study):
S(t) = S(t₀)^(t/t₀), where t₀ is the end of the study

r_i = risk set
s_i = deaths
d_i = entry time
u_i = withdrawal time
x_i = death time

S(x−) = Pr(X ≥ x)
Shortcut:
(1 − 1/n)(1 − 1/(n − 1)) = 1 − 2/n ≈ (1 − 1/(n − 0.5))²
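The product-limit estimator is easy to sketch in code. The study data below are hypothetical (10 lives, one death at t = 1, two deaths at t = 3 with a withdrawal before then):

```python
def kaplan_meier(events):
    """Product-limit estimate. events: list of (t, s_i, r_i) with death count s_i
    and risk set r_i at each death time t, in increasing order.
    Returns (t, S_n(t)) just past each death time."""
    surv, out = 1.0, []
    for t, s, r in events:
        surv *= 1 - s / r          # multiply in the next factor (1 - s_i/r_i)
        out.append((t, surv))
    return out

km = kaplan_meier([(1, 1, 10), (3, 2, 8)])
```

Here S_n(1) = 1 − 1/10 = 0.9 and S_n(3) = 0.9 · (1 − 2/8) = 0.675, matching a by-hand product-limit calculation.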

Nelson Aalen Estimator

Ĥ(t) = Σ_{i=1}^{j−1} s_i / r_i
Ŝ(x) = e^(−Ĥ(x))

Lives that leave at the same time as a death are in the risk set.
Lives that arrive at the same time as a death are not in the risk set.
Censored lives are in the risk set but are not counted as deaths.
k = # distinct data points

Confidence Intervals
For S(x), the boundaries must be between 0 and 1.
For H(x), the boundaries can be anything.

Calculator Shortcuts

1) Enter s_i / r_i in column 1
2) Enter the formula ln(1 − L1) for column 2
3) Select L1 as first variable and L2 as second
4) Calculate:
   Nelson-Aalen: Ŝ = e^(−Σx)
   Kaplan-Meier: Ŝ = e^(Σy)

Estimation of Related Quantities

If using Kaplan-Meier or Nelson-Aalen methods, E[X ^ d] = area under the curve of S(x):
sum the base × height of each of the rectangles.
Bayes Theorem

P(A₁ | E) = P(E | A₁) P(A₁) / [ P(E | A₁)P(A₁) + P(E | A₂)P(A₂) + ... + P(E | A_n)P(A_n) ]

Greenwood's Approximation of Variance (Kaplan-Meier)

Var(S_n(t)) ≈ S_n(t)² Σ_{y_j ≤ t} s_j / (r_j (r_j − s_j))

Var(S_n(t)) = S_n(x)(1 − S_n(x)) / n  if data is complete (no censoring or truncation)

Variance of Nelson-Aalen Estimator

Var(Ĥ(t)) = Σ_{y_j ≤ t} s_j / r_j²

Linear Confidence Intervals

( S_n(t) − z_{(1+p)/2} √Var(S_n(t)),  S_n(t) + z_{(1+p)/2} √Var(S_n(t)) )

Log-transformed confidence interval for S(t)

( S_n(t)^(1/U),  S_n(t)^U )
where U = exp( z_{(1+p)/2} √Var(S_n(t)) / (S_n(t) ln S_n(t)) )

Log-transformed confidence interval for H(t)

( Ĥ(t)/U,  Ĥ(t)·U )
where U = exp( z_{(1+p)/2} √Var(Ĥ(t)) / Ĥ(t) )

Kernel Smoothing

Uniform
k_{x_i}(x) = 1/(2b) for x_i − b ≤ x ≤ x_i + b;  0 otherwise
K_{x_i}(x) = 0 for x < x_i − b;
             (x − (x_i − b)) / (2b) for x_i − b ≤ x ≤ x_i + b;
             1 for x > x_i + b

f̂(x) = Σ_i f_n(x_i) k_{x_i}(x)
F̂(x) = Σ_i f_n(x_i) K_{x_i}(x)
where f_n(x_i) is the empirical probability of sample point x_i

x_i is a sample point
x is the estimation point
The kernel distribution is 1 for observation points more than one bandwidth to the left
The kernel distribution is 0 for observation points more than one bandwidth to the right
K₆(13) ≥ K₁₀(13) ≥ K₂₅(13)
ex) To find K₁₂(11), linearly interpolate between K₁₂(7) and K₁₂(17)

Triangular
Base of triangle is 2b
Height of triangle is 1/b

Expected Values
E[X | Y] = Y  (Y is the original random variable)
The mean of the smoothed distribution is the same as the original mean:
E[X] = E[Y]
Uniform Kernel:
Var(X) = Var(Y) + b²/3
Triangular Kernel:
Var(X) = Var(Y) + b²/6

Approximations for Large Data Sets

d_j is the # of left-truncated observations (new entrants) in [c_j, c_{j+1})
u_j is the # of right-censored observations in [c_j, c_{j+1})
x_j is the # of losses in [c_j, c_{j+1})
r_j is the risk set in [c_j, c_{j+1})
q′_j is the decrement rate in [c_j, c_{j+1})

q′_j = x_j / r_j
P_{j+1} = P_j + d_j − u_j − x_j
v_j = # withdrawals
w_j = # survivors
u_j = v_j + w_j
All entries/withdrawals at endpoints:
r_j = P_j + d_j
All entries/withdrawals uniformly distributed:
r_j = P_j + 0.5(d_j − v_j)

Multiple Decrements

p^(τ) = p′^(1) × p′^(2) × ...

Estimating a single-decrement probability over 3 years from decrement counts x_t and risk sets r_t:
₃p′_t^(x) = (1 − x_t/r_t)(1 − x_{t+1}/r_{t+1})(1 − x_{t+2}/r_{t+2})
₃q′_t^(x) = 1 − ₃p′_t^(x)

Parametric Models
Method of Moments

m = Σ_{i=1}^n x_i / n  (first sample moment)
t = Σ_{i=1}^n x_i² / n  (second sample moment)

Distribution: Formulas
Exponential: θ = m
Gamma: α = m²/(t − m²);  θ = (t − m²)/m
Pareto: α = (2t − 2m²)/(t − 2m²);  θ = mt/(t − 2m²)
Lognormal: μ = 2 ln(m) − 0.5 ln(t);  σ² = ln(t) − 2 ln(m)
Uniform on (0, θ): θ = 2m

When they don't specify which moment to use, use the first k moments, where k is the number
of parameters you're fitting.
For an inverse exponential, add the reciprocals to get the mean.
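The Pareto method-of-moments formulas can be verified with a round-trip: compute the true first two raw moments for known parameters, then recover the parameters. A minimal sketch (α = 3, θ = 1000 are illustrative values):

```python
def mom_pareto(m, t):
    """Two-parameter Pareto matched to sample moments m = E[X], t = E[X^2]:
    alpha = 2(t - m^2)/(t - 2m^2), theta = m*t/(t - 2m^2)."""
    alpha = 2 * (t - m * m) / (t - 2 * m * m)
    theta = m * t / (t - 2 * m * m)
    return alpha, theta

def pareto_moments(alpha, theta):
    """First two raw moments of a two-parameter Pareto (requires alpha > 2)."""
    m = theta / (alpha - 1)
    t = 2 * theta ** 2 / ((alpha - 1) * (alpha - 2))
    return m, t

a_hat, th_hat = mom_pareto(*pareto_moments(3, 1000))
```

Feeding the exact moments back through the estimator recovers α = 3 and θ = 1000, confirming the algebra in the table.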

Percentile Matching

π̂_p = x_((n+1)p), the ((n+1)p)-th order statistic, if (n+1)p is an integer
Otherwise compute (n+1)p and interpolate between the two surrounding order statistics
The smoothed empirical percentile is not defined if (n+1)p is less than 1 or greater than n

Maximum Likelihood
Type of Data: Likelihood Factor
Discrete distribution, individual data: p_x
Continuous distribution, individual data: f(x)
Grouped Data: F(c_j) − F(c_{j−1})
Individual data censored from above at u: 1 − F(u) for censored observations
Individual data censored from below at d: F(d) for censored observations
Individual data truncated from above at u: f(x) / F(u)
Individual data truncated from below at d: f(x) / (1 − F(d))

Cases where MLE = Method of Moments Estimator
(if no censored or truncated data)

Distribution: Result
Exponential: MLE = MoM
Gamma: MLE = MoM if α is fixed
Normal: MLE = MoM
Poisson: MLE = MoM
Negative Binomial: MLE = MoM if r is known
Binomial: MLE = MoM if m is known

If the MLE is the sample mean, the variance of the MLE is the variance of the
distribution divided by n: Var(X)/n

Common Likelihood Functions

MLE Function: MLE
L(θ) = θ^(−a) e^(−b/θ):  θ̂ = b/a
L(θ) = θ^a e^(−bθ):  θ̂ = a/b
L(θ) = θ^a (1 − θ)^b:  θ̂ = a/(a + b)

MLE Formulas
Distribution: Formula; CT?

Exponential: θ̂ = Σ_{i=1}^{n+c} (x_i − d_i) / n;  Yes
Lognormal: μ̂ = Σ ln x_i / n, σ̂² = Σ (ln x_i − μ̂)² / n;  No
Inverse Exponential: θ̂ = n / Σ (1/x_i);  No
Weibull, fixed τ: θ̂ = ( Σ_{i=1}^{n+c} (x_i^τ − d_i^τ) / n )^(1/τ);  Yes
Uniform on (0, θ), Individual Data: θ̂ = max x_i;  No
Uniform on (0, θ), Grouped Data OR some observations censored at a single point
(there must be at least one observation above c_j):
  θ̂ = c_j n / n_j
  c_j = upper bound of highest finite interval
  n_j = number of observations below c_j;  No
Uniform on (0, θ), Grouped Data, all groups bounded:
  θ̂ = min of
  1) UB of highest interval with data
  2) LB of highest interval with data × n/n_j;  No
Two-Parameter Pareto, fixed θ: α̂ = n/K, K = Σ_{i=1}^{n+c} [ ln(x_i + θ) − ln(d_i + θ) ];  Yes
One-Parameter Pareto, fixed θ: α̂ = n/K, K = Σ_{i=1}^{n+c} [ ln x_i − ln max(θ, d_i) ];  Yes
Beta, fixed θ, b = 1: â = n/K, K = n ln θ − Σ ln x_i;  No
Beta, fixed θ, a = 1: b̂ = n/K, K = n ln θ − Σ ln(θ − x_i);  No

n = # of uncensored observations
c = # of censored observations
d = truncation point
x = observation if uncensored, or the censoring point if censored
CT = formula can be used for left-truncated or right-censored data
Bernoulli Technique
Whenever there is one parameter and only 2 classes of observations, maximum likelihood will
assign each class the observed frequency, and you can then solve for the parameter.
If X can take only 2 values (a or b):
P(X = a) = (# data points equal to a) / (# data points)

Reasons to Use Maximum Likelihood

1) Method of moments and percentile matching only use a limited number of features from the
sample.
2) Method of moments is hard to use with combined data.
3) Method of moments and percentile matching cannot always handle truncation and censoring.
4) Method of moments and percentile matching require arbitrary decisions on which moments
or percentiles to use.
Reasons NOT to Use Maximum Likelihood
1) There's no guarantee that the likelihood can be maximized; it can go to infinity.
2) There may be more than one maximum.
3) There may be local maxima in addition to the global maximum; these must be avoided.
4) It may not be possible to find the maximum by setting the partial derivatives to zero; a
numerical algorithm may be necessary.

Fisher's Information
1 Parameter

I(θ) = −E[ d²l(θ)/dθ² ]
Var(θ̂) = 1 / I(θ) = 1 / Fisher's Information

2 Parameters

I(α, θ) = −E[ ∂²l/∂α²    ∂²l/∂α∂θ
              ∂²l/∂θ∂α   ∂²l/∂θ²  ]

[ Var(α̂)       Cov(α̂, θ̂)
  Cov(α̂, θ̂)   Var(θ̂)    ] = I(α, θ)^(−1)

Inverting a Matrix

[ a  b ]^(−1)  =  (1/(ad − bc)) [  d  −b ]
[ c  d ]                        [ −c   a ]

Delta Method

1 Variable:
Var(g(X)) ≈ (dg/dx)² Var(X)

2 Variables:
Var(g(X, Y)) ≈ (∂g/∂x)² Var(X) + 2 (∂g/∂x)(∂g/∂y) Cov(X, Y) + (∂g/∂y)² Var(Y)

General:
Var(g(X)) ≈ (∂g)′ Σ (∂g)
where ∂g = ( ∂g/∂x₁, ..., ∂g/∂x_k )′ and Σ is the covariance matrix
Take the derivative with respect to each unknown variable.
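The one- and two-variable delta-method formulas can be sketched directly; the two-variable version is just the quadratic form (∂g)′ Σ (∂g) written out. The numbers in the test are arbitrary illustrations:

```python
def delta_var_1(dg_dx, var_x):
    """One-variable delta method: Var(g(X)) ~ (dg/dx)^2 * Var(X)."""
    return dg_dx ** 2 * var_x

def delta_var_2(gx, gy, var_x, var_y, cov_xy):
    """Two-variable delta method:
    Var(g(X,Y)) ~ gx^2 Var(X) + 2 gx gy Cov(X,Y) + gy^2 Var(Y)."""
    return gx ** 2 * var_x + 2 * gx * gy * cov_xy + gy ** 2 * var_y
```

Expanding the matrix form by hand for gradient (2, −1) and covariance matrix [[3, 0.5], [0.5, 2]] gives 2·(3·2 + 0.5·(−1)) + (−1)·(0.5·2 + 2·(−1)) = 12, which the two-variable formula reproduces.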

Fitting Discrete Distributions

Distribution: estimators
Poisson: λ̂ = x̄ (Method of Moments and MLE agree)
Negative Binomial (Method of Moments): β̂ = σ̂²/x̄ − 1;  r̂β̂ = x̄
Binomial (Method of Moments): q̂ = x̄/m
σ̂² = Sample Variance (divide by n)

Choosing between (a,b,0) distributions to fit the data:

1) Compare sample variance σ̂² to sample mean x̄
Poisson: Variance = Mean
Negative Binomial: Variance > Mean
Binomial: Variance < Mean

2) Calculate k n_k / n_{k−1} and observe the slope as a function of k
If the ratios are increasing, then a > 0
Poisson: 0 slope
Negative Binomial: positive slope
Binomial: negative slope
n_k is the # of policies/observations of k

The variance of a mixture is always at least as large as the weighted average of the
variances of the components, and usually greater, due to:
Var(X) = E[Var(X | I)] + Var(E[X | I])
(variance of a mixture = weighted average of variances + variance of hypothetical means)

Asymptotic Variance of MLEs

Distribution: Formula
Exponential: Var(θ̂) = θ²/n
Uniform(0, θ): Var(θ̂) = nθ² / ((n + 1)²(n + 2))
When the MLE is the sample mean: Var(X̄) = Var(X)/n
Weibull, τ fixed: Var(θ̂) = θ² / (nτ²)
Pareto, θ fixed: Var(α̂) = α²/n
Poisson: Var(λ̂) = λ/n
Lognormal: Var(μ̂) = σ²/n;  Var(σ̂) = σ²/(2n);  Cov(μ̂, σ̂) = 0

Hypothesis Tests: Graphic Comparison

For data truncated at d, fit to:
F*(x) = (F(x) − F(d)) / (1 − F(d))
f*(x) = f(x) / (1 − F(d))

D(x) plots
D(x) = F_n(x) − F*(x)  (Empirical − Fitted)
The empirical calculation uses a denominator of n
If D(x) > 0, then F_n(x) > F*(x): had more data ≤ x than predicted by the model
If D(x) < 0, then F_n(x) < F*(x): had less data ≤ x than predicted by the model
If data is truncated at d, D(d) = 0
Every vertical jump has distance 1/n

p-p plots
On the horizontal axis, one point every multiple of 1/(n + 1)
Domain and Range of the graph are (0, 1)
Points are (F_n(x_j), F*(x_j))
The first data point corresponds to the first sample value
If the slope is less than 45°, the fitted distribution is putting
too little weight in that region
If the slope is more than 45°, the fitted distribution is putting
too much weight in that region
Don't plot censored values

Kolmogorov-Smirnov Test

D = max_{d ≤ x ≤ u} |F_n(x) − F*(x)|

The max occurs right before or after a jump
If D < critical value, do not reject H₀ (null hypothesis)
Having censored data lowers D and also lowers the critical value
Critical values → 0 as n → ∞
If the data are X = 2000, 4000, 5000, 5000
and the 5000 values are right-censored,
then F₄(5000−) = F₄(5000) = 0.5

Anderson-Darling Test

A² = −n F*(u) + n Σ_{j=0}^{k} [S_n(y_j)]² [ln S*(y_j) − ln S*(y_{j+1})]
              + n Σ_{j=1}^{k} [F_n(y_j)]² [ln F*(y_{j+1}) − ln F*(y_j)]

n includes censored observations

Chi-Square

Q = Σ_{j=1}^{k} (O_j − E_j)² / E_j = Σ_{j=1}^{k} O_j²/E_j − n

O_j = # of observations in each group
E_j = n p_j (expected # of observations in each group)
n = total # of observations
Each group should have at least 5 expected observations. If not,
you have to bring some groups together.
Degrees of Freedom
Distribution with parameters given, or fitted/estimated using different data: k − 1 DoF
Parameters fitted from the data by MLE: k − 1 − r DoF
r = # parameters
k = # groups
Independent Periods
Q = Σ_j (O_j − E_j)² / V_j, where V_j is the variance
Degrees of Freedom = k − p (p is the number of estimated parameters)

Comparison of Fit Tests

Kolmogorov-Smirnov:
- Individual data; continuous fits
- If there is censored data, should lower the critical value
- If parameters are fitted, the critical value should be lowered
- Larger sample size makes the critical value decline
- No discretion in grouping of data
- Uniform weight on all parts of the distribution

Anderson-Darling:
- Individual data; continuous fits
- If there is censored data, should lower the critical value
- If parameters are fitted, the critical value should be lowered
- Critical value independent of sample size
- No discretion in grouping of data
- Higher weight on tails of the distribution

Chi-square:
- Individual or grouped data; continuous or discrete fits
- If there is censored data, no adjustment of the critical value
- If parameters are fitted, the critical value adjusts automatically
- Critical value independent of sample size
- Discretion in grouping of data
- Higher weight on intervals with low fitted probability

Loglikelihood:
- Individual or grouped data; continuous or discrete fits
- If there is censored data, no adjustment of the critical value
- If parameters are fitted, the critical value adjusts automatically
- Critical value independent of sample size

Type I Error: Rejecting H₀ when it is true
Type II Error: Failing to reject H₀ when it is false (rejecting H₁ when it is true)

Likelihood Ratio Algorithm

If parameters are added to the model, the new model will have a loglikelihood at least as great.
The # DoF for the likelihood ratio test is the number of free parameters in the alternative model
minus the number of free parameters in the base model (null hypothesis).

Compare 2(LogL₁ − LogL₂) to the critical value at the selected chi-square percentile and DoF
LogL₁ = Alternative Model Loglikelihood (which will be higher)
LogL₂ = Base Model Loglikelihood
If 2(LogL₁ − LogL₂) > critical value, accept the alternative hypothesis
Start by comparing the best 2-parameter to the best 1-parameter
If 2(LogL₁ − LogL₂) ≤ Critical Value (it fails), compare the best 3-parameter distributions to the best 1-parameter
If 2(LogL₁ − LogL₂) > Critical Value (it passes), compare 3-parameter distributions to the best 2-parameter
Schwarz-Bayesian Criterion

LogL − (r/2) ln n
where r is the # of parameters
and n is the # of data points
The distribution with the highest resulting adjusted LogL is selected

Credibility

n₀ = (y_p / k)²
y_p = coefficient from the standard normal: y_p = Φ^(−1)((1 + P)/2)
Given y_p, P% = 100(2Φ(y_p) − 1) percent
k = maximum fluctuation you will accept (i.e. within 5%)

Limited Fluctuation Credibility: Poisson Frequency

All units must be the same!

1 + CV_s² = E[X²] / E[X]²

Full-credibility standards (λ = Poisson mean, θ_s = E[X], CV_s = σ_s/θ_s):

Credibility for: Exposure Units e_F; Number of Claims n_F; Aggregate Losses s_F
Number of Claims: n₀/λ;  n₀;  n₀ θ_s
Claim Size (Severity): n₀ CV_s²/λ;  n₀ CV_s²;  n₀ θ_s CV_s²
Aggregate Losses/Pure Premium: n₀(1 + CV_s²)/λ;  n₀(1 + CV_s²);  n₀ θ_s (1 + CV_s²)

Pure Premium is the expected aggregate loss per policyholder per time period.

Limited Fluctuation Credibility: Non-Poisson Frequency

n_F = e_F μ_f

Credibility for: Exposure Units e_F; Number of Claims n_F; Aggregate Losses s_F
Number of Claims: n₀ σ_f²/μ_f²;  n₀ σ_f²/μ_f;  n₀ μ_s σ_f²/μ_f
Claim Size (Severity): n₀ σ_s²/(μ_f μ_s²);  n₀ σ_s²/μ_s²;  n₀ σ_s²/μ_s
Aggregate Losses/Pure Premium: n₀ (σ_f²/μ_f + σ_s²/μ_s²)/μ_f;  n₀ (σ_f²/μ_f + σ_s²/μ_s²);  n₀ μ_s (σ_f²/μ_f + σ_s²/μ_s²)

# of Insureds is Exposure

Partial Credibility
P_C = Z X̄ + (1 − Z) M = M + Z(X̄ − M)
P_C = Credibility Premium
M = Manual Premium
Z = Credibility
X̄ = Observed Mean
Z = √(n / n_F)
n = Expected Claims
n_F = Number of Expected Claims needed for Full Credibility
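The partial-credibility rule is mechanical enough to sketch in two functions. The numbers in the test (100 expected claims against a full-credibility standard of 400) are illustrative only:

```python
import math

def credibility_Z(n, n_F):
    """Partial (limited-fluctuation) credibility: Z = sqrt(n / n_F), capped at 1."""
    return min(1.0, math.sqrt(n / n_F))

def credibility_premium(Z, x_bar, M):
    """P_C = Z * X_bar + (1 - Z) * M = M + Z (X_bar - M)."""
    return Z * x_bar + (1 - Z) * M
```

With n = 100 and n_F = 400, Z = 0.5, so an observed mean of 1200 against a manual premium of 1000 gives a credibility premium of 1100.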

Bayesian Credibility
Bayesian Methods: Discrete Prior

For each class, build a table of rows:
1) Prior Probabilities
2) Likelihood of Experience
3) Joint Probabilities: product of the rows above
4) Posterior Probabilities: quotient of row 3 over the row 3 sum
5) Hypothetical Means
6) Bayesian Premium: sum of the products of rows 4 and 5

The Bayesian Premium is the predicted expected value of the next trial
"# claims is Bernoulli" means at most 1 claim can occur
Bayesian Methods: Continuous Prior

Posterior Density:
π(θ | x₁, ..., x_n) = f(x₁, ..., x_n | θ) π(θ) / ∫ f(x₁, ..., x_n | θ) π(θ) dθ
The limits of integration are according to the prior distribution

Predictive Density:
f(x_{n+1} | x₁, ..., x_n) = ∫ f(x_{n+1} | θ) π(θ | x₁, ..., x_n) dθ

π(θ) is the prior density
π(θ | x₁, ..., x_n) is the posterior density
f(x₁, ..., x_n | θ) is the likelihood function of the data, conditional on θ:
f(x₁, ..., x_n | θ) = Π_{i=1}^n f(x_i | θ)
f(x₁, ..., x_n) is the unconditional joint density function:
f(x₁, ..., x_n) = ∫ f(x₁, ..., x_n | θ) π(θ) dθ

Useful integral: ∫₀^∞ t^n e^(−t/θ) dt = n! θ^(n+1)

Bayesian Credibility: Poisson/Gamma

N ~ Poisson(λ)
λ ~ Gamma(α, θ), with γ = 1/θ
α* = α + claims
γ* = γ + exposures
Posterior: Gamma(α*, θ* = 1/γ*)
P_C = α*/γ*
The posterior mean is the avg. # claims/policy
Predictive: Negative Binomial(r = α*, β = θ*)

Bayesian Credibility: Normal/Normal

X ~ Normal(θ, v)
θ ~ Normal(μ, a)
x̄ = Observed Average
n = Exposure
Posterior Mean: μ* = (vμ + a n x̄)/(v + an)
Posterior Variance: a* = va/(v + an)
Predictive Mean: μ*
Predictive Variance: v + a*

Bayesian Credibility: Lognormal/Normal

X ~ Lognormal(θ, v)
θ ~ Normal(μ, a)
Find x̄ = Σ ln x_i / n
Posterior Mean: μ* = (vμ + a n x̄)/(v + an)
Posterior Variance: a* = va/(v + an)
E[X | θ] = e^(θ + 0.5v), so the predictive mean is
E[X] = E[e^θ] e^(0.5v) = e^(μ* + 0.5a* + 0.5v)

Bayesian Credibility: Bernoulli/Beta

Probability of a claim = q
q ~ Uniform(0, 1)
Uniform is a special case of the Beta distribution with a = b = 1 and θ = 1
k = # claims
n = exposure
Beta: f(x) = C x^(a−1) (1 − x)^(b−1)
a* = a + claims
b* = b + exposures − claims
(Plug into the Posterior Distribution)
E[q | x] = a*/(a* + b*)
If m > 1, treat as a series of m Bernoullis
If exposure is 2 years, treat as 2m Bernoullis:
1) n = 2m, and
2) predictive mean = m · a*/(a* + b*)

Γ(x + 1) = x Γ(x)

Bayesian Credibility: Exponential/Inverse Gamma

f(x | θ) = (1/θ) e^(−x/θ)  (Exponential)
θ ~ Inverse Gamma(α, β)
α* = α + n
β* = β + nx̄
E(next loss) = β*/(α* − 1)

If f(x) is Gamma instead of exponential, with fixed shape k:
f(x | θ) = x^(k−1) e^(−x/θ) / (Γ(k) θ^k)
α* = α + nk
β* = β + nx̄

Loss Functions

For the loss function minimizing MSE, l(θ̂, θ) = (θ̂ − θ)²:
the Bayesian point estimate is the mean of the posterior distribution

For the loss function minimizing the absolute value of the error, l(θ̂, θ) = |θ̂ − θ|:
the Bayesian point estimate is the median of the posterior distribution

For the zero-one loss function:
the Bayesian point estimate is the mode of the posterior distribution

Buhlmann Credibility
Buhlmann Credibility: Basics

μ = Overall Mean = Expected Value of the Hypothetical Means
v = E[v(θ)] = Expected Value of the Process Variances
a = Var(μ(θ)) = Variance of the Hypothetical Means
a + v = Overall Variance
For Poisson frequency, HM = PV
Buhlmann's k: k = v/a
Buhlmann's Credibility: Z = n/(n + k) = na/(na + v)
n = # periods when studying frequency or aggregate losses
n = # claims when studying severity
If given 2 classes and there are multiple groups within each class,
you must find the mean and variance of each group separately.
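The EPV/VHM bookkeeping above can be sketched for a discrete prior over classes. The two-class numbers in the test (means 2 and 4, process variances 1 and 3, equal weights) are made up for illustration:

```python
def buhlmann(mu_hyp, var_proc, weights):
    """Given per-class hypothetical means, process variances, and prior weights,
    return (mu, v = EPV, a = VHM, k = v/a)."""
    mu = sum(w * m for w, m in zip(weights, mu_hyp))
    v = sum(w * pv for w, pv in zip(weights, var_proc))
    a = sum(w * m * m for w, m in zip(weights, mu_hyp)) - mu ** 2
    return mu, v, a, v / a

def buhlmann_Z(n, k):
    """Buhlmann credibility factor Z = n / (n + k)."""
    return n / (n + k)

mu, v, a, k = buhlmann([2, 4], [1, 3], [0.5, 0.5])
```

Here μ = 3, v = 2, a = 1 and k = 2, so one period of experience gets Z = 1/3.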

Buhlmann Credibility: Continuous

Given a distribution function and the prior function:
1) Use the distribution function to get the HM and PV
2) Find v and a using the prior distribution

Conjugate pairs (Model; Prior; Posterior; Predictive):

Poisson(λ); Gamma(α, θ); Gamma(α*, θ*); Negative Binomial(r = α*, β = θ*)
  α* = α + claims, 1/θ* = 1/θ + exposures
  Buhlmann v = αθ;  Buhlmann a = αθ²

Bernoulli(q); Beta(a, b); Beta(a*, b*); Bernoulli(q = E[q | x])
  a* = a + claims, b* = b + exposures − claims
  Buhlmann v = ab/((a + b)(a + b + 1));  Buhlmann a = ab/((a + b)²(a + b + 1))

Normal(θ, v); Normal(μ, a); Normal(μ*, a*); Normal(μ*, a* + v)
  μ* = (vμ + a n x̄)/(v + an), a* = va/(v + an)
  Buhlmann v = v;  Buhlmann a = a

Exponential(θ); Inverse Gamma(α, θ); Inverse Gamma(α*, θ*); Pareto(α*, θ*)
  α* = α + n, θ* = θ + nx̄
  Buhlmann v = θ²/((α − 1)(α − 2));  Buhlmann a = θ²/((α − 1)²(α − 2))

Exact Credibility
If you have conjugate pairs and they ask for a Buhlmann estimate, use the Bayesian estimate.
Buhlmann as Least Squares Estimate of Bayes
E[Initial probabilities × Outcomes] = E[Initial probabilities × Bayesian Estimates]
Ŷ_i = Z X_i + (1 − Z) E[X]
Z = Cov(X, Y) / Var(X)
Var(X) = Σ p_i X_i² − E[X]²
Cov(X, Y) = Σ p_i X_i Y_i − E[X] E[Y]
where X are the initial outcomes and Y are the Bayesian estimates
Buhlmann Predictions
P_C(0) = (1 − Z) E[X] + 0·Z  (first observation = 0)
P_C(2) = (1 − Z) E[X] + 2Z
P_C(8) = (1 − Z) E[X] + 8Z
Graphics Questions
1) The Bayesian prediction must be within the range of the hypothetical means
   (within the range of the prior distribution)
2) The Buhlmann predictions must lie on a straight line
3) There should be Bayesian predictions both above and below the Buhlmann line
4) The Buhlmann prediction must be between the overall mean and the observation

Cov(X_i, X_j) = a  (i ≠ j)
Var(X_i) = v + a

Empirical Bayes Non-Parametric Methods

Uniform Exposures:
μ̂ = x̄  (mean of all data)
v̂ = (1/(r(n − 1))) Σ_{i=1}^r Σ_{j=1}^n (x_ij − x̄_i)²
  (mean of the sample variances of the rows)
â = (1/(r − 1)) Σ_{i=1}^r (x̄_i − x̄)² − v̂/n

Non-Uniform Exposures:
v̂ = Σ_i Σ_j m_ij (x_ij − x̄_i)² / Σ_i (n_i − 1)
â = [ Σ_i m_i (x̄_i − x̄)² − v̂(r − 1) ] / ( m − (1/m) Σ_i m_i² )

x̄_i = avg per class;  x̄ = overall avg
n = years of experience;  n_i = years of experience per class
r = # groups
m = total # exposures;  m_i = exposures per class
To calculate individual variances: v̂_i = Σ_j (X_ij − X̄_i)² / (n − 1)
In the P_C formula, M = the average of ALL claims

Empirical Bayes Semi-Parametric Methods

Poisson Model:
μ̂ = v̂ = x̄
â = s² − v̂
s² = Σ (X_i − X̄)² / (r − 1),  r = # policyholders
Z = â/(â + v̂) regardless of # of years (but if non-uniform exposures, use n = # exposures
for the group you are looking at, Z = nâ/(nâ + v̂))
If non-uniform exposures, â must be calculated using the non-parametric formula
For P_C:
1) X̄ = total # observed claims (but if non-uniform exposures, use the average)
2) If exposure is 5 years, divide P_C by 5 to get the estimate for 1 year (next year)

Non-Poisson Models:
1) Negative Binomial with fixed β:
E[N | r] = rβ;  Var(N | r) = rβ(1 + β)
μ̂ = x̄;  v̂ = x̄(1 + β);  â = s² − v̂
2) Gamma with fixed θ:
E[X | α] = αθ;  Var(X | α) = αθ²
μ̂ = x̄;  v̂ = x̄θ;  â = s² − v̂

Simulation
Inversion Method

1) Set u = F(x)
2) Solve for x
3) Plug in u to get the simulated value
If F(2−) = 0.25 and F(2) = 0.75,
then 0.25 ≤ u < 0.75 is mapped to x = 2
If F(x) = a (constant) in the range (x₁, x₂), then map a → x₂
If given a graph of (x, F(x)):
1) Start on the y-axis with the u value
2) Move right until you hit the line
   If the line is horizontal, keep going right until it starts going up
3) Go vertically down to x
4) x is the simulated value
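For a continuous distribution the inversion steps collapse to one line. A minimal sketch for an exponential severity (θ = 1000 is an illustrative value):

```python
import math
import random

def simulate_exponential(theta, u):
    """Inversion: u = F(x) = 1 - e^(-x/theta)  =>  x = -theta * ln(1 - u)."""
    return -theta * math.log(1 - u)

random.seed(42)                      # fixed seed for reproducibility
n = 100000
sample = [simulate_exponential(1000, random.random()) for _ in range(n)]
mean_est = sum(sample) / n
```

Feeding in u = F(500) returns exactly 500, and the simulated sample mean lands close to θ = 1000.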

Number of Data Values to Generate

Var(F̂(x)) = F(x)(1 − F(x)) / n

Estimated Item: Mean
Confidence Interval:
x̄ ± z s_n/√n
s_n is the square root of the unbiased sample variance after n runs
Number of Runs:
n = n₀ CV² calculates the number of runs needed for the sample mean to be
within 100k% of the true mean.
Must use the unbiased variance in CV²
Remember that Var(X̄) = Var(X)/n

Estimated Item: F(x)
Confidence Interval:
F_n(x) ± z √( F_n(x)(1 − F_n(x)) / n )
Number of Runs:
n = n₀ (1 − P_n) / P_n, where P_n = (# runs below x) / n

Estimated Item: Percentiles

Confidence Interval: (Y_(a), Y_(b)), using order statistics with
a = nq + 0.5 − z_{(1+p)/2} √(nq(1 − q))  (rounded down)
b = nq + 0.5 + z_{(1+p)/2} √(nq(1 − q))  (rounded up)

Risk Measures

TVaR_p(X) = E[L | L > VaR_p(X)]
          = VaR_p(X) + e_X(VaR_p(X))
          = VaR_p(X) + (E[L] − E[L ^ VaR_p(X)])/(1 − p)
          = ∫_{F_X^(−1)(p)}^∞ x f(x) dx / (1 − p)
          = (1/(1 − p)) ∫_p^1 VaR_y(X) dy

TVaR_q(X) is the mean of the upper tail of the distribution
TVaR_q(X) = Conditional Tail Expectation

Simulation estimate, with order statistics Y_(1) ≤ ... ≤ Y_(n):
TVaR̂_q(X) = Σ_{j=k}^{n} Y_(j) / (n − k + 1), where k = nq + 1
s_q² = Σ_{j=k}^{n} (Y_(j) − TVaR̂_q(X))² / (n − k)
Var(TVaR̂_q(X)) = ( s_q² + q (TVaR̂_q(X) − VaR̂_q(X))² ) / (n − k + 1)
Confidence Interval = TVaR̂_q(X) ± z √Var(TVaR̂_q(X))

Estimate of VaR_q(X) is Y_(k) where k = nq + 1
So if the simulation has 1000 runs and you're estimating the 95th percentile,
then use Y_(951)
Bootstrap Approximation

θ(F) is the parameter
g(x₁, ..., x_n) is an estimator based on a sample of n items
MSE( g(x₁, ..., x_n) → θ(F) ) ≈ E_{F_n}[ ( g(x₁, ..., x_n) − θ(F_n) )² ]

Estimating the mean with the sample mean:
MSE(x̄) = Σ_{i=1}^n (x_i − x̄)² / n²
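The closed form for the sample-mean case can be checked against an exhaustive bootstrap (every one of the n^n equally likely resamples). A hedged sketch; the data set [1, 3, 6] is an arbitrary illustration:

```python
from itertools import product

def bootstrap_mse_mean(xs):
    """Closed form: MSE of the sample mean under the empirical distribution
    is sum((x_i - x_bar)^2) / n^2."""
    n = len(xs)
    xbar = sum(xs) / n
    return sum((x - xbar) ** 2 for x in xs) / n ** 2

def bootstrap_mse_brute(xs):
    """Exact bootstrap: average (mean(resample) - x_bar)^2 over all n^n resamples."""
    n = len(xs)
    xbar = sum(xs) / n
    total = 0.0
    for r in product(xs, repeat=n):          # every resample, each with prob n^-n
        total += (sum(r) / n - xbar) ** 2
    return total / n ** n
```

The brute-force enumeration and the closed form agree, since the exact bootstrap MSE of the sample mean is the empirical variance divided by n.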

Sums of Distributions

Single → Multiple (sum of independent copies)
Bernoulli → Binomial
Binomial → Binomial
Poisson → Poisson
Geometric → Negative Binomial
Negative Binomial → Negative Binomial
Normal → Normal
Exponential → Gamma
Gamma → Gamma
Chi-Square → Chi-Square
