
Non-Life Insurance Mathematics

Christel Geiss and Stefan Geiss


Department of Mathematics and Statistics
University of Jyväskylä
July 29, 2015

Contents

1 Introduction
  1.1 Some facts about probability
2 Claim number process models
  2.1 The homogeneous Poisson process
  2.2 The renewal process
  2.3 The inhomogeneous Poisson process
3 The total claim amount process S(t)
  3.1 The Cramér-Lundberg model
  3.2 The renewal model
  3.3 Properties of S(t)
4 Premium calculation principles
  4.1 Used principles
5 Claim size distributions
  5.2 Examples
  5.3 The QQ-plot
6 Modern premium calculation principles
  6.1 The exponential principle
  6.2 The quantile principle
  6.3 The Esscher principle
7 The distribution of S(t)
  7.2 Mixture distributions
  7.3 Applications in insurance
  7.4 The Panjer recursion
  7.5 Approximation of F_{S(t)}
  7.6 Monte Carlo approximations of F_{S(t)}
8 Reinsurance treaties
9 Probability of ruin
  9.1 The risk process
  9.2 Bounds for the ruin probability
  9.3 An asymptotics for the ruin probability
10 Problems
A The Lebesgue-Stieltjes integral
  A.1 The Riemann-Stieltjes integral
  A.2 The Lebesgue-Stieltjes integral

1. Introduction

Insurance mathematics is often divided into life insurance, health insurance and non-life insurance. Life insurance includes, for instance, life insurance contracts and pensions, where long terms are covered. Non-life insurance comprises insurance against fire, water damage, earthquake or industrial catastrophes, as well as car insurance, for example. Non-life insurance contracts in general cover a year or some other fixed time period. Health insurance is special because it is organized differently in each country.

The course material is based on the textbook Non-Life Insurance Mathematics by Thomas Mikosch [6].

The problem to solve

We will consider the following situation:

1. Insurance contracts (or policies) are sold. This is the income of the insurance company.
2. Claims happen at times T_i, where 0 ≤ T_1 ≤ T_2 ≤ ... The times T_i are called the claim arrival times.
3. The i-th claim, arriving at time T_i, causes the claim size X_i.

Task: Find a stochastic model for the T_i's and X_i's to compute or estimate how much an insurance company should demand for its contracts and how much initial capital of the insurance company is required to keep the probability of ruin below a certain level.


1.1 Some facts about probability

We briefly recall some definitions and facts from probability theory which we need in this course. For more information see [9] or [2], for example.

(i) A probability space is a triple (Ω, F, P), where Ω is a non-empty set, F is a σ-algebra consisting of subsets of Ω, and P is a probability measure on (Ω, F).

(ii) A function f : Ω → R is called a random variable if and only if for all intervals (a, b), −∞ < a < b < ∞, the pre-image

    f^{-1}((a, b)) := {ω ∈ Ω : a < f(ω) < b} ∈ F.

(iii) The random variables f_1, ..., f_n are independent if and only if

    P(f_1 ∈ B_1, ..., f_n ∈ B_n) = P(f_1 ∈ B_1) ··· P(f_n ∈ B_n)

for all B_k ∈ B(R), k = 1, ..., n. (Here B(R) denotes the Borel σ-algebra.) If the f_i's have discrete values, i.e. f_i : Ω → {x_1, x_2, x_3, ...}, then the random variables f_1, ..., f_n are independent if and only if

    P(f_1 = k_1, ..., f_n = k_n) = P(f_1 = k_1) ··· P(f_n = k_n)

for all k_i ∈ {x_1, x_2, x_3, ...}.

(iv) If f_1, ..., f_n are independent random variables such that f_i has the density function h_i(x), i.e. P(f_i ∈ (a, b)) = ∫_a^b h_i(x) dx, then

    P((f_1, ..., f_n) ∈ B) = ∫_{R^n} 1I_B(x_1, ..., x_n) h_1(x_1) ··· h_n(x_n) dx_1 ··· dx_n

for all B ∈ B(R^n). The σ-algebra B(R^n) is the Borel σ-algebra, which is the smallest σ-algebra containing all the open rectangles (a_1, b_1) × ... × (a_n, b_n). The function 1I_B(x) is the indicator function of the set B, defined as

    1I_B(x) = 1 if x ∈ B, and 1I_B(x) = 0 if x ∉ B.


(v) A random variable f : Ω → {0, 1, 2, ...} is Poisson distributed with parameter λ > 0 if and only if

    P(f = k) = e^{−λ} λ^k / k!,  k = 0, 1, 2, ...

This is often written as f ~ Pois(λ).

(vi) A random variable g : Ω → [0, ∞) is exponentially distributed with parameter λ > 0 if and only if for all a < b

    P(g ∈ (a, b)) = ∫_a^b λ 1I_{[0,∞)}(x) e^{−λx} dx.

The picture below shows the density λ 1I_{[0,∞)}(x) e^{−λx} for λ = 3.

[Figure: the density of the Exp(3) distribution.]


2. Models for the claim number process N(t)

In the following we will introduce three processes which are used as claim number processes: the Poisson process, the renewal process and the inhomogeneous Poisson process.

2.1 The homogeneous Poisson process with parameter λ > 0

Definition 2.1.1 (homogeneous Poisson process). A stochastic process N = (N(t))_{t ∈ [0,∞)} is a Poisson process if the following conditions are fulfilled:

(P1) N(0) = 0 a.s. (almost surely), i.e. P({ω : N(0, ω) = 0}) = 1.

(P2) N has independent increments, i.e. if 0 = t_0 < t_1 < ... < t_n (n ≥ 1), then N(t_n) − N(t_{n−1}), N(t_{n−1}) − N(t_{n−2}), ..., N(t_1) − N(t_0) are independent.

(P3) For any s ≥ 0 and t > 0 the random variable N(t + s) − N(s) is Poisson distributed, i.e.

    P(N(t + s) − N(s) = m) = e^{−λt} (λt)^m / m!,  m = 0, 1, 2, ...

(P4) The paths of N, i.e. the functions (N(t, ω))_{t ∈ [0,∞)} for fixed ω, are almost surely right-continuous and have left limits. One says N has càdlàg (continue à droite, limite à gauche) paths.


Lemma 2.1.2. Assume W_1, W_2, ... are independent and exponentially distributed with parameter λ > 0. Then, for any x > 0, we have

    P(W_1 + ··· + W_n ≤ x) = 1 − e^{−λx} Σ_{k=0}^{n−1} (λx)^k / k!.

This means the sum of n independent exponentially distributed random variables is a Gamma distributed random variable.

Proof: Exercise.
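The formula of Lemma 2.1.2 can also be checked numerically. The following sketch (the values λ = 2, n = 3, x = 1.5 are illustrative choices, not taken from the text) compares the closed form with a Monte Carlo estimate obtained by simulating sums of exponential waiting times:

```python
import math
import random

def gamma_cdf(n, lam, x):
    # P(W_1 + ... + W_n <= x) = 1 - e^{-lam*x} * sum_{k=0}^{n-1} (lam*x)^k / k!
    return 1.0 - math.exp(-lam * x) * sum(
        (lam * x) ** k / math.factorial(k) for k in range(n)
    )

def mc_estimate(n, lam, x, samples=200_000, seed=0):
    # fraction of simulated sums W_1 + ... + W_n that fall below x
    rng = random.Random(seed)
    hits = sum(
        1 for _ in range(samples)
        if sum(rng.expovariate(lam) for _ in range(n)) <= x
    )
    return hits / samples

print(gamma_cdf(3, 2.0, 1.5))    # exact value from the lemma
print(mc_estimate(3, 2.0, 1.5))  # Monte Carlo estimate, close to the exact value
```

For n = 1 the formula reduces to the exponential distribution function 1 − e^{−λx}, which is a quick sanity check.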
Definition 2.1.3. Let W_1, W_2, ... be independent and exponentially distributed with parameter λ > 0. Define

    T_n := W_1 + ... + W_n

and

    Ñ(t, ω) := #{i ≥ 1 : T_i(ω) ≤ t},  t ≥ 0.

Lemma 2.1.4. For each n = 0, 1, 2, ... and for all t > 0 it holds

    P({ω : Ñ(t, ω) = n}) = e^{−λt} (λt)^n / n!,

i.e. Ñ(t) is Poisson distributed with parameter λt.
Proof: From the definition of Ñ it can be concluded that

    {ω : Ñ(t, ω) = n} = {ω : T_n(ω) ≤ t < T_{n+1}(ω)}
                      = {ω : T_n(ω) ≤ t} \ {ω : T_{n+1}(ω) ≤ t}.

Because of T_n ≤ T_{n+1} we have the inclusion {T_{n+1} ≤ t} ⊆ {T_n ≤ t}. This implies, by Lemma 2.1.2,

    P(Ñ(t) = n) = P(T_n ≤ t) − P(T_{n+1} ≤ t)
                = (1 − e^{−λt} Σ_{k=0}^{n−1} (λt)^k / k!) − (1 − e^{−λt} Σ_{k=0}^{n} (λt)^k / k!)
                = e^{−λt} (λt)^n / n!.




Theorem 2.1.5.

(a) (Ñ(t))_{t ∈ [0,∞)} is a Poisson process with parameter λ > 0.

(b) Any Poisson process N(t) with parameter λ > 0 can be written as

    N(t) = #{i ≥ 1 : T_i ≤ t},  t ≥ 0,

where T_n = W_1 + ... + W_n, n ≥ 1, and W_1, W_2, ... are independent and exponentially distributed with parameter λ > 0.
Proof:
(a) We check the properties of Definition 2.1.1.

(P1) From (vi) of Section 1.1 we get

    P(W_1 > 0) = P(W_1 ∈ (0, ∞)) = ∫_0^∞ λ 1I_{[0,∞)}(y) e^{−λy} dy = 1.

This implies that Ñ(0, ω) = 0 if only 0 < T_1(ω) = W_1(ω), but W_1 > 0 holds almost surely. Hence Ñ(0) = 0 a.s.

(P2) We only show that Ñ(s) and Ñ(t) − Ñ(s) are independent, i.e.

    P(Ñ(s) = l, Ñ(t) − Ñ(s) = m) = P(Ñ(s) = l) P(Ñ(t) − Ñ(s) = m)    (1)

for l, m ≥ 0. The general case can be shown similarly. It holds

    P(Ñ(s) = l, Ñ(t) − Ñ(s) = m) = P(Ñ(s) = l, Ñ(t) = m + l)
                                  = P(T_l ≤ s < T_{l+1}, T_{l+m} ≤ t < T_{l+m+1}).

By defining functions f_1, f_2, f_3 and f_4 as

    f_1 := T_l,
    f_2 := W_{l+1},
    f_3 := W_{l+2} + ... + W_{l+m},
    f_4 := W_{l+m+1},

and h_1, ..., h_4 as the respective densities, it follows that

    P(T_l ≤ s < T_{l+1}, T_{l+m} ≤ t < T_{l+m+1})
      = P(f_1 ≤ s < f_1 + f_2, f_1 + f_2 + f_3 ≤ t < f_1 + f_2 + f_3 + f_4)
      = P(0 ≤ f_1 ≤ s, s − f_1 < f_2 < ∞, 0 ≤ f_3 ≤ t − f_1 − f_2, t − (f_1 + f_2 + f_3) < f_4 < ∞)
      = ∫_0^s ∫_{s−x_1}^∞ ∫_0^{t−x_1−x_2} ∫_{t−x_1−x_2−x_3}^∞ h_4(x_4) dx_4 h_3(x_3) dx_3 h_2(x_2) dx_2 h_1(x_1) dx_1
      =: I_1,

where we abbreviate the three inner integrals by I_4(x_1, x_2, x_3), I_3(x_1, x_2) and I_2(x_1). By direct computation, and rewriting the density function of f_4 = W_{l+m+1},

    I_4(x_1, x_2, x_3) = ∫_{t−x_1−x_2−x_3}^∞ λ e^{−λx_4} 1I_{[0,∞)}(x_4) dx_4 = e^{−λ(t−x_1−x_2−x_3)}.

Here we used t − x_1 − x_2 − x_3 > 0. This is true because the integration w.r.t. x_3 implies 0 ≤ x_3 ≤ t − x_1 − x_2. The density of f_3 = W_{l+2} + ... + W_{l+m} is

    h_3(x_3) = λ^{m−1} x_3^{m−2} / (m−2)! · 1I_{[0,∞)}(x_3) e^{−λx_3}.

Therefore,

    I_3(x_1, x_2) = ∫_0^{t−x_1−x_2} λ^{m−1} x_3^{m−2} / (m−2)! · e^{−λx_3} e^{−λ(t−x_1−x_2−x_3)} dx_3
                  = 1I_{[0,t−x_1)}(x_2) e^{−λ(t−x_1−x_2)} λ^{m−1} (t − x_1 − x_2)^{m−1} / (m−1)!.

The density of f_2 = W_{l+1} is h_2(x_2) = λ 1I_{[0,∞)}(x_2) e^{−λx_2}. This implies

    I_2(x_1) = ∫_{s−x_1}^∞ 1I_{[0,t−x_1)}(x_2) e^{−λ(t−x_1−x_2)} λ^{m−1} (t − x_1 − x_2)^{m−1} / (m−1)! · λ e^{−λx_2} dx_2
             = λ^m e^{−λ(t−x_1)} (t − s)^m / m!.

Finally, from Lemma 2.1.2 we conclude

    I_1 = ∫_0^s λ^m e^{−λ(t−x_1)} (t − s)^m / m! · λ^l x_1^{l−1} / (l−1)! · 1I_{[0,∞)}(x_1) e^{−λx_1} dx_1
        = λ^m λ^l e^{−λt} ((t − s)^m / m!) (s^l / l!)
        = (e^{−λs} (λs)^l / l!) (e^{−λ(t−s)} (λ(t − s))^m / m!)
        = P(Ñ(s) = l) P(Ñ(t − s) = m).

If we sum

    P(Ñ(s) = l, Ñ(t) − Ñ(s) = m) = P(Ñ(s) = l) P(Ñ(t − s) = m)

over l ∈ N we get

    P(Ñ(t) − Ñ(s) = m) = P(Ñ(t − s) = m)    (2)

and hence (1).

(P3) follows from Lemma 2.1.4 and (2).

(P4) is clear from the construction.

(b) The proof is an exercise.




[Figure: sample paths of Poisson processes with λ = 10 and λ = 50 on [0, 1].]

2.2 The renewal process

To model windstorm claims, for example, it is not good to use the Poisson process, because windstorm claims happen rarely, sometimes with years in between. The Pareto distribution, for example, which has the distribution function

    F(x) = 1 − (λ / (λ + x))^α

with parameters α, λ > 0, would fit better. For a Pareto distributed random variable it is more likely to have large values than for an exponentially distributed random variable.

Definition 2.2.1 (Renewal process). Assume that W_1, W_2, ... are i.i.d. (= independent and identically distributed) random variables such that W_1 > 0 a.s. Then

    T_0 := 0,  T_n := W_1 + ... + W_n,  n ≥ 1,

is a renewal sequence and

    N(t) := #{i ≥ 1 : T_i ≤ t},  t ≥ 0,

is the renewal process.
In order to study the limit behavior of N we need the Strong Law of Large Numbers (SLLN):

Theorem 2.2.2 (SLLN). If the random variables X_1, X_2, ... are i.i.d. with E|X_1| < ∞, then

    (X_1 + X_2 + ... + X_n) / n → EX_1 a.s. as n → ∞.

Theorem 2.2.3 (SLLN for renewal processes). Assume N(t) is a renewal process. If EW_1 < ∞, then

    lim_{t→∞} N(t)/t = 1/EW_1  a.s.

Proof: Because of

    {ω : N(t)(ω) = n} = {ω : T_n(ω) ≤ t < T_{n+1}(ω)},  n ∈ N,

we have for N(t)(ω) > 0

    T_{N(t)(ω)}(ω) / N(t)(ω) ≤ t / N(t)(ω) < T_{N(t)(ω)+1}(ω) / N(t)(ω)
      = (T_{N(t)(ω)+1}(ω) / (N(t)(ω) + 1)) · ((N(t)(ω) + 1) / N(t)(ω)).    (3)

Note that

    Ω = {ω : T_1(ω) < ∞} = {ω : sup_{t≥0} N(t) > 0}.

Theorem 2.2.2 implies that

    T_n / n → EW_1    (4)

holds on a set Ω_0 with P(Ω_0) = 1. Hence lim_{n→∞} T_n = ∞ on Ω_0 and, by the definition of N, also lim_{t→∞} N(t) = ∞ on Ω_0. From (4) we get

    lim_{t→∞} T_{N(t)(ω)}(ω) / N(t)(ω) = EW_1  for ω ∈ Ω_0.

Finally (3) implies

    lim_{t→∞} t / N(t)(ω) = EW_1  for ω ∈ Ω_0.


In the following we will investigate the behavior of EN(t) as t → ∞.

Theorem 2.2.4 (Elementary renewal theorem). Assume the above setting, i.e. N(t) is a renewal process. If EW_1 < ∞, then

    lim_{t→∞} EN(t)/t = 1/EW_1.    (5)

Remark 2.2.5. If the W_i's are exponentially distributed with parameter λ > 0, W_i ~ Exp(λ), i = 1, 2, ..., then N(t) is a Poisson process. Consequently, EN(t) = λt. Since EW_i = 1/λ, it follows that for all t > 0

    EN(t)/t = 1/EW_1.    (6)

If the W_i's are not exponentially distributed, then equation (6) holds only in the limit t → ∞.
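The convergence N(t)/t → 1/EW_1 is easy to observe in a simulation. In the sketch below the waiting times are uniform on (0.5, 1.5) (an arbitrary illustrative choice with EW_1 = 1), so N(t)/t should be close to 1 for large t:

```python
import random

def renewal_count(t, draw_w, rng):
    """N(t) = #{i >= 1 : T_i <= t} for one simulated path of the renewal process."""
    total, n = 0.0, 0
    while True:
        total += draw_w(rng)   # next waiting time W_i
        if total > t:
            return n
        n += 1

rng = random.Random(1)
t = 10_000.0
n = renewal_count(t, lambda r: r.uniform(0.5, 1.5), rng)
print(n / t)   # close to 1/EW_1 = 1
```

Replacing the uniform draw with `r.expovariate(lam)` recovers the Poisson case of Remark 2.2.5, where the ratio equals 1/EW_1 in expectation for every t.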


In order to prove Theorem 2.2.4 we formulate the following lemma of Fatou type:

Lemma 2.2.6. Let Z = (Z_t)_{t ∈ [0,∞)} be a stochastic process such that Z_t : Ω → [0, ∞) for all t ≥ 0. Then

    E lim inf_{t→∞} Z_t ≤ lim inf_{t→∞} EZ_t.

Proof: By monotone convergence, since t ↦ inf_{s≥t} Z_s is non-decreasing, we have

    E lim inf_{t→∞} Z_t = lim_{t→∞} E inf_{s≥t} Z_s.

Obviously, E inf_{s≥t} Z_s ≤ EZ_u for all u ≥ t, which allows us to write

    E inf_{s≥t} Z_s ≤ inf_{u≥t} EZ_u.

This implies the assertion.


Proof of Theorem 2.2.4: Let c_0 := 1/EW_1. From Theorem 2.2.3 we conclude

    c_0 = lim_{t→∞} N(t)/t = lim_{t→∞} inf_{s≥t} N(s)/s  a.s.

Since Z_t := inf_{s≥t} N(s)/s fulfills the requirements of Lemma 2.2.6, we have

    c_0 = E lim_{t→∞} inf_{s≥t} N(s)/s = lim_{t→∞} E inf_{s≥t} N(s)/s ≤ lim inf_{t→∞} EN(t)/t.

We only have to show that lim sup_{t→∞} EN(t)/t ≤ c_0. For c > 0 we define the truncated waiting times

    W_i^c := W_i ∧ c = min(W_i, c)

and get

    T_i^c := W_1^c + ··· + W_i^c ≤ T_i,  i = 1, 2, ...

Since N^(c)(t) := #{i ≥ 1 : T_i^c ≤ t} ≥ N(t), we obtain

    lim sup_{t→∞} EN(t)/t ≤ lim sup_{t→∞} EN^(c)(t)/t.

Assume we could show that

    lim sup_{t→∞} EN^(c)(t)/t ≤ 1/EW_1^c.    (7)

Then EW_1^c → EW_1 for c → ∞ implies

    lim sup_{t→∞} EN(t)/t ≤ c_0.

We start showing (7). Let

    τ(ω) := N^(c)(t)(ω) + 1

and

    F_n := σ(W_1, ..., W_n),  n ≥ 1,  F_0 := {∅, Ω}.

The random variable τ is a stopping time w.r.t. (F_n), i.e.

    {τ = n} = {N^(c)(t) + 1 = n} ∈ F_n.

Hence it follows by Wald's identity that

    E T^c_{N^(c)(t)+1} = E Σ_{i=1}^{τ} W_i^c = Eτ · EW_1^c.

This implies

    lim sup_{t→∞} EN^(c)(t)/t = lim sup_{t→∞} E(N^(c)(t) + 1)/t
      = lim sup_{t→∞} Eτ/t
      = lim sup_{t→∞} E T^c_{N^(c)(t)+1} / (t EW_1^c)
      = lim sup_{t→∞} E(W_1^c + ... + W^c_{N^(c)(t)} + W^c_{N^(c)(t)+1}) / (t EW_1^c)
      ≤ lim sup_{t→∞} (t + c) / (t EW_1^c)
      = 1/EW_1^c,

where we used T^c_{N^(c)(t)} ≤ t and W^c_{N^(c)(t)+1} ≤ c.


2.3 The inhomogeneous Poisson process and the mixed Poisson process

Definition 2.3.1. Let μ : [0, ∞) → [0, ∞) be a function such that

1. μ(0) = 0,
2. μ is non-decreasing, i.e. 0 ≤ s ≤ t implies μ(s) ≤ μ(t),
3. μ is càdlàg.

Then the function μ is called a mean-value function.

[Figure: three examples of mean-value functions: μ(t) = t², a continuous μ(t), and a càdlàg μ(t).]

Definition 2.3.2 (Inhomogeneous Poisson process). A stochastic process N = (N(t))_{t ∈ [0,∞)} is an inhomogeneous Poisson process if and only if it has the following properties:

(P1) N(0) = 0 a.s.

(P2) N has independent increments, i.e. if 0 = t_0 < t_1 < ... < t_n (n ≥ 1), it holds that N(t_n) − N(t_{n−1}), N(t_{n−1}) − N(t_{n−2}), ..., N(t_1) − N(t_0) are independent.

(P_inh3) There exists a mean-value function μ such that for 0 ≤ s < t

    P(N(t) − N(s) = m) = e^{−(μ(t)−μ(s))} (μ(t) − μ(s))^m / m!,  m = 0, 1, 2, ...

(P4) The paths of N are càdlàg a.s.
Theorem 2.3.3 (Time change for the Poisson process). Let μ denote the mean-value function of an inhomogeneous Poisson process N and let Ñ be a homogeneous Poisson process with λ = 1. Then:

(1) (N(t))_{t ∈ [0,∞)} =^d (Ñ(μ(t)))_{t ∈ [0,∞)}.

(2) If μ is continuous, increasing and lim_{t→∞} μ(t) = ∞, then

    (N(μ^{-1}(t)))_{t ∈ [0,∞)} =^d (Ñ(t))_{t ∈ [0,∞)}.

Here μ^{-1}(t) denotes the inverse function of μ, and f =^d g means that the two random variables f and g have the same distribution (but one cannot conclude that f(ω) = g(ω) for ω ∈ Ω).
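Part (1) of the time change theorem suggests a simple way to simulate an inhomogeneous Poisson process: run a rate-1 homogeneous process up to time μ(t). A sketch (the choice μ(t) = t² is an illustrative mean-value function):

```python
import math
import random

def count_up_to(mu_t, rng):
    """Ñ(mu(t)): number of rate-1 arrivals in [0, mu(t)]."""
    total, n = 0.0, 0
    while True:
        total += rng.expovariate(1.0)
        if total > mu_t:
            return n
        n += 1

mu = lambda t: t ** 2              # illustrative mean-value function
rng = random.Random(2)
t = 2.0
counts = [count_up_to(mu(t), rng) for _ in range(100_000)]
mean = sum(counts) / len(counts)
frac_zero = counts.count(0) / len(counts)
print(mean)       # close to mu(2) = 4, since N(t) ~ Pois(mu(t))
print(frac_zero)  # close to e^{-4}
```

Both empirical quantities agree with the Pois(μ(t)) distribution promised by (P_inh3).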
Definition 2.3.4 (Mixed Poisson process). Let Ñ be a homogeneous Poisson process with intensity λ = 1 and let μ be a mean-value function. Let θ : Ω → R be a random variable such that θ > 0 a.s. and θ is independent of Ñ. Then

    N(t) := Ñ(θμ(t)),  t ≥ 0,

is a mixed Poisson process with mixing variable θ.

Proposition 2.3.5. It holds

    var(Ñ(θμ(t))) = EÑ(θμ(t)) (1 + (var(θ)/Eθ) μ(t)).

Proof: We recall that EÑ(t) = var(Ñ(t)) = t and therefore EÑ(t)² = t + t². We conclude

    var(Ñ(θμ(t))) = EÑ(θμ(t))² − (EÑ(θμ(t)))²
                  = E(θμ(t) + θ²μ(t)²) − (Eθ μ(t))²
                  = μ(t) (Eθ + var(θ) μ(t)).

The property var(N (t)) > EN (t) is called over-dispersion. If N is an
inhomogeneous Poisson process, then
var(N (t)) = EN (t).
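Over-dispersion is easy to see in a simulation. Below θ takes the values 0.5 and 1.5 with probability 1/2 each and μ(t) = t = 10 (all choices illustrative), so by Proposition 2.3.5 the variance should be about μ(t)(Eθ + var(θ)μ(t)) = 10·(1 + 0.25·10) = 35, while the mean is 10:

```python
import random

def poisson_sample(lam, rng):
    # count rate-1 arrivals in [0, lam]; the count is Pois(lam)
    total, n = 0.0, 0
    while True:
        total += rng.expovariate(1.0)
        if total > lam:
            return n
        n += 1

rng = random.Random(3)
mu_t = 10.0
samples = []
for _ in range(100_000):
    theta = rng.choice([0.5, 1.5])            # mixing variable
    samples.append(poisson_sample(theta * mu_t, rng))

mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
print(mean, var)   # mean close to 10, variance close to 35 > mean
```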


3. The total claim amount process S(t)

3.1 The Cramér-Lundberg model

Definition 3.1.1. The Cramér-Lundberg model considers the following setting:

1. Claims happen at the claim arrival times 0 < T_1 < T_2 < ... of a Poisson process

    N(t) = #{i ≥ 1 : T_i ≤ t},  t ≥ 0.

2. At time T_i the claim size X_i happens, and the sequence (X_i)_{i=1}^∞ is i.i.d. with X_i ≥ 0.

3. The processes (T_i)_{i=1}^∞ and (X_i)_{i=1}^∞ are independent.

Remark: Are N and (X_i)_{i=1}^∞ independent?

3.2 The renewal model

Definition 3.2.1. The renewal model (or Sparre-Andersen model) considers the following setting:

1. Claims happen at the claim arrival times 0 ≤ T_1 ≤ T_2 ≤ ... of a renewal process

    N(t) = #{i ≥ 1 : T_i ≤ t},  t ≥ 0.

2. At time T_i the claim size X_i happens, and the sequence (X_i)_{i=1}^∞ is i.i.d. with X_i ≥ 0.

3. The processes (T_i)_{i=1}^∞ and (X_i)_{i=1}^∞ are independent.

3.3 Properties of the total claim amount process S(t)

Definition 3.3.1. The total claim amount process is defined as

    S(t) := Σ_{i=1}^{N(t)} X_i,  t ≥ 0.

The insurance company needs information about S(t) in order to determine a premium which covers the losses represented by S(t). In general, the distribution of S(t), i.e.

    P({ω : S(t, ω) ≤ x}),  x ≥ 0,

can only be approximated by numerical methods or simulations, while ES(t) and var(S(t)) are easy to compute exactly. One can establish principles which use only ES(t) and var(S(t)) to calculate the premium. This will be done in Chapter 4.
Proposition 3.3.2. (a) For the Cramér-Lundberg model it holds

(i) ES(t) = λt EX_1,
(ii) var(S(t)) = λt EX_1².

(b) Assume the renewal model. Let EW_1 = 1/λ ∈ (0, ∞) and EX_1 < ∞.

(i) Then lim_{t→∞} ES(t)/t = λ EX_1.
(ii) If var(W_1) < ∞ and var(X_1) < ∞, then

    lim_{t→∞} var(S(t))/t = λ (var(X_1) + λ² var(W_1) (EX_1)²).


Proof: Since

    1 = 1I_Ω(ω) = Σ_{k=0}^∞ 1I_{{N(t)=k}}(ω),

by direct computation,

    ES(t) = E Σ_{i=1}^{N(t)} X_i
          = E Σ_{k=0}^∞ (Σ_{i=1}^k X_i) 1I_{{N(t)=k}}
          = Σ_{k=0}^∞ E(X_1 + ... + X_k) E1I_{{N(t)=k}}
          = EX_1 Σ_{k=0}^∞ k P(N(t) = k)
          = EX_1 EN(t),

where we used the independence of (X_i) and N(t), E(X_1 + ... + X_k) = k EX_1 and E1I_{{N(t)=k}} = P(N(t) = k). In the Cramér-Lundberg model we have EN(t) = λt. For the general case we use the elementary renewal theorem (Theorem 2.2.4) to get the assertion. We continue with

    ES(t)² = E (Σ_{i=1}^{N(t)} X_i)²
           = E (Σ_{k=0}^∞ (Σ_{i=1}^k X_i) 1I_{{N(t)=k}})²
           = E Σ_{k=0}^∞ (Σ_{i=1}^k X_i)² 1I_{{N(t)=k}}
           = Σ_{k=0}^∞ Σ_{i,j=1}^k E X_i X_j 1I_{{N(t)=k}}
           = EX_1² Σ_{k=1}^∞ k P(N(t) = k) + (EX_1)² Σ_{k=0}^∞ k(k − 1) P(N(t) = k)
           = EX_1² EN(t) + (EX_1)² (EN(t)² − EN(t))
           = var(X_1) EN(t) + (EX_1)² EN(t)².


It follows that

    var(S(t)) = ES(t)² − (ES(t))²
              = ES(t)² − (EX_1)² (EN(t))²
              = var(X_1) EN(t) + (EX_1)² var(N(t)).

For the Cramér-Lundberg model it holds EN(t) = var(N(t)) = λt, hence we have var(S(t)) = λt (var(X_1) + (EX_1)²) = λt EX_1². For the renewal model we get

    lim_{t→∞} var(X_1) EN(t) / t = λ var(X_1).

The relation

    lim_{t→∞} var(N(t))/t = var(W_1)/(EW_1)³

is shown in [5, Theorem 2.5.2].
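The Cramér-Lundberg formulas ES(t) = λt EX_1 and var(S(t)) = λt EX_1² can be checked by simulation. Here λ = 2, t = 5 and X_i ~ Exp(1) (illustrative choices), so ES(t) = 10 and var(S(t)) = 2·5·EX_1² = 20:

```python
import random

def total_claim_amount(lam, t, draw_x, rng):
    """One sample of S(t) = X_1 + ... + X_{N(t)} in the Cramér-Lundberg model."""
    arrival, s = 0.0, 0.0
    while True:
        arrival += rng.expovariate(lam)   # next claim arrival time
        if arrival > t:
            return s
        s += draw_x(rng)                  # add the claim size

rng = random.Random(4)
lam, t = 2.0, 5.0
vals = [total_claim_amount(lam, t, lambda r: r.expovariate(1.0), rng)
        for _ in range(100_000)]
mean = sum(vals) / len(vals)
var = sum((v - mean) ** 2 for v in vals) / len(vals)
print(mean, var)   # close to ES(t) = 10 and var(S(t)) = 20
```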

Theorem 3.3.3. The Strong Law of Large Numbers (SLLN) and the Central Limit Theorem (CLT) for (S(t)) in the renewal model can be stated as follows:

(i) SLLN for (S(t)): If EW_1 = 1/λ < ∞ and EX_1 < ∞, then

    lim_{t→∞} S(t)/t = λ EX_1  a.s.

(ii) CLT for (S(t)): If var(W_1) < ∞ and var(X_1) < ∞, then

    sup_{x∈R} | P( (S(t) − ES(t)) / √var(S(t)) ≤ x ) − Φ(x) | → 0 as t → ∞,

where Φ is the distribution function of the standard normal distribution,

    Φ(x) = (1/√(2π)) ∫_{−∞}^x e^{−y²/2} dy.
Proof: (i) We follow the proof of [6, Theorem 3.1.5]. We have shown that

    lim_{t→∞} N(t)/t = λ  a.s.

and it holds

    lim_{t→∞} N(t) = ∞  a.s.

Because of S(t) = X_1 + X_2 + ... + X_{N(t)} and, by the SLLN,

    lim_{n→∞} (X_1 + ... + X_n)/n = EX_1  a.s.,

we get

    lim_{t→∞} S(t)/t = lim_{t→∞} (S(t)/N(t)) (N(t)/t) = λ EX_1  a.s.

(ii) See [4, Theorem 2.5.16].


4. Classical premium calculation principles

The standard problem for an insurance company is to determine the amount of premium such that the losses S(t) are covered. On the other hand, the price should be low enough to be competitive and attract customers.

A first approximation of S(t) is given by ES(t). For the premium income p(t) this implies:

    p(t) < ES(t): the insurance company loses on average,
    p(t) > ES(t): the insurance company gains on average.

A reasonable solution would be

    p(t) = (1 + ρ) ES(t),

where ρ > 0 is the safety loading. Proposition 3.3.2 tells us that in the renewal model with EW_1 = 1/λ it holds ES(t) ≈ λt EX_1 for large t.

4.1 Used principles

(1) The net principle,

    p_NET(t) = ES(t),

defines the premium to be a fair market premium. This, however, can be very risky for the company, as one can conclude from the Central Limit Theorem for S(t).

(2) The expected value principle,

    p_EV(t) = (1 + ρ) ES(t),

which is motivated by the Strong Law of Large Numbers.

(3) The variance principle,

    p_VAR(t) = ES(t) + α var(S(t)),  α > 0.

In the renewal model this principle is asymptotically the same as p_EV(t), since by Proposition 3.3.2 we have that

    lim_{t→∞} p_EV(t) / p_VAR(t)

is a constant. This means that α plays the role of a safety loading ρ.

(4) The standard deviation principle,

    p_SD(t) = ES(t) + α √var(S(t)),  α > 0.
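All four principles only need ES(t) and var(S(t)). A minimal sketch (the loadings ρ = 0.1 and α = 0.05 are arbitrary illustrative choices, not values from the text):

```python
import math

def premiums(es, var_s, rho=0.1, alpha=0.05):
    """Classical premium principles for given ES(t) and var(S(t))."""
    return {
        "net":      es,                              # p_NET
        "expected": (1 + rho) * es,                  # p_EV
        "variance": es + alpha * var_s,              # p_VAR
        "std_dev":  es + alpha * math.sqrt(var_s),   # p_SD
    }

# e.g. a Cramér-Lundberg portfolio with ES(t) = 10 and var(S(t)) = 20:
p = premiums(10.0, 20.0)
print(p)
```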

5. Claim size distributions

Which distribution should one choose to model the claim sizes (X_i)? If one analyzes data of claim sizes that have occurred in the past, for example by a histogram or a QQ-plot, it turns out that the distribution is often heavy-tailed.

Definition 5.1.1. Let F(x) be the distribution function of X_1, i.e.

    F(x) = P({ω : X_1(ω) ≤ x}).

F is called light-tailed if

    lim sup_{x→∞} (1 − F(x)) / e^{−λx} < ∞

for some λ > 0. F is called heavy-tailed if

    lim inf_{x→∞} (1 − F(x)) / e^{−λx} > 0

for all λ > 0.

5.2 Examples

(1) The exponential distribution Exp(λ) is light-tailed for all λ > 0, since the distribution function is F(x) = 1 − e^{−λx}, x > 0, and for 0 < γ < λ

    (1 − F(x)) / e^{−γx} = e^{−λx} / e^{−γx} = e^{−(λ−γ)x},

so that

    sup_{x≥n} e^{−(λ−γ)x} = e^{−(λ−γ)n} → 0, as n → ∞.

(2) The Pareto distribution is heavy-tailed. The distribution function is

    F(x) = 1 − λ^α / (λ + x)^α,  x ≥ 0, α > 0, λ > 0,

or

    F(x) = 1 − b^a / x^a,  x ≥ b > 0, a > 0.

5.3 The QQ-plot

A quantile is the inverse of the distribution function. If the distribution function is not strictly increasing and continuous, we take the left inverse, which is defined by

    F←(t) := inf{x ∈ R : F(x) ≥ t},  0 < t < 1,

and the empirical distribution function of the data X_1, ..., X_n as

    F_n(x) := (1/n) Σ_{i=1}^n 1I_{(−∞,x]}(X_i),  x ∈ R.

It can be shown that if X_1 ~ F and (X_i)_{i=1}^∞ is i.i.d., then

    lim_{n→∞} F_n(t) = F(t)

almost surely for all continuity points t of F. Hence, if X_1 ~ F, then the plot of the points (F←(t), F_n←(t)) should give almost the straight line y = x.

[Figure: a distribution function F(x) and its left inverse.]
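The QQ-plot can be computed from data directly: sort the sample and compare its empirical quantiles with the theoretical ones. A sketch for Exp(λ) data, where F←(t) = −log(1 − t)/λ (the sample size, λ and the particular quantile grid are illustrative choices):

```python
import math
import random

def qq_points(data, quantile_fn, k=9):
    """Pairs (F^{<-}(t), empirical quantile) for t = i/(k+1), i = 1, ..., k.
    If the data comes from F, the points lie close to the line y = x."""
    xs = sorted(data)
    n = len(xs)
    pts = []
    for i in range(1, k + 1):
        t = i / (k + 1)
        emp = xs[min(int(t * n), n - 1)]   # left inverse of the empirical df
        pts.append((quantile_fn(t), emp))
    return pts

lam = 2.0
rng = random.Random(5)
sample = [rng.expovariate(lam) for _ in range(100_000)]
exp_quantile = lambda t: -math.log(1.0 - t) / lam
for theo, emp in qq_points(sample, exp_quantile):
    print(round(theo, 3), round(emp, 3))   # the two columns nearly agree
```

Plotting Pareto data against exponential quantiles instead would bend the points away from the diagonal, which is how heavy tails show up in a QQ-plot.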


6. About modern premium calculation principles

6.1 The exponential principle

The exponential principle is defined as

    p_exp(t) := (1/δ) log E e^{δS(t)}

for some δ > 0, where δ is the risk aversion constant. The function p_exp(t) is defined via the so-called utility theory.
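For a compound Poisson total claim amount the exponential principle has a closed form: since E e^{δS(t)} = exp(λt (m_{X_1}(δ) − 1)) for the Cramér-Lundberg model, one gets p_exp(t) = λt (m_{X_1}(δ) − 1)/δ. The sketch below checks this against a Monte Carlo estimate for claims uniform on (0, 2); all parameter values are illustrative:

```python
import math
import random

def p_exp_closed_form(lam, t, delta, mgf_x):
    # p_exp(t) = (1/delta) log E e^{delta S(t)} = lam*t*(m_X(delta) - 1)/delta
    return lam * t * (mgf_x(delta) - 1.0) / delta

def p_exp_monte_carlo(lam, t, delta, draw_x, rng, samples=200_000):
    total = 0.0
    for _ in range(samples):
        arrival, s = 0.0, 0.0
        while True:                        # simulate S(t) in the CL model
            arrival += rng.expovariate(lam)
            if arrival > t:
                break
            s += draw_x(rng)
        total += math.exp(delta * s)
    return math.log(total / samples) / delta

lam, t, delta = 1.0, 2.0, 0.5
mgf = lambda d: (math.exp(2.0 * d) - 1.0) / (2.0 * d)   # m_X for X ~ U(0, 2)
rng = random.Random(6)
exact = p_exp_closed_form(lam, t, delta, mgf)
approx = p_exp_monte_carlo(lam, t, delta, lambda r: r.uniform(0.0, 2.0), rng)
print(exact, approx)   # the two values nearly agree
```

As δ → 0 the closed form tends to λt EX_1, i.e. the net premium, so δ indeed measures risk aversion.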

6.2 The quantile principle

Suppose F(x) = P({ω : S(t) ≤ x}), x ∈ R, is the distribution function of S(t). In Section 5.3 we defined the left inverse of the distribution function F by

    F←(y) := inf{x ∈ R : F(x) ≥ y},  0 < y < 1.

Then the (1 − ε)-quantile principle is defined as

    p_quant(t) = F←(1 − ε),

where the expression F←(1 − ε) converges for ε → 0 to the probable maximal loss. This setting is related to the theory of Value at Risk.

6.3 The Esscher principle

The Esscher principle is defined as

    p_Ess(t) = E S(t) e^{δS(t)} / E e^{δS(t)},  δ > 0.

In all the above principles the expected value E g(S(t)) needs to be computed for a certain function g(x) in order to compute p(t). This means it is not enough to know ES(t) and var(S(t)); the distribution of S(t) is needed as well.

7. The distribution of the total claim amount S(t)

Theorem 7.1.1. Let (Ω, F, P) be a probability space.

(a) The distribution of a random variable f : Ω → R can be uniquely described by its distribution function F : R → [0, 1],

    F(x) := P({ω : f(ω) ≤ x}),  x ∈ R.

(b) Especially, it holds for g : R → R, such that g^{-1}(B) ∈ B(R) for all B ∈ B(R), that

    E g(f) = ∫_R g(x) dF(x)

(in the sense that, if either side of this expression exists, so does the other, and then they are equal; see [7], pp. 168-169).

(c) The distribution of f can also be determined by its characteristic function (see [9])

    φ_f(u) := E e^{iuf},  u ∈ R,

or by its moment generating function

    m_f(h) := E e^{hf},  h ∈ (−h_0, h_0),

provided that E e^{h_0 f} < ∞ for some h_0 > 0.

Remember: for independent random variables f and g it holds

    φ_{f+g}(u) = φ_f(u) φ_g(u).


7.2 Mixture distributions

Definition 7.2.1 (Mixture distributions). Let F_i, i = 1, ..., n, be distribution functions and p_i ∈ [0, 1] such that Σ_{i=1}^n p_i = 1. Then

    G(x) = p_1 F_1(x) + ... + p_n F_n(x),  x ∈ R,

is called the mixture distribution of F_1, ..., F_n.

Lemma 7.2.2. Let f_1, ..., f_n be random variables with distribution functions F_1, ..., F_n, respectively. Assume that J : Ω → {1, ..., n} is independent of f_1, ..., f_n and P(J = i) = p_i. Then the random variable

    Z = 1I_{{J=1}} f_1 + ... + 1I_{{J=n}} f_n

has the mixture distribution function G.

Definition 7.2.3 (Compound Poisson random variable). Let N ~ Pois(λ) and let (X_i)_{i=1}^∞ be i.i.d. random variables, independent of N. Then

    Z := Σ_{i=1}^N X_i

is called a compound Poisson random variable.


Proposition 7.2.4. The sum of independent compound Poisson random variables is a compound Poisson random variable: Let S_1, ..., S_n, given by

    S_k = Σ_{j=1}^{N_k} X_j^(k),  k = 1, ..., n,

be independent compound Poisson random variables such that N_k ~ Pois(λ_k), λ_k > 0, (X_j^(k))_{j≥1} is i.i.d., and N_k is independent of (X_j^(k))_{j≥1} for all k = 1, ..., n. Then S := S_1 + ... + S_n is a compound Poisson random variable with representation

    S =^d Σ_{l=1}^N Y_l,  N ~ Pois(λ),  λ = λ_1 + ... + λ_n,

where (Y_l)_{l≥1} is an i.i.d. sequence, independent of N, with

    Y_1 =^d Σ_{k=1}^n 1I_{{J=k}} X_1^(k),  P(J = k) = λ_k/λ,

and J is independent of (X_1^(k))_{k=1}^n.


Proof: From Theorem 7.1.1 we know that it is sufficient to show that S and Σ_{l=1}^N Y_l have the same characteristic function. We start with the characteristic function of S_k:

    φ_{S_k}(u) = E e^{iuS_k} = E e^{iu Σ_{j=1}^{N_k} X_j^(k)}
               = E Σ_{m=0}^∞ e^{iu Σ_{j=1}^m X_j^(k)} 1I_{{N_k=m}}
               = Σ_{m=0}^∞ (E e^{iuX_1^(k)})^m P(N_k = m)    (all of these are independent)
               = Σ_{m=0}^∞ (φ_{X_1^(k)}(u))^m e^{−λ_k} λ_k^m / m!
               = e^{−λ_k (1 − φ_{X_1^(k)}(u))}.

Then

    φ_S(u) = E e^{iu(S_1+...+S_n)} = E e^{iuS_1} ··· E e^{iuS_n} = φ_{S_1}(u) ··· φ_{S_n}(u)
           = e^{−λ_1(1−φ_{X_1^(1)}(u))} ··· e^{−λ_n(1−φ_{X_1^(n)}(u))}
           = exp( −λ Σ_{k=1}^n (λ_k/λ) (1 − φ_{X_1^(k)}(u)) ).

Let Σ̃ := Σ_{l=1}^N Y_l. Then, by the same computation as we have done for φ_{S_k}(u), we get

    φ_{Σ̃}(u) = E e^{iuΣ̃} = e^{−λ(1 − φ_{Y_1}(u))}.

Finally,

    φ_{Y_1}(u) = E e^{iu Σ_{k=1}^n 1I_{{J=k}} X_1^(k)}
               = E Σ_{l=1}^n e^{iu Σ_{k=1}^n 1I_{{J=k}} X_1^(k)} 1I_{{J=l}}
               = Σ_{l=1}^n E( e^{iuX_1^(l)} 1I_{{J=l}} )
               = Σ_{l=1}^n φ_{X_1^(l)}(u) λ_l/λ.

Hence φ_S(u) = φ_{Σ̃}(u), which proves the assertion.
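Proposition 7.2.4 can be illustrated numerically: adding two independent compound Poisson variables with deterministic claim sizes 1 and 2 gives the same distribution as a single compound Poisson variable with the mixture claim size. A sketch (λ_1 = 1 and λ_2 = 2 are illustrative):

```python
import random

def poisson_sample(lam, rng):
    # count rate-1 arrivals in [0, lam]; the count is Pois(lam)
    total, n = 0.0, 0
    while True:
        total += rng.expovariate(1.0)
        if total > lam:
            return n
        n += 1

rng = random.Random(7)
m = 100_000
# S = S_1 + S_2 with N_1 ~ Pois(1), claim size 1, and N_2 ~ Pois(2), claim size 2
lhs = [poisson_sample(1.0, rng) + 2 * poisson_sample(2.0, rng) for _ in range(m)]
# mixture representation: N ~ Pois(3), Y = 1 w.p. 1/3 and Y = 2 w.p. 2/3
rhs = []
for _ in range(m):
    n = poisson_sample(3.0, rng)
    rhs.append(sum(1 if rng.random() < 1 / 3 else 2 for _ in range(n)))

for s in range(5):
    print(s, lhs.count(s) / m, rhs.count(s) / m)   # matching relative frequencies
```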

7.3 Applications in insurance

First application

Assume that the claims arrive according to an inhomogeneous Poisson process, i.e.

    N(t) − N(s) ~ Pois(μ(t) − μ(s)).

The total claim amount in year l is

    S_l = Σ_{j=N(l−1)+1}^{N(l)} X_j^(l),  l = 1, ..., n.

Now it can be seen that

    S_l =^d Σ_{j=1}^{N(l)−N(l−1)} X_j^(l),  l = 1, ..., n,

and S_l is compound Poisson distributed. Proposition 7.2.4 implies that the total claim amount of the first n years is again compound Poisson distributed,

where

    S(n) := S_1 + ... + S_n =^d Σ_{i=1}^N Y_i,
    N ~ Pois(μ(n)),
    Y_i =^d 1I_{{J=1}} X_1^(1) + ... + 1I_{{J=n}} X_1^(n),
    P(J = i) = (μ(i) − μ(i−1)) / μ(n).

Hence the total claim amount S(n) of the first n years (with possibly different claim size distributions in each year) has a representation as a compound Poisson random variable.
Second application

We can interpret the random variables

    S_i = Σ_{j=1}^{N_i} X_j^(i),  N_i ~ Pois(λ_i),  i = 1, ..., n,

as the total claim amounts of n independent portfolios for the same fixed period of time. The (X_j^(i))_{j≥1} in the i-th portfolio are i.i.d., but the distributions may differ from portfolio to portfolio (one particular type of car insurance, for example). Then

    S(n) = S_1 + ... + S_n =^d Σ_{i=1}^N Y_i

is again compound Poisson distributed with

    N ~ Pois(λ_1 + ... + λ_n),
    Y_i =^d 1I_{{J=1}} X_1^(1) + ... + 1I_{{J=n}} X_1^(n),
    P(J = l) = λ_l / λ,  λ = λ_1 + ... + λ_n.

7.4 The Panjer recursion: an exact numerical procedure to calculate F_{S(t)}

Let

    S = Σ_{i=1}^N X_i,

where N : Ω → {0, 1, ...} and (X_i)_{i≥1} is i.i.d., with N and (X_i) independent. Then, setting S_0 := 0 and S_n := X_1 + ... + X_n, n ≥ 1, yields

    P(S ≤ x) = Σ_{n=0}^∞ P(S ≤ x, N = n)
             = Σ_{n=0}^∞ P(S ≤ x | N = n) P(N = n)
             = Σ_{n=0}^∞ P(S_n ≤ x) P(N = n)
             = Σ_{n=0}^∞ F_{X_1}^{*n}(x) P(N = n),

where F_{X_1}^{*n}(x) is the n-fold convolution of F_{X_1}, i.e.

    F_{X_1}^{*2}(x) = P(X_1 + X_2 ≤ x) = E 1I_{{X_1+X_2≤x}}
                    = ∫_R ∫_R 1I_{{x_1+x_2≤x}}(x_1, x_2) dF_{X_1}(x_1) dF_{X_2}(x_2)
                    = ∫_R ∫_R 1I_{{x_1≤x−x_2}}(x_1, x_2) dF_{X_1}(x_1) dF_{X_2}(x_2)
                    = ∫_R F_{X_1}(x − x_2) dF_{X_2}(x_2)

(using that X_1 and X_2 are independent) and, by recursion, using F_{X_1} = F_{X_2},

    F_{X_1}^{*(n+1)}(x) := ∫_R F_{X_1}^{*n}(x − y) dF_{X_1}(y).

But the computation of F_{X_1}^{*n}(x) is numerically difficult. However, there is a recursion formula for P(S ≤ x) that holds under certain conditions:


Theorem 7.4.1 (Panjer recursion scheme). Assume the following conditions:

(C1) X_i : Ω → {0, 1, ...},

(C2) for N it holds that

    q_n := P(N = n) = (a + b/n) q_{n−1},  n = 1, 2, ...,

for some a, b ∈ R.

Then, for p_n := P(S = n), n = 0, 1, 2, ...,

    p_0 = q_0 if P(X_1 = 0) = 0, and p_0 = E P(X_1 = 0)^N otherwise,    (1)

    p_n = 1/(1 − aP(X_1 = 0)) Σ_{i=1}^n (a + bi/n) P(X_1 = i) p_{n−i},  n ≥ 1.    (2)
Proof: For p_0,

    p_0 = P(S = 0) = P(S = 0, N = 0) + P(S = 0, N > 0)
        = P(S_0 = 0) P(N = 0) + Σ_{k=1}^∞ P(X_1 + ... + X_k = 0, N = k)
        = q_0 + Σ_{k=1}^∞ P(X_1 = 0)^k P(N = k)
        = Σ_{k=0}^∞ P(X_1 = 0)^k q_k
        = E P(X_1 = 0)^N,

where P(S_0 = 0) = 1 and q_0 = P(X_1 = 0)^0 P(N = 0). This implies (1). For p_n, n ≥ 1, we start from

    p_n = P(S = n) = Σ_{k=1}^∞ P(S_k = n) q_k.

44

CHAPTER 7. THE DISTRIBUTION OF S(T )


Using (C2), this becomes

    p_n = Σ_{k=1}^∞ P(S_k = n) (a + b/k) q_{k−1}.    (3)

Assume P(S_k = n) > 0. Because Q := P(· | S_k = n) is a probability measure, the following holds:

    Σ_{l=0}^n (a + bl/n) P(X_1 = l | S_k = n) = a + (b/n) E_Q X_1
        = a + (b/(nk)) E_Q (X_1 + ... + X_k)
        = a + (b/(nk)) E_Q S_k
        = a + b/k,    (4)

where the second equality holds since X_1, ..., X_k have the same conditional expectation under Q, and the last equality follows from Q(S_k = n) = 1, so E_Q S_k = n. On the other hand, we can express the term a + b/k also by

    Σ_{l=0}^n (a + bl/n) P(X_1 = l | S_k = n)
        = Σ_{l=0}^n (a + bl/n) P(X_1 = l, S_k − X_1 = n − l) / P(S_k = n)
        = Σ_{l=0}^n (a + bl/n) P(X_1 = l) P(S_{k−1} = n − l) / P(S_k = n).    (5)

Thanks to (4) we can now replace the term a + b/k in (3) by the right-hand side of (5), which yields

    p_n = Σ_{k=1}^∞ Σ_{l=0}^n (a + bl/n) P(X_1 = l) P(S_{k−1} = n − l) q_{k−1}
        = Σ_{l=0}^n (a + bl/n) P(X_1 = l) Σ_{k=1}^∞ P(S_{k−1} = n − l) q_{k−1}
        = Σ_{l=0}^n (a + bl/n) P(X_1 = l) P(S = n − l)
        = a P(X_1 = 0) p_n + Σ_{l=1}^n (a + bl/n) P(X_1 = l) p_{n−l},

which gives equation (2):

    p_n = 1/(1 − aP(X_1 = 0)) Σ_{l=1}^n (a + bl/n) P(X_1 = l) p_{n−l}.

Remark 7.4.2.

- The Panjer recursion only works for distributions of X_i on {0, 1, 2, ...}, i.e. Σ_{k=0}^∞ P(X_i = k) = 1 (or, by scaling, on a lattice {0, d, 2d, ...} for d > 0 fixed).

- Traditionally, the distributions used to model X_i have a density h_{X_i}, and then ∫_{{0,1,2,...}} h_{X_i}(x) dx = 0. On the other hand, claim sizes are expressed in terms of prices, so they take values on a lattice. The density h_{X_i}(x) could be approximated by a distribution on a lattice, but how large would the approximation error then be?

- N can only be Poisson, binomially or negative binomially distributed.
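A sketch of the recursion for a Poisson claim number, where q_n = e^{−λ} λ^n/n! satisfies (C2) with a = 0 and b = λ; the claim size distribution below is an illustrative example with P(X_1 = 0) = 0, so p_0 = q_0 = e^{−λ}:

```python
import math

def panjer(a, b, p0, f, n_max):
    """Panjer recursion. f[i] = P(X_1 = i); returns [P(S = 0), ..., P(S = n_max)]."""
    p = [p0]
    for n in range(1, n_max + 1):
        s = sum((a + b * i / n) * f[i] * p[n - i]
                for i in range(1, min(n + 1, len(f))))
        p.append(s / (1.0 - a * f[0]))
    return p

lam = 2.0
f = [0.0, 0.5, 0.3, 0.2]        # P(X_1 = 1) = 0.5, P(X_1 = 2) = 0.3, P(X_1 = 3) = 0.2
p = panjer(0.0, lam, math.exp(-lam), f, 60)
print(p[:3])     # e.g. p_1 = P(N = 1) * 0.5 = e^{-2}
print(sum(p))    # close to 1 once n_max is large enough
```

A quick hand check: p_2 = (λ·1/2)·0.5·p_1 + (λ·2/2)·0.3·p_0 = 1.1·e^{−2}, which agrees with summing P(N = 1)P(X_1 = 2) + P(N = 2)P(X_1 = 1)².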

7.5 Approximation of F_{S(t)} using the Central Limit Theorem

Assume that the renewal model is used and that

    S(t) = Σ_{i=1}^{N(t)} X_i,  t ≥ 0.

In Theorem 3.3.3 the Central Limit Theorem was used to state that if var(W_1) < ∞ and var(X_1) < ∞, then

    sup_{x∈R} | P( (S(t) − ES(t)) / √var(S(t)) ≤ x ) − Φ(x) | → 0 as t → ∞.


Now, by setting

    x := (y − ES(t)) / √(var(S(t))),

for large t the approximation

    P(S(t) ≤ y) ≈ Φ( (y − ES(t)) / √(var(S(t))) )

can be used.
Warning: This approximation is not good enough to estimate P(S(t) > y) for large y; see [6], Section 3.3.4.
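As a small illustration (a sketch, not part of the notes), the approximation can be evaluated with the error function. In the Cramér–Lundberg model ES(t) = λt EX_1 and var(S(t)) = λt EX_1²; the parameter values below are arbitrary.

```python
import math

def clt_approx(y, mean_S, var_S):
    """Normal approximation P(S(t) <= y) ~ Phi((y - ES(t)) / sqrt(var S(t)))."""
    return 0.5 * (1.0 + math.erf((y - mean_S) / math.sqrt(2.0 * var_S)))

# Cramer-Lundberg moments: E S(t) = lam*t*EX1, var S(t) = lam*t*EX1^2
lam, t = 10.0, 100.0
EX1, EX1sq = 1.0, 2.0              # Exp(1) claims (illustrative)
mean_S, var_S = lam * t * EX1, lam * t * EX1sq
prob_at_mean = clt_approx(mean_S, mean_S, var_S)   # = Phi(0) = 0.5
```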

7.6 Monte Carlo approximations of F_{S(t)}

a) The Monte Carlo method:
If the distributions of N(t) and X_1 are known, then an i.i.d. sample

    N_1, ..., N_m  (N_k ~ N(t), k = 1, ..., m)

and i.i.d. samples

    X_i^{(j)} ~ X_1,  i = 1, ..., N_j,  j = 1, ..., m,

can be simulated on a computer and the sums

    S_1 = Σ_{i=1}^{N_1} X_i^{(1)},  ...,  S_m = Σ_{i=1}^{N_m} X_i^{(m)}

calculated. Then S_j ~ S(t), and the S_j are independent. By the Strong Law of Large Numbers,

    p̂_m := (1/m) Σ_{j=1}^m 1I_A(S_j) → P(S(t) ∈ A) = p  a.s., as m → ∞.

It can be shown that this does not work well for small values of p (see [6], Section 3.3.5 for details).
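A minimal sketch of the Monte Carlo method (illustrative parameters, not from the notes), for the compound Poisson case with Exp(γ) claims:

```python
import random

random.seed(0)

def simulate_S(lam_t, gamma):
    """One sample of S(t): N(t) ~ Poisson(lam_t) drawn by counting Exp(1)
    arrivals in [0, lam_t], then N(t) claims X_i ~ Exp(gamma) are summed."""
    n, clock = 0, random.expovariate(1.0)
    while clock <= lam_t:
        n += 1
        clock += random.expovariate(1.0)
    return sum(random.expovariate(gamma) for _ in range(n))

lam_t, gamma, m, y = 5.0, 1.0, 20000, 5.0      # here E S(t) = 5
p_hat = sum(simulate_S(lam_t, gamma) <= y for _ in range(m)) / m
```

Estimating P(S(t) ≤ ES(t)) this way is reliable; tail probabilities P(S(t) > y) for large y would need far larger samples, which is the weakness mentioned above.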


b) The bootstrap method
The bootstrap method is a statistical simulation technique that doesn't require the distribution of the X_i. The term bootstrap is a reference to Münchhausen's tale, where the baron escaped from a swamp by pulling himself up by his own bootstraps. Similarly, the bootstrap method only uses the given data.
Assume there is a sample, i.e. for some fixed ω ∈ Ω we have the real numbers

    x_1 = X_1(ω), ..., x_n = X_n(ω)

of the random variables X_1, ..., X_n, which are supposed to be i.i.d. Then a draw with replacement can be made, as illustrated in the following example:
Assume n = 3 and x_1 = 4, x_2 = 1, x_3 = 10. Drawing with replacement means we choose a sequence of triples, where each triple consists of numbers chosen at random from {4, 1, 10}. For example, we could get:

    (x_1, x_2, x_3) → (x_2, x_1, x_1), (x_3, x_1, x_2), (x_3, x_2, x_2), ...

We denote the k-th triple by X(k) = (X_1(k), X_2(k), X_3(k)), k ∈ {1, 2, ...}. Then, for example, the sample mean of the k-th triple,

    X̄(k) := (X_1(k) + X_2(k) + X_3(k)) / 3,

has values between min{x_1, x_2, x_3} = 1 and max{x_1, x_2, x_3} = 10, but values near (x_1 + x_2 + x_3)/3 = 5 are more likely than the minimum or the maximum, and the SLLN holds:

    lim_{N→∞} (1/N) Σ_{i=1}^N X̄(i) = (x_1 + x_2 + x_3)/3  a.s.

Moreover, it holds in general that

    var(X̄(i)) = var(X_1) / n.



Verifying this is left as an exercise.
In insurance, the sum of the claim sizes X_1 + ... + X_n = n X̄_n is the target of interest, and with this the total claim amount

    S(t) = Σ_{i=1}^{N(t)} X_i = Σ_{n=0}^∞ ( Σ_{i=1}^n X_i ) 1I_{{N(t)=n}}.

Here, the bootstrap method is used to calculate confidence bands for (the parameters of) the distributions of the X_i and of N(t).
Warning: The bootstrap method doesn't always work! In general, simulation should only be used if everything else fails. Often better approximation results can be obtained by using the Central Limit Theorem. So all the methods presented should be used with great care, as each of them has advantages and disadvantages. After all, "nobody is perfect" also applies to approximation methods.
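A minimal sketch of the draw with replacement for the n = 3 example above (the confidence-band quantiles are an illustrative choice, not from the notes):

```python
import random

random.seed(1)
data = [4.0, 1.0, 10.0]                       # x1, x2, x3 from the example

def bootstrap_means(sample, m):
    """m bootstrap replicates of the sample mean: draw len(sample) values
    with replacement from the sample and average them."""
    n = len(sample)
    return sorted(sum(random.choice(sample) for _ in range(n)) / n
                  for _ in range(m))

means = bootstrap_means(data, 10000)
lo, hi = means[250], means[9750]              # 95% bootstrap band (illustrative)
```

The sorted replicate means range over [1, 10] and concentrate near 5, exactly as described above; the empirical quantiles (lo, hi) give a crude confidence band for the mean.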

8. Reinsurance treaties
Reinsurance treaties are mutual agreements between different insurance companies to reduce the risk in a particular insurance portfolio. Reinsurance can be considered as insurance for the insurance company.
Reinsurance is used if there is a risk of rare but huge claims. Examples usually involve a catastrophe such as an earthquake, a nuclear power station disaster, an industrial fire, war, a tanker accident, etc.
According to Wikipedia, the world's largest reinsurance company in 2009 was Munich Re, based in Germany, with gross written premiums worth over $31.4 billion, followed by Swiss Re (Switzerland), General Re (USA) and Hannover Re (Germany).
There are two different types of reinsurance:

A  Random walk type reinsurance
1. Proportional reinsurance: The reinsurer pays an agreed proportion p of the claims,

    R_prop(t) = p S(t).

2. Stop-loss reinsurance: The reinsurer covers the losses that exceed an agreed amount K,

    R_SL(t) = (S(t) − K)^+,

where x^+ = max{x, 0}.
3. Excess-of-loss reinsurance: The reinsurer covers the losses that exceed an agreed amount D for each claim separately,

    R_ExL(t) = Σ_{i=1}^{N(t)} (X_i − D)^+,

where D is the deductible.



B  Extreme value type reinsurance
Extreme value type reinsurance covers the largest claims in a portfolio. The ordering of the claims X_1, ..., X_{N(t)} is denoted by

    X_{(1)} ≤ ... ≤ X_{(N(t))}.

1. Largest claims reinsurance: The largest claims reinsurance covers the k largest claims arriving within the time frame [0, t],

    R_LC(t) = Σ_{i=1}^k X_{(N(t)−i+1)}.

2. ECOMOR reinsurance (Excédent du coût moyen relatif = excess of the average cost): Define k = ⌊(N(t)+1)/2⌋. Then

    R_ECOMOR(t) = Σ_{i=1}^{N(t)} (X_{(N(t)−i+1)} − X_{(N(t)−k+1)})^+
                = Σ_{i=1}^{k−1} X_{(N(t)−i+1)} − (k−1) X_{(N(t)−k+1)}.

Treaties of random walk type can be handled as before. For example, with R_SL(t) = (S(t) − K)^+ and x ≥ 0,

    P(R_SL(t) ≤ x) = P(S(t) ≤ K) + P(K < S(t) ≤ x + K),

so if F_{S(t)} is known, so is F_{R_SL(t)}.
Treaties of extreme value type are dealt with by extreme value theory techniques.
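The treaty payouts above are elementary to compute for a fixed claim list. The following sketch is not from the notes; the parameters p, K, D are illustrative, and for simplicity ECOMOR is evaluated with a fixed k = 2 instead of k = ⌊(N(t)+1)/2⌋.

```python
def treaty_payouts(claims, p=0.5, K=100.0, D=10.0, k=2):
    """Reinsurer's payout under the treaties of this chapter for one period,
    given the individual claims X_1, ..., X_{N(t)}."""
    S = sum(claims)
    ordered = sorted(claims, reverse=True)            # largest claims first
    R_prop = p * S                                    # proportional
    R_SL   = max(S - K, 0.0)                          # stop-loss
    R_ExL  = sum(max(x - D, 0.0) for x in claims)     # excess-of-loss
    R_LC   = sum(ordered[:k])                         # k largest claims
    R_EC   = sum(max(x - ordered[k - 1], 0.0) for x in claims)  # ECOMOR
    return R_prop, R_SL, R_ExL, R_LC, R_EC

claims = [3.0, 50.0, 12.0, 7.0, 30.0]
R_prop, R_SL, R_ExL, R_LC, R_EC = treaty_payouts(claims)
```

For this claim list, S(t) = 102, and the ECOMOR payout equals the second formula above: 50 − 30 = 20.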

9. Probability of ruin

9.1 The risk process

If the renewal model is assumed, then the total claim amount process is

    S(t) = Σ_{i=1}^{N(t)} X_i,  t ≥ 0.

Let p(t) = ct be the premium income function, where c is the premium rate. The risk process (or surplus process) is then defined by

    U(t) := u + p(t) − S(t),  t ≥ 0,

where U(t) is the insurer's capital balance at time t, and u is the initial capital.

[Figure: a sample path of the risk process U(t) with initial capital U(0) = 4.]
52

CHAPTER 9. PROBABILITY OF RUIN

Definition 9.1.1 (Ruin, ruin time, ruin probability).

    ruin(u) := {ω ∈ Ω : U(t, ω) < 0 for some t > 0}
             = the event that U ever falls below zero.

Ruin time:

    T := inf{t > 0 : U(t) < 0}
       = the time when the process falls below zero for the first time.

The ruin probability is given by

    ψ(u) = P(ruin(u)) = P(T < ∞).

Remark 9.1.2.
1. T : Ω → ℝ ∪ {∞} is an extended random variable (i.e. T can also assume ∞).
2. In the literature ψ(u) is often written as ψ(u) = P(ruin | U(0) = u) to indicate the dependence on the initial capital u.
3. Ruin can only occur at the times t = T_n, n ≥ 1. This implies

    ruin(u) = {ω : T(ω) < ∞}
            = {ω : inf_{t>0} U(t, ω) < 0}
            = {ω : inf_{n≥1} U(T_n(ω), ω) < 0}
            = {ω : inf_{n≥1} (u + cT_n − S(T_n)) < 0},

where the last equality follows from the fact that U(t) = u + ct − S(t). Since in the renewal model it was assumed that W_i > 0 a.s., it follows that

    N(T_n) = #{i ≥ 1 : T_i ≤ T_n} = n  and  S(T_n) = Σ_{i=1}^{N(T_n)} X_i = Σ_{i=1}^n X_i,


where T_n = W_1 + ... + W_n. This implies

    {ω : inf_{n≥1} (u + cT_n − S(T_n)) < 0} = {ω : inf_{n≥1} (u + cT_n − Σ_{i=1}^n X_i) < 0}.

By setting

    Z_n := X_n − cW_n,  n ≥ 1,

and

    G_n := Z_1 + ... + Z_n,  n ≥ 1,  G_0 := 0,

it follows that

    {ω : T(ω) < ∞} = {ω : inf_{n≥1} (−G_n) < −u} = {ω : sup_{n≥1} G_n > u},

and for the ruin probability the equality

    ψ(u) = P( sup_{n≥1} G_n > u )

holds. The objective is to achieve the following properties:
- avoiding a situation where the probability of ruin ψ(u) = 1;
- ψ(u) should be small if the initial capital u is large.
By the Strong Law of Large Numbers (with the assumption that E|Z_1| < ∞),

    lim_{n→∞} G_n / n = EZ_1  almost surely.

If EZ_1 > 0, then

    G_n → ∞ a.s., n → ∞,

because G_n ≈ n EZ_1 for large n. This means the ruin probability ψ(u) = 1 for all u > 0 if EZ_1 > 0.


Theorem 9.1.3. If EW_1 < ∞, EX_1 < ∞ and

    EZ_1 = EX_1 − cEW_1 ≥ 0,

then ψ(u) = 1, i.e. for every fixed u > 0 ruin occurs with probability 1.

Proof:
The case EZ_1 > 0 is clear from above. The case EZ_1 = 0 we show under the additional assumption that EZ_1² < ∞. Let

    A_m := { limsup_{n→∞} (Z_1 + ... + Z_n)/√n ≥ m }.

Notice that for fixed ω and n_0 ≥ 1,

    limsup_{n→∞} (Z_1(ω) + ... + Z_n(ω))/√n ≥ m
    iff
    limsup_{n≥n_0} (Z_{n_0}(ω) + ... + Z_n(ω))/√n ≥ m.

Hence

    A_m ∈ ∩_{n_0=1}^∞ σ(Z_{n_0}, Z_{n_0+1}, ...).

The sequence (Z_n)_{n≥1} consists of independent random variables. By the 0–1 law of Kolmogorov (see [3, Proposition 2.1.6]) we conclude that P(A_m) ∈ {0, 1}. Since

    P( limsup_{n→∞} (Z_1 + ... + Z_n)/√n = ∞ ) = lim_{m→∞} P(A_m),

it suffices to show P(A_m) > 0 for every m: then P(A_m) = 1 for all m, hence sup_{n≥1} G_n = ∞ a.s. and ψ(u) = 1 for every u > 0. We have

    A_m = { limsup_{n→∞} (Z_1 + ... + Z_n)/√n ≥ m } ⊇ ∩_{n=1}^∞ ∪_{k=n}^∞ { (Z_1 + ... + Z_k)/√k ≥ m }.

By Fatou's Lemma and the Central Limit Theorem,

    P(A_m) ≥ limsup_{k→∞} P( (Z_1 + ... + Z_k)/√k ≥ m ) = ∫_m^∞ e^{−x²/(2σ²)} / √(2πσ²) dx > 0,

where σ² = EZ_1².  □

Definition 9.1.4 (Net profit condition). The renewal model satisfies the net profit condition (NPC) if and only if

    EZ_1 = EX_1 − cEW_1 < 0.                                    (NPC)

The consequence of (NPC) is that on average more premium flows into the portfolio of the company than claim sizes flow out: We have

    G_n = −p(T_n) + S(T_n) = −c(W_1 + ... + W_n) + X_1 + ... + X_n,

which implies

    EG_n = n EZ_1 < 0.

Theorem 9.1.3 implies that any insurance company should choose the premium p(t) = ct in such a way that EZ_1 < 0. In that case there is hope that the ruin probability is less than 1.

9.2 Bounds for the ruin probability in the small claim size case

In this section it is assumed that the renewal model is used and the net profit condition holds (i.e. EX_1 − cEW_1 < 0).
Recall from Theorem 7.1.1 that for a random variable f : (Ω, F) → (ℝ, B(ℝ)) the function

    m_f(h) = E e^{hf}

was called the moment-generating function if it exists at least for h in a small interval (−h_0, h_0).


Remark 9.2.1.

a) The map h ↦ E e^{hf} is, in fact, the two-sided Laplace transform.

b) If m_f(h) exists in (−h_0, h_0) and, moreover, is m-times differentiable (such that we can switch E and d^m/dh^m), then it holds that

    d^m/dh^m m_f(h) = E f^m e^{hf}  and therefore  d^m/dh^m m_f(0) = E f^m.

We will say that the small claim condition holds if and only if there exists h_0 > 0 such that

    m_{X_1}(h) = E e^{hX_1} exists for all h ∈ (−h_0, h_0).

Theorem 9.2.2 (The Lundberg inequality). Assume that m_{Z_1} exists at least in a small interval (−h_0, h_0). If there exists a solution r ∈ (0, h_0) to

    m_{Z_1}(r) = E e^{r(X_1 − cW_1)} = 1,

then for each u > 0 it holds that

    ψ(u) ≤ e^{−ru},

where r is called the Lundberg coefficient.

The result implies that, if the small claim condition holds and the initial capital u is large, there is in principle no danger of ruin.
Remark 9.2.3.

a) If m_{Z_1} exists in a neighborhood of 0, then for λ > 0 and ε > 0,

    P(Z_1 ≥ ε) = P(e^{λZ_1} ≥ e^{λε}) ≤ e^{−λε} m_{Z_1}(λ)

and

    P(Z_1 ≤ −ε) = P(e^{−λZ_1} ≥ e^{λε}) ≤ e^{−λε} m_{Z_1}(−λ),

which implies that

    P(|Z_1| ≥ ε) ≤ e^{−λε} [m_{Z_1}(λ) + m_{Z_1}(−λ)].

b) It can be shown that if r exists, it is unique, which follows from the fact that m_{Z_1} is convex (if it exists): We have

    e^{((1−θ)r_0 + θr_1)Z_1} ≤ (1−θ) e^{r_0 Z_1} + θ e^{r_1 Z_1}.

Moreover, m_{Z_1}(0) = 1 and, by Jensen's inequality,

    m_{Z_1}(h) = E e^{Z_1 h} ≥ e^{EZ_1 h},

such that (assuming (NPC) holds, i.e. EZ_1 < 0) we get

    lim_{h→−∞} m_{Z_1}(h) ≥ lim_{h→−∞} e^{EZ_1 h} = ∞.

If m_{Z_1} exists on (−∞, ∞) and m_{Z_1}(h) = 1 for two values h ∈ {r, s} ⊆ (0, ∞), then, by convexity,

    m_{Z_1}(h) = 1 for all h ∈ [0, r ∨ s].

From a) we have

    P(|Z_1| > ε) ≤ c e^{−ε/c}  for some c > 0,

and it holds that

    E|Z_1|^n = ∫_0^∞ P(|Z_1|^n > ε) dε = n ∫_0^∞ P(|Z_1| > ε) ε^{n−1} dε
             ≤ n ∫_0^∞ c e^{−ε/c} ε^{n−1} dε
             = n c^{n+1} ∫_0^∞ e^{−ε} ε^{n−1} dε
             = n! c^{n+1}.

Because of

    m_{Z_1}(h) = E e^{hZ_1} = E Σ_{n=0}^∞ (hZ_1)^n / n! ≤ c Σ_{n=0}^∞ (|h|c)^n < ∞  for |h| < 1/c,

we conclude that h ↦ m_{Z_1}(h) is analytic in a neighborhood of zero. By the uniqueness of analytic functions we get m_{Z_1} ≡ 1, which implies Z_1 = 0 a.s.

c) In practice, r is hard to compute from the distributions of X_1 and W_1. Therefore it is often approximated numerically or by Monte Carlo methods.
Proof:
Set G_k := Z_1 + ... + Z_k and

    ψ_n(u) := P( max_{1≤k≤n} G_k > u ),  u > 0.

Because of ψ_n(u) ↑ ψ(u) for n → ∞ it is sufficient to show

    ψ_n(u) ≤ e^{−ru},  n ≥ 1, u > 0.

For n = 1 we get the inequality by

    ψ_1(u) = P(Z_1 > u) = P(e^{rZ_1} > e^{ru}) ≤ e^{−ru} E e^{rZ_1} = e^{−ru}.

Now we assume that the assertion holds for n. We have

    ψ_{n+1}(u) = P( max_{1≤k≤n+1} G_k > u )
        = P(Z_1 > u) + P( max_{1≤k≤n+1} G_k > u, Z_1 ≤ u )
        = P(Z_1 > u) + P( max_{2≤k≤n+1} (G_k − Z_1) > u − Z_1, Z_1 ≤ u )
        = P(Z_1 > u) + ∫_{(−∞,u]} P( max_{1≤k≤n} G_k > u − x ) dF_{Z_1}(x),

where we have used for the last line that max_{2≤k≤n+1}(G_k − Z_1) and Z_1 are independent. We estimate the first term,

    P(Z_1 > u) = ∫_{(u,∞)} dF_{Z_1}(x) ≤ ∫_{(u,∞)} e^{r(x−u)} dF_{Z_1}(x),

and proceed with the second term as follows:

    ∫_{(−∞,u]} P( max_{1≤k≤n} G_k > u − x ) dF_{Z_1}(x) = ∫_{(−∞,u]} ψ_n(u−x) dF_{Z_1}(x)
        ≤ ∫_{(−∞,u]} e^{−r(u−x)} dF_{Z_1}(x).

Consequently,

    ψ_{n+1}(u) ≤ ∫_{(u,∞)} e^{r(x−u)} dF_{Z_1}(x) + ∫_{(−∞,u]} e^{−r(u−x)} dF_{Z_1}(x) = e^{−ru} m_{Z_1}(r) = e^{−ru}.  □

We consider an example where it is possible to compute the Lundberg coefficient:

Example 9.2.4. Let X_1, X_2, ... ~ Exp(γ) and W_1, W_2, ... ~ Exp(λ). Then

    m_{Z_1}(h) = E e^{h(X_1 − cW_1)} = E e^{hX_1} E e^{−hcW_1} = (γ/(γ−h)) · (λ/(λ+ch))  for −λ/c < h < γ,

since

    E e^{hX_1} = ∫_0^∞ e^{hx} γ e^{−γx} dx = γ/(γ−h).

The (NPC) condition reads as

    0 > EZ_1 = EX_1 − cEW_1 = 1/γ − c/λ,

i.e. cγ > λ. Hence m_{Z_1} exists on (−λ/c, γ) and for r > 0 we get

    (γ/(γ−r)) · (λ/(λ+cr)) = 1
    ⟺ γλ = γλ + cγr − λr − cr²
    ⟺ r = γ − λ/c.

Consequently,

    ψ(u) ≤ e^{−ru} = e^{−(γ−λ/c)u}.

Applying the expected value principle p(t) = (1+ρ) ES(t) = (1+ρ) (λ/γ) t we get

    λ/c = λγ/((1+ρ)λ) = γ/(1+ρ).

This implies

    ψ(u) ≤ e^{−ru} = e^{−uγρ/(1+ρ)},

where one should notice that the intensity λ does not appear in the bound any more!
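Since m_{Z_1} is convex with m_{Z_1}(0) = 1 and negative slope at 0 under (NPC), the Lundberg coefficient can also be found numerically by bisection, as suggested in Remark 9.2.3 c). A sketch (not from the notes), checked against the closed form r = γ − λ/c of Example 9.2.4 with illustrative parameters:

```python
def lundberg_coefficient(m_Z, r_hi, iters=200):
    """Bisection for the root r in (0, r_hi) of m_Z(r) = 1: below the root
    the convex function m_Z is < 1, above it m_Z is > 1."""
    lo, hi = 1e-12, r_hi - 1e-12
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if m_Z(mid) < 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

gamma, lam, c = 2.0, 1.0, 1.0                 # NPC: c*gamma > lam (illustrative)
m_Z = lambda h: (gamma / (gamma - h)) * (lam / (lam + c * h))
r = lundberg_coefficient(m_Z, gamma)          # closed form: gamma - lam/c = 1
```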
The following theorem considers the special case of the Cramér–Lundberg model:

Theorem 9.2.5 (Cramér's ruin bound). Assume that the Cramér–Lundberg model is used and the net profit condition (NPC) holds. In addition, let the distribution of X_1 have a density, assume m_{X_1}(h) exists in a small neighborhood (−h_0, h_0) of the origin, and assume it holds for the Lundberg coefficient r that −h_0 < r < h_0. Then there exists a constant c > 0 such that

    lim_{u→∞} e^{ru} ψ(u) = c,

with

    c = ρ EX_1 ( r ∫_0^∞ y e^{ry} (1 − F_{X_1}(y)) dy )^{−1}.

We introduce the survival probability

    Φ(u) = 1 − ψ(u).

Lemma 9.2.6 (Fundamental integral equation for the survival probability). Assume that for the Cramér–Lundberg model (NPC) holds and EX_1 < ∞. Then (applying the expected value principle)

    Φ(u) = Φ(0) + 1/((1+ρ)EX_1) ∫_0^u F̄_{X_1}(y) Φ(u−y) dy,              (1)

where F̄_{X_1}(y) = P(X_1 > y).

Remark 9.2.7. Let the assumptions of Lemma 9.2.6 hold.

a) The assertion can be reformulated as follows. Let

    F_{X_1,I}(y) := (1/EX_1) ∫_0^y F̄_{X_1}(z) dz,  y ≥ 0.

The function F_{X_1,I} is a distribution function, since

    lim_{y→∞} F_{X_1,I}(y) = (1/EX_1) ∫_0^∞ F̄_{X_1}(z) dz = (1/EX_1) ∫_0^∞ P(X_1 > z) dz = 1.

Hence (1) can be written as

    Φ(u) = Φ(0) + 1/(1+ρ) ∫_0^u Φ(u−y) dF_{X_1,I}(y).

b) It holds that lim_{u→∞} Φ(u) = 1. This can be seen as follows:

    lim_{u→∞} Φ(u) = lim_{u→∞} (1 − ψ(u)) = lim_{u→∞} (1 − P(sup_{k≥1} G_k > u)) = lim_{u→∞} P(sup_{k≥1} G_k ≤ u),

where G_k = Z_1 + ... + Z_k. Since EZ_1 < 0 the SLLN implies

    lim_{k→∞} G_k = −∞  a.s.

Therefore we have sup_{k≥1} G_k < ∞ a.s. and

    lim_{u→∞} P( sup_{k≥1} G_k ≤ u ) = 1.

c) It holds that Φ(0) = ρ/(1+ρ). Indeed, because of b) and Lemma 9.2.6 we may conclude (exchanging limit and integral by dominated convergence) that

    1 = Φ(0) + 1/(1+ρ) lim_{u→∞} ∫_0^∞ 1I_{[0,u]}(y) Φ(u−y) dF_{X_1,I}(y)
      = Φ(0) + 1/(1+ρ) ∫_0^∞ lim_{u→∞} ( 1I_{[0,u]}(y) Φ(u−y) ) dF_{X_1,I}(y)
      = Φ(0) + 1/(1+ρ) ∫_0^∞ dF_{X_1,I}(y)
      = Φ(0) + 1/(1+ρ).

Proof of Lemma 9.2.6:

a) We first show that

    Φ(u) = (λ/c) e^{λu/c} ∫_{[u,∞)} e^{−λz/c} ∫_{[0,z]} Φ(z−x) dF_{X_1}(x) dz.    (2)

We consider

    Φ(u) = P( sup_{n≥1} G_n ≤ u )
         = P( Z_1 ≤ u, G_n − Z_1 ≤ u − Z_1 for all n ≥ 2 )
         = ∫_{[0,∞)} ∫_{[0,u+cw]} P( G_n − Z_1 ≤ u − (x − cw) for all n ≥ 2 ) dF_{X_1}(x) dF_{W_1}(w),

where we used for the last line that x − cw ≤ u and x ≥ 0 iff 0 ≤ x ≤ u + cw. We use that G_n − Z_1 ~ Z_1 + ... + Z_{n−1} (and that this sequence is independent of (X_1, W_1)) and substitute z = u + cw:

    Φ(u) = ∫_{[0,∞)} ∫_{[0,u+cw]} Φ(u − x + cw) dF_{X_1}(x) λ e^{−λw} dw
         = (λ/c) ∫_{[u,∞)} ∫_{[0,z]} Φ(z − x) dF_{X_1}(x) e^{−λ(z−u)/c} dz.
b) In this step we show

    Φ(t) = Φ(0) + (λ/c) ∫_0^t Φ(t−x) F̄_{X_1}(x) dx.

Differentiation of (2) leads to

    Φ'(u) = (λ/c) Φ(u) − (λ/c) ∫_{[0,u]} Φ(u−x) dF_{X_1}(x),

such that

    Φ(t) − Φ(0) − (λ/c) ∫_0^t Φ(u) du
        = −(λ/c) ∫_0^t ∫_{[0,u]} Φ(u−x) dF_{X_1}(x) du
        = −(λ/c) ∫_0^t ( [Φ(u−x) F_{X_1}(x)]_{x=0}^{x=u} + ∫_0^u Φ'(u−x) F_{X_1}(x) dx ) du
        = −(λ/c) ∫_0^t ( Φ(0) F_{X_1}(u) − Φ(u) F_{X_1}(0) + ∫_0^u Φ'(u−x) F_{X_1}(x) dx ) du
        = −(λ/c) Φ(0) ∫_0^t F_{X_1}(u) du − (λ/c) ∫_0^t ∫_{[x,t]} Φ'(u−x) F_{X_1}(x) du dx
        = −(λ/c) ∫_0^t Φ(t−x) F_{X_1}(x) dx,

using F_{X_1}(0) = 0, Fubini's theorem and ∫_x^t Φ'(u−x) du = Φ(t−x) − Φ(0). This implies

    Φ(t) − Φ(0) = (λ/c) ∫_0^t Φ(u) du − (λ/c) ∫_0^t Φ(t−x) F_{X_1}(x) dx
               = (λ/c) ∫_0^t Φ(t−x) (1 − F_{X_1}(x)) dx,

since ∫_0^t Φ(u) du = ∫_0^t Φ(t−x) dx. Applying the expected value principle, λ/c = 1/((1+ρ)EX_1), yields the assertion.  □

Lemma 9.2.8 (Smith's renewal equation). It holds that

    e^{ru} ψ(u) = q e^{ru} F̄_{X_1,I}(u) + ∫_0^u e^{r(u−x)} ψ(u−x) dF^{(r)}(x),

where q := 1/(1+ρ), r is the Lundberg coefficient and

    F^{(r)}(x) = (q/EX_1) ∫_0^x e^{ry} F̄_{X_1}(y) dy.

The function F^{(r)} is called the Esscher transform of F_{X_1}.


Proof:
a) We first show that F^{(r)} is a distribution function.

    lim_{x→∞} F^{(r)}(x) = (q/EX_1) ∫_0^∞ e^{ry} F̄_{X_1}(y) dy = (q/EX_1) (1/r) (E e^{rX_1} − 1),

since

    E e^{rX_1} = ∫_0^∞ P(e^{rX_1} > t) dt
               = ∫_{−∞}^∞ P(e^{rX_1} > e^{ry}) r e^{ry} dy
               = ∫_{−∞}^0 r e^{ry} dy + ∫_0^∞ P(X_1 > y) r e^{ry} dy
               = 1 + r ∫_0^∞ F̄_{X_1}(y) e^{ry} dy.

From E e^{r(X_1 − cW_1)} = 1 we conclude

    lim_{x→∞} F^{(r)}(x) = (q/EX_1) (1/r) ( 1/E e^{−rcW_1} − 1 )
                        = (q/EX_1) (1/r) ( (rc + λ)/λ − 1 )
                        = qc/(λ EX_1) = c/((1+ρ)λ EX_1) = 1.

b) From Lemma 9.2.6 we conclude that

    ψ(u) = q F̄_{X_1,I}(u) + ∫_0^u ψ(u−y) d(qF_{X_1,I})(y).

Indeed, by (1) and Remark 9.2.7 we have

    Φ(u) = ρ/(1+ρ) + 1/(1+ρ) ∫_0^u (1 − ψ(u−y)) F̄_{X_1}(y)/EX_1 dy.

Hence

    ψ(u) = 1 − Φ(u) = 1/(1+ρ) − 1/(1+ρ) ∫_0^u (1 − ψ(u−y)) F̄_{X_1}(y)/EX_1 dy
         = q − q ∫_0^u F̄_{X_1}(y)/EX_1 dy + q ∫_0^u ψ(u−y) F̄_{X_1}(y)/EX_1 dy
         = q F̄_{X_1,I}(u) + ∫_0^u ψ(u−y) d(qF_{X_1,I})(y).

This equation would have the structure of a renewal equation (see the renewal equation (3) below) if only q = 1. Therefore we consider

    e^{ru} ψ(u) = e^{ru} q F̄_{X_1,I}(u) + ∫_0^u e^{r(u−y)} ψ(u−y) e^{ry} d(qF_{X_1,I})(y)
               = e^{ru} q F̄_{X_1,I}(u) + ∫_0^u e^{r(u−y)} ψ(u−y) dF^{(r)}(y).

Since F^{(r)} is a distribution function, we have indeed a renewal equation.  □



The following assertion is a generalization of the famous Blackwell renewal
Lemma.

Lemma 9.2.9 (Smith's key renewal lemma). Let H : ℝ → ℝ be a continuous distribution function such that H(x) = 0 for x ≤ 0 and

    0 < ∫_ℝ x dH(x) < ∞.

Let
a) k : ℝ → [0, ∞) be Riemann integrable such that k(x) = 0 for x < 0,
b) lim_{x→∞} k(x) = 0.
Then the renewal equation

    R(u) = k(u) + ∫_0^u R(u−y) dH(y)                                    (3)

has, in the class of all functions on (0, ∞) which are bounded on finite intervals, exactly one solution R, and it holds that

    lim_{u→∞} R(u) = ( ∫_ℝ x dH(x) )^{−1} ∫_0^∞ k(u) du.

Proof: See [8], page 202 ff.


Proof of Theorem 9.2.5:
We apply Lemma 9.2.9, setting

    R(u) := e^{ru} ψ(u),
    k(u) := q e^{ru} F̄_{X_1,I}(u),
    H(x) := F^{(r)}(x).

We get, with α := ( ∫_ℝ x dF^{(r)}(x) )^{−1},

    lim_{u→∞} e^{ru} ψ(u) = α ∫_0^∞ q e^{ry} F̄_{X_1,I}(y) dy
        = α ∫_0^∞ q e^{ry} ( 1 − (1/EX_1) ∫_0^y F̄_{X_1}(z) dz ) dy
        = α (q/EX_1) ∫_0^∞ e^{ry} ∫_y^∞ F̄_{X_1}(z) dz dy
        = α (q/EX_1) ∫_0^∞ ( ∫_0^z e^{ry} dy ) F̄_{X_1}(z) dz
        = α (q/r) ( (1/EX_1) ∫_0^∞ e^{rz} F̄_{X_1}(z) dz − (1/EX_1) ∫_0^∞ F̄_{X_1}(z) dz )
        = α (q/r) ( 1/q − 1 )
        = α (1−q)/r = α ρ/(r(1+ρ)),

where (1/EX_1) ∫_0^∞ e^{rz} F̄_{X_1}(z) dz = 1/q by step a) of the proof of Lemma 9.2.8. Finally,

    1/α = ∫_ℝ x dF^{(r)}(x) = (q/EX_1) ∫_0^∞ y e^{ry} F̄_{X_1}(y) dy = 1/((1+ρ)EX_1) ∫_0^∞ y e^{ry} F̄_{X_1}(y) dy

implies

    lim_{u→∞} e^{ru} ψ(u) = ρ EX_1 ( r ∫_0^∞ y e^{ry} F̄_{X_1}(y) dy )^{−1}.  □


Remark 9.2.10. The condition (NPC), EZ_1 = EX_1 − cEW_1 < 0, can in the Cramér–Lundberg model (assuming the expected value principle) also be formulated as

    c = (1+ρ) λ EX_1  for some ρ > 0.

9.3 An asymptotics for the ruin probability in the large claim size case

We will need the following theorem.

Theorem 9.3.1. Assume the Cramér–Lundberg model and that the condition (NPC) holds. Let (X_{I,k})_{k≥1} be i.i.d. random variables with distribution function F_{X_1,I} and

    Φ(u) = ρ/(1+ρ) ( 1 + Σ_{n=1}^∞ (1+ρ)^{−n} P(X_{I,1} + ... + X_{I,n} ≤ u) ),  u ≥ 0.

Then Φ is, in the class

    G := { g : ℝ → [0, ∞) : g non-decreasing, bounded, right-continuous with
           g(x) = 0 for x < 0 and g(0) = ρ/(1+ρ) },

the unique solution to

    Φ(u) = Φ(0) + 1/(1+ρ) ∫_0^u Φ(u−y) dF_{X_1,I}(y).

Proof:
Uniqueness: Assume Φ_1, Φ_2 are solutions and δ = Φ_1 − Φ_2. Then

    δ(u) = 1/(1+ρ) ∫_0^u δ(u−y) dF_{X_1,I}(y)
         = 1/(1+ρ) ∫_0^u δ(u−y) F̄_{X_1}(y)/EX_1 dy
         = 1/((1+ρ)EX_1) ∫_0^u δ(y) F̄_{X_1}(u−y) dy

and, since F̄_{X_1} ≤ 1,

    |δ(u)| ≤ 1/((1+ρ)EX_1) ∫_0^u |δ(y)| dy.

Gronwall's Lemma implies that |δ(u)| = 0 for all u ∈ ℝ.  □


Definition 9.3.2. A distribution function F : ℝ → [0, 1] such that F(0) = 0 and F(x) < 1 for all x > 0 is called subexponential if and only if for i.i.d. (X_i) with X_i ~ F it holds that

    lim_{x→∞} P(X_1 + ... + X_n > x) / P(max_{1≤k≤n} X_k > x) = 1  for all n ≥ 2.

We denote the class of subexponential distribution functions by S.


Lemma 9.3.3. F ∈ S if and only if

    lim_{x→∞} P(X_1 + ... + X_n > x) / P(X_1 > x) = n  for all n ≥ 1.

Proof:
It holds for S_n = X_1 + ... + X_n that

    P(S_n > x) / P(max_{1≤k≤n} X_k > x) = P(S_n > x) / (1 − P(max_{1≤k≤n} X_k ≤ x))
        = P(S_n > x) / (1 − P(X_1 ≤ x)^n)
        = P(S_n > x) / (1 − (1 − P(X_1 > x))^n)
        = P(S_n > x) / ( P(X_1 > x) n (1 + o(1)) ).  □

Proposition 9.3.4.

a) For F_X ∈ S it holds that

    lim_{x→∞} F̄_X(x−y) / F̄_X(x) = 1  for all y > 0.

b) If F_X ∈ S, then for all ε > 0 it holds that

    e^{εx} P(X > x) → ∞  for x → ∞.

c) If F_X ∈ S, then for all ε > 0 there exists a K > 0 such that

    P(S_n > x) / P(X_1 > x) ≤ K(1+ε)^n  for all n ≥ 2 and x ≥ 0.

Proof. a) Let F := F_X. For 0 ≤ y ≤ x < ∞ we have

    P(X_1 + X_2 > x) / P(X_1 > x)
        = ( ∫_ℝ P(t + X > x) dF(t) ) / P(X_1 > x)
        = ( P(X_1 > x) + ∫_{(−∞,y]} P(t + X > x) dF(t) + ∫_{(y,x]} P(t + X > x) dF(t) ) / P(X_1 > x)
        ≥ 1 + F(y) + ( F̄(x−y)/F̄(x) ) (F(x) − F(y)).

We choose x large enough such that F(x) − F(y) > 0 and observe that

    1 ≤ F̄(x−y)/F̄(x) ≤ ( P(X_1 + X_2 > x)/P(X_1 > x) − 1 − F(y) ) / (F(x) − F(y)) → (2 − 1 − F(y))/(1 − F(y)) = 1

as x → ∞.
b) A measurable function L : [0, ∞) → (0, ∞) is called slowly varying if

    lim_{ξ→∞} L(cξ)/L(ξ) = 1  for all c > 0.

Such a function can be represented as

    L(ξ) = c_0(ξ) exp( ∫_{x_0}^ξ ε(t)/t dt ),  ξ ≥ x_0,

for some x_0 > 0, where

    lim_{t→∞} ε(t) = 0  and  lim_{ξ→∞} c_0(ξ) = c_0 > 0.

For any slowly varying function L it holds that

    lim_{ξ→∞} ξ^δ L(ξ) = ∞  for all δ > 0.

Indeed, choosing x_0 so large that sup_{t≥x_0} |ε(t)| < δ,

    lim_{ξ→∞} ξ^δ c_0(ξ) exp( ∫_{x_0}^ξ ε(t)/t dt )
        = lim_{ξ→∞} c_0(ξ) exp( δ log ξ + ∫_{x_0}^ξ ε(t)/t dt )
        ≥ lim_{ξ→∞} c_0(ξ) exp( δ log ξ − sup_{t≥x_0} |ε(t)| ∫_{x_0}^ξ dt/t )
        = lim_{ξ→∞} c_0(ξ) exp( δ log ξ − (log ξ − log x_0) sup_{t≥x_0} |ε(t)| )
        = ∞.

Now we continue with the proof of b). Let L(ξ) := F̄(log ξ). For any c > 0,

    lim_{ξ→∞} L(cξ)/L(ξ) = lim_{ξ→∞} F̄(log c + log ξ)/F̄(log ξ) = 1

by a), which implies that L is slowly varying. Therefore,

    lim_{ξ→∞} ξ^ε F̄(log ξ) = lim_{x→∞} e^{εx} F̄(x) = ∞.

The proof of c) can be found in [4, Lemma 1.3.5].


For the examples below we introduce the next lemma.

Lemma 9.3.5. Let X_1, X_2 be independent positive random variables such that for some α > 0

    F̄_{X_i}(x) = L_i(x)/x^α,

where L_1, L_2 are slowly varying. Then

    F̄_{X_1+X_2}(x) = x^{−α} (L_1(x) + L_2(x)) (1 + o(1)).
Proof:
For 0 < δ < 1/2 we have

    {X_1 + X_2 > x} ⊆ {X_1 > (1−δ)x} ∪ {X_2 > (1−δ)x} ∪ {X_1 > δx, X_2 > δx}

and hence

    P(X_1 + X_2 > x) ≤ F̄_{X_1}((1−δ)x) + F̄_{X_2}((1−δ)x) + F̄_{X_1}(δx) F̄_{X_2}(δx)
        = ( L_1((1−δ)x)/((1−δ)x)^α + L_2((1−δ)x)/((1−δ)x)^α ) (1 + o(1))
        = ( F̄_{X_1}(x) L_1((1−δ)x)/L_1(x) + F̄_{X_2}(x) L_2((1−δ)x)/L_2(x) ) (1 + o(1)) (1−δ)^{−α},

where the product term F̄_{X_1}(δx) F̄_{X_2}(δx) is of lower order. From this we get

    limsup_{x→∞} P(X_1 + X_2 > x) / ( F̄_{X_1}(x) + F̄_{X_2}(x) ) ≤ (1−δ)^{−α}.

On the other hand,

    P(X_1 + X_2 > x) ≥ P({X_1 > x} ∪ {X_2 > x})
        = P(X_1 > x) + P(X_2 > x) − P(X_1 > x) P(X_2 > x)
        ≥ ( F̄_{X_1}(x) + F̄_{X_2}(x) ) ( 1 − F̄_{X_1}(x) )

and hence

    liminf_{x→∞} P(X_1 + X_2 > x) / ( F̄_{X_1}(x) + F̄_{X_2}(x) ) ≥ 1.

Since 0 < δ < 1/2 was arbitrary, consequently

    lim_{x→∞} P(X_1 + X_2 > x) / ( F̄_{X_1}(x) + F̄_{X_2}(x) ) = 1.  □


Definition 9.3.6. If there exists a slowly varying function L and some α > 0 such that for a positive random variable X it holds that

    F̄_X(x) = L(x)/x^α,

then F_X is called regularly varying with index α, or of Pareto type with exponent α.

Corollary 9.3.7. If F_X is regularly varying with index α, then F_X is subexponential.

Proof:
An iteration of Lemma 9.3.5 implies

    F̄_{X_1+...+X_n}(x) / F̄_X(x) → (L(x) + ... + L(x)) / L(x) = n.  □


Example 9.3.8. a) The exponential distribution with parameter λ > 0 is not subexponential.
b) The Pareto distribution

    F(x) = 1 − κ^α/(κ + x)^α,  x ≥ 0, α > 0, κ > 0,

is subexponential.
c) The Weibull distribution

    F(x) = 1 − e^{−cx^r},  0 < r < 1, x ≥ 0,

is subexponential, but not of Pareto type.

Proof. a) One shows, for example, that relation a) of Proposition 9.3.4 is not satisfied.
b) We put

    P(X > x) = κ^α/(κ + x)^α = (1/x^α) · κ^α x^α/(κ + x)^α =: L(x)/x^α

and conclude

    L(cx)/L(x) = ( c (x + κ)/(κ + cx) )^α → 1  for x → ∞.

c) For the proof that the Weibull distribution is subexponential see [4, Sections 1.4.1 and A3.2]. The Weibull distribution cannot be of Pareto type because it has all moments, while distributions of Pareto type do not.
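The defining limit of Lemma 9.3.3 (case n = 2) can be checked numerically for the Pareto distribution via the convolution identity P(X_1 + X_2 > x) = F̄(x) + ∫_0^x F̄(x−y) f(y) dy. A sketch (not from the notes) with illustrative parameters κ = 1, α = 2:

```python
def pareto_tail(x, kappa=1.0, alpha=2.0):
    """Survival function F(x) = kappa^alpha / (kappa + x)^alpha."""
    return (kappa / (kappa + x)) ** alpha

def pareto_density(x, kappa=1.0, alpha=2.0):
    return alpha * kappa ** alpha / (kappa + x) ** (alpha + 1)

def tail_ratio(x, steps=20000):
    """P(X1 + X2 > x) / P(X1 > x) via the convolution identity
    P(X1 + X2 > x) = F(x) + integral_0^x F(x - y) f(y) dy (midpoint rule)."""
    h = x / steps
    conv = sum(pareto_tail(x - (i + 0.5) * h) * pareto_density((i + 0.5) * h)
               for i in range(steps)) * h
    return (pareto_tail(x) + conv) / pareto_tail(x)

ratio = tail_ratio(200.0)        # approaches 2 as x grows
```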
Proposition 9.3.9. Let X_1, X_2 ~ F_X be independent. It holds that

    F_X ∈ S
    ⟺ lim_{x→∞} P(X_1 + X_2 > x) / P(max{X_1, X_2} > x) = 1
    ⟺ lim_{x→∞} P(X_1 + X_2 > x) / P(X_1 > x) = 2.

Proof. We show by induction that lim_{x→∞} P(X_1 + X_2 > x)/P(X_1 > x) = 2 implies F_X ∈ S. The case n = 2 in Lemma 9.3.3 is our assumption. We assume that

    lim_{x→∞} P(X_1 + ... + X_k > x) / P(X_1 > x) = k.

Hence for all ε ∈ (0, k) there exists an x_0 > 0 such that for x ≥ x_0

    (k − ε) P(X_1 > x) ≤ P(X_1 + ... + X_k > x) ≤ (k + ε) P(X_1 > x).

Moreover,

    P(X_1 + ... + X_k + X_{k+1} > x) / P(X_1 > x)
        = 1 + ( ∫_0^x P(X_1 + ... + X_k > x − y) dF_X(y) ) / P(X_1 > x)
        ≤ 1 + (k + ε) ( ∫_0^{x−x_0} P(X_1 > x − y) dF_X(y) ) / P(X_1 > x)
              + ( P(X > x − x_0) − P(X > x) ) / P(X_1 > x).

From the proof of Proposition 9.3.4 it follows that

    lim_{x→∞} ( P(X > x − x_0) − P(X > x) ) / P(X_1 > x) = 0.

Moreover,

    ( ∫_0^x P(X_1 > x − y) dF_X(y) ) / P(X_1 > x) = P(X_1 + X_2 > x)/P(X_1 > x) − 1 → 1,

which, letting ε ↓ 0, implies

    limsup_{x→∞} P(X_1 + ... + X_k + X_{k+1} > x) / P(X_1 > x) ≤ k + 1.

The other inequality can be shown similarly. For the remaining equivalence see the proof of Lemma 9.3.3.  □

Summary:

    Pareto type ⊂ S ⊂ heavy-tailed distributions.

F light-tailed  ⟺  limsup_{x→∞} F̄(x) e^{λx} < ∞ for some λ > 0.
F heavy-tailed  ⟺  liminf_{x→∞} F̄(x) e^{λx} > 0 for all λ > 0.

An important feature of X ∈ S is that the moment generating function m_X does not exist in a neighborhood of 0.
We proceed with the main result of this section. For this we recall that for the Cramér–Lundberg model, Proposition 3.3.2 a) implies

    ES(t) = EN(t) EX_1 = λt EX_1 = (EX_1/EW_1) t.

Assuming the expected value (or variance) principle,

    p_{EV}(t) = (1+ρ) ES(t) = (1+ρ) (EX_1/EW_1) t.

Choosing the premium rate c, i.e. p(t) = ct, it follows that

    c = (1+ρ) EX_1/EW_1,                                        (4)

which implies

    EX_1 (1+ρ) − c EW_1 = 0

and further

    EX_1 − c EW_1 < 0,

which means that the net profit condition holds. Equation (4) implies that

    ρ = c EW_1/EX_1 − 1.

Theorem 9.3.10. Assume the Cramér–Lundberg model is used, EX_1 < ∞, the net profit condition (NPC) is fulfilled, X_1 has a density, and the distribution function

    F_{X_1,I}(y) := (1/EX_1) ∫_0^y (1 − F_{X_1}(z)) dz,  y > 0,

is subexponential. Then

    lim_{u→∞} ψ(u) / (1 − F_{X_1,I}(u)) = 1/ρ.

Proof:
From Lemma 9.2.6 we know that the survival probability solves

    Φ(u) = Φ(0) + 1/((1+ρ)EX_1) ∫_0^u Φ(u−y) F̄_{X_1}(y) dy.

The function Φ is bounded, non-decreasing and right-continuous, since

    Φ(u) = P( sup_{k≥1} G_k ≤ u ).

Therefore we can apply Theorem 9.3.1 and get

    Φ(u) = ρ/(1+ρ) ( 1 + Σ_{n=1}^∞ (1+ρ)^{−n} P(X_{I,1} + ... + X_{I,n} ≤ u) )

and

    ψ(u) = ρ/(1+ρ) Σ_{n=1}^∞ (1+ρ)^{−n} P(X_{I,1} + ... + X_{I,n} > u),

since ρ/(1+ρ) Σ_{n=0}^∞ (1+ρ)^{−n} = 1. Hence

    ψ(u) / F̄_{X_1,I}(u) = ρ/(1+ρ) Σ_{n=1}^∞ (1+ρ)^{−n} P(X_{I,1} + ... + X_{I,n} > u) / F̄_{X_1,I}(u).

By assumption,

    lim_{u→∞} P(X_{I,1} + ... + X_{I,n} > u) / F̄_{X_1,I}(u) = n.

In order to be able to exchange summation and limit, we use the estimate of Proposition 9.3.4 c),

    P(X_{I,1} + ... + X_{I,n} > u) ≤ K(1+ε)^n F̄_{X_1,I}(u).

For ε ∈ (0, ρ) we have

    Σ_{n=1}^∞ (1+ρ)^{−n} P(X_{I,1} + ... + X_{I,n} > u) / F̄_{X_1,I}(u) ≤ K Σ_{n=1}^∞ (1+ε)^n/(1+ρ)^n < ∞.

Therefore,

    lim_{u→∞} ψ(u)/F̄_{X_1,I}(u) = ρ/(1+ρ) Σ_{n=1}^∞ (1+ρ)^{−n} lim_{u→∞} P(X_{I,1} + ... + X_{I,n} > u)/F̄_{X_1,I}(u)
        = ρ/(1+ρ) Σ_{n=1}^∞ (1+ρ)^{−n} n = 1/ρ.  □

Is there a criterion for F_{X,I} ∈ S?

Definition 9.3.11. A positive random variable X belongs to S* if and only if
a) EX = μ ∈ (0, ∞),
b) lim_{x→∞} ∫_0^x ( F̄(x−y)/F̄(x) ) F̄(y) dy = 2μ.

Proposition 9.3.12. If X ∈ S*, then X ∈ S and F_{X,I} ∈ S.

Proof. From the definition we conclude that for all ε > 0 there exists a constant x_0 > 0 such that for t ≥ x_0

    2μ(1−ε) F̄(t) ≤ ∫_0^t F̄(t−y) F̄(y) dy ≤ 2μ(1+ε) F̄(t).

Then, for any x > x_0,

    2μ(1−ε) ∫_x^∞ F̄(t) dt ≤ ∫_x^∞ ∫_0^t F̄(t−y) F̄(y) dy dt ≤ 2μ(1+ε) ∫_x^∞ F̄(t) dt,

which implies

    2(1−ε) ≤ ( ∫_x^∞ ∫_0^t F̄(t−y) F̄(y) dy dt / μ² ) / ( ∫_x^∞ F̄(t) dt / μ ) ≤ 2(1+ε)

and

    2(1−ε) ≤ ( 1 − F_{I,X} * F_{I,X}(x) ) / F̄_{I,X}(x) ≤ 2(1+ε).

Proposition 9.3.9 implies that F_{X,I} ∈ S.  □


Example 9.3.13. The Weibull distribution

    P(X > x) = e^{−cx^r},  x ≥ 0,

with fixed c > 0 and r ∈ (0, 1) belongs to S*.

Proof. Let M(x) := cx^r. By symmetry of the integrand we show

    ∫_0^x ( F̄(x−y)/F̄(x) ) F̄(y) dy = 2 ∫_0^{x/2} e^{M(x) − M(x−y) − M(y)} dy → 2μ

for x → ∞, where μ = EX = ∫_0^∞ e^{−M(y)} dy. For 0 < y < x/2 we have, by the concavity of M,

    1 ≤ e^{M(x) − M(x−y)} ≤ e^{y M'(x/2)},

and hence

    ∫_0^{x/2} e^{−M(y)} dy ≤ ∫_0^{x/2} e^{M(x) − M(x−y) − M(y)} dy ≤ ∫_0^{x/2} e^{y M'(x/2) − M(y)} dy.

Since the LHS and the RHS of the inequality both converge to μ as x → ∞ (note M'(x/2) → 0), we get

    lim_{x→∞} ∫_0^{x/2} e^{M(x) − M(x−y) − M(y)} dy = μ.  □

Corollary 9.3.14. Assume the Cramér–Lundberg model is used, X_1 is Weibull distributed,

    P(X_1 > x) = e^{−cx^r},  x ≥ 0,

with fixed c > 0 and r ∈ (0, 1), and the net profit condition (NPC) is fulfilled. Then

    lim_{u→∞} ψ(u) / ∫_u^∞ e^{−cx^r} dx = 1 / ( ρ ∫_0^∞ e^{−cx^r} dx ).


10. Problems

1. Poisson distribution
Let f : Ω → {0, 1, 2, ...} be Poisson distributed with intensity λ > 0, i.e.

    P({ω : f(ω) = k}) = e^{−λ} λ^k/k!,  k ∈ ℕ.

Show that
(a) Ef = λ,
(b) var(f) = λ,  (var(f) := E(f − Ef)²).
2. Order statistics
Let Y_1, Y_2, ..., Y_n be i.i.d. random variables with a continuous, strictly increasing distribution function F. Put

    Y_n* := max{Y_1, Y_2, ..., Y_n},
    Y_{n−1}*(ω) := second largest value of Y_1(ω), Y_2(ω), ..., Y_n(ω),

and so on. The random variable Y_i* is called the i-th order statistic of (Y_1, Y_2, ..., Y_n).
(a) Compute P(at least 2 of the Y_i are equal).
(b) Show that the random variables X_1, ..., X_n, given by X_i := F(Y_i), are independent and uniformly distributed on [0, 1].
(c) Show that the distribution of (X_1*, ..., X_n*) has the density

    f(x_1, ..., x_n) = n! 1I_{{0 ≤ x_1 ≤ x_2 ≤ ... ≤ x_n ≤ 1}}.


3. Characteristic functions
For a random variable X : Ω → ℝ we define

    X̂(t) := E e^{itX}  for t ∈ ℝ.

The function X̂ : ℝ → ℂ is called the characteristic function of the random variable X.
(a) Compute the characteristic function of the exponential distribution with parameter λ > 0.
(b) Compute the characteristic function of the Poisson distribution with intensity λ > 0.
(c) Compute the characteristic function of the Gamma distribution with parameters (n, λ), where λ > 0 and n ∈ {1, 2, ...}, i.e. of the measure on B(ℝ) with density

    p(x) := 1/(n−1)! · 1I_{[0,∞)}(x) λ^n x^{n−1} e^{−λx}.

(d) Show that the characteristic function of any random variable is positive semidefinite, i.e. it holds for t_1, ..., t_n ∈ ℝ and c_1, ..., c_n ∈ ℂ that

    Σ_{k,l=1}^n X̂(t_k − t_l) c_k c̄_l ≥ 0.

(e) Show that for independent random variables X_1, ..., X_n : Ω → ℝ the relation

    (X_1 + ... + X_n)^(t) = X̂_1(t) ··· X̂_n(t)

holds true.
(f) If X_1, ..., X_n are independent and exponentially distributed with parameter λ > 0, then show that

    P(X_1 + ... + X_n > x) = e^{−λx} Σ_{k=0}^{n−1} (λx)^k/k!.
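The identity in (f) can be checked by simulation; a sketch with illustrative parameters (not part of the problem set):

```python
import math, random

random.seed(3)

def erlang_tail(n, lam, x):
    """P(X1 + ... + Xn > x) = e^{-lam*x} * sum_{k=0}^{n-1} (lam*x)^k / k!"""
    return math.exp(-lam * x) * sum((lam * x) ** k / math.factorial(k)
                                    for k in range(n))

n, lam, x, m = 3, 1.0, 2.0, 100000
exact = erlang_tail(n, lam, x)
estimate = sum(sum(random.expovariate(lam) for _ in range(n)) > x
               for _ in range(m)) / m
```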

4. Let 0 ≤ T_1 ≤ T_2 ≤ ... be the jump times of a Poisson process. Show that for any 0 ≤ x_1 ≤ x_2 ≤ ... ≤ x_n, n ≥ 1,

    P(T_1 ≤ x_1, ..., T_n ≤ x_n)
        = ∫_0^{x_1} λe^{−λz_1} ∫_0^{x_2−z_1} λe^{−λz_2} ··· ∫_0^{x_n−z_1−...−z_{n−1}} λe^{−λz_n} dz_n ... dz_1.

5. Let W_1, W_2, ... be independent, exponentially distributed random variables with parameter λ > 0. Show that

    P(W_1 ≤ x_1, ..., W_1 + ... + W_n ≤ x_n)
        = ∫_0^{x_1} λe^{−λz_1} ∫_0^{x_2−z_1} λe^{−λz_2} ··· ∫_0^{x_n−z_1−...−z_{n−1}} λe^{−λz_n} dz_n ... dz_1.

6. Prove Lemma 2.1.2 by induction.

7. Let N be a Poisson process. Show that

    P(N(s) ≥ 1, N(t) ≥ 2) = ∫_0^s λe^{−λx} ∫_0^{t−x} λe^{−λy} dy dx,  0 < s < t.

8. Let N be a Poisson process and

    T_n(ω) := inf{t > 0 : N(t, ω) = n}

the time of the n-th jump (n ≥ 1) of N.
(a) Show that for 0 = b_0 ≤ a_1 < b_1 ≤ a_2 < ... ≤ a_n < b_n it holds that

    P( ∩_{k=1}^n {a_k < T_k ≤ b_k} )
        = ∫_{a_1}^{b_1} ∫_{a_2}^{b_2} ··· ∫_{a_n}^{b_n} λ^n e^{−λz_n} dz_n ... dz_1
        = ∫_{a_1}^{b_1} ∫_{a_2−x_1}^{b_2−x_1} ··· ∫_{a_n−(x_1+...+x_{n−1})}^{b_n−(x_1+...+x_{n−1})} λ^n e^{−λ(x_1+...+x_n)} dx_n ... dx_1.

Hint: Notice the relation

    ∩_{k=1}^n {a_k < T_k ≤ b_k}
        = ∩_{k=1}^{n−1} {N(a_k) − N(b_{k−1}) = 0, N(b_k) − N(a_k) = 1}
          ∩ {N(a_n) − N(b_{n−1}) = 0, N(b_n) − N(a_n) ≥ 1}.

(b) Use (a) to prove that for W_n := T_n − T_{n−1} it holds that

    P(W_1 ≤ t_1, ..., W_n ≤ t_n) = Π_{i=1}^n (1 − e^{−λt_i}).

9. Show that for 0 ≤ t_1 ≤ t_2 ≤ ... ≤ t_n, n ≥ 1, we have

    P(T_1 ≤ t_1, ..., T_n ≤ t_n)
        = ∫_0^{t_1} λe^{−λz_1} ∫_0^{t_2−z_1} λe^{−λz_2} ··· ∫_0^{t_n−z_1−...−z_{n−1}} λe^{−λz_n} dz_n ... dz_1.

Hint: Use

    {T_1 ≤ t_1, ..., T_n ≤ t_n} = {N(t_1) ≥ 1, ..., N(t_n) ≥ n}.
10. Let N_1 and N_2 be independent Poisson processes with intensities λ_1 > 0 and λ_2 > 0, respectively. (Note that the processes X and Y are called independent if the σ-algebras σ(X) and σ(Y) are independent, i.e. if it holds that

    P(A ∩ B) = P(A)P(B)  for all A ∈ σ(X), B ∈ σ(Y),

where σ(X) (σ(Y)) denotes the smallest σ-algebra such that all X(t), t ≥ 0 (Y(t), t ≥ 0) are measurable.) Show that N = N_1 + N_2 is again a Poisson process.

11. Show that the probability that the independent Poisson processes N_1 and N_2 ever jump at the same time is zero.
12. Stopping times
Let (Ω, F, P) be a probability space and (F_n)_{n=0}^∞ a filtration, i.e. (F_n)_{n=0}^∞ is a sequence of σ-algebras with the properties
(a) F_n ⊆ F for all n ∈ ℕ,
(b) F_n ⊆ F_{n+1} for all n ∈ ℕ.
Let τ : Ω → ℕ ∪ {∞} be a stopping time w.r.t. (F_n)_{n=0}^∞, i.e. it holds that

    {ω : τ(ω) = n} ∈ F_n,  n ∈ ℕ.

Show that τ : Ω → ℕ ∪ {∞} is a stopping time w.r.t. (F_n)_{n=0}^∞ iff

    {ω : τ(ω) ≤ n} ∈ F_n,  n ∈ ℕ.

13. Wald's formula
Let $(X_n)_{n=1}^{\infty}$ be a sequence of i.i.d. integrable (i.e. $E|X_n| < \infty$) random variables on $(\Omega, \mathcal{F}, P)$. We define the $\sigma$-algebras $\mathcal{F}_0 := \{\emptyset, \Omega\}$ and $\mathcal{F}_n := \sigma(X_1, \dots, X_n)$, $n \in \mathbb{N}$. Let $\tau: \Omega \to \mathbb{N} \cup \{\infty\}$ be a stopping time w.r.t. $(\mathcal{F}_n)_{n \in \mathbb{N}}$ such that $E\tau < \infty$. Show that
$$E \sum_{n=1}^{\tau} X_n = E\tau \, EX_1.$$
Hint: Use $\sum_{n=1}^{\tau} X_n = \sum_{n=1}^{\infty} X_n \mathbb{1}_{\{\tau \ge n\}}$ and check that $X_n$ and $\mathbb{1}_{\{\tau \ge n\}}$ are independent.
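A Monte Carlo sketch of Wald's formula under an illustrative setup (not from the text): $X_n \sim \mathrm{Exp}(1)$ and $\tau = \inf\{n : X_1 + \cdots + X_n > \text{level}\}$, which is a stopping time with $E\tau < \infty$.

```python
import random

random.seed(2)
level, trials = 5.0, 50_000

sum_S, sum_tau = 0.0, 0
for _ in range(trials):
    s, n = 0.0, 0
    while s <= level:          # run until the partial sum first exceeds the level
        s += random.expovariate(1.0)
        n += 1
    sum_S += s                 # accumulate S = X_1 + ... + X_tau
    sum_tau += n               # accumulate tau

lhs = sum_S / trials             # estimates E sum_{n=1}^{tau} X_n
rhs = (sum_tau / trials) * 1.0   # estimates E(tau) * E(X_1), with E(X_1) = 1
print(lhs, rhs)  # both approximately 6 (= level + 1, by memorylessness)
```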
14. Compute the probability that the Poisson process jumps at time $t > 0$:
We define $\Delta N(t)(\omega) := N(t)(\omega) - N(t-)(\omega) = N(t)(\omega) - \lim_{s \uparrow t} N(s)(\omega)$. Find out
$$P(\{\omega : \Delta N(t)(\omega) = 1\}).$$
15. Order statistics property of the Poisson process
Let $N$ be a Poisson process and $(X_n)_{n \in \mathbb{N}}$ a sequence of independent, on $[0,1]$ uniformly distributed random variables with order statistics $X_{(1)} \le \cdots \le X_{(n)}$. Show that
$$P(\{(T_1, \dots, T_n) \in B\} \mid \{N(1) = n\}) = P(\{(X_{(1)}, \dots, X_{(n)}) \in B\}) \quad \text{for } B = [b_0, b_1] \times \cdots \times [b_{n-1}, b_n],$$
where $0 = b_0 < b_1 < \cdots < b_n$.
Hint: $P(\{(T_1, \dots, T_n) \in B\} \mid \{N(1) = n\}) = \dfrac{P(\{(T_1, \dots, T_n) \in B\} \cap \{T_n \le 1 < T_{n+1}\})}{P(\{T_n \le 1 < T_{n+1}\})}$.
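A simulation sketch of one consequence of this property (illustrative parameters): conditionally on $\{N(1) = n\}$, the first jump time $T_1$ should behave like the minimum of $n$ uniforms on $[0,1]$, whose mean is $1/(n+1)$.

```python
import random

random.seed(3)
lam, n_target, trials = 2.0, 3, 200_000

total_T1, kept = 0.0, 0
for _ in range(trials):
    times, s = [], random.expovariate(lam)
    while s <= 1.0:                      # jump times of one Poisson path on [0, 1]
        times.append(s)
        s += random.expovariate(lam)
    if len(times) == n_target:           # condition on the event {N(1) = n}
        total_T1 += times[0]
        kept += 1

mean_T1 = total_T1 / kept
print(mean_T1, 1 / (n_target + 1))  # both approximately 0.25
```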

16. Martingale property of the compensated Poisson process
Let $N$ be a Poisson process with intensity $\lambda > 0$. We define $\tilde{N}(t) := N(t) - \lambda t$ and the $\sigma$-algebras
$$\mathcal{F}_k = \sigma(\tilde{N}(0), \tilde{N}(1), \dots, \tilde{N}(k)).$$
Show that the relation
$$E[\tilde{N}(n) \mid \mathcal{F}_k] = \tilde{N}(k), \qquad k = 1, \dots, n,$$
holds.
Hint: How are $\tilde{N}(n) - \tilde{N}(k)$ and $\mathcal{F}_k$ related?
17. Renewal processes do not explode:
Show that for a renewal process $N$ it holds
$$P(\{\omega : N(t)(\omega) < \infty\}) = 1,$$
i.e. for almost all $\omega$ the process $N$ jumps only finitely many times until time $t$.
Hint: Why does $P\big(\sum_{i=1}^{\infty} W_i = \infty\big) = 1$ hold?
18. Let $(N(t))_{t \ge 0}$ be a renewal process,
$$N(t) := \#\{i \ge 1 : T_i \le t\},$$
where $T_n := W_1 + \cdots + W_n$ and $(W_i)_{i \ge 1}$ is a sequence of positive i.i.d. random variables such that $EW_1 < \infty$. We define the renewal function
$$m(t) := EN(t) + 1.$$
Show that $m$ satisfies the renewal equation
$$m(t) = \mathbb{1}_{[0,\infty)}(t) + \int_0^t m(t - y) \, dF_{T_1}(y).$$
Hint: Use the representation $N(t) = \sum_{i=1}^{\infty} \mathbb{1}_{[0,t]}(T_i)$.
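The renewal equation can be solved numerically on a grid. A discretized check (a sketch; the Exp($\lambda$) interarrival times are an illustrative choice, since then $m(t) = \lambda t + 1$ is known in closed form):

```python
import math

lam, T, steps = 2.0, 5.0, 2000  # illustrative parameters
h = T / steps

def F(y):
    # distribution function of T1 = W1 ~ Exp(lam)
    return 1.0 - math.exp(-lam * y) if y >= 0 else 0.0

m = [1.0]  # m(0) = 1
for i in range(1, steps + 1):
    # approximate int_0^t m(t - y) dF(y) by summing over the grid cells,
    # using the exact increments of F on each cell
    integral = sum(m[i - j] * (F(j * h) - F((j - 1) * h)) for j in range(1, i + 1))
    m.append(1.0 + integral)

print(m[-1], lam * T + 1)  # approximately 11
```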

19. Show that
(a) the Pareto distribution, given by the distribution function
$$F(x) = 1 - \left(\frac{b}{x}\right)^a, \qquad a > 0,\ b > 0,\ x \ge b,$$
is heavy tailed;
(b) the Weibull distribution, given by the distribution function
$$F(x) = (1 - e^{-cx^{\tau}}) \mathbb{1}_{(0,\infty)}(x) \qquad (c > 0,\ \tau > 0),$$
is heavy tailed for $\tau < 1$ and light tailed for $\tau \ge 1$.

20. The standard deviation principle is motivated by the Central Limit Theorem for $(S(t))$: Show for the renewal model (assume $\mathrm{var}(X_1) < \infty$ and $\mathrm{var}(W_1) < \infty$) that for all $x \in \mathbb{R}$
$$P(S(t) - p_{SD}(t) \le x) \to \Phi(\rho), \qquad t \to \infty,$$
where $\Phi$ is the standard normal distribution function and $\rho > 0$ is the loading parameter of the standard deviation principle $p_{SD}(t) = ES(t) + \rho \sqrt{\mathrm{var}(S(t))}$.


21. Consider the renewal model with $\mathrm{var}(X_1) < \infty$ and $\mathrm{var}(W_1) < \infty$. Show that
(a) the expectation principle and the variance principle are asymptotically equivalent, i.e. there exists a constant $c > 0$ such that
$$\frac{p_{EV}(t)}{p_{VAR}(t)} \to c \qquad \text{for } t \to \infty;$$
(b) the net principle and the standard deviation principle are asymptotically equivalent, i.e. there exists a constant $c > 0$ such that
$$\frac{p_{NET}(t)}{p_{SD}(t)} \to c \qquad \text{for } t \to \infty.$$


22. Typical requirements on premium principles are, for example:
- Non-negative loading: $p(t) \ge ES(t)$.
- Consistency: the premium for $S(t) + c$ is $p(t) + c$.
- Additivity: for independent total claim amounts $S(t)$ and $\tilde{S}(t)$ with corresponding premiums $p(t)$ and $\tilde{p}(t)$, the premium for $S(t) + \tilde{S}(t)$ should be $p(t) + \tilde{p}(t)$.
- Homogeneity: for $c > 0$ the premium for $cS(t)$ should be $cp(t)$.
Which of the premium principles $p_{NET}$, $p_{EV}$, $p_{VAR}$ and $p_{SD}$ satisfy these conditions in the renewal model?
23. Let $N$ be a (homogeneous) Poisson process with intensity $\lambda > 0$ and let $T_1, T_2, \dots$ be the corresponding jump times of $N$. Assume $(X_n)_{n=1}^{\infty}$ is a sequence of i.i.d. random variables independent from $N$, and $X_1$ has the distribution function $F$. We define for $0 < s < t$ and $a < b$
$$M((s,t] \times (a,b]) = \#\{i \ge 1 : T_i \in (s,t],\ X_i \in (a,b]\}.$$
Show that
(a) $M((s,t] \times (a,b])$ is Poisson distributed with parameter $\lambda(t-s)(F(b) - F(a))$;
(b) the random variables $M(A_1)$ and $M(A_2)$ are independent for $A_k := (s_k, t_k] \times (a_k, b_k]$ with $A_1 \cap A_2 = \emptyset$.
Hint: (a) Compute the characteristic function $u \mapsto E e^{iuM((s,t] \times (a,b])}$ and use that $M((s,t] \times (a,b]) = \sum_{n=N(s)+1}^{N(t)} \mathbb{1}_{(a,b]}(X_n)$.
(b) One may use the fact that the random variables $f, g: \Omega \to \mathbb{R}$ are independent iff $E e^{iuf + ivg} = E e^{iuf} E e^{ivg}$ for all $u, v \in \mathbb{R}$.
24. Mixed distribution
Let $f_1, \dots, f_n$ be random variables and $F_1, \dots, F_n$ their corresponding distribution functions. The random variable $J: \Omega \to \{1, \dots, n\}$ is independent from $f_1, \dots, f_n$ and it holds $P(J = k) = p_k$ for $k = 1, \dots, n$. Describe the distribution function of
$$Z = \sum_{k=1}^n \mathbb{1}_{\{J=k\}} f_k$$
with the help of $(F_k)$ and $(p_k)$.
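A simulation sketch of the natural candidate answer $F_Z(x) = \sum_k p_k F_k(x)$ (the mixture components below are illustrative assumptions: $f_1 \sim \mathrm{Exp}(1)$, $f_2 \sim \mathrm{Exp}(3)$, $p = (0.3, 0.7)$):

```python
import math
import random

random.seed(4)
p = [0.3, 0.7]
lams = [1.0, 3.0]
trials, x = 200_000, 0.8

hits = 0
for _ in range(trials):
    k = 0 if random.random() < p[0] else 1   # draw the label J
    z = random.expovariate(lams[k])          # Z = f_J on the event {J = k}
    hits += z <= x

emp = hits / trials
theo = sum(pk * (1 - math.exp(-lam * x)) for pk, lam in zip(p, lams))
print(emp, theo)  # both approximately 0.80
```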
25. Panjer Recursion
For which $(a, b) \in \mathbb{R}^2$ do the following distributions satisfy the (a,b)-condition
$$q_n = P(N = n) = \left(a + \frac{b}{n}\right) q_{n-1}?$$
(a) Poisson distribution;
(b) Binomial distribution:
$$q_n = \binom{m}{n} p^n (1-p)^{m-n}, \qquad n = 0, \dots, m \quad (0 < p < 1);$$
(c) Negative binomial distribution:
$$q_n = \binom{v+n-1}{n} p^v (1-p)^n, \qquad n = 0, 1, \dots \quad (0 < p < 1,\ v > 0).$$
(The negative binomial distribution can be interpreted as the probability of $n$ successes, each occurring with probability $1-p$, and $v$ failures from altogether $v + n$ trials, under the condition that the $(v+n)$-th trial is a failure. For $v > 0$ one defines
$$\binom{v+n-1}{n} := \frac{\Gamma(v+n)}{n!\,\Gamma(v)},$$
where $\Gamma(x) = \int_0^{\infty} t^{x-1} e^{-t} \, dt$ is the Gamma function, and it holds $\Gamma(x+1) = x\Gamma(x)$ for $x > 0$.)
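The candidate pairs $(a,b)$ come from computing the ratio $q_n / q_{n-1}$ in each case; a short script (a sketch, with illustrative parameter values) confirms them numerically:

```python
import math

def check_ab(q, a, b, nmax):
    """Check q_n = (a + b/n) q_{n-1} for n = 1, ..., nmax."""
    return all(abs(q[n] - (a + b / n) * q[n - 1]) < 1e-12 for n in range(1, nmax + 1))

# (a) Poisson(lam): the ratio lam/n gives a = 0, b = lam
lam = 2.5
q_pois = [math.exp(-lam) * lam ** n / math.factorial(n) for n in range(20)]
print(check_ab(q_pois, 0.0, lam, 19))

# (b) Binomial(m, p): a = -p/(1-p), b = (m+1) p/(1-p)
m, p = 10, 0.3
q_bin = [math.comb(m, n) * p ** n * (1 - p) ** (m - n) for n in range(m + 1)]
print(check_ab(q_bin, -p / (1 - p), (m + 1) * p / (1 - p), m))

# (c) Negative binomial(v, p): a = 1-p, b = (v-1)(1-p)
v, pp = 3.5, 0.4
q_nb = [math.gamma(v + n) / (math.factorial(n) * math.gamma(v)) * pp ** v * (1 - pp) ** n
        for n in range(20)]
print(check_ab(q_nb, 1 - pp, (v - 1) * (1 - pp), 19))
```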

26. A renewal theorem
Let $m(t) := \int_0^t \lambda(u) \, du$, where $\lambda$ is measurable, positive and bounded, and it holds $\lim_{u \to \infty} \lambda(u) = \lambda > 0$. Prove the convergence
$$\lim_{t \to \infty} \int_0^t f(t - y) \, dm(y) = \lambda \int_0^{\infty} f(x) \, dx$$
for any bounded measurable function $f$ with compact support.


27. Lebesgue-Stieltjes integral
Let $N = (N(s))_{s \ge 0}$ be a Poisson process. Show that for $0 < a < b$ it holds
$$\int_{[a,b]} s \, dN(s, \omega) = bN(b, \omega) - aN(a, \omega) - \int_{[a,b]} N(s, \omega) \, ds.$$
Hint: Assume $\varepsilon > 0$ and let $g: [a - \varepsilon, b] \to \mathbb{R}$ be non-decreasing, right-continuous and bounded. Then $g$ induces a finite measure $\mu_g$ on $([a,b], \mathcal{B}([a,b]))$ by
$$\mu_g((y, x]) := g(x) - g(y).$$
For Borel functions $f: [a,b] \to \mathbb{R}$ with
$$\int_{[a,b]} |f(x)| \, d\mu_g(x) < \infty$$
one defines the Lebesgue-Stieltjes integral by the Lebesgue integral:
$$\int_{[a,b]} f(x) \, dg(x) := \int_{[a,b]} f(x) \, d\mu_g(x).$$
(Since $Q((y,x]) := \frac{\mu_g((y,x])}{\mu_g([a,b])}$ is a probability measure we have $\int_{[a,b]} f(x) \, dg(x) = \mu_g([a,b]) \, E_Q f = (g(b) - g(a-)) \, E_Q f$.)
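A path-wise numerical check of the claimed integration-by-parts formula (a sketch; the values of $\lambda$, $a$, $b$ are illustrative):

```python
import random

random.seed(5)
lam, a, b = 3.0, 1.0, 4.0

jumps, s = [], random.expovariate(lam)   # jump times of one Poisson path on [0, b]
while s <= b:
    jumps.append(s)
    s += random.expovariate(lam)

def N(t):
    return sum(1 for T in jumps if T <= t)

# Left-hand side: int s dN(s), i.e. the sum of the jump locations in (a, b]
lhs = sum(T for T in jumps if a < T <= b)

# Right-hand side: b N(b) - a N(a) - int_a^b N(s) ds; the time integral is
# computed exactly because the path is piecewise constant between jumps
pts = [a] + [T for T in jumps if a < T <= b] + [b]
time_integral = sum(N(pts[i]) * (pts[i + 1] - pts[i]) for i in range(len(pts) - 1))
rhs = b * N(b) - a * N(a) - time_integral

print(lhs, rhs)  # the two sides agree
```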

28. Cramér-Lundberg: a special case
We consider the Cramér-Lundberg model with intensity $\lambda > 0$ and assume that the claims are Gamma distributed, i.e. the density is given by
$$h_{X_1}(x) := \frac{\beta^{\gamma}}{\Gamma(\gamma)} x^{\gamma - 1} e^{-\beta x}, \qquad x > 0.$$
(a) Compute the moment generating function $m_{X_1}(h)$ and find out for which $h$ the expression is finite.
(b) Formulate the (NP) condition in terms of $\lambda$, $\gamma$, $\beta$ and $c$.
(c) Compute the Lundberg coefficient assuming the (NP) condition holds.

29. Panjer Recursion II
(a) For the Poisson distribution, the binomial distribution and the negative binomial distribution determine the corresponding regions $R_1$, $R_2$ and $R_3$ of points $(a, b) \in \mathbb{R}^2$ where the (a,b)-condition
$$q_n = P(N = n) = \left(a + \frac{b}{n}\right) q_{n-1}$$
is satisfied.
(b) Show that the Poisson distribution, the binomial distribution and the negative binomial distribution are the only distributions on $\mathbb{N}$ which can satisfy the (a,b)-condition.
Hint: Check whether there are $(a, b) \notin R_1 \cup R_2 \cup R_3$ such that there exists a probability measure on $\mathbb{N}$.
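In the Poisson case $(a, b) = (0, \lambda)$, the (a,b)-condition yields the Panjer recursion for the compound distribution $P(S = n)$. A sketch under illustrative assumptions (claim sizes on $\{1,2,3\}$ with a hypothetical pmf; the brute-force convolution sum serves only as a cross-check):

```python
import math

def panjer_poisson(lam, f, nmax):
    """Panjer recursion for S = X1+...+XN, N ~ Poisson(lam), claim pmf f, f[0] = 0:
    g_n = (lam/n) * sum_j j f_j g_{n-j}."""
    g = [math.exp(-lam)]                     # P(S = 0) = P(N = 0)
    for n in range(1, nmax + 1):
        g.append(lam / n * sum(j * f[j] * g[n - j]
                               for j in range(1, min(n, len(f) - 1) + 1)))
    return g

lam = 1.5
f = [0.0, 0.5, 0.3, 0.2]                     # hypothetical claim size pmf on {1,2,3}
g = panjer_poisson(lam, f, 15)

def convolve(p, q, nmax):
    return [sum(p[j] * q[n - j] for j in range(n + 1) if j < len(p) and n - j < len(q))
            for n in range(nmax + 1)]

# Brute force: P(S = n) = sum_k P(N = k) f^{*k}(n), truncated at a large k
fk = [1.0]                                   # f^{*0} = unit mass at 0
brute = [0.0] * 16
for k in range(40):
    pN = math.exp(-lam) * lam ** k / math.factorial(k)
    for n in range(16):
        brute[n] += pN * (fk[n] if n < len(fk) else 0.0)
    fk = convolve(fk, f, 18)

print(max(abs(g[n] - brute[n]) for n in range(16)))  # essentially zero
```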
30. Compound Poisson process
Let $N_1, \dots, N_n$ be independent Poisson processes with corresponding intensities $\lambda_1, \dots, \lambda_n$ and let $x_1, \dots, x_n \in \mathbb{R}$. Is
$$x_1 N_1 + \cdots + x_n N_n$$
a compound Poisson process?
31. Integrated tail distribution
Let $X: \Omega \to [0, \infty)$ be a random variable with $0 < EX < \infty$. Show that
$$F(x) := \begin{cases} \dfrac{1}{EX} \displaystyle\int_0^x P(X > y) \, dy, & x \ge 0, \\ 0, & x < 0, \end{cases}$$
is a distribution function.
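A numerical sketch (illustrative parameters): for an $\mathrm{Exp}(\lambda)$ claim size the integrated tail distribution is again $\mathrm{Exp}(\lambda)$, which gives a closed form to compare a midpoint Riemann sum against.

```python
import math

lam, x, steps = 2.0, 1.3, 100_000
EX = 1.0 / lam
h = x / steps

# (1/EX) * int_0^x P(X > y) dy with P(X > y) = exp(-lam*y), midpoint rule
tail_integral = sum(math.exp(-lam * (i + 0.5) * h) * h for i in range(steps))
F_int = tail_integral / EX

print(F_int, 1 - math.exp(-lam * x))  # both approximately 0.9257
```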
32. Itô's formula for the Poisson process
Let $N = (N(t))_{t \ge 0}$ be a Poisson process.
(a) Let $f: \mathbb{R} \to \mathbb{R}$ be a Borel function. Show that
$$f(N(t, \omega)) = f(N(s, \omega)) + \int_{(s,t]} f(N(u, \omega)) - f(N(u-, \omega)) \, dN(u, \omega)$$
$$= f(N(s, \omega)) + \sum_{s < u \le t} [f(N(u-, \omega) + \Delta N(u, \omega)) - f(N(u-, \omega))] \, \Delta N(u, \omega).$$
(b) Show for a function $f: [0, \infty) \times [0, \infty) \to \mathbb{R}$ which is continuously differentiable in the first variable and continuous in the second variable that
$$f(t, N(t, \omega)) = f(s, N(s, \omega)) + \int_{(s,t]} \frac{\partial f}{\partial t}(u, N(u, \omega)) \, du + \int_{(s,t]} f(u, N(u, \omega)) - f(u, N(u-, \omega)) \, dN(u, \omega).$$
Hint: Consider for $s = s_0 < s_1 < \dots < s_n = t$
$$f(t, N(t, \omega)) - f(s, N(s, \omega)) = \sum_{k=1}^n [f(s_k, N(s_k, \omega)) - f(s_{k-1}, N(s_{k-1}, \omega))].$$
Use (a) to show that for any $\varepsilon > 0$ there exists a $\delta > 0$ such that
$$f(s_k, N(s_k, \omega)) - f(s_{k-1}, N(s_{k-1}, \omega)) = \int_{(s_{k-1}, s_k]} \frac{\partial f}{\partial t}(u, N(u, \omega)) \, du + \int_{(s_{k-1}, s_k]} f(u, N(u, \omega)) - f(u, N(u-, \omega)) \, dN(u, \omega) + r_k,$$
where $|r_k| \le \varepsilon (s_k - s_{k-1})$, for $\max_{1 \le k \le n}(s_k - s_{k-1}) < \delta$.


Subexponential distributions are a class of distributions for which it holds $m_X(h) = \infty$ for any $h > 0$:
Definition. A distribution function $F: \mathbb{R} \to [0, 1]$ is called subexponential if $F(0) = 0$ and $F(x) < 1$ for $x > 0$, and it holds the relation
$$\lim_{x \to \infty} \frac{1 - F^{*2}(x)}{1 - F(x)} = 2,$$
where $F^{*2} = F * F$ denotes the convolution of $F$ with itself. We will denote the class of subexponential distributions by $\mathcal{S}$.


33. Exponential distribution
Show that the exponential distribution does not belong to S.
34. Pareto distribution
Show that the Pareto distribution belongs to S.
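A numerical illustration of Problem 34 (a sketch; the Pareto parameters $a$, $b$ are illustrative): for $F(x) = 1 - (b/x)^a$ the defining ratio $(1 - F^{*2}(x)) / (1 - F(x))$ approaches 2 as $x$ grows.

```python
a, b = 1.5, 1.0

def Fbar(x):
    return (b / x) ** a if x >= b else 1.0  # tail 1 - F(x)

def f(t):
    return a * b ** a / t ** (a + 1)        # Pareto density

def tail_conv(x, steps=100_000):
    # P(X1 + X2 > x) = 2 int_b^{x/2} Fbar(x - t) f(t) dt + Fbar(x/2)^2,
    # splitting according to which summand exceeds x/2 (midpoint rule)
    lo, hi = b, x / 2
    h = (hi - lo) / steps
    integral = sum(Fbar(x - (lo + (i + 0.5) * h)) * f(lo + (i + 0.5) * h) * h
                   for i in range(steps))
    return 2 * integral + Fbar(x / 2) ** 2

ratios = [tail_conv(x) / Fbar(x) for x in (10.0, 100.0, 1000.0)]
print(ratios)  # decreasing towards 2
```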

35. Subexponential distributions: an equivalent condition
Prove that $F: [0, \infty) \to [0, 1]$ belongs to $\mathcal{S}$ iff
$$\lim_{x \to \infty} \frac{1 - F^{*n}(x)}{1 - F(x)} = n \quad \text{for all } n \ge 2. \qquad (1)$$
For the non-trivial direction one could proceed as follows (we write $\overline{G} := 1 - G$ for the tail of a distribution function $G$):
(a) 1st step: Show the relation
$$\frac{\overline{F^{*(n+1)}}(x)}{\overline{F}(x)} = 1 + \int_0^x \frac{\overline{F^{*n}}(x - t)}{\overline{F}(x)} \, dF(t). \qquad (2)$$
(b) 2nd step: From (2) one concludes for $0 < y \le x$ that
$$\frac{\overline{F^{*2}}(x)}{\overline{F}(x)} = 1 + \int_0^y \frac{\overline{F}(x - t)}{\overline{F}(x)} \, dF(t) + \int_y^x \frac{\overline{F}(x - t)}{\overline{F}(x)} \, dF(t) \ge 1 + F(y) + \frac{\overline{F}(x - y)}{\overline{F}(x)} (F(x) - F(y)).$$
With this inequality and the monotonicity properties of $F$ one can deduce that
$$\lim_{x \to \infty} \frac{\overline{F}(x - y)}{\overline{F}(x)} = 1. \qquad (3)$$
(c) 3rd step: Use (2) to show (1) by induction in $n$:
For sufficiently large $x$, $y$ with $y \le x$ one has
$$\int_0^{x-y} \frac{\overline{F^{*n}}(x - t)}{\overline{F}(x - t)} \cdot \frac{\overline{F}(x - t)}{\overline{F}(x)} \, dF(t) \le (n + \varepsilon) \int_0^{x-y} \frac{\overline{F}(x - t)}{\overline{F}(x)} \, dF(t).$$
Moreover, because of (3) we have
$$\int_0^{x-y} \frac{\overline{F}(x - t)}{\overline{F}(x)} \, dF(t) = \frac{\overline{F^{*2}}(x) - \overline{F}(x)}{\overline{F}(x)} - \int_{x-y}^{x} \frac{\overline{F}(x - t)}{\overline{F}(x)} \, dF(t) \to 1$$
as $x \to \infty$. Similarly one gets
$$\int_{x-y}^{x} \frac{\overline{F^{*n}}(x - t)}{\overline{F}(x - t)} \cdot \frac{\overline{F}(x - t)}{\overline{F}(x)} \, dF(t) \to 0$$
as $x \to \infty$. This implies (1) for $n + 1$.

A The Lebesgue-Stieltjes integral

A.1 The Riemann-Stieltjes integral

We recall first the Riemann-Stieltjes integral. Let $f: [a,b] \to \mathbb{R}$ be continuous and $g: [a,b] \to \mathbb{R}$ of finite 1-variation, i.e. for $p = 1$ it holds
$$V^p_{[a,b]}(g) := \sup \sum_{i=1}^n |g(t_i) - g(t_{i-1})|^p < \infty,$$
where the supremum is taken over all finite partitions $a = t_0 < t_1 < \dots < t_n = b$, $n = 1, 2, \dots$
(A function of finite 1-variation is always a Borel function: see Corollary 17.5 of [1]. If $p \ge 1$ and $V^p_{[a,b]}(g) < \infty$ we say that $g$ has finite $p$-variation. The paths of the Brownian motion have finite 2-variation but not finite 1-variation.)
Denoting a partition by $\mathcal{P} = \{a = t_0 < t_1 < \dots < t_n = b\}$ we define the Riemann-Stieltjes sums
$$S(\mathcal{P}, f, g) := \sum_{i=1}^n f(c_i)[g(t_i) - g(t_{i-1})],$$
where $c_i \in [t_{i-1}, t_i]$ is chosen arbitrarily. We will say that the Riemann-Stieltjes integral
$$(RS) \int_a^b f(x) \, dg(x)$$
exists if $S(\mathcal{P}, f, g)$ converges whenever $|\mathcal{P}| := \max_{1 \le i \le n} |t_i - t_{i-1}| \to 0$. Using the Darboux sums
$$U(\mathcal{P}, f, g) := \sum_{i=1}^n \Big(\sup_{c \in [t_{i-1}, t_i]} f(c)\Big) [g(t_i) - g(t_{i-1})],$$
$$L(\mathcal{P}, f, g) := \sum_{i=1}^n \Big(\inf_{c \in [t_{i-1}, t_i]} f(c)\Big) [g(t_i) - g(t_{i-1})],$$
one easily concludes from
$$L(\mathcal{P}, f, g) \le S(\mathcal{P}, f, g) \le U(\mathcal{P}, f, g)$$
that the Riemann-Stieltjes integral $(RS)\int_a^b f(x) \, dg(x)$ exists for any continuous function $f$ and finite 1-variation function $g$. One can also show that the Riemann-Stieltjes integral exists for any piecewise continuous function $f$ and finite 1-variation function $g$ if the jumps of $f$ and $g$ do not coincide.
Example A.1.1. Let $x \in (a, b)$. For $f = 3 \cdot \mathbb{1}_{[x,b]}$ and $g = 2 \cdot \mathbb{1}_{[a,x)}$ we observe the following: For $k$ such that $t_{k-1} \le x \le t_k$ it holds
$$U(\mathcal{P}, f, g) = \Big(\sup_{c \in [t_{k-1}, t_k]} f(c)\Big) [g(t_k) - g(t_{k-1})] = 3(0 - 2) = -6,$$
while
$$L(\mathcal{P}, f, g) = \Big(\inf_{c \in [t_{k-1}, t_k]} f(c)\Big) [g(t_k) - g(t_{k-1})] = 0 \cdot (0 - 2) = 0.$$
Consequently, $(RS)\int_a^b f(x) \, dg(x)$ does not exist.

A.2 The Lebesgue-Stieltjes integral

Lemma A.2.1. The following assertions are equivalent:
a) The function $g: [a,b] \to \mathbb{R}$ is non-decreasing and right-continuous and $V^1_{[a,b]}(g) < \infty$.
b) There exists a finite measure $\mu_g$ on $((a,b], \mathcal{B}((a,b]))$ such that
$$g(x) - g(a) = \mu_g((a, x]).$$
Definition A.2.2. For a Borel function $f: [a,b] \to \mathbb{R}$ and a non-decreasing and right-continuous function $g: [a,b] \to \mathbb{R}$ such that
$$\int_{(a,b]} |f(x)| \, d\mu_g(x) < \infty$$
we define the Lebesgue-Stieltjes integral by
$$(LS) \int_{(a,b]} f(x) \, dg(x) := \int_{(a,b]} f(x) \, d\mu_g(x).$$

Lemma A.2.3. For a right-continuous function $g: [a,b] \to \mathbb{R}$ such that $V^1_{[a,b]}(g) < \infty$ there exist non-decreasing right-continuous functions $h_1, h_2: [a,b] \to \mathbb{R}$ such that
$$V^1_{[a,b]}(h_1) + V^1_{[a,b]}(h_2) < \infty$$
and
$$g(x) = h_1(x) - h_2(x).$$
Definition A.2.4. For a Borel function $f: [a,b] \to \mathbb{R}$ and a right-continuous function $g: [a,b] \to \mathbb{R}$ such that $V^1_{[a,b]}(g) < \infty$, and $g = h_1 - h_2$ from the previous lemma such that
$$\int_{(a,b]} |f(x)| \, dh_1(x) + \int_{(a,b]} |f(x)| \, dh_2(x) < \infty,$$
we define the Lebesgue-Stieltjes integral by
$$(LS) \int_{(a,b]} f(x) \, dg(x) := (LS) \int_{(a,b]} f(x) \, dh_1(x) - (LS) \int_{(a,b]} f(x) \, dh_2(x).$$

Example A.2.5.
a) Let $x \in (a, b)$, $f = 3 \cdot \mathbb{1}_{[x,b]}$ and $g = 2 \cdot \mathbb{1}_{[a,x)}$. The function $g$ we can write as $g(y) = 2 - 2\delta_x((a, y])$, $y \in (a,b]$, where $\delta_x$ denotes the Dirac measure at $x$. We obtain
$$(LS) \int_{(a,b]} f(y) \, dg(y) = 0 - 2 \int_{(a,b]} f(y) \, d\delta_x(y) = -2f(x) = -6.$$
b) In the same way one can show that for the Poisson process $(N(t))_{t \ge 0}$ (for those paths which are càdlàg) and $0 < a < b$ it holds
$$(LS) \int_{(a,b]} N(t) - N(a) \, dN(t) = \frac{(N(b) - N(a) + 1)(N(b) - N(a))}{2}.$$
Note that the $(RS)$ integral is not defined here.
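A path-wise check of part b) (a sketch; the values of $\lambda$, $a$, $b$ are illustrative): the $(LS)$ integral reduces to a sum over the jump times of the path.

```python
import random

random.seed(6)
lam, a, b = 2.0, 0.5, 3.0

jumps, s = [], random.expovariate(lam)   # jump times of one cadlag path on [0, b]
while s <= b:
    jumps.append(s)
    s += random.expovariate(lam)

def N(t):
    return sum(1 for T in jumps if T <= t)

# (LS) int_(a,b] (N(t) - N(a)) dN(t): the integrator charges each jump time in
# (a,b] with mass one, and the integrand is evaluated at the jump time itself
lhs = sum(N(T) - N(a) for T in jumps if a < T <= b)

k = N(b) - N(a)
rhs = (k + 1) * k / 2
print(lhs, rhs)  # equal
```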

Bibliography
[1] N. L. Carothers. Real Analysis. Cambridge University Press, 2000.
[2] C. Geiss and S. Geiss. An introduction to probability theory. http://stochastics-mathematics.uibk.ac.at/scripts.htm
[3] S. Geiss. Stochastic processes in discrete time. http://stochastics-mathematics.uibk.ac.at/scripts.htm
[4] P. Embrechts, C. Klüppelberg, and T. Mikosch. Modelling Extremal Events for Insurance and Finance. Springer, 1997.
[5] A. Gut. Stopped Random Walks: Limit Theorems and Applications. Springer, 2009.
[6] T. Mikosch. Non-Life Insurance Mathematics: An Introduction with the Poisson Process. Springer, 2009.
[7] M. Loève. Probability Theory I. Springer, 1977.
[8] S. Resnick. Adventures in Stochastic Processes. Birkhäuser, 1992.
[9] A. N. Shiryaev. Probability. Springer, 1996.
