Contents

1 Introduction
  1.1 Some facts about probability
…
7.1 The distribution of S(t)
7.2 Mixture distributions
7.3 Applications in insurance
7.4 The Panjer recursion
7.5 Approximation of F_{S(t)}
7.6 Monte Carlo approximations of F_{S(t)}
8 Reinsurance treaties
9 Probability of ruin
  9.1 The risk process
  9.2 Bounds for the ruin probability
  9.3 An asymptotics for the ruin probability
10 Problems
1. Introduction

Insurance Mathematics is sometimes divided into life insurance, health insurance and non-life insurance. Life insurance includes, for instance, life insurance contracts and pensions, where long terms are covered. Non-life insurance comprises, for example, insurance against fire, water damage, earthquakes and industrial catastrophes, as well as car insurance. Non-life insurance contracts cover in general a year or some other fixed time period. Health insurance is special because it is organized differently in each country.
The course material is based on the textbook Non-Life Insurance Mathematics by Thomas Mikosch [6].
1.1 Some facts about probability

We briefly recall some definitions and facts from probability theory which we will need in this course. For more information see [9] or [2], for example.
(i) A probability space is a triple (Ω, F, P), where
  Ω is a non-empty set,
  F is a σ-algebra consisting of subsets of Ω, and
  P is a probability measure on (Ω, F).
(ii) A function f : Ω → R is called a random variable if and only if for all intervals (a, b), −∞ < a < b < ∞, the pre-image satisfies
  f^{−1}((a, b)) := {ω ∈ Ω : a < f(ω) < b} ∈ F.
(iii) The random variables f_1, ..., f_n are independent if and only if
  P(f_1 ∈ B_1, ..., f_n ∈ B_n) = P(f_1 ∈ B_1) ··· P(f_n ∈ B_n)
for all B_k ∈ B(R), k = 1, ..., n. (Here B(R) denotes the Borel σ-algebra.)
If the f_i's have discrete values, i.e. f_i : Ω → {x_1, x_2, x_3, ...}, then the random variables f_1, ..., f_n are independent if and only if
  P(f_1 = k_1, ..., f_n = k_n) = P(f_1 = k_1) ··· P(f_n = k_n)
for all k_i ∈ {x_1, x_2, x_3, ...}.
A random variable f : Ω → {0, 1, 2, ...} is Poisson distributed with parameter λ > 0 if
  P(f = k) = e^{−λ} λ^k / k!, k = 0, 1, 2, ...
[Figure: Poisson probability mass function.]
2.1 The homogeneous Poisson process

  P(N(t) = m) = e^{−λt} (λt)^m / m!, m = 0, 1, 2, ...
(P4) The paths of N, i.e. the functions (N(t, ω))_{t ∈ [0,∞)} for fixed ω ∈ Ω, are almost surely right-continuous and have left limits. One says N has càdlàg (continue à droite, limite à gauche) paths.
Lemma 2.1.2. Assume W_1, W_2, ... are independent and exponentially distributed with parameter λ > 0. Then, for any x > 0 we have
  P(W_1 + ··· + W_n ≤ x) = 1 − e^{−λx} Σ_{k=0}^{n−1} (λx)^k / k!.
This means the sum of n independent exponentially distributed random variables is a Gamma distributed random variable.
Proof: Exercise.
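The identity of Lemma 2.1.2 can be checked numerically. Below is a minimal sketch (the function names and parameter values are ours, not from the text) comparing the closed form with a Monte Carlo estimate:

```python
import math
import random

def erlang_cdf(n, lam, x):
    """Closed form from Lemma 2.1.2: P(W1 + ... + Wn <= x) for i.i.d. Exp(lam)."""
    return 1.0 - math.exp(-lam * x) * sum(
        (lam * x) ** k / math.factorial(k) for k in range(n)
    )

def erlang_cdf_mc(n, lam, x, trials=200_000, seed=1):
    """Monte Carlo estimate of the same probability."""
    rng = random.Random(seed)
    hits = sum(
        sum(rng.expovariate(lam) for _ in range(n)) <= x for _ in range(trials)
    )
    return hits / trials
```

For n = 1 the formula reduces to the exponential distribution function 1 − e^{−λx}.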
Definition 2.1.3. Let W_1, W_2, ... be independent and exponentially distributed with parameter λ > 0. Define
  T_n := W_1 + ... + W_n
and
  Ñ(t, ω) := #{i ≥ 1 : T_i(ω) ≤ t}, t ≥ 0.
Lemma 2.1.4. For each n = 0, 1, 2, ... and for all t > 0 it holds
  P(Ñ(t) = n) = e^{−λt} (λt)^n / n!.
(b) Any Poisson process N(t) with parameter λ > 0 can be written as
  N(t) = #{i ≥ 1 : T_i ≤ t}, t ≥ 0,
where T_n = W_1 + ... + W_n, n ≥ 1, and W_1, W_2, ... are independent and exponentially distributed with parameter λ > 0.
Proof:
(a) We check the properties of Definition 2.1.1.
(P1) From (vi) of Section 1.1 we get
  P(W_1 > 0) = P(W_1 ∈ (0, ∞)) = ∫_0^∞ λ e^{−λy} 1I_{[0,∞)}(y) dy = 1.
  f_1 := T_l, f_2 := W_{l+1}, f_3 := W_{l+2} + ... + W_{l+m}, f_4 := W_{l+m+1},
P(Ñ(s) = l, Ñ(t) − Ñ(s) = m)
  = ∫_0^s [ ∫_{s−x_1}^∞ [ ∫_0^{t−x_1−x_2} [ ∫_{t−x_1−x_2−x_3}^∞ h_4(x_4) dx_4 ] h_3(x_3) dx_3 ] h_2(x_2) dx_2 ] h_1(x_1) dx_1
  =: I_1,
where h_i is the density of f_i and the inner integrals are denoted by I_4(x_1, x_2, x_3), I_3(x_1, x_2) and I_2(x_1), respectively.
By direct computation, using the density function of f_4 = W_{l+m+1},
  I_4(x_1, x_2, x_3) = ∫_{t−x_1−x_2−x_3}^∞ λ e^{−λx_4} 1I_{[0,∞)}(x_4) dx_4 = e^{−λ(t−x_1−x_2−x_3)}.
Here we used t − x_1 − x_2 − x_3 > 0. This is true because the integration w.r.t. x_3 implies 0 < x_3 < t − x_1 − x_2. The density of f_3 = W_{l+2} + ... + W_{l+m} is
  h_3(x_3) = (λ^{m−1} x_3^{m−2} / (m−2)!) 1I_{[0,∞)}(x_3) e^{−λx_3}.
Therefore,
  I_3(x_1, x_2) = ∫_0^{t−x_1−x_2} (λ^{m−1} x_3^{m−2} / (m−2)!) e^{−λx_3} e^{−λ(t−x_1−x_2−x_3)} dx_3
               = λ^{m−1} e^{−λ(t−x_1−x_2)} (t − x_1 − x_2)^{m−1} / (m−1)!.
Similarly,
  I_2(x_1) = ∫_{s−x_1}^∞ λ e^{−λx_2} 1I_{[0,t−x_1)}(x_2) λ^{m−1} e^{−λ(t−x_1−x_2)} (t − x_1 − x_2)^{m−1} / (m−1)! dx_2
           = λ^m e^{−λ(t−x_1)} (t − s)^m / m!,
and finally
  I_1 = ∫_0^s (λ^l x_1^{l−1} / (l−1)!) e^{−λx_1} λ^m e^{−λ(t−x_1)} (t − s)^m / m! dx_1
      = λ^m λ^l e^{−λt} ((t − s)^m / m!) (s^l / l!)
      = ((λs)^l / l!) e^{−λs} · ((λ(t − s))^m / m!) e^{−λ(t−s)}
      = P(Ñ(s) = l) P(Ñ(t − s) = m).
If we sum
  P(Ñ(s) = l, Ñ(t) − Ñ(s) = m) = P(Ñ(s) = l) P(Ñ(t − s) = m)    (2)
over l ∈ N we get
  P(Ñ(t) − Ñ(s) = m) = P(Ñ(t − s) = m)
and hence (1).
(P3) follows from Lemma 2.1.4 and (2).
(P4) is clear from the construction.
(b) The proof is an exercise.
[Figure: simulated paths of Poisson processes on [0, 1] with λ = 10 and λ = 50.]
2.2 The renewal process

To model windstorm claims, for example, the Poisson process is not a good choice because windstorm claims happen rarely, sometimes with years in between. The Pareto distribution, for example, which has the distribution function
  F(x) = 1 − β^α / (β + x)^α, x ≥ 0,
with parameters α, β > 0, would fit better. For a Pareto distributed random variable large values are more likely than for an exponentially distributed random variable.
Definition 2.2.1 (Renewal process). Assume that W_1, W_2, ... are i.i.d. (= independent and identically distributed) random variables such that W_1 > 0 a.s. Then
  T_0 := 0,  T_n := W_1 + ... + W_n, n ≥ 1,
is a renewal sequence and
  N(t) := #{i ≥ 1 : T_i ≤ t}, t ≥ 0,
is the renewal process.
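A renewal process is easy to simulate directly from the definition. The sketch below (function and parameter names are ours) counts arrivals up to time t; with Exp(λ) inter-arrival times it reproduces the homogeneous Poisson process, and for large t the ratio N(t)/t is close to 1/EW_1:

```python
import random

def renewal_count(t, draw_w):
    """N(t) = #{i >= 1 : T_i <= t}, where T_n = W_1 + ... + W_n."""
    count, arrival = 0, draw_w()
    while arrival <= t:
        count += 1
        arrival += draw_w()
    return count

rng = random.Random(42)
# Exponential inter-arrival times with EW1 = 2, so N(t)/t should be near 1/2.
ratio = renewal_count(10_000.0, lambda: rng.expovariate(0.5)) / 10_000.0
```

Any positive inter-arrival distribution (e.g. Pareto, for rarely occurring claims) can be plugged in via `draw_w`.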
In order to study the limit behavior of N we need the Strong Law of Large
Numbers (SLLN):
Theorem 2.2.2 (SLLN). If the random variables X_1, X_2, ... are i.i.d. with E|X_1| < ∞, then
  (X_1 + X_2 + ... + X_n)/n → EX_1 a.s. as n → ∞.
Proof:
Because of
  {ω : N(t)(ω) = n} = {ω : T_n(ω) ≤ t < T_{n+1}(ω)}, n ∈ N,
we have
  T_{N(t)(ω)}(ω)/N(t)(ω) ≤ t/N(t)(ω) < (T_{N(t)(ω)+1}(ω)/(N(t)(ω)+1)) · ((N(t)(ω)+1)/N(t)(ω)).    (3)
Note that
  Ω_0 := {ω : T_1(ω) < ∞} = {ω : sup_{t≥0} N(t)(ω) > 0}    (4)
has probability one, and by the SLLN
  t/N(t)(ω) → EW_1 for ω ∈ Ω_0.
Finally, (3) implies
  lim_{t→∞} N(t)(ω)/t = 1/EW_1 for ω ∈ Ω_0.
In the following we will investigate the behavior of EN(t) as t → ∞.

Theorem 2.2.4 (Elementary renewal theorem). Assume the above setting, i.e. N(t) is a renewal process. If EW_1 < ∞, then
  lim_{t→∞} EN(t)/t = 1/EW_1.    (5)
In the Cramér-Lundberg model, where the W_i are exponentially distributed, one even has
  EN(t)/t = 1/EW_1 = λ for all t > 0.    (6)
If the W_i's are not exponentially distributed, then equation (6) holds only in the limit t → ∞.
Lemma (Fatou). Let (Z_t)_{t≥0} be random variables with Z_t ≥ 0 for all t ≥ 0. Then
  E lim inf_{t→∞} Z_t ≤ lim inf_{t→∞} EZ_t.

Proof of Theorem 2.2.4, lower bound: Since Z_t := inf_{s≥t} N(s)/s is non-decreasing in t and inf_{s≥t} N(s)/s ≤ N(t)/t, monotone convergence gives
  1/EW_1 = E lim inf_{t→∞} N(t)/t = E lim_{t→∞} inf_{s≥t} N(s)/s = lim_{t→∞} E inf_{s≥t} N(s)/s ≤ lim inf_{t→∞} EN(t)/t.
For the upper bound we truncate: for fixed c_0 > 0 let
  W_i^c := min(W_i, c_0), i = 1, 2, ...,
T_n^c := W_1^c + ... + W_n^c, and let N^{(c)}(t) be the corresponding renewal process. Since W_i^c ≤ W_i we have N^{(c)}(t) ≥ N(t) and therefore
  EN(t)/t ≤ EN^{(c)}(t)/t.    (7)
Since N^{(c)}(t) + 1 is a stopping time w.r.t. F_n := σ(W_1, ..., W_n), n ≥ 1, F_0 := {∅, Ω}, Wald's identity gives
  E T^c_{N^{(c)}(t)+1} = E Σ_{i=1}^{N^{(c)}(t)+1} W_i^c = E(N^{(c)}(t) + 1) EW_1^c.
This implies
  lim sup_{t→∞} EN^{(c)}(t)/t = lim sup_{t→∞} (EN^{(c)}(t) + 1)/t
    = lim sup_{t→∞} E T^c_{N^{(c)}(t)+1} / (t EW_1^c)
    = lim sup_{t→∞} E(W_1^c + ... + W^c_{N^{(c)}(t)} + W^c_{N^{(c)}(t)+1}) / (t EW_1^c)
    ≤ lim sup_{t→∞} (t + c_0)/(t EW_1^c) = 1/EW_1^c,
where we used that T^c_{N^{(c)}(t)+1} ≤ t + c_0. Letting c_0 → ∞ yields lim sup_{t→∞} EN(t)/t ≤ 1/EW_1.
2.3 The inhomogeneous Poisson process and the mixed Poisson process
[Figure: examples of mean value functions μ(t): μ(t) = t, a continuous μ(t), and a càdlàg μ(t).]
  P(N(t) − N(s) = m) = e^{−(μ(t)−μ(s))} (μ(t) − μ(s))^m / m!, m = 0, 1, 2, ...
Here μ^{−1}(t) denotes the inverse function of μ, and f =(d) g means that the two random variables f and g have the same distribution (but one cannot conclude that f(ω) = g(ω) for ω ∈ Ω).
Definition 2.3.4 (Mixed Poisson process). Let Ñ be a homogeneous Poisson process with intensity λ = 1 and let μ be a mean value function. Let θ : Ω → R be a random variable such that θ > 0 a.s. and θ is independent of Ñ. Then
  N(t) := Ñ(θμ(t)), t ≥ 0,
is a mixed Poisson process with mixing variable θ.
Proposition 2.3.5. It holds
  var(N(t)) = EN(t) (1 + (var(θ)/Eθ) μ(t)).

Proof:
We recall that EÑ(t) = var(Ñ(t)) = t and therefore EÑ(t)^2 = t + t^2. We conclude
  var(Ñ(θμ(t))) = EÑ(θμ(t))^2 − (EÑ(θμ(t)))^2
                = E(θμ(t) + θ^2 μ(t)^2) − (Eθ μ(t))^2
                = μ(t) (Eθ + var(θ) μ(t)).
The property var(N (t)) > EN (t) is called over-dispersion. If N is an
inhomogeneous Poisson process, then
var(N (t)) = EN (t).
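Over-dispersion is easy to see in simulation. Below is a minimal sketch (the sampler and the two-point mixing variable are our illustrative choices, not from the text) drawing a mixed Poisson count N(t) = Ñ(θμ(t)) and comparing sample mean and variance:

```python
import math
import random

def poisson_sample(lam, rng):
    """Knuth's product-of-uniforms Poisson sampler (adequate for moderate lam)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

rng = random.Random(7)
mu_t = 5.0
# Mixing variable theta: 1 or 3 with probability 1/2 each, so E theta = 2,
# var(theta) = 1, EN(t) = 10 and var(N(t)) = 10 * (1 + (1/2) * 5) = 35.
samples = [
    poisson_sample((1.0 if rng.random() < 0.5 else 3.0) * mu_t, rng)
    for _ in range(20_000)
]
mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
# var clearly exceeds mean: over-dispersion, unlike a plain Poisson count.
```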
3.1 The Cramér-Lundberg model

3.2 The renewal model
Definition 3.2.1. The renewal model (or Sparre-Anderson model) considers the following setting:
1. Claims happen at the claim arrival times 0 ≤ T_1 ≤ T_2 ≤ ... of a renewal process
  N(t) = #{i ≥ 1 : T_i ≤ t}, t ≥ 0.
3.3 The total claim amount process

  S(t) = Σ_{i=1}^{N(t)} X_i, t ≥ 0.

Proposition 3.3.2. In the renewal model it holds ES(t) = EX_1 EN(t), and
  lim_{t→∞} ES(t)/t = EX_1 / EW_1.
Proof:
Since
  1 = 1I_Ω(ω) = Σ_{k=0}^∞ 1I_{{N(t)=k}}(ω),
by direct computation,
  ES(t) = E Σ_{i=1}^{N(t)} X_i
        = E Σ_{k=0}^∞ (Σ_{i=1}^k X_i) 1I_{{N(t)=k}}
        = Σ_{k=0}^∞ E(Σ_{i=1}^k X_i) P(N(t) = k)
        = EX_1 Σ_{k=0}^∞ k P(N(t) = k)
        = EX_1 EN(t),
where we used the independence of (X_i) and N(t), E(Σ_{i=1}^k X_i) = k EX_1 and E1I_{{N(t)=k}} = P(N(t) = k).
In the CL-model we have EN(t) = λt. For the general case we use the Elementary Renewal Theorem (Theorem 2.2.4) to get the assertion. We continue with
  ES(t)^2 = E (Σ_{i=1}^{N(t)} X_i)^2
          = E Σ_{k=0}^∞ (Σ_{i=1}^k X_i)^2 1I_{{N(t)=k}}
          = Σ_{k=0}^∞ Σ_{i,j=1}^k E X_i X_j 1I_{{N(t)=k}}
          = EX_1^2 Σ_{k=0}^∞ k P(N(t) = k) + (EX_1)^2 Σ_{k=0}^∞ k(k − 1) P(N(t) = k)
          = EX_1^2 EN(t) + (EX_1)^2 (EN(t)^2 − EN(t))
          = var(X_1) EN(t) + (EX_1)^2 EN(t)^2.
It follows that
  var(S(t)) = ES(t)^2 − (ES(t))^2
            = ES(t)^2 − (EX_1)^2 (EN(t))^2
            = var(X_1) EN(t) + (EX_1)^2 var(N(t)).
For the Cramér-Lundberg model it holds EN(t) = var(N(t)) = λt, hence we have var(S(t)) = λt(var(X_1) + (EX_1)^2) = λt EX_1^2. For the renewal model we get
  lim_{t→∞} var(X_1) EN(t)/t = var(X_1)/EW_1.
The relation
  lim_{t→∞} var(N(t))/t = var(W_1)/(EW_1)^3
is shown in [5, Theorem 2.5.2].
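The moment formulas can be verified by simulation in the Cramér-Lundberg model. The sketch below (all names and parameter values are ours) uses λ = 2, t = 5 and Exp(1) claims, so ES(t) = λt EX_1 = 10 and var(S(t)) = λt EX_1^2 = 20:

```python
import math
import random

def poisson_sample(lam, rng):
    """Knuth's Poisson sampler, adequate for moderate lam."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def total_claim_amount(lam, t, claim, rng):
    """One draw of S(t) = X_1 + ... + X_{N(t)} with N(t) ~ Pois(lam * t)."""
    return sum(claim(rng) for _ in range(poisson_sample(lam * t, rng)))

rng = random.Random(3)
draws = [total_claim_amount(2.0, 5.0, lambda r: r.expovariate(1.0), rng)
         for _ in range(20_000)]
mean = sum(draws) / len(draws)
var = sum((s - mean) ** 2 for s in draws) / len(draws)
```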
Theorem 3.3.3. The Strong Law of Large Numbers (SLLN) and the Central Limit Theorem (CLT) for (S(t)) in the renewal model can be stated as follows:
(i) SLLN for (S(t)): If EW_1 < ∞ and E|X_1| < ∞, then
  lim_{t→∞} S(t)/t = EX_1/EW_1  a.s.
(ii) CLT for (S(t)): If var(W_1) < ∞ and var(X_1) < ∞, then
  sup_{x∈R} | P((S(t) − ES(t))/√var(S(t)) ≤ x) − Φ(x) | → 0, t → ∞,
where Φ is the distribution function of the standard normal distribution,
  Φ(x) = (1/√(2π)) ∫_{−∞}^x e^{−y^2/2} dy.
Proof:
(i) We follow the proof of [6, Theorem 3.1.5]. We have shown that
  lim_{t→∞} N(t)/t = 1/EW_1  a.s.
and it holds lim_{t→∞} N(t) = ∞ a.s. Since by the SLLN
  lim_{n→∞} (X_1 + ... + X_n)/n = EX_1  a.s.,
we get
  lim_{t→∞} S(t)/t = lim_{t→∞} (N(t)/t) (S(t)/N(t)) = EX_1/EW_1  a.s.
4. Classical premium calculation principles

The standard problem for an insurance company is to determine the amount of premium such that the losses S(t) are covered. On the other hand, the price should be low enough to be competitive and attract customers.
A first approximation of S(t) is given by ES(t). For the premium income p(t) this implies:
  p(t) < ES(t): the insurance company loses on average;
  p(t) > ES(t): the insurance company gains on average.
A reasonable solution would be
  p(t) = (1 + ρ) ES(t),
where ρ > 0 is the safety loading. Proposition 3.3.2 tells us that in the renewal model with EW_1 = 1/λ it holds ES(t) ≈ λ EX_1 t for large t.
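As a small numerical illustration of the expected value principle (the parameter values are ours): with claim intensity λ = 2 per year, mean claim size EX_1 = 100 and safety loading ρ = 0.1,

```python
def expected_value_premium(rho, lam, mean_claim, t):
    """p(t) = (1 + rho) * E S(t), using E S(t) ~= lam * t * E X1 for large t."""
    return (1.0 + rho) * lam * t * mean_claim

# One year of premium: 1.1 * 2 * 100 = 220, versus an expected
# total claim amount of 200, so the company gains on average.
premium = expected_value_premium(0.1, 2.0, 100.0, 1.0)
```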
4.1 Used principles
A distribution function F is called light-tailed if there exists λ > 0 such that
  lim_{n→∞} sup_{x≥n} (1 − F(x)) / e^{−λx} < ∞,
and heavy-tailed if for all λ > 0
  lim_{n→∞} sup_{x≥n} (1 − F(x)) / e^{−λx} > 0.
5.2 Examples

(1) The exponential distribution Exp(γ) is light-tailed for all γ > 0, since the distribution function is F(x) = 1 − e^{−γx}, x > 0, and
  (1 − F(x)) / e^{−λx} = e^{−γx} / e^{−λx} = e^{(λ−γ)x},
and by choosing 0 < λ < γ,
  sup_{x≥n} e^{(λ−γ)x} = e^{(λ−γ)n} → 0, as n → ∞.
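The contrast with a heavy tail can be made concrete numerically. In this sketch (function names are ours) the exponential tail divided by e^{−λx}, λ < γ, tends to zero, while the Pareto tail divided by any e^{−λx} explodes:

```python
import math

def exp_tail(gamma, x):
    """1 - F(x) for the Exp(gamma) distribution."""
    return math.exp(-gamma * x)

def pareto_tail(alpha, beta, x):
    """1 - F(x) = beta**alpha / (beta + x)**alpha for the Pareto distribution."""
    return (beta / (beta + x)) ** alpha

# Compare both tails against e^{-x} (lambda = 1) at x = 50:
exp_ratio = exp_tail(2.0, 50.0) / math.exp(-50.0)            # tends to 0
pareto_ratio = pareto_tail(2.0, 1.0, 50.0) / math.exp(-50.0) # explodes
```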
(2) The Pareto distribution, given by
  F(x) = 1 − β^α / (β + x)^α, x ≥ 0, α > 0, β > 0,
or
  F(x) = 1 − b^a / x^a, x ≥ b > 0, a > 0,
is heavy-tailed.

5.3 The QQ-plot
The empirical distribution function of a sample X_1, ..., X_n is
  F_n(x) := (1/n) Σ_{i=1}^n 1I_{(−∞,x]}(X_i), x ∈ R.
It can be shown that if X_1 ~ F and (X_i)_{i=1}^∞ are i.i.d., then
  lim_{n→∞} F_n(t) = F(t) a.s.
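The empirical distribution function is a few lines of code (names are ours); QQ-plots are built from exactly this object and the sorted sample:

```python
import bisect

def empirical_cdf(sample):
    """Return F_n as a callable: F_n(x) = (1/n) * #{i : X_i <= x}."""
    data = sorted(sample)
    n = len(data)
    return lambda x: bisect.bisect_right(data, x) / n

fn = empirical_cdf([3.0, 1.0, 2.0, 2.0])
```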
for some α > 0, where α is the risk aversion constant. The function p_exp(t) is defined via so-called utility theory.

6.2

6.3

  p(t) = E(S(t) e^{αS(t)}) / E e^{αS(t)},  α > 0.
(in the sense that, if either side of this expression exists, so does the
other, and then they are equal, see [7], pp. 168-169).
(c) The distribution of f can also be determined by its characteristic function (see [9])
  φ_f(u) := Ee^{iuf}, u ∈ R,
or by its moment generating function
  m_f(h) := Ee^{hf}, h ∈ (−h_0, h_0),
provided that Ee^{h_0 f} < ∞ for some h_0 > 0.
Remember: for independent random variables f and g it holds
  φ_{f+g}(u) = φ_f(u) φ_g(u).
7.2 Mixture distributions

Consider the total claim amount S = Σ_{i=1}^N X_i and, for k = 1, ..., n, independent compound sums
  S_k = Σ_{j=1}^{N_k} X_j^{(k)}, k = 1, ..., n.
39
Define the mixture
  Y_1 = Σ_{k=1}^n 1I_{{J=k}} X_1^{(k)} with P(J = k) = λ_k/λ, λ := λ_1 + ... + λ_n.
  φ_{S_k}(u) = E e^{iu Σ_{j=1}^{N_k} X_j^{(k)}}
    = E Σ_{m=0}^∞ e^{iu Σ_{j=1}^m X_j^{(k)}} 1I_{{N_k=m}}
    = Σ_{m=0}^∞ (E e^{iu X_1^{(k)}})^m P(N_k = m)
    = Σ_{m=0}^∞ (φ_{X_1^{(k)}}(u))^m e^{−λ_k} λ_k^m / m!
    = e^{−λ_k (1 − φ_{X_1^{(k)}}(u))}.
Then
  φ_S(u) = Ee^{iu(S_1 + ... + S_n)}
         = Ee^{iuS_1} ··· Ee^{iuS_n}
         = φ_{S_1}(u) ··· φ_{S_n}(u)
         = e^{−λ_1(1 − φ_{X_1^{(1)}}(u))} ··· e^{−λ_n(1 − φ_{X_1^{(n)}}(u))}
         = exp( −λ (1 − Σ_{k=1}^n (λ_k/λ) φ_{X_1^{(k)}}(u)) ).
Let λ = Σ_{l=1}^n λ_l. For the mixture Y_1 we get
  φ_{Y_1}(u) = Σ_{l=1}^n E e^{iuX_1^{(l)}} 1I_{{J=l}} = Σ_{l=1}^n φ_{X_1^{(l)}}(u) λ_l/λ.
Finally,
  φ_S(u) = e^{−λ(1 − φ_{Y_1}(u))},
so S has the characteristic function of a compound Poisson random variable with intensity λ and claim size distribution given by Y_1.
7.3 Applications in insurance
First application
Assume that the claims arrive according to an inhomogeneous Poisson process, i.e.
  N(t) − N(s) ~ Pois(μ(t) − μ(s)).
The total claim amount in year l is
  S_l = Σ_{j=N(l−1)+1}^{N(l)} X_j^{(l)}, l = 1, ..., n,
which has the same distribution as
  Σ_{j=1}^{Ñ_l} X_j^{(l)}, l = 1, ..., n,
with independent Ñ_l ~ Pois(μ(l) − μ(l−1)).
Therefore
  S(n) := S_1 + ... + S_n =(d) Σ_{i=1}^N Y_i,
where N ~ Pois(μ(n)) and the Y_i are i.i.d. mixtures of the claim size distributions of the years 1, ..., n. Hence the total claim amount S(n) in the first n years (with possibly different claim size distributions in each year) has a representation as a compound Poisson random variable.
Second application
We can interpret the random variables
  S_i = Σ_{j=1}^{N_i} X_j^{(i)}, N_i ~ Pois(λ_i), i = 1, ..., n,
as the total claim amounts of n independent portfolios for the same fixed period of time. The (X_j^{(i)})_{j≥1} in the i-th portfolio are i.i.d., but the distributions may differ from portfolio to portfolio (one particular type of car insurance, for example). Then
  S(n) = S_1 + ... + S_n =(d) Σ_{i=1}^N Y_i,
where N ~ Pois(λ), λ := λ_1 + ... + λ_n, the Y_i are i.i.d. with Y_1 = Σ_{l=1}^n 1I_{{J=l}} X_1^{(l)} and P(J = l) = λ_l/λ.
7.4 The Panjer recursion

Let
  S = Σ_{i=1}^N X_i,
N : Ω → {0, 1, ...} and (X_i)_{i≥1} i.i.d., with N and (X_i) independent. Then, setting S_0 := 0, S_n := X_1 + ... + X_n, n ≥ 1, yields
  P(S ≤ x) = Σ_{n=0}^∞ P(S ≤ x, N = n)
           = Σ_{n=0}^∞ P(S ≤ x | N = n) P(N = n)
           = Σ_{n=0}^∞ P(S_n ≤ x) P(N = n).
With q_k := P(N = k),
  p_0 := P(S = 0) = q_0 + Σ_{k=1}^∞ P(X_1 + ... + X_k = 0, N = k)
       = q_0 + Σ_{k=1}^∞ P(X_1 = 0)^k P(N = k)
       = E P(X_1 = 0)^N.
This implies (1).
For p_n, n ≥ 1,
  p_n := P(S = n) = Σ_{k=1}^∞ P(S_k = n) q_k.
Using the (a, b)-condition q_k = (a + b/k) q_{k−1} we get
  p_n = Σ_{k=1}^∞ P(S_k = n) (a + b/k) q_{k−1}.    (3)
(3)
Q(X1 =l)
b
= a + E Q X1
n
b
= a+
EQ (X1 + ... + Xk )
nk
b
= a+
EQ Sk
nk | {z }
=n
b
= a+ ,
k
(4)
where the last equation yields from the fact that Q(Sk = n) = 1. On the
other hand, we can express the term a + kb also by
  a + b/k = Σ_{l=0}^n (a + bl/n) P(X_1 = l | S_k = n)
          = Σ_{l=0}^n (a + bl/n) P(X_1 = l, S_k − X_1 = n − l) / P(S_k = n)
          = Σ_{l=0}^n (a + bl/n) P(X_1 = l) P(S_{k−1} = n − l) / P(S_k = n).
Plugging this into (3) gives
  p_n = Σ_{k=1}^∞ Σ_{l=0}^n (a + bl/n) P(X_1 = l) P(S_{k−1} = n − l) q_{k−1}    (5)
      = Σ_{l=0}^n (a + bl/n) P(X_1 = l) Σ_{k=1}^∞ P(S_{k−1} = n − l) q_{k−1}
      = Σ_{l=0}^n (a + bl/n) P(X_1 = l) P(S = n − l)
  = a P(X_1 = 0) P(S = n) + Σ_{l=1}^n (a + bl/n) P(X_1 = l) P(S = n − l)
  = a P(X_1 = 0) p_n + Σ_{l=1}^n (a + bl/n) P(X_1 = l) p_{n−l},
which gives equation (2):
  p_n = (1 / (1 − a P(X_1 = 0))) Σ_{l=1}^n (a + bl/n) P(X_1 = l) p_{n−l}.
Remark 7.4.2.
- The Panjer recursion only works for distributions of X_i on {0, 1, 2, ...}, i.e. Σ_{k=0}^∞ P(X_i = k) = 1 (or, by scaling, on a lattice {0, d, 2d, ...} for d > 0 fixed).
- Traditionally, the distributions used to model X_i have a density h_{X_i}, and then the set {0, 1, 2, ...} has probability zero. But on the other hand, claim sizes are expressed in terms of prices, so they take values on a lattice. The density h_{X_i}(x) could be approximated by a distribution on a lattice, but how large would the approximation error then be?
- N can only be Poisson, binomially or negative binomially distributed.
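For N Poisson distributed the (a, b)-condition holds with a = 0 and b = λ, and the recursion is only a few lines of code. The sketch below (names are ours) computes P(S = n) on {0, ..., n_max}; with deterministic claims X_i = 1 it must reproduce the Poisson distribution itself, which makes a convenient sanity check:

```python
import math

def panjer_poisson(lam, claim_pmf, n_max):
    """Panjer recursion for S = X_1 + ... + X_N with N ~ Pois(lam) and
    i.i.d. claims on {0, 1, 2, ...}. Returns [P(S=0), ..., P(S=n_max)].
    Poisson case: a = 0, b = lam, and P(S=0) = E P(X1=0)^N = exp(-lam*(1-f0))."""
    a, b = 0.0, lam
    f0 = claim_pmf(0)
    p = [math.exp(-lam * (1.0 - f0))]
    for n in range(1, n_max + 1):
        s = sum((a + b * l / n) * claim_pmf(l) * p[n - l] for l in range(1, n + 1))
        p.append(s / (1.0 - a * f0))
    return p

# Sanity check: X_i = 1 a.s. gives S = N ~ Pois(lam).
probs = panjer_poisson(2.0, lambda l: 1.0 if l == 1 else 0.0, 10)
```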
7.5 Approximation of F_{S(t)}

  S(t) = Σ_{i=1}^{N(t)} X_i, t ≥ 0.
Now, by setting
  x := (y − ES(t)) / √var(S(t)),
for large t the approximation
  P(S(t) ≤ y) ≈ Φ( (y − ES(t)) / √var(S(t)) )
can be used.
Warning: This approximation is not good enough to estimate P(S(t) > y) for large y; see [6], Section 3.3.4.
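A sketch of this normal approximation in the Cramér-Lundberg model (names and parameter values are ours), using ES(t) = λt EX_1 and var(S(t)) = λt EX_1^2:

```python
import math

def normal_cdf(x):
    """Standard normal distribution function via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def clt_approx(y, lam, t, ex1, ex1_sq):
    """P(S(t) <= y) ~= Phi((y - ES(t)) / sqrt(var S(t))) for large t."""
    mean = lam * t * ex1
    sd = math.sqrt(lam * t * ex1_sq)
    return normal_cdf((y - mean) / sd)

# Exp(1) claims (EX1 = 1, EX1^2 = 2), lam = 2, t = 100: ES(t) = 200, sd = 20.
p_at_mean = clt_approx(200.0, 2.0, 100.0, 1.0, 2.0)
```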
7.6 Monte Carlo approximations of F_{S(t)}

Simulate m independent copies
  S_1 = Σ_{i=1}^{N_1} X_i^1, ..., S_m = Σ_{i=1}^{N_m} X_i^m
of S(t). Then S_i =(d) S(t) and the S_i's are independent. By the Strong Law of Large Numbers,
  p̂_m := (1/m) Σ_{i=1}^m 1I_A(S_i) → P(S(t) ∈ A) = p  a.s., as m → ∞.
It can be shown that this does not work well for small values of p (see [6], Section 3.3.5 for details).
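A crude Monte Carlo sketch (all names and parameter values are ours): simulate m independent copies of S(t) in the Cramér-Lundberg model and average the indicator of the event of interest:

```python
import math
import random

def poisson_sample(lam, rng):
    """Knuth's Poisson sampler, adequate for moderate lam."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def mc_probability(sampler, event, m, seed=0):
    """p_hat = (1/m) * #{i : event(S_i)} -> P(event) a.s. by the SLLN."""
    rng = random.Random(seed)
    return sum(event(sampler(rng)) for _ in range(m)) / m

# Estimate P(S(t) > ES(t)) for lam*t = 5 and Exp(1) claims (ES(t) = 5).
sampler = lambda rng: sum(rng.expovariate(1.0)
                          for _ in range(poisson_sample(5.0, rng)))
p_hat = mc_probability(sampler, lambda s: s > 5.0, 50_000)
```

For small target probabilities p the relative error of this crude estimator blows up, which is exactly the caveat mentioned above.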
(x_2, x_1, x_1)
(x_3, x_1, x_2)
(x_3, x_2, x_2)
...
We denote the k-th triple by X(k) = (X_1(k), X_2(k), X_3(k)), k ∈ {1, 2, ...}. Then, for example, the sample mean of the k-th triple has variance var(X_1)/n.
  S(t) = Σ_{i=1}^{N(t)} X_i = Σ_{n=0}^∞ (Σ_{i=1}^n X_i) 1I_{{N(t)=n}}.
8. Reinsurance treaties

Reinsurance treaties are mutual agreements between different insurance companies to reduce the risk in a particular insurance portfolio. Reinsurance can be considered as insurance for the insurance company.
Reinsurance is used if there is a risk of rare but huge claims. Examples usually involve a catastrophe such as an earthquake, a nuclear power station disaster, an industrial fire, war, a tanker accident, etc.
According to Wikipedia, the world's largest reinsurance company in 2009 was Munich Re, based in Germany, with gross written premiums worth over $31.4 billion, followed by Swiss Re (Switzerland), General Re (USA) and Hannover Re (Germany).
There are two different types of reinsurance:

A Random walk type reinsurance

1. Proportional reinsurance: The reinsurer pays an agreed proportion p of the claims,
  R_prop(t) = p S(t).
2. Stop-loss reinsurance: The reinsurer covers the losses that exceed an agreed amount K,
  R_SL(t) = (S(t) − K)_+,
where x_+ = max{x, 0}.
3. Excess-of-loss reinsurance: The reinsurer covers the losses that exceed an agreed amount D for each claim separately,
  R_ExL(t) = Σ_{i=1}^{N(t)} (X_i − D)_+.
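The random walk type treaties are direct to code. A minimal sketch (function names are ours) applied to the individual claims X_1, ..., X_{N(t)}:

```python
def r_proportional(claims, p):
    """R_prop(t) = p * S(t)."""
    return p * sum(claims)

def r_stop_loss(claims, K):
    """R_SL(t) = (S(t) - K)^+."""
    return max(sum(claims) - K, 0.0)

def r_excess_of_loss(claims, D):
    """R_ExL(t) = sum of (X_i - D)^+ over the single claims."""
    return sum(max(x - D, 0.0) for x in claims)

claims = [1.0, 5.0, 10.0]   # S(t) = 16
```

Note that stop-loss acts on the total S(t), while excess-of-loss caps each claim separately, so in general R_SL and R_ExL differ even for the same retention level.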
B Extreme value type reinsurance

1. Largest claims reinsurance: The reinsurer covers the k largest claims,
  R_LC(t) = Σ_{i=1}^k X_{(N(t)−i+1)},
where X_{(1)} ≤ ... ≤ X_{(N(t))} denote the order statistics of X_1, ..., X_{N(t)}.
2. ECOMOR reinsurance (Excédent du coût moyen relatif = excess of the average cost):
Define k = ⌊(N(t)+1)/2⌋. Then
  R_ECOMOR(t) = Σ_{i=1}^{N(t)} (X_{(N(t)−i+1)} − X_{(N(t)−k+1)})_+ = Σ_{i=1}^{k−1} (X_{(N(t)−i+1)} − X_{(N(t)−k+1)}).
Treaties of random walk type can be handled like before. For example,
  P(R_SL(t) ≤ x) = P(S(t) ≤ K) + P(K < S(t) ≤ x + K).
9. Probability of ruin

9.1 The risk process

If the renewal model is assumed, then the total claim amount process is
  S(t) = Σ_{i=1}^{N(t)} X_i, t ≥ 0.
[Figure: a path of the risk process U(t) with initial capital U(0) = 4.]
where the last equation follows from the fact that U(t) = u + ct − S(t). Since in the renewal model it was assumed that W_i > 0 a.s., it follows that
  N(T_n) = #{i ≥ 1 : T_i ≤ T_n} = n
and
  S(T_n) = Σ_{i=1}^{N(T_n)} X_i = Σ_{i=1}^n X_i,
where
  T_n = W_1 + ... + W_n,
which imply that
  {ω : T(ω) < ∞} = {ω : inf_{n≥1} (u + cT_n − S(T_n)) < 0}
                 = {ω : inf_{n≥1} (u + cT_n − Σ_{i=1}^n X_i) < 0}.
By setting
  Z_n := X_n − cW_n, n ≥ 1,
and
  G_n := Z_1 + ... + Z_n, n ≥ 1, G_0 := 0,
it follows that
  {ω : T(ω) < ∞} = {ω : inf_{n≥1} (−G_n) < −u} = {ω : sup_{n≥1} G_n > u}
holds.
The objective is to achieve the following properties:
- avoiding a situation where the probability of ruin ψ(u) = 1;
- ψ(u) should be small if the initial capital u is large.
By the Strong Law of Large Numbers (with the assumption that E|Z_1| < ∞),
  lim_{n→∞} G_n/n = EZ_1 almost surely.
If EZ_1 > 0, then G_n → ∞ a.s. as n → ∞, because G_n ≈ n EZ_1 for large n. This means the ruin probability ψ(u) = 1 for all u > 0 if EZ_1 > 0.
In the case EZ_1 = 0 consider, for m ≥ 1,
  A_m := { lim sup_{n→∞} (Z_1 + ... + Z_n)/√n ≥ m }.
Notice that for fixed ω and n_0 ≥ 1
  lim sup_{n→∞} (Z_1(ω) + ... + Z_n(ω))/√n ≥ m
iff
  lim sup_{n→∞} (Z_{n_0}(ω) + ... + Z_n(ω))/√n ≥ m.
Hence A_m belongs to the tail σ-algebra ∩_{n_0≥1} σ(Z_{n_0}, Z_{n_0+1}, ...), so P(A_m) ∈ {0, 1}. Since
  P( lim sup_{n→∞} (Z_1 + ... + Z_n)/√n = ∞ ) = lim_{m→∞} P(A_m),
it suffices to show P(A_m) > 0. We have
  A_m = { lim sup_{n→∞} (Z_1 + ... + Z_n)/√n ≥ m } ⊆ ∩_{n=1}^∞ ∪_{k=n}^∞ { (Z_1 + ... + Z_k)/√k ≥ m },
so that by the Central Limit Theorem
  P(A_m) ≥ lim sup_{k→∞} P( (Z_1 + ... + Z_k)/√k ≥ m ) = ∫_m^∞ e^{−x^2/(2σ^2)} dx/√(2πσ^2) > 0,
where σ^2 = EZ_1^2.
Definition 9.1.4 (Net profit condition). The renewal model satisfies the net profit condition (NPC) if and only if
  EZ_1 = EX_1 − c EW_1 < 0.    (NPC)
The consequence of (NPC) is that on average more premium flows into the portfolio of the company than claim sizes flow out: we have
  G_n = S(T_n) − p(T_n) = (X_1 + ... + X_n) − c(W_1 + ... + W_n),
which implies
  EG_n = n EZ_1 < 0.
Theorem 9.1.3 implies that any insurance company should choose the premium p(t) = ct in such a way that EZ_1 < 0. In that case there is hope that the ruin probability is less than 1.
9.2 Bounds for the ruin probability

In this section it is assumed that the renewal model is used and the net profit condition holds (i.e. EX_1 − c EW_1 < 0).
Recall from Theorem 7.1.1 that for a random variable f : (Ω, F) → (R, B(R)) the function
  m_f(h) = Ee^{hf}
was called the moment generating function if it exists at least for h in a small interval (−h_0, h_0).
Remark 9.2.1.
a) The map h ↦ Ee^{hf} is infinitely differentiable on (−h_0, h_0), with
  (d^m/dh^m) m_f(0) = Ef^m.
We will say that the small claim condition holds if and only if there exists h_0 > 0 such that
  m_{X_1}(h) = Ee^{hX_1} exists for all h ∈ (−h_0, h_0).
Theorem 9.2.2 (The Lundberg inequality). Assume that m_{Z_1} exists at least in a small interval (−h_0, h_0). If there exists a solution r ∈ (0, h_0) to
  m_{Z_1}(r) = Ee^{r(X_1 − cW_1)} = 1,
then for each u > 0 it holds that
  ψ(u) ≤ e^{−ru},
where r is called the Lundberg coefficient.
The result implies that if the small claim condition holds and the initial capital u is large, there is in principle no danger of ruin.
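In the Cramér-Lundberg model with Exp(γ) claims and Exp(λ) inter-arrival times, the equation m_{Z_1}(r) = Ee^{rX_1} Ee^{−rcW_1} = (γ/(γ−r))(λ/(λ+cr)) = 1 can be solved numerically; the closed-form solution r = γ − λ/c makes a check possible. A bisection sketch (names are ours):

```python
def lundberg_coefficient(lam, gamma, c, tol=1e-12):
    """Find r in (0, gamma) with m_Z1(r) = 1 for Exp(gamma) claims and
    Exp(lam) inter-arrival times; needs the net profit condition c > lam/gamma.
    m_Z1 is convex with m_Z1(0) = 1, so m_Z1(r) < 1 exactly on (0, r*),
    which makes bisection applicable."""
    def m(r):
        return (gamma / (gamma - r)) * (lam / (lam + c * r))
    lo, hi = tol, gamma - tol
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if m(mid) < 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Example: lam = 1, gamma = 2, c = 1 gives r = gamma - lam / c = 1,
# so the Lundberg bound reads psi(u) <= e^{-u}.
```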
Remark 9.2.3.
a) By Markov's inequality, for δ > 0 and h ∈ (0, h_0),
  P(Z_1 ≥ δ) ≤ e^{−hδ} m_{Z_1}(h)  and  P(Z_1 ≤ −δ) ≤ e^{−hδ} m_{Z_1}(−h),
which implies
  P(|Z_1| ≥ δ) ≤ e^{−hδ} [ m_{Z_1}(h) + m_{Z_1}(−h) ].
b) It can be shown that if r exists, it is unique, which follows from the fact that m_{Z_1} is convex (if it exists): we have
  e^{((1−θ)r_0 + θr_1)Z_1} ≤ (1 − θ) e^{r_0 Z_1} + θ e^{r_1 Z_1}.
Moreover, m_{Z_1}(0) = 1 and by Jensen's inequality
  m_{Z_1}(h) = Ee^{Z_1 h} ≥ e^{EZ_1 h},
such that (assuming (NPC), i.e. EZ_1 < 0) we get
  lim_{h→−∞} m_{Z_1}(h) ≥ lim_{h→−∞} e^{EZ_1 h} = ∞.
c) Assume that
  P(|Z_1| > δ) ≤ c_1 e^{−δ/c_1}, δ > 0,
for some constant c_1 > 0. Then it holds
  E|Z_1|^n = ∫_0^∞ n δ^{n−1} P(|Z_1| > δ) dδ ≤ n c_1 ∫_0^∞ e^{−δ/c_1} δ^{n−1} dδ = n! c_1^{n+1}.
Because of
  m_{Z_1}(h) = Ee^{hZ_1} = E Σ_{n=0}^∞ (hZ_1)^n / n! ≤ Σ_{n=0}^∞ |h|^n E|Z_1|^n / n! ≤ c_1 Σ_{n=0}^∞ (|h| c_1)^n < ∞
for |h| < 1/c_1, the moment generating function m_{Z_1} exists on (−1/c_1, 1/c_1).
Proof of Theorem 9.2.2: We show by induction that
  ψ_n(u) := P(max_{1≤k≤n} G_k > u) ≤ e^{−ru}, n ≥ 1, u > 0.
For the induction step we write
  ψ_{n+1}(u) = P(max_{1≤k≤n+1} G_k > u)
    = P(Z_1 > u) + P(max_{1≤k≤n+1} G_k > u, Z_1 ≤ u)
    = P(Z_1 > u) + P(max_{2≤k≤n+1} (G_k − Z_1) > u − Z_1, Z_1 ≤ u)
    = P(Z_1 > u) + ∫_{(−∞,u]} P(max_{1≤k≤n} G_k > u − x) dF_{Z_1}(x),
where we have used for the last line that max_{2≤k≤n+1}(G_k − Z_1) and Z_1 are independent. We estimate the first term:
  P(Z_1 > u) = ∫_{(u,∞)} dF_{Z_1}(x) ≤ ∫_{(u,∞)} e^{r(x−u)} dF_{Z_1}(x).
Consequently, by the induction hypothesis,
  ψ_{n+1}(u) ≤ ∫_{(u,∞)} e^{r(x−u)} dF_{Z_1}(x) + ∫_{(−∞,u]} e^{−r(u−x)} dF_{Z_1}(x) = e^{−ru} Ee^{rZ_1} = e^{−ru},
and letting n → ∞ gives ψ(u) ≤ e^{−ru}.
Example 9.2.4. Consider the Cramér-Lundberg model with Exp(γ) distributed claim sizes,
  m_{X_1}(h) = Ee^{hX_1} = ∫_0^∞ e^{hx} γ e^{−γx} dx = γ/(γ − h), h < γ.
The net profit condition EX_1 − cEW_1 = 1/γ − c/λ < 0 means
  c > λ/γ.
Hence m_{Z_1} exists on (−λ/c, γ) and for r > 0 we get
  m_{Z_1}(r) = (γ/(γ − r)) (λ/(λ + cr)) = 1
  ⟺ λγ = (γ − r)(λ + cr) = λγ + cγr − λr − cr^2
  ⟺ 0 = cγ − λ − cr
  ⟺ r = γ − λ/c.
Consequently,
  ψ(u) ≤ e^{−ru} = e^{−(γ − λ/c)u}.
Applying the expected value principle p(t) = (1 + ρ)ES(t) = (1 + ρ)λ EX_1 t we get c = (1 + ρ)λ/γ, hence
  λ/c = γ/(1 + ρ)  and  r = γ − λ/c = γρ/(1 + ρ).
This implies
  ψ(u) ≤ e^{−ru} = e^{−uγρ/(1+ρ)},
where one should notice that λ does not even enter the bound: the claim intensity does not change the ruin probability estimate considerably!
The following theorem considers the special case of the Cramér-Lundberg model:

Theorem 9.2.5 (Cramér's ruin bound). Assume that the Cramér-Lundberg model is used and the net profit condition (NPC) holds. In addition, let the distribution of X_1 have a density, assume m_{X_1}(h) exists in a small neighborhood (−h_0, h_0) of the origin and it holds for the Lundberg coefficient r that −h_0 < r < h_0. Then there exists a constant C > 0 such that
  lim_{u→∞} e^{ru} ψ(u) = C,
with
  C = ρ EX_1 / ( r ∫_0^∞ y e^{ry} F̄_{X_1}(y) dy ),
where F̄_{X_1} := 1 − F_{X_1}.
Lemma 9.2.6. Under (NPC), the survival probability φ(u) := 1 − ψ(u) satisfies
  φ(u) = φ(0) + (1/(1+ρ)) ∫_0^u φ(u − y) dF_{X_1,I}(y),
where F_{X_1,I}(y) := (1/EX_1) ∫_0^y F̄_{X_1}(x) dx denotes the integrated tail distribution.

b) It holds
  lim_{u→∞} φ(u) = lim_{u→∞} (1 − ψ(u)) = lim_{u→∞} P(sup_{k≥1} G_k ≤ u) = 1.
c) It holds φ(0) = ρ/(1+ρ). Indeed, because of b) and Lemma 9.2.6 we may conclude that
  1 = φ(0) + (1/(1+ρ)) lim_{u→∞} ∫_0^∞ 1I_{[0,u]}(y) φ(u − y) dF_{X_1,I}(y)
    = φ(0) + (1/(1+ρ)) ∫_0^∞ lim_{u→∞} ( 1I_{[0,u]}(y) φ(u − y) ) dF_{X_1,I}(y)
    = φ(0) + (1/(1+ρ)) ∫_0^∞ dF_{X_1,I}(y)
    = φ(0) + 1/(1+ρ).
Conditioning on the first inter-arrival time and the first claim leads to
  φ(u) = (λ/c) e^{λu/c} ∫_{[u,∞)} e^{−λz/c} ∫_{[0,z]} φ(z − x) dF_{X_1}(x) dz.    (2)
We consider
  φ(u) = P(sup_{n≥1} G_n ≤ u)
       = P(Z_1 ≤ u, G_n − Z_1 ≤ u − Z_1 for all n ≥ 2)
       = ∫_{[0,∞)} ∫_{[0,u+cw]} P(G_n − Z_1 ≤ u − (x − cw) for all n ≥ 2) dF_{X_1}(x) dF_{W_1}(w),
which, after the substitution z = u + cw, gives (2). Differentiating (2) with respect to u yields
  φ'(u) = (λ/c) φ(u) − (λ/c) ∫_{[0,u]} φ(u − x) dF_{X_1}(x),
such that
  φ(t) − φ(0) = (λ/c) ∫_0^t φ(u) du − (λ/c) ∫_0^t ∫_{[0,u]} φ(u − x) dF_{X_1}(x) du.
Integration by parts in the inner integral,
  ∫_{[0,u]} φ(u − x) dF_{X_1}(x) = φ(0) F_{X_1}(u) − φ(u) F_{X_1}(0) + ∫_{[0,u]} φ'(u − x) F_{X_1}(x) dx,
together with Fubini's theorem,
  ∫_0^t ∫_{[0,u]} φ'(u − x) F_{X_1}(x) dx du = ∫_0^t ( ∫_{[x,t]} φ'(u − x) du ) F_{X_1}(x) dx,
leads after simplification to
  φ(t) − φ(0) = (λ/c) ∫_0^t φ(t − x) F̄_{X_1}(x) dx,
which is the assertion of Lemma 9.2.6, since (λ/c) F̄_{X_1}(x) dx = (1/(1+ρ)) dF_{X_1,I}(x).
With q := 1/(1+ρ), the equation of Lemma 9.2.6 reads
  φ(t) − φ(0) = q ∫_0^t φ(t − y) dF_{X_1,I}(y), where dF_{X_1,I}(y) = (F̄_{X_1}(y)/EX_1) dy.
Define the exponentially tilted distribution function
  F^{(r)}(x) := q ∫_0^x e^{ry} dF_{X_1,I}(y) = (q/EX_1) ∫_0^x e^{ry} F̄_{X_1}(y) dy.
By Fubini's theorem,
  Ee^{rX_1} = 1 + ∫_0^∞ P(X_1 > y) r e^{ry} dy = 1 + r ∫_0^∞ F̄_{X_1}(y) e^{ry} dy,
so that, using the Lundberg equation m_{Z_1}(r) = 1, i.e. Ee^{rX_1} = 1/Ee^{−rcW_1} = (λ + rc)/λ,
  F^{(r)}(∞) = (q/(EX_1 r)) (Ee^{rX_1} − 1) = (q/(EX_1 r)) (rc/λ) = qc/(λ EX_1) = c/((1+ρ)λ EX_1) = 1,
since c = (1+ρ)λ EX_1. Hence F^{(r)} is a probability distribution function.
From Lemma 9.2.6 and φ(0) = ρ/(1+ρ),
  φ(u) = ρ/(1+ρ) + (1/(1+ρ)) ∫_0^u φ(u − y) (F̄_{X_1}(y)/EX_1) dy.
Hence
  ψ(u) = 1 − φ(u) = 1/(1+ρ) − (1/(1+ρ)) ∫_0^u (1 − ψ(u − y)) (F̄_{X_1}(y)/EX_1) dy
       = q − q ∫_0^u (F̄_{X_1}(y)/EX_1) dy + q ∫_0^u ψ(u − y) (F̄_{X_1}(y)/EX_1) dy
       = q F̄_{X_1,I}(u) + ∫_0^u ψ(u − y) d(qF_{X_1,I})(y).
Let H be a probability distribution function concentrated on [0, ∞) with finite mean and let
a) k : R → [0, ∞) be Riemann integrable such that k(x) = 0 for x < 0,
b) lim_{x→∞} k(x) = 0.
Then, in the class of all functions on (0, ∞) which are bounded on finite intervals, the renewal equation
  R(u) = k(u) + ∫_0^u R(u − y) dH(y)    (3)
has a unique solution R, and it holds
  lim_{u→∞} R(u) = ( 1/∫_0^∞ x dH(x) ) ∫_0^∞ k(u) du.
Multiplying the equation for ψ by e^{ru} shows that R(u) := e^{ru} ψ(u) satisfies (3) with H = F^{(r)} and k(u) = q e^{ru} F̄_{X_1,I}(u). Hence
  lim_{u→∞} e^{ru} ψ(u) = ( 1/∫_0^∞ x dF^{(r)}(x) ) ∫_0^∞ q e^{rz} F̄_{X_1,I}(z) dz,
and, using (q/EX_1) ∫_0^∞ e^{rz} F̄_{X_1}(z) dz = 1,
  ∫_0^∞ q e^{rz} F̄_{X_1,I}(z) dz = (q/(r EX_1)) ∫_0^∞ (e^{rz} − 1) F̄_{X_1}(z) dz = (1/r)(1 − q) = ρ/(r(1+ρ)).
Finally,
  ∫_0^∞ x dF^{(r)}(x) = (q/EX_1) ∫_0^∞ y e^{ry} F̄_{X_1}(y) dy
implies
  lim_{u→∞} e^{ru} ψ(u) = ρ EX_1 / ( r ∫_0^∞ y e^{ry} F̄_{X_1}(y) dy ).
Remark 9.2.10. The condition (NPC) EZ_1 = EX_1 − c EW_1 < 0 can in the Cramér-Lundberg model (assuming the expected value principle) also be formulated as
  c = (1 + ρ)λ EX_1 for some ρ > 0.
9.3 An asymptotics for the ruin probability
Theorem 9.3.1. Assume the Cramér-Lundberg model and that the condition (NPC) holds. Let (X_{I,k})_{k≥1} be i.i.d. random variables with distribution function F_{X_1,I} and
  φ(u) = (ρ/(1+ρ)) ( 1 + Σ_{n=1}^∞ (1+ρ)^{−n} P(X_{I,1} + ... + X_{I,n} ≤ u) ), u ∈ R.
Then φ is, in the class
  G := { g : R → [0, ∞) : non-decreasing, bounded, right-continuous with g(x) = 0 for x < 0 and g(0) = ρ/(1+ρ) },
the unique solution to
  φ(u) = φ(0) + (1/(1+ρ)) ∫_0^u φ(u − y) dF_{X_1,I}(y).
Proof:
Uniqueness. Assume φ_1, φ_2 ∈ G are solutions and δ := φ_1 − φ_2. Then
  δ(u) = (1/(1+ρ)) ∫_0^u δ(u − y) dF_{X_1,I}(y)
       = (1/(1+ρ)) ∫_0^u δ(u − y) (F̄_{X_1}(y)/EX_1) dy
       = (1/((1+ρ)EX_1)) ∫_0^u δ(y) F̄_{X_1}(u − y) dy
and
  |δ(u)| ≤ (1/((1+ρ)EX_1)) ∫_0^u |δ(y)| dy.
Lemma 9.3.3. F ∈ S if and only if for all n ≥ 2
  lim_{x→∞} P(S_n > x)/P(max_{1≤k≤n} X_k > x) = 1.

Proof:
It holds for S_n = X_1 + ... + X_n
  P(S_n > x)/P(max_{1≤k≤n} X_k > x) = P(S_n > x)/(1 − P(max_{1≤k≤n} X_k ≤ x))
    = P(S_n > x)/(1 − P(X_1 ≤ x)^n)
    = P(S_n > x)/(1 − (1 − P(X_1 > x))^n)
    = P(S_n > x)/(P(X_1 > x) n (1 + o(1))),
so the ratio converges to 1 if and only if P(S_n > x)/P(X_1 > x) → n.
Proposition 9.3.4.
a) For F_X ∈ S it holds
  lim_{x→∞} F̄_X(x − y)/F̄_X(x) = 1 for all y > 0.
c) For F_X ∈ S and each ε > 0 there exists a constant K = K(ε) such that
  P(S_n > x) ≤ K (1 + ε)^n P(X_1 > x) for all n ≥ 2 and x ≥ 0.
Proof of a): For 0 < y < x,
  P(X_1 + X_2 > x)/P(X_1 > x) = 1 + ∫_{[0,x]} (F̄(x − t)/F̄(x)) dF(t)
    ≥ 1 + F(y) + (F̄(x − y)/F̄(x)) (F(x) − F(y)).
We choose x large enough such that F(x) − F(y) > 0 and observe that
  F̄(x − y)/F̄(x) ≤ ( P(X_1 + X_2 > x)/P(X_1 > x) − 1 − F(y) ) / (F(x) − F(y)) → (2 − 1 − F(y))/(1 − F(y)) = 1
as x → ∞. Since also F̄(x − y) ≥ F̄(x), a) follows.
b) A measurable function L : [0, ∞) → (0, ∞) is called slowly varying if
  lim_{x→∞} L(cx)/L(x) = 1 for all c > 0.
Every function of the form L(x) = c_0(x) exp( ∫_{x_0}^x (ε(t)/t) dt ) with lim_{x→∞} c_0(x) = c_0 > 0 and lim_{t→∞} ε(t) = 0 is slowly varying. Indeed,
  L(cx)/L(x) = (c_0(cx)/c_0(x)) exp( ∫_x^{cx} (ε(t)/t) dt )
and
  | ∫_x^{cx} (ε(t)/t) dt | ≤ sup_{t≥x} |ε(t)| · log c → 0 as x → ∞.
Now we continue with the proof of b). Let L(η) := F̄(log η). For each c > 0,
  lim_{η→∞} L(cη)/L(η) = lim_{η→∞} F̄(log c + log η)/F̄(log η) = 1
by a), so L is slowly varying.
Let X_1 and X_2 be independent with regularly varying tails F̄_{X_i}(x) = L_i(x)/x^α, i = 1, 2. For δ ∈ (0, 1/2) we have
  P(X_1 + X_2 > x) ≤ P(X_1 > (1 − δ)x) + P(X_2 > (1 − δ)x) + P(X_1 > δx, X_2 > δx),
hence
  lim sup_{x→∞} P(X_1 + X_2 > x)/(F̄_{X_1}(x) + F̄_{X_2}(x))
    ≤ lim sup_{x→∞} ( F̄_{X_1}(x) (L_1((1−δ)x)/L_1(x)) + F̄_{X_2}(x) (L_2((1−δ)x)/L_2(x)) ) (1 − δ)^{−α} / (F̄_{X_1}(x) + F̄_{X_2}(x))
    = (1 − δ)^{−α}.
Letting δ → 0 gives lim sup ≤ 1, and since
  P(X_1 + X_2 > x) ≥ P(X_1 > x) + P(X_2 > x) − P(X_1 > x) P(X_2 > x),
also
  lim inf_{x→∞} P(X_1 + X_2 > x)/(F̄_{X_1}(x) + F̄_{X_2}(x)) ≥ 1.
Consequently,
  lim_{x→∞} P(X_1 + X_2 > x)/(F̄_{X_1}(x) + F̄_{X_2}(x)) = 1.
Definition 9.3.6. If there exists a slowly varying function L and some α > 0 such that for a positive random variable X it holds
  F̄_X(x) = L(x)/x^α,
then F_X is called regularly varying with index α or of Pareto type with exponent α. For such X one has
  lim_{x→∞} P(S_n > x)/F̄_X(x) = n.
b) The Pareto distribution with
  F̄(x) = β^α/(β + x)^α, x ≥ 0, α > 0, β > 0,
is subexponential: we can write
  F̄(x) = (1/x^α) (βx/(β + x))^α =: L(x)/x^α
and conclude
  L(cx)/L(x) = ( (βcx/(β + cx)) ((β + x)/(βx)) )^α = ( c(β + x)/(β + cx) )^α → 1 for x → ∞,
so F̄ is regularly varying.
c) The Weibull distribution
  F(x) = 1 − e^{−cx^r}, 0 < r < 1, x ≥ 0,
is subexponential as well.
For i.i.d. X_1, X_2 with X_1 ~ F ∈ S,
  lim_{x→∞} P(X_1 + X_2 > x)/P(max{X_1, X_2} > x) = 1,
which is equivalent to
  lim_{x→∞} P(X_1 + X_2 > x)/P(X_1 > x) = 2.

Proof. We show by induction that lim_{x→∞} P(X_1 + X_2 > x)/P(X_1 > x) = 2 implies F_X ∈ S. The case n = 2 in Lemma 9.3.3 is our assumption. We assume that
  lim_{x→∞} P(X_1 + ... + X_k > x)/P(X_1 > x) = k.
Moreover, splitting at the last summand, ∫_0^x (P(S_k > x − y)/P(X_1 > x)) dF_{X_1}(y) stays controlled by a), which implies
  lim sup_{x→∞} P(X_1 + ... + X_k + X_{k+1} > x)/P(X_1 > x) ≤ k + 1.
The other inequality can be shown similarly. For the remaining equivalence see the proof of Lemma 9.3.3.
Summary: Pareto type distributions belong to S (subexponential), and subexponential distributions are heavy-tailed; distributions with existing exponential moments are light-tailed.

In the renewal model the expected value principle leads to the premium
  p(t) = ct = (1 + ρ) (EX_1/EW_1) t,    (4)
which implies
  EX_1 (1 + ρ) − c EW_1 = 0
and further
  EX_1 − c EW_1 < 0,
which means that the net profit condition holds. Equation (4) implies that
  ρ = c EW_1/EX_1 − 1.

Theorem. Assume the Cramér-Lundberg model, (NPC) and F_{X_1,I} ∈ S. Then
  lim_{u→∞} ψ(u)/F̄_{X_1,I}(u) = 1/ρ.
Proof:
From Lemma 9.2.6 we know that the survival probability solves
  φ(u) = φ(0) + (1/((1+ρ)EX_1)) ∫_0^u φ(u − y) F̄_{X_1}(y) dy.
The function φ is bounded, non-decreasing and right-continuous, since
  φ(u) = P(sup_{k≥1} G_k ≤ u).
By Theorem 9.3.1,
  φ(u) = (ρ/(1+ρ)) ( 1 + Σ_{n=1}^∞ (1+ρ)^{−n} P(X_{I,1} + ... + X_{I,n} ≤ u) )
and
  ψ(u) = (ρ/(1+ρ)) Σ_{n=1}^∞ (1+ρ)^{−n} P(X_{I,1} + ... + X_{I,n} > u),
since
  (ρ/(1+ρ)) Σ_{n=0}^∞ (1+ρ)^{−n} = 1.
Hence
In order to be able to exchange summation and limit, we will use the estimate
of Proposition 9.3.4 c)
P(XI,1 + ... + XI,n > u) K(1 + )n F X1 ,I (u).
For (0, ) we have
n P(XI,1
(1 + )
n=1
X
+ ... + XI,n > u)
(1 + )n
K
< .
(1 + )n
F X1 ,I (u)
n=1
Therefore,
  lim_{u→∞} ψ(u)/F̄_{X_1,I}(u) = (ρ/(1+ρ)) Σ_{n=1}^∞ (1+ρ)^{−n} lim_{u→∞} P(X_{I,1} + ... + X_{I,n} > u)/F̄_{X_1,I}(u)
    = (ρ/(1+ρ)) Σ_{n=1}^∞ (1+ρ)^{−n} n = 1/ρ.
Is there a criterion for F_{X,I} ∈ S?

Definition 9.3.11. A positive random variable X belongs to the class S* if and only if
a) EX = μ ∈ (0, ∞),
b) lim_{x→∞} ∫_0^x (F̄(x − y)/F̄(x)) F̄(y) dy = 2μ.
It can be shown that X ∈ S* implies F_{X,I} ∈ S. A key step: for given ε > 0 and x large,
  2(1 − ε)μ ≤ ∫_0^x (F̄(x − y)/F̄(x)) F̄(y) dy ≤ 2(1 + ε)μ,
and splitting the integral at x/2, since the part over [0, x/2] and the two bounds all converge as x → ∞, we get
  lim_{x→∞} ∫_{x/2}^x (F̄(x − y)/F̄(x)) F̄(y) dy = μ.
Example. Assume the Cramér-Lundberg model with Weibull distributed claims,
  F̄_{X_1}(x) = e^{−cx^r}, x ≥ 0,
with fixed c > 0 and r ∈ (0, 1), and the net profit condition (NPC) is fulfilled. Then
  lim_{u→∞} ψ(u) / ∫_u^∞ e^{−cx^r} dx = 1 / ( ρ ∫_0^∞ e^{−cx^r} dx ).
10. Problems
1. Poisson distribution
Let f : Ω → {0, 1, 2, ...} be Poisson distributed with intensity λ > 0, i.e.
  P({ω : f(ω) = k}) = e^{−λ} λ^k / k!, k ∈ N.
Show that
(a) Ef = λ,
(b) var(f) = λ, where var(f) := E(f − Ef)^2.
2. Order statistics
Let Y_1, Y_2, ..., Y_n be i.i.d. random variables with a continuous, strictly increasing distribution function F. Put
  Y_{(n)} := max{Y_1, Y_2, ..., Y_n},
  Y_{(n−1)}(ω) := second largest value of Y_1(ω), Y_2(ω), ..., Y_n(ω),
and so on. The random variable Y_{(i)} is called the i-th order statistic of (Y_1, Y_2, ..., Y_n).
(a) Compute P(at least 2 of the Y_i are equal).
(b) Show that the random variables X_1, ..., X_n, given by X_i := F(Y_i), are independent and uniformly distributed on [0, 1].
(c) Show that the distribution of (X_{(1)}, ..., X_{(n)}) has the density
  f(x_1, ..., x_n) = n! 1I_{{0 ≤ x_1 ≤ x_2 ≤ ... ≤ x_n ≤ 1}}.
3. Characteristic functions
For a random variable X : Ω → R we define
  X̂(t) := Ee^{itX} for t ∈ R.
(c) Compute the characteristic function of the distribution with density
  (1/(n−1)!) 1I_{[0,∞)}(x) λ^n x^{n−1} e^{−λx}.
(d) Show that the characteristic function of any random variable is positive semidefinite, i.e. it holds for t_1, ..., t_n ∈ R and c_1, ..., c_n ∈ C that
  Σ_{k,l=1}^n X̂(t_k − t_l) c_k c̄_l ≥ 0.
Show that for independent Exp(λ) distributed W_1, ..., W_n (cf. Lemma 2.1.2)
  P(W_1 + ... + W_n ≤ x) = 1 − e^{−λx} Σ_{k=0}^{n−1} (λx)^k / k!.
Compute the joint distribution of the jump times T_1, ..., T_n of a Poisson process, i.e.
  P(a_1 < T_1 ≤ b_1, ..., a_n < T_n ≤ b_n)
for 0 ≤ a_k < b_k.
Hint: Use
  {T_1 ≤ t_1, ..., T_n ≤ t_n} = {N(t_1) ≥ 1, ..., N(t_n) ≥ n}.
10. Let N_1 and N_2 be independent Poisson processes with intensities λ_1 > 0 and λ_2 > 0, respectively. (Note that the processes X and Y are called independent if the σ-algebras σ(X) and σ(Y) are independent, i.e. if it holds
  P(A ∩ B) = P(A)P(B) for all A ∈ σ(X), B ∈ σ(Y),
where σ(X) (resp. σ(Y)) denotes the smallest σ-algebra such that all X(t), t ≥ 0 (resp. Y(t), t ≥ 0) are measurable.) Show that N = N_1 + N_2 is again a Poisson process.

11. Show that the probability that the independent Poisson processes N_1 and N_2 ever jump at the same time is zero.
12. Stopping times
Let (Ω, F, P) be a probability space and (F_n)_{n=0}^∞ a filtration, i.e. it holds F_n ⊆ F_{n+1} ⊆ F for n ∈ N.
Show that τ : Ω → [0, ∞) ∪ {∞} is a stopping time w.r.t. (F_n)_{n=0}^∞ iff
  {ω : τ(ω) ≤ n} ∈ F_n, n ∈ N.

13. Wald's identity: Show that
  E Σ_{n=1}^τ X_n = Eτ EX_1.
Hint: Use Σ_{n=1}^τ X_n = Σ_{n=1}^∞ X_n 1I_{{n ≤ τ}} and check that X_n and 1I_{{n ≤ τ}} are independent.
14. Compute the probability that the Poisson process jumps at time t > 0:
We define ΔN(t)(ω) := N(t)(ω) − N(t−)(ω) = N(t)(ω) − lim_{s↑t} N(s)(ω). Find out
  P({ω : ΔN(t)(ω) = 1}).

15. Order statistics property of the Poisson process
Let N be a Poisson process and (X_n)_{n∈N} a sequence of independent, on [0, 1] uniformly distributed random variables. Show
  P({(T_1, ..., T_n) ∈ B} | {N(1) = n}) = P({(X_{(1)}, ..., X_{(n)}) ∈ B}) for all B = [b_0, b_1] × ... × [b_{n−1}, b_n],
where 0 = b_0 < b_1 < ... < b_n.
Hint: P({(T_1, ..., T_n) ∈ B} | {N(1) = n}) = ...
16. ... k = 1, ..., n holds.
Hint: How are Ñ(n) − Ñ(k) and F_k related?

17. Renewal processes do not explode:
Show that for a renewal process N it holds
  P({ω : N(t)(ω) < ∞}) = 1,
i.e. for almost all ω the process N jumps only finitely many times until time t.
Hint: Why does P(Σ_{i=1}^∞ W_i = ∞) = 1 hold?
18. Let (N(t))_{t≥0} be a renewal process
  N(t) := #{i ≥ 1 : T_i ≤ t},
where T_n := W_1 + ··· + W_n and (W_i)_{i≥1} is a sequence of positive i.i.d. random variables such that EW_1 < ∞. We define the renewal function
  m(t) := EN(t) + 1.
Show that m satisfies the renewal equation
  m(t) = 1I_{[0,∞)}(t) + ∫_0^t m(t − y) dF_{T_1}(y).
Hint: N(t) = Σ_{i=1}^∞ 1I_{[0,t]}(T_i).
(b) the Weibull distribution, given by the distribution function ...

20. The standard deviation principle p_SD(t) := ES(t) + α √var(S(t)) is motivated by the Central Limit Theorem for (S(t)):
(a) Show for the renewal model (assume var(X_1) < ∞ and var(W_1) < ∞) that for all x ∈ R
  P(S(t) − p_SD(t) ≤ x) → Φ(α), t → ∞.
(b) Show that the netto principle and the standard deviation principle are asymptotically equivalent, i.e. there exists a constant c > 0 such that
  p_NET(t)/p_SD(t) → c, for t → ∞.
corresponding premiums p(t) and p̃(t), the premium for S(t) + S̃(t) should be p(t) + p̃(t).
Homogeneity: For c > 0 the premium for cS(t) should be cp(t).
Which of the premium principles p_NET, p_EV, p_VAR and p_SD satisfy these conditions in the renewal model?
23. Let N be a (homogeneous) Poisson process with intensity λ > 0 and let T_1, T_2, ... be the corresponding jump times of N. Assume (X_n)_{n=1}^∞ is a sequence of i.i.d. random variables independent of N, and that X_1 has the distribution function F. We define for 0 < s < t and a < b
M((s, t] × (a, b]) := #{i ≥ 1 : T_i ∈ (s, t], X_i ∈ (a, b]}.
Show that
(a) M((s, t] × (a, b]) is Poisson distributed with parameter
λ(t − s)(F(b) − F(a)),
(b) the random variables M(A_1) and M(A_2) are independent for
A_k := (s_k, t_k] × (a_k, b_k] with A_1 ∩ A_2 = ∅.
Hint: a) Compute the characteristic function u ↦ E e^{iuM((s,t]×(a,b])} and use that
M((s, t] × (a, b]) = ∑_{n=N(s)+1}^{N(t)} 1I_{(a,b]}(X_n).
b) One may use the fact that the random variables f, g : Ω → R are independent if and only if E e^{iuf+ivg} = E e^{iuf} E e^{ivg} for all u, v ∈ R.
24. Mixed distributions
Let f_1, ..., f_n be random variables and F_1, ..., F_n their corresponding distribution functions. The random variable J : Ω → {1, ..., n} is independent of f_1, ..., f_n, and it holds that P(J = k) = p_k for k = 1, ..., n. Describe the distribution function of
Z := ∑_{k=1}^n 1I_{{J=k}} f_k
with the help of (F_k)_{k=1}^n and (p_k)_{k=1}^n.
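Conditioning on the events {J = k} leads to the mixture F_Z(x) = ∑_{k=1}^n p_k F_k(x). A minimal sketch evaluating this formula for two hypothetical components (all names and parameters are our own):

```python
import math

def mixture_cdf(x, ps, Fs):
    """CDF of Z: conditioning on {J = k} gives F_Z(x) = sum_k p_k * F_k(x)."""
    return sum(p * F(x) for p, F in zip(ps, Fs))

# Two hypothetical components: Exp(1) and U(0, 2), mixed with weights 0.3 / 0.7.
F_exp = lambda x: 1.0 - math.exp(-x) if x >= 0 else 0.0
F_uni = lambda x: min(max(x / 2.0, 0.0), 1.0)
ps, Fs = [0.3, 0.7], [F_exp, F_uni]

print(mixture_cdf(-1.0, ps, Fs))  # 0.0
print(mixture_cdf(2.0, ps, Fs))   # 0.3*(1 - e^-2) + 0.7
```

Since each F_k is a distribution function and the p_k sum to 1, the mixture inherits monotonicity and the limits 0 and 1.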
25. Panjer recursion
For which (a, b) ∈ R² do the following distributions satisfy the (a, b)-condition
q_n = P(N = n) = (a + b/n) q_{n−1}, n = 1, 2, ... ?
Consider in particular the negative binomial distribution
q_n = binom(v + n − 1, n) p^n (1 − p)^v, n = 0, 1, 2, ... (0 < p < 1, v > 0).
(The negative binomial distribution can be interpreted as the probability of n successes, which occur with probability p, and v failures from altogether v + n trials, under the condition that the (v + n)-th trial is a failure.)
For v > 0 one defines
binom(v + n − 1, n) := Γ(v + n) / (Γ(v) Γ(n + 1)),
where Γ(x) = ∫_0^∞ t^{x−1} e^{−t} dt is the Gamma function, and it holds that Γ(n + 1) = n! for n ∈ N.
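The recursion named in the title can be sketched for a Poisson claim number, which satisfies the (a, b)-condition with a = 0 and b = λ. The implementation below is a minimal sketch with names of our own; as a sanity check, unit claims give S = N, so the output must reproduce the Poisson weights.

```python
import math

def panjer_poisson(lam, f, nmax):
    """Compound Poisson pmf g[n] = P(S = n) via Panjer's recursion.

    Poisson(lam) satisfies the (a, b)-condition with a = 0, b = lam;
    f[j] = P(X_1 = j) is the claim size pmf on {0, 1, 2, ...}.
    """
    g = [0.0] * (nmax + 1)
    g[0] = math.exp(-lam * (1.0 - f[0]))          # P(S = 0)
    for n in range(1, nmax + 1):
        g[n] = (lam / n) * sum(j * f[j] * g[n - j]
                               for j in range(1, min(n, len(f) - 1) + 1))
    return g

# Sanity check: if every claim equals 1, then S = N ~ Poisson(lam).
lam = 2.0
g = panjer_poisson(lam, [0.0, 1.0], nmax=6)
poisson = [math.exp(-lam) * lam ** n / math.factorial(n) for n in range(7)]
print(g[:3])
```

For a general (a, b) pair the only changes are the starting value g[0] and the factor (a + b·j/n) inside the sum.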
28. Cramér-Lundberg: a special case
We consider the Cramér-Lundberg model with intensity λ > 0 and assume that the claims are Gamma distributed, i.e. the density is given by
h_{X_1}(x) := (β^γ / Γ(γ)) x^{γ−1} e^{−βx}, x > 0 (γ, β > 0).
(a) Compute the moment generating function m_{X_1}(h) and find out for which h the expression is finite.
(b) Formulate the (NP) condition in λ, γ, β and c.
(c) Compute the Lundberg coefficient assuming that the (NP) condition holds.
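For part (c) a numerical cross-check is possible: under the (NP) condition the Lundberg coefficient is the unique positive root of λ(m_{X_1}(h) − 1) = ch in (0, β), where m_{X_1}(h) = (β/(β − h))^γ is the Gamma moment generating function. A bisection sketch (names are ours), tested against the closed form R = β − λ/c for exponential claims (γ = 1):

```python
def lundberg_coefficient(lam, c, gamma, beta):
    """Bisection for the unique root in (0, beta) of lam*(m(h) - 1) = c*h,
    where m(h) = (beta / (beta - h))**gamma is the Gamma mgf (finite for h < beta).
    Assumes the net profit condition c > lam * gamma / beta."""
    phi = lambda h: lam * ((beta / (beta - h)) ** gamma - 1.0) - c * h
    lo, hi = 1e-9, beta * (1.0 - 1e-12)   # phi(lo) < 0 under (NP), phi(hi) > 0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if phi(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# For gamma = 1 (exponential claims) the root is known: R = beta - lam / c.
R = lundberg_coefficient(lam=1.0, c=2.0, gamma=1.0, beta=1.5)
print(R)   # close to 1.5 - 0.5 = 1.0
```

Bisection is enough here because φ(h) = λ(m(h) − 1) − ch is convex with φ(0) = 0, φ'(0) < 0 under (NP), and φ(h) → ∞ as h ↑ β.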
(b) Show that the Poisson distribution, the binomial distribution and the negative binomial distribution are the only distributions on N which can satisfy the (a, b)-condition.
Hint: Check whether there are (a, b) ∉ R_1 ∪ R_2 ∪ R_3 such that there exists a probability measure on N.
30. Compound Poisson process
Let N_1, ..., N_n be independent Poisson processes with corresponding intensities λ_1, ..., λ_n and let x_1, ..., x_n ∈ R. Is
x_1 N_1 + ... + x_n N_n
a compound Poisson process?
31. Integrated tail distribution
Let X : Ω → [0, ∞) be a random variable with 0 < EX < ∞. Show that
F(x) := (1/EX) ∫_0^x P(X > y) dy for x ≥ 0, and F(x) := 0 for x < 0,
is a distribution function.
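A numerical sanity check (all names are our own): for X ~ Exp(λ) one has P(X > y) = e^{−λy} and EX = 1/λ, so the integrated tail distribution is again Exp(λ). The sketch approximates the defining integral by the trapezoidal rule and compares with 1 − e^{−λx}.

```python
import math

def integrated_tail_cdf(x, tail, mean, n=100000):
    """F_I(x) = (1/EX) * integral_0^x P(X > y) dy via the trapezoidal rule."""
    if x < 0:
        return 0.0
    h = x / n
    s = 0.5 * (tail(0.0) + tail(x)) + sum(tail(i * h) for i in range(1, n))
    return h * s / mean

# For Exp(lam): P(X > y) = exp(-lam*y), EX = 1/lam, and F_I(x) = 1 - exp(-lam*x).
lam = 2.0
tail = lambda y: math.exp(-lam * y)
for x in (0.5, 1.0, 3.0):
    print(integrated_tail_cdf(x, tail, mean=1.0 / lam), 1.0 - math.exp(-lam * x))
```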
32. Itô's formula for the Poisson process
Let N = (N(t))_{t≥0} be a Poisson process.
(a) Let f : R → R be a Borel function. Show that
f(N(t, ω)) = f(N(s, ω)) + ∫_{(s,t]} [f(N(u, ω)) − f(N(u−, ω))] dN(u, ω)
= f(N(s, ω)) + ∑_{s<u≤t} [f(N(u, ω)) − f(N(u−, ω))].
(b) Show for a function g : [0, ∞) × [0, ∞) → R which is continuously differentiable in the first variable and continuous in the second variable that
g(t, N(t, ω)) = g(s, N(s, ω)) + ∫_{(s,t]} (∂g/∂u)(u, N(u, ω)) du + ∫_{(s,t]} [g(u, N(u, ω)) − g(u, N(u−, ω))] dN(u, ω).
Hint: Use (a) on each interval (s_{k−1}, s_k] of a partition s = s_0 < s_1 < ... < s_n = t, and show that for any ε > 0 there exists a δ > 0 such that ...
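Part (a) can be verified pathwise: between jumps N is constant, so f(N(t)) − f(N(s)) telescopes over the jumps in (s, t]. A sketch on a fixed, hypothetical sample path (all names are our own):

```python
def ito_sum(f, jump_times, s, t):
    """Sum over s < u <= t of f(N(u)) - f(N(u-)) along one fixed path."""
    total = 0
    for k, u in enumerate(jump_times):   # after the k-th listed jump, N = k + 1
        if s < u <= t:
            total += f(k + 1) - f(k)
    return total

jump_times = [0.4, 1.1, 1.7, 2.9, 3.2]   # a fixed, hypothetical sample path
N = lambda u: sum(1 for v in jump_times if v <= u)
f = lambda n: n * n                       # any Borel function of the state

s, t = 0.5, 3.0
print(f(N(t)) - f(N(s)), ito_sum(f, jump_times, s, t))   # both equal 16 - 1 = 15
```

The integral against dN reduces to this jump sum because N increases only by unit jumps, which is the content of the second equality in (a).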
35. Subexponential distributions: an equivalent condition
Prove that F : [0, ∞) → [0, 1] belongs to S if and only if
lim_{x→∞} (1 − F^{*n}(x)) / (1 − F(x)) = n for all n ≥ 2. (1)

(a) 1st step: Show that
(1 − F^{*(n+1)}(x)) / (1 − F(x)) = 1 + ∫_0^x ((1 − F^{*n}(x − t)) / (1 − F(x))) dF(t). (2)

(b) 2nd step: From (2) one concludes for 0 < y ≤ x that
(1 − F^{*2}(x)) / (1 − F(x)) = 1 + ∫_0^y ((1 − F(x − t)) / (1 − F(x))) dF(t) + ∫_y^x ((1 − F(x − t)) / (1 − F(x))) dF(t)
≤ 1 + F(y) + ((1 − F(x − y)) / (1 − F(x))) (F(x) − F(y)).
Conclude that for every fixed y > 0
lim_{x→∞} (1 − F(x − y)) / (1 − F(x)) = 1. (3)

(c) 3rd step: Use (2) to show (1) by induction on n: For sufficiently large x and y with y ≤ x one has
∫_0^{x−y} ((1 − F^{*n}(x − t)) / (1 − F(x − t))) ((1 − F(x − t)) / (1 − F(x))) dF(t) ≤ (n + ε) ∫_0^{x−y} ((1 − F(x − t)) / (1 − F(x))) dF(t).
Moreover, because of (3) we have
∫_0^{x−y} ((1 − F(x − t)) / (1 − F(x))) dF(t) = ((1 − F^{*2}(x)) / (1 − F(x)) − 1) − ∫_{x−y}^x ((1 − F(x − t)) / (1 − F(x))) dF(t) → 1
as x → ∞.
A. The Lebesgue-Stieltjes integral

A.1 The Riemann-Stieltjes integral

For f, g : [a, b] → R one defines the variation
V^1_{[a,b]}(g) := sup { ∑_{i=1}^n |g(t_i) − g(t_{i−1})| : finite partitions a = t_0 < t_1 < ... < t_n = b, n = 1, 2, ... }
and considers the Riemann-Stieltjes sums
RS(P, f, g) := ∑_{i=1}^n f(c_i)(g(t_i) − g(t_{i−1})),
where c_i ∈ [t_{i−1}, t_i] is chosen arbitrarily. We will say that the Riemann-Stieltjes integral
(RS) ∫_a^b f(x) dg(x)
exists if the sums RS(P, f, g) converge to one and the same limit as the mesh of the partition P tends to zero, independently of the choice of the c_i.

For a partition P one also considers the upper and lower sums
U(P, f, g) = ∑_{i=1}^n sup_{c∈[t_{i−1},t_i]} f(c) (g(t_i) − g(t_{i−1})),
L(P, f, g) = ∑_{i=1}^n inf_{c∈[t_{i−1},t_i]} f(c) (g(t_i) − g(t_{i−1})).
If f and g jump at a common point, then for the interval [t_{k−1}, t_k] containing that point U(P, f, g) uses sup_{c∈[t_{k−1},t_k]} f(c) while L(P, f, g) uses inf_{c∈[t_{k−1},t_k]} f(c), and these contributions do not merge as the mesh tends to zero. Consequently, (RS) ∫_a^b f(x) dg(x) does not exist in this case.

A.2 The Lebesgue-Stieltjes integral
b) There exists a finite measure μ_g on ((a, b], B((a, b])) such that
g(x) − g(a) = μ_g((a, x]).

Definition A.2.2. For a Borel function f : [a, b] → R and a non-decreasing and right-continuous function g : [a, b] → R such that
∫_{(a,b]} |f(x)| dμ_g(x) < ∞
one defines
(LS) ∫_{(a,b]} f(x) dg(x) := ∫_{(a,b]} f(x) dμ_g(x).

Lemma A.2.3. A right-continuous function g : [a, b] → R with V^1_{[a,b]}(g) < ∞ can be written as the difference of two non-decreasing right-continuous functions h_1 and h_2,
g(x) = h_1(x) − h_2(x).

Definition A.2.4. For a Borel function f : [a, b] → R and a right-continuous function g : [a, b] → R such that V^1_{[a,b]}(g) < ∞ and g = h_1 − h_2 from the previous Lemma such that
∫_{(a,b]} |f(x)| dμ_{h_1}(x) + ∫_{(a,b]} |f(x)| dμ_{h_2}(x) < ∞
one defines
(LS) ∫_{(a,b]} f(x) dg(x) := ∫_{(a,b]} f(x) dμ_{h_1}(x) − ∫_{(a,b]} f(x) dμ_{h_2}(x).
Example A.2.5.
a) Let x ∈ (a, b), f = 3 · 1I_{[x,b]} and g = 2 · 1I_{[a,x)}. The function g we can write as g(y) = 2 − 2δ_x((a, y]). We obtain
(LS) ∫_{(a,b]} f(y) dg(y) = 0 − 2 ∫_{(a,b]} f(y) dδ_x(y) = −2f(x) = −6.
b) In the same way one can show that for the Poisson process (N(t))_{t≥0} (for those paths which are càdlàg) and 0 < a < b it holds that
(LS) ∫_{(a,b]} (N(t) − N(a)) dN(t) = (N(b) − N(a) + 1)(N(b) − N(a)) / 2.
Note that the (RS) integral is not defined here.
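The identity in b) can be checked on a fixed path: the LS integral against dN evaluates the integrand at each jump, giving 1 + 2 + ... + k with k = N(b) − N(a). A sketch on a hypothetical sample path (all names are our own):

```python
def ls_integral(jump_times, a, b):
    """(LS) integral of N(t) - N(a) with respect to dN over (a, b], one fixed path."""
    N = lambda u: sum(1 for v in jump_times if v <= u)
    return sum(N(u) - N(a) for u in jump_times if a < u <= b)

jump_times = [0.3, 0.9, 1.4, 2.2, 2.8, 3.5]   # a fixed, hypothetical path
a, b = 0.5, 3.0
N = lambda u: sum(1 for v in jump_times if v <= u)
k = N(b) - N(a)                                # here k = 4
print(ls_integral(jump_times, a, b), k * (k + 1) // 2)   # both equal 10
```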
Bibliography
[1] N. L. Carothers. Real Analysis. Cambridge University Press, 2000.
[2] C. Geiss and S. Geiss. An Introduction to Probability Theory. http://stochastics-mathematics.uibk.ac.at/scripts.htm
[3] S. Geiss. Stochastic Processes in Discrete Time. http://stochastics-mathematics.uibk.ac.at/scripts.htm
[4] P. Embrechts, C. Klüppelberg, and T. Mikosch. Modelling Extremal Events for Insurance and Finance. Springer, 1997.
[5] A. Gut. Stopped Random Walks: Limit Theorems and Applications. Springer, 2009.
[6] T. Mikosch. Non-Life Insurance Mathematics: An Introduction with the Poisson Process. Springer, 2009.
[7] M. Loève. Probability Theory I. Springer, 1977.
[8] S. Resnick. Adventures in Stochastic Processes. Birkhäuser, 1992.
[9] A. N. Shiryaev. Probability. Springer, 1996.