
The fundamental theorem of actuarial risk science
Sérgio B. Volchan∗

Abstract
In this paper we present one of the main results of collective risk theory
in non-life insurance mathematics, which says that for small claims the ruin
probability decreases exponentially fast. The discussion is made in the context
of the classical Cramér-Lundberg model using the martingale technique.

Keywords: risk theory, insurance, martingales

1 Introduction
Like most areas of mathematics, probability theory emerged from the need to deal
with real-life problems. Besides its use in games of chance and astronomical data
analysis, the field experienced a great impetus in the 19th century from applications
in the social sciences: statistics, demography, economics, insurance and even law. It
is then not totally surprising, though still remarkable, that the pioneering work on
the most sophisticated branch of modern probability theory, that of continuous-time
stochastic processes, was also related to social science applications. 1
First and foremost is Louis Bachelier’s 1900 famous thesis titled “Théorie de la
spéculation” in which for the first time a (semi-rigorous) mathematical treatment of
Brownian motion is used to describe price fluctuations at the Paris stock exchange.
Written under the supervision of none other than Henri Poincaré, it antedated Ein-
stein’s work (1905) on the physical explanation of Brownian motion and Wiener’s
(1923) rigorous mathematical construction, namely the Wiener process (with con-
tinuous nowhere differentiable sample paths). It antedated as well the now almost

∗ Pontifícia Universidade Católica do Rio de Janeiro, Departamento de Matemática, Rua Marquês de São Vicente 225, Gávea, 22453-900 Rio de Janeiro, Brasil. volchan@mat.puc-rio.br
1. Incidentally, the earlier use of probabilistic ideas in kinetic gas theory in the hands of Maxwell and Boltzmann was in part inspired by statistics as applied to social sciences. It seems that the first evolution equation for a probability density ever written was Boltzmann's integro-differential equation in 1872.

universally accepted axiomatization of probability theory laid down in 1933 by An-
drei Kolmogorov (who was critical of Bachelier's lack of rigor but acknowledged his
influence in his work on continuous-time Markov processes), not to mention Itô’s
stochastic calculus developed in the 1940’s, also influenced by Bachelier.
As is well known, this work was almost forgotten (partly due to its novelty and
partly to its lack of rigor) until it was rediscovered in the 1950’s and early 60’s
by mathematicians, physicists and economists. It gained renewed significance dur-
ing the intense academic work on financial mathematics in the wake of the deep
structural changes in the global financial markets of the seventies (and the con-
comitant computer and telecommunications revolution), a work that culminated
in the Nobel-winning (1997) Black-Scholes-Merton model. By that time, the the-
ory of martingales, developed in the fifties by Doob (having Wiener process as the
quintessential example) and Itô’s stochastic calculus were in full bloom. Then in
the eighties the connection of martingales to the crucial financial concept of arbi-
trage was clarified and widely explored. Nowadays, stochastic analysis is a standard
tool of the modern theoretical and applied finance specialist and the importance of
Bachelier’s work was internationally recognised through the celebration of the First
World Congress of the Bachelier Finance Society held in Paris (2000). [14]
What is perhaps less well known is that almost at the same time as Bachelier’s
thesis, another pioneering work, this time in the field of actuarial science, was done by
the Swede Filip Lundberg. In his 1903 Uppsala thesis he uses yet another important
example of stochastic process, to wit, Poisson process (with discontinuous sample
paths), in modelling the ruin problem for an insurance company. Extended and
rigorized by Harald Cramér in the thirties, the so-called Cramér-Lundberg model is
still a landmark of insurance mathematics (non-life branch). It wouldn’t be unfair
to say that it has a similar role in actuarial science as the Black-Scholes-Merton
model in finance. [12]
It is fascinating to realize that from such applications in finance and insurance,
developed almost simultaneously, the two most important examples of stochastic
processes came to life. 2 Interestingly, actuarial/insurance mathematics was also
important in the quest for the foundations of probability theory. In fact, David
Hilbert, in his famous address at the 1900 International Congress of Mathematicians held
in Paris, included the axiomatization of probability theory as part of his 6th problem
(on the axiomatization of physics of which probability was thought to be a part).
He makes reference to a lecture on the subject (published in 1900) by the insurance
mathematician Georg Bohlmann, in the context of life-insurance problems; Bohlmann,
in turn, cites Poincaré's 1896 textbook on probability as a main source! [7]
In this paper we discuss the classical Cramér-Lundberg model, in particular the
use of martingale methods to derive the Cramér-Lundberg estimate. The paper
is structured as follows. We first discuss the central but subtle concept of risk in
finance and insurance and the role of the actuary as a risk manager. We then
describe the classical Cramér-Lundberg model and the related ruin problem for
insurance risk. Finally we derive the Cramér-Lundberg estimate, after which we
make some concluding remarks.
2. Which are the basic examples of the class of Lévy processes [1, 17], i.e., processes with stationary and independent increments and with right-continuous sample paths having left limits.

2 The elusive concept of risk
Everyone agrees that the concept of risk is central to the disciplines of finance and
actuarial science. Ironically, however, there is no consensus on what it precisely
means. It generally has the negative connotation of a “loss” 3 usually measured in
monetary units. And the fact is that it may have different meanings to different
people in different situations. Thus one speaks of various “risk factors” in a given
context. These factors are usually interconnected in complex ways and one tries
to devise associated “risk measures” in order to obtain some quantitative estimate
(and hopefully some control) of one’s vulnerability to such factors. 4
Risk is also commonly associated with uncertainty, a psychological category,
which in turn manifests itself due to the unpredictability of certain future events.
This is a typical situation in finance and insurance, as various transactions and
exchanges are contingent on the occurrence or not of certain future events, generating
uncertainty regarding what could happen in-between. Consider for example the risk
of default of a company (or government) which is unable to honour its contractual
obligations due to unforeseen changes in the political, social-economic and even
natural scenarios.
Now, the unpredictability of a certain phenomenon or process has many sources,
for instance:
(a) Ignorance, lack of information or poor/partial knowledge about the laws and
mechanisms that rule this phenomenon. This is an unavoidable fact of the hu-
man condition. It can in principle be mitigated with improved and continuous
research; so in a sense, science can be viewed as a collective effort to curb such
human limitations. But for an individual, incomplete information is the rule.
(b) It might be the case that even knowing in detail the laws and mechanisms
involved, still the process or system dealt with has some intrinsic instability;
for example, it could display sensitivity to small perturbations causing an
amplification of errors, leading to a disruption of any long-time predictability.
This is thought to be the case of so-called “chaotic dynamical systems” (e.g.,
in meteorology).
(c) It could also happen that the system is stochastic by nature. Of course, that
doesn’t mean that the system’s behaviour is arbitrary; quite the opposite,
it means that its behaviour (or some aspects of it) is governed by the (very
stringent) laws of probability.
All of the items above (and others) can of course happen simultaneously in a
given situation. However, item (c), which we may call the "stochastic hypothesis", is
very popular in many models in finance and actuarial science. So much so that
in these models “risk” is simply defined as a certain random variable (or some
parameter linked to it, like standard deviation) representing an uncertain future
payment/liability. In other words, for the sake of mathematical modelling, one
avoids discussing the origin of randomness and just identifies it with risk. Thus, in
finance models all the complexities of market price determination are reduced to
3. But not always, as it can mean gains from investment or speculation. Incidentally, the word risk in Chinese is the concatenation of "danger" with "opportunity", which seems to better capture the gist of the concept.
4. By the way, the charge of interest is an old form of protection against the risk of lending money, so in this sense it is a primitive form of risk measure.

the hypothesis that the prices of risky assets are described by some given stochastic
processes, typically related to Wiener’s. Similarly, in actuarial science it is common
to suppose that claims (corresponding, say, to car accidents) happen at random
according to a given probability distribution, typically linked to the Poisson process
(see section 3 below).
As uncertainty cannot be completely eliminated in human affairs, individuals
and society tried to devise means to at least minimise its impact. In other words,
whatever the origins or type of risk, its management or administration is one of
the primary tasks of the risk analyst or technologist (a function in great demand at
major financial and governmental institutions). Related tasks are planning and fore-
casting, designing of appropriate management tools (e.g., models, risk measures and
estimation), etc. The fundamental goal is to improve the decision-making process
at all levels of the organisation, avoiding real and potential losses and increasing its
overall efficiency. This is accomplished through an analysis of the vulnerabilities of
the firm to all kinds of risks and impacts that could compromise its future revenues.
This calls for a careful analysis of such risks based on sound economic fundamentals,
reliable data and good mathematical modelling. 5
A decision that comes out of such analysis should concern, for example, the
amount of reserve capital that the institution should keep in order to absorb the
impact of possible losses due to adverse changes of price of assets in the firm’s port-
folio (and the choice of such assets itself), the so-called market risk. Such a decision
is made not only in the private sector but also by the regulatory authorities, such
as governments and central banks, when they demand safety margins in business
transactions.
Clearly the importance and responsibility of the risk manager are enormous.
A wrong decision regarding asset allocation based on a faulty risk analysis could
cripple the competitiveness of the firm (or even jeopardise its very existence). A
series of recent management/financial scandals [9] such as Barings Bank, Procter &
Gamble, Metallgesellschaft and Long Term Capital Management 6 , just to mention
a few, brought public opinion and governments to demand greater controls and
responsible management in a context of global, fast, interdependent and volatile
(i.e., unstable) markets.
As the level of sophistication and sheer complexity of financial systems (and
society as a whole) increased so did the mathematical tools needed to tackle models
describing them. This partly explains the demand for people with such technical
expertise: statisticians, mathematicians, engineers and physicists. While this trend
opened a new market for these professionals, it also meant an increase in the cost of
specialised personnel to the firms, which inevitably raises the price of their services.
Moreover, it is not enough to have good technical expertise; one also needs the
ability to critically evaluate the strengths and weaknesses of the models, particularly
with respect to their presuppositions and unavoidable simplifications. This could help
minimise yet another form of risk, namely modelling risk, a type of operational
risk arising from errors and/or inadequacies of the model used. So, for example, a
simplifying assumption common to many finance models is that the firm's
assets are always convertible to cash, i.e., that there is no (or very low) liquidity
risk. Illiquidity usually manifests itself as a difficulty in changing one's position
without appreciably depressing market prices. This increases the time needed to
5. The lack of reliable mathematical models for premium calculation seems to be the reason why, until around the middle of the 19th century, insurance was a very risky business indeed: around 60% of the companies went bankrupt within 10 years! [4]
6. Whose team, by the way, included Nobel prize winners Myron Scholes and Robert Merton.

complete such transactions which in turn might increase the firm’s exposure to
market fluctuations which, in turn might increase transaction costs, spreads, etc.
Such situations are bound to happen sometime in the real world and the risk manager
has to be prepared for those, maybe rare, but quite real, occasions.
The actuary is that kind of risk technologist specialised in insurance, a social
mechanism for compensating individuals and institutions for financial losses due to
unforeseen events or circumstances. Though traditionally associated with property
and life insurance (covering for instance accidents, theft, natural disasters, etc)
the field has considerably enlarged its scope in the last 30 years. The renowned
actuary Hans Bühlmann (1989) proposed the following classification of insurance
kinds, roughly following the historical trend of the field:
• Insurance of the first kind: life-insurance, based on mortality tables and inter-
est rate calculations using mainly non-probabilistic techniques;
• Insurance of the second kind: characterised by the introduction of probabilistic
tools and results such as the Law of Large Numbers and the Central Limit
Theorem in non-life insurance;
• Insurance of the third kind: characterised by a deep interaction with
finance and heavily involving the whole area of stochastic analysis, advanced
statistics and other tools.
Though insurance of the third kind seems to be a new trend, one easily recog-
nises that many crucial notions in modern finance have a distinctly actuarial flavor.
For example, the concept of hedging can be viewed as a kind of protection against
unfavourable market behaviour. And although derivative instruments (options, fu-
tures, swaps, etc) have a clear speculative aim, they are also mechanisms of risk
transfer.

3 The Ruin Problem


Risks can of course be minimised simply by avoiding them altogether or by trying to
predict the occurrence of unfavourable events. As these alternatives are not always
feasible, a third method, which is at the heart of the insurance industry, is that
of risk transfer from a group of people (or institutions) to another. A classical
mechanism is pooling 7 : many individuals transfer their risks to a collective body,
like an insurance company or a pension fund, which is better equipped to handle
them.
Human behaviour regarding risk and uncertainty is quite complicated. We feel
discomfort when facing uncertainty and studies show that people (even experts)
perform poorly when handling probabilistic concepts. Our behaviour is not always
“rational” 8 and depends on many variables like level of education, how far in the
future a possibly hazardous event is, emotional and affective dispositions, social
factors, etc. In any case, a traditional working hypothesis is that “rational” indi-
viduals tend to be risk averse: they would rather face a small but certain loss than
a large uncertain one. As unfavourable events are (conceivably) rare, the risk from
7. Which is akin to the concept of diversification, well known in finance.
8. Think of smoking habits, drug abuse and the socially accepted though highly dangerous activity of driving a car.

the many to whom the insured-against event did not happen can be transferred to
the few for whom it did. This is an insight gained from the Law of Large Numbers:
the “collective” or average (or, depending on the context, long-term) effect of many
small independent random causes is nonrandom. It justifies the motto “certum ex
incertis” adopted by the oldest actuarial organisation, the Institute of Actuaries in
London (1848). [5]
From the viewpoint of the individual, buying an insurance contract gets him, through
an affordable price paid regularly, protection against unexpected and potentially
costly events (e.g., a car crash, illness). On the other side, the insurance company
selling the contract charges a relatively low price (the premium) to very many clients,
in such a way that it will always be able to pay for claims, as these occur randomly
but not very frequently.
Therefore, running an insurance company requires first and foremost the mainte-
nance of a healthy balance of cash inflows and outflows, which is the basic principle
of double-entry bookkeeping going back to Luca Pacioli (1494). [6] Of course, the
company has to gauge the cost of its diverse administration services, taxes and other
expenses and would invest part of its capital to earn profits.
A great innovation of Lundberg’s approach in studying the interaction of in-
surer and insured was to switch from an individual model (which focuses on an
individual’s portfolio) to a collective model, focusing on aggregate quantities. Thus,
ignoring for simplicity other kinds of inflow (investment revenues, new capital, in-
terests and dividends received, rental incomes, etc) and outflows (commissions and
other administrative and operational costs, taxes, interests and dividends paid,
etc), [8] the basic ingredients of his model are:
• the premium charged by the company: let Πt be the aggregate (or total)
premium collected up to time t > 0.
• the claims arriving at some random claim times 0 = T0 < T1 < T2 < ..., where
the corresponding amounts to be paid at these times are described by some
non-negative random variables X1 , X2 , ..., called the claim sizes.
Let Nt = sup{n ≥ 1 : Tn ≤ t} (sup(∅) ≡ 0) be the number of claims arriving in
the interval [0, t]. Then the aggregate claim amount process {St }t≥0 is given by the
random sum:

$$
S_t =
\begin{cases}
\displaystyle\sum_{k=1}^{N_t} X_k, & N_t > 0, \\[6pt]
0, & N_t = 0.
\end{cases}
\qquad (1)
$$
The capital surplus of the insurance company is the stochastic process {Ut }t≥0 ,
called the risk process, defined on some probability space (Ω, F , P), with initial
capital U0 = u ≥ 0 and given, for t > 0 by
$$
U_t = u + \Pi_t - S_t.
$$

The classical Cramér-Lundberg model has the following additional hypotheses:


(0) the premium income is linear deterministic: Πt = ct, where c > 0 is the
premium income rate;

(1) the claim size process {Xn }n≥1 is a sequence of positive i.i.d. random variables
with common distribution function F and finite mean µ = E[X1 ];
(2) the inter-arrival times Tn − Tn−1 , n ≥ 1 are i.i.d. exponentially distributed
random variables with parameter λ > 0;
(3) the processes {Tn }n≥1 and {Xn }n≥1 are independent.
It follows from (2) that the claim number process {Nt }t≥0 is a homogeneous
Poisson process with intensity λ, that is (N0 = 0 a.s.):
$$
P(N_t = k) = e^{-\lambda t}\,\frac{(\lambda t)^k}{k!}, \qquad k = 0, 1, 2, \ldots,
$$
so that {St }t≥0 is a compound Poisson process, which is a Lévy process. [18]
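
To make the model concrete, here is a minimal simulation sketch (not from the paper; the parameter values and the choice of exponential claim sizes are illustrative assumptions):

import numpy as np

def simulate_risk_process(u, c, lam, mu, horizon, rng):
    """One sample path of U_t = u + c*t - S_t under hypotheses (0)-(3).

    Claims arrive as a Poisson process with intensity lam; claim sizes are
    i.i.d. exponential with mean mu (one admissible choice of F). Since the
    surplus only jumps downward at claim epochs, it suffices to record the
    level immediately after each claim.
    """
    t, surplus = 0.0, u
    claim_times, levels = [0.0], [u]
    while True:
        w = rng.exponential(1.0 / lam)   # inter-arrival time, hypothesis (2)
        if t + w > horizon:
            break
        t += w
        surplus += c * w                 # premium income since the last claim
        surplus -= rng.exponential(mu)   # claim size X_k, hypothesis (1)
        claim_times.append(t)
        levels.append(surplus)
    return np.array(claim_times), np.array(levels)

rng = np.random.default_rng(seed=1)
times, levels = simulate_risk_process(u=10.0, c=1.2, lam=1.0, mu=1.0,
                                      horizon=100.0, rng=rng)
print("ruin observed on [0, 100]:", bool((levels < 0).any()))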

[Figure 1: A realisation of the risk process Ut, with the ruin time τ marked on the time axis.]

It is clearly of great practical importance to evaluate or estimate the aggregate
claim size distribution

$$
G_t(x) = P(S_t \le x) = \sum_{k=0}^{\infty} e^{-\lambda t}\,\frac{(\lambda t)^k}{k!}\,F^{k*}(x), \qquad x, t \ge 0,
$$

where $F^{n*}(x) = P(X_1 + \ldots + X_n \le x)$ is the n-fold convolution of the claim
size distribution F ($F^{0*}$ is the Heaviside function). In what follows, however, we'll
focus on another (and in a sense preliminary) issue: the long-term stability of the
risk process.

As the service offered by an insurance company is of the kind in which one pays
first and takes later (hopefully never!), a basic requirement (and there are explicit
legal regulations in this respect) is that the company will still exist and be solvent
if and when one needs it. More precisely, one needs to evaluate the ruin probability
of the risk process:
$$
\psi(u) = P(U_t < 0 \ \text{for some}\ t \ge 0),
$$

which can alternatively be expressed through the associated ruin time

$$
\tau = \tau_u = \inf\{t \ge 0 : U_t < 0\}
$$

(inf ∅ ≡ +∞), i.e., the first time that the company is "in the red". This is a
stopping time with respect to the filtration F = {F_t}_{t≥0}, where F_t = σ(U_s : s ≤ t)
is the σ-algebra describing the history of the risk process up to time t, and we have

$$
\psi(u) = P(\tau < \infty).
$$
A fundamental problem is then to determine the adequate initial capital u > 0
and insurance premium rate c > 0. Using ψ(u) as a measure of solvency, a first
natural requirement is that 0 ≤ ψ(u) < 1, for otherwise ψ(u) = 1 and ruin is
(almost) sure. 9 It is therefore reasonable to impose that E[U_t] > 0 for all t > 0. A
simple calculation shows that

$$
E[U_t] \ge E[U_t - U_0] = ct - \mu E[N_t] = t(c - \lambda\mu),
$$

so that one assumes the so-called net profit condition ("the premium has to be
greater than the average total claim"):

$$
c > \lambda\mu,
$$
which already gives a basic estimate of the premium rate. Define the parameter
$$
\rho = \frac{c}{\lambda\mu} - 1 > 0,
$$
called the safety loading. As the premium income up to time t is ct = (1 + ρ)λµt, ρ
can be interpreted as a risk premium rate, the extra that the insurance company charges
to avoid certain ruin. Notice that, by the Law of Large Numbers, under the net profit
condition U_t → +∞ almost surely as t ↑ ∞, though the process can attain negative
values at some instants.
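
The effect of the net profit condition can be probed empirically. The rough Monte Carlo sketch below (again with illustrative parameters and exponential claims) estimates the finite-horizon ruin probability P(τ ≤ t₀) for a few initial capitals:

import numpy as np

def ruined_before(u, c, lam, mu, t0, rng):
    """True iff the surplus becomes negative before time t0.

    Ruin can only first occur at a claim epoch, so only those are checked.
    """
    t, surplus = 0.0, u
    while True:
        w = rng.exponential(1.0 / lam)
        if t + w > t0:
            return False
        t += w
        surplus += c * w - rng.exponential(mu)
        if surplus < 0:
            return True

rng = np.random.default_rng(seed=2)
n_paths = 5_000
for u in (0.0, 5.0, 10.0):
    hits = sum(ruined_before(u, 1.2, 1.0, 1.0, 200.0, rng) for _ in range(n_paths))
    print(f"u = {u:4.1f}:  estimated P(tau <= 200) = {hits / n_paths:.4f}")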
It can be shown [11] that under the net profit condition

$$
1 - \psi(u) = \frac{\rho}{1+\rho}\sum_{n=0}^{\infty} (1+\rho)^{-n} F_I^{n*}(u),
$$

where

$$
F_I(x) = \frac{1}{\mu}\int_0^x \bar{F}(y)\,dy
$$

is called the "integrated tail distribution". Though this is a very important formula, one
cannot hope to obtain closed-form expressions from it apart from some special cases
(for example, for the exponential distribution). So it is important to have at least an
estimate of ψ(u) guaranteeing that it is “sufficiently” small for all u ≥ 0, say, below
a prescribed threshold of 5%. It turns out that if one assumes a certain condition
bearing on the tail of the claim size distribution, then one can get such an estimate
in terms of the initial capital u. This is discussed next.
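
As an aside, in the exponential special case just mentioned the series can in fact be summed: for claims with distribution F(x) = 1 − e^{−x/µ} one has F_I = F, and a classical computation (quoted here for illustration; it is standard in risk-theory texts and is not carried out in this paper) yields the closed form

$$
\psi(u) = \frac{1}{1+\rho}\,\exp\!\left(-\frac{\rho}{(1+\rho)\mu}\,u\right), \qquad u \ge 0,
$$

so that, for instance, with safety loading ρ = 0.2 and mean claim µ = 1, an initial capital u ≈ 17 already brings ψ(u) below the 5% threshold.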
9. Also, one can check that lim_{u→∞} ψ(u) = 0.

4 The Cramér-Lundberg estimate
We can at last state one of the classical results of actuarial mathematics:
Theorem (Cramér-Lundberg). Consider the above model under the net-profit
condition and suppose there exists R > 0 (called the Lundberg exponent or adjust-
ment coefficient) satisfying the following Lundberg or small-claim condition:

$$
\int_0^{\infty} e^{Rx}\,(1 - F(x))\,dx = \frac{c}{\lambda}. \qquad (*)
$$

Then for all u ≥ 0,

$$
\psi(u) \le e^{-Ru}.
$$
The original proof relied on complicated computations involving the inversion of
Laplace-Stieltjes transforms. We will follow the very elegant "exponential martingale
technique” that goes back to Gerber [15]. But before embarking on the proof, let
us discuss the meaning of condition (*).
It presupposes that the left-hand side exists in a neighbourhood of the origin.
Consider the function

$$
h(s) = \int_0^{\infty} (e^{sx} - 1)\,dF(x) = E\big[e^{sX_1}\big] - 1,
$$

which is essentially the Laplace-Stieltjes transform of F, well defined for all s ≤ 0. If
there is an s_0 > 0 such that h(s_0) < ∞, then h(s) is finite and infinitely differentiable
for s ∈ (−∞, s_0). Moreover, for s > 0, an application of Fubini's theorem gives

$$
h(s) = s\int_0^{\infty} e^{sx}(1 - F(x))\,dx = s\int_0^{\infty} e^{sx}\,\bar{F}(x)\,dx,
$$

where F̄ (x) = 1 − F (x) is the tail of F .


We then see that (*) can be rephrased as: there exists a positive root R of the
function g(s) = λh(s) − cs. By Markov's inequality, for all x > 0,

$$
\bar{F}(x) = 1 - F(x) = P(X_1 > x) \le e^{-Rx}\,E\big[e^{RX_1}\big],
$$
so that (*) implies that large claims have exponentially small probabilities, justifying
the name “small-claims condition”.
Now, g(0) = 0 and g′(s) = λh′(s) − c = λE[X_1 e^{sX_1}] − c, so that g′(0) = λE[X_1] − c =
λµ − c < 0 by the net-profit condition. Furthermore, g″(s) = λh″(s) = λE[X_1² e^{sX_1}] ≥
0, so that g is convex. Hence, if g has a non-zero root, it is positive and unique.
The existence of a positive root is achieved if h(s) grows fast enough in relation
to the linear function cs. A sufficient condition is the following: there exists s_∞ > 0
(possibly = ∞) such that h(s) < ∞ for s < s_∞ and h(s) ↑ +∞ as s ↑ s_∞. The case
of finite s_∞ is immediate. For the case s_∞ = ∞, picking x̂ with F(x̂) < 1 one gets,
for s big enough,

$$
h(s) \ge \int_{\hat{x}}^{+\infty} e^{sx}\,dF(x) - 1 \ge e^{s\hat{x}}\,\bar{F}(\hat{x}) - 1,
$$

which grows faster than any linear function (see fig. 2).

[Figure 2: Graph of the convex function g(s), vanishing at 0 and at the Lundberg exponent R, and growing without bound as s ↑ s_∞.]
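
In practice the adjustment coefficient is computed numerically as the positive root of g. A minimal sketch (assuming exponential claims, for which h is explicit and the root is known in closed form as a cross-check):

from scipy.optimize import brentq

lam, mu, c = 1.0, 1.0, 1.2   # intensity, mean claim size, premium rate

def g(s):
    # For exponential claims, h(s) = mu*s / (1 - mu*s) for s < s_infty = 1/mu.
    return lam * mu * s / (1.0 - mu * s) - c * s

# g(0) = 0 with g'(0) < 0 under the net profit condition, and g(s) -> +infty
# as s -> 1/mu, so the Lundberg exponent is the unique root in (0, 1/mu).
R = brentq(g, 1e-9, 1.0 / mu - 1e-9)

rho = c / (lam * mu) - 1.0
print(f"numerical R = {R:.6f}")
print(f"closed form rho/((1+rho)*mu) = {rho / ((1 + rho) * mu):.6f}")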
Proof of the theorem:
The proof rests on the fact that the process {M_t}_{t≥0}, where

$$
M_t = e^{-rU_t - t g(r)},
$$

is a continuous-time F-martingale for every value of r for which E[e^{rX_1}] < ∞.
First notice that this process is non-negative and obviously F-adapted. Also,

$$
E\big[e^{-r(U_t - U_0)}\big]
= e^{-rct}\,E\Big[e^{r\sum_{i=1}^{N_t} X_i}\Big]
= e^{-rct}\sum_{n=0}^{\infty} E\Big[e^{r\sum_{i=1}^{n} X_i}\Big]\,P(N_t = n)
= e^{-rct}\sum_{n=0}^{\infty} \frac{e^{-\lambda t}(\lambda t)^n}{n!}\,\big(1 + h(r)\big)^n
= e^{-rct}\,e^{\lambda t h(r)} = e^{t g(r)},
$$

and a similar calculation shows that for 0 ≤ s < t one has E[e^{−r(U_t − U_s)}] = e^{(t−s)g(r)}. It
follows that for all t ≥ 0,

$$
E[M_t] = e^{-ru} < \infty.
$$
Furthermore, as the risk process has independent increments, for 0 ≤ s ≤ t,

$$
E[M_t \mid \mathcal{F}_s]
= E\big[e^{-rU_t - t g(r)} \mid \mathcal{F}_s\big]
= E\big[e^{-r(U_t - U_s) - (t-s)g(r)}\,e^{-rU_s - s g(r)} \mid \mathcal{F}_s\big]
= e^{-rU_s - s g(r)}\,E\big[e^{-r(U_t - U_s) - (t-s)g(r)}\big]
= e^{-rU_s - s g(r)} = M_s \quad P\text{-a.s.},
$$

which is the basic martingale property.
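
The identity E[M_t] = e^{−ru} can be sanity-checked by simulation; the sketch below (illustrative only, exponential claims, with r chosen inside the domain of finiteness of h) samples M_t at a fixed time:

import numpy as np

rng = np.random.default_rng(seed=3)
u, c, lam, mu, t, r = 5.0, 1.2, 1.0, 1.0, 10.0, 0.1   # here r < 1/mu

def sample_M_t():
    """One draw of M_t = exp(-r*U_t - t*g(r)) for exponential claims."""
    n = rng.poisson(lam * t)                      # number of claims on [0, t]
    s_t = rng.exponential(mu, size=n).sum()       # aggregate claim amount S_t
    g_r = lam * mu * r / (1.0 - mu * r) - c * r   # g(r) = lam*h(r) - c*r
    return np.exp(-r * (u + c * t - s_t) - t * g_r)

samples = np.array([sample_M_t() for _ in range(100_000)])
print(f"sample mean of M_t = {samples.mean():.4f}, e^(-ru) = {np.exp(-r * u):.4f}")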

Next, let t_0 < ∞ and consider the bounded stopping time t_0 ∧ τ = min(t_0, τ).
Applying the martingale stopping theorem (see Kallenberg [17], Thm. 6.12, p. 104)
we obtain:

$$
E[M_{t_0 \wedge \tau}] = E[M_0] = e^{-ru}.
$$

But remembering that U_τ < 0, we have that

$$
\begin{aligned}
E\big[e^{-rU_{t_0\wedge\tau} - (t_0\wedge\tau) g(r)}\big]
&= E\big[e^{-rU_{t_0\wedge\tau} - (t_0\wedge\tau) g(r)} \,\big|\, \tau \le t_0\big]\,P(\tau \le t_0)
 + E\big[e^{-rU_{t_0\wedge\tau} - (t_0\wedge\tau) g(r)} \,\big|\, \tau > t_0\big]\,P(\tau > t_0) \\
&\ge E\big[e^{-rU_{\tau} - \tau g(r)} \,\big|\, \tau \le t_0\big]\,P(\tau \le t_0)
 \ge E\big[e^{-\tau g(r)} \,\big|\, \tau \le t_0\big]\,P(\tau \le t_0)
 \ge \inf_{0 \le t \le t_0}\big(e^{-t g(r)}\big)\,P(\tau \le t_0).
\end{aligned}
$$

Therefore,

$$
P(\tau \le t_0) \le \frac{e^{-ru}}{\inf_{0 \le t \le t_0}\big(e^{-t g(r)}\big)} = e^{-ru}\sup_{0 \le t \le t_0}\big(e^{t g(r)}\big),
$$

an estimate that is interesting in itself. Taking now t_0 ↑ ∞, we get

$$
P(\tau < \infty) \le e^{-ru}\sup_{t \ge 0}\big(e^{t g(r)}\big),
$$

and one would like to take the greatest possible r subject to the condition
sup_{t≥0}(e^{t g(r)}) < ∞. In other words, we look for sup{r : g(r) ≤ 0}, which is exactly the
Lundberg exponent R. Hence,

$$
P(\tau < \infty) \le e^{-Ru},
$$

finishing the proof.
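
As a numerical epilogue (a back-of-the-envelope illustration, not in the original argument): combining the bound with the 5% solvency threshold mentioned in section 3 gives a sufficient initial capital,

$$
e^{-Ru} \le 0.05 \iff u \ge \frac{\ln 20}{R};
$$

for instance, with λ = µ = 1 and c = 1.2 one finds R = 1/6, so u ≥ 6 ln 20 ≈ 18 already guarantees ψ(u) ≤ 5%.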

5 Conclusions
The Cramér-Lundberg theorem is a nice example of the use of martingale methods in
applied probability. But the discussion presented here is just the tip of the iceberg:
there is a huge literature and ongoing research activity on various generalisations
and improvements of the ruin problem, as well as related topics.
To begin with, the classical model is not very realistic. A basic criticism is
that it ignores possible effects of dependence, seasonality, clustering and other in-
homogeneities of claim distributions. An even more serious problem is that many
distributions used in practice to fit empirical claim-size data, like Pareto’s, violate
Lundberg's condition. That is, they display heavy tails, and the theory has to be
reconfigured to deal with such a "dangerous" non-Cramér regime.
Within the classical model, there is a host of interesting questions we did not
touch here. For example, we have seen that even if ruin happens, under the net
profit condition the firm will eventually come out of insolvency. One would then
like to know, among other things, for how long the firm stays insolvent, how severe
is its deficit during that period, etc. Also, as in the long run the firm’s capital
surplus increases without limits, one could impose some upper barrier to model the

allocation of capital to pay for costs, dividends, etc. More generally, there is the
ever-growing interplay of insurance with finance, with a growing exchange of
methods and ideas between the two disciplines.
Another “hot” topic is reinsurance, a risk-reducing mechanism in which an in-
surance company buys insurance (from another company) to cover its own portfo-
lio. A motivation for this move is the occurrence of catastrophic events, like natu-
ral disasters (hurricane, earthquakes, tsunamis, etc) or man-made ones (accidents,
bankruptcies, etc) whose costs far exceed the covering capacity of an isolated firm.
And in some cases not even reinsurance is enough to cope with such disasters, which
explains the importance of the recent research effort on rare but extremal events.
Finally, there are some bigger questions: are modern societies, with their in-
creasing complexity and interdependence, more vulnerable to systemic risks? Are
insurance technologies making our societies safer or not? What is the role and re-
sponsibilities of risk managers in the new global scenario? These are formidable
challenges to modern society whose answers will keep firms, governments and risk
managers extremely busy for the coming decades.
ACKNOWLEDGMENTS. Work partially supported by FINEP (Pronex project).
The author is thankful to the anonymous referee for many valuable criticisms and
suggestions.

References
1. Applebaum, D. (2004). Lévy processes: from probability theory to finance and quantum groups. Notices Amer. Math. Soc., 51, 1336–1347.

2. Bingham, N. H. (2001). Probability and statistics: some thoughts on the turn of the millen-
nium. In Probability Theory: Recent History and Relations to Science (ed. V. F. Hendricks &
S. A. Pedersen), Synthese Library 297, Kluwer, Dordrecht, 15–49.

3. Bühlmann, H. (1989). Tendencies of development in risk theory. In Proceedings of the 1989 Centennial Celebration of the Society of Actuaries, Schaumburg, Illinois.

4. Bühlmann, H. (1997). The actuary: the role and limitations of the profession since the mid-19th century. ASTIN Bulletin, 27(2), 165–171.

5. Bühlmann, H. (2002). On the Prudence of the Actuary and the Courage of the Gambler
(Entrepreneur). Giornale dell’ Istituto Italiano degli Attuari, LXV, Roma, 1–12.

6. Calzi, M. L. and Basile, A. (2004). Economists and mathematics from 1494 to 1969:
Beyond the art of accounting, in: M. Emmer (ed.), Mathematics and Culture I, 95–107.
Springer, New York.

7. Corry, L. (1997). Hilbert and the axiomatization of physics. Arch. Hist. Exact Sci., 51, 83–198.

8. Daykin, C. D., Pentikäinen, T. and Pesonen, M. (1994). Practical Risk Theory for Actuaries. Chapman & Hall, London.

9. Dybvig, P. H. and Marshall, W. J. (1997). The New Risk Management: the Good, the
Bad, and the Ugly. Review of the Federal Reserve Bank of Saint Louis, November/December,
9–21.

10. Embrechts, P. (1995). Risk theory of the second and third kind. Scand. Actuarial J., 1, 35–43.

11. Embrechts, P., Klüppelberg, C. and Mikosch, T. (1997). Modelling Extremal Events. Springer.

12. Geman, H. (2000). From Bachelier and Lundberg to insurance and weather derivatives, in: Mathematical Physics Studies, Kluwer Academic Publishers, 81–95.

13. Geman, H. (1999). Learning about risk: some lessons from insurance. Europ. Finance Rev.,
2, 113–124.

14. Geman, H., Madan, D., Pliska, S.R. and Vorst, T. (eds.) (2002). Mathematical Fi-
nance, Bachelier Congress 2000. Springer, New York.

15. Gerber, H.U. (1973). Martingales in risk theory. Mitt. Ver. Schweiz. Vers. Math., 73, 205–
216.

16. Grandell, J. (1991). Aspects of Risk Theory. Springer-Verlag, New York.

17. Kallenberg, O. (1997). Foundations of Modern Probability. Springer, New York.

18. Rolski, T., Schmidli, H., Schmidt, V. and Teugels, J. (2001). Stochastic Processes for Insurance and Finance. Wiley, New York.

19. Shiryaev, A. N. (1999). Essentials of Stochastic Finance: Facts, Models, Theory. World Scientific, Singapore.

20. Trowbridge, C. L. (1989). Fundamental Concepts of Actuarial Science. AERF.

