Controlling Risk Exposure and Dividends Pay-out Schemes: Insurance Company Example
Bjarne Højgaard*   Michael Taksar†
Abstract
The paper presents a model for the financial valuation of a firm which has control of the dividend payment stream and of its risk, as well as its potential profit, by choosing different business activities among those available to it. This is an extension of the classical Miller–Modigliani theory of firm valuation to the situation of controllable business activities in a stochastic environment. We associate the value of the company with the expected present value of the net dividend distributions (under the optimal policy).
The example we consider is a large corporation, such as an insurance company, whose liquid assets in the absence of control fluctuate as a Brownian motion with a constant positive drift and a constant diffusion coefficient. We interpret the diffusion coefficient as risk exposure, while the drift is understood as potential profit. At each moment of time there is an option to reduce the risk exposure, simultaneously reducing the potential profit, for example by using proportional reinsurance with another carrier in the case of an insurance company. The other quantity controlled by the management of the company is the dividends paid out to the shareholders. The objective is to find a policy which maximizes the expected total discounted dividends paid out until the time of bankruptcy. Two cases are considered:
a) The rate of dividend pay-out is bounded by some positive constant M.
b) There is no restriction on the rate of dividend pay-out.
We use recently developed techniques of mathematical finance to obtain an easily understandable closed-form solution. We show that there are two levels u_0 < u_1. As a function of the currently available reserve, the risk exposure monotonically increases on (0, u_0) from 0 to the maximum possible. When the reserve exceeds u_1, the dividends are paid at the maximal rate in the first case, while in the second case every excess above u_1 is distributed as dividends. We also show that for M small enough u_0 = u_1 and the optimal risk exposure is always less than the maximal one.
* Department of Mathematics, Aalborg University, Denmark.
† Department of AMS, SUNY, Stony Brook, USA. This work is supported by NATO Collaborative Research Grant CRG 951275 and NSF Grant DMS 9301200.
Keywords: dividends pay-out, proportional reinsurance, diffusion models, stochastic control theory, HJB equation.
1 Introduction
The classical paper by Miller & Modigliani [32] provides the valuation formula for an infinite-horizon firm under the assumption of perfect certainty. It shows that the value of the company can be equated with the present value of the net distributions to shareholders over an infinite horizon. There were several extensions of this work, in which it was shown that in most cases the Miller–Modigliani approach can be used in a stochastic environment as well (e.g., Sethi et al. [38], [39], [37]).
In this paper we consider a firm valuation problem for a company in which the dividend stream as well as the risk exposure are controlled by management. We equate the value of such a company with the expected present value of the net dividend distributions (under the optimal policy). While our methods are rather general, the framework of a large insurance company seems to be the best illustration of the model we consider (for a discussion of the relation to consumption/investment models, see Remark 1.1 below).
Optimizing dividend pay-out is a classical problem in actuarial mathematics, on which earlier work is given in e.g. de Finetti [13], Borch [2], [3], Bühlmann [6], Gerber [16], [17] and Buzzi [7]. Recently, there has been a renewed interest in diffusion models for corporations with controllable risk exposure and dividend distribution, e.g. Radner & Shepp [35] and Browne [4], [5]. In these papers the problem is formulated and solved in the framework of controlled diffusions, see e.g. Fleming & Rishel [14]. Our approach is also based on controlled diffusion techniques, in particular the so-called mixed classical-singular control. In this framework the problem is also partially treated by Whittle [43], and the case of no reinsurance options is solved in this framework by Asmussen & Taksar [1] and to some extent by Paulsen & Gjessing [33]. For other applications of control theory in insurance mathematics see Højgaard & Taksar [22], Martin-Löf [29], [30], [31], Davis [9] and Dayananda [11].
Consider a corporation whose liquid assets at time t are described by a stochastic process {R_t}. For an insurance company {R_t} is the reserve of the company at time t. We will refer to this process as the risk process.
To give a mathematical formulation of the optimization problem, we start with a probability space (Ω, F, P), a filtration {F_t}_{t≥0} and a process {W_t}, which is a standard Brownian motion with respect to {F_t}. The filtration F_t represents the information available at time t, and any decision is made based upon this information. We assume that in the case of no control the risk process evolves
according to

    dR_t = μ dt + σ dW_t

with μ, σ > 0. For a motivation of this model see e.g. Asmussen & Taksar [1], Grandell [20] and Dayananda [11]. The initial reserve R_0 is supposed to be F_0-measurable. Without loss of generality we can assume that it is equal to a deterministic value x.
A control policy π is described by a two-dimensional stochastic process {a_π(t), L_t^π}, where 0 ≤ a_π(t) ≤ 1 corresponds to the risk exposure at time t and L_t^π ≥ 0 is a non-decreasing process whose value corresponds to the cumulative amount of liquid assets distributed up to time t. For the case of the insurance company, 1 − a_π(t) denotes the fraction of the incoming claims that is reinsured at time t, while L_t^π denotes the total amount of dividends paid out up to time t. When applying policy π we let {R_t^π} denote the controlled risk process. The increment R_{t+dt}^π − R_t^π will then be distributed as a_π(t)(R_{t+dt} − R_t) − (L_{t+dt}^π − L_t^π). The dynamics of R_t^π are then given by

    dR_t^π = a_π(t) μ dt + a_π(t) σ dW_t − dL_t^π,   (1.1)
    R_0^π = x − L_0^π.   (1.2)
The above equation is derived under the assumption sometimes called cheap
reinsurance. The latter means that the reinsuring companies have the same
safety loading as the insuring company.
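The controlled dynamics (1.1)–(1.2) are straightforward to simulate. The following sketch (ours, not part of the paper's analysis; the feedback-policy interface and all parameter values are assumptions for illustration) discretizes the equation with the Euler–Maruyama method:

```python
import math
import random

def simulate_reserve(x, mu, sigma, a, l, T=10.0, dt=1e-3, seed=0):
    """Euler-Maruyama discretization of (1.1)-(1.2):
    dR = a(R)*mu*dt + a(R)*sigma*dW - dL, here with dL = l(R)*dt.
    `a` (retention level in [0,1]) and `l` (dividend rate) are feedback
    functions of the current reserve.  Returns the terminal reserve and the
    elapsed time; the run stops early at bankruptcy (R = 0)."""
    rng = random.Random(seed)
    R, t = x, 0.0
    while t < T:
        if R <= 0.0:
            return 0.0, t            # bankruptcy time reached
        dW = rng.gauss(0.0, math.sqrt(dt))
        R += a(R) * mu * dt + a(R) * sigma * dW - l(R) * dt
        t += dt
    return R, t
```

With σ = 0, full retention and no dividends the scheme reduces to R_T = x + μT, which gives a quick sanity check of the discretization.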
The policy π is said to be admissible if the process {a_π(t), L_t^π} is adapted to the filtration {F_t}, 0 ≤ a_π(t) ≤ 1, and L_t^π is a nonnegative, non-decreasing, right-continuous functional. In this case we call the process {a_π(t), L_t^π} the control process or simply the control. We denote by Π the set of all admissible policies. For a given admissible policy π we define the return function V_π by

    V_π(x) = E ∫_0^τ e^{−ct} dL_t^π,   (1.3)
where τ = inf{t : R_t^π = 0}, with {R_t^π} the solution to (1.1), and c > 0 is a discount rate. For any admissible L^π we put L_{0−}^π = 0; thus the integral on the right-hand side is considered from 0− to τ and always includes the quantity L_0^π. Note also that τ is not an exogenously determined random variable; it rather depends on our policy. We will call τ the bankruptcy time.
The objective is to find the optimal return function, which is defined as

    V(x) = sup_{π∈Π} V_π(x),   (1.4)

and to find an optimal policy π* that satisfies V_{π*}(x) = V(x) for all x.
The problem we consider is a so-called mixed singular/regular stochastic control problem. Within a different framework these types of problems were considered in Lehoczky & Shreve [27], Davis & Norman [10] and Cadenillas & Haussmann [8].
Our first step would be to find the qualitative behaviour of the function V. Let π be any admissible control. Define another admissible control π̂ by

    a_π̂(t) = a_π(t) for t < τ,   a_π̂(t) = 0 for t ≥ τ,

and

    L_t^π̂ = L_t^π for t < τ,   L_t^π̂ = L_τ^π for t ≥ τ.

Then τ̂ = τ and

    V_π̂(x) = V_π(x) for all x.

In other words, setting a_π(t) and dL_t^π to any quantity (in particular to 0) for t ≥ τ does not change the value of V_π(x). Thus without loss of generality we can reduce the set of admissible policies to A, where A is defined as

    π ∈ A if and only if a_π(t) = 0 and dL_t^π = 0 for t > τ.

Then for any π ∈ A we have R_t^π = 0 for t > τ and

    V_π(x) = E ∫_0^∞ e^{−ct} dL_t^π.   (1.5)
Proof. Let π^{x_1} be an admissible policy for the initial reserve x_1 and π^{x_2} an admissible policy for the initial reserve x_2. Let 0 < λ < 1 and define π by

    a_π(t) = λ a_{π^{x_1}}(t) + (1−λ) a_{π^{x_2}}(t)

and

    L_t^π = λ L_t^{π^{x_1}} + (1−λ) L_t^{π^{x_2}}.   (1.6)

Then by linearity of (1.1), π is an admissible policy for the initial reserve x = λx_1 + (1−λ)x_2 and R_t^π = λ R_t^{x_1} + (1−λ) R_t^{x_2} with τ^π = τ^{x_1} ∨ τ^{x_2}. Linearity
Remark 1.1 The problem of this paper can also be formulated as a consumption/investment problem in the following way. Suppose that there is an individual investor who has two assets available to him: one is risky and one is risk-free. The price of the risky asset is governed by an arithmetic Brownian motion with drift μ and diffusion σ, while the price of the risk-free asset is constant. He continuously trades the assets, incurring no transaction costs, using the proceeds to finance his consumption. If we denote by a_π(t) the proportion of his wealth that he keeps in the risky asset at time t and by L_t^π his cumulative consumption up to time t, then the dynamics of his wealth are given by (1.1), (1.2). If the objective is to optimize the expected total discounted consumption until the time of his bankruptcy, then this consumption/investment problem is equivalent to the one we are considering in this paper. We should mention, however, a significant difference between this model and the classical consumption/investment models. First, in the classical consumption/investment models (see Karatzas et al. [23], Presman & Sethi [34] and Sethi et al. [40]) the prices of risky assets are governed by geometric Brownian motions. Second, the objective there is to maximize the expected total discounted utility of consumption. The utility function U(·) is a function of the consumption rate, and it is assumed to be concave with derivative at infinity equal to zero. The latter ensures that in the optimization problem one can consider only those functionals L which are differentiable, thus dealing only within the domain of classical stochastic control. In our case we have "utility" function U(x) equal to x, thus eliminating the possibility of dealing only with differentiable functionals L unless this requirement is set a priori. It is also important to observe that in the case of a risky asset governed by a geometric Brownian motion, the problem becomes by and large trivial: either it is optimal to consume everything instantaneously, or the optimal value function is equal to infinity (see Radner & Shepp [35]). Thus, while formally one can formulate the problem of the present paper in the framework of a consumption/investment model, with "non-natural" risky assets and utility function, it is impossible to apply the results of those models directly to our case. Nevertheless, some tools and techniques can be "borrowed" from those models and successfully employed in the dividend optimization problem. □
In this section we consider the case in which the rate of dividend pay-out is bounded by some 0 < M < ∞. In this case L_t^π can be represented in the following way:

    L_t^π = ∫_0^t l_π(s) ds,   0 ≤ l_π(s) ≤ M.   (2.1)
Notice that here (1.1), (1.2) can be written as

    dR_t^π = (a_π(t) μ − l_π(t)) dt + a_π(t) σ dW_t,   (2.2)
    R_0^π = x,   (2.3)

and

    V_π(x) = E ∫_0^τ e^{−ct} l_π(t) dt.   (2.4)
We define for all 0 ≤ a ≤ 1 and 0 ≤ l ≤ M the differential operator L^{a,l} by

    L^{a,l} g(x) = (σ²a²/2) g''(x) + (aμ − l) g'(x) − c g(x).   (2.5)
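As a numerical illustration (ours, with sample parameter values that are not from the paper), the operator (2.5) can be coded directly. Applied with a = 1, l = 0 to an exponential e^{dx}, it vanishes exactly when d is a root of the characteristic equation (σ²/2)d² + μd − c = 0; these roots are how the constants d_± used below arise:

```python
import math

def L_op(g, gp, gpp, a, l, mu, sigma, c):
    """The differential operator (2.5):
    L^{a,l} g(x) = (sigma^2 a^2 / 2) g''(x) + (a*mu - l) g'(x) - c g(x)."""
    return lambda x: (0.5 * sigma**2 * a**2 * gpp(x)
                      + (a * mu - l) * gp(x) - c * g(x))

mu, sigma, c = 1.0, 2.0, 0.1      # sample parameters (illustration only)
v1 = math.sqrt(mu**2 + 2 * c * sigma**2)
d_plus = (-mu + v1) / sigma**2    # roots of (sigma^2/2) d^2 + mu d - c = 0
d_minus = (-mu - v1) / sigma**2

def residual_of_exp(d):
    # L^{1,0} applied to g(x) = exp(d*x); zero iff d is a characteristic root
    g = lambda x: math.exp(d * x)
    return L_op(g, lambda x: d * g(x), lambda x: d * d * g(x),
                a=1.0, l=0.0, mu=mu, sigma=sigma, c=c)
```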
Proof. By arguments similar to those in Højgaard & Taksar [22] we can prove that V satisfies the dynamic programming principle

    V(x) = sup_π E[ ∫_0^{τ∧Λ} e^{−cs} l_π(s) ds + e^{−c(τ∧Λ)} V(R_{τ∧Λ}^π) ].   (2.7)

For any admissible π and h > 0 let Λ_h = h ∧ inf{t : R_t^π ∉ (x−h, x+h)}. Then Λ_h < ∞ a.s. and Λ_h → 0 a.s. Fix a, l arbitrary and choose π such that a_π(t) = a and l_π(t) = l. Choose h < x; then Λ_h < τ, and by suboptimality of π we get from (2.7) with Λ = Λ_h

    V(x) ≥ E[ ∫_0^{Λ_h} e^{−cs} l ds + e^{−cΛ_h} V(R_{Λ_h}^π) ]
         = E ∫_0^{Λ_h} e^{−cs} l ds + V(x) + E ∫_0^{Λ_h} e^{−cs} L^{a,l} V(R_s^π) ds,

where we have used Itô's formula (see e.g. Øksendal [44]) applied to g(t,x) = e^{−ct} V(x). Subtract V(x) from both sides and divide by E[Λ_h] to get

    0 ≥ (1/E[Λ_h]) E ∫_0^{Λ_h} e^{−cs} ( l + L^{a,l} V(R_s^π) ) ds → l + L^{a,l} V(x)

when h → 0. Arbitrariness of a and l yields

    0 ≥ max_{a∈[0,1], l∈[0,M]} ( l + L^{a,l} V(x) ).   (2.8)
For any h < x and ε > 0 there exist policies π̃ = π̃_{x,ε} satisfying

    sup_π E[ ∫_0^{Λ_h} e^{−cs} l_π(s) ds + e^{−cΛ_h} V(R_{Λ_h}^π) ]
      ≤ E[ ∫_0^{Λ̃_h} e^{−cs} l_π̃(s) ds + e^{−cΛ̃_h} V(R_{Λ̃_h}^π̃) ] + ε.

Let ε = (E[Λ̃_h])² and, using arguments similar to the above, we get

    0 ≤ (1/E[Λ̃_h]) E[ ∫_0^{Λ̃_h} e^{−cs} ( l_π̃(s) + L^{a_π̃(s), l_π̃(s)} V(R_s^π̃) ) ds + (E[Λ̃_h])² ]
      ≤ (1/E[Λ̃_h]) E[ ∫_0^{Λ̃_h} e^{−cs} max_{a∈[0,1], l∈[0,M]} ( l + L^{a,l} V(R_s^π̃) ) ds + (E[Λ̃_h])² ]
      → max_{a∈[0,1], l∈[0,M]} ( l + L^{a,l} V(x) )   (2.9)

when h → 0. The validity of (2.6) now follows from (2.8) and (2.9). That V(0) = 0 is obvious from V_π(0) = 0 for all π. □
Therefore for all x < u_1

    max_{a∈[0,1]} [ (σ²a²/2) f''(x) + aμ f'(x) − c f(x) ] = 0.   (2.11)

Let a(x) be the maximizer of the left-hand side of (2.11). Let O ⊂ [0, u_1) be such that 0 < a(x) < 1 for all x ∈ O. Then

    a(x) = −μ f'(x) / (σ² f''(x)),   x ∈ O.   (2.12)

Inserting (2.12) in (2.11), we must have for all x ∈ O

    −μ² [f'(x)]² / (2σ² f''(x)) − c f(x) = 0.   (2.13)

The solution f_1 to (2.13) with f_1(0) = 0 can easily be found "from scratch". It is f_1(x) = c_1 x^γ, where

    γ = c / ( μ²/(2σ²) + c ).   (2.14)
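A quick numerical check (ours, with sample parameters) that f_1(x) = c_1 x^γ with γ from (2.14) annihilates the left-hand side of (2.13), and that the corresponding maximizer (2.12) is linear in x:

```python
import math

mu, sigma, c = 1.0, 2.0, 0.1               # sample parameters (illustration only)
gamma = c / (mu**2 / (2 * sigma**2) + c)   # (2.14)
c1 = 1.0                                   # the multiplicative constant is free here

f   = lambda x: c1 * x**gamma
fp  = lambda x: c1 * gamma * x**(gamma - 1)
fpp = lambda x: c1 * gamma * (gamma - 1) * x**(gamma - 2)

x = 1.7
residual = -mu**2 * fp(x)**2 / (2 * sigma**2 * fpp(x)) - c * f(x)   # lhs of (2.13)
a_of_x = -mu * fp(x) / (sigma**2 * fpp(x))                          # maximizer (2.12)
```

The maximizer evaluates to μx/(σ²(1−γ)), i.e. the risk exposure increases linearly in the reserve, in line with the behaviour described in the abstract.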
We then conjecture the following solution:

    f(x) = f_1(x) = c_1 x^γ,  x < u_0,
    f(x) = f_2(x) = c_1 ( α e^{d_−(x−u_0)} + β e^{d_+(x−u_0)} ),  x > u_0,

where α and β satisfy the smooth-fit conditions at u_0:

    α + β = u_0^γ,   (2.22)
    d_− α + d_+ β = γ u_0^{γ−1}.   (2.23)

These equations have the solution

    (α, β) = ( u_0^γ (d_+ − γ/u_0)/(d_+ − d_−),  u_0^γ (γ/u_0 − d_−)/(d_+ − d_−) ).   (2.24)

To simplify these expressions, let v_1 = √(μ² + 2cσ²). Then d_+ − d_− = 2v_1/σ², γ = 2cσ²/v_1² and u_0 = μσ²/v_1². This yields γ/u_0 = 2c/μ and

    d_+ − γ/u_0 = (−μ + v_1)/σ² − 2c/μ = (μv_1 − μ² − 2cσ²)/(μσ²) = (μv_1 − v_1²)/(μσ²) = −(v_1/μ) d_+,

so that

    α = u_0^γ ( −(v_1/μ) d_+ ) / (2v_1/σ²) = (σ² u_0^γ/(2μ)) ( −d_+ ).

By similar arguments,

    β = (σ² u_0^γ/(2μ)) ( −d_− ).

As a result,

    f_2(x) = κ ( −d_+ e^{d_−(x−u_0)} − d_− e^{d_+(x−u_0)} ),   u_0 < x,   (2.25)

where κ = c_1 σ² u_0^γ/(2μ) with c_1 > 0 unknown. Now we need to determine the unknown constants c_1, c_4, u_1. By similar arguments as above we only need to equalize
from the left and the right the first and the second derivatives at u_1. Let Θ = c_4 e^{d̂ u_1}. Matching the second derivatives gives

    κ ( −d_+ d_−² e^{d_−(u_1−u_0)} − d_− d_+² e^{d_+(u_1−u_0)} ) = d̂² Θ,   (2.28)
where we have used (2.26) in (2.28). Dividing (2.28) by (2.27) we eliminate the free constants and solve for u_1, getting

    u_1 = u_0 + (1/(d_+ − d_−)) ln( (d̂ − d_−)/(d_+ − d̂) ),   (2.29)

and from (2.27)

    κ = 1 / ( −d_+ d_− ( e^{d_−(u_1−u_0)} + e^{d_+(u_1−u_0)} ) ) > 0.   (2.30)
After all free constants have been determined, we can suggest the following solution:

    f(x) = (2μκ/σ²) (x/u_0)^γ,  x < u_0,
    f(x) = κ ( −d_+ e^{d_−(x−u_0)} − d_− e^{d_+(x−u_0)} ),  u_0 < x < u_1,   (2.31)
    f(x) = M/c + (1/d̂) e^{d̂(x−u_1)},  x > u_1,

where u_0, u_1 are given by (2.16) and (2.29), γ, κ by (2.14), (2.30) and d_±, d̂ are given by (2.18) and (2.20).
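The constants in (2.31) can be checked numerically. In the script below (ours, with sample parameters), d_± and d̂ are written out as the roots of the relevant characteristic equations, since the defining formulae (2.16), (2.18) and (2.20) do not survive in this copy; this reading is consistent with the surrounding derivation. The script verifies that the pieces of (2.31) glue together with matching value and unit slope at u_1:

```python
import math

# sample parameters with M >= mu/2 + c*sigma^2/mu (the large-M regime)
mu, sigma, c, M = 1.0, 2.0, 0.1, 2.0
v1 = math.sqrt(mu**2 + 2 * c * sigma**2)
v2 = math.sqrt((mu - M)**2 + 2 * c * sigma**2)
d_p = (-mu + v1) / sigma**2              # roots for a = 1, no dividends
d_m = (-mu - v1) / sigma**2
d_hat = (M - mu - v2) / sigma**2         # negative root for the region x > u1
gamma = 2 * c * sigma**2 / v1**2         # (2.14)
u0 = mu * sigma**2 / v1**2               # (2.16), as reconstructed above
u1 = u0 + math.log((d_hat - d_m) / (d_p - d_hat)) / (d_p - d_m)   # (2.29)
D = u1 - u0
kappa = 1.0 / (-d_p * d_m * (math.exp(d_m * D) + math.exp(d_p * D)))  # (2.30)

f1  = lambda x: (2 * mu * kappa / sigma**2) * (x / u0)**gamma
f2  = lambda x: kappa * (-d_p * math.exp(d_m * (x - u0))
                         - d_m * math.exp(d_p * (x - u0)))
f2p = lambda x: -kappa * d_p * d_m * (math.exp(d_m * (x - u0))
                                      + math.exp(d_p * (x - u0)))
f3  = lambda x: M / c + math.exp(d_hat * (x - u1)) / d_hat
```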
To show that f defined by (2.31) solves the problem, we first need to ensure that u_1 ≥ u_0.

Proof. Recall v_1 = √(μ² + 2cσ²) and let v_2 = √((μ − M)² + 2cσ²). Then it follows from (2.29) that u_1 ≥ u_0 is equivalent to

    (d̂ − d_−)/(d_+ − d̂) = (M + v_1 − v_2)/(v_1 − M + v_2) ≥ 1.   (2.33)

This is satisfied if and only if M ≥ v_2, which is equivalent to (2.32). □
Proof. The maximization with respect to l is obvious from concavity and the construction. For x < u_0 it follows from the construction that f maximizes (2.6) to zero with respect to a. For u_0 < x < u_1 we only need to show that

    (σ²a²/2) f''(x) + aμ f'(x) − c f(x) ≤ 0

for all a < 1. Since there is equality for a = 1, the above inequality holds if and only if

    G_a(x) = (σ²(1−a²)/2) f''(x) + (1−a) μ f'(x) ≥ 0

for any a < 1 and all u_0 ≤ x ≤ u_1. Substituting x = u_0, we can use (2.12) to get

    H_0(a) = G_a(u_0) = (σ²(1−a²)/2) f''(u_0) + (1−a) μ f'(u_0)
           = −(μ(1−a²)/2) f'(u_0) + (1−a) μ f'(u_0)
           = (μ/2) f'(u_0) (1 − 2a + a²) > 0,

and substituting x = u_1 we can use twice continuous differentiability to obtain

    H_1(a) = G_a(u_1) = (σ²(1−a²)/2) d̂ + (1−a) μ,

which is a convex function of a. Let v_2 be as in Lemma 2.1. Simple differentiation shows that H_1(a) attains its minimum at

    a* = −μ/(σ² d̂) = μ/(μ − M + v_2) ≥ 1.

The last inequality follows from the assumption. Therefore G_a(u_1) ≥ 0. It is easy to see that both f'(x) and f''(x) satisfy (2.11) with a = 1 for all u_0 < x < u_1, and since G_a(x) is an (a-dependent) linear combination of these functions, it satisfies the same differential equation for any a. It then follows from the maximum principle that G_a(x) ≥ 0 for all u_0 < x < u_1 and all a < 1. For x > u_1 we again need to prove

    G_a(x) = (σ²(1−a²)/2) f''(x) + (1−a) μ f'(x) ≥ 0

for any a < 1. Here, however, we need only to consider

    e^{−d̂(x−u_1)} G_a(x) = (σ²(1−a²)/2) d̂ + (1−a) μ = H_1(a),
Now we consider the case in which M < μ/2 + cσ²/μ. Let the 'switching points' u_0 and u_1 be defined by the same formulae as above. Then, based on the calculations we have done in Lemma 2.1, one can assume that u_1 < u_0 in this case. For x > u_0 we again get the equation (2.19) with solution

    f_3(x) = M/c + Θ e^{d̂(x−u_1)}

for some unknown Θ, and for u_1 < x < u_0 we get the equation

    max_{a∈[0,1]} [ (σ²a²/2) f''(x) + (aμ − M) f'(x) − c f(x) + M ] = 0.   (2.35)
Differentiating with respect to z and applying (2.39) once more leads to

    (μ²/(2σ²)) X''(z) e^{−z} − ( μ²/(2σ²) + c ) e^{−z} X'(z) + M e^{−z} = 0.

Put

    β = 2σ²/μ²;   (2.41)

then

    X''(z) − (1 + βc) X'(z) + βM = 0.   (2.42)

The solution to (2.42) is given by

    X(z) = k_1 e^{(1+βc)z} + (βM/(1+βc)) z + k_2,   (2.43)

where k_1 and k_2 are free parameters. From (2.38) we determine a solution of (2.36) for x > u_1:

    f(x) = ∫_{u_1}^x e^{−X^{−1}(y)} dy + c_2,   (2.44)

with c_2 free. Since M/c can be incorporated into the free parameter c_2, the solution to (2.35) is of the same form. From (2.38) and the definition of u_1 it follows that X(0) = u_1. Concavity of f implies that X^{−1}(x) is an increasing function on [u_1, ∞); thus k_1 ≥ 0. The maximizing function a(x) of the left-hand side of (2.35) for x > u_1 is given by

    a(x) = −μ f'(x)/(σ² f''(x)) = (μ/σ²) X'(X^{−1}(x)).

If k_1 > 0 then a(x) is strictly increasing and a(x) = 1 must have a solution. However, we have already argued that this is not possible. This contradiction leads to the conjecture that k_1 = 0, and we obtain

    a(x) = (μ/σ²) · βM/(1+βc) = 2M/( μ(1 + 2cσ²/μ²) ) = M/( μ/2 + cσ²/μ ) < 1.   (2.45)
From (2.46) and (2.47) we have c_2 = M/c and finally c_1 = u_1^{1−γ}/γ. Using (2.14) and (2.41) we obtain the following solution:

    f(x) = (u_1/γ) (x/u_1)^γ,  x < u_1,
    f(x) = M/c − (βM/(1+βc)) e^{−(1+βc)(x−u_1)/(βM)},  x > u_1,   (2.50)

with γ defined by (2.14) and u_1 by (2.49). The maximizing function a(x) is then given by

    a(x) = μx/(σ²(1−γ)),  x < u_1;   a(x) = μu_1/(σ²(1−γ)),  x > u_1,   (2.51)

where it is easily verified that this expression coincides with the expression given by (2.45) for x > u_1. Since a(x) < 1 for all x, the following theorem follows easily from construction.
Theorem 2.2 Assume M < μ/2 + cσ²/μ and let f be given by (2.50). Then f is a concave solution of (2.6), with maximizing functions a(x) given by (2.51) and

    l(x) = 0,  x < u_1;   l(x) = M,  x ≥ u_1,

where u_1 is given by (2.49) for M < μ/2 + cσ²/μ and by (2.29) for M ≥ μ/2 + cσ²/μ.

Theorem 2.3 Let V be given by (1.4), and let f be given by (2.50) for M < μ/2 + cσ²/μ and by (2.31) for M ≥ μ/2 + cσ²/μ. Then f(x) = V(x) for all x.
Proof. Let R_0^π = x and fix an arbitrary policy π. Choose 0 < ε < x and let τ_ε = inf{t : R_t^π = ε}; then Itô's formula yields

    e^{−c(t∧τ_ε)} f(R_{t∧τ_ε}^π) = f(x) + ∫_0^{t∧τ_ε} e^{−cs} L^{a_π(s), l_π(s)} f(R_s^π) ds
        + ∫_0^{t∧τ_ε} e^{−cs} σ a_π(s) f'(R_s^π) dW_s
      ≤ f(x) − ∫_0^{t∧τ_ε} e^{−cs} l_π(s) ds + ∫_0^{t∧τ_ε} e^{−cs} σ a_π(s) f'(R_s^π) dW_s.   (2.54)

In (2.54) the last inequality is due to (2.52). Since f'(R_s^π) ≤ f'(ε) < ∞ on [0, t∧τ_ε], the last term on the r.h.s. is a zero-mean martingale. Taking expectations in (2.54) we obtain

    E[ e^{−c(t∧τ_ε)} f(R_{t∧τ_ε}^π) ] + E ∫_0^{t∧τ_ε} e^{−cs} l_π(s) ds ≤ f(x).   (2.55)

By concavity of f, f(y) ≤ a + by for some a, b > 0. Therefore

    e^{−c(t∧τ_ε)} f(R_{t∧τ_ε}^π) ≤ a + b R_{t∧τ_ε}^π ≤ K(1 + R_{t∧τ}^π + ε) ≤ K(2 + R_{t∧τ}^π),   (2.56)
This implies that f(x) = V_{π*}(x) ≤ V(x). Therefore f(x) = V(x). □
Since π* is the optimal policy, we will refer to a(x), l(x) as the optimal feedback control functions.
In this case we will only sketch the proof; the remaining steps can be found in e.g. Fleming & Soner [15], Karatzas & Shreve [25], Harrison & Taksar [21], or Taksar [42]. As in the proof of Proposition 2.1 it can be shown that V satisfies the dynamic programming principle

    V(x) = sup_π E[ ∫_0^{τ∧Λ} e^{−cs} dL_s^π + e^{−c(τ∧Λ)} V(R_{τ∧Λ}^π) ]   (3.2)

for any {F_t}-stopping time Λ. Fix a arbitrary and define π by a_π(t) = a for all t and L_t^π = 0 for t < Λ_h and arbitrary for t ≥ Λ_h, with Λ_h as in the proof of Proposition 2.1. Then we have from (3.2)

    V(x) ≥ E[ e^{−cΛ_h} V(R_{Λ_h}^π) ].
Making 'smooth fit' at u_0 and u_1 as in the previous section, we get the solution

    f(x) = (2μκ/σ²) (x/u_0)^γ,  x < u_0,
    f(x) = κ ( −d_+ e^{d_−(x−u_0)} − d_− e^{d_+(x−u_0)} ),  u_0 < x < u_1,   (3.5)
    f(x) = x − u_1 + Δ,  x > u_1,

where

    u_1 = u_0 + (1/(d_+ − d_−)) ln( −d_−/d_+ )   (3.6)

and u_0 is given by (2.16), d_± by (2.18), κ by (2.30) with u_1 from (3.6), and

    Δ = κ ( −d_+ e^{d_−(u_1−u_0)} − d_− e^{d_+(u_1−u_0)} ).

By (3.6) we have e^{d_+(u_1−u_0)} = −(d_−/d_+) e^{d_−(u_1−u_0)}, so that

    Δ = ( −d_+ + d_−²/d_+ ) / ( −d_+ d_− (1 − d_−/d_+) )
      = (d_−² − d_+²) / ( −d_+ d_− (d_+ − d_−) )
      = (d_+ + d_−)/(d_+ d_−) = μ/c.

Note that Δ = f(u_1) = μ/c can also be found directly by inserting x = u_1 and a = 1 in (3.3). The maximizing function a(x) is here given by (2.34). Concavity follows by arguments similar to those of Section 2.
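The identity Δ = μ/c is easy to confirm numerically from (3.6) and (2.30) (our script, sample parameters); the same computation also confirms that f has unit slope and vanishing second derivative at the reflection barrier u_1, i.e. that (3.5) is C² there:

```python
import math

mu, sigma, c = 1.0, 2.0, 0.1              # sample parameters (illustration only)
v1 = math.sqrt(mu**2 + 2 * c * sigma**2)
d_p = (-mu + v1) / sigma**2               # characteristic roots for a = 1
d_m = (-mu - v1) / sigma**2
u0 = mu * sigma**2 / v1**2                # (2.16), as reconstructed
u1 = u0 + math.log(-d_m / d_p) / (d_p - d_m)   # (3.6)
D = u1 - u0
kappa = 1.0 / (-d_p * d_m * (math.exp(d_m * D) + math.exp(d_p * D)))  # (2.30)

f2   = lambda x: kappa * (-d_p * math.exp(d_m * (x - u0))
                          - d_m * math.exp(d_p * (x - u0)))
f2p  = lambda x: -kappa * d_p * d_m * (math.exp(d_m * (x - u0))
                                       + math.exp(d_p * (x - u0)))
f2pp = lambda x: -kappa * d_p * d_m * (d_m * math.exp(d_m * (x - u0))
                                       + d_p * math.exp(d_p * (x - u0)))

Delta = f2(u1)   # should equal mu/c
```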
It should be noted that this solution is also suggested by Whittle [43], but it appears that the work there is based on the assumption that the optimal strategy is a 'barrier strategy', without proving this assumption, and there are no arguments for either the HJB equation or the verification theorem. We include a proof of this theorem, since we find it non-trivial.
Proof. For x < u_1 the claim follows from arguments similar to those of Theorem 2.1. Since for x > u_1

    V'(x) = 1,

we need only show that L^a V(x) is non-positive for each 0 ≤ a ≤ 1. For x = u_1 this holds due to twice continuous differentiability and the validity of the required inequality on [0, u_1). In particular, aμ − cΔ ≤ 0 for all a ∈ [0,1]. For x > u_1 we obtain L^a V(x) = aμ − c(x − u_1 + Δ) < aμ − cΔ ≤ 0. Thus the theorem is proved. □
Remark 3.1 It is readily verified that the solution f_M given by (2.31) tends to the solution f given by (3.5) when M → ∞. First notice that d̂ → 0 when M → ∞, from which it follows that u_1^M given by (2.29) tends to u_1 given by (3.6), and κ_M in f_M tends to κ in f. Therefore in the limit the solutions coincide for all x ≤ u_1. If x > u_1, then from d̂ → 0 it follows that

    f_M(x) ≈ M/c + (1/d̂)(1 + d̂(x − u_1)) = M/c + 1/d̂ + x − u_1,

and by continuity of f_M at u_1^M

    M/c + 1/d̂ = κ_M ( −d_+ e^{d_−(u_1^M − u_0)} − d_− e^{d_+(u_1^M − u_0)} ) → Δ.
Proof. Fix an arbitrary π. Put Σ = {s : L_{s−}^π ≠ L_s^π}. Let L̂_t^π = Σ_{s∈Σ, s≤t} (L_s^π − L_{s−}^π) be the discontinuous part of L^π and let L̃_t^π = L_t^π − L̂_t^π be the continuous part. Choose ε > 0 and let τ_ε = inf{t : R_t^π ≤ ε}. Then by the generalized Itô formula (see Dellacherie & Meyer [12, Theorem VIII.27]) we can write

    e^{−c(t∧τ_ε)} f(R_{t∧τ_ε}^π) = f(x) + ∫_0^{t∧τ_ε} e^{−cs} L^{a_π(s)} f(R_s^π) ds
        + ∫_0^{t∧τ_ε} e^{−cs} σ a_π(s) f'(R_s^π) dW_s − ∫_0^{t∧τ_ε} e^{−cs} f'(R_s^π) dL_s^π
        + Σ_{s∈Σ, s≤t∧τ_ε} e^{−cs} [ f(R_s^π) − f(R_{s−}^π) − f'(R_{s−}^π)(R_s^π − R_{s−}^π) ]
      = f(x) + ∫_0^{t∧τ_ε} e^{−cs} L^{a_π(s)} f(R_s^π) ds
        + ∫_0^{t∧τ_ε} e^{−cs} σ a_π(s) f'(R_s^π) dW_s − ∫_0^{t∧τ_ε} e^{−cs} f'(R_s^π) dL̃_s^π
        + Σ_{s∈Σ, s≤t∧τ_ε} e^{−cs} [ f(R_s^π) − f(R_{s−}^π) ],

where we have used the equality R_s^π − R_{s−}^π = −(L_s^π − L_{s−}^π). In view of (3.7) the second term on the r.h.s. is non-positive. By concavity, 0 ≤ f'(R_s^π) ≤ f'(ε) on (0, τ_ε); therefore the third term is a zero-mean square-integrable martingale. Taking expectations, we obtain

    E[ e^{−c(t∧τ_ε)} f(R_{t∧τ_ε}^π) ] ≤ f(x) − E ∫_0^{t∧τ_ε} e^{−cs} f'(R_s^π) dL̃_s^π
        + E Σ_{s∈Σ, s≤t∧τ_ε} e^{−cs} [ f(R_s^π) − f(R_{s−}^π) ].   (3.8)

Since f'(x) ≥ 1, the mean-value theorem implies f(R_{s−}^π) − f(R_s^π) ≥ L_s^π − L_{s−}^π. The latter combined with (3.8) results in

    E[ e^{−c(t∧τ_ε)} f(R_{t∧τ_ε}^π) ] + E ∫_0^{t∧τ_ε} e^{−cs} dL_s^π ≤ f(x).   (3.9)

Letting ε → 0, we can apply the same arguments as in the proof of Theorem 2.3 to get

    E[ e^{−c(t∧τ)} f(R_{t∧τ}^π) ] + E ∫_0^{t∧τ} e^{−cs} dL_s^π ≤ f(x).   (3.10)

We conclude by letting t → ∞. □
Let a(x) be given by (2.34) and u_1 by (3.6). Consider a pair ({R_t}, {L_t}) which is a solution to the following system of equations:

    R_t = x + ∫_0^t μ a(R_s) ds + ∫_0^t σ a(R_s) dW_s − L_t,
    R_t ≤ u_1,  t ≥ 0,   (3.11)
    ∫_0^∞ I(R_t < u_1) dL_t = 0.

The pair ({R_t}, {L_t}) is called a solution to the Skorohod problem in (−∞, u_1]. Existence of such a solution is proved in Lions & Sznitman [28]. The process {R_t} is a diffusion process in (−∞, u_1], reflected at u_1, whose drift coefficient is μa(x) and whose diffusion coefficient is σa(x). Then {R_t} solves (1.1), (1.2) with policy π*.
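A discrete sketch (ours) of the Skorohod system (3.11): any overshoot above u_1 is instantly returned to the barrier and credited to the dividend process L.

```python
import math
import random

def reflected_dividends(x, u1, mu, sigma, a, T=4.0, dt=1e-3, seed=1):
    """Euler scheme for (3.11): the reserve follows the controlled diffusion
    with drift mu*a(R) and diffusion sigma*a(R), is kept at or below the
    barrier u1, and the overflow is paid out as dividends (the process L)."""
    rng = random.Random(seed)
    R, L, t = min(x, u1), max(x - u1, 0.0), 0.0
    while t < T and R > 0.0:
        dW = rng.gauss(0.0, math.sqrt(dt))
        R += a(R) * mu * dt + a(R) * sigma * dW
        if R > u1:                # reflection at u1: pay the excess as dividends
            L += R - u1
            R = u1
        t += dt
    return R, L
```

With σ = 0 and a ≡ 1 the reserve climbs linearly to u_1 and thereafter all drift is paid out, so L(T) ≈ μT − (u_1 − x), a quick check of the reflection bookkeeping.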
Proposition 3.3 Let f be defined by (3.5). Then f(x) = V_{π*}(x) for all x.

Proof. For simplicity assume that x ≤ u_1. In this case the results of Lions & Sznitman [28] show that {L_t^{π*}} as well as {R_t^{π*}} are continuous processes. Trivially L^{a(x)} f(x) = 0 for all x ≤ u_1. Applying Itô's formula in the same manner as in the proof of Proposition 3.2, we get

    e^{−ct} E_x[ f(R_t^{π*}); t < τ ] = f(x) − E ∫_0^{t∧τ} e^{−cs} f'(R_s^{π*}) dL_s^{π*}.   (3.12)

Since f'(u_1) = 1, one can use (3.11) to get

    E ∫_0^{t∧τ} e^{−cs} f'(R_s^{π*}) dL_s^{π*} = E ∫_0^{t∧τ} e^{−cs} f'(R_s^{π*}) I(R_s^{π*} = u_1) dL_s^{π*}
      = E ∫_0^{t∧τ} e^{−cs} dL_s^{π*}.   (3.13)

Substitution of (3.13) into (3.12) and letting t → ∞ results in

    f(x) − E ∫_0^{τ(x)} e^{−cs} dL_s^{π*} = 0,

which completes the proof. □
Corollary 3.1 Let f be defined by (3.5). Then V(x) = f(x) for all x and π* is the optimal policy.
In Asmussen & Taksar [1] the same problem is solved without a reinsurance option. Mathematically, this corresponds to a_π(t) ≡ 1 for all π. The result of that paper is given by the following theorems. First, the case of a bounded rate of dividend pay-out. Let

    Δ = M/c + 1/d̂,   (4.1)

    u_0 = (1/(d_+ − d_−)) ln( (1 − d_− Δ)/(1 − d_+ Δ) )   (4.2)

and

    κ = 1 / ( d_+ e^{d_+ u_0} − d_− e^{d_− u_0} ).   (4.3)

Let

    u_0 = (σ²/(2√(μ² + 2cσ²))) ln( d_−²/d_+² ).   (4.4)

Define κ by (4.3) with u_0 given by (4.4). The following theorem gives a description of the optimal return function in the case of an unrestricted rate of dividends.
Figure 3: The gain of reinsurance in the unrestricted case, calculated for μ = 1 and σ = 1 in (1),(3), σ = 2 in (2),(4), and c = 0.1 in (1),(2) and c = 0.5 in (3),(4). The dotted line represents the solution with no reinsurance.
5 Sensitivity analysis
[Figure: u_0 as a function of σ and μ; graphic not reproduced.]
[Figure: u_1 as a function of σ and μ, panels (1)–(3); graphic not reproduced.]
Figure 6: The point u_1 as a function of σ in the unbounded case with μ = 1.2 and c = 0.1.
for some x > 0. If we choose x < u_0 (or x < u_1, if M < K), then a(y) = μy/(σ²(1−γ)) and

    −∫^z ( 2μ a(y) )/( σ² a²(y) ) dy = k − 2(1−γ) log(z),

for some k, and

    S(x) = k ∫_0^x z^{−2(1−γ)} dz.

So if we choose γ < 1/2 then S(x) = ∞. This corresponds to

    μ²/(2σ²) > c.   (5.7)

Notice that equality in (5.7) is the relation which maximizes both u_0 and u_1.
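The divergence criterion can be checked with the closed-form antiderivative (our illustration): the integral ∫_ε^1 z^{−2(1−γ)} dz stays bounded as ε → 0 exactly when γ > 1/2, and blows up under (5.7), i.e. when γ < 1/2:

```python
def scale_integral(eps, gamma):
    """Closed form of int_eps^1 z^(-2(1-gamma)) dz, the integral appearing
    in the scale function S(x) (valid when 2(1-gamma) != 1)."""
    q = 1.0 - 2.0 * (1.0 - gamma)     # exponent after integration: z^q / q
    return (1.0 - eps**q) / q
```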
Remark 5.1 We have only commented on the drift term and the diffusion term, but the discount rate obviously plays an important role, as mentioned in the previous section. In insurance, the process {R_t} is in general regarded as the diffusion limit of the compound Poisson process {r_t} given by

    r_t = x + pt − Σ_{k=1}^{N_t} U_k,
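The remark is cut off at this point, but the standard way of matching the two processes (see e.g. Grandell [20]) equates their first two moments; a small helper (ours) for that matching, where λ is the claim intensity of N:

```python
def diffusion_coefficients(p, lam, mean_claim, second_moment):
    """Drift and squared diffusion coefficient of the standard diffusion
    approximation of the compound Poisson risk process
    r_t = x + p*t - sum_{k<=N_t} U_k, where N has intensity lam and the
    claims U_k have the given first two moments:
        mu = p - lam*E[U],   sigma^2 = lam*E[U^2]."""
    return p - lam * mean_claim, lam * second_moment
```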
References
[1] S. Asmussen and M. Taksar (1997). Controlled diffusion models for optimal dividend pay-out. Insurance: Mathematics and Economics, 20, 1-15.
[2] K. Borch (1967). The theory of risk. Jour. Roy. Statist. Soc. B 29, 432-452.
[3] K. Borch (1969). The capital structure of a rm. Swedish Jour. Econ. 71, 1-13.
[4] S. Browne (1995). Optimal investment policies for a rm with random risk process:
Exponential utility and minimizing the probability of ruin. Math. of OR, 20, 4,
937-958.
[5] S. Browne (1996). Survival and growth with liability: Optimal portfolio strategies
in continuous time. To appear in Math. of OR.
[6] H. Bühlmann (1970). Mathematical Methods in Risk Theory. Springer Verlag, Berlin.
[7] R. Buzzi (1974). Optimale Dividendenstrategien für den Risikoprozess mit austauschbaren Zuwächsen. Ph.D. dissertation 5388, ETH Zürich.
[8] A. Cadenillas and U.G. Haussmann (1994). Stochastic Maximum Principle for a
Singular Control Problem, Stochastics and Stochastics Reports, 49, 211-237.
[9] M. Davis (1993) Markov Models and Optimization. Chapman and Hall, London.
[10] M.H. Davis and A. Norman (1990). Portfolio selection with transaction costs,
Math. of Oper. Res., 15, 676-713.
[11] P.W.A. Dayananda (1970). Optimal reinsurance. J. Appl. Probab. 7, 134-156.
[12] C. Dellacherie and P.-A. Meyer (1980). Probabilités et Potentiel. Théorie des Martingales. Hermann, Paris.
[13] B. de Finetti (1957). Su un'impostazione alternativa della teoria collettiva del rischio. Transactions of the 15th International Congress of Actuaries, New York, 2, 433-443.
[14] W.H. Fleming and R.W. Rishel (1975). Deterministic and Stochastic Optimal
Control. Springer Verlag, New York.
[15] W.H. Fleming and H.M. Soner (1993). Controlled Markov Processes and Viscosity
Solutions. Springer Verlag, New York.
[16] H.U. Gerber (1972). Games of economic survival with discrete- and continuous-
income processes. Opns. Res. 20, 37-45.
[17] H.U. Gerber (1977). An optimal cancellation of policies. ASTIN Bull. IX, 125-138.
[18] H.U. Gerber (1979). An Introduction to Mathematical Risk Theory. S.S. Huebner
Foundation Monographs, University of Pennsylvania.
[19] I.I. Gihman and A.V. Skorohod (1975). The Theory of Stochastic Processes, Volume II. Springer Verlag.
[20] J. Grandell (1990). Aspects of Risk Theory. Springer Verlag, Berlin.
[21] J.M. Harrison and M. Taksar (1983). Instantaneous control of Brownian motion.
Math. of OR. 8, 439-453.
[22] B. Højgaard and M. Taksar (1996). Optimal proportional reinsurance policies for diffusion models. Scand. Act. Jour. (to appear).
[23] I. Karatzas, J.P. Lehoczky, S.P. Sethi and S.E. Shreve (1986). Explicit solution of
a general consumption/investment problem. Math. of OR. 11, 261-294.
[24] I. Karatzas, J.P. Lehoczky and S.E. Shreve (1987). Optimal portfolio and consumption decisions for a "small investor" on a finite horizon. SIAM J. Control Optim. 25, 1557-1586.
[25] I. Karatzas and S.E. Shreve (1984). Connection between optimal stopping and
singular stochastic control I. Monotone follower problem, SIAM J. Control Optim.
22, 856-877.
[26] S. Karlin and H.M. Taylor (1981). A Second Course in Stochastic Processes. Academic Press.
[27] J.P. Lehoczky and S.E. Shreve (1986). Absolutely continuous and singular stochastic control. Stochastics, 17, 91-109.
[28] P.-L. Lions and A.-S. Sznitman (1984). Stochastic differential equations with reflecting boundary conditions. Comm. Pure Appl. Math. 37, 511-537.
[29] A. Martin-Löf (1973). A method for finding the optimal decision rule for a policy holder of an insurance with a bonus system. Scand. Act. J. 1973, 23-39.
[30] A. Martin-Löf (1983). Premium control in an insurance system, an approach using linear control theory. Scand. Act. J. 1983, 1-27.
[31] A. Martin-Löf (1994). Lectures on the use of control theory in insurance. Scand. Act. J. 1994, 1-25.
[32] M.H. Miller and F. Modigliani (1961). Dividend policy, growth and the valuation of shares. J. Business, 34, 411-433.
[33] J. Paulsen and H.K. Gjessing (1996). Optimal choice of dividend barriers for
a risk process with stochastic return of investment. Submitted to: Insurance:
Mathematics and Economics.
[34] E. Presman and S. Sethi (1991). Risk aversion behaviour in consump-
tion/investment problems. Math. Finance. 1, 100-124.
[35] R. Radner and L. Shepp (1996). Risk vs. profit potential: A model for corporate strategy. JOTA, 20.
[36] D. Revuz and M. Yor (1994). Continuous Martingales and Brownian Motion. Springer Verlag.
[37] S.P. Sethi (1996). When does the share price equal the present value of future dividends? A modified dividend approach. Economic Theory, 8, 307-319.
[38] S.P. Sethi, N.A. Derzko and J. Lehoczky (1984). General solution of the stochastic price-dividend integral equation: A theory of financial valuation. SIAM J. Math. Anal., 15, 1100-1113.
[39] S.P. Sethi, N.A. Derzko and J. Lehoczky (1984). A stochastic extension of the Miller-Modigliani framework. Mathematical Finance, 1, 57-76.
[40] S. Sethi, M. Taksar and E. Presman (1992). Explicit solution of a general con-
sumption/portfolio problem with subsistence consumption and bankruptcy. J.
Econ. Dynamics Control. 16, 747-768.
[41] B. Sundt (1993). An Introduction to Non-Life Insurance Mathematics. VVW,
Karlsruhe.
[42] M. Taksar (1985). Average Optimal Singular Control and a Related Stopping
Problem, Math. of OR., 10, 63-81.
[43] P. Whittle (1983). Optimization over Time - Dynamic Programming and Stochas-
tic Control. Vol II. Wiley, New York.
[44] B. Øksendal (1985). Stochastic Differential Equations. Springer Verlag, Berlin.