
On Distortion Functionals

Georg Ch. Pflug^1

Abstract
Distorted measures have been used in the pricing of insurance contracts
for a long time. This paper reviews properties of related acceptability
functionals in risk management, called distortion functionals. These
functionals may be characterized as mixtures of average values-at-risk.
We give a dual representation of these functionals and show how they
may be used in portfolio optimization. An iterative numerical procedure
for the solution of these portfolio problems is given which is based on duality.
Keywords: Risk measures, insurance premium, portfolio optimization
1 Introduction: Distortion functionals as insurance premia
Let $L$ be a random variable describing the (nonnegative) loss distribution
of an insurance contract. Let $G_L$ be the pertaining distribution function
$G_L(u) = P\{L \le u\}$. How much premium should the insurance company
ask for coverage of $L$? Obviously, the premium should be greater than $E[L]$;
otherwise the insurance company will go bankrupt for sure.
Based on the well known formula
$$E[L] = \int_0^\infty (1 - G_L(u))\, du,$$
a safe insurance premium can be defined by
$$\pi_g[L] = \int_0^\infty g(1 - G_L(u))\, du, \qquad (1)$$
^1 Department of Statistics and Decision Support Systems, Universitaetsstrasse 5, University of Vienna, 1090 Wien-Vienna, Austria; e-mail: georg.pflug@univie.ac.at
where $g$ is a function mapping $[0,1]$ to $[0,1]$ such that
$$g(u) \ge u \quad \text{for } u \in [0,1]. \qquad (2)$$
The condition (2) guarantees that the premium is not smaller than the
expectation. However, one usually considers more specific functions $g$.
Definition 1. Distortion functions. A function $g$ is called a distortion
function if it is monotonic, left continuous and satisfies $g(u) \ge u$, $g(0) = 0$,
$g(1) = 1$. If $g$ is a distortion function, $\pi_g$ is called a distortion insurance
premium.
Distortion functions as premium principles were introduced by Denneberg
([8]) and further developed by S.S. Wang ([19]), among others.
For instance, the following distortion functions have been proposed:

The power distortion
$$g(u) = u^r, \qquad 0 < r < 1.$$

The Wang distortion
$$g(u) = \Phi(\Phi^{-1}(u) + \lambda), \qquad \lambda > 0,$$
where $\Phi$ is the distribution function of the standard normal.
Fig. 1. The power distortion function ($r = 0.3$, dotted) and the Wang
distortion function ($\lambda = 1$, dashed).
Notice that the function $g(u) = \lambda u$ for $\lambda > 1$ gives a valid insurance
premium by (2), called the proportional loading; it is, however, not a distortion
function in the sense of Definition 1.
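The two distortion functions of Fig. 1 are easy to evaluate numerically. The following minimal sketch (assuming NumPy and SciPy; the function names are illustrative) implements them; note that both dominate the identity on $[0,1]$, as required by (2).

```python
import numpy as np
from scipy.stats import norm

def g_power(u, r=0.3):
    """Power distortion g(u) = u**r, 0 < r < 1."""
    return np.asarray(u, dtype=float) ** r

def g_wang(u, lam=1.0):
    """Wang distortion g(u) = Phi(Phi^{-1}(u) + lambda), lambda > 0."""
    return norm.cdf(norm.ppf(np.asarray(u, dtype=float)) + lam)

u = np.linspace(0.0, 1.0, 11)
print(g_power(u))   # dominates u pointwise, g(0) = 0, g(1) = 1
print(g_wang(u))    # same properties; norm.ppf handles the endpoints 0 and 1
```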
The formula (1) may be interpreted in two ways. Firstly, one may argue
that instead of the loss distribution function $G_L$ one considers the $g$-distorted
version $G_{L,g}(u) = 1 - g(1 - G_L(u))$ and takes the expectation of the latter.
Notice that if $g$ is a distortion function, then $G_{L,g}$ is a probability distribution
function and (2) implies that $G_{L,g}$ dominates $G_L$ in the first order sense.
Secondly, as will be argued below,
$$\pi_g[L] = \int_0^1 G_L^{-1}(p)\, d\bar g(p), \quad\text{where } \bar g(p) = 1 - g(1-p).$$
For a distortion function $g$, $d\bar g(p)$ is a probability distribution on $[0,1]$.
While the expectation of $L$ equals the expectation of $G_L^{-1}(U)$ with a uniform
$[0,1]$ variable $U$, the insurance premium $\pi_g[L]$ is calculated by changing the uniform
distribution $dp$ on $[0,1]$ to the new distribution $d\bar g(u)$, which puts more
weight on larger loss values. This justifies the alternative name change-of-measure
premium principle. If $V$ has distribution $P\{V \le u\} = \bar g(u) = 1 - g(1-u)$
on $[0,1]$, then the random variable $Z = G_L^{-1}(V)$ has the distorted distribution
$G_{L,g}$.
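As a small numerical illustration of (1) (a sketch only; the exponential loss model, the truncation point and the quadrature rule are illustrative assumptions), the distortion premium can be computed by integrating the distorted survival function:

```python
import numpy as np

def distortion_premium(survival, g, upper, n=200_000):
    """Approximate pi_g[L] = int_0^infty g(1 - G_L(u)) du by the trapezoidal rule,
    truncating the integral at `upper`."""
    u = np.linspace(0.0, upper, n)
    return np.trapz(g(survival(u)), u)

survival = lambda u: np.exp(-u)      # 1 - G_L(u) for an exponential loss with E[L] = 1
g = lambda x: x ** 0.3               # power distortion, r = 0.3

print(distortion_premium(survival, g, upper=50.0))   # about 1/0.3 = 3.33, well above E[L] = 1
```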
2 Distortion acceptability functionals
In the previous section we considered nonnegative loss variables $L$. In this
section we extend the concept to arbitrary profit variables $Y$, for which the
analogue of the insurance premium is the acceptability value: Let $H$ be a
monotonic, right continuous, bounded function on $[0,1]$ satisfying $H(0) = 0$.
We do not necessarily require that $H$ generates a probability measure, i.e. that
$H(1) = 1$, but sometimes we will.
Assume that the profit variable $Y$ has distribution function $G$ and that
$G^{-1}$ is its left continuous inverse, called the quantile function, i.e.
$G^{-1}(p) = \inf\{u : G(u) \ge p\}$.
Definition 2. Distortion acceptability functionals. Let $\mathcal{G}_H$ be the set
of distributions $G$ for which $G^{-1}$ is $dH$-integrable. For $G$ in $\mathcal{G}_H$ we define
the distortion acceptability functional
$$\mathcal{A}_H(G) = \int_0^1 G^{-1}(p)\, dH(p).$$
If the random variable $Y$ has distribution function $G$, we also use the notation
$\mathcal{A}_H[Y]$, if no confusion may occur. We allow the distribution $G$ to have jumps
in order to cover finite sample situations.
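For the empirical distribution of a sample $y_1, \ldots, y_S$ with equal weights, the quantile function is piecewise constant and the defining integral reduces to a finite sum. A minimal sketch (assuming NumPy; the helper name is illustrative):

```python
import numpy as np

def distortion_acceptability(y, H):
    """A_H for the empirical distribution of the sample y:
    the empirical quantile function is constant on ((k-1)/S, k/S], so
    int_0^1 G^{-1}(p) dH(p) = sum_k y_(k) * (H(k/S) - H((k-1)/S))."""
    y = np.sort(np.asarray(y, dtype=float))
    p = np.arange(y.size + 1) / y.size
    return np.sum(y * np.diff(H(p)))

# sanity check: H(p) = p recovers the sample mean
rng = np.random.default_rng(0)
y = rng.standard_normal(10_000)
print(distortion_acceptability(y, lambda p: p), y.mean())
```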
The following formula for partial integration is well known:
$$E[Y] = \int u\, dG(u) = -\int_{-\infty}^0 G(u)\, du + \int_0^\infty (1 - G(u))\, du.$$
The next Lemma generalizes this formula. Notice that we allow $H$ to
have jumps at values $G(u)$ for which $G(u)$ and $G(u-) = \lim_{v \uparrow u} G(v)$ are
different. A similar formula is found in [13].
Lemma 1. Let $G \in \mathcal{G}_H$. Then
$$\int G^{-1}(p)\, dH(p) = -\int_{-\infty}^0 H(G(u))\, du + \int_0^\infty \tilde H(1 - G(u))\, du \qquad (3)$$
with $\tilde H(u) = H(1) - H(1-u)$.
Proof. We start with stating that
$$\int_0^1 G^{-1}(p)\, dH(p) = \int u\, dH(G(u)).$$
Notice first that
$$\int_0^1 G^{-1}(p)\, dH(p) = \int G^{-1}(G(u))\, dH(G(u)).$$
Now notice that $G^{-1}(G(u)) \le u$, and $G^{-1}(G(u)) < u$ only if there is a $v < u$
such that $G(v) = G(u)$ and consequently $H(G(v)) = H(G(u))$. Thus, setting
$A = \{u : G^{-1}(G(u)) = u\}$, we see that
$$\int G^{-1}(G(u))\, dH(G(u)) = \int_A G^{-1}(G(u))\, dH(G(u)) = \int_A u\, dH(G(u)) = \int u\, dH(G(u)).$$
The partial integration formula for Stieltjes integrals is
$$\int_{[a,b]} K_1(u+)\, dK_2(u) + \int_{[a,b]} K_2(u-)\, dK_1(u) = K_1(b+)K_2(b+) - K_1(a-)K_2(a-).$$
An application of this formula gives
$$\begin{aligned}
\int u\, dH(G(u)) &= \int_{(-\infty,0]} u\, dH(G(u)) + \int_{(0,\infty)} u\, dH(G(u))\\
&= \int_{(-\infty,0]} u\, dH(G(u)) - \int_{(0,\infty)} u\, d\bigl[H(1) - H(G(u))\bigr]\\
&= -\int_{(-\infty,0]} H(G(u))\, du + 0 \cdot H(G(0)) - \lim_{a \to -\infty} a\, H(G(a))\\
&\quad + \int_{(0,\infty)} \bigl[H(1) - H(G(u))\bigr]\, du - \lim_{b \to \infty} b\, \bigl[H(1) - H(G(b))\bigr] + 0 \cdot \bigl[H(1) - H(G(0))\bigr]\\
&= -\int H(G(u))\, du + \int \tilde H(1 - G(u))\, du,
\end{aligned}$$
since the boundary terms vanish for $G \in \mathcal{G}_H$. $\Box$
Remark. If $Y$ is nonnegative and $H$ is a probability distribution function, then
$$\mathcal{A}_H[Y] = \int_0^\infty \tilde H(1 - G(u))\, du = \pi_{\tilde H}[Y],$$
as is an easy consequence of the Lemma.
Fig. 2. The (nonconcave) distortion function
$H(u) = u + (1-u)^2 + (1-u)^3 - (1-u)^4 - (1-u)^5$ (left), an original
distribution $G(\cdot)$ (the standard normal) and its distortion $H(G(\cdot))$ (right).
Distortion acceptability functionals also appear under the name of spectral
measures (Acerbi [1]). Their role in determining the needed risk capital
has been emphasized by Artzner et al. [5] and Hürlimann [12].
Examples of distortion acceptability functionals.

Setting $H(p) = p$, one gets the expectation
$$E[Y] = \int_0^1 G^{-1}(p)\, dp = -\int_{-\infty}^0 G(u)\, du + \int_0^\infty (1 - G(u))\, du.$$

Setting $H(p) = \mathbf{1}_{[\alpha,1]}(p)$, one gets the value-at-risk
$$V@R_\alpha[Y] = G^{-1}(\alpha).$$

Setting $H(p) = \min(p/\alpha, 1)$, one gets the average value-at-risk
$$AV@R_\alpha[Y] = \frac{1}{\alpha}\int_0^\alpha G^{-1}(p)\, dp = -\int_{-\infty}^0 \min(G(u)/\alpha, 1)\, du + \int_0^\infty \max(1 - G(u)/\alpha, 0)\, du.$$
Several other names have been proposed for this functional, such as conditional
value-at-risk (Rockafellar and Uryasev [18]), expected shortfall
(Acerbi and Tasche [2]) and tail value-at-risk (Artzner et al. [4]). The
name average value-at-risk is due to Föllmer and Schied ([10]).

If $H$ has a density, i.e. $H(p) = \int_0^p h(q)\, dq$, the pertaining class coincides
with Yaari's dual functionals, see [20].
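These three special cases can be checked numerically with the empirical formula from above (a sketch with a simulated standard normal sample; the quoted reference values are the corresponding population quantities):

```python
import numpy as np

def A_H(y, H):
    """Empirical distortion acceptability functional: sum_k y_(k) [H(k/S) - H((k-1)/S)]."""
    y = np.sort(np.asarray(y, dtype=float))
    p = np.arange(y.size + 1) / y.size
    return np.sum(y * np.diff(H(p)))

rng = np.random.default_rng(1)
y = rng.standard_normal(100_000)
alpha = 0.05

print(A_H(y, lambda p: p))                              # expectation, about 0
print(A_H(y, lambda p: (p >= alpha).astype(float)))     # V@R_0.05, about -1.645
print(A_H(y, lambda p: np.minimum(p / alpha, 1.0)))     # AV@R_0.05, about -2.06
```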
3 Dual representations
If $U$ is uniform $[0,1]$, then $G^{-1}(U)$ has the same distribution as $Y$. The
converse is not true: if $G$ has jumps, then $G(Y)$ is not uniformly distributed;
it is in fact stochastically larger than a uniform $[0,1]$ distribution,
$P\{G(Y) \le p\} \le P\{U \le p\}$.
To correct the distribution of $G(Y)$ towards a uniform distribution, let
$\mathcal{J}$ be the countable set of all jump points of $G$. For $u \in \mathcal{J}$, let $(V_u)$ be a
collection of independent random variables which are uniform $[0, G(u) - G(u-)]$
distributed and which are independent of $Y$. For $u \notin \mathcal{J}$, set $V_u = 0$.
By composition, one may form the random variable $V_Y$.
Lemma 2. $G(Y) - V_Y$ is uniformly $[0,1]$ distributed and $G^{-1}(G(Y) - V_Y) = Y$ a.s.
Proof. Let $0 < p < 1$. We have to show that $P\{G(Y) - V_Y \le p\} = p$.
One may find a $u$ such that $G(u-) \le p \le G(u)$. If $G(u-) = G(u)$, then
$V_u = 0$ and $P\{G(Y) \le p\} = p$. Otherwise $G(Y) - V_Y \le p$ iff $Y < u$, or $Y = u$
and $V_u \ge G(u) - p$, i.e.
$$P\{G(Y) - V_Y \le p\} = G(u-) + [G(u) - G(u-)]\,\frac{p - G(u-)}{G(u) - G(u-)} = p.$$
To prove the second assertion, notice that, conditional on $Y = u$, $G(u) - V_u$
lies in the interval $[G(u-), G(u)]$ and with probability 1 in $(G(u-), G(u)]$.
However, on the latter interval $G^{-1}$ equals $u$. $\Box$
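Lemma 2 is easy to verify by simulation. The following sketch (assuming NumPy/SciPy; the discrete distribution is an arbitrary illustrative choice) draws a discrete $Y$, randomizes over the jumps and checks the uniformity of $G(Y) - V_Y$ as well as the recovery $G^{-1}(G(Y) - V_Y) = Y$:

```python
import numpy as np
from scipy.stats import kstest

rng = np.random.default_rng(2)

values = np.array([-1.0, 0.0, 2.0])      # a discrete Y, so G has jumps
probs  = np.array([0.2, 0.5, 0.3])
G      = np.cumsum(probs)                # G at the atoms; G(u-) = G - probs

idx = rng.choice(3, size=200_000, p=probs)
Y   = values[idx]
V   = rng.uniform(0.0, probs[idx])       # V_Y | Y = u is uniform [0, G(u) - G(u-)]
U   = G[idx] - V                         # Lemma 2: should be uniform [0,1]

print(kstest(U, "uniform"))              # uniformity is not rejected
recovered = values[np.searchsorted(G, U)]        # G^{-1}(U)
print(np.mean(recovered == Y))           # 1.0 (up to floating-point effects at the jumps)
```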
Lemma 3. Suppose that $H(p) = \int_0^p h(q)\, dq$. Then
$$\mathcal{A}_H[Y] = E[Y\, h(G(Y) - V_Y)].$$
Proof. Let $U = G(Y) - V_Y$. Then by Lemma 2, $U$ is uniform $[0,1]$ and
$G^{-1}(U) = Y$ a.s., and therefore $\mathcal{A}_H[Y] = E[G^{-1}(U)\, h(U)] = E[Y\, h(G(Y) - V_Y)]$. $\Box$
The next result shows the dual representation of $\mathcal{A}_H$.

Proposition 1. (see also Pflug [16])
$$\mathcal{A}_H[Y] = \inf\{E[YZ] : Z = h(U), \text{ where } U \text{ is uniformly } [0,1] \text{ distributed}\}, \qquad (4)$$
which is the same as
$$\mathcal{A}_H[Y] = \inf\{E[YZ] : Z \preceq_{CXD} Z^*\}, \quad\text{where } Z^* = h(U), \text{ with } U \text{ uniformly } [0,1] \text{ distributed}.$$
The infimum in (4) is attained for $Z = h(G(Y) - V_Y)$.
Proof. By virtue of Lemma 3, $\mathcal{A}_H[Y] \ge \inf\{E[YZ] : Z = h(U),\ U \text{ uniform } [0,1]\}$.
To prove the other inequality, notice first Hoeffding's Lemma ([11], see also [15]):
$$\mathrm{Cov}(Y,Z) = \int\!\!\int \bigl[G_{Y,Z}(u,v) - G_Y(u)\, G_Z(v)\bigr]\, du\, dv,$$
where $G_{Y,Z}$ is the joint distribution and $G_Y$, $G_Z$ are the marginal distributions
of the vector $(Y,Z)$. If we fix the marginals, the covariance $\mathrm{Cov}(Y,Z)$ (and
equivalently $E[YZ]$) is minimized for the joint distribution which is the
lower Fréchet bound $G_{Y,Z}(u,v) = \max(G_Y(u) + G_Z(v) - 1, 0)$, i.e. if $Y$ and
$Z$ are antimonotonically coupled.
Since $h$ is nonincreasing and $G$ is nondecreasing, it is easy to see that
the variables $Y$ and $h(G(Y) - V_Y)$ are indeed antimonotonically coupled, and
therefore (4) is the lower bound of $\{E[YZ] : Z = h(U)\}$.
For the second representation, we use a result by Dentcheva and Ruszczyński ([9]):
$$\mathrm{conv}\,\{Z : Z \overset{\mathcal{L}}{=} V\} = \{Z : Z \preceq_{CXD} V\}.$$
Here $Z \preceq_{CXD} V$ iff $E[f(Z)] \le E[f(V)]$ for all convex functions $f$. $\Box$
Since the functions $H(p) = \min(p/\alpha, 1)$ are the extremal elements in
the set of all concave, monotonic probability distribution functions on $[0,1]$,
there is a Choquet representation for distortion functionals (this particular
representation is not due to Choquet [6], but it is in his spirit).
Proposition 2. Choquet representation. (Acerbi [1], [13]). Suppose
that $H$ is concave, i.e. has a representation $H(p) = \int_0^p h(q)\, dq$ with a
nonincreasing $h$. Then
$$\mathcal{A}_H(G) = \int_0^1 AV@R_\alpha(G)\, dK(\alpha)$$
for a monotonically increasing $K$ which satisfies $K(0) = 0$, $K(1) = H(1)$.
Thus, for concave $H$, the distortion functionals are mixtures of AV@Rs.
Proof. We may assume that the nonincreasing function $h$ defined on
$[0,1]$ is continuous from the left and has the representation
$$h(p) = \int_{(0,1]} \frac{1}{\alpha}\, \mathbf{1}_{[0,\alpha]}(p)\, dK(\alpha). \qquad (5)$$
To show (5), let $h_n$ be the largest nonincreasing left continuous step function
which is dominated by $h$ and which jumps only at dyadic rational points $k/2^n$.
Clearly, one may write $h_n(p) = \int_{(0,1]} \frac{1}{\alpha}\, \mathbf{1}_{[0,\alpha]}(p)\, dK_n(\alpha)$ with $K_n(0) = 0$,
$K_n(1) = \int_0^1 h_n(p)\, dp \le H(1)$. The sequence $dK_n$ is a sequence of bounded
measures on $[0,1]$, which has a weak limit $dK$ with $K(0) = 0$,
$K(1) = \lim_n \int_0^1 h_n(p)\, dp = H(1)$. Now
$$\begin{aligned}
\mathcal{A}_H(G) &= \int_0^1 G^{-1}(p)\, dH(p) = \int_0^1 G^{-1}(p)\, h(p)\, dp\\
&= \int_0^1 G^{-1}(p) \int_{(0,1]} \frac{1}{\alpha}\, \mathbf{1}_{[0,\alpha]}(p)\, dK(\alpha)\, dp\\
&= \int_{(0,1]} \frac{1}{\alpha} \int_0^1 G^{-1}(p)\, \mathbf{1}_{[0,\alpha]}(p)\, dp\, dK(\alpha)\\
&= \int_{(0,1]} \left(\frac{1}{\alpha}\int_0^\alpha G^{-1}(p)\, dp\right) dK(\alpha) = \int_{(0,1]} AV@R_\alpha(G)\, dK(\alpha). \qquad \Box
\end{aligned}$$
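The representation can be checked numerically for a concrete concave $H$. In the sketch below we take $H(p) = 2p - p^2$, i.e. $h(p) = 2(1-p)$; matching the measure against (5) gives $dK(\alpha) = 2\alpha\, d\alpha$ (this particular choice, the sample and the quadrature grids are assumptions of the example):

```python
import numpy as np

rng = np.random.default_rng(4)
y = np.sort(rng.standard_normal(50_000))
S = y.size

# left-hand side: A_H(G) for the empirical G, with H(p) = 2p - p^2
p_grid = np.arange(S + 1) / S
H = lambda p: 2 * p - p ** 2
lhs = np.sum(y * np.diff(H(p_grid)))

# right-hand side: int_0^1 AV@R_alpha(G) dK(alpha) with dK(alpha) = 2 alpha d alpha
cum = np.concatenate(([0.0], np.cumsum(y))) / S        # int_0^{k/S} G^{-1}(p) dp
def avar(alpha):
    k = int(np.floor(alpha * S))
    return (cum[k] + (alpha - k / S) * (y[k] if k < S else 0.0)) / alpha

alphas = np.linspace(1e-4, 1.0, 2001)
rhs = np.trapz([avar(a) * 2 * a for a in alphas], alphas)

print(lhs, rhs)     # agree up to discretization error (about -0.56 for the standard normal)
```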
Proposition 3. Properties of distortion functionals. (see also De Giorgi [7])
Distortion functionals $\mathcal{A}_H[Y] = \int_0^1 G^{-1}(u)\, dH(u)$ are
(i) version independent, i.e. they depend on the random variable $Y$ only
through its distribution function $G$,
(ii) linear in the quantile function $G^{-1}$ and hence comonotone additive,
(iii) positively homogeneous, i.e. $\mathcal{A}_H[cY] = c\,\mathcal{A}_H[Y]$ for $c > 0$,
(iv) isotonic w.r.t. first order stochastic dominance ($\preceq_{FSD}$).
If $dH$ is a probability measure (i.e. $H(1) = 1$), then $\mathcal{A}_H$ is moreover
(v) translation equivariant, i.e. $\mathcal{A}_H[Y + a] = \mathcal{A}_H[Y] + a$.
If $H$ is concave (i.e. $H(p) = \int_0^p h(q)\, dq$ for a nonincreasing $h$), then $\mathcal{A}_H$ is
(vi) concave, i.e. $\mathcal{A}_H[\lambda Y_1 + (1-\lambda)Y_2] \ge \lambda\,\mathcal{A}_H[Y_1] + (1-\lambda)\,\mathcal{A}_H[Y_2]$ for $0 \le \lambda \le 1$,
(vii) isotonic w.r.t. second order stochastic dominance ($\preceq_{SSD}$),
(viii) continuous in the following sense: if $H$ is concave and $h \in L^q[0,1]$, then
$Y \mapsto \mathcal{A}_H(Y)$ is continuous in the $L^p$ sense, where $1/p + 1/q = 1$.
Proof. (i) is obvious. (ii) Recall that a functional $T$ is called comonotone
additive if for two comonotone random variables $Y_1$ and $Y_2$, $T[Y_1 + Y_2] = T[Y_1] + T[Y_2]$.
Two random variables are comonotone if their joint distribution $G_{1,2}$ is related
to the marginal distributions $G_1$, $G_2$ by $G_{1,2}(u,v) = \min(G_1(u), G_2(v))$. For
comonotone variables, the quantile function of $Y_1 + Y_2$ is the sum of the marginal
quantile functions $G_1^{-1}(u) + G_2^{-1}(u)$. (iii) Notice that $cY$ has quantile function
$cG^{-1}(u)$ for positive $c$. (iv) $Y_1$ is dominated by $Y_2$ in the first order sense if
$G_1(u) \ge G_2(u)$ for all $u$, which is equivalent to $G_1^{-1}(u) \le G_2^{-1}(u)$ for all $u$.
(v) The variable $a + Y$ has quantile function $a + G^{-1}(u)$.
(vi) If $H$ is concave, then $\mathcal{A}_H[Y]$ has a dual representation as given in
Proposition 1. Thus $\mathcal{A}_H$ is the infimum of linear (in $Y$) functions and hence
concave. Finally, for concave $H$, $\mathcal{A}_H(Y)$ has by Proposition 2 a representation
as a mixture of AV@Rs. The assertion (vii) is proved if we show that it holds
true for all $AV@R_\alpha$. It is well known (Rockafellar and Uryasev [17]) that
$$AV@R_\alpha[Y] = \max\Bigl\{a - \frac{1}{\alpha}\, E\bigl([Y - a]^-\bigr) : a \in \mathbb{R}\Bigr\}.$$
For fixed $a$, the function $Y \mapsto a - \frac{1}{\alpha} E([Y-a]^-)$ is monotonic w.r.t. second
order stochastic dominance, since the function $y \mapsto a - \frac{1}{\alpha}[y-a]^-$ is monotonic
and concave. Thus also the maximum is monotonic w.r.t. second order
stochastic dominance, which proves (vii).
Finally, notice that because of (4), $\mathcal{A}_H$ is upper semicontinuous. If $\mathcal{A}_H$
were not continuous, one could find a sequence $Y_n$ converging to $Y$ in the $L^p$
sense such that
$$\mathcal{A}_H(Y) \ge \limsup_n \mathcal{A}_H(Y_n) + \epsilon$$
for some $\epsilon > 0$. Represent $\mathcal{A}_H[Y_n] = E[Y_n\, h(U_n)]$ with $U_n$ a uniform $[0,1]$
variable. Notice that $\|h(U_n)\|_q = \bar h$ (say) for all $n$. If $\|Y - Y_n\|_p < \epsilon/\bar h$, then
$$\mathcal{A}_H(Y) \le E[Y\, h(U_n)] \le E[Y_n\, h(U_n)] + \bar h\, \|Y_n - Y\|_p < \mathcal{A}_H(Y_n) + \epsilon,$$
a contradiction. $\Box$
Notice that (iii) and (vi) imply that $\mathcal{A}_H$ is superadditive:
$$\mathcal{A}_H[Y + \tilde Y] \ge \mathcal{A}_H[Y] + \mathcal{A}_H[\tilde Y].$$
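Positive homogeneity, translation equivariance and superadditivity are easily illustrated on the empirical functional (a sketch; the concave $H$ below is the AV@R distortion and the dependent sample is an arbitrary illustrative choice):

```python
import numpy as np

def A_H(y, H):
    """Empirical distortion acceptability functional of the sample y."""
    y = np.sort(np.asarray(y, dtype=float))
    p = np.arange(y.size + 1) / y.size
    return np.sum(y * np.diff(H(p)))

H = lambda p: np.minimum(p / 0.1, 1.0)     # concave, H(1) = 1

rng = np.random.default_rng(5)
Y1 = rng.standard_normal(100_000)
Y2 = 0.5 * Y1 + rng.standard_normal(100_000)      # a dependent (not comonotone) second position

print(A_H(3.0 * Y1, H), 3.0 * A_H(Y1, H))                 # (iii) positive homogeneity
print(A_H(Y1 + 2.0, H), A_H(Y1, H) + 2.0)                 # (v) translation equivariance
print(A_H(Y1 + Y2, H) >= A_H(Y1, H) + A_H(Y2, H))         # superadditivity: True
```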
Remark. Kusuoka [14] has shown that any version independent functional
$T$ which is positively homogeneous, translation equivariant and coincides
with its concave bidual can be represented as
$$T[Y] = \inf\left\{\int_0^1 AV@R_\alpha(Y)\, dM(\alpha) : M \in \mathcal{M}\right\},$$
where $\mathcal{M}$ is a set of probability measures on $[0,1]$. If $T$ is comonotone
additive, then $\mathcal{M}$ contains only one probability measure and $T$ has a Choquet
representation of the form given in Proposition 2. In Proposition 2 we have derived
this result in a direct manner by relating the mixture measure $M$ to the distortion
function $H$.
Proposition 3 has a converse, given by the following result.

Proposition 4. Let $\mathcal{A}(G)$ be a version independent functional which is
finite for bounded distributions. If $\mathcal{A}$ is positively homogeneous, monotonic
w.r.t. FSD and comonotone additive, then it is of the form $\mathcal{A}(G) = \int_0^1 G^{-1}(u)\, dH(u)$.

Proof. W.l.o.g. we may assume that $\mathcal{A}(\mathbf{1}) = 1$ and $\mathcal{A}(0) = 0$. By positive
homogeneity, $\mathcal{A}(c) = c$ for $c \ge 0$. By $\mathcal{A}(c) + \mathcal{A}(-c) = \mathcal{A}(0) = 0$, also
$\mathcal{A}(c) = c$ for $c < 0$. Let $G_p(u)$ be the distribution which has point masses at
0 (with probability $p$) and at 1 (with probability $1-p$). Let $H(p) = 1 - \mathcal{A}(G_p)$.
Then $H$ is monotonically nondecreasing and satisfies $H(0) = 0$, $H(1) = 1$. Let
$$\mathcal{A}^*(G) = \int_0^1 G^{-1}(p)\, dH(p).$$
Notice that any discrete distribution can be represented as a comonotone sum of
discrete distributions with just two point masses. This implies that $\mathcal{A}$ coincides
with $\mathcal{A}^*$ for all discrete distributions. If $G$ is bounded and nondiscrete, one may
find, for every $\epsilon > 0$, two discrete distributions $G_1$ and $G_2$ such that
$$G_1 \preceq_{FSD} G \preceq_{FSD} G_2 \quad\text{and}\quad \mathcal{A}^*(G_2) - \mathcal{A}^*(G_1) \le \epsilon.$$
Since also $\mathcal{A}(G_1) \le \mathcal{A}(G) \le \mathcal{A}(G_2)$, one sees that in fact $\mathcal{A}(G) = \mathcal{A}^*(G)$. $\Box$
4 Portfolio optimization
In this section we consider a one-period portfolio optimization problem,
where the objective is to maximize the expected return under the constraint
that a distortion acceptability functional does not fall below a prespecified
level. Since the negative distortion risk functional has the interpretation of
required risk capital (see [3]), one may equivalently say that the portfolio
optimization problem seeks to maximize the return for a given maximal risk capital.
Let $\xi = (\xi^{(1)}, \ldots, \xi^{(M)})$ be a row vector of random portfolio returns defined
on some probability space $(\Omega, \mathcal{B}, P)$ and let $x = (x_1, \ldots, x_M)^\top$ be the column
vector of weights. The total portfolio has the value $Y_x = \xi\, x$.
The optimization problem reads
$$\begin{array}{ll}
\text{Maximize (in } x\text{):} & E[\xi]\, x\\
\text{subject to} & \mathcal{A}_H[\xi\, x] \ge q\\
& x^\top \mathbf{1} = 1\\
& x \ge 0.
\end{array} \qquad (6)$$
Here $\mathbf{1}$ is the column vector of length $M$ with all entries equal to 1. (6) is a linear
optimization problem under a convex constraint. To derive the necessary
conditions for optimality, a characterization of the supergradient set
$$\partial\mathcal{A}_H[Y] = \bigl\{Z \in L^q : \mathcal{A}_H[\tilde Y] \le \mathcal{A}_H[Y] + E[(\tilde Y - Y)\, Z] \text{ for all } \tilde Y \in L^p\bigr\}$$
of $\mathcal{A}_H$ at $Y$ is needed. Assume that $Y \in L^p(\Omega)$, $p \ge 1$, and that $q = p/(p-1)$ for
$p > 1$ and $q = \infty$ for $p = 1$. Assume further that $H(u) = \int_0^u h(v)\, dv$ with
$h \in L^q[0,1]$.
Lemma 4. The supergradient set of $\mathcal{A}_H$ is $\partial\mathcal{A}_H[Y] = \{h(G(Y) - V_Y)\}$,
where $V_Y$ runs through all random variables on $\Omega$ for which the conditional
distribution of $V_Y$ given $Y = u$ is uniform $[0, G(u) - G(u-)]$. The supergradient
set is a singleton if and only if $G$ is continuous, given that $\Omega$ is rich
enough to carry a uniform $[0,1]$ distribution.
Proof. Let $\mathcal{Z} = \{Z : Z = h(U), \text{ where } U \text{ is uniformly } [0,1] \text{ distributed}\}$.
By (4), $\mathcal{A}_H[Y] = E[Y\, h(G(Y) - V_Y)] = \inf\{E[YZ] : Z \in \mathcal{Z}\}$. Since
$\mathcal{A}_H[\tilde Y] \le E[\tilde Y\, h(G(Y) - V_Y)]$, one sees that $h(G(Y) - V_Y) \in \partial\mathcal{A}_H[Y]$ for all
versions of $V_Y$. Conversely, let $\tilde Z \in \partial\mathcal{A}_H[Y]$. If $\tilde Z$ is not in $\mathcal{Z}$, then there is
an $\epsilon > 0$ and a $\tilde Y$ such that $E[\tilde Y \tilde Z] + \epsilon \le \inf\{E[\tilde Y Z] : Z \in \mathcal{Z}\} = \mathcal{A}_H[\tilde Y]$.
Consequently
$$\mathcal{A}_H[Y + \tilde Y] \le \mathcal{A}_H[Y] + E[\tilde Y \tilde Z] \le \mathcal{A}_H[Y] + \mathcal{A}_H[\tilde Y] - \epsilon \le \mathcal{A}_H[Y + \tilde Y] - \epsilon,$$
a contradiction. Thus the first assertion has been proved. If $G$ is continuous,
then $V_Y = 0$ and thus the supergradient is unique. If however $G$ has jumps,
then $h(G(Y) - V_Y)$ may be realized in different ways and is thus not unique. $\Box$
Lemma 5. If $H$ has a nonnegative density $h \in L^q[0,1]$ and $G$ is continuous,
then $\tilde Y \mapsto \mathcal{A}_H[\tilde Y]$ is strongly differentiable in the $L^p$ sense at $Y$. The
derivative is
$$\nabla\mathcal{A}_H[Y] = h(G(Y)).$$
Consequently, the mapping $x \mapsto \mathcal{A}_H[\xi\, x]$ is also differentiable at $Y_x = \xi\, x$,
with derivative
$$\nabla_x\, \mathcal{A}_H[\xi\, x] = E\bigl[\xi^\top h(G(\xi\, x))\bigr].$$
Proof. Let $Z = \nabla\mathcal{A}_H(Y) = h(G(Y))$. If $\mathcal{A}_H$ is not differentiable at $Y$, then there is
a sequence $Y_n$ converging to $Y$ in the $L^p$ sense such that
$$\mathcal{A}_H(Y_n) - \mathcal{A}_H(Y) - E[(Y_n - Y)Z] \le -\epsilon\, \|Y_n - Y\|_p \qquad (7)$$
for some $\epsilon > 0$ and all $n$. Let $Z_n \in \partial\mathcal{A}_H(Y_n)$. Then
$$\mathcal{A}_H(Y) - \mathcal{A}_H(Y_n) - E[(Y - Y_n)Z_n] \le 0. \qquad (8)$$
(7) and (8) together give
$$\|Y_n - Y\|_p\, \|Z_n - Z\|_q \ge E[(Y_n - Y)(Z - Z_n)] \ge \epsilon\, \|Y_n - Y\|_p. \qquad (9)$$
By uniform integrability, the sequence $(Z_n)$ must have a weak cluster point,
say $Z^*$. By (9), $Z^* \ne Z$. On the other hand, looking at
$$\mathcal{A}_H(\tilde Y) \le \mathcal{A}_H(Y_n) + E[(\tilde Y - Y_n)Z_n]$$
for all $\tilde Y$, and using the $L^p$-continuity of $\mathcal{A}_H$ (see Proposition 3 (viii)), it follows
by letting $n \to \infty$ that
$$\mathcal{A}_H(\tilde Y) \le \mathcal{A}_H(Y) + E[(\tilde Y - Y)Z^*].$$
Thus $Z^*$ would be a further element of $\partial\mathcal{A}_H(Y)$, a contradiction to the uniqueness
of the supergradient established in Lemma 4. $\Box$
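A sample-based version of the gradient formula of Lemma 5 is straightforward: replace $G$ by the empirical distribution function of the scenario values. The sketch below (synthetic scenario data, AV@R distortion; all names are illustrative) compares it with finite differences:

```python
import numpy as np

rng = np.random.default_rng(6)
S, M = 200_000, 3
Xi = rng.multivariate_normal([1.06, 1.04, 1.02], 0.01 * np.eye(M) + 0.005, size=S)  # scenario values of xi

alpha = 0.1
h = lambda p: np.where(p <= alpha, 1.0 / alpha, 0.0)     # density of H(p) = min(p/alpha, 1)

def A_H(y):
    """Empirical distortion acceptability functional for H(p) = min(p/alpha, 1)."""
    y = np.sort(y)
    p = np.arange(y.size + 1) / y.size
    return np.sum(y * np.diff(np.minimum(p / alpha, 1.0)))

def grad(x):
    """Sample version of Lemma 5: grad_x A_H[xi x] = E[xi^T h(G(xi x))]."""
    y = Xi @ x
    G_of_y = (np.argsort(np.argsort(y)) + 1) / y.size    # empirical distribution function at y
    return Xi.T @ h(G_of_y) / y.size

x = np.array([0.5, 0.3, 0.2])
eps = 1e-4
fd = np.array([(A_H(Xi @ (x + eps * np.eye(M)[m])) - A_H(Xi @ x)) / eps
               for m in range(M)])
print(grad(x))
print(fd)       # finite differences, close to the analytic gradient
```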
A slight modification of (6) is the problem
$$\begin{array}{ll}
\text{Maximize (in } x\text{):} & E[\xi]\, x\\
\text{subject to} & \mathcal{A}_H[\xi\, x] = q\\
& x^\top \mathbf{1} = 1\\
& x \ge 0.
\end{array}$$
Under the assumption of differentiability, the necessary Karush-Kuhn-Tucker
(KKT) conditions for this problem are (with multipliers $\lambda$, $\mu$ and $\nu = (\nu_1, \ldots, \nu_M)^\top$):
$$\begin{aligned}
& E[\xi]^\top + \lambda\, \nabla_x\, \mathcal{A}_H[\xi\, x] + \mu\, \mathbf{1} + \nu = 0,\\
& \lambda\, (\mathcal{A}_H[\xi\, x] - q) = 0,\\
& \mu\, (x^\top \mathbf{1} - 1) = 0,\\
& \nu_m\, x_m = 0 \quad \text{for } m = 1, \ldots, M,\\
& \nu \ge 0.
\end{aligned}$$
Suppose that the optimum lies in the interior of $X$, i.e. assets not present
in the portfolio are neglected. Interpreting $-\mathcal{A}_H$ as the necessary risk capital,
the KKT conditions can be formulated as:
$$E[\xi^{(m)}] - E[\xi\, x] \quad\text{is proportional to}\quad \frac{x_m\, e_m^\top\, \nabla_x \mathcal{A}_H[\xi\, x]}{\sum_{k=1}^M x_k\, e_k^\top\, \nabla_x \mathcal{A}_H[\xi\, x]}\; \mathcal{A}_H[\xi\, x] \;-\; \mathcal{A}_H[\xi\, x],$$
where $e_m$ is the $m$-th unit vector. Introducing the quantity
$$\frac{x_m\, e_m^\top\, \nabla_x \mathcal{A}_H[\xi\, x]}{\sum_{k=1}^M x_k\, e_k^\top\, \nabla_x \mathcal{A}_H[\xi\, x]}\; \mathcal{A}_H[\xi\, x]$$
as the local risk capital contribution, the following condition holds at optimality:
for each asset in the portfolio, the return contribution minus the total
return is proportional to the local risk capital contribution minus the total
risk capital.
5 Numerical portfolio optimization
A numerical procedure to solve (6) may be based on the dual representation:
using $\mathcal{A}_H[Y] = \inf\{E[YZ] : Z \in \mathcal{Z}\}$, the problem may be solved by the
following dual iterative procedure:
1. Set $\bar{\mathcal{Z}} = \emptyset$.
2. Outer problem. Solve
$$\begin{array}{ll}
\text{Maximize (in } x\text{):} & E[\xi]\, x\\
\text{subject to} & E[Y_x Z] \ge q \quad\text{for all } Z \in \bar{\mathcal{Z}}\\
& x^\top \mathbf{1} = 1\\
& x \ge 0.
\end{array}$$
3. Inner problem. With the incumbent solution $x$, solve $v = \inf\{E[Y_x Z] : Z \in \mathcal{Z}\}$.
If $v \ge q$, then stop. Otherwise add the minimizer $Z^* = \mathrm{argmin}\,\{E[Y_x Z] : Z \in \mathcal{Z}\}$
to the set $\bar{\mathcal{Z}}$ and go to 2.
Proposition 6. The dual iterative procedure stops only at optimal points.

Proof. Suppose that the procedure generates a sequence of solutions
$x_1, \ldots, x_n$ and dual variables $Z_1, \ldots, Z_n$ and then stops. Notice that at step
$n$ the outer problem solves the problem
$$\begin{array}{ll}
\text{Maximize (in } x\text{):} & E[\xi]\, x\\
\text{subject to} & \mathcal{A}_n[Y_x] := \inf\{E[Y_x Z] : Z \in \mathcal{Z}_n\} \ge q\\
& x^\top \mathbf{1} = 1\\
& x \ge 0,
\end{array} \qquad (10)$$
where $\mathcal{Z}_n$ is the convex hull of $Z_1, \ldots, Z_n$. Notice that $\mathcal{A}_n \ge \mathcal{A}_H$, i.e. the
constraint set of this outer problem contains the original constraint set. Since
the inner problem stopped, we know that $\mathcal{A}_H[Y_{x_n}] \ge q$, i.e. that $x_n$ is feasible
for the original outer problem. This proves that $x_n$ is a solution of the original
problem. $\Box$
Let us consider in detail the case of a finite probability space $\Omega = \{\omega_1, \ldots, \omega_S\}$.
Let us form the probability vector
$$p = \begin{pmatrix} P(\omega_1)\\ \vdots\\ P(\omega_S)\end{pmatrix}$$
and the $[S \times M]$ value matrix $\Xi$ with entries $\Xi_{s,m} = \xi^{(m)}(\omega_s)$.
The set $\mathcal{Z}$ is a set of $[1 \times S]$ vectors. The outer problem reads
$$\begin{array}{ll}
\text{Maximize:} & p^\top\, \Xi\, x\\
\text{such that} & z\; \mathrm{diag}(p)\, \Xi\, x \ge q \quad\text{for all } z \in \bar{\mathcal{Z}},\\
& x^\top \mathbf{1} = 1,\\
& x \ge 0.
\end{array} \qquad (11)$$
Consider the inner problem
$$\inf\{E[Y_x Z] : Z = h(U), \text{ where } U \text{ is uniformly } [0,1] \text{ distributed}\} \qquad (12)$$
for a nonincreasing $h$. We know from the proof of Proposition 1 that $E[Y_x Z]$
is minimized, over all $Z$ which have a given distribution, if $Y_x$ and $Z$ are
coupled in an antimonotone way, i.e. have the Fréchet lower bound copula
$C(p,q) = \max(p + q - 1, 0)$. For the discrete model, the portfolio value
$Y_x = \Xi\, x$ takes the value $y_s$, the $s$-th element of the column vector $\Xi\, x$,
with probability $p_s$, $s = 1, \ldots, S$. Suppose that the ordered values of $(y_s)$
are $(y_{[s:S]})$ and that $y_s = y_{[\sigma(s):S]}$ for some permutation $\sigma$. Let
$p_{(s)} = p_{\sigma^{-1}(s)}$ be the pertaining probabilities, let $\bar p_s = \sum_{i=1}^s p_{(i)}$ (with $\bar p_0 = 0$), and let
$$\bar z_s = \frac{1}{p_{(s)}} \int_{\bar p_{s-1}}^{\bar p_s} h(u)\, du, \qquad s = 1, \ldots, S.$$
Let finally $z_s = \bar z_{\sigma(s)}$. Then the minimal value of (12) is
$$\sum_{s=1}^S p_{(s)}\, y_{[s:S]}\, \bar z_s = \sum_{s=1}^S p_s\, y_s\, z_s$$
and the minimizer takes the value $z_s$ for the scenario $s$. Form the vector
$z = (z_1, \ldots, z_S)^\top$; then the constraint to be added for the outer problem is
$$u^\top x \ge q, \qquad\text{where}\qquad u^\top = z^\top\, \mathrm{diag}(p)\, \Xi.$$
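The following self-contained sketch puts the pieces together for the finite scenario case, with $H(p) = \min(p/\alpha, 1)$ (i.e. an $AV@R_\alpha$ constraint) and SciPy's linprog for the outer problem. The scenario model, the level $q$ and all identifiers are illustrative assumptions, not part of the paper:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(7)
S, M = 20_000, 4
p = np.full(S, 1.0 / S)                                        # scenario probabilities
Xi = 1.0 + rng.normal([0.06, 0.05, 0.04, 0.02], 0.15, size=(S, M))   # scenario value matrix

alpha, q = 0.1, 0.90            # AV@R level and acceptability level q
# h(u) = (1/alpha) on [0, alpha], 0 elsewhere: the density of H(p) = min(p/alpha, 1)

def inner_minimizer(x):
    """Solve (12) for the incumbent x: antimonotone coupling of Y_x with h(U)."""
    y = Xi @ x
    order = np.argsort(y)                                      # scenarios sorted by portfolio value
    w = p[order]
    pbar = np.concatenate(([0.0], np.cumsum(w)))
    # zbar_s = (1/p_(s)) * integral of h over (pbar_{s-1}, pbar_s]; h is piecewise constant here
    zbar = np.clip(alpha - pbar[:-1], 0.0, w) / (alpha * w)
    z = np.empty(S)
    z[order] = zbar                                            # per-scenario dual values
    return z, float(p @ (y * z))                               # minimizer and value inf E[Y_x Z]

def dual_iterative_procedure(max_iter=100, tol=1e-6):
    cuts = []                                                  # the set Z-bar of generated z vectors
    c = -(p @ Xi)                                              # maximize E[xi] x  <=>  minimize -p^T Xi x
    for _ in range(max_iter):
        A_ub = np.array([-((z * p) @ Xi) for z in cuts]) if cuts else None
        b_ub = -q * np.ones(len(cuts)) if cuts else None
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      A_eq=np.ones((1, M)), b_eq=[1.0], bounds=(0, None))
        x = res.x
        z, v = inner_minimizer(x)                              # inner problem
        if v >= q - tol:                                       # incumbent feasible: stop (Proposition 6)
            return x, v
        cuts.append(z)                                         # add the cut u^T x >= q, u^T = z^T diag(p) Xi
    return x, v

x_opt, acc = dual_iterative_procedure()
print(x_opt, acc)       # optimal weights and their acceptability A_H[xi x]
```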
Proposition 7. For a finite probability space, the dual iterative procedure
stops after finitely many steps at a solution.

Proof. The set $\mathcal{Z}$ is a polyhedral set of vectors in $\mathbb{R}^S$. This set has only
finitely many extremal points. The inner problem generates at each step a
new extremal point: assume that the procedure does not stop at step $n$.
Then $\inf\{E[Y_{x_n} z] : z \in \mathcal{Z}\} < \inf\{E[Y_{x_n} z] : z \in \{z_1, \ldots, z_n\}\}$ and therefore
$z_{n+1}$, the minimizer of $\inf\{E[Y_{x_n} z] : z \in \mathcal{Z}\}$, cannot be contained in the
convex hull of $z_1, \ldots, z_n$. Thus the procedure must stop after finitely many
steps. $\Box$
References
[1] Acerbi, C.: Spectral measures of risk: a coherent representation of subjective risk aversion. Journal of Banking and Finance 26, 1505–1518 (2002)
[2] Acerbi, C., Tasche, D.: Expected shortfall: a natural coherent alternative to value-at-risk. Economic Notes 31(2), 379–388 (2002)
[3] Artzner, P.: Application of coherent risk measures to capital requirements in insurance. North American Actuarial Journal 3, 11–25 (1999)
[4] Artzner, P., Delbaen, F., Eber, J.M., Heath, D.: Coherent measures of risk. Mathematical Finance 9, 203–228 (1999)
[5] Artzner, P., Delbaen, F., Eber, J.M., Heath, D.: Risk management and capital allocation with coherent measures of risk (2000). Working paper, http://www.math.ethz.ch/finance
[6] Choquet, G.: Theory of capacities. Annales de l'Institut Fourier 5, 131–295 (1954)
[7] De Giorgi, E.: Reward-risk portfolio selection and stochastic dominance. Journal of Banking & Finance 29(4), 895–926 (2005)
[8] Denneberg, D.: Distorted probabilities and insurance premiums. In: Proceedings of the 14th SOR, Ulm. Athenäum, Frankfurt (1989)
[9] Dentcheva, D., Ruszczyński, A.: Convexification of stochastic ordering. C. R. Acad. Bulgare Sci. 57(4), 11–16 (2004)
[10] Föllmer, H., Schied, A.: Stochastic Finance: An Introduction in Discrete Time. Walter de Gruyter, Berlin (2002)
[11] Hoeffding, W.: Maßstabinvariante Korrelationstheorie. Schriften Math. Inst. Univ. Berlin 5, 181–233 (1940)
[12] Hürlimann, W.: Distortion risk measures and economic capital. N. Am. Actuar. J. 8(1), 86–95 (2004)
[13] Jones, B.L., Zitikis, R.: Empirical estimation of risk measures and related quantities. N. Am. Actuar. J. 7(4), 44–54 (2003)
[14] Kusuoka, S.: On law invariant risk measures. Advances in Mathematical Economics 3, 83–95 (2001)
[15] Lehmann, E.: Some concepts of dependence. Ann. Math. Stat. 37, 1137–1153 (1966)
[16] Pflug, G.: Subdifferential representations of risk measures. Mathematical Programming (to appear) (2005)
[17] Rockafellar, R., Uryasev, S.: Optimization of conditional value-at-risk. Journal of Risk 2, 21–41 (2000)
[18] Rockafellar, R., Uryasev, S.: Conditional value-at-risk for general loss distributions. Journal of Banking and Finance 26(7), 1443–1471 (2002)
[19] Wang, S.S.: A class of distortion operators for financial and insurance risks. Journal of Risk and Insurance 67, 15–36 (2000)
[20] Yaari, M.: The dual theory of choice under risk. Econometrica 55, 95–115 (1987)