
An Introductory Guide in the Construction of Actuarial

Models:
A Preparation for the Actuarial Exam C/4
ANSWER KEY
Marcel B. Finan
Arkansas Tech University
© All Rights Reserved
November 5, 2013

This answer key is meant to help the reader check his/her answers against mine. I am not in favor of providing complete and detailed solutions to every single problem in the book. The worked-out examples in the book are enough to provide the user with the skills needed to tackle the practice problems.
This manuscript should not be made public or shared with others. Best wishes.
Marcel B. Finan
Russellville, Arkansas
July 2013

Section 1
1.1 Deterministic
1.2 Stochastic
1.3 Stochastic
1.4 Stochastic
1.5 Mostly stochastic

Section 2
2.1 (a) A^c = B, B^c = A, and C^c = {1, 3, 4, 5, 6}
(b) A ∪ B = {1, 2, 3, 4, 5, 6}, A ∪ C = {2, 4, 6}, B ∪ C = {1, 2, 3, 5}
(c) A ∩ B = ∅, A ∩ C = {2}, B ∩ C = ∅
(d) A and B are mutually exclusive, as are B and C
2.2 Note that Pr(E) ≥ 0 for any event E. Moreover, if S is the sample space then

Pr(S) = Σ_{i=1}^∞ Pr(O_i) = (1/2) Σ_{i=0}^∞ (1/2)^i = (1/2) · 1/(1 − 1/2) = 1.

Now, if E_1, E_2, … is a sequence of mutually exclusive events then

Pr(∪_{n=1}^∞ E_n) = Σ_{n=1}^∞ Σ_j Pr(O_{n_j}) = Σ_{n=1}^∞ Pr(E_n),

where E_n = {O_{n_1}, O_{n_2}, …}. Thus, Pr defines a probability function.
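As a quick numerical check of the series above (a sketch assuming, as the reconstructed solution suggests, Pr(O_i) = (1/2)^i):

```python
# Partial sums of Pr(O_i) = (1/2)**i, i = 1, 2, ..., should approach 1,
# confirming that Pr is a valid probability function.
total = sum(0.5 ** i for i in range(1, 100))
print(total)
```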
2.3 0.5
2.4 0.56
2.5 0.66
2.6 0.52
2.7 0.05
2.8 0.6
2.9 0.48
2.10 0.04
2.11 [Tree diagram omitted.] The probability that the first ball is red and the second ball is blue is Pr(RB) = 0.3.
2.12 [Tree diagram omitted.] The probability that the first ball is red and the second ball is blue is Pr(RB) = 6/25.
2.13 0.173
2.14 0.467
2.15 0.1584
2.16 0.0141
2.17 0.29
2.18 0.42
2.19 0.22
2.20 0.657

Section 3
3.1 (a) Discrete (b) Discrete (c) Continuous (d) Mixed.
3.2
x      2     3     4     5     6     7     8     9     10    11    12
p(x)   1/36  2/36  3/36  4/36  5/36  6/36  5/36  4/36  3/36  2/36  1/36
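The table above can be reproduced by enumerating the 36 equally likely outcomes (a verification sketch, not part of the original solution):

```python
from collections import Counter
from fractions import Fraction

# Tabulate the distribution of the sum of two fair dice.
counts = Counter(d1 + d2 for d1 in range(1, 7) for d2 in range(1, 7))
pmf = {s: Fraction(c, 36) for s, c in sorted(counts.items())}
for s, p in pmf.items():
    print(s, p)
```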
3.3 p(x) = p for x = 1; p(x) = 1 − p for x = 0; and p(x) = 0 for x ≠ 0, 1.
3.4 1/9
3.5 0.469
3.6 0.132
3.7 0.3
3.8 = 1.
3.9 F(x) = 0 for x < 1; 0.25 for 1 ≤ x < 2; 0.75 for 2 ≤ x < 3; 0.875 for 3 ≤ x < 4; and 1 for x ≥ 4.

3.10 (a) F(x) = 0 for x < 0 and F(x) = 1 − 1/(1 + x)^(a−1) for x ≥ 0.
(b) F(x) = 0 for x < 0 and F(x) = 1 − ke^(−kx) for x ≥ 0.

3.11
F(n) = P(X ≤ n) = Σ_{k=0}^n P(X = k) = Σ_{k=0}^n (1/3)(2/3)^k = (1/3) · (1 − (2/3)^(n+1))/(1 − 2/3) = 1 − (2/3)^(n+1).
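The closed form can be checked against the partial sums directly (a verification sketch):

```python
# Compare F(n) = 1 - (2/3)**(n+1) with partial sums of the pmf
# P(X = k) = (1/3)(2/3)**k for k = 0, 1, ..., n.
for n in range(10):
    partial = sum((1/3) * (2/3) ** k for k in range(n + 1))
    closed = 1 - (2/3) ** (n + 1)
    assert abs(partial - closed) < 1e-12
print("closed form matches partial sums")
```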

3.12 (a) 0.135 (b) 0.233 (c)
F(x) = 1 − e^(−x/5) for x ≥ 0 and F(x) = 0 for x < 0.
3.13 f(x) = F′(x) = e^x/(1 + e^x)^2
3.14 (a) We have S(0) = 1, S′(x) = −(1/20)(100 − x)^(−1/2) ≤ 0, S(x) is right continuous, and S(100) = 0. Thus, S satisfies the properties of a survival function.
(b) F(x) = 1 − S(x) = 1 − (1/10)(100 − x)^(1/2).
(c) 0.092

3.15 0.149
3.16 F(x) = 1 − S(x) = x^2/100, 0 ≤ x ≤ 10

3.17 (a) 0.3 (b) 0.3


3.18 h(x) = −S′(x)/S(x) = (1/2)(1 − x)^(−1)

3.19 S(x) = e^(−x), F(x) = 1 − e^(−x), and f(x) = F′(x) = e^(−x).

3.20 1/480

Section 4
4.1 (b) np(1 − p)
4.2 (b)
4.3 (c) (1 − p)p^2
4.5 (c) 1/2

4.6 (a)
E(X) = (1/(Γ(α)θ^α)) ∫_0^∞ x · x^(α−1) e^(−x/θ) dx
     = (1/(Γ(α)θ^α)) ∫_0^∞ x^α e^(−x/θ) dx
     = Γ(α + 1)θ^(α+1)/(Γ(α)θ^α)
     = αθ.
(b)
E(X^2) = (1/(Γ(α)θ^α)) ∫_0^∞ x^2 · x^(α−1) e^(−x/θ) dx
       = (1/(Γ(α)θ^α)) ∫_0^∞ x^(α+1) e^(−x/θ) dx
       = (θ^2 Γ(α + 2)/Γ(α)) ∫_0^∞ x^(α+1) e^(−x/θ)/(Γ(α + 2)θ^(α+2)) dx
       = θ^2 Γ(α + 2)/Γ(α),
where the last integral is the integral of the pdf of a Gamma random variable with parameters (α + 2, θ). Thus,
E(X^2) = θ^2 Γ(α + 2)/Γ(α) = θ^2 (α + 1)αΓ(α)/Γ(α) = θ^2 α(α + 1).
Finally,
Var(X) = E(X^2) − (E(X))^2 = θ^2 α(α + 1) − α^2 θ^2 = αθ^2.
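The Gamma moments above can be verified numerically (a sketch using α = 3, θ = 2 as illustrative values):

```python
import math

# Numerically verify E(X) = a*t and Var(X) = a*t**2 for the Gamma pdf
# f(x) = x**(a-1) * exp(-x/t) / (Gamma(a) * t**a), with a = 3, t = 2.
a, t = 3.0, 2.0

def f(x):
    return x ** (a - 1) * math.exp(-x / t) / (math.gamma(a) * t ** a)

# crude midpoint integration on [0, 200]; the tail beyond is negligible
h = 0.001
xs = [h * (i + 0.5) for i in range(int(200 / h))]
m1 = sum(x * f(x) * h for x in xs)
m2 = sum(x * x * f(x) * h for x in xs)
print(m1, m2 - m1 ** 2)  # close to a*t = 6 and a*t**2 = 12
```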

4.7 1
4.8 1,417,708,752
4.9 730,182,499.20

4.10 6√3
4.11 9
4.12 0.3284
4.13 We have
μ′_n = ∫_0^∞ x^n f(x) dx = A ∫_0^∞ x^(B+n) e^(−Cx) dx
     = [−A x^(B+n) e^(−Cx)/C]_0^∞ + ((B + n)/C) ∫_0^∞ A x^(B+n−1) e^(−Cx) dx
     = ((B + n)/C) E(X^(n−1)).
4.14 μ = (B + 1)/C and μ′_2 = (B + 1)(B + 2)/C^2
4.15 σ^2 = (B + 1)/C^2
4.16 3(B + 3)/(B + 1)
4.17 0.5

4.18 (a)
F(x) = ∫_0^x 0.005t dt = 0.0025x^2 for 0 ≤ x ≤ 20, with F(x) = 0 for x < 0 and F(x) = 1 for x > 20.
(b) The mean is 40/3 and the variance is 200/9
(c) 0.354
4.19 16
4.20 (a) E(X^k) = ∫_1^∞ (ax^k/x^(a+1)) dx = a/(a − k), 0 < k < a. (b) a/((a − 1)^2(a − 2))
4.21 1.596
4.22 The mean is θ/(α − 1) and the variance is αθ^2/((α − 1)^2(α − 2))
4.23
4.24 2
4.25 1.7

Section 5
5.1
Amount of loss      750  500  1200
Insurance payment   250    0   700
5.2
Pr(X ≤ 45) = 4/12 = 1/3
Pr(X ≤ 67) = 5/12
Pr(X ≤ 84) = 7/12
Pr(X ≤ 93) = 8/12
Pr(X ≤ 100) = 11/12
Pr(X ≤ 102) = 1.

5.3 μ′_1 = 75.8333 and μ′_2 = 6312.8333
5.4 0.96982
5.5 (1/2)(100 − d) for 0 < d < 100 and 0 otherwise.
5.6 108
5.7
5.8 308,8728
5.9 e^(−d)
5.10 (1/2)e^(−d)(2 − e^(−d))
5.11 1/160
5.12 94.84
5.13 88.4
5.14

+1

x+1
(+1)

5.15 (a)
S(x) = Pr(X > x) = ∫_x^∞ (1 + 2t^2)e^(−2t) dt = [−(1 + t + t^2)e^(−2t)]_x^∞ = (1 + x + x^2)e^(−2x)
for x ≥ 0 and 0 otherwise.
(b) (1 + x + x^2/2)/(1 + x + x^2)
5.16 We have
S_{Y^P}(y) = 1 − F_{Y^P}(y) = 1 − {[1 − S_X(y + d)] − [1 − S_X(d)]}/S_X(d) = S_X(y + d)/S_X(d).

5.17 (a)
f_{Y^P}(y) = f(y + 100)/(1 − F(100)) = (0.001 + 0.00002(y + 100))e^(−0.005(y+100))/(1.4e^(−0.5))
           = (0.003 + 0.00002y)e^(−0.005y)/1.4
           = (3/1400 + y/70000)e^(−0.005y), y > 0,
and 0 otherwise.
(b) E(Y^P) = 2200/7 and Var(Y^P) = 3560000/49

5.18 E[(X − 10)+] = and Var[(X − 10)+] = 425/36
5.19 d = 6.
5.20 175
5.21 1875
5.22 3.43

5.23 6.259

Section 6
6.1 We can either say that 1120 is the twentieth percentile or that 1120 is the one-fifth quantile
6.2 3
6.3 2
6.4 The median is 1 and the 70th percentile is 2
6.5 The median is M = 0.3466. This means that half the people get in line less than 0.3466 minutes (about 21 seconds) after the previous person, while half arrive more than 0.3466 minutes later.
6.6 0.693
6.7 998.72
6.8 3 ln 2
6.9 The median is 0.8409
6.10 3659

6.11 a + 2 ln 2
6.12 ln (1 p)
6.13 ln [2(1 p)]
6.14 2
6.15 72.97
6.16 0.4472

6.17 6299.61
6.18 2.3811
6.19 50
6.20 2.71

Section 7
7.1 0.2119
7.2 0.9876
7.3 0.0094
7.4 0.692
7.5 0.1367
7.6 0.0088
7.7 0
7.8 23
7.9 0.0162
7.10 6,342,637.5
7.11 0.8185
7.12 16
7.13 0.1587
7.14 0.9887

7.15 0.0244
7.16 0.9985
7.17 0.1056
7.18 0.8413
7.19 0.8201
7.20 0.224

Section 8
8.1 E(X) =

2
2

and V ar(X) =

1
2

8.2 A normal random variable with mean μ_1 + μ_2 and variance σ_1^2 + σ_2^2


8.3 0.70
8.4 41.9
8.5 e^(13t^2 + 4t)
8.6 4

8.7 28
8.8 2
8.9 5000
8.10 10560
8.11 (tp + 1 − p)^n
8.12 tp/[1 − t(1 − p)], provided that |t| < (1 − p)^(−1)

8.13 True
8.14 t^a P_X(t^b)
8.15 E(X) = 1/p and Var(X) = (1 − p)/p^2

8.16 0.4t2 + 0.2t3 + 0.2t5 + 0.2t8


8.17 (1/3)t^2 + (1/6)t^3 + (1/8)t + (3/8)t^(7/2)
8.18

t3
2t

8.19
x      0      1      2      3     4
p(x)   16/81  32/81  24/81  8/81  1/81

8.20 E(X) = 3.5 and Var(X) = 6.25

Section 9
9.1 Let m > 0. Then there is M > 0 such that e^(bx) ≥ x^(m+2) for x ≥ M, which is equivalent to saying that x^m e^(−bx) ≤ x^(−2) for x ≥ M. By the comparison test for improper integrals we find that
∫_0^∞ x^m e^(−bx) dx < ∞.
Since E(X^k) is an integral of the above form, we conclude that E(X^k) < ∞ for all k > 0. That is, the distribution of X is light-tailed.
9.2 Since X is heavy-tailed, we have E(X^k) = ∫_0^∞ x^k f_X(x) dx = ∞ for some k > 0. Now, let t > 0 and let N be large enough so that e^(tx) ≥ x^k for all x ≥ N. Hence,
∫_0^N x^k f_X(x) dx + ∫_N^∞ e^(tx) f_X(x) dx ≥ ∫_0^N x^k f_X(x) dx + ∫_N^∞ x^k f_X(x) dx = ∫_0^∞ x^k f_X(x) dx = ∞.
Since ∫_0^N x^k f_X(x) dx < ∞, we conclude that ∫_N^∞ e^(tx) f_X(x) dx = ∞.
9.3 We have
M_X(t) = ∫_0^∞ e^(tx) f_X(x) dx ≥ ∫_N^∞ e^(tx) f_X(x) dx = ∞.
9.4 From Table C, we have
E(X^k) = θ^k Γ(α + k)/Γ(α)
for all k > 0. Hence, the Gamma distribution is light-tailed.

9.5 From Table C/Exam 4, we have
E(X^k) = θ^k Γ(1 − k/τ)
provided that k < τ. Since E(X^k) exists only for k < τ, the distribution is heavy-tailed.
9.6 The Pareto distribution has a heavier tail than the Gamma distribution.
9.7 The Weibull distribution has a lighter tail than the inverse Weibull distribution.
9.8 X is more heavy-tailed than Y
9.9 X and Y have similar or proportional tails.
9.10 X is more light-tailed than Y
9.11 X has a heavier tail than Y
9.12
Distribution    Heavy-Tail  Light-Tail
Weibull                     X
Inverse Pareto  X
Normal                      X
Loglogistic     X

9.13
Distribution      Heavy-Tail  Light-Tail
Paralogistic      X
Lognormal                     X
Inverse Gamma     X
Inverse Gaussian              X


9.14
Distribution          Heavy-Tail  Light-Tail
Inverse Paralogistic  X
Inverse Exponential   X

9.15 lim_{x→∞} S_X(x)/S_Y(x)
9.16 lim_{x→∞} S_X(x)/S_Y(x)
9.17 lim_{x→∞} S_X(x)/S_Y(x)

9.18 c > 0
9.19 X has a heavier tail than Y
9.20 The tail of X is heavier than that of Y which in turn is heavier than the tail of Z

Section 10
10.1 (d/dx)[f(x + y)/f(x)] = −(y(α − 1)/x^2)(1 + y/x)^(α−2) e^(−y/θ) < 0 for α > 1.

10.2 For α = 1, the Gamma distribution is just the exponential distribution, which has a constant hazard rate
10.3 We have
h(x) = f(x)/S(x) = τx^(τ−1)/θ^τ.
Hence,
h′(x) = τ(τ − 1)x^(τ−2)/θ^τ.
Thus, h(x) is increasing (light-tailed distribution) for τ > 1 and decreasing (heavy-tailed distribution) for 0 < τ < 1

10.4 X is light-tailed


10.5 We have
H(x) = (x + θ)^(α+1)/(x + y + θ)^(α+1) = [(x + θ)/(x + y + θ)]^(α+1).
Hence,
H′(x) = (α + 1)[(x + θ)/(x + y + θ)]^α · y/(x + y + θ)^2 > 0.
Thus, H(x) is increasing and by Theorem 10.1, h(x) is decreasing, which shows that the Pareto distribution is heavy-tailed


10.6 We have
lim_{x→∞} h(x) = lim_{x→∞} f(x)/S(x) = lim_{x→∞} −f′(x)/f(x)
             = lim_{x→∞} −(d/dx)[ln f(x)] = lim_{x→∞} −(d/dx)[(α − 1)ln x − x/θ]
             = lim_{x→∞} [1/θ − (α − 1)/x] = 1/θ.

10.7 We have
lim_{x→∞} e(x) = lim_{x→∞} (∫_x^∞ S_X(t) dt)/S_X(x) = lim_{x→∞} S_X(x)/f_X(x) = lim_{x→∞} 1/h(x).

10.8
10.9 Since 0 < τ < 1, the hazard rate function is decreasing and hence e(x) is increasing. The result then follows from the values of e(0) and e(∞).
10.10 Since τ > 1, the hazard rate function is increasing and hence e(x) is decreasing. The result then follows from the values of e(0) and e(∞)
10.11 For this distribution we have f_X(x) = αθ^α/(x + θ)^(α+1) and S_X(x) = θ^α/(x + θ)^α. Hence,
h(x) = f_X(x)/S_X(x) = α/(x + θ) and lim_{x→∞} h(x) = 0.
Thus, lim_{x→∞} e(x) = lim_{x→∞} 1/h(x) = ∞. This shows that e(x) is increasing and hence the distribution is heavy-tailed

10.12 lim_{x→∞} e(x) = 1/2
10.13 (a) S(x) = 1/(x + 1)^2, f(x) = 2/(x + 1)^3, and h(x) = 2/(x + 1).
(b) E(X) = 1 and E(X^2) = ∞ so that X is heavy-tailed.

10.14 Since the hazard rate is nonincreasing, X is heavy-tailed

10.15 Since e(x) is nondecreasing, X is heavy-tailed
10.16 S(x + y)/S(x) is nonincreasing so that e(x) is nonincreasing and therefore X is light-tailed

10.17 f(x + y)/f(x) is nondecreasing so that h(x) is nonincreasing. Thus, X is heavy-tailed

Section 11
11.1 (a) S(x) = ∫_x^∞ 2te^(−t²) dt = [−e^(−t²)]_x^∞ = e^(−x²) for x > 0 and 0 otherwise.
(b) f_e(x) = S(x)/E(X)
11.2 h_e(x) = 1/e(x) = (2x + 3)/(2x + 5)
11.3 S(x) = [e(0)/e(x)] exp(−∫_0^x dt/e(t)) = (1 + x)e^(−x−x²/2), x > 0
11.4 S_e(x) = exp(−∫_0^x dt/e(t)) = e^(−x−x²/2), x > 0

11.5 0.6559
11.6 (a) E(X) = 3 and E(X 2 ) =
11.7 S(x) =

10
10+9x

24
5

(b)

4
5

 109

11.8

Section 12
12.1 By (P3), we have ρ(0) = ρ(c · 0) = cρ(0). Choosing c ≠ 1, we conclude that ρ(0) = 0. Since there are no losses, no capital is required to support the risk

12.2 This follows from (P3) and (P4)
12.3 Check the properties (P1)-(P4)
12.4 (b)
12.5 (a)
12.6 (a) and (c)
12.7 All three are correct
12.8 Simple algebra
12.9 0
12.10 We have
ρ(L + α) = E(L + α) + β√Var(L + α) = E(L) + α + β√Var(L) = ρ(L) + α
and
ρ(αL) = E(αL) + β√Var(αL) = αE(L) + αβ√Var(L) = αρ(L),
where α > 0.

Section 13
13.1 a(1 p) + pb
13.2 1.8974
13.3 2.0227
13.4 60

1

13.5 (1 p)
13.6 40
13.7 10,000
13.8 347.21
13.9 5
13.10 VaR_0.96 = 400. This says that there is a 4% chance the losses will exceed 400

Section 14
14.1 (a) e(x) = (1/2)(b − x) (b) TVaR_p(L) = (1/2)[(a + b) + p(b − a)]
14.2 (a) π_0.90 = 1.8974 and e(1.8974) = 0.0051 (b) TVaR_0.90(L) = 1.9025
14.3 π_0.85 = 7 and TVaR_0.85(L) = 10.25
14.4 (a) 1000 (b) e(100) = 220 (c) π_0.95 = 647.55 (d) TVaR_0.95(L) = 867.55
14.5 TVaR0.75 (L) = 756
14.6 1.3
14.7 1.651
14.8 2.02
14.9 0.82
14.10 100,000
14.11 120.62

Section 15

15.1 We have
F_cX(x) = Pr(cX ≤ x) = Pr(X ≤ x/c) = F_X(x/c) = 1 − e^(−x/(cθ)).
This is an exponential distribution with parameter cθ
15.2 F_Y(y) = 1 − e^(−(y/c)^2)
15.3 FY (y) =

yc
c

15.4 Let Y = cX, c > 0. We have
F_Y(y) = Pr(Y ≤ y) = Pr(X ≤ y/c) = e^(−(cθ/y)^τ).
This is a Frechet distribution with parameters cθ and τ.
15.5 Let Y = cX, c > 0. We have
F_Y(y) = Pr(Y ≤ y) = Pr(X ≤ y/c) = 1 − [1 + (y/(cθ))^γ]^(−α).
This is a Burr distribution with parameters α, cθ, and γ
15.6 (a) (b) (c) (d)
15.7 (1) is the only correct answer
15.8 Let Y = cX. Then
F_Y(y) = Pr(Y ≤ y) = Pr(X ≤ y/c) = Φ((ln(y/c) − μ)/σ) = Φ((ln y − (μ + ln c))/σ).
This is a lognormal distribution with parameters μ + ln c and σ, and consequently the lognormal distribution has no scale parameter
15.9 The Gamma distribution with parameters α and θ has cdf F_X(x) = (1/Γ(α)) ∫_0^(x/θ) t^(α−1) e^(−t) dt. Let Y = cX. Then
F_Y(y) = Pr(Y ≤ y) = Pr(X ≤ y/c) = (1/Γ(α)) ∫_0^(y/(cθ)) t^(α−1) e^(−t) dt.
This is a Gamma distribution with parameters α and cθ


15.10 100
x

15.11 Letting α = 1, we obtain f_X(x) = (1/θ)e^(−x/θ), which is the pdf of an exponential distribution

15.12 0.0295

Section 16
16.1 0.0949
16.2 100
16.3 E(Y1 ) = 2 and E(Y2 ) = 6
16.4 E(X) = 1300 and Var(X) = 6935000
16.5 0.0568
16.6 35


16.7 F_X(x) = 1 − a_1(θ_1/(x + θ_1))^(α_1) − a_2(θ_2/(x + θ_2))^(α_2) − ⋯ − a_N(θ_N/(x + θ_N))^(α_N)
f_X(x) = a_1 α_1 θ_1^(α_1)/(x + θ_1)^(α_1+1) + a_2 α_2 θ_2^(α_2)/(x + θ_2)^(α_2+1) + ⋯ + a_N α_N θ_N^(α_N)/(x + θ_N)^(α_N+1)
h_X(x) = [a_1 α_1 θ_1^(α_1)/(x + θ_1)^(α_1+1) + ⋯ + a_N α_N θ_N^(α_N)/(x + θ_N)^(α_N+1)] / [a_1(θ_1/(x + θ_1))^(α_1) + ⋯ + a_N(θ_N/(x + θ_N))^(α_N)]
where a_j > 0 and Σ_{i=1}^N a_i = 1.

16.8 15
16.9 400
16.10 0.146
16.11 0.7566

Section 17
17.1 E(X) = 2478/13, Var(X) = 2283152/169, and the mode is 10
17.2 E(X ∧ 105) = 1351/13 and e_X(105) = 1127/9

17.3 0.1659
17.4
x       94    104   134   180   210   350   524
p_X(x)  1/13  1/13  3/13  2/13  4/13  1/13  1/13
The cdf is defined by
F_X(x) = (1/13) × (number of elements in the sample that are ≤ x)

17.5 0.61
17.6 17,566,092.92
17.7
f_X(x) = (1/13)(1/8) = 1/104,  90 ≤ x ≤ 98
       = (3/13)(1/8) = 3/104,  100 ≤ x ≤ 108
       = (2/13)(1/8) = 2/104,  130 ≤ x ≤ 138
       = (4/13)(1/8) = 4/104,  176 ≤ x ≤ 184
       = (1/13)(1/8) = 1/104,  206 ≤ x ≤ 214
       = (1/13)(1/8) = 1/104,  346 ≤ x ≤ 354
       = (1/13)(1/8) = 1/104,  520 ≤ x ≤ 528
and 0 otherwise.


Section 18
18.1 0.75
18.2 We have Y = 0.5X. Thus,
f_Y(y) = 2f_X(2y) = 3y^2
for 0 < y < 1 and 0 otherwise. Thus, F_Y(y) = y^3 for 0 ≤ y ≤ 1 and F_Y(y) = 1 for y > 1. Also, S_Y(y) = 1 − y^3 for 0 ≤ y < 1 and 0 for y ≥ 1
18.3 E(Y) = 0.75E(X) = 0.75(2000)/(3 − 1) = 750 and σ_Y = 1299.04

18.4 We have
F_Y(y) = F_X(y/θ) = 1 − (1 + y/θ)^(−α) = 1 − (θ/(y + θ))^α.
This is the cdf of a Pareto distribution with parameters α and θ. The pdf is
f_Y(y) = αθ^α/(y + θ)^(α+1).

18.5 We have
F_Y(y) = Φ((ln y − μ)/σ).
Thus,
F_Z(z) = F_Y(z/θ) = Φ((ln z − (μ + ln θ))/σ).
Hence, Z is a lognormal distribution with parameters μ + ln θ and σ.


18.6 The cdf of X is F_X(x) = ∫_1^x 3t^(−4) dt = 1 − x^(−3) and Pr(Y > 2.2) = 1 − (1 − 2^(−3)) = 1/8 = 0.125


18.7 We have
F_W(w) = F_Z(w/(1 + r)) = (1/2) · w^2/(w^2 + 1000(1 + r)^2) + (1/2) · w/(w + 1000(1 + r)).
Thus, W is an equal mixture of a loglogistic distribution with parameters γ = 2 and θ = 10√10(1 + r) and a Pareto distribution with parameters α = 1 and θ = 1000(1 + r)

Section 19
19.1 In the transformed case, we have
F_Y(y) = 1 − e^(−(y/θ)^τ) and f_Y(y) = (τ(y/θ)^τ/y) e^(−(y/θ)^τ).
In the inverse transformed case, we have
F_Y(y) = e^(−(θ/y)^τ) and f_Y(y) = (τ(θ/y)^τ/y) e^(−(θ/y)^τ).
In the inverse case, we have
F_Y(y) = e^(−θ/y) and f_Y(y) = (θ/y^2)e^(−θ/y).
19.2 We have
F_Y(y) = 1 − F_X(y^(−1)) = (θ/(y^(−1) + θ))^α = (y/(y + θ^(−1)))^α.
Y has the inverse Pareto distribution with parameters τ = α and θ^(−1)
Y has the inverse Pareto distribution with parameters and


19.3 fY (y) = y 2 fX (y 1 ) =
19.4 fY (y) =

1
f (y 1 )
y2 X

1
y2

2
y

=
1
y

1 y 1 e
y2
()

19.5 fY (y) = y 1 fX (y ) =

y 1
b

2
y3

y
y+

for y > 1 and 0 otherwise

and 0 otherwise
1

for 0 y b and 0 otherwise

19.6 Y has an exponential distribution with parameter


1


19.7 We have
F_Y(y) = Φ((ln y − μ)/σ)
and
f_Y(y) = (1/(σy)) f_Z((ln y − μ)/σ) = (1/(σy√(2π))) e^(−(1/2)((ln y − μ)/σ)^2),
where Z is the standard normal distribution


19.8 f_Y(y) = (1/y) f_X(ln y) = 1/(2by) for 1 < y < e^b and 0 otherwise
19.9 We have
f_Y(y) = (1/y) f_X(ln y) = (1/(θy)) e^(−(ln y)/θ) = (1/θ) y^(−(1/θ + 1))
for y > 1 and 0 otherwise


19.10 0.25

Section 20
20.1 Var(X) =
20.2 fX (x) =

52
b2
12(1)2 (2)

4
(4+x)2

for x > 0 and 0 otherwise

20.3 1.7975
20.4 E() = = E(X)
20.5 F_X(x) = x/(1 + x)
20.6 0.6094
20.7 14
20.8 0.61
20.9 0.75


Section 21
21.1 SX (x) = M (x) = (1 + x)
21.2 SX (x) =

x
,
(1+x) ln (1+x)

x>0

21.3 M (x) = e , x 0
21.4 a(x) = 1
x 1

21.5 S_X(x) = M_Λ(−(e^x − 1)) = e^(−(e^x − 1))
21.6 S_X(x) = (1/(10x))(e^(−x) − e^(−11x))
21.7 S_X(x) = M_Λ(−A(x)) = (1 + θx^γ)^(−1)
21.8 S_{X|Λ}(x|λ) = e^(−λ(√(1+x) − 1))
21.9 S_X(x) = M_Λ(−A(x)) = (1 + x)^(−α)
21.10 f_X(x) = −S′_X(x) = −a(x)M′_Λ(−A(x))
21.11 h_X(x) = f_X(x)/S_X(x) = −a(x)M′_Λ(−A(x))/M_Λ(−A(x))

Section 22
22.1

f (x) =

22.2 0.9252
22.3 3 + ln 5
22.4 5.61

1
,

(1 )ex ,

0<x<c
x > c.

22.5

f (x) =

1
,
1000+

0 < x 1000
x > 1000

1000
x
1
e ,
1000+

and 0 otherwise.
22.6 461.78

Section 23
23.1 We have

ln 1 +
lim ln w1 = lim

( + 1 )1
2


1
1+
( 2
= lim
( + 1/2)2

2

1

1
= lim 1 +

+1

2
=.
Let

+

(/x)
.
w2 = 1 +

We have

(/x)
lim ln w1 = lim ( + ) ln 1 +

i1
h

( 2 )(/x)
1 + (/x)

= lim

( + )2

1   
(/x)

2
= lim 1 +
1+

 

=
.
x


23.2 For large , Stirlings formula gives


1

() e 2 (2) 2 .

1

Also, we let = so that = .


Using this and Stirlings formula in the pdf of a transformed beta distribution, we find
fX (x) =

( + )x 1
()( ) (1 + x )+
1

e ( + )+ 2 (2) 2 x 1
1

()e ( ) 2 (2) 2 (1 + x )+
+ 12 1
x
e 1 +
=

(
+)

(
+)
()

x
(1 + x )+
+ 12 1
x
e 1 +
=
h
i
+
() 1 + (/x)

Let

+ 12
w1 = 1 +
.

From the previous problem, lim w1 = e . Now, let



+
(/x)
w2 = 1 +
.

Then

lim w2 = e( x ) .

Hence,

lim fX (x) =

()x+1 e( x )
which is the pdf of an inverse transformed Gamma distribution

23.3 Let β = τθ. Then
f_{inverse Pareto}(x) = τθx^(τ−1)/(x + θ)^(τ+1) = (β/x^2)(1 + β/(τx))^(−(τ+1)) → (β/x^2)e^(−β/x)  as τ → ∞,
which is the pdf of an inverse exponential distribution with parameter β

Section 24
24.1 The pdf of the Gamma distribution can be written as
f(x, θ) = (x^(α−1)/Γ(α)) θ^(−α) e^(−x/θ).
We have q(θ) = θ^α, p(x) = x^(α−1)/Γ(α), and r(θ) = −1/θ
24.2 The pdf of X is
f(x, λ) = e^(−λ)λ^x/x! = e^(−λ)e^(x ln λ)/x!.
Thus, p(x) = 1/x!, q(λ) = e^λ, and r(λ) = ln λ
24.3 The pdf of X is
f(x, p) = C(m, x)p^x(1 − p)^(m−x) = C(m, x)(p/(1 − p))^x(1 − p)^m = C(m, x)e^(x ln(p/(1−p)))/(1 − p)^(−m).
Thus, p(x) = C(m, x), q(p) = (1 − p)^(−m), and r(p) = ln(p/(1 − p))
24.4 E(X) = λ and Var(X) = λ
24.5 E(X) = mp and Var(X) = mp(1 − p)

Section 25
25.1 0.91873
25.2 5e5
25.3 0.1412
25.4 5 cars per week
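The answers in this section follow from the Poisson pmf; for example, 25.2's value 5e^(−5) is the probability of exactly one event when λ = 5 (a verification sketch; the underlying problem statement is not reproduced here):

```python
import math

# Poisson pmf; with lam = 5, P(X = 1) = 5 * e**(-5).
def poisson_pmf(k, lam):
    return lam ** k * math.exp(-lam) / math.factorial(k)

print(poisson_pmf(1, 5))
```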

Section 26


26.1 For the Poisson distribution the variance is equal to the mean. For the negative binomial
and geometric the variance exceeds the mean. So the answer is (d)
26.2 0.75
26.3 E(N ) = 3 and Var(N ) = 12
26.4 E(N ) = 6 and Var(N ) = 18
26.5 192
26.6 r = 2 and = 4
26.7 P_N(z) = [1 − β(z − 1)]^(−1)
26.8 CV(N) = √((1 + β)/(rβ))

Section 27
27.1 The Poisson distribution has a variance equal to the mean. The negative binomial and geometric distributions have a variance exceeding the mean. The binomial distribution has a variance less than the mean. Thus, the answer is (a)
27.2 38.34
27.3 E(N ) = 1.4 and Var(N ) = 0.42
27.4 0.0057
27.5 6.2784
27.6 0.172
27.7
m      0       1       2       3       4
p_m    0.1074  0.2684  0.3020  0.2013  0.0881
F(m)   0.1074  0.3758  0.6778  0.8791  0.9672
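The entries above are consistent with a binomial distribution with m = 10 and q = 0.2; these parameters are inferred from the values shown, since the problem statement is not reproduced here (a verification sketch):

```python
from math import comb

# Binomial(10, 0.2) pmf and cdf for k = 0..4, matching the table.
m, q = 10, 0.2
pmf = [comb(m, k) * q ** k * (1 - q) ** (m - k) for k in range(5)]
cdf = [sum(pmf[: k + 1]) for k in range(5)]
print([round(p, 4) for p in pmf])
print([round(F, 4) for F in cdf])
```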

27.8 N1 represents the number of successes in m1 independent trials, each of which results in a
success with probability q. Similarly, N2 represents the number of successes in m2 independent
trials, each of which results in a success with probability q. Hence, as N1 and N2 are assumed to be
independent, it follows that N1 + N2 represents the number of successes in m1 + m2 independent
trials, each of which results in a success with probability q. So N1 +N2 is a binomial random variable
with parameters (m1 + m2 , q)

Section 28
28.1 For the given recursive equation, we find a = −1/3 < 0 and b = 4. Thus, N is a binomial distribution. From −q/(1 − q) = −1/3 we find q = 1/4. From (m + 1)q/(1 − q) = 4 we find m = 11
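The fitted binomial(11, 1/4) indeed satisfies the (a, b, 0) recursion p_k/p_{k−1} = a + b/k with a = −1/3 and b = 4 (a verification sketch):

```python
from math import comb

# Check the (a, b, 0) recursion for the binomial(m = 11, q = 1/4) pmf.
m, q = 11, 0.25
a = -q / (1 - q)             # -1/3
b = (m + 1) * q / (1 - q)    # 4
pmf = [comb(m, k) * q ** k * (1 - q) ** (m - k) for k in range(m + 1)]
for k in range(1, m + 1):
    assert abs(pmf[k] / pmf[k - 1] - (a + b / k)) < 1e-12
print("recursion holds with a = -1/3, b = 4")
```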
28.2 E(N ) = 2.7 and Var(N ) = 2.065
28.3 0.125
28.4 3
28.5 0.0118
28.6 8
28.7 a = 0.2 and b = 0.8
28.8 (III)
28.9 0.3012
28.10 0.8
28.11 0.09

Section 29

29.1 We have p_k^M = ((1 − p_0^M)/(1 − p_0)) p_k and p_k^T = p_k/(1 − p_0)
29.2 We have
E(N^M) = [P_N^M]′(1) = ((1 − p_0^M)/(1 − p_0)) P_N′(1) = ((1 − p_0^M)/(1 − p_0)) E(N)
29.3 E[N^M(N^M − 1)] = ((1 − p_0^M)/(1 − p_0)) E[N(N − 1)] and
Var(N^M) = ((1 − p_0^M)/(1 − p_0)) E[N(N − 1)] + ((1 − p_0^M)/(1 − p_0)) E(N) − [((1 − p_0^M)/(1 − p_0)) E(N)]^2

29.4
p_0 = e^(−λ) = e^(−1) = 0.3679
a = 0, b = λ = 1
p_1 = p_0 · (1/1) = 0.3679
p_2 = p_1 · (1/2) = 0.1839
E(N^T) = E(N)/(1 − p_0) = 1/(1 − 0.3679) = 1.582
E[N^T(N^T − 1)] = E[N(N − 1)]/(1 − p_0) = 1/(1 − 0.3679) = 1.582
Var(N^T) = 1.582 + 1.582 − 1.582^2 = 0.661276
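The zero-truncated Poisson(1) mean and variance can be recomputed directly from the truncated pmf (a verification sketch; small differences from the printed variance come from rounding 1.582):

```python
import math

# Zero-truncated Poisson with lam = 1: p_k^T = p_k / (1 - p_0).
lam = 1.0
p0 = math.exp(-lam)
pT = [0.0] + [lam ** k * math.exp(-lam) / math.factorial(k) / (1 - p0)
              for k in range(1, 40)]
mean = sum(k * p for k, p in enumerate(pT))
var = sum(k * k * p for k, p in enumerate(pT)) - mean ** 2
print(round(mean, 3), round(var, 4))
```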

29.5 (a) P_N^M(z) = (1/2)[1 + z/(3 − 2z)] (b) E(N^M) = 1.5 and Var(N^M) = 2.25
29.6 0.5

Section 30
30.1
a = β/(1 + β) = 1/(1 + 1) = 0.5
b = (r − 1)β/(1 + β) = 0.75
p_2^T = p_1^T(0.5 + 0.75/2) = 0.106694
p_3^T = p_2^T(0.5 + 0.75/3) = 0.026674
p_1^M = (1 − p_0^M)p_1^T = 0.341421
p_2^M = (1 − p_0^M)p_2^T = 0.042678
p_3^M = (1 − p_0^M)p_3^T = 0.010670

30.2 E(N) = β/ln(1 + β)
30.3 E[N(N − 1)] = P_N″(1) = β^2/ln(1 + β)
30.4 Var(N) = (β/ln(1 + β))[1 + β − β/ln(1 + β)]

30.5 We have
P_N^T(z) = Σ_{n=0}^∞ p_n^T z^n = Σ_{n=1}^∞ p_n^T z^n = (1/(1 − p_0)) Σ_{n=1}^∞ p_n z^n
        = (1/(1 − p_0)) [Σ_{n=0}^∞ p_n z^n − p_0] = (P_N(z) − p_0)/(1 − p_0)
30.6 P_N^T(z) = ([1 − β(z − 1)]^(−r) − (1 + β)^(−r))/(1 − (1 + β)^(−r))


30.7 E(N^T) = [P_N^T]′(1) = P_N′(1)/(1 − p_0) = rβ/(1 − (1 + β)^(−r))
30.8 E[N^T(N^T − 1)] = [P_N^T]″(1) = P_N″(1)/(1 − p_0) = r(r + 1)β^2/(1 − (1 + β)^(−r))
30.9 Var(N^T) = r(r + 1)β^2/(1 − (1 + β)^(−r)) + rβ/(1 − (1 + β)^(−r)) − [rβ/(1 − (1 + β)^(−r))]^2

Section 31
31.1
f_{Y^L}(y) = 1 − e^(−0.25),                       y = 0
           = 0.0002(y + 50)e^(−((y+50)/100)^2),   y > 0.
F_{Y^L}(y) = 1 − e^(−0.25),              y = 0
           = 1 − e^(−((y+50)/100)^2),    y > 0.
S_{Y^L}(y) = e^(−0.25),            y = 0
           = e^(−((y+50)/100)^2),  y > 0.
31.2
f_{Y^P}(y) = 0.0002(y + 50)e^(−0.0001y^2 − 0.01y)
F_{Y^P}(y) = 1 − e^(−0.0001y^2 − 0.01y)
S_{Y^P}(y) = e^(−0.0001y^2 − 0.01y)
h_{Y^P}(y) = 0.0002(y + 50).
31.3 E(Y^L) = 1000e^(−0.5)
31.4 E(Y^P) = 1000
31.5 2000e^(−400/2000)

31.6 1708.70
31.7 (b − d)^2/12


31.8 (a) 188.75 (b) 269.64


31.9 (a) θ^α/[(α − 1)(θ + d)^(α−1)] (b)

31.10 30

Section 32
32.1


fY L (y) =

FY L (y) =

SY L (y) =
hY L (y) =

fY P (y) =

FY P (y) =
SY P (y) =

32.4 340.83
32.5 900

+d
2

0,
1,
y
d

and E(Y P ) =

0yd
y > d.

0<y<d
y > d.

1
, y > d.
d
yd
,
d

2 d2
2

0,
1
,
y

32.2

32.3 E(Y L ) =

d
,

y
,

y=0
y > d.

1 d , 0 y d
1 y ,
y > d.

hY P (y) =

d
,

1
,

0,
1
,
y

0yd
y > d.
0yd
y > d.
0<y<d
y > d.

32.6 d = −θ ln 0.40

32.7 (a) 310 (b) 387.5


32.8 320.83
32.9 456
32.10 6400

Section 33
33.1 LER = 79.2%. This is the percentage of savings in claim payments due to the presence of the deductible 30.
33.2 86.6%
33.3 1546
33.4 500
33.5 333.33
33.6 0.0146
33.7 0.5
33.8 510.16
33.9 0.625

Section 34
34.1 We have f_Y(y) = (1/θ)e^(−y/θ) for y < u, f_Y(u) = e^(−u/θ) (a point mass at u), and f_Y(y) = 0 for y > u, and
F_Y(y) = ∫_0^y (1/θ)e^(−x/θ) dx = 1 − e^(−y/θ) for y < u, and F_Y(y) = 1 for y ≥ u
34.2 E(X ∧ u) = θ(1 − e^(−u/θ))
34.3 E((1 + r)X ∧ u) = (1 + r)θ(1 − e^(−u/((1+r)θ)))

34.4 48
34.5 182.18
34.6 2011.80
34.7 5176.78

Section 35
35.1 1.115
35.2 990,938.89
35.3 0.4163
35.4 133
35.5 353.55
35.6 3031.06
35.7 85%
35.8 29.93
35.9 109.4
35.10 0.583


Section 36
36.1 8.0925
36.2 0.5553
36.3 The pgf of N L is PN L (z) = e0.5(z1) . Hence,
PN P (z) = PN L [1 + v(z 1)] = e0.5(0.5553)(z1) = e0.27756(z1) .
Note that N P is a Poisson distribution with mean 0.27756
36.4 0.242444
36.5 PN P (z) = (1 0.11852(z 1))1
36.6 0.6304
36.7 0.4424

Section 37
37.1 (a) This is false. In a collective risk model, all the loss amounts are identically distributed.
(b) This is true since the loss amounts need not all have the same distribution.
(c) This is true. In the collective risk model, N (the frequency random variable) is determined independently of each of the subscripted X's, the severity random variables.
(d) This is false. If frequency is independent of severity, as it is in the collective risk model, then the number of payments does not affect the size of each individual payment
37.2 0
37.3 E(S) = 70 and Var(S) = 12, 500
37.4 This is a collective loss model
37.5 The probability generating function of N is given by
P_N(z) = [1 − β(z − 1)]^(−r)
so that the pgf can be expressed in the form
P_N(z; α) = Q(z)^α
where α = r and Q(z) = [1 − β(z − 1)]^(−1)

Section 38
38.1 E(S^3) = E(N^3)E(X)^3 − 3E(N^2)E(X)^3 + 2E(N)E(X)^3 + 3E(N^2)E(X)E(X^2) − 3E(N)E(X)E(X^2) + E(N)E(X^3)
38.2 E[(S − E(S))^3] = E(N)E[(X − E(X))^3] + 3E[(N − E(N))^2]E(X)E[(X − E(X))^2] + E[(N − E(N))^3]E(X)^3
38.3 F_{X_2}(x) = ∫_0^x Φ((ln(x − y) − μ)/σ) f(y) dy, where f is the lognormal pdf
38.4 We have
E(S) =0.18 + 0.182(2) + 0.0909(3) + 0.0182(4) = 0.8895
E(S 2 ) =0.18 + 0.182(22 ) + 0.0909(32 ) + 0.0182(42 ) = 2.0173

38.5 1.226
38.6 E(Y ) = 2.5 and Var(Y ) = 23.75
38.7 0.1587
38.8 0.1637
38.9 0.0681
38.10 E(S) = 5600 and Var(S) = 9, 710, 400
38.11 100
38.12 0.1003

38.13 0.242
38.14 0.1230
38.15 24
38.16 518
38.17 40
38.18 0.2483
38.19 0.0039
38.20 0.37
38.21 65.3
38.22 0.4207
38.23 0.0233

Section 39
39.1 1.014
39.2 2.25
39.3 1/3
39.4 2.064
39.5 25/16
39.6 2.3608
39.7 2.064

39.8 18.15
39.9 18.81

Section 40
40.1 The mgf of S is
M_S(z) = P_N[M_X(z)] = P_N[(1 − θz)^(−1)]
       = {1 − β[(1 − θz)^(−1) − 1]}^(−r)
       = [(1 − θz)/(1 − θ(1 + β)z)]^r
       = [1/(1 + β) + (β/(1 + β))(1 − θ(1 + β)z)^(−1)]^r
       = {1 + (β/(1 + β))[(1 − θ(1 + β)z)^(−1) − 1]}^r
       = P_N*[M_X*(z)],
where
P_N*(z) = [1 + (β/(1 + β))(z − 1)]^r
is the pgf of the binomial distribution with parameters r and β/(1 + β), and M_X*(z) = [1 − θ(1 + β)z]^(−1) is the mgf of the exponential distribution with mean θ(1 + β). Thus, the negative binomial-exponential model is equivalent to the binomial-exponential model.

40.2 This follows from Exercise 40.1 and


FS (x) = 1

X
n=1

40.3 FS (x) = 1

e (1+) ,
1+

x0

Pr(N = n)

n1
X
(x/)j ex/
j=0

j!

46
x

(1+)

40.4 fS (x) = (1+)


, x 0. Note that S is a mixed distribution with the discrete part
2e
Pr(S = 0) = FS (0) = (1 + )1 and the continuous part is the exponential distribution with mean
(1 + )

40.5 Suppose that X1 , X2 , , XN are independent Poisson random variables with parameters
1 , , N . Then
t
t
t
MS (t) = e1 (e 1) eN (e 1) = e(1 ++N )(e 1) .
Hence, S is a Poisson random variable with parameter 1 + + N
40.6 Suppose that X1 , X2 , , XN are independent binomial random variables with parameters
(n1 , p), (n2 , p), , (nN , p). Then
MS (t) = (q + pet )n1 (q + pet )nN = (q + pet )n1 +n2 +nN .
Hence, S is a binomial random variable with parameters (n1 + + nN , p).
40.7 Suppose that X1 , X2 , , XN are independent negative binomial random variables with parameters (r1 , q), (r2 , q), , (rN , q). Then
PS (t) = [1 (z 1)]r1 [1 (z 1)]r2 [1 (z 1)]rN = [1 (z 1)](r1 +r2 ++rN ) .
Hence, S is a negative binomial random variable with parameters (r1 + + rN , ). In particular,
the family of geometric random variables is closed under convolution.
40.8 Suppose that X1 , X2 , , XN are independent gamma random variables with parameters
(1 , ), (2 , ), , (N , ). Then
MS (t) = (1 t)1 (1 t)2 (1 t)N = (1 t)(1 +2 ++N ) .
Hence, S is a gamma random variable with parameters (1 + + N , ). In particular, the family
of exponential random variables is closed under convolution.
40.9 We have
x2

FS (x) =1 e

X
(x/2)j X
j=0

j!

n=j+1

1 x
x
=1 e 2 1 +
.
2
10

Pr(N = n)

47

40.10 MS (2) = 3.6 1048

Section 41
41.1 fS (0) = fN (0) = e0.04 = 0.9608 and
fS (1) =0.04fX (1)fS (0) = 0.04(0.5)(0.9608) = 0.019216
fS (2) =0.02[fX (1)fS (1) + 2fX (2)fS (0)]
=0.02[0.5(0.019216) + 2(0.4)(0.9608)] = 0.01556496
0.04
fS (3) =
[fX (1)fS (2) + 2fX (2)fS (1) + 3fX (3)fS (0)]
3
=0.0041487371
fS (4) =0.01[fX (1)fS (3) + 2fX (2)fS (2) + 3fX (3)fS (1) + 4fX (4)fS (0)]
=0.00037586.

41.2 We have
0.04
[fX (1)fS (n 1) + fX (2)fS (n 2) + fX (3)fS (n 3)]
n
0.04
=
[0.5fS (n 1) + 0.4fS (n 2) + 0.1fS (n 3)]
n
1
= [0.02fS (n 1) + 0.016fS (n 2) + 0.004fS (n 3)].
n

fS (n) =

41.3 0.15172
41.4 12
41.5 76
41.6 165
41.7 1.0001

48

41.8 0.3336
41.9 fS (1) = 0.3687 and fS (2) = 0.2055
41.10 0.0921
41.11 0.2883

Section 42
42.1 (a)
f0 =FX (5) =

5
= 0.1
50

15
5

= 0.2
50 50
25 15
f2 =FX (25) FX (15) =

= 0.2
50 50
25
= 0.5.
f3 =1 FX (25) = 1
50

f1 =FX (15) FX (5) =

(b) 0.1935
42.2 0.0368
42.3 0.0404
42.4 We have
m10

m11

 3  3
5
5
= FX (6) FX (3) =

= 0.150226
8
11

and
3m10

6m11

Z
=
3

Solving this system we find

m10

3(5)3 x
dx = 0.62897.
(x + 5)4

= 0.090796 and m11 = 0.05943.

42.5 We have f0 = m00 = 0.4922 and f3 = m01 + m10 = 0.2637 + 0.090796 = 0.3545.

49
42.6 0.03682
42.7 0.03236

Section 43
43.1
FY L (y) =1 v + vFY P (y)



y
Pr(X>6)Pr(X>6+ 0.75
)

, y < 13.5
1v+v
Pr(X>6)
=

2 v,
y 13.5
where v = 0.15259
43.2 E(S) = 262.4621 and Var(S) = 487, 269, 766.1
43.3 FY P (y) = 1 Pr(0.53Z > y) =

y
Pr(X>30)Pr(X>30+ 0.53
)
Pr(X>30)

and FY P (y) = 1 for y 164.30

43.4
f0
f1
f2
f3
f4
f5
fn

=FY P (15) = 0.2465


=FY P (45) FY P (15) = 0.3257
=FY P (75) FY P (45) = 0.1849
=FY P (105) FY P (75) = 0.0778
=FY P (135) FY P (105) = 0.0322
=FY P (165) FY P (135) = 0.058
=1 1 = 0, n = 6, 7,

43.5
fS (0) =e7(10.3257) = 112.179
n
5.18574 X
fn fS (n j)
fS (n) =
n
j=1

50

43.6 MY L (t) = 0.25918 + 0.74082((1 100t)1


43.7 PN P (z) = e1.06813(z1)

Section 44
44.1 E(S) = 100, 000 and Var(S) = 1.6002 1011
100

44.2 MS (t) = (0.80 + 0.20(1 0.001t)1 )

100

44.3 PS (z) = MS [ln z] = (0.80 + 0.20(1 0.001 ln z)1 )

44.4 (a) The mean is E(S) = 163 and the variance is Var(S) = 1107.77. (b) 217.75
44.5 E(S) = 352n and Var(S) = 88856n

Section 45
45.1 53
45.2 0.29
45.3 0.18
45.4 1975
45.5 = 1.52 or = 5.45

Section 46
46.1 5
46.2 0.2
46.3 151.52

51
46.4

n+8
18(n1)2

46.5 200
46.6

n1

46.7 12
46.8 (D)

Section 47
47.1 1.64
47.2 2.58
47.3 0.2
47.4 78.14 81.86
47.5 0.1825 p 0.2175

Section 48
48.1 (a) H0 : = 18, 000 and H1 : < 18, 000
(b) H0 : = 18, 000 and H1 : 6= 18, 000
(c) H0 : = 18, 000 and H1 : > 18, 000
48.2 (i) Two-tailed (ii) Left-tailed (iii) Right-tailed
48.3 The null and alternative hypotheses are
H0 : 30
H1 : < 30.
The test statistic for the given sample is
z=

20 30
= 3.727.
6/ 5

52
The rejection region is Z < 1.28. Since 3.727 < 1.28, so we reject the null hypothesis in favor
of the alternative.Thus, the mean time to find a parking space is less than 30 minutes.
48.4 We reject the null hypothesis when the level of confidence is greater than or equal to the
pvalue. Thus, the answer to the question is:(ii), (iii), and (iv).
48.5 The null and alternative hypotheses are
H0 : = 20
H1 : > 20.
The test statistic is

22.60 20

= 7.28.
2.50/ 49
The rejection region corresponding to = 0.02 is Z > 2.06. Since 7.28 > 2.06, we reject H0 and
conclude that the typical amount spent per customer is more than $20.
z=

48.6 The null and alternative hypotheses are


H0 : = 16
H1 : =
6 16.
The test statistic is

16.32 16

= 2.19.
0.8/ 30
The rejection region corresponding to = 0.10 is |Z| > 1.645. Since 2.19 > 1.645, we reject H0 and
conclude that the process is out of control.
z=

48.7 The null and alternative hypotheses are


H0 : = 10
H1 : =
6 10.
We have z 2 = z0.01 = 2.33. Thus, the critical values are 2.33 and 2.33.
48.8 (d)

Section 49
49.1

53
x
1
p12 (x) 16

1
12

1
6

1
6

1
12

1
6

1
6

0,
x<1

1x<2

2x<3

45
3
x<4
12
F12 (x) =
7
4x<5

12

35 5 x < 6

6x<7

6
1,
x 7.
49.2 The empirical mean is X =

41
12

and the empirical variance is

49.3 (a) We have

0,

15
22
,
45

H(x)
=
244
,

315

307

315

929

630
1559
,
630

x<1
1x<2
2x<3
3x<4
4x<5
5x<6
6x<7
x 7.

(b) We have

1,
x<1

0.8465, 1 x < 2

0.7659, 2 x < 3

0.6133, 3 x < 4

S(x)
=
0.4609, 4 x < 5

0.3773, 5 x < 6

0.2289, 6 x < 7

0.0842,
x 7.

1331
.
144

54
49.4

Sn (x) =

8
9
5
9
4
9
3
9
2
9
1
9

1,
x < 49
1
= 9 , 49 x < 50
= 49 , 50 x < 60
= 59 , 60 x < 75
= 23 , 75 x < 80
= 79 , 80 x < 120
= 89 , 120 x < 130
0,
x 130.

49.5 6
49.6 1.291
49.7 12

Section 50
50.1 (a) 47.50 (b) 3958.33
50.2 81
50.3 120
50.4 0.396
50.5 20,750

Section 51
51.1 (A) and (D) are false. (B) and (C) are true
51.2 Losses above a policy limit are right-censored and losses below a policy deductible are lefttruncated. The answer is (D)
51.3

55
Life
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25

di
0
0
0
0
1
1.2
1.5
2
2.5
3.1
0
0
0
0
0
0
0
0
0.5
0.7
1.0
1.0
2.0
2.0
3.0

xi

0.5

1.0

2.5

3.5

1.0
3.0

2.5

3.5

ui
0.2
0.3
0.5

0.7

2.0

3.0

4.0
4.0
4.0
4.0
4.0
4.0
4.0
4.0
4.0

4.0

2.5

51.4
j yj
1 4
2 8

51.5

sj
1
1

rj
2+21=3
1+00=1

56
j y j sj
1 0.9 1
2 1.5 1
3 1.7 1
4 2.1 2

rj
5+53=7
4+42=6
3+20=5
2+10=3

Section 52
52.1 0.52
52.2 0.583
52.3 0.067
52.4 0.7143
52.5 2
52.6 0.385
52.7 0.112
52.8 100

Section 53
53.1 We have


E[pn (xj )] = E

Nj
n


=

E(Nj )
np(xj )
=
= p(xj ).
n
n

This shows that the estimator is unbiased. Finding the variance of pn (xj ) we have
Var[pn (xj )] =

np(xj )[1 p(xj )]


p(xj )[1 p(xj )]
=
0
2
n
n

as n . This shows that the estimator is consistent


53.2 3.951 107

57

53.3 We have
64
= 0.1658
386
d 386 (2)] = p386 (2)[1 p386 (2)]
Var[p
n
p386 (2) =

64 322

= 386 386 = 3.58 104


386

53.4 The endpoints of the interval are:


r
0.1658(1 0.1658)
0.1658 1.96
= (0.1287, 0.2029)
386

Section 54
54.1 Since X1 , X2 , , Xn are independent so are X12 , X22 , , Xn2 . We have
Var(X1 X2 Xn ) =E[(X1 X2 Xn )2 ] E(X1 X2 Xn )2
=E(X12 )E(X22 ) E(Xn2 ) E(X1 )2 E(X2 )2 E(Xn )2
n
n
Y
Y
2
2
= (i + i )
2i
i=1

i=1

54.2 We have
(1 + a1 )(1 + a2 ) (1 + an ) = 1 + a1 + a2 + + an + prodcuts of ai s.
But the ai s are given small so that a product of ai s is even smaller. Ignoring all the product terms,
we obtain the desired result
54.3 0.0148
54.4 0.0694
54.5 10

58

Section 55

55.1 Ĥ(3) = 0.6875 and Var̂(Ĥ(3)) = 0.02376
55.2 The 95% linear confidence interval is
(0.6875 − 1.96√0.02376, 0.6875 + 1.96√0.02376) = (0.3854, 0.9896).


The 95% log-transformed confidence interval is
(0.6875 e^{−1.96√0.02376/0.6875}, 0.6875 e^{1.96√0.02376/0.6875}) = (0.4431, 1.0669).
55.3 0.607
55.4 (0.189, 1.361)
55.5 0.779
55.6 (0.443, 1.067)
55.7 0.2341
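The linear and log-transformed intervals used throughout Section 55 can be reproduced numerically; the inputs below are the point estimate and variance from 55.1 (small last-digit differences from the book come from intermediate rounding):

```python
from math import sqrt, exp

def nelson_aalen_cis(H, varH, z=1.96):
    """Linear and log-transformed confidence intervals for a
    Nelson-Aalen estimate H with estimated variance varH."""
    half = z * sqrt(varH)
    linear = (H - half, H + half)
    U = exp(z * sqrt(varH) / H)   # log-transformation factor
    log_ci = (H / U, H * U)
    return linear, log_ci

linear, log_ci = nelson_aalen_cis(0.6875, 0.02376)
```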

Section 56
56.1 0.4855
56.2 0.53125
56.3 0.026
56.4 0.3
56.5 1 x 2
56.6 0.3

Section 57

57.1 (a) 990 (b) 0.0080
57.2 We have
r_j = Σ_{i=1}^{j} d_i − Σ_{i=1}^{j−1} (x_i + u_i)
    = [Σ_{i=1}^{j−1} d_i − Σ_{i=1}^{j−2} (x_i + u_i)] + d_j − (x_{j−1} + u_{j−1})
    = r_{j−1} + d_j − (x_{j−1} + u_{j−1})

57.3 0.75
57.4 0.6
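The identity in 57.2 is easy to verify numerically. In the sketch below, d, x and u are counts of entrants, deaths and withdrawals per interval; the numbers are made-up illustration data, not the book's:

```python
def risk_sets(d, x, u):
    """Compute r_j two ways: the direct double sum and the
    recursion r_j = r_{j-1} + d_j - (x_{j-1} + u_{j-1})."""
    n = len(d)
    direct = [sum(d[:j + 1]) - sum(x[i] + u[i] for i in range(j))
              for j in range(n)]
    rec = [d[0]]
    for j in range(1, n):
        rec.append(rec[j - 1] + d[j] - (x[j - 1] + u[j - 1]))
    return direct, rec

direct, rec = risk_sets(d=[10, 2, 1, 0], x=[1, 2, 0, 1], u=[2, 1, 1, 0])
```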

Section 58
58.1 0.52490
58.2 0.52
58.3 369
58.4 26,400
58.5 13.75
58.6 107.8
58.7 20
58.8 384
58.9 0.24
58.10 4.468


58.11 246.6
58.12 17.55
58.13 208.3
58.14 1.614
58.15 296.21
58.16 224
58.17 104.4
58.18 118.32
58.19 0.983
58.20 0.345

Section 59
59.1 The estimate is (x₁ + x₂ + ··· + xₙ)/(2n)
59.2 The estimate is max{x₁, x₂, ..., xₙ}
59.3 4.3275
59.4 3.97
59.5 The estimate is min{x₁, x₂, ..., xₙ}
59.6 0.2507
59.7 2
59.8 0.6798


59.9 1996.90
59.10 0.447


59.11 L(p) = [(p/100)e^{−1} + ((1 − p)/10,000)e^{−0.01}][(p/100)e^{−20} + ((1 − p)/10,000)e^{−0.2}]

59.12 ℓ′(θ) = n/θ + Σ_{i=1}^m ln x_i + Σ_{i=1}^m ln y_i − 2m/(2θ + 1) − 6/(2θ + 1)²
59.13 16.74
59.14 916.7

Section 60
60.1 73
60.2 233.333
60.3 703
60.4 3000
60.5 2.41877
60.6 L(θ) = θ⁻³ e^{−1100/θ}
60.7 471
60.8 f(50)f(15)f(60)f(500)[1 − F(100)][1 − F(500)]
60.9 3.089
60.10 3,325.67
60.11 0.09


60.12 1067
60.13 3/8
60.14 52.68

Section 61
61.1 ℓ′(λ) = 16/λ − (4.6252 + 5.3481)
61.2 ℓ″(λ) = −16/λ²
61.3 I(λ̂) = 16/1.73² = 5.346
61.4 0.1871
61.5 [0.8822, 2.5778]
61.6 1/n
61.7 0.447
61.8 3θ²/n
61.9 0.97

Section 62
62.1
ℓ(α, θ) = Σ_{i=1}^n [ln α + α ln θ − (α + 1) ln(xᵢ + θ)]

62.2
∂²ℓ/∂α² = −n/α²
∂²ℓ/∂α∂θ = n/θ − Σ_{i=1}^n 1/(xᵢ + θ)
∂²ℓ/∂θ² = −nα/θ² + (α + 1) Σ_{i=1}^n 1/(xᵢ + θ)²

62.3
I(α, θ) = [  n/α²            −n/(θ(α+1))
            −n/(θ(α+1))       nα/(θ²(α+2)) ]

62.4
I(α, θ)⁻¹ = (1/det[I(α, θ)]) [ nα/(θ²(α+2))    n/(θ(α+1))
                                n/(θ(α+1))      n/α²       ]

62.5
I(α, θ) = [  5.0391   −0.4115
            −0.4115    0.0524 ]

62.6
Var̂(α̂, θ̂) = (1/[(5.0391)(0.0524) − 0.4115²]) [ 0.0524   0.4115
                                                 0.4115   5.0391 ]
           = [ 0.1209    0.9495
               0.9495   11.6274 ]

62.7 0.0187
62.8 (a) We have
M_Y(t) = E(e^{tY}) = E(e^{tX₁})E(e^{tX₂}) = (1 − θt)⁻².
Thus, Y has a Gamma distribution with parameters α = 2 and θ.
(b) 0.15
(c) 0.079
62.9 (0.23, 0.69)
62.10 0.02345
62.11
[ 2  3
  3  5 ]

Section 63
63.1 1.00774
63.2 [0.70206, 4.20726]
63.3 [2.641591352, 8.358408648]

Section 64
64.1 This follows from d/dθ (1/θ) = −1/θ².
64.2 (a) The model distribution is
f_{X|Q}(x|q) = (q)(q) = q².
(b) The joint distribution is
f_{X,Q}(x, q) = q² · q²/0.039 = q⁴/0.039.
(c) The marginal distribution in X is
f_X(x) = ∫_{0.2}^{0.5} (q⁴/0.039) dq = 0.15862.
(d) The posterior distribution is
π_{Q|X}(q|x) = q⁴/0.006186
64.3 π_{Λ|X}(λ|x) = 4¹³ λ¹² e^{−4λ}/12!

64.4 π_{Λ|X}(λ|x) = λ¹⁰ (0.8e^{−7λ/6} + 0.6e^{−13λ/12}) / (0.395536(10!))
64.5 0.5572
64.6 0.721
64.7 0.64
64.8 1.90
64.9 (x + c)/2
64.10 27/16
64.11 1.9899
64.12 0.622
64.13 0.148

Section 65
65.1 2
65.2 0.45
65.3 0.000398
65.4 450
65.5 1.319
65.6 0.8148

Section 66
66.1 The model distribution is
f_{X|Q}(x|q) = Π_{i=1}^n C(m, xᵢ) q^{xᵢ}(1 − q)^{m−xᵢ}.
The joint distribution is
f_{X,Q}(x, q) = Π_{i=1}^n C(m, xᵢ) q^{xᵢ}(1 − q)^{m−xᵢ} · [Γ(a+b)/(Γ(a)Γ(b))] q^{a−1}(1 − q)^{b−1}.
The marginal distribution is
f_X(x) = Π_{i=1}^n C(m, xᵢ) · [Γ(a+b)/(Γ(a)Γ(b))] ∫_0^1 q^{a+Σxᵢ−1}(1 − q)^{b+nm−Σxᵢ−1} dq
       = Π_{i=1}^n C(m, xᵢ) · Γ(a+b)Γ(a + Σᵢ xᵢ)Γ(b + nm − Σᵢ xᵢ)/(Γ(a)Γ(b)Γ(a+b+nm)).
The posterior distribution is
π_{Q|X}(q|x) = [Γ(a+b+nm)/(Γ(a + Σᵢ xᵢ)Γ(b + nm − Σᵢ xᵢ))] q^{a+Σxᵢ−1}(1 − q)^{b+nm−Σxᵢ−1}.
Hence, Q|X has a beta distribution with parameters a + Σ_{i=1}^n xᵢ, b + nm − Σ_{i=1}^n xᵢ, and 1.
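The conjugate update derived in 66.1 reduces to two additions; a minimal sketch (the sample data vector below is hypothetical):

```python
def beta_binomial_update(a, b, m, xs):
    """Posterior beta parameters after observing xs,
    where each x_i ~ Binomial(m, q) and Q ~ Beta(a, b)."""
    n, s = len(xs), sum(xs)
    return a + s, b + n * m - s

a_post, b_post = beta_binomial_update(a=2, b=3, m=5, xs=[1, 0, 2, 4])
```

The posterior mean q-estimate is then a_post/(a_post + b_post), which is what the Bayesian premium of Problem 85.2 is built on.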

66.2 The model distribution is
f_{X|Λ}(x|λ) = λⁿ e^{−λ Σᵢ 1/xᵢ} / Π_{i=1}^n xᵢ².
The joint distribution is
f_{X,Λ}(x, λ) = λ^{n+α−1} e^{−λ(1/θ + Σᵢ 1/xᵢ)} / (θ^α Γ(α) Π_{i=1}^n xᵢ²).
The marginal distribution is
f_X(x) = ∫_0^∞ λ^{n+α−1} e^{−λ(1/θ + Σᵢ 1/xᵢ)} / (θ^α Γ(α) Πᵢ xᵢ²) dλ = Γ(n+α) / (θ^α Γ(α) Πᵢ xᵢ² (1/θ + Σᵢ 1/xᵢ)^{n+α}).
The posterior distribution is
π_{Λ|X}(λ|x) = (1/θ + Σᵢ 1/xᵢ)^{n+α} λ^{n+α−1} e^{−λ(1/θ + Σᵢ 1/xᵢ)} / Γ(n+α).
Hence, Λ|X has a Gamma distribution with parameters α + n and (1/θ + Σ_{i=1}^n 1/xᵢ)⁻¹.
66.3 The model distribution is
f_{X|Θ}(x|θ) = (1/θⁿ) e^{−Σᵢ xᵢ/θ}.
The joint distribution is
f_{X,Θ}(x, θ) = (1/θⁿ) e^{−Σᵢ xᵢ/θ} · β^α e^{−β/θ} / (θ^{α+1} Γ(α)).
The marginal distribution is
f_X(x) = ∫_0^∞ β^α e^{−(Σᵢ xᵢ + β)/θ} / (θ^{α+n+1} Γ(α)) dθ = β^α Γ(α+n) / ((Σᵢ xᵢ + β)^{α+n} Γ(α)).
The posterior distribution is
π_{Θ|X}(θ|x) = (Σ_{i=1}^n xᵢ + β)^{α+n} e^{−(Σᵢ xᵢ + β)/θ} / (θ^{α+n+1} Γ(α+n)).
Hence, Θ|X has an inverse Gamma distribution with parameters α + n and Σ_{i=1}^n xᵢ + β.


66.4 The model distribution is
f_{X|Θ}(x|θ) = 1/θⁿ.
The joint distribution is
f_{X,Θ}(x, θ) = αβ^α / θ^{n+α+1},  θ > M,
where M = max{x₁, x₂, ..., xₙ, β}. The marginal distribution is
f_X(x) = ∫_M^∞ αβ^α / θ^{n+α+1} dθ = αβ^α / ((n+α) M^{n+α}).
The posterior distribution is
π_{Θ|X}(θ|x) = (n+α) M^{n+α} / θ^{n+α+1}.
Hence, Θ|X has a single-parameter Pareto distribution with α′ = n + α and θ′ = M

Section 67

67.1 (a) Let n_k be the number of policies on which exactly k accidents occurred.
The Poisson parameter estimate by the method of moments is
λ̂ = x̄ = Σ_{k=1}^6 k n_k / Σ_{k=0}^6 n_k = 103/84 = 1.2262.
(b) The likelihood function is
L(λ) = Π_{k=0}^6 (e^{−λ} λ^k / k!)^{n_k}.
The loglikelihood function is
ℓ(λ) = Σ_{k=0}^6 n_k ln(e^{−λ} λ^k / k!) = −λ Σ_{k=0}^6 n_k + ln λ Σ_{k=1}^6 k n_k − Σ_{k=0}^6 n_k ln(k!).
The MLE of λ is found from
(d/dλ) ℓ(λ) = −Σ_{k=0}^6 n_k + (1/λ) Σ_{k=1}^6 k n_k = 0 ⟹ λ̂ = Σ_{k=1}^6 k n_k / Σ_{k=0}^6 n_k = 1.2262.
(c) We have
E(λ̂) = E(N̄) = λ and Var(λ̂) = Var(N)/n = λ/n,
where n = Σ_{k=0}^6 n_k. Thus, λ̂ is unbiased and consistent.
(d) The asymptotic variance is found as follows:
I(λ) = n I(N|λ) = −n E[(∂²/∂λ²) ln(e^{−λ} λ^N / N!)] = n E[N/λ²] = n/λ
Var(λ̂) = λ/n.
(e) The confidence interval is
(1.2262 − 1.96 √(1.2262/84), 1.2262 + 1.96 √(1.2262/84)) = (0.9894, 1.463)
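Parts (a)–(e) of 67.1 only need the totals Σ k·n_k = 103 and n = 84. A sketch — the particular split of counts below is hypothetical, chosen only to be consistent with those totals:

```python
from math import sqrt

def poisson_grouped_mle(counts):
    """counts[k] = number of policies with exactly k accidents.
    Returns the MLE of lambda and a 95% normal confidence interval,
    using the asymptotic variance lambda/n."""
    n = sum(counts.values())
    lam = sum(k * nk for k, nk in counts.items()) / n   # MLE = sample mean
    half = 1.96 * sqrt(lam / n)
    return lam, (lam - half, lam + half)

# hypothetical counts with sum n_k = 84 and sum k*n_k = 103
lam, ci = poisson_grouped_mle({0: 30, 1: 26, 2: 15, 3: 8, 4: 3, 5: 1, 6: 1})
```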


67.2 1.6438
67.3 p̂_k = e^{−λ̂} λ̂^k / k!
67.4 β̂ = 0.03 and r̂ = 41.6667
67.5
H(r̂) = 100 ln(1 + 1.25/r̂) − Σ_{k=1}^4 n_k Σ_{m=0}^{k−1} (r̂ + m)⁻¹
      = 100 ln(1 + 1.25/r̂) − [70/r̂ + 35/(r̂+1) + 15/(r̂+2) + 5/(r̂+3)]
67.6 β̂ = x̄/r̂
67.7 q̂ = 0.1472 and m̂ = 18.27
67.8 0.06

Section 68
68.1 We have
p̂₀^M = n₀/n
and
β̂ = Σ_k k n_k/(n − n₀) − 1 = n x̄/(n − n₀) − 1 = 0.05147

68.2 We have
p̂₀^M = n₀/n = 9048/10000 = 0.9048
and
x̄ = [(n − n₀)/n] λ/(1 − e^{−λ}) ⟹ 0.1001 = 0.0952 λ/(1 − e^{−λ}) ⟹ λ̂ = 0.1012

68.3 We have
p̂₀^M = n₀/n = 10/20 = 0.5
and
x̄ = [(1 − p̂₀^M)/(1 − p₀)] mq ⟹ 0.7 = 3q ⟹ q̂ = 0.2333

68.4 22.5547
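The moment equation in 68.2 has no closed form for λ, but since λ/(1 − e^{−λ}) is increasing, bisection finds the root quickly. A sketch, assuming the equation takes the form x̄ = (1 − p₀^M)·λ/(1 − e^{−λ}):

```python
from math import exp

def solve_lambda(xbar, one_minus_p0M, lo=1e-6, hi=10.0):
    """Solve xbar = one_minus_p0M * lam / (1 - exp(-lam)) by bisection."""
    f = lambda lam: one_minus_p0M * lam / (1 - exp(-lam)) - xbar
    for _ in range(200):
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

lam = solve_lambda(xbar=0.1001, one_minus_p0M=0.0952)
```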

Section 69
69.1 0.0714
69.2 For x < 0.3, we have F*(x) > F_n(x), so that the fitted distribution is thicker on the left
than the empirical distribution. For x > 0.85, F*(x) > F_n(x), which implies S*(x) < S_n(x). That
is, the fitted distribution is thinner on the right than the empirical distribution. Also, note that
near the median x = 0.5 the slope is less than 1; that is, there is less probability on the fitted than on the
empirical. Hence, the answer is (E)
69.3 Let's choose x₅ = 30. Then F(30) should be close to, but smaller than, 0.6. If X is uniform on [1, 100]
then its cdf is F_u(x) = (x − 1)/99 and F_u(30) ≈ 0.29, so (C) is eliminated. If X is exponential with
mean 10 then F_e(x) = 1 − e^{−0.1x} and F_e(30) ≈ 0.95. Thus, (D) is eliminated. If F(x) = x/(x+1) then
F(30) = 30/31 ≈ 0.97, so (B) is eliminated. If X is normal with mean 40 and standard deviation 40
then F_n(30) = Φ((30 − 40)/40) = Φ(−0.25) = 0.40, so (E) is eliminated. Note that with the function
in (A) we have F(30) = 1 − 30^{−0.25} ≈ 0.57. Hence, the answer is (A)
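The eliminations in 69.3 amount to evaluating each candidate cdf at x₅ = 30 and keeping the one just below 0.6; the dictionary keys are shorthand labels, not the book's wording:

```python
from math import exp, erf, sqrt

def normal_cdf(z):
    """Standard normal cdf via the error function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

candidates = {
    "A: 1 - x^(-0.25)":  1 - 30 ** -0.25,
    "B: x/(x+1)":        30 / 31,
    "C: uniform[1,100]": (30 - 1) / 99,
    "D: exp, mean 10":   1 - exp(-30 / 10),
    "E: N(40, 40^2)":    normal_cdf((30 - 40) / 40),
}
# the value closest to, but below, 0.6 identifies the answer
best = min((k for k, v in candidates.items() if v < 0.6),
           key=lambda k: 0.6 - candidates[k])
```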

Section 70
70.1 0.2727
70.2 0.4025
70.3
Level of confidence   0.10    0.05    0.025   0.01
Critical value        0.5456  0.6082  0.6619  0.729
Test Result           Reject  Reject  Reject  Fail to reject

70.4 0.1679
70.5 Fail to reject the null hypothesis
70.6
Level of confidence   0.10    0.05    0.02    0.01
Critical value        0.0863  0.0962  0.1075  0.1152
Test Result           Reject  Reject  Reject  Fail to reject

Section 71
71.1 0.252
71.2
Level of confidence   0.10            0.05            0.01
Critical value        1.933           2.492           3.857
Test Result           Fail to reject  Fail to reject  Fail to reject

71.3 (A) Using sample data gives a better than expected fit and therefore a test statistic that
favors the null hypothesis, thus increasing the Type II error probability.
(B) The K-S test works only on individual data and so B is false.
(C) The Anderson-Darling test emphasizes the tails so that (C) is false.
Hence, the answer is (D)
71.4 (A), (B), and (C) are all correct. Thus, the answer is (D)
71.5 (A)

Section 72
72.1 9.151
72.2 (a) λ̂ = 1.6438
(b) The Chi-square statistic is
χ² = 421.4809/70.53 + 36.7236/115.94 + 32.49/95.30 + 76.9129/83.23 = 7.558.
We have

Level of Significance   10%     5%      2.5%    1%
χ²_{k−r−1, 1−α}         4.605   5.991   7.378   9.210
Test result             Reject  Reject  Reject  Do not reject

where the number of degrees of freedom is k − r − 1 = 4 − 1 − 1 = 2


72.3 The Chi-square statistic is
χ² = 1/250 + 49/350 + 144/240 + 225/110 + 225/40 + 100/10 = 17.594.
We have

Level of Significance   10%     5%      2.5%    1%
χ²_{k−1, 1−α}           9.236   11.070  12.833  15.086
Test result             Reject  Reject  Reject  Reject

The number of degrees of freedom is 6 − 1 = 5
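Every decision row in 72.2–72.3 follows one pattern: compare the statistic with the χ² critical value for the appropriate degrees of freedom. Using the 72.2 numbers (the critical values are hardcoded from a χ² table for 2 degrees of freedom rather than recomputed):

```python
# (observed - expected)^2 terms and expected counts from Problem 72.2
numerators = [421.4809, 36.7236, 32.49, 76.9129]
expected = [70.53, 115.94, 95.30, 83.23]
chi2 = sum(num / e for num, e in zip(numerators, expected))

# chi-square critical values, k - r - 1 = 2 degrees of freedom
critical = {0.10: 4.605, 0.05: 5.991, 0.025: 7.378, 0.01: 9.210}
decisions = {a: ("Reject" if chi2 > c else "Do not reject")
             for a, c in critical.items()}
```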


72.4 χ² = 6.659
72.5 A is false. Using sample data gives a better than expected fit and therefore a test statistic that favors the null hypothesis, thus increasing the Type II error probability. The K-S test
works only on individual data and so B is false. The A-D test emphasizes the tails, thus C is false.
D is false because the critical value depends on the degrees of freedom which in turn depends on
the number of cells, not the sample size. So the answer is (E)
72.6 (A)

Section 73
73.1 We have 0 degrees of freedom in the null hypothesis, since both parameters are specified,
and 2 degrees of freedom in the alternative hypothesis, since both parameters are freely chosen in
maximizing L(·, ·). We thus have 2 degrees of freedom overall
73.2
Level of Significance   5%      2.5%    1%      0.5%
Critical value          5.991   7.378   9.210   10.597

73.3
Level of Significance   5%      2.5%    1%             0.5%
c                       5.991   7.378   9.210          10.597
Test Result             Reject  Reject  Do not reject  Do not reject

73.4
Level of Significance   10%     5%      2.5%           1%
c                       2.706   3.841   5.024          6.635
Test Result             Reject  Reject  Do not reject  Do not reject

73.5 7

Section 74
74.1 (i)
74.2 (A)
74.3 (I)
74.4 Pareto

Section 75
75.1 16,913
75.2 2,381
75.3 0.10
75.4 384.16


75.5 960

Section 76
76.1 0.47
76.2 (E)
76.3 0.3723
76.4 5,446,250
76.5 138
76.6 0.8

Section 77
77.1 For the risk parameter, we have
π(λ) = λ^{α−1} e^{−λ/θ} / (θ^α Γ(α)).
For the claims, we have
f_{X|Λ}(x|λ) = e^{−λ} λ^x / x!

77.2 For the risk parameter, we have
π(θ) = (1/(σ₂√(2π))) e^{−(θ−μ)²/(2σ₂²)}.
For the claims, we have
f_{X|Θ}(x|θ) = (1/(σ₁√(2π))) e^{−(x−θ)²/(2σ₁²)}

77.3 For the risk parameter, we have
π(q) = [Γ(a+b)/(Γ(a)Γ(b))] q^{a−1}(1 − q)^{b−1}, 0 < q < 1.
For the claims, we have
f_{X|Q}(x|q) = C(m, x) q^x (1 − q)^{m−x}, x = 0, 1, ..., m

77.4 For the risk parameter, we have
π(θ) = β^α e^{−β/θ} / (θ^{α+1} Γ(α)).
For the claims, we have
f_{X|Θ}(x|θ) = (1/θ) e^{−x/θ}

77.5 For the risk parameter, we have
π(θ) = αβ^α / θ^{α+1}, θ > β.
For the claims, we have
f_{X|Θ}(x|θ) = 1/θ, 0 ≤ x ≤ θ

Section 78
78.1 (a) f_{XY}(x, y) = f_X(x) f_{Y|X}(y|x) = 1/x, 0 < y < x < 1 (b) (1 + ln 2)/2
78.2 f_{X|Y}(x|y) = 6x(2 − x − y)/(4 − 3y)
78.3 e^{−y}

78.4 (a) Observe that X only takes positive values, thus f_X(x) = 0 for x ≤ 0. For 0 < x < 1
we have
f_X(x) = ∫ f_{XY}(x, y) dy = ∫_1^∞ α/y^{α+1} dy = 1.
For x ≥ 1 we have
f_X(x) = ∫ f_{XY}(x, y) dy = ∫_x^∞ α/y^{α+1} dy = x^{−α}.
(b) For 0 < x < 1 we have
f_{Y|X}(y|x) = f_{XY}(x, y)/f_X(x) = α/y^{α+1}, y > 1.
Hence,
E(Y|X = x) = ∫_1^∞ y · α/y^{α+1} dy = ∫_1^∞ α/y^α dy = α/(α − 1).
If x ≥ 1 then
f_{Y|X}(y|x) = f_{XY}(x, y)/f_X(x) = αx^α/y^{α+1}, y > x.
Hence,
E(Y|X = x) = ∫_x^∞ y · αx^α/y^{α+1} dy = αx/(α − 1).

78.5 E(X|Y = y) = (2/3)y and E(Y|X = x) = (2/3)(1 − x³)/(1 − x²)

78.6 The marginal density functions are
f_X(x) = ∫_{x²}^1 (21/4) x² y dy = (21/8) x²(1 − x⁴), −1 < x < 1
f_Y(y) = ∫_{−√y}^{√y} (21/4) x² y dx = (7/2) y^{5/2}, 0 < y < 1.
Thus, a first way for finding E(Y) is
E(Y) = ∫_0^1 y · (7/2) y^{5/2} dy = ∫_0^1 (7/2) y^{7/2} dy = 7/9.
For the second way, we use the double expectation result
E(Y) = E(E(Y|X)) = ∫_{−1}^1 E(Y|X) f_X(x) dx = ∫_{−1}^1 (2/3)[(1 − x⁶)/(1 − x⁴)] (21/8) x²(1 − x⁴) dx = 7/9

78.7 1/12
78.8 Var(Y) = 1/13


78.9 We have E(X) = λ = Var(X), E(Y|X = x) = θx, and Var(Y|X = x) = σ²x². Thus,
E(Y) = E[E(Y|X)] = E(θX) = θE(X) = θλ
and
Var(Y) = E[Var(Y|X)] + Var[E(Y|X)]
       = E(σ²X²) + Var(θX)
       = σ²E(X²) + θ²Var(X)
       = σ²(λ + λ²) + θ²λ
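The two routes to E(Y) in 78.6 can be cross-checked with a crude midpoint rule (a numeric sanity check, not part of the book's solution):

```python
def midpoint_integral(f, a, b, n=20000):
    """Simple midpoint-rule quadrature of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# direct: E(Y) = int_0^1 y * (7/2) y^(5/2) dy
direct = midpoint_integral(lambda y: y * 3.5 * y ** 2.5, 0.0, 1.0)

# double expectation: E(Y) = int_{-1}^{1} E(Y|X=x) f_X(x) dx
cond_mean = lambda x: (2 / 3) * (1 - x ** 6) / (1 - x ** 4)
f_X = lambda x: (21 / 8) * x ** 2 * (1 - x ** 4)
double = midpoint_integral(lambda x: cond_mean(x) * f_X(x), -1.0, 1.0)
```

Both estimates land on 7/9 ≈ 0.7778, matching the closed-form answer.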

Section 79
79.1 (a) The prior distribution is
π(G) = 0.70
π(A) = 0.20
π(B) = 0.10.
(b) The model distribution is
f_{X|Θ}(x|G) = (0.25)(0.10) = 0.025
f_{X|Θ}(x|A) = (0.40)(0.20) = 0.08
f_{X|Θ}(x|B) = (0.30)(0.20) = 0.06

79.2 (a) The marginal probability is
f_X(1, 2) = Σ_θ f_{X₁|Θ}(1|θ) f_{X₂|Θ}(2|θ) π(θ)
          = (0.25)(0.10)(0.7) + (0.4)(0.2)(0.2) + (0.3)(0.2)(0.10)
          = 0.0395.
(b) The joint distribution is
f_{X,X₃}(1, 2, 0) = Σ_θ f_{X₁|Θ}(1|θ) f_{X₂|Θ}(2|θ) f_{X₃|Θ}(0|θ) π(θ)
                  = (0.25)(0.10)(0.65)(0.70) + (0.4)(0.2)(0.4)(0.2) + (0.3)(0.2)(0.5)(0.10)
                  = 0.020775
f_{X,X₃}(1, 2, 1) = Σ_θ f_{X₁|Θ}(1|θ) f_{X₂|Θ}(2|θ) f_{X₃|Θ}(1|θ) π(θ)
                  = (0.25)(0.10)(0.25)(0.70) + (0.4)(0.2)(0.4)(0.2) + (0.3)(0.2)(0.3)(0.10)
                  = 0.012575
f_{X,X₃}(1, 2, 2) = Σ_θ f_{X₁|Θ}(1|θ) f_{X₂|Θ}(2|θ) f_{X₃|Θ}(2|θ) π(θ)
                  = (0.25)(0.10)(0.10)(0.70) + (0.4)(0.2)(0.2)(0.2) + (0.3)(0.2)(0.2)(0.10)
                  = 0.00615

79.3 (a) The predictive distribution is
f_{X₃|X}(0|1, 2) = 0.020775/0.0395 = 0.5259
f_{X₃|X}(1|1, 2) = 0.012575/0.0395 = 0.3183
f_{X₃|X}(2|1, 2) = 0.00615/0.0395 = 0.1557.
(b) The posterior probabilities are
π(G|1, 2) = f(1|G)f(2|G)π(G)/f(1, 2) = (0.25)(0.10)(0.70)/0.0395 = 0.4430
π(A|1, 2) = f(1|A)f(2|A)π(A)/f(1, 2) = (0.40)(0.20)(0.20)/0.0395 = 0.4051
π(B|1, 2) = f(1|B)f(2|B)π(B)/f(1, 2) = (0.30)(0.20)(0.10)/0.0395 = 0.1519
79.4 (a) The hypothetical means are
μ₃(G) = 0(0.65) + 1(0.25) + 2(0.10) = 0.45
μ₃(A) = 0(0.40) + 1(0.40) + 2(0.20) = 0.80
μ₃(B) = 0(0.50) + 1(0.30) + 2(0.20) = 0.70.
(b) The pure premium is
μ₃ = E(X₃) = 0.45(0.70) + 0.80(0.20) + 0.70(0.10) = 0.545
79.5 (a) Without using the hypothetical means, we have
E(X₃|X) = 0(0.5259) + 1(0.3183) + 2(0.1557) = 0.6297.
(b) The Bayesian premium, using hypothetical means, is
E(X₃|X) = (0.45)(0.4430) + (0.80)(0.4051) + (0.70)(0.1519) = 0.62976
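All of 79.1–79.5 can be organized as one small table computation; the numbers below are exactly those of the problem:

```python
prior = {"G": 0.70, "A": 0.20, "B": 0.10}
f1 = {"G": 0.25, "A": 0.40, "B": 0.30}   # P(X = 1 | class)
f2 = {"G": 0.10, "A": 0.20, "B": 0.20}   # P(X = 2 | class)
hyp_mean = {"G": 0.45, "A": 0.80, "B": 0.70}

marginal = sum(f1[c] * f2[c] * prior[c] for c in prior)           # f(1, 2)
posterior = {c: f1[c] * f2[c] * prior[c] / marginal for c in prior}
premium = sum(hyp_mean[c] * posterior[c] for c in prior)          # Bayesian premium
```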
79.6 0.158
79.7 3.83
79.8 0.6794
79.9 7.202
79.10 10,322
79.11 0.278

Section 80
80.1 (a) The model distribution is
f(x|λ) = (1/λ) e^{−x/λ}.
The prior distribution is
π(λ) = 225 e^{−15/λ}/λ³, λ > 0.
(b) The joint density of x and λ is
f(x, λ) = f(x|λ)π(λ) = 225 e^{−(x+15)/λ}/λ⁴, λ > 0.
For x = 12, the joint density is
f(12, λ) = 225 e^{−27/λ}/λ⁴, λ > 0.
The marginal density of x is
f(x) = ∫_0^∞ 225 e^{−(x+15)/λ}/λ⁴ dλ
and
f(12) = ∫_0^∞ 225 e^{−27/λ}/λ⁴ dλ = (225/27³) Γ(3) ∫_0^∞ 27³ e^{−27/λ}/(λ⁴ Γ(3)) dλ = 450/27³,
where the remaining integrand is an inverse Gamma density, so the integral is 1. Similarly,
f(12, x₂) = ∫_0^∞ (1/λ²) e^{−(12+x₂)/λ} · 225 e^{−15/λ}/λ³ dλ
          = 225 ∫_0^∞ e^{−(27+x₂)/λ}/λ⁵ dλ
          = [225/(27+x₂)⁴] Γ(4) ∫_0^∞ (27+x₂)⁴ e^{−(27+x₂)/λ}/(λ⁵ Γ(4)) dλ
          = 1350/(27+x₂)⁴.
The predictive distribution is
f(x₂|12) = [1350/(27+x₂)⁴] / [450/27³] = 3(27³)/(27+x₂)⁴,
which is a type 2 Pareto distribution with parameters α = 3 and θ = 27.
(c) The posterior distribution of λ is
π(λ|12) = 27³ e^{−27/λ}/(2λ⁴).
(d) E(X₂|12) = 13.5
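Since the predictive distribution in 80.1 is a type 2 Pareto with α = 3 and θ = 27, moments and tail probabilities follow directly from the standard Pareto formulas:

```python
def pareto2_mean(alpha, theta):
    """Mean of a type 2 (Lomax) Pareto: theta/(alpha - 1), alpha > 1."""
    return theta / (alpha - 1)

def pareto2_sf(x, alpha, theta):
    """Survival function S(x) = (theta/(theta + x))^alpha."""
    return (theta / (theta + x)) ** alpha

mean = pareto2_mean(3, 27)      # E(X2 | X1 = 12) of part (d)
tail = pareto2_sf(27, 3, 27)    # P(X2 > 27 | X1 = 12)
```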
80.2 3.25
80.3 The posterior distribution of P is
π(p|4) ∝ [10!/(4!6!)] p⁵(1 − p)⁶ ⟹ π(p|4) = [Γ(13)/(Γ(6)Γ(7))] p⁵(1 − p)⁶,
which is a beta distribution with a = 6, b = 7, and θ = 1


80.4 (i) Letting q = 1/(1+β), we can write
p_k = [r(r+1)···(r+k−1)/k!] (1/(1+β))^r (β/(1+β))^k
    = [(r−1)! r(r+1)···(r+k−1)/(k!(r−1)!)] (1/(1+β))^r (1 − 1/(1+β))^k
    = [Γ(r+k)/(Γ(r)Γ(k+1))] q^r (1 − q)^k.
(ii) The model distribution is
f(x|q) = [Γ(r+x)/(Γ(r)Γ(x+1))] q^r (1 − q)^x.
The prior distribution is
π(q) = [Γ(a+b)/(Γ(a)Γ(b))] q^{a−1}(1 − q)^{b−1}.
The joint distribution of X and Q is
f(x, q) = f(x|q)π(q) = [Γ(r+x)/(Γ(r)Γ(x+1))][Γ(a+b)/(Γ(a)Γ(b))] q^{a+r−1}(1 − q)^{b+k−1}.
The marginal distribution is
f(x) = [Γ(r+x)/(Γ(r)Γ(x+1))][Γ(a+b)/(Γ(a)Γ(b))] ∫_0^1 q^{a+r−1}(1 − q)^{b+k−1} dq
     = [Γ(r+x)/(Γ(r)Γ(x+1))][Γ(a+b)/(Γ(a)Γ(b))][Γ(a+r)Γ(b+k)/Γ(a+b+k+r)],
where the beta integral is evaluated by inserting the normalizing constant Γ(a+b+k+r)/(Γ(a+r)Γ(b+k)), whose density integrates to 1.
The posterior distribution is
π(q|x) = [Γ(a+b+k+r)/(Γ(a+r)Γ(b+k))] q^{a+r−1}(1 − q)^{b+k−1},
which is a beta distribution with a′ = a + r and b′ = b + k


80.6 15

Section 81
81.1 The unbiasedness equation yields
α̃_0 + μ Σ_{j=1}^n α̃_j = μ,
which implies
Σ_{j=1}^n α̃_j = 1 − α̃_0/μ

81.2 For i = 1, 2, ..., n, we have
Σ_{j=1, j≠i}^n α̃_j ρσ² + σ² α̃_i = ρσ²
or equivalently
ρ Σ_{j=1}^n α̃_j + α̃_i(1 − ρ) = ρ

81.3 Problem 81.2 followed by Problem 81.1 gives
α̃_i = ρ(1 − Σ_{j=1}^n α̃_j)/(1 − ρ) = ρ α̃_0/(μ(1 − ρ))

81.4 From Problem 81.3, we find
Σ_{j=1}^n α̃_j = nρ α̃_0/(μ(1 − ρ)).
This combined with Problem 81.1 yields the equation
1 − α̃_0/μ = nρ α̃_0/(μ(1 − ρ)).
Solving this equation, we find
α̃_0 = μ(1 − ρ)/(1 − ρ + nρ).
Plugging this into Problem 81.3, we find
α̃_i = ρ/(1 − ρ + nρ)

81.5 The credibility premium is
α̃_0 + Σ_{j=1}^n α̃_j X_j = μ(1 − ρ)/(1 − ρ + nρ) + Σ_{j=1}^n ρX_j/(1 − ρ + nρ) = (1 − Z)μ + ZX̄,
where
Z = nρ/(1 − ρ + nρ)

Section 82
82.1 10,622

82.2 0.85651
82.3 0.22
82.4 1063.47
82.5 3
82.6 14
82.7 2/1.5²
82.8 8.33
82.9 1/9

82.10 3.27

Section 83
83.1 [n/(n + 1)] X̄ + [1/(n + 1)] μ(θ)
83.2 0.9375
83.3 0.905
83.4 1
83.5 0.93
83.6 8.69
83.7 0.428
83.8 0.8

Section 84

84.1 (a) We have
Var[(m_i X_i + m_j X_j)/(m_i + m_j) | Θ]
= [m_i/(m_i+m_j)]² Var(X_i|Θ) + [m_j/(m_i+m_j)]² Var(X_j|Θ)
= [m_i/(m_i+m_j)]² [w(Θ) + v(Θ)/m_i] + [m_j/(m_i+m_j)]² [w(Θ) + v(Θ)/m_j]
= [(m_i² + m_j²)/(m_i + m_j)²] w(Θ) + v(Θ)/(m_i + m_j).
(b) We have
E(X_i) = E[E(X_i|Θ)] = E[μ(Θ)] = μ
Cov(X_i, X_j) = E(X_i X_j) − E(X_i)E(X_j)
             = E[E(X_i X_j|Θ)] − E[μ(Θ)]²
             = E[E(X_i|Θ)E(X_j|Θ)] − E[μ(Θ)]²  (by independence)
             = E[μ(Θ)²] − E[μ(Θ)]²
             = Var[μ(Θ)] = a
Var(X_i) = E[Var(X_i|Θ)] + Var[E(X_i|Θ)]
         = E[w(Θ) + v(Θ)/m_i] + Var[μ(Θ)]
         = w + v/m_i + a

84.2 The unbiasedness equation is
E(X_{n+1}) = μ = α̃_0 + μ Σ_{i=1}^n α̃_i ⟹ Σ_{i=1}^n α̃_i = 1 − α̃_0/μ.
For i = 1, 2, ..., n, the normal equations become
a = Σ_{j=1, j≠i}^n α̃_j a + α̃_i (a + w + v/m_i)
  = Σ_{j=1}^n α̃_j a + α̃_i (w + v/m_i)
  = a(1 − α̃_0/μ) + α̃_i (w + v/m_i).
Solving this equation for α̃_i, we find
α̃_i = (a α̃_0/μ)/(w + v/m_i) = (a α̃_0/μ) m_i/(v + w m_i).
Summing both sides from 1 to n, we find
(a α̃_0/μ) Σ_{j=1}^n m_j/(v + w m_j) = 1 − α̃_0/μ.
Solving this equation, we find
α̃_0 = μ/(1 + am), where m = Σ_{j=1}^n m_j/(v + w m_j).

84.3 We have
α̃_0 + Σ_{j=1}^n α̃_j X_j = μ/(1 + am) + [a/(1 + am)] Σ_{j=1}^n m_j X_j/(v + w m_j)
                        = ZX̄ + (1 − Z)μ,
where Z = am/(1 + am) and X̄ = (1/m) Σ_{j=1}^n m_j X_j/(v + w m_j).

84.4 2.4

84.5 12
84.6 11.13
84.7 257.11
84.8 4/3
84.9 (A) is false. That statement is true for the Bühlmann model; the Bühlmann-Straub model allows for variation in size and exposure.
(B) is false. The model is valid for any type of distribution.
(C) is false. There is no cap on the number of exposures.
Thus, the answer to the problem is (E)
84.10 n/(n + w/a)

Section 85
85.1 Problem 66.3 shows that the posterior distribution is an inverse Gamma distribution with
parameters α′ = α + n and β′ = Σ_{i=1}^n xᵢ + β. The hypothetical mean is
μ(θ) = E(Xᵢ|θ) = θ
and the Bayesian premium is
E(X_{n+1}|X) = ∫_0^∞ θ (Σxᵢ + β)^{α+n} e^{−(Σxᵢ+β)/θ} / (θ^{α+n+1} Γ(α+n)) dθ
             = [(nX̄ + β)/(α + n − 1)] ∫_0^∞ (Σxᵢ + β)^{α+n−1} e^{−(Σxᵢ+β)/θ} / (θ^{α+n} Γ(α+n−1)) dθ
             = (nX̄ + β)/(α + n − 1).
Note that the Bayesian premium is a linear function of X₁, X₂, ..., Xₙ.
Next, we find the Bühlmann credibility premium. We have
μ(θ) = E(Xᵢ|θ) = θ
μ = E(Θ) = β/(α − 1)
v(θ) = Var(Xᵢ|θ) = θ²
v = E(Θ²) = β²/((α − 1)(α − 2))
a = Var(Θ) = β²/((α − 1)²(α − 2))
k = v/a = α − 1
Z = n/(n + k) = n/(n + α − 1)
Pc = ZX̄ + (1 − Z)μ = [n/(n + α − 1)]X̄ + [(α − 1)/(n + α − 1)] β/(α − 1) = (nX̄ + β)/(n + α − 1).
Thus, the Bühlmann credibility premium equals the Bayesian premium
85.2 Problem 66.10 shows that the posterior distribution has a beta distribution with parameters
a′ = a + Σ_{i=1}^n xᵢ and b′ = b + nm − Σ_{i=1}^n xᵢ. The hypothetical mean is
μ(q) = E(Xᵢ|Q) = mq
and the Bayesian premium is
E(X_{n+1}|X) = ∫_0^1 mq [Γ(a+b+nm)/(Γ(a + Σxᵢ)Γ(b + nm − Σxᵢ))] q^{a+Σxᵢ−1}(1 − q)^{b+nm−Σxᵢ−1} dq
             = [m(a + nx̄)/(a + b + mn)] ∫_0^1 [Γ(a+b+nm+1)/(Γ(a + Σxᵢ + 1)Γ(b + nm − Σxᵢ))] q^{a+Σxᵢ+1−1}(1 − q)^{b+nm−Σxᵢ−1} dq
             = m(a + nx̄)/(a + b + mn).
Note that the Bayesian premium is a linear function of X₁, X₂, ..., Xₙ.
Next, we find the Bühlmann credibility premium. We have
μ(q) = E(Xᵢ|Q) = mq
μ = E(mQ) = mE(Q) = ma/(a + b)
v(q) = Var(Xᵢ|Q) = mq(1 − q)
v = E[mQ(1 − Q)] = mab/((a + b)(a + b + 1))
a = Var[mQ] = m² Var(Q) = m²ab/((a + b)²(a + b + 1))
k = v/a = (a + b)/m
Z = n/(n + k) = nm/(nm + a + b)
Pc = ZX̄ + (1 − Z)μ = m(a + nx̄)/(a + b + nm).
Thus, the Bühlmann credibility premium equals the Bayesian premium
85.3 By Example 66.3 the posterior distribution has a normal distribution with mean
(Σxᵢ/σ² + μ/a²)(n/σ² + 1/a²)⁻¹ and variance (n/σ² + 1/a²)⁻¹. The hypothetical mean is
μ(θ) = E(Xᵢ|θ) = θ
and the Bayesian premium is
E(X_{n+1}|X) = E[Θ|X] = (Σxᵢ/σ² + μ/a²)(n/σ² + 1/a²)⁻¹.
Note that the Bayesian premium is a linear function of X₁, X₂, ..., Xₙ.
Next, we find the Bühlmann credibility premium. We have
μ(θ) = θ
μ = E(Θ)
v(θ) = Var(Xᵢ|θ) = σ²
v = E(σ²) = σ²
a = Var(Θ) = a²
k = v/a = σ²/a²
Z = n/(n + k) = na²/(na² + σ²)
Pc = ZX̄ + (1 − Z)μ
   = na²x̄/(na² + σ²) + σ²μ/(na² + σ²)
   = (Σxᵢ/σ² + μ/a²)(n/σ² + 1/a²)⁻¹.
Thus, the Bühlmann credibility premium equals the Bayesian premium
85.4 0.0182
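The equality proved algebraically in 85.1 (Bühlmann premium = Bayesian premium for exponential claims with an inverse Gamma prior) can be spot-checked; the data and parameters below are hypothetical:

```python
def bayesian_premium(xs, alpha, beta):
    """(n*xbar + beta)/(alpha + n - 1), from the posterior inverse Gamma."""
    n = len(xs)
    return (sum(xs) + beta) / (alpha + n - 1)

def buhlmann_premium(xs, alpha, beta):
    """Z*xbar + (1 - Z)*mu with mu = beta/(alpha-1) and k = alpha - 1."""
    n = len(xs)
    xbar = sum(xs) / n
    mu = beta / (alpha - 1)
    Z = n / (n + (alpha - 1))
    return Z * xbar + (1 - Z) * mu

xs = [120.0, 80.0, 150.0, 95.0]
p1 = bayesian_premium(xs, alpha=3.0, beta=200.0)
p2 = buhlmann_premium(xs, alpha=3.0, beta=200.0)
```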

Section 86
86.1 0.818
86.2 786.375
86.3 0.78
86.4 0.8718

Section 87
87.1 0.3682
87.2 0.4987

87.3 0.852
87.4 1.351
87.5 98.26
87.6 0.323
87.7 7.56

Section 88
88.1 a = v 2
88.2 0.221
88.3 0.3928
88.4 0.6333
88.5 0.5747
88.6 0.2659
88.7 0.023209

Section 89
89.1 1000
89.2 1
89.3 (D)
89.4 2212.76
89.5 3477.81

89.6 7

Section 90
90.1 522.13
90.2 228,503
90.3 224.44
90.4 88.75
90.5 41.897
90.6 35.7
90.7 630.79

Section 91
91.1 We have the following sequence of calculations:
k = [pn] + 1 = [3] + 1 = 4
VaR̂_p(X) = 123
TVaR̂_p(X) = [1/(n − k + 1)] Σ_{i=k}^n y_i
           = [1/(10 − 4 + 1)](123 + 150 + 153 + 189 + 190 + 195 + 200) = 171.43
s_p² = [1/(n − k)] Σ_{i=k}^n (y_i − TVaR̂_p(X))²
     = [1/(10 − 4)][(123 − 171.43)² + (150 − 171.43)² + (153 − 171.43)² + (189 − 171.43)² + (190 − 171.43)² + (195 − 171.43)² + (200 − 171.43)²] = 861.61
Var̂(TVaR̂_p(X)) = (s_p² + p[TVaR̂_p(X) − VaR̂_p(X)]²)/(n − k + 1)
               = (861.61 + 0.3(123 − 171.43)²)/(10 − 4 + 1) = 223.52
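The 91.1 computation can be sketched directly from the seven largest of the n = 10 ordered observations given in the problem (carrying full precision, so the last digit can differ slightly from the book's rounded intermediate steps):

```python
from math import floor

def empirical_tvar(tail, n, p):
    """VaR, TVaR, s^2 and the TVaR variance estimate from the k-th
    through n-th order statistics, with k = floor(p*n) + 1."""
    k = floor(p * n + 1e-9) + 1   # epsilon guards against 0.3*10 = 2.999...
    assert len(tail) == n - k + 1, "tail must hold y_k, ..., y_n"
    var_p = tail[0]               # VaR estimate: y_k
    tvar = sum(tail) / (n - k + 1)
    s2 = sum((y - tvar) ** 2 for y in tail) / (n - k)
    var_tvar = (s2 + p * (tvar - var_p) ** 2) / (n - k + 1)
    return var_p, tvar, s2, var_tvar

var_p, tvar, s2, var_tvar = empirical_tvar(
    [123, 150, 153, 189, 190, 195, 200], n=10, p=0.3)
```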

Section 92
92.1 1
92.2 (A)
92.3 44/9
92.4 0.0131
92.5 214
