
Stochastic Calculus for Finance II - Some Solutions to Chapter III

Matthias Thul*

Last Update: February 19, 2012

* The author can be contacted via #firstname#.#lastname#@googlemail.com and http://www.matthiasthul.com.


Exercise 3.1
We first note that for $u_1 < u_2$, the Brownian increment $W(u_2) - W(u_1)$ is independent of the $\sigma$-algebra $\mathcal{F}(u_1)$ by Definition 3.3.3(iii). By Definition 2.2.3, the random variable $X = W(u_2) - W(u_1)$ is independent of the $\sigma$-algebra $\mathcal{F}(u_1)$ if
$$\mathbb{P}(A \cap B) = \mathbb{P}(A) \, \mathbb{P}(B)$$
for all $A \in \sigma(X)$ and $B \in \mathcal{F}(u_1)$. By Definition 3.3.3(i), information accumulates and every set in $\mathcal{F}(t)$ for $t < u_1$ is also in $\mathcal{F}(u_1)$. Thus, we have
$$\mathbb{P}(A \cap C) = \mathbb{P}(A) \, \mathbb{P}(C)$$
for all $A \in \sigma(X)$ and $C \in \mathcal{F}(t)$, and it follows that the increment $W(u_2) - W(u_1)$ is independent of $\mathcal{F}(t)$.
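This independence can also be illustrated numerically. Below is a minimal Monte Carlo sketch (not part of the original solution; the times, event thresholds, and sample size are arbitrary illustrative choices) that estimates $\mathbb{P}(A \cap B)$ and $\mathbb{P}(A)\,\mathbb{P}(B)$ for one particular $A \in \sigma(X)$ and $B \in \mathcal{F}(t)$:

```python
import numpy as np

# Monte Carlo check that W(u2) - W(u1) is independent of F(t) for t < u1 < u2.
# Times and events are arbitrary illustrative choices.
rng = np.random.default_rng(0)
t, u1, u2, n = 1.0, 2.0, 3.0, 500_000

W_t = np.sqrt(t) * rng.standard_normal(n)              # W(t)
increment = np.sqrt(u2 - u1) * rng.standard_normal(n)  # W(u2) - W(u1)

A = increment > 0.5   # event in sigma(W(u2) - W(u1))
B = W_t < -0.2        # event in F(t)

print(np.mean(A & B), np.mean(A) * np.mean(B))  # both approximately equal
```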
Exercise 3.2
\begin{align*}
\mathbb{E}\left[\left. W^2(t) - t \right| \mathcal{F}(s)\right] &= \mathbb{E}\left[\left. (W(t) - W(s))^2 + 2 W(t) W(s) - W^2(s) - t \right| \mathcal{F}(s)\right]\\
&= \mathbb{E}\left[(W(t) - W(s))^2\right] + 2 W(s) \mathbb{E}\left[\left. W(t) \right| \mathcal{F}(s)\right] - W^2(s) - t\\
&= \mathrm{Var}\left(W(t) - W(s)\right) + 2 W^2(s) - W^2(s) - t\\
&= t - s + W^2(s) - t\\
&= W^2(s) - s \quad \text{(q.e.d.)}
\end{align*}

In the second step, we used that the Brownian increment $W(t) - W(s)$ is independent of $\mathcal{F}(s)$ by Definition 3.3.3(iii) and that, by Theorem 2.2.5, any function of this increment is also independent of the $\sigma$-algebra. Furthermore, $W(s)$ is $\mathcal{F}(s)$-measurable and can thus be taken outside the conditional expectation.

In the third step, we used that the expected value of the Brownian increment is zero by Definition 3.3.1 to obtain the first term and the martingale property of Brownian motion from Theorem 3.3.4 to get the second term.
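As a quick sanity check, one can verify the martingale identity by simulation. The sketch below (with an arbitrary choice of $s$, $t$ and the conditioning value $W(s) = w$) estimates $\mathbb{E}\left[\left. W^2(t) - t \right| W(s) = w\right]$ and compares it to $w^2 - s$:

```python
import numpy as np

# Check E[W^2(t) - t | F(s)] = W^2(s) - s by conditioning on a fixed value
# W(s) = w (an arbitrary illustrative choice).
rng = np.random.default_rng(1)
s, t, w, n = 1.0, 4.0, 0.7, 1_000_000

W_t = w + np.sqrt(t - s) * rng.standard_normal(n)  # W(t) given W(s) = w
print(np.mean(W_t**2 - t))   # approximately w**2 - s
print(w**2 - s)
```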
Exercise 3.3 (Normal kurtosis)
The moment-generating function of $X \sim N(\mu, \sigma^2)$ is $\varphi(u) = \mathbb{E}\left[e^{u (X - \mu)}\right] = e^{\frac{1}{2} \sigma^2 u^2}$. Its third and fourth derivatives are
\begin{align*}
\varphi'''(u) &= \mathbb{E}\left[(X - \mu)^3 e^{u (X - \mu)}\right] = \left(3 u \sigma^4 + u^3 \sigma^6\right) e^{\frac{1}{2} \sigma^2 u^2}\\
\varphi''''(u) &= \mathbb{E}\left[(X - \mu)^4 e^{u (X - \mu)}\right] = \left(3 \sigma^4 + 6 u^2 \sigma^6 + u^4 \sigma^8\right) e^{\frac{1}{2} \sigma^2 u^2}
\end{align*}
and it follows that
$$\mathbb{E}\left[(X - \mu)^4\right] = \varphi''''(0) = 3 \sigma^4 \quad \text{(q.e.d.)}$$
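The differentiation is mechanical and can be verified symbolically, e.g. with sympy; the short sketch below assumes only the moment-generating function $\varphi(u) = e^{\frac{1}{2} \sigma^2 u^2}$ stated above:

```python
import sympy as sp

# Symbolic verification of the third and fourth derivatives of the normal
# moment-generating function phi(u) = E[exp(u(X - mu))] = exp(sigma^2 u^2 / 2).
u, sigma = sp.symbols('u sigma', positive=True)
phi = sp.exp(sp.Rational(1, 2) * sigma**2 * u**2)

d3 = sp.diff(phi, u, 3)
d4 = sp.diff(phi, u, 4)
print(sp.simplify(d3 / phi))  # equals 3*u*sigma**4 + u**3*sigma**6 (maybe factored)
print(d4.subs(u, 0))          # 3*sigma**4, the kurtosis result
```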
Exercise 3.4 (Other variations of Brownian motion)
(i) As given in the hint, we have
$$\sum_{j=0}^{n-1} \left(W(t_{j+1}) - W(t_j)\right)^2 \leq \max_{0 \leq k \leq n-1} \left|W(t_{k+1}) - W(t_k)\right| \sum_{j=0}^{n-1} \left|W(t_{j+1}) - W(t_j)\right|.$$
In the limit as the number of partition points increases, the left-hand side converges almost surely to the quadratic variation
$$\mathbb{P}\left\{\lim_{\|\Pi\| \to 0} \sum_{j=0}^{n-1} \left(W(t_{j+1}) - W(t_j)\right)^2 = T\right\} = 1.$$
The first term on the right-hand side converges almost surely to zero since Brownian motion has continuous sample paths almost surely by Theorem 3.3.2, i.e.
$$\mathbb{P}\left\{\lim_{\|\Pi\| \to 0} \max_{0 \leq k \leq n-1} \left|W(t_{k+1}) - W(t_k)\right| = 0\right\} = 1.$$
Rearranging the inequality gives
$$\sum_{j=0}^{n-1} \left|W(t_{j+1}) - W(t_j)\right| \geq \frac{\sum_{j=0}^{n-1} \left(W(t_{j+1}) - W(t_j)\right)^2}{\max_{0 \leq k \leq n-1} \left|W(t_{k+1}) - W(t_k)\right|}.$$
Now, since the numerator converges almost surely to a positive constant and the denominator converges almost surely to zero, it follows that the fraction converges almost surely to plus infinity. Since the fraction is bounded from above by the left-hand side, it follows that
$$\mathbb{P}\left\{\lim_{\|\Pi\| \to 0} \sum_{j=0}^{n-1} \left|W(t_{j+1}) - W(t_j)\right| = \infty\right\} = 1.$$
(ii) Similar to (i), the sample cubic variation can be bounded by
$$0 \leq \sum_{j=0}^{n-1} \left|W(t_{j+1}) - W(t_j)\right|^3 \leq \max_{0 \leq k \leq n-1} \left|W(t_{k+1}) - W(t_k)\right| \sum_{j=0}^{n-1} \left(W(t_{j+1}) - W(t_j)\right)^2.$$
As argued in (i), the first term on the right-hand side converges almost surely to zero and the second term converges almost surely to the quadratic variation. Thus, the right-hand side converges almost surely to zero and consequently the sample cubic variation converges almost surely to zero as well, i.e.
$$\mathbb{P}\left\{\lim_{\|\Pi\| \to 0} \sum_{j=0}^{n-1} \left|W(t_{j+1}) - W(t_j)\right|^3 = 0\right\} = 1.$$
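Both limits can be observed numerically. The following minimal sketch (path discretization and sample sizes are arbitrary choices) computes the sampled first-order, quadratic and cubic variations of a simulated Brownian path on $[0, T]$ for successively finer uniform partitions:

```python
import numpy as np

# Simulate a Brownian path on [0, T] and compute its sampled first-order,
# quadratic and cubic variations as the partition is refined.
rng = np.random.default_rng(2)
T = 1.0
for n in (100, 10_000, 1_000_000):
    dW = np.sqrt(T / n) * rng.standard_normal(n)  # increments W(t_{j+1}) - W(t_j)
    print(n,
          np.sum(np.abs(dW)),     # first-order variation -> infinity
          np.sum(dW**2),          # quadratic variation   -> T
          np.sum(np.abs(dW)**3))  # cubic variation       -> 0
```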
Exercise 3.5 (Black-Scholes-Merton formula)
Since $W(T)$ is known to be $N(0, T)$ normally distributed, or equivalently $\frac{W(T)}{\sqrt{T}}$ is $N(0, 1)$ standard normally distributed, Theorem 1.5.2 allows us to compute the expectation as
$$\mathbb{E}\left[e^{-rT} (S(T) - K)^+\right] = e^{-rT} \int_{-\infty}^{\infty} \left(S(0) \exp\left\{\left(r - \frac{1}{2} \sigma^2\right) T + \sigma \sqrt{T} z\right\} - K\right)^+ N'(z) \, \mathrm{d}z,$$
where
$$N'(z) = \frac{1}{\sqrt{2 \pi}} \exp\left\{-\frac{z^2}{2}\right\}$$
denotes the standard normal density function. Next, we want to eliminate the max function inside the integral. We observe that the terminal payoff of the call option is non-zero if $S(T) > K$, or equivalently
$$z \geq \frac{\ln\left(\frac{K}{S(0)}\right) - \left(r - \frac{1}{2} \sigma^2\right) T}{\sigma \sqrt{T}} = -d_-.$$
Changing the lower limit of integration from $-\infty$ to $-d_-$ allows us to drop the max function and we get
\begin{align*}
\ldots &= e^{-rT} \int_{-d_-}^{\infty} \left(S(0) \exp\left\{\left(r - \frac{1}{2} \sigma^2\right) T + \sigma \sqrt{T} z\right\} - K\right) N'(z) \, \mathrm{d}z\\
&= e^{-rT} \left[\int_{-d_-}^{\infty} S(0) \exp\left\{\left(r - \frac{1}{2} \sigma^2\right) T + \sigma \sqrt{T} z\right\} N'(z) \, \mathrm{d}z - \int_{-d_-}^{\infty} K N'(z) \, \mathrm{d}z\right].
\end{align*}
The second integral evaluates to
\begin{align*}
\int_{-d_-}^{\infty} K N'(z) \, \mathrm{d}z &= K \int_{-d_-}^{\infty} N'(z) \, \mathrm{d}z\\
&= K \, \mathbb{P}\{Z \geq -d_-\}\\
&= K \, \mathbb{P}\{Z \leq d_-\}\\
&= K N(d_-).
\end{align*}
Here, we exploited the symmetry of the normal distribution in the third step. Using the definition of the standard normal density, we can write the first integral as
\begin{align*}
& \int_{-d_-}^{\infty} S(0) \exp\left\{\left(r - \frac{1}{2} \sigma^2\right) T + \sigma \sqrt{T} z\right\} N'(z) \, \mathrm{d}z\\
={}& \int_{-d_-}^{\infty} S(0) \exp\left\{\left(r - \frac{1}{2} \sigma^2\right) T + \sigma \sqrt{T} z\right\} \frac{1}{\sqrt{2 \pi}} \exp\left\{-\frac{z^2}{2}\right\} \mathrm{d}z\\
={}& S(0) \exp\left\{\left(r - \frac{1}{2} \sigma^2\right) T\right\} \int_{-d_-}^{\infty} \frac{1}{\sqrt{2 \pi}} \exp\left\{-\frac{z^2 - 2 \sigma \sqrt{T} z}{2}\right\} \mathrm{d}z\\
={}& S(0) \exp\left\{\left(r - \frac{1}{2} \sigma^2\right) T\right\} \int_{-d_-}^{\infty} \frac{1}{\sqrt{2 \pi}} \exp\left\{-\frac{z^2 - 2 \sigma \sqrt{T} z + \sigma^2 T}{2} + \frac{\sigma^2 T}{2}\right\} \mathrm{d}z\\
={}& S(0) e^{rT} \int_{-d_-}^{\infty} \frac{1}{\sqrt{2 \pi}} \exp\left\{-\frac{\left(z - \sigma \sqrt{T}\right)^2}{2}\right\} \mathrm{d}z.
\end{align*}
We now make a change of variable by defining $x = z - \sigma \sqrt{T}$ and get
\begin{align*}
\ldots &= S(0) e^{rT} \int_{-d_- - \sigma \sqrt{T}}^{\infty} \frac{1}{\sqrt{2 \pi}} \exp\left\{-\frac{x^2}{2}\right\} \mathrm{d}x\\
&= S(0) e^{rT} \, \mathbb{P}\left\{X \geq -d_- - \sigma \sqrt{T}\right\}\\
&= S(0) e^{rT} \, \mathbb{P}\left\{X \leq d_- + \sigma \sqrt{T}\right\}\\
&= S(0) e^{rT} N\left(d_- + \sigma \sqrt{T}\right).
\end{align*}
Defining $d_+ = d_- + \sigma \sqrt{T}$ and combining all previous results yields
$$\mathbb{E}\left[e^{-rT} (S(T) - K)^+\right] = S(0) N(d_+) - K e^{-rT} N(d_-).$$
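A Monte Carlo sanity check of this formula is straightforward; the sketch below (all parameter values are arbitrary illustrative choices) compares a simulated price of the discounted call payoff with the closed-form expression:

```python
import numpy as np
from scipy.stats import norm

# Monte Carlo check of the Black-Scholes-Merton formula derived above.
# Parameter values are arbitrary illustrative choices.
rng = np.random.default_rng(3)
S0, K, r, sigma, T, n = 100.0, 105.0, 0.05, 0.2, 1.0, 2_000_000

Z = rng.standard_normal(n)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
mc_price = np.exp(-r * T) * np.mean(np.maximum(ST - K, 0.0))

d_minus = (np.log(S0 / K) + (r - 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
d_plus = d_minus + sigma * np.sqrt(T)
bs_price = S0 * norm.cdf(d_plus) - K * np.exp(-r * T) * norm.cdf(d_minus)

print(mc_price, bs_price)  # the two values should agree to a few decimals
```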
Exercise 3.6
(i) We can rewrite the expectation as
$$\mathbb{E}\left[\left. f(X(t)) \right| \mathcal{F}(s)\right] = \mathbb{E}\left[\left. f(X(s) + X(t) - X(s)) \right| \mathcal{F}(s)\right].$$
Note that $X(s) = \mu s + W(s)$ is $\mathcal{F}(s)$-measurable while the increment $X(t) - X(s) = \mu (t - s) + W(t) - W(s)$ is independent of $\mathcal{F}(s)$. By Lemma 2.3.4, we have
$$\mathbb{E}\left[\left. f(X(s) + X(t) - X(s)) \right| \mathcal{F}(s)\right] = g(X(s)),$$
where
$$g(x) = \mathbb{E}\left[f(x + X(t) - X(s))\right].$$
Since $X(t) - X(s)$ is normally distributed with mean $\mu (t - s)$ and variance $t - s$, we can compute this expectation by Theorem 1.5.2 via
$$g(x) = \int_{-\infty}^{\infty} f(x + z) \frac{1}{\sqrt{2 \pi (t - s)}} \exp\left\{-\frac{(z - \mu (t - s))^2}{2 (t - s)}\right\} \mathrm{d}z.$$
Making a change of variable by defining $y = x + z$ and rearranging yields the desired result
$$g(x) = \frac{1}{\sqrt{2 \pi (t - s)}} \int_{-\infty}^{\infty} f(y) \exp\left\{-\frac{(y - x - \mu (t - s))^2}{2 (t - s)}\right\} \mathrm{d}y.$$
(ii) Analogous to (i), we can rewrite the expectation as
$$\mathbb{E}\left[\left. f(S(t)) \right| \mathcal{F}(s)\right] = \mathbb{E}\left[\left. f\left(S(s) \frac{S(t)}{S(s)}\right) \right| \mathcal{F}(s)\right],$$
where $S(s) = S(0) \exp\{\sigma W(s) + \alpha s\}$ is $\mathcal{F}(s)$-measurable and the ratio $\frac{S(t)}{S(s)} = \exp\{\sigma (W(t) - W(s)) + \alpha (t - s)\}$ is independent of $\mathcal{F}(s)$. By Lemma 2.3.4, we have
$$\mathbb{E}\left[\left. f\left(S(s) \frac{S(t)}{S(s)}\right) \right| \mathcal{F}(s)\right] = g(S(s)),$$
where
$$g(x) = \mathbb{E}\left[f\left(x \frac{S(t)}{S(s)}\right)\right].$$
Since $\sigma (W(t) - W(s)) + \alpha (t - s)$ is normally distributed with mean $\alpha (t - s)$ and variance $\sigma^2 (t - s)$, it follows that the ratio $\frac{S(t)}{S(s)}$ is log-normally distributed with the same parameters. The expectation can be computed as
$$g(x) = \int_0^{\infty} f(x z) \frac{1}{z \sqrt{2 \pi \sigma^2 (t - s)}} \exp\left\{-\frac{(\ln z - \alpha (t - s))^2}{2 \sigma^2 (t - s)}\right\} \mathrm{d}z.$$
Making a change of variable by defining $y = x z$ with $\mathrm{d}y = x \, \mathrm{d}z$ yields the desired result
$$g(x) = \int_0^{\infty} f(y) \frac{1}{y \sqrt{2 \pi \sigma^2 (t - s)}} \exp\left\{-\frac{\left(\ln\left(\frac{y}{x}\right) - \alpha (t - s)\right)^2}{2 \sigma^2 (t - s)}\right\} \mathrm{d}y.$$
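The transition formula can be checked numerically by comparing a Monte Carlo estimate of $\mathbb{E}\left[f\left(x \frac{S(t)}{S(s)}\right)\right]$ with a quadrature of the derived integral; in the sketch below, $f$, $x$ and all parameter values are arbitrary illustrative choices:

```python
import numpy as np
from scipy.integrate import quad

# Check g(x) = E[f(x * S(t)/S(s))] for geometric Brownian motion against the
# derived density integral. f, x and all parameters are illustrative choices.
rng = np.random.default_rng(4)
alpha, sigma, s, t, x = 0.1, 0.3, 1.0, 2.0, 1.5
f = np.sqrt

# Monte Carlo: the ratio S(t)/S(s) is log-normal with the stated parameters.
ratio = np.exp(alpha * (t - s)
               + sigma * np.sqrt(t - s) * rng.standard_normal(1_000_000))
print(np.mean(f(x * ratio)))

# Quadrature of the derived formula for g(x).
v = sigma**2 * (t - s)
integrand = lambda y: f(y) / (y * np.sqrt(2 * np.pi * v)) * np.exp(
    -(np.log(y / x) - alpha * (t - s))**2 / (2 * v))
print(quad(integrand, 0.0, np.inf)[0])
```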
Exercise 3.7
(i) Substituting $X(t) = \mu t + W(t)$ gives
$$Z(t) = \exp\left\{\sigma X(t) - \left(\sigma \mu + \frac{1}{2} \sigma^2\right) t\right\} = \exp\left\{\sigma W(t) - \frac{1}{2} \sigma^2 t\right\}$$
and by Theorem 3.6.1, $Z(t)$ is an exponential martingale.

(ii) Since $Z(t)$ is a martingale, the stopped process $Z(t \wedge \tau_m)$ is also a martingale and we have
$$1 = Z(0) = \mathbb{E}\left[Z(t \wedge \tau_m)\right] = \mathbb{E}\left[\exp\left\{\sigma X(t \wedge \tau_m) - \left(\sigma \mu + \frac{1}{2} \sigma^2\right) (t \wedge \tau_m)\right\}\right]$$
for all $t \geq 0$.
(iii) We closely follow the argument in Section 3.6. For any time $t \leq \tau_m$, the drifted Brownian motion is at or below the level $m$ and thus we have that for all $t \geq 0$,
$$0 \leq \exp\{\sigma X(t \wedge \tau_m)\} \leq e^{\sigma m}.$$
Furthermore, since we assume that $\mu \geq 0$ and $\sigma > 0$, the term $\sigma \mu + \frac{1}{2} \sigma^2$ is strictly positive and thus
$$\lim_{t \to \infty} \exp\left\{-\left(\sigma \mu + \frac{1}{2} \sigma^2\right) (t \wedge \tau_m)\right\} = \mathbb{I}_{\{\tau_m < \infty\}} \exp\left\{-\left(\sigma \mu + \frac{1}{2} \sigma^2\right) \tau_m\right\}.$$
Now, since the first term is bounded and the second converges to zero when $\tau_m = \infty$, it follows that the whole expression converges to zero in this case. Thus,
$$\lim_{t \to \infty} \exp\left\{\sigma X(t \wedge \tau_m) - \left(\sigma \mu + \frac{1}{2} \sigma^2\right) (t \wedge \tau_m)\right\} = \mathbb{I}_{\{\tau_m < \infty\}} \exp\left\{\sigma m - \left(\sigma \mu + \frac{1}{2} \sigma^2\right) \tau_m\right\}.$$
We now take the limit for $t \to \infty$ inside the martingale equation obtained in (ii). The interchange of limit and expectation is justified by the dominated convergence theorem (Theorem 1.4.9), as we can define a non-negative random variable (constant) $Y = e^{\sigma m} < \infty$ such that
$$\mathbb{P}\left\{Y \geq \exp\left\{\sigma X(t \wedge \tau_m) - \left(\sigma \mu + \frac{1}{2} \sigma^2\right) (t \wedge \tau_m)\right\} \text{ for all } t \geq 0\right\} = 1$$
as argued before. Thus,
\begin{align*}
1 &= \lim_{t \to \infty} \mathbb{E}\left[\exp\left\{\sigma X(t \wedge \tau_m) - \left(\sigma \mu + \frac{1}{2} \sigma^2\right) (t \wedge \tau_m)\right\}\right]\\
&= \mathbb{E}\left[\lim_{t \to \infty} \exp\left\{\sigma X(t \wedge \tau_m) - \left(\sigma \mu + \frac{1}{2} \sigma^2\right) (t \wedge \tau_m)\right\}\right]\\
&= \mathbb{E}\left[\mathbb{I}_{\{\tau_m < \infty\}} \exp\left\{\sigma m - \left(\sigma \mu + \frac{1}{2} \sigma^2\right) \tau_m\right\}\right]
\end{align*}
or
$$\mathbb{E}\left[\mathbb{I}_{\{\tau_m < \infty\}} \exp\left\{-\left(\sigma \mu + \frac{1}{2} \sigma^2\right) \tau_m\right\}\right] = e^{-\sigma m}.$$
Taking the limit for $\sigma \downarrow 0$ yields
$$\mathbb{E}\left[\mathbb{I}_{\{\tau_m < \infty\}}\right] = \mathbb{P}\{\tau_m < \infty\} = 1.$$
Since the stopping time $\tau_m$ is almost surely finite, we can drop the indicator to obtain
$$\mathbb{E}\left[\exp\left\{-\left(\sigma \mu + \frac{1}{2} \sigma^2\right) \tau_m\right\}\right] = e^{-\sigma m}.$$
We define
$$\alpha = \sigma \mu + \frac{1}{2} \sigma^2 \quad \Leftrightarrow \quad \sigma = -\mu \pm \sqrt{2 \alpha + \mu^2}.$$
The condition $\sigma > 0$ is only satisfied by the positive root and we obtain the Laplace transform
$$\mathbb{E}\left[e^{-\alpha \tau_m}\right] = \exp\left\{m \mu - m \sqrt{2 \alpha + \mu^2}\right\}.$$
A numerical check of this transform is sketched after part (v) below.
(iv) Differentiating the Laplace transform w.r.t. $\alpha$ yields
$$\mathbb{E}\left[-\tau_m e^{-\alpha \tau_m}\right] = -\frac{m}{\sqrt{2 \alpha + \mu^2}} \exp\left\{m \mu - m \sqrt{2 \alpha + \mu^2}\right\},$$
which in the limit for $\alpha \downarrow 0$ becomes $\mathbb{E}[\tau_m] = \frac{m}{\mu}$ for $\mu > 0$. Note that for $\mu = 0$, the first term on the right-hand side converges to infinity since we assume that $m > 0$, while the second term converges to a constant, i.e. $\mathbb{E}[\tau_m] = \infty$ for a driftless Brownian motion.
(v) If $\mu < 0$ and $\sigma > 2 |\mu|$, then the term $\sigma \mu + \frac{1}{2} \sigma^2$ is still strictly positive and our analysis in (iii) up to the equation
$$\mathbb{E}\left[\mathbb{I}_{\{\tau_m < \infty\}} \exp\left\{-\left(\sigma \mu + \frac{1}{2} \sigma^2\right) \tau_m\right\}\right] = e^{-\sigma m}$$
still holds. We now take the limit for $\sigma \downarrow -2 \mu = 2 |\mu|$ (since $\mu < 0$) such that the exponential term inside the expectation converges to one and obtain
$$\mathbb{E}\left[\mathbb{I}_{\{\tau_m < \infty\}}\right] = \mathbb{P}\{\tau_m < \infty\} = e^{-2 m |\mu|} < 1.$$
Note that there is a typo in the exercise (at least in the 2004 edition): instead of $\mathbb{P}\{\tau_m < \infty\} = e^{-2 x |\mu|}$ it should read $\mathbb{P}\{\tau_m < \infty\} = e^{-2 m |\mu|}$, which is what we derived above.

In contrast to (iii), $\tau_m$ is infinite with non-zero probability and we cannot simply drop the indicator. Defining $\alpha$ in the same way as in (iii) and again taking the positive root for $\sigma$ gives
$$\mathbb{E}\left[\mathbb{I}_{\{\tau_m < \infty\}} e^{-\alpha \tau_m}\right] = \exp\left\{m \mu - m \sqrt{2 \alpha + \mu^2}\right\}.$$
However, since $e^{-\alpha \tau_m} = 0$ if and only if $\tau_m = \infty$ (recall $\alpha > 0$), the indicator can still be dropped and we obtain the same formula as in (iii).
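As referenced in part (iii), here is a minimal simulation sketch of the Laplace transform $\mathbb{E}\left[e^{-\alpha \tau_m}\right] = \exp\left\{m \mu - m \sqrt{2 \alpha + \mu^2}\right\}$ for $\mu > 0$; the grid size, horizon and parameter values are arbitrary choices, and the grid-based first passage time carries a small discretization bias:

```python
import numpy as np

# Monte Carlo check of the Laplace transform in part (iii): simulate the
# first passage time of X(t) = mu*t + W(t) to the level m on a fine grid.
rng = np.random.default_rng(5)
mu, m, alpha, dt, n_paths, n_steps = 0.5, 1.0, 0.3, 0.01, 2_000, 3_000

X = np.cumsum(mu * dt + np.sqrt(dt) * rng.standard_normal((n_paths, n_steps)),
              axis=1)
hit = (X >= m).argmax(axis=1)              # first grid index at or above m
reached = X[np.arange(n_paths), hit] >= m  # False if the level was never hit
tau = np.where(reached, (hit + 1) * dt, np.inf)

print(np.mean(np.exp(-alpha * tau)))       # exp(-alpha*inf) = 0 on non-hit paths
print(np.exp(m * mu - m * np.sqrt(2 * alpha + mu**2)))
```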
Exercise 3.8
(i) We have
\begin{align*}
\varphi_n(u) &= \mathbb{E}\left[\exp\left\{\frac{u}{\sqrt{n}} M_{nt,n}\right\}\right]\\
&= \mathbb{E}\left[\exp\left\{\frac{u}{\sqrt{n}} \sum_{k=1}^{nt} X_{k,n}\right\}\right]\\
&= \mathbb{E}\left[\prod_{k=1}^{nt} \exp\left\{\frac{u}{\sqrt{n}} X_{k,n}\right\}\right]\\
&= \prod_{k=1}^{nt} \mathbb{E}\left[\exp\left\{\frac{u}{\sqrt{n}} X_{k,n}\right\}\right]\\
&= \left(\mathbb{E}\left[\exp\left\{\frac{u}{\sqrt{n}} X_{1,n}\right\}\right]\right)^{nt}\\
&= \left(e^{u/\sqrt{n}} p_n + e^{-u/\sqrt{n}} q_n\right)^{nt}\\
&= \left(e^{u/\sqrt{n}} \frac{\frac{r}{n} + 1 - e^{-\sigma/\sqrt{n}}}{e^{\sigma/\sqrt{n}} - e^{-\sigma/\sqrt{n}}} + e^{-u/\sqrt{n}} \frac{e^{\sigma/\sqrt{n}} - \frac{r}{n} - 1}{e^{\sigma/\sqrt{n}} - e^{-\sigma/\sqrt{n}}}\right)^{nt} \quad \text{(q.e.d.)}
\end{align*}
Here, we used in the fourth equality that the increments $X_{1,n}, \ldots, X_{nt,n}$ are independent to write the expectation of the product as the product of the expectations. The fifth step uses that the increments are also identically distributed, such that it is sufficient to compute the expectation for $X_{1,n}$.
(ii) First note that, with $x = \frac{1}{\sqrt{n}}$,
$$\varphi_{1/x^2}(u) = \left(\frac{e^{ux} \left(r x^2 + 1 - e^{-\sigma x}\right) + e^{-ux} \left(e^{\sigma x} - r x^2 - 1\right)}{e^{\sigma x} - e^{-\sigma x}}\right)^{t/x^2}.$$
Thus
\begin{align*}
\ln \varphi_{1/x^2}(u) &= \frac{t}{x^2} \ln\left(\frac{e^{ux} \left(r x^2 + 1 - e^{-\sigma x}\right) + e^{-ux} \left(e^{\sigma x} - r x^2 - 1\right)}{e^{\sigma x} - e^{-\sigma x}}\right)\\
&= \frac{t}{x^2} \ln\left(\left(r x^2 + 1\right) \frac{e^{ux} - e^{-ux}}{e^{\sigma x} - e^{-\sigma x}} + \frac{e^{(\sigma - u) x} - e^{-(\sigma - u) x}}{e^{\sigma x} - e^{-\sigma x}}\right)\\
&= \frac{t}{x^2} \ln\left(\frac{\left(r x^2 + 1\right) \sinh(u x) + \sinh((\sigma - u) x)}{\sinh(\sigma x)}\right)\\
&= \frac{t}{x^2} \ln\left(\frac{\left(r x^2 + 1\right) \sinh(u x) + \sinh(\sigma x) \cosh(u x) - \cosh(\sigma x) \sinh(u x)}{\sinh(\sigma x)}\right)\\
&= \frac{t}{x^2} \ln\left(\cosh(u x) + \frac{\left(r x^2 + 1 - \cosh(\sigma x)\right) \sinh(u x)}{\sinh(\sigma x)}\right) \quad \text{(q.e.d.)}
\end{align*}
10
(iii) Using the hint, we get
\begin{align*}
& \cosh(u x) + \frac{\left(r x^2 + 1 - \cosh(\sigma x)\right) \sinh(u x)}{\sinh(\sigma x)}\\
={}& 1 + \frac{1}{2} u^2 x^2 + \mathcal{O}\left(x^4\right) + \frac{\left(r x^2 - \frac{1}{2} \sigma^2 x^2 + \mathcal{O}\left(x^4\right)\right) \left(u x + \mathcal{O}\left(x^3\right)\right)}{\sigma x + \mathcal{O}\left(x^3\right)}\\
={}& 1 + \frac{1}{2} u^2 x^2 + \mathcal{O}\left(x^4\right) + \frac{\left(r - \frac{1}{2} \sigma^2\right) u x^3 \left(1 + \mathcal{O}\left(x^2\right)\right)}{\sigma x \left(1 + \mathcal{O}\left(x^2\right)\right)}\\
={}& 1 + \frac{1}{2} u^2 x^2 + \frac{r u x^2}{\sigma} - \frac{1}{2} \sigma u x^2 + \mathcal{O}\left(x^4\right) \quad \text{(q.e.d.)}
\end{align*}
Here, we used that $u x + \mathcal{O}\left(x^3\right) = u x \left(1 + \mathcal{O}\left(x^2\right)\right)$ in the second equality.
(iv) Finally,
\begin{align*}
\ln \varphi_{1/x^2}(u) &= \frac{t}{x^2} \ln\left(1 + \left(\frac{1}{2} u + \frac{r}{\sigma} - \frac{1}{2} \sigma\right) u x^2 + \mathcal{O}\left(x^4\right)\right)\\
&= \frac{t}{x^2} \left(\left(\frac{1}{2} u + \frac{r}{\sigma} - \frac{1}{2} \sigma\right) u x^2 + \mathcal{O}\left(x^4\right)\right)\\
&= \left(\frac{1}{2} u + \frac{r}{\sigma} - \frac{1}{2} \sigma\right) t u + \mathcal{O}\left(x^2\right).
\end{align*}
Thus,
$$\lim_{x \downarrow 0} \ln \varphi_{1/x^2}(u) = \frac{1}{2} t u^2 + \frac{1}{\sigma} \left(r - \frac{1}{2} \sigma^2\right) t u.$$
Evaluating this limit at $\sigma u$ instead of $u$, it follows that
\begin{align*}
\lim_{n \to \infty} \mathbb{E}\left[\exp\left\{u \frac{\sigma}{\sqrt{n}} M_{nt,n}\right\}\right] &= \lim_{n \to \infty} \varphi_n(\sigma u)\\
&= \lim_{x \downarrow 0} \varphi_{1/x^2}(\sigma u)\\
&= \exp\left\{\lim_{x \downarrow 0} \ln \varphi_{1/x^2}(\sigma u)\right\}\\
&= \exp\left\{\frac{1}{2} u^2 \sigma^2 t + \left(r - \frac{1}{2} \sigma^2\right) t u\right\}.
\end{align*}
We recognize this as the moment generating function of a normal random variable and conclude that $\frac{\sigma}{\sqrt{n}} M_{nt,n}$ converges in distribution to a
$$N\left(\left(r - \frac{1}{2} \sigma^2\right) t, \, \sigma^2 t\right)$$
random variable (q.e.d.).
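The distributional limit can be illustrated numerically: the sketch below (parameter values are arbitrary illustrative choices) samples $M_{nt,n}$ for a large $n$ via its binomial representation and compares the mean and variance of $\frac{\sigma}{\sqrt{n}} M_{nt,n}$ with $\left(r - \frac{1}{2} \sigma^2\right) t$ and $\sigma^2 t$:

```python
import numpy as np

# Numerical illustration of the limit: for large n the scaled sum
# sigma*M_{nt,n}/sqrt(n) is approximately N((r - sigma^2/2)t, sigma^2 t).
rng = np.random.default_rng(6)
r, sigma, t, n, n_paths = 0.05, 0.2, 1.0, 100_000, 200_000

up, down = np.exp(sigma / np.sqrt(n)), np.exp(-sigma / np.sqrt(n))
p = (1 + r / n - down) / (up - down)        # risk-neutral up-probability p_n

steps = int(n * t)
ups = rng.binomial(steps, p, size=n_paths)  # number of +1 increments among nt
M = 2 * ups - steps                         # M_{nt,n} as a sum of +-1 increments
scaled = sigma * M / np.sqrt(n)

print(scaled.mean(), (r - 0.5 * sigma**2) * t)  # means agree
print(scaled.var(), sigma**2 * t)               # variances agree
```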