
Stochastic Analysis.

Lecturer: Xue-Mei Li, Support class: Sebastian Vollmer


1 Problem Sheet 0
Problems on Measure and Integration
Let $(E,\mathcal{B},\mu)$ be a measure space. For $p\ge 1$ denote by $L^p$ the equivalence classes of functions $f: E\to\mathbb{R}$ with
$$L^p(E,\mathcal{B},\mu):=\Big\{f: E\to\mathbb{R}:\ \int |f(x)|^p\,\mu(dx)<\infty\Big\}.$$
This is a Banach space with norm $\|f\|_{L^p}=\big(\int |f(x)|^p\,\mu(dx)\big)^{1/p}$. A family of functions $f_n:(E,\mathcal{B})\to\mathbb{R}$ is said to be uniformly integrable (u.i.) if for any $\varepsilon>0$ there is $A>0$ such that if $a>A$ then
$$\int_{\{|f_n(x)|>a\}}|f_n(x)|\,\mu(dx)<\varepsilon\quad\text{for all }n.$$
A sequence of functions $f_n$ is said to converge to $f$ in measure if $\lim_n\mu(|f_n-f|>\varepsilon)=0$ for any $\varepsilon>0$. For a finite measure, almost sure convergence implies convergence in measure.
1. (a) Prove Markov's inequality for a non-negative function $f$:
$$\mu(f>a)\le\frac{1}{a}\int f(x)\,d\mu(x),$$
and Chebyshev's inequality:
$$\mu\Big(\Big|f(x)-\int_E f(x)\,\mu(dx)\Big|\ge a\Big)\le\frac{1}{a^2}\int_E\Big(f(x)-\int_E f(x)\,\mu(dx)\Big)^2\mu(dx).$$
(b) Let $f:(E,\mathcal{B},\mu)\to\mathbb{R}$ be an integrable function. Show that $\mu(x: |f(x)|>a)$ decays at least linearly in $1/a$. What do you deduce if $f\in L^p$ for some $p=2,3,\dots$?
(c) Show that if for some $p>1$, $\|f_n\|_{L^p}\le C$ for all $n$, then $(f_n)$ is uniformly integrable.
2. Show that if $(X_t,\ t\in I)$ is a family of uniformly integrable random variables, then it is $L^1$ bounded (i.e. $\sup_t\int|X_t|\,d\mu<\infty$).
3. Consider the family of functions $f_n:[0,1]\to\mathbb{R}$,
$$f_n(x)=\begin{cases}c_n,& x\in[0,\tfrac1n],\\ 0,& x\in(\tfrac1n,1].\end{cases}$$
Indicate conditions on $c_n$ so that $(f_n)$ is uniformly integrable.
4. Let $f_n, f\in L^1$. Show that if $(f_n)$ is uniformly integrable and $f_n\to f$ in measure, then $f_n$ converges to $f$ in $L^1$ and
$$\lim_n\int f_n\,d\mu=\int f\,d\mu.$$
5. For finite measures show that $L^p\subset L^q$ if $1\le q<p$.
6. Construct Lebesgue–Stieltjes integrals. Give an example of a right continuous increasing function $F:[0,\infty)\to[0,1]$ with $F(0)=0$. Construct the Riemann–Stieltjes measure $\mu_F$ on $[0,\infty)$ that is associated to $F$. Let $G(x)=F(x)+1$. Relate $\mu_F$ to $\mu_G$. In your case interpret the integral $\int f(x)\,d\mu_F(x)$.
Problems on Conditional Expectation
1. Show that if $(X,Y)$ is a 2-dimensional Gaussian r.v. then $X$ is independent of $Y$ if and only if $\mathrm{Cov}(X,Y)=0$.
2. Show that if $X$ and $Y$ are independent random variables on $(\Omega,\mathcal{F},P)$ and $f,g:\mathbb{R}\to\mathbb{R}$ are Borel measurable with $E|f(X)g(Y)|<\infty$, then $E[f(X)g(Y)]=Ef(X)\,Eg(Y)$.
3. Let $L^2$ denote the equivalence classes of $L^2$ functions $f:\Omega\to\mathbb{R}$. It is a Banach space with norm $\sqrt{E|f|^2}$. Let $\mathcal{G}$ be a sub-$\sigma$-algebra of $\mathcal{F}$. Show that there is a unique (up to a.s. equality) $\mathcal{G}$-measurable function $\bar f:(\Omega,\mathcal{G})\to\mathbb{R}$ which minimizes the distance $d(f,g):=\sqrt{E|f-g|^2}$ among $\mathcal{G}$-measurable functions $g$, and prove that $\bar f=E(f|\mathcal{G})$ a.s.
4. Let $X,Y:\Omega\to\mathbb{R}$ be two random variables with a joint density $f$: for any $A\in\mathcal{B}(\mathbb{R}^2)$, $P(\omega:(X(\omega),Y(\omega))\in A)=\int_A f(x,y)\,dx\,dy$. Write $f(x)=\int f(x,y)\,dy$ for the marginal density of $X$.
(a) Show that for $B\in\mathcal{B}(\mathbb{R})$,
$$P(Y\in B|X=x)=\int_B\frac{f(x,y)}{f(x)}\,dy,$$
a.s. with respect to the distribution $\mu_X$ of $X$.
(b) Show that if $Y\in L^1$,
$$E(Y|X=x)=\int y\,\frac{f(x,y)}{f(x)}\,dy,\quad\text{a.s.}$$
5. Let $\Omega=[-\frac12,\frac12]$, $\mathcal{F}=\mathcal{B}([-\frac12,\frac12])$ and $P$ the Lebesgue measure.
(a) Let $A_1=\{x: x\ge0\}$ and $A_2=\{x: x<0\}$. Let $\mathcal{G}=\{A_1,A_2,\emptyset,\Omega\}$. Let $B\in\mathcal{B}(\mathbb{R})$. Give a formula for $P(B|\mathcal{G})$.
(b) Let $X(z)=z^2$. Show that
$$E(1_A|X)(z)=\tfrac12 1_A(z)+\tfrac12 1_A(-z).$$
(c) Let $Y$ be an integrable random variable; show that $E(Y|X)(z)=\frac12Y(z)+\frac12Y(-z)$ a.s. Find a version of $E(Y|X=z)$.
6. (a) Let $X_0,X_1,\dots,X_n$ be mean zero random variables with a non-degenerate jointly Gaussian distribution. Show that $E(X_0|X_1,\dots,X_n)=\sum_{j=1}^n a_jX_j$. Determine the $a_j$ in terms of the covariance matrix $c_{ij}=\mathrm{cov}(X_i,X_j)$.
(b) More generally let $X$ be an $\mathbb{R}^d$ valued and $Y$ an $\mathbb{R}^k$ valued random variable such that they are jointly Gaussian with mean zero. Write the covariance matrix in the block matrix form:
$$C=\begin{pmatrix}C_{11}&C_{12}\\ C_{21}&C_{22}\end{pmatrix}.$$
Assume that $C$ is positive definite. Then for all $B\in\mathcal{B}(\mathbb{R}^d)$, $P(X\in B|Y)=\nu(B)$ for $\nu$ the Gaussian distribution with mean $\hat X=C_{12}C_{22}^{-1}Y$ and covariance $\hat K=C_{11}-C_{12}C_{22}^{-1}C_{21}$. Show that $\mathrm{cov}(X-\hat X)=\hat K$.
(c) In part (b) above remove the assumption that $X,Y$ are mean zero variables. Show that the conditional probability distribution of $X$ given $Y$ is Gaussian with mean $\hat X=EX+C_{12}C_{22}^{-1}(Y-EY)$ and covariance $\hat K=C_{11}-C_{12}C_{22}^{-1}C_{21}$.
7. Let $X_i$ and $Y$ be real valued random variables and let $\phi:\mathbb{R}\to\mathbb{R}$ be Borel measurable. Suppose that all the terms involved are integrable. Then
(a)
$$E\big(\phi(X_1)Y\,|\,X_1\big)=\phi(X_1)\,E(Y|X_1).$$
(b) If $(X_1,Y)$ is independent of $X_2$ then
$$E(Y|X_1,X_2)=E(Y|X_1).$$
8. Let $X_1,X_2,\dots$ be independent identically distributed integrable random variables. Let $S_n=X_1+X_2+\dots+X_n$. Prove that
$$E(X_1|S_n,S_{n+1},\dots)=\frac{S_n}{n},\quad\text{a.s.}$$
2 Hints on Preliminaries
The answer to Problem 2 on integration is in the notes.
On Conditional Expectations:
Problem 2. Use a monotone class argument.
Problem 3. Orthogonal projection; compute $E|f-\bar f+\bar f-g|^2$.
Problem 5. $\frac12Y(\sqrt z)+\frac12Y(-\sqrt z)$.
Problem 6a) Write down the joint distribution $f(x_0,x_1,\dots,x_n)$ and compute the inverse of the covariance matrix:
$$C^{-1}:=\begin{pmatrix}C_1&B^T\\ B&C_2\end{pmatrix}^{-1}=\begin{pmatrix}I&0\\ -C_2^{-1}B&I\end{pmatrix}\begin{pmatrix}(C_1-B^TC_2^{-1}B)^{-1}&0\\ 0&C_2^{-1}\end{pmatrix}\begin{pmatrix}I&-B^TC_2^{-1}\\ 0&I\end{pmatrix}.$$
The quadratic form is
$$Q(x,y):=\Big\langle\begin{pmatrix}x\\ y\end{pmatrix},C^{-1}\begin{pmatrix}x\\ y\end{pmatrix}\Big\rangle=\big\langle x-B^TC_2^{-1}y,\ (C_1-B^TC_2^{-1}B)^{-1}(x-B^TC_2^{-1}y)\big\rangle+\langle y,C_2^{-1}y\rangle.$$
Integrate $x$ out in $\int x\,g(y)e^{-\frac12Q(x,y)}$ and $\int\psi(y)g(y)e^{-\frac12Q(x,y)}$, where $\psi(y)$ is the conditional expectation of $x_0$ with respect to $y=(x_1,\dots,x_n)$. Compare terms to see that
$$\psi(y)=B^TC_2^{-1}y,$$
and $\psi(Y)$ is the Gaussian r.v. $B^TC_2^{-1}Y$.
Problem 6b) Define $\tilde X=X-\hat X$. Then $(\tilde X,Y)$ is Gaussian with $\mathrm{cov}(\tilde X,Y)=0$. And
$$E(1_{\{X\in A\}}|Y)=E\big(1_{\{(\tilde X+C_{12}C_{22}^{-1}Y)\in A\}}\,\big|\,Y\big)=E\big(1_{\{(\tilde X+C_{12}C_{22}^{-1}y)\in A\}}\big)\Big|_{y=Y}=\int 1_{\{(x+\hat X)\in A\}}\,d\mu_{\tilde X}(x).$$
Hence the conditional measure is that of $\tilde X$ shifted by $\hat X$, which is Gaussian $N(\hat X,\mathrm{cov}(\tilde X))$.
Problem 8. Use symmetry.
Problem Sheet One
Exercise 1 Let $m\in\mathbb{R}^n$ and $G$ a positive definite symmetric matrix. Compute $\int f(x)\,dx$ and $\int xf(x)\,dx$ where $f(x)=Ce^{-\frac12\langle G(x-m),\,x-m\rangle}$.
Define $p_t(m,x)=\frac{1}{(2\pi t)^{n/2}}e^{-\frac{|x-m|^2}{2t}}$. If $X$ is a random variable with distribution $p_t(m,x)\,dx$, explain why $P(\|X-m\|>x)\le C_1e^{-C_2x}$ for large $x$.
Exercise 2 Suppose that $B_t$ is a one dimensional process on $(\Omega,\mathcal{F},P)$ with finite dimensional distributions given below: for $0<t_1<\dots<t_k$ and $A_i\in\mathcal{B}(\mathbb{R})$,
$$P(B_{t_1}\in A_1,\dots,B_{t_k}\in A_k)=\int_{A_1}\!\dots\!\int_{A_k}p_{t_1}(0,y_1)\,p_{t_2-t_1}(y_1,y_2)\dots p_{t_k-t_{k-1}}(y_{k-1},y_k)\,dy_k\,dy_{k-1}\dots dy_1.$$
(a) Show that $EB_sB_t=\min(s,t)$; (b) show that $B_t$ has independent increments; (c) give the distribution of $B_t-B_s$ and show that $t\mapsto B_t$ is almost surely continuous.
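The covariance in part (a) can be illustrated numerically. The sketch below (not a proof) builds paths from independent Gaussian increments, exactly as the finite dimensional distributions above prescribe, and estimates $E[B_sB_t]$ by Monte Carlo; the grid and sample sizes are illustrative.

```python
# Monte Carlo sketch of E[B_s B_t] = min(s, t). Grid, sample sizes and the
# chosen times s, t are illustrative only.
import numpy as np

rng = np.random.default_rng(1)
paths, steps, T = 50_000, 100, 1.0
dt = T / steps
# cumulative sums of independent N(0, dt) increments give B on the grid
B = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(paths, steps)), axis=1)

s_idx, t_idx = 29, 79                       # grid times s = 0.3, t = 0.8
cov_est = float(np.mean(B[:, s_idx] * B[:, t_idx]))
print(cov_est)                              # close to min(0.3, 0.8) = 0.3
```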
Exercise 3 Let $X_t=B_t-tB_1$, $0\le t\le1$. Show that $E(X_sX_t)=s(1-t)$ for $s\le t$. Explain why $X_t$ is a Gaussian process. A sample continuous Gaussian process with $X_0=X_1=0$ and covariance $E(X_sX_t)=(s\wedge t)(1-(s\vee t))$ is a Brownian bridge from 0 to 0. Compute the density of $P(B_t\in A|B_1=0)$ and that of $P(B_{t_1}\in A_1,\dots,B_{t_n}\in A_n|B_1=0)$ in terms of the heat kernel.
Note that $P(B_t\in A|\sigma(B_1))=\psi(B_1)$ for some Borel measurable $\psi:\mathbb{R}\to\mathbb{R}$. By $P(B_t\in A|B_1=y)$ we mean $\psi(y)$.
Exercise 4 Let $b:\mathbb{R}\to\mathbb{R}$ be a Lipschitz continuous function and let $x_t$ solve $\dot x_t=b(x_t)$. Let $W_t$ be a standard Brownian motion. Suppose $x_t^\varepsilon:\Omega\to\mathbb{R}$ satisfies
$$x_t^\varepsilon(\omega)=x_0+\int_0^t b\big(x_s^\varepsilon(\omega)\big)\,ds+\varepsilon W_t(\omega).$$
Show that $x_t^\varepsilon$ converges to $x_t$ in probability as $\varepsilon\to0$.
Hint: Exercise 2: A process is a Gaussian process if its finite dimensional distributions are Gaussian. If $\sum_k a_kB_{t_k}$ is a Gaussian r.v. for all $a_k\in\mathbb{R}$, then $(B_{t_1},\dots,B_{t_n})$ is a (multivariate) Gaussian random variable. The random vector with components the increments $(B_{t_k}-B_{t_{k-1}},\ k=1,\dots,n)$ is a linear transformation of a Gaussian random variable and is hence Gaussian. For Gaussian random variables pairwise independence implies independence, and two jointly Gaussian variables are independent if they are uncorrelated.
Problem Sheet 2
Part 1.
Exercise 5 A stochastic process $X_t$ on $(\Omega,\mathcal{F},\mathcal{F}_t)$ is progressively measurable if for each $t$, $(s,\omega)\mapsto X_s(\omega)$ is measurable as a map from $\big([0,t]\times\Omega,\ \mathcal{B}([0,t])\otimes\mathcal{F}_t\big)$ to $\mathbb{R}$. Show that right continuous (resp. left continuous) adapted stochastic processes are progressively measurable.
Suppose that $X_t$ is adapted to $\mathcal{F}_t$. Let $0\le t_0\le t_1\le\dots\le t_n\le t$. Are the following processes progressively measurable?
$$X_t^{(n)}(\omega)=X_0 1_{\{0\}}(t)+\sum_{i=0}^{n-1}X_{t_i}(\omega)1_{(t_i,t_{i+1}]}(t),\qquad Z_t^{(n)}(\omega)=\sum_{i=0}^{n-1}X_{t_i}(\omega)1_{[t_i,t_{i+1})}(t),$$
$$Y_t^{(n)}=\sum_{i=0}^{n-1}X_{t_{i+1}}1_{[t_i,t_{i+1})}(t).$$
Exercise 6 Let $\mu$ be a probability measure on $\mathbb{R}$. Define the product measure on $\mathcal{B}(\mathbb{R}\times\mathbb{R})$ by
$$\mu\otimes\mu(A_1\times A_2)=\mu(A_1)\,\mu(A_2),\qquad A_i\in\mathcal{B}(\mathbb{R}).$$
Let $\pi_i:\mathbb{R}\times\mathbb{R}\to\mathbb{R}$ be the projections. Then $\pi_1,\pi_2$, as real valued random variables on $(\mathbb{R}^2,\mathcal{B}(\mathbb{R}^2),\mu\otimes\mu)$, are independent.
Exercise 7 Define $P_tf(x)=\frac{1}{\sqrt{2\pi t}}\int e^{-\frac{(y-x)^2}{2t}}f(y)\,dy$ for $f$ bounded measurable. We say $f\in BC^2$ if $f$ and its first two derivatives are bounded.
Show that $P_t$ has the semigroup property $P_{t+s}f=P_tP_sf$, and observe that $P_tf\ge0$ if $f\ge0$.
If $f$ is $BC^2$,
$$\lim_{t\to0}\frac{P_tf(x)-f(x)}{t}=\frac12f''(x).$$
The linear operator $\mathcal{A}$ defined by $\mathcal{A}f:=\lim_{t\to0}\frac{P_tf(x)-f(x)}{t}$, whenever the limit exists, is the generator of $P_t$.
Show that $\frac{d}{dx}(P_tf)(x)=\frac1t E[f(x+B_t)B_t]$ where $B_t\sim N(0,t)$. Can you show this holds when $f$ is not differentiable?
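The generator limit can be checked numerically. The sketch below uses the illustrative choice $f=\cos$, for which $P_t\cos(x)=e^{-t/2}\cos(x)$ in closed form, and evaluates $P_t$ by a Riemann sum over the Gaussian kernel; all grid parameters are ad hoc.

```python
# Numerical check of (P_t f - f)/t -> f''/2 for f = cos. P_t is evaluated
# by a Riemann sum over the Gaussian kernel; parameters are illustrative.
import numpy as np

def P_t(f, t, x, n=20001, half_width=10.0):
    z, dz = np.linspace(-half_width, half_width, n, retstep=True)
    kernel = np.exp(-z**2 / 2) / np.sqrt(2 * np.pi)   # N(0,1) density
    return float(np.sum(kernel * f(x + np.sqrt(t) * z)) * dz)

x, t = 0.7, 1e-4
approx = (P_t(np.cos, t, x) - np.cos(x)) / t
exact = -0.5 * np.cos(x)                              # (1/2) f''(x) for f = cos
print(approx, exact)
```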
Exercise 8 A zero mean Gaussian process $B_t^H$ is a fractional Brownian motion of Hurst parameter $H$, $H\in(0,1)$, if its covariance is
$$E(B_t^HB_s^H)=\frac12\big(t^{2H}+s^{2H}-|t-s|^{2H}\big).$$
Then $E|B_t^H-B_s^H|^p=C|t-s|^{pH}$ for a constant $C$ depending on $p$. It is Brownian motion when $H=1/2$. (Otherwise this process is not even a semi-martingale.)
Show that $B_t^H$ has Hölder continuous paths of order $\alpha<H$.
Exercise 9 Let $W_t$ be a standard Brownian motion and
$$y_t^\varepsilon=y_0+\varepsilon\int_0^t b(y_s^\varepsilon)\,ds+\varepsilon W_t.$$
Assume that $b$ is bounded. Show that, as $\varepsilon\to0$, $y_t^\varepsilon$ converges uniformly in time on any finite time interval $[0,t]$:
$$E\sup_{0\le s\le t}\big|y_s^\varepsilon-y_0\big|\to0.$$
Set $z_t^\varepsilon:=y_{t/\varepsilon}^\varepsilon$. Show that $z_t^\varepsilon=z_0+\int_0^t b(z_s^\varepsilon)\,ds+\sqrt\varepsilon\,\tilde W_t$, where $\tilde W_t$ is a Brownian motion. [Hint: use $P(\sup_{s\le t}B_s\ge a)=2P(B_t\ge a)$.]
Exercise 10 Let $(W,\mathcal{B},P)$ be the Wiener space and let $\mathcal{F}_t$ be the natural filtration of the coordinate process $\omega_t$. Show that $\omega_t$ is a Markov process with respect to its natural filtration. This means that for any bounded Borel measurable function $f$,
$$E\big(f(\omega_t)\,\big|\,\sigma(\omega_r: 0\le r\le s)\big)=E\big(f(\omega_t)\,\big|\,\sigma(\omega_s)\big).$$
Part 2.
Exercise 11 Let $X_1,X_2,\dots$ be independent random variables with $EX_i=0$. Let $\mathcal{F}_n=\sigma(X_1,\dots,X_n)$. Let $S_n=X_1+\dots+X_n$. Show that for $j=1,2,\dots$, $E(S_{n+j}|\mathcal{F}_n)=S_n$.
Exercise 12 Let $X_1,X_2,\dots$ be independent random variables with $EX_i=1$. Let $\mathcal{F}_n=\sigma(X_1,\dots,X_n)$. Let $M_n=\prod_{k=1}^nX_k$. Show that for all $n$ and $j=1,2,\dots$, $E(M_{n+j}|\mathcal{F}_n)=M_n$.
Exercise 13 Let $X:\Omega\to\mathbb{R}$ be integrable. Let $(\mathcal{F}_t,\ t\ge0)$ be a filtration. Define $X_t=E(X|\mathcal{F}_t)$, $t\ge0$. Show that $E(X_t|\mathcal{F}_s)=X_s$ for $t>s\ge0$.
Exercise 14 Let $X_1,X_2,\dots$ be independent random variables with $EX_i=0$. Let $\mathcal{F}_n=\sigma(X_1,\dots,X_n)$. Let $S_0=0$ and $S_n=X_1+\dots+X_n$. Let $C_n=f_n(X_1,\dots,X_{n-1})$ for some Borel function $f_n:\mathbb{R}^{n-1}\to\mathbb{R}$. Define
$$I(C,X)_n=\sum_{1\le k\le n}C_k(S_k-S_{k-1}).$$
This is called the martingale transform. Compute $E\big(I(C,X)_n-I(C,X)_{n-1}\,\big|\,\mathcal{F}_{n-1}\big)$.
Exercise 15 Let $X_n$ be a sequence of random variables bounded in $L^1$. Suppose that for all $a<b$,
$$P\big(\omega:\ \liminf_n X_n(\omega)<a<b<\limsup_n X_n(\omega)\big)=0.$$
Show that $\lim_nX_n(\omega)$ exists almost surely and the limit is almost surely finite.
Hint: Exercise 5. Recall that the tensor $\sigma$-algebra is the smallest one such that each projection is measurable.
Exercise 7. Let $z=\frac{y-x}{\sqrt t}$; then $P_tf(x)=\int\frac{1}{\sqrt{2\pi}}e^{-\frac{z^2}{2}}f(x+\sqrt t\,z)\,dz$. Taylor expand $f(x+\sqrt t\,z)$ at $x$ and observe that $\int ze^{-\frac{z^2}{2}}\,dz=0$.
Exercise 10. The class of functions $\prod_{i=1}^ng_i(B_{s_i})$, where the $g_i$ are Borel measurable and $0\le s_0<s_1<\dots<s_n=s<t$, is sufficient for determining conditional expectations with respect to $\mathcal{F}_s^B$. Show that
$$E\Big[f(B_t)\prod_{i=1}^ng_i(B_{s_i})\Big]=E\Big[E\big(f(B_t)\,\big|\,\sigma(B_s)\big)\prod_{i=1}^ng_i(B_{s_i})\Big].$$
For example consider
$$E\big[f(B_t)g_2(B_s)g_1(B_{s_1})\big]=E\,E\big[f(B_t)g_2(B_s)g_1(B_{s_1})\,\big|\,\sigma(B_{s_1})\vee\sigma(B_s-B_{s_1})\big]$$
and use the independent increments property: for $0\le s_0<s_1<\dots<s_n$, the increments $(B_{s_{i+1}}-B_{s_i})_{i=0}^{n-1}$ are independent random variables.
Problem Sheet 3
Let $(\Omega,\mathcal{F},\mathcal{F}_t)$ be a filtered probability space. Part 1. All processes in this part are real valued.
Exercise 16 If $M_t$ is an $L^2$ bounded martingale, show that for $s<t$, $E(M_t-M_s)^2=EM_t^2-EM_s^2$.
Exercise 17 Let $\phi$ be a convex function. Show that
(a) If $X_t$ is a sub-martingale and $\phi$ is increasing then $\phi(X_t)$ is a sub-martingale.
(b) If $X_t$ is a martingale then $\phi(X_t)$ is a sub-martingale. Deduce that $|X_t|$ is a sub-martingale.
Exercise 18 Let $X_n$, $n=0,1,2,\dots$ be an $\mathcal{F}_n$-adapted stochastic process with $X_n\in L^1$ and $X_0=0$. Define $G_n=E\big(X_{n+1}-X_n\,\big|\,\mathcal{F}_n\big)$, $n\ge1$.
(a) Let $A_0=A_1=0$ and $A_n=\sum_{j=1}^{n-1}G_j$ for $n\ge2$. Show that $A_n\in\mathcal{F}_{n-1}$ (previsible).
(b) Let $M_0=0$ and $M_n=X_n-A_n$; show that $M_n$ is a martingale. Then $X_n=M_n+A_n$. This is the analogue of the Doob–Meyer decomposition for continuous time processes.
(c) If $X_n$ has another decomposition $X_n=\tilde M_n+\tilde A_n$, where $\tilde M_n$ is a martingale with $\tilde M_0=0$ and $\tilde A_n$ is a process with $\tilde A_n\in\mathcal{F}_{n-1}$ and $\tilde A_0=\tilde A_1=0$, show that $M_n=\tilde M_n$ and $A_n=\tilde A_n$ a.s. If $X_n$ is a sub-martingale, show that $A_n$ is an increasing process.
(d) Let $(M_n)$ be a martingale with $EM_n^2<\infty$ and $M_0=0$. Show that there is an increasing process $A_n$ such that $M_n^2=N_n+A_n$, where $N_n$ is a martingale. Show that
$$A_n-A_{n-1}=E\big((M_n-M_{n-1})^2\,\big|\,\mathcal{F}_{n-1}\big).$$
Note that $A_n$ is the discrete analogue of the martingale bracket, or quadratic variation, of $(M_n)$.
Exercise 19 If $B_t$ is a standard Brownian motion, show that
(a) For any $0\le s\le t$, $B_t-B_s$ is independent of $\mathcal{F}_s$, where $\mathcal{F}_s=\sigma(B_r: 0\le r\le s)$.
(b) If $a>0$ is a real number, $\frac{1}{\sqrt a}B_{at}$ is a Brownian motion;
(c) For any $t_0\ge0$, $B_{t_0+t}-B_{t_0}$ is a standard Brownian motion;
(d) $B_t$, $B_t^2-t$ and $\exp(B_t-t/2)$ are martingales;
(e) For any $0\le s<t$, $E\big((B_t-B_s)^2\,\big|\,\mathcal{F}_s\big)=t-s$;
(f) Define a process $W_t$ by $W_0=0$ and $W_t=tB_{1/t}$ when $t>0$. Show that $W_t$ is a Brownian motion.
(g) $\lim_{t\to\infty}\frac{B_t}{t}=0$.
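A quick Monte Carlo covariance check for the time inversion in part (f): with $W_t=tB_{1/t}$, $E[W_sW_t]=st\min(1/s,1/t)=\min(s,t)$. The sketch below samples the pair $(B_{1/t},B_{1/s})$ exactly through independent increments; the chosen times and sample size are illustrative.

```python
# Covariance check for W_t = t B_{1/t}: E[W_s W_t] = min(s, t). The pair
# (B_{1/t}, B_{1/s}) is sampled exactly via independent increments; s, t
# and the sample size are illustrative choices.
import numpy as np

rng = np.random.default_rng(9)
s, t, nsamp = 0.4, 1.5, 300_000              # s < t, hence 1/t < 1/s
B_inv_t = rng.normal(0.0, np.sqrt(1.0 / t), size=nsamp)
B_inv_s = B_inv_t + rng.normal(0.0, np.sqrt(1.0 / s - 1.0 / t), size=nsamp)
W_s, W_t = s * B_inv_s, t * B_inv_t
cov_est = float(np.mean(W_s * W_t))
print(cov_est)                               # close to min(s, t) = 0.4
```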
Exercise 20 (a) Suppose that $X_t$ is continuous, bounded and $\mathcal{F}_t$-adapted. Let $0\le t_0<t_1<\dots<t_n\le t$. Let
$$X_t^{(n)}(\omega)=X_0 1_{\{0\}}(t)+\sum_{i=0}^{n-1}X_{t_i}(\omega)1_{(t_i,t_{i+1}]}(t).$$
Show that as $\max_{0\le i<n}(t_{i+1}-t_i)\to0$, $E\int_0^t|X_s^{(n)}-X_s|^2\,ds\to0$.
(b) Let $B_t$ be a one dimensional Brownian motion. Compute
$$E\sum_{i=0}^{n-1}X_{t_i}(\omega)\big[B(t_{i+1})-B(t_i)\big]\quad\text{and}\quad E\Big(\sum_{i=0}^{n-1}X_{t_i}(\omega)\big[B(t_{i+1})-B(t_i)\big]\Big)^2.$$
Exercise 21 Let $(B_t)$ be a 1-dimensional Brownian motion. Is the Brownian bridge $(B_t-tB_1,\ 0\le t\le1)$ a martingale? Is the Ornstein–Uhlenbeck process $e^{-t}B_{e^{2t}}$ a martingale?
Exercise 22 Let $(\Omega,\mathcal{F},\mathcal{F}_t,P)$ be a filtered probability space. Let $Q$ be a probability measure such that $Q\ll P$. Let $f=\frac{dQ}{dP}$, and let $Q_t$ and $P_t$ be the restrictions of $Q,P$ to $\mathcal{F}_t$. Let $f_t=E(f|\mathcal{F}_t)$. Show that $f_t$ is an $L^1$ bounded martingale and $\frac{dQ_t}{dP_t}=f_t$.
Part 2.
Exercise 23 Let $(x_t,\ t\ge0)$ be a sample continuous real valued process. Let $\mathcal{F}_t:=\sigma(x_s: 0\le s\le t)$ be its natural filtration. Let $a>0$. Show that
$$D(\omega):=\inf_{t\ge0}\{t: |x_t(\omega)|\ge a\}$$
is an $\mathcal{F}_t$ stopping time.
Exercise 24 If $T_n$ is a sequence of stopping times such that $T=\lim_nT_n$ exists almost surely, show that $T$ is a stopping time.
Exercise 25 If $S$ is a stopping time, define
$$\mathcal{F}_S=\big\{A\in\mathcal{F}_\infty:\ A\cap\{S\le t\}\in\mathcal{F}_t\ \ \forall t\ge0\big\}.$$
Show that for all $t\ge0$, $\{S<t\}$, $\{S>t\}$ and $\{S=t\}$ are in $\mathcal{F}_S$. Let $T$ be a stopping time. Show that $\{S<T\}$, $\{S>T\}$ and $\{S=T\}$ are in $\mathcal{F}_S$.
Hint: Exercise 18. The solution is shorter than the question and the implication is longer than the question! For part (c) induct on $n$: $A_1=\tilde A_1=0$ implies that $M_1=\tilde M_1$. Try to prove $A_2=\tilde A_2$ using the martingale property of $M_n$ and the previsibility of $A_n$.
Exercise 19. (a) follows from the fact that $B_t$ has independent increments: $(B_{t_{j+1}}-B_{t_j})$, $j=1,2,\dots,n-1$, are independent for any $0\le t_1<t_2<\dots<t_n$. A Brownian motion is characterised as being a sample continuous Gaussian process with covariance $E(B_sB_t)=\min(s,t)$; see the hint on this on the previous exercise sheet. For (f) observe that the probability that $\lim_{t\to0,t>0}W_t=0$ is the same as the probability that $\lim_{t\to0,t>0}B_t=0$. Apply (f) to obtain (g).
Exercise 20 (b): use Exercise 19 (a) and (e).
Exercise 24. Write limits as suprema and infima.
Problem Sheet 4: Martingales
Exercise 26 Let $a>0$, $p>1$.
(a) Let $(X_n)$ be a sub-martingale (or a martingale). Prove the maximal inequality: letting $A=\{\max_{0\le k\le n}X_k\ge a\}$,
$$P\Big(\max_{0\le k\le n}X_k\ge a\Big)\le\frac1a E\big[X_n 1_A\big].$$
(b) Let $Y:\Omega\to\mathbb{R}_+$ be a random variable. Show that for any constant $C>0$,
$$E(Y\wedge C)^p=\int_0^C pt^{p-1}P(Y\ge t)\,dt.$$
(c) Suppose that $|X_n|$ is a sub-martingale. Let $X^*=\sup_{0\le k\le n}|X_k|$. Show that
$$E(X^*\wedge C)^p\le\frac{p}{p-1}E\big[|X_n|\,(X^*\wedge C)^{p-1}\big].$$
(d) Suppose that $X_n$ is a martingale or a positive sub-martingale. Let $X^*=\sup_{0\le k\le n}|X_k|$. Show that $E(X^*\wedge C)^p\le\big(\frac{p}{p-1}\big)^pE[|X_n|^p]$ and that
$$E\sup_{0\le k\le n}|X_k|^p\le\Big(\frac{p}{p-1}\Big)^pE|X_n|^p.$$
Exercise 27 Let $I$ be an interval of $\mathbb{R}_+$ and $(X_t,\ t\in I)$ a right continuous martingale or a right continuous positive submartingale. Let $\|X\|_{L^p}:=(E|X|^p)^{1/p}$.
Prove the maximal inequality: for $p\ge1$ and $a>0$,
$$a^p\,P\big(\sup_{t\in I}|X_t|\ge a\big)\le\sup_{t\in I}E\big(|X_t|^p\big).$$
Prove Doob's $L^p$ inequality: for $p>1$,
$$\Big\|\sup_{t\in I}|X_t|\Big\|_{L^p}\le\frac{p}{p-1}\,\sup_{t\in I}\|X_t\|_{L^p}.$$
Exercise 28 Let $M_t$ be a continuous integrable and adapted stochastic process.
(a) Show that if for all bounded stopping times $T$, $EM_T=EM_0$, then $(M_t)$ is a martingale.
(b) Let $M_t$ be an $\mathcal{F}_t$-martingale which is $\mathcal{G}_t$-adapted, where $\mathcal{G}_t\subset\mathcal{F}_t$. Show that $M_t$ is a $\mathcal{G}_t$-martingale.
(c) Let $T$ be a stopping time and $M_t$ a martingale. Show that $M_{t\wedge T}$ is both an $\mathcal{F}_t$ and an $\mathcal{F}_{t\wedge T}$ martingale.
Exercise 29 Let $M_t$ be a bounded martingale.
(a) Show that if $a<b\le c<d$ then
$$E(M_d-M_c)(M_b-M_a)=0.$$
(b) If $a<b<c$, $E\big[M_aM_b(M_b-M_a)(M_c-M_b)\big]=0$. Show that
$$E\big[M_a(M_c-M_a)-M_a(M_b-M_a)-M_b(M_c-M_b)\big]^2=E\big[(M_a-M_b)^2(M_c^2-M_b^2)\big].$$
Also $E\big[M_a(M_c-M_a)-M_a(M_b-M_a)-M_b(M_c-M_b)\big]^2\le\frac12E\big[\delta\,(M_c^2-M_a^2)\big]$, where $\delta=\max\big\{(M_b-M_a)^2,(M_c-M_b)^2\big\}$.
Exercise 30 Let $\Delta_n:\ 0\le t_1^n=\frac{1}{2^n}<t_2^n=\frac{2}{2^n}<\dots$ be the dyadic partition. Let $M_t$ be a bounded sample continuous martingale with $M_0=0$.
(a) Define
$$Y_t^n=\sum_{j=0}^\infty M_{t_j^n}\big(M_{t\wedge t_{j+1}^n}-M_{t\wedge t_j^n}\big).$$
In other words, if $t\in(t_N^n,t_{N+1}^n]$,
$$Y_t^n=\sum_{j=0}^{N-1}M_{t_j^n}\big(M_{t_{j+1}^n}-M_{t_j^n}\big)+M_{t_N^n}\big(M_t-M_{t_N^n}\big).$$
Show that for each $n$, $Y_t^n$ is a martingale.
(b) Show that $Y_t^n$ converges to a process $Y_t$ in probability.
(c) Show that $Y_t$ is a martingale; it is later seen to be the stochastic integral $\int_0^tM_s\,dM_s$.
(d) Prove the summation by parts formula:
$$M_t^2=M_0^2+2Y_t^n+\sum_{j=0}^\infty\big(M_{t\wedge t_{j+1}^n}-M_{t\wedge t_j^n}\big)^2.$$
(e) Let
$$Z_t^n=\sum_{j=0}^\infty\big(M_{t\wedge t_{j+1}^n}-M_{t\wedge t_j^n}\big)^2.$$
Show that $Z_t^n$ converges in probability to an increasing process $A_t$. The process $A_t$ is often denoted by $\langle M,M\rangle_t$ and is called the quadratic variation process of $M_t$. We have the following special case of Itô's formula:
$$M_t^2=M_0^2+2\int_0^tM_s\,dM_s+\langle M,M\rangle_t.$$
Exercise 31 Let $\Delta_n:\ 0\le t_1\le t_2\le\dots\le t_{N(n)}=t$ be a sequence of partitions of $[0,t]$ with mesh $|\Delta_n|\to0$. Show that the following convergence holds in probability:
$$\lim_n\sum_{j=0}^{N(n)-1}\big(B_{t_{j+1}}-B_{t_j}\big)^2=t.$$
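This concentration of the squared-increment sums can be seen numerically: sample one Brownian path on a fine grid, then coarsen the partition by blocking increments. The sketch below is only an illustration; all sizes are ad hoc.

```python
# Along refining partitions of [0, t] the sum of squared Brownian
# increments concentrates at t. One path on a fine grid, coarsened by
# blocking; sizes are illustrative.
import numpy as np

rng = np.random.default_rng(4)
t, N = 2.0, 2**18
dB = rng.normal(0.0, np.sqrt(t / N), size=N)      # finest-grid increments

qv = {}
for pieces in (2**6, 2**10, 2**18):               # number of partition intervals
    incr = dB.reshape(pieces, -1).sum(axis=1)     # B_{t_{j+1}} - B_{t_j}
    qv[pieces] = float(np.sum(incr**2))
print(qv)                                         # values approach t = 2.0
```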
Exercise 32 Let $M,N$ be bounded martingales with $M_0=N_0=0$. Define
$$\langle M,N\rangle_t:=\frac12\langle M+N,M+N\rangle_t-\frac12\langle M,M\rangle_t-\frac12\langle N,N\rangle_t.$$
Show that for the dyadic partition,
$$\sum_{j=0}^\infty\big(M_{t\wedge t_{j+1}}-M_{t\wedge t_j}\big)\big(N_{t\wedge t_{j+1}}-N_{t\wedge t_j}\big)$$
converges in probability to $\langle M,N\rangle_t$, and that $M_tN_t-\langle M,N\rangle_t$ is a martingale. If $M_t$ and $N_t$ are furthermore independent, their bracket $\langle M,N\rangle_t$ vanishes.
Problem Sheet 5: Stochastic Integrals
Exercise 33 If $B_t=(B_t^1,\dots,B_t^n)$ is an $n$-dimensional BM, show that $\langle B^i,B^j\rangle_t=\delta_{ij}t$ and
$$\|B_t\|^2=2\sum_{i=1}^n\int_0^tB_s^i\,dB_s^i+nt,$$
where $\|B_t\|^2=\sum_i|B_t^i|^2$.
Exercise 34 Let $B_t$ be 1-dimensional Brownian motion.
1. Compute $E\int_0^TB_s^2\,dB_s$.
2. Show that $\big\langle B,\int_0^\cdot B_s^3\,dB_s\big\rangle_t=\int_0^tB_s^3\,ds$.
3. Is $\int_0^te^{e^{B_s}}\,dB_s$ a local martingale? A true martingale?
4. Simplify
$$\int_0^t(2B_s+1)\,d\Big(\int_0^sB_r\,d(B_r+r)\Big).$$
5. Prove that if $H$ and $K$ are continuous bounded and adapted semi-martingales,
$$\langle HK,B\rangle_t=\int_0^tH_r\,d\langle K,B\rangle_r+\int_0^tK_r\,d\langle H,B\rangle_r.$$
Exercise 35 Let $f,g:\mathbb{R}\to\mathbb{R}$ be $C^2$ functions and $B_t$ one dimensional Brownian motion. Write the following as integrals with respect to the Lebesgue measure.
1. $\langle f(B+\cdot\,),g(B)\rangle_t$, the bracket of the semi-martingales $f(B_t+t)$ and $g(B_t)$.
2. $\big\langle\exp\big(M-\frac12\langle M,M\rangle\big),\ \exp\big(N-\frac12\langle N,N\rangle\big)\big\rangle_t$, where $M_t$ and $N_t$ are continuous local martingales.
Exercise 36 Interpret $\int_0^ts\,dB_s$ by a Lebesgue integral.
Exercise 37 Let $\sigma:\mathbb{R}\to\mathbb{R}$ and $b:\mathbb{R}\to\mathbb{R}$ be Borel measurable functions. Suppose that $x_t$ is an adapted sample continuous stochastic process such that the following identity holds:
$$x_t=x_0+\int_0^t\sigma(x_s)\,dB_s+\int_0^tb(x_s)\,ds.$$
If $f$ is $C^2$, define
$$\mathcal{L}f=\frac12\sigma^2(x)\frac{\partial^2f}{\partial x^2}+b(x)\frac{\partial f}{\partial x}.$$
Show that $f(x_t)-f(x_0)-\int_0^t\mathcal{L}f(x_s)\,ds$ is a local martingale.
Exercise 38 Show that a positive local martingale is a super-martingale.
Exercise 39 Let $B_t$, $t\le T$, be a Brownian motion with $B_0=0$ and $\mathcal{F}_t$ its natural filtration augmented with null sets. It is known that $\mathcal{F}_0$ consists of sets of null or full measure and that $\mathcal{F}_t$ is right continuous. Hence if $M_0\in\mathcal{F}_0$ it is necessarily the case that $M_0=C$ for some constant $C$ almost surely.
(a) Let $D$ be the set of $f\in L^2(\Omega,\mathcal{F}_T)$ with the property that there is $h\in L^2(B)$ such that
$$f=Ef+\int_0^Th_s\,dB_s.$$
Show that $D$ is a closed subspace of $L^2(\Omega,\mathcal{F}_T)$.
(b) If $Z_t$ is a semi-martingale with $Z_0=0$, let $X_t=e^{Z_t-\frac12\langle Z,Z\rangle_t}$. Show that
$$X_t=1+\int_0^tX_s\,dZ_s.$$
(c) It is known that the family
$$\Big\{\sum_{i=1}^na_i\exp\Big(\int_0^Th_i(s)\,dB_s-\frac12\int_0^Th_i^2(s)\,ds\Big):\ a_i\in\mathbb{R},\ h_i\in L^2([0,T],\mathbb{R}),\ n=1,2,\dots\Big\}$$
is dense in $L^2(\Omega,\mathcal{F}_T,\mathbb{R})$. Show that for all $f\in L^2(\Omega,\mathcal{F}_T)$ there is a unique $h\in L^2(B)$ such that
$$f=Ef+\int_0^Th_s\,dB_s.$$
(d) Let $M$ be an $L^2$ bounded continuous martingale. Show that there is a unique $h\in L^2(B)$ such that for all $t\le T$,
$$M_t=M_0+\int_0^th_s\,dB_s.$$
[Hint: recall the correspondence between $L^2(\Omega,\mathcal{F}_T)$ and $L^2$ bounded continuous martingales.]
(e) If $M_t$ is a continuous local martingale, show that there is a unique progressively measurable $h$ such that $\int_0^th_s^2\,ds<\infty$ and $M_t=M_0+\int_0^th_s\,dB_s$.
Congratulations! You've proved the integral representation theorem for martingales.
Problem Sheet 6: Itô's formula, Martingale Inequalities
Exercise 40 1. Prove that if $M_t$ is a positive local martingale then it is a super-martingale.
2. If $N_t$ is a continuous local martingale with $N_0=0$, show that $e^{N_t-\frac12\langle N,N\rangle_t}$ is a local martingale and
$$Ee^{N_t-\frac12\langle N,N\rangle_t}\le1.$$
3. Prove that if $E\exp\big((\frac12+\varepsilon)\langle N,N\rangle_t\big)<\infty$ for some $\varepsilon>0$, then $e^{N_t-\frac12\langle N,N\rangle_t}$ is a true martingale.
Exercise 41 Show that any positive continuous local martingale $N_t$ with $N_0=1$ can be written in the form $N_t=\exp\big(M_t-\frac12\langle M,M\rangle_t\big)$, where $M_t$ is a local martingale.
Exercise 42 Suppose that $S_1,S_2$ are stopping times with $0\le S_1\le S_2\le t$ almost surely. Let $f$ be an adapted continuous process with $\int_0^tEf_r^2\,dr<\infty$. Define
$$\int_{S_1}^{S_2}f_r\,dB_r=\int_0^{S_2}f_r\,dB_r-\int_0^{S_1}f_r\,dB_r.$$
(a) Show that $E\int_{S_1}^{S_2}f_r\,dB_r=0$ and $E\Big(\int_{S_1}^{S_2}f_r\,dB_r\Big)^2=E\int_{S_1}^{S_2}f_r^2\,dr$.
Hint: $\int_{S_1}^{S_2}f_r\,dB_r=\int_0^t1_{\{S_1<r\le S_2\}}f_r\,dB_r$.
(b) Show that $E\Big(\int_{S_1}^{S_2}f_r\,dB_r\,\Big|\,\mathcal{F}_{S_1}\Big)=0$.
(c) Show that
$$E\Big[\Big(\int_{S_1}^{S_2}f_r\,dB_r\Big)^2\,\Big|\,\mathcal{F}_{S_1}\Big]=E\Big[\int_{S_1}^{S_2}f_r^2\,dr\,\Big|\,\mathcal{F}_{S_1}\Big].$$
(d) If $\int_0^\infty Ef_r^2\,dr<\infty$, the conclusions above hold for unbounded stopping times $S_1\le S_2$.
Exercise 43 Let $f$ be an adapted continuous process with $\int_0^tEf_r^2\,dr<\infty$. Show that for any $N>0$, $C>0$,
$$P\Big(\sup_{0\le s\le t}\Big|\int_0^sf_r\,dB_r\Big|>C\Big)\le P\Big(\int_0^tf_r^2\,dr>N\Big)+\frac{C_2N}{C^2}$$
for some constant $C_2$.
Exercise 44 Let $\tau$ be a bounded stopping time. Let $B_t$ be an $\mathcal{F}_t$ Brownian motion and let $\mathcal{G}_t=\mathcal{F}_{t+\tau}$. Show that $W_t:=B_{t+\tau}-B_\tau$ is a standard $\mathcal{G}_t$-Brownian motion.
Exercise 45 (Burkholder–Davis–Gundy Inequality) For every $p>0$, there exist universal constants $c_p$ and $C_p$ such that for all continuous real valued local martingales vanishing at 0,
$$c_p\,E\langle M,M\rangle_T^{p/2}\le E\big(\sup_{t<T}|M_t|\big)^p\le C_p\,E\langle M,M\rangle_T^{p/2},$$
where $T$ is a finite number, infinity or a stopping time.
(1) Show that for any bounded continuous process $H$ and stopping time $T$,
$$c_p\,E\Big(\int_0^TH_s^2\,d\langle M,M\rangle_s\Big)^{p/2}\le E\sup_{s\le T}\Big|\int_0^sH_r\,dM_r\Big|^p\le C_p\,E\Big(\int_0^TH_s^2\,d\langle M,M\rangle_s\Big)^{p/2}.$$
(2) For $p\ge2$ prove the right hand side of the Burkholder–Davis–Gundy inequality. [Hint: apply Itô's formula.]
(3) For $p\ge4$, prove the left hand side of the Burkholder–Davis–Gundy inequality. [Hint: begin with $\langle M,M\rangle_t=M_t^2-2\int_0^tM_s\,dM_s$, followed by an application of the elementary inequality $|a+b|^p\le c(p)(|a|^p+|b|^p)$ for some constant $c(p)$ and an application of the Kunita–Watanabe inequality.]
Exercise 46 Let $X$ be a continuous semi-martingale and $f$ a convex function.
(a) Prove, by approximating $f$ with smooth functions, that there exists a continuous increasing process $A^f$ such that
$$f(X_t)=f(X_0)+\int_0^tf'_-(X_s)\,dX_s+A_t^f.$$
[Hint: First assume that $|X_t|\le C$. Let $\phi:\mathbb{R}\to\mathbb{R}_+$ be a smooth function with compact support in $(-\infty,0]$ with $\int_{\mathbb{R}}\phi(x)\,dx=1$. Let $\phi_n(x)=n\phi(nx)$ and let $f_n=\int_{-\infty}^0f(x+y)\phi_n(y)\,dy$ be the convolution. Then $f_n\to f$ and $f_n'$ increases to the left derivative $f'_-$.]
(b) If $f(x)=|x|$ then $f'_-(x)=\operatorname{sgn}(x)$, where
$$\operatorname{sgn}(x)=\begin{cases}1,&x>0,\\ -1,&x\le0.\end{cases}$$
Prove the following Tanaka formulae: for any $a\in\mathbb{R}$,
$$(X_t-a)^+=(X_0-a)^++\int_0^t1_{\{X_s>a\}}\,dX_s+\frac12L_t^a,$$
$$(X_t-a)^-=(X_0-a)^--\int_0^t1_{\{X_s\le a\}}\,dX_s+\frac12L_t^a,$$
$$|X_t-a|=|X_0-a|+\int_0^t\operatorname{sgn}(X_s-a)\,dX_s+L_t^a.$$
Here $L_t^a$ is an increasing continuous process and is called the local time of $X_t$ at $a$.
Exercise 47 Let $\sigma$ and $b$ be smooth functions from $\mathbb{R}^d$ to $\mathbb{R}$ with (at most) linear growth:
$$|\sigma(x)|\le c(1+|x|),\qquad |b(x)|\le c(1+|x|).$$
Let $T_n$ be an increasing sequence of stopping times.
(a) Let $B_t$ be a one dimensional Brownian motion. Let $(x_t,\ t\ge0)$ be a real valued adapted sample continuous process such that for all $t$,
$$x_{t\wedge T_n}=x_0+\int_0^{t\wedge T_n}\sigma(x_s)\,dB_s+\int_0^{t\wedge T_n}b(x_s)\,ds.$$
Show that $E(|x_t|^2)<\infty$.
(b) State and prove a multi-dimensional version of the above statement. Keep the following notation: $x_t=(x_t^1,\dots,x_t^d)$ and
$$(x_t^j)^{T_n}=x_0^j+\sum_{k=1}^m\int_0^{t\wedge T_n}\sigma_k^j(x_s)\,dB_s^k+\int_0^{t\wedge T_n}b^j(x_s)\,ds.$$
(c) Can you modify the proof to show that $E\big(\sup_{t\le T}|x_t|^2\big)<\infty$?
Hint: Gronwall's lemma says that if $\xi_t\le C_1+\int_0^tg(s)\xi_s\,ds$ then
$$\xi_t\le C_1\exp\Big(\int_0^tg(s)\,ds\Big).$$
Problem Sheet 7: Stochastic Differential Equations
Let $B_t$ be a one dimensional Brownian motion on a given filtered probability space. Let $\sigma,b:\mathbb{R}\to\mathbb{R}$ be locally bounded and Borel measurable.
Exercise 48 Write down a solution to $dx_t=x_tg(B_t)\,dB_t$, where $g:\mathbb{R}\to\mathbb{R}$ is a bounded Borel measurable function. Verify your claim. Is this a strong solution? Does pathwise uniqueness hold? Show that if $|g^2(x)|\le C+Ce^x$ then $\int_0^tg(B_s)\,dB_s$ is a martingale.
Exercise 49 Black–Scholes equation. Let $S_t$ be a stock price. It is postulated that $S_t$ is governed by
$$dS_t=\sigma(t)S_t\,dB_t+b(t)S_t\,dt.$$
Here $\sigma,b:\mathbb{R}_+\to\mathbb{R}$ are Borel measurable functions. Give an explicit solution.
Exercise 50 Suppose that $\sigma$ and $b$ are real valued Lipschitz continuous functions. Suppose that for all $t\ge0$,
$$x_t=x_0+\int_0^t\sigma(x_s)\,dB_s+\int_0^tb(x_s)\,ds,$$
$$y_t=x_0+\int_0^t\sigma(y_s)\,dB_s+\int_0^tb(y_s)\,ds.$$
Prove that $E(x_t-y_t)^2=0$.
Exercise 51 Let $\sigma:\mathbb{R}\to\mathbb{R}$ be $BC^1$ and $f:\mathbb{R}\to\mathbb{R}$ a solution of the ODE $f'=\sigma(f)$. For $g:\mathbb{R}\to\mathbb{R}$ locally bounded and Borel measurable, assume that
$$dy_t=dB_t+g(f(y_t))\,dt$$
has a solution $y_t$. Let $b=\sigma g+\frac12\sigma\sigma'$. Show that $f(y_t)$ solves
$$dx_t=\sigma(x_t)\,dB_t+b(x_t)\,dt.$$
Exercise 52 (Transform a drift) Consider the SDE $dx_t=\sigma(x_t)\,dB_t+b(x_t)\,dt$, where $\sigma,b$ are continuous. Let $L=\frac12\sigma^2\frac{d^2}{dx^2}+b(x)\frac{d}{dx}$. A function $s$ is the scale function if $Ls=0$. Assume that $\sigma>0$. Then
$$s'(x)=e^{-\int_0^x\frac{2b(y)}{\sigma^2(y)}\,dy}.$$
Since $s'>0$ the scale function is increasing; its inverse on its image is denoted by $s^{-1}$. Let $\tilde\sigma(y)=\sigma(s^{-1}(y))\,s'(s^{-1}(y))$ if $y$ is in the image of $s$, and otherwise let $\tilde\sigma(y)=0$. Define $y_t=s(x_t)$. Show that $y_t$ solves
$$dy_t=\tilde\sigma(y_t)\,dB_t.$$
Prove that if $b$ is bounded, then pathwise uniqueness holds for the SDE $dx_t=dB_t+b(x_t)\,dt$.
Exercise 53 Consider the SDE, with Stratonovich integration,
$$dx_t=-\frac{y_t}{r_t}\circ dB_t,\qquad dy_t=\frac{x_t}{r_t}\circ dB_t,$$
where $r_t=\sqrt{x_t^2+y_t^2}$. Show that $r_t=1$ for all time if $r_0=1$. Conclude that the SDE can be considered to be defined on the circle $S^1$.
Exercise 54 Consider $dx_t=\sigma(x_t)\,dB_t+b(x_t)\,dt$. Assume that $\sigma$ and $b$ are locally Lipschitz continuous and are of at most linear growth:
$$|\sigma(x)|\le C(1+|x|),\qquad \langle x,b(x)\rangle\le C(1+|x|^2).$$
Let $x_t$ be a solution. Prove that there is no explosion.
Exercise 55 Consider
$$dx_t=y_t\,dB_t^1,\qquad dy_t=y_t\,dB_t^2.$$
Show that if $y_0>0$ then $y_t$ is positive, and hence the SDE can be considered to be defined on the upper half plane. Compute the infinitesimal generator $L$. This is known as the Brownian motion on the hyperbolic space (upper half plane model).
Exercise 56 Discuss the uniqueness and existence problem for the SDE
$$dx_t=\sin(x_t)\,dB_t^1+\cos(x_t)\,dB_t^2.$$
Problem Sheet 8: SDEs
Let $\sigma_j$, $1\le j\le m$, and $b$ be measurable locally bounded vector fields on $\mathbb{R}^n$ with components $b=(b^1,\dots,b^n)$, where $b^j:\mathbb{R}^n\to\mathbb{R}$, and $\sigma_j=(\sigma_j^1,\dots,\sigma_j^n)$. Write $|b|(x)=\sqrt{\sum_i(b^i)^2(x)}$. Let $B_t=(B_t^1,\dots,B_t^m)$ be a Brownian motion. We do not distinguish a row vector from a column vector. Let (summing over the repeated indices $i,j$)
$$L=\frac12\sum_{k=1}^m\sigma_k^i\sigma_k^j\frac{\partial^2}{\partial x^i\partial x^j}+\sum_{l=1}^nb^l\frac{\partial}{\partial x^l}.$$
Exercise 57 For $L$ given above, compute $L(|x|^2)$. Suppose that $|\sigma(x)|^2\le c(1+|x|^2)$ and $\langle b(x),x\rangle\le c(1+|x|^2)$. Let $f(x)=|x|^2+1$ and $g(x)=|x|^2$. Prove that $Lf\le af$. How about $Lg(x)$ when $|x|>1$? Here $c,a$ are constants.
Exercise 58 Suppose that $\sigma,b$ are locally Lipschitz continuous and have at most linear growth, and let $F_t(x)$ be the solution to the SDE $E(\sigma,b)$. Show that for each $t>0$, $\lim_{x\to\infty}F_t(x)=\infty$, with convergence in probability.
Exercise 59 A one dimensional continuous process $(x_t,\ 0\le t\le1)$ is said to be a Brownian bridge if it is a Gaussian process such that $Ex_t=0$ and $E(x_tx_s)=s\wedge t-st$.
1. Prove that if $B_t$ is a Brownian motion, $x_t=B_t-tB_1$ is a Brownian bridge. Is $x_t$ adapted to the natural filtration $\mathcal{F}_t^B$ of $B_t$?
2. Consider $dx_t=dB_t+\frac{y-x_t}{1-t}\,dt$. Find a solution to this SDE. [Hint: try $x_t=(1-t)x_0+ty+(1-t)\int_0^t\frac{dB_s}{1-s}$.]
3. Prove that $\lim_{t\to1}x_t=y$ in $L^2$.
Exercise 60 Let $B^1$ and $B^2$ be independent Brownian motions. In each case below compute the infinitesimal generator $L$ and discuss whether the SDE explodes:
1.
$$dx_t=(y_t^2-x_t^2)\,dB_t^1-2x_ty_t\,dB_t^2,\qquad dy_t=2x_ty_t\,dB_t^1+(y_t^2-x_t^2)\,dB_t^2.$$
2.
$$dx_t=(x_t^2+y_t^2)\,dB_t^1,\qquad dy_t=(x_t^2+y_t^2)\,dB_t^2.$$
Exercise 61 Transform the following Stratonovich SDE into Itô form:
$$dx_t=(y_t^2-x_t^2)\circ dB_t^1-2x_ty_t\circ dB_t^2,\qquad dy_t=2x_ty_t\circ dB_t^1+(y_t^2-x_t^2)\circ dB_t^2.$$
Exercise 62 Write down the infinitesimal generator of
$$dx_t=(x_t^2+y_t^2)x_t\,dB_t^1-(x_t^2+y_t^2)y_t\,dB_t^2,\qquad dy_t=(x_t^2+y_t^2)y_t\,dB_t^1-(x_t^2+y_t^2)x_t\,dB_t^2.$$
Exercise 63 Let $D$ be a bounded domain of $\mathbb{R}^d$ with smooth boundary. Suppose that there is a $C^2$ solution $u$ to the Dirichlet problem: $\Delta u=0$ on $D$ and $u=f$ on the boundary $\partial D$ of $D$. Let $\tau_D$ be the first exit time from $D$ of the solution $F_t(x)$, $x\in D$, of an SDE whose generator is $\frac12\Delta$. Prove that $u(x)=Ef(F_{\tau_D}(x))$ and
$$Ef^2\big(F_{\tau_D}(x)\big)=u^2(x)+2E\int_0^{\tau_D}|\nabla u|^2(F_s(x))\,ds.$$
In the following, if $A$ is a matrix, $A^T$ stands for its transpose.
Exercise 64 Suppose that $\sigma,b$ are smooth and have compact support. Let $x_t$ be the solution to $dx_t=\sigma(x_t)\,dB_t+b(x_t)\,dt$ with $x_0\in\mathbb{R}^d$.
1. Let $\mu_t=Ex_t$. Show that $\mu_t=\mu_0+\int_0^tE\big(b(x_s)\big)\,ds$.
2. Let $C(t)=(x_t-\mu_t)(x_t-\mu_t)^T$. This is a $d\times d$ matrix with entries $C_{i,j}(t)=(x_t^i-\mu_t^i)(x_t^j-\mu_t^j)$. Write down a formula for $C(t)$.
3. Let $R(t)=E(x_t-\mu_t)(x_t-\mu_t)^T$ be the covariance matrix. Let $C$ be a $d\times d$ matrix and define $b(x)=Cx$. Show that
$$R_t=R_0+\int_0^tCR_s\,ds+\int_0^tR_sC^T\,ds+\int_0^tE\big(\sigma(x_s)\sigma(x_s)^T\big)\,ds.$$
Exercise 65 Show that if $(X_t)$ is a Markov process then it is a Markov process with respect to its own filtration.
Problem Sheet 9: Girsanov Transform
Let $\sigma_j$, $1\le j\le m$, and $b$ be measurable locally bounded vector fields on $\mathbb{R}^n$ with components $b=(b^1,\dots,b^n)$, where $b^j:\mathbb{R}^n\to\mathbb{R}$, and $\sigma_j=(\sigma_j^1,\dots,\sigma_j^n)^T$. Write $|b|(x)=\sqrt{\sum_i(b^i)^2(x)}$. Let $B_t=(B_t^1,\dots,B_t^m)$ be a Brownian motion. Let (summing over the repeated indices $i,j$)
$$L=\frac12\sum_{k=1}^m\sigma_k^i\sigma_k^j\frac{\partial^2}{\partial x^i\partial x^j}+\sum_{l=1}^nb^l\frac{\partial}{\partial x^l}.$$
Exercise 66 Let $Z$ be a 1-dimensional Gaussian random variable on $(\Omega,\mathcal{F},P)$ with distribution $N(a,\sigma^2)$. Take $u\in\mathbb{R}$ and define a probability measure $Q$ by
$$\frac{dQ}{dP}(\omega)=e^{-\frac{u}{\sigma^2}(Z(\omega)-a)-\frac12\big(\frac{u}{\sigma}\big)^2}.$$
Show that the distribution of $Z+u$ under $Q$ is the distribution of $Z$ under $P$. [Hint: compute the characteristic function.]
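This change of measure can also be sanity-checked by Monte Carlo (not a proof): weighting samples by $dQ/dP$, the moments of $Z+u$ under $Q$ should match the plain moments of $Z$ under $P$. All parameters below are illustrative.

```python
# Importance-weighting check of the Gaussian shift: with
# dQ/dP = exp(-(u/sigma^2)(Z - a) - u^2/(2 sigma^2)),
# weighted moments of Z + u match the P-moments of Z ~ N(a, sigma^2).
# Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(7)
a, sigma, u, nsamp = 0.5, 1.3, 0.8, 400_000
Z = rng.normal(a, sigma, size=nsamp)
w = np.exp(-(u / sigma**2) * (Z - a) - 0.5 * (u / sigma)**2)  # dQ/dP

total_mass = float(np.mean(w))                 # ~ 1: Q is a probability measure
mean_Q = float(np.mean(w * (Z + u)))           # E_Q[Z + u] ~ a
var_Q = float(np.mean(w * (Z + u)**2)) - mean_Q**2
print(total_mass, mean_Q, var_Q)               # ~ (1, a, sigma^2)
```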
Exercise 67 Let $(M_t,\ t\le T)$ be a martingale on $(\Omega,\mathcal{F},\mathcal{F}_t,P)$. If $N_t$ is a bounded martingale, under which measure is $M_t-\langle M,N\rangle_t$ a local martingale? Prove your assertion.
Exercise 68 Take $u\in\mathbb{R}^n$ and $B_t$ a Brownian motion on $(\Omega,\mathcal{F}_t,P)$. Under which measure $Q$ is $B_t-tu$ a Brownian motion? Is $Q$ a probability measure?
Exercise 69 Let $T>0$ and let $Q$ and $P$ be two equivalent measures on $(\Omega,\mathcal{F}_T,\mathcal{F}_t)$ with
$$\frac{dQ}{dP}=e^{\int_0^Th_s\,dB_s-\frac12\int_0^Th_s^2\,ds},$$
where $h:[0,T]\to\mathbb{R}$ is locally bounded. Let $N_t=\int_0^th_s\,dB_s$. By abuse of notation, if $M_t$ is a $P$ local martingale, we say that $M_t-\langle N,M\rangle_t$ is its Girsanov transform. Is $Q$ a probability measure? If $(B_t,\ t\le T)$ is a Brownian motion with respect to $P$, compute the Girsanov transforms of the following martingales: (a) $B_t$; (b) $B_t^2-t$; (c) $\int_0^th_s\,dB_s$.
Exercise 70 Let $m=d$. Suppose that $x_t$ satisfies $x_t=x_0+B_t+\int_0^tb(x_s)\,ds$, where $b:\mathbb{R}^d\to\mathbb{R}^d$ is bounded and $C^2$. Let $N_t=-\int_0^t\langle b(x_s),dB_s\rangle$. Compute $\langle N,N\rangle_t$. Show that under $Q$, where $\frac{dQ}{dP}=\exp\big(N_t-\frac12\langle N,N\rangle_t\big)$, the random variable $x_t$ has distribution $N(x_0,tI)$.
Exercise 71 1. Let $u_t$ be a $C^{2,1}$ bounded solution to $\frac{\partial u_t}{\partial t}=Lu_t$, where $L=\frac12\Delta+\sum_{j=1}^nb^j\frac{\partial}{\partial x^j}$, with initial value $u_0=f$. Prove that
$$u(t,x)=E\Big[f(x+B_t)\exp\Big(\int_0^t\langle b(x+B_s),dB_s\rangle-\frac12\int_0^t|b|^2(x+B_s)\,ds\Big)\Big].$$
2. Let $g:\mathbb{R}^n\to\mathbb{R}$ and assume that $b=\nabla g$. Show that
$$u_t(x)=E\Big[f(x+B_t)\exp\Big(g(x+B_t)-g(x)-\int_0^t\Big[\frac12\Delta g+\frac12|\nabla g|^2\Big](x+B_s)\,ds\Big)\Big].$$