
Example: the Ornstein-Uhlenbeck process

dX_t = -\lambda (X_t - \mu)\, dt + \sigma\, dW_t

where \lambda > 0, \mu \in \mathbb{R}, \sigma > 0 and X_0 = x_0.

Solution:

X_t = \mu + (x_0 - \mu) e^{-\lambda t} + \sigma \int_0^t e^{-\lambda (t-s)}\, dW_s

Note that this is a sum of deterministic terms and an integral of a
deterministic function with respect to a Wiener process with
normally distributed increments. The distribution is thus normal.

The conditional expectation is

E[X_t | X_0 = x_0] = \mu + (x_0 - \mu) e^{-\lambda t} + E\Big[ \sigma \int_0^t e^{-\lambda (t-s)}\, dW_s \Big]
                   = \mu + (x_0 - \mu) e^{-\lambda t}

The conditional variance is

Var[X_t | X_0 = x_0] = E\Big[ \Big( \sigma \int_0^t e^{-\lambda (t-s)}\, dW_s \Big)^2 \Big]

Use Itô's isometry to obtain

Var[X_t | X_0 = x_0] = \sigma^2 \int_0^t e^{-2\lambda (t-s)}\, ds = \frac{\sigma^2}{2\lambda} \big( 1 - e^{-2\lambda t} \big).

Thus (X_t | X_0 = x_0) \sim N\Big( \mu + (x_0 - \mu) e^{-\lambda t},\; \frac{\sigma^2}{2\lambda} (1 - e^{-2\lambda t}) \Big).

Asymptotically X_t \sim N(\mu, \frac{\sigma^2}{2\lambda}) (or always if X_0 \sim N(\mu, \frac{\sigma^2}{2\lambda})).
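A quick numerical sanity check of these two moments (a sketch only; the parameter values and the crude Euler discretization are illustrative choices, not part of the slides):

```python
import numpy as np

rng = np.random.default_rng(0)

lam, mu, sigma = 0.5, 1.0, 0.3        # illustrative values of lambda, mu, sigma
x0, T = 2.0, 4.0
n_paths, n_steps = 20_000, 2_000
dt = T / n_steps

# Crude Euler simulation of dX = -lam (X - mu) dt + sigma dW
x = np.full(n_paths, x0)
for _ in range(n_steps):
    dW = np.sqrt(dt) * rng.standard_normal(n_paths)
    x += -lam * (x - mu) * dt + sigma * dW

# Closed-form conditional mean and variance derived above
mean_exact = mu + (x0 - mu) * np.exp(-lam * T)
var_exact = sigma**2 / (2 * lam) * (1 - np.exp(-2 * lam * T))

print("mean:", x.mean(), "vs", mean_exact)
print("var :", x.var(), "vs", var_exact)
```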

Parameter interpretation in the OU-process

\lambda : how strongly the system reacts to perturbations (the decay-rate or growth-rate)
\mu : the asymptotic mean
\sigma : the variation or the size of the noise.

[Figure: simulated sample paths X(t) against time for three parameter settings: (0.01, 1), (0.1, 1) and (0.01, 0.5).]

Example: population growth model

dN_t = a N_t\, dt + \sigma N_t\, dW_t

Also called the geometric Brownian motion.

The Itô solution:

N_t = N_0 \exp\Big\{ \big( a - \tfrac{1}{2}\sigma^2 \big) t + \sigma W_t \Big\}

The Stratonovich solution:

N_t = N_0 \exp\{ a t + \sigma W_t \}

Qualitative behavior of the Itô solution

N_t = N_0 \exp\Big\{ \big( a - \tfrac{1}{2}\sigma^2 \big) t + \sigma W_t \Big\}

If a > \tfrac{1}{2}\sigma^2 then N_t \to \infty when t \to \infty, a.s.
If a < \tfrac{1}{2}\sigma^2 then N_t \to 0 when t \to \infty, a.s.
If a = \tfrac{1}{2}\sigma^2 then N_t will fluctuate between arbitrarily large
and arbitrarily small values as t \to \infty, a.s.

Whereas for the Stratonovich solution we have

N_t = N_0 \exp\{ a t + \sigma W_t \}

If a > 0 then N_t \to \infty when t \to \infty, a.s.
If a < 0 then N_t \to 0 when t \to \infty, a.s.

... just like in the deterministic case.

Apparently it makes a huge difference which interpretation we choose.
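A small simulation sketch contrasting the two interpretations on the same Brownian path, and checking the expectations computed on the following slides (the parameter values are illustrative assumptions, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(1)

a, sigma, N0 = 0.3, 1.0, 1.0          # note 0 < a < sigma^2 / 2
T, n_steps = 50.0, 5_000
t = np.linspace(0.0, T, n_steps + 1)
dW = np.sqrt(T / n_steps) * rng.standard_normal(n_steps)
W = np.concatenate(([0.0], np.cumsum(dW)))

N_ito = N0 * np.exp((a - 0.5 * sigma**2) * t + sigma * W)   # Ito solution
N_strat = N0 * np.exp(a * t + sigma * W)                    # Stratonovich solution
print(N_ito[-1], N_strat[-1])   # typically the Ito path has decayed, the Stratonovich path has grown

# Monte Carlo check of the expectations derived on the next slides
t1, n_paths = 2.0, 200_000
WT = np.sqrt(t1) * rng.standard_normal(n_paths)
print(np.mean(N0 * np.exp((a - 0.5 * sigma**2) * t1 + sigma * WT)), N0 * np.exp(a * t1))
print(np.mean(N0 * np.exp(a * t1 + sigma * WT)), N0 * np.exp((a + 0.5 * sigma**2) * t1))
```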

Note though:
If W_t is independent of N_t we would expect that

E[N_t] = E[N_0] e^{a t}

i.e. the same as when there is no noise in the growth rate. Let us check:

Let

Y_t = e^{\sigma W_t}

and apply Itô's formula

dY_t = \tfrac{1}{2}\sigma^2 e^{\sigma W_t}\, dt + \sigma e^{\sigma W_t}\, dW_t

i.e.

Y_t = Y_0 + \tfrac{1}{2}\sigma^2 \int_0^t e^{\sigma W_s}\, ds + \sigma \int_0^t e^{\sigma W_s}\, dW_s

Thus

E[Y_t] = \underbrace{E[Y_0]}_{=1} + \tfrac{1}{2}\sigma^2 \int_0^t \underbrace{E[e^{\sigma W_s}]}_{=E[Y_s]}\, ds + \sigma \underbrace{E\Big[\int_0^t e^{\sigma W_s}\, dW_s\Big]}_{=0}

We obtain the differential equation for E[Y_t]:

\frac{d}{dt} E[Y_t] = \tfrac{1}{2}\sigma^2 E[Y_t], \qquad E[Y_0] = 1

so that

E[Y_t] = E[e^{\sigma W_t}] = e^{\sigma^2 t/2}

Finally

E[N_t] = E\Big[ N_0 \exp\Big\{ \big(a - \tfrac{1}{2}\sigma^2\big) t + \sigma W_t \Big\} \Big]
       = E[N_0] \exp\Big\{ \big(a - \tfrac{1}{2}\sigma^2\big) t \Big\}\, E[\exp\{\sigma W_t\}]
       = E[N_0] \exp\Big\{ \big(a - \tfrac{1}{2}\sigma^2\big) t \Big\} \exp\Big\{ \tfrac{1}{2}\sigma^2 t \Big\}
       = E[N_0] e^{a t}

exactly as we expected! However, for the Stratonovich solution, the
same calculations give

E[N_t] = E[N_0] e^{(a + \sigma^2/2) t}

where a + \sigma^2/2 is seen to be a different parameter from a.

An existence and uniqueness result

Linear growth and local Lipschitz conditions:
For each N \in \mathbb{N} there exists a constant K_N such that

|b(x, t)| + |\sigma(x, t)| \le K_N (1 + |x|)

and

|b(x, t) - b(y, t)| + |\sigma(x, t) - \sigma(y, t)| \le K_N |x - y|

for all t \in [0, N] and for all x, y, where |\sigma|^2 = \mathrm{tr}(\sigma \sigma^T).

Then

dX_t = b(X_t)\, dt + \sigma(X_t)\, dW_t, \qquad X_0 = U, \qquad U \perp \{W_t\}_{t \ge 0}

has a unique t-continuous solution X_t.

Examples from ODEs

The equation

\frac{dx_t}{dt} = x_t^2, \qquad x_0 = 1

does not satisfy the linear growth condition. It has the unique solution

x_t = \frac{1}{1 - t}, \qquad 0 \le t < 1

but no global solution (defined for all t).

The equation

\frac{dx_t}{dt} = 3 x_t^{2/3}, \qquad x_0 = 0

does not satisfy the Lipschitz condition at x = 0. It has more than
one solution:

x_t = 0 for t \le a, \qquad x_t = (t - a)^3 for t > a

for any a > 0.

The linear growth condition ensures that the solution X_t does not
explode, i.e. |X_t| does not tend to \infty in finite time.

The Lipschitz condition ensures that a solution X_t is unique: If X_t^{(1)}
and X_t^{(2)} are two t-continuous processes satisfying the conditions then

X_t^{(1)} = X_t^{(2)} \quad for all t \le T, a.s.

The solution X_t, where drift and diffusion coefficients fulfill the
growth and Lipschitz conditions, is a strong solution:

the version of W_t is given in advance
the solution X_t is F_t^U-adapted, where F_t^U is the filtration generated by the initial value U and W_s, s \le t.

If only b(\cdot) and \sigma(\cdot) are given, and we ask for a pair of processes
(\tilde{X}_t, \tilde{W}_t), then the solution is called a weak solution.
Strong uniqueness means pathwise uniqueness, weak uniqueness
means that any two solutions are identical in law, i.e. have the same
finite-dimensional distributions.

A strong solution is also a weak solution.
There are SDEs with no strong solution, but still a unique weak solution.

Sufficient condition for the existence of a unique weak solution

dX_t = b(X_t)\, dt + \sigma(X_t)\, dW_t, \qquad X_0 = U

Let a(x) = \sigma(x)\sigma(x)^T and assume

a continuous
a(x) strictly positive definite for all x
there exists a constant K such that

|a_{ij}(x)| \le K(1 + |x|^2)
|b_i(x)| \le K(1 + |x|)

for all i, j = 1, \ldots, d and x.

The solution is a strong Markov process.

Remark: Note that the above conditions are sufficient conditions, not
necessary conditions.

Transition densities:

dX_t = b(X_t)\, dt + \sigma(X_t)\, dW_t.

y \mapsto p(t, x, y)

Conditional density of X_t given X_0 = x;
also conditional density of X_{t+s} given X_s = x.

Data: X_{t_1}, \ldots, X_{t_n}, \quad t_1 < \cdots < t_n.

Likelihood function:

L(\theta) = \prod_{i=1}^{n} p(t_i - t_{i-1}, X_{t_{i-1}}, X_{t_i}; \theta)
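A sketch of how this likelihood is used when the transition density is known in closed form, here for the Ornstein-Uhlenbeck process from the earlier slides (the parameter values, the simulated data and the choice of a Nelder-Mead optimizer are illustrative assumptions, not part of the slides):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

# Simulate equidistant OU observations exactly, using the Gaussian transition density
lam0, mu0, sigma0, dt, n = 0.8, 1.5, 0.5, 0.25, 500
x = np.empty(n + 1)
x[0] = 0.0
for i in range(n):
    m = mu0 + (x[i] - mu0) * np.exp(-lam0 * dt)
    v = sigma0**2 / (2 * lam0) * (1 - np.exp(-2 * lam0 * dt))
    x[i + 1] = m + np.sqrt(v) * rng.standard_normal()

def neg_log_lik(theta):
    """Minus the log of the product of Gaussian transition densities."""
    lam, mu, sigma = theta
    if lam <= 0 or sigma <= 0:
        return np.inf
    m = mu + (x[:-1] - mu) * np.exp(-lam * dt)                 # conditional means
    v = sigma**2 / (2 * lam) * (1 - np.exp(-2 * lam * dt))     # conditional variance
    return 0.5 * np.sum(np.log(2 * np.pi * v) + (x[1:] - m)**2 / v)

fit = minimize(neg_log_lik, x0=[1.0, 1.0, 1.0], method="Nelder-Mead")
print(fit.x)   # should be close to (0.8, 1.5, 0.5)
```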

Chapman-Kolmogorov equation:

p(t + s, x, y) = \int p(t, z, y)\, p(s, x, z)\, dz

Kolmogorov's backward equation:

\frac{\partial p}{\partial t} = \frac{1}{2}\sigma^2(x) \frac{\partial^2 p}{\partial x^2} + b(x) \frac{\partial p}{\partial x}

with the initial condition p(t, x, y) \to \delta_x as t \to 0.
\delta_x is the Dirac measure at x.

Kolmogorov's forward equation (Fokker-Planck equation):

\frac{\partial p}{\partial t} = \frac{1}{2} \frac{\partial^2}{\partial y^2}\big[\sigma(y)^2 p\big] - \frac{\partial}{\partial y}\big[b(y) p\big]

Examples: Ornstein-Uhlenbeck

dX_t = -\lambda (X_t - \mu)\, dt + \sigma\, dW_t

Remember that

(X_t | X_0 = x) \sim N\big( \mu + (x - \mu) e^{-\lambda t},\; \sigma^2 (1 - e^{-2\lambda t})/(2\lambda) \big).

p(t, x, y) = \frac{1}{\sqrt{\pi \sigma^2 (1 - e^{-2\lambda t})/\lambda}} \exp\left( -\frac{\big(y - \mu - (x - \mu) e^{-\lambda t}\big)^2}{\sigma^2 (1 - e^{-2\lambda t})/\lambda} \right)

Cox-Ingersoll-Ross

dX_t = -\lambda (X_t - \mu)\, dt + \sigma \sqrt{X_t}\, dW_t, \qquad \lambda > 0, \mu > 0, \sigma > 0.

p(t, x, y) = \frac{\beta}{1 - e^{-\lambda t}} \left(\frac{y}{x}\right)^{\nu/2} \exp\left( \tfrac{1}{2}\nu\lambda t - \beta y - \frac{\beta (x + y)}{e^{\lambda t} - 1} \right) I_\nu\!\left( \frac{\beta \sqrt{xy}}{\sinh(\tfrac{1}{2}\lambda t)} \right)

where \beta = 2\lambda/\sigma^2 and \nu = 2\lambda\mu/\sigma^2 - 1.
I_\nu is a modified Bessel function with index \nu.

The transition density is a non-central \chi^2-distribution.

Radial Ornstein-Uhlenbeck

dX_t = (\theta X_t^{-1} - X_t)\, dt + dW_t, \qquad \theta > 0.

p(t, x, y) = \left(\frac{y}{x}\right)^{\theta} \frac{\sqrt{xy}\, \exp\big( -y^2 + (\theta + \tfrac{1}{2}) t \big)}{\sinh(t)} \exp\left( -\frac{x^2 + y^2}{e^{2t} - 1} \right) I_{\theta - \frac{1}{2}}\!\left( \frac{xy}{\sinh(t)} \right)

I_{\theta - 1/2} is a modified Bessel function with index \theta - 1/2.
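A numerical cross-check of the non-central chi-square statement for the CIR density (a sketch; the parameter values are illustrative and the scipy calls are an assumption about available tooling, not part of the slides):

```python
import numpy as np
from scipy.special import iv
from scipy.stats import ncx2

lam, mu, sigma = 0.8, 1.2, 0.4      # illustrative CIR parameters
t, x = 0.5, 1.0                     # transition time and starting point
y = np.linspace(0.05, 3.0, 7)       # evaluation points

beta = 2 * lam / sigma**2
nu = 2 * lam * mu / sigma**2 - 1

# Transition density written with the modified Bessel function I_nu
p_bessel = (beta / (1 - np.exp(-lam * t)) * (y / x) ** (nu / 2)
            * np.exp(0.5 * nu * lam * t - beta * y - beta * (x + y) / (np.exp(lam * t) - 1))
            * iv(nu, beta * np.sqrt(x * y) / np.sinh(0.5 * lam * t)))

# Same density as a rescaled non-central chi-square: 2c X_t | X_0 = x ~ ncx2(df, nc)
c = beta / (1 - np.exp(-lam * t))
p_chi2 = 2 * c * ncx2.pdf(2 * c * y, df=2 * (nu + 1), nc=2 * c * x * np.exp(-lam * t))

print(np.max(np.abs(p_bessel - p_chi2)))   # numerically zero
```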

Taylor expansions

Review of deterministic expansions:

Consider

\frac{d}{dt} x_t = a(x_t)

with initial value x_{t_0} for t \in [t_0, T], and a(\cdot) is sufficiently smooth.
We can write

x_t = x_{t_0} + \int_{t_0}^{t} a(x_s)\, ds

Let f: \mathbb{R} \to \mathbb{R} be a continuously differentiable function. By the
chain rule

\frac{d}{dt} f(x_t) = a(x_t) f'(x_t)

where ' denotes differentiation with respect to x. Define the operator

L f = a f'

Express the above equation for f(x) in integral form

f(x_t) = f(x_{t_0}) + \int_{t_0}^{t} L f(x_s)\, ds

Note that if f(x) = x then L f = a, L^2 f = L a and

x_t = x_{t_0} + \int_{t_0}^{t} a(x_s)\, ds

If f(x) = a(x) then L a = a a' and

a(x_s) = a(x_{t_0}) + \int_{t_0}^{s} L a(x_z)\, dz

Apply this to the equation for x_t

x_t = x_{t_0} + \int_{t_0}^{t} \Big( a(x_{t_0}) + \int_{t_0}^{s} L a(x_z)\, dz \Big) ds
    = x_{t_0} + a(x_{t_0}) \int_{t_0}^{t} ds + \int_{t_0}^{t}\!\int_{t_0}^{s} L a(x_z)\, dz\, ds
    = x_{t_0} + a(x_{t_0})(t - t_0) + R_1

which is the simplest non-trivial expansion for x_t.

Apply again to the function f = L a to obtain

x_t = x_{t_0} + a(x_{t_0}) \int_{t_0}^{t} ds + \int_{t_0}^{t}\!\int_{t_0}^{s} \Big( L a(x_{t_0}) + \int_{t_0}^{z} L^2 a(x_u)\, du \Big) dz\, ds
    = x_{t_0} + a(x_{t_0}) \int_{t_0}^{t} ds + L a(x_{t_0}) \int_{t_0}^{t}\!\int_{t_0}^{s} dz\, ds + R_2
    = x_{t_0} + a(x_{t_0})(t - t_0) + \frac{1}{2} L a(x_{t_0})(t - t_0)^2 + R_2

where

R_2 = \int_{t_0}^{t}\!\int_{t_0}^{s}\!\int_{t_0}^{z} L^2 a(x_u)\, du\, dz\, ds
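A tiny numerical illustration of the two truncations (the choice a(x) = -x^2 is just an example, not from the slides): the first-order error shrinks like (t - t_0)^2 and the second-order error like (t - t_0)^3.

```python
# Truncations x_{t0} + a*dt and x_{t0} + a*dt + 0.5*La*dt^2 for dx/dt = a(x) = -x^2,
# whose exact solution is x_t = x0 / (1 + x0 * (t - t0)).
def a(x):
    return -x**2

def La(x):                  # L a = a a'
    return a(x) * (-2 * x)

x0 = 1.0
for dt in (0.2, 0.1, 0.05):
    exact = x0 / (1 + x0 * dt)
    first = x0 + a(x0) * dt
    second = first + 0.5 * La(x0) * dt**2
    print(dt, exact - first, exact - second)
```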

For a general r + 1 times continuously differentiable function f we
obtain the classical Taylor formula in integral form

f(x_t) = f(x_{t_0}) + \sum_{l=1}^{r} \frac{(t - t_0)^l}{l!} L^l f(x_{t_0}) + \int_{t_0}^{t} \cdots \int_{t_0}^{s_2} L^{r+1} f(x_{s_1})\, ds_1 \ldots ds_{r+1}

The Itô-Taylor expansion

Iterated application of Itô's formula!

Consider

X_t = X_{t_0} + \int_{t_0}^{t} b(X_s)\, ds + \int_{t_0}^{t} \sigma(X_s)\, dW_s

We introduce the operators

L^0 f = b f' + \frac{1}{2}\sigma^2 f''
L^1 f = \sigma f'

For f twice continuously differentiable, Itô's formula yields

f(X_t) = f(X_{t_0}) + \int_{t_0}^{t} \Big( b(X_s) f'(X_s) + \frac{1}{2}\sigma^2(X_s) f''(X_s) \Big) ds + \int_{t_0}^{t} \sigma(X_s) f'(X_s)\, dW_s
       = f(X_{t_0}) + \int_{t_0}^{t} L^0 f(X_s)\, ds + \int_{t_0}^{t} L^1 f(X_s)\, dW_s

Note that for f(x) = x we have L^0 f = b and L^1 f = \sigma, and the
original equation for X_t is obtained

X_t = X_{t_0} + \int_{t_0}^{t} b(X_s)\, ds + \int_{t_0}^{t} \sigma(X_s)\, dW_s

Like in the deterministic expansions, we apply Itô's formula to the
functions f = b and f = \sigma and obtain

X_t = X_{t_0} + \int_{t_0}^{t} \Big( b(X_{t_0}) + \int_{t_0}^{s} L^0 b(X_z)\, dz + \int_{t_0}^{s} L^1 b(X_z)\, dW_z \Big) ds
        + \int_{t_0}^{t} \Big( \sigma(X_{t_0}) + \int_{t_0}^{s} L^0 \sigma(X_z)\, dz + \int_{t_0}^{s} L^1 \sigma(X_z)\, dW_z \Big) dW_s
    = X_{t_0} + b(X_{t_0}) \int_{t_0}^{t} ds + \sigma(X_{t_0}) \int_{t_0}^{t} dW_s + R
    = X_{t_0} + b(X_{t_0})(t - t_0) + \sigma(X_{t_0})(W_t - W_{t_0}) + R

This is the simplest non-trivial Itô-Taylor expansion of X_t involving
single integrals with respect to both time and the Wiener process.
The remainder contains multiple integrals with respect to both.

In the previous expansion we had

R = \int_{t_0}^{t}\!\int_{t_0}^{s} L^0 b(X_z)\, dz\, ds + \int_{t_0}^{t}\!\int_{t_0}^{s} L^1 b(X_z)\, dW_z\, ds
    + \int_{t_0}^{t}\!\int_{t_0}^{s} L^0 \sigma(X_z)\, dz\, dW_s + \int_{t_0}^{t}\!\int_{t_0}^{s} L^1 \sigma(X_z)\, dW_z\, dW_s

Note that dz\, ds, dW_z\, ds and dz\, dW_s scale like o(dt), whereas dW_z\, dW_s
scales like dt, comparable to the terms in the simplest expansion with
two single integrals.

We therefore continue the expansion by applying the Itô formula to
f = L^1 \sigma.

The next Itô-Taylor expansion becomes

X_t = X_{t_0} + b(X_{t_0}) \int_{t_0}^{t} ds + \sigma(X_{t_0}) \int_{t_0}^{t} dW_s + L^1\sigma(X_{t_0}) \int_{t_0}^{t}\!\int_{t_0}^{s} dW_z\, dW_s + R

with remainder

R = \int_{t_0}^{t}\!\int_{t_0}^{s} L^0 b(X_z)\, dz\, ds + \int_{t_0}^{t}\!\int_{t_0}^{s} L^1 b(X_z)\, dW_z\, ds + \int_{t_0}^{t}\!\int_{t_0}^{s} L^0 \sigma(X_z)\, dz\, dW_s
    + \int_{t_0}^{t}\!\int_{t_0}^{s}\!\int_{t_0}^{z} L^0 L^1 \sigma(X_u)\, du\, dW_z\, dW_s + \int_{t_0}^{t}\!\int_{t_0}^{s}\!\int_{t_0}^{z} L^1 L^1 \sigma(X_u)\, dW_u\, dW_z\, dW_s

Using \int_{t_0}^{t}\!\int_{t_0}^{s} dW_z\, dW_s = \frac{1}{2}\big( (W_t - W_{t_0})^2 - (t - t_0) \big), and writing
\Delta t = t - t_0 and \Delta W_t = W_t - W_{t_0}, this expansion becomes

X_t = X_{t_0} + b(X_{t_0}) \Delta t + \sigma(X_{t_0}) \Delta W_t + \frac{1}{2}\sigma(X_{t_0})\sigma'(X_{t_0})\big( \Delta W_t^2 - \Delta t \big) + R

Numeric solutions

When no explicit solution is available we can approximate different
characteristics of the process by simulation. (Realizations, moments,
qualitative behavior etc). We use the approximations from the
Itô-Taylor expansions.

Different schemes (Euler, Milstein, higher order schemes...)
Rate of convergence (Weak and strong)

Consider the Itô stochastic differential equation

dX_t = b(X_t)\, dt + \sigma(X_t)\, dW_t

and a time discretization

0 = t_0 < t_1 < \cdots < t_j < \cdots < t_N = T

Put

\Delta_j = t_{j+1} - t_j
\Delta W_j = W_{t_{j+1}} - W_{t_j}

Then

\Delta W_j \sim N(0, \Delta_j)

The Euler-Maruyama scheme

We approximate the process X_t given by

dX_t = b(X_t)\, dt + \sigma(X_t)\, dW_t; \qquad X(0) = x_0

at the discrete time-points t_j, 1 \le j \le N, by

Y_{t_{j+1}} = Y_{t_j} + b(Y_{t_j}) \Delta_j + \sigma(Y_{t_j}) \Delta W_j; \qquad Y_{t_0} = x_0

where \Delta W_j = \sqrt{\Delta_j}\, Z_j, with Z_j \sim N(0, 1) for all j.

Let us consider the expectation of the absolute error at the final time
instant T: there exist constants K > 0 and \delta_0 > 0 such that

E(|X_T - Y_{t_N}|) \le K \delta^{0.5}

for any time discretization with maximum step size \delta \in (0, \delta_0).

We say that the approximating process Y converges in the strong
sense with order 0.5.
(Compare with the Euler scheme for an ODE, which has order 1.)
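A sketch of the scheme and of the strong order 0.5, using geometric Brownian motion as a test case because its exact solution is available from the earlier slides (the test problem, parameter values and path counts are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)

def euler_maruyama(b, sigma, x0, dW, dt):
    """Euler-Maruyama on a uniform grid; dW has shape (n_paths, n_steps)."""
    y = np.full(dW.shape[0], x0, dtype=float)
    for j in range(dW.shape[1]):
        y = y + b(y) * dt + sigma(y) * dW[:, j]
    return y

# Test problem: dX = a X dt + s X dW with known exact solution
a, s, x0, T, n_paths = 1.0, 0.5, 1.0, 1.0, 20_000
b = lambda x: a * x
sig = lambda x: s * x

for n_steps in (2**4, 2**6, 2**8):
    dt = T / n_steps
    dW = np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
    exact = x0 * np.exp((a - 0.5 * s**2) * T + s * dW.sum(axis=1))   # exact solution, same noise
    y = euler_maruyama(b, sig, x0, dW, dt)
    print(n_steps, np.mean(np.abs(exact - y)))   # E|X_T - Y_{t_N}| shrinks roughly like dt^0.5
```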

The Euler-Maruyama scheme

Sometimes we do not need a close pathwise approximation, but only
some function of the value at a given final time T (e.g. E(X_T),
E(X_T^2) or generally E(g(X_T))):
There exist constants K > 0 and \delta_0 > 0 such that for any polynomial g

|E(g(X_T)) - E(g(Y_{t_N}))| \le K \delta

for any time discretization with maximum step size \delta \in (0, \delta_0).

We say that the approximating process Y converges in the weak sense
with order 1.

The Milstein scheme

We can even do better!

We approximate X_t by

Y_{t_{j+1}} = Y_{t_j} + b(Y_{t_j}) \Delta_j + \sigma(Y_{t_j}) \Delta W_j + \frac{1}{2}\sigma(Y_{t_j})\sigma'(Y_{t_j})\big\{ (\Delta W_j)^2 - \Delta_j \big\} \quad (now Milstein...)

where the prime ' denotes the derivative.
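A sketch of the Milstein scheme on the same geometric Brownian motion test problem as above; with the extra correction term the strong error now decreases roughly linearly in the step size (the test problem and parameter values are again illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)

def milstein(b, sigma, dsigma, x0, dW, dt):
    """Milstein scheme on a uniform grid; dW has shape (n_paths, n_steps)."""
    y = np.full(dW.shape[0], x0, dtype=float)
    for j in range(dW.shape[1]):
        y = (y + b(y) * dt + sigma(y) * dW[:, j]
             + 0.5 * sigma(y) * dsigma(y) * (dW[:, j] ** 2 - dt))
    return y

# Same test problem: dX = a X dt + s X dW
a, s, x0, T, n_paths = 1.0, 0.5, 1.0, 1.0, 20_000
b = lambda x: a * x
sig = lambda x: s * x
dsig = lambda x: s * np.ones_like(x)    # derivative of sigma(x) = s x

for n_steps in (2**4, 2**6, 2**8):
    dt = T / n_steps
    dW = np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
    exact = x0 * np.exp((a - 0.5 * s**2) * T + s * dW.sum(axis=1))
    y = milstein(b, sig, dsig, x0, dW, dt)
    print(n_steps, np.mean(np.abs(exact - y)))   # strong error now shrinks roughly like dt
```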

The Milstein scheme converges in the strong sense with order 1:

E(|X_T - Y_{t_N}|) \le K \delta

We could regard the Milstein scheme as the proper generalization of
the deterministic Euler scheme.

If \sigma(X_t) does not depend on X_t, the Euler-Maruyama and the
Milstein scheme coincide.

Multi-dimensional diffusions:
Euler scheme: Similar.
Milstein scheme: Involves multiple Wiener integrals such as

\int_{t_n}^{t_{n+1}}\!\int_{t_n}^{s} dW_u^{(1)}\, dW_s^{(2)}

Simulation schemes are based on stochastic Itô-Taylor expansions
that are formally obtained by iterated use of Itô's formula.
Kloeden and Platen (1992)
