
Stochastic Integrals

Felix Apfaltrer
Presentation at Stochastic PDE Reading Seminar
CUNY Applied Mathematics Seminar

May 13th and May 1st, 2009

1 Review: Brownian Motion


Definition: Let (Ω, F, P) be a probability space. A Brownian motion is a function W (t), t ≥ 0
such that

• W (0) = 0
• W (t) is continuous in t for a.e. ω
• For any partition Π := {0 = t_0 < t_1 < t_2 < · · · < t_n = t} of [0, t], the increments

W (t1 ) − W (t0 ), W (t2 ) − W (t1 ), . . . , W (tn ) − W (tn−1 )

are independent and normally distributed with mean zero and variance ti+1 − ti .
• W (t) − W (s) ∼ N ( 0, t − s ) for any s < t

Remarks:

• Martingale: E[W (t)|F(s)] = W (s) for 0 ≤ s < t.


• E[W (s)W (t)] = min{s, t}
• W = W (t, ω) depends on the realization ω, but to simplify notation we write W (t) or Wt .
• Quadratic variation:
\[
\langle W \rangle_t = [W(t), W(t)]_t := \lim_{|\Pi| \to 0} \sum_{i=0}^{n-1} \big(W(t_{i+1}) - W(t_i)\big)^2 = t \tag{1}
\]

– Let the sampled quadratic variation be
\[
Q_\Pi = \sum_{i=0}^{n-1} \big(W(t_{i+1}) - W(t_i)\big)^2.
\]
We show that E[Q_\Pi] \to t and Var[Q_\Pi] \to 0 as |\Pi| \to 0; hence [W(t), W(t)]_t = t.

Indeed, recalling that the increments W(t_{i+1}) − W(t_i) are normally distributed with mean zero and variance t_{i+1} − t_i, we obtain
\[
E[Q_\Pi] = \sum_{i=0}^{n-1} E\big[(W(t_{i+1}) - W(t_i))^2\big] = \sum_{i=0}^{n-1} (t_{i+1} - t_i) = t_n - t_0 = t.
\]

For X ∼ N(0, σ²) we have E[X⁴] = 3σ⁴, and in general Var[X] = E[X²] − µ_X²; therefore,
\[
\sum_{i=0}^{n-1} \operatorname{Var}\big[(W(t_{i+1}) - W(t_i))^2\big] = \sum_{i=0}^{n-1} \Big( E\big[(W(t_{i+1}) - W(t_i))^4\big] - (t_{i+1} - t_i)^2 \Big) = 2 \sum_{i=0}^{n-1} (t_{i+1} - t_i)^2,
\]

and since
\[
\operatorname{Var}[Q_\Pi] = 2 \sum_{i=0}^{n-1} (t_{i+1} - t_i)^2 \le 2 \max_i |t_{i+1} - t_i| \sum_{i=0}^{n-1} (t_{i+1} - t_i) = 2\,|\Pi|\, t,
\]
we have that
\[
\operatorname{Var}[Q_\Pi] \longrightarrow 0 \quad \text{as } |\Pi| \to 0,
\]
and therefore the quadratic variation of W_t, i.e. the L²-limit of Q_\Pi, is [W, W](t) = t.
– The quadratic variation is a central notion for SDEs; in differential notation this fact reads
dW_t\, dW_t = dt.
– [W, W]_t is a path-dependent property (see the remark on p. 131 of [?]).
– If the interval were [a, b] instead of [0, t], then the partition is Π_{[a,b]} := {a = t_0 < t_1 < t_2 < · · · < t_n = b} and the quadratic variation is
\[
[W(t), W(t)]_{[a,b]} = b - a.
\]

• Markov property: E[f(W_t) | F(s)] = g(W_s) for some function g (for bounded measurable f)


• Transition probability density:
\[
p(\tau, x, y) = \frac{1}{\sqrt{2\pi\tau}}\, \exp\Big(-\frac{(x-y)^2}{2\tau}\Big), \qquad \text{i.e. } p(\tau, x, y)\,dy = P(W_{s+\tau} \in dy \mid W_s = x)
\]

• Nowhere differentiable paths.


• Proofs of existence: Paley–Wiener using Fourier series, Lévy–Ciesielski using Haar and Schauder functions, and others (see Knight and Kahane)

• Construction as the limit of a scaled symmetric random walk (see for example [?]): Let
\[
M_k = \sum_{i=1}^{k} X_i, \qquad k = 1, 2, \ldots
\]
be a symmetric random walk, e.g. modeled by tossing a fair coin k times and setting
\[
X_i = \begin{cases} \;\;\, 1 & \omega_i = H, \text{ with probability } 1/2 \\ -1 & \omega_i = T, \text{ with probability } 1/2 \end{cases}
\]
Then X_i has a Bernoulli distribution with µ_X = 0, σ²_X = 1, and M_k has a binomial distribution with µ_{M_k} = 0, σ²_{M_k} = k; hence the scaled random walk
\[
W^{(n)}(t) = \frac{1}{\sqrt{n}}\, M_{nt}
\]
is also binomially distributed, with mean µ_{W^{(n)}(t)} = 0 and variance t.
Therefore, by the Central Limit Theorem, as n → ∞, W^{(n)}(t) converges to a normal distribution with mean 0 and variance t. Other properties of W^{(n)}(t) are preserved in the limit:
– W^{(n)}(0) = 0
– W^{(n)}(t) − W^{(n)}(s) ∼ N(0, t − s) for any s < t
– W^{(n)}(t) has independent increments,
and therefore we obtain the Brownian motion W(t) in the limit (a numerical sketch illustrating this construction and the sampled quadratic variation follows below).
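
Below is a minimal numerical sketch of the two facts above: the scaled random walk W^{(n)}(1) is approximately N(0, 1), and the sampled quadratic variation Q_Π approaches t as the mesh shrinks. The sketch is not part of the original notes; it assumes NumPy is available, and the seed, sample sizes, and partitions are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)   # fixed seed, an illustrative choice
t = 1.0

# (i) Scaled symmetric random walk: W^(n)(1) = M_n / sqrt(n) is approximately N(0, 1).
n_steps, n_walks = 2_000, 2_000
walks = rng.choice([-1.0, 1.0], size=(n_walks, n_steps)).sum(axis=1) / np.sqrt(n_steps)
print(f"scaled walk at t=1: mean {walks.mean():+.3f}, variance {walks.var():.3f}  (expect 0 and 1)")

# (ii) Sampled quadratic variation Q_Pi of a Brownian path on finer and finer partitions.
for m in (10, 1_000, 100_000):
    dW = rng.normal(0.0, np.sqrt(t / m), size=m)   # independent N(0, dt) increments
    Q = np.sum(dW**2)                              # Q_Pi = sum of squared increments
    print(f"{m:>6d} subintervals: Q_Pi = {Q:.4f}  (expect t = {t})")
```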

2 Motivation
We want to be able to define a stochastic differential equation [?, ?]

dXt = a(Xt , t)dt + b(Xt , t)dWt (2)


with initial condition
X(0, ω) = X0 (ω).

This is done by defining the equivalent stochastic integral:
\[
X(t) = X_0 + \int_0^t a(X_s, s)\,ds + \int_0^t b(X_s, s)\,dW_s \tag{3}
\]

Therefore, the integral
\[
\int_0^t b(X_s, s)\,dW_s
\]
needs to be defined.
Paley-Wiener-Zygmund [?] define such an integral for a deterministic and differentiable function g
on [0, 1] which is zero on the boundary by integration by parts
\[
\int_0^1 g\,dW_s := -\int_0^1 W_s\, g'(s)\,ds,
\]

but this definition only works for this restricted family of functions.
We are interested in integrals where the integrand is a random function, for example
\[
\int_0^t W_s\,dW_s = \;?
\]

We start by defining such stochastic integrals from Riemann sums.

3 From Riemann Sums
An arbitrary partition of size n of the interval [0, t] is denoted by
\[
\Pi_n := \{0 = t_0 < t_1 < t_2 < \cdots < t_n = t\}.
\]
We denote the mesh-size by |\Pi_n| = \max_{0 \le i < n} (t_{i+1} - t_i).

We approximate the integral
\[
\int_0^t W_s\,dW_s
\]
with the Riemann sum
\[
R_n := R(\Pi_n, \lambda) = \sum_{k=0}^{n-1} W(\tau_k)\big(W(t_{k+1}) - W(t_k)\big), \tag{4}
\]
where \tau_k = (1 - \lambda) t_k + \lambda t_{k+1} and \lambda \in [0, 1] is fixed.


LEMMA (proof from [?, ?]): The L²(Ω)-limit of R_n is given by
\[
\lim_{n \to \infty} R_n = \frac{W_t^2}{2} + \Big(\lambda - \frac{1}{2}\Big) t \tag{5}
\]
Proof sketch: Rewrite R_n as
\[
R_n = \frac{W_t^2}{2} - \frac{1}{2} \sum_{k=0}^{n-1} \big(W_{t_{k+1}} - W_{t_k}\big)^2 + \sum_{k=0}^{n-1} \big(W_{\tau_k} - W_{t_k}\big)^2 + \sum_{k=0}^{n-1} \big(W_{t_{k+1}} - W_{\tau_k}\big)\big(W_{\tau_k} - W_{t_k}\big)
\]

The first sum is the quadratic variation in (1), whose value is t. For the second sum, the same argument as in the proof of the quadratic variation (1) leads to
\[
E\Big[\sum_{k=0}^{n-1} \big(W_{\tau_k} - W_{t_k}\big)^2\Big] = \sum_{k=0}^{n-1} (\tau_k - t_k) = \lambda t \qquad \text{and} \qquad \operatorname{Var}[\,\cdot\,] \longrightarrow 0 \ \text{ as } |\Pi| \longrightarrow 0.
\]
Similarly, since the two increments appearing in each term of the third sum are independent, each term has expected value 0, and the variance of the sum tends to zero by an argument analogous to that for the quadratic variation.
Remarks:

• Note the dependence of this limit on λ, which specifies the point within each subinterval at which the integrand is evaluated, i.e. the function value that determines the height taken on that subinterval. When a function is Riemann integrable, there is no dependence on the choice of such a point: in the limit, the right-, left-, upper-, lower- and midpoint Riemann sums all coincide with the Riemann integral.
• If λ = 0, then we are taking the left endpoint, and the limit becomes the Itô Integral
\[
\int_0^t W_s\,dW_s = \frac{W_t^2}{2} - \frac{t}{2}. \tag{6}
\]
Selecting the left endpoint allows the definition of the Itô integral
\[
\int_0^t G(W_s, s)\,dW_s
\]

for a large class of functions called non-anticipating functions. This notion is very impor-
tant in applications where one does not know at the beginning of a time interval the outcome
of a random variable except at the present moment.

• Notice that in differential form, equation (6) can be formally written as
\[
2 W_t\,dW_t = d(W_t^2) - dt.
\]
This is a simple case of Itô's formula: the ordinary chain rule d(W_t^2) = 2 W_t\,dW_t no longer holds.


• If we select the midpoint (λ = 1/2), then the integral we obtain is called the Stratonovich Integral and is denoted by
\[
\int_0^t W_s \circ dW_s.
\]
This integral formally follows regular calculus rules. (Both choices of λ are compared numerically in the sketch below.)
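
The following sketch compares the two choices of λ numerically. It is not part of the original notes; it assumes NumPy, and the seed, final time, and partition size are illustrative. It evaluates the Riemann sums (4) for λ = 0 and λ = 1/2 along one Brownian path and compares them with the limits W_t²/2 − t/2 (Itô) and W_t²/2 (Stratonovich) from (5).

```python
import numpy as np

rng = np.random.default_rng(1)   # illustrative seed
t, n = 1.0, 100_000              # n subintervals of the partition
h = t / n

# Simulate W on a grid of mesh h/2 so that the partition points t_k
# and the midpoints tau_k = (t_k + t_{k+1})/2 are all grid points.
fine = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(h / 2), size=2 * n))))
W_left  = fine[:-1:2]    # W(t_k),      k = 0..n-1
W_mid   = fine[1::2]     # W(tau_k),    k = 0..n-1
W_right = fine[2::2]     # W(t_{k+1}),  k = 0..n-1
W_t = fine[-1]

ito   = np.sum(W_left * (W_right - W_left))   # lambda = 0 (left endpoint)
strat = np.sum(W_mid  * (W_right - W_left))   # lambda = 1/2 (midpoint)

print(f"Ito sum          {ito:+.4f}   vs  W_t^2/2 - t/2 = {W_t**2 / 2 - t / 2:+.4f}")
print(f"Stratonovich sum {strat:+.4f}   vs  W_t^2/2       = {W_t**2 / 2:+.4f}")
```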

4 Itô Integral
Let W (·) be a Brownian motion on the probability space (Ω, U, P).
Definition: For a fixed t the σ-algebra

• Wt = σ(Ws |0 ≤ s ≤ t) is called the history of the Brownian motion up to time t.


• Wt+ = σ(Wt − Ws |s ≥ t) is called the future of the Brownian motion beyond time t.

Definition: A family F = (Ft )t≥0 ⊂ U of σ-algebras is nonanticipating w.r.t. W if

• Fs ⊂ Ft , ∀0 ≤ s ≤ t

• Wt ⊂ Ft , ∀0 ≤ t
• Ft is independent of Wt+ , ∀t ≥ 0.

We call such a family F(·) a filtration.


Remarks:

• Think of F(t) as “containing all information available to us at time t”. Typically in SDEs, F_t = σ(W_s | 0 ≤ s ≤ t; X_0), where X_0(ω) is an initial condition.
• A stochastic process G(·) is nonanticipating if G(t) is F(t)-measurable for every t (it depends only on information available up to time t).
• A stronger notion (progressive measurability) is actually required: for each t, the map
\[
(s, \omega) \longmapsto G_s(\omega)
\]
on [0, t] × Ω is B([0, t]) ⊗ F(t)-measurable (see for example chapter 2 of Varadhan's lecture notes [?]).

4.1 The Itô Integral for simple processes

Definition: Let L²(0, T) be the space of all real-valued, progressively measurable, square-integrable processes G, i.e.,
\[
E\Big[\int_0^T G^2\,dt\Big] < \infty,
\]
and let L¹(0, T) be defined analogously, so that if F ∈ L¹(0, T), then
\[
E\Big[\int_0^T |F|\,dt\Big] < \infty.
\]

Definition: A process G ∈ L²(0, T) is a simple process if there exists a partition Π_n such that
\[
G(t) = G_k, \qquad \forall t \in (t_k, t_{k+1}).
\]
Note that since G ∈ L²(0, T), G is adapted to the filtration F(t).

Definition: For a simple process G ∈ L²(0, T), we define the Itô (stochastic) integral on the interval [0, T] as
\[
\int_0^T G\,dW := \sum_{k=0}^{n-1} G(t_k)\big(W(t_{k+1}) - W(t_k)\big) \tag{7}
\]

Figure 1: Simple Process Approximating Brownian Motion [?].

Properties: For any G, H ∈ L2 (0, T ) and a, b ∈ R, we have

1. Linearity:
\[
\int_0^T (aG + bH)\,dW = a\int_0^T G\,dW + b\int_0^T H\,dW
\]
2. Zero expectation:
\[
E\Big[\int_0^T G\,dW\Big] = 0
\]
3. Itô isometry:
\[
E\Big[\Big(\int_0^T G\,dW\Big)^2\Big] = E\Big[\int_0^T G^2\,dt\Big]
\]

Proof sketches:

1. Linearity follows from the definition of the Itô integral for simple processes.
2. Zero expectation follows from the independence of G(t_k) and W(t_{k+1}) − W(t_k) (the former is F(t_k)-measurable, the latter is independent of F(t_k)), together with the fact that the Brownian increments have mean zero:
\[
E\Big[\int_0^T G\,dW\Big] = E\Big[\sum_{k=0}^{n-1} G(t_k)\big(W(t_{k+1}) - W(t_k)\big)\Big] = \sum_{k=0}^{n-1} E[G(t_k)]\, E\big[W(t_{k+1}) - W(t_k)\big] = 0
\]

3. Itô isometry:
\[
E\Big[\Big(\int_0^T G\,dW\Big)^2\Big] = \sum_{j=0}^{n-1}\sum_{k=0}^{n-1} E\big[G(t_j)G(t_k)\big(W(t_{j+1}) - W(t_j)\big)\big(W(t_{k+1}) - W(t_k)\big)\big]
\]
For j < k, set A_k := W(t_{k+1}) − W(t_k) and B_{k,j} := G(t_j)G(t_k)(W(t_{j+1}) − W(t_j)). Since B_{k,j} is F(t_k)-measurable while A_k is independent of F(t_k), and since E[A_k] = 0 and E[|B_{k,j}|] < ∞, all the cross-terms vanish. Therefore, we obtain:
\[
E\Big[\Big(\int_0^T G\,dW\Big)^2\Big] = \sum_{k=0}^{n-1} E\big[G^2(t_k)\big(W(t_{k+1}) - W(t_k)\big)^2\big] = \sum_{k=0}^{n-1} E[G^2(t_k)]\, E\big[\big(W(t_{k+1}) - W(t_k)\big)^2\big]
\]
Since W(t_{k+1}) − W(t_k) ∼ N(0, t_{k+1} − t_k), the last expression equals \sum_{k=0}^{n-1} E[G^2(t_k)]\,(t_{k+1} - t_k), which is exactly E[\int_0^T G^2\,dt] for a simple process, and so
\[
E\Big[\Big(\int_0^T G\,dW\Big)^2\Big] = E\Big[\int_0^T G^2\,dt\Big]
\]
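
As a Monte Carlo sanity check of definition (7) and of properties 2 and 3, here is a minimal sketch. It is not from the original notes; it assumes NumPy, and the partition, the particular simple process G(t) = W(t_k) on (t_k, t_{k+1}), and the sample size are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)    # illustrative seed
T, n, n_paths = 1.0, 50, 50_000   # partition size and Monte Carlo sample size
dt = T / n

# Brownian increments and partition values W(t_k) for many independent paths.
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n))
W = np.hstack((np.zeros((n_paths, 1)), np.cumsum(dW, axis=1)))

G = W[:, :-1]                 # simple process: G(t) = W(t_k) on (t_k, t_{k+1})
I = np.sum(G * dW, axis=1)    # Ito integral per definition (7), one value per path

print("E[ int_0^T G dW ]     ~ %+.4f   (expect 0)" % I.mean())
print("E[ (int_0^T G dW)^2 ] ~ %.4f" % (I**2).mean())
print("E[ int_0^T G^2 dt ]   ~ %.4f   (Ito isometry: both ~ T^2/2 = %.2f)"
      % ((G**2).sum(axis=1).mean() * dt, T**2 / 2))
```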

4.2 Extension to general processes: Itô integral


Lemma (for proof see [?], Theorem 3.1): If G ∈ L²(0, T), there exists a sequence of bounded simple (step) processes G_n ∈ L²(0, T) such that
\[
E\Big[\int_0^T |G - G_n|^2\,dt\Big] \longrightarrow 0.
\]
The Itô integral of G is then defined as the L²(Ω)-limit of the integrals of the G_n; by the Itô isometry this limit exists and does not depend on the approximating sequence.

Properties:
• Properties 1–3 from the case of simple processes are preserved!
• E\big[\int_0^T G\,dW \cdot \int_0^T H\,dW\big] = E\big[\int_0^T GH\,dt\big] (without proof).

4.3 Indefinite Itô integral


Definition: For G ∈ L²(0, T), the indefinite Itô integral is given by
\[
I(t) := \int_0^t G(s)\,dW(s) \qquad (0 \le t \le T) \tag{8}
\]

Note that I(0)=0.

Properties inherited from the case of simple processes (see for example [?], p. 134):

(a) Linearity.
(b) Itô isometry: E[I^2(t)] = E\big[\int_0^t G^2(s)\,ds\big].
(c) I(t) is a martingale if G ∈ L²(0, T).
(d) I(t) has a version with continuous paths.
(e) Quadratic variation: [I, I](t) = \int_0^t G^2(s)\,ds.
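
Property (e) can be illustrated pathwise. The sketch below is not part of the original notes; it assumes NumPy, takes G = W as the integrand, and uses an illustrative grid and seed. It builds one sample path of I(t) = ∫₀ᵗ W_s dW_s as a cumulative Itô sum and compares its sampled quadratic variation with ∫₀ᵗ G²(s) ds.

```python
import numpy as np

rng = np.random.default_rng(3)   # illustrative seed
T, n = 1.0, 200_000
dt = T / n

dW = rng.normal(0.0, np.sqrt(dt), size=n)
W = np.concatenate(([0.0], np.cumsum(dW)))

# One path of the indefinite Ito integral I(t) = int_0^t W_s dW_s (here G = W).
I = np.concatenate(([0.0], np.cumsum(W[:-1] * dW)))

# Sampled quadratic variation of I versus the predicted int_0^t G^2(s) ds.
QV_I = np.cumsum(np.diff(I) ** 2)
pred = np.cumsum(W[:-1] ** 2 * dt)

for k in (n // 4, n // 2, n):
    t_k = k * dt
    print(f"t = {t_k:.2f}:  [I, I](t) ~ {QV_I[k - 1]:.4f}   int_0^t G^2 ds ~ {pred[k - 1]:.4f}")
```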

5 Itô’s Formula
Definition: Suppose that X(·) is a real-valued stochastic process such that
\[
X(r) = X(s) + \int_s^r F\,dt + \int_s^r G\,dW
\]
for F ∈ L¹(0, T), G ∈ L²(0, T), and 0 ≤ s ≤ r ≤ T. We say that X(·) has the stochastic differential
\[
dX_t = F\,dt + G\,dW \tag{9}
\]

Remarks:
• The differential symbols are only an abbreviation for the integral identity above.

Theorem (Itô’s Formula) . Suppose that X(·) has a stochastic differential

dXt = F dt + GdW,

for F ∈ L1 (0, T ), G ∈ L2 (0, T ), and 0 ≤ s ≤ r ≤ T .


2
Assume u : R × [0, T ] −→ R is continuous and that ∂u ∂u ∂ u
∂x := ux , ∂t := ut , ∂x2 := uxx exist and
are continuous. Set
Y (t) = Yt := u(X(t), t).
Then Yt has the stochastic differential

dYt = ut dt + ux dX + 21 uxx dXdX


(10)
= (ut + ux F + 12 uxx G2 )dt + ux GdW

We call (??) Itô’s formula or Itô’s chain rule.


It is helpful to use the formal multiplication rule:
\[
dX_t\,dX_t = (F\,dt + G\,dW)(F\,dt + G\,dW) = G^2\, dW \cdot dW = G^2\,dt,
\]
where dt \cdot dt = 0 and dW \cdot dt = 0. The second of these rules is the formal notation for the limit of the sum \sum_{i=0}^{n-1} (t_{i+1} - t_i)(W(t_{i+1}) - W(t_i)), which satisfies
\[
\Big|\sum_{i=0}^{n-1} (t_{i+1} - t_i)\big(W(t_{i+1}) - W(t_i)\big)\Big| \le \max_i \big|W(t_{i+1}) - W(t_i)\big| \sum_{i=0}^{n-1} (t_{i+1} - t_i) = t \max_i \big|W(t_{i+1}) - W(t_i)\big|,
\]
and this bound tends to 0 as |\Pi_n| → 0 by the continuity of the Brownian paths. This is summarized in [?, ?] in the following multiplication table:

         dW    dt
   dW    dt    0
   dt    0     0
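
The multiplication rules and formula (10) can be checked numerically on a discretized path. The sketch below is not part of the original notes; it assumes NumPy, takes X = W (so F ≡ 0, G ≡ 1), and uses the arbitrary smooth test function u(x, t) = t x + cos x together with an illustrative grid and seed.

```python
import numpy as np

rng = np.random.default_rng(5)   # illustrative seed
T, n = 1.0, 200_000
dt = T / n

dW = rng.normal(0.0, np.sqrt(dt), size=n)
W = np.concatenate(([0.0], np.cumsum(dW)))
t = np.linspace(0.0, T, n + 1)

# Test function u(x, s) = s*x + cos(x) and its partial derivatives.
u    = lambda x, s: s * x + np.cos(x)
u_t  = lambda x, s: x
u_x  = lambda x, s: s - np.sin(x)
u_xx = lambda x, s: -np.cos(x)

# Ito's formula with X = W (F = 0, G = 1):
# u(W_T, T) - u(0, 0) = int (u_t + 1/2 u_xx) dt + int u_x dW.
lhs = u(W[-1], T) - u(0.0, 0.0)
rhs = np.sum((u_t(W[:-1], t[:-1]) + 0.5 * u_xx(W[:-1], t[:-1])) * dt
             + u_x(W[:-1], t[:-1]) * dW)
print(f"u(W_T, T) - u(0, 0) = {lhs:+.5f},   Ito-formula sums = {rhs:+.5f}")
```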

6 Examples
(a) Let X_t = W_t, u(x) = x^m, Y(t, X_t) = X_t^m. Then dX = dW and thus F ≡ 0, G ≡ 1.
Hence Itô's formula gives
\[
dY_t = \big(u_t + u_x F + \tfrac{1}{2}\, u_{xx} G^2\big)\,dt + u_x G\,dW = \tfrac{1}{2}\, m(m-1) W^{m-2}\,dt + m W^{m-1}\,dW,
\]
which for m = 2 reads d(W^2) = 2W\,dW + dt. Integrating, we obtain
\[
\int_0^t W\,dW = \frac{1}{2} W_t^2 - \frac{1}{2} t,
\]
(b) Itô exponential: Let X_t = W_t and Y(t, X_t) := u(t, W_t) = e^{\lambda W_t - \lambda^2 t/2}. Then dX = dW and thus F ≡ 0, G ≡ 1.
Since u_t(t, x) = -\frac{\lambda^2}{2} e^{\lambda x - \lambda^2 t/2} = -\frac{\lambda^2}{2} u, u_x(t, x) = \lambda u, and u_{xx} = \lambda^2 u, Itô's formula gives
\[
dY_t = \big(u_t + u_x F + \tfrac{1}{2}\, u_{xx} G^2\big)\,dt + u_x G\,dW = \big(-\tfrac{1}{2}\lambda^2 + \tfrac{1}{2}\lambda^2\big) Y\,dt + \lambda Y\,dW = \lambda Y\,dW.
\]
Thus, Y(t, W_t) := e^{\lambda W_t - \lambda^2 t/2} satisfies the stochastic differential equation
\[
dY = \lambda Y\,dW, \qquad Y(0) = 1.
\]

(c) Hermite polynomials: For n = 0, 1, \ldots, define
\[
h_n(x, t) := \frac{(-t)^n}{n!}\, e^{x^2/2t}\, \frac{d^n}{dx^n}\big(e^{-x^2/2t}\big),
\]
the n-th Hermite polynomial. Then h_0(x, t) = 1, h_1(x, t) = x, h_2(x, t) = \frac{x^2}{2} - \frac{t}{2}, and h_3(x, t) = \frac{x^3}{6} - \frac{t x}{2}.
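
The listed polynomials can be reproduced symbolically from the defining formula. The sketch below is not part of the original notes; it assumes SymPy is available.

```python
import sympy as sp

x, t = sp.symbols('x t', positive=True)

def hermite_h(n):
    """h_n(x, t) = (-t)^n / n! * exp(x^2/2t) * d^n/dx^n exp(-x^2/2t)."""
    expr = (-t)**n / sp.factorial(n) * sp.exp(x**2 / (2*t)) * sp.diff(sp.exp(-x**2 / (2*t)), x, n)
    return sp.expand(sp.simplify(expr))

for n in range(4):
    print(f"h_{n}(x, t) =", hermite_h(n))
# expected: 1, x, x**2/2 - t/2, x**3/6 - t*x/2
```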

[Note to self: add the definition of an Itô process (Shreve, p. 145) and examples as on p. 147 ff. of Shreve.]

References
[1] Ludwig Arnold. Stochastic Differential Equations: Theory and Applications. Krieger Publishing
Company, reprint edition, 1992.
[2] Lawrence Evans. An introduction to stochastic differential equations. Lecture notes available
at http://math.berkeley.edu/~evans/sde.course.pdf.
[3] Henry P. McKean. Stochastic Integrals. Academic Press, 1969.
[4] Thomas Mikosch. Elementary Stochastic Calculus. World Scientific, 2006.
[5] Steven Shreve. Stochastic Calculus for Finance II, Continuous Time Models. Springer Verlag,
2000.
[6] Srinivasa Varadhan. Stochastic processes (fall 00). Lecture notes available at
http://math.nyu.edu/faculty/varadhan/processes.html.

