Felix Apfaltrer
Presentation at Stochastic PDE Reading Seminar
CUNY Applied Mathematics Seminar
• W (0) = 0
• W (t) is continuous in t for a.e. ω
• For any partition Π := {0 = t_0 < t_1 < t_2 < · · · < t_n = t} of [0, t], the increments W(t_{i+1}) − W(t_i) are independent and normally distributed with mean zero and variance t_{i+1} − t_i.
• W (t) − W (s) ∼ N ( 0, t − s ) for any s < t
Remarks:
Indeed, writing Q_Π := Σ_{i=0}^{n−1} (W(t_{i+1}) − W(t_i))² for the sampled quadratic variation, and recalling that the increments W(t_{i+1}) − W(t_i) are normally distributed with mean zero and variance t_{i+1} − t_i, we obtain
\[ E[Q_\Pi] = \sum_{i=0}^{n-1} E[(W(t_{i+1}) - W(t_i))^2] = \sum_{i=0}^{n-1} (t_{i+1} - t_i) = t_n - t_0 = t, \]
and since
\[ \mathrm{Var}[Q_\Pi] = 2 \sum_{i=0}^{n-1} (t_{i+1} - t_i)^2 \le 2 \max_i |t_{i+1} - t_i| \sum_{i=0}^{n-1} (t_{i+1} - t_i) = 2 |\Pi| t, \]
we have that
\[ \mathrm{Var}[Q_\Pi] \longrightarrow 0 \quad \text{as } |\Pi| \longrightarrow 0, \]
and therefore the quadratic variation of W_t, i.e. the L²-limit of Q_Π, is [W, W](t) = t.
– The quadratic variation is a central notion in SDEs; in differential notation this fact reads
\[ dW_t \, dW_t = dt. \]
– [W, W]_t is a path-dependent property (see the remark on p. 131 of [?]).
– If the interval is [a, b] instead of [0, t], then the partition is Π_{[a,b]} := {a = t_0 < t_1 < t_2 < · · · < t_n = b} and the quadratic variation is
\[ [W, W]_{[a,b]} = b - a. \]
• Construction as the limit of a scaled symmetric random walk: (see for example [?])
Let
\[ M_k = \sum_{i=1}^{k} X_i, \qquad k = 1, 2, \dots \]
be a symmetric random walk, e.g. modeled by tossing a fair coin k times and setting
\[ X_i = \begin{cases} \;\;\, 1 & \omega_i = H, \text{ with probability } 1/2, \\ -1 & \omega_i = T, \text{ with probability } 1/2. \end{cases} \]
Then X_i has a Bernoulli-type distribution with μ_X = 0, σ²_X = 1, and M_k has a binomial distribution with μ_{M_k} = 0, σ²_{M_k} = k, and hence
\[ W^{(n)}(t) = \frac{1}{\sqrt{n}} M_{nt} \]
is also binomially distributed with mean μ_{W^{(n)}(t)} = 0 and variance σ²_{W^{(n)}(t)} = t.
Therefore, by the Central Limit Theorem, as n −→ ∞, W^{(n)}(t) converges to a normal distribution with mean 0 and variance t. Other properties of W^{(n)}(t) are preserved in the limit:
– W (n) (0) = 0
– W (n) (t) − W (n) (s) ∼ N ( 0, t − s ) for any s < t
– W (n) (t) has independent increments,
therefore, we obtain the Brownian motion W (t) in the limit.
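These limit properties can be checked numerically. The following sketch (all parameter values are illustrative choices, not from the text) simulates many scaled random-walk paths with NumPy and verifies that W^{(n)}(t) has mean close to 0 and variance close to t, and that the quadratic variation of a scaled path over [0, t] equals t exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
n, t, n_paths = 2000, 1.0, 2000       # n coin tosses per unit of time

# Symmetric random walk: increments X_i = +/-1 with probability 1/2 each,
# M_k = X_1 + ... + X_k, and scaled walk W^(n)(t) = M_{nt} / sqrt(n).
X = rng.choice([-1.0, 1.0], size=(n_paths, int(n * t)))
M = X.cumsum(axis=1)
W_t = M[:, -1] / np.sqrt(n)

print(W_t.mean())                     # close to 0
print(W_t.var())                      # close to t = 1

# Quadratic variation of one scaled path over [0, t]: every squared
# increment is exactly 1/n, so the sum of n*t of them is exactly t.
W_path = np.concatenate([[0.0], M[0] / np.sqrt(n)])
QV = np.sum(np.diff(W_path) ** 2)
print(QV)                             # = t = 1 up to rounding
```

Note that for the random walk the quadratic variation is deterministic at every finite n, which is why it passes to the deterministic limit [W, W](t) = t.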
2 Motivation
We want to be able to define a stochastic differential equation [?, ?].
The classical (Riemann–Stieltjes) construction of the integral only works for a restricted family of integrators, essentially those of bounded variation, and Brownian paths are not of bounded variation.
We are interested in integrals where the integrand is a random function, for example
\[ \int_0^t W_s \, dW_s = \; ? \]
3 From Riemann Sums
An arbitrary partition of size n of the interval [0, t] is denoted by
\[ \Pi_n := \{0 = t_0 < t_1 < t_2 < \cdots < t_n = t\}. \]
We denote the mesh-size by |Π_n| = max_{0 ≤ i < n} (t_{i+1} − t_i). For λ ∈ [0, 1], set the intermediate points τ_k := t_k + λ(t_{k+1} − t_k) and form the Riemann sum
\[ R_n := R(\Pi_n, \lambda) = \sum_{k=0}^{n-1} W(\tau_k)\,(W(t_{k+1}) - W(t_k)). \tag{4} \]
Then, with the limit taken in L²,
\[ \lim_{n \longrightarrow \infty} R_n = \frac{W_t^2}{2} + \Big(\lambda - \frac{1}{2}\Big) t. \tag{5} \]
Proof sketch: Rewrite R_n as
\[ R_n = \frac{W_t^2}{2} - \frac{1}{2} \sum_{k=0}^{n-1} (W_{t_{k+1}} - W_{t_k})^2 + \sum_{k=0}^{n-1} (W_{\tau_k} - W_{t_k})^2 + \sum_{k=0}^{n-1} (W_{t_{k+1}} - W_{\tau_k})(W_{\tau_k} - W_{t_k}). \]
The first sum is the quadratic variation in (??), whose value is t. For the second sum, the same process described in the proof for the quadratic variation (??) leads to
\[ E\Big[\sum_{k=0}^{n-1} (W_{\tau_k} - W_{t_k})^2\Big] = \sum_{k=0}^{n-1} (\tau_k - t_k) = \lambda t \qquad \text{and} \qquad \mathrm{Var}[\,\cdot\,] \longrightarrow 0 \text{ as } |\Pi| \longrightarrow 0. \]
Similarly, since the factors in each term of the third sum are independent increments, its expected value is 0, and its variance tends to zero by an argument analogous to that for the quadratic variation.
Remarks:
• Note the dependence of this limit on λ, which specifies the point in each subinterval at which we evaluate the integrand. When a function is Riemann-integrable, there is no dependence on the choice of such a point: in the limit, the right-, left-, upper-, lower- and midpoint Riemann sums all coincide with the Riemann integral.
• If λ = 0, then we are taking the left endpoint, and the limit becomes the Itô integral
\[ \int_0^t W_s \, dW_s = \frac{W_t^2}{2} - \frac{t}{2}. \tag{6} \]
Selecting the left endpoint allows the definition of the Itô integral
\[ \int_0^t G(W_s, s) \, dW_s \]
for a large class of functions called non-anticipating functions. This notion is very important in applications where, at the start of a time interval, one does not yet know the outcomes of the random variables beyond the present moment.
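The λ-dependence of the limit (5) is easy to see in simulation. In the sketch below (an illustrative setup, not from the text), W(τ_k) is approximated by the interpolation (1 − λ)W(t_k) + λW(t_{k+1}), which has the same L² limit as the true intermediate-point sum; λ = 0 gives the Itô value W_t²/2 − t/2 and λ = 1/2 the Stratonovich value W_t²/2:

```python
import numpy as np

rng = np.random.default_rng(1)
t, n, n_paths = 1.0, 2000, 2000
dt = t / n

# Brownian paths on a uniform partition of [0, t], with W(0) = 0.
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n))
W = np.concatenate([np.zeros((n_paths, 1)), dW.cumsum(axis=1)], axis=1)

def riemann_sum(lam):
    # Approximate W(tau_k), tau_k = t_k + lam*(t_{k+1} - t_k), by linear
    # interpolation between W(t_k) and W(t_{k+1}).
    W_tau = (1.0 - lam) * W[:, :-1] + lam * W[:, 1:]
    return np.sum(W_tau * dW, axis=1)

target = 0.5 * W[:, -1] ** 2          # W_t^2 / 2 for each path

ito = riemann_sum(0.0)                # left endpoint, lambda = 0
strat = riemann_sum(0.5)              # midpoint, lambda = 1/2

print(np.mean(ito - target))          # close to (lambda - 1/2) t = -0.5
print(np.mean(strat - target))        # close to 0
```

The gap between the two estimates matches the (λ − 1/2)t correction term in (5).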
4 Itô Integral
Let W (·) be a Brownian motion on the probability space (Ω, U, P).
Definition: A filtration {F_t}_{t≥0} for the Brownian motion is a family of σ-algebras F_t, one for each fixed t, such that
• F_s ⊂ F_t, ∀ 0 ≤ s ≤ t
• W_t is F_t-measurable, ∀ t ≥ 0
• F_t is independent of the future increments W_t^+ := σ(W(s) − W(t) : s ≥ t), ∀ t ≥ 0.
• Think of F(t) as “containing all information available to us at time t”. Typically in SDEs, F_t = σ(W_s | 0 ≤ s ≤ t; X_0), where X_0(ω) is an initial condition.
• A stochastic process G(·) is nonanticipating if G(t) is F(t)-measurable for each t (it depends only on information up to time t).
• A stronger notion (progressive measurability) is required: for each t, the map (s, ω) ↦ G(s, ω) restricted to [0, t] × Ω is B([0, t]) ⊗ F(t)-measurable (see for example chapter 2 of Varadhan’s lecture notes [?]).
4.1 The Itô Integral for simple processes
Definition: Let L²(0, T) be the space of all real-valued progressively measurable square-integrable processes, i.e.,
\[ E\Big[\int_0^T G^2 \, dt\Big] < \infty, \]
and let L¹(0, T) be defined analogously, so that if F ∈ L¹(0, T), then
\[ E\Big[\int_0^T |F| \, dt\Big] < \infty. \]
Definition: For a simple process G ∈ L²(0, T), i.e. one that is constant in t on each subinterval of a partition 0 = t_0 < t_1 < · · · < t_n = T, we define the Itô (stochastic) integral on the interval [0, T] as
\[ \int_0^T G \, dW := \sum_{k=0}^{n-1} G(t_k)\,(W(t_{k+1}) - W(t_k)). \tag{7} \]
Properties:
1. Linearity:
\[ \int_0^T (aG + bH) \, dW = a \int_0^T G \, dW + b \int_0^T H \, dW \]
2. Zero expectation:
\[ E\Big[\int_0^T G \, dW\Big] = 0 \]
3. Itô isometry:
\[ E\Big[\Big(\int_0^T G \, dW\Big)^2\Big] = E\Big[\int_0^T G^2 \, dt\Big] \]
Proof sketches:
1. Linearity follows from the definition of the Itô integral for simple processes.
2. Zero expectation follows from the independence of G(t_k) and W(t_{k+1}) − W(t_k): the former is F(t_k)-measurable, while the latter belongs to the future increments and is independent of F(t_k); moreover E[W(t_{k+1}) − W(t_k)] = 0. Hence
\[ E\Big[\int_0^T G \, dW\Big] = E\Big[\sum_{k=0}^{n-1} G(t_k)(W(t_{k+1}) - W(t_k))\Big] = \sum_{k=0}^{n-1} E[G(t_k)]\, E[W(t_{k+1}) - W(t_k)] = 0 \]
3. Itô isometry:
\[ E\Big[\Big(\int_0^T G \, dW\Big)^2\Big] = \sum_{j=0}^{n-1} \sum_{k=0}^{n-1} E[G(t_j)G(t_k)(W(t_{j+1}) - W(t_j))(W(t_{k+1}) - W(t_k))] \]
If j < k, set A_k := W(t_{k+1}) − W(t_k) and B_{k,j} := G(t_j)G(t_k)(W(t_{j+1}) − W(t_j)); then A_k is independent of B_{k,j}, and since E[A_k] = 0 and E[|B_{k,j}|] < ∞, all the cross-terms vanish. Therefore, we obtain:
\[ E\Big[\Big(\int_0^T G \, dW\Big)^2\Big] = \sum_{k=0}^{n-1} E[G^2(t_k)(W(t_{k+1}) - W(t_k))^2] = \sum_{k=0}^{n-1} E[G^2(t_k)]\, E[(W(t_{k+1}) - W(t_k))^2] \]
but since W_t − W_s ∼ N(0, t − s) gives E[(W(t_{k+1}) − W(t_k))²] = t_{k+1} − t_k, the last expression is just a Riemann sum for a deterministic integral, and so
\[ E\Big[\Big(\int_0^T G \, dW\Big)^2\Big] = E\Big[\int_0^T G^2 \, dt\Big]. \]
Properties:
• Properties 1–3 for the case of simple processes are preserved!
• \( E\big[\int_0^T G \, dW \int_0^T H \, dW\big] = E\big[\int_0^T GH \, dt\big] \) (without proof).
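Properties 2 and 3 can be sanity-checked by Monte Carlo. The sketch below (an illustrative choice of integrand, not from the text) uses the simple adapted process G(s) = W(t_k) for s ∈ [t_k, t_{k+1}), for which E[∫_0^T G² dt] = Σ t_k (t_{k+1} − t_k) ≈ T²/2:

```python
import numpy as np

rng = np.random.default_rng(2)
T, n, n_paths = 1.0, 500, 10_000
dt = T / n
t_grid = np.arange(n) * dt            # left endpoints t_0, ..., t_{n-1}

dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n))
W = np.concatenate([np.zeros((n_paths, 1)), dW.cumsum(axis=1)], axis=1)

# Simple process G(s) = W(t_k) on [t_k, t_{k+1}): F(t_k)-measurable,
# hence independent of the increment W(t_{k+1}) - W(t_k).
G = W[:, :-1]
I = np.sum(G * dW, axis=1)            # sum of G(t_k) (W(t_{k+1}) - W(t_k))

exact = np.sum(t_grid * dt)           # E[int_0^T G^2 dt] = sum of t_k dt

print(I.mean())                       # close to 0 (zero expectation)
print((I ** 2).mean(), exact)         # close to each other (Ito isometry)
```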
The integral extends from simple processes to general G ∈ L²(0, T) by approximation; write I(t) := ∫_0^t G dW. Note that I(0) = 0.
Properties inherited from the case of simple processes (see for example [?] p. 134):
(a) Linearity.
(b) Itô isometry: E[I²(t)] = E[∫_0^t G²(s) ds].
(c) I(t) is a martingale if G ∈ L2 (0, T ).
(d) I(t) has a version with continuous paths.
(e) Quadratic variation: [I, I](t) = ∫_0^t G²(s) ds.
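Property (e) can be illustrated path-wise. The sketch below (an illustrative setup) takes G = W, so I(t) = ∫_0^t W dW, and compares the sum of squared increments of I along one discretized path with ∫_0^t W²(s) ds:

```python
import numpy as np

rng = np.random.default_rng(3)
t, n = 1.0, 20_000
dt = t / n

# One Brownian path on [0, t] and the increments of I(t) = int_0^t W dW,
# discretized with left endpoints: dI_k = W(t_k) (W(t_{k+1}) - W(t_k)).
dW = rng.normal(0.0, np.sqrt(dt), size=n)
W = np.concatenate([[0.0], dW.cumsum()])
dI = W[:-1] * dW

# (e) Quadratic variation of I versus int_0^t G^2(s) ds with G = W.
QV_I = np.sum(dI ** 2)
target = np.sum(W[:-1] ** 2 * dt)

print(QV_I, target)                   # approximately equal path-wise
```

Unlike the expectations in (b), this identity holds along each individual path, which is what makes quadratic variation a path-dependent quantity.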
5 Itô’s Formula
Definition: Suppose that X(·) is a real-valued stochastic process such that
\[ X(r) = X(s) + \int_s^r F \, dt + \int_s^r G \, dW \]
for F ∈ L¹(0, T), G ∈ L²(0, T), and 0 ≤ s ≤ r ≤ T. We say that X(·) has the stochastic differential
\[ dX_t = F \, dt + G \, dW. \tag{9} \]
Remarks:
• The differential symbols are only an abbreviation for the integral relation above:
\[ dX_t = F \, dt + G \, dW, \]
where dt · dt = 0 and dW · dt = 0. The second expression is the formal notation for the limit of the sum
\[ \sum_{i=0}^{n-1} (t_{i+1} - t_i)(W(t_{i+1}) - W(t_i)) \le \max_i |t_{i+1} - t_i| \sum_{i=0}^{n-1} (W(t_{i+1}) - W(t_i)) = \max_i |t_{i+1} - t_i| \, W_t \longrightarrow 0 \quad \text{as } |\Pi| \longrightarrow 0. \]
These rules are summarized in the multiplication table
\[ \begin{array}{c|cc} \cdot & dW & dt \\ \hline dW & dt & 0 \\ dt & 0 & 0 \end{array} \]
6 Examples
(a) Let X_t = W_t, u(x) = x^m, Y(t, X_t) = X_t^m. Then dX = dW and thus F ≡ 0, G ≡ 1. Hence Itô’s formula gives
\[ d(W_t^m) = \frac{1}{2} m(m-1) W_t^{m-2} \, dt + m W_t^{m-1} \, dW_t, \]
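This instance of Itô’s formula can be checked on a discretized path; the sketch below (m = 3 and the step size are illustrative choices) integrates the right-hand side along one path and compares it with W_t^m:

```python
import numpy as np

rng = np.random.default_rng(4)
t, n, m = 1.0, 100_000, 3
dt = t / n

dW = rng.normal(0.0, np.sqrt(dt), size=n)
W = np.concatenate([[0.0], dW.cumsum()])

# Ito's formula for u(x) = x^m with X = W:
#   d(W^m) = (1/2) m (m-1) W^(m-2) dt + m W^(m-1) dW
drift = np.sum(0.5 * m * (m - 1) * W[:-1] ** (m - 2) * dt)
noise = np.sum(m * W[:-1] ** (m - 1) * dW)

print(W[-1] ** m, drift + noise)      # approximately equal path-wise
```

Without the (1/2) m (m − 1) W^{m−2} dt correction term, the two values would differ by a non-vanishing amount, which is exactly the dW · dW = dt effect.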
TODO: Add definition of Itô process (p. 145, Shreve). Add examples as in page 147 ff. in Shreve.
References
[1] Ludwig Arnold. Stochastic Differential Equations: Theory and Applications. Krieger Publishing Company, reprint edition, 1992.
[2] Lawrence C. Evans. An Introduction to Stochastic Differential Equations. Lecture notes available at http://math.berkeley.edu/~evans/sde.course.pdf.
[3] Henry P. McKean. Stochastic Integrals. Academic Press, 1969.
[4] Thomas Mikosch. Elementary Stochastic Calculus. World Scientific, 2006.
[5] Steven Shreve. Stochastic Calculus for Finance II: Continuous-Time Models. Springer-Verlag, 2004.
[6] Srinivasa Varadhan. Stochastic Processes (Fall 2000). Lecture notes available at http://math.nyu.edu/faculty/varadhan/processes.html.