The First-Passage Time Problem
Rajarshi Sarkar
University of Nice and Sophia Antipolis and INRIA - Sophia Antipolis
August 2012
Abstract
This report looks into the first-passage time problem. First-passage time problems are deceptively simple to state, yet at the same time often very hard to tackle. We know very little about first-passage behaviour except in some simple cases, such as the standard Wiener process, the Wiener process with drift, and the like.
I have studied three kinds of approaches for finding an approximate solution of the first-passage time through numerical simulation. They are: the Euler scheme combined with the Brownian bridge idea; the fast algorithm for Gauss-Markov processes, which involves a probabilistic variant of dichotomic search; and the exact algorithm studied for some very specific kinds of stochastic differential equations. I have numerically compared the Euler scheme and the fast algorithm, and stated the analytical result giving a very nice relation for the first-passage time in the case of exact simulation.
Moreover, I have also tried to price a barrier option, a financial instrument dependent on the approximation of the first-passage time for the price of the underlying asset.
Introduction
We consider a diffusion (X_t) solving

dX_t = \mu(X_t, t)\, dt + \sigma(X_t, t)\, dW_t, \qquad X_0 = x_0, \qquad (1)

where \mu and \sigma are smooth enough for us to have existence and uniqueness properties for the above stochastic differential equation (SDE); it suffices that the two functions \mu and \sigma be Lipschitz on every compact [0, T] for every T > 0. We seek to simulate, or find, a random event of the form

\tau = \inf\{t > 0;\ X_t > L\},

where L is the boundary, which can be a constant or some continuous deterministic function. We usually look for \tau in some closed interval of the form [0, T], since we know that already in the case of a Brownian motion with an affine boundary we can get E(\tau) = \infty. Thus, we usually bound \tau in some finite interval.
First-passage time (or first hitting time, or stopping time) problems have been a center of attention for many years now. They have useful applications in biology, physics and finance, to name a few. It is also very interesting to look at first-passage time problems because they are related to many fields of mathematics, such as probability theory, functional analysis, numerical analysis and number theory.
Despite the importance and varied applications of first-passage times, it is only in very few situations that we explicitly know their law. In the case of a standard Brownian motion, we know the analytical solution for the first-passage time over a constant boundary L = a. Bachelier was one of the first to study the first-passage time problem, followed by Lévy. The very famous Bachelier-Lévy formula gives the density p of a Brownian motion's first passage over a linear boundary of the form a + bt as

p(t) = \frac{a}{t^{3/2}}\, \phi\!\left(\frac{a + bt}{\sqrt{t}}\right),

where \phi(y) = \frac{1}{\sqrt{2\pi}} \exp\!\left(-\frac{y^2}{2}\right).
Of course, many people have since improved upon their work and added more to the theory of first-passage times. The subject is, however, still quite unexplored, as no simple form exists for the density function of the first-passage time for more complicated diffusion processes. Therein lies the challenge of simulating and correctly approximating the first-passage time for such processes. We embark on this journey looking at Gauss-Markov processes and some other interesting processes.
First, we look into some results for the Brownian motion and then for the Gauss-Markov process, which will later be useful when we use the algorithms.
Second, we look into the Euler scheme for numerically approximating the first-passage time problem.
Third, we look into the fast algorithm developed by Taillefumier and Magnasco in their paper [1] and try to implement it.
Fourth, we compare the Euler scheme and the fast algorithm.
Fifth, we look into a new method for simulating exactly the path of a diffusion process [3]. We very quickly realise the restrictions of such a method; however, it is still mathematically very interesting to see its development. Here we just give an analytical result pertaining to the first-passage time: we find the law of the first-passage time as a function of another first-passage time which we already know how to simulate.
Last, but not least, we look into the financial application of the first-passage time in pricing a barrier option.
Notation: a Brownian motion or Wiener process will be denoted by B(\cdot) or W(\cdot).
Brownian Motion
We will look into some properties of, first, a standard Brownian motion and then a Brownian motion with drift. We first state the probability distribution function of the first-passage time for a standard Brownian motion, and then we look into the probability density function for a Brownian motion with drift. In the process, we also look into the law of the maximum of a Brownian bridge conditioned on the two extreme values of the bridge.
Differentiating P(\tau_a < T) = 2\, P(B(T) > a) with respect to T gives the density function of \tau_a at T as \frac{a}{\sqrt{2\pi}\, T^{3/2}} \exp\!\left(-\frac{a^2}{2T}\right). Thus

f_{\tau_a}(t) = \frac{a}{\sqrt{2\pi}\, t^{3/2}} \exp\!\left(-\frac{a^2}{2t}\right).
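As an aside, this density can be sampled directly: if Z is standard normal, then a^2/Z^2 has exactly the law above. The following is a minimal sketch of our own (function and variable names are ours, not from the report):

```python
import numpy as np

def sample_first_passage_bm(a, size, rng):
    """Sample first-passage times of a standard Brownian motion to the
    level a > 0, using the identity tau_a = a^2 / Z^2 in law, Z ~ N(0,1).
    This is equivalent to the density a / (sqrt(2 pi) t^{3/2}) exp(-a^2/(2t))."""
    z = rng.standard_normal(size)
    return a**2 / z**2

# Sanity check against the exact law P(tau_a <= t) = 2 (1 - Phi(a / sqrt(t)))
from math import erf, sqrt
rng = np.random.default_rng(0)
samples = sample_first_passage_bm(0.5, 100_000, rng)
t = 1.0
exact = 2.0 * (1.0 - 0.5 * (1.0 + erf(0.5 / sqrt(t) / sqrt(2.0))))
empirical = float((samples <= t).mean())
```

The empirical fraction of samples below t should match the reflection-principle probability 2(1 - \Phi(a/\sqrt{t})) up to Monte Carlo error.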
Now consider the first passage \tau_{ab} of the Brownian motion B over the affine boundary a + bt, that is, the first passage of W(t) = B(t) - bt over the level a, restricted to the event \{\tau_{ab} < T\}.
The process Z_t = \exp(2bB(t) - 2b^2 t) is a martingale, so by optional stopping, \int_{\tau_{ab} < T} Z_T\, dP = \int_{\tau_{ab} < T} Z_{\tau_{ab}}\, dP. Now, we know that W(\tau_{ab}) = B(\tau_{ab}) - b\tau_{ab} = a; thus the right hand side is

\int_{\tau_{ab} < T} \exp\!\left(2bB(\tau_{ab}) - 2b^2 \tau_{ab}\right) dP = \int_{\tau_{ab} < T} \exp\!\left(2b(B(\tau_{ab}) - b\tau_{ab})\right) dP = \int_{\tau_{ab} < T} \exp(2ba)\, dP.

Thus we get

P(\tau_{ab} < T) = \exp(-2ba) \int_{\tau_{ab} < T} \exp\!\left(2bB(T) - 2b^2 T\right) dP = \exp(-2ba) \int_{M(T)_b \geq a} \exp(2bW(T))\, dP,

where M(T)_b = \sup_{0 \leq t \leq T}(B(t) - bt) and we used \exp(2bB(T) - 2b^2 T) = \exp(2bW(T)). Under P, W is a Brownian motion with drift -b; by Girsanov's theorem, integrals against its law can be computed under the law of a standard Brownian motion at the price of the density factor \exp(-bx - b^2 T/2), x denoting the terminal value, which combined with \exp(2bx) leaves the factor \exp(bx - b^2 T/2). On the event \{M(T)_b \geq a\}, the reflection principle gives the terminal density f_{W(T)}(x - 2a) for x < a, and f_{W(T)}(x) for x \geq a. Thus, dividing the domain of integration into two parts, we get

P(\tau_{ab} < T) = \exp(-2ba) \int_{-\infty}^{a} \exp\!\left(bx - \frac{b^2 T}{2}\right) f_{W(T)}(x - 2a)\, dx + \exp(-2ba) \int_{a}^{\infty} \exp\!\left(bx - \frac{b^2 T}{2}\right) f_{W(T)}(x)\, dx,
where f_{W(T)}(x) = \frac{1}{\sqrt{2\pi T}} \exp\!\left(-\frac{x^2}{2T}\right) and, denoting the standard Gaussian distribution function by \Phi(y) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{y} \exp\!\left(-\frac{u^2}{2}\right) du, we get

P(\tau_{ab} < T) = \exp(-2ba) \int_{-\infty}^{a} \exp\!\left(bx - \frac{b^2 T}{2}\right) \frac{1}{\sqrt{2\pi T}} \exp\!\left(-\frac{(x - 2a)^2}{2T}\right) dx + \exp(-2ba) \int_{a}^{\infty} \exp\!\left(bx - \frac{b^2 T}{2}\right) \frac{1}{\sqrt{2\pi T}} \exp\!\left(-\frac{x^2}{2T}\right) dx.

Completing the square in each exponent, this reduces to

P(\tau_{ab} < T) = 1 - \Phi\!\left(\frac{a + bT}{\sqrt{T}}\right) + \exp(-2ba)\, \Phi\!\left(\frac{bT - a}{\sqrt{T}}\right).

Upon differentiating with respect to T, that is, over all the measurable sets \{\tau_{ab} < T\}, we get the probability density function

f_{\tau_{ab}}(t) = \frac{a}{\sqrt{2\pi}\, t^{3/2}} \exp\!\left(-\frac{(a + bt)^2}{2t}\right).

This is the very famous Bachelier-Lévy formula.
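As a quick numerical illustration (our own sketch, not part of the report), the closed forms above can be evaluated and cross-checked against each other; `Phi` below is the standard normal distribution function:

```python
from math import erf, exp, pi, sqrt

def Phi(y):
    """Standard normal distribution function."""
    return 0.5 * (1.0 + erf(y / sqrt(2.0)))

def bachelier_levy_density(t, a, b):
    """Density of the first passage of a Brownian motion over a + b*t."""
    return a / sqrt(2.0 * pi * t**3) * exp(-(a + b*t)**2 / (2.0 * t))

def bachelier_levy_cdf(T, a, b):
    """P(tau_ab < T) = 1 - Phi((a+bT)/sqrt(T)) + exp(-2ab) Phi((bT-a)/sqrt(T))."""
    return 1.0 - Phi((a + b*T) / sqrt(T)) + exp(-2.0*a*b) * Phi((b*T - a) / sqrt(T))

# The density should integrate to the distribution function (midpoint rule).
a, b, T = 1.0, 0.5, 2.0
dt = 1e-4
integral = sum(bachelier_levy_density((i + 0.5) * dt, a, b)
               for i in range(int(T / dt))) * dt
```

For b = 0 the distribution function reduces to the reflection-principle value 2(1 - \Phi(a/\sqrt{T})), which gives a second consistency check.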
We now construct the Brownian bridge. Given 0 < t_1 < t < t_2 and the pinned values B_{t_1} = x and B_{t_2} = y, we look for a decomposition B_t^{x,y} = \alpha x + \beta y + Y_{\alpha,\beta}, where Y_{\alpha,\beta} = B_t - \alpha B_{t_1} - \beta B_{t_2} should be independent of B_{t_1} and B_{t_2}. Writing Y_{\alpha,\beta} in terms of independent increments,

Y_{\alpha,\beta} = a B_{t_1} + b(B_t - B_{t_1}) + c(B_{t_2} - B_t), \qquad a = 1 - \alpha - \beta, \quad b = 1 - \beta, \quad c = -\beta,

we can take the characteristic function of the above expression:

E(e^{i\xi Y_{\alpha,\beta}}) = E\!\left(e^{i\xi a B_{t_1}}\, e^{i\xi b (B_t - B_{t_1})}\, e^{i\xi c (B_{t_2} - B_t)}\right).

Now, B_{t_1} \perp (B_t - B_{t_1}) \perp (B_{t_2} - B_t), that is, B_{t_1} is independent of (B_t - B_{t_1}), which in turn is independent of (B_{t_2} - B_t); and we know that for two independent random variables X, Y we have E(XY) = E(X)E(Y), so we can take the expectations of the independent factors separately:

E(e^{i\xi Y_{\alpha,\beta}}) = e^{-\frac{1}{2}\xi^2 a^2 t_1}\, e^{-\frac{1}{2}\xi^2 b^2 (t - t_1)}\, e^{-\frac{1}{2}\xi^2 c^2 (t_2 - t)} = e^{-\frac{1}{2}\xi^2 \left(a^2 t_1 + b^2 (t - t_1) + c^2 (t_2 - t)\right)}.

Thus Y_{\alpha,\beta} is a Gaussian random variable with mean 0 and variance a^2 t_1 + b^2(t - t_1) + c^2(t_2 - t). We want Y_{\alpha,\beta} to be independent of both B_{t_1} and B_{t_2}, since the idea is to extract an unbiased Gaussian variable from the biased value B_t; being jointly Gaussian, it suffices that the covariances vanish. First,

Cov(Y_{\alpha,\beta}, B_{t_1}) = Cov(B_t, B_{t_1}) - \alpha\, Cov(B_{t_1}, B_{t_1}) - \beta\, Cov(B_{t_2}, B_{t_1}) = t_1 - \alpha t_1 - \beta t_1 = (1 - \alpha - \beta)\, t_1,

as we know that for a Brownian motion Cov(B_s, B_t) = \min(s, t). We want this covariance to be 0; since t_1 \neq 0, we get

\alpha + \beta = 1.

Similarly,

Cov(Y_{\alpha,\beta}, B_{t_2}) = Cov(B_t, B_{t_2}) - \alpha\, Cov(B_{t_1}, B_{t_2}) - \beta\, Cov(B_{t_2}, B_{t_2}) = t - \alpha t_1 - \beta t_2.

Again, this covariance must equal 0: t - \alpha t_1 - \beta t_2 = 0. Now, from the previous result \alpha = 1 - \beta; thus

t - (1 - \beta)\, t_1 - \beta t_2 = 0, \qquad \beta = \frac{t - t_1}{t_2 - t_1}, \qquad \alpha = \frac{t_2 - t}{t_2 - t_1}.

Thus we have the complete relation between the biased Brownian motion (the Brownian bridge) B_t^{x,y} and the unbiased variable Y_{\alpha,\beta} which we constructed from it. We know B_{t_1} = x and B_{t_2} = y; thus, what we have is

B_t^{x,y} = \alpha x + \beta y + Y_{\alpha,\beta} = \frac{t_2 - t}{t_2 - t_1}\, x + \frac{t - t_1}{t_2 - t_1}\, y + Y_{\alpha,\beta}.

Knowing the values of a, b, c, we can put them directly into the variance above and get the final value:

Y_{\alpha,\beta} \sim N\!\left(0,\ \frac{(t_2 - t)(t - t_1)}{t_2 - t_1}\right).

Thus, finally, for an independent standard Brownian motion \tilde{B},

B_t^{x,y} = \frac{t_2 - t}{t_2 - t_1}\, x + \frac{t - t_1}{t_2 - t_1}\, y + \tilde{B}_{\frac{(t_2 - t)(t - t_1)}{t_2 - t_1}} = \frac{t_2 - t}{t_2 - t_1}\, x + \frac{t - t_1}{t_2 - t_1}\, y + \frac{t_2 - t}{t_2 - t_1}\, \tilde{B}_{\frac{(t - t_1)(t_2 - t_1)}{t_2 - t}}.
Conditioned on its endpoints, the law of the maximum M_{t_x,t_z} = \sup_{t_x \leq t \leq t_z} B_t of a Brownian bridge is known explicitly:

P(M_{t_x,t_z} \in dl \mid B_{t_x} = x, B_{t_z} = z) = \frac{2(2l - (x + z))}{t_z - t_x} \exp\!\left(-\frac{2(l - x)(l - z)}{t_z - t_x}\right) dl.

Integrating the above value only on those measurable sets for which M_{t_x,t_z} \geq L, we get exactly the probability for the process to reach the boundary (barrier) in between the times t_x and t_z:

P(M_{t_x,t_z} > L \mid B_{t_x} = x, B_{t_z} = z) = \exp\!\left(-\frac{2(L - x)(L - z)}{t_z - t_x}\right).

Thus, we get a very nice representation of the first-passage probability for a Brownian bridge.
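The two bridge results above, the conditional decomposition and the crossing probability, translate directly into code. The following is an illustrative sketch of ours (all names are our own):

```python
import numpy as np

def bridge_crossing_prob(x, z, L, dt):
    """P(max of a Brownian bridge of length dt exceeds L | endpoints x, z).
    Probability 1 if an endpoint already reaches the barrier L."""
    if x >= L or z >= L:
        return 1.0
    return float(np.exp(-2.0 * (L - x) * (L - z) / dt))

def sample_bridge_point(t1, x, t2, y, t, rng):
    """Sample B_t given B_{t1} = x and B_{t2} = y via the decomposition
    B_t = alpha*x + beta*y + Y, with Y ~ N(0, (t2-t)(t-t1)/(t2-t1))."""
    alpha = (t2 - t) / (t2 - t1)
    beta = (t - t1) / (t2 - t1)
    var = (t2 - t) * (t - t1) / (t2 - t1)
    return alpha * x + beta * y + np.sqrt(var) * rng.standard_normal()
```

For instance, the midpoint of a bridge from (0, 0) to (1, 1) has mean 1/2 and variance 1/4, and the crossing probability of the barrier L = 1 over that unit interval is exp(-2).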
Gauss-Markov Process
Mathematical Calculations
Definitions
Given some probability space (\Omega, \mathcal{F}, P), where \Omega is the sample space, \mathcal{F} is its associated \sigma-field and P is the probability measure equipped with (\Omega, \mathcal{F}), recall that a process (X_t) is Gaussian when every finite linear combination \sum_{i=1}^{n} a_i X_{t_i} of its values is a Gaussian random variable.
Writing Z_t = \exp(-\int_0^t \alpha(u)\, du)\, X_t for the solution X of dX_t = \alpha(t) X_t\, dt + \sigma(t)\, dW_t, from this we get \sigma(t)\, dW_t = \exp(\int_0^t \alpha(u)\, du)\, dZ_t with Z_0 = x_0, and from further simplification we get

Z_t = x_0 + \int_0^t \exp\!\left(-\int_0^s \alpha(u)\, du\right) \sigma(s)\, dW_s.
Thus we get

X_t = \exp\!\left(\int_0^t \alpha(u)\, du\right) \left(x_0 + \int_0^t \exp\!\left(-\int_0^s \alpha(u)\, du\right) \sigma(s)\, dW_s\right).

Thus, defining the functions g, h and f as

g(t) = \exp\!\left(\int_0^t \alpha(u)\, du\right), \qquad h(t) = \int_0^t \exp\!\left(-2\int_0^s \alpha(u)\, du\right) \sigma(s)^2\, ds, \qquad f(t) = \exp\!\left(-\int_0^t \alpha(u)\, du\right) \sigma(t),

we can write X_t = g(t)\,(x_0 + \int_0^t f(s)\, dW_s). Now we can see that X_t is a Gaussian random variable with mean g(t)\, x_0 and variance g(t)^2 \int_0^t f(s)^2\, ds = g(t)^2\, h(t).
Discrete Construction of Gauss-Markov Processes
Given some reals t_x < t_y < t_z, the conditioning formula for a Gauss-Markov process gives the distribution of X_{t_y} knowing X_{t_x} = x and X_{t_z} = z [2]. It can be shown that the corresponding probability density is the Gaussian law N(\mu(t_y), \nu(t_y)), where \mu(t_y) denotes the time-dependent mean and \nu(t_y) the time-dependent variance. The first result, Doob's representation, is the transition density

P(X_t \in [x, x + dx] \mid X_{t_0} = x_0) = \frac{1}{g(t)\sqrt{2\pi (h(t) - h(t_0))}} \exp\!\left(-\frac{\left(\frac{x}{g(t)} - \frac{x_0}{g(t_0)}\right)^2}{2(h(t) - h(t_0))}\right) dx,

valid iff h(t) \neq h(t_0). The second result is the fact that we can actually evaluate P(X_{t_y} \in [y, y + dy] \mid X_{t_x} = x, X_{t_z} = z) with t_x < t_y < t_z, the probability for X_{t_y} knowing the values at times t_x and t_z to be x and z respectively. This is because X is a Markov process, so a sample path which starts from x and terminates at z factorizes into the two transitions.
Thus, we use the above two properties to derive the law of X_{t_y} given the values X_{t_x} = x and X_{t_z} = z, with t_x < t_y < t_z. By the Markov property,

P(X_{t_y} \in [y, y + dy] \mid X_{t_x} = x, X_{t_z} = z) = \frac{\dfrac{1}{g(t_y)\sqrt{2\pi(h(t_y) - h(t_x))}} \exp\!\left(-\dfrac{\left(\frac{y}{g(t_y)} - \frac{x}{g(t_x)}\right)^2}{2(h(t_y) - h(t_x))}\right) \cdot \dfrac{1}{g(t_z)\sqrt{2\pi(h(t_z) - h(t_y))}} \exp\!\left(-\dfrac{\left(\frac{z}{g(t_z)} - \frac{y}{g(t_y)}\right)^2}{2(h(t_z) - h(t_y))}\right)}{\dfrac{1}{g(t_z)\sqrt{2\pi(h(t_z) - h(t_x))}} \exp\!\left(-\dfrac{\left(\frac{z}{g(t_z)} - \frac{x}{g(t_x)}\right)^2}{2(h(t_z) - h(t_x))}\right)}\, dy.

We got this representation from the first result, Doob's representation. By factorizing the part inside the exponential, we get

P(X_{t_y} \in [y, y + dy] \mid X_{t_x} = x, X_{t_z} = z) = \frac{1}{\sqrt{2\pi\, \nu(t_y)}} \exp\!\left(-\frac{(y - \mu(t_y))^2}{2\, \nu(t_y)}\right) dy,

thus getting the desired result that X_{t_y}, with the knowledge that X_{t_x} = x and X_{t_z} = z, does indeed have a Gaussian law, with mean

\mu(t_y) = \frac{g(t_y)}{g(t_x)} \cdot \frac{h(t_z) - h(t_y)}{h(t_z) - h(t_x)}\, x + \frac{g(t_y)}{g(t_z)} \cdot \frac{h(t_y) - h(t_x)}{h(t_z) - h(t_x)}\, z

and variance

\nu(t_y) = g(t_y)^2\, \frac{(h(t_y) - h(t_x))(h(t_z) - h(t_y))}{h(t_z) - h(t_x)}.
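As a sketch (names ours), the conditional mean and variance above translate directly into code; with g ≡ 1 and h(t) = t they reduce to the classical Brownian bridge moments:

```python
def gm_conditional_moments(g, h, tx, x, tz, z, ty):
    """Mean and variance of X_{ty} given X_{tx} = x and X_{tz} = z for a
    Gauss-Markov process X_t = g(t) (x0 + int_0^t f(s) dW_s),
    where h(t) = int_0^t f(s)^2 ds and tx < ty < tz."""
    H = h(tz) - h(tx)
    mean = (g(ty) / g(tx)) * (h(tz) - h(ty)) / H * x \
         + (g(ty) / g(tz)) * (h(ty) - h(tx)) / H * z
    var = g(ty)**2 * (h(ty) - h(tx)) * (h(tz) - h(ty)) / H
    return mean, var
```

Sampling X_{t_y} given the endpoints is then simply mean + sqrt(var) * N(0, 1), which is the conditional sampling rule used repeatedly in the algorithms below.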
For the time-changed Wiener process (here X denotes the process with dX_t = f(t)\, dW_t), the bridge-maximum density becomes

P(M_{t_x,t_z} \in dl \mid X_{t_x} = x, X_{t_z} = z) = \frac{2(2l - (x + z))}{h(t_z) - h(t_x)} \exp\!\left(-\frac{2(l - x)(l - z)}{h(t_z) - h(t_x)}\right) dl.

Integrating the above value only on those measurable sets for which M_{t_x,t_z} \geq L, we get exactly the probability for the process to reach the boundary (barrier) in between the times t_x and t_z:

P(M_{t_x,t_z} > L \mid X_{t_x} = x, X_{t_z} = z) = \exp\!\left(-\frac{2(L - x)(L - z)}{h(t_z) - h(t_x)}\right).
Another result will finally help us attain the final result on the probability of crossing a constant boundary for a Gauss-Markov process.
Let W = \{W_t, \mathcal{F}_t;\ 0 \leq t \leq 1\} denote a standard real Wiener process on some probability space (\Omega, \mathcal{F}, P). For every 0 \leq t_x < t_z \leq 1 and all reals x, z, L_x, L_z with x < L_x and z < L_z, consider the first-passage time \tau_{t_x} = \inf\{t \geq t_x \mid W_t \geq L(t)\}, where L is the affine function joining (t_x, L_x) to (t_z, L_z). Then, conditioning on W_{t_x} = x and W_{t_z} = z, we have

P(\tau_{t_x} < t_z \mid W_{t_x} = x, W_{t_z} = z) = \exp\!\left(-\frac{2(L_x - x)(L_z - z)}{t_z - t_x}\right).
Write the affine boundary as L(t) = a + bt, with

a = \frac{t_z L_x - t_x L_z}{t_z - t_x}, \qquad b = \frac{L_z - L_x}{t_z - t_x},

so that L(t_x) = L_x and L(t_z) = L_z. Now we can change the first-passage time problem with an affine boundary into one for a drifted Brownian motion W'_t = W_t - bt with a constant boundary a:

\tau_{t_x} = \inf\{t \geq t_x \mid W_t \geq a + bt\} = \inf\{t \geq t_x \mid W_t - bt \geq a\} = \tau'_{t_x}.

We will be using Girsanov's theorem to change the probability measure so as to make this drifted Brownian motion a standard one under the new probability measure.
The process defined for t \geq t_x is

Z_t = \exp\!\left(b(W_t - x) - \frac{b^2}{2}(t - t_x)\right).

This is a martingale with E(Z_t) = 1, as Z_{t_x} = 1. Also, from the above expression for Z_t, we can deduce E(Z_t) = 1 directly from the moment generating function of a Brownian motion:

E(Z_t) = E\!\left(\exp\!\left(b(W_t - x) - \frac{b^2}{2}(t - t_x)\right)\right) = E\!\left(\exp(b(W_t - x))\right) \exp\!\left(-\frac{b^2}{2}(t - t_x)\right) = \exp\!\left(\frac{b^2}{2}(t - t_x)\right) \exp\!\left(-\frac{b^2}{2}(t - t_x)\right) = 1.

We can now define a new probability measure Q, under which W'_t = W_t - bt is a standard Wiener process. To test this fact,

Q(W'_{t_z} \in [z', z' + dz'] \mid W'_{t_x} = x' = x - bt_x) = E\!\left(1_{W_{t_z} \in [z, z+dz] \mid W_{t_x} = x} \exp\!\left(b(z - x) - \frac{b^2}{2}(t_z - t_x)\right)\right)
= \frac{1}{\sqrt{2\pi(t_z - t_x)}} \exp\!\left(-\frac{(z - x)^2}{2(t_z - t_x)} + b(z - x) - \frac{b^2}{2}(t_z - t_x)\right) dz
= \frac{1}{\sqrt{2\pi(t_z - t_x)}} \exp\!\left(-\frac{(z - x - b(t_z - t_x))^2}{2(t_z - t_x)}\right) dz
= \frac{1}{\sqrt{2\pi(t_z - t_x)}} \exp\!\left(-\frac{(z' - x')^2}{2(t_z - t_x)}\right) dz',

if we let z' = z - bt_z. Thus, under Q, W'_t is indeed a standard Brownian motion.
Now we know that the two probability measures are equivalent, and we define the relation between them as

Q(A) = E_P(1_A\, Z_{t_z}),

where A \in \mathcal{F}_{t_x,t_z}, and \mathcal{F}_{t_x,t_z} denotes the natural filtration generated by the original Wiener process (W_t)_{t_x \leq t \leq t_z}. Now, W'_{t_x} = x' = x - bt_x and W'_{t_z} = z' = z - bt_z.
We claim that the two events dB' = \{W'_{t_z} \in [z', z' + dz'] \mid W'_{t_x} = x'\} and dB = \{W_{t_z} \in [z, z + dz] \mid W_{t_x} = x\} have the same probability under the two probability measures Q and P respectively. In both cases we just evaluate the probability of a Brownian motion ending at z' or z, thanks to Girsanov's theorem; and when W'_{t_z} = z', simultaneously W_{t_z} = z. Thus they have the same law, and thus the same probability:
P(dB) = Q(dB') = \frac{1}{\sqrt{2\pi(t_z - t_x)}} \exp\!\left(-\frac{(z' - x')^2}{2(t_z - t_x)}\right) dz'.
Let M'_{t_x,t_z} be the maximum of the path (W'_t)_{t_x \leq t \leq t_z}:

M'_{t_x,t_z} = \sup_{t_x \leq t \leq t_z} W'_t = \sup_{t_x \leq t \leq t_z} (W_t - bt).

Let us consider the infinitesimal events dE = \{W_{t_z} \in dz,\ \sup_{t_x \leq t \leq t_z}(W_t - bt) \in d\beta \mid W_{t_x} = x\} and dE' = \{W'_{t_z} \in dz',\ M'_{t_x,t_z} \in d\beta \mid W'_{t_x} = x'\}. Again, by Girsanov's theorem, we know that

P(dE) = Q(dE').

Now, from the previous result we explicitly know the representation of the above probabilities:

Q(dE') = \frac{2(2\beta - (x' + z'))}{\sqrt{2\pi}\,(t_z - t_x)^{3/2}} \exp\!\left(-\frac{(2\beta - (x' + z'))^2}{2(t_z - t_x)}\right) dz'\, d\beta.

We need Q(M'_{t_x,t_z} > a \mid W'_{t_x} = x', W'_{t_z} = z'). By the law of conditional probability, dividing Q(dE') by Q(dB'), we get

P\!\left(\sup_{t_x \leq t \leq t_z}(W_t - bt) \in d\beta \,\middle|\, W_{t_x} = x, W_{t_z} = z\right) = Q(M'_{t_x,t_z} \in d\beta \mid W'_{t_x} = x', W'_{t_z} = z') = \frac{2(2\beta - (x' + z'))}{t_z - t_x} \exp\!\left(-\frac{2(\beta - x')(\beta - z')}{t_z - t_x}\right) d\beta.

Now, integrating the above relation over all the measurable sets for which M'_{t_x,t_z} > a, we get the final result on the probability:

P\!\left(\sup_{t_x \leq t \leq t_z}(W_t - bt) > a \,\middle|\, W_{t_x} = x, W_{t_z} = z\right) = \exp\!\left(-\frac{2(a - x')(a - z')}{t_z - t_x}\right).

Thus, putting back the original values a = \frac{t_z L_x - t_x L_z}{t_z - t_x}, b = \frac{L_z - L_x}{t_z - t_x}, x' = x - bt_x and z' = z - bt_z, and noting that a - x' = L_x - x and a - z' = L_z - z (since a + bt_x = L_x and a + bt_z = L_z), we get

P\!\left(\sup_{t_x \leq t \leq t_z}(W_t - bt) > a \,\middle|\, W_{t_x} = x, W_{t_z} = z\right) = P(\tau_{t_x} < t_z \mid W_{t_x} = x, W_{t_z} = z) = \exp\!\left(-\frac{2(L_x - x)(L_z - z)}{t_z - t_x}\right).
We can extrapolate this result to a time-changed Wiener process (Brownian motion). If we define B = \{B_t, \mathcal{F}_t;\ 0 \leq t \leq T\} as the solution of the SDE dB_t = f(t)\, dW_t for some smooth, bounded function f, letting B_{t_x} = x_1 and B_{t_z} = z_1 and keeping the same affine boundary, then we can also say

P\!\left(\sup_{t_x < t < t_z} B_t > a + bt \,\middle|\, B_{t_x} = x_1, B_{t_z} = z_1\right) = \exp\!\left(-\frac{2(L_x - x_1)(L_z - z_1)}{h(t_z) - h(t_x)}\right),

where h(t) = \int_{t_0}^{t} f(u)^2\, du.
Now we turn our attention to the problem at hand, the Gauss-Markov process.
Let W = \{W_t, \mathcal{F}_t;\ 0 \leq t \leq T\} denote a standard real Wiener process on some probability space (\Omega, \mathcal{F}, P), and let X = \{X_t, \mathcal{F}_t;\ 0 \leq t \leq T\} be a real adapted centered Gauss-Markov process on (\Omega, \mathcal{F}, P), defined by the equation

dX_t = \alpha(t) X_t\, dt + \sigma(t)\, dW_t, \qquad X_0 = x_0.

Consider the first-passage time \tau_{t_x} = \inf\{t > t_x \mid X_t > \ell(t)\}, where the boundary \ell(t) is the expectation of X_t knowing that X_{t_x} = L_x and X_{t_z} = L_z. Then, conditioning on X_{t_x} = x and X_{t_z} = z, we have

P(\tau_{t_x} < t_z \mid X_{t_x} = x, X_{t_z} = z) = \exp\!\left(-\frac{2(L_x - x)(L_z - z)}{g(t_z)\, g(t_x)\, (h(t_z) - h(t_x))}\right),

where g(t) = \exp(\int_0^t \alpha(u)\, du), f(t) = \exp(-\int_0^t \alpha(u)\, du)\, \sigma(t), and h(t), part of the variance of the process X_t, equals \int_0^t \exp(-2\int_0^s \alpha(u)\, du)\, \sigma(s)^2\, ds.
Thus we see that \frac{X_t}{g(t)} is a time-changed Wiener process, as d\!\left(\frac{X_t}{g(t)}\right) = f(t)\, dW_t, so we can apply both of the results proved above very nicely in this proof. We also see, by the definition of the boundary in consideration, that \ell(t_x) = L_x and \ell(t_z) = L_z.
Let K_x = \frac{L_x}{g(t_x)}, K_z = \frac{L_z}{g(t_z)}, x' = \frac{x}{g(t_x)}, z' = \frac{z}{g(t_z)}, for any reals x, z, L_x, L_z with x < L_x and z < L_z. Now let us define the boundary which we are going to consider in the first step, affine in h(t) and joining (h(t_x), K_x) to (h(t_z), K_z) for t_x < t < t_z:

L(t) = \frac{h(t_z) - h(t)}{h(t_z) - h(t_x)}\, K_x + \frac{h(t) - h(t_x)}{h(t_z) - h(t_x)}\, K_z.

Now we consider the first-passage time \tau'_{h(t_x)} = \inf\{h(t) > h(t_x) \mid W'_{h(t)} > L(t)\}, where W' is the time-changed Wiener process defined by the SDE dW'_t = f(t)\, dW_t. Then, conditioning on W'_{h(t_x)} = x' and W'_{h(t_z)} = z', we get

P(\tau'_{h(t_x)} < h(t_z) \mid W'_{h(t_x)} = x', W'_{h(t_z)} = z') = \exp\!\left(-\frac{2(K_x - x')(K_z - z')}{h(t_z) - h(t_x)}\right).

We know that h(t) is an increasing function. Now we look at the event whose probability we just calculated:

\{\tau'_{h(t_x)} < h(t_z) \mid W'_{h(t_x)} = x', W'_{h(t_z)} = z'\}
= \{\exists\, h(t) \in (h(t_x), h(t_z)):\ W'_{h(t)} > L(t) \mid W'_{h(t_x)} = x', W'_{h(t_z)} = z'\}
= \{\exists\, t \in (t_x, t_z):\ W'_{h(t)} > L(t) \mid W'_{h(t_x)} = x', W'_{h(t_z)} = z'\}
= \{\exists\, t \in (t_x, t_z):\ g(t)\, W'_{h(t)} > g(t)\, L(t) \mid g(t_x)\, W'_{h(t_x)} = x,\ g(t_z)\, W'_{h(t_z)} = z\}.

Now, since we can very easily see that X_t and g(t)\, W'_{h(t)} have the same law for t_x \leq t \leq t_z, we can deduce that

P(\tau'_{h(t_x)} < h(t_z) \mid W'_{h(t_x)} = x', W'_{h(t_z)} = z') = P(\tau_{t_x} < t_z \mid X_{t_x} = x, X_{t_z} = z),

using the fact that g(t)\, L(t) = \ell(t) and \tau_{t_x} = \inf\{t > t_x \mid X_t > \ell(t)\}. Thus, finally, putting in all the values (K_x - x' = (L_x - x)/g(t_x) and K_z - z' = (L_z - z)/g(t_z)), we get

P(\tau_{t_x} < t_z \mid X_{t_x} = x, X_{t_z} = z) = \exp\!\left(-\frac{2(L_x - x)(L_z - z)}{g(t_z)\, g(t_x)\, (h(t_z) - h(t_x))}\right).
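A direct transcription of this final formula reads as follows (an illustrative sketch; the function and argument names are ours):

```python
from math import exp

def gm_crossing_prob(x, z, Lx, Lz, g_tx, g_tz, h_tx, h_tz):
    """P(tau in (tx, tz) | X_tx = x, X_tz = z) for a Gauss-Markov process,
    with g, h as in the text and boundary values Lx, Lz (x < Lx, z < Lz)."""
    return exp(-2.0 * (Lx - x) * (Lz - z) / (g_tx * g_tz * (h_tz - h_tx)))
```

For a standard Brownian motion (g ≡ 1, h(t) = t) this reduces to the Brownian bridge crossing probability exp(-2(L - x)(L - z)/(t_z - t_x)) derived earlier, and the probability increases as the endpoints move closer to the barrier.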
Euler Scheme
We look into the Euler Scheme for the approximation of the first-passage time
for a Gauss-Markov process.
We are given a stochastic differential equation of the form

dX_t = \mu(X_t, t)\, dt + \sigma(X_t, t)\, dW_t, \qquad 0 \leq t \leq T, \qquad X_0 = x_0,

where \mu and \sigma are smooth functions. The idea is to apply a time discretization scheme to the stochastic differential equation in question, whose solution is X_t.
The time discretization step is \frac{T}{N}, where N \in \mathbb{N}^*; we denote the quantity \frac{T}{N} by \Delta. Thus the scheme looks like

\hat{X}_{(k+1)\Delta} = \hat{X}_{k\Delta} + \mu(\hat{X}_{k\Delta}, k\Delta)\, \Delta + \sigma(\hat{X}_{k\Delta}, k\Delta)\, \sqrt{\Delta}\, G_{k+1}, \qquad k \in \{0, 1, \ldots, N-1\}, \qquad \hat{X}_0 = x_0,

where the G_{k+1} are independent N(0, 1) random variables.
We define the first-passage time as \tau = \inf\{t \geq 0;\ X_t \geq L_t\}, where L_t is the boundary in question. The boundary can be a simple constant boundary or an affine boundary; in this report we deal with a constant boundary only. Now we define the first-passage time in accordance with our discrete Euler scheme as \tilde{\tau} = \inf\{t_i \geq 0;\ \hat{X}_{t_i} \geq L_{t_i}\}, where t_i = i\Delta = i\frac{T}{N}. Thus we approximate \tau by \tilde{\tau} using the Euler scheme.
Gauss-Markov Process
The Gauss-Markov process we consider has the form

dX_t = \lambda X_t\, dt + \sigma\, dW_t, \qquad X_0 = x_0, \qquad 0 \leq t \leq T,

where \lambda is a non-positive real constant, \sigma is a positive constant, x_0 is a real constant and T > 0. The Euler scheme, with \Delta = \frac{T}{N} where N is a natural number, is

\hat{X}_{(k+1)\Delta} = \hat{X}_{k\Delta} + \lambda \hat{X}_{k\Delta}\, \Delta + \sigma \sqrt{\Delta}\, G_{k+1} \quad \text{for } k \in \{0, 1, \ldots, N-1\}, \qquad \hat{X}_0 = x_0,

with G_{k+1} \sim N(0, 1). We look for the first time instance when the discrete process crosses the boundary L_t; thus we define the first-passage time (or first-hitting time) as \tau_i = \inf\{t_i \geq 0;\ \hat{X}_{t_i} \geq L_{t_i}\}, where t_i = i\frac{T}{N} for i \in \{0, 1, \ldots, N\}. We note the first time the crossing happens and denote that as our first-passage time.
Now, what if none of the discrete values touches or crosses the boundary? Even though the discrete points might not cross, the continuous path may still cross. In such cases we use an exit probability at each time step to check what the probability is of the diffusion process crossing the boundary in such a time interval. Thus the method is to obtain the realizations (\hat{X}_{t_i})_{0 \leq i \leq N} thanks to the Euler scheme and then, conditioning on the values \hat{X}_{t_i} and \hat{X}_{t_{i+1}}, to look at the probability of a crossing happening in between these two values, if and only if neither value crosses the boundary. We also know that (X_t)_{t_i \leq t \leq t_{i+1}} has the law of some form of Brownian bridge. Using results from the previous section, which we have already proved, we get

P(\tau \in (t_i, t_{i+1}) \mid \hat{X}_{t_i} = x, \hat{X}_{t_{i+1}} = z) = \exp\!\left(-\frac{2(L_{t_i} - x)(L_{t_{i+1}} - z)}{g(t_i)\, g(t_{i+1})\, (h(t_{i+1}) - h(t_i))}\right),

where we have acknowledged the fact that \tau > t_i, and where g(t) = \exp(\lambda t) and h(t) = \sigma^2\, \frac{1 - \exp(-2\lambda t)}{2\lambda}. We get these representations from Doob's integral representation, dealt with in the last section.
Now, after we find this probability of crossing, we generate a uniform random variable \xi \sim U(0, 1) and check the probability of exit against \xi. The idea is the same as before: we are looking for a Bernoulli event, namely whether the continuous path between the two discrete points crosses the boundary or not, and we want to generate a random event whose probability of success is exactly the probability of crossing. In this respect we know that, for a uniform random variable y \sim U(0, 1), P(y \leq x) = x if 0 \leq x \leq 1. Thus we generate a uniform random variable, in this case \xi, and compare it with our probability of exit p, knowing that

P(\xi \leq p) = p.

Thus, if \xi \leq p, we accept that the continuous path between \hat{X}_{t_i} and \hat{X}_{t_{i+1}} does cross the boundary.
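Putting the scheme together for the Ornstein-Uhlenbeck case gives the following sketch (our own illustrative code, assuming \lambda < 0 and x_0 below the barrier; names and the midpoint convention for the accepted crossing time are ours):

```python
import numpy as np

def ou_first_passage_euler(lam, sigma, x0, L, T, n_steps, rng):
    """Euler scheme for dX = lam*X dt + sigma dW (lam < 0) with a
    Brownian-bridge crossing test for the constant barrier L inside
    each step.  Returns an approximate first-passage time, or None
    if no crossing is detected before T."""
    dt = T / n_steps
    g = lambda t: np.exp(lam * t)
    h = lambda t: sigma**2 * (1.0 - np.exp(-2.0 * lam * t)) / (2.0 * lam)
    x, t = x0, 0.0
    for _ in range(n_steps):
        x_new = x + lam * x * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t_new = t + dt
        if x_new >= L:
            return t_new                      # a discrete point crosses
        # both endpoints below L: exit probability of the bridge in between
        p = np.exp(-2.0 * (L - x) * (L - x_new)
                   / (g(t) * g(t_new) * (h(t_new) - h(t))))
        if rng.uniform() <= p:
            return 0.5 * (t + t_new)          # our convention: the midpoint
        x, t = x_new, t_new
    return None
```

With \lambda = -1, \sigma = 1, x_0 = 0, L = 0.5 and T = 10, most simulated paths cross well before T.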
Numerical Results
We look into the Euler scheme simulation of the Ornstein-Uhlenbeck process

dX_t = \lambda X_t\, dt + \sigma\, dW_t, \qquad X_0 = x_0,

with the help of the Brownian bridge concept. The idea is that we simulate a new value of the process at each time point. If the value exceeds the value of the boundary, we observe the time and denote it as the first-passage time. However, if the value does not cross the boundary value, we check the probability of crossing (exit) between the newly simulated value and the previously generated value (which was also below the boundary) and compare it with a uniform random variable \xi \sim U(0, 1). The idea is to generate a random event whose probability of success is the same as the probability of crossing p.
Then, with the parameters \lambda = -1.0, \sigma = 1.0, x_0 = 0.0, initial time 0.0, final time T = 10.0, time step \Delta = \frac{10}{2^{16}} (i.e. N = 2^{16}), boundary L = 0.5 and Monte Carlo number 200000, we obtain the following.
[Figure 1: a simulated sample path of the Ornstein-Uhlenbeck process ("simulate_path.dat"); process value against time from 0 to 10.]
For the kernel density estimate of the first-passage time density we use the bandwidth

h = \left(\frac{4\, \hat{\sigma}^5}{3N}\right)^{1/5} \approx 1.06\, \hat{\sigma}\, N^{-1/5},

where \hat{\sigma} is the standard deviation of the set of sample points (x_1, x_2, \ldots, x_N). With this value of h, we get a nice approximation of the PDF of the first-passage time for the Ornstein-Uhlenbeck process.
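The kernel estimate itself is straightforward (an illustrative sketch of ours; function and variable names are our own):

```python
import numpy as np

def gaussian_kde(samples, grid):
    """Gaussian kernel density estimate evaluated on `grid`, using the
    rule-of-thumb bandwidth h = (4 sigma^5 / (3 N))^{1/5} ~ 1.06 sigma N^{-1/5}."""
    samples = np.asarray(samples, dtype=float)
    n = len(samples)
    sigma = np.std(samples, ddof=1)
    h = (4.0 * sigma**5 / (3.0 * n)) ** 0.2
    u = (np.asarray(grid)[:, None] - samples[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (n * h * np.sqrt(2.0 * np.pi))

# Sanity check on a standard normal sample.
rng = np.random.default_rng(2)
samples = rng.standard_normal(5000)
grid = np.linspace(-4.0, 4.0, 201)
pdf = gaussian_kde(samples, grid)
integral = float(((pdf[1:] + pdf[:-1]) * np.diff(grid)).sum() / 2.0)
```

The estimate should integrate to roughly 1 and, for a standard normal sample, peak near 1/\sqrt{2\pi} at 0 (slightly lower, since the kernel adds variance h^2).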
[Figure 2: the estimated probability density function (PDF) of the first-passage time for an Ornstein-Uhlenbeck process ("firstpassagekernel.dat").]
Now we try to estimate the cumulative distribution function. If we have sample data X = \{X_1, X_2, \ldots, X_N\}, by the law of large numbers we know that \frac{X_1 + X_2 + \cdots + X_N}{N} converges to E(X). Hence

P(X \leq x) = E(1_{X \leq x}) \approx \frac{1}{N} \sum_{i=1}^{N} 1_{X_i \leq x},

where X denotes the sample data set of first-passage times and N in the above expression is the Monte Carlo number, i.e. the number of simulated values in our data set. Thus we have the cumulative distribution of the first-passage time.
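This estimator is a one-liner (sketch; names ours):

```python
import numpy as np

def empirical_cdf(samples, x):
    """Monte-Carlo estimate of P(X <= x): the fraction of the sample lying
    at or below x, justified by the law of large numbers."""
    return float((np.asarray(samples) <= x).mean())
```

Evaluating it on a grid of x values yields the estimated distribution function plotted below.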
The cumulative distribution of the first-passage time for the Ornstein-Uhlenbeck process is shown below.
Now, we know that for this method we encounter a statistical error due to the Monte Carlo simulation. The statistical error satisfies

\left| E(f(X)) - \frac{1}{M} \sum_{i=1}^{M} f(X_i) \right| \leq C\, \sqrt{\frac{Variance}{M}},

where C is some positive constant depending on the confidence interval of our choice. Thus we need to calculate the variance of our sample of first-passage times. Now, it is very easy to make a mistake here
"cumulative_density.dat"
0.9
P(First-passage Time < X)
0.8
0.7
0.6
0.5
0.4
0.3
0.2
0.1
0
0
10
by estimating the variance as

Variance_{expected} = \frac{1}{M} \sum_{i=1}^{M} (x_i - \bar{x})^2.

However, such a calculation leads to a biased estimate. In order to get the unbiased form of the variance, we see that

M\, E(Variance_{expected}) = E\!\left(\sum_{i=1}^{M} (x_i - \bar{x})^2\right) = E\!\left(\sum_{i=1}^{M} (x_i^2 + \bar{x}^2 - 2 x_i \bar{x})\right) = E\!\left(\sum_{i=1}^{M} x_i^2\right) + E\!\left(\sum_{i=1}^{M} \bar{x}^2\right) - 2\, E\!\left(\bar{x} \sum_{i=1}^{M} x_i\right).

Now, since \bar{x} = \frac{1}{M} \sum_{i=1}^{M} x_i, we have \sum_{i=1}^{M} x_i = M\bar{x}, so the last term equals -2M\, E(\bar{x}^2) and

M\, E(Variance_{expected}) = \sum_{i=1}^{M} E(x_i^2) - M\, E(\bar{x}^2).

The laws of all the sample variables are the same. Now, we know that

E(\bar{x}^2) = Variance(\bar{x}) + E(\bar{x})^2 = Variance\!\left(\frac{1}{M} \sum_{i=1}^{M} x_i\right) + (mean)^2 = \frac{Variance(x_1)}{M} + (mean)^2;

also, E(\bar{x}) = E(X) = mean, from the strong law of large numbers, and, as the samples are identically distributed, Variance(x_1) = Variance and E(x_1) = E(X) = mean. Thus we get

M\, E(Variance_{expected}) = M\left(Variance(x_1) + E(x_1)^2\right) - M\left(\frac{Variance(x_1)}{M} + (mean)^2\right) = M\, Variance + M\, (mean)^2 - Variance - M\, (mean)^2 = (M - 1)\, Variance.

Thus we get that \frac{M\, Variance_{expected}}{M - 1} is the unbiased variance for the sample data. With such a choice of variance we obtain an unbiased variance for our sample data, and thus an accurate statistical error.
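In code, the M/(M-1) correction above is the familiar `ddof=1` convention; the following sketch (names ours) packages the Monte Carlo mean together with its confidence half-width:

```python
import numpy as np

def mc_estimate_with_error(samples, z=1.96):
    """Monte-Carlo mean together with the confidence half-width
    z * sqrt(Variance / M), using the unbiased (ddof=1) sample
    variance, i.e. the M/(M-1) correction derived above."""
    samples = np.asarray(samples, dtype=float)
    m = len(samples)
    mean = samples.mean()
    var = samples.var(ddof=1)    # sum((x_i - mean)^2) / (M - 1)
    return mean, z * np.sqrt(var / m)
```

Here z = 1.96 corresponds to a 95% confidence interval; for a unit-variance sample of size 10^4 the half-width is close to 1.96/100.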
A Fast Algorithm
The paper [1] by Taillefumier and Magnasco details the procedure they follow to approximate the first-passage time for a Gauss-Markov process with Hölder continuous boundaries, including the Ornstein-Uhlenbeck process. The main idea behind their algorithm is a probabilistic variant of dichotomic search: their method evaluates discrete points of a sample path exactly and then refines this evaluation recursively, but only on regions where a passage is estimated to be more probable.
Introduction
For the class of Gauss-Markov processes, which also includes the ever-so-important Ornstein-Uhlenbeck process, the numerical approximation was performed in a new way thanks to the paper [1]. A general Gauss-Markov process has the form

dX_t = \alpha(t) X_t\, dt + \sigma(t)\, dW_t, \qquad X_0 = x_0.
An example of a first-passage problem concerning an Ornstein-Uhlenbeck process is the leaky integrate-and-fire neuron. In this model a process U_t, with deterministic initial value U_0 = u_0, represents the membrane potential of a neuron; any time the voltage crosses a given threshold value, the cell fires, emits an action potential and resets its potential to a base value. More general Gauss-Markov processes can be seen as Ornstein-Uhlenbeck processes with the parameters \alpha and \sigma depending on time. Certain assumptions are taken into account. They are:
1. Coefficient Regularity Assumption: the Gauss-Markov processes are solutions of a linear stochastic equation with a time-dependent, non-positive, bounded function \alpha and a time-dependent, positive, homogeneously Hölder continuous function \sigma.
2. Barrier Regularity Assumption: the barrier is assumed to be homogeneously Hölder continuous and non-negative.
These assumptions determine the regularity of a continuous density function for the first-passage time and prescribe the speed of convergence of the first-passage time computation.
Algorithm
We now define the algorithm, originally constructed by Taillefumier and Magnasco [1]. This algorithm efficiently computes the distribution of the first-passage time for a general class of Gauss-Markov processes X and continuous boundaries L. It involves recursively implementing a probabilistic variant of dichotomic search. A dichotomic search is a search algorithm that operates by selecting between two distinct alternatives (dichotomies) at each step; it is a specific type of divide-and-conquer algorithm, a well-known example being binary search.
The idea is to simulate the path of the process using the discrete construction method. We first generate the final position of the process using the formula

X_T = g(T)\, x_0 + g(T)\, \sqrt{h(T)}\, \xi,

where \xi is a Gaussian random variable N(0, 1). Then we simulate X_{T/2} using the result on the discrete construction of Gauss-Markov processes. In this way, we keep dividing and subdividing the domain to find the values of our process at specific times; the idea behind this is the dichotomic search algorithm. The moment we come across a value of the process X_t reaching or over-shooting the value of the boundary L, we know by continuity of the sample path that the crossing has occurred before or at t; thus we disregard continuing the simulation of the sample path for times s following t. We are only concerned with the part of the process prior to time t and continue our dichotomic search in that part only.
In the case when the newly simulated value does not cross the boundary, and neither do any of the values to its left or to its right, we check whether the probability for a crossing to happen (we get this value with the help of the result obtained in the section on Gauss-Markov processes) is larger than some prescribed small positive real \epsilon. We will be checking the probability of crossing
on both sides of that simulated value, and then decide to continue the dichotomic search in the part where we get the greater probability of crossing.
We define a certain depth of search initially, i.e. a finest mesh of \frac{T}{2^N}. When this depth of discretization is reached, we decide to look for the first-passage time for this particular simulation. In the case when one of the simulated values crosses the boundary, the first-passage time is the time corresponding to the one which crosses the boundary first. In the other case, when none of the simulated values crosses the boundary, we compute the probability of crossing and compare the value with a uniform random variable. If the probability of crossing is bigger than the uniform random variable, we consider that the continuous path between the two time points of the interval where the probability of crossing is being checked has actually crossed the boundary, and we note the midpoint of that interval as the first-passage time.
The idea behind this probabilistic variant of the dichotomic search is to search for the first-passage time by only simulating the process at points where the outcome happens to be close enough to the boundary L.
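For intuition, here is a stripped-down sketch of the probabilistic dichotomic search, specialized to a standard Brownian motion with a constant barrier (g ≡ 1, h(t) = t). This is our own illustrative code, not the authors' implementation:

```python
import numpy as np

def fast_first_passage_bm(L, T, max_depth, eps, rng):
    """Probabilistic dichotomic search for the first passage of a standard
    Brownian motion (started at 0 < L) over the constant barrier L on [0, T].
    Sub-intervals whose bridge crossing probability is below eps are pruned;
    at the finest depth a bridge test decides the crossing."""
    def bridge_prob(t0, x0, t1, x1):
        if x0 >= L or x1 >= L:
            return 1.0
        return np.exp(-2.0 * (L - x0) * (L - x1) / (t1 - t0))

    def midpoint(t0, x0, t1, x1):
        # conditional (bridge) sampling of the path at the interval midpoint
        tm = 0.5 * (t0 + t1)
        var = (t1 - tm) * (tm - t0) / (t1 - t0)
        xm = 0.5 * (x0 + x1) + np.sqrt(var) * rng.standard_normal()
        return tm, xm

    def search(t0, x0, t1, x1, depth):
        if x0 >= L:
            return t0
        if depth == max_depth:
            if x1 >= L:
                return t1
            if rng.uniform() <= bridge_prob(t0, x0, t1, x1):
                return 0.5 * (t0 + t1)        # accepted in-between crossing
            return None
        if bridge_prob(t0, x0, t1, x1) <= eps:
            return None                       # prune an unlikely interval
        tm, xm = midpoint(t0, x0, t1, x1)
        left = search(t0, x0, tm, xm, depth + 1)
        if left is not None:
            return left                       # earliest crossing wins
        return search(tm, xm, t1, x1, depth + 1)

    xT = np.sqrt(T) * rng.standard_normal()   # simulate the final value first
    return search(0.0, 0.0, T, xT, 0)
```

As a sanity check, the fraction of paths detected as crossing should be close to the exact value 2(1 - \Phi(L/\sqrt{T})), up to the pruning parameter eps and Monte Carlo error.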
3. For every 1 \leq n \leq N, the algorithm does not take into account any time following the occurrence of a value of the sample path above the barrier L.
4. Finally, for n = N, we define the approximate first-passage time \tau_N as the first time the sample path crosses the boundary; if it does not cross, then we look into the probability of its crossing between the times l_{N,k} and r_{N,k} by comparing the probability of crossing with a uniform random variable \xi distributed as U(0, 1). If the probability of crossing is larger than \xi, then we define the first-passage time \tau_N as the midpoint of l_{N,k} and r_{N,k} for the corresponding k.
Simulation of an Ornstein-Uhlenbeck process
In this section we work with an Ornstein-Uhlenbeck process as a representative of the Gauss-Markov family. The reason behind choosing an Ornstein-Uhlenbeck process is that it is one of the most popular Gauss-Markov processes; moreover, it has varied applications in the fields of physics and finance, to name a few. The Ornstein-Uhlenbeck process we work with looks like

dX_t = \lambda X_t\, dt + \sigma\, dW_t, \qquad X_0 = x_0, \qquad 0 \leq t \leq T.
2. Once we have generated all the values of our first round of simulation, we look for the first time one of these values crosses the boundary. If none of the values crosses the boundary, we state that the process does not cross the boundary and we carry on generating a new path for the same process. Otherwise, if we find that there exists at least one time instance when the process crosses the boundary, we note the first time this happens as T_{final}.
3. We start to check the probability of crossing in each time interval of length \frac{T}{2^N}, starting from the initial time (0 in our case) up to the time when the path crosses the boundary for the first time. Thus we check the probability of crossing between the time instances \left[0, \frac{T}{2^N}\right], \left[\frac{T}{2^N}, \frac{2T}{2^N}\right], and so on, until we reach T_{final}. In each interval the probability p is compared with a positive, small, real, pre-chosen \epsilon. If p \geq \epsilon for a particular interval, then we subdivide that interval as we did the interval [0, T]; this time, again, the depth is chosen to be 2^N with the same N as before.
4. After simulating all the values in the interval [kT/2^N, (k+1)T/2^N], for some k, with step T/2^N, we apply the same crossing test on the refined grid.
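The interval scan of step 3 can be sketched as below. The crossing probability used here is the one of a Brownian-motion bridge, exp(−2(L−x)(L−y)/(σ²Δ)), applied as a local approximation for the process over a short interval; the function names and the threshold usage are ours.

```python
import math

def bridge_crossing_prob(x, y, L, dt, sigma=1.0):
    # probability that a Brownian bridge from x to y over a time step dt
    # crosses the level L, assuming both endpoints are below L
    if x >= L or y >= L:
        return 1.0
    return math.exp(-2.0 * (L - x) * (L - y) / (sigma ** 2 * dt))

def intervals_to_refine(values, L, dt, eps):
    # indices k of the intervals [k*dt, (k+1)*dt] whose crossing
    # probability reaches the pre-chosen threshold eps
    return [k for k in range(len(values) - 1)
            if bridge_crossing_prob(values[k], values[k + 1], L, dt) >= eps]
```

Intervals returned by `intervals_to_refine` are then subdivided to depth 2^N as described in the text.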
Numerical Simulation
The equation in question is
dX_t = λ X_t dt + σ dW_t    with    X_0 = x_0,    0 ≤ t ≤ T
with λ = 1.0, σ = 1.0, x_0 = 0.0, the boundary L = 0.5, and finally T = 10. The depth is chosen to be 2^10. First we simulate the first set of values for X.
After the initial simulation, we check for the first time the process value has touched or exceeded the value of the boundary L. We find such a time in this case, and we go on checking the probability of exit in each interval against an ε, where ε = 0.001³. Wherever the probability is greater than ε, we simulate more values of the process in that interval using the same idea as before.
If one of these newly simulated values touches or crosses the boundary value, the first one to do so becomes our first-passage time. If not, we check in which interval the probability of crossing was the highest. We will already have had new values of X simulated in that interval. This time we check the probability of crossing against a uniform random variable U(0, 1) in each of the newly formed intervals, thanks to the second round of simulation. The first such probability calculated to be greater than its uniform draw tells us that, although the discrete path does not touch or cross the boundary, the continuous path between the two discrete values does.
³The idea behind choosing this value of ε is justified in the next section.
[Figure: the initial simulated sample path of the process ("Initial_Simulation.dat"), process value against time on [0, 10].]
Here we compare the two previous algorithms in terms of their speed of simulation and their efficiency. We know that with the Fast algorithm we will be able to simulate the first-passage time in less computational time. We attain this since we avoid simulating the process under consideration any further beyond a point once we find a crossing has occurred. Instead of simulating with higher precision over the entire interval, we concentrate only on the region where we have found our first-passage time to exist.
Of course, there are some instances when the Fast algorithm returns an erroneous first-passage time. This happens when we search for the first-passage time in an interval where a passage might take place, but it is not the first time that the process crosses the barrier.
[Figure: the second round of simulation near the boundary ("second_round_of_simulation.dat"), process values between 0.45 and 0.52 against time around 6.472.]
[Figure: the empirical cumulative distribution function P(First-passage Time < x) ("cumulative_distribution.dat") on [0, 10].]
Figure 7: The kernel approximation of the probability density function for the
first-passage time for an Ornstein-Uhlenbeck process
It is given that for a chosen parameter ε, with which we compare our probability of crossing at each interval, and for a given depth 2^N, we get
P( τ_N > τ + 2^{-N} ) = η(N, ε)
that is, the probability that a simulated first-passage time does not approximate a true first-passage time, where η(N, ε) ≤ ε 2^N. Thus, we see that with a clever choice of ε, we can lower this probability.
The choice of ε is done in this fashion:
ln ε ≤ ln η − (N − 1) ln 2 − ln T
With such a choice of ε, we get η(N, ε) ≤ η.
Thus we look into the time and efficiency of the two algorithms with the same variable values and the same depth 2^N. Also, we simulate the first-passage time in both algorithms using the same Monte-Carlo number.
6.1 Numerical Results

Table 1: First-passage time - Euler scheme
Monte-Carlo Number   Delta      E(τ 1_{τ<T})   Statistical Error
1000                 0.039      1.1566         0.09301
11000                0.00976    1.1946         0.0286
170000               0.002441   1.2056         0.00727
2700000              0.00061    1.2162         0.001828

The table shows, for each Monte-Carlo number and time step Delta, the expected value E(τ 1_{τ<T}) and the statistical error for the algorithm.
We now look into the numerical results for the Fast algorithm for the above SDE. If we want the probability of an erroneous result for the first-passage time to be less than 10^{-10}, we need to choose ε = exp(−38.5). However, such a small epsilon is computationally almost equal to zero, so we choose a much bigger ε = exp(−18), and hence the probability of the algorithm returning an erroneous result also gets higher, of the order 10^{-3}.
Table 2: First-passage time - Fast algorithm
Monte-Carlo Number   Delta      E(τ 1_{τ<T})   Statistical Error
1000                 0.039      1.564          0.1235
11000                0.00976    1.348          0.0335
170000               0.002441   1.246          0.00836
2700000              0.00061    1.2198         0.00202
The analytical first-passage time density for the Ornstein-Uhlenbeck process can be written as
f(x) = ( L / ( √(2π) sinh(x)^{3/2} ) ) exp( x/2 − L² e^{−x} / (2 sinh x) )
The graph of the density function for the case when L = 0.5 and X_0 = 0 is shown in Figure 8, and we compare it with the density functions estimated with the Euler scheme and the Fast algorithm respectively (Figures 9 and 10).
Thus, we see that the probability density function that we get is a very nice approximation of the density function for the Ornstein-Uhlenbeck process. We also see that the speed advantage of the Fast algorithm grows as the Monte-Carlo number increases.
Figure 8: The graph of the density function using the analytical solution for an
OU process
Figure 9: The graph of the density function using the kernel approximation for
an Euler scheme
Figure 10: The graph of the density function using the kernel approximation
for the Fast algorithm
Introduction
This section looks into the exact simulation of a diffusion described by a stochastic differential equation. The paper on exact simulation [4] by Beskos and Roberts gives a very nice algorithm to do so. The idea was further extended in the paper [3] by Beskos, Papaspiliopoulos and Roberts. We look at the more evolved method, mainly presented in [3]. It involves a rejection procedure, whenever applicable, and the use of the Point Poisson Process (PPP) to return exact draws from any finite-dimensional distribution of the solution of the stochastic differential equation.
Let W = {W_t; 0 ≤ t ≤ T} be a standard Brownian motion. Consider the general type of a one-dimensional stochastic differential equation
dX_t = b(X_t)dt + σ(X_t)dW_t,    0 ≤ t ≤ T,    X_0 = x_0 ∈ R
where E ∫_0^T σ(s)² ds < ∞ holds for some T < ∞. Also, if the functions are locally Lipschitz, they are also locally continuous; thus the functions b and σ also satisfy the locally bounded condition, since continuous functions are bounded on a compact set.
So, assuming the above-mentioned conditions on the functions b and σ, we move ahead.
The exact algorithm provides an alternative to the Euler scheme which involves no approximation and is still computationally highly efficient. It returns exact sample paths. In the following part, we are going to transform the above stochastic differential equation into one with σ = 1. This can be done by a simple transformation.
dX_t = b(X_t)dt + σ(X_t)dW_t
Let Y_t = η(X_t), such that by applying Itô's formula to it we get
dY_t = η′(X_t) dX_t + (1/2) η″(X_t) σ(X_t)² dt
     = [ η′(X_t) b(X_t) + (1/2) η″(X_t) σ(X_t)² ] dt + η′(X_t) σ(X_t) dW_t
Thus, to make the diffusion coefficient 1, we need
η′(X_t) σ(X_t) = 1
and hence we get
η(x) = ∫_{x_0}^{x} du / σ(u)
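As a quick numerical illustration of this transformation (our own example, not from the report): for σ(u) = u and x_0 = 1 we get η(x) = ∫_1^x du/u = ln x, which a midpoint-rule evaluation of the integral reproduces.

```python
import math

def eta(x, x0=1.0, sigma=lambda u: u, n=10000):
    # midpoint-rule evaluation of eta(x) = integral of 1/sigma(u) from x0 to x
    h = (x - x0) / n
    total = 0.0
    for i in range(n):
        u = x0 + (i + 0.5) * h
        total += h / sigma(u)
    return total
```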
Rejection Sampling
1. Generate Y ~ g
2. Generate U ~ U(0, 1), where U(0, 1) means a uniform random variable in the range (0, 1)
3. If U ≤ f(Y)/(C g(Y)), return Y
4. Else go to 1.
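A self-contained sketch of these four steps, with the target density f(x) = 6x(1−x) on (0, 1) and the uniform proposal g, for which C = 1.5 makes f/(Cg) ≤ 1 (the example is ours):

```python
import random

def rejection_sample(rng, C=1.5):
    # target f(x) = 6x(1-x) on (0,1); proposal g = U(0,1); f(x)/(C g(x)) <= 1
    while True:
        y = rng.random()                      # step 1: Y ~ g
        u = rng.random()                      # step 2: U ~ U(0, 1)
        if u <= 6.0 * y * (1.0 - y) / C:      # step 3: accept
            return y
        # step 4: rejection is the loop back to step 1
```

The expected acceptance rate of such a sampler is 1/C, here 2/3.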
The probability of acceptance is
P(accept) = ∫ α(z) f_X(z) dz,    with α(z) = f_Y(z) / (C f_X(z))
          = ∫ [ f_Y(z) / (C f_X(z)) ] f_X(z) dz
          = (1/C) ∫ f_Y(z) dz
          = 1/C
Now, since f_Y(z)/(C f_X(z)) = α(z) ≤ 1, the constant C is an upper bound for the density ratio.
Exact Algorithm
Consider the diffusion process Y = {Y_t; 0 ≤ t ≤ T}, which follows the stochastic differential equation
dY_t = α(Y_t)dt + dW_t,    0 ≤ t ≤ T,    Y_0 = y_0 ∈ R
The drift function α is assumed to satisfy the Lipschitz condition and also a local growth bound.
The main idea behind this algorithm is applying Girsanov's Theorem to make the process simple enough to simulate. We apply Girsanov's Theorem once to change the process Y_t into a standard Wiener process under some probability space. Then we apply Girsanov's theorem again to change the probability space so that, under it, the process becomes a Brownian-bridge-like process. There are more intricate details in the process of changing the probabilities, but we will get to them in detail while explaining the entire process.
We first define a sample space (Ω, F) equipped with a probability measure P. Under this measure, the Brownian motion which propels our diffusion process is defined as W = {W_t; 0 ≤ t ≤ T}. We know that if we want to transform the diffusion process at hand, Y_t, into a standard Wiener process, we have to take the help of Girsanov's Theorem. Now,
dY_t = α(Y_t)dt + dW_t
Let Q be the new probability measure under which Y_t is a Wiener process of the form W̃_t.
Some notes on Girsanov's Theorem:
Let Z(t) = exp{ ∫_0^t θ_s dW_s − (1/2) ∫_0^t |θ_s|² ds }
Novikov's Proposition:  E_P( exp( (1/2) ∫_0^T |θ_t|² dt ) ) < +∞
Then, if {θ_t} is an adapted process satisfying Novikov's Proposition, for each T > 0, Z(T) is a likelihood ratio: that is, the formula
Q(F) = E_P( Z(T) 1_F )
defines a new probability measure on (Ω, F). Girsanov's Theorem describes the distribution of the stochastic process {W(t)} under this new probability measure. We define
W̃_t = W_t − ∫_0^t θ_s ds
Then Girsanov's Theorem states that, under this new probability measure Q, the stochastic process {W̃_t}_{0≤t≤T} is a standard Wiener process.
Now, we have to transform the original equation dY_t = α(Y_t)dt + dW_t into a Brownian motion under some new probability measure Q. Thus, taking θ_s = α(Y_s), for any test functional F we get
E_P( F(Y) ) = E_Q( F(Y) dP/dQ )
            = E_Q( F(W̃) exp{ ∫_0^T α(W̃_s) dW̃_s − (1/2) ∫_0^T α(W̃_s)² ds } )
Thus, we get:
dP/dQ = exp( ∫_0^T α(W̃_t) dW̃_t − (1/2) ∫_0^T α(W̃_t)² dt ) = G(W̃)
This holds if E_Q( exp( (1/2) ∫_0^T α(W̃_s)² ds ) ) < +∞. Assuming Novikov's Proposition, we can define the change of the probability measure.
Rewriting the stochastic integral in G(W̃) with Itô's formula applied to A(x) = ∫_0^x α(u) du would need A to be bounded, thus increasing the restrictions on α. In order to get rid of this strong condition, we introduce a third probability measure Z, which will finally be used to construct the candidates for the rejection sampling.
We will use candidate paths from a process which is identical to a Brownian motion except at the end point, i.e. at time T. Such processes are known as biased Brownian motions, Ŵ.
Proposition:
Let M = {M_t; 0 ≤ t ≤ T} and N = {N_t; 0 ≤ t ≤ T} be two stochastic processes on (Ω, F) with corresponding probability measures M and N. Assume that f_M and f_N are the densities of the end points M_T and N_T respectively, with identical support R. If it is true that (M | M_T = θ) ~ (N | N_T = θ) for all θ ∈ R, then
dM/dN (ω) = ( f_M / f_N )( W_T(ω) )
Proof:
The property (M | M_T = θ) ~ (N | N_T = θ) for all θ ∈ R can be expressed in a more rigorous way as
M[ A | σ(W_T) ] = N[ A | σ(W_T) ]    a.s.
Now,
E_N[ 1_A (f_M/f_N)(W_T) ] = E_N[ E_N[ 1_A (f_M/f_N)(W_T) | σ(W_T) ] ]
= E_N[ (f_M/f_N)(W_T) E_N[ 1_A | σ(W_T) ] ]
= E_N[ (f_M/f_N)(W_T) N[ A | σ(W_T) ] ]
= E_M[ N[ A | σ(W_T) ] ]
= E_M[ M[ A | σ(W_T) ] ]
= E_M[ E_M[ 1_A | σ(W_T) ] ]
= M[A]
Now, we use the above proposition to get the density function of the final position of the biased Brownian motion. We want to get rid of the exp( A(W̃_T) − A(W̃_0) ) part from G(W̃). Thus, changing the probability measure from Q to Z, we want the change of probability to be of the form
dQ/dZ = exp( −A(Ŵ_T) + A(W̃_0) ) = C_1 exp( −A(Ŵ_T) )
where C_1 is a constant of the form exp( A(W̃_0) ).
Now, from the previous proposition, we know that
dQ/dZ = [ (1/√(2πT)) exp( −(Ŵ_T − y_0)² / (2T) ) ] / h_T(Ŵ_T)
where h_T is the density of the end point of the biased Brownian motion. The numerator in the above expression comes from the fact that, under the probability measure Q, we have a Brownian motion, W̃_T ~ N(y_0, T). Thus, we get the final expression for the density function h_T:
h_T(u) = ( 1 / (C_1 √(2πT)) ) exp( A(u) − (u − y_0)² / (2T) )
Now, we see that
dP/dZ = (dP/dQ) · (dQ/dZ) = exp( −(1/2) ∫_0^T ( α²(Ŵ_t) + α′(Ŵ_t) ) dt )
Thus, to recapitulate the notation used so far: under the probability measure P, W is a Brownian motion; under the probability measure Q, W̃ is a Brownian motion; and under the probability measure Z, Ŵ is a biased Brownian motion, in the sense that the end point Ŵ_T is distributed according to the probability density function h_T.
Now, we can go ahead in our endeavour if we assume that the expression inside the integral, (1/2)( α²(.) + α′(.) ), is bounded at least from below. If it is so, i.e. ∃ k_1, k_2 ∈ R s.t. k_1 ≤ (1/2)( α²(.) + α′(.) ) ≤ k_2, then we can define a new function
φ(.) = (1/2)( α²(.) + α′(.) ) − k_1
Thus, we will get
dP/dZ = C_2 exp( −∫_0^T φ(Ŵ_t) dt )
where C_2 is a constant that comes from exp( −∫_0^T k_1 dt ).
Thus, for any test functional F, we get
E_P( F(Y) ) = C_2 · E_Z( F(Ŵ) exp( −∫_0^T φ(Ŵ_t) dt ) )
so that, if we can find a random event whose probability of success, given Ŵ, is exp( −∫_0^T φ(Ŵ_t) dt ), then accepting the proposed path Ŵ exactly on that event yields draws from the law of Y.
Now comes the most important part of this section: the algorithm to generate the exact sample path of the solution of the stochastic differential equation.
Algorithm
The main difficulty in our approach will be to find something whose probability of success is equal to exp( −∫_0^T φ(Ŵ_t) dt ). However, we are lucky to have the Point Poisson Process, which gives us a situation where we can get the success of an event with probability exp( −∫_0^T φ(Ŵ_t) dt ).
Proof: Now, with the intensity we choose, i.e. I = 1, we can get the above relations. Remember that the relations and the proof that follows are only valid for the chosen intensity and the chosen set D.
With I = 1, we get
P( x_1 > β ) = P( no. of points in [0,β]×[0,t_1] = 0, no. of points in [β,K]×[0,t_1] = 1 | no. of points in [0,K]×[0,t_1] = 1 )
= P( N_{[0,β]×[0,t_1]} = 0, N_{[β,K]×[0,t_1]} = 1 | N_{[0,K]×[0,t_1]} = 1 )
= P( N_{[β,K]×[0,t_1]} = 1 | N_{[0,K]×[0,t_1]} = 1 )
= [ Volume([β,K]×[0,t_1]) exp( −Volume([β,K]×[0,t_1]) ) exp( −Volume([0,β]×[0,t_1]) ) ] / [ Volume([0,K]×[0,t_1]) exp( −Volume([0,K]×[0,t_1]) ) ]
= (K − β) t_1 / (K t_1)
= 1 − β/K
Thus,
P( x_1 ≤ β ) = 1 − (1 − β/K) = β/K
We needed to generate something whose probability of success would have been exp( −∫_0^T φ(Ŵ_t) dt ). Now we can see that, with the help of the PPP, we can simulate a PPP with unit intensity on [0,T] × [0, sup_{0≤t≤T} φ(Ŵ_t)] such that, if N = number of points of the PPP below the graph of { (t, φ(Ŵ_t)); t ∈ [0,T] }, then
P(N = 0) = exp( −∫_0^T φ(Ŵ_t) dt )
Thus, we finally get the random event whose probability of success is exactly
what we were looking for.
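This identity is easy to check numerically. In the toy check below (our own, not from the report) we take φ ≡ c constant on [0, T], so that the number of PPP points below the graph is Poisson with mean cT and the empty-region probability is exp(−cT).

```python
import math, random

def prob_no_points_below(c, K, T, trials, rng):
    # unit-rate PPP on [0,T] x [0,K]: times are a Poisson stream of rate K,
    # marks are U(0,K); count the runs with no point under phi = c
    empty = 0
    for _ in range(trials):
        t, hit = 0.0, False
        while True:
            t += rng.expovariate(K)
            if t >= T:
                break
            if rng.uniform(0.0, K) < c:   # a mark falls below the graph of phi
                hit = True
                break
        if not hit:
            empty += 1
    return empty / trials

rng = random.Random(7)
est = prob_no_points_below(c=0.5, K=1.125, T=2.0, trials=20000, rng=rng)
# est should be close to exp(-0.5 * 2.0)
```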
Now, we are in a position to write the proper algorithm for the simulation
of the exact path for the diffusion process in question.
Exact Simulation
1. Generate a random variable ω(T) ~ h_T, and set ω(0) = y_0.
2. Generate a Point Poisson Process (PPP) on the domain [0,T] × [0,K], where K = sup_{0≤t≤T} φ(Ŵ_t), with unit intensity⁴. Thus t_{s+1} − t_s ~ E(K) and x_{s+1} ~ U(0, K).
3. We generate all the interim values of the process at all the times generated from the PPP, using the concept of the Brownian bridge.
4. We compare the values of φ(ω(t_i)) = (1/2)( α²(ω(t_i)) + α′(ω(t_i)) ) − k_1, where ω(t_i) is the value of the process at some generated time t_i (thanks to the PPP), with the space variable of the PPP, checking whether φ(ω(t_i)) < x_i. If this holds for all i such that t_i ≤ T, then we accept the path as our exact simulated path of the original process, as the two processes X and ω are equal in law.
5. If ∃ t_i (t_i ≤ T) s.t. φ(ω(t_i)) ≥ x_i, then we go back to step 1.
Numerical Simulation
We are restricted in our choice of the SDE, mainly because of the boundedness criterion on (1/2)( α²(.) + α′(.) ). In a later paper [3], the authors showed how the idea can be extended to the case when limsup_{x→+∞} (1/2)( α²(x) + α′(x) ) < ∞ or limsup_{x→−∞} (1/2)( α²(x) + α′(x) ) < ∞.
⁴Intensity being 1 means there is an equal probability of choosing any point in the domain; thus no point is more favourable than another.
dX_t = sin(X_t)dt + dW_t,    X_0 = x_0
We know that X_t and ω(t) are equal in law; thus, simulating the accepted path for ω gives us the exact path for X.
x_0 = X_0 = 10.5, T_final = 10.0, and sup_{0≤t≤T} φ(ω(t)) = sup [ (1/2)( cos(ω) + sin²(ω) ) + 1/2 ] = 9/8. Thus, K = 9/8.
We have the exact skeleton of the process defined by the SDE:
[Figure: an exact skeleton of the sample path ("samplepath.dat"), process values between 8 and 10.5 against time.]
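Such a skeleton can be produced with a sketch along the following lines. This is our own simplified rendering of steps 1-5 for the drift sin: the end point is drawn from h_T by rejection against the N(x_0, T) density, using the bound exp(A(u)) ≤ e² valid for this drift, and the bridge values are sampled sequentially.

```python
import math, random

def exact_skeleton(x0=10.5, T=10.0, seed=0):
    """Sketch of the Exact Simulation steps 1-5 for dX_t = sin(X_t) dt + dW_t."""
    rng = random.Random(seed)
    K = 9.0 / 8.0                                   # K = sup of phi (see text)
    A = lambda u: math.cos(x0) - math.cos(u)        # A(u) = integral of sin from x0 to u
    phi = lambda u: 0.5 * (math.sin(u) ** 2 + math.cos(u)) + 0.5
    while True:
        # step 1: end point omega(T) ~ h_T, by rejection against N(x0, T)
        while True:
            u = rng.gauss(x0, math.sqrt(T))
            if rng.random() <= math.exp(A(u) - 2.0):   # exp(A(u)) <= e^2 here
                wT = u
                break
        # step 2: unit-rate PPP on [0, T] x [0, K]
        times, t = [], 0.0
        while True:
            t += rng.expovariate(K)
            if t >= T:
                break
            times.append(t)
        marks = [rng.uniform(0.0, K) for _ in times]
        # steps 3-5: Brownian-bridge values at the PPP times, then accept/reject
        skeleton, accepted = [(0.0, x0)], True
        prev_t, prev_x = 0.0, x0
        for ti, xi in zip(times, marks):
            mean = prev_x + (ti - prev_t) / (T - prev_t) * (wT - prev_x)
            var = (ti - prev_t) * (T - ti) / (T - prev_t)
            w = rng.gauss(mean, math.sqrt(var))
            skeleton.append((ti, w))
            prev_t, prev_x = ti, w
            if phi(w) >= xi:                # a PPP point lies below the graph
                accepted = False
                break
        if accepted:
            skeleton.append((T, wT))
            return skeleton
```

With T = 10 the acceptance probability is small and many proposals are rejected; smaller horizons accept much faster, which is why longer paths are usually built by pasting together exact skeletons on short subintervals.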
We now look into the first-passage time for such a process, and how to simulate it in this case.
After implementing the exact simulation, we accept a path with values of the diffusion in question at some time points between the initial time 0 and the final time T. The time points in between are such that every T_{k+1} − T_k is exponentially distributed with parameter K (where K is the sup of the function φ). Now, we are interested in the first-passage time. We have T_{k+1}, T_k, X_{T_{k+1}}, X_{T_k}. Let us denote X_{T_{k+1}} = y, X_{T_k} = x, and T_k = t_1, T_{k+1} = t_2, just for convenience's sake. Now, we look into the probability of the process crossing the boundary between two such consecutive times. The bridge fluctuation at time t ∈ (t_1, t_2) satisfies
Y_{t,ω} ~ N( 0, (t_2 − t)(t − t_1)/(t_2 − t_1) )
and, knowing the end points, we can directly put the values into the equation and get the value of Y_{t,ω}.
Thus, finally we get
B̃^{x,y}_t = ( (t_2 − t)/(t_2 − t_1) ) x + ( (t − t_1)/(t_2 − t_1) ) y + B_{ (t_2 − t)(t − t_1)/(t_2 − t_1) }
          = ( (t_2 − t)/(t_2 − t_1) ) x + ( (t − t_1)/(t_2 − t_1) ) y + ( (t_2 − t)/√(t_2 − t_1) ) B_{ (t − t_1)/(t_2 − t) }
Thus, we have been able to relate our biased Brownian motion with an unbiased one.
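In code, the conditional mean and variance of the bridge pinned at (t_1, x) and (t_2, y) read as follows (a direct transcription of the formulas above):

```python
def bridge_mean_var(t, t1, x, t2, y):
    # Brownian bridge pinned at (t1, x) and (t2, y), evaluated at t in (t1, t2)
    w = (t - t1) / (t2 - t1)
    mean = (1.0 - w) * x + w * y
    var = (t2 - t) * (t - t1) / (t2 - t1)
    return mean, var
```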
The next job will be to look into the first passage of our biased Brownian motion with the help of results known to us for unbiased Brownian motions (even with drift).
We define the first-passage time for our original process as
τ^{x,y}_L = inf{ t ∈ [t_1, t_2]; B̃^{x,y}_t ≥ L }
and the first-passage time for a Brownian motion with drift as
τ_{μ,θ} = inf{ t ≥ 0; B_t ≥ θ + μt }
P( τ^{x,y}_L > s ) = P( B̃^{x,y}_u < L; t_1 ≤ u ≤ s )
= P( ( (t_2 − u)/(t_2 − t_1) ) x + ( (u − t_1)/(t_2 − t_1) ) y + ( (t_2 − u)/√(t_2 − t_1) ) B_{(u − t_1)/(t_2 − u)} < L ; t_1 ≤ u ≤ s )
= P( B_u < (L − x)/√(t_2 − t_1) + ( (L − y)/√(t_2 − t_1) ) u ; 0 ≤ u ≤ (s − t_1)/(t_2 − s) )
= P( τ_{μ,θ} > (s − t_1)/(t_2 − s) )
where θ = (L − x)/√(t_2 − t_1) and μ = (L − y)/√(t_2 − t_1). The change of time variable is inverted by
g(x) = (x t_2 + t_1)/(x + 1),    for 0 ≤ x
which maps the time of the auxiliary Brownian motion back to the original time scale.
We know the distribution of the random variable τ_{μ,θ} from the very famous Bachelier-Levy formula:
f_{τ_{μ,θ}}(u) = ( θ / ( u^{3/2} √(2π) ) ) exp( −(θ + μu)² / (2u) ),    for u > 0
Thus, in this way we can derive the analytical result for the first-passage time of the biased Brownian motion in each interval, and in turn get the first-passage time for the original diffusion process X in each such interval.
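As a sanity check on the Bachelier-Levy density (our own numerical check, not from the report): for μ, θ > 0, the total mass ∫_0^∞ f_{τ_{μ,θ}}(u) du equals exp(−2μθ), the probability that the Brownian motion ever reaches the receding line θ + μu.

```python
import math

def bl_density(u, mu, theta):
    # Bachelier-Levy first-passage density of B through the line theta + mu*u
    return theta / (u ** 1.5 * math.sqrt(2.0 * math.pi)) \
        * math.exp(-(theta + mu * u) ** 2 / (2.0 * u))

def total_mass(mu, theta, upper=200.0, n=200000):
    # midpoint rule on (0, upper]; the density vanishes at 0 and decays fast
    h = upper / n
    return sum(bl_density((i + 0.5) * h, mu, theta) for i in range(n)) * h
```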
A European vanilla option is a contract giving the option holder the right to buy or sell one unit of the underlying asset at a prescribed price, known as the exercise or strike price K, at a prescribed time, known as the expiration date T. Barrier options are similar to standard vanilla options except that the option is knocked out or in if the underlying asset price hits the barrier price, B, before the expiration date. Since 1967, barrier options have been traded in the over-the-counter (OTC) market, and nowadays they are the most popular class of exotic options. Therefore, it is quite important to develop accurate and efficient methods to evaluate barrier option prices in financial derivative markets.
Pricing a barrier option is done mainly with the expectation approach, which requires the knowledge of the risk-neutral probability density of the underlying asset price as it breaches the barrier from above or below. Barrier option prices are then obtained by integrating the discounted pay-off function for the barrier option over the calculated density.
Let the underlying asset price S_t follow the stochastic differential equation (SDE)
dS_t = μ(S_t, t)dt + σ(S_t, t)dW_t,    with    S_{t_0} = s_0
Now, by the above-described method, we will get the price of the barrier option as
V(s_0, T) = E^Q( f(S_T) 1_{τ>T} | S_{t_0} = s_0 )
| E( f(S_T) ) − (1/MCN) Σ_{j=1}^{MCN} f(S_T^j) | ≤ C · s.d. / √MCN
where MCN denotes the number of Monte-Carlo simulations, s.d. is the standard deviation of our sample data, and C is a constant depending on our confidence interval. Thus, we see that in order to lower the statistical error, we have to either increase the number of simulations or decrease the standard deviation, or in turn the variance.
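A tiny illustration of the 1/√MCN behaviour (our own toy example, estimating the mean of a U(0,1) variable):

```python
import random

def mc_mean(n, seed=0):
    # Monte-Carlo estimate of E(U), U ~ U(0,1), from n draws
    rng = random.Random(seed)
    return sum(rng.random() for _ in range(n)) / n
```

Increasing n by a factor of 100 shrinks the typical error by about a factor of 10.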
We look into the Euler scheme to model the price of a barrier option. The SDE under consideration is
dS_t = μ(S_t, t)dt + σ(S_t, t)dW_t,    with    S_{t_0} = s_0
where μ and σ are known as the drift function and the volatility function respectively. We want these functions to be Lipschitz on every compact support [0,T] for every T ≥ 0. This condition will give us the existence and uniqueness result for the above SDE. Once we have established these results, we look forward to simulating the asset price. It is done with the help of the Euler scheme:
S_{(k+1)Δ} = S_{kΔ} + μ(S_{kΔ}, t_k) Δ + σ(S_{kΔ}, t_k) √Δ G
where Δ = T/N with N = number of discretization points, k and k+1 denote the k-th and the (k+1)-th iterated solution of the above SDE, and G ~ N(0, 1). Also, we know that W_{(k+1)Δ} − W_{kΔ} ~ N(0, Δ) = √Δ N(0, 1).
What we are interested in is E( f(S_T) ), which is supposed to give us the price of the option concerned. Here f(S_T) 1_{τ>T} is the pay-off function associated to the diffusion, and T is the expiration or maturity time. For a typical European call option, the pay-off function is f(S_T) = max(S_T − K, 0), where K is the strike price⁵.
⁵The strike price is the price determined by the buyer (seller) of the option, such that if the price of the option goes beyond (under) the determined strike price, the buyer (seller) of the option can still buy (sell) at the strike price.
47
Now we employ Monte-Carlo method to approximate E(f (ST )). We simulate, lets say, MCN(the number of Monte-Carlo simulations) values of XTm using
the above mentioned Euler Scheme and approximate the E(f (ST )) by;
m
E(f (ST ))
1 X i
(S K)
m i=1 T
Now, in following the Euler scheme, we know that there is a weak error due to discretization which is of order 1. However, we can get rid of part of this error. The error mainly comes from the fact that the discrete path may not cross the boundary while the continuous path between two discrete values may still cross it. This can be checked if we introduce the concept of the crossing probability associated with the Brownian bridge. This crossing probability has been introduced before; we are merely recalling it. Thus, as before, we compare this probability of crossing p with a uniform random variable ν. If ν ≤ p, then we accept that the path between the two values of the price of the asset has indeed crossed the boundary, provided the two values are below the boundary. Now, we know that the expression for this crossing probability is not simple in general; we know it for some cases. Since we are dealing with Gauss-Markov processes, we will assume that the price of the underlying asset is a Gauss-Markov process whose SDE is given by
dS_t = μ(t) S_t dt + σ(t) dW_t,    with    S_{t_0} = s_0
with g(t) = exp( ∫_{t_0}^{t} μ(s) ds ) and h(t) = ∫_{t_0}^{t} ( σ(u)/g(u) )² du. We get these functions, g and h, from Doob's Integral Representation, proved previously.
We now just have to compare this probability of exit or crossing, after we simulate a new value of the price which is below the boundary, with a uniform random variable ν. The idea behind doing so is that we are looking for a random event whose probability of success is equal to the probability of exit or crossing. Now, we know that for a uniform random variable ν ~ U(0, 1), P(ν < p) = p for all 0 ≤ p ≤ 1. Thus, the probability of success of the event ν < p is the same as the probability of exit. This helps us to get rid of the discretization error arising from the application of the Euler scheme in the numerical approximation.
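The whole procedure can be sketched as follows for an up-and-out call. The barrier level B = 12.0 is our own choice for illustration (the report leaves B unspecified), discounting is omitted as in the text, and the bridge crossing probability is the Brownian-motion one, used here because the volatility is additive.

```python
import math, random

def barrier_price_euler(mu=0.01, sigma=1.0, s0=10.5, K=11.0, B=12.0,
                        T=10.0, n_steps=512, mcn=20000, seed=1):
    """Euler Monte-Carlo price of an up-and-out call with the bridge correction."""
    rng = random.Random(seed)
    dt = T / n_steps
    total = 0.0
    for _ in range(mcn):
        s, knocked = s0, False
        for _ in range(n_steps):
            s_next = s + mu * s * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
            if s >= B or s_next >= B:
                knocked = True
                break
            # probability that the continuous path crossed B between the steps
            p = math.exp(-2.0 * (B - s) * (B - s_next) / (sigma ** 2 * dt))
            if rng.random() < p:          # compare p with a uniform draw
                knocked = True
                break
            s = s_next
        if not knocked:
            total += max(s - K, 0.0)
    return total / mcn
```

Without the bridge test the estimator systematically overprices the option, since discrete paths miss some of the knock-outs.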
Numerical Simulation
For the numerical simulation purpose, we simulate the price according to the equation
dS_t = μ S_t dt + σ dW_t,    with    S_0 = s_0
where μ = 0.01, σ = 1.0, s_0 = 10.5, and the strike K = 11.0. The Euler scheme looks like
S_{(k+1)Δ} = S_{kΔ} + μ S_{kΔ} Δ + σ √Δ G_{k+1}
where t_k = kT/N for k ∈ {0, 1, ..., N − 1}, for a certain N.
[Figure: a simulated path of the asset price ("process_path.dat"), price between 8.5 and 12 against time on [0, 10].]
We approximate the price as
(1/M) Σ_{i=1}^{M} ( S_T^i − K )
taking only those values of S_T^i whose entire simulated path stays below the boundary value, in this case B, and for which the final value S_T^i is greater than K.
Table 3: Pricing the Option - Euler Scheme
Monte-Carlo Number   N    Price      Statistical Error
10000                8    0.000319   0.00028
100000               9    0.001      0.00012
1000000              10   0.0038     0.000082
10000                8    0.000298   0.00032
100000               9    0.000989   0.00023
1000000              10   0.00372    0.000076
10 Conclusion
We have looked into the first-passage problem mainly pertaining to Gauss-Markov processes, except for the exact simulation algorithm. It is true that we know what the probability density function looks like in the case of an Ornstein-Uhlenbeck process; however, the expression is quite cumbersome and long. What we were most interested in here were general Gauss-Markov processes. We may have simulated only Ornstein-Uhlenbeck-like processes, but our aim was to give general results.
We see that, when applicable, the exact simulation algorithm returns very accurate results for the first-passage time. It just takes the help of the first-passage time of a Brownian motion over an affine boundary, which is very simple to simulate since we know its density exactly. However, the algorithm's conditions restrict its usage a great deal.
11 Acknowledgment
I would like to thank the entire TOSCA team at INRIA Sophia Antipolis, especially Etienne Tanre for his constant support and guidance at every step of my report. Without his support, this report would not have been possible. I would also like to extend my gratitude to Denis Talay for welcoming me into the TOSCA group to write my Master's report, to Francois Delarue, our coordinator, and to James Inglis and Camilo Andres Garcia Trillos for the numerous talks and discussions which helped me in more ways than one. Finally, I would like to thank Bruno Rubino, of the University of L'Aquila, Italy, for giving me this wonderful opportunity to study in the MATHMODS course.
References
[1] T. Taillefumier, M. O. Magnasco (2010). A Fast Algorithm for the First-Passage Times of Gauss-Markov Processes with Hölder Continuous Boundaries.
[2] T. Taillefumier (2008). A Discrete Construction for Gauss-Markov Processes.
[3] A. Beskos, O. Papaspiliopoulos, G. O. Roberts (2006). Retrospective Exact Simulation of Diffusion Sample Paths with Applications.
[4] A. Beskos, G. O. Roberts (2004). Exact Simulation of Diffusions.