
Continuous-time continuous-state Markov process

Suppose $\{X(t),\, t \in T\}$ is a Markov process with the state space $V \subseteq \mathbb{R}$. Here $T$ is continuous and a state transition can take place at any instant of time $t$. Suppose the process is at state $x_0$ at time $t = t_0$. The state transition probability density function at $t$ is given by $f_{X(t)/X(t_0)}(x/x_0)$. For notational simplicity, let us denote this pdf by $f(x, t/x_0, t_0)$. Further assume that the process is homogeneous.

Consider the random variable $X(t_1)$ at a later time $t_1$, where $t_0 < t < t_1$. Given $X(t_0) = x_0$, the joint pdf of $X(t_1)$ and $X(t)$ is $f(x_1, t_1;\, x, t / x_0, t_0)$. Then the marginal density $f(x_1, t_1 / x_0, t_0)$ can be obtained as

$f(x_1, t_1 / x_0, t_0) = \int_{-\infty}^{\infty} f(x, t;\, x_1, t_1 / x_0, t_0)\, dx$

Using the chain rule and subsequently the Markov property, we get

$f(x_1, t_1 / x_0, t_0) = \int_{-\infty}^{\infty} f(x, t / x_0, t_0)\, f(x_1, t_1 / x, t, x_0, t_0)\, dx$

$= \int_{-\infty}^{\infty} f(x, t / x_0, t_0)\, f(x_1, t_1 / x, t)\, dx$

The result is the Chapman-Kolmogorov equation for a continuous-time continuous-state Markov process and is given by

$f(x_1, t_1 / x_0, t_0) = \int_{-\infty}^{\infty} f(x, t / x_0, t_0)\, f(x_1, t_1 / x, t)\, dx$
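
As a quick numerical check, the Chapman-Kolmogorov equation can be verified for a Gaussian (Wiener-type) transition density. The Python sketch below assumes a zero-drift Gaussian transition pdf and illustrative values of $\sigma$, the times and the states, and compares the two sides of the equation by numerical integration.

```python
import numpy as np

sigma = 1.3            # diffusion parameter (illustrative)
x0, t0 = 0.5, 0.0      # initial state and time
t, t1 = 1.0, 2.5       # intermediate and final times (t0 < t < t1)
x1 = 0.8               # final state at which both sides are evaluated

def trans_pdf(x, t_now, x_prev, t_prev):
    """Gaussian transition pdf f(x, t_now / x_prev, t_prev) of a zero-drift Wiener-type process."""
    var = sigma**2 * (t_now - t_prev)
    return np.exp(-(x - x_prev)**2 / (2 * var)) / np.sqrt(2 * np.pi * var)

# right-hand side: integrate over the intermediate state x at time t
x_grid = np.linspace(-20.0, 20.0, 20001)
dx = x_grid[1] - x_grid[0]
integrand = trans_pdf(x_grid, t, x0, t0) * trans_pdf(x1, t1, x_grid, t)
rhs = np.sum(integrand) * dx

# left-hand side: direct transition pdf from (x0, t0) to (x1, t1)
lhs = trans_pdf(x1, t1, x0, t0)
print(lhs, rhs)        # the two numbers should agree to several decimal places
```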

We also need to know how the process evolves with time. Similar to the Kolmogorov forward and backward equations for the evolution of a CTMC, we can obtain analogous equations for a continuous-time continuous-state Markov process. In particular, the corresponding forward Kolmogorov equation is known as the Fokker-Planck (FP) equation. We omit the derivation of the FP equation here.

The FP equation is given by

$\frac{\partial f(x, t/x_0, t_0)}{\partial t} = -\frac{\partial}{\partial x}\big[\mu(x,t)\, f(x, t/x_0, t_0)\big] + \frac{1}{2}\frac{\partial^2}{\partial x^2}\big[\sigma^2(x,t)\, f(x, t/x_0, t_0)\big]$

where

$\mu(x,t) = \lim_{\Delta t \to 0} \frac{E\big((X(t+\Delta t) - X(t)) / X(t) = x\big)}{\Delta t}$  and

$\sigma^2(x,t) = \lim_{\Delta t \to 0} \frac{E\big((X(t+\Delta t) - X(t))^2 / X(t) = x\big)}{\Delta t}$

are the time-varying and space-varying parameters of the process. Note that they are the mean and the variance of the infinitesimal transition of the process and are assumed to be finite. It is further assumed that

$\lim_{\Delta t \to 0} \frac{E\big((X(t+\Delta t) - X(t))^n / X(t) = x\big)}{\Delta t} = 0$ for $n \geq 3$.
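
The infinitesimal moments $\mu(x,t)$ and $\sigma^2(x,t)$ can be estimated from simulated sample paths. The Python sketch below assumes a Brownian motion with constant drift and diffusion (illustrative values), so that the conditional moments do not depend on $x$ or $t$.

```python
import numpy as np

rng = np.random.default_rng(0)
mu_true, sigma_true = 0.7, 1.5          # assumed constant drift and diffusion parameters
dt, n_steps, n_paths = 1e-3, 1000, 2000

# increments dX = mu*dt + sigma*sqrt(dt)*Z with Z ~ N(0, 1)
dX = mu_true * dt + sigma_true * np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))

mu_hat = dX.mean() / dt                   # estimate of lim E[(X(t+dt) - X(t)) / X(t) = x] / dt
sigma2_hat = (dX**2).mean() / dt          # estimate of lim E[(X(t+dt) - X(t))^2 / X(t) = x] / dt
third_hat = (np.abs(dX)**3).mean() / dt   # third absolute moment / dt, shrinks as dt decreases
print(mu_hat, sigma2_hat, third_hat)      # roughly 0.7, 2.25 (= 1.5^2) and a small value
```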
Note that the FP equation is a linear partial differential equation (PDE) with time-varying and space-varying coefficients. Its solution is generally difficult.
When $\mu(x,t)$ and $\sigma^2(x,t)$ are constants, the FP equation simplifies to the diffusion equation given by

$\frac{\partial f(x, t/x_0, t_0)}{\partial t} = -\mu\, \frac{\partial f(x, t/x_0, t_0)}{\partial x} + \frac{1}{2}\sigma^2\, \frac{\partial^2 f(x, t/x_0, t_0)}{\partial x^2}$

with $\mu$ and $\sigma^2$ respectively known as the drift and the diffusion coefficient. For the Wiener process, the transition pdf follows the above PDE.
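
Since a closed-form solution is rarely available, the constant-coefficient FP (diffusion) equation can also be integrated numerically. Below is a minimal explicit finite-difference sketch; the grid sizes and the narrow initial Gaussian standing in for a delta function at $x_0 = 0$ are illustrative assumptions.

```python
import numpy as np

mu, sigma = 0.5, 1.0
dx, dt = 0.05, 0.001                 # chosen so that 0.5*sigma^2*dt/dx^2 = 0.2 < 0.5 (stable)
x = np.arange(-10.0, 10.0 + dx, dx)

eps = 0.05                           # small variance: narrow Gaussian approximating delta(x)
f = np.exp(-x**2 / (2 * eps)) / np.sqrt(2 * np.pi * eps)

t_end = 1.0
for _ in range(int(t_end / dt)):
    dfdx = (np.roll(f, -1) - np.roll(f, 1)) / (2 * dx)          # central first derivative
    d2fdx2 = (np.roll(f, -1) - 2 * f + np.roll(f, 1)) / dx**2   # central second derivative
    f = f + dt * (-mu * dfdx + 0.5 * sigma**2 * d2fdx2)
    f[0] = f[-1] = 0.0                                          # pdf is negligible at the far boundaries

# exact solution: Gaussian with mean mu*t and variance sigma^2*t + eps
var = sigma**2 * t_end + eps
exact = np.exp(-(x - mu * t_end)**2 / (2 * var)) / np.sqrt(2 * np.pi * var)
print(np.max(np.abs(f - exact)))     # small discretization error
```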

The FP equation has diverse applications, for example in the dispersion of suspended particles, the dynamics of electrons in a semiconductor, aeronautics, image processing and stochastic finance.

Solution of the diffusion equation for $\mu = 0$

The diffusion equation in this case is given by

$\frac{\partial f(x,t)}{\partial t} = \frac{1}{2}\sigma^2\, \frac{\partial^2 f(x,t)}{\partial x^2}$

Considering $t_0 = 0$ and $x_0 = 0$, the solution to the diffusion equation gives

$f(x, t / x_0 = 0, t_0 = 0) = \frac{1}{\sqrt{2\pi\sigma^2 t}}\, e^{-\frac{x^2}{2\sigma^2 t}}$

Thus the transition pdf is Gaussian with mean $0$ and variance $\sigma^2 t$, which grows linearly with time. By partial differentiation of $f(x, t / x_0 = 0, t_0 = 0)$ with respect to $t$ and $x$, it is easy to show that the above Gaussian pdf satisfies the FP equation.
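
The partial differentiation can also be carried out symbolically. The following sympy sketch checks that the Gaussian pdf above satisfies $\frac{\partial f}{\partial t} = \frac{1}{2}\sigma^2 \frac{\partial^2 f}{\partial x^2}$.

```python
import sympy as sp

x = sp.Symbol('x', real=True)
t, sigma = sp.symbols('t sigma', positive=True)

f = sp.exp(-x**2 / (2 * sigma**2 * t)) / sp.sqrt(2 * sp.pi * sigma**2 * t)

lhs = sp.diff(f, t)                                     # df/dt
rhs = sp.Rational(1, 2) * sigma**2 * sp.diff(f, x, 2)   # (1/2) sigma^2 d2f/dx2
print(sp.simplify(lhs - rhs))                           # prints 0
```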

We can solve the above PDE using the Fourier transform method.

Consider the diffusion equation

$\frac{\partial f(x,t)}{\partial t} = \frac{1}{2}\sigma^2\, \frac{\partial^2 f(x,t)}{\partial x^2}$

with initial condition $X(0) = 0$ with probability 1, so that $f(x, 0) = \delta(x)$.
Let

$Y(\omega, t) = \mathrm{FT}\big(f(x,t)\big) = \int_{-\infty}^{\infty} f(x,t)\, e^{-j\omega x}\, dx$

Then,

$\mathrm{FT}\Big(\frac{\partial f(x,t)}{\partial t}\Big) = \int_{-\infty}^{\infty} \frac{\partial f(x,t)}{\partial t}\, e^{-j\omega x}\, dx = \frac{\partial}{\partial t}\int_{-\infty}^{\infty} f(x,t)\, e^{-j\omega x}\, dx = \frac{\partial Y(\omega,t)}{\partial t}$
Similarly,

$\mathrm{FT}\Big(\frac{\partial^2}{\partial x^2} f(x,t)\Big) = \int_{-\infty}^{\infty} \frac{\partial^2}{\partial x^2} f(x,t)\, e^{-j\omega x}\, dx = \Big[\frac{\partial f(x,t)}{\partial x}\, e^{-j\omega x}\Big]_{-\infty}^{\infty} + j\omega \int_{-\infty}^{\infty} \frac{\partial f(x,t)}{\partial x}\, e^{-j\omega x}\, dx$

Note that $\lim_{x \to \infty} F(x,t) = 1$ and $\lim_{x \to -\infty} F(x,t) = 0$, where $F(x,t)$ is the CDF. As both limits are constants, $f(x,t) \to 0$ and $\frac{\partial f}{\partial x} \to 0$ as $x \to \infty$ and $x \to -\infty$. Therefore,

$\mathrm{FT}\Big(\frac{\partial^2}{\partial x^2} f(x,t)\Big) = j\omega \int_{-\infty}^{\infty} \frac{\partial f(x,t)}{\partial x}\, e^{-j\omega x}\, dx$

$= j\omega\Big[f(x,t)\, e^{-j\omega x}\Big]_{-\infty}^{\infty} - \omega^2 \int_{-\infty}^{\infty} f(x,t)\, e^{-j\omega x}\, dx$

$= -\omega^2 \int_{-\infty}^{\infty} f(x,t)\, e^{-j\omega x}\, dx$

$= -\omega^2\, Y(\omega, t)$

Taking the Fourier transform of both sides of the initial condition $f(x, 0) = \delta(x)$, we get $Y(\omega, 0) = 1$.

The differential equation in the Fourier transform domain is given by

$\frac{\partial Y(\omega, t)}{\partial t} = -\frac{1}{2}\sigma^2\omega^2\, Y(\omega, t)$

with the initial condition $Y(\omega, 0) = 1$.

The above equation can be solved for $t$ as

$Y(\omega, t) = e^{-\frac{1}{2}\sigma^2\omega^2 t}$

Taking the inverse Fourier transform, we get

$f(x, t) = \frac{1}{\sqrt{2\pi\sigma^2 t}}\, e^{-\frac{x^2}{2\sigma^2 t}}$

Note that the pdf of $X(t)$ is symmetric about $x = 0$ and the variance $\sigma^2 t$ increases linearly with time.

If $\mu(x, t) = \mu \neq 0$, then

$f(x, t) = \frac{1}{\sqrt{2\pi\sigma^2 t}}\, e^{-\frac{(x - \mu t)^2}{2\sigma^2 t}}$
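
The Fourier-domain solution can be checked numerically: evaluating the inverse Fourier integral of $Y(\omega, t) = e^{-\frac{1}{2}\sigma^2\omega^2 t}$ on a grid should reproduce the Gaussian pdf obtained above. The values of $\sigma$, $t$ and the grids in the following sketch are illustrative choices.

```python
import numpy as np

sigma, t = 1.2, 0.8                    # illustrative values
w = np.linspace(-20.0, 20.0, 40001)    # frequency grid (Y is negligible beyond |w| ~ 5 here)
dw = w[1] - w[0]
Y = np.exp(-0.5 * sigma**2 * w**2 * t)

x = np.linspace(-4.0, 4.0, 9)
# inverse Fourier integral f(x,t) = (1/2pi) * integral of Y(w,t) e^{jwx} dw, by a Riemann sum
f_num = np.array([np.sum(Y * np.exp(1j * w * xi)).real for xi in x]) * dw / (2 * np.pi)

f_exact = np.exp(-x**2 / (2 * sigma**2 * t)) / np.sqrt(2 * np.pi * sigma**2 * t)
print(np.max(np.abs(f_num - f_exact)))   # should be close to 0
```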

Wiener process or Brownian motion process


One example of a continuous-time continuous-state Markov process is the Wiener process or the Brownian motion process.

Definition: The random process $\{X(t),\, t \geq 0\}$ is called a Wiener process or the Brownian motion process if it satisfies the following conditions:

(1) $X(0) = 0$ with probability 1.

(2) $X(t)$ is an independent increment process.

(3) For each $t_0 \geq 0$ and $t > 0$, $X(t + t_0) - X(t_0)$ has the normal distribution with mean 0 and variance $\sigma^2 t$:

$f_{X(t+t_0) - X(t_0)}(x) = \frac{1}{\sqrt{2\pi\sigma^2 t}}\, e^{-\frac{x^2}{2\sigma^2 t}}$
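
Property (3) is easy to check by simulation. The following sketch builds Wiener paths as cumulative sums of independent Gaussian increments (an assumed discretization with illustrative step size) and verifies that $X(t + t_0) - X(t_0)$ has mean approximately $0$ and variance approximately $\sigma^2 t$.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma, t0, t = 2.0, 0.5, 1.5           # illustrative parameter and times
dt, n_paths = 1e-2, 20000
n0, n1 = int(t0 / dt), int((t0 + t) / dt)

# each path is a cumulative sum of independent N(0, sigma^2 dt) increments
steps = sigma * np.sqrt(dt) * rng.standard_normal((n_paths, n1))
X = np.cumsum(steps, axis=1)           # X[:, k] approximates X((k+1)*dt)

incr = X[:, n1 - 1] - X[:, n0 - 1]     # X(t0 + t) - X(t0) across the paths
print(incr.mean(), incr.var())         # approximately 0 and sigma^2 * t = 6.0
```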
Remarks
• We have $f_{X(t)}(x) = \frac{1}{\sqrt{2\pi\sigma^2 t}}\, e^{-\frac{x^2}{2\sigma^2 t}}$

• The conditional CDF is

$F(x, t / x_0, t_0) = P(X(t) \leq x / X(t_0) = x_0)$
$= P(X(t) - X(t_0) \leq x - x_0 / X(t_0) = x_0)$
$= P(X(t) - X(t_0) \leq x - x_0)$

$\Rightarrow f(x, t / x_0, t_0) = \frac{1}{\sqrt{2\pi\sigma^2 (t - t_0)}}\, e^{-\frac{(x - x_0)^2}{2\sigma^2 (t - t_0)}}$

• The Wiener process was used to model Brownian motion: microscopic particles suspended in a fluid are subject to continuous molecular impacts, resulting in the zigzag motion of the particles, named Brownian motion after the British botanist Robert Brown (1773-1858).

• The Wiener process is characterized by the parameter $\sigma$. When $\sigma = 1$, the process is called the standard Wiener process.
A realization of the Wiener process is shown in the figure below.

[Figure: a sample realization X(t) of the Wiener process versus t]
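
A realization like the one in the figure can be generated with the same cumulative-sum construction; the step size and horizon in the following sketch are illustrative.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
dt, T = 1e-3, 5.0                       # step size and horizon (illustrative)
n = int(T / dt)
t = np.arange(1, n + 1) * dt
X = np.cumsum(np.sqrt(dt) * rng.standard_normal(n))   # sigma = 1: standard Wiener process

plt.plot(t, X)
plt.xlabel('t')
plt.ylabel('X(t)')
plt.title('A realization of the standard Wiener process')
plt.show()
```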

The Wiener process is Markov because of the independent-increment property. Its second-order statistics are also easy to derive, as shown next.

Autocorrelation and autocovariance function of the Wiener process

$R_X(t_1, t_2) = E\, X(t_1) X(t_2)$
$= E\, X(t_1)\big[X(t_2) - X(t_1) + X(t_1)\big]$  (assuming $t_2 > t_1$)
$= E\, X(t_1)\, E\big[X(t_2) - X(t_1)\big] + E\, X^2(t_1)$
$= E\, X^2(t_1)$
$= \sigma^2 t_1$

Similarly, if $t_1 > t_2$, $R_X(t_1, t_2) = \sigma^2 t_2$.

$\therefore R_X(t_1, t_2) = \sigma^2 \min(t_1, t_2)$

Thus the Wiener process is not stationary. Since the process is zero-mean,

$C_X(t_1, t_2) = \sigma^2 \min(t_1, t_2)$
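
The relation $R_X(t_1, t_2) = \sigma^2 \min(t_1, t_2)$ can also be checked by Monte Carlo simulation, as in the following sketch (the path count, time grid and $\sigma$ are illustrative choices).

```python
import numpy as np

rng = np.random.default_rng(3)
sigma, dt, T, n_paths = 1.5, 1e-2, 3.0, 10000    # illustrative choices
n = int(T / dt)
X = np.cumsum(sigma * np.sqrt(dt) * rng.standard_normal((n_paths, n)), axis=1)

t1, t2 = 1.0, 2.5
i1, i2 = int(t1 / dt) - 1, int(t2 / dt) - 1      # grid indices of t1 and t2
R_hat = np.mean(X[:, i1] * X[:, i2])             # sample average of X(t1) X(t2)
print(R_hat, sigma**2 * min(t1, t2))             # both approximately 2.25
```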

Continuity and differentiability of the Wiener process:


For a Wiener process $X(t)$,

$R_X(t_1, t_2) = \sigma^2 \min(t_1, t_2)$
$\Rightarrow R_X(t, t) = \sigma^2 \min(t, t) = \sigma^2 t = \lim_{t_1 \to t,\, t_2 \to t} R_X(t_1, t_2)$

Thus the autocorrelation function of the Wiener process is continuous everywhere, implying that the process is m.s. continuous everywhere.

$R_X(t_1, t_2) = \sigma^2 \min(t_1, t_2) = \begin{cases} \sigma^2 t_2 & \text{if } t_2 \leq t_1 \\ \sigma^2 t_1 & \text{otherwise} \end{cases}$

$\Rightarrow \frac{\partial R_X(t_1, t_2)}{\partial t_2} = \begin{cases} \sigma^2 & \text{if } t_2 < t_1 \\ 0 & \text{if } t_2 > t_1 \end{cases}$

$\therefore \frac{\partial R_X(t_1, t_2)}{\partial t_2}$ does not exist at $t_2 = t_1$

$\Rightarrow \frac{\partial^2 R_X(t_1, t_2)}{\partial t_1 \partial t_2}$ does not exist on the diagonal $t_1 = t_2$ (in particular at $t_1 = 0,\ t_2 = 0$).

Thus a Wiener process is m.s. differentiable nowhere.
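
The lack of m.s. differentiability can also be seen numerically: the difference quotient $\big(X(t+h) - X(t)\big)/h$ has variance $\sigma^2 / h$, which diverges as $h \to 0$. The following sketch compares the empirical and theoretical variances for decreasing $h$ (sample sizes are illustrative).

```python
import numpy as np

rng = np.random.default_rng(4)
sigma, n_samples = 1.0, 200000                   # illustrative choices
for h in (1e-1, 1e-2, 1e-3, 1e-4):
    # X(t+h) - X(t) ~ N(0, sigma^2 h), so sample the increment directly
    quotient = sigma * np.sqrt(h) * rng.standard_normal(n_samples) / h
    print(h, quotient.var(), sigma**2 / h)       # empirical vs theoretical variance sigma^2/h
```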

Remark: The Wiener process is not only m.s. continuous but is also continuous with probability 1. Moreover, each realization of this process is nowhere differentiable. Such a process is difficult to visualize but has many applications.
