
Chapter 11

Markov Chains
ENCS6161 - Probability and Stochastic Processes
Concordia University
Markov Processes
A random process is a Markov process if the future of the process given the present is independent of the past, i.e., if t1 < t2 < · · · < tk < tk+1, then

    P[X(tk+1) = xk+1 | X(tk) = xk, · · · , X(t1) = x1]
      = P[X(tk+1) = xk+1 | X(tk) = xk]

if X(t) is discrete-valued, or

    fX(tk+1)(xk+1 | X(tk) = xk, · · · , X(t1) = x1)
      = fX(tk+1)(xk+1 | X(tk) = xk)

if X(t) is continuous-valued.

ENCS6161 – p.1/12
Markov Processes
Example: Sn = X1 + X2 + · · · + Xn, where the Xi are independent.
⇒ Sn+1 = Sn + Xn+1, so

    P[Sn+1 = sn+1 | Sn = sn, · · · , S1 = s1]
      = P[Sn+1 = sn+1 | Sn = sn]

So Sn is a Markov process.
Example: The Poisson process is a continuous-time Markov process:

    P[N(tk+1) = j | N(tk) = i, · · · , N(t1) = i1]
      = P[j − i events in (tk, tk+1]]
      = P[N(tk+1) = j | N(tk) = i]

An integer-valued Markov process is called a Markov chain.

Discrete-time Markov Chain
A discrete-time Markov chain Xn starts at n = 0 with the initial pmf
Pi(0) = P[X0 = i], i = 0, 1, 2, · · ·
Then from the Markov property,
P [Xn = in , · · · , X0 = i0 ]
= P [Xn = in |Xn−1 = in−1 ] · · · P [X1 = i1 |X0 = i0 ]P [X0 = i0 ]
where P [Xk+1 = ik+1 |Xk = ik ] is called the one-step
state transition probability.
If P[Xk+1 = j | Xk = i] = pij for all k, then Xn is said to have
homogeneous transition probabilities.
P [Xn = in , · · · , X0 = i0 ] = Pi0 (0)pi0 ,i1 · · · pin−1 ,in
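As a quick sketch, the factorization above can be evaluated directly for a hypothetical two-state chain (the transition matrix and initial pmf below are made-up values, not from the slides):

```python
import numpy as np

# Hypothetical two-state chain: transition matrix P and initial pmf p0.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
p0 = np.array([0.5, 0.5])

def path_probability(path, p0, P):
    """P[X0 = i0, ..., Xn = in] = Pi0(0) * product of one-step transition probs."""
    prob = p0[path[0]]
    for i, j in zip(path, path[1:]):
        prob *= P[i, j]
    return prob

# Probability of observing the path 0 -> 0 -> 1:
print(path_probability([0, 0, 1], p0, P))  # 0.5 * 0.9 * 0.1 = 0.045
```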

Discrete-time Markov Chain
The process is completely specified by the initial pmf
Pi0(0) and the transition matrix

    P = [ p00 p01 p02 · · ·
          p10 p11 p12 · · ·
           ..   ..   ..     ]

where each row sums to one: Σj pij = 1.

Discrete-time Markov Chain
Example: Two-state Markov chain for speech activity
(on-off source). Two states:
    0: silence (off)
    1: speech activity (on)
State transition diagram: 0 → 1 with probability α, 1 → 0 with
probability β, and self-loops with probabilities 1 − α and 1 − β.

    P = [ 1 − α    α
            β    1 − β ]
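A minimal simulation sketch of this on-off source, assuming made-up values α = 0.3, β = 0.1; the long-run fraction of time spent "on" anticipates the stationary result α/(α + β) derived later:

```python
import random

# Hypothetical on-off source parameters (not from the slides).
alpha, beta = 0.3, 0.1  # P[0 -> 1] = alpha, P[1 -> 0] = beta

random.seed(0)
state, n_steps, time_on = 0, 200_000, 0
for _ in range(n_steps):
    if state == 0:
        state = 1 if random.random() < alpha else 0
    else:
        state = 0 if random.random() < beta else 1
    time_on += state

# Long-run fraction of "on" time approaches alpha/(alpha+beta) = 0.75.
print(time_on / n_steps)
```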

Discrete-time Markov Chain
The n-step transition probabilities:

    pij(n) ≜ P[Xk+n = j | Xk = i],  n ≥ 0

Let P(n) be the n-step transition probability matrix, i.e.

    P(n) = [ p00(n) p01(n) p02(n) · · ·
             p10(n) p11(n) p12(n) · · ·
               ..     ..     ..        ]

Then P(n) = P^n, where P is the one-step transition
probability matrix.
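A small numerical check of P(n) = P^n for the on-off chain, assuming made-up values α = 0.3, β = 0.1:

```python
import numpy as np

# On-off chain with hypothetical alpha = 0.3, beta = 0.1.
alpha, beta = 0.3, 0.1
P = np.array([[1 - alpha, alpha],
              [beta, 1 - beta]])

# Build P(n) by repeated one-step multiplication and compare to the matrix power P^n.
n = 5
Pn = np.eye(2)
for _ in range(n):
    Pn = Pn @ P

assert np.allclose(Pn, np.linalg.matrix_power(P, n))
print(Pn)  # each row is still a pmf: rows sum to one
```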

The State Probabilities
Let p(n) = {Pj(n)} be the state probabilities at time n. Then

    Pj(n) = Σi P[Xn = j | Xn−1 = i] P[Xn−1 = i]
          = Σi pij Pi(n − 1)

i.e. p(n) = p(n − 1)P. By recursion:

    p(n) = p(n − 1)P = p(n − 2)P^2 = · · · = p(0)P^n
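The recursion p(n) = p(n − 1)P can be iterated directly; a sketch for the on-off chain with made-up α = 0.3, β = 0.1, started deterministically in state 0:

```python
import numpy as np

# Hypothetical on-off chain parameters.
alpha, beta = 0.3, 0.1
P = np.array([[1 - alpha, alpha],
              [beta, 1 - beta]])

p = np.array([1.0, 0.0])  # p(0): start in state 0 with probability 1
for n in range(50):
    p = p @ P  # p(n) = p(n-1) P
print(p)  # converges toward [beta, alpha] / (alpha + beta) = [0.25, 0.75]
```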

Steady State Probabilities
In many cases, as n → ∞, the Markov chain reaches steady
state, in which the state probabilities no longer change
with n, i.e.,

    p(n) → π, as n → ∞

π is called the stationary state pmf.
If the steady state exists, then for large n we have

    p(n) = p(n − 1) = π  ⇒  π = πP

(note: Σi πi = 1)

Steady State Probabilities
Example: Find the steady state pmf of the on-off
source. From πP = π:

    [π0, π1] = [π0, π1] [ 1 − α    α
                            β    1 − β ]

together with π0 + π1 = 1:

    π0 = β/(α + β),  π1 = α/(α + β)
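The same result can be obtained numerically by solving π = πP with the normalization Σi πi = 1 as a linear system; a sketch with made-up α = 0.3, β = 0.1:

```python
import numpy as np

# Hypothetical on-off chain parameters.
alpha, beta = 0.3, 0.1
P = np.array([[1 - alpha, alpha],
              [beta, 1 - beta]])

# pi = pi P  <=>  (P^T - I) pi^T = 0; replace one (redundant) equation
# with the normalization pi0 + pi1 = 1.
A = np.vstack([(P.T - np.eye(2))[0], np.ones(2)])
b = np.array([0.0, 1.0])
pi = np.linalg.solve(A, b)
print(pi)  # [beta, alpha] / (alpha + beta) = [0.25, 0.75]
```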

Continuous-Time Markov Chains
If P[X(s + t) = j | X(s) = i] = pij(t), t ≥ 0 for all s, then
the continuous-time Markov chain X(t) has
homogeneous transition probabilities.
The transition rate of X(t) entering state j from state i is
defined as

    rij ≜ p′ij(t)|t=0 = lim δ→0 pij(δ)/δ          for i ≠ j
                      = lim δ→0 (pij(δ) − 1)/δ    for i = j

Note:

    pij(0) = 0 for i ≠ j,  pij(0) = 1 for i = j

Continuous-Time Markov Chains
From

    Pj(t + δ) = Σi Pi(t) pij(δ)
    Pj(t) = Σi Pi(t) pij(0)

we can show that:

    (Pj(t + δ) − Pj(t))/δ = Σi Pi(t) (pij(δ) − pij(0))/δ

Letting δ → 0, we have:

    P′j(t) = Σi Pi(t) rij

These are called the Chapman-Kolmogorov equations.
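In vector form the equations read p′(t) = p(t)R, where R = [rij] is the rate matrix. A sketch integrating them with forward Euler for a hypothetical two-state chain with made-up rates λ = 2 (0 → 1) and μ = 1 (1 → 0):

```python
import numpy as np

# Hypothetical two-state continuous-time chain: rate lam (0->1), mu (1->0).
lam, mu = 2.0, 1.0
R = np.array([[-lam, lam],
              [mu, -mu]])  # rate matrix; each row sums to zero

# Forward-Euler integration of p'(t) = p(t) R from p(0) = [1, 0].
p = np.array([1.0, 0.0])
dt, T = 1e-3, 10.0
for _ in range(round(T / dt)):
    p = p + dt * (p @ R)
print(p)  # approaches [mu, lam] / (lam + mu) = [1/3, 2/3]
```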

Steady State Probabilities
In the steady state, Pj(t) doesn't change with t, so

    P′j(t) = 0

and hence from the Chapman-Kolmogorov equations:

    Σi Pi rij = 0  for all j

These are called the Global Balance Equations.
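A minimal sketch of solving the global balance equations numerically, assuming a made-up 3-state rate matrix R (rows summing to zero); one redundant balance equation is replaced by the normalization Σi Pi = 1:

```python
import numpy as np

# Hypothetical 3-state rate matrix (each row sums to zero).
R = np.array([[-3.0, 2.0, 1.0],
              [1.0, -4.0, 3.0],
              [2.0, 2.0, -4.0]])

# Global balance: p R = 0, together with sum(p) = 1.
A = np.vstack([R.T[:-1], np.ones(3)])
b = np.array([0.0, 0.0, 1.0])
p = np.linalg.solve(A, b)

assert np.allclose(p @ R, 0)  # all balance equations hold
print(p)
```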

