
Lecture 12.5
Additional Issues Concerning Discrete-Time Markov Chains

Topics
Review of DTMCs
Classification of states
Economic analysis
First passage times
Absorbing states
Discrete-Time Markov Chain

A stochastic process {X_n}, where n ∈ N = {0, 1, 2, ...}, is called a discrete-time Markov chain if

  Pr{X_{n+1} = j | X_0 = k_0, ..., X_{n-1} = k_{n-1}, X_n = i} = Pr{X_{n+1} = j | X_n = i}   (transition probabilities)

for every i, j, k_0, ..., k_{n-1} and for every n.

The future behavior of the system depends only on the current state i and not on any of the previous states.
Stationary Transition Probabilities

  Pr{X_{n+1} = j | X_n = i} = Pr{X_1 = j | X_0 = i}   for all n

(The transition probabilities don't change over time.) We will only consider stationary Markov chains.

The one-step transition matrix for a Markov chain with states S = {0, 1, 2} is

        | p_00  p_01  p_02 |
  P  =  | p_10  p_11  p_12 |
        | p_20  p_21  p_22 |

where p_ij = Pr{X_1 = j | X_0 = i}.
Classification of States

Accessible: It is possible to go from state i to state j (a path exists in the network from i to j).

[State-transition network: states 0 through 4, with winning arcs a_0, ..., a_3 and losing arcs d_1, ..., d_4]

Two states communicate if both are accessible from each other. A system is irreducible if all states communicate.

State i is recurrent if the system will return to it, after leaving it, at some time in the future.

If a state is not recurrent, it is transient.
Classification of States (continued)

A state is periodic if it can only return to itself after a fixed number of transitions greater than 1 (or a multiple of a fixed number).

A state that is not periodic is aperiodic.

[Two state-transition networks: (a) three states, each state visited every 3 iterations; (b) four states, each state visited in multiples of 3 iterations]
Classification of States (continued)

An absorbing state is one that locks the system in once it is entered.

[State-transition network: states 0 through 4, with winning arcs a_1, a_2, a_3 and losing arcs d_1, d_2, d_3; states 0 and 4 have no outgoing arcs]

This diagram might represent the wealth of a gambler who begins with $2 and makes a series of wagers for $1 each. Let a_i be the event of winning in state i and d_i the event of losing in state i.

There are two absorbing states: 0 and 4.
Classification of States (continued)

Class: a set of states that communicate with each other.

A class is either all recurrent or all transient, and may be either all periodic or all aperiodic.

States in a transient class communicate only with each other, so no arcs enter any of the corresponding nodes in the network diagram from outside the class. Arcs may leave, though, passing from a node in the class to one outside.

[Network diagram with states 0 through 6 illustrating recurrent and transient classes]
Illustration of Concepts

Example 1

Transition structure (X marks a positive one-step transition probability):

  State |  0   1   2   3
    0   |  0   X   0   X
    1   |  X   0   0   0
    2   |  X   0   0   0
    3   |  0   0   X   X

Every pair of states communicates, forming a single recurrent class, and the states are not periodic. Thus the stochastic process is aperiodic and irreducible.
Illustration of Concepts

Example 2

[Transition-structure table for states 0 through 4; X marks a positive one-step transition probability]

States 0 and 1 communicate and form a recurrent class.
States 3 and 4 form separate transient classes.
State 2 is an absorbing state and forms a recurrent class.
Illustration of Concepts

Example 3

Transition structure (X marks a positive one-step transition probability):

  State |  0   1   2   3
    0   |  0   0   0   X
    1   |  X   0   0   0
    2   |  X   0   0   0
    3   |  0   X   X   0

Every state communicates with every other state, so we have an irreducible stochastic process. Periodic? Yes: every path returning to a state has a length that is a multiple of 3. So the Markov chain is irreducible and periodic.
Example: Classification of States

        |  0.4  0.6   0    0    0  |
        |  0.5  0.5   0    0    0  |
  P  =  |   0    0   0.3  0.7   0  |
        |   0    0   0.5  0.4  0.1 |
        |   0    0    0   0.8  0.2 |

(rows and columns indexed by states 1 through 5)

[State-transition network for states 1 through 5, with arc probabilities as in P]
A state j is accessible from state i if p_ij^(n) > 0 for some n ≥ 0.

In the example, state 2 is accessible from state 1, and state 3 is accessible from state 5, but state 3 is not accessible from state 2.

States i and j communicate if i is accessible from j and j is accessible from i.

States 1 & 2 communicate; also states 3, 4 & 5 communicate. States 2 & 4 do not communicate.

States 1 & 2 form one communicating class. States 3, 4 & 5 form a 2nd communicating class.

If all states in a Markov chain communicate (i.e., all states are members of the same communicating class), then the chain is irreducible.

The current example is not an irreducible Markov chain. Neither is the Gambler's Ruin example, which has 3 classes: {0}, {1, 2, 3} and {4}.
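The communicating-class structure can be found mechanically from the transition matrix. Below is a minimal sketch (assuming NumPy is available) that computes reachability with a Warshall-style transitive closure and then groups mutually accessible states; applied to the 5-state example above (states renumbered 0-4), it reports the two classes {0, 1} and {2, 3, 4}.

```python
import numpy as np

def communicating_classes(P):
    """Group the states of a chain into communicating classes."""
    m = P.shape[0]
    # reach[i, j]: j is accessible from i in some n >= 0 steps
    reach = (P > 0) | np.eye(m, dtype=bool)
    for k in range(m):                     # Warshall transitive closure
        reach = reach | (reach[:, [k]] & reach[[k], :])
    communicate = reach & reach.T          # mutual accessibility
    classes, seen = [], set()
    for i in range(m):
        if i not in seen:
            cls = {j for j in range(m) if communicate[i, j]}
            classes.append(cls)
            seen |= cls
    return classes

P = np.array([[0.4, 0.6, 0,   0,   0  ],
              [0.5, 0.5, 0,   0,   0  ],
              [0,   0,   0.3, 0.7, 0  ],
              [0,   0,   0.5, 0.4, 0.1],
              [0,   0,   0,   0.8, 0.2]])

print(communicating_classes(P))   # [{0, 1}, {2, 3, 4}]
```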
First Passage Times

Let f_ii = probability that the process will (eventually) return to state i, given that it starts in state i.

If f_ii = 1, then state i is called recurrent.

If f_ii < 1, then state i is called transient.

If p_ii = 1, then state i is called an absorbing state.

The example above has no absorbing states. States 0 & 4 are absorbing in the Gambler's Ruin problem.
The period of a state i is the smallest k > 1 such that all paths leading back to i have a length that is a multiple of k; i.e., p_ii^(n) = 0 unless n = k, 2k, 3k, ...

If a process can be in state i at time n or at time n + 1, having started at state i, then state i is aperiodic.

Each of the states in the current example is aperiodic.

If all states in a Markov chain are recurrent and aperiodic, and the chain is irreducible, then it is ergodic.

Example of Periodicity: Gambler's Ruin

            0    1    2    3    4
    0  |    1    0    0    0    0
    1  |  1-p    0    p    0    0
    2  |    0  1-p    0    p    0
    3  |    0    0  1-p    0    p
    4  |    0    0    0    0    1

States 1, 2 and 3 each have period 2.
Existence of Steady-State Probabilities

A Markov chain is ergodic if it is aperiodic and allows the attainment of any future state from any initial state after one or more transitions. If these conditions hold, then

  lim_{n→∞} p_ij^(n) = π_j = steady-state probability for state j.

For example,

        | 0.8   0   0.2 |
  P  =  | 0.4  0.3  0.3 |
        |  0   0.9  0.1 |

[State-transition network for states 1, 2, 3]

Conclusion: the chain is ergodic.
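For an ergodic chain, every row of P^(n) converges to the same steady-state vector, so the limit can be checked numerically. A minimal sketch (assuming NumPy) that raises the example matrix to a high power:

```python
import numpy as np

P = np.array([[0.8, 0.0, 0.2],
              [0.4, 0.3, 0.3],
              [0.0, 0.9, 0.1]])

# For an ergodic chain, every row of P^n approaches the
# steady-state vector pi as n grows.
Pn = np.linalg.matrix_power(P, 100)
print(Pn)   # all three rows are (approximately) identical
```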
Economic Analysis

Two kinds of economic effects:
(i) those incurred when the system is in a specified state, and
(ii) those incurred when the system makes a transition from one state to another.

The cost (profit) of being in a particular state is represented by the m-dimensional column vector

  C^S = (c_1^S, c_2^S, ..., c_m^S)^T,

where each component is the cost associated with state i.

The cost of a transition is embodied in the m × m matrix

  C^R = (c_ij^R),

where each component specifies the cost of going from state i to state j in a single step.
Expected Cost for Markov Chain

Expected cost of being in state i:

  c_i = c_i^S + Σ_{j=1}^{m} c_ij^R p_ij

Let C = (c_1, ..., c_m)^T,

e_i = (0, ..., 0, 1, 0, ..., 0), the ith row of the m × m identity matrix (1 in position i), and

f_n = a random variable representing the economic return associated with the stochastic process at time n.
Property 3: Let {X_n : n = 0, 1, ...} be a Markov chain with finite state space S, state-transition matrix P, and expected state cost (profit) vector C. Assuming that the process starts in state i, the expected cost (profit) at the nth step is given by

  E[f_n(X_n) | X_0 = i] = e_i P^(n) C.
Additional Cost Results

What if the initial state is not known?

Property 5: Let {X_n : n = 0, 1, ...} be a Markov chain with finite state space S, state-transition matrix P, initial probability vector q(0), and expected state cost (profit) vector C. The expected economic return at the nth step is given by

  E[f_n(X_n) | q(0)] = q(0) P^(n) C.
Property 6: Let {X_n : n = 0, 1, ...} be a Markov chain with finite state space S, state-transition matrix P, steady-state vector π, and expected state cost (profit) vector C. Then the long-run average return per unit time is given by

  Σ_{i∈S} π_i c_i = πC.
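A minimal sketch of Properties 3, 5, and 6 (assuming NumPy; the cost numbers and initial vector are illustrative assumptions, not from the slides), using the 3-state example matrix above:

```python
import numpy as np

P = np.array([[0.8, 0.0, 0.2],
              [0.4, 0.3, 0.3],
              [0.0, 0.9, 0.1]])
C = np.array([100.0, 50.0, 200.0])   # hypothetical state costs

n = 10
Pn = np.linalg.matrix_power(P, n)

# Property 3: expected cost at step n, starting in state i = 0.
e0 = np.eye(3)[0]
print(e0 @ Pn @ C)

# Property 5: expected cost at step n for an initial distribution q(0).
q0 = np.array([0.5, 0.5, 0.0])       # hypothetical initial vector
print(q0 @ Pn @ C)

# Property 6: long-run average cost per period, pi C, using any row
# of a high power of P as the steady-state vector.
pi = np.linalg.matrix_power(P, 200)[0]
print(pi @ C)
```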
Insurance Company Example

An insurance company charges customers annual premiums based on their accident history in the following fashion:

- No accident in last 2 years: $250 annual premium
- Accidents in each of last 2 years: $800 annual premium
- Accident in only 1 of last 2 years: $400 annual premium

Historical statistics:
1. If a customer had an accident last year, then they have a 10% chance of having one this year.
2. If they had no accident last year, then they have a 3% chance of having one this year.
Problem: Find the steady-state probabilities and the long-run average annual premium paid by the customer.

Solution approach: Construct a Markov chain with four states: (N, N), (N, Y), (Y, N), (Y, Y), where these indicate (accident last year, accident this year).
               (N, N)  (N, Y)  (Y, N)  (Y, Y)
      (N, N) |  0.97    0.03    0       0
      (N, Y) |  0       0       0.90    0.10
  P = (Y, N) |  0.97    0.03    0       0
      (Y, Y) |  0       0       0.90    0.10
State-Transition Network for Insurance Company

[Network with nodes (N, N), (N, Y), (Y, N), (Y, Y) and arc probabilities 0.97, 0.03, 0.90, 0.10 as in P]
This is an ergodic Markov chain.
All states communicate (irreducible)
Each state is recurrent (you will return, eventually)
Each state is aperiodic
Solving the Steady-State Equations

  π_(N,N) = 0.97 π_(N,N) + 0.97 π_(Y,N)
  π_(N,Y) = 0.03 π_(N,N) + 0.03 π_(Y,N)
  π_(Y,N) = 0.90 π_(N,Y) + 0.90 π_(Y,Y)
  π_(N,N) + π_(N,Y) + π_(Y,N) + π_(Y,Y) = 1

Solution:

  π_(N,N) = 0.939, π_(N,Y) = 0.029, π_(Y,N) = 0.029, π_(Y,Y) = 0.003

and the long-run average annual premium is

  0.939(250) + 0.029(400) + 0.029(400) + 0.003(800) = $260.50.
In general, the steady-state probabilities satisfy

  π_j = Σ_{i=1}^{m} π_i p_ij,   j = 1, ..., m

  Σ_{j=1}^{m} π_j = 1,   π_j ≥ 0 for all j.
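A minimal sketch (assuming NumPy) that solves these balance equations for the insurance chain by replacing one of the (redundant) balance equations with the normalization condition:

```python
import numpy as np

P = np.array([[0.97, 0.03, 0.00, 0.00],
              [0.00, 0.00, 0.90, 0.10],
              [0.97, 0.03, 0.00, 0.00],
              [0.00, 0.00, 0.90, 0.10]])

m = P.shape[0]
# pi P = pi  <=>  (P^T - I) pi^T = 0; replace the last equation
# with sum(pi) = 1 to pin down the unique solution.
A = P.T - np.eye(m)
A[-1, :] = 1.0
b = np.zeros(m)
b[-1] = 1.0
pi = np.linalg.solve(A, b)
print(pi)                          # [0.9387, 0.0290, 0.0290, 0.0032]
print(pi @ [250, 400, 400, 800])   # long-run premium, ~260.48
```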
Markov Chain Add-in: Transition Matrix

(Add-in output: regular matrix, rows sum to 1; 4 recurrent states, 1 recurrent state class, 0 transient states.)

  Index  Name     (N, N)  (N, Y)  (Y, N)  (Y, Y)  Sum  Status
  0      (N, N)    0.97    0.03    0       0      1    Class-1
  1      (N, Y)    0       0       0.9     0.1    1    Class-1
  2      (Y, N)    0.97    0.03    0       0      1    Class-1
  3      (Y, Y)    0       0       0.9     0.1    1    Class-1
  Column sums:     1.94    0.06    1.8     0.2
Economic Data and Solution

(Add-in output: measure = cost, discount rate = 0; the expected transition cost matrix is all zeros.)

  State     State cost
  (N, N)       250
  (N, Y)       400
  (Y, N)       400
  (Y, Y)       800

Steady-state analysis (the vector shows the long-run probabilities of each state):

                (N, N)    (N, Y)    (Y, N)    (Y, Y)    Expected cost per period
  Steady state  0.93871   0.029032  0.029032  0.003226  260.483871
Transient Analysis for Insurance Company

Average cost: 260.1622   Discounted cost: 5203.243

Initial distribution: (0, 0, 0, 1), i.e., start in state (Y, Y).

  Step  (N, N)    (N, Y)    (Y, N)    (Y, Y)    Step cost  Cum. cost  Present worth
  0     0         0         0         1         0          0
  1     0         0         0.9       0.1       440        440        440
  2     0.873     0.027     0.09      0.01      273.05     713.05     713.05
  3     0.93411   0.02889   0.0333    0.0037    261.3635   974.4135   974.4135
  4     0.938388  0.029022  0.029331  0.003259  260.5454   1234.959   1234.959
  5     0.938687  0.029032  0.029053  0.003228  260.4882   1495.447   1495.447
  6     0.938708  0.029032  0.029034  0.003226  260.4842   1755.931   1755.931
  7     0.93871   0.029032  0.029032  0.003226  260.4839   2016.415   2016.415
  8     0.93871   0.029032  0.029032  0.003226  260.4839   2276.899   2276.899
  9     0.93871   0.029032  0.029032  0.003226  260.4839   2537.383   2537.383
  10    0.93871   0.029032  0.029032  0.003226  260.4839   2797.867   2797.867
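These rows follow directly from Property 5: the state distribution after n steps is q(n) = q(0) P^n, and the expected step cost is q(n)C. A minimal sketch (assuming NumPy) that reproduces the table:

```python
import numpy as np

P = np.array([[0.97, 0.03, 0.00, 0.00],
              [0.00, 0.00, 0.90, 0.10],
              [0.97, 0.03, 0.00, 0.00],
              [0.00, 0.00, 0.90, 0.10]])
C = np.array([250.0, 400.0, 400.0, 800.0])

q = np.array([0.0, 0.0, 0.0, 1.0])   # start in state (Y, Y)
cum = 0.0
for n in range(1, 11):
    q = q @ P                 # distribution after n steps
    cost = q @ C              # expected cost at step n
    cum += cost
    print(n, np.round(q, 6), round(cost, 4), round(cum, 4))
```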
First Passage Times

Let μ_ij = expected number of steps to transition from state i to state j.

If the probability that we will eventually visit state j, given that we start in i, is less than 1, then we will have μ_ij = +∞.

For example, in the Gambler's Ruin problem, μ_20 = +∞ because there is a positive probability that we will be absorbed in state 4 given that we start in state 2 (and hence never visit state 0).

Computations for All States Recurrent

If the probability of eventually visiting state j given that we start in i is 1, then the expected number of steps until we first visit j is given by

  μ_ij = 1 + Σ_{r≠j} p_ir μ_rj,   i = 0, 1, ..., m − 1.

(It will always take at least one step; we go from i to r in the first step with probability p_ir, and it then takes μ_rj steps from r to j.)

For fixed j, we have a linear system in m equations and m unknowns μ_ij, i = 0, 1, ..., m − 1.
First-Passage Analysis for Insurance Company

Suppose that we start in state (N, N) and want to find the expected number of years until we have accidents in two consecutive years (Y, Y). This transition will occur with probability 1, eventually.
For convenience, number the states

  0 = (N, N),  1 = (N, Y),  2 = (Y, N),  3 = (Y, Y).

Then

  μ_03 = 1 + p_00 μ_03 + p_01 μ_13 + p_02 μ_23
  μ_13 = 1 + p_10 μ_03 + p_11 μ_13 + p_12 μ_23
  μ_23 = 1 + p_20 μ_03 + p_21 μ_13 + p_22 μ_23

Using

               (N, N)  (N, Y)  (Y, N)  (Y, Y)
      (N, N) |  0.97    0.03    0       0
      (N, Y) |  0       0       0.90    0.10
  P = (Y, N) |  0.97    0.03    0       0
      (Y, Y) |  0       0       0.90    0.10

these become

  μ_03 = 1 + 0.97 μ_03 + 0.03 μ_13
  μ_13 = 1 + 0.9 μ_23
  μ_23 = 1 + 0.97 μ_03 + 0.03 μ_13

Solution: μ_03 = 343.3, μ_13 = 310, μ_23 = 343.3.

So, on average it takes 343.3 years to transition from (N, N) to (Y, Y).

Note that μ_03 = μ_23. Why? States 0 and 2 have identical rows of P. Note also that μ_13 < μ_03.
First-Passage Computations

Expected number of steps until the first passage into state 3, from each state:

  From:  0 (N, N)   1 (N, Y)   2 (Y, N)   3 (Y, Y)
         343.3333   310        343.3333   310
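A minimal sketch (assuming NumPy) that solves the first-passage system μ_ij = 1 + Σ_{r≠j} p_ir μ_rj for the target state j = 3, for all starting states at once:

```python
import numpy as np

P = np.array([[0.97, 0.03, 0.00, 0.00],
              [0.00, 0.00, 0.90, 0.10],
              [0.97, 0.03, 0.00, 0.00],
              [0.00, 0.00, 0.90, 0.10]])

j = 3                         # target state (Y, Y)
m = P.shape[0]
# mu = 1 + Q mu, where Q is P with column j zeroed out,
# so (I - Q) mu = 1.
Q = P.copy()
Q[:, j] = 0.0
mu = np.linalg.solve(np.eye(m) - Q, np.ones(m))
print(mu)   # [343.33, 310.0, 343.33, 310.0]
```

(The entry for i = j is the expected return time to state 3, which is why the add-in table also reports a value for (Y, Y).)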
First Passage Probabilities: Game of Craps

Probability of winning on the first roll = Pr{7 or 11} = 0.167 + 0.056 ≈ 0.222
Probability of losing on the first roll = Pr{2, 3, or 12} = 0.028 + 0.056 + 0.028 ≈ 0.111
            Start  Win    Lose   P4     P5     P6     P8     P9     P10
    Start |  0     0.222  0.111  0.083  0.111  0.139  0.139  0.111  0.083
    Win   |  0     1      0      0      0      0      0      0      0
    Lose  |  0     0      1      0      0      0      0      0      0
    P4    |  0     0.083  0.167  0.75   0      0      0      0      0
P = P5    |  0     0.111  0.167  0      0.722  0      0      0      0
    P6    |  0     0.139  0.167  0      0      0.694  0      0      0
    P8    |  0     0.139  0.167  0      0      0      0.694  0      0
    P9    |  0     0.111  0.167  0      0      0      0      0.722  0
    P10   |  0     0.083  0.167  0      0      0      0      0      0.75
First Passage Probabilities for Craps

  Rolls  Start-Win  Start-Lose  Sum    Cumulative
  1      0.222      0.111       0.333  0.333
  2      0.077      0.111       0.188  0.522
  3      0.055      0.080       0.135  0.656
  4      0.039      0.057       0.097  0.753
  5      0.028      0.041       0.069  0.822
  6      0.020      0.030       0.050  0.872
  7      0.014      0.021       0.036  0.908
  8      0.010      0.015       0.026  0.933
  9      0.007      0.011       0.018  0.952
  10     0.005      0.008       0.013  0.965
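Since Win and Lose are absorbing, the first-passage probability at exactly n rolls is the increase in the absorption probability from one matrix power to the next. A minimal sketch (assuming NumPy) that reproduces the table:

```python
import numpy as np

# States: 0 Start, 1 Win, 2 Lose, 3-8 points 4, 5, 6, 8, 9, 10
P = np.array([
    [0, 0.222, 0.111, 0.083, 0.111, 0.139, 0.139, 0.111, 0.083],
    [0, 1,     0,     0,     0,     0,     0,     0,     0    ],
    [0, 0,     1,     0,     0,     0,     0,     0,     0    ],
    [0, 0.083, 0.167, 0.75,  0,     0,     0,     0,     0    ],
    [0, 0.111, 0.167, 0,     0.722, 0,     0,     0,     0    ],
    [0, 0.139, 0.167, 0,     0,     0.694, 0,     0,     0    ],
    [0, 0.139, 0.167, 0,     0,     0,     0.694, 0,     0    ],
    [0, 0.111, 0.167, 0,     0,     0,     0,     0.722, 0    ],
    [0, 0.083, 0.167, 0,     0,     0,     0,     0,     0.75 ],
])

q = np.zeros(9)
q[0] = 1.0                          # start in Start
prev_win, prev_lose = 0.0, 0.0
for n in range(1, 11):
    q = q @ P
    # first-passage probabilities at exactly n rolls
    win, lose = q[1] - prev_win, q[2] - prev_lose
    print(n, round(win, 3), round(lose, 3), round(q[1] + q[2], 3))
    prev_win, prev_lose = q[1], q[2]
```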

Absorbing States

An absorbing state is a state j with p_jj = 1.

Given that we start in state i, we can calculate the probability of being absorbed in state j.

We essentially performed this calculation for the Gambler's Ruin problem by finding P^(n) = (p_ij^(n)) for large n. But we can use a more efficient analysis, like that used for calculating first passage times.

Let 0, 1, ..., k be transient states and k + 1, ..., m − 1 be absorbing states.

Let q_ij = probability of being absorbed in state j given that we start in transient state i. Then for each j we have the relationship

  q_ij = p_ij + Σ_{r=0}^{k} p_ir q_rj,   i = 0, 1, ..., k

(the first term covers going directly to j; the sum covers going to a transient state r and then from r to j).

For fixed j (an absorbing state), we have k + 1 linear equations in the k + 1 unknowns q_ij, i = 0, 1, ..., k.
Absorbing States: Gambler's Ruin

Suppose that we start with $2 and want to calculate the probability of going broke, i.e., of being absorbed in state 0.

We know p_00 = 1 and p_40 = 0, thus

  q_20 = p_20 + p_21 q_10 + p_22 q_20 + p_23 q_30 (+ p_24 q_40)
  q_10 = p_10 + p_11 q_10 + p_12 q_20 + p_13 q_30 + 0
  q_30 = p_30 + p_31 q_10 + p_32 q_20 + p_33 q_30 + 0

where

            0    1    2    3    4
    0  |    1    0    0    0    0
    1  |  1-p    0    p    0    0
P = 2  |    0  1-p    0    p    0
    3  |    0    0  1-p    0    p
    4  |    0    0    0    0    1
Solution to Gambler's Ruin Example

Now we have three equations in three unknowns. Using p = 0.75 (the probability of winning a single bet), we have

  q_20 = 0 + 0.25 q_10 + 0.75 q_30
  q_10 = 0.25 + 0.75 q_20
  q_30 = 0 + 0.25 q_20

Solving yields q_10 = 0.325, q_20 = 0.1, q_30 = 0.025.

(This is consistent with the values found earlier.)
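A minimal sketch (assuming NumPy) that solves the absorption equations for the transient states {1, 2, 3} by restricting P to the transient block:

```python
import numpy as np

p = 0.75
P = np.array([[1,   0,   0,   0,   0],
              [1-p, 0,   p,   0,   0],
              [0,   1-p, 0,   p,   0],
              [0,   0,   1-p, 0,   p],
              [0,   0,   0,   0,   1]])

transient, j = [1, 2, 3], 0          # absorb-in-state-0 probabilities
Q = P[np.ix_(transient, transient)]  # transitions among transient states
b = P[transient, j]                  # one-step absorption in state j
# q = b + Q q  =>  (I - Q) q = b
q = np.linalg.solve(np.eye(len(transient)) - Q, b)
print(q)   # [0.325, 0.1, 0.025] for starting states 1, 2, 3
```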
What You Should Know About the Mathematics of DTMCs

How to classify states.
What an ergodic process is.
How to perform economic analysis.
How to compute first passage times.
How to compute absorption probabilities.
