
Discrete Time Markov Chains

Kristina King

1 Random Variables and Stochastic Processes


Definition 1.1. A random variable is a variable whose value is governed by chance.

Definition 1.2. A discrete random variable is one for which the set of possible values is
either finite or countably infinite.

Definition 1.3. The random variables X and Y are said to be independent if, for all a, b,

P(X ≤ a, Y ≤ b) = P(X ≤ a) P(Y ≤ b)

Definition 1.4. Let {X(t) | t ∈ T} be a collection of random variables. Then {X(t) | t ∈ T}
is a stochastic process.

Definition 1.5. X(t) is known as the state of the process at time t.

Definition 1.6. The set T is called the index set of the process.

Definition 1.7. When T is a countable set, the stochastic process is a discrete-time process.

2 Discrete Time Markov Chains


It may seem reasonable to assume that each state in a stochastic process is an independent
random variable, but that assumption is often unrealistic.

Example 2.1. Suppose, for example, that X_n denotes the price of a stock at the end of n trading
days. It would be unreasonable to assume that X_n is independent of X_{n-1}, X_{n-2}, ..., X_0.

Some processes, however, depend only on the present state, not on all of the previous states.

Definition 2.1. A stochastic process which satisfies

P(X_{n+1} = j | X_n = i, X_{n-1} = i_{n-1}, ..., X_0 = i_0) = P(X_{n+1} = j | X_n = i)

is known as a Markov chain. That is, a Markov chain is a process for which the distribution
of a future state depends only on the present state and is independent of all previous states.

Definition 2.2. The one-step transition probability from state i to state j is given by

P(X_{n+1} = j | X_n = i) = P_{ij}


Remark. Note that probabilities are non-negative, so P_{ij} ≥ 0, and for any i, Σ_j P_{ij} = 1,
because when in state i the process must transition to some state j.
The one-step transition probabilities can be placed into a matrix where i denotes the row
and j denotes the column. This matrix is usually called P.
Example 2.2. Consider a stock market. Let X_n = 0 indicate that week n is a bull market,
X_n = 1 indicate a bear market, and X_n = 2 indicate a stagnant market. Suppose that a bull
week is followed by another bull week 90% of the time and by a bear week 7.5% of
the time. Suppose that a bear week is followed by a bear week 80% of the time and by a bull
week 15% of the time. Suppose a stagnant week is followed by a bear week 25% of the time
and by a bull week 25% of the time. This is a Markov chain, since the distribution of next
week's state depends only on the present week.

We can draw a state diagram for this Markov chain by drawing a node for each state and an
arrow, labeled with the corresponding probability, for each nonzero transition.

The one-step transition matrix is given by



        [0.90  0.075  0.025]
    P = [0.15  0.80   0.05 ]
        [0.25  0.25   0.50 ]
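
As a quick illustration, here is a minimal Python sketch (the state encoding 0 = bull, 1 = bear, 2 = stagnant is from the example; the function name simulate is just illustrative) that stores this matrix, checks that each row sums to 1, and simulates the chain by sampling each week's state from the row of P selected by the current state:

```python
import numpy as np

# States: 0 = bull, 1 = bear, 2 = stagnant (Example 2.2)
P = np.array([[0.90, 0.075, 0.025],
              [0.15, 0.80,  0.05 ],
              [0.25, 0.25,  0.50 ]])

# Each row must sum to 1: from state i the chain must go somewhere.
assert np.allclose(P.sum(axis=1), 1.0)

rng = np.random.default_rng(seed=0)

def simulate(P, start, n_steps):
    """Sample a path; the next state depends only on the current one."""
    path = [start]
    for _ in range(n_steps):
        # Sample the next state from the row indexed by the current state.
        path.append(int(rng.choice(len(P), p=P[path[-1]])))
    return path

print(simulate(P, start=2, n_steps=10))  # ten weeks starting from a stagnant week
```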

3 Chapman-Kolmogorov Equations
Definition 3.1. The n-step transition probability P_{ij}^n is the probability that a process
in state i will be in state j after n additional transitions. That is,

P_{ij}^n = P(X_{n+k} = j | X_k = i)

Theorem 3.1. The Chapman-Kolmogorov equations are given by, for all i, j and all n, m ≥ 0,

P_{ij}^{n+m} = Σ_{k=0}^{∞} P_{ik}^n P_{kj}^m

Remark. Recall from basic probability that P(A and B) = P(A|B) P(B).

Proof.

P_{ij}^{n+m} = P(X_{n+m} = j | X_0 = i)
             = Σ_{k=0}^{∞} P(X_{n+m} = j, X_n = k | X_0 = i)
             = Σ_{k=0}^{∞} P(X_{n+m} = j | X_n = k, X_0 = i) P(X_n = k | X_0 = i)
             = Σ_{k=0}^{∞} P_{kj}^m P_{ik}^n = Σ_{k=0}^{∞} P_{ik}^n P_{kj}^m

The second equality conditions on the state at time n, the third applies the remark above,
and the fourth uses the Markov property together with time homogeneity.
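
To make the sum concrete, here is a small Python sketch (reusing the stock-market matrix from Example 2.2) that builds the two-step probabilities element by element from the Chapman-Kolmogorov equations with n = m = 1. Note that the sum over the intermediate state k is exactly the formula for matrix multiplication, which is the content of the next theorem:

```python
import numpy as np

P = np.array([[0.90, 0.075, 0.025],
              [0.15, 0.80,  0.05 ],
              [0.25, 0.25,  0.50 ]])

# Chapman-Kolmogorov with n = m = 1: P_ij^2 = sum_k P_ik * P_kj,
# i.e. condition on the intermediate state k reached after one step.
P2 = np.array([[sum(P[i, k] * P[k, j] for k in range(3)) for j in range(3)]
               for i in range(3)])

# The element-wise sum above is exactly matrix multiplication.
assert np.allclose(P2, P @ P)
```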

Theorem 3.2. Let P^(n) be the matrix of n-step transition probabilities. Then P^(n) = P^n.
That is, the n-step transition matrix can be obtained by multiplying the one-step transition
matrix by itself n times.

Proof. Let P^(n) be the matrix of n-step transition probabilities. Then by the Chapman-
Kolmogorov equations, P^(n+m) = P^(n) P^(m), where standard matrix multiplication is used.
This is always possible because P will always be a square matrix. Note that P^(2) = P^(1+1) =
P^(1) P^(1) = PP = P^2. Assume it is true that P^(n-1) = P^{n-1}. Then by the Chapman-
Kolmogorov equations, P^(n) = P^(n-1) P^(1). It follows, then, that P^(n) = P^{n-1} P = P^n. By
induction, we have proven it to be true for all n ≥ 1.
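
In code, Theorem 3.2 says the n-step matrix is just a matrix power. A minimal check in Python (again borrowing the Example 2.2 matrix as a stand-in):

```python
import numpy as np

P = np.array([[0.90, 0.075, 0.025],
              [0.15, 0.80,  0.05 ],
              [0.25, 0.25,  0.50 ]])

# By Theorem 3.2, the 5-step transition matrix is the 5th matrix power of P.
P5 = np.linalg.matrix_power(P, 5)
assert np.allclose(P5, P @ P @ P @ P @ P)

# Each P^(n) is itself a transition matrix: its rows still sum to 1.
assert np.allclose(P5.sum(axis=1), 1.0)
```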
Example 3.1. Reconsider Example 2.2. If the market was stagnant three weeks ago, what
is the probability it will not be stagnant this week?

The three-step transition matrix is given by

                [0.8275  0.13375  0.03875]   [0.90  0.075  0.025]   [0.7745  0.17875  0.04675]
P^(3) = P^2 P = [0.2675  0.66375  0.06875] × [0.15  0.80   0.05 ] = [0.3575  0.56825  0.07425]
                [0.3875  0.34375  0.26875]   [0.25  0.25   0.50 ]   [0.4675  0.37125  0.16125]

The market being stagnant 3 weeks ago and not stagnant this week corresponds to the
entries P_{20}^3 and P_{21}^3. That is, the probability of not being stagnant this week is given by
P_{20}^3 + P_{21}^3 = 0.4675 + 0.37125 = 0.83875. There is an 83.875% chance that the market will
not be stagnant this week given that it was stagnant 3 weeks ago.
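
The arithmetic in this example can be reproduced in a few lines of Python; the sketch below (using numpy) computes P^3 and reads off the answer:

```python
import numpy as np

P = np.array([[0.90, 0.075, 0.025],
              [0.15, 0.80,  0.05 ],
              [0.25, 0.25,  0.50 ]])

P3 = np.linalg.matrix_power(P, 3)  # three-step transition matrix

# Row 2 = stagnant three weeks ago; columns 0 and 1 = bull or bear this week.
prob_not_stagnant = P3[2, 0] + P3[2, 1]
print(prob_not_stagnant)                            # ≈ 0.83875
assert np.isclose(prob_not_stagnant, 1 - P3[2, 2])  # equivalently 1 - P_22^3
```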

4 Classification of States
Definition 4.1. State j is accessible from state i if P_{ij}^n > 0 for some n ≥ 0.
Definition 4.2. Two states i and j that are accessible to each other are said to communicate.
Theorem 4.1. Communication satisfies the following three properties:

1. State i communicates with state i for all i ≥ 0.

2. If state i communicates with state j, then state j communicates with state i.

3. If state i communicates with state j, and state j communicates with state k, then state
i communicates with state k.

Proof. The first two properties follow immediately from the definition of communication. To
prove the third property, suppose i communicates with j, and j communicates with k. Thus,
there exist integers n and m such that P_{ij}^n > 0 and P_{jk}^m > 0. Now, by the Chapman-
Kolmogorov equations,

P_{ik}^{n+m} = Σ_{r=0}^{∞} P_{ir}^n P_{rk}^m ≥ P_{ij}^n P_{jk}^m > 0

This tells us that k is accessible from i, and similarly we can show that i is accessible from
k. So i and k communicate.
Definition 4.3. Two states that communicate are said to be in the same class.
Definition 4.4. A Markov chain is irreducible if there is only one class, that is, if all
states communicate with each other.
Example 4.1. Consider a Markov chain with four states and the transition probability matrix

    [1/2  1/2  0    0  ]
P = [1/2  1/2  0    0  ]
    [1/4  1/4  1/4  1/4]
    [0    0    0    1  ]
Even though we can access 0 or 1 from 2, we cannot access 2 from either of these states.
No state is accessible from 3 except itself. Thus, the classes for this Markov chain are {0, 1},
{2}, and {3}.
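
Classes can also be found mechanically: compute which states are accessible from which (a reachability problem on the directed graph with an edge i → j whenever P_ij > 0), then group mutually accessible states. Here is a minimal Python sketch of that idea for the four-state chain above:

```python
import numpy as np

P = np.array([[0.5,  0.5,  0.0,  0.0 ],
              [0.5,  0.5,  0.0,  0.0 ],
              [0.25, 0.25, 0.25, 0.25],
              [0.0,  0.0,  0.0,  1.0 ]])
n = len(P)

# reach[i, j]: is j accessible from i? Start with 0-step (identity) and 1-step edges.
reach = np.eye(n, dtype=bool) | (P > 0)
for k in range(n):  # boolean Floyd-Warshall transitive closure
    reach |= reach[:, [k]] & reach[[k], :]

# States i and j communicate when each is accessible from the other.
communicate = reach & reach.T
classes = {frozenset(map(int, np.flatnonzero(communicate[i]))) for i in range(n)}
print(sorted(map(sorted, classes)))  # [[0, 1], [2], [3]]
```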

