
Markov Chain Analysis

Presented By:- Aditi Misra
Markov Chains
A Markov Chain is a stochastic process.
It has the following properties:-
1. Discrete state space,
2. Markovian property,
3. One-step stationary transition probabilities.
Stochastic Process
A stochastic process is defined by a family (set) of random variables {Xt}, where t is a parameter (index) from a given set T.
Discrete State Space
A state space (S) is the sample space for all possible values of Xt.

When a state space contains discrete values, it is called a discrete state space.

If the discrete state space has a finite number of states, the chain is called a finite-state Markov chain.
Markovian Property
The probability of a state on the
next trial depends only on the
state in the current trial and not
on the states prior to the present.
Transition probability
§ A transition probability is defined as the conditional probability that the process will be in a specific future state given its most recent state.
§ These probabilities are also called one-step transition probabilities, since they describe the system between t and t+1.
§ They may be presented in tabular (matrix) form.
Let's consider an example:-
Brand Switching Example:-
Suppose that there are two brands
of detergents D1 and D2, selling in
the market in the beginning of a
year, when a third one, D3, is
introduced into the market. The
market is then observed
continuously month-after-month
for change in the brand loyalty. Let
us say that the rate of brand
switching has settled over time as
follows:
Example continues……
Brand switching rates (brand this month → brand next month):

                 D1     D2     D3
        D1      60%    30%    10%
        D2      20%    50%    30%
        D3      15%     5%    80%

Now, given these conditions about brand switching, assuming no further entry or exit, and given further that the market share for the brands on a certain date, say March 1, is 30%, 45% and 25% for brands D1, D2 and D3 respectively, Markov analysis can be used to predict future market shares.
Markov Analysis: Input &
Output
Markov analysis provides for the
following predictions:
The probability of the system
being in a given state at a given
future time.
The steady state probabilities.
Input
 Transition probability:-

The transition probabilities for this problem form the matrix P = [pij]:

         0.60  0.30  0.10
   P =   0.20  0.50  0.30
         0.15  0.05  0.80

It must satisfy the following 2 properties:-
1. 0 ≤ pij ≤ 1 for all i, j
2. Σj pij = 1 for each i (every row sums to 1)
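A minimal Python sketch (plain lists, no libraries) of these two properties, using the brand-switching matrix from the example:

```python
# Transition matrix from the detergent example; rows/columns
# are ordered D1, D2, D3.
P = [
    [0.60, 0.30, 0.10],  # from D1
    [0.20, 0.50, 0.30],  # from D2
    [0.15, 0.05, 0.80],  # from D3
]

# Property 1: every entry is a probability between 0 and 1.
assert all(0.0 <= p <= 1.0 for row in P for p in row)

# Property 2: each row sums to 1 (the process must move to some state).
for row in P:
    assert abs(sum(row) - 1.0) < 1e-9

print("P is a valid one-step transition matrix")
```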
The initial conditions:-
The initial conditions describe the situation the system is presently in.
Here, the initial condition is:
the market share is 30%, 45% and 25% for brands D1, D2 and D3 respectively.
As a row matrix:
[0.30 0.45 0.25]
Output
Specific-state Probabilities:-
These are the probabilities of the system being in specific states.
The probability distribution of the system being in a certain state (i) in a certain period (k) may be expressed as a row matrix:
Q(k) = [q1(k) q2(k) q3(k) ……….. qn(k)]
For this example:
Q(0) = [qD1(0) qD2(0) qD3(0)] = [0.30 0.45 0.25]

For calculating the market share for the next month:
Q(next) = Q(current) × P
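This row-vector-times-matrix step can be sketched in Python (plain lists; P and Q(0) as given above):

```python
P = [
    [0.60, 0.30, 0.10],  # from D1
    [0.20, 0.50, 0.30],  # from D2
    [0.15, 0.05, 0.80],  # from D3
]
Q0 = [0.30, 0.45, 0.25]  # market shares on March 1

# Q(next) = Q(current) x P : each entry j sums Q0[i] * P[i][j] over i.
Q1 = [sum(Q0[i] * P[i][j] for i in range(3)) for j in range(3)]

print([round(q, 4) for q in Q1])  # → [0.3075, 0.3275, 0.365]
```

So after one month D1 slips to 30.75%, D2 to 32.75%, and D3 rises to 36.5%.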
To calculate the probability of a customer buying D3 two months hence, given that his latest purchase has been D2:
Path (t=1, then t=2)        Probability
D2 to D1, D1 to D3          0.20 x 0.10 = 0.02
D2 to D2, D2 to D3          0.50 x 0.30 = 0.15
D2 to D3, D3 to D3          0.30 x 0.80 = 0.24
Total                       0.41
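The same 0.41 falls out of the two-step transition matrix P², sketched here in Python (plain lists; indices 0/1/2 stand for D1/D2/D3):

```python
P = [
    [0.60, 0.30, 0.10],  # from D1
    [0.20, 0.50, 0.30],  # from D2
    [0.15, 0.05, 0.80],  # from D3
]

# Two-step matrix: P2[i][j] = sum over k of P[i][k] * P[k][j],
# i.e. all intermediate states k are summed over.
P2 = [[sum(P[i][k] * P[k][j] for k in range(3)) for j in range(3)]
      for i in range(3)]

# Probability of D2 -> D3 in two months (row D2, column D3).
print(round(P2[1][2], 2))  # → 0.41
```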
Steady state probability:
A stabilised system is said to be in a steady state or in equilibrium. At steady state,
Q(k) = Q(k-1)
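One way to find that equilibrium is to keep multiplying by P until Q stops changing; a sketch, where the stopping tolerance 1e-12 is an illustrative choice rather than anything from the text:

```python
P = [
    [0.60, 0.30, 0.10],  # from D1
    [0.20, 0.50, 0.30],  # from D2
    [0.15, 0.05, 0.80],  # from D3
]
Q = [0.30, 0.45, 0.25]  # initial market shares

# Iterate Q <- Q x P until Q(k) = Q(k-1) within tolerance.
while True:
    Q_next = [sum(Q[i] * P[i][j] for i in range(3)) for j in range(3)]
    if max(abs(a - b) for a, b in zip(Q, Q_next)) < 1e-12:
        break
    Q = Q_next

print([round(q, 4) for q in Q])  # steady-state shares for D1, D2, D3
```

With these switching rates the shares settle near 29.3%, 22.4% and 48.3% for D1, D2 and D3 (exactly 17/58, 13/58 and 28/58), independent of the starting shares.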
