
EI2452 Reliability Evaluation of

Electrical Power Systems


Lecture 6:
Markov Models – continued
Niklas Ekstedt, niklas.ekstedt@ee.kth.se
Content

• Recap from L3
• Compute theoretical solution for one experiment of L3
• Introduce the continuous Markov process
• Step-by-step approach
• Solve examples
• Solve bigger problems, using a computer
Literature – further reading

• Course book, chapter 3.4.
• Optional course book: Rausand, M. & Høyland, A. (2004). System Reliability Theory: Models, Statistical Methods, and Applications, chapter 8. John Wiley & Sons.
• Billinton, R. & Allan, R. N. (1983). Reliability Evaluation of Engineering Systems: Concepts and Techniques, chapters 8–9. New York: Plenum. (15 pp hand-out on the course site.)
• Rundqvist, Göran. Introduktion till Markovkedjor. http://www.math.kth.se/~goranr/goran2.pdf
• Rozanov, Yu. A. (1969). Introductory Probability Theory, rev. Eng. ed., chapters 7–8. Englewood Cliffs: Prentice-Hall.
• Wikipedia etc.
Recap from L3

Two state system:


Recap from L3

Instead, use the “stochastic transitional probability matrix”, P. For the two-state example:

P = [1/2  1/2]
    [1/4  3/4]

Recap from L3

The probability of being in the different states at step n, given the initial probability vector P(0) and the matrix P, is

P(n) = P(0) Pⁿ

For example, with the previous matrix, starting in state 1:

P(2) = P(0) P² = [1  0] [1/2  1/2; 1/4  3/4]²
     = [1  0] [3/8  5/8; 5/16  11/16]
     = [3/8  5/8]
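As a quick numerical check (a sketch, not part of the original lecture material), the same two-step calculation in Python/NumPy:

```python
import numpy as np

# Transition matrix and initial vector from the recap (start in state 1)
P = np.array([[1/2, 1/2],
              [1/4, 3/4]])
p0 = np.array([1.0, 0.0])

# State probabilities after two steps: P(2) = P(0) P^2
p2 = p0 @ np.linalg.matrix_power(P, 2)
print(p2)  # [0.375 0.625], i.e. [3/8 5/8]
```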
Recap from L3

The probability of being in the different states as time → ∞, given the matrix P, is found from

πP = π  and  Σᵢ πᵢ = 1

For example:

[π₁  π₂] [1/2  1/2; 1/4  3/4] = [π₁  π₂]  and  π₁ + π₂ = 1

gives:

(1/2)π₁ + (1/4)π₂ = π₁
π₁ + π₂ = 1
⇒  π₁ = 1/3,  π₂ = 2/3
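A sketch of the same limiting-probability calculation in Python/NumPy, replacing one redundant balance equation with the normalisation condition:

```python
import numpy as np

P = np.array([[1/2, 1/2],
              [1/4, 3/4]])

# pi (P - I) = 0 together with sum(pi) = 1:
# transpose so pi is a column vector, then overwrite the last
# (redundant) equation with the normalisation row of ones.
A = P.T - np.eye(2)
A[-1, :] = 1.0
pi = np.linalg.solve(A, np.array([0.0, 1.0]))
print(pi)  # [0.3333... 0.6666...], i.e. [1/3 2/3]
```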
Compute theoretical solution for one
experiment from L3

Starting in state 1.

P = [5/6  1/6]
    [1/3  2/3]

Probability of being in state 1 and 2 after 15 steps?

P¹⁵ = [5/6  1/6; 1/3  2/3]¹⁵ ≈ [0.6667  0.3333]
                               [0.6666  0.3334]

Starting in state 1, the probabilities after 15 steps are therefore the first row, ≈ [0.6667  0.3333].
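A sketch of the 15-step computation with NumPy's matrix power; note that both rows are already close to the limiting distribution [2/3, 1/3]:

```python
import numpy as np

P = np.array([[5/6, 1/6],
              [1/3, 2/3]])

# P^15: each row approaches the limiting distribution
P15 = np.linalg.matrix_power(P, 15)
print(P15)  # rows approximately [0.6667 0.3333]
```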
Compute theoretical solution for one
experiment from L3

Probability of being in state 1 and 2 after infinitely many steps?

πP = π:

π [5/6  1/6; 1/3  2/3] = π  and  π₁ + π₂ = 1

(5/6)π₁ + (1/3)π₂ = π₁
(1/6)π₁ + (2/3)π₂ = π₂
π₁ + π₂ = 1

Substituting π₂ = 1 − π₁ into the first equation:

(5/6)π₁ + (1/3)(1 − π₁) = π₁  ⇒  (5/6 − 1/3 − 1)π₁ = −1/3
⇒  −(1/2)π₁ = −1/3  ⇒  π₁ = 2/3

π₂ = 1 − π₁ = 1/3
Introduce the continuous Markov process

Below is not a proof, nor a complete derivation; it is only an aid to understanding the relation between the discrete and the continuous case.

Remember from L2: with a constant failure rate λ, the probability that a component fails within Δt is approximately λΔt. Introduce the repair rate μ:

P = [1 − λΔt   λΔt    ]
    [μΔt       1 − μΔt]

This is a probability matrix, and we have previously learnt that the limiting state probabilities are found from πP = π. With I the identity matrix:

πP = πI  ⇒  π(P − I) = 0

QΔt := P − I = [−λΔt   λΔt ]
               [ μΔt  −μΔt ]

Divide by Δt and denote the resulting matrix Q. The equation for the stationary states becomes

πQ = 0  with  Q = [−λ   λ]
                  [ μ  −μ]
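A sketch of the continuous two-state case in Python/NumPy, with illustrative (assumed) rate values; the result matches the closed form π = [μ/(λ+μ), λ/(λ+μ)]:

```python
import numpy as np

lam, mu = 0.1, 52.14  # failure/repair rates in 1/year (illustrative values)

# Transition rate matrix for the two-state component
Q = np.array([[-lam,  lam],
              [  mu,  -mu]])

# Solve pi Q = 0 with sum(pi) = 1 by replacing one equation
A = Q.T.copy()
A[-1, :] = 1.0
pi = np.linalg.solve(A, np.array([0.0, 1.0]))
print(pi)  # [mu/(lam+mu), lam/(lam+mu)]
```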
Introduce the continuous Markov process

So, in the continuous case we draw the state space diagram: state 0 (working) and state 1 (failed), with transition rate λ from 0 to 1 and μ from 1 to 0.

This yields the “transition rate matrix”

Q = [−λ   λ]
    [ μ  −μ]

and we solve the limiting state probabilities from πQ = 0.

R(t): the probability of staying in state 0 (no failure up to time t).
A(t): the probability of finding the component in state 0 at time t.
Introduce the continuous Markov process

Construct the Q matrix by identifying the transition rates λᵢⱼ from state i to state j. (Remember to use the same unit!) Let the diagonal elements be

λᵢᵢ = − Σ_{j≠i} λᵢⱼ

Example:
Step-by-step approach

• Convert all transition rates to the same unit (e.g. events/year).
• Define all possible states.
• Illustrate all possible transitions between states.
• Create the Q matrix.
• Set up the system of equations using πQ = 0 and
Σᵢ πᵢ = 1.
• Calculate every 𝜋𝑖 by solving the system of equations.
• Use the results.
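The steps above can be sketched as a small Python/NumPy helper; the two-state rates in the sanity check are assumed values for illustration:

```python
import numpy as np

def limiting_probabilities(Q):
    """Solve pi Q = 0 together with sum(pi) = 1 for an n-state rate matrix Q."""
    n = Q.shape[0]
    A = Q.T.copy()
    A[-1, :] = 1.0   # normalisation replaces one redundant balance equation
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

# Sanity check on a two-state component (lambda = 0.5, mu = 2.0, assumed)
Q = np.array([[-0.5,  0.5],
              [ 2.0, -2.0]])
pi = limiting_probabilities(Q)
print(pi)  # [0.8 0.2] = [mu/(lam+mu), lam/(lam+mu)]
```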
Solve example

• Example 1: [state-space diagram with three states; transition rates as in the Q matrix on the next slide]

• Unit: Events/hr
• Calculate:
• Limiting probabilities for state 1, 2, 3
• Availability of system if 3 is failure state and if 2 and 3
are failure states.
Solve example - solution

• Define the Q matrix:

Q = [−0.02   0.01   0.01]
    [ 0.5   −0.51   0.01]
    [ 0.5    0     −0.5 ]

• Define the limiting state vector: π = [π₁  π₂  π₃]
• Set up the system of equations using πQ = 0 and π₁ + π₂ + π₃ = 1:

−0.02π₁ + 0.5π₂ + 0.5π₃ = 0    (1)
 0.01π₁ − 0.51π₂ + 0π₃ = 0     (2)
 0.01π₁ + 0.01π₂ − 0.5π₃ = 0   (3)
 π₁ + π₂ + π₃ = 1              (4)

(2) ⇒ π₁ = (0.51/0.01)π₂ = 51π₂
Solve example - solution

• Insert (2) and (4) into (1):

−0.02 · 51π₂ + 0.5π₂ + 0.5(1 − 51π₂ − π₂) = 0
(−1.02 + 0.5 − 25.5 − 0.5)π₂ = −0.5
π₂ = −0.5 / (−26.52) = 0.01885
π₁ = 51π₂ = 51 · 0.01885 = 0.9615
π₃ = 1 − π₁ − π₂ = 0.01961

Availability if 3 is the failure state: A = 1 − π₃ = 0.9804

Availability if 2 and 3 are failure states: A = π₁ = 0.9615
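As a numerical check (a sketch, not part of the original slides), the same Q matrix solved with NumPy:

```python
import numpy as np

# Q matrix from Example 1 (rates in events/hr)
Q = np.array([[-0.02,  0.01,  0.01],
              [ 0.50, -0.51,  0.01],
              [ 0.50,  0.00, -0.50]])

# Solve pi Q = 0 with sum(pi) = 1
A = Q.T.copy()
A[-1, :] = 1.0
pi = np.linalg.solve(A, np.array([0.0, 0.0, 1.0]))
print(pi)                # approx [0.9615 0.01885 0.01961]
print(1 - pi[2], pi[0])  # availabilities for the two failure-state definitions
```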
Show simulation

Simple simulations of Markov processes found on the internet:

Continuous process in Excel (+ other):
http://www.math.ucsd.edu/~bdriver/math180C_S2011/simulations.htm

Discrete Markov chain online:
http://www.zweigmedia.com/RealWorld/markov/markov.html
Solve bigger problem, using a computer

• Three components in the block diagram below.


• Compute the availability of the system.
• Only independent events are studied.

[Block diagram: component 1 (FR 0.02 f/y, MTTR 48 h) in series with the parallel pair of components 2 and 3 (each FR 0.1 f/y, MTTR 1 week)]
Solve bigger problem, using a computer

• Convert to the same unit [1/y]:

• λ₁ = 0.02 f/y; λ₂ = λ₃ = 0.1 f/y
• μ₁ = 8760/48 r/y = 182.5 r/y
• μ₂ = μ₃ = 8760/(24·7) r/y = 52.14 r/y

• Define the states (u: working, d: not working):

• Components 2 and 3 are identical.
• 6 states are relevant.
• States could be defined differently.

State | Comp. 1 | Comps. 2 & 3
  1   |    u    |     2u
  2   |    u    |     1u
  3   |    u    |     0u
  4   |    d    |     2u
  5   |    d    |     1u
  6   |    d    |     0u
Solve bigger problem, using a computer
• The assumption of independent events makes the following transitions possible:

– 1 ↔ 2, 4
– 2 ↔ 1, 3, 5
– 3 ↔ 2, 6
– 4 ↔ 1, 5
– 5 ↔ 2, 4, 6
– 6 ↔ 3, 5
Solve bigger problem, using a computer

• The Q matrix becomes:

Q = [−(λ₁+2λ₂)   2λ₂            0           λ₁            0               0        ]
    [ μ₂        −(μ₂+λ₁+λ₂)     λ₂          0             λ₁              0        ]
    [ 0           μ₂           −(μ₂+λ₁)     0             0               λ₁       ]
    [ μ₁          0             0          −(μ₁+2λ₂)      2λ₂             0        ]
    [ 0           μ₁            0           μ₂           −(μ₁+μ₂+λ₂)      λ₂       ]
    [ 0           0             μ₁          0             μ₂             −(μ₁+μ₂)  ]
Solve bigger problem, using a computer

• Let the computer solve the system of linear equations.

• Solutions with a Matlab m-file and with Excel are provided on the course web site.
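The Matlab and Excel solutions are on the course site; as a sketch, the same six-state system can be solved in Python/NumPy. State ordering follows the table above; treating states 1 and 2 as the working states is an assumption based on the series/parallel structure of the block diagram:

```python
import numpy as np

lam1, lam2 = 0.02, 0.1   # failures/year: component 1; components 2 & 3
mu1 = 8760 / 48          # repairs/year, MTTR = 48 h
mu2 = 8760 / (24 * 7)    # repairs/year, MTTR = 1 week

# Transition rate matrix for the six states defined on the previous slides
Q = np.array([
    [-(lam1 + 2*lam2), 2*lam2, 0, lam1, 0, 0],
    [mu2, -(mu2 + lam1 + lam2), lam2, 0, lam1, 0],
    [0, mu2, -(mu2 + lam1), 0, 0, lam1],
    [mu1, 0, 0, -(mu1 + 2*lam2), 2*lam2, 0],
    [0, mu1, 0, mu2, -(mu1 + mu2 + lam2), lam2],
    [0, 0, mu1, 0, mu2, -(mu1 + mu2)],
])

# Solve pi Q = 0 with sum(pi) = 1
A = Q.T.copy()
A[-1, :] = 1.0
b = np.zeros(6)
b[-1] = 1.0
pi = np.linalg.solve(A, b)

# System works when component 1 is up and at least one of 2/3 is up: states 1, 2
availability = pi[0] + pi[1]
print(pi)
print(availability)
```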
