Lecture Notes
Anne Remke
1 Introduction 5
1.1 Reliability computations . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.1.1 Reliability of serial systems . . . . . . . . . . . . . . . . . . . 7
1.1.2 Reliability of parallel systems . . . . . . . . . . . . . . . . . . 7
1.1.3 Reliability of combined systems . . . . . . . . . . . . . . . . . 8
5 CTMCs 43
5.1 Continuous-time Markov chains . . . . . . . . . . . . . . . . . . . . . 43
5.2 Paths . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
5.3 Probabilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
5.3.1 Transient state probability . . . . . . . . . . . . . . . . . . . . 47
5.3.2 Uniformization method . . . . . . . . . . . . . . . . . . . . . . 48
5.3.3 Steady-state probability . . . . . . . . . . . . . . . . . . . . . 50
6 CSL 53
6.1 Continuous stochastic logic (CSL) . . . . . . . . . . . . . . . . . . . . 53
6.2 Model checking finite state CTMCs . . . . . . . . . . . . . . . . . . . 55
6.2.1 Worst case time-complexity of CSL . . . . . . . . . . . . . . . 57
6.3 General model checking routine . . . . . . . . . . . . . . . . . . . . . 57
Bibliography 57
Chapter 1
Introduction
the functionality of the system model. A property that needs to be analyzed has to
be specified in a logic with consistent syntax and semantics. For every state of the
model, it is then checked whether the property is valid or not.
Please note that the main focus of this course is on quantitative model checking
for Probabilistic Computational Tree Logic (PCTL) and for Continuous Stochastic
Logic (CSL). The former can be used to express quantitative properties on DTMCs
and the latter for CTMCs. Efficient computational algorithms have been developed for checking finite DTMCs and CTMCs against formally specified properties expressed in these logics, cf. [4, 5], as well as supporting tools, cf. PRISM [20], ETMC2 [16], the APNN toolbox [7], and recently MRMC [19]. Other tools, like GreatSPN [11], are used as front-ends to model checking tools such as PRISM and MRMC. As such, we only introduce labelled state transition systems, the Computational Tree Logic and the corresponding algorithms to provide background and context for students. We do not aim to provide complete coverage of this topic, for which we refer to [18].
Quantitative model checking is one way to perform formal verification, which can be seen as the application of rigorous, mathematics-based techniques to establish the correctness of computerised systems. Essentially, formal verification is about proving that a program satisfies its specification. Many techniques are used for formal verification: manual proofs, automated theorem proving using a proof assistant or proof checker tool, static analysis, and model checking, which systematically checks a given property in all states using model checking algorithms and tools.
Recent history is full of examples where software glitches caused massive safety problems and have led either to large recall actions, and hence to considerable financial and reputation loss for companies, or even to large outages and breakdowns. In the following I would like to give a couple of examples:
The Toyota Prius was one of the first mass-produced hybrid vehicles. Unfortunately, it suffered from several software glitches, affecting airbag control in 2007, its anti-lock braking system (2010) and its hybrid system (2015). Eventually these were fixed via software updates after in total 185,000 cars had been recalled at huge cost. Not only the incidents themselves but also their handling at Toyota prompted much criticism and bad publicity for the company. One of the most well-known and often used examples is the maiden flight of the rocket Ariane 5 on the 4th of June 1996 at ESA (European Space Agency). An uncaught exception caused by a numerical overflow in a conversion routine, which in turn resulted in incorrect attitude data sent by the on-board computer, led to the self-destruction of the rocket after 37 seconds.
Many more of these examples exist and all these stories have in common that
programmable computing devices were in use (conventional computers and networks
and software embedded in devices like the airbag controllers or mobile phones) and
in all cases programming errors were the direct cause of failure. These failures were
critical for safety, business and performance and resulted in high costs.
1.1 Reliability computations

RS = R1 · R2 · R3 · . . . · Rn . (1.1)

On the other hand, the above formula can be simplified if all components have the same reliability Ri = R1 = . . . = Rn :

RS = (Ri )n . (1.2)
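The two formulas above translate into a few lines of code. This is a minimal sketch; the function name serial_reliability is our own and not part of the course material.

```python
def serial_reliability(reliabilities):
    # Equation 1.1: the reliability of a serial system is the product
    # of the component reliabilities.
    r = 1.0
    for r_i in reliabilities:
        r *= r_i
    return r

# Equation 1.2: with identical components the product collapses to (R_i)^n.
print(serial_reliability([0.9, 0.9, 0.9]))  # same value as 0.9 ** 3
```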
Chapter 2

Model checking labeled Transition Systems

This chapter focuses on labeled transition systems (LTSs) in Sections 2.1 and 2.2. Section 2.3 introduces the Computational Tree Logic (CTL), which is used to express properties of LTSs, and finally model checking algorithms for CTL are presented in Section 2.4.
L : S → 2^AP is a labeling function that assigns a set L(s) ⊆ AP to any state s.
where the true operator tt holds in every state and a ∈ AP denotes an atomic property (see Section 2.1). We can use the Boolean operators ¬, ∧ and ∨ to combine state formulas. The exists quantifier ∃ and the for all quantifier ∀ are always followed by a path formula, indicated by φ.
Definition 4 (Path formulas). A path formula φ for CTL is constructed using the following grammar:

φ ::= X Φ | Φ U Ψ . (2.2)

X denotes the next operator and U is the until operator. X Φ indicates that the next state on the path fulfills the state formula Φ. Φ U Ψ denotes that Φ holds for all states along the path until Ψ holds.
The following short-hand notations are used in the following and denoted eventually and always, respectively.

◊Φ := tt U Φ
□Φ := ¬◊¬Φ

◊Φ indicates that along the path eventually the state formula Φ holds. □Φ means that always, in every state on the path, Φ holds.
As mentioned above, a path formula has to be wrapped inside either ∃ or ∀, see Section 3, for obtaining a CTL formula, e.g., ∃◊Φ means that there exists a path where eventually Φ is fulfilled. ∀□Φ denotes that for all states of all possible paths the state formula Φ holds.
Example 5 (State formulas). The following expressions are examples for state formulas:

(ball ∧ red)

There exists a path, such that the traffic light will turn green:
∃◊(traffic light green)
Example 6 (Path formulas). The following expressions are examples for path formulas:

In the next state of the path, the state formula Φ = (traffic light green) is fulfilled:
X(traffic light green)

The traffic light will eventually turn from red to green, without any state in between that is neither red nor green:
red U green
Example 7 (CTL Model Checking). See the computation trees of LTSs in Figures 2.3, 2.4, 2.5, 2.6, 2.7 and 2.8. All the computation trees represent the evolution of LTSs that consist of the states {s1 , s2 , . . . , s8 , s9 }. We would like to discuss the validity of different CTL formulas for the initial state.
s ⊨ tt
s ⊨ a iff a ∈ L(s)
s ⊨ Φ ∧ Ψ iff s ⊨ Φ and s ⊨ Ψ
s ⊨ ¬Φ iff s ⊭ Φ
s ⊨ ∃φ iff there is a path π ∈ Paths(s) s.t. π ⊨ φ
s ⊨ ∀φ iff for each path π ∈ Paths(s) : π ⊨ φ

The satisfaction set for a state formula Φ is defined as: Sat (Φ) = {s ∈ S : s ⊨ Φ}
Figure 2.3: ∃◊red holds: there exists a path starting from state s1 , where eventually red holds. Paths s1 , s2 , s6 and s1 , s3 , s8 fulfill this property. It does not matter that state s8 is connected to more states, as one existing path is sufficient to fulfill the formula.

Figure 2.4: ∃□red holds: there exists a path starting from state s1 , where always red holds. Path s1 , s2 , s6 fulfills this property. Path s1 , s3 , s8 does not fulfill this because state s8 is connected to further states, which may not have the label red.
Figure 2.5: ∃(yellow U red) holds: there exists a path starting from state s1 , where yellow holds for all states until red holds. Paths s1 , s4 and s1 , s2 , s5 and s1 , s2 , s6 fulfill this until property. It is sufficient to find one fulfilling path. Whatever happens after the state in which red holds does not influence the outcome.

Figure 2.6: ∀◊red holds: all the paths from state s1 have to fulfill that eventually red holds along the path. The paths s1 , s2 , s5 and s1 , s2 , s6 and s1 , s3 and s1 , s4 fulfill this requirement. As these are all the paths starting from s1 , the property holds. Further connections from s3 , s4 , s5 are irrelevant, as ◊red holds as soon as red holds.
2.4 Model checking algorithms for CTL
Figure 2.7: ∀□red maybe holds: for all the paths starting from state s1 , red has to be always fulfilled. We do not have enough information to decide if the property holds, since the states s5 , s7 , s8 and s9 have outgoing connections, where red also has to hold. Note that if the system did not have any extra unknown states, this property would hold.

Figure 2.8: ∀(yellow U red) holds: all the paths from state s1 need to have only states fulfilling yellow until red holds. Paths s1 , s2 , s5 and s1 , s2 , s6 and s1 , s3 , s7 and s1 , s3 , s8 and s1 , s4 fulfill this until property. These are all the paths starting from state s1 , thus the property holds. Whatever happens after the state where red is fulfilled does not influence the outcome.
Φ ::= tt | a | ¬Φ | Φ ∧ Φ | ∃X Φ | ∃(Φ U Φ) | ∃□Φ (2.3)

∃◊a ∧ ∃(b U ¬c)

As shown in the parse tree in Figure 2.9, the formula is first split into two parts, separating the left and right side of the highest binding operator ∧. This yields Φ1 ∧ Φ2 with Φ1 = ∃◊a and Φ2 = ∃(b U ¬c). The left side, Φ1 , can then be further split up in ∃◊ and a. Note that all the nodes consist only of state formulas (not path formulas). That is why ∃ and ◊ appear together in one node. The right side is a bit more complex: Φ2 can be split up into the part left and right of the until operator U. Note that U is a path operator and Φ3 U Φ4 is not a state formula, so it is necessary to have it appear in the node of the parse tree together with its corresponding quantifier, ∃. Now Φ2 = ∃(Φ3 U Φ4 ) with Φ3 = b and Φ4 = ¬c. Φ3 is already an atomic property and Φ4 needs to be further split up as shown in Figure 2.9.
To compute Sat (Φ), we perform a recursive computation, starting with the leaves of the parse tree: we compute Sat (Φx ), where the Φx are the atomic properties in the leaves. Then we go one level up and combine the satisfaction sets of the nodes already computed according to the rule that corresponds to the operator shown in the parent node, until we have reached the root of the parse tree. I.e., we compute all the satisfaction sets of the leaves, go one level up and compute all the satisfaction sets of the higher nodes, until we reach the top node.
In the following, we present the computations for all CTL operators, including the cases of the all-quantifier combined with eventually, until and always:
So all states s ∈ Sat (a) in the satisfaction set of a have the label a.

Computing Sat (¬Φ)

Sat (¬Φ) = S \ Sat (Φ)

To compute the satisfaction set of a negated CTL formula ¬Φ, we have to take the complement of Sat (Φ) with respect to the state space S. For example, if S = {0, 1, 2, 3} and Sat (Φ) = {0, 2}, then Sat (¬Φ) = S \ Sat (Φ) = {0, 1, 2, 3} \ {0, 2} = {1, 3}.
Computing Sat (Φ ∧ Ψ)

Sat (Φ ∧ Ψ) = Sat (Φ) ∩ Sat (Ψ)

Computing Sat (∃XΦ)
To compute this satisfaction set, we first need to define the so-called successor set of a state s:

Definition 7 (Successor set). In a labeled transition system T = (S, T, L), the set of successors for a state s ∈ S is defined as:

Post (s) = {s′ ∈ S | (s, s′ ) ∈ T }

So, Post (s) includes all the states s′ such that s and s′ are in the transition relation, (s, s′ ) ∈ T . We can now determine Sat (∃XΦ) as the set of states that have a successor in the satisfaction set of Φ:

Sat (∃XΦ) = {s ∈ S | (Post (s) ∩ Sat (Φ)) ≠ ∅}

The satisfaction set hence includes all the states s ∈ S for which the intersection, ∩, between the successor states Post (s) and the satisfaction set of Φ, Sat (Φ), is not the empty set ∅.
Computing Sat (∀XΦ)

This satisfaction set can be determined along a similar reasoning as for Sat (∃XΦ). The only difference is that all successor states have to be in Sat (Φ):

Sat (∀XΦ) = {s ∈ S | Post (s) ⊆ Sat (Φ)}

For a state s ∈ S, all successor states have to fulfill Φ, hence Post (s) is a subset of Sat (Φ).
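The successor-set characterizations of the two next-operators translate directly into set operations. A minimal illustrative sketch (the function names are ours; the small LTS is the one used in the until-example of Figure 2.10 later in this section):

```python
S = {"s1", "s2", "s3", "s4"}
T = {("s1", "s2"), ("s2", "s3"), ("s3", "s2"), ("s4", "s3")}

def post(s):
    # Post(s) = {s' in S | (s, s') in T}
    return {t for (u, t) in T if u == s}

def sat_exists_next(sat_phi):
    # Sat(EX phi) = {s | Post(s) intersects Sat(phi)}
    return {s for s in S if post(s) & sat_phi}

def sat_forall_next(sat_phi):
    # Sat(AX phi) = {s | Post(s) is a subset of Sat(phi)}
    return {s for s in S if post(s) <= sat_phi}

print(sorted(sat_exists_next({"s3"})))  # ['s2', 's4']
print(sorted(sat_forall_next({"s3"})))  # ['s2', 's4']
```

Here the two sets coincide because s2 and s4 each have exactly one successor; in general Sat (∀XΦ) ⊆ Sat (∃XΦ) whenever every state has at least one successor.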
Computing Sat (∃(Φ U Ψ))

The satisfaction set Sat (∃(Φ U Ψ)) contains all states starting from which a path exists, such that Φ holds until Ψ holds. Sat (∃(Φ U Ψ)) is defined as the smallest subset Q ⊆ S for which the following two conditions hold:

1. Sat (Ψ) ⊆ Q
2. (s ∈ Sat (Φ) ∧ (Post (s) ∩ Q) ≠ ∅) =⇒ s ∈ Q

To compute Q recursively, we first start with Q as the set that contains all the states s′ ∈ S where Ψ holds, so we already have Sat (Ψ) ⊆ Q. In the next iteration we add each state s ∈ Sat (Φ) having a successor in Q, so (Post (s) ∩ Q) ≠ ∅. The second part requires an iterative computation until the set does not increase anymore:
S0 = Sat (Ψ)
S1 = S0 ∪ {states that can directly move to S0 }
Repeat until Sk+1 = Sk

So, S0 contains all the states where Ψ holds, S0 = Sat (Ψ). Next, we need to find all the states s ∉ S0 with s ∈ Sat (Φ), for which it holds that there exists a successor s′ ∈ S0 with (s, s′ ) ∈ T . Finally, we need to compute S1 = S0 ∪ {s} for all found states s.
Example: computing Sat (∃(yellow U blue)). See the LTS given in Figure 2.10. We compute Sat (∃(yellow U blue)) for an LTS consisting of the states {s1 , s2 , s3 , s4 } ⊆ S with L (s1 ) = {yellow}, L (s2 ) = {yellow}, L (s3 ) = {blue} and L (s4 ) = {white}, and with transition relation {(s1 , s2 ) , (s2 , s3 ) , (s3 , s2 ) , (s4 , s3 )} ⊆ T .
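For this concrete LTS, the least-fixed-point iteration described above can be sketched as follows (a small illustrative script, not part of the original notes):

```python
S = {"s1", "s2", "s3", "s4"}
T = {("s1", "s2"), ("s2", "s3"), ("s3", "s2"), ("s4", "s3")}
L = {"s1": {"yellow"}, "s2": {"yellow"}, "s3": {"blue"}, "s4": {"white"}}

def post(s):
    return {t for (u, t) in T if u == s}

def sat_exists_until(sat_phi, sat_psi):
    q = set(sat_psi)                       # S0 = Sat(psi)
    while True:
        # add phi-states that have at least one successor in q
        new = {s for s in sat_phi if post(s) & q} - q
        if not new:                        # S_{k+1} = S_k: fixed point reached
            return q
        q |= new

yellow = {s for s in S if "yellow" in L[s]}
blue = {s for s in S if "blue" in L[s]}
print(sorted(sat_exists_until(yellow, blue)))  # ['s1', 's2', 's3']
```

Starting from S0 = {s3}, the iteration first adds s2 (its successor s3 is in S0) and then s1; s4 carries neither label and is never added.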
Computing Sat (∀(Φ U Ψ))

Sat (∀(Φ U Ψ)) consists of all the states in Sat (Φ) where for all paths Φ holds until Ψ holds, united with the set Sat (Ψ). Sat (∀(Φ U Ψ)) is the smallest subset Q ⊆ S such that:

1. Sat (Ψ) ⊆ Q and
2. (s ∈ Sat (Φ) ∧ Post (s) ⊆ Q) =⇒ s ∈ Q

We start with Q ⊆ S as the set that contains all the states s′ ∈ S where Ψ holds, so that Sat (Ψ) ⊆ Q. In the next iteration, we add each state s ∈ Sat (Φ) having all successors in Q. The second part requires an iterative computation until the set does not increase anymore:

S0 = Sat (Ψ)

S0 contains all the states where Ψ holds, i.e. S0 = Sat (Ψ). Next, we need to find all the states s ∉ S0 with s ∈ Sat (Φ) for which all successors are in S0 , so for all s′ with (s, s′ ) ∈ T it is s′ ∈ S0 . Finally we need to compute S1 = S0 ∪ {s} for all found states s.
Example: computing Sat (∀(yellow U blue)). See the LTS given in Figure 2.10. We compute Sat (∀(yellow U blue)) for an LTS consisting of the states {s1 , s2 , s3 , s4 } ⊆ S with L (s1 ) = {yellow}, L (s2 ) = {yellow}, L (s3 ) = {blue} and L (s4 ) = {white}, and with transition relation {(s1 , s2 ) , (s2 , s3 ) , (s3 , s2 ) , (s4 , s3 )} ⊆ T .
Note that this is the same result as computing Sat (∃(yellow U blue)), because all the states that were added iteratively fulfilled the property that they had only transitions to S0 , S1 and S2 . This is pure coincidence.
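The ∀-until iteration differs from the ∃-until iteration only in requiring all successors to lie in the already-computed set. A sketch for the same LTS (function name ours):

```python
S = {"s1", "s2", "s3", "s4"}
T = {("s1", "s2"), ("s2", "s3"), ("s3", "s2"), ("s4", "s3")}
L = {"s1": {"yellow"}, "s2": {"yellow"}, "s3": {"blue"}, "s4": {"white"}}

def post(s):
    return {t for (u, t) in T if u == s}

def sat_forall_until(sat_phi, sat_psi):
    q = set(sat_psi)                       # S0 = Sat(psi)
    while True:
        # add phi-states all of whose successors already lie in q
        # (the post(s) guard excludes states without successors)
        new = {s for s in sat_phi if post(s) and post(s) <= q} - q
        if not new:
            return q
        q |= new

yellow = {s for s in S if "yellow" in L[s]}
blue = {s for s in S if "blue" in L[s]}
print(sorted(sat_forall_until(yellow, blue)))  # ['s1', 's2', 's3']
```

As the notes observe, the result here coincides with the ∃-until set, because every added state happens to have all its transitions into the already-computed sets.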
Computing Sat (∃□Φ)

Sat (∃□Φ) consists of all the states s ∈ S for which there exists a path starting from state s such that Φ always holds. Sat (∃□Φ) is the largest subset Q ⊆ S such that

Q ⊆ Sat (Φ) and
s ∈ Q =⇒ (Post (s) ∩ Q) ≠ ∅

S0 = Sat (Φ)

So, S0 contains all the states where Φ holds, i.e. S0 = Sat (Φ). Next, we need to find all the states s ∈ S0 for which there exists no successor s′ ∈ S0 with (s, s′ ) ∈ T . Finally, we need to compute S1 = S0 \ {s} for all found states s. We then continue with S1 instead of S0 and S2 instead of S1 until Sk+1 = Sk . Note that this computation always finishes within a finite number of steps. It is S0 ⊇ S1 ⊇ . . . ⊇ Sk = Sk+1 = Sat (∃□Φ). Also note that a state s ∈ Sat (∃□Φ) can have a successor s′ ∉ Sat (∃□Φ).
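This greatest-fixed-point computation repeatedly removes states that have lost all their successors inside the candidate set. An illustrative sketch, reusing the small LTS of Figure 2.10 (for that LTS, Sat (∃□yellow) turns out to be empty, because the only successor of s2 is the blue state s3):

```python
S = {"s1", "s2", "s3", "s4"}
T = {("s1", "s2"), ("s2", "s3"), ("s3", "s2"), ("s4", "s3")}
L = {"s1": {"yellow"}, "s2": {"yellow"}, "s3": {"blue"}, "s4": {"white"}}

def post(s):
    return {t for (u, t) in T if u == s}

def sat_exists_always(sat_phi):
    q = set(sat_phi)                       # S0 = Sat(phi)
    while True:
        # remove states with no successor left inside q
        removable = {s for s in q if not (post(s) & q)}
        if not removable:                  # S_{k+1} = S_k: fixed point reached
            return q
        q -= removable

yellow = {s for s in S if "yellow" in L[s]}
print(sat_exists_always(yellow))  # set()
```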
Example: computing Sat (∃□b). See the LTS given in Figure 2.11. We compute Sat (∃□b) for an LTS consisting of the states {s1 , s2 , . . . , s7 , s8 } ⊆ S with the labels and transitions as shown in the figure.
Computing Sat (∀□Φ)

Sat (∀□Φ) consists of all the states s ∈ S for which on all paths starting from s always Φ holds. Sat (∀□Φ) is the largest subset Q ⊆ S such that

Q ⊆ Sat (Φ) and
s ∈ Q =⇒ Post (s) ⊆ Q

Q is a subset of Sat (Φ) and if s ∈ Q, it follows that this state has all its successors in Q, so that Post (s) ⊆ Q. We compute this set by using the following iterative process:

S0 = Sat (Φ)

So, S0 contains all the states where Φ holds, i.e. S0 = Sat (Φ). Next, we need to find all the states s ∈ S0 for which not all successors are in S0 , i.e. there is an s′ with (s, s′ ) ∈ T and s′ ∉ S0 . Finally we need to compute S1 = S0 \ {s} for all found states s. We then continue with S1 instead of S0 and S2 instead of S1 until Sk+1 = Sk . Note that this computation always finishes within a finite number of steps. It is S0 ⊇ S1 ⊇ . . . ⊇ Sk = Sk+1 = Sat (∀□Φ). Also note that a state s ∈ Sat (∀□Φ) cannot have any successor s′ ∉ Sat (∀□Φ).
Example: computing Sat (∀□b). See the LTS given in Figure 2.11. We compute Sat (∀□b) for an LTS consisting of the states {s1 , s2 , . . . , s7 , s8 } ⊆ S with the labels and transitions as shown in the figure.
The worst-case time complexity of CTL model checking is O(|Φ| · (|S| + |T |)), where |Φ| is the number of operators (as in the computation tree), |S| is the number of states and |T | is the number of transitions. Refer to [6, p. 355].
Chapter 3

Discrete-time Markov chains
P (0, 0) = 0.9, P (0, 1) = 0.1, P (1, 0) = 0.4 and P (1, 1) = 0.6. Note that Equation 3.1 holds: all the values in the matrix are greater than or equal to zero and less than or equal to 1. Equation 3.2 also holds: P (0, 0) + P (0, 1) = 0.9 + 0.1 = 1 and P (1, 0) + P (1, 1) = 0.4 + 0.6 = 1.
ps : N → [0, 1] (3.3)
p : N → [0, 1]^|S| (3.4)
so that p (n) = (p0 (n) , p1 (n) , . . . , p|S|−2 (n) , p|S|−1 (n)) (3.5)

For example, p0 (4) is the chance of being in state 0 after 4 time steps. When we look at the example, see Figure 3.1, let the initial probability distribution be p (0) = (p0 (0) , p1 (0)) = (1, 0), i.e., the execution starts in state 0. Note that the sum of all elements in p (n) should always add up to 1.
3.2 Transient Evolution
Example 9 (Transient Evolution). The transient evolution after one time step is calculated as follows:

p (1) = (0.9 · p0 (0) + 0.4 · p1 (0) , 0.1 · p0 (0) + 0.6 · p1 (0)) = p (0) · P
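The step from p (0) to p (1) is a row-vector/matrix multiplication; a minimal sketch for the two-state example:

```python
P = [[0.9, 0.1],
     [0.4, 0.6]]

def step(p):
    # p(n) = p(n-1) . P  (row vector times matrix)
    n = len(P)
    return [sum(p[i] * P[i][j] for i in range(n)) for j in range(n)]

p0 = [1.0, 0.0]      # the execution starts in state 0
p1 = step(p0)
print(p1)            # [0.9, 0.1]
```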
The probabilities to move from one state to another are given by so-called 1-step transition probabilities. The following equation defines the probability to move from one state to another in a fixed number of steps:

ps (n) = P r{X(n) = s} = Σ_{s0 ∈S} P r{X(0) = s0 } · P r{X(n) = s | X(0) = s0 }

p (n) = p (0) · P^n
Example 10 (Calculating the steady-state). See the example given in Figure 3.1. If we now build a system of linear equations with v (P − I) = 0 (note that v is normalised), we get:

v (P − I) = (v1  v2 ) ( −0.1  +0.1
                         +0.4  −0.4 ) = (0  0)  =⇒  (v1 , v2 ) = (4/5 , 1/5)
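The same steady-state vector can be approximated numerically by power iteration, i.e. by applying p ← p · P until the distribution stops changing (an illustrative sketch, not part of the original notes):

```python
P = [[0.9, 0.1],
     [0.4, 0.6]]

p = [1.0, 0.0]
for _ in range(1000):
    # one transient step; for an aperiodic, irreducible DTMC this
    # converges to the steady-state vector v
    p = [sum(p[i] * P[i][j] for i in range(2)) for j in range(2)]

print(p)  # converges to (4/5, 1/5) = (0.8, 0.2)
```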
Within this section, different classifications are given for the states in a DTMC. For a recurrent state s, the mean number of steps between two successive visits to s is:

ms = Σ_{n=1}^{∞} n · fs (n)
Example 11 (Transient and recurrent states). See Figure 3.2. We have a DTMC with S = {0, 1, . . . , 10, 11}. We define five subsets of S: {S1 , S2 , S3 , S4 , S5 } with S1 = {0, 5}, S2 = {1, 2, 6, 7}, S3 = {3, 4}, S4 = {10}, and S5 = {8, 9, 11}. S2 can be seen as a component of the system which is transient, because it has outgoing connections such that once you take such a path you can never return to any of the states s ∈ S2 . For the other subsets S1 , S3 , S4 and S5 , it holds that once you are in one of the recurrent subsets and take only paths that have an origin in the subset, you can always reach all the states of the subset eventually, i.e., the states S2 = {1, 2, 6, 7} are transient and the other states, S1 ∪ S3 ∪ S4 ∪ S5 = {0, 3, 4, 5, 8, 9, 10, 11}, are recurrent.
A DTMC is called irreducible if from each state s, one can reach any other state in a finite number of steps.
A DTMC is called reducible if it is not irreducible, i.e., there is a state from which one cannot reach every other state in a finite number of steps.
In a finite DTMC, irreducibility implies that all states are recurrent. We define the function ps,s′ for states s, s′ ∈ S as a function returning the probability of going from state s to state s′ in exactly n steps:

ps,s′ : N → [0, 1] (3.10)
ps,s′ (n) = r (3.11)

For example, p0,4 (6) returns the probability of going from state 0 ∈ S to state 4 ∈ S in exactly 6 steps.
So, s is periodic if there exists a d > 1 for which it holds that for all n with n mod d ≠ 0, the probability of returning to s after n steps equals zero, i.e. ps,s (n) = 0. Period d is the greatest common divisor of all values n with ps,s (n) > 0. It also holds that: in a DTMC all states in the same component have the same period.
Example 12. See Figure 3.3. We have a DTMC with S = {0, 1, 2, 3}. The initial state is 0 ∈ S. We want to find all n with ps,s (n) > 0.
Note that there exists a path from state 0 to state 2, to state 3 and to state 0 again. We now know that p0,0 (3) > 0. Since you can repeat this process infinitely often, p0,0 (6) > 0, p0,0 (9) > 0, . . ., we can directly see that the greatest common divisor of all the values of n is 3 here. As 3 > 1, state 0 is periodic. Note that this DTMC is irreducible because you can reach each state from every other state. All the states of this DTMC are in the same component, which is recurrent, so you can conclude that all the states have the same period. We have found out that state 0 has period 3, so the period of all the other states must also be 3! You can compute the transient state probabilities p (n) = p (n − 1) · P with the initial distribution p (0) = (1, 0, 0, 0). It follows: p (1) = (0, 1/3, 2/3, 0), p (2) = (0, 0, 0, 1), p (3) = (1, 0, 0, 0).

It is P = ( 0  1/3  2/3  0
            0   0    0   1
            0   0    0   1
            1   0    0   0 ) .

You can see that also the transient evolution is periodic, because the states are periodic. With a period of 3, you obtain the same transient distribution again. In this case lim_{n→∞} p (n) does not exist.
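The periodic behaviour can be reproduced exactly with fractions (a short illustrative script):

```python
from fractions import Fraction as F

P = [[F(0), F(1, 3), F(2, 3), F(0)],
     [F(0), F(0), F(0), F(1)],
     [F(0), F(0), F(0), F(1)],
     [F(1), F(0), F(0), F(0)]]

p = [F(1), F(0), F(0), F(0)]           # p(0): start in state 0
for n in range(1, 4):
    # p(n) = p(n-1) . P
    p = [sum(p[i] * P[i][j] for i in range(4)) for j in range(4)]

# after three steps the chain returns to its initial distribution:
print(p == [F(1), F(0), F(0), F(0)])   # True
```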
Let s and s′ be mutually reachable from each other. Then it holds that:

s is transient ⟺ s′ is transient
s is null-recurrent ⟺ s′ is null-recurrent
s is positive recurrent ⟺ s′ is positive recurrent
s has period d ⟺ s′ has period d
So, v = π means that there exists only one solution for the following set of equations (see also Equation 3.9):

π (P − I) = 0 and Σ_j πj = 1 (3.14)

πs = 1 / ms (3.15)
Note that the limiting distribution does not need to exist in this case, since the DTMC could be periodic.
For more details on DTMCs and further examples, see [6, pp. 747–780] and [14, pp. 37–43].
Chapter 4

Probabilistic Model checking

This chapter introduces the probabilistic computational tree logic, pCTL. The syntax and semantics of state and path formulas in pCTL are provided in Section 4.1, and Section 4.2 presents algorithms for model checking different kinds of pCTL formulas.
Definition 19 (State formulas for pCTL). A state formula Φ in pCTL is defined as: See Equation 4.1. a ∈ AP is an atomic property (recall Section 2.1). We can use the Boolean operators ¬, ∧ and ∨ to combine several state formulas. p ∈ [0, 1] is the probability, ⊴ is the comparison operator, and P⊴p (φ) indicates that the probability that paths fulfill φ is ⊴ p, where ⊴ can be replaced by >, <, ≥ and ≤.
Definition 20 (Path formulas for pCTL). A path formula φ in pCTL is defined as:

φ ::= X Φ | Φ U ≤k Ψ (4.2)

Recall from Section 2.3 for CTL that X denotes the next operator and U is called the until operator. These path operators always need to be wrapped inside a pCTL state formula Φ. X Φ means that the next state on the path will fulfill Φ.
Φ U ≤k Ψ means that Φ holds on all the states along the path until Ψ holds, in k steps or less. Note that when k is not provided, it means that k = ∞, e.g., Φ U Ψ means that Φ must hold until Ψ holds but the number of steps is not bounded. Also note the following short-hand notations, which are denoted eventually and always, respectively:

◊≤k Ψ = tt U ≤k Ψ ,
□Φ = ¬◊¬Φ .

An additional short-hand notation is the following, denoted as implies:

Φ → Ψ = ¬Φ ∨ Ψ .
Example 13 (pCTL Formulas). Please find below some examples for pCTL formulas, also containing path operators:

The probability of not going down in the next state is at least 95%:
P≥0.95 (X ¬down)

The probability of going down within 5 steps is at most 1%:
P≤0.01 (tt U ≤5 down)

The probability of going down within 5 steps after continuously operating with at least two processors is at most 1%:
P≤0.01 (2up U ≤5 down)

which can also be written as follows, in case more (e.g. four) processors are available:
P≤0.01 ((2up ∨ 3up ∨ 4up) U ≤5 down)
s ⊨ tt
s ⊨ a iff a ∈ L(s)
s ⊨ Φ ∧ Ψ iff s ⊨ Φ and s ⊨ Ψ
s ⊨ ¬Φ iff s ⊭ Φ
s ⊨ P⊴p (φ) iff P r{π ∈ P aths(s) | π ⊨ φ} ⊴ p

The satisfaction set for a state formula Φ is defined as: Sat (Φ) = {s ∈ S : s ⊨ Φ}
4.2 Model checking the non-probabilistic part of pCTL
Read more about pCTL in [6, pp. 780–785] and in [21, pp. 6–8]. (Note that in [21], a different notation is used.)

Sat (a) = {s1 , s2 , . . . , sn−1 , sn } and for every state s ∈ Sat (a) it holds that a ∈ L (s) (4.3)
as the chance of going to a state where Φ holds, starting from the state s ∈ S. The sum Σ_{s′ ∈Sat(Φ)} goes over all the states where Φ holds. Now we can define:
Example 14 (Computing Sat (P≥0.8 (Xb))). See the labeled DTMC in Figure 4.1. Here we have a labeled DTMC with probabilities for the transitions and labels for the states. This DTMC consists of states {s1 , s2 , . . . , s5 , s6 } ⊆ S with transitions {t1 , t2 , . . . , t8 , t9 } ⊆ T and probability matrix P as shown below. L (s1 ) = {}, L (s2 ) = {a}, L (s3 ) = {}, L (s4 ) = {a}, L (s5 ) = {b} and L (s6 ) = {b}.

P = ( 0    0.1  0.9  0    0    0
      0.4  0    0    0.6  0    0
      0    0    0.1  0.1  0.5  0.3
      0    0    0    1    0    0
      0    0    0    0    1    0
      0    0    0    0    0.7  0.3 )
Computing for all states s ∈ S: P rob (s, Xb) = P rob (s, X {s5 , s6 })
4.3 Model checking the probabilistic part of pCTL
If a labeled DTMC becomes very large, a better approach to compute Sat (P≥0.8 (Xb)) is to multiply the probability matrix P with the vector v = (v1 , v2 , . . . , vn−1 , vn )T , where vi is 1 if state si ∈ Sat (Φ) and 0 otherwise. If wi of the vector w = (w1 , w2 , . . . , wn−1 , wn )T = P · v matches the probability bound, it is si ∈ Sat (P⊴p (XΦ)). Computing Sat (P≥0.8 (Xb)) for the example given in Figure 4.1 yields:
P v = ( 0    0.1  0.9  0    0    0      ( 0       ( 0
        0.4  0    0    0.6  0    0        0         0
        0    0    0.1  0.1  0.5  0.3      0    =    0.8   = w    (4.9)
        0    0    0    1    0    0        0         0
        0    0    0    0    1    0        1         1
        0    0    0    0    0.7  0.3 )    1 )       1 )
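The matrix–vector product is easy to reproduce (a minimal illustrative sketch; indices are 0-based):

```python
P = [[0,   0.1, 0.9, 0,   0,   0],
     [0.4, 0,   0,   0.6, 0,   0],
     [0,   0,   0.1, 0.1, 0.5, 0.3],
     [0,   0,   0,   1,   0,   0],
     [0,   0,   0,   0,   1,   0],
     [0,   0,   0,   0,   0.7, 0.3]]

v = [0, 0, 0, 0, 1, 1]   # indicator vector of Sat(b) = {s5, s6}
w = [sum(P[i][j] * v[j] for j in range(6)) for i in range(6)]

# states with w_i >= 0.8 satisfy the bound of P_{>=0.8}(X b)
sat = [f"s{i + 1}" for i in range(6) if w[i] >= 0.8]
print(sat)  # ['s3', 's5', 's6']
```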
In the first case of Equation 5.14, it is considered that if s ∈ Sat (Ψ), then state s fulfills Φ U ≤k Ψ, so P rob (s, Φ U ≤k Ψ) = 1. In the second case, a state can be in Sat (Φ) and in Sat (Ψ). Note that in the first case, we have already captured all the states that are in Sat (Ψ), so we do not need to check these states again: so we consider s ∈ Sat (Φ) \ Sat (Ψ). We are not interested in states that are neither in Sat (Φ) nor in Sat (Ψ), because they cannot fulfill the until property. If k > 0 we need to sum, over all outgoing transitions of state s, the transition probability times the probability of reaching a Ψ-state from the successor state s′ , where Φ holds until Ψ holds in at most k − 1 steps.
The first two cases cover all the possibilities. Thus P rob (s, Φ U ≤k Ψ) = 0 otherwise.
Recall Section 3.2 (transient evolution for DTMCs). First we need to transform the labeled DTMC M into M′ by:

Making all the Ψ-states absorbing, i.e., removing all outgoing transitions and adding a self loop with probability 1.

Making all the states that are not Φ-states or Ψ-states, the (¬Φ ∧ ¬Ψ)-states, absorbing.

It is sufficient to do this for exactly k steps, because all the Ψ-states and (¬Φ ∧ ¬Ψ)-states have been made absorbing, so once we hit one of those states we are sure to stay there indefinitely. In the example below we are going to see how to do the transient analysis.
Example 15 (Computing Sat (P≥0.4 ((green ∨ blue) U ≤3 red))). See the DTMC M given in Figure 4.2. We first transform M to M′ ; the result is shown below. Note that all the Ψ-states, s3 ∈ S, and violating states, {s4 , s5 } ⊆ S, have been made absorbing. Also note that probabilities have been added. Now the transient analysis is:

Compute the probability matrix P for M′ :

P = ( 0    2/3  1/3  0
      1/2  0    1/4  1/4
      0    0    1    0
      0    0    0    1 )

Compute (v1 , v2 , . . . , vn−1 , vn ) · P^k (where k is the until bound) n times, and alter the vector v each time as follows: (1, 0, . . . , 0, 0), (0, 1, . . . , 0, 0), . . . , (0, 0, . . . , 1, 0), (0, 0, . . . , 0, 1):

q1 = (1, 0, 0, 0) · P^3 = (0, 2/9, 11/18, 1/6)
q2 = (0, 1, 0, 0) · P^3 = (1/6, 0, 1/2, 1/3)
q3 = (0, 0, 1, 0) · P^3 = (0, 0, 1, 0)
q4 = (0, 0, 0, 1) · P^3 = (0, 0, 0, 1)
Backwards computation

Again, we first transform the labeled DTMC M into M′ by:

Making all the Ψ-states absorbing, i.e., removing all outgoing transitions and adding a self loop with probability 1.

Making all the states that are not Φ-states or Ψ-states, the (¬Φ ∧ ¬Ψ)-states, absorbing.

In the example below we are going to see how to do the transient analysis.
Example 16 (Computing Sat (P≥0.4 ((green ∨ blue) U ≤3 red))). See the DTMC M given in Figure 4.2.

Compute the probability matrix P for M′ :

P = ( 0    2/3  1/3  0
      1/2  0    1/4  1/4
      0    0    1    0
      0    0    0    1 )

Create the indication vector x = (x1 , x2 , . . . , xn−1 , xn )T : x = (0, 0, 1, 1)T

Compute x := P · xT a total of k times, each time using the result of the previous calculation:

P · (0, 0, 1, 1)T = (1/3, 1/2, 1, 1)T
P · (1/3, 1/2, 1, 1)T = (2/3, 2/3, 1, 1)T
P · (2/3, 2/3, 1, 1)T = (7/9, 5/6, 1, 1)T = x

Note that this method is more efficient than the forward method and the formal method.
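The backward iteration can be reproduced exactly with fractions (illustrative sketch for the matrix above):

```python
from fractions import Fraction as F

# Probability matrix of the transformed DTMC M' from the example.
P = [[F(0), F(2, 3), F(1, 3), F(0)],
     [F(1, 2), F(0), F(1, 4), F(1, 4)],
     [F(0), F(0), F(1), F(0)],
     [F(0), F(0), F(0), F(1)]]

x = [F(0), F(0), F(1), F(1)]   # indicator vector of the absorbing goal states
for _ in range(3):             # k = 3 backward steps: x <- P . x
    x = [sum(P[i][j] * x[j] for j in range(4)) for i in range(4)]

print(x == [F(7, 9), F(5, 6), F(1), F(1)])  # True
```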
x = Px + i (4.12)

where i is the indicator vector i = (i1 , i2 , . . . , in−1 , in )T with ii = 1 if si is a Ψ-state and 0 otherwise. P is here the matrix of the transformed DTMC, in which all outgoing transitions of Ψ-states and of states that are neither Φ-states nor Ψ-states are removed (the corresponding rows are set to zero).

Example 17 (Computing Sat (P>0.8 (a U b))). See the DTMC given in Figure 4.1.
Compute P:

P = ( 0  0.1  0.9  0    0    0
      0  0    0    0    0    0
      0  0    0.1  0.1  0.5  0.3
      0  0    0    0    0    0
      0  0    0    0    0    0
      0  0    0    0    0    0 )

Solve x = Px + i with i = (0, 0, 0, 0, 1, 1)T :

x = (0.8, 0, 8/9, 0, 1, 1)T
x1 = 0.8 ≤ 0.8 =⇒ s1 ∉ Sat (P>0.8 (a U b))
x2 = 0 < 0.8 =⇒ s2 ∉ Sat (P>0.8 (a U b))
x3 = 8/9 > 0.8 =⇒ s3 ∈ Sat (P>0.8 (a U b))
x4 = 0 < 0.8 =⇒ s4 ∉ Sat (P>0.8 (a U b))
x5 = 1 > 0.8 =⇒ s5 ∈ Sat (P>0.8 (a U b))
x6 = 1 > 0.8 =⇒ s6 ∈ Sat (P>0.8 (a U b))
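Instead of solving the linear system directly, x = Px + i can also be approximated by fixed-point iteration (a sketch; 200 iterations are far more than needed here, since the only self-loop probability left in P, the 0.1 on s3, shrinks the error geometrically):

```python
P = [[0.0, 0.1, 0.9, 0.0, 0.0, 0.0],
     [0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
     [0.0, 0.0, 0.1, 0.1, 0.5, 0.3],
     [0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
     [0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
     [0.0, 0.0, 0.0, 0.0, 0.0, 0.0]]

i = [0.0, 0.0, 0.0, 0.0, 1.0, 1.0]   # indicator vector of the b-states
x = i[:]
for _ in range(200):                  # iterate x <- P.x + i
    x = [sum(P[r][j] * x[j] for j in range(6)) + i[r] for r in range(6)]

print([round(v, 6) for v in x])  # [0.8, 0.0, 0.888889, 0.0, 1.0, 1.0]
```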
Chapter 5

CTMCs
The CTMC may contain states that cannot be left anymore. In this case all outgoing rates from this state equal zero. The state is then called absorbing.
Note that CTMCs are aperiodic by definition. CTMCs can have either a finite or a countably infinite state space S. The latter can be used to model systems with infinite server capacity, as for example the infinite-server queue, or models with an infinite buffer. An important difference between finite and infinite-state CTMCs is that the corresponding transition rate matrices of infinite CTMCs are of infinite size. In the following we will deal with finite CTMCs only.
5.2 Paths
While the generator matrix only considers the one-step behavior of the CTMC, the
actual evolution of the CTMC over time is specified in detail with a path. In a path
states and transitions alternate, where the rates between any two successive states
have to be positive to assure that the path can actually be taken.
For an infinite path σ, σ[i] = si denotes for i ∈ N the (i + 1)st state of path σ. The time spent in state si is denoted by δ(σ, i) = ti . Moreover, with i the smallest index with t ≤ Σ_{j=0}^{i} tj , let σ@t = σ[i] be the state occupied at time t. For finite paths σ with length l + 1, σ[i] and δ(σ, i) are defined in the way described above for i < l only, and δ(σ, l) = ∞ and σ@t = sl for t > Σ_{j=0}^{l−1} tj . Path^Q (s) is the set of all finite and infinite paths of the CTMC Q that start in state s, and Path^Q includes all (finite and infinite) paths of the CTMC Q.
Now we need a way to state the probability for a given path to be taken while time proceeds. In order to define such a probability measure Pr on paths for a given initial distribution, we need to define cylinder sets first.
The cylinder set is a set of paths that is defined by a sequence of states and
time intervals of a given length k. The cylinder set then consists of all paths that
visit the states stated in the defining sequence in the right order and that change
states during the specified time intervals. Thus, the first k states of the paths in the
cylinder fit into the special structure specified through the sequence of states and
time intervals, while the further behavior remains unspecified.
with k > 0, and a = inf I ′ and b = sup I ′ . If s is the only possible initial state (i.e., the initial distribution assigns probability 1 to s), we write Prs . Recall that N(sk , s′ ) is the one-step probability in the embedded discrete-time Markov chain, as defined in 5.2. For more details on the probability measure on paths refer to [5] and [9].
5.3 Probabilities
Based on the probability measure on paths, two different types of state probabilities
can be distinguished for CTMCs. Transient state probabilities are presented in
Section 5.3.1 and the steady-state probabilities are presented in Section 5.3.3.
The Poisson probabilities
ψ(λt; n) = e^{−λt} · (λt)^n / n!,   n ∈ ℕ,    (5.6)
state the probability of n events occurring in the interval [0, t) in a Poisson process
with rate λ. Up to time t, exactly n jumps have taken place with probability ψ(λt; n).
The change of probability in the DTMC after n steps can be described by π(0) · P^n.
Thus, by the law of total probability the transient probability vector V(t) is obtained
as the weighted sum Σ_{n=0}^{∞} ψ(λt; n) · π(0) · P^n over all possible numbers of steps. Let π_n be
the state probability distribution vector after n epochs in the DTMC with transition
matrix P, which can be derived recursively:
π_n = π_{n−1} · P,   with π_0 = π(0).
To avoid the infinite series, the sum can be truncated; the resulting error can be
calculated in advance. If, for example, up to K transitions are considered,
V(t) = Σ_{n=0}^{K} ψ(λt; n) · π_n + ε(K),    (5.9)
where ε(K) is the error that occurs when the series is truncated after K steps:
ε(K) = ‖ Σ_{n=K+1}^{∞} ψ(λt; n) · π_n ‖ ≤ Σ_{n=K+1}^{∞} e^{−λt} (λt)^n / n! = 1 − Σ_{n=0}^{K} e^{−λt} (λt)^n / n!.    (5.10)
Furthermore, it is possible to precompute the finite number of steps K for a given time
instant t and a required accuracy ε. Thus, the smallest K is needed that satisfies
Σ_{n=0}^{K} (λt)^n / n! ≥ (1 − ε) · e^{λt}.    (5.11)
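The truncated sum (5.9) together with the stopping criterion (5.11) translates directly into code. The following Python sketch is an illustration, not taken from the notes; it assumes the uniformization rate λ is chosen as the largest exit rate and accumulates Poisson weights until their total mass reaches 1 − ε:

```python
import math
import numpy as np

def transient(Q, pi0, t, eps=1e-10):
    """Transient distribution V(t) of a CTMC via uniformization.

    Uses the uniformized DTMC P = I + Q/lam with lam the largest exit
    rate, and truncates the Poisson-weighted sum at the smallest K
    whose accumulated weight reaches 1 - eps (criterion (5.11))."""
    Q = np.asarray(Q, dtype=float)
    lam = max(-Q[i, i] for i in range(len(Q)))
    P = np.eye(len(Q)) + Q / lam
    pi_n = np.asarray(pi0, dtype=float)   # pi_0 = pi(0)
    weight = math.exp(-lam * t)           # psi(lam*t; 0)
    v = weight * pi_n
    acc, n = weight, 0
    while acc < 1.0 - eps:
        n += 1
        weight *= lam * t / n             # psi(lam*t; n) from psi(lam*t; n-1)
        pi_n = pi_n @ P                   # pi_n = pi_{n-1} P
        v = v + weight * pi_n
        acc += weight
    return v
```

Note that for large λt the initial weight e^{−λt} underflows; production implementations (e.g. the Fox-Glynn algorithm) avoid this, but the sketch keeps the plain recursion for clarity.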
Example 19 (Uniformization). Figure 5.2 shows a simple CTMC. Let the initial
distribution be π(0) = [1, 0].
5.3.3 Steady-state probability
The steady-state probabilities are defined as π(s, s′) = lim_{t→∞} π(s, s′, t)
and indicate the probabilities to be in some state s′ in the long run. For CTMCs
with a finite state space, the steady-state probabilities always exist. Furthermore,
if the CTMC is strongly connected, the initial state does not influence the steady-
state probabilities as the probability distribution does not depend on the progress in
time (we therefore often write π(s′) instead of π(s, s′) for brevity). The steady-state
probability vector π then follows from the possibly infinite system of linear equations
and its normalization:
π · Q = 0,   and   Σ_s π_s = 1.    (5.13)
For finite CTMCs this system of linear equations can be solved with numerical
means known from linear algebra [22].
Example 20 (Steady-state probabilities for strongly-connected CTMCs). Consider
the example CTMC M = (S, R, L) in Figure 5.1 above. Recall that the square
generator matrix Q was:
Q = [ −5    4    1
      10  −10    0
       0    4   −4 ]
From Q and Equation 5.13 we can derive the following system of linear equations:
−5π_0 + 10π_1 = 0
4π_0 − 10π_1 + 4π_2 = 0
π_0 − 4π_2 = 0
π_0 + π_1 + π_2 = 1
The first and third equations yield
π_1 = (5/10) π_0 = (1/2) π_0,
π_2 = (1/4) π_0.
Substituting into the normalization equation gives
π_0 + (1/2) π_0 + (1/4) π_0 = 1,
and hence
π_0 = 4/7,   π_1 = 2/7,   π_2 = 1/7.
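The small system above can also be solved numerically along the lines of [22], for instance with NumPy: replace one balance equation by the normalization condition to obtain a non-singular system. A sketch, not part of the notes:

```python
import numpy as np

# Steady-state of the example CTMC: solve pi * Q = 0 with sum(pi) = 1.
Q = np.array([[-5.0,   4.0,  1.0],
              [10.0, -10.0,  0.0],
              [ 0.0,   4.0, -4.0]])

A = Q.T.copy()        # transpose: now A @ pi = 0 encodes pi * Q = 0
A[-1, :] = 1.0        # replace one (redundant) equation by normalization
b = np.zeros(3)
b[-1] = 1.0
pi = np.linalg.solve(A, b)
print(pi)             # -> approximately [0.5714, 0.2857, 0.1429] = [4/7, 2/7, 1/7]
```

Replacing a balance equation by the normalization works because the balance equations of an irreducible CTMC are linearly dependent (they sum to zero), so one of them carries no extra information.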
For CTMCs with a graph that is not strongly connected, we define a bottom
strongly connected component (BSCC) as a maximal strongly connected component
of the graph without any outgoing transitions to states of another component
[5]. Note that there might be incoming transitions. Hence, the transient states are not
contained in any BSCC, and every absorbing state forms a BSCC on its own.
For a single BSCC B ⊆ S of a CTMC M = (S, R, L), the steady-state probabil-
ities can be computed according to Equation 5.13. Let Pr{reach B from s} denote
the probability of eventually reaching a state in B when starting from state s. It can be
determined by solving the following system of equations:
Pr{reach B from s} = 1,   if s ∈ B,
Pr{reach B from s} = Σ_{s′∈S} P(s, s′) · Pr{reach B from s′},   otherwise.    (5.14)
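Equation (5.14) is a linear system in the unknowns Pr{reach B from s}: the value is 1 on B, 0 on every other BSCC, and for the transient states the system can be solved directly. The following sketch is an illustration only; the embedded DTMC P below is invented for this purpose (two transient states and two absorbing BSCCs):

```python
import numpy as np

# Hypothetical embedded DTMC: transient states {0, 1}, target BSCC
# B = {2}, and one further absorbing BSCC {3}.
P = np.array([[0.0, 0.5, 0.3, 0.2],
              [0.4, 0.0, 0.6, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])
B = [2]
transient_states = [0, 1]

# Restrict Equation (5.14) to the transient states:
# x_s = sum_{s' transient} P(s, s') x_{s'} + sum_{s' in B} P(s, s')
T = P[np.ix_(transient_states, transient_states)]
b = P[np.ix_(transient_states, B)].sum(axis=1)
x = np.linalg.solve(np.eye(len(transient_states)) - T, b)
print(x)   # -> reach probabilities from s0 and s1: [0.75, 0.9]
```

The restriction to transient states is what makes the system non-singular: applying (5.14) literally to an absorbing non-target state would give the vacuous equation x = x.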
Only the states s_1, s_2 and s_5 hold the label a. Since s_2 is a transient state, we are
only interested in B_1 and B_3. The steady-state probability for state s_1 is:
π^M(s_0, s_1) = (2/3) · 1 = 2/3.
The steady-state probability for state s_5 is:
π^M(s_0, s_5) = (1/6) · (2/3) = 1/9.
The overall steady-state probability of the a-states then amounts to
2/3 + 1/9 = 7/9.
Chapter 6
CSL
In this chapter the logic CSL is presented as a formalism to specify complex prop-
erties on states and paths of CTMCs. In Section 6.1, the syntax and semantics of
CSL formulas are given. Section 6.2 summarizes how the next operator, the time
bounded until, the interval until and the point interval until are model checked on
finite CTMCs. In Section 6.3, we discuss the general model checking routine via
satisfaction sets.
Example 22 (CSL Formulas). Some examples of CSL state formulas are:
"The probability of going down within 10 time units after having continuously
operated with at least 2 processors is at most 1%."
For a CSL state formula Φ and a CTMC M, the satisfaction set Sat(Φ) contains
all states of M that fulfill Φ. Satisfaction is stated in terms of a satisfaction relation,
denoted |=, as follows.
where π^M(s, Sat(Φ)) = Σ_{s′ ∈ Sat(Φ)} π^M(s, s′), and Prob^M(s, φ) describes the proba-
bility measure of all paths σ ∈ Path(s) that satisfy φ when the system starts
in state s, that is, Prob^M(s, φ) = Pr{σ ∈ Path^M(s) | σ |= φ}.
The steady-state operator S_{⋈p}(Φ) denotes that the steady-state probability for
Φ-states meets the bound ⋈ p. P_{⋈p}(φ) asserts that the probability measure of the
paths satisfying φ meets the bound ⋈ p.
the time interval until Φ U^I Ψ with I = [t_1, t_2] for t_1, t_2 ∈ ℝ_{>0} and t_1 < t_2.
Note that the path formula Φ U^I Ψ is not satisfiable for I = ∅. For a more detailed
description of CSL, see [5].
In [5], it is shown that model checking the time bounded until, the interval until,
and the point interval until can be reduced to the problem of computing transient
probabilities for CTMCs. The idea is to use a transformed CTMC in which several
states are made absorbing. As introduced in [5], this proceeds as follows:
The CSL path formula φ = Φ U^[0,t] Ψ is valid if a Ψ-state is reached before time
t via some Φ-path (that is, a path via only Φ-states). As soon as a Ψ-state is reached,
the future behavior of the CTMC is irrelevant for the validity of φ. Thus all Ψ-states
can be made absorbing without affecting the satisfaction set of formula φ. As soon
as a state is reached that satisfies neither Φ nor Ψ, the formula φ can no longer be
fulfilled, so these states can be made absorbing as well.
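The transformation described above, followed by transient analysis, can be sketched in code. This is an illustration, not taken from the notes; the transient computation reuses uniformization as in Section 5.3.2:

```python
import math
import numpy as np

def prob_until(Q, phi, psi, t, eps=1e-10):
    """Prob(s, Phi U^[0,t] Psi) for every state s: make Psi-states and
    (not Phi and not Psi)-states absorbing, then compute the transient
    probabilities of the transformed CTMC by uniformization."""
    Q = np.array(Q, dtype=float)
    n = len(Q)
    for s in range(n):
        if psi[s] or not phi[s]:       # absorbing in the transformed CTMC
            Q[s, :] = 0.0
    lam = max(1.0, max(-Q[i, i] for i in range(n)))  # uniformization rate
    P = np.eye(n) + Q / lam            # uniformized DTMC
    M = np.eye(n)                      # M = P^k
    V = np.zeros((n, n))
    w = math.exp(-lam * t)             # Poisson weight psi(lam*t; k)
    acc, k = w, 0
    V += w * M
    while acc < 1.0 - eps:
        k += 1
        w *= lam * t / k
        M = M @ P
        V += w * M
        acc += w
    return V @ np.array(psi, dtype=float)  # mass in Psi-states at time t
```

For a two-state chain with rate 3 from an (Φ, ¬Ψ)-state s_0 to an absorbing Ψ-state s_1, this returns 1 − e^{−3t} for s_0, as expected for a single exponential jump before time t.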
For the interval until with time bound I = [t_1, t_2], 0 < t_1 ≤ t_2, we again follow
the idea of CSL model checking. It is important to note that
Prob(s, Φ U^[t_1,t_2] Ψ) ≠ Prob(s, Φ U^[0,t_2] Ψ) − Prob(s, Φ U^[0,t_1] Ψ).
For model checking a CSL formula that contains the interval until operator, we
need to consider all possible paths that start in a Φ-state at the actual time instance
and reach a Ψ-state during the time interval [t_1, t_2] by only visiting Φ-states on
the way. We can split such paths into two parts: the first part models the path from
the starting state s to a Φ-state s′, and the second part the path from s′ to a Ψ-
state s″ only via Φ-states. We therefore need two transformed CTMCs: M[¬Φ] and
M[¬Φ ∨ Ψ], where M[¬Φ] is used in the first part of the path and M[¬Φ ∨ Ψ] in the
second. In the first part of the path, we only proceed along Φ-states; thus all states
that do not satisfy Φ do not need to be considered and can be made absorbing. As
we want to reach a Ψ-state via Φ-states in the second part, we can make all states
that do not fulfill Φ absorbing, because we cannot proceed along these states, and
also all states that fulfill Ψ, because we are done as soon as we reach such a state.
In order to calculate the probability for such a path, we accumulate the multi-
plied transition probabilities for all triples (s, s′, s″), where s′ |= Φ and is reached
before time t_1, and s″ |= Ψ and is reached before time t_2 − t_1. This can be done
because we use CTMCs that are time homogeneous.
Proposition 3 (Interval until [5]). For any CTMC M = (S, R, L) and 0 < t_1 < t_2:
Prob^M(s, Φ U^[t_1,t_2] Ψ) = Σ_{s′ |= Φ} Σ_{s″ |= Ψ} π^{M[¬Φ]}(s, s′, t_1) · π^{M[¬Φ∨Ψ]}(s′, s″, t_2 − t_1).    (6.5)
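Proposition 3 can be turned into a two-phase computation: transient analysis of M[¬Φ] up to t_1, followed by transient analysis of M[¬Φ ∨ Ψ] for the remaining t_2 − t_1. The following self-contained sketch is an illustration only (the toy two-state CTMC in the note below is invented):

```python
import math
import numpy as np

def transient_matrix(Q, t, eps=1e-12):
    """All-pairs transient probabilities pi(s, s', t) via uniformization."""
    Q = np.array(Q, dtype=float)
    n = len(Q)
    lam = max(1.0, max(-Q[i, i] for i in range(n)))
    P = np.eye(n) + Q / lam
    M, V = np.eye(n), np.zeros((n, n))
    w = math.exp(-lam * t)
    acc, k = w, 0
    V += w * M
    while acc < 1.0 - eps:
        k += 1
        w *= lam * t / k
        M = M @ P
        V += w * M
        acc += w
    return V

def absorbing(Q, states):
    """Copy of Q in which the given states are made absorbing."""
    Q = np.array(Q, dtype=float)
    for s in states:
        Q[s, :] = 0.0
    return Q

def prob_interval_until(Q, phi, psi, t1, t2):
    """Prob(s, Phi U^[t1,t2] Psi) for every state s, via Equation (6.5)."""
    n = len(Q)
    not_phi = [s for s in range(n) if not phi[s]]
    # first phase: M[not Phi] -- only Phi-states may be traversed
    V1 = transient_matrix(absorbing(Q, not_phi), t1)
    # second phase: M[not Phi or Psi] -- additionally stop in Psi-states
    absorb2 = not_phi + [s for s in range(n) if psi[s]]
    V2 = transient_matrix(absorbing(Q, absorb2), t2 - t1)
    total = np.zeros(n)
    for s1 in range(n):
        if not phi[s1]:
            continue
        for s2 in range(n):
            if psi[s2]:
                total += V1[:, s1] * V2[s1, s2]
    return total
```

For a toy chain with a single rate-2 transition from a Φ-state s_0 to an absorbing Ψ-state s_1, the result for s_0 is e^{−2t_1} − e^{−2t_2}, the probability that the jump falls inside [t_1, t_2], which matches the direct calculation.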
The point interval until can then be seen as a simplification of the interval until,
where the second part of the computation does not need to be considered. The CSL
path formula φ = Φ U^[t,t] Ψ is valid if a Ψ-state is reached at time t via only Φ-
states; hence all states that do not satisfy Φ do not need to be considered and can
be made absorbing. In the goal state s′ both Φ and Ψ have to be valid.
6.3 General model checking routine
Bibliography
[16] H. Hermanns, J.-P. Katoen, J. Meyer-Kayser, and M. Siegle. A tool for model-
checking Markov chains. International Journal on Software Tools for Technol-
ogy Transfer, 4(2):153–172, 2003.
[18] J.-P. Katoen. Concepts, Algorithms, and Tools for Model Checking. Arbeits-
berichte des IMMD 32(1), Friedrich-Alexander-Universität Erlangen-Nürnberg,
1999.
[19] J.-P. Katoen, M. Khattri, and I. Zapreev. A Markov Reward Model Checker.
In Proc. Int. Conference on the Quantitative Evaluation of Systems (QEST),
pages 243–244. IEEE Press, 2005.
[23] K. Trivedi. Probability and Statistics with Reliability, Queueing and Computer
Science Applications. John Wiley & Sons, 2002.