
Model-Based Testing: FSM-Based Testing

Instructor: Rachida Dssouli Email: dssouli@ciise.concordia.ca Office: EV 007.648 URL: http://www.ciise.concordia.ca/~dssouli

October, 2007

Outline
Protocol testing
Concepts, fault models, related definitions, general approach
Methodologies based on FSM:
  T-Method (transition tour method)
  D-Method (distinguishing sequences)
  W-Method (characterizing sequences)
  U-Method (unique input/output sequences)

Introduction and motivation


Testing in the software development cycle
The software development cycle
Development of test cases (starting during the analysis phase)
Analysis of test results, the oracle, diagnostics
What results can we expect from testing?
  Testing vs. verification
  Finite test suite vs. infinite behavior
Definition of a test suite
Conformance relations: what is a correct implementation?
The coverage problem and fault models
Defining the correct behavior: modeling and specification languages


Why do we test ?
for detecting errors in the implementation (debugging)
for demonstrating conformance to a specification or to users' needs
  e.g. protocol conformance testing
for proving correctness!!

Against what are we testing?

Specifications:
  Users' needs (requirements)
  Objectives (specific)
  Informal specification
  Formal specification
System Under Test

The answer will help the test team establish a clear relationship between the system under test, the specification, and the objective to satisfy.

Correctness and how to achieve it


How do we achieve the correctness of a given system? What is the impact of this process on the final software product?
The alternatives:
  testing with coverage
  program proving (using a theorem prover)
  exhaustive testing
The choice among these alternatives is based on:
  cost (a function of several parameters: time, resources, human expertise, ...)
  feasibility of a proof or of exhaustive testing
  the target quality

Models of Specification and Implementation


Conformance testing
[Figure: a precise specification S and an implementation I, each abstracted to a model under assumptions (test hypotheses); a conformance relation links the abstract model of S to the abstract model of I, and correspondingly the specification to the implementation.]

Distinguishing the non-conforming implementations


[Figure: the universe of all possible implementations of a given system, split into conforming and non-conforming implementations; the fault model delimits the implementations considered. Non-conforming implementations are either detected by the test suite (fail TS) or not detected (pass TS).]

Question: How to choose a small (finite) test suite TS and obtain the maximum power of error detection?

Protocol Conformance Testing


To confirm whether an implementation conforms to its standard
An external tester applies a sequence of inputs to the IUT and verifies its behavior
  Issue 1: preparing conformance tests that cover all aspects of the IUT
  Issue 2: the time required to run the tests should not be unacceptably long

Two main limitations


Controllability: the IUT cannot be directly put into a desired state; this usually requires several additional state transitions
Observability: the external tester cannot directly observe the state of the IUT, which is critical for a test to detect errors

Formal conformance testing techniques based on FSM


Generate a set of input sequences that force the FSM implementation to undergo all specified transitions
Black-box approach: only the outputs generated by the IUT (upon receipt of inputs) are observable to the external tester

Fault Models
A fault model is a hypothetical model of what types of faults may occur in an implementation
Most fault models are "structural", i.e. the model is a refinement of the specification formalism (or of an implementation model)
E.g. mutations of the specification or of a correct implementation

It may be used to construct the fault domain used for defining what "complete test coverage" means
E.g. single fault hypothesis (or multiple faults)

A fault model is useful for the following problems:


Test suite development for a given coverage objective
Formalization of "test purposes"
For an existing test suite: coverage evaluation and optimization
Diagnostics

Fault Model for FSM


Output fault: the machine produces an output different from the one specified by the output function
Transfer fault: the machine enters a different state than that specified by the transfer function
Transfer faults with additional states: the number of states of the system is increased by the presence of faults; the additional states model certain types of errors
Additional or missing transitions: a basic assumption is that the FSM is deterministic and completely defined (fully specified), so these faults occur when the implementation turns out to be non-deterministic and/or incompletely (partially) specified
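The output and transfer faults above can be enumerated mechanically. Below is a minimal sketch, assuming a hypothetical encoding in which `delta` maps (state, input) to (next_state, output); the function and variable names are illustrative, not from the slides.

```python
def single_fault_mutants(delta, states, outputs):
    """Yield (kind, faulty_transition, mutant) for every single-fault mutant."""
    for key, (nxt, out) in delta.items():
        for o in outputs:                      # output faults: wrong output symbol
            if o != out:
                m = dict(delta); m[key] = (nxt, o)
                yield ("output fault", key, m)
        for s in states:                       # transfer faults: wrong tail state
            if s != nxt:
                m = dict(delta); m[key] = (s, out)
                yield ("transfer fault", key, m)

# Toy two-state machine, only to exercise the generator.
delta = {("A", "x"): ("B", "0"), ("B", "x"): ("A", "1")}
print(sum(1 for _ in single_fault_mutants(delta, ["A", "B"], ["0", "1"])))  # -> 4
```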

Fault Models for FIFO Queue and Petri Nets


FSM with several FIFO input queues
Ordering fault: FIFO ordering is not preserved, or, in the case of multiple input queues, an input event enters the wrong input queue
Maximum-length fault: the maximum queue length implemented is less than the one specified, or an input event gets lost although the queue has not overflowed
Flow-control fault: ordering or loss errors occur when the number of submitted input events exceeds the specified maximum queue length

Petri Nets
Input or output arc fault: one of the input or output arcs is connected to the wrong place, missing, or exists in addition to those specified Missing or additional transition: the number of transitions is not the same as in the specification

FSM Related Definitions (1/2)


Directed graph G = (V, E) representing FSM M
The set of vertices V = {v1, v2, ..., vn} represents the set of states S in M
A directed edge (vi, vj) ∈ E represents a transition from state si to state sj in M
An edge in G is represented by a triple (vi, vj, L), where L = ai/oi is the input/output operation corresponding to the transition from si to sj in M

Some other definitions & assumptions


Deterministic FSM: predictable behavior in a given state for a given input
Strongly connected: for each state pair (si, sj) there is a transition path from si to sj, i.e. each state can be reached from any other state
Fully specified: from each state there is a transition for each input symbol; otherwise the machine is partially specified
Minimal: the number of states of M is less than or equal to the number of states of any equivalent machine
(A sketch checking some of these properties follows.)
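A minimal sketch of checking these properties, under the same hypothetical dict encoding (`delta[(state, input)] = (next_state, output)`); minimality is omitted since it requires full state-equivalence machinery.

```python
from collections import deque

def is_fully_specified(delta, states, inputs):
    # A dict is deterministic by construction; fully specified means
    # every (state, input) pair has exactly one transition.
    return all((s, a) in delta for s in states for a in inputs)

def is_strongly_connected(delta, states):
    """Every state reachable from every other state (BFS from each state)."""
    succ = {s: set() for s in states}
    for (s, _a), (t, _o) in delta.items():
        succ[s].add(t)
    for start in states:
        seen, todo = {start}, deque([start])
        while todo:
            for t in succ[todo.popleft()] - seen:
                seen.add(t); todo.append(t)
        if seen != set(states):
            return False
    return True
```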

FSM Related Definitions (2/2)


Start state s0 ∈ S, usually the state at power-up
Often there is a special input taking M to state s0 from any other state with a single transition; in this case M is said to have the reset capability, and the input performing the reset is denoted "r"

Sequences for testing


A test subsequence of M is a sequence of input symbols for testing either a state or a transition of M
A β-sequence for M is a concatenation of test subsequences for testing all transitions of M
A test sequence for M is a sequence of input symbols that can be used to test conformance of implementations of M against the specification of M
An optimized test sequence is a test sequence such that no subsequence of it is completely contained in any other subsequence

So the problem is how to obtain an "optimized test sequence" for M

Transition Level Approach


The methods for protocol conformance test sequence generation
Produce a test sequence which checks the correctness of each transition of the FSM implementation
By no means exhaustive, i.e. there is no guarantee that the implementation exhibits correct behavior for every possible input sequence; the intent is to design a test sequence which establishes correctness "beyond a reasonable doubt"

Three basic steps for checking a transition (si, sj; L), L = ak/ol
Step 1: the FSM implementation is put into state si (e.g. reset + transfer)
  The difficulty in realizing this is due to the limited controllability of the implementation
Step 2: input ak is applied and the output is checked to verify that it is ol, as expected
Step 3: the new state of the FSM implementation is checked to verify that it is sj, as expected
  The difficulty in verifying this is due to the limited observability of the implementation
(A sketch of this three-step check follows.)
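A minimal sketch of the three-step check, assuming a hypothetical black-box interface (`reset()` / `step(input) -> output`); the simulator class stands in for a real IUT driven through its points of control and observation.

```python
class SimulatedIUT:
    """Toy black box driven by a transition table (illustration only)."""
    def __init__(self, delta, s0):
        self.delta, self.s0, self.state = delta, s0, s0
    def reset(self):
        self.state = self.s0
    def step(self, x):
        self.state, out = self.delta[(self.state, x)]
        return out

def check_transition(iut, preamble, a, expected_out, ident_in, ident_out):
    iut.reset()                          # step 1: reset + transfer to s_i
    for x in preamble:
        iut.step(x)
    if iut.step(a) != expected_out:      # step 2: apply a_k, check o_l
        return "fail (output fault)"
    observed = [iut.step(x) for x in ident_in]
    if observed != ident_out:            # step 3: identify the tail state s_j
        return "fail (transfer fault)"
    return "pass"
```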

Testing based on Finite State Models


The finite state machine (FSM) model
An infinite fault model
Conformance relations: based on I/O sequences
Testing based on FSM specifications:
  fault model
  test derivation methods: Transition Tour, state identification methods
  fault coverage guarantees
Overview and assumptions:
  testing based on partially specified behavior
  testing against non-deterministic specifications
  testing non-deterministic FSMs with input queuing
  coverage analysis

FSM
S1 is the initial state.
[Figure: an FSM with states S1..S4 and transitions t1: 1/1 (S1→S2), t2: 2/2, t3: 1/1, t4: 2/2, t5: 1/2, t6: 2/2, t7: 1/2, t8: 2/2.]
t1: 1/1 is a transition: it has a starting state S1 and an ending state S2; its label is t1; the input is 1 and the output is 1; "/" separates the input from the output.

An FSM Example
Mealy machine M = <S, S1, X, Y, Ds, δ, λ>
  S: state set; S1: initial state; X: input set; Y: output set
  Ds ⊆ S × X: specification domain
  δ: Ds → S: transfer function
  λ: Ds → Y: output function
S = {S1, S2, S3, S4}, X = {1, 2}, Y = {1, 2}, Ds = S × X − {<S3, 1>}
The machine is partially defined (specified), deterministic, and initialized.
[Figure: the example FSM; the transition t5: 1/? from S3 is left undefined.]

Fault Model for Finite State Machine (FSM)

1) Output fault: point (a) in the FSM fault model
2) Transfer fault: point (b) in the FSM fault model
3) Transfer fault with additional states: point (c) in the FSM fault model
4) Additional or missing transitions: point (d) in the FSM fault model
5) Additional or missing states

Output Fault on transition t1

[Figure: the specification, with t1: 1/1 from S1 to S2, and the implementation under test (IUT), where t1 is implemented as 1/2: an output fault on t1.]

Transfer fault on t2
The ending state is now S3

[Figure: the specification and the IUT; in the IUT, transition t2: 2/2 now ends in state S3 instead of the specified tail state.]

Transfer fault on t5 with Additional state


[Figure: the specification and the IUT; in the IUT, transition t5: 1/2 leads to an additional state not present in the specification.]

Example of implementation with additional state


[Figure: a specification with states S0, S1, S2 over inputs a, b, c and outputs e, f; Impl. 1 with states I0, I1, I2; and Impl. 2 with an additional state I3.]

Example of a test suite


[Figure: the example FSM (states S1..S4).]
A test suite is a set of input sequences starting from the initial state of the machine.
TS = {r.1.1.2.1, r.2.2.1.2.2}

Test case    | Output of MS | Output of a conforming MI | Output of a non-conforming MI
r.1.1.2.1    | 1.1.2.2      | 1.1.2.2                   | 1.1.2.2
r.2.2.1.2.2  | 2.2.1.2.2    | 2.2.1.2.2                 | 2.2.2.2.2
Verdict      |              | passes TS                 | fails TS

Possible changes made by a developer


Type 1: change the tail state of a transition
Type 2: change the output of a transition
Type 3: add a transition
Type 4: add an extra state
No limitation on the number of such changes allows for an infinite set of possible implementations!
[Figure: the example FSM with t5: 1/? undefined.]

Fault model for FSM specifications


[Figure: a protocol FSM with states s1..s4 and transitions CR/ICONind, ICONresp/CC, DT0/IDATind,AK0, DT0/AK0, DT1/IDATind,AK1, DT1/AK1 (data transfer between s3 and s4), and IDISreq/DR from several states.]
For a given transition:
  change the output (output fault)
  change the next state (transfer fault)
  if a new state can be added, then assume an upper bound on the number of states in implementations

Mutations: for the example above, there are (|S|·|O|)^(|S|·|I|) = (4·7)^(4·5) = 28^20 mutants with up to 4 states. Among them, 36 mutants represent single (output or transfer) faults, as only 9 transitions are specified.
An example of a very specific fault domain: only the transitions related to data transfer may be faulty. These are 4 transitions, which results in only 28^4 mutants (faulty implementations).
[Figure: the data-transfer transitions DT0/IDATind,AK0; DT0/AK0; DT1/IDATind,AK1; DT1/AK1 between states s3 and s4.]

Example of fault detection by the TS


[Figure: fault detection by the test suite. The specification and a conforming mutant M1 produce the trace 2/2.1/1.2/2 (M1 is quasi-equivalent, ≤qe); a non-conforming mutant M2 produces 2/2.1/1.2/1 on the same inputs (not ≤qe), so the test detects it.]

Test Derivation Methods

T-Method: Transition Tour Method [Nait 81]


For a given FSM S, a transition tour is a sequence which takes the FSM from the initial state s0, traverses every transition at least once, and returns to the initial state s0
A straightforward and simple scheme; the new state of the FSM is not checked

Fault detection power


Detects all output errors
There is no guarantee that all transfer errors are detected

The problem of generating a minimum-cost test sequence using the transition tour method is equivalent to the so-called "Chinese Postman" problem in graph theory
First studied by the Chinese mathematician Kuan Mei-Ko in 1962
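A greedy sketch of tour construction under the same assumed dict encoding; it is not minimal (the minimum-cost tour is the Chinese Postman solution) and presumes a strongly connected, fully specified machine.

```python
from collections import deque

def shortest_path(delta, src, dst):
    """BFS input sequence taking the machine from src to dst."""
    seen, todo = {src}, deque([(src, [])])
    while todo:
        s, path = todo.popleft()
        if s == dst:
            return path
        for (t, a), (u, _o) in delta.items():
            if t == s and u not in seen:
                seen.add(u); todo.append((u, path + [a]))
    return None  # unreachable; cannot happen if strongly connected

def transition_tour(delta, s0):
    pending, state, tour = set(delta), s0, []
    while pending:
        # walk to the closest still-uncovered transition, then take it
        path, key = min(((shortest_path(delta, state, s), (s, a))
                         for (s, a) in pending), key=lambda x: len(x[0]))
        tour += path + [key[1]]
        pending.discard(key)
        state = delta[key][0]
    return tour + shortest_path(delta, state, s0)  # close the tour at s0
```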

T-Method Example 1
The implementation I1 contains an output error. Our transition tour will detect it

The specification S. A transition tour is a,a,a,b,b,b

The implementation I2 contains a transition error. Our transition tour will not detect it.

DS-method [Gonenc 70]


An input sequence is a distinguishing sequence (DS ) for an FSM S, if the output produced by the FSM S is different when the input sequence is applied to each different state. A DS is used as a state identification sequence.

Detects all output errors
Detects all transfer errors
A DS may not exist for a given FSM
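Whether a candidate sequence is a DS can be checked by running it from every state and requiring pairwise-distinct output sequences. A sketch under the assumed dict encoding; the demo machine mirrors the S0/S1/S2 example used later in these slides, for which a.b is a DS.

```python
def run(delta, s, seq):
    """Output sequence produced from state s on input sequence seq."""
    out = []
    for a in seq:
        s, o = delta[(s, a)]
        out.append(o)
    return tuple(out)

def is_distinguishing(delta, states, seq):
    outs = [run(delta, s, seq) for s in states]
    return len(set(outs)) == len(states)   # pairwise-distinct responses

delta = {("S0", "a"): ("S0", "0"), ("S0", "b"): ("S2", "1"),
         ("S2", "a"): ("S2", "0"), ("S2", "b"): ("S1", "0"),
         ("S1", "a"): ("S1", "1"), ("S1", "b"): ("S0", "0")}
print(is_distinguishing(delta, ["S0", "S1", "S2"], "ab"))  # True: 0.1, 1.0, 0.0
```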


DS method Example
[Figure: the specification S (states 1, 2, 3 over inputs a, b and outputs x, y) and an implementation Impl. I2 containing a transfer error.]

The specification S: a distinguishing sequence is b.b. If we apply it from state 1 we obtain y.y; from state 2, y.x; from state 3, x.y.
A test case which allows detection of the transfer error is a.b.b.b. Applied from the initial state of the specification we obtain x.x.y.y; applied to the implementation we obtain x.x.x.x.

DS method
[Figure: the specification S from the previous slide (states 1, 2, 3).]

Phase 1: identification of all states (state cover). From state 1 we can reach state 2 with b/y and state 3 with a/x. We assume a reset exists; Q = {ε, a, b}; DS = b.b
Test suite = {r.b.b, r.a.b.b, r.b.b.b}
Phase 2: cover all transitions for output faults and transfer faults. P = {ε, a, b, a.b, a.a, b.b, b.a}
Test suite = {r.b.b, r.a.b.b, r.b.b.b, r.a.b.b.b, r.a.a.b.b, r.b.b.b.b, r.b.a.b.b}


General methodology for state identification based methods

A) Test generation based on Specification


A-1) Find the Q set (state cover): minimal input sequences that reach each state from the initial one
A-2) Find the P set (transition cover): covers all remaining transitions

Generate Test Suites using Q and P sets

B) Fault detection
B-1) Apply the generated test suites to the specification to obtain the expected outputs
B-2) Apply the generated test suites to the implementation to obtain the observed outputs
Compare the expected and observed outputs (test results): if they differ, the verdict is fail; otherwise it is pass for the applied test suites (a sketch follows)
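A minimal sketch of steps B-1/B-2 and the verdict, assuming the dict encoding of the earlier sketches and an implicit reset (every run starts from the initial state); all names are illustrative.

```python
def run_from_reset(delta, s0, seq):
    s, out = s0, []
    for a in seq:
        s, o = delta[(s, a)]
        out.append(o)
    return out

def verdict(spec, spec_s0, impl, impl_s0, test_suite):
    for t in test_suite:
        expected = run_from_reset(spec, spec_s0, t)   # B-1: expected outputs
        observed = run_from_reset(impl, impl_s0, t)   # B-2: observed outputs
        if expected != observed:
            return ("fail", t, expected, observed)
    return ("pass",)
```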

UIO-Method [Sabnani 88] and UIOv-Method [Vuong 89]

The UIO-method can be applied if, for each state of the specification, there is an input sequence such that the output produced by the machine, when started in that state, differs from that of every other state.
The UIOv-method is a variant of the UIO-method. It checks the uniqueness of the applied identification sequences on the implementation: each identification sequence is applied in each state of the implementation, and the outputs are compared with those expected from the specification.
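One way to compute a UIO sequence is a breadth-first search over input sequences until the target state's response is unique. A bounded-search sketch under the same assumed encoding (completely specified machine; the bound `max_len` is an assumption):

```python
from itertools import product

def run(delta, s, seq):
    out = []
    for a in seq:
        s, o = delta[(s, a)]
        out.append(o)
    return tuple(out)

def uio(delta, states, inputs, target, max_len=4):
    for n in range(1, max_len + 1):          # shortest sequences first
        for seq in product(inputs, repeat=n):
            ref = run(delta, target, seq)
            if all(run(delta, s, seq) != ref for s in states if s != target):
                return seq                   # unique response: a UIO for target
    return None                              # none found up to max_len
```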

UIO Example
[Figure: the specification S (states 1, 2, 3 over inputs a, b and outputs x, y).]

The UIO sequences are: state 1: a.b; state 2: a.a; state 3: a
A transition cover set is P = {ε, a, a.b, a.a, b, b.a, b.b}
The test sequences generated by the UIO-method are: r.a.b, r.a.a, r.a.b.a.b, r.a.a.a.a, r.b.a.a, r.b.a.a.b, r.b.b.a

The specification S

We assume the existence of a reset transition with no output (r/-) leading to the initial state for every state of S

Method W [Chow 78]


The W-method involves two sets of input sequences:
  the W-set, a characterization set of the minimal FSM, consisting of input sequences that can distinguish between the behaviors of every pair of states
  the P-set, a set of input sequences such that for each transition from state A to state B on input x there are sequences p and p.x in P, where p takes the FSM from the initial state into state A

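A characterization set can be sketched as the union of shortest separating sequences over all state pairs; a minimal, completely specified machine is assumed, and the bound `max_len` is an assumption. For the example below it should reproduce W = {a, b}.

```python
from itertools import product

def run(delta, s, seq):
    out = []
    for a in seq:
        s, o = delta[(s, a)]
        out.append(o)
    return tuple(out)

def characterization_set(delta, states, inputs, max_len=4):
    W = set()
    for i, s in enumerate(states):
        for t in states[i + 1:]:
            for n in range(1, max_len + 1):          # shortest first
                sep = next((seq for seq in product(inputs, repeat=n)
                            if run(delta, s, seq) != run(delta, t, seq)), None)
                if sep:
                    W.add(sep)                       # separating sequence for (s, t)
                    break
    return W
```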

W method Example
[Figure: the specification S (states 1, 2, 3; inputs a, b, c; outputs e, f).]
A characterization set is W = {a, b}:
  W1 for state 1: a/e
  W2 for state 2: a/f, b/f
  W3 for state 3: b/e
  W = union of all Wi

A transition cover set for the specification S is P = {ε, a, b, c, b.a, b.b, b.c, c.a, c.b, c.c}
The P set is not unique: you may select b as preamble instead of a

The specification S

We assume the existence of a reset transition with no output (r/-) leading to the initial state for every state of S

The W-method generates the following test sequences (P.W): r.a, r.b, r.a.a, r.a.b, r.b.a, r.b.b, r.c.a, r.c.b, r.b.a.a, r.b.a.b, r.b.b.a, r.b.b.b, r.b.c.a, r.b.c.b, r.c.a.a, r.c.a.b, r.c.b.a, r.c.b.b, r.c.c.a, r.c.c.b

Wp method [Fujiwara 90]

This method is a generalization of the UIOv-method and is always applicable. It is at the same time an optimization of the W-method: the main advantage of the Wp-method over the W-method is a shorter test suite. Instead of using the whole set W to check each reached state si, only a subset of W is used in certain cases. This subset Wi depends on the reached state si and is called an identification set for state si.
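The two Wp phases can then be assembled mechanically from the Q, P−Q, and Wi sets. The sketch below hard-codes the sets from the example on the next slides; the `tail` map, giving the state reached by each preamble, is read off the subscripts there.

```python
def wp_suite(Q, P_minus_Q, Wi, tail):
    join = lambda *xs: ".".join(x for x in xs if x)   # drop the empty preamble
    phase1 = [join("r", q, w) for q in Q for w in Wi[tail[q]]]
    phase2 = [join("r", p, w) for p in P_minus_Q for w in Wi[tail[p]]]
    return phase1, phase2

# Sets taken from the Wp example in these slides (states 1..3):
Wi = {1: ["a"], 2: ["c"], 3: ["b"]}
tail = {"": 1, "b": 2, "c": 3, "a": 2, "b.c": 2, "b.a": 1,
        "b.b": 3, "c.a": 3, "c.c": 2, "c.b": 1}
p1, p2 = wp_suite(["", "b", "c"],
                  ["a", "b.c", "b.a", "b.b", "c.a", "c.c", "c.b"], Wi, tail)
print(p1)  # ['r.a', 'r.b.c', 'r.c.b']
print(p2)  # ['r.a.c', 'r.b.c.c', 'r.b.a.a', 'r.b.b.b', 'r.c.a.b', 'r.c.c.c', 'r.c.b.a']
```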

Example of Wp method (1/3)


Derivation of W: output table of the specification

input | state 1 | state 2 | state 3
  a   |    e    |    f    |    f
  b   |    f    |    f    |    e
  c   |    e    |    f    |    e

[Figure: the specification S. We assume the existence of a reset transition with no output (r/-) leading to the initial state for every state of S.]
For state 1: a/e; for state 2: c/f; for state 3: b/e
The identification sets are:
  W1 = {a}: distinguishes state 1 from all other states
  W2 = {c}: distinguishes state 2 from all other states
  W3 = {b}: distinguishes state 3 from all other states

Example of Wp method (2/3)


W1: {a/e}, W2: {c/f}, W3: {b/e}
A state cover set for the specification S is Q = {ε, b, c}
A transition cover set for the specification S is P = {ε, a, b, b.c, b.a, b.b, c, c.a, c.c, c.b}
P − Q = {a, b.c, b.a, b.b, c.a, c.c, c.b}
Based on these sets, the Wp-method yields the following test sequences:
Phase 1: Q.Wi = {r.a1, r.b.c2, r.c.b3} (the subscript indicates the expected ending state, whose identification set Wi is applied)
Phase 2: (P−Q).Wi = {r.a.c2, r.b.c.c2, r.b.a.a1, r.b.b.b3, r.c.a.b3, r.c.c.c2, r.c.b.a1}

Example of Wp method (3/3)


[Figure: a faulty implementation I. It contains a transfer error 2 -a/f-> 1 (fat arrow) instead of 2 -a/f-> 2 as defined in the specification S.]
The application of the test sequences obtained in Phase 2 leads to the following output sequences:
  r.a.c2, r.b.c.c2, r.b.a.a1, r.b.b.b3, r.c.a.b3, r.c.c.c2, r.c.b.a1
  S: -.e.f  -.f.f.f  -.f.f.e  -.f.f.e  -.e.f.e  -.e.e.f  -.e.e.e
  I: -.e.f  -.f.f.f  -.f.f.e  -.f.f.e  -.e.f.f  -.e.e.f  -.e.e.e
The fifth observed output (-.e.f.f for r.c.a.b) differs from the one expected from the specification (-.e.f.e); therefore the transfer error in the implementation is detected by this test sequence.

Wrong example and trade-off for the Wp method


Derivation of W for the same machine:

input | state 1 | state 2 | state 3
  a   |    e    |    f    |    f
  b   |    f    |    f    |    e

[Figure: the specification S; a reset transition with no output (r/-) leading to the initial state is assumed for every state of S.]
A characterization set for the W-method is W = {a, b}:
  for state 1: a/e
  for state 2: {a/f, b/f}; two sequences, which will increase the size of the test suite. That is why c/f should be selected as the identification set for state 2.
  for state 3: b/e
The identification sets are:
  W1 = {a}: distinguishes state 1 from all other states
  W2 = {a, b}: distinguishes state 2 from all other states, but is not optimized
  W3 = {b}: distinguishes state 3 from all other states

Examples


All state identification Methods

Distinguishing sequence, UIO, W. Test hypotheses:
H1) strongly connected machine
H2) no equivalent states
H3) deterministic
H4) completely specified machine
H5) failures that increase the number of states do not occur
The method is applied in two phases from the initial state:
Phase 1) α-sequences: check that each state defined by the specification also exists in the implementation
Phase 2) β-sequences: check all individual transitions of the specification for correct output and transfer in the implementation


DS Method
Assume that a reset transition r/- exists.
Q1) Verify whether a.a is a DS for S and explain why.
Q2) Find a DS of length 2 for S different from a.a.

[Figure: the specification S with states S0, S1, S2, S4 and transitions over inputs a, b.]


W method

Assume that a reset exists and brings the machine from any state to the initial state.
a) Find the characterization set W and generate the set of test cases for the specification S using the W-method.
b) Does S have a DS? If not, explain why.

[Figure: the specification S with states S0, S1, S2 over inputs a, b.]

W method
[Figure: the specification S (states S0, S1, S2).]
S0: b/1; S1: a/1; S2: a/0, b/0
W = ∪ Wi = {a, b}
Q = {ε, a, b} (state cover)
P = {ε, a, b, a.b, a.a, b.a, b.b} (transition cover)
P − Q = {a.b, a.a, b.a, b.b}
P − Q is used in the two phases, with the α- and β-sequences, to avoid redundancy.
Phase 1: Q.W = {r.a, r.b, r.a.a, r.a.b, r.b.a, r.b.b}
Phase 2: (P−Q).W = {r.a.b.a, r.a.b.b, r.a.a.a, r.a.a.b, r.b.a.a, r.b.a.b, r.b.b.a, r.b.b.b}


Examples (continued)
Transition tour:
  Input:  a.b.a.b.a.b
  Output: 0.1.0.0.1.0
[Figure: the specification S with states S0, S1, S2: a loops on every state (a/0 at S0, a/1 at S1, a/0 at S2); b/1 takes S0 to S2, b/0 takes S2 to S1, and b/0 takes S1 back to S0.]

Derive a DS of length up to 2 for S: a.b is a DS for S

input | S0  | S1  | S2
  a   |  0  |  1  |  0
  b   |  1  |  0  |  0
 a.b  | 0.1 | 1.0 | 0.0

Comment: input a at each state loops on that state, so a sequence a.a... cannot be a DS: its output would be 0.0... or 1.1...

Q set: permits reaching each state from the initial state. Q = {ε, b, b.b}: the first b reaches state S2; b.b reaches state S1.
P set (transition cover): permits executing each transition at least once, starting from the initial state.
How to derive the P set: find all paths of size 1 and up from the initial state such that each transition is traversed at least once.
[Figure: the derivation tree rooted at S0, branching on inputs a and b.]
P = {ε, a, b, b.a, b.b, b.b.a, b.b.b}
More than one P set may exist, depending on the alternative paths the automaton has.

The goal of Phase 1 is identification of the states in the implementation.
DS = a.b, Q = {ε, b, b.b}, P = {ε, a, b, b.a, b.b, b.b.a, b.b.b}
Phase 1: Q.DS = {r.a.b, r.b.a.b, r.b.b.a.b}
Expected outputs of Phase 1: {-.0.1, -.1.0.0, -.1.0.1.0}

Phase 2 (DS appended to each element of P):
P.DS = {r.a.b, r.a.a.b, r.b.a.b, r.b.a.a.b, r.b.b.a.b, r.b.b.a.a.b, r.b.b.b.a.b}
Expected outputs: {-.0.1, -.0.0.1, -.1.0.0, -.1.0.0.0, -.1.0.1.0, -.1.0.1.1.0, -.1.0.0.0.1}

[Figure: the specification S (states S0, S1, S2).]
Note that the test suites for Phases 1 and 2 are derived from the specification and applied to the implementation to check it for output and transfer faults.

Specification S vs. implementation I
[Figure: the specification S and an implementation I that differs by a transfer fault (the tail state of one b-transition is changed).]
Apply the transition tour to the implementation I and comment:
  Input: a.b.a.b.a.b
  Output of S: 0.1.0.0.1.0
  Output of I: 0.1.0.0.1.0
The implementation I has a transfer fault, but the fault is not detected by the transition tour: the transition tour detects all output faults but does not guarantee the detection of transfer faults.

[Figure: the implementation I.]
The goal of Phase 1 is identification of the states in the implementation.
DS = a.b, Q = {ε, b, b.b}, P = {ε, a, b, b.a, b.b, b.b.a, b.b.b}
Phase 1: Q.DS = {r.a.b, r.b.a.b, r.b.b.a.b}
Expected outputs: {-.0.1, -.1.0.0, -.1.0.1.0}
Observed outputs from I: {-.0.1, -.1.0.0, -.1.0.1.0}
Phase 2 (DS appended): P.DS = {r.a.b, r.a.a.b, r.b.a.b, r.b.a.a.b, r.b.b.a.b, r.b.b.a.a.b, r.b.b.b.a.b}
Expected outputs: {-.0.1, -.0.0.1, -.1.0.0, -.1.0.0.0, -.1.0.1.0, -.1.0.1.1.0, -.1.0.0.0.1}
Observed outputs from I: {-.0.1, -.0.0.1, -.1.0.0, -.1.0.0.0, -.1.0.1.0, -.1.0.1.1.0, -.1.0.0.0.0}
The last observed output differs: the transfer fault is detected.

Specification S
[Figure: the specification S with states S0, S1, S2 over inputs a, b, c; e.g. c/1 and a/0 at S0, a/1 at S2, with b/0 and c/0 transitions among the states.]
Derive a UIO sequence for S:

input | S0  | S1  | S2
  a   |  0  |  0  |  1
  b   |  0  |  0  |  0
  c   |  1  |  0  |  0
 a.c  | 0.1 | 0.0 | 1.1

UIO for state S0: c/1
UIO for state S2: a/1
UIO for state S1: a/0.c/0
Transition tour for S:
  Input:  a.b.a.b.c.a.c.b.c
  Output: 0.0.0.0.0.1.1.0.0

U-Method: Unique Input/Output Sequences


In the DS and CS methods, the state-identification requirement is too strong: they answer the question "what is the current state of the implementation?"
For testing it is sufficient to know that an error has been detected

UIO sequence of a state of a FSM


An I/O behavior that is not exhibited by any other state of the FSM
It answers the question "is the implementation currently in state x?"

Advantages against DS & CS


Cost is never more than a DS and in practice is usually much less (shorter)
Nearly all FSMs have UIO sequences for each state
A DS is the same for all states; a UIO sequence is normally different for each state

To check state s by using UIO sequence of s


Apply the input part of the UIO and compare the output sequence with the expected one
If they are the same, the FSM is in state s; otherwise it is not in state s
If not in state s, there is no information about the identity of the actual state s'

Analysis
Fault coverage
Fault coverage of the D-, W-, and U-methods is better than that of the T-method
Fault coverage of the D-, W-, and U-methods is the same

Summary
All four methods assume a minimal, strongly connected, fully specified Mealy FSM model of protocol entities
On average, the T-method produces the shortest test sequence and the W-method the longest; the D- and U-methods generate test sequences of comparable lengths
T-method test sequences can detect output faults but not transfer faults
The D-, W-, and U-methods are capable of detecting all kinds of faults and give the same performance
The U-method attracts more and more attention, and several approaches build on its basic idea with improvements

Examples


DS method Example
The test cases are:
  state 1: a.b.b, b.b.b
  state 3: a.a.b.b, a.b.b.b
  state 2: b.a.b.b, b.b.b.b

Test case structure: preamble . tested transition . state identification

Transition Tour example


Test hypothesis: initially connected machine
[Figure: an FSM with states S1..S4 and transitions t1: a/1, t2: b/2, t3: a/1, t4: b/2, t6: b/2, t7: a/2, t8: b/2, t9: a/2.]
Transition tour TT: t1, t4, t3, t9, t2, t3, t6, t7, t8
TT (input/expected output): a/1.b/2.a/1.a/2.b/2.a/1.b/2.a/2.b/2


Testing Assumptions and Hypothesis


Objective: to reduce the set of test cases

Assumptions about specifications:
  completeness: completely specified or partially specified
  connectedness: strongly connected or initially connected
  reducibility: reduced or non-reduced
  determinism: deterministic or non-deterministic

Assumptions about implementations

Deterministic
Completely defined: reacts to any input
Limited extra states
Reliable reset (not necessary)
[Figure: the example FSM (states S1..S4) extended with reset transitions r/- from every state back to the initial state S1.]

Regularity, a testing assumption


This type of assumption allows limiting testing to a finite set of behaviors for systems that exhibit infinite behavior. Examples are programs (or specifications) with loops and integer input and output parameters, finite state machines, and reactive systems in general.
Principle: assume that the implementation has a regular behavior, i.e. the number of control states of the implementation is limited. If this number is not bigger than the corresponding number of states of the specification, then all loops (of the specification) have to be tested only once. This is the idea behind the FSM fault model, where the number of implementation states is limited to n, or to some number m > n. It is also the idea behind certain approaches for testing program loops and for testing with respect to specifications in the form of abstract data types.

Independency, a testing assumption


Principle: the different submodules of the system under test are independent, and faults in one module do not affect the possibility of detecting faults in the other modules.
This is a controversial assumption: in most complex systems, modules or components are dependent, because they share resources (e.g. memory) and they have explicit interactions.
Example: several connections supported by a protocol entity; test only one connection in detail (it is independent of the others); the others need not be tested, since they are all equal (uniformity assumption, see below).

Independency (continued)
The independency relation is a reasonable assumption in certain cases.
Example: [Figure: test equipment connected to Entity N+1, which accesses several instances of Entity N through their SAPs.]

Uniformity, a testing assumption


Uniformity assumption / congruence. Origin: partition testing [Weyuker 91].
Principle: there exist similar behaviors; if they are grouped under an equivalence relation, it is sufficient to test one behavior of each equivalence class for conformance testing.
Special cases:
  principle of partition testing: apply a test for at least one representative of each partition of the input domain (software testing, EFSM testing)
  equivalent actions for EFSMs
  equivalent states for FSMs

Fairness in respect to non-determinism


Many systems have a non-deterministic nature. In particular, the parallelism of distributed systems introduces many possible interleavings of the individual actions within the different system components. The assumption is that the execution paths effectively exercised during testing cover all paths that are pertinent for detecting the possible implementation faults.
[Figure: non-determinism: from state s1, input a may produce output 1, 2, or 4, leading to different states (s2, s3, s4).]

Partially defined FSMs


Non-specified transitions need not be tested. However, different interpretations of undefinedness have an impact on testing:
  completeness assumption: a non-specified transition is implicitly defined, e.g. stay in the same state (as in SDL) or go to an error state; methods for completely defined FSMs may be applied, but tests will rely on the implied transitions (a sketch follows)
  don't care: no specific behavior is specified; non-specified transitions must be avoided by test cases; robustness tests may be applied to check the reaction of the implementation in non-specified situations
  forbidden: it is not possible to invoke non-specified transitions
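As a small illustration of the completeness assumption, a sketch that mechanically completes a partial machine, either SDL-style (stay in the same state with a null output) or with an explicit error state; the encoding and names are assumptions, as in the earlier sketches.

```python
def complete(delta, states, inputs, mode="stay"):
    """Fill in the non-specified transitions of a partial FSM.
    mode='stay'  -> undefined input loops in the same state, null output '-'
    mode='error' -> undefined input goes to a trap 'ERROR' state"""
    full = dict(delta)
    for s in states:
        for a in inputs:
            if (s, a) not in full:
                full[(s, a)] = (s, "-") if mode == "stay" else ("ERROR", "-")
    if mode == "error":
        for a in inputs:                 # make the trap state complete too
            full[("ERROR", a)] = ("ERROR", "-")
    return full
```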

Fault Coverage Evaluation

Methods for Fault Coverage Evaluation


The definition of fault coverage always depends on the fault model!
Exhaustive mutation analysis
Monte-Carlo simulation method (a sketch follows)
Deciding completeness: minimize an FSM given in the form of the TS; if its minimal form is equivalent to the given FSM, then the TS is complete (assuming the maximum number of states), otherwise it is not [see Yao]
Structural analysis: evaluates the fault coverage of a given test suite by directly analyzing the test suite against the given FSM; counts the number of states distinguished and transitions checked by the test suite; a numeric measure that is easy to evaluate (linear complexity) [see Yao]
Different possible measures:
  compare the number of implementations (common approach)
  compare the log of the number of implementations (corresponds to counting covered transitions) [called order coverage by Yao]
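A sketch of the Monte-Carlo idea: sample random single-fault mutants and measure the fraction killed by the test suite. The encoding, the fault distribution, and all names are illustrative assumptions; at least two states and two outputs are assumed.

```python
import random

def run(delta, s, seq):
    out = []
    for a in seq:
        s, o = delta[(s, a)]
        out.append(o)
    return out

def random_mutant(delta, states, outputs, rng):
    """One random single-fault mutant (output or transfer fault)."""
    m = dict(delta)
    key = rng.choice(sorted(m))
    nxt, out = m[key]
    if rng.random() < 0.5 and len(outputs) > 1:      # output fault
        m[key] = (nxt, rng.choice([o for o in outputs if o != out]))
    else:                                            # transfer fault
        m[key] = (rng.choice([s for s in states if s != nxt]), out)
    return m

def estimated_coverage(delta, s0, states, outputs, suite, trials=1000, seed=1):
    rng = random.Random(seed)
    killed = sum(
        any(run(m, s0, t) != run(delta, s0, t) for t in suite)
        for m in (random_mutant(delta, states, outputs, rng)
                  for _ in range(trials)))
    return killed / trials                            # killed-mutant ratio
```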

Test Architectures
How do we stimulate protocol entities for testing purposes ?

OSI Terminology

Conformance Testing Terminology


ASP: Abstract Service Primitive
PCO: Point of Control and Observation
IUT: Implementation Under Test
PDU: Protocol Data Unit
A PCO maps to a SAP (Service Access Point) in the OSI reference model
The PCO has two FIFO queues: send (from the tester to the IUT) and receive (by the tester from the IUT)

Conceptual Test Architecture (1/3)

Several LTs and UTs can be used simultaneously

Conceptual Test Architecture (2/3)


Testing contexts:
  Single-party testing: the IUT communicates with exactly one real open system, represented by a single lower tester
  Multi-party testing: the IUT communicates with multiple real open systems, represented by more than one lower tester
  The configuration of IUT components can be homogeneous or heterogeneous
The lower tester (LT) controls and observes the IUT's lower service boundary, indirectly, via the underlying service provider:
  in single-party testing it behaves as the peer entity of the IUT
  in multi-party testing the LTs act as peer entities working in parallel
The lower tester control function (LTCF) coordinates all LTs:
  it assigns the test case verdicts
  mandatory in the multi-party context, inapplicable in the single-party context

Conceptual Test Architecture (3/3)


The upper tester (UT) controls and observes the IUT's upper service boundary, by operator access, API, or hardware interface:
  in the single-party context, the UT behaves as a user of the IUT
  in the multi-party context, UTs working in parallel act as users of the IUT
Test coordination procedures (TCPs) ensure cooperation between the UTs and LTs:
  how the tester shall respond
  passing (preliminary) results
  synchronisation
  TCP here is NOT the Transport Control Protocol of TCP/IP

ATM Classification
ATMs for multi-party testing
Several parallel upper and lower testers
In complex situations an upper tester control function (UTCF) is needed
Special cases include only one upper tester, or even no upper tester at all

ATMs for single-party testing


Local Test Method (L): Upper Tester and Lower Tester in the test system
Distributed Test Method (D): Upper Tester in the SUT, Lower Tester in the test system
Co-ordinated Test Method (C): as above, but using a Test Management Protocol
Remote Test Method (R): Lower Tester in the test system, no Upper Tester

Test Case

Recall the service primitives: request, indication, response, confirm

Local Test Method

Upper Tester is located in the test system
Requires an upper interface on the IUT; the IUT is built into the tester
No ATSs (abstract test suites) exist for this method
Good for testing a hardware component; example: Ethernet driver

Local Test Method

Distributed Test Method

UT in the SUT, LT remote
Requires synchronization
Suitable for upper-layer protocols / protocols offering an API; example: socket communication

Distributed Test Method

Co-ordinated Test Method

UT in the SUT but without access, LT remote
No assumption of an upper interface to the IUT
Uses only one PCO, below the LT
Uses a Test Management Protocol (TMP) embedded in ASPs
Suitable for mid-layer protocols

Co-ordinated Test Method

Remote Test Method

No Upper Tester; the upper tester role can be played by a native application or a user-accessible interface
Manual co-ordination
Limited, but easy to use

Remote Test Method

ATMs Put Together

References (1/2)
C. E. Chow. Introduction to protocol engineering. 2004. cs.uccs.edu/~cs522/pe/
G.O. Chistokhvalov. Communication software and architecture, lecture notes. 2002. www.it.lut.fi/kurssit/02-03/010607000/index_eng.html
G.J. Holzmann. Design and validation of computer protocols. Chapters 8-11. Prentice-Hall. 1991. ISBN 0-13-539925-4. spinroot.com/spin/Doc/Book91.html
A. Petrenko. Introduction to the theory of experiments on finite state machines, lecture notes. 2003. www.bretagne.enscachan.fr/DIT/People/Claude.Jard/sem_13_05_2003_petrenko_trans.pdf
Igor Potapov. Protocol engineering, lecture notes. 2004. www.csc.liv.ac.uk/~igor/COMP201/
Chris Ling. The Petri Net method, lecture notes. 2001. www.csse.monash.edu.au/courseware/cse5510/Lectures/lecture2b.ppt
Gabriel Eirea. Petri nets, lecture notes. UC Berkeley. 2002. www.cs.unc.edu/~montek/teaching/spring-04/petrinets.ppt
T.-Y. Cheung. Petri nets for protocol engineering. Computer Communications, vol. 19, 1996: 1250-1257
R. Zurawski and M.C. Zhou. Petri nets and industrial applications: a tutorial. IEEE Trans. Industrial Electronics, vol. 41, no. 6, 1994: 567-583

References (2/2)

T. Murata. Petri nets: properties, analysis and applications. Proceedings of the IEEE, vol. 77, no. 4, 1989: 541-580
G.V. Bochmann and R. Gotzhein. Deriving protocol specifications from service specifications. ACM Trans. on Computer Systems, vol. 8, no. 4, 1990: 255-283
R.L. Probert and K. Saleh. Synthesis of communication protocols: survey and assessment. IEEE Trans. Computers, vol. 40, no. 4, 1991: 468-476
Mark Claypool. Modeling and performance evaluation of network and computer systems, lecture notes. 2004. www.cs.wpi.edu/~claypool/courses/533-S04/
R. Dssouli and F. Khendek. Test development for distributed systems. 2000. www.ece.concordia.ca/~dssouli/Testing.pdf
R. Lai. A survey of communication protocol testing. Journal of Systems and Software, 62, 2002: 21-46
G.V. Bochmann and A. Petrenko. Protocol testing: review of methods and relevance for software testing. Proc. ACM ISSTA, Seattle, Washington, USA, 1994: 109-124
A.T. Dahbura, K.K. Sabnani, and M.U. Uyar. Formal methods for generating protocol conformance test sequences. Proceedings of the IEEE, vol. 78, no. 8, 1990: 1317-1326
D.P. Sidhu and T.-K. Leung. Formal methods for protocol testing: a detailed study. IEEE Trans. Software Engineering, vol. 15, no. 4, 1989: 413-426
