
28

December 2014

The Magazine for Professional Testers


The Three Pillars of
Agile Quality and Testing
by Robert Galen

Testing the Internet of Things – The Future is Here
by Venkat Ramesh Atigadda

and many more


Agile Testing Day in Utrecht


The Agile Testing Day is a one-day conference for and by international software testing professionals involved in agile work processes. Join this event with your agile team to learn and network with your peers.

Utrecht, The Netherlands – March 19, 2015
Catch the Early Bird by January 16!

www.agiletestingday.nl
Follow us on Twitter: @Agile_NL

Dear readers,
We are facing the end of the year 2014 and, when we look back, we see a great year behind us. The magazine met the expectations set by the editorial team. We have increased the quality of the articles, we have more readers worldwide, and we made some small changes to the structure and the website. Even though we are happy with these changes, we want to improve your experience further and are therefore planning new ones for next year.
The conferences we organized ran amazingly well. The first edition of Mobile App Europe was impressive, with a lot of new insights into the mobile business, and we have already started planning for next year. Our main conference, Agile Testing Days, was the best ever and we are very proud of having had so many talented speakers at it. The attendees had a lot of fun and learned a lot. It was also a pleasure to have had the exclusive release of the new book by Lisa Crispin and Janet Gregory. More Agile Testing is a milestone in the agile world. The sister conference Agile Testing Days Netherlands that took place earlier this year was a success, and we are working hard to give you a great experience again on March 19, 2015 in Utrecht, The Netherlands. Please visit the website www.agiletestingdays.nl.
I want to thank all the authors, sponsors, and partners for their support in issuing the magazine. A special thank-you goes to Konstanze, who laid out the magazine for the last time!
Last, but not least, I wish you a Merry Christmas and a Happy New Year!

All the best,


José Díaz


contents 28/2014

From the Editor................................................................................................1
Agile – Is It Worth It? Advantages of Using It..........................................4
by Antonio González & Rubén Fernández
Test-Driven Developments are Inefficient;
Behavior-Driven Developments are a Beacon of Hope?
The StratEx Experience – Part II..................................................................8
by Rudolf de Schipper & Abdelkrim Boujraf
A Report about Non-Agile Support for Agile...........................................12
by Martin Uhlig
PERFORMANCE COLUMN: Exploratory Performance Testing............14
by Alex Podelko
What Developers and Testers Need to Know about the
ISO 27001 Information Security Standard...............................................16
by Klaus Haller
What Makes a Good Quality Tester?........................................................20
by Jacqueline Vermette
Ready for CAMS? The Foundation of DevOps.........................................22
by Wolfgang Gottesheim
Combinatorial Testing Tools......................................................................24
by Danilo Berta
A Continuous Integration Test Framework............................................30
by Gregory Solovey & Phil Gillis
The Three Pillars of Agile Quality and Testing.......................................34
by Robert Galen
Serious Games for Testers...........................................................................37
by Cesario Ramos & Pascal Dufour
Testing the Internet of Things (IoT) – the Future is Here....................40
by Venkat Ramesh Atigadda
Organize Your Testing Using Test Varieties and Coverage Types......42
by Rik Marselis & Bert Linker
Reasoning the Next Version of Test Automation Myths.....................46
by Sujith Shajee
Testing Enterprise Applications: Bring-Your-Own-Crowd..................48
by Philipp Benkler
A Unified Framework for All Automation Needs – Part III.................50
by Vladimir Belorusets, PhD
Book Corner...................................................................................................52
Masthead.......................................................................................................C3
Editorial Board..............................................................................................C3
Picture Credits...............................................................................................C3
Index of Advertisers.....................................................................................C3

By Antonio González & Rubén Fernández

Agile – Is It Worth It? Advantages of Using It


Why Did Agility Appear?
Around 50 years ago, code was written without any plan and the architecture design was determined by many short-term decisions.
This worked well for small systems, but as they grew it became harder
to add new features and to fix bugs.
After some years, methodologies were introduced in software development to solve these issues. Methodologies are strict processes whose aim is to make software development more efficient and predictable. Traditional
methodologies (for instance Waterfall) are plan-driven and require
a big effort at the beginning of the project in order to define requirements and architecture properly. As you may notice, these processes
can be frustrating and not very open to changes.
Nowadays, technology and software applications evolve quickly, faster than we expect. Therefore, time-to-market is critical in determining
whether a product will succeed or fail. Reaching the market before your
competitors might actually mean victory. Thus, it is very important
to have the right methodology that embraces and responds to the
continuous changes we are experiencing. That is the main reason why,
in 1975, practices based on iterative enhancements were introduced.
In other words, let's call it agility.

What Are Agile's Main Characteristics?

The Agile Manifesto (www.agilemanifesto.org) states that the Agile framework is based on valuing:
Individuals and interactions over processes and tools.
Working software over comprehensive documentation.
Customer collaboration over contract negotiation.
Responding to change over following a plan.
What this means is that the Agile framework focuses on working software rather than a definition of strict requirements. Another pillar of its philosophy is the granted autonomy and shared responsibility of all individuals within the team. It means not only thinking about the client, but also motivating and involving programmers, analysts, and QA engineers to work towards a common goal.

What Is It about Agile That Is so Interesting?

With this brief description, we start to observe why Agile is so important: response to changes. Often, new companies do not know very well what their clients want or how to define their roadmap, hence pivoting and iterating until they reach their expected results is almost mandatory. Agile development allows small companies to refine their products and goals on the go.
Nonetheless, Agile also works well in big companies. Multinationals are more than ever required to move fast and adapt to the new environment. Besides, as we all know, the customer is always right. So why leave them out of the development process? Agile involves the client in the project, so companies can understand better and in greater detail what the customer wants.

Figure: Project outcomes according to the CHAOS Manifesto 2012 – Waterfall: 14% successful, 57% challenged, 29% failed; Agile: 42% successful, 49% challenged, 9% failed.

But these are not the only reasons why Agile is important. There are plenty more. Below are several explanations of why it is reasonable and suitable to use Agile in software development, from different perspectives and points of view.

What Advantages Does Agile Have for Product Managers?

Product managers would like to know exactly what their customers want, but this is a hard task and unlikely to happen. Agile provides the appropriate framework for adapting the product to the customers' actual needs. There is no need to define the product perfectly at the beginning, but, as iterations are done, it is easy to get feedback from clients and refine the product, implementing only those features that provide value.
Furthermore, Agile is well-known for its transparency. Product owners are always aware of what is being done and what actions are being taken by the development team. With Agile, product owners do not need to wait until the end of the project to know what the team has implemented.

What Advantages Does Agile Have for Analysts?


Imagine you could gather data and valuable information about a
product before its final version is released. If you are a data scientist,
this may sound perfect to you. That is what Agile gives analysts: continuous information from real clients, and a real product before it is completely implemented.

What Advantages Does Agile Have for Developers?


Developers are the core of an agile team. Therefore, it is highly important to provide the right tools and methodologies so they can do
a good job. Agile gives freedom to developers to estimate and write
code as they prefer, and motivates people to share how things are
done and work as a team.
With traditional methodologies, software engineers often feel they
mostly do work that has no meaning for the client, or that this work will
be removed from the final product. Agile focuses on doing tasks that
provide value to the customer, so wasting time and effort on writing
useless code is minimized.
Finally, in Agile there are no senior or junior levels. Everyone is a team
member, so everyone's opinion is valuable. Agile helps people to share
their opinions so the whole process may benefit and improve.

What Advantages Does Agile Have for QA Departments?

With traditional methodologies, QA engineers were left out of the project until the product was about to be released. QA activities were understood as a one-time action at the end of the project, and this obviously generates big risks and uncertainty about the product's quality. In traditional methodologies, black box testing over the user interface was prioritized, and every test case and test plan had to be well documented.
The introduction of Agile frameworks benefitted the role of QA engineers, as they became more relevant. They need to be pro-active from the start of a project by developing proper measures to assure the quality of the product. In general, the QA role in Agile should:
Help Business Analysts to define stories and their acceptance criteria, so they know whether they are satisfying customer requirements (a sketch of such acceptance criteria follows this list).
Integrate with the development team to assess the adoption of code standards and the improvement of the code base through refactoring.
Provide developers with high-level test cases and scenarios for the stories before coding.
Respond to change over following a plan.
Ensure black box testing over the user interface and white box testing to gain knowledge of the internal workings of the application.
Increase automated testing, so the team's speed will increase.
Introduce quality control into every iteration.
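
To make the acceptance criteria point concrete, here is a minimal, hypothetical Gherkin sketch of story-level acceptance criteria; the feature, steps, and figures are invented purely for illustration and are not taken from any specific project.

  Feature: Apply a discount code at checkout
    As a registered customer,
    I want to apply a discount code to my cart,
    So that I pay the reduced price.

    Scenario: A valid discount code reduces the total
      Given my cart contains items worth 100.00 EUR
      And the discount code "WELCOME10" grants a 10% reduction
      When I apply the discount code "WELCOME10"
      Then the order total should be 90.00 EUR

    Scenario: An expired discount code is rejected
      Given my cart contains items worth 100.00 EUR
      And the discount code "OLD2013" has expired
      When I apply the discount code "OLD2013"
      Then I should see the message "This discount code is no longer valid"
      And the order total should remain 100.00 EUR

Acceptance criteria written in this form can later be automated, but even as plain text they give developers concrete scenarios before coding starts.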

What Advantages Does Agile Have for Management? Are There Any Managers in Agile?

The Agile Manager is responsible for supporting the development team, clearing any blockages, and keeping the Agile process consistent. The Agile Manager is the development progress facilitator, and his/her main task is to maximize the team's effectiveness rather than controlling how they work, which is more usual in traditional project management, where project managers act as authority figures. In Agile environments there is no authority role.
The Agile Manager ensures that the right people are on the project, mentoring, training, guiding, and motivating everyone to reach a common goal. This important task helps the team to feel ownership of the project and gives them great motivation to achieve their goals.

Illustration source: Gonzalo Vázquez, own illustration.

Then, Is Agile Perfect?


Of course not. The first thing to note is that Agile is not about perfection,
it is about bringing value to your organization and to your customers in
the most cost-effective and transparent way. For this reason, you need
to be patient when adopting Agile frameworks. It is very important
that the executive team believes in Agile methods. If it does, your probability of project success increases considerably.

These benefits imply a transition to Agile, which is not an easy task.


Teams and companies that do not fully believe in the adoption of
Agile will give up their adoption of it as soon as the first problems are
encountered. A good agile team understands the benefits of Agile
and believes in its adoption, choosing the management and technical
methods provided by the Agile framework that work best for them. If
this occurs, the adoption of Agile is likely to succeed.

Moreover, Agile has several disadvantages. As it focuses strongly on people, if someone leaves your team, it is likely you will be losing a lot of information and cohesion from the group. Besides, as the team is self-managed, maturity is required in all team members, and this is often hard to obtain.
Finally, there are common mistakes in adopting Agile that cause more inconveniences than advantages. One common mistake is trying to adapt your organization to Agile. Agile is a framework that provides methods for being more productive, so the trick is to determine which methods provided by Agile are suitable for your organization and to tune your methodology to fit your individual or team needs. Another error is believing that Agile means that everything can be done at any time. Agile is about flexibility, but if development of a feature has started, it is not possible to change it before it is finished.

Does It Only Work in Software Environments?

No, it could work in different environments. For example, think of this article, which we wrote in a kind of agile way. We first defined what we wanted to do (high level) and we started one-week iterations of writing. After every iteration we asked for feedback from experts and people whom we considered to be important stakeholders. Using this information, we improved our article and kept on writing. When we had completed ten iterations, our stakeholders told us it was good enough for them, so we decided to close the article. Obviously we were not 100% agile, as we did not release our article before it was finally completed.
As said, Agile does not only work for software projects, but it is true that it might work better with small-budget projects, as refining your product every iteration may incur unexpected expenses. Moreover, Agile is very effective in environments where changes in the requirements happen quite often due to various business reasons.

Conclusion
Agile is not a one-time aspect in the development process of a company. It is a development philosophy that helps to deliver frequent releases with high quality to final customers, through team collaboration, transparency, and continuous improvement.

Acknowledgements to Gonzalo Vázquez (www.gonvazquez.com) for his wonderful illustration.

> about the authors

Antonio González Sanchis is a telecommunications engineer with a Master's degree in Project Management and a Diploma in Business and Management from the London School of Economics. He has several years of experience as a software developer and business analyst for companies such as Fraunhofer Institute, Hewlett-Packard, and Amadeus, always working under agile methodologies. His interests are strongly related to applying agile frameworks to environments not related to software.
LinkedIn: www.linkedin.com/in/antoniogonzalezsanchis
Website: www.openbmic.com

Rubén Fernández Álvarez has seven years of experience as a QA test engineer in medical and industrial environments in different companies, including Sogeti, Ingenico, Grifols, and Aurigae Telefónica R+D. He presently holds the role of SW QA Manager at Zitro Interactive, where he is responsible for test management and test automation in mobile and web gaming projects. He is a qualified telecommunications and electronics engineer and is a certified Scrum Manager.
Twitter: @rbnfdez
LinkedIn: www.linkedin.com/in/rbnfdez

CABA – Certified Agile Business Analyst

Are you a business analyst who is working in or moving to an Agile environment? How does the work, and your approach, differ from the traditional BA role? What are the recognised practices in Agile business analysis? The CABA course presents practices and tools for Agile BAs working in any of the emergent flavours of Agile.

For more information, visit caba.diazhilterscheid.com or contact us at info@diazhilterscheid.com.

Open courses:
March 19–20, 2015 – Berlin, Germany
June 11–12, 2015 – Berlin, Germany
Our training courses are also available as in-house courses, and outside of Germany on demand.

Díaz & Hilterscheid Unternehmensberatung GmbH
Kurfürstendamm 179
10707 Berlin
Germany
Phone: +49 (0)30 74 76 28-0
Fax: +49 (0)30 74 76 28-99
Email: info@diazhilterscheid.com
Website: caba.diazhilterscheid.com


Missed Part I?
Read it in
issue No. 27!
By Rudolf de Schipper & Abdelkrim Boujraf

Test-Driven Developments are Inefficient;


Behavior-Driven Developments are a Beacon of Hope?
The StratEx Experience (A Public-Private SaaS and On-Premises Application) – Part II
In part 1 of this article, we outlined a number of criticisms and roadblocks that we have encountered while looking for the best way to test our application.
We know it is easy to criticize, but this was not for the sake of being negative. We believe that our experience has led to some valuable insights, apart from the points we simply do not like. Further, we believe our experiences are not unique. So, in this second article we want to take a look at what can be done to test our software efficiently.
What we describe here is not all implemented today. As we said in the previous article, this is part of a journey, a search for the optimum way of doing things. You might wonder why. There are a couple of reasons for this. First and foremost, as a small startup company, our resources are limited. And, as Google says, "scarcity brings clarity"[1], so we have to be careful what we spend our time and energy on. Second, when doing our research on testing methodologies and how to apply them best, there was one recurring theme: there is never enough time, money, or resources to test. This can probably be translated into "management does not give me enough", because they apparently believe that what they spend on testing is already enough. Now here comes the big question: what if management is right? Can we honestly say that every test, every check, every test department even, is super-efficient? We may argue that test effort may reach up to x% of development effort (30% has been given as a rough guideline). Well then, if by magic, development effort is sliced down to one fifth, is it not logical to assume that the test effort should be reduced by the same factor? And how would this be achieved? We want to explore this here.
This is where we came from. We generate large parts of our code. This reduces development time dramatically. For a small startup this is a good thing. But this also means that we must be able to reduce our testing time. And that was the reason we had a good and hard look at current testing methods, what to test, when to test, and how to test.

Testing a Web Application Deployable as Public-Private Cloud and On-Premises Software

First, let's frame our discussion. The application we are developing is rather standard from a testing point of view: web-based, multi-tier, a back-end database, and running in the cloud. The GUI is moderately sprinkled with JavaScript (jQuery[2] and proprietary scripts from a set of commercial off-the-shelf (COTS) UI controls like DevExpress[3] and Aspose[4]). The main way to interact is through a series of CRUD[5] screens. We can safely say that there are probably thousands of applications like this, except that the same piece of code is deployable as a Private Cloud and Public Cloud application, as well as on-premises software. This is our target audience for this article. We are not pretending to describe how to test military-spec applications or embedded systems, for example.
What do we want to achieve with our tests? In simple terms, we are not looking for mathematical correctness proofs of our code. Nor are we looking for exhaustively tested use cases. We want to be reasonably certain that the code we deploy is stable and behaves as expected. We are ready to accept the odd outlier case that gives an issue. We believe our bug-fixing process is quick enough to address such issues in an acceptable timeframe (between 4 and 48 hours).
Let's look a bit closer at the various types of tests we might need to devise to achieve such reasonable assurance.

Testing a CRUD Application


The application presents a number of screens to the user, starting
from the home screen, with a menu. The menu gives access to screens,
mostly CRUD-type, while some screens are process-oriented or serve a
specific purpose (a screen to run reporting, for example).
A CRUD screen has five distinct, physical views:
1. The items list (index), e.g., the Contracts list
2. The item details, e.g., the Work Page details
3. The item editing, e.g., edit the Activities' details
4. The item creation, e.g., create a new Type of Risk
5. The item deletion, e.g., delete a Project and its related items
Possible actions on each screen are standardized, with the exception
of the details screen, where specific, business-oriented actions can
be added. You may think of an action such as closing a record, which
involves a number of things such as checking validity, status change,
updating log information, and maybe creating some other record in
the system. In essence, these actions are always non-trivial.

Testing a Generated vs. a Hand-Coded Piece of Software

All CRUD screens are made of fully generated code, with the exception of the business actions, which are always hand-coded.
The non-CRUD screens are not generated and are always hand-coded. Needless to say, we try to keep the number of these screens low.

We have observed that the generated code and screens are usually
of acceptable initial quality. This is because the number of human/
manual activities to produce such a screen is very low. The code templates that are used by the generator obviously took time to develop. This was, however, a localized effort, because we could
concentrate on one specific use case. Once it worked and had been
tested (manually!), we could replicate this with minimal effort to the
other screens (through generation). We knew in advance that all the
features we had developed would work on the other screens as well.
An interesting side-effect of this method is that if there is an error in
the generated code, the probability of finding this error is actually very
high, because the code generation process multiplies the error to all
screens, meaning it is likely to be found very quickly.
The hand-coded screens are on the other side of the scale. They present
a high likelihood of errors, and we have also found that these screens
are prone to non-standard look and feel and non-standard behavior
within the application. When compared to the approach of generating
highly standardized code, the reasons for this are obvious.

Testing Business Actions


The business actions are the third main concern for testing. These
are non-trivial actions, highly context (data, user) dependent, and
with a multitude of possible outcomes. We have not yet figured out
how to test these artifacts automatically due to the number of cases we would need to take into account. Each change in our logic needs a complete refactoring of those tests that will certainly produce most of the complaints from our beloved customers.

Testing the User Interface Using BDD

A final concern is the UI testing. Even with highly standardized screens, we value this type of testing, for three reasons:
First, it is the way to run a partial end-to-end test[6] on your screens, as we partially test the content of the database after a screen is tested.
Second, it is what the user actually sees. If something is wrong there, there is no escape.
Third, we like to use such test runs to record documentation, demo, and training videos using Castro[7], Selenium[8], and Behave[9] (mostly open source software).
We believe that this context is relatively common, with the possible exception of massive code generation and the use of tests to document the application (and we would recommend these as something to consider for your next project!), so it makes sense to examine how these different areas can be tested efficiently.
For the generated CRUD screens, tests should be standardized and generated. Do we need to test all the generated screens? Given the fact that the tests are generated (so there is no effort involved in creating the tests), we would say that you must at least have the possibility of testing all screens. Whether you test them for every deployment is a question of judgment.
Hand-coded screens, if you have them, probably require hand-coded tests. Yet, if you have information available that allows you to generate (parts of) your tests, do it. It reduces the maintenance cycle of your tests, which means you improve the long-term chances of survival of your tests. We have not found convincing evidence to state that hand-coded (non-standardized) screens can be fully described (see Table 1) or briefly described (see Table 2) by a set of BDD/Gherkin tests. The simple fact is that it would require a large number of BDD-like tests to fully describe such screens. One practice we have observed is to have one big test for a complete screen; however, we found that such tests quickly become complex and difficult to maintain, for many reasons:
1. You do not want to disclose too much technical information to the business user, e.g., username/password, the acceptable data types, or the URL of the servers supporting the factory acceptance test (FAT), the system acceptance tests[10] (SAT), and the live application.
2. You need to multiply the number of features by the number of languages your system supports.
3. You want to keep the BDD test separate from the code that the tester writes, as the tests depend on the software architecture (Cloud, on-premises) and the device that may support the application (desktop, mobile).
4. A database containing acceptable data might be used by either the business user or the tester. The data might be digested by the system testing the application and reduce the complexity of the BDD tests while increasing the source code to test the application.

# file: ./Create_Contract_Request_for_offer.feature
Feature: Create a Request for offer
  As a registered user,
  I want to create a Request for offer for a project

  Background:
    Given I open StratEx "<url>"
    When I sign up as "<username>"
    Then I should be signed in as "<user_first_last_name>"

  Scenario Outline:
    Then I click on "Contract" menu item
    Then I click on "Request for offer" menu item
    Then I click on "Create new" menu item
    Then I select "<project_name>" from the "Project" dropdown
    Then I set the "Title" box with "<project_title>"
    Then I click on "Save" button
    Then I check that the "Project" field equals "<project_name>"
    Then I check that the "Title" field equals "<project_title>"
    And I click on the link "Logout"

  Examples: staging
    | url                                    | username   | user_first_last_name    | project_name   | project_title   |
    | https://staging.<your application>.com | a Username | a firstname, a lastname | a project name | a project title |

Table 1. BDD Definition (Full Description): Create a Request for Offer (11_Create_Contract_Request_for_offer.feature)


# file: ./Create_Contract_Request_for_offer.feature
Feature: Create a Request for offer
  As a registered user,
  I want to create a Request for offer for a project

  Background:
    Given I open StratEx "<url>"
    When I sign up as "<username>"
    Then I should be signed in as "<user_first_last_name>"

  Scenario Outline:
    Then I create one Request for offer
    And I click on the link "Logout"

  Examples: staging
    | url                                    | username   | user_first_last_name    |
    | https://staging.<your application>.com | a Username | a firstname, a lastname |

Table 2. BDD Definition (Brief Description): Create a Request for Offer (11_Create_Contract_Request_for_offer.feature)

@then(u'I create one Request for offer')
def step_impl(context):
    # click | 'Contract' menu item
    context.browser.find_element_by_xpath(
        "//a[contains(text(),'Contract')]").click()
    context.browser.dramatic_pause(seconds=1)
    # click | 'Request for Offer' menu item
    context.browser.find_element_by_xpath(
        "//a[contains(text(),'Request for Offer')]").click()
    context.browser.dramatic_pause(seconds=1)
    # click | 'Create New' menu item
    context.browser.find_element_by_xpath(
        "//a[contains(text(),'Create New')]").click()
    context.browser.dramatic_pause(seconds=2)
    # select | id=Project | label=StratEx Demo
    Select(context.browser.find_element_by_id("Project")).select_by_visible_text(
        context.testData.find(".//project_name").text)
    context.browser.dramatic_pause(seconds=1)
    # type | id=Title | Horizon 2020 dedicated SME Instrument - Phase 2 2014
    context.browser.find_element_by_id("Title").clear()

Table 3. Excerpt from 11_Create_Contract_Request_for_offer.py

At StratEx, our current practice is to write brief BDD tests, after many attempts to find the right balance between writing code and BDD tests. We chose the Python programming language (see Table 3) to implement the tests, because Python[11] is readable even by business users and can be deployed on all our diverse systems, made up of Linux and Windows machines.
Business actions are hand-coded too, but such actions are good candidates for BDD tests, described in Gherkin. As we mentioned before, Gherkin is powerful for describing functional behavior, and this is exactly what the business actions implement. So there seems to be a natural fit between business actions and BDD/Gherkin. The context can usually be described in the same way as for UI tests (see above). Can such tests be generated? We believe that the effort for this might outweigh the benefits. Still, using Gherkin to describe the intended business action and then implementing tests for it seems like a promising approach.
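
To illustrate this fit, the following is a minimal, hypothetical Gherkin sketch of how a business action such as closing a record might be described; the step wording, fields, and example data are invented for illustration and are not taken from the StratEx test suite.

  Feature: Close a Contract

    Scenario Outline: Closing a contract updates its status and writes a log entry
      Given I am signed in as "<username>"
      And the contract "<contract_name>" exists with status "Open"
      When I trigger the "Close" action on "<contract_name>"
      Then the status of "<contract_name>" should be "Closed"
      And a log entry "Contract closed" should exist for "<contract_name>"

    Examples:
      | username   | contract_name     |
      | a Username | a sample contract |

The value of such a description is that it captures the intended outcome of the action (status change plus log entry) independently of how the UI or the underlying code implements it.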

Conclusion
This broadly covers the area of functional testing, including UI tests. The question obviously arises as to what else needs to be tested, because it is clear that the tests we describe here are not ideal candidates for development-level (unit) tests: they would simply be too slow. In various configurations, the tests we described above would run in a pre-deployment scenario, with more or less coverage: run all screens, run only the hand-coded screen tests, run some actions as smoke tests, etc.
We believe that the most relevant tests for the development cycle are
the ones related to the work the developer does, i.e., producing code.
This means that generated code can be excluded in principle (although
there is nothing against generating such tests). It focuses therefore on
hand-coded screens and business action implementation.
Starting with the business action implementation, we observe that this only requires coding in non-UI parts of the application: the model[12] and the database. It has been shown that it is possible to run unit-like tests against the model code and against the controller code. Unit tests against the model can be used to validate the individual functions (as in normal unit tests), while tests against the controller will actually validate that the action, when invoked by the user from the UI, will yield the expected result. In that sense, this kind of test runs like a UI test without the overhead (and performance penalty) of a browser.
What is so special about this approach? First, these are not real unit tests because they do not isolate components. They test an architectural layer, including the layers below it. This means that, when testing a model, the database will be accessed as well. This is a deliberate trade-off between the work required to make components unit-testable and the impact of testing them in context. It means we have to consider issues such as database content and we need to accept that these tests run slower than real unit tests. However, because we have far fewer of this type of test (we only implement the tests for the business actions, which is between two and ten per screen), the number of these tests will be around 100–200 for the complete application. We believe that this is a workable situation, as it allows developing without having to consider the intricacies of emulated data, such as mocks or other artificial architectural artifacts, to allow for out-of-context testing. In other words, we can concentrate on the business problems we need to solve.
An additional benefit here is that this allows us to test the database
along with the code. Database testing is an area we have not seen
covered often, for reasons that somewhat elude us.
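
As a rough illustration of such a layer-level test (not the actual StratEx code), here is a minimal Python sketch using the standard unittest module and an in-memory SQLite database; the close_contract model function and the table schema are hypothetical.

# A hypothetical "unit-like" test against the model layer that deliberately
# includes the database instead of mocking it (sketch only, not StratEx code).
import sqlite3
import unittest


def close_contract(conn, contract_id):
    # Hypothetical model function: validates the contract, changes its status,
    # and writes a log entry in the same database.
    row = conn.execute(
        "SELECT status FROM contracts WHERE id = ?", (contract_id,)).fetchone()
    if row is None or row[0] != "Open":
        raise ValueError("Only existing, open contracts can be closed")
    conn.execute(
        "UPDATE contracts SET status = 'Closed' WHERE id = ?", (contract_id,))
    conn.execute(
        "INSERT INTO logs (contract_id, message) VALUES (?, 'Contract closed')",
        (contract_id,))
    conn.commit()


class CloseContractTest(unittest.TestCase):
    def setUp(self):
        # An in-memory database keeps the test self-contained and reasonably fast.
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE contracts (id INTEGER, status TEXT)")
        self.conn.execute("CREATE TABLE logs (contract_id INTEGER, message TEXT)")
        self.conn.execute("INSERT INTO contracts VALUES (1, 'Open')")
        self.conn.commit()

    def test_close_updates_status_and_writes_log(self):
        close_contract(self.conn, 1)
        status = self.conn.execute(
            "SELECT status FROM contracts WHERE id = 1").fetchone()[0]
        log_count = self.conn.execute(
            "SELECT COUNT(*) FROM logs WHERE contract_id = 1").fetchone()[0]
        self.assertEqual(status, "Closed")
        self.assertEqual(log_count, 1)


if __name__ == "__main__":
    unittest.main()

The point of the sketch is the trade-off described above: the test exercises the model together with a real (test) database, so no mocks are needed, at the cost of running somewhat slower than an isolated unit test.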
In summary, we have presented here a method for efficiently testing
large parts of web-based software by using elements of code generation to generate automatable tests, and by using BDD concepts to
model tests for non-generated screens and non-generated business
actions. Further, we have described a method for context-based unit
testing that, when combined with generated code and tests, yields

an acceptable trade-off between development efficiency and time


spent on testing.
This article has not covered other areas of testing, such as performance and security tests. Currently, StratEx has no immediate concerns in these areas that would require us to critically examine how we validate the application in this respect.

References
[1] How Google Tests Software (Whittaker, Arbon et al.)
[2] jQuery: http://jquery.com
[3] DevExpress: https://www.devexpress.com/
[4] Aspose: http://www.aspose.com
[5] Create, Read, Update, Delete
[6] End-to-end test: http://www.techopedia.com/definition/7035/end-to-end-test
[7] Castro is a library for recording automated screencasts via a simple API: https://pypi.python.org/pypi/castro
[8] Automating web applications for testing purposes: http://www.seleniumhq.org
[9] Behave is behavior-driven development, Python style: https://pypi.python.org/pypi/behave
[10] System Acceptance Test: https://computing.cells.es/services/controls/sat
[11] Python Programming Language: https://www.python.org/
[12] We assume an MVC or similar architecture is being used here, or anything that clearly separates the UI from the rest of the system.

Referenced books
The Cucumber Book (Wynne and Hellesoy)
Application Testing with Capybara (Robbins)
Beautiful Testing (Robbins and Riley)
Experiences of Test Automation (Graham and Fewster)
How Google Tests Software (Whittaker, Arbon et al.)
Selenium Testing Tools Cookbook (Gundecha)

Referenced articles
Model-Driven Software Engineering (Brambilla et al.)
Continuous Delivery (Humble and Farley)
Domain-Specific Languages (Fowler)
Domain-Specific Modeling (Kelly et al.)
Language Implementation Patterns (Parr)

> about the authors

Rudolf de Schipper has extensive experience in project management, consulting, QA, and software development. He has experience in managing large multinational projects and likes working in a team. Rudolf has a strong analytical attitude, with interest in domains such as the public sector, finance, and e-business. He has used object-oriented techniques for design and development in an international context. Apart from the management aspects of IT-related projects, his interests span program management, quality management, and business consulting, as well as architecture and development. Keeping abreast of technical work, Rudolf has worked with the StratEx team, developing the StratEx application (www.stratexapp.com), its architecture, and the code generation tool that is used. In the process, he has learned and experienced many of the difficulties related to serious software design and development, including the challenges of testing.
LinkedIn: be.linkedin.com/pub/rudolf-de-schipper/3/6a9/6a9

Abdelkrim Boujraf owns companies developing software like StratEx (program and project management) and SIMOGGA (lean operations management). He has more than 15 years of experience in different IT positions, from software engineering to program management in management consulting and software manufacturing firms. His fields of expertise are operational excellence and collaborative consumption. Abdelkrim holds an MBA as well as a master's in IT & Human Sciences. He is a scientific advisor for the Université Libre de Bruxelles, where he is expected to provide occasional assistance in teaching or scientific research for spin-offs.
LinkedIn: be.linkedin.com/in/abdelkrimboujraf


By Martin Uhlig

A Report about Non-Agile Support for Agile


Starting Situation
Certainly some developers and testers in the agile environment are
familiar with the following situation. A team had been working on a
product for a long time, but they did not have a dedicated tester. As a
result, quality requirements had rather been neglected. But now, just before the product release, everything should get better and the team would be supported by additional testers.
At the beginning of this year, I became involved in a similar situation
when I was assigned as a tester to a Scrum team that had suffered
from a high turnover of testers. Apart from the unit tests that the
developers had established, there were no automated tests. But as
the product release moved closer, the team had to deal with various
short-term decisions regarding product changes. We had to find a way
to test the product's features as well as its non-functional criteria in
a fast and reliable manner.
To start the automation of integration tests and GUI tests during this
stage of the project would have been fatuous. We needed a manual
solution.

The Idea
In collaboration with our Product Owner (PO), we created a concept to
organise a pure quality assurance sprint (QA Sprint) only for testing, fixing, and retesting. No feature stories had been planned for this sprint.
But how could we test the whole product in such a short time span?
It is not possible to perform the test scope needed for a convincing
and informative result in a two-week iteration, not even with all nine team members. After all, it was necessary to cover different
configurations of the software with tests. But we were very lucky to
receive special help from five additional testers and developers who
had agreed to support our tests. So we had enough workforce. But
how to manage all these people?

From the outset it was obvious that the team could not simply be extended. There is no way of conducting an effective Sprint Planning or Daily Scrum with 14 team members (plus Scrum Master), and the whole team would have had to reorganise itself. This was no practicable solution, not even for only one sprint. As a consequence, we had to find another way to integrate the additional testers.
But what approach would work? The answer seems simple: we needed more teams! But a new Scrum team cannot simply be conjured out of thin air, especially for just one sprint. So we had to distinguish between the Scrum team and the QA teams. The Scrum team, consisting of the former team, should basically work as usual in the best Scrum manner. To strengthen the Scrum team we needed the additional testers, but could not get them aboard the Scrum boat due to the reasons mentioned above. And so we had to create two new teams that were substantially self-organized but not a fixed part of the Scrum team.
These QA teams focused on repeatedly running a given set of tests. The test sets had been worked out and iteratively improved by the tester in the Scrum team (supported by the whole Scrum team). The work of the QA teams was to be strictly separated from the Scrum team to avoid lowering the performance of the Scrum team. Therefore the teams needed an interface to filter the exchange of information in the direction of the Scrum team, thus avoiding an information overflow (esp. only actual bugs, no duplicates, etc.). The Scrum team itself would focus on reproducing and troubleshooting the bugs. We decided to staff the interface between the teams with the PO and the tester from the Scrum team. The only exception to the team separation was the Daily Scrum meetings. Besides the Scrum team, one agent from each QA team attended to give their current status.
This plan was refined so the teams were better balanced. One of the Scrum team's developers moved to a QA team to test with them. Thus, every QA team had three members, including one experienced tester who headed the execution of the tests. Furthermore, the Scrum team had two relatively new members who were not sufficiently trained to do quick and effective bugfixing of this complex software. These colleagues were given the mission of performing exploratory testing apart from the test sets and reproducing bugs.
In summary, our final setup was with two QA teams executing a set of tests. These tests included all the positive and the most important negative test cases. Every team worked with a different product configuration. The QA teams' tests were supplemented by exploratory tests from the two fresh developers in the Scrum team. Six developers within the Scrum team took care of bugfixing and deployment. This way, two teams were established and they supported the Scrum team without any significant negative effects on the self-organization of the Scrum team (shown in Figure 1). The atypical ratio between developers and testers in favour of developers was less problematic because the PO had enough old known issues to be fixed in her backlog, keeping the developers busy till the testers reported the first bugs.

Figure 1. Two small QA teams supporting the Scrum team. The communication between the teams is handled by fixed interfaces and agents (blue).

The Execution
The QA sprint started just like any other sprint for the Scrum team,
except that the Sprint Planning was shorter than usual. Only a few
known issues were presented by the PO in Sprint Planning. During the
sprint the PO and the Scrum teams tester evaluated the bugs found
by the QA teams and added them to the Scrum teams sprint backlog
in a prioritized manner.
Besides the Scrum team's Sprint Planning, there was a kick-off meeting
for the QA teams. They were given the instruction to run the positive
and negative test cases from the committed test sets and to document
the bugs. After the execution of the whole test set (duration approx. 2 days), it was reviewed and improved based on the teams' impressions. Finally, the test sets were completed by retesting the bug fixes of the current version. After that procedure, the test sets were ready to be executed in the QA teams' next iteration. As a result, both QA teams always
had the same version of the product for each iteration, but with two
different configurations. Due to the fact that the latest version was
freshly installed for the QA teams in each iteration, the installation
and update mechanism of the software was repeatedly tested by the
PO and the Scrum team's tester.
To thank and to motivate the QA teams, we used some elements from
the concept of gamification. For example, we launched some awards
and small prizes, such as for the most critical bug or the QA team with
the most bugs found.
The kick-off was the official start for the QA teams. For the first complete
execution of the test cases they needed exactly the assumed time of
two days. As an additional advantage, every team had a member who
knew the product in advance. So the other members could benefit from
them during their steep learning curve.
On the first day, the Scrum team worked on the previously known issues until the first bugs were delivered by the QA teams. Additionally,
the two exploratory testers were able to produce some interesting insights during their tests, which could be transformed into reproducible
bugs. After the first two days, the first bugs and issues were fixed. The
retests for these and other improvements (mainly initiated by the QA
teams) were taken into the test sets for the next test iteration. At the
Daily Scrum, the agents from the QA teams reported on their teams' progress and took important information back to the QA teams as planned.

After each test iteration, the iteration duration of both QA teams diverged from one another. Therefore we instructed the faster team to retest old bugs that we had fixed and retested several sprints ago. The idea panned out, as the team actually found some old errors, if only a few, that had obviously been reintroduced after fixing.

The Conclusion
The QA sprint was a major success for the project. On the last day of the sprint, the PO was able to successfully perform the acceptance test and release the product. As a result of this QA sprint, the team managed to considerably boost the quality of the product.
The Scrum team and the QA teams appreciated the nature of the cooperation between the teams. The QA teams benefited from the clear interfaces because they were given peace of mind whenever they needed something. Any questions from the QA teams could be answered very quickly and reliably without spending a long time looking for the correct contact person. On the other hand, the members of the Scrum team had their workload significantly reduced as a consequence of this interface between the teams. Thus, they could focus on their work and were only given relevant and revised information by the QA teams.
However, we underestimated the effort required to create the initial version of good test plans. The same applied to the time to filter, evaluate, and revise the bugs before we submitted them to the Scrum team. But we still managed this because we could count on the QA teams, and we knew that we could waive the retests within the Scrum team, apart from the usual ones, because the QA teams did the job.
That everything worked out as intended is a big credit to our PO, who is very open-minded and has a quality assurance background. She was always open to any suggestions and comments. In other projects and with another PO this concept would certainly have needed more negotiation with the PO and other stakeholders. Additionally, we could benefit from the ability to fall back on our capable and motivated colleagues who supported us in the QA teams.
Finally, I can report that quality assurance in that project is now in stable shape and the product has been successfully established on the market. Currently, a large and well-chosen set of automated tests is available, running in continuous integration. The experiences of this QA sprint have certainly had an important influence on the project's success story.

> about the author

Martin Uhlig works as a testing consultant for Saxonia Systems AG in Dresden. Since studying business informatics, he has been passionate about agile software development and the current trends in the agile community. He has been working on different projects within the sectors of logistics, media, and product development. This also includes agile projects on which he has been working as a tester and Product Owner.

Performance
Column by Alex Podelko

Exploratory
Performance Testing
It looks like exploratory performance testing has started to attract
some attention and is getting a mention here and there. Mostly, I
assume, due to the growing popularity of functional exploratory testing[1]. A proponent of exploratory testing probably would not like my
use of the word "functional" here, but not much has been written, for
example, about performance exploratory testing and even what has
been written often refers to different things.
There have been attempts to directly apply functional exploratory
testing techniques to performance testing. SmartBear blog posts[2, 3]
contrast exploratory performance testing with static traditional load
testing. My view is probably closer to Goranka Bjedov's understanding
as she described it back in 2007[4].
It was clear to me that a traditional, waterfall-like approach to performance testing is very ineffective and error-prone. I presented a more
agile/exploratory approach to performance testing in a traditional
waterfall software development environment at CMG in 2008[5]. I intended to apply the original principles of the Manifesto for Agile Software Development[6] (valuing "Individuals and interactions over processes and tools. Working software over comprehensive documentation. Customer collaboration over contract negotiation. Responding to change over following a plan.") to performance engineering.
Performance testing in projects utilizing specific agile development
methodologies is a separate topic. Having become more involved in the
agile development environment, I added some aspects of it to my presentation at the Performance and Capacity 2013 conference by CMG[7].
The words "agile" and "exploratory" are periodically and loosely used in relation to performance testing, but it does not look like we have any accepted definition. Both terms are, in a way, antonyms of traditional waterfall-like performance testing, so their meanings may somewhat overlap in certain contexts. I explained my view of using the word "agile" for performance testing in the above-mentioned presentations. Now it is time to contemplate the use of the word "exploratory" in the context of performance testing.


If we look at the definition of exploratory testing as "simultaneous learning, test design and test execution"[1], we can see that it makes
even more sense for performance testing, because learning here is
more complicated, and good test design and execution heavily depend
on a good understanding of the system.
If we talk about the specific techniques used in functional exploratory
testing, some can be mapped to performance testing but definitely
should not be copied blindly. Working with a completely new system,
I found that I rather naturally align my work to a kind of session, so session-related techniques of functional exploratory testing are probably applicable to performance testing. I would not apply such details as session duration, for example, but the overall idea definitely makes
sense. You decide what area of functionality you want to explore,
figure out a way to do that (for example, create a load testing script)
and start to run tests to see how the system behaves. For example,
if you want to investigate the creation of purchase orders, you may
run tests for different numbers of concurrent users, check resource
utilization, see how the system will behave under stress load of that
kind, or how response times and resource utilization respond to the
number of purchase orders in the database, etc. The outcome would
be at least three-fold: (1) early feedback to development about the
problems and concerns found; (2) understanding the system dynamic
for that kind of workload, what kind of load it can handle, and how
much resource it needs; (3) obtaining input for other kinds of testing, such as automated regression or realistic performance testing to
validate requirements. Then we move to another session exploring
the performance of another area of functionality or another aspect of
performance (for example, how performance depends on the number
of items purchased in the order).
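
As a rough sketch of what one such session could look like in practice (a simplified illustration, not a recommendation of any particular tool), the following Python snippet steps up the number of concurrent simulated users against a placeholder URL and prints response-time statistics for each step.

# Hypothetical sketch of one exploratory performance-testing session:
# step up the number of concurrent users and watch how response times develop.
import statistics
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://test-system.example/purchase-orders"  # placeholder endpoint


def one_request(_):
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(URL, timeout=30) as response:
            response.read()
        ok = True
    except Exception:
        ok = False
    return time.perf_counter() - start, ok


def run_step(users, requests_per_user=10):
    with ThreadPoolExecutor(max_workers=users) as pool:
        results = list(pool.map(one_request, range(users * requests_per_user)))
    times = sorted(t for t, ok in results if ok)
    errors = sum(1 for _, ok in results if not ok)
    if not times:
        print(f"{users:4d} users: all requests failed")
        return
    p95 = times[max(0, int(0.95 * len(times)) - 1)]
    print(f"{users:4d} users: median={statistics.median(times):.3f}s "
          f"p95={p95:.3f}s errors={errors}")


if __name__ == "__main__":
    # Explore increasing load step by step; stop and investigate when behavior degrades.
    for users in (1, 5, 10, 25, 50):
        run_step(users)

In a real session you would also watch resource utilization on the system under test while the load increases, and stop to tune or report as soon as something unexpected shows up.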
The approach looks quite natural to me and maximizes the amount of
early feedback to development, which is probably the most valuable
outcome of performance testing for new systems. However, there are
different opinions. Objections mostly align along three notions, which,

if taken in their pure form, are not fully applicable to performance


testing of new systems:
Creating a detailed project plan (with test design, time estimates,
etc.) in the beginning and adhering to it.
Fully automating performance testing.
Using scientific Design of Experiments (DOE) approaches.
I mention all three of them here because (1) they are often referred to as alternatives to exploratory testing; (2) they all are rather idealistic for the same reason: we do not know much about new systems in the beginning, and every new test provides us with additional information. And often this additional information makes us modify the system. Somehow the point that the system is usually changing in the process of performance testing is often overlooked.
For example, if your bottleneck is the number of web server threads,
it does not make much sense to continue testing the system once you

realize it. As you tune the number of threads, the system's behavior will
change drastically. And you would not know about it from the beginning (well, this is a simple example and an experienced performance
engineer may tune such obvious things from the very beginning but,
at least in my experience, you will always have something to tune or
optimize that you have no idea about in the beginning).

So, actually, you probably do exploratory testing of new systems one way or another, even if you do not recognize it. And it would probably be more productive to fully acknowledge it and make it a part of the process, so you will not feel bad facing multiple issues and will not need to explain why your plans are changing all the time. I would concur here with the great post "TDD is dead. Long live testing." by David Heinemeier Hansson[8] discussing, in particular, issues related to using idealistic approaches.

References
[1] Exploratory Testing. Wikipedia. http://en.wikipedia.org/wiki/Exploratory_testing
[2] Ole Lensmar. 2012. Why Your Application Needs Exploratory Load Testing Today. http://blog.smartbear.com/loadui/why-your-application-needs-exploratory-load-testing-today
[3] Dennis Guldstrand. 2013. Should Exploratory Load Testing Be Part of your Process? http://blog.smartbear.com/load-testing/should-exploratory-load-testing-be-part-of-your-process/
[4] Goranka Bjedov. Performance Testing. 2007. http://googletesting.blogspot.com/2007/10/performance-testing.html
[5] Alexander Podelko. Agile Performance Testing. CMG, 2008. http://alexanderpodelko.com/docs/Agile_Performance_Testing_CMG08.pdf
[6] Manifesto for Agile Software Development. 2001. http://agilemanifesto.org/
[7] Alexander Podelko. Agile Aspects of Performance Testing. Performance and Capacity by CMG, 2013. http://www.slideshare.net/apodelko/agile-aspects-of-performance-testing
[8] David Heinemeier Hansson. TDD is dead. Long live testing. 2014. http://david.heinemeierhansson.com/2014/tdd-is-dead-long-live-testing.html

> about the author

For the last 17 years, Alex Podelko has worked as a performance engineer and architect for several companies. Currently he is a Consulting Member of Technical Staff at Oracle, responsible for performance testing and optimization of Enterprise Performance Management and Business Intelligence (a.k.a. Hyperion) products. Alex periodically talks and writes about performance-related topics, advocating tearing down silo walls between different groups of performance professionals. His collection of performance-related links and documents (including his recent papers and presentations) can be found at www.alexanderpodelko.com. He blogs at www.alexanderpodelko.com/blog and can be found on Twitter as @apodelko. Alex currently serves as a director of the Computer Measurement Group (CMG), www.cmg.org, an organization of performance and capacity planning professionals.


By Klaus Haller

What Developers and Testers Need to Know


about the ISO 27001 Information Security Standard
Late in 2013, the International Organization for Standardization released a new version of its ISO 27001 information security standard[1].
The standard covers requirements applying to all organizations and ones
relevant only for organizations with in-house software development
and integration projects. They impact testers, developers, and release
managers. This article summarizes the relevant facts and points out
topics that testing and development teams have to work on.

Why Managers Like ISO 27001


Managers are held accountable for security incidents, even if they
have no information security expertise. Gregg Steinhafel illustrated
this involuntarily. He is a former CEO of Target, the second biggest

discount retailer in the USA. Steinhafel was the first CEO of a major
corporation to lose his job due to a data leak[2]. Thus, managers cannot
rely simply on a statement from their Chief Information Officer ("Security? Everything is fine!") without risking the company's and their
personal future. Here, ISO 27001 comes into play. It brings together,
first, a list of best practices for information security and, second, an
auditing and certification industry. The best practices prevent basic
mistakes or leaving security topics completely unaddressed. External
auditors validate whether the best practices have been implemented.
This external validation gives CEOs and stakeholders extra confidence.

Three Popular Misunderstandings about


Information Security
Information security is a widely used term. Everybody has his own
definition, which can differ from ISO 27001's understanding. The three
most common misunderstandings are:

Misunderstanding 1: Information security focuses


(mainly) on protecting sensitive data
Information security requires the protection of sensitive data. However, this is only one of the three aspects of the CIA triangle[3], a core
concept in information security. The C represents confidentiality:
protecting sensitive data against unauthorized access. The I stands
for integrity: Information must not be changed inappropriately, either
accidentally or intentionally. Finally, A stands for availability: Users
must be able to retrieve information when needed; no data must get
lost. ISO27001 covers all three aspects of the CIA triangle. Thus, organizations must address all of them for their certification.

Misunderstanding 2: Information security fights (mainly) hackers and malware coming from the outside
Outside hackers and malware pose a threat to every organization, but
employees pose a risk as well. Humans make mistakes, even if they


handle sensitive data. Worse, employees might engage in criminal actions. Snowden illustrated how a single person can harm a large organization[4]. Thus, information security must also address risks from internal employees.

Misunderstanding 3: Information security looks (mainly)


at production systems
Production systems store and process sensitive data, but sensitive data
can also reside in development and test environments. This refers to
the confidentiality aspect. When looking at availability, bad code and
wrong configurations are risks as well. They can harm the stability of
production systems. Thus, IT departments have to address the risk
associated with their change and release process, too.

The ISO 27000 Standard Series


The information security standard consists of a broad document family (Figure 1). It is essential to understand the purpose of the various
documents. First, ISO 27000 is the vocabulary standard. It provides
a basic overview and defines the terminology. Second, there are requirement standards: ISO 27001 and ISO 27006. ISO 27006 applies to
auditing organizations only. They are not in the scope of this article.
ISO 27001, however, is the bible for any certification and lists all certification requirements. The current release dates from late 2013 and is
referred to as ISO27001:2013 to distinguish it from the older version,
ISO27001:2005. ISO 27001 has two parts: a main section and appendix
A. The main section defines a general information security framework.
Topics include top management involvement or the need for an incident management system. Appendix A lists concrete security topics
(controls) to be implemented.
This ISO 27001 standard is the only normative binding document. In
contrast, guideline standards offer best practices. ISO 27002 helps in
setting up the controls of appendix A of ISO 27001. Other documents
focus on aspects of the main section of 27001. ISO 27003, for example,
looks at information security management systems, and ISO 27005 at
risk management. Industry sector-specific best practices (sector specific
guideline standards) are also available, e.g., for the financial services
industry or for telecommunications. Guideline standards are not mandatory. Organizations must implement the ISO 27001 requirements
but they are free to follow or not to follow the guideline standards.

ISO 27001 Implementation Responsibilities


As mentioned above, ISO27001 has a main section and an appendix
A. The main section defines an information security framework and
the Chief Security or Chief Compliance Officer has to work on these
topics. In contrast, the IT department in general must implement the

majority of the 114 controls from Appendix A. Some of them are only relevant to developers, testers, and change managers. They have to provide solutions for the controls (see Figure 2), for which they can rely on the next sections.

Figure 1. Overview of the 27000 Standard Series. This article mainly deals with ISO 27001 Appendix A with interpretations derived from ISO 27002.
[Figure 2 summarizes the decision: if an Appendix A control defines an obligation for developers, testers, or change managers and a company-wide solution exists, they rely on that solution from the IT department, IT Security, Legal & Compliance, etc.; if no company-wide solution exists, developers, testers, and change managers must provide a solution themselves (the focus of this article); controls without such an obligation require nothing from them.]

Figure 2. ISO 27001 Appendix A Standards and the Need for Engineering, Testing and Change Teams to Come Up with a Solution on their Own.

Information Assets in Development and Testing


Information security starts with identifying the valuable information
assets, and classifying and labeling them. The user groups with access
have to be clarified. Data access and protection mechanisms must be
defined (controls A.8.2.1-3).
There are two types of information assets: production data and document assets, and engineering-owned assets. In the case of production
data and documents, the business, the legal and compliance department, and risk management all work together. They classify the assets
and define policies for handling them. Data privacy laws impact the
policies, especially in Europe. The policies define, for example, whether

customer data is sensitive, who can access it, etc. Testers and engineers
are not involved in the classification, although the policies can impact
them. They might forbid, e.g., the storage of credit card numbers of
clients in test systems.
Engineering-owned assets have to be classified as well. Potentially, IT has
to drive this. The main assets are source code and documentation, such
as requirements, specifications of new product features, architectural
documents, trading algorithms of hedge funds, etc., and they can be
as critical as production data. ISO 27001 does not declare any asset to be sensitive or not; it just demands clarification.
The policies for handling production data and engineering-owned
assets impact tool decisions. In the last few years, outsourcing requirements and test or project management tools became popular.
Software-as-a-service is thriving. With ISO 27001, organizations must
ensure that projects store data in externally hosted systems only if this
does not contradict information security policies.
ISO 27001 also demands secure development environments for the
complete development cycle (control A.14.2.6). The need for confidentiality, availability, and integrity has a broad impact on access control
mechanisms, the hiring and contracting of developers and testers,
and backup strategies.
A highly critical asset in development and test environments is the test data. Many applications, especially business applications, incorporate databases. Testers need suitable data in the databases in test environments, and ISO 27001 control A.14.3.1 demands that the test data is protected. When looking at the ISO 27002 guideline, it is clear that the standard reflects old-style test data management. In other words, test data comes from production. The focus in ISO 27002 is to mitigate the risks associated with the use of production data, such as the ability to audit the copy process and strict access rules for test environments. The trend towards synthetic test data in Europe is not reflected (see[5] for an in-depth discussion on test data management). However, ISO 27002 is not normative. Organizations can implement ISO 27001 in their own way. Especially when organizations test with synthetic data, many ISO 27002 ideas are obsolete.
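The standard does not prescribe how test data has to be protected, so the following is only a sketch of one common approach when masked or synthetic data is used: sensitive fields of a production record are replaced before the record ever reaches a test database. The field names and the masking rules are illustrative assumptions, not requirements of ISO 27001 or ISO 27002.

import hashlib
import random

def mask_customer(record: dict) -> dict:
    """Return a copy of a customer record with sensitive fields replaced."""
    masked = dict(record)
    # Stable pseudonym: the same customer always maps to the same fake ID,
    # so referential integrity across tables is preserved.
    masked["customer_id"] = hashlib.sha256(record["customer_id"].encode()).hexdigest()[:12]
    masked["name"] = "Test User " + masked["customer_id"][:4]
    masked["credit_card"] = "4000-0000-0000-0000"            # never copy real card numbers
    masked["balance"] = round(random.uniform(0, 10_000), 2)  # plausible, not real
    return masked

if __name__ == "__main__":
    production_row = {
        "customer_id": "CH-48151623",
        "name": "Erika Mustermann",
        "credit_card": "5500-1234-5678-9012",
        "balance": 1234.56,
    }
    print(mask_customer(production_row))

Whether such masking is sufficient is itself a policy decision; the point is that the rule is written down and applied before any data leaves production.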


[Figure 3 contrasts the two processes: in both, security principles including rules for data transfer apply to development (developers work on development systems and perform security tests), testing (testers work on test systems, user acceptance testers are organized), and usage on production systems. Auditable proof/logs are needed showing that developers developed and ran security tests on development systems, that user acceptance tests passed on test systems, and that the security principles were followed; independent testers check entry criteria and sign off the testing, including auditable documentation.]

Figure 3. How ISO 27001 Influences the Software Development Processes – V-model (Left) and Scrum (Right)

Scrum, V-Model, Security Principles, and ISO 27001


ISO27001 defines three controls for the software development processes. First, acceptance testing against the requirements is mandatory (control A.14.2.9) against functional and non-functional ones,
the latter including the security requirements. Second, there must be
security tests during the overall development process (control A.14.2.8).
Third, development, test, and operational environments must be
separated (control A.12.1.4).
These controls are compatible with many software development processes such as Scrum or the older V-model (Figure 3). However, Scrum requires more thought than the V-model. IT departments using the
V-model often have a test center and quality gates. Quality gates define
the criteria for when tests can start and when they succeed. One criterion before the test start can be that developers performed security
tests. One exit criterion for testing can be that user acceptance tests
in the test environment succeeded for functional and non-functional
requirements. When testers work in a test center and are not part of
a development team, development teams cannot put much pressure
on them. Thus, testers can enforce the ISO 27001 requirements more
easily for all projects, even delayed ones.
In the world of Scrum and Agile, there are often no test centers and no
central governance. It might be questionable, but it is reality (see[6]
for better options). Development and testing overlap time-wise. Roles
overlap. The development methodology, however, is no excuse for missing documentation or non-implemented controls in an ISO 27001 audit. All controls mentioned above must exist for all projects, even delayed ones. This is the key challenge for agile projects in an ISO 27001 organization.
Besides the controls for the development process, the ISO standard formulates controls for the software product itself. It demands that security needs are reflected in the engineering process by defining security principles, and these principles must be applied to all development projects (control A.14.2.5). First, the IT department has to write them down. Second, the IT department must enforce the principles in all projects.


As part of the requirements analysis for new software or new releases,


information security requirements have to be collected and specified
(control A.14.1.1). The standard requires clarification of the security
needs of data transfers via public networks and electronic transactions
(control A.14.1.2/A.14.1.3). Again, confidentiality alone is not sufficient.
Integrity and availability have to be addressed, too.

ISO 27001 Controls for Release and Change


Management
The ISO 27001 standard emphasizes availability controls, too, i.e., the
A of the CIA triangle. The following controls help to ensure stable
production systems:
Formal change management for changes in IT and business (control A.12.1.2)
Discouraging changes to vendor-provided software packages
(A.14.2.4)
Strict change control procedures even during development to
prevent unwanted modifications (control A.14.2.2)
Specific testing needs for operation system changes, which require
business critical applications to be tested against a new platform
(control A.14.2.3)
Mandatory rules for software and operating system installation
procedures, and who can do what (A.12.5.1)
Preventing even small changes to circumvent testing or the
change process by separating development, testing, and operational environments (control A.12.1.4), and by having access control
on the source code (control A.9.4.5)
This is old-time check-box style release management. IT staff have
a list of criteria they check. If all have been met, a change can go to

production. This model fulfills today's needs of many organizations. Highly agile organizations, however, prefer continuous integration[7] and DevOps[8]. They invest in test automation to have quick feedback loops. There might even be no manual testing before deploying small changes into production, which raises the question as to whether this conforms to ISO 27001.
A clever strategy for dealing with ISO 27001 can help. Discussing whether ISO 27001 is outdated and Scrum, DevOps, etc. are state of the art will result in frustration. ISO 27001 is a top management decision that overrules any development or test process. Developers and testers should invest their time into explaining how they conform to ISO 27001, even if there is no or minimal human involvement between coding and deployment to production. Written operational procedures, archived audit logs, etc. help to tell one story: all ISO 27001 controls are in place, some with manual check lists, others relying on automated, auditable processes.
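What such auditable evidence can look like in a highly automated pipeline is sketched below: every deployment step appends a timestamped record that an auditor can read later. The file location, the field names, and the idea of hashing the deployed artifact are illustrative assumptions, not ISO 27001 requirements.

import hashlib
import json
import time
from pathlib import Path

AUDIT_LOG = Path("deploy-audit.jsonl")   # hypothetical append-only log

def record_deployment(change_id: str, artifact: Path, tests_passed: bool, approver: str) -> None:
    """Append one auditable entry per deployment to an append-only JSON-lines log."""
    entry = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "change_id": change_id,
        "artifact_sha256": hashlib.sha256(artifact.read_bytes()).hexdigest(),
        "automated_tests_passed": tests_passed,
        "approved_by": approver,
    }
    with AUDIT_LOG.open("a", encoding="utf-8") as log:
        log.write(json.dumps(entry) + "\n")

if __name__ == "__main__":
    Path("app.war").write_bytes(b"demo artifact")   # stand-in artifact for the example
    record_deployment("CHG-2014-042", Path("app.war"), tests_passed=True, approver="release-bot")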

Controls for Outsourcing


Outsourcing and offshoring are common in software development
and testing, but pose two information security risks. First, the sourcing partners obtain sensitive data they should not have. Second, their
software development and testing processes might not address the
information security needs properly. ISO 27001 addresses the latter
aspect with control A.14.2.7. It requires the supervision of outsourced
development and testing. The work can be outsourced but the responsibility stays with the organization. In general, ISO 27001 requires suppliers also to be managed with regard to information security (control
A.15). Any supplier management can enforce this. The controls are
not specific to software development and testing, though the checks
might differ slightly.

Two Main Conclusions on ISO 27001, and Development and Testing

Conclusion 1: Development, testing, and change management require clear written information security policies.
ISO 27001 does not require specific organizational forms or software processes. ISO 27001 emphasizes clear rules and policies for the handling of information assets and the engineering process. First, they must clarify what data is sensitive and how to handle it. Second, they must explain how the organization engineers secure software in a secure development area. Third, they must state how to get software into production without any risk for production stability.

Conclusion 2: The organization must enforce the policies in all projects and have evidence.
ISO 27001 expects policies to be enforced consistently and to have auditable evidence. In other words, there must be a process organization, and all employees must be continuously educated and motivated to act accordingly. Chaotic geniuses or non-genius chaotic persons must be embedded into teams that ensure ISO 27001 conformity.
Not every developer and tester might appreciate the changes the ISO 27001 standard brings. Thus, we dedicate one Doug Larson quote to all who preferred working in IT in the times before standards such as ISO 27001: "Nostalgia is a file that removes the rough edges from the good old days."

References
[1] ISO/IEC 27001:2013 Information technology – Security techniques – Information security management systems – Requirements, ISO, Geneva, Switzerland, 2013
[2] The Associated Press: Target CEO Gregg Steinhafel resigns following last year's security breach, 5.5.2014
[3] Wikipedia: Information security, http://en.wikipedia.org/wiki/Information_security, last retrieved July 5, 2014
[4] K. Dilanian: A spy world reshaped by Edward Snowden, Los Angeles Times, 22.12.2013
[5] K. Haller: Test Data Management in Practice, Software Quality Days 2013, Vienna, Austria, 2013
[6] K. Haller: How Scrum Changes Test Centers, Agile Records, August 2013
[7] M. Fowler, M. Foemmel: Continuous integration, http://ww.dccia.ua.es/dccia/inf/asignaturas/MADS/2013-14/lecturas/10_Fowler_Continuous_Integration.pdf, 2006, last retrieved July 5, 2014
[8] J. Turnbull: What DevOps means to me, http://www.kartar.net/2010/02/what-devops-means-to-me/, last retrieved July 5, 2014

> about the author

Klaus Haller has worked in IT consulting for more than nine years, primarily in the banking sector. His main topics are IT risk and compliance, data loss prevention, test center organization, and testing of information systems landscapes. He publishes regularly on testing and is a frequent speaker at conferences. He has been with Swisscom since 2006 and is based in Zurich.
Website: www.klaushaller.net
LinkedIn: www.linkedin.com/pub/klaus-haller/48/a2b/798

By Jacqueline Vermette

What Makes a Good Quality Tester?


Twenty-five years ago, when I started my career as a young software engineer, there were very few professional testers. Only large, important projects had testing teams. For most projects, software testing consisted of having the lead analyst review the system just before delivery. Occasionally, the tests were even performed together with the client during the acceptance phase, leading to unpredictable results.
After a particularly painful client acceptance experience, the manager would call a meeting. He would declare: "For the next project, we absolutely need to test before delivery."
"But who's going to test?" replied the project leader.
"Well, we have Bob and Jackie who are not very busy. They can test during the two weeks before delivery. Let's ask them to find as many bugs as they can."
"All right, let's give it a try."
So in the next project, all the team members would work very hard during the last two weeks to cap the project. Our new testers, Bob and Jackie, would do their very best despite their limited experience. However, Bob did not want to pursue a career in testing. He had no interest in performing tests all day and would eventually leave the project. Jackie found more bugs than Bob did. She believed in the process and would apply her learning experience to improve her contribution in subsequent projects.

This was a very typical scenario and reminds me of a Dilbert comic


strip published in March 2010. In it, Dilbert's boss asks for his help in quality testing a new software version. Dilbert gives all sorts of silly reasons not to become a quality tester and concludes his message by swatting his boss across the face with a binder, to the boss's great
displeasure. In this situation, it is clear that Dilbert has no interest
in testing and most probably does not have the skill set to be a good
tester anyway. Of course, this is a satire, but it is very close to reality.
During the last few decades, development methodology and testing processes have obviously evolved. But even now there is still this
tendency in the IT community to believe that anyone can be a quality
tester. But can everyone really test properly? It is incorrect to think
that anyone can produce good software tests. I personally think that
you need the proper genetic make-up to be a great tester, but what
qualities are we talking about here?
A natural-born tester:
1. Has the technical knowledge and deep analytical ability needed to
create extremely complex tests. These characteristics, along with
an innate need to break things down, add to the strength and reliability of the final product. Simple tests can find the most obvious
bugs, such as formatting errors or missing boundary validations.
But it takes more detailed testing scenarios to uncover errors of
logic or cascading effects. For example, going through all cases of
a state diagram, and especially from a state to a forbidden state,
often reveals surprising results. For complex cases, documenting
the tests to execute is essential. Using an old-fashioned Excel sheet
is always better than nothing.


2. Has an ability to learn. Testers may be asked to go from a limited understanding of a product to mastering that product in a very short
timeframe. They must be able to memorize details and understand
each module's concepts while maintaining a general overview of
the product. Testers must be willing to review and learn all the
expected system behavior by studying the technical documentation and spending time with the main analyst. I remember one
particularly complex application for an aluminum smelter where
very few people had an overview of the whole business process at
the beginning. The management was not too sure whether the
test team would be able to test adequately. But by reading all the
available documentation and asking questions and more questions, we did a great job. Never be shy about asking questions in
order to understand details about the application, especially if the
specifications are not clear enough.
3. Can think outside the box, and takes into account assumptions
and concrete facts. Not all conditions are necessarily stated in the
functional specifications. It is like when you buy a car, you assume
that the hood can easily be opened to check the motor. This criterion is not mentioned in the cars features, but everybody expects
it. Testers should try to test unwritten features. Some unwritten
characteristics could have a significant impact on the quality of
the final product, hence the need to read between the lines. For
example, the system can support some required functionalities,
but what would happen if I tried something a little different? Does
the system support it? Does it crash? Does it corrupt data?
4. Cultivates a keen sense of observation and notes small details.
Their perfectionism can unfortunately annoy programmers and
developers, but good testers can find the biggest bugs in the least
likely situations. If a sequence of system operations is available to
the user, why are they not supposed to perform them, for example?
Why does the screen have labels with different fonts? Report fields
that are not properly aligned or inconsistent use of capitalization are
other examples of small details that can negatively impact the quality of the product. Some people just notice this type of error more
than others. They are probably like that in all their daily activities.
5. Cares deeply about the final product. They believe in their mission,
which is to protect the company's reputation. They love testing
and are proud to find bugs. Finding a bug is highly satisfying and
finding an especially tricky one surely makes their day.
6. Is organized yet flexible. They pay attention to the manuals and
conduct the tests systematically. This is very important in order to
reproduce a bug. A bug that cannot be properly detailed in order to
be reproduced cannot be corrected. They can also adapt to changes
during the course of a project and are willing to repeat tests over and
over, if necessary. After a bug correction, a test case might need to
be modified and re-executed to validate the quality of the system.
Even with all these attributes, no one can be a good tester if they
cannot bring a positive influence to a development team. A tester
must provide positive feedback, be able to motivate team members to

improve the quality of their work and, in general, manage each team
members self-respect.
The tester's role is in constant flux. To stay competitive in today's market, companies must now produce ever more complex software solutions, at an ever faster pace, and at a lower cost. Test management tools, system simulation, and automatic test case execution are now a must. We must adapt to these changes by developing our programming abilities or by working closely with developers. Promoting more complete unit tests and testing as soon as possible with the developers helps greatly to reduce errors early in the test process.


An effective tester cannot guarantee that a product is completely bug-free. However, choosing the right person for the role will lead to the best possible results by drastically reducing the impact of any remaining bugs.
In conclusion, for your next project, do not select a Dilbert to perform your quality tests. When selecting a software developer, you are trying to choose the right person to take on your project. You want the best one possible. Just apply the same principle when choosing a quality tester. An efficient software tester will help you to maximize your return on investment.

> about the author

Jacqueline Vermette is a quality assurance manager with 25 years of experience in quality assurance, quality control, functional analysis, and programming. She also worked to set up quality assurance and control methodologies in order to ensure the quality of deliverables for manufacturing industry projects. Jacqueline is a certified software tester (CSTE) and is currently working at Keops Technologies.


By Wolfgang Gottesheim

Ready for CAMS?


The Foundation of DevOps

In a traditional organization, handoffs between developers and operators cause friction as these groups pursue different goals. Developers
and business drivers within the organization want customers to use
new features and benefit from other improvements, while operators
seek stability and want to provide a stable environment.
One of the groundbreaking books around DevOps, The Phoenix
Project by Gene Kim, Kevin Behr and George Spafford, describes the
practice through The Three Ways of systems thinking, amplifying
feedback loops, and providing a culture of continuous experimentation and learning. Systems thinking means to focus on overall value
streams and to make sure that defects (for example, broken builds),
are not passed on to downstream units (like the Ops department).
Amplifying the feedback loops translates to providing proper communication channels between Dev and Ops, and to achieve this without

creating an overly complex framework of processes. Creating a culture


of experimentation and learning encourages everyone to take risks
and learn from failures.
A lot of people see DevOps as the extension of agile practices from
developers towards operations to overcome cumbersome processes.

DevOps a Definition
DevOps aligns business requirements with IT performance, and recent
studies have shown that organizations adopting DevOps practices have
a significant competitive advantage over their peers. They are able to
react faster to changing market demands, get out new features faster,
and have a higher success rate when it comes to executing changes. The
goal of DevOps is to adopt practices that allow a quick flow of changes
to a production environment, while maintaining a high level of stability, reliability, and performance in these systems. However, the term
nowadays covers a wide range of different topics and consequently
means different things to different people.

The Foundation of DevOps


There are a number of definitions and interpretations for DevOps floating around, and a way to look at it is in terms of CAMS: DevOps means
to adopt a Culture of blame-free communication and collaboration,
to embrace Automation to allow people to focus on important tasks,
to introduce continuous Measurements to get feedback on the quality and usage of features and bug fixes, and to encourage Sharing of
these measurements. This underpins the fact that DevOps is not about
standards or tools, it is about enabling communication and collaboration between departments in an organization.

Plugging Performance into DevOps


When we talk about collaboration, a key aspect is how we prevent

finger-pointing between teams when problems occur. We have to


handle and prevent failures by continuously ensuring high quality,
but while almost every definition of software quality mentions both
functional and non-functional requirements, the non-functional aspects like usability, deployability, and performance are only rarely
measured automatically. This becomes a problem as performance
issues are among the hardest to solve: they are heavily dependent
on load, deployment, and user behaviour, and Ops teams need help
in identifying these issues and communicating them to Dev in an
actionable way.
In order to focus the entire team on performance, you must plug
performance into the four pillars of CAMS:
Culture: Tighten feedback loops between Dev and Ops
Automation: Establish automated performance monitoring
Measurement: Measure key performance metrics in CI, Test and Ops
Sharing: Share the same tools and performance metrics data across
Dev, Test and Ops

Figure 1. Running tests against the production system gives a better input for capacity planning and uncovers heavy load application issues


Figure 2. Automated Tests running in CI also help to detect performance regressions on metrics such as # of SQL calls, page load time, # of JS files or images, etc.

Four Milestones that Companies Should Have in Mind

Culture: Tighten the Feedback Loops between Dev and Ops
Culture is the most important aspect because it changes the way in which teams work together and share the responsibility for the end users of their application. It not only encourages the adoption of agile practices in operations work, it also allows developers to learn from real-world Ops experience and starts a mutual exchange that breaks down the walls between teams. From a performance perspective, it is important to establish a shared understanding of performance between Dev, Test, and Ops. This enables collaboration based on well-known measurements and metrics, establishes a shared language understood by all teams, and allows all teams to focus on the actual problems. Finger-pointing between teams has to be replaced by a practice that enables them to get to the root cause of performance issues, and working together on current issues enables developers to become aware of performance problems and their solutions.

Automation: Establish a Practice of Automated Performance Monitoring
Operations and test teams usually have a good understanding of performance, and they need to educate developers on its importance in large-scale environments under heavy load. Providing automated mechanisms to monitor performance in all environments, from CI and test environments to the actual production deployment, allows the shared language of performance to be spoken.

Measurement: Measure Key Performance Metrics in CI, Test, and Ops
With performance aspects being covered in earlier testing stages, performance engineers on testing teams have time to focus on large-scale load tests in production-like environments. This helps them to find data-driven, scalability, and third-party impacted performance problems. Close collaboration with Ops ensures that tests can be executed either in the production environment or in a staged environment that mirrors production, thus increasing confidence when releasing a new version.

Sharing: Share the Same Tools and Performance Metrics Data across Dev, Test, and Ops
The more traditional testing teams are used to executing performance and scalability tests in their own environments at the end of a milestone. With less time for extensive testing, their test frameworks and environments have to become available to other teams to make performance tests a part of an automated testing practice in a Continuous Integration environment. The automatic collection and analysis of performance metrics ensures that all performance aspects are covered. This once again entails defining a set of performance metrics that is applied across all phases, as this is beneficial to identifying the root cause of performance issues in production, testing, and development environments.
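What such an automated measurement can look like in a CI build is sketched below: the test fails the build when the median response time of one key transaction exceeds an agreed budget. The URL, the sample size, and the 300 ms budget are illustrative assumptions, not values from this article; in practice the metrics would come from whatever monitoring Dev, Test, and Ops share.

import statistics
import time
import unittest
import urllib.request

URL = "http://localhost:8080/search?q=demo"   # hypothetical key transaction
SAMPLES = 20
BUDGET_SECONDS = 0.3                           # performance budget agreed by Dev, Test, and Ops

def response_time(url: str) -> float:
    """Measure one request's response time in seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=5) as resp:
        resp.read()
    return time.perf_counter() - start

class SearchPerformanceTest(unittest.TestCase):
    def test_median_response_time_within_budget(self):
        """Fail the CI build when the median response time exceeds the shared budget."""
        latencies = [response_time(URL) for _ in range(SAMPLES)]
        self.assertLess(statistics.median(latencies), BUDGET_SECONDS)

if __name__ == "__main__":
    unittest.main()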

Conclusion
The first step in adopting a performance culture is to enable a shared
understanding of performance through a set of key performance
metrics that are accepted, understood, and measured across all teams.
These performance metrics allow all teams to talk about performance
in the same way, and reduce the guesswork and finger-pointing often associated with troubleshooting performance problems. Once
these metrics have been defined, their automated measurement and
analysis is the next step that makes performance a part of a DevOps
practice.

> about the author


Wolfgang Gottesheim has nearly ten years of
experience as a software engineer and research
assistant for Java Enterprise environments. In his
role as Technology Strategist in the Dynatrace
Center of Excellence he is involved in the strategic development of the Application Performance
Management solutions. His focal points are Continuous Delivery and DevOps.
Twitter: @gottesheim


By Danilo Berta

Combinatorial Testing Tools


When a software application accepts several inputs, each of which
can assume different values, it is impossible in general to test all
combinations of values of input variables, simply because they are too
many. Let's take an example: consider a software feature that accepts as inputs three variables A, B, and C. Their values can be chosen
arbitrarily from the following table:

          A     B     C
          A1    B1    C1
          A2    B2    C2
          A3    B3
          A4
# Values  4     3     2

Table 1. Variables and Values

The total number of possible combinations of variables (A, B, C) is equal to 4·3·2=24; in practice, in order to ensure trying all possible combinations of the values of the variables (A, B, C) at least once, 24 test cases must be carried out. Such combinations are the following:

# TEST  1–4         5–8         9–12        13–16       17–20       21–24
        A1;B1;C1    A1;B3;C1    A2;B2;C1    A3;B1;C1    A3;B3;C1    A4;B2;C1
        A1;B1;C2    A1;B3;C2    A2;B2;C2    A3;B1;C2    A3;B3;C2    A4;B2;C2
        A1;B2;C1    A2;B1;C1    A2;B3;C1    A3;B2;C1    A4;B1;C1    A4;B3;C1
        A1;B2;C2    A2;B1;C2    A2;B3;C2    A3;B2;C2    A4;B1;C2    A4;B3;C2

Table 2. Combinations of Variables for A, B, and C Values

Now, in this particular case, such a number of tests can still be affordable. However, if we consider the general case of k variables X1, X2, …, Xk, the first accepting n1 possible values, the second n2 possible values, …, the k-th assuming nk possible values, the total number of combinations is equal to n1·n2·…·nk. Such a value, even for low values of n1, n2, …, nk, is a high number. For example, if k=5 and (n1=3; n2=4; n3=2; n4=2; n5=3) we get a number of combinations equal to 3·4·2·2·3=144. That is quite a large number of tests to perform if you want to ensure complete coverage of all combinations.

In real software applications, the number of values that the variables can assume is high, and it is easy to reach hundreds of thousands of combinations, making it impossible to perform comprehensive tests on all combinations.

How can we carry out an effective test when the number of variables and values is so high as to make it impossible to exhaustively test all combinations? What reduction techniques apply?

1-wise testing

When the number of combinations is high, it is possible at least to verify that each individual value of the variables is given as input to the program under test at least once. In other words, if the variable A can take the values A1, A2, A3, at least a first test must be executed in which the variable A=A1, a second test in which A=A2, and a third test in which A=A3; the same goes for the other variables. This type of test provides so-called wise-1 coverage, and we will see shortly what that means. In practice, we have the following table:

# TEST  A     B     C
1       A1    *     *
2       A2    *     *
3       A3    *     *
4       A4    *     *
5       *     B1    *
6       *     B2    *
7       *     B3    *
8       *     *     C1
9       *     *     C2

Table 3. Max Wise-1 Test Set

A possible first reduction is to set a value for the first variable and assign a random (but permitted) value to the other variables (stated with * in Table 3), and proceed in this way for all the variables and values. In this way, we reduce the test cases from 24 to just 9. It is still possible to further reduce the number of test cases, considering that instead of * you can put a value of the variable which can then be excluded from the subsequent test cases.

Put into practice: for test case #1, in place of B=* put B=B1 and instead of C=* put C=C1, and remove test case #5 and test case #8, which are now both covered by test case #1.

Test case #2: in place of B=* put B=B2, and in place of C=* put C=C2, and erase test cases #6 and #9, both of which are now covered by test case #2.

Test case #3: instead of B=* put B=B3, and in place of C=* insert any value C1 or C2, considering that the values of the variable C equal to C1 and C2 have already been covered by test cases #1 and #2; we can leave C=* and postpone the choice of whether to enter C1 or C2. Now, remove test case #7, since B=B3 is now covered by test case #3.

Having understood the mechanism, there remains only test case #4, which covers A=A4; we can leave B=* and C=*, postponing the choice of what to actually select until we really perform the test.

The symbol * represents "don't care": we can put any value in it and the coverage of the test set does not change, since the values of all variables will still be used at least once; the values used to fill a * slot are simply covered more than once.

The final minimized test set for wise-1 coverage is the following:

# TEST  A     B     C
1       A1    B1    C1
2       A2    B2    C2
3       A3    B3    *
4       A4    *     *

Table 4. Minimized Wise-1 Test Set

Table 4 is drawn from Table 3, moving up the columns of the variables B and C to fill the * values with specific values; the * values remain in the rows that cannot be covered by specific values (row 3, variable C, and row 4, variable B), because the numbers of values of the variables are different (for example, variable B has just 3 values, while variable A has 4 values; the missing B value with respect to A is replaced by *).

Therefore, saying that a test set such as that reported in Table 4 provides wise-1 coverage is equivalent to saying that each individual value of each variable is covered at least once.

The general wise-1 rule we can deduce is the following: given k variables X1, X2, …, Xk, the first assuming n1 possible values, the second n2 possible values, …, the k-th nk possible values, the maximum number of tests that provides wise-1 coverage is equal to n1+n2+…+nk, while the minimum number of tests is equal to the maximum value among {n1, n2, …, nk}.

In real cases, what is of interest is always the test set with the minimum number of test cases that ensures the chosen coverage (and this is for obvious reasons).

2-wise testing or pairwise testing


If the wise-1 test guarantees coverage of every single value of each variable, it is easy to see that a wise-2 test set ensures that all pairs of values of the variables are covered at least once. In the case of the variables listed in Table 1, all pairs of variables are as follows: {(A, B), (A, C), (B, C)}. In fact, combinatorial calculus shows that the number of combinations of N values taken K at a time (with N ≥ K) is equal to:

C(N, K) = N! / (K! · (N−K)!)

In our example we have three variables (N=3) taken two at a time (K=2), and applying the combinatorial formula above we get C(3, 2) = 3! / (2! · (3−2)!) = 3; the three pairs are {(A, B), (A, C), (B, C)}.

Wanting to compute all possible pairs of variable values, we need to consider the following table:

PAIRS                (A, B)    (A, C)    (B, C)    GRAND TOTAL
# PAIRS OF VALUES    4·3=12    4·2=8     3·2=6     12+8+6=26

Table 5. Counting the Pairs of Values of the Variables A, B, and C

Hence, the total of all the pairs of values of the variables A, B, and C whose values are reported in Table 1 is equal to 26, and they are all printed in the following table:

#          A, B     A, C     B, C
1          A1,B1    A1,C1    B1,C1
2          A1,B2    A1,C2    B1,C2
3          A1,B3    A2,C1    B2,C1
4          A2,B1    A2,C2    B2,C2
5          A2,B2    A3,C1    B3,C1
6          A2,B3    A3,C2    B3,C2
7          A3,B1    A4,C1
8          A3,B2    A4,C2
9          A3,B3
10         A4,B1
11         A4,B2
12         A4,B3
# PAIRS    12       8        6
TOTAL      12+8+6=26

Table 6. Pairs of Values of the Variables A, B, and C

Why should you consider a test set to cover wise-2? Is it not enough to
consider a test set with 1-wise coverage? Here we enter into a thorny
issue, on which opinions are partly concordant and partly discordant.
Below is the incipit from the site www.pairwise.org[1]:
Pairwise (a.k.a. all-pairs) testing is an effective test case generation technique that is based on the observation that most faults are
caused by interactions of at most two factors. Pairwise-generated
test suites cover all combinations of two, therefore are much smaller
than exhaustive ones yet still very effective in finding defects.
We also mention the opinion of James Bach and Patrick J. Schroeder about the pairwise method, from "Pairwise Testing: A Best Practice That Isn't" by James Bach and Patrick J. Schroeder, available from http://www.testingeducation.org/wtst5/PairwisePNSQC2004.pdf[2]:
What do we know about the defect removal efficiency of pairwise
testing? Not a great deal. Jones states that in the US, on average,
the defect removal efficiency of our software processes is 85%[26].
This means that the combinations of all fault detection techniques,
including reviews, inspections, walkthroughs, and various forms of
testing remove 85% of the faults in software before it is released.


In a study performed by Wallace and Kuhn[27], 15 years of failure


data from recalled medical devices is analyzed. They conclude that
98% of the failures could have been detected in testing if all pairs of
parameters had been tested (they did not execute pairwise testing,
they analyzed failure data and speculated about the type of testing that would have detected the defects). In this case, it appears
as if adding pairwise testing to the current medical device testing
processes could improve its defect removal efficiency to a best-in-class status, as determined by Jones[26].
On the other hand, Smith, et al.[28] present their experience with
pairwise testing of the Remote Agent Experiment (RAX) software
used to control NASA spacecraft. Their analysis indicates that pairwise testing detected 88% of the faults classified as correctness and
convergence faults, but only 50% of the interface and engine faults.
In this study, pairwise testing apparently needs to be augmented
with other types of testing to improve the defect removal efficiency,
especially in the project context of a NASA spacecraft. Detecting
only 50% of the interface and engine faults is well below the 85%
US average and presumably intolerable under NASA standards.
The lesson here seems to be that one cannot blindly apply pairwise
testing and expect high defect removal efficiency. Defect removal
efficiency depends not only on the testing technique, but also on
the characteristics of the software under test. As Mandl[4] has
shown us, analyzing the software under test is an important step
in determining whether pairwise testing is appropriate; it is also an

important step in determining what additional testing technique


should be used in a specific testing situation.
[4] R. Mandl, Orthogonal Latin Squares: An Application of Experiment Design to Compiler Testing, Communications of the ACM, vol. 28, no. 10, pp. 1054–1058, 1985.
[26] Jones, Software Assessments, Benchmarks, and Best Practices. Boston, MA: Addison Wesley Longman, 2000.
[27] D. R. Wallace and D. R. Kuhn, Failure Modes in Medical Device Software: An Analysis of 15 Years of Recall Data, Int'l Jour. of Reliability, Quality and Safety Engineering, vol. 8, no. 4, pp. 351–371, 2001.
[28] B. Smith, M. S. Feather, and N. Muscettola, Challenges and Methods in Testing the Remote Agent Planner, in Proc. 5th Int'l Conf. on Artificial Intelligence Planning and Scheduling (AIPS 2000), 2000, pp. 254–263.
To clarify, we can say that the pairwise or 2-wise test method ensures
that all combinations of pairs of values of the variables are tested,
theoretically ensuring the maximization of the anomalies found,
with percentages ranging from 50% to 98% according to the studies.
In fact, no test can ever guarantee a defined percentage removal of
defects (which can only be calculated ex post for the specific project).
Let's say, trying to be realistic, that pairwise achieves a valid agreement between the number of tests to be performed and the anomalies


detected, when the number of variables and their values is so high
that it is not possible to test all the combinations (the so-called all-wise
testing or N-wise testing, where N is the number of variables we are
playing with).
In the case of a test set covering wise-2 level, it is very simple to know
the maximum number of tests that provides coverage of all pairs of
values of the variables. This value is equal to the number of pairs of
values of the variables themselves. In our example of three variables A,
B and C in Table 1, this number is 26 (calculated as described in Table 6).
The real problem still unsolved is to discover the minimum number
of tests that guarantees wise-2 coverage, although there are a variety
of methods and algorithms that approximate this value for a problem
with an arbitrary number of variables and values. Examples of tools
that use these algorithms are:
1. Microsoft Pairwise Independent Combinatorial Testing tool
(PICT), downloadable from http://download.microsoft.com/
download/f/5/5/f55484df-8494-48fa-8dbd-8c6f76cc014b/pict33.msi
2. NIST Tools: http://csrc.nist.gov/groups/SNS/acts/documents/
comparison-report.html
3. James Bach AllPairs downloadable from
http://www.satisfice.com/tools.shtml
4. Other tools here: http://www.pairwise.org/tools.asp
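None of these tools is needed to see the basic idea they implement. The following is a minimal greedy sketch in plain Python (an illustration of the greedy approach in general, not the algorithm of any specific tool above), applied to the variables of Table 1: it repeatedly picks the candidate test case that covers the most not-yet-covered pairs.

from itertools import combinations, product

values = {"A": ["A1", "A2", "A3", "A4"],
          "B": ["B1", "B2", "B3"],
          "C": ["C1", "C2"]}
names = list(values)

def pairs_of(test: dict) -> set:
    """All (variable, value) pairs exercised by one test case."""
    return {((v1, test[v1]), (v2, test[v2])) for v1, v2 in combinations(names, 2)}

# Every pair of values that a 2-wise test set has to cover (26 for Table 1).
uncovered = set()
for test in (dict(zip(names, combo)) for combo in product(*values.values())):
    uncovered |= pairs_of(test)

test_set = []
while uncovered:
    # Greedy choice: the full-combination candidate covering the most missing pairs.
    best = max((dict(zip(names, combo)) for combo in product(*values.values())),
               key=lambda t: len(pairs_of(t) & uncovered))
    test_set.append(best)
    uncovered -= pairs_of(best)

for i, t in enumerate(test_set, 1):
    print(i, t)
print(f"{len(test_set)} tests cover all 26 pairs")

For these variables, the greedy search typically ends near the lower bound of 12 tests imposed by the twelve (A, B) pairs alone, instead of the 24 tests of the full combination set.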

n-wise Testing with n>2


It is now easy to extend the concept of pairwise or 2-wise testing to the generic case of n-wise, with n>2. A generic test set provides n-wise coverage if it is able to cover all the n-tuples (3-tuples if n=3, 4-tuples if n=4, and so on). As in the pairwise case, it is always possible to know the size of the maximum test set, equal to the number of n-tuples of values of the variables, but there is no way to know, in the general case, the size of the minimum test set that guarantees n-wise coverage.
Using the NIST tools (ACTS) or Microsoft PICT (or other similar tools), it is possible to extract a test set that approximates as closely as possible the minimum test set. It is clear that, given a set of N variables, the maximum level of wise which you can have is equal to the number of variables. So, if we have four variables, a 4-wise test set coincides with all possible combinations, while a 5-wise test set or higher makes no sense.
The combinatorial testing techniques that we discussed in the first
part of the article are intended to solve a basic problem, which we
have already discussed and we rephrase as follows:
Direct Combinatorial Test Problem: In a software system accepting
N input variables, each of which can take on different values, find the
test set with the smallest number of test cases that guarantees (at least)
coverage of all combinations (2-tuples) of all the values of the variables
involved.
The Pairwise technique and a large number of support tools have
been developed to solve this problem. Once such a test set (the smallest possible) has been generated, you should run the test cases and
detect (if present) all the defects that arise in the software under test.
There is also a second problem, maybe less popular than the previous one, as follows:

Reverse Combinatorial Test Problem: Given a test set for which you do not know the method of generation (if any), calculate what percentage of n-wise coverage the test set ensures, with n between 1 and the number of variables in the test set.
An example: tests generated by automatic tools over which you have little or almost no process control, or test cases generated by the automatic flows that feed interfaces between different systems (think of a system that transmits accounting data from a system A to a system B); such test data are in general excerpts from historical series over which you have no control.
For test scenarios in some way related to the reverse combinatorial problem, it is not easy to find support tools, as such tools are not readily available. The only tool I found is NIST CCM, in alpha phase at the time I am writing this article. If you like, you can request a copy of the software: go to http://csrc.nist.gov/groups/SNS/acts/documents/comparison-report.html as previously reported[3].
In the following we describe a set of tools called Combinatorial Testing Tools (CTT; executable under Windows but, if needed, not difficult to port to Unix/Linux) that enable the (quasi-)minimal test set to be extracted and the coverage of a generic test set to be calculated, using systematic coverage-calculation algorithms that start from all n-tuples of the variables' values. Such algorithms should be categorized as brute-force algorithms and should be used (on a normal supermarket-bought PC) only if the number of variables and values is not too high.

Overview of CTT
Tools in the Combinatorial Testing Tools product try to provide support
to help solve both problems of combinatorial testing previously stated.
Combinatorial Testing Tools do not intend to compete with the existing tools aimed at solving the direct problem of combinatorial testing, such as Microsoft PICT, James Bach's AllPairs, the NIST tools, or several other commercial and non-commercial tools already present on the market. These tools implement algorithms that are definitely more effective than CTT's and therefore should be favored, I repeat, for solving the direct problem.
Regarding the reverse problem of combinatorial testing, to my knowledge there are no tools on the market (except NIST CCM in alpha release), and CTT therefore attempts to provide a very first solution to the reverse problem, to be improved over time as we better understand the logic and rules behind combinatorial testing and the determination of the minimal test sets related to it.
We would like to recall that:
a. Solving the direct problem means determining the smallest possible test set with an agreed level of WISE coverage (usually WISE=2) from the set of variables and values.
b. Solving the reverse problem means determining the level of coverage of a given test set with respect to a reference WISE level (here also usually WISE=2).
The tools should be categorized as follows:


a. First level tools: batch DOS scripts providing an immediate response to standard scenarios that typically occur in test projects requiring combinatorial techniques.
b. Second level tools: executables (written in C++/Perl) that are more versatile, giving a response to problems that may be less common, but sometimes occur in test projects requiring combinatorial techniques.

First level scripts were thought of as the wrapper around the second level executables, in order to simplify end-user life with a set of simple commands that enable you to quickly get a number of items of standard information. The following table maps the two categories of tools and each tool with the kind of information it supplies.

In this article we cannot go into detail on the behavior of the tools; what follows is a short description of all of the first level tools. More information can be found in the tools' user manual, which is downloadable (with the scripts) from http://www.opensource-combinatorial-softtest-tools.net/index.php/download. Anyway, here is a useful summary table that links together first level and second level tools.

[Table 7 maps the first level scripts (runW, runT, runsT, runTS, runsTS, runTSF, runsTSF, runCC, runsCC, runR, runC) to the thirteen second level executables listed below.]

Table 7. Mapping of First Level vs. Second Level Tools (Wrapping Map)

First Level Tools: Batch DOS Scripts

Tool runW
This is the first tool that is mandatory to run before all the other tools. It generates all the n-tuples corresponding to the value of the Wise passed as input (runW = run Wise).

Tools runT and runsT
Get the minimal test set with guaranteed coverage equal to the Wise passed as input (runT = run Test), extracting the test cases from the file of the WISE_MAX-tuples (all combinations).

Tools runTS and runsTS
Get the minimal test set with guaranteed coverage equal to the Wise passed as input, or equal to the coverage of the test set passed as input if less than WISE, extracting the test cases from the file of the test set passed as input (runTS = run Test Set).

Tools runTSF and runsTSF
Get the minimal test set with guaranteed coverage equal to the Wise passed as input, or equal to the coverage of the test set passed as input if less than WISE, extracting the test cases from the file of the test set passed as input and excluding n-tuples already covered by the partial test set input file (runTSF = run Test Set Forbidden).

Tool runCC and runsCC
Computes the input test set coverage in respect of the WISE passed as input (runCC = run Calculate Coverage).

Tool runR
Extracts a non-minimal test set, but still smaller than the maximum test set, with guaranteed coverage equal to the Wise passed as input (runR = run Reduce).

Tool runC
Applies constraints to the n-tuple file (or test set file) passed as input.

Below is the list of second level executables:

Second Level Tools Executable

1. calcolaCopertura.exe
2. calcolaCoperturaSlow.exe
3. Combinazioni_n_k.exe
4. contrLimWise.exe
5. count_line.exe
6. creaFileProdCart.exe
7. creaStringa.exe
8. generaTestSet.exe
9. generaTestSetSlow.exe
10. ProdCart.exe
11. reduceNple.exe
12. runConstrains.pl
13. uniqueRowFile.exe

In the following we describe the second level tools, which are a little harder to use but more versatile; they may be useful to experienced users for managing more complex scenarios.


Executable calcolaCopertura.exe and calcolaCoperturaSlow.exe
Performs the coverage calculation of the input test set in respect of the input WISE.

Executable Combinazioni_n_k
Extracts all the combinations, taken K at a time, of a string of length N passed as input.

Executable generaTestSet.exe and generaTestSetSlow.exe
Gets the minimal test set with guaranteed coverage equal to the Wise
passed in input or equal to the coverage of the test set passed in input
if less than WISE, extracting the test cases from the file of the test set
passed in input, excluding n-tuples already covered by the partial
test set input file.

Executable ProdCart.exe
Generates all possible combinations of the values of variables as defined in the input file.

Executable reduceNple.exe
Squeezes together as many n-tuples as possible from the input file, replacing the * values with specific values of the variables and thus creating a test set from the file of n-tuples. While not the minimum test set, it is reduced compared to the maximum test set (which coincides with all n-tuples). The number of records depends, in an unpredictable way, on the sorting of the n-tuples input file. There is certainly a sorting of the file's rows for which the output test set contains a minimum number of test cases with guaranteed WISE-coverage, but finding this sorting is not feasible from a computational point of view, as it is too onerous.
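To illustrate the kind of reduction described here, a small greedy sketch in Python (only an illustration of the idea, not the code of reduceNple.exe; the variable indexes and values are invented):

def reduce_ntuples(ntuples):
    """Greedy merge of n-tuples (dicts var_index -> value) into test cases.
    Positions that stay undecided keep the value '*', like the n-tuple files
    described above; the result is generally not minimal and depends on order.
    """
    test_cases = []  # each test case is a dict var_index -> value
    for ntuple in ntuples:
        for tc in test_cases:
            # Compatible if no already-fixed position holds a different value.
            if all(tc.get(i, value) == value for i, value in ntuple.items()):
                tc.update(ntuple)
                break
        else:
            test_cases.append(dict(ntuple))
    return test_cases

# Pairs (2-tuples) over three variables; reordering the list changes the result size.
pairs = [{0: "XP", 1: "IE"}, {0: "XP", 2: "it"}, {1: "IE", 2: "it"},
         {0: "Win7", 1: "Chrome"}, {1: "Chrome", 2: "en"}]
for tc in reduce_ntuples(pairs):
    print({i: tc.get(i, "*") for i in range(3)})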
There are six other executables that do not provide direct support to the generation and/or handling of the test sets, but are predominantly used by the DOS batch tools to perform secondary operations that would be impossible, or at least very complex, to do directly from DOS. These utilities may also be of some use, even if they are not to be considered test tools tout court. We have not described those utilities in this article.

Conclusion
In the article we gave an overview of a test methodology that uses combinatorial calculus to find test cases for a software component, knowing the inputs of the same. Generally speaking, a combinatorial technique like this generates too many test cases, so we need to define a so-called N-wise coverage (with N from 1 to the number of input variables), select a value of N (usually N=2, pairwise testing), and extract a subset of test cases with the guarantee of N-wise coverage. This is the "Direct Combinatorial Test Problem", and there are a lot of wonderful tools on the market that solve the problem very quickly.
We then dealt with the "Reverse Combinatorial Test Problem": if you have a test set built upon the N inputs of a software component about which you know nothing, what percentage of N-wise coverage does the test set ensure? On the market I found just one tool that addresses this problem: NIST CCM, which is still in alpha phase at the time I am writing this article. In the article I give an overview of the CTT (Combinatorial Testing Tools) I developed in C++ that, using a brute-force approach, try to give a very first response to the Reverse Combinatorial Test Problem.
But there is (at least) one problem whose solution is still not known. For a software with N inputs, what is the minimum test set (if it exists) that guarantees the N-level coverage? The solution exists just for a trivial case: 1-wise coverage is always equal to the number of values of the variable with the maximum number of values, while N-wise coverage coincides with all the combinations of the variables' values. And what if we also include the outputs? This could be the material for another article in the future.

Notes of Appreciation
[1] Many thanks to Jacek Czerwonka, owner of the web site www.pairwise.org, who allowed me to reprint the incipit of the same. By the way, he wrote to me about some evolution on the subject of pairwise vs. random testing that you can find in the article "An Empirical Comparison of Combinatorial and Random Testing", available at http://www.nist.gov/customcf/get_pdf.cfm?pub_id=915439, written by Laleh Sh. Ghandehari, Jacek Czerwonka, Yu Lei, Soheil Shafiee, Raghu Kacker, and Richard Kuhn.
[2] Many thanks to James Bach, owner of the website www.satisfice.com, who allowed me to reprint part of the article "Pairwise Testing: A Best Practice That Is Not" by James Bach and Patrick J. Schroeder, available from http://www.testingeducation.org/wtst5/PairwisePNSQC2004.pdf.
[3] Many thanks to Dr. Richard Kuhn from NIST who kindly sent me a copy of the NIST CCM tools. We would like to remind you that you should request a copy: go to http://csrc.nist.gov/groups/SNS/acts/documents/comparison-report.html as previously reported.

> about the author

Danilo Berta graduated in Physics at Turin University (Italy) in 1993 and first started work as a Laboratory Technician, but he soon switched to the software field as an employee of a consultancy company working for a large number of customers. Throughout his career, he has worked in banks as a Cobol developer and analyst, for the Italian treasury, the Italian railways (structural and functional software testing), as well as for an Italian automotive company, the European Space Agency (creation and testing of scientific data files for the SMART mission), telecommunication companies, the national Italian television company, Italian editorial companies, and more. This involved work on different kinds of automation test projects (HP Tools), software analysis, and development projects. With his passion for software development and analysis, which is not just limited to testing, he has written some articles and a software course for Italian specialist magazines. His work enables him to deal mainly with software testing; he holds both the ISEB/ISTQB Foundation Certificate in Software Testing and the ISEB Practitioner Certificate; he is a regular member of the British Computer Society. Currently, he lives in Switzerland and works for a Swiss company.
Website: www.bertadanilo.name

By Gregory Solovey & Phil Gillis

A Continuous Integration Test Framework


Introduction
Continuous Integration (CI) is a software development practice where
developers integrate their work frequently, usually daily, directly on
the project's main stream. CI consists of two activities: build and test.
In this article we describe the test aspects of CI. Continuous Integration Test Framework (CITF) is the Alcatel-Lucent implementation for
testing 4G wireless products, developed over several years. We are
not offering or promoting this particular system, but rather describe
solutions that can be used to build or select similar CITFs.
Before there was CI, there was test automation. Numerous groups,
working on the same large project in several locations, developed their
own automation methodology and tools. Unit testing was done by
developers by adding test code; integration and system testers used
a variety of CLI, GUI, SNMP, and load and stress test tools.

After the adoption of the CI paradigm, it soon became apparent that reusing the various independent test solutions at the mainstream level posed a challenge. The teams responsible for the build verification had to know the specifics of all tools, be able to debug and interpret their results promptly, and manually build all the permutations of test environments.
We looked at several existing CI test systems, but none seemed suitable for our situation of testing many configurations of embedded systems. We decided to build our own framework using MySQL and a web interface, with the following goals:
Manage and allocate pools of resources, grouped by configuration.
Have the ability to dynamically assemble multiple test environments for each build, release, and application.
Create testware standards for test tools, or wrappers for all test tools, to make them look alike (use the same configuration data and convert their results to the standard hierarchical CITF format).
Provide common interfaces for debugging and reporting the build status.
Design a quick and easy way to define new software releases and projects, and integrate new test tools, testware, resource pools, and test environments.
Intelligently select the appropriate test suites (sanity, regression, feature) whenever a build is completed to validate the integrity of the mainstream, existing, and new functionality.

1. Resource Management

Embedded systems require comprehensive test environments that include a variety of test tools, network elements, load simulators, and a large selection of end devices. Each build has to be tested on multiple test environments and resource permutations. It is impossible to have a dedicated test environment for each release/project/feature. On the contrary, some teams do not want to share their resources, some teams do not have them at all, some teams care about particular resource details, and some do not know these details.
To satisfy these contradictory conditions, the following approach was created:
A resource has a name, a pool it can belong to, selection attributes, and ownership attributes. The selection attributes enable resources to be identified in order to be selected at the execution stage. The ownership attributes, if defined, assign the resource to a user or group of users.

Figure 1. Resource attributes: name, poolID, search set, ownership set

A test environment is presented as a set of resource placeholders that should be filled with resources when the test task is issued. Each placeholder is defined as a set of search attributes.

Figure 2. Test environment as a set of resource placeholders and respective sets of their search attributes

A test task description refers to one test environment and describes the ways to select resource(s) for environment placeholders. There are three ways of finding a resource for a placeholder: by resource name, from a pool, or from the resources that satisfy the search attributes. However, for each resource placeholder only one way should be used.

Figure 3. Test task as a set of resource placeholders and the respective rules of their replacements

Every time a build is done, a new instance of a test task (testTaskExe) is issued. Based on its test task description, all placeholders need to be filled with real resources. The CITF performs a two-stage selection procedure:
Stage 1: Select available resource candidates based on a selection method (name, pool, attributes). A resource is considered available if it is not under test and if it is in an operational state.
Stage 2: From these candidates, select the permitted resources for the
user who issued the request to test. The resource is permitted to be
used if one of three conditions is true:
1. The resource user and group ownerships are not defined
2. If only group ownership is defined, the user should belong to this
group

3. If user ownership is defined, it should match the user ID of the person who issued the test

Figure 4. Test Task Exe as a set of given resources

A test task is started upon acquiring all the required resources. The
states of its resources will be changed to running. The test task will
start executing component by component, and each component knows
how to build an environment from the selected resources. Upon completing the execution of a task, the resources are returned to the available state. If the returned resource is unhealthy, recovery mechanisms
restore it to its initial state, making it ready for subsequent test runs.
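A minimal sketch of the two-stage selection described above, in Python purely for illustration (the CITF itself is built on MySQL and a web interface, and the field names used here are invented):

def select_resource(placeholder, resources, user_id, user_groups):
    """Two-stage selection of a resource for one placeholder."""
    # Stage 1: available candidates that match the placeholder's single
    # selection method (name, pool, or search attributes as a set).
    candidates = [
        r for r in resources
        if not r["under_test"] and r["operational"]
        and (
            ("name" in placeholder and r["name"] == placeholder["name"]) or
            ("pool" in placeholder and r["pool"] == placeholder["pool"]) or
            ("attrs" in placeholder and placeholder["attrs"] <= r["attrs"])
        )
    ]
    # Stage 2: keep only resources the requesting user is permitted to use.
    def permitted(r):
        if r["owner_user"] is None and r["owner_group"] is None:
            return True                                   # no ownership defined
        if r["owner_user"] is None:
            return r["owner_group"] in user_groups        # group ownership only
        return r["owner_user"] == user_id                 # user ownership defined
    return next((r for r in candidates if permitted(r)), None)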

2. Testware Management
Continuous integration deals with code before the production stage.
This means the testware needs to be adaptive to frequent changes of
many kinds, such as API and command syntax and semantic changes,
and changes in requirements. When these changes have occurred,
hardcoded syntax in testware will be hard to find and correct. The only
way to achieve testware maintenance is by providing a strict relationship between architecture, requirements, and design documents, and
by separating business functionality from implementation details.

2.1 Testware Hierarchy


A hierarchy of testware should reflect the architecture, requirements,
and design documents that describe the object-to-test from the structural and behavioral views down to the implementation details. The
latter is described below (top-down):
A test (sanity, regression, functional, performance, etc) is a collection
of test sets.
A test set (TS) reflects the structural view of the object-to-test, as described by the system architecture. Examples include: a set of hardware
components, a set of services, or a set of network configurations. A TS
is a grouping of use cases.

A use case (UC) represents a subsystem from a behavioral (functional) point of view and is related to a specific requirement (scenario, algorithm, or feature). A UC consists of test cases.
A test case (TC) is a single verification act. A TC moves the object-to-test from an initial to a verification state, compares the real and expected results, and returns it back to its initial state. In most cases, returning the system to its initial state makes the TCs independent of each other. A TC is a series of test actions to move the object-to-test through the above phases (set, execution, verification, reset).
A test action (TA) is a single act of interaction between a test tool and the object-to-test. It supports the object-to-test's interfaces (CLI, GUI, SNMP, HTTP). For example, a test action can be the execution of a single CLI command, a single interaction with the GUI, or sending a single HTTP request transaction to the client.

Figure 5. Testware Hierarchy

The hierarchical testware presentation is materialized in unconditional


execution of the test cases: all test sets are executed sequentially; all
use cases inside each test set are executed sequentially; all test cases
related to each use case are executed sequentially.
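That unconditional execution order can be pictured as three nested loops (an illustrative sketch in Python, not CITF code; run_test_case stands in for whatever test tool executes a single test case):

def run_test(test, run_test_case):
    """Execute a test: all test sets, use cases, and test cases in order."""
    results = []
    for test_set in test["test_sets"]:            # structural view (architecture)
        for use_case in test_set["use_cases"]:    # behavioral view (requirements)
            for test_case in use_case["test_cases"]:
                # set -> execution -> verification -> reset happens inside the TC
                results.append((test_set["name"], use_case["name"],
                                test_case["name"], run_test_case(test_case)))
    return results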

2.2 Testware External Presentation


The testware has to be updated as a result of changing the business
rules, the API syntax, or the GUI appearance. To minimize the number
of changes, the testware is organized externally into configuration
files, test set files, test scripts, and test case libraries. Such presentation
separates the implementation details from the business functions,
making the testware independent of the environment. The test objects
are reusable across releases, projects, and test types.
The relationships between the internal and external presentations
are described below:


A configuration file contains a list of TSs that have to be executed. Each


TS in a configuration file points to a test set.
A test set is a file containing a collection of test script names.
A test script contains one or more UC descriptions; each of them combines the TC calls. The actual TC descriptions are stored in a test library.
A test library contains the description of TCs as a sequence of TAs. In
this manner, various UCs can reuse the same TCs from the test libraries.
A test action is a single act of communication with the interface of the
object-to-test. It is presented as action words: set, send, push, capture,
compare, repeat, etc. The TA parameters are CLI commands or GUI object methods, implemented in the language of a specific test tool.
Figure 6. Testware external presentation (configuration file, test set files per TS, test scripts with UC/TC calls, test libraries with TC definitions, test actions)

The configuration, test set, test script, and library are files that present testware. The narrow specialization of each file type serves a
maintenance purpose: a single change in the code of the object-to-test
(on the structural, functional, or syntax level) should lead to a single
change in testware.
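As an illustration of this separation, the sketch below shows how the four file types might hang together for one tiny test. Python dictionaries stand in for the real configuration, test set, script, and library files, and every name and action word is an invented example:

# Configuration file: which test sets to run.
configuration = ["TS_services"]

# Test set file: which test scripts belong to the test set.
test_sets = {"TS_services": ["script_login"]}

# Test script: use cases as sequences of test case calls.
scripts = {"script_login": {"UC_operator_login": ["TC_valid_login", "TC_logout"]}}

# Test library: test cases as sequences of test actions (action word, parameter).
library = {
    "TC_valid_login": [("send", "login admin"), ("capture", "prompt"),
                       ("compare", "admin>")],
    "TC_logout":      [("send", "logout"), ("compare", "bye")],
}

# A syntax change in one CLI command touches only the library entry above;
# a structural change touches only the configuration or test set file.
for ts in configuration:
    for script in test_sets[ts]:
        for uc, test_cases in scripts[script].items():
            for tc in test_cases:
                for action, parameter in library[tc]:
                    print(ts, uc, tc, action, parameter)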

3. Test Tool Management: Conform or Wrap


Embedded systems applications demand a variety of test tools and
various independent test solutions. It is unreasonable to expect that
the teams responsible for build verification know the specifics of all
tools and are able to debug and interpret their results promptly.
The challenge is to make all test tools (CLI, GUI, load and performance)
look alike, and appear transparent to the tester. The solution is to
require the testware framework to be followed or to create a wrapper for each test tool that will serve as an interface between the test
framework and test tools.

Before starting its tests, each wrapper dynamically creates its configuration files, which include references to selected tests, resources to be used, and the location of the results. The configuration files are built from templates, based on the release, project, and resource parameters.
A debug file that is created during a test execution needs to follow a standard organization. The debug file should have a common look for all test tools, with an emphasis on what the stimuli and responses are and how the comparison was made, in order to let the tester write CRs that communicate the problem precisely to the developer.
Upon completion of a test, the results produced by different test tools are converted into a standard hierarchical format and uploaded to a results repository together with logs and traces. Failed test cases can be filtered for known issues in order to avoid re-analyzing expected failures. The standard results presentation format supports five levels of test hierarchy, which are presented on the web.
To find a single result from the tens of thousands of test runs per day with a few mouse clicks, the results repository area needs to mirror (Figure 7):
The build infrastructure, as a set of various release/project/feature/developer streams
The build test infrastructure, as a set of test environments and applications to test
The test structure, as a tree of test sets, use cases, and test cases

Figure 7. Test results location (a subdirectory structure that mirrors the build structure, a subdirectory structure that mirrors the build test structure, and a result file content that mirrors the test structure)

4. Test Process
New releases/projects/features, test environments, and testware have
to be created daily. This requires interfaces that work with test related
objects: to create, to edit, to delete, to run, to monitor, and to report.
The management web interface has to provide a quick and easy way
to define new software releases and projects, and integrate new test
tools, testware, resource pools, and test environments directly into
the database. The build server requests a test for a new build through
an execution interface and is notified of test results and metrics.
The request to run a test is called a verification request (Figure 8). A
verification request specifies a set of test tasks that are independent
and can execute in parallel. Each test task (task for short) specifies a
set of components, which are sets of tests to run sequentially, and is
associated with a test environment, which specifies a set of resources
that must be acquired before execution.


Each component calls its test tool to start a test. A test task monitors
all components and, based on the database settings, can terminate
it if the execution goes on too long, repeat its execution, or skip the
execution of the subsequent components.
Figure 8. A verification request is translated into independent test tasks; each task
executes its components sequentially in its associated test environment.
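The shape of a verification request can be sketched as plain data (illustrative Python only; the real CITF keeps this in its database, and all names below are invented):

verification_request = {
    "build": "R4.2_build_118",
    "test_tasks": [   # independent, may run in parallel
        {
            "environment": "env_city_cell",            # resources to acquire first
            "components": ["sanity_cli", "regression_mobility"],  # run in order
            "on_timeout": "terminate",                 # or "repeat" / "skip_rest"
        },
        {
            "environment": "env_lab_small",
            "components": ["sanity_gui"],
            "on_timeout": "skip_rest",
        },
    ],
}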

5. Reliability
A considerable number of failed test cases are not real code errors, but
are environment issues, such as random network glitches, testware
mis-synchronization, or resource failures. The following built-in testing
reliability features help test teams handle such errors:
Starting test cases from an initial state and, in case of failure, returning a resource to its initial state by the test case's recovery sequence.

Using filters to mask expected test failures in case of known problems.
Automatically rerunning some software components in case of intermittent failure during test execution.
Recovering a hardware resource from a failure state when the resource is returned to the pool after a failed test execution.
Manually updating test results after a tester reruns failed test cases and reporting the build status promptly.
Maintaining redundancy of database and web servers, along with frequent backups for quick restoration of functionality in the event of a system outage or database corruption.

6. Metrics
Metrics capture a snapshot of the quality of each build. The primary metrics are test object pass/fail result counts and test execution times. These are calculated at the test tool layer.
The second category of metrics is failure reasons, which are used to identify bottlenecks in the CI process. Sometimes failures unrelated to the code can occur, such as network glitches, database errors, testware mis-synchronization, etc. These data, collected and analyzed over time, can identify areas of CITF or the environment that should be improved.
The third category of metrics is coverage: feature, requirements, and code coverage. The percentage of code that was touched by tests does not prove that a test is complete (i.e., covers all possible errors) but rather reveals areas of the code that were not tested at all. The code coverage metrics can be useful in CI if the build consists of many independent layers and modules. An object's quality can be defined as the quality of the weakest link. Therefore, it is good practice to request the same percentage of code coverage for all system components. This is especially important for new code deliveries, since it is the primary proof (along with requirements traceability) that the necessary automated tests were added along with the new code.
The fourth category of metrics describes the quality of each build: code review data, code complexity, warnings, and memory leaks.

7. Conclusions
CI puts heavy demands on testing systems. We found that no commercial solution offered the functionality we needed, which is why we developed our own CI test framework. The development was driven by the demands of the test teams responsible for build validation, whose major constraint was the ability to determine the cause of failure in a short interval. As a result, we deployed five releases of CITF during the five years we have been in operation. We currently support four major releases, approximately thirty projects for each release, ten different test tools (commercial and in-house), and twenty embedded systems applications. We support a pool consisting of hundreds of geographically distributed physical resources. The system verifies tens of builds daily, by running hundreds of thousands of test cases.

> about the authors

Gregory Solovey is a Distinguished Member of Technical Staff at Alcatel-Lucent. He has a PhD in Computer Science (thesis "Error Identification Methods for High Scale Hierarchical Systems"), more than 25 years of experience, and over 60 publications and patents in the fields of test design and automation. He has extensive experience in establishing a test strategy and process, designing production-grade test tools, automating the test design, embedding automation test frameworks, and implementing continuous integration. He is currently applying and enhancing his approaches to develop a test framework for continuous integration of a hierarchical embedded system.
LinkedIn: www.linkedin.com/profile/view?id=44354302

Philip Gillis is a Distinguished Member of Technical Staff at Alcatel-Lucent in Murray Hill, NJ. He has more than 30 years' experience in software development, including 20 in database design and administration, and 15 in test automation systems. Starting with C and Unix, he moved to Windows programming in the 1990s and to Linux servers and web browser programming in the late 2000s. He now designs, implements, and administers a test automation system using PHP, Perl, JavaScript with jQuery, and MySQL. He holds five US patents and multiple international patents.
LinkedIn: www.linkedin.com/profile/view?id=5523965

By Robert Galen

The Three Pillars of


Agile Quality and Testing
Introduction
A few years ago, I entered an organization to do some agile-focused
coaching and training. From the outside looking in, it was a fairly
mature agile organization. They had internal coaches in place, had
implemented Scrum while also leveraging Extreme Programming
technical practices for a couple of years, and seemed to be fairly committed and rigorous in their application of the methods.
It was a global financial firm that delivered their IT projects via highly
distributed teams. There were a wide variety of tools in place for application lifecycle management (ALM), software development, and
software testing support. In my first few days, everything I encountered, while albeit in superficial detail, just felt like a mature agile
organization and I was pleasantly surprised. Heck, I was impressed!
For the purposes of this article, my observations will shift to being
quality, testing, and tester centric.

Too Narrow a Focus


One of the things I noticed is that the firm had gone all in on Behavior-Driven Development (BDD) leveraging Cucumber. They had invited in
several consultants to teach courses to many of their Scrum teams and,
as a result, everyone was excited and test infected. Teams were literally creating thousands of BDD-level automated tests in conjunction
with delivery of their software. From their teams perspective, there
was incredible energy and enthusiasm. Literally everyone contributed
tests and they measured the number of increasing tests daily.
These tests were executed as part of their continuous integration
environment, so there were visible indicators of coverage and counts.
It was truly visual and very inspirational. And I could tell that everyone
was focused on increasing the number of automated tests so there
was a unity in goals and in each teams focus.
However, a few days into my coaching, I was invited to a product
backlog refinement session where a team was writing and developing their user stories. What I expected was to simply be an observer.
What actually happened is that I soon realized that the team did not
know how to write a solid user story. They could barely write one at
all, which shocked me. After this gap became clear, they asked me to
deliver an ad-hoc user story writing class for them. Afterwards, the
team was incredibly appreciative and everyone seemed to get the
place that stories held in developing their BDD-based acceptance tests.
Over the next several days, I started to realize something very important. The organization was at two levels when it came to its agile
quality and testing practices. People were either all in or they were
unaware or under-practicing specific techniques. For example, they
were all in on BDD and writing automated BDD (Cucumber) tests
and on continuous integration. However, they struggled mightily
with writing basic user stories and literally had no clear or consistent
Definition-of-Done.


I also realized that this seesaw effect of focusing on a handful of the


more technical practices was doing them a disservice. Why? Because
it is actually the interplay across practices that influences the effectiveness of your agile testing and the product impact of your quality
practices. I prepared a model for them to illustrate the balance that I
think is critical in agile quality practices.
I called it The Three Pillars of Agile Quality and Testing, and I began to
coach a much more nuanced, broad, and deep approach for their teams.
While this is more of a strategic play and a longer-term focus, the
discussions and the changes they drove had an immediate impact
on the organization. People started to connect the dots between the
technical and softer skill practices. They became much more aware
of the interplay across practices and how this drove an improvement
in quality results.
I want to share the Pillars in this article in the hope that it will help
your agile quality strategy development as well.

A Bit More on Strategy


A really important sub-text to the development of the Three Pillars is
my ongoing observation that agile organizations at best might have a
strategy for their overall transformation. But rarely do they develop or
are able to articulate their overall strategy for agile quality and testing.
I guess the assumption is that these aspects are along for the ride as
part of the development-driven strategies in their agile adoption. But
I could not disagree more with that point of view. I believe that calling
out, making transparent, and focusing on your quality and testing
agile strategies strongly aligns with the Agile Manifesto and makes
for a much more rigorous and successful agile transition.
The ultimate value of the Three Pillars is to help organizations think
about, articulate, and realize their agile strategies in these areas and,
most importantly, to hopefully achieve evolutionary balance.

The Three Pillars of Agile Quality and Testing


As I have said, the driving force behind my creating the Three Pillars
was organizational quality imbalance. As I observed what was happening at my client, I clearly recognized the imbalance. However, it
was unclear to me how to create a model that would help them. I

eventually came upon the following three critical areas, or Pillars,


where I tried to categorize crucial tactics, strategies, and techniques
that I have found helpful to agile teams as they create a broad and deep
supportive structure for their product quality and testing activities.
Here are the pillars at a high level:
1. Development and Test Automation: This pillar is the technology
side of quality and testing, and it is not simply focused on testing
and testers. It includes tooling, execution of the automation test

pyramid, continuous integration, XP technical practices, and support for ALM-distributed collaboration tools.
Often it is the place towards which organizations gravitate first
probably because of our generic affinity for tools solving all of our
challenges. An important way to think about this pillar is that it is
foundational, in that the other two pillars are built on top of the
tooling. And organizations often underestimate the importance,
initial cost, and ongoing costs of maintaining foundational agility
in this pillar. Continuous investment is an ongoing challenge here.
Finally, this pillar is not centric to the testing function or group. While
it includes testing, tooling, and automation, it inherently includes
ALL tooling related to product development across the entire agile
organization. It provides much of the glue in cross-connecting
tools and automation towards efficiency and quality.
2. Software Testing: This pillar is focused on the profession of testing.
On solid testing practices, not simply agile testing practices, but
leveraging the teams past testing experience, skills, techniques,
and tools. This is the place where agile teams move from a trivial
view of agile software testing (which only looks at TDD, ATDD, and
developer-based testing) towards a more holistic view of quality.
It is a pillar where the breadth and depth of functional and non-

functional testing is embraced. Where exploratory testing is understood and practiced as a viable testing technique. It is where
the breadth of non-functional testing is understood and applied
to meet business and domain needs, including performance, load,
security, and customer usability testing.
By definition, this is where testing strategy resides, where planning
and governance sit, and where broad reporting is performed. I am
NOT talking about traditional testing with all of its process focus

and typical lack of value. But I AM talking about effective professional testing, broadly and deeply applied within agile contexts.
3. Cross-Functional Team Practices: Finally, this pillar is focused on
cross-team collaboration, team-based standards, quality attitudes,
and, importantly, on building things properly. Consider this the soft-skills area of the three pillars, where we provide direction for how each team will operate; consider them the rules of engagement.
For example, this is the place where good old-fashioned reviews and
inspections are valued. This would include pairing (across ALL team
members), but also slightly more formal reviews of architecture, design, code, and test cases. It is a place where inspection is performed
rigorously, as established in the team's Definition-of-Done. Where
refactoring of the code base and keeping it well kept is also of
primary importance.
Speaking of Definition-of-Done, this is the pillar where cross-team
physical constraints, conventions, and agreements are established.
But, more importantly than creating them, it is where the team
makes commitments to consistency and actually holding to their
agreements. Another important focus is on group integrity in conducting powerful retrospectives and fostering continuous improvement in the long term.

Foundational Practices
But beneath the Three Pillars are some foundational principles and
practices that glue everything together. For example, taking a whole-team view of quality and testing, where it is not just the job of the
testers, but of everyone on the team. I still find far too many agile
teams that relegate the ownership of quality and testing only to testers.

Pillars of Agile Quality

Development & Test Automation: Pyramid-based Strategy (Unit + Cucumber + Selenium); Continuous Integration; Attack technical infrastructure in the Backlog; Visual Feedback Dashboards; Actively practice ATDD and BDD

Software Testing: Risk-based testing, Functional & Non-Functional; Test planning @ Release & Sprint levels; Exploratory Testing; Standards checklists, templates, repositories; Balance across manual, exploratory & automation

Cross-Functional Team Practices: Stop-the-Line Mindset; Code Reviews & Standards; Team-based Pairing; Active Done-Ness; Aggressive Refactoring of Technical Debt; User Stories, 3 Amigo-based Conversations

Foundation: Whole Team Ownership of Quality; Knowing the Right Thing to Build, and Building it Right; Healthy Agile-Centric Metrics; Steering via Center of Excellence or Community of Practice; Strategic balance across the 3 Pillars: Assessment, Recalibration, and Continuous Improvement

Figure 1. High-level View of the Three Pillars


Continuously challenging this position and coaching the teams toward


whole-team ownership is an ongoing focus.
Another foundational area is metrics. We all know that what you
measure drives the behavior of the team. As we move towards agility, we need to begin measuring the entire team's results rather than
functional results, and even showing care when measuring the team,
such as understanding that velocity is probably not the best metric to
measure a healthy team.
One of the core components of the Three Pillars foundation is a thread
that permeates through the pillars and the foundation. It embraces the
core idea that each agile, self-directed team has a basic responsibility
to build the right things (customer value) and to build them properly
(design, construction integrity, and quality).
Figure 1 is an overview of the types of activity and focus within each
pillar. This is a high-level view and there are important nuances that
are missing mostly due to a lack of space.

Cross-cutting Strategy
Beyond the individual pillars, the value resides in cross-cutting concerns. I will go back to my original story to help make this point. My
client was advanced in BDD practices, but struggling with user story
writing, or even understanding the point of a user story.
The results would have been better if they had made the following
cross-Pillar connections:
In Pillar #1 Behavior-Driven Development (BDD) and Acceptance
Test-Driven Development (ATDD) are mostly technical practices. They
focus on articulating user story acceptance testing in such a way
as to make them automatable via a variety of open source tooling.
Unfortunately they have an underlying assumption that you understand how to write a solid story.
In Pillar #2 One thing I did not mention in the story was that every
team had a different view of what a story should look like and the
rules for writing effective stories. There were no norms, consistency
rules, templates, or even solid examples. A focus on the software
testing aspects of pillar two would have established these practices,
which would have significantly helped their teams.
In Pillar #3 An important aspect of the user story that my client
failed to realize was the conversation part of the story. If you reference the 3-Cs of effective story writing as a model, one of the Cs is
having a conversation about or collaborating on the story. It is the
most important C if you ask me. It is where the 3 Amigos of the
story (the developer(s), the tester(s), and the product owner(s)) get
together; leveraging the story to create conversations that surround
the customer problem they are trying to solve.
Do you see the pattern in this case?
You cannot effectively manage to deliver on agile quality practices
without cross-cutting the concerns and focus. In this case, effective
use of user stories and standards, plus BDD and automation, plus the
conversations needed to cross all three pillars. It requires a strategic
balance in order to implement any one of the practices properly.
I hope this example illustrates the cross-cutting nature for effective use
of the Three Pillars, and that you start doing that on your own as well.


Wrapping Up
This initial article is intended to introduce The Three Pillars of Agile
Quality and Testing as a framework or model for solidifying your agile
quality strategies and initiatives.
I hope it has increased your thinking around the importance of developing a balanced quality and testing strategy as part of your overall
agile transformation. As I have observed and learned, it does not simply
happen as a result of going agile, and most of the teams I encounter
are largely out of balance in their adoption.
I hope you find the model useful and please send me feedback. If there
is interest, I may explore more specific dynamics within each pillar in
subsequent articles.
Stay agile my friends,
Bob.

BTW: I am writing a book based on the Three Pillars model. If you would
like to have more information, participate in the book's evolution, or simply stay tuned in to this topic, please join the book's mailing list here: http://goo.gl/ORcxbE

> about the author


Bob Galen is President & Certified Scrum Coach
(CSC) at RGCG, LLC, a technical consultancy focused on increasing agility and pragmatism within software projects and teams. He has
over 30 years of experience as a software developer, tester, project manager and leader. Bob
regularly consults, writes, and is a popular speaker on a wide variety of software topics. He is also the author of the
books: Agile Reflections and Scrum Product Ownership. He can
be reached at: bob@rgalen.com
Twitter: @bobgalen

By Cesario Ramos & Pascal Dufour

Serious Games for Testers


Introduction

What is a Serious Game?

The use of ATDD and tools like FitNesse, Cucumber, and Robot Framework makes it necessary to create automated acceptance tests. These
acceptance tests are a natural extension of the acceptance criteria used
in user stories. You use acceptance tests to understand what needs
to be developed so that you can develop the correct functionality. In
order to develop the correct functionality, you need to create a common
understanding of stakeholder value among all team members. You
use story workshops (product backlog refinement meetings in Scrum)
so that everyone can contribute to discovering the whys, hows, and
whats of the stories. You also break down bigger stories (a.k.a. epics)
into smaller stories (a.k.a. user stories), and create new user stories
together with your stakeholders in these workshops.

If you are like us, you have participated in meetings that were boring,
non-productive, dominated by a few people, and just a pain to be part
of. You just showed up and said a few things because that was what
management expected you to do. Well, it does not have to be that way.
You can use game mechanics in all your meetings to make them not
only more fun, but much more productive with better results. A way
to successful meetings is through serious games.

Creating shared understanding as a team with your stakeholders gives


you the following benefits:
1. You are more likely to create a solution that really delivers business
value. The close collaboration makes you understand the expected
outcomes. You understand why this functionality needs to be created and what business value it addresses.
2. You are better equipped to develop the correct solution. You focus
on the solution only after you have understood the needs and problems from the customers point of view. Without this understanding, agile just helps you create the wrong stuff faster.
3. Your mindset and approach shift from finding defects to preventing
them at the level of business value and development. Remember
that finding defects is a waste from a lean point of view.
4. You are able to write down in unambiguous acceptance tests what
is meant by successfully developing a user story. You develop only
what is needed no more, no less.
Story workshops not only create a shared understanding and acceptance test cases, they also open up the opportunity for the whole team
to test. The test cases can be automated, so the developers have work
to do together with the testers. The test cases need to be fleshed out
and extended with new ones, and everybody can do that now.
The common understanding also sets you up for defining exploratory
test charters together with your team. And, as you know, everyone can
execute a manual test as long as an experienced tester is coaching.
That is all very interesting, but how can you actually run a successful
story workshop? What steps are needed, what games can you play,
and how can you facilitate the meeting?
In this article we will tell you about how we do our story workshops
using serious games.

A serious game[1] is a game designed to solve a business problem. In a


normal game the purpose is to entertain. In a serious game you use
the game mechanics that engage millions of people playing normal
games all over the world. A serious game puts people into a creative
environment where they are really engaged and can collaborate with
others, moving things around and discussing points of view.
Places where you can use serious games are when understanding
stakeholder needs, clarifying user stories, and distilling test cases. We
do that in a story workshop.

What is a Story Workshop?


An intended outcome of a story workshop is to create a shared understanding of user stories for the whole team and to break them down
to the right size. You create a shared understanding by discussing why
they are needed, what problems they solve for the customer, and by
distilling test cases to create that shared understanding.
Outputs of the story workshop for us are:
1. We want to have two or three examples for each user story to be
refined.
2. We want to have an exploratory test charter defined of each user
story to be refined.
3. We want a size estimate in order to make tradeoffs.
4. We want an estimate of the impact the story is going to make on
the intended business value.
A good time slot for a refinement meeting is between one and two
hours. You can always stop the meeting if you finish early, although
this seldom happens because of Parkinson's law[2].
The first problem you address in a story workshop is to better understand the user stories. A user story is a narrative of a need of a
particular persona. You need to understand what problem and needs
the persona actually has. This is a discovery problem. The question to
be answered is: What problem does the customer have and why does
he/she want it to be solved?


Another problem you address in a story workshop is a design problem. A


user story is also an experiment for which a solution must be designed
by development. Therefore, the question to answer is: What solution
best fits the customers needs? The exact details of the design are not
provided by a story workshop but it does start off the thought process
about a solution.
Finally you have a testing problem. The questions to be answered are:
1. How do we know that the solution solves the customers problem?
Does our solution add more value than his/her current solution?
2. How do we know that we are solving the right problem? Is the problem we are solving the problem the customer wants to be solved?
3. How do we know we have solved the problem correctly? How do we
quantify the success and failure of the solution?
In a story workshop we try to address all of these questions.

How to Facilitate a Story Workshop


There are a few things you need to take into account if you want a
successful workshop.
First you need to state the goal of the workshop. It is very important
to state the goal at the beginning of the meeting in order for people
to become engaged.
Next you need to discuss the steps of your workshop clearly. So, what
is the agenda and what are we going to do?


After that you will have to define the rules of the workshop. How about
ringing phones? Can we read emails during the workshop? What about
interrupting one another while talking? If you have already done some
story workshops with the team, you just quickly remind them of their
own rules and ask if they are still happy with them.
To boost creativity, everything is time-boxed, so you want to communicate the time limit.
During the workshop you also want to inform the team of time progress. You can, for example, let them know every 10 to 15 minutes how
much time is left and discuss whether you are still working on the
most important things.
Finally, a parking lot is very useful to have. If you have any questions
or issues that take up too much time or are not relevant, you can put
them on the parking lot and cover them at the end of the meeting.

A Game Sequence for a Successful Story Workshop


In a story workshop for distilling test cases to create shared understanding, you want to perform the following steps:
1. Check in: Explain the goal and agenda of the meeting.
2. Understand the business value: The product owner tells a coherent set of stories with a goal (Sprint Goal if you are using Scrum)
and links them back to the business objective. The team discusses
why we are doing this. We do this using an impact map[3] and
the 5 Whys[4].

3. Understand the customer value: The team breaks up into two


sub-teams, and each sub-team gets half of the user stories. The
sub-teams then create scenarios of the current situation of the
persona so that the team understands the personas challenges
as they are currently. The sub-teams also create scenarios of the
persona with the user stories solution implemented, so that the
team understands the benefits the persona has from the new solution. The teams then get together and discuss the scenarios they
have created with the other subteam. The team discusses Why
does the persona want this? We do this using a storyboard[5]
and a Pain Gain map[6]. This is also the step where you quantify
your goal so you know how much value you have created at the
end of the iteration.
4. Distill acceptance tests: The team then creates acceptance tests for the user stories. Depending on your tools, you create the user story narratives with Gherkin specifications[7], flow tables[8], and decision tables[9]. The teams break up into sub-teams and write tables and Gherkin scenarios in collaboration with the product owner. The sub-teams then get together and discuss the results with each other. We do this at the whiteboard using tables and scenario writing (a small example is sketched at the end of this section).
5. Define exploratory test charters: Identify risks to target your exploratory tests. Once you have identified which stories need manual
testing, you set up an exploratory test charter for each. We do this
using a risk impact matrix[10] and we use an exploratory testing
tour to drive our test[11].

6. Closing: Quick summary of results and final remarks.


The above game sequence is just one way of running a story workshop.
It assumed that you have ready user stories to begin with. The games
explained are the games we use most of the time. There are a lot
more games you can use. We encourage you to try some out in your
workshops to discover which work best in your particular context.
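As a small illustration of step 4, this is what a distilled acceptance test might look like once it reaches the tooling. The scenario is written in Gherkin and glued to step definitions with the Python behave library; the feature and the step wording are invented for this example, not taken from a real workshop:

# features/invoice.feature (the Gherkin produced in the workshop):
#   Scenario: Paying an invoice reduces the outstanding balance
#     Given a customer with an outstanding balance of 100 euro
#     When the customer pays an invoice of 40 euro
#     Then the outstanding balance is 60 euro

# features/steps/invoice_steps.py (behave step definitions):
from behave import given, when, then

@given("a customer with an outstanding balance of {amount:d} euro")
def step_customer(context, amount):
    context.balance = amount

@when("the customer pays an invoice of {amount:d} euro")
def step_pay(context, amount):
    context.balance -= amount      # the real payment logic would be called here

@then("the outstanding balance is {amount:d} euro")
def step_check(context, amount):
    assert context.balance == amount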

References
[1] Serious games/innovation games: http://www.innovationgames.com
[2] Parkinson's Law: http://en.wikipedia.org/wiki/Parkinsons_law
[3] Impact Mapping: http://www.impactmapping.org
[4] The 5 Whys: http://www.gamestorming.com/games-for-problem-solving/the-5-whys/
[5] Storyboard: http://www.romanpichler.com/blog/agile-scenarios-and-storyboards/
[6] Pain Gain map: http://www.gamestorming.com/games-for-design/pain-gain-map/
[7] Gherkin language: http://cukes.info/gherkin.html
[8] Flow tables: http://fitnesse.org/FitNesse.UserGuide.FixtureGallery.ImportantConcepts.FlowMode
[9] Decision tables: http://fitnesse.org/FitNesse.FullReferenceGuide.UserGuide.WritingAcceptanceTests.SliM.DecisionTable
[10] Risk quadrants: http://pascaldufour.wordpress.com/2013/11/19/user-story-risk-quadrants/
[11] Testing tours: http://michaeldkelly.com/blog/2005/9/20/touringheuristic.html and http://msdn.microsoft.com/en-us/library/jj620911.aspx#bkmk_tours

> about the authors

Cesario Ramos is the founder of AgiliX Agile Consulting, where he works as a professional Agile Coach and Scrum trainer @ Scrum.org. He helps teams to deliver maximum value within the constraints of time and budget. Cesario also leads agile transformation programs at his customers through coaching, training, and hands-on guidance of management and development teams. When Cesario is not coaching or enjoying family life, he enjoys writing and speaking. Cesario is the author of the book EMERGENT Lean Agile adoption for an innovation workplace and a speaker at international agile conferences, where he passionately discusses the latest agile topics. You can contact Cesario at cesario@agilix.nl.

Pascal Dufour is a passionate tester and Scrum coach at Validate-it. Pascal has a passion for agile projects, where he tries to implement pragmatic test strategies combining agile and context-driven testing. With over ten years of experience at large international companies, Pascal has experience with different types of testing, from embedded software to system integration. Enthusiastic and creative, he tries to make testing more fun, making his work visual and as simple as possible. He helps team members improve their efforts in Scrum, emphasizing ethics, commitment, and transparency, and motivating people to use a dynamic approach to testing. He believes teams should learn, try, experiment, and work together to create solutions that solve problems. You can contact Pascal at pascal@validate-it.nl.

By Venkat Ramesh Atigadda

Testing the Internet of Things (IoT)


The Future is Here
Introduction
The Internet of Things is a scenario in which objects or people are provided with unique identifiers and the ability to transfer data over a network without requiring human-to-human or human-to-computer interaction[1]. All these physical objects are connected to one or more sensors via the internet, and each sensor will monitor a specific condition such as temperature, motion, or location. There has been a huge transformation in the usage of the internet compared to the past. Previously, the internet was used to obtain information, perform transactions, connect with family and friends, and for entertainment, but today it is also used to connect with physical devices and things, for example a mobile app to locate car keys, a medicine bottle with alerts on dosage, or temperature control as per climatic changes. Everything is connected via the internet for continuous monitoring, and the data is captured and stored in cloud-based platforms with a well-defined analytics strategy to support predictive analytics for critical decision making. This means there is a significant role for testing these devices and networks to ensure that quality and standards are maintained.

Business Drivers
Business process monitoring
Rise of smart connected devices
Connected network (people, data, and things)
Development of IP-enabled devices
Reduced time-to-market
Increased customer satisfaction
Cost saving

Key Testing Types


According to a survey by HP[2], 70% of the units in the Internet of
Things segment have serious security flaws, and an average of 25
security flaws are observed in each IoT device.
Below are the reports shared by HP[2], raising significant concerns regarding user security and privacy:
80% of devices allowed weak passwords
70% did not encrypt data transmissions out to the internet
60% had cross-site scripting or other flaws in their web interface
60% did not use encryption when downloading software updates
In light of these reports, there is a greater need for QA services. So in Table 1 are some of the key testing aspects which need to be performed as part of IoT testing.

Testing Type: Verification Aspects
Security testing: Identity authentication; Password (weak/strong); Vulnerable to XSS; Data leakage; Data protection
Protocol testing: Unencrypted network; TCP/IP; Wi-Fi; Cellular network (3G/4G)
Performance testing: Delayed network connection; Internal computation; Response time
Embedded testing: Timing constraints; Software validation and integration
M2M testing: SMS vs. packet data transmission; Integration of sub-networks
Interoperability testing: Information exchange between networks, components, and systems
Data visualization testing: Usability of the device; User experience
Device testing: Device configuration (manual and time-based); Alert settings; Device-to-device integration
Conformance testing: Sensors; Smart devices; Gateways
Mobile testing: Mobile apps; Mobile devices; Mobile OS (Android, iOS)

Table 1. Key Verification Aspects

Additional Verification Aspects
Below are some of the additional verification aspects that can be considered for testing the IoT[3]:

Multi-media automation: Verification of LED flashing and sounds made by the devices
Hardware availability: Running verification tests on the hardware directly, without using simulators
Physical access: Verification of the sensors and other hardware interfaces during runtime with special access
Memory constraints: Verification of system failure due to low RAM availability
Behavioral aspects: Verification of recompiling embedded code that affects the runtime behavior
Code security: Verification that the embedded system code in the production version cannot be hacked

Table 2. Additional Verification Aspects
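To make two of the security aspects from Table 1 concrete (unencrypted transport and weak passwords), here is a hedged sketch in Python using the requests library. The device address, login endpoint, and password list are hypothetical and would have to be replaced by the real device interface:

import requests

DEVICE = "https://192.168.1.50"          # hypothetical device web interface
WEAK_PASSWORDS = ["1234", "admin", "password"]

def uses_encrypted_transport(url):
    """Fail if the device also serves its interface over plain HTTP."""
    try:
        requests.get(url.replace("https://", "http://"), timeout=5)
        return False                      # plain-HTTP access worked: data may leak
    except requests.exceptions.ConnectionError:
        return True

def accepts_weak_password(url):
    """Fail if any trivially weak password logs in to the hypothetical API."""
    for pwd in WEAK_PASSWORDS:
        # A device with a self-signed certificate may additionally need verify=False.
        response = requests.post(url + "/login",
                                 data={"user": "admin", "password": pwd}, timeout=5)
        if response.status_code == 200:
            return True
    return False

if __name__ == "__main__":
    print("encrypted transport:", uses_encrypted_transport(DEVICE))
    print("accepts weak password:", accepts_weak_password(DEVICE))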

Challenges for IoT Testing
Below are some of the key challenges with regard to IoT testing:
Complexity of the software and the system
Testing multiple combinations of radio bands, including Bluetooth, Wi-Fi, and NFC
Devices with multiple lines of code, which makes life difficult for the testers
Devices using non-standard communication protocols, which will require significant changes to application testing
Availability of multiple devices and technologies
Obtaining a complete overview of the hardware and software architecture in order to develop the test strategy or test plan
Limitations with regard to memory, processing power, and battery life

Best Practices
Here are some of the best practices that can be applied for the mutual benefit of the organization and the testers:
Use clear test specifications to improve quality
Use a test automation tool to reduce time-to-market
Analyze the product's functional requirements and use cases for effective testing
Determine test metrics that will measure the impact of the IoT strategy
Highlight in advance the top-priority problems that need to be tackled
Increase the device connections to gain more IoT value

Advantages
Below are some of the advantages of using the IoT:
Leverage sensor data to analyze and work with management to improve business processes
Delivering detailed information on the product in real-life conditions
Help businesses deliver performance, reliability, and interoperability
Can be controlled from anywhere through their embedded computers
Can manage things better, from traffic flows to use of energy

Checklist for IoT Testing
The following activities are checked off as Yes/No, with comments:
1. Does the device have safety factors that a tester needs to consider?
2. Are there any functional or non-functional risks identified for testing IoT connections?
3. Are there any security threats in the usage of the device?
4. Are there any organizational or project test processes that must be adhered to?
5. Has the device been thoroughly tested with regard to localization/globalization?
6. Is the IoT connection tested by assessing the performance changes?
7. Is the device fixed, mobile, or both (testing will be different for each)?
8. Has the new device been properly tested?
9. Are there any verifications and validations in place?
10. Is there any documentation available with regard to the IoT test strategy, test plan, etc.?

Table 3. IoT Testing Checklist

The checklist above helps the team to test IoT devices/applications for maintaining quality[3].

References
[1] http://whatis.techtarget.com
[2] http://www.infosecurity-magazine.com/news/internet-of-things-laid-bare-25-security-flaws/
[3] http://www.logigear.com
[4] http://www.elektron-technology.com/sites/elektron-technology.com/files/iot-agar-scientific-white-paper-final.pdf
[5] http://blogs.clicksoftware.com/clickipedia/five-expected-benefits-from-the-internet-of-things-the-impact-on-service/

> about the author

Venkat Ramesh Atigadda has over nine years' industry experience in software testing and has worked in industry verticals such as energy and healthcare. Currently he is a solution developer in the Assurance CoE of the HiTech Industry Solution Unit at Tata Consultancy Services Limited. In his role, he is responsible for test strategy consulting, project technical reviews, writing white papers, and analyzing the latest testing trends. He has also published articles in Testing Experience and Testing Circus magazines and white papers in the IJCEM and IJRDET forums.

By Rik Marselis & Bert Linker

Organize Your Testing Using Test Varieties and Coverage Types
The Struggle
Many test managers often struggle to define the proper way to spread
the testing efforts throughout the project or release activities in such
a way that it properly reflects the constraints of quality, risk, time, and
cost.
In recent years, the rise in approaches that use short cycles has made
it even harder to create a balanced approach to testing and translate
that into a test strategy, if there is one at all.
We wondered what the reasons for this problem are. In our opinion, one
of the causes lies with confusion about, or even ignorance of, the meaning of the terms test level, test type, and test design technique.
In this age of agile methodologies, test levels are often associated
with hierarchies in testing, and since Agile promotes doing all activities by one team in a single iteration, there is no hierarchy. The same
reasoning goes for test types. Because all testing happens within
the iteration (which Scrum calls a sprint), the people involved want to
rush their work and do not want to be bothered with the differences
between various types of testing.
Does it really matter? Well, yes! In a survey amongst about 300 projects
over the last 5 years, almost 50% of the people involved said that the
quality delivered by agile IT teams was no better than before they
adopted Agile. So we need to give extra attention to quality, since
quality is supposed to be key in Agile.

Test Varieties
When it comes to testing, as one of the quality measures that can be
taken, we want to make things easy to grasp by introducing something
new: the test variety. This simple term intends to emphasize to all
people involved that testing is not a one-size-fits-all activity. Even when
all testing activities are done by one team within a single iteration, you
will still need to vary the testing. The first variety of testing, of course,
is static testing, i.e., reviewing documents and source code. Static
testing can both be manual (using techniques like technical review
or inspection) and automated (with tools such as static analyzers).
The next view on test varieties relates to the parties involved. The
developers have a technical view of testing, looking to see whether
the software works as designed and properly integrates with other
pieces of software. The designers want to know whether the system
as a whole works according to their specifications (whatever the form
and shape of these specifications may be). And the business people
simply want to know whether their business process is properly supported. Now, during these various tests there will be different points
of view, for example related to quality attributes. Functionality will
seldom be forgotten in testing, but what about looking at maintainability by the developers, installability by the designers, and usability
by the business people?
So in this simple example we can already see at least seven test varieties. During the start-up of a project (for example in a sprint zero),
a test strategy for the project is established. And for each iteration
the test strategy is tuned to the needs in that cycle. This way, all team
members know what their points of focus must be during this iteration. By the way, please be aware that when we say test strategy we
do not necessarily refer to a document, we merely want to emphasize
that there must be an agreed way of assigning the right testing activities with the right test intensity. And by being aware of the necessary
test varieties you will also have less difficulty in deciding what testing
activities can be done within the sprint and what has to be organized
separately (the common agreement nowadays is that an end-to-end test cannot be done by one agile team within their sprint, so this is a
test variety that will often be organized separately).
This is the first step to better testing. Making the people involved
aware of the relevant varieties of testing defies the one-size-fits-all
mentality often seen in testing.

Experience-Based and/or Coverage-Based Approach
The next step in completing the test strategy is to define the proper
approach for the test varieties. Based on the desired quality level and
the perceived risk level, and within the limitations of time and cost, the
team members choose an experience-based and/or coverage-based
approach to testing.
Here is another area of testing that many people struggle with. There
are very many so-called test design techniques. But, in practice, most
testers do not formally apply any technique at all; they just "guess"
the best test cases in their specific situation. One reason for this is that
there are so many possibilities that they decide not to choose at all. In
our opinion, the choice does not need to be hard. In any given situation
you only need a simple choice of approaches and about a handful of
coverage types to be able to do proper testing.
We distinguish two approaches: experience-based and coverage-based.
For experience-based testing there is a choice of possibilities, of
which exploratory testing is the most well-known and appropriate
approach. These tests are effective at finding defects, but less appropriate for achieving specific test coverage, unless they are combined
with coverage types.

Coverage Type Group: Process
- Algorithm. Testing the program structure. Variations: statement coverage; decision coverage (branch testing/arc testing).
- Paths. Coverage of the variations in the process in terms of combinations of paths. Variations: test depth level 1; test depth level 2; ...; test depth level N.
- Right paths/fault paths. Checking both the valid and invalid situations in every defined error situation. An invalid situation (faulty control steps in the process or algorithm that precede the processing) should lead to correct error handling, while a valid situation should be accepted by the system without error handling. Variations: right paths only; right paths and fault paths.
- State transitions. Verification of relationships between events, actions, activities, states, and state transitions. Variations: 0-switch; 1-switch; N-switch.

Coverage Type Group: Conditions/decisions
- Decision points. Coverage of the various possibilities within a decision point with the purpose of arriving at the outcomes of TRUE or FALSE. Variations: condition coverage; decision coverage; condition/decision coverage; modified condition/decision coverage; multiple condition coverage (per decision point or across decision points); cause-effect graph; pairwise testing.

Coverage Type Group: Data
- Boundary values. A boundary value determines the transfer from one equivalence class to another. Boundary value analysis tests the boundary value itself plus the value directly above and directly below it. Variations: light (boundary value + one value); normal (boundary value + two values).
- Equivalence classes. The value range of a parameter is divided into classes in which different system behaviour takes place. The system is tested with at least one value from each class. Variations: one value per class; combination with boundary values.
- CRUD. Coverage of all the basic operations (create, read, update, delete) on all the entities. Variation: right paths/fault paths.
- Data combinations. Testing of combinations of parameter values. The basis is equivalence classes. A commonly used technique for data combinations is the classification tree. Variations: one or some data pairs; N-wise (e.g. pairwise); all possible combinations.
- Data flows. Verifying information of a data flow that runs from actor to actor, from input to output. Variation: right paths/fault paths.
- Right paths/fault paths. Checking both the valid and invalid situations in every defined error situation. An invalid situation (certain values or combinations of values defined that are not permitted for the relevant functionality) should lead to correct error handling, while a valid situation should be accepted by the system without error handling.

Coverage Type Group: Appearance
- Heuristics. Evaluation of (a number of) usability principles.
- Load profiles. Simulation of a realistic loading of the system in terms of volume of users and/or transactions.
- Operational profiles. Simulation of the realistic use of the system by carrying out a statistically responsible sequence of transactions.
- Presentation. Testing the layout of input (screens) and output (lists, reports).
- Usability. Validating whether the system is easy to use, understand, and learn. Variations: alpha testing; beta testing; usability lab.

Table 1. Overview of the coverage type groups, examples of coverage types, and possible variations

Coverage-based testing uses coverage types. A coverage type focuses
on achieving a specific coverage of quality and risks, and on detecting
specific types of defects. Thus a coverage type aims to cover certain
areas or aspects of the test object. Our starting point is that coverage
types not only indicate what is covered, but also provide directions on
how to do so. Coverage types are, as such, the foundation of the many
test design techniques.

Experience-Based Approach

Below we describe three examples of experience-based testing that
may be considered.

Error Guessing
The tester uses experience to guess the potential errors that might
have been made and determines the methods to uncover the resulting
defects. Error guessing is also useful during risk analysis to identify
potential failure modes. Part of this is defect-based testing, where
the type of defect sought is used as the basis for the test design, with
tests derived systematically from what is known about the defect.
Error guessing is often no more than ad hoc testing, and the results of
testing are totally dependent on the experience and skills of the tester.

Checklist-based
The experienced tester uses a high-level list of items to be noted,
checked, or remembered, or a set of rules or criteria against which a
product has to be verified. These checklists are built based on a set of
standards, on experience, and on other considerations. A checklist of
user interface standards used as the basis for testing an application
is an example of checklist-based testing.
Checking of individual elements is often done using an unstructured
list. Each element in the list is directly tested by at least one test case.
Although checklist-based testing is more organized than error guessing, it is still highly dependent on the skills of the tester, and the test
is only as good as the checklist that is used.

Exploratory
Exploratory testing is simultaneous learning, test design, and test execution. In other words, exploratory testing is any testing to the extent
that the tester actively controls the design of the tests as those tests
are performed, and uses information gained while testing to design new
and better tests. Good exploratory testing is timeboxed, based on a
charter that also defines scope and special areas of attention. Since
exploratory testing is preferably done by two people working together
who apply relevant coverage types for the specific situation at
hand, this approach is preferred over the alternatives mentioned above.

Hybrid approaches
In practice, the use of hybrid approaches is very common. Exploratory
testing, for instance, can be very well combined with the use of coverage
types. And there are test design techniques that may be used within
experience-based as well as coverage-based testing, such as the data
combination test (which uses classification trees).

Coverage-Based Testing

In our experience many testers have difficulty in selecting the proper
coverage in a specific situation, which is often caused by confusion
about the coverage type that best matches the specific situation they
want to test. That's why, for coverage-based testing, we have created
four groups of coverage types. Analyse the type of situation you're in
and select a coverage type from the group that matches this challenge.

Process
Processes can be identified at several levels. There are algorithms of
control flows, event-based transitions between states, and business
processes. Coverage types like paths, statement coverage, and state
transition coverage can be used to test (variations in) these processes.

Conditions/Decisions
In every IT system there are decision points consisting of conditions,
where the system behavior differs depending on the outcome of such
a decision point. Variations of these conditions and their outcomes
can be tested using coverage types like decision coverage, modified
condition/decision coverage, and multiple condition coverage.

Data
Data starts its lifecycle when it is created and ends when it is removed.
In between, the data is used by updating or consulting it. This lifecycle
of data can be tested, as can combinations of input data and the attributes of input or output data. Some coverage types in this respect
are boundary values, CRUD, data flows, and data combinations.

Appearance
How a system operates, how it performs, and what its appearance
should be are often described in non-functional requirements. Within
this group we find coverage types like heuristics, operational and load
profiles, and presentation.

Coverage Type Table

Table 1 gives an overview of the coverage type groups, examples
of coverage types, and possible variations.
Although the overview is extensive, it is not exhaustive. Looking at
what can be covered, we could have added aspects like syntax (using
a checklist), semantics and integrity rules (using decision points), authorisation, privacy, etc. (using checklists, doing reviews, etc.). However,
we do not want to over-complicate things. We advise you to check
relevant literature for the coverage types and test design techniques
that are suitable in your specific situation.
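
As a concrete illustration of the Data coverage group, a boundary-value test can be written as a data-driven JUnit test with the JUnitParams library (the same library used elsewhere in this issue). The AgeValidator class and the 18–65 range below are made up for this example and are not taken from the article; the point is that each boundary is exercised with the boundary value itself plus the values directly above and below it.

import static org.junit.Assert.assertEquals;

import junitparams.JUnitParamsRunner;
import junitparams.Parameters;
import org.junit.Test;
import org.junit.runner.RunWith;

// Hypothetical validator: accepts ages from 18 up to and including 65.
class AgeValidator {
    static boolean isValid(int age) {
        return age >= 18 && age <= 65;
    }
}

@RunWith(JUnitParamsRunner.class)
public class AgeBoundaryTest {

    // "Normal" boundary value coverage: the boundary itself plus the
    // value directly below and directly above it, for both boundaries.
    @Test
    @Parameters({
        "17, false",  // directly below lower boundary
        "18, true",   // lower boundary
        "19, true",   // directly above lower boundary
        "64, true",   // directly below upper boundary
        "65, true",   // upper boundary
        "66, false"   // directly above upper boundary
    })
    public void ageBoundaries(int age, boolean expected) {
        assertEquals(expected, AgeValidator.isValid(age));
    }
}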

Test Intensity Table


A main goal of the test strategy is to define the necessary intensity of the
testing, commonly based on risk. High risk requires thorough testing,
low risk may need only light testing. To give you a practical overview
of the coverage types you can select for the different classes, we have

highlighted the most commonly used coverage types and some test
design techniques in which they can be applied. We have not given
an overview for appearance, since the coverage types for appearance
are highly interlinked with the aspect to be tested, and we believe that
giving a simplified overview would be misleading.

Coverage Type Group: Process
- Light: statement coverage and paths test depth level 1
- Average: decision coverage and paths test depth level 2
- Thorough: paths test depth level 3, algorithm test, and process cycle test

Coverage Type Group: Conditions
- Light: condition/decision coverage (elementary comparison test)
- Average: modified condition/decision coverage (elementary comparison test or decision table test)
- Thorough: multiple condition coverage (elementary comparison test or decision table test)

Coverage Type Group: Data
- Light: one or some data pairs (data combination test)
- Average: pairwise data combination test
- Thorough: N-wise or all combinations data combination test

Table 2. Test intensity table

Conclusion
Applying an effective and efficient way of testing does not need to be
bothersome. Using test varieties, a combination of experience-based
and coverage-based testing, and your choice of about five coverage
types that are relevant for your situation, testing in these fast-paced
times will focus on establishing the stakeholders' confidence without
tedious and unnecessary work.

Literature
- Testing Embedded Software, Bart Broekman & Edwin Notenboom, Addison Wesley, 2003, ISBN 9780321159861
- TMap NEXT for result-driven testing, Tim Koomen, Leo van der Aalst, Bart Broekman, Michiel Vroon, UTN Publishers, 2006, ISBN 9072194799
- TMap NEXT in Scrum, Leo van der Aalst & Cecile Davis, Sogeti, 2012, ISBN 9789075414646
- Neil's Quest for Quality; A TMap HD Story, Aldert Boersma & Erik Vooijs, Sogeti, 2014, ISBN 9789075414837

> about the authors

Rik Marselis is one of Sogeti's most experienced management consultants in the field of quality and testing. He is a well-known author,
presenter, and trainer who has assisted many organizations throughout
the world in actually improving their testing and thus achieving
fit-for-purpose quality and increased business success.
Twitter: @rikmarselis

Bert Linker is an experienced test consultant within Sogeti. He is
(co)author of several books and trainer on many test subjects. He has
helped many organizations in traditional and agile environments
improve their testing and quality processes.

Both Bert and Rik wrote several building blocks for the new TMap HD
book that was presented on 28 October 2014.

By Sujith Shajee

Reasoning the Next Version of Test Automation Myths
Test automation myths is a discussion topic that echoes around the
validation service areas of the IT industry. Probably the first thought
that flashes through the reader's mind would be "Why the same old
topic?" or "What is new to debate about this topic?"
For once, everyone undisputedly agrees that test automation is not
what it used to be five or ten years ago. Test automation has evolved in
range and enormity. What started out as simple linear scripting on a
single web application is now a complex hybrid framework architecture
that facilitates test execution on applications developed on diverse
platforms and technologies. Undoubtedly automation progressed, and
so did the myths that are associated with it. A shift in people's perspective and knowledge about test automation has altered the folklores.

This article is the author's viewpoint and experience on how the original myth has transformed into a new version, and how derided the
myth still is. The article also provides the author's thoughts on the
new generation of myths.

Myth # 1
Version 1.0 Test automation is to replace manual effort for good
Version 2.0 Test automation is to reduce manual effort for good and
execution is with the click of a button

It was not long ago that getting test automation implemented sent
out a message that it would be eliminating manual effort completely.
Test automation just meant that manual effort would be removed and
automation would take care of the rest. With time, everyone agrees that
manual and automation testing go hand-in-hand. Everyone also agrees
that moving from manual to automation validation is a collaborative
perceptive process, and test automation does not necessarily replace
but certainly reduces manual effort over time.
But with all this in perspective, a new change has arisen in the existing
myth. A new view seems to have altered this legend: that automation would magically run at the click of a button. Once automation is
completed, the expectation is set that you could trigger scripts and
the entire test suite would run without any monitoring 24x7.
While automation does reduce manual effort, it most certainly does not
take away manual intervention in execution completely. Automation
implementation is a strategic advancement in validation, but it does
come with its own set of responsibilities. A few basic tasks include script
execution monitoring, application and defect monitoring, review of
script failures, minor tweaking, and synchronization of scripts. These
activities are not always introduced due to script quality, but are
governed by many external factors like the test environment, application
changes, and application performance. So accounting for the factors
mentioned would set a genuine expectation of how automation execution would proceed after the trigger and what could be expected
from the implementation.

Myth # 2
Version 1.0 Savings through test automation are assured always
Version 2.0 Savings through test automation are assured with well-structured implementation and can always be achieved in a pre-determined timeframe

When an organization decides to introduce automation into its testing
strategy, the decision is a commitment of huge investment towards
development, maintenance, and other operational costs associated
with the implementation. Return on Investment (ROI) is calculated
and determined, usually prior to kick-off of the implementation, and
is considered to be the assured cost savings from implementation.
What everyone seems to understand now is that ROI is indicative of cost
savings associated with a planned and thought-through implementation. The calculation of ROI alone does not assure cost savings; you
need to ensure that the implementation is planned and executed with
caution. Hence savings are not always assured.
While there is no question that a well-structured implementation
would yield the projected cost savings, post-implementation maintenance and other operational factors have a very great influence on
how and when breakeven is achieved. If a follow-through plan is not
set up for the implementation, achieving benefits from the automation implementation would be a far-fetched goal for the organization.
Hence the key to a successful implementation would be to govern the
maintenance of the automated test suite with caution.

Myth # 3
Version 1.0 Test automation uncovers more bugs
Version 2.0 Test automation has failed in its implementation if it is
not able to uncover as many bugs as manual testing
Test automation is designed to reduce manual effort and eliminate
human errors in routine test execution activities. A common idea that
prevailed with automation was that it would successfully uncover
more bugs than manual validation. This idea just falls apart when
you realize that automation is only as good as the manual test cases
it was based and built on.
Identifying more application defects is a result of the quality and
coverage of the test cases; yet the ability of automation to uncover defects
is taken to decide the success of the automation implementation.

Automation, in most instances, is meant for regression test suites.


Before an application code is moved to a regression environment
for testing, it has already passed the quality checks in the unit, functional, and integration testing phases. In most cases, the application's
stability and quality in the regression phase is quite predictable. So,
in this phase, whether you decide to perform manual or automated
validation, there is only a slim probability of coming across many more
defects. But this most certainly does not imply that the
automation implementation is a failure. Let us not forget that when
the organization decided to go for automation, uncovering defects was
not the only goal defined to be achieved as an end result of automation.

Myth # 4
Version 1.0 Anyone and everything can be automated
Version 2.0 Automation is software engineering, so a developer is the
right fit for implementation
Yes, you are right when you heard that automation is scripting out
your manual test cases in a specific language that is supported by
the automation tool. Now the questions are, does that mean we can
bring in any automation tool and have the manual tester work on
implementing automation? And if the automation tool supports test
case automation, does it mean it should be automated? The answer
to both these questions is most certainly a big no.
With the evolution in the area of automation, organizations have realized that automation is to be considered as a development project and
should follow a well thought-through implementation plan. A proper
automation feasibility analysis together with a cost-benefit analysis
would really help us decide what has to be automated. Just because
it can be, does not mean it should be. Having said that, this realization did lead people to get their heads round new options to support
implementation and one of these was development team involvement.

It was not long ago, while working on an automation strategy, that I


was asked to provide a couple of application developers to build out
the automated test suite, replacing my own automation analysts. It
was not that my automation analysts were bad at their job, but when
I asked why we were bringing in someone from the development
team to do this work, the responses were: "They have been there and
done this kind of work", "They have a better knowledge of software
engineering", and "They will be quick because, after all, it is about
scripting the cases".
My response was that if you do not want your development team to
develop and test your application, you should also not want your development team to develop and build automation test cases for your
application. They are one and the same thing. Automation is still about
shaping quality test cases to validate the application. You still need
your scripts to find the application bugs. Yes, development skill helps
the implementation process, but that is not the only skill you should
be looking for from an automation engineer who is building it out for
you. An automation engineer is a blend of developer and tester. He/she
should script it out like a developer and still be able to think like a tester.

Myth # 5
Version 1.0 Any changes to test automation can be done in no time
because it is automation
Version 2.0 Any changes to test automation can be done in no time
because it is automation

No, it is not a typo, and most definitely you are not reading the versions wrongly. This is one myth that has just forgotten to evolve over
time. It was just the other day when a colleague of mine was telling
me about the agony he has to go through every automated regression
cycle execution. His team is unaware of most of the changes to the
application until afterwards; it is very late when his team has to run
the regression suite, and they are hit with surprising changes in the
application, both minor and major.
It is high time we realized that automation maintenance involves effort.
Unlike manual testing, even some of the minor changes in the application have a drastic impact on automation. For instance, an existing field
that was not mandatory on the application form is made mandatory
now. For the manual testing team this means updating the test cases
to populate data in this existing field. For the automation team, however,
it is about identifying the flow, adding the object, updating scripts
to accommodate this change, and finally testing the change. It would
be bizarre to think that all these minor changes could be fixed during
regression execution and the execution could still be completed. You
need to realize that the regression execution is already on a shortened
timeline due to automation implementation.
The solution is quite simple. Involve your automation team like you
involve your manual team in the SDLC. Let them be part of the requirement
analysis and impact analysis sessions. This would help them understand the change better, and even plan the maintenance better. We are
also providing the team with enough time to deal with all the changes.
Good communication is a key to smooth maintenance and execution.

Inference
George Orwell rightly said: "Myths which are believed in tend to
become true." There is a serious need to break the traditional ideas
that are not true about automation from time to time. This will ensure that impressions that were never intended to be part of an automation
process stay out of it, and will help to govern and establish the right set
of standards and practices that will lead to the correct way of looking
at test automation.

> about the author

Sujith Shajee is currently working as a Test Technical Lead at Infosys
Limited (NASDAQ: Infy, www.infosys.com) and is part of the Independent
Validation and Testing Services unit. He has worked on various projects
for strategy development, implementation, and delivery in the automation, performance, and service validation areas, and has developed
expertise in a number of validation tools.
Sujith can be reached at SujithK_Shajee@infosys.com.

LinkedIn: www.linkedin.com/pub/sujith-shajee/8/71b/863

By Philipp Benkler

Testing Enterprise Applications: Bring-Your-Own-Crowd
According to Gartner, 25 percent of enterprises will have an enterprise
app store by 2017¹. Employees, as well as enterprise clients, benefit from
options available on their mobile device. While traditional desktop
applications, such as CRM, are available on smartphones and tablets,
mobile devices offer many possibilities to increase efficiency through
new functionalities. While adapting to those opportunities, companies
face challenges such as mobile device management and bring your
own device (BYOD). No matter how and to which devices enterprise
apps are distributed, they need to work flawlessly and be appealing
enough for people to use them. The following article discusses several
challenges companies face while testing enterprise applications and
offers best practices on how to address them.

Being Aware of Expectations


There is much to learn from the consumer industry when developing
enterprise applications. Smartphones and tablets have become our
constant companions and we have certain expectations of an application, no matter whether we use it before, during, or after work. It
needs to solve an issue or facilitate a process, it needs to be easy to use
and it needs to work. Before developing an enterprise app, companies
must be aware that users, regardless of whether they are employees
or clients, have the same expectations of enterprise apps as they have
of those they use as a consumer. This could be with regard to design,
navigation, or loading time. Even if there is no alternative, an app will
only be used regularly if it meets these expectations.

Pooling Company Knowledge


To succeed at building enterprise applications that users enjoy, thorough analysis is required to define the basic requirements. What kind
of applications do the employees really need? Which processes does
the app need to perform? What problem does it solve? The future
users are the biggest asset in order to find out. Unlike the consumer
industry, employees and even clients are much more accessible for
knowledge gathering, for example through a survey, and learning
more about their expectations. When developing an enterprise app,
their knowledge brings direct value, and this is why they should be
consulted. The employees know how the business is run, are familiar
with existing processes, and will often have very precise ideas as to
which tools would be useful to facilitate daily working life. Furthermore, when integrating employees, security and intellectual property
risks are low, as internal information is less likely to fall into the hands
of unauthorized people.

Creating Acceptance
By integrating employees in the development process, enterprise apps
have a greater likelihood of acceptance and adoption after completion.
In particular, the involvement of opinion leaders has great potential
for stimulating broad acceptance of new mobile business solutions,
and combating reservations towards transformation and change. It is

advisable to scale the level of employee involvement as development


progresses. Only key employees offer insights in the early stages, when
most mistakes and defects are discovered. Later on, a wider range of
future users might be integrated once the bulk of mistakes has been
fixed. In practice, there is a fine line between creating acceptance and
fueling objections. It might not be wise to show an application that is
still in its early stages and full of technical issues to a wider audience
just yet. People will form a negative image if they cannot see the whole
picture. The only thing they will see is an app that does not work, is not
intuitive to use, or does not help them to be more efficient. This is also
true for other stakeholders and decision makers. Convincing someone
who is not involved in the development process that the app is going
to become a success while it is still full of issues is much more difficult
than showing them a better version later on. It is therefore important
that all people involved are aware of the current state as well as the
roadmap of an application.

Bug Testing
Extensive functional testing is the basis for any successful app. To
avoid organizational blindness, testers from outside the development
team should be included, so that both staff and independent testers
are assessing the app. In many cases a mix of experienced testers and
unbiased users offers the greatest benefits. While the former know
where to look and discover defects reliably, the latter do the unexpected and might find showstoppers that otherwise would have been
missed. Therefore, it makes sense to establish a mix of structured and
explorative testing to cover as many scenarios as possible. Depending
on the stage of the application as well as the availability of internal
resources, this can be done either internally or externally.

Usability Testing
Usability is the key factor for an app to be successful, both in the consumer and in the enterprise sector. Before release, applications should
be tested by the target group to fully understand their wants and needs.
This way, companies can evaluate whether the initial requirements
have been implemented as required at a certain stage in the development process. For consumer apps, crowdtesting is an established
approach for quality assurance and usability testing. Crowdtesting
offers access to specific target groups and devices through large pools
of testers. When developing enterprise apps, employees or enterprise
clients take the place of the crowd. They are the future users and know
what works for them and what does not. Instead of carrying out the
testing process independently, companies should think about using
the crowd platform infrastructure of an external service provider
to distribute tests to their own crowd of employees or customers.
This approach is called Bring-Your-Own-Crowd. It reduces project
management time and budget through a managed testing process,
resulting in high-quality results that can be pushed back directly into
the development process. This way, even testing with confidential
data or restricted access on company devices is possible at any time.
1 http://www.gartner.com/newsroom/id/2334015


Device Diversity and Compatibility


Knowing user expectations only helps if the app really works on all
devices necessary. Bring your own device (BYOD) is becoming common
practice in many companies and brings significant challenges when
developing an enterprise app. If there is no standard company device,
applications have to work on all employee devices and platforms that
will be using the app. Taking into account increasing fragmentation,
particularly in the Android market, this is difficult to achieve. In this
case, external crowdtesting enables access to all devices available in a
specific market, provided that access to the app is possible for external
testers, for example through a VPN connection.

Planning and Costs


Extensive testing requires a budget that takes testing costs into consideration. Companies avoiding these costs should bear in mind what
damage can result if the developed applications are not accepted.
Enterprise apps can optimize internal processes, and lead to savings
through efficiency. However, co-existence of digital and analog infrastructure as a result of a rejected app will actually increase costs. If
employees are not satisfied with the company's official applications,
they might also look for an uncertified alternative, with security and
financial consequences. If companies decide to integrate their employ-

ees into the testing process, appropriate time and resources need to be
allocated for that purpose. Employees will need to use their working
time to test, and therefore possibly postpone other tasks. In addition,
a strong commitment is required from all testers in order to get meaningful and high quality results. Thus, ordering unwilling employees
to test an app is not ideal. If they are not already interested in the app

through early involvement, other incentives such as bug bounties or


other forms of rewards need to be found.
The importance of enterprise applications for business processes will
increase dramatically within the coming years. To create apps that
work and appeal to users, they need to be involved in the development
process. Just like in the consumer industry, user expectations and
device diversity are major challenges that require a detailed testing
strategy before development starts. Involving the users and testing
through the Bring-Your-Own-Crowd approach help to address these
challenges, while offering a framework for effective bug and usability
testing.

> about the author


Philipp Benkler is founder and CEO of the crowdtesting specialist Testbirds. He is responsible for
sales, the development of the IT infrastructure,
internationalization and quality assurance. Benkler has significant experience in enterprise
environments and as a freelance software developer. He took part in the elite network master's
program "Finance and Information Management" at the University
of Augsburg and the TU Munich in Germany, and graduated Master
of Science with honors.
LinkedIn: www.linkedin.com/profile/view?id=55025531
Website: www.testbirds.com
Blog: blog.testbirds.com

SAVE THE DATE


November 09–12, 2015
in Potsdam, Germany
www.agiletestingdays.com


Missed Part II? Read it in issue No. 27!

By Vladimir Belorusets, PhD

A Unified Framework For All Automation Needs – Part III
Introduction
In the first two parts of this article[1], I described the main principles
applied in developing a unified test automation (UTA) framework that
serves as a foundation for testing multiple application interfaces.
The UTA was built on JUnit and JUnitParams. I showed how to test the
browser GUI and REST API within the UTA framework using the open
source Selenium WebDriver and Spring Framework. In this part, I will
describe the details of implementing automated testing of the command line interface (CLI) when connecting to an SSH server.
The most popular tool for automating interaction with CLI is Expect. It

was originally written in Tcl, and there are several open source Expect
implementations in Java. In the UTA, I use the following programs:
- Expect-for-Java[2], developed by Ronnie Dong. This API is loosely based on the Perl Expect library.
- JCraft JSch[3], an implementation of the SSH protocol.

Structure of the CLI tests

A simple CLI test consists of the following four operations:

1. Establish an SSH connection with a remote server
2. Run an input command in the CLI
3. Get and parse the response
4. Assert the actual outcome against an expected pattern for verification

The first operation is usually performed once per testing class. The
others comprise a block that is repeated multiple times within a test
as we issue various commands.
The SSH connection is easy to accomplish by using the JCraft JSch
(Java Secure Channel) class. Since the session is established once, the
corresponding statement is placed within the @BeforeClass method
(Listing 1).

JSch jsch = new JSch();

// create a connection with an SSH server
Session session = jsch.getSession(userName, sshHost);
session.setPassword(password);
session.setConfig("StrictHostKeyChecking", "no");
session.connect(CONNECTION_TIMEOUT);

// associate a channel with the session
Channel channel = session.openChannel("shell");

// create an Expect object
Expect expect = new Expect(channel.getInputStream(),
        channel.getOutputStream());
channel.connect(CHANNEL_TIMEOUT);

Listing 1. Establish an SSH connection

To verify the connection, we can check for the command prompt. The
Expect class contains the expect() method that handles the input
stream against a pattern, places the found match in the match string,
and updates the isSuccess boolean to true. The pattern can be presented as a string or a regular expression. The code snippet is shown
in Listing 2.

expect.expect("#");
assertEquals("#", expect.match);

Listing 2. Check for the command prompt

For the second operation, Expect provides the method send().

Testing command options

If the command under test has multiple options, like ls in UNIX or
dir in Windows, it is efficient to test it using a data-driven approach
with the JUnitParams library and JUnitParamsRunner. When you need
to match a complex output, then the java.util.regex.Pattern class
is at your service. Listing 3 illustrates how to create a data-driven test
for the command show.

@Test
@FileParameters(value = "file:c:/DDT/showCommand.csv",
        mapper = CsvWithHeaderMapper.class)
public void showCommand(String option, String pattern) {
    // input: show <option>
    expect.send("show " + option + "\n");
    // expect
    expect.expect(Pattern.compile(pattern, Pattern.DOTALL));
    // verify
    assertTrue(expect.isSuccess);
    System.out.println(expect.match);
}

Listing 3. Testing command with multiple options

The data file showCommand.csv contains two columns: one with the
command options and one with the regex patterns for expected match.
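
For illustration, showCommand.csv could contain rows like the following; the first row holds the column names (which is what CsvWithHeaderMapper expects), and the options and patterns shown here are invented examples rather than the author's actual data:

option, pattern
version, .*Version \d+\.\d+.*
interfaces, .*eth0.*is up.*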

CLI tests with decisions

Most CLI tests require a next command to be issued based on some
condition expressed in the previous command outcome. In this case,
you need to create a list of all possible patterns describing the expected
outcomes. When you pass that list into the expect() method, it will
return the index of the pattern that matched. This will let you know
what outcome among the set of multiple outcomes occurred.
Listing 4 provides an example of executing the show hsm status
command that has two possible outcomes: "Crypto-user logged in:
yes" and "Crypto-user logged in: no".

// input
expect.send("show hsm status\n");

// expect
List<Pattern> list = new ArrayList<Pattern>();
list.add(Pattern.compile("Crypto-user logged in: yes"));
list.add(Pattern.compile("Crypto-user logged in: no"));
int matchIndex = expect.expect(10, list);

// verify
assertTrue(expect.isSuccess);

// make a decision
switch (matchIndex) {
    // crypto-user logged in
    case 0:
        // new command
    // crypto-user is not logged in
    case 1:
        // new command
}

Listing 4. Making decisions

Descriptive test scenarios

For complex tests with multiple decisions, you can create a test scenario
in a separate text file using pre-defined conventions. Let's consider
a simple example for illustration. I want to test login/logout commands
for the HSM device. My first command is show hsm status, which gives
me the status of the crypto user. If the crypto user is already logged
in, I issue the log out command. If the crypto user is not logged in, I
want him to log into the HSM. This scenario can be presented in CSV
format as shown in Listing 5.

0, show hsm status, Crypto-user logged in: yes|1, Crypto-user logged in: no|3
1, hsm logout crypto user, y/\[n\]|2
2, y, Logged out of HSM partition successfully|-1
3, hsm login crypto user, Crypto user successfully logged into the HSM|-1

Listing 5. A descriptive test

The first column is the number of the command line. The second column
is the command itself. The rest of the columns present patterns (strings or
regex) for all possible command outcomes. Each pattern ends with the
bar that marks the end of the pattern and the number of the next command
to go to. -1 indicates the end of the test.

The Base class from which all test classes are extended contains the
method commandProcessor(String csvFileName, Expect expect)
that parses the test scenario file, runs all commands, and verifies pattern matches. Using the JUnitParams library, this test can be presented
as simply as shown in Listing 6, where cryptoUser.csv is the name
of the test scenario file.

@Test
@Parameters({"cryptoUser.csv"})
public void cryptoUserLogin(String file) throws Exception {
    commandProcessor(file, expect);
}

Listing 6. Test described through the CSV file

Summary

In this last part of the article, I described how to automate testing the CLI
within the UTA. Since the CLI is not as rich as a GUI, the structure of the
tests is much simpler.
The UTA can be extended to include other interfaces as well, and maybe
someday we will come up with one global test automation framework
that fits all automation needs.

References
[1] Vladimir Belorusets. "A Unified Framework for All Automation
Needs – Part I, II". Testing Experience, Issue No. 26, pp. 66–70, 2014,
and Issue No. 27, pp. 9–13, 2014.
[2] Expect-for-Java: https://github.com/ronniedong/Expect-for-Java
[3] JCraft JSch: http://www.jcraft.com/jsch
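
The commandProcessor() method itself is not shown in the article. Purely as a sketch of how the Listing 5 conventions could be interpreted, the Base class fragment below reads the scenario file, sends each command, and follows the command numbers attached to the matched patterns. The naive comma splitting, the 10-second timeout, and the omitted import of the Expect class are assumptions made for illustration; this is not the author's actual implementation.

import static org.junit.Assert.assertTrue;

import java.io.BufferedReader;
import java.io.FileReader;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.regex.Pattern;

// The Expect class comes from Expect-for-Java[2]; add the import that
// matches the way the library is packaged in your project.
public abstract class Base {

    protected void commandProcessor(String csvFileName, Expect expect) throws Exception {
        // read the scenario file; each row is indexed by its first column
        Map<Integer, String[]> rows = new HashMap<Integer, String[]>();
        BufferedReader reader = new BufferedReader(new FileReader(csvFileName));
        String line;
        while ((line = reader.readLine()) != null) {
            if (line.trim().isEmpty()) {
                continue;
            }
            // split into: command number, command, list of outcomes
            String[] cells = line.split(",", 3);
            rows.put(Integer.parseInt(cells[0].trim()), cells);
        }
        reader.close();

        int current = 0;  // the scenario starts at command line 0
        while (current != -1) {
            String[] cells = rows.get(current);

            // run the command
            expect.send(cells[1].trim() + "\n");

            // build the possible outcome patterns and remember which command
            // line each of them leads to (naive splitting: the patterns
            // themselves must not contain commas)
            List<Pattern> patterns = new ArrayList<Pattern>();
            List<Integer> next = new ArrayList<Integer>();
            for (String outcome : cells[2].split(",")) {
                String[] parts = outcome.split("\\|");
                patterns.add(Pattern.compile(parts[0].trim(), Pattern.DOTALL));
                next.add(Integer.parseInt(parts[1].trim()));
            }

            // wait for one of the outcomes and verify that a match occurred
            int matchIndex = expect.expect(10, patterns);
            assertTrue("no expected outcome for: " + cells[1], expect.isSuccess);

            // jump to the command line associated with the matched outcome
            current = next.get(matchIndex);
        }
    }
}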

> about the author


Dr. Vladimir Belorusets is a Director of QA at
Shocase, Inc. a social network for marketers. He
specializes in test automation and testing methodology. Dr. Belorusets is a Certified ScrumMaster
and Certified Tester Foundation Level. He is the
author of various articles published in Testing
Experience, Agile Record, Software Test & Quality
Assurance, Software Test & Performance, and StickyMinds.com. Dr.
Belorusets was a member of the Strategic Advisory Board and Conference Program Board at Software Test Professionals. He was a
speaker at Atlassian Summit, HP Software Universe, Software Test
Professionals, and STARWEST. Vladimir has held development and
QA management positions at Xerox, EMC, Siebel, CSAA, and various
startups. Dr. Belorusets earned his PhD in Control Systems from
Moscow Institute for Systems Analysis, Russian Academy of Sciences, and his Master's Degree in Theoretical Physics from Vilnius
State University, Lithuania. Vladimir has taught numerous courses
on functional and performance testing in various San Francisco Bay
Area computer schools.
LinkedIn: www.linkedin.com/pub/vladimir-belorusets-ph-d-csmctfl/0/2/416


Book Corner
Book Review:

Hands-on Mobile App Testing


Authored by Daniel Knott
A few weeks ago, I got my hands on this little
gem. It is available on Leanpub for a recommended price of US$15. It has 340 pages over
nine chapters, plus some introduction and
acknowledgement segments.
The book is "A guide for mobile testers and
anyone involved in the mobile app business".

While non-testing professionals can benefit


from a lot of useful information and getting a
better understanding of mobile needs, it clearly caters to testers (sic).
In its initial chapters, the book lays the foundation as to why the mobile factor is different to non-mobile, aka our regular software on
desktop or even laptop (arguably a kind of mobile) computers. In a
nutshell, this covers user expectations, data networks, fragmentation,
and the (often) short release cycles.

It builds up by showing the landscape which constitutes mobile work,


including the challenges, and gives practical tips on how to approach
the different aspects, e.g. how to create mobile device groups to get a
handle on the huge fragmentation of devices on the market.
Throughout the chapters there are a lot of useful links, be it statistical
sites, which can help identify relevant devices in your local market
region, testing heuristics to provide support in finding more test ideas,
or links to tools for different purposes.
Another chapter is dedicated to the so-called soft skills of testers. I
think that they are not exclusive to mobile testers and every tester can
benefit from them. With the advent of the fast paced development
cycles in agile and also in mobile app development (not necessarily the

same), there is need for more technically-oriented skills, like reading


code structures or supporting the team with test automation.
Overall, I enjoyed reading through the book and it contains the broad
palette of information one would expect.
Due to the eBook format, it plays to its strengths with a short update
cycle for outdated information and a direct access through the URLs
to various topics and information.
If the audience is new to the field, they get a comprehensive, up-to-date
book. Daniel also makes a quick side tour to provide an overview of different test methods and techniques, ranging from the classic preventive
vs. constructive testing with its analytical and dynamic approach (e.g.
white box/black box) to references to Exploratory Testing, BBST and
more. If this book is combined with one of the training courses in the
field (CMAP for example), this would be a very good way for a person
to launch themselves into a mobile testing career.
On the other hand, if the reader is an active mobile testing professional, he/she might not take a lot from it. But, as Daniel wrote in his
intro: "This book is a practical guide on mobile testing. You can read
it from front to back to get an overview of mobile testing, or you jump
straight to the chapters you're most interested in."
For me personally it repeated a lot of known information, which made
it hard for me to keep reading. But it was worth it, because in each
chapter I found some useful information to take away, which surprised
me in a positive way. Thanks for taking the time to create this book
and providing a good guide.
Maik Nogens

New Releases:
Testing Cloud Services
How to Test SaaS, PaaS & IaaS
Authored by Kees Blokland, Jeroen
Mengerink, Martin Pol

Published by Rocky Nook Inc. 1st Edition.


2013. 184 pages. Softcover. US$36.95


Guide to Advanced Software


Testing, Second Edition
Authored by Anne Mette Hass

Published by Artech House. 2nd Edition.


2014. 476 pages. Hardcover. US$114.00

Masthead

Editor
Díaz & Hilterscheid Unternehmensberatung GmbH
Kurfürstendamm 179
10707 Berlin
Germany

Phone: +49 (0)30 74 76 28-0
Fax: +49 (0)30 74 76 28-99
Email: info@diazhilterscheid.com
Website: www.diazhilterscheid.com

Díaz & Hilterscheid is a member of Verband der Zeitschriftenverleger
Berlin-Brandenburg e.V.

Editorial
José Díaz

Layout & Design
Lucas Jahn
Konstanze Ackermann

Marketing & Sales
Annett Schober
sales@testingexperience.com

Articles & Authors
editorial@testingexperience.com

Website
www.testingexperience.com

Subscribe
subscribe.testingexperience.com

Price
Online version: free of charge (www.testingexperience.com)
Print version: 8.00 € (plus shipping) (www.testingexperience-shop.com)

ISSN 1866-5705

In all of our publications at Díaz & Hilterscheid Unternehmensberatung
GmbH, we make every effort to respect all copyrights of the chosen
graphic and text materials. In the case that we do not have our own
suitable graphic or text, we utilize those from public domains.
All brands and trademarks mentioned, where applicable, registered by
third parties are subject without restriction to the provisions of ruling
labelling legislation and the rights of ownership of the registered owners.
The mere mention of a trademark in no way allows the conclusion to be
drawn that it is not protected by the rights of third parties.
The copyright for published material created by Díaz & Hilterscheid
Unternehmensberatung GmbH remains the author's property. No material
in this publication may be reproduced in any way or form without
permission from Díaz & Hilterscheid Unternehmensberatung GmbH,
including other electronic or printed media.
The opinions mentioned within the articles and contents herein do not
necessarily express those of the publisher. Only the authors are responsible
for the content of their articles.

Editorial Board
A big thank-you goes to the members of
the Testing Experience editorial board for
helping us select articles for this issue:
Maik Nogens, Gary Mogyorodi, Erik van
Veenendaal, Werner Lieblang and Arjan
Brands.

Index of Advertisers

Agile Testing Days Netherlands ........................... C2
Ranorex ............................................................... 3
CABA Certified Agile Business Analyst .................. 7
dpunkt.verlag ...................................................... 21
Testing Experience ............................................... 26
Rocky Nook, Inc. .................................................. 38
Agile Testing Days ................................................ 49
CMAP Certified Mobile App Professional .............. C4

Picture Credits

iStock.com/akindo ................................................ C1
Gonzalo Vázquez (www.gonvazquez.com) .............. 5

CMAP Certified Mobile App Professional
The new certification for Mobile App Testing

Apps and mobiles have become an important element of today's society
in a very short time frame. It is important that IT professionals are
up-to-date with the latest developments of mobile technology in order
to understand the ever evolving impacts on testing, performance, and
security. These impacts transpire and influence how IT specialists develop
and test software in their everyday work. A Mobile App Testing certified
professional can support the requirements team in review of mobile
applications, improve user experience with a strong understanding of
usability, and have the ability to identify and apply appropriate methods
of testing, including proper usage of tools, unique to mobile technology.

DE: December 15–16, 2014, Berlin (Christmas Special: €200 off!)
EN: December 18–19, 2014, Berlin (Christmas Special: €200 off!)
DE: January 12–13, 2015, Berlin

For further information visit cmap.diazhilterscheid.com or contact us at
info@diazhilterscheid.com. All our courses are available as in-house
courses and outside of Germany on demand!

Díaz & Hilterscheid Unternehmensberatung GmbH
Kurfürstendamm 179
10707 Berlin
Germany
Phone: +49 (0)30 74 76 28-0
Fax: +49 (0)30 74 76 28-99
Email: info@diazhilterscheid.com
Website: cmap.diazhilterscheid.com
