
Professional Tester
Essential for software testers
June 2016 - number 37

Including articles by:
Gregory Solovey, Nokia
Isabel Evans, BCS SiGIST
Hans Buwalda, LogiGear
Huw Price, CA Technologies
Derk-Jan de Grood, Valori


From the editor

The value of testing

We all appreciate that the world is connected. In fact, it's been suggested that some 30 billion 'autonomous things' will be attached to the Internet by 2020 and, with the pressure to move at greater speed, there is little wonder that application delivery is a challenge.

Organizations identify that the biggest stumbling block is establishing how to take a consistent approach to testing across multiple channels of engagement. Whilst the broad brush understanding exists that testing should be an early consideration, not an afterthought, there still needs to be a commitment from senior managers, who need to understand the value of involving testing teams at the outset if applications are to deliver for business.

Derk-Jan de Grood argues that testers also need to challenge and ask 'What are you trying to achieve?' If testers are to question the 'what', we haven't ignored the 'how': contributors Huw Price, Hans Buwalda and Gregory Solovey each set out practical insights into how it is possible to deliver more consistently.

We hope it provides plenty of food for thought and, as ever, your feedback and views are welcome.

Contact
Vanessa Howard
Editor

Editor
Vanessa Howard
editor@professionaltester.com
Managing Director
Niels Valkering
ops@professionaltester.com
Art Director
Christiaan van Heest
art@professionaltester.com
Sales
advertise@professionaltester.com
Publisher
Jerome H. Mol
publisher@professionaltester.com
Subscriptions
subscribe@professionaltester.com

IN THIS ISSUE
4 A test mature organization
Ever wondered how organizations know when they have achieved
test maturity? Let Gregory Solovey guide you.

9 Make yourself heard
BCS SIGiST is offering mentoring for testers looking to improve their presentation skills and here Isabel Evans outlines why these skills matter.

12 Test design driven automation


Hans Buwalda sets out his view on the prerequisites for
achieving automation success.

17 Meeting the challenge of API testing


Huw Price argues that the ubiquity of APIs makes the need for
rigorous model-based testing more pressing than ever.
22 Testing Talk
In this issue's Testing Talk, Anthea Whelan talks to Derk-Jan de Grood about why testing needs to be about far more than bug hunting.

Editorial Board
Professional Tester would like to extend its thanks to Lars Hoffmann, Gregory Solovey and Dorothy Graham. They have provided invaluable feedback reviewing contributions.

Professional Tester is published by Professional Tester Inc.

We aim to promote editorial independence and free debate: views expressed by contributors are not necessarily those of the editor nor of the proprietors.

© Professional Tester Inc 2015. All rights reserved. No part of this publication may be reproduced in any form without prior written permission. Professional Tester is a trademark of Professional Tester Inc.


Test strategy

A test mature organization

by Gregory Solovey

How do organizations know when they have achieved test maturity? Here's a useful guide.

"One way or another, all these practices have to be in use in a mature test organization."

Every software organization aims to achieve zero implementation defects. There is no silver bullet (or golden practice) that makes this possible. What, then, is the set of necessary and sufficient practices?

The Test Maturity Model (TMM) is a very accurate way to categorize test-related practices and methodically guide organizations through the sequence of steps required to achieve test maturity. There are five levels in TMM, as seen in Figure 1: Initial, Definition, Integration, Management and Measurement, and Optimization. This article is an attempt to describe the specifics of each stage, what challenges you can expect and when it is reasonable to consider a level reached.

Level 1 - Initial
The tests are executed manually and typically two types of tools are used at this stage: a test management system (TMS) and a change request (CR) system.

The first challenge is to identify the objects to test. Modern software is multilayered, with independent services on each layer (for example drivers, middleware, application, GUI). The layers communicate with each other through APIs, messages, the database, etc. The question is how it will be possible to test an intermediate layer as an independent product. Can this be done through a higher-level application, through a simulator of the higher-level application, or directly through the APIs? Should we test it using the GUI, to simulate the end-user scenarios, or through the REST APIs, to make the test stable and independent of the GUI implementation? There is no definitive answer; each approach has its pros and cons.


The second challenge is to implement the object's controllability and observability, which allow for the execution of all test cases. In other words, once the test interface is defined, it is necessary to verify that each test case can be executed: can a test stimulus be initiated from an external interface, and can the test response be captured at the external interface? If this is not the case, a test harness has to be implemented, in the form of additional GUI objects or CLI commands, in order to expose the APIs, messages, states, attributes, etc.
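To make this concrete, here is a minimal Python sketch of such a harness; the service, its internal state and the tst_* hooks are invented for illustration and are not from the article:

class RatePlanService:
    """Imaginary intermediate layer with internal state."""
    def __init__(self):
        self._plans = {}

    def apply_update(self, plan_id, price):   # normal production path
        self._plans[plan_id] = price

    # Test harness additions, exposing controllability and observability:
    def tst_inject(self, plan_id, price):
        """Stimulus: drive the layer directly, bypassing the GUI above it."""
        self.apply_update(plan_id, price)

    def tst_observe(self, plan_id):
        """Response: expose internal state at an external interface."""
        return self._plans.get(plan_id)

svc = RatePlanService()
svc.tst_inject("gold", 42.0)             # can a stimulus be initiated?
assert svc.tst_observe("gold") == 42.0   # can the response be captured?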

The third challenge is to establish a test hierarchy according to the document hierarchy. The tests have to be a mirror of the requirements/architecture/specification/interface documents.

This level is achieved when the following criteria are met:

- Test cases are stored in a test management system, for example Quality Center.
- All found errors are registered in a defect tracking system, for example Bugzilla.
- Stable manual tests exist for each system document.
- All test cases can be executed completely through the external interfaces.

Level 1 - Initial: manual test; test management system; CR management system
Level 2 - Managed: stand-alone test automation frameworks for unit, functional and performance test tools
Level 3 - Integration: continuous integration test framework for continuous deliveries and dynamic test environments
Level 4 - Quantitative: quality monitors and metrics dashboards; TDD/BDD; requirements, code and errors coverage
Level 5 - Optimizing: continual process improvement through review, audits and RCAs

Figure 1: The five levels of the TMM

Figure 2: Tests are executed manually, in a stand-alone environment

Level 2 - Definition
It is test automation time. Test tools have to be selected based on the available interfaces (web, GUI, CLI, mobile, DB) or based on the operating system and the development environment: Windows vs. Linux, .NET vs. Java vs. SOAP, etc. Today the tendency is toward open source test tools rather than commercial or internally developed ones. What is important is that each has a test framework, as shown in Figure 3. A test framework provides two main benefits to the user:

1. It transforms the test development process from coding toward declaring tests. The test framework provides a domain-specific language (DSL) to the tester, which supports a hierarchical test structure and uses verbs (keywords) such as: set, send, repeat, receive, compare, tear down. Each keyword, as a function, accepts parameters and is supported by a driver that does the real job. The set of commonly used keywords is organized in libraries, and test development then shifts to setting out a sequence of keywords. A framework also provides an IDE for testers to select and order keywords, run tests and analyze results.

2. It makes the testware change-tolerant with respect to changes in requirements, API syntax, GUI appearance, CLI parameters, file locations, security attributes, etc. This is achieved with various libraries, which maintain the identity of the CLI command syntax, API parameters and GUI objects. A test framework supports the testware organization through a collection of configuration files, test suites, test cases, and libraries of drivers and keywords.
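As an illustration of the keyword idea (this is a generic sketch, not any particular framework's DSL), a test can be declared as rows of keywords and parameters, with small driver functions doing the real job:

def kw_set(state, key, value):
    state[key] = value

def kw_send(state, key, delta):
    state[key] = state.get(key, 0) + delta

def kw_compare(state, key, expected):
    assert state[key] == expected, f"{key}: {state[key]} != {expected}"

# The keyword library: each verb maps to the driver that implements it.
KEYWORDS = {"set": kw_set, "send": kw_send, "compare": kw_compare}

# The declared test: a sequence of keyword rows, not code.
TEST = [
    ("set", "balance", 0),
    ("send", "balance", 10),
    ("send", "balance", 20),
    ("compare", "balance", 30),
]

state = {}
for keyword, *args in TEST:
    KEYWORDS[keyword](state, *args)
print("all keywords passed")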

This level is achieved when the following criteria are met:

- A framework shifts the testers' focus from writing new test scripts to declaring test keywords.
- The debug file structure is standardized, allowing for easy error identification and creation of defect trackers.
- The testware is organized in a fashion that allows for unique changes for any single modification of the object-to-test.

Figure 3: Tests are executed automatically, in a stand-alone environment

Level 3 - Integration
Continuous integration, continuous delivery and continuous deployment are buzzwords today in software engineering. Software is built numerous times per day, for multiple changes, various releases, features and applications. It has to be tested immediately, by various test tools and in sophisticated test environments that include permutations of browsers, operating systems, device variants, and so on, as seen in Figure 4.

Figure 4: Allowing for tests in multiple environments and multiple builds simultaneously

The challenges go beyond the test automation framework: the test environments have to be dynamically created, the results from several test tools have to be presented in standardized form, the mechanisms of build acceptance have to be defined, etc. Here are a few more features related to the continuous integration test framework: web-based monitors to keep track of all critical aspects, automated tools to correct and mitigate the results of test failures, recovery from crashes and application failures, automatic test reruns, and automatic test results backup and maintenance.

Upon build completion the continuous integration test framework should automatically initiate parallel tests in all official environments. If errors are discovered, the test team immediately takes action to identify the source of the errors, distinguish code errors from environment and testware faults, and make a determination about the fate of the build.

This level is achieved when the following criteria are met:

- A means exists to describe/define the release data within the framework: verification requests, resources, test environments, applications and associated tests.
- A consistent framework debug file structure is defined for all test tools, one that references the test description and specifies the details of the verdict creation.
- There are framework features that allow testers to re-run particular failed tests, filter environment errors from code-related ones, manually update the results and regenerate metrics/e-mails (re-running and filtering are sketched below).
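As an illustration only (the failure signatures and data below are invented, not from any particular framework), a Python sketch of classifying environment failures and re-running just those tests:

# Illustrative sketch: separate environment failures from code failures,
# and re-run only the former; code failures go straight to triage.
ENVIRONMENT_SIGNATURES = ("connection refused", "timeout", "dns", "disk full")

def classify(failure_message):
    msg = failure_message.lower()
    return "environment" if any(s in msg for s in ENVIRONMENT_SIGNATURES) else "code"

def rerun_failed(results, run_test, max_retries=2):
    """results: {test_id: (passed, message)}; run_test re-executes one test."""
    verdicts = {}
    for test_id, (passed, message) in results.items():
        if passed:
            verdicts[test_id] = "pass"
        elif classify(message) == "environment":
            for _ in range(max_retries):
                passed, message = run_test(test_id)
                if passed:
                    break
            verdicts[test_id] = "pass" if passed else "environment-fail"
        else:
            verdicts[test_id] = "code-fail"
    return verdicts

if __name__ == "__main__":
    results = {"t1": (True, ""),
               "t2": (False, "Connection refused by host"),
               "t3": (False, "AssertionError: 4 != 5")}
    print(rerun_failed(results, lambda t: (True, "")))
    # {'t1': 'pass', 't2': 'pass', 't3': 'code-fail'}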
Level 4 - Management and Measurement (Quantitative)

Figure 5: Dashboards monitor the quality of tests (test density, code coverage, requirements coverage and CR density, per feature)

At this level automated test cases can be easily added to the continuous integration solution. As a result, at some point, the

number of test cases and the time to run


them can become unmanageable. The
challenge now becomes to limit this
number, by creating a minimum set of
needed test cases. There are a few ways to
measure the necessary number of test
cases, via the coverage of various
properties of the object to test:
requirements and architecture coverage,
code coverage, error coverage (in APIs,
messages, states, models). Unfortunately,
none is perfect and sufficient by itself; all
have to be used to some degree.
- Requirements-architecture coverage (traceability). Obviously this is a necessary, but not sufficient, condition. Each requirement has to always be tested by more than one test case. A bi-directional traceability tool traces test cases from the test management system to the requirements documents and vice versa (a minimal check of this kind is sketched after this list).

- Code coverage. Test cases have to cover all lines of code but, again, this is not a sufficient condition: each line of the code has to be tested, and sometimes more than one test case is required. A code coverage tool is particularly useful for developer unit test.

- Error coverage. It is assumed that all requirements and architecture items are formally described by UML-like models such as state machines, conditions, algorithms, ladder diagrams, syntax, etc. In this case there are formal methods for test design that guarantee the minimum set of test cases that covers all implementation errors. Unfortunately, there are no satisfactory solutions for automatic software test generation (these only exist in the hardware world). Therefore, in order to control this process, testers need to be trained and must verify the test quality during the review meeting.
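A minimal sketch of such a bi-directional check, in Python, with invented requirement tags and test data:

# Illustrative only: requirements carry tags, each test case lists the
# tags it covers; report gaps, stale traces and single-test requirements.
requirements = {"REQ-1", "REQ-2", "REQ-3"}
test_cases = {
    "TC-100": {"REQ-1"},
    "TC-101": {"REQ-1", "REQ-2"},
    "TC-102": {"REQ-9"},   # traces to a requirement that no longer exists
}

covered = set().union(*test_cases.values())
untested = requirements - covered
orphaned = covered - requirements
single = {r for r in requirements
          if sum(r in tags for tags in test_cases.values()) == 1}

print("untested requirements:", untested)          # {'REQ-3'}
print("tests tracing to nothing:", orphaned)       # {'REQ-9'}
print("requirements with only one test:", single)  # {'REQ-2'}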
For a complete picture of this level,
a few more measurements have to be
maintained for each software build:
server/application performance
benchmarks (memory allocation,
processes utilization) and load, scalability
benchmarks (throughput, delays, latency).
To make these measurements work
they should be presented as a set of
dashboards for various entities: releases,
features, software components, and so on,
as shown in Figure 5. In this case the focus
of the project's review meetings would shift
from discussing where we are to what
should be done in order to be where we
want to be.
This level is achieved when the following criteria are met:

- Dashboards are in place to measure the test quality; the weak spots can be easily


identified, indicating where necessary adjustments are needed.
- Performance and scalability benchmarks are taken and verified for each build.

Level 5 - Optimization
All previous levels provide reactive test challenge response mechanisms. The specific aim of the last TMM level is to prevent errors, by establishing fault tolerance mechanisms. There are a few ways to implement this level, for example by writing test-friendly documentation and test-friendly code.

Here is a list of the requirements for supporting test-driven requirements:

- Transform the requirements/architecture from a business format into a hierarchy of formal software models: conditions, algorithms, state machines, instruction sets, message diagrams, syntax.
- Define tags for each requirement/architecture item for traceability and testability.
- Establish traceability of testware to documents (requirements, architecture, design, interface specification).
- Review requirements, architecture, design and interface documents for testability.

Here is a possible list of requirements for supporting test-driven code:

- Provide access to all object APIs and messaging interfaces with other subsystems.
- Provide access to the object states and attribute values and return them in TLV format.
- Report all errors/warnings with a predefined format.
- Redirect system messages to external interfaces.

There is only one mechanism to implement these rules: promote them to guidelines and then verify the compliance of the documents and code during the respective reviews. Review metrics should be uploaded to the dashboard. The audit system pulls data and applies specific constraints that result in possible violation notifications to the respective parties. The results of these audits are presented as a set of prevention dashboards, as set out in Figure 6.

Figure 6: Dashboards monitor the process of error prevention (requirements testability, test consistency, test harness and CR lessons, per feature)

This level is achieved when no implementation errors seep into the production code.

Conclusion
This article shows a specific implementation of the five TMM stages. It is an attempt to keep the description generic, assuming that similar approaches can be used in any software company. The sequence in which the practices are implemented, and their specifics, depend on a company's test culture and its weakest areas. However, in one way or another, all these practices have to be in use in a mature test organization.


Gregory Solovey, PhD, is a distinguished member of technical staff at Nokia and a frequent contributor to Professional Tester.

Feature

Make yourself heard

by Isabel Evans

BCS SIGiST is offering a place to four testers on its 'New Speakers Mentoring Scheme'.

"As we progress in our careers, the ability to get our message across by speaking to a group becomes even more important."

Jerry Seinfeld once spoke about that oft-quoted 'fact' that people fear public speaking more than they fear death. He observed: "Death is number two. Does that seem right? That means to the average person, if you have to go to a funeral, you're better off in the casket than doing the eulogy."

Whilst the hope is that addressing an audience must be preferable to a meeting with the Grim Reaper, it probably is fair to say that public speaking does cause many, if not all of us, some anxiety and trepidation.

Being able to present well is important for all testers, not just those people who want to share their stories with a wide audience. In the course of our working lives we will often need to be able to put a point across clearly at a meeting, argue a case, persuade others, and share information in an engaging way.

Spoken rather than written reports are often more useful and appropriate in projects. Our reports at stand-ups and progress meetings need to be succinct and worth listening to, if we are to engage our colleagues, customers and stakeholders. We often have messages that are difficult to get across or hard for people to hear. We need to be able to take questions and answer them. As we progress in our careers, the ability to get our message across by speaking to a group becomes even more important.

Yet at the same time, we also recognize that identifying an issue, arriving at a solution and convincing peers and line managers that a course of action can and should be followed is an essential skill in business today.

All too often, we hear that if testers are to add value, we need to break out of our 'silo' and question what is being tested. Agile demands ongoing dialogue and collective problem solving, and so sound communication skills are becoming as vital as a tester's ability to be analytical.

The BCS SIGiST has a mission to help the development of testers in their careers, and that includes helping provide a platform for those who want to improve their public speaking. And so we are offering up to four new or improving speakers, based in the UK, the chance to be mentored to improve their presentation skills.

Speaking at the SIGiST, especially as part of the New Speaker Mentoring Scheme, is an opportunity to improve your working practices around preparedness, spontaneity in response to unexpected questions, ability to make a case coherently, and spoken communication, providing you with enhanced work and life skills.
We will pair UK-based applicants with mentors during the year to prepare them for a short speaking slot at the BCS December conference. Our mentors are Julian Harty, Graham Thomas, Mieke Gevers and Dorothy Graham, all seasoned presenters who are committed to helping nurture talent.

The theme for the conference day is Challenge Yourself, and we want you to tell us about a testing challenge you have faced: how you overcame it or, even more interesting, how you failed and the lessons you learned.

Here's a summary for Professional Tester readers of how to apply.

How do I apply?
Download the application form (http://www.bcs.org/category/18795) and send it to the BCS Programme Secretary, Isabel Evans (her email address is on the form), in an email titled SIGiST New Speakers.

When do I need to do this?
Do it now! The deadline is 31st July 2016.

What is the process the SIGiST will use to select the successful applicants?
The Programme Secretary and mentors will review the applications, select four and assign each one to a mentor. Remember, what we are looking for at this stage is your idea for a presentation. Write that in the abstract. Then fill in the key points you wish to highlight. These don't need to be perfect yet.

What happens then?
You will provide an improved abstract submission by the end of August, and you will present a 10 to 15-minute talk at the BCS SIGiST on Wednesday 7th December 2016.
What can I expect from my mentor?
You will be matched with one of four world-class testing experts and speakers, who will:

- advise how to make a really appealing abstract to submit
- guide you in preparing your submission
- explain the presenting technology
- review and help you rehearse your presentation
- introduce you when you speak at the BCS SIGiST conference in London

So your mentor will provide advice, review comments and discuss your ideas with you during this time, but you are responsible for your content and delivery.

I'm not based in London; can you help me with travel?
Ask your company to pay your expenses as part of your professional development. If that is not possible, discuss with the Programme Secretary, as we pay expenses in some circumstances. Remember, we are looking for UK-based speakers in this scheme.

When and where is the conference?
Wednesday 7th December 2016, BCS Offices, Davidson Building, 5 Southampton Street, London WC2E 7HA.

Find out more about BCS SIGiST mentors:

Dorothy Graham: www.dorothygraham.co.uk
Graham Thomas: www.badgerscroft.com
Mieke Gevers: http://bit.ly/23gbi2O
Julian Harty: http://bit.ly/1WbFQRo

Isabel Evans is the BCS SIGiST programme secretary. She has more than 30 years' experience in the IT industry, mainly in quality management, testing, training and documentation.


Test automation

Test design driven automation

by Hans Buwalda

LogiGear's Hans Buwalda sets out his view on the prerequisites for achieving automation success.

"When I start a project my first question isn't 'what do I build?' but 'how do I test it?'"

Automated testing has never been more important, and is gradually developing from a nice-to-have into a must-have, particularly with the influence of DevOps. In essence, DevOps means that the deployment process gets engineered just like the system being deployed. This allows rebuilding and redeploying, in one form or another, to happen whenever there are changes in the code. It is similar to the "make" process known in the UNIX world, where parts of a build process can be repeated when code files change. A major aid in that process is when tests can be created and automatically executed, confirming that the new deployment will work.

For the lowest level of testing, the unit tests, this is not arduous. Unit tests are essentially functions that one-by-one test the other functions (methods) in a system. One step higher, component tests can verify the methods exposed by a component, usually without having to worry about the UI of the system under test. Similarly, REST services can easily be accessed by tests. In all these cases the automation of the tests is intrinsic, and such tests are usually not that sensitive to changes in the target system.

However, higher level tests like functional and integration tests can be more cumbersome, particularly if they have to work through the UI. It is this category that this article will address.

A very powerful approach to testing is known as "exploratory testing", a term coined by Cem Kaner and further developed by James and Jon Bach, Michael Bolton and others. However, exploratory testing is not intended as a starting point for automation. It is meant as a "testing by learning" approach: testers, preferably in groups of two and not following a script, will exercise a system, to get to know it and in that way identify potential issues. The strength of exploratory testing is the ability to find unanticipated issues in an application, something that is harder to do with tests that are prepared and automated in advance, which I'll revisit later in this article. For more information about exploratory testing, you can visit satisfice.com. Figure 1 outlines the three major test categories and their suitability for automation.

The role of test design
In most project testing, automation is seen as an activity, a chore that needs to be done but is not particularly inspirational. The focus is then usually on the technical activity of scripting often lukewarm test cases. The result can be a fairly trivial test set that, due to poor structure, is also hard to maintain.

Figure 1: Matrix of the three major test categories and their suitability for automation

Unit testing - relation to code: close relationship with the code; quality/depth: singular test scope, but deep into the code; automation: fully automated by nature; scalability: scalable, grows with the code, easy to repeat.

Functional testing - relation to code: usually does not have a one-on-one relation with code; quality/depth: quality and scope depend on test design; automation: in particular, UI-based automation can be a challenge; scalability: often a bottleneck in scalability.

Exploratory testing - relation to code: human driven, not seeking a relation with code; quality/depth: usually deep and thorough, good at finding problems; automation: may or may not be automated afterwards; scalability: not meant to be repeatable, rather do a new session.

step 16: Open http://www.bigstore.com
         -> The "BIG Store" main page is displayed, with a "sign in" link
step 17: Click on "Sign In", upper right corner
         -> A sign in dialog shows, the "Sign In" button is disabled
step 18: Enter "johnd" in the user name field
         -> The "Sign In" button is still disabled
step 19: Enter "bigtester" in the password field
         -> Now the "Sign In" button is enabled
step 20: Click on the "Sign In" button
         -> The page now shows "Hello John" in the upper right corner
step 21: Enter "acme watch" in the search field
         -> The "Search" button is enabled
step 22: Click on the "Search" button
         -> 5 watches of Acme Corporation are displayed
step 23: Double click on "Acme Super Watch 2"
         -> The details page of the Acme Super Watch 2 is displayed
step 24: Verify the picture of the watch
         -> The picture should show a black Acme Super Watch 2
step 25: Select "red" in the "Color" dropdown list
         -> The picture now shows a red Acme Super Watch 2
step 26: Type 2 in the "Order quantity" textbox
         -> The price on the right shows "$79.00 + Free Shipping"
step 27: Click on "Add to cart"
         -> The status panel shows "Acme Super Watch 2" added
step 28: Click on "Check out"
         -> The Cart Check Out opens, with the 2 Acme Super Watches

Figure 2: Steps
Given User turns off Password required option for Drive Test
And User has logged in by Traffic Applicant account
And User is at the Assessments Take a Test page
And User clicks the Traffic Test link
And User clicks the Next button
And User clicks the Sheet radio button in Mode page if displayed
And User clicks the Start button
And User waits for test start
And User clicks the Stop Test button
When User clicks the Confirm Stop Test button
And User enters the correct applicant password
And User clicks the Confirm Stop Test button
Then The Test is Over should be displayed in the Message label
And the value of the Message label should be The test is over
And The Welcome to Traffic Testing page should be displayed
Figure 3: A strong communications format can still result in poor usability

Figure 2 shows an example, adapted from an actual test project. As you can see, this test is not easy to digest. There are many detailed steps and checks, and there is no scope for them. The effect is that the test is hard to read and, as a result, even harder to maintain or manage. When tests look like this, one should not expect an automation engineer to produce well-structured and maintainable automation for them; it's not feasible.

The example in Figure 3, using a BDD Given-When-Then (GWT) format, also comes from a real project. It shows how using a strong format for communication, GWT, will not help much if it is not used well.

To improve the way tests are written we have to look at the test developers first. Once they are on track producing well-organized tests, the automation engineer will have an easier time with the technical side of the automation.
Actions
A first step to get testers to work effectively with everybody else is to use a format that is both easy to read and easy to automate. We will use "actions" for that, which are keywords with arguments. However, it is also straightforward to translate these into GWT scenarios.

In our tool, TestArchitect, we put our tests in a spreadsheet format. This is not essential to success, but a big benefit is that it makes the tests accessible for non-technical people, like functional testers, domain experts, auditors, etc.

Notice in Figure 4 how well this test can be understood without having to know the details of the UI of the application under test. It could be web based, client-server, legacy mainframe, or mobile; for this test it doesn't matter. This makes tests very resilient against changes in the application that do not change the business logic of this scenario.
Test modules
The Action Based Testing approach regards tests as products that have value in their own right. The tests are built as series of keyword-driven actions and organized in products called test modules. A crucial step is the definition of these modules. For example, there is a separation between "business tests" and "interaction tests", meaning that tests of business transactions are kept in different modules than tests of the UI interaction with a user. The underlying notion is that the tester creating the test design has a bigger impact on the automation success than the technical automation engineer.


                 acc nr    first    last
open account     123123    John     Doe

                 acc nr    amount
deposit          123123    10,11
deposit          123123    20,22

                 acc nr    expected
check balance    123123    30,33

Figure 4: Open Account
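As an illustration of how such action rows can drive automation (this sketch is invented and is not TestArchitect's actual mechanism), each action name can be mapped to a Python function and each row supplies its arguments:

# Illustrative interpreter for the action rows in Figure 4. The in-memory
# account store stands in for the real application under test.
accounts = {}

def open_account(acc_nr, first, last):
    accounts[acc_nr] = {"name": f"{first} {last}", "balance": 0.0}

def deposit(acc_nr, amount):
    accounts[acc_nr]["balance"] += amount

def check_balance(acc_nr, expected):
    actual = accounts[acc_nr]["balance"]
    assert abs(actual - expected) < 0.005, f"{actual} != {expected}"

ACTIONS = {"open account": open_account, "deposit": deposit,
           "check balance": check_balance}

# The test module from Figure 4, expressed as rows of action + arguments.
rows = [
    ("open account", "123123", "John", "Doe"),
    ("deposit", "123123", 10.11),
    ("deposit", "123123", 20.22),
    ("check balance", "123123", 30.33),
]

for action, *args in rows:
    ACTIONS[action](*args)
print("check balance passed")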

High-level test design / test development plan:
  define the "chapters"  -> test modules (1..N), each with its objectives
  create the "chapters"  -> test cases
  create the "words"     -> actions
  make the words work    -> automation

Interaction test (example):
                   window    control     value
  enter            log in    user name   jdoe
  enter            log in    password    car guy
                   window    control     property   expected
  check property   log in    ok button   enabled    true

Business test (example):
                   user      password
  log in           jdoe      car guy
                   first     last       brand      model
  enter rental     Mary      Renter     Ford       Escape
                   last      total
  check bill       Renter    140.42

Figure 5: Overview of the test modules approach

In Figure 5, notice that the first step is identifying the test modules. Think of them as chapters in a book: once you have defined them, the rest is a matter of putting the text in the right chapters. Structuring tests this way gives them a good scope, which in turn helps when deciding what actions to use and what checks to create. For example, some test modules are "interaction tests", which test whether the user can interact with the application, while other modules take a "business test" perspective. A key requirement for success is to keep these two kinds in separate test modules. It should not happen that a business level test, like checking the billing amount of a car rental as shown in Figure 5, contains navigation details, like a "select menu item" action.


Business objects and business flows
Having a clear modularized test design will greatly assist in managing and maintaining automated tests. The identified modules work like buckets into which you can put your test cases. However, it is not always easy to decide where to start. We have had good experience with using "business objects" and "business flows" as a starting point. In this approach, you would:

- identify the business objects with which your application works
- also identify a number of business flows: end-to-end transactions that involve multiple business objects
- have additional categories for the tests that don't fit the above two buckets

For example, an e-commerce site might have the following:

- business objects: articles, customers, staff members, promotions, payments
- business flows: place/fulfill/pay an order; introduce a new article and sell it
- other tests: authorizations, extensibility, interoperability

Within each of these categories you can have business tests and interaction tests. For business objects the business tests will do various life cycle operations, like creation, modification and closure. The interaction tests will look at the dialogs involved, and test details such as: does a drop-down box have the correct values?

                   name        start        finish       percentage   category
create promotion   christmas   1-12-2016    25-12-2016   5            all
create promotion   tablets     20-12-2016   1-1-2017     11           tablets

                   date
time travel        23-12-2016

                   article       price
check nett price   iPad Mini 4   338,19

Figure 6: Promotion 1

                         window       control
click                    main         new promotion
                         window       type
select                   promotion    town
                         window       list     value
check list item exists   promotion    towns    Tietjerksteradeel

Figure 7: Promotion 2

In Figure 6, the examples show tests around "promotions". There will be many kinds: percentage or fixed cash, overall or per article/country, and fixed time period or perpetual. We will discuss the action "time travel" later in this article.

The drop-down checks would typically be interaction tests: in Figure 7, the test fragment verifies whether the Dutch town (municipality) "Tietjerksteradeel" is in the list of towns.
Testability
Test design is, in my view, the most important driver for automation success. However, following closely behind is "testability": is the application under test prepared for testing, in particular automated testing? Testability should be a high priority requirement. When I start a project my first question isn't "what do I build?" but "how do I test it?"

Surprisingly, many applications are not very testable. LogiGear has several game companies as customers, and in more than one instance the only access to a game is its graphical display. We have experience in image-based testing, and were able to handle the tests, but it requires a great deal of effort and the result is ultimately hard to maintain.

Testability starts with good system design. If a system has clear components, services, and tiers, there will be many ways tests can access it, to set up situations and verify outcomes. That access can be UI and non-UI, depending on the scope of the tests.

A top-priority testability item is the identification of UI elements, like controls in a desktop application or HTML elements on a web page. UI elements are typically identified by test tools using their properties, and changes in those properties are a big source of problems for test automation. If a button is identified by its user-visible caption "Submit", and in a subsequent version of the application that caption is changed to "OK", the tests won't be able to find the button anymore and will stop working. However, virtually all desktop platforms, like Java/Swing or WinForms, allow controls to have a hidden "name" property, which is easy for a developer to define and which thoroughly solves the identification problem for the testers.
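As a hedged illustration of this point with Selenium locators (the element names here are hypothetical, not from the article):

from selenium.webdriver.common.by import By

# Fragile: tied to the user-visible caption, breaks when "Submit" -> "OK".
fragile_locator = (By.XPATH, "//button[text()='Submit']")

# Robust: tied to a hidden, developer-assigned name that survives UI edits.
robust_locator = (By.CSS_SELECTOR, "button[data-test-id='confirm-order']")

# Usage, given a running webdriver instance:
# driver.find_element(*robust_locator).click()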
An equally high priority testability requirement is timing. There are many cases where, for example, a table on a screen is populating with values, and a test has to wait until that process has finished before verifying a single cell value. Quite often the tester will not have a clear criterion that the test can wait for, and will use some arbitrary hard-coded waiting time. This typically means that the test slows down if the waiting time is too long, or will break if it is too short. This gets worse when the tests are executed on virtual machines, which is becoming standard practice. We have several large projects that run tests for hours or even days in a row, and occasionally break on time-outs. It is, however, usually very straightforward for a developer to offer a criterion that the test can wait for, like a dedicated property of the earlier mentioned table control.
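A minimal sketch of waiting on such a criterion instead of a hard-coded sleep, using Selenium's explicit waits (the "data-loaded" attribute is an assumed hook that developers would expose, as suggested above):

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

def read_total_when_ready(driver, timeout=30):
    # Block until the table signals it has finished populating, then read.
    WebDriverWait(driver, timeout).until(
        EC.presence_of_element_located(
            (By.CSS_SELECTOR, "table#results[data-loaded='true']")))
    return driver.find_element(By.CSS_SELECTOR, "table#results td.total").text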
Apart from timing hooks, other "white box" access to the internals of a system can be helpful as well. For example, the graphical game can provide features to let the test know which objects (like "monsters") are on the screen and where they are, so that in the case of monsters the test can "shoot" them. Another example is the graphics we test for a geophysical application: for those tests that want to verify the numbers being displayed, it is much easier to get these numbers via an API call than to have to reverse-engineer them from an image.

Team work
In addition to test design and testability, the cooperation between the various players is a major driver for testing and automation success. The reverse is also true: if tests are comprehensive, efficient and run stably, it is of great help to the rest of the development and delivery processes. In particular, DevOps style processes will run smoothly.

For agile projects, the team is the first place where co-operation will and should take place. The tester and automation engineering roles can work with the product owners and the developers. Often tests can even give direction to the rest of the project, but I would argue that is a good benefit but not their primary role (except maybe for unit tests).

The testers in the team should start the sprint focusing on the higher level test modules first, which should ideally be at a similar level to the incoming sprint backlog items like user stories and acceptance criteria. Since in a typical sprint the UI details are not known or stable yet, testers won't be able to write interaction details in the test, which I consider a good thing: those come later. Also in the early phase, the testers should discuss with the developers what UI items will be created, and what their internal identifying properties are going to be. The tester can then manually create and maintain the interface mapping, thus eliminating a large part of the automation work. As the sprint continues, the actions used in the initial test modules can be automated, and more detailed interaction tests can be created as well, in their own modules.

Summary
Automation has many facets. It is often seen as a must-have, but it does not always get the priority it deserves. Working with an overall test design, and having applications developed in a testable way, can be a great help. It all comes together with the attention and cooperation of all involved, which can make automated testing a success and, as a result, help the success of the project overall.

Hans Buwalda leads LogiGear's research and development of test automation solutions, and oversees the delivery of advanced test automation consulting and engineering services. He is also the original architect of the keyword framework for software testing organizations.

Test strategy

Meeting the challenge of API testing

by Huw Price

Why a rigorous and model-based approach is needed in API testing.

Some of the reasons for the complexity of API testing are set out in this article. The growth of API use means that testers must accept the development practices associated with them, recognizing that a legacy approach has failed. The central tenet of this article is that testers should adopt the role of critical modeller, and should strongly influence the design and implementation of APIs. They should be involved far earlier; this is necessary to achieve sufficiently rigorous testing when faced with the complexity of APIs.

"The design should be changed to make it possible to design or observe the reason for a result."

APIs are by no means new, and


componentizing is a fundamental of
good programming. However, nowadays
practically every organization is cashing in
on the business value of making sections
of code readily available to other applications. These range from new customer
channels to better integration and visibility,
and this rise to ubiquity makes the need
to rigorously test APIs more pressing
than ever.
At first, the challenges of API testing look
similar to those of testing complex chains
of applications end-to-end, where the job
of the tester is to understand the causes
across applications which led to an
eventual result. This, however, is often
considered the most difficult aspect of API
testing, and trying to get all of the moving
parts in the correct state at the right time is
sometimes considered almost impossible.

Current API testing methods
Most API testing is currently performed by some kind of test harness which has been created by hand, for example by manually writing scripts in Python or JavaScript to trigger an API. There are some tools that can automatically create basic tests from a protocol, but these tests are generally primitive. Other tools can track traffic and replay it in testing frameworks later. Typically, neither method leads to rigorous testing, and the unsystematic approach to test creation prevents measurability; these methods cannot, therefore, provide estimates of either risk or test coverage.

In order to sufficiently test APIs which change frequently, these scripts should be created automatically, as part of a controlled test automation framework. A model-based approach can introduce the required rigor, while newly available and scalable testing frameworks mean that testing can keep up with the rate of change. This approach will be set out later in this feature.
Observability: is the API actually testable?
The first question to ask is: is the API testable? Though much research has been done around the testability of systems (see Richard Bender), it is rarely implemented within traditional IT engineering disciplines. As a consequence, testers often get a result for the wrong reason, where the cause of the eventual event was different from what they thought. For instance, two defects might cancel each other out, producing a false positive. This lack of observability in testing then leads to frustration, when fixing bugs unearths a myriad of others and development feels like it is moving backwards.

This is one reason for involving testers


earlier, in the actual design and implementation of APIs. The design should be
changed to make it possible to design or
observe the reason for a result. In practice,
this can mean leaving breadcrumbs such
as probe points or data that can be used in
testing to verify that the right result was
achieved for the right reason.

Figure 1: Version dependency mapping of three APIs, with constraints restricting which versions can occur together.

A good example of this is found in car


management systems, where probes are
used to identify exactly where a system is
failing. The difficulty, however, is getting the
correct number of probes in the right place.
APIs are no different, and a simple audit log
or more detailed return message can be
hugely beneficial when testing the results
from an API.
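As a toy illustration of such breadcrumbs (the rules and field names are invented), an API can return an audit trail of the decisions it took alongside the result, so a test can check that the result was produced for the right reason:

def apply_discount(order_total, customer_tier):
    trail = []                        # probe points recorded along the way
    discount = 0.0
    if customer_tier == "gold":
        discount += 0.10
        trail.append("rule:gold-tier-10pct")
    if order_total > 100:
        discount += 0.05
        trail.append("rule:bulk-5pct")
    return {"total": round(order_total * (1 - discount), 2),
            "audit": trail}           # breadcrumbs returned with the result

resp = apply_discount(200, "gold")
assert resp["total"] == 170.0
# Verify the reason, not just the result, guarding against false positives:
assert resp["audit"] == ["rule:gold-tier-10pct", "rule:bulk-5pct"]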
The need to overcome complexity in API testing
A further case for involving testers as early as possible, and for model-based testing as well, is to overcome the complexity of APIs. Broadly speaking, there are two types of test cases which must be covered sufficiently:

- Positive testing: here, the process is defined clearly and the tester ensures that all decisions and function points have been covered by the designed test cases.
- Negative testing: here, test cases define the edge cases which should be rejected, and simulate the data scenarios to ensure that the API rejects them correctly.

Numerous factors can cause the number of possible test cases across both categories to grow far beyond the capability of manual test case design, and this challenge needs to be overcome.

Figure 2: The logic of a customer API


APIs calling other APIs
Most APIs are assemblies of calls to other APIs, each with their own rules, which can create unexpected results. The complexity of an API can therefore grow exponentially as it is combined with other API calls, and this is a particularly acute problem for manual, unsystematic test design.

APIs and Units of Work: Testing rollbacks and cleanups
With multi-tier architecture, testing must further cope with a unit of work which spans multiple APIs. Whereas previously it might have been possible to roll a system back relatively easily to a point before a failure, with multiple APIs separate rollback and cleanup processes might be needed. Each cleanup itself will need rigorous testing, and it is not an exaggeration to say that testing a failure and rollback can be hundreds of times more complicated than posting the initial transaction.
Versioning of APIs
Versioning is a further cause of growing complexity in API testing. Most systems have a degree of deprecation, so an API must be able to handle an old version calling new versions, or a combination thereof. The API must recognize missing values and assign some kind of default to allow the old version to work. What's more, it might be the case that some versions can be called by some versions but not others, and the numerous possible combinations must therefore be tested.

Figure 3: The twelve logical combinations derived from the model in Figure 2 (columns: test_name, Action, Customer Number, NewPayload, Payload, expected_results). Each of the Change, Lookup and Create actions is exercised with valid and invalid customer numbers and payloads; three combinations display the payload (with the valid Create also returning a NewCustomerID) and the other nine make the API fail.

The model in Figure 1 is a


dependency map of three APIs, with
some logic between them. There are
multiple versions of each API, some of
which can work with others. Without
considering different versions of the API,
there are 127 combinations which need
to be tested. However, when the versioning
is overlaid, there are 10,287 possible
combinations which need to be tested,
and it is not likely that manual scripting
will cover a sufficient proportion
of these.
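A sketch of enumerating version combinations under compatibility constraints, in the spirit of Figure 1; the APIs, versions and constraints below are invented for illustration:

from itertools import product

versions = {"A": ["1", "2"], "B": ["1", "2", "3"], "C": ["1", "2"]}

def compatible(combo):
    if combo["A"] == "1" and combo["B"] == "3":
        return False                         # old A cannot call the newest B
    return int(combo["C"]) <= int(combo["B"])  # C must never be newer than B

names = sorted(versions)
combos = [dict(zip(names, vs))
          for vs in product(*(versions[n] for n in names))]
valid = [c for c in combos if compatible(c)]
print(f"{len(valid)} of {len(combos)} version combinations need testing")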
Ordering of APIs
The calling order of APIs must also be
factored into test case design, and this
can further cause the number of possible
test cases to skyrocket. Usually one API
must be called before another, and so on,
but in rare instances APIs can be called in
random sequences too. Here, the number
of combinations will be huge, and the only
real way to test the combinations is to
consider each in isolation, making sure
that you test each trigger and each
effect separately.

Figure 4: Subflows have been created to designate what needs virtualizing, and the process behind the virtualization. These subflows (orange) are then incorporated under the master process, ready for testing.

Introducing rigour with model-based testing
Considering the above factors, it will be almost impossible to exhaustively test every combination involved in a given set of APIs using typical approaches to API testing. Instead, testing must be realistic and proportionate, as well as automated and systematic. This requires the ability to prioritize tests in a systematic way, and model-based testing offers a highly structured approach to overcoming the complexity of API testing.

Figure 5: The subprocess has been incorporated into a test case to test a path through the master flow.

Figure 6: The basic test modelled above now has optional Virtual Test control added.
Test categorization
As exhaustive testing will rarely be possible, you should first break down test types based on units of work and the availability of machine power. For example, if you have enough virtual machines you could consider continually exhaustively testing some APIs, especially if they are critical components called by many other processes, before connecting APIs up into different categories of tests. The types of testing can be loosely categorized as:

- Exhaustively testing a single API: if the number of combinations is reasonable, then consider testing all possible scenarios, as is set out in the next section.
- Multi-version testing: if there are multiple versions of an API being called or available, then they should be modelled and their dependencies tested to ensure cross-version stability.
- Order-sensitive API testing: the order each API is called in must be modelled and understood, and tests designed to cover an appropriate level of combinations.
- Failure testing: testing failure should be considered a specific set of tests, i.e. test the successful rollback of a unit of work to a data-safe state.
- Chain testing: the linking of APIs together.
Exhaustively testing an individual API
As an example for this article, a single customer API will serve as the starting point for exhaustive testing. Figure 3 shows that there are 12 possible combinations through the logic gates: 3 valid and 9 invalid combinations to be tested. This is a simplified version, and you would have far more logic around the payload validation; however, creating a completely automated test harness for this API would be straightforward, as is set out next.
Incorporating virtualization in API testing
In order to test this API, it is likely that you will need to use virtualization. In this simplified example, some of the negative tests (those starting InvalidParm) can be set up by forcing deliberately bad values into the API parameters (name=?); however, for the other negative tests, you will have to simulate either a slow response, a fault or a database update failure. In order to test these combinations as part of the master flow, you would need to change your API to a virtualized version. The test case design logic housed in the example flowchart has therefore been adjusted to allocate virtual endpoints within the tests. In Figure 4, the subflows (orange) set out the APIs which will be virtualized, as reflected in the indented steps of the test case shown in Figure 5.
Moving from exhaustively testing an individual API to chain, integration, multi-version, failure and order-sensitive API testing
Once test cases have been defined to exhaustively test an individual API, you can use model-based techniques to connect components into further types of tests. Effective testing of chains of API calls, or integration testing, requires that you fully understand each API, its causes and effects, as well as its data dependencies. You can additionally combine the version compatibility matrix (as defined in Figure 1), while selectively choosing to virtualize certain decisions to merge functional and logical testing in one model, as shown in Figure 6.
Auto-generate scripts, data and virtual endpoints
So far, we have created a model which incorporates the various APIs to be tested, and their relationships, while also specifying what needs to be virtualized. Test data and automation code snippets can further be overlaid, but the question remains of how to actually generate the tests systematically.

As the flowchart model is mathematically precise, paths can be generated automatically from it. These paths are equivalent to test cases, which can then be converted into automated tests, as shown in Figure 7. If automated test scripts and test data have been overlaid into the flowchart itself prior to the creation of the optimized paths, then the manual effort of generating automated tests can be substantially reduced.

Combining the automated test design and virtual responses with version compatibility allows you to define combinations of tests in a much more structured way. Faced with a large number of possible combinations, you have a clear measure of coverage and risk with which to prioritize tests.
Summary
The ability to selectively stabilize some of the API calls in a controlled way provides a much more structured and rigorous approach to testing APIs. The use of model-based testing techniques, whereby each decision gate is tested with the minimum number of test cases in conjunction with a risk-based approach, further means that API testing can be optimized to more rigorously test a given set of APIs, even when faced with huge complexity.

Figure 7: The flow (top) shows the subflow models of APIs connected into a chain; the scripts (bottom)
are created as an output of the test case design tool.

Huw Price is vice president, application delivery & global QA strategist, at CA Technologies. Huw's 30-year career in testing has given him a deep understanding of testing as a science and how it can be applied to modern organizations so that they can meet today's challenges.


Interview

Testing Talk
by Bogdan Bereza

Anthea Whelan talks to Derk-Jan de Grood, a thought


leader in testing circles, as well as an agile transition
coach. A senior consultant for Valori, Derk-Jan has won
the European Excellence award, has published several
successful books and frequently presents keynote
speeches at a variety of agile testing conferences. You
can also see him during the upcoming Test Automation
Day in Rotterdam on 23 June.

You mention that you enjoy studying trends


and how they affect us. Has anything
caught your eye lately?
I try to collect as much information as
possible from a variety of sources. I listen
to my peers at conferences to see what is
happening and get to visit a great many
organisations. I learn more and more
where people find their troubles lie, where
their struggles are, and from this mix I try
to find new solutions. I get new ideas and
try them out. You can get a lot of new ideas
from trend-watching, but the real challenge
is to translate that into benefit for our
customers.
There is a lot going on in IT right now, but I tend to focus a lot on agile these days. Within agile the daily focus shifts from working in silos to collaboration, from execution to coaching, from preparing to doing. But the test fundamentals remain in place. I am currently preparing my keynote for the September SIGiST in London, on this topic. It is really interesting: when we apply the agile principles, the test activities may seem the same, but the motivation to do so might change. The scope has widened beyond just finding defects. It's about contributing to business value.

"Since a lot of projects get into stormy water, the stakeholders very often have big concerns and would really love to have insight into progress, quality and dependencies."


This aligns with a lot of stuff that I have


been working with for a long time: the
value-driven aspects of IT. Testers should
be able to say: I want to do all the tests
you want me to do, but first, why do you
want me to do these tests? What are
you trying to achieve?
Does anything still surprise you?
I did a workshop recently about sharpening the profession and thinking about what sort of skills are required to become a tester, and I was surprised by how many people did not even know about agile. There are still testers, and I am surprised by how many there are, who grew up in the fashion of applying the methods, plus the odd trick or two, and then you are done. That no longer works, but there seems to be a group of testers that cling to old values.
Why do you feel that automated testing is still quite slow to be adopted?
Perhaps because of the Millennium bug? In the 1990s, a lot of testers were needed for things like the Y2K bug and the Euro introduction. Many were hired, without any IT background, to manually test these systems. They became very good at functional testing, but perhaps they may have been a little afraid of technique. For years we'd reason that automation is good, but that we needed to ensure that all the boundary conditions were met before we could start automating; otherwise we would just end up with a mess.

I see a lot of organizations still with a lot of legacy issues in both equipment and techniques. People have difficulties adopting technologies and tools; in most organizations, it's easier to hire a person than to buy a tool.

Within testing circles, we are asking more frequently if testers really need to be programmers. In the conferences I've attended, the audience usually seems quite evenly split. Meanwhile, the tools have become better, the need to automate is greater, and we have changed our major development methods, so that, in turn, creates the need to automate. A lot of managers aim for DevOps, continuous integration and deployment: you have to have your house in order if you want to do these things, and that is very difficult without automation. So the setting has changed and automation is part of agile now; part of getting your development processes in order.

Business managers have to change their way of thinking. They just want good quality solutions, quickly, but this is not possible without automation. Switching to automation can initially be expensive.
You often talk about showing the added value of testing. Why is that important?
To me, testing is more than just bug-hunting, more than just making the code a little better. Testing is about aligning with stakeholder needs and addressing their concerns: that adds value, especially in the perception of the stakeholder. Since a lot of projects get into stormy water, the stakeholders very often have big concerns and would really love to have insight into progress, quality and dependencies.

Ask: whose is the final decision? Who decides whether they accept this? What must be done so that it will be accepted? What exactly are the acceptance criteria? Testing is a means to obtain that crucial information. Once we have made that clear, whether these activities are focused on bug hunting or automating tests, stakeholders will provide support, as our activities will deliver the information they so desperately need.
Find out more about Derk-Jan de Grood at: djdegrood.wordpress.com/


