December 2014
Dear readers,
We are facing the end of 2014, and when we look back we see a great year behind us.
The magazine met the expectations set by the editorial team: we have increased the quality
of the articles, we have more readers worldwide, and we made some small changes to the structure
and the website. Even though we are happy with these changes, we want to improve your experience
further next year and are therefore planning new ones.
The conferences we organized ran amazingly well. The first edition of Mobile App Europe was impressive, with a lot of new insights into the mobile business, and we have already started planning for next
year. Our main conference, Agile Testing Days, was the best ever and we are very proud of having so
many talented speakers at it. The attendees had a lot of fun and learned a lot. It was also a pleasure
to have had the exclusive release of the new book by Lisa Crispin and Janet Gregory. More Agile
Testing is a milestone in the agile world. The sister conference Agile Testing Days Netherlands that
took place earlier this year was a success and we are working hard to give you a great experience
again on March 19, 2015 in Utrecht, The Netherlands. Please visit the website www.agiletestingdays.nl.
I want to thank all the authors, sponsors, and partners for their support in issuing the magazine. A
special thank-you goes to Konstanze, who laid out the magazine for the last time!
Last, but not least, I wish you a Merry Christmas and a Happy New Year!
Contents 28/2014
From the Editor...............................................................................................1
Agile: Is It Worth It? Advantages of Using It........................................4
by Antonio González & Rubén Fernández
Test-Driven Developments are Inefficient;
Behavior-Driven Developments are a Beacon of Hope?
The StratEx Experience Part II.................................................................8
by Rudolf de Schipper & Abdelkrim Boujraf
A Report about Non-Agile Support for Agile........................................12
by Martin Uhlig
PERFORMANCE COLUMN: Exploratory Performance Testing.......... 14
by Alex Podelko
What Developers and Testers Need to Know about the
ISO 27001 Information Security Standard............................................. 16
by Klaus Haller
What Makes a Good Quality Tester?......................................................20
by Jacqueline Vermette
Ready for CAMS? The Foundation of DevOps...................................... 22
by Wolfgang Gottesheim
Combinatorial Testing Tools....................................................................24
by Danilo Berta
A Continuous Integration Test Framework..........................................30
by Gregory Solovey & Phil Gillis
Figure: Project outcomes, Waterfall vs. Agile. Waterfall: successful 14%, challenged 57%, failed 29%. Agile: successful 42%, challenged 49%, failed 9%.
With this brief description, we start to observe why AGILE is so important: response to changes. Often, new companies do not know very
well what their clients want or how to define their roadmap, hence
pivoting and iterating until they reach the expected results is almost
mandatory. Agile development allows small companies to refine their
products and goals on the go.
But these are not the only reasons why Agile is important. There are
plenty more. Below are several explanations, from different perspectives
and points of view, of why it is reasonable and suitable to use agile in
software development.
and industrial environments in different companies, including Sogeti, Ingenico, Grifols, and
Aurigae Telefonica R+D. He presently holds the
role of SW QA Manager at Zitro Interactive, where
Conclusion
Agile is not a one-off aspect of a company's development process.
It is a development philosophy that helps to deliver frequent releases
with high quality to final customers, through team collaboration,
transparency, and continuous improvement.
LinkedIn: www.linkedin.com/in/rbnfdez
Missed Part I? Read it in issue No. 27!
By Rudolf de Schipper & Abdelkrim Boujraf
software. This is our target audience for this article. We do not pretend to describe how to test military-spec applications or embedded
systems, for example.
We know it is easy to criticize, but this was not for the sake of being
negative. We believe that our experience has led to some valuable
insights, apart from the points we simply do not like. Further, we
believe our experiences are not unique. So, in this second article we
want to take a look at what can be done to test our software efficiently.
Let's look a bit closer at the various types of tests we might need to
devise to achieve such reasonable assurance.
We have observed that the generated code and screens are usually
of acceptable initial quality. This is because the number of human/
manual activities needed to produce such a screen is very low. The code
templates used by the generator obviously took time to develop. This
was, however, a localized effort, because we could
concentrate on one specific use case. Once it worked and had been
tested (manually!), we could replicate this with minimal effort to the
other screens (through generation). We knew in advance that all the
features we had developed would work on the other screens as well.
An interesting side-effect of this method is that if there is an error in
the generated code, the probability of finding this error is actually very
high, because the code generation process multiplies the error to all
screens, meaning it is likely to be found very quickly.
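The replication economics described above can be sketched with a toy generator. This is a minimal sketch assuming a `string.Template`-based approach; the entity names and template body are illustrative assumptions, not the article's actual tooling:

```python
from string import Template

# Hypothetical screen template; a defect here would be stamped into every
# generated screen, which is why template bugs tend to surface quickly.
SCREEN_TEMPLATE = Template(
    "class ${entity}Screen:\n"
    "    title = '${entity} overview'\n"
    "    def load(self, repo):\n"
    "        return repo.fetch_all('${table}')\n"
)

entities = {"Contract": "contracts", "Project": "projects", "Offer": "offers"}

# One manually tested template, replicated to all screens with minimal effort.
generated = {
    name: SCREEN_TEMPLATE.substitute(entity=name, table=table)
    for name, table in entities.items()
}

print(len(generated))  # 3 screens from one template
```

Because every screen shares the template, fixing the template once fixes all screens at once, which mirrors the error-multiplication effect described above.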
The hand-coded screens are on the other side of the scale. They present
a high likelihood of errors, and we have also found that these screens
are prone to non-standard look and feel and non-standard behavior
within the application. When compared to the approach of generating
highly standardized code, the reasons for this are obvious.
ate (parts of) your tests, do it. It reduces the maintenance cycle of your
tests, which means you improve the long-term chances of survival of
your tests. We have not found convincing evidence to state that hand-coded (non-standardized) screens can be fully described (see Table 1)
by a set of BDD/Gherkin tests or briefly described (see Table 2). The
simple fact is that it would require a large amount of BDD-like tests to
fully describe such screens. One practice we have observed is to have
one big test for a complete screen; however, we found that such tests
quickly become complex and difficult to maintain for many reasons:
1. You do not want to disclose too much technical information to
the business user, e.g., username/password, the acceptable data
types, the URL of the servers supporting the factory acceptance
test (FAT), system acceptance tests[10] (SAT) and the live application.
2. You need to multiply the number of features by the number of
languages your system supports.
3. You want to keep the BDD test separate from the code that the
tester writes, as the tests depend on the software architecture
(Cloud, on-premises) and the device that may support the application (desktop, mobile).
4. A database containing acceptable data might be used by either
the business user or the tester. The data might be digested by the
system testing the application and reduce the complexity of the
BDD-tests while increasing the source code to test the application
# file: ./Create_Contract_Request_for_offer.feature

As a registered user,
I want to create a Request for offer for a project

Background:
...

| url | username | user_first_last_name | project_name | project_title |
...

Table 1. BDD Definition (Full Description): Create a Request for Offer (11_Create_Contract_Request_for_offer.feature)
# file: ./Create_Contract_Request_for_offer.feature

Background:
...

Scenario Outline:
...

| url | username | user_first_last_name |
...

Table 2. BDD Definition (Brief Description): Create a Request for Offer (11_Create_Contract_Request_for_offer.feature)
def step_impl(context):
    context.browser.find_element_by_xpath(
        "//a[contains(text(),'Contract')]").click()
    context.browser.dramatic_pause(seconds=1)

    context.browser.find_element_by_xpath(
        "//a[contains(text(),'Create New')]").click()
    context.browser.dramatic_pause(seconds=1)

    context.browser.find_element_by_xpath(...)
    context.browser.dramatic_pause(seconds=2)

    Select(
        context.browser.find_element_by_id("Project")
    ).select_by_visible_text(context.testData.find(".//project_name").text)
    context.browser.dramatic_pause(seconds=1)

    context.browser.find_element_by_id("Title").clear()
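The fixed `dramatic_pause` calls in the listing above keep a suite slow and brittle; polling for a condition is usually preferable. Below is a minimal sketch of that idea with a generic `wait_until` helper; it stands in for Selenium's `WebDriverWait` and is our illustration, not part of the article's framework:

```python
import time

def wait_until(condition, timeout=10.0, poll=0.25):
    """Poll `condition` until it returns a truthy value; raise TimeoutError
    after `timeout` seconds. Avoids fixed sleeps that are either too long
    (slow suite) or too short (flaky suite)."""
    deadline = time.monotonic() + timeout
    while True:
        result = condition()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError(f"condition not met within {timeout:.1f}s")
        time.sleep(poll)

# Stand-in usage: the condition starts holding after 0.3 seconds,
# much like an element appearing in the browser.
ready_at = time.monotonic() + 0.3
print(wait_until(lambda: time.monotonic() >= ready_at, timeout=5.0))  # True
```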
Starting with the business action implementation, we observe that this
can usually be described in the same way as for UI tests (see above).
Can such tests be generated? We believe that the effort for this might
outweigh the benefits. Still, using Gherkin to describe the intended
business action and then implementing tests for it seems like a promising approach.
Conclusion
This broadly covers the area of functional testing, including UI tests. The
question obviously occurs as to what else needs to be tested, because
it is clear that the tests we describe here are not ideal candidates for
development-level (unit) tests: they would simply be too slow. In various configurations, the tests we described above would run in a pre-deployment scenario, with more or less coverage: run all screens, run
only the hand-coded screen tests, run some actions as smoke tests, etc.
We believe that the most relevant tests for the development cycle are
the ones related to the work the developer does, i.e., producing code.
This means that generated code can be excluded in principle (although
there is nothing against generating such tests). The focus is therefore on
hand-coded screens and business action implementation.
References
[1] How Google tests software (Whittaker, Arbon et al.)
[2] jQuery: http://jquery.com
[3] DevExpress: https://www.devexpress.com/
[4] Aspose: http://www.aspose.com
[5] Create, Read, Update, Delete
[6] End-to-end test:
http://www.techopedia.com/definition/7035/end-to-end-test
[7] Castro is a library for recording automated screencasts via a
simple API: https://pypi.python.org/pypi/castro
[8] Automating web applications for testing purposes:
http://www.seleniumhq.org
[9] Behave is behavior-driven development, Python style:
https://pypi.python.org/pypi/behave
[10] System Acceptance Test:
https://computing.cells.es/services/controls/sat
[11] Python Programming Language: https://www.python.org/
[12] We assume an MVC or similar architecture is being used here,
or anything that clearly separates the UI from the rest of the
system.
Referenced books
The Cucumber Book (Wynne and Hellesoy)
Application testing with Capybara (Robbins)
Beautiful testing (Robbins and Riley)
Experiences of Test Automation (Graham and Fewster)
How Google tests software (Whittaker, Arbon et al.)
Selenium Testing Tools Cookbook (Gundecha)
Referenced articles
Model-Driven Software Engineering (Brambilla et al.)
Continuous Delivery (Humble and Farley)
Domain Specific Languages (Fowler)
Domain Specific Modeling (Kelly et al)
Language Implementation Patterns (Parr)
By Martin Uhlig
The Idea
In collaboration with our Product Owner (PO), we created a concept to
organise a pure quality assurance sprint (QA Sprint) only for testing, fixing, and retesting. No feature stories had been planned for this sprint.
But how could we test the whole product in such a short time span?
It is not possible to perform the test scope needed for a convincing
and informative result in a two-week iteration, not even with all
nine team members. After all, it was necessary to cover different
configurations of the software with tests. But we were very lucky to
receive special help from five additional testers and developers who
had agreed to support our tests. So we had enough workforce. But
how to manage all these people?
From the outset it was obvious that the team could not simply be
extended. There is no way of conducting an effective Sprint Planning
or Daily Scrum with 14 team members (plus Scrum Master), and the
whole team would have had to reorganise itself. This was no practicable
solution, not even for only one sprint. As a consequence, we had to find
another way to integrate the additional testers.
But what approach would work? The answer seems simple: we needed
more teams! But a new Scrum team cannot simply be conjured out of
thin air, especially for just one sprint. So we had to distinguish between
the Scrum team and the QA teams. The Scrum team, consisting of the
former team, should basically work as usual in the best Scrum manner.
To strengthen the Scrum team we needed the additional testers,
but could not get them aboard the Scrum boat for the reasons
mentioned above. And so we had to create two new teams that were
substantially self-organized but not a fixed part of the Scrum team.
These QA teams focused on repeatedly running a given set of tests. The
test sets had been worked out and iteratively improved by the tester
in the Scrum team (supported by the whole Scrum team). The work
of the QA teams was to be strictly separated from the Scrum team to
avoid lowering the performance of the Scrum team. Therefore the
teams needed an interface to filter the exchange of information in the
direction of the Scrum team, thus avoiding an information overflow
(esp. only actual bugs, no duplicates, etc.). The Scrum team itself would
focus on reproducing and troubleshooting the bugs. We decided to
staff the interface between the teams with the PO and the tester from
the Scrum team. The only exception to the team separation was the
Daily Scrum meetings. Besides the Scrum team, one agent from each
QA team attended to give their current status.
This plan was refined so the teams were better balanced. One of the
Scrum team's developers moved to a QA team to test with them. Thus,
every QA team had three members, including one experienced tester
who headed the execution of the tests. Furthermore, the Scrum team
had two relatively new members who were not sufficiently trained
to do quick and effective bugfixing of this complex software. These
colleagues were given the mission of performing exploratory testing
apart from the test sets and reproducing bugs.
In summary, our final setup was two QA teams executing a set
of tests. These tests included all the positive and the most important
negative test cases. Every team worked with a different product configuration. The QA teams' tests were supplemented by explorative tests
from the two fresh developers in the Scrum team. Six developers within
the Scrum team took care of bugfixing and deployment. This way, two
teams were established and they supported the Scrum team without
any significant negative effects on the self-organization of the Scrum
team (shown in Figure 1). The atypical ratio between developers and
testers in favour of developers was less problematic because the PO
had enough old known issues to be fixed in her backlog, keeping the
developers busy till the testers reported the first bugs.
Figure 1. Two small QA teams supporting the Scrum team. The communication between
the teams is handled by fixed interfaces and agents (blue).
The Execution
The QA sprint started just like any other sprint for the Scrum team,
except that the Sprint Planning was shorter than usual. Only a few
known issues were presented by the PO in Sprint Planning. During the
sprint, the PO and the Scrum team's tester evaluated the bugs found
by the QA teams and added them to the Scrum team's sprint backlog
in a prioritized manner.
Besides the Scrum team's Sprint Planning, there was a kick-off meeting
for the QA teams. They were given the instruction to run the positive
and negative test cases from the committed test sets and to document
the bugs. After the execution of the whole test set (duration approx. two
days), it was reviewed and improved based on the teams' impressions. Finally,
the test sets were completed by retesting the bug fixes of the current
version. After that procedure, the test sets were ready to be executed
in the QA teams' next iteration. As a result, both QA teams always
had the same version of the product for each iteration, but with two
different configurations. Due to the fact that the latest version was
freshly installed for the QA teams in each iteration, the installation
and update mechanism of the software was repeatedly tested by the
PO and the Scrum team's tester.
To thank and to motivate the QA teams, we used some elements from
the concept of gamification. For example, we launched some awards
and small prizes, such as for the most critical bug or the QA team with
the most bugs found.
The kick-off was the official start for the QA teams. For the first complete
execution of the test cases they needed exactly the assumed time of
two days. As an additional advantage, every team had a member who
already knew the product, so the other members could benefit from
their experience and climb the learning curve quickly.
On the first day, the Scrum team worked on the previously known issues until the first bugs were delivered by the QA teams. Additionally,
the two exploratory testers were able to produce some interesting insights during their tests, which could be transformed into reproducible
bugs. After the first two days, the first bugs and issues were fixed. The
retests for these and other improvements (mainly initiated by the QA
teams) were taken into the test sets for the next test iteration. At the
Daily Scrum, the agents from the QA teams reported on their teams'
progress and took important information back to the QA teams as planned.
The Conclusion
The QA sprint was a major success for the project. On the last day of
the sprint, the PO was able to successfully perform the acceptance
test and release the product. As a result of this QA sprint, the team
managed to considerably boost the quality of the product.
The Scrum team and the QA teams appreciated the nature of the
cooperation between the teams. The QA teams benefited from the
clear interfaces because they were given peace of mind whenever
they needed something. Any questions from the QA teams could be
answered very quickly and validly without spending a long time looking
for the correct contact person. On the other hand, the members
of the Scrum team had their workload significantly reduced as a consequence of this interface between the teams. Thus, they could focus
on their work and were only given relevant and revised information
by the QA teams.
However, we underestimated the effort required to create the initial
version of good test plans. The same applied to the time to filter, evaluate, and revise the bugs before we submitted them to the Scrum team.
But we still managed this because we could count on the QA teams,
and we knew that we could waive the retests within the Scrum team,
apart from the usual ones, because the QA teams did the job.
That everything worked out as intended is a big credit to our PO, who
is very open-minded and has a quality assurance background. She
was always open to any suggestions and comments. In other projects,
and with another PO, this concept would certainly have needed more
negotiation with the PO and other stakeholders. Additionally, we could
benefit from the ability to fall back on our capable and motivated colleagues who supported us in the QA teams.
Finally, I can report that quality assurance in the project is now
stable and the product has been successfully established
on the market. Currently, a large and good selection of automated tests
is available, which run in continuous integration. The experiences of this QA sprint have certainly had an important influence on
the project's success story.
Performance
Column by Alex Podelko
Exploratory
Performance Testing
It looks like exploratory performance testing has started to attract
some attention and is getting a mention here and there. Mostly, I
assume, due to the growing popularity of functional exploratory testing[1]. A proponent of exploratory testing probably would not like my
use of the word "functional" here, but not much has been written, for
example, about exploratory performance testing, and even what has
been written often refers to different things.
There have been attempts to directly apply functional exploratory
testing techniques to performance testing. SmartBear blog posts[2, 3]
contrast exploratory performance testing with static traditional load
testing. My view is probably closer to Goranka Bjedov's understanding
as she described it back in 2007[4].
It was clear to me that a traditional, waterfall-like approach to performance testing is very ineffective and error-prone. I presented a more
agile/exploratory approach to performance testing in a traditional
waterfall software development environment at CMG in 2008[5]. I intended to apply the original principles of the Manifesto for Agile Software
Development[6] (valuing individuals and interactions over processes
and tools, working software over comprehensive documentation, customer collaboration over contract negotiation, and responding to change
over following a plan) to performance engineering.
Performance testing in projects utilizing specific agile development
methodologies is a separate topic. Having become more involved in the
agile development environment, I added some aspects of it to my presentation at the Performance and Capacity 2013 conference by CMG[7].
The words "agile" and "exploratory" are periodically and loosely used in
relation to performance testing, but it does not look like we have any
accepted definition. Both terms are, in a way, antonyms of traditional,
waterfall-like performance testing, so their meanings may somewhat
overlap in certain contexts. I explained my view of using the word
"agile" for performance testing in the above-mentioned presentations.
Now it is time to contemplate the use of the word "exploratory" in the
context of performance testing.
realize it. As you tune the number of threads, the system's behavior will
change drastically, and you would not know about it from the beginning.
(Well, this is a simple example, and an experienced performance
engineer may tune such obvious things from the very beginning, but,
at least in my experience, you will always have something to tune or
optimize that you had no idea about in the beginning.)
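That tuning loop can be mimicked in a few lines: step up the number of threads and watch where throughput stops scaling. This is a sketch against a simulated workload; the 5 ms sleep stands in for a real request, and `measure_throughput` is our illustration, not taken from any specific tool:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_request():
    time.sleep(0.005)  # simulated 5 ms service time; replace with a real call

def measure_throughput(num_threads, duration=0.4):
    """Drive fake_request with num_threads workers for `duration` seconds
    and report completed requests per second."""
    deadline = time.monotonic() + duration
    def worker():
        count = 0
        while time.monotonic() < deadline:
            fake_request()
            count += 1
        return count
    with ThreadPoolExecutor(max_workers=num_threads) as pool:
        futures = [pool.submit(worker) for _ in range(num_threads)]
        return sum(f.result() for f in futures) / duration

# Explore: ramp the load and look for the point where the curve bends.
for threads in (1, 2, 4, 8):
    print(threads, "threads:", round(measure_throughput(threads)), "req/s")
```

Against a real system the interesting observations are exactly the unexpected ones: the thread count where throughput flattens or response times spike is rarely where you predicted it at the start.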
References
[2] Ole Lensmar. 2012. Why Your Application Needs Exploratory Load Testing Today. http://blog.smartbear.com/loadui/why-your-application-needs-exploratory-load-testing-today
By Klaus Haller
discount retailer in the USA. Steinhafel was the first CEO of a major
corporation to lose his job due to a data leak[2]. Thus, managers cannot
rely simply on a statement from their Chief Information Officer ("Security? Everything is fine!") without risking the company's and their
own personal future. Here, ISO 27001 comes into play. It brings together,
first, a list of best practices for information security and, second, an
auditing and certification industry. The best practices prevent basic
mistakes or leaving security topics completely unaddressed. External
auditors validate whether the best practices have been implemented.
This external validation gives CEOs and stakeholders extra confidence.
Misunderstanding 2: Information security fights (mainly) hackers and malware coming from the outside
Outside hackers and malware pose a threat to every organization, but
employees pose a risk as well. Humans make mistakes, even if they
Figure 1 shows the ISO 27000 standard series:
- Vocabulary standard: ISO 27000 (overview and terminology)
- Requirement standards: ISO 27001 (main section and Appendix A) and ISO 27006 (requirements for auditing and certification bodies)
- Guideline standards: ISO 27002 (code of practice for information security controls), ISO 27003 (implementing information security management systems), ISO TR 27008 (auditing of information security management systems), and ISO 27005 (risk management)
- Sector-specific guideline standards: ISO 27011 (telecom sector) and ISO TR 27015 (financial services)
Figure 1. Overview of the 27000 Standard Series. This article mainly deals with ISO 27001 Appendix A with interpretations derived from ISO 27002.
majority of the 114 controls from Appendix A. Some of them are only
relevant to developers, testers, and change managers. They have to
provide solutions for the controls (see Figure 2), for which they can
rely on the next sections.
Figure 2 shows the decision flow: does a control define an obligation for developers, testers, and change managers? If not, nothing needs to be done by them. If it does and a company-wide solution exists, developers, testers, and release managers rely on the company-wide solution from the IT department, IT Security, Legal & Compliance, etc. If it does and no company-wide solution exists, developers, testers, and change managers must provide a solution themselves (the focus of this article).
Figure 2. ISO27001 Appendix A Standards and the Need for Engineering, Testing and
Change Teams to Come Up with a Solution on their Own.
customer data is sensitive, who can access it, etc. Testers and engineers
are not involved in the classification, although the policies can impact
them. They might forbid, e.g., the storage of credit card numbers of
clients in test systems.
Engineering-owned assets have to be classified as well. Potentially, IT has
to drive this. The main assets are source code and documentation, such
as requirements, specifications of new product features, architectural
documents, trading algorithms of hedge funds, etc., and they can be
as critical as production data. ISO 27001 does not declare any asset to
be sensitive or not; it just demands classification.
The policies for handling production data and engineering-owned
assets impact tool decisions. In the last few years, outsourced requirements, test, and project management tools have become popular.
Software-as-a-service is thriving. With ISO 27001, organizations must
ensure that projects store data in externally hosted systems only if this
does not contradict information security policies.
ISO 27001 also demands secure development environments for the
complete development cycle (control A.14.2.6). The need for confidentiality, availability, and integrity has a broad impact on access control
mechanisms, the hiring and contracting of developers and testers,
and backup strategies.
A highly critical asset in development and test environments is the
test data. Many applications, especially business applications, incorporate databases. Testers need suitable data in the databases in
test environments, and ISO 27001 control A.14.3.1 demands that the test
data is protected. When looking at the ISO 27002 guideline, it is clear
that the standard reflects old-style test data management. In other
words, test data comes from production. The focus in ISO 27002 is to
mitigate the risks associated with the use of production data, such
as the ability to audit the copy process and strict access rules for test
environments. The trend towards synthetic test data in Europe is not
reflected (see[5] for an in-depth discussion on test data management).
However, ISO 27002 is not normative. Organizations can implement
ISO 27001 in their own way. Especially when organizations test with
synthetic data, many ISO 27002 ideas are obsolete.
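When an organization does move to synthetic data, even a very small generator removes the need for most copy-from-production controls. A minimal sketch; the field names, the name lists, and the 4111 test-card prefix are illustrative assumptions, not from the standard:

```python
import random

def synthetic_customers(n, seed=42):
    """Generate reproducible, fully synthetic customer records. No field
    originates in production, so audited copy processes and locked-down
    test environments for production copies are not needed."""
    rng = random.Random(seed)  # fixed seed -> reproducible test data
    first = ["Ada", "Max", "Lena", "Omar", "Yuki"]
    last = ["Keller", "Novak", "Rossi", "Tanaka", "Mbeki"]
    return [
        {
            "id": 100000 + i,
            "name": f"{rng.choice(first)} {rng.choice(last)}",
            # syntactically valid but fictitious card number
            "card": "4111" + "".join(str(rng.randint(0, 9)) for _ in range(12)),
        }
        for i in range(n)
    ]

print(len(synthetic_customers(3)))  # 3
```

Seeding the generator keeps test runs reproducible while still guaranteeing that no sensitive production value ever reaches a test environment.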
Diagram content: security principles, including rules for data transfer, apply throughout development (developers work on development systems), testing (testers work on test systems; user acceptance testers are organized; security tests are performed), and usage on production systems.
Figure 3. How ISO 27001 Influences the Software Development Process: V-model (Left) and Scrum (Right)
References
[1] ISO/IEC 27001:2013, Information technology - Security techniques - Information security management systems - Requirements, ISO, Geneva, Switzerland, 2013
[2] The Associated Press: Target CEO Gregg Steinhafel resigns following last year's security breach, 5.5.2014
[3] Wikipedia: Information security, http://en.wikipedia.org/wiki/
Information_security, last retrieved July 5, 2014
[4] K. Dilanian: A spy world reshaped by Edward Snowden, Los Angeles Times, 22.12.2013
[5] K. Haller: Test Data Management in Practice, Software Quality
Days 2013, Vienna, Austria, 2013
[6] K. Haller: How Scrum Changes Test Centers, Agile Records, August
2013
[7] M. Fowler, M. Foemmel: Continuous integration
http://ww.dccia.ua.es/dccia/inf/asignaturas/MADS/2013-14/
lecturas/10_Fowler_Continuous_Integration.pdf, 2006, last
retrieved July 5, 2014
[8] J. Turnbull: What DevOps means to me, http://www.kartar.net/
2010/02/what-devops-means-to-me/, last retrieved July 5, 2014
Conclusion 2: The organization must enforce the policies in all projects and have evidence.
ISO 27001 expects policies to be enforced consistently and to have auditable evidence. In other words, there must be a process organization
and all employees must be continuously educated and motivated to
LinkedIn: www.linkedin.com/pub/klaus-haller/48/a2b/798
By Jacqueline Vermette
2. Has an ability to learn. Testers may be asked to go from a limited understanding of a product to mastering that product in a very short
timeframe. They must be able to memorize details and understand
each module's concepts while maintaining a general overview of
the product. Testers must be willing to review and learn all the
expected system behavior by studying the technical documentation and spending time with the main analyst. I remember one
particularly complex application for an aluminum smelter where
very few people had an overview of the whole business process at
the beginning. The management was not too sure whether the
test team would be able to test adequately. But by reading all the
available documentation and asking questions and more questions, we did a great job. Never be shy about asking questions in
order to understand details about the application, especially if the
specifications are not clear enough.
3. Can think outside the box, and takes into account assumptions
and concrete facts. Not all conditions are necessarily stated in the
functional specifications. It is like when you buy a car: you assume
that the hood can easily be opened to check the engine. This criterion is not mentioned in the car's features, but everybody expects
it. Testers should try to test unwritten features. Some unwritten
characteristics could have a significant impact on the quality of
the final product, hence the need to read between the lines. For
example, the system can support some required functionalities,
but what would happen if I tried something a little different? Does
the system support it? Does it crash? Does it corrupt data?
4. Cultivates a keen sense of observation and notes small details.
Their perfectionism can unfortunately annoy programmers and
developers, but good testers can find the biggest bugs in the least
likely situations. If a sequence of system operations is available to
the user, why assume that the user will never perform it?
Why does the screen have labels with different fonts? Report fields
that are not properly aligned or inconsistent use of capitalization are
other examples of small details that can negatively impact the quality of the product. Some people just notice this type of error more
than others. They are probably like that in all their daily activities.
5. Cares deeply about the final product. They believe in their mission,
which is to protect the company's reputation. They love testing
and are proud to find bugs. Finding a bug is highly satisfying and
finding an especially tricky one surely makes their day.
6. Is organized yet flexible. They pay attention to the manuals and
conduct the tests systematically. This is very important in order to
reproduce a bug. A bug that cannot be properly detailed in order to
be reproduced cannot be corrected. They can also adapt to changes
during the course of a project and are willing to repeat tests over and
over, if necessary. After a bug correction, a test case might need to
be modified and re-executed to validate the quality of the system.
Even with all these attributes, no one can be a good tester if they
cannot bring a positive influence to a development team. A tester
must provide positive feedback, be able to motivate team members to
improve the quality of their work and, in general, manage each team
member's self-respect.
The tester's role is in constant flux. To stay competitive in today's
market, companies must now produce ever more complex software
solutions, at an ever faster pace, and at a lower cost. Test management
tools, system simulation, and automatic test case execution are now
a must. We must adapt to these changes by developing our programming abilities or by working closely with developers. Promoting more
complete unit tests and testing as early as possible with the developers
helps greatly to reduce errors early in the test process.
ager with 25 years of experience in quality assurance, quality control, functional analysis, and
programming. She also worked to set up quality
assurance and control methodologies in order
to ensure the quality of deliverables for manufacturing industry projects. Jacqueline is a certified software tester (CSTE) and is currently working at Keops Technologies.
By Wolfgang Gottesheim
In a traditional organisation, handoffs between developers and operators cause friction as these groups follow different goals. Developers
and business drivers within the organization want customers to use
new features and benefit from other improvements, while operators
seek stability and want to provide a stable environment.
One of the groundbreaking books around DevOps, The Phoenix
Project by Gene Kim, Kevin Behr and George Spafford, describes the
practice through The Three Ways of systems thinking, amplifying
feedback loops, and providing a culture of continuous experimentation and learning. Systems thinking means to focus on overall value
streams and to make sure that defects (for example, broken builds),
are not passed on to downstream units (like the Ops department).
Amplifying the feedback loops translates to providing proper communication channels between Dev and Ops, and to achieve this without
DevOps: A Definition
DevOps aligns business requirements with IT performance, and recent
studies have shown that organizations adopting DevOps practices have
a significant competitive advantage over their peers. They are able to
react faster to changing market demands, get out new features faster,
and have a higher success rate when it comes to executing changes. The
goal of DevOps is to adopt practices that allow a quick flow of changes
to a production environment, while maintaining a high level of stability, reliability, and performance in these systems. However, the term
nowadays covers a wide range of different topics and consequently
means different things to different people.
Figure 1. Running tests against the production system gives a better input for capacity planning and uncovers heavy load application issues
Figure 2. Automated Tests running in CI also help to detect performance regressions on metrics such as # of SQL calls, page load time, # of JS files or images, etc.
The more traditional testing teams are used to executing performance and scalability tests in their own environments at the end of a
milestone. With less time for extensive testing, their test frameworks
and environments have to become available to other teams to make
performance tests a part of an automated testing practice in a Continuous Integration environment. The automatic collection and analysis of
performance metrics ensures that all performance aspects are covered.
This once again entails defining a set of performance metrics that is
applied across all phases, as this is beneficial to identifying the root
cause of performance issues in production, testing, and development
environments.
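An automated regression check on such metrics can be sketched in a few lines. The metric names, values, and the 10% tolerance below are illustrative assumptions, not values from any particular tool:

```python
# Sketch of an automated performance-regression check in CI: compare
# metrics measured for a build against an accepted baseline.
baseline = {"sql_calls": 25, "page_load_ms": 800, "js_files": 12}

def check_regressions(measured, baseline, tolerance=0.10):
    """Return the metrics that regressed more than `tolerance` vs. baseline."""
    return {m: v for m, v in measured.items()
            if m in baseline and v > baseline[m] * (1 + tolerance)}

measured = {"sql_calls": 31, "page_load_ms": 790, "js_files": 12}
print(check_regressions(measured, baseline))  # {'sql_calls': 31}
```

A shared baseline like this is one way to make the agreed metrics executable, so every team sees the same pass/fail signal.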
Conclusion
The first step in adopting a performance culture is to enable a shared
understanding of performance through a set of key performance
metrics that are accepted, understood, and measured across all teams.
These performance metrics allow all teams to talk about performance
in the same way, and reduce the guesswork and finger-pointing often associated with troubleshooting performance problems. Once
these metrics have been defined, their automated measurement and
analysis is the next step that makes performance a part of a DevOps
practice.
By Danilo Berta
1-wise testing
When the number of combinations is high, it is at least possible to verify that each individual value of every variable is given as input to the program under test at least once. In other words, if the variable A can take the values A1, A2, A3, at least a first test must be executed in which A=A1, a second test in which A=A2, and a third test in which A=A3; the same goes for the other variables. This type of test provides so-called 1-wise coverage, whose meaning we will see shortly. In practice, we have the following table:
Table 1. Variables and their values (A has 4 values, B has 3, C has 2):

A    B    C
A1   B1   C1
A2   B2   C2
A3   B3
A4

Table 2. All 24 combinations:

# TEST 1-4:   A1;B1;C1  A1;B1;C2  A1;B2;C1  A1;B2;C2
# TEST 5-8:   A1;B3;C1  A1;B3;C2  A2;B1;C1  A2;B1;C2
# TEST 9-12:  A2;B2;C1  A2;B2;C2  A2;B3;C1  A2;B3;C2
# TEST 13-16: A3;B1;C1  A3;B1;C2  A3;B2;C1  A3;B2;C2
# TEST 17-20: A3;B3;C1  A3;B3;C2  A4;B1;C1  A4;B1;C2
# TEST 21-24: A4;B2;C1  A4;B2;C2  A4;B3;C1  A4;B3;C2
Now, in this particular case, such a number of tests can still be affordable. However, if we consider the general case of k variables X1, X2, ..., Xk, the first accepting n1 possible values, the second n2 possible values, and the k-th nk possible values, the total number of combinations is equal to n1·n2·...·nk. Such a value, even for low values of n1, n2, ..., nk, is a high number. For example, if k=5 and (n1=3; n2=4; n3=2; n4=2; n5=3) we get a number of combinations equal to 3·4·2·2·3 = 144. That is quite a large number of tests to perform if you want to ensure complete coverage of all combinations.
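The arithmetic can be checked mechanically. A minimal Python sketch, using the five domain sizes from the example:

```python
from itertools import product
from math import prod

# Domain sizes from the example: k = 5 variables with n1..n5 values each.
sizes = [3, 4, 2, 2, 3]

# Total exhaustive combinations: n1 * n2 * ... * nk.
total = prod(sizes)
assert total == 144

# Enumerating the full cartesian product gives the same count.
all_tests = list(product(*(range(n) for n in sizes)))
assert len(all_tests) == 144
```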
Table 3. Nine test cases, each fixing one value (* = any permitted value):

# TEST   A    B    C
1        A1   *    *
2        A2   *    *
3        A3   *    *
4        A4   *    *
5        *    B1   *
6        *    B2   *
7        *    B3   *
8        *    *    C1
9        *    *    C2
A possible first reduction is to set a value for the first variable and assign
a random (but permitted) value to the other variables (stated with *
in Table 3) and proceed in this way for all the variables and values. In
this way, we reduce the test cases from 24 to just 9. It is still possible
to further reduce the number of test cases, considering that instead
of * you can put a value of the variable which can then be excluded
from the subsequent test cases.
Put into practice, for test case #1 in place of B=* put B=B1, instead of
C=* put C=C1 and remove test case #5 and test case #8, which are now
both covered by test case #1.
Test case #2: in place of B=* put B=B2, and in place of C=* put C=C2,
and erase test cases #6 and #9, both of which are now covered by
test case #2.
Test case #3: instead of B=* put B=B3, and in place of C=* insert any
value C1 or C2, considering that the values of the variable C equal to C1
and C2 have already been covered by test cases #1 and #2; we can let
C=* and postpone the choice of whether to enter C1 or C2. Now, remove
test case #7, since B=B3 is now covered by test case #3.
Having understood the mechanism, there remains only test case #4,
which covers A=A4; we can let B=* and C=*, postponing the choice of
what to actually select when we will really perform the test.
The symbol * represents "don't care": we can put any value in its place and the coverage of the test set does not change, and the values of all variables will still be used at least once. The values chosen for the * positions are simply covered more than once.
The final minimized test set for 1-wise coverage is the following:
# TEST   A    B    C
1        A1   B1   C1
2        A2   B2   C2
3        A3   B3   *
4        A4   *    *
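The reduction described above can be sketched in a few lines of Python. The zip-based construction below is one way to see why max(4, 3, 2) = 4 tests always suffice for 1-wise coverage; it is an illustration, not the article's tooling:

```python
from itertools import zip_longest

# Domains from Table 1.
A = ["A1", "A2", "A3", "A4"]
B = ["B1", "B2", "B3"]
C = ["C1", "C2"]

def one_wise(*domains):
    """Minimal 1-wise test set: line the value lists up side by side and
    fill exhausted slots (the "don't care" positions) with any value."""
    tests = []
    for row in zip_longest(*domains):
        tests.append(tuple(v if v is not None else d[0]
                           for v, d in zip(row, domains)))
    return tests

suite = one_wise(A, B, C)
assert len(suite) == 4
# Every value of every variable appears at least once:
for i, d in enumerate((A, B, C)):
    assert {t[i] for t in suite} == set(d)
```

The suite has exactly as many tests as the largest domain, matching the four-row table above.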
How many pairs of values must a 2-wise test set cover? The number of pairs of variables is given by the binomial coefficient

N! / (K! (N-K)!)

which, for N=3 variables taken K=2 at a time, gives 3!/(2!(3-2)!) = 3; the three pairs are {(A, B), (A, C), (B, C)}. For each pair of variables, the number of pairs of values is the product of the numbers of their values:

# VARIABLES    VALUES
(A,B)          4·3=12
(A,C)          4·2=8
(B,C)          3·2=6
GRAND TOTAL    12+8+6=26

Hence, the total of all the pairs of values of the variables A, B, and C whose values are reported in Table 1 is equal to 26, and they are all printed in the following table:
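The same counting can be done programmatically. A small Python sketch using the Table 1 domain sizes:

```python
from itertools import combinations
from math import comb

# Number of values per variable (Table 1): A has 4, B has 3, C has 2.
sizes = {"A": 4, "B": 3, "C": 2}

# C(3, 2) = 3 pairs of variables: (A,B), (A,C), (B,C).
assert comb(3, 2) == 3

# For each pair of variables, the number of value pairs is the
# product of the two domain sizes.
pair_counts = {(x, y): sizes[x] * sizes[y]
               for x, y in combinations(sizes, 2)}
assert pair_counts == {("A", "B"): 12, ("A", "C"): 8, ("B", "C"): 6}

total = sum(pair_counts.values())
assert total == 26
```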
#    A, B     A, C     B, C
1    A1,B1    A1,C1    B1,C1
2    A1,B2    A1,C2    B1,C2
3    A1,B3    A2,C1    B2,C1
4    A2,B1    A2,C2    B2,C2
5    A2,B2    A3,C1    B3,C1
6    A2,B3    A3,C2    B3,C2
7    A3,B1    A4,C1
8    A3,B2    A4,C2
9    A3,B3
10   A4,B1
11   A4,B2
12   A4,B3

# PAIRS: 12 + 8 + 6 = 26 TOTAL
Why should you consider a test set with 2-wise coverage? Is it not enough to consider a test set with 1-wise coverage? Here we enter into a thorny issue, on which opinions are varied, some concordant and some discordant.
Below is the incipit from the site www.pairwise.org[1]:
Pairwise (a.k.a. all-pairs) testing is an effective test case generation technique that is based on the observation that most faults are
caused by interactions of at most two factors. Pairwise-generated
test suites cover all combinations of two, therefore are much smaller
than exhaustive ones yet still very effective in finding defects.
We also mention the opinion of James Bach and Patrick J. Schroeder about the pairwise method, from "Pairwise Testing: A Best Practice That Isn't", available from http://www.testingeducation.org/wtst5/PairwisePNSQC2004.pdf[2]:
What do we know about the defect removal efficiency of pairwise
testing? Not a great deal. Jones states that in the US, on average,
the defect removal efficiency of our software processes is 85%[26].
This means that the combinations of all fault detection techniques,
including reviews, inspections, walkthroughs, and various forms of
testing remove 85% of the faults in software before it is released.
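Whatever position one takes in that debate, the mechanics of pairwise generation are easy to sketch. The greedy algorithm below is a simplified illustration, not the algorithm used by PICT, AllPairs, or any other tool mentioned in this article:

```python
from itertools import combinations, product

def pairwise_suite(domains):
    """Greedy all-pairs generation: repeatedly pick the candidate test
    that covers the most not-yet-covered value pairs (a sketch only)."""
    n = len(domains)
    uncovered = {((i, a), (j, b))
                 for i, j in combinations(range(n), 2)
                 for a, b in product(domains[i], domains[j])}
    candidates = list(product(*domains))
    suite = []
    while uncovered:
        best = max(candidates,
                   key=lambda t: sum(((i, t[i]), (j, t[j])) in uncovered
                                     for i, j in combinations(range(n), 2)))
        suite.append(best)
        for i, j in combinations(range(n), 2):
            uncovered.discard(((i, best[i]), (j, best[j])))
    return suite

domains = [["A1", "A2", "A3", "A4"], ["B1", "B2", "B3"], ["C1", "C2"]]
suite = pairwise_suite(domains)

# Independently verify that all 26 value pairs are covered:
required = {((i, a), (j, b)) for i, j in combinations(range(3), 2)
            for a, b in product(domains[i], domains[j])}
covered = {((i, t[i]), (j, t[j])) for t in suite
           for i, j in combinations(range(3), 2)}
assert required <= covered
assert 12 <= len(suite) < 24  # far fewer than the 24 exhaustive tests
```

On the Table 1 example, full pair coverage is reached with roughly half the exhaustive 24 tests, which is the size reduction pairwise advocates point to.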
Reverse Combinatorial Test Problem: Given a test set for which you do not know the method of generation (if any), calculate what percentage of n-wise coverage the test set ensures, with n between 1 and the number of variables in the test set.
An example: tests generated by automatic tools for which you have low
or almost zero process control, or when the test cases are generated
by the automatic flows that feed interfaces between different systems
(think of a system that transmits accounting data from a system A
to B); test data are in general excerpts from historical series over
which you have no control.
For test scenarios in some way related to the inverse combinatorial problem, support tools are not readily available. The only tool I found is NIST CCM, in alpha phase at the time I am writing this article. If you like, you can request a copy of the software: go to http://csrc.nist.gov/groups/SNS/acts/documents/comparison-report.html as previously reported[3].
In the following we describe a set of tools called Combinatorial Testing Tools (executable under Windows but, if needed, not difficult to port to Unix/Linux) that enables the (quasi-)minimal test set to be extracted and the coverage of a generic test set to be calculated, using systematic coverage-calculation algorithms that start from all n-tuples of the variables' values. Such algorithms should be categorized as brute-force algorithms and should be used (on a normal supermarket-bought PC) only if the number of variables and values is not too high.
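The reverse computation itself is straightforward to express. A brute-force Python sketch that mirrors the approach (not the actual CTT code):

```python
from itertools import combinations, product

def nwise_coverage(domains, test_set, n=2):
    """Brute-force n-wise coverage of an arbitrary test set (the
    'reverse' problem): percentage of required n-tuples covered."""
    k = len(domains)
    required = {(idx, vals)
                for idx in combinations(range(k), n)
                for vals in product(*(domains[i] for i in idx))}
    covered = {(idx, tuple(t[i] for i in idx))
               for t in test_set
               for idx in combinations(range(k), n)}
    return 100.0 * len(required & covered) / len(required)

domains = [["A1", "A2", "A3", "A4"], ["B1", "B2", "B3"], ["C1", "C2"]]
# The minimal 1-wise suite from earlier, with the * slots filled in:
suite = [("A1", "B1", "C1"), ("A2", "B2", "C2"),
         ("A3", "B3", "C1"), ("A4", "B1", "C1")]
assert nwise_coverage(domains, suite, n=1) == 100.0
assert round(nwise_coverage(domains, suite, n=2), 1) == 42.3  # 11 of 26 pairs
```

As the last assertion shows, a set that is complete at the 1-wise level can cover well under half of the value pairs, which is exactly what the reverse analysis is meant to expose.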
Overview of CTT
Tools in the Combinatorial Testing Tools product try to provide support
to help solve both problems of combinatorial testing previously stated.
Combinatorial Testing Tools do not intend to compete with the existing tools aimed at solving the direct problem of combinatorial testing, such as Microsoft PICT, James Bach's AllPairs, NIST's tools, or several other commercial and non-commercial tools already on the market. Those tools implement algorithms definitely more effective than CTT's and therefore should be preferred (I repeat) for solving the direct problem.
Regarding the reverse problem of combinatorial tests, to my knowledge there are no tools on the market (except NIST CCM in alpha release), and CTT then attempts to provide a very first solution to the reverse problem, to be improved over time, when we better understand the logic and the rules behind combinatorial testing and the determination of the minimal test sets related to it.
We would like to recall that:
a. Solving the direct problem means determining the smallest
possible test set with a level of WISE coverage agreed (usually
WISE=2) from the set of variables and values.
b. Solving the reverse problem means determining the level of
coverage of a given test set with respect to a reference WISE level
(also here usually WISE=2)
The tools should be categorized as follows:

a. First level tools: batch DOS scripts providing an immediate response to standard scenarios that typically occur in test projects requiring combinatorial techniques. First level scripts were thought of as wrappers around the second level executables, in order to simplify the end user's life with a set of simple commands that enable you to quickly get a number of items of standard information.

Tool runW
This is the first tool that must be run before all the other tools. It generates all the n-tuples corresponding to the Wise value passed as input (runW = run Wise).

Tool runR
Extracts a non-minimal test set, still smaller than the maximum test set, with guaranteed coverage equal to the Wise passed as input (runR = run Reduce).

Tool runCC
Computes the input test set coverage with respect to the WISE passed as input (runCC = run Calculate Coverage).

Tool runTSF
Gets the minimal test set with guaranteed coverage equal to the Wise passed as input (or equal to the coverage of the test set passed as input, if less than WISE), extracting the test cases from the file of the test set passed as input and excluding n-tuples already covered by the partial test set input file (runTSF = run Test Set Forbidden).

Tool runC
Applies constraints to an n-tuple file (or test set file) passed as input.

In this article we cannot go into detail on the behavior of the tools; the above is a short description of the first level tools. More information can be found in the tools' user manual, which is downloadable (with the scripts) from: http://www.opensource-combinatorial-softtest-tools.net/index.php/download

Anyway, here is a useful summary that links together the first level tools (runT, runsT, runTS, runsTS, runW, runCC, runsCC, runTSF, runsTSF, runC, runR) and the second level executables they wrap:

1. calcolaCopertura.exe
2. calcolaCoperturaSlow.exe
3. Combinazioni_n_k.exe
4. contrLimWise.exe
5. count_line.exe
6. creaFileProdCart.exe
7. creaStringa.exe
8. generaTestSet.exe
9. generaTestSetSlow.exe
10. ProdCart.exe
11. reduceNple.exe
12. runConstrains.pl
13. uniqueRowFile.exe

Table 7. Mapping of First Level vs. Second Level Tools (Wrapping Map)
In the following we describe the second level tools, which are a little harder to use but more versatile; they may be useful to experienced users for managing more complex scenarios.
Executable Combinazioni_n_k
Extracts all combinations of K elements from a string of length N passed as input.
Executable ProdCart.exe
Generates all possible combinations of the values of variables as defined in the input file.
Executable reduceNple.exe
Squeezes as many n-tuples as possible contained in the input file,
replacing the values * with specific values of the variables and thus
creating a test set from the file of n-tuples. While not the test set
minimum, it is reduced compared to the test set maximum (coincident
with all n-tuples). The number of records depends, in an unknown way, on the sorting of the n-tuples input file. There is certainly a sorting of the file's rows for which the output test set contains a minimum number of test cases with guaranteed WISE coverage, but finding this sorting is not computationally feasible, as it is too onerous.
There are six other executables that do not provide direct support to the generation and/or operation of the test sets, but are predominantly used by the DOS batch tools to perform secondary operations that are impossible, or at least very complex, to do directly from DOS. These utilities may also be of some use, even if they are not to be considered test tools tout court. We have not described those utilities in this article.
Notes of Appreciation
[1] Many thanks to Jacek Czerwonka, owner of the website www.pairwise.org, who allowed me to reprint its incipit. By the way, he wrote to me about some evolution on the subject
of pairwise vs. random testing that you can find in the article An Empirical Comparison of Combinatorial and Random Testing available at
the link: http://www.nist.gov/customcf/get_pdf.cfm?pub_id=915439
written by: Laleh Sh. Ghandehari, Jacek Czerwonka, Yu Lei, Soheil
Shafiee, Raghu Kacker, and Richard Kuhn.
[2] Many thanks to James Bach, owner of the website
www.satisfice.com who allowed me to reprint part of the article
"Pairwise Testing: A Best Practice That Isn't" by James Bach and Patrick J. Schroeder, available from http://www.testingeducation.org/
wtst5/PairwisePNSQC2004.pdf.
[3] Many thanks to Dr. Richard Kuhn from NIST who kindly sent me a
copy of the NIST CCM tools. We would like to remind you that you
should request a copy: go to http://csrc.nist.gov/groups/SNS/acts/
documents/comparison-report.html as previously reported.
Conclusion
In the article we gave an overview of a test methodology that uses
combinatorial calculus to find test cases for a software component,
knowing its inputs. Generally speaking, a combinatorial
technique like this generates too many test cases, so we need to define
a so-called N-wise coverage (with N from 1 to the number of input
variables), select a value of N (usually N=2, pairwise testing) and extract a subset of test cases with the guarantee of N-wise coverage.
This is the Direct Combinatorial Test Problem and there are a lot
of wonderful tools on the market that solve the problem very quickly.
We then dealt with the Reverse Combinatorial Test Problem: if you
have a test set built upon the N inputs of a software component
about which you know nothing, what percentage of N-wise coverage
does the test set ensure? I found just one tool on the market that
addresses this problem: NIST CCM, which is still in alpha phase at the
time I am writing this article. In the article I give an overview of the CTT (Combinatorial Testing Tools) I developed in C++ which, using a brute-force approach, tries to give a very first response to the Reverse Combinatorial Test Problem.
the software field as an employee of a consultancy company working for a large number of
customers. Throughout his career, he has worked
in banks as a Cobol developer and analyst, for
the Italian treasury, the Italian railways (structural and functional
software testing), as well as for an Italian automotive company, the
European Space Agency (creation and testing of scientific data files
for the SMART mission), telecommunication companies, the national Italian television company, Italian editorial companies, and
more. This involved work on different kinds of automation test projects (HP Tools), software analysis, and development projects. With
his passion for software development and analysis, which is not just limited to testing, he has written some articles and a software
course for Italian specialist magazines. His work enables him to deal
mainly with software testing; he holds both the ISEB/ISTQB Foundation Certificate in Software Testing and the ISEB Practitioner Certificate; he is a regular member of the British Computer Society.
Currently, he lives in Switzerland and works for a Swiss company.
Website: www.bertadanilo.name
But there is (at least) one problem whose solution is still not known. For software with N inputs, what is the minimum test set (if it exists) that guarantees the N-level coverage? The solution is known just for a trivial case: for 1-wise coverage it is always equal to the number of values of the variable with the most values.
do not have them at all, some teams care about particular resource
details, and some do not know these details.
To satisfy these contradictory conditions the following approach was
created:
A resource has a name, a pool it can belong to, selection attributes, and
ownership attributes. The selection attributes enable resources to be
identified in order to be selected at the execution stage. The ownership
attributes, if defined, assign the resource to a user or group of users.
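A minimal sketch of this resource model; the attribute names and values below are illustrative assumptions, not the framework's real schema:

```python
from dataclasses import dataclass, field

@dataclass
class Resource:
    """A resource with a name, an optional pool, selection attributes
    (matched at execution time) and optional ownership attributes."""
    name: str
    pool: str = ""
    selection_attrs: dict = field(default_factory=dict)
    ownership_attrs: dict = field(default_factory=dict)  # user/group binding

def select(resources, **wanted):
    """Return resources whose selection attributes satisfy the request."""
    return [r for r in resources
            if all(r.selection_attrs.get(k) == v for k, v in wanted.items())]

pool = [Resource("board-1", "lab-a", {"arch": "arm", "ram_gb": 2}),
        Resource("board-2", "lab-a", {"arch": "x86", "ram_gb": 4})]
assert [r.name for r in select(pool, arch="arm")] == ["board-1"]
```

Keeping selection separate from ownership lets teams that do not care about resource details simply omit the attributes.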
We looked at several existing CI test systems, but none seemed suitable for our situation of testing many configurations of embedded
systems. We decided to build our own framework using MySQL and a
web interface, with the following goals:
- Manage and allocate pools of resources, grouped by configuration.
- Have the ability to dynamically assemble multiple test environments for each build, release, and application.
- Create testware standards for test tools, or wrappers for all test tools, to make them look alike (use the same configuration data and convert their results to the standard hierarchical CITF format).
- Provide common interfaces for debugging and reporting the build status.
- Design a quick and easy way to define new software releases and projects, and integrate new test tools, testware, resource pools, and test environments.
- Intelligently select the appropriate test suites (sanity, regression, feature) whenever a build is completed to validate the integrity of the mainstream, existing, and new functionality.
[Figure 2. Test Environment as a set of resource placeholders and respective sets of their search attributes]
1. Resource Management
[Figure 3. Test task as a set of resource placeholders and the respective rules of their replacements]
A test task is started upon acquiring all the required resources. The
states of its resources will be changed to running. The test task will
start executing component by component, and each component knows
how to build an environment from the selected resources. Upon completing the execution of a task, the resources are returned to the available state. If the returned resource is unhealthy, recovery mechanisms
restore it to its initial state, making it ready for subsequent test runs.
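The lifecycle just described can be sketched as a small state machine; the class and method names below are illustrative assumptions, not the framework's actual code:

```python
from enum import Enum, auto

class State(Enum):
    AVAILABLE = auto()
    RUNNING = auto()
    RECOVERING = auto()

class PooledResource:
    """Sketch of the resource lifecycle: available -> running ->
    available, with a recovery step for unhealthy returns."""
    def __init__(self, name):
        self.name, self.state = name, State.AVAILABLE

    def acquire(self):
        # A test task starts only once all its resources are acquired.
        assert self.state is State.AVAILABLE
        self.state = State.RUNNING

    def release(self, healthy=True):
        if healthy:
            self.state = State.AVAILABLE
        else:
            # Recovery restores an unhealthy resource to its initial
            # state, making it ready for subsequent test runs.
            self.state = State.RECOVERING
            self._recover()

    def _recover(self):
        self.state = State.AVAILABLE

r = PooledResource("board-1")
r.acquire()
r.release(healthy=False)
assert r.state is State.AVAILABLE
```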
2. Testware Management
Continuous integration deals with code before the production stage.
This means the testware needs to be adaptive to frequent changes of
many kinds, such as API and command syntax and semantic changes,
and changes in requirements. When these changes have occurred,
hardcoded syntax in testware will be hard to find and correct. The only
way to achieve testware maintenance is by providing a strict relationship between architecture, requirements, and design documents, and
by separating business functionality from implementation details.
[Figure: the test structure as a tree of test sets, use cases, and test cases; testware consists of a configuration file, test scripts (UCs/TCs), test libraries (TC definitions), and test actions]
The configuration, test set, test script, and library are files that present testware. The narrow specialization of each file type serves a
maintenance purpose: a single change in the code of the object-to-test
(on the structural, functional, or syntax level) should lead to a single
change in testware.
To find a single result from the tens of thousands of test runs per day
with a few mouse clicks, the results repository area needs to mirror
(Figure 4):
The debug file should have a common look for all test tools, with an
emphasis on what are the stimuli and responses, and how the comparison was made in order to let the tester write CRs that communicate
the problem precisely to the developer.
Before starting its tests, each wrapper dynamically creates its configuration files, which include references to selected tests, resources to be
used, and the location of the results. The configuration files are built
from templates, based on the release, project, and resource parameters.
A debug file that is created during a test execution needs to follow a
standard organization. Upon completion of a test, the results produced
by different test tools are converted into a standard hierarchical format
and uploaded to a results repository together with logs and traces.
Failed test cases can be filtered for known issues in order to avoid re-
4. Test Process
New releases/projects/features, test environments, and testware have
to be created daily. This requires interfaces that work with test-related
objects: to create, to edit, to delete, to run, to monitor, and to report.
The management web interface has to provide a quick and easy way
to define new software releases and projects, and integrate new test
tools, testware, resource pools, and test environments directly into
the database. The build server requests a test for a new build through
an execution interface and is notified of test results and metrics.
The request to run a test is called a verification request (Figure 6). A
verification request specifies a set of test tasks that are independent
and can execute in parallel. Each test task (task for short) specifies a
set of components, which are sets of tests to run sequentially, and is
associated with a test environment, which specifies a set of resources
that must be acquired before execution.
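This execution model (parallel tasks, sequential components within each task) can be sketched as follows; the function and field names are illustrative, not the CITF API:

```python
from concurrent.futures import ThreadPoolExecutor

def run_component(name):
    # Each component calls its test tool here; stubbed for the sketch.
    return f"{name}: ok"

def run_task(task):
    # Components of one task execute one after another, in order.
    return [run_component(c) for c in task["components"]]

def run_verification_request(tasks):
    # Independent tasks may execute in parallel, each in its own
    # test environment.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(run_task, tasks))

request = [{"env": "env-1", "components": ["comp-1", "comp-2"]},
           {"env": "env-2", "components": ["comp-3"]}]
results = run_verification_request(request)
assert results == [["comp-1: ok", "comp-2: ok"], ["comp-3: ok"]]
```

The separation matters because tasks contend only for resources, while component ordering within a task preserves setup/teardown dependencies.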
[Figure 4. The results repository mirrors the hierarchy of test types, builds, test environments, applications, test sets, use cases, and test cases]
Each component calls its test tool to start a test. A test task monitors
all components and, based on the database settings, can terminate
it if the execution goes on too long, repeat its execution, or skip the
execution of the subsequent components.
Figure 8. A verification request is translated into independent test tasks; each task
executes its components sequentially in its associated test environment.
5. Reliability
A considerable number of failed test cases are not real code errors, but
are environment issues, such as random network glitches, testware
mis-synchronization, or resource failures. The following built-in testing
reliability features help test teams handle such errors:
Starting test cases from an initial state and, in case of failure, returning a resource to its initial state by the test case's recovery sequence.
6. Metrics

Code coverage analysis reveals areas of the code that were not tested at all. The code coverage metrics can be useful in CI if the build consists of many independent layers and modules. An object's quality can be defined as the quality
of the weakest link. Therefore, it is good practice to request the same
percentage of code coverage for all system components. This is especially important for new code deliveries, since it is the primary proof
(along with requirements traceability) that the necessary automated
tests were added along with the new code.
The fourth category of metrics describes the quality of each build: code
review data, code complexity, warnings, and memory leaks.
7. Conclusions
CI puts heavy demands on testing systems. We found that no commercial solution offered the functionality we needed, which is why we
developed our own CI test framework. The development was driven by
the demands of the test teams responsible for build validation, whose
major constraint was the ability to determine the cause of failure in a
short interval. As a result, we deployed five releases of CITF during the
five years we have been in operation. We currently support four major
releases, approximately thirty projects for each release, ten different
test tools (commercial and in-house), and twenty embedded systems
applications. We support a pool consisting of hundreds of geographically distributed physical resources. The system verifies tens of builds
daily, by running hundreds of thousands of test cases.
By Robert Galen
pyramid, continuous integration, XP technical practices, and support for ALM-distributed collaboration tools.
Often it is the place towards which organizations gravitate first, probably because of our generic affinity for tools solving all of our
challenges. An important way to think about this pillar is that it is
foundational, in that the other two pillars are built on top of the
tooling. And organizations often underestimate the importance,
initial cost, and ongoing costs of maintaining foundational agility
in this pillar. Continuous investment is an ongoing challenge here.
Finally, this pillar is not centered on the testing function or group. While
it includes testing, tooling, and automation, it inherently includes
ALL tooling related to product development across the entire agile
organization. It provides much of the glue in cross-connecting
tools and automation towards efficiency and quality.
2. Software Testing: This pillar is focused on the profession of testing.
On solid testing practices, not simply agile testing practices, but
leveraging the team's past testing experience, skills, techniques,
and tools. This is the place where agile teams move from a trivial
view of agile software testing (which only looks at TDD, ATDD, and
developer-based testing) towards a more holistic view of quality.
It is a pillar where the breadth and depth of functional and non-
functional testing is embraced. Where exploratory testing is understood and practiced as a viable testing technique. It is where
the breadth of non-functional testing is understood and applied
to meet business and domain needs, including performance, load,
security, and customer usability testing.
By definition, this is where testing strategy resides, where planning
and governance sit, and where broad reporting is performed. I am
NOT talking about traditional testing with all of its process focus
and typical lack of value. But I AM talking about effective professional testing, broadly and deeply applied within agile contexts.
3. Cross-Functional Team Practices: Finally, this pillar is focused on
cross-team collaboration, team-based standards, quality attitudes,
and, importantly, on building things properly. Consider this the soft-skills area of the three pillars, where we provide direction for how
each team will operate; consider these the rules of engagement.
For example, this is the place where good old-fashioned reviews and
inspections are valued. This would include pairing (across ALL team
members), but also slightly more formal reviews of architecture, design, code, and test cases. It is a place where inspection is performed
rigorously, as established in the team's Definition-of-Done. It is where
refactoring the code base and keeping it well kept are also of
primary importance.
Speaking of Definition-of-Done, this is the pillar where cross-team
physical constraints, conventions, and agreements are established.
But, more important than creating them, it is where the team
commits to consistency and actually holds to its
agreements. Another important focus is on group integrity in conducting powerful retrospectives and fostering continuous improvement in the long term.
Foundational Practices
But beneath the Three Pillars are some foundational principles and
practices that glue everything together. For example, taking a whole-team view of quality and testing, where it is not just the job of the
testers, but of everyone on the team. I still find far too many agile
teams that relegate the ownership of quality and testing only to testers.
[Figure: example practices within the Three Pillars]
Software Testing: pyramid-based strategy (unit + Cucumber + Selenium); risk-based testing, functional and non-functional; stop-the-line mindset; continuous integration; attack technical infrastructure in the backlog; visual feedback dashboards; actively practice ATDD and BDD.
Cross-Functional Team Practices: team-based pairing; exploratory testing; active done-ness; standards checklists, templates, repositories; aggressive refactoring of technical debt.
Cross-cutting Strategy
Beyond the individual pillars, the value resides in cross-cutting concerns. I will go back to my original story to help make this point. My
client was advanced in BDD practices, but struggling with user story
writing, or even understanding the point of a user story.
The results would have been better if they had made the following
cross-Pillar connections:
In Pillar #1: Behavior-Driven Development (BDD) and Acceptance
Test-Driven Development (ATDD) are mostly technical practices. They
focus on articulating user story acceptance testing in such a way
as to make them automatable via a variety of open source tooling.
Unfortunately, they have an underlying assumption that you understand how to write a solid story.
In Pillar #2: One thing I did not mention in the story was that every
team had a different view of what a story should look like and the
rules for writing effective stories. There were no norms, consistency
rules, templates, or even solid examples. A focus on the software
testing aspects of pillar two would have established these practices,
which would have significantly helped their teams.
In Pillar #3: An important aspect of the user story that my client
failed to realize was the conversation part of the story. If you reference the 3-Cs of effective story writing as a model, one of the Cs is
having a conversation about or collaborating on the story. It is the
most important C if you ask me. It is where the 3 Amigos of the
story (the developer(s), the tester(s), and the product owner(s)) get
together; leveraging the story to create conversations that surround
the customer problem they are trying to solve.
Do you see the pattern in this case?
You cannot effectively manage to deliver on agile quality practices
without cross-cutting the concerns and focus. In this case, effective
use of user stories and standards, plus BDD and automation, plus the
necessary conversations, cuts across all three pillars. It requires a strategic
balance in order to implement any one of the practices properly.
I hope this example illustrates the cross-cutting nature for effective use
of the Three Pillars, and that you start doing that on your own as well.
Wrapping Up
This initial article is intended to introduce The Three Pillars of Agile
Quality and Testing as a framework or model for solidifying your agile
quality strategies and initiatives.
I hope it has increased your thinking around the importance of developing a balanced quality and testing strategy as part of your overall
agile transformation. As I have observed and learned, it does not simply
happen as a result of going agile, and most of the teams I encounter
are largely out of balance in their adoption.
I hope you find the model useful and please send me feedback. If there
is interest, I may explore more specific dynamics within each pillar in
subsequent articles.
Stay agile my friends,
Bob.
BTW: I am writing a book based on the Three Pillars model. If you would
like to have more information, participate in the book's evolution, or
simply stay tuned in to this topic, please join the book's mailing list
here: http://goo.gl/ORcxbE
The use of ATDD and tools like FitNesse, Cucumber, and Robot Framework makes it necessary to create automated acceptance tests. These
acceptance tests are a natural extension of the acceptance criteria used
in user stories. You use acceptance tests to understand what needs
to be developed so that you can develop the correct functionality. In
order to develop the correct functionality, you need to create a common
understanding of stakeholder value among all team members. You
use story workshops (product backlog refinement meetings in Scrum)
so that everyone can contribute to discovering the whys, hows, and
whats of the stories. You also break down bigger stories (a.k.a. epics)
into smaller stories (a.k.a. user stories), and create new user stories
together with your stakeholders in these workshops.
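As an illustration, a single acceptance criterion can be expressed directly as an executable test. The story, threshold, and class below are hypothetical examples of mine, written in plain JUnit-style Java rather than in one of the tools named above:

```java
// Hypothetical story: "As a shopper, I get free shipping on orders of
// 50 EUR or more." All names and numbers here are illustrative assumptions.
public class FreeShippingAcceptance {
    static final double THRESHOLD = 50.0;
    static final double FLAT_RATE = 4.95;

    // System under test: shipping cost for a given order total.
    static double shippingCost(double orderTotal) {
        return orderTotal >= THRESHOLD ? 0.0 : FLAT_RATE;
    }

    public static void main(String[] args) {
        // Acceptance criterion 1: an order of exactly 50 EUR ships free.
        assert shippingCost(50.0) == 0.0;
        // Acceptance criterion 2: an order below the threshold pays the flat rate.
        assert shippingCost(49.99) == FLAT_RATE;
        System.out.println("acceptance criteria pass");
    }
}
```

Because the test mirrors the acceptance criteria word for word, the whole team can read it in a story workshop and confirm it captures the intended behavior.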
If you are like us, you have participated in meetings that were boring,
unproductive, dominated by a few people, and just a pain to be part
of. You just showed up and said a few things because that was what
management expected you to do. Well, it does not have to be that way.
You can use game mechanics in all your meetings to make them not
only more fun, but much more productive with better results. A way
to successful meetings is through serious games.
After that you will have to define the rules of the workshop. How about
ringing phones? Can we read emails during the workshop? What about
interrupting one another while talking? If you have already done some
story workshops with the team, you just quickly remind them of their
own rules and ask if they are still happy with them.
To boost creativity, everything is time-boxed, so you want to communicate the time limit.
During the workshop you also want to inform the team of time progress. You can, for example, let them know every 10 to 15 minutes how
much time is left and discuss whether you are still working on the
most important things.
Finally, a parking lot is very useful to have. If you have any questions
or issues that take up too much time or are not relevant, you can put
them in the parking lot and cover them at the end of the meeting.
exploratory tests. Once you have identified which stories need manual
testing, you set up an exploratory test charter for each. We do this
using a risk impact matrix[10] and we use an exploratory testing
tour to drive our test[11].
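The risk impact matrix the authors cite is not reproduced here, but a minimal sketch of risk-based selection could look as follows. The scoring scale (likelihood and impact, each 1 to 5) and the threshold are my own assumptions, not the authors' matrix:

```java
// Sketch of risk-based selection of stories for exploratory testing.
// Scale and threshold are illustrative assumptions.
public class RiskBasedSelection {
    // Conventional risk priority: likelihood times impact.
    static int riskScore(int likelihood, int impact) {
        return likelihood * impact;
    }

    // Stories above the threshold get a manual exploratory test charter;
    // the rest are covered by automated checks alone.
    static boolean needsExploratoryCharter(int likelihood, int impact) {
        return riskScore(likelihood, impact) >= 12; // e.g. 3 x 4 and up
    }

    public static void main(String[] args) {
        assert needsExploratoryCharter(4, 4);  // high risk: write a charter
        assert !needsExploratoryCharter(1, 3); // low risk: no charter needed
        System.out.println("risk selection works");
    }
}
```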
References
their efforts in Scrum, emphasizing ethics, commitment, and transparency. He believes teams should learn, try, experiment, and work together to create solutions that solve problems. You can contact
Pascal at pascal@validate-it.nl.
[Checklist: IoT testing overview]
Business Drivers: business process monitoring; reduced time-to-market; increased customer satisfaction; cost saving.
Verification Aspects:
Security testing: identity authentication; password (weak/strong); vulnerability to XSS; data leakage; data protection; unencrypted networks (TCP/IP, Wi-Fi, cellular 3G/4G).
Performance testing.
Embedded testing: timing constraints; software validation and integration; M2M testing.
Device testing: hardware availability; physical access; verification of the sensors and other hardware interfaces during runtime with special access; memory constraints.
Conformance testing: sensors; smart devices; gateways; behavioral aspects.
Mobile testing: mobile apps; mobile devices; mobile OS (Android, iOS); code security.
Interoperability testing.
Protocol testing.
Below are some of the key challenges with regard to IoT testing:
Devices with many lines of code, which makes life difficult for the testers
Obtaining a complete overview of the hardware and software architecture to develop a test strategy or test plan
Best Practices
Here are some best practices that can be applied for the mutual
benefit of the organization and testers:
Use of clear test specifications to improve quality
Use of test automation tools to reduce time-to-market
Analyze the product's functional requirements and use cases for
effective testing
Determine test metrics that will measure the impact of the IoT
strategy
Highlight the top priority problems in advance that need to be
tackled
The checklist above helps the team to test IoT devices/applications for
maintaining quality[3].
References
[1] http://whatis.techtarget.com
[2] http://www.infosecurity-magazine.com/news/internet-of-thingslaid-bare-25-security-flaws/
[3] http://www.logigear.com
[4] http://www.elektron-technology.com/sites/elektron-technology.
com/files/iot-agar-scientific-white-paper-final.pdf
[5] http://blogs.clicksoftware.com/clickipedia/five-expected-benefitsfrom-the-internet-of-things-the-impact-on-service/
ing the latest testing trends. He has also published articles in Testing Experience and Testing Circus magazines, and white papers in the IJCEM.
Test Varieties
When it comes to testing, as one of the quality measures that can be
taken, we want to make things easy to grasp by introducing something
new: the test variety. This simple term intends to emphasize to all
people involved that testing is not a one-size-fits-all activity. Even when
all testing activities are done by one team within a single iteration, you
will still need to vary the testing. The first variety of testing, of course,
is static testing, i.e., reviewing documents and source code. Static
testing can both be manual (using techniques like technical review
or inspection) and automated (with tools such as static analyzers).
The next view on test varieties relates to the parties involved. The
developers have a technical view of testing, looking to see whether
the software works as designed and properly integrates with other
pieces of software. The designers want to know whether the system
as a whole works according to their specifications (whatever the form
and shape of these specifications may be). And the business people
simply want to know whether their business process is properly supported. Now, during these various tests there will be different points
Process: coverage types: algorithm, statement coverage, paths, state transitions; variations: 0-switch, 1-switch, N-switch.
Conditions/decisions: coverage types: decision points; variations: condition coverage, decision coverage, condition/decision coverage, cause-effect graph, pairwise testing.
Data: coverage types: boundary values, equivalence classes, CRUD, data combinations, data flows. A boundary value determines the transfer from one equivalence class to another; boundary value analysis tests the boundary value itself plus the value directly above and directly below it. An invalid situation (certain values or combinations of values defined that are not permitted for the relevant functionality) should lead to correct error handling, while a valid situation should be accepted by the system without error handling.
Appearance: coverage types: heuristics, load profiles, operational profiles, presentation, usability; variations: alpha testing, beta testing, usability lab.
Table 1. Overview of the coverage type groups, examples of coverage types, and possible variations
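The boundary value analysis described for the Data group can be sketched in a few lines. The helper below is my own illustration, assuming integer boundaries:

```java
import java.util.Arrays;
import java.util.List;

// Boundary value analysis: for each boundary, test the boundary value
// itself plus the values directly below and directly above it.
public class BoundaryValues {
    static List<Integer> testValues(int boundary) {
        return Arrays.asList(boundary - 1, boundary, boundary + 1);
    }

    public static void main(String[] args) {
        // A boundary at 18 (e.g. a minimum-age check) yields 17, 18, 19.
        assert testValues(18).equals(Arrays.asList(17, 18, 19));
        System.out.println(testValues(18));
    }
}
```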
Experience-Based Approach
Error Guessing
The tester uses experience to guess the potential errors that might have been made and determines the methods to uncover the resulting defects. Error guessing is also useful during risk analysis to identify potential failure modes. Part of this is defect-based testing, where the type of defect sought is used as the basis for the test design, with tests derived systematically from what is known about the defect. Error guessing is often no more than ad hoc testing, and the results of testing are totally dependent on the experience and skills of the tester.
Checklist-based
The experienced tester uses a high-level list of items to be noted, checked, or remembered, or a set of rules or criteria against which a product has to be verified. These checklists are built based on a set of standards, on experience, and on other considerations. A checklist of user interface standards used as the basis for testing an application is an example of checklist-based testing.
Checking of individual elements is often done using an unstructured list. Each element in the list is directly tested by at least one test case. Although checklist-based testing is more organized than error guessing, it is still highly dependent on the skills of the tester, and the test is only as good as the checklist that is used.
Exploratory
Exploratory testing is simultaneous learning, test design, and test execution. In other words, exploratory testing is any testing to the extent that the tester actively controls the design of the tests as they are performed, and uses information gained while testing to design new and better tests. Good exploratory testing is time-boxed, based on a charter that also defines scope and special areas of attention. Since exploratory testing is preferably done by two people working together who apply relevant coverage types for the specific situation at hand, this approach is preferred over the alternatives mentioned above.
Hybrid approaches
In practice, the use of hybrid approaches is very common. Exploratory testing, for instance, can be combined very well with the use of coverage types. And there are test design techniques that may be used within experience-based as well as coverage-based testing, such as the data combination test (which uses classification trees).
Coverage-Based Testing
Process
Conditions/Decisions
In every IT system there are decision points consisting of conditions, where the system behavior differs depending on the outcome of such a decision point. Variations of these conditions and their outcomes can be tested using coverage types like decision coverage, modified condition/decision coverage, and multiple condition coverage.
Data
Data starts its lifecycle when it is created and ends when it is removed. In between, the data is used by updating or consulting it. This lifecycle of data can be tested, as can combinations of input data and the attributes of input or output data. Some coverage types in this respect are boundary values, CRUD, data flows, and data combinations.
Appearance
How a system operates, how it performs, and what its appearance should be are often described in non-functional requirements. Within this group we find coverage types like heuristics, operational and load profiles, and presentation.
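The decision points described under Conditions/Decisions can be made concrete with a small sketch. The rule below, with its two conditions, is my own illustration rather than an example from the article:

```java
// A decision point with two conditions, to contrast decision coverage
// with condition coverage. The rule itself is an illustrative assumption.
public class DiscountRule {
    // Decision: discount applies if the customer is a member AND the order is large.
    static boolean discountApplies(boolean isMember, boolean largeOrder) {
        return isMember && largeOrder;
    }

    public static void main(String[] args) {
        // Decision coverage: both outcomes of the whole decision are exercised.
        assert discountApplies(true, true);   // decision evaluates true
        assert !discountApplies(true, false); // decision evaluates false
        // Condition coverage additionally requires each individual condition
        // to take both values, so a third case varies isMember as well.
        assert !discountApplies(false, true);
        System.out.println("coverage example done");
    }
}
```

Two test cases suffice for decision coverage here, while condition coverage needs the third case; the more thorough coverage types in the table demand correspondingly more cases.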
In the table below we have highlighted the most commonly used coverage types and some test
design techniques in which they can be applied. We have not given
an overview for appearance, since the coverage types for appearance
are highly interlinked with the aspect to be tested, and we believe that
giving a simplified overview would be misleading.
Coverage Type Group | Light | Average | Thorough
Process | statement coverage and paths test, depth level 1 | decision coverage, process cycle test depth level 2 | algorithms level 2, process cycle test depth level 3
Conditions | condition coverage, decision coverage | modified condition/decision coverage, elementary comparison test | multiple condition coverage or multiple condition decision coverage, decision table test, elementary comparison test
Data | one or some data pairs | pairwise data combination test | data combination test
Twitter: @rikmarselis
Conclusion
Applying an effective and efficient way of testing does not need to be
bothersome. Using test varieties, a combination of experience-based
and coverage-based testing, and your choice of about five coverage
types that are relevant for your situation, testing in these fast-paced
times will focus on establishing the stakeholders' confidence without
tedious and unnecessary work.
Literature
Testing Embedded Software, Bart Broekman & Edwin Notenboom,
Addison Wesley, 2003, ISBN 9780321159861
TMap NEXT for result-driven testing, Tim Koomen, Leo van der
Aalst, Bart Broekman, Michiel Vroon, UTN Publishers, 2006, ISBN
9072194799
TMap NEXT in Scrum, Leo van der Aalst & Cecile Davis, Sogeti, 2012,
ISBN 9789075414646
Neil's Quest for Quality: A TMap HD Story, Aldert Boersma & Erik
Vooijs, Sogeti, 2014, ISBN 9789075414837
Both Bert and Rik wrote several building blocks for the new TMap
HD book that was presented on 28 October 2014.
By Sujith Shajee
Myth # 2
Version 1.0: Savings through test automation are always assured
Version 2.0: Savings through test automation are assured with well-structured implementation and can always be achieved in a pre-determined timeframe
When an organization decides to introduce automation into its testing
strategy, the decision is a commitment of huge investment towards
development, maintenance, and other operational costs associated
with the implementation. Return on Investment (ROI) is calculated
and determined, usually prior to kick off of the implementation, and
is considered to be the assured cost savings from implementation.
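As a rough illustration of how such an ROI figure is typically computed, consider the sketch below. The formula and the numbers are generic assumptions of mine, not figures from the article:

```java
// Illustrative ROI sketch for test automation over repeated regression cycles.
// ROI = (manual effort saved - total automation cost) / total automation cost.
public class AutomationRoi {
    static double roi(double manualCostPerCycle, double automationBuildCost,
                      double maintenancePerCycle, int cycles) {
        double saved = manualCostPerCycle * cycles;
        double spent = automationBuildCost + maintenancePerCycle * cycles;
        return (saved - spent) / spent;
    }

    public static void main(String[] args) {
        // Early on the ROI is negative; it only turns positive once the
        // suite has survived enough cycles to amortize the build cost.
        assert roi(1000, 20000, 200, 5) < 0;  // 5 cycles: still a loss
        assert roi(1000, 20000, 200, 40) > 0; // 40 cycles: positive return
        System.out.println("ROI depends on how long the suite stays maintainable");
    }
}
```

The break-even point moves with maintenance cost, which is exactly why savings are not assured in a pre-determined timeframe.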
This article is the author's viewpoint and experience on how the original myth has transformed into a new version, and how derided the myth still is. The article also provides the author's thoughts on the new generation of myths.
Myth # 1
Myth # 3
Version 1.0: Test automation uncovers more bugs
Version 2.0: Test automation has failed in its implementation if it is
not able to uncover as many bugs as manual testing
Test automation is designed to reduce manual effort and eliminate
human errors in routine test execution activities. A common idea that
prevailed with automation was that it would successfully uncover
more bugs than manual validation. This idea just falls apart when
you realize that automation is only as good as the manual test cases
it was based and built on.
Identifying more application defects is a result of the quality and
coverage of test cases; yet the ability of automation to uncover defects
decides the success of the automation implementation. In reality, automation cannot uncover more
defects than there are. But this most certainly does not imply that the
automation implementation is a failure. Let us not forget that when
the organization decided to go for automation, uncovering defects was
not the only goal defined to be achieved as an end result of automation.
Myth # 4
Version 1.0: Anyone and everything can be automated
Version 2.0: Automation is software engineering, so a developer is the
right fit for implementation
Yes, you heard right: automation is scripting out your manual test cases in a specific language that is supported by
the automation tool. Now the questions are, does that mean we can
bring in any automation tool and have the manual tester work on
implementing automation? And if the automation tool supports test
case automation, does it mean it should be automated? The answer
to both these questions is most certainly a big no.
With the evolution in the area of automation, organizations have realized that automation should be treated as a development project and
should follow a well thought-through implementation plan. A proper
automation feasibility analysis together with a cost-benefit analysis
would really help us decide what has to be automated. Just because
it can be, does not mean it should be. Having said that, this realization did lead people to get their heads round new options to support
implementation and one of these was development team involvement.
Inference
George Orwell rightly said: "Myths which are believed in tend to become true." There is a serious need to break the traditional ideas
that are not true about automation from time to time. This will ensure impressions that are never intended to be part of an automation
process stay out of it and will help to govern and establish the right set
of standards and practices that will lead to the correct way of looking
at test automation.
LinkedIn: www.linkedin.com/pub/sujith-shajee/8/71b/863
Myth # 5
Version 1.0: Any changes to test automation can be done in no time because it is automation
Version 2.0: Any changes to test automation can be done in no time because it is automation
No, it is not a typo, and most definitely you are not reading the versions wrongly. This is one myth that has just forgotten to evolve over
time. It was just the other day when a colleague of mine was telling
me about the agony he has to go through every automated regression
By Philipp Benkler
Creating Acceptance
By integrating employees in the development process, enterprise apps
have a greater likelihood of acceptance and adoption after completion.
In particular, the involvement of opinion leaders has great potential
for stimulating broad acceptance of new mobile business solutions,
and combating reservations towards transformation and change. It is
Bug Testing
Extensive functional testing is the basis for any successful app. To
avoid organizational blindness, testers from outside the development
team should be included, so that both staff and independent testers
are assessing the app. In many cases a mix of experienced testers and
unbiased users offers the greatest benefits. While the former know
where to look and discover defects reliably, the latter do the unexpected and might find showstoppers that otherwise would have been
missed. Therefore, it makes sense to establish a mix of structured and
exploratory testing to cover as many scenarios as possible. Depending
on the stage of the application as well as the availability of internal
resources, this can be done either internally or externally.
Usability Testing
Usability is the key factor for an app to be successful, both in the consumer and in the enterprise sector. Before release, applications should
be tested by the target group to fully understand their wants and needs.
This way, companies can evaluate whether the initial requirements
have been implemented as required at a certain stage in the development process. For consumer apps, crowdtesting is an established
approach for quality assurance and usability testing. Crowdtesting
offers access to specific target groups and devices through large pools
of testers. When developing enterprise apps, employees or enterprise
clients take the place of the crowd. They are the future users and know
what works for them and what does not. Instead of carrying out the
testing process independently, companies should think about using
the crowd platform infrastructure of an external service provider
to distribute tests to their own crowd of employees or customers.
This approach is called Bring-Your-Own-Crowd. It reduces project
management time and budget through a managed testing process,
resulting in high-quality results that can be pushed back directly into
the development process. This way, even testing with confidential
data or restricted access on company devices is possible at any time.
1 http://www.gartner.com/newsroom/id/2334015
When integrating employees into the testing process, appropriate time and resources need to be
allocated for that purpose. Employees will need to use their working
time to test, and therefore possibly postpone other tasks. In addition,
a strong commitment is required from all testers in order to get meaningful and high quality results. Thus, ordering unwilling employees
to test an app is not ideal. If they are not already interested in the app
A Unified Framework
For All Automation Needs Part III
Introduction
In the first two parts of this article[1], I described the main principles
applied in developing a unified test automation (UTA) framework that
serves as a foundation for testing multiple application interfaces.
The UTA was built on JUnit and JUnitParams. I showed how to test the
browser GUI and REST API within the UTA framework using the open
source Selenium WebDriver and Spring Framework. In this part, I will
describe the details of implementing automated testing of the command line interface (CLI) when connecting to an SSH server.
The most popular tool for automating interaction with CLI is Expect. It
was originally written in Tcl, and there are several open source Expect
implementations in Java. In the UTA, I use the following programs:
Expect-for-Java[2], developed by Ronnie Dong. This API is loosely
based on the Perl Expect library
The JCraft JSch[3] implementation of the SSH protocol
// open the SSH session with JSch
session.setPassword(password);
session.setConfig("StrictHostKeyChecking", "no");
session.connect(CONNECTION_TIMEOUT);
// wait for the shell prompt and verify the match
expect.expect("#");
assertEquals("#", expect.match);
@Test
@FileParameters(value = "file:c:/DDT/showCommand.csv",
    mapper = CsvWithHeaderMapper.class)
public void showCommandTest(String options, String pattern) { // signature reconstructed
    // input: send the command with the given options
    ...
    // expect
    expect.expect(Pattern.compile(pattern, Pattern.DOTALL));
    // verify
    assertTrue(expect.isSuccess);
    System.out.println(expect.match);
}
Listing 3. Testing command with multiple options
The data file showCommand.csv contains two columns: one with the
command options and one with the regex patterns for expected match.
The Base class, from which all test classes are extended, contains the
method commandProcessor(String csvFileName, Expect expect)
that parses the test scenario file, runs all commands, and verifies pattern matches. Using JUnitParams library, this test can be presented
as simply as shown in Listing 6, where cryptoUser.csv is the name
of the test scenario file.
@Test
@Parameters({"cryptoUser.csv"})
public void cryptoUserTest(String file) { // signature reconstructed
    commandProcessor(file, expect);
}
// expect
...
// verify
assertTrue(expect.isSuccess);
// make a decision
switch (matchIndex) {
    case 0:
        // new command
        ...
    case 1:
        // new command
        ...
}
Listing 4. Making decisions
Summary
In this last part of the article, I described how to automate testing of the CLI
within the UTA. Since the CLI is not as rich as the GUI, the structure of the
tests is much simpler.
The UTA can be extended to include other interfaces as well, and maybe
someday we will come up with one global test automation framework
that fits all automation needs.
References
[1] Vladimir Belorusets. A Unified Framework for All Automation
Needs, Parts I and II. Testing Experience, Issue No. 26, pp. 66-70, 2014,
Issue No. 27, pp. 9-13, 2014.
[2] Expect-for-Java https://github.com/ronniedong/Expect-for-Java
[3] JCraft JSch http://www.jcraft.com/jsch
0, show hsm status, Crypto-user logged in: yes|1, Crypto-user logged in: no|3
The first column is the number of the command line. The second column
is the command itself. The rest of the columns present patterns (strings or
regex) for all possible command outcomes. Each pattern ends with a
bar that marks the end of the pattern, followed by the number of the next
command to go to. -1 indicates the end of the test.
Book Corner
Book Review:
New Releases:
Testing Cloud Services
How to Test SaaS, PaaS & IaaS
Authored by Kees Blokland, Jeroen
Mengerink, Martin Pol
Masthead
Editor
Díaz & Hilterscheid
Editorial
José Díaz
Unternehmensberatung GmbH
Kurfürstendamm 179
Layout&Design
10707 Berlin
Lucas Jahn
Germany
Konstanze Ackermann
Email: info@diazhilterscheid.com
Website: www.diazhilterscheid.com
Price
Online version: free of charge
Website
www.testingexperience.com
Subscribe
subscribe.testingexperience.com
Articles&Authors
editorial@testingexperience.com
www.testingexperience.com
Díaz & Hilterscheid is a member of
Verband der Zeitschriftenverleger
Berlin-Brandenburg e.V..
www.testingexperience-shop.com
ISSN 1866-5705
third parties.
Editorial Board
A big thank-you goes to the members of
the Testing Experience editorial board for
helping us select articles for this issue:
Maik Nogens, Gary Mogyorodi, Erik van
Veenendaal, Werner Lieblang and Arjan
Brands.
Index of Advertisers
Picture Credits
iStock.com/akindo..................................................C1
Ranorex............................................................................3
CMAP Certified
Mobile App Professional
The new certification for Mobile App Testing
Apps and mobiles have become an important ele-