
White Paper by Bloor


Author Philip Howard
Publish date September 2015

Automated testing: coping with change

The truth is that you can skimp on automated testing and deploy manual testers because you think it is cheaper (it isn’t) or because it is easier to get operational rather than capital budget, but this is simply short-sighted.


Change is a constant in both development and testing. In development environments this has, over the years, resulted in the use of agile approaches and, more recently, what has come to be known as “cloud first”. That is, the idea that you aim to have multiple small releases and enhancements to your applications rather than larger, more occasional releases. Actually, we would argue that this really originated with open source projects but, in any case, it is a subset of “continuous delivery”. Whatever it is called, the idea is that by focusing on incremental improvements to an application you are less at the mercy of changes to requirements. Of course, this is not a panacea: it only applies to upgrades and improvements, not to green-field developments, though agile development can reasonably be regarded as supporting continuous delivery.

There is a distinction between “cloud first” and “continuous delivery” in that the former emphasises development whereas the latter refers to the whole software development lifecycle, including testing, provisioning, and so on, as well as development. And not forgetting that there are a myriad of tools that you might want to use that need to be linked automatically and without the need for manually scripted integration. However, while this is the broader context, in this paper we are going to discuss the impact of change on testing within the context of continuous delivery. This is an issue that has not historically been well addressed.

Much testing continues to be manual and Figure 1 illustrates some of the costs associated with that practice. Moreover, it should be obvious that manual processes are going to be equally deficient when it comes to managing change. Nevertheless, there are automated testing frameworks available on the market and here we want to discuss how these cope with new and amended requirements. In fact, one of the arguments against using automated testing frameworks has historically been precisely that they haven’t been good at managing change. We are going to argue that, with the right tools, it is actually possible to automate the change process as a part of the testing environment and thus to enable continuous delivery.

Figure 1: The Price of Failure
• Over 60% of IT projects fail (Standish Group’s Chaos Manifesto 2013)
• Only 69% of functionality is delivered (Standish Group’s Chaos Manifesto 2013)
• $46bn spent annually fixing defects (IIBA, 2013)
• 56% of defects stem from ambiguous requirements, causing 80% of defect costs (Bender RBT, 2009)
• Up to 50% of time spent looking for test data (Grid-Tools’ experience working on site)
• 70% of all testing still manual (Bloor Research, 2014)

What difference has Agile Designer TMX made?
• 95% reduction in defects (Agile Designer press release)
• 30% reduction in test cycles (Grid-Tools’ experience working on site / performing audits of test cases)
• 100% coverage from 12 test cases generated (audit at a large financial services company)

In practice there are various issues to consider:

• The need for test automation frameworks to be able to respond to constant user demands. We can call this “responsive automation”.
• Reusability. By building up a library of reusable test assets, functionality can be tested more rapidly by selecting components from this library.
• Traceability. In order to automate change you need all the data, expected results and test scripts to be automatically updated through traceability back to requirements.
• Impact analysis. Simply implementing a change is one thing, but you need to understand how this might impact on other parts of the system, because the former can break the latter.
• Speed of delivery. To keep up with the competition, companies need to get new applications and upgrades to market faster. Testing cannot be a roadblock on this path.

We will discuss each of these issues in turn.
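To make the responsive-automation, reusability and traceability points above concrete, the cycle of capturing a change, searching a library of reusable test cases, and generating and storing new ones where none exist can be sketched in a few lines. This is a minimal, hypothetical illustration (the class and field names are ours alone, not any vendor’s API), not a description of any particular product:

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    requirement_id: str   # traceability link back to the captured requirement
    name: str
    script: str           # associated test script
    data_profile: dict    # profile of the data needed to run the test

@dataclass
class TestCaseLibrary:
    cases: list = field(default_factory=list)

    def find(self, requirement_id):
        """Return existing test cases that validate this requirement, if any."""
        return [c for c in self.cases if c.requirement_id == requirement_id]

    def respond_to_change(self, requirement_id, description):
        """Reuse suitable test cases; otherwise generate, store and return new ones."""
        existing = self.find(requirement_id)
        if existing:
            return existing
        # Placeholder generation step: a real framework would derive the case,
        # its script and its data profile from the captured requirement itself.
        new_case = TestCase(
            requirement_id=requirement_id,
            name=f"validate {description}",
            script=f"# script derived from {requirement_id}",
            data_profile={"source": "in-house", "rows": 100},
        )
        self.cases.append(new_case)  # stored in the library for future reuse
        return [new_case]

library = TestCaseLibrary()
first = library.respond_to_change("REQ-42", "discount calculation")
again = library.respond_to_change("REQ-42", "discount calculation")
assert first == again  # the second change request reuses the stored case
```

The point of the sketch is the branch inside `respond_to_change`: reuse comes first, generation is the fallback, and everything generated is stored so the library grows rather than the backlog.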

A Bloor White Paper


Responsive automation

All testing environments have to be able to react to changing user demands. Moreover, it is typical that change requests are both frequent and never ending. The issue arises as to how you react to these requests in a timely and efficient manner. The short answer is that you need to reduce manual testing, increase automation and do so in a way that allows you to be more responsive to change. However, it is easy to say this, much more difficult to realise in practice. The question is: how can automation enable responsiveness?

To turn this around: what are you actually looking to achieve? From a testing perspective on application change requests, what you would like in an ideal world is automated derivation of all the test cases you need to ensure adequate coverage, generation of the relevant test scripts, and the automated provision of appropriate data to run against those tests. In fact, if we really want to be idealistic, you would like this to be a one-click process. And this isn’t entirely blue sky thinking. It is not difficult to imagine artificial intelligence and machine learning capabilities being built into test automation frameworks that start to move testing in this direction.

However, we are not there yet and, in the meantime, at least some degree of manual intervention is going to be required. The question is, therefore, how to minimise this requirement? And the first part of any answer to this conundrum must be that requests for change, and the details thereof, are captured in some sort of formal manner. There are actually two (perhaps three) considerations here. Firstly, the definition of the change requirements should be directly usable at the start of the automation process. Secondly, the process of capturing these requirements needs to be understandable not just to developers and testers, but also to the business users that are commissioning the changes. If this is not the case then there is too great a risk that what the developers are creating will be different from what the user wants. Thirdly, preferably, this whole process should be easy to use and not require detailed training.

The key is the first point: changes are formally captured. Software should then identify what test cases are required to validate a change made to an application and search the existing library of test cases to see if suitable test cases already exist and, if not, generate new test cases to be stored in the library for future use. Notice that this implies some sort of test case management software. If suitable test cases already exist then they should have test scripts associated with them, along with profiles of the data required to run those tests and links to where that data resides. If those test cases don’t exist, then you want the software to generate the test scripts and data profiles at the same time as you generate the test cases.

Put all that together and you genuinely have the ability to be responsive to change.

Automating reusability

Testing is all too frequently treated as a series of unrelated processes: you have some code to test, you design test cases, write test scripts, define the profile of the data needed for your tests, identify where that data is, and describe the expected results. If the data is not easily available you may have to use the facilities of a service virtualisation tool in order to capture and/or simulate appropriate data. In any case, these steps are typically considered as a part of a single process that is isolated from other such processes. Needless to say, test cases and their associated test components are typically stored for potential reuse, but how much reuse really goes on? Of course, this has been a bugbear in development circles for decades: everyone recognises the theoretical benefits of reuse but making it happen is another matter entirely. However, it is potentially easier to implement in testing than it is in development. This is because test cases can be generated directly from requirements whilst that is not generally the case for application software.

The key point to supporting reusability in a testing environment is software that will identify what test cases (along with the scripts, data and expected

results) are relevant to the particular software being developed and which can scan an existing library of test cases to identify if a test case already exists and, if not, will create and store it for future use. In other words, reusability needs to be automated: simply creating a library of potentially reusable test components will not be sufficient because we know that human nature means that it will not be properly utilised. Worse, you end up with more and more test cases, which makes the identification of reusable components even more difficult, meaning less and less reuse. So, test case management needs to be automated.

However, it isn’t simply a question of reusability for new test components; you also need to cater to the fact that there will typically be (tens of) thousands of existing assets. These will need to be scanned by the test case management software so that you can identify both duplicates and out-of-date test components that are no longer valid. It would probably be sensible if you could identify where test cases were simply versions of an underlying, more fundamental test case. In any case you need software to help you perform governance against your existing test assets. If you were running this in stand-alone mode you would then want the ability to compare any new test case with what already exists. However, in a truly automated environment you would want the software that captured your requirements to automatically look for relevant test cases in your repository, only generating new test components if these were not already available.

In practice, the total automation described is not available, but this is the direction in which the market is, and should be (in our opinion), moving.

Traceability

Change is a constant and that creates problems for testing environments. In particular, there is a problem with test components and particularly test scripts, especially if these are written and maintained manually. This is because the cost of manually maintaining scripts can be prohibitive. In fact, and we can generalise here – not just to testing but to any sort of development process – maintenance, especially manual maintenance, is to be avoided if at all possible. For example, we spoke not so long ago with a company that had so many ETL (extract, transform and load) scripts – tens of thousands – that the department literally had no time to do anything other than to maintain those scripts.

The question is therefore how to avoid manual maintenance or, at least, to minimise it (even in an automated environment you will need some sort of manual oversight)? The short answer, beyond saying simply “automation”, is that you need traceability from requirements, through test cases and test scripts, to the data and your expected results. And it is only if you have this traceability right through the environment that you can successfully expect to implement automation that will take away many of those expensive manual processes.

What does that mean in practice? It means that when a requirement is changed then relevant amendments are automatically generated (or retrieved if you have appropriate test cases in your library) for all the subsequent steps in the testing process: the test cases, the scripts, the data that needs to be run through the tests, and the results that you expect from those tests.

Achieving this is not as simple as stating it. In reality you are going to need an integrated suite of tools that starts with requirements capture and test case and test script generation, combined with test data capabilities. In this latter case you will want test data management only for in-house data but will need to integrate with service virtualisation for third party data or other data that is not easily accessible. This suggests that point products will not be suitable as these will only resolve, at best, a part of the problem. As an aside, and taking a broader perspective – from requirements through development to testing and provisioning – then we are talking about an integrated suite of products that combine to provide continuous delivery.

Going back to the testing environment, if we are assuming that traceability is implemented throughout, then everything depends on the original requirements, or changes thereto. This in turn means that requirements need to be captured in a formal manner in some sort of model (where the word “model” is used



here in its most abstract sense) so that when you make a change to the model then everything else is automatically updated by virtue of the traceability back to the model. Note that this doesn’t necessarily mean generating new test cases; it may mean recognising that there is an existing test case that can be reused to support the current scenario.

What is required is joined-up thinking or, more accurately, joined-up product suites. This should include test case management (managing reusable testing assets) as well as the other capabilities discussed.

Impact analysis

Supporting changes through a test automation framework is one thing but it’s not the whole story. Changes in themselves can have implications beyond their obvious scope. It is entirely easy to make what seems like an innocuous little change only to find that the whole application breaks. The risk of this happening tends to be proportional to the complexity of the application you are changing – the more complex the application, the more likely it is to collapse – the last straw on the camel’s back.

This is one good reason to adopt a style of application upgrades that focuses on incremental upgrades rather than major releases: fewer, smaller changes are less likely to disrupt an existing system. However, regardless of the approach taken you would like to be able to know what impact any particular change might have on the rest of the application.

In principle, the knock-on effects of making a change should be captured and handled by the developers of the application in question but, in practice, this will often be left to testers. However, how do testers know what impact any particular change might have elsewhere? In practice, the simple answer is that they don’t. In reality, it is more or less impossible to catch unintended consequences if you are using manual testing methods because you won’t be able to see linkages across the application. There are many documented cases of companies implementing new systems that have failed precisely because of unforeseen consequences. The most well-known are those that bring down company web sites for days or weeks, costing not just revenues but loss of prestige and, in some cases, fines.

Conversely, an automated test framework should be able to identify any implications of a change, provided that it has been used to capture the entire application with all of its requirements. Then, when a change is made to those requirements you should be able to perform impact and dependency analyses to see how these changes will impact on the rest of the system. If you are going to assess these manually then ideally they should be available in graphical format (we would recommend actually using a graph) as well as in a more tabular manner, to suit different users’ preferences. However, better yet, what you would like the software to do is to identify all the relationships and dependencies that are altered because of this change, and then generate (or retrieve from a library) all relevant test cases, scripts and so on. This should mean that not just the direct effects of a change are tested but also its indirect effects.

Of course there is a coverage issue here. Typically not everything gets tested. But this is because of the time and manpower required for testing, especially manual testing. Automation offers the promise of exhaustive testing. If you test everything then you’ll know that everything works. Test less than everything and you won’t.

Speed of delivery

Consider Uber. Its service is challenging traditional markets for taxis all over the world. Love it or hate it, it is disruptive. And similar things are happening across industry sectors. In particular, customer-facing applications are rapidly evolving, with companies adopting cloud-first (or, more broadly, continuous delivery) development cycles whereby new releases come out every quarter. They don’t have lots of new features in each release – they are incremental – but they rapidly accumulate new features and functionality. This is the world you live in and the old “we’ll outsource development because it is cheap” model no longer works, except perhaps for some back-office applications. Thus, for many applications, time to market and speed of delivery is crucial. However, that can’t be at the expense of bugs and

functions that don’t work, so proper testing still needs to be done, but it needs to be done in such a way that does not slow down release cycles.

How do you achieve this? One answer would be to hire more testers. A lot more. An alternative would be to make existing testers more productive. We recommend the latter, but how is this to be accomplished? This isn’t a complex question – the answer to making workers more productive has been the same for more than 200 years – you make workers more productive by giving them tools that help them work more efficiently. Specifically, it is tools that help to automate some or all manual processes – whether it’s the Spinning Jenny or the production line – that enable improved productivity. In the case of testing: test automation frameworks.

In effect, testers should be the operators of test automation tools: leaving the routine tasks of identifying what test cases need to be run, the generation of the relevant test scripts and so forth, to be handled by the automation software. Testers then become like DBAs: they are managing the testing environment, resolving any issues that arise, focusing on high level problems where genuine expertise is required, liaising with developers and users, and so on. This is how we envision the future, but we are not there yet. While test automation framework vendors are now truly attempting to grasp the automation nettle in a holistic way, fully functional, fully integrated product suites are not available yet, so there will be a gradual evolution for testers, which will give them time to adapt and to learn new skills. We do, however, believe that the day of the traditional tester is numbered: not just because the technology is emerging that can automate many testing tasks but because the market requires application delivery in timescales that simply can’t be met through traditional manual testing.
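The impact and dependency analysis discussed earlier lends itself to a simple illustration: if requirements and components are held as a dependency graph, finding everything a change might touch, directly or indirectly, is a graph traversal. The sketch below is hypothetical (the component names and the shape of the dependency map are ours alone, not taken from any product):

```python
from collections import deque

def impacted(dependents, changed):
    """Return every node reachable from `changed` via 'depends on me' edges."""
    seen, queue = set(), deque([changed])
    while queue:
        node = queue.popleft()
        for dep in dependents.get(node, []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen

# Toy dependency map: pricing depends on the tax rules,
# and checkout and invoicing both depend on pricing.
dependents = {
    "tax-rules": ["pricing"],
    "pricing": ["checkout", "invoicing"],
}

# A change to the tax rules impacts pricing directly, and checkout and
# invoicing indirectly: all of them need retesting, not just pricing.
assert impacted(dependents, "tax-rules") == {"pricing", "checkout", "invoicing"}
```

The breadth-first traversal is what catches the indirect effects the paper warns about: a manual reviewer can see the first edge, but it is the transitive closure that breaks applications.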



Conclusion

Automated testing frameworks are not what they were even a couple of years ago. Then we were looking at disparate, poorly integrated point solutions that addressed a bit of the testing environment but were not in any sense holistic. In that environment you could see some sense in the argument that maybe a manual approach, at least in some areas, had benefits over adopting automation. In our opinion that point of view is no longer valid and, within three to five years, it will be completely discredited. We expect to see a considerable leap forward as more and more testing becomes automated.

The truth is that you can skimp on automated testing and deploy manual testers because you think it is cheaper (it isn’t) or because it is easier to get operational than capital budget, but this is simply short-sighted. You might get lucky and never have the sort of outages that some organisations are infamous for, but you probably won’t. Your competitors that adopt automated testing frameworks will get to market faster than you can and with applications of higher quality. The truth is that you need to get that competitive advantage before they do.

FURTHER INFORMATION
Further information about this subject is available from
www.bloorresearch.com/update/2262

About the author
PHILIP HOWARD
Research Director / Information Management

Philip started in the computer industry way back in 1973 and has variously worked as a systems analyst, programmer and salesperson, as well as in marketing and product management, for a variety of companies including GEC Marconi, GPT, Philips Data Systems, Raytheon and NCR.

After a quarter of a century of not being his own boss Philip set up his own company in 1992 and his first client was Bloor Research (then ButlerBloor), with Philip working for the company as an associate analyst. His relationship with Bloor Research has continued since that time and he is now Research Director focused on Data Management.

Data management refers to the management, movement, governance and storage of data and involves diverse technologies that include (but are not limited to) databases and data warehousing, data integration (including ETL, data migration and data federation), data quality, master data management, metadata management and log and event management. Philip also tracks spreadsheet management and complex event processing.

In addition to the numerous reports Philip has written on behalf of Bloor Research, Philip also contributes regularly to IT-Director.com and IT-Analysis.com and was previously editor of both Application Development News and Operating System News on behalf of Cambridge Market Intelligence (CMI). He has also contributed to various magazines and written a number of reports published by companies such as CMI and The Financial Times.

Philip speaks regularly at conferences and other events throughout Europe and North America.

Away from work, Philip’s primary leisure activities are canal boats, skiing, playing Bridge (at which he is a Life Master), dining out and foreign travel.



Bloor overview
Bloor Research is one of Europe’s leading IT research, analysis and consultancy organisations, and in 2014 celebrated its 25th anniversary. We explain how to bring greater Agility to corporate IT systems through the effective governance, management and leverage of Information. We have built a reputation for ‘telling the right story’ with independent, intelligent, well-articulated communications content and publications on all aspects of the ICT industry. We believe the objective of telling the right story is to:

• Describe the technology in the context of its business value and the other systems and processes it interacts with.

• Understand how new and innovative technologies fit in with existing ICT investments.

• Look at the whole market and explain all the solutions available and how they can be more effectively evaluated.

• Filter ‘noise’ and make it easier to find the additional information or news that supports both investment and implementation.

• Ensure all our content is available through the most appropriate channels.

Founded in 1989, we have spent 25 years distributing research and analysis to IT user and vendor organisations throughout the world via online subscriptions, tailored research services, events and consultancy projects. We are committed to turning our knowledge into business value for you.

Copyright and disclaimer
This document is copyright © 2015 Bloor. No part of this publication may be
reproduced by any method whatsoever without the prior consent of Bloor Research.
Due to the nature of this material, numerous hardware and software products have been
mentioned by name. In the majority, if not all, of the cases, these product names are
claimed as trademarks by the companies that manufacture the products. It is not Bloor
Research’s intent to claim these names or trademarks as our own. Likewise, company
logos, graphics or screen shots have been reproduced with the consent of the owner and
are subject to that owner’s copyright.
Whilst every care has been taken in the preparation of this document to ensure that the
information is correct, the publishers cannot accept responsibility for any errors or omissions.



2nd Floor
145–157 St John Street
LONDON EC1V 4PY
United Kingdom

Tel: +44 (0)207 043 9750


Web: www.BloorResearch.com
email: info@BloorResearch.com
