
Success with Test Automation

Bret Pettichord
bret@pettichord.com
www.pettichord.com
Revised version of a paper originally presented at Quality Week, San Francisco, May 1998.
Version of 26 June 2001.
1.0 Abstract
This paper describes several principles for test automation. These principles were used to develop a
system of automated tests for a new family of client/server applications at BMC Software. This work
identifies the major concerns when staffing test automation with testers, developers, or contractors. It
encourages applying standard software development processes to test automation. It identifies criteria for
selecting appropriate tests to be automated and advantages of a Testcase Interpreter. It describes how
cascading failures prevent unattended testing. It identifies the most serious bug that can affect test
automation systems and describes ways to avoid it. It circumscribes reasonable limits on test automation
goals.
2.0 Introduction
Over the past several years, tools that help programmers quickly create applications with graphical user
interfaces have dramatically improved programmer productivity. This has increased the pressure on
testers, who are often perceived as bottlenecks to the delivery of software products. Testers are being
asked to test more and more code in less and less time. They need to dramatically improve their own
productivity. Test automation is one way to do this.
This paper presents advice on how to staff, plan and design a test automation system for GUI
applications. I will present some ideas that have helped me make testsuites that are reliable and easy to
maintain. These concepts and suggestions will be demonstrated by reference to the system built to test
BMC's MetaSuite family of products. These client/server applications provide an easy-to-use interface for
administering open systems databases.
3.0 Taking Test Automation Seriously
Software testers, under pressure to do more testing in less time, often find themselves rushed and eager
for anything that will give them a hand. Their fear and desperation lead them to seek a "silver bullet" that
will help them regain a rational pace to their work. Test automation is often this silver bullet. The fantasy
is that it will make their job simpler and easier and help them contend with unrealistic schedules.
Automating tests for a graphical user interface presents significant difficulties not found in character-based
interfaces, much less command line interfaces or programming interfaces (APIs). Graphical user
interfaces tend to be made of complex components and tend to be constantly redesigned during the
development process. Significant successes have been made in delivering test tools that are able to
identify and manipulate graphical user interfaces. QA Partner by Segue Software and XRunner and
WinRunner by Mercury Interactive are examples.
Most experienced software testers have excellent insights into the kinds of practices that are critical for
software development. They see the consequences of their developers coding before designing, neglecting
code reviews or ignoring feedback: more bugs and slipped schedules. However, testers' clear insight into
the development process often fades when they undertake test automation. They fail to realize that test
automation is itself a software development activity and that it therefore needs to adhere to critical
development practices. Like developers who are stressed for time, testers are prone to skipping steps,
taking the big leap, and blindly hoping to come out with a success at the other side. But frustration and
disappointment are more likely consequences.
Some testers even try to develop test automation in their spare time. I have rarely seen this deliver
testsuites that can bear much weight.
Major challenges for GUI test automation are maintainability and reliability. These challenges demand
that a software engineering approach be taken to address them. Different teams define the software
development process differently. This is fine. The important thing to remember is that this process should
be used with test automation as well.
Let's look at some of the ways testers and developers are tempted to underestimate test automation. First
off, the name "test tools" makes them sound simple and easy to use. But they are really development
environments specialized for creating testing programs.
Many testers do not have strong programming skills. This, combined with the repetitive nature of much
testing, leads people to use record and playback techniques. There indeed are many tools that allow
scripts to be recorded and then played back, using screen captures for verification. The problem that
always crops up is that the layouts change, invalidating the screen captures, and then the interface
controls change, making playback fail. Now the scripts must be re-recorded from scratch. Record and
playback tools provide an easy way to create throwaway testsuites.
It is particularly hard for people who have had so much success using these techniques with character-based
interfaces to understand the difficulty of using them with graphical interfaces.
Recording tests and reviewing the created code is an excellent way to learn how to use a test tool. But
most successful test automators move on to more generalized testing methods.
Someone using a test tool, whether she has the title of tester or developer, needs to understand how to
design, develop and maintain software. Test automation systems will need to be tested themselves and
should be subjected to frequent review and improvement to make sure that they are actually addressing
the testing needs of the organization.
4.0 Who Should Automate Tests?
I have been a GUI test automation specialist for a couple different testing groups in the past four years
and currently head up a small team of automation specialists. I think it makes a lot of sense to have
someone focus on these aspects of the testing. These are some of my thoughts on how to select someone
to do this task.
A test automator needs to have good testing and development skills. She needs to understand testing
requirements and the situations testers face. Automating tests should not be an opportunity to impose a
particular testing methodology on testers. They will find fault with it and refuse to use it. Rather, it needs
to build from existing testing methodologies.
If the test automator has a background as a tester, you will need to ask if she will show the necessary
discipline. Sometimes testers who really want to be programmers seize on test automation as a way for
them to develop these skills. It is important that they have good judgment and not get carried away with
the programming. Be wary if they are hoping to automate all of their testing. They need to be focusing on
the big wins. They may keep improving the automation when it is actually good enough for the job.
A test automator needs to know how to develop software. She needs to be particularly aware of issues
such as maintenance and reliability. Making the system easy to update with changes to the product under
test should be the priority.
If her background is as a developer, you will need to ask if she has understanding and respect for the
testing process.
Sometimes you can find independent contractors who have well-matched backgrounds. With them, you
will have to ask who will be maintaining the testing system after they have left. Maintenance will be a
critical challenge.
If you have access to good training in test automation, take advantage of it. Developments in test
automation are being made very quickly. It's often cheaper to pay for people to learn from someone else's
mistakes than to have them make the same mistakes themselves.
Don't assign automation to rejects from programming or testing. Unless test automation is done by
someone who is motivated and working closely with the rest of the development group, it will not
succeed.
5.0 Choosing What To Automate
A colleague once asked me if I thought it was theoretically possible to automate 100% of testing. This
question threw me. Theoretically, testing should never be necessary in the first place. The programs
should be coded correctly first off! But we're not really talking about the theoretical. Testing is the art of
the pragmatic. Good software testing requires good judgment.
Look at where you are spending a lot of time testing manually. This is what you should consider
automating. If you are a conscientious tester, you are often aware of things you wish you had time
to test. Don't focus your automation efforts on these tasks that may otherwise go untested. It's usually a
mistake. For one thing, you only want to code automation after you have a determined testing procedure.
If you've run through your manual tests a couple times, you probably have a solid sense of how to
proceed with the testing. You don't want to automate tests you haven't run much manually and then realize
that there is a more effective procedure. This may mean reworking your automation or just giving up on
it. Another problem with automating tests you haven't found the time to run manually is that you're not
likely to find the time to maintain them later. Test automation always breaks down at some point. Make
sure the tests are important enough that you will be devoting the time to maintain them when the
opportunity arises. First, get your testing procedures and practices standardized and effective. Then, look
at how you can use automation to improve your productivity.
Testing can be boring. Sometimes people want to automate even casual tests that will only be executed a
couple times. The thought is that automation may allow them to avoid the tedium. But there are snags.
Complications arise. Test automation will itself have to be debugged. This may often take as much time as
it would to just execute the tests manually in the first place. I use a rule of thumb that says test automation
will take at least ten times the time it would take to execute the tests manually. Don't fall into the
temptation to automate simply to make your job more exciting.
Many testers really want to be programmers. Test automation may provide these people with an
opportunity to develop their programming skills. If you find yourself in this circumstance, try to stay clear
on your goals and how to sensibly use your programming skills to accelerate the testing. Don't let your
attention get caught up in the programming for its own sake. Don't try to automate all of your testing. Don't
be a perfectionist. If your program does the testing, great. It can have a couple bugs. You're not creating a
commercial product; if there are fatal bugs, you'll be around to fix them. Later I'll discuss some advice
regarding the parts of test automation that must be reliable. If you are intent on becoming a programmer,
hone your testing skills while you seek a programming position. They will be extremely valuable when
you get programming work.
Performance is an area of test automation where effort is easily wasted. Performance improvements
generally depend on assumptions about the product under test. But since maintainability is usually
fostered by making as few assumptions about how the product works as is practical, improving
performance often reduces maintainability. Don't do this. Make maintainability a priority. I have seen
performance improvements to test automation that had to be ripped out when the product changed. They
made it harder to maintain the testsuite and didn't last long anyway. The best way to allow more tests to
be run in the day is to design your testing system to allow for unattended testing. I have more to say about
this later.
Test automation efforts have failed by trying to do too much. You are better off trying to get first results
quickly. This has several advantages. It will allow you to quickly identify any testability issues. These
may require cooperation from developers and may take some time to resolve. The sooner they are
identified, the better off you are. You may also wish to just automate the most laborious part of the
testing, leaving such items as setup and verification to be done manually.
Starting small and building from there will allow you to validate your testing strategy. Getting early
feedback from testers, programmers and build engineers will allow you to grow your testsuite into
something that will benefit many people. Demonstrate to your programmers the assumptions you are
depending on.
If you've been asked to specialize in test automation, you may find a tendency to try to get a big
chunk all worked out before handing it off. Fight this tendency. The sooner you hand off bits that others
can use in their daily testing, the better off you all will be. Test automation may require testers to
rethink how they are doing their job. The sooner they start this, the better. Late in a testing cycle, they
may find themselves putting all their energy into keeping up with the product changes and the bug list.
They may not put much energy into learning how to use the automation, and you may find yourself
frustrated when it goes underused and unappreciated.
First, try to get one test to run. Then build up your testsuite. Realize that the people using test automation
don't care how much code you've written to support the testing. All they will notice is how many tests are
automated and how reliable the testsuite is. After you have a small suite, you can work on generalizing
code and putting together a more general testing system.
Build acceptance tests are the tests that are run before software is moved into testing. These should be
able to be run quickly, often in less than an hour. These tests are excellent candidates for automation.
These tests are run frequently. These tests should try to cover as many different functions of the product
as possible. The aim is breadth, not depth.
It's worth the investment to make your acceptance tests easy to set up and run. Once the acceptance
testsuite is put in place, smart programmers will want to run it on their code before checking it in. Finding
their own bugs will help them avoid embarrassment and a lot of rushing around. As a tester, you will want
to do all that you can to make this happen.
In my experience, making good decisions about what to automate can be critical to successful test
automation. There are often many simple things that can give big paybacks. Find them.
6.0 Building Maintainable Testsuites
One of the biggest challenges to using automated testsuites is keeping them functional as the product
interface changes. The BMC Meta testsuite uses several techniques to allow it to be easily maintained as
our product interface changes.
We use QA Partner for our test automation. This includes tools for creating "window declarations" which
map window controls to logical names. If the name of a control changes, we can update the window
declaration. All of our test scripts will now work with the revised product. Another nice feature of this
tool is that it can often locate moved controls. In these cases, we don't need to make any changes to our
testsuite. Window declarations are just one technique we use to keep our testsuites easy to maintain.
When we have different dialogs with largely the same controls, we use QA Partner's class hierarchy to set
up a superclass that contains the common controls. Only the unique items are defined for the individual
dialogs. This also simplifies maintenance.
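The window-declaration idea maps concrete control names to logical names, and the superclass technique factors shared controls out of the individual dialog declarations. Here is a minimal sketch of both ideas in Python rather than QA Partner's own language; the `Control` class and every control name below are hypothetical stand-ins, not BMC's actual declarations.

```python
# Hypothetical sketch of "window declarations": logical names for controls,
# with shared controls factored into a superclass. All names are invented
# for illustration; QA Partner uses its own declaration syntax.

class Control:
    """A GUI control identified by a tag that may change between builds."""
    def __init__(self, tag):
        self.tag = tag

class EditDialog:
    """Superclass declaring controls common to all edit dialogs."""
    ok_button = Control("OK")
    cancel_button = Control("Cancel")
    name_field = Control("Name:")

class EditTableDialog(EditDialog):
    """Only the controls unique to this dialog are declared here."""
    add_column_button = Control("Add Column...")

# If developers relabel the OK button, only the superclass declaration
# changes; every test script keeps using the logical name `ok_button`.
dialog = EditTableDialog()
print(dialog.ok_button.tag)          # inherited common control
print(dialog.add_column_button.tag)  # dialog-specific control
```

The payoff is the same one the paper describes: a relabeled or moved control means one edit to a declaration, not edits scattered across every test script.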
It is very important for us to be able to anticipate user interface changes. We keep in close communication
with our developers on this. They understand that late and unannounced changes to the user interface may
delay testing. By knowing what parts of the interface remain subject to design changes, we are able to use
common routines that can be easily updated later.
We also use common code for testing equivalent features in the different products in the Meta product
family. Different products administer different databases, such as Oracle, Sybase, DB2 and the like.
Generalizing the common aspects has made it easy for us to port our testing apparatus to new products. It
has also reduced the total lines of code, thus reducing the amount of code to maintain and debug.
Probably the most significant thing we do to reduce the maintenance burden is we write our testcases in
an abstract testing language. Testcases only indicate the objects to be manipulated in the testcase. We
build an interpreter and test driver to actually execute the testcases. This has many advantages, only one
of which is easing maintenance. I'd like to explain how we do this in more detail.
7.0 Building Test Interpreters
A test interpreter and driver allow testcases to be easily specified by a domain expert. Many testers do not
want to have to deal with the various details of a testing tool. A test interpreter allows them to focus on
testing requirements rather than automation implementation. The testcase indicates the details of what to
test. The test interpreter and driver actually do the testing. They know how to do the testing. This
arrangement is particularly effective when different people are responsible for the testcases and the test
automation.
Here is an example of one of our testcases.
TEST CASE ID: dtbed101
EDIT TABLE: SA3.TB03
ADD COLUMN(S)
Position NAME               TYPE      NULLS DEFAULT FOR BIT DATA LOGGED COMPACT
11       NEW_CHAR_COL_LEN18 CHAR(100) Y     N
Note: Column name is of maximum length and is of type char.
--------------------------------------------------------------------------------
EDIT TABLE: SA4.TB03
ADD COLUMN(S)
Position NAME               TYPE      NULLS DEFAULT FOR BIT DATA LOGGED COMPACT
11       NEW_INTEGER_COL    INTEGER   Y     N
Note: Column is of type integer.
--------------------------------------------------------------------------------
EDIT TABLE: SA3.TB04
ADD COLUMN(S)
Position NAME               TYPE      NULLS DEFAULT FOR BIT DATA LOGGED COMPACT
35       NEW_LOB_COL_AT_END CLOB(56)  Y     N       -            Y      -
Note: Column is of type clob. Logged is the default.
The format for this testcase was originally based on documentation that was meant strictly for use by
other people. We formalized it and made it be the actual input language for our test driver. Let me review
some of the advantages to using this kind of format for specifying testcases.
Testcases are independent of implementation details. Many of our testcases are specified long before our
testers know what the user interface will look like. Also, when interface changes are made later, the
testcases don't have to be updated. The testcases only need to be changed when the product requirements
change.
Testers don't have to know test tool details. We leave this for our automation specialists and those testers
who have an interest in the testing tools.
Testing can focus on requirements. We are able to leverage the knowledge of our domain experts. We
document the testcase format and this is what they need to know.
Tests are self-documenting. Since the format was originally based on documentation, the tests are easy to
review and understand.
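To illustrate how a testcase reader might consume a format like this, here is a rough Python sketch. It is not BMC's QA Partner implementation; the parsing rules (colon-separated headers, whitespace-separated column rows) are guesses based on the sample testcase above.

```python
# Minimal sketch of a testcase reader for a readable testcase format.
# Field names come from the sample testcase; the parsing rules are
# assumptions made for illustration.

def parse_testcase(text):
    """Parse one testcase into a dict of header fields plus column rows."""
    case = {"columns": []}
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("TEST CASE ID:"):
            case["id"] = line.split(":", 1)[1].strip()
        elif line.startswith("EDIT TABLE:"):
            case["table"] = line.split(":", 1)[1].strip()
        elif line and line[0].isdigit():
            # Position, name, and type are the first three columns.
            pos, name, ctype = line.split()[:3]
            case["columns"].append({"position": int(pos),
                                    "name": name, "type": ctype})
    return case

sample = """\
TEST CASE ID: dtbed101
EDIT TABLE: SA3.TB03
ADD COLUMN(S)
Position NAME TYPE NULLS DEFAULT
11 NEW_CHAR_COL_LEN18 CHAR(100) Y N
"""
case = parse_testcase(sample)
print(case["id"], case["table"], case["columns"][0]["name"])
```

A reader like this is the piece that lets testers write in the documented format while the driver worries about the tool.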
Let me give some more information about how we develop our interpreters and drivers.
The testcase format of which I gave an example above is actually the second generation. An example of
the first-generation format is given below.
|A|dtbed101|E|TB||SA3|TB03||C|011|colname||NEW_CHAR_COL_LEN18
| |dtbed101|E|TB||SA3|TB03||C|011|datatype||CHAR(100)
| |dtbed101|E|TB||SA3|TB03||C|011|null||Y
| |dtbed101|E|TB||SA3|TB03||C|011|default||N
| |dtbed101|E|TB||SA4|TB03||C|011|colname||NEW_INTEGER_COL
| |dtbed101|E|TB||SA4|TB03||C|011|datatype||INTEGER
| |dtbed101|E|TB||SA4|TB03||C|011|null||Y
| |dtbed101|E|TB||SA4|TB03||C|011|default||N
| |dtbed101|E|TB||SA3|TB04||C|035|colname||NEW_LOB_COL_AT_END
| |dtbed101|E|TB||SA3|TB04||C|035|datatype||CLOB(56)
| |dtbed101|E|TB||SA3|TB04||C|035|null||Y
| |dtbed101|E|TB||SA3|TB04||C|035|default||N
| |dtbed101|E|TB||SA3|TB04||C|035|logged||Y
This format is easier for our automation to parse and execute but is more difficult to write. It was difficult
for our testers to create these files, and they would often make mistakes like putting items in the wrong
column. This would lead to a long process of debugging testcases. This was often frustrating for testers,
who would rather be finding bugs in the product than in their own test data.
The testcase above was actually created from the first example by means of a translator that converts the
information from the readable format into the column format. We wrote our translator in Perl. The
column format is then the input to the test driver written in QA Partner.
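As a rough illustration of the translator's job, the sketch below emits pipe-delimited first-generation records from one parsed column description. BMC's real translator was written in Perl; this Python version guesses the field layout from the sample above, so treat the details as hypothetical.

```python
# Sketch of the translator step: turn one parsed column description into
# the pipe-delimited records the test driver consumes. The field layout
# is inferred from the sample first-generation records; the real Perl
# translator may differ.

def to_records(case_id, table, column):
    """Emit first-generation records for one added column."""
    schema, tbl = table.split(".")
    # Positions appear zero-padded to three digits in the sample format.
    prefix = f"| |{case_id}|E|TB||{schema}|{tbl}||C|{column['position']:03d}|"
    return [prefix + f"colname||{column['name']}",
            prefix + f"datatype||{column['type']}"]

records = to_records("dtbed101", "SA3.TB03",
                     {"position": 11, "name": "NEW_CHAR_COL_LEN18",
                      "type": "CHAR(100)"})
for r in records:
    print(r)
```

Splitting the work this way keeps the fussy, error-prone format out of testers' hands: people write the readable format, and a small program does the column bookkeeping.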
Let me review several components of our testing system that allow us to support easy-to-read testcase
formats.
Translator. This converts the testcase into a format that is easier for a program to read. Our translator is
written in Perl. QA Partner does not support the kind of string-matching commands (regular expressions)
that this required.
Testcase Reader. This QA Partner function reads and parses the intermediate format. Errors in the test
data are reported.
Test Driver. This QA Partner script starts the testcase reader and executes the lines of the testcase. It
embodies the testing methodology and conventions. The driver finishes by triggering the product to
generate a work script. Our test drivers also test for things like memory leaks.
Window Declarations. This QA Partner file identifies the controls. Any special handling for non-standard
controls can be specified here.
Verification. A separate Unix shell script compares the generated work script against a predefined
baseline. The script automatically ignores insignificant differences such as time stamps.
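The verification step can be sketched as a comparison that filters out insignificant lines before diffing. This is a Python stand-in for the Unix shell script; treating "insignificant" as "contains a time stamp" is an assumption made for illustration.

```python
# Sketch of baseline verification that ignores insignificant differences.
# Here "insignificant" means any line containing an HH:MM:SS time stamp,
# an assumed stand-in for whatever the real shell script filters out.
import re

TIMESTAMP = re.compile(r"\d{2}:\d{2}:\d{2}")

def significant(lines):
    """Drop lines whose only variable content is a time stamp."""
    return [l for l in lines if not TIMESTAMP.search(l)]

def verify(generated, baseline):
    """Return True when the scripts match apart from time stamps."""
    return significant(generated) == significant(baseline)

baseline  = ["-- generated 10:04:31", "ALTER TABLE SA3.TB03", "ADD COLUMN X"]
generated = ["-- generated 11:15:02", "ALTER TABLE SA3.TB03", "ADD COLUMN X"]
print(verify(generated, baseline))
```

Filtering before comparing is what keeps harmless run-to-run variation from being reported as a failure, which matters for the reliability concerns discussed next.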
8.0 Keeping Your Testsuite Reliable
You will want to be able to depend on your automated testsuite. You will want to be able to run it
whenever new builds need to be tested. You will want to trust its accuracy.
The absolute worst thing that can happen to an automated testsuite is that it reports that tests have passed
when there are in fact problems with the functions they should be testing. This situation is called a false
positive. If your testsuites get a reputation for false positives, no one will want to use them. They'd rather
do the testing manually.
Your test automation will have bugs in it. You will be able to live with this if automation bugs result
either in aborts (the test didn't run) or false negatives (a reported failure but no product bug). Generally,
you will want to be manually reproducing reported problems anyway.
The goal of test automation should be to reduce the number of tests that need to be run manually, not to
eliminate manual testing entirely. As long as no more than a small portion result in false negatives,
automation will have saved you significant amounts of time. Now you know the likely places to find
bugs: the testcases that failed.
When you are coding your testsuite, you will want to take some measures to ensure that errors are not
hidden or ignored. That is generally the cause of false positives. The easiest way of introducing this kind of
problem is to make a mistake with exception handling. Double-check any exception handling code you
write, or better yet have someone else review it. The rule of thumb is "When in doubt, call it out." What
this means is that unless your code is sure of the cause of an error, it should not suppress the reporting of
the error. I have also seen false positives result from switch statements that did not include a default
clause.
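The "when in doubt, call it out" rule can be sketched as follows: only errors whose cause the driver fully understands are converted into an abort verdict, and the result-classification code carries an explicit default so unknown outcomes surface as failures rather than silent passes. The `KnownTimeout` class and the verdict codes are invented for illustration.

```python
# Sketch of the "when in doubt, call it out" rule. Only errors whose
# cause the driver understands (here, a hypothetical KnownTimeout) are
# suppressed into an "abort" verdict; anything else is re-raised so a
# broken test can never be silently reported as passing.

class KnownTimeout(Exception):
    """An error class the driver fully understands."""

def run_step(step):
    """Run one test step, classifying only well-understood failures."""
    try:
        step()
    except KnownTimeout:
        return "abort"      # understood: the test did not run
    except Exception:
        raise               # not understood: call it out, never swallow
    return "pass"

def verdict(result, expected):
    """Classify a result code. The explicit default clause keeps
    unexpected codes from slipping through as false positives."""
    if result == expected:
        return "pass"
    elif result == "abort":
        return "abort"
    else:
        return "fail"       # default: report, don't hide

print(run_step(lambda: None))
```

The bare `raise` and the final `else` are the whole point: both paths guarantee that an unanticipated condition becomes visible instead of becoming a false positive.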
Being very careful to avoid false positives will allow you to experiment more with other parts of your
testing system. It does not have to be perfect to be useful.
I have also found that usability is important for perceived reliability. If it is easy to set up the tests
incorrectly without getting good error messages, testers won't think well of the testing system. They will
be frustrated if they review the results of a test run only to realize they forgot to set a parameter, meaning
that they tested the wrong thing. If this happens repeatedly, they will realize that test automation is not
saving them time. Be sensitive to these issues. Think about how to reduce the likelihood of these kinds of
problems.
The biggest way to keep your testsuites reliable is to design them so that they can be run unattended. This
will allow you to test overnight or while you are at meetings. It will also allow you to be testing on
multiple machines at once.
9.0 Using Error Recovery Systems
A common problem that prevents productive unattended testing is cascading failures. This is what
happens. One of the tests fails. The product is left in an unexpected state (perhaps an error dialog is
displayed). Subsequent tests attempt to run but can't because the error dialog is still displayed. To run the
testsuite, the product must be reset and the testsuite must be restarted after the failed test. Successive
failures will require the testsuite to be rerun again and again.
An error recovery system is the solution to this problem. It automatically records the error and then
restores the application. This allows successive tests to run reliably. Cascading failures are avoided,
allowing unattended testing to occur.
A recovery system needs to know what the "base state" of the product is. After each testcase, it will check
to see if it is in this state. If not, it will reset the product.
Testcases must be set up properly in order to take advantage of a recovery system. Each testcase must
start and end at the predefined base state. The base state for our products is just the main window that
appears after starting it. This is a somewhat different approach from manual testing. Typically, manual
testers don't reset the product before each test, but rather run several tests in succession, only resetting the
product if a problem arises.
One consequence of this is that tests cannot depend on earlier tests. This principle is called "testcase
independence." If a test is meant to pick up where another finished, it will have to be redesigned. One
option is to include the repetition of the earlier test as part of the second test. Adhering to testcase
independence will allow your tests to work with a recovery system and run unattended. They will also be
able to be run singly or in a group.
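The interaction between testcase independence and a recovery system can be sketched as a driver loop that records each failure and then restores the base state before the next test runs. The `App` class here is a hypothetical stand-in for the product under test and the tool's recovery hooks.

```python
# Sketch of an error recovery system enforcing a "base state" between
# testcases, so one failure cannot cascade through an unattended run.
# App and its methods are invented stand-ins for the product under test
# and the test tool's recovery hooks.

class App:
    def __init__(self):
        self.state = "main window"          # the predefined base state
    def at_base_state(self):
        return self.state == "main window"
    def reset(self):
        self.state = "main window"          # e.g. close stray dialogs

def run_suite(app, testcases):
    """Run independent testcases, recording errors and restoring the
    base state after each one."""
    results = []
    for case in testcases:
        try:
            case(app)
            results.append("pass")
        except Exception as err:
            results.append(f"fail: {err}")  # record the error...
        if not app.at_base_state():
            app.reset()                     # ...then restore the product
    return results

def good(app):
    app.state = "main window"               # ends at the base state

def bad(app):
    app.state = "error dialog"              # leaves a dialog up
    raise RuntimeError("unexpected dialog")

print(run_suite(App(), [good, bad, good]))
```

Because the second `good` still passes after `bad` fails, the failure does not cascade; that is exactly what testcase independence plus a recovery system buys for unattended runs.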
Testcases that are not independent can cause difficulties unrelated to unattended testing. One may fail
when run as part of a battery, but pass when run alone. You may decide that the bug is irreproducible,
when the problem is really with the testcase.
We've built our error recovery system from code included with QA Partner as well as code we've written
ourselves. Our recovery system has been customized to recognize our products' error dialogs and to be
able to close down various product dialogs.
We began building ours very early. It helped us debug our test drivers and test data. Errors that our
recovery system logs and handles include scripting errors, product error dialogs, unexpected product
behavior, and product crashes.
We also included code to handle time-out situations, but we are planning to remove this. The recovery
system has not been able to cleanly shut down the product during a time-out. We've decided it's better just
to wait.
Don't try to engineer your recovery system to recover from all "possible" errors. Instead, make it handle
the actual errors you are encountering.
10.0 Living with Test Automation
Here are some recent results on our use of test automation during a busy testing cycle. In one week in
March of 1998, we ran the following number of testcases of the type described above.
Product    Unique testcases run  Total runs
Product A  399                   1005
Product B  73                    155
Product C  57                    84
You can see that on average tests were run about three times during that week.
By dedicating people to test automation, we have multiplied the other testers' productivity and have
focused on the testing areas where the big wins are.
We have accepted "Good Enough" test automation. We've realized that, just like with the products we sell,
software quality is a combination of reliability, features and timeliness. Understanding the importance of
avoiding false positives has allowed us to make reasonable quality trade-offs.
We have tightened our testing cycle. Our testing is more consistent and repeatable. We are able to test on
more configurations. We are constantly improving our test battery.
11.0 Bibliography
QA Partner, Segue Software (http://www.segue.com).
WinRunner and XRunner, Mercury Interactive (http://www.merc-int.com).
"An Approach to Functional Test Automation," Kerry Zallar (http://www.crl.com/~zallar/testing.html).
Testing Computer Software, Second Edition, Chapter 11, Cem Kaner, Jack Falk and Hung Quoc Nguyen
(Van Nostrand Reinhold).
12.0 About the Author
Bret Pettichord is a consultant, writer and trainer specializing in software testing and test automation. He
edits the Software Testing Hotlist, writes a column for Stickyminds.com, and frequently speaks at
industry conferences. He is the founder of the Austin Workshop for Test Automation.
