
The Magazine for Agile Developers and Agile Testers

October 2010

www.agilerecord.com | free digital version | made in Germany | issue 4


Editorial

Dear readers,

Oh my goodness, the summer is already over! Terrible. After a great summer travelling around Spain, we now have to start working hard again. I will still be travelling, but I will not be able to avoid the wet weather. No travels to Australia or South America for me.

This first issue after the summer again provides new articles based on experiences, basics and perspectives in the Agile world. We really appreciate that you, our loyal readers, recommend our magazine to all your interested contacts. We now have quite a good number of readers around the world. We are very proud of all the great articles we received, from Argentina across the USA and Europe to India. It is amazing to see how powerful it is to be well connected, and to see how the community works and recommends the work of the agile authors. A big thank you to all of you, and especially to the authors, who are doing a great job in sending us their papers.

The Agile Testing Days in Berlin are approaching, and we are very enthusiastic about them. We have great speakers lined up. Come see them. If you cannot make it to Berlin, follow the event on Twitter under #agiletd.

Unfortunately, we have had to postpone the Oz Agile Days. Due to internal and planning circumstances it is impossible to run the event on the planned dates. We will inform you as soon as we have new dates.

There is a new certification for Agile testers called CAT – Certified Agile Tester. It is going to be presented at the Agile Testing Days by Dr. Stuart Reid and iSQI – International Software Quality Institute. The syllabus has been developed with the help of the industry; employees of companies like HP, IBM, Microsoft, Nokia, Barclays, Zurich, Xing and Mobile.de are involved. The syllabus looks very pragmatic, and the exam has three parts: a practical exam, a written exam (no multiple choice) and a personal assessment of the trainee. I'm really looking forward to hearing more about it. You will find more information at www.agile-tester.org.

Last but not least, I want to congratulate Lee Copeland on his email signature. It says: "Life is short ... forgive quickly, kiss slowly, love truly, laugh deeply... and never regret making someone smile." I like it very much. I think that if we all thought and acted that way, we would be living in a much better world.

I hope you can forgive us quickly, love truly and laugh deeply, but for the moment we will pass on kissing slowly! We appreciate your understanding :-).

Best regards

José Díaz

Contents
Editorial  3
Burning Down the Reports by Alex Rosiu  6
Scrum in a Traditional Project Organization by Remi-Armand Collaris, Eef Dekker & Jolande van Veen  8
Values for Value by Tom Gilb & Lindsey Brodie  14
Why must test code be better than production code? by Alexander Tarnowski  24
Test Driven Development Vs. Behavior Driven Development by Amit Oberoi  26
Supporting Team Collaboration: Testing Metrics in Agile and Iterative Development by Dr. Andreas Birk & Gerald Heller  30
Descriptive Programming – New Wine in Old Skins by Kay Grebenstein & Steffen Rolle  37
The Role of QA in an Agile Environment by Rodrigo Guzman  40
Managing the Transition to Agile by Joachim Herschmann  46
Developing Software Development by Markus Gärtner  49
Listen each other to a better place by Linda Rising  52
Add some agility to your system development by Maurice Siteur & Eibert Dijkgraaf  55
Applying expert advice by Eric Jimmink (illustrations: Waldemar van den Hof)  58
Myths and Realities in Agile Methodologies by Mithun Kumar S R  61
Do You Need a Project Manager in an Agile Offshore Team? by Raja Bavani  63
Acceptance TDD and Agility Challenges by Ashfaq Ahmed  65

Loosing my Scrum virginity… what not to do the first time by Martin Bauer  69
Lessons Learned in Agile Testing by Rajneesh Namta  73
The 10 Most Popular Misconceptions about Exploratory Testing by Rony Wolfinzon & Ayal Zylberman  77
The double-edged sword of feature-driven development by Alexandra Imrie  80
Continuous Deployment and Agile Testing by Alexander Grosse  82
What Donkeys Taught Me About Agile Development by Lisa Crispin  84
Masthead  86
Index Of Advertisers  86

Burning Down the Reports
by Alex Rosiu

For years I have relied on different reports, pie-charts and graphs for tracking my team's progress, as well as my own. I had filters for the tasks which were in progress, for the ones overdue, and for the ones assigned to myself. I had pie-charts rendering the bug count for each team member, and also for each product architectural component. I used a graph that told me how many issues were created, compared to the number that were resolved over a period of time.

But why…

Well, I guess we were driven to this kind of management by the way we used to work. Our teams were built around the omniscient Team Lead figure, the one who had all the answers to any question, from anyone – "above" or "below"; the one who was reporting, and was always being reported to. Each developer was focused on his or her components, not "needing" too much knowledge of others', so the lead played a key role in keeping things together and working well. Testing was perceived as a somewhat external job, as there was a completely separate team for that, with its own schedule and own way of doing things. The developers pretty much thought their job was done as soon as the product of their work was delivered to the testers.

You have probably figured out by now that the team leads had all the reasons in the world to make sure that they were aware of the detailed status, problems, and interdependencies with other teams, while still struggling to avoid doing micro-management. Tough job.

This is exactly why we had to change something.

And we did. After countless attempts to correctly understand and adopt the Agile practices and principles, after reading many books, attending some conferences and training courses, and after having failed so many times, we have now finally come to a point where we're doing things much differently than before.

The all-knowing team leads turned into trusting Scrum masters, relying on their people, who now need to work closely together to get the job done. The teams are now composed of both developers and testers, so everybody finally got on the same page. We don't develop anything that can't be tested right away; developers receive useful input from the testers right from the analysis and design phase. Basically, everyone works together from the start until the end. This doesn't only happen inside a team, but to some extent, throughout the entire "big" team.

I still have my reports and pie-charts, but I don't recall when I've last checked any of them. Instead, we all use a continuously updated Burndown Chart. Not only does it tell us how much work we have done, but, most importantly, it tells us how much we have left to do until we're done. This also means that we can use it to tell whether we are able to finish everything we've planned, and also, what we are going to get done by the end of the sprint. Combined with the task board, containing cards for what's to do, in progress and what is done, we've got all the information we need.

If anybody asks about progress, problems or status, anyone in the team can answer, because we're all together in this. We are all estimating, planning, re-estimating when needed, together, so we all know and care about what's happening at any time.

And it's all in one single chart. ■
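For readers curious about the arithmetic such a chart hides, here is a small, purely illustrative sketch (invented numbers, not the tooling the author describes): it takes the remaining effort recorded each day, compares it with the sprint's ideal line, and projects whether the planned work will fit into the sprint.

```python
# Illustrative sketch of burndown arithmetic (hypothetical data, not the author's tooling).

def burndown_report(total_points: float, sprint_days: int, remaining_by_day: list[float]) -> None:
    """Print remaining work vs. the ideal line and a naive end-of-sprint forecast."""
    for day, remaining in enumerate(remaining_by_day, start=1):
        ideal = total_points * (sprint_days - day) / sprint_days   # straight-line reference
        burn_rate = (total_points - remaining) / day               # observed points per day
        left_at_end = max(remaining - burn_rate * (sprint_days - day), 0.0)
        status = "on track" if left_at_end == 0 else f"~{left_at_end:.0f} points at risk"
        print(f"day {day:2d}: {remaining:5.1f} points left (ideal {ideal:5.1f}) -> {status}")

if __name__ == "__main__":
    burndown_report(total_points=40, sprint_days=10,
                    remaining_by_day=[38, 35, 33, 29, 27, 24])
```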
> About the author

Alex Rosiu
I am a Technical Project Manager at BitDefender, a security software company. As a computer science graduate, I started my career as a software developer, moving on to technical lead as soon as my experience allowed me to. As a Certified Scrum Master, I have great interest in mentoring and implementing the Agile practices in our company. I am an active member of the Agile Alliance and the Scrum Alliance, and I also enjoy sharing my professional experiences on my personal blog: http://alex.rosiu.eu.
Contact: alexrosiu@gmail.com

Scrum in a Traditional Project Organization
by Remi-Armand Collaris & Eef Dekker & Jolande van Veen

Scrum is a framework for managing Agile teams. An important practice in Scrum is that the development teams are self-organizing. This means that the team determines and optimizes its approach to its specialist work. Development teams are enthusiastic about it, that's for sure. They quickly apply Scrum, but quite soon it becomes clear that the organization in which they work has to accommodate the new approach. The question is how to do that.

The answer of an enthusiastic Scrum expert will be: just do Scrum, and everything will work out fine. There are, however, quite a few aspects of project management which Scrum does not cover, like resourcing, budget affairs, business case, communication with stakeholders, project setup and support. These aspects could be filled in with the help of other methods, for example with a management method like PRINCE2.

In this article we show how, in our work as consultants with Ordina, we have embedded Scrum teams in existing PRINCE2 project organizations. This is a challenge, for Scrum introduces a couple of new roles which do not clearly map to roles in the existing organization. Moreover, applying Scrum asks for a different mindset, which means that responsibilities of existing roles will be different as well.

1. Traditional Project Situation
First, let's sketch a standard PRINCE2 project organization. The Project Board issues a project assignment to a Project Manager, who establishes teams and, if the size of the project demands it, appoints Team Managers. The Project Manager translates the project assignment into work packages for the different teams. The Team Manager translates the received work package into tasks for the individual team members. This situation is visualized in figure 1.¹

¹ We don't show reporting and communication lines here.

[Figure 1: Traditional project organization. Project Board (Executive, Senior Supplier, Senior User), Project Manager, Team Managers 1-3, Team 1.]

2. Project Organization With Scrum
In a project organization with Scrum, the Project Board issues a project assignment to a Project Manager, who in turn forms a Scrum Team for the software development part of the project assignment. This has consequences for the project organization and the responsibilities of the different roles in the organization.

As before, the Project Manager is still the one who hires people in the project. In that sense, there still exists a hierarchical relationship. However, there is no work package in the sense of a well-defined amount of functionality which is given to the team, but there is a work package as assignment in terms of the goal to be reached. Figure 2 shows the organization chart. It is important to note that the vertical line to the Scrum Team differs somewhat in meaning from the traditional situation in figure 1: the Project Manager supports the team to self-organize. We'll explain this later.

Scrum helps the team to deliver value to the business early.
The team is fully transparent to all stakeholders with regard to the tasks they are performing, progress they have made and impediments that slow the team down². In order to do so, Scrum introduces the following management roles:
• Product Owner
• Scrum Master
• Self-organizing team

² We do not explain Scrum here at length, but confine to organizational aspects. More information about Scrum can be found at www.scrumalliance.org/pages/what_is_scrum.

The Product Owner represents all stakeholders for the team. He defines desired pieces of functionality and prioritizes them during the project, he decides on data for delivery to production, he guards the coherence of the deliveries and accepts the incrementally growing solution.

The Scrum Master facilitates the team and sees to it that the team applies the Scrum practices and does not lose focus. He also removes impediments that emerge in the daily work of the team, either by taking action himself or by invoking others, higher in the organization's hierarchy.

The self-organizing team is a management unit. Scrum assigns responsibilities which traditionally are in the hands of a Team Manager or Project Manager to the team as a whole³.

³ Details can be found in the table in the next section.

The Scrum Master works closely together with the team. This role resembles that of the traditional Team Lead, with an important difference: the Scrum Master does not stand 'above' the team but on the same level. He stimulates the team to organize itself and to commit to a clearly defined, realistic work load. The Scrum Master does not receive a Work Package, but the team puts it together per Sprint (iteration) from the high priority Product Backlog items.

The Product Owner is also at the same level as the team. In this role he is the single point of contact for decisions about priority and functionality, but he cannot order the team to take on more work than the team is willing to commit to. In compiling and prioritizing the Product Backlog, the Product Owner is driven by the value which the Backlog items deliver to the demand organization. Both the Scrum Master and the Product Owner are involved in the daily practices of the team and must be prepared and able to spend a sufficient amount of time fulfilling their roles.

[Figure 2: Project organization with one Scrum team]

3. A Shift Of Management Responsibilities
In the previous section, we have shown how the new Scrum roles fit in the existing project organization. Adding these roles and omitting the Team Manager role means that the management responsibilities in the project organization are reallocated. In order to do this, we need to know what is expected of the Scrum roles. Briefly: the Product Owner represents the business, the team is self-organizing and the Scrum Master facilitates the team. Figure 3 shows a view of management responsibilities for the traditional and for the Scrum situation.

There are tasks that do not change in the Agile situation. These are mainly the tasks at the level of Project Board and Project Manager. There is, however, one important shift: the main driver is no longer scope, but goal. In the traditional situation, teams are focused on the solution which is specified beforehand (scope). The goal of the business is not so clearly in focus for the team. In the Agile situation, goal is the central issue and scope is less important. The business goal must be reached; the scope may change during the project. The shift from scope-driven to goal-driven management goes hand in hand with a different mindset for all stakeholders in all layers of the organization.⁴

⁴ For more on this change of mindset and practices, see also our article 'Software process improvement with Scrum, RUP and CMMi: three reasons why this is a bad title', Agile Record, April 2010. More information at www.scrumup.eu/publications.html.

In the Agile situation, some Project Board and Project Manager responsibilities are delegated to the Product Owner. This is true for 'Prioritize requirements' (within the tolerances set by the Project Board) and 'Align the Stakeholders'.

In the task 'Distribute work packages at team level' we assumed in the previous section that there is one self-organizing team. If there are more such teams, this task means that these teams together receive the work package and in discussion with the Product Owner decide which team will execute which part of the work package. In the traditional situation, the Team Manager is responsible for the team; in the Agile situation the team takes over the responsibility for distributing tasks and ensuring commitment.

A comparison of the responsibilities of Team Manager and Scrum Master shows that their roles are alike in various ways. The most important difference is that in the Agile situation the team itself gives commitment for the work, while traditionally the Team Manager performs this task.
The Scrum Master does not commit to the content, but coaches the process, facilitates the team and works on impediments signalled by the team.

[Figure 3: Management responsibilities in traditional and in Agile situations]

4. Changing The Organization
Introducing Scrum into an organization requires not only a new distribution of management responsibilities, but also a new mindset in the whole organization. An Agile Coach can help accomplish this.

In Scrum literature you find a double task for the Scrum Master: on the one hand he facilitates the team, on the other he coaches the project organization in introducing Scrum. In the organization chart of figure 2, the Scrum Master is positioned below the Project Manager. This position at the same level as the team sometimes hinders a good execution of the coaching activities with regard to Project Manager, Project Board members and the rest of the organization. He does not have a mandate in his position. Moreover, not all Scrum Masters are well-equipped to do the coaching. For an adequate coaching of Project Manager, Project Board members and the rest of the organization, thorough knowledge of Agile development methods (Scrum, XP), specialist methods (SDM, DSDM, RUP) and management methods (PRINCE2) is needed, as well as experience with software process improvement initiatives in complex organizations. We therefore advise having a separate person take over the coaching role, the Agile Coach, who is given a clear mandate to coach the organization.

To summarize, it is one thing to have a clear picture of what shift of responsibilities is needed. It is quite another to make an organization go through this shift and to capture the right mindset. This takes more than following a workshop or reading a book. The Agile Coach can support the growth of commitment within the organization and channel this commitment to the right actions. ■

> About the authors

Remi-Armand Collaris is a consultant at Ordina, based in The Netherlands. He has worked for a number of financial, insurance and semi-government institutions. In recent years, his focus shifted from project management to coaching organizations in adopting Agile using RUP and Scrum. An important part of his work at Ordina is contributing to the company's Agile RUP development case and giving presentations and workshops on RUP, Agile and project management. With co-author Eef Dekker, he wrote the Dutch book RUP op Maat: Een praktische handleiding voor IT-projecten (translated as RUP Tailored: A Practical Guide to IT Projects), second revised edition published in 2008 (see www.rupopmaat.nl). They are now working on a new book: ScrumUP, Agile Software Development with Scrum and RUP (see www.scrumup.eu).

Eef Dekker is a consultant at Ordina, based in The Netherlands. He mainly coaches organizations in implementing RUP in an agile way. Furthermore he gives presentations and workshops on RUP, Use Case Modeling and software estimation with Use Case Points. With co-author Remi-Armand Collaris, he wrote the Dutch book RUP op Maat: Een praktische handleiding voor IT-projecten (translated as RUP Tailored: A Practical Guide to IT Projects), second revised edition published in 2008 (see www.rupopmaat.nl). They are now working on a new book: ScrumUP, Agile Software Development with Scrum and RUP (see www.scrumup.eu).

Jolande van Veen is project manager at Ordina. She is a certified Scrum Master and has a lot of experience in managing projects and line organisations in complex environments that use Agile.

We hope to hear from you soon and are happy to work on any improvement suggestions you might have.
Values for Value
by Tom Gilb & Lindsey Brodie

Part 2 of 2: Some Alternative Ideas On Agile Values For Delivering Stakeholder Value (Part 1, Value-Driven Development Principles and Values – Agility is the Tool, Not the Master, last issue)

The Agile Manifesto (Agile Manifesto, 2001) has its heart in the right place, but I worry that its advice doesn't go far enough to really ensure delivery of stakeholder value. For instance, its first principle, "Our highest priority is to satisfy the customer through early and continuous delivery of valuable software", focuses on "the customer" rather than the many stakeholders whose views all need consideration. It also places the focus on the delivery of "valuable software" rather than the delivery of "value" itself (if still in doubt about such a focus, the Agile Manifesto itself states "working software"). These are the same problems that have been afflicting all software and IT projects long before agile appeared: too 'programmer-centric'. Code has no value in itself; it is perfectly possible to deliver bug-free code of little value. We can deliver software functions, as defined in the requirements, to the 'customer' – but still totally fail to deliver critical value to the many critical stakeholders.

I should probably at this point mention that I do agree with many of the ideals of the agile community. After all, my 1988 book, 'Principles of Software Engineering Management' (Gilb, 1988), has been recognized as a source for some of their ideas. I also count several of the 'Agilistas' as friends. It is just that what I see happening in everyday agile practices leads me to believe a more explicit formulation is needed. So in this article, I set out my set of values – modified from the Agile values – and provide ten associated guidelines for delivering value. Feel free to update them and improve them as you see the need.

Perhaps a distinction between 'guidelines', 'values' and 'value' is in place. 'Guidelines' or 'principles' provide advice: 'follow this guideline and things will probably turn out better'. 'Values' are deep-seated beliefs of what is right and wrong, and provide guidance as to how to consider, or where to look for value. 'Value' is the potential perceived benefit that will result from some action (for example, the delivery of a requirement) or thing. For example, you might follow the guideline of always buying from a known respected source. Your values concerning financial affairs and the environment will probably influence what you buy. Your perceived or actual benefits of what you will gain from your purchases (say, more time, lower costs, and increased satisfaction) reflect their value to you.

Here then is a summary of my values for building IT systems – agile or not! These values will necessarily mirror to some degree the advice given in the principles set out in an earlier article (Gilb & Brodie, 2010), but I will try to make a useful distinction between them. I consider there are four core values – simplicity, communication, feedback, and courage.

[Figure 1. Value to stakeholders: most agile practices today usually fail to identify or clarify all the stakeholders, and their stakeholder value!]

Simplicity

1. Focus on delivering real stakeholder value
I believe in simplicity. Some of our software methods, like CMMI (Capability Maturity Model Integrated), have become too complicated. See, for example, (CMMI, 2008). Agile is at least a healthy reaction to such extremes. But sometimes the pendulum swings too far in the opposite direction. Einstein was reputed to have said (but nobody can actually prove it! (Calaprice, 2005)), "Things" (like software methods) "should be as simple as possible, but no simpler". My main argument with agile practice today is that we have violated that sentiment. We have oversimplified. The main fault is in the front end to the agile process: the requirements.

[Figure 2. Some examples of stakeholders: the source is re-crear.org, a voluntary-sector client of the author]

The current agile practices put far too much emphasis on user, use cases and functions. They say 'value' and they say 'customer', but they do not teach or practice this in a reasonable way for most real projects. They are 'too simple'.

I'll return to discuss this point later, but one of the main failings of the agile process is not recognizing that setting the direction – especially stating the qualities people want and the benefits (the received value) they expect when they invest in an IT system – is key. Iterative and incremental development without such direction is much less effective.

If you want to address this failing, then the simplest thing you can do is to identify and deal with the top few dozen critical stakeholders of your system. To deal with 'the user' and/or 'the customer' only is 'too simple'. The 'top few critical stakeholders' can be brainstormed in less than 30 minutes, and refined during the project, as experience dictates. It is not a heavy 'overhead'. It is one of the necessities for project success.

The next step is to identify the primary and critical quality requirements of each stakeholder. As a rough measure, brainstorming this to get an initial reasonable draft is an hour's work for a small group. For example:

End Users: Easy To Learn, Easy To Use, Difficult to Make Errors, Fast System Response, Reliable.
Financial Admin: Up-to-Date, Accurate, Connectivity to Finance Systems.
IT Maintenance: Easy to Understand, Easy to Repair, Defect-Free.

Note: this is just a start! We need to define the requirements well enough to know if designs will work and if projects are measurably delivering value incrementally! The above 'nice sounding words' are 'too simple' for success. For brevity, I'm not going to explain about identifying scales of measure and setting target quality levels in this paper; see (Gilb, 2005: especially Chapter 5) for further detail.

You can refine the list of quality requirements as experience dictates. You can also often reuse lists of stakeholders, and their known quality requirements, in other projects within your domain. Doing this is NOT a heavy project overhead. The argument is that both exercises (identifying the stakeholders and their quality requirements) save time and aid successful project completion. It is part of 'the simplest path to success'. There are, by implication, even simpler paths to failure: just don't worry about all the stakeholders initially – but they will 'get you' later.

Communication
Now we come to my second value, communication. I am sure we all believe in 'good communication', and I suspect most people are probably under the illusion that 'communication is not perfect, but it is pretty good, maybe good enough'. However, my experience worldwide in the IT/software industry is that communication is typically poor.

2. Measure the quality of communication quantitatively
I have a simple way of measuring communication that never fails to surprise managers and technical people alike. I use a simple (5 to 30 minutes) specification quality control (SQC) exercise, on 'good requirements' of their choice. See (Gilb & Graham, 1993; Gilb, 2005: Chapter 10) for further detail on this method.

SQC is a really simple way to measure communication. I just ask the participants to look at a selected text of 100 to 300 words. I prefer the 'top level most critical project requirements' (because that will be most dramatic when they are shown to be bad!). I get their agreement to 3 rules:

1. The text (words and phrases) should be unambiguous to the intended readership.
2. The text should be clear enough to test successful delivery of it.
3. The 'objectives' should not specify proposed designs or architecture for getting to our objectives.

The participants have to agree that these rules are logically necessary. I then ask them to spend 5 to 30 minutes identifying any words, terms or phrases which fail these rules, and ask them to count the number of such failures (the 'specification defects'). I then collect the number of defects found by each participant. That is in itself enough. In most cases, everyone has found 'too many' defects: typically 5 to 40 defects per 100-300 words. So this written communication – though critical – is obviously 'bad'. Moreover, it gets even more serious when you realize that the best defect finder in a group probably does not find more than 1/6 of the defects actually provably there, and a small team finds only 1/3 of them! (Gilb & Graham, 1993).

The sad thing is that this poor communication is pervasive within IT projects, and clear communication (we can define this as "less than one defect per 300 words potentially remaining, even if unidentified") is exceptional. Clear communication is in fact only the result of persistent management attention to reducing the defects. One of my clients managed to reduce their level of major defects per page from 82 to 10 in 6 months. The documentation of most IT projects is at about 100-200 defects per page, and many in IT do not even know it.

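The figures quoted above (a best checker finding perhaps one sixth of the defects that are provably there) invite a quick back-of-the-envelope extrapolation. The sketch below is my own illustration rather than any formula defined by the SQC method: it takes the defect counts a small review group reports on a short sample and estimates how many defects probably remain.

```python
# Rough illustration (not a defined SQC formula): extrapolate total defects from
# what a small review group finds in a 100-300 word requirements sample.

def estimate_defects(found_by_checker: list[int], sample_words: int,
                     best_checker_yield: float = 1 / 6) -> dict[str, float]:
    best = max(found_by_checker)                 # the best defect finder in the group
    estimated_total = best / best_checker_yield  # 'finds no more than 1/6 of what is there'
    return {
        "estimated_defects_in_sample": round(estimated_total, 1),
        "estimated_defects_per_300_words": round(estimated_total * 300 / sample_words, 1),
    }

# Invented review results: three checkers looked at a 250-word sample.
print(estimate_defects(found_by_checker=[12, 7, 9], sample_words=250))
# -> about 72 defects estimated in the sample, roughly 86 per 300 words:
#    a long way from the 'less than one defect per 300 words' defined above as clear.
```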
3. Estimate expected results and costs in weekly steps and get quantified measurement feedback on your estimates the same week
My experience of humans is that they are not good at making estimates for IT systems: for example, at estimating project costs (Gilb, 2010a). In fact, rather than estimating, it is far simpler and more accurate to observe what happens to the cost and quality attributes of actual, real systems as changes are introduced. One great benefit with evolutionary projects (which include both iterative cycles of delivery and feedback on costs and capability, and the incrementing of system capability) is that we can let the project inform us about what's actually happening, and we can then relate that to our estimated quality levels and estimated incremental costs: we can learn from unexpected deviation from plans how good we are at estimating (Gilb, 2005: Chapter 10). However, in order to support evolutionary project measurement, we have to do better than the typical way of measuring – that is, better than using the rate of user story 'burn-down'. We have to measure the real top-level stakeholder value that is being produced (or not). Yet most IT projects fail to specify upfront what stakeholder value is expected. In such a situation, it is difficult to learn.
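To make 'specify upfront what stakeholder value is expected' concrete, here is one invented example of what a quantified quality requirement could look like, loosely in the spirit of the scales of measure Gilb points to (Gilb, 2005: Chapter 5). Every name and number in it is hypothetical, not taken from the article.

```python
# Invented example of a quality requirement with an explicit scale, baseline and target,
# loosely in the spirit of the quantified requirements the article argues for.

EASE_OF_LEARNING = {
    "stakeholder": "End Users",
    "ambition": "New users master the routine tasks quickly",
    "scale": "minutes for a first-time user to complete a defined routine task unaided",
    "meter": "usability session with five recruited first-time users, median time",
    "baseline": 45,   # hypothetical measurement of the current system
    "goal": 10,       # hypothetical target for the next release
}

# With a scale, a baseline and a goal stated, a weekly measurement can be compared
# against the plan instead of against a vague phrase such as 'Easy To Learn'.
```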
To give an example of better communication, see Figure 3, which is an extract from a case study at Confirmit (Johansen & Gilb, 2005). Using the Evo Agile method, 4 small development teams with 13 developers in total worked on a total of 25 top-level critical software product requirements for a 12-week period with weekly delivery cycles. Figure 3 is a snapshot of cycle 9 of 12. If you look at the "%" under "Improvements", you can see that they are on track to meeting the required levels for delivery – which in fact they are very good at doing. This is a better way of tracking project progress than monitoring user story burn-down rates – they are directly tracking delivery of the quality requirements of their stakeholders.

[Figure 3. Extract from a case study at Confirmit.]

Feedback

4. Install real quantified improvements for real stakeholders, weekly
I value getting real results. Tangible benefits that stakeholders want! I value seeing these benefits delivered early and frequently. I have seen one project where user stories and use cases were delivered by an experienced Scrum team, systems development successfully delivered their code, but there was just one 'small' problem: the stakeholder business found that their sales dropped dramatically as soon as the fine new system was delivered (Kai Gilb, 2009). Why? It was taking about 300 seconds for a customer to find the right service supplier. Nobody had tried to manage that aspect. After all, computers are so fast! The problem lay in the total failure to specify the usability requirements quantitatively. For example, there should have been a quality requirement, 'maximum time to find the right supplier will be 30 seconds, and average 10 seconds'. The system needed better requirements specified by the business, not the Scrum team. As it was, the project 'succeeded' and delivered to the wrong requirements: the code was bug-free, but the front end was not sufficiently usable. It was actually a management problem, not a programming problem. It required several levels of management value analysis above the developer level to solve!

Stakeholders do not EVER value function (user stories and use cases) alone. They need suitable quality attributes delivered, too. Traditional agile practice needs to take this on board. It is also very healthy to prove that you can deliver real value incrementally, not just assume that user stories are sufficient – they are NOT. Such real value delivery means that we must apply total systems thinking: people, hardware, business processes – much more than code.

5. Measure the critical aspects in the improved system, weekly
Some, in fact most, developers seem to never ever measure the critical aspects of their system! And we wonder why our IT system failure rates are notoriously high! Some developers may carry over to agile a Waterfall method concept of measuring critical quality attributes (such as system performance) only at the end of a series of delivery cycles – before a major handover, or contractual handover. I think we need to measure (test) some of the critical quality attributes every weekly cycle. That is, we measure any of the critical quality attributes that we think could have been impacted, and not just the ones we are targeting for improvement in the requirements. Measurement need not be expensive for short-term cycles. We can use appropriate simplification methods, such as sampling, to give early indications of progress, the order of magnitude of the progress, and any possible negative side effects. This is known as good engineering practice. The Confirmit project (Johansen & Gilb, 2005), for example, simply decided they would spend no more than 30 minutes per week to get a rough measure of the critical quality attributes. So they measured a few, each week. That worked for them.

6. Analyze deviations from value and cost estimates
The essence of 'feedback' is to learn from the deviation from your expectations. This requires using numbers (quantification) to specify requirements, and it requires measuring numerically, with enough accuracy to sense interesting deviations. To give an example, see Figure 4, which is from the Confirmit case study previously mentioned.

[Figure 4. Another extract from the Confirmit case study]

In this case, when the impact of the 'Recoding' design deployed in Step 9 was almost twice as powerful as expected (actual 95% of the requirement level was met as opposed to the 50% that was estimated), the project team was able to stop working on the Productivity attribute and focus their attention for the 3 remaining iterations before international release on the other requirements, like Intuitiveness, which had not yet met their target levels. The weekly measurements were carried out by Microsoft Usability Labs. This feedback improved Confirmit's ability to hit or exceed almost all value targets, almost all the time. I call this 'dynamic prioritization'.

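The 'twice as powerful as expected' observation is simply a ratio of measured impact to estimated impact on the baseline-to-goal scale. A minimal sketch of that bookkeeping is shown below; the 50% and 95% figures echo the Confirmit example just described, while the baseline and goal values are invented for illustration.

```python
# Minimal sketch of the deviation bookkeeping behind 'dynamic prioritization'.
# The 50% / 95% figures echo the Confirmit example above; baseline and goal are invented.

def impact_percent(baseline: float, goal: float, measured: float) -> float:
    """Percent of the baseline-to-goal distance covered by a measured level."""
    return 100.0 * (measured - baseline) / (goal - baseline)

# Hypothetical scale for the Productivity requirement: baseline 100 units of work per
# hour, goal 180. A weekly measurement of 176 sits at 95% of the way to the goal.
actual_pct = impact_percent(baseline=100, goal=180, measured=176)
estimated_pct = 50.0   # the impact the team had estimated for the 'Recoding' step

print(f"measured {actual_pct:.0f}% of target vs {estimated_pct:.0f}% estimated; "
      f"deviation factor {actual_pct / estimated_pct:.1f}")
# With the requirement nearly met, effort can shift to attributes that are still
# far from their targets, such as Intuitiveness in the case study.
```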
You cannot learn about delivery of the essential stakeholder quality attributes any other way – it has to be numeric. However, numeric feedback is hardly mentioned, and hardly practiced, in agile systems development. Instead, we have 'apparent numeracy' by talking about velocity and burn-down rates – these are indirect measures.

All the quality attributes ('-ilities', like reliability, usability, security) or work capacity attributes (throughput, response time, storage capacity) are quantifiable and measurable in practice (Gilb, 2005: Chapter 5), though few developers are trained to understand that about the 'quality' requirements (for example, ask how they measure 'usability').

Courage
Courage is needed to do what is right for the stakeholders, for your organization, and for your project team – even if there are strong pressures (like the deadline) operating to avoid you doing the right thing. Unfortunately, I see few signs of such courage in the current agile environment. Everybody is happy to go along with a weak interpretation of some agile method. Many people don't seem to care enough. If things go too badly – get another job. If millions are wasted – who cares, 'it's not my money'. But if the project money were your money, would you let things continue as they are? Even when your family home is being foreclosed on, and you cannot feed or clothe your children very well, because your project is $1 million over budget?

7. Change plans to reflect quantified learning, weekly
One capability, which is implicit in the basic agile notion, is the ability to change quickly from earlier plans. One easy way to do this is to have no plans at all, but that is a bit extreme for my taste. The feedback we get numerically and iteratively should be used to attack 'holy cows'. For example, say the directors, or other equally powerful forces in the organization, had agreed that they primarily wanted some particular quantified quality delivered (say, 'Robustness'), and it was clear to you from the feedback that a major architectural idea supported by these directors was not at all delivering on the promise. Courage would be to attack and change the architectural idea.

Of course, one problem is that these same directors are the main culprits in NOT having clear numeric critical objectives for the quality values of the system. The problem is that they are not even trained at Business School to quantify qualities (Hopper & Hopper, 2007), and the situation may be as corrupt or political as described in 'Plundering the Public Sector' (Craig & Brooks, 2006). In my experience, however, the major problem is closer to the project team, and is not corruption or politics, or even lack of caring. It is sheer ignorance of the simple fact that management must primarily drive projects from a quantified view of the top critical objectives (Gilb, 2008b). Intelligent, but ignorant: they might be 'champions' in the area of financial budgets, but they are 'children' when it comes to specifying quality.

One lesson I have learned, which may surprise most people, is that it seems if you really try to find some value delivery by the second week and every week thereafter, you can do it. No matter what the project size or type. The 'big trick' is that we are NOT constructing a large complex system from scratch. We invariably leverage off of existing systems, even those that are about to be retired, which need improvement.

[Figure 5. Concepts of weekly delivery cycles with stakeholder feedback. From HP, a client applying the Evo method on a large scale (Cotton 1996; May & Zimmer 1996; Upadhyayula, 2001)]

We make use of systematic decomposition principles (Gilb, 2010b; 2008a; 2005: Chapter 10). The big trick is to ignore the 'construction mode' that most developers have, and focus instead on the 'stakeholder value delivery' mode.

Policies for Evo Decomposition
• PP1: Budget: No Evo cycle shall exceed 2% of total budget before delivering measurable results to a real environment.
• PP2: Deadline: No Evo cycle will exceed 2% of total project time (that's one week, for a one-year project) before it demonstrates practical measurable improvement, of the kind you targeted.
• PP3: Priority: Evo cycles which deliver the most 'planned value' to stakeholders, for the 'resources they claim', shall be delivered first, to the stakeholders. Do the juicy bits first!
Figure 6. Evo decomposition policies

See Figure 6 (Gilb, 2010b) for my advice to top managers, when they ask me how they can support deploying the Evo method and getting rapid results: put in place these decomposition policies as guidance. Demand this practice from your development teams. If they complain, re-train or re-place. No excuses! They will just delay necessary results if not led by management. History is clear.

8. Immediately implement the most-valued stakeholder needs by next week
Don't wait, don't study (analysis paralysis), and don't make excuses. Just do it! This attitude really is courageous. In development environments, where managers are traditionally happy to wait years with no results at all, it takes courage to suggest we should try to start delivering the value stream immediately and continuously. It is rather revolutionary. Yet surely no one would argue it is not desirable?

Part of being courageous is having the courage to say you are sure we will succeed in finding small (weekly) high-value delivery increments. The issue is that most people have no training and no theory for doing this. Most people have never seen it happen in practice. Agile developers now have a widely established practice of delivery of functionality (user stories) in small increments. That is a start, culturally, towards breaking work down into smaller timescales. But as I pointed out earlier (several times!), functions are not the same thing as value delivery to stakeholders. Assuming you can deliver reasonable value for the effort spent (the costs) – week after week – a surprising thing happens:
• People cease to care about 'the deadline'
• People cease to ask for estimates of the monetary budget
• You are strongly encouraged to keep on going, until value is less than costs
• You end up delivering far more real value than other projects do, well before 'the deadline' (that would have been set, and would have been overrun)
• Management shifts focus from budget and costs to return on investment (ROI)

I sometimes simplify this method by calling it the '1.1.1.1.1.1' method, or maybe we could call it the 'Unity' method:

Plan, in 1 week,
To deliver at least 1%
Of at least 1 requirement
To at least 1 real stakeholder
Using at least 1 design idea,
On at least 1 function of the system.

The practical power of this simple idea is amazing. If you really try, and management persists in providing encouragement and support, it almost always works. It sure beats waiting for weeks, months, and years, and 'nothing happens' of any real value for stakeholders. As a consultant, I always have the courage to propose we do this, and the courage to say I know our team will find a way. Management is at least curious enough to let us try (it costs about a week or two). And it always works. Management does not always actually go for real delivery the second week. There can be political, cultural and contractual constraints, but they get the point that this is predictably doable.

Delivering value to 'customers' is in fact what the agile people have declared they want to do, but in my view they never really took sufficient steps to ensure that. Their expression of value is too implicit, and (of course!) the focus should be on all the stakeholders.

9. Tell stakeholders exactly what quantified improvement you will deliver next week (or at least next release!)
Confirmit used impact estimation (IE) [4, 10, 19] to estimate what value would be delivered the next week (see Figure 3). I think they did not directly tell the affected stakeholders what quality levels they predicted. However, most of the stakeholders got to see the actual delivered results each quarter. And the results were incredibly good. In fact, once Confirmit realized they could continually get such great improvements, they did brag about it numerically on their website!

Since it is quite unpredictable to fully understand what precise quality improvements are going to result and when, it is perhaps foolhardy (rather than courageous) to announce to your stakeholders precisely what they are going to get weekly/fortnightly/monthly in the next cycle. However, based on your understanding of the improvements you are getting each cycle, it is safe to announce what improvements in value you are going to deliver in the next major release!

10. Use any design, strategy, method or process that works well quantitatively in order to get your results
Be a systems engineer, not just a programmer (a 'softcrafter' (Gilb, 1988)). Have the courage to do whatever it takes to deliver first-class results! In current agile software practices, the emphasis is on programming, and coding. Design and architecture often mean only the program logic and the application architecture.
Agile developers often do not include in their design aspects such as maintenance, system porting, training, motivation, contractual deals, working practices, responsibility, operations and all other elements of a real system. They seem narrowly focused on their code. In fact, as I have discussed earlier, they focus on the code functionality, and not even the code qualities! Listen to them write, speak, and tweet – it is all about code, user stories and use cases. In order to get competitive results, someone else – a real systems engineer – will have to take over the overall responsibility.

[Figure 7. A 'Competitive Engineering' view of systems engineering (Gilb, 2005). This shows a set of processes and artifacts needed within systems engineering.]

Summary
Agile development embraces much that is good practice: moving to rapid iteration is a 'good thing'. However, it fails to worry sufficiently about setting and monitoring the direction for projects, and instead concentrates on programmer-focused interests, such as use cases and functions. It fails to adequately address multiple stakeholders and achievement of real, measured stakeholder value. Instead it has 'solo' product owners and implicit stakeholder value. Here in this article, I have presented some ideas about what really matters and how agile systems development needs to change to improve project delivery of stakeholder value.

Systems engineering is still a young discipline. The software community has now seen many failed fads come and go over the last 50 years. Maybe it is time to review what has actually worked. After all, we have many experienced intelligent people: we ought to be able to do better. I think we need to aim to get the IT project failure rate (challenged 44% and total failure 24%) down from about 68% (Standish, 2009) to less than 2%. Do you think that might be managed by my 80th birthday? ■

Acknowledgments
Thanks are due to Lindsey Brodie for editing this article.

References
Alice Calaprice (Editor) (2005) "The New Quotable Einstein", Princeton University Press.

Agile Manifesto (2001). See http://agilemanifesto.org/principles.html [Last Accessed: September 2010].

Todd Cotton (1996) "Evolutionary Fusion: A Customer-Oriented Incremental Life Cycle for Fusion." See http://www.hpl.hp.com/hpjournal/96aug/aug96a3.pdf

Daniel Craig and Richard Brooks (2006) Plundering the Public Sector, Constable.

Kai Gilb (2009) A Norwegian Post case study. See http://www.gilb.com/tikidownload_file.php?fileId=277

Tom Gilb (2010a) Estimation or Control. Draft paper, see http://www.gilb.com/tiki-download_file.php?fileId=433

Tom Gilb (2010b) Decomposition. A set of slides, see http://www.gilb.com/tiki-download_file.php?fileId=350

Tom Gilb (2008a) "Decomposition of Projects: How to Design Small Incremental Steps", Proceedings of INCOSE 2008. See http://www.gilb.com/tiki-download_file.php?fileId=41

Tom Gilb (2008b) "Top Level Critical Project Objectives". Set of slides, see http://www.gilb.com/tiki-download_file.php?fileId=180

Tom Gilb (2005) Competitive Engineering, Elsevier Butterworth-Heinemann. For Chapter 10, Evolutionary Project Management, see http://www.gilb.com/tiki-download_file.php?fileId=77/ For Chapter 5, Scales of Measure, see http://www.gilb.com/tiki-download_file.php?fileId=26/

Tom Gilb (1988) Principles of Software Engineering Management, Addison-Wesley.

Tom Gilb and Lindsey Brodie (2010) "What's Fundamentally Wrong? Improving our Approach Towards Capturing Value in Requirements Specification". See http://www.requirementsnetwork.com/node/2544#attachments [Last Accessed: September 2010].

Tom Gilb and Dorothy Graham (1993) Software Inspection, Addison-Wesley.

CMMI (2008) "CMMI or Agile: Why Not Embrace Both!", Software Engineering Institute (SEI). See http://www.sei.cmu.edu/pub/documents/08.reports/08tn003.pdf [Last Accessed: September 2010].

Kenneth Hopper and William Hopper (2007) "The Puritan Gift", I. B. Taurus and Co. Ltd.

Trond Johansen and Tom Gilb (2005) "From Waterfall to Evolutionary Development (Evo): How we created faster, more user-friendly, more productive software products for a multi-national market", Proceedings of INCOSE 2005. See http://www.gilb.com/tiki-download_file.php?fileId=32

Elaine L. May and Barbara A. Zimmer (1996) "The Evolutionary Development Model for Software", Hewlett-Packard Journal, August 1996, Vol. 47, No. 4, pages 39-45. See http://www.gilb.com/tiki-download_file.php?fileId=67/

The Standish Group (2009) "Chaos Summary 2009". See http://www.standishgroup.com/newsroom/chaos_2009.php [Last Accessed: August 2010].

Sharma Upadhyayula (2001) MIT Thesis: "Rapid and Flexible Product Development: An Analysis of Software products at Hewlett Packard and Agilent". See supadhy@mit.edu. http://www.gilb.com/tiki-download_file.php?fileId=65

> About the author

Tom Gilb (born 1940, California) has lived in the UK since 1956, and Norway since 1958. He is the author of 9 published books, including Competitive Engineering: A Handbook For Systems Engineering, Requirements Engineering, and Software Engineering Using Planguage, 2005. He has taught and consulted worldwide for decades, including having direct corporate methods-change influence at major corporations such as Intel, HP, IBM and Nokia. He has documented his founding influence in Agile culture, especially with the key common idea of iterative development. He coined the term 'Software Metrics' with his 1976 book of that title. He is co-author with Dorothy Graham of the static testing method 'Software Inspection' (1993). He is known for his stimulating and advanced presentations, and for consistently avoiding the oversimplified pop culture that regularly entices immature programmers to waste time and fail on their projects. More detail at www.gilb.com.

Lindsey Brodie is currently carrying out research on prioritization of stakeholder value, and teaching part-time at Middlesex University. She has an MSc in Information Systems Design from Kingston Polytechnic. Her first degree was Joint Honours Physics and Chemistry from King's College, London University. Lindsey worked in industry for many years, mainly for ICL. Initially, Lindsey worked on project teams on customer sites (including the Inland Revenue, Barclays Bank, and J. Sainsbury's) providing technical support and developing customised software for operations. From there, she progressed to product support of mainframe operating systems and data management software: databases, data dictionary and 4th generation applications. Having completed her Masters, she transferred to systems development, writing feasibility studies and user requirements specifications, before working in corporate IT strategy and business process re-engineering. Lindsey has collaborated with Tom Gilb and edited his book, "Competitive Engineering". She has also co-authored a student textbook, "Successful IT Projects", with Darren Dalcher (National Centre for Project Management). She is a member of the BCS and a Chartered IT Practitioner (CITP).

Why must test code be better than production code?
by Alexander Tarnowski

As TDD becomes common practice, an average developer spends more and more time writing test code. This evolution has been followed by an explosion of literature telling developers how to get started doing TDD and unit testing. The same literature tacitly assumes that a developer will automatically become a good test coder after having learned a couple of frameworks and techniques. Often, developers are taught that the quality of their test's code should be "as good as production", but this topic is seldom elaborated upon.

How many of us have started our test developer careers writing an xUnit test of a calculator? In the calculator's world, things are simple. Assert that the two operands combined with the plus operator will indeed be added together. After having made this test green, we realize that we've just implemented a toy program, the xUnit equivalent of "Hello world". Some people stop at this point, declaring that real-world applications differ too much from the toy, and that doing developer testing or test driving them is too inefficient or too difficult.

Those who decide to stick around delve into mocking frameworks and other techniques based on test doubles. After having mastered them, they naturally move on to more specialized frameworks designed to simplify testing of specific components or technologies (XMLUnit and HttpUnit are examples of this). When combining these with fakes, like in-memory databases, they have a large toolbox of techniques and tools… and potential to fail.

The reason why I mention all these frameworks is to make the point that becoming a test developer takes time and practice. This implies that we hit a learning curve, and direct our attention away from some fundamental practices that we would apply otherwise. In short, developers get so distracted learning the testing frameworks that they run the risk of producing low-quality tests that smell.

Test code smells
Martin Fowler's smell metaphor¹ applies to test code, too. With tests the smells are different. Depending on the kind of tests you write, they may vary. My list looks like this:

Code duplication. By their very nature, tests share lots of code, at all possible abstraction levels. It may be test object creation or test double creation, dataset population, expectation setup, invocation of the same method with different parameters, assertions, teardowns, to name a few. For some reason, the extract method or extract class refactorings seldom get applied in test classes, not to mention any creational patterns.

Poor naming. Since test code is a second-grade citizen, we don't have to make the effort of finding meaningful, intent-revealing names. After all, who needs to read test code six months later?

Incorrect expectations. Some frameworks make this easy, some not. When employing a mock framework, it's very easy to turn everything into expectations. Irrelevant indirect inputs may easily become subject to strict behavior verification. Again, some frameworks will try to stop you from doing this, but nonetheless stubs and mocks seem difficult to combine in the same context.

Misleading assertions. It's easy to get creative and generous with assertions. If we can assert this, then why not assert that, while at it. Well, why stop at post-conditions? Why not check pre-conditions as well? This type of assertion misuse produces tests with unclear purpose.

Test smells can be further broken down and elaborated upon; however, for the sake of discussion, this should be enough.

¹ M. Fowler. Refactoring: Improving the Design of Existing Code, Addison-Wesley, 1999.

24 www.agilerecord.com
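To make the duplication and naming smells concrete, here is a small hypothetical JUnit 4 sketch; it is not taken from the original article, and the Order class is invented purely for illustration. The first two tests repeat the same setup line by line; an extract-method refactoring moves it into one intent-revealing helper, so a later change to Order is absorbed in a single place.

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class OrderTest {

    // Tiny production class, included only to keep the sketch self-contained.
    static class Order {
        private double total;
        private int lines;
        void addLine(int quantity, double price) { total += quantity * price; lines++; }
        double total() { return total; }
        int lineCount() { return lines; }
    }

    // Smell: both tests duplicate the same setup.
    @Test
    public void totalIncludesAllLines() {
        Order order = new Order();
        order.addLine(2, 10.0);
        order.addLine(1, 5.0);
        assertEquals(25.0, order.total(), 0.001);
    }

    @Test
    public void lineCountIsTracked() {
        Order order = new Order();
        order.addLine(2, 10.0);
        order.addLine(1, 5.0);
        assertEquals(2, order.lineCount());
    }

    // After the refactoring: the duplicated setup lives in one intent-revealing helper.
    private Order orderWithTwoLines() {
        Order order = new Order();
        order.addLine(2, 10.0);
        order.addLine(1, 5.0);
        return order;
    }

    @Test
    public void totalIncludesAllLinesWithoutDuplication() {
        assertEquals(25.0, orderWithTwoLines().total(), 0.001);
    }
}

The same extraction applies to expectation setup and teardown code, which are among the usual sources of duplication listed above.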
Punishment by multiplicity
Assume that you find a bug in your code, or better yet, want to change its behavior. It's not unreasonable to assume that this particular piece of functionality is exercised by two or three unit tests and hopefully one integration or acceptance test like the ones produced with Fit or Concordion.

Furthermore, let's assume that the change is quite straightforward in terms of coding, and that you would feel quite confident doing it, even if there were no test coverage whatsoever.

Now, depending on whether you have a test-first or test-last approach, you would go about making the change differently. Irrespective of the approach, once the dust has settled, a couple of unit tests would have been adjusted, as well as the automated acceptance test. Possibly, if the change was somehow of a conditional nature, some new tests could have been created.

If your unit test code suffered from one or several of the smells
mentioned previously, your trivial fix in production code would
introduce X times the effort in adjusting the test code! Tests con-
taining poorly named variables, duplicated code, out-of-place as-
sertions, and other programming poorness, take lots of time to
maintain! Not only is it cumbersome; it’s simply not fun, and it
certainly doesn’t look good from an efficiency point of view.

The effort can become even greater if the same flaws creep into
your automated acceptance tests that most likely rely on more
complex fixtures.

Suddenly, a trivial change has exploded into half a day's effort, after which success isn't guaranteed.

Who wants to work like that, and who can defend this approach
if it gets criticized by people who honestly believe that the rate
of development can be increased by skipping tests in the first
place?

Test code stands alone


Production code may have the luxury of being part of a docu-
mented domain model and be supported by documented busi-
ness rules, use cases, and other forms of documentation. Devel-
oper tests often get written in a context where we are expected
to know their outcome without bothering to refer to any docu-
mentation. For these reasons, production code is not only better
in terms of software craftsmanship; it’s also better documented.

So, in order to bridge this disadvantage, and because of the danger of introducing test code smells, and because we don't want to defend meaningless repetitive maintenance of an ungrateful codebase, test code must be better than production code!

Or at least as good, if we aim for the stars and hit the tree tops…

> About the author
Alexander Tarnowski
is a software developer with a decade of experience in the field, and a broad definition of the term "software development". He has a master's degree in computer science and a bachelor's degree in business administration, which makes him interested in both technical and economic aspects of the industry.

Having worked in all phases of the development process, he believes in craftsmanship and technical excellence at every stage. He's currently interested in how the introduction of agile methodologies on a larger scale will redefine the tester and developer roles.
Test Driven Development Vs. Behavior
Driven Development
by Amit Oberoi

A specific approach to mock the objects

Test-driven development (TDD) is a deceptively plain idea. Plain, because tests are written prior to coding. Deceptive, because it contradicts the fundamental role that testing has traditionally played in the software development life cycle (SDLC).

The initial idea of testing was very much limited to identifying defects, retesting and doing regression testing until no defects were found or a saturation point was reached in terms of supporting the testing activities. Today, testing is not only about reporting defects; instead it plays a very vital role in helping development to understand the features and deliver them on time.

TDD fundamentally impacts the SDLC and the quality of the system build, with a focus on reducing CT and improving RFT. TDD is widely used in Agile methodologies and is a core practice of Extreme Programming. Looking at the flexibility offered by TDD, it can be easily incorporated even in non-agile projects and can play a vital role in the unit testing phase, helping developers to eliminate logical and interface related errors, which result in major defects reported in the system & integration testing phases.

1. Introduction
Test-driven development has always been simple in introductory tutorials; something like Assert 1+1 is equal to 2 and check. In the real world of enterprise application development, we always face relational databases, middleware, web-services, different interacting security infrastructures and other external resources. Interactions with such external resources make automated tests a bit difficult to code, and are also hard to test, as at times they make the tests indeterminate.

One of the primary strategies to extend unit test coverage into those hard-to-test interfaces is to use mock objects, stubs, or other fake objects in place of the external resources, so that your tests don't involve the external resources at all.

2. Difference between Mocks and Stubs
A common misconception between stubs and mocks is that stubs are static classes and mocks are dynamically generated classes created by some tool like JMock, NMock etc. The real difference between stubs and mocks is in the style of unit testing, i.e. state-based and interaction-based unit testing.

3. Stubs
A stub is a class which is hard-coded to return data from its methods and functions. Stubs are used inside unit tests when we are testing that a class or method delivers expected output for a known input. Stubs are easy to use in testing, and involve no extra dependencies for unit testing. The basic technique is to implement the dependencies as concrete classes, which reveal only a small part of the overall behavior of the dependent class, which is needed by the class under test. As an example, consider the case where a service implementation is under test. The implementation has a dependency as below:

public class SimpleService implements Service
{
    private Dependency dependent;

    public void setDependency(Dependency dependent)
    {
        this.dependent = dependent;
    }

    // part of Service interface
    public boolean isActive()
    {
        return dependent.isActive();
    }
}
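The Service and Dependency interfaces that SimpleService relies on are not shown in the article. A minimal sketch that would be consistent with the code above (an assumption added here, not part of the original text) is:

// Service.java
public interface Service
{
    boolean isActive();
}

// Dependency.java
public interface Dependency
{
    boolean isActive();
}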
To test the implementation of isActive(), we can have a stubbed response like below:

public void testDependencyWhenServiceIsActive()
    throws Exception
{
    Service service = new SimpleService();
    service.setDependency(new StubDependency());
    assertTrue(service.isActive());
}

class StubDependency implements Dependency
{
    public boolean isActive()
    {
        return true;
    }
}

The stub collaborator does nothing more than return the value that we need for the test. It is common to see such stubs implemented inline as anonymous inner classes, e.g.

public void testDependencyWhenServiceIsActive()
    throws Exception
{
    Service service = new SimpleService();
    service.setDependency(new Dependency()
    {
        public boolean isActive()
        {
            return true;
        }
    });

    assertTrue(service.isActive());
}

This saves us a lot of time maintaining stub classes as separate declarations, and also helps to avoid the common pitfalls of stub implementations: re-using stubs across unit tests, and explosion of the number of concrete stubs in a project.

Often the dependent interfaces in a service like this are not as simple as this small example. To implement the stub inline requires dozens of lines of empty declarations of methods that are not used in the service. Also, if the dependent interface changes (e.g. adds a method), we have to manually change all the inline stub implementations in all the test cases, which can be a lot of work.

To solve the above two problems, the best way is to start with a base class, and instead of implementing the interface afresh for each test case, we extend a base class. If the interface changes, we only have to change the base class. Usually the base class would be stored in the unit test directory of the project, not in the production or main source directory.

A suitable base class for the interface should be defined as:

public class StubDependencyAdapter implements Dependency
{
    public boolean isActive()
    {
        return false;
    }
}

And the new test case will look like this:

public void testDependencyWhenServiceIsActive()
    throws Exception
{
    Service service = new SimpleService();
    service.setDependency(new StubDependencyAdapter()
    {
        public boolean isActive()
        {
            return true;
        }
    });

    assertTrue(service.isActive());
}

4. Mocks
Mocks are used to record and verify the interaction between two classes. Using mock objects, we get a high level of control over testing the internals of the implementation of the unit under test. Mocks are beneficial to use at the I/O boundaries (database, network, XML-RPC servers etc.) of the application, so that the interactions with the external resources can be implemented when they are not in our control. Below is the implementation of the above test using mocks.

MockControl control = MockControl.createControl(Dependency.class);
Dependency dependent = (Dependency) control.getMock();
control.expectAndReturn(dependent.isActive(), true);
control.replay();

Service service = new SimpleService();
service.setDependency(dependent);
assertTrue(service.isActive());

control.verify();

Another advantage to the mocking approach is that it gives us more flexibility in the development process when working within a team. If one person is responsible for writing one chunk of code and another person within the team is responsible for some other piece of dependent code, it may not be feasible for this person to start writing a "stubby" implementation of the dependency, when the first person is still working on it. However, by using mock objects anyone can test his piece of code independent of the dependencies that may be outside his responsibility.
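The MockControl listing above follows the record/replay style of classic mock frameworks. As a comparison, the same interaction-based test could be written against the jMock 2 library (see the JMock reference below). The following sketch is an illustration added here, not code from the original article: it assumes jMock's Mockery and Expectations classes (plus the usual JUnit assertTrue static import) are available, and it reuses the Service, SimpleService, and Dependency types from the earlier examples.

// requires: import org.jmock.Mockery; import org.jmock.Expectations;
Mockery context = new Mockery();
final Dependency dependent = context.mock(Dependency.class);

// The expectation block states the interaction we require:
// isActive() must be called exactly once and will answer true.
context.checking(new Expectations() {{
    oneOf(dependent).isActive();
    will(returnValue(true));
}});

Service service = new SimpleService();
service.setDependency(dependent);
assertTrue(service.isActive());

// Fails the test if the expected interaction did not take place.
context.assertIsSatisfied();

Either way, the test specifies the conversation with the dependency rather than the resulting state, which is the interaction-based style the article associates with mocks.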

In general, we can divide our expectations from the unit under test as (i) test-driven development and (ii) behavior-driven development, where the former uses the stubs and the latter uses the mocks.

5. Why Use Mock Objects
• Miniature unit tests – Tests using mock objects are small in size and cover a specific unit of the code. The unit under test might interact with other classes or external resources, which can easily be mocked using mock objects; hence allowing the test to concentrate only on the unit under test and its interactions with the external resources or other classes within the same application, irrespective of the implementation of the dependent classes and external resources. Excessive mocking calls in a unit test can lead to unit tests that are too tightly coupled to the internal implementation of the mocked dependency, which can make the code very difficult to refactor as the unit tests are brittle.

• Isolated and autonomous tests – We should be able to run the tests in any order we want, irrespective of the preconditions and dependencies, in a simulated environment. Each test should start from a known state and clean up the utilized resources and the object state once completed (passed or failed). Static properties, singletons, and repositories are a common cause for a test to fail, but the worst problem might be testing against the database. The entire purpose of a database is to maintain state, and that's not a particularly good trait inside a unit test. Using a mock object in place of any kind of external resource with dependency on the state of the object will isolate the unit tests and make them order-independent and isolated.

• Easy to set up – A significant reason to use mock objects is to avoid the need to set up external resources into a known state for each test. Taking an example of database interaction as an external resource, the unit under test may be tracking a single database row or a particular record within a database; but due to referential integrity one row or record might require setting up a lot of data first. The same can be the case with a message queue (MQ) or a web-service or with file dependencies between the components to update database records. All the above dependencies are difficult and at times expensive to set up, but can easily be established and replaced by mock objects.

• Speedy test executions – Testing with mock objects no doubt automates the tests, which can then be executed multiple times in a day to test the regression issues caused by the ongoing development. This may require a separate test environment which, once set up, will definitely help in reducing the cost of quality and enabling early detection of defects, reducing the cycle time and maintaining the RFT.

6. Behavior Driven Development
Mock objects change the focus of TDD from thinking about the changes in the state of an object to thinking about its interactions (behavior) with other objects. The mock object approach to programming has similarities with Lean Development. A core principle of Lean Development is that value should be "pulled" into existence from demand, rather than "pushed" from implementation: the effect of pull is that production is not based on forecast; instead commitment is delayed until demand is present to indicate what the customer really wants.

By testing an object in isolation, the programmer is forced to consider an object's interactions with its dependencies in the abstract, possibly before those dependencies exist. TDD with mock objects guides interface design by the services that an object requires, not just those it provides. This process results in a system of narrow interfaces, each of which defines a role in an interaction between objects, rather than wide interfaces that describe all the features provided by a class.

Behavior-driven development is nothing other than a rephrasing of test-driven development, with the aim of bringing well-established best practices under a single common banner.

Conclusion
With interest in unit testing, the XUnit frameworks and test-driven development have grown in leaps and bounds, and more and more people encounter mock objects. A lot of the time people learn a bit about the mock object frameworks without fully understanding the mockist/classical partition that distinguishes them. However, irrespective of the side we lean on, it is essential to comprehend the distinction in views. While we don't have to be a mockist to find the mock frameworks handy, it is useful to understand the thinking that guides many of the design decisions of the application or unit under test. ■

References
1. Behavior Driven Development official website: http://behaviour-driven.org/
2. In pursuit of code quality: Adventures in behavior-driven development: http://www.ibm.com/developerworks/java/library/j-cq09187/index.html
3. JMock official website: http://www.jmock.org/
4. NMock official website: http://www.nmock.org/
5. Mock Objects official website: http://www.mockobjects.com/
> About the author
Amit Oberoi
Amit Oberoi has over a decade of experience in the field of Information Technology covering Software Development, System Administration, Network Development and Testing. He is working as a Project Manager with TechMahindra Ltd. and is currently responsible for delivering Ethernet-based telecom products.

Having worked in all phases of the development process, Amit believes in simplicity and efficiency of processes, be they technical or managerial. Amit is strongly focused on increasing the productivity of his teams by reducing the interdependency of development and testing activities.
Supporting Team Collaboration:
Testing Metrics in Agile and Iterative Development
by Dr. Andreas Birk & Gerald Heller

Agile development is driven strongly by the aim to improve collaboration within and across software development teams. Metrics play an important role to enable and foster team collaboration. Testing metrics, in particular, contribute to integrating development and quality assurance, an endeavor that can be particularly challenging in large and distributed agile development environments.

This article presents recommended practices and provides guidance for an effective use of metrics for testing and quality assurance in an agile context. Specific to metrics in agile and iterative development is an incremental, yet systematic approach to metrics adoption and use: Teams start by acquiring just the information they currently need. As objectives and needs evolve over the course of a release development lifecycle, the teams gradually extend and evolve their metrics to suit their changing needs. We present a list of useful metrics, present implications for team collaboration, and provide recommendations for gradual metrics definition and evolution.

Agile Development Metrics
The role of metrics in agile development can be illustrated well by the three essential kinds of metrics used in nearly every agile project: Work item status, burndown (and the closely related burnup), and velocity (see Box 1). Work item status can be derived from agile task boards (cf. [3]). Most agile teams use those boards to track work items (i.e., tasks, user stories, or epics). Task boards are updated and used several times each day, establishing an effective metrics-based coordination instrument for team collaboration.

Burndown/burnup and velocity are metrics that indicate work progress over a longer period of time (i.e., an iteration or a release). They are derived from work item status and might be used occasionally for short-term decisions. But usually, management and teams use them to understand whether the project is on track, and to indicate possible issues.

These three kinds of metrics might be all a small agile team that develops a moderately complex system needs. Larger development teams that develop more complex software, however, will usually need additional metrics to master effective team collaboration. Most of those additional metrics are related to testing and quality assurance aspects.

Agile Testing Metrics
Agile testing metrics become important for team collaboration when a project performs specific system tests in addition to unit testing and acceptance testing. This is the case, for instance, for complex systems that need to be integrated from sub-systems in a stepwise manner, when non-functional tests (e.g., load and performance testing) must be passed, or when development teams and infrastructures are distributed across several physical sites.

A simple example shows that the three basic development metrics work item status, burndown/burnup, and velocity are not sufficient in the context of separate system testing: When a developer has completed a work item, and indicates this against the task as Done, this would be misleading, because at this point the work item has not yet passed system test and cannot actually be regarded as Done. Therefore, the task board should be extended by at least another column Ready for System Test. In this case, the task board would also include a testing metric.

Testing metrics that optimally support team collaboration should, of course, be a bit more elaborate than the simple status extension in the previous example. The following list of agile testing metrics guides the selection and customization of appropriate metrics for a given project. Often, it is recommended to start with simple versions of a metric and refine it over time.

Running Tested Features
The metric Running tested features is the most essential testing metric relevant to supporting collaboration between development and test team. It complements work item status, as
described in the example above, by indicating whether a work item (typically a user story) has successfully passed system test. Running tested features is usually depicted analogous to burnup charts as an accumulation of testing features over the course of a sprint or release. Both kinds of information can be depicted in the same chart. Then, the likely gap between developed and tested features indicates the size of the system test and defect backlogs (see below).

Story Cycle Time
Story cycle time expresses how many cycles (or iterations) stories need from being scheduled for development until successful test completion. This metric can be illustrated in different levels of detail: Average cycle time, variance of cycle time across all user stories, and distribution plot of cycle times of all user stories.

Test Development Status
Test development status is the testing equivalent of work item status. Instead of software development, it refers to test case development. For each user story, test development status tracks the number of test cases and their status along the stages To Do, In Process, and Done. The metric can be applied in the same way to the development of automated tests.

Test Coverage
Test coverage also relates test status to user story (see Figure 1). But other than running tested features, which counts completed stories only, test coverage drills down into the status of test preparation and test execution. Typically, test coverage counts for each user story, how many (system) test cases were defined, how many of those ran, failed, and passed. It indicates the status of both development (or developed product quality, respectively) and testing (in particular test preparation and execution).

Test Automation Rate and Coverage
Test automation rate and test automation coverage are similar to test coverage but refer to automated tests. Test automation rate is the simpler metric that just counts the number of automated tests. This information is useful in the process of automation to indicate progress and achievements of test automation efforts. It can be depicted over time like a burnup chart. Test automation rate is also useful to estimate maintenance effort for automated tests.

Test automation coverage structures automated tests according to features or work items. It also relates automated tests to manual tests or overall number of a work item's functional tests.

Test automation coverage can also be integrated into test coverage to show the overall status and success of tests of a given feature. Usually, some tests can never be automated. So it might also be useful to indicate those tests in order to highlight a realistic target for test automation coverage.

Defect Backlog
Defect management in agile development distinguishes between defects that are fixed during the iteration in which they have been detected and defects that will be fixed later. The goal is to fix defects as early as possible. Only the defects that remain after the end of an iteration are included into the defect backlog (i.e., defects to be fixed) and shall appear in defect statistics.

The basic agile defect metric, also termed defect backlog, just counts the numbers of defects that reside in the actual defect backlog list and plots them over time (i.e., usually, iteration by iteration). Since an agile objective is to keep the defect backlog small, the defect backlog list and the associated metric are important input to iteration planning. If the curve of the defect backlog metric remains flat even in later iterations, then the project can expect to get along with little stabilization effort.

More sophisticated versions of the defect backlog metric categorize the metrics according to severity, system part, feature affected, or defect status (see also defect validation backlog below). Such distinction is most relevant in particular during later iterations of a release.

Defect Validation Backlog
With regard to (re-)testing of fixed defects, the metric defect validation backlog helps coordinating collaboration with development and system testing teams. It addresses the fact that defects cannot always be re-tested immediately at the system test level. The metric counts the fixed defects still waiting to be validated in system test. If this number grows too large, then work balance between defect removal and re-testing should be re-adjusted, or a decision should be taken to focus the re-test effort only on the most critical defects.

Defect Removal Time
Defect removal time counts the number of iteration cycles that known defects reside in the system before they are fixed. It can be measured as average value, range of variance, or distribution across all defects. This metric is typically used as a process improvement metric at the end of a release: If it turns out that defect removal time during the past release was too long, then the root causes should be investigated and corrective action for the following releases should be taken.

Team Collaboration
Testing metrics support team collaboration in multiple ways and at different occasions. Most important for daily testing-related work are (1) internal collaboration within the testing team and (2) collaboration between testing and development sub-teams. Testing team internal collaboration is typically driven by metrics like test automation coverage for system tests. Test automation is a task specifically for the testing team. From the metric, team members can derive the following kind of information: What system functionality is covered or not yet covered? What types of automated tests find how many defects? Are automated tests more effective than manual tests? Based on such information, a testing team prioritizes tests to be automated and improves the effectiveness of existing automated tests.

Important metrics that support collaboration between testing and development are defect backlog and defect validation backlog. From defect backlog, the development team derives new
work tasks, and it can assess how well the activities of development, testing, and defect removal are balanced. Defect validation backlog, vice versa, provides the testing team with information about its performance and work priorities related to the defect removal cycle.

Other types of collaboration include the software team on the one side and external stakeholders such as development management and product management on the other side. Figure 2 shows metrics that are important for those collaboration constellations (i.e., software team with external stakeholders). Running tested features, for instance, supports communication and collaboration between development and product management. Test automation rate informs development management about status and achievements of the testing team's automation efforts.

Figure 2 also distinguishes between the coordinative and analytical use of metrics. Coordinative metrics are directly used within collaboration processes. A typical example is the work item status that tells a tester which work item has been implemented and is ready to be picked up for testing. Analytical metrics provide input to investigations and decisions that indirectly can shape collaboration. An example is defect removal time. When too high, it indicates possible problems, which can be investigated in iteration retrospectives. They can result in new collaboration policies and settings on the interface between development and testing. More information on the contents of Figure 2 is provided in [1].

Iterative Metrics Evolution
Agile teams value individuals, interactions, and working software over extensive overhead activities such as process-focus and documentation. Agile teams also respond to change quickly. For metrics, this implies that only those metrics shall be collected that provide direct value to the team. However, when situations change, the metrics should be refined and evolved in order to respond to the new information needs.

An important driver of metrics change is the evolving focus of development and testing activities during a release development lifecycle. Typically, release development proceeds through the three phases of Ramp-Up, Develop & Test, and Stabilize. During Ramp-Up, which may span the first or the first few iteration cycles, a rudimentary system (or system skeleton) and the development infrastructure are established so that agile development practices such as continuous build and test can be applied. System testing uses this phase to prepare the testing infrastructure.

The subsequent phase of Develop & Test includes the regular agile development cycles of implementing user stories and conducting unit tests, conducting system test, and extending test automation. The final phase is system stabilization, which is dominated by defect fixing and intensive system testing, including re-test of fixed defects.

Figure 3 shows how the focus of development and testing metrics changes over a schematic release development lifecycle of twelve iterations. It also illustrates example patterns of metrics use and evolution.

During Ramp-Up both development and testing establish their essential metrics, such as work item status, burndown, running tested features, story cycle time, and test development status. Usually, these metrics will be defined in a simple basic form. For instance, story cycle time is measured just as an average value.

In the course of the subsequent iterations, development can gradually refine and extend its metrics, if additional information or coordination needs require it. So a project might enrich its initial burndown chart of work units per time by adding a curve of planned work.

For testing-related metrics, there are two different additional focus areas for metrics evolution. During Develop & Test, test automation and defect management become increasingly important. In the Stabilize phase, high attention is placed on defects. So, a basic defect backlog count might be established and gradually evolved using the defect backlog variance to distinguish between defect severity and user stories related to defects. These latter metrics can be required in order to prioritize defect fixing and re-testing when the release schedule becomes tight. ■

References
[1] Andreas Birk. Agile Metrics Grid. (http://makingofsoftware.com/2010/agile-metrics-grid)
[2] Andreas Birk. Goal/Question/Metric (GQM). (http://makingofsoftware.com/2010/goalquestionmetric-gqm)
[3] Mike Cohn. Succeeding with Agile: Software Development Using Scrum. Addison-Wesley Professional, 2009.
[4] IEEE Computer Society. IEEE Standard Glossary of Software Engineering Terminology. IEEE Std 610.12-1990, 1990.
[5] Dave Nicolette. Agile Metrics. (http://davenicolette.wikispaces.com/Agile+Metrics)
[6] Rini van Solingen, Egon Berghout. The Goal/Question/Metric Method. McGraw-Hill Education, 1999. (http://www.iteva.rug.nl/gqm/GQM%20Guide%20non%20printable.pdf)
[7] Wikipedia. Earned Value Management. (http://en.wikipedia.org/wiki/Earned_value_management)
Figure 1: Example of a test coverage chart from an agile project (anonymized). The left column contains the user stories. The bars and spots inform about test execution and success.

Figure 2: Agile metrics grid that structures metrics according to internal and external target groups as well as according to coordinative and analytical uses [1].

Figure 3: Evolution of metrics focus across agile iterations of a release development lifecycle.

Box 1: Essential Agile Metrics

Most agile projects use a core set of agile metrics that is introduced here. These development metrics provide the context for adding specific metrics for testing and quality assurance.

Work Item Status
Work item status measures progress on the level of basic work items, typically the tasks of a user story. Each task proceeds through at least three stages: To do, in process, and done. A task board (cf. [3]) is a frequently used tool to illustrate work item status. It consists of a column for each status, and a row for each user story. Each task of a user story is represented by a tag that is placed in the appropriate status column of its story row.

Work item status can also be an aggregated metric that reports the accumulated tasks for each status over time.

Burndown
Burndown is a measure of team progress, i.e. work performed over time, and is often related to initially planned progress. In its basic form, burndown is measured in terms of work units, such as story points or ideal days, over time period. Time period can be a Sprint, separated into weeks or days (Sprint Burndown), or a Release, separated into Sprints (Release Burndown).

Burndown shows the team how it progresses as well as how much work is still to do in the given time period (e.g., Sprint). Burndown charts that plot work progress rate over time often combine the actual progress rate curve with related curves, such as total work to be performed, planned work, and linear burndown.

Burnup
Burnup is a measure of team progress in terms of work results achieved. It is closely related to burndown, but focuses on what the team has already completed, and on the rate at which those achievements have been reached (i.e., additional work results per Sprint). This information is particularly useful when the overall number of work items increases over time. Then the burndown also increases, and only the burnup is a direct measure of team achievements.

Velocity
Velocity is a measure of productivity (i.e., output per unit of input) typically defined as measure of work items completed in a Sprint. It can be defined on different levels, such as team velocity, individual velocity, or release velocity. Team velocity in terms of story points per Sprint is a common and useful definition of velocity used in many Scrum projects.

Knowing team velocity is an important basis for Sprint and release planning. Decrease of velocity can be an indicator of project issues and trigger detailed analysis during an iteration retrospective.
Box 2: Metrics Definitions and Resources

IEEE defines metric as follows: Metric: A quantitative measure of the degree to which a system, component, or process possesses a given attribute. [4]

Some practitioners and researchers prefer the terms software measure and software measurement. However, here, we stick to the more commonly used metric.

There are various kinds of metrics for which many different classification systems have been proposed. A very basic distinction is between product metrics (i.e., attributes of software artifacts and work products, such as complexity and reliability metrics) and process metrics (i.e., attributes of processes, activities, and projects, such as project effort).

For agile development, Dave Nicolette has proposed a metrics classification of informational metrics (telling us what's going on), diagnostic metrics (identifying areas for improvement), and motivational metrics (influencing behavior) [5].

Goal/Question/Metric is an established method for systematic metrics definition and interpretation. Refer to the book of van Solingen and Berghout [6] for an introduction to GQM, and to [2] for a concise overview.

An elaborated and popular approach to progress measurement and management is the Earned Value Management (EVM) method. Wikipedia [7] provides an excellent overview of EVM and guides to further information resources. ■

> About the author
Dr. Andreas Birk
is founder and principal consultant of Software.Process.Management. He helps organizations to align their software processes with their business goals. His focus areas are test management, requirements, and software process improvement. During more than 15 years in the software domain, Andreas Birk has coached many software organizations to enhance their testing practices and to migrate to iterative and agile development.

Gerald Heller
is a software engineering consultant with more than 20 years experience in large-scale, global distributed software development. Gerald Heller has established and driven the requirements engineering process at Hewlett-Packard's largest software organization on a worldwide basis for several years. He has developed methodological blueprints for the product HP Quality Center. His current work focuses on the ideal blend of agile practices with other established software engineering processes.
Descriptive Programming - New Wine
in Old Skins
by Kay Grebenstein & Steffen Rolle

The work on agile projects suggests a rethinking of the way in which testers operate – advocating a route from the classic V-model and the organizational separation of developers and testers, to iterations and a more profound involvement in the development process.

Additionally, the focus of the test method needs to be addressed. Presently, due to instant iterations, new features as well as existing features need to be tested. The constantly growing number of regression tests can currently only be handled through test automation.

The design of an automated test case begins by "training" the product's GUI. HP's QuickTest Professional Object Repository scans the user interface and stores the GUI object references, so they can be used later during the test scripting.

Problem
This procedure, however, can cause problems in agile projects. Firstly, the screens are not yet available at the beginning of the iteration, or further changes need to be made to them during the process. Additionally, many test automation tools such as QuickTest Professional are very sensitive to changes made to software in general, and to the interface in particular, known by testers as "modification senility".

The indications of this problem can be seen after making changes to the GUI. The test scripts are unable to re-find "taught" objects from the Object Repository, and QuickTest Professional will simply abort the test automation with an error.

Solution
Swift iterative development cycles and a high frequency of changes are the foundation of all agile development methodology. As a result, the way automated tests are created had to be adapted: QuickTest Professional therefore provides a function called "Descriptive Programming".

By utilizing "Descriptive Programming", test objects can be used in test scripts without the need for an Object Repository. The objects are written directly into the code based on their characteristics such as ID, name, position or color, and not addressed as references by the Object Repository.

In practice
There are two types of object allocation for "Descriptive Programming". Firstly, the descriptions are made in the form of string arguments - "PropertyName:=PropertyValue":

' name and value of the attribute separated by comma and quote
TestObject_String("PropertyName1:=PropertyValue1", "…", "PropertyNameX:=PropertyValueX")

Secondly, through their description by a property container, the definition and attributes of the properties are expressed in a style similar to object-oriented languages.

' declaration of the object
Set TestObject_Container = Description.Create

' definition as an input element
TestObject_Container("PropertyName1").value = "PropertyValue1"

The example given below illustrates this function: A text box and a radio button are ready for testing.
Firstly, compared to conventional programming with QuickTest Professional using the reference to the Object Repository: the text box is filled with "Text" and the radio button is activated.

Browser("Browser").Page("Page").Textbox("box_text").set "Text"
Browser("Browser").Page("Page").WebRadioGroup("group_radio").set ON

Now the text box and the radio button are addressed by "Descriptive Programming" using the string argument method:

' Set text box with Text
Browser("Browser").Page("Page").WebEdit("Name:=box_text", "html tag:=INPUT").set "Text"

' Switch the radio button ON
Browser("Browser").Page("Page").WebRadioGroup("Name:=radio_text", "html tag:=INPUT").set ON

Hint: It is possible to use references of the Object Repository before the DP call, but only in this order.

The arrangement of the object container requires time and effort due to its complexity. The following lines generate the object Obj_Desc and define it as an input element.

' generation of the object
Set Obj_Desc = Description.Create

' declaration as input
Obj_Desc("html tag").value = "INPUT"

' definition of the name
Obj_Desc("name").value = "box_text"

' use of the container
Browser("Browser").Page("Page").WebEdit(Obj_Desc).Set "Text"

Naturally, DP can be used for complex test scripting. In the example given, the word "test" is inserted into each text field of several similar elements.

Set Obj_TextSet = Description.Create
Obj_TextSet("html tag").value = "INPUT"
Obj_TextSet("name").value = "time.*"

Dim allTextFields, SingleTextField
Set allTextFields = Browser("Browser").Page("Page").ChildObjects(Obj_TextSet)

For Each SingleTextField In allTextFields
    SingleTextField.set "test"
Next

Hints
So where does the tester get the necessary object information? Useful features such as the name, class, and html tags are taken from the specification. If developers use a unique, "published" ObjectID during the development process, the ID can be used as the main attribute, and, in addition, it increases the software's testability.

Important tips to consider:

• To identify objects with the Object Spy, "Class name" is the "micclass" property

• Texts in parentheses represent a regular expression: e.g., Logout (Steffen) must be masked as Logout \(Steffen\)

• Identify objects uniquely (if several objects are identified during runtime, this will lead to errors)

• Property names are case-sensitive!

Pros & Cons
One disadvantage is that script designers have to forgo the use of the Object Repository's front end. Furthermore, it is not possible to create the test object descriptions independently of the test scripts. Moreover, changes of the object properties lead inevitably to changes in the test script.

In contrast, an advantage of this method is that no additional management layer is needed. Additionally, production or management of unnecessary test objects, which are unintentionally scanned by the OR, is no longer necessary.

Accessing objects is transparent to everyone, which enables high code portability and makes mass updates, such as search and replace, easily possible. The "Object List" is also compatible with various versions of QTP and is usually more efficient than working with the OR.

In principle, Descriptive Programming goes against the approach recommended by QuickTest Professional and requires a more disciplined approach by the script designer because, without any comments or style guide, the legibility of the code is reduced. However, Descriptive Programming, as well as being used in agile projects, can also be used in "Early Development" and supports the agile work of the "agile tester". Test scripts can be created with the help of prototypes or even created based on the specifications. ■
> About the author
Kay Grebenstein
is an ISTQB certified test consultant working at Saxonia Systems AG, Dresden. He has been working in several projects in a number of sectors (DaimlerChrysler, Deutsche Telekom, Siemens and Otto Versand). Currently, he is head of the Saxonia Systems Competence Center for Quality Assurance.

Steffen Rolle
completed his degree in computer science in 2001. Since then he has been working in the areas of quality assurance, software development and server administration within a range of complex and safety-relevant projects. As an ISTQB certified tester, he joined Saxonia Systems AG in 2008 as a consultant on quality assurance issues. Currently, Steffen Rolle is working as a test automation expert and tester for a web-services integration project.

The Role of QA in an
Agile Environment
by Rodrigo Guzman

The agile methodologies introduced new practices for software development and project management.

These new practices introduced a paradigm shift in work teams.

New paradigm:

• Testing is not an asynchronous process of development and is not exclusive to the quality control area.

• Testing becomes embedded throughout the development life cycle and is a key element of the process.

• Testing is not the last step in a sequential manual process.

• Testing is continuous, integrated, developmental, collaborative and mostly automated.

Under this new paradigm, there are two questions we had to ask: "What is the new role of QA? How do we implement the new role?"

The purpose of this article is to share some ideas and implemented practices that may help answer these two questions.

Team Member – Joining the Team
The Agile Manifesto: Value individuals and interactions over processes and tools.

In agile environments, Quality Assurance is not an independent and isolated team. It is not in a cascade process where developments get completed, specifications closed and documentation completed before beginning to test in the final stages.

The role of QA is not "police", and testing goes beyond "pass" or "fail".

QA is now a team player and works together with the project team. Their tasks are integrated into the rest of the team. Shared goals and their added value are extremely important throughout the development cycle.

It involves a major change in the traditional daily relationship between QA and development.

A key factor is the integration of the role of QA into the team. This practice helps to incorporate the paradigm shift.

To achieve this change, it is essential to generate all the necessary conditions for incorporating the role of QA in the daily dynamics of the team.

The daily routine of QA should include participation in various team meetings. Active participation in planning meetings, daily meetings and retrospective meetings adds a unique perspective to the team in terms of identifying problems.

Ideally it is good practice to physically locate the entire project team, including the QA person, in the same place. Just seeing the QA person every day gives the opportunity of participating in the formal and informal discussions of the day, which helps in the change.

In this context and as this relationship matures, the practices of the old paradigm become obsolete. New ways of organizing and working occur spontaneously, because testing is incorporated into each step of the process, not just at the end of the development cycle. For example, if the team has the same objective, its members communicate every day, progress is known by all, and the sense of using tools for reporting tasks, issues, bug-trackers, etc. diminishes.

Agile environments are characterized by short development cycles, continuous delivery, and short sprints, all of which generate rapid resolution of problems, almost instantly. In such environments, any errors found by the team are usually resolved straight away. Error reporting tools are generally a drag on the
team and do not create value. They only make sense only if errors without relying on clear and complete documentation to derive
are not resolved immediately. test cases.

The change management is key to creating a change in work cul- Within the team, a QA member is the person who has developed
ture and facilitating the adoption of new rules. the ability to detect problems. Asking the right questions at the
right time is part of the new role in these environments. Train
In particular, if we are changing to an agile work scheme, we colleagues in this skill and help with questions to detect early
must consider that many of the typical daily practices of a tradi- problems and investigate them.
tional QA team change or disappear. For this reason, it is impor-
tant to have the necessary maturity at different levels and roles If the right questions are asked at the time of generating a sto-
within the organization. ry, it will help to eliminate ambiguities in business requirements,
answer questions and avoid future misunderstandings.
Recommended practices are:
Furthermore, the investigation of the contents of a story and its
• Put the whole team working in the same office, including QA.
implementation helps the team to consider all the tasks neces-
• Incorporate testing at all stages of the development cycle. sary for implementation, feasibility, and estimate.
• QA members should actively participate in team meetings.
QA provides information, feedback and suggestions rather than
• QA members participate in the definition of user stories. being the last line of defense.
• Shared goals – whole team thinking.
Recommended practices are:
• Promote the development of outdoor activities for the team.
• Involve the member of QA in the role of writing the story and
the acceptance criteria.
Inquiry and Feedback
The Agile Manifesto: Working software over comprehensive do- • Involve the member of QA in the discussion and assessment
cumentation. of the story.

• Ask for continuous feedback to the member of QA


In agile environments, business requirements are not as specific
and the level of initial uncertainty is high. In this sense, frequent • Ask, ask, ask ...
change is accepted as a fundamental part of building software.
Test Coverage
This assumption completely changes the way in which the team The quality of a product should not be judged solely by the num-
communicates, and it means a significant change in work routine ber of tests written against it or the number of bugs it contains.
for the QA members. The role of QA is not to write as many tests as possible or to find
as many bugs as possible, but rather to help the business under-
QA have to adapt to rapid deployment cycles and changes in tes- stand and measure risk.
ting patterns.
Testing is no longer a phase; it integrates closely with develop-
The role of the QA member should be to help with his vision and ment. Continuous testing is the only way to ensure continuous
experience at all times. The key is collaboration. It is important to progress.
participate in daily discussions of work. It is important to develop
skills to capture and assimilate information in a non-traditional Keep the code clean. Buggy software is hard to test, harder to
way and to give continuous feedback to the team about any pro- modify and slows everything down. Keep the code clean and help
blem or bottleneck that can be identified. fix the bugs fast.

QA members should participate in the process of feature prio- The QA member’s task will be to determine the acceptance cri-
ritization and planning. By listening to the business analysts or teria with the team. Defined acceptance criteria must be met to
the users themselves, QA will be able to learn about the users’ consider the work completed. With the acceptance criteria and
needs. the business needs defined, we can identify possible scenarios
and develop test cases to consider the associated risk.
Also, QA members should listen to the developers as they discuss
architecture. All of the knowledge gathered during these discus- Moreover, the acceptance criteria can be used to guide the deve-
sions will not only help to focus the testing efforts on the most lopment of business requirements.
critical areas, but will also help QA to determine the areas of gre-
atest risk to the business. This is commonly done by using automated acceptance testing to
minimize the amount of manual effort involved.
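As an illustration of what such an automated acceptance test can look like, the sketch below expresses two agreed acceptance criteria as plain JUnit tests. The shipping rule, the ShippingRules class and the amounts are invented for this example and are not taken from any particular project; the point is only that each criterion becomes a check that can run on every build.

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class FreeShippingAcceptanceTest {

    // Stand-in for the production code; in a real project this class would
    // live in the main source tree, not inside the test.
    static class ShippingRules {
        double feeFor(double orderTotal) {
            return orderTotal >= 100.00 ? 0.00 : 4.95;
        }
    }

    // Acceptance criterion agreed with the business: orders of 100 EUR or
    // more ship for free; anything below pays the standard 4.95 EUR fee.
    @Test
    public void ordersOfOneHundredEurosOrMoreShipForFree() {
        assertEquals(0.00, new ShippingRules().feeFor(100.00), 0.001);
    }

    @Test
    public void ordersBelowOneHundredEurosPayTheStandardFee() {
        assertEquals(4.95, new ShippingRules().feeFor(99.99), 0.001);
    }
}

The framework does not matter much; the same criteria could just as well be written in FitNesse or a BDD tool, as long as they remain executable.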
The scenarios and the associated risk will serve as a guide to ensure that the most important cases are covered with different levels of testing during different periods of development and implementation.

The QA member can detect early whether there are gaps between the defined scenarios and the test coverage at each level of testing: unit test, integration test, regression test. Any such gaps can be pointed out directly to the developer by asking about the scenarios covered by unit testing. Another good practice is for the QA person to have access to the code and review the cases developed in the unit tests.

Any gaps identified may be filled in different ways as appropriate: they may be covered by developing the missing cases in the unit tests, and they are mainly covered by exploratory testing. It is a good practice that the QA member also has the ability to develop unit tests.

This will help to achieve a high degree of test coverage at the end of the development cycle, ensuring the quality of the code.

Moreover, the QA person may play an important role in reviewing the results of continuous integration, identifying failed tests and working with the developers on defect resolution.

It will be important to provide continuous feedback on the status of the story and to identify and communicate throughout the development cycle the performance of each acceptance criterion, the test coverage, the results of continuous integration, etc.

Finally, it is important that the QA members think about the scenarios and cases to include in the regression testing before final deployment.

Agile teams typically find that the fast feedback afforded by automated regression is a key to detecting problems quickly, thus reducing risk and rework.

Recommended practices are:

• QA unit test review
• Automated acceptance criteria
• Automated regression tests (see the sketch below)
• Identify scenarios and risks
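To make the “automated regression tests” practice a little more concrete, here is a minimal sketch of how a team could mark and bundle regression tests with JUnit 4 categories. The marker interface, the invoice test and the suite name are invented for the example.

import org.junit.Test;
import org.junit.experimental.categories.Categories;
import org.junit.experimental.categories.Categories.IncludeCategory;
import org.junit.experimental.categories.Category;
import org.junit.runner.RunWith;
import org.junit.runners.Suite.SuiteClasses;

public class RegressionSuiteSketch {

    /** Marker used to tag tests that belong to the regression set. */
    public interface RegressionTest {}

    public static class InvoiceTests {
        @Test
        @Category(RegressionTest.class)
        public void totalsIncludeVat() {
            // assertion against the invoice calculation goes here
        }
    }

    /** Run by the continuous integration server after every build. */
    @RunWith(Categories.class)
    @IncludeCategory(RegressionTest.class)
    @SuiteClasses({ InvoiceTests.class })
    public static class NightlyRegressionSuite {}
}

Keeping such a suite green after every check-in is what provides the fast feedback mentioned above.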
Conclusion
Quality is now a problem to be solved by the whole team. The role of QA is to help understand and measure risk. Listening, learning, asking, participating, and prioritizing are all key aspects of successful team integration.

Shared goals, continuous feedback, test automation, uncertainty and short cycles are characteristics of an agile environment.

The QA member must be trained and prepared to complement the team and develop skills for their new role.

Working in an agile environment can be uncomfortable for QA members, particularly if they are making the transition from traditional QA.

The transition to an agile environment forces QA members out of their comfort zone. This creates anxiety and stress, and it may generate uncertainty about job security.

This concern is unnecessary, and we should help the team to understand that the change is a smart decision and creates great opportunities.

The experience we have in our organization with the transition to agile is that the QA members have taken a much more prominent role and have gained much more influence on the development process and the final product. ■

> About the author
Rodrigo Guzman (35 years old) is Quality Assurance Senior Manager at MercadoLibre.com, a leading Latin American e-commerce technology company. He joined the company in 2004 and is responsible for defining, implementing and managing software quality policies that enable IT to ensure and control the operation of the website in the 12 Latin American countries. Before joining MercadoLibre.com, he worked for 10 years in the IT area at Telecom Argentina, a telecommunications company. He holds a degree in Business Administration and a postgraduate degree in Quality and Management, and has fifteen years of experience in systems, mainly in processes, projects and quality.


Managing the Transition to Agile
by Joachim Herschmann

In these challenging economic times, there is a dramatic increase in the need for organizations to adapt their software delivery lifecycle processes to the rapid changes that are imposed on them. Leadership makes the decision to transition its development organization – not just for small teams, but also for large numbers of engineers, working on a broad portfolio of development projects from many different locations around the world – to a more agile approach as part of an effort to vastly improve performance, be more responsive to customers and improve quality. However, there are many challenges that an established software organization faces when shifting to Agile.

Let’s have a look at some major considerations that any enterprise making a shift to Agile must tackle:

• Empowering self-managing teams in a distributed environment
• Measuring the benefits
• Applying Agile in a heterogeneous tooling environment
• Planning in an Agile world
• Quality in Agile – a new paradigm for QA
• Managing a successful transition

Empowering Self-Managing Teams in a Distributed Environment
As an organization begins to scale its Agile efforts, teams need a better way to collaborate, share information and manage their work. The whiteboards, cork-boards, sticky note-pads and index cards used by many Agile teams are fine for those that are co-located, but they can‘t scale when more teams make the transition. Self-managing teams make several decisions and changes in their “plans” each day, and keeping everyone on the same page and providing cross-project visibility becomes increasingly difficult.

An enterprise project management and execution application that supports both Agile and traditional models of development is required. As waterfall, iterative and Agile projects will co-exist, they all need to be managed at the same time using the same metrics. A light-weight, easy-to-use project management tool that sits on top of traditional ALM tools can help to plan releases and sprints, manage the backlog and user stories, and collaborate with burn-down charts and corkboards. It is important to support the way Agile teams work, empowering them to be more effective at their jobs, while automatically giving management and executives visibility into their progress. Agile teams need a daily “workbench”, where they can chart progress against the daily plan, keep updated on changes, and stay on the same page throughout the execution of the sprint, so that the teams can be more efficient.

Empowering teams in such a way will also significantly reduce the time and effort teams spend communicating with customers and business stakeholders. Rather than having several conversations with a customer to update them on sprint progress, teams can involve their customers in their processes, including them in the sprint reviews which are conducted using the team boards, backlogs and burn-down charts.

Measuring the Benefits
The biggest fear in connection with going Agile is that you will lose control. The reality, however, is that you never really had control in the first place. Project managers build schedules, but there is really no connection between these dates and windows and what is going on underneath. To achieve the ultimate goal of an Agile transformation – to get better at predictably delivering high-quality software – it is necessary to get visibility into processes, establish a baseline for performance and be able to measure progress.

Plenty of data is already being collected from the tools the teams use. However, only if an infrastructure is put in place can teams constantly analyze current and historical data across all of the organization’s projects, and present actionable information that delivers true value. (This data can include key ALM metrics, including quality trends such as defects, code coverage, and test automation, as well as performance trends such as team velocity and schedule variance.) Capturing and exposing metadata from test automation tools and including it in meaningful ways will add another quality dimension to the metrics. Automatically collected data from the tools can help teams to manage their work, and can help managers and executives to avoid unpleasant surprises, to prioritize and to make quick decisions. Instead of spending two weeks working with their reports to gather status information and create PPT decks for monthly operations reviews, executives can have the relevant information at their fingertips – any time.

Applying Agile within a Heterogeneous Tooling Environment
Most enterprises use a mix of tools and processes to complete, manage and store their work. Different teams manage code and changes in multiple, separate instances of repositories. Through release planning and tracking changes constantly, the relevant data can be housed in several different places.

In an Agile environment you need to have sound engineering practices and tooling, because almost immediately, Agile exposes those areas that need greater attention. The way you deploy and structure your data will determine the accuracy and scale of your project. Since the process of shifting to Agile must have minimal negative impact on the organization’s ability to maintain its aggressive release schedules, trying to standardize and consolidate tools and repositories all at once prior to the transition to Agile is usually not an option. It would be too disruptive. Yet, to succeed at a transformation and become a more effective organization, it is necessary to establish certain standards and identify ways to improve in some of the core areas of ALM.

The first step is to define standards for data descriptions – uniform definitions for different activities and assets across the organization. Use a single definition for goal stories, requirements, user stories, etc. This makes it easier for teams to understand each other’s work, and allows them to manage dependencies across teams. Next, use a standard management console for all delivery projects. Stories, tasks and assets can be viewed and manipulated in it, and all of the changes are reflected across the various tools. Hence, integration with existing systems becomes a much more important factor than replacement and consolidation of tools.

Planning in an Agile World
One fear that is common to organizations considering Agile is the perceived lack of planning in the approach. While the pace and fluidity of Agile may give the impression that the teams are driving forward with little regard for a long-term road map, the “flatness” of Agile teams – and the increased interaction between developers and business stakeholders/customers – actually makes it possible for teams to be more aware of business objectives and priorities than they might be in a traditional model.

To drive alignment between its Agile teams, marketing and product management organizations, and to ensure that the work is happening – sprint by sprint – enterprises need to link strategic goals directly to the ALM artifacts that are associated with them: requirements, user stories, tasks, and test cases. How does this work?

Marketing creates the overarching goal of a product release, defining and storing the high-level requirements in a requirements management system. Product management then breaks the requirements down into goal stories, and prioritizes these, along with any change requests, in a backlog. Teams then decompose the goal stories into actionable pieces (user stories). The user stories are linked back to the goal stories and requirements. In planning their sprints, the teams estimate the size of the user stories and determine the content of a sprint based on the team’s velocity (capacity) and the user story’s priority (business value).

Then, as the teams complete user stories, progress is tracked and linked back to the high-level goal stories and requirements in the requirements management system. At any time in the release, marketing or the product management team has visibility into how the release is progressing in terms of which goal stories are completed, how much work is still outstanding, and how that work compares to the remaining team velocity (capacity) for the release.

Agile managers must be able to quickly make informed decisions to keep planning on track. By gathering real-time status information from multiple sources – change requests, requirements, and test runs – management is able to evaluate all the pertinent information needed to make intelligent planning decisions. Provided with this visibility and the context in terms of business value, the guesswork is taken out of sprint planning for them.
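The traceability chain just described can be pictured with a tiny data model. The classes, fields and roll-up rules below are invented for illustration and are not taken from any specific ALM tool; the principle is simply that user stories link back to goal stories, and progress rolls back up.

import java.util.ArrayList;
import java.util.List;

class UserStory {
    String id;
    int points;
    boolean done;
    UserStory(String id, int points, boolean done) {
        this.id = id; this.points = points; this.done = done;
    }
}

class GoalStory {
    String id;
    List<UserStory> userStories = new ArrayList<UserStory>();

    /** Rolls progress back up to the goal story / requirement level. */
    int remainingPoints() {
        int remaining = 0;
        for (UserStory s : userStories) {
            if (!s.done) remaining += s.points;
        }
        return remaining;
    }

    /** Does the rest of this goal still fit into the remaining team capacity? */
    boolean fitsInto(int remainingVelocity) {
        return remainingPoints() <= remainingVelocity;
    }
}

A report over all goal stories of a release then answers exactly the questions raised above: what is done, what is outstanding, and whether the rest still fits into the remaining velocity.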
Quality in Agile – A New Paradigm for QA
Quality Assurance is an area that many enterprises struggle with when they shift to Agile. The initial tendency is to look at each sprint as a “mini-waterfall” with a testing window at the end. However, the reality is that Agile calls for a much bigger shift. It requires a fundamental change in the way traditional delivery organizations structure their teams and their work, because Agile testing happens concurrently with development activities in a sprint.

An area of particular challenge concerns test automation. According to Agile principles, every feature that gets developed within a sprint must have associated test cases that have been run. Unfortunately, automating the tests is not always possible – and sometimes creates waste. For instance, if the user interface of a release is going to change significantly in a given sprint, any test cases that are created and automated will have to be scrapped and redone.

One way to overcome this issue along the road to adopting Agile is to make slight adaptations to this Agile practice. For example, a feature or story is completed in a given sprint if the team has designed the test cases and run them manually to ensure they work. The automation is then completed in the next sprint. However, there are many risks involved in taking this approach. One of the tenets of Agile is that there is a clearly defined set of deliverables that must be met before a user story (or feature) is considered complete. By changing the completion criteria, and signing off on a feature or user story pending an action that will take place in the following sprint, there is the risk that the team will forget to complete the action of automating the tests when they get focused on the next sprint.

Provided the necessary means are in place for managers to have the visibility they need – as described above – this kind of approach can actually work. If the cumulative number of test cases is shown for the release, and if this number fails to go up in a given sprint, it is likely that the tests from the previous sprint were not automated.
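That check is simple enough to automate as well. The sketch below walks over cumulative automated-test counts per sprint and flags any sprint in which the number did not grow. The counts themselves are invented for the example; in practice they would come from the test-management or CI tooling.

public class AutomationTrendCheck {
    public static void main(String[] args) {
        int[] cumulativeAutomatedTests = { 120, 155, 155, 190 }; // one total per sprint
        for (int sprint = 1; sprint < cumulativeAutomatedTests.length; sprint++) {
            if (cumulativeAutomatedTests[sprint] <= cumulativeAutomatedTests[sprint - 1]) {
                System.out.println("Warning: no new automated tests in sprint "
                        + (sprint + 1) + " - were last sprint's tests automated?");
            }
        }
    }
}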
Managing a Successful Transition
When going through an Agile transition, evaluate it from the perspective of the business and ask the question: “How is Agile working for us?” Ultimately, businesses could not care less what methodology teams use, as long as they deliver predictable, high-quality results. To manage that, you need intelligence. You need to see what’s working and what isn’t, identify trends, surface areas that need more attention, and make informed decisions.

Chances are, most enterprise development organizations will never be completely Agile. Nor should they be. The reason for any transformation is not to standardize on a process, but to create a high-octane, optimized delivery engine that makes the best use of its resources to deliver business value. You need to be able to manage both Agile and traditional projects – rolling them all up into a single dashboard. Further, you need a holistic view of delivery across this portfolio, into specific projects and down into teams and tasks. This information will help to plan and manage an organization‘s transformation. As you are making decisions on how to transform an entire organization, providing visibility into current and historical metrics is the only way to plan a successful transition. The data will help you to understand the key benefits that Agile brings to teams, so that you can identify the projects that make the most sense to transition. ■

> About the author
Joachim Herschmann is the Product Director Test Automation at Micro Focus, responsible for the company‘s automated testing offerings. He has over 15 years of experience in the software development and testing disciplines, and he has been a frequent speaker and instructor on these topics for over a decade. He is also a certified ScrumMaster. Joachim joined Micro Focus in July 2009 through the acquisition of Borland, where he was in charge of the company‘s Lifecycle Quality Management suite of products. Before Borland, he was a technical account manager and consultant for software testing and quality assurance with Segue, a leading testing solution provider. Previously he was also a consultant specializing in the implementation, testing and launching of large-scale website projects.

Developing Software Development
by Markus Gärtner

The software business resides in a constant crisis. This crisis has already lasted since the sixties, and every decade since then seems to have had an answer to it. Among the most popular and most recent movements were the Software Engineering and the Agile movement. In his book Software Craftsmanship – The New Imperative [1], Pete McBreen argues against the engineering metaphor and explains why it only holds for very large or very small projects, but not for the majority, the medium-sized software development projects.

As Albert Einstein said, “We cannot solve our problems with the same thinking we used when we created them”. So far, every aspect of the software crisis turned out to be self-inflicted in order to sell training or educational courses on the solution that happened to be mainstream at the time. Since essentially “all models are wrong, but some are useful” (George Box), this article will take a closer look at the useful aspects of the latest answers to the software crisis, software engineering and craftsmanship.

To avoid any confusion, the term software development in this article will mean programming, testing, documenting and delivery. Similarly, a software developer may be a programmer as well as a tester, a technical writer or a release manager. I will provide a compelling view on the overall development process and compare it to the terms we may have adapted from similar models like Software Engineering or Software Craftsmanship.

From Software Engineering...
Engineering consists of many trade-offs. For example, an engineer developing a car makes several trade-offs:

• fuel consumption vs. horse power
• horse power vs. final price
• engine size vs. car weight.

An engineer considers these variables when constructing a car and uses a trade-off decision to achieve a certain goal that the car manufacturer would like to reach. Thereby he will ensure that the car is safe enough given the time he has to develop the car.

Software programmers as well as software testers also deal with trade-offs in their daily work. For example, a software tester considers the cost of automation and the value of exploration. The more time the tester spends on automating tests, the less time there is for exploring the product. The more time is spent on exploration, the less time will be available to automate regression tests for later re-use. Figure 1 illustrates this trade-off.

Figure 1: The exploration vs. automation trade-off in software testing

The level of automated testing constitutes another trade-off decision. Automating a test at a high system level comes with the risk of reduced stability due to many dependencies in the surrounding code. Automating the same test at a lower unit level may not cover inter-module integration problems or violated contracts between two modules. Figure 2 shows this trade-off.
Figure 2: The composition vs. decomposition trade-off in software testing

Similarly, there are four such trade-offs mentioned in the Agile Manifesto. The last sentence makes them explicit:

“That is, while there is value in the items on the right, we value the items on the left more.”

Using the same graphical representation as before, figures 3(a)–3(d) illustrate the values from the Agile Manifesto:

Figure 3: The four Agile value statements as trade-offs

At times a software project calls for more documentation. The project members are then better off spending more time on documentation and less time on creating the software, thereby creating less software. Similarly, for a non-collaborative customer more time may be spent on negotiating the contract. The trade-offs between individuals and interactions as opposed to processes and tools, as well as responding to change as opposed to following a plan, need to be decided for each software project. Agile methods prefer the light-weight decisions to these trade-offs, but keep themselves open for heavy-weight approaches when project and context call for it.

... towards craftsmanship ...
In his book [1], Pete McBreen describes the facets of craftsmanship by and large. We have to keep in mind, though, that craftsmanship, just like engineering, provides another model of how software development can work. This model is suitable for understanding the basic principles, but, as with every model, it leaves out essential details, resulting in a simplified view of the overall system.

McBreen’s main point is that the software engineering metaphor does not provide a way to introduce people new to software development to their work. Therefore he introduces the craft metaphor. The Software Engineering model does not provide an answer on how to teach new junior programmers, testers, technical writers, and delivery managers on the job. And in fact, Prof. Dr. Edsger W. Dijkstra already noticed this in 1988. Back then, Dijkstra wrote an article on the cruelty of really teaching computer science [2]. According to Dijkstra, the engineering metaphor for software development and delivery leaves too much room for misconceptions, since the model lacks essential details.

The craft analogy provides a model for teaching people new to software development on the job, and does so in a collaborative manner by choosing practices to follow, deliberate learning opportunities and providing the proper slack to learn new techniques and practices. All these aspects are crucial to keep the development process vital. Experienced people teach their younger colleagues. The younger colleagues learn how to do software development while working on a project. By taking the lessons learned directly into practice, new and inexperienced workers get to know how to develop software in a particular context. Over time, this approach creates a solid basis for further development, of the software as well as of the person.

... and beyond
There are other aspects in the craft metaphor, although these ideas, too, had been floating around since the earlier days of the Software Engineering movement: taking pride in your daily work, caring for the needs of the customer, and providing the best product within the given time, money and quality considerations that the customer made. Of course, every software development team member is asked to provide their feedback on the feasibility of the product to be created. This includes providing a personal view on the trade-offs that each individual makes to estimate the targeted costs and dates.

Software Development
Dijkstra wrote in late 1988 about the cruelty of analogies [2]. Likewise, a few years earlier Frederick P. Brooks discussed the essence and the accidents of past software problems [3]. Brooks stated that he did not expect any major breakthrough in the software world during the ten years between 1986 and 1996 that would improve software development by any order of magnitude. Reflecting back on the 1990s, his point seems to hold to a certain degree.

Since these two pioneers in the field of software development wrote down their prospects of future evolutions, another decade has passed. Reflecting on the points they made about a quarter of a century ago, most of them still hold. However, the past ten years of software development with Agile methods, test-driven development and exploratory testing approaches show some benefits in practice. What we as a software-producing industry need to keep in mind, however, is the fact that software engineering as well as software craftsmanship are analogies, or merely models.
They provide heuristics, and heuristics are fallible. On the other hand, these models provide useful insights that help us understand some fractions of our work. The models focus on a certain aspect of the development process, while leaving out details that may be essential at times, but not for the current model in use.

From the engineering metaphor, trade-offs are useful. Given the complexity of most software projects, trade-offs provide a way to keep the project under control, while still delivering working software. Systems thinking can help to see the dynamics at play to make decisions based on trade-offs. From the craft analogy, apprenticeships help to teach people on the job and help them master their skills. Where traditional education systems fail, the appeal of direct cooperation with an apprentice helps to teach people relevant facets of their day-to-day work.

While the analogies help, we need to keep in mind what Alistair Cockburn found out in his studies on software projects [4]:

• Almost any methodology can be made to work on some projects.
• Any methodology can manage to fail on some projects.

That said, the analogies apply at times. We need to learn when a model or analogy applies in order to solve a specific problem, and when to use another model. No single analogy holds all the time, so creating and maintaining a set of analogies is essential for the people in software development projects, in order to communicate and collaborate. ■

References
[1] Software Craftsmanship – The New Imperative, Pete McBreen, Addison-Wesley, 2001
[2] On the cruelty of really teaching computing science, Prof. Dr. Edsger W. Dijkstra, University of Texas, December 1988
[3] No Silver Bullet – Essence and Accidents of Software Engineering, Frederick P. Brooks, Jr., Computer Magazine, April 1987
[4] Characterizing people as non-linear, first-order components in software development, Alistair Cockburn, Humans and Technology, 1999

> About the author
Markus Gärtner is a senior software developer for it-agile GmbH in Hamburg, Germany. Personally committed to Agile methods, he believes in continuous improvement in software testing and programming through skills. Markus co-founded the European chapter of Weekend Testing in 2010. He blogs at blog.shino.de and is a black-belt in the Miagi-Do school of software testing.
Listen each other to a better place
by Linda Rising

With my good friend, Mary Lynn Manns, I’ve written a book entitled Fearless Change which describes patterns for introducing new ideas. Mary Lynn and I struggled to come up with a good name for our book and finally decided on “Fearless” as a reflection of one of the most important patterns in the collection: Fear Less. This pattern addresses the problem of resistance to new ideas. Our usual reaction to those who are skeptical about our ideas is to treat the resistors as naysayers and avoid them. We don’t want to hear anything critical of our new idea. We tend to surround ourselves with those who agree with us. This means we limit what we can learn about the idea or how to improve the introduction process. We happily go forward believing that all is well – except for “those” negative people who just won’t listen!

The Fear Less pattern advises innovators to listen carefully to those who aren’t initially enthusiastic about the new idea. Listen and learn. As my mother used to say, “Listen hard to what you don’t want to hear.” The skeptic who takes the time to tell you what won’t work is offering a gift. Appreciate it.

Often when I talk about this pattern, I tell the story of skeptics who not only gave me the gift of their viewpoints, but when I appreciated them, when I listened, those skeptics became my greatest supporters. They didn’t necessarily sign up wholesale for the idea, but they helped me do the best job possible of bringing the idea to real fruition in the organization. I often thought that maybe no one had ever seriously listened to them before. I wondered what that’s like – not to have anyone listen to you.

Listen to understand
Mary Lynn and I discovered a magical writing technique. When we would get stuck on some part of the book, I would say, “Ask me a question.” Then Mary Lynn (she was really good at this!) would say, “Linda, why <important question>?” Then I would start explaining as I would to an audience member who might have asked the same question. Mary Lynn would type furiously to capture what often surprised both of us. This process reminds me of something that author E. M. Forster observed, “How do I know what I think until I say it?” It seems that we need someone to “listen us into understanding.”

In Barbara Waugh’s book about her experience as a change agent at HP, she proposed:

“Instead of a great keynote speaker, what if we have a great keynote listener who can listen us into creating our visions for HP’s future?”

Barbara explains that she first heard about the generative power of listening from Nelle Morton, the late feminist theologian and author, who believed that listening is a great and powerful skill that opens the creative floodgates in the person being listened to. The listener’s attentive, unbroken, and receptive silence invites speakers to explore their thoughts and come up with ideas that they’ve never had before. Ideas that literally didn’t exist until they were “listened into speech.”

Listen to better health
Listening can have deeper impact for us than helping us understand what we are thinking. I was intrigued by reading an account of an experiment in The Placebo Response. In the mid-80s, several family physicians in Canada, led by Dr. Martin Bass, studied a large group of patients who visited doctors with a wide variety of common symptoms. The investigators asked: What best predicts whether the patient will say that he is better one month later?

Their detailed review of the medical records showed many things that did not predict whether the patient would get better: the thoroughness of the medical history and physical exam, whether the physician did any lab tests or X-rays, and which medications were prescribed. Almost everything physicians are taught turned out to make no difference for this group of patients.
The doctors were able to identify one factor that best predicted whether the patient would report feeling better after one month, and that was – whether the patient said that the physician had carefully listened to the patient’s description of the illness at the first visit to the doctor.

In a follow-on study, Bass and his colleagues considered a large group of patients who came in with the new onset symptom of headache. After a year, they found that what best predicted an improvement in the headaches was the patients’ report that, at the very first visit, they had a chance to discuss their problem fully and felt the physician was able to appreciate what it meant to them. Barbara Starfield of Johns Hopkins University did a similar study of public health clinic patients in Baltimore and reached the same conclusion. The doctors listened their patients into better health.

Listen to reach a better place
I met someone at a conference recently who said: “I want to talk to you about patterns. What’s the big deal? I really don’t like patterns. Why should I? I don’t get it!”

I began my standard “why patterns are great” talk, throwing in everything but the kitchen sink in my attempt to convince my protagonist and “sell” patterns. Finally, I paused and the person said: “But, those patterns <about a particular subject> are worthless!”

Ah, so the problem was not with “patterns” at all, but with “those particular patterns,” and, as it happens, I didn’t like them either. As soon as I acknowledged that “those particular patterns” were not very good ones, the speaker was happy and moved on to another topic – enough said.

I was astounded. How many times must I learn this lesson? I get countless numbers of questions in email and during presentations. As soon as I hear a keyword, I’m off and running, assuming that, of course, I can answer that! I am careful to say at the end, “Does that answer your question?” But, of course, many times I wonder if the questioner is intimidated by the situation and nods out of politeness. If only I would stop and really listen. I could listen the questioners to a better place and go right along with them! If I had only done this with the patterns objections, I could have listened him into appreciating patterns, instead of arguing my case.

Now and then, I like seeing old re-runs of the television series MASH. Just a few weeks ago as I was thinking about the power of listening, I saw the episode where a soldier killed in battle has trouble realizing that he has died. He tries to communicate with members of the MASH unit, but only Klinger, who is suffering from a high fever, can hear him. The “dead” soldier observes that of all the things that he thought he would miss after death, the worst is that he is talking but no one is listening. No one can listen him to a better place. How many people spend their lives like this?

Listen to our customers
Think of the power of adopting this technique in the workplace! What would happen if we listened to our colleagues and our customers? What would change in our homes, if we listened to the members of our family? Would we all help each other to be in a better place? One of the customer interaction patterns I have written is called Listen, Listen, Listen. It is about helping you and your customer move to a place of better understanding and building a trusting relationship. The keystone of that pattern collection is called It’s a Relationship, Not a Sale.

Let me recommend the free newsletter Good Experience: http://www.goodexperience.com/signup.php

A recent issue pointed to a Wall Street Journal article about Vodafone‘s attempt to make a simpler cell phone (Mobile Phones, Older Users Say, More Is Less): http://tinyurl.com/a9oqd

Here‘s an excerpt from that article:

What [Vodafone] heard from consumers aged 35 to 55 shocked executives of the Newbury, England company. Many in that age range didn‘t know their cell phone numbers or how to use basic functions. One-third, for example, said they didn‘t know how to tell when they had received a text message. Some thought the envelope icon that signals a message meant their phone bill had arrived...

Many 35- to 55-year-olds also didn‘t like going into Vodafone retail stores because the young staff – average age 24 – talked in acronyms they couldn‘t understand. These consumers said they weren‘t interested in the cameras, Internet browsers and many of the other features that are becoming standard on the latest cell phones. “Our biggest customer segment turned round and said: ‘You haven‘t been listening to us,’” says Guy Laurence, the company‘s consumer-marketing director. “It was an industry for kids.”

What a wake-up call! Listening, really listening to your customers can move you to a better understanding of customer needs that your product can satisfy.

Listen to each other
On a personal level, you might try what humorist Loretta LaRoche calls Power Whining. Simply tell a friend that you‘re stressed and need 2 minutes to unload. The friend’s job is just to listen without interrupting. When you‘re done, reciprocate. When both of you have finished, wrap up with a 1-minute monologue each, describing the things for which you are most grateful. The last bit puts everything into perspective by reminding you both to be grateful for all the things that aren‘t stressing you out. We need that kind of reminder to help us stay on an even keel.
Finally, in case you feel that no one listens to you, the best way to solve this problem is to start listening to others. Give someone the gift of your attention and you will probably find that soon others will be listening to you. Could we start a chain reaction that might help a noisy world that needs the silence of personal attention? Let me know if it works for you! ■

References
Brody, Howard, The Placebo Response, Cliff Street Books, 1997.
Manns, Mary Lynn and Linda Rising, Fearless Change: Patterns for Introducing New Ideas, Addison-Wesley, 2004.
Waugh, Barbara with Margot Silk Forrest, The Soul in the Computer, Inner Ocean, 2001.

> About the author
Linda Rising
With a Ph.D. from Arizona State University in the field of object-based design metrics, Linda Rising’s background includes university teaching and industry work in telecommunications, avionics, and tactical weapons systems. An internationally known presenter on topics related to patterns, retrospectives, agile development, and the change process, Linda is the author of numerous articles and four books – Design Patterns in Communications, The Pattern Almanac 2000, A Patterns Handbook, and Fearless Change: Patterns for Introducing New Ideas, written with Mary Lynn Manns. Find more information about Linda at www.lindarising.org.

Add some agility to your
system development
by Maurice Siteur & Eibert Dijkgraaf

Agile development and traditional system development seem to be two completely different worlds.

[Diagram: “Agile software development” and “Traditional software development” as two separate worlds]

Agile is too free-format for the traditional guys, and traditional is way too slow for the agile guys. We want to show you that agile can be used in traditional development processes and vice versa.

A good example of agility in a very tight environment is found in a Dutch law. Article 1.05 of the water patrol regulations states: ‘A skipper must, in favor of safety in shipping, as far as this is required by special circumstances, prefer good craftsmanship above this regulation.’

This is the ultimate example that a process or a regulation is not an aim in itself, but is meant for a higher objective.

Organizations do their best to perform better, but have a hard time doing that. Time and time again, we hear things like: ‘We should work in releases’, ‘We would like to organize releases’, and ‘We are working on that’. Or ‘We should do more agile development’, but the organization has no clue on how to start with agile development.

The experience with agile development is very useful for helping traditional development to become more flexible. However, agile development could also adopt some ideas from release management.

[Figure 1 – Changes into production: changes/patches, large changes and large projects are grouped into releases (Release 1, 2, 3) that go into production]
Maintenance is more than 50% of the cost of ownership of an application. During maintenance, both traditional and agile projects face the challenge of bringing controllable changes into the production environment. Releases can give this controllability. ‘Controllable’ means the ease with which changes can be implemented and tested in the software.

Releases are similar to iterations in agile development: smaller units of work that are implemented and brought into production. Organizations with pure agile development across the complete organization will do this all the time; all other organizations have their challenges.

In real life, organizations have the feeling that they are running all the time to keep up with all these changes. Different kinds of changes are put into production (see figure 1).

Changes can be:

• Complete projects;
• Large changes to one or more software systems;
• Smaller changes;
• Patches in production.

These changes can be made in an agile or a traditional way and implemented the same way. No matter what the development process looks like, producing a new release has a lot of similarities.

Using Releases
In order to channel the number of changes, the use of releases can be the solution. Advantages of releases are:

• Related changes are combined;
• Easier to plan;
• More testable;
• Better predictability of going into production.

For a list of all potential changes, an impact analysis is done with all stakeholders. The importance for the business is weighed against the realization effort. In addition, any possible risks will be identified. It is already at this point in time that testing should give a first estimate of the test effort needed. Changes can influence each other. For efficiency reasons, this must be recognized. Changes can then either be combined or not, and instead be implemented chronologically in a conscious way.

With all this information, a release can be composed, and this scope and combined information is called the release plan.

Releases are planned at regular time intervals. The dates of the releases are known upfront to everybody. Changes that are not ready for a release date will be put into the next releases. So releases will always be on time.

A release can have a standard timetable. The next phrase, however, is very true: ‘Every release is equal, but some releases are more equal.’

This means that releases should be treated the same way, but every release is different from the other releases. The impact analysis will decide what kind of release it will be and how different it is from other releases.

Testing Releases
The tester should be involved from the first moment the release is generated. While executing the impact analysis, the tester has two major activities: firstly, he has to ensure that every change is combined with acceptance criteria and will be testable. Secondly, he has to add the impact on the planning from a testing point of view. A simple technical change sometimes needs more testing, while a technically complicated change may have less impact on testing. Based on our experience, we can say that it is very important that the tester understands the technical changes, so that he is able to translate them into risks.

Decide on how to test the release based on the impact – write it down in a test plan. The impact determines the test strategy for that release. The test strategy is the part of the test plan that changes every release. The test strategy decides on the amount of testing that needs to be done.

Specify test cases as far as needed. Existing test cases can be reused for regression testing, and some test cases need to be added, deleted or changed.

Execute the test when the software is delivered. All previous tasks should be completed by now. The test execution should start as soon as possible (when the software is delivered), in order to save valuable time. Problems with test environments frequently influence the critical path. Preparation is key.

Make a report at the end of the release, and try to learn from the releases you do.

Testing in releases makes the testing activity more controllable. The changes are made testable. Life gets easier. Time plans will be followed, which is not the case in almost every testing assignment.
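As a rough illustration of how the impact analysis can feed the test strategy, the sketch below assigns a test depth to each change from two scores. The enum values, the scoring scale and the thresholds are all assumptions made for this example; a real release test plan would use the project’s own risk criteria.

enum TestDepth { REGRESSION_ONLY, TARGETED, FULL }

class Change {
    String id;
    int businessImpact;   // e.g. 1 (low) .. 5 (high), from the impact analysis
    int technicalRisk;    // e.g. 1 (low) .. 5 (high), from the developers

    Change(String id, int businessImpact, int technicalRisk) {
        this.id = id;
        this.businessImpact = businessImpact;
        this.technicalRisk = technicalRisk;
    }

    /** Crude rule of thumb: the higher impact times risk, the deeper the testing. */
    TestDepth testDepth() {
        int score = businessImpact * technicalRisk;
        if (score >= 15) return TestDepth.FULL;
        if (score >= 6)  return TestDepth.TARGETED;
        return TestDepth.REGRESSION_ONLY;
    }
}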
Agility in Releases
Agility lies in the ease of making documents and using them during software development. Pragmatism and industrialization pair up with each other in this approach. It is like working in a lean factory. A release means making a:
• Release plan
• Release test plan
• Release report/advice

Make the documents small and reuse them. This is not copy & paste! Think of the test strategy, which is different for every release. These documents are the very basic principles of good release management and testing.

Add the rest yourself and make sure you keep up with the speed of the project. Think of the following statement: rules are an aid to reach goals and not a goal in themselves. In other words, when you need to make test cases, but the project has no time for it, find a trick to solve this. The same is true for documents, but some minimum is needed for accountability reasons.

It will help enormously to make this work if there is a master plan. Release plans and release test plans should not be filled with information that applies to every release. This applies to information about:

• Stakeholders
• Test environment
• Use of tooling
• Organization of the release – the persons will be different, but the roles will not
• …

The master plan should contain the above stable information. Make sure the plans contain, as far as possible, only content that changes with most releases. This ensures that people read the plans.

Conclusion
Agile can benefit from working in releases, especially during maintenance.

Traditional development will profit from working in a more agile way by using releases.

Both agile and traditional development can meet in releases and move together into production. ■

> About the author
Maurice Siteur is a testing expert within Capgemini with over 25 years’ experience in IT. He is the author of a book on release management and testing.

Eibert Dijkgraaf is a testing advisor within Capgemini with 14 years’ experience in software testing. He feels strongly about life cycle test management.
Applying expert advice
by Eric Jimmink (illustrations: Waldemar van den Hof )

For attendees of conferences and readers of books and articles, the biggest added value is in advice that one can readily apply in their daily work. In my experience as an agile test consultant, I must say that it can be hard to take the message home and link it to the context of a current project. That may require some thought, and a good view of the bigger picture. For an agile project to really be a big success, many things will have to fit together. This article is about some of the lessons I learned, and a context in which they can all be applied.

Challenge requirements
(Gojko Adzic [1]; Tom Gilb)
It is a common pitfall for an agile team as a whole to assume that our customers have all of the answers. In telling us ‘What’ they want, they often leave out the ‘Why’ and move straight on to the ‘How’. Of course, ‘How’ is not something that the customer should prescribe: teams can build creative solutions, and have much more knowledge about the implementation domain. If we ask our customer for clarification ‘Why’ he asks for something, it is quite possible that his needs would be better served with a different product.

Your customer is also paying you to think

Gojko quoted the historic commissioning of a ‘Mach 2.5 airplane’. At the time, that would have been hard and very costly to achieve. By asking ‘Why’, the design team concluded that the basic need was to avoid being shot out of the sky. The team proposed developing a highly maneuverable airplane instead of a fast one, and this led to the F-16. It was highly successful, but it never reached Mach 2.5.

Earlier this year I was in a team where the customer specifically asked for a nightly batch process, to obtain new and changed records from a few tables in a remote system. As a team, we asked many questions. It turned out that the data was used in the workflow processes we were building, and that a one-day lapse in that data would actually be quite undesirable. The customer had experienced troublesome links with other systems inside the organization, often with complications like firewalls, and moving to a Windows platform from legacy systems on Unix. He had assumed that this situation was similarly complex, and that the ‘only solution’ would be a batch process. The team realized that this situation was different, and that with today’s technology they could provide more value to the customer. The end result was a system that used synchronous database updates.

Team members should challenge requirements

At the Agile Testing Days in 2009, Tom Gilb illustrated a good reason to challenge requirements. In countless projects, the formulation of requirements is best described as one big defect insertion process. He proceeded to give two suggestions for improvement. Interpretation problems could be greatly reduced if requirements had the form of executable tests. For rapidly providing us with insight into the quality of a requirements document, “Agile QC” [2] was presented. This entails a quick scan of a document sample against a set of a few simple rules. Extrapolation yields an estimate of the number of defects, and the amount of rework if the document is passed. The business can then choose between using the document, or having it rewritten.
Fixing the defects that were found is impractical, because with the rapid scanning technique the majority of the defects would remain undetected.

Agile QC rapidly gives insight into the quality of a requirements document
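To make the extrapolation step tangible, here is a deliberately simplified sketch. The page counts and the number of rule violations are invented, and the calculation ignores refinements such as correcting for defects the checkers themselves miss; it only shows the basic sample-and-scale idea behind the technique.

public class AgileQcEstimate {
    public static void main(String[] args) {
        int defectsFoundInSample = 12;   // rule violations found on the sampled pages
        int sampledPages = 2;
        int totalPages = 60;

        // Scale the sample result up to the whole document.
        double estimatedDefects = (double) defectsFoundInSample / sampledPages * totalPages;
        System.out.println("Estimated defects in the document: " + estimatedDefects);
    }
}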

Quality is Value to some person
(Jerry Weinberg [3])
The point here is that Quality as a perceived Value is very subjective. It’s easy to make assumptions. Whenever I am faced with requirements and (partial) acceptance criteria on paper, I remind myself that I cannot rely on paper alone. When I talk with the end users and the persons who will do the formal acceptance tests, I frequently find that my assumptions about their criteria were inaccurate. Moreover, I obtain new information on how the system will actually be used. Such information helps in defining better test scenarios. When shared with the other members of the team, it may save loads of time. In the eyes of the customer(s), we’re that much closer to getting the product right the first time around.

An interesting situation arises when the team discovers new stakeholders at some point during the project. This invariably leads to new insights into the priorities, with delays and scope creep just around the corner.

If I am in a situation where I can seek out my customer, then I will always do so – even if I don’t really have any burning questions. I’m not at all worried about wasting my time, or the customer’s. At the very least, I will get early feedback on my assumptions, and the customer will receive information about the test approach. Typically, I will leave with new information, and examples to be transformed into tests.

Great teams yield Great Products
(Jim and Michelle McCarthy)
The manifesto [4] tells us “Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.”

Jim and Michelle McCarthy [5] take this a lot further. They say: to be able to make a great product, you need a great team.

Team == Product

You actively have to build that great team, and keep it together. The team must have time to develop what they call a Shared Vision, and a Web of Commitment.

What I sometimes see as a tester in agile projects is that the Shared Vision (of the task at hand, and where you are going as a team) is there alright, but that the commitment to quality is not evenly spread across the team members. Too often, I see that several members have absolutely no affinity with testing, and lack the results-oriented attitude to pick up “testing tasks”. Near the end of the iteration, they prefer to relax a bit, review some code which has already been reviewed, and read the stories for the next sprint.

What you need on your Great Team is that about half of the developers possess enough Testing DNA to pick up testing tasks. Your team surely doesn’t have to consist entirely of tester/developers, but it’s bad if you don’t have at least a few. For the work really does demand some flexibility in the way team members apply their skills, adapting to the work at hand. When working in fixed-length iterations (as most teams do), it is a natural aspect of the process that there is more to be tested near the end. Test automation helps a lot, but will not make it go away.

At the organizational level, it is obviously important to create an environment for those great teams. Besides the obvious, attention should be given to career paths, retention, and recruitment. If you want to increase the percentage of people who can mix testing and design/coding tasks, then you have to define this as a specific career path which is well rewarded.

Collective test ownership
(Elisabeth Hendrickson [7])
At the Agile Testing Days in 2009, Elisabeth Hendrickson held a great talk, and gave a new name to a concept. She listed 7 key practices that made testing fit on agile teams. One was collective test ownership. Of course it makes perfect sense to consider all testware to be part of the shared codebase. However, how many teams take this far enough to ensure that all tests can and will be maintained and repeated accordingly? By all members of the team? Including the tests which are not automated?
Handling testing spikes

With respect to sharing the load in testing, a team needs Understanding and Foresight. If it is understood that some of the team
members will most likely have to switch roles, then you can also plan for this to occur. For example, one team member may agree not
to write any code for story C, as he or she will most likely end up testing that story.

Actively sharing knowledge is critical for success. To be really agile as a team, team members must have more than a passing knowl-
edge of each other’s work. Many teams already do a lot in this regard, by employing practices such as pair programming, additional
reviews, and assigning work to the least qualified implementer [6]. What a team ought to do is standardize the confirmatory aspect of testing in its coding standards.

Making the problem visible can really help. For the team as a whole, it can serve as an eye opener if it is visible on the wall which
items are not Done because they have not yet been tested.
A team could go even further and use a Kanban implementation to limit the work in progress. If the number of ‘slots’ in a ‘ready
for test’ column is limited, then the consequence of filling up the last slot in that column is that the team cannot start a new story.

The work requires collaboration, and mutual support. A developer wants rapid feedback from a tester, who might have to switch tasks
in order to provide that feedback in time. To be able to use a BDD tool effectively, a tester might need another team member to write
the glue code (e.g. a FitNesse fixture).
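
To make that last point concrete, here is a minimal sketch of the kind of glue code a developer might hand to a tester so that checks can be expressed as a FitNesse table. It uses the classic FIT ColumnFixture style; the package, class name and the shipping-cost rule are invented for this illustration and are not taken from any project mentioned in this article.

// Hypothetical FIT fixture: public fields become the table's input columns,
// and a public method (written as "costInEuro?" in the table header) becomes
// the checked output column. Requires FitNesse / fit.jar on the classpath.
package fixtures;

import fit.ColumnFixture;

public class ShippingCostRules extends ColumnFixture {
    public double orderTotal;        // input column
    public boolean expressDelivery;  // input column

    // Output column: FitNesse calls this method and compares the result
    // with the expected value in the corresponding table cell.
    public double costInEuro() {
        if (orderTotal >= 100.0 && !expressDelivery) {
            return 0.0;              // standard shipping is free above 100
        }
        return expressDelivery ? 15.0 : 5.0;
    }
}

With a fixture like this in place, the tester owns the wiki table (one column per field, a final "costInEuro?" column, one row per example) and can add or change rows without touching the Java code.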

It is a good first step if the testing mindset of the developers on the team is such that they do not just feel proud about the code they write, but also about the unit tests that accompany such code. Test code must be maintained and refactored just like regular production code. In the event of changes in the requirements, re-doing manual tests can be a tedious task. Such a testing task should also be shared, the pain felt by the whole team.
However, that in itself is not enough. The best road to a test approach that is really shared by the entire team is to have it start at the customer. Approaches like BDD [8] or ATDD [9] begin with the desired behavior of the system and the customer's criteria. In so doing, the chance for the team to misinterpret the requirements is minimized. Discussion and feedback about the requirements are placed upfront, to the point that they are part of the requirements gathering process. Of course, in order for these customer-focused approaches to work, you do need to have a Great Customer on your side. That doesn't have to be a single person, of course.
For the record: ATDD was the first of the 7 practices.

Conclusion
All of the stories above align well with ATDD. It is a practice that many teams and organizations still treat as if it were an optional one. For them, embracing ATDD and the related practices would be a big step, or a growth process. In such situations, I encourage team members (especially designers and testers) to actively seek out the customer, to get concrete examples. Examples really help to clarify the intent behind a requirement. Testers especially are enthusiastic about this approach of seeking out examples. In their role it is easy to feel the pain when the team starts an iteration with requirements that are not really ready [10].
If you happen to have a Great Customer, then just go for it, incorporate as much as you see fit. Your customer will value your expertise, every step of the way. ■

References:
[1] http://www.acceptancetesting.info/the-book/
[2] http://www.result-planning.com/Inspection
[3] http://en.wikipedia.org/wiki/Software_quality
[4] http://www.agilemanifesto.org/principles.html
[5] http://www.mccarthyshow.com
[6] http://stabell.org/2007/07/13/arlo-beginners-mind/
[7] http://blogs.imeta.co.uk/agardiner/archive/2009/10/13/784.aspx
[8] http://blog.dannorth.net/introducing-bdd/
[9] http://testobsessed.com/2008/12/08/acceptance-test-driven-development-atdd-an-overview/
[10] http://blog.xebia.com/2009/06/19/the-definition-of-ready/

> About the author
Eric Jimmink is a test consultant at Ordina, based in The Netherlands. He started his career as a developer, and shifted his focus towards testing around 1998. Eric has been a practitioner and strong advocate of agile development and testing since 2001. Eric advises organisations on how to arrange agile testing. He also coaches teams and individual developers and testers. Since 2008, Eric has presented and shared his experiences at four international conferences, including the Agile Testing Days. He co-authored Testen2.0 – de praktijk van agile testen (Testing2.0 – agile testing in practice), a Dutch book about testing in the context of agile development.
Myths and Realities in Agile Methodologies
by Mithun Kumar S R

During a casual chat with one of my friends, we had a chance to glance through the glass conference room in which a project meeting was being held. My friend suddenly concluded, ‘This project is agile’. Surprised, I asked him how he was able to decide on that. Without any hesitation, he said, ‘Look at the projected spreadsheets used for planning. This is definitely Agile’. What? Do spreadsheets alone make a project Agile? I still wonder.

Though many projects embrace and take pride in calling themselves "Agile", not all understand the real meaning of it and thereby end up in trouble and finally blame the process. The other extreme, too, doesn't fare well in spite of the "conventional" tag. Let's crack the myths about Agile methodologies.

Myth: Fastest to Deliver is Agile
Customers and delivery heads are delighted to hear the word "fastest". Agile, however, never speaks of "fastest"; rather it is frequent deliveries which are stable and bring business value to the customers.

Myth: Meeting every day is Agile
Daily Scrum meetings often waste time on trivial issues rather than addressing what is required for the project. Stand-up meetings sometimes wind up only when it is close to an hour, underutilizing the resources and pushing the load to the end. Meetings need to be crisp and short. In cases where projects do not need this meeting daily, the "daily" ritual can be changed to a suitable frequency. However, one also needs to keep a constant check that no communication gaps creep in.

Myth: We would need to compromise on quality to meet all requirements.
The reality is that Agile calls for stringent quality gates at all stages. Compromise comes into the picture only when too many requirements are squeezed into a sprint without prioritization and through unachievable schedules.

Myth: More to deliver. So work all weekends.
This is a continuation of the previous reality. Agile projects run into the troublesome phase of working weekends because of improper planning. "Overtime is a symptom of a serious problem on the project", according to Kent Beck. Agile calls for more discipline than other methodologies. Plan, act and be quick enough to react to external influences. And most importantly, enjoy the weekends!

Myth: Considering the right side of the Manifesto to be completely out of scope in Agile.
While the Manifesto specifies that agile methodology values individuals and interactions over processes and tools, working software over comprehensive documentation, customer collaboration over contract negotiation, and responding to change over following a plan, the "purists" take the meanings literally and end up with an unplanned, undocumented process. A few tend not to document even the critical information required by external stakeholders, say the customer, bringing in a communication gap which leads to failure.

No doubt, the left side has more significance than the right, but considering what is good for the project is equally significant.

Myth: Agile is only for smaller co-located projects
Agile definitely scales up. The best way to achieve this in a big project is to have more self-organizing teams and take a big-bang approach. Trust that this has worked in global corporations, whose project size is more than hundreds of man years and which have geographically dispersed teams.

Agile definitely calls for a mindset change, but not at the cost of superstitions and blind beliefs. Experiencing and tailoring as per individual needs will score more than implementing bookish knowledge. By the way, did I mention that the spreadsheet we were seeing in the conference room was about plans to form a corporate soccer team? ■
> About the author
Mithun Kumar S R works with Siemens Information Systems Limited on its Magnetic Resonance - Positron Emission Tomography (MR-PET) scanners. He previously worked with Tech Mahindra Limited for a major Telecom project using Agile methodology. An ISTQB certified Test Manager, he regularly coaches certification aspirants. Mithun holds a Bachelors degree in Mechanical Engineering and is currently pursuing Masters in Business Laws from NLSIU (National Law School of India University).


Do You Need a Project Manager in an Agile Offshore Team?
by Raja Bavani

Software Product Engineering in a distributed environment requires optimal utilization of teams as well as hardware, software and related resources in order to improve speed to market under budget constraints. In this context, there is a tendency to propose a reduction in management overheads in distributed models and form extended teams that report to managers working at remote locations. This may work for very small extended teams of 1 or 2 engineers working on production support or routine maintenance tasks. Does it work for larger teams as well? Many times, practitioners tend to embrace agile principles and recommend a self-directed team of offshore engineers that can work with an onsite manager. Here the compelling question concerns the need for an offshore project manager. This gives rise to several related questions, such as:

a) Do self-directed teams need a leader or a manager?

b) When there is an onsite agile team that reports to an onsite Project Manager, why do we need an offshore Project Manager for an agile offshore team that is going to work with the onsite team?

c) What is the role of a Scrum Master at the offshore location?

d) Why can't an agile team report to a remote manager or Scrum Master?

Let us start this discussion with the assumption that we are implementing agile practices through a home grown methodology or an industry standard agile methodology such as Scrum in a distributed model.

Scrum prescribes 3 roles: Product Owner, Scrum Master, and Team Member. Typically, the Product Owner owns product specifications and provides the same to the Scrum Master and the rest of the team. On the other hand, the Scrum Master facilitates the process of software creation by working with the team and enabling team members to find solutions to problems. In a way, the Scrum Master is responsible for hiring, employee development and grooming, too. The role of a Scrum Master comes with an adequate command to lead the team. However, it does not involve control. The team controls itself and gets the necessary coaching from the Scrum Master. Agile teams are self-directed teams. Team members work together, support each other and solve their problems. They reflect and improve. They inspect and adapt. The classical ‘Project Manager’ role is a loaded role. It combines both the ‘What’ and ‘How’ parts of Software Project Management, whereas the role of Scrum Master revolves around the ‘How’ part alone. This is because the Product Owner takes care of the ‘What’ part and is responsible for providing product specifications to the Scrum Master. A typical Scrum team has 7 to 9 team members. For every Scrum team, there is a Scrum Master and a Product Owner who are part of the team. In this team, the Scrum Master needs to be co-located, whereas the Product Owner can be at a remote location. Scrum practitioners strongly recommend this structure. This is because, without a co-located Scrum Master, the team will not have a coach or a mentor to go to. In fact, on a need basis, Scrum Masters mentor their teams in implementing Scrum. Also, a remote Scrum Master will not see the team in real time and understand when to intervene and support in order to remove impediments or resolve issues. According to Scrum practitioners, having a remote Scrum Master and leaving the team alone is the first step to ensure project failure.

This answer may not be very convincing when our context does not involve ‘Scrum’. This leads to questions such as ‘We do not use Scrum. We use a home grown agile methodology and our onsite Project Manager will provide all necessary details to the offshore team. Why do you need another manager at the offshore location?’

Fair enough. Let us explore this from the expectations we have of self-directed teams. In a self-directed team, everyone is responsible for asking questions, answering questions, owning up to situations and resolving problems. However, it is very uncommon to see self-directed teams that go on a mission without a manager or a coach. The manager of a self-directed team manages the context or contextual situations. The role of the manager is not to micro-manage team members. This role involves real-time observations, interactions and assessments of situations for timely corrective actions. This role involves consolidation of observations and events in order to understand if there are any issues that may impact the project goals. It does not stop here. This is a critical role that binds the team together and enables conflict resolution when required. This role helps the team by influencing external teams, or supporting teams that owe a timely response or output for the self-directed team to perform. This role involves identifying events that require a root cause analysis or reflection in order to incorporate continuous improvement. Also, this role involves initiating appreciations and celebrations to compliment team members.

With these thoughts, can you think of offshore agile teams that function without a co-located manager (or a Scrum Master, if you practice Scrum)? Or have you seen successful products delivered with such an optimization? We have not seen this happening. Based on our interactions with industry experts, not having a co-located role such as Scrum Master or similar is a sure recipe for disaster. This is why in our engagements we strongly recommend such a role for each offshore team.

According to Alistair Cockburn, who is one of the co-founders of the Agile Manifesto, Software Development is a people-intensive cooperative game. Every orchestra needs a real-time conductor. Every football game requires a real-time coach as well as a manager. Every space mission has a leader. This applies to Software Development as well.

During August 2010, Len Bass, Senior Member of Technical Staff, Software Engineering Institute of Carnegie Mellon University, in his keynote address at the IEEE International Conference on Global Software Engineering (ICGSE 2010, Princeton, NJ), made a very good comparison of Software Architectures and Software Project Teams in distributed environments. Software Architecture has two primary things – structure and behavior. Software Architects define the structure of any architecture depending on its expected behavior. That is, the behavior drives the structure, and the structure needs to deliver behavioral expectations. The same things hold good for distributed software teams. When you are structuring any team, try to identify the qualities and results that you expect from the team. That will help you define the structure.

To summarize, let us revisit our question. Do you need a Project Manager in an Agile Offshore Team? Well, it depends on the expected behavior of the team. For very small teams of 1 or 2 engineers that do monotonous work, such as bug fixing or maintenance of end-of-life non-critical products, you may be able to manage with a remote Project Manager. However, in all other cases, you will need to structure that team in such a way that it gets adequate local leadership and managerial support to deliver the best. If you follow Scrum, you will need a local Scrum Master for every project. Else, you may need a ‘Project Manager’ or a similar senior role to support your local team to deliver the desired behavior.

Eventually, defining the structure of distributed teams, so that engineers at any location are not treated as augmented team members reporting to a manager or a leader at a different location, is very critical for the success of distributed agile projects. ■

> About the author
Raja Bavani heads delivery for MindTree's Software Product Engineering (SPE) group in Pune and also plays the role of SPE evangelist. He has more than 20 years of experience in the IT industry and has published papers at international conferences on topics related to Code Quality, Distributed Agile, Customer Value Management and Software Estimation. His Software Product Engineering experience started during the early 90s, when he was involved in porting a leading ERP product across various UNIX platforms. Later he moved onto products that involved Data Mining and Master Data Management. During early 2000, he worked with some of the niche Independent Software Vendors in the hospitality and finance domains. At MindTree, he worked with project teams that executed SPE services for some of the top vendors of Virtualization Platforms, Business Service Management solutions and Health Care products. His other areas of interest include Global Delivery Model, Requirement Engineering, Software Architecture, Software Reuse, Customer Value Management, Knowledge Management, and IT Outsourcing. He regularly interfaces with educational institutions to offer guest lectures and writes for technical conferences. His SPE blog is available at http://www.mindtree.com/blogs/category/software-product-engineering. He can be reached at raja_bavani@mindtree.com
Acceptance TDD and Agility Challenges
by Ashfaq Ahmed

Agile processes have been widely embraced in recent years by software organizations to cope with frequently changing requirements and to ensure on-time delivery to the market with the desired quality. However, several experience reports indicate that process improvement initiatives are often challenged. In this article, we discuss how Acceptance TDD (ATDD) helped us to cope with agility challenges. We also present how ATDD was introduced into the team and the lessons learned from the process improvement effort.

1. Introduction
In today's market, software organizations are required to deliver competitive and innovative products more than ever before. To cope with market dynamics, many organizations have embarked on agile methods. Like many other organizations, we also decided to go agile about two years ago by adopting SCRUM and test driven development (TDD). The team consisted of eight members with two quality assurers (QA), four system developers, one system architect, and one Scrum master. The team was geographically distributed over three locations.

In our process improvement effort, however, we encountered the following challenges:

• The paradigm shift from traditional software development methodologies towards agile ultimately redefined project roles. In particular, when TDD had been chosen as the development methodology, the question arose about QA's role in an agile project.

• The existing requirement engineering process was quite laborious. Despite spending a lot of time on documenting and maintaining requirements, issues with requirements still persisted. Hence, the objective was to make the requirement engineering process more effective.

• How to maintain effective communication while being agile and distributed.

In an effort to cope with these challenges, we asked TDD practitioners on a forum [1] with approximately 4500 members how QA is involved in the process while practicing TDD. Although it turned into a very interesting discussion, we inferred that firstly there is no consensus among practitioners about QA's role in this specific case, and secondly no argument was supported by any empirical evidence. On the contrary, we observed significant emphasis on ATDD in testing literature, workshops, and conferences [2, 3, 4]. Thus, we ultimately picked ATDD to analyze how well it could help us deal with our problems.

The remainder of this article is organized as follows: the next section presents how ATDD helped us cope with agility challenges in a distributed team. Section 3 presents how we introduced the process into our team. In section 4, we share lessons learned that will help others to successfully introduce ATDD. Finally, we conclude the report.

2. Acceptance TDD and the challenges
We witnessed ATDD to be very useful in coping with the above-mentioned challenges. What follows is a brief description of the role ATDD played with respect to the agility challenges.

2.1 Defining the QA role
At the beginning, when TDD was first adopted, QA's role and contribution was not considerably significant. Unfortunately, SCRUM also doesn't provide any guidance on QA's role in an agile project [5]. But ATDD helped us to define the role in a way that QA initiates sprints by writing acceptance tests specified during the sprint planning meeting. These acceptance tests further lead the development effort.
Defining acceptance tests up-front not only actively involves QA earlier in the development process, but also has a significant impact on the quality of the delivered solution.

2.2 Improving the requirement engineering process
We struggled with two challenges in our requirement engineering process: with vague requirements, and with excessive documentation. Writing acceptance tests during the sprint planning meeting in the presence of the focus group consisting of three different roles, i.e. QA, developer, and customer, makes the requirement elicitation process more structured. Moreover, the focus group with diverse domain knowledge provides the opportunity to consider both technical and business aspects of a user story [2]. Afterwards, once the acceptance tests have been written by QA, the developer can embark on development without repeatedly asking to resolve ambiguities in the requirements [6]. All subsequently emerging questions are also addressed to the customer. Therefore, the customer should be available throughout the development process.

Excessive documentation was the next challenge. We produced requirement specification documents consisting of numerous pages. Updating and maintaining the documents was also quite a tedious job. Despite all the effort, concerns about requirements still remained. Hence, the goal was to optimize the requirement engineering process by reducing waste and delivering concise, precise, clear and testable requirements.

In ATDD, you pick some user stories for the sprint. Then the focus group defines acceptance tests for each user story. These acceptance tests become part of a sprint backlog document. In a later phase, if further acceptance tests are determined, the sprint backlog is updated accordingly. Thereby, documentation could be significantly reduced and yet qualified requirements are delivered.
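
As a rough illustration of what such an acceptance test can look like once it is made executable, the sketch below expresses one invented user story, discount rules for repeat customers, as JUnit 4 checks. The story, the class names and the discount rules are assumptions made purely for this example; the article does not prescribe a particular tool or notation, and in our case the tests started out as plain-language criteria in the sprint backlog.

// Minimal, self-contained JUnit 4 sketch. The DiscountCalculator is stubbed
// inline only so that the example compiles on its own; in a real project it
// would be the production code driven out by these acceptance tests.
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class DiscountAcceptanceTest {

    static class DiscountCalculator {
        double discountPercent(double orderTotal, boolean repeatCustomer) {
            if (orderTotal < 1000.0) {
                return 0.0;                      // small orders: no discount
            }
            return repeatCustomer ? 10.0 : 5.0;  // large orders: more for repeat customers
        }
    }

    private final DiscountCalculator calculator = new DiscountCalculator();

    // Each test method corresponds to one acceptance criterion agreed with
    // the customer (product owner) during the sprint planning meeting.
    @Test
    public void repeatCustomerSpendingAtLeast1000GetsTenPercent() {
        assertEquals(10.0, calculator.discountPercent(1200.0, true), 0.001);
    }

    @Test
    public void newCustomerSpendingAtLeast1000GetsFivePercent() {
        assertEquals(5.0, calculator.discountPercent(1200.0, false), 0.001);
    }

    @Test
    public void smallOrdersGetNoDiscount() {
        assertEquals(0.0, calculator.discountPercent(200.0, true), 0.001);
    }
}

Kept in the sprint backlog next to the user story, a handful of examples like these is usually enough to make the intent of the story unambiguous for both the developer and QA.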
2.3 Communication in a distributed agile team
Agile methods emphasize close collaboration and intensive communication among peers [7]. Communication becomes even more important in a distributed team. A distributed team also has many communication barriers due to cultural differences [8]. As mentioned before, we had already been struggling with a poor requirement engineering process. Therefore, it was evident that if proper communication mechanisms were not devised, it might lead to poor performance.

ATDD improved communication in two ways. First, having acceptance tests as part of the sprint backlog became a shared source of information. So, it doesn't really matter where you are, everyone has equal access to the same piece of information. Secondly, defining acceptance tests up front in the sprint planning meeting prevents misinterpretation and misunderstanding in the development process. Communication time spent clarifying vague requirements was also reduced. In a nutshell, communication has been improved and is now much more effective than before. Whilst ATDD alone is not a means of communication, it certainly proved to be a useful communication tool.

3. How we introduced ATDD
It is a well established fact in software process improvement literature that the success of process improvement initiatives is highly dependent on how a process is rolled out [9, 10, 11]. Here we present how ATDD was introduced into our team.

3.1 Introductory workshop
A workshop was arranged to get the team introduced to ATDD. The agenda was to focus on two key points: firstly, why we should opt for ATDD, and secondly an introduction to the process. Furthermore, the roles and responsibilities of individuals at different phases of a sprint were also discussed.

3.2 Piloting
Through the workshop, the team got acquainted with the process and their roles and responsibilities at different phases in the development process. Thus, we decided to try out ATDD as a pilot in the next sprint. We used the opportunity of the sprint planning meeting to define acceptance tests. In the sprint retrospective meeting, we got quite positive feedback about ATDD, and it was decided to adopt it as our normal development practice.

3.3 ATDD vs. SCRUM
We were already practicing SCRUM, and it was important to align the ATDD activities with the existing process. We found ATDD very compliant with SCRUM, and it was quite easily incorporated into it. Nevertheless, some changes or improvements were made to existing practices.

First, SCRUM doesn't explicitly specify how the team will collaborate; rather, it states that the team is responsible for figuring out how to turn the product backlog into an increment of functionality [12]. In contrast, ATDD helped us to define roles and responsibilities to turn a user story into an implemented solution. The team still holds collective responsibility, but the attainment of the goal becomes more formal.

Secondly, task estimation had mostly been based on gut feeling before. However, the mindset of elaborating user stories in terms of acceptance tests helped to make more realistic estimates.

Fig. 3.3 ATDD incorporated into SCRUM
Here, we present a synopsis of the process after incorporating ATDD into SCRUM (Fig. 3.3).

Sprint planning
A sprint planning session is divided into two distinct parts. Firstly, user stories are picked for the sprint on the basis of their business value and technical perspective. Secondly, the chosen user stories are further elaborated and estimated. The elaboration of user stories leads to defining acceptance tests. Our product owner plays the role of the customer. QA has the responsibility to document the acceptance tests that result from the discussions.

Picking up user stories
After the sprint planning meeting, developers pick up user stories for implementation.

Development
A developer implements a user story by getting the acceptance tests passed for the story in hand. If anything is revealed during the implementation process that should be considered for the user story under implementation, then the customer (product owner) is requested to give his opinion and provide acceptance tests.

Testing
On successful implementation of a user story, it is assigned to QA for testing. One may ask what the point of testing is once the acceptance tests have been successfully implemented. We've learned that the mere implementation of acceptance tests cannot guarantee the quality of a product. Although there is less likelihood of finding bugs related to functional requirements, QA still needs to do some testing from the non-functional requirements perspective.

This process goes on in an iterative fashion, which is subsequently followed by the sprint demo and sprint retrospective.

4. Lessons Learned
The process improvement initiative has taken about 1.5 years. This journey has not been as lenient as expected. We learned that you need to be proactive during all phases of the project by doing the right things at the right time. Here we briefly describe some key success factors which can have a critical impact on the effort of introducing ATDD into your team or organization.

4.1 Project kick-off
Indeed, one of the most crucial phases for any process improvement effort is the project kick-off. Software process improvement (SPI) practitioners have widely reported resistance to embracing the change as an almost inevitable impediment in any SPI effort. Therefore, it is important to show all stakeholders what is in it for them. You may start by highlighting the remedies. However, it is not always necessary to make an ROI case based on existing problems. This job can be done by simply focusing on positive outcomes from adopting the process, or maybe a combination of both. Our purpose here is not to go into the details of the rewards that ATDD will render. Instead, we list a few of them that we witnessed through our effort. It has already been discussed that ATDD helped us to overcome some challenges encountered in a distributed agile team. This involved defining the QA role in an agile team, improving the requirement engineering process and employing ATDD as a communication tool in a distributed team. Moreover, we achieved an active involvement of the customer (product owner), and increased progress visibility, which ultimately provides better oversight and control over the project. Consequently, by considering the concerns and interests of all stakeholders, one can make a strong business case for ATDD that will help to push with a greater impact. You may still find people a bit reluctant to change their traditional way of doing things. To break this inertia, you need strong management commitment.

4.2 Continuous optimization
Once the process has been successfully rolled out, we should not sit back and relax. Firstly, any process should be continuously monitored and controlled. If you are already practicing SCRUM, sprint retrospective meetings can be a natural choice to get feedback from peers and express your concerns. If some improvement areas are identified, it is vital to discuss improvement strategies in the presence of all stakeholders to get everyone committed.

We found out that we needed to put more effort into writing better acceptance tests. To address this issue, we introduced a review process. This helped us to gradually improve our acceptance test writing skills.

4.3 Selection of the customer
ATDD assigns a central role to the customer. In fact, it would be appropriate to say that the customer drives the process by having ownership of the acceptance tests. Therefore, it is critical to pick the right customer. It may not be wise to rely completely on one customer's input. By doing so, you run the danger of ending up developing a product for a specific customer. One suggestion in this regard is to assign the customer's role to someone within the organization who maintains regular contact with the customer and completely understands the customer's needs. An alternative solution could be to form a user group consisting of users or customers with diverse needs. The group may consist of users from different types of organizations, i.e., large, medium, and small. Never keep the same group for a long time. So, keep on welcoming new members and diligently say good-bye to old ones.

5. Conclusion
We can state, on the basis of our empirical evidence, that ATDD can play a vital role in coping with agility challenges. For instance, it provides a guide for defining the QA role in an agile team. Requirement engineering processes improve, and communication in a distributed team becomes more effective. Furthermore, it can easily be incorporated into SCRUM and can even improve some of its practices.

To make a process improvement initiative successful, SPI personnel should be aware of all the pitfalls that may jeopardize the effort. Kick off the improvement initiative with a greater push and afterwards focus on continuous improvement by identifying problems and optimizing the process. Select the customer wisely; this will have a decisive impact on the success of ATDD.
All the best for your ATDD initiative! ■

Acknowledgements:
We are indebted to Lasse Bjerde and Knut Brakestad for their valuable feedback on the drafts of this paper. Special thanks are also due to Daniel Jian for thought-provoking discussions and assistance with drawing the diagram.

6. REFERENCES
[1] testdrivendevelopment@yahoogroups.com
[2] Koskela, L. (2008) Acceptance TDD Explained, Methods & Tools, summer issue.
[3] http://www.craiglarman.com/wiki/index.php?title=Agile_Acceptance_TestDriven_Development:_Requirements_as_Executable_Tests, Last accessed: 16th August, 2010.
[4] http://www.agiletestingdays.com/klarckrantanenharkonen.php, Last accessed: 16th August, 2010.
[5] Veenendaal, E.V. (2010) SCRUM & Testing: Assessing the risks, Agile Record, Issue 3.
[6] Shalloway, A., Beaver, G., and Trott, J. (2009) Lean-Agile Software Development: Achieving Enterprise Agility, Addison-Wesley.
[7] Miller, A. (2008) Distributed Agile Development at Microsoft patterns & practices.
[8] Olsson Holmström, H., Ó Conchúir, E., Ågerfalk, P., and Fitzgerald, B. (2008) Two-Stage Offshoring: An Investigation of the Irish Bridge. MIS Quarterly, Vol. 32, No. 2, pp. 1-23.
[9] Baddoo, N. and Hall, T. (2002) Motivators of Software Process Improvement: an analysis of practitioners' views, The Journal of Systems and Software, Vol. 62, pp. 85-96.
[10] Dybå, T. (2005) An Empirical Investigation of the Key Factors for Success in Software Process Improvement, IEEE Transactions on Software Engineering, Vol. 31, No. 5.
[11] Niazi, M., Willson, D. and Zowghi, D. (2006) Critical Success Factors for Software Process Improvement Implementation: An Empirical Study, Software Process: Improvement and Practice Journal, Vol. 11, Issue 2, pp. 193-211.
[12] Schwaber, K. (2004) Agile Project Management with SCRUM, Microsoft Press.

> About the author
Ashfaq Ahmed, ISTQB® certified tester, works for Visma Software International AS, Norway. He has a master degree in Software Engineering and Management. He has been in the software industry for three years. He is passionate about software quality and more specifically software process improvement. He has presented a paper on maturity driven process improvement with his peers at the Third International Workshop on Engineering Complex Distributed Systems (ECDS 2009). He can be reached at ashfaq.ahmed@visma.com
Losing my Scrum virginity… what not to do the first time
by Martin Bauer

At the end of Sprint 4, things started to get ugly.

It was a warm Friday afternoon in London and my colleague, Simon, and I were walking along the banks of the river Thames towards our client's office. We were chatting about who was doing what during the Inspect & Adapt and Sprint Review meeting we were about to have. Simon was going to demonstrate a few features and then go through the status of all features in Sprint 4. Then it was over to me; I was going to walk through the plan for Sprint 5, which is all I could contribute as I had only joined the project the previous week.

At the start of the meeting, there were the usual introductions. Simon knew most of the people in the room quite well, and there was light-hearted banter before things got under way. Before launching into the first part of the meeting, the walkthrough of completed features, Simon made an apology that not all of the features were complete. The rest of the room seemed to collectively shrug, not really caring.

The demonstration went relatively smoothly, as smoothly as a live demo can! The occasional glitch, the odd, unexpected result, "it was working at the office", the usual story. At various points someone around the room would pipe up, suggest a change, seek clarification or wonder why a particular feature wasn't different. Each time there was a reasonable answer and the changes and enhancements were noted. All in all, the demonstration went fine. No major stumbling blocks, no major changes. That's when we reached the point of reviewing the status of features in Sprint 4, and things took a severe turn for the worse.

Simon opened up the shared spreadsheet showing the status of each sprint. The initial plan was to use Rally to plan and manage user stories and sprints. For reasons that are still unclear to me, the team stopped using Rally after the user stories were entered. From that point onwards, the team reverted to a series of Word documents and spreadsheets.

Simon opened the tab for Sprint 4. Looking across at the final column, "percent complete", there wasn't a single feature that was 100%. Not only that, there were a number of features that hadn't even started.

The mood in the room turned stony cold. Jason, the Project Manager, asked what everyone else was thinking, "Why is there not a single feature complete?" Simple question, not so simple to answer. Simon struggled to explain. The reality was that the sprint had been overloaded, and there was no way the team was going to get it all done. Taking on so many features divided the attention of Simon and Sally, the product owner. They were trying to cover too much, and so nothing got finished.

The bigger picture was even more dire. Despite a month of up-front analysis, there were still details that needed to be fleshed out during each sprint. Too many details. So much so that Simon was way behind in getting analysis done and was putting features into sprints when he knew the analysis was incomplete. To add to that, Simon was playing the role of both analyst and project manager, never having managed a project of this scale. Torn between getting the analysis done, monitoring progress and planning ahead, Simon struggled to keep things together, and at the end of Sprint 4, the true state of affairs had come to light, and it wasn't pretty.

Not that Jason, who had replaced the previous Project Manager at the start of Sprint 3, cared, nor did the rest of the room. The reality was that at the end of Sprint 4 not a single feature had been completed, which is the entire purpose of sprints. The issue was, this was the first time anybody, other than myself and Simon, had any insight into the fact that the project was in serious trouble.

Simon tried to explain that Sprint 4 had been deliberately overloaded in an attempt to get through many of the features that needed detailed analysis, not to mention that we had been held up by a third party on several features.
It fell on deaf ears; as Jason put it bluntly, the sprint had been poorly planned. It should never have had so many features. He was right. Simon, with the best of intentions, had dug himself a hole that he couldn't talk himself out of. Even though the project was using Scrum terminology, it wasn't actually following the Scrum approach, especially with sprint planning - not the only departure from Scrum. Mind you, this was not exactly surprising, as neither Simon nor any of the dev team had any experience with Scrum.

There was a long, uncomfortable silence in the room. No one truly accepted Simon's explanation, and there wasn't a lot of confidence in the room that it was going to get any better. That's when Simon handed over to me, in order to outline the plan for Sprint 5. At the time, I only had a superficial understanding of Scrum, so I relied on my previous experience in putting together the plan for Sprint 5. I kept it simple and came up with a plan that I thought was achievable after speaking to each of the developers, allowing a buffer for the usual issues that surface. This, as I was to learn, was not exactly how sprint planning works in Scrum. I didn't talk to Sally, Jason or anyone other than the dev team.

I presented my plan for Sprint 5 and was greeted with scepticism. "Why do you think this plan will work, when no features were completed in Sprint 4?" asked Jason. I did my best to explain the logic. First, finish features that were mostly done. Second, start features where the analysis was complete and which could safely be completed within the sprint. Third, allow time to deal with the changes that arose in the Inspect & Adapt. And finally, allow a margin for error. Common sense, well at least to me. Jason asked a few more questions on areas that I had already considered, so I was able to address them easily. Still, there was only begrudging acceptance. The reality was the proof was in the pudding. Confidence was low, we needed to get runs on the board, we had to actually deliver what we said before we could win back any trust.

As part of reluctantly accepting the plan for Sprint 5, there were some caveats: Jason wanted better communication, wanted better visibility of progress during the sprint. He didn't want to get to the sprint review to find out the true state of affairs. That was fine by me; my job was to let Simon get on with completing analysis on the features that still needed to have details worked out. It wasn't hard for me to provide updates on progress, even if I still didn't understand Scrum or what the project was about.

The meeting finally ended. It had been a torturous two hours.

The next week went relatively smoothly. The developers made good progress on completing features and getting started on new features. After the daily stand-up, I would catch up with each developer individually to confirm how far along he was with the features he was assigned and whether he was on track to complete them as per the plan. During the short one-on-one catch-ups, I found out the key pain points for each developer, not just the blockers.

By the end of the week, I had to face the music at a mid-sprint review. Fortunately, I was able to report that we were making good progress and were slightly ahead of plan. We were even hoping to bring forward a couple of features and get a head start on Sprint 6. I didn't realize at the time that this wasn't really in the spirit of Scrum, but for me, what mattered most was proving to Jason that we could deliver, and the best way to do that was to deliver. It wasn't until the second week that I started to understand the underlying problems that had made it nigh impossible for Simon to have succeeded in previous sprints.

Although there'd been some analysis done before sprint 0, there were still a lot of details to be worked out. Rather than use Rally and add the tasks to each user story, a Word document was used as the product backlog. The initial estimates done before sprint 0 were assumed to be correct. As the details of each feature were fleshed out, changes, enhancements, adjustments and additions crept into the backlog. What didn't happen was for those changes to be reflected in the effort required. Each change or adjustment on its own was minor, but added up, there was a significant increase over the past two months.

The problem was not that the team wasn't following Scrum to the letter; the team was responding to change over following a plan. The problem was that the changes weren't being reflected in the overall effort. Each time a change was identified, it would be done, during the sprint if possible; if not, the feature would roll over to the next sprint. The flow-on effect wasn't realized until the fateful Sprint 4 review, by which point the damage was done. There was no way to go back and add up all the little amendments that turned a small snowfall into an avalanche.

Despite the damage done, Sprint 5 went relatively well. The team completed the majority of the features. There were a few that didn't make the cut, but Jason had been prewarned at the mid-sprint review. The Inspect & Adapt and Sprint Review for Sprint 5 was the opposite of the last one. It started tense and by the end everyone was relaxed and joking. There was more to demonstrate, more features complete, and most of the time was spent on discussing refinements rather than recriminations. My plan for Sprint 6 followed that of Sprint 5, once again without consultation with Sally or Jason. I was still missing the spirit of sprint planning. Nonetheless, the plan was accepted at face value given the previous plan had worked. Things were looking up, or so it seemed.

Sprint 6 kicked off well; shielding Simon from reporting and planning meant he could focus on analysis and make sure features were ready for developers to start work. Even though I had made it clear at the start of Sprint 6 that we wouldn't be able to finish all the features within 2 weeks, Jason was trying to avoid a Sprint 7 and the overhead that came with it. Not wanting to rock the boat after only just winning back some trust, I went along with the plan and we decided in the second week of Sprint 6 to make it a 3-week sprint. Things progressed relatively well until we were in the middle of week 3. Jason was pushing me to commit to when we would be code complete. I pushed the developers for an answer. We would be code complete bar one feature by the end of the week. All appeared well, "appeared" being the operative word.
The end of Sprint 6 arrived, and we were code complete except for two features. Before the Inspect & Adapt, Simon was frantically preparing, going through various user journeys to make sure all was in order. It wasn't, it wasn't even close. Individual features were code complete and worked individually, but not necessarily with each other. Cracks in the surface emerged and soon turned into a yawning gap between the concept of code complete and an operational site that we could demonstrate.

It was too late to fix the situation; there were too many scenarios that had never been considered, as there were features that had never come together. It wasn't for lack of analysis or direction, it was simply that some situations hadn't been foreseen. Simon did his best to avoid these scenarios, but it was only a matter of time before testing would start in earnest and the truth would come out. Naturally, testing was the next topic of discussion. We had 80 features, but had failed to make much progress with testing except for the stand-alone features, which didn't represent the true complexity of the system. Basically, we had only just started. Once again, not the way Scrum is supposed to work, but neither myself nor Simon knew that. I had a gut feeling that testing would take around 4 weeks. Jason wanted it done in 2. Neither of us was even in the ballpark.

A week later we only had 10 features properly tested. I met with Jason to plan out the rest of testing. He wanted half of the features tested by the end of the second week and the rest by the end of the third week, thinking we'd increase velocity by adding another tester. I wasn't convinced, but had little choice.

With a huge effort, we reached the target of testing half the features by the end of the second week, but I knew there was no way we'd get the next 40 done in the following week. Jason was constantly badgering me to give an indication of velocity and asking me about code quality. The issue wasn't the quality of code, it was just that scenarios kept arising that no one had considered, and each time this happened, we had to go back to the drawing board and work out how to solve it. At the end of the third week, even though we were getting through bug fixing quickly, there were still around 20 features to be fully tested end-to-end. That's when things got ugly, again.

Sprint 6 had gone over by a week, and testing had gone over by 2 weeks. A budget overrun had been flagged as early as Sprint 4, but optimism had got in the way of reality. Not any more. The budget had finally run out. That was it, there was nothing left. The project came to a grinding halt. Assumptions around code complete, code quality and the concept of done had caught us all with our pants down.

A crisis meeting was held with the key stakeholders. It took several days to nut it out, but a solution was found. There would be a workshop with the key stakeholders to go through the site and identify all P1 issues. We would then fix them. Nothing else would be added, changed or amended. It was a once-only workshop. Scheduled to take 6 hours, the workshop kicked off with everyone in a surly mood. After 10 hours, there were only half of the original people left. Pragmatic decisions were made, and everyone finally went home spent but clear on what the path to completion was, in theory.

Along with phrases like "never say never", "it can't be done" and "I promise not to...", the "once-only" workshop wasn't; new P1 issues crept in, a symptom of the project from its very inception. The two weeks allowed to fix all the issues stretched to three. Additional P1 issues were found and added to the list. Starting at 65 P1 issues, the number after three weeks had crept up to 107.

It was no surprise that everyone was tired, tense and strung out. P1 fatigue had hit everyone. Deployments were rushed to get fixes out, breaking previous fixes. Everyone was getting sick and tired of the whole thing and just wanted it to end. Finally it did; finally there were enough compromises made on both sides and the site was in a state to be launched. Not perfect, not what Sally wanted, but close enough. Late on a Thursday evening, the DNS was finally switched over and the site was live.

were limp and washed out. We were simply glad it was finally it was done when the budget ran out. All of these views were cor-
over. rect from the individual perspective. The problem was that we, as
a team, didn’t have a single view. The inevitable tension and frus-
It wasn‘t until a few weeks later when I‘d recovered, did some tration of the back and forth that ensued caused great damage
reading about Scrum and considered the previous few months to team morale and progress.
that I realized the project was never a Scrum project. It used
all the terminology of a Scrum project, but without the spirit of All of these issues can be tracked back to a single root cause, a
it. During the project, when things started to go wrong, the dev lack of common understanding. We didn‘t have a common view
team’s inclination was to revert back to what they knew and were on how things were to be done. When things went astray, people
comfortable with, a waterfall approach. On the other hand, Jason started pulling in different directions, reverting to what they knew
was pushing to follow the Scrum approach more closely, as he best and making things even worse. It would be easy to blame
felt that would help rectify matters. Both approaches could have worked; the problem was that as things got tense, we were moving in different directions and made things even worse.

There were 3 main issues.

The first was embarking on a project using a new approach and assuming that it was going to be fine. When does anyone try something new and get it right first time? Jason had plenty of experience with Scrum, but Simon and the dev team didn't. Each time a developer was handed details of a feature, they expected it was fully specified. Sally wasn't used to that approach; she thought she'd have the opportunity to inspect & adapt, change was normal – once again, who gets a spec right the first time? Neither Simon nor the developers had that mindset. Each change was greeted by a developer wondering why the client hadn't thought it through upfront – a waterfall mentality. So on one hand there was Sally expecting to be able to change and adapt, and on the other hand, Simon and the developers expecting it to be built once with little or no changes.

The second issue was that of managing change. In waterfall, you have a list of features that are done in order, as decided by the project manager; the team puts their collective heads down, works hard for a couple of months and then surfaces to test and fine tune. Any changes are handled with a change request, and the impact on time and budget is understood. That's what Simon and the dev team had in mind, except in this case, rather than a couple of months, it would be a couple of weeks and repeated 6 times. That's not what Sally and Jason were thinking; they were thinking the Scrum approach: pick the key features, get them done right and then get onto the next set. At the end of each sprint, the velocity is understood and the next sprint is planned. That didn't happen. Changes were squeezed into the next sprint without an understanding of the impact, in the hope that they could be done along with the features already in that sprint. No allowance was actually made for change and the inevitable flow-on effect. It took 4 sprints before that hit home. Change is fine; thinking it will have no flow-on effect isn't.

The final issue was the concept of "done". Different people had different definitions. The developers thought they were done when they checked their code in. I thought it was done once individual features were tested on the staging server. Jason thought it was done when all P1 issues were closed. Sally thought it was done when it matched her vision of what she wanted, even if it meant fine tuning a feature 10 times. Key stakeholders thought

the problems on it being the first time the dev team had used Scrum, but that's a symptom, not the cause. They didn't understand that things would work differently, that change would be managed differently, that the client wasn't expecting it to be right the first time, nor did Sally understand that the developers were expecting just that. Whether it was Scrum, waterfall or a hybrid approach, the problem was the team was not on the same page. Without a common understanding, a team can't truly form or perform to the best of its ability. The single most important thing I learned about using Scrum for the first time was not about Scrum itself, it was that if the team members aren't clear on the approach, whatever it might be, there will be problems. The method is secondary, a common understanding is first and foremost. ■

> About the author
Martin Bauer is the head of project management at Vision With Technology, an award-winning digital agency based in London. He has over 15 years' experience in Web development and content management. Mr. Bauer is the first certified Feature-Driven Development Project Manager, an advocate of agile development, and also a qualified lawyer. His experience covers being a director of several businesses, managing teams of developers, business analysts, and project managers. Mr. Bauer can be reached at martin@martinbauer.com; Web site: www.martinbauer.com.

Lessons Learned in Agile Testing
by Rajneesh Namta

Recently, my colleague and I presented at Agile NCR (Gurgaon, India). In this presentation, we talked about our experiences of working as QA or testers on Agile projects (offshore) in India over a considerable period of time. While on these projects, we continue to learn and daily gain some new insights about our work, methodology and the people we work with.

I had a feeling that the slides in our presentation may not have been sufficient to get across the message we were trying to convey, and hence this article. The intent of this article is to reach a wider audience and share these lessons with the community.

As we uncover better ways of developing software, so are we finding better ways to test it. One of the best things about Agile is that everyone on the team (developers, testers, architects, analysts et al) starts right from project initiation, and if this is not the case (which may be possible with some organizations), it is still the preferred route. It helps greatly to be a part of the team from day one, since as a tester you get the lead time to get acquainted with the infrastructure and technology, understand team dynamics and initiate a customer dialog to get an insight into the business at a very early stage. This is something which will pay off in the longer run, as all this is very crucial for a tester to contribute effectively when the actual action takes place. A team with people of mixed skills right from the start will add a lot of value rather than a team of people with a very specific skill. This is something which rarely happens in a traditional framework, where people are generally added and taken off on a need basis. Interestingly, slowly and steadily the realization has dawned on many that they can create more value for their customers while maintaining a consistent level of quality in each sprint by having people with mixed skills. It also brings in a powerful change of mindset which enables everyone to see the tester as an integral part of the core team rather than someone from a different planet talking in an alien language, as he is not in sync with the project reality.

Agile not only helps create better software, but probably better professionals as well. Here are some lessons that we have learned while transitioning from a traditional to an agile way of working.

One Team - One Goal
The concept of "One Team" is a very powerful one. It entails a complete change of mindset, which is very refreshing as well as rewarding. The team sitting in one place at one table is one such example, where the concept is actually realized, as physical barriers are removed. Conventionally, test teams or the independent verification and validation units (some fancy names in organizations for the test team) are separated from the development teams and usually sit in separate cubicles, on a different floor or in different buildings. This leads to obvious communication barriers, but more importantly it breeds a mentality of us versus them in the team. Fault finding and blame games ensue, as people look down upon each other and end up facing each other and blocking progress. A team of 'comrades in arms' rather than 'adversaries' is much more likely to succeed, and that's what "One Team" means. This also means that the QA guy is no longer the "Quality Police" on the project, and the whole team owns quality, resulting in better quality products.

The team should be like a well oiled machine, where all individual parts work in unison to achieve the goal. Every success is a team success, and each problem is a team problem in such a set-up. Team members take collective decisions and ownership for the work they do.

Have a Test Strategy
One of the key questions a tester often encounters when he/she transitions from a traditional to an Agile way of working is whether to create the heavy-weight test plan, strategy and other similar documents or not. These documents are given way too much importance in a process-oriented set-up and are a prerequisite to start any testing activity. In contrast, Agile focuses on 'just enough' documentation and is often misunderstood for no

documentation. Eisenhower once said 'plans are useless but planning is indispensable', and that's the key while planning in an agile project.

As a tester you should possess a high level of clarity about the whole testing activity. A test strategy will help the whole team to see the testing activity clearly and enable them to contribute to fine-tuning it. You will get brilliant ideas from the team, as everyone would be interested in having a test process in place which is helping the final cause. It will give the tester a clear path to take when testing during a sprint or a release.

The test strategy should contain details on a high level, a plan for a release, for example, with things like testing techniques and tools to be used, automation of regression tests, testing any interaction with third-party tools and interfaces, and database testing, among others. There is no set template for a test strategy, and it can be anything: a piece of paper, a Wiki page, a text document, a diagram or an email detailing your approach. The only important thing is that you have a strategy and it's communicated to everyone in the team. Extensive documentation should only be created when you are able to keep it up-to-date, otherwise it will soon go stale.

Involve Customers in the Test Process
No one in the team has the kind of insight and domain knowledge that a customer or the end user of the product has. Involving the customer in the test process will increase the efficacy of the testing activity. To achieve this, you will have to create a transparent and trustworthy relationship and initiate the right kind of dialog. Nowadays, many BDD and ATDD tools are available and gaining popularity; these specify the tests in a domain language, which can be easily understood and learned by customers. By using these tools, the users can go one step ahead and design or write test cases for requirements themselves, which would serve as acceptance criteria.

There are numerous ways in which the customer can contribute to the testing activity within or outside a sprint. For data-centric applications, or in fact for any application, only the customer can provide the actual production data, which is a critical requirement for testing. Testing the application with the right kind of data is key to uncovering the defects which may otherwise only be found once the software goes into production.

Additionally, the customer can provide actual usage scenarios and other requirements, such as performance benchmarks, early in the release cycle, which ultimately will translate into a more usable, fast and stable product.

Have a Definition of Done in Place
Something which bothers testers frequently is how to make a decision to stop testing and ensure that the user story or feature under test has been tested adequately. Theoretically, testers are never done, but they need to make a decision at some point to stop testing what they are testing and take up new tasks. One possible approach would be to do risk-based testing within a sprint, focusing more on critical items and ensuring that they have not introduced any regressions. Additionally, it would make sense to have a checklist in place which testers can refer to when moving a task from 'Test' to 'Done', so as to reduce the risk of missing anything due to plain oversight or any resource crunch.

The definition of done, or the DoD, is negotiated between the team and the stakeholders as a generic set of guidelines of when to consider the user stories done for sprints in a release. A tester should also create a DoD based on the sprint/release goals and his testing objective for the project, based on discussions with the team and the stakeholders. Once the criteria for done are decided, they can be added to the DoD for the project. This will make it visible to everyone involved and act as a guiding principle and reference for the tester. A DoD for the tester can be something like the following:

• Required functionality as described in the user story is implemented.
• Test data and cases are documented (automated in a BDD tool) or on a Wiki (e.g. Confluence).
• The implementation has passed the functional tests.
• All automated regression tests are green.

Clarify Specifications Using Examples
It is good practice to give concrete examples when asking something from the Product Owner, developer or the user. A query asked in plain language or a lengthy email might not get the response you need, but seeing a real example will definitely evoke a positive reaction.

This helps tremendously if you work on offshore projects in a distributed mode and need to clarify or disambiguate requirements with the customer. Since the customer will be geographically located elsewhere, it's very important to keep the feedback cycle fast and short, as sprint cycles are usually short as well. Additionally, the user might himself come up with alternative scenarios using examples, which can help to disambiguate requirements.

An example of such a case could be a project where you are testing a finance application in which complex calculations are done. It would be a good idea to create a small spreadsheet with calculations for different scenarios and ask for any clarification citing specific examples on the sheet.
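To make this more concrete, here is a minimal sketch of how one such customer example could be captured as an executable check with JUnit. The interest calculation and the figures are invented for illustration; in a real project they would come straight from the customer's spreadsheet, and the same example could just as well live in a BDD/ATDD tool.

import static org.junit.Assert.assertEquals;

import java.math.BigDecimal;
import java.math.RoundingMode;

import org.junit.Test;

// One concrete, customer-confirmed example captured as an executable check.
public class YearlyInterestExampleTest {

    // Stand-in for the "complex calculation" under test: simple yearly
    // compound interest, rounded to cents.
    private BigDecimal balanceAfterYears(BigDecimal principal, BigDecimal rate, int years) {
        BigDecimal balance = principal;
        for (int i = 0; i < years; i++) {
            balance = balance.add(balance.multiply(rate));
        }
        return balance.setScale(2, RoundingMode.HALF_UP);
    }

    @Test
    public void thousandEurosAtFivePercentForThreeYears() {
        // Example agreed with the customer: 1000.00 at 5% for 3 years gives 1157.63
        BigDecimal result = balanceAfterYears(new BigDecimal("1000.00"),
                new BigDecimal("0.05"), 3);
        assertEquals(new BigDecimal("1157.63"), result);
    }
}

If the customer disagrees with the expected figure, the disagreement surfaces as a failing test rather than as a late surprise.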

Test as per the Context of the Sprint
Testing inherently remains the same in Agile, only the way it's applied in an agile project is different. Alternatively, we can say that the rules of the game remain the same, but it's altogether a different playing field. A tester in an agile framework would be flexible enough to allow changes to a set plan (responding to change over following a plan). Hence, to test as per the context of a sprint would be a logical and wise choice. It's very important that the tester remains aware of the context and constantly makes adjustments in his/her strategy to accommodate change.

At the start of a sprint, the tester can bring in his unique perspective and do requirements testing and story exploration along

with automating left-overs from previous sprints to add to the regression test suite. As a sprint progresses, the tester can test the features being coded (ideally, it would be faster to do it manually the first time) using exploratory and other methods, which would result in the most effective utilization of time and effort. As the sprint nears completion, the focus should shift toward testing end-to-end workflows and regression testing to ensure everything that worked before still works. Generally, workflow and regression tests are automated and should not take much time to execute. Hence it's very important to first figure out the context of a sprint and then test accordingly.

Test Automation is a Team Effort
Test automation is not only about an automatic execution of test cases. It has a much wider application, like integrating automatic tests with the build process, integrating toolsets/frameworks developed to test different components of the application, automatic reporting and notification mechanisms etc.

A tester might be involved in creating and maintaining most of the test artefacts, but he needs the help of the entire team to keep it going. Some situations might demand in-depth technical know-how (like mocking some external interfaces, or making disparate tools talk to each other), which a tester might generally lack, and hence it's all the more important that the team is there to support you. It's again an offshoot of the "One Team" concept, where every problem is a team problem.
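As an illustration of what "mocking an external interface" can look like, here is a small JUnit test using Mockito. The ExchangeRateService and PriceCalculator are hypothetical stand-ins invented for this sketch; the point is that a slow or unreliable external dependency is replaced by a mock, so the check can run on every build.

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class PriceCalculatorTest {

    /** External rate service we do not want to call during the build. */
    public interface ExchangeRateService {
        double rateFor(String currency);
    }

    /** Tiny piece of production-like logic, purely for the sake of the example. */
    public static class PriceCalculator {
        private final ExchangeRateService rates;
        public PriceCalculator(ExchangeRateService rates) { this.rates = rates; }
        public double priceInCurrency(double priceInEur, String currency) {
            return priceInEur * rates.rateFor(currency);
        }
    }

    @Test
    public void convertsPriceUsingMockedRateService() {
        ExchangeRateService rates = mock(ExchangeRateService.class);
        when(rates.rateFor("USD")).thenReturn(1.30);  // fixed rate instead of a live call

        PriceCalculator calculator = new PriceCalculator(rates);

        assertEquals(13.0, calculator.priceInCurrency(10.0, "USD"), 0.0001);
    }
}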
It also gives the team (especially developers) a lot of confidence before making any change if a robust framework is in place that they can rely on. So it's for the benefit of the whole team, and the whole team owns it. A tester might take the ownership of maintaining it and keeping it relevant in the long run, but not without the commitment of the team.

Provide Fast and Quality Feedback
Faster feedback is the very essence of agile development. Having automatic checks in place (like CI, automated unit testing and regression testing) ensures that feedback is instantaneous. Taking a cue from such practices, design your functional tests such that they can be integrated into the build. If some tests slow down the build, abstract them out, put them into a separate suite and schedule them to run overnight with some notification mechanism. There is no point in creating suites or tests that keep on running for days, as delayed feedback would slow down the entire chain and hamper the speed and productivity of the team.
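One possible way to separate the fast commit checks from the slow overnight ones – sketched here using JUnit's Categories feature (available since JUnit 4.8) – is to mark the slow tests and exclude them from the suite that runs on every check-in. The test classes and names below are invented for illustration.

import org.junit.Test;
import org.junit.experimental.categories.Categories;
import org.junit.experimental.categories.Categories.ExcludeCategory;
import org.junit.experimental.categories.Category;
import org.junit.runner.RunWith;
import org.junit.runners.Suite.SuiteClasses;

// Commit-stage suite: runs everything except tests marked as slow.
// A second suite using @IncludeCategory(SlowTests.class), scheduled overnight,
// picks up the long-running ones and notifies the team of the results.
@RunWith(Categories.class)
@ExcludeCategory(CommitStageSuite.SlowTests.class)
@SuiteClasses(CommitStageSuite.CustomerSearchTest.class)
public class CommitStageSuite {

    /** Marker interface for tests that are too slow for the commit build. */
    public interface SlowTests {}

    public static class CustomerSearchTest {

        @Test
        public void findsCustomerByName() {
            // fast functional check - runs on every check-in
        }

        @Category(SlowTests.class)
        @Test
        public void importsProductionSizedDataSet() {
            // long-running check - excluded here, picked up by the nightly suite
        }
    }
}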
Additionally, you should not wait for a bug to be logged and go through the complete lifecycle in the bug tracking system before it's fixed and re-verified. As soon as an issue is found, it's good to announce it (write it on the whiteboard, instantly message the developer concerned, or just shout!). Get it fixed there and then.

Quality feedback means that everything (bug report, incident log or a test report) that a tester provides for the consumption of the other team members should be so precise and refined that only a cursory glance through it should be enough to get an idea about what problem was found where in the system. It should be kept simple (apply the KISS principle here), yet should include every possible indicator that allows the developer to debug the problem quickly and efficiently. Test/defect reports (automatic or otherwise) should be smart enough that only a glance through them is enough to know about the failures that have happened and their probable cause.

Regularly Reassess your Test Process
It's very important to reassess the test process, since only then will it be of relevance. The software which is being created grows in size and complexity in every sprint. Similarly, the test assets and artefacts also grow, both in number and complexity. The maintenance nightmare sets in as the release cycle progresses, and the relevance of automated regression tests and other test artefacts becomes a major issue. A strategy that worked yesterday may not be relevant anymore, as there have been continuous changes. Hence, the testers need to re-visit and rethink their test process to keep it relevant and up-to-date.

A tester must be smart enough to try and see beyond a sprint and strategize accordingly. The release backlog is always accessible, and he/she should learn to make good use of it. While automating and creating a test strategy, the tester should always factor in the type and complexity of stories which are further downstream, so as to minimize the rework when it's time to actually implement them. A tester should be evaluating and revisiting the test design every sprint to ensure that he/she is on the right track. Evolutionary test design should happen in parallel to the evolutionary design of the system.

Explore, Learn, Innovate and Improve Constantly
Agile is not only about working on any one project. It's a continuous learning process, where people constantly enhance existing skills and add new ones. As a tester, you add more value to yourself, your organization and your customer by constantly learning and exploring new things. You will learn new tools and techniques which will not only help you to work more efficiently and better in the current project, but will stay with you long after the project is over.

Try something new within a project and introduce new toolsets or a working methodology to improve the current state of affairs (say a quick POC). This will probably help you to make things better, and even if you do not and fail quickly, you know something which didn't work.

Maybe you would also like to share your knowledge and expertise with others and give something back to the community and contribute, for example, to open source or other initiatives. Blogging, writing papers, participating in conferences and engaging in discussions with the community at various forums will not only add to your knowledge and skill, but will make you visible to the community as well.

To conclude, it's great fun and highly rewarding to be part of an agile team, since as a tester you not only contribute significantly to the entire lifecycle, but you also improve as a person and professional.

A tester feels more valued and empowered when he/she is heard and consulted for each activity.

Testers that are part of the core team no longer have to take over the role of 'sentinels of quality' or the 'last line of defence', as everyone on the team is committed to building the right product. Testers help in creating and maintaining the safety net, which enables the developers to accommodate and make changes with confidence. Testers collect the information emitted by the various signals put in place, and present that to the stakeholders so that they can make informed decisions about the software being built. Above all, testers question the software to know more about it, and good questions can only originate in an agile mind. ■

I would like to thank Harsh Saini (my colleague and friend) for the thought-provoking and stimulating discussions I had with him, which translated into the original presentation and ultimately led to this article.

> About the author
Rajneesh Namta is a Senior Test Consultant at Xebia India. He is a passionate tester and has now worked for almost 7 years in various roles in QA. He is a certified Scrum Master and has been practicing Agile for the last 2 years now. He has worked across different stages of software development, including requirements specification, user acceptance testing and post-production training. Rajneesh is passionate about software quality and related tools and techniques and has made consistent efforts to improve the existing way of working, not only for himself but for the teams and organizations he worked for. He frequently blogs about software testing on his company blog and recently presented at Agile NCR (Gurgaon, India) a talk titled 'Lessons learned in Agile Testing', which was very well received by the audience. This talk is the idea behind his article.

The 10 Most Popular Misconceptions
about Exploratory Testing
Rony Wolfinzon and Ayal Zylberman

The most well known definition of exploratory testing (ET) was coined by James Bach and Cem Kaner:

Exploratory testing is simultaneous learning, test design, and test execution. [1]

This definition of exploratory testing was the prevailing definition for many years. The new descriptive definition says:

"Exploratory testing is an approach to testing that emphasizes the freedom and responsibility of each tester to continually optimize the value of his work. This is done by treating learning, test design, and test execution as mutually supportive activities that run in parallel throughout the project." [2]

The difference between ET and other common testing approaches lies in ET's focus on the skillset of the tester rather than on the methodology and processes being used. Still, many people see ET as diametrically opposed to a structured, well-defined test approach. In other words, as a QA manager once told me: "ET is the approach I instruct my testers to use when we don't have time for structured testing."…

The fact that ET combines the tasks of learning, test design and test execution doesn't mean it eliminates the need for planning the test. In fact, in some situations ET can be more structured and better documented than other traditional approaches.

The goal of this article is to define the most popular misconceptions about ET, and to explain why we believe these misconceptions are wrong as well as how they should be addressed.

This article is based on a panel discussion conducted in Tel Aviv in May 2010, whose members included some leading testing experts, including James Bach himself. It also draws on notes prepared by James prior to and following the panel.

[1] Exploratory Testing Explained by James Bach, v.1.3 4/16/03
[2] Exploratory Testing Explained by James Bach, v.3.0 7/26/10

Misconception #1: ET doesn't provide complete coverage of the application.
If you do ET badly, it doesn't provide accountability! But if you do it well, it provides even better accountability.

Session-based test management is a well-known method for ensuring high accountability by documenting tests using many different techniques, e.g. video, writing, log analysis, etc.

(More information about session-based testing can be found on James Bach's website at www.satisfice.com.)

Another ET practice that provides improved coverage is "state tables". This practice is based on linking the session logs to the requirements, thereby creating transparency charts that indicate in what session each of the test cases was tested. It's important to mention that testing all the requirement test cases doesn't prove that the system has been checked; however, testing all the requirements using the ET approach does increase confidence that most of the system's bugs were found.

Misconception #2: ET is not a structured approach.
All testing is structured! The question is: how is it structured? Unskilled exploratory testers often think that ET is not structured because they are testing unconsciously, i.e. different tests are conducted in each test cycle.

ET is not a methodology. In some cases, methodologies such as Rapid Software Testing and also Agile Testing apply the ET approach and package it together with a full process. In other cases, a proprietary methodology is prepared based on the specific needs of the organization and the product.

By the same token, we might claim that all structured testing

(ST) is unstructured. It is very rare to see in STDs any information about what the test is testing. Most popular STD templates don't contain any data about the test, or what requirements or flows they are testing. In session-based testing we use a guide book that contains all the flows and "business rules" that we are testing. This gives the tester a bird's eye view of the testing program, and therefore a better understanding of the product he is testing, as opposed to mindlessly executing a test step by step.

Misconception #3: ET testers require a different skillset than scripted testers.
There is no such thing as an "ET tester".

ET doesn't really change the main activities involved in testing. One way of looking at it is that ET only changes the timing of these activities; instead of designing the tests and then executing them, both activities are performed simultaneously.

Most people that are capable of planning tests are also capable of executing ET. However, if you hire a tester that has only execution skills (monkey testing), he won't be capable of working according to the ET approach. But then, at least in our experience, he won't be capable of any other testing either, regardless of the approach being used...

Misconception #4: ET testers don't need training.
ET is not ad-hoc testing! The exploratory tester needs a large arsenal of testing techniques in order to perform good and efficient exploratory tests. Without the techniques that are provided in each ET training program, testing is inefficient.

Misconception #5: ET means lack of visibility and transparency.
One of the main arguments against ET is that it does not allow the management to control what is being tested. Well, actually, we believe this is the main argument against Scripted Testing. Scripted Testing usually produces a mass of test documents, sometimes amounting to thousands of pages or more. In most cases these documents are not inspected thoroughly, and only sample tests get reviewed.

In ET, at the end of each session, a Test Log is produced containing a charter of the test conditions, the area tested, detailed notes on how testing was conducted, a list of any bugs found, a list of issues (open questions, product or project concerns), any files the tester used or created to support their testing, the percentage of the session spent on the charter vs. investigating new opportunities, the percentage of the session spent on creating and executing tests, bug investigation/reporting, session setup or other non-testing activities, and the session start time and duration. Although the list of items included in the log seems long, it usually only amounts to a few pages that are much easier to inspect and control, and thus provides better visibility and transparency on what is being tested.

In one of the organizations we worked in, we defined a Test Leader position whose main task is reviewing the Test Logs. In practice, this was the first time a real review was done on what was being tested…

Misconception #6: ET takes more time than Scripted Testing.
The following diagram shows the difference between ET and Scripted Testing processes:

While studying the system is done only once in ET, Scripted Testing requires that learning take place during several steps: once before the test planning (based on the requirements/design), and once more before executing the test. The tester spends time both on understanding the system under test and on understanding the test documents. As a result, ET requires less time to be spent on learning and, therefore, fewer resources are needed for testing.

Another resources-related issue is the level of detail in the test documents. Although Scripted Testing often requires detailed documentation, when using ET, goals can only be achieved by using high-level-of-detail descriptions of the conducted tests. This style of documentation saves a lot of time, mainly when reviewing and maintaining the test documentation.

Misconception #7: When using ET, there is no room for Scripted Testing.
It would be too arrogant to say that no Scripted Testing is ever needed. The fundamentals of Scripted Testing are very important to any testing project. The great benefit of exploratory testing is that it offers the testers skills and techniques that facilitate their ability to identify "show stoppers" and "non-requirement-based" defects sooner, while investing less effort. The intelligent testing project manager must find the correct balance between ET and ST that will lead to the best possible outcome for his project.

Misconception #8: Most organizations don't use ET.
We believe that most organizations are simply afraid to admit they are using ET. Let's take for example a tester who is highly familiar with the software. And let's say that this tester has a 20-step test, of which the first 10 steps are identical in all other 50 tests he has already executed. Do you really think that this tester will execute the first 10 steps in each test? If you believe, like we do, that he won't, it follows that what he is doing is simply exploratory testing with a different state each time (AKA condition statechart testing). In this case, why should he waste his time on documenting so many steps, if he is not planning to use them in the future? Examples like this abound, which is why we claim that most organizations do use ET, but are afraid to admit it.

Misconception #9: ET cannot be applied to complex systems.
The more complex the system is, the higher the number of possible test cases that can be created to test it. This is the reason why for complex systems a certain degree of risk analysis is performed when deciding which test cases will be performed and which will be ignored. However, once the tests have been selected, they are the only ones that will be executed. As a result, defects residing within the sphere of the rejected tests may be missed.

One of the main advantages of using ET is that it gives the tester the freedom to select different scenarios in each test cycle. This leads to greater coverage of the system; more defects can be found, resulting in higher quality of the system under test. So the bottom line is: the more complex a system is, the more suitable it is for ET!

Misconception #10: ET goes home.
This is a well-known fallacy; ET is just a movie. The existence of aliens has never been proved. ■

> About the authors
Rony Wolfinzon is a business manager in QualiTest Group. He has been in the field of software testing since 2006, and has worked mainly in the field of military systems. Rony has consulted and managed projects for companies such as IAF, Elbit Systems, IAI, Malam and Intel.

Ayal Zylberman is a senior Software Test Specialist and QualiTest Group co-founder. He has been in the field of software testing since 1995, in several disciplines such as military systems, billing systems, SAP, RT, NMS, networking, and telephony. He was one of the first in Israel to implement automated testing tools and is recognized as one of the top level experts in this field worldwide. Ayal was involved in test automation activities in more than 50 companies. During his career, he has published more than 10 professional articles worldwide and is a highly sought-after lecturer for Israeli and international conferences.

The double-edged sword of feature-
driven development -
by Alexandra Imrie

Or how to avoid zombies in your software
When our team started out on the agile journey, one of the recurring problems we had was that we would spend time planning, designing, estimating (and even developing) features that never saw the light of day. Looking back, there were various reasons for us being left with these "zombie" features that were half-implemented but never made it to the users.

Reasons for zombie features
Underestimation was often a cause – back then our estimates weren't as good as they are now, and necessary reprioritization within a sprint would sometimes mean that we had to leave less important things out. After the sprint, the customer would also occasionally change his mind, and the feature was left out completely.

Another trap we used to fall into was going the wrong way in terms of development. We'd discuss a feature and design it, but obviously not thoroughly enough. At some point in the sprint we'd come up against a brick wall that we hadn't considered and had to make the decision to keep this feature at the cost of another or lose it. Either way generally meant stopping work on a feature in development.

The third example I can remember is difficulty in planning and implementing epics – collections of stories that, by their very nature, stretch over multiple sprints. We'd start with preparation work for the epic – infrastructure, refactoring, reading up on the technology… and for some reason or other never actually got around to implementing any of the visible features.

Working on being feature-driven
It was obvious that something wasn't working well with our planning process and with our slicing of features. If we wanted to achieve agility, we would have to make sure that our sprints gave us deliverable, visible and usable stories. That's the core of agility – workable software in short iterations. Not just "very similar software to the last iteration, but with more internal features you can't see".

Having identified this problem area, we called in one of the experts. A session in October 2009 with Lisa Crispin gave us some good ideas about how we could improve the success of our stories and put more focus on deliverable (and testable) requirements to please our customer and measure our success. Since then, we've used six concrete strategies to improve the likelihood of a feature being implemented in a sprint:

1. Talking through features
We take the time as a team (sometimes all of us, sometimes just certain members – depending on the story) to really talk our way through the stories for an iteration. This usually means spending more time discussing things than we previously did – we even invented a new word for the meetings: excrucialating, a combination of excruciating and crucial. Over time though, we've learned to plan these meetings with plenty of breaks, no time pressure from other sources, and perhaps doughnuts or cakes to keep us alert. And our discussions have really paid off, especially in terms of epics. By talking through all facets of a story or an epic, we see where the conceptual or technical problems may occur well before starting work on the story, and can plan enough time for them or change them accordingly. We've found ourselves going the wrong way much less since we started this practice.

2. Keep it thin
We make a huge effort to make story points as thin as possible. We focus on value – what must be achieved to make this story useful for the customer? This is often a come-down from the "excrucialating" discussions, where we really get to think about all of the possibilities and options, but it's important not to get carried away. Nice-to-haves and other features either get put into the product backlog, or we also write cards for them in case we have time at the end to smoothen things out.

3. Focus on the test
We've found that discussing the acceptance criteria for each story card (and writing them onto it) really helps us to focus on delivering value. If we can't determine a test for it, then we really have to decide what the story actually brings us (hopefully not more zombies!). Sometimes a story card does have to be written in a less visible way – making the necessary infrastructure changes in the database, for example. However, combined with

our effort to make everything thin, we ensure that such stories usually take an unofficial maximum of two days, so that visible feature development is not far off.

4. The story board
We built a story board in the room where we have our stand-up meetings. We have four columns on it: planned, under construction, to test and done. The to test column was added later to make us really aware that "done" means tested. Ideally, cards shouldn't be in the to test column for too long. An automated test should tell us the next day if the feature is done, and a manual acceptance test should be done as quickly as possible once the feature is committed (or even earlier, see below).

5. Working on the same stories
Developers choose story cards from the board to work on. Previously, developers would work on their own paths, which often meant that many things got started, but not all of them finished. Where possible now, our developers work on a story together, developing infrastructure and interface at the same time, for example. As well as meaning that we can focus on finishing one thing before starting another, this also gives the team (and the customer!) a good feeling about new features coming in quickly.

6. Show and tell
In combination with the story board, we introduced show & tell sessions each week, where everyone gets the opportunity to demonstrate to the rest of the team (and the customer) what they are working on. The knowledge that a demonstration is coming up is a great focus for staying close to the thin plan. Show & tell also has two other advantages: it serves as an ongoing manual acceptance test and also gives us new ideas for follow-up features in the next sprint. Some suggestions (especially suggestions about usability) can even be incorporated into the current feature development, if they are not too big.

These six points have really helped us in the past year. We're getting much better at identifying (and sticking to) what we really need to release a new feature.

The catch
So far, so advantageous. What our new productivity has brought with it though, is the question "where is the time to refactor?". The focus on thin stories with the minimum amount of code to gain new value doesn't leave much room for making internal code better, clearer, more maintainable or more up-to-date on the way. The zombies in the code are no longer half-done features, but remnants of previous implementations and old libraries. As I said before, no one can release a new version of the software that does all it did before, just with nicer internal code and updated libraries.

I see this difficulty being compounded by the impression that teams (and customers) get from frequent show & tell meetings or story discussions – that feature development is non-stop. That kind of thinking leads to time pressure in the team which can have many unwanted results.

The next set of points
Not to be disheartened, we came up with a few suggestions we could try out to deal with the issue of finding time to refactor.

1. Adjust the amount of stories we place into a sprint to leave time for such areas.
2. Work on a case-by-case basis. If we need a couple of days added for refactoring, then we can make the decision to gain better code and perhaps lose something less important at the beginning of the sprint, not halfway through.
3. Introduce regular (shorter) refactoring sprints to work on internal knots we've identified.
4. Add a certain amount to each story for "possible refactoring".

The way that we decided to go is number 2. Together with the customer, we discuss the story and the reasons why it would be worth doing some internal work on it. Based on these discussions, we plan time for the work (again, sticking to our unofficial maximum), so that the rest of the stories relating to this feature can be developed more easily and, hopefully, with fewer errors resulting from unclear code.

Continuing the journey
We've been using this strategy for a couple of iterations now, and it seems to be helping. Hopefully we've managed to dull the second edge of the double-edged sword and gain the benefits of a feature-driven approach without suffering from rushed sprints and the problems they bring. Time will tell, and I'm sure we'll introduce new ideas along the way to combat other issues that come up. For now though, we seem to have the zombies under control… Muuuuurgghhh…. ■

> About the author
Alexandra Imrie came to Bredex GmbH in 2005 after finishing her degree in linguistics. Her first role involved writing technical documentation, but writing about features soon turned into discussing how features should be implemented from the customer perspective. Now she is responsible for communicating with customers, giving workshops and training courses, and working as test consultant or a tester for various projects. She also continues to represent the customer's view in terms of understanding, usability and feature requests. Two of her main interests in the world of software development are how to make software user-friendly and how to bring agility to testing processes.

Continuous Deployment and Agile
Testing
by Alexander Grosse

"Our highest priority is to satisfy the customer through early and continuous delivery of valuable software"
from the Agile Manifesto

Continuous Deployment is one of the current buzzwords. Internet companies are pushing code to production at a breathtaking frequency (Flickr [1] is pushing to production 10 times a day; other companies like Wordpress [3] and IMVU [4] are doing the same, and Google is doing it depending on the product).

Good ideas (or revenue-generating ideas) should be delivered as soon as possible to the end user. A company is losing money if features are developed and then sit in version control waiting for deployment. Also, consumers expect constant innovation, but there is also an internal reason for getting the development department into a state where Continuous Deployment is possible.

Looking only briefly at it, people tend to focus on deployment techniques – for example using tools like Puppet et al. to automate deployments, or Nagios to monitor servers and check that a deployment was successful. But what does Continuous Deployment mean for day-to-day development and testing? How do you have to organize your development department to be able to push out code that often without neglecting quality?

Pushing code out as often as possible
Hearing Continuous Deployment, some people tend to think that everything that is being developed is just pushed out to production. This is not true – depending on the change, different approaches are usually taken.

Bugfixes and small enhancements
These are usually pushed out directly, but this does not mean there is no testing involved (see Testing below).

Features
Depending on the company, some are using A/B testing before deployment, some are not.

Database Changes and Architectural Changes
These changes are almost never deployed automatically. Especially database changes need careful preparation, as a rollback without downtime is very hard.

Testing
To be able to deploy anytime, you cannot afford to have separate development, QA and operations departments. You probably even cannot afford to have any manual work done after a developer has checked in.

To achieve this, the keyword is obviously automation. The best way to do this is to use a build pipeline. But what is a build pipeline, and what is the difference to Continuous Integration?

In Continuous Integration, usually parts of the complete deployment are built (in the Java world a jar, for example), deployed to a Tomcat and tested. This is obviously a good approach, but not good enough, because here the test systems are usually not the same as the production systems. And to avoid bad surprises deploying to production, you should test on production-like systems.

A build pipeline deploys all systems the same way as on production and applies the Fail Fast pattern, which essentially means trying to detect errors as soon as possible. To align test and production systems, in a build pipeline the CI server usually produces RPMs and builds all systems using the full RPM stack (from OS to application). This means every time a developer checks in code, an RPM is built and a complete test system is built and tested. So you test your deployment every time you check in. As it is a pipeline, several test stages are performed, and only if all tests are passed can an RPM be deployed to production.
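The following is only a rough sketch of the fail-fast idea behind such a pipeline – the stage names are invented and no particular CI server's API is used – but it shows the essential structure: stages run in order, the first failure stops the run, and only a fully green run produces something that may go to production.

import java.util.Arrays;
import java.util.List;

// Minimal fail-fast pipeline sketch: stages run in order, the first failing
// stage aborts the run, and the candidate is only releasable if all pass.
public class BuildPipelineSketch {

    interface Stage {
        String name();
        boolean run();   // true = stage passed
    }

    static Stage stage(final String name, final boolean passes) {
        return new Stage() {
            public String name() { return name; }
            public boolean run() { return passes; }
        };
    }

    public static void main(String[] args) {
        // Illustrative stage list; a real pipeline would call out to the
        // build server, the RPM build, deployment scripts and test suites.
        List<Stage> stages = Arrays.asList(
                stage("compile and unit tests", true),
                stage("build RPMs", true),
                stage("deploy full stack to test system", true),
                stage("functional and regression tests", true),
                stage("performance smoke test", true));

        for (Stage s : stages) {
            System.out.println("Running stage: " + s.name());
            if (!s.run()) {
                System.out.println("FAILED at '" + s.name() + "' - stopping, nothing gets deployed.");
                return;
            }
        }
        System.out.println("All stages green - this RPM set may be deployed to production.");
    }
}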
Role of Testers
Seeing the emphasis on automated testing, the role of testers changes more towards Software Engineer in Test (Google has its own department for Software Engineers in Test), basically skilled Software Engineers with a passion for testing (yes, they exist!).

Also, software engineers have to take much more responsibility for the quality of their code; otherwise it takes too much time to ensure the quality of the code.

(Figure: Part of a Build Pipeline)

Organization (Scrum, Kanban)
What is the best way to organize your development department to be able to deploy automatically? Obviously, heavy up-front design techniques like waterfall don't work. Scrum aims for production-ready software after each sprint, so it is not an optimal fit. A combination of Kanban and XP works best, with a focus on strong engineering practices and the goal to deploy every feature which has reached the right end of the Kanban board.

Summary
Should every organization aim to do Continuous Deployment? Realistically not; there are many companies where Continuous Deployment is not an option (banking industries, security-critical applications), but I think that every company should try to bring its development department into a state where it would be possible to do Continuous Deployment. So, use Continuous Deployment as a vision for your development department, and actually deploy as many as 50 times a day if business allows it, or test release candidates extensively after automation if your business demands that.

Doing that, the role of developers, testers and system administrators changes. Developers have to take more responsibility for the quality of their code, the role of testers changes from manual testing to become more of an automation expert, and system administrators have to work with developers as one team – a movement which is called DevOps ([5]). ■

Links
[1] Deployment at Flickr - http://www.slideshare.net/jallspaw/10-deploys-per-day-dev-and-ops-cooperation-at-flickr
[2] Book about Continuous Delivery (they call it delivery instead of deployment because deployment could also mean just deploying to a QA system) - http://continuousdelivery.com/
[3] Deployment at Wordpress - http://toni.org/2010/05/19/in-praise-of-continuous-deployment-the-wordpress-com-story/
[4] Deployment at IMVU - http://timothyfitz.wordpress.com/2009/02/10/continuous-deployment-at-imvu-doing-the-impossible-fifty-times-a-day/
[5] Devops - http://en.wikipedia.org/wiki/DevOps

> About the author
Alexander Grosse is heading the Places Development at Nokia's Location services unit. As a group they are responsible for location-based services such as Ovi Maps on the web and device. Alexander has been working in the software industry since 1996 and holds a Masters in computer science from the University of Oldenburg. At Nokia he built up the development department of the Places unit to a state where eight teams are working in parallel on a service-oriented architecture.

What Donkeys Taught Me About
Agile Development
by Lisa Crispin

If you've met me or attended one of my presentations or tutorials, you probably know about my miniature donkeys, Ernest and Chester. Driving my donkeys is my avocation, but working with them has taught me surprising skills that help me contribute to my software development team.

Trust
Ernest, our first donkey, was rescued from an abusive situation. He was half-starved and terrified of people. It took us weeks just to get near him. I'd never worked with a donkey before, but I've ridden horses all my life, and trained at the FEI levels of dressage, so I thought training a donkey would be a piece of cake. However, Ernest was a whole different animal. After long months of experimenting and working with an experienced donkey trainer, I learned the key. If a donkey trusts you, and believes that you love him, he will do whatever you ask. If not, well – forget about it, you won't be able to bribe or bully him into doing your bidding.

Donkeys have the reputation of being stubborn, but the truth is, they have a strong need for personal safety – they're looking out for Number One. Once we were driving Ernest through a field, and he stopped dead and refused to budge. Finally I got out of the cart and looked in the tall grass ahead – there was a tangle of barbed wire stretched across, invisible in the grass. Ernest somehow knew it was there, and wasn't going to let the stupid humans hurt him.

If my donkeys are afraid of something, even something that looks trivial to me such as a paper bag blowing across the road, it's my responsibility to save them from it. So, when something alarms them, I get them away from the alarming object. They know I will protect them, so now they trust that anyplace I take them must be safe.

Teams revolve around trust, too. If I don't have credibility with the programmers on my team, they won't jump to fix a defect I report. Maybe they think I'm trying to get them in trouble or make myself look good. The business experts won't trust us to manage our own workload if we never deliver on the commitments we make. If any team member sees a problem, but doesn't feel safe to raise the issue with the rest of the team, that problem won't get fixed, and that person won't be happy and productive.

It might take a long time to build up a trusting relationship – it sure did with Ernest – and it doesn't take long to destroy it if you do something harmful. But it's worth the effort. My teammates and I trust each other. If anyone needs help, they get it right away. Here's an example. Recently, one of our Canoo WebTest GUI regression scripts began to fail. A script was clicking on a link, and the resulting page returned a 404 error. However, the GUI still worked fine manually in the browser. This occurred right after a programmer checked in a refactoring. He couldn't believe his change caused the problem, but he trusts me to give him honest information – the test script had not been changed, and it had passed up to that point, so it must be something he did. After we both spent hours of research, he found that when he moved a Velocity macro into a module using IntelliJ IDEA, the IDE itself made other code changes without his knowledge, trying to be "helpful". Some Javascript includes were lost, causing the 404 "behind the scenes", breaking the test. Without trust, the situation could have degenerated into a "blame game". Instead, we worked together until we solved the problem.

Our business people trust us, so if we need more time to deliver a feature the right way, they wait. We trust that the examples they give us for desired system behavior are accurate. So, we're able to deliver business value steadily and reliably.

Donkey Energy
Speaking of steady and reliable – these are two central attributes of donkeys. Ernest and Chester love to work. Chester is younger, and likes to play the clown. But hitch him to a cart or a load of hay to haul, and he focuses on his job. Donkeys don't set the world on fire, but they throw their shoulders into their work and go one step at a time. Ernest isn't flashy, but he has won the Castle Rock Donkey and Mule Show Obstacle Driving for Minis competition

five times. Chester might be a miniature donkey, only about a meter tall, but he can easily pull two adults in a cart over hill and dale and even through water or snow. As a team, they work the dressage arena every week, a tough job in the deep sand, each one pulling his weight. They never quit, so I have to be careful I don't present a challenge that is too big.

Because my donkeys know I have their best interests at heart, they're happy to try new experiences. Last year, I bought a four-wheeled buckboard wagon, much larger than the two-wheeled carts they had pulled before. The first time I hitched them to the wagon, they willingly adapted to the new situation and learned along with me. Periodically, we work with trainers to take our skills to a new level.

In my experience with agile development, slow and steady wins the race. I don't know if my team is one of those "ultra-performing" teams, but I do know we deliver significant business value to production every two weeks, and the quality of our product exceeds our customers' expectations. We don't have peaks and valleys; we focus on finishing one story at a time, and we finish several over the course of a two-week iteration. Sustainable pace rules – it allows us to continually deliver value without burdening ourselves with too much technical debt.

Like donkeys, software teams need good care and feeding: we need time to learn, time to experiment and improve our process. With a nurturing culture, we continue to do our jobs a little better every day, expanding our abilities. We can adapt to whatever curve our business throws us.

Enjoyment
Donkeys really do love to work. If they see other donkeys getting to work while they sit idle, they look dejected. They also seem to love adventure. They're always up for a road trip – they leap into the horse trailer (which is quite a big leap for these small donkeys). They might be going for a trail drive near the mountains, or going to a school (inside the school building, even) to be hugged by children. They might be going to a donkey show or for a hike – it doesn't matter, they clearly enjoy the journey.

Watching them reinforces for me how important it is to love what we do. Enjoyment is a key agile value. We must take pride in our craftsmanship, satisfied to deliver the right product to our customers, able to do so while maintaining a sustainable pace. When I first started in the software business, I thought it was something to do until I figured out what I wanted to do when I "grow up". Finally I realized I was passionate about quality and making a difference. I love being part of a business, able to contribute to its success in many ways. When every team member has this passion, and every team member is fully engaged in the process of building the best possible software, that's a joyful and productive team.

Donkey playtime reminds me how important it is to celebrate success. When work is over, Ernest and Chester play hard, chasing each other, engaging in tug-of-war, whacking each other with toy balls and feed tubs, stealing things the careless humans set down. On my development team, we often are so heads-down in work we forget to stop and reward ourselves. It's fun to play games, enjoy treats, have a celebration, and remember why we work so hard.

What You Can Learn from Donkeys
Take a lead from Ernest and Chester. Work on building trusting relationships and nurturing a learning culture. Create an atmosphere of personal safety in your team and organization. Work steadily at a sustainable pace, keeping focus on the next goal. Anticipate adventure, and enjoy honing your craft. Celebrate when you achieve goals, big and small. You'll discover one truth about agile development: it means always finding good ways to deliver the highest quality software, satisfying your customers and yourself. ■

> About the author
Lisa Crispin is an agile testing coach and practitioner. She is the co-author, with Janet Gregory, of Agile Testing: A Practical Guide for Testers and Agile Teams (Addison-Wesley, 2009). She specializes in showing testers and agile teams how testers can add value and how to guide development with business-facing tests. Her mission is to bring agile joy to the software testing world and testing joy to the agile development world. Lisa joined her first agile team in 2000, having enjoyed many years working as a programmer, analyst, tester, and QA director. Since 2003, she's been a tester on a Scrum/XP team at ePlan Services, Inc. in Denver, Colorado. She frequently leads tutorials and workshops on agile testing at conferences in North America and Europe. Lisa regularly contributes articles about agile testing to publications such as Better Software Magazine, IEEE Software, and Methods and Tools. Lisa also co-authored Testing Extreme Programming (Boston: Addison-Wesley, 2002) with Tip House.

Masthead
EDITOR
Díaz & Hilterscheid
Unternehmensberatung GmbH
Kurfürstendamm 179
10707 Berlin, Germany

Phone: +49 (0)30 74 76 28-0 Fax: +49 (0)30 74 76 28-99 E-Mail: info@diazhilterscheid.de

Díaz & Hilterscheid is a member of “Verband der Zeitschriftenverleger Berlin-Brandenburg e.V.”

EDITORIAL
José Díaz

LAYOUT & DESIGN
Díaz & Hilterscheid

WEBSITE
www.agilerecord.com

ARTICLES & AUTHORS


editorial@agilerecord.com

ADVERTISEMENTS
sales@agilerecord.com

PRICE
online version: free of charge -> www.agilerecord.com
print version: 8,00 € (plus shipping) -> www.testingexperience-shop.com

In all publications Díaz & Hilterscheid Unternehmensberatung GmbH makes every effort to respect the copyright of graphics and texts used, to
make use of its own graphics and texts and to utilise public domain graphics and texts.

All brands and trademarks mentioned, where applicable, registered by third-parties are subject without restriction to the provisions of ruling la-
belling legislation and the rights of ownership of the registered owners. The mere mention of a trademark in no way allows the conclusion to be
drawn that it is not protected by the rights of third parties.

The copyright for published material created by Díaz & Hilterscheid Unternehmensberatung GmbH remains the author’s property. The dupli-
cation or use of such graphics or texts in other electronic or printed media is not permitted without the express consent of Díaz & Hilterscheid
Unternehmensberatung GmbH.

The opinions expressed within the articles and contents herein do not necessarily express those of the publisher. Only the authors are responsible
for the content of their articles.

No material in this publication may be reproduced in any form without permission. Reprints of individual articles available.

Index Of Advertisers
CaseMaker 7
Díaz & Hilterscheid GmbH 2, 9, 19, 29, 32, 39, 54, 62
gebrauchtwagen.de 87
iSQI 11, 44 - 45
Kanzlei Hilterscheid 23

