When SIAC* projects began implementing the Rational Unified Process (RUP), the
main challenge for testers was adapting the rigid requirements-based testing style to the
iterative nature of the RUP framework. Under the RUP, software requirements and
design evolve iteratively throughout the project lifecycle, and the test design is expected
to evolve with them. Testers therefore need a test design approach that remains effective
while software requirements are still evolving. Although the RUP methodology is a
comprehensive source of activities and guidelines defining the overall testing workflow,
it lacks some details about test design. In particular, we needed more detail about a) how
the test design steps and artifacts relate to other RUP disciplines, and b) how to carry out
test design activities based on evolving requirements. To fill this gap, we developed an
approach we call Iterative Test Design (ITD). In this article, we describe how our
approach combines features of the requirements-based and exploratory testing schools.
The main benefit of ITD is that it provides a framework for test design that is effective in
the context of incomplete or evolving requirements.
* SIAC – Securities Industry Automation Corporation
Testing expert James Bach has been evolving and teaching exploratory testing as a
systematic, risk-based approach to testing [5]. The main concept of this methodology is
that test design and test execution happen at the same time, with testers developing test
ideas driven by exploration of the product's quality risks. As we will discuss in this
article, features of both testing schools can be combined to help us deal effectively with
incomplete or evolving requirements.
There is an important difference between the last step and the first three. While
performing our test design activities during the first three steps, we explore the product
requirements; at Step 4, we manually execute tests to explore the product itself. Test
execution is sometimes viewed as a simple procedure that can be completely automated,
so one may ask, "Why is test execution part of test design?" According to the exploratory
testing concept [6], software testing is a challenging intellectual process centered on
exploration of the software product. That, in turn, requires human observation and
analysis that can never be replaced by automated scripts. As a result, in the course of
manual test execution, testers can develop new test ideas and enhance their existing
test design.
Once we have identified generic quality risks, we develop ideas about how to execute
tests for them. The form used to capture this test logic can be as simple as a checklist of
test conditions, or much more elaborate, such as test design patterns. In software
development, patterns are generally used to capture a solution to a problem that recurs
in a given context [7]; hence, the test logic for generic quality risks can be presented as
test design patterns. The deliverable of this step is a defined approach to functional
testing of the application. The test approach can be presented either as the list of test
ideas developed at this step or as a section in a test plan document, if such a document is
a required deliverable following IEEE Std. 829-1998, "IEEE Standard for Software Test
Documentation".
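To make the two forms concrete, the following is a minimal sketch in Python (our
illustration, not an artifact of the article's project; all names and risks are invented) of a
checklist of test conditions and a simple test design pattern for one generic quality risk:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Checklist:
        """Simplest form: test conditions to cover for one generic quality risk."""
        risk: str                                    # the generic quality risk addressed
        conditions: List[str] = field(default_factory=list)

    @dataclass
    class TestDesignPattern:
        """Richer form: a reusable testing solution for a recurring context."""
        name: str                # pattern name
        context: str             # where the risk recurs
        problem: str             # what can go wrong
        test_logic: List[str]    # how to test for it

    # Usage: the same generic risk captured in each form (hypothetical example).
    checklist = Checklist(
        risk="invalid input accepted",
        conditions=["empty value", "value exceeding maximum length", "wrong data type"],
    )
    pattern = TestDesignPattern(
        name="Mandatory-Field Check",
        context="any data-entry screen with required fields",
        problem="a record is saved although a required field is blank",
        test_logic=[
            "submit the form with each required field left empty in turn",
            "verify a validation error appears and nothing is saved",
        ],
    )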
We begin this step by analyzing the use case description and decomposing it into a few
control flows. For this purpose, we can use a use-case activity diagram, if one is
available. We then analyze each control flow to identify specific quality risks and
develop ideas about what to test, i.e., what can go wrong if the user (or actor) performs
all possible valid and/or invalid actions. While exploring the requirements described in a
use case, we use the list of generic quality risks (our deliverable from Step 1) as a
reference source that can suggest test ideas not obvious from the use case description.
Analyzing use cases, we may find that a particular kind of quality risk, not initially
included in the generic list, appears across a number of use cases; it, too, can then be
qualified as a generic risk. In this case, we should go back to Step 1 and update the list of
generic quality risks. Finally, we define test objectives for the use case based on the
identified specific quality risks. The important deliverable of this step is a draft of
high-level test design specifications that capture our test ideas and test objectives for the
use cases.
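A hypothetical sketch of this Step 2 deliverable follows, again in Python: a draft
high-level test design specification linking a use case's control flows to the specific
quality risks and test objectives identified for them. The use case, flows, and risks are
invented for illustration.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ControlFlow:
        name: str               # e.g. the basic flow or an alternate flow
        risks: List[str]        # specific quality risks: what can go wrong here
        objectives: List[str]   # test objectives derived from those risks

    @dataclass
    class TestDesignSpec:
        use_case: str
        flows: List[ControlFlow] = field(default_factory=list)
        status: str = "draft"   # Step 2 leaves the specification in draft form

    spec = TestDesignSpec(
        use_case="Withdraw Cash",
        flows=[
            ControlFlow(
                name="Basic flow",
                risks=["dispensed amount differs from the requested amount"],
                objectives=["verify the dispensed amount matches the request"],
            ),
            ControlFlow(
                name="Alternate flow: insufficient funds",
                risks=["withdrawal is allowed beyond the account balance"],
                objectives=["verify the request is rejected with a clear message"],
            ),
        ],
    )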
As we learn more about the system implementation, our understanding of quality risks
can evolve further, and we may identify additional test objectives for the use cases. In
this case, we should go back to Step 2 and update the objectives in our test designs.
When we feel our test designs are fairly complete, we review them with business experts
and, based on their feedback, revise the documents if necessary. Another good practice is
to validate the test designs against a system prototype to make sure the documents
capture the correct test ideas. The deliverables of this step are high-level test design
specifications (completed and reviewed at this point) and test case specifications, still in
draft form.
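Continuing the earlier sketch (a hypothetical structure, not the article's actual template),
a draft test case specification produced at this step can trace each detailed case back to
one of the test objectives captured in the high-level design:

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class TestCaseSpec:
        objective: str          # traces back to an objective in a test design spec
        steps: List[str]        # the detailed "how": concrete actions and inputs
        expected: str           # expected result to check against
        status: str = "draft"   # still a draft at the end of Step 3

    draft_case = TestCaseSpec(
        objective="verify the request is rejected with a clear message",
        steps=[
            "log in with a test account holding a $10 balance",
            "request a $50 withdrawal",
        ],
        expected="the transaction is refused and an insufficient-funds message is shown",
    )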
In our approach, high-level test design specifications are the testers' main deliverables.
Even though this document type has been known for many years, for some reason testers
do not use it as frequently as test case specifications. In the context of complex projects
with iterative development, we found that it provides the following benefits. First, test
design specifications are easy to review and verify, as they present high-level ideas about
"what" to test, whereas test case specifications capture detailed information about "how"
to test. Second, being high-level documents, test design specifications are typically less
affected by changing and evolving system requirements and design specifications than
detailed test case specifications are; hence, their maintenance cost can be significantly
lower. Third, from test design specifications, management can see the number of test
cases to be developed and executed. These data, available early in the process, can help
management better allocate resources and schedule testing activities in the project plan.
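To illustrate the third benefit: because each design specification enumerates its test
objectives, a planned-test-case count per use case can be pulled from the drafts early.
This small sketch reuses the hypothetical TestDesignSpec structure from the Step 2
example; mapping one objective to one test case is our simplifying assumption.

    def planned_test_cases(specs):
        """Return a {use case: planned test case count} map for early estimates."""
        return {
            s.use_case: sum(len(flow.objectives) for flow in s.flows)
            for s in specs
        }

    counts = planned_test_cases([spec])   # for the Step 2 sketch: {"Withdraw Cash": 2}
    total = sum(counts.values())
    print(total, "test cases planned across", len(counts), "use cases")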
[Figure: RUP disciplines mapped to ITD steps. Business Modeling feeds Step 1, which
produces the approach to testing; Requirements feeds Step 2, which produces the draft
high-level test designs.]
Our test design approach is project-context-driven. This means that decisions about
whether we need detailed test case specifications, and about what level of detail is
appropriate for test design specifications, should depend on the project's context and
management objectives. In general, before deciding how much to invest in designing and
maintaining test documentation, we should clarify the requirements for that
documentation by answering questions such as "What is the purpose of the test
documentation?" and "Who will be using it, and how?" The same type of test
documentation can carry different levels of detail for different purposes. For example, it
could be a product delivered with a system, which must comply with contractual
requirements; a tool for testers, which should help them execute testing in a structured
way; or a tool for management, which should help them better plan and control the
project. Therefore, like any other project deliverable, test documentation should have
defined requirements and design guidelines tailored to the given project context. The
book "Lessons Learned in Software Testing" [6] has a detailed discussion of this topic.
Acknowledgement
This work was supported by the Application Scripting Group (ASG) at SIAC, and the
authors are grateful to the ASG testers for their feedback.
References
[1] G. Myers, "The Art of Software Testing", John Wiley & Sons, New York, 1979.
[2] B. Hetzel, "The Complete Guide to Software Testing", John Wiley & Sons, 1988.
[3] E. Kit, "Software Testing in the Real World", Addison-Wesley, 1995.
[4] C. Kaner, "Testing Computer Software", International Thomson Computer Press, 1988.
[5] J. Bach, "Risk-Based Testing", STQE Magazine, November 1999, pp. 22-29.
[6] C. Kaner, J. Bach, B. Pettichord, "Lessons Learned in Software Testing", John Wiley & Sons, 2002.
[7] E. Gamma et al., "Design Patterns: Elements of Reusable Object-Oriented Software", Addison-Wesley, 1995.
[8] P. Kruchten, "The Rational Unified Process: An Introduction", Addison-Wesley, 2000.
[9] P. Kroll, P. Kruchten, "The Rational Unified Process Made Easy: A Practitioner's Guide to the RUP", Addison-Wesley, 2003.