
divine QA Testing

2 Test Case Design Methods

2.1 Introduction

A rich variety of test case design methods has evolved for software. These
methods provide the developer with a systematic approach to testing. More
importantly, the methods provide a mechanism that can help to ensure the
completeness of tests and provide the highest likelihood for uncovering errors in
software.

Any engineered product can be tested in one of two ways: (1) knowing the
specified function that a product has been designed to perform, tests can be
conducted that demonstrate each function is fully operational while at the same
time searching for errors in each function; (2) knowing the internal workings of
a product, tests can be conducted to ensure that "all gears mesh", that is, internal
operations are performed according to specifications and all internal components
have been adequately exercised. The first test approach is called black box testing
and the second white box testing.

Black box tests are conducted at the software interface level. Although they are
designed to uncover errors, black box tests are used to demonstrate that software
functions are operational, that input is properly accepted and output is correctly
produced, and that the integrity of external information is maintained. A black box
test examines some fundamental aspect of a system with little regard for the
internal logic structure of the software. Black box testing is also called
behavioural testing. It focuses on the functional requirements of the software.
White box testing of software involves close examination of procedural detail.
Logical paths through the software are tested by providing test cases that exercise
specific sets of conditions and/or loops. The "status of the program" may be
examined at various points to determine whether the expected or asserted status
corresponds to the actual status.

The attributes of both black box and white box testing can be combined to provide
an approach that validates the software interface and selectively ensures that the
internal workings of the software are correct.

2.2 Test Specifications

A test design specification defines the approach for testing: the test techniques to
be used, the test case design methods to be used, the test environment, etc. Test
specifications are documented in the test plan.

A test suite is a framework that provides a way of grouping test cases. A test
suite may consist of test cases to test a specific functionality. For instance, test
cases for testing a specific user interface (or web page) can be grouped together to
form a test suite. Each test suite and its test cases should be uniquely identifiable.


A test case contains a description of the test that needs to be conducted on a
system. A test case is reusable and may be included in one or more test suites.

A test case description typically includes the following:

Test case identifier: A unique name or number.
Purpose: What features, paths, etc., are being tested?
Reference: References to specifications and design documents should be provided.
Input: Input consists of input data and/or input actions. Input data may include data items to be entered online, data records in a file or database to be set up, or data items to be input from the interface of some other software system. Input actions include keyboard or mouse actions/commands for navigation and control necessary for online interaction.
Output: Output consists of output data and/or outcome. Output data consists of messages, reports, etc., that would be produced on completion of the test case. Outcome describes reaching a specific state, for instance the successful completion of a transaction.
Environment: List the specific hardware, software and network configuration that needs to be set up for the test.
Procedure: The test procedure describes the steps for test setup, test start, test execution, results logging, recording of measurements, test stop, and contingency (what to do when it all goes wrong). Test steps may also describe how and where to restart a test after a shutdown.
Constraints: Any constraints on performing the test case.
Dependencies: What tests have to be executed before this one, and what happens if the program fails them?
Feature pass/fail: Criteria that describe under what circumstances a feature can be considered "passed" (the test has failed to find an error) or "failed" (the test has succeeded in finding one); sometimes a "partial pass" criterion may also be described.

Depending on the complexity of the software system and the level of testing
(Unit, Integration, System and Acceptance), some or all of the items stated above
could be included in a test case template for recording the test case design.
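
To make these items concrete, the following sketch models a test case record and a test suite grouping in Python. The field names mirror the description items above; the TestCase and TestSuite classes themselves are illustrative, not part of any standard.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class TestCase:
        # Fields mirror the test case description items above.
        identifier: str                  # unique name or number
        purpose: str                     # features/paths being tested
        references: List[str]            # specification/design documents
        inputs: List[str]                # input data and/or input actions
        expected_output: str             # output data and/or outcome
        environment: str                 # hardware/software/network set-up
        procedure: List[str]             # set-up, start, execute, log, stop
        constraints: List[str] = field(default_factory=list)
        dependencies: List[str] = field(default_factory=list)  # prerequisite tests
        pass_fail_criteria: str = ""     # when the feature counts as passed/failed

    @dataclass
    class TestSuite:
        # A suite groups reusable test cases, e.g. by functionality.
        identifier: str
        cases: List[TestCase] = field(default_factory=list)

Because a test case is reusable, the same TestCase instance may appear in the cases list of more than one TestSuite.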

2.3 Test Case Template

Depending on the testing requirement and the level of testing (Unit, Integration,
System and Acceptance), one of the following test case templates may be used for
the test case design.


Type 1:
Test case Name: Mnemonic identifier
Test Id: Numeric identifier
Test suite Id: Test suite(s) identifier (numeric)
Feature: System/application feature being tested
Priority: Priority assigned to the test
Environment: Hardware and software required
Duration: Test schedule
Effort: Person hours
Set up: List steps needed to set up test
Test step: Test steps and sub-test steps for starting, conducting and
stopping the test. For each test step the following items
shall be defined:
<Test step No.> <Step description> <Input/Input action>
<Output/Outcome> <Result> <Bug identification Id>
Feature Pass/Fail: Pass: Output and outcome expected for complete pass
Fail: Output and outcome expected for failure
Partial pass: Output and outcome expected for partial pass

Type 2:
Test case Name: Mnemonic identifier
Test case Id: Numeric identifier
Test suite Id: Test suite(s) identifier
Purpose: What feature it tests
Priority: Priority assigned to the test
Input: Data input/input actions
Output: Output and outcome expected (Pass/Fail/Partial Pass)
Environment: Hardware and software required
Procedure: <Set up procedure>
<Test steps>
<Test stop and wrap up steps>
Constraints: Any constraint associated with the test
Dependencies: Interdependency with other test cases

Type 3:
Test case Id: Numeric identifier
Test suite Id: Test suite(s) identifier
Purpose: What feature it tests
Priority: Priority assigned to the test
Input: Data input/input actions
Output: Output and outcome expected (Pass/Fail)
Test step: Test steps and sub-test steps for starting, conducting
and stopping the test
Constraints: Constraints on the test

The environment requirements and the test setup procedure
are described at the test suite level only.


Type 4:
Test case Id: Numeric identifier
Test suite Id: Test suite(s) identifier
Purpose: What feature it tests
Test step: Test steps and sub-test steps for starting, conducting
and stopping the test
Refer to Type 1 for the test step description
Feature Pass/Fail: Pass: Output and outcome expected
Fail: Output and outcome expected

The environment requirements and the test setup procedure are described at the
test suite level only.

2.4 Business Logic Based Test Design

In software systems, business logic and functionality are documented and modeled
using use cases. A use case typically describes the interaction between an actor
(a user of the system) and the system that allows the actor to achieve desired
results. A use case normally describes how a user can access a specific feature,
functionality or service of a software system to perform a specific task. Use case
based test case design provides for functional testing of the application. Use cases
form excellent input for functional test case design for integration, system and
acceptance testing. Functional test cases can be designed based on use cases in the
following way:

1. Identify the list of use cases to be tested.
2. Select a use case and identify its related use cases and their interfaces.
3. Study the use case
• Identify the entry condition (precondition)
• Identify the inputs required
• Identify the exit conditions (postcondition)
• Identify output and outcome
• Study the normal flow
• Identify exceptions and study alternate flows
• Identify constraints on the use case
4. Analyze normal flow
• Explore normal flow into implementation detail (if required)
• Link and read along with user interface design and data model
(if available)
• Identify the sub-path(s) in the normal flow
• Identify the data set that executes a given normal flow path
5. Analyze alternate (exception) flows
• Explore each alternate (exception) flow
• Link and read along with user interface design and data model (if required)
• Identify sub-path(s) in each alternate flow
• Identify data set that executes a given alternate flow path

6. Design test case
• Define test environment set up
• Define test procedure (test step) of the test detailing interaction
and input actions
• Define the input data set and outcome
• Define the use case feature pass/fail/partial pass criteria
• Define dependencies (prerequisites) with other test cases
7. Document the test case in the test case template
8. Walk through (dry run) the test cases on the application
9. Review test case design for completeness and correctness
• Identify missed exceptions and paths
• Identify need for more test cases
• Identify defects in existing test cases
10. Update test case design
11. Verify test case design and close the review findings

A use case may have dependencies on other use cases, which would require
interfacing and interaction. Ensure that these dependencies and the associated use
case flows are captured in the test case design. Use case based test cases can be
used for end user testing (during acceptance testing) by grouping the test cases
pertaining to all use cases in which an actor (end user) participates when
interacting with the system.
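
As a minimal sketch of steps 3 to 6, the tests below are derived from a hypothetical "withdraw cash" use case: one test case exercises the normal flow and two exercise alternate (exception) flows. The Account class is only a stand-in for the system under test; real tests would drive the actual application.

    import unittest

    class Account:
        # Stand-in for the application under test.
        def __init__(self, balance):
            self.balance = balance

        def withdraw(self, amount):
            if amount <= 0:
                raise ValueError("invalid amount")      # alternate flow 2
            if amount > self.balance:
                raise ValueError("insufficient funds")  # alternate flow 1
            self.balance -= amount                      # normal flow
            return self.balance

    class WithdrawCashUseCaseTest(unittest.TestCase):
        def setUp(self):
            # Entry condition (precondition): an account with a known balance.
            self.account = Account(balance=100)

        def test_normal_flow(self):
            # Normal flow: a valid amount leaves the expected balance (outcome).
            self.assertEqual(self.account.withdraw(40), 60)

        def test_alternate_flow_insufficient_funds(self):
            # Alternate (exception) flow: the amount exceeds the balance.
            with self.assertRaises(ValueError):
                self.account.withdraw(500)

        def test_alternate_flow_invalid_amount(self):
            # Alternate (exception) flow: a non-positive amount is rejected.
            with self.assertRaises(ValueError):
                self.account.withdraw(-5)

    if __name__ == "__main__":
        unittest.main()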

2.5 User Interface Based Test Design

User interface testing involves testing the user interface of the product to verify
whether the product performs its intended functions correctly or behaves
incorrectly. User interface testing includes standard (normal) usage testing as well
as unusual usage testing and failure (error condition) testing (negative behavior).
All interactive applications, including web applications, provide user interfaces
through which a user interacts with the system to perform a desired function. User
interface based testing involves testing the user interface itself as well as testing
the functionality of the application through the user interface.

User Interface Testing

User interface testing requires tests that verify the windows and their graphical
objects for accuracy. A user interface may work well functionally, but graphical
objects can appear to the user to be corrupted. One way to find these problems is
to have automated scripts that address screen verification throughout the testing
process. This type of automated testing is time consuming and may require much
maintenance, so keep it simple and compact. Sometimes these automated scripts
can be used as acceptance tests.
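
As one possible shape for such a script, the sketch below uses Selenium WebDriver (an assumed tool choice, not one the text prescribes) to verify that a graphical object is displayed with sane geometry; the URL and element id are hypothetical placeholders.

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    try:
        driver.get("http://localhost:8000/login")      # hypothetical page
        button = driver.find_element(By.ID, "submit")  # hypothetical element id
        # Verify the object is visible and its geometry is plausible,
        # catching objects that render corrupted or collapsed.
        assert button.is_displayed()
        assert button.size["width"] > 0 and button.size["height"] > 0
        assert button.location["y"] >= 0
    finally:
        driver.quit()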

Manual testing allows the tester the flexibility to make judgements and to catch
subtle things that elude automated testing, but it is much harder to repeat such
tests accurately.

Geometry management requires the push buttons, labels and text to be in a
predictable location and consistent from one window to another. Verify that
graphical objects are consistent in size and distance even when the
windows/pages are resized and different fonts are used.
Usability testing ensures that all windows/pages present a cohesive look to the
user, including spelling, graphics, page size, response time, etc. Examples of
usability testing include:
• Spelling check
• Graphics check (color, alignment, size, etc.)
• Meaningful error messages
• Accuracy of data displayed
• Accuracy of data in the database as a result of user input
• Accuracy of data in the database as a result of external factors (e.g.
imported data)
• Meaningful help pages including context sensitive help

User type (novice/beginner, intermediate, advanced/expert) also needs to be
considered while designing usability tests.

Navigational testing/page flow testing verifies that all navigational methods work
correctly. OK buttons, cancel buttons, keys, windows, dialogue windows,
toolboxes and others offer different ways of navigating through the
windows/pages. Since there are almost infinite ways to navigate through an
application, an efficient way of checking them is to alternate navigation options
when doing other types of testing.

Page flow testing deals with ensuring that jumping to random pages does not
confuse the application. Each page should typically check that it can only be
viewed via specific previous pages; if the referring page was not one of that set,
an error page should be displayed. A page flow diagram is very useful for the
tester when checking for correct page flow within the application. A simple check
to consider is forcing the application to move along an unnatural path; the
application must resist and display appropriate error messages.
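
A minimal sketch of such a check, assuming a hypothetical application that rejects requests whose Referer header is not one of the allowed previous pages (the URLs are placeholders):

    import requests

    CHECKOUT_URL = "http://localhost:8000/checkout"  # hypothetical URL

    # Arriving from an allowed previous page should succeed.
    ok = requests.get(CHECKOUT_URL,
                      headers={"Referer": "http://localhost:8000/cart"})
    assert ok.status_code == 200

    # Jumping to the page directly (an unnatural path) should be resisted
    # with an error page or a redirect.
    bad = requests.get(CHECKOUT_URL,
                       headers={"Referer": "http://elsewhere.example/"},
                       allow_redirects=False)
    assert bad.status_code in (302, 403) or "error" in bad.text.lower()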


User interface test design

The following procedure can be used for user interface test design, covering both
the user interface itself and the functionality of the application.
1. Study the user interfaces (web pages) and the user interface navigation
diagrams (page flow diagrams)
2. For each user interface (web page)
• Identify the data fields (input/output)
• Identify the navigation/input actions required
• For each navigation/input action, define the output and outcome
3. Develop specific/alternate user interaction dialog paths
• For each user interface
• Across related user interfaces
• Across unrelated user interfaces (if required)
4. Identify the data set and input actions that would activate a specific user interface dialog
5. Design test case(s)
• Define test environment set up
• Define test procedure (test steps) of the test detailing navigation and
input actions
• Define dependencies with other test cases (prerequisites for the test
case)
• Define input data (if any)
• Define output of the test case
• Define outcome of the test case
• Define pass/fail/partial pass criteria
6. Document the test case in the test case template
7. Walk through (dry run) the test case on the application
8. Review test case design to identify
• Missed conditions and paths
• Need for more test cases
• Defects in existing test cases
9. Update the test case design
10. Verify the test case design and close the review findings

Test cases pertaining to a specific user interface or user interface path can be
grouped together to form a test suite. Test cases can be documented using any of
the test case templates described earlier.

User interface based test case design provides for development of test cases for
testing the user interface and its usability as well as the functionality of the
application.


2.6 Program Logic Based Test Design

Program logic, or the logic of a method (of a class), documented in the
implementation design forms the basis for unit/class level testing. The logic is
documented using flow charts, pseudo code, state machines, etc.

Logic based testing is a white box testing strategy. It is also called basis path
testing. In logic based testing it is necessary to test each program flow path
uniquely at least once. According to the basis path testing technique, the cyclomatic
complexity of the logic flow gives the upper bound for the number of independent
paths that form the "basis paths" that need to be tested.

Cyclomatic complexity = predicate nodes + 1

A predicate node is a "conditional".

Example: A program with two predicate nodes (conditionals) would have 3 basis
paths. We need one test case for testing each path. To identify the paths, it is
necessary to construct the flow graph for the logic of the program/method.
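
As a minimal illustration, the hypothetical function below has two predicate nodes, giving a cyclomatic complexity of 3 and hence three basis paths, each exercised by one test case:

    def classify(a, b):
        result = "none"
        if a > 5:              # predicate node 1
            result = "a-large"
        if b == 2:             # predicate node 2
            result = "b-two"
        return result

    # One test case per basis path:
    assert classify(1, 0) == "none"     # path 1: both conditions false
    assert classify(9, 0) == "a-large"  # path 2: first condition true only
    assert classify(1, 2) == "b-two"    # path 3: second condition true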

An alternate working approach is to test both paths of each "conditional"
statement in the program. A conditional statement may be a simple condition having
only one operator (e.g., A > 5) or a complex condition having more than one
operator (e.g., A > 5 AND B = 2 OR C > 5). For testing each simple condition, two
test cases are required. In the case of a complex condition, we need to construct a
truth table to find the possible options and design one test case for each option.

If there are 3 simple conditional statements in a program, we need 6 test
cases. This simplified approach may increase the number of test cases (with the
basis path technique, 3 predicate nodes give a cyclomatic complexity of 4, so only
4 test cases would be needed); however, it saves test case design time by
eliminating the laborious task of constructing program flow graphs and
identifying the unique basis paths.
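
To sketch the truth table step for the complex condition from the example above (A > 5 AND B = 2 OR C > 5), the snippet below enumerates the combinations of its three simple conditions; one test case is then designed per combination, choosing data that produces it:

    from itertools import product

    def complex_condition(p, q, r):
        # p stands for A > 5, q for B == 2, r for C > 5.
        return (p and q) or r

    for p, q, r in product([False, True], repeat=3):
        print(p, q, r, "=>", complex_condition(p, q, r))
        # Pick data per row, e.g. (A, B, C) = (9, 2, 0)
        # produces the row (True, True, False).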

We need to identify the data set for executing each basis path or conditional path.
The test case design is documented in the test case template as described in the
earlier section. The test cases are reviewed and approved.

2.7 Input Domain Based Test Design

Logic and data structures are the key elements of a program. Data structures
modeled in the form of a data model define the input domain of the software
system. A data model typically describes the entities and their relationships in the
application domain. It also defines the attributes of each entity and their data
descriptions. The attributes of the entities along with their data descriptions (name,
type, size, constraints) are documented in a data dictionary.

The input domain of the software system would be very large, and testing the
software system with the complete input domain would be very expensive and time
consuming. Techniques such as equivalence partitioning (EP), boundary value
analysis (BVA), etc., are applied to reduce the input domain to an acceptable and
manageable size for input domain based testing.

Test cases can be designed based on the input domain in the following way:
1. Study the data model (entity-relationship model) of the application.
2. Identify and study the attributes of each entity in terms of data type, size and
constraints (constraints can be primary key, one among a range of values,
computed value, foreign key, etc.).
3. Identify the critical attributes that are used for condition checks, computations,
data manipulation, validations, retrieval, etc. These attributes form the
critical data in the input domain.
4. Identify other attributes that are simply of input-output type.
5. Identify and define the standard set of validations that should be conducted for
attributes of a particular data type (say, numeric). These constitute the
validation tests that are conducted with invalid data, invalid data types, etc.

Ex: A data item with a given data type (say XNO with a numeric(3) data type) has
an acceptable range of values. For instance, the acceptable range for XNO is -999
to +999. But based on the input domain, the valid values are, say, only 1-990. The
rest of the acceptable range of values then constitutes the invalid values.

Acceptable range: [ Invalid Range | Valid Range | Invalid Range ]

Validation test cases with invalid data can be picked from the "invalid range" of
values. Validation test cases are required for all attributes in the input domain.

6. For each critical attribute, test cases for "edge testing" can be identified from
the input domain using equivalence classes and boundary values, as follows:

Equivalence classes: A group of tests forms an equivalence class if they all test the
same thing, will catch the same bug, involve the same input data, result in similar
operations in the program, affect the same output variables, and either none of
them force the program to do error handling or all of them do. An equivalence
class represents a set of valid or invalid states for input conditions.

Equivalence partitioning is a black box testing method that divides the input
domain of a program into classes of data from which test cases can be derived.
Equivalence partitioning strives to define a test case that uncovers classes of errors,
thereby reducing the total number of test cases that must be developed.

Prepare a table containing attribute name, valid equivalence class and invalid
equivalence class columns. For each attribute:
• Identify the valid and invalid equivalence classes and record them in the table
• Record one value from each class to represent the class in the table
• Include additional values to provide for standard validation test cases

Ex.
Attribute   Valid Equivalence Class   Invalid Equivalence Class
XNO         Number between 1-990      Number between -999 and 0
                                      Number between 991 and 999

Boundary values: Boundary value analysis leads to the selection of test cases that
exercise boundary values. BVA leads to the selection of test cases at the "edges"
of the equivalence class.

The following boundary values can be considered while designing test cases for
an equivalence class with lower (LB) and upper (UB) boundary values:

LB - 1    UB - 1
LB        UB
LB + 1    UB + 1

Depending on the input conditions, appropriate boundary values have to be
considered for the valid and invalid classes, to arrive at the minimum set of input
values.

Ex: Equivalence classes and boundary values for variable XNO:

Attribute   Valid Equivalence Class   Invalid Equivalence Class
XNO         Class: 1-990              Class: -999 to 0
            Boundary values:          Boundary values: -999, -998, -1, 0, 1
            0, 1, 2, 989, 990         Class: 991 to 999
                                      Boundary values: 990, 991, 992, 998, 999


By combining the equivalence classes and the boundary values, we can arrive at
the test data for XNO as:

Attribute   Valid Values     Invalid Values
XNO         1, 2, 989, 990   -999, -1, 0, 991, 999

The valid and invalid values identified are used as the input data in the test cases
designed. In addition to the invalid values, the standard validations for the numeric
data type must also be considered while designing the test cases.
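
A minimal sketch of the resulting tests using pytest, where validate_xno is a hypothetical stand-in for the application's validation routine:

    import pytest

    def validate_xno(value):
        # Stand-in for the application's rule: valid values are 1-990.
        return isinstance(value, int) and 1 <= value <= 990

    @pytest.mark.parametrize("value", [1, 2, 989, 990])
    def test_xno_valid_values(value):
        assert validate_xno(value)

    @pytest.mark.parametrize("value", [-999, -1, 0, 991, 999])
    def test_xno_invalid_values(value):
        assert not validate_xno(value)

    @pytest.mark.parametrize("value", ["abc", None, 3.5])
    def test_xno_invalid_data_type(value):
        # Standard validations for the numeric data type.
        assert not validate_xno(value)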

2.8 Test Case Integration and Traceability

During the test case design process, we may apply one or more of the test case
design techniques described earlier. Any one technique may not ensure complete
test coverage. It is necessary to adopt and use more than one technique and to
integrate the test cases developed.

It is possible that the same test case could have been identified by more than one
of the techniques applied. We need to remove any such redundancies before
integrating all test cases.

Subsequently, test cases can be packaged into test suites, and test suites can be
combined to meet a test specification.

During the test case design process, it is necessary to establish traceability
between the "application functions to be tested", their "testing requirements", the
corresponding test specifications containing test suites, and the test suites
containing test cases.

A traceability matrix should be set up for verifying whether all aspects of the
system are testable and have actually been tested.

Application function ⇒ Test Requirement ⇒ Test Specification ⇒ Test Suite
⇒ Test Case ⇒ Test Result
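
A minimal sketch of this chain as a data structure, using purely illustrative identifiers:

    # Application function -> test requirement -> test specification
    #   -> test suites -> test cases (results are attached after execution).
    traceability = {
        "Funds Transfer": {
            "test_requirement": "TR-07",
            "test_specification": "TS-03",
            "test_suites": {
                "SUITE-12": ["TC-101", "TC-102"],
            },
        },
    }

    # Verify every function traces down to at least one test case.
    for function, links in traceability.items():
        cases = [c for s in links["test_suites"].values() for c in s]
        assert cases, function + " is not covered by any test case"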

2.9 Product Quality and Test Coverage

As people use a product, they form opinions about how well that product fulfills
their expectations. In a sense, during development of the product, the
development and testing teams use the software system to try to gauge, in
advance, customers' experiences of product quality. The extent to which the
software system allows you to do this is known as the "fidelity" of the software
system.

It is necessary to ensure that the user's experience of product quality is a
subset of the tester's experience of product quality. In other words, the tester should
not only cover "customer usage of the software system" but also cover the "quality
risks of the software system".

Test cases that cover the most important quality risks, requirements and functions
are assigned the highest priority to achieve high "fidelity".

Test coverage encompasses functional and behavioral coverage of the operations
and other uses of the system to which the customer base as a whole is likely to
subject it.

Functional Coverage: Functional coverage requires testing "what the system
does", "what it does not do", "what it should not do" and "what it should do".
Functional coverage can be determined by establishing traceability through the
linkage between the application's functions, test requirements, test specifications,
test suites and individual test cases.

The following measurements can be considered for functional coverage analysis:

• No. of requirements/functions covered
• No. of use cases/use case paths covered
• No. of user interfaces/user interface paths covered

Quality Risk Coverage: Apart from testing some functional aspect of the system,
a test case may also test the quality risk (failure modes) associated with it, directly
or indirectly. Testing should look for situations in which the software system fails
to meet customers' reasonable expectations in particular areas.

The quality risk coverage provided by each test case needs to be specified by
assigning a numeric value:

0 – The test case does not address a quality risk
1 – The test case provides some level of indirect coverage for the quality risk
2 – The test case provides direct and significant coverage for the quality risk

When you total the numbers assigned to test cases by quality risk category and by
test suite, you can measure, respectively, whether you are covering a particular risk
and whether the tests are providing an adequate return on investment. You need to
relate these numbers to the risk priority numbers. High priority numbers should
correspond to high risks, low priority numbers to low risks.
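
A minimal sketch of such totalling, with hypothetical test cases and risk categories:

    coverage = [
        # (test case, test suite, quality risk, coverage value 0/1/2)
        ("TC-101", "SUITE-12", "performance", 2),
        ("TC-102", "SUITE-12", "performance", 1),
        ("TC-201", "SUITE-13", "security",    0),
    ]

    by_risk, by_suite = {}, {}
    for case, suite, risk, value in coverage:
        by_risk[risk] = by_risk.get(risk, 0) + value
        by_suite[suite] = by_suite.get(suite, 0) + value

    print(by_risk)   # is each quality risk adequately covered?
    print(by_suite)  # is each suite providing a return on investment?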


Configuration Coverage: Configuration coverage addresses the possible
combinations of hardware, software and networking environments for which the
application needs to be tested.

There could be a large or infinite number of configurations that can be set up to
test the system. It is necessary to focus only on the "key configurations"
prescribed by the customer. Factors to consider in this decision include customer
usage and the risk to the product if that particular item does not work.

You need to use every opportunity to increase test configuration coverage through
careful use of test cycles. By reshuffling the configuration used with each test in
each cycle, you can get even closer to complete coverage.

Code, Path and Branch Coverage: Code coverage addresses whether all the
lines of code in a program/class/component have been tested.

Path coverage indicates whether all the program flow paths have been tested.
Branch coverage addresses whether each simple condition (a complex condition is
constructed using a set of simple conditions) and both its branches (True/False)
have been tested.
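
As an illustration, the function below has two simple conditions; branch coverage requires exercising both the True and False outcome of each. A coverage tool such as coverage.py (run with its --branch option) can confirm this; the tool choice is an assumption, not prescribed by the text.

    def discount(amount, is_member):
        percent = 0
        if amount > 100:   # branch 1: both outcomes needed
            percent += 5
        if is_member:      # branch 2: both outcomes needed
            percent += 10
        return amount * (100 - percent) / 100

    # Four calls cover both branches of both conditions:
    assert discount(50, False) == 50.0
    assert discount(150, False) == 142.5
    assert discount(50, True) == 45.0
    assert discount(150, True) == 127.5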

Code coverage, path coverage and branch coverage measures are normally used at
the Unit testing level.

