
Software Engineering

Lecture - 1

Arup Kr. Chattopadhyay, Department of IT, IEM, Kolkata 1



• Testing
• Levels of Testing
• Integration Testing
• Test case specification


What is Testing?
Once source code has been generated, software must be tested to uncover as many errors as possible before delivery to the customer.

The goal is to design a series of test cases that have a high likelihood of finding errors.

Software testing techniques provide systematic guidance for designing tests that
(1) exercise the internal logic of software components, and
(2) exercise the input and output domains of the program to uncover errors in program function, behaviour, and performance.

Software testing is the process of executing a program or system with the intent of finding errors.

Who does it?
During the early stages of testing, a software engineer performs all tests. However, as the testing process progresses, testing specialists may become involved.

Testing
The aim of testing is to detect all the errors in a program or system.

As the input data domain of most programs is very large, it is not practical to test a program exhaustively with respect to all values the input domain can assume.

So it is difficult to guarantee completely error-free code even after testing.

Careful testing can nevertheless expose most of the errors and hence reduce defects.


Basic terminologies

Error
- a mistake committed by the development team during any of the development phases.
- also referred to as a fault, a bug, or a defect.

Failure
- the manifestation of an error.
- an error may not necessarily lead to a failure.

Test case
- represented as a triplet [I, S, O], where I is the input, S is the state, and O is the expected output.

Test suite
- the set of all test cases with which the system is to be tested.


Testing Activities

1. Test suite design
2. Running test cases and checking the results to detect failures
3. Debugging
   - identify the statements that are in error.
   - failure symptoms are analyzed to locate the errors.
4. Error correction


Why Design Test Cases?

When test cases are designed from random input data, many of the test cases do not add to the significance of the test suite, because they do not identify any additional error that is not already detected. Consider:

if (x > y) max = x;
else max = x;    /* the else branch should assign y */

The test suite {(x=3, y=2), (x=2, y=3)} can detect the error, whereas the larger test suite {(x=3, y=2), (x=4, y=3), (x=5, y=1)} does not.

A minimal test suite is a carefully designed set of test cases such that each test case helps detect a different error. This is in contrast to testing using random input values.

The two main approaches are:
•Black-box approach
•White-box (or glass-box) approach
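The claim about the two test suites can be checked mechanically. A minimal sketch (the function and helper names are illustrative, not from the slide):

```c
#include <assert.h>

/* The buggy function from the slide: the else branch should assign y. */
int buggy_max(int x, int y) {
    if (x > y) return x;
    else return x;          /* bug: should be "return y;" */
}

/* Correct reference implementation. */
int correct_max(int x, int y) {
    return (x > y) ? x : y;
}

/* Returns 1 if the test case (x, y) exposes the bug (outputs differ). */
int exposes_bug(int x, int y) {
    return buggy_max(x, y) != correct_max(x, y);
}
```

Only (x=2, y=3) makes the buggy and correct versions disagree; the larger suite never exercises the x <= y case and therefore misses the error.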


Levels of Testing
• Unit testing
• Integration testing
• System testing

During unit testing, the individual components (or units) of a program are tested.

After all the units have been tested individually, they are integrated step by step and tested after each step of integration (integration testing).

Finally, the fully integrated system is tested (system testing).


Unit Testing
Unit testing is undertaken after a module has been coded and reviewed.

Before carrying out unit testing, the unit test cases have to be designed and the test environment for the unit under test has to be developed.

Drivers and Stub Modules
In order to test a single module, we need a complete environment that provides all the code necessary for execution of the module:
- the procedures belonging to other modules that the module under test calls,
- non-local data structures that the module accesses,
- a procedure to call the functions of the module under test with appropriate parameters.


Unit Testing

Stub
•A stub is a dummy procedure that has the same I/O parameters as the given procedure but a highly simplified behaviour.
•For example, a stub procedure may produce the expected behaviour using a simple look-up mechanism.

Driver
•A driver module should contain the non-local data structures accessed by the module under test.
•It should also have the code to call the different functions of the module under test with appropriate parameter values for testing.
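As a sketch of how a stub and a driver fit together, consider a hypothetical payroll module whose collaborator module is not yet available (all names here are invented for illustration):

```c
#include <assert.h>

/* Interface of a procedure that really belongs to another, not yet
 * available, module. */
int get_rate(int employee_id);

/* Module under test: it calls get_rate() from the other module. */
int gross_pay(int employee_id, int hours) {
    return get_rate(employee_id) * hours;
}

/* Stub: same I/O parameters as the real get_rate(), but a highly
 * simplified behaviour -- a fixed look-up instead of a database query. */
int get_rate(int employee_id) {
    return (employee_id == 1) ? 20 : 10;
}

/* Driver: calls the module under test with appropriate parameter
 * values and checks the results. Returns 0 on success. */
int run_driver(void) {
    assert(gross_pay(1, 8) == 160);   /* rate 20 from the stub */
    assert(gross_pay(2, 8) == 80);    /* rate 10 from the stub */
    return 0;
}
```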


Black-box Testing

In black-box testing, test cases are designed from an examination of the input/output values only; no knowledge of design or code is required. Two standard techniques are:

•Equivalence class partitioning
•Boundary value analysis


Equivalence class partitioning

The domain of input values to the program under test is partitioned into a set of equivalence classes such that, for every input value belonging to the same equivalence class, the program behaves similarly.

Once equivalence classes of input data have been defined, testing the code with any one value belonging to an equivalence class is as good as testing it with any other value belonging to the same class.

General guidelines for designing the equivalence classes:

1. If the input data values to the system are specified by a range of values, then one valid and two invalid equivalence classes need to be defined. For example, if the integer input values range from 1 to 10, the equivalence classes are {1, ..., 10}, {-∞, ..., 0} and {11, ..., +∞}.

2. If the input data set consists of discrete values, then one equivalence class for valid inputs and another for invalid inputs should be defined. For example, if the input set is {A, B, C}, the equivalence classes are {A, B, C} and U - {A, B, C}.

Equivalence class partitioning

Example: Consider software that computes the square root of an input integer that can assume values in the range 0 to 5000. Determine the equivalence class test suite.

The three equivalence classes are:
• the set of negative integers,
• the set of integers in the range 0 to 5000,
• the set of integers larger than 5000.

A possible test suite is {-5, 500, 6000}.
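A possible implementation sketch that makes the three classes concrete (the rejection value -1 for invalid inputs is an assumption, not part of the specification):

```c
#include <assert.h>

/* Integer square root for the specified input range 0..5000.
 * Out-of-range inputs (the two invalid classes) are rejected with -1. */
int int_sqrt(int n) {
    if (n < 0 || n > 5000) return -1;   /* invalid equivalence classes */
    int r = 0;
    while ((r + 1) * (r + 1) <= n)      /* largest r with r*r <= n */
        r++;
    return r;
}
```

The test suite {-5, 500, 6000} exercises one representative from each class.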


Equivalence class partitioning

Example: Design the equivalence classes for a program that reads two integer pairs (m1, c1) and (m2, c2) defining straight lines of the form y = mx + c. The program computes and displays the intersection point of the two straight lines.

The three equivalence classes are:
•Parallel lines (m1 = m2, c1 != c2)
•Intersecting lines (m1 != m2)
•Coincident lines (m1 = m2, c1 = c2)

The required test suite can be {(2, 2)(2, 5), (5, 5)(7, 7), (10, 10)(10, 10)}.
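The three classes can be made concrete with a small classifier (a sketch; the enum and function names are illustrative):

```c
#include <assert.h>

enum line_relation { PARALLEL, INTERSECTING, COINCIDENT };

/* Classifies the two lines y = m1*x + c1 and y = m2*x + c2 into the
 * three equivalence classes above. */
enum line_relation classify(int m1, int c1, int m2, int c2) {
    if (m1 != m2) return INTERSECTING;      /* slopes differ */
    return (c1 == c2) ? COINCIDENT : PARALLEL;
}
```

The test suite puts one pair of lines into each class.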


Equivalence class partitioning

Example: Design the equivalence classes for a function that reads a character string of size less than five characters and displays whether it is a palindrome.

Design of equivalence classes (inferred from the test suite below):
• valid strings (fewer than five characters) that are not palindromes,
• valid strings that are palindromes,
• invalid strings of five or more characters.

The required test suite can be {abc, aba, abcdef}.
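A sketch of such a function, with the return convention (1 palindrome, 0 not, -1 invalid input) chosen purely for illustration:

```c
#include <assert.h>
#include <string.h>

/* Returns 1 if s is a palindrome, 0 if it is not, and -1 for invalid
 * input (length of five or more), matching the three classes above. */
int check_palindrome(const char *s) {
    size_t len = strlen(s);
    if (len >= 5) return -1;                       /* invalid class */
    for (size_t i = 0; i < len / 2; i++)
        if (s[i] != s[len - 1 - i]) return 0;      /* mismatch found */
    return 1;
}
```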


Boundary Value Analysis (BVA)

A frequently committed type of error is missing the special consideration that should be given to values at the boundaries of different equivalence classes. For example, programmers may use < instead of <=.

Boundary value analysis-based test suite design involves designing test cases using the values at the boundaries of the different equivalence classes.

- If an input condition specifies a range bounded by values a and b, test cases should be designed with values just above and just below both a and b.

- If an input condition specifies a number of values, test cases should be designed that exercise the minimum and maximum numbers; values just below the minimum and just above the maximum are also tested.


BVA Examples

Example: If an equivalence class contains the integers in the range 1 to 10, then the boundary value test suite is {0, 1, 10, 11}.

Example: For a function that computes the square root of integer values in the range 0 to 5000, determine the boundary value test suite.

There are three equivalence classes:
- the set of negative integers,
- the set of integers in the range 0 to 5000,
- the set of integers greater than 5000.

The boundary value-based test suite is {0, -1, 5000, 5001}.


BVA test case – co-ordinates
(figure omitted: plot of the boundary-value test points for two input variables)

Robustness test cases
(figure omitted: robustness testing additionally uses values just outside the valid range)

Worst cases of test cases
(figure omitted: worst-case testing uses all combinations of the boundary values of the input variables)

BVA – Case study

In a module written to register candidates for the army, the following conditions were to be met:
- age a between 18 and 24 (18 < a < 24)
- height h between 150 and 180 cm (150 < h < 180)

What are the different test cases to test the module?
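A sketch of the module's eligibility check, taken literally from the strict inequalities above, together with the boundary values those inequalities suggest:

```c
#include <assert.h>

/* Eligibility check exactly as specified:
 * 18 < a < 24 and 150 < h < 180 (both bounds strict). */
int eligible(int age, int height) {
    return (age > 18 && age < 24) && (height > 150 && height < 180);
}
```

BVA test cases probe each variable at, just inside, and just outside its boundaries while the other variable is held at a nominal value.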


White-Box Testing

Coverage-based Testing
- attempts to execute (or cover) certain elements of the program.
- examples are statement coverage, branch coverage, and path coverage-based testing.

Fault-based Testing
- targets the detection of certain types of faults. The faults that a test strategy focuses on constitute the fault model of the strategy.
- an example is mutation testing.


Testing criterion for coverage-based testing

The set of specific program elements that a testing strategy requires to be executed is called the testing criterion of the strategy.

Stronger versus weaker

A white-box testing strategy is said to be stronger than another strategy if all types of program elements covered by the second strategy are also covered by the first, and the first additionally covers some types of elements not covered by the second.


Stronger versus weaker

(figure omitted, illustrating two cases: A is a stronger testing strategy than B; A and B are complementary testing strategies)


Statement Coverage

• Aims to design test cases so as to execute every statement of the program at least once.

• The principal idea governing the statement coverage strategy is that unless a statement is executed, there is no way to determine whether an error exists in that statement.

int computeGCD(int x, int y)
{
    while (x != y) {
        if (x > y)
            x = x - y;
        else
            y = y - x;
    }
    return x;
}

The condition expression of the while statement needs to be made true, and the conditional expression of the if statement needs to be made both true and false. By choosing the test set {(x = 3, y = 3), (x = 4, y = 3), (x = 3, y = 4)}, all statements of the program are executed at least once.
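The statement coverage test set can be run against the example directly (a cleaned-up, compilable version of the slide's code):

```c
#include <assert.h>

/* Euclid's subtraction-based GCD, as on the slide. */
int computeGCD(int x, int y) {
    while (x != y) {
        if (x > y)
            x = x - y;   /* reached only when x > y */
        else
            y = y - x;   /* reached only when x < y */
    }
    return x;
}
```

Each of the three test cases forces a different part of the control flow: (3, 3) makes the while-condition false immediately, (4, 3) takes the if-branch, and (3, 4) takes the else-branch.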


Branch Coverage

• Test cases are designed so as to make each branch condition in the program assume true and false values in turn.

• Branch testing is also known as edge testing, since in this scheme each edge of the program's control flow graph is traversed at least once.

• For the GCD program, the test cases can be {(x = 3, y = 3), (x = 3, y = 2), (x = 4, y = 3), (x = 3, y = 4)}.


Branch coverage-based testing is stronger than statement coverage-based testing.

Proof: We need to show that (i) branch coverage ensures statement coverage, and (ii) statement coverage does not ensure branch coverage.

(i) Branch testing guarantees statement coverage, since every statement must belong to some branch (assuming that there is no unreachable code).

(ii) To show that statement coverage does not ensure branch coverage, it is sufficient to give an example of a test suite that achieves statement coverage but does not cover at least one branch. Consider the following code and the test suite {5}:

if (x > 2) x += 1;

The test suite achieves statement coverage. However, it does not achieve branch coverage, since the condition (x > 2) is never made false.
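The argument can be demonstrated with a lightly instrumented version of the snippet (the instrumentation flags and helper names are illustrative):

```c
#include <assert.h>

static int true_taken, false_taken;   /* simple branch instrumentation */

int f(int x) {
    if (x > 2) { true_taken = 1; x += 1; }   /* the only real statement */
    else       { false_taken = 1; }          /* empty false branch      */
    return x;
}

/* Does the one-test-case suite {a} make the condition both true and false? */
int covered_by1(int a) {
    true_taken = false_taken = 0;
    f(a);
    return true_taken && false_taken;
}

/* Same question for the two-test-case suite {a, b}. */
int covered_by2(int a, int b) {
    true_taken = false_taken = 0;
    f(a); f(b);
    return true_taken && false_taken;
}
```

The suite {5} executes every statement yet never makes the condition false; adding a value such as 1 closes the gap.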


Condition Coverage

• Test cases are designed to make each component of a composite conditional expression assume both true and false values.

• For example, in the conditional expression ((c1 .and. c2) .or. c3), the components c1, c2 and c3 must each assume both true and false values.

• For a composite conditional expression of n components, 2^n test cases are required for condition coverage. Thus the number of test cases increases exponentially with the number of component conditions, and condition coverage-based testing is practical only when n is small.
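A sketch that enumerates all 2^3 = 8 combinations for the example condition, written with C's && and || in place of .and. and .or.:

```c
#include <assert.h>

/* The composite condition ((c1 && c2) || c3) from the slide. */
int composite(int c1, int c2, int c3) {
    return (c1 && c2) || c3;
}

/* Condition coverage for n = 3 components needs all 2^3 = 8
 * combinations; this helper walks them and counts the true outcomes. */
int count_true_outcomes(void) {
    int trues = 0;
    for (int bits = 0; bits < 8; bits++)   /* each bit pattern is one test case */
        trues += composite(bits & 1, (bits >> 1) & 1, (bits >> 2) & 1);
    return trues;
}
```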


Path Coverage

• Requires designing test cases such that all linearly independent paths (basis paths) in the program are executed at least once.

• A linearly independent path can be defined in terms of the control flow graph (CFG) of a program.


Control Flow Graph (CFG)

• A control flow graph describes the sequence in which the different instructions of a program get executed.

• Drawing a CFG:
  - arrows on the flow graph indicate edges,
  - each circle represents one or more procedural statements,
  - areas bounded by edges and nodes are called regions.


Control Flow Graph (CFG)

Sequence:
1. a = 5;
2. b = a * 2 - 1;

Selection:
1. if (a > b)
2.     c = 3;
3. else c = 5;
4. c = c * c;

Iteration:
1. a = 0;
2. while (a > b) {
3.     b = b - 1;
4.     b = b * a; }
5. c = a + b;

Control Flow Graph (CFG)
Control flow graph for an example program:

int compute_gcd(int x, int y) {
1.  while (x != y) {
2.      if (x > y)
3.          x = x - y;
4.      else y = y - x;
5.  }
6.  return x; }
Path

• A path through a program is a node and edge sequence from the start node to an end node of the control flow graph of the program.

• A program can have more than one terminal node when it contains multiple exit or return statements.

• Writing test cases to exercise all paths is impractical, since there can be an infinite number of paths.

• For this reason, path coverage-based testing has been designed to cover not all paths but only a subset of paths called linearly independent paths (or basis paths).
Linearly independent set of paths (or basis path set)

• A set of paths is called a linearly independent set of paths if each path in the set introduces at least one new edge that is not included in any other path in the set.

• If a set of paths is linearly independent, then no path in the set can be obtained through linear operations (i.e., additions or subtractions) on the other paths in the set. [McCabe, 76]

• A set of paths is linearly independent if, for any path in the set, its subpath cannot be a member of the set.

• For simple programs it is straightforward to find the linearly independent paths by observation, but for complex programs it is difficult.

• McCabe's cyclomatic complexity defines an upper bound on the number of linearly independent paths.

• Though McCabe's metric does not directly identify the linearly independent paths, it provides a practical way of determining approximately how many paths to look for.
McCabe's Cyclomatic Complexity

• Defines an upper bound on the number of linearly independent paths in a program.

Method 1
Given a control flow graph G, the cyclomatic complexity V(G) can be computed as:
V(G) = E - N + 2
where N is the number of nodes and E is the number of edges.

For the GCD example, E = 7 and N = 6. Hence, V(G) = 7 - 6 + 2 = 3.


McCabe's Cyclomatic Complexity

Method 2
V(G) = total number of non-overlapping bounded areas + 1
where any region enclosed by nodes and edges is called a bounded area.

For the GCD example, the number of bounded areas is 2, so V(G) = 2 + 1 = 3.

Method 3
If N is the number of decision and loop statements of a program, then McCabe's metric is equal to N + 1.
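The three methods reduce to one-line formulas; for the GCD control flow graph they all agree (a trivial sketch, with the counts taken from the slides):

```c
#include <assert.h>

/* Method 1: V(G) = E - N + 2 (edges, nodes). */
int vg_edges_nodes(int e, int n) { return e - n + 2; }

/* Method 2: V(G) = number of bounded areas + 1. */
int vg_regions(int regions) { return regions + 1; }

/* Method 3: V(G) = number of decision/loop statements + 1. */
int vg_predicates(int predicates) { return predicates + 1; }
```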
insertion_procedure (int a[], int p[], int N)
{
    int i, j, k;
    for (i = 0; i <= N; i++)
        p[i] = i;
    for (i = 2; i <= N; i++)
    {
        k = p[i]; j = 1;
        while (a[p[j-1]] > a[k]) {
            p[j] = p[j-1];
            j--;
        }
        p[j] = k;
    }
}

The same procedure, annotated with node numbers for drawing the CFG:

insertion_procedure (int a[], int p[], int N)
{
(1)  int i, j, k;
(2)  for ((2a) i = 0; (2b) i <= N; (2c) i++)
(3)      p[i] = i;
(4)  for ((4a) i = 2; (4b) i <= N; (4c) i++)
     {
(5)      k = p[i]; j = 1;
(6)      while (a[p[j-1]] > a[k]) {
(7)          p[j] = p[j-1];
(8)          j--;
         }
(9)      p[j] = k;
     }
}
For this graph, all three methods again agree:
- number of regions on the graph: 4
- number of predicates (shown in red on the graph) + 1: 3 + 1 = 4
- number of edges - number of nodes + 2: 14 - 12 + 2 = 4


Steps to carry out path coverage-based testing

Step 1: Draw the control flow graph.

Step 2: Determine the cyclomatic complexity V(G). This gives the minimum number of test cases required to achieve path coverage.

Step 3: Repeat
  - test using a randomly designed set of test cases,
  - perform dynamic analysis to check the path coverage achieved,
until at least 90% path coverage is achieved.
Other uses of McCabe's cyclomatic complexity metric

1. Estimating the structural complexity of code
The cyclomatic complexity of a program is a measure of its psychological complexity, i.e. the level of difficulty in understanding the program.

2. Estimating testing effort
It indicates the minimum number of test cases required to achieve path coverage.

3. Estimating program reliability
Experimental studies indicate that there exists a clear relationship between McCabe's metric and the number of errors latent in the code after testing.
Data Flow-based Testing

It selects the test paths of a program according to the definitions and uses of the different variables in the program.

Consider a program P. For a statement numbered S, let
DEF(S) = {X | statement S contains a definition of X}, and
USES(S) = {X | statement S contains a use of X}.

For the statement S: a = b + c; DEF(S) = {a} and USES(S) = {b, c}.


Data Flow-based Testing

The definition of a variable X at statement S is said to be live at statement S1 if there exists a path from statement S to statement S1 which does not contain any definition of X.

The definition-use chain (or DU chain) of a variable X is of the form [X, S, S1], where S and S1 are statement numbers, such that

X ∈ DEF(S) and X ∈ USES(S1),

and the definition of X in statement S is live at statement S1.

• The data flow testing strategy requires that every DU chain be covered at least once.

• Data flow testing is extremely useful for testing programs containing nested if and loop statements.
Mutation Testing

• The idea is to make a few arbitrary changes to a program at a time. Each time the program is changed, the changed program is called a mutated program (or mutant), and the change is effected by a mutation operator. For example, one mutation operator may randomly delete a program statement.

• A mutated program is tested against the original test suite of the program.

• If there exists at least one test case in the test suite for which a mutated program yields an incorrect result, then the mutant is said to be dead, since the error introduced by the mutation operator has been successfully detected by the test suite.

• If a mutant remains alive even after all the test cases have been exhausted, the test suite is enhanced to kill the mutant.
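A minimal sketch of the dead/alive distinction, using an invented mutation (the + operator replaced by *):

```c
#include <assert.h>

/* Original program under test. */
int add(int x, int y) { return x + y; }

/* Mutant produced by a mutation operator that replaced + with * . */
int add_mutant(int x, int y) { return x * y; }

/* Returns 1 if the test case (x, y) kills the mutant, i.e. the mutant's
 * output differs from the original's. */
int kills(int x, int y) { return add(x, y) != add_mutant(x, y); }
```

A test suite containing only (2, 2) leaves the mutant alive, because 2 + 2 and 2 * 2 coincide; enhancing the suite with (2, 3) kills it.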
Mutation Testing

Advantage:
• It can be automated to a great extent.

Disadvantages:
• It is computationally very expensive, since a large number of possible mutants can be generated.
• It is not suitable for manual testing.

At present, several test tools are available that automatically generate mutants for a given program.
White Box Testing

Definition: testing based on the internal specifications, with knowledge of how the system is constructed.

Derives test cases such that:
- all independent paths within a module have been exercised at least once,
- all logical decisions are exercised on their true and false sides,
- all loops are exercised at their boundary values,
- internal data structures are exercised to assure validity.
Black Box Testing

Definition: testing based on the external specifications, without knowledge of how the system is constructed.

Finds errors in:
- incorrect or missing functions,
- interface errors,
- behaviour or performance errors,
- initialization and termination errors.
Comparison between Whitebox and Blackbox Testing

• Whitebox: test cases are made up based on how the data is known to be processed by the program.
  Blackbox: test cases are made up based on the known requirements for input and output.

• Whitebox: requirements are as designed in the detailed design document or by inspection of the source code.
  Blackbox: requirements are as given in the functional specifications or business requirements.

• Whitebox: the tester needs explicit knowledge of the internal working of the items being tested.
  Blackbox: the tester does not need explicit knowledge of the internal working of the item being tested.

• Whitebox is more expensive; blackbox is less expensive.
• Whitebox requires the source code; blackbox requires an executable.
• Whitebox is more laborious; blackbox is less laborious.
Whitebox or Blackbox Testing?

Instances where whitebox testing is better than blackbox testing:
• logical errors
• memory overflows that would otherwise go undetected
• typographical errors

Instances where blackbox testing is better than whitebox testing:
• functional requirements not met
• integration errors
• incorrect parameters passed between functions


Software Testing Steps
UNIT TESTING

Unit testing focuses verification effort on the smallest unit of software design: the software component or module.

Using the component-level design description as a guide, important control paths are tested to uncover errors within the boundary of the module.

The unit test is white-box oriented, and the step can be conducted in parallel for multiple components.
Unit Test Considerations

The module interface is tested to ensure that information properly flows into and out of the program unit under test.

The local data structure is examined to ensure that data stored temporarily maintains its integrity during all steps in an algorithm's execution.

Boundary conditions are tested to ensure that the module operates properly at boundaries established to limit or restrict processing.

All independent paths (basis paths) through the control structure are exercised to ensure that all statements in a module have been executed at least once.

And finally, all error-handling paths are tested.


Unit Test Considerations

Common errors in computation are:
1. misunderstood or incorrect arithmetic precedence,
2. mixed-mode operations,
3. incorrect initialization,
4. precision inaccuracy,
5. incorrect symbolic representation of an expression.

Comparison and control flow are closely coupled to one another (i.e., a change of flow frequently occurs after a comparison). Test cases should uncover errors such as:
1. comparison of different data types,
2. incorrect logical operators or precedence,
3. expectation of equality when precision error makes equality unlikely,
4. incorrect comparison of variables,
5. improper or nonexistent loop termination,
6. failure to exit when divergent iteration is encountered,
7. improperly modified loop variables.
Unit Test Environment
Integration Testing

The primary objective of integration testing is to test the module interfaces, i.e. to verify that there are no errors in parameter passing when one module invokes the functionality of another module.

The following approaches can be used to develop the test plan:
• Big-bang approach
• Top-down approach
• Bottom-up approach
• Mixed (or sandwiched) approach


Big-bang integration testing

• All the modules making up a system are integrated in a single step. This technique is used only for small systems.

• The main problem is that once an error is found during integration testing, it is very difficult to localize, as the error may potentially lie in any of the modules.

• Debugging errors reported during big-bang integration testing is very expensive; hence the approach is not useful for large systems.
Bottom-up integration testing

• A large software system is often made up of several subsystems, and a subsystem can have several modules. Modules are integrated by moving upwards.

• In bottom-up integration testing, first the modules of each subsystem are integrated. Thus, the subsystems can be integrated separately and independently.

• Large software systems normally require several levels of subsystem testing; lower-level subsystems are successively combined to form higher-level systems.

• Test drivers are required.

• The disadvantage of bottom-up testing is the complexity that occurs when the system is made up of a large number of small subsystems at the same level. In the extreme case it is almost like big-bang testing.
A bottom-up integration strategy may be implemented with the following steps:

"Bottom-up integration testing, as its name implies, begins construction and testing with atomic modules."

1. Low-level components are combined into clusters (sometimes called builds) that perform a specific software subfunction.

2. A driver (a control program for testing) is written to coordinate test-case input and output.

3. The cluster is tested.

4. Drivers are removed and clusters are combined, moving upward in the program structure.
Bottom-up integration

Components are combined to form clusters 1, 2, and 3. Each of the clusters is tested
using a driver (shown as a dashed block). Components in clusters 1 and 2 are subordinate
to Ma. Drivers D1 and D2 are removed and the clusters are interfaced directly to Ma.
Similarly, driver D3 for cluster 3 is removed prior to integration with module Mb. Both Ma
and Mb will ultimately be integrated with component Mc, and so forth.
Top-down integration testing

• It starts with the root module. After the top-level skeleton has been tested, the modules in the layer immediately below the skeleton are combined with it and tested. (Modules are integrated by moving downwards.)

• Stubs replace lower-level modules.

• Integration is conducted in a depth-first or breadth-first manner.

• The disadvantage of the top-down approach is that, in the absence of the lower-level routines, it may become difficult to exercise the top-level routines in the desired manner, since the lower-level routines perform several low-level functions such as input/output operations.
The integration process is performed in a series of five steps:

"Top-down integration testing is an incremental approach to construction of program structure."

"Modules subordinate (and ultimately subordinate) to the main control module are incorporated into the structure in either a depth-first or breadth-first manner."

1. The main control module is used as a test driver, and stubs are substituted for all components directly subordinate to the main control module.

2. Depending on the integration approach selected (i.e., depth or breadth first), subordinate stubs are replaced one at a time with actual components.

3. Tests are conducted as each component is integrated.

4. On completion of each set of tests, another stub is replaced with the real component.

5. Regression testing may be conducted to ensure that new errors have not been introduced.
Top-down integration

Depth-first integration would integrate all components on a major control path of the structure. Selection of a major path is somewhat arbitrary and depends on application-specific characteristics. For example, selecting the left-hand path, components M1, M2, M5 would be integrated first. Next, M8 or (if necessary for proper functioning of M2) M6 would be integrated. Then the central and right-hand control paths are built.

Breadth-first integration incorporates all components directly subordinate at each level, moving across the structure horizontally. From the figure, components M2, M3, and M4 (a replacement for stub S4) would be integrated first. The next control level, M5, M6, and so on, follows.
Mixed integration testing

• It is a combination of top-down and bottom-up testing.

• Both stubs and drivers are required to be designed.

Regression Testing

• Each time a new module is added as part of integration testing, the software changes: new data flow paths are established, new I/O may occur, and new control logic is invoked. These changes may cause problems with functions that previously worked flawlessly.

• Regression testing is the re-execution of some subset of tests that have already been conducted, to ensure that changes have not propagated unintended side effects.

• Regression testing may be conducted manually, by re-executing a subset of all test cases, or using automated capture/playback tools.
The regression test suite (the subset of tests to be executed) contains three different classes of test cases:

• a representative sample of tests that will exercise all software functions,

• additional tests that focus on software functions that are likely to be affected by the change,

• tests that focus on the software components that have been changed.
Smoke Testing

"Smoke testing is an integration testing approach that is commonly used when 'shrink-wrapped' software products are being developed. It is designed as a pacing mechanism for time-critical projects, allowing the software team to assess its project on a frequent basis."

The smoke testing approach encompasses the following activities:

1. Software components that have been translated into code are integrated into a "build." A build includes all data files, libraries, reusable modules, and engineered components that are required to implement one or more product functions.

2. A series of tests is designed to expose errors that will keep the build from properly performing its function. The intent should be to uncover "show-stopper" errors that have the highest likelihood of throwing the software project behind schedule.

3. The build is integrated with other builds, and the entire product (in its current form) is smoke tested daily. The integration approach may be top down or bottom up.
Smoke testing provides a number of benefits when it is applied on complex, time-critical software engineering projects:

• Integration risk is minimized. Because smoke tests are conducted daily, incompatibilities and other show-stopper errors are uncovered early, thereby reducing the likelihood of serious schedule impact when errors are uncovered.

• The quality of the end product is improved. Because the approach is construction (integration) oriented, smoke testing is likely to uncover both functional errors and architectural and component-level design defects. If these defects are corrected early, better product quality will result.

• Error diagnosis and correction are simplified. Like all integration testing approaches, errors uncovered during smoke testing are likely to be associated with "new software increments"; that is, the software that has just been added to the build(s) is a probable cause of a newly discovered error.

• Progress is easier to assess. With each passing day, more of the software has been integrated and more has been demonstrated to work. This improves team morale and gives managers a good indication that progress is being made.

Phased versus Incremental Integration Testing

• Big-bang integration testing is carried out in a single integration step, but the other strategies carry out integration over several steps.

• In incremental integration testing, only one new module is added to the partially integrated system each time.

• In phased integration, a group of related modules is added to the partial system each time.

• Phased integration requires fewer integration steps than the incremental approach.

• When failures are detected, it is easier to debug the system using the incremental testing approach.

