Metrics and Models in Software Testing

Need for Software Metrics
How do we measure the progress of testing?
Why do we devote more time to testing a particular module?
Who is responsible for the selection of a poor test suite?
How many faults do we expect during testing?
How much time and resources are required for software testing?
AND MANY SUCH QUESTIONS NEED AN ANSWER!

Software Metrics
Software metrics can be defined as the continuous application of measurement-based
techniques to the software development process and its products to supply meaningful
and timely management information, together with the use of those techniques to
improve that process and its products.

Measure, Measurement and Metrics
Measure: A measure provides a quantitative indication of the extent, amount,
dimension, capacity or size of some attribute of a product or process.
Example: the number of failures experienced during testing.

Measurement: Measurement is the act of determining a measure.
Example: a way of recording such failures.

Metric: A metric is a quantitative measure of the degree to which a product or
process possesses a given attribute.
Example: the average number of failures experienced per hour during testing.

Applications of Software Metrics
Applications can be found at every phase of the software development life cycle:

Requirement and Analysis Phase
Design Phase (e.g., use case to design metrics)
Implementation Phase
Testing Phase

Categories of Metrics
Product Metrics: Product metrics describe the characteristics of the product, such
as size, complexity, design features, performance, efficiency, reliability and
portability.

Process Metrics: Process metrics describe the effectiveness and quality of the
processes that produce the software.

Product Metrics for Testing
Basic Metrics:
I. Number of failures experienced in a time interval.
II. Time interval between failures.
III. Cumulative failures experienced up to a specified time.
IV. Time of failure.
V. Estimated time of testing.
VI. Actual testing time.

Product Metrics for Testing
Additional Metrics:
I. % of time spent = (Actual time spent / Estimated testing time) × 100.
II. Average time between failures.
III. Maximum and minimum failures experienced in a time interval.
IV. Average number of failures experienced in time intervals.
V. Time remaining to complete the testing.
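
These derived metrics follow directly from the basic ones. A minimal sketch in
Python, assuming the raw measurements are available as plain numbers and a list of
failure times (the sample values are illustrative):

```python
# Derived product metrics for testing (illustrative sketch).

def percent_time_spent(actual_time, estimated_time):
    """% of time spent = (actual time spent / estimated testing time) * 100."""
    return actual_time / estimated_time * 100

def average_time_between_failures(failure_times):
    """Mean gap between consecutive failure times (same unit as input)."""
    gaps = [t2 - t1 for t1, t2 in zip(failure_times, failure_times[1:])]
    return sum(gaps) / len(gaps)

print(percent_time_spent(actual_time=90, estimated_time=120))  # 75.0
print(average_time_between_failures([12, 26, 35, 38, 50]))     # 9.5
```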

Process Metrics for Testing
Basic process metrics:
I. Number of test cases designed.
II. Number of test cases executed.
III. Number of test cases passed.
IV. Number of test cases failed.
V. Test case execution time.
VI. Total execution time etc.

Process Metrics for Testing
Additional Metrics:
I. % of test cases executed.
II. % of test cases passed.
III. % of test cases failed.
IV. Total actual execution time/ total estimated execution time.
V. Average execution time of a test case.
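
A small sketch showing how the additional process metrics are computed from the
basic counts; all counts and times below are assumed sample values:

```python
# Additional process metrics derived from basic test-case counts (sample values).
designed, executed, passed, failed = 120, 100, 88, 12
total_exec_time, estimated_exec_time = 350.0, 400.0  # minutes, assumed

pct_executed  = executed / designed * 100   # % of test cases executed
pct_passed    = passed / executed * 100     # % of test cases passed
pct_failed    = failed / executed * 100     # % of test cases failed
time_ratio    = total_exec_time / estimated_exec_time
avg_exec_time = total_exec_time / executed  # average execution time per test case

print(pct_executed, pct_passed, pct_failed, time_ratio, avg_exec_time)
```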

Object Oriented Metrics
Metrics defined for structured programming are insufficient for the object-oriented
environment. Object-oriented metrics capture attributes of a software product and
are a promising approach towards quality assessment; they are needed to evaluate
the quality of object-oriented software. They fall into four categories:

Coupling Metrics, Cohesion Metrics, Inheritance Metrics and Size Metrics.

Coupling Metrics
Coupling metrics capture information about attribute usage and method invocations
of other classes. Higher values of coupling metrics indicate that a higher number
of stubs is required during testing.

Coupling Between Objects (CBO)
Data Abstraction Coupling (DAC)
Message Passing Coupling (MPC)
Response for a Class (RFC)
Information Flow-Based Coupling (ICP)
Information Flow-Based Inheritance Coupling (IHICP)
Information Flow-Based Non-Inheritance Coupling (NIHICP)
Fan-in
Fan-out

Cohesion Metrics
Cohesion metrics capture information about attribute usage and method invocations
within a class. Highly cohesive classes are easy to test.

Lack of Cohesion of Methods (LCOM)
Tight Class Cohesion (TCC)
Loose Class Cohesion (LCC)
Information-Based Cohesion (ICH)

Inheritance Metrics
Inheritance metrics capture information about methods overridden, inherited and
added. The higher the depth of the inheritance tree, the more complex the design,
and the testing effort increases accordingly.

Number of Children (NOC)
Depth of Inheritance Tree (DIT)
Number of Parents (NOP)
Number of Descendants (NOD)
Number of Ancestors (NOA)
Number of Methods Overridden (NMO)
Number of Methods Inherited (NMI)
Number of Methods Added (NMA)

Size Metrics
Size metrics indicate the length of a class. The more methods a class has, the
greater its complexity, and the more rigorous the testing must be.

Number of Attributes per Class (NA)
Number of Methods per Class (NM)
Weighted Methods per Class (WMC)
Number of Public Methods (PM)
Number of Non-Public Methods (NPM)
Lines of Code (LOC)

Test Metrics
Test metrics are indicators of the effectiveness and efficiency of a software testing process.

These consist of:
a. Time
b. Quality of source code
c. Source code coverage
d. Test case defect density
e. Review efficiency

Test Metric: Time
Testing with respect to time helps in measuring:
a. Time required to run a test case.
b. Total time required to run a test suite.
c. Time available for testing.
d. Time interval between failures.
e. Cumulative failures experienced up to a given time.
f. Time of failure.
g. Failures experienced in a time interval.

Time-based failure specification:

Failure no.   Failure time (minutes)   Failure interval (minutes)
1             12                       12
2             26                       14
3             35                       9
4             38                       3
5             50                       12

Failure-based failure specification:

Time (minutes)   Cumulative failures   Failures in interval of 20 minutes
20               1                     1
40               4                     3
60               5                     1
80               5                     0
100              8                     3
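
Both specifications can be derived from the same raw failure times. A sketch using
the five failure times listed above (the second table evidently continues beyond
these five observations, so only its first rows are reproduced):

```python
# Derive both failure specifications from raw failure times (in minutes).
# Only the first five failure times appear on the slide.
failure_times = [12, 26, 35, 38, 50]

# Time-based specification: interval between consecutive failures.
intervals = [t - prev for prev, t in zip([0] + failure_times, failure_times)]
print(intervals)  # [12, 14, 9, 3, 12]

# Failure-based specification: failures counted in 20-minute windows.
for end in range(20, 101, 20):
    cumulative = sum(1 for t in failure_times if t <= end)
    in_window = sum(1 for t in failure_times if end - 20 < t <= end)
    print(end, cumulative, in_window)
```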

Test Metric: Quality of Source Code (QSC)
QSC can be calculated after a reasonable time has passed since release, using the
formula:

QSC = (WDB + WDA) / S

WDB: number of weighted defects found before release.
WDA: number of weighted defects found after release.
S: size of the source code (KLOC).
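
A sketch of the computation. The slides do not say how defects are weighted, so
the severity-based weighting below is an assumption:

```python
# QSC = (WDB + WDA) / S, with defects weighted by severity (assumed scheme).
severity_weights = {"high": 10, "medium": 3, "low": 1}  # assumed weights

def weighted_defects(severities):
    """severities: list of labels such as ['high', 'low', ...]."""
    return sum(severity_weights[s] for s in severities)

wdb = weighted_defects(["high", "medium", "medium", "low"])  # before release
wda = weighted_defects(["medium", "low"])                    # after release
size_kloc = 12.5                                             # size in KLOC

qsc = (wdb + wda) / size_kloc
print(round(qsc, 2))  # 1.68 weighted defects per KLOC; lower is better
```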

Test Metric: Source Code Coverage
The percentage of source code coverage is given by:

Source code coverage (%) = (Number of statements of the source code covered by the
test suite / Total number of statements of the source code) × 100
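
A minimal sketch, representing statements by line numbers (an assumption for
illustration; in practice a tool such as coverage.py records which statements a
test suite executes):

```python
# Statement coverage: covered statements / total statements * 100.
all_statements = set(range(1, 201))  # 200 executable statements (assumed)
covered = set(range(1, 181))         # statements executed by the test suite

coverage_pct = len(covered & all_statements) / len(all_statements) * 100
print(coverage_pct)  # 90.0
```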

Test Metric: Test Case Defect Density
Test case defect density is given by:

Test case defect density (%) = (Number of failed test cases / Number of executed
test cases) × 100
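
The same formula as a one-line computation, with assumed counts:

```python
# Test case defect density: failed / executed * 100.
failed_test_cases, executed_test_cases = 12, 100

defect_density_pct = failed_test_cases / executed_test_cases * 100
print(defect_density_pct)  # 12.0 -> 12% of executed test cases failed
```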

Test Metric: Review Efficiency
Review efficiency is given by:

Review efficiency (%) = (Total number of defects found during review / Total number
of project defects) × 100
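
And the corresponding computation for review efficiency, again with assumed counts:

```python
# Review efficiency: defects found during review / total project defects * 100.
defects_found_in_review = 30
total_project_defects = 75  # review defects plus those found in testing and later

review_efficiency_pct = defects_found_in_review / total_project_defects * 100
print(review_efficiency_pct)  # 40.0 -> 40% of all defects were caught in review
```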

Software Quality Attributes Prediction Models
Software quality depends on many attributes, such as reliability, maintainability,
fault proneness, testability and complexity. Models for predicting these quality
attributes include:

Reliability Models:
  Basic Execution Time Model
  Logarithmic Poisson Execution Time Model
  The Jelinski-Moranda Model
Fault Prediction Model
Maintenance Effort Prediction Model

Reliability Models
Reliability models emphasize failures rather than faults. Reliability is the
probability of failure-free operation of software for a given time under specified
conditions. Models based on execution time give better results than those based on
calendar time. These models are applicable at the system testing level, where, as
faults are detected and removed, the reliability of a program increases and its
failure intensity decreases.

Basic Execution Time Model
Developed by J.D. Musa in 1979, this model is based on execution time. It assumes
that failures occur according to a Non-Homogeneous Poisson Process (NHPP) during
testing. Many real-world event streams can be modelled as Poisson processes.
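
In the basic model the failure intensity declines exponentially with execution
time. A sketch of the standard equations, where lambda0 is the initial failure
intensity and nu0 the total expected number of failures (the sample parameter
values are assumed):

```python
import math

def basic_model(tau, lambda0, nu0):
    """Musa's basic execution time model.

    Returns (mean failures experienced by time tau, failure intensity at tau):
    mu(tau)     = nu0 * (1 - exp(-lambda0 * tau / nu0))
    lambda(tau) = lambda0 * exp(-lambda0 * tau / nu0)
    """
    mu = nu0 * (1 - math.exp(-lambda0 * tau / nu0))
    lam = lambda0 * math.exp(-lambda0 * tau / nu0)
    return mu, lam

mu, lam = basic_model(tau=10.0, lambda0=5.0, nu0=100.0)  # assumed values
print(round(mu, 1), round(lam, 2))  # ~39.3 failures, intensity ~3.03
```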

Logarithmic Poisson Execution Time Model
This model is obtained by a slight modification of the basic model's failure
intensity function. For large execution times, the logarithmic Poisson model may
give larger values of failure intensity than the basic model.
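
For comparison, the usual form of the logarithmic Poisson model, where theta is
the failure intensity decay parameter (sample values assumed):

```python
import math

def log_poisson_model(tau, lambda0, theta):
    """Logarithmic Poisson execution time model.

    mu(tau)     = ln(lambda0 * theta * tau + 1) / theta
    lambda(tau) = lambda0 / (lambda0 * theta * tau + 1)
    """
    mu = math.log(lambda0 * theta * tau + 1) / theta
    lam = lambda0 / (lambda0 * theta * tau + 1)
    return mu, lam

mu, lam = log_poisson_model(tau=10.0, lambda0=5.0, theta=0.025)  # assumed values
print(round(mu, 1), round(lam, 2))  # ~32.4 failures, intensity ~2.22
```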

The Jelinski-Moranda Model
This is the earliest and simplest reliability model. It assumes that all faults
contribute equally to the failure rate, which therefore decreases by a constant
amount each time a fault is detected and removed.
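
The model's hazard rate is usually stated as lambda_i = phi * (N - i + 1), where N
is the initial number of faults and phi the contribution of each fault; the
parameter values below are assumed:

```python
def jm_failure_rate(i, n_faults, phi):
    """Jelinski-Moranda hazard rate before the i-th failure:
    proportional to the number of faults still remaining."""
    return phi * (n_faults - i + 1)

# Assumed: 30 initial faults, phi = 0.02 failures per unit time per fault.
for i in (1, 10, 30):
    print(i, jm_failure_rate(i, n_faults=30, phi=0.02))
# The rate drops by a fixed step phi as each fault is found and removed.
```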

Fault Prediction Model
A fault prediction model can be used to identify classes that are prone to severe
faults. The model helps focus testing on those parts of the system that are likely
to cause serious failures.

Maintenance Effort Prediction Model
Maintenance effort is an important cost factor for software developers. A model
based on an Artificial Neural Network (ANN) has been used to predict maintenance
effort.

THE END
