
Software Reliability Growth Models

Submitted to: Mrs Meenakshi Nayer
Submitted by: Jasmin Kaur (12csb037)

Reliability
Reliability is usually defined as the probability that a system will operate without failure for a
specified time period under specified operating conditions.
Reliability is concerned with the time between failures or its reciprocal, the failure rate.

Software reliability
Software reliability describes the stability, or useful life, of a software system in terms of several properties. These properties include the trustworthiness of the software system, software cost, execution time, software stability, and so on.
Software reliability growth models can be used as an indication of the number of failures that may be encountered after the software has shipped, and thus as an indication of whether the software is ready to ship. These models use system test data to predict the number of defects remaining in the software.

Types of reliability models


There are essentially two types of software reliability models: those that attempt to predict software reliability from design parameters, and those that attempt to predict it from test data.
The first type, usually called "defect density" models, uses code characteristics such as lines of code, nesting of loops, external references, and inputs/outputs to estimate the number of defects in the software.
The second type, usually called "software reliability growth" models, attempts to statistically correlate defect detection data with known functions, such as an exponential function.

Software Reliability Growth Models
Most software reliability growth models have a parameter that relates to the total number of defects contained in a set of code.
If we know this parameter and the current number of defects discovered, we know how many defects remain in the code.

Knowing the number of residual defects helps us decide whether or not the code is
ready to ship and how much more testing is required if we decide the code is not
ready to ship.
It gives us an estimate of the number of failures that our customers will encounter
when operating the software.
This estimate helps us to plan the appropriate levels of support that will be required
for defect correction after the software has shipped and determine the cost of
supporting the software.
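As a minimal sketch of this residual-defect arithmetic, with hypothetical numbers (a fitted model estimating 120 total defects, 95 discovered so far in test):

```python
# Hypothetical figures, not from a real project: a growth model has
# estimated 120 total defects, and testing has discovered 95 of them.
estimated_total_defects = 120
defects_found_so_far = 95

# The defects our customers may still encounter after shipping.
residual_defects = estimated_total_defects - defects_found_so_far
print(residual_defects)  # 25
```

The residual count feeds directly into the ship/no-ship decision and the support-cost planning described above.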

Software Reliability Growth Model Data
1. Test Time Data-For a software reliability growth model developed during QA test, the
appropriate measure of time must relate to the testing effort. There are three possible candidates
for measuring test time:
calendar time
number of tests run
execution (CPU) time
2. Defect Data - Defects are commonly classified by severity, for example:
Major: Can tolerate the situation, but not for long. A solution is needed.
Critical: Intolerable situation. A solution is urgently needed.
3. Grouped Data - The number of failures and the amount of test time that occurred during a fixed interval, such as a week.
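Grouped data of this kind can be sketched as parallel lists of weekly test time and failure counts; the cumulative totals are what growth models are usually fitted to. All numbers below are hypothetical:

```python
# Hypothetical weekly grouped data: test time (hours, assumed units)
# and failures observed in each week.
weekly_hours = [40, 45, 38, 50]
weekly_failures = [12, 9, 6, 4]

# Cumulative failure counts: the series a growth model is fitted against.
cumulative_failures = []
total = 0
for f in weekly_failures:
    total += f
    cumulative_failures.append(total)

print(cumulative_failures)  # [12, 21, 27, 31]
```

Note the weekly counts themselves decrease, which is the reliability-growth pattern the models below try to capture.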

Software Reliability Growth Model Types
Software reliability growth models have been grouped into two classes: concave and S-shaped.
The most important property of both classes is that they have the same asymptotic behaviour, i.e., the defect detection rate decreases as the number of defects detected (and repaired) increases, and the total number of defects detected asymptotically approaches a finite value.
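The two shapes can be contrasted numerically. The sketch below compares a concave curve (the Goel-Okumoto form a(1 − e^(−bt)), described later) with a delayed S-shaped form; parameters are illustrative, not fitted:

```python
import math

# Illustrative parameters (not fitted to real data).
a, b = 100.0, 0.1

def concave(t):
    # Concave class: detection rate is highest at t = 0.
    return a * (1 - math.exp(-b * t))

def s_shaped(t):
    # A delayed S-shaped form: detection starts slowly, then accelerates.
    return a * (1 - (1 + b * t) * math.exp(-b * t))

print(concave(5.0) > s_shaped(5.0))  # concave rises faster early on
print(round(concave(500)), round(s_shaped(500)))  # both approach a = 100
```

Both curves flatten out at the same asymptote a, matching the shared asymptotic behaviour noted above.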

Functions in Software Reliability Growth Models
The mean value function expresses the average cumulative number of failures at each point in time.
The failure intensity function is the number of failures per unit time; it is computed as the derivative of the mean value function with respect to time.
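The derivative relationship between the two functions can be checked numerically. The sketch below uses an exponential mean value function with assumed (not fitted) parameters:

```python
import math

# Assumed illustrative parameters for an exponential mean value function.
a, b = 100.0, 0.05

def m(t):
    # Mean value function: average cumulative failures by time t.
    return a * (1.0 - math.exp(-b * t))

def lam(t):
    # Failure intensity: the derivative of m(t) with respect to t.
    return a * b * math.exp(-b * t)

# Central-difference check that lam(t) really is dm/dt.
t, h = 10.0, 1e-6
numeric_derivative = (m(t + h) - m(t - h)) / (2 * h)
print(abs(numeric_derivative - lam(t)) < 1e-4)  # True
```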

Various types of Models


1) Jelinski-Moranda Model: The Jelinski-Moranda model was first introduced as a software reliability growth model in Jelinski and Moranda (1972).
It is a continuous-time model with independently and identically distributed inter-failure times. The software failure rate, or hazard function, at any time is proportional to the current fault content of the program, and the distribution of the order statistics is exponential.
The main assumptions of the Jelinski-Moranda model are the following:
At the beginning of testing, there are n0 faults in the software code, with n0 an unknown but fixed number.
All faults are of the same type.
Repair of faults is immediate and perfect.
Faults are detected independently of each other.
The times between failures are exponentially distributed, with a parameter proportional to the number of remaining faults.
Each fault is equally dangerous with respect to the probability of its instantaneously causing a failure. Furthermore, the hazard rate of each fault does not change over time but remains constant at φ.

The failures are not correlated, i.e., given n0, the times between failures (t1, t2, ..., tn0) are independent.
Whenever a failure occurs, the fault that caused it is removed instantaneously and without introducing any new fault into the software.
The failure intensity function is the product of the inherent number of faults and the probability density of the time until activation of a single fault, i.e.:

λ(t) = n0 φ e^(−φt)

Therefore, the mean value function is

μ(t) = n0 (1 − e^(−φt))
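Under these assumptions, the hazard rate between successive failures steps down by φ each time a fault is removed. A minimal sketch with illustrative values of n0 and φ (not estimates from data):

```python
# Jelinski-Moranda sketch: with n0 initial faults and per-fault hazard phi,
# the failure rate before the i-th failure is phi * (n0 - i + 1), and the
# expected time to the i-th failure is its reciprocal.
# n0 and phi are illustrative values, not fitted estimates.
n0, phi = 10, 0.02

hazards = [phi * (n0 - i + 1) for i in range(1, n0 + 1)]
expected_gaps = [1.0 / z for z in hazards]

print(hazards[0])         # first hazard: phi * n0 (all faults present)
print(hazards[-1])        # last hazard: phi (one fault remaining)
print(expected_gaps[-1])  # expected gaps grow as faults are removed
```

The widening expected gaps between failures are exactly the "reliability growth" the model family is named for.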

2) Goel-Okumoto Model
This model, first proposed by Goel and Okumoto, is one of the most popular NHPP models in the field of software reliability modelling.
It is also called the exponential NHPP model. Assumptions (2), (3) and (4) of the Jelinski-Moranda model are also valid for the Goel-Okumoto model. Treating failure detection as a non-homogeneous Poisson process with an exponentially decaying rate function, the mean value function is hypothesized in this model as

m(t) = a (1 − e^(−bt))

and the intensity function of this model is given as

λ(t) = a b e^(−bt)

where a is the expected total number of faults to be eventually detected and b represents the fault detection rate.
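A short sketch of how the two Goel-Okumoto parameters behave, using assumed (not fitted) values of a and b:

```python
import math

# Assumed illustrative Goel-Okumoto parameters, not fitted from data.
a, b = 150.0, 0.1

def m(t):
    # Mean value function m(t) = a * (1 - exp(-b*t)).
    return a * (1.0 - math.exp(-b * t))

def lam(t):
    # Intensity function lam(t) = a * b * exp(-b*t).
    return a * b * math.exp(-b * t)

print(lam(0.0))   # initial intensity is a*b (about 15 here)
print(m(100.0))   # approaches a = 150 as test time grows
```

The concave shape is visible in the intensity: it is highest at t = 0 and decays monotonically, so no initial "learning" phase is modelled.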

3) Generalized Goel NHPP Model

To describe the situation in which the software failure intensity increases slightly at the beginning and then begins to decrease, Goel proposed a simple generalization of the Goel-Okumoto model with an additional parameter c.
The mean value function and intensity function are

m(t) = a (1 − e^(−b t^c))

λ(t) = a b c t^(c−1) e^(−b t^c)

where a is the expected total number of faults to be eventually detected, and b and c are parameters that reflect the quality of testing.
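The rise-then-fall behaviour of the intensity for c > 1 can be checked directly; parameters below are illustrative, not fitted:

```python
import math

# Illustrative Generalized Goel parameters; with c > 1 the intensity
# rises at first and then decreases (peak near t = sqrt(1/(b*c)) here).
a, b, c = 100.0, 0.01, 2.0

def lam(t):
    # lam(t) = a * b * c * t**(c-1) * exp(-b * t**c)
    return a * b * c * t ** (c - 1) * math.exp(-b * t ** c)

print(lam(1.0) < lam(5.0))   # True: intensity still rising early on
print(lam(5.0) > lam(30.0))  # True: intensity decreasing later
```

Setting c = 1 recovers the Goel-Okumoto intensity, which is why this is called a generalization.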

4) Inflected S-Shaped Model

This model addresses a technical limitation of the Goel-Okumoto model.
It was proposed by Ohba, and its underlying concept is that the observed software reliability growth becomes S-shaped if faults in a program are mutually dependent, i.e., some faults are not detectable before others are removed.

The mean value function is

m(t) = a (1 − e^(−bt)) / (1 + ψ e^(−bt)),  with ψ = (1 − r)/r

The parameter r is the inflection rate, which indicates the ratio of the number of detectable faults to the total number of faults in the software; a is the expected total number of faults to be eventually detected; b is the fault detection rate; and ψ is the inflection factor.
The corresponding intensity function, obtained by differentiating the mean value function, is

λ(t) = a b (1 + ψ) e^(−bt) / (1 + ψ e^(−bt))^2
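A minimal sketch of the inflected S-shaped mean value function, with assumed values of a, b, and r (not fitted from data):

```python
import math

# Illustrative parameters; r = 0.2 means only 20% of faults are
# detectable at the start, giving inflection factor psi = 4.
a, b, r = 100.0, 0.1, 0.2
psi = (1 - r) / r

def m(t):
    # m(t) = a * (1 - exp(-b*t)) / (1 + psi * exp(-b*t))
    e = math.exp(-b * t)
    return a * (1 - e) / (1 + psi * e)

print(m(0.0))        # 0.0: no faults detected at the start
print(round(m(1e6))) # approaches a = 100 eventually
```

With r = 1 the factor ψ is 0 and the curve reduces to the concave Goel-Okumoto form, which shows how fault dependence is what produces the S shape.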

5) Logistic Growth Curve Model

In general, software reliability tends to improve, and it can be treated as a growth process during the testing phase; the reliability growth occurs through the fixing of faults. Therefore, under some conditions, models developed to predict economic population growth can also be applied to predict software reliability growth. These models simply fit the cumulative number of detected faults at a given time with a function of known form. The logistic growth curve model is one of them, and it has an S-shaped curve. Its mean value function and intensity function are

m(t) = a / (1 + k e^(−bt))

λ(t) = a b k e^(−bt) / (1 + k e^(−bt))^2

where a is the expected total number of faults to be eventually detected, and k and b are parameters which can be estimated by fitting the failure data.
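A short sketch of the logistic curve with assumed parameters; note that, unlike the NHPP models above, m(0) is positive rather than zero:

```python
import math

# Illustrative logistic parameters, not fitted from real failure data.
a, k, b = 100.0, 9.0, 0.1

def m(t):
    # m(t) = a / (1 + k * exp(-b*t)); S-shaped, with m(0) = a/(1+k).
    return a / (1 + k * math.exp(-b * t))

print(m(0.0))         # a/(1+k) = 10.0, not 0
print(round(m(200)))  # approaches a = 100
```

The non-zero starting value is a known quirk of borrowing a population-growth curve: the fit is purely empirical rather than derived from failure-process assumptions.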

6) Musa-Okumoto Model
Musa and Okumoto observed that the reduction in failure rate resulting from repair actions following early failures is often greater, because early failures tend to be caused by the most frequently occurring faults; this property has been incorporated into the model.
The mean value function and intensity function of the model are given as

m(t) = a ln(1 + b t)

λ(t) = a b / (1 + b t)

where a and b are parameters that can be estimated by fitting the failure data, with b reflecting the fault detection rate. Unlike the models above, m(t) grows without bound, so a acts as a scale parameter rather than a total fault count.
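A minimal sketch of the logarithmic Musa-Okumoto curve with illustrative (not fitted) parameters, showing the rapid early drop in intensity:

```python
import math

# Illustrative Musa-Okumoto parameters, not fitted from data.
a, b = 20.0, 0.5

def m(t):
    # Logarithmic mean value function: m(t) = a * ln(1 + b*t); unbounded.
    return a * math.log(1 + b * t)

def lam(t):
    # Intensity: lam(t) = a*b / (1 + b*t); steepest decline early on.
    return a * b / (1 + b * t)

print(lam(0.0))              # initial intensity a*b = 10.0
print(lam(1.0), lam(10.0))   # intensity falls fastest for early repairs
```

The hyperbolic decay of λ(t) captures the stated observation: removing the frequently-triggered faults first buys the largest reliability gains.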
