Software testing is a critical element of software quality assurance and represents the ultimate process for ensuring the correctness of the product. A quality product enhances customer confidence in using it and thereby improves the business's economics. In other words, a good-quality product means zero defects, which derives from a better-quality testing process.
The definition of testing is not well understood. People often use totally incorrect definitions of the word "testing", and this is a primary cause of poor program testing. Examples of such definitions are statements like "Testing is the process of demonstrating that errors are not present", "The purpose of testing is to show that a program performs its intended functions correctly", and "Testing is the process of establishing confidence that a program does what it is supposed to do".
Testing the product means adding value to it, which means raising the quality or reliability of
the program. Raising the reliability of the product means finding and removing errors. Hence one
should not test a product to show that it works; rather, one should start with the assumption that the
program contains errors and then test the program to find as many errors as possible. Thus a more
appropriate definition is:
Testing is the process of executing a program with the intent of finding errors.
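A short sketch of this definition in practice: `bulk_discount` is an invented function with a deliberately seeded boundary defect. A test written merely to demonstrate that the program works passes and proves little, while a test written with the intent of finding errors probes the boundary and exposes the bug.

```python
def bulk_discount(quantity):
    """Spec (invented for illustration): 10% discount for orders of
    100 units or more."""
    if quantity > 100:   # seeded defect: '>' should be '>='
        return 0.10
    return 0.0

# A test meant only to show the program "works" exercises an easy case:
demonstration_passed = bulk_discount(150) == 0.10

# A test written with the intent of finding errors probes the boundary
# itself, and catches the defect:
boundary_found_defect = bulk_discount(100) != 0.10
```

Both flags come out true: the demonstration test passes despite the defect, and only the error-seeking boundary test reveals it.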
Defects can exist in software because it is developed by human beings, who can make mistakes during development. However, it is the primary duty of a software vendor to ensure that the software delivered does not have defects and that the customer's day-to-day operations are not affected. This can be achieved by rigorously testing the software. The most common origins of software bugs are:
• Poor understanding and incomplete requirements
• Unrealistic schedules
• Fast-changing requirements
• Too many assumptions and complacency
In a typical project life cycle, testing is a late activity. When the product is tested, defects may surface for many reasons: a programming error, a defect in design, or a defect introduced at any other stage in the life cycle. The overall defect distribution is shown in fig 1.1.
• Quality
• Quality control
• Quality assurance
• Cost of quality
1. Cyclomatic complexity
2. Cohesion
3. Number of function points
4. Lines of code
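The first of these measures, cyclomatic complexity, can be approximated directly from source code as the number of decision points plus one. A minimal sketch, assuming Python source and counting branch-creating AST nodes (the `classify` sample is invented):

```python
import ast

def cyclomatic_complexity(source):
    """Approximate McCabe's cyclomatic complexity as decisions + 1,
    counting branch-creating nodes in the parsed source."""
    decisions = 0
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.If, ast.For, ast.While, ast.BoolOp)):
            decisions += 1
    return decisions + 1

code = """
def classify(x):
    if x < 0:
        return "negative"
    if x == 0:
        return "zero"
    return "positive"
"""
```

Here `classify` has two `if` decisions, so the sketch reports a complexity of 3.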
When we examine an item based on its measurable characteristics, two kinds of quality may
be encountered:
• Quality of design
• Quality of conformance
"a#$% &' ÿ$(è Quality of design refers to the characteristics that designers specify for
an item. The grade of materials, tolerance, and performance specifications all contribute to
quality of design. As higher graded materials are used and tighter, tolerance and greater levels
of performance are specified the design quality of a product increases if the product is
manufactured according to specifications.
"a#$% &' )&è'&*+aè) Quality of conformance is the degree to which the design
specifications are followed during manufacturing. Again, the greater the degree of conformance,
the higher the level of quality of conformance. In software development, quality of design
encompasses requirements, specifications and design of the system. Quality of conformance is
an issue focused primarily on implementation. If the implementation follows the design and the
resulting system meets its requirements and performance goals, conformance quality is high.
"a#$% )&è*&# , )- QC is the series of inspections, reviews, and tests used throughout
the development cycle to ensure that each work product meets the requirements placed upon it.
QC includes a feedback loop to the process that created the work product. The combination of
measurement and feedback allows us to tune the process when the work products created fail
to meet their specification. These approach views QC as part of the manufacturing process QC
activities may be fully automated, manual or a combination of automated tools and human
interaction. An essential concept of QC is that all work products have defined and measurable
specification to which we may compare the outputs of each process the feedback loop is
essential to minimize the defect produced.
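The measurement-and-feedback idea above can be sketched as a simple check of a work product against a measurable specification. The spec values and measured attributes here are invented for illustration:

```python
# Measurable specification for a work product (illustrative thresholds).
SPEC = {"max_defects_per_kloc": 2.0, "min_comment_ratio": 0.10}

def qc_check(measured):
    """Compare measured attributes of a work product against the spec
    and return the list of deviations to feed back to the process."""
    feedback = []
    if measured["defects_per_kloc"] > SPEC["max_defects_per_kloc"]:
        feedback.append("defect density above specification")
    if measured["comment_ratio"] < SPEC["min_comment_ratio"]:
        feedback.append("comment ratio below specification")
    return feedback
```

A conforming product yields an empty feedback list; a non-conforming one yields the deviations that the feedback loop would route back to the originating process.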
QA is an essential activity for any business that produces products to be used by others. The SQA group serves as the customer's in-house representative; that is, the people who perform SQA must look at the software from the customer's point of view. The SQA group attempts to answer the questions asked below and hence ensure the quality of the software.
2. Have technical disciplines properly performed their role as part of the SQA activity?
Integration testing is a systematic technique for constructing the program structure while conducting tests to uncover errors associated with interfacing. The objective is to take unit-tested modules and build a program structure that has been dictated by the design.
Incremental integration. It is the antithesis of the big-bang approach. The program is constructed and tested in small segments, where errors are easier to isolate and correct, interfaces are more likely to be tested completely, and a systematic test approach may be applied. We discuss some incremental methods here:
1. The main control module is used as a test driver, and stubs are substituted for all modules directly subordinate to the main control module.
2. Depending on the integration approach selected (i.e., depth-first or breadth-first), subordinate stubs are replaced one at a time with actual modules.
3. Tests are conducted as each module is integrated.
4. On completion of each set of tests, another stub is replaced with the real module.
5. Regression testing may be conducted to ensure that new errors have not been introduced. The process continues from step 2 until the entire program structure is built.
Top-down strategy sounds relatively uncomplicated, but in practice, logistical problems arise.
The most common of these problems occurs when processing at low levels in the hierarchy is
required to adequately test upper levels. Stubs replace low-level modules at the beginning of
top-down testing; therefore, no significant data can flow upward in the program structure.
1. Delay many tests until stubs are replaced with actual modules.
2. Develop stubs that perform limited functions that simulate the actual module.
3. Integrate the software from the bottom of the hierarchy upward.
The first approach causes us to lose some control over the correspondence between specific tests and the incorporation of specific modules. This can lead to difficulty in determining the cause of errors, and tends to violate the highly constrained nature of the top-down approach. The second approach is workable but can lead to significant overhead, as stubs become increasingly complex. The third approach is discussed in the next section.
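The stub substitution used throughout the top-down steps above can be sketched as follows; the module names and the 8% tax rule are invented for illustration:

```python
def stub_tax(amount):
    """Stub standing in for the subordinate tax module: returns a fixed,
    known value so the main control module can be tested first."""
    return 0.0

def real_tax(amount):
    """Actual subordinate module that later replaces the stub."""
    return round(amount * 0.08, 2)

def make_invoice(amount, tax_fn):
    """Main control module; its subordinate is injected so a stub can
    substitute for the real module during the early test steps."""
    return amount + tax_fn(amount)

# Step 1: test the main control module with the stub in place.
with_stub = make_invoice(100, stub_tax)

# Later steps: replace the stub with the actual module and re-test.
with_real = make_invoice(100, real_tax)
```

Passing the subordinate in as a parameter is one simple way to make the stub-for-module swap mechanical; the same tests run unchanged before and after the replacement.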
c" $
odules are integrated from the bottom to top, in this approach processing required for
modules subordinate to a given level is always available and the needs for subs is eliminated.
1. Low-level modules are combined into clusters that perform a specific software sub-function.
2. A driver (a control program for testing) is written to coordinate test-case input and output.
3. The cluster is tested.
4. Drivers are removed and clusters are combined, moving upward in the program structure.
As integration moves upward, the need for separate test drivers lessens. In fact, if the top two
levels of program structure are integrated top-down, the number of drivers can be reduced
substantially and integration of clusters is greatly simplified.
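A minimal sketch of a cluster and its driver, with invented low-level modules and record format:

```python
def parse_record(line):
    """Low-level module: split a 'name,quantity' record (invented format)."""
    name, qty = line.strip().split(",")
    return name, int(qty)

def total_quantity(records):
    """Low-level module: sum quantities across parsed records."""
    return sum(qty for _name, qty in records)

def cluster_driver(lines):
    """Driver: coordinates test-case input and output for the cluster.
    It is throwaway code, removed once the cluster is integrated upward."""
    return total_quantity(parse_record(line) for line in lines)
```

Once a higher-level module takes over calling the cluster, the driver is deleted, which is why the text notes that the need for separate test drivers lessens as integration moves upward.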
Each time a new module is added as part of integration testing, the software changes: new data-flow paths are established, new I/O may occur, and new control logic is invoked. These changes may cause problems with functions that previously worked flawlessly. In the context of an integration test strategy, regression testing is the re-execution of a subset of tests that have already been conducted, to ensure that changes have not propagated unintended side effects. Regression testing is the activity that helps ensure that changes do not introduce unintended behavior or additional errors.
Regression testing may be conducted manually, by re-executing a subset of all test cases, or by using automated capture/playback tools. Capture/playback tools enable the software engineer to capture test cases and results for subsequent playback and comparison.
The regression test suite contains three different classes of test cases:
1. A representative sample of tests that will exercise all software functions.
2. Additional tests that focus on software functions that are likely to be affected by the change.
3. Tests that focus on software components that have been changed.
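Selecting the regression subset from these classes can be sketched as a filter over a tagged suite; the test-case records and component names below are invented:

```python
# Each record tags a test case with the regression class it belongs to.
suite = [
    {"name": "smoke_all_functions", "kind": "representative", "component": "core"},
    {"name": "report_totals",       "kind": "affected",       "component": "reporting"},
    {"name": "tax_rounding",        "kind": "changed",        "component": "tax"},
]

def select_regression(suite, changed_components):
    """Re-run the representative sample plus every test whose component
    was changed or is likely to be affected by the change."""
    return [t["name"] for t in suite
            if t["kind"] == "representative"
            or t["component"] in changed_components]
```

The representative sample always runs, while the change-focused classes are pulled in only when their components are touched, which keeps the regression run proportional to the size of the change.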
Integration test documentation
An overall plan for the integration of the software and a description of specific tests are documented in a test specification. The specification is a deliverable in the software engineering process and becomes part of the software configuration.
I. Scope of testing
II. Test plan
 1. Schedule
 2. Overhead software
 3. Order of integration
III. Test procedure
 • Purpose
 • Modules to be tested
 • Expected results
IV. Test environment
V. References
VI. Appendices
The following criteria and corresponding tests are applied for all test phases:
• Interface integrity. Internal and external interfaces are tested as each module is incorporated into the structure.
• Functional validity. Tests designed to uncover functional errors are conducted.
• Information content. Tests designed to uncover errors associated with local or global data structures are conducted.
• Performance. Tests designed to verify performance bounds established during software design are conducted.
A schedule for integration, overhead software, and related topics are also discussed as part of the "Test Plan" section. Start and end dates for each phase are established, and availability windows for unit-tested modules are defined. A brief description of overhead software (stubs and drivers) concentrates on characteristics that might require special effort. Finally, test environments and resources are described.
ISO/IEC 9126
ISO/IEC 9126 ("Software engineering: Product quality") is an international standard for the evaluation of software quality. The fundamental objective of this standard is to address some of the well-known human biases that can adversely affect the delivery and perception of a software development project. These biases include changing priorities after the start of a project or not having any clear definition of "success". By clarifying and then agreeing on the project priorities, and subsequently converting abstract priorities (compliance) to measurable values (e.g., output data can be validated against schema X with zero intervention), ISO/IEC 9126 tries to develop a common understanding of the project's objectives and goals.
Quality model. The quality model established in the first part of the standard, ISO/IEC 9126-1, classifies software quality in a structured set of characteristics and sub-characteristics as follows:
• Functionality - A set of attributes that bear on the existence of a set of functions and their specified properties. The functions are those that satisfy stated or implied needs.
  – Suitability
  – Accuracy
  – Interoperability
  – Security
  – Functionality Compliance
• Reliability - A set of attributes that bear on the capability of software to maintain its level of performance under stated conditions for a stated period of time.
  – Maturity
  – Fault Tolerance
  – Recoverability
  – Reliability Compliance
• Usability - A set of attributes that bear on the effort needed for use, and on the individual assessment of such use, by a stated or implied set of users.
  – Understandability
  – Learnability
  – Operability
  – Attractiveness
  – Usability Compliance
• Efficiency - A set of attributes that bear on the relationship between the level of performance of the software and the amount of resources used, under stated conditions.
  – Time Behaviour
  – Resource Utilisation
  – Efficiency Compliance
• Maintainability - A set of attributes that bear on the effort needed to make specified modifications.
  – Analyzability
  – Changeability
  – Stability
  – Testability
  – Maintainability Compliance
• Portability - A set of attributes that bear on the ability of software to be transferred from one environment to another.
  – Adaptability
  – Installability
  – Co-Existence
  – Replaceability
  – Portability Compliance
Each quality sub-characteristic (e.g. adaptability) is further divided into attributes. An attribute is an entity which can be verified or measured in the software product. Attributes are not defined in the standard, as they vary between different software products.
The standard provides a framework for organizations to define a quality model for a software product. In doing so, however, it leaves to each organization the task of specifying its own model precisely. This may be done, for example, by specifying target values for quality metrics that evaluate the degree of presence of quality attributes.
Internal metrics: Internal metrics are those which do not rely on software execution (static measures).
External metrics: External metrics are applicable to running software.
Quality-in-use metrics: Quality-in-use metrics are only available when the final product is used in real conditions.
Ideally, the internal quality determines the external quality and external quality determines
quality in use.
This standard stems from the model established in 1977 by McCall and his colleagues, who proposed a model to specify software quality. The McCall quality model is organized around three types of quality characteristics:
• Factors (to specify): They describe the external view of the software, as viewed by the users.
• Criteria (to build): They describe the internal view of the software, as seen by the developer.
• Metrics (to control): They are defined and used to provide a scale and method for measurement.
ISO/IEC 9126 distinguishes between a defect and a nonconformity: a defect is "the nonfulfilment of intended usage requirements", whereas a nonconformity is "the nonfulfilment of specified requirements". A similar distinction is made between validation and verification, known as V&V in the testing trade.
It is vital for software developers to recognize that the quality of support for a product is normally as important to customers as the quality of the product itself. Delivering software technical support has quickly grown into big business; today, software support is a business in its own right. Software support operations are not there because they want to be. They exist because they fill a vital void in the software industry, helping customers use the computer systems in front of them, a job that is getting more and more difficult. There has been a phenomenal increase in the number of people who use their computers for "mission-critical" applications, which puts extra pressure on the software support groups in organizations. During the maintenance phase of a software project, complexity metrics can be used to track and control the complexity level of modified modules.
In this scenario, the software developer must ensure that the customer's support requirements are identified, and must design and engineer the business and technical infrastructure from which the product will be supported. This applies equally to those businesses producing software packages and to in-house information systems departments. Support for software can be complex and may include:
• User documentation
• Packaging and distribution arrangements
• Implementation and customization services and consulting
• Product training
• Help-desk assistance
• Error reporting and correction
• Enhancement
For an application installed on a single site, the support requirement may be simply to provide a telephone line and assign a staff member to receive and follow up queries. For a shrink-wrapped product, it may mean providing localization and worldwide distribution facilities, and implementing major administrative computer systems to support global help-desk services.
Unit testing focuses verification efforts on the smallest unit of software design: the module. Using the procedural design description as a guide, important control paths are tested to uncover errors within the boundary of the module. The relative complexity of the tests and the errors they uncover is limited by the constrained scope established for unit testing. The unit test is normally white-box oriented, and the step can be conducted in parallel for multiple modules.
" The tests that occur as part of unit testing are illustrated schematically
in figure 6.5.
The module interface is tested to ensure that information properly flows into and out of the program unit under test. The local data structure is examined to ensure that data stored temporarily maintains its integrity during all steps in an algorithm's execution. Boundary conditions are tested to ensure that the module operates properly at boundaries established to limit or restrict processing. All independent paths through the control structure are exercised to ensure that all statements in a module have been executed at least once. And finally, all error-handling paths are tested.
Tests of data flow across a module interface are required before any other test is initiated. If data do not enter and exit properly, all other tests are doubtful.
3. Incorrect initialization
4. Precision inaccuracy
5. Error description does not provide enough information to assist in locating the cause of the error.
Boundary testing is the last task of the unit-test step. Software often fails at its boundaries. That is, errors often occur when the nth element of an n-dimensional array is processed, when the ith repetition of a loop with i passes is invoked, or when the maximum or minimum allowable value is encountered. Test cases that exercise data structures, control flow, and data values just below, at, and just above maxima and minima are very likely to uncover errors.
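The boundary cases above can be sketched against a small invented unit, `find_max`, probing the nth element of an n-element structure, the minimum allowable input size, and the edge of the comparison itself:

```python
def find_max(values):
    """Return the largest element; defined only for non-empty lists
    (an invented unit under test)."""
    largest = values[0]
    for v in values[1:]:
        if v > largest:
            largest = v
    return largest

# Probe the boundaries rather than only the comfortable middle:
boundary_cases = {
    "answer_in_nth_element": find_max([1, 2, 9]) == 9,  # last slot of the array
    "minimum_size_input":    find_max([7]) == 7,        # loop body runs zero times
    "all_values_equal":      find_max([4, 4, 4]) == 4,  # comparison never fires
}
```

A mid-range case like `find_max([3, 9, 1])` would pass even in many buggy implementations; it is the boundary cases that tend to expose off-by-one and initialization errors.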
"
Unit testing is normally considered as an adjunct to the coding step. After
source-level code has been developed, reviewed, and verified for correct syntax, unit test case
design begins. A review of design information provides guidance for establishing test cases that
are likely to uncover errors in each of the categories discussed above. Each test case should be
coupled with a set of expected results. Because a module is not a standalone program, driver and/or stub software must be developed for each unit test. The unit test environment is illustrated in figure 5.6. In most applications, a driver is nothing more than a "main program" that accepts test-case data, passes such data to the module under test, and prints the relevant results. Stubs serve to replace modules that are subordinate to the module being tested. A stub, or "dummy subprogram", uses the subordinate module's interface, may do minimal data manipulation, prints verification of entry, and returns control. Drivers and stubs represent overhead: both are software that must be developed but that is not delivered with the final product. If drivers and stubs are kept simple, the actual overhead is relatively low. Unfortunately, many modules cannot be adequately unit tested with "simple" overhead software; in such cases, complete testing can be postponed until the integration test step (where drivers and stubs are also used). Unit testing is simplified when a module with high cohesion is designed: when a module addresses only one function, the number of test cases is reduced and errors can be more easily predicted and uncovered.
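The driver-and-stub arrangement just described can be sketched as follows; the module under test (`word_count`) and its subordinate (`load_text`) are invented names:

```python
def load_text_stub(path):
    """Stub ("dummy subprogram"): replaces the real subordinate module,
    prints verification of entry, and returns canned data."""
    print("stub entered with:", path)
    return "to be or not to be"

def word_count(path, loader):
    """Module under test; its subordinate is passed in so the stub can
    stand in for the real file-loading module."""
    return len(loader(path).split())

def driver():
    """Driver ("main program"): accepts test-case data, passes it to the
    module under test, and prints the relevant result."""
    result = word_count("ignored.txt", load_text_stub)
    print("word_count ->", result)
    return result
```

Both `driver` and `load_text_stub` are overhead in the text's sense: they are written only to exercise `word_count` and are not delivered with the product.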
This testing technique takes into account the internal structure of the system or component; the entire source code of the system must be available. The technique is known as white-box testing because the complete internal structure and workings of the code are visible. White-box testing helps to derive test cases to ensure:
1. All independent paths within a module have been exercised at least once.
2. All logical decisions are exercised for both true and false paths.
3. All loops are executed at their boundaries and within their operational bounds.
• Logic errors and incorrect assumptions are most likely to be made when coding for "special cases"; we need to ensure these execution paths are tested.
• We may find that assumptions about execution paths were incorrect, leading to design errors. White-box testing can find these errors.
• Typographical errors are random, and are just as likely to be on an obscure logical path as on a mainstream path.
• "Bugs lurk in corners and congregate at boundaries."
Software configuration management (SCM)
SCM concerns itself with answering the question "Somebody did something; how can one reproduce it?" Often the problem involves not reproducing "it" identically, but with controlled, incremental changes. Answering the question thus becomes a matter of comparing different results and analyzing their differences. Traditional configuration management typically focused on the controlled creation of relatively simple products. Now, implementers of SCM face the challenge of dealing with relatively minor increments under their own control, in the context of the complex system being developed.