
What is software testing?

Ans: Software testing is a critical element of software quality assurance and represents the ultimate process for ensuring the correctness of the product. A quality product enhances customer confidence in using the product, thereby improving business economics. In other words, a good quality product means zero defects, which derives from a better-quality testing process.

The definition of testing is not well understood. People often use a totally incorrect definition of the word testing, and this is a primary cause of poor program testing. Examples of these definitions are statements such as "Testing is the process of demonstrating that errors are not present", "The purpose of testing is to show that a program performs its intended functions correctly", and "Testing is the process of establishing confidence that a program does what it is supposed to do".

Testing the product means adding value to it, which means raising the quality or reliability of
the program. Raising the reliability of the product means finding and removing errors. Hence one
should not test a product to show that it works; rather, one should start with the assumption that the
program contains errors and then test the program to find as many errors as possible. Thus a more
appropriate definition is:
Testing is the process of executing a program with the intent of finding errors.
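The difference in intent can be made concrete with a small sketch (all names hypothetical). A demonstration-oriented test only confirms the happy path; tests written with the intent of finding errors deliberately probe inputs likely to break the function:

```python
def safe_divide(a, b):
    """Return a / b, or None when division is impossible."""
    try:
        return a / b
    except (ZeroDivisionError, TypeError):
        return None

# Demonstration-oriented check: shows the program "works".
assert safe_divide(10, 2) == 5

# Destruction-oriented checks: executed with the intent of finding errors.
assert safe_divide(10, 0) is None      # division by zero
assert safe_divide(10, 0.0) is None    # float zero also raises
assert safe_divide("10", 2) is None    # wrong argument type
```

A suite built only from the first kind of test would pass even if the error-handling paths were missing entirely.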

Purposes of testing:

- To show the software works: known as demonstration-oriented testing.
- To show the software doesn't work: known as destruction-oriented testing.
- To minimize the risk of the software not working up to an acceptable level: known as evaluation-oriented testing.
Why does software have bugs?

Defects can exist in software because it is developed by human beings, who can make mistakes during development. However, it is the primary duty of a software vendor to ensure that the software delivered does not have defects and that the customers' day-to-day operations are not affected. This can be achieved by rigorously testing the software. The most common origins of software bugs are:

- Poor understanding and incomplete requirements
- Unrealistic schedules
- Fast changes in requirements
- Too many assumptions and complacency

Defect distribution

In a typical project life cycle, testing is a late activity. When the product is tested, the defects found may be due to many causes: they may be programming errors, design defects, or defects introduced at any stage of the life cycle. The overall defect distribution is shown in Fig 1.1.


QUALITY CONCEPTS


Ans: Quality is defined as "a characteristic or attribute of something". As an attribute of an item, quality refers to measurable characteristics: things we are able to compare to known standards such as length, color, electrical properties, malleability, and so on. However, software, largely an intellectual entity, is more challenging to characterize than physical objects.

Quality of design refers to the characteristics that designers specify for an item. The grade of materials, tolerances, and performance specifications all contribute to the quality of design.

Quality of conformance is the degree to which the design specification is followed during manufacturing. The greater the degree of conformance, the higher the level of quality of conformance.

Software quality is achieved through:

- A quality management approach
- Effective software engineering technology
- Formal technical reviews
- A multi-tiered testing strategy
- Control of software documentation and the changes made to it
- A procedure to assure compliance with software development standards
- Measurement and reporting mechanisms

Key quality concepts:

- Quality
- Quality control
- Quality assurance
- Cost of quality

The American Heritage Dictionary defines quality as "a characteristic or attribute of something". As an attribute of an item, quality refers to measurable characteristics: things we are able to compare to known standards such as length, color, electrical properties, malleability, and so on. However, software, largely an intellectual entity, is more challenging to characterize than physical objects. Nevertheless, measures of a program's characteristics do exist. These properties include:

1. Cyclomatic complexity
2. Cohesion
3. Number of function points
4. Lines of code
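The first of these properties is worth a small worked example. Cyclomatic complexity is conventionally computed from a program's control-flow graph as V(G) = E - N + 2, where E is the number of edges and N the number of nodes; the sketch below (a hedged illustration, not a full graph analyzer) applies that formula:

```python
def cyclomatic_complexity(edges, nodes):
    """V(G) = E - N + 2 for a single connected control-flow graph."""
    return edges - nodes + 2

# A simple if/else has 4 nodes (decision, two branches, join) and 4 edges,
# giving a complexity of 2 -- i.e., two independent paths through the code.
assert cyclomatic_complexity(4, 4) == 2

# Straight-line code: 2 nodes, 1 edge -> complexity 1 (a single path).
assert cyclomatic_complexity(1, 2) == 1
```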

When we examine an item based on its measurable characteristics, two kinds of quality may
be encountered:

- Quality of design
- Quality of conformance

QUALITY OF DESIGN: Quality of design refers to the characteristics that designers specify for an item. The grade of materials, tolerances, and performance specifications all contribute to quality of design. As higher-grade materials are used and tighter tolerances and greater levels of performance are specified, the design quality of a product increases, provided the product is manufactured according to specifications.

QUALITY OF CONFORMANCE: Quality of conformance is the degree to which the design specifications are followed during manufacturing. Again, the greater the degree of conformance, the higher the level of quality of conformance. In software development, quality of design encompasses requirements, specifications, and the design of the system. Quality of conformance is an issue focused primarily on implementation. If the implementation follows the design and the resulting system meets its requirements and performance goals, conformance quality is high.

QUALITY CONTROL (QC): QC is the series of inspections, reviews, and tests used throughout the development cycle to ensure that each work product meets the requirements placed upon it. QC includes a feedback loop to the process that created the work product. The combination of measurement and feedback allows us to tune the process when the work products created fail to meet their specifications. This approach views QC as part of the manufacturing process. QC activities may be fully automated, fully manual, or a combination of automated tools and human interaction. An essential concept of QC is that all work products have defined and measurable specifications to which we may compare the outputs of each process; the feedback loop is essential to minimize the defects produced.

QUALITY ASSURANCE (QA): QA consists of the auditing and reporting functions of management. The goal of quality assurance is to provide management with the data necessary to be informed about product quality, thereby gaining insight and confidence that product quality is meeting its goals. Of course, if the data provided through QA identify problems, it is management's responsibility to address the problems and apply the necessary resources to resolve quality issues.


SOFTWARE QUALITY ASSURANCE (SQA)


Ans: SQA is an essential activity for any business that produces products to be used by others. The SQA group serves as the customer's in-house representative; that is, the people who perform SQA must look at the software from the customer's point of view. The SQA group attempts to answer the questions below and hence ensure the quality of the software.

Questions addressed by the SQA group:

1. Has software development been conducted according to pre-established standards?

2. Have technical disciplines properly performed their role as part of the SQA activity?

SQA Activities

The SQA plan is interpreted as shown in Fig 2.2.

SQA comprises a variety of tasks associated with two different constituencies:


1. The software engineers, who do technical work by:

- Performing quality assurance by applying technical methods
- Conducting formal technical reviews
- Performing well-planned software testing

2. An SQA group that has responsibility for:

- Quality assurance planning and oversight
- Record keeping
- Analysis and reporting

QA activities performed by the software engineering team and the SQA group are governed by a plan that specifies:

- Evaluations to be performed
- Audits and reviews to be performed
- Standards that are applicable to the project
- Procedures for error reporting and tracking
- Documents to be produced by the SQA group
- Amount of feedback provided to the software project team
Activities of the SQA group:

- Prepares an SQA plan for the project
- Participates in the development of the project's software description
- Reviews software engineering activities to verify compliance with the defined software process
- Audits designated software work products to verify compliance with those defined as part of the software process
- Ensures that deviations in software work and work products are documented and handled according to a documented procedure
- Records any noncompliance and reports it to senior management

INTEGRATION TESTING



Ans: Integration testing is a systematic technique for constructing the program structure while conducting tests to uncover errors associated with interfacing. The objective is to take unit-tested modules and build the program structure that has been dictated by design.

Non-incremental integration: There is often a tendency to attempt non-incremental integration; that is, to construct the program using a "big bang" approach. All modules are combined in advance and the entire program is tested as a whole. Chaos usually results! A set of errors is encountered, and correction is difficult because isolation of causes is complicated by the vast expanse of the entire program. Once these errors are corrected, new ones appear and the process continues in a seemingly endless loop.

Incremental integration: It is the antithesis of the big bang approach. The program is constructed and tested in small segments, where errors are easier to isolate and correct, interfaces are more likely to be tested completely, and a systematic test approach may be applied. Some incremental methods are discussed here:

Top-down integration: Top-down integration is an incremental approach to construction of the program structure. Modules are integrated by moving downward through the control hierarchy, beginning with the main control module.

The integration process is performed in the following steps:

- The main control module is used as a test driver, and stubs are substituted for all modules directly subordinate to the main control module.
- Depending on the integration approach selected (i.e., depth-first or breadth-first), subordinate stubs are replaced one at a time with actual modules.
- Tests are conducted as each module is integrated.
- On completion of each set of tests, another stub is replaced with the real module.
- Regression testing may be conducted to ensure that new errors have not been introduced. The process continues from step 2 until the entire program structure is built.

The top-down strategy sounds relatively uncomplicated, but in practice logistical problems arise. The most common of these problems occurs when processing at low levels in the hierarchy is required to adequately test upper levels. Stubs replace low-level modules at the beginning of top-down testing; therefore, no significant data can flow upward in the program structure.

The tester is left with three choices:

1. Delay many tests until stubs are replaced with actual modules.

2. Develop stubs that perform limited functions that simulate the actual module

3. Integrate the software from the bottom of the hierarchy upward.

The first approach causes us to lose some control over the correspondence between specific tests and the incorporation of specific modules. This can lead to difficulty in determining the cause of errors and tends to violate the highly constrained nature of the top-down approach. The second approach is workable but can lead to significant overhead, as stubs become increasingly complex. The third approach is discussed in the next section.

Bottom-up Integration

Modules are integrated from the bottom to the top. In this approach, processing required for modules subordinate to a given level is always available, and the need for stubs is eliminated.

A bottom-up integration strategy may be implemented with the following steps:

1. Low-level modules are combined into clusters that perform a specific software subfunction.

2. A driver is written to coordinate test case input and output.

3. The cluster is tested.

4. Drivers are removed and clusters are combined moving upward in the program structure.

As integration moves upward, the need for separate test drivers lessens. In fact, if the top two
levels of program structure are integrated top-down, the number of drivers can be reduced
substantially and integration of clusters is greatly simplified.

Regression Testing

Each time a new module is added as part of integration testing, the software changes. New data flow paths are established, new I/O may occur, and new control logic is invoked. These changes may cause problems with functions that previously worked flawlessly. In the context of an integration test strategy, regression testing is the re-execution of a subset of tests that have already been conducted, to ensure that changes have not propagated unintended side effects. Regression testing is the activity that helps to ensure that changes do not introduce unintended behavior or additional errors.

How is regression testing conducted?

Regression testing may be conducted manually, by re-executing a subset of all test cases, or using automated capture-playback tools. Capture-playback tools enable the software engineer to capture test cases and results for subsequent playback and comparison.

The regression test suite contains three different classes of test cases:

1. A representative sample of tests that will exercise all software functions.

2. Additional tests that focus on software functions that are likely to be affected by the change.
3. Tests that focus on software components that have been changed.

Regression tests should focus on critical module functions.
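The three classes of regression tests suggest a simple selection rule, sketched below with hypothetical test names: always re-run the representative sample, and add any test that covers a changed or affected function:

```python
# A toy regression suite: each entry records which functions a test
# covers and whether it belongs to the representative sample.
suite = [
    {"name": "t1", "covers": {"login"},   "representative": True},
    {"name": "t2", "covers": {"search"},  "representative": False},
    {"name": "t3", "covers": {"billing"}, "representative": False},
]

def select_regression(suite, changed):
    """Pick representative tests plus tests touching changed functions."""
    return [t["name"] for t in suite
            if t["representative"] or t["covers"] & changed]

# A change to "billing" re-runs the representative test and the billing test.
assert select_regression(suite, {"billing"}) == ["t1", "t3"]
```

Real capture-playback tools apply far richer selection criteria, but the principle of re-executing a targeted subset rather than the whole suite is the same.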

What is a critical module?

A critical module has one or more of the following characteristics.

- Addresses several software requirements
- Has a high level of control
- Is complex or error-prone
- Has a definite performance requirement

Integration Test Documentation

An overall plan for integration of the software and a description of specific tests are documented in a test specification. The specification is a deliverable in the software engineering process and becomes part of the software configuration.

Test Specification Outline

I. Scope of testing

II. Test Plan
   1. Test phases and builds
   2. Schedule
   3. Overhead software
   4. Environment and resources

III. Test Procedures
   1. Order of integration
      - Purpose
      - Modules to be tested
   2. Unit tests for modules in build
      - Description of tests for module n
      - Overhead software description
      - Expected results
   3. Test environment
      - Special tools or techniques
      - Overhead software description
   4. Test case data
   5. Expected results for build

IV. Actual Test Results

V. References

VI. Appendices

The following criteria and corresponding tests are applied for all test phases:
- Interface integrity: internal and external interfaces are tested as each module is incorporated into the structure.
- Functional validity: tests designed to uncover functional errors are conducted.
- Information content: tests designed to uncover errors associated with local or global data structures are conducted.
- Performance: tests designed to verify performance bounds established during software design are conducted.

A schedule for integration, overhead software, and related topics is also discussed as part of the "Test Plan" section. Start and end dates for each phase are established, and availability windows for unit-tested modules are defined. A brief description of overhead software (stubs and drivers) concentrates on characteristics that might require special effort. Finally, test environments and resources are described.

ISO/IEC 9126


Ans: ISO/IEC 9126 (Software engineering - Product quality) is an international standard for the evaluation of software quality. The fundamental objective of this standard is to address some of the well-known human biases that can adversely affect the delivery and perception of a software development project. These biases include changing priorities after the start of a project or not having any clear definition of "success". By clarifying and then agreeing on the project priorities, and subsequently converting abstract priorities (such as compliance) into measurable values (such as "output data can be validated against schema X with zero intervention"), ISO/IEC 9126 tries to develop a common understanding of the project's objectives and goals.

The standard is divided into four parts:


- Quality model
- External metrics
- Internal metrics
- Quality-in-use metrics

The quality model established in the first part of the standard, ISO/IEC 9126-1, classifies software quality in a structured set of characteristics and sub-characteristics as follows:

- Functionality: a set of attributes that bear on the existence of a set of functions and their specified properties. The functions are those that satisfy stated or implied needs.
  - Suitability
  - Accuracy
  - Interoperability
  - Security
  - Functionality Compliance
- Reliability: a set of attributes that bear on the capability of software to maintain its level of performance under stated conditions for a stated period of time.
  - Maturity
  - Fault Tolerance
  - Recoverability
  - Reliability Compliance
- Usability: a set of attributes that bear on the effort needed for use, and on the individual assessment of such use, by a stated or implied set of users.
  - Understandability
  - Learnability
  - Operability
  - Attractiveness
  - Usability Compliance
- Efficiency: a set of attributes that bear on the relationship between the level of performance of the software and the amount of resources used, under stated conditions.
  - Time Behaviour
  - Resource Utilisation
  - Efficiency Compliance
- Maintainability: a set of attributes that bear on the effort needed to make specified modifications.
  - Analyzability
  - Changeability
  - Stability
  - Testability
  - Maintainability Compliance
- Portability: a set of attributes that bear on the ability of software to be transferred from one environment to another.
  - Adaptability
  - Installability
  - Co-Existence
  - Replaceability
  - Portability Compliance
Each quality sub-characteristic (e.g. adaptability) is further divided into attributes. An attribute is
an entity which can be verified or measured in the software product. Attributes are not defined
in the standard, as they vary between different software products.
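Because the model is a fixed two-level hierarchy, it can be held in a simple nested mapping, which is handy when building a quality checklist or evaluation sheet. The sketch below records only two of the six characteristics, using the sub-characteristic names from ISO/IEC 9126-1 as listed above:

```python
# Partial ISO/IEC 9126-1 hierarchy: characteristic -> sub-characteristics.
ISO9126 = {
    "Functionality": ["Suitability", "Accuracy", "Interoperability",
                      "Security", "Functionality Compliance"],
    "Reliability":   ["Maturity", "Fault Tolerance", "Recoverability",
                      "Reliability Compliance"],
}

def subcharacteristics(characteristic):
    """Return the sub-characteristics of a characteristic (empty if unknown)."""
    return ISO9126.get(characteristic, [])

assert "Accuracy" in subcharacteristics("Functionality")
assert subcharacteristics("Portability") == []  # not captured in this sketch
```

Product-specific attributes would hang off each sub-characteristic, since, as noted above, the standard leaves those to the individual organization.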

Software product is defined in a broad sense: it encompasses executables, source code, architecture descriptions, and so on. As a result, the notion of user extends to operators as well as to programmers, who are users of components such as software libraries.

The standard provides a framework for organizations to define a quality model for a software product. In doing so, however, it leaves to each organization the task of specifying precisely its own model. This may be done, for example, by specifying target values for quality metrics that evaluate the degree of presence of quality attributes.

Internal metrics are those which do not rely on software execution (static measures).
External metrics are applicable to running software.
Quality-in-use metrics are only available when the final product is used in real conditions.

Ideally, the internal quality determines the external quality, and the external quality determines the quality in use.

This standard stems from the model established in 1977 by McCall and his colleagues, who proposed a model to specify software quality. The McCall quality model is organized around three types of quality characteristics:
- Factors (to specify): they describe the external view of the software, as viewed by the users.
- Criteria (to build): they describe the internal view of the software, as seen by the developer.
- Metrics (to control): they are defined and used to provide a scale and method for measurement.
ISO/IEC 9126 distinguishes between a defect and a nonconformity: a defect is the non-fulfilment of intended usage requirements, whereas a nonconformity is the non-fulfilment of specified requirements. A similar distinction is made between validation and verification, known as V&V in the testing trade.


Why is product support important?


Ans: It is vital for software developers to recognize that the quality of support for a product is normally as important to customers as the quality of the product itself. Delivering software technical support has quickly grown into big business; today, software support is a business in its own right. Software support operations do not exist because they want to: they exist because they fill a vital void in the software industry, helping customers use the computer systems in front of them, a job that is getting more and more difficult. There has been a phenomenal increase in the number of people who use their computers for "mission-critical" applications, which puts extra pressure on the software support groups in organizations. During the maintenance phase of a software project, complexity metrics can be used to track and control the complexity level of modified modules.

In this scenario, the software developer must ensure that the customer's support requirements are identified, and must design and engineer the business and technical infrastructure from which the product will be supported. This applies equally to businesses producing software packages and to in-house information systems departments. Support for software can be complex and may include:

- User documentation
- Packaging and distribution arrangements
- Implementation and customization services and consulting
- Product training
- Help desk assistance
- Error reporting and correction
- Enhancement

For an application installed on a single site, the support requirement may be simply to provide a telephone line and assign a staff member to receive and follow up queries. For a shrink-wrapped product, it may mean providing localization and worldwide distribution facilities and implementing major administrative computer systems to support global help-desk services.

What does SQA comprise?


Ans: SQA comprises a variety of tasks associated with two different constituencies:

1. The software engineers, who do technical work by:

- Performing quality assurance by applying technical methods.
- Conducting formal technical reviews: formal technical reviews assess the test strategy and the test cases themselves, and can uncover inconsistencies, omissions, and outright errors in the testing approach. This saves time and improves product quality.

- Performing well-planned software testing: a continuous-improvement approach should be developed for the testing process. The test strategy should be measured, and the metrics collected during testing should be used as part of a statistical process control approach for software testing.

2. An SQA group that has responsibility for:

- Quality assurance planning and oversight
- Record keeping
- Analysis and reporting

UNIT TESTING


Ans: Unit testing focuses verification effort on the smallest unit of software design: the module. Using the procedural design description as a guide, important control paths are tested to uncover errors within the boundary of the module. The relative complexity of tests and the errors they uncover are limited by the constrained scope established for unit testing. The unit test is normally white-box oriented, and the step can be conducted in parallel for multiple modules.

Unit test considerations: The tests that occur as part of unit testing are illustrated schematically in Fig 6.5.

The module interface is tested to ensure that information properly flows into and out of the program unit under test. The local data structure is examined to ensure that data stored temporarily maintains its integrity during all steps in an algorithm's execution. Boundary conditions are tested to ensure that the module operates properly at the boundaries established to limit or restrict processing. All independent paths through the control structure are exercised to ensure that all statements in the module have been executed at least once. And finally, all error-handling paths are tested.

Tests of data flow across a module interface are required before any other test is initiated. If
data do not enter and exit properly, all other tests are doubtful.

Checklist for interface tests:

1. Number of input parameters equal to number of arguments?

2. Parameter and argument attributes match?

3. Parameter and argument units systems match?

4. Number of arguments transmitted to called modules equal to number of parameters?

5. Attributes of arguments transmitted to called modules equal to attributes of parameters?

6. Units system of arguments transmitted to called modules equal to units system of parameters?

7. Number of attributes and order of arguments to built-in functions correct?

8. Any references to parameters not associated with the current point of entry?

9. Input-only arguments altered?

10. Global variable definitions consistent across modules?

11. Constraints passed as arguments?
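Some checklist items can be automated. As a hedged sketch, item 1 (argument count matches parameter count) can be checked in Python with the standard library's `inspect` module; the function `area` and the helper name are hypothetical:

```python
import inspect

def call_count_matches(func, *args):
    """True when the number of supplied arguments equals the
    number of parameters declared by func (checklist item 1)."""
    return len(inspect.signature(func).parameters) == len(args)

def area(width, height):
    return width * height

assert call_count_matches(area, 3, 4) is True   # counts agree
assert call_count_matches(area, 3) is False     # one argument missing
```

Most of the remaining items (attribute and unit matching, altered input-only arguments) need knowledge of the design specification and are usually checked by review rather than by tooling.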

[       $3&4        





1. File attributes correct?

2. Open/Close statements correct?

3. Format specification matches I/O statements?

4. Buffer size matches record size?

5. Files opened before use?

6. End-of-File conditions handled?

7. I/O errors handled?

8. Any textual errors in output information?

The local data structure for a module is a common source of errors. Test cases should be designed to uncover errors in the following categories:

1. Improper or inconsistent typing

2. Erroneous initialization or default values

3. Incorrect variable names

4. Inconsistent data types

5. Underflow, overflow, and addressing exceptions


In addition to local data structures, the impact of global data on a module should be ascertained during unit testing. Selective testing of execution paths is an essential task during the unit test. Test cases should be designed to uncover errors due to erroneous computations, incorrect comparisons, and improper control flow. Basis path and loop testing are effective techniques for uncovering a broad array of path errors.

Common errors in computation include:

1. Misunderstood or incorrect arithmetic precedence

2. Mixed-mode operations

3. Incorrect initialization

4. Precision inaccuracy

5. Incorrect symbolic representation of an expression

Comparison and control flow are closely coupled. Test cases should uncover errors such as:

1. Comparison of different data types

2. Incorrect logical operators or precedence

3. Expectation of equality when precision error makes equality unlikely

4. Incorrect comparison of variables

5. Improper or nonexistent loop termination

6. Failure to exit when divergent iteration is encountered

7. Improperly modified loop variables

Good design dictates that error conditions be anticipated and error-handling paths set up. Among the potential errors that should be tested when error handling is evaluated are:

1. Error description is unintelligible

2. Error noted does not correspond to error encountered

3. Error condition causes system intervention prior to error handling


4. Exception-condition processing is incorrect

5. Error description does not provide enough information to assist in the location of the cause of
the error.

Boundary testing is the last task of the unit-test step. Software often fails at its boundaries; that is, errors often occur when the nth element of an n-dimensional array is processed, when the ith repetition of a loop with i passes is invoked, or when the maximum or minimum allowable value is encountered. Test cases that exercise data structures, control flow, and data values just below, at, and just above maxima and minima are very likely to uncover errors.

Unit test procedures: Unit testing is normally considered an adjunct to the coding step. After source-level code has been developed, reviewed, and verified for correct syntax, unit test case design begins. A review of design information provides guidance for establishing test cases that are likely to uncover errors in each of the categories discussed above. Each test case should be coupled with a set of expected results.

Because a module is not a standalone program, driver and/or stub software must be developed for each unit test. The unit test environment is illustrated in Fig 5.6. In most applications a driver is nothing more than a "main program" that accepts test case data, passes such data to the module under test, and prints relevant results. Stubs serve to replace modules that are subordinate to the module being tested. A stub, or "dummy subprogram", uses the subordinate module's interface, may do minimal data manipulation, prints verification of entry, and returns.

Drivers and stubs represent overhead; that is, both are software that must be developed but is not delivered with the final software product. If drivers and stubs are kept simple, actual overhead is relatively low. Unfortunately, many modules cannot be adequately unit tested with "simple" overhead software; in such cases, complete testing can be postponed until the integration test step (where drivers or stubs are also used). Unit testing is simplified when a module with high cohesion is designed: when a module addresses only one function, the number of test cases is reduced and errors can be more easily predicted and uncovered.
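The whole unit-test environment described above can be sketched in miniature (all names and the canned tax rate are hypothetical): a driver feeds test-case data to the module under test and reports results, while a stub replaces its subordinate, prints verification of entry, and returns canned data:

```python
def tax_rate_stub(region):
    # Stub: replaces the subordinate rate-lookup module; it prints
    # verification of entry and returns canned data.
    print(f"stub entered with region={region!r}")
    return 0.25

def total_price(price, region, rate_lookup=tax_rate_stub):
    # Module under test; its subordinate is injected so the stub
    # can later be swapped for the real lookup module.
    return price * (1 + rate_lookup(region))

def driver():
    # Driver: a throwaway "main program" that runs each test case,
    # coupled with its expected result, against the module under test.
    cases = [(100, "east", 125.0), (50, "west", 62.5)]
    for price, region, expected in cases:
        result = total_price(price, region)
        assert result == expected

driver()
```

Neither `driver` nor `tax_rate_stub` ships with the product; they are exactly the overhead software the text describes.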
WHITE BOX TESTING


Ans: This testing technique takes into account the internal structure of the system or component; the entire source code of the system must be available. The technique is known as white box testing because the complete internal structure and working of the code are visible. White box testing helps to derive test cases that ensure:

1. All independent paths are exercised at least once.

2. All logical decisions are exercised for both true and false paths.

3. All loops are executed at their boundaries and within operational bounds.

4. All internal data structures are exercised to ensure validity.
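Criteria 1 and 2 can be illustrated with a tiny sketch (the function is hypothetical): the test cases are derived from the code's structure so that every decision takes both its true and its false path, which also exercises every independent path:

```python
def classify(n):
    if n < 0:           # decision 1
        return "negative"
    if n % 2 == 0:      # decision 2
        return "even"
    return "odd"

# Derived from the structure, not the specification:
assert classify(-1) == "negative"  # decision 1 true
assert classify(2) == "even"       # decision 1 false, decision 2 true
assert classify(3) == "odd"        # decision 1 false, decision 2 false
```

A purely black-box test might never supply a negative number at all, leaving the first branch unexecuted; deriving cases from the code guarantees it is covered.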


White box testing is used to:

- Traverse complicated loop structures
- Cover common data areas
- Cover control structures and sub-routines
- Evaluate different execution paths
- Test modules and the integration of many modules
- Discover logical errors, if any
- Understand the code

Why is white box testing used to test conformance to requirements?

- Logic errors and incorrect assumptions are most likely to be made when coding "special cases"; we need to ensure these execution paths are tested.
- We may find that incorrect assumptions about execution paths led to design errors; white box testing can find these errors.
- Typographical errors are random, and are just as likely to sit on an obscure logical path as on a mainstream path.
- "Bugs lurk in corners and congregate at boundaries."

SOFTWARE CONFIGURATION MANAGEMENT (SCM)


Ans: In software engineering, software configuration management (SCM) is the task of tracking and controlling changes in the software. Configuration management practices include revision control and the establishment of baselines.

SCM concerns itself with answering the question "Somebody did something; how can one reproduce it?" Often the problem involves not reproducing "it" identically, but reproducing it with controlled, incremental changes. Answering the question thus becomes a matter of comparing different results and analyzing their differences. Traditional configuration management typically focused on the controlled creation of relatively simple products. Now, implementers of SCM face the challenge of dealing with relatively minor increments under their own control, in the context of the complex system being developed.

SCM activities include:

- Configuration identification - identifying configurations, configuration items, and baselines.
- Configuration control - implementing a controlled change process. This is usually achieved by setting up a change control board whose primary function is to approve or reject all change requests raised against any baseline.
- Configuration status accounting - recording and reporting all the necessary information on the status of the development process.
- Configuration auditing - ensuring that configurations contain all their intended parts and are sound with respect to their specifying documents, including requirements, architectural specifications, and user manuals.
- Build management - managing the process and tools used for builds.
- Process management - ensuring adherence to the organization's development process.
- Environment management - managing the software and hardware that host the system.
- Teamwork - facilitating team interactions related to the process.
- Defect tracking - making sure every defect has traceability back to its source.
