
SOFTWARE TESTING
Introduction
Definition
Need for Testing
Misunderstandings about Testing
Testing Techniques
Types of Testing
Levels of Testing
Final version of Testing
- Alpha Testing
- Beta Testing
When to stop Testing?
Conclusion

Compiled by
V. Raj Kumar
Software Testing

Introduction:
“Testing is a process of planning, preparing, executing and analyzing, aimed at
establishing the characteristics of an information system, and demonstrating the
difference between the actual status and the required status.”
A primary purpose of testing is to detect software failures so that defects may be
uncovered and corrected.
Software bugs will almost always exist in any software module of moderate size:
not because programmers are careless or irresponsible, but because the complexity
of software is generally intractable -- and humans have only limited ability to manage
complexity.
Main objective
 Testing is a process of executing a program with the intent of finding an error.
 A good test case is one that has a high probability of finding an as yet undiscovered
error.
 A successful test is one that uncovers an as yet undiscovered error.
 To identify defects before software deployment.
 To reduce incompatibility and interoperability issues.
 To reduce the cost of rework by detecting defects at an early stage.

Role of Testing
Primary
 Determine whether system meets specifications
 Determine whether system meets needs
Secondary
 Instill confidence
 Continuously improve the testing process
Definition

 Testing is the process of exercising or evaluating a system or system component
by manual or automated means to verify that it satisfies specified requirements
(IEEE 83a).
 A process of demonstrating that errors are not present?
 A way of establishing confidence that a program does what it is supposed to do?

Need for Testing?


A study conducted by NIST in 2002 reports that software bugs cost the U.S. economy
$59.5 billion annually.
Testing reduces the level of uncertainty about the quality of a system.

 Quality Control: the mechanism to ensure that the required quality characteristics
exist in the finished product.
 Quality Assurance: ensures that the quality management procedures work.
 Verification refers to the set of activities that ensure that software correctly
implements a specific function and meets the requirements imposed at the start of that
phase.
 Validation refers to the test phase of the life cycle, which ensures that the end
product meets the user's needs.

Verification: Have we built the software right (i.e., does it match the specification)? It is
process based.
Validation: Have we built the right software (i.e., is this what the customer wants)? It is
product based.

 Developers tend to hide or overlook their own mistakes, so independent testing is
needed.
 Following a defined test methodology helps avoid project overruns.

Misunderstandings about testing

o Testing is debugging
o Testing is not the job of a programmer
o If programmers were more careful testing would be unnecessary
o Testing never ends
o Testing activities start only after the coding is complete
o Testing is not a creative task

Types of Testing

• Black Box Testing
• White Box Testing
• Grey Box Testing

Black Box Testing


As the name suggests, we cannot see inside a black box. In the same way, black box
testing requires no knowledge of the code, but knowledge of the functionality is
mandatory.

Black box testing treats the software as a black box, without any knowledge of its
internal implementation.

That is why there are situations in which:

1. A black box tester writes many test cases to check something that could be tested by
only one test case, and/or

2. Some parts of the back end are not tested at all.

Therefore, black box testing has the advantage of an unaffiliated opinion on the one hand
and the disadvantage of blind exploring on the other.
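
As an illustration, here is a minimal black-box test sketch in Python. The function
leap_year and its test are hypothetical; the test cases are derived purely from the
written specification (the Gregorian leap-year rules), never from the code:

```python
import unittest

def leap_year(year):
    # Implementation under test; a black-box tester never looks inside this body.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class LeapYearBlackBoxTest(unittest.TestCase):
    def test_specification_cases(self):
        # Each input comes straight from the specification, not from the code.
        self.assertTrue(leap_year(2000))   # divisible by 400 -> leap year
        self.assertFalse(leap_year(1900))  # divisible by 100 only -> not a leap year
        self.assertTrue(leap_year(2024))   # divisible by 4 only -> leap year
        self.assertFalse(leap_year(2023))  # not divisible by 4 -> not a leap year

if __name__ == "__main__":
    unittest.main()
```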

White Box Testing

White box testing is also called clear box testing, glass box testing, transparent box
testing, translucent box testing, or structural testing.

White box testing, in contrast to black box testing, is testing in which the tester has
access to the internal data structures and algorithms.

White Box, like Black Box, is a test design method. Tests based on the internal logic of
the application are called White Box tests.

White box testing is often more thorough, but also much more time consuming, than
black box testing, and requires some knowledge of development processes.

White box testing often speaks of code coverage: the degree to which the code itself is
exercised by test cases. There are several levels of code coverage.
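
To make the code-coverage idea concrete, here is a hedged Python sketch: the test
inputs below are chosen by reading the internal branches of a hypothetical classify
function so that every branch executes at least once (branch coverage). Running the
suite under the coverage.py tool (coverage run -m unittest) would report which lines
and branches the tests exercise:

```python
import unittest

def classify(n):
    # Function under test: three internal branches.
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    return "positive"

class ClassifyWhiteBoxTest(unittest.TestCase):
    def test_every_branch(self):
        # One input per branch, chosen by inspecting the code's internal logic,
        # yields full branch coverage of classify().
        self.assertEqual(classify(-5), "negative")
        self.assertEqual(classify(0), "zero")
        self.assertEqual(classify(7), "positive")

if __name__ == "__main__":
    unittest.main()
```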
Black box testing                            White box testing
------------------------------------------  ------------------------------------------
Planned without intimate knowledge of        Planned with intimate knowledge of the
the program                                  program
Usually based on the specification of        Aims at testing each aspect of the
the program                                  program logic

Grey Box Testing


The term grey box testing has come into common usage.
This involves having access to internal data structures and algorithms for the purpose of
designing the test cases, but testing at the user, or black-box, level.
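
As a hedged sketch of the grey-box idea: the tester below knows, from the internals,
that a toy cache evicts its least recently used entry once it holds more than 3 items,
and uses that knowledge to pick a boundary test case, yet exercises the cache only
through its public interface (the class and its capacity are hypothetical):

```python
import unittest
from collections import OrderedDict

class BoundedCache:
    """Toy LRU cache; the internal capacity of 3 is the detail a grey-box tester exploits."""
    def __init__(self, capacity=3):
        self._capacity = capacity
        self._items = OrderedDict()

    def put(self, key, value):
        self._items[key] = value
        self._items.move_to_end(key)
        if len(self._items) > self._capacity:
            self._items.popitem(last=False)  # evict the least recently used entry

    def get(self, key):
        return self._items.get(key)

class GreyBoxCacheTest(unittest.TestCase):
    def test_eviction_at_known_capacity_boundary(self):
        cache = BoundedCache()
        for k in ("a", "b", "c", "d"):  # one item past the known capacity of 3
            cache.put(k, k.upper())
        # Checked only via the public get(): "a" should have been evicted.
        self.assertIsNone(cache.get("a"))
        self.assertEqual(cache.get("d"), "D")

if __name__ == "__main__":
    unittest.main()
```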

Levels of Testing
 Unit Testing
 Integration Testing
 System Testing
 Acceptance Testing
 Regression Testing
 Performance Testing
 Security Testing
 Recovery Testing

Unit Testing
The goal of unit testing is to isolate each part of the program and show that the individual
parts are correct. A unit test provides a strict, written contract that the piece of code must
satisfy. As a result, it affords several benefits. Unit tests find problems early in the
development cycle.
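As a minimal illustration, here is a unit test sketch using Python's built-in unittest
module; the function under test is hypothetical. Each test pins down one piece of the
unit's written contract:

```python
import unittest

def word_count(text):
    # Unit under test: a single function, exercised in isolation.
    return len(text.split())

class WordCountTest(unittest.TestCase):
    def test_counts_words(self):
        self.assertEqual(word_count("software testing is an art"), 5)

    def test_empty_string_has_no_words(self):
        self.assertEqual(word_count(""), 0)

if __name__ == "__main__":
    unittest.main()
```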
Disadvantages
Testing cannot be expected to catch every error in the program: it is impossible to
evaluate all execution paths for all but the most trivial programs.
Integration Testing
Intermediate level of testing.

Integration testing (sometimes called Integration and Testing, abbreviated I&T) is the
phase of software testing in which individual software modules are combined and tested
as a group.

It follows unit testing and precedes system testing.

Integration testing takes as its input modules that have been unit tested, groups
them into larger aggregates, applies tests defined in an integration test plan to those
aggregates, and delivers as its output the integrated system ready for system testing.

The testing of joined components of a system determines whether they function
correctly together. Components in this sense are modules or units of code within the
same system. An example may be the integration of a shopping basket component with a
web component. Component integration testing is likely to focus on the two components
operating correctly as a single solution.

Testing of combined parts of an application determines whether they function together
correctly. The 'parts' can be code modules, individual applications, client and server
applications on a network, etc.

Progressively, unit tested software components are integrated and tested until the
software works as a whole.

Integration tests evaluate the interaction and consistency of interacting components.
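
Following the shopping-basket example above, here is a minimal integration test sketch
in Python. The two components, a Basket and a PriceCalculator, are hypothetical
stand-ins for separately unit-tested modules; the test verifies them operating together
as a single solution:

```python
import unittest

class Basket:
    # Component 1: holds items; assumed to be unit tested on its own.
    def __init__(self):
        self.items = []

    def add(self, name, price, quantity=1):
        self.items.append((name, price, quantity))

class PriceCalculator:
    # Component 2: computes totals; assumed to be unit tested on its own.
    def total(self, basket):
        return sum(price * qty for _, price, qty in basket.items)

class BasketPricingIntegrationTest(unittest.TestCase):
    def test_components_work_together(self):
        # The integration test exercises the two components as a group.
        basket = Basket()
        basket.add("book", 12.50, 2)
        basket.add("pen", 1.25)
        self.assertAlmostEqual(PriceCalculator().total(basket), 26.25)

if __name__ == "__main__":
    unittest.main()
```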

System Testing
- System testing of software or hardware is testing conducted on a complete,
integrated system to evaluate the system's compliance with its specified
requirements. System testing falls within the scope of black box testing and, as
such, should require no knowledge of the inner design of the code or logic.
Acceptance Testing
- Formal testing with respect to user needs, requirements, and business processes
conducted to determine whether or not a system satisfies the acceptance criteria
and to enable the user, customers or other authorized entity to determine whether
or not to accept the system.
- Acceptance testing is black-box testing performed on a system (e.g. software,
lots of manufactured mechanical parts, or batches of chemical products) prior to
its delivery. It is also known as functional testing, black-box testing, release
acceptance, QA testing, application testing, confidence testing, final testing,
validation testing, or factory acceptance testing.
Regression Testing
 Regression testing can be defined as the retesting of a previously tested program
following modification to ensure that faults have not been introduced or
uncovered as a result of the changes made to software, hardware or environment.
 Common methods of regression testing include re-running previously run tests
and checking whether previously fixed faults have re-emerged.
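
As a small sketch of the practice: once a fault is fixed, a test reproducing it stays in
the suite and is re-run after every change, so the fix cannot silently regress. The
function and bug number below are hypothetical:

```python
import unittest

def normalize_name(name):
    # Fixed code: an earlier version crashed on None input (hypothetical bug #142).
    if name is None:
        return ""
    return name.strip().title()

class RegressionTests(unittest.TestCase):
    def test_bug_142_none_input_does_not_crash(self):
        # Regression test: re-run on every build to confirm the old fault stays fixed.
        self.assertEqual(normalize_name(None), "")

    def test_existing_behaviour_still_holds(self):
        self.assertEqual(normalize_name("  ada lovelace "), "Ada Lovelace")

if __name__ == "__main__":
    unittest.main()
```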

Performance Testing
- It will make sure that product does not take up much of the system resource and
time taking for executing task. Imagine the reaction of the user, if save operation
takes up more than 5 minutes and also testing will check that response time meets
the user requirement.
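
As a minimal sketch of a response-time check in Python: the save_document operation
is a hypothetical stand-in, and the 2-second limit is an assumed requirement, not a
standard value:

```python
import time
import unittest

def save_document(data):
    # Hypothetical stand-in for the real save operation.
    time.sleep(0.01)
    return True

class SaveResponseTimeTest(unittest.TestCase):
    def test_save_meets_response_time_requirement(self):
        start = time.perf_counter()
        self.assertTrue(save_document({"title": "report"}))
        elapsed = time.perf_counter() - start
        # Assumed requirement: the save must finish within 2 seconds.
        self.assertLess(elapsed, 2.0)

if __name__ == "__main__":
    unittest.main()
```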
Security Testing
- It is currently at the top of many people's lists of testing they should do, although
it is often hard to say exactly what is expected from security testing.
- The Process to determine that an IS (Information System) protects data and
maintains functionality as intended.
The six basic security concepts that need to be covered by security testing are:
confidentiality, integrity, authentication, authorization, availability and non-repudiation.
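
To illustrate two of these concepts (confidentiality and authorization), here is a
hedged Python sketch built around a hypothetical SecureStore; the point is that
unauthorized access must be refused outright, not merely return wrong data:

```python
import unittest

class SecureStore:
    # Toy data store used to illustrate authorization checks (hypothetical).
    def __init__(self):
        self._data = {"salary": 50000}
        self._authorized_tokens = {"valid-token"}

    def read(self, key, token):
        if token not in self._authorized_tokens:
            raise PermissionError("not authorized")
        return self._data[key]

class SecurityTests(unittest.TestCase):
    def test_confidentiality_requires_authorization(self):
        store = SecureStore()
        # Reading with a bad token must raise, keeping the data confidential.
        with self.assertRaises(PermissionError):
            store.read("salary", token="bad-token")

    def test_authorized_access_still_works(self):
        self.assertEqual(SecureStore().read("salary", token="valid-token"), 50000)

if __name__ == "__main__":
    unittest.main()
```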

Quality assurance certifications

 CSQE offered by the American Society for Quality (ASQ)
 CSQA offered by the Quality Assurance Institute (QAI)

Testing certifications

 Certified Software Tester (CSTE)
 ISEB offered by the Information Systems Examinations Board
 ISTQB Certified Tester, Advanced Level (CTAL) offered by the International
Software Testing Qualification Board

Final version of testing

Before shipping the final version of software, alpha and beta testing are often carried
out in addition:

Alpha testing takes place at the developers' site and involves testing of the operational
system by internal staff, before it is released to external customers.
Beta testing takes place at customers' sites and involves testing by a group of customers
who use the system at their own locations and provide feedback, before the system is
released to other customers. The latter is often called "field testing".

When to stop testing?


This can be difficult to determine. Many modern software applications are so
complex, and run in such an interdependent environment, that complete testing can never
be done. Common factors in deciding when to stop are:
· Deadlines, e.g. release deadlines or testing deadlines;
· Test cases completed with a certain percentage passed;
· Test budget has been depleted;
· Coverage of code, functionality, or requirements reaches a specified point;
· Bug rate falls below a certain level; or
· Beta or alpha testing period ends.
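
In practice the exit decision is often reduced to a checklist. As a hedged illustration,
here is a tiny Python helper evaluating a few of the criteria above; all threshold
values are assumptions, not standard figures:

```python
def should_stop_testing(pass_rate, coverage, open_bug_rate,
                        min_pass_rate=0.95, min_coverage=0.80, max_bug_rate=0.02):
    # Assumed exit criteria: all thresholds here are illustrative defaults.
    return (pass_rate >= min_pass_rate
            and coverage >= min_coverage
            and open_bug_rate <= max_bug_rate)

# Example: 97% of test cases passed, 85% coverage, 1% bug rate -> stop testing.
print(should_stop_testing(pass_rate=0.97, coverage=0.85, open_bug_rate=0.01))  # True
```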

Conclusion:

• Software testing is an art. Most of the testing methods and practices are not very
different from those of 20 years ago. The field is nowhere near maturity, although
there are many tools and techniques available. Good testing also requires a tester's
creativity, experience and intuition, together with proper techniques.
• Testing is more than just debugging. Testing is not only used to locate defects and
correct them; it is also used in validation, verification and reliability
measurement.
• Testing is expensive. Automation is a good way to cut down cost and time.
Testing efficiency and effectiveness are the criteria for coverage-based testing
techniques.
• Complete testing is infeasible. Complexity is the root of the problem. The
stopping point can be decided by the trade-off between time and budget, or by
whether the reliability estimate of the software product meets the requirement.
• Testing may not be the most effective method to improve software quality.
Alternative methods, such as inspection, and clean-room engineering, may be
even better.
