
OPEN LEARNING SERIES

By
Mohit Sharma
Test Terminology

• Test Object – Software under test: a module, unit, component,
  service or package (depending on the test level) whose behavior will
  be tested and evaluated.
  e.g. Employee Registration, Personal Banking System, etc.

• Test Stub – Dummy component or object used to simulate the behavior
  of a real component or object that is used by the test object.

• Test Driver – Test software that calls the test object.
  e.g. E-SIM, the Embedded Software Simulation and Testing Environment
  driver: a native software simulator (NSS) for embedded software. It
  makes it possible to develop and test software before any target
  hardware is available.
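The stub and driver roles above can be sketched in a few lines of Python. All names here (the payroll function, the tax service) are invented for illustration, not taken from the slides:

```python
# Hypothetical test object: a payroll function that depends on an
# external tax-rate service.

class TaxServiceStub:
    """Test stub: simulates the real tax-rate service with a canned answer."""
    def rate_for(self, gross):
        return 0.10  # fixed response instead of a live lookup

def net_salary(gross, tax_service):
    """Test object: the unit whose behavior will be tested and evaluated."""
    return gross - gross * tax_service.rate_for(gross)

def driver():
    """Test driver: calls the test object and checks the result."""
    result = net_salary(1000, TaxServiceStub())
    assert result == 900, f"expected 900, got {result}"
    return result

print(driver())  # prints 900.0
```

The stub replaces a component the test object *uses*; the driver replaces a component that *would call* the test object.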
Test Terminology

 Test Suite – Collection of related test cases.

 Test Specification – Documented plan showing how to execute tests.
   Contains the test description or a test-case generation scheme.

 Test Environment – Includes everything that is necessary to execute a
   test suite (workstations, OS, test tools, etc.), e.g.:
    Client PC: P IV 1.9 GHz, 256 MB RAM, 20 GB HDD, Windows 2000
    Server PC: P III 866 MHz, 512 MB RAM, 8.6 GB HDD, Windows NT
      Server / Windows 2000 Server
    Backend Software: SQL Server 2000 / Oracle 9i
Testing Techniques
 Positive Testing
 Testing which attempts to show that a given module
of an application does what it is supposed to do.

 Negative Testing
 Testing which attempts to show that the module does
not do anything that it is not supposed to do.

 Boundary Analysis
  Testing which checks the boundary conditions set for
  a particular field or value.
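The three techniques can be illustrated with a small validation function. The field and its 18–60 range are assumptions made up for this sketch:

```python
# Illustrative sketch: validating an "age" field that must be 18..60
# (the field and range are invented; the slides name no concrete field).

def accept_age(age):
    if not isinstance(age, int):
        raise TypeError("age must be an integer")
    return 18 <= age <= 60

# Positive testing: the module does what it is supposed to do.
assert accept_age(30) is True

# Negative testing: the module rejects what it is not supposed to accept.
assert accept_age(-5) is False

# Boundary analysis: probe values at and just beyond each limit.
assert accept_age(17) is False   # just below the lower bound
assert accept_age(18) is True    # lower bound
assert accept_age(60) is True    # upper bound
assert accept_age(61) is False   # just above the upper bound
```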
Test Levels

There are six distinct levels of Testing:


a) Unit / Component Testing
b) Integration Testing
c) Validation / Functional Testing
d) System Testing
e) Acceptance Testing
f) Alpha and Beta Testing
Test Plan & Specification

• Overall test planning is described in the Project Plan.

• A separate Test Plan may be generated for Integration, System and
  Acceptance Testing, or it may be combined with the specifications as
  follows:
  a) Unit / Component Testing – Unit Test Specs (Detailed Design)
  b) Integration Testing – Integration Test Plan & Specs (High-Level
     Design)
  c) Validation / Functional Testing – Integration Test Plan & Specs
  d) System Testing – System Test Plan & Specs (Functional Specs)
  e) Acceptance Testing – Acceptance Test Plan & Specs (Requirement
     Specs)
Unit/ Component Testing

What is a Unit?
A unit is the smallest testable piece of software, e.g. a class, a
module or an API.
Unit Testing is the lowest level of testing performed during
software development, where individual units of software are
tested in isolation from other parts of the program.
Unit Testing is not intended to be a one-time test to aid bug-free
coding. Unit tests have to be repeated whenever the software is
modified or used in a different environment.
GOAL : To confirm that the unit is correctly coded and meets its
‘intended’ functionality.
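A minimal unit-test sketch using Python's standard `unittest` module. The unit (`word_count`) is invented here; the point is that the suite tests it in isolation and can be re-run after every change:

```python
import unittest

def word_count(text):
    """Hypothetical unit: the smallest testable piece in this sketch."""
    return len(text.split())

class WordCountTest(unittest.TestCase):
    """Unit tests: exercise the unit in isolation from the rest of the program."""
    def test_simple_sentence(self):
        self.assertEqual(word_count("unit tests run in isolation"), 5)

    def test_empty_string(self):
        self.assertEqual(word_count(""), 0)

# Repeat the suite whenever the unit is modified or used in a new environment.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(WordCountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```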
Integration Testing

GOAL : To confirm that, though the different subsystems were
individually satisfactory, their combination meets the ‘intended’
functionality.

In Integration Testing, we can test the following:


• Functional Validity
• Interface Integrity
• Performance
• Resource usage
Integration testing

There are different approaches to Integration testing, based on the
order in which the subsystems are selected for testing and
integration.

• Bottom Up Integration
• Top Down Integration
• Variations of the above
Integration testing

Bottom-up Testing
[Figure: call hierarchy – Layer I: A; Layer II: B, C, D; Layer III: E,
F, G. Test sequence: Test E, Test F → Test B,E,F; Test G → Test D,G;
Test C; then Test A,B,C,D,E,F,G]

• The subsystems in the lowest layer of the call hierarchy are tested
  individually.
• Then the next subsystems are tested – those that call the previously
  tested subsystems.
• This is done repeatedly until all subsystems are included in the
  testing.
• A test driver is needed to do the testing.
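The bottom-up order for the A/B–D/E–G hierarchy can be sketched as follows; the subsystems are stand-in functions invented for illustration, and each layer is exercised by driver-style assertions before its real caller exists:

```python
# Layer III subsystems (lowest in the call hierarchy) – tested first.
def E(): return "e"
def F(): return "f"
def G(): return "g"
def C(): return "c"   # Layer II leaf with no callees

# Layer II subsystems, which call the already-tested Layer III units.
def B(): return "b(" + E() + F() + ")"
def D(): return "d(" + G() + ")"

# Layer I controller, integrated last.
def A(): return "a(" + B() + C() + D() + ")"

# Test drivers stand in for the missing callers at each step.
assert E() == "e" and F() == "f" and G() == "g"   # Test E, Test F, Test G
assert B() == "b(ef)"                             # Test B,E,F
assert D() == "d(g)"                              # Test D,G
assert A() == "a(b(ef)cd(g))"                     # Test A,B,C,D,E,F,G
```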
Integration testing

Top-down Testing
[Figure: call hierarchy – Layer I: A; Layer II: B, C, D; Layer III: E,
F, G. Test sequence: Test A → Test A,B,C,D → Test A,B,C,D,E,F,G]

• The top layer, or the controlling subsystem, is tested first.
• Then all the subsystems that are called by the tested subsystems are
  combined, and the resulting collection of subsystems is tested.
• This is done repeatedly until all subsystems are incorporated into
  the test.
• Test stubs are needed to do the testing.
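The top-down order can be sketched the same way; here stubs stand in for the not-yet-integrated lower layers (all names are invented, and the controller takes its collaborators as parameters so stubs can be swapped out):

```python
# Test stubs for the Layer II subsystems that are not yet integrated.
def stub_B(): return "B?"
def stub_C(): return "C?"
def stub_D(): return "D?"

def A(b=stub_B, c=stub_C, d=stub_D):
    """Layer I controller; collaborators are injected so stubs can replace them."""
    return "a(" + b() + c() + d() + ")"

# Test A: only the controller is real; everything below it is stubbed.
assert A() == "a(B?C?D?)"

# Later steps replace stubs with real subsystems, one layer at a time.
def real_B(): return "b"
assert A(b=real_B) == "a(bC?D?)"
```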
Validation / Function Testing

GOAL : To detect bugs that cannot be attributed to components as
such, but to inconsistencies between components or to the planned
interaction of components and other objects.

In Validation Testing, we can test the following:


 Functional Validity
 Performance
 Other requirements ( Compatibility, Maintainability and
Error Recovery)
System Testing

GOAL : To determine whether the system meets the requirements
(functional and global).

In System Testing, we can test the following:


• Functional Validity
• Performance
• Portability
Acceptance Testing

GOAL : To verify that the system meets all end-user requirements, so
that it is acceptable to the Customer.

In Acceptance Testing, we can test the following:


• Requirements Validity
• Performance
Testing Exercise

Exercise:
- Make groups
- Identify positive and negative test cases for the given screen
- Write test cases to check boundary conditions
Alpha and Beta Testing

If the software developed is for use by many customers, formal
Acceptance Testing by every customer is impractical. Alternatively,
Alpha and Beta Testing are used to uncover errors.

In Alpha Testing, testing is done at the Developer’s site by a
Customer. The Customer uses the software with the Developer ‘looking
over the shoulder’, recording errors and usage problems and providing
solutions. Alpha Testing is performed in a controlled environment.

In Beta Testing, testing is done at one or more Customer sites by
‘End Users’. The Customer records and reports errors and usage
problems at regular intervals. Beta Testing is performed in an
environment NOT controlled by the Developer.
Test Methodologies

Informal:
· Incremental coding
Static Analysis:
· Hand execution: reading the source code
· Code inspection (formal presentation to others)
· Automated tools checking for syntactic and semantic errors
Dynamic Analysis:
· Black-box testing (test the input/output behavior)
· White-box testing (test the internal logic of the subsystem or
  object)
Test Strategies

Black Box and White Box Tests


The word “box” in Black Box and White Box Testing refers to the
system under test; the color refers to the visibility that the Tester
has into the inner workings of the system.
In Black Box Testing, the Tester has NO visibility into the inner
workings and sees only the interfaces exposed by the system.
In contrast, White Box Testing offers the Tester FULL visibility into
how the system works. It requires detailed knowledge of the system’s
inner workings.
Black Box Testing is sometimes referred to as Functional or
Behavioral Testing.
White Box Testing is referred to as Structural Testing. It is also
known as Glass Box Testing.
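The contrast can be made concrete with one small function (invented for this sketch). The black-box cases are written purely from the interface; the white-box cases are written by reading the code and targeting each internal branch:

```python
def absolute(x):
    """Invented system under test with two internal branches."""
    if x < 0:          # branch 1
        return -x
    return x           # branch 2

# Black-box testing: inputs and outputs only, no view of the branches.
assert absolute(-3) == 3
assert absolute(7) == 7

# White-box testing: cases chosen to exercise each internal branch.
assert absolute(-1) == 1   # forces branch 1 (x < 0)
assert absolute(0) == 0    # forces branch 2 (x >= 0)
```

The assertions look alike; the difference is how the cases were derived, from the interface in one case and from the internal logic in the other.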
Test Design – Black Box Testing

Since Black Box Testing is testing without knowledge of the internal
workings of the item under test, test designs for such tests are of
the following types:
• Specification Derived Tests
• Tests based on Equivalence Partitioning
• Tests based on Boundary Value Analysis
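Equivalence partitioning and boundary value analysis can be sketched against a discount rule that is entirely an assumption here (orders of 1–99 items get no discount, 100 or more get 10%):

```python
def discount(quantity):
    """Invented black-box item under test: a quantity-discount rule."""
    if quantity < 1:
        raise ValueError("quantity must be at least 1")
    return 0.10 if quantity >= 100 else 0.0

# Equivalence partitioning: one representative value per input class.
assert discount(50) == 0.0     # class: valid, no discount (1..99)
assert discount(500) == 0.10   # class: valid, discounted (>= 100)
try:
    discount(0)                # class: invalid (< 1)
except ValueError:
    pass

# Boundary value analysis: values at and around each partition edge.
assert discount(1) == 0.0
assert discount(99) == 0.0
assert discount(100) == 0.10
```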
Test Design – White Box Testing

As White Box Testing requires knowledge of the internal workings of
the object / unit under test, test designs for such tests are of the
following types:
• Basis Path Testing
• Branch Testing or Control Flow Testing
• Condition Testing
• Data Flow Testing
• Loop Testing
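Branch and condition testing can be sketched on one invented function with a compound decision:

```python
def classify(temp, humid):
    """Invented unit: compound condition with two atomic conditions."""
    if temp > 30 and humid > 70:
        return "muggy"
    elif temp > 30:
        return "hot"
    elif humid > 70:
        return "damp"
    return "mild"

# Branch testing: at least one case per reachable branch.
assert classify(35, 80) == "muggy"
assert classify(35, 40) == "hot"
assert classify(20, 80) == "damp"
assert classify(20, 40) == "mild"

# Condition testing: each atomic condition (temp > 30, humid > 70)
# is evaluated both true and false by the four cases above.
```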
Code Coverage

Code coverage analysis is the process of:
· Finding areas of a program not exercised by a set of test cases,
· Creating additional test cases to increase coverage, and
· Determining a quantitative measure of code coverage, which is an
  indirect measure of quality.
An optional aspect of code coverage analysis is:
· Identifying redundant test cases that do not increase coverage.
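A minimal sketch of the first step, finding unexercised lines, using only the standard-library trace hook. Real projects would use a dedicated tool such as coverage.py; the function and test case here are invented:

```python
import sys

def triage(n):
    if n < 0:
        return "negative"
    return "non-negative"

executed = set()  # line numbers of triage that the tests actually ran

def tracer(frame, event, arg):
    # Record every line executed inside triage().
    if event == "line" and frame.f_code.co_name == "triage":
        executed.add(frame.f_lineno)
    return tracer

sys.settrace(tracer)
triage(5)              # the existing "test suite": only one case
sys.settrace(None)

source_first = triage.__code__.co_firstlineno
# The 'return "negative"' line (source_first + 2) never appears in
# `executed` – a coverage gap that prompts an extra case: triage(-1).
```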
Quiz

 Q1 What is the difference between White and Black Box Testing?

 Q2 Different Test Methodologies are:
   Informal, Static Analysis and Dynamic Analysis
   Condition Testing & Data Flow Testing
   Both of the above

 Q3 System Test Specs are prepared against:
   Detailed Functional Specs
   Requirements Specs
   Detailed Design
Any Questions?
Thank You

QMS Intranet – //132.186.193.50
